# Dual-Windowed Vision Transformer with Angular Self-Attention

Anonymous authors

Paper under double-blind review

## Abstract

Following their great success in natural language processing, transformer-based models have emerged as competitive alternatives to convolutional neural networks in computer vision. The vision transformer (ViT) and its subsequent variants have exhibited promising performance in tasks such as image classification, object detection, and semantic segmentation. The core of vision transformers is the self-attention mechanism, which models the long-range dependencies between tokens. Conventionally, the attention matrix in self-attention is calculated by the scaled dot-product of *query* (Q) and *key* (K). In this case, the attention weight depends on the norms of Q and K as well as the angle between them. In this paper, we propose a new attention mechanism named angular self-attention, which replaces the scaled dot-product operation with an angular function in order to effectively model the relationship between tokens. In particular, we propose two forms of functions for our angular self-attention: quadratic and cosine. Based on angular self-attention, we design a new vision transformer architecture called the dual-windowed angular vision transformer (**DWAViT**). DWAViT is a hierarchical-structured model characterized by angular self-attention and a new local window mechanism, and it is designed to achieve competitive performance on downstream tasks. We evaluate DWAViT on multiple computer vision benchmarks, including image classification on ImageNet-1K, object detection on COCO, and semantic segmentation on ADE20K. Our experimental results also suggest that, although our model achieves promising performance on these tasks, its computational cost is higher than that of baseline models (e.g., Swin Transformer) due to the new formulation of the self-attention.

## 1 Introduction

Vision transformers have received tremendous attention since their emergence. Inspired by the success of the transformer (Vaswani et al., 2017) in sequence modeling, Dosovitskiy et al. (2021) proposed the initial vision transformer architecture, which can be regarded as the encoder part of the original transformer (Vaswani et al., 2017). Compared to convolutional neural networks (CNNs), the vision transformer is characterized by its ability to transform spatial visual representation learning on the image into token-to-token learning by partitioning the image into multiple patches. Benefiting from the ability of the self-attention mechanism to model the long-range dependencies between tokens in the image, vision transformers exhibit performance on par with or better than CNNs in many computer vision tasks, such as image classification (Dosovitskiy et al., 2021; Dong et al., 2022), object detection (Carion et al., 2020; He & Todorovic, 2022; Zhang et al., 2022), and semantic segmentation (Zheng et al., 2021; Xie et al., 2021). Despite these merits, the shortcomings of vision transformers are also obvious. Their low level of inductive bias requires larger datasets, such as ImageNet-21K (Deng et al., 2009) and JFT-300M (Sun et al., 2017), for model training. Besides, the time complexity of computing self-attention is quadratic in the number of input tokens, which prohibits the application of vision transformers to tasks involving high-resolution images.
To deal with the excessive computation of self-attention, subsequent work (Liu et al., 2021; Dong et al., 2022; Huang et al., 2019; Wang et al., 2020; Xia et al., 2022) proposed different local-window mechanisms that restrict the computation of self-attention to a local window. For instance, the pioneering Swin Transformer (Liu et al., 2021) adopts shifted windows to reduce the workload of computing self-attention and to facilitate the interaction between local windows. CSWin (Dong et al., 2022) proposes the cross-shaped window, in which the image is split into horizontal and vertical strips in parallel. Another work (Xia et al., 2022) presents a flexible local window that is constructed in a data-dependent way. Another branch of work (Qin et al.; Katharopoulos et al., 2020; Peng et al.; Choromanski et al.) focuses on an in-depth understanding of the self-attention mechanism and proposes new formulations to calculate the attention scores between pairs of tokens. From the perspective of kernel learning, the interaction of the query and key can be modeled by a specific kernel function, and the scaled dot-product operation can be replaced by a softmax-free operation in the self-attention. Usually, the softmax-free operation lowers the time complexity of the self-attention computation.

![1_image_0.png](1_image_0.png)

Figure 1: The illustration of the dual window mechanism. The image is partitioned into (a) an even number of local windows and (b) an odd number of local windows in two layers, respectively. The size of the local window is flexible, and tokens that lie on the border of one local window reside in the interior of a local window in the following layer. The connections between the local windows of one layer are bridged by the local windows of the next layer.

In this paper, we present new designs for the local window mechanism and the self-attention operation. In terms of the local window, we propose a dual window mechanism. As shown in Fig 1, similar to Swin Transformer (Liu et al., 2021), the local window is imposed on the feature maps to reduce the time complexity. However, unlike previous work in which the size of the local window is fixed, the size of our proposed local window is flexible and adjustable according to the size of the feature maps. Besides, to mitigate the lack of connections between local windows, the number of local windows differs between layer t and layer t + 1. For instance, there is an even number of local windows at layer t but an odd number at layer t + 1. In this case, tokens that lie in one local window of the first feature map belong to another local window in the following feature map. Since the feature maps are partitioned into different numbers of local windows, the coordinates of the local windows in adjacent feature maps are different. The tokens in the overlapping area of local windows bridge the connection between local windows, since these tokens participate in the self-attention calculation within each of them. With this dynamic interaction between local windows, the receptive field is enlarged implicitly, and the ability to model long-range relations is also enhanced. In the traditional self-attention mechanism, the similarity of the query and key is computed by the scaled dot-product; thus, the similarity depends on the norms of the query and key as well as the angle between them.
Inspired by previous work (Wang et al., 2018; Zhao et al., 2020), we notice that the scaled dot-product is not the only choice for modeling the relationship between tokens. In this paper, we propose angular self-attention, in which the similarity of query and key depends only on the angle between them. To remove the impact of the norms of the query and key on the relation between tokens, the query and key are L2-normalized so that they lie on the unit sphere. The relationship of query and key is then determined by the angle between them, and a smaller angle yields a larger attention score for a pair of query and key. In angular self-attention, we adopt two forms of functions, quadratic and cosine, to model this relationship, and the similarity is further enlarged by temperature scaling. Our experiments show that angular self-attention can serve as a competitive alternative to the traditional scaled dot-product self-attention.

Jointly combining the dual window mechanism and angular self-attention, we propose a novel hierarchical-structured vision transformer backbone called the dual-windowed angular vision transformer (DWAViT). In DWAViT, the attention score for each pair of query and key is modeled by temperature-scaled quadratic/cosine functions, and experimental results validate that these functions are effective in modeling the relationship between tokens. Besides, the dual window mechanism is also adopted in our new backbone: the feature maps are partitioned into even/odd numbers of local windows in alternating layers of DWAViT. The dual window mechanism preserves the ability to model long-range relationships between tokens. However, due to the new formulation of the self-attention, the computational cost of our model is higher than that of baseline models of similar size (e.g., Swin Transformer (Liu et al., 2021)). With a proper partition of the feature maps, our DWAViT can be applied to downstream tasks (e.g., object detection, semantic segmentation) that involve high-resolution images.

The major contributions of this paper are as follows:

- We propose the dual window mechanism to split the global feature map into a number of smaller localized feature maps. By partitioning the feature maps into even/odd numbers of local windows in an alternating way, the lack of connections between local windows is alleviated.

- We propose angular self-attention, in which the scaled dot-product operation is replaced by temperature-scaled quadratic/cosine functions. Our proposed angular self-attention models long-range relationships between tokens and is a competitive alternative to the traditional scaled dot-product self-attention.

- The dual-windowed angular vision transformer (DWAViT) is proposed by jointly combining the dual window mechanism and angular self-attention. DWAViT is evaluated on a series of dense prediction tasks and achieves competitive performance on ImageNet image classification, COCO object detection, and ADE20K semantic segmentation.

## 2 Related Work

Vision Transformers. The pioneering work (Parmar et al., 2018; Wang et al., 2018) first introduced the self-attention mechanism to the computer vision field, and some early work (Ramachandran et al., 2019; Cordonnier et al., 2019) applied self-attention to computer vision tasks. Dosovitskiy et al. (2021) proposed the transformer-based backbone architecture called the vision transformer (ViT).
With this new paradigm of representation learning, ViTs achieve on par or better performance than CNNs on image classification, object detection, and semantic segmentation. Since the emergence of vision transformers, plenty of work (Touvron et al., 2021a; 2022) has been done in this field, and subsequent work aims to improve ViTs in different aspects. DeiT (Touvron et al., 2021a; 2022) proposes new training recipes to reduce the dependence of ViTs on very large datasets. With the techniques provided by DeiT (Touvron et al., 2021a; 2022), ViTs can be pretrained from scratch on smaller datasets such as ImageNet-1K (Deng et al., 2009), compared to ImageNet-21K (Deng et al., 2009) and JFT-300M (Sun et al., 2017). Besides, ViTs also borrow ideas from modern CNN architectures (He et al., 2016; Howard et al., 2017; Sandler et al., 2018; Tan & Le, 2019; 2021; Huang et al., 2017; Liu et al., 2022b; Rao et al.; Wang et al., 2022a; Dai et al., 2021) to improve their representation learning ability and develop hierarchical pyramid structures to handle multi-scale feature maps. Pyramid-structured ViTs usually have four stages, and in each stage the size of the feature maps is half of that in the previous stage while the dimension is doubled. Another line of work (Wu et al., 2021; Guo et al., 2022; Xiao et al., 2021; Tu et al., 2022; Yuan et al., 2021; Srinivas et al., 2021; Chen et al., 2022; Mehta & Rastegari; Peng et al., 2021) incorporates the convolution operation into the architecture of vision transformers at different locations. The performance of these hybrid vision transformers is further improved by fusing the local information learned by CNNs with the global dependency information obtained by self-attention. To mitigate the computational cost of global self-attention, which is quadratic in the size of the input features, some work (Lee et al., 2022; Chen et al., 2021) learns contextual information from multi-scale patch embeddings. An extensive body of work (Dong et al., 2022; Liu et al., 2022a; 2021; Xia et al., 2022; Wang et al., 2020; Hassani et al., 2022; Han et al., 2021; Huang et al., 2019; Ren et al., 2022) proposes different local window mechanisms to reduce the computational cost: self-attention is performed within local windows, and the connections between different local windows are achieved by techniques such as the shifted window (Liu et al., 2021) or the cross-shaped window (Dong et al., 2022). In our paper, we propose a new local window mechanism called the dual window, in which the connection between local windows is achieved in a simple way.

![3_image_0.png](3_image_0.png)

Figure 2: The illustration of our proposed dual-windowed angular vision transformer (DWAViT). Similar to previous work, our backbone adopts a hierarchical pyramid structure. The core module in our backbone is the dual-windowed angular multi-head self-attention (DWA MSA), which jointly combines the dual window mechanism and angular self-attention. In each block, the feature maps are divided into an even/odd number of local windows. Besides, the depthwise convolution provides the conditional positional embedding.

Self-attention. Apart from the traditional scaled dot-product self-attention, different forms of the self-attention mechanism have also been proposed. Early work (Wang et al., 2018; Zhao et al., 2020) explored the general form of the function in self-attention and proposed several operations such as dot-product and concatenation.
XCiT (Ali et al., 2021) proposed cross-covariance attention (XCA), in which the attention is performed over channels instead of tokens. MaxViT (Tu et al., 2022) and DaViT (Ding et al., 2022) proposed grid attention and channel group attention, respectively; these attentions are also performed along the channel dimension rather than the spatial dimension. To reduce the computational cost, efficient self-attention mechanisms have been proposed to approximate the traditional softmax self-attention through the lens of kernel learning. Linear transformer (Katharopoulos et al., 2020) suggests that the softmax function can be removed and the similarity of tokens can be obtained by a pure dot product of query and key. RFA (Peng et al.) and Performer (Choromanski et al.) approximate the softmax attention with positive random features. CosFormer (Qin et al.) proposed a cosine-based re-weighting self-attention, in which the attention score is calculated by a weighted dot-product of query and key. In SOFT (Lu et al., 2021), the dot-product similarity is replaced by a Gaussian kernel function. In our paper, we also propose a new self-attention mechanism called angular self-attention, in which the similarity of tokens is calculated from a quadratic or cosine function of the angle between them.

## 3 Methodology

## 3.1 Dual Window

As shown in Fig 1, the feature map is partitioned into an even number of local windows at layer t and an odd number of local windows at layer t + 1, alternately. The feature map is padded if necessary. Suppose the original size of the feature map is h × w; after padding, the size of the padded feature map is h′ × w′. The number of local windows is $N_{even} = n_{even}^2$ ($N_{odd} = n_{odd}^2$), where $n_{even}$ ($n_{odd}$) is the number of local windows per side. Thus, the size of each local window is $\frac{h'}{n_{even}} \times \frac{w'}{n_{even}}$ ($\frac{h'}{n_{odd}} \times \frac{w'}{n_{odd}}$). Compared to Swin Transformer (Liu et al., 2021), which bridges the connections between different local windows by complicated techniques such as the cyclic shift, we solve this problem in a simple way. Notice that tokens that lie on the border of one local window reside in the interior of a local window in the following layer. Therefore, the tokens on the border of a local window at one layer can participate in the self-attention calculation with tokens from other local windows in the next layer. This dynamic interaction between local windows facilitates the propagation of information between them: the actual receptive field is larger than the local window, and the ability to model long-range relationships between tokens is also enhanced.

## 3.2 Angular Self-Attention

Self-attention can be regarded as a weighted combination of the input sequence, where the weights are determined by the similarities between elements of the input sequence. We use $O_i \in \mathbb{R}^d$ to denote the generated embedding of token i from self-attention. Then the general form of self-attention can be written as:

$$O_{i}=\sum_{j}\frac{S(Q_{i},K_{j})}{\sum_{j}S(Q_{i},K_{j})}V_{j},\tag{1}$$

where $S(\cdot)$ represents the similarity between Q and K, and it has many forms according to previous work (Wang et al., 2018; Zhao et al., 2020). If $S(Q_i, K_j) = \exp(Q_i \cdot K_j/\sqrt{d_k})$, Eq. 1 becomes the scaled dot-product attention commonly seen in vision transformers.
The formulation of scaled dot-product self-attention in vision transformers is:

$$\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V.\tag{2}$$

In dot-product self-attention, the attention weight is generated from the scaled dot-product between Q and K. The dot-product of $Q_i$ and $K_j$ can be expanded as $Q_i \cdot K_j = \|Q_i\|\|K_j\|\cos\theta$, which indicates that the similarity depends on the L2 norms of Q and K as well as the angle θ between them. In our paper, we propose angular self-attention, in which an angular function s(θ) replaces the conventional scaled dot-product operation. The self-attention is then reformulated as:

$$O_{i}=\sum_{j}\frac{\exp(s(\theta_{ij})/\tau)}{\sum_{j}\exp(s(\theta_{ij})/\tau)}V_{j},\tag{3}$$

where $\theta_{ij} = \arccos(\hat{Q}_i \cdot \hat{K}_j)$, $\hat{Q}$ and $\hat{K}$ are the L2-normalized query and key, respectively, and τ is the temperature hyper-parameter that regulates the attention weight of each token. When Q and K are normalized, they are distributed on the surface of the unit sphere, so the attention weight obtained from our angular self-attention depends solely on the angle θ. During training, the angles θ between different Q and K are adjusted to model the relationships between tokens and give the vision transformer strong representative ability. We propose two alternative functions for s(θ) in Eq. 3: the cosine function $s(\theta) = \cos\theta$ and the quadratic function $s(\theta) = 1 - \frac{4\theta^2}{\pi^2}$. In angular self-attention, the similarity of Q and K depends solely on their angle. The matrix form of angular self-attention can be formulated as:

$$\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\left(\frac{\hat{Q}\hat{K}^{T}}{\tau}\right)V,$$
$$\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\left(\frac{1-4\Theta^{2}/\pi^{2}}{\tau}\right)V,\tag{4}$$

where $\Theta = \arccos(\hat{Q}\hat{K}^{T})$, and $\hat{Q}$ and $\hat{K}$ are the L2-normalized query and key, respectively. The cosine similarity and the quadratic distance share common mathematical properties. Both functions are decreasing for θ ∈ [0, π], which means that tokens with larger angles have weaker relationships. Specifically, when θ ∈ [0, π/2], $\cos\theta \approx 1 - \theta^2/2 \approx 1 - 4\theta^2/\pi^2$; when θ ∈ (π/2, π], $1 - 4\theta^2/\pi^2 < \cos\theta < 0$, which means that tokens with angles larger than π/2 have weaker relationships under the quadratic function than under the cosine function. Our experiments suggest that on most tasks, such as image classification and object detection, the performance of the quadratic and cosine functions is comparable and the difference is very slight (<0.5%). However, on semantic segmentation, the cosine function performs better than the quadratic function (>1.0%).
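To make Eq. 4 concrete, the following PyTorch sketch implements single-head angular self-attention. This is a minimal sketch under our own assumptions about tensor shapes; the clamp on the dot product is added for numerical safety and is not a detail from the paper:

```python
import math
import torch
import torch.nn.functional as F

def angular_attention(q, k, v, tau=0.1, mode="quad"):
    """Single-head angular self-attention (Eq. 4), a minimal sketch.

    q, k, v: (num_tokens, dim) tensors; tau: temperature.
    """
    q_hat = F.normalize(q, dim=-1)               # L2-normalize onto the unit sphere
    k_hat = F.normalize(k, dim=-1)
    cos = (q_hat @ k_hat.T).clamp(-1.0, 1.0)     # cos(theta); clamped for numerical safety
    if mode == "cos":
        s = cos                                   # s(theta) = cos(theta)
    else:
        theta = torch.arccos(cos)                 # Theta = arccos(Q_hat K_hat^T)
        s = 1.0 - 4.0 * theta ** 2 / math.pi ** 2  # s(theta) = 1 - 4*theta^2/pi^2
    return torch.softmax(s / tau, dim=-1) @ v     # temperature-scaled softmax, then weight V
```

The default temperature of 0.1 mirrors the value reported for the tiny and small models in Section 4.1.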
## 3.3 Overall Architecture

We replace the traditional scaled dot-product self-attention with our angular self-attention and integrate the dual window mechanism to build our dual-windowed angular vision transformer (DWAViT). The overall architecture of DWAViT is illustrated in Fig 2.

Table 1: The details of the DWAViT variants.

| Models | #Dim | #Blocks | #Heads | #Param(M) | #FLOPs(G) |
|---|---|---|---|---|---|
| DWAViT-Tiny | [64,128,256,512] | [2,4,18,2] | [1,2,4,8] | 22.7 | 4.2 |
| DWAViT-Small | [80,160,320,640] | [3,6,21,3] | [1,2,4,8] | 44.6 | 8.2 |
| DWAViT-Base | [96,192,384,768] | [4,8,24,4] | [1,2,4,8] | 77.4 | 14.3 |

Similar to previous work (Wang et al., 2021; 2022b; Ding et al., 2022; Fan et al., 2021; Li et al., 2022; Liu et al., 2021; 2022a), DWAViT adopts a hierarchical pyramid structure that takes advantage of multi-scale feature maps for dense prediction tasks. The size of the input image is H × W × 3. Instead of adopting a convolutional layer with a large kernel, we follow the work (Xiao et al., 2021) and use two stacked convolutional layers as the stem to generate patch embeddings. For each convolutional layer, the kernel size is 3 × 3 and the stride is 2 × 2. The output of the stem has size H/4 × W/4 × C. DWAViT consists of four stages; in each stage, the size of the feature maps is half of that in the previous stage while the dimension is doubled. Between two adjacent stages, we adopt a convolutional layer with kernel size 2 × 2 and stride 2 to downsample the feature maps. Each stage consists of multiple blocks, each of which includes a depthwise convolution (Chollet, 2017) that generates the conditional positional embedding (CPE) (Chu et al., 2021b), the dual-windowed angular multi-head self-attention (DWA MSA), and a feed-forward network (FFN). Compared to the absolute positional embedding (APE) (Vaswani et al., 2017), which can only provide positional information for a fixed sequence length, the CPE provides flexible positional information adaptive to the various input lengths often seen in downstream tasks. Relative positional embedding (RPE) (Liu et al., 2021; Shaw et al., 2018) provides relative positional information within a window; however, since the window size differs in each stage, we do not adopt RPE in DWAViT.

![5_image_0.png](5_image_0.png)

The dual-windowed angular multi-head self-attention (DWA MSA) is the core module of our backbone. It jointly combines the dual window mechanism and angular self-attention. The details of DWA MSA are illustrated in Fig 3. Suppose the input feature is $X \in \mathbb{R}^{h\times w\times D}$, $N = N_{even}$ or $N_{odd}$ is the number of local windows, and $n = \sqrt{N}$ is the number of local windows per side. After linear projection, we obtain query, key, and value $Q, K, V \in \mathbb{R}^{h\times w\times D}$. Instead of splitting X into smaller local windows, we split Q and K into N local windows. The size of each local window is $\frac{h'}{n} \times \frac{w'}{n}$, where h′ and w′ are the height and width of the padded feature maps. In each stage, partitions with even/odd numbers of windows take turns, and the values of $N_{even}$ ($N_{odd}$) vary according to the size of the feature maps. After the partition, for the k-th head, we obtain the localized query $\tilde{Q}^k = \{Q^k_1, Q^k_2, ..., Q^k_N\}$ and key $\tilde{K}^k = \{K^k_1, K^k_2, ..., K^k_N\}$. The localized query and key are L2-normalized, and the angle matrix Θ is calculated from the normalized query and key. Next, the new embedding is calculated from the localized query, key, and value by Eq. 4. The new feature map of each local window is obtained by concatenating the embeddings from each head: $\tilde{X}_i = \mathrm{concat}(\tilde{X}^1_i, \tilde{X}^2_i, ..., \tilde{X}^K_i)$, where K is the number of heads.

Figure 3: The illustration of the pipeline in the dual-windowed angular multi-head self-attention (DWA MSA). Q, K, and V are localized by dividing them into a number of local windows. The traditional scaled dot-product operation is replaced by the temperature-scaled angular function in the calculation of the attention matrix.
We concatenate the feature maps of all local windows to form the complete feature map $\tilde{X} = \mathrm{concat}(\tilde{X}_1, \tilde{X}_2, ..., \tilde{X}_N)$. The size of $\tilde{X}$ is larger than that of the original feature map. To restore the original size, the final feature map is obtained from $\tilde{X}$ with a slice operation:

$$X=\tilde{X}[\mathrm{top}:h'-\mathrm{bottom},\ \mathrm{left}:w'-\mathrm{right}],\tag{5}$$

where top, bottom, left, and right are the sizes of the padding on the top, bottom, left, and right of the feature map, respectively. With all the components mentioned above, the pipeline of a block in our DWAViT can be formulated as:

$$\begin{aligned}Z^{\ell}&=\mathrm{DWConv}(X^{\ell-1})+X^{\ell-1},\\ \hat{X}^{\ell}&=\text{DWA-MSA}(\mathrm{LN}(Z^{\ell}))+Z^{\ell},\\ X^{\ell}&=\mathrm{FFN}(\mathrm{LN}(\hat{X}^{\ell}))+\hat{X}^{\ell},\end{aligned}\tag{6}$$

where $X^{\ell-1}$ denotes the output feature of the $(\ell-1)$-th block in the backbone and DWConv denotes the depthwise convolution.

In our DWAViT, there are strong connections between the two key components, angular self-attention and the dual local window mechanism. On one hand, the dual window mechanism confines the self-attention operation to localized areas. On the other hand, our empirical study suggests that our angular self-attention achieves better performance with our dual local window technique than with previous local window techniques (e.g., those of Swin Transformer (Liu et al., 2021)). To wrap up, the two proposed techniques (dual windows and angular self-attention) have strong mutual connections and are two indispensable components of our DWAViT.

## 3.4 Architecture Variants

We build three variants of DWAViT with different numbers of parameters and FLOPs, namely DWAViT-Tiny, DWAViT-Small, and DWAViT-Base. For all variants, the numbers of local windows in the four stages are set to (64,49), (16,9), (4,1), and (1,1) for image classification, respectively. In stage one, the size of the local window is 7 × 7 and 8 × 8 in two consecutive blocks. For downstream tasks such as object detection and semantic segmentation, since the size of the input image is larger, the numbers of local windows in DWAViT are also different. The details of the three DWAViT variants are illustrated in Table 1.

## 3.5 Time Complexity Analysis

Suppose the original size of the feature map is h × w and the dimension is C. After padding, the size of the feature map becomes h′ × w′, and the total number of local windows is N. The time complexity of the linear projection is $4hwC^2$. Since the self-attention is performed on the padded feature maps, the time complexity of the self-attention calculation is $2(h'w')^2C/N$. Thus, the total time complexity of our DWA MSA is:

$$\Omega(\mathrm{DWA\ MSA})=4hwC^{2}+2(h^{\prime}w^{\prime})^{2}C/N.\tag{7}$$

As illustrated in Eq. 7, in order to reduce the time complexity, we should choose a large value of N while keeping h′ and w′ as close as possible to the original values. Note that although angular self-attention includes operations such as arccos and L2 normalization, they do not increase the time complexity. However, the memory demand of angular self-attention is larger than that of traditional self-attention.
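Putting Sections 3.1–3.3 together, the sketch below illustrates the dual-window pipeline for a single head, reusing the `angular_attention` sketch above. The roughly symmetric placement of the padding and the omission of the Q/K/V projections are our simplifications, not details from the paper:

```python
import torch
import torch.nn.functional as F

def dwa_attention(x, n, tau=0.1, mode="quad"):
    """Single-head DWA MSA sketch: pad to n x n windows, apply angular
    attention within each window, merge, and slice back to the original
    size as in Eq. 5. Q/K/V linear projections are omitted for brevity."""
    h, w, d = x.shape
    win_h, win_w = -(-h // n), -(-w // n)        # ceiling division: window size
    hp, wp = win_h * n, win_w * n                # padded feature-map size h', w'
    top, left = (hp - h) // 2, (wp - w) // 2     # assumption: roughly symmetric padding
    bottom, right = hp - h - top, wp - w - left
    xp = F.pad(x.permute(2, 0, 1), (left, right, top, bottom)).permute(1, 2, 0)
    # partition the padded map into n*n local windows of size (win_h, win_w)
    wins = xp.reshape(n, win_h, n, win_w, d).permute(0, 2, 1, 3, 4)
    wins = wins.reshape(n * n, win_h * win_w, d)
    out = torch.stack([angular_attention(t, t, t, tau, mode) for t in wins])
    # merge the windows back into the padded map, then slice off the padding (Eq. 5)
    out = out.reshape(n, n, win_h, win_w, d).permute(0, 2, 1, 3, 4).reshape(hp, wp, d)
    return out[top:hp - bottom, left:wp - right]
```

Calling this with an even n in one block and an odd n in the next realizes the dual window mechanism: tokens on a window border at one layer fall in a window interior at the next.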
## 3.6 Theoretical Analysis

As mentioned above, we propose quadratic and cosine functions to model the relationships between tokens. Compared to the scaled dot-product function, the difference of our method is that we map the Q and K features onto the unit sphere, so that the relationship between tokens depends only on the angle between them. To better understand angular self-attention, it is essential to investigate the relationship between tokens in our method. Thus, we provide the following proposition to analyze this problem.

**Proposition 1** *Suppose the angles between the query of a token and the keys of all the tokens satisfy $\theta_0 \leq \theta_1 \leq \cdots \leq \theta_{J-1}$, where J is the total number of tokens. In angular self-attention, the embedding of the token satisfies $O_i \propto V_0 + \omega_1 V_1 + \cdots + \omega_{J-1}V_{J-1}$, where $\omega_k = \exp\left(\frac{s(\theta_k)-s(\theta_0)}{\tau}\right) \in (0, 1)$, with $s(\theta) = 1 - \frac{4\theta^2}{\pi^2}$ for quadratic self-attention and $s(\theta) = \cos\theta$ for cosine self-attention.*

The proposition states that the embedding of a token can be regarded as a combination of the value vectors with relative weights denoted by $\omega_k$. The relative weight of the value vector with the smallest angle to the target query is normalized to 1, and the relative weight of a value vector is smaller if the corresponding angle is larger. The proposition thus suggests that value vectors whose keys form smaller angles with the target query contribute more to the embedding of the target token. The proof can be found in the Appendix.
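For intuition, Proposition 1 follows from a one-line factorization of Eq. 3 (our sketch of the argument; the full proof is in the Appendix): dividing the numerator and denominator of the softmax by $\exp(s(\theta_0)/\tau)$ gives

$$O_i=\frac{\sum_{k=0}^{J-1}\exp\!\left(\frac{s(\theta_k)-s(\theta_0)}{\tau}\right)V_k}{\sum_{j=0}^{J-1}\exp\!\left(\frac{s(\theta_j)-s(\theta_0)}{\tau}\right)}\;\propto\;V_0+\sum_{k=1}^{J-1}\omega_k V_k,\qquad \omega_k=\exp\!\left(\frac{s(\theta_k)-s(\theta_0)}{\tau}\right),$$

and since both choices of $s$ are decreasing on $[0,\pi]$ and $\theta_k \geq \theta_0$, each relative weight $\omega_k$ is at most 1.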
## 4 Experiments

In this section, we evaluate our proposed DWAViT on ImageNet-1K (Deng et al., 2009) classification, COCO (Lin et al., 2014) object detection, and ADE20K (Zhou et al., 2017) semantic segmentation. Besides, we conduct an ablation study to investigate the effectiveness of angular self-attention and compare its results against those of the traditional scaled dot-product self-attention on these benchmarks.

## 4.1 ImageNet-1K Classification

In this experiment, we adopt the same training recipe as previous work (Touvron et al., 2021a; Li et al., 2022; Lee et al., 2022; Dong et al., 2022) for a fair comparison. The training strategies include repeated data augmentation and EMA (Polyak & Juditsky, 1992). The total number of training epochs is 300, with the first 20 epochs as warm-up. We adopt the AdamW (Kingma & Ba, 2014) algorithm to optimize the model. The initial learning rate is 1.2e-3 and the weight decay is 0.05. The learning rate is adjusted according to a cosine schedule. The drop path rate is 0.1 and the input image is resized to 224 × 224. The MLP ratio for all DWAViT variants is set to 4. The numbers of local windows in the four stages are (64,49), (16,9), (4,1), and (1,1). The temperature of the angular self-attention is 0.1 for DWAViT-T and DWAViT-S and 0.25 for DWAViT-B. All experiments run on NVIDIA A100 GPUs. Since the model is trained on ImageNet-1K for many epochs (300), the variance is diminished considerably and becomes insignificant compared to the main results; as a consequence, only the main results are reported.
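For concreteness, the optimization setup above corresponds roughly to the following PyTorch configuration (a sketch; the epoch-level granularity of the schedule and the helper name are our own choices, not details from the paper):

```python
import math
import torch

def build_optimizer_and_scheduler(model, epochs=300, warmup_epochs=20):
    """Optimizer/schedule sketch for the recipe in Section 4.1: AdamW with
    initial learning rate 1.2e-3 and weight decay 0.05, a 20-epoch linear
    warm-up, and cosine decay over the remaining epochs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1.2e-3, weight_decay=0.05)

    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs                    # linear warm-up
        progress = (epoch - warmup_epochs) / (epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))         # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```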
The results are illustrated in Table 2. Our proposed DWAViT is compared against previous state-of-the-art vision transformers and CNNs, including CSWin (Dong et al., 2022), MViTv2 (Li et al., 2022), DaViT (Ding et al., 2022), and ConvNeXt (Liu et al., 2022b). Specifically, Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), DaViT (Ding et al., 2022), MViTv2 (Li et al., 2022), and CSWin Transformer (Dong et al., 2022) are baselines for the exact comparison. The experimental results show that with a similar number of parameters, DWAViT-T can outperform the latest vision transformers and CNNs: the top-1 accuracy of DWAViT-T reaches 82.8%, which is 0.1% higher than that of CSWin-T (Dong et al., 2022). As for the small-sized models, our DWAViT-S achieves a top-1 accuracy of 83.6% on the classification task, on par with CSWin (Dong et al., 2022) and MViTv2 (Li et al., 2022). For the base-sized models, our DWAViT-B with cosine self-attention achieves 83.9% accuracy on the classification task.

Table 2: The performance of our proposed DWAViT and the baseline models on the ImageNet-1K classification. The resolution of the image is 224 × 224. [cos] and [quad] denote cosine and quadratic function, respectively.

| Model | #Param(M) | #FLOPs(G) | Top-1 Acc |
|---|---|---|---|
| ResNet-50 (He et al., 2016) | 25.0 | 4.1 | 76.2 |
| DeiT-S (Touvron et al., 2021a) | 22.1 | 4.5 | 79.8 |
| PVT-S (Wang et al., 2021) | 24.5 | 3.8 | 79.8 |
| RegNetY-4G (Radosavovic et al., 2020) | 21.0 | 4.0 | 80.0 |
| CrossViT-S (Chen et al., 2021) | 26.7 | 5.6 | 81.0 |
| TNT-S (Han et al., 2021) | 23.8 | 5.2 | 81.3 |
| Swin-T (Liu et al., 2021) | 28.3 | 4.5 | 81.2 |
| CoAtNet-0 (Dai et al., 2021) | 25 | 4.0 | 81.6 |
| CvT-13 (Wu et al., 2021) | 20.0 | 4.5 | 81.6 |
| CaiT-XS-24 (Touvron et al., 2021b) | 26.6 | 5.4 | 81.8 |
| ViL-S (Zhang et al., 2021) | 24.6 | 5.1 | 82.0 |
| PVTv2-B2 (Wang et al., 2022b) | 25.4 | 4.0 | 82.0 |
| ConvNeXt-T (Liu et al., 2022b) | 29 | 5.0 | 82.1 |
| Focal-T (Yang et al., 2021) | 29.1 | 4.9 | 82.2 |
| DaViT-T (Ding et al., 2022) | 28.3 | 4.5 | 82.8 |
| MViTv2-T (Li et al., 2022) | 24 | 4.7 | 82.3 |
| CSWin-T (Dong et al., 2022) | 23 | 4.3 | 82.7 |
| DWAViT-T[cos] (Ours) | 22.7 | 4.2 | 82.7 |
| DWAViT-T[quad] (Ours) | 22.7 | 4.2 | 82.8 |
| ResNet-101 (He et al., 2016) | 45.0 | 7.9 | 77.4 |
| PVT-M (Wang et al., 2021) | 44.2 | 6.7 | 81.2 |
| RegNetY-8G (Radosavovic et al., 2020) | 39.0 | 8.0 | 81.7 |
| Swin-S (Liu et al., 2021) | 49.6 | 8.7 | 83.1 |
| CoAtNet-1 (Dai et al., 2021) | 42.0 | 8 | 83.3 |
| CvT-21 (Wu et al., 2021) | 32.0 | 7.1 | 82.5 |
| ViL-M (Zhang et al., 2021) | 39.7 | 9.1 | 83.3 |
| PVTv2-B (Wang et al., 2022b) | 45.2 | 6.9 | 83.2 |
| ConvNeXt-S (Liu et al., 2022b) | 50.0 | 9.0 | 83.1 |
| Focal-S (Yang et al., 2021) | 51.1 | 9.1 | 83.5 |
| MViTv2-S (Li et al., 2022) | 35 | 7.0 | 83.6 |
| CSWin-S (Dong et al., 2022) | 35 | 6.9 | 83.6 |
| DWAViT-S[cos] (Ours) | 44.6 | 8.2 | 83.5 |
| DWAViT-S[quad] (Ours) | 44.6 | 8.2 | 83.6 |
| ResNet-152 (He et al., 2016) | 60.0 | 11.0 | 78.3 |
| PVT-L (Wang et al., 2021) | 61.4 | 9.8 | 81.7 |
| DeiT-B (Touvron et al., 2021a) | 86.7 | 17.4 | 81.8 |
| Swin-B (Liu et al., 2021) | 87.8 | 15.4 | 83.4 |
| ViL-B (Zhang et al., 2021) | 55.7 | 13.4 | 83.2 |
| Focal-B (Yang et al., 2021) | 89.8 | 16.0 | 83.8 |
| DWAViT-B[cos] (Ours) | 77.4 | 14.3 | 83.9 |
| DWAViT-B[quad] (Ours) | 77.4 | 14.3 | 83.8 |

## 4.2 COCO Object Detection

Next, we evaluate our model on the COCO object detection task. The COCO dataset has 118K images for training and 5K images for validation. We adopt Mask R-CNN (He et al., 2017) and Cascade Mask R-CNN (Cai & Vasconcelos, 2018) as the frameworks, with our DWAViT serving as the backbone. For a fair comparison, we follow the same training recipe as previous work (Touvron et al., 2021a; Li et al., 2022; Lee et al., 2022; Dong et al., 2022) and perform the experiments with the MMDetection toolbox (Chen et al., 2019). In order to handle images with high resolution, the numbers of local windows for the object detection task differ from those for image classification: the numbers of windows in the four stages are (256,225), (64,49), (16,9), and (4,1), respectively. In both frameworks, the size of the local window in each stage is half of that in the previous stage. We use the model pretrained on ImageNet-1K and fine-tune it on the COCO dataset with the 1× and 3× schedules of 12 and 36 epochs, respectively.

The results on object detection and instance segmentation of our model and the baseline models with the Mask R-CNN (He et al., 2017) framework and the 1× schedule are illustrated in Table 3. The baseline methods include the latest ViT models such as CSWin (Dong et al., 2022) and DAT (Xia et al., 2022).

Table 3: Object detection and instance segmentation performance of our DWAViT and the baseline models on COCO with the Mask R-CNN framework (1× schedule). The FLOPs are measured at resolution 800 × 1280. [cos] and [quad] denote cosine and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | $AP^b$ | $AP^b_{50}$ | $AP^b_{75}$ | $AP^m$ | $AP^m_{50}$ | $AP^m_{75}$ |
|---|---|---|---|---|---|---|---|---|
| ResNet-50 (He et al., 2016) | 44 | 260 | 38.0 | 58.6 | 41.4 | 34.4 | 55.1 | 36.7 |
| PVT-S (Wang et al., 2021) | 44 | 245 | 40.4 | 62.9 | 43.8 | 37.8 | 60.1 | 40.3 |
| ViL-S (Zhang et al., 2021) | 45 | 218 | 44.9 | 67.1 | 49.3 | 41.0 | 64.2 | 44.1 |
| TwinsP-S (Chu et al., 2021a) | 44 | 245 | 42.9 | 65.8 | 47.1 | 40.0 | 62.7 | 42.9 |
| Twins-S (Chu et al., 2021a) | 44 | 228 | 43.4 | 66.0 | 47.3 | 40.3 | 63.2 | 43.4 |
| Swin-T (Liu et al., 2021) | 48 | 264 | 42.2 | 64.6 | 46.2 | 39.1 | 61.6 | 42.0 |
| DAT-T (Xia et al., 2022) | 48 | 272 | 44.4 | 67.6 | 48.5 | 40.4 | 64.2 | 43.1 |
| CSWin-T (Dong et al., 2022) | 42 | 279 | 46.7 | 68.6 | 51.3 | 42.2 | 65.6 | 45.4 |
| DWAViT-T[cos] (Ours) | 42 | 255 | 46.2 | 69.2 | 50.8 | 41.7 | 65.9 | 44.7 |
| DWAViT-T[quad] (Ours) | 42 | 255 | 46.6 | 69.6 | 51.3 | 42.2 | 66.3 | 45.7 |
| Res101 (He et al., 2016) | 63 | 336 | 40.4 | 61.1 | 44.2 | 36.4 | 57.7 | 38.8 |
| PVT-M (Wang et al., 2021) | 64 | 302 | 42.0 | 64.4 | 45.6 | 39.0 | 61.6 | 42.1 |
| ViL-M (Zhang et al., 2021) | 60 | 261 | 44.6 | 66.3 | 48.5 | 40.7 | 63.8 | 43.7 |
| TwinsP-B (Chu et al., 2021a) | 64 | 302 | 44.6 | 66.7 | 48.9 | 40.9 | 63.8 | 44.2 |
| Twins-B (Chu et al., 2021a) | 76 | 340 | 45.2 | 67.6 | 49.3 | 41.5 | 64.5 | 44.8 |
| Swin-S (Liu et al., 2021) | 69 | 354 | 44.8 | 66.6 | 48.9 | 40.9 | 63.4 | 44.2 |
| CSWin-S (Dong et al., 2022) | 54 | 342 | 47.9 | 70.1 | 52.6 | 43.2 | 67.1 | 46.2 |
| DAT-S (Xia et al., 2022) | 69 | 378 | 47.1 | 69.9 | 51.5 | 42.5 | 66.7 | 45.4 |
| DWAViT-S[cos] (Ours) | 64 | 338 | 48.0 | 70.7 | 52.6 | 43.3 | 67.6 | 46.5 |
| DWAViT-S[quad] (Ours) | 64 | 338 | 48.2 | 70.8 | 52.8 | 43.3 | 67.7 | 46.6 |
| X101-64 (Xie et al., 2017) | 101 | 493 | 42.8 | 63.8 | 47.3 | 38.4 | 60.6 | 41.3 |
| PVT-L (Wang et al., 2021) | 81 | 364 | 42.9 | 65.0 | 46.6 | 39.5 | 61.9 | 42.5 |
| CSWin-B (Dong et al., 2022) | 97 | 526 | 48.7 | 70.4 | 53.9 | 43.9 | 67.8 | 47.3 |
| DWAViT-B[cos] (Ours) | 97 | 462 | 48.6 | 71.1 | 53.7 | 43.6 | 68.0 | 46.9 |
| DWAViT-B[quad] (Ours) | 97 | 462 | 48.6 | 71.2 | 53.6 | 43.6 | 67.9 | 47.1 |
Specifically, Swin Transformer (Liu et al., 2021), DAT (Xia et al., 2022), and CSWin (Dong et al., 2022) are baselines for the exact comparison. These baselines and our method adopt the MMDetection toolbox (Chen et al., 2019) to perform the experiments and use models pre-trained on ImageNet-1K only as the feature extractor. For the tiny-sized models, the experimental results show that DWAViT-T achieves on par or better results compared with CSWin (Dong et al., 2022). For instance, the $AP^b_{50}$ of DWAViT-T[quad] reaches 69.6%, which is 1.0% higher than that of CSWin (Dong et al., 2022), and the $AP^m$ of DWAViT-T[quad] is 42.2%, on par with that of CSWin (Dong et al., 2022). DWAViT-S with quadratic self-attention outperforms all the baseline methods on all the metrics: its $AP^b$ and $AP^m$ reach 48.2% and 43.3%, respectively. Furthermore, DWAViT-B achieves the best results on $AP^b_{50}$ and $AP^m_{50}$.

The results on object detection and instance segmentation of our model and the baseline models with the Mask R-CNN (He et al., 2017) framework and the 3× schedule are illustrated in Table 4. The baseline methods include the latest ViT models such as MViTv2 (Li et al., 2022), DAT (Xia et al., 2022), and DaViT (Ding et al., 2022). Specifically, Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), XCiT (Ali et al., 2021), DAT (Xia et al., 2022), and CSWin (Dong et al., 2022) are baselines for the exact comparison. The experimental results show that DWAViT achieves on par or better results compared with the latest baseline methods. For DWAViT-T, the $AP^b$ reaches 48.8%, which is 0.6% higher than that of MViTv2-T (Li et al., 2022), and the $AP^m$ of DWAViT-T is 43.8%, on par with that of MViTv2-T (Li et al., 2022). DWAViT-S achieves 49.1% on $AP^b$ and 44.0% on $AP^m$, which outperforms all the baseline methods. The experimental results also suggest that our DWAViT-B outperforms Swin-B (Liu et al., 2021) on this task.

Table 4: Object detection and instance segmentation performance of our DWAViT and the baseline models on COCO with the Mask R-CNN framework (3× schedule). The FLOPs are measured at resolution 800 × 1280. [cos] and [quad] denote cosine and quadratic function, respectively.
| Backbone | #Param(M) | #FLOPs(G) | $AP^b$ | $AP^b_{50}$ | $AP^b_{75}$ | $AP^m$ | $AP^m_{50}$ | $AP^m_{75}$ |
|---|---|---|---|---|---|---|---|---|
| ResNet-50 (He et al., 2016) | 44 | 260 | 41.0 | 61.7 | 44.9 | 37.1 | 58.4 | 40.1 |
| ConvNeXt-T (Liu et al., 2022b) | 48 | 262 | 46.2 | 67.9 | 50.8 | 41.7 | 65.0 | 44.9 |
| PVT-S (Wang et al., 2021) | 44 | 245 | 43.0 | 65.3 | 46.9 | 39.9 | 62.5 | 42.8 |
| ViL-S (Zhang et al., 2021) | 45 | 218 | 47.1 | 68.7 | 51.5 | 42.7 | 65.9 | 46.2 |
| TwinsP-S (Chu et al., 2021a) | 44 | 245 | 46.8 | 69.3 | 51.8 | 42.6 | 66.3 | 46.0 |
| Twins-S (Chu et al., 2021a) | 44 | 228 | 46.8 | 69.2 | 51.2 | 42.6 | 66.3 | 45.8 |
| Swin-T (Liu et al., 2021) | 48 | 264 | 46.0 | 68.2 | 50.2 | 41.6 | 65.1 | 44.8 |
| Focal-T (Yang et al., 2021) | 49 | 291 | 47.2 | 69.4 | 51.9 | 42.7 | 66.5 | 45.9 |
| PVTv2-B2 (Wang et al., 2022b) | 45 | 309 | 47.8 | 69.7 | 52.6 | 43.1 | 66.8 | 46.7 |
| XciT-S12/8 (Ali et al., 2021) | 43 | 550 | 47.0 | 68.9 | 51.7 | 42.3 | 66.0 | 45.4 |
| DaViT-T (Ding et al., 2022) | 48 | 363 | 46.5 | 68.1 | 49.6 | 32.3 | 50.6 | 59.9 |
| DAT-T (Xia et al., 2022) | 48 | 272 | 47.1 | 69.2 | 51.6 | 42.4 | 66.1 | 45.5 |
| MViTv2-T (Li et al., 2022) | 44 | 279 | 48.2 | 70.9 | 53.3 | 43.8 | 67.9 | 47.2 |
| DWAViT-T[cos] (Ours) | 42 | 255 | 48.4 | 70.4 | 53.1 | 43.5 | 67.7 | 47.1 |
| DWAViT-T[quad] (Ours) | 42 | 255 | 48.8 | 70.7 | 53.6 | 43.8 | 68.1 | 47.1 |
| Res101 (He et al., 2016) | 63 | 336 | 42.8 | 63.2 | 47.1 | 38.5 | 60.1 | 41.3 |
| ConvNeXt-S (Liu et al., 2022b) | 70 | 348 | 47.9 | 70.0 | 52.7 | 42.9 | 66.9 | 46.2 |
| PVT-M (Wang et al., 2021) | 64 | 302 | 44.2 | 66.0 | 48.2 | 40.5 | 63.1 | 43.5 |
| ViL-M (Zhang et al., 2021) | 60 | 261 | 44.6 | 66.3 | 48.5 | 40.7 | 63.8 | 43.7 |
| TwinsP-B (Chu et al., 2021a) | 64 | 302 | 47.9 | 70.1 | 52.5 | 43.2 | 67.2 | 46.3 |
| Twins-B (Chu et al., 2021a) | 76 | 340 | 48.0 | 69.5 | 52.7 | 43.0 | 66.8 | 46.6 |
| Swin-S (Liu et al., 2021) | 69 | 354 | 48.5 | 70.2 | 53.5 | 43.3 | 67.3 | 46.6 |
| Focal-S (Yang et al., 2021) | 71 | 401 | 48.8 | 70.5 | 53.6 | 43.8 | 67.7 | 47.2 |
| PVTv2-B3 (Wang et al., 2022b) | 65 | 397 | 48.4 | 69.8 | 53.3 | 43.2 | 66.9 | 46.7 |
| XCiT-M24/8 (Ali et al., 2021) | 99 | 1448 | 48.5 | 70.3 | 53.4 | 43.7 | 67.5 | 46.9 |
| DAT-S (Xia et al., 2022) | 69 | 378 | 49.0 | 70.0 | 53.3 | 43.6 | 67.4 | 47.0 |
| DWAViT-S[cos] (Ours) | 64 | 338 | 48.4 | 70.0 | 53.3 | 43.6 | 67.4 | 47.0 |
| DWAViT-S[quad] (Ours) | 64 | 338 | 49.1 | 70.8 | 53.5 | 44.0 | 68.2 | 47.4 |
| X101-64 (Xie et al., 2017) | 101 | 493 | 44.4 | 64.9 | 48.8 | 39.7 | 61.9 | 42.6 |
| PVT-L (Wang et al., 2021) | 81 | 364 | 44.5 | 66.0 | 48.3 | 40.7 | 63.4 | 43.7 |
| Swin-B (Liu et al., 2021) | 107 | 496 | 48.5 | 69.8 | 53.2 | 43.4 | 66.8 | 46.9 |
| DWAViT-B[cos] (Ours) | 97 | 462 | 49.8 | 71.2 | 54.8 | 44.5 | 68.6 | 47.8 |
| DWAViT-B[quad] (Ours) | 97 | 462 | 49.4 | 71.1 | 54.6 | 44.4 | 68.6 | 47.8 |

Table 5 shows the performance of our model and the baseline models with the Cascade Mask R-CNN (Cai & Vasconcelos, 2018) framework on object detection and instance segmentation. Swin Transformer (Liu et al., 2021) and DAT (Xia et al., 2022) are the major baselines for the exact comparison. The experimental results show that our DWAViT outperforms the baseline methods. DWAViT-T achieves 52.2% on $AP^b$ and 45.1% on $AP^m$, and DWAViT-S achieves 52.5% on $AP^b$ and 45.6% on $AP^m$.
DWAViT-S with quadratic self-attention achieves 45.6% and 49.9% on $AP^m$ and $AP^m_{75}$, respectively, which are 0.1% and 0.6% higher than those of DAT-S (Xia et al., 2022).

Table 5: Object detection and instance segmentation performance of our DWAViT and the baseline models on COCO with the Cascade Mask R-CNN framework. The FLOPs are measured at resolution 800 × 1280. [cos] and [quad] denote cosine and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | $AP^b$ | $AP^b_{50}$ | $AP^b_{75}$ | $AP^m$ | $AP^m_{50}$ | $AP^m_{75}$ |
|---|---|---|---|---|---|---|---|---|
| Res50 (He et al., 2016) | 82 | 739 | 46.3 | 64.3 | 50.5 | 40.1 | 61.7 | 43.4 |
| Swin-T (Liu et al., 2021) | 86 | 745 | 50.5 | 69.3 | 54.9 | 43.7 | 66.6 | 47.1 |
| DAT-T (Xia et al., 2022) | 86 | 750 | 51.3 | 70.1 | 55.8 | 44.5 | 67.5 | 48.1 |
| DWAViT-T[cos] (Ours) | 80 | 734 | 52.2 | 71.0 | 57.0 | 45.1 | 68.3 | 49.0 |
| DWAViT-T[quad] (Ours) | 80 | 734 | 51.0 | 70.6 | 56.7 | 44.9 | 68.5 | 48.8 |
| X101-32 (Xie et al., 2017) | 101 | 819 | 48.1 | 66.5 | 52.4 | 41.6 | 63.9 | 45.2 |
| Swin-S (Liu et al., 2021) | 107 | 838 | 51.8 | 70.4 | 56.3 | 44.7 | 67.9 | 48.5 |
| DAT-S (Xia et al., 2022) | 107 | 807 | 52.7 | 71.7 | 57.2 | 45.5 | 69.1 | 49.3 |
| DWAViT-S[cos] (Ours) | 102 | 817 | 52.5 | 71.3 | 57.0 | 45.6 | 68.9 | 49.6 |
| DWAViT-S[quad] (Ours) | 102 | 817 | 52.5 | 71.4 | 57.2 | 45.6 | 68.9 | 49.9 |
| X101-64 (Xie et al., 2017) | 140 | 972 | 48.3 | 66.4 | 52.3 | 41.7 | 64.0 | 45.1 |
| Swin-B (Liu et al., 2021) | 145 | 982 | 51.9 | 70.9 | 56.5 | 45.0 | 68.4 | 48.7 |
| DWAViT-B[cos] (Ours) | 134 | 940 | 52.5 | 71.3 | 57.0 | 45.6 | 69.0 | 49.6 |
| DWAViT-B[quad] (Ours) | 134 | 940 | 52.8 | 71.6 | 57.3 | 45.7 | 69.1 | 49.5 |

## 4.3 ADE20K Semantic Segmentation

In this section, we further investigate the performance of our proposed model on the semantic segmentation task. The UperNet (Xiao et al., 2018) framework is adopted, and our model and the baseline methods are evaluated on the ADE20K (Zhou et al., 2017) benchmark. For a fair comparison, we follow the training procedure of previous works (Ding et al., 2022; Dong et al., 2022) and perform the experiments with the MMSegmentation toolbox (Contributors, 2020). The image is resized to 512 × 512, the model is trained for 160K iterations, and the mIoU is adopted as the metric. The results of the experiment are illustrated in Table 6 and Table 14 (see Appendix). Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), XCiT (Ali et al., 2021), DaViT (Ding et al., 2022), and DAT (Xia et al., 2022) are the baselines for the exact comparison. These methods are also implemented with the MMSegmentation toolbox (Contributors, 2020). Besides, UperNet (Xiao et al., 2018) is adopted as the framework, and models pre-trained on ImageNet-1K only are used as the feature extractor for the baselines and our method.

Table 6: The performance of our DWAViT and the baseline models on ADE20K semantic segmentation. FLOPs are calculated with resolution 512 × 2048. [cos] and [quad] denote cosine and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | mIoU |
|---|---|---|---|
| Swin-T (Liu et al., 2021) | 59 | 945 | 44.5 |
| Focal-T (Yang et al., 2021) | 62 | 998 | 45.8 |
| XciT-S12/16 (Ali et al., 2021) | 54 | 966 | 45.9 |
| XciT-S12/8 (Ali et al., 2021) | 53 | 1237 | 46.6 |
| DaViT-T (Ding et al., 2022) | 60 | 940 | 46.3 |
| DAT-T (Xia et al., 2022) | 60 | 957 | 45.5 |
| DWAViT-T[cos] (Ours) | 52 | 930 | 47.5 |
| DWAViT-T[quad] (Ours) | 52 | 930 | 45.4 |
The results show that our DWAViT with cosine self-attention function can outperform the baselines. Besides, the performance of our model with cosine function is also much better than that of our model with quadratic function. The mIoU of DWAViT-T can reach 47.5% , which outperforms other baseline methods like DAT-T (Xia et al., 2022), DaViT-T (Ding et al., 2022) and XciT-S (Ali et al., 2021). The mIoU of DWAViT-S and DWAViT-B can reach 49.3% and 49.5%, respectively, which can outperform other baseline models like DAT (Xia et al., 2022) and DaViT (Ding et al., 2022). ## 4.4 Runtime Analysis In this section we quantitatively evaluate the actual runtime of our model during the training on image classification, object detection and semantic segmentation tasks. Specifically, for image classification task, we fix the batch to 100 and compare the runtime of our model with that of Swin Transformer (Liu et al., 2021). For object detection and semantic segmentation, we adopt the Mask R-CNN and Upernet as the framework and set the batch size to 2. All the experiments are implemented on a single A100 GPU and we report the average time per iteration. Table 11 illustrates the runtime of our model and the Swin Transformer (Liu et al., 2021) on image classification task. The results suggest that our model would be more time-consuming than that of Swin Transformer (Liu et al., 2021) due to the different form of self-attention function and local window mechanism. Specifically, compared to Swin-T (Liu et al., 2021), our DWAViT-T(quad) can achieve 1.6% higher accuracy in image classification with doubled computational cost. Our DWAViT-B(cos) can achieve an improvement of 0.5% in accuracy for image classification task but it takes twice as much time Table 7: Object detection and instance segmentation performance the our DWAViT with angular selfattention and scaled dot-product self-attention in Mask R-CNN framework. The model is trained with 3x scheme. The FLOPs are measured at resolution 800 × 1280. The [dot product], [cos] and [quad] denote scaled dot-product, cosine and quadratic function, respectively. Backbone #Param(M) #FLOPs(G) APb APb 50 APb 75 AP m AP m 50 AP m 75 DWAViT-T[dot product] 42 255 48.4 70.3 53.2 43.5 67.5 **47.1** DWAViT-T[cos] 42 255 48.4 70.4 53.1 43.5 67.7 **47.1** DWAViT-T[quad] 42 255 48.8 70.7 53.6 **43.8 68.1 47.1** Table 9: The performance of our proposed DWAViT with angular self-attention and scaled dot-product self-attention on the ADE20k semantic segmentation. FLOPs are calculated with resolution 512 × 2048. Backbone Param(M) FLOPs(G) mIoU DWAViT-T[dot product] 52 930 44.7 DWAViT-T[cos] 52 930 **47.5** DWAViT-T[quad] 52 930 45.4 Table 8: The performance of our proposed DWAViT with angular self-attention and scaled dot-product self-attention on the ImageNet-1K classification. The resolution of the image is 224 × 224. Model Param(M) FLOPs(G) Top-1 Acc DWAViT-T[dot product] 22.7 4.2 82.5 DWAViT-T[cos] 22.7 4.2 82.7 DWAViT-T[quad] 22.7 4.2 **82.8** Table 10: The performance of DeiT-S and CSwin-T with angular self-attention and scaled dot-product self-attention on the ImageNet-1K classification. The resolution of the image is 224 × 224. The dot product, cos and quad in the square bracket denote scaled dot-product, cosine and quadratic function, respectively. 
## 4.5 Ablation Study

In the ablation study, we compare the performance of our model with the traditional scaled dot-product self-attention and with our proposed angular self-attention. In the experiment, we replace the angular self-attention with the traditional scaled dot-product self-attention and evaluate the performance on ImageNet-1K classification, COCO object detection, and ADE20K semantic segmentation. We adopt DWAViT-T as the backbone for all three tasks. For object detection, Mask R-CNN is adopted as the framework and the model is trained for 36 epochs. For semantic segmentation, UperNet is adopted as the framework and the model is trained for 160K iterations. The results on image classification, object detection, and semantic segmentation are illustrated in Table 8, Table 7, and Table 9, respectively.

Table 7: Object detection and instance segmentation performance of our DWAViT with angular self-attention and with scaled dot-product self-attention in the Mask R-CNN framework. The model is trained with the 3× scheme. The FLOPs are measured at resolution 800 × 1280. [dot product], [cos], and [quad] denote scaled dot-product, cosine, and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | $AP^b$ | $AP^b_{50}$ | $AP^b_{75}$ | $AP^m$ | $AP^m_{50}$ | $AP^m_{75}$ |
|---|---|---|---|---|---|---|---|---|
| DWAViT-T[dot product] | 42 | 255 | 48.4 | 70.3 | 53.2 | 43.5 | 67.5 | **47.1** |
| DWAViT-T[cos] | 42 | 255 | 48.4 | 70.4 | 53.1 | 43.5 | 67.7 | **47.1** |
| DWAViT-T[quad] | 42 | 255 | **48.8** | **70.7** | **53.6** | **43.8** | **68.1** | **47.1** |

Table 8: The performance of our proposed DWAViT with angular self-attention and with scaled dot-product self-attention on the ImageNet-1K classification. The resolution of the image is 224 × 224.

| Model | Param(M) | FLOPs(G) | Top-1 Acc |
|---|---|---|---|
| DWAViT-T[dot product] | 22.7 | 4.2 | 82.5 |
| DWAViT-T[cos] | 22.7 | 4.2 | 82.7 |
| DWAViT-T[quad] | 22.7 | 4.2 | **82.8** |

Table 9: The performance of our proposed DWAViT with angular self-attention and with scaled dot-product self-attention on the ADE20K semantic segmentation. FLOPs are calculated with resolution 512 × 2048.

| Backbone | Param(M) | FLOPs(G) | mIoU |
|---|---|---|---|
| DWAViT-T[dot product] | 52 | 930 | 44.7 |
| DWAViT-T[cos] | 52 | 930 | **47.5** |
| DWAViT-T[quad] | 52 | 930 | 45.4 |

When our model adopts the traditional scaled dot-product self-attention, the top-1 accuracy is 0.2% and 0.3% lower than with the cosine and quadratic functions, respectively. On the other two tasks, the performance of our model with scaled dot-product self-attention is also lower than that of our model with the proposed angular self-attention. On some tasks, the performance gap is more obvious.
For instance, on semantic segmentation, our model with scaled dot-product self-attention only achieves 44.7% mIoU, approximately 3% lower than our model with cosine self-attention. The experimental results suggest that our angular self-attention successfully models the relationships between tokens and is a powerful alternative to the traditional scaled dot-product self-attention.

We further investigate the performance of our angular self-attention by replacing the scaled dot-product self-attention with our angular self-attention in other vision transformer models. Table 10 shows the performance of DeiT-S (Touvron et al., 2021a) and CSWin-T (Dong et al., 2022) on ImageNet-1K image classification with scaled dot-product self-attention and with our proposed angular self-attention. The experimental results suggest that when existing vision transformer models are equipped with our angular self-attention, their performance on image classification improves. This further validates that our angular self-attention is a competitive alternative to the existing scaled dot-product self-attention.

Table 10: The performance of DeiT-S and CSwin-T with angular self-attention and with scaled dot-product self-attention on the ImageNet-1K classification. The resolution of the image is 224 × 224. The dot product, cos, and quad in the square brackets denote scaled dot-product, cosine, and quadratic function, respectively.

| Model | Param(M) | FLOPs(G) | Top-1 Acc |
|---|---|---|---|
| DeiT-S (Touvron et al., 2021a) [dot product] | 22 | 4.6 | 79.8 |
| DeiT-S (Touvron et al., 2021a) [cos] | 22 | 4.6 | 80.0 |
| DeiT-S (Touvron et al., 2021a) [quad] | 22 | 4.6 | **80.2** |
| CSwin-T (Dong et al., 2022) [exp] | 23 | 4.3 | 82.7 |
| CSwin-T (Dong et al., 2022) [cos] | 23 | 4.3 | 82.9 |
| CSwin-T (Dong et al., 2022) [quad] | 23 | 4.3 | **83.0** |

## 5 Conclusions

In this paper, we propose the dual window mechanism and angular self-attention. The dual window mechanism divides the feature maps into even/odd numbers of local windows in alternating stages to enable information exchange between local windows. In angular self-attention, the traditional scaled dot-product operation is replaced by our proposed quadratic and cosine functions; the proposed angular functions can also model long-range relationships between tokens. Based on the dual window mechanism and angular self-attention, we propose a new vision transformer backbone called the dual-windowed angular vision transformer. Extensive experiments show that our backbone achieves competitive performance on tasks such as image classification, object detection, and semantic segmentation. However, the computational cost of our model is higher than that of baseline models due to the new formulation of the self-attention.

Broader Impact. This work proposed a new vision transformer architecture called DWAViT, featured by angular self-attention and a dual local window mechanism. Our model is shown to achieve competitive performance on downstream tasks such as object detection and semantic segmentation, and it has enormous potential to be used in various practical scenarios. In particular, object detection is one of the most promising applications of vision transformers in the real world, and it is often used in systems that require extensive visual interaction with the surrounding environment. For instance, autonomous vehicles require a large number of object detectors to identify nearby pedestrians and other vehicles, and the safety and trustworthiness of vision transformers are critical in this area. Although our proposed model achieves promising results on object detection and other tasks, some critical issues such as adversarial robustness and trustworthiness remain under-explored, and further investigation is required.

## References

Alaaeldin Ali, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. XCiT: Cross-covariance image transformers. *Advances in Neural Information Processing Systems*, 34:20014–20027, 2021.

Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pp. 213–229. Springer, 2020. Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In *Proceedings of the IEEE/CVF international conference on computer* vision, pp. 357–366, 2021. Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, et al. Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, and Zicheng Liu. Mobile-former: Bridging mobilenet and transformer. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 5270–5279, 2022. François Chollet. Xception: Deep learning with depthwise separable convolutions. In *Proceedings of the IEEE* conference on computer vision and pattern recognition, pp. 1251–1258, 2017. Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In *International Conference on Learning Representations*. Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In *NeurIPS*, pp. 9355–9366, 2021a. Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. *arXiv preprint arXiv:2102.10882*, 2021b. MMSegmentation Contributors. Mmsegmentation: Openmmlab semantic segmentation toolbox and benchmark, 2020. Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In *International Conference on Learning Representations*, 2019. Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. *Advances in Neural Information Processing Systems*, 34:3965–3977, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, and Lu Yuan. Davit: Dual attention vision transformers. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October* 23–27, 2022, Proceedings, Part XXIV, pp. 74–92. Springer, 2022. Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12124–12134, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. 
OpenReview.net, 2021. Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pp. 6824–6835, 2021. Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, and Chang Xu. Cmt: Convolutional neural networks meet vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12175–12185, 2022. Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. Advances in Neural Information Processing Systems, 34:15908–15919, 2021. Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. Neighborhood attention transformer. arXiv preprint arXiv:2204.07143, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In *Proceedings of the IEEE* International Conference on Computer Vision (ICCV), Oct 2017. Liqiang He and Sinisa Todorovic. Destr: Object detection with split transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9377–9386, 2022. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017. Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Crisscross attention for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 603–612, 2019. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine Learning*, pp. 5156–5165. PMLR, 2020. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Youngwan Lee, Jonghee Kim, Jeffrey Willette, and Sung Ju Hwang. Mpvit: Multi-path vision transformer for dense prediction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 7287–7296, 2022. Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. Mvitv2: Improved multiscale vision transformers for classification and detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4804–4814, 2022. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 10012–10022, 2021. 
Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In *Proceedings of the IEEE/CVF* conference on computer vision and pattern recognition, pp. 12009–12019, 2022a. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11976–11986, 2022b. Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo Gao, Chunjing Xu, Tao Xiang, and Li Zhang. Soft: Softmax-free transformer with linear complexity. Advances in Neural Information Processing Systems, 34:21297–21309, 2021. Sachin Mehta and Mohammad Rastegari. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer. In *International Conference on Learning Representations*. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In *International conference on machine learning*, pp. 4055–4064. PMLR, 2018. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. Random feature attention. In *International Conference on Learning Representations*. Zhiliang Peng, Wei Huang, Shanzhi Gu, Lingxi Xie, Yaowei Wang, Jianbin Jiao, and Qixiang Ye. Conformer: Local features coupling global representations for visual recognition. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 367–376, 2021. Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. *SIAM journal* on control and optimization, 30(4):838–855, 1992. Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, and Yiran Zhong. cosformer: Rethinking softmax in attention. In International Conference on Learning Representations. Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In *CVPR*, pp. 10425–10433. Computer Vision Foundation / IEEE, 2020. Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. *Advances in neural information processing systems*, 32, 2019. Yongming Rao, Wenliang Zhao, Yansong Tang, Jie Zhou, Ser-Nam Lim, and Jiwen Lu. Hornet: Efficient high-order spatial interactions with recursive gated convolutions. In *Advances in Neural Information* Processing Systems. Pengzhen Ren, Changlin Li, Guangrun Wang, Yun Xiao, Qing Du, Xiaodan Liang, and Xiaojun Chang. Beyond fixation: Dynamic window visual transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11987–11997, 2022. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4510–4520, 2018. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. *arXiv* preprint arXiv:1803.02155, 2018. Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. Bottleneck transformers for visual recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16519–16529, 2021. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 
Revisiting unreasonable effectiveness of data in deep learning era. In *Proceedings of the IEEE international conference on computer vision*, pp. 843–852, 2017. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In International conference on machine learning, pp. 10096–10106. PMLR, 2021. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, pp. 10347–10357. PMLR, 2021a. Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 32–42, 2021b. Hugo Touvron, Matthieu Cord, and Hervé Jégou. Deit iii: Revenge of the vit. In *Computer Vision–ECCV* 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIV, pp. 516–533. Springer, 2022. Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In *Computer Vision–ECCV 2022: 17th European Conference, Tel* Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIV, pp. 459–479. Springer, 2022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In *Computer Vision–ECCV 2020: 16th European* Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, pp. 108–126. Springer, 2020. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 568–578, 2021. Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. *arXiv preprint arXiv:2211.05778*, 2022a. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvt v2: Improved baselines with pyramid vision transformer. *Computational Visual Media*, 8(3): 415–424, 2022b. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794–7803, 2018. Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22–31, 2021. Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 4794–4803, 2022. Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. 
In *Proceedings of the European conference on computer vision (ECCV)*, pp. 418–434, 2018.

Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, and Ross Girshick. Early convolutions help transformers see better. *Advances in Neural Information Processing Systems*, 34:30392–30400, 2021.

Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34:12077–12090, 2021.

Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, July 2017.

Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal attention for long-range interactions in vision transformers. In *NeurIPS*, pp. 30008–30022, 2021.

Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, and Wei Wu. Incorporating convolution designs into visual transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 579–588, 2021.

Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel Ni, and Harry Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. In International Conference on Learning Representations, 2022.

Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, and Jianfeng Gao. Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 2998–3008, 2021.

Hengshuang Zhao, Jiaya Jia, and Vladlen Koltun. Exploring self-attention for image recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10076–10085, 2020.

Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6881–6890, 2021.

Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 633–641, 2017.

## A Appendix

The proof of Proposition 1 is provided below.

**Proof 1** Assume $\theta_0 \le \theta_1 \le \ldots \le \theta_{J-1}$ for the angles between the target query and all the keys, where $J$ is the total number of tokens and $V$ is the value vector. According to our angular self-attention, the embedding of the target token can be expressed as:

$$
\begin{aligned}
\mathcal{O} &= \frac{1}{C}\sum_{j}\exp(s(\theta_{j})/\tau)V_{j}\\
&= \frac{1}{C}\exp\Big(\frac{s(\theta_{0})}{\tau}\Big)\Big(V_{0}+\exp\Big(\frac{s(\theta_{1})-s(\theta_{0})}{\tau}\Big)V_{1}+\ldots+\exp\Big(\frac{s(\theta_{J-1})-s(\theta_{0})}{\tau}\Big)V_{J-1}\Big)\\
&= \frac{1}{C}\exp\Big(\frac{s(\theta_{0})}{\tau}\Big)(V_{0}+\omega_{1}V_{1}+\ldots+\omega_{J-1}V_{J-1})\\
&\propto V_{0}+\omega_{1}V_{1}+\cdots+\omega_{J-1}V_{J-1},
\end{aligned}
$$

where $\omega_{j} = \exp\big(\frac{s(\theta_{j})-s(\theta_{0})}{\tau}\big)$ and $C$ is the normalization coefficient. Since $\theta_{j} > \theta_{0}$ and $s(\theta_{j}) < s(\theta_{0})$, we have $\omega_{j} \in (0, 1)$.

Table 14: The semantic segmentation performance of DWAViT-S, DWAViT-B and baselines on ADE20K. The FLOPs are calculated with resolution 512 × 2048. [cos] and [quad] denote the cosine and quadratic functions, respectively.

| Backbone | #Param(M) | #FLOPs(G) | mIoU |
|---|---|---|---|
| ResNet-101 (He et al., 2016) | 86 | 1029 | 44.9 |
| XCiT-S24/16 (Ali et al., 2021) | 76 | 1053 | 46.9 |
| TwinsP-B (Chu et al., 2021a) | 74 | 977 | 47.1 |
| XCiT-M24/16 (Ali et al., 2021) | 112 | 1213 | 47.6 |
| Swin-S (Liu et al., 2021) | 81 | 1038 | 47.6 |
| Twins-B (Chu et al., 2021a) | 89 | 1020 | 47.7 |
| Focal-S (Yang et al., 2021) | 85 | 1130 | 48.0 |
| DaViT-T (Ding et al., 2022) | 81 | 1030 | 48.8 |
| DAT-T (Xia et al., 2022) | 83 | 1079 | 48.3 |
| DWAViT-S[cos] (Ours) | 75 | 1015 | **49.3** |
| DWAViT-S[quad] (Ours) | 75 | 1015 | 47.8 |
| XCiT-M24/8 (Ali et al., 2021) | 110 | 2161 | 48.4 |
| Swin-B (Liu et al., 2021) | 121 | 1841 | 48.1 |
| Focal-B (Yang et al., 2021) | 126 | 1354 | 49.0 |
| DaViT-B (Ding et al., 2022) | 121 | 1175 | 49.4 |
| DAT-B (Xia et al., 2022) | 121 | 1212 | 49.4 |
| DWAViT-B[cos] (Ours) | 108 | 1143 | **49.5** |
| DWAViT-B[quad] (Ours) | 108 | 1143 | 47.9 |
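To complement the proof above, here is a minimal PyTorch-style sketch of attention weights computed from the angle between query and key, in the normalized form $\exp(s(\theta_j)/\tau)/C$ used in Proof 1. This is our own illustration rather than the released DWAViT code: $s(\theta)=\cos\theta$ gives the cosine variant, while the quadratic variant below uses an illustrative $s(\theta)=1-(\theta/\pi)^2$ as a stand-in for the paper's exact quadratic function.

```python
import torch
import torch.nn.functional as F

def angular_attention(q, k, v, tau=1.0, variant="cos"):
    # q, k: (n, d) queries/keys; v: (n, d_v) values
    cos_theta = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).T
    cos_theta = cos_theta.clamp(-1 + 1e-6, 1 - 1e-6)
    if variant == "cos":
        s = cos_theta                          # s(theta) = cos(theta)
    else:
        theta = torch.acos(cos_theta)          # angle between query and key
        s = 1.0 - (theta / torch.pi) ** 2      # illustrative quadratic; not the paper's exact form
    w = torch.softmax(s / tau, dim=-1)         # exp(s(theta_j)/tau) / C, as in Proof 1
    return w @ v
```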
Review 1:
Summary: The manuscript presents a transformer-like architecture for visual data. It proposes:
1. Windowed self-attention, where information exchange across windows is achieved by using different window sizes in alternating blocks.
2. Angular attention, an alternative to dot-product attention which computes a function of the angle, $\theta$, between key and query vectors. Two variations of angular attention are proposed, `cos` and `quadratic`. `cos` computes $\cos\left( \theta \right)$ and `quadratic` works on $\theta^2$.

The overall architecture also down-samples the feature grid after every stage. The architecture is evaluated on ImageNet-1k classification, MSCOCO object detection and instance segmentation, and ADE-20k semantic segmentation.

Strengths and Weaknesses: The idea is well presented and evaluated on a range of downstream tasks: classification, detection and segmentation. The experiments are at scale and compare against several related prior models. Comparisons are made between the two angular attention functions and against dot-product attention (ablation). The model does not cause a major overhead in terms of #params and #flops.

Requested Changes:
### Citations [critical]
On page 3, the manuscript identifies Dosovitskiy et al. 2021 as the first introduction of self-attention to computer vision. The method that first introduced self-attention to computer vision is https://arxiv.org/abs/1802.05751 or https://arxiv.org/pdf/1711.07971.pdf; and transformer-inspired architectures have since been applied to classification, for example [here](https://proceedings.neurips.cc/paper_files/paper/2019/file/3416a75f4cea9109507cacd8e2f2aefc-Paper.pdf). The idea of using image patches as tokens was also implemented in https://arxiv.org/abs/1911.03584. Several alternatives to dot-product attention were explored in ["Exploring Self-attention for Image Recognition"](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_Exploring_Self-Attention_for_Image_Recognition_CVPR_2020_paper.pdf). The ["Non local networks"](https://arxiv.org/pdf/1711.07971.pdf) paper also discusses some alternatives to dot-product attention. These and similar works that appeared prior to ViT need to be cited.
### Analysis
The analysis quantifies performance when controlling for #params and #flops. Please also quote the actual runtime in training steps per second for a fixed hardware setup (#gpus) used across all experiments, for at least some subset of runs. In many cases, although the #flops are fixed, the actual runtime can blow up because of subtle differences in the formulation. [critical]
The analysis mainly compares against several baselines, but it is not clear which baselines are actually an exact comparison. Exact comparison = same prediction head (mask-rcnn/cascaded mask-rcnn/upernet) + same training regime + same pre-training and fine-tuning datasets. Please clarify in the manuscript which of these comparisons are exact. [critical]
Error bars are not included in any of the tables. I suspect that this is because of how expensive it is to train these networks. It is, however, necessary to understand whether the proposed model exhibits variance or not. Please provide error bars for at least the DWAViT-S entries in Table 2. [critical]
### Minor typos [recommendation]
Missing closing brackets in equation 6. Double "our" in paragraph 2 of 4.2.

Broader Impact Concerns: Computer vision, especially object detection, is an ethically sensitive topic.
A broader impact statement is not present and I strongly encourage the authors to add one.
==================================================
Review 2:
Summary: In this paper, the authors proposed a dual-windowed ViT with angular attention (DWA-ViT). As evidenced in its name, the contribution is twofold:
- The authors proposed to use alternate window sizes across adjacent blocks to achieve an effect similar to Swin Transformer;
- The authors change the similarity formulation in ViTs to one based on angular distance.

The effectiveness of DWAViT is demonstrated through evaluations on various computer vision benchmarks, including ImageNet-1K for image classification, COCO for object detection, and ADE20k for semantic segmentation. The paper also validates the proposed angular self-attention by comparing it to the traditional scaled dot-product operation on multiple tasks.

Strengths and Weaknesses:
Strengths:
- The paper aligns well with the interests of a specific audience (this is one of the acceptance criteria).
- The concept of dual windows is innovative. I am not aware of any apparent precedent in the existing literature (though other reviewers could correct me if I am wrong).
- The authors effectively demonstrate the superiority of their proposed designs (dual windows and angular self-attention) over existing approaches in certain contexts (this is one of the acceptance criteria).
- Rigorous experimentation covers a range of tasks (classification, detection, segmentation) and datasets, enhancing the paper's robustness.

Weaknesses:
- While the notion of normalizing features and using cosine similarity as a distance metric is common in face recognition, its application here might not be novel enough, though according to TMLR's guidelines this alone is not grounds for paper rejection.
- The two proposed techniques (dual windows and angular self-attention) appear somewhat disjointed and self-contained. It would be valuable to explore potential synergies or connections between these innovations.
- The use of cosine similarity and quadratic distance as angular self-attention metrics seems to cater to distinct application scenarios. More insight into this distinction would enrich the understanding.
- The runtime of the proposed DWA-ViT is absent from the discussion. Including these latency figures would enhance the paper's practicality, even though it is assumed to be no slower than Swin.

Requested Changes: Please respond to my questions in the "Weaknesses" section.

Broader Impact Concerns: N/A
==================================================
Review 3:
Summary: The paper explores variants of vision transformers based on applying a normalization prior to the self-attention mechanism (such that only the angle between vectors is taken into account) and a mechanism to apply self-attention among neighboring tokens that is effective without compromising the computational cost. Results show that the proposed final architecture is competitive with state-of-the-art accuracy on several computer vision tasks.

Strengths and Weaknesses: The paper is easy to follow and the claims seem to be accurate given the evidence presented. The following are several suggestions that I hope help the authors:
- The techniques explored in this paper are well described and easy to follow. The intuitions behind the dual window mechanism and the motivation for introducing it are easy to understand. Yet, the normalization of the self-attention seems quite an arbitrary choice to explore.
I would suggest motivating the reasons why the authors decided to investigate this (e.g., what were the expectations before and after the experiments).
- Theoretical analysis: This section comes quite abruptly; it would help to explain the intent of the proposition before introducing it. This section is hard to follow without more context.
- Please provide the code to reproduce all the experiments in the paper.
- State more clearly (i.e., by providing quantities) what the computational cost gain and accuracy gain are.

There are also several typos and passages of text that need some refinement:
- In the abstract, "we propose two solutions": no problem was discussed before, so talking about "solutions" is confusing.
- In the abstract, "We evaluate...": the evaluation is mentioned, but not the takeaway of those evaluations, which is what is relevant to learn from the abstract.
- List of contributions (top of page 3), first point ("We propose the dual window mechanism to reduce the computation cost..."): indicate by how much the computational cost is reduced.
- Section 3.1, "cycle shift." and "following layer,": it seems "," and "." have been swapped.
- Equation 6 is missing a parenthesis.

Requested Changes: Most of my suggestions are about improving the text to state the achievements of the paper more clearly and to help the reader understand the paper. An important request that needs to be fulfilled is to provide the code to reproduce the experiments.

Broader Impact Concerns: n/a
==================================================
Metareview:
Recommendation: Reject
Comment: This paper presented two techniques to improve the Swin Transformer for vision tasks, namely a dual window and angular self-attention, to replace the corresponding original designs of the Swin Transformer. The proposed models were tested on a series of tasks, including ImageNet classification, COCO detection and instance segmentation, and ADE20k segmentation. After the rebuttal, two reviewers rated the paper "Accept" and one reviewer rated the paper "Leaning to Reject". The main negative comment concerns the slow inference speed of the proposed angular self-attention mechanism, with which the AE agrees. If a network is twice as slow as other ViT models, it is quite unlikely that any follow-up work will choose DWAViT as the backbone, as nowadays experiments are already quite slow. The authors shall carefully discuss and/or consider how to solve the slow inference problem (not simply admit it is twice as slow and move on). In addition, the AE found that the paper lacks an adequate ablation study on the proposed dual window design. Most of the ablation study focuses on the angular attention design. As the paper also claims the superiority of the dual window design, this component should also be carefully studied (preferably on multiple datasets and/or tasks).
==================================================
# Exploring Generative Neural Temporal Point Process

Haitao Lin *linhaitao@westlake.edu.cn*
CAIRI, Westlake University; Zhejiang University

Lirong Wu *wulirong@westlake.edu.cn*
CAIRI, Westlake University; Zhejiang University

Guojiang Zhao *guojianz@andrew.cmu.edu*
CAIRI, Westlake University; Carnegie Mellon University

Pai Liu *pailiu@westlake.edu.cn*
School of Engineering, Westlake University

Stan Z. Li *Stan.ZQ.Li@westlake.edu.cn*
CAIRI, Westlake University

Reviewed on OpenReview: *https://openreview.net/forum?id=NPfS5N3jbL*

## Abstract

The temporal point process (TPP) is commonly used to model asynchronous event sequences featuring occurrence timestamps, and is revealed by probabilistic models conditioned on historical impacts. While many previous works have focused on the 'goodness-of-fit' of TPP models by maximizing the likelihood, their predictive performance is unsatisfactory, which means the timestamps generated by the models are far apart from the true observations. Recently, deep generative models such as denoising diffusion and score matching models have achieved great progress in image generation tasks by demonstrating their capability of generating samples of high quality. However, there is no complete and unified work exploring and studying the potential of generative models in the context of event occurrence modeling for TPPs. In this work, we try to fill the gap by designing a unified generative framework for the neural temporal point process (GNTPP) model to explore the feasibility and effectiveness of generative models, and to further improve models' predictive performance. Besides, in terms of measuring the historical impacts, we revise the attentive models, which summarize influence from historical events, with an adaptive reweighting term considering events' type relation and time intervals. Extensive experiments have been conducted to illustrate the improved predictive capability of GNTPP with a line of generative probabilistic decoders, and the performance gain from the revised attention. To the best of our knowledge, this is the first work that adapts generative models in a complete unified framework and studies their effectiveness in the context of TPPs. Our codebase, including all the methods given in Section 5.1.1, is available at https://github.com/BIRD-TAO/GNTPP. We hope the code framework can facilitate future research in neural TPPs.

## 1 Introduction

Various forms of human activity or natural phenomena can be represented as discrete events happening with irregular time intervals, such as electronic health records, purchases in e-commerce systems, and earthquakes with aftershocks. A natural choice for revealing the underlying mechanisms of the occurrence of events is the temporal point process (TPP) (D.J. Daley, 2003; Isham & Westcott, 1979; Hawkes, 1971), which describes the probability distribution of the time intervals and types of future events' occurrence by summarizing the impacts of historical observations. Recently, with the rapid development of deep learning, TPP models have also benefited from the great expressiveness of neural networks, starting from the first such work proposed in Du et al. (2016). A neural TPP model can be divided into two parts: the **history encoder** and the **probabilistic decoder**. Recently, great success has been achieved in modeling TPPs thanks to fast developments in sequential models and generative models in deep learning (Shchur et al., 2021; Lin et al., 2021).
The former concentrates on the competence of encoding and aggregating the past impacts of events on the next event's occurrence probability (Zhang et al., 2020a; Zuo et al., 2020), and is called the history encoder; the latter aims to improve the flexibility and efficiency of approximating the target distribution of occurrence time intervals conditioned on the historical encoding (Omi et al., 2019; Shchur et al., 2020a;b), and is called the probabilistic decoder.

Most of the previous works focus on the 'goodness-of-fit' of the proposed models, which can be quantified by the negative log-likelihood (NLL). However, limited by this, to model the distribution in a general fashion, one needs to formulate the probabilistic decoder with certain families of functions whose likelihoods are tractable, such as a mixture of distributions whose support sets are the positive real numbers (Shchur et al., 2020a; Lin et al., 2021) or triangular maps as a generalization of autoregressive normalizing flows (Shchur et al., 2020b). Although some probabilistic decoders with intractable likelihoods still perform well in the evaluation of 'goodness-of-fit' (Mei & Eisner, 2017; Zhang et al., 2020a; Zuo et al., 2020), they rely on stochastic or numerical approximation to calculate the likelihood, which leads to an unaffordably high computational cost despite their theoretically universal approximation ability (Soen et al., 2021). To sum up, both the requirement of a certain structure in the functional approximators, e.g. the mixture of log-normal distributions (Shchur et al., 2020a), and the excessive emphasis on 'goodness-of-fit' as the optimization objective considerably restrict the models' predictive and extrapolation performance. As a result, the timestamp samples generated by these models are far apart from the ground-truth observations, which limits their application in real-world scenarios. Recent empirical studies show that these models' predictive performance is very unsatisfactory (Lin et al., 2021), with extremely high error in the task of next arrival time prediction. As a probabilistic model, a good TPP model should not only demonstrate its goodness-of-fit (lower NLL), but also be able to generate the next arrival time as samples of high quality, as well as preserve the randomness and diversity of the generated samples.

Since TPP models aim to approximate the target distribution of event occurrence conditioned on historical influences, we can classify TPP models as conditional probabilistic or generative models in the field of deep learning, and borrow techniques from these fields to improve the predictive performance (Yan et al., 2018; Xiao et al., 2017a). In deep probabilistic models, the functional forms in energy-based models are usually less restrictive, and the optimization of the objective function as an unnormalized log-likelihood can be directly converted into a point estimation or regression problem, thus empowering the models with improved predictive ability. Recently, deep generative models, including denoising diffusion models (Sohl-Dickstein et al., 2015b; Ho et al., 2020) and score matching models (Song et al., 2021b;a; Bao et al., 2022) as instances of energy-based deep generative models, have attracted lots of scientific interest as they demonstrate a great ability to generate image samples of high quality.
Thanks to their less restrictive functional forms and revised unnormalized log-probability as the optimization objective, they have also been employed for generative tasks in other fields, such as crystal material generation (Xie et al., 2021) and time series forecasting (Rasul et al., 2021), and demonstrate great feasibility. Enlightened by this, we conjecture that probabilistic decoders constructed by deep generative models in the context of TPPs are likely to generate samples of time intervals that are close to ground-truth observations, thus improving the predictive performance. In this way, we design a unified framework for the generative neural temporal point process (GNTPP) by employing deep generative models as the probabilistic decoder to approximate the target distribution of occurrence time. Besides, we revise the self-attentive history encoders (Zhang et al., 2020a), which measure the impacts of historical events, with adaptive reweighting terms considering events' type relation and time intervals.

In summary, the contributions of this paper are listed as follows:

- To the best of our knowledge, we are the first to establish a complete framework of generative models to study their effectiveness for improving the predictive performance of TPPs, i.e. enabling TPP models to generate time samples of high quality.
- In terms of history encoders, we revise the self-attentive encoders with adaptive reweighting terms, considering the type relation and time intervals of historical observations, showing better expressiveness.
- We conduct extensive experiments on one complicated synthetic dataset and four real-world datasets, to explore the potential of GNTPP in terms of predictive performance and fitting ability. Besides, further studies give more analysis to ensure the effectiveness of the revised encoder.

## 2 Background

## 2.1 Temporal Point Process

## 2.1.1 Preliminaries

A realization of a TPP is a sequence of event timestamps $\{t_i\}_{1\le i\le N}$, where $t_i \in \mathbb{R}^+$ and $t_i < t_{i+1} \le T$. A marked TPP allocates a type $m_i$ (a.k.a. mark) to each event timestamp, where there are $M$ types of events in total and $m_i \in [M]$ with $[M] = \{1, 2, \ldots, M\}$. A TPP is usually described as a counting process, with the measure $\mathcal{N}(t)$ defined as the number of events occurring in the time interval $(0, t]$. We denote by $\{(t_i, m_i)\}_{1\le i\le N}$ an observation of the process, and the history of a certain timestamp $t$ is denoted as $\mathcal{H}(t) = \{(t_j, m_j), t_j < t\}$. In this way, the TPP can be characterized via its intensity function conditioned on $\mathcal{H}(t)$, defined as

$$\lambda^{*}(t) = \lambda(t|\mathcal{H}(t)) = \lim_{\Delta t\to 0^{+}}\frac{\mathbb{E}[\mathcal{N}(t+\Delta t)-\mathcal{N}(t)|\mathcal{H}(t)]}{\Delta t},\tag{1}$$

which is the expected instantaneous rate of events happening given the history. Note that it is always a non-negative function of $t$. Given the conditional intensity, the probability density function of the occurrence timestamps reads

$$q^{*}(t)=\lambda^{*}(t)\exp\Big(-\int_{t_{i-1}}^{t}\lambda^{*}(\tau)\,d\tau\Big).\tag{2}$$

The leading target of TPPs is to parameterize a model $p^{*}_{\theta}(t)$ to fit the distribution of the generated marked timestamps, i.e. $q^{*}(t)$, so as to infer the probability density or conditional intensity for further statistical prediction, such as next event arrival time prediction. Besides, in marked scenarios, parameterizing $p^{*}_{\theta}(m)$ to predict the next event type is also an important task.
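To make Equation (2) concrete, the following is a minimal numerical sketch (our own illustration; the intensity function and grid size are assumptions, not part of the paper) that evaluates $q^{*}(t)$ for an arbitrary conditional intensity by approximating the integral with the trapezoidal rule.

```python
import numpy as np

def density_from_intensity(intensity, t_prev, t, n_grid=1000):
    """Evaluate Eq. (2): q*(t) = lambda*(t) * exp(-integral_{t_prev}^{t} lambda*(s) ds)."""
    grid = np.linspace(t_prev, t, n_grid)
    cum_hazard = np.trapz(intensity(grid), grid)  # trapezoidal approximation of the integral
    return intensity(np.array([t]))[0] * np.exp(-cum_hazard)

# Sanity check: a constant intensity of 2.0 yields the exponential density 2*exp(-2*(t - t_prev)).
q = density_from_intensity(lambda s: np.full_like(s, 2.0), t_prev=0.0, t=0.5)  # ~0.7358
```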
More details on the preliminaries are given in Appendix A. Usually, in deep neural TPP models, the impacts of the historical events $\mathcal{H}(t)$ on the distribution of time $t$ are summarized as a historical encoding $\mathbf{h}_{i-1}$, where $i-1 = \arg\max_{j}\{j : t_j < t\}$, and the parameters of $p^{*}_{\theta}(t)$ and $p^{*}_{\theta}(m)$ are determined by $\mathbf{h}_{i-1}$ for $t > t_{i-1}$, which reads

$$p^{*}(t;\theta(\mathbf{h}_{i-1}))=p_{\theta}(t|\mathbf{h}_{i-1});\quad p^{*}(m;\theta(\mathbf{h}_{i-1}))=p_{\theta}(m|\mathbf{h}_{i-1}).\tag{3}$$

In summary, in deep neural TPPs, there are two key questions to answer in modeling the process: (1) How to measure the historical events' impacts on the next event's arrival time and type distribution? In other words, how to encode the historical events before time $t$, i.e. $\mathcal{H}(t)$ for $t_{i-1} < t$, into a vector $\mathbf{h}_{i-1}$ to parameterize $p_{\theta}(t|\mathbf{h}_{i-1})$ or $p_{\theta}(m|\mathbf{h}_{i-1})$? (2) How to use a conditional probabilistic model $p^{*}(t, m; \theta(\mathbf{h}_{i-1})) = p_{\theta}(t, m|\mathbf{h}_{i-1})$ with respect to time $t$ and type $m$, whose parameters are obtained by $\theta = \theta(\mathbf{h}_{i-1})$, to approximate the true probability of the events' times and types?

## 2.1.2 History Encoders

For task (1), it can be regarded as a task of sequence embedding, i.e. finding a mapping $H$ which maps a sequence of historical event times and types before $t$, i.e. $\mathcal{H}(t) = \{(t_j, m_j)\}_{1\le j\le i-1}$ where $t > t_{i-1}$, to a vector $\mathbf{h}_{i-1} \in \mathbb{R}^D$ called the historical encoding. $D$ is called the '*embedding size*' in this paper. To increase expressiveness, the $j$-th event in the history set is first lifted into a high-dimensional space, considering both temporal and type information, as

$$u(t_{j},m_{j})=\mathbf{e}_{j}=[\boldsymbol{\omega}(t_{j});\mathbf{E}_{m}^{T}\mathbf{m}_{j}],\tag{4}$$

where $\boldsymbol{\omega}$ transforms the one-dimensional $t_j$ into a high-dimensional vector and can be linear, trigonometric, and so on, $\mathbf{E}_m$ is the embedding matrix for event types, and $\mathbf{m}_j$ is the one-hot encoding of event type $m_j$. Commonly, to normalize the timestamps into a unified scale, the event embeddings take $\tau_j = t_j - t_{j-1}$ as the input, i.e. $\mathbf{e}_j = u(\tau_j, m_j)$. Then, another mapping $v$ is used to map the sequence of embeddings $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_{i-1}\}$ into a vector space of dimension $D$, by

$$\mathbf{h}_{i-1}=v([\mathbf{e}_{1};\mathbf{e}_{2};\ldots;\mathbf{e}_{i-1}]).\tag{5}$$

For example, units of recurrent neural networks (RNNs), including the GRU and LSTM, can all be used to map the sequence (Du et al., 2016; Omi et al., 2019; Shchur et al., 2020a), as

$$\mathbf{h}_{0}=\mathbf{0};\quad\mathbf{h}_{i}=\mathrm{RNN}(\mathbf{e}_{i},\mathbf{h}_{i-1}).\tag{6}$$

Therefore, the history encoder can be decomposed into two parts, as

$$\mathbf{h}_{i-1} = H(\mathcal{H}(t)) = v \circ u(\mathcal{H}(t)) = v([u(\tau_1, m_1); u(\tau_2, m_2); \ldots; u(\tau_{i-1}, m_{i-1})]),\tag{7}$$

where $u$ and $v$ are the event encoder and sequence encoder respectively, and the composition of both makes up the history encoder. The attention mechanisms (Vaswani et al., 2017), which have made great progress in language models, prove to be superior history encoders for TPPs in recent research (Zhang et al., 2020a; Zuo et al., 2020). In this paper, we follow these works in implementing self-attention mechanisms (Vaswani et al., 2017) but conduct revisions to the self-attentive encoders, leading to the revised attentive history encoders of our paper.
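To summarize Equations (4)-(7) in code, here is a minimal PyTorch sketch of the two-part history encoder: an event encoder $u$ built from a linear time lift and a type embedding, followed by a GRU as the sequence encoder $v$. This is our own illustration; the dimensions and the linear choice of $\boldsymbol{\omega}$ are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Sketch of Eqs. (4)-(7): event encoder u (time lift + type embedding),
    then a GRU as the sequence encoder v."""
    def __init__(self, n_types, d_time=16, d_type=16, d_hidden=32):
        super().__init__()
        self.omega = nn.Linear(1, d_time)              # a linear lift of tau_j (Eq. 4)
        self.type_emb = nn.Embedding(n_types, d_type)  # rows of E_m
        self.rnn = nn.GRU(d_time + d_type, d_hidden, batch_first=True)

    def forward(self, taus, marks):
        # taus: (batch, seq_len) float inter-event times; marks: (batch, seq_len) long type ids
        e = torch.cat([self.omega(taus.unsqueeze(-1)), self.type_emb(marks)], dim=-1)
        h, _ = self.rnn(e)   # h[:, i-1] is the historical encoding h_{i-1} (Eq. 6)
        return h
```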
## 2.1.3 Probabilistic Decoders

For task (2), it is usually regarded as setting up a conditional probabilistic model $p^{*}(t, m|\theta(\mathbf{h}_{i-1}))$, whose conditional information is carried by the model's parameters $\theta(\mathbf{h}_{i-1})$ obtained from the historical encoding. Statistical inference and prediction can be conducted according to the model, such as generating new sequences of events, or using the expectation of time $t$ to predict the next arrival time. Besides, for interpretability, the relations among different types of events, such as the Granger causality (Xu et al., 2016; Eichler et al., 2016) inferred by probabilistic models, also arouse research interest.

In the temporal domain, one choice is to directly formulate the conditional intensity function $\lambda^{*}_{\theta}(t)$ (Du et al., 2016), the cumulative hazard function $\Lambda^{*}_{\theta}(t) = \int_{0}^{t}\lambda^{*}_{\theta}(\tau)d\tau$ (Omi et al., 2019), or the probability density function $p^{*}_{\theta}(t)$ (Shchur et al., 2020a), and to minimize the negative log-likelihood as the optimization objective. For example, the loss of a single event's arrival time reads

$$l_{i}=-\log\lambda_{\theta}(t_{i}|\mathbf{h}_{i-1})+\int_{t_{i-1}}^{t_{i}}\lambda_{\theta}(t|\mathbf{h}_{i-1})dt.\tag{8}$$

However, as demonstrated in Equations (2) and (8), minimizing the negative log-likelihood requires the closed form of the probability density function, which limits the flexibility of the models to approximate the true distribution from which the event data are generated. For example, one who attempts to formulate $\lambda^{*}_{\theta}(t)$ will inevitably confront whether its integration (a.k.a. the cumulative hazard function) has a closed form, for manageable computation of the likelihood.

Table 1: Description of existing neural TPP methods, following Lin et al. (2021).

| Methods | History Encoder | Probabilistic Decoder | Closed Likelihood | Flexible Sampling |
|---|---|---|---|---|
| RMTPP (Du et al., 2016) | RNN | Gompertz | ✓ | ✓ |
| LogNorm (Shchur et al., 2020a) | RNN | Log-normal | ✓ | ✓ |
| ERTPP (Xiao et al., 2017b) | RNN | Gaussian | ✓ | ✓ |
| WeibMix (Lin et al., 2021) | Transformer | Weibull | ✓ | ✓ |
| FNNInt (Omi et al., 2019) | RNN | Feed-forward Network | ✓ | ✗ |
| SAHP (Zhang et al., 2020a) | Transformer | Exp-decayed + Nonlinear | ✗ | ✗ |
| THP (Zuo et al., 2020) | Transformer | Linear-decayed + Nonlinear | ✗ | ✗ |
| WasGANTPP (Xiao et al., 2017a) | RNN | GAN | - | ✓ |

Although this term can be approximated by numerical or Monte Carlo integration methods, the approximation may deviate from the analytical likelihood due to insufficient sampling, and the computational cost may be unaffordable. Another problem is that these likelihood-targeted models usually perform unsatisfactorily in next arrival time prediction (Lin et al., 2021) despite their good fitting capability in terms of the negative log-likelihood.

For these reasons, and enlightened by the effectiveness of adversarial and reinforcement learning (Yan et al., 2018; Arjovsky et al., 2017; Upadhyay et al., 2018; Li et al., 2018) in the context of TPPs, we conjecture that state-of-the-art methods and techniques in deep generative models can be transferred to deep neural TPPs. Differing from previous works focusing on models' fitting ability in terms of higher likelihood, these models aim to promote models' prediction ability, i.e. to generate high-quality samples which are closer to ground-truth observations.
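To make the cost of likelihood-based training concrete, the following sketch (our own illustration; `lambda_theta` is a hypothetical intensity network, not code from any cited work) shows the Monte Carlo approximation of the integral term in Equation (8) that intensity-based models fall back on when the cumulative hazard has no closed form; the per-event sampling is exactly where the extra computational cost comes from.

```python
import torch

def mc_event_nll(lambda_theta, t_prev, t_i, h, n_samples=100):
    """Eq. (8) for one event: -log lambda(t_i|h) + integral_{t_prev}^{t_i} lambda(t|h) dt,
    with the integral estimated from uniform samples on (t_prev, t_i]."""
    u = t_prev + (t_i - t_prev) * torch.rand(n_samples)
    integral = (t_i - t_prev) * lambda_theta(u, h).mean()
    return -torch.log(lambda_theta(torch.tensor([t_i]), h)).squeeze() + integral
```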
Inspired by the great success achieved by recently proposed generative models (Sohl-Dickstein et al., 2015a; Ho et al., 2020; Song et al., 2021b), which enjoy advantages in generating image samples of good quality and have been extended to a line of fields (Xie et al., 2021; Rasul et al., 2021), we hope to deploy this new state-of-the-art family of probabilistic models to TPPs, to solve the dilemma of the unsatisfactory predictive ability of neural TPPs as well as to further enhance models' flexibility.

## 2.2 A Brief Review

![4_image_0.png](4_image_0.png)

Figure 1: Intuitive explanation of our motivation: In (a), SAHP and THP cost more memory than others, and WasGANTPP is more time-consuming because of the adversarial training. In (b), the 'MAPE' of WasGANTPP as a generative model is relatively smaller, in comparison to the classical TPP probabilistic decoders. The details of the experimental settings and results are given in Section 5 and Appendix B.4.

We review recently proposed neural TPP models, and give a brief description of them in Table 1. Most methods directly model the intensity function or probability density function of the process, while only WasGANTPP (Xiao et al., 2017a) employed a Wasserstein GAN as the probabilistic decoder, whose learning target is an approximation to the Wasserstein distance between the empirical and model distributions, and which allows flexible sample generation. The classical methods with closed-form likelihoods usually show unsatisfactory performance, while SAHP, THP and WasGANTPP achieve improvements, as shown in Figure 1(b). SAHP and THP depend on numerical or Monte Carlo integration to approximate the likelihood because the probabilistic decoder has no closed form, leading to higher computational cost (shown in Figure 1(a)). Besides, FNNInt, SAHP and THP do not allow flexible sampling, which limits real-world applications when one needs to draw samples from the learned models. Figure 1 and Table 1 give an intuitive demonstration of our motivation.

## 3 Methodology

## 3.1 Revised Attentive Encoder

Self-attention, as the key module in the Transformer (Vaswani et al., 2017; Dong et al., 2021), benefits from fast parallel computing and the capability of encoding long-term sequences in lots of fields. In attentive TPPs (Zhang et al., 2020a), the events are embedded as vectors $\mathbf{e}_j = [\boldsymbol{\omega}(\tau_j); \mathbf{E}_m^T\mathbf{m}_j]$ by positional encoding techniques, where

$$\boldsymbol{\omega}(\tau_{j})=[\sin(\omega_{1}j+\omega_{2}\tau_{j});\cos(\omega_{1}j+\omega_{2}\tau_{j})],\tag{9}$$

in which $\omega_1 j$ is the positional term and $\omega_2\tau_j$ is the time term. Or, in Zuo et al. (2020), it reads

$$\boldsymbol{\omega}(\tau_{j})=[\sin(\omega_{1}\tau_{j});\cos(\omega_{2}\tau_{j})].\tag{10}$$

After that, the historical encoding obtained by attention mechanisms can be written as

$$\mathbf{h}_{i-1}=\sum_{j=1}^{i-1}\exp(\phi(\mathbf{e}_{j},\mathbf{e}_{i-1}))\psi(\mathbf{e}_{j})\Big/\sum_{j=1}^{i-1}\exp(\phi(\mathbf{e}_{j},\mathbf{e}_{i-1})),\tag{11}$$

where $\phi(\cdot, \cdot)$ maps two embeddings into a scalar called the attention weight, e.g. $\phi(\mathbf{e}_j, \mathbf{e}_i) = (\mathbf{e}_j\mathbf{W}_Q)(\mathbf{e}_i\mathbf{W}_K)^T$, and $\psi$ transforms $\mathbf{e}_j$ into a series of $D$-dimensional vectors called values, e.g. $\psi(\mathbf{e}_j) = \mathbf{e}_j\mathbf{W}_V$. The calculation in Equation (11) can be regarded as summarizing all the previous events' influence, with different weights $w_{j,i-1} = \frac{\exp(\phi(\mathbf{e}_{j},\mathbf{e}_{i-1}))}{\sum_{j=1}^{i-1}\exp(\phi(\mathbf{e}_{j},\mathbf{e}_{i-1}))} = \mathrm{softmax}(\phi(\mathbf{e}_{j},\mathbf{e}_{i-1}))$.
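As a reference point for the revision introduced next, here is a minimal sketch (our own illustration; `W_q`, `W_k`, `W_v` are assumed projection matrices) of the plain self-attentive encoding of Equation (11), with $\phi$ the dot-product-style score and $\psi$ a linear value map.

```python
import torch

def attentive_encoding(E, W_q, W_k, W_v):
    """Eq. (11): h_{i-1} as a softmax-weighted sum of values psi(e_j) = e_j W_v,
    with scores phi(e_j, e_{i-1}) = (e_j W_q)(e_{i-1} W_k)^T. E: (i-1, d) event embeddings."""
    phi = (E @ W_q) @ (E[-1] @ W_k)   # score of each e_j against the latest event e_{i-1}
    w = torch.softmax(phi, dim=0)     # the weights w_{j,i-1}
    return w @ (E @ W_v)              # historical encoding h_{i-1}
```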
Although the self-attentive history encoders are very expressive and flexible for both long-term and local dependencies, which has proven effective in deep neural TPPs (Zuo et al., 2020; Zhang et al., 2020a), we are motivated by the following two problems raised by event time intervals and types, and revise the classical attention mechanism by multiplying in two terms which consider time intervals and type relations respectively.

P.1. As shown in Equation (11), in scenarios where the positional encoding term $j$ is not used (refer to Equation (2) in Zuo et al. (2020)), if there are two events with time intervals $\tau_{j_1} = t_{j_1} - t_{j_1-1}$ and $\tau_{j_2} = t_{j_2} - t_{j_2-1}$ which are equal and of the same type but $t_{j_2} > t_{j_1}$, the attention weights of the two will be exactly equal because their event embeddings $\mathbf{e}_{j_1}$ and $\mathbf{e}_{j_2}$ are the same. However, the impacts of the $j_2$-th and $j_1$-th events on time $t$ are not necessarily the same; i.e., when short-term events outweigh long-term ones, the impact of $t_{j_2}$ should be greater, since $t_{j_2} > t_{j_1}$.

P.2. As discussed, the relations between event types are informative and can provide an interpretation of the fundamental mechanisms of the process. Self-attention can provide such relations by averaging the attention weights of events of a certain type towards another (Zhang et al., 2020a). In comparison, we hope that the attention weights are used purely for expressiveness, and that the relations among events are provided by other modules.

To solve problem P.1., we revise the attention weight by a time-reweighting term $\exp\{a(t - t_j)\}$, where $a$ is a learnable scaling parameter. This term forces the impacts of short-term events to be greater ($a < 0$) or less ($a \ge 0$) than those of long-term events when the two time intervals are the same. Although the positional term in Zhang et al. (2020a) can also fix the problem, the exponential term slightly improves performance thanks to its further enhancement of the model's expressivity (see Section 5.3). To provide the events' relations learned by the model, as well as to avoid relying on the flexible attention weights for interpretation as discussed in P.2., we employ the type embedding $\mathbf{E}_m$ to calculate the cosine similarity of different types, and the inner product of two embedding vectors is used as another term to revise the attention weights (Zuo et al., 2020; Zhang et al., 2021). In this way, the weight of event $j$ in the attention mechanism can be written as

$$w_{j,i-1}=\mathrm{softmax}\big((\mathbf{E}_{m}^{T}\mathbf{m}_{j})^{T}(\mathbf{E}_{m}^{T}\mathbf{m}_{i-1})\exp\{a(t_{i-1}-t_{j})\}\,\phi(\mathbf{e}_{j},\mathbf{e}_{i-1})\big),\tag{12}$$

where $\mathbf{E}_m^T\mathbf{m}$ is normalized as a unit vector for all $m \in [M]$ as the type embedding, and thus the inner product is equivalent to the cosine similarity. Note that in Equation (12), when $(\mathbf{E}_m^T\mathbf{m}_j)^T(\mathbf{E}_m^T\mathbf{m}_{i-1}) = 0$, the influence of $m_j$ on $m_{i-1}$ would not be eliminated after the $\mathrm{softmax}(\cdot)$; hence, in implementation, we map $(\mathbf{E}_m^T\mathbf{m}_j)^T(\mathbf{E}_m^T\mathbf{m}_{i-1}) = 0$ to $-10^9$ to force the influence of type-$m_j$ events on $m_{i-1}$ to be zero after the softmax function. We deploy the revised attentive encoder in the Transformer, as commonly used in Zhang et al. (2020a); Zuo et al. (2020).

## 3.2 Generative Probabilistic Decoder

In the generative model, we directly model the time intervals instead of the timestamps. For the observation $\{t_j\}_{j\le i-1}$, the next observed arrival time interval is $\tau_i = t_i - t_{i-1}$, while the next sampled arrival time is $\hat{t}_i = \hat{\tau}_i + t_{i-1}$ after the sample $\hat{\tau}_i$ is obtained.
## 3.2.1 Temporal Conditional Diffusion Denoising Probabilistic Model

![6_image_0.png](6_image_0.png)

Figure 2: The workflow of the revised attentive history encoder with TCDDM as the probabilistic decoder.

Temporal Diffusion Denoising Probabilistic Decoder. For notational simplicity, we first introduce the temporal diffusion denoising decoder with no historical encoding as the condition, and the $i$-th sample $\tau_i$ is denoted by $\tau$ in that we are discussing the occurrence of a single event. Denote a single observation of an event occurrence time interval by $\tau = \tau^0 \sim q(\tau^0)$, where $\tau^0 \in \mathbb{R}^+$ and $q(\tau^0)$ is unknown, and denote the probability density function by $p_\theta(\tau^0)$, which aims to approximate $q(\tau^0)$ and allows for easy sampling. Diffusion models (Sohl-Dickstein et al., 2015a) are employed as latent variable models of the form $p_\theta(\tau^0) := \int p_\theta(\tau^{0:K})\,d\tau^{1:K}$, where $\tau^1, \ldots, \tau^K$ are latent variables. The approximate posterior $q(\tau^{1:K}|\tau^0)$,

$$q(\tau^{1:K}|\tau^{0})=\Pi_{k=1}^{K}q(\tau^{k}|\tau^{k-1});\quad q(\tau^{k}|\tau^{k-1}):={\mathcal{N}}(\tau^{k};\sqrt{1-\beta_{k}}\,\tau^{k-1},\beta_{k}),\tag{13}$$

is fixed to a Markov chain, which is called the 'forward process'. $\beta_1, \ldots, \beta_K \in (0, 1)$ are predefined parameters. The 'reverse process' is also a Markov chain, with learned Gaussian transitions starting from $p(\tau^K) = \mathcal{N}(\tau^K; 0, 1)$:

$$p_{\theta}(\tau^{k-1}|\tau^{k}):={\mathcal{N}}(\tau^{k-1};\mu_{\theta}(\tau^{k},k),\Sigma_{\theta}(\tau^{k},k)).\tag{14}$$

The likelihood is not tractable, so the parameters $\theta$ are learned to fit the data distribution by minimizing the negative log-likelihood via its variational bound (Ho et al., 2020):

$$\min_{\theta}\mathbb{E}_{q(\tau^{0})}[-\log p_{\theta}(\tau^{0})]\leq\mathbb{E}_{q}\left[\frac{1}{2\Sigma_{\theta}}\|\tilde{\mu}_{k}(\tau^{k},\tau^{0})-\mu_{\theta}(\tau^{k},k)\|^{2}\right]+C,\tag{15}$$

where $C$ is a constant which does not depend on $\theta$, $\tilde{\mu}_k(\tau^k, \tau^0) := \frac{\sqrt{\bar{\alpha}_{k-1}}\beta_k}{1-\bar{\alpha}_k}\tau^0 + \frac{\sqrt{\alpha_k}(1-\bar{\alpha}_{k-1})}{1-\bar{\alpha}_k}\tau^k$, and $\tilde{\beta}_k := \frac{1-\bar{\alpha}_{k-1}}{1-\bar{\alpha}_k}\beta_k$. The optimization objective in Equation (15) is straightforward since it tries to use $\mu_\theta$ to predict $\tilde{\mu}_k$ for every step $k$. To resemble the learning process of score matching over multiple noise scales (Song & Ermon, 2019; 2020), Equation (15) is further reparameterized as

$$\mathbb{E}_{\tau^{0},\epsilon}\left[\frac{\beta_{k}^{2}}{2\Sigma_{\theta}\alpha_{k}(1-\bar{\alpha}_{k})}\|\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{k}}\tau^{0}+\sqrt{1-\bar{\alpha}_{k}}\,\epsilon,k)\|^{2}\right],\tag{16}$$

since $\tau^k(\tau^0, \epsilon) = \sqrt{\bar{\alpha}_k}\tau^0 + \sqrt{1-\bar{\alpha}_k}\,\epsilon$ for $\epsilon \sim \mathcal{N}(0, 1)$.

Temporal Conditional Diffusion Denoising Probabilistic Decoder. We establish a temporal conditional diffusion denoising model (TCDDM) as the probabilistic decoder in GNTPP. In training, after $\mathbf{h}_{i-1}$ is obtained as the historical encoding, through a derivation similar to the previous paragraph, we can obtain the temporal conditional variant of the objective for a single event's arrival time in Equation (16) as

$$l_{i}=\mathbb{E}_{\tau_{i}^{0},\epsilon}\left[\|\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{k}}\tau_{i}^{0}+\sqrt{1-\bar{\alpha}_{k}}\,\epsilon,\mathbf{h}_{i-1},k)\|^{2}\right],\tag{17}$$

in which the technique of reweighting the different noise terms is employed. $\epsilon_\theta$, as a neural network, is conditioned on the historical encoding $\mathbf{h}_{i-1}$ and the diffusion step $k$.
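Concretely, the objective in Equation (17) amounts to a simple noise-regression step. Below is a minimal PyTorch-style sketch of one such training step (our own illustration; `eps_theta` is a hypothetical callable with the conditioning just described), which Algorithm 1 below repeats until convergence.

```python
import torch

def tcddm_step(eps_theta, tau0, h, alpha_bar):
    """One unweighted training step of Eq. (17): sample a diffusion step k and noise eps,
    then regress eps_theta on the noise injected by the forward process."""
    k = torch.randint(0, alpha_bar.shape[0], (tau0.shape[0],))
    eps = torch.randn_like(tau0)
    a = alpha_bar[k]
    tau_k = a.sqrt() * tau0 + (1.0 - a).sqrt() * eps   # forward-process sample of tau^k
    return ((eps - eps_theta(tau_k, h, k)) ** 2).mean()
```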
Our formutaion of θ is a feed-forward neural network, as $$\epsilon_{\theta}(\sqrt{\alpha_{k}}\tau_{i}^{0}+\sqrt{1-\alpha_{k}}\epsilon,\mathbf{h}_{i-1},k)$$ $$=\mathbf{W}^{(3)}(\mathbf{W}^{(2)}(\mathbf{W}_{h}^{(1)}\mathbf{h}_{i-1}+\mathbf{W}_{t}^{(1)}\tau_{i}^{\prime}+\cos(\mathbf{E}_{k}\mathbf{k}))+b^{(2)})+b^{(3)},$$ $$(18)$$ where W(1) h ∈ R D×D, W(1) t ∈ R D×1, W(2) ∈ R D×D, W(3) ∈ R 1×D, τ 0 i = √α¯kτ 0 i + √1 − α¯k, Ek is the learnable embedding matrix of step k and k is the one-hot encoding of k. In implementation, the residual block is used for fast and stable convergence. In sampling, given the historical encoding hi−1, we first sample τˆ K ifrom the standard normal distribution N (0, 1), then take it and hi−1 as the input of network θ to get the approximated noise, and generally remove the noise with different scales to recover the samples. This process is very similar to annealed Langevin dynamics in score matching methods. For inference, the prediction is based on Monte Carlo estimation. For example, when mean estimation is deployed to predict the next event arrival time interval after ti−1, we first sample a large amount of time interval from pθ(τ |hi−1) (e.g. 100 times), then use the average of sampled {τˆ (s)}1≤s≤S to estimate the mean of learned distribution, as the prediction value of next event arrival time interval, so the next arrival time is estimated as E [ti] ≈ ti−1 + 1 N PS s=1 τˆ (s). | Algorithm 1 Training for each timestamp ti > ti−1 in temporal point process in TCDDM 1: Input: Observation time interval τi and historical encoding hi−1 2: repeat 3: Initialize k ∼ Uniform(1, . . . , K) and ∼ | K | |---|-------------------------------------------------------------------| | | Input: noise τˆ for k = K to 1 do if k > 1 then z ∼ N (0, 1) else | **repeat** Initialize $k\sim$ Uniform$(1,\ldots,K)$ and $\epsilon\sim\mathcal{N}(0,1)$ Take gradient step on $$\nabla_{\theta}\|\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{k}}\tau_{i}+\sqrt{1-\bar{\alpha}_{k}}\epsilon,{\bf h}_{i-1},k)\|^{2}$$ 5: **until** converged Algorithm 2 Sampling tˆi > ti−1 via Langevin dynamics Input: noise τˆ K i ∼ N (0, 1) and historical encoding hi−1 for k = K to 1 do if k > 1 **then** z ∼ N (0, 1) else **else** $z=0$ **end if** $\hat{\tau}_{i}^{k-1}=\frac{1}{\sqrt{\alpha_{k}}}(\hat{\tau}_{i}^{k}-\frac{\beta_{k}}{\sqrt{1-\hat{\alpha}_{k}}}\epsilon_{\theta}(\hat{\tau}_{i}^{k},\mathbf{h}_{i-1},k))+\sqrt{\Sigma_{\theta}}z$ **end for** **Return:**$\hat{t}_{i}=\hat{\tau}_{i}^{0}+t_{i-1}$ ![8_image_0.png](8_image_0.png) $$(19)$$ Figure 3: Network structure of temporal conditional variational autoencoder as the probabilistic decoder. ## 3.3 Temporal Conditional Variational Autoencoder Probabilistic Model We establish a temporal conditional variational autoencoder (TCVAE) as the probabilistic decoder (Kingma & Welling, 2014; Pan et al., 2020), which consists of a variational encoder qξ(z|τi, hi−1) as a conditional Gaussian distribution N (µξ, Σξ) for approximating the prior standard Gaussian N (0, I) and a variational decoder pθ(τ |zi, hi−1) to generate arrival time samples. The network structure is given in Figure 3, where the latent Gaussian variable z ∈ R D. 
The training objective for a single event's arrival time interval is derived from the evidence lower bound and can be written as

$$\min_{\theta,\xi}D_{\mathrm{KL}}\big(q_{\xi}(z|\tau_{i},\mathbf{h}_{i-1})\,\|\,\mathcal{N}(\mathbf{0},\mathbf{I})\big)+\mathbb{E}_{\hat{\tau}_{i}\sim p_{\theta}}\left[\|\hat{\tau}_{i}-\tau_{i}\|_{2}^{2}\right]. \tag{19}$$

In the sampling process, the encoder $q_\xi(z|\tau_i, \mathbf{h}_{i-1})$ is abandoned. The decoder $p_\theta(\tau|z_i, \mathbf{h}_{i-1})$ transforms a sample $z_i$ generated from $\mathcal{N}(\mathbf{0}, \mathbf{I})$ into the target sample $\hat{\tau}_i$, conditioned on $\mathbf{h}_{i-1}$.

## 3.3.1 Temporal Conditional Generative Adversarial Network Probabilistic Model

Our temporal conditional generative adversarial network (TCGAN) decoder is mostly based on the Wasserstein GAN for TPPs (Arjovsky et al., 2017; Xiao et al., 2017a). The probabilistic generator $p_\theta(\tau|z, \mathbf{h}_{i-1})$ is trained via an adversarial process in which another network, the discriminator $d_\xi(\tau|\mathbf{h}_{i-1})$, is trained to map samples to a scalar, maximizing the Wasserstein distance between the distribution of generated samples $\hat{\tau}_i$ and the distribution of observed samples $\tau_i$. The final objective to optimize in TCGAN is

$$\min_{\theta}\max_{\xi}\mathbb{E}_{\tau_{i}\sim p(\tau|\mathbf{h}_{i-1})}\left[d_{\xi}(\tau_{i}|\mathbf{h}_{i-1})-d_{\xi}(\hat{\tau}_{i}|\mathbf{h}_{i-1})\right]-\eta\left|\frac{d_{\xi}(\tau_{i}|\mathbf{h}_{i-1})-d_{\xi}(\hat{\tau}_{i}|\mathbf{h}_{i-1})}{|\hat{\tau}_{i}-\tau_{i}|}-1\right|, \tag{20}$$

where the first term maximizes the distance, and the second adds a Lipschitz constraint as a regularization term, as proposed in the original Wasserstein GAN (Arjovsky et al., 2017), with $\eta$ as the loss weight. The formulations of $p_\theta(\tau|z, \mathbf{h}_{i-1})$ and $d_\xi(\tau|\mathbf{h}_{i-1})$ are similar to the variational decoder in TCVAE, both transforming $D$-dimensional random variables into one dimension. After training, $p_\theta(\tau|z, \mathbf{h}_{i-1})$ can be used for sampling in the same way as the variational decoder in TCVAE.

## 3.3.2 Temporal Conditional Continuous Normalizing Flow Probabilistic Model

As classical generative models, normalizing flows (Papamakarios et al., 2019) are constructed from a series of invertible equi-dimensional mappings. However, in TPPs the input data sample is one-dimensional time, so the flexibility and expressive power of neural networks are hard to harness. Therefore, we choose to use a temporal conditional continuous normalizing flow (TCCNF) (Mehrasa et al., 2020), which is based on Neural ODEs (Chen et al., 2019; 2021). Note that the $t$ term of the Neural ODE is replaced here by $k$ to avoid confusion. The TCCNF defines the distribution through the following dynamical system:

$$\tau_{i}=F_{\theta}(\tau(k_{0})|\mathbf{h}_{i-1})=\tau(k_{0})+\int_{k_{0}}^{k_{1}}f_{\theta}(\tau(k),k|\mathbf{h}_{i-1})\,dk, \tag{21}$$

where $\tau(k_0) \sim \mathcal{N}(0, 1)$, and $f_\theta$ is implemented with the same structure as the variational decoder in TCVAE.
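Sampling from TCCNF amounts to pushing a Gaussian draw through this dynamics. The sketch below uses a fixed-step Euler solver purely for illustration; in practice an adaptive Neural-ODE solver would be used, and the function signature and step count are our assumptions.

```python
import torch

def tccnf_sample(f_theta, h_prev, k0=0.0, k1=1.0, n_steps=100):
    """Integrate Eq. (21) forward from tau(k0) ~ N(0, 1)."""
    tau = torch.randn(h_prev.shape[0], 1)
    dk = (k1 - k0) / n_steps
    for s in range(n_steps):
        k = torch.full((h_prev.shape[0], 1), k0 + s * dk)
        tau = tau + f_theta(tau, k, h_prev) * dk   # Euler step of the dynamics
    return tau
```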
The invertibility of $F_\theta(\tau(k_0)|\mathbf{h}_{i-1})$ allows us not only to conduct fast sampling, but also to easily optimize the parameter set $\theta$ by minimizing the negative log-likelihood of a single time sample:

$$\min_{\theta}\left\{-\log p(\tau(k_{0}))+\int_{k_{0}}^{k_{1}}\mathrm{Tr}\left(\frac{\partial f_{\theta}(\tau(k),k|\mathbf{h}_{i-1})}{\partial\tau(k)}\right)dk\,\Big|_{\tau=\tau_{i}}\right\}. \tag{22}$$

## 3.3.3 Temporal Conditional Noise Score Network Probabilistic Model

Finally, we establish the probabilistic decoder via a temporal conditional noise score network (TCNSN), a score-matching method which aims to learn the gradient field of the target distribution (Song & Ermon, 2019; 2020). Specifically, given a sequence of noise levels $\{\sigma_k\}_{k=1}^K$ with noise distribution $q_{\sigma_k}(\tilde{\tau}_i|\tau_i, \mathbf{h}_{i-1}) = \mathcal{N}(\tilde{\tau}_i; \tau_i, \sigma_k^2)$, the training loss for a single arrival time at each noise level $\sigma_k$ is

$$l_{i}=\frac{1}{2}\|s_{\theta}(\tilde{\tau}_{i};\sigma_{k}|\mathbf{h}_{i-1})-\nabla_{\tilde{\tau}_{i}}\log q_{\sigma_{k}}(\tilde{\tau}_{i}|\tau_{i},\mathbf{h}_{i-1})\|_{2}^{2}, \tag{23}$$

where $s_\theta$ approximates the gradient of the target distribution and has the same formulation as the variational decoder in TCVAE. According to Song & Ermon (2019), the weighted training objective can be written as

$$\min_{\theta}\frac{\sigma_{k}^{2}}{2}\left\|\frac{s_{\theta}(\tilde{\tau}_{i};\sigma_{k}|\mathbf{h}_{i-1})}{\sigma_{k}}+\frac{\tilde{\tau}_{i}-\tau_{i}}{\sigma_{k}^{2}}\right\|_{2}^{2}, \tag{24}$$

where $\tilde{\tau}_i \sim q_{\sigma_k}(\tilde{\tau}_i|\tau_i, \mathbf{h}_{i-1})$. In the sampling process, Langevin dynamics (Welling & Teh, 2011) is used: the sample is first drawn from a Gaussian distribution, and then iteratively updated by

$$\hat{\tau}_{i}^{k}=\hat{\tau}_{i}^{k-1}+\alpha_{k}s_{\theta}(\hat{\tau}_{i}^{k-1};\sigma_{k}|\mathbf{h}_{i-1})+\sqrt{2\alpha_{k}}\,z \tag{25}$$

across the different noise levels, where $z \sim \mathcal{N}(0, 1)$.

## 3.4 Mark Modeling

When there exists more than one event type, another predictive target is which type of event is most likely to happen given the historical observations. This task is regarded as categorical classification. Based on the assumption that the mark and time distributions are conditionally independent given the historical embedding (Shchur et al., 2021; Lin et al., 2021), we first transform the historical encoding $\mathbf{h}_{i-1}$ into the discrete distribution's logit scores as

$$\kappa(\mathbf{h}_{i-1})=\mathrm{logit}(\hat{m}_{i}), \tag{26}$$

where $\mathrm{logit}(\hat{m}_i) \in \mathbb{R}^M$ and $\kappa: \mathbb{R}^D \rightarrow \mathbb{R}^M$. Then, the *softmax* function is used to transform the logit scores into the categorical distribution:

$$\mathrm{p}(\hat{m}_{i}=m|\theta(\mathbf{h}_{i-1}))=\mathrm{softmax}(\mathrm{logit}(\hat{m}_{i}))_{m}, \tag{27}$$

where $\mathrm{softmax}(\mathrm{logit}(\hat{m}_i))_m$ denotes the $m$-th element of the *softmax* output. In training, a cross-entropy loss for categorical classification, $\mathrm{CE}_i = \mathrm{CE}(\mathrm{p}(m_i|\mathbf{h}_{i-1}))$, is added to the loss term, leading to the final loss for a single event:

$$L_{i}=l_{i}+\mathrm{CE}_{i}. \tag{28}$$
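A minimal sketch of this mark head follows; taking $\kappa$ to be a single linear map and the class/method names below are our illustrative assumptions.

```python
import torch.nn as nn

class MarkHead(nn.Module):
    def __init__(self, d, num_marks):
        super().__init__()
        self.kappa = nn.Linear(d, num_marks)   # kappa: R^D -> R^M, Eq. (26)

    def loss(self, h_prev, marks):
        # marks: LongTensor of observed mark indices, shape (B,)
        logits = self.kappa(h_prev)
        # cross_entropy applies the softmax of Eq. (27), giving CE_i of Eq. (28)
        return nn.functional.cross_entropy(logits, marks)

    def predict(self, h_prev):
        # most likely next mark under the categorical distribution
        return self.kappa(h_prev).softmax(dim=-1).argmax(dim=-1)
```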
## 4 Related Work

**Deep Neural Temporal Point Process.** Du et al. (2016) first employed RNNs as history encoders, with a variant of the Gompertz distribution as the probabilistic decoder. Follow-up works that combine deep neural networks with TPPs have achieved great progress (Lin et al., 2021; Shchur et al., 2021). For example, in terms of history encoders, a continuous-time model (Mei & Eisner, 2017) used recurrent units and introduced a temporally continuous memory cell. Recently, attention-based encoders (Zhang et al., 2020a; Zuo et al., 2020) have been established as the new state-of-the-art history encoders. For probabilistic decoders, Omi et al. (2019) fit the cumulative hazard function, with its derivative as the intensity function, while Xiao et al. (2017b) and Shchur et al. (2020a) used a single Gaussian and a mixture of log-normals, respectively, to approximate the target distribution. For event dependency discovery, Mei et al. (2022) explicitly modeled dependencies between event types in the self-attention layer, and Zhang et al. (2020b) implicitly captured the underlying event interdependency by fitting a neural point process.

**Probabilistic Generative TPP Models.** A line of work has deployed recent progress in deep generative models to TPPs. For example, reinforcement learning approaches, which are similar to adversarial settings in that two networks are used, one generating samples and the other giving rewards, were proposed sequentially (Upadhyay et al., 2018; Li et al., 2018). Adversarial and discriminative learning (Yan et al., 2018; Xiao et al., 2017a) has been proposed to further improve the predictive ability of probabilistic decoders. Noise-contrastive learning, which maximizes the difference in probabilistic distribution between random noise and true samples, has also proved effective in learning TPPs (Guo et al., 2018; Mei et al., 2020).

## 5 Experiments

## 5.1 Experimental Setup

## 5.1.1 Implementation Description

![10_image_0.png](10_image_0.png)

Figure 4: The hierarchical description of our experimental framework with modules integrated in GNTPP.

We first introduce our experimental framework for model comparison, as shown in Figure 4. The history encoder module includes GRU, LSTM, and Transformer encoders, with and without our revised attention. In the probabilistic decoder module, the probabilistic models in EDTPP (Lin et al., 2021) with closed-form likelihood, including Gaussian, Gompertz, Log-norm, Feed-forward Network, and Weibull, are implemented and integrated into our code, and the discussed neural generative models classified under GNTPP, including TCDDM, TCVAE, TCGAN, TCCNF, and TCNSN, are implemented.

## 5.1.2 Datasets

We use a complex synthetic dataset simulated by a Hawkes process with five event types and different impact functions (Appendix B.1), and four real-world datasets containing event data from various domains: MOOC (user interactions with an online course system), Retweet (posts in social media), Stack Overflow (question-answering badges on the website), and Yelp (check-ins to restaurants). The data statistics are shown in Table 2. We clamp the maximum length of sequences to 1000. Each dataset is split into a 20% ratio for testing and an 80% ratio for training, with 20% of the training set used as a validation set for parameter tuning. The whole time scale $[0, T_{\max}]$ is normalized to $[0, 50]$ for numerical stability, where $T_{\max}$ is the maximum observed event occurrence time in the training set; a minimal sketch of this rescaling is given below. The detailed description is given in Appendix B.1.
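The rescaling just described takes only a few lines; treating each dataset as a list of per-sequence event-time lists is our assumption about the container format.

```python
def rescale_times(train_seqs, all_seqs, scale=50.0):
    """Map absolute event times from [0, T_max] to [0, scale], with T_max
    taken from the training split only."""
    t_max = max(t for seq in train_seqs for t in seq)
    return [[t * scale / t_max for t in seq] for seq in all_seqs]
```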
Table 2: Dataset statistics.

| Dataset | # of sequences | Mean length | Min length | Max length | # of event types |
|---|---|---|---|---|---|
| MOOC | 7047 | 56.28 | 4 | 493 | 97 |
| Retweet | 24000 | 108.75 | 50 | 264 | 3 |
| Stack Overflow | 6633 | 72.42 | 41 | 736 | 22 |
| Yelp | 300 | 717.15 | 424 | 2868 | 1 |
| Synthetic | 6000 | 580.36 | 380 | 835 | 5 |

## 5.1.3 Protocol

In the training process, the hyper-parameters of every model are tuned in the ranges *'learning rate'*: $\{1\times 10^{-3}, 5\times 10^{-4}, 1\times 10^{-4}\}$, *'embedding size'*: $\{8, 16, 32\}$, and *'layer number'*: $\{1, 2, 3\}$, where '*embedding size*' is the dimension of the historical encoding, i.e., $D$. The hyper-parameters are tuned on the validation set. The maximum number of training epochs is set to 100, and early stopping is used based on the loss on the validation set. The reported metrics are the results of the models trained to the lowest loss, except for the 'TCGAN' probabilistic decoder, whose parameters are chosen from the final epoch. The mean and standard deviation of each metric are obtained from 5 experiments with different random seeds.

## 5.1.4 Metrics

To evaluate the predictive performance of each method, we deploy the commonly used 'mean absolute percentage error' (MAPE) for measuring the prediction of the next arrival time (Zhang et al., 2020a), and 'top-1 accuracy' (Top1_ACC) and 'top-3 accuracy' (Top3_ACC) to measure the prediction of the next event type (Lin et al., 2021). Note that there is only one event type in Yelp, so 'Top3_ACC' is not meaningful there. However, the commonly used negative log-likelihood has no closed form in deep generative models. Therefore, we use another two metrics to evaluate goodness-of-fit. The first is the 'continuous ranked probability score' (CRPS), which is widely used in time series prediction (Rasul et al., 2021; Ben Taieb, 2022) for measuring the compatibility of a cumulative distribution function (CDF) $F$ with an observation $t$ as $\mathrm{CRPS}(F, t) = \int_{\mathbb{R}} (F(y) - \mathbb{I}\{t \le y\})^2 \, dy$. Regarding the model as fitting the distribution of the next event arrival time (Jordan et al., 2018), we can calculate it for a single timestamp using the empirical CDF as

$$\mathrm{CRPS}(\hat{F},t_{i})=\frac{1}{S}\sum_{k=1}^{S}|\hat{t}_{i,k}-t_{i}|-\frac{1}{2S^{2}}\sum_{k=1}^{S}\sum_{j=1}^{S}|\hat{t}_{i,k}-\hat{t}_{i,j}|, \tag{29}$$

where $\{\hat{t}_{i,k}\}_{1\le k\le S}$ are $S$ samples drawn from $p_\theta(t|\mathbf{h}_{i-1})$ and $t_i$ is the ground-truth observation. Equation (29) reflects that CRPS evaluates both the models' sampling quality as predictive performance (the first term) and their sampling diversity (the second term). The other metric is the 'QQPlot-deviation' (QQP-Dev) (Xiao et al., 2017a), which is calculated by first computing the empirical cumulative hazard function $\hat{\Lambda}^*_\theta(t)$, whose distribution should be exponential with parameter 1. Therefore, the deviation of its QQ plot vs. $\mathrm{Exp}(1)$ is calculated as the metric 'QQP-Dev'. Appendix B.2 gives details; a sketch of the empirical CRPS follows.
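A minimal sketch of the empirical CRPS in Equation (29), assuming the $S$ samples for one timestamp have already been drawn into a 1-D tensor:

```python
import torch

def crps(samples, t_obs):
    """Empirical CRPS of Eq. (29); `samples` holds S draws from
    p_theta(t | h_{i-1}), `t_obs` is the observed t_i."""
    S = samples.shape[0]
    accuracy = (samples - t_obs).abs().mean()   # first term: predictive accuracy
    diversity = (samples[:, None] - samples[None, :]).abs().sum() / (2 * S ** 2)
    return accuracy - diversity                 # second term rewards sample diversity
```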
## 5.2 Performance Comparison

We choose the following methods, whose probabilistic decoders are not deep generative models, as baselines for performance comparison:

(1) RMTPP (Lin et al., 2021), the extension of RMTPP (Du et al., 2016), whose probabilistic decoder is a mixture of Gompertz distributions.
(2) LogNorm (Shchur et al., 2020a), which uses a log-normal distribution as its decoder.
(3) ERTPP (Lin et al., 2021), the generalization of ERTPP (Xiao et al., 2017b), with a mixture of Gaussians as its decoder.
(4) FNNInt (Omi et al., 2019), which formulates the cumulative hazard function as a fully-connected feed-forward network, with its derivative w.r.t. time as the intensity.
(5) WeibMix (Marín et al., 2005; Lin et al., 2021), with a Weibull mixture distribution as its decoder.
(6) SAHP (Zhang et al., 2020a), whose conditional intensity function has an exponentially decaying formulation, with a softplus as the final nonlinear activation function.
(7) THP (Zuo et al., 2020), whose intensity function reads $\lambda^*(\tau) = \mathrm{softplus}(\alpha\frac{\tau}{t_{i-1}} + \mathbf{W}_h\mathbf{h}_{i-1} + b)$.
(8) Deter, a baseline for the MAPE and ACC metrics, which is a fully deterministic model with the probabilistic model replaced by a linear projection head whose weights and biases are all constrained to be positive, trained with an MSE loss as a regression task.

Table 3: Performance comparison on the four real-world datasets.

**MOOC**

| Methods | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| Deter | 17.4356±6.4756 | - | - | 0.3894±0.6027 | 0.70445±0.3361 |
| RMTPP | 67.2866±0.2321 | 37.1259±0.2539 | 1.9677±0.0002 | 0.4069±0.0130 | 0.7189±0.0131 |
| LogNorm | 70.8006±0.3010 | 36.2675±0.6712 | 1.9678±0.0006 | 0.3992±0.0012 | 0.7155±0.0011 |
| ERTPP | 94.3711±0.0713 | 24.4113±0.3728 | 1.9571±0.0006 | 0.3841±0.0189 | 0.7043±0.0165 |
| WeibMix | 75.2570±1.3158 | 18.1352±4.0137 | 1.9776±0.0000 | 0.3409±0.0293 | 0.6613±0.0295 |
| FNNInt | 66.5765±2.4615 | - | 1.3780±0.0067 | 0.4203±0.0035 | 0.7310±0.0024 |
| SAHP | 43.0847±0.9447 | - | 1.0336±0.0007 | 0.3307±0.0082 | 0.6527±0.0138 |
| THP | 41.6676±1.1192 | - | 1.0207±0.0001 | 0.3287±0.0097 | 0.6531±0.0109 |
| TCDDM | 23.5559±0.3098 | 0.1468±0.0000 | 1.0369±0.0000 | 0.4308±0.0069 | 0.7408±0.0044 |
| TCVAE | 19.3336±1.4021 | 0.1465±0.0003 | 1.0369±0.0001 | 0.3177±0.0066 | 0.6282±0.0032 |
| TCGAN | 24.4184±4.7497 | 0.1470±0.0001 | 1.0352±0.0001 | 0.4179±0.0049 | 0.7270±0.0005 |
| TCCNF | 26.3197±1.7119 | 0.1636±0.0044 | 1.0578±0.0106 | 0.4297±0.0105 | 0.7374±0.0100 |
| TCNSN | 80.8541±4.7017 | 1.3668±0.0371 | 1.3345±0.0011 | 0.3292±0.0115 | 0.6516±0.0102 |

**Retweet**

| Methods | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| Deter | 12.7697±1.1566 | - | - | 0.5745±0.0001 | 1.0000±0.0000 |
| RMTPP | 65.1189±1.2747 | 0.3282±0.0075 | 1.7006±0.0035 | 0.6086±0.0001 | 1.0000±0.0000 |
| LogNorm | 75.3065±0.0000 | 0.4579±0.0803 | 1.7091±0.0101 | 0.6003±0.0063 | 1.0000±0.0000 |
| ERTPP | 71.5601±0.0000 | 0.3842±0.0144 | 1.7283±0.0033 | 0.6055±0.0042 | 1.0000±0.0000 |
| WeibMix | 72.5045±0.4957 | 0.3795±0.0043 | 1.9776±0.0001 | 0.6058±0.0005 | 1.0000±0.0000 |
| FNNInt | 22.7489±3.8260 | - | 1.0318±0.0749 | 0.5348±0.0367 | 1.0000±0.0000 |
| SAHP | 15.5689±0.0239 | - | 1.0286±0.0030 | 0.6032±0.0001 | 1.0000±0.0000 |
| THP | 16.4464±0.0112 | - | 1.0242±0.0014 | 0.5651±0.0003 | 1.0000±0.0000 |
| TCDDM | 12.6058±0.5550 | 0.2076±0.0000 | 1.0327±0.0111 | 0.6274±0.0075 | 1.0000±0.0000 |
| TCVAE | 12.2332±0.6755 | 0.1848±0.0005 | 1.0443±0.0018 | 0.5825±0.0213 | 1.0000±0.0000 |
| TCGAN | 15.4630±1.5843 | 0.2084±0.0134 | 1.0356±0.0002 | 0.6263±0.0088 | 1.0000±0.0000 |
| TCCNF | 13.9865±1.9811 | 0.1625±0.0092 | 1.0598±0.0022 | 0.5965±0.0105 | 1.0000±0.0000 |
| TCNSN | 63.3995±1.2366 | 1.1954±0.0196 | 1.3291±0.0017 | 0.5845±0.0132 | 1.0000±0.0000 |
**Stack Overflow**

| Methods | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| Deter | 4.7518±0.0658 | - | - | 0.5302±0.0010 | 0.8327±0.0014 |
| RMTPP | 7.6946±1.3470 | 6.2844±0.3374 | 1.9317±0.0020 | 0.5343±0.0013 | 0.8555±0.0073 |
| LogNorm | 13.3008±1.2214 | 6.3377±0.2380 | 1.9313±0.0017 | 0.5335±0.0019 | 0.8542±0.0064 |
| ERTPP | 17.3008±1.5724 | 4.5747±0.0947 | 1.9299±0.0016 | 0.5316±0.0028 | 0.8526±0.0044 |
| WeibMix | 7.6260±1.0663 | 4.3028±0.5535 | 1.9776±0.0000 | 0.5327±0.0011 | 0.8454±0.0056 |
| FNNInt | 6.1583±0.0952 | - | 1.5725±0.0065 | 0.5336±0.0009 | 0.8432±0.0010 |
| SAHP | 5.5246±0.0271 | - | 1.5175±0.0010 | 0.5235±0.0002 | 0.8278±0.0003 |
| THP | 5.6331±0.0413 | - | 1.5033±0.0016 | 0.5310±0.0003 | 0.8508±0.0001 |
| TCDDM | 4.9947±0.0366 | 0.4375±0.0163 | 1.5711±0.0043 | 0.5371±0.0004 | 0.8693±0.0010 |
| TCVAE | 5.1397±0.0626 | 0.5129±0.0082 | 1.5320±0.0057 | 0.5398±0.0022 | 0.8418±0.0112 |
| TCGAN | 5.0874±0.1527 | 0.5458±0.0254 | 1.5178±0.0095 | 0.5340±0.0048 | 0.8481±0.0200 |
| TCCNF | 6.3022±0.0281 | 0.4259±0.0005 | 1.6319±0.0007 | 0.5428±0.0003 | 0.8721±0.0003 |
| TCNSN | 29.4333±2.4937 | 0.8350±0.0035 | 1.6611±0.0004 | 0.5352±0.0012 | 0.8538±0.0095 |

**Yelp**

| Methods | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| Deter | 15.9814±2.4486 | - | - | 1.0000±0.0000 | - |
| RMTPP | 13.6576±0.1261 | 0.0657±0.0005 | 1.3142±0.0055 | 1.0000±0.0000 | - |
| LogNorm | 32.1609±0.3978 | 0.0646±0.0018 | 1.2840±0.0395 | 1.0000±0.0000 | - |
| ERTPP | 34.8405±0.0000 | 0.0673±0.0014 | 1.2632±0.0087 | 1.0000±0.0000 | - |
| WeibMix | 34.8391±0.0019 | 0.0680±0.0000 | 1.9776±0.0000 | 1.0000±0.0000 | - |
| FNNInt | 16.2753±0.5204 | - | 1.2579±0.0390 | 1.0000±0.0000 | - |
| SAHP | 12.9830±0.0474 | - | 1.0755±0.0004 | 1.0000±0.0000 | - |
| THP | 14.4189±0.0474 | - | 1.0775±0.0004 | 1.0000±0.0000 | - |
| TCDDM | 10.8426±0.0253 | 0.0570±0.0001 | 1.1728±0.0082 | 1.0000±0.0000 | - |
| TCVAE | 9.9204±0.2895 | 0.0631±0.0008 | 1.1732±0.0001 | 1.0000±0.0000 | - |
| TCGAN | 12.0471±0.7363 | 0.0608±0.0022 | 1.1170±0.0275 | 1.0000±0.0000 | - |
| TCCNF | 13.4562±0.2129 | 0.0575±0.0008 | 1.2355±0.0034 | 1.0000±0.0000 | - |
| TCNSN | 43.9613±2.1338 | 0.4274±0.0131 | 1.5855±0.0055 | 1.0000±0.0000 | - |

The methods (1) ∼ (4) have closed-form expectations. The means of FNNInt, SAHP, and THP are obtained by numerical integration, and the means of the GNTPP decoders are obtained by Monte Carlo integration thanks to their flexible sampling. Note that it is possible to sample events from SAHP and THP using Ogata's thinning method (Ogata, 1981), since the intensity of both methods is monotonically decreasing between events, and samples can also be drawn from the FNNInt model using numerical root-finding (Shchur et al., 2020a); however, these sampling procedures, designed specifically for the three models, have not yet been developed in our framework. Therefore, flexible sampling is not available for FNNInt, SAHP, and THP (Table 1), and we do not report their 'CRPS'. In all the generative methods, the trick of **log-normalization** is used: the input samples are first normalized as $\frac{\log\tau - \mathrm{Mean}(\log\tau)}{\mathrm{Var}(\log\tau)}$ during training and rescaled back by $\exp(\log\tau\cdot\mathrm{Var}(\log\tau) + \mathrm{Mean}(\log\tau))$, to make sure the sampled time intervals are positive. For a fair comparison, we always use the revised attentive encoder, a variant of the Transformer, to obtain the historical encodings. We conclude from the experimental results in Table 3 that:

- All the established generative methods show comparable effectiveness and feasibility in TPPs, except TCNSN as a score-matching method.
- TCDDM, TCVAE, and TCGAN usually show good performance in next arrival time prediction, compared with the diffusion decoder.
- In spite of its good performance, as a continuous model, TCCNF is extremely time-consuming, with an unaffordable time complexity, as shown in Appendix B.4.
- By using numerical integration to obtain the estimated expectations of SAHP and THP, we find that they can also reach 'MAPE' comparable to the generative decoders. However, the two models do not provide flexible sampling, i.e., time interval samples cannot be flexibly drawn from the learned conditional distribution.
- In terms of 'CRPS' and 'QQP_Dev', which evaluate the models' ability to fit arrival times, the generative decoders still outperform the others. For 'QQP_Dev', SAHP and THP show very competitive fitting ability thanks to their expressive formulations of the intensity function.

For the Synthetic dataset, results are given in Appendix B.3. In summary, the empirical results show that the proposed generative neural temporal point process successfully harnesses and demonstrates the power of deep generative models in modeling temporal point processes.

**Further Discussion on Generative Models.** Since TCDDM, TCCNF, and TCNSN can all be classified as score-based methods according to Song et al. (2021b), where they are described as different stochastic differential equations, a question arises: why does only TCNSN fail to model the temporal point process effectively? Following earlier work (Song et al., 2021b; Song & Ermon, 2019; 2020), the continuous form of the temporal conditional score-matching model is given by the stochastic differential equation (SDE)

$$d\tau=\sqrt{\frac{d[\sigma^{2}(k)]}{dk}}\,dw, \tag{30}$$

where $w$ is a standard Wiener process. It is a variance-exploding process because the variance goes to infinity as $k \rightarrow +\infty$. In comparison, the forward process of the temporal conditional diffusion model can be regarded as a variance-preserving one:

$$d\tau=-\frac{1}{2}\sqrt{1-\beta(k)}\,\tau\,dk+\sqrt{\beta(k)}\,dw. \tag{31}$$

And the temporal conditional continuous normalizing flow is the deterministic process with $dw = 0$:

$$d\tau=f_{\theta}(\tau,k)\,dk, \tag{32}$$

where $f_\theta$ is a learnable neural network. Note that we omit the conditional notation in single-event modeling for simplicity. In the reverse process, which is used for sampling (or denoising), these three models first sample $\tau_K \sim \mathcal{N}(0, 1)$; $\tau_K$ is denoised by the learnable score function $\epsilon_\theta$ in TCDDM and TCNSN, or by the invertible process in TCCNF, and $\tau_0$, the output of the final stage of the process, is generated as the time interval sample. Owing to the variance-exploding property of TCNSN, in the reverse process shown in Figure 5, the variance of the distribution first becomes excessively large. As a result, at the later small-variance scales the model cannot recover the input signals and distributions, because of the high variance of the early stage. In comparison, the variance in the sampling dynamics of TCDDM and TCCNF stays stable, and the learned distributions gradually approach the input over the iterations of the denoising process. This variance behavior can be checked numerically with the short simulation sketched below.
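The following sketch simulates the forward dynamics of Equations (30) and (31) with Euler-Maruyama steps; the schedules $\sigma(k) = \sigma_{\max} k$ and $\beta(k) = \beta_{\max} k$ are illustrative assumptions rather than the ones used in our experiments.

```python
import torch

def forward_variances(tau0, n=1000, sigma_max=10.0, beta_max=0.99):
    """Simulate the VE SDE (Eq. 30) and the VP-style SDE (Eq. 31) over k in [0, 1]."""
    dk = 1.0 / n
    ve, vp = tau0.clone(), tau0.clone()
    for s in range(n):
        k = (s + 1) * dk
        dw = torch.randn_like(tau0) * dk ** 0.5
        # Eq. (30): d[sigma^2(k)]/dk = 2 * sigma_max^2 * k for sigma(k) = sigma_max * k
        ve = ve + (2 * sigma_max ** 2 * k) ** 0.5 * dw
        beta = beta_max * k
        vp = vp - 0.5 * (1 - beta) ** 0.5 * vp * dk + beta ** 0.5 * dw   # Eq. (31)
    return ve.var(), vp.var()
```

Running `forward_variances(torch.randn(10000, 1))` shows the VE variance growing to roughly $\sigma_{\max}^2$ while the VP variance stays of order one, which is the behavior discussed above.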
## 5.3 Advantages of Revised Encoders

In this part, we conduct an empirical study to demonstrate the better expressivity of our revised attentive encoder. We first fix the probabilistic decoder as the diffusion decoder and conduct experiments with different history encoders, including our revised attentive (Rev-ATT), self-attentive (ATT), and LSTM encoders, to demonstrate the advantages of the revised attention in expressiveness.

![14_image_0.png](14_image_0.png)

Figure 5: The empirical distribution of the sample-generating dynamics of TCDDM, TCCNF, and TCNSN. The visualization is conducted on the Stack Overflow dataset with 5000 samples, with the iteration steps set to 100 in TCDDM and 1000 in TCNSN. We regard the time scale as [0, 0.9] and show the dynamics of the distribution change of the reverse sampling process at different discrete time points. As demonstrated, the variance of TCNSN is much larger than the others, which prevents the model from recovering the distribution of the input samples. The **log-normalization** trick is used to force the intermediate samples to be positive, while σ is calculated with unnormalized latent variables for a consistent order of numerical values.

Table 4: Comparison of different history encoders.

**MOOC**

| Encoders | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| LSTM | 23.3562±0.0076 | 0.1468±0.0000 | 1.0369±0.0000 | 0.4232±0.0004 | 0.7279±0.0001 |
| ATT | 23.3559±0.0283 | 0.1468±0.0000 | 1.0369±0.0000 | 0.4228±0.0003 | 0.7275±0.0000 |
| Rev-ATT | 23.3559±0.3098 | 0.1468±0.0000 | 1.0369±0.0000 | 0.4308±0.0069 | 0.7408±0.0044 |

**Retweet**

| Encoders | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| LSTM | 16.3525±0.0237 | 0.2079±0.0001 | 1.0521±0.0012 | 0.6083±0.0002 | 1.0000±0.0000 |
| ATT | 16.3160±0.0397 | 0.2077±0.0001 | 1.0469±0.0002 | 0.6083±0.0001 | 1.0000±0.0000 |
| Rev-ATT | 15.6058±0.5550 | 0.2076±0.0001 | 1.0327±0.0111 | 0.6274±0.0075 | 1.0000±0.0000 |

**Stack Overflow**

| Encoders | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| LSTM | 5.0381±0.0055 | 0.4502±0.0005 | 1.5737±0.0006 | 0.5337±0.0001 | 0.8626±0.0001 |
| ATT | 5.0285±0.0290 | 0.4502±0.0012 | 1.5683±0.0013 | 0.5326±0.0002 | 0.8632±0.0001 |
| Rev-ATT | 4.9947±0.0366 | 0.4375±0.0163 | 1.5711±0.0043 | 0.5371±0.0004 | 0.8693±0.0010 |

**Yelp**

| Encoders | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| LSTM | 10.9082±0.0387 | 0.0571±0.0001 | 1.1792±0.0003 | 1.0000±0.0000 | - |
| ATT | 10.9119±0.0188 | 0.0571±0.0001 | 1.1792±0.0002 | 1.0000±0.0000 | - |
| Rev-ATT | 10.8426±0.0253 | 0.0570±0.0001 | 1.1728±0.0082 | 1.0000±0.0000 | - |

Table 4 shows the advantages of the revision: the revised attention outperforms the others on most metrics. Results on the Synthetic dataset are shown in Appendix B.3. The revised attentive encoder achieves overall improvements by a small margin. The complete empirical study (Lin et al., 2021) has illustrated that the performance gain brought by history encoders is very small, and our revision further brings tiny improvements. Second, we give the visualization shown in Figure 6 of the events' relations of similarity obtained by $E_m^T E_m$, as discussed in Section 3.1.

![14_image_1.png](14_image_1.png)

Figure 6: The relations of similarity between event types of Stack Overflow, inferred by Rev-ATT + TCDDM. Rows are arranged in the same order as columns.
It shows that the effects of some pairs of event types are relatively significant, with high absolute values of event similarity, such as (Stellar Question, *Great Answer*) and (*Great Question*, Constituent). This indicates the statistical co-occurrence of these pairs of event types within a sequence.

## 5.4 Hyperparameter Sensitivity Analysis

Several hyper-parameters affect the model performance, and in this part we explore their impacts. We conduct experiments with different '*embedding size*' (i.e., $D$) and '*layer number*'. Partial results for TCDDM are given in Figures 7 and 8, and details are shown in Appendix B.5.

![15_image_0.png](15_image_0.png)

Figure 7: Change of performance with the layer number of TCDDM on MOOC.

Figure 8: Change of performance with the embedding size of TCDDM on MOOC.

The MAPE metric is more sensitive than CRPS to changes in the model's hyperparameters. On the MOOC dataset, a large 'layer number' or 'embedding size' is not beneficial to predictive performance.

## 6 Conclusion

We have presented GNTPP, a series of deep neural temporal point process (TPP) models that integrate deep generative models into the neural temporal point process and revise the attentive mechanisms to encode the history of observations. GNTPP improves the predictive performance of TPPs and demonstrates good fitting ability; its feasibility and effectiveness have been verified by empirical studies. The experimental results also show the good expressiveness of our revised attentive encoders, with the events' relations provided. A complete framework with this series of methods is integrated into our code, and we hope the fair empirical study and easy-to-use code framework can contribute to advancing research on deep neural temporal point processes in the future.

## Acknowledgement

This work is supported in part by the National Natural Science Foundation of China, Geometric Deep Learning and Applications in Proteomics-Based Cancer Diagnosis (No. U21A20427). We thank all the reviewers at TMLR, who were responsible, careful, and professional, for their valuable and constructive comments.

## References

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN, 2017.

E. Bacry, M. Bompaire, S. Gaïffas, and S. Poulsen. tick: a Python library for statistical learning, with a particular emphasis on time-dependent modeling. *ArXiv e-prints*, July 2017.

Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-DPM: an analytic estimate of the optimal reverse variance in diffusion probabilistic models, 2022. URL https://arxiv.org/abs/2201.06503.

Souhaib Ben Taieb. Learning quantile functions for temporal point processes with recurrent neural splines. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, pp. 3219–3241. PMLR, 28–30 Mar 2022. URL https://proceedings.mlr.press/v151/ben-taieb22a.html.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations, 2019.

Ricky T. Q. Chen, Brandon Amos, and Maximilian Nickel. Neural spatio-temporal point processes, 2021.

D. J. Daley and D. Vere-Jones. *An Introduction to the Theory of Point Processes*, volume 1. Springer-Verlag New York, 2003.

Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth, 2021.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song.
Recurrent marked temporal point processes: Embedding event history to vector. In *In Proceedings of the 22nd ACM* SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016. Michael Eichler, Rainer Dahlhaus, and Johannes Dueck. Graphical modeling for multivariate hawkes processes with nonparametric link functions, 2016. Ruocheng Guo, Jundong Li, and Huan Liu. Initiator: Noise-contrastive estimation for marked temporal point process. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pp. 2191–2197. International Joint Conferences on Artificial Intelligence Organization, 7 2018. doi: 10.24963/ijcai.2018/303. URL https://doi.org/10.24963/ijcai.2018/303. Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. *Biometrika*, 58(1): 83–90, 1971. ISSN 00063444. URL http://www.jstor.org/stable/2334319. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models, 2020. Valerie Isham and Mark Westcott. A self-correcting point process. *Stochastic Processes and their Applications*, 8(3):335–347, 1979. ISSN 0304-4149. doi: https://doi.org/10.1016/0304-4149(79)90008-5. URL https: //www.sciencedirect.com/science/article/pii/0304414979900085. Alexander Jordan, Fabian Krüger, and Sebastian Lerch. Evaluating probabilistic forecasts with scoringrules, 2018. Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2014. Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, and Le Song. Learning temporal point processes via reinforcement learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 5d50d22735a7469266aab23fd8aeb536-Paper.pdf. Haitao Lin, Cheng Tan, Lirong Wu, Zhangyang Gao, and Stan. Z. Li. An empirical study: Extensive deep temporal point process, 2021. J. M. Marín, M. T. Rodríguez-Bernal, and M. P. Wiper. Using weibull mixture distributions to model heterogeneous survival data. *Communications in Statistics - Simulation and Computation*, 34(3):673–684, 2005. doi: 10.1081/SAC-200068372. URL https://doi.org/10.1081/SAC-200068372. Nazanin Mehrasa, Ruizhi Deng, Mohamed Osama Ahmed, Bo Chang, Jiawei He, Thibaut Durand, Marcus Brubaker, and Greg Mori. Point process flows, 2020. URL https://openreview.net/forum?id= rklJ2CEYPH. Hongyuan Mei and Jason M Eisner. The neural hawkes process: A neurally self-modulating multivariate point process. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/ 6463c88460bd63bbe256e495c63aa40b-Paper.pdf. Hongyuan Mei, Tom Wan, and Jason Eisner. Noise-contrastive estimation for multivariate point processes, 2020. Hongyuan Mei, Chenghao Yang, and Jason Eisner. Transformer embeddings of irregularly spaced events and their participants. In *International Conference on Learning Representations*, 2022. URL https: //openreview.net/forum?id=Rty5g9imm7H. Y. Ogata. On lewis' simulation method for point processes. *IEEE Transactions on Information Theory*, 27 (1):23–31, 1981. doi: 10.1109/TIT.1981.1056305. Takahiro Omi, naonori ueda, and Kazuyuki Aihara. Fully neural network based model for general temporal point processes. In H. Wallach, H. Larochelle, A. 
Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/file/39e4973ba3321b80f37d9b55f63ed8b8-Paper.pdf. Zhen Pan, Zhenya Huang, Defu Lian, and Enhong Chen. A variational point process model for social event sequences. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(01):173–180, Apr. 2020. doi: 10.1609/aaai.v34i01.5348. URL https://ojs.aaai.org/index.php/AAAI/article/view/5348. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. 2019. doi: 10.48550/ARXIV.1912. 02762. URL https://arxiv.org/abs/1912.02762. Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting, 2021. Oleksandr Shchur, Marin Biloš, and Stephan Günnemann. Intensity-free learning of temporal point processes. In *International Conference on Learning Representations*, 2020a. URL https://openreview.net/forum? id=HygOjhEYDH. Oleksandr Shchur, Nicholas Gao, Marin Biloš, and Stephan Günnemann. Fast and flexible temporal point processes with triangular maps, 2020b. Oleksandr Shchur, Ali Caner Türkmen, Tim Januschowski, and Stephan Günnemann. Neural temporal point processes: A review, 2021. Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, and Lexing Xie. Unipoint: Universally approximating point processes intensities, 2021. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pp. 2256–2265, Lille, France, 2015a. PMLR. URL http://proceedings.mlr.press/v37/sohl-dickstein15. html. Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics, 2015b. Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 32, pp. 11918–11930. Curran Associates, Inc., 2019. URL https: //proceedings.neurips.cc/paper/2019/file/3001ef257407d5a371a96dcd947c7d93-Paper.pdf. Yang Song and Stefano Ermon. Improved Techniques for Training Score-Based Generative Models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 33. Curran Associates, Inc., 2020. URL https: //proceedings.neurips.cc/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-Paper.pdf. Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models, 2021a. URL https://arxiv.org/abs/2101.09258. Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations, 2021b. Utkarsh Upadhyay, Abir De, and Manuel Gomez Rodriguez. Deep reinforcement learning of marked temporal point processes. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. 
Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 71a58e8cb75904f24cde464161c3e766-Paper.pdf. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. *CoRR*, abs/1706.03762, 2017. URL http: //arxiv.org/abs/1706.03762. Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient langevin dynamics. In *ICML*, 2011. Shuai Xiao, Mehrdad Farajtabar, Xiaojing Ye, Junchi Yan, Le Song, and Hongyuan Zha. Wasserstein learning of deep generative point process models, 2017a. Shuai Xiao, Junchi Yan, Stephen M. Chu, Xiaokang Yang, and Hongyuan Zha. Modeling the intensity function of point process via recurrent neural networks, 2017b. Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi Jaakkola. Crystal diffusion variational autoencoder for periodic material generation, 2021. Hongteng Xu, Mehrdad Farajtabar, and Hongyuan Zha. Learning granger causality for hawkes processes. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), *Proceedings of The 33rd International Conference* on Machine Learning, volume 48 of *Proceedings of Machine Learning Research*, pp. 1717–1726, New York, New York, USA, 20–22 Jun 2016. PMLR. URL http://proceedings.mlr.press/v48/xuc16.html. Junchi Yan, Xin Liu, Liangliang Shi, Changsheng Li, and Hongyuan Zha. Improving maximum likelihood estimation of temporal point process via discriminative and adversarial learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pp. 2948–2954. International Joint Conferences on Artificial Intelligence Organization, 7 2018. doi: 10.24963/ijcai.2018/409. URL https://doi.org/10.24963/ijcai.2018/409. Qiang Zhang, Aldo Lipani, Omer Kirnap, and Emine Yilmaz. Self-attentive Hawkes process. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 11183–11193. PMLR, 13–18 Jul 2020a. URL http://proceedings.mlr.press/v119/zhang20q.html. Qiang Zhang, Aldo Lipani, and Emine Yilmaz. Learning neural point processes with latent graphs. In In Proceedings of the Web Conference 2021 (WWW '21), 2021. URL https://doi.org/10.1145/3442381. 3450135. Wei Zhang, Thomas Kobber Panum, Somesh Jha, Prasad Chalasani, and David Page. Cause: Learning granger causality from event sequences using attribution methods, 2020b. URL https://arxiv.org/abs/ 2002.07906. Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, and Hongyuan Zha. Transformer Hawkes process. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 11692–11702. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/zuo20a.html. ## A Preliminaries On Temporal Point Process Temporal point process with markers. For a temporal point process {ti}i≥1 as a real-valued stochastic process indexed on N + such that Ti ≤ Ti+1 almost surely (here Ti representing the random variable), each random variable is generally viewed as the arrival timestamp of an event. When each timestamp is given a type marker, i.e. {(ti, mi)}i≥1, the process is called marked temporal point process, also called multivariate point process as well. Conditional intensity function and probability density function. As defined in Eq. 
1, the probability density function and cumulative distribution function can be obtained through

$$\begin{aligned}
\lambda(t|\mathcal{H}(t))\,dt &= \mathbb{E}[N(t+dt)-N(t)|\mathcal{H}(t)] = \mathbb{P}(t_i \in [t, t+dt)\,|\,\mathcal{H}(t)) \\
&= \mathbb{P}(t_i \in [t, t+dt)\,|\,t_i \notin [t_{i-1}, t), \mathcal{H}(t_{i-1})) \\
&= \frac{\mathbb{P}(t_i \in [t, t+dt),\, t_i \notin [t_{i-1}, t)\,|\,\mathcal{H}(t_{i-1}))}{\mathbb{P}(t_i \notin [t_{i-1}, t)\,|\,\mathcal{H}(t_{i-1}))} \\
&= \frac{\mathbb{P}(t_i \in [t, t+dt)\,|\,\mathcal{H}(t_{i-1}))}{\mathbb{P}(t_i \notin [t_{i-1}, t)\,|\,\mathcal{H}(t_{i-1}))}
= \frac{p(t|\mathcal{H}(t_{i-1}))\,dt}{1 - P(t|\mathcal{H}(t_{i-1}))} = \frac{p^*(t)\,dt}{1 - P^*(t)}.
\end{aligned}$$

In this way, the reverse relation can be given by

$$p^{*}(t)=\lambda^{*}(t)\exp\left(-\int_{t_{i-1}}^{t}\lambda^{*}(\tau)\,d\tau\right); \qquad P^{*}(t)=1-\exp\left(-\int_{t_{i-1}}^{t}\lambda^{*}(\tau)\,d\tau\right).$$

**Example 1. (Poisson process)** The (homogeneous) Poisson process is quite simply the point process whose conditional intensity function is independent of the past, e.g., $\lambda^*(t) = \lambda(t) = c$ for a constant $c$.

**Example 2. (Hawkes process)** The conditional intensity function can be written as

$$\lambda^{*}(t)=\alpha+\sum_{t_{j}<t}g(t-t_{j};\eta_{j},\beta_{j}),$$

which accumulates the impacts of all historical events on the target timestamp $t$. The classical Hawkes process formulates the impact function as the exponentially decaying $g(t - t_j; \eta, \beta) = \eta\exp(-\beta(t - t_j))$.

## B Experiments

## B.1 Synthetic Dataset Description

The synthetic datasets are generated with the tick package (Bacry et al., 2017) (https://github.com/X-DataInitiative/tick), using the Hawkes process generator. Four Hawkes kernels are used as impact functions, with each process's intensity simulated according to **Example 2**:

$$\begin{aligned}
g_a(t) &= 0.09\exp(-0.4t)\\
g_b(t) &= 0.01\exp(-0.8t)+0.03\exp(-0.6t)+0.05\exp(-0.4t)\\
g_c(t) &= 0.25|\cos 3t|\exp(-0.1t)\\
g_d(t) &= 0.1(0.5+t)^{-2}
\end{aligned}$$

The impact function $g_{j,i}(t)$, measuring the impact of type $i$ on type $j$, is chosen uniformly at random from the above. With probability $r$, which we call the *cutting ratio*, the impact is forced to zero, making the Granger causality graph sparse. The cutting ratio is set to 0.2, and the total number of types is set to 5.

## B.2 Calculation of QQP_Dev

If the sequences come from a point process with intensity function $\lambda(t)$, then the integral $\Lambda_i = \int_{t_i}^{t_{i+1}}\lambda(\tau)\,d\tau$ between consecutive events should follow an exponential distribution with parameter 1. Therefore, the QQ plot of $\Lambda$ against the exponential distribution with rate 1 should fall approximately along a 45-degree reference line. We first use the model to sample a series of timestamps and use them to estimate the empirical $\hat{\Lambda}^*$. The mean absolute deviation of the QQ plot of $\hat{\Lambda}^*$ vs. $\mathrm{Exp}(1)$ from the line with slope 1 is 'QQP_Dev'; a short sketch follows.
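A minimal sketch of this computation, assuming the empirical compensators $\hat{\Lambda}_i$ between consecutive events have already been estimated:

```python
import numpy as np

def qqp_dev(cum_hazard):
    """Mean absolute deviation of the QQ plot of the empirical compensators
    against Exp(1); `cum_hazard` holds the estimated integrals of lambda
    between consecutive events."""
    x = np.sort(np.asarray(cum_hazard, dtype=float))
    n = len(x)
    probs = (np.arange(1, n + 1) - 0.5) / n   # plotting positions
    ref = -np.log1p(-probs)                   # quantiles of Exp(1)
    return np.abs(x - ref).mean()             # deviation from the 45-degree line
```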
## B.3 Supplementary Results

Table 5: Comparison on the Synthetic dataset.

| Methods | MAPE(↓) | CRPS(↓) | QQP_Dev(↓) | Top1_ACC(↑) | Top3_ACC(↑) |
|---|---|---|---|---|---|
| E-RMTPP | 22.8206±1.5594 | 0.1905±0.0110 | 1.7921±0.0047 | 0.2497±0.0031 | 0.6693±0.0034 |
| LogNorm | 54.6208±0.0000 | 0.1916±0.0102 | 1.7920±0.0044 | 0.2494±0.0027 | 0.6687±0.0026 |
| E-ERTPP | 54.6208±0.0000 | 0.1843±0.0072 | 1.7881±0.0062 | 0.2482±0.0011 | 0.6674±0.0012 |
| WeibMix | 26.5910±7.3426 | 0.1060±0.0029 | 1.9776±0.0000 | 0.2476±0.0001 | 0.6668±0.0002 |
| FNNInt | 4.5223±0.0976 | - | 1.3342±0.0005 | 0.2531±0.0041 | 0.6724±0.0040 |
| SAHP | 4.5198±0.1677 | - | 1.1775±0.0002 | 0.2964±0.0003 | 0.7269±0.0001 |
| THP | 4.4958±0.1331 | - | 1.1775±0.0001 | 0.2474±0.0002 | 0.6667±0.0002 |
| TCVAE | 3.3237±0.0304 | 0.0617±0.0001 | 1.4124±0.0024 | 0.2476±0.0002 | 0.6670±0.0000 |
| TCGAN | 3.5009±0.1288 | 0.2510±0.2690 | 1.4297±0.0223 | 0.1924±0.0894 | 0.5059±0.2438 |
| TCCNF | 4.7095±0.0303 | 0.0654±0.0000 | 1.5787±0.0007 | 0.2465±0.0001 | 0.6669±0.0001 |
| TCNSN | 33.9541±0.9428 | 0.1109±0.0002 | 1.5884±0.0001 | 0.2554±0.0001 | 0.6766±0.0001 |
| TCDDM | 3.2323±0.0015 | 0.0509±0.0000 | 1.4261±0.0002 | 0.2492±0.0020 | 0.6686±0.0019 |

We report the negative ELBO, which is an upper bound on the model's NLL, for TCDDM, TCVAE, and TCNSN, and the exact NLL of the other models, except that for TCGAN we give the Wasserstein distance between the empirical and model distributions (in parentheses), on the four real-world datasets, as shown in Table 6.

Table 6: Comparison on the four real-world datasets on NLL and ELBO.

| Methods | MOOC | Retweet | Stack Overflow | Yelp |
|---|---|---|---|---|
| E-RMTPP | 1.7504 | −1.9872 | 4.9031 | −1.0832 |
| LogNorm | 1.3635 | −2.4197 | 4.8782 | −1.2808 |
| E-ERTPP | 3.5791 | −0.8876 | 5.0845 | −0.9678 |
| WeibMix | 0.7950 | −2.5110 | 3.8717 | −1.1125 |
| FNNInt | −2.3024 | −3.0064 | 1.8469 | −1.2294 |
| SAHP | −2.3472 | −2.9955 | 1.8348 | −1.6607 |
| THP | 0.1270 | −1.3794 | 1.8591 | −1.6349 |
| TCDDM | ≤ 1.7609 | ≤ 0.7560 | ≤ 2.2450 | ≤ 0.0142 |
| TCVAE | ≤ 9.4754 | ≤ 7.5911 | ≤ 3.9727 | ≤ 6.2181 |
| TCGAN | (0.0056) | (0.0001) | (0.0511) | (0.0560) |
| TCCNF | −2.8591 | −3.0464 | 1.6881 | −1.8791 |
| TCNSN | ≤ 2.1815 | ≤ 0.8340 | ≤ 2.4131 | ≤ 0.3517 |

## B.4 Complexity Comparison

We report each method's mean training time per epoch to identify which methods are extremely time-consuming. All methods are affordable in time complexity except TCCNF, and TCGAN is also time-consuming. The test is run on a single Nvidia V100 (32510 MB). In all test settings, the batch size is set to 16, the embedding size to 32, and the layer number to 1. For methods whose likelihood has no closed form, we use Monte Carlo integration, with 100 samples in each interval.
Table 7: Comparison of time complexity (mean training time per epoch, minutes′ seconds″).

| Methods | MOOC | Retweet | Stack Overflow | Yelp | Synthetic |
|---|---|---|---|---|---|
| E-RMTPP | 46″ | 2′08″ | 1′22″ | 12″ | 1′26″ |
| LogNorm | 39″ | 2′02″ | 1′14″ | 11″ | 1′27″ |
| E-ERTPP | 42″ | 2′14″ | 1′17″ | 11″ | 1′33″ |
| WeibMix | 49″ | 2′18″ | 1′27″ | 13″ | 1′22″ |
| FNNInt | 45″ | 2′08″ | 1′16″ | 14″ | 1′32″ |
| SAHP | 1′11″ | 2′42″ | 1′51″ | 21″ | 2′14″ |
| THP | 57″ | 2′29″ | 1′30″ | 18″ | 2′02″ |
| TCVAE | 46″ | 2′43″ | 1′33″ | 11″ | 1′31″ |
| TCGAN | 2′30″ | 11′24″ | 3′47″ | OOM | 4′02″ |
| TCCNF | 5′42″ | 21′28″ | 7′06″ | 3′24″ | 5′47″ |
| TCNSN | 33″ | 1′46″ | 42″ | 10″ | 56″ |
| TCDDM | 35″ | 1′39″ | 1′16″ | 13″ | 1′12″ |

Table 8: Comparison of used memory (peak memory in training).

| Methods | MOOC | Retweet | Stack Overflow | Yelp | Synthetic |
|---|---|---|---|---|---|
| E-RMTPP | 2174 MiB | 1544 MiB | 4288 MiB | 22294 MiB | 7074 MiB |
| LogNorm | 1976 MiB | 1544 MiB | 4096 MiB | 22294 MiB | 7078 MiB |
| E-ERTPP | 1976 MiB | 1544 MiB | 4096 MiB | 22294 MiB | 7078 MiB |
| WeibMix | 2078 MiB | 1544 MiB | 4290 MiB | 22294 MiB | 7080 MiB |
| FNNInt | 1828 MiB | 1438 MiB | 3692 MiB | 21366 MiB | 10686 MiB |
| SAHP | 7197 MiB | 1606 MiB | 4706 MiB | 22730 MiB | 15258 MiB |
| THP | 6004 MiB | 1496 MiB | 4544 MiB | 18384 MiB | 7648 MiB |
| TCVAE | 2286 MiB | 1436 MiB | 3198 MiB | 27684 MiB | 6854 MiB |
| TCGAN | 3712 MiB | 2164 MiB | 4906 MiB | OOM | 12318 MiB |
| TCCNF | 2598 MiB | 1486 MiB | 3932 MiB | 27836 MiB | 4048 MiB |
| TCNSN | 2926 MiB | 1662 MiB | 3884 MiB | 22374 MiB | 7984 MiB |
| TCDDM | 3110 MiB | 1542 MiB | 3508 MiB | 22764 MiB | 8084 MiB |

## B.5 Hyperparameter Sensitivity Analysis

We report TCDDM's performance under different hyper-parameters on MOOC, Retweet, and Stack Overflow, with the layer number and embedding size set in the ranges {1, 2, 3} and {8, 16, 32}, respectively. The results show that a large embedding size usually brings improvements, so we recommend setting it to 32, while the layer number should be set to 1 or 2.

![21_image_0.png](21_image_0.png)

Figure 9: Change of performance with layer number on MOOC.

![22_image_0.png](22_image_0.png)

Figure 10: Change of performance with layer number on Retweet.

![22_image_2.png](22_image_2.png)

Figure 11: Change of performance with layer number on Stack Overflow.

![22_image_3.png](22_image_3.png)

Figure 12: Change of performance with embedding size on MOOC.

![23_image_0.png](23_image_0.png)

Figure 13: Change of performance with embedding size on Retweet.

![23_image_2.png](23_image_2.png)

Figure 14: Change of performance with embedding size on Stack Overflow.
Review 1: Summary: In this paper, the authors study several generative neural temporal point process models. For the history encoder part, the authors modify the self-attentive encoder by adding a learnable time encoding and incorporating an event type encoding. On the decoder part, various generative models are adopted, including diffusion models, VAEs, GANs, normalizing flows, and score-based generative models. Finally, the authors conducted extensive experimental studies to evaluate the performance of different generative TPP models on several benchmark datasets.

Strengths and Weaknesses: Strengths: The paper is clearly written. All major modern generative models are considered as decoders. The experimental study is extensive. It should be a very good first paper to read for someone new to this field.

Weaknesses: My impression of this paper is that it seems like something between a survey paper and a technical contribution paper. The paper has its technical contributions, but they are somewhat limited, since this is not the first paper applying generative models to TPP modeling. On the other hand, it is also not as systematic and comprehensive as a full-fledged survey paper. For example, when comparing different generative models as decoders, there are not many insights on the pros and cons of each model, other than the empirical evaluation. Coming back to the technical contribution part, one claimed contribution is the augmentation of the self-attentive encoder. I think the two problems listed in section 3.1 are definitely important considerations when designing the model. But I would argue that they are not necessarily "problems"; they are more like different modeling assumptions or inductive biases. Incorporating more features or learnable parameters will improve the performance of the model in some cases, but probably not on all datasets. In my opinion, the advantages of generative models over discriminative ones are that (1) there is a quantitative evaluation of the goodness-of-fit, for example, the log-likelihood or ELBO; (2) the probabilistic model can be used for sampling or posterior inference. In the experimental study, the evaluation metrics seem to be focused on the predictive quality of the models, rather than their generative capabilities. I would suggest adding more model comparisons from this perspective.

Requested Changes: * Provide more insights into the choice of different probabilistic encoders, their respective pros and cons. * Consider alternative evaluation methods for generative models if applicable, quantitatively (e.g., comparing likelihood/ELBO) and qualitatively (e.g., latent variable analysis in https://arxiv.org/abs/1506.02216)

Broader Impact Concerns: None

==================================================

Review 2: Summary: Temporal point processes (TPPs) are generative models for asynchronous event sequences occurring in continuous time. Most recent papers on neural TPPs evaluate the goodness-of-fit of these models using negative log-likelihood (NLL). The authors of this work point out that good NLL scores may not be correlated with good predictive performance of TPP models (i.e., good sample quality). The main contribution of this work aims to improve the sample quality by modifying the probabilistic **decoders** used in neural TPP models.
Specifically, the authors propose several new decoders for neural TPPs based on other modern deep generative model architectures:
- diffusion model
- variational autoencoder
- generative adversarial network
- continuous-time normalizing flows

These new architectures combined with the respective training procedures lead to better time prediction compared to existing NTPP architectures. The second contribution of this work is a modification to the self-attention **encoder** architecture used in some neural TPPs. The new modified parametrization of the self-attention layer
- replaces the positional embeddings with exponential smoothing and
- explicitly models dependencies between different mark types (similar to a low-rank Granger causality matrix)

Strengths and Weaknesses: Strengths:
- This paper highlights an important problem with existing neural TPP models ("good NLL does not imply good sample quality"). This is highly relevant to the practical applications of TPPs in predictive tasks.
- This work provides a thorough discussion of the different possible architectural choices in neural TPP models (both encoder, decoder and training procedures) and nicely connects to the previous works.

Weaknesses:
1. The code provided with the submission is incomplete. The `models` directory that should contain the implementations of all models is missing, and the link to Google Drive with the source code seems to be broken. Because of this I couldn't resolve some of my concerns below by looking at the source code.
2. The introduced "relation between event types" mechanism (property P.2 / L184 / Eq. 11) seems to be incorrect.
- According to Eq. 11 and Fig. 5, the weights $w_{j, i-1}$ can be negative since the event type embedding matrix $E_m$ is not guaranteed to be positive. Because of this, we cannot interpret the weights $w_{j, i-1}$ as attention coefficients anymore. The denominator in Eq. 11 can even be equal to zero or negative, breaking the attention mechanism.
- Even if we restrict the mark embedding matrix $E_m$ to be positive, we still cannot interpret the matrix $W = E_m^T E_m \in \mathbb{R}^{M \times M}$ in Eq. 11 as modeling interactions between marks / capturing Granger causality in the sense of https://arxiv.org/abs/1602.04511. For the correct definition of Granger causality, we need the following property. For each past event $j$ and any mark $k \in [M]$, it must hold that $W_{m_j, k} = 0 \implies$ "$p^*(t_i | m_i = k)$ is independent of event $(t_j, m_j)$". However, the definition provided in Eq. 11 uses the observed mark $m_{i-1}$ of the previous event number $i-1$, not the possible marks $k$ of the next event number $i$. Therefore, the matrix $W$ doesn't carry the usual intuitive meaning of the Granger causality matrix for TPPs.
3. Certain aspects of the experimental evaluation do not convincingly demonstrate the benefits of the proposed models.
- The improvements in predictive performance of the new encoder architecture (Table 4) are really minor.
- The learned similarity matrix (Figure 5) is quite dense, and it's not clear whether the larger values here indicate actual dependencies between event types or noise. See also the point #2 above. A quantitative evaluation compared to other methods for structure discovery (e.g., Hawkes process, [CAUSE](https://arxiv.org/abs/2002.07906)) would be more convincing.
- It's not clear whether the improved performance is based on the differences between the models or peculiarities of the training procedure (such as early stopping).
For example, all proposed model variants (TCDDM, TCVAE, TCGAN, TCCNF, TCNSN) as well as the baselines (RMTPP, LogNorm, WeibMix, FNNInt) use the exact same parametrization of the mark distribution $p^*(m_i)$ and are trained using the cross-entropy loss for marks. However, the accuracy scores achieved by different models in Table 3 are quite different, which is very surprising.
4. While the motivation for the proposed approach makes sense ("low NLL doesn't imply good samples"), it's not clear whether a much simpler baseline can't achieve the same or even better results. For example, why don't we use a purely discriminative baseline that directly predicts the next inter-event time $\hat{\tau}$ and is trained by minimizing MAPE? In my opinion, comparing to such a baseline is important if the focus of the paper is on inter-event time prediction.

Minor comments:
- Several points that need to be clarified in the paper:
  - Does the LogNorm baseline use a single log-normal distribution or a mixture, as in the original reference?
  - When computing mark prediction accuracy, do we consider $p^*(m_i)$, the marginal distribution of the next mark, or $p^*(m_i | t_i)$, the distribution of the next mark at time $t_i$? The two quantities are identical for the proposed models where the marks are independent of the arrival time (Sec. 3.4), but are different for the SAHP and TAHP baselines.
  - Do SAHP and THP models use the original encoders or the modified encoder (Eq. 11)?
- Some missing citations:
  - [Zhang et al., ICML 2020](https://arxiv.org/abs/2002.07906) also learn dependencies between event types in a marked neural TPP.
  - [Mehrasa et al., 2020](https://openreview.net/forum?id=rklJ2CEYPH) also use continuous-time normalizing flows / VAEs to model the inter-event time distribution in TPPs.
  - [Mei et al., ICLR 2022](https://openreview.net/forum?id=Rty5g9imm7H) also explicitly model dependencies between event types in the self-attention layer of a TPP (Eq. 8).
  - [Ben Taieb, AISTATS 2022](https://proceedings.mlr.press/v151/ben-taieb22a.html) also uses CRPS to evaluate TPP predictions.
- L88: Usually $N(t)$ is defined as the number of events in $(0, t]$, not $[0, t)$ (e.g., see in Daley & Vere-Jones).
- L155: The objective function of WGAN doesn't provide a lower bound on the likelihood, but is an approximation to the Wasserstein distance between the empirical and model distributions
- L369: It should be possible to sample events from SAHP and THP using Ogata's thinning method since the intensity for both methods is monotonically decreasing between events. It's also possible to sample from the FNNInt model using numerical root-finding.

Requested Changes: The following points are critical for securing my recommendation for acceptance:
1. Clarifying my concerns regarding the proposed attention mechanism and its connection to the Granger causality matrix (see point #2 above).
2. A quantitative evaluation on the structure discovery task.
3. Comparing to a purely discriminative baseline that directly predicts the next inter-event time $\hat{\tau}$ (see point #4 above).

Broader Impact Concerns: There is no Broader Impact Statement provided, but I don't believe that there are any major ethical concerns that haven't been addressed.

==================================================

Review 3: Summary: This paper revises the self-attentive encoders with adaptive reweighting for history encoders and adopts recently popular generative models, e.g. Conditional DDPM, VAE and others, to learn the temporal point processes.
Strengths and Weaknesses:
1. The idea is clear and new.
2. The description of existing methods is clear in Table 1.
3. Experiments are clear.

Requested Changes:
1. In Line 204, what is $q(t^0)$? A known standard Gaussian, an unknown distribution, or an empirical distribution? It is not clear.
2. In Eq. 12, I object to the assumption that $q(t^k|t^{k-1})$ is Gaussian. By definition, it is obvious that $t^k > t^{k-1}$; however, under the Gaussian assumption $t^k \in \mathbb{R}$. Similar situations exist for $p(t^k)$ and Eq. 13.
3. For the original DDPM models, the interpolation between Gaussian noise and the original data is one of the important reasons for good performance. However, similar data augmentation operations do not seem to be used.
4. The full name of CDD should be specified. And the names of the methods should be related to TPP to distinguish them from the original models.
5. The writing of the contributions should be improved.

Minor comments:
1. Line 79, exploring -> explore
2. Table 1, Discription -> Description; exsiting -> existing
3. Line 177, P.1. is not clear enough.
4. Figure.1 -> Fig.1 or Figure 1. Similar situations exist for others.

Broader Impact Concerns: na

==================================================

Review 4:

Summary:
**Summary of the contributions:** The authors propose a generative framework for neural temporal point processes (GNTPP) to apply deep generative models in the context of temporal point processes. The paper breaks down previous research on temporal point processes into two parts: historical encoders that generate context from observations, and probabilistic decoders that generate a distribution over the next event time conditioned on this historical context. The authors then propose a modification to the attention mechanism of the historical encoder proposed by Zhang et al. and Zuo et al. to incorporate additional context from event types, and study different conditional generative models (diffusion / VAE / WGAN / score-based / flows) to learn the distribution of the next event time conditioned on this historical context vector.

Strengths and Weaknesses:
**Strengths:** Application of generative models to temporal point processes is an interesting line of research, and it is interesting to study whether they provide a viable alternative to traditional conditional-intensity-based modelling. The authors study various families of generative models and the comparison would be appreciated by the community.

**Weaknesses:** The authors make certain claims that are not backed up by evidence and certain claims that are false. The following are the claims that I have issues with:
1. In the abstract and introduction, the authors claim previous works have focused on maximizing the likelihood of the next event, which leads to poor predictive performance, and that generative models can be used to improve performance. Again in Section 2.1.3 the authors claim *"Differing from the previous works focusing on models' fitting ability in terms of higher likelihood, these models aim to promote models' prediction ability, i.e. to generate high-quality samples which are closer to ground truth observations"*. I do not quite understand what the authors are trying to imply. All the generative models that the authors have proposed either maximize a bound on the log-likelihood of the predicted event, which amounts to minimizing the KL divergence between the true and the predicted distribution (VAE/diffusion/score), or minimize the Wasserstein distance (WGAN). The authors also do not convey why this is expected to improve the predictive performance.
This brings me to my second point.
2. If improving predictive performance is something the authors are interested in, an important baseline to be considered is adding this as an explicit penalty term along with the negative log-likelihood loss, as done in Transformer Hawkes Process (THP) (Zuo et al.).
3. In Section 2.2, the authors claim the Monte Carlo estimate used in THP and SAHP is biased. This is false, as the THP authors mention that the estimate is unbiased and I have verified it to the best of my knowledge.
4. In Section 2.3, to motivate the revised encoder, P1 mentions using the time difference instead of the actual event time to show a scenario where the embeddings are the same. This seems wrong: as $j_1$ and $j_2$ are different, $\omega(\tau_{j_1})$ and $\omega(\tau_{j_2})$ are going to be different even if $\tau_{j_1}$ and $\tau_{j_2}$ are the same.
5. In Section 2.1.3, the authors make a comment about the computational cost of the Monte Carlo integral used in THP and SAHP. As the authors claim their method avoids this cost, it would be interesting to look at a comparison of FLOPs / wall-clock time for training and inference, as generative models come with their own overhead.
6. The authors do not mention whether the baseline RMTPP uses the same time normalization they mention in Section 4.1.2. Any scaling of the event times changes the negative log-likelihood by a constant factor and makes the comparison meaningless.

**Other Concerns:**
1. In Section 2.3 the authors talk about transformers yet do not cite the original paper.
2. The citation for Transformer Hawkes Process, which is heavily featured in the paper, is incomplete.
3. In Section 2.6, the authors should mention their assumption that the mark and time distributions are conditionally independent given the historical embedding.
4. In Section 4.2, the authors use an example from the tick library as their simulated example, so I think it is important to cite the work in the main paper.
5. It would be nice to see the actual QQ-plot to see how close it is to the $y=x$ line, instead of just the deviation, as done in SAHP.
6. Section 4.2, line 1 is missing the word "decoder".

Requested Changes:
**Requested changes:** Weaknesses 1, 2, 3, 4 need to be addressed by the authors and are critical for securing my recommendation. Weakness 5 and other concern 5 would make the paper stronger.

Broader Impact Concerns:
**Broader impact concerns:** None

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The paper studies neural temporal point process (TPP) models, and makes the following two contributions:
- It explores a range of modern generative models for the TPP decoder.
- It proposes a modification in the attention mechanism of certain TPP encoders.

The reviewers found both contributions interesting and sufficiently evaluated. However, as part of the second contribution, the paper contains the following interpretability claim (lines 76--78): "we revise the self-attentive encoders with adaptive reweighting terms [...] showing better interpretability"

The reviewers were not convinced that this claim was supported by evidence. I'm willing to recommend the paper for acceptance as long as the authors revise their manuscript to *completely remove all claims* that their proposed modification leads to improved interpretability. At a minimum, I would expect to see the following changes:
- Lines 77--78: delete "showing better interpretability".
- Line 202: delete "To provide interpretation of the model".
- Lines 431--432: delete "prove the better [...] interpretability of our revised attentive encoder".
- Lines 454--455: delete "experimental results show good [...] interpretability of our revised attentive encoders".

==================================================
# Multi-View Data Visualisation Via Manifold Learning

Anonymous authors

Paper under double-blind review

## Abstract

Non-linear dimensionality reduction can be performed by *manifold learning* approaches, such as Stochastic Neighbour Embedding (SNE), Locally Linear Embedding (LLE) and Isometric Feature Mapping (ISOMAP). These methods aim to produce two- or three-dimensional latent embeddings, primarily to visualise the data in intelligible representations. This manuscript proposes extensions of Student's t-distributed SNE (t-SNE), LLE and ISOMAP, for dimensionality reduction and visualisation of *multi-view* data. Multi-view data refers to multiple types of data generated from the same samples. The proposed multi-view approaches provide more comprehensible projections of the samples compared to the ones obtained by visualising each data-view separately. Commonly, visualisation is used for identifying underlying patterns within the samples. By incorporating the obtained low-dimensional embeddings from the multi-view manifold approaches into the K-means clustering algorithm, it is shown that clusters of the samples are accurately identified. Through extensive comparisons of novel and existing multi-view manifold learning algorithms on real and synthetic data, the proposed multi-view extension of t-SNE, named *multi-SNE*, is found to have the best performance. We further illustrate the applicability of the multi-SNE approach for the analysis of multi-omics single-cell data, where the aim is to visualise and identify cell heterogeneity and cell types in biological tissues relevant to health and disease.

## 1 Introduction

Data visualisation is an important and useful component of exploratory data analysis, as it can reveal interesting patterns in the data and potential clusters of the observations. A common approach for visualising high-dimensional data (data with a higher number of features (p) than samples (n), *i.e.* p ≫ n) is to reduce its dimensions. Linear dimensionality reduction methods, including Principal Component Analysis (PCA) (Jolliffe & Cadima, 2016) and Non-negative Matrix Factorization (NMF) (García et al., 2018), assume linearity within data sets and as a result these methods often fail to produce reliable representations when linearity does not hold. *Manifold learning*, an active research area within machine learning, does not, in contrast to the linear dimensionality reduction approaches, rely on any linearity assumptions. By assuming that the dimensions of the data sets are artificially high, manifold learning methods aim to capture important information with minimal noise, in an induced low-dimensional embedding (Zheng & Xue, 2009). The generated low-dimensional embeddings can be used for data visualisation in the 2-D or 3-D spaces. Manifold learning approaches used for dimensionality reduction and visualisation focus on preserving at least one of the characteristics of the data. For example, Stochastic Neighbour Embedding (SNE) preserves the probability distribution of the data (Hinton & Roweis, 2003). The Locally Linear Embedding (LLE) proposed by Roweis & Saul (2000) is a neighbourhood-preserving method. The Isometric Feature Mapping (ISOMAP) proposed by Tenenbaum et al. (2000) is a quasi-isometric method based on Multi-Dimensional Scaling (Kruskal, 1964). Spectral Embedding finds low-dimensional embeddings via spectral decomposition of the Laplacian matrix (Ng et al., 2001).
The Local Tangent Space Alignment method proposed by Zhang & Zha (2004) learns the embedding by optimising local tangent spaces, which represent the local geometry of each neighbourhood, and Uniform Manifold Approximation and Projection (UMAP) preserves the global structure of the data by constructing a theoretical framework based on Riemannian geometry and algebraic topology (McInnes et al., 2018). This manuscript focuses on data visualisation of *multi-view data*, which are regarded as different types of data sets that are generated on the same samples of a study. It is very common nowadays in many different fields to generate multiple data-views on the same samples. For example, multi-view imaging data describe distinct visual features such as local binary patterns (LBP) and histogram of oriented gradients (HOG) (Shen et al., 2013), while multi-omics data, *e.g. proteomics, genomics, etc.*, in biomedical studies quantify different aspects of an organism's biological processes (Hasin et al., 2017). Through the collection of multi-view data, researchers are interested in better understanding the collected samples, including their visualisation, clustering and classification. Analysing the multi-view data simultaneously is not a straightforward task, as each data-view has its own distribution and variation pattern (Rodosthenous et al., 2020). Several approaches have been proposed for the analysis of multi-view data. These include methods on clustering (Kumar et al., 2011; Liu et al., 2013; Sun et al., 2015; Ou et al., 2016; Ye et al., 2018; Ou et al., 2018; Wang & Allen, 2021), classification (Shu et al., 2019), regression (Li et al., 2019), integration (Rodosthenous et al., 2020) and dimensionality reduction (Sun, 2013; Zhao et al., 2018; Xu et al., 2015). Such approaches have been extensively discussed in the review papers of Xu et al. (2013) and Zhao et al. (2017). In this manuscript, we focus on the visualisation task. By visualising multi-view data collectively, the aim is to obtain a global overview of the data and identify patterns that would have potentially been missed if each data-view was visualised separately. Typically, multiple visualisations are produced, one from each data-view, or the features of the data-views are concatenated to produce a single visualisation. The former could provide misleading outcomes, with each data-view revealing different visualisations and patterns. The different statistical properties, physical interpretation, noise and heterogeneity between data-views suggest that concatenating features would often fail in achieving a reliable interpretation and visualisation of the data (Fu et al., 2008). A number of multi-view visualisation approaches have been proposed in the literature, with some of these approaches based on the manifold approaches t-SNE and LLE. For example, Xie et al. (2011) proposed m-SNE, which combines the probability distributions produced by each data-view into a single distribution via a weight parameter. The algorithm then implements t-SNE on the combined distribution to obtain a single low-dimensional embedding. The proposed solution finds the optimal choice for both the low-dimensional embeddings and the weight parameter simultaneously. Similarly, Kanaan Izquierdo (2017) proposed two alternative solutions based on t-SNE, named MV-tSNE1 and MV-tSNE2. MV-tSNE2 is similar to m-SNE, combining the probability distributions through expert opinion pooling.
In parallel to our work, Hoan Do & Canzar (2021) proposed a multi-view extension of t-SNE, named j-SNE. Both multi-SNE and j-SNE first appeared as preprints in January 2021 (https://doi.org/10.1101/2021.01.10.426098). J-SNE produces low-dimensional embeddings through an iterative procedure that assigns each data-view a weight value that is updated per iteration through regularisation. In addition, Shen et al. (2013) proposed multi-view Locally Linear Embeddings (m-LLE), an extension of LLE for effectively retrieving medical images. M-LLE produces a single low-dimensional embedding by integrating the embeddings from each data-view according to a weight parameter c, which refers to the contribution of each data-view. Similarly to m-SNE, the algorithm optimises both the weight parameter and the embeddings simultaneously. Zong et al. (2017) proposed MV-LLE, which minimises the cost function by assuming a consensus matrix across all data-views. Building on the existing literature, we propose here alternative extensions of the manifold approaches t-SNE, LLE and ISOMAP for visualising multi-view data. The cost functions of our proposals are different from the existing ones, as they integrate the available information from the multi-view data iteratively. At each iteration, the proposed *multi-SNE* updates the low-dimensional embeddings by minimising the dissimilarity between their probability distribution and the distribution of each data-view. The total cost of this approach equals the weighted sum of those dissimilarities. Our proposed variation of LLE, *multi-LLE*, constructs the low-dimensional embeddings by utilising a consensus weight matrix, which is taken as the weighted sum of the weight matrices computed by each data-view. Lastly, the low-dimensional embeddings in the proposed *multi-ISOMAP* are constructed by using a consensus graph, for which the nodes represent the samples and the edge lengths are taken as the averaged distance between the samples in each data-view. M-ISOMAP is proposed as an alternative ISOMAP-based multi-view manifold learning algorithm. Similar to m-SNE and m-LLE, m-ISOMAP provides a weighted integration of the low-dimensional embeddings produced by the implementation of ISOMAP on each data-view separately. As the field of multi-view data analysis is relatively new, the literature lacks comparative studies between multi-view manifold learning algorithms. This manuscript makes a novel contribution to the field by conducting extensive comparisons between the multi-view non-linear dimensionality reduction approaches proposed in this manuscript (multi-SNE, multi-LLE, multi-ISOMAP and m-ISOMAP) and other approaches proposed in the literature. These comparisons are conducted on both real and synthetic data that have been designed to capture different data characteristics. The aim of these comparisons is to identify the best-performing algorithms, discuss pitfalls of the approaches and guide the users to the most appropriate solution for their data. We illustrate that our proposals result in more robust solutions compared to the approaches proposed in the literature, including m-SNE, m-LLE and MV-tSNE2. We further illustrate, through the visualisation of the low-dimensional embeddings produced by the proposed multi-view manifold learning algorithms, that if clusters exist within the samples, they can be successfully identified. We show that this can be achieved by applying the K-means algorithm on the low-dimensional embeddings of the data.
The K-means algorithm (MacQueen, 1967) was chosen to cluster the data points, as it is one of the most famous and prominent partition clustering algorithms (Xu & Tian, 2015). A better clustering performance by K-means suggests a visually clearer separation of clusters. Through the conducted experiments, we show that the proposed multi-SNE approach recovers well-separated clusters of the data, and has comparable performance to multi-view clustering algorithms that exist in the literature.

## 2 Material And Methods

In this section, the proposed approaches for multi-view manifold learning are described. This section starts with an introduction of the notation used throughout this manuscript. The proposed multi-SNE, multi-LLE and multi-ISOMAP are described in Sections 2.2, 2.3 and 2.4, respectively. The section ends with a description of the process for tuning the parameters of the algorithms.

## 2.1 Notation

Throughout this paper, the following notation is used:

- $N$: The number of samples.
- $X \in \mathbb{R}^{N\times p}$: A single-view data matrix, representing the original high-dimensional data used as input; $\mathbf{x}_i \in \mathbb{R}^{p}$ is the $i$th data point of $X$.
- $M$: The number of data-views in a given data set; $m \in \{1, \cdots, M\}$ represents an arbitrary data-view.
- $X^{(m)} \in \mathbb{R}^{N\times p_m}$: The $m$th data-view of multi-view data; $\mathbf{x}_i^m \in \mathbb{R}^{p_m}$ is the $i$th data point of $X^{(m)}$.
- $Y \in \mathbb{R}^{N\times d}$: A low-dimensional embedding of the original data; $\mathbf{y}_i \in \mathbb{R}^{d}$ represents the $i$th data point of $Y$. In this manuscript, $d = 2$, as the focus of the manuscript is on data visualisation.

## 2.2 Multi-SNE

SNE, proposed by Hinton & Roweis (2003), measures the probability distribution, $P$, of each data point $\mathbf{x}_i$ by looking at the similarities among its neighbours. For every sample $i$ in the data, $j$ is taken as its potential neighbour with probability $p_{ij}$, given by

$$p_{ij}=\frac{\exp{(-d_{ij}^{2})}}{\sum_{k\neq i}\exp{(-d_{ik}^{2})}},\tag{1}$$

where $d_{ij} = \frac{||\mathbf{x}_i-\mathbf{x}_j||^2}{2\sigma_i^2}$ represents the dissimilarity between points $\mathbf{x}_i$ and $\mathbf{x}_j$. The value of $\sigma_i$ is either set by hand or found by binary search (van der Maaten & Hinton, 2008). Based on this value, a probability distribution of sample $i$, $P_i = \sum_j p_{ij}$, with fixed perplexity is produced. Perplexity refers to the effective number of local neighbours and it is defined as $Perp(P_i) = 2^{H(P_i)}$, where $H(P_i) = -\sum_j p_{ij} \log_2 p_{ij}$ is the Shannon entropy of $P_i$. It increases monotonically with the variance $\sigma_i$ and typically takes values between 5 and 50. In the same way, a probability distribution in the low-dimensional space, $Y$, is computed as follows:

$$q_{ij}=\frac{\exp\left(-||\mathbf{y}_{i}-\mathbf{y}_{j}||^{2}\right)}{\sum_{k\neq i}\exp\left(-||\mathbf{y}_{i}-\mathbf{y}_{k}||^{2}\right)},\tag{2}$$

which represents the probability of point $i$ selecting point $j$ as its neighbour. The induced embedding output, $\mathbf{y}_i$, represented by probability distribution $Q$, is obtained by minimising the Kullback-Leibler divergence (KL-divergence) $KL(P||Q)$ between the two distributions $P$ and $Q$ (Kullback & Leibler, 1951). The aim is to minimise the cost function:

$$C_{SNE}=\sum_i KL(P_i||Q_i)=\sum_i\sum_j p_{ij}\log\frac{p_{ij}}{q_{ij}}\tag{3}$$

Hinton & Roweis (2003) assumed a Gaussian distribution in computing the similarity between two points in both high and low dimensional spaces.
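To make the construction of $P$ concrete, the sketch below computes the affinities of equation (1), calibrating each $\sigma_i$ by binary search so that the entropy of $P_i$ matches a target perplexity. This is a minimal NumPy illustration of the standard SNE procedure, not the authors' own implementation; the function name and defaults are ours.

```python
import numpy as np

def sne_affinities(X, perplexity=30.0, tol=1e-5, max_iter=50):
    """Affinities p_ij of Eq. (1); each sigma_i is found by binary search
    so that the Shannon entropy of P_i equals log2(perplexity)."""
    N = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T    # squared distances
    P = np.zeros((N, N))
    target = np.log2(perplexity)
    for i in range(N):
        lo, hi = 0.0, np.inf
        beta = 1.0                                 # beta = 1 / (2 sigma_i^2)
        for _ in range(max_iter):
            p = np.exp(-D[i] * beta)
            p[i] = 0.0
            p /= p.sum()
            H = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy in bits
            if abs(H - target) < tol:
                break
            if H > target:                         # too flat: raise beta
                lo = beta
                beta = beta * 2 if np.isinf(hi) else (beta + hi) / 2
            else:                                  # too peaked: lower beta
                hi = beta
                beta = (beta + lo) / 2
        P[i] = p
    return P
```

For the symmetric variant used by t-SNE below, these conditional affinities are typically symmetrised, e.g. $p_{ij} \leftarrow (p_{ij} + p_{ji})/2N$.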
van der Maaten & Hinton (2008) proposed a variant of SNE, called t-SNE, which uses a symmetric version of SNE and a Student t-distribution to compute the similarity between two points in the low-dimensional space $Q$, given by

$$q_{ij}=\frac{(1+||\mathbf{y}_{i}-\mathbf{y}_{j}||^{2})^{-1}}{\sum_{k\neq l}(1+||\mathbf{y}_{k}-\mathbf{y}_{l}||^{2})^{-1}}\tag{4}$$

T-SNE is often preferred, because it reduces the effect of the crowding problem (limited area to accommodate all data points and differentiate clusters) and it is easier to optimise, as it provides simpler gradients than SNE (van der Maaten & Hinton, 2008). We propose *multi-SNE*, a multi-view manifold learning algorithm based on t-SNE. Our proposal computes the KL-divergence between the distribution of a single low-dimensional embedding and each data-view of the data separately, and minimises their weighted sum. An iterative algorithm is proposed, in which at each iteration the induced embedding is updated by minimising the cost function:

$$C_{multi-SNE}=\sum_{m}\sum_{i}\sum_{j}w^{m}p_{ij}^{m}\log\frac{p_{ij}^{m}}{q_{ij}},\tag{5}$$

where $w^m$ is the combination coefficient of the $m$th data-view. The vector $\mathbf{w} = (w^1, \cdots, w^M)$ acts as a weight vector that satisfies $\sum_m w^m = 1$. In this study, equal weights on all data-views were considered, *i.e.* $w^m = \frac{1}{M}, \forall m = 1, \cdots, M$. The algorithm of the proposed multi-SNE approach is presented in Appendix A. An alternative multi-view extension of t-SNE, called *m-SNE*, was proposed by Xie et al. (2011). M-SNE applies t-SNE on a single distribution in the high-dimensional space, which is computed by combining the probability distributions of the data-views, given by $p_{ij} = \sum_{m=1}^{M} \beta^m p_{ij}^m$. The coefficients (or weights) $\beta^m$ share the same role as $w^m$ in multi-SNE and similarly $\boldsymbol{\beta} = (\beta^1, \cdots, \beta^M)$ satisfies $\sum_m \beta^m = 1$. This leads to a different cost function than the one in equation (5). Kanaan Izquierdo (2017) proposed a similar cost function for multi-view t-SNE, given as follows:

$$C_{\text{MV-tSNE1}}=\sum_{m}\sum_{i}\sum_{j}p_{i|j}^{m}\log{\frac{p_{i|j}^{m}}{q_{i|j}}}\tag{6}$$

Their proposal is a special case of multi-SNE, with $w^m = \frac{1}{M}$. Kanaan Izquierdo (2017) did not pursue MV-tSNE1 any further, but instead proceeded with an alternative solution, MV-tSNE2, which combines the probability distributions (similar to m-SNE) through expert opinion pooling. A comparison between multi-SNE and MV-tSNE2 is presented in Appendix D.1. Based on two real data sets, multi-SNE and m-SNE outperformed MV-tSNE2, with the solution by multi-SNE producing the best separation among the clusters in both examples. Multi-SNE avoids combining the probability distributions of all data-views together. Instead, the induced embeddings are updated by minimising the KL-divergence between every data-view's probability distribution and that of the low-dimensional representation we seek to obtain. In other words, this is achieved by computing the gradient for each data-view and summing the per-view gradients together; the induced embedding is then updated by following the summed gradient. Throughout this paper, for all variations of t-SNE we have applied the PCA pre-training step proposed by van der Maaten & Hinton (2008), who discussed that by reducing the dimensions of the input data through PCA the computational time of t-SNE is reduced.
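As an illustration of the update implied by equation (5), the sketch below performs one gradient step on a shared embedding $Y$, summing the standard t-SNE gradient of each data-view weighted by $w^m$. It is a bare-bones sketch under our own naming, omitting the momentum and early-exaggeration heuristics normally used when optimising t-SNE-style objectives.

```python
import numpy as np

def multi_sne_step(Y, Ps, weights, lr=100.0):
    """One gradient-descent update of the shared embedding Y (Eq. 5).
    Ps:      list of symmetrised affinity matrices P^(m), one per view.
    weights: the coefficients w^m, summing to one."""
    # Student-t similarities q_ij in the embedding space (Eq. 4).
    sq = np.sum(Y ** 2, axis=1)
    num = 1.0 / (1.0 + sq[:, None] + sq[None, :] - 2 * Y @ Y.T)
    np.fill_diagonal(num, 0.0)
    Q = num / num.sum()
    grad = np.zeros_like(Y)
    for w, P in zip(weights, Ps):
        PQ = (P - Q) * num                 # (p^m_ij - q_ij)(1+||.||^2)^-1
        L = np.diag(PQ.sum(axis=1)) - PQ   # graph-Laplacian form of the gradient
        grad += w * 4.0 * (L @ Y)          # per-view t-SNE gradient, weighted
    return Y - lr * grad
```

Iterating this update, with each $P^{(m)}$ built for instance by the perplexity-calibrated affinities sketched above, reproduces the behaviour described in the text: each view pulls the same embedding towards its own neighbourhood structure.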
In this paper, the retained principal components captured at least 80% of the total variation (variance explained) in the original data. In addition, as the multi-SNE algorithm is iterative, we opted for running the algorithm for 1,000 iterations for all analyses conducted. Alternatively, a stopping rule could have been implemented, terminating the iterations once no significant changes were observed in the cost function. Both these options are available in the implementation of the multi-SNE algorithm. The original t-SNE implementation was applied in the presented work. All t-SNE results presented in this manuscript were based on the original R implementation (https://cran.r-project.org/web/packages/tsne/) and verified by the original Python implementation (https://lvdmaaten.github.io/tsne/).

## 2.3 Multi-LLE

LLE attempts to discover a non-linear structure of high-dimensional data, $X$, by computing low-dimensional and neighbourhood-preserving embeddings, $Y$ (Saul & Roweis, 2001). The main three steps of the algorithm are:

1. The set, denoted by $\Gamma_i$, contains the $K$ nearest neighbours of each data point $\mathbf{x}_i$, $i = 1, \cdots, N$. The most common distance measure between the data points is the Euclidean distance. Other local metrics can also be used in identifying the nearest neighbours (Roweis & Saul, 2000).

2. A weight matrix, $W$, is computed, which acts as a bridge between the high-dimensional space in $X$ and the low-dimensional space in $Y$. Initially, $W$ reconstructs $X$, by minimising the cost function:

$${\mathcal{E}}_{X}=\sum_{i}|\mathbf{x}_{i}-\sum_{j}W_{ij}\mathbf{x}_{j}|^{2}\tag{7}$$

where the weights $W_{ij}$ describe the contribution of the $j$th data point to the $i$th reconstruction. The optimal weights $W_{ij}$ are found by solving the least squares problem given in equation (7) subject to the constraints: (a) $W_{ij} = 0$, if $j \notin \Gamma_i$, and (b) $\sum_j W_{ij} = 1$.

3. Once $W$ is computed, the low-dimensional embedding $\mathbf{y}_i$ of each data point $i = 1, \cdots, N$ is obtained by minimising:

$${\mathcal{E}}_{Y}=\sum_{i}|\mathbf{y}_{i}-\sum_{j}W_{ij}\mathbf{y}_{j}|^{2}\tag{8}$$

The solution to equation (8) is obtained by taking the bottom $d$ non-zero eigenvectors of the sparse $N \times N$ matrix $M = (I - W)^T(I - W)$ (Roweis & Saul, 2000).

We propose *multi-LLE*, a multi-view extension of LLE, that computes the low-dimensional embeddings by using the consensus weight matrix:

$${\hat{W}}=\sum_{m}\alpha^{m}W^{m}\tag{9}$$

where $\sum_m \alpha^m = 1$, and $W^m$ is the weight matrix for each data-view $m = 1, \cdots, M$. Thus, $\hat{Y}$ is obtained by solving:

$${\mathcal{E}}_{\hat{Y}}=\sum_{i}|{\hat{\mathbf{y}}}_{i}-\sum_{j}{\hat{W}}_{ij}{\hat{\mathbf{y}}}_{j}|^{2}$$

The multi-LLE algorithm is presented in Appendix A. Shen et al. (2013) proposed *m-LLE*, an alternative multi-view extension of LLE, in which LLE is applied to each data-view separately and the weighted average of the resulting embeddings is taken as the unified low-dimensional embedding. In other words, the weight matrices $W^m$ are computed and ${\mathcal{E}}_{Y^m} = \sum_i |\mathbf{y}_i^m - \sum_j W_{ij}^m \mathbf{y}_j^m|^2$ is solved for each $m = 1, \cdots, M$ separately. The low-dimensional embedding $\hat{Y}$ is then computed by $\hat{Y} = \sum_m \beta^m Y^m$, where $\sum_m \beta^m = 1$. An alternative multi-view LLE solution was proposed by Zong et al. (2017) to find a consensus manifold, which is then used for multi-view clustering via Non-negative Matrix Factorization; we refer to this approach as *MV-LLE*. This solution minimises the cost function by assuming a consensus weight matrix across all data-views, as given in equation (9). The optimisation is then solved by using the Entropic Mirror Descent Algorithm (EMDA) (Beck & Teboulle, 2003). In contrast to m-LLE and MV-LLE, multi-LLE combines the weight matrices obtained from each data-view, instead of the LLE embeddings. No comparisons were conducted between MV-LLE and the proposed multi-LLE, as the code of the MV-LLE algorithm is not publicly available.
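The consensus-weight construction lends itself to a compact implementation. The following NumPy sketch (our own minimal rendering, not the authors' code) computes the per-view reconstruction weights of equation (7), averages them as in equation (9), and embeds via the bottom eigenvectors of $M = (I-\hat{W})^T(I-\hat{W})$.

```python
import numpy as np

def lle_weights(X, k):
    """Reconstruction weights W of Eq. (7) for a single data-view."""
    N = X.shape[0]
    W = np.zeros((N, N))
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    for i in range(N):
        nbrs = np.argsort(D[i])[1:k + 1]        # k nearest neighbours of x_i
        Z = X[nbrs] - X[i]                      # centre the neighbourhood
        G = Z @ Z.T                             # local Gram matrix
        G += 1e-3 * np.trace(G) * np.eye(k)     # ridge term, needed if k > p
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                # enforce the sum-to-one constraint
    return W

def multi_lle(Xs, k, alphas, d=2):
    """Embedding from the consensus weight matrix of Eq. (9)."""
    N = Xs[0].shape[0]
    W_hat = sum(a * lle_weights(X, k) for a, X in zip(alphas, Xs))
    I = np.eye(N)
    M = (I - W_hat).T @ (I - W_hat)
    vals, vecs = np.linalg.eigh(M)              # eigenvalues in ascending order
    return vecs[:, 1:d + 1]                     # skip the constant bottom eigenvector
```

The small ridge term added to the local Gram matrix is the usual regularisation applied when the number of neighbours exceeds the input dimension.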
## 2.4 Multi-ISOMAP

ISOMAP aims to discover a low-dimensional embedding of high-dimensional data by maintaining the geodesic distances between all points (Tenenbaum et al., 2000); it is often regarded as an extension of Multi-Dimensional Scaling (MDS) (Kruskal, 1964). The ISOMAP algorithm comprises the following three steps:

Step 1. **A graph is defined.** Let $G \sim (V, E)$ define a neighbourhood graph, with vertices $V$ representing all data points. The edge length between any two vertices $i, j \in V$ is defined by the distance metric $d_X(i, j)$, measured by the Euclidean distance. If a vertex $j$ does not belong to the $K$ nearest neighbours of $i$, then $d_X(i, j) = \infty$. The parameter $K$ is given as input, and it represents the connectedness of the graph $G$; as $K$ increases, more vertices are connected.

Step 2. **The shortest paths between all pairs of points in $G$ are computed.** The shortest path between vertices $i, j \in V$ is defined by $d_G(i, j)$. Let $D_G \in \mathbb{R}^{|V| \times |V|}$ be a matrix containing the shortest paths between any vertices $i, j \in V$, defined by $(D_G)_{ij} = d_G(i, j)$. The most efficient known algorithm to perform this task is Dijkstra's Algorithm (Dijkstra, 1959). In large graphs, an alternative approach to Dijkstra's Algorithm would be to initialise $d_G(i, j) = d_X(i, j)$ and replace all entries by $d_G(i, j) = \min\{d_G(i, j),\ d_G(i, k) + d_G(k, j)\}$ for each $k = 1, \cdots, N$ in turn.

Step 3. **The low-dimensional embeddings are constructed.** The $p$th component of the $i$th low-dimensional embedding is given by $y_p^i = \sqrt{\lambda_p}\, u_p^i$, where $u_p^i$ is the $i$th component of the $p$th eigenvector and $\lambda_p$ is the $p$th eigenvalue, in decreasing order, of the matrix $\tau(D_G)$ (Tenenbaum et al., 2000). The operator $\tau$ is defined by $\tau(D) = -\frac{HSH}{2}$, where $S$ is the matrix of squared distances defined by $S_{ij} = D_{ij}^2$, and $H$ is defined by $H_{ij} = \delta_{ij} - \frac{1}{N}$. This is equivalent to applying classical MDS to $D_G$, leading to a low-dimensional embedding that best preserves the manifold's estimated intrinsic geometry.

Multi-ISOMAP is our proposal for adapting ISOMAP to multi-view data. Let $G^m \sim (V, E^m)$ be a neighbourhood graph obtained from data-view $X^{(m)}$ as defined in the first step of ISOMAP. All neighbourhood graphs are then combined into a single graph, $\tilde{G}$; the combination is achieved by computing the edge length as the averaged distance over the data-views, i.e. $d_{\tilde{G}}(i, j) = \sum_m w^m d_{G^m}(i, j)$. Once a combined neighbourhood graph is computed, multi-ISOMAP follows steps 2 and 3 of ISOMAP described above. For simplicity, the weights throughout this paper were set as $w^m = \frac{1}{M}, \forall m$. The multi-ISOMAP algorithm is presented in Appendix A. For completion, we have in addition adapted ISOMAP for multi-view visualisation following the framework of both m-SNE and m-LLE. Following the same logic, *m-ISOMAP* combines the ISOMAP embeddings of each data-view by taking the weighted average of those embeddings as the unified low-dimensional embedding. In other words, the low-dimensional embedding $\hat{Y}$ is obtained by computing $\hat{Y} = \sum_m \beta^m Y^m$, where $\sum_m \beta^m = 1$.
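A compact way to realise the combined-graph construction is sketched below: per-view kNN edge lengths are averaged into a single graph, all-pairs shortest paths are computed (here with SciPy's Dijkstra routine), and classical MDS is applied to the resulting geodesic distances. Function names and defaults are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def multi_isomap(Xs, k, d=2):
    """Combined-graph ISOMAP: average the per-view kNN edge lengths,
    compute all-pairs shortest paths, then apply classical MDS."""
    N = Xs[0].shape[0]
    G = np.zeros((N, N))
    for X in Xs:
        sq = np.sum(X ** 2, axis=1)
        D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0))
        A = np.full((N, N), np.inf)             # infinity marks a missing edge
        for i in range(N):
            nbrs = np.argsort(D[i])[1:k + 1]
            A[i, nbrs] = D[i, nbrs]
        A = np.minimum(A, A.T)                  # symmetrise the kNN graph
        G += A / len(Xs)                        # averaged edge lengths (w^m = 1/M)
    DG = shortest_path(G, method="D")           # Dijkstra, as in step 2
    S = DG ** 2                                 # tau(D) = -H S H / 2 (step 3)
    H = np.eye(N) - np.ones((N, N)) / N
    B = -H @ S @ H / 2
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]            # top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

As with single-view ISOMAP, $K$ must be large enough for the combined graph to be connected; otherwise some geodesic distances remain infinite.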
## 2.5 Parameter Tuning

The multi-view manifold learning algorithms were tested on real and synthetic data sets for which the samples can be separated into several clusters. The true clusters are known and they were used to tune the parameters of the methods. To quantify the clustering performance, we used the following four extrinsic measures: (i) Accuracy (ACC), (ii) Normalised Mutual Information (NMI) (Vinh et al., 2010), (iii) Rand Index (RI) (Rand, 1971) and (iv) Adjusted Rand Index (ARI) (Hubert & Arabie, 1985). All measures take values in the range [0, 1], with 0 expressing complete randomness and 1 perfect separation between clusters. The mathematical formulas of the four measures are presented in Appendix B. SNE, LLE and ISOMAP depend on parameters whose proper tuning leads to optimal results. LLE and ISOMAP depend on the number of nearest neighbours (NN). SNE depends on the perplexity ($Perp$) parameter, which is directly related to the number of nearest neighbours. Similarly, the multi-view extensions of the three methods depend on the same parameters. The choice of the parameter can influence the visualisations and in some cases present the data in separate maps (van der Maaten & Hinton, 2012). By assuming that the data samples belong to a number of clusters that we seek to identify, the performance of the algorithms was measured for a range of tuning parameter values, $S = \{2, 10, 20, 50, 80, 100, 200\}$. Note that for all algorithms, the parameter value cannot exceed the total number of samples in the data. For all manifold learning approaches, the following procedure was implemented to tune the optimal parameters of each method per data set:

1. The method was applied for all different parameter values in $S$.
2. The K-means algorithm was applied to the low-dimensional embeddings produced for each parameter value.
3. The performance of the chosen method was evaluated quantitatively by computing ACC, NMI, RI and ARI for all tested parameter values.

The optimal parameter value was finally selected based on the evaluation measures. Section 4.3 explores how the different approaches are affected by their parameter values. For the other subsections of Section 4, the optimal parameter choice per approach was used for the comparison of the multi-view approaches. Section 4.3 presents the process of parameter tuning on the synthetic data analysed, and measures the performance of single-view and multi-view manifold learning algorithms. The same process was repeated for the real data analysed (see Appendix D.3 for more information).

## 3 Data

Data sets with different characteristics were analysed to explore and compare the proposed multi-view manifold learning algorithms under different scenarios (Table 1). The methods were evaluated on data sets that have a different number of data-views, clusters and sample sizes. The real data sets analysed are classified as heterogeneous, due to the nature of their data, while the synthetic data sets are classified as non-heterogeneous, since they were generated under the same conditions and distributions. Both high-dimensional (p ≫ N) and low-dimensional data sets were analysed. Through these comparisons, we wanted to investigate how the multi-view methods perform and how they compare with single-view methods. In this section, we describe the synthetic and real data sets analysed in the manuscript.
Some of the real data sets analysed have previously been used in the literature for examining different multi-view algorithms, for example, data integration (Wang et al., 2014) and clustering (Ou et al., 2018).

## 3.1 Synthetic Data

A motivational multi-view example was constructed to qualitatively evaluate the performance of multi-view manifold learning algorithms against their corresponding single-view algorithms. Its framework was designed specifically to produce distinct projections of the samples from each data-view. Additional synthetic data sets were generated to explore how the algorithms behave when the separation between the clusters exists, but is not as explicit as in the motivational example. All synthetic data were generated using the following process. For the same set of samples, a specified number of data-views were generated, with each data-view capturing different information about the samples. Each data-view $m$ follows a multivariate normal distribution with mean vector $\mu_m = (\mu_1, \cdots, \mu_{p_m})^T$ and covariance matrix $\Sigma_m = I_{p_m}$, where $p_m$ is the number of features in the $m$th data-view. The matrix $I_{p_m}$ represents a $p_m \times p_m$ identity matrix. For each data-view, different $\mu_m$ values were chosen to distinguish the clusters. Noise, $\epsilon$, following a multivariate normal distribution with mean $\mu_{\epsilon} = 0$ and covariance matrix $\Sigma_{\epsilon} = I_{p_m}$, was added to increase randomness within each data-view. Noise, $\epsilon$, increases the variability within a given data-view. The purpose of this additional variability is to assess whether the algorithms are able to equally capture information from all data-views and are not biased towards the data-view(s) with a higher variability. Thus, noise, $\epsilon$, was only included in selected data-views and not in the rest. Although this strategy is equivalent to sampling once using a larger variance, the extra noise explicitly distinguishes the data-view with the higher variability from the rest. In other words, $X \sim MVN(\mu_m, \Sigma_m) + \epsilon$, where $MVN$ represents the multivariate normal distribution. Distinct polynomial functions (*e.g.* $h(x) = x^4 + 3x^2 + 5$) were randomly generated for each data-view and applied on the samples to express non-linearity. This last step was performed to ensure that linear dimensionality reduction methods (*e.g.* PCA) would not successfully cluster the data. The three synthetic data sets with their characteristics are described next.

## 3.1.1 Motivational Multi-View Data Scenario (MMDS)

Assume that the true underlying structure of the data separates the samples into three true clusters, as presented in Figure 1. Each synthetic data-view describes the samples differently, which results in three distinct clusterings, none of which reflects the global underlying truth. In particular, the first view separates only cluster C from the others (View 1 in Figure 1), the second view separates only cluster B (View 2) and the third view separates only cluster A (View 3). In this scenario, only the third data-view contained an extra noise parameter, $\epsilon$, resulting in a data-view with a higher variability than the other two data-views.

## 3.1.2 Noisy Data-View Scenario (NDS)

A synthetic data set which consists of 4 data-views and 3 true underlying clusters was generated. The first three data-views follow the same structure as MMDS, while the 4th data-view represents a completely noisy data-view, *i.e.* with all data points lying in a single cluster.
The rationale for creating such a data set is to examine the effect of noisy data-views in multi-view visualisation and clustering. This data set was used to show that the multi-view approaches can identify uninformative data-views and discard them. For $n = 300$ equally balanced data samples, the data-views contain $p_m = 100, \forall m = 1, 2, 3, 4$, features. To summarise, NDS adds a noisy data-view to the MMDS data set.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 1: **Motivational Multi-view Data Scenario (MMDS).** Each data-view captures different characteristics of the three clusters, and thus produces different clusterings.

![8_image_2.png](8_image_2.png)

![8_image_3.png](8_image_3.png)

Figure 2: **More Clusters than data-views Scenario (MCS).** In this example, there are 3 data-views but 5 true underlying clusters. Each data-view captures different characteristics of the five clusters, and thus produces different clusterings.

## 3.1.3 More Clusters Than Data-Views Scenario (MCS)

A synthetic data set that was generated similarly to MMDS but with 5 true underlying clusters instead of 3. The true underlying structure of each data-view is shown in Figure 2. In this data set, $p_m = 100, \forall m$, features were generated on $n = 500$ equally balanced data samples. In comparison with MMDS, MCS contains more clusters, but the same number of data-views. Similarly to MMDS and NDS, in this scenario only the third data-view contained an extra noise parameter, $\epsilon$, resulting in a data-view with a higher variability than the other two data-views.

## 3.2 Real Data

The three real data sets analysed in the study are described below.

## 3.2.1 Cancer Types²

This data set includes 65 patients with breast cancer, 82 with kidney cancer and 106 with lung cancer. For each patient the three data-views are available: (a) genomics ($p_1 = 10299$ genes), (b) epigenomics ($p_2 = 22503$ methylation sites) and (c) transcriptomics ($p_3 = 302$ mi-RNA sequences). The aim is to cluster patients by their cancer type (Wang et al., 2014).

²http://compbio.cs.toronto.edu/SNF/SNF/Software.html

## 3.2.2 Caltech7³

Caltech-101 contains pictures of objects belonging to 101 categories. This publicly available subset of Caltech-101 contains 7 classes. It consists of 1474 objects on six data-views: (a) Gabor ($p_1 = 48$), (b) wavelet moments ($p_2 = 40$), (c) CENTRIST ($p_3 = 254$), (d) histogram of oriented gradients ($p_4 = 1984$), (e) GIST ($p_5 = 512$), and (f) local binary patterns ($p_6 = 928$) (Fei-Fei et al., 2006).

## 3.2.3 Handwritten Digits⁴

This data set consists of features on handwritten numerals (0-9) extracted from a collection of Dutch utility maps. Per class, 200 patterns have been digitised in binary images (in total there are 2000 patterns). These digits are represented in terms of six data-views: (a) Fourier coefficients of the character shapes ($p_1 = 76$), (b) profile correlations ($p_2 = 216$), (c) Karhunen-Loève coefficients ($p_3 = 64$), (d) pixel averages in 2 × 3 windows ($p_4 = 240$), (e) Zernike moments ($p_5 = 47$) and (f) morphological features ($p_6 = 6$) (Dua & Graff, 2017). The handwritten digits data set is characterised by having perfectly balanced data samples; each of the ten clusters contains exactly 200 numerals. On the other hand, caltech7 is an imbalanced data set, with the first two clusters containing many more samples than the others. The number of samples in each cluster is {A: 435, B: 798, C: 52, D: 34, E: 35, F: 64, G: 56}.
The performance of the methods was explored on both the imbalanced caltech7 data set and a balanced version of the data, for which 50 samples from clusters A and B were randomly selected.

## 4 Results

In this section, we illustrate the application and evaluation of the proposed multi-view extensions of t-SNE, LLE and ISOMAP on real and synthetic data. Comparisons between the multi-view solutions, along with their respective single-view solutions, are implemented. A trivial solution is to concatenate the features of all data-views into a single large data matrix and apply a single-view manifold learning algorithm on this data set. Since it is likely that each data-view has different variability, each data-view was firstly normalised before concatenation to ensure the same variability across all data-views. Normalisation was achieved by removing the mean and dividing by the standard deviation of the features in all data-views. In the following subsections we have addressed the following:

1. **Can multi-view manifold learning approaches obtain better visualisations than single-view approaches?** The performance of the multi-view approaches in visualising the underlying structure of the data is illustrated. It is shown how the underlying structure is misrepresented when individual data sets or the concatenated data set are visualised.

2. **The visualisations of multi-view approaches are quantitatively evaluated using K-means.** By extracting the low-dimensional embeddings of the multi-view approaches and inputting them as features in the clustering algorithm K-means, we have quantitatively evaluated the performance of the approaches for identifying underlying clusters and patterns within the data.

3. **The effect of the parameter values on the multi-view manifold learning approaches was explored.** As discussed, the proposed multi-view manifold approaches depend on a parameter that requires tuning. In a series of experiments, we investigated the effect that the parameter value has on each approach. This was done by exploring both the visualisations produced and by evaluating the clustering of the approaches for different parameter values.

4. **Should we use all available data-views? If some data-views contain more noise than signal, should we discard them?** These are two crucial questions that concern every researcher working with multi-view data; are all data-views necessary and beneficial to the final outcome?

³https://github.com/yeqinglee/mvdata
⁴https://archive.ics.uci.edu/ml/datasets/Multiple+Features

| | Data Set | Views (M) | Clusters (k) | Features (p_largest) | Samples (N) | Heterogeneous | High-dimensional |
|-----------|--------------------|-----------|--------------|----------------------|-------------|---------------|------------------|
| Real | Cancer Types | 3 | 3 | 22503 | 253 | ✓ | ✓ |
| | Caltech7 | 6 | 7 | 1984 | 1474 | ✓ | ✓ |
| | Handwritten Digits | 6 | 10 | 240 | 2000 | ✓ | ✗ |
| Synthetic | MMDS | 3 | 3 | 300 | 300 | ✗ | ✗ |
| | NDS | 4 | 3 | 400 | 300 | ✗ | ✓ |
| | MCS | 3 | 5 | 300 | 500 | ✗ | ✗ |

Table 1: **The characteristics of the data sets analysed.** The number of views, number of clusters, the largest number of features amongst the data-views, and the number of samples for both the real and synthetic data sets analysed are presented. Real data are taken as heterogeneous, whereas the synthetic data are regarded as homogeneous. High-dimensional data contain more features than samples (p ≫ N).
We have addressed these questions by analysing data sets that contain noisy data. By investigating both the produced visualisations and evaluating the clusterings obtained with and without the noisy data, we discuss why it is not always beneficial to include all available data-views. The section ends by proposing alternative variations of the best-performing approach, multi-SNE. Firstly, we propose an approach for automatically computing the weights assigned to each data-view. In addition, we explore an alternative pre-training step for multi-SNE, where instead of conducting PCA on each data-view, multi-CCA is applied on the multiple data-views for reducing their dimensions into a latent space of uncorrelated embeddings (Rodosthenous et al., 2020).

## 4.1 Comparison Between Single-View And Multi-View Visualisations

Visualising multi-view data can be trivially achieved either by looking at the visualisations produced by each data-view, or by concatenating all features into a long vector. T-SNE, LLE and ISOMAP applied on every single data-view of the MMDS data set separately capture the correct local underlying structure of the respective data-view (Figure 3). However, by design, they cannot capture the global structure of the data. SNEconcat, LLEconcat and ISOMAPconcat represent the trivial solutions of concatenating the features of all data-views before applying t-SNE, LLE and ISOMAP, respectively. These trivial solutions capture mostly the structure of the third data-view, because that data-view has a higher variability between the clusters than the other two. Multi-SNE, multi-LLE and multi-ISOMAP produced the best visualisations out of all SNE-based, LLE-based and ISOMAP-based approaches, respectively. These solutions were able to clearly separate the three true clusters, with multi-SNE showing the clearest separation between them. Even though m-SNE separates the samples according to their corresponding clusters, this separation would not be recognisable if the true labels were unknown, as the clusters are not sufficiently separated. The visualisation by m-LLE was similar to the ones produced by single-view solutions on concatenated features, while m-ISOMAP bundles all samples into a single cluster. By visualising the MMDS data set via both single-view and multi-view approaches, multi-SNE has shown the most promising results (Figure 3). We have shown that single-view analyses may lead to conflicting results, while multi-view approaches are able to capture the true underlying structure of the synthetic MMDS.

## 4.2 Multi-View Manifold Learning For Clustering

It is very common in studies to utilise the visualisation of data to identify any underlying patterns or clusters within the data samples. Here, it is illustrated how the multi-view approaches can be used to identify such clusters. To quantify the visualisation of the data, we applied the K-means algorithm on the low-dimensional embeddings produced by the multi-view manifold learning algorithms.

![11_image_0.png](11_image_0.png)

Figure 3: **Visualisations of MMDS.** Projections produced by the SNE, LLE and ISOMAP based algorithms. The projections within the red frame present our proposed methods: multi-SNE, multi-LLE and multi-ISOMAP. The parameters Perp and NN refer to the optimised perplexity and number of nearest neighbours, respectively.

![11_image_1.png](11_image_1.png)

Figure 4: **Multi-SNE visualisations of handwritten digits.** Projections produced by multi-SNE with perplexity Perp = 10. Colours present the clustering of the data points by (a) K-means, and (b) Ground truth.
If the two-dimensional embeddings can separate the data points to their respective clusters quantitatively with high accuracy via a clustering algorithm, then those clusters are expected to be qualitatively separated and visually shown in two dimensions. For all examined data sets (synthetic and real), the number of clusters (ground truth) within the samples is known, which attracts the implementation of K-means over alternative clustering algorithms. The number of clusters was used as the input parameter, K, of the K-means algorithm and by computing the clustering measures we evaluated whether the correct sample allocations were made. The proposed multi-SNE, multi-LLE and multi-ISOMAP approaches were found to outperform their competitive multi-view extensions (m-SNE, m-LLE, m-ISOMAP) as well as their concatenated versions (SNEconcat, LLEconcat, ISOMAPconcat) (Tables 2 and 3). For the majority of the data sets, the multi-SNE approach was found to overall outperform all other approaches. Figure 4 shows a comparison between the true clusters of the handwritten digits data set and the clusters identified by K-means. The clusters reflecting the digits 6 and 9 are clustered together, but all remaining clusters are well separated and agree with the truth. Multi-SNE applied on caltech7 produces a good visualisation, with clusters A and B being clearly separated from the rest (Figure 5b). Clusters C and G are also well-separated, but the remaining three clusters are bundled together. Applying K-means to that low-dimensional embedding does not capture the true structure of the data (Table 2). It provides a solution with all clusters being equally sized (Figure 5a) and thus its quantitative evaluation is misleading. Motivated by this result, we have further explored the performance of the proposed approaches on a balanced version of the caltech7 data set (generated as described in Section 3.2). Similarly to the visualisation of the original data set, the visualisation of the balanced caltech7 data set shows clusters A, B, C and G to be well-separated, while the remaining are still bundled together (Figures 5c and 5d). Through the conducted work, it was shown that the multi-view approaches proposed in the manuscript generate low-dimensional embeddings that can be used as input features in a clustering algorithm (as, for example, the K-means algorithm) for identifying clusters that exist within the data set. We have illustrated that the proposed approaches outperform existing multi-view approaches and the visualisations produced by multi-SNE are very close to the ground truth of the data sets. Alternative clustering algorithms, that do not require the number of clusters as input, can be considered as well. For example, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) measures the density around each data point and does not require the true number of clusters as input (Ester et al., 1996). In situations where the true number of clusters is unknown, DBSCAN would be preferable over K-means. For completeness of our work, DBSCAN was applied on two of the real data sets explored, with similar results observed as the ones with K-means. The proposed multi-SNE approach was the best-performing method for partitioning the data samples. The analysis using DBSCAN can be found in Appendix D.8.
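The quantitative pipeline used throughout this section (embed, cluster with K-means, score against the known labels) is straightforward to reproduce. A minimal sketch with scikit-learn and SciPy is given below; the accuracy computation matches cluster ids to labels with the Hungarian algorithm, a standard choice that we assume here since the manuscript does not spell out its matching step.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (normalized_mutual_info_score, rand_score,
                             adjusted_rand_score)
from scipy.optimize import linear_sum_assignment

def clustering_scores(Y, labels, k):
    """Cluster the 2-D embedding Y with K-means and evaluate against
    the known labels via ACC, NMI, RI and ARI."""
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Y)
    classes = np.unique(labels)
    # Contingency table: predicted cluster x true class.
    cont = np.array([[np.sum((pred == i) & (labels == c)) for c in classes]
                     for i in range(k)])
    # Best one-to-one matching of clusters to classes (Hungarian algorithm).
    row, col = linear_sum_assignment(-cont)
    acc = cont[row, col].sum() / len(labels)
    return {"ACC": acc,
            "NMI": normalized_mutual_info_score(labels, pred),
            "RI": rand_score(labels, pred),
            "ARI": adjusted_rand_score(labels, pred)}
```

The tuning loop of Section 2.5 then amounts to calling this function on the embedding obtained for each value in $S = \{2, 10, 20, 50, 80, 100, 200\}$ and keeping the best-scoring parameter.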
An important observation made was that caution needs to be taken when data sets with imbalanced clusters are analysed, as the quantitative performance of the approaches on such data sets is not very robust.

## 4.3 Optimal Parameter Selection

SNE, LLE and ISOMAP depend on a parameter that requires tuning. Even though the parameter is defined differently in each algorithm, it is always related to the number of nearest neighbours. As described earlier, the optimal parameter was found by comparing the performance of the methods over a range of parameter values, $S = \{2, 10, 20, 50, 80, 100, 200\}$. In this section, the synthetic data sets NDS and MCS were analysed, because both data sets separate the samples into known clusters by design, and evaluation via clustering measures would therefore be appropriate. To find the optimal parameter value, the performance of the algorithms was evaluated by applying K-means on the low-dimensional embeddings and comparing the resulting clusterings against the truth. Once the optimal parameter was found, we confirmed that the clusters were visually separated by manually inspecting the two-dimensional embeddings. Since the data in NDS and MCS are perfectly balanced and were generated for clustering, this approach can effectively evaluate the data visualisations. On NDS, single-view SNE, LLE and ISOMAP algorithms produced a misclustering error of 0.3, meaning that a third of the samples was incorrectly clustered (Figure 6b). This observation shows that single-view methods capture the true local underlying structure of each synthetic data-view. The only exception for NDS is the fourth data-view, for which the error is closer to 0.6, *i.e.* clusters are assigned essentially at random (which follows the simulation design, as it was designed to be a random data-view). After concatenating the features of all data-views, the performance of single-view approaches remains poor (Figure 6a). The variance of the misclustering error of this solution is much greater, suggesting that single-view manifold learning algorithms on concatenated data are not robust and thus not reliable. Increasing the noise level (either by incorporating additional noisy data-views, or by increasing the dimensions of the noisy data-view) in this synthetic data set had little effect on the overall performance of the multi-view approaches (see Appendix D.4 for more information).

![13_image_0.png](13_image_0.png)

Figure 5: **Multi-SNE visualisations of caltech7 and its balanced subset.** Projections produced by multi-SNE with perplexity Perp = 80 and Perp = 10 for the original and balanced caltech7 data set, respectively. Colours present the clustering of the data points by (a), (c) K-means, and (b), (d) Ground truth. (a), (b) present the data points of the original caltech7 data set, while (c), (d) are on its balanced subset.

On both NDS and MCS, multi-LLE and multi-SNE were found to be sensitive to the choice of their corresponding parameter value (Figures 6a and 7). While multi-LLE performed best when the number of nearest neighbours was low, multi-SNE provided better results as perplexity increased. On the other hand, multi-ISOMAP had the highest NMI value when the parameter was high. Overall, ISOMAP-based multi-view algorithms showed higher variability than the other multi-view methods, which makes them less favourable solutions.
| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|---------------------|-----------------------|---------------|---------------|---------------|---------------|
| Handwritten Digits | SNEconcat [Perp=10] | 0.717 (0.032) | 0.663 (0.013) | 0.838 (0.005) | 0.568 (0.026) |
| | m-SNE [Perp=10] | 0.776 (0.019) | 0.763 (0.009) | 0.938 (0.004) | 0.669 (0.019) |
| | multi-SNE [Perp=10] | 0.882 (0.008) | 0.900 (0.005) | 0.969 (0.002) | 0.823 (0.008) |
| | LLEconcat [NN=10] | 0.562 | 0.560 | 0.871 | 0.441 |
| | m-LLE [NN=10] | 0.632 | 0.612 | 0.896 | 0.503 |
| | multi-LLE [NN=5] | 0.614 | 0.645 | 0.897 | 0.524 |
| | ISOMAPconcat [NN=20] | 0.634 | 0.619 | 0.905 | 0.502 |
| | m-ISOMAP [NN=20] | 0.636 | 0.628 | 0.898 | 0.477 |
| | multi-ISOMAP [NN=5] | 0.658 | 0.631 | 0.909 | 0.518 |
| Caltech7 | SNEconcat [Perp=50] | 0.470 (0.065) | 0.323 (0.011) | 0.698 (0.013) | 0.290 (0.034) |
| | m-SNE [Perp=10] | 0.542 (0.013) | 0.504 (0.029) | 0.757 (0.010) | 0.426 (0.023) |
| | multi-SNE [Perp=80] | 0.506 (0.035) | 0.506 (0.006) | 0.754 (0.009) | 0.428 (0.022) |
| | LLEconcat [NN=100] | 0.425 | 0.372 | 0.707 | 0.305 |
| | m-LLE [NN=5] | 0.561 | 0.348 | 0.718 | 0.356 |
| | multi-LLE [NN=80] | 0.638 | 0.490 | 0.732 | 0.419 |
| | ISOMAPconcat [NN=20] | 0.408 | 0.167 | 0.634 | 0.151 |
| | m-ISOMAP [NN=5] | 0.416 | 0.306 | 0.686 | 0.261 |
| | multi-ISOMAP [NN=10] | 0.519 | 0.355 | 0.728 | 0.369 |
| Caltech7 (balanced) | SNEconcat [Perp=80] | 0.492 (0.024) | 0.326 (0.018) | 0.687 (0.023) | 0.325 (0.015) |
| | m-SNE [Perp=10] | 0.581 (0.011) | 0.444 (0.013) | 0.838 (0.022) | 0.342 (0.016) |
| | multi-SNE [Perp=20] | 0.749 (0.008) | 0.686 (0.016) | 0.905 (0.004) | 0.619 (0.009) |
| | LLEconcat [NN=20] | 0.567 | 0.348 | 0.725 | 0.380 |
| | m-LLE [NN=10] | 0.403 | 0.169 | 0.617 | 0.139 |
| | multi-LLE [NN=5] | 0.622 | 0.454 | 0.710 | 0.391 |
| | ISOMAPconcat [NN=5] | 0.434 | 0.320 | 0.791 | 0.208 |
| | m-ISOMAP [NN=5] | 0.455 | 0.299 | 0.797 | 0.224 |
| | multi-ISOMAP [NN=5] | 0.548 | 0.368 | 0.810 | 0.267 |
| Cancer types | SNEconcat [Perp=10] | 0.625 (0.143) | 0.363 (0.184) | 0.301 (0.113) | 0.687 (0.169) |
| | m-SNE [Perp=10] | 0.923 (0.010) | 0.839 (0.018) | 0.876 (0.011) | 0.922 (0.014) |
| | multi-SNE [Perp=20] | 0.964 (0.007) | 0.866 (0.023) | 0.902 (0.005) | 0.956 (0.008) |
| | LLEconcat [NN=10] | 0.502 | 0.122 | 0.091 | 0.576 |
| | m-LLE [NN=20] | 0.637 | 0.253 | 0.235 | 0.647 |
| | multi-LLE [NN=10] | 0.850 | 0.567 | 0.614 | 0.826 |
| | ISOMAPconcat [NN=5] | 0.384 | 0.015 | 0.009 | 0.556 |
| | m-ISOMAP [NN=10] | 0.390 | 0.020 | 0.013 | 0.558 |
| | multi-ISOMAP [NN=50] | 0.514 | 0.116 | 0.093 | 0.592 |

Table 2: **Clustering performance.** For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE, LLE or ISOMAP based). The overall superior method for each data set is depicted in **bold**. The parameters Perp and NN refer to the selected perplexity and number of nearest neighbours, respectively. They were optimised for the corresponding methods. Due to the non-convexity of SNE-based approaches, the mean (and standard deviation) of 100 separate runs on the same data is reported.

The performance of ISOMAP-based methods improved as the parameter value increased (Figure 7). However, they were outperformed by multi-LLE and multi-SNE on both synthetic data sets. Out of the three manifold learning foundations, LLE-based approaches depend the most on their parameter value to produce the optimal outcome.
Specifically, their performance dropped when the parameter value lay between 20 and 100 (Figure 6a). When the number of nearest neighbours was set to be greater than 100, their performance started to improve. Out of all LLE-based algorithms, the highest NMI and lowest misclustering error were obtained by multi-LLE (Figures 6 and 7). Our observations on the tuning parameters of LLE-based approaches are in agreement with earlier studies (Karbauskaitė et al., 2007; Valencia-Aguirre et al., 2009). Both Karbauskaitė et al. (2007) and Valencia-Aguirre et al. (2009) found that LLE performs best with a low number of nearest neighbours, and their conclusions reflect the performance of multi-LLE, which likewise performed best at low values of the tuning parameter.

Even though m-SNE performed better than single-view methods in terms of both clustering and error variability, multi-SNE produced the best results (Figures 6 and 7). In particular, multi-SNE outperformed all algorithms presented in this paper on both NDS and MCS. Even though it performed poorly for low perplexity values, its performance improved for *Perp* ≥ 20.

| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|-----------------------|---------------------|---------------|---------------|---------------|---------------|
| NDS | SNEconcat [Perp=80] | 0.747 (0.210) | 0.628 (0.309) | 0.817 (0.324) | 0.598 (0.145) |
| | m-SNE [Perp=50] | 0.650 (0.014) | 0.748 (0.069) | 0.766 (0.022) | 0.629 (0.020) |
| | multi-SNE [Perp=80] | 0.989 (0.006) | 0.951 (0.029) | 0.969 (0.019) | 0.987 (0.009) |
| | LLEconcat [NN=5] | 0.606 (0.276) | 0.477 (0.357) | 0.684 (0.359) | 0.446 (0.218) |
| | m-LLE [NN=20] | 0.685 (0.115) | 0.555 (0.134) | 0.768 (0.151) | 0.528 (0.072) |
| | multi-LLE [NN=20] | 0.937 (0.044) | 0.768 (0.042) | 0.922 (0.028) | 0.823 (0.047) |
| | ISOMAPconcat [NN=100] | 0.649 (0.212) | 0.528 (0.265) | 0.750 (0.286) | 0.475 (0.133) |
| | m-ISOMAP [NN=5] | 0.610 (0.234) | 0.453 (0.221) | 0.760 (0.280) | 0.386 (0.138) |
| | multi-ISOMAP [NN=300] | 0.778 (0.112) | 0.788 (0.234) | 0.867 (0.194) | 0.730 (0.094) |
| MCS | SNEconcat [Perp=200] | 0.421 (0.200) | 0.215 (0.185) | 0.711 (0.219) | 0.173 (0.089) |
| | m-SNE [Perp=2] | 0.641 (0.069) | 0.670 (0.034) | 0.854 (0.080) | 0.575 (0.055) |
| | multi-SNE [Perp=50] | 0.919 (0.046) | 0.862 (0.037) | 0.942 (0.052) | 0.819 (0.018) |
| | LLEconcat [NN=50] | 0.569 (0.117) | 0.533 (0.117) | 0.796 (0.123) | 0.432 (0.051) |
| | m-LLE [NN=20] | 0.540 (0.079) | 0.627 (0.051) | 0.819 (0.077) | 0.487 (0.026) |
| | multi-LLE [NN=20] | 0.798 (0.059) | 0.647 (0.048) | 0.872 (0.064) | 0.607 (0.022) |
| | ISOMAPconcat [NN=150] | 0.628 (0.149) | 0.636 (0.139) | 0.834 (0.167) | 0.526 (0.071) |
| | m-ISOMAP [NN=5] | 0.686 (0.113) | 0.660 (0.106) | 0.841 (0.119) | 0.565 (0.051) |
| | multi-ISOMAP [NN=300] | 0.717 (0.094) | 0.630 (0.101) | 0.852 (0.118) | 0.570 (0.044) |

Table 3: **Clustering performance.** For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE, LLE or ISOMAP based). The overall superior method for each data set is depicted in **bold**. The parameters *Perp* and NN refer to the selected perplexity and number of nearest neighbours, respectively. They were optimised for the corresponding methods.

![15_image_0.png](15_image_0.png)

Figure 6: **NDS evaluation measures.** **(a)** NMI values along different parameter values on all manifold learning algorithms and **(b)** misclustering error at the optimal parameter values.
Multi-SNE was the algorithm with the lowest error variance, making it a robust and preferable solution.

![16_image_0.png](16_image_0.png)

Figure 7: **MCS evaluation measures.** NMI values along different parameter values on all SNE, LLE and ISOMAP based algorithms.

The four implemented measures (Accuracy, NMI, RI and ARI) use the true clusters of the samples to evaluate the clustering performance. In situations where the cluster allocation is unknown, alternative clustering evaluation measures can be used, such as the Silhouette score (Rousseeuw, 1987). The Silhouette score, in contrast to the other measures, does not require the cluster allocation as input and is a widely used approach for identifying the best number of clusters and the clustering allocation in an unsupervised setting. Evaluating the clustering performance of the methods via the Silhouette score agrees with the other four evaluation measures, with multi-SNE producing the highest value out of all multi-view manifold learning solutions. The Silhouette score of all methods applied on the MCS data set can be found in Appendix D.7.

The same process of parameter tuning was implemented for the real data sets and the resulting performance is presented in Appendix D.3. In contrast to the synthetic data, multi-SNE on the cancer types data performed best at low perplexity values. For the remaining data sets, its performance was stable across all parameter values. With the exception of the cancer types data, the performance of LLE-based solutions follows their behaviour on the synthetic data.

## 4.4 Optimal Number Of Data-Views

It is common to think that more information would lead to better results, and in theory that should be the case. However, in practice that is not always true (Kumar et al., 2011). Using the cancer types data set, we explored whether the visualisations and clusterings are improved if all or only a subset of the data-views are used. With three available data-views, we implemented a multi-view visualisation on three combinations of two data-views and a single combination of three data-views.

The genomics data-view provides a reasonably good separation of the three cancer types, whereas the miRNA data-view fails in this task, as it provides a visualisation that reflects random noise (first column of plots in Figure 8). This observation is validated quantitatively by evaluating the produced t-SNE embeddings (Table 8 in Appendix D.5). Concatenating the features from the different data-views before implementing t-SNE does not improve the final outcome of the algorithm, regardless of the data-view combination. Overall, multi-view manifold learning algorithms have improved the data visualisation to a great extent. When all three data-views are considered, both multi-SNE and m-SNE provide a good separation of the clusters (Figure 8). However, the true cancer types can be identified perfectly when the miRNA data-view is discarded. In other words, the optimal solution on this data set is obtained when only the genomics and epigenomics data-views are used. That is because the miRNA data-view contains little information about the cancer types and adds random noise, which makes the task of separating the data points more difficult. This observation was also noted between the visualisations of MMDS and NDS (Figure 9). The only difference between the two synthetic data sets is the additional noisy data-view in NDS. Even though NDS separates the samples into their corresponding clusters, the separation is not as clear as it is in the projection of MMDS via multi-SNE.
In agreement with the exploration of the cancer types data set, it is favourable to discard noisy data-views in the implementation of multi-view manifold learning approaches. It is not always a good idea to include all available data-views in multi-view manifold learning algorithms; some data-views may contribute mostly noise, which would result in a worse visualisation than discarding those data-views entirely. The noise of a data-view with unknown labels may be spotted in a single-view t-SNE plot (all data points in a single cluster), or identified, if possible, via quantitative measures such as the signal-to-noise ratio.

## 4.5 Multi-SNE Variations

This section presents two alternative variations of multi-SNE, including automatic weight adjustments and multi-CCA as a pre-training step for reducing the dimensions of the input data-views.

![18_image_0.png](18_image_0.png)

Figure 8: **Visualisations of cancer types.** Projections produced by all SNE-based manifold learning algorithms on all possible combinations between the three data-views in the cancer types data set. The parameter *Perp* refers to the selected perplexity, which was optimised for the corresponding methods.

## 4.5.1 Automated Weight Adjustments

A simple weight-updating approach is proposed based on the KL-divergence measure of each data-view. This simple weight-updating approach guarantees that more weight is given to the data-views producing lower KL-divergence measures and that no data-view is completely discarded from the algorithm. Recall that KL(P||Q) ∈ [0, ∞), with KL(P||Q) = 0 if the two distributions, P and Q, are perfectly matched. Let $k = (k^{(1)}, \cdots, k^{(M)})$ be a vector, where $k^{(m)} = KL(P^{(m)}||Q)$, ∀m ∈ {1, · · · , M}, and initialise the weight vector $w = (w^{(1)}, \cdots, w^{(M)})$ by $w^{(m)} = \frac{1}{M}$, ∀m. To adjust the weights of each data-view, the following steps are performed at each iteration (a minimal code sketch of these two steps is given after Figure 10 below):

1. Normalise the KL-divergences by $k^{(m)} = \frac{k^{(m)}}{\sum_{i=1}^{M} k^{(i)}}$. This step ensures that $k^{(m)} \in [0,1]$, ∀m, and that $\sum_{m} k^{(m)} = 1$.

2. Measure the weights for each data-view by $w^{(m)} = 1 - k^{(m)}$. This step ensures that the data-view with the lowest KL-divergence value receives the highest weight.

Based on the analysis in Section 4.4, we know that the cancer types and NDS data sets contain noisy data-views and thus multi-SNE performs better when they are entirely discarded. Here, we assume that this information is unknown, and the proposed weight-updating approach is implemented on those two data sets to test whether the weights are adjusted correctly according to the noise level of each data-view.

The proposed weight-adjustment process, which looks at the produced KL-divergence between each data-view and the low-dimensional embeddings, distinguishes which data-views contain the most noise, and the weight values are updated accordingly (Figure 10b). In cancer types, transcriptomics (miRNA) receives the lowest weight, while genomics (Genes) was given the highest value. This weight adjustment is in agreement with the qualitative (t-SNE plots) and quantitative (clustering) evaluations performed in Section 4.4.

![19_image_0.png](19_image_0.png)

Figure 9: Multi-SNE visualisations of MMDS and NDS. Projections produced by multi-SNE with perplexity Perp = 100 for both MMDS and NDS.

![19_image_1.png](19_image_1.png)

Figure 10: Cancer types and NDS with automated weight adjustments. The first row presents the produced visualisations of multi-SNE with the automated weight adjustment procedure implemented. The second row presents the weights assigned to each data-view at each step of the iteration. For both data sets, the iterations ran for a maximum of 1,000 steps.
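As a concrete illustration, the two weight-update steps above can be written in a few lines of Python. This is a minimal sketch, assuming the per-view KL-divergences have already been computed during the multi-SNE iterations; it is not the full implementation released with the paper.

```python
import numpy as np

def update_weights(kl_per_view):
    """One weight-adjustment step for multi-SNE.
    kl_per_view: KL(P^(m) || Q) for each of the M data-views."""
    k = np.asarray(kl_per_view, dtype=float)
    k = k / k.sum()       # step 1: normalise so each k^(m) lies in [0, 1] and they sum to 1
    return 1.0 - k        # step 2: the view with the lowest KL gets the highest weight

# e.g. three informative views and one noisy view with a large KL-divergence:
print(update_weights([0.8, 0.9, 0.85, 2.5]))   # the last view receives the lowest weight
```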
In NDS, X(4), which represents the noisy data-view, received the lowest weight, and the other data-views had around the same weight value, as they all affect the final outcome equally.

The proposed weight-adjustment process updates the weights at each iteration. For the first 100 iterations, the weights do not change, as the algorithm adjusts to the produced low-dimensional embeddings (Figure 10b). In NDS, the weights converge after 250 iterations, while in cancer types they are still being updated even after 1,000 iterations. The changes recorded are, however, small, and the weights can be said to have stabilised. The low-dimensional embeddings produced on NDS with weight adjustments separate the three clusters clearly, an observation missed without the implementation of the weight-updating approach (Figure 10a); the result resembles the MMDS (*i.e.* without noisy data-view) multi-SNE plot (Figure 9). The automatic weight-adjustment process identifies the informative data-views by allocating them a higher weight value than the noisy data-views. This observation was found to hold even when a data set contains more noisy than informative data-views (see Appendix D.4 for further details). The produced embeddings on cancer types do not separate the three clusters as clearly as multi-SNE without the noisy data-view, but they show a clearer separation than multi-SNE on the complete data set without weight adjustments.

The weights produced by this weight-adjustment approach can indicate the importance of each data-view in the final lower-dimensional embedding. For example, data-views with very low weights may be assumed futile, and a better visualisation may be produced if those data-views are discarded. The actual weight values assigned to each data-view do not carry any further meaning.

## 4.5.2 Multi-CCA As Pre-Training

As mentioned earlier, van der Maaten & Hinton (2008) proposed the implementation of PCA as a pre-training step for t-SNE to reduce the computational costs, provided that the fraction of variance explained by the principal components is high. In this paper, pre-training via PCA was implemented in all variations of SNE. Alternative linear dimensionality reduction methods may be considered, especially for multi-view data. In addition to reducing the dimensions of the original data, such methods can capture information between the data-views. For example, Canonical Correlation Analysis (CCA) captures relationships between the features of two data-views by producing two latent low-dimensional embeddings (canonical vectors) that are maximally correlated between them (Hotelling, 1936; Rodosthenous et al., 2020). Rodosthenous et al. (2020) demonstrated that multi-CCA, an extension of CCA that analyses multiple (more than two) data-views, would be preferable as it reduces over-fitting. This section demonstrates the application of multi-CCA as pre-training in replacement of PCA (a two-view sketch of the idea is given below). This alteration of the multi-SNE algorithm was implemented on the handwritten digits data set. Multi-CCA was applied on all data-views, with 6 canonical vectors produced for each data-view (in this particular data set min(p1, p2, p3, p4, p5, p6) = 6). The variation of multi-CCA proposed by Witten & Tibshirani (2009) was used for the production of the canonical vectors, as it is computationally cheaper compared to others (Rodosthenous et al., 2020).
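The sketch below illustrates the pre-training idea in its simplest, two-view form. It uses scikit-learn's classical `CCA` in place of the sparse multi-CCA of Witten & Tibshirani (2009) (available in the R package PMA), and a plain single-view t-SNE on the stacked canonical vectors in place of multi-SNE, so it is an illustration of the idea rather than the paper's pipeline; in the actual pipeline, each view's canonical vectors enter multi-SNE as a separate data-view.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.manifold import TSNE

def cca_pretrain_embed(X1, X2, n_components=6, perplexity=80):
    """Two-view simplification of the multi-CCA pre-training step."""
    cca = CCA(n_components=n_components)
    Z1, Z2 = cca.fit_transform(X1, X2)   # canonical vectors, one set per view
    # Illustration only: stack the canonical vectors and embed with plain t-SNE;
    # multi-SNE would instead treat Z1 and Z2 as separate data-views.
    return TSNE(n_components=2, perplexity=perplexity).fit_transform(np.hstack([Z1, Z2]))
```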
Using these canonical vectors as input features, multi-SNE produced a qualitatively better visualisation than using the principal components as input features (Figure 11). With an integrative algorithm as pre-training, all 10 clusters are clearly separated, including the 6 and 9 numerals. Quantitatively, clustering via K-means was evaluated with ACC = 0.914, NMI = 0.838, RI = 0.968, ARI = 0.824. This evaluation suggests that, quantitatively, it performed better than the 10-dimensional embeddings produced by multi-SNE with PCA as pre-training.

## 4.5.3 Comparison Of Multi-SNE Variations

Section 2.2 introduced multi-SNE, a multi-view extension of t-SNE. In Sections 4.5.1 and 4.5.2, two variations of multi-SNE were presented. The former implements a weight-adjustment process which updates the weights allocated to each data-view at each iteration, and the latter uses multi-CCA instead of PCA as a pre-training step. In this section, multi-SNE and its two variations are compared to assess whether the variations introduced to the algorithm perform better than the initial proposal.

The implementation of the weight-adjustment process improved the performance of multi-SNE on all real data sets analysed (Table 4). The influence of multi-CCA as a pre-training step produced inconsistent results; for some data sets this step boosted the clustering performance of multi-SNE (*e.g.* handwritten digits), while for other data sets it did not (*e.g.* cancer types). From this analysis, we conclude that adjusting the weights of each data-view always improves the performance of multi-SNE. On the other hand, the choice of pre-training, either via PCA or multi-CCA, is not clear-cut and depends on the data at hand.

![21_image_0.png](21_image_0.png)

Figure 11: **Multi-SNE on handwritten digits, with multi-CCA as pre-training.** Projections of the handwritten digits data set, produced by multi-SNE. Multi-CCA was implemented on all data-views, with their respective canonical vectors acting as input features for multi-SNE.

| Variation | Handwritten digits | Cancer types | Caltech7 original | Caltech7 balanced | NDS | MCS |
|-----------------------------------------------|-------|-------|-------|-------|---------------|---------------|
| Multi-SNE without weight-adjustment | 0.822 | 0.964 | 0.506 | 0.733 | 0.989 (0.006) | 0.919 (0.046) |
| Multi-SNE with weight-adjustment | 0.883 | 0.994 | 0.543 | 0.742 | 0.999 (0.002) | 0.922 (0.019) |
| Multi-CCA multi-SNE without weight-adjustment | 0.901 | 0.526 | 0.453 | 0.713 | 0.996 (0.002) | 0.993 (0.005) |
| Multi-CCA multi-SNE with weight-adjustment | 0.914 | 0.562 | 0.463 | 0.754 | 0.996 (0.002) | 0.993 (0.005) |

Table 4: **Clustering performance of multi-SNE variations.** For each data set, **bold** highlights the multi-SNE variation with the best performance, i.e. the highest accuracy (ACC). Perplexity was optimised for all variations. The mean performance (and its standard deviation) is depicted for the synthetic data sets NDS and MCS.

## 5 Discussion

In this manuscript, we propose extensions of the well-known manifold learning approaches t-SNE, LLE, and ISOMAP for the visualisation of multi-view data sets. These three approaches are widely used for the visualisation of high-dimensional and complex data sets by performing non-linear dimensionality reduction. The increasing number of multiple data sets produced for the same samples in different fields emphasises the need for approaches that produce expressive presentations of the data.
We have illustrated that visualising each data set separately from the rest is not ideal, as it does not reveal the underlying patterns within the samples. In contrast, the proposed multi-view approaches can produce a single visualisation of the samples by integrating all available information from the multiple data-views. Python and R (only for multi-SNE) code of the proposed solutions can be found via the links provided in Appendix E.

Multi-view visualisation has been explored in the literature, with a number of approaches proposed in recent years. In this work, we propose multi-view visualisation approaches that extend the well-known manifold learning approaches t-SNE, LLE and ISOMAP. Through a comparative study on real and synthetic data, we have illustrated that the proposed approach, multi-SNE, provides a better and more robust solution than the other approaches proposed in the manuscript (multi-LLE and multi-ISOMAP) and the approaches proposed in the literature, including m-LLE, m-SNE, MV-tSNE2, j-SNE and j-UMAP (additional results in Appendices D.1, D.2). Although multi-SNE was computationally the most expensive multi-view manifold learning algorithm (Table 9 in Appendix F), it was found to be the solution with the superior performance, both qualitatively and quantitatively.

We have utilised the low-dimensional embeddings of the proposed algorithms as features in the K-means clustering algorithm, which we have used (1) to quantify the visualisations produced, and (2) to select the optimal tuning parameters for the manifold learning approaches. By investigating synthetic and real multi-view data sets, each with different data characteristics, we concluded that multi-SNE provides a more accurate and robust solution than any other single-view or multi-view manifold learning algorithm we have considered. Specifically, multi-SNE was able to produce the best data visualisations of all data sets analysed in this paper. Multi-LLE provides the second-best solution, while the multi-view ISOMAP algorithms have not produced competitive visualisations. By exploring several data sets, we concluded that multi-view manifold learning approaches can be effectively applied to heterogeneous and high-dimensional data (*i.e.* p ≫ n).

Through the conducted experiments, we have illustrated the effect of the parameters on the performance of the methods. We have shown that SNE-based methods perform best when the perplexity is in the range [20, 100]; LLE-based algorithms should take a small number of nearest neighbours, in the range [10, 50]; while the parameter of ISOMAP-based methods should be in the range [100, N], where N is the number of samples. We believe that the best approach to selecting the tuning parameters of the methods is to explore a wide range of different parameter values and assess the performance of the methods both qualitatively and quantitatively. If the produced visualisations vary a lot across a range of parameter values, then the data might be too noisy and the projections misleading. In this case, it might be beneficial to look at the weights obtained for each data-view and explore removing the noisiest data-views (depending on the number of data-views used and/or existing knowledge of noise in the data). Otherwise (if the produced visualisations vary only slightly between various parameter values), the parameter value with the best qualitative and quantitative performance can be selected.
Since t-SNE (and its extensions) is robust to perplexity (van der Maaten & Hinton, 2008), a strict optimal parameter value is not necessary to produce meaningful visualisations and clusterings, *i.e.* near-identical performance, both qualitatively and quantitatively, can be observed over a range of values. Cao & Wang (2017) proposed an automatic approach for selecting the perplexity parameter of t-SNE. According to the authors, the trade-off between the final KL-divergence and the perplexity value can lead to good embeddings, and they proposed the following criterion:

$$S(Perp)=2KL(P||Q)+\log(n)\frac{Perp}{n}\tag{10}$$

This solution can be extended to automatically select the multi-SNE perplexity, by modifying the criterion to:

$$S(Perp)=2\sum_{m}KL(P^{(m)}||Q)+\log(n)\frac{Perp}{n}\tag{11}$$

(A minimal sketch of this selection rule is given below, after Table 5.)

Our conclusions about the superiority of multi-SNE have been further supported by implementing the Silhouette score as an alternative approach for evaluating the clustering and tuning the parameters of the methods. In contrast to the measures used throughout the paper, the Silhouette score does not take into account the number of clusters that exist in the data set, illustrating the applicability of the multi-SNE approach in unsupervised learning problems where the underlying clusters of the samples are not known (Appendix D.7). Similarly, we have illustrated that alternative clustering algorithms can be implemented for clustering the samples. By inputting the produced multi-SNE embeddings into the DBSCAN algorithm, we further illustrated how the clusters of the samples can be identified (Appendix D.8).

Multi-view clustering is a topic that has gathered a lot of interest in recent years, with a number of approaches published in the literature. Such approaches include the ones proposed by Kumar et al. (2011), Liu et al. (2013), Sun et al. (2015), Ou et al. (2016) and Ou et al. (2018). The handwritten digits data set presented in the manuscript has been analysed in the aforementioned studies for multi-view clustering. Table 5 shows the NMI and accuracy values of the clusterings performed by the multi-view clustering algorithms (these values are as given in the corresponding articles). In addition, the NMI and accuracy values of the K-means clustering applied on the multi-SNE low-dimensional embeddings (from 2 to 10 dimensions) are presented in the table.

| | Kumar et al. (2011) | Liu et al. (2013) | Sun et al. (2015) | Ou et al. (2016) | Ou et al. (2018) | multi-SNE 2D | multi-SNE 3D | multi-SNE 5D | multi-SNE 10D |
|-----|--------|--------|--------|--------|-------|-------------|-------------|-------------|-------------|
| NMI | 0.768 | 0.804 | 0.876 | 0.785 | 0.804 | 0.863/0.838 | 0.894/0.841 | 0.897/0.848 | 0.899/0.850 |
| ACC | - | 0.881 | - | 0.876 | 0.880 | 0.822/0.914 | 0.848/0.915 | 0.854/0.922 | 0.849/0.924 |

Table 5: **Multi-view clustering performance on handwritten digits.** The NMI and accuracy (ACC) values of multi-view clustering approaches, as presented by the authors in their corresponding papers, are depicted along with the clustering performance of multi-SNE for a range of embedding dimensions (d = 2, 3, 5, 10). Each multi-SNE cell reports the performance with PCA/multi-CCA as pre-training, with weight adjustments in both variations.
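Returning to Equations (10)-(11), the selection rule amounts to running multi-SNE at each candidate perplexity, recording the final per-view KL-divergences, and keeping the perplexity that minimises the criterion. A minimal sketch follows; `run_multi_sne` is a hypothetical wrapper that returns the final KL-divergence of each data-view.

```python
import numpy as np

def criterion(kl_per_view, perp, n):
    """S(Perp) = 2 * sum_m KL(P^(m)||Q) + log(n) * Perp / n   (Equation 11)."""
    return 2.0 * float(np.sum(kl_per_view)) + np.log(n) * perp / n

def select_perplexity(views, n, run_multi_sne, candidates=(2, 10, 20, 50, 80, 100, 200)):
    # run_multi_sne(views, perp) -> final KL(P^(m)||Q) per view (hypothetical wrapper)
    scores = {p: criterion(run_multi_sne(views, p), p, n) for p in candidates}
    return min(scores, key=scores.get)   # the smallest criterion value wins
```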
On handwritten digits, the multi-SNE variation with multi-CCA as pre-training and weight adjustments had the best performance (Table 4). This variation of multi-SNE with K-means was compared against the multi-view clustering algorithms and was found to be the most accurate, while pre-training with PCA produced the highest NMI (Table 5). Applying K-means to the low-dimensional embeddings of multi-SNE can thus successfully cluster the observations of the data (see Appendix D.6 for a 3-dimensional visualisation via multi-SNE).

An important area of active current research, where manifold learning approaches such as t-SNE are commonly used as visualisation tools, is single-cell sequencing (scRNA-seq) and genomics. The last few years have seen fast developments of multi-omics single-cell methods, where, for example, multiple omics measurements are obtained for the same cells, such as transcripts by scRNA-seq and chromatin accessibility by a method known as scATAC-seq (Stuart et al., 2019). As recently discussed, the integration of this kind of multi-view single-cell data poses unique and novel statistical challenges (Argelaguet et al., 2021). We therefore believe our proposed multi-view methods will be very useful in producing an integrated visualisation of cellular heterogeneity and cell types studied by multi-omics single-cell methods in different tissues, in health and disease.

To illustrate the capability of multi-SNE for multi-omics single-cell data, we applied multi-SNE on a representative data set of scRNA-seq and ATAC-seq for human peripheral blood mononuclear cells (PBMC) 5 (Figure 12). Multi-SNE produced more intelligible projections of the cells compared to m-SNE and achieved higher evaluation scores (Appendix C). To test the quality of the obtained multi-view visualisation, we compared its performance against the multi-view clustering approach proposed by Liu et al. (2013) on this single-cell data. A balanced subset of this data set was used, which consists of two data-views on 9105 cells (*scRNA-seq* and *ATAC-seq* with 36000 and 108000 features, respectively). A detailed description of this data set, the pre-processing steps performed, and the projections of t-SNE and multi-SNE on the original data are provided in Appendix C (Figure 13). We found multi-SNE to have the highest accuracy (and an NMI close to that of the approach by Liu et al. (2013)), as seen in Figure 12. Qualitatively, the projections by t-SNE on scRNA-seq and by multi-SNE are similar, but multi-SNE separates the clusters better (especially between the CD4 and CD8 cell types; Figure 12, and Figure 13 in Appendix C). While it is known that ATAC-seq data is noisier and carries less information by itself, we see that the integration of the data-views results in a better overall separation of the different cell types in this data set. These results indicate the promise of multi-SNE as a unified multi-view visualisation and clustering approach for multi-omics single-cell data.

5 https://support.10xgenomics.com/single-cell-multiome-atac-gex/datasets/1.0.0/pbmc_granulocyte_sorted_10k

![24_image_0.png](24_image_0.png)

![24_image_1.png](24_image_1.png)

Figure 12: Visualisations and clustering performance on single-cell multi-omics data. Projections produced by t-SNE on RNA, ATAC and multi-SNE on both data-views, with perplexity Perp = 80 for the two t-SNE projections and Perp = 20 for multi-SNE. The clustering performance achieved by Liu et al. (2013), t-SNE and multi-SNE is presented.
The increasing number of multi-view, high-dimensional and heterogeneous data sets requires novel visualisation techniques that integrate these data into expressive and revealing representations. In this manuscript, new multi-view manifold learning approaches were presented and their performance across real and synthetic data sets with different characteristics was explored. The multi-SNE approach is proposed to provide a unified solution for robust visualisation and subsequent clustering of multi-view data.

## References

Ricard Argelaguet, Anna S. E. Cuomo, Oliver Stegle, and John C. Marioni. Computational principles and challenges in single-cell data integration. *Nature Biotechnology*, 2021.

Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. *Operations Research Letters*, 31(3):167–175, 2003.

Yanshuai Cao and Luyu Wang. Automatic selection of t-sne perplexity. *arXiv preprint arXiv:1708.03229*, 2017.

Edsger W Dijkstra. A note on two problems in connexion with graphs. *Numerische Mathematik*, 1(1):269–271, 1959.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.

Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. *Knowledge Discovery and Data Mining*, 34:226–331, 1996.

Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 28(4):594–611, 2006.

Yun Fu, Liangliang Cao, Guodong Guo, and Thomas S. Huang. Multiple feature fusion by subspace learning. In *Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval*, pp. 127–134, 2008.

Diego García, Ignacio Díaz, Daniel Pérez, Abel A. Cuadrado, Manuel Domínguez, and Antonio Morá. Interactive visualization for nilm in large buildings using non-negative matrix factorization. *Energy and Buildings*, 176:95–108, 2018.

Yehudit Hasin, Marcus Seldin, and Aldons Lusis. Multi-omics approaches to disease. *Genome Biology*, 18, 2017.

Geoffrey E. Hinton and Sam T. Roweis. Stochastic neighbor embedding. *Advances in Neural Information Processing Systems*, pp. 857–864, 2003.

Van Hoan Do and Stefan Canzar. A generalization of t-sne and umap to single-cell multimodal omics. *Genome Biology*, 22(1):1–9, 2021.

Paul Hoffman, Satija Lab, and Collaborators. Integrating scrna-seq and scatac-seq data. https://satijalab.org/seurat/articles/atacseq_integration_vignette.html, 2021. Accessed: 2021-03-18.

Harold Hotelling. Relations between two sets of variates. *Biometrika*, 28:321, 1936.

Lawrence Hubert and Phipps Arabie. Comparing partitions. *Journal of Classification*, 2:193–218, 1985.

Ian T. Jolliffe and Jorge Cadima. Principal component analysis: A review and recent developments. *Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences*, 374, 2016.

Samir Kanaan Izquierdo. Multiview pattern recognition methods for data visualization, embedding and clustering. 2017.

Rasa Karbauskaitė, Olga Kurasova, and Gintautas Dzemyda. Selection of the number of neighbours of each data point for the locally linear embedding algorithm. *Information Technology and Control*, 36(4), 2007.

Joseph B Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. *Psychometrika*, 29(1):1–27, 1964.

Solomon Kullback and Richard A. Leibler. On information and sufficiency. *The Annals of Mathematical Statistics*, 22:79–86, 1951.
Abhishek Kumar, Piyush Rai, and Hal Daumé III. Co-regularized multi-view spectral clustering. In *Advances in Neural Information Processing Systems 24*, pp. 1413–1421, 2011.

Gen Li, Xiaokang Liu, and Kun Chen. Integrative multi-view regression: Bridging group-sparse and low-rank models. *Biometrics*, 75:593–602, 2019.

Jialu Liu, Chi Wang, Jing Gao, and Jiawei Han. Multi-view clustering via joint nonnegative matrix factorization. *Proceedings of the 2013 SIAM International Conference on Data Mining*, pp. 252–260, 2013.

James MacQueen. Some methods for classification and analysis of multivariate observations. In *Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability*, volume 1, pp. 281–297, 1967.

Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. Umap: Uniform manifold approximation and projection. *Journal of Open Source Software*, 3(29):861, 2018.

Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm. In *Proceedings of the Fourteenth International Conference on Neural Information Processing Systems: Natural and Synthetic*, pp. 849–856, 2001.

Weihua Ou, Shujian Yu, Gai Li, Jian Lu, Kesheng Zhang, and Gang Xie. Multi-view non-negative matrix factorization by patch alignment framework with view consistency. *Neurocomputing*, 204:116–124, 2016.

Weihua Ou, Fei Long, Yi Tan, Shujian Yu, and Pengpeng Wang. Co-regularized multiview nonnegative matrix factorization with correlation constraint for representation learning. *Multimedia Tools and Applications*, 77:12955–12978, 2018.

William M. Rand. Objective criteria for the evaluation of clustering methods. *Journal of the American Statistical Association*, 66(336):846–850, 1971.

Theodoulos Rodosthenous, Vahid Shahrezaei, and Marina Evangelou. Integrating multi-omics data through sparse canonical correlation analysis for the prediction of complex traits: A comparison study. *Bioinformatics*, 36(17):4616–4625, 2020.

Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. *Journal of Computational and Applied Mathematics*, 20:53–65, 1987.

Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. *Science*, 290:2323–2326, 2000.

Lawrence Saul and Sam Roweis. An introduction to locally linear embedding. *Journal of Machine Learning Research*, 7, 2001.

Hualei Shen, Dacheng Tao, and Dianfu Ma. Multiview locally linear embedding for effective medical image retrieval. *PLOS ONE*, 8(12):1–21, 2013.

Ting Shu, Bob Zhang, and Yuan Yan Tang. Multi-view classification via a fast and effective multi-view nearest-subspace classifier. *IEEE Access*, 7:49669–49679, 2019.

Tim Stuart, Andrew Butler, Paul Hoffman, Christoph Hafemeister, Efthymia Papalexi, William M 3rd Mauck, Yuhan Hao, Marlon Stoeckius, Peter Smibert, and Rahul Satija. Comprehensive integration of single-cell data. *Cell*, 177:1888–1902, 2019.

Jiangwen Sun, Jin Lu, Tingyang Xu, and Jinbo Bi. Multi-view sparse co-clustering via proximal alternating linearized minimization. Volume 37 of *Proceedings of Machine Learning Research*, pp. 757–766, 2015.

Shiliang Sun. A survey of multi-view machine learning. *Neural Computing and Applications*, 23:2031–2038, 2013.

Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. *Science*, 290:2319–2323, 2000.

Juliana Valencia-Aguirre, Andrés Álvarez-Mesa, Genaro Daza-Santacoloma, and Germán Castellanos-Domínguez.
Automatic choice of the number of nearest neighbors in locally linear embedding. In *Iberoamerican Congress on Pattern Recognition*, pp. 77–84. Springer, 2009.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of Machine Learning Research*, 9:2579–2605, 2008.

Laurens van der Maaten and Geoffrey Hinton. Visualizing non-metric similarities in multiple maps. *Machine Learning*, 87(1), 2012.

Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. *Journal of Machine Learning Research*, 11:2837–2854, 2010.

Bo Wang, Aziz M. Mezlini, Feyyaz Demir, Marc Fiume, Zhuowen Tu, Michael Brudno, Benjamin Haibe-Kains, and Anna Goldenberg. Similarity network fusion for aggregating data types on a genomic scale. *Nature Methods*, 11(3):333–337, 2014.

Minjie Wang and Genevera I. Allen. Integrative generalized convex clustering optimization and feature selection for mixed multi-view data. *Journal of Machine Learning Research*, 22(55):1–73, 2021.

Daniela M Witten and Robert J Tibshirani. Extensions of sparse canonical correlation analysis with applications to genomic data. *Statistical Applications in Genetics and Molecular Biology*, 8(1), 2009.

Bo Xie, Yang Mu, Dacheng Tao, and Kaiqi Huang. m-sne: multiview stochastic neighbor embedding. *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*, 41:1088–1096, 2011.

Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. *ArXiv*, abs/1304.5634, 2013.

Chang Xu, Dacheng Tao, and Chao Xu. Multi-view intact space learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 37(12):2531–2544, 2015.

Dongkuan Xu and Yingjie Tian. A comprehensive survey of clustering algorithms. *Annals of Data Science*, 2:165–193, 2015.

Fanghua Ye, Zitai Chen, Hui Qian, Rui Li, Chuan Chen, and Zibin Zheng. New approaches in multi-view clustering. In *Recent Applications in Data Clustering*, chapter 11. 2018.

Zhenyue Zhang and Hongyuan Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. *SIAM Journal on Scientific Computing*, 8:406–424, 2004.

Jing Zhao, Xijiong Xie, Xin Xu, and Shiliang Sun. Multi-view learning overview: Recent progress and new challenges. *Information Fusion*, 38:43–54, 2017.

Yue Zhao, Xinge You, Shujian Yu, Chang Xu, Wei Yuan, Xiao-Yuan Jing, Taiping Zhang, and Dacheng Tao. Multi-view manifold learning with locality alignment. *Pattern Recognition*, 78:154–166, 2018.

Nanning Zheng and Jianru Xue. *Manifold Learning*, pp. 87–119. 2009.

Linlin Zong, Xianchao Zhang, Long Zhao, Hong Yu, and Qianli Zhao. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. *Neural Networks*, 88:74–89, 2017.

## A Algorithms

Data: M data sets, $X^{(m)} \in \mathbb{R}^{N \times p_m}$, ∀m ∈ {1, · · · , M}
Parameters: *Perp* [perplexity]; T [number of iterations]; η [learning rate]; α(t) [momentum]
Result: Induced embedding, $Y \in \mathbb{R}^{N \times d}$.
Often, d = 2.

begin
  Optional step: implement PCA or multi-CCA on $X^{(m)}$, ∀m ∈ {1, · · · , M}
  Compute pairwise affinities $p^m_{i|j}$ with perplexity *Perp*, ∀m ∈ {1, · · · , M}
  Set $p^m_{ij} = \frac{p^m_{i|j} + p^m_{j|i}}{2n}$, ∀m ∈ {1, · · · , M}
  Initialise solution $Y^{(0)} \sim \mathcal{N}(0, 0.1)$
  for t = 1 to T do
    Compute induced affinities $q_{i|j}$ and set the sum of gradients G = 0
    for m = 1 to M do
      Compute gradient $\frac{\delta C_m}{\delta Y}$
      $G \leftarrow G + \frac{\delta C_m}{\delta Y}$
    end
    Set $Y^{(t)} = Y^{(t-1)} + \eta G + \alpha(t)\,(Y^{(t-1)} - Y^{(t-2)})$
  end
end

Algorithm 1: Multi-SNE

Data: M data sets, $X^{(m)} \in \mathbb{R}^{N \times p_m}$, ∀m ∈ {1, · · · , M}
Parameters: k [number of neighbours]
Result: Induced embedding, $\hat{Y} \in \mathbb{R}^{N \times d}$. Often, d = 2.

begin
  for m = 1 to M do
    Find the k nearest neighbours of $X^{(m)}$.
    Compute $W^m$ by minimising equation (7).
  end
  Let $\hat{W} = \sum_m \alpha^m W^m$, where $\sum_m \alpha^m = 1$.
  Compute the d-dimensional embeddings $\hat{Y}$ by minimising equation (8) under $\hat{W}$.
end

Algorithm 2: multi-LLE

Data: M data sets, $X^{(m)} \in \mathbb{R}^{N \times p_m}$, ∀m ∈ {1, · · · , M}
Parameters: k [number of neighbours]
Result: Induced embedding, $\hat{Y} \in \mathbb{R}^{N \times d}$. Often, d = 2.

begin
  for m = 1 to M do
    Construct an N × N neighbourhood graph, $G_m \sim (V, E_m)$, with samples represented by nodes. The edge length between the k nearest neighbours of each node is measured by Euclidean distance.
  end
  Measure the average edge length between all nodes.
  Combine all neighbourhood graphs into a single graph, $\tilde{G}$. The shortest-path distances computed between nodes in $\tilde{G}$ are stored in $D_G \in \mathbb{R}^{|V| \times |V|}$.
  Compute the d-dimensional embeddings Y via $y_i = \sqrt{\lambda_p}\, u^i_p$, where $\lambda_p$ is the p-th eigenvalue, in decreasing order, of the matrix $\tau(D_G)$ and $u^i_p$ the i-th component of the p-th eigenvector. The operator τ is defined by $\tau(D) = -\frac{HSH}{2}$, where S is the matrix of squared distances defined by $S_{ij} = D^2_{ij}$, and H is defined by $H_{ij} = \delta_{ij} - \frac{1}{N}$.
end

Algorithm 3: multi-ISOMAP

## B Data Clustering Evaluation Measures

Let X = {X1, · · · , Xr} be the true classes of the data and Y = {Y1, · · · , Ys} the clusterings found on N objects. In this study, we assume the number of clusters to be known and thus set r = s. Let $n_{ij}$ be the number of objects in $X_i$ and $Y_j$. A contingency table is defined as shown in Table 6.

Table 6: A contingency table for data clustering. $X_i$ refers to the i-th class (truth) and $Y_j$ refers to the j-th cluster. $n_{ij}$ is the number of samples found in class i and cluster j. In this study, r = s was taken, as K-means was performed with the true number of classes known.

The formulas of the four measures used to evaluate data clustering are given below, with the terms defined in Table 6.

Accuracy (ACC)

$$(Acc)=\frac{\sum_{i}\sum_{j}\mathbf{1}\{i=j\}\,n_{ij}}{N}\tag{12}$$

Normalised Mutual Information (NMI)

$$(NMI)=\frac{2I(\mathbf{X},\mathbf{Y})}{H(\mathbf{X})+H(\mathbf{Y})}\tag{13}$$

where I(X, Y) is the mutual information between X and Y, and H(X) is the entropy of X.

Rand Index (RI)

$$(RI)=\frac{\binom{N}{2}-\left[\frac{1}{2}\left(\sum_{i}\big(\sum_{j}n_{ij}\big)^{2}+\sum_{j}\big(\sum_{i}n_{ij}\big)^{2}\right)-\sum_{i}\sum_{j}n_{ij}^{2}\right]}{\binom{N}{2}}=\frac{\alpha+\beta}{\binom{N}{2}}\tag{14}$$

where α refers to the number of pairs of elements that are in the same subset in X and in the same subset in Y, while β is the number of pairs of elements that are in different subsets in X and in different subsets in Y.

Adjusted Rand Index (ARI)

$$(ARI)=\frac{\sum_{i}\sum_{j}\binom{n_{ij}}{2}-\frac{\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}}{\binom{N}{2}}}{\frac{1}{2}\left[\sum_{i}\binom{a_{i}}{2}+\sum_{j}\binom{b_{j}}{2}\right]-\frac{\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}}{\binom{N}{2}}}\tag{15}$$

where $a_i = \sum_j n_{ij}$ and $b_j = \sum_i n_{ij}$ denote the row and column sums of the contingency table.
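For reference, the four measures above are available (or easily assembled) in scikit-learn. The sketch below is a minimal version, with one caveat: Equation (12) presumes cluster labels already matched to the classes, so the sketch aligns them with the Hungarian algorithm, a standard choice that is an assumption rather than part of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, rand_score, adjusted_rand_score

def clustering_accuracy(truth, pred):
    """ACC of Equation (12), after matching arbitrary K-means labels
    to the true classes via the Hungarian algorithm."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    r = int(max(truth.max(), pred.max())) + 1
    cont = np.zeros((r, r), dtype=int)            # contingency table n_ij
    for t, p in zip(truth, pred):
        cont[t, p] += 1
    rows, cols = linear_sum_assignment(-cont)     # maximise the matched counts
    return cont[rows, cols].sum() / truth.size

def evaluate_clustering(truth, pred):
    return {"ACC": clustering_accuracy(truth, pred),
            "NMI": normalized_mutual_info_score(truth, pred),   # Equation (13)
            "RI": rand_score(truth, pred),                      # Equation (14)
            "ARI": adjusted_rand_score(truth, pred)}            # Equation (15)
```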
## C Single-Cell Data

In the multi-omics single-cell data analysis, we used the publicly available data set provided by 10x Genomics for human peripheral blood mononuclear cells (PBMC) 6. This data set can be downloaded and installed via the R package SeuratData, by running the command InstallData("pbmcMultiome"). In their vignette 7, Hoffman et al. (2021) explored this data set to demonstrate how to jointly integrate and analyse such data. In this data set, scRNA-seq and scATAC-seq profiles were simultaneously collected in the same cells by 10x Genomics. Data on 11909 single cells are available on 36601 genes and 108377 peaks in scRNA-seq and scATAC-seq, respectively. Cells with zero summed expression along all genes were removed, leaving us with 10412 cells. Pre-processing was employed via the Seurat package, following the steps performed by Hoffman et al. (2021). Firstly, we log-normalised both data-views and then selected features for each individual data-view. In feature selection, we aim to identify a subset of features with high variability across cells (using the functions FindVariableFeatures and FindTopFeatures) (Stuart et al., 2019).

6 https://support.10xgenomics.com/single-cell-multiome-atac-gex/datasets/1.0.0/pbmc_granulocyte_sorted_10k
7 https://satijalab.org/seurat/articles/atacseq_integration_vignette.html

The multi-omics single-cell data set consists of 19 imbalanced clusters that correspond to their respective cell types; we assume the annotations provided by Seurat to be accurate (Figure 13). To evaluate the clustering performance of multi-SNE, we took a balanced subset of the data. Cell-type clusters with fewer than 200 cells were removed entirely, and we combined cells with cell types under the same hierarchy. For example, *Intermediate B*, *Naive B* and *Memory B* were combined to create a single cluster, *B cells*. Similarly, *CD4 Naive*, *CD4 TCM* and *CD4 TEM* were combined as *CD4 cells*. After this process, we ended up with a subset of 9105 single cells separated into 6 cell-type clusters (B cell, CD14 Mono, CD4, CD8 Naive, CD8 TEM and NK).

![31_image_0.png](31_image_0.png)

Figure 13: **Visualisations of single-cell data.** Projections of the full data set with unbalanced clusters produced by t-SNE on RNA, ATAC, m-SNE and multi-SNE on both data-views, with perplexity *Perp* = 80 for the two t-SNE projections, *Perp* = 100 for m-SNE and *Perp* = 20 for multi-SNE.

M-SNE and multi-SNE combined the scRNA-seq and scATAC-seq data to produce a more intelligible projection of the cells than t-SNE applied on either data-view. Qualitatively, the superiority of the multi-view manifold learning algorithms may not be obvious at first, but subtle differences can be observed. Quantitatively, multi-SNE received the best evaluation scores, with NMI = 0.807, while m-SNE received NMI = 0.760. Single-view t-SNE scored NMI = 0.620 and NMI = 0.572 for scRNA-seq and scATAC-seq, respectively.

## D Additional Comparisons

## D.1 Multi-SNE, m-SNE And MV-tSNE2

This section justifies the exclusion of MV-tSNE2 from the comparisons against multi-SNE. Due to its superior performance, m-SNE was selected as the existing competitor of multi-SNE.
Multi-SNE and m-SNE outperformed MV-tSNE2 on all data sets presented in this manuscript (Figure 14). By comparing the produced visualisations on two data sets, Figure 14 evaluates the three algorithms qualitatively. Multi-SNE produced the best separation among clusters on both data sets. In MV-tSNE2, many of the samples are projected bundled together, making it difficult to distinguish the true clusters. Quantitative evaluation of the methods agrees with the conclusions reached by assessing the visualisations qualitatively.

![33_image_0.png](33_image_0.png)

Figure 14: Visualisations by multi-SNE, m-SNE and MV-tSNE2. The three multi-view SNE-based projections of the cancer types and handwritten digits data sets.

## D.2 Multi-SNE, j-SNE And j-UMAP

At the same time as multi-SNE was developed, Hoan Do & Canzar (2021) proposed generalisations of t-SNE (named j-SNE) and UMAP (named j-UMAP) based on a similar objective function to multi-SNE. Hoan Do & Canzar (2021) introduced a regularisation term that reduces the bias towards specific data-views; the proposed objective function is given by:

$$C_{j-SNE}=\sum_{m}\sum_{i}\sum_{j}\alpha^{m}p_{ij}^{m}\log\frac{p_{ij}^{m}}{q_{ij}}+\lambda\sum_{m}\alpha^{m}\log\alpha^{m},\tag{16}$$

where $\alpha^m$ represents the weight provided for the m-th data-view and λ is a regularisation parameter. The weights and low-dimensional embeddings are updated iteratively. The adjustments to the weights of each data-view are performed in accordance with the regularisation parameter, which requires tuning for optimal results.

![34_image_0.png](34_image_0.png)

Figure 15: **j-UMAP, j-SNE and multi-SNE visualisations.** Projections of the cancer types and handwritten digits data sets, produced by j-UMAP, j-SNE and multi-SNE.

Figure 15 qualitatively compares multi-SNE with j-SNE and j-UMAP (with their respective tuning parameters optimised) on the cancer types and handwritten digits data. As expected, the projections by j-SNE and multi-SNE are very much alike for both data sets. The increased complexity imposed by the regularisation term in j-SNE does not seem to benefit the visualisation of the samples. j-UMAP does not separate the three cancer types, but it manages to separate the 10 digits, even the samples that represent the 6 and 9 numerals; j-SNE failed to do that. This was achieved by multi-SNE in the 3-dimensional visualisation, or alternatively by using multi-CCA as a pre-training step. All three algorithms allocated similar weight values to each data-view on both data sets. In particular, transcriptomics on cancer types and morphological features on handwritten digits received the lowest weight.

## D.3 Tuning Parameters On Real Data

![35_image_0.png](35_image_0.png)

In this section, we have explored how the parameter values affect the multi-view approaches when analysing real data sets. Figure 16 depicts the NMI evaluation measure on each real data set for parameter values in the range S.

Figure 16: Real data sets evaluation via NMI. The NMI values are plotted against different parameter values on all multi-view manifold learning algorithms investigated in this manuscript.

Similar conclusions to the ones made in Section 4.3 were reached (Figure 16). SNE-based solutions had a more consistent performance than LLE- and ISOMAP-based approaches. In contrast to the conclusions reached by testing the tuning parameters on synthetic data, SNE-based approaches applied to the cancer types data set performed the best when the perplexity was low.
This observation highlights the importance of the tuning parameters (perplexity and the number of neighbours) in these algorithms, as discussed by their respective authors. For the remaining data sets, the performance of SNE-based approaches was stable across different parameter values. With the exception of the cancer types data, the behaviour of LLE-based solutions is similar to that on the synthetic data (i.e. their performance is reduced around NN = 50 and then regained).

## D.4 Increased Randomness In Data

We have further explored how additional noise affects the performance of the multi-view learning approaches. As discussed, the NDS data set contains three informative data-views and one noisy data-view. In Section 4.4, we concluded that the inclusion of the noisy data-view reduces the performance both qualitatively and quantitatively. This complication was targeted and solved through the automatic weight-updating approach of Section 4.5.1. The purpose of this section is to test the performance of multi-view manifold learning solutions on data sets with higher levels of randomness. To increase the noise in the synthetic data, additional noisy data-views were generated. In particular, this section compares the performance of manifold learning algorithms on three synthetic data sets: (a) NDS, (b) NDS with one additional noisy data-view, and (c) NDS with two additional noisy data-views. Each simulation was performed for 200 runs and with equal weights for a fair comparison.

Multi-SNE was the superior algorithm in all simulations (Table 7). With each additional noisy data-view, all multi-view manifold learning algorithms saw a reduction in their performance. Although all evaluation measures reflect this observation, the change in performance is best observed in the NMI values (Table 7). Further, with more noisy data-views, the variance of the evaluation measures increased. This observation suggests that all algorithms clustered the samples with higher uncertainty.

The proposed automatic weight-adjusting process ensures that all data-views receive a weight value, which means that noisy data-views do not receive zero weight. For example, in the scenario of NDS with two additional noisy data-views, this process returned higher weights for the informative data-views (X1, X2, X3) than for the noisy ones (X4, X5, X6) (Figure 17). Although the weights between informative and noisy data-views are close in value, the proposed automatic weight-adjusting process can successfully distinguish informative from noisy data-views (Figure 17). To further assess whether this process allocates substantial weight to noisy data-views, a scenario in which the data set contains more noisy data-views than informative ones was investigated. In particular, a simulation was performed in which 1 data-view is informative and 2 are noisy. The informative data-view contains information to split the samples into 3 clusters, while the 2 noisy data-views place all samples in the same cluster. Multi-SNE separates the samples by their respective cluster, despite having more noisy data-views than informative ones (Figure 18). In accordance with the other simulations, multi-SNE assigns a higher weight value to the informative data-view than to the noisy data-views. The informative data-view received a weight of 0.4 and each of the two noisy data-views 0.3 (Figure 18).
This difference in the weights between the data-views acts as an incentive for the user to investigate the implementation of the algorithm by excluding the data-view(s) that received the lowest weight(s).

| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|-----------------------|---------------------|------------|-------|-------|-------|
| NDS | SNEconcat [Perp=80] | 0.747 | 0.628 | 0.817 | 0.598 |
| | m-SNE [Perp=50] | 0.650 | 0.748 | 0.766 | 0.629 |
| | multi-SNE [Perp=80] | 0.989 | 0.951 | 0.969 | 0.987 |
| | LLEconcat [NN=5] | 0.606 | 0.477 | 0.684 | 0.446 |
| | m-LLE [NN=20] | 0.685 | 0.555 | 0.768 | 0.528 |
| | multi-LLE [NN=20] | 0.937 | 0.768 | 0.922 | 0.823 |
| | ISOMAPconcat [NN=100] | 0.649 | 0.528 | 0.750 | 0.475 |
| | m-ISOMAP [NN=5] | 0.610 | 0.453 | 0.760 | 0.386 |
| | multi-ISOMAP [NN=300] | 0.778 | 0.788 | 0.867 | 0.730 |
| Higher dimension | SNEconcat [Perp=80] | 0.723 | 0.648 | 0.787 | 0.585 |
| | m-SNE [Perp=50] | 0.623 | 0.705 | 0.734 | 0.605 |
| | multi-SNE [Perp=80] | 0.983 | 0.937 | 0.951 | 0.966 |
| | LLEconcat [NN=5] | 0.575 | 0.427 | 0.628 | 0.402 |
| | m-LLE [NN=20] | 0.671 | 0.534 | 0.755 | 0.513 |
| | multi-LLE [NN=20] | 0.903 | 0.788 | 0.898 | 0.802 |
| | ISOMAPconcat [NN=100] | 0.622 | 0.510 | 0.705 | 0.453 |
| | m-ISOMAP [NN=5] | 0.589 | 0.439 | 0.734 | 0.344 |
| | multi-ISOMAP [NN=300] | 0.765 | 0.767 | 0.859 | 0.711 |
| One additional noisy | SNEconcat [Perp=10] | 0.650 | 0.522 | 0.724 | 0.489 |
| | m-SNE [Perp=100] | 0.689 | 0.584 | 0.786 | 0.530 |
| | multi-SNE [Perp=50] | 0.965 | 0.854 | 0.956 | 0.901 |
| | LLEconcat [NN=10] | 0.604 | 0.445 | 0.723 | 0.413 |
| | m-LLE [NN=10] | 0.667 | 0.522 | 0.765 | 0.490 |
| | multi-LLE [NN=5] | 0.912 | 0.692 | 0.891 | 0.756 |
| | ISOMAPconcat [NN=20] | 0.543 | 0.375 | 0.733 | 0.481 |
| | m-ISOMAP [NN=20] | 0.552 | 0.482 | 0.703 | 0.444 |
| | multi-ISOMAP [NN=5] | 0.584 | 0.501 | 0.739 | 0.493 |
| Two additional noisy | SNEconcat [Perp=10] | 0.581 | 0.310 | 0.688 | 0.309 |
| | m-SNE [Perp=10] | 0.603 | 0.388 | 0.712 | 0.359 |
| | multi-SNE [Perp=10] | 0.936 | 0.781 | 0.926 | 0.832 |
| | LLEconcat [NN=10] | 0.523 | 0.251 | 0.641 | 0.222 |
| | m-LLE [NN=10] | 0.570 | 0.344 | 0.682 | 0.317 |
| | multi-LLE [NN=5] | 0.858 | 0.557 | 0.832 | 0.622 |
| | ISOMAPconcat [NN=20] | 0.470 | 0.389 | 0.565 | 0.409 |
| | m-ISOMAP [NN=20] | 0.489 | 0.406 | 0.611 | 0.453 |
| | multi-ISOMAP [NN=5] | 0.524 | 0.467 | 0.782 | 0.517 |

Table 7: **Clustering performance on NDS and its variants with additional noisy data-views.** For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE, LLE or ISOMAP based). The overall superior method for each data set is depicted in **bold**. The parameters *Perp* and NN refer to the selected perplexity and number of nearest neighbours, respectively. They were optimised for the corresponding methods.

![38_image_0.png](38_image_0.png)

Figure 17: Visualisations on NDS with 2 additional noisy data-views. (A) Scatter-plot of the simulated samples obtained by multi-SNE with perplexity Perp = 200 and (B) weights received by the algorithm on the 6 data-views. The first 4 follow the structure of the NDS simulation: three informative data-views and a noisy one. Each informative data-view separates the samples differently, but taken together they are split equally into three clusters. X5 and X6 represent the 2 additional noisy data-views.

![38_image_1.png](38_image_1.png)

Figure 18: Visualisations on the new simulation with 1 informative and 2 noisy data-views.
(A) Scatter-plot of the simulated samples obtained by multi-SNE with perplexity Perp = 100 and (B) weights received by the algorithm on the 3 data-views. The informative data-view contains information to split the samples into 3 clusters, while the 2 noisy data-views place all samples in the same cluster.

## D.5 t-SNE On Single-View Cancer Types

Table 8 presents the clustering performance of t-SNE applied separately on each of the three views in the cancer types data set. Genomics was the favoured view on all evaluation measures.

| | ACC | NMI | RI | ARI |
|-----------------|---------------|---------------|---------------|---------------|
| Genomics | 0.595 (0.044) | 0.299 (0.041) | 0.667 (0.017) | 0.253 (0.039) |
| Epigenomics | 0.500 (0.036) | 0.116 (0.033) | 0.598 (0.018) | 0.107 (0.035) |
| Transcriptomics | 0.456 (0.023) | 0.042 (0.011) | 0.572 (0.006) | 0.049 (0.013) |

Table 8: Clustering performance on the induced embedding of a single view obtained by implementing t-SNE on the cancer types data. Standard deviation is reported in parentheses.

## D.6 Handwritten Digits Projection In 3 Dimensions (3D)

![40_image_0.png](40_image_0.png)

Figure 19: **3D multi-SNE visualisation of handwritten digits.** Projections produced by weight-adjusting multi-SNE with multi-CCA as pre-training and perplexity *Perp* = 80. Colours present the true clustering of the data points.

## D.7 Alternative Quantitative Evaluation Measures For Clustering

Accuracy (ACC), Normalised Mutual Information (NMI), Rand Index (RI) and Adjusted Rand Index (ARI) are the evaluation measures chosen to quantitatively evaluate the clustering performance of the proposed multi-view approaches. These measures were chosen because the true annotations of the data sets are known, and together they provide a wide assessment range. In practice, clustering is often applied to data with unknown annotations (labelling); therefore, for completeness, we have further explored the implementation of the Silhouette score for identifying the optimal tuning parameter of the manifold visualisation approaches. The Silhouette score is a widely used measure for quantifying the clusterings produced by clustering algorithms, or for selecting the optimal number of clusters (a minimal sketch of this label-free evaluation is given at the end of this subsection).

![41_image_0.png](41_image_0.png)

Figure 20: Silhouette score on MCS. The clustering evaluation via the Silhouette score is plotted against different parameter values on all SNE, LLE and ISOMAP based algorithms.

Figure 20 presents the evaluation performance of the methods with respect to their tuning parameter when the Silhouette score is evaluated instead of the other four measures. This figure complements Figure 7. The Silhouette score is not always in agreement with the other evaluation measures. For example, among the SNE-based solutions, according to the Silhouette score, multi-SNE is favoured over the other methods only when the perplexity is 100. Another difference between the Silhouette score and the other measures is that, as the perplexity increases, multi-SNE remains stable on the other measures (for Perp ≥ 50), while its Silhouette score keeps increasing. The Silhouette score measures how well the clusters are separated from each other. This is conceptually different from what the other measures quantify, which is how well the proposed clusters agree with the known clusters. It is therefore of no surprise that the findings are not always in agreement.
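A minimal sketch of this label-free evaluation is given below, assuming a low-dimensional embedding `Y` (e.g. a 2-D multi-SNE projection) is already available; both calls are standard scikit-learn.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def silhouette_of_embedding(Y, n_clusters):
    """Cluster the embedding, then score the separation of the resulting
    clusters without using any ground-truth labels."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Y)
    return silhouette_score(Y, labels)
```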
## D.8 Alternative Clustering Algorithms

For the clustering task of the samples, any clustering algorithm could potentially have been applied to the low-dimensional embeddings produced by the proposed multi-view visualisation approaches.

![42_image_0.png](42_image_0.png)

Figure 21: K-means and DBSCAN on SNE-based solutions applied on the handwritten digits data set. The clustering evaluation measures are plotted against different perplexity values for multi-SNE, m-SNE and SNEconcat. The performance of K-means and DBSCAN applied on the produced embeddings is depicted in the first and second row of this figure, respectively.

In the main body of this manuscript, the K-means algorithm was chosen due to its popularity, its strong and robust performance, and because the true number of clusters is known for all data sets. In practice, the latter is not always true, and clustering algorithms that do not require the number of clusters as a parameter input are preferable. Density-based spatial clustering of applications with noise (DBSCAN) is an example of such an algorithm (Ester et al., 1996). DBSCAN instead requires two other tuning parameters: the minimum number of samples required to form a dense cluster, and a threshold ϵ that determines the neighbourhood of a sample.

The implementation of DBSCAN on handwritten digits smooths the performance of SNE-based solutions across different perplexity values (Figure 21). For all parameter values, DBSCAN performs equally well, while the performance of K-means slightly oscillates. A greater disagreement between the two unsupervised learning algorithms is observed in their application to the caltech7 data set (Figure 22). While the accuracy of multi-SNE with K-means reduces at higher perplexity, the opposite behaviour is observed when DBSCAN is implemented. In addition, DBSCAN finds multi-SNE to be superior over m-SNE, while K-means concludes the opposite. This appendix demonstrates that the clustering of the produced embeddings is not restricted to K-means; alternative clustering solutions may be used. In particular, DBSCAN is a good choice, especially when the true number of clusters is unknown.

![43_image_0.png](43_image_0.png)

Figure 22: K-means and DBSCAN on SNE-based solutions applied on the caltech7 data set. The clustering evaluation measures are plotted against different perplexity values for multi-SNE, m-SNE and SNEconcat. The performance of K-means and DBSCAN applied on the produced embeddings is depicted in the first and second row of this figure, respectively.
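To make the two algorithms' differing parameter requirements concrete, here is a minimal sketch (ours, not the authors' R code) of clustering a 2-D embedding with either K-means or DBSCAN; the helper name `cluster_embedding` and the default parameter values are illustrative assumptions, not values used in the paper.

```python
from sklearn.cluster import DBSCAN, KMeans

def cluster_embedding(Y, n_clusters=None, eps=0.5, min_samples=10):
    """Cluster a low-dimensional embedding Y (n_samples, 2).

    K-means requires the number of clusters; DBSCAN instead takes a
    neighbourhood radius `eps` and the minimum number of samples
    `min_samples` required to form a dense cluster.
    """
    if n_clusters is not None:
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Y)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(Y)
    # DBSCAN marks outliers with -1 and discovers the cluster count itself,
    # which is useful when the true number of clusters is unknown.
    print("clusters found:", len(set(labels) - {-1}))
    return labels
```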
## E Reproducibility

The code of multi-SNE was based on the publicly available software written by the author of t-SNE, found at the following link: https://lvdmaaten.github.io/tsne/

In this manuscript, all t-SNE results were obtained by running the original R implementation (https://cran.r-project.org/web/packages/tsne/) and verified against the original Python implementation (https://lvdmaaten.github.io/tsne/). We refer the readers to the code and functions provided in the links below to reproduce the findings of this paper.

The software for m-SNE and m-LLE was not publicly available, and thus we used our own implementations of these methods, which can be found at: https://github.com/theorod93/multiView_manifoldLearning

An R package that contains the code for multi-SNE can be installed via devtools and can be found at https://github.com/theorod93/multiSNE

We refer the readers to the links provided in the main body of the paper for the public multi-view data used in this paper.

## F Computation Time

In terms of computation time, none of the multi-view manifold learning algorithms was consistently faster than the rest (Table 9). However, multi-SNE was often the slowest algorithm, while m-SNE and multi-ISOMAP had the fastest computation times. The SNE-based solutions are based on the original t-SNE algorithm, as described in Appendix E.

| Running time | MMDS | NDS | MCS | Caltech7 | Handwritten Digits | Cancer Types |
|--------------|------|-----|-----|----------|--------------------|--------------|
| m-SNE | 0.43 (0.019) | 0.29 (0.07) | 0.42 (0.01) | 4.29 (0.54) | 13.34 (3.43) | 251.71 (15.68) |
| multi-SNE | 1.07 (0.100) | 0.78 (0.14) | 1.02 (0.01) | 15.95 (0.71) | 45.76 (8.44) | 252.00 (11.23) |
| m-LLE | 0.25 (0.071) | 0.40 (0.12) | 0.42 (0.34) | 37.5 (2.21) | 26.28 (8.82) | 159.52 (17.49) |
| multi-LLE | 0.28 (0.099) | 0.41 (0.15) | 0.30 (0.14) | 37.9 (2.57) | 27.94 (5.29) | 157.73 (18.19) |
| m-ISOMAP | 0.22 (0.015) | 0.57 (0.06) | 0.37 (0.01) | 38.07 (3.09) | 29.52 (5.37) | 154.83 (18.04) |
| multi-ISOMAP | 0.24 (0.032) | 0.54 (0.13) | 0.33 (0.05) | 21.23 (2.24) | 16.77 (4.65) | 85.47 (14.57) |

Table 9: **Averaged running time recorded in minutes.** Taken for each manifold learning algorithm on all data sets seen in this paper; standard deviation is given in parentheses. All algorithms ran on a High Performance Computing cluster with 4 nodes.
Review 1:

Summary:
This submission proposes variations of manifold learning algorithms, namely SNE, LLE and ISOMAP, in the context of multiview learning. It introduces a weighting factor and combines the loss function of each view using a weighted sum. The manuscript also presents a series of numerical results to demonstrate the effectiveness of the proposed approach, including feeding the manifold learning results to K-means and applying the learned visualization to biological data.

Strengths and Weaknesses:
Some strengths are noted by the reviewer: 1) The submission has a clear statement of the problem and also a clear exposition of the existing manifold learning algorithms. 2) The submission is quite easy to follow and read. The writing is clear and smooth.

Some critical weaknesses: 1) The proposed approach seems to be too straightforward, if not trivial. The proposed method simply combines the existing t-SNE, LLE and ISOMAP losses of different views using a weighted sum. The weight is set to w_m = 1/M throughout most of the paper, where M is the number of views. This does not seem to constitute a significant technical contribution. Easy-to-implement approaches are indeed appreciated, but such simple approaches should be supported by good rationale or theoretical analysis to be convincing. 2) The technical insight that could be gained from reading this article seems to be on the limited side. Other than proposing such a weighted combination-type variation of existing losses, the manuscript does not have much novelty to be noted. The rationale and motivation for using the proposed method and the choice of w_m were not discussed. There are some procedures for adjusting the weights in a later part of the paper, but it is unclear how this adjustment would affect the algorithm's convergence. 3) The experiments do not seem very convincing. The synthetic data only had 3-5 clusters and the cluster sizes are balanced. This does not pose much of a challenge for dimensionality reduction. The real data are also classic small-size datasets that do not represent new challenges in multiview learning in NLP or computer vision.

Requested Changes:
Some suggestions: 1) It would be nice to see the reasons why using the proposed weighted combinations can lead to better manifold learning results. Other than merely proposing a modified loss, the community would benefit more if the insights behind the proposed losses were clearly stated. 2) It would also be nice to have some theoretical justifications, if possible. Considering that all the modified methods are well known and well studied, additional theoretical understanding would definitely add intellectual merit to the existing knowledge. 3) The empirical results can be enriched by considering larger, imbalanced, and more recent datasets. Baselines could include embeddings from modern foundation models (e.g., CLIP + t-SNE).

Broader Impact Concerns:
This submission does not include a section on broader impacts.

==================================================

Review 2:

Summary:
This paper introduces extensions of manifold learning approaches to multi-view datasets, specifically named multi-SNE, multi-LLE, and multi-ISOMAP. The proposed methodology modifies conventional algorithms by taking weighted combinations of each data-view. These algorithms show comprehensive projections to the low-dimensional space, compared to those obtained by visualizing each data-view separately.
By employing multi-SNE and the other proposed algorithms, improvements are observed in terms of four metrics (Accuracy, Normalized Mutual Information, Rand Index, Adjusted Rand Index) as well as an enhanced 2-D visualization. The authors substantiate their claims through comprehensive experiments, providing qualitative results that demonstrate enhanced effectiveness in visualizing diverse real and synthetic datasets, including those derived from biological applications.

Strengths and Weaknesses:
The paper demonstrates several strengths, one of which is the straightforward extension of conventional visualization algorithms, enabling their application to other dimensionality reduction techniques for multi-view datasets. The proposed framework, comprising multi-SNE, multi-LLE, and multi-ISOMAP, extends these conventional algorithms by adopting a weighted average of low-dimensional embeddings for each data-view. The authors present intuitive scenarios supporting the viability of the multi-view approach (Figure 2) and provide visualization results, comparing the algorithms with single-view approaches (Figures 3, 4, 5).

However, the main weakness of the paper lies in its limited explanation of why the proposed algorithms outperform other dimensionality reduction approaches. Specifically, the authors need to clarify why treating each view separately yields better results compared to concatenating the views into a single set of data. Although Figure 3 appears to demonstrate the superiority of the proposed algorithm over the concatenation approach, the comparison is based on different hyperparameters, which hinders a fair assessment. To establish a solid foundation for the proposed algorithm's superiority over simple concatenation of multiple views, the authors should provide either experimental evidence or theoretical analysis.

Moreover, a comprehensive comparison with other existing methods is essential to highlight the novelty of the proposed methodologies. For instance, the proposed extensions multi-SNE and multi-LLE seem similar to previous works such as m-SNE [1] and m-LLE [2]. To distinguish the proposed algorithms from these prior methods, the authors should clearly indicate the key differences and illustrate the novelty of their approach and its expected outcomes, preferably through theoretical analysis.

In conclusion, while the paper presents an extension of conventional visualization algorithms from various perspectives, there is a need for theoretical analysis or more robust experiments to justify the effectiveness of the proposed algorithm.

[1] Bo Xie, Yang Mu, Dacheng Tao, and Kaiqi Huang. m-sne: multiview stochastic neighbor embedding. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41:1088–1096, 2011.

[2] Hualei Shen, Dacheng Tao, and Dianfu Ma. Multiview locally linear embedding for effective medical image retrieval. PLOS ONE, 8(12):1–21, 2013.

Requested Changes:
A major limitation of the paper is the lack of novelty in how it extends the existing algorithms. The concept of utilizing a weighted combination of each view does not seem to have a significant impact. To address this concern effectively, a more extensive comparison between the proposed methodology and alternative approaches, including concatenation of data-views, m-SNE, m-LLE, among others, should be provided through a combination of experiments and theoretical analysis. This comprehensive evaluation is essential to substantiate the claim of novelty in the paper.
Broader Impact Concerns:
There are no concerns regarding ethical implications of the work.

==================================================

Review 3:

Summary:
In their paper "Multi-view data visualization via manifold learning", the authors suggest a generalization of t-SNE (as well as LLE and ISOMAP) to multi-view data, i.e. datasets where several distinct sets of features are available for the same set of samples. They show that their suggested "multi-SNE" performs better than naive approaches like feature concatenation, and better than competitor methods, on toy simulations and some real-world datasets.

Strengths and Weaknesses:
STRENGTHS:
* The paper addresses an important problem, as multi-view data become more and more prevalent in application fields like single-cell biology where manifold learning is frequently used.
* The suggested method (multi-SNE) is reasonable and straightforward, and performs well in practice.
* The authors provide an overview of multi-view manifold learning methods based on LLE/ISOMAP and suggest and test their own versions of multi-view LLE (multi-LLE) and multi-view ISOMAP (multi-ISOMAP).

WEAKNESSES:
* A very similar paper was published two years ago in Genome Biology (https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02356-5) suggesting multi-view versions of t-SNE and UMAP called j-SNE and j-UMAP. In fact, multi-SNE is very similar to j-SNE (and moreover, j-SNE is arguably a more flexible generalization as it allows automatic selection of non-equal modality weights). That said, I checked and confirmed that both papers first appeared on arXiv in January 2021, and so can be considered parallel work. However, the current paper needs to be more explicit about that, see below.
* The feature concatenation comparison is a bit of a strawman, as it does not perform any feature normalization (see below).
* While it is interesting to consider multi-view generalizations of LLE and ISOMAP, it is clear that for the considered toy examples and real-world datasets they will lose to multi-SNE, as LLE and ISOMAP perform much worse than t-SNE on such data. Indeed, the authors find that multi-SNE performs the best.
* Some of the method descriptions are not sufficiently clear (see below).

Overall I think the paper can be accepted to TMLR after some straightforward revision.

Requested Changes:
All "major issues" are critical to securing my recommendation, but they should all be straightforward to implement.

MAJOR ISSUES:
* The j-SNE paper is referenced only briefly in the Introduction, and only in the Appendix do the authors say that it was parallel work and that the loss function is nearly identical. I was surprised to see that it was parallel work, given that the j-SNE paper was published in 2021. So I checked the arXiv versions of both papers and realized that they both appeared in January 2021 for the first time. I think the authors need to be *very explicit* about it. The Introduction should say that their method is very similar to the one suggested in the j-SNE paper but that it was parallel work -- and explicitly say that both papers appeared on arXiv in Jan 2021 for the first time. I think this is very important, to set the record straight.
* In Section 4.1: "... because that data-view has a higher variability between the clusters than the other two". This is a crucial aspect of the simulation, but it was **not described** in Section 3.1, or at least I cannot find it there. Please explain this crucial simulation setup in Section 3.1.
* Section 4.1: Of course feature concatenation will fail if one data-view has a much higher variance than another data-view. This can to some extent be remedied by normalizing each data-view before concatenation, for example by dividing each data-view matrix by its Frobenius norm. This would be less of a strawman comparison, and I suggest the authors do something like that.
* Some font sizes in Figures 6 and 7 are too small and unreadable when the paper is printed out.
* Discussion, 2nd paragraph, and Table 9: Unclear what t-SNE approximation was used. Was it vanilla t-SNE (which is O(n^2)), was it Barnes-Hut t-SNE, or something else? Please clarify the implementation details somewhere. If it's vanilla t-SNE, please mention **explicitly** that the implementation as you provide it cannot be run for large sample sizes (larger than ~10,000). This is a very serious limitation that needs to be explicitly stated. Also specify that your software is in R. All of this should be in the main text, not in the Appendix.

MINOR ISSUES:
* ">>" in formulas should be typeset as \gg
* The j-SNE paper has the authors swapped for some reason. Canzar is the second author: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02356-5, not the first.
* Last paragraph in Section 2.3: What is a "consensus matrix" here exactly?
* Step in Section 2.4: should it be d_x = \infty (instead of d_x = 0) if ij is not in the kNN graph?
* Step 3 in Section 2.4: I don't think that's how ISOMAP works. ISOMAP applies "classical MDS" to the distance matrix D, which means that it double-centers it before computing the eigendecomposition.
* Section 3.1: the description is confusing because it contains spherical Gaussian noise TWICE. First you sample from a Gaussian with mean \mu and spherical covariance I_p_m. And then you add a noise vector epsilon sampled from another Gaussian with mean 0 and covariance I_p_m (same!). Is this intentional? It's a very strange setup, as it is equivalent to sampling once with a larger variance. Or is it an inaccurate description?
* Table 1: Why is the NDS setup "high-dimensional"? Is it because p>n? Please clarify.

Broader Impact Concerns:
No concerns.

==================================================

Metareview:

Recommendation: Reject

Comment: All reviewers feel that the authors have done a good job in their rebuttal, resolving most of the issues raised by the reviewers. The main idea is to consider a weighted sum of the loss of each view for multi-view manifold learning. The method in this paper is a straightforward extension of existing algorithms to multi-view domains, so it does not bring much insight or new ideas that would be of interest to the TMLR audience. Moreover, the claim that "using multiple views benefits visualization" was criticized as too vague, since the paper reveals little insight regarding it. Therefore, the paper is not recommended for acceptance in its current form. I hope the authors found the review comments informative and can improve the paper by addressing them carefully in future submissions to other venues.

==================================================
# SolidGen: An Autoregressive Model For Direct B-Rep Synthesis

Pradeep Kumar Jayaraman *pradeep.kumar.jayaraman@autodesk.com* Autodesk Research

Joseph G. Lambourne *joseph.lambourne@autodesk.com* Autodesk Research

Nishkrit Desai *nishkrit.desai@mail.utoronto.ca* University of Toronto, Vector Institute

Karl D.D. Willis *karl.willis@autodesk.com* Autodesk Research

Aditya Sanghi *aditya.sanghi@autodesk.com* Autodesk Research

Nigel J.W. Morris *nigel.morris@autodesk.com* Autodesk Research

Reviewed on OpenReview: *https://openreview.net/forum?id=ZR2CDgADRo*

## Abstract

The Boundary representation (B-rep) format is the de-facto shape representation in computer-aided design (CAD) to model solid and sheet objects. Recent approaches to generating CAD models have focused on learning sketch-and-extrude modeling sequences that are executed by a solid modeling kernel in postprocess to recover a B-rep. In this paper we present a new approach that enables learning from and synthesizing B-reps without the need for supervision through CAD modeling sequence data. Our method, SolidGen, is an autoregressive neural network that models the B-rep directly by predicting the vertices, edges, and faces using Transformer-based and pointer neural networks. Key to achieving this is our Indexed Boundary Representation, which references B-rep vertices, edges and faces in a well-defined hierarchy to capture the geometric and topological relations suitable for use with machine learning. SolidGen can be easily conditioned on contexts, e.g., class labels, images, and voxels, thanks to its probabilistic modeling of the B-rep distribution. We demonstrate qualitatively, quantitatively, and through perceptual evaluation by human subjects that SolidGen can produce high-quality, realistic CAD models.

## 1 Introduction

Almost every manufactured object in the world starts life as a computer-aided design (CAD) model. The Boundary representation format (B-rep) is the de-facto standard used in CAD to represent solid and sheet objects as a collection of trimmed parametric surfaces connected with well-structured topology (Weiler, 1986). The ability to generate B-reps automatically, driven by some context, is an enabling technology for design exploration and is critical to solving a range of problems in CAD such as reverse engineering from noisy 3D scan data, plausibly inpainting holes in solid models, and design synthesis from images or drawings.

Recent approaches to the generation of CAD models have focused on generating sequences of CAD modeling operations (Willis et al., 2021a; Xu et al., 2021; Wu et al., 2021; Xu et al., 2022; Lambourne et al., 2022), rather than the underlying 3D geometry and topology in the B-rep format. These methods produce a sequence of sketch and extrude modeling operations using a neural network, and the B-rep is recovered in postprocess with a solid modeling kernel that executes the operations. Although this approach generates an editable CAD model, it is currently limited to the sketch and extrude modeling operations and cannot be easily extended to support other operations or build sheet bodies.

![1_image_0.png](1_image_0.png)

Figure 1: SolidGen generates B-reps incrementally by building vertices, edges, and faces one at a time. Here we show snapshots from the edge (columns 1–2) and face generation (columns 3–8) for two data samples.
In particular, fillets and chamfers, which are widely used in CAD for structural performance and ease of manufacture, are challenging as they operate on B-rep edges which are not available until the predicted solid has been built. Furthermore, sketch-and-extrude workflows operate with 2D planar curves, hence extensions to freeform modeling are not trivial.

There are several advantages to pursuing an approach that directly synthesizes the B-rep. Firstly, significantly more CAD data exists without a stored sequence of CAD modeling operations. Files created via direct modeling or exported to common B-rep file formats do not retain a history of CAD modeling operations. In the public domain, CAD model datasets with CAD modeling operations (Willis et al., 2021a; Wu et al., 2021) are smaller (∼190K models) than those without (Koch et al., 2019) (1M models). Secondly, the ability to generate B-reps directly makes it possible to support freeform curves and surfaces like Bézier and non-uniform rational B-splines, or advanced topological structures like T-splines or Catmull-Clark subdivision meshes, as they share geometric and topological similarities. Finally, there are several problems in CAD that can only be solved with a direct B-rep synthesis approach. Examples include hole-filling for repairing solids which have poorly trimmed or missing faces due to data exchange (Butlin & Stops, 1996; Assadi, 2003), patching up geometry in regions of a model where error-prone operations, such as offsetting, fail (Bodily, 2014), and the creation of parting surfaces and shut-outs in molding workflows (Bhargava et al., 1991; Ser, 1995). Although heuristic algorithms exist for such applications, a learning-based method has the potential to incorporate external context and respect aesthetic aspects of the CAD model.

We make the following contributions in this paper:

- We propose SolidGen, a novel generative model based on Transformers and two-level pointer networks for the direct synthesis of B-reps (Figure 1) without supervision from a sequence of CAD modeling operations. To the best of our knowledge, SolidGen is the first generative model that can synthesize B-reps directly.
- We propose a new representation, the indexed boundary representation, that can represent B-reps as numeric arrays suitable for use with machine learning, while still allowing the geometry and topology of B-reps to be completely recovered.
- We show the quantitative and qualitative performance of SolidGen for unconditional generation and perform a perceptual study showing SolidGen produces more realistic results than the state-of-the-art.
- We demonstrate how controllable generation of B-reps can be achieved by conditioning with class labels, images, and voxels when available.

## 2 Related Work

Since the proliferation of the B-rep format (Lee & Lee, 2001; Slyadnev et al., 2020; Weiler, 1986) in the 1980s, several research areas have been explored.

Learning from B-reps. The ability to represent B-rep data as a graph (Ansaldi et al., 1985) has inspired recent approaches using graph neural networks for classification and segmentation problems (Cao et al., 2020; Jayaraman et al., 2021). B-rep topology can also be used for custom convolutions (Lambourne et al., 2021) or hierarchical graph structures (Jones et al., 2021). Rather than an *encoder*, our work focuses on a *decoder* to synthesize B-rep data directly.

Constructive Solid Geometry (CSG). In CSG, 3D shapes are formed by combining primitives (e.g.
cuboids, spheres) with Boolean operations (e.g. union, subtraction) into a CSG tree. By parsing the tree with an appropriate geometry kernel, a B-rep can be obtained. CSG approaches have been used to reconstruct 'shape programs' in combination with techniques from the program synthesis literature, both with neural guidance (Sharma et al., 2018; Ellis et al., 2019; Tian et al., 2019; Kania et al., 2020) and without (Du et al., 2018; Nandi et al., 2017; 2018). By contrast, our work applies to multiple tasks beyond reconstruction. Sequential CAD Generation. Another line of research leverages the supervision available from modeling operations stored in parametric CAD files to directly predict sequences of CAD modeling operations. The result is editable CAD files in the form of 2D engineering sketches (Willis et al., 2021b; Para et al., 2021; Ganin et al., 2021; Seff et al., 2021) or 3D CAD models generated from a B-rep (Willis et al., 2021a; Xu et al., 2021) or point cloud (Uy et al., 2021) target, interactive system (Li et al., 2020), or generative model (Wu et al., 2021; Xu et al., 2022). Although sequential CAD generation approaches have made significant progress, an outstanding challenge is how to extend these techniques to other modeling operations beyond sketch and extrude. Common modeling operations, such as fillet and chamfer, are challenging as they operate on B-rep edges which are not part of the representation available to the network, but rather generated as a postprocess. An alternate approach might be to use reinforcement learning with a CAD kernel integrated into the environment, similar to Lin et al. (2020), at the expense of extended training times due to the sparse reward space and slow CAD kernel execution times. Instead, we focus our efforts on direct synthesis of the B-rep data structure which cannot be achieved by predicting a sequence of CAD modeling operations. Direct B-rep Generation. Direct B-rep generation involves the synthesis of the underlying parametric curves/surfaces and the topology that connects them to form a solid model. Several learning-based approaches have made progress with the generation of parametric curves (Wang et al., 2020) and surfaces (Sharma et al., 2020). Yet to be addressed is a generative model for the creation of topology that joins parametric curves and surfaces together to form solid models, rather than relying on pre-defined topological templates (Smirnov et al., 2021). PolyGen (Nash et al., 2020) is a promising approach to joint geometry and topology synthesis where sequences of n-gon mesh vertices and faces are predicted using Transformers (Vaswani et al., 2017). A key insight of this work is the use of pointer networks (Vinyals et al., 2015) to reference previously predicted primitives and form the underlying shape topology. Our work draws inspiration from the use of pointer networks to develop a novel representation and method for the challenging task of direct B-rep generation. Wang et al. (2022) concurrently applied a pointer network to identify planar and cylindrical faces given the 2D projection of edges. Our method is more general and synthesizes the entire B-rep while supporting all prismatic surfaces in a generative modeling framework. A similar representation to ours was proposed in a concurrent work (Guo et al., 2022), where the objective was to reverse engineer B-reps from point clouds. Using a neural network to predict the rough B-rep geometry and topology, a combinatorial optimization was applied to refine the solution. 
By contrast, our method is an autoregressive generative model with support for more input modalities, such as image and class in addition to point clouds, and does not require an expensive optimization step to build plausible B-reps.

## 3 Representation

Boundary Representation. B-reps are a kind of generalized mesh structure in which many of the restrictions of triangle meshes have been removed (Figure 2). In place of planar triangular facets, B-reps allow *faces* built from planes, cylinders, cones, spheres, toroidal surfaces or B-spline surfaces.

![3_image_0.png](3_image_0.png)

Figure 2: The Boundary Representation data structure consists of vertices, edges formed by curve primitives (left) and faces formed by surface primitives (right). Loops of oriented edges, called wires, are used to delimit the surfaces into visible and hidden regions.

The *edges*
We address this challenge by explicitly representing only the bare minimum geometric and topological information in our indexed format, and designing a rule-based algorithm leveraging solid modeling kernels to infer and build the rest of the information. The indexed B-rep B is comprised of three lists {V, E, F}: *Vertices* V contain a list of 3D point coordinates corresponding to each B-rep vertex and the arc midpoints. *Edges* E are represented as hyperedges (Willis et al., 2021b) where each hyperedge is a list of indices into V. A hyperedge connects two or more vertices to define the edge geometry. The curve type is defined by the cardinality of the hyperedge: line (2), arc (3). *Faces* F are defined as the set of edges bounding a surface. Each face contains a list of indices into E. By indexing from *Faces*, to *Edges*, to *Vertices*, our representation is well suited for use with pointer networks (Vinyals et al., 2015). Parametric surfaces that define the geometry of the face, and wires that trim the surfaces are left out of our representation and recovered in postprocess, as described next. Recovering a Boundary Representation. We leverage the boundary curve geometry that is defined by the edges E, together with the mapping of the edges that bound a face F to recover the wire and surface geometry using well-defined rules. In particular, the surface type can be first fixed from the curve types 1For periodic surfaces (cylinders, cones, spheres and tori), which have parameter domains "topologically glued" from 0 to 2π in one or both dimensions, the surface must first be subdivided into patches homeomorphic to the disk. ![4_image_0.png](4_image_0.png) Figure 3: The Boundary Representation (left) comprises several topological entities. Our Indexed Boundary Representation (right) references vertices, edges, and faces in a well-defined hierarchy suitable for use with machine learning. while the surface parameters can be inferred from the curve and vertex geometry. The surface type and parameters can be derived from the curves by a simple set of rules. For instance, faces whose vertices are coplanar define a trimmed planar surface, faces whose curves are a non-planar collection of lines and arcs will be cylinders or cones and faces whose curves are all arcs will be spheres or tori. We explain the procedure to apply this rule-based algorithm and build the B-rep in greater detail in Section A.2. Applying this procedure to every face in the indexed B-rep yields the surface type and parameters. Then, the data is read into the solid modeling kernel using standard data exchange and shape fixing functions designed for reading neutral CAD file formats like STEP and IGES. ## 4 Solidgen Architecture SolidGen is a generative model that directly synthesizes CAD data in the B-rep format. SolidGen leverages Transformers (Vaswani et al., 2017) and pointer networks (Vinyals et al., 2015) to enable the generation of vertices, curved edges, and non-planar faces, such that edges can refer to vertices and faces can refer to edges, allowing us to build the complete B-rep topology. Our goal is to estimate a distribution over indexed B-reps B that can be sampled from to generate new indexed B-reps, and converted to actual B-reps as described in the previous section. This probability distribution p(B) is defined as the joint distribution of the vertices V, edges E, and faces F p(B) = p(V, E, F). 
## 4 SolidGen Architecture

SolidGen is a generative model that directly synthesizes CAD data in the B-rep format. SolidGen leverages Transformers (Vaswani et al., 2017) and pointer networks (Vinyals et al., 2015) to enable the generation of vertices, curved edges, and non-planar faces, such that edges can refer to vertices and faces can refer to edges, allowing us to build the complete B-rep topology. Our goal is to estimate a distribution over indexed B-reps B that can be sampled from to generate new indexed B-reps, which are then converted to actual B-reps as described in the previous section. This probability distribution p(B) is defined as the joint distribution of the vertices V, edges E, and faces F:

$$p(\mathcal{B})=p(\mathcal{V},\mathcal{E},\mathcal{F}).\tag{1}$$

To make the learning problem tractable, we factorize this joint distribution into a product of conditionals

$$p(\mathcal{B})=p(\mathcal{F}\mid\mathcal{E},\mathcal{V})\,p(\mathcal{E}\mid\mathcal{V})\,p(\mathcal{V}),\tag{2}$$

and learn these distributions with separate neural networks. Once these conditional distributions are learned, B-reps can be generated by sampling vertices first, followed by edges conditioned on vertices, and faces conditioned on the edges and vertices. It is also possible to condition the generation on a context z, in which case the joint distribution becomes

$$p(\mathcal{B}\mid z)=p(\mathcal{F}\mid\mathcal{E},\mathcal{V},z)\,p(\mathcal{E}\mid\mathcal{V},z)\,p(\mathcal{V}\mid z),\tag{3}$$

where z can be derived from other representations like class labels, images, voxels, etc. Since there can be an arbitrary number of vertices, edges, and faces in each B-rep, and strong symmetries are present in CAD models, we use autoregressive neural networks to model the probability distributions above. Figure 4 shows an overview of the SolidGen architecture at inference time, with each model described in further detail below.

![5_image_0.png](5_image_0.png)

Figure 4: The SolidGen architecture consists of a vertex (left), edge (center), and face (right) model that predict the vertices, edge connections between vertices, and face connections between edges to form an indexed B-rep (top).

## 4.1 Vertex Model

The vertex model (Figure 4, left) learns a distribution p(V) over the vertex positions. Vertices here include both B-rep vertices and the additional points inserted on B-rep edges to encode the curve primitive information, as explained in Figure 3. The vertices are first sorted lexicographically, i.e., by z coordinates first, y coordinates next, and x coordinates finally; their coordinates are then flattened into a 1D list, with a stopping token <EOS> marking the end. After flattening, $\mathcal{V}^{\text{seq}}=\{v_0,v_1,\ldots,\texttt{<EOS>}\}$ is of length $|\mathcal{V}^{\text{seq}}|=(|\mathcal{V}|\times 3)+1$, with every triplet corresponding to the coordinate values of a single vertex. Rather than using real-valued coordinates for the vertices, we quantize the vertex sequence $\mathcal{V}^{\text{seq}}$ uniformly into 6 bits, as is common in previous work (Nash et al., 2020; Seff et al., 2021; Wu et al., 2021). The vertex model learns the following distribution:

$$p(\mathcal{V}^{\mathrm{seq}};\theta_{\mathcal{V}})=\prod_{t=0}^{|\mathcal{V}^{\mathrm{seq}}|-1}p(v_{t}\mid v_{0},v_{1},\ldots,v_{t-1};\theta_{\mathcal{V}}),\tag{4}$$

where $\theta_{\mathcal{V}}$ are learnable parameters of the neural network, which is a Transformer decoder (Vaswani et al., 2017), and the conditional distributions above can all be treated as categorical distributions. Given the input vertices $\mathcal{V}^{\text{seq}}_{<t}$ at step t, the goal is to model the next output vertex token $v_t$.
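As a concrete illustration of this preprocessing, here is a minimal NumPy sketch (ours, not the paper's released code) of building $\mathcal{V}^{\text{seq}}$: lexicographic sorting by (z, y, x), 6-bit uniform quantization, and flattening with a stopping token. The `EOS` token id and the per-axis min/max normalization are assumptions for illustration.

```python
import numpy as np

EOS = 2**6  # assumed token id, one past the 64 quantized coordinate values

def vertex_sequence(V, n_bits=6):
    """Flatten vertices into a quantized 1-D token list of length |V|*3 + 1."""
    V = np.asarray(V, dtype=np.float64)
    # Quantize each coordinate uniformly into [0, 2^n_bits - 1].
    lo, hi = V.min(axis=0), V.max(axis=0)
    Q = np.round((V - lo) / np.maximum(hi - lo, 1e-9) * (2**n_bits - 1)).astype(int)
    # Sort lexicographically: z first, then y, then x
    # (np.lexsort uses its last key as the primary key).
    order = np.lexsort((Q[:, 0], Q[:, 1], Q[:, 2]))
    Q = Q[order]
    # Flatten row-major so each consecutive triplet is one vertex's (x, y, z).
    return Q.reshape(-1).tolist() + [EOS]

tokens = vertex_sequence([(0.1, 0.9, 0.0), (0.8, 0.2, 0.5)])  # 7 tokens
```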
The input to the Transformer decoder, $\mathbf{h}_{\mathcal{V}^{\text{seq}}_{<t}}$, is derived from each of the tokens in $\mathcal{V}^{\text{seq}}_{<t}$ by learning three kinds of embeddings: *coordinate embeddings* indicating whether a token is an x, y or z-coordinate, *positional embeddings* indicating the index of the vertex to which the token belongs, and *value embeddings* encoding the x, y or z-coordinate value:

$$\mathbf{h}_{\mathcal{V}^{\text{seq}}_{<t}}=\{\mathbf{W}_{\text{coo}}\,\mathbb{1}_{i\,\text{mod}\,3}+\mathbf{W}_{\text{pos}}\,\mathbb{1}_{\lfloor i/3\rfloor}+\mathbf{W}_{\text{val}}\,\mathbb{1}_{v_{i}}\}_{i=0}^{t-1},$$

where $\mathbb{1}$ maps its integer subscript into a one-hot vector, and $\mathbf{W}_{\text{coo}}\in\mathbb{R}^{d_{\text{emb}}\times 3}$, $\mathbf{W}_{\text{pos}}\in\mathbb{R}^{d_{\text{emb}}\times|\mathcal{V}|}$ and $\mathbf{W}_{\text{val}}\in\mathbb{R}^{d_{\text{emb}}\times 2^{6}}$ are all learned matrices that map their inputs to $d_{\text{emb}}=256$ dimensional embeddings. A Transformer decoder $T^{\mathcal{V}}_{\text{dec}}$ then predicts a probability distribution (in the form of logits) over all possible vertex locations for the next token $v_t$:

$$p(v_{t})=T_{\text{dec}}^{\mathcal{V}}(\mathbf{h}_{\mathcal{V}_{<t}^{\text{seq}}}).$$

The model is trained using teacher-forcing (Williams & Zipser, 1989) to maximize the log-likelihood of the training data with a cross-entropy loss with label smoothing of 0.01.

## 4.2 Edge Model

The edge model (Figure 4, center) learns a distribution p(E) over the vertex indices. The edges are represented as a 1D list $\mathcal{E}^{\text{seq}}$ by flattening the vertex indices in E, with a new edge token <NEW_EDGE> marking the start of a new edge and a stopping token <EOS> marking the end of the list. For ordering invariance, we sort each edge E ∈ E in ascending order and then sort E such that the edges with the lowest vertex indices come first. The edge sequence $\mathcal{E}^{\text{seq}}=\{e_0,e_1,\texttt{<NEW\_EDGE>},\ldots,\texttt{<EOS>}\}$ is of length $|\mathcal{E}^{\text{seq}}|=\sum_{E\in\mathcal{E}}|E|+1$, and the edge model learns the following distribution:

$$p(\mathcal{E}^{\text{seq}}\mid\mathcal{V};\theta_{\mathcal{E}})=\prod_{j=0}^{|\mathcal{E}^{\text{seq}}|-1}p(e_{j}\mid e_{0},e_{1},\ldots,e_{j-1},\mathcal{V};\theta_{\mathcal{E}}),\tag{5}$$

where $\theta_{\mathcal{E}}$ are learnable parameters of the neural network. Given the input vertices V and the current edges $\mathcal{E}^{\text{seq}}_{<t}$ up to step t, the goal is to model the next output vertex index $e_t$ as a probability distribution over the indices of V. This is done by first learning *value embeddings* $\mathbf{h}_{\mathcal{V}}\in\mathbb{R}^{|\mathcal{V}|\times d_{\text{emb}}}$ from the input vertices $\mathcal{V}=\{(x_i,y_i,z_i)\}_{i=0}^{|\mathcal{V}|-1}$:

$$\mathbf{h}_{\mathcal{V}}=\{\phi\,(\mathbf{W}_{x}\,\mathbb{1}_{x_{i}}\,\|\,\mathbf{W}_{y}\,\mathbb{1}_{y_{i}}\,\|\,\mathbf{W}_{z}\,\mathbb{1}_{z_{i}})\}_{i=0}^{|\mathcal{V}|-1},\tag{6}$$

where $\mathbf{W}_{x}$, $\mathbf{W}_{y}$ and $\mathbf{W}_{z}\in\mathbb{R}^{64\times 2^{6}}$ are learned matrices that map their inputs into 64-dimensional embeddings, $\|$ is the concatenation operation along the second dimension, and $\phi\in\mathbb{R}^{d_{\text{emb}}\times(64\times 3)}$ is a learned linear layer that produces $d_{\text{emb}}=256$ dimensional embeddings. The token embeddings for <NEW_EDGE> and <EOS>, $\mathbf{h}_{\texttt{<NEW\_EDGE>}}\in\mathbb{R}^{d_{\text{emb}}}$ and $\mathbf{h}_{\texttt{<EOS>}}\in\mathbb{R}^{d_{\text{emb}}}$, are learnable parameters that are concatenated with the value embeddings to form a total of $|\mathcal{V}|+2$ embeddings, which are further processed by a Transformer encoder $T^{\mathcal{E}}_{\text{enc}}$ to obtain $\mathbf{h}_{\text{inp}}\in\mathbb{R}^{(|\mathcal{V}|+2)\times d_{\text{emb}}}$:

$$\mathbf{h}_{\text{inp}}=T_{\text{enc}}^{\mathcal{E}}(\mathbf{h}_{\texttt{<EOS>}}\,\|\,\mathbf{h}_{\texttt{<NEW\_EDGE>}}\,\|\,\mathbf{h}_{\mathcal{V}}).$$

Then, edge embeddings $\mathbf{h}_{\mathcal{E}^{\text{seq}}_{<t}}\in\mathbb{R}^{|\mathcal{E}^{\text{seq}}_{<t}|\times 256}$ are formed by gathering the input embeddings $\mathbf{h}_{\text{inp}}$ corresponding to the vertex indices in $\mathcal{E}^{\text{seq}}_{<t}$, summed with learned *positional embeddings* indicating the position of each token in $\mathcal{E}^{\text{seq}}_{<t}$:

$$\mathbf{h}_{\mathcal{E}_{<t}^{\text{seq}}}=\{\mathbf{h}_{\text{inp}}[e_{j}]+\mathbf{W}_{\text{pos}}\,\mathbb{1}_{j}\}_{j=0}^{t-1},$$

where $\mathbf{W}_{\text{pos}}\in\mathbb{R}^{|\mathcal{E}^{\text{seq}}|\times d_{\text{emb}}}$ is a learnable matrix. A Transformer decoder $T^{\mathcal{E}}_{\text{dec}}$ then processes these embeddings, followed by a linear layer, to output a pointer vector $\mathbf{p}_t$ that is compared to the input embeddings $\mathbf{h}_{\text{inp}}$ using a dot product and normalized with a softmax to get a distribution over the $0\leq k\leq|\mathcal{V}|+1$ indices, including the vertex indices and the <NEW_EDGE>, <EOS> tokens:

$$\mathbf{p}_{t}=T_{\text{dec}}^{\mathcal{E}}(\mathbf{h}_{\mathcal{E}_{<t}^{\text{seq}}}),$$
$$p(e_{t}=k\mid\mathcal{E}_{<t}^{\text{seq}},\mathcal{V})=\text{softmax}_{k}(\mathbf{p}_{t}\cdot\mathbf{h}_{\text{inp}}[k]).$$

The model is trained using the cross-entropy loss to predict the distribution $p(e_{t}=k\mid\mathcal{E}^{\text{seq}}_{<t},\mathcal{V})$, which is repeatedly created at each time step and sampled to generate the edge tokens autoregressively. During training, ground truth is used for conditioning (teacher-forcing) rather than previous samples.

## 4.3 Face Model

The face model (Figure 4, right) learns a distribution p(F) over the faces. The faces are represented as a flat sequence $\mathcal{F}^{\text{seq}}$, similar to the edges, by flattening F into a 1D list of edge indices, with a new face token <NEW_FACE> marking the start of a new face and a stopping token <EOS> marking the end of the faces list. The model learns the following probability distribution:

$$p(\mathcal{F}^{\text{seq}}\mid\mathcal{E},\mathcal{V};\theta_{\mathcal{F}})=\prod_{t=0}^{|\mathcal{F}^{\text{seq}}|-1}p(f_{t}\mid f_{0},f_{1},\ldots,f_{t-1},\mathcal{E},\mathcal{V};\theta_{\mathcal{F}}),\tag{7}$$

where $\theta_{\mathcal{F}}$ are learnable parameters of the neural network. The face model functions similarly to the edge model and represents the face features by gathering edge embeddings. Given the input vertices V and edges E, and the current faces $\mathcal{F}^{\text{seq}}_{<t}$ at step t, the goal is to model the next output face token $f_t$ as a probability distribution over the indices of E. This is done by learning vertex embeddings $\mathbf{h}'_{\mathcal{V}}\in\mathbb{R}^{|\mathcal{V}|\times d_{\text{emb}}}$ from the input vertices as in Equation 6. The rows of $\mathbf{h}'_{\mathcal{V}}$ corresponding to the vertex indices in each edge are first gathered and summed; learned embeddings for <NEW_FACE> and <EOS>, $\mathbf{h}_{\texttt{<NEW\_FACE>}}$ and $\mathbf{h}_{\texttt{<EOS>}}$, are then concatenated to these embeddings and passed through a Transformer encoder $\bar{T}^{\mathcal{E}}_{\text{enc}}$ to form $\mathbf{h}_{\mathcal{E}}\in\mathbb{R}^{(|\mathcal{E}|+2)\times d_{\text{emb}}}$:

$$\mathbf{h}_{\mathcal{E}}=\bar{T}_{\text{enc}}^{\mathcal{E}}(\mathbf{h}_{\texttt{<EOS>}}\,\|\,\mathbf{h}_{\texttt{<NEW\_FACE>}}\,\|\,\{\textstyle\sum_{v\in E}\mathbf{h}'_{\mathcal{V}}[v]\}_{E\in\mathcal{E}}).$$

Finally, face embeddings $\mathbf{h}_{\mathcal{F}^{\text{seq}}_{<t}}$ are formed by gathering the edge embeddings, which are further summed with learned *positional embeddings* and passed through another Transformer encoder $T^{\mathcal{F}}_{\text{enc}}$:

$$\mathbf{h}_{\mathcal{F}_{<t}^{\text{seq}}}=T_{\text{enc}}^{\mathcal{F}}(\{\mathbf{h}_{\mathcal{E}}[f_{k}]+\mathbf{W}'_{\text{pos}}\,\mathbb{1}_{k}\}_{k=0}^{t-1}),$$

where $\mathbf{W}'_{\text{pos}}\in\mathbb{R}^{|\mathcal{F}^{\text{seq}}|\times d_{\text{emb}}}$ is a learnable matrix. The idea here is to encode faces by the embeddings of the edges that lie on their boundary, and to encode the edges by the embeddings of their incident vertices.
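The pointer step is the same in both the edge and face models: a decoder output vector is scored against the input embeddings (vertices or edges, plus the stopping/new-group tokens) by a dot product and normalized with a softmax. Below is a minimal PyTorch sketch (ours, not the paper's released code) of this mechanism; the function name and the optional `invalid_mask` argument, used for the logit masking mentioned in Section 5, are assumptions.

```python
import torch
import torch.nn.functional as F

def pointer_distribution(p_t, h_inp, invalid_mask=None):
    """p_t: (d,) decoder pointer vector; h_inp: (n, d) input embeddings.

    Returns a categorical distribution over the n candidate indices
    (e.g. |V| vertices plus the <EOS> and <NEW_EDGE> tokens).
    """
    logits = h_inp @ p_t                      # (n,) dot-product scores
    if invalid_mask is not None:              # optionally mask invalid indices
        logits = logits.masked_fill(invalid_mask, float("-inf"))
    return F.softmax(logits, dim=-1)

d, n = 256, 10 + 2                            # 10 vertices + <EOS>, <NEW_EDGE>
probs = pointer_distribution(torch.randn(d), torch.randn(n, d))  # sums to 1
```

Because the output distribution is over *indices* of a variable-length input rather than a fixed vocabulary, the same weights handle B-reps with any number of vertices and edges.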
A Transformer decoder $T^{\mathcal{F}}_{\text{dec}}$ then processes $\mathbf{h}_{\mathcal{F}^{\text{seq}}_{<t}}$ to output a pointer vector $\mathbf{p}_t$ that is compared to the edge embeddings $\mathbf{h}_{\mathcal{E}}$ using a dot product operation and normalized using a softmax to get a distribution over the $0\leq k\leq|\mathcal{E}|+1$ input edge indices and the <NEW_FACE>, <EOS> tokens:

$$\mathbf{p}_{t}=T_{\text{dec}}^{\mathcal{F}}(\mathbf{h}_{\mathcal{F}_{<t}^{\text{seq}}}),$$
$$p(f_{t}=k\mid\mathcal{F}_{<t}^{\text{seq}},\mathcal{E},\mathcal{V})=\text{softmax}_{k}(\mathbf{p}_{t}\cdot\mathbf{h}_{\mathcal{E}}[k]).$$

The face model is also trained by teacher-forcing using a cross-entropy loss.

## 5 Experiments

In this section we perform experiments to qualitatively and quantitatively evaluate our method on unconditional generation, and on various conditional generation tasks based on class labels, images, and voxels.

Implementation. Our implementation is in PyTorch (Paszke et al., 2019). We train our models for 1000 epochs with batch size 512 using the AdamW optimizer (Loshchilov & Hutter, 2019) (learning rate: $10^{-4}$, weight decay: 0.01) on an Nvidia DGX A100 machine. The vertex, edge, and face models can be trained independently since we employ teacher-forcing, where the models are decoupled during training: rather than using the samples from one model to condition the subsequent model, we use the ground truth data. When a conditional embedding is jointly learned, however, we have to train the three models together. Training time ranges from 1–3 days depending on the dataset. We find it critical for convergence to use the pre-LayerNorm variant of the Transformer (Xiong et al., 2020). All Transformer modules use 8 layers with an embedding dimension of 256, a fully-connected dimension of 512 and 8 attention heads. We initially experimented with 4, 8 and 12-layer models and observed the 8-layer model to work well; the 4-layer model was slightly underfitting while the 12-layer model was overfitting on our datasets. To sample from the models we use nucleus sampling (Holtzman et al., 2020). We find it helpful to mask logits that are invalid in each step of the sampling (see Section A.3). All data processing related to building indexed B-reps and reconstructing B-reps uses the OpenCascade/pythonOCC (Paviot, 2008) solid modeling kernel.
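The stated backbone configuration maps directly onto standard PyTorch modules. The following is a minimal sketch (ours, not the released training code) of one such module with the pre-LayerNorm variant enabled; the helper name `make_decoder` is an assumption.

```python
import torch.nn as nn

def make_decoder(d_model=256, n_heads=8, d_ff=512, n_layers=8):
    """8 layers, 256-d embeddings, 512-d feed-forward, 8 heads, pre-LN."""
    layer = nn.TransformerDecoderLayer(
        d_model=d_model, nhead=n_heads, dim_feedforward=d_ff,
        norm_first=True,   # pre-LayerNorm variant (Xiong et al., 2020)
        batch_first=True,
    )
    return nn.TransformerDecoder(layer, num_layers=n_layers)

vertex_decoder = make_decoder()  # the edge/face decoders share this shape
```

With `norm_first=True`, layer normalization is applied before the attention and feed-forward sub-blocks rather than after, which is the property the paper reports as critical for convergence.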
Datasets. Publicly available datasets for B-rep solid models are limited compared to other 3D representations. We demonstrate our method on two datasets, considering only files that contain the surface and curve types supported by the indexed B-rep format. The Parametric Variations (PVar) Dataset is synthetically designed (see Section A.5 for details) for testing SolidGen on the class-conditional generation task, since categorically labeled B-rep datasets are unavailable. There are 120,000 models, 2000 in each of the 60 classes. The dataset is split in a 90 (train)/5 (validation)/5 (test) proportion. The DeepCAD Dataset (Wu et al., 2021) contains a subset of the CAD models available in the ABC dataset (Koch et al., 2019) that additionally includes the sequence of sketch and extrude CAD modeling operations. The original dataset contains 178,238 CAD models with a significant portion of duplicate and trivial models. We use a hash-based method, described in Section A.1, to remove duplicates. We filter out trivial (< 8 faces, e.g., boxes) and overly-complex (> 130 faces) models, those that contain merged vertices after quantization, and ones that yield very long sequences with > 200 tokens. We are left with 49,759 models that are split into 90/5/5 train/validation/test sets and used for all experiments. For image conditioning, we render images of the CAD models with the OpenCascade viewer. For voxel conditioning, we sample a point cloud with 2048 points from the surface of the CAD models and quantize them into voxel grids of dimension $(2^6)^3$, matching our vertex quantization.

Table 1: Modeling metrics for unconditional and conditional SolidGen models computed on the test set. NLL is measured in bits per vertex/edge/face (lower is better); top-1 accuracy is in % (higher is better).

| Dataset | Model | Vert. NLL | Edge NLL | Face NLL | Total NLL | Vert. Acc. | Edge Acc. | Face Acc. | Mean Acc. |
|---------|-------|-----------|----------|----------|-----------|------------|-----------|-----------|-----------|
| PVar | Uniform | 18.44 | 13.59 | 24.96 | 56.99 | 2.01 | 1.59 | 2.04 | 1.88 |
| PVar | SolidGen | 4.49 | 0.01 | 0.04 | 4.54 | 91.30 | 99.97 | 99.88 | 97.05 |
| PVar | w/class (vertex) | 4.19 | - | - | 4.24 | 83.20 | - | - | 94.35 |
| PVar | w/class (all) | 1.28 | 0.01 | 0.03 | 1.32 | 92.43 | 99.97 | 99.89 | 97.43 |
| DeepCAD | Uniform | 18.32 | 15.05 | 28.14 | 61.51 | 1.48 | 1.10 | 1.42 | 1.33 |
| DeepCAD | SolidGen | 5.42 | 0.43 | 0.26 | 6.11 | 86.12 | 98.87 | 99.55 | 94.85 |
| DeepCAD | w/image (vertex) | 1.97 | - | - | 2.66 | 80.61 | - | - | 93.01 |
| DeepCAD | w/image (all) | 1.98 | 0.26 | 0.27 | 2.51 | 89.74 | 98.84 | 99.31 | 95.96 |
| DeepCAD | w/voxel (vertex) | 4.89 | - | - | 5.42 | 82.28 | - | - | 93.48 |
| DeepCAD | w/voxel (all) | 2.00 | 0.52 | 0.65 | 3.17 | 91.26 | 98.62 | 99.18 | 96.35 |

Metrics. We use the following metrics to quantitatively evaluate and compare SolidGen with other methods. *Valid* reports the percentage of solids that were successfully built and considered valid by the solid modeling kernel (see Section A.6). *Novel* measures the percentage of valid solids that are not duplicated from the training set, and *Unique* reports the percentage of valid solids not duplicated within the sample set. The duplication check is based on our hashing procedure described in Section A.1.

## 5.1 Unconditional Generation

In this section we evaluate SolidGen on unconditional generation of B-reps.

Modeling Performance. To evaluate the modeling performance, we report the negative log-likelihood (NLL) and per-step accuracy computed on the DeepCAD test set. The NLL is reported individually for our vertex, edge and face models in bits per vertex, bits per edge and bits per face, respectively. Since there is no other method that uses our indexed B-rep format, a one-to-one comparison is not possible, and we use a 'Uniform' model that allocates uniform probability to the entire co-domain of the data as a baseline reference. The accuracy corresponds to the top-1 prediction correctness of the next token given the ground truth for the previous tokens. Table 1 (first two rows under 'DeepCAD') shows the quantitative metrics of our model and the uniform baseline on the DeepCAD (Wu et al., 2021) dataset. SolidGen obtains a total NLL of 6.11 bits and a mean accuracy of 94.85%, compared to the 'Uniform' model's NLL of 61.51 bits and accuracy of 1.33%. We find that the edge and face models always outperform the vertex model. This could be because, once the vertices are estimated, the edges and faces are significantly constrained in our representation. We show plots for top-10 accuracy in Section A.8.
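For readers reproducing these metrics, the following is a minimal sketch (ours, with an assumed helper name) of converting a model's per-step cross-entropy, which PyTorch reports in nats, into the bits-per-primitive NLL and top-1 accuracy used in Table 1.

```python
import math
import torch
import torch.nn.functional as F

def nll_bits_and_accuracy(logits, targets, n_primitives):
    """logits: (T, vocab) next-token predictions; targets: (T,) ground truth.

    n_primitives is the number of vertices (or edges/faces) in the B-rep,
    so the summed NLL is normalized per primitive rather than per token.
    """
    nll_nats = F.cross_entropy(logits, targets, reduction="sum")
    bits_per_primitive = nll_nats.item() / math.log(2.0) / n_primitives
    top1 = (logits.argmax(dim=-1) == targets).float().mean().item()
    return bits_per_primitive, top1
```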
Comparison with DeepCAD. We evaluate the quality of the generated samples produced by SolidGen and compare them with those of the DeepCAD (Wu et al., 2021) model. Note that the two models work in fundamentally different ways: SolidGen is an autoregressive generative model trained directly on the B-reps, while DeepCAD is an autoencoder that learns from sequences of CAD modeling operations. This fundamental difference in representation leads to differences in design choices, making a fair comparison challenging. We strive to provide a one-to-one comparison by focusing mainly on the sample quality. Figure 5 shows qualitative results for unconditional generation trained on the DeepCAD dataset (Wu et al., 2021). We show samples from the dataset for comparison (a), along with generated samples from DeepCAD (b) and SolidGen (c). More results are shown in Section A.8.

![9_image_0.png](9_image_0.png)

Figure 5: Qualitative results for unconditional generation. Samples (a) from the DeepCAD dataset, (b) generated by the DeepCAD model (Wu et al., 2021), and (c) generated by SolidGen.

| Model | Valid (%,↑) | Novel (%,↑) | Unique (%,↑) |
|-------|-------------|-------------|--------------|
| SolidGen (p=0.5) | 87.57 | 66.22 | 33.69 |
| SolidGen (p=0.6) | 88.12 | 73.00 | 61.42 |
| SolidGen (p=0.7) | 86.70 | 82.46 | 83.91 |
| SolidGen (p=0.8) | 85.21 | 88.09 | 93.65 |
| SolidGen (p=0.9) | 83.10 | 92.38 | 97.49 |
| DeepCAD | 62.11 | 96.73 | 99.01 |

Table 2: Quality of 5000 unconditional samples generated by networks trained on the DeepCAD dataset. SolidGen results are shown for samples generated with different top-p values in nucleus sampling.

We present quantitative metrics in Table 2. A large portion of the models produced by SolidGen are valid and usable B-reps, with SolidGen showing between 20.99% and 26.01% improvement in the valid ratio compared to DeepCAD. By varying the top-p hyperparameter in nucleus sampling, SolidGen is able to trade off between the validity and the novelty and uniqueness of the B-reps produced. We observe a clear trend where increasing p reduces validity while increasing novelty and uniqueness. SolidGen falls slightly below DeepCAD, obtaining a 4.38% to 30.51% lower novel ratio and a 1.52% to 65.32% lower unique ratio depending on the top-p parameter. We found that many of DeepCAD's samples contain self-intersections and tend to be noisy or unrealistic, lowering the valid score but artificially inflating the unique and novel scores. We further investigate this through a human perceptual study next.
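The top-p parameter driving this trade-off comes from nucleus sampling, which keeps only the smallest set of highest-probability tokens whose cumulative mass exceeds p. Below is a minimal PyTorch sketch (ours, not the released code) of one sampling step, combined with the invalid-logit masking mentioned in the implementation details; the function name and masking interface are assumptions.

```python
import torch
import torch.nn.functional as F

def nucleus_sample(logits, p=0.7, invalid_mask=None):
    """Sample one token id from 1-D logits using top-p (nucleus) sampling."""
    if invalid_mask is not None:
        # Mask out syntactically invalid tokens before normalizing.
        logits = logits.masked_fill(invalid_mask, float("-inf"))
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep tokens whose preceding cumulative mass is still below p.
    keep = cumulative - sorted_probs < p
    keep[..., 0] = True  # always keep the single most likely token
    sorted_probs = sorted_probs * keep
    sorted_probs = sorted_probs / sorted_probs.sum(dim=-1, keepdim=True)
    choice = torch.multinomial(sorted_probs, 1)
    return sorted_idx.gather(-1, choice)
```

Smaller p concentrates sampling on high-probability tokens (more valid but more repetitive B-reps), while larger p admits rarer tokens (more novel and unique, but less often valid), matching the trend in Table 2.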
![10_image_1.png](10_image_1.png) Figure 7: Comparison of distributions computed from SolidGen samples and the DeepCAD training set. SolidGen output to be more realistic 52.67% of the time. This indicates the realism of the SolidGen output is indistinguishable from the training set. For DeepCAD we see the distribution is skewed to the left, with the majority vote judging the DeepCAD output to be more realistic only 35.22% of the time. Statistics. To check if SolidGen synthesizes B-reps representative of the dataset, we compute several statistics between the samples produced by SolidGen (with top-p=0.9) compared to the DeepCAD training set in Figure 7. Here we used kernel density estimation with Gaussian kernels to fit the curves, and see that SolidGen well captures the modes of the data distribution. ## 5.2 Conditional Generation In this section, we demonstrate various conditional generation results using class labels in the PVar dataset and rendered images and voxels from the DeepCAD dataset. Conditional models use the same architecture, hyperparameters and training strategy as the unconditional model, and support two kinds of conditioning: Start-of-sequence (SOS) Embedding. For class conditioning, we learn a demb=256 dimensional embedding from the input class label, and use it as the SOS embedding in the vertex, edge and face models. Cross Attention. For image/voxel conditioning, we use a 2D/3D convolutional neural network (details in Section A.7) to obtain an 162/8 3 dimensional embedding that is flattened and jointly used by the Transformer decoders in the vertex, edge and face models via cross-attention. Modeling Performance. The modeling performance of the conditional models are shown in Table 1. The 'w/class (all)' row under 'PVar' shows that class conditioning improves the NLL by −3.22 and accuracy by +0.38% compared to the unconditional 'SolidGen' model trained on the same data. A similar trend is observed for image and voxel conditioning shown under row 'DeepCAD'. The NLL for the 'w/image (all)' conditional model improved by −3.6 while the accuracy improved by +3.6%. For the voxel conditional model 'w/voxel (all)', the NLL improved by −2.94 while the accuracy increased by +1.5%. $\mathbf{a}$ $\textcolor{red}{\text{i=e}}$ 5. $\sum_{i=1}^{\infty}\infty_i$ c. $\simeq\!\!\!\subset\!\!\!\subset\!\!\supset$ $\ll$ $\phi$ $\square$ $\widehat{\mathbb{E}}$ $\sigma$ $\square$ $\xrightarrow[5\leq5]{}$ 2. $\sum\limits_{\text{P}<\text{P}}\infty$ $\widehat{\epsilon_{\epsilon_{\epsilon_{\epsilon_{\epsilon}}}}}$ $\sim\sim\overline{\phi}$ $$\begin{array}{l}{{=-2}}\\ {{<-2}}\end{array}$$ $\widehat{\mathfrak{sl}}_2^+$ $\blacksquare$ $\mathrm{S}_{\mathrm{f}}^{\mathrm{I}}$ $\square$ Figure 8: Class conditional samples (each row is a unique class) from SolidGen trained on the PVar dataset. ![11_image_0.png](11_image_0.png) Figure 9: Image conditional samples from SolidGen using images (first column) obtained by rendering CAD models in the DeepCAD test set. Sample Quality. Qualitative results of our class-conditioning model given random class labels as input ![11_image_1.png](11_image_1.png) are shown in Figure 8. Figure 9 shows B-reps produced by our model given images as input conditioning, while Figure 10 shows results where voxelized point clouds were used as conditioning. We see that SolidGen is able to synthesize design variations that are close to the input conditioning. 
Quantitative evaluation of the class-conditional samples (40 per class), image conditional samples (20 per image) and voxel conditional samples (20 per voxel grid) is provided in Table 3. We see that SolidGen generates a high number of novel B-reps, but the valid and unique ratios go down as we transition from class to image to voxel conditioning. We suspect this trend is due to data imbalance in the dataset, which contains arbitrary shapes without clear categories, making the overall task challenging. Moreover, unlike unconditional and class conditional generation, image and voxel conditioning require generating design variations for every example in the test set, making the task more challenging.

![11_image_1.png](11_image_1.png)

Figure 10: Voxel conditional samples from SolidGen using voxelized point clouds (first column) obtained by sampling 2048 points from CAD models in the DeepCAD test set.

| Conditioning | Valid (%,↑) | Novel (%,↑) | Unique (%,↑) |
|-----------------|---------------|---------------|----------------|
| Class (PVar) | 83.48 | 92.35 | 70.71 |
| Image (DeepCAD) | 71.45 | 94.61 | 75.08 |
| Voxel (DeepCAD) | 66.78 | 92.75 | 53.95 |

Table 3: Quality of conditional samples generated by SolidGen with top-p=0.7.

The less precise nature of 2D image conditioning makes it easier to learn and generate design variations compared to the 3D case.

Ablation on joint conditioning. Conditional models require the vertex, edge and face models to be trained together. To understand the importance of jointly learning a conditional embedding for the vertex, edge and face models, we experimented with a variant of SolidGen where the conditioning is only applied to the vertex model. This variant, denoted by 'w/∗(vertex)' in Table 1, has the advantage that the vertex, edge, and face models can be trained independently in parallel since the probability distribution being modeled is p(B|z) = p(F|E, V)p(E|V)p(V|z). We observe that conditioning only the vertex model does improve the NLL in all cases (class: −0.3, image: −3.45, voxel: −0.69), but the accuracy drops below that of the unconditional model (class: −2.7%, image: −1.84%, voxel: −1.37%). The joint conditioning variants 'w/class (all)', 'w/image (all)' and 'w/voxel (all)' perform the best in all cases.

## 6 Conclusion

Limitations & Future Work. Our current method has some limitations that can be addressed in future work. Extremely complex CAD models with long sequence lengths increase the training time and the chance of compounding errors at test time, which could be attributed to teacher forcing. Scaling our model and training on larger CAD datasets might alleviate the problem. Our indexed B-rep format supports the most common curve and surface types found in mechanical CAD models, but not conic sections and splines that are common in freeform modeling. Uniform B-spline curves of a fixed degree can be supported by considering edges that group ≥ (degree + 1) vertices (the spline's control points). However, unlike prismatic surfaces, B-spline surfaces cannot be fully determined by the boundary curves, and require the additional prediction of a grid of interpolating or control points. Like previous generative neural networks (Willis et al., 2021b; Nash et al., 2020; Wu et al., 2021; Xu et al., 2022), our method is trained using classification losses. Several important CAD applications, e.g., reverse engineering, require reconstruction losses that incorporate the B-rep geometry, which is not available until postprocessing.
Finally, research into conditioning schemes could facilitate stronger user guidance during generation, and latent representations would help with better generalization.

Summary. We presented SolidGen, a generative model that can directly learn from and synthesize boundary representation (B-rep) CAD models without the need for supervision from a sequence of CAD modeling operations. We achieved this by deriving the indexed B-rep format, which captures the hierarchical nature of B-rep vertices, edges and faces in a new machine-learning-friendly representation. SolidGen generates highly coherent yet diverse B-reps, as demonstrated in our comparison with prior work. Our method has potential to be integrated into CAD software workflows, since all CAD software allows the import of solids without modeling history. As our method can generate local regions of B-rep topology, in addition to entire solids, this allows learning-based techniques to play a role in many other workflows such as solid model inpainting and parting surface creation. Conditional generation can aid in sketch-based modeling workflows and in converting point clouds, meshes or other file formats into B-reps for further editing.

## References

Automatic Generation of Parting Surfaces and Mold Halves, volume ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium of International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, 09 1995. ASME.

Silvia Ansaldi, Leila De Floriani, and Bianca Falcidieno. Geometric modeling of solid objects by using a face adjacency graph representation. Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 19(3):131–139, 1985.

Armand Daryoush Assadi. *CAD model robustness assessment and repair*. PhD thesis, Iowa State University, Ames, Iowa, 2003.

Bruce G. Baumgart. Winged edge polyhedron representation. Technical report, Stanford University, Stanford, CA, USA, 1972.

Rahul Bhargava, Lee Elliot Weiss, Friedrich B Prinz, et al. *Automated ejectability analysis and parting surface generation for mold tool design*. [Carnegie Mellon University], Engineering Design Research Center, 1991.

Garrett Bodily. *A Computational Hybrid Method for Self-Intersection Free Offsetting of CAD Geometry*. Brigham Young University, 2014.

William Bouma, Ioannis Fudos, Christoph Hoffmann, Jiazhen Cai, and Robert Paige. Geometric constraint solver. *Computer-Aided Design*, 27(6):487–501, 1995. ISSN 0010-4485. doi: https://doi.org/10.1016/0010-4485(94)00013-4. URL https://www.sciencedirect.com/science/article/pii/0010448594000134.

Geoffrey Butlin and Clive Stops. Cad data repair. In *Proceedings of the 5th International Meshing Roundtable*, pp. 7–12. Citeseer, 1996.

Weijuan Cao, Trevor Robinson, Yang Hua, Flavien Boussuge, Andrew R. Colligan, and Wanbin Pan. Graph representation of 3d cad models for machining feature recognition with deep learning. In *Proceedings of the ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference*, IDETC-CIE. ASME, 2020.

L. P. Chew. Constrained delaunay triangulations. In Proceedings of the Third Annual Symposium on Computational Geometry, SCG '87, pp. 215–222. Association for Computing Machinery, 1987. ISBN 0897912314. doi: 10.1145/41958.41981. URL https://doi.org/10.1145/41958.41981.
Tao Du, Jeevana Priya Inala, Yewen Pu, Andrew Spielberg, Adriana Schulz, Daniela Rus, Armando SolarLezama, and Wojciech Matusik. Inversecsg: Automatic conversion of 3d models to csg trees. Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 37(6):1–16, 2018. Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, and Armando Solar-Lezama. Write, execute, assess: Program synthesis with a repl. In *Advances in Neural Information Processing Systems* (NeurIPS), pp. 9169–9178, 2019. Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, and Stefano Saliceti. Computer-aided design as language. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021. Haoxiang Guo, Shilin Liu, Hao Pan, Yang Liu, Xin Tong, and Baining Guo. Complexgen: Cad reconstruction by b-rep chain complex generation. *ACM Trans. Graph. (SIGGRAPH)*, 41(4), July 2022. doi: 10.1145/ 3528223.3530078. URL https://doi.org/10.1145/3528223.3530078. Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. *CoRR*, abs/1606.08415, 2016. URL http://arxiv.org/abs/1606.08415. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In *8th International Conference on Learning Representations, ICLR*. OpenReview.net, 2020. URL https://openreview.net/forum?id=rygGQyrFvH. Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G. Lambourne, Karl D.D. Willis, Thomas Davies, Hooman Shayani, and Nigel Morris. Uv-net: Learning from boundary representations. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 11703–11712, June 2021. Benjamin Jones, Dalton Hildreth, Duowen Chen, Ilya Baran, Vladimir G Kim, and Adriana Schulz. Automate: a dataset and learning approach for automatic mating of cad assemblies. *Annual Conference on* Computer Graphics and Interactive Techniques Asia (SIGGRAPH Asia), 40(6):1–18, 2021. Kacper Kania, Maciej Zięba, and Tomasz Kajdanowicz. Ucsg-net–unsupervised discovering of constructive solid geometry tree. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020. Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Joseph G. Lambourne, Karl D.D. Willis, Pradeep Kumar Jayaraman, Aditya Sanghi, Peter Meltzer, and Hooman Shayani. Brepnet: A topological message passing system for solid models. In *IEEE Conference* on Computer Vision and Pattern Recognition (CVPR), pp. 12773–12782, June 2021. Joseph G. Lambourne, Karl D.D. Willis, Pradeep Kumar Jayaraman, Longfei Zhang, Aditya Sanghi, and Kamal Rahimi Malekshan. Reconstructing editable prismatic cad from rounded voxel models. In SIGGRAPH Asia, December 2022. Sang Hun Lee and Kunwoo Lee. Partial entity structure: A compact non-manifold boundary representation based on partial topological entities. In Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications, SMA '01, pp. 159––170, New York, NY, USA, 2001. Association for Computing Machinery. ISBN 1581133669. doi: 10.1145/376957.376976. URL https://doi.org/10.1145/376957.376976. Changjian Li, Hao Pan, Adrien Bousseau, and Niloy J Mitra. Sketch2cad: Sequential cad modeling by sketching in context. *ACM Transactions on Graphics (TOG)*, 39(6):1–14, 2020. 
Sustainability of Digital Formats Library of Congress. *Wavefront OBJ File Format*, 2020. URL https: //www.loc.gov/preservation/digital/formats/fdd/fdd000507.shtml. Cheng Lin, Tingxiang Fan, Wenping Wang, and Matthias Nießner. Modeling 3d shapes by reinforcement learning. In *European Conference on Computer Vision (ECCV)*, 2020. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *ICLR*, 2019. Abhishek Mishra. Machine learning in the aws cloud: Add intelligence to applications with amazon sagemaker and amazon rekognition, 2019. URL https://aws.amazon.com/sagemaker/groundtruth/. Chandrakana Nandi, Anat Caspi, Dan Grossman, and Zachary Tatlock. Programming language tools and techniques for 3d printing. In *2nd Summit on Advances in Programming Languages (SNAPL 2017)*. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017. Chandrakana Nandi, James R Wilcox, Pavel Panchekha, Taylor Blau, Dan Grossman, and Zachary Tatlock. Functional programming for compiling and decompiling computer-aided design. *Proceedings of the ACM* on Programming Languages, 2(ICFP):1–31, 2018. Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. Polygen: An autoregressive generative model of 3d meshes. In *International Conference on Machine Learning (ICML)*, pp. 7220–7229. PMLR, 2020. Wamiq Reyaz Para, Shariq Farooq Bhat, Paul Guerrero, Tom Kelly, Niloy Mitra, Leonidas Guibas, and Peter Wonka. Sketchgen: Generating constrained cad sketches. In Advances in Neural Information Processing Systems (NeurIPS), 2021. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips. cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. Thomas Paviot. pythonocc, 3d cad/cae/plm development framework for the python programming language. https://dev.opencascade.org/project/pythonocc (Accessed 05-Mar-2022), 2008. Alyn Rockwood, Kurt Heaton, and Tom Davis. Real-time rendering of trimmed surfaces. In Proceedings of the 16th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '89, pp. 107–116. Association for Computing Machinery, 1989. ISBN 0897913124. doi: 10.1145/74333.74344. URL https://doi.org/10.1145/74333.74344. Ari Seff, Yaniv Ovadia, Wenda Zhou, and Ryan P. Adams. SketchGraphs: A large-scale dataset for modeling relational geometry in computer-aided design. In *ICML 2020 Workshop on Object-Oriented Learning*, 2020. Ari Seff, Wenda Zhou, Nick Richardson, and Ryan P Adams. Vitruvion: A generative model of parametric cad sketches. *arXiv:2109.14124*, 2021. Gopal Sharma, Rishabh Goyal, Difan Liu, Evangelos Kalogerakis, and Subhransu Maji. Csgnet: Neural shape parser for constructive solid geometry. In *IEEE Conference on Computer Vision and Pattern Recognition* (CVPR), 2018. Gopal Sharma, Difan Liu, Subhransu Maji, Evangelos Kalogerakis, Siddhartha Chaudhuri, and Radomír Měch. Parsenet: A parametric surface fitting network for 3d point clouds. In *European Conference on* Computer Vision (ECCV), pp. 261–276. 
Springer, 2020. Sergey Slyadnev, Alexander Malyshev, Andrey Voevodin, and Vadim Turlapov. On the role of graph theory apparatus in a cad modeling kernel. *Proceedings of GraphiCon 2020*, 2020. Dmitriy Smirnov, Mikhail Bessmeltsev, and Justin Solomon. Learning manifold patch-based representations of man-made shapes. In *International Conference on Learning Representations (ICLR)*, 2021. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56): 1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html. Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, and Jiajun Wu. Learning to infer and execute 3d shape programs. In International Conference on Learning Representations (ICLR), 2019. Mikaela Angelina Uy, Yen-yu Chang, Minhyuk Sung, Purvi Goel, Joseph Lambourne, Tolga Birdal, and Leonidas J. Guibas. Point2cyl: Reverse engineering 3d objects from point clouds to extrusion cylinders. CoRR, abs/2112.09329, 2021. URL https://arxiv.org/abs/2112.09329. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6000–6010, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. *Advances in Neural Information* Processing Systems (NeurIPS), 2015. Kehan Wang, Jia Zheng, and Zihan Zhou. Neural face identification in a 2d wireframe projection of a manifold object. In *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 1612–1621, 2022. doi: 10.1109/CVPR52688.2022.00167. Xiaogang Wang, Yuelang Xu, Kai Xu, Andrea Tagliasacchi, Bin Zhou, Ali Mahdavi-Amiri, and Hao Zhang. Pie-net: Parametric inference of point cloud edges. In *Advances in Neural Information Processing Systems*, volume 33, pp. 20167–20178. Curran Associates, Inc., 2020. K.J. Weiler. *Topological structures for geometric modeling*. Technical report RPI, Center for Interactive Computer Graphics. University Microfilms, 1986. Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. *Neural Computation*, 1(2):270–280, 1989. doi: 10.1162/neco.1989.1.2.270. Karl D. D. Willis, Yewen Pu, Jieliang Luo, Hang Chu, Tao Du, Joseph G. Lambourne, Armando SolarLezama, and Wojciech Matusik. Fusion 360 gallery: A dataset and environment for programmatic cad construction from human design sequences. *ACM Transactions on Graphics (TOG)*, 40(4), 2021a. Karl DD Willis, Pradeep Kumar Jayaraman, Joseph G Lambourne, Hang Chu, and Yewen Pu. Engineering sketch generation for computer-aided design. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshop), pp. 2105–2114, 2021b. Rundi Wu, Chang Xiao, and Changxi Zheng. Deepcad: A deep generative network for computer-aided design models. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 6772–6782, October 2021. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. 
In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of Proceedings of Machine Learning Research, pp. 10524–10533. PMLR, 2020. URL https://proceedings. mlr.press/v119/xiong20b.html. Xiang Xu, Karl DD Willis, Joseph G Lambourne, Chin-Yi Cheng, Pradeep Kumar Jayaraman, and Yasutaka Furukawa. Skexgen: Autoregressive generation of cad construction sequences with disentangled codebooks. In *International Conference on Machine Learning*, 2022. Xianghao Xu, Wenzhe Peng, Chin-Yi Cheng, Karl D.D. Willis, and Daniel Ritchie. Inferring cad modeling sequences using zone graphs. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 6062–6070, June 2021. ## A Appendix A.1 Hashing Indexed B-Reps We use duplication detection (Willis et al., 2021b) tools to compare B-reps. This is used, for example, to assess the degree to which the generated B-reps exactly match with those in the training set, quantify the diversity of samples generated by SolidGen, etc. The duplicate detection algorithm is based on the Weisfeiler Lehman isomorphism test. Two graphs are built from the B-rep—a face adjacency graph (Ansaldi et al., 1985) where the B-rep faces are graph nodes and adjacent faces are connected by graph edges, and a vertex adjacency graph where B-rep vertices are graph nodes and the B-rep edges are graph edges. The quantized vertex positions are used to initialize the vertex hashes and stored as node attributes, and the curve type information is stored as edge attributes, making the final hash dependent on the solid geometry. The final hash string for each solid is created by concatenating the graph hashes of the face adjacency and vertex adjacency graphs. ## A.2 Converting Indexed B-Reps To B-Reps Here we detail the complete method to convert indexed B-reps B = {V, E, F} to B-reps. ## A.2.1 Vertices And Edges Each point in V is potentially a B-rep vertex. Each edge E ∈ E is used to dereference into V and collect the list of points defining the edge and its geometry. The endpoints of E are used to construct a pair of B-rep vertices if they do not already exist, and the topological B-rep edge connecting them. The curve geometry is defined by the cardinality of E: two points define a line, while three points define an arc (see Figure 3 right). Since we sort the indices in each edge in ascending order for ordering invariance, the start point, arc midpoint and end point must be identified. We do this by considering the permutations of the three points, and choose the one where the second point coincides with the point evaluated at the middle parameter of the arc. ## A.2.2 Wires Next, we examine each face in F to gather the list of edges bounding it. Note that edges bounding a face can either be part of an outer wire running in counter-clockwise direction that defines the visible part of the face's geometry, or part of one or more inner wires running in clockwise direction that define the hidden portion (holes) in the face's geometry. Our goal here is to form these wires by connecting the given set of edges into one or more closed loops. We achieve this by defining a vertex-edge graph where each B-rep edge and its vertices are graph nodes connected by directed edges and detecting cycles in this graph (see Figure A1 (a)). Each cycle (except trivial ones) defines a wire, and the outer wire is defined as the largest wire (based on the extents of its bounding box). 
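As a rough illustration of this cycle-detection step, here is a simplified sketch using networkx and an undirected variant of the vertex-edge graph; the helper below is our own and not part of the paper's implementation:

```python
import networkx as nx

def candidate_wires(face_edges):
    """face_edges: list of (start_vertex, end_vertex) index pairs bounding a face.
    Returns candidate wires as closed loops of vertex indices, largest first."""
    graph = nx.Graph()
    graph.add_edges_from(face_edges)   # B-rep vertices as nodes, edges as edges
    # Each independent cycle of the vertex-edge graph is a candidate wire.
    wires = nx.cycle_basis(graph)
    # Proxy for the outer-wire test: the paper picks the wire with the largest
    # bounding box; vertex count is used here only as a stand-in.
    return sorted(wires, key=len, reverse=True)
```

In the full procedure the graph is directed and trivial cycles are discarded; the outer wire is then selected by the extent of its bounding box.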
The other wires are assumed to run in clockwise direction and define holes.

## A.2.3 Surfaces

We support five prismatic parametric surface types that are common in CAD: plane, cylinder, cone, sphere, and torus. We develop an algorithm that identifies the surface type by finding the simplest surface consistent with the boundary curves in the wires bounding a face within a prescribed tolerance (see Figure A1 (b)). If all curves in the face are coplanar then a plane is fitted. Otherwise, the curves are checked for consistency with a cylinder, cone, sphere and torus, in that order. If a face is trimmed by a combination of lines and arcs then the surface will either be a cylinder or a cone. For a cylinder, the radii of the arcs must be the same, the normals must be parallel and the centers aligned in the normal direction. Any lines must also be parallel to the normal direction. If these conditions are not met then the surface is treated as a cone.

![18_image_0.png](18_image_0.png)

Figure A1: (a) Vertex-edge graph used to build and orient the wires by finding cycles. (b) Identifying different kinds of surfaces from the curves (lines in blue, arcs in green) in the outer wire.

If a face contains only arcs, we can define a sphere whose center lies at the center of an arc, and check for consistency with the rest of the arcs. If a sphere is not an appropriate fit, we define a torus with the major radius set to the average of the two largest arcs and the minor radius set to that of the smallest arc. The only ambiguity with this procedure is when a face contains a single circle on its boundary—the surface could be a plane trimmed into a disk or a hemisphere. Real world CAD data does not contain many examples like this. Spheres typically appear only on corner fillets as part of three-sided faces, where the presence of three non-coplanar arcs clearly disambiguates this case.

## A.2.4 Faces

With wires and surfaces available, it is straightforward to combine them to build a face using the solid modeling kernel. Finally, to topologically connect the individual faces with each other, we sew the faces into shells, and further attempt to stitch the shells into a single solid model.

## A.3 Masking Invalid Logits

During the autoregressive sampling, we mask the invalid logits (by setting their values to a large negative number, e.g., −10⁹) that do not satisfy the criteria below at each time step, and distribute the next-token probabilities among the valid logits as in Nash et al. (2020) (which happens automatically when applying the softmax operation). Let t denote the current step in the sampling, starting from t = 0 for the first step. For the vertex model:

- If the vertex token being generated is a z-coordinate (t mod 3 = 0), then it has to be greater than or equal to the previous z-coordinate that was generated.
- If the vertex token being generated is a y-coordinate (t mod 3 = 1), and the last two z-coordinates were equal, then it has to be greater than or equal to the previous y-coordinate that was generated.
- If the vertex token being generated is an x-coordinate (t mod 3 = 2), and the last two z-coordinates and y-coordinates were equal, then it has to be greater than the previous x-coordinate that was generated.
- The <EOS> token can only appear after an x-coordinate, i.e., when t > 0 and t mod 3 = 0.

For the edge model:

- The <EOS> and <NEW_EDGE> tokens can only appear if t ≠ 0 and cannot be repeated consecutively.
- If the previous token was a <NEW_EDGE>, then the current token has to be greater than or equal to the first token in the previous edge, to respect the sorted ordering of the edges.
- If the previous token was not a <NEW_EDGE>, then the current token has to be greater than the previous token, to respect the sorted order of tokens within each edge.
- A <NEW_EDGE> or <EOS> token can only appear after two or three edge tokens have been generated. This guarantees that we define an edge as a line or an arc.

For the face model:

- The <EOS> and <NEW_FACE> tokens can only appear if t ≠ 0 and cannot be repeated consecutively.
- If the previous token was a <NEW_FACE> token, then the current token has to be greater than or equal to the first token in the previous face, to respect the sorted ordering of the faces.
- If the previous token was not a <NEW_FACE>, then the current token has to be greater than the previous token, to respect the sorted order of tokens within each face.
- A <NEW_FACE> or <EOS> token can only appear after at least two face tokens have been generated. This guarantees that each face has at least two edges to make it closed, e.g., two arcs.
- An edge index can only be used twice in the sampled face tokens. This ensures that an edge can at most be shared by two faces, and helps prevent non-manifold results.

![19_image_0.png](19_image_0.png)

Figure A2: (a) An example of an image pair shown to the crowd workers. (b) Examples of "realistic" models (from the DeepCAD training set) shown to the crowd workers. (c) Examples of unrealistic models. Two of these were generated by DeepCAD and two by SolidGen.

## A.4 Human Evaluation

As discussed in Section 5.1, to evaluate the realism of the models generated by SolidGen, we perform a perceptual study using human evaluators recruited through Amazon's Mechanical Turk service (Mishra, 2019). The crowd workers were shown pairs of images, one of which was generated by SolidGen or DeepCAD and the other randomly selected from the training set. The position of the image of the generated model was randomized to be at the top or the bottom of the pair. An example of an image pair is shown in Figure A2a. The crowd workers were asked to select the model which they found to be most "realistic". To assist with this task we provided four examples of realistic models (Figure A2b) and "unrealistic" models (Figure A2c). The realistic examples were carefully chosen from the training set to include desirable properties like symmetry and clear design intent and function. For the unrealistic examples we deliberately selected models which had obvious problems, like missing faces, or which formed an incoherent collection of shapes. In total 4900 pairs of images were shown to the crowd workers, half from SolidGen and half from DeepCAD. Each pair was independently judged by 7 crowd workers and we record the number of raters who identified the generated model as more realistic than the randomly selected model from the training set. This gives us a "realism" score from 0–7 for each image pair. The percentage of image pairs with each realism score is shown in the histogram in Figure 6 of the main paper. If the human raters were selecting randomly between the two images, then we would expect Figure 6 to form a binomial distribution. We see that for SolidGen the distribution is centered on a realism score of 3.5, but has wider tails than would be expected from chance alone.
This indicates that while the realism of the models generated by SolidGen is very similar to the training data, the raters do tend to agree on the realism of individual examples to a greater extent than would be expected by chance alone. For the DeepCAD data there is a clear skew towards the raters identifying the training set as more realistic.

![20_image_0.png](20_image_0.png)

Figure A3: (a) An example of one of the parametric sketch templates used to build the dataset. (b–d) Examples of generated models with the extrusion from the inner wire added to the base extrusion. Here we see the effect of modifying the parameters controlling both the sketch geometry and the lengths of the two extrusions. (e) The result of a Boolean subtraction of the inner wire.

## A.5 Generation Of The Parametric Variation Dataset

The parametric variation (PVar) dataset was created using constrained parametric sketches from the SketchGraphs dataset (Seff et al., 2020). Geometry and constraints were first extracted from 60 hand-picked sketches, and additional constraints and parameters controlling the lengths of lines and the radii of arcs and circles were added one by one until the sketches were optimally constrained (Bouma et al., 1995). These parameters could then be varied to generate multiple sketch geometries, while the constraints enforce aspects of the design intent such as horizontal, vertical and parallel lines and concentric arcs and circles. In cases where the sketch geometry defined multiple adjacent regions (see Figure A3 (a)), the curves were organized into a set of nested wires. The outermost wire is chosen to contain the union of the regions, and inner wires are chosen at random to avoid any two wires sharing the same curve. The outermost wire is first extruded by a random distance, forming a base extrusion. The inner wires are then extruded, starting from the top plane of the base extrusion, and a Boolean union or subtraction is used to either add or remove material from the result. Figure A3 shows the results of this process. The solids generated from each distinct sketch template are considered to belong to the same class, resulting in 60 classes. Figure A4 shows one representative example from each class.

## A.6 Criteria Used To Evaluate Validity Of Generated B-Reps

We consider a B-rep to be valid if it was successfully built from the network's output and additionally satisfies the following criteria:

1. **Triangulatable.** Every face in the B-rep must generate at least one triangle, otherwise it cannot be rendered properly and is not possible to manufacture.
2. **Wire ordering.** The edges in each wire in the B-rep must be ordered correctly. We check this using the ShapeAnalysis_Wire::CheckOrder() function in PythonOCC (Paviot, 2008) with a tolerance of 0.01.
3. **No wire self-intersection.** The wires should not self-intersect, to ensure that faces are well-defined and surfaces trimmed correctly. We check this using the ShapeAnalysis_Wire::CheckSelfIntersection() function in PythonOCC with a tolerance of 0.01.
4. **No bad edges.** The edges in shells should be present once, or twice but with different orientations. This can be checked with the ShapeAnalysis_Shell::HasBadEdges() function.

![21_image_0.png](21_image_0.png)

Figure A4: One example from each class in the PVar dataset.

## A.7 Architecture Of Image And Voxel Encoders

The image encoder is a 2D convolutional neural network (CNN) that takes in 105 × 128 RGB images and outputs (16 × 16) × demb features, where demb = 256.
Its architecture is defined as: Conv2d(64, 7, 3, 2) → GELU() → Conv2d(64, 3, 1, 2) → GELU() → Dropout(0.1) → Conv2d(64, 3, 1, 2) → GELU() → Conv2d(64, 3, 1, 1) → GELU() → Dropout(0.1) → AdaptiveAvgPool2d(16, 16) → Conv2d(256, 3, 1, 1) → PosEmbed() → SpatialFlatten(), where Conv2d(output channels, kernel size, padding, stride) is a 2D convolutional layer with bias included, GELU() is the Gaussian error linear unit activation (Hendrycks & Gimpel, 2016), Dropout(probability) is the dropout layer (Srivastava et al., 2014), AdaptiveAvgPool2d(height, width) is an adaptive average pooling layer that outputs feature maps of the given spatial resolution, PosEmbed() learns embeddings for the spatial indices of the grid and adds them to the features, and SpatialFlatten() flattens a convolutional feature map of shape (N × C × H × W) into (N × C × (H × W)), and further reshapes it into N × (H × W) × C, where the second dimension is treated as the sequence dimension as in Nash et al. (2020). (A code transcription of the image encoder is sketched at the end of this appendix.)

The voxel encoder is a 3D CNN that takes in 64 × 64 × 64 binary voxel grids and outputs (8 × 8 × 8) × demb features. Its architecture is: Embed(2, 8) → Conv3d(64, 7, 3, 2) → GELU() → Conv3d(64, 3, 1, 1) → GELU() → Dropout(0.1) → Conv3d(64, 3, 1, 2) → GELU() → Conv3d(256, 3, 1, 1) → GELU() → Dropout(0.1) → Conv3d(256, 3, 1, 2) → PosEmbed() → SpatialFlatten(), where Embed(input dimension, output dimension) is a linear embedding layer that maps the binary voxels into learned 8-dimensional features. The output features are utilized by SolidGen via cross-attention as described in Section 5.2.

## A.8 Additional Results

Additional results comparing DeepCAD (Wu et al., 2021) and SolidGen on the unconditional generation task are shown in Figure A5 and Figure A6. We show additional results for class-conditional generation on the PVar dataset in Figure A7, and image conditional and voxel conditional results on the DeepCAD dataset in Figure A8 and Figure A9, respectively. We include the top-k accuracy plots (1 ≤ k ≤ 10) for SolidGen evaluated on the DeepCAD (Wu et al., 2021) test set in Figure A10.

![22_image_0.png](22_image_0.png)

![23_image_0.png](23_image_0.png)

![24_image_0.png](24_image_0.png)

Figure A7: Additional class-conditional generation results. Each row shows samples generated from the same class.

![24_image_1.png](24_image_1.png)

Figure A8: Additional image conditional samples from SolidGen using images (first column) obtained by rendering CAD models in the DeepCAD test set.

![25_image_0.png](25_image_0.png)

Figure A9: Additional voxel conditional samples from SolidGen using voxelized point clouds (first column) obtained by sampling 2048 points from CAD models in the DeepCAD test set.

![25_image_1.png](25_image_1.png)

Figure A10: Top-k next-token prediction accuracy of the unconditional SolidGen model trained on the DeepCAD dataset compared to a uniform baseline.
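As flagged in Section A.7, the image encoder string can be transcribed into PyTorch roughly as follows. This is a sketch under our reading of the layer notation (the PosEmbed and SpatialFlatten modules are our own minimal interpretations), not the authors' released code:

```python
import torch
import torch.nn as nn

class SpatialFlatten(nn.Module):
    def forward(self, x):                       # (N, C, H, W) -> (N, H*W, C)
        n, c, h, w = x.shape
        return x.reshape(n, c, h * w).transpose(1, 2)

class PosEmbed(nn.Module):
    """Learned embedding per spatial grid index, added to the feature map."""
    def __init__(self, channels=256, grid=16):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, channels, grid, grid))
    def forward(self, x):
        return x + self.pos

# Conv2d(out_channels, kernel_size, padding, stride) in the paper's notation.
image_encoder = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, padding=3, stride=2), nn.GELU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, stride=2), nn.GELU(),
    nn.Dropout(0.1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, stride=2), nn.GELU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, stride=1), nn.GELU(),
    nn.Dropout(0.1),
    nn.AdaptiveAvgPool2d((16, 16)),
    nn.Conv2d(64, 256, kernel_size=3, padding=1, stride=1),
    PosEmbed(channels=256, grid=16),
    SpatialFlatten(),  # -> (N, 16*16, 256): a sequence of d_emb features
)

features = image_encoder(torch.randn(2, 3, 105, 128))  # shape (2, 256, 256)
```

The voxel encoder follows the same pattern with Conv3d layers and an 8 × 8 × 8 positional grid.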
Review 1: Summary: The authors present the first autoregressive model of CAD designs that directly generates boundary-representation format (B-rep) designs. The work extends the PolyGen-style [1] modelling approach (autoregressive transformers + pointer networks) to B-rep data, formulating a data representation, and architecture modifications to handle this domain. The authors demonstrate that the approach works well and that generations can be conditioned by class, image and voxel contexts. [1] PolyGen: An Autoregressive Generative Model of 3D Meshes, Charlie Nash, Yaroslav Ganin, S. M. Ali Eslami, Peter W. Battaglia, https://arxiv.org/abs/2002.10880 Strengths and Weaknesses: Strengths: * Direct B-rep modelling makes sense: Approaches that generate sequences of user actions can be constrained to a limited set of user operations. And it is complex to extend such approaches to the full range of useful operations. Also, data availability is an important consideration: Autoregressive modelling performance scales reliably with data, and CAD data withouts user operation sequences is more abundant. * The work presents a sensible strategy for representating B-rep data in a way suitable for AR modelling. It builds on PolyGen and related models, with reasonable tweaks to the data representation and architecture. * The paper is well-written: Visualizations are clear and useful, and the method description is clear. * The results look good, although some instances of poorly formed designs can be found. Weaknesses: * The models are pretty small, which will impact performance. It would be interesting to see something like a scaling study: how performance changes as a function of model size. If overfitting is an issue at this dataset size, it would be good to know. Questions: * What is the impact of the 6-bit quantization? That isn't a whole lot of precision, does it limit the complexity of the shapes that can be generated? * In the face model, does summing the vertex embeddings for an edge destroy order information which might be useful? * Are the vertex encoders shared in the edge and face model? And if not, it could be worth trying. Requested Changes: Figure 1: Please be more explicit about whether the examples are data or samples. E.g. "two data examples", or "two samples" Section 1: Please say how much bigger the datasets with / without modeling history are. This is an important point, and would save readers time. End of section 2: "By contrast, our method is an autoregressive generative model, and does not require an expensive optimization step to build plausible B-reps." -> autoregressive generation is pretty expensive too. Is there a more convincing argument for the benefits of autoregression vs the optimization-based approach? Equation (3): The p(z) should be omitted, as we're conditioning on z I would appreciate a bit more of an explanation of the use of B-rep arc points. Sec 4.1: "...additional points inserted on B-rep edges to encode the curve primitive information as explained in Figure 3", Figure 3 doesn't really explain how this works. How does it work with the position / coordinate embeddings? Fig 7. 
Minor plotting issue: the bottom of the figures are cut off Fig 9: It would be better to show the actual voxels, rather than point clouds Broader Impact Concerns: I'm not aware of any broader impact concerns ================================================== Review 2: Summary: The paper proposes one of the first generative models for synthesizing solid 3D models structured as a boundary representation (B-rep). B-reps represent an important class of 3D representations that may be natively supported by a variety of CAD systems and geometric modelling kernels. Hence, the capability to directly synthesize data in B-rep format represents natural interest in the context of data-driven 3D design (e.g., for presenting designs initialised by scans, sketches or other forms of 2D/3D data). The B-rep format considered in this project basically consists of three lists: vertices, edges (that index into vertices), and faces (index into edges). For each part of the format, a separate transformer-based model is trained to predict parameters of its respective instances in the list, given the previously generated sequence. I view the transformer model as an appropriate architecture due to its success in related tasks (e.g., mesh synthesis) and overall wide adoption. In terms of quantitative evaluation, unconditional (from noise) and conditional (from shape category, image, and point cloud) generation results are presented; performance is described in terms of log-likelihood, B-rep validity, novelty, and uniqueness. While the unconditional samples are shown to be quite unique and diverse, constraining generation by feeding in an initializer image or point cloud correctly conditions the results to resemble the input. Both these qualities are important as creative tools aimed to help designers develop CAD designs. Strengths and Weaknesses: Strengths: — The task that is being considered is new, relevant, and differs from those considered previously in the context of CAD generation (e.g. generating CSG trees or CAD commands). — The solution is adequate and seems to be comprehensively addressing the problem. — Qualitative and quantitative evaluations are presented versus one related work (DeepCAD) and convincing, with SolidGen models being generally more valid but arguably less diverse/novel. — The text of the paper is clearly written and easy to follow. — Interesting conditioning cases (e.g. images and point clouds) are presented. Weaknesses: — The list of unsupported CAD primitives includes certain important instances, e.g. B-splines which are arguably crucial in modern CAD designs. — Technically, using pointer networks in the context of the considered task is similar to their applications in other works, e.g. PolyGen. However, this does not diminish the interesting application of these models to B-rep synthesis. Requested Changes: I believe that the evaluation is sufficiently comprehensive, and the text is sufficiently well-written so that no critical changes to the submission is necessary. Broader Impact Concerns: Most generative models require careful considerations of their generated samples to avoid ethical concerns. Another concern is leveraging large amounts of authored data in the context of product design (e.g., would one want to ensure that the design is fully “novel” so that no copyright issues are raised?). A recent Github Copilot issue illustrates the potential risks involved with using publicly gathered datasets in the context of data-driven design assistants. 
================================================== Review 3: Summary: This paper proposes a new generative model, namely the SolidGen, that directly synthesizes the Boundary representation format (B-reps) from a sequence of CAD modeling operations in an unsupervised way using Transformers and two-level pointer networks. The SolidGen operates on the indexed boundary representation which is a new representation for B-reps as numeric arrays proposed by the authors. The index boundary representation maintains the geometry and topology of B-reps. The authors empirically show that the SolidGen achieve impressive results in both unconditional and conditional/controllable generation of B-reps. Strengths and Weaknesses: ################################################################### **Summary of the Review:** Overall, this paper could be an interesting contribution. There is no significant algorithmic development. However, the application studied in the paper is relevant, and the use of a transformer-based generative model to generate B-reps in this new application area is novel. Currently, I am leaning toward accepting the paper. ################################################################### **Strong points:** 1. The paper addresses an important problem in computer-aided design. 2. Utilizing a transformer-based generative model to generative B-reps is novel. 3. The empirical results in the paper are convincing. 4. The paper is well written with great-looking illustrative figures. ################################################################### **Weak points:** 1. There is no significant algorithmic development in the paper. 2. Comparisons to other methods need to be provided. Requested Changes: **Questions for the Authors:** 1. Can the authors provide comparisons to other methods on the task of generating B-reps? 2. How can the proposed method be combined with existing CAD methods or incorporated into existing CAD software such as AutoCAD? The authors should discuss these in the paper. Broader Impact Concerns: I have no concerns on the ethical implications of the work. ================================================== Review 4: Summary: The authors consider the problem of generating a CAD model through its B-Rep (boundary representation). The B-rep is the de-facto representation for parametric CAD models, representing shapes as geometric graphs. They propose an autoregressive model to generate the B-rep through its (hyper-)graph representation. The model uses a transformer architecture, augmented with pointer networks to represent relations. The model is evaluated on an unconditional generation task, as well as a generation task conditioned on a rendering of the CAD target. Strengths and Weaknesses: ## Strong points The paper lays out a model which achieves its stated goals, and performs well on the evaluated problem. The proposed model is sound, and models the important relational aspect of the B-rep which is crucial for parametric CAD design. Overall, this paper provides an interesting demonstration of the possibility of auto-regressive generative modeling of B-rep for CAD design, with potentially interesting applications in the future. Requested Changes: 1. It would be great to improve the description of the B-rep in section 3, by e.g. including diagrams as in fig. A1 as well as providing a formal mathematical description (perhaps simplified) of the data at hand. 
This would help make the main text better self-contained and more accessible to an applied ML audience who may not be familiar with CAD design. This would also clarify the constructions used in section 4. (This recommendation is important for my recommendation). 2. In fig. 3, the wireframe in the first panel is very light and hard to see - I would recommend increasing the line weight of the figure in order to make it more apparent. (This recommendation is minor) 3. In section 5, or a separate appendix, it would be great to include the following details of training: 1) all hyper-parameters of training, including AdamW L2 decay parameter, number of training epochs, learning rate scheduling (if any) etc. 2) include some details on how the hyper-parameters were selected (e.g. manually / automatically), 3) include some information about training time / hardware. (This recommendation is minor). Broader Impact Concerns: I do not have any broader impact concerns ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper is of particularly high-quality: well-written, technically sound, making a novel and potentially impactful contribution that is convincingly evaluated. I think it's appropriate for a featured certification. Before the paper can be formally accepted, I would like to request a minor revision. In the camera-ready version, I would like the authors to comment on the following questions, which came up during the discussion with the reviewers: 1. The list of unsupported CAD primitives includes certain instances, such as B-splines, which are important in modern CAD. Can the proposed approach be modified to take these into account? 2. The paper considers relatively small models in the experimental evaluation. How does the approach scale with model size? Is overfitting a concern? ==================================================
# Transport Score Climbing: Variational Inference Using Forward KL And Adaptive Neural Transport

Liyi Zhang *zhang.liyi@princeton.edu*
Department of Computer Science, Princeton University

David M. Blei *david.blei@columbia.edu*
Department of Statistics and Department of Computer Science, Columbia University

Christian A. Naesseth *c.a.naesseth@uva.nl*
Amsterdam Machine Learning Lab, Informatics Institute, University of Amsterdam

Reviewed on OpenReview: *https://openreview.net/forum?id=7KW7zvKd7J*

## Abstract

Variational inference often minimizes the "reverse" Kullback-Leibler (KL) divergence, DKL(q||p), from the approximate distribution q to the posterior p. Recent work instead studies the "forward" KL divergence, DKL(p||q), which, unlike the reverse KL, does not lead to variational approximations that underestimate uncertainty. To optimize the forward KL, these methods leveraged Markov chain Monte Carlo (MCMC) methods to evaluate the intractable expectation with respect to the posterior p. This paper introduces Transport Score Climbing (TSC), a method that optimizes DKL(p||q) by using Hamiltonian Monte Carlo (HMC). For improved performance, the HMC chain is run on a transformed, or warped, space. A function called the transport map performs the transformation by acting as a change-of-variable from the latent variable space. TSC uses HMC samples to dynamically train the transport map while optimizing DKL(p||q). TSC leverages synergies, where better transport maps lead to better HMC sampling, which then leads to better transport maps. We demonstrate TSC on synthetic and real data, including using TSC to train variational auto-encoders. We find that TSC achieves competitive performance on the experiments.

## 1 Introduction

A main goal in probabilistic modeling and inference is to find the posterior distribution of latent variables given observed data (Gelman et al., 2013). Probabilistic modeling allows using both structured knowledge and flexible parameterizations, including neural networks, but the posterior is often intractable. In this situation, we can resort to approximations to estimate the posterior distribution (Bishop, 2006). Variational Inference (VI) is an optimization-based approximate inference method. It posits a family of distributions, and chooses a distribution q in that family to approximate the posterior p of a probabilistic model. It is a popular method for complex models because of its computational convenience, particularly when optimizing the "reverse", or "exclusive", Kullback-Leibler (KL) divergence DKL(q||p) through stochastic gradient descent (SGD) (Jordan et al., 1999; Hoffman et al., 2013; Blei et al., 2017). However, reverse VI - VI that uses the reverse KL - leads to approximations that may underestimate the uncertainty in p (Minka, 2005; Yao et al., 2018). As an alternative, forward VI minimizes the "forward", or "inclusive", KL DKL(p||q). This approach better captures posterior uncertainty, but it is more computationally challenging (Bornschein & Bengio, 2015; Gu et al., 2015; Finke & Thiery, 2019; Naesseth et al., 2020). Another approach to approximate inference is Markov chain Monte Carlo (MCMC). MCMC methods sample from a Markov chain whose stationary distribution is the posterior, and produce good samples if run for long enough. However, in practice MCMC methods can be more computationally demanding than reverse VI in that they can take many iterations to converge. To combine the advantages of both paradigms, Naesseth et al.
(2020) introduce Markovian score climbing (MSC). MSC is a variational method for minimizing the forward DKL(p||q), which uses a Markov chain to approximate its intractable expectation over p. MSC uses an MCMC chain to approximate the expectation without asymptotic biases. However, this method uses basic MCMC kernels that can lead to slow exploration of the sampling space. In this paper, we develop transport score climbing (TSC), a new algorithm to minimize DKL(p||q). TSC uses the MSC framework, but replaces the simple MCMC kernel with a Hamiltonian Monte Carlo (HMC) on a transformed, or warped, space (Marzouk et al., 2016; Mangoubi & Smith, 2017; Hoffman et al., 2019). In particular, TSC adaptively transforms the HMC sampling space, where the transformation is based on the current iteration of the variational approximation. In this way, TSC overcomes the slow sampling of MCMC and approximates the posterior more reliably and efficiently. In more detail, TSC optimizes a normalizing flow (Rezende & Mohamed, 2015), where the flow (or, equivalently, transport map) is trained from HMC samples from the warped space. Thus, TSC trains its transport map from scratch and leverages a synergy between the Markov chain and the variational approximation: an updated transport map improves the HMC trajectory, and the better HMC samples help train the transport map. Finally, we show how TSC is amenable to SGD on large-scale IID data. To this end, we use TSC to improve training of deep generative models with a variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). Contributions. 1) We develop a novel method, TSC, that minimizes DKL(p||q) with flow posteriors and an adaptive HMC kernel. The HMC kernel reuses the flow posterior to warp the underlying latent space for more efficient sampling. 2) We show that the transport map of the warped space can be trained adaptively, instead of requiring a separate pre-training suggested by previous methods. 3) Empirical studies show that TSC more closely approximates the posterior distribution than both reverse VI and MSC. Furthermore, we use this methodology to develop a novel VAE algorithm competitive against four benchmarks. TSC continuously runs HMC chains and requires no reinitializations from the variational approximation q at each epoch, but these reinitializations are used by previous methods. Related Work. Forward VI is explored by several approaches. Bornschein & Bengio (2015); Finke & Thiery (2019); Jerfel et al. (2021) study VI with DKL(p||q) by using importance sampling (IS), and Gu et al. (2015) uses sequential Monte Carlo (SMC). Dieng & Paisley (2019) combines IS and VI in an expectation maximization algorithm. IS and SMC introduce a non-vanishing bias that leads to a solution which optimizes DKL(p||q) and the marginal likelihood only approximately (Naesseth et al., 2019; 2020). Closest to the method proposed here is Naesseth et al. (2020); Ou & Song (2020); Gabrié et al. (2021), which all use MCMC kernels to minimize DKL(p||q). Ou & Song (2020); Gabrié et al. (2021) can be considered to be instances of MSC (Naesseth et al., 2020). We build on MSC and propose to use the more robust HMC kernel together with a space transformation. The work of Kim et al. (2022), running parallel Markov chains for improved performance, can be combined with TSC for potential further gains. Mangoubi & Smith (2017) show that MCMC algorithms are more efficient on simpler spaces, such as on strongly log-concave targets. Marzouk et al. (2016); Hoffman et al. 
(2019) use transformations to create warped spaces that are easy to sample from. The transformation is defined by functions called "transport maps" that are pre-trained by reverse VI. The proposed algorithm differs in the optimization objective and by learning the transport map together with model parameters end-to-end. Using MCMC to learn model parameters based on the maximum marginal likelihood is studied in many papers, e.g., Gu & Kong (1998); Kuhn & Lavielle (2004); Andrieu & Moulines (2006); Andrieu & Vihola (2014). In contrast, TSC proposes a new method for the same objective, by adapting the MCMC kernel using VI. Kingma & Welling (2014); Rezende et al. (2014) introduce variational autoencoders (VAE) where both the generative model and the approximate posterior are parameterized by neural networks. They optimize a lower bound to the marginal log-likelihood, called the evidence lower bound (ELBO), with the reparameterization trick. Salimans et al. (2015); Caterini et al. (2018) incorporate MCMC or HMC steps to train a modified ELBO with lower variance. Hoffman (2017); Hoffman et al. (2019); Ruiz et al. (2021) instead formulate the optimization as maximum likelihood while utilizing MCMC methods. The work proposed here also targets maximum likelihood, but we neither augment the latent variable space (Ruiz et al., 2021) nor reinitialize the Markov kernel from the posterior at each epoch (Hoffman, 2017; Hoffman et al., 2019). Instead, we continuously run the Markov kernel on the warped latent variable space.

## 2 Background

Let p(x, z) be a probabilistic model, with z as latent variables and x as observed data. The probabilistic model factorizes into the product of the likelihood p(x|z) and prior p(z), which are known and part of the modeling assumptions. A main goal of Bayesian inference is to calculate or approximate the posterior distribution of latent variables given data, p(z|x). VI approximates the posterior by positing a family of distributions Q, where each distribution takes the form q(z; λ) with variational parameters λ. The most common approach, reverse VI, minimizes the reverse KL using gradient-based methods: minλ DKL(q(z; λ)||p(z|x)). The main strength of reverse VI is computational convenience.

## 2.1 Variational Inference With Forward KL

Reverse VI often underestimates the uncertainty in p. An alternative approach, which is the focus of this work, is to minimize the forward KL: minλ DKL(p(z|x)||q(z; λ)). While more challenging to work with, this objective does not lead to approximations that underestimate uncertainty (Naesseth et al., 2020). Moreover, if Q is the subset of exponential family distributions with sufficient statistics T, and Ep[T] exists and is finite, the optimal q matches the expected sufficient statistics under the posterior p exactly. The forward KL divergence from p to q is

$$D_{KL}(p(\mathbf{z}|\mathbf{x})\,||\,q(\mathbf{z};\lambda)):=\mathbb{E}_{p(\mathbf{z}|\mathbf{x})}\left[\log\frac{p(\mathbf{z}|\mathbf{x})}{q(\mathbf{z};\lambda)}\right].\qquad(1)$$

To minimize eq. (1), the gradient w.r.t. the variational parameters is

$$\mathbb{E}_{p(\mathbf{z}|\mathbf{x})}[-\nabla_{\lambda}\log q(\mathbf{z};\lambda)].\qquad(2)$$

Approximating the expectation over the unknown posterior p(z|x) is a major challenge. Bornschein & Bengio (2015); Gu et al. (2015) approximate the expectation in eq. (2) through importance sampling and sequential Monte Carlo, but these methods give estimates of the gradient with systematic bias.
In this work we leverage Markovian score climbing (MSC) (Naesseth et al., 2020), which uses samples z from an MCMC kernel with the posterior p(z|x) as its stationary distribution. The resulting SGD method leads to an algorithm that provably minimizes DKL(p||q) (Naesseth et al., 2020).

Normalizing Flow. In this work we focus on the variational family of normalizing flows. Normalizing flows transform variables with simple distributions to build expressive approximate posteriors (Rezende & Mohamed, 2015; Tabak & Turner, 2013), and are tightly linked with warped-space HMC. Given a d-dimensional latent variable z, the transformation uses an invertible, smooth, trainable function Tλ : R^d → R^d and introduces a random variable ϵ with a simple distribution q0(ϵ), oftentimes an isotropic Gaussian. Using the change-of-variable identity, the probability density function q of Tλ(ϵ) is

$$q(T_{\lambda}(\boldsymbol{\epsilon}))=q_{0}(\boldsymbol{\epsilon})\left|\det\frac{dT_{\lambda}}{d\boldsymbol{\epsilon}}\right|^{-1},$$

where dTλ/dϵ is the Jacobian matrix.

![3_image_0.png](3_image_0.png)

Figure 1: Outline of the TSC algorithm. HMC generates samples of latent variable z that are used to train the normalizing flow transformation. The refined transformation further improves the geometry of HMC, which goes on to generate the next sample.

## 2.2 Hamiltonian Monte Carlo (HMC) And Neural Transport HMC

The HMC kernel used in the algorithm proposed below is closely related to Neural Transport HMC (NeutraHMC), proposed by Hoffman et al. (2019). NeutraHMC simplifies the geometry of the sampling space through neural network-parameterized transport maps. Compared to HMC, it explores the target distribution more efficiently. We briefly explain HMC and NeutraHMC.

Hamiltonian Monte Carlo. HMC is an MCMC algorithm that produces larger moves in latent variable z by introducing "momentum" variables m of the same dimension as z (Duane et al., 1987; Neal, 2011). It constructs a joint proposal on the augmented space (z, m) to target p(z|x)p(m), where x is data. A common choice for the distribution p(m) is N(0, I). In a given iteration, a proposal involves L "leapfrog steps" of step-size s, where the l-th leapfrog step is defined by

$$\begin{aligned}\mathbf{m}^{(l)'}&=\mathbf{m}^{(l-1)}+\frac{1}{2}s\,\frac{d\log p(\mathbf{x},\mathbf{z}^{(l-1)})}{d\mathbf{z}^{(l-1)}},\\ \mathbf{z}^{(l)}&=\mathbf{z}^{(l-1)}+s\,\mathbf{m}^{(l)'},\\ \mathbf{m}^{(l)}&=\mathbf{m}^{(l)'}+\frac{1}{2}s\,\frac{d\log p(\mathbf{x},\mathbf{z}^{(l)})}{d\mathbf{z}^{(l)}},\end{aligned}$$

starting from (z^(0), m^(0)), with m^(0) ∼ N(0, I) and z^(0) randomly initialized or set to the previous MCMC state. The final leapfrog step gives the proposed state (z^(L), m^(L)). The new state is accepted with probability min{1, [p(x, z^(L)) p(m^(L))] / [p(x, z^(0)) p(m^(0))]} (Neal, 2011; Robert & Casella, 2004).

HMC on Warped Space. Marzouk et al. (2016); Mangoubi & Smith (2017); Hoffman et al. (2019) propose running MCMC methods on a simpler geometry by transforming, or warping, the sampling space with a transport map. A transport map is defined as a parameterized function Tλ(·). The warped space is defined by the change of variable z0 = Tλ^{-1}(z) for z ∼ p(z|x). If Tλ is chosen well, z0 will be simpler than z to sample. The target distribution in the MCMC algorithm is defined as the distribution of z0.
Each $\mathbf{z}_0^{(k)}$ generated by MCMC at the k-th iteration is passed through the transport map, $\mathbf{z}^{(k)} = T_\lambda(\mathbf{z}_0^{(k)})$. The chain $(\mathbf{z}^{(1)}, \mathbf{z}^{(2)}, \ldots)$ then has the true target distribution as its stationary distribution, but with faster mixing than MCMC on the original space. Hoffman et al. (2019) introduce NeutraHMC, which uses HMC instead of general MCMC. NeutraHMC utilizes both affine and neural network transport maps that are pretrained using VI based on KL(q||p).

## 3 Transport Score Climbing

We now develop Transport Score Climbing (TSC), a method for VI with forward KL, DKL(p||q). TSC uses HMC on the warped space to estimate the intractable expectation in the gradient (eq. (2)). A transport map is defined to act both as the flow transformation in the variational posterior q and as the mapping between the HMC sampling spaces. As q is updated, the mapping is updated simultaneously, which further refines the HMC sampling space. Figure 1 shows the synergy between HMC sampling and the corresponding variational approximation.

## 3.1 Types Of Transport Maps

Let $\boldsymbol{\epsilon} \sim \mathcal{N}(0, I_d)$, let $T_\lambda(\cdot)$ be a function $\mathbb{R}^d \mapsto \mathbb{R}^d$, or transport map, with trainable parameter λ, and define the variational distribution q(z; λ) such that z = Tλ(ϵ) ∼ q(z; λ). The variational distribution and transport map share trainable parameters. We illustrate with three concrete examples below.

Affine Transformation. Consider an affine transformation Tλ(ϵ) = µ + σ ⊙ ϵ, where ⊙ denotes elementwise multiplication. The variational distribution is q(z; λ) = N(z; µ, σ²I). In the empirical studies, Section 4, we find that the affine transport map is simple and effective.

IAF Transformation. A popular flow transformation is the inverse autoregressive flow (IAF) (Kingma et al., 2016). Tλ is chosen with the autoregressive property, that is, along each dimension i of ϵ,

$$T_{i}(\boldsymbol{\epsilon})=\boldsymbol{\epsilon}_{i}\,\sigma_{i}(\boldsymbol{\epsilon}_{1:i-1};\phi)+\mu_{i}(\boldsymbol{\epsilon}_{1:i-1};\phi).$$

Here µ and σ are neural networks that act as shift and scale functions. IAF is flexible because of the neural networks; its determinant is cheap to compute because the autoregressive property leads to a lower triangular Jacobian matrix. However, the inverse IAF $T_\lambda^{-1}(\mathbf{z})$, required to evaluate the density q, is costly to compute. Thus, we only use IAF in studies where latent variables are low-dimensional.

RealNVP Transformation. RealNVP is an alternative to IAF with slightly less flexibility for the same number of parameters but fast invertibility (Dinh et al., 2016). RealNVP uses an affine coupling layer to transform the input ϵ. In practice, we use a checkerboard binary mask b to implement the transformation, as detailed in Dinh et al. (2016),

$$T(\boldsymbol{\epsilon})=b\odot\boldsymbol{\epsilon}+(1-b)\odot\left(\boldsymbol{\epsilon}\odot\exp\bigl(\sigma(b\odot\boldsymbol{\epsilon})\bigr)+\mu(b\odot\boldsymbol{\epsilon})\right),$$

where µ and σ are also neural networks. The idea is that the part of ϵ that is transformed by neural networks depends only on the other part of ϵ, which goes through the identity function. This construction allows for fast inversion. Both IAF and RealNVP flow transformations can be stacked to form more expressive approximations. Let $T_\lambda^{(l)}$ denote one IAF or RealNVP transformation. We stack L transformations and define the transport map as $T_\lambda(\boldsymbol{\epsilon}) = T_{\lambda_L}^{(L)} \circ \cdots \circ T_{\lambda_1}^{(1)}(\boldsymbol{\epsilon})$. The variational distribution q(z; λ) is then the flow-based posterior $q(\mathbf{z};\lambda) = \mathcal{N}(\boldsymbol{\epsilon};0,I)\left|\det\frac{dT_\lambda(\boldsymbol{\epsilon})}{d\boldsymbol{\epsilon}}\right|^{-1}$.
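As a minimal illustration of the affine transport map above, the sketch below implements $T_\lambda$, its inverse, and the log-density of the induced q via the change-of-variable identity; the class name `AffineTransport` and the log-σ parameterization are our own assumptions, not the paper's code.

```python
import numpy as np

class AffineTransport:
    """Affine transport map T(eps) = mu + sigma * eps, so q(z) = N(z; mu, sigma^2 I)."""

    def __init__(self, mu, log_sigma):
        self.mu = np.asarray(mu, dtype=float)
        self.log_sigma = np.asarray(log_sigma, dtype=float)  # keeps sigma > 0

    def forward(self, eps):
        return self.mu + np.exp(self.log_sigma) * eps

    def inverse(self, z):
        return (z - self.mu) * np.exp(-self.log_sigma)

    def log_q(self, z):
        # log q(z) = log q0(T^{-1}(z)) - log|det dT/d eps|, with q0 = N(0, I)
        eps = self.inverse(z)
        log_q0 = -0.5 * np.sum(eps ** 2) - 0.5 * eps.size * np.log(2.0 * np.pi)
        return log_q0 - np.sum(self.log_sigma)  # Jacobian of T is diag(sigma)
```

A stacked IAF or RealNVP map can expose the same interface, with `log_q` accumulating the log-determinants of the individual layers.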
## 3.2 VI With HMC On Warped Space

In order to sample the latent variables z, we define the target of the HMC kernel H(·|z0) as the distribution of $\mathbf{z}_0 = T_\lambda^{-1}(\mathbf{z})$, z ∼ p(z|x),

$$p(\mathbf{z}_{0}|\mathbf{x};\lambda)\propto p(\mathbf{x},T_{\lambda}(\mathbf{z}_{0}))\,|\det J_{T_{\lambda}}(\mathbf{z}_{0})|,\tag{3}$$

where $J_f(\mathbf{x})$ is the Jacobian matrix of function f evaluated at x. This means that we are sampling on the warped space defined by Tλ(·) rather than the original space of latent variables. After z0 is sampled, we pass it through the transport map, z = Tλ(z0), to acquire the latent variable sample. As in MSC (Naesseth et al., 2020), we do not re-initialize the Markov chain at each iteration, but use the previous sample $\mathbf{z}^{(k)}$ both to estimate the gradient and to serve as the current state of the HMC kernel H when sampling $\mathbf{z}^{(k+1)}$.

A crucial element is that the transport map Tλ is trained jointly as we optimize the DKL(p||q) objective in eq. (2). This is because the map is also the flow transformation part of the variational distribution q. Specifically, HMC at iteration k uses variational parameters of the previous iteration, $\lambda^{(k-1)}$, in its target distribution (eq. (3)). By construction, if q is close to the true posterior, the target p(z0|x; λ) will be close to an isotropic Gaussian. Therefore, TSC keeps refining the geometry of the HMC sampling space throughout the training process.

**Algorithm 1** TSC

**Input:** Probabilistic model p(z, x; θ); transformation $T_\lambda : \mathbb{R}^d \mapsto \mathbb{R}^d$; HMC kernel $H[\mathbf{z}_0^{(k+1)}|\mathbf{z}_0^{(k)};\lambda,\theta]$ with target distribution p(z0|x; θ, λ) and initial state $\mathbf{z}_0^{(0)}$; variational distribution q(z; λ); step-sizes α1, α2. λ, θ randomly initialized.
**Output:** λ, θ.
**for** k ∈ {0, 1, 2, ...} **do**
  $\mathbf{z}_0^{(k+1)} \sim H[\mathbf{z}_0^{(k+1)}|\mathbf{z}_0^{(k)};\lambda,\theta]$.
  $\mathbf{z} = T_\lambda(\mathbf{z}_0^{(k+1)})$.
  z = stop-gradient(z).
  $\lambda = \lambda - \alpha_1\nabla_\lambda[-\log q(\mathbf{z};\lambda)]$.
  $\theta = \theta - \alpha_2\nabla_\theta[-\log p(\mathbf{x}|\mathbf{z};\theta) - \log p(\mathbf{z})]$.
  $\mathbf{z}_0^{(k+1)} = T_\lambda^{-1}(\mathbf{z})$.
**end for**

## 3.2.1 Model Parameters

The probabilistic model p(z, x; θ) can also contain unknown parameters θ. The corresponding warped-space posterior is

$$p(\mathbf{z}_{0}|\mathbf{x};\lambda,\theta)\propto p(\mathbf{x},T_{\lambda}(\mathbf{z}_{0});\theta)\,|\det J_{T_{\lambda}}(\mathbf{z}_{0})|.\tag{4}$$

Taking samples from the true posterior p(z|x; θ) allows one to learn θ using maximum likelihood, optimizing the marginal likelihood p(x; θ). This fact follows from the Fisher identity, which writes the gradient of the marginal likelihood as an expectation over the posterior,

$$\nabla_{\theta}\log p(\mathbf{x};\theta)=\nabla_{\theta}\log\int p(\mathbf{z},\mathbf{x};\theta)d\mathbf{z}=\mathbb{E}_{p(\mathbf{z}|\mathbf{x};\theta)}[\nabla_{\theta}\log p(\mathbf{z},\mathbf{x};\theta)].\tag{5}$$

The expectation above is estimated with the same HMC sample $\mathbf{z}^{(k)}$ that is used to update the variational parameters λ. Additionally, the HMC kernel at iteration k uses model parameters of the previous iteration, $\theta^{(k-1)}$, in its target distribution (eq. (4)). Algorithm 1 summarizes TSC for learning λ and θ.

## 3.3 Amortized Inference

When the dataset x = (x1, ..., xn) is i.i.d. with empirical distribution $\hat p(\mathbf{x})$, each xi has its own latent variable z.
Amortized inference then uses the approximate posterior q(z|x; λ) instead of a separate q(z; λi) for each xi. In amortized inference, variational parameters λ are shared across data-points xi. The model is known as a VAE when both the likelihood p(x|z; θ) and the approximate posterior q are parameterized by neural networks. TSC conducts maximum likelihood and VI with DKL(p||q) on the VAE and is amenable to SGD with mini-batches. Following derivations from Naesseth et al. (2020), the gradient with respect to λ is

$$\mathbb{E}_{p(\mathbf{x})}\big[\nabla_{\lambda}\text{KL}(p(\mathbf{z}|\mathbf{x};\theta)||q(\mathbf{z}|\mathbf{x};\lambda))\big]\approx\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}_{p(\mathbf{z}|\mathbf{x}_{i})}[-\nabla_{\lambda}\log q(\mathbf{z}|\mathbf{x}_{i};\lambda)],\tag{6}$$

where M is the mini-batch size. For model learning, we similarly estimate the gradient using eq. (5),

$$\mathbb{E}_{p(\mathbf{x})}\big[\nabla_{\theta}\log p(\mathbf{x};\theta)\big]=\mathbb{E}_{p(\mathbf{x})}\big[\mathbb{E}_{p(\mathbf{z}|\mathbf{x};\theta)}[\nabla_{\theta}[\log p(\mathbf{x}|\mathbf{z};\theta)+\log p(\mathbf{z})]]\big]\approx\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}_{p(\mathbf{z}|\mathbf{x}_{i};\theta)}[\nabla_{\theta}[\log p(\mathbf{x}_{i}|\mathbf{z};\theta)+\log p(\mathbf{z})]].\tag{7}$$

The expectations are approximated using HMC samples, as in Algorithm 1. As in the non-amortized case, we do not re-initialize the Markov chain at each iteration, but approximate the expectation by running one step of the Markov chain on the previous sample $\mathbf{z}^{(k-1)}$.

Figure 2: Variational parameter σ learned by TSC (this paper), MSC, and ELBO VI across iterations, with the diagonal Gaussian family. The fitted Gaussian distributions approximate the funnel distribution. The plot shows parameter values and the ground truth value of the first dimension (i.e., the horizontal dimension) of the distributions. Parameter µ converges to 0 and is not drawn here. TSC, while slightly more volatile than ELBO maximization, converges closer to the ground truth.

Figure 3: Synthetic targets (orange) with fitted posteriors laid on top. Top two: funnel; bottom two: banana. Rows 1 & 3: Gaussian family; rows 2 & 4: IAF family. Left: VI; middle: MSC; right: TSC (this paper). In general, TSC more accurately approximates the posterior distribution.

## 4 Empirical Evaluation

All implementations are made in TensorFlow and TensorFlow Probability (Abadi et al., 2015; Dillon et al., 2017)¹. On two synthetic datasets, TSC converges to near-optimal values. On survey data, TSC is more efficient than MSC and gives more reliable approximations. For VAE, TSC achieves higher log-marginal likelihood on static MNIST, dynamic MNIST, and CIFAR10 than VAEs learned using four other baselines.

## 4.1 Synthetic Data

Neal's Funnel Distribution. We first study the funnel distribution described by Neal (2003), a distribution known to be hard to sample from using HMC. Let the random variable z have probability density function

$$p(\mathbf{z})=\mathcal{N}(z_{1}|0,1)\,\mathcal{N}(z_{2}|0,e^{z_{1}/2}).$$

Then z follows the funnel distribution.

Banana Distribution. Following Haario et al. (1999), we twist the Gaussian distribution to create a banana-shaped distribution. Let $(v_1, v_2) \sim \mathcal{N}\left(0, \begin{pmatrix} 100 & 0 \\ 0 & 1 \end{pmatrix}\right)$ and transform $(v_1, v_2)$ by

$$z_{1}=v_{1},\qquad z_{2}=v_{2}+b\cdot v_{1}^{2}-100b,$$

where b is a factor set to 0.02. Then (z1, z2) follows the banana distribution.
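For reference, both synthetic targets are easy to write down as samplers. This NumPy sketch follows the definitions above, reading the scale $e^{z_1/2}$ of the funnel as a standard deviation, which is an assumption on our part since the parameterization is not spelled out.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_funnel(n):
    # z1 ~ N(0, 1); z2 | z1 ~ N(0, e^{z1/2}), scale read as a standard deviation
    z1 = rng.standard_normal(n)
    z2 = np.exp(z1 / 2.0) * rng.standard_normal(n)
    return np.stack([z1, z2], axis=1)

def sample_banana(n, b=0.02):
    # (v1, v2) ~ N(0, diag(100, 1)); twist with z2 = v2 + b * v1^2 - 100 * b
    v1 = 10.0 * rng.standard_normal(n)
    v2 = rng.standard_normal(n)
    return np.stack([v1, v2 + b * v1 ** 2 - 100.0 * b], axis=1)

print(sample_banana(100_000).std(axis=0))  # empirical stds of the two dimensions
```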
Both distributions are visualized in Figure 3. The experiments use the Adam optimizer (Kingma & Ba, 2015) with inverse time decay, decay rate 3·10⁻⁴, and initial learning rate 3·10⁻³. The HMC sampler consists of 1 chain, with step size s tuned in [0.03, 1) to target a 67% acceptance rate, and the number of leapfrog steps L set to ⌈1/s⌉. The HMC hyperparameter tuning follows the practice of Gelman et al. (2013).

Results. For the first variational family, we consider a diagonal Gaussian, q(θ) = N(θ|µ, σ²I). The optimal variational parameters for TSC are the true mean and standard deviation of θ.

¹Code is available at https://github.com/zhang-liyi/tsc.

Table 1: Uncertainty estimation for the IAF family on synthetic data. The table gives standard deviation (std) estimations across 100 groups, each group containing 10⁷ i.i.d. samples from fitted posteriors. Standard errors of the std estimations across the 100 groups are also given. *All methods give reasonable approximations using an expressive distribution, but TSC more closely recovers the true target distribution.*

(a) Funnel distribution.

| Method | Std on Dim 1 | Std on Dim 2 |
|---|---|---|
| Ground truth | 2.718 | 1 |
| ELBO VI | 2.286 ± 0.002 | 0.989 ± 0 |
| MSC | 2.151 ± 0.001 | 0.961 ± 0 |
| TSC | 2.426 ± 0.002 | 0.991 ± 0 |

(b) Banana distribution.

| Method | Std on Dim 1 | Std on Dim 2 |
|---|---|---|
| Ground truth | 10 | 3 |
| ELBO VI | 9.511 ± 0.002 | 2.675 ± 0.001 |
| MSC | 9.661 ± 0.002 | 2.562 ± 0.001 |
| TSC | 9.949 ± 0.002 | 2.883 ± 0.001 |

Figure 4: Estimates by state, where states are ordered by Republican vote in the 2016 election. We show TSC, 5000-sample unadjusted estimates, and 60000-sample unadjusted estimates. The unadjusted estimates are found by calculating the mean and standard error of Bernoulli random variables. 95% confidence intervals are plotted. The closer to green the better. *TSC is robust to noise in the data sample and improves over the 5000-sample unadjusted estimate.*

Figure 2 shows variational parameters by iteration. While both VI and TSC converge to near-optimal values for µ, VI significantly underestimates uncertainty by converging to low values of σ. This problem is ameliorated by TSC, which gives σ estimates much closer to the ground truth.

As a second variational approximation, we use an IAF with 2 hidden layers. With an expressive posterior, each method gives reasonable approximations (Figure 3). Table 1 quantitatively compares these methods on synthetic data by giving standard deviations of large numbers of samples from the fitted IAF posteriors; the estimates from TSC are closest to the ground truth. However, TSC still gives approximations that are often a little narrower than the true target distribution. One reason is the difficulty of the HMC chain in sampling from certain areas of the target distribution. While an expressive flow further simplifies the geometry for HMC, it still does not guarantee a perfect approximation in finite time.

## 4.2 Survey Data

We use the multilevel regression and post-stratification (MRP) model from Lopez-Martin et al. (2021) and apply it to a survey dataset provided by the same authors. Details of the model are given in Supplement A.1. The dataset originally comes from the 2018 Cooperative Congressional Election Study (Schaffner et al., 2019).
The survey response is a binary variable representing individual answers to the question of whether to allow employers to decline coverage of abortions in insurance plans (Support / Oppose). Each response is attached with a set of features about the individual. The dataset consists of 60,000 data-points, but as suggested by the study, inference methods are trained on a randomly selected 5,000 subset. Reliable estimation is demonstrated by the ability to generalize from the 5,000 subset to the full 60,000 set, and by closeness to gold-standard MRP MCMC results (Lopez-Martin et al., 2021).

Table 2: The sum of squared differences from MRP MCMC estimates of the mean and std of the response variable, one row for each method. *Lower is better.*

| Method | Mean Difference | Std Difference |
|---|---|---|
| ELBO VI | 1.18 · 10⁻³ | 1.95 · 10⁻³ |
| MSC | 2.44 · 10⁻² | 1.53 · 10⁻² |
| TSC | 5.86 · 10⁻⁴ | 1.02 · 10⁻³ |

Figure 5: Cumulative ESS of TSC and MSC over the number of samples across epochs. Higher is better. *As the transport map is learnt, TSC achieves higher ESS.*

Implementation details. The experiments implement TSC and MSC with diagonal Gaussian approximations, and use the Adam optimizer with inverse time decay, decay rate 10⁻³, and initial learning rate 0.01. The HMC sampler has 1 chain, with step size s tuned in [0.03, 1] to target a 67% acceptance rate and the number of leapfrog steps L set to ⌈1/s⌉.

Results. Figure 4 shows estimates by individuals' U.S. state, with states ordered by Republican vote share. The large-sample (60,000) estimates show an upward trend, which is intuitive. The estimates for TSC come from 10,000 posterior samples. Figure 4 shows that TSC gives reasonable approximations: it generalizes from the 5,000 data points and stays close to the 60,000-sample estimates, producing estimates that are robust against noise in the 5,000-point sample. We also compare TSC with the other VI methods, ELBO VI and MSC, evaluating them quantitatively against the long-run MCMC benchmark. While we consider long-run MCMC samples as the gold standard, it is useful to note an advantage of VI-type methods: unlike MCMC, they learn an approximate distribution from which, once learned, it is computationally efficient to generate samples for downstream applications. Table 2 shows that TSC results are closer to the MRP MCMC results than those of ELBO VI and MSC. Asymptotic sample behavior measured through effective sample size (ESS) shows that the warped HMC chain underlying TSC outperforms the vanilla HMC chain used by MSC (Figure 5). It also suggests that dynamic training of the transport map actually hurts HMC efficiency when the variational approximation is still poor, but efficiency quickly catches up once the approximation is better trained.

## 4.3 Variational Autoencoders

Finally, we study TSC with amortized inference on statically binarized MNIST (LeCun et al., 2010), dynamically binarized MNIST (Salakhutdinov & Murray, 2008), and CIFAR10 (Krizhevsky, 2009). With DKL(p||q) and dynamic updates of transport maps, TSC achieves higher log-marginal likelihood than several benchmark algorithms.

Implementation Details.
For benchmark methods, we use the regular ELBO VAE (Kingma & Welling, 2014; Rezende et al., 2014), the importance-weighted (IW) autoencoder (Burda et al., 2016), MSC with the conditional importance sampler (CIS-MSC) (Naesseth et al., 2020), and NeutraHMC following the training procedure detailed in Hoffman (2017); Hoffman et al. (2019). The methods share the same architecture, which is detailed in Supplement A.2. For MNIST, we use a small-scale convolutional architecture and output Bernoulli parameters; for CIFAR10, we use a DCGAN architecture (Radford et al., 2016) and output Gaussian means. Hoffman (2017) gives insightful training techniques: we also add an extra shearing matrix in the generative network and adapt HMC step-sizes s to target a fixed acceptance rate. The best target acceptance rate is hand-tuned in [0.67, 0.95]. The number of leapfrog steps L is set to ⌈1/s⌉. The HMC initial state $\mathbf{z}^{(0)}$ is sampled from the encoder. Additionally, TSC is more computationally demanding than ELBO maximization because of the HMC steps. We cap L at 4 to ensure similar run-time with NeutraHMC, because TSC tends to lead to smaller step-sizes. Finally, we note that for NeutraHMC, separate pretraining of the encoder via ELBO is observed to be helpful and is used in the reported results. TSC and CIS-MSC do not use pretraining. The Adam optimizer with learning rate 0.001 and mini-batch size 256 is used. For TSC and NeutraHMC, we use one HMC step per data-point per epoch. For IWAE and CIS-MSC, we use 50 samples, as suggested by Burda et al. (2016). We estimate the test log-marginal likelihood log p(x; θ) using Annealed Importance Sampling (AIS) (Neal, 2001; Wu et al., 2017) with 10 leapfrog steps and adaptive step sizes tuned to 67% acceptance, and 2500 annealing steps for MNIST, 7500 annealing steps for CIFAR10.

Table 3: Average test log-marginal likelihood across three random seeds. 'Dim.' is the latent dimension; dim. 2 corresponds to a Gaussian posterior, and dim. 64 or 128 corresponds to a RealNVP posterior. *: -2900 must be added to log p(x) for each CIFAR10 result. *TSC gives better predictive performance.*

(a) Static MNIST

| Dim. | Method | log p(x) |
|---|---|---|
| 2 | ELBO | −134.22 ± 0.25 |
| 2 | NeutraHMC | −130.38 ± 1.02 |
| 2 | TSC | −127.64 ± 0.88 |
| 64 | ELBO | −83.68 ± 0.07 |
| 64 | IW | −82.23 ± 0.06 |
| 64 | NeutraHMC | −83.15 ± 0.1 |
| 64 | CIS-MSC | −85.1 ± 0.53 |
| 64 | TSC | −82.03 ± 0.03 |

(b) Dynamic MNIST

| Dim. | Method | log p(x) |
|---|---|---|
| 64 | ELBO | −61 ± 0.03 |
| 64 | IW | −58.83 ± 0.03 |
| 64 | NeutraHMC | −60.3 ± 0.48 |
| 64 | CIS-MSC | −60.76 ± 0.4 |
| 64 | TSC | −58.06 ± 0.15 |

(c) CIFAR10

| Dim. | Method | log p(x)* |
|---|---|---|
| 128 | ELBO | −34.57 ± 0.08 |
| 128 | IW | −33.15 ± 0.07 |
| 128 | NeutraHMC | −34 ± 0.53 |
| 128 | CIS-MSC | −33.35 ± 0.04 |
| 128 | TSC | −31.36 ± 0.13 |

Table 4: Three VAE metrics: downstream classification accuracy based on a linear classifier (Acc), mutual information (MI), and number of active units (AU). The same models as in Table 3 are considered. Higher is better for Acc and MI, while AU reflects how many latent units are utilized. The benchmark for Acc is a deterministic classifier imitating the encoder architecture and trained directly on the images. TSC gives the best performance on CIFAR10, and NeutraHMC gives the best performance on MNIST. CIS-MSC and TSC achieve the largest number of active units.

(a) Static MNIST

| Dim. | Method | Acc | MI | AU |
|---|---|---|---|---|
| 64 | ELBO | 96.25% | 11.58 | 48 |
| 64 | IW | 96.7% | 10.25 | 54 |
| 64 | NeutraHMC | 97.16% | 11.67 | 64 |
| 64 | CIS-MSC | 94.48% | 10.04 | 64 |
| 64 | TSC | 95.76% | 10.61 | 64 |
| | Benchmark | 98.79% | | |

(b) CIFAR10

| Dim. | Method | Acc | MI | AU |
|---|---|---|---|---|
| 128 | ELBO | 44.5% | 9.2 | 62 |
| 128 | IW | 45.06% | 9.07 | 59 |
| 128 | NeutraHMC | 45.24% | 9.1 | 66 |
| 128 | CIS-MSC | 45.44% | 9.31 | 128 |
| 128 | TSC | 52.36% | 10.73 | 128 |
| | Benchmark | 66.46% | | |

Results based on log-marginal likelihood. TSC achieves higher log-marginal likelihood both for low-dimensional latent variables on the Gaussian warped space and for high-dimensional latent variables on the RealNVP warped space (Table 3). VAE experiments use RealNVP instead of IAF for fast inversion $T_\lambda^{-1}(\mathbf{z})$. Two RealNVPs are stacked to form the variational posterior, each one having two hidden layers. Every model, including baselines, uses this flow distribution, contains a single layer of latent variables, and trains for 500 epochs. TSC demonstrates effective synergy between transport map training and HMC sampling by training both encoders and decoders from scratch. This framework no longer requires separate pretraining, which NeutraHMC does by warming up the encoder (the encoder includes the normalizing flow) with ELBO maximization for 500 epochs. NeutraHMC then continues to train the warmed-up encoder during the main training phase (500 more epochs), as done in Hoffman (2017). Meanwhile, in the first 10 of the 500 TSC training epochs, the encoder is not trained, a design that improves stability.

Additional evaluation metrics. The quality of the approximate posterior is reflected in its learned latent representations, and we use three additional metrics to evaluate these latent representations: downstream classification accuracy based on a linear classifier (Acc), mutual information (MI), and number of active units (AU). The purpose of MI and AU is to measure whether the model tends to give near-deterministic estimates for some of the latent variables - a known problem in VAEs that may be caused by optimization issues (Burda et al., 2016). Classification accuracy complements MI and AU by estimating the usefulness of the latents for prediction. It also measures whether the latents match human intuition by being separable by labels. The definitions and implementation details of these metrics are given in Supplement A.2.

The metrics are computed for models trained on the static MNIST and CIFAR10 datasets (Table 4). The latent representations allow the linear classifiers to achieve high accuracy in general on MNIST, with NeutraHMC performing best on the three metrics on MNIST, even though its log-marginal likelihood is lower than that of TSC. Meanwhile, TSC significantly outperforms all benchmarks in terms of the three metrics on CIFAR10.
TSC and MSC, the methods that keep running the Markov chain and train parameters by relying entirely on MCMC or HMC samples, have the full number of active units on both datasets. We hypothesize that the different classification difficulties lead to the performance differences across MNIST and CIFAR10. When the classification boundary is easy to learn, as in MNIST, larger posterior variance, although more faithful in terms of probabilistic inference, is not a main factor in downstream classification performance. When the task is difficult for the given neural networks, more accurate variance estimates and utilization of latent units gain more importance.

Ablation Studies. Since we utilize both DKL(p||q) and a continuously-run, warped-space HMC, we wish to know whether the algorithm is as effective if one of these two components is removed. In the case of 2-dimensional latent variables, we first train a model with maximum likelihood using warped-space HMC as in TSC, but with a pretrained encoder and without optimizing DKL(p||q) to update the encoder. Next, we train a model that, compared to TSC, uses an ordinary HMC kernel without the space transformation. Results detailed in Supplement A.2 show that neither model achieves competitive performance. Therefore, not only is warped-space HMC necessary for effective performance, but the dynamic DKL(p||q) updates of the approximate posterior, and hence of the transport map, also play an essential role.

## 5 Conclusions

We develop Transport Score Climbing, improving variational inference with the forward KL, DKL(p||q), by using an HMC kernel on a simpler geometry defined by a transport map. This framework naturally leverages synergies, since the transformation that warps the geometry is updated by HMC samples at each iteration, enabling more effective HMC sampling in future iterations. We illustrate the advantages of this method on two synthetic examples, a survey data example, and on MNIST and CIFAR10 using VAEs.

## References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

Christophe Andrieu and Éric Moulines. On the ergodicity properties of some adaptive MCMC algorithms. *The Annals of Applied Probability*, 16(3):1462–1505, 2006.

Christophe Andrieu and Matti Vihola. Markovian stochastic approximation with expanding projections. *Bernoulli*, 20(2), November 2014.

Christopher M. Bishop. *Pattern Recognition and Machine Learning*. Springer, 2006.

David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. *Journal of the American Statistical Association*, 112(518):859–877, 2017.

Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In *International Conference on Learning Representations*, 2015.

Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. *Computing Research Repository*, abs/1509.00519, 2016.

Anthony L.
Caterini, Arnoud Doucet, and Dino Sejdinovic. Hamiltonian variational auto-encoder. In *Neural Information Processing Systems*, 2018.

Adji Bousso Dieng and John Paisley. Reweighted expectation maximization. *arXiv:1906.05850*, 2019.

Adji Bousso Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. Avoiding latent variable collapse with generative skip models. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, volume 89 of *Proceedings of Machine Learning Research*, pp. 2397–2405. PMLR, 16–18 Apr 2019.

Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matthew D. Hoffman, and Rif A. Saurous. TensorFlow distributions. *Computing Research Repository*, abs/1711.10604, 2017.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. *Computing Research Repository*, abs/1605.08803, 2016.

Simon Duane, A.D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid Monte Carlo. *Physics Letters B*, 195(2):216–222, 1987.

Axel Finke and Alexandre H. Thiery. On importance-weighted autoencoders. *arXiv:1907.10477*, 2019.

Marylou Gabrié, Grant M. Rotskoff, and Eric Vanden-Eijnden. Adaptive Monte Carlo augmented with normalizing flows. *arXiv:2105.12603*, 2021.

Andrew Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. *Bayesian Data Analysis*. CRC Press, 3 edition, 2013.

Ming Gao Gu and Fan Hui Kong. A stochastic approximation algorithm with Markov chain Monte-Carlo method for incomplete data estimation problems. *Proceedings of the National Academy of Sciences*, 95(13):7270–7274, 1998.

Shixiang (Shane) Gu, Zoubin Ghahramani, and Richard E Turner. Neural adaptive sequential Monte Carlo. In *Neural Information Processing Systems*, pp. 2629–2637. Curran Associates, Inc., 2015.

Heikki Haario, Eero Saksman, and Johanna Tamminen. Adaptive proposal distribution for random walk Metropolis algorithm. *Computational Statistics*, 14(3):375–395, 1999.

Matthew D. Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1510–1519, 2017.

Matthew D. Hoffman and M. D. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. *Workshop in Advances in Approximate Bayesian Inference*, 2016.

Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. *Journal of Machine Learning Research*, 14:1303–1347, 2013.

Matthew D. Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, and Srinivas Vasudevan. NeuTra-lizing bad geometry in Hamiltonian Monte Carlo using neural transport. *arXiv:1903.03704*, 2019.

Ghassen Jerfel, Serena Wang, Clara Wong-Fannjiang, Katherine A. Heller, Yian Ma, and Michael I. Jordan. Variational refinement for importance sampling using the forward Kullback-Leibler divergence. In *Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence*, volume 161 of *Proceedings of Machine Learning Research*, pp. 1819–1829, 2021.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. *Machine Learning*, 37(2):183–233, November 1999.

Kyurae Kim, Jisu Oh, Jacob R. Gardner, Adji Bousso Dieng, and Hongseok Kim.
Markov chain score ascent: A unifying framework of variational inference with Markovian gradients. *arXiv:2206.06295*, 2022.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *Computing Research Repository*, abs/1412.6980, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. *Computing Research Repository*, abs/1312.6114, 2014.

Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. *Computing Research Repository*, abs/1606.04934, 2016.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Estelle Kuhn and Marc Lavielle. Coupling a stochastic approximation version of EM with an MCMC procedure. *European Series in Applied and Industrial Mathematics: Probability and Statistics*, 8:115–131, 2004.

Yann LeCun, Corinna Cortes, and CJ Burges. MNIST handwritten digit database. *ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist*, 2, 2010.

Juan Lopez-Martin, Justin H. Phillips, and Andrew Gelman. Multilevel regression and poststratification case studies, 2021. URL https://bookdown.org/jl5522/MRP-case-studies/.

Oren Mangoubi and Aaron Smith. Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions. *arXiv: Probability*, 2017.

Youssef Marzouk, Tarek Moselhy, Matthew Parno, and Alessio Spantini. Sampling via measure transport: An introduction. *Handbook of Uncertainty Quantification*, pp. 1–41, 2016.

Tom Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.

Christian A. Naesseth, Fredrik Lindsten, and Thomas B. Schön. Elements of sequential Monte Carlo. *Foundations and Trends® in Machine Learning*, 12(3):307–392, 2019.

Christian A. Naesseth, Fredrik Lindsten, and David M. Blei. Markovian score climbing: Variational inference with KL(p∥q). In *Neural Information Processing Systems*, 2020.

Radford M. Neal. Annealed importance sampling. *Statistics and Computing*, 11(2):125–139, 2001.

Radford M. Neal. Slice sampling. *The Annals of Statistics*, 31(3):705–767, 2003.

Radford M. Neal. *Handbook of Markov Chain Monte Carlo*. Chapman and Hall/CRC, 2011. ISBN 9780429138508.

Zhijian Ou and Yunfu Song. Joint stochastic approximation and its application to learning discrete latent variable models. In *Conference on Uncertainty in Artificial Intelligence*, 2020.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. *Computing Research Repository*, abs/1511.06434, 2016.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International Conference on Machine Learning*, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning*, 2014.

Christian Robert and George Casella. *Monte Carlo Statistical Methods*. Springer Science & Business Media, 2004.

Francisco J. R. Ruiz, Michalis K. Titsias, Cemgil Taylan, and Arnoud Doucet. Unbiased gradient estimation for variational auto-encoders using coupled Markov chains. In *Conference on Uncertainty in Artificial Intelligence*, 2021.

Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In *International Conference on Machine Learning*, 2008.

Tim Salimans, Diederik P. Kingma, and Max Welling.
Markov chain Monte Carlo and variational inference: Bridging the gap. In *International Conference on Machine Learning*, 2015.

Brian Schaffner, Stephen Ansolabehere, and Sam Luks. CCES Common Content, 2018, 2019.

Esteban G. Tabak and Cristina V. Turner. A family of nonparametric density estimation algorithms. *Communications on Pure and Applied Mathematics*, 66(2):145–164, 2013.

Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysis of decoder-based generative models. *arXiv*, abs/1611.04273, 2017.

Yuling Yao, Aki Vehtari, Daniel P. Simpson, and Andrew Gelman. Yes, but did it work?: Evaluating variational inference. In *International Conference on Machine Learning*, 2018.

## A Experiments

Figure 6: Estimates by state, where states are ordered by Republican vote in the 2016 election. 95% confidence intervals are plotted. The closer to green the better. Panel (b) shows TSC, MSC, and the 60000-sample unadjusted estimate.

## A.1 Survey Data

Model. Following Lopez-Martin et al. (2021), we model a binary variable x taking values 0 or 1 with a multilevel regression model. x indicates individual responses, and each individual comes with given features: state, age, ethnicity, education, and gender. For each data-point xi, the model is defined as

$$p(x_i = 1) = \mathrm{logit}^{-1}\big(z^{\mathrm{state}}_{s[i]} + z^{\mathrm{age}}_{a[i]} + z^{\mathrm{eth}}_{r[i]} + z^{\mathrm{edu}}_{e[i]} + z^{\mathrm{gen.eth}}_{g[i],r[i]} + z^{\mathrm{edu.age}}_{e[i],a[i]} + z^{\mathrm{edu.eth}}_{e[i],r[i]} + \beta^{\mathrm{gen}}\cdot\mathrm{Gen}_i\big),$$

with priors

$$\begin{aligned}
z^{\mathrm{state}}_s &\sim \mathcal{N}\big(\gamma^0 + \gamma^{\mathrm{south}}\cdot\mathbb{1}(s\ \mathrm{in\ south}) + \gamma^{\mathrm{northcentral}}\cdot\mathbb{1}(s\ \mathrm{in\ northcentral}) + \gamma^{\mathrm{west}}\cdot\mathbb{1}(s\ \mathrm{in\ west}),\ \sigma^{\mathrm{state}}\big), &&\text{for } s = 1,\ldots,50,\\
z^{\mathrm{age}}_a &\sim \mathcal{N}(0, \sigma^{\mathrm{age}}), &&\text{for } a = 1,\ldots,6,\\
z^{\mathrm{eth}}_r &\sim \mathcal{N}(0, \sigma^{\mathrm{eth}}), &&\text{for } r = 1,\ldots,4,\\
z^{\mathrm{edu}}_e &\sim \mathcal{N}(0, \sigma^{\mathrm{edu}}), &&\text{for } e = 1,\ldots,5,\\
z^{\mathrm{gen.eth}}_{g,r} &\sim \mathcal{N}(0, \sigma^{\mathrm{gen.eth}}), &&\text{for } g = 1,2 \text{ and } r = 1,\ldots,4,\\
z^{\mathrm{edu.age}}_{e,a} &\sim \mathcal{N}(0, \sigma^{\mathrm{edu.age}}), &&\text{for } e = 1,\ldots,5 \text{ and } a = 1,\ldots,6,\\
z^{\mathrm{edu.eth}}_{e,r} &\sim \mathcal{N}(0, \sigma^{\mathrm{edu.eth}}), &&\text{for } e = 1,\ldots,5 \text{ and } r = 1,\ldots,4.
\end{aligned}$$

Each $z^{*}_{*}$ is a latent variable. For example, $z^{\mathrm{state}}$ is a length-50 latent variable that indicates the effect of state on the binary response. As another example, $z^{\mathrm{edu.age}}$ indicates the interaction effect of education and age, and has length 30 because there are 5 education levels and 6 age levels. In total, the model has a length-123 latent variable z. We model the rest, namely the γ's, σ's, and $\beta^{\mathrm{gen}}$, as model parameters for which we find fixed estimates.

Results. Reliable approximations on the survey data are shown by the ability to generalize from the small 5,000 sample to the full 60,000 sample, and by closeness to gold-standard MRP MCMC results. We visualize TSC, MSC, and MRP MCMC estimates by state. Figure 6a shows that the means of the TSC estimates are barely discernible from the means of the results given by the gold-standard MRP MCMC (Lopez-Martin et al., 2021). Figure 6b shows that MSC is also robust against noise from the small 5,000 sample, but its results differ slightly from those of TSC.

## A.2 Variational Autoencoder

Architecture. In MNIST, both statically and dynamically binarized, the encoder uses two convolutional layers with number of filters 32 and 64, followed by a dense layer that outputs Gaussian mean and log-variances (so its hidden size is two times the latent variable dimension).
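A minimal Keras sketch of the MNIST encoder just described; this follows our reading of the architecture (two stride-2 convolutions, then a dense layer emitting mean and log-variance) and is not an excerpt from the authors' code.

```python
import tensorflow as tf

latent_dim = 64  # e.g., the RealNVP experiments; 2 for the Gaussian ones

# Two stride-2 convolutions, then a dense layer of size 2 * latent_dim
# whose output is split into the Gaussian mean and log-variance.
encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2 * latent_dim),
])

def encode(x):
    mean, log_var = tf.split(encoder(x), num_or_size_splits=2, axis=1)
    return mean, log_var
```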
The decoder begins with a dense layer with hidden size 7 · 7 · 32, followed by three transpose convolutional layers with number of filters 32, 64, and 1, and it outputs a Bernoulli parameter for each pixel. All layers use kernel size 3, stride size 2, same padding, and ReLU activations, except for the last transpose convolutional layer, which uses stride size 1.

A DCGAN-style architecture is used for CIFAR10, featuring no dense layers, batch normalization, and leaky ReLU. The encoder uses four convolutional layers with number of filters 64, 128, 256, and latent dimension times 2. The last layer has no activation function and is flattened to give the Gaussian mean and log-variances. The decoder uses four transpose convolutional layers with number of filters 256, 128, 64, 3. The last layer uses tanh activation and outputs the Gaussian mean. Batch normalization and leaky ReLU (0.2) are applied after each layer except for the last layer in the encoder and decoder. All layers use kernel size 4, stride size 2, and same padding, except that the last layer in the encoder and the first layer in the decoder use stride size 1 and valid padding.

Additional evaluation metrics. In addition to log-marginal likelihood, we use three additional metrics to evaluate the quality of the approximate posterior and its latent representations. The metrics are defined as follows.

Downstream classification accuracy. The approximate posterior maps every data-point to a vector representation. We take the vectors that correspond to training data-points from each method, and for each method we train a linear classifier based on the vector representations to predict the class of the data-point. Accuracy is reported on a held-out test dataset.

Mutual information (MI). MI I(z; x) measures the dependence between data x and latent representation z. High MI is desirable because it suggests that the generative model makes use of unique information encoded by the latents z. MI is defined as follows (Hoffman & Johnson, 2016; Dieng et al., 2019),

$$I(\mathbf{z};\mathbf{x}) = \mathbb{E}_{\hat p(\mathbf{x})}\left[D_{KL}(q(\mathbf{z}|\mathbf{x};\lambda)||p(\mathbf{z}))\right] - D_{KL}(q(\mathbf{z};\lambda)||p(\mathbf{z})),$$

where q(z; λ) is the 'aggregate posterior', a marginal over the latent z defined as $q(\mathbf{z};\lambda) = \frac{1}{N}\sum_{i=1}^{N} q(\mathbf{z}|\mathbf{x}_i;\lambda)$, and N is the number of data-points. I(z; x) is approximated by Monte Carlo.

Number of active units (AU). AU measures how many units of the latent representation are 'active' (Burda et al., 2016) (for instance, if a 64-dimensional latent is used, we measure how many units among the 64 total are active). It is desirable that all latent variables are used. A latent dimension, i.e., the u-th index of the latent variable z, is considered active if $\mathrm{Cov}_{\mathbf{x}}\big(\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x};\lambda)}[\mathbf{z}]_u\big) > 0.02$.

Ablation studies. We conduct two VAE ablation studies in the case of two-dimensional latent variables.

Study I: no approximate inference with KL(p||q); only maximum likelihood on p(x; θ). First, we ask whether warped-space HMC by itself, along with a pretrained transport map, achieves competitive performance. That is, the encoder is no longer trained, and the overall training is essentially Monte Carlo EM (Duane et al., 1987; Kingma & Welling, 2014). It achieves −156.1 log-marginal likelihood after the same number of epochs of training, lower than all baselines.

Study II: run HMC on the original space instead of the warped space. We also test whether running HMC on the original space together with approximate posterior training via KL(p||q) achieves competitive performance. The estimated log-marginal likelihood is −133.9, lower than both TSC and NeutraHMC.
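To make the AU criterion defined above concrete, here is a small NumPy sketch of how active units can be counted from encoder means; the threshold 0.02 is taken from the definition, while the function name and input convention are our own.

```python
import numpy as np

def count_active_units(encoder_means, threshold=0.02):
    """Count active latent units following Burda et al. (2016).

    encoder_means: array of shape (N, d) holding E_{z ~ q(z|x_i)}[z] for each
    data-point x_i. A unit u is active if the variance across data-points of
    that expectation, Cov_x(E_q[z]_u), exceeds the threshold.
    """
    variances = np.var(encoder_means, axis=0)  # Cov_x(E_q[z]_u) per unit
    return int(np.sum(variances > threshold))
```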
Review 1:
Summary: The paper is minimally changed from the previous submission.
Strengths and Weaknesses: The paper is minimally changed from the prior submission. I went over the paper a couple of times, and I share similar concerns as before. The paper is extremely hard to follow. After reading it a couple of times, I have a hard time following what they do. The notation has room for improvement, the problem statement has room for improvement, and it is not clear what they are tackling. Most likely, I might not be the right person to write a constructive review for this paper. I am sorry for the late notification.
Requested Changes: None
Broader Impact Concerns: None

==================================================

Review 2:
Summary: In this resubmission, the authors follow prior work on minimizing KL(p,q), instead of the mode-seeking KL(q,p) divergence, which is commonly used in variational inference. Specifically, they rely on a combination of Naesseth et al. (2020), who used MCMC for this minimization task, and Hoffman et al. (2019), who propose the usage of HMC in a transformed space in general. The paper contains a series of experiments ranging from purely synthetic to real-world data.
Strengths and Weaknesses:
## Strengths
- The paper extends Naesseth et al. (2020) in a minor but clean way, demonstrating consistent improvement in a series of experiments.
- Compared to the original submission, the writing has improved.
## Weaknesses
Given the removal of the theory, the paper has, as stated by the authors, methodology and empirical evaluation as its main contribution. The methodological contribution is, as stated above, minor but consistent. However, despite the main focus of the paper being its empirical evaluation of this contribution, there are several weak points:
- A main "competitor" of the paper is the MSC approach. The proposal improves upon it in all experimental settings, yet these are all distinct from those evaluated in Naesseth et al. (2020). The change in experimental setups should be well motivated within the text.
- In response to requests by reviewers on the original submission, the authors have added several performance metrics in addition to the log-marginal likelihood originally reported. However, their motivation remains vague, i.e., why exactly are these three chosen? Especially the number of active units lacks a clear motivation. Can the authors clarify why this should be maximized? Comparing AU with the mutual information metric, I do not see why it would be useful for a model to use more units if it can encode the same amount of information with a smaller number of units. E.g., in Table 4(a) ELBO VI achieves a higher MI with fewer active units than MSC/TSC. This seems desirable rather than negative.
- Regarding the Accuracy metric: can the authors provide the performance a deterministic classifying net would give, which could serve as an upper performance bound? E.g., a network that consists solely of the encoding part and specifically learns to classify.
- While TSC shows a consistent improvement upon NeutraHMC with respect to log-marginal likelihood performance, the latter easily improves upon the former in Table 4(a), while TSC clearly improves upon the baselines in 4(b). This counterintuitive performance is simply acknowledged in the text without further interpretation or explanation.
- A related table to the ones reported in Table 4 for "dynamic MNIST" is lacking.
- Several experiments lack error bars.
Answering a related question by reviewer HvpZ in the earlier submission, such an extension was promised by the authors but is still lacking.
- Similarly, HvpZ had commented on the lack of citations/explanations for the image datasets. These have not been provided.
### Questions
_Additional questions on the points mentioned in the weaknesses above_
- In the survey experiment, Section 4.2 (and a rebuttal answer to reviewer kw6Z), the authors mention the increased computational efficiency of the proposed approach compared to the "gold standard" MRP MCMC results. Can the authors quantify this statement?
Requested Changes: Improvements and clarifications in the experimental setups/reports.
_____
Wrt. Claims and Evidence: The status is currently "Mostly yes". Solving the weaknesses mentioned above will greatly strengthen the paper and give it a clear yes.
Broader Impact Concerns: n/a

==================================================

Review 3:
Summary: The authors propose combining the approaches of Hoffman et al. (2019) and Naesseth et al. (2020) to minimize the forward KL divergence when performing approximate inference. They provide empirical comparisons to a range of methods, suggesting that this approach is effective.
Strengths and Weaknesses: The changes made by the authors in improving the writing and focusing exclusively on the methodological contribution address previous concerns.
Strengths:
- The exposition is generally clear.
- The scope of experiments is good.
Weaknesses:
- Figure quality should be improved.
Requested Changes:
## Typos and formatting
p5 - "The idea is that the part of ε that is transformed by neural networks depend only on the other part of ε that goes through the identity function." (Agreement of singular versus plural: depend -> depends, or part -> parts to match.)
Figure 2 - The font size is too small. Greek letters should be used in the legend, $\mu$ instead of $mu$ (this can easily be done in matplotlib, which it appears was used). It might be better to plot the standard deviation and mean on different plots to improve readability of the figure, since some of the curves are not comparable.
Figure 4 - Fonts are too small, and axis labels are missing.
Pg 17 - Add a line break to prevent the equation running into the margin.
Broader Impact Concerns: None

==================================================

Metareview:
Recommendation: Accept as is
Comment: This is a resubmission of submission 575 (the corresponding section is filled in there). The authors have implemented the requested modifications, the reviewers are now happy with them, and we now propose acceptance.

==================================================
# A Reproducible And Realistic Evaluation Of Partial Domain Adaptation Methods

Tiago Salvador† *tiago.saldanhasalvador@mail.mcgill.ca*
Mila - Quebec AI Institute, McGill University

Kilian Fatras† *kilian.fatras@mila.quebec*
Mila - Quebec AI Institute, McGill University

Ioannis Mitliagkas *ioannis@mila.quebec*
Mila - Quebec AI Institute, Université de Montréal, Canada CIFAR AI Chair

Adam Oberman *adam.oberman@mila.quebec*
Mila - Quebec AI Institute, McGill University, Canada CIFAR AI Chair

Reviewed on OpenReview: *https://openreview.net/forum?id=XcVzIBXeRn*

## Abstract

Unsupervised Domain Adaptation (UDA) aims at classifying unlabeled target images by leveraging labeled source ones. In the case of an extreme label shift scenario between the source and target domains, where we have extra source classes not present in the target domain, the UDA problem becomes a harder problem called Partial Domain Adaptation (PDA). While different methods have been developed to solve the PDA problem, most successful algorithms use model selection strategies that rely on target labels to find the best hyper-parameters and/or models along training. These strategies violate the main assumption in PDA: only unlabeled target domain samples are available. In addition, there are also experimental inconsistencies between developed methods - different architectures, hyper-parameter tuning, number of runs - yielding unfair comparisons. The main goal of this work is to provide a realistic evaluation of PDA methods under different model selection strategies and a consistent evaluation protocol. We evaluate 6 state-of-the-art PDA algorithms on 2 different real-world datasets using 7 different model selection strategies. Our two main findings are: *(i)* without target labels for model selection, the accuracy of the methods decreases by up to 30 percentage points; *(ii)* only one method and model selection pair performs well on both datasets. Experiments were performed with our PyTorch framework, BenchmarkPDA, which we open source.

## 1 Introduction

Domain adaptation. Deep neural networks are highly successful in image recognition for in-distribution samples (He et al., 2016), with this success being intrinsically tied to large amounts of labeled training data. However, they tend not to generalize as well on images with backgrounds or colors not seen during training. These differences constitute some examples of what is known in the literature as covariate shift. Unfortunately, enriching the training set with new samples from different domains is challenging, as labeling data is both an expensive and time-consuming task. Thus, researchers have focused on unsupervised domain adaptation (UDA), where we have access to unlabeled samples from a different domain, known as the target domain. The purpose of UDA is to classify these unlabeled samples by leveraging the knowledge given by the labeled samples from the source domain (Pan & Yang, 2010; Patel et al., 2015). In the standard UDA problem, the source and target domains are assumed to share the same classes. In this paper, we consider a more challenging variant of the problem called *partial domain adaptation* (PDA): the classes in the target domain $\mathcal{Y}_t$ form a subset of the classes in the source domain $\mathcal{Y}_s$ (Cao et al., 2018), i.e., $\mathcal{Y}_t \subset \mathcal{Y}_s$. The number of target classes is unknown as we do not have access to the labels.

†These authors contributed equally to this work.
The extra source classes, not present in the target domain, make the PDA problem more difficult: simply aligning the source and target domains results in matching target samples to source samples whose classes are not present in the target domain.

Realistic evaluations. Most recent PDA methods report an increase in target accuracy of up to 15 percentage points on average when compared to the baseline approach that is trained only on source samples. While these successes constitute important breakthroughs in the DA research literature, target labels are used for model selection, violating the main UDA assumption. In their absence, the effectiveness of PDA methods remains unclear, and model selection still constitutes an open problem, as we show in this work. Moreover, the hyper-parameter tuning is either unknown or lacks details, and sometimes requires labeled target data, which makes it challenging to apply PDA methods to new datasets.

Recent work has highlighted the importance of model selection in the presence of domain shift. Gulrajani & Lopez-Paz (2021) showed that, when evaluating domain generalization (DG) algorithms, whose goal is to generalize to a completely unseen domain, in a consistent and realistic setting, no method outperforms the baseline ERM method by more than 1 percentage point. They argue that DG methods without a model selection strategy remain incomplete and should therefore be specified as part of the method. A similar recommendation was made by Saito et al. (2021) for domain adaptation. PDA methods have been designed using target labels at test time to select the best models. Related work (Saito et al., 2021; You et al., 2019) on model selection strategies for domain adaptation claimed to select the best models without using target labels. However, a realistic empirical study of these strategies in PDA is still lacking.

In this work, we conduct extensive experiments to study the impact of model selection strategies on the performance of partial domain adaptation methods. We evaluate 6 different PDA methods over 7 different model selection strategies, 4 of which do not use target labels, on 2 different datasets under the same experimental protocol for a fair comparison. We list below our major findings:

- For a given method, the difference in accuracy attained by a model selected without target labels and a model selected with target labels can reach 30 percentage points (see Table 1 for a summary of results).
- Only 1 pair of PDA method and target label-free model selection strategy achieves accuracies comparable to when target labels are used, while still improving over a source-only baseline.
- Random seed plays an important role in the selection of hyper-parameters. Selected parameters are not stable across different seeds, and the standard deviation between accuracies on the same task can be up to 8.4%, even when relying on target labels for model selection.
- Under a more realistic scenario where some target labels are available, 100 random labeled samples are enough to limit the drop in accuracy to 1 percentage point (when compared to using all target samples). However, using only one labeled target sample per class leads to a significant drop in performance.

Related work. Concurrent work Musgrave et al. (2022) also conducted a study of model selection methods for unsupervised domain adaptation.
Similarly to our work, the authors study several model selection techniques and several domain adaptation methods, but their focus is on the standard domain adaptation setting, while we focus on PDA. In Section 4, we discuss our slightly different methodologies, and in Section 5, we compare our respective findings. Other domain adaptation variants could have been considered, such as open-set domain adaptation (Busto & Gall, 2017). However, these other variants require specific methods, and it would have been computationally expensive to make an extensive study of each variant in a single work. We thus leave other domain adaptation variants for future work.

Outline. In Section 2, we provide an overview and a mathematical description of the different model selection strategies considered in this work. Then, in Section 3, we discuss the PDA methods that we consider in this work. In Section 4, we describe the training procedures, hyper-parameter tuning and evaluation protocols used to evaluate all methods fairly. In Section 5, we discuss the results of the different benchmarked methods and the performance of the different model selection strategies. Finally, in Section 6, we give some recommendations for future work in partial domain adaptation.

Table 1: Task accuracy average computed over three different seeds (2020, 2021, 2022) on Partial office-home and Partial visda. For each dataset and PDA method, we display the results of the *worst and best* performing model selection strategies that do not use target labels, as well as their average and the oracle model selection strategy. All results can be found in Table 6.

| Dataset | Model Selection | s. only | pada | safn | ba3us | ar | jumbot | mpot |
|---|---|---|---|---|---|---|---|---|
| office-home | Worst (w/o target labels) | 59.55 (-2.31) | 52.72 (-11.00) | 61.37 (-1.93) | 62.25 (-13.73) | 64.32 (-8.42) | 61.28 (-15.87) | 46.92 (-30.38) |
| office-home | Best (w/o target labels) | 60.73 (-1.14) | 63.08 (-0.64) | 62.59 (-0.71) | 75.37 (-0.61) | 70.58 (-2.16) | 74.61 (-2.54) | 66.24 (-11.07) |
| office-home | Avg (w/o target labels) | 60.22 ± 0.43 | 59.5 ± 4.1 | 62.02 ± 0.4 | 69.9 ± 5.1 | 67.7 ± 2.8 | 67.8 ± 5.8 | 59.7 ± 7.6 |
| office-home | oracle | 61.87 | 63.72 | 63.30 | 75.98 | 72.73 | 77.15 | 77.31 |
| visda | Worst (w/o target labels) | 55.02 (-4.46) | 32.32 (-22.26) | 42.83 (-19.81) | 51.07 (-16.60) | 55.69 (-18.15) | 59.86 (-24.15) | 61.62 (-25.33) |
| visda | Best (w/o target labels) | 55.24 (-4.24) | 56.83 (2.26) | 58.62 (-4.02) | 65.58 (-2.09) | 67.20 (-6.65) | 77.69 (-6.31) | 78.40 (-8.54) |
| visda | Avg (w/o target labels) | 55.1 ± 0.1 | 45.1 ± 8.8 | 51.1 ± 7.4 | 57.5 ± 5.3 | 63.5 ± 4.6 | 65.2 ± 7.3 | 71.2 ± 6.26 |
| visda | oracle | 59.48 | 54.57 | 62.64 | 67.67 | 73.85 | 84.01 | 86.95 |

## 2 Model Selection Strategies: An Overview

In UDA, the goal is to classify unlabeled data from a target domain leveraging labeled data from a source domain (Pan & Yang, 2010). Formally, let $\mathcal{D}_s$ (resp. $\mathcal{D}_t$) be the labeled (resp. unlabeled) source (resp. target) dataset, composed of $n_s$ (resp. $n_t$) *i.i.d.* random labeled (resp. unlabeled) vectors in $\mathbb{R}^d$ drawn from a distribution p (resp. q), *i.e.,* $\mathcal{D}_s = \{(\mathbf{x}_i^s, y_i^s)\}_{i=1}^{n_s}$, $\mathbf{x}_i^s \in \mathbb{R}^d$ (resp. $\mathcal{D}_t = \{\mathbf{x}_j^t\}_{j=1}^{n_t}$, $\mathbf{x}_j^t \in \mathbb{R}^d$).
The goal is to find $f_\theta$, typically a deep neural network, that minimizes the target risk $\epsilon_t(f) = \mathbb{P}_{(\mathbf{x}, y) \sim q}[f(\mathbf{x}) \neq y]$. In standard domain adaptation, both domains share the same label space, $\mathcal{Y}_s = \mathcal{Y}_t$, while in PDA the target classes form a subset of the source classes, $\mathcal{Y}_t \subset \mathcal{Y}_s$. To classify the samples from the two domains, we rely on a feature extractor and a classifier. Formally, let $g_\theta : \mathcal{X} \mapsto \mathbb{R}^d$ be a feature extractor and $f_\phi : \mathbb{R}^d \mapsto \Sigma_C$ be a classifier, where $d$ is the dimension of the feature space, $\Sigma_C$ is the simplex and $C$ is the number of classes. The maps $g_\theta$ and $f_\phi$ are usually neural networks parametrized by $\theta$ and $\phi$.

As discussed in the introduction, model selection (choosing hyper-parameters, training checkpoints, neural network architectures) is a crucial part of training neural networks. In the supervised learning setting, a validation set is used to estimate the model's accuracy. However, in UDA such an approach is not possible as we only have unlabeled target samples. Several strategies have been designed to address this issue. Below, we discuss the ones used in this work.

Source Accuracy (s-acc). Ganin & Lempitsky (2015) used the accuracy estimated on a small validation set from the source domain to perform model selection. While the source and target accuracies are related, there are no theoretical guarantees. You et al. (2019) showed that when the domain gap is large this approach fails to select competitive models.

Deep Embedded Validation (dev). Sugiyama et al. (2007) and Long et al. (2018) perform model selection through Importance-Weighted Cross-Validation (IWCV). Under the assumption that the source and target domains follow a covariate shift, the target risk can be estimated from the source risk through importance weights that give increased importance to source samples that are closer to target samples. Formally, the importance weight $w(\mathbf{x})$ corresponds to the ratio of the target and source densities, *i.e.,* $w(\mathbf{x}) = \frac{q(\mathbf{x})}{p(\mathbf{x})}$, and is found by estimating each density using Gaussian kernels. Recently, You et al. (2019) proposed an improved variant, Deep Embedded Validation (dev), that controls the variance of the estimator through a method called control variates, and estimates the importance weights with a discriminative model that distinguishes source samples from target samples, leading to a more stable and effective method. The latter is based on the fact that

$$\psi(\mathbf{x})={\frac{1}{1+w(\mathbf{x})}},$$

where $\psi$ is an optimal source-target discriminator, meaning that $\psi(\mathbf{x}) = 1$ if $\mathbf{x}$ is drawn from $p$ and $\psi(\mathbf{x}) = 0$ if $\mathbf{x}$ is drawn from $q$.

Entropy (ent). While minimizing the entropy of the target samples has been used in domain adaptation to improve accuracy by promoting tighter clusters, Morerio et al. (2018) showed that it can also be used for model selection. Formally, we want the classifier $f_{\phi^\star}$ and feature extractor $g_{\theta^\star}$ that have the lowest entropy, *i.e.,* that minimize $\sum_{i=1}^{n_t} H\big(f_\phi(g_\theta(\mathbf{x}_i^t))\big)$, where $H$ denotes the entropy. The intuition is that a lower entropy model corresponds to a highly confident model with discriminative target features and therefore reliable predictions.

Soft Neighborhood Density (snd). Saito et al. (2021) argue that a good UDA model will have a cluster structure where nearby target samples are in the same class. They claim that entropy is not able to capture this property and propose the Soft Neighborhood Density (snd) score to address it. Formally, they compute the similarity between features extracted from the target data, $S_{i,j} = \langle g_\theta(\mathbf{x}_i^t), g_\theta(\mathbf{x}_j^t) \rangle$. Then, the snd score is defined as the entropy of the probability distribution $P_{i,j} = \frac{\exp(S_{i,j}/\tau)}{\sum_{j'} \exp(S_{i,j'}/\tau)}$, where $\tau$ is a temperature parameter. The selected model $g_{\theta^\star}$ is the model with the highest snd score. A minimal sketch of these three label-free criteria is given below.
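To make the criteria above concrete, the following is a minimal NumPy sketch of the three target label-free scores, assuming precomputed classifier outputs, target features and importance weights; the function names and inputs are our own illustration rather than the interface of any benchmarked implementation.

```python
import numpy as np
from scipy.special import softmax

def ent_score(probs):
    # ent: mean prediction entropy over target samples (lower is better).
    # probs: (n_t, C) softmax outputs f_phi(g_theta(x_t)).
    return -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()

def snd_score(target_feats, tau=0.05):
    # snd: entropy of the softmax over pairwise similarities of
    # (L2-normalized) target features (higher is better).
    f = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -np.inf)  # remove self-similarity
    p = softmax(sim, axis=1)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def dev_score(src_val_losses, w):
    # dev: importance-weighted source risk with a control variate
    # (lower is better); w(x) = (1 - psi(x)) / psi(x) comes from a
    # source/target discriminator psi, evaluated on held-out source samples.
    wl = w * src_val_losses
    eta = -np.cov(wl, w)[0, 1] / w.var()
    return wl.mean() + eta * w.mean() - eta
```

The `dev_score` control-variate form follows You et al. (2019); in a sketch like this, the small mismatch between the biased and unbiased variance estimators is immaterial.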
Target Accuracy (oracle). We consider as well the target accuracy on all target samples. While we emphasize once again that its use is not realistic in unsupervised domain adaptation (hence why we refer to it as oracle), it has nonetheless been used to report the best accuracy achieved by the model along training in several previous works (Cao et al., 2018; Xu et al., 2019; Jian et al., 2020; Gu et al., 2021; Nguyen et al., 2022). Here, we use it as an upper bound for all the other model selection strategies and to check the reproducibility of previous works.

Small Labeled Target Set (1-shot and 100-rnd). For real-world applications in an industry setting, it is unlikely that a model will be deployed without an estimation of its performance on the target domain. Therefore, one can imagine a situation where a PDA method is used and a small set of labeled target samples is available. Thus, we compute the target accuracy with 1 labeled sample per class (1-shot) and with 100 random labeled target samples (100-rnd) as model selection strategies. One could argue that the 100 random samples could have been used during training with semi-supervised domain adaptation methods. However, note that we do not know how many classes are present in the target domain, which makes it hard to form a split. For instance, 100-rnd possibly represents fewer than 2 samples per class for one of our real-world datasets, as we do not know the number of classes, making a potential split between train and validation target sets impossible.

## 3 Partial Domain Adaptation Methods

In this section, we give a brief description of the PDA methods considered in our study. They can be grouped into two families: adversarial training and divergence minimization.

Adversarial training. To solve the UDA problem, Ganin et al. (2016) aligned the source and target domains with the help of a domain discriminator trained adversarially to distinguish the samples from the two domains. In addition to $g_\theta$ and $f_\phi$, we also consider an adversarial classifier $D_\zeta : \mathcal{Z} \mapsto \{0, 1\}$ parameterized by $\zeta$. The maps are trained in an adversarial manner where $D_\zeta$ tries to distinguish the source data from the target data, while $g_\theta$ is trained to confuse the domain discriminator $D_\zeta$:

$$\zeta^{\star},\theta^{\star}=\operatorname*{arg\,min}_{\theta}\operatorname*{arg\,max}_{\zeta}\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\log\left(D_{\zeta}\big(g_{\theta}(\mathbf{x}_{i}^{s})\big)\right)+\frac{1}{n_{t}}\sum_{j=1}^{n_{t}}\log\left(1-D_{\zeta}\big(g_{\theta}(\mathbf{x}_{j}^{t})\big)\right).$$

However, when naively applied to the PDA setting, this strategy aligns target samples to source samples whose classes are not in the target domain, and as a result the model performs worse than a model trained only on source data. To remedy this problem, Cao et al. (2018) proposed pada, which introduces a PDA-specific solution to adversarial domain adaptation: the contribution of the source-only class samples to the training of both the source classifier and the domain adversarial network is decreased. This is achieved through class weights that are calculated by simply averaging the classifier predictions on all target samples. As the source-only classes should not be predicted in the target domain, they should have lower weights; a minimal sketch of this weighting is given below.
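The following is a schematic PyTorch sketch of the class-weighted objective just described. It is an illustration of the idea, not the official pada code: the module interfaces are ours, the discriminator is assumed to end with a sigmoid, and in practice the feature extractor receives reversed gradients from the domain loss through a gradient reversal layer.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_class_weights(classifier, target_feats):
    # pada: average the classifier predictions over all target samples and
    # normalize by the maximum; source-only classes receive small weights.
    gamma = F.softmax(classifier(target_feats), dim=1).mean(dim=0)
    return gamma / gamma.max()

def pada_losses(classifier, discriminator, feat_s, y_s, feat_t, gamma):
    # Class-weighted source classification loss.
    cls_loss = F.cross_entropy(classifier(feat_s), y_s, weight=gamma)
    # Class-weighted domain-adversarial loss: source samples from classes
    # that are (likely) absent from the target contribute less to the
    # alignment. discriminator(.) outputs P(domain = source) in (0, 1).
    d_s = discriminator(feat_s).squeeze(1)
    d_t = discriminator(feat_t).squeeze(1)
    dom_loss = -(gamma[y_s] * torch.log(d_s + 1e-8)).mean() \
               - torch.log(1.0 - d_t + 1e-8).mean()
    return cls_loss, dom_loss
```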
More recently, Jian et al. (2020) proposed ba3us, which augments the target mini-batch with source samples to transform the PDA problem into a vanilla DA problem. In addition, an adaptive weighted entropy objective is used to encourage incorrect classes to have uniform and low prediction scores.

Divergence minimization. Another standard direction to align the source and target distributions in the feature space of a neural network is to minimize a given divergence between the domain distributions. The purpose of the divergence minimization approach is to find the optimal parameters $\theta^\star$ which minimize the divergence $L$:

$$\theta^{\star} = \operatorname*{arg\,min}_{\theta} \, L\big(g_{\theta\#}\,p,\; g_{\theta\#}\,q\big),$$

where $g_{\theta\#}$ denotes the pushforward operator of the map $g_\theta$. Xu et al. (2019) empirically found that target samples have low feature norms compared to source samples. Based on this insight, they proposed safn, which progressively adapts the feature norms of the two domains by minimizing the Maximum Mean Feature Norm Discrepancy as the divergence $L$ (Gretton et al., 2012). Other approaches consider the optimal transport (OT) cost as the divergence $L$ (Bhushan Damodaran et al., 2018) with mini-batches (Peyré & Cuturi, 2019; Fatras et al., 2020; 2021b). For the PDA problem specifically, Fatras et al. (2021a) developed jumbot, a mini-batch unbalanced optimal transport method that learns a joint distribution of the embedded samples and labels. The use of unbalanced OT is critical for the PDA problem as it allows transporting only the source samples whose classes are in the target domain. Based on this work, Nguyen et al. (2022) investigated the partial OT variant (Chapel et al., 2020), a particular case of unbalanced OT, proposing mpot. Finally, another line of work is to use the Kantorovich-Rubinstein duality of optimal transport to perform the alignment, similarly to WGAN (Arjovsky et al., 2017). This is precisely the work of Gu et al. (2021), who proposed ar. In addition, source samples are reweighted in order not to transport the source-only class samples. The Kantorovich-Rubinstein duality relies on a 1-Lipschitz function, which is approximated using adversarial training like in the PDA methods described above. A schematic sketch of a mini-batch unbalanced OT loss is given below.
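To illustrate the OT-based family, here is a rough NumPy sketch of a jumbot-style mini-batch unbalanced OT alignment loss built on the POT library (Flamary et al., 2021). The joint feature/label cost follows the idea described above, but the function name and all hyper-parameter values are placeholders; the released implementations work on PyTorch tensors and backpropagate through this quantity.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (Flamary et al., 2021)

def uot_alignment_loss(feat_s, y_s_onehot, feat_t, pred_t,
                       alpha=0.01, lam=0.5, eps=0.1, tau=0.5):
    """Mini-batch unbalanced OT between embedded source and target batches.

    The ground cost mixes a feature distance with a label cross-entropy
    term; relaxing the marginal constraints (reg_m = tau) lets the mass of
    source-only classes remain untransported, the key PDA property.
    """
    # Joint cost: alpha * ||g(x_s) - g(x_t)||^2 + lam * CE(y_s, f(g(x_t))).
    cost = alpha * ot.dist(feat_s, feat_t)                  # squared Euclidean
    cost += lam * (-(y_s_onehot @ np.log(pred_t + 1e-8).T)) # cross-entropy
    a = np.full(feat_s.shape[0], 1.0 / feat_s.shape[0])     # uniform marginals
    b = np.full(feat_t.shape[0], 1.0 / feat_t.shape[0])
    plan = ot.unbalanced.sinkhorn_unbalanced(a, b, cost, reg=eps, reg_m=tau)
    return (plan * cost).sum()
```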
## 4 Experimental Protocol

In this section, we discuss our choices regarding the training details, datasets and neural network architecture. We then discuss the hyper-parameter tuning used in this work. We summarize the PDA methods, model selection strategies and experimental protocol used in this work in Table 2. The main differences in the experimental protocols of the different published state-of-the-art (SOTA) methods are summarized in Table 3.

| | |
|---|---|
| PDA Methods | pada, safn, ba3us, ar, jumbot, mpot |
| Model Selection Strategies | s-acc, ent, dev, snd, 1-shot, 100-rnd, oracle |
| Architecture | ResNet50 backbone ⊕ linear bottleneck ⊕ linear classification head |
| Experimental protocol | 3 seeds on the 12 tasks of office-home and 2 tasks of visda |

Table 2: Summary of our considered methods, model selection strategies, architecture and datasets, where ⊕ denotes the operation of adding layers.

| Method | Architecture (bottleneck) | Runs per task | Model Selection: Hyper-Parameters | Model Selection: Along Training |
|---|---|---|---|---|
| pada | Linear | 1 | IWCV (lacks details) | oracle |
| safn | Non-Linear | 3 | Unknown | oracle |
| ba3us | Linear | 3 | Unknown | oracle |
| ar | Non-Linear | 1 | IWCV (lacks details) | oracle |
| jumbot | Linear | 1 | oracle | final |
| mpot | Linear | 3 | Unknown | oracle |

Table 3: Summary of the experimental protocol used for SOTA partial domain adaptation methods. We refer to Appendix A.1 for additional details.

| Method | s. only | pada | safn | ba3us | ar | jumbot | mpot |
|---|---|---|---|---|---|---|---|
| Reported | 61.35 | 62.06 | 71.84 | 75.98 | 77.11 | 75.47 | 77.98 |
| Reimplementation | 61.87 | 63.72 | 74.72 | 75.98 | 76.00 | 77.15 | 77.31 |
| Ours | 61.87 | 63.72 | 63.30 | 75.98 | 72.73 | 77.15 | 77.31 |

Table 4: Comparison between reported accuracies on partial office-home from published methods and our reimplementation using the oracle model selection strategy. "Ours" denotes our reimplementation where all methods have the same bottleneck architecture, as discussed in Section 4.

To perform our experiments we developed a PyTorch (Paszke et al., 2019) framework, BenchmarkPDA. We make it available for researchers to use and to contribute new algorithms and model selection strategies: https://github.com/oberman-lab/BenchmarkPDA

It is standard practice in the literature, when proposing a new method, to report the results of its competitors directly from the original papers (Cao et al., 2018; Xu et al., 2019; Jian et al., 2020; Gu et al., 2021; Nguyen et al., 2022). As a result, some methods differ from others, for instance in the neural network architecture (ar (Gu et al., 2021), safn (Xu et al., 2019)) or in the evaluation protocol (jumbot (Fatras et al., 2021a)). These changes often contribute to an increased performance of the newly proposed method, leaving previous methods at a disadvantage. Therefore, we chose to implement all methods with the same commonly used neural network architecture, optimizer, learning rate schedule and evaluation protocol. We discuss the details below.

## 4.1 Methods, Datasets, Training And Evaluation Details

Methods. We implemented the 6 PDA methods, along with the Source Only baseline, by adapting the code from the official GitHub repositories of each method: pada (Cao et al., 2018), safn (Xu et al., 2019), ba3us (Jian et al., 2020), ar (Gu et al., 2021), jumbot (Fatras et al., 2021a) and mpot (Nguyen et al., 2022). We provide the links to the different official repositories in Appendix A.1. A comparison with previously reported results can be found in Table 11, and we postpone the discussion to Section 5.

Datasets. We consider two standard real-world datasets used in DA. Our first dataset is office-home (Venkateswara et al., 2017), a difficult dataset for unsupervised domain adaptation (UDA). It has 15,500 images from four different domains: Art (A), Clipart (C), Product (P) and Real-World (R). For each domain, the dataset contains images of 65 object categories that are common in office and home scenarios. For the partial office-home setting, we follow Cao et al. (2018) and select the first 25 categories (in alphabetical order) in each domain as the partial target domain; a sketch of this construction is given below. We evaluate all methods on all 12 possible tasks, where by task we mean training a classifier using one domain as source data and a different domain as target data. For example, the AC task is the partial domain adaptation scenario where the Art domain is used as source and the Clipart domain is used as target.
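To make the partial setting concrete, below is a small, hypothetical sketch of how the partial target domain can be built from a per-category directory layout; the layout and helper names are our own illustration, not the BenchmarkPDA interface.

```python
import os

def partial_class_list(domain_root, num_classes=25):
    # domain_root has one sub-directory per category (hypothetical layout,
    # e.g. Clipart/Alarm_Clock/...); keep the first categories in
    # alphabetical order, as in Cao et al. (2018).
    return sorted(
        d for d in os.listdir(domain_root)
        if os.path.isdir(os.path.join(domain_root, d))
    )[:num_classes]

def partial_image_list(samples, kept_classes):
    # samples: (path, class_name) pairs of the target domain; only samples
    # from the kept classes remain in the partial setting, while the
    # source domain keeps all of its categories.
    kept = set(kept_classes)
    return [(path, c) for (path, c) in samples if c in kept]
```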
visda (Peng et al., 2017) is a large-scale dataset for UDA. It has 152,397 synthetic images and 55,388 real-world images, where 12 object categories are shared by the two domains. For the partial visda setting, we follow Cao et al. (2018) and select the first 6 categories, taken in alphabetical order, in each domain as the partial target domain. We evaluate the models in the two possible scenarios. We highlight that we are the first to investigate the performance of jumbot and mpot on partial visda.

Model Selection Strategies. We consider the 7 different strategies for model selection described in Section 2: s-acc, dev, ent, snd, oracle, 1-shot, 100-rnd. We use them both for hyper-parameter tuning and for selecting the best model along training. Since s-acc, dev and snd require a source validation set, we divide the source samples into a training subset (80%) and a validation subset (20%). Regardless of the model selection strategy used, all methods are trained using the source training subset. This is in contrast with previous work that uses all source samples, but necessary to ensure a fair comparison of the model selection strategies. We refer to Appendix A.2 for additional details.

| Dataset | Variant | ba3us ent | ba3us dev | ba3us snd | jumbot ent | jumbot dev | jumbot snd | mpot ent | mpot dev | mpot snd | safn ent | safn dev | safn snd |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| office-home | Naive | 52.60 | 63.10 | 44.48 | 52.30 | 26.75 | 17.67 | 49.01 | 16.72 | 30.63 | 32.12 | 49.67 | 5.01 |
| office-home | Heuristic | **58.45** | 63.10 | **60.96** | **56.24** | **45.79** | **55.16** | 49.01 | **45.61** | 30.63 | **46.27** | 49.67 | **49.67** |
| visda | Naive | 39.06 | **36.99** | 1.14 | 35.89 | 54.53 | 11.99 | 75.04 | 55.33 | 36.11 | 52.82 | 53.26 | 0.83 |
| visda | Heuristic | **67.50** | 34.94 | **38.76** | **47.23** | 54.53 | **66.42** | 75.04 | 55.33 | **85.36** | 52.82 | 53.26 | **52.82** |

Table 5: Comparison between the naive model selection strategy and our heuristic approach. Accuracy on the AC task for office-home and the SR task for visda. Best results in **bold**.

Architecture. Our network is composed of a feature extractor with a linear classification layer on top of it. The feature extractor is a ResNet50 (He et al., 2016), pre-trained on ImageNet (Deng et al., 2009), with its last linear layer removed and replaced by a linear bottleneck layer of dimension 256. This architecture is used by almost all methods, with the exception of safn and ar, which use different architectures. However, we believe that it is important to use the same architecture for all methods to understand their intrinsic performance. While it is possible that we underestimate the performance of safn and ar as a result, it is also possible that other methods would benefit from these different architectures. We leave this question for future work.

Optimizer. We use the SGD (Robbins & Monro, 1951) algorithm with a momentum of 0.9, a weight decay of $5 \times 10^{-4}$ and Nesterov acceleration. As the bottleneck and classifier layers are randomly initialized, we set their learning rates to be 10 times that of the pre-trained ResNet50 backbone. We schedule the learning rate with a strategy similar to the one in Ganin et al. (2016): $\chi_i = \chi_0 (1 + \mu i)^{-\nu}$, where $i$ is the current iteration, $\chi_0 = 0.001$, $\mu = 0.001$ and $\nu = 0.75$. While this schedule is slightly different from the one reported in previous work, it is the one implemented in the different official code implementations. We elaborate on the differences and provide additional details in Appendix A.3. A sketch of this training setup is given below.
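The following PyTorch sketch summarizes the described training setup, assuming the torchvision ResNet50; it illustrates the configuration above rather than being a verbatim excerpt from BenchmarkPDA.

```python
import torch
from torchvision.models import resnet50

num_classes = 65                                  # office-home source classes
backbone = resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()                 # drop the ImageNet head
bottleneck = torch.nn.Linear(2048, 256)           # linear bottleneck
classifier = torch.nn.Linear(256, num_classes)    # linear classification head

# Randomly initialized layers get a 10x larger learning rate than the backbone.
optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 0.001},
     {"params": bottleneck.parameters(), "lr": 0.01},
     {"params": classifier.parameters(), "lr": 0.01}],
    momentum=0.9, weight_decay=5e-4, nesterov=True)

# chi_i = chi_0 * (1 + mu * i)^(-nu): LambdaLR rescales each group's base lr;
# scheduler.step() is called once per training iteration.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda i: (1.0 + 0.001 * i) ** -0.75)
```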
Finally, as for the mini-batch size, jumbot and mpot were designed with stratified sampling, *i.e.,* a balanced source mini-batch with the same number of samples per class. This strategy reduces the optimal transport matching between target samples and source samples from source-only classes, which in turn leads to fewer target samples being classified as belonging to source-only classes. On the other hand, it was shown that for some methods (e.g., ba3us) using a larger mini-batch than what was reported leads to a decreased performance (Fatras et al., 2021a). As a result, we used the default mini-batch strategy for each method. jumbot and mpot use stratified mini-batches of size 65 for office-home and 36 for visda. All other methods use a random uniform sampling strategy with a mini-batch size of 36.

Evaluation Protocol. For the hyper-parameters chosen with each model selection strategy, we run the methods on each task 3 times, each with a different seed (2020, 2021, 2022). We tried to control for the randomness across methods by setting the seeds at the beginning of training. Interestingly, as we discuss in more detail in Section 5, some methods demonstrated a non-negligible variance across the different seeds, showing that some hyper-parameters and methods are not robust to randomness.

## 4.2 Hyper-Parameter Tuning

Previous works (Gulrajani & Lopez-Paz, 2021; Musgrave et al., 2021; 2022) perform random searches with the same number of runs for each method. In contrast, we perform hyper-parameter grid searches for each method. As a result, the hyper-parameter tuning budget differs across the methods depending on the number of hyper-parameters and the chosen grid. While one can argue this leads to an unfair comparison of the methods, in practice most real-world applications will simply use the best-performing method, which is precisely what our approach captures.

The hyper-parameter tuning would ideally be performed for each task of each dataset, but that would require significant computational resources without a clear added benefit. Instead, for each dataset, we first perform the hyper-parameter tuning on models trained on a single task: A2C for office-home and S2R for visda. Then, in a second phase, we use the hyper-parameters selected in the first phase on the remaining tasks. This same strategy was adopted in (Fatras et al., 2021a), and the hyper-parameters were found to generalize to the remaining tasks of the dataset. We conjecture that this may be due to the fact that information regarding the number of target-only classes is implicitly hidden in the hyper-parameters. See Appendix A.4 for more details regarding the hyper-parameters.

In practice, during the first phase, we set the number of training iterations to 5000 steps for office-home and 10000 steps for visda. Then, we apply the different model selection strategies to the different trained models and select the hyper-parameters that optimize each model selection strategy. As we are looking for the best hyper-parameters on a single task that hopefully generalize to the remaining tasks, we do not consider the end of training as a hyper-parameter during this phase. In the second phase, we use the chosen hyper-parameters and run the methods on all tasks. We use the model selection strategies to select the best model for each task along training, effectively considering the 'end of training' as a hyper-parameter. This two-phase protocol is sketched below.
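The two-phase protocol can be summarized by the following schematic sketch, where `train` and `criterion` stand in for the actual training loop and a model selection score; both are hypothetical callables introduced purely for illustration, not part of any released interface.

```python
def tune_and_evaluate(train, criterion, grid, tuning_task, all_tasks):
    """Schematic two-phase hyper-parameter selection.

    train(task, hp)                 -> final model for one configuration
    train(task, hp, keep_all=True)  -> list of checkpoints along training
    criterion(model)                -> selection score (higher is better)
    """
    # Phase 1: one run per configuration on a single task; score only the
    # final checkpoint so the chosen hyper-parameters do not reward
    # configurations that overfit earlier checkpoints.
    best_hp = max(grid, key=lambda hp: criterion(train(tuning_task, hp)))

    # Phase 2: rerun on every task with the chosen configuration and use
    # the same criterion to pick the best checkpoint along training.
    return best_hp, {
        task: max(train(task, best_hp, keep_all=True), key=criterion)
        for task in all_tasks
    }
```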
With the wrong choice of hyper-parameters, several runs in our hyper-parameter search for jumbot, mpot and ba3us were unsuccessful, with the optimization reaching its end without the model being trained at all. The extreme case is a degenerate model that predicts the same label for all examples with high confidence. Such a model poses a challenge to dev, snd and ent: a highly confident model has low entropy and is therefore considered a 'good' model according to ent, when in fact it is quite the contrary, and a similar reasoning can be made for dev and snd. This failure mode is accounted for in (Saito et al., 2021). Following their recommendations, for jumbot, mpot and ba3us, before applying the model selection strategy, we discard models whose source domain accuracy is below a certain threshold thr, set with the heuristic thr = 0.9 · Acc, where Acc denotes the source domain accuracy of the Source-Only model. In our experiments, this leads to selecting models whose source accuracy is at least thr = 69.01% for the A2C task on office-home and thr = 89.83% for the S2R task on visda. We choose this heuristic because the ablation studies of some methods showed that performing the adaptation slightly decreases the source accuracy (Bhushan Damodaran et al., 2018). Table 5 shows that our heuristic leads to improved results; a sketch of this filtering step is given at the end of this subsection. Lastly, when choosing the hyper-parameters, we only consider the model at the end of training, discarding the intermediate checkpoint models, in order to select hyper-parameters which do not lead to overfitting at the end of training and generalize better to the other tasks.

Following the above protocol, for each dataset we trained 468 models in total in order to find the best hyper-parameters. Then, to obtain the results with our neural network architecture on all tasks of each dataset, we trained an additional 1224 models for office-home and 156 models for visda. We additionally trained 231 models with the different neural network architectures for ar and safn. In total, 2547 models were trained for this study, and we present the different results in the next section.
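A minimal sketch of the filtering step, assuming precomputed source validation accuracies, is given below; the function and argument names are ours.

```python
def filter_by_source_accuracy(candidates, source_acc, source_only_acc):
    """Degenerate-model filter: before applying a selection criterion,
    drop candidates whose source validation accuracy is below
    thr = 0.9 * Acc, where Acc is the Source-Only model's source accuracy.

    candidates: iterable of model identifiers; source_acc: dict mapping
    each identifier to its accuracy (in %) on the source validation split.
    """
    thr = 0.9 * source_only_acc
    return [c for c in candidates if source_acc[c] >= thr]
```

For instance, a threshold of thr = 69.01%, as used on the A2C task, corresponds to a Source-Only source accuracy of about 76.7%.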
## 5 Partial Domain Adaptation Experiments

We start the results section by discussing the differences between our reproduced results and the published results of the different PDA methods. Then, we compare the performance of the different model selection strategies. Finally, we discuss the sensitivity of the methods to the random seed.

## 5.1 Reproducibility Of Previous Results

We start by ensuring that our reimplementation of the PDA methods was done correctly by comparing our reproduced results with previously reported results in Table 11. Accordingly, the model selection strategy used is oracle. On office-home, both pada and jumbot achieved higher average task accuracy (1.6 and 1.7 percentage points, respectively) in our reimplementation, while for ba3us and mpot we recover the accuracies reported in their respective papers. However, we saw a decrease in performance for both safn and ar of roughly 8 and 5 percentage points, respectively. This is to be expected due to the differences in the neural network architectures. While we use a linear bottleneck layer, safn uses a nonlinear bottleneck layer. As for ar, they make two significant changes: the linear classification head is replaced by a spherical logistic regression (SLR) layer (Gu et al., 2020), and the features are normalized (the ℓ2-norm is set to a dataset-dependent value, another hyper-parameter that requires tuning) before feeding them to the classification head. While we account for the first change by comparing to the AR (w/ linear) results reported in (Gu et al., 2021), in our neural network architecture we do not normalize the features. These changes, the nonlinear bottleneck layer for safn and the feature normalization for ar, significantly boost the performance of both methods. Performing an ablation study of new architectures with respect to the original one, as done in (Gu et al., 2021), allows us to understand the architecture's influence on the method. We recommend authors to perform one, as it also makes the results easier to reproduce. When now comparing the reimplementations with the same neural network architectures, our safn reimplementation achieves a higher average task accuracy by 3 percentage points, while our ar reimplementation is now only 1 percentage point below. The fact that the ar reported results are from only one run, while ours are averaged across 3 distinct seeds, explains the small remaining gap. Moreover, we report higher or on-par accuracy on 4 of the 12 tasks. Given all the above and the further discussion of the visda dataset results in Appendix B, our reimplementations are trustworthy and give validity to the results we discuss in the next sections.

| Dataset | Method | s-acc | ent | dev | snd | 1-shot | 100-rnd | oracle |
|---|---|---|---|---|---|---|---|---|
| office-home | s. only | 60.38±0.5 | 60.73±0.2 | 60.22±0.3 | 59.55±0.3 | 58.92±0.4 | 60.34±0.4 | 61.87±0.3 |
| | pada | 63.08±0.3 | 59.74±0.5 | 52.72±2.8 | 62.36±0.4 | 62.00±0.5 | 63.22±0.1 | 63.72±0.3 |
| | safn | 62.09±0.2 | 61.37±0.3 | 62.03±0.4 | 62.59±0.1 | 49.30±0.7 | 62.36±0.2 | 63.30±0.2 |
| | ba3us | 68.32±1.1 | 73.36±0.6 | 62.25±7.1 | 75.37±0.8 | 65.56±7.6 | 75.19±0.4 | 75.98±0.3 |
| | ar | 65.68±0.3 | 70.58±0.4 | 64.32±0.9 | 70.25±0.2 | 70.56±0.7 | 70.34±0.2 | 72.73±0.3 |
| | jumbot | 62.89±0.2 | 74.61±0.8 | 61.28±0.1 | 72.29±0.2 | 74.95±0.1 | 75.74±0.3 | 77.15±0.4 |
| | mpot | 66.24±0.1 | 64.46±0.1 | 61.37±0.2 | 46.92±0.4 | 68.28±0.2 | 73.06±0.3 | 77.31±0.5 |
| visda | s. only | 55.15±2.4 | 55.24±3.2 | 55.07±1.2 | 55.02±2.9 | 55.72±2.2 | 58.16±0.6 | 59.48±0.4 |
| | pada | 47.48±4.8 | 32.32±4.9 | 43.43±5.3 | 56.83±1.0 | 53.15±2.9 | 54.38±2.7 | 54.57±2.6 |
| | safn | 58.20±1.7 | 42.83±6.3 | 58.62±1.3 | 44.82±8.8 | 56.89±2.1 | 59.09±2.8 | 62.64±1.5 |
| | ba3us | 55.10±3.7 | 65.58±1.4 | 58.40±1.4 | 51.07±4.3 | 64.77±1.4 | 67.44±1.2 | 67.67±1.3 |
| | ar | 66.68±1.0 | 64.27±3.6 | 67.20±1.5 | 55.69±0.9 | 70.29±1.7 | 72.60±0.8 | 73.85±0.9 |
| | jumbot | 60.63±0.7 | 62.42±2.4 | 59.86±0.6 | 77.69±4.2 | 78.34±1.9 | 83.49±1.9 | 84.01±1.9 |
| | mpot | 70.02±2.0 | 74.64±4.4 | 61.62±1.3 | 78.40±3.9 | 70.96±3.7 | 86.69±5.1 | 86.95±5.0 |

Table 6: Task accuracy averaged over seeds 2020, 2021, 2022 on Partial office-home and Partial visda for all pairs of PDA method and model selection strategy. For each method, we highlight the best and worst label-free model selection strategies in green and red, respectively.
## 5.2 Results For Model Selection Strategies

Model Selection Strategies (w/ vs w/o target labels) All average accuracies on the office-home and visda datasets can be found in Table 6. For all methods on office-home, we can see that the results for model selection strategies which do not use target labels are below the results given by oracle. For some pairs, the drop in performance can be significant, leading some methods to perform on par with the s. only method. That is the case on office-home when dev is paired with ba3us, jumbot or mpot. Even worse is mpot with snd, as the average accuracy is more than 10 percentage points below that of s. only with any model selection strategy. Overall on office-home, except for mpot, all methods when paired with either ent or snd give results that are at most 2 percentage points below those obtained when paired with oracle.

A similar situation can be seen on the visda dataset, where the accuracy without target labels can be up to 25 percentage points lower. Yet again, some model selection strategies can lead to scores even worse than s. only. That is the case for pada, safn and ba3us. Contrary to office-home, all model selection strategies without target labels lead to at least one method with results on par with or worse than the s. only method. More generally, no model selection strategy without target labels leads to scores on par with the oracle model selection strategy. Finally, pada performs worse than s. only for most model selection strategies, including the ones which use target labels. Perhaps a little surprisingly, when combined with snd it achieved a higher per-run average than with oracle, although within the standard deviation. This is also a consequence of the random seed dependence mentioned before on visda: as the hyper-parameters were chosen by performing just one run, we were simply "unlucky". In general, all of this confirms the standard assumption in the literature regarding the difficulty of the visda dataset.

These experiments also allow us to draw some conclusions regarding the robustness of methods to model selection strategies with respect to the number of hyper-parameters. We see that pada and ba3us are not robust on either one of the datasets, while safn is robust to the choice of model selection strategy on office-home. These are the methods with the fewest hyper-parameters, ruling out the idea that fewer hyper-parameters lead to more robust methods and suggesting that robustness is method- and dataset-specific. Furthermore, regarding reliability, we find the optimal transport based approaches (mpot, jumbot) to be the most sensitive to model selection strategies, as they exhibit the largest performance gaps between the best and worst model selection strategies without target labels. This shows that, while they are able to achieve SOTA results with the oracle model selection strategy, that performance is highly tied to the hyper-parameter choice.

Model Selection Strategies (w/ target labels) We recall that the oracle model selection strategy uses all the target samples to compute the accuracy, while 1-shot and 100-rnd use only subsets: 1-shot has only one sample per class, for a total of 25 and 6 on office-home and visda respectively, while 100-rnd has 100 random target samples. Our results show that using only 100 random labeled target samples is enough to reasonably approximate the target accuracy, leading to only a small accuracy drop (one percentage point in almost all cases) for both datasets.
Not surprisingly, the gap between the 1-shot and oracle model selection strategies is even bigger, leading in some instances to worse results than with a model selection strategy that uses no target labels. This poor performance of the 1-shot model selection strategy also highlights that semi-supervised domain adaptation (SSDA) methods are not a straightforward alternative to the 100-rnd model selection strategy. While one could argue that the target labels could be leveraged during training as in SSDA methods, one still needs labeled target data to perform model selection; our results suggest that we would need at least 3 samples per class for SSDA methods. In addition, knowing that we have a certain number of labeled samples per class provides information regarding which classes are target-only, one of the main assumptions in PDA. In that case, PDA methods could be tweaked accordingly. This warrants further study, which we leave as future work. Finally, we also investigated a smaller labeled target set of 50 random samples (50-rnd) instead of 100 random samples. The accuracies of methods using 50-rnd were not as good as when using 100-rnd. All results for pairs of methods and 50-rnd can be found in Appendix B. The lower performance shows that the size of the labeled target set is an important element, and we suggest using at least 100 random samples.

Model Selection Strategies (w/o target labels) Among all 49 pairs of methods and model selection strategies, only the (ba3us, ent) pair achieved an average task accuracy within 3 percentage points of its oracle counterpart (*i.e.*, (ba3us, oracle)) while improving over the s. only model. Our experiments show that there is no model selection strategy which performs well for all methods. That is why, to deploy models in a real-world scenario, we advise testing selected models on a small labeled target set (*i.e.,* 100-rnd) to assess their performance, as model selection strategies without target labels can perform poorly. Our conclusion is that model selection for PDA methods is still an open problem. We conjecture that this is also the case for standard domain adaptation, as the considered metrics were first developed for that setting. For future proposed methods, researchers should specify not only which model selection strategy should be used, but also which hyper-parameter search grid should be considered, to deploy them in a real-world scenario.

Comparison with other model selection strategy studies Our study of model selection strategies is related to those of Gulrajani & Lopez-Paz (2021), Saito et al. (2021) and Musgrave et al. (2022). In their respective studies, Gulrajani & Lopez-Paz (2021) for domain generalization and Saito et al. (2021); Musgrave et al. (2022) for unsupervised domain adaptation argue that methods remain incomplete without model selection strategies, as the latter can have a big impact on their performance. Our findings are similar to theirs, and we recommend that for each new partial domain adaptation method, its authors recommend a target label-free model selection strategy.
| Task | Method | s-acc | ent | dev | snd | 1-shot | 100-rnd | oracle |
|---|---|---|---|---|---|---|---|---|
| S2R | s. only | 46.96 ± 1.5 | 48.17 ± 3.9 | 49.00 ± 0.9 | 48.17 ± 3.9 | 49.43 ± 0.8 | 50.01 ± 1.6 | 51.86 ± 1.4 |
| | pada | 44.56 ± 5.9 | 40.83 ± 11.3 | 41.04 ± 4.3 | 56.14 ± 9.7 | 52.94 ± 4.3 | 49.34 ± 8.4 | 49.34 ± 8.4 |
| | safn | 52.04 ± 3.5 | 29.86 ± 16.7 | 52.42 ± 2.9 | 28.46 ± 16.5 | 49.97 ± 3.3 | 47.83 ± 0.6 | 56.88 ± 2.1 |
| | ba3us | 44.21 ± 3.0 | 71.17 ± 1.9 | 48.78 ± 1.9 | 46.12 ± 7.8 | 66.79 ± 1.5 | 71.45 ± 0.8 | 71.77 ± 1.1 |
| | ar | 68.39 ± 1.3 | 75.28 ± 2.9 | 68.54 ± 1.3 | 57.61 ± 0.4 | 70.11 ± 1.4 | 75.09 ± 5.2 | 76.33 ± 4.5 |
| | jumbot | 55.23 ± 2.3 | 56.25 ± 2.1 | 54.35 ± 2.0 | 75.23 ± 8.4 | 81.27 ± 6.9 | 89.94 ± 1.1 | 90.55 ± 0.5 |
| | mpot | 64.57 ± 2.9 | 82.10 ± 2.0 | 57.02 ± 1.5 | 84.45 ± 0.4 | 71.33 ± 4.4 | 87.20 ± 2.3 | 87.23 ± 2.3 |
| R2S | s. only | 63.34 ± 3.4 | 62.32 ± 2.7 | 61.13 ± 3.3 | 61.88 ± 2.3 | 62.00 ± 3.9 | 66.30 ± 2.0 | 67.11 ± 2.1 |
| | pada | 50.39 ± 3.8 | 23.80 ± 1.6 | 45.82 ± 9.2 | 57.53 ± 10.3 | 53.36 ± 1.7 | 59.43 ± 5.8 | 59.81 ± 6.2 |
| | safn | 64.37 ± 0.7 | 55.80 ± 5.2 | 64.82 ± 0.5 | 61.19 ± 3.3 | 63.82 ± 1.0 | 70.34 ± 5.8 | 68.40 ± 1.2 |
| | ba3us | 65.99 ± 4.6 | 59.99 ± 1.3 | 68.01 ± 1.9 | 56.01 ± 2.9 | 62.75 ± 2.6 | 63.44 ± 1.9 | 63.56 ± 1.8 |
| | ar | 64.97 ± 0.8 | 53.26 ± 9.7 | 65.86 ± 3.5 | 53.78 ± 2.1 | 70.46 ± 4.7 | 70.11 ± 5.0 | 71.36 ± 5.5 |
| | jumbot | 66.04 ± 1.0 | 68.59 ± 4.6 | 65.36 ± 0.8 | 80.16 ± 1.1 | 75.42 ± 4.8 | 77.03 ± 2.7 | 77.46 ± 3.3 |
| | mpot | 75.47 ± 3.8 | 67.18 ± 9.1 | 66.21 ± 1.2 | 72.36 ± 7.4 | 70.58 ± 3.1 | 86.18 ± 8.1 | 86.67 ± 7.8 |
| Avg | s. only | 55.15 ± 2.4 | 55.24 ± 3.2 | 55.07 ± 1.2 | 55.02 ± 2.9 | 55.72 ± 2.2 | 58.16 ± 0.6 | 59.48 ± 0.4 |
| | pada | 47.48 ± 4.8 | 32.32 ± 4.9 | 43.43 ± 5.3 | 56.83 ± 1.0 | 53.15 ± 2.9 | 54.38 ± 2.7 | 54.57 ± 2.6 |
| | safn | 58.20 ± 1.7 | 42.83 ± 6.3 | 58.62 ± 1.3 | 44.82 ± 8.8 | 56.89 ± 2.1 | 59.09 ± 2.8 | 62.64 ± 1.5 |
| | ba3us | 55.10 ± 3.7 | 65.58 ± 1.4 | 58.40 ± 1.4 | 51.07 ± 4.3 | 64.77 ± 1.4 | 67.44 ± 1.2 | 67.67 ± 1.3 |
| | ar | 66.68 ± 1.0 | 64.27 ± 3.6 | 67.20 ± 1.5 | 55.69 ± 0.9 | 70.29 ± 1.7 | 72.60 ± 0.8 | 73.85 ± 0.9 |
| | jumbot | 60.63 ± 0.7 | 62.42 ± 2.4 | 59.86 ± 0.6 | 77.69 ± 4.2 | 78.34 ± 1.9 | 83.49 ± 1.9 | 84.01 ± 1.9 |
| | mpot | 70.02 ± 2.0 | 74.64 ± 4.4 | 61.62 ± 1.3 | 78.40 ± 3.9 | 70.96 ± 3.7 | 86.69 ± 5.1 | 86.95 ± 5.0 |

Table 7: Accuracy of the different PDA methods under the different model selection strategies on the 2 Partial visda tasks. Averages are over three seeds (2020, 2021, 2022). For each method, we highlight the best and worst label-free model selection strategies in green and red, respectively.

## 5.3 Random Seed Dependence

Ideally, PDA methods should be robust to the choice of random seed. This is of particular importance when performing hyper-parameter tuning, since typically only one run per set of hyper-parameters is done (that was the case in our work as well). We investigate this robustness by averaging all the presented results over three different seeds (2020, 2021 and 2022) and reporting the standard deviations. This is in contrast with previous work where only a single run is reported (Fatras et al., 2021a; Gu et al., 2021). Other works (Cao et al., 2018; Xu et al., 2019; Jian et al., 2020) that report standard deviations do not specify if the random seed is different across runs. Results for all tasks on the visda dataset are in Table 7, and results on office-home are in Appendix B due to space constraints.
Our experiments show that some methods exhibit non-negligible instability under randomness, regardless of the model selection strategy. This is particularly true for ba3us when paired with dev and 1-shot as model selection strategies: there are several tasks where the standard deviation is above 10%. While in this case the instability may stem from the poor performance of the model selection strategies, it is also visible when oracle is the model selection strategy used. For instance, mpot has a standard deviation of 3.3% on the AP task of office-home, which corresponds to a variance of 11%. On visda this instability and seed dependence are even larger.

## 6 Conclusion

In this paper, we investigated how model selection strategies affect the performance of PDA methods. We performed a quantitative study with six PDA methods and seven model selection strategies on two real-world datasets. Based on our findings, we provide the following recommendations:

i) Labeled target samples should be used to test models before using them in a real-world scenario. While this breaks the main PDA assumption, it is impossible to confidently deploy PDA models selected without the use of target labels. Indeed, model selection strategies without target labels lead to a significant drop in performance in most cases in comparison to using a small validation set. We argue that the cost of labeling such a set outweighs the uncertainty of current model selection strategies. ii) The robustness of a new PDA method to randomness should be tested over at least three different seeds. We suggest using the seeds (2020, 2021, 2022) to allow for a fair comparison with our results. iii) An ablation study should be considered when a novel architecture is proposed, to quantify the associated increase in performance.

As our work focuses on a quantitative study of model selection methods and the reproducibility of state-of-the-art partial domain adaptation methods, we do not see any potential ethical concern. Future work will investigate new model selection strategies which can achieve results similar to those of model selection strategies that use labeled target samples.

## Acknowledgments

This work was partially supported by NSERC Discovery grant (RGPIN-2019-06512) and a Samsung grant. Thanks also to CIFAR for their support through the Canada CIFAR AI Chairs program. Authors thank Christos Tsirigotis and Chen Sun for early comments on the manuscript.

## References

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 214–223, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/arjovsky17a.html.

Bharath Bhushan Damodaran, Benjamin Kellenberger, Remi Flamary, Devis Tuia, and Nicolas Courty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. In *The European Conference on Computer Vision (ECCV)*, September 2018.

Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In *2017 IEEE International Conference on Computer Vision (ICCV)*, pp. 754–763, 2017. doi: 10.1109/ICCV.2017.88.

Zhangjie Cao, Lijia Ma, Mingsheng Long, and Jianmin Wang. Partial adversarial domain adaptation. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 135–150, 2018.

Laetitia Chapel, Mokhtar Z. Alaya, and Gilles Gasso.
Partial optimal transport with applications on positive-unlabeled learning. In *Advances in Neural Information Processing Systems*, 2020.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Kilian Fatras, Younes Zine, Rémi Flamary, Remi Gribonval, and Nicolas Courty. Learning with minibatch wasserstein: asymptotic and gradient properties. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 2131–2141, Online, 26–28 Aug 2020. PMLR. URL http://proceedings.mlr.press/v108/fatras20a.html.

Kilian Fatras, Thibault Sejourne, Rémi Flamary, and Nicolas Courty. Unbalanced minibatch optimal transport; applications to domain adaptation. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 3186–3197. PMLR, 18–24 Jul 2021a. URL http://proceedings.mlr.press/v139/fatras21a.html.

Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, and Nicolas Courty. Minibatch optimal transport distances; analysis and applications, 2021b.

Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. *Journal of Machine Learning Research*, 22(78):1–8, 2021. URL http://jmlr.org/papers/v22/20-451.html.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 1180–1189, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/ganin15.html.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. *The Journal of Machine Learning Research*, 17(1):2096–2030, 2016.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(25):723–773, 2012. URL http://jmlr.org/papers/v13/gretton12a.html.

Xiang Gu, Jian Sun, and Zongben Xu. Spherical space domain adaptation with robust pseudo-label loss. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2020.

Xiang Gu, Xi Yu, Yan Yang, Jian Sun, and Zongben Xu. Adversarial reweighting for partial domain adaptation. In *Thirty-Fifth Conference on Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=f5liPryFRoA.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=lQdXeXDoWtI.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016.
doi: 10.1109/CVPR.2016.90.

Liang Jian, Wang Yunbo, Hu Dapeng, He Ran, and Feng Jiashi. A balanced and uncertainty-aware approach for partial domain adaptation. In *European Conference on Computer Vision (ECCV)*, August 2020.

Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ab88b15733f543179858600245108dd8-Paper.pdf.

Pietro Morerio, Jacopo Cavazza, and Vittorio Murino. Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=rJWechg0Z.

Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. Unsupervised domain adaptation: A reality check. *arXiv preprint arXiv:2111.15672*, 2021.

Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. Benchmarking validation methods for unsupervised domain adaptation. *arXiv preprint arXiv:2208.07360*, 2022.

Khai Nguyen, Dang Nguyen, Tung Pham, and Nhat Ho. Improving mini-batch optimal transport via partial transportation. In *Proceedings of the 39th International Conference on Machine Learning*, 2022.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. *IEEE Transactions on Knowledge and Data Engineering*, 22(10):1345–1359, 2010. doi: 10.1109/TKDE.2009.191.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. *IEEE Signal Processing Magazine*, 32(3):53–69, 2015. doi: 10.1109/MSP.2014.2347059.

Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. *CoRR*, abs/1710.06924, 2017.

Gabriel Peyré and Marco Cuturi. Computational optimal transport. *Foundations and Trends in Machine Learning*, 2019.

H. Robbins and S. Monro. A stochastic approximation method. *Annals of Mathematical Statistics*, 22:400–407, 1951.

Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density. *arXiv preprint arXiv:2108.10860*, 2021.

Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. *Journal of Machine Learning Research*, 8(35):985–1005, 2007. URL http://jmlr.org/papers/v8/sugiyama07a.html.

Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *(IEEE) Conference on Computer Vision and Pattern Recognition (CVPR)*, 2017.
Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In *The IEEE International Conference on Computer Vision (ICCV)*, October 2019.

Kaichao You, Ximei Wang, Mingsheng Long, and Michael Jordan. Towards accurate model selection in deep unsupervised domain adaptation. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 7124–7133. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/you19a.html.

# A Reproducible And Realistic Evaluation Of Partial Domain Adaptation Methods Supplementary Material

Outline. The supplementary material of this paper is organized as follows:

- In Section A, we give more details on our experimental protocol.
- In Section B, we provide additional results from our experiments.

## A Additional Details On Experimental Protocol

## A.1 Implementations In BenchmarkPDA

In order to reimplement the different PDA methods, we adapted the code from the official repository associated with each of the papers. We list them in Table 8.

| Method | Code Repository |
|---|---|
| pada | https://github.com/thuml/PADA/blob/master/pytorch/src/ |
| safn | https://github.com/jihanyang/AFN/blob/master/partial/OfficeHome/SAFN/code/ |
| ba3us | https://github.com/tim-learn/BA3US/ |
| ar | https://github.com/XJTU-XGU/Adversarial-Reweighting-for-Partial-Domain-Adaptation |
| jumbot | https://github.com/kilianFatras/JUMBOT |
| mpot | https://github.com/UT-Austin-Data-Science-Group/Mini-batch-OT/tree/master/PartialDA |

Table 8: Official GitHub code repositories for the PDA methods considered in this work.

One of our main claims regarding previous work is the use of target labels to choose the best model along training. This can be easily verified by inspecting the code. For pada it can be seen on line 240 of the script "train_pada.py", for ba3us on line 116 of the script "run_partial.py", for mpot on line 164 of the file "run_mOT.py", for safn in the "eval.py" file, and finally for ar on line 149 of the script "train.py". For jumbot and mpot, which are based on optimal transport, we used the optimal transport solvers from (Flamary et al., 2021).

## A.2 Model Selection

dev requires learning a discriminative model to distinguish source samples from target samples. Its neural network architecture must be specified, as well as the training details. You et al. (2019) (dev) use a multilayer perceptron, while Saito et al. (2021) (snd) use a Support Vector Machine in their reimplementation of dev. We empirically observed the latter to yield more stable weights, so that is the one we used. In order to train the SVM discriminator, following (Saito et al., 2021), we take 3000 feature embeddings from the source samples used in training and 3000 feature embeddings from the target samples, both chosen randomly. We do an 80/20 split into training and test data. The SVM is trained with a linear kernel for a maximum of 4000 iterations. Of 5 different SVM models trained with decay values spaced evenly in log space between $10^{-2}$ and $10^4$, the one that leads to the highest accuracy (in distinguishing source from target features) on the test data split is the chosen one. As for snd, it also requires specifying a temperature for the temperature scaling component of the strategy. We used the default value of 0.05 that is suggested in (Saito et al., 2021). A sketch of the discriminator fitting is given below.
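Below is a scikit-learn sketch of this discriminator fitting, where we interpret the "decay values" as the SVM regularization parameter C; this is our reading of the procedure under stated assumptions, not the exact benchmarked code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def fit_domain_discriminator(src_feats, tgt_feats, seed=0):
    # src_feats / tgt_feats: e.g. 3000 randomly chosen feature embeddings
    # from the source training samples and from the target samples.
    X = np.concatenate([src_feats, tgt_feats])
    y = np.concatenate([np.zeros(len(src_feats)), np.ones(len(tgt_feats))])
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.2, random_state=seed)  # 80/20 split

    # 5 regularization values evenly spaced in log space between 1e-2 and
    # 1e4; keep the linear SVM (max 4000 iterations) that best separates
    # source from target on the held-out test split.
    return max(
        (LinearSVC(C=c, max_iter=4000).fit(Xtr, ytr)
         for c in np.logspace(-2, 4, 5)),
        key=lambda svm: svm.score(Xte, yte))
```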
Finally, we mention that the samples used for 100-rnd were randomly selected, and their list is made available together with the code. As for the samples used for 1-shot, they are the same as the ones used in semi-supervised domain adaptation.

## A.3 Optimizer

In general, all methods claim to adopt Nesterov's acceleration method as the optimization method, with a momentum of 0.9 and a weight decay of $5 \times 10^{-4}$. The learning rate follows the annealing strategy of Ganin et al. (2016): $\mu_p = \mu_0 (1 + \alpha p)^{-\beta}$, where $p$ is the training progress linearly changing from 0 to 1, $\mu_0 = 0.01$, $\alpha = 10$ and $\beta = 0.75$. However, inspecting the official code repository of each PDA method, the actual learning rate schedule is given by $\mu_i = \mu_0 (1 + \alpha i)^{-\beta}$, where $i$ is the iteration number in the training procedure, $\mu_0 = 0.01$, $\alpha = 0.001$ and $\beta = 0.75$. Only when the total number of iterations is 10000 do the two learning rate schedules match. In this work, we followed the latter since it is the one actually used. For office-home, all methods are trained for 5000 iterations, while for visda they are trained for 10000 iterations, with the exception of s. only, which is trained for 1000 iterations on office-home and 5000 iterations on visda.

## A.4 Hyper-Parameters

In Table 9, we report the values used for each hyper-parameter in our grid search. We report in Table 10 the hyper-parameters chosen by each model selection strategy for each method on both datasets. In addition, for the reproducibility of ar with the architecture proposed in Gu et al. (2021), a feature normalization layer is added in the bottleneck, which requires specifying $r$, the value to which the ℓ2-norm is set. This hyper-parameter is therefore included in the hyper-parameter grid search, with the possible values {5, 10, 20}, which are the different values used in the experiments of (Gu et al., 2021).

## B Additional Discussion Of Results

In this section, we provide additional results that we could not include in the main paper due to space constraints. We start by ensuring that our reimplementation of the PDA methods was done correctly by comparing our reproduced results with previously reported results in Table 11. In Table 12, we show the accuracy per task on office-home averaged over three different seeds (2020, 2021, 2022) for all pairs of methods and model selection strategies. In Table 13, we compare previously reported results with ours on visda. While the proposed methods reported results on office-home, only pada and ar results are reported in the original papers for visda. The authors of ar (Gu et al., 2021) also report results for ba3us. Analysing the results, we see a 9 percentage point decrease in average task accuracy for pada, but our experiments show that there is a significant seed dependence, which we discuss in detail below. This is particularly important since Cao et al. (2018) (pada) report results from a single run. Comparing our best seeds for pada on the SR and RS tasks, we achieve 58.01% and 67.9% accuracy versus a reported 53.53% and 76.5%. Moreover, we point out that the official code repository for pada does not include the details needed to reproduce the visda experiments, so it is possible that minor tweaks (e.g., the learning rate) are necessary. As for ba3us, our results are within the standard deviation, being better on the SR task and worse on the RS task.
Finally, as for ar, we see a decrease in performance which, as the results on office-home show, can be explained by the differences in the neural network architecture. In Table 14, we show all the average task accuracies for all pairs of methods and model selection strategies on the office-home and visda datasets, including the 50-rnd model selection strategy.

| Method | HP | Values |
|---|---|---|
| pada | λ | [0.1, 0.5, 1.0, 5.0, 10.0] |
| safn | λ | [0.005, 0.01, 0.05, 0.1, 0.5] |
| | ∆r | [0.01, 0.1, 1.0] |
| ba3us | λwce | [0.1, 0.5, 1, 5, 10] |
| | λent | [0.01, 0.05, 0.1, 0.5, 1] |
| ar | ρ0 | [2.5, 5.0, 7.5, 10.0] |
| | Aup | [5.0, 10.0] |
| | Alow | −Aup |
| | λent | [0.01, 0.1, 1.0] |
| jumbot | τ | [0.001, 0.01, 0.1] |
| | η1 | [0.00001, 0.0001, 0.001, 0.01, 0.1] |
| | η2 | [0.1, 0.5, 1.0] |
| | η3 | [5, 10, 20] |
| mpot | ϵ | [0.5, 1.0, 1.5] |
| | η1 | [0.0001, 0.001, 0.01, 0.1, 1.0] |
| | η2 | [0.1, 1.0, 5.0, 10.0] |
| | m | [0.1, 0.2, 0.3, 0.4] |

Table 9: Hyper-parameter values for each PDA method considered in the grid search.

| Method | Dataset | HP | oracle | 1-shot | 50-rnd | 100-rnd | s-acc | ent | dev | snd |
|---|---|---|---|---|---|---|---|---|---|---|
| pada | office-home | λ | 0.5 | 0.1 | 0.1 | 0.5 | 0.1 | 1.0 | 5.0 | 0.5 |
| | visda | λ | 0.5 | 1.0 | 10.0 | 0.5 | 1.0 | 0.5 | 5.0 | 0.1 |
| safn | office-home | λ | 0.005 | 0.1 | 0.005 | 0.01 | 0.005 | 0.01 | 0.005 | 0.005 |
| | | ∆r | 0.1 | 0.01 | 0.01 | 0.01 | 0.01 | 0.1 | 0.1 | 0.1 |
| | visda | λ | 0.005 | 0.005 | 0.05 | 0.05 | 0.005 | 0.05 | 0.005 | 0.05 |
| | | ∆r | 0.1 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| ba3us | office-home | λwce | 5.0 | 10.0 | 5.0 | 5.0 | 5.0 | 0.1 | 10.0 | 1.0 |
| | | λent | 0.05 | 0.05 | 0.01 | 0.05 | 0.01 | 0.1 | 0.05 | 0.01 |
| | visda | λwce | 1.0 | 1.0 | 0.1 | 1.0 | 5.0 | 1.0 | 5.0 | 5.0 |
| | | λent | 0.5 | 0.5 | 0.5 | 0.5 | 0.05 | 0.5 | 0.05 | 1.0 |
| ar | office-home | ρ0 | 2.5 | 2.5 | 5.0 | 5.0 | 2.5 | 5.0 | 7.5 | 10.0 |
| | | Aup | 5.0 | 5.0 | 10.0 | 5.0 | 5.0 | 10.0 | 10.0 | 10.0 |
| | | Alow | -5.0 | -5.0 | -10.0 | -5.0 | -5.0 | -10.0 | -10.0 | -10.0 |
| | | λent | 0.1 | 0.1 | 1.0 | 1.0 | 0.01 | 1.0 | 0.01 | 1.0 |
| | visda | ρ0 | 2.5 | 2.5 | 2.5 | 2.5 | 2.5 | 7.5 | 2.5 | 10.0 |
| | | Aup | 10.0 | 10.0 | 10.0 | 10.0 | 5.0 | 10.0 | 10.0 | 10.0 |
| | | Alow | -10.0 | -10.0 | -10.0 | -10.0 | -5.0 | -10.0 | -10.0 | -10.0 |
| | | λent | 0.1 | 0.1 | 0.1 | 0.1 | 0.01 | 0.1 | 0.01 | 0.01 |
| jumbot | office-home | τ | 0.01 | 0.01 | 0.01 | 0.001 | 0.1 | 0.01 | 0.01 | 0.001 |
| | | η1 | 0.0001 | 0.0001 | 0.001 | 0.0001 | 0.01 | 1e-05 | 0.01 | 1e-05 |
| | | η2 | 0.5 | 1.0 | 0.5 | 0.1 | 0.1 | 0.5 | 1.0 | 1.0 |
| | | η3 | 10.0 | 5.0 | 5.0 | 5.0 | 5.0 | 20.0 | 10.0 | 5.0 |
| | visda | τ | 0.01 | 0.01 | 0.01 | 0.01 | 0.001 | 0.01 | 0.001 | 0.01 |
| | | η1 | 0.001 | 0.001 | 0.001 | 0.001 | 0.01 | 1e-05 | 0.01 | 0.0001 |
| | | η2 | 1.0 | 1.0 | 0.5 | 1.0 | 0.1 | 0.5 | 1.0 | 1.0 |
| | | η3 | 5.0 | 5.0 | 5.0 | 5.0 | 10.0 | 5.0 | 20.0 | 5.0 |
| mpot | office-home | ϵ | 0.5 | 0.5 | 1.0 | 0.5 | 1.0 | 1.5 | 1.0 | 1.5 |
| | | η1 | 0.01 | 0.01 | 0.01 | 0.01 | 0.001 | 0.0001 | 1.0 | 0.01 |
| | | η2 | 10.0 | 1.0 | 1.0 | 1.0 | 1.0 | 10.0 | 0.1 | 1.0 |
| | | m | 0.3 | 0.1 | 0.1 | 0.2 | 0.3 | 0.4 | 0.2 | 0.4 |
| | visda | ϵ | 0.5 | 0.5 | 0.5 | 0.5 | 1.0 | 1.0 | 1.0 | 0.5 |
| | | η1 | 0.01 | 0.001 | 0.01 | 0.01 | 0.001 | 0.0001 | 0.0001 | 0.01 |
| | | η2 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 10.0 | 1.0 | 10.0 |
| | | m | 0.3 | 0.1 | 0.3 | 0.3 | 0.2 | 0.4 | 0.2 | 0.3 |

Table 10: Hyper-parameters selected for the different methods by each model selection strategy on both office-home and visda.

| Method | A2C | A2P | A2R | C2A | C2P | C2R | P2A | P2C | P2R | R2A | R2C | R2P | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| s. only† | 46.33 | 67.51 | 75.87 | 59.14 | 59.94 | 62.73 | 58.22 | 41.79 | 74.88 | 67.40 | 48.18 | 74.17 | 61.35 |
| s. only (Ours) | 45.43 | 68.91 | 79.53 | 55.59 | 57.42 | 65.23 | 59.32 | 40.80 | 75.80 | 69.88 | 47.20 | 77.31 | 61.87 |
| pada† | 51.95 | 67.00 | 78.74 | 52.16 | 53.78 | 59.03 | 52.61 | 43.22 | 78.79 | 73.73 | 56.60 | 77.09 | 62.06 |
| pada (Ours) | 50.53 | 67.45 | 80.14 | 57.30 | 54.47 | 64.55 | 61.07 | 40.94 | 79.55 | 73.09 | 54.63 | 80.93 | 63.72 |
| safn†∗ | 58.93 | 76.25 | 81.42 | 70.43 | 72.97 | 77.78 | 72.36 | 55.34 | 80.40 | 75.81 | 60.42 | 79.92 | 71.84 |
| safn∗ (Ours) | 59.98 | 79.85 | 85.18 | 72.02 | 73.73 | 78.54 | 76.09 | 59.32 | 83.25 | 80.04 | 64.20 | 84.44 | 74.72 |
| safn (Ours) | 49.57 | 68.55 | 78.26 | 57.91 | 59.29 | 66.81 | 59.87 | 45.29 | 75.98 | 69.08 | 51.68 | 77.29 | 63.30 |
| ba3us† | 60.62 | 83.16 | 88.39 | 71.75 | 72.79 | 83.40 | 75.45 | 61.59 | 86.53 | 79.25 | 62.80 | 86.05 | 75.98 |
| ba3us (Ours) | 63.26 | 82.75 | 89.16 | 69.91 | 71.93 | 77.58 | 75.73 | 59.94 | 86.89 | 80.93 | 66.77 | 86.93 | 75.98 |
| ar†∗ | 62.13 | 79.22 | 89.12 | 73.92 | 75.57 | 84.37 | 78.42 | 61.91 | 87.85 | 82.19 | 65.37 | 85.27 | 77.11 |
| ar∗ (Ours) | 62.75 | 81.55 | 89.07 | 71.63 | 73.41 | 82.94 | 75.88 | 61.03 | 85.70 | 79.86 | 62.93 | 85.30 | 76.00 |
| ar (Ours) | 57.33 | 79.61 | 86.31 | 69.45 | 71.88 | 79.94 | 70.28 | 53.57 | 83.78 | 77.26 | 59.68 | 83.72 | 72.73 |
| jumbot† | 62.70 | 77.50 | 84.40 | 76.00 | 73.30 | 80.50 | 74.70 | 60.80 | 85.10 | 80.20 | 66.50 | 83.90 | 75.47 |
| jumbot (Ours) | 61.87 | 78.19 | 88.11 | 77.69 | 76.75 | 84.15 | 76.83 | 63.72 | 84.80 | 81.79 | 64.70 | 87.17 | 77.15 |
| mpot† | 64.60 | 80.62 | 87.17 | 76.43 | 77.61 | 83.58 | 77.07 | 63.74 | 87.63 | 81.42 | 68.50 | 87.38 | 77.98 |
| mpot (Ours) | 64.48 | 80.88 | 86.78 | 76.22 | 77.95 | 82.59 | 75.18 | 64.60 | 84.87 | 80.59 | 67.04 | 86.52 | 77.31 |

Table 11: Comparison between reported (†) accuracies on partial office-home from published methods and our implementation using the oracle model selection strategy. ∗ denotes different bottleneck architectures.
| Task | Method | s-acc | ent | dev | snd | 1-shot | 100-rnd | oracle |
|------|---------|-------|-----|-----|-----|--------|---------|--------|
| AC | s. only | 44.50 ± 1.7 | 45.27 ± 1.1 | 43.74 ± 1.8 | 42.23 ± 1.3 | 43.84 ± 1.7 | 43.28 ± 1.6 | 45.43 ± 0.9 |
| | pada | 50.15 ± 2.8 | 46.03 ± 2.9 | 44.70 ± 1.3 | 50.43 ± 0.8 | 52.98 ± 0.2 | 50.41 ± 0.8 | 50.53 ± 0.7 |
| | safn | 47.36 ± 0.1 | 47.08 ± 2.0 | 48.12 ± 0.4 | 49.57 ± 0.3 | 31.40 ± 3.7 | 47.58 ± 0.8 | 49.57 ± 0.3 |
| | ba3us | 54.89 ± 4.7 | 59.26 ± 0.9 | 41.67 ± 18.9 | 62.21 ± 0.9 | 44.60 ± 21.0 | 62.53 ± 2.0 | 63.26 ± 1.0 |
| | ar | 51.12 ± 1.2 | 54.91 ± 1.8 | 49.25 ± 2.8 | 54.37 ± 1.6 | 56.00 ± 2.3 | 54.89 ± 2.0 | 57.33 ± 1.7 |
| | jumbot | 49.07 ± 0.2 | 57.69 ± 5.6 | 46.11 ± 0.1 | 56.60 ± 2.8 | 61.59 ± 1.7 | 61.07 ± 0.9 | 61.87 ± 1.4 |
| | mpot | 53.07 ± 0.3 | 52.94 ± 2.0 | 46.07 ± 0.7 | 32.96 ± 0.4 | 53.97 ± 1.3 | 61.59 ± 1.2 | 64.48 ± 1.2 |
| AP | s. only | 67.71 ± 2.4 | 68.91 ± 1.4 | 67.81 ± 1.2 | 68.91 ± 1.4 | 66.52 ± 3.1 | 68.76 ± 1.6 | 68.91 ± 1.4 |
| | pada | 66.93 ± 1.2 | 62.09 ± 2.8 | 61.61 ± 5.4 | 66.72 ± 1.5 | 63.03 ± 1.6 | 67.21 ± 1.8 | 67.45 ± 1.6 |
| | safn | 66.82 ± 1.9 | 66.83 ± 0.5 | 67.30 ± 0.5 | 68.18 ± 1.3 | 49.73 ± 4.3 | 67.53 ± 0.8 | 68.55 ± 1.0 |
| | ba3us | 71.34 ± 0.8 | 76.38 ± 1.5 | 50.05 ± 28.7 | 83.29 ± 0.4 | 51.39 ± 29.8 | 82.09 ± 0.8 | 82.75 ± 0.9 |
| | ar | 72.79 ± 0.7 | 78.45 ± 1.8 | 70.20 ± 1.7 | 79.01 ± 2.2 | 78.58 ± 1.9 | 78.54 ± 1.4 | 79.61 ± 1.6 |
| | jumbot | 65.45 ± 0.4 | 75.44 ± 1.4 | 66.33 ± 0.6 | 68.48 ± 1.5 | 76.86 ± 3.4 | 77.87 ± 1.4 | 78.19 ± 2.4 |
| | mpot | 72.61 ± 1.2 | 68.94 ± 1.2 | 65.43 ± 0.8 | 49.73 ± 1.1 | 68.78 ± 1.7 | 75.56 ± 1.7 | 80.88 ± 3.3 |
| AR | s. only | 78.37 ± 0.3 | 79.26 ± 0.7 | 78.28 ± 0.7 | 79.35 ± 0.6 | 77.38 ± 0.9 | 77.97 ± 1.2 | 79.53 ± 0.3 |
| | pada | 76.73 ± 1.7 | 76.05 ± 1.4 | 68.99 ± 11.3 | 79.72 ± 1.8 | 78.06 ± 2.6 | 79.97 ± 1.5 | 80.14 ± 1.4 |
| | safn | 77.62 ± 0.2 | 77.73 ± 0.2 | 77.43 ± 0.5 | 77.86 ± 0.5 | 62.82 ± 2.0 | 77.91 ± 0.4 | 78.26 ± 0.2 |
| | ba3us | 81.91 ± 3.9 | 86.03 ± 0.6 | 63.74 ± 26.1 | 88.50 ± 0.6 | 65.47 ± 27.2 | 88.28 ± 0.4 | 89.16 ± 0.2 |
| | ar | 77.91 ± 0.2 | 84.23 ± 0.9 | 79.73 ± 2.5 | 84.54 ± 0.8 | 82.77 ± 2.0 | 84.34 ± 0.6 | 86.31 ± 0.4 |
| | jumbot | 77.14 ± 0.3 | 85.24 ± 2.7 | 76.42 ± 0.3 | 84.70 ± 2.1 | 86.45 ± 2.1 | 86.01 ± 1.3 | 88.11 ± 1.5 |
| | mpot | 78.50 ± 0.7 | 75.98 ± 0.6 | 76.46 ± 0.4 | 57.39 ± 1.4 | 78.04 ± 2.1 | 82.59 ± 0.6 | 86.78 ± 0.5 |
| CA | s. only | 52.56 ± 0.9 | 54.21 ± 2.1 | 51.42 ± 2.7 | 51.76 ± 3.7 | 50.47 ± 2.4 | 53.75 ± 1.1 | 55.59 ± 0.7 |
| | pada | 58.00 ± 1.4 | 55.07 ± 2.7 | 35.08 ± 13.1 | 57.30 ± 1.9 | 51.67 ± 5.0 | 56.69 ± 1.5 | 57.30 ± 1.9 |
| | safn | 57.85 ± 0.6 | 56.54 ± 2.2 | 56.75 ± 0.3 | 57.91 ± 0.3 | 48.88 ± 2.4 | 56.47 ± 1.0 | 57.91 ± 0.3 |
| | ba3us | 61.68 ± 5.2 | 68.96 ± 1.8 | 60.70 ± 2.2 | 68.50 ± 0.9 | 65.63 ± 1.4 | 69.15 ± 1.2 | 69.91 ± 0.2 |
| | ar | 63.21 ± 1.5 | 64.86 ± 2.3 | 62.72 ± 1.0 | 64.52 ± 1.6 | 68.99 ± 0.2 | 64.95 ± 2.4 | 69.45 ± 0.5 |
| | jumbot | 60.09 ± 0.1 | 75.97 ± 1.4 | 56.81 ± 0.1 | 71.81 ± 1.8 | 74.20 ± 0.9 | 74.56 ± 0.4 | 77.69 ± 0.1 |
| | mpot | 61.92 ± 0.5 | 60.58 ± 0.8 | 56.44 ± 1.0 | 44.11 ± 2.4 | 69.24 ± 0.4 | 72.48 ± 1.0 | 76.22 ± 0.1 |
| CP | s. only | 54.81 ± 0.1 | 55.52 ± 0.6 | 54.55 ± 1.2 | 53.48 ± 2.1 | 53.24 ± 2.0 | 55.57 ± 2.2 | 57.42 ± 1.2 |
| | pada | 56.13 ± 1.4 | 47.28 ± 0.1 | 24.24 ± 20.5 | 52.10 ± 1.7 | 56.28 ± 0.4 | 53.86 ± 1.6 | 54.47 ± 1.7 |
| | safn | 57.89 ± 0.7 | 59.07 ± 0.7 | 58.17 ± 1.2 | 58.17 ± 1.2 | 45.27 ± 0.7 | 58.19 ± 0.4 | 59.29 ± 0.5 |
| | ba3us | 67.13 ± 3.9 | 71.07 ± 0.8 | 59.08 ± 10.9 | 71.45 ± 3.6 | 59.78 ± 15.3 | 71.65 ± 1.5 | 71.93 ± 1.6 |
| | ar | 60.54 ± 4.0 | 68.16 ± 3.5 | 61.85 ± 4.6 | 68.05 ± 3.2 | 68.35 ± 1.9 | 69.00 ± 3.7 | 71.88 ± 0.9 |
| | jumbot | 59.59 ± 1.3 | 74.85 ± 3.3 | 56.36 ± 0.5 | 71.84 ± 1.6 | 73.43 ± 3.3 | 76.40 ± 1.4 | 76.75 ± 0.8 |
| | mpot | 64.16 ± 1.8 | 65.99 ± 2.2 | 57.95 ± 1.0 | 38.66 ± 1.2 | 65.88 ± 0.5 | 69.77 ± 0.9 | 77.95 ± 1.3 |
| CR | s. only | 62.88 ± 0.9 | 63.19 ± 0.3 | 63.94 ± 1.7 | 63.94 ± 1.7 | 61.77 ± 1.1 | 63.94 ± 0.4 | 65.23 ± 0.8 |
| | pada | 66.45 ± 0.8 | 60.92 ± 2.4 | 61.66 ± 2.4 | 63.11 ± 1.9 | 64.00 ± 1.4 | 63.94 ± 1.3 | 64.55 ± 1.1 |
| | safn | 66.92 ± 0.9 | 66.22 ± 0.5 | 65.64 ± 1.3 | 66.13 ± 1.0 | 57.26 ± 2.2 | 65.88 ± 0.2 | 66.81 ± 0.5 |
| | ba3us | 72.96 ± 1.0 | 76.22 ± 1.2 | 67.88 ± 0.9 | 76.96 ± 0.6 | 68.49 ± 1.3 | 77.21 ± 0.6 | 77.58 ± 0.9 |
| | ar | 72.76 ± 0.9 | 80.45 ± 0.8 | 70.86 ± 5.6 | 79.16 ± 2.8 | 77.25 ± 1.4 | 79.57 ± 0.2 | 79.94 ± 0.8 |
| | jumbot | 66.67 ± 1.3 | 79.75 ± 1.2 | 66.70 ± 0.8 | 80.91 ± 0.9 | 79.85 ± 0.3 | 81.54 ± 1.7 | 84.15 ± 1.3 |
| | mpot | 70.22 ± 0.2 | 71.51 ± 0.8 | 66.35 ± 1.0 | 50.06 ± 1.0 | 71.42 ± 0.7 | 75.41 ± 0.5 | 82.59 ± 0.7 |
| PA | s. only | 58.77 ± 0.5 | 56.96 ± 1.5 | 57.94 ± 0.9 | 55.37 ± 0.6 | 56.11 ± 1.7 | 58.37 ± 0.4 | 59.32 ± 0.7 |
| | pada | 60.33 ± 2.1 | 56.69 ± 2.8 | 57.91 ± 1.7 | 60.82 ± 3.0 | 58.92 ± 3.3 | 60.27 ± 2.7 | 61.07 ± 3.0 |
| | safn | 58.80 ± 0.7 | 56.75 ± 2.1 | 59.08 ± 0.5 | 59.14 ± 0.8 | 42.33 ± 1.6 | 59.69 ± 0.1 | 59.87 ± 0.7 |
| | ba3us | 68.90 ± 5.0 | 73.16 ± 0.6 | 64.62 ± 1.6 | 76.19 ± 1.2 | 68.38 ± 1.7 | 75.15 ± 1.3 | 75.73 ± 1.3 |
| | ar | 63.39 ± 3.1 | 67.58 ± 0.4 | 61.65 ± 1.0 | 65.60 ± 1.7 | 69.67 ± 1.5 | 66.73 ± 0.3 | 70.28 ± 1.0 |
| | jumbot | 60.24 ± 1.0 | 72.85 ± 2.4 | 58.03 ± 1.1 | 70.28 ± 0.8 | 74.96 ± 3.4 | 72.60 ± 1.2 | 76.83 ± 1.9 |
| | mpot | 64.13 ± 0.9 | 58.28 ± 0.9 | 57.64 ± 0.8 | 43.74 ± 4.3 | 70.31 ± 1.0 | 72.64 ± 0.9 | 75.18 ± 1.3 |
| PC | s. only | 39.28 ± 0.8 | 38.75 ± 0.6 | 39.40 ± 0.9 | 37.35 ± 1.0 | 37.35 ± 1.0 | 39.12 ± 0.4 | 40.80 ± 0.9 |
| | pada | 43.50 ± 1.2 | 38.43 ± 3.0 | 38.03 ± 0.6 | 39.26 ± 2.0 | 43.62 ± 1.0 | 40.56 ± 1.8 | 40.94 ± 1.6 |
| | safn | 42.49 ± 0.6 | 39.58 ± 2.0 | 43.00 ± 1.1 | 43.90 ± 0.5 | 29.77 ± 2.6 | 43.14 ± 1.7 | 45.29 ± 0.7 |
| | ba3us | 55.92 ± 1.3 | 57.91 ± 2.5 | 56.74 ± 1.3 | 59.94 ± 1.7 | 57.83 ± 1.3 | 58.17 ± 1.0 | 59.94 ± 0.7 |
| | ar | 48.36 ± 1.7 | 52.34 ± 1.0 | 43.72 ± 0.7 | 51.28 ± 1.6 | 51.98 ± 1.8 | 50.85 ± 1.4 | 53.57 ± 0.2 |
| | jumbot | 43.60 ± 0.0 | 60.18 ± 0.9 | 41.99 ± 0.8 | 50.69 ± 4.9 | 62.87 ± 0.6 | 59.92 ± 0.4 | 63.72 ± 0.5 |
| | mpot | 50.87 ± 1.1 | 49.87 ± 2.6 | 43.60 ± 0.6 | 28.66 ± 2.6 | 53.03 ± 0.7 | 57.67 ± 1.6 | 64.60 ± 0.0 |
| PR | s. only | 75.08 ± 0.5 | 75.65 ± 1.3 | 74.91 ± 0.6 | 74.10 ± 2.8 | 71.97 ± 1.8 | 75.56 ± 1.3 | 75.80 ± 1.2 |
| | pada | 76.70 ± 0.4 | 77.08 ± 0.2 | 73.11 ± 3.4 | 79.33 ± 1.3 | 74.27 ± 4.1 | 78.91 ± 1.8 | 79.55 ± 1.4 |
| | safn | 75.46 ± 0.4 | 73.90 ± 0.9 | 74.64 ± 0.4 | 75.81 ± 0.7 | 63.52 ± 3.2 | 75.00 ± 0.7 | 75.98 ± 0.6 |
| | ba3us | 79.13 ± 4.7 | 85.59 ± 1.2 | 75.21 ± 0.6 | 86.31 ± 1.4 | 82.05 ± 1.0 | 85.92 ± 1.3 | 86.89 ± 0.5 |
| | ar | 78.02 ± 1.7 | 82.48 ± 1.9 | 76.29 ± 0.7 | 83.05 ± 1.1 | 78.72 ± 1.0 | 82.39 ± 1.9 | 83.78 ± 1.0 |
| | jumbot | 74.43 ± 0.9 | 83.21 ± 1.1 | 74.97 ± 0.5 | 83.89 ± 1.5 | 81.83 ± 0.9 | 84.63 ± 2.3 | 84.80 ± 1.3 |
| | mpot | 77.40 ± 0.1 | 73.77 ± 1.3 | 74.86 ± 1.3 | 58.40 ± 1.9 | 76.88 ± 1.3 | 82.02 ± 0.6 | 84.87 ± 1.4 |
| RA | s. only | 68.90 ± 0.6 | 69.24 ± 1.0 | 69.27 ± 1.0 | 68.53 ± 1.4 | 68.96 ± 0.5 | 69.02 ± 0.5 | 69.88 ± 0.9 |
| | pada | 69.27 ± 3.5 | 69.48 ± 1.3 | 66.33 ± 0.7 | 73.09 ± 1.5 | 68.26 ± 3.1 | 72.70 ± 1.4 | 73.09 ± 1.5 |
| | safn | 67.92 ± 0.0 | 67.80 ± 0.2 | 68.11 ± 0.9 | 68.17 ± 1.6 | 56.11 ± 3.2 | 69.64 ± 1.0 | 69.08 ± 0.6 |
| | ba3us | 72.27 ± 3.5 | 78.11 ± 1.4 | 70.92 ± 2.0 | 79.46 ± 1.4 | 80.78 ± 1.1 | 79.86 ± 2.1 | 80.93 ± 0.8 |
| | ar | 70.00 ± 1.1 | 74.75 ± 2.1 | 70.31 ± 1.7 | 75.02 ± 1.6 | 76.19 ± 0.7 | 74.66 ± 2.3 | 77.26 ± 0.6 |
| | jumbot | 70.19 ± 0.5 | 81.97 ± 1.0 | 67.43 ± 0.3 | 81.21 ± 0.6 | 78.48 ± 2.0 | 81.85 ± 1.7 | 81.79 ± 0.8 |
| | mpot | 70.40 ± 0.6 | 64.98 ± 0.4 | 67.68 ± 0.5 | 56.90 ± 1.9 | 76.52 ± 0.4 | 79.80 ± 0.5 | 80.59 ± 0.6 |
| RC | s. only | 45.33 ± 1.0 | 45.31 ± 1.0 | 45.33 ± 1.0 | 43.78 ± 0.6 | 46.13 ± 2.0 | 43.46 ± 0.2 | 47.20 ± 0.9 |
| | pada | 53.93 ± 1.3 | 49.73 ± 3.5 | 29.97 ± 21.0 | 45.77 ± 1.6 | 54.25 ± 1.6 | 53.39 ± 2.2 | 54.63 ± 0.9 |
| | safn | 49.73 ± 0.1 | 48.76 ± 0.1 | 50.53 ± 0.6 | 49.59 ± 1.6 | 37.55 ± 0.8 | 50.85 ± 0.3 | 51.68 ± 0.8 |
| | ba3us | 51.84 ± 0.5 | 62.85 ± 2.7 | 58.39 ± 2.3 | 65.35 ± 1.9 | 63.10 ± 0.8 | 66.57 ± 1.5 | 66.77 ± 1.5 |
| | ar | 52.52 ± 1.0 | 55.64 ± 1.2 | 49.61 ± 0.8 | 55.02 ± 1.8 | 55.48 ± 2.1 | 55.42 ± 1.6 | 59.68 ± 1.1 |
| | jumbot | 51.12 ± 1.1 | 61.81 ± 4.6 | 48.12 ± 0.5 | 58.85 ± 1.7 | 61.59 ± 2.2 | 64.84 ± 1.0 | 64.70 ± 1.1 |
| | mpot | 53.99 ± 1.5 | 57.53 ± 0.6 | 48.12 ± 0.8 | 39.34 ± 1.2 | 57.39 ± 1.7 | 64.64 ± 0.1 | 67.04 ± 0.6 |
| RP | s. only | 76.34 ± 0.7 | 76.47 ± 0.8 | 75.99 ± 1.3 | 75.84 ± 1.6 | 73.33 ± 2.2 | 75.28 ± 2.3 | 77.31 ± 0.1 |
| | pada | 78.88 ± 0.8 | 78.00 ± 1.7 | 71.07 ± 11.3 | 80.62 ± 0.4 | 78.62 ± 0.4 | 80.73 ± 0.9 | 80.93 ± 0.6 |
| | safn | 76.23 ± 0.8 | 76.23 ± 0.7 | 75.65 ± 0.5 | 76.64 ± 0.5 | 67.00 ± 1.4 | 76.41 ± 0.8 | 77.29 ± 0.5 |
| | ba3us | 81.85 ± 4.1 | 84.84 ± 0.6 | 78.06 ± 1.3 | 86.35 ± 0.9 | 79.20 ± 1.1 | 85.66 ± 1.0 | 86.93 ± 0.2 |
| | ar | 77.55 ± 2.6 | 83.06 ± 1.2 | 75.61 ± 0.4 | 83.40 ± 0.9 | 82.73 ± 1.0 | 82.80 ± 0.4 | 83.72 ± 0.6 |
| | jumbot | 77.12 ± 1.3 | 86.33 ± 1.6 | 76.04 ± 0.1 | 88.18 ± 0.4 | 87.34 ± 0.2 | 87.64 ± 0.7 | 87.17 ± 1.7 |
| | mpot | 77.61 ± 0.3 | 73.17 ± 2.7 | 75.89 ± 0.4 | 63.14 ± 0.6 | 77.95 ± 1.4 | 82.60 ± 0.5 | 86.52 ± 1.2 |
| Avg | s. only | 60.38 ± 0.5 | 60.73 ± 0.2 | 60.22 ± 0.3 | 59.55 ± 0.3 | 58.92 ± 0.4 | 60.34 ± 0.4 | 61.87 ± 0.3 |
| | pada | 63.08 ± 0.3 | 59.74 ± 0.5 | 52.72 ± 2.8 | 62.36 ± 0.4 | 62.00 ± 0.5 | 63.22 ± 0.1 | 63.72 ± 0.3 |
| | safn | 62.09 ± 0.2 | 61.37 ± 0.3 | 62.03 ± 0.4 | 62.59 ± 0.1 | 49.30 ± 0.7 | 62.36 ± 0.2 | 63.30 ± 0.2 |
| | ba3us | 68.32 ± 1.1 | 73.36 ± 0.6 | 62.25 ± 7.1 | 75.37 ± 0.8 | 65.56 ± 7.6 | 75.19 ± 0.4 | 75.98 ± 0.3 |
| | ar | 65.68 ± 0.3 | 70.58 ± 0.4 | 64.32 ± 0.9 | 70.25 ± 0.2 | 70.56 ± 0.7 | 70.34 ± 0.2 | 72.73 ± 0.3 |
| | jumbot | 62.89 ± 0.2 | 74.61 ± 0.8 | 61.28 ± 0.1 | 72.29 ± 0.2 | 74.95 ± 0.1 | 75.74 ± 0.3 | 77.15 ± 0.4 |
| | mpot | 66.24 ± 0.1 | 64.46 ± 0.1 | 61.37 ± 0.2 | 46.92 ± 0.4 | 68.28 ± 0.2 | 73.06 ± 0.3 | 77.31 ± 0.5 |

Table 12: Average accuracy of different PDA methods based on different model selection strategies on the 12 tasks of Partial office-home. Average is done over three seeds (2020, 2021, 2022).
For each method, we highlight the best and worst label-free model selection strategies in green and red, respectively.

| Algorithm | S2R | R2S | Avg |
|----------------|-------|-------|-------|
| s. only† | 45.26 | 64.28 | 54.77 |
| s. only (Ours) | 51.86 | 67.11 | 59.48 |
| pada† | 53.53 | 76.50 | 65.02 |
| pada (Ours) | 49.34 | 59.81 | 54.57 |
| safn† | 67.65 | - | - |
| safn (Ours) | 56.88 | 68.40 | 62.64 |
| ba3us† | 69.86 | 67.56 | 68.71 |
| ba3us (Ours) | 71.77 | 63.56 | 67.67 |
| ar†∗ | 85.30 | 74.82 | 80.06 |
| ar (Ours) | 76.33 | 71.36 | 73.85 |
| jumbot† | - | - | - |
| jumbot (Ours) | 90.55 | 77.46 | 84.01 |
| mpot† | - | - | - |
| mpot (Ours) | 87.23 | 86.67 | 86.95 |

Table 13: Comparison between reported (†) accuracies on partial visda from published methods with our implementation using the oracle model selection strategy. * denotes different bottleneck architectures.

| Dataset | Method | s-acc | ent | dev | snd | 1-shot | 50-rnd | 100-rnd | oracle |
|-------------|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| office-home | s. only | 60.38±0.5 | 60.73±0.2 | 60.22±0.3 | 59.55±0.3 | 58.92±0.4 | 60.28±0.4 | 60.34±0.4 | 61.87±0.3 |
| | pada | 63.08±0.3 | 59.74±0.5 | 52.72±2.8 | 62.36±0.4 | 62.00±0.5 | 63.82±0.4 | 63.22±0.1 | 63.72±0.3 |
| | safn | 62.09±0.2 | 61.37±0.3 | 62.03±0.4 | 62.59±0.1 | 49.30±0.7 | 62.00±0.2 | 62.36±0.2 | 63.30±0.2 |
| | ba3us | 68.32±1.1 | 73.36±0.6 | 62.25±7.1 | 75.37±0.8 | 65.56±7.6 | 73.22±0.3 | 75.19±0.4 | 75.98±0.3 |
| | ar | 65.68±0.3 | 70.58±0.4 | 64.32±0.9 | 70.25±0.2 | 70.56±0.7 | 70.26±0.2 | 70.34±0.2 | 72.73±0.3 |
| | jumbot | 62.89±0.2 | 74.61±0.8 | 61.28±0.1 | 72.29±0.2 | 74.95±0.1 | 64.95±0.3 | 75.74±0.3 | 77.15±0.4 |
| | mpot | 66.24±0.1 | 64.46±0.1 | 61.37±0.2 | 46.92±0.4 | 68.28±0.2 | 69.90±0.5 | 73.06±0.3 | 77.31±0.5 |
| visda | s. only | 55.15±2.4 | 55.24±3.2 | 55.07±1.2 | 55.02±2.9 | 55.72±2.2 | 57.90±1.1 | 58.16±0.6 | 59.48±0.4 |
| | pada | 47.48±4.8 | 32.32±4.9 | 43.43±5.3 | 56.83±1.0 | 53.15±2.9 | 55.67±2.5 | 54.38±2.7 | 54.57±2.6 |
| | safn | 58.20±1.7 | 42.83±6.3 | 58.62±1.3 | 44.82±8.8 | 56.89±2.1 | 57.90±3.3 | 59.09±2.8 | 62.64±1.5 |
| | ba3us | 55.10±3.7 | 65.58±1.4 | 58.40±1.4 | 51.07±4.3 | 64.77±1.4 | 66.66±2.4 | 67.44±1.2 | 67.67±1.3 |
| | ar | 66.68±1.0 | 64.27±3.6 | 67.20±1.5 | 55.69±0.9 | 70.29±1.7 | 71.91±0.3 | 72.60±0.8 | 73.85±0.9 |
| | jumbot | 60.63±0.7 | 62.42±2.4 | 59.86±0.6 | 77.69±4.2 | 78.34±1.9 | 82.85±2.9 | 83.49±1.9 | 84.01±1.9 |
| | mpot | 70.02±2.0 | 74.64±4.4 | 61.62±1.3 | 78.40±3.9 | 70.96±3.7 | 86.65±5.1 | 86.69±5.1 | 86.95±5.0 |

Table 14: Task accuracy average over seeds 2020, 2021, 2022 on Partial office-home and Partial visda for the PDA methods and model selection strategies. For each method, we highlight the best and worst label-free model selection strategies in green and red, respectively.
Review 1:

Summary: This paper provides a comprehensive investigation of existing partial domain adaptation methods with various model selection strategies. It is interesting to learn that existing PDA methods fail in the label-free UDA setting, while having 100 labelled test samples can ensure effective PDA.

Strengths and Weaknesses: There are some minor issues that should be addressed for better readability and soundness.
* It is misleading to claim that PDA methods decrease by up to 30% according to Table 1. Please revise. Also in Table 1, besides reporting the worst and best results over different random seeds, please also report the average results with variances.
* In the first paragraph, domain shift is clearly not limited to different backgrounds or colours. Therefore, 'such shift is referred to as domain shift in the literature' is not accurate.
* At the end of the first paragraph, 'outlier source only label' may not be accurate. I understand it is because source-only samples may be regarded as outliers w.r.t. the target distribution. Please rewrite it to avoid confusion, as an outlier is normally defined w.r.t. the training set.
* In the Table 2 row Architecture, the caption should clarify what the symbol between networks means. A table should explain itself.

Requested Changes: See the weaknesses.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This work studies an important topic of model selection in Partial Domain Adaptation. In the current practice, there is no single established model selection technique and therefore each method adopts its own technique (often using target labels, which violates the label-free assumption in Unsupervised Domain Adaptation). The authors elucidate this challenge and develop a benchmarking framework - BenchmarkPDA - with a focus on reproducibility and fair evaluation under a consistent setting. A rigorous analysis of several label-based (1-shot, 50-RND, 100-RND, ORACLE) and label-free (S-ACC, ENT, DEV, SND) model selection techniques is presented across various Partial DA methods, hyperparameters and seeds. The key insight is that each DA method underperforms using label-free model selection techniques, and labeled target data is necessary to select a model close to the oracle performance.

Strengths and Weaknesses:

Strengths:
* Readability: The manuscript is a good read - the writing is simple and clear and describes a spectrum of the latest works in Partial DA and model selection strategies in Sec. 2 & 3.
* Significance: This study is of practical significance since model selection in Unsupervised DA is an important problem, and not commonly addressed by prior works. This work will motivate the study of more accurate model selection strategies that do not use expensive target annotations.
* Reproducibility: The BenchmarkPDA codebase is very useful for future research in Partial DA and encourages fair evaluation and reproducibility.
* Insights: Currently, the main goal is to reimplement, benchmark, and provide empirical results on the stability and reproducibility of the existing Partial DA methods using various model selection techniques. Though this has limited contributions in terms of algorithmic novelty, it is an important investigation which I believe has value to the progress of the field.

Weaknesses & scope for improvement:
* Motivation for PDA: The motivation to restrict the study to only Partial DA methods is unclear.
As such, the presented hypotheses could very well be applicable to other scenarios such as Open-set [P1] or Universal [P2] DA. Could the authors clarify why Partial DA was specifically chosen?
* Missing meta-insights: It would be great to understand if certain methods are more robust than others, and why. Here are some interesting questions which could lead to useful insights: "What contributes to the stability of a method - is it due to fewer hyperparameters, or data-augmentation, or is it simply dataset and task dependent?", "Are there certain kinds of Partial DA approaches that are more reliable?", "Can we calibrate the trade-off between the amount of labeled examples required, and the reliability of model selection?", "What should a researcher keep in mind while developing a new model selection technique?". Building a narrative that answers meta-questions like these could strengthen the work.
* Minor comments:
1) The second bullet point on Page 2 ("Only 1 pair…") is not revisited later in the paper (did I miss it?). Also, point (iii) in the Conclusion ("An ablation study…") is a generic recommendation which is not talked about in the main text. The manuscript can be improved by providing a more comprehensive discussion of these points.
2) Please fix typos:
- Table 7 best and worst models are incorrectly highlighted: For S2R method SAFN, SND produces the worst model (not ENT), and for the method AR, ENT selects the best model.
- Supplementary Table 6: In the Office-Home dataset, for JUMBOT, 50-RND selects worse models (10% worse) than 1-SHOT and 100-RND, which looks like a typo. Please double check the same.
3) "7 different state-of-the-art Partial DA algorithms": In my opinion, the source-only model should not be considered a partial DA algorithm. Thus there are six PDA algorithms that are studied in this work (PADA, SAFN, BA3US, AR, JUMBOT, MPOT). I would suggest the authors make appropriate changes to reflect the same.

References:
[P1] Cao et al., "Separate to Adapt: Open Set Domain Adaptation via Progressive Separation", CVPR 2019
[P2] Yu et al., "Universal Domain Adaptation", CVPR 2019

Requested Changes: Overall, I found the paper interesting and addressing an important topic. This work can be further improved (please refer to the points in the Weaknesses section).

Broader Impact Concerns: This work investigates the reproducibility and stability of the existing partial domain adaptation methods and therefore does not impose any significant ethical concerns.

==================================================

Review 3:

Summary: This paper performs a thorough experimental evaluation of methods for Partial Domain Adaptation (PDA) (a variant of Domain Adaptation (DA) where the target domain contains a strict subset of the classes appearing in the source domain, making the problem harder compared to standard DA). In particular, their evaluation is realistic in the sense that it relaxes the optimistic assumption made in previous work that a labeled set of examples is available from the target dataset for model selection / validation (as this assumption violates the definition of DA's problem setup, which assumes only unlabeled examples from the target domain). They consider cases where a small(er) number of labeled examples are available from the target domain for validation (which might be viable in practice), as well as several more 'legal' variants for model selection without a labeled validation set that were proposed in some previous work.
Their thorough analysis surfaces interesting findings about the generalizability of model selection approaches, the degree of optimism that results from using an 'oracle' for model selection (that uses labeled target examples), the reliance of results on the random seed, and other findings in the context of the specific PDA methods evaluated and their reliance on hyperparameters like architectural choices. Finally, the authors developed and released a framework in pytorch, BenchmarkPDA, for facilitating research in this direction.

Strengths and Weaknesses:

Strengths
- The paper studies an important topic which often receives too little attention in papers presenting new methods for problem settings that involve generalization outside of the training distribution without labeled examples. As the authors point out, the unavailability of labeled examples from the target distribution also implies the unavailability of a validation set for model selection; therefore, model selection becomes an open problem in this space too.
- Thorough empirical investigation of a large number of methods and model selection approaches on two different datasets, with design choices made to enable fair comparisons and reproducible results
- Interesting findings and recommendations for the (P)DA community
- The paper is nicely written and easy to follow (see below for some small exceptions)

Weaknesses
- Unclear (missing motivation) why the scope is limited to PDA in particular instead of DA more generally or other variants of DA
- Missing formal problem description of PDA
- Insufficient description of previous methods (both DA/PDA methods as well as previous approaches to model selection considered)
- Insufficient discussion of some design choices, as well as of the relationship between the findings of this work and findings of related papers (see below)
- Some places where clarity can be improved (see below)

Requested Changes:
- Add a formal problem description of the problem setting of PDA
- Add motivation for why the scope is limited to PDA in this work: why PDA in particular instead of other variants of the problem, and why not DA more generally
- Add motivations for why the particular PDA methods were chosen for evaluation.
- It would be great to also add more detail about the PDA methods and the model selection methods evaluated coming from previous literature. While the high-level descriptions provided are nice, more details on each approach would be useful to help the reader engage better with the space of solutions, which in turn would aid in understanding and interpreting the findings from this evaluation
- Add discussion on whether the findings from this work are analogous to or contradict findings in previous studies of similar spirit (e.g. Gulrajani and Paz for domain generalization). It sounds like that study and problem setting is quite related. Though I understand domain generalization does not even assume unlabelled examples from the target domain, some model selection approaches can be applied to both, like using a subset of the source for validation and the oracle. How do the results differ w.r.t. these, for example, across the two studies?
- Define 'negative transfer', as the term appears several times in the paper, e.g. in the context of explaining why PDA is harder than DA. I've seen this term used before in transfer learning / continual learning to describe the issue arising when solving an earlier problem makes a later problem harder to solve.
But here, there is no sequential task solving, if I understand correctly: there is just one training phase where the model is trained on labeled source data and unlabeled target data simultaneously (though, it would be nice to have a formal problem description as mentioned above, to clarify this point).
- In Section 4.2, the authors mention '... for each task of each dataset'. The term 'task' hadn't been used up to this point (unless I missed it?) and it is not clear what it means here (I can somewhat infer from context, but it should be defined).
- In the same section, the authors say that the unsuccessful runs of the hyperparameter search pose a challenge for DEV, SND and ENT, but it is not clear why that is. Looking at the description of these methods in Section 2, it seems that there are some missing pieces in terms of understanding why this is an issue. Please elaborate.
- Regarding hyperparameters, the authors say 'we only consider the model at the end of training'. This confused me for two reasons: first, isn't the 'end of training' also a hyperparameter in and of itself (i.e. the number of steps)? If not, then what is the stopping criterion? Second, I'm a bit confused here because, earlier (Section 4.1, in the 'Model Selection Strategies' paragraph), the authors said 'we use them [model selection approaches] both for hyper-parameter tuning as well as selecting the best model along training'. This seems to contradict the statement of only considering the model at the end of training? It would be great to elaborate on what exactly is the procedure for choosing hyperparameters and a checkpoint.
- In Section 4.2, the authors also say: 'Then, to obtain the results with our neural network architecture on all tasks of each dataset, we trained an additional …'. It wasn't clear to me what this additional training phase is for. I thought that after model selection, the chosen model (i.e. set of hyperparameters and checkpoint) would be directly deployed on the test set? The 'with our neural network architecture' part makes it sound like a different architecture was used for tuning rather than testing? But that doesn't sound right, so I think I'm misunderstanding the setup. Please explain.
- Section 5 about reproducibility: by reading the discussion here I realized that the reproduced numbers are sometimes obtained with a different architecture compared to the original numbers compared against? I wouldn't really call this a reproducibility trial then, as the architecture is an important component which is changed. For the sake of verifying reproducibility / correctness of implementation, it would make more sense to use the same setup (including the same architecture) as the one that was used to obtain the results we are trying to reproduce.
- In a similar vein, it sounds like the authors have changed the architectures of some of the compared approaches, in order to have the same architecture used for all. While I do agree that it's nice to eliminate that factor of variation in order to directly compare methods to each other, it might be that the performance of some is underestimated due to it being strongly dependent on a particular architecture from which it has now been decoupled? To what extent do the authors think this is a possibility? Perhaps for the purposes of this paper this is OK, but it would be great to add some discussion about the pros and cons of this design choice.
- RE: PADA combined with SND 'performs better than the oracle on average, although within the standard deviation': my understanding is that for assessing statistical significance, one needs to look at the overlap of confidence intervals, not standard deviations. I find the wording here confusing.
- Table 7: state what the relationship is between this and Table 6 (e.g. in the caption). Currently the captions of these two tables are similar and it takes some scrolling up and down to find the differences.
- Section 5.3, when discussing random seeds, please state what exactly these seeds control (e.g. is it just the order of mini batches, or something more? Is this method-specific?)
- Minor: fix inconsistent notation, e.g. dev (in Section 2) versus DEV elsewhere in the paper; analogously for ent, snd, etc.

Additional comments and discussion
- Table 4 is hard to read as it contains a large number of figures. Maybe a better presentation of these results would be through a (set of) barplots, where each one shows the difference between the published results and reproduced results?
- Regarding the experimental setup, as explained in Section 4.1 for the datasets, can the particular selection of the first X categories to serve as the partial target domain bias the results? That is, conceivably the PDA problem becomes easier or harder depending on which categories are chosen to be source-only (excluded from the target). Is there a rationale for choosing the first X? Has previous literature studied this?

Broader Impact Concerns: I have no broader impact concerns.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This is more of a survey/benchmark paper, so comprehensiveness and elaboration are the crucial factors impacting the paper's quality. In a nutshell, this paper does great work in achieving these. The outcome of this paper is a solid BenchmarkPDA for reproducibility, and insights about the robustness and stability of well-established partial domain adaptation methods, with particular insights into the validation and parameter tuning strategies. Three reviewers assessed this paper and the revisions, with active interactions with the authors through OpenReview. The authors managed to address all comments reasonably well, which leads to unanimous acceptance recommendations from the reviewers. The Associate Editor agrees that this is a good paper that is meaningful for the community, and thus accepts the paper as is.

==================================================
# Meta Continual Learning On Graphs With Experience Replay

Altay Unal *unal21@itu.edu.tr* Department of Computer Engineering Istanbul Technical University

Abdullah Akgül *akgul@imada.sdu.dk* Department of Mathematics and Computer Science University of Southern Denmark

Melih Kandemir *kandemir@imada.sdu.dk* Department of Mathematics and Computer Science University of Southern Denmark

Gozde Unal *gozde.unal@itu.edu.tr* Department of Computer Engineering Istanbul Technical University

Reviewed on OpenReview: *https://openreview.net/forum?id=8tnrh56P5W*

## Abstract

Continual learning is a machine learning setting in which the challenge is for a learning model to execute incoming tasks while maintaining its performance on the earlier tasks. In order to address this issue, we devise a technique that combines two uniquely important concepts in machine learning, namely "replay buffer" and "meta learning", aiming to exploit the best of both worlds. In this method, the model weights are initially computed using the current task dataset. Next, the dataset of the current task is merged with the stored samples from the earlier tasks, and the model weights are updated using the combined dataset. This aids in preventing the model weights from converging to the optimal parameters of the current task and enables the preservation of information from earlier tasks. We choose to adapt our technique to the graph data structure and the task of node classification on graphs. We introduce MetaCLGraph, which outperforms the baseline methods on various graph datasets including Citeseer, Corafull, Arxiv, and Reddit. This method illustrates the potential of combining a replay buffer and meta learning in the field of continual learning on graphs.

## 1 Introduction

Deep learning models have proven to perform successfully at numerous machine learning tasks including classification and regression. Despite their celebrated performance, deep learning models tend to provide poor results when they are expected to learn from a sequence of data or tasks (Li & Hoiem, 2017). This setting is called continual learning, which aims to train a deep learning model such that it manages to learn different tasks in order while avoiding forgetting the tasks that it has learned previously. In continual learning, a model is trained in a way such that it can be retrained for future tasks while preserving information from earlier tasks. The main challenge of continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Goodfellow et al., 2013), which causes the model to lose the information obtained from earlier tasks, leading to a performance drop on those tasks.

The studies addressing the catastrophic forgetting problem in continual learning can be divided into three groups. The first group comprises the memory based studies, which either store examples (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017) or generate pseudo samples using the stored information (Lavda et al., 2018; Atkinson et al., 2018). The second group focuses on isolating parameters associated with earlier tasks such that the model can preserve its performance on those tasks (Mallya et al., 2018; Serra et al., 2018), while the third group consists of the regularization-based methods, which propose an extra regularization term to preserve information from earlier tasks while learning a new task (Kirkpatrick et al., 2017; Li & Hoiem, 2017).
While the main problem of continual learning is catastrophic forgetting, continual learning is also challenging due to the differences in the problem setup. Generally, continual learning has two different setups: class incremental and task incremental (De Lange et al., 2021). In class incremental learning, the individual classes are presented sequentially (Masana et al., 2022), while in task incremental learning only the tasks are presented (De Lange et al., 2021). The class incremental setup requires a deep learning model to classify across the observed and current classes, whereas the task incremental setup requires an indicator to separate tasks, since the model predicts classes only within each task. The class incremental setup is more challenging than the task incremental setup, since the model has no indicator of which task is being tested (Zhang et al., 2022). In addition, class incremental learning becomes more challenging as the number of classes increases and the data concerning the earlier tasks is not provided.

Continual learning aims to adjust to newly incoming tasks without forgetting the older ones. Meta learning has the potential to benefit the continual learning problem, as the aim of meta learning is to provide a model with the capability to generalize to new tasks. Meta learning is used in scarce data regimes (Finn et al., 2017) and typically refers to processes in which a model learns how to learn. For instance, meta learning has become an important method for few-shot learning, in which deep learning models have very few samples to train with (Nguyen et al., 2019; Chen et al., 2023). The meta learning paradigm guides the neural network model weights so that they do not fit the optimal parameters of solely a given task. As parameters that are optimal for every task cannot be attained, the model tends to find the set of parameters that can collectively adapt to all observed tasks. The model parameters are first calculated for the current task without updating the model. After that, a loss, which is called the meta loss, is calculated with the earlier calculated weights, and the model is updated with the meta loss. This mitigates catastrophic forgetting and enables quick adaptation to new tasks.

The meta learning method can also be applied to graph structured data (Zhou et al., 2019; Tan et al., 2022). Graph structured data is a type of data that consists of vertices and edges. In addition, graph structured data is non-Euclidean, as it does not have a specific hierarchy or order (Asif et al., 2021); hence, it does not have a fixed geometry. In certain real world applications such as citation networks or online social networks, data represented by graphs change dynamically and hence tend to expand continuously. Due to the expansion of the data, two problems may arise: (i) new classes may emerge, and the deep learning model cannot handle the newly arrived classes since the model is not trained for those classes; (ii) on the contrary, if the model is trained on the new classes, it may lose the earlier information. Therefore, continual learning on graph structured data becomes necessary. Continual learning on graphs has been studied recently (Liu et al., 2021; Zhou & Cao, 2021; Zhang et al., 2022) and has become an emerging field.
While the studies addressing this field take similar approaches, such as memory based (Zhou & Cao, 2021) or regularization based (Liu et al., 2021) methods, they also consider the properties of graph structured data, as their methods can be affected by different factors of the graph structure (Xu et al., 2020) such as the topology and irregularity of the graph. In addition, newly arriving tasks alter the dynamics of the graph, interfering with the learning of the model. Therefore, continual learning on graphs diverges from the general setting, and it is handled with respect to both the continual learning paradigm and the characteristics of graph structured data.

In our research, we fuse the best of both worlds in continual learning for the graph data type, merging meta learning and a replay buffer in a single method with learnable learning rates for the first time, to the best of our knowledge. The comparison of our method to the algorithmic families that build on the replay buffer and meta learning is given in Figure 1. Our method fuses the experience replay and selection mechanism of ER-GNN (Zhou & Cao, 2021) with the meta learning and learnable learning rates of LA-MAML (Gupta et al., 2020) to advance graph continual learning. In the graph continual learning problem, we focus on the task of node classification in the class incremental setup, which requires the model to "remember" earlier classes so that it can avoid catastrophic forgetting. Our approach has outperformed the benchmark methods on various graph datasets.

Algorithm 1 ER-GNN
for t = 1 to M do
  Calculate loss with Tt
  for t′ = 1 to t − 1 do
    Get Tt′ from B
    Calculate loss with Tt′
  end for
  Sum losses from Tt and B
  Extend B with Tt samples
end for

Algorithm 2 LA-MAML
for t = 1 to M do
  Calculate weights for Tt
  Meta loss with calculated weights
  Update learning rates
end for

Algorithm 3 MetaCLGraph
for t = 1 to M do
  Gaux ← B ∪ Tt
  Calculate weights with Tt
  Meta loss with calculated weights on Gaux
  Update learning rates
  Extend B with Tt samples
end for

Figure 1: The algorithms for ER-GNN, LA-MAML, and MetaCLGraph (our approach). B represents the initially empty replay buffer, while M is the number of tasks and Tt is the current task. ER-GNN uses the replay buffer to store examples from tasks, while LA-MAML uses meta learning for continual learning. MetaCLGraph fuses both by using meta learning in the training phase while storing examples in the replay buffer.

## 2 Background

Graph Neural Networks (GNNs) are constructed to process graph structured data and its inference problems (Kipf & Welling, 2016). Given a graph $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_i\}_{i=1}^{|\mathcal{V}|}$ denotes the set of nodes and $\mathcal{E} = \{e_{ij}\}_{i,j=1}^{|\mathcal{E}|}$ denotes the edges of the graph, GNNs operate in two phases to compute the output embeddings $h_v^{(l)}$. First, the embeddings are obtained by multiplying the weight matrices of each layer, $W^{(l)}$, with the embeddings calculated from the earlier layer, $h_v^{(l-1)}$. Next, the embeddings of the nodes neighboring node $v$ are aggregated by an aggregation function, and the resulting neighborhood embedding is further aggregated with the embedding of node $v$ from the earlier layer. Separating the embedding of node $v$ from the neighboring embeddings allows GNNs to preserve the information from node $v$. GNNs contribute to understanding the interactions within the adjacency matrix and reflect these interactions onto the features so that tasks related to graph structured data can be completed. GNNs are used for node and graph classification, and they are also useful for link prediction between nodes (Asif et al., 2021; Wu et al., 2022).
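To make this two-phase computation concrete, the following is a minimal sketch of one such layer in PyTorch; the dense adjacency matrix, the mean aggregation, and all names are our own simplifying assumptions rather than the implementation used in this work.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One message-passing layer: transform, aggregate neighbors, then combine."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)   # acts on the node's own embedding
        self.w_neigh = nn.Linear(in_dim, out_dim)  # acts on the aggregated neighborhood

    def forward(self, h, adj):
        # h: (num_nodes, in_dim) embeddings h^(l-1) from the earlier layer.
        # adj: (num_nodes, num_nodes) dense adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ h) / deg  # mean-aggregate the neighbors' embeddings
        # Keeping the self term separate preserves the information of node v itself.
        return torch.relu(self.w_self(h) + self.w_neigh(neigh))
```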
As graph structured data in the real world often evolves, GNNs require continual learning so that newly incoming knowledge can be learned while the information from earlier tasks is preserved. One common solution to the continual learning problem is to use a replay buffer that stores examples from earlier tasks (Zhou & Cao, 2021), ensuring the model's adaptability to incoming tasks without losing earlier information. Using the replay buffer, the model "remembers" the past tasks.

Meta learning is a method that aims to train a model on a variety of tasks (Finn et al., 2017). Given the task set $\mathcal{T} = \{T_1, T_2, \ldots, T_i, \ldots, T_M\}$, where $M$ denotes the number of tasks, the model initially calculates the set of parameters for task $i$, denoted $\phi_i$. This process is called the inner update and is used to find the set of parameters for task $i$. After calculating $\phi_i$, the model weights $\theta$ are updated using the gradient of the loss evaluated at $\phi_i$ with respect to $\theta$. This allows altering the model weights in the direction of the gradient of the loss calculated according to the current task. The objective function for $\theta$ is given below, where $\mathcal{D}_i^{\mathrm{tr}}$ and $\mathcal{D}_i^{\mathrm{ts}}$ represent the training and test set for task $i$, respectively:

$$\operatorname*{min}_{\theta}\,\sum_{\mathrm{task}\ i}{\mathcal{L}}\left(\theta-\alpha\nabla_{\phi}{\mathcal{L}}\left(\theta,{\mathcal{D}}_{i}^{\mathrm{tr}}\right),{\mathcal{D}}_{i}^{\mathrm{ts}}\right).\tag{1}$$

Meta learning ensures that the model does not converge to the optimal parameters of a single task and preserves the adaptability of the model to incoming tasks. In contrast to a model that suffers catastrophic forgetting, meta learning aims to develop a "collective intelligence" that allows the model to perform over all observed tasks.
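To illustrate the objective in Equation 1, below is a minimal sketch of one MAML-style meta update in PyTorch. `torch.func.functional_call` evaluates the model under substituted weights; the `tasks` iterable and the step sizes `alpha` and `beta` are our own placeholder assumptions, so this is an illustration of the inner/outer structure rather than a reference implementation.

```python
import torch
from torch.func import functional_call

def maml_step(model, loss_fn, tasks, alpha=0.01, beta=0.001):
    """One meta update; `tasks` yields ((x_tr, y_tr), (x_ts, y_ts)) pairs."""
    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]
    meta_loss = 0.0
    for (x_tr, y_tr), (x_ts, y_ts) in tasks:
        # Inner update: task-specific parameters phi_i = theta - alpha * grad.
        inner = loss_fn(functional_call(model, dict(zip(names, params)), (x_tr,)), y_tr)
        grads = torch.autograd.grad(inner, params, create_graph=True)
        phi = {n: p - alpha * g for n, p, g in zip(names, params, grads)}
        # Meta loss: evaluate the adapted parameters phi_i on the task's test split.
        meta_loss = meta_loss + loss_fn(functional_call(model, phi, (x_ts,)), y_ts)
    # Outer update of the shared initialization theta.
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p.sub_(beta * g)
```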
## 3 Related Work

Continual Learning mainly deals with the catastrophic forgetting problem. In order to overcome the latter, three approaches are generally used, aiming to maintain the adaptability of the model to incoming tasks without any performance loss. The first approach is to isolate the parameters so that the previously obtained information is not lost. Important parameters for the current task are determined, and those parameters are isolated such that they do not change with the future tasks, and the model can still perform that specific task (Serra et al., 2018; Mallya et al., 2018). The second approach is to use a regularization term such that the weight importance can be taken into account, and changes in the weights that are important for the past tasks are penalized (Kirkpatrick et al., 2017; Aljundi et al., 2018). The regularization prevents the model weights from fitting the optimal parameters of the current task, and hence from forgetting the old tasks. The third approach is to use an additional memory that maintains some examples from the previous tasks. The training process for future tasks uses certain examples from earlier tasks; hence, the model does not lose previously obtained information. The additional memory allows the model to observe examples from earlier tasks while it concurrently observes the current task. As a result, the weights do not distance themselves from those of the earlier tasks. The additional memory can store information about the earlier tasks in order to generate similar samples, or it can store examples from the tasks, which are then observed during future tasks. The idea of additional memory is shown to be effective against catastrophic forgetting (Lopez-Paz & Ranzato, 2017; Zhou & Cao, 2021).

Capacity saturation is another challenge in continual learning (Sodhani et al., 2020). The architecture of the model defines its capacity, as the components of the architecture affect learning dramatically (Mirzadeh et al., 2022; Shahawy et al., 2022). A model with fixed capacity saturates as it keeps training on more tasks, since the model loses its ability to adapt to incoming tasks. This is called the stability-plasticity dilemma (Mermillod et al., 2013). The same dilemma exists in graph continual learning, where learning may become limited for GNN architectures as a large number of new classes arrives. Addressing this dilemma requires a specialized focus, and while numerous studies have addressed the effects of architecture on continual learning (Huang et al., 2021; Feillet et al., 2023), a similar study on GNNs has yet to be conducted.

Graph Neural Networks have recently been used in continual learning on graphs (Wang et al., 2020). Since GNNs consider both the adjacency matrix and the features, as mentioned earlier, they are capable of discovering the relationships between the nodes and the knowledge held by the graph. Graph continual learning is a very recent topic (Zhou & Cao, 2021) in which GNNs are employed for continual learning of graph-structured data. General continual learning approaches, namely regularization based and replay based methods, have been adapted to graph continual learning. Inspired by the work in the continual learning field, replay based methods have been developed for graph continual learning, storing samples from earlier tasks (Zhou & Cao, 2021) or generating samples by using the stored representations from earlier tasks (Wang et al., 2022). On the other hand, regularization based methods use an additional term in the loss function to adjust the updates of the model weights. In graph continual learning, regularization based methods (Liu et al., 2021; Chen et al., 2021; Cai et al., 2022; Sun et al., 2023) use a regularization term to handle the issues caused by the properties of graph structured data such as topology and irregularity. Since graph structured data contains objects which are connected to each other, the tasks obtained by dividing the graph share several connections among different tasks. Hence, learning different tasks can improve the performance of the GNN on earlier tasks, as different connections can be discovered with every incoming task, and this can be used as a remedy against catastrophic forgetting. In real-world scenarios, graphs tend to be dynamic, and the boundaries between the tasks do not usually exist (Tang & Matteson, 2020). Due to the nature of real-world scenarios, dynamic graph learning (Rossi et al., 2020; Kazemi et al., 2020) can be applied to capture the dynamics of real-world graphs.
Dynamic graph learning, however, primarily concentrates on capturing up-to-date graph representations (Huang et al., 2022) and does not focus on the forgetting problem, because the observed data can still be accessed by the model. In graph continual learning, in contrast, the observed data is no longer accessible once the model has been trained on that task, requiring the model to develop mechanisms to deal with forgetting and to use the obtained knowledge. Due to the nature of real-world graphs, the GNN model becomes more powerful with continual learning, as the same GNN model can be used for different tasks. This allows us to avoid training separate models for each task and to obtain a single model that can perform all tasks, which is in line with the ultimate goal of lifelong learning.

Memory Mechanism. The implementation of memory mechanisms proves to be highly effective (Knoblauch et al., 2020), as memory mechanisms allow models to revisit earlier examples during the current task and maintain their performance on earlier tasks. Various types of memory units have been explored for the continual learning problem. In one approach, the implemented memory mechanism stores examples from earlier tasks such that, when the model is learning future tasks, it is also trained on the examples in the memory (Zhou & Cao, 2021; Lopez-Paz & Ranzato, 2017). This prevents the model from drifting away from the learned parameters of the earlier tasks. Another approach stores some representations from each task such that those representations can be used to generate examples belonging to earlier tasks (Rebuffi et al., 2017). The generated early-task examples are also used during the training of the current task. Hence, the model can preserve its performance while learning the current task with this approach.

Meta Learning is used in reinforcement learning and few-shot learning (Finn et al., 2017) to perform multi-task learning on a large set of tasks. Since the aim of continual learning is to adjust the model to newly coming tasks while maintaining its ability to perform the older ones, meta learning becomes useful in continual learning. Meta learning prevents overfitting to a specific task by using the gradients of the optimal parameters for each task (Gupta et al., 2020). This approach is effective in mitigating catastrophic forgetting, as it prevents the model from overfitting to a specific task without halting the learning process of the model.

Our proposed approach, namely Meta Continual Learning for Graphs with Experience Replay (MetaCLGraph), fuses meta learning with a memory mechanism to increase the efficiency of learning incoming tasks. Focused on the node classification problem in the class incremental learning setting, MetaCLGraph uses a meta learning mechanism to improve the model's ability to retain already-seen information while learning the current task, and incorporates a replay buffer as the memory mechanism that allows the model to observe samples from earlier tasks while learning a new one. In addition, learnable parameters are introduced as per-weight learning rates so that the updates of each parameter can differ.

## 4 Method

## 4.1 Problem Setup

The goal is to predict the classes of the nodes for a collection of tasks $\mathcal{T} = \{T_1, T_2, \ldots, T_M\}$, where $M$ is the number of tasks. The continual graph problem here entails the model learning this series of tasks in $\mathcal{T}$. For each task $T_i$, we have a training node set $\mathcal{D}_i^{\mathrm{tr}}$ and a test node set $\mathcal{D}_i^{\mathrm{tst}}$. Node classification aims to predict the right class for each node, i.e., to classify each node in the test node set $\mathcal{D}_i^{\mathrm{tst}}$ into the correct class by learning the tasks using $\mathcal{D}_i^{\mathrm{tr}}$. In our graph continual learning setup, we aim to classify incoming nodes based on early observed classes, which is also known as class incremental learning (Masana et al., 2022).
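To make the task construction concrete, here is a small sketch, under our own assumptions about the data layout, of how node labels can be split into a sequence of class-incremental tasks; the two-classes-per-task choice matches the experimental setup described in Section 5.1.

```python
import numpy as np

def make_class_incremental_tasks(labels, classes_per_task=2):
    """Split node indices into sequential tasks over disjoint sets of classes.

    labels: integer class label per node, shape (num_nodes,).
    Returns a list of (task_classes, node_indices) pairs, one per task.
    """
    classes = np.unique(labels)
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        node_ids = np.where(np.isin(labels, task_classes))[0]
        tasks.append((tuple(task_classes), node_ids))
    return tasks

# Example: 70 classes with 2 classes per task yields 35 tasks, as for Corafull.
labels = np.random.randint(0, 70, size=19793)
print(len(make_class_incremental_tasks(labels)))  # 35
```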
## 4.2 Our Method: MetaCLGraph

MetaCLGraph is our proposed solution for graph continual learning, which fuses meta learning and episodic memory through an experience replay mechanism. The algorithm for our method is given in Algorithm 4. Although Zhou & Cao (2021) and Gupta et al. (2020) both inspire MetaCLGraph, the transition to the latter is not straightforward, as the adaptation of LA-MAML to graph continual learning while using an episodic memory requires overcoming obstacles such as the lack of batches of graph nodes and the need to utilize the replay buffer efficiently. The model weights are initially calculated for the current task and later updated using gradients calculated over both the stored examples and the current task data. This update is called the meta update, and it allows the model to learn the previous tasks along with the current task $T_i$, instead of fitting just the current task. The experience replay is obtained when the model is trained on a task and some nodes are selected using the coverage maximization function (Section 4.2.1). The replay buffer can also be seen as an episodic memory, since the buffer stores the nodes from the current task at the end of its training. The replay buffer is constructed in this way so that the finished tasks can be revisited by the model using the buffer.

Algorithm 4 MetaCLGraph - Detailed Algorithm
Require: model weights $\theta$, learning rates $\alpha$, graph of the current task $G_t$, buffer $B$, number of tasks $M$, learning rate for $\alpha$: $\eta$
1: for $t = 1$ to $M$ do
2:   Join buffer samples with the current task dataset: $G' = G_t \cup B$
3:   for $epoch = 1$ to $E$ do
4:     Inner update with $G_t$: $\theta_{\text{fast}} = \theta - \alpha^t \cdot \nabla_\theta\, \mathcal{L}_t(\theta, G_t)$
5:     Learning rate update: $\alpha^{t+1} = \alpha^t - \eta \cdot \nabla_{\alpha^t}\, \mathcal{L}_t(\theta, G')$
6:     Meta update with $G'$: $\theta = \theta_{\text{fast}} - \max(0, \alpha^t) \cdot \nabla_{\theta_{\text{fast}}}\, \mathcal{L}_t(\theta, G')$
7:   end for
8:   Save samples to buffer $B$ with coverage maximization
9: end for

The main reason for using meta learning with a buffer is to prevent the model weights from positioning close to the optimal parameters of a certain task. Following a continual learning paradigm, MetaCLGraph adjusts the model weights to the incoming tasks. Revisiting the stored sample nodes allows the model to remember the earlier tasks. With this added reminder process, the model avoids adapting to only one task: it is able to distance its parameters from the optimal parameters of the current task while staying tuned to the earlier tasks.
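The following is a minimal PyTorch sketch of one task iteration of Algorithm 4. The helper `task_loss` and the per-parameter learning rates `alpha` (a list of tensors with `requires_grad=True`, aligned with `model.named_parameters()`) are our own assumptions; the sketch illustrates the order of the inner, learning rate, and meta updates, not the reference implementation linked in Section 5.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def task_loss(model, params, graph):
    # `graph` is assumed to carry features `x`, adjacency `adj`, labels `y`,
    # and a boolean `mask` over the nodes entering the loss computation.
    logits = functional_call(model, params, (graph.x, graph.adj))
    return F.cross_entropy(logits[graph.mask], graph.y[graph.mask])

def train_task(model, alpha, graph_t, graph_aux, num_epochs, eta):
    """One outer iteration of Algorithm 4; graph_aux is G' = G_t with buffer B merged in."""
    for _ in range(num_epochs):
        theta = dict(model.named_parameters())
        # Line 4: inner update on the current task graph only.
        grads = torch.autograd.grad(task_loss(model, theta, graph_t),
                                    list(theta.values()), create_graph=True)
        fast = {n: p - a * g for (n, p), a, g in zip(theta.items(), alpha, grads)}
        # Meta loss on the merged graph, evaluated under the fast weights.
        meta = task_loss(model, fast, graph_aux)
        alpha_grads = torch.autograd.grad(meta, alpha, retain_graph=True)
        fast_grads = torch.autograd.grad(meta, list(fast.values()))
        with torch.no_grad():
            # Line 6: meta update of theta with the clipped current rates.
            for p, f, a, g in zip(theta.values(), fast.values(), alpha, fast_grads):
                p.copy_(f - a.clamp(min=0) * g)
            # Line 5: gradient step on the learnable learning rates.
            for a, g in zip(alpha, alpha_grads):
                a -= eta * g
```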
## 4.2.1 Experience Replay

At the end of the training for each task, the additional memory is updated by selecting some examples from that particular task. The additional memory allows our model to "remember" previous tasks as the model weights are trained for the next task. As the model also observes the selected examples, the experience replay allows our model to preserve knowledge from earlier tasks. However, the selection process for the experience replay is important, as the representation of a given task relies on the stored nodes. During the sample selection, we use the coverage maximization method (Zhou & Cao, 2021) to determine the nodes which can represent their tasks accordingly. The coverage maximization method is used as the selection function to maximize the coverage of the embedding space by finding the distances of nodes to the other classes. According to these distances, the nodes are ranked, and lower ranked nodes are selected. The criterion for coverage maximization is given in the following:

$$\mathcal{N}\left(v_{i}\right)=\left\{v_{j}\mid\operatorname{dist}\left(v_{i}-v_{j}\right)<d,\;\mathcal{Y}\left(v_{i}\right)\neq\mathcal{Y}\left(v_{j}\right)\right\}\tag{2}$$

where $\mathcal{N}(v_i)$ is the set of nodes from different classes within distance $d$ of node $v_i$, and $\mathcal{Y}(v_i)$ and $\mathcal{Y}(v_j)$ are the labels of the nodes $v_i$ and $v_j$, respectively. The distance inside the embedding space is minimized so that the classes can be represented inside the buffer. This reinforces the model in remembering the previous classes, as the examples selected in this manner cover their embedding spaces.

The experience replay allows us to construct a merged dataset, as illustrated in Figure 2. When the training process for a task is finished, several nodes are selected according to the coverage maximization function and stored inside the experience replay. After that, when a new task arrives, the nodes stored inside the experience replay are merged with the data of the current task, and a merged graph is formed. The merged graph allows us to train our model accurately, since the model can keep the information learned earlier while learning the current task.

![6_image_0.png](6_image_0.png)

Figure 2: The dataset merging process is illustrated in this figure. The arrows represent the transition between the tasks, while coloured nodes represent the active nodes during that task. In our approach, multiple nodes are selected at the end of the task and stored inside the replay buffer. When a new task arrives, the stored nodes are merged with the current task and form a merged dataset. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
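To illustrate how the selection described above could look in code, the following is a small sketch under our own simplifications (Euclidean distances over node embeddings, a fixed per-class budget, and hypothetical names); the actual selection follows Zhou & Cao (2021).

```python
import torch

def select_by_coverage(embeddings, labels, budget, d=1.0):
    """Pick `budget` nodes per class whose distance-d neighborhoods contain
    the fewest nodes of other classes, i.e. the lowest-ranked |N(v_i)|."""
    dist = torch.cdist(embeddings, embeddings)           # pairwise distances
    other = labels.unsqueeze(0) != labels.unsqueeze(1)   # cross-class mask
    # |N(v_i)|: other-class nodes within distance d of each node.
    n_size = ((dist < d) & other).sum(dim=1)
    selected = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        order = n_size[idx].argsort()                    # fewest first
        selected.append(idx[order[:budget]])
    return torch.cat(selected)
```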
Using the meta learning loss calculated for the merged dataset, the learning rates for each parameter are also updated by using the gradients. With these updates on the learning rates, the updates on our model weights become more predictable. If some model weights require more updates in order to learn the task better, then their learning rate update is limited. However, if some of the model weights have reached an optimal region, then their learning rates decrease more sharply than those of the other weights, since the model tends to preserve that information. This update difference among the model weights allows our model to adjust the parameters according to the tasks; hence, the already-acquired information is not lost. For further details on how the gradients with respect to the learning rates are computed, we refer the readers to LA-MAML (Gupta et al., 2020).

## 5 Experiments

The continual learning experimental setups for GNNs are investigated, and the performance of our method is compared with that of other continual learning setups. The pipeline provided by Zhang et al. (2022) is used for the experiments. The datasets, baselines, experimental settings, and results are reported and described next. We provide a reference implementation of the proposed model and the experimental pipeline at https://github.com/ituvisionlab/MetaCLGraph.

Table 1: Details of the datasets used in the experiments.

| Dataset     | # nodes | # edges   | # features | # classes | # tasks |
|-------------|---------|-----------|------------|-----------|---------|
| CiteSeer-CL | 3327    | 9228      | 3703       | 6         | 3       |
| Corafull-CL | 19793   | 130622    | 8710       | 70        | 35      |
| Arxiv-CL    | 169343  | 1166243   | 128        | 40        | 20      |
| Reddit-CL   | 227853  | 114615892 | 602        | 40        | 20      |

## 5.1 Datasets

For evaluating MetaCLGraph, four benchmark datasets were employed: Corafull (Bojchevski & Günnemann, 2017), Arxiv (Hu et al., 2021), Reddit (Hamilton et al., 2017), and Citeseer (Sen et al., 2008). Arxiv, Corafull, and Reddit are used in Zhang et al. (2022) to construct a benchmark for continual graph learning, and Citeseer is additionally included for its significance in graph-structured data. We divide the datasets into tasks containing two classes each, which results in 35 tasks from Corafull, 3 from Citeseer, and 20 each from Arxiv and Reddit. For the Reddit dataset, the 41st (and last) class is dropped since it only has one example. Table 1 provides the detailed task information and graph structure of each dataset. To manage device memory requirements, larger graphs like Reddit are divided into batches, while the rest are processed as single batches.

## 5.2 Baselines

The following baselines are used in order to evaluate MetaCLGraph.

Base model is the graph neural network without any continual learning components. The data is simply passed to the GNN architecture and the model is trained with the provided data. There are no further improvements on the GNN architecture.

Elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) focuses on preserving crucial model weights for each task, allowing updates only on less significant weights during the learning of new tasks in order to preserve earlier information.

Memory aware synapses (MAS) (Aljundi et al., 2018) is a regularization based method that evaluates the importance of the parameters according to the sensitivity of the predictions to the parameters.
When new data arrives, the model weights are updated according to the activations of the network so that the important model weights for the newly acquired data can be updated.

Topology-aware weight preserving (TWP) (Liu et al., 2021) is a method introduced for graph continual learning which integrates weight preservation with the graph topology. The graph structure is taken into consideration in order to prevent catastrophic forgetting. After the loss is calculated, it is regularized by an importance score of the model weights computed from the topological structure of the provided graph, thereby benefiting from the properties of the graph.

ER-GNN (Zhou & Cao, 2021) uses experience replay for this problem. After learning a task Ti, sample nodes are saved to the buffer with a selection function, and when Ti+1 is being learned, separate graphs for each learned task k = 0, ..., i are constructed and the GNN is trained with those graphs. The overall loss is calculated as the loss for the current task regularized by the losses computed from the separate graphs constructed using the experience replay.

MetaCLGraph (Ours) uses an episodic memory buffer, and combines meta learning with experience replay.

Joint is the method where all the learned tasks are accessed during training. Therefore, it serves as the upper bound for our setup.

EWC and MAS are techniques focusing on parameter isolation, while ER-GNN uses the additional memory as a replay buffer to store examples from earlier tasks, however lacking any meta-learning component. TWP is a regularization method that considers the topology of the graph while updating weights. Our model, MetaCLGraph, is compared with these baseline techniques for performance assessment.

## 5.3 Experiment Settings And Performance Scores

The experiments are conducted with a learning rate of 0.005, and each task is trained for 200 epochs. The batch size is set to 2000 for the batched datasets. The Adam optimizer (Kingma & Ba, 2014) is used, and all methods use the graph convolutional network (Kipf & Welling, 2016) as the backbone GNN architecture. The selection algorithm relies on coverage maximization. The hyperparameters of the compared methods are obtained from the benchmark paper (Zhang et al., 2022) and its repository, as the results in the benchmark are reproducible.

The buffer budget is set to 10 for the replay based methods, namely ER-GNN and MetaCLGraph. As defined in the benchmark paper (Zhang et al., 2022), the buffer budget sets the number of samples stored for each class in a task. Setting the buffer budget high does not serve the purpose of continual learning, since an entire class or task could then be observed again in future tasks. All of the datasets contain more than 10 samples for each class, so a budget of 10 keeps the buffer strict and avoids storing an entire class or task.

The two evaluation measures are the Average Performance (AP) and the Average Forgetting (AF) (Lopez-Paz & Ranzato, 2017). AP focuses on the model's accuracy in the classification of each task. It is obtained by calculating the mean accuracy over the tasks observed so far.
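Both AP and the AF measure defined next can be read off the accuracy matrix, whose entry acc[i, j] is the accuracy on task j after training on task i. The following is a minimal sketch under that assumption; the AF sign convention is chosen here to match the (negative) scores reported in Table 2, which is our reading rather than a statement from the benchmark.

```python
import numpy as np

def average_performance(acc):
    # Mean accuracy over all tasks observed so far, evaluated after
    # training on the last task (the last row of the accuracy matrix).
    k = acc.shape[0]
    return acc[k - 1, :k].mean()

def average_forgetting(acc):
    # Difference between the final accuracy on each old task j and the
    # accuracy a_{j,j} it had right after its own training (cf. Eq. (3));
    # negative values indicate forgetting.
    k = acc.shape[0]
    return np.mean([acc[k - 1, j] - acc[j, j] for j in range(k - 1)])
```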
AF considers whether or not the model preserves its performance on the observed tasks. The AF is based on the differences

$$a_{j,j}-a_{k,j},\quad\forall j<k\ ,\tag{3}$$

where $a_{j,j}$ and $a_{k,j}$ are entries of the accuracy matrix: $k$ indexes the current task and $j$ a previously observed task, with $a_{j,j}$ its accuracy right after it was trained as the current task. In summary, forgetting is determined by the difference between the performance a task obtained when it was the current task and its final performance. All experiments are repeated 5 times on one Nvidia RTX A4000 GPU.

## 5.4 Results

The experimental results are reported in Table 2. The results indicate that the proposed MetaCLGraph outperforms the compared baselines: it achieves the best performance across all datasets in terms of the AP measure, and the best performance in terms of the AF measure on all datasets except Reddit. These results provide evidence that the proposed method allows the model to learn the tasks while retaining information about the early-observed tasks.

Table 2: Experimental results (mean ± std over 5 runs); the best results are shown in bold.

| Method              | Citeseer AP ↑ | Citeseer AF ↑ | CoraFull AP ↑ | CoraFull AF ↑ | Arxiv AP ↑ | Arxiv AF ↑  | Reddit AP ↑ | Reddit AF ↑ |
|---------------------|---------------|---------------|---------------|---------------|------------|-------------|-------------|-------------|
| Base Method         | 31.49±0.15    | -77.48±0.29   | 2.24±0.20     | -94.82±0.33   | 4.85±0.04  | -87.17±1.97 | 5.49±1.05   | -93.39±1.28 |
| EWC                 | 31.42±0.15    | -77.46±0.32   | 15.02±2.14    | -81.67±2.32   | 4.86±0.01  | -88.35±0.45 | 8.88±1.64   | -94.56±1.75 |
| TWP                 | 31.37±0.08    | -78.43±0.26   | 16.21±1.76    | -77.81±1.57   | 4.86±0.02  | -88.89±0.13 | 11.75±1.60  | -91.59±1.81 |
| MAS                 | 31.58±0.07    | -77.47±0.32   | 6.85±1.89     | -88.52±2.14   | 4.86±0.06  | -86.52±0.62 | 12.58±4.67  | -26.24±8.53 |
| ER-GNN              | 46.81±0.33    | -51.10±0.45   | 2.26±0.32     | -95.12±0.40   | 27.83±0.26 | -54.67±0.17 | 37.64±0.64  | -64.55±0.64 |
| MetaCLGraph (Ours)  | 51.48±1.64    | -46.79±2.78   | 64.73±0.34    | -16.86±0.38   | 31.76±0.74 | -51.03±0.82 | 67.66±3.64  | -33.01±3.86 |
| Joint (Upper bound) | 76.40±0.20    | -             | 81.56±0.14    | -             | 46.35±0.85 | -           | 98.33±0.26  | -           |

MetaCLGraph outperforms the regularization based methods such as MAS, TWP, and EWC by employing its replay buffer. As mentioned earlier, regularization based methods alter the loss so that the model can preserve the obtained information, whereas our proposed method re-observes the stored examples while learning the future tasks. Therefore, the utilization of the replay buffer allows MetaCLGraph to outperform the regularization based methods. Although MAS has a better AF score on the Reddit dataset, this score is a byproduct of its low AP score: the MAS method cannot learn the tasks on Reddit, and its AF score does not indicate better performance, as the method has not learned enough information about the observed tasks to forget.

MetaCLGraph also outperforms ER-GNN, which likewise utilizes a replay buffer. Although they use the same selection function for the examples stored inside the replay buffer, MetaCLGraph exploits meta learning during the training process. During training, the parameters are first adapted to the current task; however, the final loss is not computed until the model has observed the stored examples from the earlier tasks.
Therefore, the model does not exhibit over-fitting to the current task. On the contrary, it finds a set of parameters that can perform on the earlier tasks while learning the current one. The calculation of the meta loss allows the model to adjust the parameters considering the earlier tasks; hence, MetaCLGraph outperforms the ER-GNN method.

In Figure 3, the performance matrices of the benchmark methods and MetaCLGraph are visualized. The performance scores are presented for all of the tasks after each task is observed. It can be observed for each dataset that the changes in the performance scores for MetaCLGraph are less pronounced than those for the other methods, thanks to a relatively better preservation of the learned information.

## 5.5 Computational Costs

Table 3: The computational cost for the various baselines and MetaCLGraph.

| Method      | Time per experiment (s) |
|-------------|-------------------------|
| Base Method | 159.77                  |
| EWC         | 381.46                  |
| TWP         | 418.71                  |
| MAS         | 177.50                  |
| ER-GNN      | 234.87                  |
| MetaCLGraph | 616.28                  |

The computational costs are also investigated and reported in Table 3. The experiments are repeated 5 times on the Corafull dataset. The results show that the computational cost of our proposed model surpasses that of the baseline models. Although our model has the highest cost among the compared methods, this cost is justifiable: its performance is beyond that of the other methods, and the costs are measured on the dataset with the greatest number of tasks among the tested datasets. Since Corafull contains the greatest number of tasks among the benchmark datasets, the cost difference with respect to the baselines would not be larger on the other datasets, so the applicability of our model is not limited.

## 5.6 Ablation

This section focuses on the impact of meta learning and experience replay on the performance of MetaCLGraph in class-incremental learning. MetaCLGraph variants that exclude either meta learning or experience replay are examined to understand their contributions to the model performance.

![10_image_0.png](10_image_0.png)

Figure 3: The visualization of performance matrices for MetaCLGraph and the benchmark methods on the trained datasets. Each entry in the matrices represents the performance of the method on the column task while learning the row task. The performance metric in the visualization is accuracy, where lighter colours represent higher scores and darker colours represent lower scores.

For testing the effects of experience replay, the buffer budget for the experience replay is changed. When the buffer budget is 0, this corresponds to our approach without experience replay. On the other hand, our method without meta learning is equivalent to ER-GNN, whose results were given in Section 5.4. The results of the ablation study are given in Figure 4. It can be observed that the model using only meta learning gives results similar to those of the base model. Since this version of our proposed model does not store any information from earlier tasks and relies only on meta learning, it only learns the current task.
Therefore, its AP and AF scores are similar to those of the base model, since it adjusts its parameters only on the current task. It is also observed that increasing the number of stored samples improves our model's performance significantly. The number of samples per class is set to 10 in order to avoid storing an entire class or task inside the buffer. The results show that when the capacity of the buffer is increased, the performance is improved. However, increasing the capacity too much may lead to storing entire tasks, which does not serve the purpose of continual learning. Although some examples are stored inside the buffer, the continual learning setup requires that the data flow from earlier tasks stops once the training of a task is finished, since finished tasks become unreachable after training. When entire tasks start to be stored in the replay buffer, the model is retrained on already visited tasks; the continual learning setup is then no longer valid, and the model is effectively trained in a traditional supervised learning setup. This is observed in the average forgetting as well: since all examples of a class may be stored under a high buffer budget, the model is trained on a task more than once, and the average forgetting rate becomes greater than zero because the model has been trained for more than the specified number of epochs and its performance on that task is improved by meta learning and the additional epochs. Therefore, a limit on the number of stored examples in the buffer is imposed so that the integrity of continual learning can be maintained.

![11_image_0.png](11_image_0.png)

Figure 4: The effect of the buffer budget on MetaCLGraph, shown for AP (left) and AF (right). As the buffer budget increases, both the model performance and the forgetting scores improve due to the increase in the number of stored samples per class.

## 6 Conclusion

Summary. We present MetaCLGraph for graph continual learning, which merges the meta learning paradigm with the use of a replay buffer. MetaCLGraph is compared with baseline continual learning methods and graph continual learning methods. The experiments provide evidence for our hypothesis that the meta learning paradigm improves the efficiency of the replay buffer and mitigates the catastrophic forgetting problem by preserving the information obtained from earlier tasks. It is observed that MetaCLGraph outperforms the corresponding baselines in terms of the average performance and average forgetting measures.

Broad Impact. The MetaCLGraph framework can be used in applications on expanding networks where new members and connections emerge and their characteristics are discovered. The replay buffer of our model can be replaced with a different type of memory that stores class representations instead of class examples in order to address data security concerns.

Future Work. Our work has shown that the utilization of meta learning improves the efficiency of using a memory mechanism such as a replay buffer. Expanding from MetaCLGraph, a future research direction could be changing the memory mechanism from a replay buffer to a subconscious memory that stores representations for every observed class.

Limitations. Storing examples from earlier tasks creates a dilemma for memory-based solutions in terms of data sharing.
Our method stores only a limited number of examples, limiting the buffer capacity and avoiding the storage of an entire incoming task or class. By limiting the number of examples stored in the replay buffer, our method serves the purposes of continual learning well while alleviating data privacy and ethical concerns. Another limitation is scalability in the number of tasks, which may restrict the application of the proposed model since its overall complexity is already higher than that of its counterparts.

## References

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 139–154, 2018.

Nurul A. Asif, Yeahia Sarker, Ripon K. Chakrabortty, Michael J. Ryan, Md. Hafiz Ahamed, Dip K. Saha, Faisal R. Badal, Sajal K. Das, Md. Firoz Ali, Sumaya I. Moyeen, Md. Robiul Islam, and Zinat Tasneem. Graph neural network: A comprehensive review on non-euclidean space. *IEEE Access*, 9:60588–60606, 2021. doi: 10.1109/ACCESS.2021.3071274.

Craig Atkinson, Brendan McCane, Lech Szymanski, and Anthony Robins. Pseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks. *arXiv preprint arXiv:1802.03875*, 2018.

Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. *arXiv preprint arXiv:1707.03815*, 2017.

Jie Cai, Xin Wang, Chaoyu Guan, Yateng Tang, Jin Xu, Bin Zhong, and Wenwu Zhu. Multimodal continual graph learning with neural architecture search. In *Proceedings of the ACM Web Conference 2022*, pp. 1292–1300, 2022.

Lisha Chen, Sharu Theresa Jose, Ivana Nikoloska, Sangwoo Park, Tianyi Chen, Osvaldo Simeone, et al. Learning with limited samples: Meta-learning and applications to communication systems. *Foundations and Trends® in Signal Processing*, 17(2):79–208, 2023.

Xu Chen, Junshan Wang, and Kunqing Xie. Trafficstream: A streaming traffic flow forecasting framework based on graph neural networks and continual learning. *arXiv preprint arXiv:2106.06273*, 2021.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(7):3366–3385, 2021.

Eva Feillet, Grégoire Petit, Adrian Popescu, Marina Reyboz, and Céline Hudelot. AdvisIL - a class-incremental learning advisor. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2400–2409, 2023.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International Conference on Machine Learning*, pp. 1126–1135. PMLR, 2017.

Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. *arXiv preprint arXiv:1312.6211*, 2013.

Gunshi Gupta, Karmesh Yadav, and Liam Paull. Look-ahead meta learning for continual learning. *Advances in Neural Information Processing Systems*, 33:11588–11598, 2020.

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. *Advances in Neural Information Processing Systems*, 30, 2017.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec.
Open graph benchmark: Datasets for machine learning on graphs, 2021.

Shenyang Huang, Vincent Francois-Lavet, and Guillaume Rabusseau. Understanding capacity saturation in incremental learning. In *Canadian Conference on AI*, 2021.

Xuanwen Huang, Yang Yang, Yang Wang, Chunping Wang, Zhisheng Zhang, Jiarong Xu, Lei Chen, and Michalis Vazirgiannis. Dgraph: A large-scale financial dataset for graph anomaly detection. *Advances in Neural Information Processing Systems*, 35:22765–22777, 2022.

Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation learning for dynamic graphs: A survey. *The Journal of Machine Learning Research*, 21(1):2648–2720, 2020.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*, 2016.

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of Sciences*, 114(13):3521–3526, 2017.

Jeremias Knoblauch, Hisham Husain, and Tom Diethe. Optimal continual learning has perfect memory and is np-hard. In *International Conference on Machine Learning*, pp. 5327–5337. PMLR, 2020.

Frantzeska Lavda, Jason Ramapuram, Magda Gregorova, and Alexandros Kalousis. Continual classification learning using generative models. *arXiv preprint arXiv:1810.10612*, 2018.

Zhizhong Li and Derek Hoiem. Learning without forgetting. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 40(12):2935–2947, 2017.

Huihui Liu, Yiding Yang, and Xinchao Wang. Overcoming catastrophic forgetting in graph neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 8653–8661, 2021.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. *Advances in Neural Information Processing Systems*, 30, 2017.

Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 67–82, 2018.

Marc Masana, Xialei Liu, Bartłomiej Twardowski, Mikel Menta, Andrew D Bagdanov, and Joost Van De Weijer. Class-incremental learning: survey and performance evaluation on image classification. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(5):5513–5533, 2022.

Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of Learning and Motivation*, volume 24, pp. 109–165. Elsevier, 1989.

Martial Mermillod, Aurélia Bugaiska, and Patrick Bonin. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects, 2013.

Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, and Mehrdad Farajtabar. Architecture matters in continual learning. *arXiv preprint arXiv:2202.00275*, 2022.

Binh D Nguyen, Thanh-Toan Do, Binh X Nguyen, Tuong Do, Erman Tjiputra, and Quang D Tran. Overcoming data limitation in medical visual question answering.
In *Medical Image Computing and Computer Assisted Intervention - MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part IV 22*, pp. 522–530. Springer, 2019.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2001–2010, 2017.

Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. *arXiv preprint arXiv:2006.10637*, 2020.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. *AI Magazine*, 29(3):93–93, 2008.

Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In *International Conference on Machine Learning*, pp. 4548–4557. PMLR, 2018.

Mohamed Shahawy, Elhadj Benkhelifa, and David White. A review on plastic artificial neural networks: Exploring the intersection between neural architecture search and continual learning. *arXiv preprint arXiv:2206.05625*, 2022.

Shagun Sodhani, Sarath Chandar, and Yoshua Bengio. Toward training recurrent neural networks for lifelong learning. *Neural Computation*, 32(1):1–35, 2020.

Li Sun, Junda Ye, Hao Peng, Feiyang Wang, and S Yu Philip. Self-supervised continual graph learning in adaptive riemannian spaces. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 4633–4642, 2023.

Zhen Tan, Kaize Ding, Ruocheng Guo, and Huan Liu. Graph few-shot class-incremental learning. In *Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining*, pp. 987–996, 2022.

Binh Tang and David S Matteson. Graph-based continual learning. *arXiv preprint arXiv:2007.04813*, 2020.

Junshan Wang, Guojie Song, Yi Wu, and Liang Wang. Streaming graph neural networks via continual learning. In *Proceedings of the 29th ACM International Conference on Information & Knowledge Management*, pp. 1515–1524, 2020.

Junshan Wang, Wenhao Zhu, Guojie Song, and Liang Wang. Streaming graph neural networks with generative replay. In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 1878–1888, 2022.

Lingfei Wu, Peng Cui, Jian Pei, and Liang Zhao. *Graph Neural Networks: Foundations, Frontiers, and Applications*. Springer Singapore, Singapore, 2022.

Yishi Xu, Yingxue Zhang, Wei Guo, Huifeng Guo, Ruiming Tang, and Mark Coates. Graphsail: Graph structure aware incremental learning for recommender systems. In *Proceedings of the 29th ACM International Conference on Information & Knowledge Management*, pp. 2861–2868, 2020.

Xikun Zhang, Dongjin Song, and Dacheng Tao. Cglb: Benchmark tasks for continual graph learning. In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2022.

Fan Zhou and Chengtai Cao. Overcoming catastrophic forgetting in graph neural networks with experience replay. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 4714–4722, 2021.

Fan Zhou, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng. Meta-gnn: On few-shot node classification in graph meta-learning. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pp. 2357–2360, 2019.
Review 1: Summary: The submission proposes to combine "replay buffer" and "meta learning" to address continual learning problems on graph data. In particular, these techniques are applied to the training of graph neural networks in node classification tasks. With "replay buffer", nodes from previous classes are stored in the buffer for later training. With "meta learning", the model is updated by considering both the current task and previous tasks. The proposed model is applied to multiple graph classification tasks and shows superior performance over several competing methods. Strengths and Weaknesses: Strengths: 1. Graph neural networks with the two strategies are shown to have superior performance in controlled experiments. Weaknesses: 1. The work is somewhat incremental as it combines two known techniques. 2. The proposed method is general enough for most learning tasks, but I don't know why it is tested on graph learning tasks. If the authors want to claim the value of the proposed new method, then it should be tested over general learning tasks. 3. The learning tasks in the experiment section seem to be all made-up problems. Could you elaborate more on the practical value of solving these problems? I am also wondering whether there are actual continual learning problems in this field. 4. The writing is a little bit redundant. In particular, some content is repeated (e.g. the discussion of meta learning in sections 2 and 3; the discussion of the meta update at the beginning of section 4.2 and in section 4.2.2). The first three sections can be greatly compressed. Requested Changes: While it is hard to suggest any changes to increase the novelty of the work, I would like to see a more rigorous study of the proposed method. 1. Can you evaluate the proposed method on general learning tasks? If better performances are observed across multiple learning domains, the work still deserves a publication. If the proposed model only performs better on graph learning tasks, then a thorough explanation is needed. 2. The writing of the paper should be compressed. The text introduction of the proposed algorithm can be clearer. For example, the following text is hard to understand. The following text quoted from the submission contains examples of writing issues. Questions are in brackets. "First, the model is trained with Dtri in order to determine (*optimize until convergence?*) the parameters for the current task Ti. The parameters for the current task are determined (*some repetition of the previous sentence*) in order to find the direction of the optimal parameters (*how the "optimality" is defined?*) for that task. In order to achieve that (*what?*), the model weights are calculated (*how? Are model weights the same as parameters?*) and stored. After that, the gradients are calculated (*which gradients, and how are they computed?*) and the model weights (*are they from the stored weights? If not, then how are gradients calculated*), which are called as fast weights θfast, are updated, shown in Algorithm 4, line 4." Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: This paper tackles the challenge of continual learning where new classes and tasks continually emerge over time, with the idea of using the replay buffer that can save and learn with the important samples and the meta-learning that can learn model parameters over the distribution of tasks.
The authors evaluate the proposed MetaCLGraph on the node classification task of graph-structured data, showing that it can not only improve the performance but also mitigate the forgetting issue. Strengths and Weaknesses: ### Strengths * The idea of using the replay buffer often used in the continual learning setup over the meta-learning framework is interesting. * The proposed MetaCLGraph significantly outperforms relevant baselines. * This paper is well-written and easy to follow. ### Weaknesses * The main weakness is that this work has an incremental contribution against existing works, namely ER-GNN and LA-MAML. In particular, this work directly utilizes the algorithms from both prior works (i.e., using the replay buffer while performing the meta-learning) and does not have clear motivations on why using both of them is necessary. * The comparison against the LA-MAML model is missing; meanwhile, this LA-MAML, which is the basis model of the proposed MetaCLGraph, is the direct and important baseline to compare against. * The computation costs for performing the meta-learning might be very high especially when the number of tasks is large, in contrast to using other continual learning techniques (e.g., replay buffer or regularization), due to multiple optimizations (i.e., inner and meta updates). It may be beneficial to discuss them. * The coverage maximization method (Section 4.2.1), which is an important notion when selecting and storing samples in the memory, is not clearly described. Requested Changes: Please address the aforementioned weaknesses in marginal contribution against existing works, comparisons against baselines, computational costs, and clarity. Broader Impact Concerns: The authors clearly discuss the broader impact and limitations of the proposed work in Section 6. ================================================== Review 3: Summary: This paper proposes a model consisting of experience replay and meta learning for the continual learning problem. Also, the learning rate of each parameter is gradually decreased according to the gradient on the parameter. Strengths and Weaknesses: There are multiple concerns about this paper, including some major ones. 1. The so-called meta learning in this work is not real meta learning. It is simply optimizing the models with two steps: 1. optimize with the current task data. 2. update the model again with the extended dataset consisting of both current task data and stored data. While meta-learning is learning to learn and typically finds an optimal starting point for the model so that the model can easily adapt to any task. Therefore, the motivation of this 'meta-learning' operation is questionable: why not directly train the model on the extended dataset containing both the current task data and the stored data, so that the model optimization considers both new and old data? 2. Continual learning on graphs has been extensively studied recently, but the related work does not reflect this and only mentions a few works on this; only Zhou & Cao (2021) is mentioned. I would recommend that the authors carefully investigate the recent related works. 3. The writing can be significantly improved. For example, some sentences are very redundant, like the sentences in 4.2.2: 'First, the model is trained with Dtri in order to determine the parameters for the current task Ti. The parameters for the current task are determined in order to find the direction of the optimal parameters for that task. In order to achieve that, the model weights are calculated and stored.'
All three of these sentences are talking about the same thing. 4. In line 5 of Algorithm 4, how are the gradients with respect to the learning rate alpha calculated? 5. Nodes are stored in the memory. However, when replaying them, does the GNN only take in single nodes? This is weird because GNNs aggregate information from a neighborhood containing multiple nodes (message passing). If one single node is stored, its neighbors are not stored and the message passing cannot work. 6. If the learning rates monotonically decrease, the proposed model will be less and less capable of adapting to new tasks. Requested Changes: All the concerns mentioned above should be addressed to improve the paper. Broader Impact Concerns: N/A ================================================== Review 4: Summary: In this work, the authors presented MetaCLGraph for graph continual learning, which combines meta learning and experience replay techniques. By using experience replay / a memory module, the proposed model is able to mitigate the impact of catastrophic forgetting, which is a major challenge in continual learning. With meta learning, fast prototypes of the model weights are first computed on the new task without any updates to the model itself, and then a meta loss is used to learn a joint representation for previous tasks. This work targets the node classification task in the class-incremental learning setting on graphs. The contributions can be summarized as follows: - combining ideas from the replay buffer and meta learning to form a new learning method for graph continual learning - strong empirical performance on current graph continual learning benchmark datasets. Strengths and Weaknesses: First, I will list the strengths of this work: 1. The authors presented MetaCLGraph, a combination of a replay buffer and meta learning. It is interesting to combine these two previous ideas, and it shows a new approach for graph continual learning. 2. MetaCLGraph showed strong empirical performance on the node classification task across four graph datasets in the class-incremental learning setting based on the CGLB benchmark. 3. Extensive ablation studies are conducted to validate the components of this approach as well as the effect of the number of tasks. Now I will discuss some weaknesses and room for improvement (more details: see requested changes): 1. the related work discussion on continual learning is incomplete. Outside of catastrophic forgetting, continual learning also faces the challenge of capacity saturation, which was pointed out in various previous works. This discussion should be added. 2. missing discussion on how graph continual learning is related to another research area: dynamic graph or temporal graph learning. The idea of adding more nodes / classes over time is exactly the setting seen in dynamic graphs, and some discussion should be added. 3. some parts of the writeup seem confusing and counter-intuitive, which needs improvement Requested Changes: Following on the weaknesses, here are my requested changes to the paper; [important] means it is critical to my recommendation for acceptance, [minor] means it will improve the paper:
1. [important] The authors briefly mention the dynamic nature of real-world graphs (or dynamic graphs) in Section 3: "In real-world scenarios, graphs tend to be dynamic and the boundaries between the tasks do not usually exist". How is class-incremental learning on graphs different from classification on temporal graphs? Would it be more natural to learn continuously on temporal graphs, as new classes naturally arise over time, rather than the current setup of splitting the classes manually into different tasks? I think some discussion on this point would be helpful. I will also provide some references for temporal graphs [1,2,3]. [1] Huang, Xuanwen, et al. "Dgraph: A large-scale financial dataset for graph anomaly detection." Advances in Neural Information Processing Systems 35 (2022): 22765-22777. [2] Rossi, Emanuele, et al. "Temporal graph networks for deep learning on dynamic graphs." arXiv preprint arXiv:2006.10637 (2020). [3] Kazemi, Seyed Mehran, et al. "Representation learning for dynamic graphs: A survey." The Journal of Machine Learning Research 21.1 (2020): 2648-2720. 2. [important] In the related work, the authors only discussed the challenge of catastrophic forgetting in continual learning. However, this is not the only challenge; another significant challenge is capacity saturation, where the architecture of a neural network needs to continue to grow / adapt when more information arrives; otherwise it would be limited in learning by the fixed architecture. This is discussed in various previous works, including [1,2,3]. The same challenge exists in graph continual learning: if the GNN architecture is fixed, then its learning may become limited as a large number of new classes arrives. [1] Huang, Shenyang, Vincent Francois-Lavet, and Guillaume Rabusseau. "Understanding Capacity Saturation in Incremental Learning." Canadian Conference on AI. 2021. [2] Shahawy, Mohamed, Elhadj Benkhelifa, and David White. "A Review on Plastic Artificial Neural Networks: Exploring the Intersection between Neural Architecture Search and Continual Learning." arXiv preprint arXiv:2206.05625 (2022). [3] Mirzadeh, Seyed Iman, et al. "Architecture matters in continual learning." arXiv preprint arXiv:2202.00275 (2022). 3. [minor] "it allows our model not to fit the optimal parameters for a given task". It seems counter-intuitive to not want the model to find the optimal parameters for a given task. The authors should explain this part better in Section 4.2. The goal of continual learning is to find an optimal parameter set for the joint set of tasks at time t. 4. [minor] the explanation in Section 4.2.2 about the model training on the merged dataset is very confusing. Is theta_fast trained with only the data from the new task or with the merged dataset? How is the merged dataset utilized to compute the loss? Broader Impact Concerns: There are no concerns on the ethical implications of the work. The authors also addressed this in the broader impact section of the paper. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The reviewers generally agree that this is a sound submission, with good experimental results, while they have also found the idea of combining a replay buffer with meta learning interesting. However, the paper still has some small issues.
I am thus recommending accept with a minor revision, and I request that the final version of the manuscript implements any remaining items promised during the discussion period and carefully considers the final reviewers' comments, particularly the items listed below: - provide the overall computational complexity of the proposed method, compare it to the complexity of the baselines and discuss whether this could limit the applicability of the proposed method to large scale datasets (Reviewer JeUv). - revise the related work section on continual learning on graph data to better acknowledge work that has appeared in the past few years and also discuss how the present work is different from closely related works (Reviewer s6gH). - improve the quality of the presentation and clarity throughout the paper (Reviewers yCNC and s6gH). ==================================================
# Learning Network Granger Causality Using Graph Prior Knowledge

Lucas Zoroddu lucas.zoroddu@ens-paris-saclay.fr
Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190, Gif-sur-Yvette, France

Pierre Humbert *pierre.humbert@universite-paris-saclay.fr*
Université Paris-Saclay, CNRS, Inria, Laboratoire de mathématiques d'Orsay, F-91405, Orsay, France

Laurent Oudre laurent.oudre@ens-paris-saclay.fr
Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190, Gif-sur-Yvette, France

Reviewed on OpenReview: https://openreview.net/forum?id=DN6sut5fyR

## Abstract

Understanding the relationships among multiple entities through Granger causality graphs within multivariate time series data is crucial across various domains, including economics, finance, neurosciences, and genetics. Despite its broad utility, accurately estimating Granger causality graphs in high-dimensional scenarios with few samples remains a persistent challenge. In response, this study introduces a novel model that leverages prior knowledge in the form of a noisy undirected graph to facilitate the learning of Granger causality graphs, while assuming sparsity. We introduce an optimization problem, propose to solve it with an alternating minimization approach, and prove the convergence of our fitting algorithm, highlighting its effectiveness. Furthermore, we present experimental results derived from both synthetic and real-world datasets. These results clearly illustrate the advantages of our proposed method over existing alternatives, particularly in situations where few samples are available. By incorporating prior knowledge and emphasizing sparsity, our approach offers a promising solution to the complex problem of estimating Granger causality graphs in high-dimensional, data-scarce environments.

## 1 Introduction

Multivariate time series analysis plays a crucial role in understanding and forecasting real-world phenomena involving multiple interdependent variables. Within such complex systems, the Granger causality graph (Granger, 1969) has emerged as a powerful tool to uncover directional relationships and causal influences. It has therefore been the basis for a wide range of applications in fields such as economics (Stock & Watson, 2016), neuroscience (Seth et al., 2015), gene regulation (Yao et al., 2013), and protein-protein interactions (Zou et al., 2010). Granger causality assesses whether the past values of one variable can help predict the future values of another variable. For instance, we say that a random variable X Granger-causes a variable Y if the past values of X contain information that enhances our ability to predict Y beyond what we could achieve using only the historical values of Y itself.

While Granger causality is a powerful tool for studying relationships between two variables, many real-world phenomena involve multiple interconnected variables. Network Granger causality extends the basic Granger causality concept by assuming linear causal relationships between p variables. Hence, it allows capturing the complex causal relationships among multiple variables. In this context, the aim is not to identify whether one variable influences another, but to unravel the intricate causal structure that underlies an entire system.
Network Granger causality is particularly relevant in fields like neuroscience, where researchers seek to understand how different regions of the brain interact and influence each other over time (Seth et al., 2015), in finance to capture market dynamics, or in climatology to investigate the complex interactions shaping Earth's climate system. It is also applicable to discovering cause-and-effect relationships within genetic pathways and protein-protein interactions, shedding light on the regulatory mechanisms that govern these biological processes (Yao et al., 2013).

A conventional approach to learning Granger causality graphs involves estimating the parameters of a Vector AutoRegressive (VAR) model based on observed multivariate time series (Lütkepohl, 2005). However, inferring the parameters of a VAR model can be non-trivial, particularly when dealing with high-dimensional data and limited samples. To address this issue, researchers have studied various regularization techniques, approaching the problem from both frequentist and Bayesian perspectives. From a frequentist standpoint, Basu et al. (2015) explored the application of a Group Lasso penalty and its consistency properties. Since then, several sparsity-inducing penalties have been proposed in the works of Kock & Callot (2015), Lin & Michailidis (2017), and Nicholson et al. (2020). On the Bayesian side, diverse prior distributions governing the parameters of the VAR model have been investigated in the literature. These include the use of Gaussian priors (Sims, 1993), Gaussian-inverted Wishart priors (Banbura et al., 2010), and hierarchical normal priors, as explored by Ghosh et al. (2018).

However, as underlined by Duan et al. (2023), the Granger causality graphs resulting from these methods often exhibit undesired characteristics, ranging from excessive density to pronounced disconnection, potentially conflicting with established scientific knowledge. To mitigate these issues and enhance the fidelity of Granger causality graphs, researchers have investigated the incorporation of supplementary information in the form of network structures. For instance, in gene network analysis, genes are often grouped into distinct pathways, and it is a common observation that interactions within a pathway are more frequent than those bridging different pathways (Marlin et al., 2012). In response, Yao et al. (2013) introduced penalization terms into the optimization process to ensure that the derived Granger graph aligns with this a priori knowledge. Another approach involves adopting a tree-rank prior distribution, enforcing the graph of Granger causalities to be a subgraph of a union of spanning trees (Duan et al., 2023). More recently, Lin et al. (2024) proposed to solve a Structural Equation Model incorporating constraints to obtain a DAG. Furthermore, in scenarios characterized by signals from physical processes recorded by sensors distributed across multiple spatial locations, the Euclidean k-NN (k-Nearest Neighbors) graph finds widespread adoption within the domain of Graph Signal Processing (Ortega et al., 2018), where it serves as a foundational element for tasks such as signal filtering, denoising, and prediction. For instance, in (Isufi et al., 2019), a VAR(MA) model is fitted as a polynomial of the Laplacian matrix of the kNN graph.
Nonetheless, most works within the scientific literature operate under the idealized assumption of possessing a complete and precise prior graph, an unrealistic supposition in many real applications.

Contributions: In this paper, we propose an optimization problem for learning VAR model parameters by utilizing prior knowledge about relationships within time series data, represented in the form of an undirected graph (cf. Figure 1). Our approach enables the incorporation of an incomplete and noisy prior undirected network when learning the Granger causality network, and we jointly learn the VAR parameters while denoising the initial graph. To do so, we propose a two-block coordinate descent algorithm and prove that it converges to a set of stationary points, strengthening the reliability of our approach. Moreover, we show that the optimization problem corresponds to the computation of a Maximum A Posteriori (MAP) of a particular statistical model. To validate the effectiveness of our method, we conduct a series of experiments encompassing synthetic and real-world datasets. Our findings convincingly demonstrate the superiority of our proposed model over vanilla alternatives for varying noise levels on the prior network, particularly in scenarios marked by data scarcity. This underscores the potential of our method as a valuable tool for deriving Granger causality graphs in practical settings across a spectrum of applications.

## 2 Preliminaries

In this section we present the basics of (Network) Granger causality and its relationship with Vector Autoregressive (VAR) models. We also give a brief overview of existing methods for learning VAR parameters.

![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)

Figure 1: Diagram of the proposed method. Left: Existing methods mainly infer causality graphs from time series only. Right: Our proposed method leverages both the time series and a prior undirected graph (encoding the 'likelihood' of link presence) to infer the causal graph.

## 2.1 Network Granger Causality

In order to define Network Granger causality, we recall the concept of classic Granger causality.

Definition 1 (Granger causality (Granger, 1969)). Let $x$ and $y$ be two stationary time series such that:

$$y[t]=\sum_{i=1}^{d}\alpha_{i}y[t-i]+\sum_{i=1}^{d}\beta_{i}x[t-i]+\epsilon[t]\ ,\tag{1}$$

where $\epsilon[t] \sim \mathcal{N}(0, \sigma^2)$, $d$ is the order of the model, and the parameters $\alpha_i, \beta_i$ are fitted by minimizing the least square error between $y$ and its reconstruction using (1). We say that $x$ Granger-causes $y$ if one of the $\{\beta_i\}_i$ is non-zero.

Remark 2. In Granger (1969), Granger causality is introduced as a statistical test based on the null hypothesis $H_0: \beta_1 = \beta_2 = \cdots = \beta_d = 0$. One can use an F-test to know whether the null hypothesis is rejected or not.

Remark 3 (Link between causality and Granger causality). While Granger causality can effectively capture causal relationships, its interpretation leans more towards predictive power than a direct expression of true causality. For instance, if $x$ and $y$ are driven by an unknown $z$, it is possible that $(x, y)$ rejects the null hypothesis whereas there is no causality between them.

To generalize Granger causality to $p$ variables, we introduce Vector Autoregressive (VAR) models. A VAR model of order $d \geq 1$, denoted VAR($d$), explains the values of a multivariate time series at time $t$ using a linear combination of its $d$ previously observed values.

Definition 4 (Vector Autoregressive Model (VAR)).
Given $d$ matrices $(\mathbf{C}^{\tau})_{\tau=1}^{d}$ in $\mathbb{R}^{p\times p}$, a VAR($d$) model is defined at each time $t = 1, 2, \ldots$ by:

$$X[t]=\sum_{\tau=1}^{d}\mathbf{C}^{\tau}X[t-\tau]+\varepsilon[t]\ ,\tag{2}$$

where $X[\cdot] = (X_1[\cdot], \ldots, X_p[\cdot])$ is a random $p$-dimensional time series and $\varepsilon[t] \sim \mathcal{N}(0, \sigma^2 I_p)$, $\sigma > 0$, is some innovation noise.

Traditionally, Granger causality has been based on the assumption of a VAR model (Lütkepohl, 2005), primarily conducting tests on the VAR coefficients in a bivariate context. However, in complex real-world systems with numerous time series, analyzing the relationship between only two series can result in misleading conclusions due to confounding factors (Lütkepohl, 2005). In practice, VAR models are often used to analyze relationships between several variables of interest. Indeed, the matrices $\{\mathbf{C}^{\tau}\}_{\tau=1}^{d}$ in Equation (2) capture specific temporal dependencies between the $p$ dimensions and are associated with the notion of Network Granger causality (NGC), which was introduced as a generalization of Granger causality in Basu et al. (2015) to adjust for possible confounders or jointly consider multiple series.

Definition 5 (Network Granger causality (Basu et al., 2015)). Let $X[t] = (X_1[t], \ldots, X_p[t]) \in \mathbb{R}^p$ for $t = 1, 2, \ldots$, be a $p$-dimensional time series following the VAR model defined in Eq. (2). Then, $X_j[\cdot]$ is called a Granger cause of $X_i[\cdot]$ if at least one matrix entry $\{\mathbf{C}^{\tau}_{i,j},\ \tau = 1, \ldots, d\}$ is non-zero.

The intuition is that if $\mathbf{C}^{\tau}_{i,j} \neq 0$ for some $\tau$, as in the bivariate case, past values of $X_j$ are useful to predict future values of $X_i$, independently from the other variables. Indeed, we can reduce the model to a bivariate case for each couple $(i, j)$ by writing:

$$X_{i}[t]=\sum_{\tau=1}^{d}\sum_{\substack{1\leq k\leq p \\ k\neq j}}\mathbf{C}_{i,k}^{\tau}X_{k}[t-\tau]+\sum_{\tau=1}^{d}\mathbf{C}_{i,j}^{\tau}X_{j}[t-\tau]\ ,$$

and then use the definition of classical Granger causality. Note that, as for classical Granger causality ($p = 2$), NGC does not necessarily capture true causal relationships, but rather indicates the predictive power of some variables for others. Nevertheless, NGC remains a powerful tool for understanding interactions between random time series, and its estimation is of practical interest.

## 2.2 Estimation Of The VAR Parameters

Given $N$ samples $X^{(n)}[t] \in \mathbb{R}^p$, $n = 1, \ldots, N$, at each time $t \in \llbracket 1, d \rrbracket$, stored in $\mathbf{X}[t] \in \mathbb{R}^{p\times N}$, one possibility to estimate the VAR parameters $\{\mathbf{C}^{\tau}\}_{\tau=1}^{d}$ is to maximize the likelihood of Model (2). This leads to the following Least Square problem:

$$\hat{\mathbf{C}}^{1:d}=\arg\min_{\mathbf{C}^{1},\ldots,\mathbf{C}^{d}}\ \frac{1}{2N}\left\|\mathbf{X}[t]-\sum_{\tau=1}^{d}\mathbf{C}^{\tau}\mathbf{X}[t-\tau]\right\|_{F}^{2}.\tag{3}$$

This minimisation can be divided into $p$ independent sub-problems, which are easier to handle. For $j = 1, \ldots, p$, they are given by:

$$\arg\min_{\mathbf{C}^{1}_{j:},\ldots,\mathbf{C}^{d}_{j:}}\ \frac{1}{2N}\left\|\mathbf{X}_{j}[t]-\sum_{\tau=1}^{d}\mathbf{C}^{\tau}_{j:}\mathbf{X}[t-\tau]\right\|_{2}^{2},\tag{4}$$

where $\mathbf{C}^{\tau}_{j:}$ refers to the $j$-th row of $\mathbf{C}^{\tau}$. Each sub-problem (4) is a standard linear problem and can therefore be solved efficiently. In high-dimensional settings, it can still be challenging to fit VAR($d$) models because the model has $d \times p^2$ degrees of freedom. Some regularization approaches were developed to overcome this limitation, both from a frequentist and Bayesian point of view.
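As an illustration of the row-wise least-squares estimation of Eqs. (3) and (4), the following is a minimal numpy sketch. It assumes, for simplicity, a single observed trajectory `X` of shape (T, p) instead of the N short replicas considered later in the paper.

```python
import numpy as np

def fit_var_ols(X, d=1):
    """Least-squares estimate of VAR(d) parameters (Eqs. (3)-(4)).

    X has shape (T, p): one observed p-dimensional series of length T.
    Returns C of shape (p, d*p), the stacked matrices [C^1, ..., C^d].
    """
    T, p = X.shape
    # Design matrix: lagged values [X[t-1], ..., X[t-d]] for each target X[t].
    Z = np.hstack([X[d - tau : T - tau] for tau in range(1, d + 1)])
    Y = X[d:]
    # Each row of C solves an independent linear regression (Eq. (4)).
    C, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return C.T  # row j holds [C^1_{j:}, ..., C^d_{j:}]
```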
Several authors suggest adding a penalty term to learn the VAR parameters. The optimisation problem then becomes:

$$\arg\min_{\mathbf{C}^{1}_{j:},\ldots,\mathbf{C}^{d}_{j:}}\ \frac{1}{2N}\left\|\mathbf{X}_{j}[t]-\sum_{\tau=1}^{d}\mathbf{C}^{\tau}_{j:}\mathbf{X}[t-\tau]\right\|_{2}^{2}+\lambda\,\mathcal{P}(\mathbf{C}^{1}_{j:},\ldots,\mathbf{C}^{d}_{j:})\ ,\tag{5}$$

where $\mathcal{P}$ is a penalization term. The Lasso shrinkage technique (L1 penalization), as introduced by Tibshirani (1996), has been integrated into various models to account for specific data properties. In the study by Basu et al. (2015), for example, the Group Lasso is proposed and explored to leverage knowledge of grouping structures. Lasso shrinkage proves effective in enhancing lag order selection, as demonstrated by Shojaie & Michailidis (2010) using a truncated Lasso estimator and by the hierarchical model of Nicholson et al. (2020). The Adaptive Lasso (or weighted Lasso), introduced by Zou (2006), finds applications in improving variable selection (Zou, 2006), addressing non-stationary time series, and proposing online algorithms (Messner & Pinson, 2019) by leveraging previous parameters to estimate the next ones. It is worth noting that the conventional approach for utilizing the Adaptive Lasso involves initially solving the least square problem (without constraints) to obtain an initial estimator $\hat{\mathbf{C}}^{OLS}$, followed by setting the weights to $w_{i,j} = 1/|\hat{\mathbf{C}}^{OLS}_{i,j}|$. Additionally, the Adaptive Lasso has proven to be a relevant method for incorporating prior knowledge about parameters, as exemplified by the consideration of past values in (Messner & Pinson, 2019). Another commonly used estimator, the Ridge estimator, is examined in the study of Ballarin (2022) within the context of VAR models, where the author discusses the implications of anisotropic penalization. It is important to note that for some of these models, theoretical guarantees have been established regarding the selection consistency of Lasso-based methods (refer to Table 1).

Table 1: Main regularizations for linear regression.

| Method         | Penalization term                              | Guarantees                                | Statistical Model                                                                                                                                                      |
|----------------|------------------------------------------------|-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Ridge          | $\lambda\|u\|_2^2$                             | -                                         | $u \sim \mathcal{N}(0, I_p)$                                                                                                                                           |
| Lasso          | $\lambda\|u\|_1$                               | Selection consistency                     | $u_i \overset{iid}{\sim} \mathrm{Laplace}(0, 1)$                                                                                                                       |
| Adaptive Lasso | $\lambda \sum_k w_k \lvert u_k \rvert$         | Selection consistency (Zou, 2006)         | $u_i \overset{ind}{\sim} \mathrm{Laplace}(0, \frac{1}{w_i})$                                                                                                           |
| Group Lasso    | $\lambda \sum_g w_g \|u_{[g]}\|_2$             | Direction consistency (Basu et al., 2015) | $u_g \mid \tau_g, \sigma^2 \overset{ind}{\sim} \mathcal{N}(\mu_g, \tau_g^2 \sigma^2 I_{m_g})$, $\tau_g^2 \sim \mathrm{Gamma}(\frac{m_g+1}{2}, \frac{\lambda^2}{2})$ |

As highlighted in the introduction, some Bayesian models have been introduced to learn VAR parameters by incorporating prior distributions. These priors aim to prioritize certain lags, with lower lags commonly assumed to be more informative than higher ones, or to impose specific structures on the graph, such as a spanning-tree structure (Sims, 1993; Banbura et al., 2010; Ghosh et al., 2018; Duan et al., 2023). A comprehensive survey of existing Bayesian VAR models and their associated sampling algorithms is presented in the work of Miranda Agrippino & Ricco (2018).
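As an illustration of problem (5) with an L1 penalty, the sketch below solves one Lasso regression per row using scikit-learn; `lam` plays the role of λ and the lag construction follows the least-squares sketch above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_var_lasso(X, d=1, lam=0.1):
    """Sparse VAR(d) estimate via an L1 penalty on each row (Eq. (5))."""
    T, p = X.shape
    # Same lagged design matrix as in the least-squares sketch.
    Z = np.hstack([X[d - tau : T - tau] for tau in range(1, d + 1)])
    C = np.zeros((p, d * p))
    for j in range(p):
        # One independent Lasso regression recovers the rows C^{1:d}_{j:}.
        reg = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
        reg.fit(Z, X[d:, j])
        C[j] = reg.coef_
    return C
```

Under the same setup, the Adaptive Lasso can be emulated by dividing each column k of the lag matrix by its weight w_k before fitting and dividing the recovered coefficients by w_k afterwards.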
However, note that in some cases, frequentist penalizations correspond to computing the Maximum A Posteriori (MAP) estimator of a particular Bayesian model, so it can be useful to consider a model from both points of view to gain a deeper understanding of the solution. We refer to Table 1 for the correspondences between conventional penalization terms and their associated Bayesian models.

## 3 Model And Framework

In this section, we present our main contributions. In Section 3.1, we first introduce an optimization problem allowing us to incorporate graph prior knowledge in the estimation of the parameters of a VAR model. Then, in Section 3.4, we describe an algorithm based on alternating minimization to solve this problem. In Section 3.5, we prove that this algorithm converges to a stationary point and that its time complexity for a fixed number of iterations is of the same order as that of a Lasso estimator. Finally, we link this optimization problem to a maximum a posteriori of a certain statistical model, which is presented in Section 3.6. In the following, we make two classical assumptions:

Assumption 1. *The time series are generated by a* VAR(1) *model:*

$$X[t]=\mathbf{C}X[t-1]+\varepsilon[t]\;,\tag{6}$$

*so we only need to learn one matrix* C*. However, note that the assumption* d = 1 *is not limiting since a* VAR(d) *model can always be written as a* VAR(1) *model (Lütkepohl, 2005). This generalization is detailed in Section 3.7.*

Assumption 2. *Hamilton (1994) showed that obtaining reliable estimates of* VAR *parameters necessitates stationarity of* X[·]*. Subsequently, Lütkepohl (2005) established that the model* (6) *is stable if and only if* ρ(C) < 1*, where* ρ *is the spectral radius. This assumption will be maintained in our analysis.*

## 3.1 Problem Formulation

In general, the estimation of the parameters of the VAR model (2), i.e. the matrix C, requires the observation of a long stationary realization of the p-dimensional time series. However, in many applications, we only observe short replicas of the time series, and additional information must be incorporated into the model to obtain accurate estimates. We propose to leverage prior knowledge on the structure of the matrix C by assuming that it is statistically related to a given matrix Aprior, which is the adjacency matrix of an undirected graph encoding a priori relationships between the individual dimensions. More specifically, the idea is to encode the following property: if two nodes (i, j) are not likely to be linked, $\mathbf{A}^{\mathrm{prior}}_{i,j}$ is small and the value of the associated coefficient Ci,j is more likely to be close to zero (meaning that there is no Granger causality between the two time series Xi[·] and Xj[·]). While forbidding certain links in causal discovery is a standard approach, used in recent methods like PCMCI (Runge et al., 2019), this assumption may be too strong, and this issue will be addressed in the paper (cf. Remark 8). In addition, since the prior information is rarely perfectly accurate in most applications, we assume that Aprior is not necessarily the optimal prior knowledge. We therefore want to allow the model to refine this prior through iterations, i.e. to compute a matrix A close to Aprior but not equal to it. To understand the relationships between these matrices, a statistical point of view is derived in Section 3.6, which provides the mathematical relationships between C, Aprior and A.
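The stability condition of Assumption 2 is easy to check numerically; a minimal sketch (our illustration):

```python
import numpy as np

# The VAR(1) model (6) is stable iff the spectral radius of C is below 1.
def is_stable(C: np.ndarray) -> bool:
    return float(np.max(np.abs(np.linalg.eigvals(C)))) < 1.0

rng = np.random.default_rng(1)
C = rng.normal(size=(8, 8)) / 8   # small i.i.d. entries are typically stable
print(is_stable(C))
```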
Formally, let us consider N independent wide-sense stationary multivariate time series (Assumption 2) X^{(1)}[1 : d], ..., X^{(N)}[1 : d]. In addition, suppose that we have access to a matrix Aprior summarizing our prior knowledge on the relationships between each pair of variables. To estimate the VAR parameters, we introduce the following optimization problem:

$$\min_{\mathbf{A},\ \mathbf{C}}\ \frac{1}{N}\sum_{n=1}^{N}\left\|X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]\right\|_{2}^{2}+\lambda\sum_{1\leq i<j\leq p}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}$$
$$+2\lambda\sum_{1\leq i<j\leq p}\log(2\mathbf{A}_{i,j})+\gamma\left\|\mathbf{A}-\mathbf{A}^{\mathrm{prior}}\right\|_{F}^{2}\,\tag{7}$$
$$\mathrm{subject\ to}\ \ \mathbf{A}_{i,j}\geq0\,,\ \mathbf{A}_{i,j}=\mathbf{A}_{j,i}\,,\ \ 1\leq i<j\leq p\,.$$

Equation (7) contains three terms:

(i) $\frac{1}{N}\sum_{n=1}^{N}\left\|X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]\right\|_{2}^{2}$ corresponds to the Least Squares objective: this term allows us to measure the difference between the original signals and their reconstructions. Recall that in this section we only consider the VAR(1) model, hence t = 2.

(ii) $\sum_{1\leq i<j\leq p}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}+2\sum_{1\leq i<j\leq p}\log(2\mathbf{A}_{i,j})$ is the penalization term that takes the graph prior knowledge into account. This penalization is inspired by the one used in Adaptive Lasso models, where the terms 1/Ai,j act as weights. The higher Ai,j, the closer i and j are in the graph, and the lower the penalty on Ci,j and Cj,i. It should be noted that the additional term composed of the sum of logarithms is a normalization term linked to the associated statistical model (see Section 3.6).

(iii) $\left\|\mathbf{A}-\mathbf{A}^{\mathrm{prior}}\right\|_{F}^{2}$ is a regularization term accounting for the fact that Aprior is not necessarily the optimal prior knowledge and could therefore be further refined. Adding this term to the optimization problem is in fact equivalent to imposing a normal prior distribution on the coefficients Ai,j (see Section 3.6). In practice, this term increases the robustness to noise in the prior knowledge.

The symmetry constraint Ai,j = Aj,i for 1 ≤ i < j ≤ p is applied as A represents the adjacency matrix of the denoised undirected prior graph knowledge.

Remark 6. *This work presents a model leveraging a prior undirected weighted graph. This choice is based on the idea that, for some applications, we have ideas about which variables are linked together but we do not know the directions of causality (*i → j *or* j → i*?). However, it is possible to slightly modify the model to leverage a directed graph prior knowledge (the optimization problem remains the same except that we split the term involving* Ai,j *and that we remove the symmetry constraint).*

Finally, problem (7) depends on two hyperparameters λ and γ: λ controls the sparsity of the learned graph and γ controls the confidence in the prior graph.

## 3.1.1 Interpretation Of The Optimization Problem

- Aprior is a p×p symmetric matrix, seen as the adjacency matrix of an undirected graph, corresponding to the prior knowledge of the Network Granger causality structure.

- C is a p × p matrix corresponding to the VAR(1) parameters, seen as the adjacency matrix of the directed graph of Granger causality.
- A is a p × p symmetric matrix, seen as the adjacency matrix of an undirected graph, corresponding to the denoised version of the prior graph Aprior obtained during the optimization process.

- X^{(i)}[t] (t = 1, 2) are the multivariate time series in R^p generated by the VAR(1) model with parameters C.

The idea of this optimization problem is to find a sparse matrix C allowing to forecast X, using Aprior as a first estimate of the model parameters. The main advantage here is that the prior network is taken into account during the learning process and allows a relevant variable selection to be performed. Indeed, even though variable selection consistency has been proved for estimators like the Lasso or the Adaptive Lasso (cf. (Zou, 2006)), this property only holds asymptotically in the number of samples. Here, by assigning weights to pairwise relationships, our method guides the variable selection using (noisy) prior knowledge in order to be efficient in settings where few samples are available.

## 3.2 Motivating Examples

Before going into the details of the solving method, we illustrate here some use cases for our method, and how the graph Aprior can be constructed depending on the problem.

Multi-sensor networks: In the context of multi-sensor networks, Aprior can be computed from the distances between the sensors. For certain applications, the recordings are associated with physical processes, suggesting that causal links are more likely to exist between nearby sensors than between distant ones: for instance, temperature data are governed by complex equations but exhibit spatial continuity. Consequently, in the experiment with the Molene dataset (Sec. 4.5.2), Aprior is computed by applying a Gaussian kernel to the pairwise distances, $\mathbf{A}^{\mathrm{prior}}_{i,j} = K_{\sigma}(d(x_i, x_j))$, in order to increase the penalization of causal links between far away sensors (cf. Figure 2); a minimal code sketch of this construction is given at the end of this section.

![7_image_0.png](7_image_0.png)

Figure 2: Prior graph for the multi-sensor network example.

Protein interactions: Expert knowledge from the literature can be used to construct a score that reflects how likely two proteins are to interact based on previous experiments. In this case, $\mathbf{A}^{\mathrm{prior}}_{i,j}$ is the score of the couple (i, j). This can lead to a prior knowledge graph about pairwise protein interactions that guides the causal discovery process. An interesting example is the one presented in Carlin et al. (2017), obtained by aggregating various established pathways from Pathway Commons (Cerami et al., 2010).

Intracardiac electrocardiograms: In this scenario, we measure electrical activities in the heart using multiple electrodes. While this example falls within the case of multi-sensor networks, and the prior graph can be constructed as previously explained, Aprior can also be computed from the similarity between the signals recorded by the electrodes, using cross-correlation for example. This allows guiding the algorithm to find causality links between the most similar signals (cf. Figure 3).

![7_image_1.png](7_image_1.png)

Figure 3: Construction of the graph prior for the intracardiac signals example.
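As announced above, here is a minimal sketch of the multi-sensor prior construction: a Gaussian kernel applied to the pairwise sensor distances, with the bandwidth set by the median heuristic (the choice also used for the synthetic data in Section 4.3.1). This is our own illustration; the Molène experiment of Section 4.5.2 uses exp(−dist/median) instead.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Gaussian-kernel prior from sensor coordinates, median-bandwidth heuristic.
def gaussian_prior(positions: np.ndarray) -> np.ndarray:
    """positions: (p, 2) sensor coordinates; returns the (p, p) matrix A^prior."""
    D = squareform(pdist(positions))     # pairwise Euclidean distances
    sigma = np.median(D[D > 0])          # median heuristic for the bandwidth
    return np.exp(-D**2 / (2 * sigma**2))

A_prior = gaussian_prior(np.random.default_rng(2).uniform(size=(32, 2)))
```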
## 3.3 Links With Existing Models

The work of Yao et al. (2013) is the closest one to ours. They propose to leverage prior knowledge Aprior ∈ R^{p×p} by solving:

$$\hat{\mathbf{C}}=\underset{\mathbf{C}}{\operatorname{argmin}}\ \frac{1}{2}\|X[t]-\mathbf{C}X[t-1]\|_{F}^{2}+\frac{1}{2}\lambda_{1}\,\mathcal{N}(\mathbf{C}-\lambda_{2}\mathbf{A}^{\mathrm{prior}})\;,\tag{8}$$

with N : M ↦ ∥M∥F or N : M ↦ ∥M∥1. However, this model works under the assumption that we have prior knowledge about the exact values of the matrix C, which can be unrealistic in some applications. Consequently, this framework requires knowing the signs of the values in C, since the optimization problem penalizes the component-wise difference between C and Aprior. The authors propose to address this issue by hand, computing a Ridge estimator of the VAR model to choose the signs of Aprior. On the contrary, by exploiting a weighted Lasso, our model does not suffer from these limitations and only requires relative prior knowledge about the relationships.

Our method is also close to the Adaptive Lasso estimator obtained with:

$$\min_{\mathbf{C}}\ \frac{1}{N}\sum_{n=1}^{N}\left\|X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]\right\|_{2}^{2}+\lambda\sum_{1\leq i<j\leq p}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}^{\mathrm{prior}}}\,.\tag{9}$$

Indeed, considering only the first two terms of our problem, we obtain exactly the Adaptive Lasso one. However, the Adaptive Lasso estimator is not robust to errors in the weights, since asymptotic consistency is achieved by taking as weights unbiased estimators of the true solution (Zou, 2006). Thus, the last term in our optimization problem allows the prior to be refined through iterations using the information contained in the time series, resulting in a more robust version of the Adaptive Lasso, as will be seen in the experiments (Section 4).

## 3.4 Solving Method: A-Adaptivelasso (Aalasso)

The function to minimize in (7) is not convex in (A, C). Indeed, we can show that even with C fixed, the function in A is not convex (c.f. Appendix A). However, the function is convex in C with A fixed (Adaptive Lasso problem), and we have a closed form for the roots of the derivative in A (with C fixed). Thus, we use an alternating minimization algorithm to solve this problem.

## 3.4.1 C Update

For fixed A, the optimization problem (7) with respect to C is:

$$\min_{\mathbf{C}}\ \frac{1}{N}\sum_{n=1}^{N}\left\|X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]\right\|_{2}^{2}+\lambda\sum_{1\leq i<j\leq p}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}\,.\tag{10}$$

From Eq. (10), we see that the optimization step in C is an Adaptive Lasso problem with weights equal to 1/Ai,j (Zou, 2006). A common way to solve an Adaptive Lasso regression is to remark that it can be written as a Lasso problem. Indeed, considering $\tilde{\mathbf{C}}_{i,j} = \mathbf{C}_{i,j}/\mathbf{A}_{i,j}$, we can write the problem as a Lasso one by transforming X. While there is no closed-form formula to compute the Lasso estimator in the general case, common ways to solve it are to use: (i) the least-angle regression (LARS) algorithm (Efron et al., 2004), (ii) algorithms based on coordinate descent (Wu & Lange, 2008), or (iii) the ADMM algorithm (Boyd, 2010). A recent survey presents some of these algorithms and provides their convergence rates (Zhao & Huo, 2023). Details of the ADMM updates used to solve Problem (10) are presented below. Let i ∈ J1, pK; we first rewrite (10) as p subproblems:

$$\min_{\mathbf{C}_{i:}}\ \frac{1}{N}\left\|X_{i}[t]-\mathbf{C}_{i:}X[t-1]\right\|_{2}^{2}+\lambda\sum_{j=1}^{p}\frac{|\mathbf{C}_{i,j}|}{\mathbf{A}_{i,j}}\ ,\ \ i=1,...,p\ ,\tag{11}$$

with Xi[t] ∈ R^N the vector containing the N samples of variable i and X[t − 1] ∈ R^{p×N} the matrix of lagged samples.
Then we rewrite (11) as a Lasso problem:

$$\min_{\tilde{\mathbf{C}}_{i:}}\ \frac{1}{N}\left\|X_{i}[t]-\tilde{\mathbf{C}}_{i:}\tilde{X}[t-1]\right\|_{2}^{2}+\lambda\sum_{j=1}^{p}|\tilde{\mathbf{C}}_{i,j}|\ ,\ \ i=1,...,p\,,\tag{12}$$

where $\tilde{\mathbf{C}}_{i,j} = \mathbf{C}_{i,j}/\mathbf{A}_{i,j}$ and, for each row i, $\tilde{X}[t-1]=(\mathbf{A}_{i,j}\times X_{j}[t-1])_{1\leq j\leq p}$. In order to use the ADMM algorithm, we rewrite the problem as follows:

$$\min_{\tilde{\mathbf{C}}_{i:},U}\ \frac{1}{N}\left\|X_{i}[t]-\tilde{\mathbf{C}}_{i:}\tilde{X}[t-1]\right\|_{2}^{2}+\lambda\|U\|_{1}\quad\mathrm{subject\ to}\quad\tilde{\mathbf{C}}_{i:}-U=0\;.$$

Finally, the ADMM updates are:

$$\tilde{\mathbf{C}}_{i:}^{+}=\arg\min_{\tilde{\mathbf{C}}_{i:}}\ \frac{1}{2}\|X_{i}[t]-\tilde{\mathbf{C}}_{i:}\tilde{X}[t-1]\|_{2}^{2}+\frac{\rho}{2}\|\tilde{\mathbf{C}}_{i:}-U+W\|_{2}^{2}=(\tilde{X}[t-1]\tilde{X}[t-1]^{T}+\rho I)^{-1}(\tilde{X}[t-1]X_{i}[t]^{T}+\rho(U-W))$$
$$U^{+}=\arg\min_{U}\ \lambda\|U\|_{1}+\frac{\rho}{2}\|\tilde{\mathbf{C}}_{i:}^{+}-U+W\|_{2}^{2}=S_{\lambda/\rho}(\tilde{\mathbf{C}}_{i:}^{+}+W)\quad\text{(soft-thresholding of }\tilde{\mathbf{C}}_{i:}^{+}+W\text{)}$$
$$W^{+}=W+\tilde{\mathbf{C}}_{i:}^{+}-U^{+}$$

where

$$[S_{t}(x)]_{j}={\begin{cases}x_{j}-t&{\mathrm{if~}}x_{j}>t\\ 0&{\mathrm{if~}}-t\leq x_{j}\leq t\\ x_{j}+t&{\mathrm{if~}}x_{j}<-t\end{cases}},\quad j=1,\ldots,p\,,$$

and W, U are auxiliary variables specific to the ADMM algorithm.

## 3.4.2 A Update

For fixed C, the optimization problem (7) with respect to A is:

$$\min_{\mathbf{A}}\ \lambda\sum_{1\leq i<j\leq p}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}+2\lambda\sum_{1\leq i<j\leq p}\log(2\mathbf{A}_{i,j})+\gamma\left\|\mathbf{A}-\mathbf{A}^{\mathrm{prior}}\right\|_{F}^{2}\,\tag{13}$$
$$\mathrm{subject\ to}\ \ \mathbf{A}_{i,j}\geq0\,,\ \mathbf{A}_{i,j}=\mathbf{A}_{j,i}\,,\ \ 1\leq i<j\leq p\,.$$

To address the symmetry constraint, a straightforward way is to optimize over the upper-diagonal values and to set Aj,i = Ai,j for i < j. The minimization can then be carried out by directly computing the exact minimum, which is given in the next proposition.

Proposition 7. *The roots of the derivative with respect to* Al,m *of the objective function* (13) *are:*

$$z_{k}=\frac{\mathbf{A}_{l,m}^{\mathrm{prior}}}{3}+e^{2ik\pi/3}\sqrt[3]{\frac{1}{2}\left(-q+\sqrt{\frac{\Delta}{27}}\right)}+e^{-2ik\pi/3}\sqrt[3]{\frac{1}{2}\left(-q-\sqrt{\frac{\Delta}{27}}\right)}\,,\quad k=0,1,2\,,\tag{14}$$

*with*

$$p=-\frac{(\mathbf{A}_{l,m}^{\mathrm{prior}})^{2}}{3}+\frac{\lambda}{2\gamma}\,,\qquad q=\frac{1}{4\gamma}\left(-\frac{\mathbf{A}_{l,m}^{\mathrm{prior}}}{3}\left(\frac{8\gamma(\mathbf{A}_{l,m}^{\mathrm{prior}})^{2}}{9}-2\lambda\right)-\lambda\left(|\mathbf{C}_{l,m}|+|\mathbf{C}_{m,l}|\right)\right)\,,$$

*where* ∆ = 4p³ + 27q²*. Furthermore, there exists at least one positive root of the derivative with respect to* Al,m*, and the global minimum on the interval* ]0, +∞[ *is attained at one of these roots.*

Proof of Proposition 7. Finding the roots of the derivative of the objective function with respect to Al,m is equivalent to finding the roots of a degree-3 polynomial, allowing us to apply Cardano's formula. Subsequently, as the objective function in Al,m diverges towards infinity at the boundaries of ]0, +∞[, there exists at least one positive root, and the minimum is attained at one of these roots. □
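Rather than coding Cardano's formula (14) directly, the same update can be obtained numerically by solving the cubic whose roots Proposition 7 characterizes. A minimal sketch (our illustration; the cubic below is the derivative of (13) with respect to Al,m multiplied by A²l,m/(4γ)):

```python
import numpy as np

def update_A_entry(a0: float, c: float, lam: float, gamma: float) -> float:
    """a0 = A^prior_{l,m}, c = |C_{l,m}| + |C_{m,l}| (assumed > 0)."""
    # Roots of a^3 - a0*a^2 + (lam/(2*gamma))*a - lam*c/(4*gamma) = 0.
    roots = np.roots([1.0, -a0, lam / (2 * gamma), -lam * c / (4 * gamma)])
    cand = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 1e-12]

    def objective(a: float) -> float:
        # Contribution of the pair (l, m) to (13); the Frobenius term counts
        # both symmetric entries, hence the factor 2 in front of gamma.
        return lam * c / a + 2 * lam * np.log(2 * a) + 2 * gamma * (a - a0) ** 2

    return min(cand, key=objective)  # Proposition 7: at least one positive root

print(update_A_entry(a0=0.8, c=0.3, lam=0.1, gamma=1.0))
```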
## 3.4.3 Alternating Minimization Algorithm

The final AALasso algorithm is presented in Algorithm 1.

Algorithm 1: Fitting algorithm.
input : Niter, λ, γ, Aprior
output: Ĉ, Â
A(0) ← Aprior
for r ← 1 to Niter do
    C(r) ← fC(C, A(r−1)), where fC denotes the update in Section 3.4.1.
    A(r) ← fA(C(r), A), where fA denotes the update in Section 3.4.2.
return C(Niter), A(Niter).

AALasso depends on four input parameters:

(i) λ controls the sparsity of the learned graph: the larger λ, the sparser the solution.

(ii) γ controls the confidence in the prior graph. A large γ will constrain A to stay close to Aprior, whereas a small value allows A to deviate from Aprior.

(iii) The choice of A(0) has an impact on the solution since we start by solving the problem in C with fixed A. The straightforward idea is to take A(0) = Aprior, but note that other initializations can be chosen (c.f. Appendix B.3). For instance, AALasso can be seen as a generalization of Lasso and LS+ALasso, since these two estimators correspond to the first iteration of AALasso for particular choices of A(0). Indeed, taking A(0) equal to the all-ones matrix (1)1≤i,j≤p, the C update corresponds to the Lasso algorithm, while taking A(0) = (wi,j)1≤i,j≤p, where the wi,j are the weights computed from the Least Squares estimator, the first step corresponds to the LS+ALasso estimator.

(iv) The number of iterations Niter of the alternating minimization algorithm directly impacts the runtime and the performance; in practice we will choose a relatively small number of iterations (around 10), according to the synthetic experiments conducted in Section 4. A minimal sketch of the overall loop is given after the remarks below.

Remark 8. *Note that if there exists an iteration* r0 *such that* $\mathbf{A}^{(r_0)}_{i,j}=0$*, the coefficients* $\mathbf{C}^{(r)}_{i,j}$ *and* $\mathbf{C}^{(r)}_{j,i}$ *will be zero for all* r > r0*. Hence, in order to allow the algorithm to add an edge that is not originally present in the prior graph, we set the minimum value of the adjacency matrix to* ε > 0.

Remark 9. *As explained in Remark 6, it is possible to leverage a directed prior graph with slight modifications of the model. Regarding the solving method, the* C *update remains the same, and for the* A *update, we compute the solutions of* p² *degree-3 polynomials rather than* p(p − 1)/2.
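As announced in item (iv), here is a minimal sketch of the alternating scheme of Algorithm 1. This is our own illustration, not the authors' code: the C step uses the rescaled-Lasso trick of Section 3.4.1 with scikit-learn's solver (whose alpha matches λ only up to scikit-learn's 1/(2n) convention), the A step reuses the closed-form update sketched after Proposition 7, and the handling of the diagonal is simplified (the objective (7) only penalizes off-diagonal terms).

```python
import numpy as np
from sklearn.linear_model import Lasso

def _update_A_entry(a0, c, lam, gamma):
    # Positive root of a^3 - a0*a^2 + (lam/(2*gamma))*a - lam*c/(4*gamma) = 0
    # minimizing (13); same closed form as the sketch above.
    roots = np.roots([1.0, -a0, lam / (2 * gamma), -lam * c / (4 * gamma)])
    cand = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 1e-12]
    obj = lambda a: lam * c / a + 2 * lam * np.log(2 * a) + 2 * gamma * (a - a0) ** 2
    return min(cand, key=obj)

def aalasso(X, A_prior, lam, gamma, n_iter=10, eps=1e-6):
    """X: (p, T) series assumed to follow a VAR(1); returns (C_hat, A_hat)."""
    p = X.shape[0]
    Z, Y = X[:, :-1].T, X[:, 1:].T            # lagged predictors and targets
    A = np.maximum(A_prior.copy(), eps)       # Remark 8: keep entries >= eps
    C = np.zeros((p, p))
    for _ in range(n_iter):
        for i in range(p):                    # C update: one rescaled Lasso per row
            model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
            C[i] = model.fit(Z * A[i], Y[:, i]).coef_ * A[i]   # undo the rescaling
        for l in range(p):                    # A update: closed form, upper triangle
            for m in range(l + 1, p):
                c = max(abs(C[l, m]) + abs(C[m, l]), 1e-12)    # keep cubic well-posed
                A[l, m] = A[m, l] = max(
                    _update_A_entry(A_prior[l, m], c, lam, gamma), eps)
    return C, A
```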
## 3.5 Theoretical Properties

In this section, we prove the convergence of Algorithm 1 towards a set of stationary points of the optimization problem (7), and we show that its time complexity is asymptotically of the same order as that of the Lasso estimator.

## 3.5.1 Convergence

Classical results on the convergence of alternating minimization assume that the objective function is differentiable (c.f. Grippo & Sciandrone (2000)). However, they are not applicable to our case since our objective function in (7) is not differentiable at Ci,j = 0. We now introduce the lower directional derivatives to address this issue.

Definition 10 (Lower directional derivative). *For any* x ∈ R^p *and any* v ∈ R^p*, we denote the (lower) directional derivative of* f *at* x *in the direction* v *by* $f'(x;v)=\liminf_{\lambda\downarrow0}\left[\frac{f(x+\lambda v)-f(x)}{\lambda}\right]$.

Definition 11 (Stationary points). *We say that* z *is a* **stationary point** *of* f *if* z ∈ dom f *and* f′(z; v) ≥ 0, ∀v*. We say that* z *is a* **coordinate-wise minimum** *point of* f *if* z ∈ dom f *and* f(z + (0, ..., vk, ..., 0)) ≥ f(z), ∀vk ∈ R^{nk}*, for all* k = 1, ..., N.

Using this framework, the following theorem holds.

Theorem 12 (Convergence of AALasso). *The sequence* (C^{(r)}, A^{(r)})_{r=1,2,...} *generated by the two-block alternating minimization is well defined and bounded. Moreover, every cluster point is a stationary point of Problem* (7).

Proof. The proof is given in Appendix A.2. □

## 3.5.2 Computational Complexity

Recall that when fitting a Lasso estimator for learning VAR parameters in dimension p, we fit p independent Lasso estimators in dimension p. In the following, p denotes the dimension and N the number of samples. Recall that our algorithm iteratively solves two steps, the C and A updates. First, as detailed in Section 3.4.1, the C update is solved by transforming the Adaptive Lasso problem into a Lasso problem in O(p × N) and by using ADMM to fit the Lasso parameters in O(p³ + N × p²) (see Appendix A.3 for more details). Thus, the time complexity of the C update is in O(p³ + N × p²). Then, the A update is done by computing a closed formula in O(1) for each value Ai,j, 1 ≤ i < j ≤ p, so this step is in O(p²), which is negligible compared to the C update. Finally, considering a fixed number of iterations Niter, the AALasso algorithm has a time complexity in O(Niter × p³ + Niter × N × p²), which is approximately Niter times that of the Lasso estimator solved with the ADMM algorithm.

## 3.6 Link With Statistical Model

It is interesting to adopt the Bayesian point of view to better understand the statistical hypotheses behind the model and how the algorithm works. Here, it can be shown that the optimization problem (7) presented in the previous section is equivalent to the Maximum A Posteriori (MAP) of the probabilistic graphical model of Figure 4, given by:

$$\begin{array}{l l}{\mathbf{A}_{i,j}^{\mathrm{prior}}\sim\mathcal{N}(\mathbf{A}_{i,j}^{*},\sigma^{2})\,,}&{i,j=1,\ldots,p\,,}\\ {\mathbf{C}_{i,j}^{*}\sim\mathrm{Laplace}(0,\mathbf{A}_{i,j}^{*})\,,}&{i,j=1,\ldots,p\,,}\\ {X[t]\sim\mathcal{N}(\mathbf{C}^{*}X[t-1],\sigma_{X}^{2})\,.}\end{array}\tag{15}$$

![11_image_0.png](11_image_0.png)

Figure 4: Observable variables are in grey and latent variables in white.

Theorem 13. *The solutions of Problem* (7) *correspond to the Maximum A Posteriori (MAP) of the statistical model* (15) *under the assumption that* A∗ *follows the improper distribution* $\mathbb{1}_{\mathcal{S}_{p}(\mathbb{R})}$.

Proof of Theorem 13. The proof is given in Appendix A.1. □

The normalization term log(2Ai,j) introduced in (7) then corresponds to the normalization of the Laplace distribution. Intuitively, it allows A to remain meaningful with respect to the Laplace distribution and prevents the parameters from going to infinity. This probabilistic graphical model allows a good understanding of the introduced model. An unknown graph G∗ first generates a parameter matrix {C∗i,j}i,j from Laplace distributions with scale parameters equal to {A∗i,j}i,j (note that the Laplace distribution encourages sparsity), and C∗ then generates the multivariate time series X following a VAR(1) model. On the other side, we observe a noisy (Gaussian noise) version of A∗. To leverage the information provided by Aprior to learn C∗, we propose to jointly infer the two latent variables A∗ and C∗.

## 3.7 Generalization To VAR(d) Models

In the preceding sections, we discussed VAR(1) models for simplicity. However, certain applications require the consideration of VAR(d) models, and we detail the generalization of our model in this section. A VAR(d) model can be expressed as a VAR(1) model as follows. Let X be a process defined by a VAR(d) model:

$$X[t]=\sum_{\tau=1}^{d}\mathbf{C}^{\tau}X[t-\tau]+\varepsilon[t]\;,$$

where X[t] = (X1[t], ..., Xp[t]) is a random p-dimensional time series and ε[t] ∼ N(0, σ²X Ip), σX > 0.
For t ≥ d, let $\overline{\mathbf{X}}[t]=(X[t],X[t-1],\ldots,X[t-d+1])^{T}$ and

$$\overline{\mathbf{C}}=\begin{bmatrix}\mathbf{C}^{1}&\mathbf{C}^{2}&\cdots&\mathbf{C}^{d}\\ I&0&\cdots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&I&0\end{bmatrix}\,,$$

then the process $\overline{\mathbf{X}}[t]$ satisfies the VAR(1) model:

$$\overline{\mathbf{X}}[t]=\overline{\mathbf{C}}\,\overline{\mathbf{X}}[t-1]+(\varepsilon[t],0,\ldots,0)^{T}.$$

Note that the stability assumption of the VAR model is now satisfied if $\rho(\overline{\mathbf{C}})<1$ (cf. (Lütkepohl, 2005)). Using this last point, our model is still applicable for any d ∈ N, assuming we have a prior matrix defined by:

$$\overline{\mathbf{A}^{\mathrm{prior}}}=\begin{bmatrix}\mathbf{A}_{1}^{\mathrm{prior}}&\mathbf{A}_{2}^{\mathrm{prior}}&\cdots&\mathbf{A}_{d}^{\mathrm{prior}}\\ I&0&\cdots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&I&0\end{bmatrix}\,.$$

Thus, solving this problem with a number of lags d > 1 is equivalent to solving a problem with one lag in dimension p × d, since we can consider the lags > 1 as new variables. Note that a straightforward idea is to set $\mathbf{A}^{\mathrm{prior}}_{l}=\mathbf{A}^{\mathrm{prior}}$ for l = 1, ..., d, but this generalization allows prior knowledge about the likelihood of the relationships at each order to be leveraged.
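The companion-form rewriting above is straightforward to implement; a minimal sketch (our illustration):

```python
import numpy as np

# VAR(d) -> VAR(1) companion form of Section 3.7: the d coefficient matrices
# go in the first block row, identity blocks shift the lagged values down.
def companion(C_list: list[np.ndarray]) -> np.ndarray:
    """C_list = [C^1, ..., C^d], each (p, p); returns the (d*p, d*p) C-bar."""
    d, p = len(C_list), C_list[0].shape[0]
    C_bar = np.zeros((d * p, d * p))
    C_bar[:p, :] = np.hstack(C_list)                    # first block row
    C_bar[p:, : (d - 1) * p] = np.eye((d - 1) * p)      # identity sub-blocks
    return C_bar

# Stability of the VAR(d) model is then checked on the companion matrix:
C_bar = companion([np.eye(3) * 0.4, np.eye(3) * 0.3])
print(np.max(np.abs(np.linalg.eigvals(C_bar))) < 1)
```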
## 4 Experiments

In this section, a large series of experiments is carried out to assess the effectiveness and applicability of AALasso for learning Granger causality graphs. Our algorithm is tested using both synthetic and real-world datasets. To utilize AALasso effectively, the input requirements include a multivariate time series and the availability of an adjacency matrix representing a prior network structure. By employing synthetic and real datasets, we evaluate AALasso's robustness across various scenarios, including limited numbers of samples and several levels of noise. These experiments provide valuable insights into the algorithm's capabilities and its potential applications in diverse domains. The code to reproduce our experiments with synthetic data will be made available.

## 4.1 Task And Evaluation Metrics

In these experiments, the objective is to learn a Network Granger causality (NGC) graph from given multivariate time series and a prior network. We suppose that these series follow a VAR(1) model; hence, learning the NGC is equivalent to fitting the VAR parameters, i.e., given X ∈ R^{p×N} and Aprior ∈ R^{p×p}, we want to estimate the matrix C. Since VAR models are usually employed for forecasting tasks, a standard metric to evaluate estimators is the normalized Root Mean Square Error (nRMSE) of the one-step predictions. Let X[t] be a multivariate time series and X̂[t] be the reconstruction using the fitted VAR model at time t; the nRMSE is defined by:

$$\mathrm{nRMSE}({\hat{X}}):={\sqrt{\frac{\sum_{t}{\Big\|}{\hat{X}}[t]-X[t]{\Big\|}_{2}^{2}}{\sum_{t}\|X[t]\|_{2}^{2}}}}\;.$$

Note that the main objective of this paper is to learn the underlying NGC, so we are more interested in learning a relevant graph than in achieving a good reconstruction (even though the two tasks are correlated). To evaluate the quality of the learned graph, we compute the F1-score between Ĉ and C∗ (see e.g. (Pasdeloup et al., 2016) for more information). This metric is defined as follows:

$$\mathrm{F1\text{-}score}:={\frac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}}\;,$$

where

$$\mathrm{precision}=\frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives}+\mathrm{False\ Positives}}\quad\text{and}\quad\mathrm{recall}=\frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives}+\mathrm{False\ Negatives}}\;.$$

Here, the precision measures the proportion of correctly identified causality links, while the recall measures the ability to capture all causality links. Note that the F1-score is only available for synthetic data, as we need access to the true graph (the true VAR model). Although these two measures are related (a good graph should lead to a good reconstruction), it should be noted that a good reconstruction can be achieved by a relatively dense graph. Given that sparsity is a desired property, the F1-score is used to understand whether the learned graph can efficiently reconstruct the time series while avoiding irrelevant edges.

## 4.2 Methods

The aim of these experiments is to show that AALasso can exploit prior knowledge to improve its performance compared to existing methods. We compare our estimator to the classical estimators: the Lasso and the Adaptive Lasso with weights equal to the least squares estimates (denoted LS+ALasso, cf. (Zou, 2006)). Moreover, since the first step of our algorithm is equivalent to solving an Adaptive Lasso problem with weights given by $W_{i,j}=1/\mathbf{A}^{\mathrm{prior}}_{i,j}$, we compare our method with this first step (denoted 1-AALasso) to demonstrate the usefulness of performing several steps. Note that we do not report the performance of the Least Squares estimator since its results are poor in settings with only few samples. Finally, note that the Lasso and LS+ALasso algorithms do not take the prior matrix into account, so the prior noise does not impact their results.

Implementation details. For all experiments, we used the package asgl (Álvaro Méndez Civieta et al., 2021), implemented using cvxpy (Diamond & Boyd, 2016), to solve the Lasso and Adaptive Lasso regression problems.
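To make the evaluation protocol concrete, here is a minimal sketch of the two metrics of Section 4.1 (our illustrative implementation; edges are taken to be the entries of the coefficient matrices whose magnitude exceeds a small tolerance):

```python
import numpy as np

def nrmse(X_hat: np.ndarray, X: np.ndarray) -> float:
    """Normalized RMSE of the one-step predictions."""
    return float(np.sqrt(np.sum((X_hat - X) ** 2) / np.sum(X ** 2)))

def f1_score_graph(C_hat: np.ndarray, C_true: np.ndarray, tol: float = 1e-8) -> float:
    """F1-score between the supports of the estimated and true matrices."""
    pred, true = np.abs(C_hat) > tol, np.abs(C_true) > tol
    tp = np.sum(pred & true)
    precision = tp / max(np.sum(pred), 1)
    recall = tp / max(np.sum(true), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```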
## 4.3 Datasets

## 4.3.1 Synthetic Data

Synthetic data are generated with respect to the statistical model (15). To define the matrix A∗, we first generate p = 40 points in [0, 1]² uniformly at random. Then, we construct a matrix D ∈ R^{p×p} by applying the Gaussian kernel to the pairwise Euclidean distances between the points (the standard deviation of the Gaussian kernel is taken equal to the median of all pairwise distances). A∗ is obtained by randomly setting to 0 a ratio τm = 0.5 of the values of D (mispecified edges) and by cutting to 0 the values smaller than τ = 0.7 to promote sparsity. Finally, the VAR parameters are drawn from Laplace(0, A∗i,j) and Aprior = D + E, where E is a symmetric matrix whose subdiagonal values are sampled from an i.i.d. Gaussian distribution with variance σ²A (varying in {0.02, 0.1, 0.25, 0.35} to test several levels of noise). Note that Aprior is computed from D and not from A∗. Indeed, A∗ is obtained by performing a sparse variable selection from D; therefore, we do not incorporate this variable selection into Aprior, and the objective is to assess the effectiveness of AALasso in accurately retrieving this selection. This data generation is summarized in Algorithm 2.

Algorithm 2: Data generation.
input : p, τm, τ, σX, σA
output: Aprior, X
Generate randomly p points in [0, 1]².
Compute the Euclidean distance matrix D.
A∗ is obtained by setting to 0 a ratio τm of the values of D (mispecified edges).
Generate the VAR parameters C∗i,j ∼ Laplace(0, A∗i,j).
for 1 ≤ i < j ≤ p do
    Randomly set C∗i,j or C∗j,i to 0 (directed graph).
Aprior = D + E, where E is a symmetric matrix whose subdiagonal values are sampled from an i.i.d. Gaussian distribution with variance σ²A.
Sample X[t] ∼ N(C∗X[t − 1], σ²X).
return Aprior, X

At the end, for each experiment, we sample N ∈ {80, 200, . . . , 500} different time series X[t] ∼ N(C∗X[t − 1], σ²X), with σ²X = 0.1, which we split into training and test sets of equal sizes. We repeat this procedure 20 times for each value of N.
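For reference, here is a minimal numpy sketch of the generator of Algorithm 2 under our reading of it (the symmetrization details and the handling of the diagonal are our own simplifications):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def generate_data(p=40, tau_m=0.5, tau=0.7, sigma_X2=0.1, sigma_A2=0.1,
                  n_series=100, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(p, 2))                        # p points in [0, 1]^2
    dist = squareform(pdist(pts))
    D = np.exp(-dist**2 / (2 * np.median(dist[dist > 0])**2))  # Gaussian kernel
    drop = np.triu(rng.uniform(size=(p, p)) < tau_m, 1)   # mispecified edges
    A_star = D * ~(drop | drop.T)
    A_star[A_star < tau] = 0.0                            # promote sparsity
    C = rng.laplace(0.0, np.maximum(A_star, 1e-12))       # VAR parameters
    for i in range(p):                                    # one direction per pair
        for j in range(i + 1, p):
            if rng.uniform() < 0.5:
                C[i, j] = 0.0
            else:
                C[j, i] = 0.0
    E = np.triu(rng.normal(0.0, np.sqrt(sigma_A2), size=(p, p)), 1)
    A_prior = D + E + E.T                                 # noisy symmetric prior
    X1 = rng.normal(size=(p, n_series))                   # n_series length-2 replicas
    X2 = C @ X1 + rng.normal(0.0, np.sqrt(sigma_X2), size=(p, n_series))
    return A_prior, C, X1, X2
```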
## 4.3.2 Breast Cancer Network Dataset

The Heritage Provider Network DREAM 8 Breast Cancer Network Prediction dataset focuses on predicting causal protein networks using time series data from reverse phase protein array (RPPA) experiments. The aim is to advance breast cancer understanding by using complex time series data and computational modeling to uncover causal relationships within protein networks responding to various stimuli and inhibitors across different cell lines. It involves examining four cell lines (BT549, BT20, MCF7, and UACC812) under four inhibitor conditions (AKT, AKT + MET, FGFR1 + FGFR3, and DMSO control) and eight ligand stimuli (Serum, PSB, EGF, Insulin, FGF1, HGF, NRG1, and IGF1) at multiple time points (t = 0, 5 min, 15 min, 30 min, 1 hr, 2 hr, and 4 hr). Here, the task is to create 32 causality networks, one for each combination of stimulus ligand and cell line, from the protein probes across the time points. Formally, given a 3-tuple (cell line, ligand, inhibitor), we observe a multivariate time series X ∈ R^{p×N} with N = 6 and p ∈ {41, 45, 48}, and we want to learn a graph of Granger causalities G.

Choice of Aprior. In (Carlin et al., 2017), the authors successfully utilized a prior network derived from the Pathway Commons database version 3 (Cerami et al., 2010) to enhance their analysis. In essence, this network prior was developed by aggregating various established pathways from Pathway Commons, where protein interactions closely aligned with the concept of causal influence. It assumed that interactions declared in these pathway databases implied that perturbations to upstream regulatory proteins could lead to either direct or indirect perturbations of downstream target proteins connected via directed paths. The adjacency matrix was obtained by performing a heat diffusion over the initial graph (cf. (Carlin et al., 2017) for more details). Importantly, the network prior was independent of the training data and could be reused across experiments. The utilization of this network prior improved performance, highlighting the importance of taking prior knowledge into account for the causality inference task. However, in (Carlin et al., 2017) the authors did not take this network into account in their inference algorithm: they averaged a posteriori the output of the GENIE3 (Huynh-Thu et al., 2010) algorithm with the adjacency matrix of this prior. We therefore chose Aprior equal to this adjacency matrix. Note that the objective of this challenge is to infer causality but not necessarily temporal/Granger causality. Thus, we do not expect to achieve better results than the challengers; the objective is rather to demonstrate that taking a prior into account can be relevant.

Pre-processing of time series. We normalize the time series as a pre-processing step to satisfy the first-order stationarity assumption:

$$X[1:T]\leftarrow{\frac{X[1:T]-{\overline{{X}}}}{\sigma_{X}}}\,,\tag{16}$$

where X̄ is the mean of X and σX its standard deviation.

## 4.3.3 Molène Dataset

Dataset. The Molène dataset contains hourly temperatures recorded by sensors at p = 32 locations in Brittany (France) during N = 746 hours. Here the objective is to understand the spatio-temporal dynamics of the temperature and to assess the extent to which the model can describe complex phenomena (such as weather) by considering only data and geographical information (sensor positions).

Choice of Aprior. The prior adjacency matrix is $\mathbf{A}^{\mathrm{prior}}_{i,j}=\exp(-\mathrm{dist}(i,j)/\overline{\mathrm{dist}})$, where dist(i, j) is the Euclidean distance between the stations i and j and $\overline{\mathrm{dist}}$ is the median of all computed distances. Note that this case is a good example of what "noisy" prior knowledge can be, since the Euclidean geometry is isotropic, contrary to the weather dynamics.

Pre-processing of time series. We consider the first derivative of the signals rather than the original signals in order to satisfy the wide-sense stationarity property as much as possible.

## 4.4 Results On Synthetic Data

## 4.4.1 Comparison With Classical Algorithms

In this part, we compare AALasso with standard methods for computing Network Granger Causality. We thus focus on methods solving the classic NGC optimization problem with some regularization (Lasso, Adaptive Lasso). The aim of these experiments is to show that our method is able to leverage prior knowledge to increase the accuracy of the retrieved NGC. For each of the 20 experiments, we performed Niter = 10 iterations (see Section 4.4.4) of the alternating minimization algorithm (Algorithm 1) using half of the training set. The parameters λ and γ were selected by cross-validation, minimizing the nRMSE over the second half of the training set (the validation set).

![15_image_0.png](15_image_0.png)

Figure 5: nRMSE and F1-score as a function of the number of samples used for training, with σA = 0.1. We plot the 90% confidence intervals.

The results in Figure 5 exhibit better reconstruction and a greater F1-score when utilizing AALasso rather than the vanilla methods when the number of samples is lower than 140.
Moreover, the results shown in Figure 6 indicate that AALasso outperforms Lasso and LS+ALasso estimators for sparsity levels near to the ground truth one (with a manual selection of λ). These results confirm that the model allows to increase the performances of the variable selection and that the gains in Figure 2 are not just a consequence of a better selection of λ. When the number of samples increases, the LS+ALasso estimator provides better reconstruction than AALasso. This behavior can be explained by the fact that the prior knowledge given is noisy so the AALasso estimator is biased. In practice it allows to reduce the variance when few samples are available, but when the number of samples becomes large enough to perform statistical inference directly from time series data, this bias lead to slightly less accurate results. Thus, the question is to know whether the time series are informative to know if adding biased prior knowledge will improve or not the results. However, recall that we are interested in a graph learning task so the F1-score is more informative than the reconstruction error, and shows satisfying results even for relatively large number of samples (until a certain threshold, here 250 samples). This can be explained by the fact that the algorithm prefers precision to recall, and will focus on returning true causal relations, sacrifying reconstruction and recall. Finally, the difference of results between the first iteration and the complete optimization process of AALasso points out the interest of the alternating minimization. | N Samples | 40 | 100 | 180 | | | | | | | |-------------|-------------------------------------------------|-------------------------------------|-------------------------------------------------|-------------|-------------------------------------|----|----|----|----| | Metrics | P | R | F1 | P | R | F1 | P | R | F1 | | Lasso | 0.28 (0.04) 0.61 (0.09) 0.38 (0.04) | 0.51 (0.1) | 0.75 (0.08) | 0.6 (0.09) | 0.57 (0.08) 0.83 (0.06) 0.68 (0.07) | | | | | | LS+ALasso | 0.54 (0.09) 0.54 (0.11) 0.53 (0.07) | 0.69 (0.1) | 0.68 (0.1) | 0.68 (0.09) | 0.64 (0.08) 0.83 (0.06) 0.72 (0.07) | | | | | | 1-AALasso | 0.44 (0.06) 0.73 (0.1) 0.55 (0.06) | 0.65 (0.06) 0.82 (0.09) 0.72 (0.06) | 0.83 (0.06) 0.84 (0.06) 0.83 (0.05) | | | | | | | | AALasso | 0.55 (0.07) 0.71 (0.11) 0.62 (0.07) 0.76 (0.06) | 0.79 (0.1) | 0.77 (0.07) 0.87 (0.05) 0.81 (0.07) 0.84 (0.05) | | | | | | | Table 2: Precision, Recall and F1-score in function of the number of samples. We took a noise over the prior network with σA = 0.1. Experiments in higher dimension Finally, to understand whether our method was able to deal with high-dimensional environment (i.e *N << p*), we conducted the same experiments taking p ∈ {60, 100, 160} and N = 40 (still 20 samples for training and 20 for validation). 
| Method | P (p=60) | R (p=60) | F1 (p=60) | P (p=100) | R (p=100) | F1 (p=100) | P (p=160) | R (p=160) | F1 (p=160) |
|---|---|---|---|---|---|---|---|---|---|
| Lasso | 0.31 (0.07) | 0.48 (0.09) | 0.37 (0.07) | 0.26 (0.04) | 0.32 (0.06) | 0.28 (0.04) | 0.26 (0.03) | 0.24 (0.04) | 0.25 (0.03) |
| LS+ALasso | 0.58 (0.06) | 0.38 (0.10) | 0.45 (0.08) | 0.52 (0.06) | 0.20 (0.05) | 0.29 (0.05) | 0.48 (0.05) | 0.13 (0.04) | 0.20 (0.05) |
| 1-AALasso | 0.45 (0.05) | 0.60 (0.10) | 0.51 (0.06) | 0.40 (0.04) | 0.47 (0.06) | 0.43 (0.04) | 0.39 (0.03) | 0.38 (0.05) | 0.39 (0.04) |
| AALasso | 0.56 (0.06) | 0.60 (0.11) | 0.57 (0.08) | 0.47 (0.05) | 0.45 (0.07) | 0.46 (0.05) | 0.45 (0.02) | 0.37 (0.06) | 0.40 (0.04) |

Table 3: Precision (P), Recall (R) and F1-score as a function of the dimension p, with noise σA = 0.1 on the prior network and N = 40 samples.

From Table 3, we see that in settings where N ≪ p, AALasso remains better than the other standard methods regarding the F1-score. Moreover, the higher the dimension, the larger the gap between LS+ALasso and AALasso regarding the F1-score (from less than 1.2 times better in dimension 40 to twice better in dimension 160). Thus, our method efficiently leverages prior knowledge, making it highly relevant in this kind of setting.

## 4.4.2 Influence Of The Prior Network

In this section, we present results for various configurations of prior noise. Similar to the experiments in the previous section, for each configuration we conducted 20 experiments, took Niter = 10, utilized half of the training set, and selected the parameters λ and γ via cross-validation, minimizing the normalized Root Mean Square Error (nRMSE) over the second half. Figure 7 presents the results for the prediction error and F1-score with N = 40, 100, 140 samples for varying noise levels. It is important to note that we are assessing the robustness of our method to prior matrix noise, where the noise specifically corresponds to the Aprior noise and not to the noise of the VAR model. Additionally, since the Lasso and LS+ALasso do not leverage prior knowledge, variations in this noise do not impact their results. The findings demonstrate that AALasso exhibits robustness to noise, displaying a prediction error comparable to that of the Lasso or LS+ALasso (even better than LS+ALasso in few-sample settings) and a consistently better F1-score for all tested configurations. Furthermore, these results indicate that our model effectively generalizes the Adaptive Lasso (corresponding to the first iteration) and enables a refinement of the results over subsequent iterations. Tables 4 and 5 provide further insights into the high F1-score achieved with AALasso for both the N = 40 and N = 140 scenarios. AALasso's behavior remains consistent across all tested noise levels. While the first iteration yields a good recall but limited precision, subsequent iterations lead to a significant increase in precision (an average gain of 0.15), especially in high-noise settings (gain of 0.2), while maintaining a good recall (an average loss of 0.05).

## 4.4.3 Comparison With Other Causal Discovery Algorithms

While we focused in this paper on learning Network Granger Causality, it is interesting to compare our method with other causal network discovery models. A recent survey (Assaad et al., 2022) presents some state-of-the-art methods to infer causality from time series.
The main ones are based on the conditional independence framework (Spirtes et al., 2000), which consists in running conditional independence tests to check whether a variable X is a parent of another variable Y conditioned on some other variables Z1, ..., Zn. Building on the PC algorithm (Spirtes & Glymour, 1991), Runge et al. (2019) introduced a new model (PCMCI) specific to causal discovery from time series, using what they call "the momentary conditional independence (MCI)" to remove false positives returned by the PC algorithm. Some variants of PCMCI were then designed, like PCMCI+ (Runge, 2020), which includes contemporaneous links and improves the reliability of the CI tests, or LPCMCI (Gerhardus & Runge, 2020), which is designed to address the issue of the presence of latent confounders.

![18_image_0.png](18_image_0.png)

Figure 7: Prediction error and F1-score as a function of the noise σA. (Top) N = 40 samples, (Middle) N = 100 samples, (Bottom) N = 140 samples.

Note that it is possible to include some prior knowledge when using methods based on conditional independence. This prior information can be expressed as: there is a causal link i → j, there is no link between i and j, i is a leaf, i is a root, or i is an ancestor of j. This kind of prior knowledge does not exactly match the one used in this paper (the information is binary in this framework), but we tested PCMCI leveraging the prior (denoted Prior PCMCI) by forbidding the edges (i, j) or (j, i) when $\mathbf{A}^{\mathrm{prior}}_{i,j}<\tau$ (here we tested τ ∈ {0.2, 0.3}). All of these methods are implemented in https://jakobrunge.github.io/tigramite/ and we used this package to perform the same experiments as in Section 4.4.1. We also add the results obtained with CGC-2SPR (Yao et al., 2013), the method presented in Section 3.3 that leverages the same kind of prior knowledge as our method but uses an L2 penalty.
| Method | P (σA=0.1) | R (σA=0.1) | F1 (σA=0.1) | P (σA=0.25) | R (σA=0.25) | F1 (σA=0.25) | P (σA=0.35) | R (σA=0.35) | F1 (σA=0.35) |
|---|---|---|---|---|---|---|---|---|---|
| Lasso | 0.28 (0.04) | 0.61 (0.09) | 0.38 (0.04) | 0.28 (0.04) | 0.61 (0.09) | 0.38 (0.04) | 0.28 (0.04) | 0.61 (0.09) | 0.38 (0.04) |
| LS+ALasso | 0.54 (0.09) | 0.54 (0.11) | 0.53 (0.07) | 0.54 (0.09) | 0.54 (0.11) | 0.53 (0.07) | 0.54 (0.09) | 0.54 (0.11) | 0.53 (0.07) |
| 1-AALasso | 0.44 (0.06) | 0.73 (0.1) | 0.55 (0.06) | 0.39 (0.06) | 0.71 (0.11) | 0.49 (0.07) | 0.37 (0.08) | 0.7 (0.11) | 0.47 (0.07) |
| AALasso | 0.58 (0.07) | 0.69 (0.11) | 0.63 (0.08) | 0.5 (0.07) | 0.67 (0.12) | 0.57 (0.07) | 0.45 (0.06) | 0.65 (0.12) | 0.53 (0.07) |

Table 4: Precision (P), Recall (R) and F1-score as a function of σA, using N = 40 samples for training.

| Method | P (σA=0.1) | R (σA=0.1) | F1 (σA=0.1) | P (σA=0.25) | R (σA=0.25) | F1 (σA=0.25) | P (σA=0.35) | R (σA=0.35) | F1 (σA=0.35) |
|---|---|---|---|---|---|---|---|---|---|
| Lasso | 0.54 (0.13) | 0.78 (0.08) | 0.63 (0.1) | 0.54 (0.13) | 0.78 (0.08) | 0.63 (0.1) | 0.54 (0.13) | 0.78 (0.08) | 0.63 (0.1) |
| LS+ALasso | 0.67 (0.25) | 0.77 (0.1) | 0.69 (0.14) | 0.67 (0.25) | 0.77 (0.1) | 0.69 (0.14) | 0.67 (0.25) | 0.77 (0.1) | 0.69 (0.14) |
| 1-AALasso | 0.75 (0.05) | 0.81 (0.08) | 0.78 (0.06) | 0.71 (0.06) | 0.8 (0.08) | 0.75 (0.06) | 0.66 (0.07) | 0.78 (0.1) | 0.71 (0.07) |
| AALasso | 0.82 (0.06) | 0.79 (0.09) | 0.8 (0.07) | 0.76 (0.06) | 0.76 (0.1) | 0.76 (0.07) | 0.71 (0.09) | 0.74 (0.11) | 0.72 (0.09) |

Table 5: Precision (P), Recall (R) and F1-score as a function of σA, using N = 140 samples for training.

| Method | P (N=40) | R (N=40) | F1 (N=40) | P (N=100) | R (N=100) | F1 (N=100) | P (N=180) | R (N=180) | F1 (N=180) |
|---|---|---|---|---|---|---|---|---|---|
| Lasso | 0.28 (0.04) | 0.61 (0.09) | 0.38 (0.04) | 0.51 (0.1) | 0.75 (0.08) | 0.6 (0.09) | 0.57 (0.08) | 0.83 (0.06) | 0.68 (0.07) |
| LS+ALasso | 0.54 (0.09) | 0.54 (0.11) | 0.53 (0.07) | 0.69 (0.1) | 0.68 (0.1) | 0.68 (0.09) | 0.64 (0.08) | 0.83 (0.06) | 0.72 (0.07) |
| CGC-2SPR | 0.45 (0.22) | 0.42 (0.16) | 0.36 (0.07) | 0.66 (0.22) | 0.57 (0.11) | 0.58 (0.10) | 0.84 (0.12) | 0.64 (0.10) | 0.72 (0.06) |
| PCMCI+ | 0.89 (0.08) | 0.38 (0.07) | 0.53 (0.07) | 0.92 (0.05) | 0.56 (0.06) | 0.69 (0.05) | 0.92 (0.04) | 0.67 (0.05) | 0.77 (0.04) |
| LPCMCI | 0.64 (0.08) | 0.5 (0.06) | 0.56 (0.06) | 0.64 (0.07) | 0.63 (0.06) | 0.63 (0.06) | 0.64 (0.08) | 0.72 (0.06) | 0.68 (0.06) |
| Prior PCMCI+ | 0.68 (0.07) | 0.5 (0.06) | 0.58 (0.05) | 0.7 (0.06) | 0.64 (0.06) | 0.66 (0.05) | 0.71 (0.05) | 0.74 (0.05) | 0.72 (0.04) |
| 1-AALasso | 0.44 (0.06) | 0.73 (0.1) | 0.55 (0.06) | 0.65 (0.06) | 0.82 (0.09) | 0.72 (0.06) | 0.83 (0.06) | 0.84 (0.06) | 0.83 (0.05) |
| AALasso | 0.55 (0.07) | 0.71 (0.11) | 0.62 (0.07) | 0.76 (0.06) | 0.79 (0.1) | 0.77 (0.07) | 0.87 (0.05) | 0.81 (0.07) | 0.84 (0.05) |

Table 6: Precision (P), Recall (R) and F1-score for all causal discovery methods tested, p = 40, σA = 0.1.
Compared to methods that do not leverage prior knowledge (Lasso, PCMCI and its variants), Table 6 shows that AALasso allows for a better network reconstruction, highlighting its ability to efficiently leverage prior information. Moreover, AALasso remains superior to Prior PCMCI+, showing that the way we introduce the prior information in our model is relevant and efficient. In detail, we remark that PCMCI+ achieves a very good precision for the retrieved causal links (from 0.89 to 0.92), but its recall remains very low (from 0.38 to 0.67). This method achieves the best precision but the worst recall, leading to an F1-score lower than that of AALasso.

## 4.4.4 Empirical Time Complexity

Concerning time complexity, as discussed theoretically in Section 3.5.2, we observed that the runtime of AALasso is approximately proportional to Niter times that of a Lasso estimator. Therefore, it is useful to analyze the convergence rate of our method to find a balance between precision and computational efficiency. Figure 8 shows the behavior of the prediction error and F1-score averaged over all the experiments conducted, to visualize the convergence over the successive iterations. The convergence seems to be achieved for Niter = 10 on average (see Appendix B.1 for more experiments), which motivated the choice of Niter for comparing the performance of the algorithms. We can then compare the runtimes of Lasso, LS+ALasso and AALasso with Niter = 10 iterations. To better understand the gain and the trade-off between runtime and performance, we plot the F1-score as a function of the runtime for Niter ∈ {0, 5, 10, 15} with N = 40 samples and σA = 0.1 in Figure 9, and for other scenarios in Appendix B.5. We remark that AALasso with 5 iterations is a good trade-off to optimize both the runtime and the F1-score.

![20_image_0.png](20_image_0.png)

Figure 8: Prediction error and F1-score as a function of the number of iterations used for training, using synthetic data with σA ∈ {0.02, 0.1, 0.25, 0.35} and N ∈ {40, 100, 140, 180, 250}.

![20_image_1.png](20_image_1.png)

Figure 9: F1-score as a function of the runtime for Lasso, LS+ALasso and AALasso with Niter ∈ {0, 5, 10, 15}, N = 40 samples and σA = 0.1.

## 4.5 Experiments On Real-World Data

## 4.5.1 HPN DREAM8 Challenge

For this dataset, the choice of Aprior is detailed in Section 4.3.2, and we compare the results with the Lasso and Least Squares methods (note that we use the Least Squares estimator instead of LS+ALasso here because its results were better). The hyperparameters are selected using the data for the cells BT549, MCF7, and UACC812, and the test set contains the data for all pairs (ligand, inhibitor) for the cell BT20. We use two common metrics to compare the performance of the algorithms: the overall accuracy, measuring the relevance of the variable selection, and the Receiver Operating Characteristic Area Under the Curve (ROCAUC), providing a threshold-independent score as mentioned in (Bradley, 1997) and measuring the ability of a model to discriminate the two classes.

![21_image_0.png](21_image_0.png)

Figure 10: Results obtained on the DREAM8 dataset.

Formally, let (Ci,j)1≤i,j≤p ∈ R^{p×p} be the computed parameters and (Yi,j)1≤i,j≤p ∈ {0, 1}^{p×p} the ground-truth values (causality or not). Then:
- The accuracy is computed by $\mathrm{Acc}=\frac{1}{p^{2}}\sum_{1\leq i,j\leq p}\mathbb{1}\{T_{\tau}(\mathbf{C}_{i,j})=\mathbf{Y}_{i,j}\}$, where $T_{\tau}:x\mapsto\begin{cases}1&\text{if }|x|>\tau\\ 0&\text{otherwise}\end{cases}$, taking τ = 0.05. This metric measures the relevance of the variable selection.

- The ROCAUC is computed by $\mathrm{ROCAUC}=\int_{0}^{1}\mathrm{TPR}(\mathrm{FPR}^{-1}(t))\,dt$, where TPR(t) stands for the True Positive Rate (sensitivity) and FPR(t) for the False Positive Rate (1 − specificity) at threshold t.

As shown in Figure 10, our method AALasso demonstrates better performance compared to traditional estimators such as the Lasso and Ordinary Least Squares (OLS) regarding both accuracy and ROCAUC. Thus, by incorporating the adjacency matrix derived from the Pathway Commons database as a prior network, AALasso effectively harnessed valuable prior knowledge about protein interactions. Our results show that AALasso outperforms the baseline methods, emphasizing the significance of considering such prior information in causality inference tasks. Figure 11 presents an example of the learned graphs; we remark that the Lasso estimator explains all the variables with only a small number of variables (visible as columns in the matrix). On the contrary, the AALasso estimates are more homogeneous, following a directed version of the prior structure.

![22_image_0.png](22_image_0.png)

Figure 11: Example of graphs learned. (a) Least Squares method. (b) Lasso method. (c) AALasso. (d) Prior network.

## 4.5.2 Molène Dataset

VAR(1) model. For this dataset, we train the models with 80 points, still selecting λ and γ by cross-validation. Figure 12 compares the resulting graphs of Granger causalities obtained with our method AALasso and with the Lasso. We observe that the graph resulting from AALasso is sparse while remaining connected and allows a good visualization of the physical process. Moreover, contrary to the Lasso one, it is consistent with the Euclidean structure, confirming that the algorithm leverages the given prior matrix. This simple example encourages the use of AALasso to learn Granger causality for dynamical or physical systems by taking the physics of the problem into account.

VAR(3) model. For this example, we applied the generalization of our model with order d = 3. We computed the same matrix Aprior as for d = 1 (cf. Section 4.3.3) and we then considered the generalized prior matrix given by

$$\overline{\mathbf{A}^{\mathrm{prior}}}=\begin{bmatrix}\mathbf{A}^{\mathrm{prior}}&\mathbf{A}^{\mathrm{prior}}&\cdots&\mathbf{A}^{\mathrm{prior}}\\ I&0&\cdots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&I&0\end{bmatrix}\,.$$

The comparison between the LS+ALasso and the AALasso estimates for d = 3 is presented in Figure 13. As in the case d = 1, the AALasso graph is sparser than the LS+ALasso one and allows a better visualization of the physical process. Moreover, we remark that AALasso explains the main part of the signal using only the first order (only 3 and 1 edges for orders 2 and 3), which is consistent with a diffusion process. Finally, the length of the edges for AALasso seems to be proportional to the order (edges for orders 2 and 3 are longer than the ones for order 1), which is again consistent with the physics (information takes more time to travel longer distances).

## 5 Discussion And Conclusion

In conclusion, this paper has introduced a novel method designed to efficiently learn Granger causalities in settings with limited samples. Our approach stands out by effectively incorporating prior knowledge through the utilization of a noisy adjacency matrix.
We demonstrated the convergence of our algorithm AALasso and showed that its time complexity is of the same order of magnitude as that of the Lasso.

![23_image_0.png](23_image_0.png)

Figure 12: Results on the Molène dataset for (a) AALasso and (b) Lasso. Darker colors indicate larger weights.

![23_image_1.png](23_image_1.png)

Figure 13: Results on the Molène dataset with d = 3 for (a) LS+ALasso and (b) AALasso. Darker colors indicate larger weights.

To empirically validate our method and demonstrate its efficacy, we conducted a series of experiments. We selected a variety of datasets, including synthetic data and real-world examples like the Breast Cancer Network and the Molène dataset. In these experiments, we employed rigorous evaluation metrics to assess the performance of our method across different scenarios, showcasing its versatility and applicability. The incorporation of prior information has proven instrumental in achieving superior accuracy and robustness when compared to classical algorithms in this domain. While our method allows prior knowledge to be incorporated in the learning process, it could be interesting to add structure to the learned graph and go beyond sparsity. Indeed, the framework we have presented here can be readily extended to learn graphs with specific structural constraints, such as spectral and adjacency constraints, similar to those outlined in (Kumar et al., 2020). This flexibility arises from the fact that our model operates with a symmetric matrix containing positive values, making it amenable to a range of applications that necessitate tailored graph structures. Finally, it could be interesting to study whether this framework could be used to learn time-varying graphs of Granger causality (cf. (Gao & Yang, 2022)), for example by considering the graph learned at time t as a prior for the graph at time t + 1.

## References

Charles K. Assaad, Emilie Devijver, and Eric Gaussier. Survey and evaluation of causal discovery methods for time series. *Journal of Artificial Intelligence Research*, 73:767–819, 2022.

Giovanni Ballarin. Ridge regularized estimation of VAR models for inference. *arXiv*, 06 2022. URL https://doi.org/10.48550/arXiv.2105.00860.

Marta Banbura, Domenico Giannone, and Lucrezia Reichlin. Large Bayesian vector auto regressions. *Journal of Applied Econometrics*, 25(1):71–92, 2010. URL https://EconPapers.repec.org/RePEc:jae:japmet:v:25:y:2010:i:1:p:71-92.

Sumanta Basu, Ali Shojaie, and George Michailidis. Network Granger causality with inherent grouping structure. *Journal of Machine Learning Research*, 16(13):417–453, 2015. URL http://jmlr.org/papers/v16/basu15a.html.

Stephen Boyd. Distributed optimization and statistical learning via the alternating direction method of multipliers. *Foundations and Trends in Machine Learning*, 3(1):1–122, 2010. ISSN 1935-8245. doi: 10.1561/2200000016. URL http://dx.doi.org/10.1561/2200000016.

Andrew P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. *Pattern Recognition*, 30(7):1145–1159, 1997. ISSN 0031-3203. doi: 10.1016/S0031-3203(96)00142-2. URL https://www.sciencedirect.com/science/article/pii/S0031320396001422.

Daniel E. Carlin, Evan O. Paull, Kiley Graim, Christopher K. Wong, Adrian Bivol, Peter Ryabinin, Kyle Ellrott, Artem Sokolov, and Joshua M. Stuart. Prophetic Granger causality to infer gene regulatory networks. *PLOS ONE*, 12(12):1–21, 12 2017.
doi: 10.1371/journal.pone.0170340. URL https://doi.org/10.1371/journal.pone.0170340.

Ethan G Cerami, Benjamin E Gross, Emek Demir, Igor Rodchenkov, Özgün Babur, Nadia Anwar, Nikolaus Schultz, Gary D Bader, and Chris Sander. Pathway commons, a web resource for biological pathway data. *Nucleic acids research*, 39(suppl_1):D685–D690, 2010.

Steven Diamond and Stephen P. Boyd. Cvxpy: A python-embedded modeling language for convex optimization. *Journal of Machine Learning Research*, 17, 2016. URL https://api.semanticscholar.org/CorpusID:6298008.

Leo L Duan, Zeyu Yuwen, George Michailidis, and Zhengwu Zhang. Low tree-rank bayesian vector autoregression models. *Journal of Machine Learning Research*, 24(286):1–35, 2023. URL http://jmlr.org/papers/v24/22-0360.html.

Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. *The Annals of Statistics*, 32(2):407–499, 2004. doi: 10.1214/009053604000000067. URL https://doi.org/10.1214/009053604000000067.

Wei Gao and Haizhong Yang. Time-varying group lasso granger causality graph for high dimensional dynamic system. *Pattern Recognition*, 130:108789, 2022. ISSN 0031-3203. doi: https://doi.org/10.1016/j.patcog.2022.108789. URL https://www.sciencedirect.com/science/article/pii/S0031320322002709.

Andreas Gerhardus and Jakob Runge. High-recall causal discovery for autocorrelated time series with latent confounders. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 12615–12625. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-Paper.pdf.

Satyajit Ghosh, Kshitij Khare, and George Michailidis. High-dimensional posterior consistency in bayesian vector autoregressive models. *Journal of the American Statistical Association*, 114:735–748, 2018. doi: 10.1080/01621459.2018.1437043.

C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. *Econometrica*, 37(3):424–438, 1969. doi: https://doi.org/10.2307/1912791.

L. Grippo and M. Sciandrone. On the convergence of the block nonlinear gauss–seidel method under convex constraints. *Operations Research Letters*, 26(3):127–136, 2000. ISSN 0167-6377. doi: https://doi.org/10.1016/S0167-6377(99)00074-7. URL https://www.sciencedirect.com/science/article/pii/S0167637799000747.

James Douglas Hamilton. *Time series analysis*. Princeton University Press, Princeton, NJ, January 1994. ISBN 9780691042893.

Vân Anh Huynh-Thu, Alexandre Irrthum, Louis Wehenkel, and Pierre Geurts. Inferring regulatory networks from expression data using tree-based methods. *PLOS ONE*, 2010. URL https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0012776.

Elvin Isufi, Andreas Loukas, Nathanaël Perraudin, and Geert Leus. Forecasting time series with varma recursions on graphs. *IEEE Transactions on Signal Processing*, 67(18):4870–4885, 2019. doi: 10.1109/TSP.2019.2929930.

Anders Kock and Laurent Callot. Oracle inequalities for high dimensional vector autoregressions. *Journal of Econometrics*, 186(2):325–344, 2015. URL https://EconPapers.repec.org/RePEc:eee:econom:v:186:y:2015:i:2:p:325-344.

Sandeep Kumar, Jiaxi Ying, José Vinícius de M. Cardoso, and Daniel P. Palomar. A unified framework for structured graph learning via spectral constraints. *Journal of Machine Learning Research*, 21(22):1–60, 2020. URL http://jmlr.org/papers/v21/19-276.html.

Jiahe Lin and George Michailidis.
Regularized estimation and testing for high-dimensional multi-block vector-autoregressive models. *Journal of Machine Learning Research*, 18(117):1–49, 2017. URL http://jmlr.org/papers/v18/17-055.html.

Jiahe Lin, Huitian Lei, and George Michailidis. Structural discovery with partial ordering information for time-dependent data with convergence guarantees. *Journal of Computational and Graphical Statistics*, pp. 1–20, 01 2024. doi: 10.1080/10618600.2023.2301097.

Helmut Lütkepohl. *New Introduction to Multiple Time Series Analysis*. Springer, 2005. URL https://EconPapers.repec.org/RePEc:spr:sprbok:978-3-540-27752-1.

Benjamin Marlin, Mark Schmidt, and Kevin Murphy. Group Sparse Priors for Covariance Estimation. *arXiv e-prints*, art. arXiv:1205.2626, May 2012. doi: 10.48550/arXiv.1205.2626.

Jakob W. Messner and Pierre Pinson. Online adaptive lasso estimation in vector autoregressive models for high dimensional wind power forecasting. *International Journal of Forecasting*, 35(4):1485–1498, 2019. doi: 10.1016/j.ijforecast.2018.02.001.

Silvia Miranda Agrippino and Giovanni Ricco. Bayesian vector autoregressions. Working paper or preprint, May 2018. URL https://sciencespo.hal.science/hal-03458277.

William B. Nicholson, Ines Wilms, Jacob Bien, and David S. Matteson. High dimensional forecasting via interpretable vector autoregression. *Journal of Machine Learning Research*, 21(166):1–52, 2020. URL http://jmlr.org/papers/v21/19-777.html.

Antonio Ortega, Pascal Frossard, Jelena Kovačević, José M. F. Moura, and Pierre Vandergheynst. Graph signal processing: Overview, challenges, and applications. *Proceedings of the IEEE*, 106(5):808–828, 2018. doi: 10.1109/JPROC.2018.2820126.

Bastien Pasdeloup, Vincent Gripon, Grégoire Mercier, Dominique Pastor, and Michael G. Rabbat. Characterization and inference of graph diffusion processes from observations of stationary signals. *IEEE Transactions on Signal and Information Processing over Networks*, 4:481–496, 2016. URL https://api.semanticscholar.org/CorpusID:10604344.

Jakob Runge. Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets. In *Conference on Uncertainty in Artificial Intelligence*, pp. 1388–1397. PMLR, 2020.

Jakob Runge, Peer Nowack, Marlene Kretschmer, Seth Flaxman, and Dino Sejdinovic. Detecting and quantifying causal associations in large nonlinear time series datasets. *Science advances*, 5(11):eaau4996, 2019.

Anil K. Seth, Adam B. Barrett, and Lionel Barnett. Granger causality analysis in neuroscience and neuroimaging. *Journal of Neuroscience*, 35(8):3293–3297, 2015. ISSN 0270-6474. doi: 10.1523/JNEUROSCI.4399-14.2015. URL https://www.jneurosci.org/content/35/8/3293.

Ali Shojaie and George Michailidis. Discovering graphical Granger causality using the truncating lasso penalty. *Bioinformatics*, 26(18):i517–i523, 09 2010. ISSN 1367-4803. doi: 10.1093/bioinformatics/btq377. URL https://doi.org/10.1093/bioinformatics/btq377.

Christopher A. Sims. *A Nine-Variable Probabilistic Macroeconomic Forecasting Model*, pp. 179–212. University of Chicago Press, January 1993. URL http://www.nber.org/chapters/c7192.

Peter Spirtes and Clark Glymour. An algorithm for fast recovery of sparse causal graphs. *Social science computer review*, 9(1):62–72, 1991.

Peter Spirtes, Clark N Glymour, and Richard Scheines. *Causation, prediction, and search*. MIT press, 2000.

J.H. Stock and M.W. Watson.
Dynamic Factor Models, Factor-Augmented Vector Autoregressions, and Structural Vector Autoregressions in Macroeconomics. In J. B. Taylor and Harald Uhlig (eds.), *Handbook of Macroeconomics*, volume 2, chapter 0, pp. 415–525. Elsevier, 2016. doi: 10.1016/bs.hesmac.2016.04. URL https://ideas.repec.org/h/eee/macchp/v2-415.html.

Robert Tibshirani. Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society: Series B (Methodological)*, 58(1):267–288, 1996. doi: https://doi.org/10.1111/j.2517-6161.1996.tb02080.x. URL https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/j.2517-6161.1996.tb02080.x.

P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. *Journal of Optimization Theory and Applications*, 109(3):475–494, June 2001. doi: 10.1023/a:1017501703105. URL https://doi.org/10.1023%2Fa%3A1017501703105.

Tong Tong Wu and Kenneth Lange. Coordinate descent algorithms for lasso penalized regression. *The Annals of Applied Statistics*, 2(1), March 2008. ISSN 1932-6157. doi: 10.1214/07-aoas147. URL http://dx.doi.org/10.1214/07-AOAS147.

Shun Yao, Shinjae Yoo, and Dantong Yu. Prior knowledge driven causality analysis in gene regulatory network discovery. In *2013 IEEE 13th International Conference on Data Mining Workshops*, pp. 124–130, 2013. doi: 10.1109/ICDMW.2013.107.

Yujie Zhao and Xiaoming Huo. A survey of numerical algorithms that can solve the lasso problems. *arXiv*, 2023. URL https://doi.org/10.48550/arXiv.2303.03576.

Cunlu Zou, Christophe Ladroue, Shuixia Guo, and Jianfeng Feng. Identifying interactions in the time and frequency domains in local and global networks - a granger causality approach. *BMC Bioinformatics*, 11(1), 2010. doi: 10.1186/1471-2105-11-337.

Hui Zou. The adaptive lasso and its oracle properties. *Journal of the American Statistical Association*, 101(476):1418–1429, 2006. doi: 10.1198/016214506000000735. URL https://doi.org/10.1198/016214506000000735.

Álvaro Méndez Civieta, M. Carmen Aguilera-Morillo, and Rosa E. Lillo. Asgl: A python package for penalized linear and quantile regression. *arXiv*, 2021. URL https://doi.org/10.48550/arXiv.2303.03576.

## A Proofs

## A.1 Statistical Model

Proof of (13). The MAP of the model (4) is given by:

$$\widehat{\mathbf{A}},\widehat{\mathbf{C}}=\operatorname*{arg\,max}_{\mathbf{A},\mathbf{C}}\;L(\mathbf{A},\mathbf{C}\mid\{X_{i}[1:t]\}_{i=1}^{N},\mathbf{A}^{\mathrm{prior}})\tag{17}$$
$$\text{subject to}\;\;\mathbf{A}\geq0\tag{18}$$

where L(·) is the likelihood function.
Using the Bayes formula, one has:

$$\begin{aligned}
L(A^{*},C^{*}\mid X^{1:t},\mathbf{A}^{\mathrm{prior}}) &:= \mathbb{P}(A^{*},C^{*}\mid X^{1:t},\mathbf{A}^{\mathrm{prior}})\\
&= \mathbb{P}(C^{*}\mid X^{1:t},\mathbf{A}^{\mathrm{prior}})\times\mathbb{P}(A^{*}\mid C^{*},X^{1:t},\mathbf{A}^{\mathrm{prior}})\\
&= \mathbb{P}(C^{*}\mid X^{1:t},\mathbf{A}^{\mathrm{prior}})\times\mathbb{P}(A^{*}\mid C^{*},\mathbf{A}^{\mathrm{prior}})\\
&= \frac{\mathbb{P}(X^{1:t}\mid C^{*})\times\mathbb{P}(C^{*}\mid\mathbf{A}^{\mathrm{prior}})}{\mathbb{P}(X^{1:t}\mid\mathbf{A}^{\mathrm{prior}})}\times\mathbb{P}(A^{*}\mid C^{*},\mathbf{A}^{\mathrm{prior}})\\
&= \frac{\mathbb{P}(X^{1:t}\mid C^{*})}{\mathbb{P}(X^{1:t}\mid\mathbf{A}^{\mathrm{prior}})}\times\mathbb{P}(C^{*}\mid\mathbf{A}^{\mathrm{prior}})\times\mathbb{P}(A^{*}\mid C^{*},\mathbf{A}^{\mathrm{prior}})
\end{aligned}$$

Then:

$$\begin{aligned}
\mathbb{P}(C^{*}\mid\mathbf{A}^{\text{prior}})\times\mathbb{P}(A^{*}\mid C^{*},\mathbf{A}^{\text{prior}}) &= \mathbb{P}(C^{*}\mid A^{*},\mathbf{A}^{\text{prior}})\times\mathbb{P}(A^{*}\mid\mathbf{A}^{\text{prior}})\\
&= \mathbb{P}(C^{*}\mid A^{*})\times\mathbb{P}(A^{*}\mid\mathbf{A}^{\text{prior}})\\
&= \mathbb{P}(C^{*}\mid A^{*})\times\mathbb{P}(\mathbf{A}^{\text{prior}}\mid A^{*})\frac{\mathbb{P}(A^{*})}{\mathbb{P}(\mathbf{A}^{\text{prior}})}\,.
\end{aligned}$$

Assuming that $A^{*}$ follows the improper distribution $\mathbb{1}_{S_{p}(\mathbb{R})}$, the MAP is finally given by:

$$\max_{A^{*},C^{*}}\mathbb{P}(X^{1:t}\mid C^{*})\times\mathbb{P}(C^{*}\mid A^{*})\times\mathbb{P}(\mathbf{A}^{\text{prior}}\mid A^{*})
=\max_{A^{*},C^{*}}\mathcal{N}(X^{t};C^{*}X^{t-1},\sigma_{X}^{2}Id)\times\prod_{i,j}\mathrm{Laplace}(C^{*}_{i,j};0,A^{*}_{i,j})\times\prod_{i,j}\mathcal{N}(\mathbf{A}^{\text{prior}}_{i,j};A^{*}_{i,j},\sigma^{2})$$

Applying the log function to the previous expression concludes the proof.

## A.2 Proof Of Theorem 12

Definition 14 (Regular function). We say that f is **regular** at z ∈ dom f if f′(z; d) ≥ 0 for all d = (d₁, ..., d_p) such that f′(z; (0, ..., d_k, ..., 0)) ≥ 0, k = 1, ..., p.

Remark 15. If f is differentiable then f′(x; d) = ∇f(x)ᵀd. So, if f′(z; (0, ..., d_k, ..., 0)) ≥ 0 for all k = 1, ..., p, we have that:

$$f^{\prime}(z;(d_{1},...,d_{k},...,d_{p}))=\nabla f(x)^{T}(d_{1},...,d_{k},...,d_{p})=\sum_{k=1}^{p}\nabla f(x)^{T}(0,...,d_{k},...,0)\geq0.\tag{19}$$

Thus a differentiable function is regular.

Definition 16 (Stationary points). We say that z is a **stationary point** of f if z ∈ dom f and f′(z; d) ≥ 0 for all d. We say that z is a **coordinatewise minimum** point of f if z ∈ dom f and f(z + (0, ..., d_k, ..., 0)) ≥ f(z) for all d_k ∈ R^{n_k}, for all k = 1, ..., N.

Remark 17. If z is a coordinatewise minimum point of f, z is a stationary point of f whenever f is regular at z.

Using this framework, the following theorem holds.

Theorem 18 (Theorem 4.1 in (Tseng, 2001)). Assume that the level set L_X = {x | f(x) ≤ f(x₀)} is compact (where x₀ ∈ R) and that f is continuous on L_X. Then, the sequence {x^r}_{r=1,2,...} generated by the Block Coordinate Descent method is defined and bounded. Moreover, the following statement holds: if f(x₁, ..., x_N) has at most one minimum in x_k for k ∈ {2, ..., N − 1}, then every cluster point z of {x^r}_{r ≡ N−1 mod N} is a coordinatewise minimum point of f. In addition, if f is regular at z, then z is a stationary point of f.

Actually, the case N = 2 is very simple, and we do not need any supplementary assumptions beyond the continuity of f and the compactness of L_X to prove the convergence of the algorithm. Indeed, since {2, ..., N − 1} = ∅ in the case N = 2, the point (3) can be applied directly without satisfying any assumption about the number of minima, so one directly obtains the following corollary.

Corollary 19. Assume that the level set L_X is compact, that f is continuous on L_X and that N = 2. Then, the sequence {x^r}_{r=1,2,...} generated by the BCD method is defined and bounded. Moreover, every cluster point z of {x^r}_{r ≡ N−1 mod N} is a coordinatewise minimum point of f.
In addition, if f is regular at z, then z is a stationary point of f.

A simplified version of the proof provided in (Tseng, 2001) for the case N = 2 is given below. Now we need to prove that the function f of our model (the MAP) satisfies the conditions of the previous theorem.

Proposition 20. The function $f:\mathbb{R}^{p^{2}}\times(\mathbb{R}_{+}^{*})^{p^{2}}\to\mathbb{R}$ defined by

$$f(\mathbf{C},\mathbf{A})=\frac{1}{N}\sum_{n=1}^{N}||X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]||_{2}^{2}+\lambda\sum_{i<j}\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}+\lambda\sum_{i<j}\log(2\mathbf{A}_{i,j})+\gamma||\mathbf{A}-\mathbf{A}^{\mathrm{prior}}||_{F}^{2}\tag{20}$$

is regular.

Proposition 21. The function f defined in (20) but constrained on $\mathbb{R}^{p^{2}}\times[\epsilon,+\infty)^{p^{2}}$ with ϵ > 0 satisfies the assumptions in (19).

Finally, using Propositions 20 and 21, we show that our objective function in (7) verifies the assumptions in (19), and applying the theorem, we obtain the result.

Proof of 19. Let {x^r}_{r=1,2,...} be the sequence generated by the BCD algorithm. By definition of the algorithm, one has f(x^{r+1}) ≤ f(x^r) for all r, and x^{r+1} ∈ L_X for all r. Since L_X is compact, {x^r}_{r∈ℛ} converges towards z = z¹. In the same way, we can assume w.l.o.g. that {x^{r+1}}_{r∈ℛ} converges towards z² (taking a sub-sequence).

First, {f(x^r)}_{r∈ℛ} is decreasing (and bounded), so it converges, and one has:

$$f(x^{0})\geq\lim_{r\in\mathcal{R}\to+\infty}f(x^{r})=f(z)=f(z^{1})=f(z^{2}).$$

Now, we assume that for every r ∈ ℛ, $x^{r}=\operatorname*{argmin}_{x}f(x,x_{2}^{r-1})$, i.e. for all r:

$$\begin{aligned}
f(x^{r})&\leq f(x^{r}+(d_{1},0)),\;\;\forall d_{1}\\
f(x^{r+1})&\leq f(x^{r+1}+(0,d_{2})),\;\;\forall d_{2}\\
x_{1}^{r}&=x_{1}^{r+1}\;\;\;\text{where}\;\;x^{r}=(x_{1}^{r},x_{2}^{r})
\end{aligned}$$

Since f is continuous on L_X, we get:

$$\begin{aligned}
f(z^{1})&\leq f(z^{1}+(d_{1},0)),\;\;\forall d_{1}\\
f(z^{2})&\leq f(z^{2}+(0,d_{2})),\;\;\forall d_{2}\\
z_{1}^{2}&=z_{1}^{1}
\end{aligned}$$

Then, for all d₂:

$$\begin{aligned}
f(z^{1})=f(z^{2})&\leq f((z_{1}^{2},z_{2}^{2})+(0,d_{2}))\\
&=f((z_{1}^{1},z_{2}^{2})+(0,d_{2}))\\
&=f((z_{1}^{1},z_{2}^{1})+(0,z_{2}^{2}-z_{2}^{1})+(0,d_{2}))\\
&=f(z^{1}+(0,\tilde{d}_{2}))
\end{aligned}$$

Since z¹ = z, we proved that for all d₁, d₂:

$$\begin{aligned}
f(z)&\leq f(z+(d_{1},0)),\;\;\forall d_{1}\\
f(z)&\leq f(z+(0,d_{2})),\;\;\forall d_{2}
\end{aligned}$$

so z is a coordinatewise minimum of f. Finally, if f is regular, the previous inequalities become:

$$f^{\prime}(z;(d_{1},0))\geq0,\;\;\forall d_{1}\qquad\text{and}\qquad f^{\prime}(z;(0,d_{2}))\geq0,\;\;\forall d_{2}$$

and by definition z is a stationary point of f.

Proof of 20. The only points where f is not differentiable are {(C, A) | ∃ i, j ; C_{i,j} = 0}, because of the absolute value.
Let's write:

$$f(\mathbf{C},\mathbf{A})=g(\mathbf{C},\mathbf{A})+\lambda\sum_{i<j}h_{i,j}(\mathbf{C},\mathbf{A})+l(\mathbf{C},\mathbf{A})$$

where

$$\begin{aligned}
g(\mathbf{C},\mathbf{A})&=\frac{1}{N}\sum_{n=1}^{N}||X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]||_{2}^{2}\\
h_{i,j}(\mathbf{C},\mathbf{A})&=\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}\\
l(\mathbf{C},\mathbf{A})&=\lambda\sum_{i<j}\log(2\mathbf{A}_{i,j})+\gamma||\mathbf{A}-\mathbf{A}^{\mathrm{prior}}||_{F}^{2}.
\end{aligned}$$

g and l are differentiable, so for all (C, A) ∈ $\mathbb{R}^{p^{2}}\times(\mathbb{R}_{+}^{*})^{p^{2}}$ and for all (D_C, D_A) ∈ $\mathbb{R}^{p^{2}}\times\mathbb{R}^{p^{2}}$ such that (C + D_C, A + D_A) ∈ dom f:

$$g^{\prime}((\mathbf{C},\mathbf{A});(D_{C},D_{\mathbf{A}}))=\sum_{i,j}g^{\prime}((\mathbf{C},\mathbf{A});(D_{C}^{(i,j)},0))+g^{\prime}((\mathbf{C},\mathbf{A});(0,D_{\mathbf{A}}^{(i,j)}))$$

$$l^{\prime}((\mathbf{C},\mathbf{A});(D_{C},D_{\mathbf{A}}))=\sum_{i,j}l^{\prime}((\mathbf{C},\mathbf{A});(D_{C}^{(i,j)},0))+l^{\prime}((\mathbf{C},\mathbf{A});(0,D_{\mathbf{A}}^{(i,j)}))$$

where $D_{C}^{(i,j)}$ is the matrix with zero values everywhere except the coefficient (i, j), which is equal to $(D_{C})_{i,j}$.

If C_{i,j} ≠ 0 and C_{j,i} ≠ 0, then h_{i,j} is differentiable at (C, A), so we have the same result. Otherwise, we need to compute the lower directional derivative of h_{i,j} at (C, A) with C_{i,j} = 0 or C_{j,i} = 0. One can compute that:

- $h'_{i,j}((\mathbf{C},\mathbf{A});(D_C,D_\mathbf{A}))=\frac{|D_C^{(i,j)}|+|D_C^{(j,i)}|}{\mathbf{A}_{i,j}}$ if C_{i,j} = 0 and C_{j,i} = 0;
- $h'_{i,j}((\mathbf{C},\mathbf{A});(D_C,D_\mathbf{A}))=\frac{|D_C^{(i,j)}|}{\mathbf{A}_{i,j}}+\frac{\mathrm{sign}(\mathbf{C}_{j,i})D_C^{(j,i)}}{\mathbf{A}_{i,j}}-\frac{D_{\mathbf{A}}^{(i,j)}|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}^2}$ if C_{i,j} = 0 and C_{j,i} ≠ 0, and we can do the same for the last case.

Thus we still have that:

$$h_{i,j}^{\prime}((\mathbf{C},\mathbf{A});(D_{C},D_{\mathbf{A}}))=\sum_{i,j}h_{i,j}^{\prime}((\mathbf{C},\mathbf{A});(D_{C}^{(i,j)},0))+h_{i,j}^{\prime}((\mathbf{C},\mathbf{A});(0,D_{\mathbf{A}}^{(i,j)})).$$

Finally, by the definition of a regular function, one has that f is regular at every (C, A) ∈ $\mathbb{R}^{p^{2}}\times(\mathbb{R}_{+}^{*})^{p^{2}}$.

Proof of 21. First, it is clear that f is continuous on L_X. Let's consider again the decomposition:

$$f(\mathbf{C},\mathbf{A})=g(\mathbf{C})+\lambda\sum_{i,j}h_{i,j}(\mathbf{C},\mathbf{A})+l(\mathbf{A})$$

where

$$\begin{aligned}
g(\mathbf{C})&=\frac{1}{N}\sum_{n=1}^{N}||X^{(n)}[t]-\mathbf{C}X^{(n)}[t-1]||_{2}^{2}\\
h_{i,j}(\mathbf{C},\mathbf{A})&=\frac{|\mathbf{C}_{i,j}|+|\mathbf{C}_{j,i}|}{\mathbf{A}_{i,j}}\\
l(\mathbf{A})&=\lambda\sum_{i<j}\log(2\mathbf{A}_{i,j})+\gamma||\mathbf{A}-\mathbf{A}^{\mathrm{prior}}||_{F}^{2}.
\end{aligned}$$

It is clear that $\lim_{||\mathbf{C}||\to+\infty}g(\mathbf{C})=+\infty$ and $\lim_{||\mathbf{A}||\to+\infty}l(\mathbf{A})=+\infty$. Then, since h_{i,j}(C, A) ≥ 0 for all C, A and all i, j, we have that:

$$\lim_{||(\mathbf{C},\mathbf{A})||\to+\infty}f(\mathbf{C},\mathbf{A})=+\infty.\tag{21}$$

We proved that f is coercive; it follows that L_X is bounded. Moreover, $\lim_{\mathbf{A}_{i,j}\to0}l(\mathbf{A})=+\infty$ for i, j ∈ [|1, p|]. Additionally, f is continuous, so $f^{-1}((-\infty,f(x^{(0)})])$ is closed and finally L_X **is compact**.

Proposition 22. The function f defined in (20) is not convex.

Proof of 22. The function (20) is a function of (C, (A_{i,j})_{1≤i,j≤p}), so it is sufficient to prove that it is not convex in A_{i,j} for fixed i and j and for a fixed value of C.
The second derivative with respect to A_{i,j} is:

$$\partial_{i,j}^{2}f(\mathbf{C},\mathbf{A})=-\frac{\lambda}{\mathbf{A}_{i,j}^{2}}+2\frac{|C_{i,j}|+|C_{j,i}|}{\mathbf{A}_{i,j}^{3}}+2\gamma\tag{22}$$

so it has the same sign as the degree-3 polynomial:

$$-\lambda\mathbf{A}_{i,j}+2(|C_{i,j}|+|C_{j,i}|)+2\gamma\mathbf{A}_{i,j}^{3}\tag{23}$$

Then, the minimum of this polynomial on [0, +∞) is reached at $\sqrt{\lambda/(6\gamma)}$ and takes the value $-\frac{2\lambda^{3/2}}{3\sqrt{6\gamma}}+2(|C_{i,j}|+|C_{j,i}|)$, which can be negative for small values of |C_{i,j}| + |C_{j,i}|. Thus the second derivative can take negative values for certain values of C_{i,j} and C_{j,i}.

## A.3 Time Complexity

Lemma 23 (C update time complexity). Let C_Lasso(p, N) be the time cost of training a Lasso estimator to fit VAR(1) parameters in dimension p with N samples; then the time complexity of a C update is in $O\left(p^{2}\times N+\mathcal{C}_{Lasso}(p,N)\right)$.

Proof of 23. The C update is done by solving an Adaptive Lasso problem, so recalling that an Adaptive Lasso problem can be written as a Lasso problem using p × N multiplications and that we solve p Adaptive Lasso problems (cf. 3.4.1), the time complexity of this step is in $O\left(p^{2}\times N+\mathcal{C}_{Lasso}(p,N)\right)$.

Lemma 24 (A update time complexity). The time complexity of an A update is in $O(p^{2})$.

Proof of 24. The A update is done by computing, in O(1), the closed form given in 3.4.2 for each value A_{i,j}, so this update is in O(p²).

In order to completely express the time complexity, we need to compute C_Lasso(p, N). Note that when using gradient-descent-based methods to solve an optimization problem, whereas a convergence-rate analysis can be conducted (cf. (Zhao & Huo, 2023)), the time complexity depends on the stopping criterion of the algorithm. Thus we conduct the time-complexity analysis assuming that ADMM is utilized with a fixed number of iterations N_ADMM.

Lemma 25 (ADMM complexity for Lasso). The time complexity of the ADMM algorithm (with a fixed number of steps) to solve one Lasso problem in dimension p is $O(p^{3}+N\times p^{2})$.

Proof of 25. The update formula of the ADMM given in 3.4.1 requires multiplying a p × N matrix with an N × p matrix, which is in O(N × p²), and inverting a p × p matrix, which is in O(p³).

Theorem 26. Let C_AALasso(p, N) be the time cost of training an AALasso estimator to fit VAR(1) parameters in dimension p with N samples; then:

$${\mathcal{C}}_{AALasso}(p,N)\underset{p,N}{=}O({\mathcal{C}}_{Lasso}(p,N))\underset{p,N}{=}O\left(p^{2}\times N+p^{3}\right).\tag{24}$$

Proof of 26. Summing the time complexity of the A step (Lemma 24) and of the C step (Lemma 23) gives a complexity in $O\left(p^{2}+p^{2}\times N+\mathcal{C}_{Lasso}(p,N)\right)$. Since the matrix inversion needs to be performed only once for the p Lasso problems at each step, the complexity C_Lasso(p, N), using Lemma 25, becomes $O(p^{3}+N\times p^{2})$, finally resulting in a complexity in $O\left(p^{2}+p^{2}\times N+p^{3}\right)=O\left(p^{2}\times N+p^{3}\right)=O(\mathcal{C}_{Lasso}(p,N))$.

## B Additional Experiments

## B.1 Number Of Iterations

For the synthetic experiments in Section 4, we motivated the choice of N_iter = 10 by Figure 8. While this parameter yields good results in various scenarios, it can be interesting to understand whether it is related to the dataset. Convergence-analysis results are shown in Figures 14 and 15. A trend seems to appear: the larger N, the faster the convergence. Moreover, while the noise impacts the performance of AALasso, the convergence rate seems to remain unchanged for σ_A = 0.1 or σ_A = 0.25.
Figure 14: F1-score and Prediction error through AALasso iterations for N ∈ {40, 100, 140} and σ_A = 0.1.

## B.2 Runtime

## B.3 Initialization Impact

Let's recall the algorithm used to train AALasso:

Algorithm 3: Fitting algorithm.
  input: N_iter, λ, γ, A
  output: Ĉ, Ŵ
  W^(0) ← subdiagonal values of A
  for i ← 1 to N_iter do
    C^(i) ← f_C(C, W^(i−1)), where f_C denotes the update in (3.4.1)
    W^(i) ← f_W(C^(i), W), where f_W denotes the update in (3.4.2)
  return C^(N_iter), W^(N_iter)

The results presented in the previous sections were obtained by initializing A^(0) with A^prior. However, it can be interesting to test the algorithm with other initializations. We therefore conducted experiments initializing A^(0) with all-ones values (denoted AALasso-ones), with the solution of the Least Squares problem (denoted AALasso-LS), or with random values (denoted AALasso-random). Our method is proven to converge towards a stationary point, no matter the initialization of A^(0). However, since our objective function is not convex, a better local minimum could be found starting from a vector of ones or a random vector rather than from A^prior. The results for all the scenarios tested are presented in Figures 16 and 17. Globally, the results are similar for all initializations tested, and it is not clear whether one of the initializations is better than the others. However, we remark that in settings with very few samples (N = 40), the random initialization surprisingly outperforms the others, with a gain of 2.5 in F1-score compared to AALasso for all noise levels.

Figure 15: F1-score and Prediction error through AALasso iterations for N ∈ {40, 100, 140} and σ_A = 0.25.

Figure 16: Impact of the initialization of A^(0) on the F1-scores with σ_A ∈ {0.02, 0.1, 0.25, 0.35}.

## B.4 Comparison With Random Prior Graph

In this last experiment, we compare our results with the AALasso method taking a random A^prior, to check that the prior structure is well leveraged and that a random L2 penalization cannot achieve the same performance. The values A^prior_{i,j} are sampled independently from a uniform distribution on [0.2, 1]. The results are presented in Figure 18, and we see that the results using a random A^prior are very poor regarding both F1-score and Prediction error. This reinforces the thesis that AALasso judiciously leverages the information provided by A^prior.

Figure 17: Impact of the initialization of A^(0) on the F1-scores with N ∈ {40, 100, 180, 250}.

Figure 18: Comparison of AALasso, Lasso and LS+ALasso with AALasso taking a random A^prior, with σ_A = 0.1.

## B.5 Runtime

Here, several scenarios are tested to complete Figure 9 and to show that the behavior of the F1-score as a function of the runtime remains similar for several parameter choices.

Figure 19: F1-score as a function of runtime for Lasso, LS+ALasso and AALasso for N_iter ∈ {0, 5, 10, 15}. (a) N = 40 samples, σ_A = 0.1. (b) N = 100 samples, σ_A = 0.1. (c) N = 40 samples, σ_A = 0.25. (d) N = 100 samples, σ_A = 0.25.
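The runtimes reported in this appendix are dominated by the ADMM Lasso subproblems analyzed in Lemma 25. As a closing illustration, here is a minimal sketch of such a solver run for a fixed number of iterations; the names (`admm_lasso`, `rho`, `n_admm`) and defaults are illustrative assumptions, not the implementation benchmarked above. The p × p inversion is computed once and cached, matching the O(p³ + N × p²) count, after which each iteration costs O(p²).

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(X, y, lam, rho=1.0, n_admm=100):
    """Sketch of ADMM for min_b 0.5*||X b - y||_2^2 + lam*||b||_1.

    X is the (N, p) design matrix and y the (N,) target vector.
    Forming X^T X costs O(N p^2); the inversion costs O(p^3) and is
    done once, matching the complexity counted in Lemma 25.
    """
    N, p = X.shape
    XtX = X.T @ X
    Xty = X.T @ y
    M = np.linalg.inv(XtX + rho * np.eye(p))  # cached p x p inverse
    b = np.zeros(p)
    z = np.zeros(p)
    u = np.zeros(p)
    for _ in range(n_admm):                   # fixed number of steps
        b = M @ (Xty + rho * (z - u))         # O(p^2) per iteration
        z = soft_threshold(b + u, lam / rho)
        u = u + b - z
    return z
```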
Review 1:

Summary: The paper introduces a novel optimization problem that makes it possible to estimate VAR models while incorporating the use of Granger causality graphs, based on the assumption that there exists a noisy prior causal graph for the specific multivariate time-series input. Importantly, the authors provide an efficient two-block coordinate descent algorithm while also providing guarantees for the convergence of such an algorithm. The results showcase the superiority of the method against the considered baselines. Lastly, they showcase the importance of sparsity in the performance of the general model.

Strengths and Weaknesses:

Strengths:
1) The paper provides an extensive mathematical study of the proposed two-stage optimization method and, importantly, convergence guarantees.
2) The loss function of equation (7) of the main paper is well-motivated and principled under the statistical model provided in Figure (1) of the main paper.
3) The model is simple and intuitive.

Weaknesses:
1) In real-world scenarios, the problem setting where there exists even a noisy A_prior matrix is rather limiting. In most settings, such an assumption is difficult to make. In this sense, the proposed approach is unable to work in the more general case where prior knowledge is not present.
2) The baselines are rather limited. Comparison with more recent and competitive methods that aim at learning Granger causality graphs from time series data should be provided. I am including just a few here: i) Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data, Löwe et al, 2022 ii) Discovering Nonlinear Relations with Minimum Predictive Information Regularization, Wu et al, 2020

Requested Changes:
1) Please provide an extensive discussion on the problem setting, and on how common the existence of prior Granger causality information is.
2) The VAR coefficient matrix C, as stated in the main paper, expresses the directed Granger causality graph. Given that, why are A and Aprior chosen to be undirected? Would it make more sense for the prior to also be directed? Please elaborate more on this modeling decision.
3) Please include more recent and complex baselines and compare how your method competes against them in discovering causality graphs.

Broader Impact Concerns: No ethical concerns.

==================================================

Review 2:

Summary: The current paper proposes a method to build a vector autoregressive model for multivariate forecasting of time series data while recovering an underlying causal network between the time series. The problem is cast as a linear program to predict the current value of a time series by a linear combination of all the series' history with a predefined depth. The least squares cost function is added to a reformulation of the adaptive lasso such that the undirected edges that act as coefficients in the cost function are penalized based on a prior undirected network. The edges of the prior network are sampled uniformly from a distribution in [0.2, 1]. In addition, an l2 penalty is set to adjust the prior graph according to the observations, similar to an l2 regularisation in linear regression, which is known to be equivalent to setting a Gaussian prior on the learning parameters of the MLE problem. The authors develop a similar proof for the same argument on the proposed model. The problem is proved to be nonconvex, and an ADMM algorithm is derived and proved to converge to stationary points.
The results indicate improved performance in forecasting accuracy and network reconstruction error on several real-world datasets.

Strengths and Weaknesses:

Strengths: The paper is well written, with clear arguments and proofs. The model is interpretable, which is very important for practitioners in the targeted fields. The algorithm is sound, and the convergence proof for a non-convex and non-differentiable function is a useful result. The method is tested on both simulated and real data, and it improves over other traditional methods. A visual explanation of the results facilitates the qualitative evaluation of the methods.

Weaknesses: To my understanding, the method aims to reconstruct the causal network. This is useful in cases where we search for new causal relationships while observing an already established causal network, i.e. enriching the network. If this is the aim of the method, it is not perfectly clear from the title or the text and the experiments. Specifically, the simulated data utilize a noisy representation as a prior and are evaluated based on the reconstruction error with respect to the correct network. This is not very clear with the real data, as neither the noisy network nor the reconstruction error (accuracy in 4.5.1?) is clarified sufficiently. Similar to Section 4.1, an integral experiment is to remove edges at varying ratios from the real network, use it as a prior, and calculate how close the retrieved network is to the real one. An even more important question is the role of A in eq. 7. If C is the retrieved network and A^prior the noisy observation, what does A represent? I imagine it cannot be the actual causal network that we aim to identify with C, because in the real world this is not observed. In addition, the method is not tested sufficiently. Comparison with other causal network discovery models https://arxiv.org/pdf/2006.10833.pdf or algorithms that do not utilize priors or facilitate prediction, such as PCMCI https://www.science.org/doi/pdf/10.1126/sciadv.aau4996 and its variants https://openreview.net/forum?id=dYeUvLUxBQ, can be beneficial to argue about the usability of the model. Accordingly, comparison with other forecasting methods https://arxiv.org/pdf/1704.04110.pdf and simple average-window benchmarks for the accuracy need to be added. A final vague point pertains to the identification of confounders. The model seems to allow causal relationships, for example s1 -> s2, s1 -> s3, to create spurious edges s2 -> s3 in case s2 precedes s3, since s2 and s3 change behaviour according to s1.

Requested Changes: "The closer the i and j are in the graph the lower the penalty": My confusion here pertains to the nature of A. If A is a weighted adjacency matrix, the elements show strength, not proximity. If A indicates distance, i.e. in number of hops, it is A + A^2 + A^3 etc. That however would mean that A is no longer sparse, and the computational burden for real-world graphs would be a problem. This could be clarified further. Some simulations indicating how the model behaves under a non-stationary setting would make the paper more complete. Moreover, it is not clear if equation 16 indicates a normalization of solely the input or of the whole time series. If it is the latter, doesn't it cause a change in the results? Although explained in 3.5 and used in 4.5.2, it is not perfectly clear to me: does the model for d > 1 break down into d different models, i.e. graphs learnt simultaneously?
Causality is inherently a directed measure, so the intuition behind setting an undirected graph as a prior is not fully clear. To verify the intuition of Section 3.4, wouldn't sampling the prior edges from a Gaussian instead of a uniform distribution improve the final reconstruction? Typos: ant, wi,j, the generalize prior

Broader Impact Concerns: There is no obvious ethical concern for the paper's broader impact.

==================================================

Review 3:

Summary: The paper introduces a model to learn network Granger causality by using prior knowledge in the form of an undirected graph. The study aims to mitigate the challenge of estimating Granger causality graphs in high-dimensional scenarios with limited samples. Experimental results from synthetic and real-world datasets demonstrate the effectiveness of the proposed method over existing alternatives. The work has a solid theoretical and technical foundation. However, some serious issues need to be clarified and resolved. I lean toward a major revision for the work.

Strengths and Weaknesses:

Strengths:
S1: The work has a solid theoretical foundation and support.
S2: The authors conducted extensive experiments to evaluate the performance of the proposed method.

Weaknesses:
W1: Although the work has a solid theoretical foundation, it lacks the high-level intuition needed to illustrate the idea, which creates a barrier for a broad audience.
W2: Although the authors provided extensive preliminaries to help the reviewer delve into the research, some details of the network Granger causality should be clarified.
W3: It is unclear how to construct the prior matrix Aprior. The concept is suddenly introduced in the problem formulation without a detailed explanation.
W4: The assumption in the problem formulation (i.e. if two nodes (i, j) are far away in the graph, then there is no Granger causality between the two time series) seems to be problematic and needs further discussion.
W5: Compared to existing works, it is still unclear what the advantages of the proposed method, which uses the graph prior knowledge, are.
W6: The authors claimed the design can deal with high-dimensional variables with limited samples. This case often means n << d, where n is the number of samples and d is the dimension. However, the experiments fall short of evaluating this case.

Requested Changes: The authors can refer to the weaknesses to make changes. Specifically, the authors should make the following changes.
C1: The authors should explain why a non-zero matrix entry Cij can capture the Granger cause between Xi and Xj.
C2: Please add high-level intuition to illustrate the idea.
C3: Please provide a detailed explanation of how to construct the graph prior Aprior in the problem formulation.
C4: Is there a long-distance dependency between two nodes of Aprior that are far away? If yes, then the assumption may be problematic. If not, please justify the point.
C5: The authors should conduct experiments on extremely scarce data and higher-dimensional data to validate performance.
C6: "Equation (2) capture specific temporal dependencies between the p time series". However, p is the dimension of variables. Is this a typo?
C7: Compared to existing works, please illustrate the advantages of the proposed method and the reasons.

Broader Impact Concerns: No ethical concerns

==================================================

Metareview:

Recommendation: Accept as is

Comment: The reviewers provided insightful comments and the authors have responded to those comments.
The clarity of presentation is much improved. One reviewer expressed some concerns about the extent of comparisons to other state-of-the-art methods. These concerns are reasonable. However, the authors scope their claim to rest upon the evidence: "Thus the incorporation of prior information has proven instrumental in achieving superior accuracy and robustness when compared to classical algorithms in this domain." Overall, the reviewers were in favor of acceptance, and I agree. ==================================================
# The Vendi Score: A Diversity Evaluation Metric For Machine Learning

Dan Friedman¹ and **Adji Bousso Dieng**¹,²

¹Department of Computer Science, Princeton University
²Vertaix

Reviewed on OpenReview: https://openreview.net/forum?id=g97OHbQyk1

## Abstract

Diversity is an important criterion for many areas of machine learning (ml), including generative modeling and dataset curation. However, existing metrics for measuring diversity are often domain-specific and limited in flexibility. In this paper we address the diversity evaluation problem by proposing the *Vendi Score*, which connects and extends ideas from ecology and quantum statistical mechanics to ml. The Vendi Score is defined as the exponential of the Shannon entropy of the eigenvalues of a similarity matrix. This matrix is induced by a user-defined similarity function applied to the sample to be evaluated for diversity. In taking a similarity function as input, the Vendi Score enables its user to specify any desired form of diversity. Importantly, unlike many existing metrics in ml, the Vendi Score does not require a reference dataset or distribution over samples or labels; it is therefore general and applicable to any generative model, decoding algorithm, and dataset from any domain where similarity can be defined. We showcase the Vendi Score on molecular generative modeling, where we found it addresses shortcomings of the current diversity metric of choice in that domain. We also applied the Vendi Score to generative models of images and decoding algorithms of text, where we found it confirms known results about diversity in those domains. Furthermore, we used the Vendi Score to measure mode collapse, a known shortcoming of generative adversarial networks (gans). In particular, the Vendi Score revealed that even gans that capture all the modes of a labelled dataset can be less diverse than the original dataset. Finally, the interpretability of the Vendi Score allowed us to diagnose several benchmark ml datasets for diversity, opening the door for diversity-informed data augmentation.¹

¹Code for calculating the Vendi Score is available at https://github.com/vertaix/Vendi-Score.

## 1 Introduction

Diversity is a criterion that is sought after in many areas of machine learning (ml), from dataset curation and generative modeling to reinforcement learning, active learning, and decoding algorithms. A lack of diversity in datasets and models can hinder the usefulness of ml in many critical applications, e.g. scientific discovery. It is therefore important to be able to measure diversity. Many diversity metrics have been proposed in ml, but these metrics are often domain-specific and limited in flexibility. These include metrics that define diversity in terms of a reference dataset (Heusel et al., 2017; Sajjadi et al., 2018), a pre-trained classifier (Salimans et al., 2016; Srivastava et al., 2017), or discrete features, like n-grams (Li et al., 2016). In this paper, we propose a general, reference-free approach that defines diversity in terms of a user-specified similarity function. Our approach is based on work in ecology, where biological diversity has been defined as the exponential of the entropy of the distribution of species within a population (Hill, 1973; Jost, 2006; Leinster, 2021). This value can be interpreted as the effective number of species in the population. To adapt this approach to ML, we define the diversity of a collection of elements x1, . . . , xn as the exponential of the entropy of the
eigenvalues of the n × n similarity matrix K, whose entries are equal to the similarity scores between each pair of elements. This entropy can be seen as the von Neumann entropy associated with K (Bach, 2022), so we call our metric the *Vendi Score*, for the von Neumann diversity.

Figure 1: (a) The Vendi Score can be interpreted as the effective number of unique elements in a sample. It increases linearly with the number of modes in the dataset. IntDiv, the expected dissimilarity, becomes less sensitive as the number of modes increases, converging to 1. (b) Combining distinct similarity functions can increase the Vendi Score, as should be expected of a diversity metric, while leaving IntDiv unchanged. (c) IntDiv does not take into account correlations between features, but the Vendi Score does. The Vendi Score is highest when the items in the sample differ in many attributes, and the attributes are not correlated with each other.

Contributions. We summarize our contributions as follows:

- We extend ecological diversity to ML, and propose the Vendi Score, a metric for evaluating diversity in ML. We study the properties of the Vendi Score, which provides us with a more formal understanding of desiderata for diversity.
- We showcase the flexibility and wide applicability of the Vendi Score, characteristics that stem from its sole reliance on the sample to be evaluated for diversity and a user-defined similarity function, and highlight the shortcomings of existing metrics used to measure diversity in different domains.

## 2 Are We Measuring Diversity Correctly In Ml?

Several existing metrics for diversity rely on a reference distribution or dataset. These reference-based metrics define diversity in terms of coverage of the reference. They assume access to an embedding function, such as a pretrained Inception model (Szegedy et al., 2016), that maps samples to real-valued vectors. One example of a reference-based metric is Fréchet Inception distance (fid) (Heusel et al., 2017), which measures the Wasserstein-2 distance between two Gaussian distributions, one Gaussian fit to the embeddings of the reference sample and another one fit to the embeddings of the sample to be evaluated for diversity. fid was originally proposed for evaluating image generative adversarial networks (gans) but has since been applied to text (Cífka et al., 2018) and molecules (Preuer et al., 2018) using domain-specific neural network encoders. Sajjadi et al. (2018) proposed a two-metric evaluation paradigm using precision and recall, with precision measuring quality and recall measuring diversity in terms of coverage of the reference distribution. Several other variations of precision and recall have been proposed (Kynkäänniemi et al., 2019; Simon et al., 2019; Naeem et al., 2020). Compared to these approaches, the Vendi Score is a reference-free metric, measuring the intrinsic diversity of a set rather than the relationship to a reference distribution. This means that the Vendi Score should be used alongside a quality metric, but it can be applied in settings where there is no reference distribution.

Some other existing metrics evaluate diversity using a pre-trained classifier, therefore requiring labeled datasets. For example, the Inception score (is) (Salimans et al., 2016), which is mainly used to evaluate the perceptual quality of image generative models, evaluates diversity using the entropy of the marginal distribution of class labels predicted by an ImageNet classifier.
Another example is number of modes (nom) (Srivastava et al., 2017), a metric used to evaluate the diversity of gans. nom is calculated by using a classifier trained on a labeled dataset and then counting the number of unique labels predicted by the classifier when using samples from a gan as input. Both is and nom define diversity in terms of predefined labels, and therefore require knowledge of the ground truth labels and a separate classifier.

In some discrete domains, diversity is often evaluated in terms of the distribution of unique features. For example in natural language processing (nlp), a standard metric is n-gram diversity, which is defined as the number of distinct n-grams divided by the total number of n-grams (e.g. Li et al., 2016). These metrics require an explicit, discrete feature representation.

There are proposed metrics that use similarity scores to define diversity. The most widely used metric of this form is the average pairwise similarity score or its complement, the average dissimilarity. In text, variants of this metric include pairwise-bleu (Shen et al., 2019) and d-lex-sim (Fomicheva et al., 2020), in which the similarity function is an n-gram overlap metric such as bleu (Papineni et al., 2002). In biology, average dissimilarity is known as IntDiv (Benhenda, 2017), with similarity defined as the Jaccard (Tanimoto) similarity between molecular fingerprints. Average similarity has some shortcomings, which we highlight in Figure 1. The figure shows the similarity matrices induced by a shape similarity function and/or a color similarity function. Each of the similarity functions is 1 when the elements indexed by the row and the column have the same shape or color, and 0 otherwise. As shown in Figure 1, the average similarity, here measured by IntDiv, becomes less sensitive as diversity increases and does not account for correlations between features. This is not the case for the Vendi Score, which accounts for correlations between features and is able to capture the increased diversity resulting from composing distinct similarity functions.

Related to the metric we propose here is a similarity-sensitive diversity metric proposed in ecology by Leinster & Cobbold (2012), which was introduced in the context of ml by Posada et al. (2020). This metric is based on a notion of entropy defined in terms of a *similarity profile*, a vector whose entries are equal to the expected similarity scores of each element. Like IntDiv, it does not account for correlations between features.

Some other diversity metrics in the ml literature fall outside of these categories. The Birthday Paradox Test (Arora & Zhang, 2018) aims to estimate the size of the support of a generative model, but requires some manual inspection of samples. gilbo (Alemi & Fischer, 2018) is a reference-free metric but is only applicable to latent variable generative models. Kviman et al. (2022) measure the diversity of ensembles of variational approximations using the Jensen-Shannon Divergence (jsd); this metric is only applicable to sets of probability distributions. Mitchell et al. (2020) introduce metrics for diversity and inclusion, defining diversity in terms of the representation of socially relevant attributes like gender and race, and using the term *heterogeneity* to refer to variety in arbitrary attributes; in this paper, we use the term diversity in the same sense as heterogeneity, meaning variety in arbitrary (user-specified) attributes. In the context of drug exploration, Xie et al.
(2022) propose a metric based on the size of the largest subset of elements such that the similarity between any pair of elements is below some threshold, but this metric requires setting a threshold. Similarly, in the field of evolutionary computation, quality diversity (qd) algorithms (Pugh et al., 2015) have assessed diversity by discretizing the feature space into a grid of bins and counting the number of covered bins, but this approach requires picking a bin size.

As discussed above, several attempts have been made to measure diversity in ml. However, the proposed metrics can be limited in their applicability in that they require a reference dataset or predefined labels, or are domain-specific and applicable to one class of models. The existing metrics that do not have those applicability limitations have shortcomings when it comes to capturing diversity, which we have illustrated in Figure 1.

## 3 Measuring Diversity With The Vendi Score

We now define the Vendi Score, state its properties, and study its computational complexity. (We relegate all proofs of lemmas and theorems to the appendix.)

## 3.1 Defining The Vendi Score

To define a diversity metric in ml we look to ecology, the field that centers diversity in its work. In ecology, one main way diversity is defined is as the exponential of the entropy of the distribution of the species under study (Jost, 2006; Leinster, 2021). This is a reasonable index for diversity. Consider a population with a uniform distribution over n species, with entropy log(n). This population has maximal ecological diversity n, the same diversity as a population with n members, each belonging to a different species. The ecological diversity decreases as the distribution over the species becomes less uniform, and is minimized and equal to one when all members of the population belong to the same species. For a more extensive mathematical discussion of entropy and diversity in the context of biodiversity, we refer readers to Leinster (2021).

How can we extend this way of thinking about diversity to ml? One naive approach is to define diversity as the exponential of the Shannon entropy of the probability distribution defined by a machine learning model or dataset. However, this approach is limiting in that it requires a probability distribution for which entropy is tractable, which is not possible in many ml settings. We would like to define a diversity metric that only relies on the samples being evaluated for diversity. And we would like for such a metric to achieve its maximum value when all samples are dissimilar and its minimum value when all samples are the same. This implies the need to define a similarity function over the samples. Endowed with such a similarity function, we can define a form of entropy that only relies on the samples to be evaluated for diversity. This leads us to the Vendi Score:

Definition 3.1 (Vendi Score). Let x1, . . . , xn ∈ X denote a collection of samples, let k : X × X → R be a positive semidefinite similarity function, with k(x, x) = 1 for all x, and let K ∈ R^{n×n} denote the kernel matrix with entries K_{i,j} = k(x_i, x_j). Denote by λ1, . . . , λn the eigenvalues of K/n. The Vendi Score (VS) is defined as the exponential of the Shannon entropy of the eigenvalues of K/n:

$$V\!S_{k}(x_{1},\ldots,x_{n})=\exp\left(-\sum_{i=1}^{n}\lambda_{i}\log\lambda_{i}\right),\tag{1}$$

where we use the convention 0 log 0 = 0.
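Definition 3.1 translates directly into a few lines of code. The following is a minimal sketch that computes the score from a precomputed kernel matrix, written for clarity rather than efficiency; the package linked in footnote 1 is the reference implementation.

```python
import numpy as np

def vendi_score(K):
    """Vendi Score of Definition 3.1.

    K: (n, n) positive semidefinite similarity matrix with unit diagonal.
    Returns the exponential of the Shannon entropy of the eigenvalues of K / n.
    """
    n = K.shape[0]
    lams = np.linalg.eigvalsh(K / n)
    lams = np.clip(lams, 0.0, None)  # guard against tiny negative round-off
    # Convention: 0 * log(0) = 0.
    logs = np.log(lams, where=lams > 0, out=np.zeros_like(lams))
    return float(np.exp(-np.sum(lams * logs)))

# Sanity check: n completely dissimilar elements give a score of n.
print(vendi_score(np.eye(4)))  # ~4.0
```

For K = I_n, the eigenvalues of K/n all equal 1/n, the entropy is log n, and the score is n, matching the effective number property stated in Theorem 3.1 below.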
To understand the validity of the Vendi Score as a mathematical object, note that the eigenvalues of K/n are nonnegative (because k is positive semidefinite) and sum to one (because the diagonal entries of K/n are equal to 1/n). The Shannon entropy is therefore well-defined and the Vendi Score is well-defined. In this form, the Vendi Score can also be seen as the *effective rank* of the kernel matrix K. Effective rank was introduced by Roy & Vetterli (2007) in the context of signal processing; the effective rank of a matrix is defined as the exponential of the entropy of the normalized singular values. Effective rank has also been used in machine learning, for example, to evaluate word embeddings (Torregrossa et al., 2020) and to study the implicit bias of gradient descent for low-rank solutions (Arora et al., 2019).

The Vendi Score can be expressed directly as a function of the kernel similarity matrix K:

Lemma 3.1. Consider the same setting as Definition 3.1. Then

$$V\!S_{k}(x_{1},\ldots,x_{n})=\exp\left(-\operatorname{tr}\left({\frac{K}{n}}\log{\frac{K}{n}}\right)\right).\tag{2}$$

The lemma makes explicit the connection of the Vendi Score to quantum statistical mechanics: the Vendi Score is equal to the exponential of the von Neumann entropy associated with K/n (Bach, 2022). In quantum statistical mechanics, the state of a quantum system is described by a *density matrix*, often denoted ρ. The von Neumann entropy of ρ quantifies the uncertainty in the state of the system (Wilde, 2013). The normalized similarity matrix K/n here plays the role of the density matrix. Our formulation of the Vendi Score assumes that x1, . . . , xn were sampled independently, and so p(x_i) ≈ 1/n for all i. This is the usual setting in ML and the setting we study in our experiments. However, we can generalize the Vendi Score to a setting in which we have an explicit probability distribution over the sample space X (see Definition A.1 in the appendix).

## 3.2 Understanding The Vendi Score

Figure 1 illustrates the behavior of the Vendi Score on simple toy datasets in which each element is defined by a shape and a color, and similarity is defined to be 1 if elements share both shape and color, 0.5 if they share either shape or color, and 0 otherwise. First, Figure 1a illustrates that the Vendi Score is an *effective number*, and can be understood as the effective number of dissimilar elements in a sample. The value of measuring diversity with effective numbers has been argued in ecology (e.g. Hill, 1973; Patil & Taillie, 1982; Jost, 2006) and economics (Adelman, 1969). Effective numbers provide a consistent basis for interpreting diversity scores, and make it possible to compare diversity scores using ratios and percentages. For example, in Figure 1a, when the number of modes doubles from two to four, the Vendi Score doubles as well. If we doubled the number of modes from four to eight, the Vendi Score would double once again.

Figures 1b and 1c illustrate another strength of the Vendi Score, which is that it accounts for correlations between features. Given distinct similarity functions k and k′, the Vendi Score calculated using the combined similarity function (k + k′)/2 can be greater than the average of the individual Vendi Scores if the two similarity functions describe distinct dimensions of variation. Furthermore, the Vendi Score increases when the items in the sample differ in more attributes, and the attributes become less correlated with each other.
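The Figure 1 setup described above can be reproduced in a few lines; this is an illustrative sketch that reuses the `vendi_score` helper from the previous snippet, with made-up (shape, color) item tuples.

```python
import itertools
import numpy as np

def toy_kernel(items):
    """Similarity: 1 if shape and color both match, 0.5 if one matches, 0 otherwise."""
    n = len(items)
    K = np.zeros((n, n))
    for i, (s1, c1) in enumerate(items):
        for j, (s2, c2) in enumerate(items):
            K[i, j] = 0.5 * (s1 == s2) + 0.5 * (c1 == c2)
    return K

# Shape and color vary independently: four items, two uncorrelated attributes.
independent = list(itertools.product(["circle", "square"], ["red", "blue"]))
print(vendi_score(toy_kernel(independent)))  # ~2.83

# Shape and color perfectly correlated: effectively two distinct elements.
correlated = [("circle", "red"), ("circle", "red"),
              ("square", "blue"), ("square", "blue")]
print(vendi_score(toy_kernel(correlated)))   # 2.0
```

The uncorrelated sample scores higher even though both samples contain the same attribute values, mirroring the behavior shown in Figure 1c.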
The Vendi Score has several desirable properties as a diversity metric. We summarize them in the following theorem.

Theorem 3.1 (Properties of the Vendi Score). Consider the same definitions in Definition 3.1 and Definition A.1.

1. **Effective number.** If k(x_i, x_j) = 0 for all i ≠ j, then VS_k(x1, . . . , xn) is maximized and equal to n. If k(x_i, x_j) = 1 for all i, j, then VS_k(x1, . . . , xn) is minimized and equal to 1.

2. **Identical elements.** Suppose k(x_i, x_j) = 1 for some i ≠ j. Let p′ denote the probability distribution created by combining i and j, i.e. p′_i = p_i + p_j and p′_j = 0. Then the Vendi Score is unchanged, VS_k(x1, . . . , xn, p) = VS_k(x1, . . . , xn, p′).

3. **Partitioning.** Suppose S1, . . . , Sm are collections of samples such that, for any i ≠ j, for all x ∈ S_i, x′ ∈ S_j, k(x, x′) = 0. Then the diversity of the combined samples depends only on the diversities of S1, . . . , Sm and their relative sizes. In particular, if p_i = |S_i| / Σ_j |S_j| is the relative size of S_i and H(p1, . . . , pm) denotes the Shannon entropy, then the Vendi Score is given by $V\!S_k(S_1,\ldots,S_m)=\exp(H(p_1,\ldots,p_m))\prod_{i=1}^{m}V\!S_k(S_i)^{p_i}$.

4. **Symmetry.** If π1, . . . , πn is a permutation of 1, . . . , n, then VS_k(x1, . . . , xn) = VS_k(x_{π1}, . . . , x_{πn}).

The effective number property provides a consistent frame of reference for interpreting the Vendi Score: a sample with a Vendi Score of m can be understood to be as diverse as a sample consisting of m completely dissimilar elements. The identical elements property provides some justification for our use of a sampling approximation: for example, calculating the empirical Vendi Score of a sample of 90 blue diamonds and 10 yellow squares is equivalent to calculating the probability-weighted Vendi Score of a sample of one blue diamond and one yellow square, with p = (0.9, 0.1). The partitioning property is analogous to the partitioning property of the Shannon entropy and means that if two samples are completely dissimilar we can calculate the diversity of the union of the samples using only the diversity of each sample independently and their relative sizes. The symmetry property means that the Vendi Score will be the same regardless of how we order the rows and columns in the similarity matrix.

## 3.3 Calculating The Vendi Score

Calculating the Vendi Score for a sample of n elements requires finding the eigenvalues of an n × n matrix, which has a time complexity of O(n³). The Vendi Score can be approximated using column sampling methods (i.e. the Nyström method; Williams & Seeger, 2000). However, in many of the applications we consider, the similarity functions we use are inner products between explicit feature vectors ϕ(x) ∈ R^d, with d ≪ n. That is, K = XX⊤, where X ∈ R^{n×d} is the feature matrix with rows X_{i,:} = ϕ(x_i). The nonzero eigenvalues of K/n are the same as the nonzero eigenvalues of the covariance matrix X⊤X/n, therefore we can calculate the Vendi Score exactly in a time of O(d²n + d³) = O(d²n). This is the same complexity as existing metrics such as fid (Heusel et al., 2017), which require calculating the covariance matrix of Inception embeddings.

Sample complexity. The Vendi Score is the exponential of the kernel entropy, $H(K)=-\operatorname{tr}\left(\frac{K}{n}\log\frac{K}{n}\right)$. Bach (2022) proves that the empirical estimator of the kernel entropy has a convergence rate proportional to 1/√n, where n is the number of samples (Appendix A.5).
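The covariance trick of Section 3.3 can be sketched as follows; this is illustrative code (not the official package) and assumes the rows of X are unit-norm feature vectors so that K = XX⊤ has unit diagonal.

```python
import numpy as np

def vendi_score_from_features(X):
    """Vendi Score via the d x d matrix X^T X / n (Section 3.3).

    X: (n, d) matrix with rows phi(x_i), ||phi(x_i)|| = 1, d << n.
    The nonzero eigenvalues of X X^T / n and X^T X / n coincide, and
    zero eigenvalues contribute nothing to the entropy, so the result
    is exact at O(d^2 n + d^3) cost instead of O(n^3).
    """
    n = X.shape[0]
    cov = (X.T @ X) / n
    lams = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    logs = np.log(lams, where=lams > 0, out=np.zeros_like(lams))
    return float(np.exp(-np.sum(lams * logs)))
```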
## 3.4 Connections To Other Areas In Ml

Here we remark on the connections between the Vendi Score and other commonly studied objects in ml that make use of the eigenvalues of a similarity matrix.

Determinantal Point Processes. The Vendi Score bears a relationship to Determinantal Point Processes (dpps), which have been used in machine learning for diverse subset selection (Kulesza et al., 2012). A dpp is a probability distribution over subsets of a ground set $\mathcal{X}$ parameterized by a positive semidefinite kernel matrix $K$. The likelihood of drawing any subset $X \subseteq \mathcal{X}$ is defined as proportional to $|K_X|$, the determinant of the similarity matrix restricted to elements in $X$: $p(X) \propto |K_X| = \prod_i \lambda_i$, where $\lambda_i$ are the eigenvalues of $K_X$. The likelihood function has a geometric interpretation, as the square of the volume spanned by the elements of $X$ in an implicit feature space. However, the dpp likelihood is not commonly used for evaluating diversity, and has some limitations. For example, it is always equal to 0 if the sample contains any duplicates, and the geometric meaning is arguably less straightforward to interpret than the Vendi Score, which can be understood in terms of the effective number of dissimilar elements.

Spectral Clustering. The eigenvalues of the similarity matrix are also related to spectral clustering algorithms (Von Luxburg, 2007), which use a matrix known as the graph Laplacian, defined $L = D - K$, where $K$ is a symmetric, weighted adjacency matrix with non-negative entries, and $D$ is a diagonal matrix with $D_{i,i} = \sum_j K_{i,j}$. The eigenvalues of $L$ can be used to characterize different properties of the graph; for example, the multiplicity of the eigenvalue 0 is equal to the number of connected components. As a metric for diversity, the Vendi Score is somewhat more general than the number of connected components: it provides a meaningful measure even for fully connected graphs, and captures within-component diversity.

## 4 Experiments

We first evaluate the Vendi Score, which we denote by vs for the rest of this section, on synthetic data to show that it captures intuitive notions of diversity, and then apply it to a variety of settings in ml. We used vs to evaluate the diversity of generative models of molecules, an application where diversity plays an important role in enabling discovery. We compare vs to IntDiv, a function of the average similarity:

$$\operatorname{IntDiv}(x_{1},\ldots,x_{n})=1-{\frac{1}{n^{2}}}\sum_{i,j}k(x_{i},x_{j}).$$

We found that vs identifies some model weaknesses that are not detected by IntDiv. We also applied vs to generative models of images and decoding algorithms for text, where we found it confirms what we know about diversity in those applications. We also used vs to measure mode collapse in gans and show that it reveals finer-grained distinctions in diversity than current metrics for measuring mode collapse. Finally, we used vs to analyze the diversity of several image, text, and molecule datasets, gaining insights into the diversity profile of those datasets. (Implementation details are provided in Appendix B.)

![6_image_0.png](6_image_0.png)

Figure 2: VS increases proportionally with diversity in three sets of synthetic datasets. In each row, we sample datasets from univariate mixture-of-normal distributions, varying either the number of components, the mixture proportions, or the per-component variance. The datasets are depicted on the left, as histograms, and the diversity scores are plotted on the right.
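For comparison, the IntDiv baseline above is a simple average over the same kernel matrix. A minimal sketch, with a toy kernel of our own construction, shows how duplicates barely move IntDiv while halving the Vendi Score, foreshadowing the hmm example in Section 4.2:

```python
import numpy as np

def intdiv(K):
    """IntDiv: one minus the mean pairwise similarity (a sketch)."""
    return 1.0 - K.mean()

def vendi_score(K):
    lam = np.linalg.eigvalsh(K / K.shape[0])
    lam = lam[lam > 1e-12]
    return np.exp(-np.sum(lam * np.log(lam)))

n = 100
K_distinct = np.eye(n)                              # 100 dissimilar elements
K_dupes = np.kron(np.eye(n // 2), np.ones((2, 2)))  # 50 duplicated pairs
print(intdiv(K_distinct), intdiv(K_dupes))            # 0.99 vs 0.98
print(vendi_score(K_distinct), vendi_score(K_dupes))  # 100.0 vs 50.0
```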
## 4.1 Synthetic Experiments

To illustrate the behavior of the Vendi Score, we calculate the diversity of simple datasets drawn from a mixture of univariate normal distributions, varying either the number of components, the mixture proportions, or the per-component variance. We measure similarity using the RBF kernel: $k(x, x') = \exp(-\|x - x'\|^2 / 2\sigma^2)$. The results are illustrated in Figure 2. VS behaves consistently and intuitively in all three settings: in each case, VS can be interpreted as the effective number of modes, ranging between one and five in the first two rows and increasing from five to seven in the third row as we increase within-mode variance. On the other hand, the behavior of IntDiv is different in each setting: for example, IntDiv is relatively insensitive to within-mode variance, and additional modes bring diminishing returns. In Appendix C.1, we also validate that vs captures mode dropping in a simulated setting, using image and text classification datasets, where we have information about the ground-truth class distribution. In both cases, vs has a stronger correlation with the true number of modes compared to IntDiv.

## 4.2 Evaluating Molecular Generative Models For Diversity

Next, we evaluate the diversity of samples from generative models of molecules. For generative models to be useful for the discovery of novel molecules, they ought to be diverse. The standard diversity metric in this setting is IntDiv. We evaluate samples from generative models provided in the moses benchmark (Polykovskiy et al., 2020), using the first 2500 valid molecules in each sample. Following prior work, our similarity function is the Morgan fingerprint similarity (radius 2), implemented in RDKit.2 In Figure 3, we highlight an instance where vs and IntDiv disagree: IntDiv ranks the hmm among the most diverse models, while vs ranks it as the least diverse (the complete results are in Appendix Table 4). The hmm has a high IntDiv score because, on average, the hmm molecules have low pairwise similarity scores, but there are a number of clusters of identical or nearly identical molecules.

![7_image_0.png](7_image_0.png)

Figure 3: The kernel matrices for 250 molecules sampled from the hmm, the aae, and the original dataset, sorted lexicographically by smiles string representation. The samples have similar IntDiv scores, but the hmm sample scores much lower on vs. The figure shows that the hmm generates a number of exact duplicates. vs is able to capture the hmm's lack of diversity while IntDiv cannot.

## 4.3 Assessing Mode Collapse In Gans

Mode collapse is a failure mode of gans that has received a lot of attention from the ml community (Metz et al., 2017; Dieng et al., 2019). The main metric for measuring mode collapse, called number of modes (nom), can only be used to assess mode collapse for gans trained on a labelled dataset. nom is computed by training a classifier on the labeled training data and counting the number of unique classes that are predicted by the trained classifier for the generated samples. In Table 1, we evaluate two models that were trained on the Stackedmnist dataset, a standard setting for evaluating mode collapse in gans. Stackedmnist is created by stacking three mnist images along the color channel, creating 1000 classes corresponding to 1000 modes.
| Model          | nom  | Mode Div. | vs    |
|----------------|------|-----------|-------|
| Self-cond. gan | 1000 | 921.0     | 746.7 |
| Presgan        | 1000 | 948.7     | 866.6 |
| Original       | 1000 | 950.8     | 943.7 |

Table 1: vs captures a more fine-grained notion of diversity than number of modes (nom). Although Presgan and Self-cond. gan both capture all the 1000 modes, vs reveals that Presgan is more diverse than Self-cond. gan and that they both are less diverse than the original dataset.

In prior work, mode collapse is evaluated by training an mnist classifier and counting the number of unique classes that are predicted for the generated samples. We adapt this approach and calculate vs using the probability product kernel (Jebara et al., 2004): $k(x, x') = \sum_y p(y \mid x)^{\frac{1}{2}} p(y \mid x')^{\frac{1}{2}}$, where the class likelihoods are given by the classifier.

2RDKit: Open-source Cheminformatics. https://www.rdkit.org.

| Model                    | is↑   | fid↓   | Prec↑ | Rec↑ | vs↑   |
|--------------------------|-------|--------|-------|------|-------|
| **cifar-10**             |       |        |       |      |       |
| Original                 |       |        |       |      | 19.50 |
| vdvae                    | 5.82  | 40.05  | 0.63  | 0.35 | 12.87 |
| DenseFlow                | 6.01  | 34.54  | 0.62  | 0.38 | 13.55 |
| iddpm                    | 9.24  | 4.39   | 0.66  | 0.60 | 16.86 |
| **ImageNet 64×64**       |       |        |       |      |       |
| Original                 |       |        |       |      | 43.93 |
| vdvae                    | 9.68  | 57.57  | 0.47  | 0.37 | 18.04 |
| DenseFlow                | 5.62  | 102.90 | 0.36  | 0.17 | 12.71 |
| iddpm                    | 15.59 | 19.24  | 0.59  | 0.58 | 24.28 |
| **lsun Cat 256×256**     |       |        |       |      |       |
| Original                 |       |        |       |      | 15.12 |
| Stylegan2                | 4.84  | 7.25   | 0.58  | 0.43 | 13.55 |
| adm                      | 5.19  | 5.57   | 0.63  | 0.52 | 13.09 |
| rq-vt                    | 5.76  | 10.69  | 0.53  | 0.48 | 14.91 |
| **lsun Bedroom 256×256** |       |        |       |      |       |
| Original                 |       |        |       |      | 8.99  |
| Stylegan                 | 2.55  | 2.35   | 0.59  | 0.48 | 8.76  |
| adm                      | 2.38  | 1.90   | 0.66  | 0.51 | 7.97  |
| rq-vt                    | 2.56  | 3.16   | 0.60  | 0.50 | 8.48  |

Table 2: vs generally agrees with the existing metrics. On the low-resolution datasets (cifar-10 and ImageNet 64×64), the diffusion model performs better on all of the metrics. On the lsun datasets, the diffusion model gets the highest quality scores as measured by precision and recall, but scores lower on vs. No model matches the diversity score of the original dataset it was trained on.

We compare Presgan (Dieng et al., 2019) and Self-conditioned gan (Liu et al., 2020), two gans that are known to capture all the modes. Table 1 shows that Presgan and Self-conditioned gan have the same diversity according to number of modes: they capture all 1000 modes. However, vs reveals a more fine-grained notion of diversity, indicating that Presgan is more diverse than Self-conditioned gan and that both are less diverse than the original dataset. One possibility is that vs is capturing imbalances in the mode distribution. To see whether this is the case, we also calculate what we call *Mode Diversity*, the exponential entropy of the predicted mode distribution: $\exp H(\hat{p}(y))$, where $\hat{p}(y) = \frac{1}{n} \sum_{i=1}^{n} p(y \mid x_i)$. The generative models score lower on vs than on Mode Diversity, indicating that the low scores cannot be entirely attributed to imbalances in the mode distribution. Therefore vs captures more aspects of diversity, even when we are using the same representations as existing methods.
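Because the probability product kernel admits the explicit feature map φ(x) = √p(· | x), the covariance trick from Section 3.3 applies directly; the following is a minimal sketch (the function names are ours):

```python
import numpy as np

def vendi_from_class_probs(P):
    """Vendi Score under the probability product kernel (a sketch).

    P is an n x C matrix with P[i, y] = p(y | x_i). With
    phi(x) = sqrt(p(. | x)), k(x, x) = sum_y p(y | x) = 1 holds
    automatically, as Definition 3.1 requires.
    """
    Phi = np.sqrt(P)
    lam = np.linalg.eigvalsh(Phi.T @ Phi / Phi.shape[0])
    lam = lam[lam > 1e-12]
    return np.exp(-np.sum(lam * np.log(lam)))

def mode_diversity(P):
    """Exponential entropy of the mean predicted mode distribution."""
    p_bar = P.mean(axis=0)
    p_bar = p_bar[p_bar > 1e-12]
    return np.exp(-np.sum(p_bar * np.log(p_bar)))
```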
## 4.4 Evaluating Image Generative Models For Diversity

We now evaluate several recent models for unconditional image generation, comparing the diversity scores with standard evaluation metrics, is (Salimans et al., 2016), fid (Heusel et al., 2017), Precision (Sajjadi et al., 2018), and Recall (Sajjadi et al., 2018). The models we evaluate represent popular classes of generative models, including a variational autoencoder (vdvae; Child, 2020), a flow model (DenseFlow; Grcić et al., 2021), diffusion models (iddpm, Nichol & Dhariwal, 2021; adm, Dhariwal & Nichol, 2021), gan-based models (Karras et al., 2019; 2020), and an auto-regressive model (rq-vt; Lee et al., 2022). The models are trained on cifar-10 (Krizhevsky, 2009), ImageNet (Russakovsky et al., 2015), or two categories from the lsun dataset (Yu et al., 2015). We either select models that provide precomputed samples, or download publicly available model checkpoints and sample new images using the default hyperparameters. (More details are in Appendix B.)

The standard metrics in this setting use a pre-trained Inception ImageNet classifier to map images to real vectors. Therefore, we calculate vs using the cosine similarity between Inception embeddings, using the same 2048-dimensional representations used for evaluating fid and Precision/Recall. As a result, the highest possible Vendi Score is 2048, since the rank of the kernel matrix is at most the embedding dimension. The baseline metrics are reference-based, with the exception of is. fid and is capture diversity implicitly. Recall was introduced to capture diversity explicitly, with diversity defined as coverage of the reference distribution.

The results of this comparison are in Table 2. On the lower-resolution datasets, vs generally agrees with the existing metrics. On those datasets the diffusion model performs better on all of the metrics. On the lsun datasets, the diffusion model gets the highest quality scores as measured by precision and recall, but scores lower on vs. In these cases, vs can be interpreted as complementing the existing metrics. For example, on lsun Cat, the ADM model achieves a precision score of 0.63 and recall of 0.52, implying that 63% of generated images look like reference images, and that the generated images cover 52% of the reference distribution; however, the low vs suggests that the remaining images have low internal diversity; for example, the model may generate many near-duplicates. No model matches the diversity score of the original dataset it was trained on.

| Source      | bleu-4 | N-gram div. | vs   |
|-------------|--------|-------------|------|
| Human       | –      | 0.82        | 4.88 |
| Beam Search | 0.27   | 0.42        | 3.00 |
| dbs γ = 0.2 | 0.25   | 0.49        | 3.16 |
| dbs γ = 0.5 | 0.22   | 0.63        | 4.14 |
| dbs γ = 0.8 | 0.21   | 0.68        | 4.37 |

Table 3: Quality and diversity scores for an image captioning model using Beam Search or diverse beam search (dbs), varying the diversity penalty γ. bleu-4 measures n-gram overlap with the human-written reference captions, a proxy for quality. Increasing γ leads to higher diversity scores but a lower quality score.

In addition to comparing the diversity of the models, we can also compare the diversity scores between datasets: as a function of Inception similarity, the most diverse dataset is ImageNet 64×64, followed by cifar-10, followed by lsun Cat (all cats, but coming in different species), and then lsun Bedroom.
vs should be understood as the diversity with respect to a specific similarity function, in this case, the Inception ImageNet similarity. We illustrate this point in the appendix (Figure 6) by comparing the top eigenvalues of the kernel matrices corresponding to the cosine similarity between Inception embeddings and pixel vectors. Inception similarity captures a form of semantic similarity, with components corresponding to particular cat breeds, while the pixel kernel provides a simple form of visual similarity, with components corresponding to broad differences in lightness, darkness, and color.

## 4.5 Evaluating Decoding Algorithms For Text For Diversity

We evaluate diversity on the ms coco image-captioning dataset (Lin et al., 2014), following prior work on diverse text generation (Vijayakumar et al., 2018). In this setting, the subjects of evaluation are diverse decoding algorithms rather than parametric models. Given a fixed conditional model of text p(x | c), where c is some conditioning context, the aim is to identify a "Diverse N-Best List", a list of sentences that have high likelihood but are mutually distinct. The baseline metric we compare to is n-gram diversity (Li et al., 2016), which is the proportion of unique n-grams divided by the total number of n-grams. We define similarity using the n-gram overlap kernel: for a given $n$, the n-gram kernel $k_n$ is the cosine similarity between bag-of-n-gram feature vectors. We use the average of $k_1, \ldots, k_4$. This ensures that vs and n-gram diversity are calculated using the same feature representation.

Each image in the validation split has five captions written by different human annotators, and we compare these with captions generated by a publicly available captioning model trained on this dataset.3 For each image, we generate five captions using either beam search or diverse beam search (dbs) (Vijayakumar et al., 2018). dbs takes a parameter, γ, called the diversity penalty, and we vary this between 0.2, 0.5, and 0.8. Table 3 shows that all diversity metrics increase as expected, ranking beam search the lowest, the human captions the highest, and dbs in between, increasing with the diversity penalty. The human diversity score of 4.88 can be interpreted as meaning that, on average, all five human-written captions are almost completely dissimilar from each other, while beam search effectively returns only three distinct responses for every five that it generates.

## 4.6 Diagnosing Datasets For Diversity

In Figure 4, we calculate vs for samples from different categories in cifar-100, using the cosine similarity between either Inception embeddings or pixel vectors. The pixel diversity is highest for categories like "aquarium fish", which vary in color, brightness, and orientation, and lowest for categories like "cockroach" in which images have similar regions of high pixel intensity (like white backgrounds).

3https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts

![10_image_0.png](10_image_0.png)

Figure 4: The categories in cifar-100 with the lowest and highest vs, defining similarity as the cosine similarity between either Inception embeddings or pixel vectors. We show 100 examples from each category, in decreasing order of average similarity, with the image at the top left having the highest average similarity scores according to the corresponding kernel.
The Inception diversity is less straightforward to interpret, but might correspond to some form of semantic diversity: for example, the Inception diversity might be lower for classes like "castle" that correspond to distinct ImageNet categories, and higher for categories like "clock" and "keyboard" that are more difficult to classify. In Appendix C.5, we show additional examples from text, molecules, and other image datasets.

## 5 Limitations

Here, we discuss several important limitations that should be considered when interpreting VS scores. First, VS is a reference-free metric, meaning that it measures the internal diversity of a set and not how it relates to a reference distribution. While this makes VS useful in settings where there is no reference distribution, it also means that it is possible to get a high diversity score by, for example, sampling random noise. This is also true of other reference-free metrics, like IntDiv and n-gram diversity. Therefore, VS should be used alongside a quality metric.

Second, like other similarity-based metrics, VS is dependent on the choice of similarity function. If the similarity function is too sensitive, all sets will appear very diverse, while if it is not sensitive enough, all sets will have low diversity. Additionally, the wrong choice of similarity function can introduce biases that lead to skewed diversity scores. Therefore, care should be taken when choosing a similarity function to ensure that it is appropriate for the specific application.

Finally, the computational cost of calculating VS can be high, particularly if the similarity function is not associated with a low-dimensional embedding space. This may limit the applicability of VS in certain settings, particularly those with large datasets or complex similarity functions.

## 6 Conclusion

We introduced the Vendi Score, a metric for evaluating diversity in machine learning (ml). The Vendi Score is defined as a function of the pairwise similarity scores between elements of a sample and can be interpreted as the effective number of unique elements in the sample. The Vendi Score is interpretable, general, and applicable to any domain where similarity can be defined. It is unsupervised, in that it does not require labels or a reference probability distribution or dataset. Importantly, the Vendi Score allows its user to specify the form of diversity they want to measure via the similarity function. We showed the Vendi Score can be computed exactly and efficiently, and showcased its usefulness in several ml applications, different datasets, and different domains. In future work, we will leverage the Vendi Score to improve data augmentation, an important ml approach in settings with limited data.

## Acknowledgements

Adji Bousso Dieng is supported by the National Science Foundation, Office of Advanced Cyberinfrastructure (OAC): \#2118201. We thank Sadhika Malladi for pointing us to the effective rank. Adji Bousso Dieng would like to dedicate this paper to her PhD advisors, David Blei and John Paisley.

## References

Morris A Adelman. Comment on the "H" concentration measure as a numbers-equivalent. *The Review of Economics and Statistics*, pp. 99–101, 1969.

Alexander A Alemi and Ian Fischer. GILBO: one metric to measure them all. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, pp. 7037–7046, 2018.

S. Arora and Y. Zhang. Do GANs actually learn the distribution? some theory and empirics. In *International Conference on Learning Representations*, 2018.
Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. In *Advances in Neural Information Processing Systems*, 2019. Francis Bach. Information theory with kernel methods. *arXiv preprint arXiv:2202.08545*, 2022. Mostapha Benhenda. ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity? arXiv preprint arXiv:1708.08227, 2017. Steven Bird. Nltk: The natural language toolkit. In *Proceedings of the COLING/ACL 2006 Interactive* Presentation Sessions, pp. 69–72, 2006. Rewon Child. Very deep VAEs generalize autoregressive models and can outperform them on images. *arXiv* preprint arXiv:2011.10650, 2020. Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, and Katja Filippova. Eval all, trust a few, do wrong to none: Comparing sentence generation models. *arXiv preprint arXiv:1804.07972*, 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021. Adji B Dieng, Francisco JR Ruiz, David M Blei, and Michalis K Titsias. Prescribed generative adversarial networks. *arXiv preprint arXiv:1910.04302*, 2019. Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. Unsupervised quality estimation for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:539–555, 2020. Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 6894–6910, 2021. Matej Grcić, Ivan Grubišić, and Siniša Šegvić. Densely connected normalizing flows. Advances in Neural Information Processing Systems, 34:23968–23982, 2021. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information* processing systems, 30, 2017. Mark O Hill. Diversity and Evenness: A Unifying Notation and Its Consequences. *Ecology*, 54(2):427–432, 1973. Tony Jebara, Risi Kondor, and Andrew Howard. Probability product kernels. *The Journal of Machine* Learning Research, 5:819–844, 2004. Lou Jost. Entropy and Diversity. *Oikos*, 113(2):363–375, 2006. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 4401–4410, 2019. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110–8119, 2020. Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. The multilingual Amazon reviews corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020. A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. Alex Kulesza, Ben Taskar, et al. 
Determinantal point processes for machine learning. Foundations and Trends® *in Machine Learning*, 5(2–3):123–286, 2012. Oskar Kviman, Harald Melin, Hazal Koptagel, Victor Elvira, and Jens Lagergren. Multiple importance sampling elbo and deep ensembles of variational approximations. In International Conference on Artificial Intelligence and Statistics, pp. 10687–10702. PMLR, 2022. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In *Proceedings of the 33rd International Conference on* Neural Information Processing Systems, pp. 3927–3936, 2019. Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive Image Generation using Residual Quantization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 11523–11532, 2022. Tom Leinster. *Entropy and Diversity: The Axiomatic Approach*. Cambridge University Press, 2021. Tom Leinster and Christina A Cobbold. Measuring Diversity: The Importance of Species Similarity. *Ecology*, 93(3):477–489, 2012. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B Dolan. A diversity-promoting objective function for neural conversation models. In *Proceedings of the 2016 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 110–119, 2016. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In *European conference on computer* vision, pp. 740–755. Springer, 2014. Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, and Antonio Torralba. Diverse image generation via self-conditioned gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In *International Conference* on Computer Vision, 2015. L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. In International Conference on Learning Representations, 2017. Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, and Jamie Morgenstern. Diversity and inclusion metrics in subset selection. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 117–123, 2020. Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. Reliable fidelity and diversity metrics for generative models. In *International Conference on Machine Learning*, pp. 7176–7185. PMLR, 2020. Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162–8171. PMLR, 2021. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318, 2002. Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11410–11420, 2022. GP Patil and Charles Taillie. Diversity as a Concept and Its Measurement. Journal of the American Statistical Association, 77(379):548–561, 1982. 
Daniil Polykovskiy, Alexander Zhebrak, Benjamin Sanchez-Lengeling, Sergey Golovanov, Oktai Tatanov, Stanislav Belyaev, Rauf Kurbanov, Aleksey Artamonov, Vladimir Aladinskiy, Mark Veselov, Artur Kadurin, Simon Johansson, Hongming Chen, Sergey Nikolenko, Alan Aspuru-Guzik, and Alex Zhavoronkov. Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models. *Frontiers in Pharmacology*, 2020. Jose Gallego Posada, Ankit Vani, Max Schwarzer, and Simon Lacoste-Julien. Gait: A geometric approach to information theory. In *International Conference on Artificial Intelligence and Statistics*, pp. 2601–2611. PMLR, 2020. Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Günter Klambauer. Fréchet ChemNet distance: a metric for generative models for molecules in drug discovery. Journal of chemical information and modeling, 58(9):1736–1741, 2018. Justin K Pugh, Lisa B Soros, Paul A Szerlip, and Kenneth O Stanley. Confronting the challenge of quality diversity. In *Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation*, pp. 967–974, 2015. A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In *arXiv:1511.06434*, 2015. Olivier Roy and Martin Vetterli. The effective rank: A measure of effective dimensionality. In *2007 15th* European signal processing conference, pp. 606–610. IEEE, 2007. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 5234–5243, 2018. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. Improved Techniques for Training GANs. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 29*, pp. 2234–2242. Curran Associates, Inc., 2016. Benjamin Sanchez-Lengeling, Jennifer N Wei, Brian K Lee, Richard C Gerkin, Alán Aspuru-Guzik, and Alexander B Wiltschko. Machine learning for scent: Learning generalizable perceptual representations of small molecules. *arXiv preprint arXiv:1910.10685*, 2019. Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. Mixture models for diverse machine translation: Tricks of the trade. In *International conference on machine learning*, pp. 5719–5728. PMLR, 2019. Loïc Simon, Ryan Webster, and Julien Rabin. Revisiting precision and recall definition for generative model evaluation. In *International Conference on Machine Learning (ICML)*, 2019. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. A. Srivastava, L. Valkov, C. Russell, M. U. Gutmann, and C. Sutton. VEEGAN: reducing mode collapse in GANs using implicit variational learning. In *Advances in Neural Information Processing Systems*, 2017. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016. 
François Torregrossa, Vincent Claveau, Nihel Kooli, Guillaume Gravier, and Robin Allesiardo. On the correlation of word embedding evaluation metrics. In *Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)*, pp. 4789–4797, 2020.

Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search for improved description of complex scenes. In *Proceedings of the AAAI Conference on Artificial Intelligence*, 2018.

Ulrike Von Luxburg. A tutorial on spectral clustering. *Statistics and Computing*, 17(4):395–416, 2007.

Mark M Wilde. *Quantum information theory*. Cambridge University Press, 2013.

Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pp. 1112–1122, 2018.

Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. *Advances in Neural Information Processing Systems*, 13, 2000.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Huggingface's transformers: State-of-the-art natural language processing. *arXiv preprint arXiv:1910.03771*, 2019.

H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. In *arXiv:1708.07747*, 2017.

Yutong Xie, Ziqiao Xu, Jiaqi Ma, and Qiaozhu Mei. How much of the chemical space has been explored? selecting the right exploration measure for drug discovery. In *ICML 2022 2nd AI for Science Workshop*, 2022.

Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. *arXiv preprint arXiv:1506.03365*, 2015.

## A Proofs

## A.1 Probability-Weighted Vendi Score

Definition A.1 (Probability-Weighted Vendi Score). *Let $p \in \Delta_n$ denote a probability distribution on a discrete space $\mathcal{X} = \{x_1, \ldots, x_n\}$, where $\Delta_n$ denotes the $(n-1)$-dimensional simplex, let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a positive semidefinite similarity function, with $k(x, x) = 1$ for all $x$, and let $K \in \mathbb{R}^{n \times n}$ denote the kernel matrix with $K_{i,j} = k(x_i, x_j)$. Let $\tilde{K}_p = \operatorname{diag}(\sqrt{p})\, K \operatorname{diag}(\sqrt{p})$ denote the probability-weighted kernel matrix, and let $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $\tilde{K}_p$. The Vendi Score (VS) is defined as the exponential of the Shannon entropy of the eigenvalues of $\tilde{K}_p$:*

$$\mathrm{VS}_{k}(x_{1},\ldots,x_{n},p)=\exp\left(-\sum_{i=1}^{n}\lambda_{i}\log\lambda_{i}\right).\tag{3}$$

When all elements in the sample are completely dissimilar, the probability-weighted Vendi Score defined in Definition A.1 reduces to the exponential of the Shannon entropy of the weighting distribution:

Lemma A.1. *Let $p \in \Delta_n$ be a probability distribution over $x_1, \ldots, x_n$ and suppose $k(x_i, x_j) = 0$ for all $i \neq j$. Then $\mathrm{VS}_k(x_1, \ldots, x_n, p) = \exp H(p)$, the exponential of the Shannon entropy of $p$.*

## A.2 Proof Of Lemma 3.1

Lemma. *Consider the same setting as Definition 3.1. Then*

$$\mathrm{VS}_{k}(x_{1},\ldots,x_{n})=\exp\left(-\operatorname{tr}\left(\frac{K}{n}\log\frac{K}{n}\right)\right).\tag{4}$$

Proof. For any square matrix $X \in \mathbb{R}^{n \times n}$, if $X$ has an eigendecomposition $X = U \Lambda U^{-1}$, then $\log X = U (\log \Lambda) U^{-1}$, where $\log \Lambda = \operatorname{diag}(\log \lambda_1, \ldots, \log \lambda_n)$ is a diagonal matrix whose diagonal entries are the logarithms of the eigenvalues of $X$.
Also, $\operatorname{tr}(X) = \operatorname{tr}\left(U \Lambda U^{-1}\right) = \operatorname{tr}(\Lambda)$, because the trace is similarity-invariant. $K/n$ is diagonalizable because it is positive semidefinite, so let $K/n = U \Lambda U^{-1}$ denote the eigendecomposition. Then

$$\operatorname{tr}\left(\frac{K}{n}\log\frac{K}{n}\right)=\operatorname{tr}\left(U\Lambda U^{-1}\log\left(U\Lambda U^{-1}\right)\right)=\operatorname{tr}\left(U\Lambda U^{-1}U\left(\log\Lambda\right)U^{-1}\right)=\operatorname{tr}\left(\Lambda\log\Lambda\right)=\sum_{i=1}^{n}\lambda_{i}\log\lambda_{i}.$$

Therefore

$$\mathrm{VS}_{k}(x_{1},\ldots,x_{n})=\exp\left(-\sum_{i=1}^{n}\lambda_{i}\log\lambda_{i}\right)=\exp\left(-\operatorname{tr}\left(\frac{K}{n}\log\frac{K}{n}\right)\right). \qquad \square$$

## A.3 Proof Of Lemma A.1

Lemma. *Let $p \in \Delta_n$ be a probability distribution over $x_1, \ldots, x_n$ and suppose $k(x_i, x_j) = 0$ for all $i \neq j$. Then $\mathrm{VS}_k(x_1, \ldots, x_n, p) = \exp H(p)$, the exponential of the Shannon entropy of $p$.*

Proof. If all elements are completely dissimilar, then $\tilde{K}_p$ is a diagonal matrix, and the eigenvalues $\lambda_1, \ldots, \lambda_n$ are the diagonal entries, which are the entries of $p$. So the von Neumann entropy of $\tilde{K}_p$ is identical to the Shannon entropy of $p$, and its exponential is the Vendi Score. $\square$

## A.4 Proof Of Theorem 3.1

Proof. (a) Effective number: If $p$ is the uniform distribution over $N$ completely dissimilar elements, then $\tilde{K}_p$ is a diagonal matrix with each diagonal entry equal to $1/N$. The eigenvalues of a diagonal matrix are the diagonal entries, so $\mathrm{VS}_K(p) = \exp H(1/N, \ldots, 1/N) = \exp \log N = N$. On the other hand, if all elements are completely similar to each other, then $\tilde{K}_p$ has rank one and so the Vendi Score is equal to one.

(b) Identical elements: The eigenvalues of $\tilde{K}_p$ are the same as the eigenvalues of the covariance matrix of the corresponding feature space:

$$\tilde{\Sigma}_{p}=\sum_{i=1}^{N}p(x_{i})\phi(x_{i})\phi(x_{i})^{\top}.$$

Suppose elements $i$ and $j$ are identical, and let $p'$ denote the probability distribution created by combining $i$ and $j$, i.e. $p'_i = p_i + p_j$ and $p'_j = 0$. Clearly, $\tilde{\Sigma}_p = \tilde{\Sigma}_{p'}$, and so $\mathrm{VS}_k(x_1, \ldots, x_n, p) = \mathrm{VS}_k(x_1, \ldots, x_n, p')$.

(c) Partitioning: Suppose $N$ samples are partitioned into $M$ groups $S_1, \ldots, S_M$ such that, for any $i \neq j$, for all $x \in S_i$, $x' \in S_j$, $k(x, x') = 0$. Let $p_i = |S_i| / \sum_j |S_j|$ denote the relative size of group $i$, let $K$ denote the kernel matrix of $\cup_i S_i$, sorted in order of group index, and let $K_{S_i}$ denote the restriction of $K$ to elements in $S_i$. Then $K/N$ is a block-diagonal matrix, with block $i$ equal to $p_i K_{S_i} / |S_i|$. The eigenvalues of a block-diagonal matrix are the combined eigenvalues of each block, and the partitioning property then follows from the partitioning property of the Shannon entropy.

(d) Symmetry: The eigenvalues of a matrix are unchanged by orthonormal transformation, and the Shannon entropy is symmetric in its arguments, so the Vendi Score is symmetric. $\square$

## A.5 Sample Complexity

The Vendi Score is the exponential of the kernel entropy, $H(K) = -\operatorname{tr}\left(\frac{K}{n} \log \frac{K}{n}\right)$. Bach (2022) proves that the empirical estimator of the kernel entropy, $\hat{H}$, has a convergence rate proportional to $1/\sqrt{n}$, where $n$ is the number of samples. Additionally, by Jensen's inequality, $\mathbb{E}[\hat{H}]$ is no greater than $H$.
Therefore:

$$\exp(H)-\exp(\hat{H})\leq\exp(H)-\exp\left(H-\frac{1}{\sqrt{n}}\right)=\exp(H)\left(1-e^{-1/\sqrt{n}}\right)\leq\frac{\exp(H)}{\sqrt{n}},$$

where the last inequality uses $1 - e^{-x} \leq x$. The empirical estimator of the Vendi Score therefore also has a convergence rate proportional to $1/\sqrt{n}$, with a constant term depending on the true entropy.

## B Implementation Details

## B.1 Images

Stacked MNIST We train GANs on Stacked MNIST using the publicly available code for PresGANs4 and self-conditioned GANs5. The models share the same DCGAN (Radford et al., 2015) architecture and are trained on the same dataset of 60,000 Stacked MNIST images, rescaled to 32×32 pixels, and other hyperparameters are set according to the descriptions in the papers. The models are trained for 50 epochs and the diversity scores are evaluated every five epochs by taking 10,000 samples. For both models, we report the scores from the epoch corresponding to the highest VS score. As in prior work (Metz et al., 2017), we classify Stacked MNIST digits by applying a pretrained MNIST classifier to each color channel independently. The 1000-dimensional Stacked MNIST probability vector is then the tensor product of the three 10-dimensional probability vectors predicted for the three channels.

Obtaining Image Samples In Section 4.4, we calculate the diversity scores of several recent generative models of images. We select models that represent a range of families of generative models and provide publicly available samples or model checkpoints for common image datasets. On the low-resolution datasets, we generate 50,000 samples from each model using the official code for VDVAE,6 DenseFlow,7 and IDDPM,8 each of which provides a checkpoint for unconditional image generation models on CIFAR-10 and ImageNet-64. For IDDPM, we sample using DDIM (Song et al., 2021) for 250 steps, and otherwise use the default sampling parameters. For the higher-resolution datasets, we use the 50,000 precomputed samples provided by Dhariwal & Nichol (2021)9 for the ADM and StyleGAN models. We obtain 50,000 samples from the RQ-VAE/Transformer model using the code and checkpoints provided by the authors,10 with the default sampling parameters.

4https://github.com/adjidieng/PresGANs
5https://github.com/stevliu/self-conditioned-gan
6https://github.com/openai/vdvae/
7https://github.com/matejgrcic/DenseFlow
8https://github.com/openai/improved-diffusion
9https://github.com/openai/guided-diffusion
10https://github.com/kakaobrain/rq-vae-transformer

Calculating Image Metrics In Table 2, we calculate standard image quality and diversity metrics, which are based on Inception embeddings. These Inception-based metrics are sensitive to a number of implementation details (Parmar et al., 2022) and in general cannot be compared directly between papers. For a consistent comparison, we calculate all scores using the evaluation code provided by Dhariwal & Nichol (2021). We also calculated FID and Precision/Recall using the provided reference images and statistics, with the exception of CIFAR-10, for which we use the training set as the reference. (The diversity scores of the Original datasets in Table 2 are calculated using these reference images.) As a result, the numbers in this table may not be directly comparable to results reported in prior work.
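The tensor-product construction for the Stacked MNIST class probabilities described above can be sketched as follows; the function name is ours, and the per-channel classifier outputs are assumed given.

```python
import numpy as np

def stacked_mnist_probs(p_r, p_g, p_b):
    """1000-way class probabilities for a Stacked MNIST image (a sketch).

    p_r, p_g, p_b are the 10-dimensional MNIST classifier outputs for
    the three color channels; the joint distribution over the 1000
    stacked classes is their tensor (outer) product, flattened.
    """
    joint = np.einsum('i,j,k->ijk', p_r, p_g, p_b)
    return joint.reshape(-1)  # sums to one if each input does
```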
## B.2 Text

Obtaining Image Captions In Section 4.5, we sample image captions from a pretrained image-captioning model,11 which is publicly available on Hugging Face (Wolf et al., 2019), and we use the Hugging Face implementation of beam search and diverse beam search. For beam search we use a beam size of 5. For diverse beam search, we use a beam size of 10, a beam group size of 10, and set the number of return sequences to 5.

Calculating Text Metrics The text metrics we use are calculated in terms of word n-grams, and therefore depend on how sentences are tokenized into words. We calculate all text metrics using the pre-trained wordpiece tokenizer used by the captioning models. We use the implementation of the BLEU score in NLTK (Bird, 2006).

## C Additional Results

## C.1 Assessing Mode Dropping in Datasets

![18_image_0.png](18_image_0.png)

Figure 5: Detecting mode dropping in image and text datasets. We evaluate vs and IntDiv on datasets containing 500 examples drawn uniformly from between one and ten classes: digits in mnist and sentence genres in multinli. Compared to IntDiv, vs increases more consistently with the number of classes.

In Figure 5, we examine whether vs captures mode dropping in a controlled setting, where we have information about the ground truth class distribution. We simulate mode dropping by sampling equal-sized subsets of two classification datasets, with each subset $S_i$ containing examples sampled uniformly from the first $i$ categories. We perform this experiment on one image dataset (mnist) and one text dataset (multinli; Williams et al., 2018), using simple similarity functions. We compare vs to the Internal Diversity (IntDiv), defined as above.

11https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts

| Model         | IntDiv | vs    |
|---------------|--------|-------|
| Original      | 0.855  | 403.9 |
| aae           | 0.859  | 501.1 |
| char-rnn      | 0.856  | 482.4 |
| Combinatorial | 0.873  | 536.9 |
| hmm           | 0.871  | 250.9 |
| jtn           | 0.856  | 489.5 |
| Latent gan    | 0.857  | 486.4 |
| N-gram        | 0.874  | 479.8 |
| vae           | 0.856  | 475.3 |

Table 4: IntDiv and vs for generative models of molecules. The hmm has one of the highest IntDiv scores, but scores much lower on vs. An analysis of 250 molecules from the hmm reveals that vs is more accurate in this case. (See Figure 3.)

mnist consists of 28×28-pixel images of hand-written digits, divided into ten classes. The similarity score we use is the cosine similarity between pixel vectors: $k(x, x') = \langle x, x' \rangle / (\|x\| \|x'\|)$, where $x, x'$ are $28^2$-dimensional vectors with entries specifying pixel intensities between 0 and 1. multinli is a multi-genre sentence-pair classification dataset. We use the premise sentences from the validation split (mismatched), which are drawn from one of ten genres. We define similarity using the n-gram overlap kernel: for a given $n$, the n-gram kernel $k_n$ can be expressed as the cosine similarity between feature vectors $\phi^n(x)$, where $\phi^n_i(x)$ is equal to the number of times n-gram $i$ appears in $x$. We use the average over $n = 1, \ldots, 4$: $k(x, x') = \frac{1}{4} \sum_{n=1}^{4} k_n(x, x')$.

The results (Figure 5) show that vs generally increases with the number of classes, even using these simple similarity scores. In mnist (left), vs increases roughly linearly for the first six digits (0-5) and then fluctuates. This could occur if the new modes are similar to the other modes in the sample, or have low internal diversity. In multinli (right), vs increases monotonically with the number of genres represented in the sample.
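A minimal sketch of the n-gram overlap kernel used here, assuming whitespace tokenization for brevity (the captioning experiments use a wordpiece tokenizer, as noted in Appendix B.2):

```python
import numpy as np
from collections import Counter

def ngram_kernel(a, b, n_max=4):
    """Average of the n-gram overlap kernels k_1, ..., k_{n_max} (a sketch).

    Each k_n is the cosine similarity between bag-of-n-gram count vectors.
    """
    ta, tb = a.split(), b.split()
    sims = []
    for n in range(1, n_max + 1):
        ca = Counter(tuple(ta[i:i + n]) for i in range(len(ta) - n + 1))
        cb = Counter(tuple(tb[i:i + n]) for i in range(len(tb) - n + 1))
        dot = sum(ca[g] * cb[g] for g in ca)
        na = np.sqrt(sum(v * v for v in ca.values()))
        nb = np.sqrt(sum(v * v for v in cb.values()))
        sims.append(dot / (na * nb) if na > 0 and nb > 0 else 0.0)
    return float(np.mean(sims))

print(ngram_kernel("a man riding a horse", "a man riding a bike"))
```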
In both cases, vs has a stronger correlation with the number of modes compared to IntDiv.

## C.2 Evaluating Molecular Generative Models For Diversity

We evaluate samples from generative models provided in the moses benchmark (Polykovskiy et al., 2020), using the first 2,500 valid molecules in each sample. Following prior work, our similarity function is the Morgan fingerprint similarity (radius 2), implemented in RDKit.12 IntDiv ranks the hmm among the most diverse models, while vs ranks it as the least diverse (see Section 4.2).

12RDKit: Open-source Cheminformatics. https://www.rdkit.org.

## C.3 Evaluating Image Generative Models For Diversity

In Table 5, we replicate the table described in Section 4.4 and add an additional column, which evaluates diversity using the cosine similarity between pixel vectors as the similarity function. vs should be understood as the diversity with respect to a specific similarity function, in this case, the Inception ImageNet similarity. We illustrate this point in Figure 6 by comparing the top eigenvalues of the kernel matrices corresponding to the Inception similarity and the pixel similarity, which we calculate by resizing the images to 32×32 pixels and taking the cosine similarity between pixel vectors. Inception similarity provides a form of semantic similarity, with components corresponding to particular cat breeds, while the pixel kernel provides a simple form of visual similarity, with components corresponding to broad differences in lightness, darkness, and color.

| Model                    | IS↑   | FID↓   | Prec↑ | Rec↑ | VSI↑  | VSP↑ |
|--------------------------|-------|--------|-------|------|-------|------|
| **CIFAR-10**             |       |        |       |      |       |      |
| Original                 |       |        |       |      | 19.50 | 3.52 |
| VDVAE                    | 5.82  | 40.05  | 0.63  | 0.35 | 12.87 | 3.34 |
| DenseFlow                | 6.01  | 34.54  | 0.62  | 0.38 | 13.55 | 2.94 |
| IDDPM                    | 9.24  | 4.39   | 0.66  | 0.60 | 16.86 | 3.27 |
| **ImageNet 64×64**       |       |        |       |      |       |      |
| Original                 |       |        |       |      | 43.93 | 4.43 |
| VDVAE                    | 9.68  | 57.57  | 0.47  | 0.37 | 18.04 | 4.24 |
| DenseFlow                | 5.62  | 102.90 | 0.36  | 0.17 | 12.71 | 3.51 |
| IDDPM                    | 15.59 | 19.24  | 0.59  | 0.58 | 24.28 | 4.57 |
| **LSUN Cat 256×256**     |       |        |       |      |       |      |
| Original                 |       |        |       |      | 15.12 | 4.58 |
| StyleGAN2                | 4.84  | 7.25   | 0.58  | 0.43 | 13.55 | 4.53 |
| ADM                      | 5.19  | 5.57   | 0.63  | 0.52 | 13.09 | 4.81 |
| RQ-VT                    | 5.76  | 10.69  | 0.53  | 0.48 | 14.91 | 5.83 |
| **LSUN Bedroom 256×256** |       |        |       |      |       |      |
| Original                 |       |        |       |      | 8.99  | 3.10 |
| StyleGAN                 | 2.55  | 2.35   | 0.59  | 0.48 | 8.76  | 3.09 |
| ADM                      | 2.38  | 1.90   | 0.66  | 0.51 | 7.97  | 3.27 |
| RQ-VT                    | 2.56  | 3.16   | 0.60  | 0.50 | 8.48  | 3.67 |

Table 5: We evaluate samples from several recent models, measuring similarity using either Inception representations (VSI) or pixels (VSP). The pixel similarity score is the cosine similarity between pixel vectors, calculated after resizing the images to 32×32 pixels.

The pixel similarity and Inception similarity scores do not always agree. For example, if the images in a sample represent a variety of ImageNet classes but share a similar color palette, we might expect the sample to have high Inception diversity but low pixel diversity. The pixel diversity scores are on a lower scale, indicating that this similarity metric is less capable of making fine-grained distinctions between the images in these samples.

![20_image_0.png](20_image_0.png)

Figure 6: The choice of similarity function provides a way of specifying the notion of diversity that is relevant for a given application. We project lsun Cat images along the top eigenvectors of the kernel matrix, using either Inception features or pixels to define similarity. Inception similarity provides a form of semantic similarity, with components corresponding to particular cat breeds, while the pixel kernel captures visual similarity. For each eigenvector u, we show the four images with the highest and lowest entries in u. For both kernels, every similarity score is positive, so all entries in the top eigenvector have the same sign; the images with the highest weights in this component have the highest expected similarity scores. The remaining eigenvectors partition the images along different dimensions of variation.

## C.4 Evaluating Decoding Algorithms For Text For Diversity

In Figure 7, we plot the relationship between vs and n-gram diversity using the MS-COCO captioning data and the n-gram overlap kernel described in Section 4.5. The figure shows that vs is highly correlated with n-gram diversity, which is expected given that our similarity function is based on n-gram overlap. Nonetheless, there are some data points that the metrics rank differently. This is because n-gram diversity conflates two properties: the diversity of n-grams within a single sentence and the n-gram overlap between sentences. We highlight two examples in Figure 8. In general, the instances that n-gram diversity ranks lower compared to vs contain individual sentences that repeat phrases. On the other hand, n-gram diversity can be inflated in cases when one sentence in the sample is much longer than the others, even if the other sentences are not diverse.

![21_image_0.png](21_image_0.png)

Figure 7: vs is correlated with n-gram diversity. Each point represents a group of five captions for a particular image.

High Vendi Score, low n-gram diversity:

- *two men in bow ties standing next to steel rafter.*
- *several men in suits talking together in a room.*
- *an older **man in a tuxedo** standing next to a younger **man in a tuxedo** wearing glasses.*
- *two men wearing tuxedos glance at each other.*
- *older **man in tuxedo** sitting next to another younger **man in tuxedo**.*

Low Vendi Score, high n-gram diversity:

- *a man and woman cutting a slice of cake by trees.*
- *a couple of people standing cutting a cake.*
- *the dork with the earring stands next to the asian beauty who is way out of his league.*
- *a newly married couple cutting a cake in a park.*
- *a bride and groom are cutting a cake as they smile.*

Figure 8: Two sets of captions that receive different ranks according to the Vendi Score and n-gram diversity. We manually highlight some features contributing to the different scores. On the left, a sentence contains repeated n-grams, which are penalized by n-gram diversity. On the right, one long outlier sentence contributes most of the n-grams for this group, greatly increasing the n-gram diversity.

## C.5 Diagnosing Datasets For Diversity

Molecules We evaluate the diversity scores of molecules in the GoodScents database of perfume materials,13 which has been used in prior machine learning research on odor modeling (Sanchez-Lengeling et al., 2019). We use the standardized version of the data provided by the Pyrfume library.14 Each molecule in the dataset is labeled with one or more odor descriptors (for example, "clean, oily, waxy" or "floral, fruity, green").
We form groups of molecules corresponding to the seven most common odor descriptors, with each group consisting of 500 randomly sampled molecules. We evaluate vs using two similarity functions: the Morgan fingerprint similarity (radius 2), and the similarity between odor descriptors, defined as the cosine similarity between descriptor indicator vectors $\phi(x)$, where $\phi_i(x)$ is equal to one if descriptor $i$ is associated with molecule $x$ and zero otherwise. The diversity scores are plotted in Figure 9.

13http://www.thegoodscentscompany.com/
14https://pyrfume.org/

![22_image_0.png](22_image_0.png)

Figure 9: The Vendi Scores of samples containing 500 molecules with different scent labels, calculating diversity using two similarity functions: Morgan molecular fingerprint similarity, and the similarity between odor descriptors. Each molecule is associated with one or more human-written tags (e.g. "floral, fruity, green, sweet"), and the odor-descriptor similarity is the cosine similarity between binary tag indicator vectors.

The molecular diversity score and the odor-descriptor diversity scores are correlated, meaning that words like "woody" and "green" are used to describe molecules that vary in molecular structure and also elicit diverse odor descriptions, while words like "waxy" and "fatty" are used for molecules that are similar to each other and elicit similar odor descriptions. For example, the word "green" appears in tag sets such as "aldehydic, citrus, cortex, green, herbal, tart" and "floral, green, terpenic, tropical, vegetable, woody", whereas the word "waxy" tends to co-occur with the same tags ("fresh, waxy"; "fresh, green, melon rind, mushroom, tropical, waxy"; "fruity, green, musty, waxy"). Molecules from the categories with the highest and lowest scores are illustrated in Figure 10.

![23_image_0.png](23_image_0.png)

Figure 10: The scent categories in the GoodScents dataset with the highest ("Most diverse", top: "Woody", "Herbal") and lowest ("Least diverse", bottom) Vendi Score (vs), using the molecular fingerprint similarity. We show 100 examples from each category, in decreasing order of average similarity, with the image at the top left having the highest average similarity scores.

![22_image_1.png](22_image_1.png)

Figure 11: The Vendi Scores of samples containing 500 MultiNLI sentences with different genres (left) or Amazon reviews with different star ratings (right), defining similarity using either n-gram overlap or SimCSE (Gao et al., 2021).

Text In Figure 11, we evaluate the diversity scores of samples of sentences with different genres, from the MultiNLI dataset (Williams et al., 2018), and Amazon product reviews with different star ratings (Keung et al., 2020), using either the n-gram overlap similarity or SimCSE (Gao et al., 2021). SimCSE is a Transformer-based sentence encoder that achieves state-of-the-art scores on semantic similarity benchmarks. The model we use is initialized from the uncased BERT-base model (Devlin et al., 2019) and trained with a contrastive learning objective to assign high similarity scores to pairs of MultiNLI sentences that have a logical entailment relationship. In MultiNLI, both models assign the highest score to Slate, which consists of sentences from articles published on slate.com. SimCSE assigns a higher score to the "Fiction" category, possibly because it is less sensitive to common n-grams (e.g. "he said") that appear in many sentences in this genre and contribute to the low n-gram diversity score.
"he said"), that appear in many sentences in this genre and contribute to the low N-gram diversity score. In the Amazon review dataset, the 5-star reviews have the highest N-gram diversity but the lowest SimCSE diversity, perhaps because SimCSE assigns high similarity scores to sentences that have the same strong sentiment. SimCSE assigns the highest diversity score to 3-star reviews, which can vary in sentiment. Images Following the setting in Section 4.6, we evaluate two additional dataset, Fashion mnist (Xiao et al., 2017) and celeba (Liu et al., 2015). We use the same similarity scores as in Section 4.6. Images in CelebA are associated with 40-dimensional binary attribute vectors. We use these attributes as an additional similarity score, defining the attribute similarity as the cosine similarity between attribute vectors. These illustrations highlight the importance of the choice of similarity function in defining a diversity metric. ![24_image_0.png](24_image_0.png) Figure 12: The categories in Fashion MNIST with the lowest (left) and highest (right) Vendi Scores, defining similarity as the cosine similarity between either Inception embeddings (top) or pixel vectors (bottom). We show 100 examples from each category, in decreasing order of average similarity, with the image at the top left having the highest average similarity scores according to the corresponding kernel. ![25_image_0.png](25_image_0.png) Figure 13: The attributes in celeba with the lowest (left) and highest (right) vs, defining similarity as the cosine similarity between either Inception embeddings (top), pixel vectors (middle), or binary attribute vectors (bottom). We show 100 examples from each category, in decreasing order of average similarity, with the image at the top left having the highest average similarity scores according to the corresponding kernel. These examples illustrate the importance of the choice of similarity function for defining the notion of diversity that is relevant for a given application. However, almost all choices of similarity functions show that the celeba dataset is more diverse for men than for women.
Review 1:

Summary: The authors propose the Vendi score: a quantitative metric for evaluating the diversity of empirical distributions of generative models or ground-truth datasets. The Vendi score is calculated based on a similarity matrix of K samples of a distribution. After highlighting some properties of the proposed metric, the authors calculate and compare their metric with IS, FID, PRD, and IntDiv across several domains: images, text, and molecules.

Strengths and Weaknesses: The paper is remarkably well-written, especially the first half, and it is overall fairly easy to follow. The toy examples highlight and support key properties of the proposed metric (e.g., Figure 1) in a well-presented package. The Vendi score is based on structural properties of the data distribution, which is backed by theory. I see two major weaknesses:
- The manuscript is missing a limitations section, or any discussion of drawbacks of the VS compared to prior methods.
- The evaluation section is extensive in terms of covering several types of data, but does not feel conclusive in terms of the comparison with IS, FID, and PRD on image-generating models. It appears that VS mostly agrees with prior methods, and in cases of disagreement, it is still not clear to me how to interpret them.

Requested Changes: The paper is missing a discussion of the limitations of the Vendi score. I strongly urge the authors to include a comprehensive mention of possible shortcomings of the proposed method, both on its own and compared to existing methods. For example, no-reference diversity measures can be gamed by simply introducing new modes to the distribution (that do not exist in the original dataset to be measured). Furthermore, the Vendi score leaves open the question of how to evaluate the "precision" of a model -- this is a point that IS and FID measure implicitly and PRD handles explicitly. Some more comments:
- Sec. 1, Intro, first paragraph: the authors may want to include a brief definition of the term "diversity" here.
- Sec. 2, 1st paragraph: the authors mention here that the Vendi score is a no-reference metric. It would be helpful to discuss the implications of this property, both advantageous and disadvantageous.
- Sec. 4.2: minor grammatical mistake in "In Figure 3, we highlight [...]"
- Table 2 (and also Table 3) could be replaced by plots for easier parsing. At the very least, the table cells could be highlighted for "best" and "second best". In its current form, the table is slow to read.
- Table 2: For the currently empty "Original" rows, the authors could calculate the metrics in a bootstrapping experiment, or compare training with validation/test splits.
- Sec. 4.4, 2nd paragraph: It would be interesting to calculate VS also with the L2 distance instead of cosine similarity, which would, e.g., make the results more comparable to FID and PRD, which use distances. At the very least, it would be interesting to see if the choice of using cosine similarity has a significant effect on the results.
- Sec. 4.4, 3rd paragraph: ImageNet is likely the dataset with the highest (semantic) diversity even in absolute terms, so while I agree with the last sentence of this paragraph (the choice of the embedding is important), I believe it misses the crucial point that image embeddings have been found to be fairly universal. In other words, I would expect ImageNet to yield higher VS than CIFAR even when computing similarity in a feature space of a model trained on CIFAR.
- Sec. 5, 1st paragraph: The authors stress here that the measured "form of diversity" can be specified for VS by changing the similarity function. While true, in practice the embedding space will more often than not dictate the choice of the similarity function -- and importantly, the choice of the embedding space is also relevant for most baseline metrics such as FID and PRD.

Broader Impact Concerns: I don't have significant concerns. The proposed method adds to the list of existing diversity measures, and the fact that it can be applied to ground-truth datasets is a nice addition. That said, the authors may want to include a brief Broader Impact Statement highlighting the shortcomings of the proposed metric for evaluating diversity on datasets. For example, the wrong choice of an embedding space for calculating similarities can reintroduce biases that lead to skewed diversity scores.

==================================================

Review 2:

Summary: In many machine learning applications, there is often a need to obtain diverse solutions, or to be able to use diverse inputs during the training process. Diversity is often task-dependent and thus hard to measure -- researchers and practitioners resort to some intuitive methods to enforce diversity (e.g., using a regularization employing the inverse of the dot-product similarity). However, the treatment of diversity is rarely systematic (although there are exceptions), and it seems to have never made it as a first-class citizen despite its importance. This paper takes an approach that will remind readers of a lot of things -- determinantal point processes (which work with a kernel matrix), permanental processes, spectral clustering (self-tuning and otherwise), generalized Schrödinger operators (and using them for cuts), graph cuts, graph entropies, etc. -- but the actual use of the concept proposed in the paper in the context of diversity seems to have never happened. The main idea in the paper is to define a score of diversity between _data points_ as the entropy of the eigenvalues of a normalized pairwise similarity matrix, and to make an argument that this is a reasonable unsupervised metric for diversity, supported by some basic experiments. The proposed object is of course reminiscent of the von Neumann entropy (and divergences) and many concepts, as I said, and has a precedent in the species richness literature (btw, some of the species richness work has inspired work in machine learning, and of course applied statistics) -- which is taken as the main inspiration for the paper. One could summarize this paper in a couple of lines: for N points, construct the pairwise similarity matrix, normalize it in a particular way, then find the entropy of its normalized eigenvalues and use that as a measure of diversity. Similar objects appear in many settings, but the main inspiration for the paper comes from species richness, and it seems to be the first to suggest using this kind of object as a measure of diversity. Some informal arguments and basic experiments make a case for the proposed metric.

Strengths and Weaknesses:
- The contribution is straightforward and has simplicity going for it. It is also unsupervised and depends on the representation and a similarity function.
- The proposed metric has not been proposed in the context of measuring diversity in machine learning, despite similar objects appearing in many contexts. Although, to the authors' credit, whenever I thought to myself "Oh, this is just concept A..", soon enough I found the authors citing exactly that. So, clearly, the authors are well aware of this.
- At some level, I can imagine that one can find the paper underwhelming, as one might think the idea is used in so many contexts (or at least is related to ideas in many contexts). But if one thinks about it a bit more -- the contribution in the context of diversity is certainly new. In that sense, I see the paper as having a good place in the literature.
- To continue on the above -- given the contribution, I would have certainly expected more extensive experiments. More specifically, some settings where diversity is important and can make a big difference. I would have liked to see the Vendi score placed in an optimization objective, and then a demonstration that using it helps on some downstream task. I am a little conflicted about whether, in its current state, the contribution is enough to merit publication -- although I lean towards acceptance (for the reasons described above).

Requested Changes: I would like to hear first what the authors think about the experiments above, and then I can give more suggestions. I feel the experiments are too basic and only show some intuitive qualities (one of the figures about components resembles one in the spectral clustering tutorial by von Luxburg, for example). The authors try their best to cite prior work diligently, although I wouldn't quite say that diversity hasn't attracted attention. For example, if the authors see papers and theses such as https://apps.dtic.mil/sti/pdfs/AD1168004.pdf, https://home.ttic.edu/~pyadolla/papers/MBestModes.pdf, and https://aclanthology.org/D13-1111.pdf (and the dozens of references in them), they'll see that some have considered various notions (and even set diversity up as an optimization), although I will agree that they are not measuring diversity in the manner proposed in the paper. The authors need to add some discussion (and this ties back to the point about experiments) about when the metric fails. Since it is intuitively cut-based (although the similarity function can be learnt), it will obviously have its drawbacks. But it is not clear to me what its failure modes are.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: This paper suggested using the von Neumann entropy of a kernel (Bach, 2022) to measure the diversity of samples in machine learning, and called this metric the Vendi Score. The paper evaluated this metric on a number of tasks, such as measuring the diversity of the samples from generative models.

Strengths and Weaknesses:

Strengths
- This paper proposed to use the von Neumann entropy of a kernel to measure the diversity of samples, which could be a useful metric to consider when evaluating the diversity of generative models. Even though this metric is directly borrowed from the existing literature, the paper uses it in the context of diversity measurement.
- This paper evaluated the metric on a number of different tasks.

Weaknesses
- The paper criticized existing metrics such as FID for depending on reference datasets and/or labels. I don't think this is a fair criticism, as those methods are trying to measure the similarity between the generated distribution and the reference distribution, instead of merely measuring diversity as the proposed metric does. For example, if a generative model generates purely random noise, it would score high under the Vendi Score, but not under FID (say, for natural-image reference datasets).
So it is unclear whether this is an advantage or a disadvantage, and thus it is not really a good comparison.
- The score critically depends on a kernel similarity metric, or an embedding function, to do the heavy lifting. It is therefore crucial to have some systematic studies on how robust the score is under different kernel functions.
- There is no way to evaluate whether the proposed metric is good or bad. This is a weakness of the paper, though I'm not sure if there is any way to get around it given the nature of the problem.

Requested Changes: In addition to the points mentioned in the "Weaknesses" section above, please also add
- A robustness study of the scores with respect to common factors, such as changing the sample size or re-sampling a new collection of examples from the same generative model under evaluation.

Broader Impact Concerns: N/A

==================================================

Review 4:

Summary: The paper introduces the Vendi score as a metric for measuring the diversity of samples. This is defined based on the Shannon entropy of the eigenvalues of a similarity matrix. Empirical evaluations demonstrate that it is more effective at measuring certain aspects of diversity than other metrics.

Strengths and Weaknesses:

Strengths:
- The idea is quite simple, works with various distance metrics, and scales only linearly with the number of samples for certain kernels.
- Empirical results demonstrate the effectiveness of the Vendi score across multiple domains and multiple metrics.
- A fair comparison with existing works on diversity metrics is given.

Weaknesses:
- It is unclear whether this metric is consistent, i.e., whether with an infinite number of i.i.d. samples there is a limit for $VS(x_1, \ldots, x_n) / n$.
- It is also unclear how many samples we need to get a good estimate of VS. For example, if I know a distribution is more diverse than another, how many samples do I need to confidently say this is the case?
- Sometimes the kernels chosen do not satisfy $k(x, x) = 1$, such as the probability product kernel.

Requested Changes: Overall, it is an interesting paper. I would suggest addressing some of the points in the Weaknesses section above.
- Are there any failure modes for the Vendi score? If not, what kind of axioms for diversity are tested for this?
- It would be nice to add a "limitations" section, in the sense that the Vendi score only measures diversity and nothing else, so it has to be combined with other metrics for sample quality (such as FID).

Broader Impact Concerns: No.

==================================================

Metareview:

Recommendation: Accept as is

Comment: All reviewers recognized the importance of the research direction taken by the paper, and most reviewers agreed on the significance of the VS as a contribution. A reviewer raised the concern that the score definition was simple and hence might not make a significant contribution. I disagree, in that simple ideas, if novel and well executed, can have quite some impact. The authors properly acknowledged the other fields and sources outside ML from which the VS takes inspiration. This is enough to provide the needed background for generative-model users and researchers to understand the VS or build on it. Additional concerns included the comparison of the VS with other metrics for evaluating the quality of generated images, such as FID, and the request to run additional experiments using different similarity metrics and optimizing the VS.
The authors engaged with the reviewers' comments and managed to address the majority of them convincingly, especially the concerns regarding the comparison with FID and related metrics, which highlight orthogonal aspects of the generated data. I agree with the authors when they claim that directly optimizing for VS is outside the scope of this paper, and that it can constitute a research question for future work. Ultimately, the majority of the reviewers agreed that the current version of the updated manuscript satisfies the acceptance criteria of the journal and decided to lean towards acceptance. The paper is therefore accepted as is.

==================================================
# NorMatch: Matching Normalizing Flows With Discriminative Classifiers For Semi-Supervised Learning

Zhongying Deng *zd294@cam.ac.uk*
Department of Applied Mathematics and Theoretical Physics
University of Cambridge

Rihuan Ke *rihuan.ke@bristol.ac.uk*
School of Mathematics
University of Bristol

Carola-Bibiane Schönlieb *cbs31@cam.ac.uk*
Department of Applied Mathematics and Theoretical Physics
University of Cambridge

Angelica I Aviles-Rivero *ai323@cam.ac.uk*
Department of Applied Mathematics and Theoretical Physics
University of Cambridge

Reviewed on OpenReview: *https://openreview.net/forum?id=ebiAFpQ0Lw&noteId=5PmBQKApbT*

## Abstract

Semi-Supervised Learning (SSL) aims to learn a model using a tiny labeled set and massive amounts of unlabeled data. To better exploit the unlabeled data, the latest SSL methods use pseudo-labels predicted from *a single discriminative classifier*. However, the generated pseudo-labels are inevitably linked to inherent confirmation bias and noise, which greatly affect the model performance. In this work, we introduce a new framework for SSL named NorMatch. Firstly, we introduce a new uncertainty estimation scheme based on normalizing flows, used as an auxiliary classifier, to enforce highly certain pseudo-labels, yielding a boost to the discriminative classifier. Secondly, we introduce a threshold-free sample weighting strategy to better exploit both high- and low-confidence pseudo-labels. Furthermore, we utilize normalizing flows to model, in an unsupervised fashion, the distribution of unlabeled data. This modelling assumption can further improve the performance of the generative classifier via unlabeled data, and thus implicitly contributes to training a better discriminative classifier. We demonstrate, through numerical and visual results, that NorMatch achieves state-of-the-art performance on several datasets.

## 1 Introduction

Deep convolutional neural networks (CNNs) have achieved enormous success in various computer vision tasks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Long et al., 2015; Chen et al., 2017; Girshick et al., 2014; Girshick, 2015). The key to such outstanding performance is the large amount of labeled data used in supervised techniques. However, collecting a vast amount of labeled data is time-consuming and labor-intensive. Semi-Supervised Learning (SSL) has been a focus of great interest as it mitigates these drawbacks (Tarvainen & Valpola, 2017; Berthelot et al., 2019b;a; Sohn et al., 2020; Li et al., 2021). SSL works under the assumption of learning with a tiny labeled set and a vast amount of unlabeled data; in the majority of real-world problems, unlabeled data is abundant.

![1_image_0.png](1_image_0.png)

Figure 1: The comparison of a) a discriminative classifier, b) a normalizing flow classifier (NFC) (Izmailov et al., 2020), and c) discriminative + normalizing flow classifiers in predicting unlabeled data. Here, the data points in all the sub-plots are of the same set of inputs. Our goal is to predict highly certain pseudo-labels for the unlabeled samples (the red dots). Inconsistent predictions from the discriminative classifier and the NFC on a sample (e.g., the left red dot) indicate that the pseudo-label is less trustworthy (e.g., the uncertain region in c)). In this case, we will downplay its importance to avoid over-confidence. In contrast, if consistent predictions (e.g., on the right red dot) are achieved among both classifiers, the pseudo-labels have higher certainty.
Ideally, if the predictions are consistent under any hypothesis, i.e., under different classifiers, we can fully trust the predicted pseudo-labels.

The crucial principle behind SSL is how to better handle the unlabeled set. The current SSL techniques (Sohn et al., 2020; Li et al., 2021; Zheng et al., 2022) use predicted classes from a discriminative classifier as pseudo-labels for the unlabeled data, with a threshold to filter out low-confidence predictions. The threshold is essentially used to estimate the uncertainty or confidence of the generated pseudo-labels. However, existing threshold-based uncertainty estimation strategies have some disadvantages. Firstly, a manually-set threshold cannot effectively identify noisy samples on which the discriminative classifier can be over-confident. This issue can further cause noise accumulation and the inherent confirmation bias (Tarvainen & Valpola, 2017; Arazo et al., 2020) in pseudo-labeling. Secondly, the performance is sensitive to the threshold hyper-parameter, which means that a sub-optimal threshold may lead to substantial degradation on some datasets. The intuition is that a high threshold admits only a few pseudo-labels for training, while a low threshold introduces high label noise. The optimal threshold depends on the average predicted probability of the discriminative classifier, where the average probability depends on the dataset's statistics, including the number of classes and image quality. Finally, thresholding usually discards some low-confidence samples, but these can be hard samples which contribute to better performance.

In this work, we circumvent the drawbacks associated with thresholding by using normalizing flows to estimate the uncertainty of pseudo-labels from a discriminative classifier. In particular, a Normalizing Flow Classifier (NFC) is used as an auxiliary classifier to estimate the uncertainty of pseudo-labels. The uncertainty estimation is achieved by **match**ing the predictions of a Normalizing flow classifier and the discriminative classifier, which we call NorMatch.

NorMatch uses the NFC to prevent a discriminative classifier from being over-confident on noisy pseudo-labels, as illustrated in Figure 1. This is because a pseudo-label having a consensus among diverse classifiers is usually of high quality. Diversity is achieved by using two fundamentally different but complementary classifiers - the NFC as a generative classifier and the Softmax classifier as a discriminative one. NorMatch accepts a pseudo-label if the predicted pseudo-labels of these two classifiers are consistent. Otherwise, NorMatch downplays the importance of such a predicted pseudo-label by using the minimum predicted probability of the two classifiers. We call this design Normalizing flow for Consensus-based Uncertainty Estimation (NCUE). NCUE is a threshold-free scheme across different datasets. Moreover, the NCUE in NorMatch leverages low-confidence samples for model training, which can improve the performance. Overall, our NCUE scheme can effectively tackle the aforementioned three disadvantages of threshold-based uncertainty estimation. Furthermore, NorMatch also utilizes a normalizing flow to model, in an unsupervised fashion, the distribution of unlabeled data. This design is named Normalizing flow for Unsupervised Modeling (NUM). NUM can contribute to learning a better generative classifier on the unlabeled data, thus further improving the performance of the discriminative classifier implicitly.
Our contributions are summarized as follows.

- We propose a new SSL method named NorMatch, which utilizes normalizing flows as an auxiliary generative classifier to estimate pseudo-label uncertainty for the discriminative classifier.
- We introduce a threshold-free sample weighting scheme, called NCUE, to exploit both high- and low-confidence pseudo-labels. We further leverage normalizing flows to model the distribution of unlabeled data in an unsupervised manner (NUM).
- We demonstrate that our NorMatch achieves performance better than, or comparable to, state-of-the-art methods on several popular SSL datasets, including CIFAR-10, CIFAR-100, STL-10 and Mini-ImageNet.

## 2 Related Work

In this section, we first review semi-supervised learning methods and then introduce normalizing flows.

## 2.1 Semi-Supervised Learning

Semi-Supervised Learning (SSL) methods can be broadly divided into two categories. The first category adopts consistency regularization while the second one builds upon pseudo-labeling. The idea behind **Consistency regularization** is to enforce consistent outputs, for the same unlabeled sample, under different label-preserving perturbations. These perturbations can be RandAugment (Cubuk et al., 2020), Dropout (Srivastava et al., 2014) or adversarial transformations (Miyato et al., 2018). With multiple perturbed versions of the same sample, Π-Model (Laine & Aila, 2016) minimizes the squared difference between their predictions for a consistent output. Mean Teacher (Tarvainen & Valpola, 2017) further enforces such consistency between the predictions of a model and its exponential moving averaged teacher model. FlowGMM (Izmailov et al., 2020) adopts a normalizing flow model together with a Gaussian Mixture Model to enforce a probabilistic consistency regularization. Unlike FlowGMM, which uses a single normalizing flow to encode the clustering principle (no discriminative classifier included), our NorMatch uses the flow as an auxiliary classifier to deal with the threshold-based uncertainty estimation problem caused by a single discriminative classifier. We estimate the uncertainty of pseudo-labels based on the consensus of these two classifiers. Based on the uncertainty, we propose a threshold-free sample weighting scheme to assign different weights to pseudo-labels. As a result, NorMatch significantly outperforms FlowGMM (see Table 6).

**Pseudo-labeling**, including self-training, uses the model's predictions as pseudo-labels for the unlabeled data, with the pseudo-labels used for the model training in a supervised fashion. MixMatch (Berthelot et al., 2019b) generates 'soft' pseudo-labels using the averaged prediction of the same image under multiple strong augmentations, while ReMixMatch (Berthelot et al., 2019a) uses weakly-augmented versions to obtain pseudo-labels. It further proposes a distribution alignment to encourage the distribution of pseudo-labels to match that of the ground-truth labels of labeled data. FixMatch (Sohn et al., 2020) also employs weak augmentation for pseudo-label generation but obtains one-hot 'hard' pseudo-labels. Since 'hard' pseudo-labels may contain noise, it further introduces a threshold to filter out low-confidence, thus potentially noisy, samples. To improve the pseudo-labeling strategy of FixMatch, CoMatch (Li et al., 2021) further imposes a smoothness constraint on the pseudo-labels by introducing an extra contrastive learning task. SemCo (Nassar et al., 2021) improves the pseudo-labels by adopting two discriminative classifiers for co-training.
Some other methods seek to improve the pseudo-labels by modifying the threshold. For example, Dash (Xu et al., 2021) improves FixMatch by proposing an adaptive threshold which decreases during training. Adsh (Guo & Li, 2022) argues that a fixed threshold for all the classes is sub-optimal, so it designs adaptive thresholds for different classes to improve over FixMatch. A few works have explored how to improve pseudo-labeling through the lens of graphs, where different types of Laplacian energies have been used. The CREPE model (Aviles-Rivero et al., 2019) introduced a new energy model based on the graph 1-Laplacian, which generates highly certain pseudo-labels. LaplaceNet (Sellars et al., 2022) uses a quadratic energy along with a new multi-sample augmentation scheme.

![3_image_0.png](3_image_0.png)

Figure 2: The overview of our NorMatch for unlabeled data. The modules with the same colors (i.e., the CNN and discriminative classifier) share the same set of parameters. NorMatch uses the shared CNN backbone to extract the features of weakly- and strongly-augmented versions of the same unlabeled sample. Then the weakly-augmented features are input to the Normalizing Flow Classifier (NFC) for Unsupervised Modeling (NUM) by likelihood maximization, and to the discriminative classifier to obtain the pseudo-labels. These features are also input to the NFC and the discriminative classifier to enforce a Consensus-based Uncertainty Estimation (called NCUE). The NCUE generates a weight for each sample/pseudo-label, which highlights the consistent predictions and downplays the inconsistent ones. The weights, together with the pseudo-labels, are then used to enforce a weighted cross-entropy, which supervises the training for the strongly-augmented version.

Our NorMatch also builds on FixMatch, but leverages the consensus of an auxiliary generative classifier and the main discriminative one to improve pseudo-labels. Importantly, NorMatch is threshold-free, and simpler yet more effective than existing methods.

## 2.2 Normalizing Flow

Normalizing flows (Dinh et al., 2014; 2016; Kobyzev et al., 2020) compose invertible and differentiable mapping functions to transform a simple distribution, e.g., a standard Gaussian, to match a complex one, e.g., the distribution of real data. Such mappings preserve the exact likelihood, which facilitates probability density estimation for new data. This property allows a normalizing flow to be used as a generative classifier, which cannot be achieved by other generative models, such as generative adversarial networks (GANs) (Goodfellow et al., 2020) and variational auto-encoders (VAEs) (Kingma & Welling, 2013). As a generative model, a normalizing flow can model the marginal distribution of the real data by likelihood maximization. This makes it suitable for unsupervised tasks since no ground-truth labels are needed for model training. Generative models are widely used to generate data or enforce consistency for SSL. However, few works use generative models, especially normalizing flows, as a classifier to help the discriminative one with uncertainty estimation, which is the major challenge in pseudo-label-based SSL. To this end, we exploit the normalizing flow classifier (NFC) for uncertainty estimation. Remarkably, our NorMatch exploits the NFC (Izmailov et al., 2020) to weigh each pseudo-label and uses it to model the marginal distribution of unlabeled data, both contributing to better performance on the SSL task.
## 3 Methodology

In this section, we detail our motivation for leveraging the Normalizing Flow Classifier (NFC) as an auxiliary classifier to estimate the uncertainty of pseudo-labels predicted from the discriminative classifier. We then present how the NFC-based NorMatch works for unlabeled data. Finally, we illustrate the training and inference process of our proposed method.

## 3.1 Motivation

Our insight is that the consensus among diverse classifiers on a pseudo-label can reduce the risk of confirmation bias. Following the work of Melville & Mooney (2003), we can define diversity as the measure of disagreement across different classifiers. Diversity is ensured by using two fundamentally different but complementary classifiers, namely, the NFC as a generative classifier and the Softmax classifier, e.g., a fully connected layer followed by a Softmax activation function, as a discriminative one.

We choose the Normalizing Flow Classifier (NFC) as the auxiliary classifier for the following reasons. (1) Compared to other generative models like GANs or VAEs, normalizing flows can evaluate the exact probability density for new test data. This means that given new test data, we can know the probability of the data following the distribution of a specific class. In practice, the class-specific distribution is modeled by the y-th component of the Gaussian Mixture Model (GMM) in (1) (as we implement the NFC as a RealNVP (Papamakarios et al., 2017) followed by a GMM prior (Izmailov et al., 2020), with the RealNVP acting as the invertible and differentiable mapping functions). If we further normalize such a class-specific probability using the probabilities of all the classes, then the output of the normalizing flow measures the probability of the input data following the distribution of each class, as illustrated in (1). Therefore, a normalizing flow can be used as a generative classifier while the other generative models cannot. (2) Compared to another Softmax-based discriminative classifier as an auxiliary one, the NFC can be used to model the marginal distribution of unlabeled data in an unsupervised way. This can boost the performance (as demonstrated in Table 2). More importantly, the NFC, as a generative classifier, is more complementary to the main discriminative classifier than another discriminative classifier would be. We can then ensure the diversity of these two classifiers.

Concretely, diversity manifests in the following ways. Firstly, the Normalizing Flow Classifier (NFC) predicts the conditional probability derived from Bayes' theorem, while a discriminative classifier directly learns the conditional probability distribution $p(y|z)$, where $z$ is the feature representation of an image $x$ and $y$ is an element of the label space. Secondly, the NFC is a Euclidean distance-based classifier (Izmailov et al., 2020) while the Softmax-based discriminative classifier focuses on cosine distance. The NFC is a Euclidean distance-based classifier as it predicts the labels based on the following conditional probability:

$$p_{n}(y|z)=\frac{\mathcal{N}(z|\mu_{y},\Sigma_{y})}{\sum_{k=1}^{C}\mathcal{N}(z|\mu_{k},\Sigma_{k})}\propto\mathbf{E}(||z-\mu_{y}||_{2}^{2}),\tag{1}$$

where the denominator $\sum_{k=1}^{C}\mathcal{N}(z|\mu_{k},\Sigma_{k})$ is a normalization factor shared by all the classes, $C$ is the number of classes, and $\mathcal{N}(\mu_{y},\Sigma_{y})$ denotes the $y$-th class/component in a Gaussian Mixture Model (GMM), parameterized by the mean $\mu_{y}$ and covariance $\Sigma_{y}$. It shows that the probability is influenced by the Euclidean distance $||z-\mu_{y}||_{2}^{2}$ between a sample's feature $z$ and the class mean. It is worth noting that, to facilitate a better understanding, (1) simplifies the NFC by viewing its invertible and differentiable mapping functions $f_{n}(\cdot)$ as the identity map, i.e., $f_{n}(\cdot)=I$. This simplification makes the normalizing flow degrade to a Gaussian Mixture Model (GMM), which is easier to understand. But in practice, we use a RealNVP (Papamakarios et al., 2017) as the invertible and differentiable functions $f_{n}(\cdot)$, so the input of (1), i.e., $z$, is actually transformed to $f_{n}(z)$ before being input to the GMM. This further leads to $p_{n}(y|z)\propto\mathbf{E}(||f_{n}(z)-\mu_{y}||_{2}^{2})$.
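For concreteness, the following sketch computes the class posterior of this simplified NFC, assuming diagonal covariances and uniform class priors (simplifications of ours, made for brevity); in practice the input would be the RealNVP-transformed feature $f_n(z)$.

```python
import math
import torch

def nfc_probs(z: torch.Tensor, mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    # z: (B, D) features -- in practice the flow-transformed f_n(z);
    # mu, log_var: (C, D) per-class Gaussian means and log-variances.
    diff = z.unsqueeze(1) - mu.unsqueeze(0)                    # (B, C, D)
    log_gauss = -0.5 * ((diff.pow(2) / log_var.exp()).sum(-1)  # Mahalanobis term
                        + log_var.sum(-1)                      # log|Sigma_c|
                        + z.size(1) * math.log(2 * math.pi))   # (B, C) log-densities
    # Normalizing the log-densities over classes is exactly Eq. (1) with uniform priors.
    return log_gauss.softmax(dim=1)

z = torch.randn(4, 16)                       # a batch of 4 features
mu, log_var = torch.randn(10, 16), torch.zeros(10, 16)
print(nfc_probs(z, mu, log_var).sum(dim=1))  # each row sums to 1
```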
In contrast to a Normalizing Flow Classifier (NFC), a discriminative classifier makes predictions based on the conditional probability, which reads:

$$p_{d}(y|z)=\frac{\exp(W_{y}^{T}z)}{\sum_{k=1}^{C}\exp(W_{k}^{T}z)}\propto W_{y}^{T}z=||W_{y}^{T}||\cdot||z||\cdot\cos(W_{y}^{T},z),\tag{2}$$

where $W_{y}$ is the weight for class $y$. It is clear that $p_{d}(y|z)$ is based on the cosine similarity of $W_{y}^{T}$ and $z$. Intuitively, two cosine distance-based discriminative classifiers are less diverse than a Euclidean distance-based NFC combined with a cosine distance-based discriminative classifier. Furthermore, the Euclidean distance of the NFC is calculated between $f_{n}(z)$ and the mean feature of the $y$-th class, $\mu_{y}$. In contrast, the Softmax classifier computes the cosine similarity between the **latent feature** $z$ and **the weight of the $y$-th component of the classifier** $W_{y}$, as in (2), i.e., $z$ is directly used to compute the labels rather than being transformed by any invertible and differentiable functions. Since $f_{n}(z)$ in the NFC is not equal to $z$ in the Softmax classifier, and the mean feature of the $y$-th class (i.e., $\mu_{y}$) is different from the weight of the $y$-th component of the classifier (i.e., $W_{y}$), these two classifiers are considered to be sufficiently diverse.

Lastly, diversity also comes from the lens of statistical learning, where a generative model has a higher asymptotic error than a discriminative one but reaches its asymptotic error much faster. That is, our model treats these two distinctive performance regimes as complementary: this translates into higher diversity in terms of both the boundary between classes (discriminative) and the distribution of individual classes (generative).

As illustrated in Figure 1, if consistent predictions are achieved among these diverse classifiers, i.e., under different measurements (Euclidean and cosine), the predicted pseudo-labels have higher certainty. Otherwise, the pseudo-label is less reliable, and thus its importance should be downplayed. We remark that we downplay low-confidence pseudo-labels rather than simply ignoring them as current methods do (Sohn et al., 2020; Li et al., 2021). We do this because they can be hard samples, which might contribute to better performance.

## 3.2 NorMatch

The key in our NorMatch is to exploit the discriminative classifier and Normalizing flow for Consensus-based Uncertainty Estimation (NCUE), and to apply the Normalizing flow for Unsupervised Modeling (NUM), as depicted in Figure 2. NCUE estimates the uncertainty of pseudo-labels by emphasizing consistently predicted pseudo-labels and downplaying low-confidence ones that cause disagreement. NUM uses the Normalizing Flow Classifier (NFC) to model the distribution of unlabeled data by likelihood maximization. We detail these two designs next.
## 3.2.1 Normalizing Flow For Consensus-Based Uncertainty Estimation (NCUE)

As shown in Figure 2, the unlabeled data $\mathcal{D}_{u}=\{x_{i}^{u}\}_{i=0}^{N_{u}}$, where $N_{u}$ is the total number of samples, are applied with both weak and strong augmentations to obtain two different versions of the same input. For the weakly-augmented version, we use flipping and cropping (still denoted as $x_{i}^{u}$). For the strongly-augmented version, we have $\mathcal{A}(x_{i}^{u})$, with $\mathcal{A}$ being RandAugment (Cubuk et al., 2020). These two versions are then input to a CNN backbone to extract features for classification. The feature of the weakly-augmented version is fed to the Normalizing Flow Classifier (NFC) and the discriminative one to obtain the probabilities $p_{n}(y|x_{i}^{u})$ and $p_{d}(y|x_{i}^{u})$, as in (1) and (2) respectively (denoted as $p_{n}$, $p_{d}$ for clarity). We can then obtain the pseudo-label from the discriminative classifier as $\hat{y}_{i}^{u}=\arg\max(p_{d})$ or $\hat{y}_{i}^{u}=p_{d}$. The latter is not a one-hot version, as the latest methods use it to enforce distribution alignment (Berthelot et al., 2019a; Li et al., 2021). This design choice is evaluated in the experiments.

With these probabilities $p_{n}$, $p_{d}$, NCUE estimates the uncertainty of a pseudo-label by investigating the consensus of the Normalizing Flow Classifier (NFC) and the discriminative classifier, and then adaptively sets a weight for such a pseudo-label. Formally, the NCUE reads:

$$\tau(x_{i}^{u})=\begin{cases}1,&\text{if}\arg\max(p_{d})=\arg\max(p_{n}),\\ \min(p_{d},p_{n}),&\text{if}\arg\max(p_{d})\neq\arg\max(p_{n}),\end{cases}\tag{3}$$

where $\tau(x_{i}^{u})$ is the weight for each unlabeled sample. This means that we accept the pseudo-label if it achieves consensus among these two classifiers. Otherwise, we downplay its importance by $\min(p_{d},p_{n})$ as it can be noisy. With $\tau(x_{i}^{u})$ as the sample weight and $\hat{y}_{i}^{u}$ as the pseudo-label, the loss for the unlabeled data, i.e., the weighted cross-entropy in Figure 2, is given by:

$$L_{u}(\theta_{d})=\frac{1}{\mu B}\sum_{i}^{\mu B}\tau(x_{i}^{u})\cdot H(\hat{y}_{i}^{u},p_{d}(y|\mathcal{A}(x_{i}^{u}),\theta_{d})),\tag{4}$$

where $p_{d}(y|\mathcal{A}(x_{i}^{u}),\theta_{d})$ is the probability of the strongly-augmented version $\mathcal{A}(x_{i}^{u})$ predicted from the discriminative classifier, $\theta_{d}$ denotes the parameters of the CNN backbone and the discriminative classifier, $B$ is the batch size, $\mu=7$ as in (Sohn et al., 2020), and $H(y,p)$ is the cross-entropy.
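A minimal sketch of (3) and (4) follows. It is not the released implementation: it reads $\min(p_d, p_n)$ as the smaller of the two classifiers' top-class probabilities, and it uses the one-hot pseudo-label variant.

```python
import torch
import torch.nn.functional as F

def ncue_weights(p_d: torch.Tensor, p_n: torch.Tensor) -> torch.Tensor:
    """Eq. (3): weight 1 where both classifiers agree on the pseudo-label,
    otherwise the smaller of their top-class probabilities."""
    conf_d, pred_d = p_d.max(dim=1)
    conf_n, pred_n = p_n.max(dim=1)
    agree = (pred_d == pred_n).float()
    return agree + (1.0 - agree) * torch.minimum(conf_d, conf_n)

def unlabeled_loss(logits_strong, p_d_weak, p_n_weak):
    """Eq. (4): weighted cross-entropy on the strongly-augmented view,
    supervised by pseudo-labels from the weakly-augmented view."""
    tau = ncue_weights(p_d_weak, p_n_weak).detach()  # weights carry no gradient
    pseudo = p_d_weak.argmax(dim=1).detach()         # one-hot pseudo-label variant
    return (tau * F.cross_entropy(logits_strong, pseudo, reduction="none")).mean()

p_d = torch.softmax(torch.randn(4, 10), dim=1)       # toy weak-view probabilities
p_n = torch.softmax(torch.randn(4, 10), dim=1)
print(ncue_weights(p_d, p_n))                        # per-sample weights in (0, 1]
```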
## 3.2.2 Normalizing Flow For Unsupervised Modeling (NUM)

NUM models the distribution of the features $z$ of the unlabeled data $x$ by maximum likelihood estimation. The likelihood for the feature of the $i$-th unlabeled image is

$$p_{n}(z_{i}^{u})=\sum_{c=1}^{C}p_{n}(z_{i}^{u}|y=c)\,p(y=c),\tag{5}$$

where $p_{n}(z_{i}^{u}|y=c)$ is obtained by feeding the feature $z_{i}^{u}$ of $x_{i}^{u}$ to the $c$-th class/component of the GMM. We can then optimize the parameters of the normalizing flow $\theta_{n}$ to maximize the joint probability of the unlabeled data:

$$p_{n}(\mathcal{D}_{u}|\theta_{n})=\prod_{i}^{N_{u}}p_{n}(z_{i}^{u}|\theta_{n}).\tag{6}$$

Equivalently, we can achieve the maximization of (6) by minimizing the negative log-likelihood of $p_{n}(\mathcal{D}_{u}|\theta_{n})$. Therefore, we define a loss function $L_{u}(\theta_{n})$ for the goal of likelihood maximization in Figure 2. $L_{u}(\theta_{n})$ is formulated as:

$$L_{u}(\theta_{n})=-\log p_{n}(\mathcal{D}_{u}|\theta_{n}).\tag{7}$$

Remark. We model the probability density of the latent features, $p(z)$, rather than that of the original input images, $p(x)$, using the normalizing flow. In NUM, we input the latent feature $z_{i}^{u}$ to the Normalizing Flow Classifier (NFC) to obtain $p_{n}(z_{i}^{u}|y=c)$. Then the invertible and differentiable mapping functions (implemented as a RealNVP (Papamakarios et al., 2017)) in the NFC can be learned to match the complex distribution of $z_{i}^{u}$ by optimizing (7). Since we use the invertible and differentiable mapping functions, denoted as $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$, to transform a simple GMM $p(g)$ to match the more complex distribution $p(z)$, the NFC in our method is a normalizing flow with $p(z)$ computed by

$$p(z)=p(g)\left|\det\frac{\partial T^{-1}(z)}{\partial z}\right|=p_{g}(T^{-1}(z))\left|\det\frac{\partial T^{-1}(z)}{\partial z}\right|\quad\text{where}\ g=T^{-1}(z).\tag{8}$$

It is notable that $T=f_{n}^{-1}$, where $f_{n}$ is defined in Section 3.1. Furthermore, we remark that the input of the normalizing flow is not necessarily the original images $x$ but can also be the latent features $z$, if we regard the latent features as following a complex distribution. Here, "complex distribution" is a relative concept, which means that the distribution of latent features is usually more complex than a GMM. We adopt the latent features $z$ as the input of the normalizing flow rather than the images $x$ for two reasons: 1) we aim to use the normalizing flow as a generative classifier, which usually takes semantic features as its input for better performance. Thus, using the normalizing flow to model the latent features $z$, which contain more semantic information than the images $x$, can better achieve our goal of improving classification accuracy; 2) the images $x$ are high-dimensional (e.g., 3×84×84 = 21,168 dimensions on the Mini-ImageNet dataset) and the normalizing flow cannot reduce their dimension (otherwise the loss of dimension/information would make the normalizing flow NOT invertible). In this case, directly modeling $p(x)$ in a high-dimensional image space costs more computational resources than we can afford.

## 3.3 Training And Inference Schemes

For the labeled samples $\{x_{i}^{l},y_{i}\}_{i=0}^{N_{l}}$, we adopt the cross-entropy for supervised training. Formally, the Normalizing Flow Classifier (NFC) $\theta_{n}$ is trained on the labeled set by minimizing

$$L_{x}(\theta_{n})=\frac{1}{B}\sum_{i}^{B}H(y_{i},p_{n}(y|x_{i}^{l},\theta_{n})),\tag{9}$$

where $p_{n}(y|x_{i}^{l},\theta_{n})$ is the predicted probability distribution of the sample $x_{i}^{l}$. We also define $L_{x}(\theta_{d})$ as the supervised loss for the discriminative classifier. Our total training loss is then formulated as:

$$L=L_{x}(\theta_{d})+L_{u}(\theta_{d})+L_{x}(\theta_{n})+\lambda L_{u}(\theta_{n}),\tag{10}$$

where $\lambda$ is a hyper-parameter. Note that the gradients of $L_{u}(\theta_{n})$ and $L_{x}(\theta_{n})$ are only back-propagated to the NFC (i.e., $\theta_{n}$) rather than the CNN backbone, because we discard the auxiliary Normalizing Flow Classifier (NFC) during inference. In this case, these two loss terms contribute to feature learning by influencing the pseudo-labels. Concretely, they influence the learning of the NFC, which impacts the weight $\tau$ of the pseudo-labels. $\tau$ in (4) can adjust the gradient of $L_{u}(\theta_{d})$ to the CNN for feature learning and contribute to better performance. We found that this design worked the best (see Table 3). For inference, we only use the discriminative classifier and discard the NFC.
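To summarize Section 3.2.2 in code, the sketch below evaluates the NUM loss (7) via the change-of-variables formula (8). It is a sketch under assumptions of ours: a `flow` callable returning $g = T^{-1}(z)$ together with the accumulated log-determinant (the usual RealNVP interface), diagonal covariances, uniform class priors, and features detached in line with the stop-gradient design above.

```python
import math
import torch

def num_loss(z, flow, mu, log_var):
    """Eq. (7): negative log-likelihood of unlabeled features under the
    flow-transformed GMM, via the change-of-variables formula (8)."""
    g, log_det = flow(z.detach())            # g = T^{-1}(z), log|det dT^{-1}/dz|
    diff = g.unsqueeze(1) - mu.unsqueeze(0)  # (B, C, D)
    log_gauss = -0.5 * ((diff.pow(2) / log_var.exp()).sum(-1)
                        + log_var.sum(-1)
                        + g.size(1) * math.log(2 * math.pi))
    # log p(z) = logsumexp_c [log N(g | mu_c, Sigma_c) + log p(y=c)] + log|det ...|
    log_mix = torch.logsumexp(log_gauss - math.log(mu.size(0)), dim=1)
    return -(log_mix + log_det).mean()       # averaged over the batch

# Identity flow (zero log-det) as a stand-in for the RealNVP:
flow = lambda z: (z, torch.zeros(z.size(0)))
print(num_loss(torch.randn(4, 16), flow, torch.randn(10, 16), torch.zeros(10, 16)))
```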
## 4 Experiments

We conduct extensive experiments on CIFAR-10, CIFAR-100, STL-10 and Mini-ImageNet to demonstrate the effectiveness of our NorMatch.

## 4.1 Experimental Setting

Dataset and Protocols. (1) *CIFAR-10* (Krizhevsky et al., 2009) has 10 classes, each with 5,000 images of size 32×32 for training and 1,000 images for testing, so there are 60,000 images in total. Following (Sohn et al., 2020), we evaluate our methods in the settings of training with 4, 25, and 400 labels per class, respectively. (2) *CIFAR-100* (Krizhevsky et al., 2009) has the same image size as CIFAR-10, but comprises 100 classes. Each class includes 500 images for training and 100 for testing. We also follow (Sohn et al., 2020) to report the results of our models trained on 4, 25, and 100 labels per class, respectively. (3) *STL-10* (Coates et al., 2011) consists of 96×96 images of 10 classes, with 500 training and 800 test images per class. We train our model on 1000 labels, with 100 for each class, following (Sohn et al., 2020). (4) *Mini-ImageNet* is a subset of ImageNet (Russakovsky et al., 2015), which includes 84×84 images from 100 classes, with 600 images per class. We adopt the training and testing split from (Iscen et al., 2019), then evaluate NorMatch in the setting of 40 labels per class.

Implementation Details. The Softmax classifier is implemented as a fully connected layer followed by a Softmax activation function. The backbone CNN (cf. the blue block in Figure 2) is selected as follows. We follow (Sohn et al., 2020) to adopt Wide ResNet-28-2 for CIFAR-10 and Wide ResNet-28-8 (Zagoruyko & Komodakis, 2016) for CIFAR-100. On STL-10 and Mini-ImageNet, we use a ResNet-18 (He et al., 2016) as the backbone CNN, following (Li et al., 2021) and (Nassar et al., 2021), respectively. The other training settings for all these datasets are the same (unless otherwise specified). Specifically, we optimize the model using Stochastic Gradient Descent (SGD) with Nesterov momentum (Sutskever et al., 2013). The initial learning rate is 0.03 and then decreases according to a cosine learning rate decay (Loshchilov & Hutter, 2016). The batch size $B$ is 64 and the total number of training iterations is $2^{20}$ (1024 epochs, each with 1024 iterations, except on Mini-ImageNet, which is trained for 600 epochs). We follow the latest works (Berthelot et al., 2019a; Li et al., 2021) to apply distribution alignment to $\hat{y}_{i}^{u}$ in (4). We do not apply sharpening or one-hot conversion to $\hat{y}_{i}^{u}$ (except on STL-10, where a one-hot version is used). We use the exponential moving average of model parameters to report the final performance, as most SSL methods (Berthelot et al., 2019a; Sohn et al., 2020; Li et al., 2021) do. For the settings specific to our NorMatch, we set the default value of $\lambda$ in (10) to 1e-6 for all the datasets. The NFC is a RealNVP (Papamakarios et al., 2017) (with 6 coupling layers) followed by a Gaussian Mixture Model (GMM) prior (Izmailov et al., 2020). As such, $\theta_{n}$ in (10) includes three parts: a) the weights of the GMM, initialized as 1 for each class, b) the $\mu$ (initialized as 0) and $\Sigma$ (initialized as 1) of the GMM, and c) the randomly initialized coupling layers of the Normalizing Flow Classifier (NFC). It is trained with the AdamW (Loshchilov & Hutter, 2017) optimizer using an initial learning rate of 0.001 with a cosine decay. Our implementation is based on PyTorch (Paszke et al., 2019) and our code is available at https://github.com/Zhongying-Deng/NorMatch.
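A sketch of the optimizer setup described above; the module names are placeholders, `CosineAnnealingLR` stands in for the cosine decay, and the momentum and weight-decay values are assumed FixMatch-style defaults rather than figures stated in this paper.

```python
import torch
import torch.nn as nn

backbone_and_dc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
nfc = nn.Linear(128, 10)                                                   # stand-in NFC
total_steps = 2 ** 20

opt_d = torch.optim.SGD(backbone_and_dc.parameters(), lr=0.03,
                        momentum=0.9, nesterov=True, weight_decay=5e-4)    # assumed defaults
sched_d = torch.optim.lr_scheduler.CosineAnnealingLR(opt_d, T_max=total_steps)

opt_n = torch.optim.AdamW(nfc.parameters(), lr=1e-3)                       # NFC optimizer
sched_n = torch.optim.lr_scheduler.CosineAnnealingLR(opt_n, T_max=total_steps)
```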
Table 1: Ablation study on CIFAR-10 with 40 labels. NCUE and NUM are proposed in Sections 3.2.1 and 3.2.2, respectively. The NCUE variant sets the weight to 0 when the NFC's predictions differ from the discriminative classifier's, i.e., it simply discards all the low-confidence pseudo-labels. The best result is highlighted in bold.

| Methods | Accuracy |
|---|---|
| Baseline (FixMatch (Sohn et al., 2020)) | 87.77 |
| Baseline + NCUE | 92.78 |
| Baseline + NCUE variant | 91.55 |
| NorMatch (Baseline + NCUE + NUM) | **93.41** |

![8_image_1.png](8_image_1.png)

Figure 3: The amount of low-uncertainty (high-confidence) pseudo-labels obtained by NCUE (the blue curve) and threshold-based FixMatch (the green curve, with the threshold set to 0.95 as in (Sohn et al., 2020)), respectively. The red curve denotes the number of correct pseudo-labels measured using ground-truth labels. The x-axis represents the training epoch while the y-axis denotes the percentage (%) of total samples.

![8_image_0.png](8_image_0.png)

Figure 4: Visualization of 1) the accuracy of high/low-confidence predictions (the blue and red curves) and the weight distributions (the dark curve) of our NorMatch; 2) the accuracy of predictions from the threshold-based FixMatch, i.e., the green curve; 3) the uncertainties of the discriminative classifier and the NFC (dashed gray and brown curves, respectively) of our NorMatch.

## 4.2 Delving Into NorMatch Performance

In this section, we present a comprehensive analysis on CIFAR-10 with 40 labels to better understand each module in our method. Our baseline model is the vanilla FixMatch with a threshold and one-hot pseudo-labels. For this analysis, we only run 300 epochs to save time.

Effectiveness of NCUE. Table 1 shows that the Normalizing flow for Consensus-based Uncertainty Estimation (NCUE, proposed in Section 3.2.1) significantly improves the FixMatch baseline by 5.01%. Note that FixMatch uses a threshold-based uncertainty estimation, so the superiority of NCUE verifies that using the consensus between the NFC and the discriminative classifier can be better than using a threshold to estimate the uncertainty of pseudo-labels. Furthermore, to investigate whether we should simply discard all the low-confidence pseudo-labels, we evaluate an NCUE variant which sets the weights of low-confidence samples to 0 (rather than $\min(p_{d},p_{n})$ as in (3)). This variant (the 3rd row) decreases the performance by 1.23%. The degradation implies that simply ignoring all the low-confidence samples can be sub-optimal, as they can be hard samples that contribute to better performance.

Further analysis on NCUE. To better understand how our NCUE works, we provide the following visualizations. 1) Figure 3 plots the amount of low-uncertainty (or high-confidence) pseudo-labels. The results are obtained by running the proposed training scheme, and then using the generated pseudo-labels to compare NCUE and thresholding. We find that the amount of high-confidence pseudo-labels from NCUE (the blue curve) is very similar to that of the correct pseudo-labels. Thus, our NCUE can adaptively choose a proper amount of high-confidence pseudo-labels for training. In contrast, a fixed threshold of 0.95 (the green curve) ignores too many samples that have correct pseudo-labels. This comparison explains why our NCUE works better than a fixed threshold, which is widely used in FixMatch-based methods (Nassar et al., 2021; Hu et al., 2021). 2) Figure 4 depicts the accuracy of low-uncertainty (or high-confidence) pseudo-labels obtained by NCUE (the blue line) and a fixed threshold (the green line, denoted as 'Acc. Threshold'), respectively.
We find that the pseudo-label accuracy of these two is similar (∼93%), but NCUE yields a larger absolute number of correct pseudo-labels as it has a much higher recall rate (i.e., more high-confidence pseudo-labels, as in Figure 3: about 92% vs. 85%). That is, for NCUE, 50K training images with 92% high-confidence pseudo-labels, of which ∼93% are accurate, give in total 50K × 92% × 93% ≈ 42.8K accurate pseudo-labels. In contrast, a fixed threshold yields only 50K × 85% × 93% ≈ 39.5K accurate pseudo-labels, thus achieving inferior performance. 3) Figure 4 also provides the percentage of correct predictions within high- and low-confidence samples (the blue and red curves, respectively), as well as the weight distributions of low-confidence samples (the dark line). These curves show how the consensus re-weighting technique in NCUE works. We observe that the high-confidence samples have high accuracy, which is essential for good performance; the low-confidence samples have low accuracy and small weights. Small weights can alleviate the issue of over-confidence or confirmation bias and, meanwhile, make full use of low-confidence samples for better performance. The advantage of small weights is also verified in Table 1, where the small-weight-based NCUE achieves 92.78%, outperforming by 1.23% the NCUE variant which simply ignores these low-confidence samples. 4) Finally, Figure 4 visualizes the uncertainty of the discriminative classifier and the NFC (dashed gray and brown lines). We measure the uncertainty of these classifiers by using $1 - p_{all}$, with $p_{all}$ being the average predicted probability of a classifier over all the samples. It can be seen that the discriminative classifier in NorMatch can be more over-confident (lower uncertainty) compared to the NFC, and thus may lead to confirmation bias; the NFC is less over-confident (i.e., has higher uncertainty), which alleviates confirmation bias.

Importance of NUM. In Table 1, we can also see that the Normalizing flow for Unsupervised Modeling (NUM, proposed in Section 3.2.2) brings a performance gain over FixMatch + NCUE. With NUM, the normalizing flow is exposed to unlabeled data, in comparison to the case without NUM, where the NFC is trained using only the labeled data. In particular, the NFC is based on the calculation of the conditional probability $p_{n}(y|x)$, hence having the unlabeled data for training potentially enforces better prediction of the labels.

NFC vs. an auxiliary discriminative classifier. We argue that the Normalizing Flow Classifier (NFC) is more diverse and complementary to the main discriminative classifier. Here, we replace the NFC with a discriminative classifier to justify our argument. For a fair comparison, the replacement classifier has a similar number of parameters to the NFC (also 6 layers, each comprising a fully-connected layer followed by ReLU and batch normalization (Ioffe & Szegedy, 2015)). We show their parameters and performance in Table 2. Our NFC is better than using another discriminative classifier by about 1%, demonstrating that the NFC is more complementary. Another notable observation is that the NFC is lightweight, with only 0.08M parameters and 0.08M Multiply-ACcumulate operations (MACs). This shows its efficiency.

DC + NFC vs. two NFCs. To further support the argument that the discriminative classifier (DC) and the Normalizing Flow Classifier (NFC) are the better options for diversity, we also replace the main discriminative classifier with an NFC. This design choice leads to two NFCs.
Note that in this case, the gradient of one of these two NFCs needs to back-propagate to the backbone CNN so that the backbone CNN can be updated. From the last row of Table 2, we observe that two NFCs cause a large performance drop of 12.60% when compared with our default setting (DC + NFC). The drop is probably because the diversity of two NFCs is not as large as that of DC + NFC, whereas in NorMatch the diversity of the classifiers plays a vital role.

Table 2: Evaluation of different classifier combinations: Normalizing Flow Classifier (NFC) vs. another Discriminative Classifier (DC). MACs: Multiply-ACcumulate operations. #Param and MACs denote the additional parameters and MACs that the extra classifier introduces.

| Methods | #Param | MACs | Accuracy |
|---|---|---|---|
| DC + NFC (default setting) | 0.08M | 0.08M | **93.41** |
| DC + another DC | 0.09M | 0.09M | 92.42 |
| NFC + NFC | 0.08M | 0.08M | 80.81 |

Table 3: Evaluation of stopping the gradient of the Normalizing Flow Classifier (NFC) to the CNN backbone. $r_{hc}$ denotes the percentage of high-confidence samples, i.e., those whose predicted probability is >0.95.

| Stop gradient | $r_{hc}$ | Accuracy |
|---|---|---|
| ✓ | 82.31 | **93.41** |
| ✗ | 32.82 | 29.27 |

Necessity of stopping the gradient of the NFC to the backbone CNN. As stated in Section 3.3, during training, the gradients of $L_{u}(\theta_{n})$ and $L_{x}(\theta_{n})$ are only back-propagated to the NFC ($\theta_{n}$) rather than the CNN backbone (see also the stop-gradient symbol in Figure 2). We thus evaluate this design choice in Table 3. We can see that if we allow the gradient to be back-propagated to the CNN backbone, and hence play a role in its parameter updates, the training almost fails (with a poor accuracy of 29.27%). This is probably because the gradient from the NFC may harm the discriminative feature learning supervised by the main classifier. As a result, the features from the backbone CNN can hardly fit these two fundamentally different classifiers simultaneously. This can be inferred from the decreased amount of high-confidence samples, e.g., from 82.31% to 32.82%. The sharp decrease occurs because the features are not discriminative enough to achieve high confidence, further causing poor performance.
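A toy demonstration of the gradient-stop design (all names illustrative): detaching the features before the NFC branch leaves the backbone untouched by the NFC losses.

```python
import torch
import torch.nn as nn

backbone = nn.Linear(8, 4)                       # stand-in CNN backbone
nfc_branch = nn.Linear(4, 3)                     # stand-in NFC branch

z = backbone(torch.randn(2, 8))
loss_nfc = nfc_branch(z.detach()).pow(2).mean()  # detach: stop gradient to backbone
loss_nfc.backward()
print(backbone.weight.grad)                      # None: backbone unaffected by NFC loss
```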
Pseudo-labels or no pseudo-labels to train the NFC on unlabeled data? We further investigate whether the performance can be improved by training the Normalizing Flow Classifier (NFC) with pseudo-labels on unlabeled data. Table 4 shows that pseudo-label-based supervised training for the NFC decreases the performance (the first two rows). This is probably because the pseudo-labels can contain noise, which makes the NFC less effective. In addition, since both the NFC and the main discriminative classifier are trained with the same set of pseudo-labels, they may suffer from the same set of noise and are thus no longer complementary to each other. As such, noisy pseudo-labels cannot be correctly identified based on the consensus of these two classifiers. This can further lead to confirmation bias.

Do more NFCs bring better performance? It is natural to ask whether one more Normalizing Flow Classifier (NFC) as an auxiliary classifier can further help. To answer this question, we conduct an experiment introducing an extra NFC into our NorMatch, leading to two NFCs with the same architecture. The Normalizing flow for Consensus-based Uncertainty Estimation (NCUE) is then enforced on these two NFCs and the discriminative classifier in a similar way to equation 3: if and only if these three classifiers predict the same pseudo-label for an unlabeled sample, the weight of such a sample is 1; otherwise, the weight is the minimal probability of these three predictions. We show the result in the last row of Table 4. We observe a performance drop with one more NFC. This means that using a single NFC can already work well for uncertainty estimation because it is sufficiently diverse and complementary to the main discriminative classifier. With one more NFC, i.e., two NFCs, only a small portion of the pseudo-labels can achieve consensus among these three classifiers. As a result, a large number of samples are discarded even though their pseudo-labels can be true. Too many samples being discarded causes a performance drop when compared with using a single NFC.

Table 4: Evaluations of (1) using pseudo-labels to train the Normalizing Flow Classifier (NFC) on unlabeled data, and (2) one more NFC as the auxiliary classifier.

| Methods | Accuracy |
|---|---|
| Default setting | **93.41** |
| Use pseudo-labels to train the NFC | 92.72 |
| One more NFC as auxiliary classifier | 92.67 |

Sensitivity of the model's performance to the hyper-parameter. The loss weight $\lambda$ for $L_{u}(\theta_{n})$ is the only hyper-parameter in our NorMatch, as in (10). We evaluate the sensitivity of the classification accuracy (%) to $\lambda$ in Figure 5. Note that the likelihood-based loss $L_{u}(\theta_{n})$ can be much larger than the cross-entropy-based losses in (10), so we tune $\lambda$ from a very small value, e.g., 1e-7.
**Sensitivity of the model's performance to the hyper-parameter.** The loss weight λ for Lu(θn) is the only hyper-parameter in our NorMatch, as in (10). We evaluate the sensitivity of the classification accuracy (%) to λ in Figure 5. Note that the likelihood-based loss Lu(θn) can be much larger than the cross-entropy-based losses in (10), so we tune λ starting from a very small value, e.g., 1e-7. We can see that λ ≤ 1e-6 improves the performance over FixMatch + NCUE (the abbreviation of "Normalizing flow for Consensus-based Uncertainty Estimation" proposed in Section 3.2.1), i.e., over the 92.78% obtained with λ = 0, as reported in the 2nd row of Table 1. Note that the performance of λ = 0 is not drawn in the log-scale plot. In contrast, a large λ (> 1e-6) results in NUM (i.e., "Normalizing flow for Unsupervised Modeling" proposed in Section 3.2.2) dominating the training process, which may harm the supervised discriminative feature learning and decrease the performance. Hence, we recommend setting λ so that λLu(θn) is smaller than the supervised loss; this constrains λ from being very large.

![11_image_0.png](11_image_0.png)

Figure 5: Sensitivity of the model's performance to λ.

**Evaluation on distribution alignment.** We follow the latest works (Berthelot et al., 2019a; Li et al., 2021) and apply distribution alignment to $\hat{y}_i^u$ (neither the sharpened nor the one-hot version) in (4). We evaluate this strategy in Table 5. We observe that distribution alignment brings a 0.3% improvement over the one-hot version (the first two rows). When fully trained for 1024 epochs, our NorMatch with distribution alignment further obtains 94.70%. We thus use it as our final model to compare with the state-of-the-art methods in Section 4.3.

Table 5: Evaluation on distribution alignment (DA).

| Methods | Epochs | Accuracy |
|---|---|---|
| NorMatch w/o DA | 300 | 93.41 |
| NorMatch w/ DA | 300 | 93.71 |
| NorMatch w/ DA | 1024 | 94.70 |

## 4.3 Comparison With The State Of The Art

Table 6 reports the comparison of our NorMatch to the other state-of-the-art methods on CIFAR-10, CIFAR-100 and STL-10. We observe that NorMatch achieves the best performance on almost all the label splits, favorably outperforming the baseline, FixMatch (Sohn et al., 2020), and state-of-the-art methods such as Mean Teacher (Tarvainen & Valpola, 2017), MixMatch (Berthelot et al., 2019b), and CoMatch (Li et al., 2021). Below we analyze the results on each dataset in more detail.

Table 6: Classification accuracy (%) on CIFAR-10, CIFAR-100 and STL-10. Best results are in bold.

| Methods | CIFAR-10 (40 labels) | CIFAR-10 (250 labels) | CIFAR-10 (4000 labels) | CIFAR-100 (400 labels) | CIFAR-100 (2500 labels) | CIFAR-100 (10000 labels) | STL-10 (1000 labels) |
|---|---|---|---|---|---|---|---|
| Π-Model | - | 45.74±3.97 | 58.99±0.38 | - | 42.75±0.48 | 62.12±0.11 | - |
| Mean Teacher | - | 67.68±2.30 | 90.81±0.19 | - | 46.09±0.57 | 64.17±0.24 | - |
| MixMatch | 52.46±11.50 | 88.95±0.86 | 93.58±0.10 | 32.39±1.32 | 60.06±0.37 | 71.69±0.33 | 38.02±8.29 |
| ReMixMatch | 80.90±9.64 | 94.56±0.05 | 95.28±0.13 | 55.72±2.06 | 72.57±0.31 | 76.97±0.56 | - |
| FlowGMM-cons | - | - | 80.9 | - | - | - | - |
| FixMatch | 86.19±3.37 | 94.93±0.65 | 95.74±0.05 | 51.15±1.75 | 71.71±0.11 | 77.40±0.12 | 65.38±0.42 |
| CoMatch | 93.09±1.39 | 95.09±0.33 | - | - | - | - | 79.80±0.38 |
| SemCo | - | 94.88±0.27 | **96.20±0.08** | - | 68.07±0.01 | 75.55±0.12 | - |
| Dash | 86.78±3.75 | **95.44±0.13** | 95.92±0.06 | 55.24±0.96 | 72.82±0.21 | 78.03±0.14 | - |
| NorMatch (Ours) | **94.70±0.16** | 95.06±0.18 | 95.89±0.12 | **59.39±0.39** | **73.41±0.29** | **78.55±0.18** | **81.38±0.12** |

Table 7: Classification accuracy (%) on Mini-ImageNet.

| Methods | 4000 labels |
|---|---|
| Mean Teacher (Tarvainen & Valpola, 2017) | 27.49 |
| Label Propagation (Iscen et al., 2019) | 29.71 |
| PLCB (Arazo et al., 2020) | 43.51 |
| MixMatch (Berthelot et al., 2019b) | 50.21 |
| SimPLE (Hu et al., 2021) | 49.39 |
| FixMatch (Sohn et al., 2020) | 40.27 |
| NorMatch (Ours) | 48.36 |

**Results on CIFAR-10 and CIFAR-100.** NorMatch surpasses FlowGMM (Izmailov et al., 2020), which uses a single FlowGMM (without the discriminative classifier) to enforce a consistency regularization, by about 15% on CIFAR-10 in the setting of 4000 labels. The better performance demonstrates the effectiveness of introducing the Normalizing Flow Classifier (NFC) to help estimate the uncertainty of pseudo-labels for the discriminative classifier. In addition, our NorMatch is superior to the latest methods, CoMatch (Li et al., 2021) and SemCo (Nassar et al., 2021), e.g., by 1.61% over CoMatch on CIFAR-10 in the 40 labels setting and by 5.34% over SemCo with 2500 labels of CIFAR-100.
The superiority of NorMatch shows that a fundamentally different but complementary NFC for uncertainty estimation is better than an extra discriminative classifier for co-training (SemCo) or an additional projection head for self-training (CoMatch). Compared to Dash (Xu et al., 2021), which employs an adaptive threshold, our NorMatch is threshold-free and achieves the best performance with 40 labels on CIFAR-10 and on CIFAR-100 for all label counts. This supports our argument that the threshold-free NorMatch is simpler yet more effective. NorMatch does not outperform SemCo or Dash on CIFAR-10 in the settings of 250 or 4000 labels. However, we also observe that with these two label counts we reach near fully supervised performance, hence the performance saturates as more labels are added. Notably, in the ideal setting where all images are labelled for training, our fully supervised baseline obtains an accuracy of 95.44%, which is still lower than that of SemCo or Dash. Nevertheless, NorMatch still obtains a performance comparable to SemCo or Dash for these two label counts.

**Results on STL-10.** NorMatch outperforms all the other competitors by at least 1.58%. Notably, it beats the baseline method, FixMatch, by 16%. This significant improvement strongly supports the effectiveness of our NorMatch. Thanks to NCUE and NUM (see Sections 3.2.1 and 3.2.2), NorMatch also surpasses CoMatch considerably. Moreover, our NorMatch is much simpler than CoMatch because no threshold needs to be tuned and no graph needs to be constructed for contrastive learning.

**Results on Mini-ImageNet.** We further evaluate NorMatch on the challenging Mini-ImageNet, and the results are displayed in Table 7. Our NorMatch achieves a significant improvement over the classical Mean Teacher (Tarvainen & Valpola, 2017) and Label Propagation (Iscen et al., 2019) methods. It is also clearly better than the FixMatch baseline (Sohn et al., 2020), by 8.09%, owing to the Normalizing Flow Classifier (NFC) and its better uncertainty estimation for pseudo-labels. The better performance illustrates the scalability of our method to this challenging dataset. Furthermore, NorMatch is simpler than the other state-of-the-art methods, with fewer hyper-parameters and, especially, without the sensitive threshold. Our NorMatch is on par with PLCB (Arazo et al., 2020) and SimPLE (Hu et al., 2021) but worse than MixMatch (Berthelot et al., 2019b), probably because our baseline method, FixMatch, is much worse than MixMatch on this dataset; even the large improvement obtained by NorMatch (8.09% over the FixMatch baseline) cannot close the gap to MixMatch.

## 5 Limitations And Discussions

Though effective, the NFC in our NorMatch inevitably introduces more computational cost, i.e., 0.08M parameters (see Table 2) and 0.08M Multiply ACcumulate operations (MACs). In addition, the training can fail if we do not stop the gradient of the NFC to the backbone CNN, as shown in Table 3. Therefore, we usually need elaborate designs, e.g., the gradient stop strategy, to ensure the effectiveness of the auxiliary NFC. Furthermore, the success of our NorMatch largely relies on the baseline, FixMatch, so it can be inferior to other state-of-the-art methods when FixMatch performs poorly, e.g., on the challenging Mini-ImageNet dataset as shown in Table 7. On the other hand, as a pseudo-labeling-based method, our NorMatch can hardly boost the performance by a large margin when FixMatch already achieves satisfying results, such as with 250 and 4000 labels on CIFAR-10 (see Table 6).
This is because the noise in pseudo-labels cannot be thoroughly eliminated, which may hinder our NorMatch from achieving a near-saturated classification accuracy. Despite the above limitations, our NorMatch improves the performance of the baseline FixMatch considerably in most circumstances. Notably, when FixMatch achieves comparable results to the other state-of-the-art methods, our NorMatch can outperform the competitors favorably owing to the significant improvement over FixMatch.

## 6 Conclusion

In this paper, we propose a novel SSL method called NorMatch. NorMatch leverages a Normalizing Flow Classifier (NFC) to help estimate pseudo-label uncertainty for training a discriminative classifier. This is achieved by applying a Normalizing flow for Consensus-based Uncertainty Estimation (NCUE) scheme. NCUE evaluates the consensus of the predictions from the NFC and the discriminative classifier, then highlights consistently predicted pseudo-labels and discounts low-confidence ones that cause disagreement. Moreover, NorMatch exploits Normalizing flow for Unsupervised Modeling (NUM), which models the distribution of unlabeled data for better performance. Extensive experiments on CIFAR-10, CIFAR-100, STL-10, and Mini-ImageNet demonstrate that NorMatch achieves state-of-the-art performance.

## Acknowledgements

ZD, AIAR and CBS acknowledge support from the EPSRC grant EP/T003553/1. AIAR acknowledges support from CMIH and CCIMI, University of Cambridge. CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute.

## References

Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In *2020 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8. IEEE, 2020.

Angelica I Aviles-Rivero, Nicolas Papadakis, Ruoteng Li, Philip Sellars, Samar M Alsaleh, Robby T Tan, and Carola-Bibiane Schönlieb. Energy models for better pseudo-labels: Improving semi-supervised classification with the 1-laplacian graph energy. *arXiv preprint arXiv:1906.08635*, 2019.

David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. *arXiv preprint arXiv:1911.09785*, 2019a.

David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. *Advances in neural information processing systems*, 32, 2019b.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. *IEEE transactions on pattern analysis and machine intelligence*, 40(4):834–848, 2017.

Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223.
JMLR Workshop and Conference Proceedings, 2011. Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 702–703, 2020. Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. *arXiv preprint* arXiv:1605.08803, 2016. Ross Girshick. Fast r-cnn. In *Proceedings of the IEEE international conference on computer vision*, pp. 1440–1448, 2015. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580–587, 2014. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, 63(11): 139–144, 2020. Lan-Zhe Guo and Yu-Feng Li. Class-imbalanced semi-supervised learning with adaptive thresholding. In International Conference on Machine Learning, pp. 8082–8094. PMLR, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Zijian Hu, Zhengyu Yang, Xuefeng Hu, and Ram Nevatia. Simple: similar pseudo label exploitation for semisupervised classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 15099–15108, 2021. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. pmlr, 2015. Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5070–5079, 2019. Pavel Izmailov, Polina Kirichenko, Marc Finzi, and Andrew Gordon Wilson. Semi-supervised learning with normalizing flows. In *International Conference on Machine Learning*, pp. 4615–4630. PMLR, 2020. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. *IEEE transactions on pattern analysis and machine intelligence*, 43(11):3964–3979, 2020. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25:1097–1105, 2012. Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. *arXiv preprint* arXiv:1610.02242, 2016. Junnan Li, Caiming Xiong, and Steven CH Hoi. Comatch: Semi-supervised learning with contrastive graph regularization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9475– 9484, 2021. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 
3431–3440, 2015. Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. *arXiv preprint* arXiv:1608.03983, 2016. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. Prem Melville and Raymond J Mooney. Constructing diverse classifier ensembles using artificial training examples. In *Ijcai*, volume 3, pp. 505–510, 2003. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018. Islam Nassar, Samitha Herath, Ehsan Abbasnejad, Wray Buntine, and Gholamreza Haffari. All labels are not created equal: Enhancing semi-supervision via label grouping and co-training. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7241–7250, 2021. George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. Advances in neural information processing systems, 30, 2017. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in neural information processing systems*, 32, 2019. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. Philip Sellars, Angelica I Aviles-Rivero, and Carola-Bibiane Schönlieb. Laplacenet: A hybrid graph-energy neural network for deep semisupervised classification. *IEEE Transactions on Neural Networks and Learning* Systems, 2022. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. *Advances in neural information processing systems*, 33:596–608, 2020. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1): 1929–1958, 2014. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In *International conference on machine learning*, pp. 1139–1147. PMLR, 2013. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In *Proceedings of the IEEE* conference on computer vision and pattern recognition, pp. 1–9, 2015. Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. *Advances in neural information processing systems*, 30, 2017. Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, and Rong Jin. Dash: Semisupervised learning with dynamic thresholding. In *International Conference on Machine Learning*, pp. 11525–11536. PMLR, 2021. Sergey Zagoruyko and Nikos Komodakis. 
Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016. Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 14471–14481, 2022.
Review 1:

Summary: This paper presents a method called NorMatch for semi-supervised learning, built on the FixMatch method. NorMatch introduces a new uncertainty estimation scheme based on what the paper calls the Normalizing Flow Classifier. This scheme acts as an auxiliary classifier to enforce highly certain pseudo-labels, which improves the pseudo-label quality of the discriminative classifier. NorMatch also introduces unsupervised learning by maximizing the log-likelihood of unlabeled data. The experiments show that this can further improve accuracy in SSL tasks. Overall, the main contribution of this paper is modeling a Gaussian probability model in the latent feature space to enhance uncertainty estimation and representation learning.

Strengths and Weaknesses:

## Strengths
- Two technical parts are presented in this paper: a new pseudo-label purification method applied to FixMatch, and the unsupervised learning loss of NUM. Experiments show that these two techniques help to improve the baseline method of FixMatch.
- The presentation is well organized and easy to follow.

## Weaknesses
- My first question is how normalizing flows act in this manner. A normalizing flow is a generative model that transfers a random variable to different representation spaces in order to facilitate the computation of probability mass. In my view, utilizing a Gaussian probability model in the latent space does not mean the model is a normalizing flow. The names Normalizing Flow Classifier and NUM are therefore not presented properly in my view. Could you explain why they are so called?
- Second, what are the benefits over other pseudo-label enhancement methods? Since there are many works doing so, I think the authors should review them more comprehensively and study the different effects of different pseudo-label purification strategies.

Requested Changes:
- The normalizing flow should be introduced and linked to the method in a rigorous, formal manner.
- Since there are many methods for pseudo-label purification, the paper should review them more comprehensively and compare them from analytical perspectives.

Broader Impact Concerns: No

==================================================

Review 2:

Summary: This paper proposes a new uncertainty estimation scheme using normalizing flows, which aims at improving the pseudo-label quality for semi-supervised learning.

Strengths and Weaknesses:

Strengths:
1. The Normalizing Flow Classifier (NFC) can use both the labeled and unlabeled data to learn a better decision boundary.
2. Experiments on four datasets, including the ablation studies and parameter analyses, demonstrate the effectiveness of the proposed method.

Weaknesses:
1. Is it possible to try another two combinations of the standard classifier and the NFC? Namely, what would happen if we used "standard classifier + standard classifier" or "NFC + NFC", compared with the current "standard classifier + NFC" strategy?
2. I encourage the authors to discuss the potential limitations of the proposed NFC to give the readers a more comprehensive understanding of this paradigm.
3. Why are fewer baselines reported on Mini-ImageNet than on CIFAR and STL? These baselines need to be supplied.
4. How much additional computational cost would the NFC bring?
Requested Changes: Please see the weaknesses.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: This paper proposes a new semi-supervised learning method named NorMatch, which leverages an additional classifier, obtained through a normalizing flow, to estimate the uncertainty of pseudo-labels. NorMatch further employs a diversity-promoting term through the normalizing flow as a regularization term. Results on several standard semi-supervised learning datasets validate the effectiveness of NorMatch.

Strengths and Weaknesses:

Strengths
- the proposed method sounds reasonable
- the results are better than previous SSL methods
- analyses such as an ablation study are conducted to verify each component of the proposed method

Weaknesses
- this paper (e.g., the method section and the experiment section) could be further simplified to increase readability

Requested Changes: Generally, this paper meets the standard of TMLR with a new SSL method. As stated before, the authors need to pay more attention to the writing to improve the readability of this paper. (There are many concepts in the method section, and the results are not well displayed in the experiment section.)

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The reviewers found the paper "easy to follow" (Reviewer mhzF), the experiments demonstrate the effectiveness of the proposed method (Reviewer GUVx), and the results are better than previous SSL methods with ablations conducted (Reviewer Br9c). During the rebuttal, the authors made clarifications to the paper to address feedback from the reviewers, and added more explanations, baselines, and a discussion of limitations. In my opinion, the updated draft meets the standard for TMLR acceptance: it tackles an important problem that is of interest to the community, presents insights and a method that works better than previous SSL methods on some datasets, and, as far as I can tell, is technically sound and provides convincing evidence for the claims made. I encourage the authors to add to the limitations section that some other methods outperform NorMatch on some datasets, and perhaps any insights they have on when NorMatch is expected to have an advantage over those methods.

==================================================
# FairGrad: Fairness Aware Gradient Descent

Gaurav Maheshwari gaurav.maheshwari@inria.fr
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France

Michaël Perrot michael.perrot@inria.fr
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France

Reviewed on OpenReview: *https://openreview.net/forum?id=0f8tU3QwWD*

## Abstract

We address the problem of group fairness in classification, where the objective is to learn models that do not unjustly discriminate against subgroups of the population. Most existing approaches are limited to simple binary tasks or involve training mechanisms that are difficult to implement, which reduces their practical applicability. In this paper, we propose FairGrad, a method to enforce fairness based on a re-weighting scheme that iteratively learns group specific weights based on whether the groups are advantaged or not. FairGrad is easy to implement, accommodates various standard fairness definitions, and comes with minimal overhead. Furthermore, we show that it is competitive with standard baselines over various datasets, including ones used in natural language processing and computer vision. FairGrad is available as a PyPI package at https://pypi.org/project/fairgrad.

## 1 Introduction

Fair Machine Learning addresses the problem of learning models that are free of any discriminatory behavior against a subset of the population. For instance, consider a company developing a model to predict whether a person would be a suitable hire based on their biography. A possible source of discrimination here can be if, in the data available to the company, individuals that are part of a subgroup formed based on their gender, ethnicity, or other sensitive attributes are consistently labelled as unsuitable hires, regardless of their true competency, due to historical bias. This kind of discrimination can be measured by a fairness notion called Demographic Parity (Calders et al., 2009). If the data is unbiased, another source of discrimination may be the model itself, which consistently mislabels the competent individuals of a subgroup as unsuitable hires. This can be measured by a fairness notion called Equality of Opportunity (Hardt et al., 2016). Several such fairness notions have been proposed in the literature as different problems call for different measures. They can be divided into two major paradigms, namely (i) Individual Fairness (Dwork et al., 2012; Kusner et al., 2017), where the idea is to treat similar individuals similarly regardless of the sensitive group they belong to, and (ii) Group Fairness (Calders et al., 2009; Hardt et al., 2016; Zafar et al., 2017a; Denis et al., 2021), where the underlying idea is that no sensitive group should be disadvantaged compared to the overall reference population. In this paper, we focus on group fairness in the context of classification, where we only assume access to the sensitive attributes during the training phase.

The existing approaches for group fairness in Machine Learning may be divided into three main paradigms. First, pre-processing methods aim at modifying a dataset to remove any intrinsic unfairness that may exist in the examples. The underlying idea is that a model learned on this modified data is more likely to be fair (Dwork et al., 2012; Kamiran & Calders, 2012; Zemel et al., 2013; Feldman et al., 2015; Calmon et al., 2017). Then, post-processing approaches modify the predictions of an accurate but unfair model so that it becomes fair (Kamiran et al., 2010; Hardt et al., 2016; Woodworth et al., 2017; Iosifidis et al., 2019; Chzhen et al., 2019). Finally, in-processing methods aim at learning a model that is fair and accurate in a single step (Calders & Verwer, 2010; Kamishima et al., 2012; Goh et al., 2016; Zafar et al., 2017a;b; Donini et al.,
Then, post-processing approaches modify the predictions of an accurate but unfair model so that it becomes fair (Kamiran et al., 2010; Hardt et al., 2016; Woodworth et al., 2017; Iosifidis et al., 2019; Chzhen et al., 2019). Finally, in-processing methods aim at learning a model that is fair and accurate in a single step (Calders & Verwer, 2010; Kamishima et al., 2012; Goh et al., 2016; Zafar et al., 2017a;b; Donini et al., \# The library is available at https://pypi.org/project/fairgrad. from fairgrad.torch **import** CrossEntropyLoss \# Same as PyTorch's loss with some additional meta data. \# A fairness rate of 0.01 is a good rule of thumb for standardized data. criterion = CrossEntropyLoss(y_train, s_train, fairness_measure, fairness_rate=0.01) \# The dataloader and model are defined and used in the standard way. for x, y, s in data_loader: optimizer.zero_grad() loss = criterion(model(x), y, s) loss.backward() optimizer.step() Figure 1: A standard training loop where the PyTorch's loss is replaced by FairGrad's loss. 2018; Krasanakis et al., 2018; Agarwal et al., 2018; Wu et al., 2019; Cotter et al., 2019; Iosifidis & Ntoutsi, 2019; Jiang & Nachum, 2020; Lohaus et al., 2020; Roh et al., 2020; Ozdayi et al., 2021). In this paper, we propose a new in-processing group fairness approach based on a re-weighting scheme that may also be used as a kind of post-processing approach by fine-tuning existing classifiers. Motivation. In-processing approaches can be further divided into several sub-categories (Caton & Haas, 2020). Common amongst them are methods that cast the fairness task as a constrained optimization problem, and then relax the fairness constraints under consideration to simplify the learning process (Zafar et al., 2017a; Donini et al., 2018; Wu et al., 2019). Indeed, standard fairness notions are usually difficult to handle due to their non-convexity and non-differentiability. Unfortunately, these relaxations may be far from the actual fairness measures, leading to sub-optimal models (Lohaus et al., 2020). Similarly, several approaches address the fairness problem by designing specific algorithms and solvers. This is, for example, done by reducing the optimization procedure to a simpler problem (Agarwal et al., 2018), altering the underlying solver (Cotter et al., 2019), or using adversarial learning (Raff & Sylvester, 2018). However, these approaches are difficult to adapt to existing systems as they require special training procedures or changes in the model. They are also limited in the range of problems to which they can be applied. For example, the work of Agarwal et al. (2018) can only be applied in a binary classification setting, while the work of Ozdayi et al. (2021) is limited to two sensitive groups. Furthermore, they may come with several hyper-parameters that need to be carefully tuned to obtain fair models. For instance, the scaling parameter in adversarial learning (Raff & Sylvester, 2018; Li et al., 2018) or the number of iterations in inner optimization for bi-level optimization based mechanisms (Ozdayi et al., 2021). The complexity of the existing methods might hinder their deployment in practical settings. Hence, there is a need for simpler methods that are straightforward to integrate into existing training loops. Contributions. In this paper, we present FairGrad, a general purpose approach to enforce fairness in empirical risk minimization solved using gradient descent. 
We propose to dynamically update the influence of the examples after each gradient descent update to precisely reflect the fairness level of the models obtained at each iteration and to guide the optimization process in a relevant direction. Hence, the underlying idea is to use lower weights for examples from advantaged groups than for those from disadvantaged groups. Our method is inspired by recent re-weighting approaches that also propose to change the importance of each group while learning a model (Krasanakis et al., 2018; Iosifidis & Ntoutsi, 2019; Jiang & Nachum, 2020; Roh et al., 2020; Ozdayi et al., 2021). We discuss these works in Appendix A. Interestingly, we also find that FairGrad can be seen as solving a kind of constrained optimization problem. In Section 2.2, we expand upon this link and show how FairGrad can be seen as a solution that connects these two kinds of methods.

A key advantage of FairGrad is that it is straightforward to incorporate into standard gradient based solvers that support example re-weighting, like Stochastic Gradient Descent. Hence, we developed a Python library (provided in the supplementary material) where we augmented standard PyTorch losses to accommodate our approach. From a practitioner's point of view, it means that using FairGrad is as simple as replacing an existing PyTorch loss with our custom loss and passing along some meta data, while the rest of the training loop remains identical. This is illustrated in Figure 1. It is interesting to note that FairGrad only brings one extra hyper-parameter, the fairness rate, besides the usual optimization ones (learning rates, batch size, etc.). Moreover, FairGrad incurs minimal computational overhead during training as it relies on objects that are already computed for standard gradient descent, namely the predictions on the current batch and the loss incurred by the model for each example. In particular, the overhead is independent of the number of parameters of the model. Furthermore, as with many in-processing approaches in fairness (Cotter et al., 2019; Roh et al., 2020), FairGrad does not introduce any overhead at test time.

Overall, FairGrad is a lightweight fairness solution that is compatible with various group fairness notions, including exact and approximate fairness, can handle both multiple sensitive groups and multiclass problems, and can fine-tune existing unfair models. Through extensive experiments, we also show that, in addition to its versatility, FairGrad is competitive with several standard baselines in fairness on both standard datasets as well as complex natural language processing and computer vision tasks.

## 2 Problem Setting, Notations, And Related Work

In the remainder of this paper, we assume that we have access to a feature space $\mathcal{X}$, a finite discrete label space $\mathcal{Y}$, and a set $\mathcal{S}$ of values for the sensitive attribute. We further assume that there exists a distribution $D \in \mathcal{D}_{\mathcal{Z}}$, where $\mathcal{D}_{\mathcal{Z}}$ is the set of all distributions over $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \times \mathcal{S}$. Our goal is then to learn an accurate model $h_{\theta} \in \mathcal{H}$, with learnable parameters $\theta \in \mathbb{R}^{d}$, such that $h_{\theta} : \mathcal{X} \to \mathcal{Y}$ is fair with respect to a given fairness definition that depends on the sensitive attribute. In Section 2.1, we formally define the family of fairness measures that are compatible with our approach and provide several examples of popular notions encompassed by our fairness definition.
As usual in machine learning, we will assume that $D$ is unknown and that we only get to observe a finite dataset $\mathcal{T} = \{(x_i, y_i, s_i)\}_{i=1}^{n}$ of $n$ examples drawn i.i.d. from $D$. Let $\mathbb{P}\left(E(X, Y, S)\right)$ represent the probability that an event $E$ happens with respect to $(X, Y, S) \sim D$, while $\widehat{\mathbb{P}}\left(E(x, y, s)\right) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{I}_{E(x_i, y_i, s_i)}$ is an empirical estimate with respect to $\mathcal{T}$, where $\mathbb{I}_P$ is the indicator function, which is 1 when the property $P$ is verified and 0 otherwise. In the remainder of this paper, all our derivations will be considered in the finite sample setting and we will assume that what is measured on our finite sample is sufficiently close to what would be obtained if one had access to the overall distribution. This seems reasonable in light of the previous work on generalization in standard machine learning (Shalev-Shwartz & Ben-David, 2014) and the recent work of Woodworth et al. (2017) or Mangold et al. (2022), which show that the kind of fairness measures we consider in this paper tend to generalize well when the hypothesis space is not too complex, as measured respectively by the VC or the Natarajan dimension (Shalev-Shwartz & Ben-David, 2014). Since these generalization results only rely on a capacity measure of the hypothesis space and are otherwise algorithm agnostic, they are applicable to the models returned by FairGrad when they have finite VC or Natarajan dimensions. This is for example the case for linear models.

## 2.1 Fairness Definition

We assume that the data may be partitioned into $K$ disjoint groups denoted $\mathcal{T}_1, \ldots, \mathcal{T}_k, \ldots, \mathcal{T}_K$ such that $\bigcup_{k=1}^{K} \mathcal{T}_k = \mathcal{T}$ and $\bigcap_{k=1}^{K} \mathcal{T}_k = \emptyset$. These groups highly depend on the fairness notion under consideration. They might correspond to the usual sensitive groups, as is the case for Accuracy Parity (see Example 1), or might be subgroups of the usual sensitive groups, as in Equalized Odds where the subgroups are defined with respect to the true labels (see Example 2 in Appendix B). For each group, we assume that we have access to a function $\widehat{F}_k : \mathcal{D}^n \times \mathcal{H} \to \mathbb{R}$ such that $\widehat{F}_k > 0$ when the group $k$ is advantaged by the given classifier and $\widehat{F}_k < 0$ when the group $k$ is disadvantaged. Furthermore, we assume that the magnitude of $\widehat{F}_k$ represents the degree to which the group is (dis)advantaged. Finally, we assume that each $\widehat{F}_k$ can be rewritten as follows:

$$\widehat{F}_{k}(\mathcal{T},h_{\theta})=C_{k}^{0}+\sum_{k^{\prime}=1}^{K}C_{k}^{k^{\prime}}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k^{\prime}}\right)\tag{1}$$

where the constants $C$ are group specific and independent of $h_{\theta}$. The probabilities $\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k^{\prime}}\right)$ represent, with a slight abuse of notation, the error rates of $h_{\theta}$ over each group $\mathcal{T}_{k^{\prime}}$. Below, we show that Accuracy Parity (Zafar et al., 2017a) respects this definition. In Appendix B, we show that Equality of Opportunity (Hardt et al., 2016), Equalized Odds (Hardt et al., 2016), and Demographic Parity (Calders et al., 2009) also respect this definition. It means that using this generic formulation allows us to simultaneously reason about multiple fairness notions.
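Before turning to concrete examples, the following sketch shows how the fairness levels of Eq. (1) can be evaluated once the group-wise error rates and the notion-specific constants are known. The helper name and the NumPy formulation are ours, for illustration only.

```python
import numpy as np

def fairness_levels(err, C0, C):
    """Evaluate Eq. (1) for all K groups at once.

    err : shape (K,), empirical error rate of h_theta on each group T_k.
    C0  : shape (K,), the constants C_k^0.
    C   : shape (K, K), where C[k, j] holds C_k^j.

    Returns F_hat of shape (K,); F_hat[k] > 0 means group k is advantaged.
    """
    return C0 + C @ err
```

For instance, for Accuracy Parity with group proportions `p` (see the example that follows), one would use `C0 = np.zeros(K)` and `C = np.tile(p, (K, 1)) - np.eye(K)`.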
Example 1 (**Accuracy Parity (AP) (Zafar et al., 2017a)**). A model $h_{\theta}$ is fair for Accuracy Parity when the probability of being correct is independent of the sensitive attribute, that is, $\forall r \in \mathcal{S}$,

$$\widehat{\mathbb{P}}\left(h_{\theta}(x)=y\,|\,s=r\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)=y\right).$$

It means that we need to partition the space into $K = |\mathcal{S}|$ groups and, $\forall r \in \mathcal{S}$, we define $\widehat{F}_{(r)}$, the fairness level of group $(r)$, as

$$\widehat{F}_{(r)}(\mathcal{T},h_{\theta})=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\right)-\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r\right)$$
$$=\left(\widehat{\mathbb{P}}\left(s=r\right)-1\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r\right)+\sum_{r^{\prime}\neq r}\widehat{\mathbb{P}}\left(s=r^{\prime}\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r^{\prime}\right)$$

where the law of total probability was used to obtain the last equality. Thus, Accuracy Parity satisfies all our assumptions with $C_{(r)}^{(r)} = \widehat{\mathbb{P}}\left(s=r\right) - 1$, $C_{(r)}^{(r^{\prime})} = \widehat{\mathbb{P}}\left(s=r^{\prime}\right)$ for $r^{\prime} \neq r$, and $C_{(r)}^{0} = 0$.

It is worth noting that FairGrad applies to any fairness measure that respects the definition above, even when there is a large number of groups. However, the performance of FairGrad may degrade when there are only a few samples per group, as fairness estimations become unreliable. In this case, the risk is that the learned model is fair on the training set but does not generalize well to new examples. To circumvent some of these issues, works such as Hebert-Johnson et al. (2018); Kearns et al. (2018) have extended fairness definitions to multi-group settings and proposed mechanisms to optimize them. In this work, we focus on classical fairness definitions and leave extending FairGrad to these alternative definitions for future work.

## 2.2 Related Work

Various in-processing methods have been proposed in the fair machine learning literature. Amongst them, many methods rely on formulating the problem as a constrained optimization, which is later relaxed to an unconstrained one, or on re-weighting techniques where examples are dynamically re-weighted based on the fairness levels of the model (Caton & Haas, 2020). In this sub-section, we provide a brief overview of these methods and explain the similarities and differences between FairGrad and the corresponding approaches. Additionally, we also demonstrate how FairGrad can be seen as a solution that connects these two streams of work. For more details about very closely related works, please refer to Appendix A.

**Constrained Optimization.** The problem of fair machine learning can be seen as the following constrained optimization problem (Cotter et al., 2019; Agarwal et al., 2018):

$$\underset{h_{\theta}\in\mathcal{H}}{\arg\min}\ \widehat{\mathbb{P}}\big(h_{\theta}(x)\neq y\big)\quad\text{s.t.}\quad\forall k\in[K],\ \widehat{F}_{k}(\mathcal{T},h_{\theta})=0.\tag{2}$$

This problem can then be reformulated as an unconstrained optimization problem using Lagrange multipliers.
More specifically, with multipliers denoted by $\lambda_1, \ldots, \lambda_K$, the unconstrained objective that should be minimized for $h_{\theta} \in \mathcal{H}$ and maximized for $\lambda_1, \ldots, \lambda_K \in \mathbb{R}$ is:

$$\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K}\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\right)+\sum_{k=1}^{K}\lambda_{k}\widehat{F}_{k}(\mathcal{T},h_{\theta})\,.\tag{3}$$

Several strategies may then be employed to find a saddle point for this objective¹. Agarwal et al. (2018) first relax the problem by searching for a distribution over the models rather than a single optimal hypothesis. Then, they alternate between using an exponentiated gradient step to find $\lambda_1, \ldots, \lambda_K \in \mathbb{R}$ and a procedure based on cost-sensitive learning to find the next $h_{\theta}$ to add to their distribution. Similarly, Cotter et al. (2019) also search for a distribution over the models using an alternating approach based on Lagrange multipliers, where they relax Objective (3) by replacing the error rate with a loss term. To update the λ multipliers, unlike Agarwal et al. (2018), they use projected gradient descent based on the original fairness terms. To find the next $h_{\theta}$ to add to their distribution of models, they use a projected gradient descent update over a relaxed overall objective function where the fairness measures are replaced with smooth upper bounds.

¹ These min-max formulations are not new in the literature and were already used in the 1940's (Wald, 1945). More recently, Madry et al. (2018) employed the formulation to make deep neural networks more robust against adversarial attacks. Similarly, Ben-Tal et al. (2012) modeled uncertainty in input via this formulation.

In this work, we also use an alternating approach based on Objective (3). However, we look for a single model rather than a distribution of models. To this end, at each iteration, we update λ using a projected gradient descent step similar to Cotter et al. (2019), that is, using the original fairness measures. To solve for $h_{\theta}$, contrary to Cotter et al. (2019), we first show that Objective (3), with fixed λ, may be rewritten as a weighted sum of group-wise error rates. This is similar in spirit to the cost-sensitive learning method of Agarwal et al. (2018) but can be applied beyond simple binary classification. We then follow Cotter et al. (2019) and replace the error rate terms in our new objective with a loss function, albeit not necessarily an upper bound, to obtain meaningful gradient directions.

**Re-weighting.** Another way to learn fair models is to use a re-weighting approach where each example $x$ is associated with a weight $w_x \in \mathbb{R}$ so that minimizing the following objective for $h_{\theta}$ outputs a fair model:

$$\widehat{W}(h_{\theta})=\widehat{\mathbb{E}}\left[w_{x}\,\mathbb{I}_{\{h_{\theta}(x)\neq y\}}\right].$$

The underlying idea for methods that pose the problem this way is to propose a cost function that outputs a weight for each example. On the one hand, the weights can be determined in a pre-processing step (Kamiran & Calders, 2012), based on the statistics of the data under consideration. On the other hand, the weights may evolve with $h_{\theta}$, that is, they are dynamically updated each time the model changes during the training process (Roh et al., 2020). In this work, to find $h_{\theta}$, we also use a dynamic re-weighting approach where the weights change at each iteration. To choose the weights, we initially give the same importance to each example. Then, we increase the weights of disadvantaged examples and decrease the weights of advantaged examples proportionally to the fairness level of the current model for their group.
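In practice, such a weighted 0-1 objective is optimized through a differentiable surrogate. A minimal sketch of a group-weighted surrogate loss (our illustration, not tied to any specific prior method) is:

```python
import torch
import torch.nn.functional as F

def weighted_loss(logits, targets, group_ids, group_weights):
    """Weighted empirical risk with a cross-entropy surrogate.

    group_weights is a (K,) tensor holding one weight per group; in a
    dynamic re-weighting scheme it is updated as the model evolves.
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (group_weights[group_ids] * per_example).mean()
```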
An important feature of our approach, unlike other re-weighting approaches, is that we do not constrain ourselves to positive weights but rather allow the use of negative weights. Indeed, we show in Lemma 1 that the latter are sometimes necessary to learn fair models.

To summarize, we first frame the task as a constrained optimization problem, similar to Cotter et al. (2019) and Agarwal et al. (2018). We then propose an alternating approach, where we update λ at each iteration using a projected gradient descent step similar to Cotter et al. (2019). However, in order to learn the model $h_{\theta}$, we show that Objective (3), with fixed λ, can be rewritten as a weighted sum of group-wise error rates. This step can be interpreted as an instance of dynamic re-weighting where the weights change at each iteration. Thus, our method can be seen as a connection between constrained optimization and re-weighting.

## 3 FairGrad

In the previous section, we argued that FairGrad is connected to both constrained optimization and re-weighting approaches. In this section, we provide details on our method, and we present it starting from the constrained optimization point of view as we believe it makes it easier to understand how the weights are selected and updated. We begin by discussing FairGrad for exact fairness and then extend it to ε-fairness.

## 3.1 FairGrad For Exact Fairness

To solve the fairness problem described in Equation (3), we propose to use an alternating approach where the hypothesis and the multipliers are updated one after the other². We begin by describing our method to update the multipliers and then the model.

² It is worth noting that, here, we do not have formal duality guarantees and that the problem is not even guaranteed to have a fair solution. Nevertheless, the approach seems to work well in practice, as can be seen in the experiments.

**Updating the Multipliers.** To update $\lambda_1, \ldots, \lambda_K$, we use a standard gradient ascent procedure. Hence, given that the gradient of Problem (3) is

$$\nabla_{\lambda_{1},\ldots,\lambda_{K}}\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K}\right)=\begin{pmatrix}\widehat{F}_{1}(\mathcal{T},h_{\theta})\\ \vdots\\ \widehat{F}_{K}(\mathcal{T},h_{\theta})\end{pmatrix}$$

we have the following update rule $\forall k \in [K]$:

$$\lambda_{k}^{t+1}=\lambda_{k}^{t}+\eta_{\lambda}\widehat{F}_{k}\big(\mathcal{T},h_{\theta}^{t}\big)$$

where $\eta_{\lambda}$ is a rate that controls the importance of each update. In the experiments, we use a constant rate of 0.01, as our initial tests showed that it is a good rule of thumb when the data is properly standardized.

**Updating the Model.** To update the parameters $\theta \in \mathbb{R}^{D}$ of the model $h_{\theta}$, we use standard gradient descent. However, we first notice that, given our fairness definition, Equation (3) can be written as

$$\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K}\right)=\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k}\right)\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k^{\prime}=1}^{K}C_{k^{\prime}}^{k}\lambda_{k^{\prime}}\right]+\sum_{k=1}^{K}\lambda_{k}C_{k}^{0}\,,\tag{4}$$

where $\sum_{k=1}^{K}\lambda_{k}C_{k}^{0}$ is independent of $h_{\theta}$ by definition.
Hence, at iteration $t$, the update rule becomes

$$\theta^{t+1}=\theta^{t}-\eta_{\theta}\sum_{k=1}^{K}\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k^{\prime}=1}^{K}C_{k^{\prime}}^{k}\lambda_{k^{\prime}}\right]\nabla_{\theta}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k}\right)\,,$$

where $\eta_{\theta}$ is the usual learning rate that controls the importance of each parameter update. Here, we obtain our group specific weights $\forall k$, $w_{k} = \left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right) + \sum_{k^{\prime}=1}^{K} C_{k^{\prime}}^{k}\lambda_{k^{\prime}}\right]$, which depend on the current fairness level of the model through $\lambda_1, \ldots, \lambda_K$, the relative size of each group through $\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)$, and the fairness notion under consideration through the constants $C$. The exact values of these constants are given in Section 2.1 and Appendix B for various group fairness notions. Overall, they are such that, at each iteration, the weights of the advantaged groups are reduced and the weights of the disadvantaged groups are increased.

The main limitation of the above update rule is that one needs to compute the gradient of 0-1 losses since $\nabla_{\theta}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k}\right) = \frac{1}{n_{k}}\sum_{(x,y)\in\mathcal{T}_{k}}\nabla_{\theta}\mathbb{I}_{\{h_{\theta}(x)\neq y\}}$. Unfortunately, this usually does not provide meaningful optimization directions. To address this issue, we follow the usual trend in machine learning and replace the 0-1 loss with one of its continuous and differentiable surrogates that provides meaningful gradients. For instance, in our experiments, we use the cross entropy loss.

## 3.2 Computational Overhead Of FairGrad

We summarize our approach in Algorithm 1, where we have used italic font to highlight the steps inherent to FairGrad that do not appear in classic gradient descent. We consider batch gradient descent rather than full gradient descent as it is a popular scheme. We empirically investigate the impact of the batch size in Section 4.7. The main difference is Step 5 (in italic font), that is, the computation of the group-wise fairness levels. However, these can be cheaply obtained from the predictions of $h_{\theta}^{t}$ on the current batch, which are always available since they are also needed to compute the gradient. Hence, the computational overhead of FairGrad is very limited.

## 3.3 Importance Of Negative Weights

A key property of FairGrad is that we allow the use of negative weights, that is, $\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right) + \sum_{k^{\prime}=1}^{K} C_{k^{\prime}}^{k}\lambda_{k^{\prime}}\right]$ may become negative, while existing methods (Roh et al., 2020; Iosifidis & Ntoutsi, 2019; Jiang & Nachum, 2020) restrict themselves to positive weights. In this section, we show that these negative weights are important as they are sometimes necessary to learn fair models. Hence, in the next lemma, we provide sufficient conditions under which negative weights are mandatory if one wants to enforce Accuracy Parity.

## Algorithm 1 FairGrad For Exact Fairness

Input: Groups $\mathcal{T}_1, \ldots, \mathcal{T}_K$, Functions $\widehat{F}_1, \ldots, \widehat{F}_K$, Function class $\mathcal{H}$ of models $h_{\theta}$ with parameters $\theta \in \mathbb{R}^{D}$, Learning rates $\eta_{\lambda}$, $\eta_{\theta}$, and Iterator *iter* that returns batches of examples.
Output: A fair model $h_{\theta}^{*}$.
1: Initialize *the group specific weights* and the model.
2: **for** B in *iter* **do**
3: Compute the predictions of the current model on the batch B.
4: Compute the group-wise losses using the predictions.
5: *Compute the current fairness level using the predictions and update the group-wise weights*.
6: Compute the overall *weighted* loss using the *group-wise weights*.
7: Compute the gradients based on the loss and update the model.
8: **end for**
9: **return** the trained model $h_{\theta}^{*}$
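A minimal PyTorch-style sketch of Algorithm 1 is given below. It assumes the constants `C0` (shape (K,)) and `C` (shape (K, K), with `C[k, j]` holding $C_{k}^{j}$) of Eq. (1) and the group proportions `p` have been precomputed for the chosen fairness notion; `model`, `optimizer`, and `data_loader` are standard PyTorch objects. This is our illustration of the two updates above, with the cross-entropy surrogate in place of the 0-1 loss, not the library's internal code.

```python
import torch
import torch.nn.functional as F

lambdas = torch.zeros(K)           # one Lagrange multiplier per group
eta_lambda = 0.01                  # fairness rate

for x, y, g in data_loader:        # g holds the group index of each example
    logits = model(x)              # Step 3: predictions on the batch

    # Step 5: estimate the fairness levels on the batch via Eq. (1),
    # then update the multipliers and the (possibly negative) weights.
    with torch.no_grad():
        err = torch.zeros(K)
        preds = logits.argmax(dim=1)
        for k in range(K):
            if (g == k).any():
                err[k] = (preds[g == k] != y[g == k]).float().mean()
        lambdas += eta_lambda * (C0 + C @ err)
        weights = p + C.t() @ lambdas      # w_k of Section 3.1

    # Steps 6-7: weighted surrogate loss and a standard gradient step.
    per_example = F.cross_entropy(logits, y, reduction="none")
    loss = (weights[g] * per_example).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```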
Lemma 1 (**Negative weights are necessary**). *Let the fairness notion be Accuracy Parity (Example 1). Let* $h_{\theta}^{*}$ *be the most accurate and fair model. Then using negative weights is necessary as long as*

$$\min_{\begin{subarray}{c}h_{\theta}\in\mathcal{H}\\ h_{\theta}\,\text{unfair}\end{subarray}}\max_{\mathcal{T}_{k}}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k}\right)<\widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y\right).$$

Proof. The proof is provided in Appendix C. □

The previous condition can sometimes be verified in practice. As a motivating example, assume a binary setting with only two sensitive groups $\mathcal{T}_1$ and $\mathcal{T}_{-1}$. Let $h_{\theta}^{-1}$ be the model minimizing $\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{-1}\right)$ and assume that $\widehat{\mathbb{P}}\left(h_{\theta}^{-1}(x)\neq y\right) < \widehat{\mathbb{P}}\left(h_{\theta}^{-1}(x)\neq y|\mathcal{T}_{-1}\right)$, that is, group $\mathcal{T}_{-1}$ is disadvantaged for Accuracy Parity. Given $h_{\theta}^{*}$ the most accurate and fair model, we have

$$\min_{\begin{subarray}{c}h_{\theta}\in\mathcal{H}\\ h_{\theta}\,\text{unfair}\end{subarray}}\max_{\mathcal{T}_{k}}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y|\mathcal{T}_{k}\right)=\widehat{\mathbb{P}}\left(h_{\theta}^{-1}(x)\neq y|\mathcal{T}_{-1}\right)<\widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y\right)$$

as otherwise we would have a contradiction, since the fair model would also be the most accurate model for group $\mathcal{T}_{-1}$ given that $\widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y\right) = \widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y|\mathcal{T}_{-1}\right)$ by definition of Accuracy Parity. In other words, a dataset where the most accurate model for a given group still disadvantages it requires negative weights. This might be connected to the notion of leveling down (Zietlow et al., 2022; Mittelstadt et al., 2023), where fairness can only be achieved by harming all the groups, or by bringing advantaged groups closer to disadvantaged ones by harming them. It is generally an artifact of strictly egalitarian fairness measures. Investigating this negative effect is an important research direction that goes beyond the scope of this paper. Nevertheless, a potential solution to mitigate it is to use other kinds of fairness definitions. As a first step in this direction, in the next section we extend FairGrad to ε-fairness, where strict equality is relaxed.

## 3.4 FairGrad For ε-Fairness

In the previous section, we considered exact fairness and we showed that it could be achieved by using a re-weighting approach. Here, we extend this procedure to ε-fairness, where the fairness constraints are relaxed and a controlled amount of violations is allowed. Usually, ε is a user-defined parameter, but it can also be set by the law, as is the case with the 80% rule in the US (Biddle, 2006). The main difference with exact fairness is that each equality constraint in Problem (2) is replaced with two inequalities of the form

$$\forall k\in[K],\ \widehat{F}_{k}(\mathcal{T},h_{\theta})\leq\epsilon\qquad\text{and}\qquad\forall k\in[K],\ \widehat{F}_{k}(\mathcal{T},h_{\theta})\geq-\epsilon\,.$$

The main consequence is that we need to maintain twice as many Lagrange multipliers and that the group-wise weights are slightly different. Since the two procedures are similar, we omit the details here but provide them in Appendix D for the sake of completeness.
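As a sketch of how the multiplier update changes under ε-fairness, one can keep two non-negative multipliers per group, one for each inequality, and project after every ascent step. This is our illustration of the construction, the details of which are in Appendix D:

```python
import torch

def update_eps_multipliers(lam_hi, lam_lo, F_hat, eps, eta=0.01):
    """Projected ascent for the two-sided constraints F_k <= eps and F_k >= -eps.

    lam_hi / lam_lo are (K,) tensors of non-negative multipliers for the
    upper and lower constraints; F_hat holds the current fairness levels.
    """
    lam_hi = torch.clamp(lam_hi + eta * (F_hat - eps), min=0.0)
    lam_lo = torch.clamp(lam_lo + eta * (-F_hat - eps), min=0.0)
    # The group weights then use the difference (lam_hi - lam_lo) in place
    # of the single multiplier of the exact-fairness case.
    return lam_hi, lam_lo
```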
## 4 Experiments

In this section, we present several experiments that demonstrate the competitiveness of FairGrad as a procedure to learn fair models for classification. We begin by presenting results over standard fairness datasets and a Natural Language Processing dataset in Section 4.4. We then study the behaviour of the ε-fairness variant of FairGrad in Section 4.5. Next, we showcase the fine-tuning ability of FairGrad on a Computer Vision dataset in Section 4.6. Finally, we investigate the impact of the batch size on the learned model in Section 4.7 and present results related to the computational overhead incurred by FairGrad in Section 4.8.

## 4.1 Datasets

In the main paper, we consider 4 different datasets and postpone the results on another 6 datasets to Appendix E.3 as they follow similar trends. We also postpone the detailed descriptions of these datasets as well as the pre-processing steps to Appendix E.2.

We consider commonly used fairness datasets, namely **Adult Income** (Kohavi, 1996) and **CelebA** (Liu et al., 2015). Both are binary classification datasets with binary sensitive attributes (gender). We also consider a variant of the Adult Income dataset where we add a second binary sensitive attribute (race) to obtain a dataset with 4 disjoint sensitive groups. For both datasets, we use 20% of the data as a test set and the remaining 80% as a train set. We further divide the train set into two and keep 25% of the training examples as a validation set. For each repetition, we randomly shuffle the data before splitting it, and thus we have unique splits for each random seed. Lastly, we standardize each feature independently by subtracting the mean and scaling to unit variance, both estimated on the training set.

To showcase the wide applicability of FairGrad, we consider the **Twitter Sentiment**³ (Blodgett et al., 2016) dataset from the Natural Language Processing community. It consists of 200k tweets with a binary sensitive attribute (race) and a binary sentiment score. We employ the same setup, splits, and pre-processing as proposed by Han et al. (2021) and Elazar & Goldberg (2018) and create bias in the dataset by changing the proportion of each subgroup (race-sentiment) in the training set. Following the footsteps of Elazar & Goldberg (2018), we encode the tweets using the DeepMoji (Felbo et al., 2017) encoder with no fine-tuning, which has been pre-trained over millions of tweets to predict their emoji, thereby predicting the sentiment.

We also employ the **UTKFace** dataset⁴ (Zhang et al., 2017) from the Computer Vision community. It consists of 23,708 images tagged with race, age, and gender, with pre-defined splits.

³ http://slanglab.cs.umass.edu/TwitterAAE/
⁴ https://susanqq.github.io/UTKFace/

## 4.2 Performance Measures

For fairness, we consider the four measures introduced in Section 2.1 and Appendix B, namely Equalized Odds (EOdds), Equality of Opportunity (EOpp), Accuracy Parity (AP), and Demographic Parity (DP). For each specific fairness notion, we report the average absolute fairness level of the different groups over the test set, that is $\frac{1}{K}\sum_{k=1}^{K}\left|\widehat{F}_{k}(\mathcal{T},h_{\theta})\right|$ (lower is better). To assess the utility of the learned models, we use their accuracy levels over the test set, that is $\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}_{h_{\theta}(x_{i})=y_{i}}$ (higher is better); a sketch of these two computations is given at the end of this subsection.

All the results reported are averaged over 5 independent runs and standard deviations are provided. Note that, in the main paper, we graphically report a subset of the results over the aforementioned datasets. We provide detailed results in Appendix E.3, including the missing pictures as well as complete tables with accuracy levels, fairness levels, and the fairness levels of the most well-off and worst-off groups for all the relevant methods.
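Concretely, the two reported quantities can be computed as follows; this is a small illustrative helper, where `F_hat` holds the group fairness levels of Eq. (1) measured on the test set.

```python
import numpy as np

def test_metrics(y_pred, y_true, F_hat):
    """Reported utility and fairness of a model on the test set.

    y_pred, y_true : arrays of shape (n,) with predicted / true labels.
    F_hat          : array of shape (K,), group fairness levels F_k.
    """
    accuracy = np.mean(y_pred == y_true)   # higher is better
    fairness = np.mean(np.abs(F_hat))      # lower is better
    return accuracy, fairness
```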
3http://slanglab.cs.umass.edu/TwitterAAE/
4https://susanqq.github.io/UTKFace/

- **Adversarial** learning based method where we employ an adversarial mechanism (Goodfellow et al., 2014) using a gradient reversal layer (Ganin & Lempitsky, 2015), similar to GRAD-Pred (Raff & Sylvester, 2018), where an adversary, with an objective to predict the sensitive attribute, is added to the unconstrained model.
- Bi-level optimization based method implemented in the form of **BiFair** (Ozdayi et al., 2021).
- Re-weighting based methods in the form of **FairBatch** (Roh et al., 2020). We also compare against a simpler baseline called **Weighted ERM** where each example is re-weighted at the beginning based on the size of the sensitive group it belongs to. Unlike FairBatch, these weights are not updated during training.
- Constrained optimization based method as proposed by Cotter et al. (2019). We refer to this method as **Constraints** in this article.
- **Reduction**, which implements the exponentiated gradient based fair classification approach proposed by Agarwal et al. (2018).

In all our experiments, we consider two different hypothesis classes. On the one hand, we use linear models implemented in the form of neural networks with no hidden layers. On the other hand, we use a more complex, non-linear architecture with three fully-connected hidden layers of respective sizes 128, 64, and 32. We use ReLU as our activation function with batch normalization and dropout. In both cases, we optimize the cross-entropy loss.

In several experiments, we only consider subsets of the baselines due to the limitations of the methods. For instance, BiFair was designed to handle binary labels and binary sensitive attributes and thus is not considered for the datasets with more than two sensitive groups or two labels. Furthermore, we implemented it using the authors' code that is freely available online but does not include AP as a fairness measure, thus we do not report results related to this measure for BiFair. Similarly, we also implemented FairBatch from the authors' code, which does not support AP as a fairness measure, thus we also exclude it from the comparison for this measure. For Constraints, we based our implementation on the publicly available authors' library but were only able to reliably handle linear models and thus we do not consider this baseline for non-linear models. Finally, for Adversarial, we used our custom-made implementation. However, it is only applicable when learning non-linear models since it requires at least one hidden layer to propagate its reversed gradient.

Apart from common hyper-parameters such as dropout, several baselines come with their own set of hyper-parameters. For instance, BiFair has the *inner loop length*, which controls the number of iterations in its inner loop, while Adversarial has the *scaling*, which re-weights the adversarial branch loss and the task loss. We provide details of common and approach-specific hyper-parameters with their ranges in Appendix E.1. With several hyper-parameters for each approach, selecting the best combination is often crucial to avoid undesirable behaviors such as over-fitting (Maheshwari et al., 2022). In this paper, we opt for the following procedure. First, for each method, we consider all the X possible hyper-parameter combinations and we run the training procedure for 50 epochs for each combination.
Then, we retain all the models returned by the last 5 epochs, that is, for a given method, we have 5X models and the goal is to select the best one among them. Since we have access to two performance measures, we can select either the most accurate model, the most fair one, or a trade-off between the two depending on the end goal. Here, we chose to focus on the third option and select the model with the lowest fairness score within a certain accuracy interval. More specifically, let $\alpha^*$ be the highest validation accuracy among the 5X models. We choose the model with the lowest validation fairness score amongst all models with a validation accuracy in the interval $[\alpha^* - k, \alpha^*]$. In this work, we fix $k$ to 0.03.

## 4.4 Results For Exact Fairness

We report the results over the Adult Income dataset using a linear model, the Adult Income dataset with multiple groups with a non-linear model, and the Twitter Sentiment dataset using both linear and non-linear models in Figures 2, 3, and 4 respectively. In these figures, the best methods are closer to the bottom right corner. If a method is closer to the bottom left corner, it has good fairness but reduced accuracy. Similarly, a method closer to the top right corner has good accuracy but poor fairness.

The main take-away from these experiments is that there is no fairness enforcing method that is consistently better than the others in terms of both accuracy and fairness. All of them have strengths, that is datasets and fairness measures where they obtain good results, and weaknesses, that is datasets and fairness measures for which they are sub-optimal. FairBatch induces better accuracy than the other approaches over Adult with a linear model and EOdds and only pays a small price in fairness. However, it is significantly worse in terms of fairness over the Adult Multigroup dataset with a non-linear model. Similarly, BiFair is sub-optimal on Adult with EOpp, while being comparable to the other approaches on the Twitter Sentiment dataset. We observed similar trends on the other datasets, available in Appendix E.3, with different methods coming out on top for different datasets and fairness measures.

Interestingly, FairGrad generally outperforms other approaches in terms of fairness, albeit with a slight loss in accuracy. These observations are even more amplified in the Accuracy Parity and Equalized Odds settings. Moreover, it is generally more robust and tends to show a lower standard deviation in accuracy and fairness than the other approaches. Even in terms of accuracy, the largest difference is over the Crime dataset, where the difference between FairGrad and Unconstrained is 0.04. However, in most cases, the difference is within 0.02. In the multi-group setup, we make similar observations, that is, FairGrad outperforms other approaches in fairness, albeit with a drop in accuracy. In fact, for Equality of Opportunity, FairGrad almost outperforms all approaches in terms of both fairness and accuracy. Overall, FairGrad performs reasonably well in all the settings we considered with no obvious weaknesses, that is, no datasets with the lowest accuracy and fairness compared to the baselines.

Figure 3: Results for the Adult Multigroup dataset using Non Linear models.
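As a concrete illustration of the model-selection procedure described in Section 4.3, the snippet below gives a minimal sketch of the selection rule; the function name and the triple-based representation of candidate models are ours, for illustration only.

```python
def select_model(candidates, k=0.03):
    """Select a model following the selection rule of Section 4.3.

    `candidates` is a list of (model, val_accuracy, val_fairness) triples,
    where val_fairness is the mean absolute fairness level (lower is better).
    """
    # Highest validation accuracy among the 5X candidate models.
    best_acc = max(acc for _, acc, _ in candidates)
    # Keep only the models whose accuracy lies in [best_acc - k, best_acc].
    admissible = [c for c in candidates if c[1] >= best_acc - k]
    # Among those, return the model with the lowest validation fairness score.
    return min(admissible, key=lambda c: c[2])[0]
```

With $k = 0.03$, this rule trades at most three points of validation accuracy for a lower fairness score.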
## 4.5 Accuracy Fairness Trade-Off

In this second set of experiments, we demonstrate the capability of FairGrad to support approximate fairness (see Section 3.4). In Figure 5, we show the performance, as accuracy-fairness pairs, of several models learned on the CelebA dataset by varying the fairness level parameter $\epsilon$. These results suggest that FairGrad respects the constraints well. Indeed, the average absolute fairness level (across all the groups, see Section 4.2) achieved by FairGrad is either the same as or less than the given threshold. It is worth mentioning that FairGrad is designed to enforce $\epsilon$-fairness for each constraint individually, which is slightly different from the summarized quantity displayed here. Finally, as the fairness constraint is relaxed, the accuracy of the model increases, reaching the same performance as Unconstrained when the fairness level of the latter is below $\epsilon$.

Figure 2: Results for the Adult dataset using Linear Models.

Figure 4: Results for the Twitter Sentiment dataset for Linear and Non Linear Models.

Figure 5: Results for CelebA using Linear models. The Unconstrained Linear model achieves a test accuracy of 0.8532 with fairness level of 0.0499 for EOdds, 0.0204 for AP, and 0.0387 for EOpp.

## 4.6 FairGrad As A Fine-Tuning Procedure

While FairGrad has primarily been designed to learn fair classifiers from scratch, it can also be used to fine-tune an existing classifier to achieve better fairness. To showcase this, we fine-tune the ResNet18 (He et al., 2016) model, developed for image recognition, over the UTKFace dataset (Zhang et al., 2017), consisting of human face images tagged with Gender, Age, and Race information. Following the same process as Roh et al. (2020), we use Race as the sensitive attribute and consider two scenarios: either we consider Demographic Parity as the fairness measure and use the gender (binary) as the target label, or we consider Equalized Odds and predict the age (multi-valued). The results are displayed in Table 1. In both settings, FairGrad learns models that are more fair than an Unconstrained fine-tuning procedure, albeit at the expense of accuracy.

Table 1: Results for the UTKFace dataset where a ResNet18 is fine-tuned using different strategies.

| Method | Accuracy (s=Race; y=Gender) | DP (s=Race; y=Gender) | Accuracy (s=Race; y=Age) | EOdds (s=Race; y=Age) |
|---|---|---|---|---|
| Unconstrained | 0.8691 ± 0.0075 | 0.0448 ± 0.0066 | 0.6874 ± 0.0080 | 0.0843 ± 0.0089 |
| FairGrad | 0.8397 ± 0.0085 | 0.0111 ± 0.0064 | 0.6491 ± 0.0082 | 0.0506 ± 0.0059 |

Table 2: Batch size effect on the CelebA dataset with Linear Models and EOdds as the fairness measure.

| Batch Size | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 0.8186 | 0.8234 | 0.8215 | 0.8268 | 0.8273 | 0.8286 | 0.8292 | 0.8289 | 0.8303 |
| Accuracy Std | 0.0013 | 0.006 | 0.0028 | 0.0025 | 0.0031 | 0.0008 | 0.0027 | 0.0017 | 0.0031 |
| Fairness | 0.0031 | 0.0091 | 0.0045 | 0.0036 | 0.0051 | 0.0046 | 0.004 | 0.0038 | 0.0057 |
| Fairness Std | 0.0042 | 0.0062 | 0.0012 | 0.0014 | 0.0025 | 0.0032 | 0.0026 | 0.0019 | 0.0018 |
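To make the fine-tuning procedure of Section 4.6 more concrete, the following is a minimal PyTorch sketch of a FairGrad-style training step, assuming exact fairness with Accuracy Parity as the measure. It is a sketch under stated assumptions, not the released implementation: the tensors `priors` (estimates of $\widehat{\mathbb{P}}(\mathcal{T}_k)$), `C` (the constants $C^k_{k'}$), `lam` (the multipliers), the sign convention for the batch fairness estimates, and all names are our illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fairgrad_step(model, optimizer, x, y, g, priors, C, lam, eta=0.01):
    """One FairGrad-style step on a batch (sketch only).

    g[i] is the sensitive-group index of example i, priors[k] estimates
    P(T_k), C[j, k] stores the fairness constant C^k_j, and lam holds the
    current Lagrange multipliers, one per sensitive group.
    """
    K = priors.shape[0]
    # Group-wise weights: P(T_k) + sum_j C^k_j * lambda_j; possibly negative.
    weights = priors + C.t() @ lam

    optimizer.zero_grad()
    losses = F.cross_entropy(model(x), y, reduction="none")
    loss = losses.new_zeros(())
    for k in range(K):
        mask = g == k
        if mask.any():
            # The mean loss over group k estimates P(h(x) != y | T_k).
            loss = loss + weights[k] * losses[mask].mean()
    loss.backward()
    optimizer.step()

    # Gradient ascent on the multipliers using fairness levels estimated on
    # the batch; for Accuracy Parity we take F_k = P(err) - P(err | T_k),
    # a sign convention assumed here for illustration.
    with torch.no_grad():
        err = (model(x).argmax(dim=1) != y).float()
        for k in range(K):
            mask = g == k
            if mask.any():
                lam[k] += eta * (err.mean() - err[mask].mean())
```

Note that the group-wise weights can become negative, which is the key difference with purely sampling-based re-weighting schemes such as FairBatch.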
## 4.7 Impact Of The Batch-Size

In this section, we evaluate the impact of batch size on the fairness and accuracy levels of the learned model. Indeed, at each iteration, in order to minimize the overhead associated with FairGrad (see Section 3.1), we update the weights using the fairness level of the model estimated solely on the current batch. When these batches are small, these estimates are unreliable and might lead the model astray. In Table 2, we present the performances of several linear models learned with different batch sizes on the CelebA dataset. Over this dataset, we observe that FairGrad consistently learns a fair model across all batch sizes and obtains reasonable accuracy since Unconstrained has an accuracy of 0.8532 for this problem. Nevertheless, we still recommend that practitioners use a larger batch size whenever possible as we observe a slight reduction in the fairness standard deviation.

## 4.8 Computational Overhead

In this last experiment, we evaluate the overhead of FairGrad by reporting the wall clock time, in seconds, needed to train for an epoch with the Unconstrained approach and our method in various settings.

- We show the effect of model size by varying the number of hidden layers of the model over the Adult Income dataset, which consists of 45,222 records. We used an Intel Xeon E5-2680 CPU for training.
- We consider a large convolutional neural network (ResNet18 (He et al., 2016)) fine-tuned over the UTKFace dataset consisting of 23,708 images. We trained the model using a Tesla P100 GPU.
- We experiment with a large transformer (bert-base-uncased (Devlin et al., 2019)) fine-tuned over the Twitter Sentiment dataset consisting of 200k tweets. We trained it using a Tesla P100 GPU.

We present the results on the computational overhead of FairGrad in Table 3. We find that the overhead is limited and should not be critical in most applications as it does not depend on the complexity of the model but, instead, on the number of examples and the batch size. Overall, these observations are in line with the arguments presented in Section 3.2.

Table 3: The computational overhead of FairGrad in various settings. BS here refers to Batch Size, and the Unconstrained and FairGrad columns refer to the average time in seconds taken by these approaches for an epoch, respectively. Delta refers to the difference in time between these two approaches.

| Setting | Parameters | BS | Unconstrained | FairGrad | Delta |
|---|---|---|---|---|---|
| Linear model - Adult Dataset - CPU | 106 | 512 | 0.277 ± 0.031 | 0.307 ± 0.01 | 0.03 |
| 2 layers - Adult Dataset - CPU | 1762 | 512 | 0.315 ± 0.036 | 0.316 ± 0.029 | 0.01 |
| 5 layers - Adult Dataset - CPU | 21346 | 512 | 0.370 ± 0.042 | 0.394 ± 0.025 | 0.02 |
| 10 layers - Adult Dataset - CPU | 39042 | 512 | 0.483 ± 0.021 | 0.499 ± 0.034 | 0.02 |
| 20 layers - Adult Dataset - CPU | 80642 | 512 | 0.672 ± 0.034 | 0.689 ± 0.026 | 0.02 |
| ResNet18 trained - UTKFace - GPU | 11177538 | 64 | 31.173 ± 0.085 | 31.588 ± 0.055 | 0.42 |
| Bert - Twitter Sentiment - GPU | 109505310 | 32 | 2246.342 ± 3.20 | 2294.382 ± 4.01 | 48.04 |

## 5 Conclusion

In this paper, we proposed FairGrad, a fairness aware gradient descent approach based on a re-weighting scheme. We showed that it can be used to learn fair models for various group fairness definitions and is able to handle multiclass problems as well as settings where there are multiple sensitive groups.
We empirically showed the competitiveness of our approach against several baselines on standard fairness datasets and on a Natural Language Processing task. We also showed that it can be used to fine-tune an existing model on a Computer Vision task. Finally, since it is based on gradient descent and has a small overhead, we believe that FairGrad could be used for a wide range of applications, even beyond classification.

## Limitations And Societal Impact

While appealing, FairGrad also has limitations. It implicitly assumes that a set of weights that would lead to a fair model exists, but this might be difficult to verify in practice. Thus, even if in our experiments FairGrad seems to behave quite well, a practitioner using this approach should not trust it blindly. It remains important to always check the actual fairness level of the learned model. On the other hand, we believe that, due to its simplicity and its versatility, FairGrad could be easily deployed in various practical contexts and, thus, could contribute to the dissemination of fair models.

## Acknowledgements

This work has been supported by the Région Hauts de France (Projet STaRS Equité en apprentissage décentralisé respectueux de la vie privée) and Agence Nationale de la Recherche under grant number ANR19-CE23-0022. The authors would also like to thank Michael Lohaus and the anonymous reviewers for helpful discussions and feedback.

## References

Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In *International Conference on Machine Learning*, pp. 60–69. PMLR, 2018.

Aharon Ben-Tal, Sahely Bhadra, Chiranjib Bhattacharyya, and Arkadi Nemirovski. Efficient methods for robust classification under uncertainty in kernel matrices. *J. Mach. Learn. Res.*, 13:2923–2954, 2012. doi: 10.5555/2503308.2503335. URL https://dl.acm.org/doi/10.5555/2503308.2503335.

Dan Biddle. *Adverse impact and test validation: A practitioner's guide to valid and defensible employment testing*. Gower Publishing, Ltd., 2006.

Su Lin Blodgett, Lisa Green, and Brendan O'Connor. Demographic dialectal variation in social media: A case study of African-American English. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, 2016.

Toon Calders and Sicco Verwer. Three naive bayes approaches for discrimination-free classification. *Data Mining and Knowledge Discovery*, 21(2):277–292, 2010.

Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In *2009 IEEE International Conference on Data Mining Workshops*, pp. 13–18. IEEE, 2009.

Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In *Advances in Neural Information Processing Systems*, volume 30, 2017.

Simon Caton and Christian Haas. Fairness in machine learning: A survey. *arXiv preprint arXiv:2010.04053*, 2020.

Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, and Massimiliano Pontil. Leveraging labeled and unlabeled data for consistent fair binary classification. *arXiv preprint arXiv:1906.05082*, 2019.

Andrew Cotter, Heinrich Jiang, and Karthik Sridharan. Two-player games for efficient non-convex constrained optimization. In *Algorithmic Learning Theory*, pp. 300–332. PMLR, 2019.

Christophe Denis, Romuald Elie, Mohamed Hebiri, and François Hu. Fairness guarantee in multi-class classification. *arXiv preprint arXiv:2109.13642*, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding.
In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)*, pp. 4171–4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.

Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual*, pp. 6478–6490, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html.

Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, pp. 2796–2806, 2018.

Dheeru Dua, Casey Graff, et al. UCI machine learning repository. 2017.

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In *Proceedings of the 3rd Innovations in Theoretical Computer Science Conference*, pp. 214–226, 2012.

Yanai Elazar and Yoav Goldberg. Adversarial removal of demographic attributes from text data. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018*, pp. 11–21. Association for Computational Linguistics, 2018.

Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel (eds.), *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017*, pp. 1615–1625. Association for Computational Linguistics, 2017.

Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In *Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 259–268, 2015.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 1180–1189, Lille, France, 07–09 Jul 2015. PMLR.

Gabriel Goh, Andrew Cotter, Maya Gupta, and Michael P Friedlander. Satisfying real-world goals with dataset constraints. In *Advances in Neural Information Processing Systems*, pp. 2415–2423, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in Neural Information Processing Systems*, 27, 2014.

Xudong Han, Timothy Baldwin, and Trevor Cohn. Diverse adversaries for mitigating bias in training.
In Paola Merlo, Jörg Tiedemann, and Reut Tsarfaty (eds.), *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021*, pp. 2760–2765. Association for Computational Linguistics, 2021.

Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. *Advances in Neural Information Processing Systems*, 29:3315–3323, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Ursula Hebert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (Computationally-identifiable) masses. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 1939–1948. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/hebert-johnson18a.html.

Vasileios Iosifidis and Eirini Ntoutsi. Adafair: Cumulative fairness adaptive boosting. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pp. 781–790, 2019.

Vasileios Iosifidis, Besnik Fetahu, and Eirini Ntoutsi. Fae: A fairness-aware ensemble framework. In *2019 IEEE International Conference on Big Data (Big Data)*, pp. 1375–1380. IEEE, 2019.

Heinrich Jiang and Ofir Nachum. Identifying and correcting label bias in machine learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 702–712. PMLR, 2020.

Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. *Knowledge and Information Systems*, 33(1):1–33, 2012.

Faisal Kamiran, Toon Calders, and Mykola Pechenizkiy. Discrimination aware decision tree learning. In *2010 IEEE International Conference on Data Mining*, pp. 869–874. IEEE, 2010.

Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 35–50. Springer, 2012.

Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 2564–2572. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/kearns18a.html.

Ron Kohavi. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. In Evangelos Simoudis, Jiawei Han, and Usama M. Fayyad (eds.), *KDD*, pp. 202–207. AAAI Press, 1996.

Emmanouil Krasanakis, Eleftherios Spyromitros-Xioufis, Symeon Papadopoulos, and Yiannis Kompatsiaris. Adaptive sensitive reweighting to mitigate bias in fairness-aware classification. In *Proceedings of the 2018 World Wide Web Conference*, pp. 853–862, 2018.

Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. *Advances in Neural Information Processing Systems*, 30, 2017.

Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the compas recidivism algorithm. *ProPublica (5 2016)*, 9(1):3–3, 2016.

Yitong Li, Timothy Baldwin, and Trevor Cohn. Towards robust and privacy-preserving text representations.
In Iryna Gurevych and Yusuke Miyao (eds.), *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers*, pp. 25–30. Association for Computational Linguistics, 2018. doi: 10.18653/v1/P18-2005. URL https://aclanthology.org/P18-2005/.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 3730–3738, 2015.

Michael Lohaus, Michaël Perrot, and Ulrike Von Luxburg. Too relaxed to be fair. In *International Conference on Machine Learning*, pp. 6360–6369. PMLR, 2020.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings*. OpenReview.net, 2018. URL https://openreview.net/forum?id=rJzIBfZAb.

Gaurav Maheshwari, Pascal Denis, Mikaela Keller, and Aurélien Bellet. Fair NLP models with differentially private text encoders. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), *Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022*, pp. 6913–6930. Association for Computational Linguistics, 2022. URL https://aclanthology.org/2022.findings-emnlp.514.

Paul Mangold, Michaël Perrot, Aurélien Bellet, and Marc Tommasi. Differential privacy has bounded impact on fairness in classification. *arXiv preprint arXiv:2210.16242*, 2022.

Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. *ACM Computing Surveys (CSUR)*, 54(6):1–35, 2021.

Brent D. Mittelstadt, Sandra Wachter, and Chris Russell. The unfairness of fair machine learning: Levelling down and strict egalitarianism by default. *CoRR*, abs/2302.02404, 2023. doi: 10.48550/arXiv.2302.02404. URL https://doi.org/10.48550/arXiv.2302.02404.

Mustafa Safa Ozdayi, Murat Kantarcioglu, and Rishabh Iyer. Bifair: Training fair models with bilevel optimization. *arXiv preprint arXiv:2106.04757*, 2021.

Edward Raff and Jared Sylvester. Gradient reversal against discrimination: A fair neural network learning approach. In *2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)*, pp. 189–198. IEEE, 2018.

Michael Redmond and Alok Baveja. A data-driven software tool for enabling cooperative information sharing among police departments. *European Journal of Operational Research*, 141(3):660–678, 2002.

Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh. Fairbatch: Batch selection for model fairness. In *International Conference on Learning Representations*, 2020.

Shai Shalev-Shwartz and Shai Ben-David. *Understanding Machine Learning: From Theory to Algorithms*. Cambridge University Press, 2014.

Abraham Wald. Statistical decision functions which minimize the maximum risk. *Annals of Mathematics*, pp. 265–280, 1945.

Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. Learning nondiscriminatory predictors. In *Conference on Learning Theory*, pp. 1920–1953. PMLR, 2017.

Yongkai Wu, Lu Zhang, and Xintao Wu. On convexity and bounds of fairness-aware classification. In *The World Wide Web Conference*, pp. 3356–3362, 2019.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi.
Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In *Proceedings of the 26th International Conference on World Wide Web*, pp. 1171–1180, 2017a.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. In *Artificial Intelligence and Statistics*, pp. 962–970. PMLR, 2017b.

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In *International Conference on Machine Learning*, pp. 325–333. PMLR, 2013.

Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5810–5818, 2017.

Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, and Chris Russell. Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022*, pp. 10400–10411. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01016. URL https://doi.org/10.1109/CVPR52688.2022.01016.

Indre Žliobaite, Faisal Kamiran, and Toon Calders. Handling conditional discrimination. In *2011 IEEE 11th International Conference on Data Mining*, pp. 992–1001. IEEE, 2011.

## Appendix

In this appendix, we provide details that were omitted in the main paper. First, in Section A, we review several works closely related to ours. Then, in Section B, we show that several well-known group fairness measures are compatible with FairGrad. In Section C, we prove Lemma 1. Next, in Section D, we derive the update rules for FairGrad with $\epsilon$-fairness. Finally, in Section E, we provide additional experiments.

## A Related Work

The fairness literature is extensive and we refer the interested reader to recent surveys (Caton & Haas, 2020; Mehrabi et al., 2021) to get an overview of the subject. Here, we focus on recent works that are more closely related to our approach.

BiFair (Ozdayi et al., 2021). This paper proposes a bilevel optimization scheme for fairness. The idea is to use an outer optimization scheme that learns weights for each example so that the trade-off between fairness and accuracy is as favorable as possible, while an inner optimization scheme learns a model that is as accurate as possible. One limitation of this approach is that it does not directly optimize the fairness level of the model but rather a relaxation that does not provide any guarantees on the goodness of the learned predictor. Furthermore, it is limited to binary classification with a binary sensitive attribute. In this paper, we also learn weights for the examples in an iterative way. However, we use a different update rule. Furthermore, we focus on exact fairness definitions rather than relaxations and our objective is to learn accurate models with given levels of fairness rather than a trade-off between the two. Finally, our approach is not limited to the binary setting.

FairBatch (Roh et al., 2020). This paper proposes a batch gradient descent approach to learn fair models. More precisely, the idea is to draw a batch of examples from a skewed distribution that favors the disadvantaged groups by oversampling them.
In this paper, we propose to use a re-weighting approach, which could also be interpreted as altering the distribution of the examples based on their fairness level if all the weights were positive. However, we allow the use of negative weights, and we prove that they are sometimes necessary to achieve fairness. Furthermore, we employ a different update rule for the weights.

AdaFair (Iosifidis & Ntoutsi, 2019). This paper proposes a boosting based framework to learn fair models. The underlying idea is to modify the weights of the examples depending on both the performances of the current strong classifier and the group memberships. Hence, examples that belong to the disadvantaged group and are incorrectly classified receive higher weights than the examples that belong to the advantaged group and are correctly classified. In this paper, we use a similar high level idea but we use different weights that do not depend on the accuracy of the model but solely on its fairness. Furthermore, rather than a boosting based approach, we consider problems that can be solved using gradient descent. Finally, while AdaFair only focuses on Equalized Odds, we show that our approach works with several fairness notions.

Identifying and Correcting Label Bias in Machine Learning (Jiang & Nachum, 2020). This paper tackles the fairness problem by assuming that the observed labels are biased compared to the true labels. The goal is then to learn a model with respect to the true labels using only the observed labels. To this end, it proposes to use an iterative re-weighting procedure where positive example-wise weights and the model are alternately updated. In this paper, we also propose a re-weighting approach. However, we use different weights that are not necessarily positive. Furthermore, our approach is not limited to binary labels and can handle multiclass problems.

## B Reformulation Of Various Group Fairness Notions

In this section, we present several group fairness notions which respect our fairness definition presented in Section 2.1.

Example 2 (**Equalized Odds (EOdds) (Hardt et al., 2016)**). A model $h_\theta$ is fair for Equalized Odds when the probability of predicting the correct label is independent of the sensitive attribute, that is, $\forall l \in \mathcal{Y}, \forall r \in \mathcal{S}$,

$$\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,s=r,y=l\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,y=l\right).$$

It means that we need to partition the space into $K = |\mathcal{Y} \times \mathcal{S}|$ groups and, $\forall l \in \mathcal{Y}, \forall r \in \mathcal{S}$, we define $\widehat{F}_{(l,r)}$ as

$$\begin{aligned}
\widehat{F}_{(l,r)}(\mathcal{T},h_{\theta})&=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\,|\,y=l\right)-\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\,|\,s=r,y=l\right)\\
&=\sum_{(l,r')\neq(l,r)}\widehat{\mathbb{P}}\left(s=r'\,|\,y=l\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\,|\,s=r',y=l\right)\\
&\quad-\left(1-\widehat{\mathbb{P}}\left(s=r\,|\,y=l\right)\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\,|\,s=r,y=l\right)
\end{aligned}$$

where the law of total probability was used to obtain the last equation. Thus, Equalized Odds satisfies all our assumptions with $C_{(l,r)}^{(l,r)}=\widehat{\mathbb{P}}\left(s=r\,|\,y=l\right)-1$, $C_{(l,r)}^{(l,r')}=\widehat{\mathbb{P}}\left(s=r'\,|\,y=l\right)$, $C_{(l,r)}^{(l',r')}=0$ with $r'\neq r$ and $l'\neq l$, and $C_{(l,r)}^{0}=0$.

Example 3 (**Equality of Opportunity (EOpp) (Hardt et al., 2016)**).
A model $h_\theta$ is fair for Equality of Opportunity when the probability of predicting the correct label is independent of the sensitive attribute for a given subset $\mathcal{Y}' \subset \mathcal{Y}$ of labels called the desirable outcomes, that is, $\forall l \in \mathcal{Y}', \forall r \in \mathcal{S}$,

$$\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,s=r,y=l\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,y=l\right).$$

It means that we need to partition the space into $K = |\mathcal{Y} \times \mathcal{S}|$ groups and, $\forall l \in \mathcal{Y}, \forall r \in \mathcal{S}$, we define $\widehat{F}_{(l,r)}$ as

$$\widehat{F}_{(l,r)}(\mathcal{T},h_{\theta})=\begin{cases}\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,s=r,y=l\right)-\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,y=l\right)&\forall(l,r)\in\mathcal{Y}'\times\mathcal{S}\\ 0&\forall(l,r)\in\mathcal{Y}\times\mathcal{S}\setminus\mathcal{Y}'\times\mathcal{S}\end{cases}$$

which can then be rewritten in the correct form in the same way as Equalized Odds, the only difference being that $C_{(l,r)}^{\cdot}=0, \forall(l,r)\in\mathcal{Y}\times\mathcal{S}\setminus\mathcal{Y}'\times\mathcal{S}$.

Example 4 (**Demographic Parity (DP) (Calders et al., 2009)**). A model $h_\theta$ is fair for Demographic Parity when the probability of predicting a binary label is independent of the sensitive attribute, that is, $\forall l \in \mathcal{Y}, \forall r \in \mathcal{S}$,

$$\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\,|\,s=r\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)=l\right).$$

It means that we need to partition the space into $K = |\mathcal{Y} \times \mathcal{S}|$ groups and, $\forall l \in \mathcal{Y}, \forall r \in \mathcal{S}$, we define $\widehat{F}_{(l,r)}$ as

$$\begin{aligned}
\widehat{F}_{(l,r)}(\mathcal{T},h_{\theta})&=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\right)-\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq l\,|\,s=r\right)\\
&=\left[\widehat{\mathbb{P}}\left(y=l,s=r\right)-\widehat{\mathbb{P}}\left(y=l\,|\,s=r\right)\right]\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r,y=l\right)\\
&\quad+\sum_{(l,r')\neq(l,r)}\widehat{\mathbb{P}}\left(y=l,s=r'\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r',y=l\right)\\
&\quad+\left[\widehat{\mathbb{P}}\left(y=\bar{l}\,|\,s=r\right)-\widehat{\mathbb{P}}\left(y=\bar{l},s=r\right)\right]\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r,y=\bar{l}\right)\\
&\quad-\sum_{(\bar{l},r')\neq(\bar{l},r)}\widehat{\mathbb{P}}\left(y=\bar{l},s=r'\right)\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,s=r',y=\bar{l}\right)\\
&\quad+\widehat{\mathbb{P}}\left(y=\bar{l}\right)-\widehat{\mathbb{P}}\left(y=\bar{l}\,|\,s=r\right)
\end{aligned}$$

where the law of total probability was used to obtain the last equation and $\bar{l}$ denotes the label other than $l$. Thus, Demographic Parity satisfies all our assumptions with $C_{(l,r)}^{(l,r)}=\widehat{\mathbb{P}}\left(y=l,s=r\right)-\widehat{\mathbb{P}}\left(y=l\,|\,s=r\right)$, $C_{(l,r)}^{(l,r')}=\widehat{\mathbb{P}}\left(y=l,s=r'\right)$ with $r'\neq r$, $C_{(l,r)}^{(\bar{l},r)}=\widehat{\mathbb{P}}\left(y=\bar{l}\,|\,s=r\right)-\widehat{\mathbb{P}}\left(y=\bar{l},s=r\right)$, $C_{(l,r)}^{(\bar{l},r')}=-\widehat{\mathbb{P}}\left(y=\bar{l},s=r'\right)$ with $r'\neq r$, and $C_{(l,r)}^{0}=\widehat{\mathbb{P}}\left(y=\bar{l}\right)-\widehat{\mathbb{P}}\left(y=\bar{l}\,|\,s=r\right)$.

## C Proof Of Lemma 1

Lemma (Negative weights are necessary). *Assume that the fairness notion under consideration is Accuracy Parity. Let $h_\theta^*$ be the most accurate and fair model. Then using negative weights is necessary as long as*

$$\min_{\substack{h_{\theta}\in\mathcal{H}\\ h_{\theta}\,\text{unfair}}}\max_{\mathcal{T}_{k}}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)<\widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y\right).$$

Proof. To prove this Lemma, one first needs to notice that, for Accuracy Parity, since $\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)=1$, we have that

$$\sum_{k'=1}^{K}C_{k}^{k'}=\left(\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)-1\right)+\sum_{\substack{k'=1\\ k'\neq k}}^{K}\widehat{\mathbb{P}}\left(\mathcal{T}_{k'}\right)=0.$$

This implies that

$$\sum_{k=1}^{K}\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k'=1}^{K}C_{k'}^{k}\lambda_{k'}\right]=1.$$

Hence, whatever our choice of $\lambda$, the weights will always sum to one.
In other words, since we also have that $\sum_{k=1}^{K}\lambda_{k}C_{k}^{0}=0$ by definition, for a given hypothesis $h_\theta$, we have that

$$\max_{\lambda_{1},\ldots,\lambda_{K}\in\mathbb{R}}\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k'=1}^{K}C_{k'}^{k}\lambda_{k'}\right]\tag{5}$$

$$=\max_{\substack{w_{1},\ldots,w_{K}\in\mathbb{R}\\ s.t.\,\sum_{k}w_{k}=1}}\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)w_{k}\tag{6}$$

where, given $w_1, \ldots, w_K$, the original values of $\lambda$ can be obtained by solving the linear system $C\lambda = w$ where

$$C=\begin{pmatrix}C_{1}^{1}&\dots&C_{K}^{1}\\ \vdots&&\vdots\\ C_{1}^{K}&\dots&C_{K}^{K}\end{pmatrix},\quad\lambda=\begin{pmatrix}\lambda_{1}\\ \vdots\\ \lambda_{K}\end{pmatrix},\quad w=\begin{pmatrix}w_{1}-\widehat{\mathbb{P}}\left(\mathcal{T}_{1}\right)\\ \vdots\\ w_{K}-\widehat{\mathbb{P}}\left(\mathcal{T}_{K}\right)\end{pmatrix},$$

which is guaranteed to have infinitely many solutions since the rank of the matrix $C$ is $K-1$ and the rank of the augmented matrix $(C|w)$ is also $K-1$. Here we are using the fact that $\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)\neq 0, \forall k$, since all the groups have to be represented to be taken into account.

We will now assume that all the weights are positive, that is $w_k \geq 0, \forall k$. Then, the best strategy to solve Problem (6) is to put all the weight on the worst-off group $k$, that is, set $w_k = 1$ and $w_{k'} = 0, \forall k' \neq k$. It implies that

$$\max_{\substack{w_{1},\ldots,w_{K}\in\mathbb{R}\\ s.t.\,\sum_{k}w_{k}=1}}\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)w_{k}=\max_{k}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right).$$

Furthermore, notice that, for fair models with respect to Accuracy Parity, we have that $\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\right), \forall k$. Thus, if it holds that

$$\min_{\substack{h_{\theta}\in\mathcal{H}\\ h_{\theta}\,\text{unfair}}}\max_{\mathcal{T}_{k}}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)<\widehat{\mathbb{P}}\left(h_{\theta}^{*}(x)\neq y\right)$$

where $h_\theta^*$ is the most accurate and fair model, then the optimal solution of Problem (3) in the main paper will be unfair. It implies that, in this case, using positive weights is not sufficient and negative weights are necessary.

## D FairGrad For $\epsilon$**-Fairness**

To derive FairGrad for $\epsilon$-fairness we first consider the following standard optimization problem

$$\begin{array}{rl}
\arg\min & \widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\right)\\
\text{s.t.} & \forall k\in[K],\,\widehat{F}_{k}(\mathcal{T},h_{\theta})\leq\epsilon\\
& \forall k\in[K],\,\widehat{F}_{k}(\mathcal{T},h_{\theta})\geq-\epsilon.
\end{array}$$

We, once again, use a standard multipliers approach to obtain the following unconstrained formulation:

$$\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K},\delta_{1},\ldots,\delta_{K}\right)=\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\right)+\sum_{k=1}^{K}\lambda_{k}\left(\widehat{F}_{k}(\mathcal{T},h_{\theta})-\epsilon\right)-\delta_{k}\left(\widehat{F}_{k}(\mathcal{T},h_{\theta})+\epsilon\right)\tag{7}$$

where $\lambda_1, \ldots, \lambda_K$ and $\delta_1, \ldots, \delta_K$ are the multipliers that belong to $\mathbb{R}^+$, that is, the set of positive reals.
Once again, to solve this problem, we will use an alternating approach where the hypothesis and the multipliers are updated one after the other.

Updating the Multipliers. To update the values $\lambda_1, \ldots, \lambda_K$ and $\delta_1, \ldots, \delta_K$, we will use a standard gradient ascent procedure. Hence, noting that the gradients of the previous formulation are

$$\nabla_{\lambda_{1},\ldots,\lambda_{K}}\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K},\delta_{1},\ldots,\delta_{K}\right)=\begin{pmatrix}\widehat{F}_{1}(\mathcal{T},h_{\theta})-\epsilon\\ \vdots\\ \widehat{F}_{K}(\mathcal{T},h_{\theta})-\epsilon\end{pmatrix}$$

$$\nabla_{\delta_{1},\ldots,\delta_{K}}\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K},\delta_{1},\ldots,\delta_{K}\right)=\begin{pmatrix}-\widehat{F}_{1}(\mathcal{T},h_{\theta})-\epsilon\\ \vdots\\ -\widehat{F}_{K}(\mathcal{T},h_{\theta})-\epsilon\end{pmatrix}$$

we have the following update rules, $\forall k \in [K]$,

$$\begin{aligned}
\lambda_{k}^{T+1}&=\max\left(0,\lambda_{k}^{T}+\eta\left(\widehat{F}_{k}\left(\mathcal{T},h_{\theta}^{T}\right)-\epsilon\right)\right)\\
\delta_{k}^{T+1}&=\max\left(0,\delta_{k}^{T}-\eta\left(\widehat{F}_{k}\left(\mathcal{T},h_{\theta}^{T}\right)+\epsilon\right)\right)
\end{aligned}$$

where $\eta$ is a fairness rate that controls the importance of each weight update.

Updating the Model. To update the parameters $\theta \in \mathbb{R}^D$ of the model $h_\theta$, we proceed as before, using a gradient descent approach. However, first, we notice that given the fairness notions that we consider, Equation (7) is equivalent to

$$\begin{aligned}
\mathcal{L}\left(h_{\theta},\lambda_{1},\ldots,\lambda_{K},\delta_{1},\ldots,\delta_{K}\right)&=\sum_{k=1}^{K}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k'=1}^{K}C_{k'}^{k}\left(\lambda_{k'}-\delta_{k'}\right)\right]\\
&\quad-\sum_{k=1}^{K}\left(\lambda_{k}+\delta_{k}\right)\epsilon+\sum_{k=1}^{K}(\lambda_{k}-\delta_{k})C_{k}^{0}.
\end{aligned}\tag{8}$$

Since the additional terms in the optimization problem do not depend on $h_\theta$, the main difference between exact and $\epsilon$-fairness is the nature of the weights. More precisely, at iteration $T$, the update rule becomes

$$\theta^{T+1}=\theta^{T}-\eta_{\theta}\sum_{k=1}^{K}\left[\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)+\sum_{k'=1}^{K}C_{k'}^{k}\left(\lambda_{k'}-\delta_{k'}\right)\right]\nabla_{\theta}\widehat{\mathbb{P}}\left(h_{\theta}(x)\neq y\,|\,\mathcal{T}_{k}\right)$$

where $\eta_\theta$ is a learning rate. Once again, we obtain a simple re-weighting scheme where the weights depend on the current fairness level of the model through $\lambda_1, \ldots, \lambda_K$ and $\delta_1, \ldots, \delta_K$, the relative size of each group through $\widehat{\mathbb{P}}\left(\mathcal{T}_{k}\right)$, and the fairness notion through the constants $C$.

## E Extended Experiments

In this section, we provide additional details related to the baselines and the hyper-parameter tuning procedure. We then provide descriptions of the datasets and finally the results.

## E.1 Baselines

- **Adversarial**: One of the common ways of removing sensitive information from the model's representation is via adversarial learning. Broadly, it consists of three components, namely an encoder, a task classifier, and an adversary. On the one hand, the objective of the adversary is to predict sensitive information from the encoder.
On the other hand, the encoder aims to create representations that are useful for the downstream task (task classifier) and, at the same time, fool the adversary. The adversary is generally connected to the encoder via a gradient reversal layer (Ganin & Lempitsky, 2015) which acts like an identity function during the forward pass and scales the loss with a parameter $-\lambda$ during the backward pass. In our setting, the encoder is a Multi-Layer Perceptron with two hidden layers of size 64 and 128 respectively, and the task classifier is another Multi-Layer Perceptron with a single hidden layer of size 32. The adversary is the same as the main task classifier. We use ReLU as the activation function with the dropout set to 0.2 and employ batch normalization with default PyTorch parameters. As a part of the hyper-parameter tuning, we did a grid search over $\lambda$, varying it from 0.1 to 3.0 with an interval of 0.2.
- **BiFair (Ozdayi et al., 2021)**: For this baseline, we fix the weight parameter to be of length 8 as suggested in the code released by the authors5. In this fixed setting, we perform a grid search over the following hyper-parameters:
  - Batch Size: 128, 256, 512
  - Weight Decay: 0.0, 0.001
  - Fairness Loss Weight: 0.5, 1, 2, 4
  - Inner Loop Length: 5, 25, 50
- **Constraints**: We use the implementation available in the TensorFlow Constrained Optimization6 library with default hyper-parameters.
- **FairBatch**: We use the implementation publicly released by the authors7.
- **Weighted ERM**: We reweigh each example in the dataset based on the inverse of the proportion of the sensitive group it belongs to.
- **Reduction**: We use the implementation available in Fairlearn8 with default hyper-parameters.

In our initial experiments, we varied the batch size and learning rates for both Constraints and FairBatch. However, we found that the default hyper-parameters as specified by the authors result in the best performances. In the spirit of being comparable in terms of hyper-parameter search budget, we also fix all hyper-parameters of FairGrad, apart from the batch size and weight decay. We experiment with two different batch sizes, namely 64 or 512, for the standard fairness datasets. Similarly, we also experiment with three weight decay values, namely 0.0, 0.001, and 0.01. Note that we also vary weight decay and batch sizes for FairBatch, Adversarial, Unconstrained, and BiFair. For all our experiments, apart from BiFair, we use Batch Gradient Descent as the optimizer with a learning rate of 0.1 and a gradient clipping of 0.05 to avoid exploding gradients. For BiFair, we employ the Adam optimizer as suggested by the authors with a learning rate of 0.001. For FairGrad, FairBatch and Unconstrained, we considered 6 hyper-parameter combinations. For BiFair, we considered 72 such combinations, while for Adversarial, there were 90 combinations.

## E.2 Datasets

Here, we provide additional details on the datasets used in our experiments. We begin by describing the standard fairness datasets for which we follow the pre-processing procedure described in Lohaus et al. (2020).

- **Adult**9: The dataset (Kohavi, 1996) is composed of 45,222 instances, with 14 features each describing several attributes of a person. The objective is to predict the income of a person (below or above 50k) while remaining fair with respect to gender (binary in this case). Following the pre-processing step of Wu et al. (2019), only 9 features were used for training.
- **CelebA**10: The dataset (Liu et al., 2015) consists of 202,599 images, along with 40 binary attributes associated with each image. We use 38 of these as features while keeping gender as the sensitive attribute and "Smiling" as the class label.
- **Dutch**11: The dataset (Žliobaite et al., 2011) is composed of 60,420 instances with each instance described by 12 features. We predict "Low Income" or "High Income", as dictated by the occupation, as the main classification task, with gender as the sensitive attribute.

5https://github.com/TinfoilHat0/BiFair
6https://github.com/google-research/tensorflow_constrained_optimization
7https://github.com/yuji-roh/fairbatch
8https://fairlearn.org/
9https://archive.ics.uci.edu/ml/datasets/adult
10https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
11https://sites.google.com/site/conditionaldiscrimination/

- **Compas**12: The dataset (Larson et al., 2016) contains 6172 data points, where each data point has 53 features. The goal is to predict if the defendant will be arrested again within two years of the decision. The sensitive attribute is race, which has been merged into "White" and "Non White" categories.
- **Communities and Crime**13: The dataset (Redmond & Baveja, 2002) is composed of 1994 instances with 128 features, of which 29 have been dropped. The objective is to predict the number of violent crimes in the community, with race being the sensitive attribute.
- **German Credit**14: The dataset (Dua et al., 2017) consists of 1000 instances, with each having 20 attributes. The objective is to predict a person's creditworthiness (binary), with gender being the sensitive attribute.
- **Gaussian**15: It is a toy dataset with binary task label and binary sensitive attribute, introduced in Lohaus et al. (2020). It is constructed by drawing points from different Gaussian distributions. We follow the same mechanism as described in Lohaus et al. (2020), and sample 50000 data points for each class.
- **Adult Folktables**16: This dataset (Ding et al., 2021) is an updated version of the original Adult Income dataset. We use California census data with gender as the sensitive attribute. There are 195665 instances, with 9 features describing several attributes of a person. We use the same pre-processing step as recommended by the authors.

For all these datasets, we use 20% of the data as a test set and 80% as a train set. We further divide the train set into two and keep 25% of the training examples as a validation set. For each repetition, we randomly shuffle the data before splitting it, and thus we have unique splits for each random seed. We use the following seeds: 10, 20, 30, 40, 50 for all our experiments. As a last pre-processing step, we centered and scaled each feature independently by subtracting the mean and dividing by the standard deviation, both of which were estimated on the training set.

**Twitter Sentiment Analysis**17: The dataset (Blodgett et al., 2016) consists of 200k tweets with binary sensitive attribute (race) and binary sentiment score. We follow the setup proposed by Han et al. (2021) and Elazar & Goldberg (2018) and create bias in the dataset by changing the proportion of each subgroup (race-sentiment) in the training set. With two sentiment classes being happy and sad, and two race classes being AAE and SAE, the training data consists of 40% AAE-happy, 10% AAE-sad, 10% SAE-happy, and 40% SAE-sad. The test set remains balanced.
The tweets are encoded using the DeepMoji (Felbo et al., 2017) encoder with no fine-tuning, which has been pre-trained over millions of tweets to predict their emoji, thereby predicting the sentiment. Note that the train-test splits are pre-defined and thus do not change based on the random seed of the repetition.

12https://github.com/propublica/compas-analysis
13http://archive.ics.uci.edu/ml/datasets/communities+and+crime
14https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29
15https://github.com/mlohaus/SearchFair/blob/master/examples/get_synthetic_data.py
16https://github.com/zykls/folktables
17https://slanglab.cs.umass.edu/TwitterAAE/

## E.3 Detailed Results

Figure 6: Results for the Adult dataset with different fairness measures.

Table 4: Results for the Adult dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively.

| Method (L) | Accuracy ↑ | Measure | Mean Abs. ↓ | Maximum | Minimum |
|---|---|---|---|---|---|
| Unconstrained | 0.8456 ± 0.0033 | AP | 0.0571 ± 0.0022 | 0.077 ± 0.0029 | -0.0373 ± 0.0017 |
| Constant | 0.751 ± 0.0 | AP | 0.102 ± 0.0 | 0.138 ± 0.0 | 0.067 ± 0.0 |
| Weighted ERM | 0.8442 ± 0.0016 | AP | 0.0581 ± 0.0021 | 0.0783 ± 0.0028 | -0.0379 ± 0.0014 |
| Constrained | 0.783 ± 0.007 | AP | 0.005 ± 0.003 | 0.007 ± 0.005 | 0.004 ± 0.002 |
| Reduction | 0.7064 ± 0.0315 | AP | 0.0361 ± 0.0158 | 0.0235 ± 0.0103 | -0.0487 ± 0.0214 |
| FairGrad | 0.8124 ± 0.005 | AP | 0.0097 ± 0.0029 | 0.0131 ± 0.004 | -0.0063 ± 0.0019 |
| Unconstrained | 0.846 ± 0.0028 | Eodds | 0.0453 ± 0.0039 | 0.048 ± 0.0043 | -0.0878 ± 0.01 |
| Constant | 0.748 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.8475 ± 0.0024 | Eodds | 0.044 ± 0.0043 | 0.0477 ± 0.0031 | -0.0837 ± 0.0124 |
| Constrained | 0.805 ± 0.004 | Eodds | 0.007 ± 0.005 | 0.019 ± 0.017 | 0.002 ± 0.001 |
| BiFair | 0.793 ± 0.009 | Eodds | 0.036 ± 0.008 | 0.085 ± 0.027 | -0.03 ± 0.016 |
| FairBatch | 0.8437 ± 0.0013 | Eodds | 0.0228 ± 0.0071 | 0.0411 ± 0.0105 | -0.0245 ± 0.0183 |
| Reduction | 0.7059 ± 0.0277 | Eodds | 0.0542 ± 0.0158 | 0.0711 ± 0.0189 | -0.1055 ± 0.022 |
| FairGrad | 0.8284 ± 0.004 | Eodds | 0.0051 ± 0.0021 | 0.0078 ± 0.0068 | -0.0078 ± 0.0054 |
| Unconstrained | 0.8457 ± 0.0028 | Eopp | 0.0263 ± 0.0024 | 0.0157 ± 0.0011 | -0.0893 ± 0.0083 |
| Constant | 0.754 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.8475 ± 0.0024 | Eopp | 0.0246 ± 0.0036 | 0.0148 ± 0.002 | -0.0837 ± 0.0124 |
| Constrained | 0.846 ± 0.002 | Eopp | 0.011 ± 0.004 | 0.039 ± 0.012 | 0.0 ± 0.0 |
| BiFair | 0.8 ± 0.009 | Eopp | 0.031 ± 0.024 | 0.019 ± 0.014 | -0.107 ± 0.083 |
| FairBatch | 0.8457 ± 0.0016 | Eopp | 0.0098 ± 0.0068 | 0.0225 ± 0.0174 | -0.0166 ± 0.0241 |
| Reduction | 0.8226 ± 0.0149 | Eopp | 0.0341 ± 0.0168 | 0.116 ± 0.0575 | -0.0204 ± 0.0098 |
| FairGrad | 0.8353 ± 0.0106 | Eopp | 0.0053 ± 0.006 | 0.0177 ± 0.021 | -0.0037 ± 0.0033 |

Table 5: Results for the Adult dataset with Non Linear Models. All the results are averaged over 5 runs.
Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively.

| Method (NL) | Accuracy ↑ | Measure | Mean Abs. ↓ | Maximum | Minimum |
|---|---|---|---|---|---|
| Unconstrained | 0.8438 ± 0.0025 | AP | 0.0575 ± 0.0025 | 0.0776 ± 0.0033 | -0.0375 ± 0.0018 |
| Constant | 0.751 ± 0.0 | AP | 0.102 ± 0.0 | 0.138 ± 0.0 | 0.067 ± 0.0 |
| Weighted ERM | 0.8469 ± 0.0035 | AP | 0.0564 ± 0.003 | 0.0761 ± 0.0038 | -0.0368 ± 0.0021 |
| Adversarial | 0.8364 ± 0.0063 | AP | 0.0526 ± 0.0017 | 0.0709 ± 0.0025 | -0.0343 ± 0.0009 |
| Reduction | 0.7015 ± 0.0225 | AP | 0.0681 ± 0.0184 | 0.0444 ± 0.0122 | -0.0917 ± 0.0247 |
| FairGrad | 0.8054 ± 0.0051 | AP | 0.0034 ± 0.0033 | 0.0033 ± 0.0031 | -0.0036 ± 0.0042 |
| Unconstrained | 0.8299 ± 0.0142 | Eodds | 0.0448 ± 0.0109 | 0.0404 ± 0.0136 | -0.0977 ± 0.0422 |
| Constant | 0.748 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.8285 ± 0.0085 | Eodds | 0.0102 ± 0.0025 | 0.0196 ± 0.0102 | -0.0099 ± 0.0047 |
| Adversarial | 0.8202 ± 0.0068 | Eodds | 0.0145 ± 0.0052 | 0.0288 ± 0.0177 | -0.0153 ± 0.0067 |
| BiFair | 0.823 ± 0.017 | Eodds | 0.038 ± 0.009 | 0.09 ± 0.034 | -0.038 ± 0.015 |
| FairBatch | 0.8379 ± 0.0009 | Eodds | 0.02 ± 0.0088 | 0.0327 ± 0.0153 | -0.0244 ± 0.0218 |
| Reduction | 0.729 ± 0.0252 | Eodds | 0.0636 ± 0.0176 | 0.0673 ± 0.0203 | -0.115 ± 0.0334 |
| FairGrad | 0.827 ± 0.0071 | Eodds | 0.0118 ± 0.0024 | 0.022 ± 0.014 | -0.0165 ± 0.0135 |
| Unconstrained | 0.8382 ± 0.0076 | Eopp | 0.0242 ± 0.0031 | 0.0145 ± 0.0017 | -0.0822 ± 0.0108 |
| Constant | 0.754 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.8293 ± 0.0091 | Eopp | 0.0051 ± 0.0033 | 0.0141 ± 0.0137 | -0.0062 ± 0.0038 |
| Adversarial | 0.8324 ± 0.0058 | Eopp | 0.007 ± 0.0044 | 0.0139 ± 0.0159 | -0.0144 ± 0.0133 |
| BiFair | 0.815 ± 0.014 | Eopp | 0.03 ± 0.015 | 0.019 ± 0.009 | -0.103 ± 0.053 |
| FairBatch | 0.8415 ± 0.0054 | Eopp | 0.0082 ± 0.0073 | 0.0157 ± 0.0121 | -0.017 ± 0.0271 |
| Reduction | 0.8343 ± 0.0059 | Eopp | 0.0294 ± 0.0164 | 0.0779 ± 0.0662 | -0.0396 ± 0.0455 |
| FairGrad | 0.8373 ± 0.0043 | Eopp | 0.0053 ± 0.0047 | 0.0099 ± 0.0146 | -0.0112 ± 0.0127 |

Figure 7: Results for the CelebA dataset with different fairness measures.

Table 6: Results for the CelebA dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively.

| Method (L) | Accuracy ↑ | Measure | Mean Abs. ↓ | Maximum | Minimum |
|---|---|---|---|---|---|
| Unconstrained | 0.8532 ± 0.0009 | AP | 0.0204 ± 0.0022 | 0.017 ± 0.0019 | -0.0238 ± 0.0025 |
| Constant | 0.516 ± 0.0 | AP | 0.072 ± 0.0 | 0.084 ± 0.0 | 0.06 ± 0.0 |
| Weighted ERM | 0.853 ± 0.0008 | AP | 0.0193 ± 0.0021 | 0.0161 ± 0.0018 | -0.0225 ± 0.0023 |
| Constrained | 0.799 ± 0.013 | AP | 0.01 ± 0.001 | 0.012 ± 0.002 | 0.009 ± 0.001 |
| Reduction | 0.7734 ± 0.011 | AP | 0.0242 ± 0.006 | 0.0282 ± 0.0071 | -0.0201 ± 0.005 |
| FairGrad | 0.835 ± 0.0028 | AP | 0.0012 ± 0.0009 | 0.0011 ± 0.0007 | -0.0014 ± 0.0011 |
| Unconstrained | 0.8532 ± 0.0009 | Eodds | 0.0499 ± 0.0019 | 0.0538 ± 0.0024 | -0.1011 ± 0.0033 |
| Constant | 0.518 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.853 ± 0.0009 | Eodds | 0.0504 ± 0.0019 | 0.0532 ± 0.0024 | -0.1001 ± 0.0032 |
| Constrained | 0.802 ± 0.004 | Eodds | 0.006 ± 0.001 | 0.01 ± 0.003 | 0.002 ± 0.001 |
| BiFair | 0.845 ± 0.007 | Eodds | 0.021 ± 0.005 | 0.02 ± 0.003 | -0.036 ± 0.009 |
| FairBatch | 0.8518 ± 0.0009 | Eodds | 0.0226 ± 0.0017 | 0.0218 ± 0.0028 | -0.0411 ± 0.0053 |
| Reduction | 0.7268 ± 0.011 | Eodds | 0.0312 ± 0.0036 | 0.0628 ± 0.0089 | -0.0334 ± 0.0047 |
| FairGrad | 0.8274 ± 0.002 | Eodds | 0.0025 ± 0.0009 | 0.0038 ± 0.0018 | -0.0046 ± 0.0026 |
| Unconstrained | 0.8532 ± 0.0009 | Eopp | 0.0387 ± 0.0014 | 0.0538 ± 0.0024 | -0.1011 ± 0.0033 |
| Constant | 0.518 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.853 ± 0.0008 | Eopp | 0.0383 ± 0.0014 | 0.0531 ± 0.0024 | -0.0999 ± 0.0032 |
| Constrained | 0.834 ± 0.005 | Eopp | 0.002 ± 0.001 | 0.005 ± 0.002 | 0.0 ± 0.0 |
| BiFair | 0.848 ± 0.004 | Eopp | 0.014 ± 0.006 | 0.02 ± 0.009 | -0.037 ± 0.017 |
| FairBatch | 0.8498 ± 0.001 | Eopp | 0.0102 ± 0.0016 | 0.0142 ± 0.0022 | -0.0268 ± 0.0042 |
| Reduction | 0.7358 ± 0.0159 | Eopp | 0.0698 ± 0.0118 | 0.1824 ± 0.0313 | -0.0968 ± 0.0158 |
| FairGrad | 0.844 ± 0.0022 | Eopp | 0.0013 ± 0.0009 | 0.0025 ± 0.0021 | -0.0028 ± 0.0018 |

Table 7: Results for the CelebA dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively.

| Method (NL) | Accuracy ↑ | Measure | Mean Abs. ↓ | Maximum | Minimum |
|---|---|---|---|---|---|
| Unconstrained | 0.8587 ± 0.0015 | AP | 0.0184 ± 0.0014 | 0.0154 ± 0.0012 | -0.0215 ± 0.0016 |
| Constant | 0.516 ± 0.0 | AP | 0.072 ± 0.0 | 0.084 ± 0.0 | 0.06 ± 0.0 |
| Weighted ERM | 0.8593 ± 0.0018 | AP | 0.018 ± 0.0017 | 0.015 ± 0.0014 | -0.021 ± 0.0019 |
| Adversarial | 0.8588 ± 0.0012 | AP | 0.0178 ± 0.0014 | 0.0148 ± 0.0012 | -0.0208 ± 0.0015 |
| Reduction | 0.7802 ± 0.0142 | AP | 0.0436 ± 0.0108 | 0.0508 ± 0.0123 | -0.0364 ± 0.0092 |
| FairGrad | 0.8359 ± 0.0033 | AP | 0.0023 ± 0.0012 | 0.0025 ± 0.0015 | -0.0021 ± 0.0009 |
| Unconstrained | 0.8583 ± 0.0012 | Eodds | 0.0432 ± 0.003 | 0.0475 ± 0.0028 | -0.0893 ± 0.0049 |
| Constant | 0.518 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.8589 ± 0.0009 | Eodds | 0.0419 ± 0.0021 | 0.0459 ± 0.0025 | -0.0864 ± 0.0038 |
| Adversarial | 0.8567 ± 0.0014 | Eodds | 0.0223 ± 0.002 | 0.0272 ± 0.0039 | -0.0511 ± 0.0073 |
| BiFair | 0.856 ± 0.004 | Eodds | 0.023 ± 0.002 | 0.028 ± 0.005 | -0.052 ± 0.009 |
| FairBatch | 0.8533 ± 0.0037 | Eodds | 0.0217 ± 0.0014 | 0.0197 ± 0.0026 | -0.0321 ± 0.005 |
| Reduction | 0.7021 ± 0.0323 | Eodds | 0.0813 ± 0.0253 | 0.1777 ± 0.0426 | -0.0946 ± 0.0238 |
| FairGrad | 0.8304 ± 0.0031 | Eodds | 0.0037 ± 0.0017 | 0.0048 ± 0.0018 | -0.0055 ± 0.0023 |
| Unconstrained | 0.8585 ± 0.0016 | Eopp | 0.0341 ± 0.002 | 0.0473 ± 0.003 | -0.0889 ± 0.0052 |
| Constant | 0.518 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| Weighted ERM | 0.859 ± 0.0009 | Eopp | 0.0331 ± 0.0014 | 0.046 ± 0.0023 | -0.0866 ± 0.0035 |
| Adversarial | 0.8557 ± 0.0019 | Eopp | 0.0161 ± 0.002 | 0.0223 ± 0.0029 | -0.0419 ± 0.0053 |
| BiFair | 0.854 ± 0.004 | Eopp | 0.015 ± 0.009 | 0.021 ± 0.012 | -0.039 ± 0.022 |
| FairBatch | 0.8475 ± 0.0043 | Eopp | 0.0051 ± 0.0024 | 0.007 ± 0.0033 | -0.0131 ± 0.0063 |
| Reduction | 0.765 ± 0.0149 | Eopp | 0.0533 ± 0.0124 | 0.1393 ± 0.033 | -0.0738 ± 0.0167 |
| FairGrad | 0.8439 ± 0.0063 | Eopp | 0.0009 ± 0.0008 | 0.002 ± 0.0022 | -0.0016 ± 0.0011 |

Figure 8: Results for the Crime dataset with different fairness measures.

Table 8: Results for the Crime dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively.

| Method (L) | Accuracy ↑ | Measure | Mean Abs. ↓ | Maximum | Minimum |
|---|---|---|---|---|---|
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.8145 ± 0.0136 | AP | 0.0329 ± 0.0195 | 0.0258 ± 0.0162 -0.0399 ± 0.0229 | | | Constant | 0.734 ± 0.0 | AP | 0.272 ± 0.0 | 0.377 ± 0.0 | 0.168 ± 0.0 | | Weighted ERM | 0.808 ± 0.0246 | AP | 0.0361 ± 0.0108 | 0.0284 ± 0.0091 -0.0438 ± 0.0129 | | | Constrained | 0.775 ± 0.015 | AP | 0.025 ± 0.019 | 0.031 ± 0.025 | 0.019 ± 0.014 | | Reduction | 0.8521 ± 0.0075 | AP | 0.055 ± 0.0197 | 0.0426 ± 0.0147 -0.0673 ± 0.0253 | | | FairGrad | 0.814 ± 0.0102 | AP | 0.0403 ± 0.0181 | 0.0316 ± 0.0147 -0.049 ± 0.0218 | | | Unconstrained | 0.8035 ± 0.0212 | Eodds | 0.2152 ± 0.0215 | 0.1038 ± 0.0231 -0.396 ± 0.0433 | | | Constant | 0.677 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8045 ± 0.0271 | Eodds | 0.2086 ± 0.0357 | 0.0974 ± 0.0165 -0.3747 ± 0.0679 | | | Constrained | 0.751 ± 0.014 | Eodds | 0.036 ± 0.012 | 0.088 ± 0.043 | 0.007 ± 0.004 | | BiFair | 0.76 ± 0.03 | Eodds | 0.082 ± 0.048 | 0.048 ± 0.03 | -0.163 ± 0.092 | | FairBatch | 0.8306 ± 0.0237 | Eodds | 0.2015 ± 0.035 | 0.1054 ± 0.0333 -0.3704 ± 0.067 | | | Reduction | 0.6842 ± 0.0339 | Eodds | 0.0611 ± 0.0281 | 0.0349 ± 0.0111 -0.1291 ± 0.047 | | | FairGrad | 0.7634 ± 0.03 | Eodds | 0.0938 ± 0.0144 | 0.0491 ± 0.016 | -0.1927 ± 0.0362 | | Unconstrained | 0.804 ± 0.0215 | Eopp | 0.1215 ± 0.0183 | 0.1009 ± 0.0238 -0.3852 ± 0.0549 | | | Constant | 0.697 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8171 ± 0.0213 | Eopp | 0.1209 ± 0.0154 | 0.0985 ± 0.0106 -0.3851 ± 0.0599 | | | Constrained | 0.762 ± 0.021 | Eopp | 0.044 ± 0.021 | 0.138 ± 0.066 | 0.0 ± 0.0 | | BiFair | 0.806 ± 0.01 | Eopp | 0.085 ± 0.038 | 0.073 ± 0.042 | -0.268 ± 0.112 | | FairBatch | 0.8225 ± 0.0252 | Eopp | 0.1126 ± 0.0259 | 0.1002 ± 0.0281 -0.3501 ± 0.0821 | | | Reduction | 0.6747 ± 0.0488 | Eopp | 0.0283 ± 0.022 | 0.0413 ± 0.0375 -0.0718 ± 0.0829 | | | FairGrad | 0.7755 ± 0.0233 | Eopp | 0.0609 ± 0.0149 | 0.0507 ± 0.0166 -0.193 ± 0.0456 | | Table 9: Results for the Crime dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | | FAIRNESS | | | | |--------------------------|-----------------|------------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | | MAXIMUM | MINIMUM | | | | Unconstrained | 0.8165 ± 0.019 | AP | 0.0535 ± 0.0199 | 0.0423 ± 0.0155 -0.0648 ± 0.0251 | | | Constant | 0.734 ± 0.0 | AP | 0.272 ± 0.0 | 0.377 ± 0.0 | 0.168 ± 0.0 | | Weighted ERM | 0.8271 ± 0.0114 | AP | 0.0483 ± 0.0167 | 0.0382 ± 0.0139 -0.0584 ± 0.02 | | | Adversarial | 0.809 ± 0.0175 | AP | 0.0592 ± 0.0173 | 0.0464 ± 0.0135 -0.0719 ± 0.0223 | | | Reduction | 0.8501 ± 0.0096 | AP | 0.0559 ± 0.0215 | 0.0432 ± 0.0166 -0.0685 ± 0.0269 | | | FairGrad | 0.822 ± 0.0203 | AP | 0.0434 ± 0.0206 | 0.0341 ± 0.0162 -0.0526 ± 0.0252 | | | Unconstrained | 0.8115 ± 0.014 | Eodds | 0.1635 ± 0.0395 | 0.0854 ± 0.014 | -0.3326 ± 0.0649 | | Constant | 0.677 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8135 ± 0.0137 | Eodds | 0.1739 ± 0.0394 | 0.0861 ± 0.0212 -0.3309 ± 0.0778 | | | Adversarial | 0.791 ± 0.007 | Eodds | 0.1464 ± 0.0168 | 0.0797 ± 0.0192 -0.3001 ± 0.0296 | | | BiFair | 0.793 ± 0.022 | Eodds | 0.161 ± 0.032 | 0.091 ± 0.025 | -0.339 ± 0.048 | | FairBatch | 0.8391 ± 0.0195 | Eodds | 0.189 ± 0.0368 | 0.1106 ± 0.0313 -0.3828 ± 0.0671 | | | Reduction | 0.7258 ± 0.0267 | Eodds | 0.0743 ± 0.0409 | 0.0553 ± 0.014 | -0.1556 ± 0.0976 | | FairGrad | 0.7734 ± 0.0251 | Eodds | 0.0982 ± 0.0513 | 0.0511 ± 0.0179 -0.2016 ± 0.0771 | | | Unconstrained | 0.817 ± 0.0152 | Eopp | 0.1044 ± 0.0133 | 0.0856 ± 0.0123 -0.3321 ± 0.0489 | | | Constant | 0.697 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8205 ± 0.0184 | Eopp | 0.1159 ± 0.0191 | 0.0955 ± 0.019 | -0.368 ± 0.0642 | | Adversarial | 0.795 ± 0.0148 | Eopp | 0.0959 ± 0.0153 | 0.0802 ± 0.0227 -0.3036 ± 0.042 | | | BiFair | 0.807 ± 0.025 | Eopp | 0.11 ± 0.031 | 0.091 ± 0.031 | -0.351 ± 0.097 | | FairBatch | 0.8411 ± 0.0177 | Eopp | 0.1217 ± 0.0277 | 0.1083 ± 0.0311 -0.3784 ± 0.0891 | | | Reduction | 0.6887 ± 0.0271 | Eopp | 0.0282 ± 0.0159 | 0.034 ± 0.0281 | -0.0788 ± 0.0619 | | FairGrad | 0.7799 ± 0.0243 | Eopp | 0.0675 ± 0.0179 | 0.0556 ± 0.0147 -0.2143 ± 0.0592 | | ![29_image_0.png](29_image_0.png) ![29_image_1.png](29_image_1.png) Figure 9: Results for the Adult with multiple groups dataset with different fairness measures. Table 10: Results for the Adult with multiple groups dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | | FAIRNESS | | | | |-------------------------|-----------------|------------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | | MAXIMUM | MINIMUM | | | | Unconstrained | 0.8451 ± 0.0042 | AP | 0.0559 ± 0.0047 | 0.0985 ± 0.0111 -0.042 ± 0.003 | | | Constant | 0.754 ± 0.0 | AP | 0.097 ± 0.0 | 0.159 ± 0.0 | 0.024 ± 0.0 | | Weighted ERM | 0.8454 ± 0.0032 | AP | 0.0562 ± 0.0042 | 0.0993 ± 0.0117 -0.0426 ± 0.0018 | | | Reduction | 0.6436 ± 0.0178 | AP | 0.049 ± 0.01 | 0.0493 ± 0.017 | -0.0661 ± 0.0113 | | FairGrad | 0.807 ± 0.0022 | AP | 0.0148 ± 0.0041 | 0.0256 ± 0.0048 -0.0107 ± 0.0045 | | | Unconstrained | 0.844 ± 0.0011 | Eodds | 0.0558 ± 0.0062 | 0.0578 ± 0.0069 -0.1586 ± 0.0621 | | | Constant | 0.75 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8448 ± 0.0038 | Eodds | 0.0586 ± 0.0097 | 0.0567 ± 0.0048 -0.1702 ± 0.0776 | | | FairBatch | 0.8396 ± 0.0034 | Eodds | 0.0308 ± 0.0057 | 0.0565 ± 0.0116 -0.0641 ± 0.0234 | | | Reduction | 0.6932 ± 0.0264 | Eodds | 0.0446 ± 0.0048 | 0.0806 ± 0.043 | -0.0896 ± 0.0278 | | FairGrad | 0.8162 ± 0.0052 | Eodds | 0.0197 ± 0.0118 | 0.0373 ± 0.0233 -0.0493 ± 0.0403 | | | Unconstrained | 0.8431 ± 0.002 | Eopp | 0.0391 ± 0.0052 | 0.0297 ± 0.0131 -0.169 ± 0.0565 | | | Constant | 0.762 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8443 ± 0.0038 | Eopp | 0.0415 ± 0.01 | 0.0316 ± 0.0145 -0.1767 ± 0.0797 | | | FairBatch | 0.8392 ± 0.004 | Eopp | 0.0219 ± 0.0055 | 0.05 ± 0.0133 | -0.0749 ± 0.0285 | | Reduction | 0.7615 ± 0.0357 | Eopp | 0.026 ± 0.0189 | 0.0487 ± 0.0378 -0.1115 ± 0.0867 | | | FairGrad | 0.834 ± 0.0044 | Eopp | 0.0201 ± 0.0099 | 0.0442 ± 0.0415 -0.0679 ± 0.0808 | | Table 11: Results for the Adult with multiple groups dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | | FAIRNESS | | | | |--------------------------|---------------------|------------|-----------------|----------------------------------|------------------| | | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | Unconstrained | 0.8427 ± 0.0041 | AP | 0.0546 ± 0.0026 | 0.0966 ± 0.0098 -0.0421 ± 0.0022 | | | Constant | 0.754 ± 0.0 | AP | 0.097 ± 0.0 | 0.159 ± 0.0 | 0.024 ± 0.0 | | Weighted ERM | 0.8408 ± 0.0031 | AP | 0.0575 ± 0.0035 | 0.101 ± 0.0106 | -0.0443 ± 0.0026 | | Adversarial | 0.8358 ± 0.0043 | AP | 0.0527 ± 0.0028 | 0.0889 ± 0.0066 -0.0401 ± 0.0022 | | | Reduction | 0.7025 ± 0.0144 | AP | 0.0388 ± 0.0066 | 0.054 ± 0.0151 | -0.0525 ± 0.0099 | | FairGrad | 0.7991 ± 0.0036 | AP | 0.013 ± 0.0051 | 0.0257 ± 0.0138 -0.0125 ± 0.0043 | | | Unconstrained | 0.8347 ± 0.0129 | Eodds | 0.0523 ± 0.0126 | 0.0495 ± 0.0166 -0.1772 ± 0.0512 | | | Constant | 0.75 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8199 ± 0.002 | Eodds | 0.0287 ± 0.0076 | 0.0274 ± 0.0177 -0.1013 ± 0.0543 | | | Adversarial | 0.8251 ± 0.0064 | Eodds | 0.0223 ± 0.0065 | 0.0451 ± 0.0308 -0.0667 ± 0.0559 | | | FairBatch | 0.8212 ± 0.0103 | Eodds | 0.0806 ± 0.0137 | 0.0522 ± 0.0076 -0.2545 ± 0.0525 | | | Reduction | 0.7649 ± 0.0241 | Eodds | 0.0386 ± 0.011 | 0.044 ± 0.02 | -0.0954 ± 0.0465 | | FairGrad | 0.8128 ± 0.0102 | Eodds | 0.0196 ± 0.0061 | 0.0392 ± 0.0176 -0.0443 ± 0.0342 | | | Unconstrained | 0.8373 ± 0.0123 | Eopp | 0.0331 ± 0.008 | 0.0183 ± 0.0045 -0.1587 ± 0.0643 | | | Constant | 0.762 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8216 ± 0.0031 | Eopp | 0.0245 ± 0.008 | 0.0243 ± 0.0196 -0.1016 ± 0.0543 | | | Adversarial | 0.8343 ± 0.0036 | Eopp | 0.0209 ± 0.0093 | 0.0327 ± 0.013 | -0.0927 ± 0.0589 | | FairBatch | 0.821 ± 0.0097 | Eopp | 0.067 ± 0.0168 | 0.047 ± 0.0113 | -0.2484 ± 0.0535 | | Reduction | 0.8156 ± 0.0204 | Eopp | 0.0259 ± 0.0209 | 0.0472 ± 0.0325 -0.0968 ± 0.1117 | | | FairGrad | 0.8341 ± 0.0053 | Eopp | 0.0176 ± 0.0059 | 0.0302 ± 0.0272 -0.0731 ± 0.0543 | | ![31_image_0.png](31_image_0.png) ![31_image_1.png](31_image_1.png) Figure 10: Results for the Compas dataset with different fairness measures. Table 12: Results for the Compas dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|-----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.6644 ± 0.0137 | AP | 0.0091 ± 0.0025 | 0.0076 ± 0.0031 -0.0107 ± 0.004 | | | Constant | 0.545 ± 0.0 | AP | 0.066 ± 0.0 | 0.085 ± 0.0 | 0.047 ± 0.0 | | Weighted ERM | 0.6671 ± 0.0169 | AP | 0.0088 ± 0.004 | 0.0061 ± 0.0028 -0.0115 ± 0.0051 | | | Constrained | 0.65 ± 0.012 | AP | 0.014 ± 0.005 | 0.018 ± 0.006 | 0.009 ± 0.003 | | Reduction | 0.6141 ± 0.011 | AP | 0.0107 ± 0.0064 | 0.009 ± 0.006 | -0.0124 ± 0.0086 | | FairGrad | 0.6708 ± 0.0166 | AP | 0.0083 ± 0.0068 | 0.0057 ± 0.0048 -0.0108 ± 0.0088 | | | Unconstrained | 0.6636 ± 0.0104 | Eodds | 0.0827 ± 0.0165 | 0.0758 ± 0.0133 -0.1553 ± 0.0259 | | | Constant | 0.527 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.6685 ± 0.0073 | Eodds | 0.082 ± 0.0137 | 0.0697 ± 0.0115 -0.1618 ± 0.0222 | | | Constrained | 0.564 ± 0.014 | Eodds | 0.007 ± 0.004 | 0.014 ± 0.011 | 0.002 ± 0.001 | | BiFair | 0.672 ± 0.021 | Eodds | 0.076 ± 0.023 | 0.071 ± 0.025 | -0.15 ± 0.039 | | FairBatch | 0.6847 ± 0.0175 | Eodds | 0.09 ± 0.0094 | 0.0854 ± 0.0149 -0.1727 ± 0.0304 | | | Reduction | 0.5493 ± 0.027 | Eodds | 0.029 ± 0.0058 | 0.0268 ± 0.0062 -0.0622 ± 0.0219 | | | FairGrad | 0.6557 ± 0.0075 | Eodds | 0.0593 ± 0.0128 | 0.0524 ± 0.0102 -0.1241 ± 0.0202 | | | Unconstrained | 0.6609 ± 0.0106 | Eopp | 0.052 ± 0.0107 | 0.062 ± 0.0145 | -0.1461 ± 0.0286 | | Constant | 0.55 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.6695 ± 0.0055 | Eopp | 0.0554 ± 0.0074 | 0.0659 ± 0.0107 -0.1557 ± 0.0194 | | | Constrained | 0.565 ± 0.015 | Eopp | 0.004 ± 0.003 | 0.011 ± 0.009 | 0.0 ± 0.0 | | BiFair | 0.68 ± 0.013 | Eopp | 0.054 ± 0.016 | 0.064 ± 0.022 | -0.15 ± 0.044 | | FairBatch | 0.6865 ± 0.0171 | Eopp | 0.0618 ± 0.0134 | 0.0715 ± 0.0173 -0.1755 ± 0.0364 | | | Reduction | 0.5828 ± 0.0457 | Eopp | 0.0252 ± 0.0178 | 0.03 ± 0.0216 | -0.0707 ± 0.0498 | | FairGrad | 0.6565 ± 0.0152 | Eopp | 0.0467 ± 0.0046 | 0.0554 ± 0.0071 -0.1313 ± 0.0119 | | Table 13: Results for the Compas dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | | FAIRNESS | | | | |--------------------------|-----------------|------------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | | MAXIMUM | MINIMUM | | | | Unconstrained | 0.6593 ± 0.0192 | AP | 0.0119 ± 0.0072 | 0.0095 ± 0.004 | -0.0144 ± 0.0107 | | Constant | 0.545 ± 0.0 | AP | 0.066 ± 0.0 | 0.085 ± 0.0 | 0.047 ± 0.0 | | Weighted ERM | 0.6687 ± 0.0138 | AP | 0.0127 ± 0.0061 | 0.011 ± 0.0034 | -0.0145 ± 0.0099 | | Adversarial | 0.6583 ± 0.0157 | AP | 0.0078 ± 0.0051 | 0.0066 ± 0.0044 -0.009 ± 0.0069 | | | Reduction | 0.6287 ± 0.0117 | AP | 0.0118 ± 0.0024 | 0.0103 ± 0.0062 -0.0134 ± 0.0024 | | | FairGrad | 0.6672 ± 0.0099 | AP | 0.0113 ± 0.005 | 0.0095 ± 0.0023 -0.0131 ± 0.0082 | | | Unconstrained | 0.6562 ± 0.0154 | Eodds | 0.0782 ± 0.014 | 0.0715 ± 0.0136 -0.1521 ± 0.0277 | | | Constant | 0.527 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.6615 ± 0.0175 | Eodds | 0.0789 ± 0.0131 | 0.0726 ± 0.0077 -0.1496 ± 0.0313 | | | Adversarial | 0.6504 ± 0.0157 | Eodds | 0.059 ± 0.0138 | 0.0549 ± 0.0107 -0.1294 ± 0.0183 | | | BiFair | 0.661 ± 0.009 | Eodds | 0.07 ± 0.013 | 0.068 ± 0.018 | -0.133 ± 0.016 | | FairBatch | 0.6792 ± 0.0086 | Eodds | 0.071 ± 0.0083 | 0.0663 ± 0.0091 -0.1508 ± 0.0304 | | | Reduction | 0.5631 ± 0.0072 | Eodds | 0.0214 ± 0.0112 | 0.024 ± 0.0102 | -0.0489 ± 0.0363 | | FairGrad | 0.6457 ± 0.0088 | Eodds | 0.061 ± 0.0075 | 0.0564 ± 0.0065 -0.127 ± 0.0081 | | | Unconstrained | 0.6552 ± 0.0137 | Eopp | 0.0553 ± 0.0108 | 0.0659 ± 0.015 | -0.1552 ± 0.0281 | | Constant | 0.55 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.6604 ± 0.0163 | Eopp | 0.0519 ± 0.0111 | 0.0618 ± 0.0148 -0.1458 ± 0.0299 | | | Adversarial | 0.6494 ± 0.0148 | Eopp | 0.0472 ± 0.0072 | 0.0563 ± 0.0108 -0.1327 ± 0.0183 | | | BiFair | 0.669 ± 0.01 | Eopp | 0.042 ± 0.02 | 0.05 ± 0.025 | -0.117 ± 0.055 | | FairBatch | 0.6802 ± 0.0114 | Eopp | 0.0536 ± 0.0133 | 0.062 ± 0.0167 | -0.1526 ± 0.0367 | | Reduction | 0.5801 ± 0.0258 | Eopp | 0.025 ± 0.0119 | 0.0296 ± 0.0145 -0.0702 ± 0.0333 | | | FairGrad | 0.6586 ± 0.0118 | Eopp | 0.0476 ± 0.0056 | 0.0563 ± 0.0067 -0.1339 ± 0.0163 | | ![33_image_0.png](33_image_0.png) Figure 11: Results for the Dutch dataset with different fairness measures. Table 14: Results for the Dutch dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|-----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.8049 ± 0.007 | AP | 0.0281 ± 0.006 | 0.0281 ± 0.006 | -0.0282 ± 0.0061 | | Constant | 0.524 ± 0.0 | AP | 0.151 ± 0.0 | 0.152 ± 0.0 | 0.15 ± 0.0 | | Weighted ERM | 0.8052 ± 0.0073 | AP | 0.028 ± 0.006 | 0.028 ± 0.006 | -0.0281 ± 0.006 | | Constrained | 0.799 ± 0.009 | AP | 0.009 ± 0.006 | 0.009 ± 0.006 | 0.009 ± 0.006 | | Reduction | 0.723 ± 0.0341 | AP | 0.0367 ± 0.0172 | 0.0368 ± 0.0172 -0.0367 ± 0.0172 | | | FairGrad | 0.8042 ± 0.0046 | AP | 0.0048 ± 0.0033 | 0.0048 ± 0.0033 -0.0048 ± 0.0032 | | | Unconstrained | 0.8071 ± 0.0072 | Eodds | 0.0212 ± 0.0018 | 0.0322 ± 0.009 | -0.0256 ± 0.0052 | | Constant | 0.522 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8074 ± 0.0074 | Eodds | 0.0213 ± 0.002 | 0.032 ± 0.0086 | -0.0254 ± 0.0051 | | Constrained | 0.79 ± 0.005 | Eodds | 0.005 ± 0.003 | 0.009 ± 0.005 | 0.002 ± 0.002 | | BiFair | 0.804 ± 0.008 | Eodds | 0.021 ± 0.003 | 0.025 ± 0.004 | -0.033 ± 0.01 | | FairBatch | 0.809 ± 0.0096 | Eodds | 0.018 ± 0.0016 | 0.0262 ± 0.0039 -0.0211 ± 0.004 | | | Reduction | 0.6716 ± 0.0251 | Eodds | 0.0226 ± 0.006 | 0.0333 ± 0.0107 -0.0404 ± 0.0213 | | | FairGrad | 0.7978 ± 0.0064 | Eodds | 0.0053 ± 0.0019 | 0.007 ± 0.0019 | -0.009 ± 0.0049 | | Unconstrained | 0.8129 ± 0.0021 | Eopp | 0.0075 ± 0.0034 | 0.0107 ± 0.0049 -0.0193 ± 0.0086 | | | Constant | 0.524 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8077 ± 0.0078 | Eopp | 0.0076 ± 0.0034 | 0.011 ± 0.0049 | -0.0196 ± 0.0087 | | Constrained | 0.814 ± 0.003 | Eopp | 0.003 ± 0.002 | 0.007 ± 0.006 | 0.0 ± 0.0 | | BiFair | 0.808 ± 0.01 | Eopp | 0.005 ± 0.005 | 0.008 ± 0.007 | -0.012 ± 0.012 | | FairBatch | 0.8149 ± 0.0117 | Eopp | 0.0031 ± 0.0014 | 0.0044 ± 0.002 | -0.0079 ± 0.0036 | | Reduction | 0.7397 ± 0.0176 | Eopp | 0.026 ± 0.0058 | 0.0669 ± 0.0149 -0.0372 ± 0.0083 | | | FairGrad | 0.8144 ± 0.0021 | Eopp | 0.004 ± 0.0037 | 0.006 ± 0.0052 | -0.0099 ± 0.0097 | | METHOD (NL) ACCURACY ↑ | FAIRNESS | | | | | |--------------------------|-----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.7937 ± 0.0052 | AP | 0.0252 ± 0.0091 | 0.0252 ± 0.009 | -0.0252 ± 0.0091 | | Constant | 0.524 ± 0.0 | AP | 0.151 ± 0.0 | 0.152 ± 0.0 | 0.15 ± 0.0 | | Weighted ERM | 0.7954 ± 0.0023 | AP | 0.0257 ± 0.0089 | 0.0257 ± 0.0089 -0.0257 ± 0.0089 | | | Adversarial | 0.7939 ± 0.0043 | AP | 0.0232 ± 0.0071 | 0.0232 ± 0.0071 -0.0232 ± 0.007 | | | Reduction | 0.7421 ± 0.0168 | AP | 0.0227 ± 0.0141 | 0.0227 ± 0.0142 -0.0227 ± 0.0141 | | | FairGrad | 0.8043 ± 0.0071 | AP | 0.0052 ± 0.0026 | 0.0052 ± 0.0026 -0.0052 ± 0.0026 | | | Unconstrained | 0.7914 ± 0.006 | Eodds | 0.0162 ± 0.0062 | 0.0193 ± 0.0071 -0.0263 ± 0.0142 | | | Constant | 0.522 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7958 ± 0.0027 | Eodds | 0.0168 ± 0.0053 | 0.0202 ± 0.0048 -0.0261 ± 0.0131 | | | Adversarial | 0.7928 ± 0.0077 | Eodds | 0.0148 ± 0.0041 | 0.0202 ± 0.0066 -0.0211 ± 0.006 | | | BiFair | 0.819 ± 0.003 | Eodds | 0.021 ± 0.004 | 0.03 ± 0.005 | -0.028 ± 0.007 | | FairBatch | 0.8091 ± 0.012 | Eodds | 0.018 ± 0.0021 | 0.0254 ± 0.0058 -0.0248 ± 0.0062 | | | Reduction | 0.7144 ± 0.0176 | Eodds | 0.0253 ± 0.0073 | 0.0347 ± 0.0123 -0.0323 ± 0.0064 | | | FairGrad | 0.8013 ± 0.0073 | Eodds | 0.0069 ± 0.0031 | 0.0099 ± 0.0038 -0.0095 ± 0.0068 | | | Unconstrained | 0.8149 ± 0.0034 | Eopp | 0.0055 ± 0.0024 | 0.0079 ± 0.0035 -0.014 ± 0.0061 | | | Constant | 0.524 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8179 ± 0.0044 | Eopp | 0.0066 ± 0.0026 | 0.0095 ± 0.0037 -0.017 ± 0.0065 | | | Adversarial | 0.8156 ± 0.0038 | Eopp | 0.004 ± 0.0039 | 0.0058 ± 0.0057 -0.0102 ± 0.01 | | | BiFair | 0.819 ± 0.003 | Eopp | 0.009 ± 0.002 | 0.012 ± 0.003 | -0.022 ± 0.006 | | FairBatch | 0.8174 ± 0.0031 | Eopp | 0.002 ± 0.0012 | 0.0029 ± 0.0017 -0.0052 ± 0.0031 | | | Reduction | 0.7571 ± 0.0061 | Eopp | 0.0219 ± 0.0021 | 0.0563 ± 0.0054 -0.0313 ± 0.0028 | | | FairGrad | 0.8158 ± 0.0051 | Eopp | 0.0036 ± 0.0031 | 0.0051 ± 0.0045 -0.0092 ± 0.0079 | | Table 15: Results for the Dutch dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. ![35_image_0.png](35_image_0.png) Figure 12: Results for the German dataset with different fairness measures. Table 16: Results for the German dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.692 ± 0.0232 | AP | 0.0226 ± 0.0181 | 0.0169 ± 0.0111 -0.0284 ± 0.0256 | | | Constant | 0.73 ± 0.0 | AP | 0.05 ± 0.0 | 0.069 ± 0.0 | 0.031 ± 0.0 | | Weighted ERM | 0.707 ± 0.0344 | AP | 0.0243 ± 0.0191 | 0.0186 ± 0.0113 -0.0299 ± 0.027 | | | Constrained | 0.733 ± 0.033 | AP | 0.024 ± 0.025 | 0.032 ± 0.033 | 0.015 ± 0.017 | | Reduction | 0.631 ± 0.0396 | AP | 0.0323 ± 0.0139 | 0.0286 ± 0.0202 -0.036 ± 0.0185 | | | FairGrad | 0.744 ± 0.0357 | AP | 0.0274 ± 0.0212 | 0.0215 ± 0.0123 -0.0334 ± 0.0306 | | | Unconstrained | 0.69 ± 0.0266 | Eodds | 0.0316 ± 0.0207 | 0.0499 ± 0.0341 -0.0618 ± 0.0471 | | | Constant | 0.7 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.709 ± 0.0296 | Eodds | 0.0324 ± 0.0338 | 0.0461 ± 0.046 | -0.055 ± 0.0626 | | Constrained | 0.739 ± 0.027 | Eodds | 0.037 ± 0.012 | 0.072 ± 0.025 | 0.01 ± 0.004 | | BiFair | 0.698 ± 0.039 | Eodds | 0.033 ± 0.01 | 0.052 ± 0.023 | -0.059 ± 0.029 | | FairBatch | 0.7 ± 0.0247 | Eodds | 0.0706 ± 0.0184 | 0.1102 ± 0.0489 -0.1134 ± 0.0518 | | | Reduction | 0.707 ± 0.0335 | Eodds | 0.0361 ± 0.0175 | 0.0716 ± 0.056 | -0.0576 ± 0.0266 | | FairGrad | 0.734 ± 0.0358 | Eodds | 0.0464 ± 0.0201 | 0.0784 ± 0.0232 -0.0721 ± 0.0496 | | | Unconstrained | 0.704 ± 0.0193 | Eopp | 0.0053 ± 0.0035 | 0.0096 ± 0.004 | -0.0116 ± 0.0117 | | Constant | 0.7 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.706 ± 0.0328 | Eopp | 0.0048 ± 0.0039 | 0.0097 ± 0.0091 -0.0096 ± 0.0092 | | | Constrained | 0.741 ± 0.019 | Eopp | 0.005 ± 0.002 | 0.015 ± 0.006 | 0.0 ± 0.0 | | BiFair | 0.703 ± 0.037 | Eopp | 0.007 ± 0.006 | 0.014 ± 0.015 | -0.013 ± 0.015 | | FairBatch | 0.718 ± 0.0229 | Eopp | 0.0172 ± 0.0124 | 0.0272 ± 0.0187 -0.0416 ± 0.0396 | | | Reduction | 0.717 ± 0.0441 | Eopp | 0.0183 ± 0.014 | 0.036 ± 0.0254 | -0.0372 ± 0.0407 | | FairGrad | 0.723 ± 0.0425 | Eopp | 0.0125 ± 0.0043 | 0.0212 ± 0.0087 -0.0288 ± 0.0162 | | Table 17: Results for the German dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | FAIRNESS | | | | | |--------------------------|----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.695 ± 0.0122 | AP | 0.0426 ± 0.0241 | 0.0314 ± 0.0144 -0.0537 ± 0.0345 | | | Constant | 0.73 ± 0.0 | AP | 0.05 ± 0.0 | 0.069 ± 0.0 | 0.031 ± 0.0 | | Weighted ERM | 0.703 ± 0.0183 | AP | 0.035 ± 0.0237 | 0.0265 ± 0.0138 -0.0436 ± 0.0338 | | | Adversarial | 0.681 ± 0.0156 | AP | 0.041 ± 0.0254 | 0.0327 ± 0.0165 -0.0492 ± 0.0368 | | | Reduction | 0.666 ± 0.0198 | AP | 0.0173 ± 0.0171 | 0.0131 ± 0.0115 -0.0215 ± 0.0231 | | | FairGrad | 0.714 ± 0.026 | AP | 0.037 ± 0.0222 | 0.0291 ± 0.0119 -0.0448 ± 0.0331 | | | Unconstrained | 0.689 ± 0.0213 | Eodds | 0.0089 ± 0.0052 | 0.0117 ± 0.0045 -0.0144 ± 0.0116 | | | Constant | 0.7 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.703 ± 0.034 | Eodds | 0.0211 ± 0.0106 | 0.0305 ± 0.0186 -0.0372 ± 0.0158 | | | Adversarial | 0.684 ± 0.0097 | Eodds | 0.0184 ± 0.0122 | 0.0263 ± 0.0201 -0.0339 ± 0.0237 | | | BiFair | 0.725 ± 0.031 | Eodds | 0.016 ± 0.015 | 0.021 ± 0.018 | -0.027 ± 0.018 | | FairBatch | 0.692 ± 0.026 | Eodds | 0.0489 ± 0.0382 | 0.0607 ± 0.0446 -0.0882 ± 0.0983 | | | Reduction | 0.706 ± 0.0272 | Eodds | 0.0489 ± 0.0217 | 0.0742 ± 0.0266 -0.0717 ± 0.051 | | | FairGrad | 0.695 ± 0.0237 | Eodds | 0.0095 ± 0.004 | 0.0121 ± 0.0046 -0.0175 ± 0.0076 | | | Unconstrained | 0.686 ± 0.0215 | Eopp | 0.0124 ± 0.0075 | 0.0227 ± 0.0128 -0.0269 ± 0.0227 | | | Constant | 0.7 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7 ± 0.0261 | Eopp | 0.0066 ± 0.0057 | 0.0131 ± 0.0071 -0.0133 ± 0.0173 | | | Adversarial | 0.687 ± 0.0129 | Eopp | 0.0085 ± 0.0051 | 0.0203 ± 0.0147 -0.0137 ± 0.0099 | | | BiFair | 0.727 ± 0.023 | Eopp | 0.015 ± 0.013 | 0.023 ± 0.019 | -0.036 ± 0.038 | | FairBatch | 0.697 ± 0.025 | Eopp | 0.0084 ± 0.0079 | 0.0235 ± 0.0226 -0.0102 ± 0.0094 | | | Reduction | 0.701 ± 0.0397 | Eopp | 0.0102 ± 0.008 | 0.0242 ± 0.024 | -0.0167 ± 0.0134 | | FairGrad | 0.696 ± 0.0166 | Eopp | 0.0052 ± 0.0038 | 0.0093 ± 0.0064 -0.0115 ± 0.0108 | | ![37_image_0.png](37_image_0.png) Figure 13: Results for the Gaussian dataset with different fairness measures. Table 18: Results for the Gaussian dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|-----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.8689 ± 0.0037 | AP | 0.0966 ± 0.0029 | 0.0957 ± 0.0028 -0.0974 ± 0.0036 | | | Constant | 0.497 ± 0.0 | AP | 0.001 ± 0.0 | 0.001 ± 0.0 | 0.001 ± 0.0 | | Weighted ERM | 0.869 ± 0.0039 | AP | 0.0966 ± 0.0026 | 0.0957 ± 0.0023 -0.0974 ± 0.0034 | | | Constrained | 0.799 ± 0.004 | AP | 0.003 ± 0.002 | 0.003 ± 0.002 | 0.003 ± 0.002 | | Reduction | 0.7891 ± 0.0266 | AP | 0.0575 ± 0.0114 | 0.057 ± 0.0118 | -0.0579 ± 0.0111 | | FairGrad | 0.8516 ± 0.0064 | AP | 0.0558 ± 0.0094 | 0.0553 ± 0.0093 -0.0562 ± 0.0096 | | | Unconstrained | 0.869 ± 0.0037 | Eodds | 0.0971 ± 0.0026 | 0.1872 ± 0.0067 -0.1896 ± 0.0056 | | | Constant | 0.499 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.869 ± 0.0039 | Eodds | 0.0971 ± 0.0023 | 0.1869 ± 0.0063 -0.1894 ± 0.0051 | | | Constrained | 0.497 ± 0.003 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | BiFair | 0.873 ± 0.004 | Eodds | 0.113 ± 0.004 | 0.21 ± 0.007 | -0.213 ± 0.004 | | FairBatch | 0.8649 ± 0.0025 | Eodds | 0.0902 ± 0.0035 | 0.1717 ± 0.0046 -0.1719 ± 0.0079 | | | Reduction | 0.6241 ± 0.054 | Eodds | 0.0632 ± 0.0164 | 0.0732 ± 0.0198 -0.074 ± 0.0226 | | | FairGrad | 0.8459 ± 0.01 | Eodds | 0.0786 ± 0.0051 | 0.1504 ± 0.0102 -0.1527 ± 0.0142 | | | Unconstrained | 0.8598 ± 0.0121 | Eopp | 0.0928 ± 0.0012 | 0.1845 ± 0.0041 -0.1869 ± 0.0041 | | | Constant | 0.498 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8599 ± 0.0121 | Eopp | 0.0931 ± 0.0011 | 0.1849 ± 0.004 | -0.1874 ± 0.004 | | Constrained | 0.698 ± 0.005 | Eopp | 0.004 ± 0.002 | 0.008 ± 0.005 | 0.0 ± 0.0 | | BiFair | 0.863 ± 0.009 | Eopp | 0.1 ± 0.003 | 0.2 ± 0.007 | -0.202 ± 0.006 | | FairBatch | 0.8635 ± 0.0024 | Eopp | 0.085 ± 0.0023 | 0.17 ± 0.0032 | -0.1702 ± 0.0065 | | Reduction | 0.6251 ± 0.0355 | Eopp | 0.0189 ± 0.0138 | 0.0379 ± 0.0271 -0.0378 ± 0.0282 | | | FairGrad | 0.8431 ± 0.0065 | Eopp | 0.0752 ± 0.0043 | 0.1494 ± 0.0087 -0.1514 ± 0.0094 | | Table 19: Results for the Gaussian dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | FAIRNESS | | | | | |--------------------------|-----------------|---------|-----------------|----------------------------------|----------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.88 ± 0.0038 | AP | 0.0897 ± 0.0045 | 0.0888 ± 0.0035 -0.0905 ± 0.0055 | | | Constant | 0.497 ± 0.0 | AP | 0.001 ± 0.0 | 0.001 ± 0.0 | 0.001 ± 0.0 | | Weighted ERM | 0.8809 ± 0.0048 | AP | 0.0903 ± 0.0045 | 0.0894 ± 0.0033 -0.0911 ± 0.0057 | | | Adversarial | 0.8725 ± 0.0115 | AP | 0.0858 ± 0.0077 | 0.0851 ± 0.0076 -0.0866 ± 0.0081 | | | Reduction | 0.718 ± 0.0251 | AP | 0.0694 ± 0.0237 | 0.0699 ± 0.0236 -0.0689 ± 0.0239 | | | FairGrad | 0.8542 ± 0.0047 | AP | 0.0352 ± 0.0047 | 0.0349 ± 0.0048 -0.0355 ± 0.0046 | | | Unconstrained | 0.8814 ± 0.0024 | Eodds | 0.093 ± 0.0032 | 0.1807 ± 0.0066 -0.183 ± 0.005 | | | Constant | 0.499 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8821 ± 0.0031 | Eodds | 0.0939 ± 0.0013 | 0.1826 ± 0.0042 -0.185 ± 0.0033 | | | Adversarial | 0.8775 ± 0.0091 | Eodds | 0.0852 ± 0.007 | 0.1643 ± 0.0125 -0.1666 ± 0.0146 | | | BiFair | 0.868 ± 0.013 | Eodds | 0.092 ± 0.011 | 0.167 ± 0.035 | -0.168 ± 0.031 | | FairBatch | 0.8735 ± 0.0032 | Eodds | 0.0749 ± 0.0041 | 0.1455 ± 0.0059 -0.1456 ± 0.0056 | | | Reduction | 0.7309 ± 0.0189 | Eodds | 0.0262 ± 0.0141 | 0.0438 ± 0.0257 -0.0435 ± 0.0265 | | | FairGrad | 0.8539 ± 0.0056 | Eodds | 0.0596 ± 0.0068 | 0.1013 ± 0.0147 -0.1025 ± 0.0144 | | | Unconstrained | 0.8801 ± 0.004 | Eopp | 0.0902 ± 0.0017 | 0.1792 ± 0.0041 -0.1816 ± 0.0053 | | | Constant | 0.498 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.8805 ± 0.0046 | Eopp | 0.0912 ± 0.0008 | 0.1812 ± 0.0024 -0.1837 ± 0.0045 | | | Adversarial | 0.8754 ± 0.0086 | Eopp | 0.0808 ± 0.0066 | 0.1605 ± 0.0128 -0.1628 ± 0.0143 | | | BiFair | 0.88 ± 0.003 | Eopp | 0.086 ± 0.005 | 0.17 ± 0.013 | -0.172 ± 0.009 | | FairBatch | 0.874 ± 0.0035 | Eopp | 0.0733 ± 0.0029 | 0.1465 ± 0.0054 -0.1467 ± 0.0066 | | | Reduction | 0.6868 ± 0.0234 | Eopp | 0.0505 ± 0.0179 | 0.1015 ± 0.0359 -0.1005 ± 0.036 | | | FairGrad | 0.8543 ± 0.0082 | Eopp | 0.0517 ± 0.0095 | 0.1028 ± 0.0191 -0.1041 ± 0.0192 | | ![39_image_1.png](39_image_1.png) ![39_image_0.png](39_image_0.png) Figure 14: Results for the Twitter Sentiment dataset with different fairness measures. Table 20: Results for the Twitter Sentiment dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|-----------------|---------|-----------------|----------------------------------|-----------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.7211 ± 0.004 | AP | 0.0426 ± 0.0011 | 0.0426 ± 0.0011 -0.0426 ± 0.0011 | | | Constant | 0.5 ± 0.0 | AP | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7212 ± 0.0044 | AP | 0.0426 ± 0.0011 | 0.0426 ± 0.0011 -0.0426 ± 0.0011 | | | Constrained | 0.72 ± 0.002 | AP | 0.04 ± 0.003 | 0.04 ± 0.003 | 0.04 ± 0.003 | | Reduction | 0.6008 ± 0.022 | AP | 0.0159 ± 0.0092 | 0.0159 ± 0.0092 -0.0159 ± 0.0092 | | | FairGrad | 0.7219 ± 0.0027 | AP | 0.0462 ± 0.0021 | 0.0462 ± 0.0021 -0.0462 ± 0.0021 | | | Unconstrained | 0.7237 ± 0.0054 | Eodds | 0.1867 ± 0.0052 | 0.2287 ± 0.0078 -0.2288 ± 0.0078 | | | Constant | 0.5 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7234 ± 0.0054 | Eodds | 0.188 ± 0.0033 | 0.2314 ± 0.0056 -0.2315 ± 0.0056 | | | Constrained | 0.72 ± 0.004 | Eodds | 0.012 ± 0.002 | 0.019 ± 0.005 | 0.006 ± 0.005 | | BiFair | 0.736 ± 0.009 | Eodds | 0.041 ± 0.012 | 0.056 ± 0.022 | -0.056 ± 0.022 | | FairBatch | 0.7413 ± 0.0014 | Eodds | 0.1391 ± 0.0043 | 0.1755 ± 0.0084 -0.1756 ± 0.0084 | | | Reduction | 0.5962 ± 0.0113 | Eodds | 0.0213 ± 0.0108 | 0.0314 ± 0.0211 -0.0314 ± 0.021 | | | FairGrad | 0.7193 ± 0.0062 | Eodds | 0.0154 ± 0.0051 | 0.0204 ± 0.0098 -0.0204 ± 0.0098 | | | Unconstrained | 0.7244 ± 0.0051 | Eopp | 0.0719 ± 0.0012 | 0.1439 ± 0.0023 -0.1438 ± 0.0023 | | | Constant | 0.5 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.72 ± 0.0054 | Eopp | 0.0718 ± 0.0013 | 0.1437 ± 0.0026 -0.1436 ± 0.0026 | | | Constrained | 0.752 ± 0.004 | Eopp | 0.002 ± 0.001 | 0.005 ± 0.001 | 0.0 ± 0.0 | | BiFair | 0.746 ± 0.009 | Eopp | 0.009 ± 0.004 | 0.017 ± 0.009 | -0.017 ± 0.009 | | FairBatch | 0.7426 ± 0.001 | Eopp | 0.0429 ± 0.0005 | 0.0858 ± 0.0011 -0.0858 ± 0.0011 | | | Reduction | 0.6381 ± 0.0039 | Eopp | 0.0712 ± 0.0117 | 0.1424 ± 0.0234 -0.1425 ± 0.0234 | | | FairGrad | 0.7518 ± 0.0069 | Eopp | 0.0024 ± 0.002 | 0.0049 ± 0.004 | -0.0049 ± 0.004 | Table 21: Results for the Twitter Sentiment dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | FAIRNESS | | | | | |--------------------------|-----------------|---------|-----------------|----------------------------------|-----------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.715 ± 0.0043 | AP | 0.0392 ± 0.0055 | 0.0392 ± 0.0055 -0.0392 ± 0.0055 | | | Constant | 0.5 ± 0.0 | AP | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7183 ± 0.0042 | AP | 0.0427 ± 0.0019 | 0.0427 ± 0.0019 -0.0427 ± 0.0019 | | | Adversarial | 0.7385 ± 0.0075 | AP | 0.0367 ± 0.0027 | 0.0367 ± 0.0027 -0.0368 ± 0.0027 | | | Reduction | 0.6555 ± 0.0162 | AP | 0.0101 ± 0.0038 | 0.0101 ± 0.0038 -0.0101 ± 0.0038 | | | FairGrad | 0.7154 ± 0.0047 | AP | 0.0368 ± 0.0079 | 0.0367 ± 0.0078 -0.0368 ± 0.0079 | | | Unconstrained | 0.7167 ± 0.0126 | Eodds | 0.1854 ± 0.0061 | 0.2349 ± 0.0091 -0.235 ± 0.0091 | | | Constant | 0.5 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.718 ± 0.0137 | Eodds | 0.1882 ± 0.0062 | 0.2379 ± 0.0073 -0.2381 ± 0.0073 | | | Adversarial | 0.7393 ± 0.0024 | Eodds | 0.0382 ± 0.0056 | 0.06 ± 0.0151 | -0.06 ± 0.0151 | | BiFair | 0.74 ± 0.01 | Eodds | 0.039 ± 0.016 | 0.058 ± 0.017 | -0.058 ± 0.017 | | FairBatch | 0.7318 ± 0.004 | Eodds | 0.1313 ± 0.0057 | 0.1724 ± 0.0055 -0.1725 ± 0.0055 | | | Reduction | 0.6653 ± 0.0134 | Eodds | 0.0133 ± 0.0097 | 0.0199 ± 0.0172 -0.0199 ± 0.0173 | | | FairGrad | 0.717 ± 0.0082 | Eodds | 0.0109 ± 0.0027 | 0.0165 ± 0.0053 -0.0165 ± 0.0053 | | | Unconstrained | 0.7147 ± 0.0118 | Eopp | 0.0653 ± 0.0062 | 0.1306 ± 0.0124 -0.1306 ± 0.0124 | | | Constant | 0.5 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7074 ± 0.0158 | Eopp | 0.0672 ± 0.0062 | 0.1346 ± 0.0125 -0.1345 ± 0.0125 | | | Adversarial | 0.7471 ± 0.0042 | Eopp | 0.005 ± 0.0035 | 0.0099 ± 0.007 | -0.0099 ± 0.007 | | BiFair | 0.747 ± 0.009 | Eopp | 0.007 ± 0.005 | 0.013 ± 0.01 | -0.013 ± 0.01 | | FairBatch | 0.7359 ± 0.0011 | Eopp | 0.0368 ± 0.0012 | 0.0736 ± 0.0025 -0.0736 ± 0.0025 | | | Reduction | 0.681 ± 0.0078 | Eopp | 0.0436 ± 0.0071 | 0.0871 ± 0.0143 -0.0871 ± 0.0143 | | | FairGrad | 0.7401 ± 0.0059 | Eopp | 0.0049 ± 0.0041 | 0.0099 ± 0.0083 -0.0099 ± 0.0083 | | ![41_image_1.png](41_image_1.png) ![41_image_0.png](41_image_0.png) ![41_image_2.png](41_image_2.png) Figure 15: Results for the Folktables Adult dataset with different fairness measures. | METHOD (L) ACCURACY ↑ | FAIRNESS | | | | | |-------------------------|-----------------|---------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | MAXIMUM | MINIMUM | | | | | Unconstrained | 0.7905 ± 0.0033 | AP | 0.0131 ± 0.0021 | 0.0123 ± 0.0021 -0.0138 ± 0.0022 | | | Constant | 0.666 ± 0.0 | AP | 0.053 ± 0.0 | 0.056 ± 0.0 | 0.051 ± 0.0 | | Weighted ERM | 0.7906 ± 0.0032 | AP | 0.0127 ± 0.0023 | 0.0119 ± 0.0022 -0.0134 ± 0.0024 | | | Constrained | 0.467 ± 0.115 | AP | 0.036 ± 0.003 | 0.039 ± 0.003 | 0.034 ± 0.003 | | Reduction | 0.733 ± 0.0106 | AP | 0.0653 ± 0.0114 | 0.0614 ± 0.011 | -0.0692 ± 0.0118 | | FairGrad | 0.7837 ± 0.0049 | AP | 0.0023 ± 0.0009 | 0.0023 ± 0.001 | -0.0022 ± 0.0008 | | Unconstrained | 0.789 ± 0.0026 | Eodds | 0.0301 ± 0.011 | 0.0377 ± 0.0153 -0.0458 ± 0.0184 | | | Constant | 0.667 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7886 ± 0.0032 | Eodds | 0.0294 ± 0.012 | 0.0364 ± 0.0169 -0.0443 ± 0.0206 | | | Constrained | 0.663 ± 0.032 | Eodds | 0.008 ± 0.003 | 0.013 ± 0.004 | 0.004 ± 0.002 | | BiFair | 0.768 ± 0.007 | Eodds | 0.008 ± 0.005 | 0.011 ± 0.006 | -0.011 ± 0.008 | | FairBatch | 0.788 ± 0.0027 | Eodds | 0.0045 ± 0.0033 | 0.0069 ± 0.0065 -0.0063 ± 0.0049 | | | Reduction | 0.6922 ± 0.0346 | Eodds | 0.077 ± 0.0322 | 0.0761 ± 0.0257 -0.0903 ± 0.0378 | | | FairGrad | 0.7885 ± 0.0027 | Eodds | 0.0043 ± 0.0019 | 0.0073 ± 0.0037 -0.0068 ± 0.0045 | | | Unconstrained | 0.7902 ± 0.0038 | Eopp | 0.0094 ± 0.0031 | 0.0162 ± 0.0053 -0.0215 ± 0.0071 | | | Constant | 0.667 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7893 ± 0.0031 | Eopp | 0.009 ± 0.003 | 0.0155 ± 0.0051 -0.0206 ± 0.0069 | | | Constrained | 0.706 ± 0.002 | Eopp | 0.004 ± 0.0 | 0.01 ± 0.001 | 0.0 ± 0.0 | | BiFair | 0.77 ± 0.002 | Eopp | 0.019 ± 0.01 | 0.033 ± 0.017 | -0.044 ± 0.023 | | FairBatch | 0.79 ± 0.0031 | Eopp | 0.0012 ± 0.0015 | 0.0022 ± 0.0026 -0.0026 ± 0.0034 | | | Reduction | 0.7388 ± 0.0144 | Eopp | 0.0409 ± 0.0111 | 0.0932 ± 0.025 | -0.0704 ± 0.0194 | | FairGrad | 0.7893 ± 0.0026 | Eopp | 0.0011 ± 0.0009 | 0.0024 ± 0.002 | -0.0021 ± 0.0016 | Table 22: Results for the Folktables Adult dataset with Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. Table 23: Results for the Folktables Adult dataset with Non Linear Models. All the results are averaged over 5 runs. Here MEAN ABS., MAXIMUM, and MINIMUM represent the mean absolute fairness value, the fairness level of the most well-off group, and the fairness level of the worst-off group, respectively. | METHOD (NL) ACCURACY ↑ | | FAIRNESS | | | | |--------------------------|-----------------|------------|-----------------|----------------------------------|------------------| | MEASURE MEAN ABS. 
↓ | | MAXIMUM | MINIMUM | | | | Unconstrained | 0.8037 ± 0.0037 | AP | 0.0131 ± 0.0017 | 0.0123 ± 0.0016 -0.0139 ± 0.0017 | | | Constant | 0.666 ± 0.0 | AP | 0.053 ± 0.0 | 0.056 ± 0.0 | 0.051 ± 0.0 | | Weighted ERM | 0.8046 ± 0.0049 | AP | 0.0131 ± 0.0014 | 0.0123 ± 0.0014 -0.0138 ± 0.0015 | | | Adversarial | 0.8016 ± 0.0053 | AP | 0.0122 ± 0.0016 | 0.0115 ± 0.0015 -0.0129 ± 0.0016 | | | Reduction | 0.7293 ± 0.0133 | AP | 0.0991 ± 0.0149 | 0.0932 ± 0.0139 -0.1051 ± 0.016 | | | FairGrad | 0.7917 ± 0.0025 | AP | 0.0016 ± 0.0011 | 0.0016 ± 0.0011 -0.0016 ± 0.001 | | | Unconstrained | 0.7947 ± 0.0078 | Eodds | 0.0314 ± 0.0059 | 0.0373 ± 0.0058 -0.0454 ± 0.0066 | | | Constant | 0.667 ± 0.0 | Eodds | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7902 ± 0.0049 | Eodds | 0.0327 ± 0.0061 | 0.04 ± 0.0067 | -0.0488 ± 0.0077 | | Adversarial | 0.806 ± 0.0047 | Eodds | 0.0035 ± 0.0018 | 0.0051 ± 0.0021 -0.0053 ± 0.0028 | | | BiFair | 0.793 ± 0.006 | Eodds | 0.006 ± 0.003 | 0.007 ± 0.003 | -0.007 ± 0.004 | | FairBatch | 0.8061 ± 0.0044 | Eodds | 0.0051 ± 0.0015 | 0.0087 ± 0.0048 -0.0084 ± 0.0029 | | | Reduction | 0.7416 ± 0.01 | Eodds | 0.0933 ± 0.022 | 0.1517 ± 0.0311 -0.1244 ± 0.026 | | | FairGrad | 0.7997 ± 0.0087 | Eodds | 0.0045 ± 0.0029 | 0.0067 ± 0.0045 -0.0071 ± 0.0058 | | | Unconstrained | 0.7902 ± 0.0044 | Eopp | 0.0097 ± 0.0026 | 0.0168 ± 0.0045 -0.0222 ± 0.006 | | | Constant | 0.667 ± 0.0 | Eopp | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | | Weighted ERM | 0.7947 ± 0.0022 | Eopp | 0.0105 ± 0.0027 | 0.0181 ± 0.0047 -0.024 ± 0.0062 | | | Adversarial | 0.8108 ± 0.0161 | Eopp | 0.0034 ± 0.0057 | 0.0041 ± 0.0057 -0.0095 ± 0.017 | | | BiFair | 0.793 ± 0.008 | Eopp | 0.028 ± 0.017 | 0.048 ± 0.029 | -0.064 ± 0.039 | | FairBatch | 0.8038 ± 0.0063 | Eopp | 0.0008 ± 0.0005 | 0.0014 ± 0.0009 -0.0018 ± 0.0012 | | | Reduction | 0.7334 ± 0.0155 | Eopp | 0.0573 ± 0.0116 | 0.1307 ± 0.0265 -0.0986 ± 0.0199 | | | FairGrad | 0.8058 ± 0.0035 | Eopp | 0.0014 ± 0.0014 | 0.003 ± 0.0031 | -0.0026 ± 0.0024 |
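
For reference, the three aggregates reported throughout these tables can be computed from the signed per-group fairness values as in the following minimal sketch. The per-group values themselves (e.g., signed equalized-odds gaps) are assumed to be given; this is an illustration of the reported statistics, not the exact evaluation code used in the experiments.

```python
import numpy as np

def summarize_fairness(group_values):
    """Aggregate signed per-group fairness values (one entry per group)."""
    v = np.asarray(group_values, dtype=float)
    return {
        "mean_abs": float(np.mean(np.abs(v))),  # MEAN ABS. (lower is better)
        "maximum": float(np.max(v)),            # MAXIMUM: most well-off group
        "minimum": float(np.min(v)),            # MINIMUM: worst-off group
    }

# Example with hypothetical signed gaps for four groups
print(summarize_fairness([0.02, -0.05, 0.01, -0.01]))
```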
Review 1: Summary: The paper presents FairGrad, a method for inducing fairness in gradient-based algorithms. Fairness here is defined as loss functions compared across groups, and the FairGrad method reweights gradients based on the group membership of the associated individual. The paper presents empirical results on four datasets and shows that the FairGrad method has comparable and at times better results than other baselines.

Strengths and Weaknesses: S1) The empirical results are thorough and show extensive results with four datasets, (up to) five baselines, and four fairness metrics. The empirical results on batch size and epsilon values are helpful for better understanding the effects of these parameters.

W1) My biggest question about the paper is that it is unclear where to place FairGrad within the pantheon of existing fairness algorithms. As explained in Section 4.4, there is no fairness method that is consistently better than the others across all of the different datasets. Why should we consider FairGrad compared to the others? The introduction claims that the "complexity of the existing models" is the main obstacle for existing methods. Ease of implementation is a key selling point for FairGrad, but it appears that the other models are merely missing an easily accessible code package? I had originally thought that FairGrad's strength would be that it could be used for off-the-shelf algorithms as opposed to having to know the loss function, but it seems that other methods have that as well. I had thought that FairGrad was more computationally efficient than other algorithms, hence one of its strengths. Despite Table 3 going in depth about computational speed, the paper doesn't mention FairGrad as being superior in runtime to other methods. Some clarity around this would be helpful.

Small comment: Algorithm 1, line 5 is italicized and it is unclear why.

Requested Changes: More guidance about when to use FairGrad compared to alternative fairness methods would strengthen the paper. As it stands, there are a variety of fairness methods that have similar accuracy-fairness tradeoffs. Highlighting what FairGrad contributes to the landscape will help the community leverage the method.

Broader Impact Concerns: The existing Limitations and Social Impact section is sufficient.

==================================================

Review 2: Summary: This paper proposes a modification to gradient descent algorithms to incorporate/enforce fairness constraints. The procedure follows closely, in spirit and with some modifications, the reductions-based algorithms for solving fairness-constrained optimization problems in Agarwal et al. (2018) and Cotter et al. (2019). For a particular class of fairness definitions, the authors consider solving a standard fairness-constrained optimization problem that they re-interpret as an unconstrained problem with some Lagrange multipliers. As in Agarwal et al. (2018) and Cotter et al. (2019), the authors propose to solve this unconstrained problem by searching for saddle points via an iterative search -- in each step, first the $\lambda$-player moves to select $\lambda$ to drive up the objective function, and then the model player updates the prediction function's weights to lower the loss. The authors' main insight is that for the class of fairness definitions considered in the paper, the min-player's objective function is just a particular weighted average of group-specific losses.
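
(For concreteness, a minimal sketch of the descent-ascent loop described above, on a toy weighted group-loss objective. The variable names and the exponentiated-gradient-style $\lambda$ update are illustrative assumptions, not the paper's exact procedure:)

```python
import torch

torch.manual_seed(0)
w = torch.randn(2, requires_grad=True)        # model parameters
opt = torch.optim.SGD([w], lr=0.1)
data = [torch.randn(8, 2) for _ in range(3)]  # one toy batch per group
lam = torch.ones(3) / 3                       # one multiplier per group

for _ in range(100):
    group_losses = [(x @ w).pow(2).mean() for x in data]
    # lambda-player (ascent): upweight groups with larger loss
    with torch.no_grad():
        g = torch.stack([l.detach() for l in group_losses])
        lam = torch.softmax(torch.log(lam) + 0.1 * g, dim=0)
    # model-player (descent): minimize the lambda-weighted average of losses
    objective = sum(l * float(m) for l, m in zip(group_losses, lam))
    opt.zero_grad()
    objective.backward()
    opt.step()
```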
As a result, we can also take gradient descent steps to optimize the min-player's objective function. Altogether, the authors' main contribution is effectively to propose that we can solve fairness-constrained optimization problems by applying a gradient descent-ascent algorithm with an objective function that updates at each step. Such algorithms have been extensively analyzed in the literature on adversarial learning, for example in Madry et al. (2019).

References: Madry et al. (2019): Towards Deep Learning Models Resistant to Adversarial Attacks

Strengths and Weaknesses: Strengths: (1) FairGrad is a simple gradient descent algorithm for solving a wide class of fairness-constrained optimization problems. The experimental results suggest that it works well in a variety of settings and is computationally more efficient than competing methods.

Weaknesses: (1) Throughout the paper, the authors assume "we will assume that what was measured on our finite sample is sufficiently close to what would be obtained if one had access to the overall distribution". In other words, their analysis of FairGrad provides no guarantees about its generalization to the true underlying distribution F(). This stands in contrast to the literature on fairness-constrained optimization problems, where, for example, papers like Agarwal et al. (2018) and Agarwal et al. (2019) provide generalization bounds on their returned solutions (i.e., they are close to the true population optimum with high probability). (2) In light of (1), the paper can be thought of as just providing a particular optimization routine for the sampled data, more in the spirit of the analysis in Cotter et al. (2019). However, here too the authors' analysis is lacking -- in particular, Cotter et al. (2019) are able to provide some optimization guarantees about when their reductions-based approach will in fact converge (i.e., how the hyper-parameters must be set, how many iterations must be run, etc.).

Requested Changes: (1) I don't think it's fair to write ``They are also limited in the range of problems to which they can be applied. For example, the work of Agarwal et al. (2018) can only be applied in a binary classification setting...'' in the introduction. This is a correct statement about the specific paper (Agarwal et al., 2018), but the broader in-processing literature certainly provides extensions of these algorithms to richer settings. See, for example, Agarwal et al. (2019), which extends these procedures to cases with a continuous outcome. By contrast, the main text of this paper focuses on a case with discrete outcomes. (2) In the main text, you should clarify that the group definitions Tk can also depend on the true label y. This is important for how you can nest definitions like equalized odds into the fairness definition. (3) Per the weaknesses above, I think the authors need to incorporate a more up-front discussion about how their analysis is limited relative to existing work on fairness-constrained optimization. (4) The authors should discuss how their proposed algorithm is closely connected to procedures that are popular in adversarial learning (e.g., see Madry et al. (2019) and related citations). I view the authors' main contribution here as showing that we can apply the gradient descent-ascent optimization procedures that are popular in adversarial learning to do fairness-constrained optimization over the particular combination of loss function and fairness definitions considered.

References: Agarwal et al.
(2019): Fair Regression: Quantitative Definitions and Reduction-based Algorithms

Broader Impact Concerns: N/A

==================================================

Review 3: Summary: The paper proposes a new in-processing approach, called FairGrad, to enforce statistical group fairness notions such as error parity, equality of opportunity, and equalized odds.

Strengths and Weaknesses: The main strengths of the paper are as follows:
--The proposed approach is very simple.
--The proposed approach can deal with multi-class classification and more than two subgroups, and can handle several notions of fairness.
However, the paper suffers from some shortcomings:
--Given the plethora of papers proposing new in-processing methods to provide group fairness, it is unclear how much the new approach adds to the existing ones. In particular, the approach does not meaningfully outperform previous in-processing methods, as shown in the experiments.
--Given that the major contribution of the paper is empirical, there are a few major issues with the experiments. In particular, some baselines are not included, and some experimental details and the rationale behind the choices are not clear in the paper (see the Requested Changes for more details).

Requested Changes: I would appreciate it if the paper addresses the following issues:
-- Are there any situations in which FairGrad outperforms the existing in-processing approaches to achieve a better fairness/accuracy tradeoff?
-- A major omitted baseline is the reduction approach of Agarwal et al. (2018), which is perhaps the most commonly used in-processing baseline in the literature.
-- The paper misses some crucial discussion on in-processing approaches for an arbitrary number of groups, e.g., https://proceedings.mlr.press/v80/hebert-johnson18a.html and https://proceedings.mlr.press/v80/kearns18a.html.
-- Many of the experimental details are unclear. For example, what is the non-linear model used in the experiments? How does the performance change as a function of different model classes? What is the justification for using linear models?

Broader Impact Concerns: Broader impacts are addressed in a satisfactory manner.

==================================================

Metareview: Recommendation: Accept as is

Comment: The three reviewers have positive views on this manuscript. The revised version has incorporated reviewers' comments, including adding experimental results related to Agarwal et al. (2018) and adding discussion on when the proposed method outperforms other existing in-processing approaches to achieve a better fairness/accuracy tradeoff.

==================================================
# DExT: Detector Explanation Toolkit

Anonymous authors

Paper under double-blind review

## Abstract

State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally makes individual explanations for various detectors. In this paper, we propose an open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, as well as the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides the most trustworthy explanations among the selected methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.

## 1 Introduction

Object detection is imperative in applications such as autonomous driving (Feng et al., 2021), medical imaging (Araújo et al., 2018), and text detection (He et al., 2017). An object detector outputs bounding boxes to localize objects and categories for objects of interest in an input image. State-of-the-art detectors are deep convolutional neural networks (Zou et al., 2019), a type of Deep Neural Network (DNN), with high accuracy and fast processing compared to traditional detectors. However, convolutional detectors are considered black boxes (Shwartz-Ziv & Tishby, 2017) due to over-parameterization and hierarchically non-linear internal computations. This non-intuitive decision-making process restricts the capability to debug and improve detection systems. User trust in model predictions has decreased, and consequently the use of detectors in safety-critical applications is limited. In addition, the process of verifying the model and developing secure systems is challenging (Doshi-Velez & Kim, 2017; Zablocki et al., 2021). Numerous previous studies state that interpreting detectors by explaining the model decisions is crucial to earning the user's trust (Wagstaff, 2012; Rudin & Wagstaff, 2014; Spiegelhalter, 2020), estimating model accountability (Kim & Doshi-Velez, 2021), and developing secure object detector systems (Doshi-Velez & Kim, 2017; Zablocki et al., 2021). With a range of users utilizing detectors for safety-critical applications, providing humanly understandable explanations for both the category and each bounding box coordinate prediction is essential. In addition, as object detectors are prone to failures due to non-local effects (Rosenfeld et al., 2018), visualization techniques for detector explanations should integrate explanations for multiple objects in a single image at the same time.
Previous saliency map-based methods explaining detectors (Petsiuk et al., 2021; Tsunakawa et al., 2019; Gudovskiy et al., 2018) focus on classification or localization decisions individually, not both at the same time. In this paper we consider three deficits in the literature: methods to explain each category and bounding box coordinate decision made by an object detector, visualizing explanations of multiple bounding boxes in the same output explanation image, and a software toolkit integrating the previously mentioned aspects. This work concentrates on providing individual humanly understandable explanations for the bounding box and classification decisions made by an object detector for any particular detection, using gradient-based saliency maps. Figure 1 provides an illustration of the proposed solution by considering the complete output information to generate explanations for the detector decisions. Explanations for all the decisions can be summarized by merging the saliency maps to achieve a high-level analysis and to increase the flexibility to analyze detector decisions, improving model transparency and trustworthiness. We suggest methods to combine and visualize explanations of different bounding boxes in a single output explanation image as well as an approach to analyze detector errors using explanations. This work contributes:

- DExT, a software toolkit, to explain each decision (bounding box regression and object classification jointly), evaluate explanations, and identify errors made by an object detector.
- A simple approach to extend gradient-based explanation methods to explain bounding box and classification decisions of an object detector.
- An approach to identify reasons for detector failure using explanation methods.
- Multi-object visualization methods to summarize explanations for all output detections in a single output explanation.
- An evaluation of gradient-based saliency maps for object detector explanations, including quantitative results and a human user study.

We believe our work reveals some major conclusions about object detector explainability. Overall quantitative metrics do not indicate that a particular object detector is more interpretable, but visual inspection of explanations indicates that recent detectors like EfficientDet seem to be better explained using gradient-based methods than older detectors (like SSD or Faster R-CNN, shown in Figure 2), based on the lack of artifacts in their heatmaps. The detector backbone has a large impact on explanation quality (shown in Figure 6). The user study (Section 4.4) reveals that humans clearly prefer the convex polygon representation and that SmoothGrad with Guided Backpropagation provides the best object detector explanations, which is consistent with the quantitative metrics. We believe these results are important for practitioners and researchers of object detection interpretability. The overall message is to explain both object classification and bounding box decisions, and it is possible to combine all explanations into a single image using the convex polygon representation of the heatmap pixels.

## 2 Related Work

Interpretability is relatively underexplored in detectors compared to classifiers. There are post hoc (Petsiuk et al., 2021; Tsunakawa et al., 2019; Gudovskiy et al., 2018) as well as intrinsic (Kim et al., 2020; Wu & Song, 2019) detector interpretability approaches.
Detector Randomized Input Sampling for Explanation (D-RISE) (Petsiuk et al., 2021) generates explanations for the complete detector output in a model-agnostic manner. However, the saliency map quality depends on the computation budget, the method is time consuming, and individual explanations for bounding boxes are not evaluated. Contrastive Relevance Propagation (CRP) (Tsunakawa et al., 2019) extends Layer-wise Relevance Propagation (LRP) (Bach et al., 2015) to explain individually the bounding box and classification decisions of the Single Shot MultiBox Detector (SSD). This procedure includes propagation rules specific to SSD. Explain to Fix (E2X) (Gudovskiy et al., 2018) contributes a framework to explain SSD detections by approximating SHAP

![2_image_0.png](2_image_0.png)

Figure 1: A depiction of the proposed approach to interpret all object detector decisions. The two elephants detected by the EfficientDet-D0 detector in the image have 0.92 and 0.88 confidence. The input image is taken from the MS COCO dataset (Lin et al., 2014). The corresponding explanations are provided in the same-colored boxes. This breakdown of explanations offers more flexibility to analyze decisions and serves as a holistic explanation for all the detections. The explanation for each decision is a saliency map highlighting the important pixels of the input image. Saliency maps are overlaid on the input image to illustrate the correspondence with input image pixels.

(Lundberg & Lee, 2017) feature importance values using the Integrated Gradients (IG), Local Interpretable Model-agnostic Explanations (LIME), and Probability Difference Analysis (PDA) explanation methods. E2X identifies detection failures, such as false negatives, using the generated explanations. Individual explanations for bounding box decisions and classification decisions are unavailable.

![3_image_0.png](3_image_0.png)

Figure 2: Comparison of the classification and all bounding box coordinate explanations corresponding to the cat detection (red-colored box) across different detectors using SGBP. The bounding box explanations from EfficientDet-D0 illustrate the visual correspondence to the respective bounding box coordinates. The explanations from Faster R-CNN illustrate a sharp checkerboard pattern.

The intrinsic approaches mainly focus on developing detectors that are inherently interpretable. Even though the explanations are provided for free, most of the current methods are model-specific, do not provide any evaluation of the explanations generated, and include complex additional designs. Certain attention-based models such as the DEtection TRansformer (DETR) (Carion et al., 2020) and detectors using non-local neural networks (Wang et al., 2018) offer attention maps improving model transparency. A few previous works reveal contradicting notions about using attention for interpreting model decisions. Serrano & Smith (2019) and Jain & Wallace (2019) illustrate that attention maps are not a reliable indicator of important input regions and that attention maps are not explanations, respectively. Bastings & Filippova (2020) argue that saliency methods provide better explanations than attention modules. In this work, post hoc gradient-based explanation methods are selected because these methods provide better model translucency and computational efficiency, do not affect model performance, and directly utilize the gradients in DNNs.
Finally, saliency methods are widely studied for explaining DNN-based models (Ancona et al., 2019). A detailed comparative evaluation of various detectors reporting robustness, accuracy, speed, inference time, and energy consumption across multiple domains has been carried out by Arani et al. (2022). In contrast, this work compares detectors from the perspective of explainability.

## 3 Proposed Approach

## 3.1 Explaining Object Detectors

This work explains various detectors using gradient-based explanation methods and evaluates the different explanations for bounding box and classification decisions. The selected detectors are: SSD512 (SSD) (Liu et al., 2016), Faster R-CNN (FRN) (Ren et al., 2017), and EfficientDet-D0 (ED0) (Tan et al., 2020). The short-form tags are provided in brackets. SSD512 and Faster R-CNN are widely used single-stage and two-stage approaches, respectively. Explaining these traditional detectors will aid in extending the explanation procedure to numerous similar types of recent detectors. EfficientDet is a relatively recent state-of-the-art single-stage detector with higher accuracy and efficiency. It incorporates a multi-scale feature fusion layer called a Bi-directional Feature Pyramid Network (BiFPN). EfficientDet-D0 is selected to match the input size of SSD512. The variety of detectors selected aids in evaluating the explanation methods across different feature extractors such as VGG16 (SSD512), ResNet101 (Faster R-CNN), and EfficientNet (EfficientDet-D0). The gradient-based explanation methods selected in this work to explain detectors are: Guided Backpropagation (GBP) (Springenberg et al., 2015), Integrated Gradients (IG) (Sundararajan et al., 2017), SmoothGrad (Smilkov et al., 2017) + GBP (SGBP), and SmoothGrad + IG (SIG). The short-form tags are provided in brackets. GBP produces relatively less noisy saliency maps by obstructing the backward negative gradient flow through a ReLU. In addition, GBP is a simple and widely used approach compared to other methods. For instance, an uncertainty estimate of the most important pixels influencing the model decisions is carried out using GBP and certain uncertainty estimation methods (Wickstrøm et al., 2020). This combines uncertainty estimation and interpretability to better understand DNN model decisions. IG satisfies the sensitivity and implementation invariance axioms that various other state-of-the-art interpretation methods fail. SmoothGrad aids in sharpening the saliency map generated by any interpretation method and improves the explanation quality. These four explanation methods explain a particular detector decision by computing the gradient of the predicted value at the output target neuron with respect to the input image. The object detector decisions for a particular detection are the bounding box coordinates (xmin, ymin, xmax, ymax) and the class probabilities (c1, c2, ..., ck), where k is the total number of classes predicted by the detector. Usually these are output by heads at the last layer of the object detector. The classification head is denoted as $\mathrm{model}_{\mathrm{cls}}(x)$, while the bounding box regression head is $\mathrm{model}_{\mathrm{bbox}}(x)$.
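To make this procedure concrete before the formal definitions below, the following minimal sketch computes the gradient of one scalar detector output with respect to the input image. It assumes a TensorFlow detector callable on a batched image tensor; the `target_fn` selector is an illustrative name and not part of the DExT API.

```python
import tensorflow as tf

def gradient_explanation(model, image, target_fn):
    # Saliency of one scalar detector output w.r.t. the input image.
    # `model` maps a [1, H, W, 3] float tensor to raw detector outputs;
    # `target_fn` selects the scalar of interest from those outputs,
    # e.g. the logit of the predicted class or one normalized box coordinate.
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scalar = target_fn(model(x))
    grad = tape.gradient(scalar, x)[0]            # [H, W, 3] input gradient
    return tf.reduce_max(tf.abs(grad), axis=-1)   # [H, W] saliency map
```

Plain input gradients correspond to a vanilla saliency map; GBP, IG, and the SmoothGrad variants change how this gradient is propagated or averaged, but all of them reduce to choosing a scalar target neuron in this way.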
Considering that an explanation method computes a function expl(x, ŷ) of the input x and a scalar output prediction ŷ (which is one output layer neuron), a classification explanation $e_{\mathrm{cls}}$ is:

$$\hat{c}=\operatorname{model}_{\mathrm{cls}}(x)\qquad k=\operatorname*{arg\,max}_{i}\hat{c}_{i}\qquad e_{\mathrm{cls}}=\operatorname{expl}(x,\hat{l}_{k})\qquad(1)$$

A bounding box explanation consists of four different explanations, one for each bounding box component $e_{x_{\mathrm{min}}}, e_{y_{\mathrm{min}}}, e_{x_{\mathrm{max}}}, e_{y_{\mathrm{max}}}$:

$$\hat{x}_{\mathrm{min}},\hat{y}_{\mathrm{min}},\hat{x}_{\mathrm{max}},\hat{y}_{\mathrm{max}}=\operatorname{model}_{\mathrm{bbox}}(x)\qquad(2)$$

$$e_{x_{\mathrm{min}}}=\operatorname{expl}(x,\hat{x}_{\mathrm{min}})\quad e_{y_{\mathrm{min}}}=\operatorname{expl}(x,\hat{y}_{\mathrm{min}})\quad e_{x_{\mathrm{max}}}=\operatorname{expl}(x,\hat{x}_{\mathrm{max}})\quad e_{y_{\mathrm{max}}}=\operatorname{expl}(x,\hat{y}_{\mathrm{max}})\qquad(3)$$

When explaining the bounding box coordinates, the box offsets predicted by an object detector are converted to normalized image coordinates before computing the gradient. When explaining classification decisions, the logits ($\hat{l}_{k}$, before the softmax probability $\hat{c}=\operatorname{softmax}(\hat{l})$) are used to compute the gradient. Figure 2 illustrates the explanations generated for each decision of the cat detection across detectors. Saliency explanations can be computed for each bounding box of interest in the image.

## 3.2 Multi-Object Visualization

In order to summarize the saliency maps of all detections, the individual saliency maps corresponding to each detection are represented using a canonical form. This representation illustrates the most important pixels for the decision explanation. This paper proposes four different methods for combining detection explanations into a single format: principal components, contours, density clustering, and convex polygons. Each method uses a different representation, allowing the detected bounding box and category to be marked using the same colors on the input image. The general process is described in Figure 3. An example of the four multi-object visualizations is illustrated in Figure 4. Appendix E provides additional details on the multi-object visualization approaches and how the different combination methods work, including explanation heatmap samples.

![5_image_0.png](5_image_0.png)

Figure 3: Overview of the multi-object visualization pipeline to jointly visualize all detections.

![5_image_1.png](5_image_1.png)

Figure 4: Multi-object visualizations generated to jointly visualize all detections from EfficientDet-D0 and the corresponding classification explanations generated using SIG in the same color. The combination approach is specified in the sub-captions. Explanation pixels are colored the same as the corresponding bounding box that is being explained.
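As a concrete instance of the combination step described above, the sketch below derives one convex polygon per detection by thresholding its saliency map and taking the convex hull of the surviving pixels. The `keep_fraction` threshold is an assumption for illustration; the parameters used by the toolkit may differ.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_polygon_summary(saliency_maps, keep_fraction=0.05):
    # One polygon per detection, enclosing its most salient pixels.
    # `saliency_maps` is a list of [H, W] arrays, one per detection.
    polygons = []
    for sal in saliency_maps:
        k = max(3, int(keep_fraction * sal.size))
        flat_idx = np.argsort(sal, axis=None)[-k:]   # top-k salient pixels
        ys, xs = np.unravel_index(flat_idx, sal.shape)
        pts = np.stack([xs, ys], axis=1)
        hull = ConvexHull(pts)                       # assumes non-collinear points
        polygons.append(pts[hull.vertices])          # (x, y) polygon vertices
    return polygons  # each polygon is drawn in the color of its bounding box
```

The same thresholded pixel sets feed the other three representations: contours trace their boundaries, density clustering groups them spatially, and principal components summarize their dominant spatial directions.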
## 4 Experiments

Section 4.1 visually analyzes the explanations generated for different detector and explanation method combinations. Section 4.3 quantitatively evaluates the different detector and explanation method combinations. Finally, Section 4.4 estimates an overall ranking for the explanation methods based on user preferences of the explanations produced for each decision. In addition, the multi-object visualization methods are ranked based on user understandability of the detections. In Section F, the procedure to analyze detector failures using the proposed approach is discussed.

Most of the experiments use the ED0, SSD, and FRN detectors detecting common objects from COCO (Lin et al., 2014). Additional details about these detectors are provided in Table 5. In cases requiring training a detector, different versions of SSD with various pre-trained backbones detecting marine debris, provided in Table 6, are used. The marine debris detectors are trained using the train split of the Marine Debris dataset (Valdenegro-Toro, 2019), and explanations are generated for the test split images. These detectors are used only to study how the explanations change across different backbones and different performance levels (epochs) in Section 4.1.

## 4.1 Visual Analysis

Across target decisions and across detectors. The saliency maps for the classification and bounding box decisions generated using a particular explanation method for a specific object change across different detectors, as shown in Figure 2. All the bounding box explanations of EfficientDet-D0 in certain scenarios provide visual correspondence to the bounding box coordinates.

Figure 5: Comparison of classification and bounding box explanations for all detections from EfficientDet-D0 using SIG. Each row provides the detection (red-colored box) followed by the corresponding classification and all bounding box explanation heatmaps.

Across different target objects. Figure 5 illustrates that the explanations highlight different regions corresponding to the objects explained. This behavior is consistent in most of the test set examples across the classification and bounding box explanations for all detectors. Figure 6 illustrates the classification explanations for the wall detection across the 6 different backbones. Apart from the attribution intensity changes, the explanations highlight different input image pixels, and the saliency map texture changes. MobileNet and VGG16 illustrate thin horizontal lines and highlight other object pixels, respectively. ResNet20 highlights the wall as a thick continuous segment. Figure 18 illustrates the ymin and ymax bounding box coordinate explanations for the chain detection across different backbones. The thin horizontal lines of MobileNet are consistent with the previous example. In addition, VGG16 illustrates a visual correspondence with the ymin and ymax bounding box coordinates by highlighting the upper half and lower half of the bounding box, respectively. However, this is not witnessed in the other detectors. This behavior is consistent over a set of 10 randomly sampled test set images from the Marine Debris dataset.

The explanations generated using SSD model instances with a ResNet20 backbone at different epochs are provided in Figure 7. The model does not provide any final detections at lower epochs. Therefore, the explanations are generated using the target neurons of the output box corresponding to the decision of interest in the final detections from the trained model.

![7_image_0.png](7_image_0.png)

Figure 6: Comparison of class "wall" classification explanations across different SSD backbones.
The detections from each SSD backbone are provided in the first row. The wall detection explained is marked using a white-colored box. The explanations vary across each backbone.

Figure 7 illustrates variations in the saliency maps, starting from a randomly initialized model to a completely trained model, for the classification decision of the chain detection. The explanations extracted using the random model are dispersed around the features. The explanations slowly concentrate along the detected chain object and capture the object features to a considerable extent. This behavior is qualitatively analyzed by visualizing the explanations of 10 randomly sampled test set images from the Marine Debris dataset. In the case of the small hook explained in Figure 19, the variations between the random model and the trained model are not as considerable as in the previous chain example. This illustrates that the variations change with respect to each class.

![7_image_1.png](7_image_1.png)

SSD-ResNet20 Classification Decision Explanation Using Guided Backpropagation Over Epochs

Figure 7: Classification explanation for class "chain" across different epochs (along columns) of SSD-ResNet20 using GBP. The first column is the chain ground truth annotation (white-colored box).

## 4.2 Error Analysis

This section analyzes the errors made by a detector by generating explanations using the detector explanation approach proposed in this work. The saliency map highlighting the important regions can be used as evidence to understand the reason for a detector failure, rather than assuming possible reasons for it. The failure modes of a detector are wrongly classifying an object, poorly localizing an object, or missing a detection in the image (Petsiuk et al., 2021). As the error analysis study requires ground truth annotations, the PASCAL VOC 2012 images are used. Only the PASCAL VOC images with labels mapping semantically to COCO labels are considered, as the detectors are trained using the COCO dataset. For instance, the official VOC labels such as sofa and tvmonitor are semantically mapped to couch and tv, respectively, by the model output trained on COCO.

The procedure to analyze an incorrectly classified detection is straightforward. The output bounding box information corresponding to the wrongly classified detection can be analyzed in two ways. The target neuron can be the correct class or the wrongly classified class to generate the saliency maps, as shown in Figure 8.

![8_image_0.png](8_image_0.png)

Classification Explanation Using Guided Backpropagation on EfficientDet-D0

![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png)

Classification Explanation Using SmoothGrad + Integrated Gradients on EfficientDet-D0

![8_image_4.png](8_image_4.png) ![8_image_3.png](8_image_3.png) ![8_image_5.png](8_image_5.png) ![8_image_6.png](8_image_6.png)

Figure 8: Example error analysis using gradient-based explanations. EfficientDet-D0 wrongly classifies the dog in the ground truth (red-colored box) as a cat (red-colored box). We display two saliency explanations (GBP and SIG). In this figure, it is clear the model is imagining a long tail for the dog (GBP) and wrongly classifies the dog as a cat. The saliency map highlights certain features of the dog and the background stripe pattern along the edges of the dog's body (GBP and SIG). In order to clearly illustrate the tail, which is predominant in the cats available in the COCO dataset, the saliency map is shown without overlaying it on the input image.
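Programmatically, the contrastive analysis in Figure 8 amounts to explaining two different class logits of the same output box. A sketch reusing `gradient_explanation` from Section 3.1 is shown below; `class_logit`, `box_idx`, and the class indices are hypothetical helpers introduced only for illustration.

```python
# Saliency for the wrongly predicted class ("cat") and for the
# ground-truth class ("dog") of the same detection; comparing the two
# maps reveals which input features pulled the prediction toward "cat".
sal_predicted = gradient_explanation(
    model, image, lambda out: class_logit(out, box_idx, cat_idx))
sal_ground_truth = gradient_explanation(
    model, image, lambda out: class_logit(out, box_idx, dog_idx))
```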
More examples of error analysis are available in Section F in the appendix.

## 4.3 Quantitative Evaluation

Quantitatively evaluating detector explanations provides valuable insight for selecting the explanation method suitable for a specific detector. In this section we provide a quantitative evaluation of saliency explanations on object detectors.

## 4.3.1 Evaluation Metrics

The quantitative evaluation of detector explanations incorporates causal metrics to evaluate the bounding box and classification explanations. This works by causing a change to the input pixels and measuring the effect of that change on the model decisions. The evaluation aids in estimating the faithfulness or truthfulness of the explanation in representing the cause of the model decision. The causal metrics discussed in this work are adapted from previous work (Samek et al., 2021; Petsiuk et al., 2021; 2018). The two variants of causal evaluation metrics, based on the cause induced to alter the prediction, are the deletion and insertion metrics. The deletion metric evaluates the saliency map explanation by removing pixels from the input image and tracking the change in model output. The pixels are removed sequentially in the order of the most important pixels, starting with the larger attribution values, and the output probability of the predicted class is measured. The insertion metric works complementary to the deletion metric by sequentially adding the most important pixels to the image and causing the model decision to change. Using the deletion metric, the explanation methods can be compared by plotting the fraction of pixels removed along the x-axis and the predicted class probability along the y-axis. A method with a lower Area Under the Curve (AUC) illustrates a sharp drop in probability after removing fewer pixels. This signifies that the explanation method finds the most important pixels, which cause a significant change in model behavior. For deletion, the explanation method with the lower AUC is better. In the case of the insertion metric, the predicted class probability increases as the most relevant pixels are inserted. Therefore, an explanation method with a higher AUC is relatively better. Petsiuk et al. (2021) utilize a constant gray value to replace pixels for the deletion metric and a blurred image as the start image for the insertion metric.

Effects Tracked. Previous work evaluating the explanations of detector decisions utilizes the insertion and deletion metrics to track the change in the bounding box IoU and the classification probability together. Petsiuk et al. (2021) formulate a vector representation involving the box coordinates, class, and probability. The similarity score between the non-manipulated and manipulated vectors is tracked. However, this work performs an extensive comparison of explanation methods for each decision of a detector by tracking the change in the maximum probability of the predicted class, the Intersection over Union (IoU), the distance moved by the bounding box (in pixels), the change in height of the bounding box (in pixels), the change in width of the bounding box (in pixels), the change in the top-left x coordinate of the bounding box (in pixels), and the change in the top-left y coordinate of the bounding box (in pixels). The box movement is the total movement of the left-top and right-bottom coordinates represented as a Euclidean distance in pixels. The coordinate distances are computed using the interest box corresponding to the current manipulated image and the interest box corresponding to the non-manipulated image.
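A minimal sketch of the deletion metric in the single-box setting follows. `predict_prob` is an assumed callable that re-runs the detector on the manipulated image and returns the tracked effect (here the predicted-class probability of the principal box); the constant-gray fill follows Petsiuk et al. (2021).

```python
import numpy as np

def deletion_curve(predict_prob, image, saliency, steps=50, fill=0.5):
    # Remove the most salient pixels first, re-evaluate after each step,
    # and record the tracked effect for the principal box.
    order = np.argsort(saliency, axis=None)[::-1]  # most important first
    x = image.astype(np.float32).copy()
    probs = [predict_prob(x)]
    for chunk in np.array_split(order, steps):
        ys, xs = np.unravel_index(chunk, saliency.shape)
        x[ys, xs] = fill                           # constant-gray deletion
        probs.append(predict_prob(x))
    return np.array(probs)  # AUC via np.trapz over the pixel fraction
```

The insertion metric is the same loop run in the other direction: starting from a blurred image and restoring the most salient pixels first, so a higher AUC is better.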
Tracking this extensive set of effects illustrates that a few explanation methods are more suitable for explaining a particular decision. As declared in the previous sections, the image origin is at the top-left corner. Therefore, a total of 7 effects are tracked for each causal evaluation metric.

Evaluation Settings. The previous section establishes the causal deletion and insertion metrics along with the 7 different effects. In this section, the two settings used to evaluate the detectors using the causal metrics are discussed.

Single-box Evaluation Setting. The detector output changes drastically on manipulating the input image at different fractions. The detector output box detecting the object in the non-manipulated input image is termed the principal box. In this setting, the 7 effects of the principal box are tracked across insertion and deletion of input pixels. This aids in measuring how well the explanation captures the true causes of the principal box prediction. Therefore, the impact of the explanation on the actual detected box is analyzed. The effects measured in the single-box setting are bounded because the value of the principal box is always measurable. This is called the single-box setting because only the changes in the principal box are tracked. For instance, the tracked principal box will always output a probability or a bounding box coordinate. This probability or bounding box coordinate can be used to track the effects for all manipulated input images.

Realistic Evaluation Setting. In this evaluation setting, all 7 effects are tracked for the complete object detector output, involving the post-processing steps of the detector. The faithfulness of the explanation to the detection pipeline is analyzed. Therefore, how well the explanation depicts the cause of the detector decision is evaluated. In this setting, the current detection for a particular manipulated input image is matched to the interest detection by checking for the same class and an IoU threshold greater than 0.9.

| Cause | Effect Tracked | Evaluation Setting |
|-----------------|-------------------------------|--------------------|
| ↓ Deletion (D) | Class Maximum Probability (C) | Single-box (S) |
| ↑ Insertion (I) | Box IoU (B) | Realistic (R) |
| | Box Movement Distance (M) | |
| | Box X-top (X), Box Y-top (Y) | |
| | Box Width (W), Box Height (H) | |

Table 1: The components of an evaluation metric, with the respective tag for each component provided in brackets. A total of 28 metrics (2 × 7 × 2) are used in this work. The abbreviation for each evaluation metric is read from the left. For instance, DCS means the deletion metric tracking the change in the maximum probability of the output box chosen in the single-box setting.

For various manipulated input images, there is no current detection matching the interest detection. Therefore, depending on the effect tracked, a suitable value is assigned in order to calculate the AUC. For instance, if the effect tracked is the class probability for the deletion metric and none of the current detections matches the interest detection, a zero class probability is assigned. Similarly, if the effect tracked is the box movement in pixels for the deletion metric, the error in pixels increases to a large value.
In such scenarios, instead of assigning an arbitrarily large value, the maximum possible distance is assigned to the box movement, assuming the 4 coordinates of the interest detection are at one image corner and the 4 box coordinates of the final detection are at the opposite corner.

Interpretation Through Curves. Given the different causes induced to change the model output, the effects tracked, and the evaluation settings for the detector, 28 causal evaluation metrics are used in this work. Table 1 summarizes the combinations of the different aspects of the evaluation metrics with short-form tags. To interpret a causal evaluation metric, a graph is drawn tracking the change of the tracked effect along the y-axis and the fraction of pixels manipulated along the x-axis. For instance, consider the scenario of deleting image pixels sequentially to track the maximum probability of the predicted class in the single-box evaluation setting. The x-axis is the fraction of pixels deleted. The y-axis is the maximum probability of the predicted class at the output of the tracked box. In this work, each curve is named after the combination of the causal evaluation metric, the effect tracked, and the evaluation setting: the DCS curve, the DBS curve, the ICS curve, and so on. For instance, the DCS curve is the change in the maximum probability of the predicted class (C) at the single output box (S) due to removing pixels (D). The curves are the evaluation metrics used in this work and are also called the DCS evaluation metric (deletion + class maximum probability + single-box setting), the DBS evaluation metric (deletion + box IoU + single-box setting), and so on.

In order to compare the performance of explanation methods in explaining a single detection, as stated before, the AUC of a particular evaluation metric curve is estimated. The corresponding AUC is represented as AUC<evaluation_metric_name>. In order to estimate a global metric to compare the explanation methods explaining a particular decision of a detector, the average AUC, represented as AAUC<evaluation_metric_name>, is computed. As the explanations are provided for each detection, the evaluation set is given by the total number of detections. The total number of detections in the evaluation set is the sum of the detections in each image of the evaluation set. The average evaluation metric curve is computed by averaging the evaluation metric curves at each fraction of pixels manipulated across all detections. The AAUC of a particular evaluation metric curve is the AUC of the average evaluation metric curve.
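Under the assumption that every curve is sampled at the same evenly spaced fractions of manipulated pixels, the AUC and AAUC computations can be sketched as:

```python
import numpy as np

def auc(curve):
    # Integrate one evaluation metric curve over the pixel fraction [0, 1].
    fractions = np.linspace(0.0, 1.0, len(curve))
    return np.trapz(curve, fractions)

def aauc(curves):
    # Average the per-detection curves point-wise across the evaluation
    # set, then take the AUC of the averaged curve.
    mean_curve = np.asarray(curves).mean(axis=0)
    return auc(mean_curve)
```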
## 4.3.2 Results

Figure 9 illustrates that the AAUC computed by evaluating the explanations of each bounding box coordinate is similar across the different evaluation metric curves. This similarity is consistent for all the detector and explanation method combinations evaluated. Therefore, the explanation methods quantitatively explain each bounding box coordinate decision with similar performance. In this work, the AAUC for the bounding box decision is computed by averaging the AUC of all the evaluation metric curves corresponding to all the box coordinate explanations. This offers a means to evaluate the explanation methods across all the bounding box coordinate decisions.

![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png)

(a) Deletion - Box IoU - Single-box ↓ (b) Deletion - Box Movement - Single-box ↓ (c) Deletion - Box IoU - Realistic ↓ (d) Deletion - Box Movement - Realistic ↓

Figure 9: Average AUC, AAUC, for the evaluation metric curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are deleted sequentially. Each bar corresponds to the AAUC estimated by evaluating the explanations generated for each bounding box coordinate decision, using the explanation methods specified on the x-axis, of all detections made by EfficientDet-D0 in the evaluation set images. AAUC is computed by averaging the AUC of all the evaluation metric curves generated using the combination specified in the sub-captions. Lower AAUC is better in all the plots.

![11_image_2.png](11_image_2.png)

(a) Deletion - Box IoU - Single-box ↓ (b) Deletion - Box Movement - Single-box ↓ (c) Deletion - Box IoU - Realistic ↓ (d) Deletion - Box Movement - Realistic ↓

Figure 10: Comparison of average curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are deleted sequentially. Each average curve is the average of the evaluation curves plotted by evaluating the explanations of all bounding box coordinate decisions across all the detections by the respective detector. The explanations are generated using GBP. The evaluation metric curve is generated using the combination specified in the sub-captions.

Figure 10 and Figure 11 illustrate quantitatively complementary trends in the evaluation metric curves plotted by tracking the box movement distance in pixels and the box IoU. The IoU decreases and the box movement distance increases as the pixels are deleted sequentially, as shown in Figure 10. Similarly, Figure 11 illustrates the increase in box IoU and decrease in box movement distance as pixels are inserted into a blurred version of the image. There is a large difference in the AAUC between the single-stage and two-stage detectors. This is primarily due to the RPN in the two-stage detectors. The proposals from the RPN are relatively more sensitive to box coordinate changes than the predefined anchors of the single-stage detectors. In addition, Figure 10(d) and Figure 11(d) indicate a steady change of the box coordinates in the final detections of EfficientDet-D0. However, SSD and Faster R-CNN saturate relatively sooner. In the remainder of this work, the box IoU effect is used for quantitative evaluation. This is only because the box IoU effect offers the same scale between 0 and 1 as the class maximum probability effect. In addition, both the box IoU and class maximum probability effects follow the trend that a lower AUC is better in the deletion case. However, it is recommended to consider all the box IoU and box movement distance effects at the level of each box coordinate for a more accurate evaluation.

Figure 12 and Figure 17 aid in understanding which explanation method interprets both the classification and bounding box decisions of a particular detector more faithfully than the other explanation methods. Figure 12(a) illustrates that SSD512 classification decisions are better explained by SGBP in the single-box setting for deletion metrics. However, the bounding box decisions are not explained as well as the classification decisions.

![12_image_0.png](12_image_0.png)

(a) Insertion - Box IoU - Single-box ↑ (b) Insertion - Box Movement - Single-box ↑ (c) Insertion - Box IoU - Realistic ↑ (d) Insertion - Box Movement - Realistic ↑

Figure 11: Comparison of average curves obtained by tracking box IoU (a, c) and box movement distance (b, d) as the pixels are inserted sequentially.
Each average curve is the average of the evaluation curves plotted by evaluating the explanations of all bounding box coordinate decisions across all the detections by the respective detector. The explanations are generated using GBP. The evaluation metric curve is generated using the combination specified in the sub-captions.

Figure 12(b) illustrates a similar scenario for SGBP with EfficientDet-D0 and Faster R-CNN in the realistic setting for deletion metrics. However, all the selected explanation methods explain the bounding box and classification decisions of SSD512 relatively better in the single-box setting for insertion metrics. In general, none of the selected explanation methods explains both the classification and bounding box regression decisions substantially better than the other methods for all detectors. Similarly, none of the detectors is explained more faithfully for both classification and bounding box decisions by a single method across all the evaluation metrics discussed. This is illustrated by no explanation method (different colors) or detector (different letters) being represented in the lower-left or upper-right rectangle of Figure 12 and Figure 17, respectively.

Figure 14(a) and Figure 14(c) illustrate that the AAUC of the classification saliency maps and of the saliency maps combined using different merging methods are different in certain scenarios while tracking the maximum probability. The AAUC of all the box coordinate saliency maps is provided as a baseline comparison. This denotes the effect on the maximum probability of removing pixels in the order of importance given by all the box coordinate saliency maps. Similarly, Figure 14(b) and Figure 14(d) illustrate the similarity in the AAUC of all the box coordinate explanations and the merged saliency maps while tracking the box IoU. In Figure 14(a), the GBP classification saliency map evaluates as less faithful than the merged saliency map. Therefore, the merged saliency map represents the classification decision more faithfully than the standalone classification explanation in the case of EfficientDet-D0. However, Figure 14(a) and Figure 14(c) illustrate that, in the case of SGBP explaining EfficientDet-D0 and in certain cases of Faster R-CNN, respectively, the separate classification saliency maps are more faithful in depicting the classification decision. The larger AAUC of all the box coordinate saliency maps generated using each method for Faster R-CNN indicates that the box saliency maps are not faithful to the bounding box decisions of Faster R-CNN. This is coherent with the visual analysis. Therefore, in certain scenarios merging is helpful to represent the reason for a particular decision. However, each individual saliency map provides distinct information about the detection. For instance, the visual correspondence to each bounding box coordinate shown in Figure 2 is seen only at the level of the individual box coordinate explanations.

An overall comparison of all quantitative metrics is shown in Figure 13. For the purpose of understanding, the ranking of detectors better explained by a particular explanation method is provided in Table 2. The ranking of explanation methods explaining a particular detector is provided in Table 3. SGBP performs relatively better across all the selected detectors. In addition, IG is ranked last across all the selected detectors. The SSD detector is better explained by all the explanation methods.
One of the reasons could be that SSD is a simpler architecture compared to EfficientDet-D0 and Faster R-CNN. EfficientDet-D0 and Faster R-CNN include a Bi-directional Feature Pyramid Network (BiFPN) and a Region Proposal Network (RPN), respectively. However, further experiments should be conducted for validation.

![13_image_0.png](13_image_0.png)

Figure 12: Comparison between the deletion AAUC of the evaluation metric curves for the classification and all bounding box coordinate explanations generated using different explanation methods across all detectors. This offers a means to identify the explanation method generating more faithful explanations for both the classification and all bounding box coordinate decisions. As the curves used to compute the respective AUC are obtained with the deletion metric, lower values on both axes are better. The explanation methods (highlighted with different colors) placed at lower values on the x-axis and y-axis perform relatively better at explaining the box coordinates and classification decisions, respectively. The detectors (marked with different letters) placed at lower values on the x-axis and y-axis are relatively better explained for the box coordinates and classification decisions, respectively.

![13_image_1.png](13_image_1.png)

Figure 13: Multi-metric comparison of quantitative results. According to these metrics, all methods perform similarly when considering all object detectors. The user study and visual inspection of explanation heatmaps reveal more information.

## 4.4 Human-Centric Evaluation

The human-centric evaluation ranks the explanation methods for each detector and ranks the multi-object visualization methods with a user study. All important details of the user study are presented in Appendix G.

![14_image_0.png](14_image_0.png)

Figure 14: Comparison of the average AUC, AAUC, for the evaluation metric curves obtained by tracking the maximum probability (a, c) and box IoU (b, d) as the most important pixels, according to the explanations generated using the explanation methods specified on the x-axis, are deleted sequentially. All the explanations are generated for detections made by EfficientDet-D0 (left) and Faster R-CNN (right) in the evaluation set images. Lower AAUC is better in both plots.

| IM | OD | DCS | ICS | DBS | IBS | DCR | ICR | DBR | IBR | Overall Rank |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|--------------|
| GBP | ED0 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 |
| | SSD | 1 | 1 | 3 | 1 | 3 | 2 | 1 | 2 | 1 |
| | FRN | 3 | 3 | 1 | 3 | 1 | 1 | 2 | 1 | 2 |
| SGBP | ED0 | 2 | 2 | 2 | 2 | 1 | 3 | 2 | 2 | 2 |
| | SSD | 1 | 1 | 3 | 1 | 3 | 2 | 1 | 1 | 1 |
| | FRN | 3 | 3 | 1 | 3 | 2 | 1 | 3 | 3 | 3 |
| IG | ED0 | 1 | 2 | 2 | 2 | 1 | 3 | 2 | 2 | 2 |
| | SSD | 2 | 1 | 3 | 1 | 3 | 2 | 1 | 1 | 1 |
| | FRN | 3 | 3 | 1 | 3 | 2 | 1 | 3 | 3 | 3 |
| SIG | ED0 | 2 | 2 | 2 | 2 | 1 | 3 | 2 | 2 | 2 |
| | SSD | 1 | 1 | 3 | 1 | 3 | 2 | 1 | 1 | 1 |
| | FRN | 3 | 3 | 1 | 3 | 2 | 1 | 3 | 3 | 3 |

Table 2: Ranking of all detectors for a particular explanation method based on the quantitative evaluation metrics. A lower value is a better rank. The detector better explained by a particular explanation method is awarded a better rank. Each detector is ranked with respect to each evaluation metric considering a particular explanation method. The column names, other than the first two columns and the last column, represent the AAUC for the respective evaluation metric. The overall rank is computed by summing along the row and awarding the best rank to the lowest sum. OD - object detectors, IM - interpretation method.

## 4.4.1 Rank Generation

Ranking Explanation Methods.
Previous work assesses user trust in the model explanations generated by a particular explanation method (Petsiuk et al., 2021; Selvaraju et al., 2020; Ribeiro et al., 2016). As user trust is difficult to evaluate precisely, this work, in contrast to the previous works, estimates the user preferability of the explanation methods. The user preferability of the methods GBP, SGBP, IG, and SIG is evaluated by comparing two explanations corresponding to a particular prediction. In this study, the explanation methods are compared directly for a particular interest detection and interest decision across the SSD, ED0, and FRN detectors separately. The evaluation identifies the explanation method trusted relatively more by the users for a particular detector. The explanation methods are ranked by relatively rating the explanations generated using the different explanation methods for a particular detection made by a detector. The rating serves as a measure of user preference.

| OD | IM | DCS | ICS | DBS | IBS | DCR | ICR | DBR | IBR | Overall Rank |
|-----|------|-----|-----|-----|-----|-----|-----|-----|-----|--------------|
| ED0 | GBP | 4 | 3 | 1 | 2 | 4 | 3 | 3 | 1 | 3 |
| | SGBP | 1 | 2 | 2 | 4 | 1 | 2 | 2 | 2 | 2 |
| | IG | 3 | 4 | 4 | 3 | 3 | 4 | 4 | 4 | 4 |
| | SIG | 2 | 1 | 3 | 1 | 2 | 1 | 1 | 3 | 1 |
| SSD | GBP | 2 | 3 | 2 | 3 | 1 | 3 | 2 | 3 | 3 |
| | SGBP | 1 | 2 | 1 | 2 | 2 | 2 | 1 | 1 | 1 |
| | IG | 4 | 4 | 4 | 4 | 4 | 4 | 7 | 4 | 4 |
| | SIG | 3 | 1 | 3 | 1 | 3 | 1 | 3 | 2 | 2 |
| FRN | GBP | 4 | 3 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| | SGBP | 1 | 1 | 2 | 1 | 1 | 3 | 2 | 2 | 2 |
| | IG | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| | SIG | 2 | 2 | 3 | 3 | 3 | 2 | 3 | 3 | 3 |

Table 3: Ranking of all the explanation methods for a particular detector based on the quantitative evaluation metrics. A lower value is a better rank. The explanation method better explaining a particular detector is awarded a better rank. Each explanation method is ranked with respect to each evaluation metric considering a particular detector. The column names, other than the first two columns and the last column, represent the average AUC for the respective evaluation metric. The overall rank is computed by summing along the row and awarding the best rank to the lowest sum. OD - object detectors, IM - interpretation method.

| Options | A Score | B Score |
|-----------------------------------------|---------|---------|
| Robot A explanation is much better | 2 | -2 |
| Robot A explanation is slightly better | 1 | -1 |
| Both explanations are the same | 0 | 0 |
| Robot A explanation is slightly worse | -1 | 1 |
| Robot A explanation is much worse | -2 | 2 |

Table 4: User study options and the scores awarded to the respective explanations.

A pair of explanations generated by different explanation methods using the same interest decision and the same interest detection for the same detector is shown to a number of human users, as shown in Figure 38. The detector, interest decision, interest detection, and explanation methods used to generate explanations are randomly sampled for each question and each user. In addition, the image chosen for a particular question is randomly sampled from an evaluation set. The evaluation set is a randomly sampled set containing 50 images from the COCO 2017 test split. This avoids incorporating any bias into the question generation procedure. Each question is generated on the fly for each user performing the task. The explanations are named Robot A explanation and Robot B explanation to conceal the names of the explanation methods from the user.
The robots are not detectors; in this study, the robots represent explanation methods. The Robot A and Robot B explanations for each question are randomly assigned the outputs of a pair of explanation methods. This is done to reduce the positioning and ordering bias of the explanations shown to the users. The task given to the user is to rate the quality of the Robot A explanation relative to the Robot B explanation. The available options are provided in Table 4.

A single question in the evaluation is treated as a game between two randomly matched players. The explanation methods are the players. The game result depends on the explanation quality produced by the competing explanation methods for a particular detection decision. In the case of a draw, both explanation methods receive the same score. In non-draw situations, the points won by a particular explanation method are the points lost by the other explanation method. By treating all the questions answered by numerous users as individual games, a global ranking is obtained using the Elo rating system (Elo, 1978). Each explanation method is awarded an initial Elo rating of 1000.

Ranking Multi-Object Visualization Methods. The rank for the multi-object visualization methods is obtained by voting for the method producing the most understandable explanation among the four methods. Each user is asked a set of questions showing the multi-object visualizations generated by all four methods. The user is provided with a *None of the methods* option to choose in scenarios where all the multi-object visualizations generated are confusing and incomprehensible to the user. The methods are ranked by counting the total number of votes each method has obtained. The experiment is performed using the COCO 2017 test split and VOC 2012.
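As an illustration of the Elo-based aggregation described above, a single pairwise comparison could update the ratings as sketched below. The K-factor and the rescaling of the graded scores of Table 4 into the [0, 1] range are assumptions; the paper does not state these details.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    # One user answer treated as a game between two explanation methods.
    # `score_a` is in [0, 1]: 1 if Robot A is preferred, 0 if Robot B is,
    # and 0.5 for a draw (graded answers can be rescaled into this range).
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# All methods start at 1000; the ratings after all recorded games
# give the global ranking reported in the results below.
```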
## 4.4.2 Results

Each user is requested to answer 10 questions. The total number of questions is split as 7 and 3 between Task 1 and Task 2, respectively. 52 participants answered the user study for both Task 1 and Task 2. The participants range across researchers, students, deep learning engineers, office secretaries, and software engineers. Figure 15 indicates that SGBP provides relatively more reasonable explanations, with higher user preferability, for both single-stage detectors. Similarly, SIG is preferred for the two-stage detector.

![16_image_0.png](16_image_0.png)

Figure 15: Ranking obtained for the explanation methods from the user trust study for each detector selected in this work. An initial Elo rating of 1000 is used for all explanation methods. The explanation method with a higher Elo rating has gained relatively more user preferability in the random pair-wise comparisons of explanations for each detector. The rank of a particular method is provided on top of the bar corresponding to the method.

Figure 16(a) illustrates that the top two ranks are obtained by the SmoothGrad versions of GBP and IG (SGBP and SIG) for all detectors. GBP performs in the middle ranks in the majority of cases. SGBP achieves the first rank in both the human-centric evaluation and the functional evaluation. Figure 16(a) shows the overall ranking taking into account all the bounding box and classification explanations together. The ranking is similar when analyzing the bounding box and classification explanations separately.

With the ranking of the multi-object visualization methods, it is clearly evident that the majority of the users are able to understand the convex polygon-based explanations. 18 answers among the total 156 are *None of the methods*, because none of the four multi-object visualization methods provided a legible summary of all the explanations and detections. The users selected the principal component-based visualization in cases involving fewer than 3 detections in an image. In addition, *None of the methods* is chosen in most of the cases involving more than 9 detections or more than 3 overlapping detections in an image. Among the total 156 answers, only 89 (57%) favor the convex polygon-based visualization. Therefore, considering the remaining 43%, there is substantial room to improve the multi-object visualization methods discussed in this work and achieve a better summary.

## 5 Conclusions And Future Work

Explaining convolutional object detectors is crucial given the ubiquity of detectors in the fields of autonomous driving, healthcare, and robotics. In this paper we extend post hoc gradient-based explanation methods to explain both the classification and bounding box decisions of EfficientDet-D0, SSD512, and Faster R-CNN. Additionally, in order to integrate explanations for all detected bounding boxes into a single output image, we propose four multi-object visualization methods to merge explanations of a particular decision, namely PCA, contours, density clustering, and convex polygons. We evaluate these detectors and their explanations using a set of quantitative metrics (insertion and deletion of pixels according to saliency map importance, with single-box and realistic settings) and with a user study to understand how useful these explanations are to humans.

![17_image_0.png](17_image_0.png)

Figure 16: Ranking obtained from the user study considering all user answers. The rank of a particular method is provided on top of the bar corresponding to the method.

Insertion and deletion metrics indicate that SGBP provides more faithful explanations in the overall ranking. In general, there is no detector that clearly provides better explanations, as the best depends on the criteria being used, but visual inspection indicates a weak relationship suggesting that newer detectors (like EfficientDet) have better explanations without artifacts (as shown in Figure 2), and that different backbones do have an influence on saliency map quality (Figure 6). The user study reveals a human preference for SGBP explanations for SSD and EfficientDet (and SIG for Faster R-CNN), which is consistent with the quantitative evaluation; for multi-object explanation visualizations, convex polygons are clearly preferred by humans. In addition, we analyze some failure modes of a detector using the formulated explanation approach and provide several examples. The overall message of our work is to always explain both object classification and bounding box decisions, and that it is possible to combine explanations into a single output image through the convex polygon representation of the saliency map. Finally, we developed an open-source toolkit, DExT, to explain decisions made by a detector using gradient-based saliency maps, to generate multi-object visualizations, and to analyze failure modes.
We expect that DExT and our evaluation will contribute to the development of holistic explanation methods for object detectors, considering all their output bounding boxes and both object classification and bounding box decisions.

Limitations. Our work encompasses the following limitations. The first is that the pixel insertion/deletion metrics might be difficult to interpret (Grabska-Barwinska et al., 2021), and more advanced metrics could be used (Tomsett et al., 2020). However, the metric selected should consider the specifics of object detection. Firstly, both classification and bounding box regression should be evaluated with the selected metric. Moreover, as detectors are prone to non-local effects, removing pixels from the image (Rosenfeld et al., 2018) can cause bounding boxes to appear or disappear. Therefore, special tracking of a particular box is needed. In our work, we extend the classic pixel insertion/deletion metrics (Ancona et al., 2019) for object detection considering these two aspects. The second limitation concerns the user study. Given the challenges in formulating a bias-free question, in our user study we ask users to select which explanation method is better. This is a subjective human judgment and does not necessarily have to correspond with the true input feature attribution made by the explanation method. Another part of the user study compares the multi-object visualization methods, where we believe there is a much clearer conclusion. The novelty of our work is to combine quantitative and qualitative evaluations with a user study to empirically evaluate saliency explanations for object detectors, considering both object classification and bounding box regression decisions.

In general, saliency methods are prone to heavy criticism questioning their reliability. This study extends a few gradient-based saliency methods to detectors and conducts an extensive evaluation. However, we acknowledge that there are other prominent saliency methods to study. Our work evaluates and explains real-world object detectors without any toy examples. The literature has previously performed basic sanity checks on toy use cases that do not include multiple localization and classification outputs. In addition, object detectors are categorized on the basis of the number of stages (single-stage Liu et al. (2016); Tan et al. (2020) and two-stage Ren et al. (2017)), the availability of anchors (anchor-based Liu et al. (2016); Tan et al. (2020) and anchor-free Redmon et al. (2016); Tian et al. (2019)), and vision transformer-based detectors Carion et al. (2020); Beal et al. (2020). In this work, we explain detectors specific to certain groups (SSD512, Faster R-CNN, and EfficientDet) and leave anchor-free and transformer-based detectors for future work, as they are not trivial to explain using gradient-based saliency methods.

Broader Impact Statement. As concerns about AI safety increase, explainable machine learning is imperative to gain human trust and satisfy legal requirements. Any machine learning model used for human applications should be able to explain its predictions, in order to be audited, and to decide if the predictions are useful or further human processing is needed. Similarly, such explanations are pivotal to earning user trust, increasing applicability, and addressing safety concerns for complex object detection models.
We expect that our work can improve the explainability of object detectors by steering the community to explain all object detector decisions (bounding box and object classification), to visualize all saliency explanations in a single image per detector decision, and to evaluate the non-local effect of image pixels on particular detections. We believe that saliency methods can be used to partially debug object detection models. Consequently, saliency methods are useful to explain detectors and to address the trustworthiness and safety concerns in critical applications using detectors. However, additional validation of explanations is needed. We also perform sanity checks on object detectors [reference withheld due to double-blind submission] with similar conclusions and validation of saliency map quality. Additional large-scale user studies could be done to evaluate how useful these explanations are for humans, instead of just asking which explanation method is better. Even though fully white-box interpretable models would be the best solution (Rudin, 2019), this is not yet available at the model scale required for high object detection performance.

## References

Waleed Abdulla. Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow. GitHub, 2017. (Online, accessed on 20 September 2021).

Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In *6th International Conference on Learning Representations (ICLR) Conference Track Proceedings*, 2018.

Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus H. Gross. Gradient-Based Attribution Methods. In Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller (eds.), *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*, volume 11700 of *Lecture Notes in Computer Science (LNCS)*, pp. 169–191. Springer, Cham, 2019.

Elahe Arani, Shruthi Gowda, Ratnajit Mukherjee, Omar Magdy, Senthilkumar Sockalingam Kathiresan, and Bahram Zonooz. A comprehensive study of real-time object detection networks across multiple domains: A survey. *Transactions on Machine Learning Research*, 2022. Survey Certification.

Teresa Araújo, Guilherme Aresta, Adrian Galdran, Pedro Costa, Ana Maria Mendonça, and Aurélio Campilho. UOLO - Automatic Object Detection and Segmentation in Biomedical Images. In Danail Stoyanov, Zeike Taylor, Gustavo Carneiro, Tanveer F. Syeda-Mahmood, Anne L. Martel, Lena Maier-Hein, João Manuel R. S. Tavares, Andrew P. Bradley, João Paulo Papa, Vasileios Belagiannis, Jacinto C. Nascimento, Zhi Lu, Sailesh Conjeti, Mehdi Moradi, Hayit Greenspan, and Anant Madabhushi (eds.), *Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. DLMIA 2018 and ML-CDS 2018*, volume 11045 of *Lecture Notes in Computer Science (LNCS)*, pp. 165–173, Cham, 2018. Springer.

Octavio Arriaga, Matias Valdenegro-Toro, Mohandass Muthuraja, Sushma Devaramani, and Frank Kirchner. Perception for Autonomous Systems (PAZ). *Computing Research Repository (CoRR)*, abs/2010.14541, 2020.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. *PLOS ONE*, 10(7):1–46, 07 2015.

Jasmijn Bastings and Katja Filippova.
The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupala, Dieuwke Hupkes, Yuval Pinter, and Hassan Sajjad (eds.), Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP, pp. 149–155. Association for Computational Linguistics ACL, 2020. Josh Beal, Eric Kim, Eric Tzeng, Dong Huk Park, Andrew Zhai, and Dmitry Kislyuk. Toward transformerbased object detection. *CoRR*, abs/2012.09958, 2020. URL https://arxiv.org/abs/2012.09958. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-End Object Detection with Transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), *Computer Vision - ECCV 2020*, volume 12346 of Lecture Notes in Computer Science (LNCS), pp. 213–229. Springer, 2020. Finale Doshi-Velez and Been Kim. Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608, 2017. Arpad E Elo. *The Rating of Chess Players, Past and Present*. BT Batsford Limited, 1978. Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Evangelos Simoudis, Jiawei Han, and Usama M. Fayyad (eds.), Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD), pp. 226–231. AAAI Press, 1996. Di Feng, Christian Haase-Schütz, Lars Rosenbaum, Heinz Hertlein, Claudius Gläser, Fabian Timm, Werner Wiesbeck, and Klaus Dietmayer. Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. *IEEE Transactions on Intelligent Transportation* Systems (TITS), 22(3):1341–1360, 2021. Agnieszka Grabska-Barwinska, Amal Rannen-Triki, Omar Rivasplata, and András György. Towards better visual explanations for deep image classifiers. In *eXplainable AI approaches for debugging and diagnosis.*, 2021. Denis A. Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Yasunori Ishii, and Sotaro Tsukizawa. Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions. Computing Research Repository (CoRR), abs/1811.08011, 2018. Pan He, Weilin Huang, Tong He, Qile Zhu, Yu Qiao, and Xiaolin Li. Single Shot Text Detector with Regional Attention. In *2017 IEEE International Conference on Computer Vision (ICCV)*, pp. 3066–3074. Institute of Electrical and Electronics Engineers (IEEE), 2017. Sarthak Jain and Byron C. Wallace. Attention is not Explanation. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) Volume 1 (Long and Short Papers), pp. 3543–3556. Association for Computational Linguistics (ACL), 2019. Been Kim and Finale Doshi-Velez. Machine Learning Techniques for Accountability. *AI Magazine*, 42(1): 47–52, 2021. Jung Uk Kim, Sungjune Park, and Yong Man Ro. Towards Human-Like Interpretable Object Detection Via Spatial Relation Encoding. In *2020 IEEE International Conference on Image Processing (ICIP)*, pp. 3284–3288. Institute of Electrical and Electronics Engineers (IEEE), 2020. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In David J. 
Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), *Computer Vision - ECCV 2014*, volume 8693 of *Lecture Notes in Computer Science (LNCS)*, pp. 740–755, Cham, 2014. Springer.

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single Shot MultiBox Detector. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), *Computer Vision - ECCV 2016*, volume 9905 of *Lecture Notes in Computer Science (LNCS)*, pp. 21–37, Cham, 2016. Springer.

Scott M. Lundberg and Su-In Lee. A Unified Approach to Interpreting Model Predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), *Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS)*, NIPS'17, pp. 4768–4777. Curran Associates, Inc., 2017.

Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: Randomized Input Sampling for Explanation of Black-box Models. In *British Machine Vision Conference (BMVC)*, pp. 151. BMVA Press, 2018.

Vitali Petsiuk, Rajiv Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, and Kate Saenko. Black-box Explanation of Object Detectors via Saliency Maps. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 11443–11452, 2021.

Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 779–788. Institute of Electrical and Electronics Engineers (IEEE), 2016.

Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)*, 39(6):1137–1149, 2017.

Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi (eds.), *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1135–1144. Association for Computing Machinery (ACM), 2016.

Amir Rosenfeld, Richard S. Zemel, and John K. Tsotsos. The Elephant in the Room. *Computing Research Repository (CoRR)*, abs/1808.03305, 2018.

Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine Intelligence*, 1(5):206–215, 2019.

Cynthia Rudin and Kiri L. Wagstaff. Machine learning for science and society. *Machine Learning*, 95(1):1–9, 2014.

Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. *Proceedings of the IEEE*, 109(3):247–278, 2021.

Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. *International Journal of Computer Vision*, 128(2):336–359, 2020.

Sofia Serrano and Noah A. Smith. Is Attention Interpretable? In Anna Korhonen, David R. Traum, and Lluís Màrquez (eds.), *Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL)*, pp. 2931–2951. Association for Computational Linguistics (ACL), 2019.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International* Conference on Machine Learning (ICML) 2017, volume 70 of *Proceedings of Machine Learning Research*, pp. 3145–3153. Proceedings of Machine Learning Research (PMLR), 2017. Ravid Shwartz-Ziv and Naftali Tishby. Opening the Black Box of Deep Neural Networks via Information. Computing Research Repository (CoRR), abs/1703.00810, 2017. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations (ICLR) Workshop Track Proceedings, 2014. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. *Computing Research Repository (CoRR)*, abs/1706.03825, 2017. David Spiegelhalter. Should We Trust Algorithms? *Harvard Data Science Review*, 2(1), 2020. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for Simplicity: The All Convolutional Net. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations (ICLR) Workshop Track Proceedings, 2015. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic Attribution for Deep Networks. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning* (ICML) 2017, volume 70 of *Proceedings of Machine Learning Research*, pp. 3319–3328. Proceedings of Machine Learning Research (PMLR), 2017. Mingxing Tan, Ruoming Pang, and Quoc V. Le. EfficientDet: Scalable and Efficient Object Detection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10778–10787. Institute of Electrical and Electronics Engineers (IEEE), 2020. Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: fully convolutional one-stage object detection. In *2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South),* October 27 - November 2, 2019, pp. 9626–9635. IEEE, 2019. doi: 10.1109/ICCV.2019.00972. URL https://doi.org/10.1109/ICCV.2019.00972. Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, and Alun Preece. Sanity checks for saliency metrics. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 6021–6029, 2020. Hideomi Tsunakawa, Yoshitaka Kameya, Hanju Lee, Yosuke Shinya, and Naoki Mitsumoto. Contrastive Relevance Propagation for Interpreting Predictions by a Single-Shot Object Detector. In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. Institute of Electrical and Electronics Engineers (IEEE), 2019. Matias Valdenegro-Toro. Forward-Looking Sonar Marine Debris Datasets. GitHub, 2019. (Online accessed on 01 December 2021). Kiri L. Wagstaff. Machine Learning that Matters. In *Proceedings of the 29th International Conference on* Machine Learning (ICML) 2012. icml.cc / Omnipress, 2012. Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-Local Neural Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7794–7803. Institute of Electrical and Electronics Engineers (IEEE), 2018. Kristoffer Wickstrøm, Michael Kampffmeyer, and Robert Jenssen. 
Uncertainty and Interpretability in Convolutional Neural Networks for Semantic Segmentation of Colorectal Polyps. *Medical Image Analysis*, 60, 2020. Tianfu Wu and Xi Song. Towards Interpretable Object Detection by Unfolding Latent Structures. In *2019* IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6032–6042. Institute of Electrical and Electronics Engineers (IEEE), 2019. Éloi Zablocki, Hedi Ben-Younes, Patrick Pérez, and Matthieu Cord. Explainability of vision-based autonomous driving systems: Review and challenges. *Computing Research Repository (CoRR)*, abs/2101.05307, 2021. Matthew D. Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In David J. Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), *Computer Vision - ECCV 2014*, volume 8689 of *Lecture Notes in Computer Science (LNCS)*, pp. 818–833. Springer, 2014. Zhengxia Zou, Zhenwei Shi, Yuhong Guo, and Jieping Ye. Object Detection in 20 Years: A Survey. *Computing Research Repository (CoRR)*, abs/1905.05055, 2019.
Review 1:

Summary: This paper focuses on explaining salient parts of images for object detection. The authors proposed a framework which supports multiple types of object detectors and supports explaining both the classification and regression parts of detectors. The authors extended existing gradient-based methods to explain and visualize multiple detection outputs. On top of the proposed method, the authors designed and carried out a series of quantitative and qualitative methods to evaluate the explainability of detectors.

Strengths and Weaknesses:

Strengths:
* The proposed analysis covers multiple different detectors, and in great detail, including both classification and regression (even including the 4 edges of the bounding boxes).

Weaknesses:
* Some of the details in this paper are not very clear. For example, when discussing the "Realistic Evaluation Setting", the authors mentioned "matched to the interest detection by checking the same class and an IoU threshold greater than 0.9". It is not clear to me what the "interest detections" are.
* The discussed detectors are mostly old ones (especially Faster RCNN and SSD, which are almost 6 years old). Some recent detectors are not included, such as anchor-free and transformer-based detectors.
* It's not super clear to me how the proposed method can identify the reason for a detector failure. Figure 26 seems cherry-picked to me, and it is not clear why a dog with a long tail would be classified as a cat.
* In Figure 8.c: why does the AAUC differ a lot on y_min vs. y_max?
* Section 4 is a bit messy and includes too many details. I feel it somewhat misses a "core observation". Also, it's not very clear to me whether it is helpful to provide so many different settings (e.g. single-box vs realistic settings).
* Page 7: Section 4.2 seems to be incomplete, as the sentence ends with "and".

Requested Changes: Please see the weaknesses.

Broader Impact Concerns: No concerns.

==================================================

Review 2:

Summary: This paper proposes a way to compute 'explanations' for object detectors, i.e., models that output bounding boxes to localize an object in the scene as well as a class prediction for the particular object identified in the bounding box. The paper proposes to use saliency maps, i.e., approaches that give a relevance score for each dimension of the input towards the output prediction, as primitives for performing explanations. At a high level, the saliency maps selected in this work are gradient-based ones, so the map corresponds to the derivative of some output with respect to the input. The proposal here obtains this gradient for each object classification, as well as for each of the corresponding points in the bounding box around that object. The paper also proposes a way to aggregate these saliency maps (five per object) into a single canonical map that explains the decision of the object detector for a given image. A series of different evaluations are then conducted to justify the performance of the explanation methods. Overall, the paper makes a commendable effort towards explaining object detectors.

Strengths and Weaknesses: First, I'd like to apologize to the authors for a delayed review, especially in light of the critical nature of my review, and the changes I'd be asking for.

## Strengths
- As far as I can tell, this is one of the first works to address the task of explaining object detectors in a comprehensive way.
- The authors make a good faith effort towards assessing the scheme they propose in a variety of ways. First, they conduct a flipping experiment, and second, they perform a user study to assess whether users are satisfied with the output of an explanation method.
- The open source toolkit is quite helpful, and should be useful for others in the community.

## Weaknesses
While I commend the authors for their contributions, I think there are pretty severe limitations of the work as it currently stands.

- **Exposition**: It was difficult for me to understand how the saliency maps are computed, especially for $x_\mathrm{min}$, $x_\mathrm{max}$, $y_\mathrm{min}$, and $y_\mathrm{max}$. The last paragraph of Section 3.1 is the only one providing a high-level overview of how this is done. That is currently unclear. Specifically, the authors should clarify in this section what the exact scalar output is that is being used to compute the saliency maps for each of those coordinates. In addition, there is no discussion of how the canonical representation is computed in the main text. Overall, Section 3.1 is much too sparse to provide a useful overview of the DExT method proposed in this work.
- **The flipping explanation**: This paper does a commendable job of evaluating the saliency map approaches that it uses. However, I don't believe any of the approaches used to evaluate these methods get at any meaningful measure of explanation quality. First, there are now well-known criticisms of the Samek et al. flipping experiments. See the following papers: 1) Sanity checks for saliency maps, 2) Sanity checks for saliency metrics, 3) Sanity Simulations for Saliency Maps, and 4) Evaluating feature importance estimates. The crux of the discussion here is that a saliency method that is not faithful to the underlying detector being explained will still be able to get a high score on the flipping experiment. However, these methods will not outperform, say, randomly zeroing out coordinates of the input.
- **Issue with user studies on explanations**: I commend the authors here again for performing a user study. However, there is a critical flaw here too. First, it seems like the authors asked the users: "Which Robot's explanation is better?" However, it is now well-known that just because a user likes a particular explanation type does not mean that the model is actually relying on those portions of the image to detect the object in the bounding box. The key metric of interest is whether the explanation method is communicating to the user which features the model is relying on. However, whether the user 'likes' that explanation or not is orthogonal to the faithfulness issue. Again, this metric does not tell us whether SmoothGrad, GBP, and the other saliency maps are effective methods.

Overall, I think this is commendable work. However, the issues above give me strong reservations about the usefulness of the DExT approach and the insights in this paper.

Requested Changes: Here I provide feedback to the authors on how they can strengthen this work.

- **Exposition**: Section 3.1 is much too short and does not actually describe the DExT approach. An end-to-end scheme should be discussed here. In addition, Figure 3 provides an overview of the flow, but it is unclear to me how the last 3 steps in that figure are actually achieved. The authors should provide, in detail, either in the appendix or the main draft (ideally the latter), how exactly these saliency maps are computed.
- **Saliency Methods and Faithfulness**: The reliability of saliency approaches is still a hotly debated question and under strict scrutiny. As it stands, it is still unclear whether these approaches are actually effective. As a matter of fact, the evidence against the approaches used in this paper continues to accrue. For example, see the papers: "Do feature attribution methods correctly attribute features", "Do input gradients highlight discriminative features", and "Rethinking the Role of Gradient Based Attribution Methods for Model Interpretability." All these papers show that the saliency maps, including the ones used in this work, *do not* highlight the features that the model is relying on for its output. These approaches are often used for explaining classifiers and not object detectors, but there is no reason why these results should not hold here, since the backbone models used have been shown to suffer from such issues.

Here is what I would propose to rectify the above problems:
1) I am willing to accept the paper as is if the authors would acknowledge, prominently in the text, the related work showing limitations of the methods that they have used in the work. Both for the evaluation approaches, and for the methods themselves.
2) A series of alternative experiments that demonstrate the effectiveness of the saliency maps selected in this new setting. Asking for new experiments is time/cost prohibitive, so I understand that this is an extra burden on the authors. However, there is too much evidence in the literature now pointing to the limitations of these approaches to just accept this paper as is.

**New Experimental Evidence** I think any of the following experiments would provide evidence to back up the claims in this paper:
1) A toy setting where the ground-truth saliency map is known for each object in the scene. The authors can then train models to rely on a synthetic signal in the dataset that the detector must use to achieve 100 percent performance. The authors can then compare the computed saliency maps to the ground truth saliency maps, since they have forced the model to rely on a synthetic signal. See the papers I cited in the previous section for additional details. The paper "Sanity Simulations for Saliency Methods" is a good place to look for how to construct this kind of toy experiment. The critical thing here is that it must be a task where the ground truth saliency map is known ahead of time, **and** one must be able to train a model to rely on this ground truth signal.
2) The same experiment as above, but for adversarially robust detectors, to see whether the issue raised in the paper "Do input gradients highlight discriminative features" translates to the object detection setting.

I would be happy to clarify any of the points I raised in this section.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: The paper focuses on the problem of producing explanations for object detectors. Explanations that are robust and human-understandable are crucial to increase the trust of users in these models in safety-critical applications, such as autonomous driving and medical imaging. Building on a growing literature on explanation methods, the authors propose a methodology to provide saliency maps for both the bounding boxes and the prediction of the model, visualize explanations for multiple objects in a single output, and collect all this into a toolkit.
The contributions of the paper are the following:
* As mentioned above, a software package for producing explanations, evaluating them, and identifying errors. The code is provided with the paper.
* Extending gradient-based explanations to bounding boxes and predictions of object detectors.
* Merging all detection outputs into a single explanation, with a multi-object visualization procedure. The authors propose 4 approaches to operate this merging: principal components (drawing ellipses centered at the mass center depicting the minimum and maximum spread of saliency map densities), contours, density clustering, and convex polygons (covering the density-clustered saliency map pixels).
* An evaluation procedure, including both qualitative and quantitative analysis: the analysis considers all possible combinations of the selected detectors and saliency-map-producing approaches (see details in the next paragraph). For the quantitative analysis, the authors build on previously introduced pixel-removal- or insertion-based metrics, and identify 7 effects to track for a detector: maximum probability of the predicted class, Intersection over Union (IoU), distance moved by the bounding box, change in height of the bounding box, change in width of the bounding box, and change in the top-left coordinate of the bounding box in both directions. The last five effects are measured in pixels. Finally, they conducted a user study, assessing the preference of users among the tested approaches (detector × explanation × merging procedure).
* A brief description of failure detection in the appendix.

In order to conduct their study, the authors select 3 detectors: SSD512, Faster R-CNN, and EfficientDet-D0. These detectors use feature extraction backbones from different architectural families: VGG, ResNet, and EfficientNet. They also select 4 gradient-based approaches to produce saliency maps: guided backpropagation, integrated gradients, and the two approaches combined with SmoothGrad. They present results on 2 datasets: COCO and Marine Debris.

Strengths and Weaknesses:

Strengths:
* The topic of focus in the paper is important and of interest to the community.
* The paper is generally well structured and clear.
* The analysis conducted by the authors is thorough, studying the different combinations of models and approaches, and properly isolating the different factors that can impact the quality of the explanation.
* The discussion and proposal on evaluation from the perspective of detection models is of interest, and identifies important factors to track in this setting.
* The user study is well designed, and conducted with a particular consideration for different sources of biases.
* The toolkit provided with the paper is well structured and documented, with separate components for explanation, evaluation, visualization, and error analysis. It seems easy to reuse, but I haven't tested it myself.

Weaknesses:
* An important family of architectures seems missing from the study. Given the recent and growing interest in transformers, and the success of visual transformers for different vision tasks, including detection (see for example Carion, Nicolas, et al. "End-to-end object detection with transformers." European conference on computer vision 2020 and Beal, Josh, et al. "Toward transformer-based object detection." (2020)), excluding them from the study seems a limitation.
* The authors also omit more recent explanation approaches, such as FullGrad (Suraj Srinivas and Francois Fleuret.
"Full-gradient representation for neural network visualization". Advances in Neural Information Processing Systems (NeurIPS) 2019) without proper explanation or comparison. The authors state in the section 3.1 that "[guided backpropagation] is a simple and widely-used approach compared to other methods" and that "[integrated gradients] satisfy the implementation and sensitivity invariance axioms that are failed by various other state-of-the-art interpretation methods", but haven't given more details on these counterparts. * It has been observed in previous works that score-based pixel removal metrics could be hard to interpret, and can give scores to meaningless perturbations (e.g. random) that are of comparable scale to useful saliency maps based perturbations. Moreover, given the variability of behavior across classes, images and even different instances of the same category in a single image, aggregating metrics can be misleading. It can be more rigorous to provide statistical summaries based on per image/instance pairwise rankings of methods. Both aspects are discussed for example in Grabska-Barwinska, Agnieszka, et al. "Towards Better Visual Explanations for Deep Image Classifiers." 2021. * The contributions listed in the introduction are over-stated. For example, the error detection is only very briefly discussed in the appendix and the extension of the explanation methods to detection is not (or not well) described. The paper could benefit from some restructuring to increase clarity and impact (see suggestions below). * [Minor] Limited novelty, as the paper builds on existing explanation methods, but the easiness of use of the toolkit as well as the discussion of the different aspects of the work, and especially the evaluation of explanations, from the perspective of object detection make the contribution interesting. * [Minor] Some passages need rewriting to improve the clarity of the work. Examples: [P5] "By tracking the output box corresponding to detection explained the target neuron is selected." [P6] "Section 4.2 provides the quantitatively evaluates different detector and explanation method combinations." [P7 - end] "Evaluating detector explanations quantitatively provides immense understanding on selecting the explanation method and" Requested Changes: Building on the weaknesses in the previous section, here are some suggestions: * Review that writing of some paragraphs: a more clear statement of different contribution (e.g. how the authors extend explanation methods) can only improve the clarity and impact of the work. * It would be great to see some analysis using visual transformers. * It would also be interesting to see results building on FullGrad or similar approaches. Alternatively, the authors should explain more explicitly why they have been discarded from the study. * The authors can also consider a more robust evaluation (this would be a nice to have addition, but I don't see it as a requirement for acceptance). * The paper can be slightly restructured. For example, I would suggest to move the error detection paragraph to the main text, and provide more details on it. Providing a detailed proposal and reflection on this important topic can increase the impact of the paper and the interest of the audience. In the counterparts, there are multiple redundant figures in the different sections of the main paper. 
For the quantitative evaluation results, for example, I would suggest keeping Figure 11, moving Figure 17 out of the appendix, and moving the remaining curves to the appendix. Finally, I think it would be interesting to provide more details on the user study in the main text.

Broader Impact Concerns: Given the motivation of the work, which targets increasing the trustworthiness of detection methods for safety-critical applications, I think a discussion of the broader impact of the work would be a worthwhile addition to the paper. For example, the authors can discuss whether, and propose how, the error detection can help reduce safety and trustworthiness issues of the models of interest.

==================================================

Metareview:

Recommendation: Reject

Comment: Providing a library that produces saliency maps for object detection methods is a nice contribution to the community. However, for the reasons stated above and in the reviews, the reviewers and I feel that there is insufficient evidence in the paper to support the claim that such methods can be used to reliably diagnose detection failures. Note that TMLR does not have a "major revision" recommendation option. If the authors believe that they can address the reviewers' concerns with additional evidence, then I would be willing to receive a revised version of the paper. If this were done, I would strongly suggest shortening the paper to 12 pages, and increasing the clarity of the explanation of the methods and main conclusions. Note, the revision would go through another full round of reviews.

==================================================
# Indiscriminate Data Poisoning Attacks On Neural Networks∗

Yiwei Lu *yiwei.lu@uwaterloo.ca* University of Waterloo

Gautam Kamath† *g@csail.mit.edu* University of Waterloo

Yaoliang Yu‡ *yaoliang.yu@uwaterloo.ca* University of Waterloo

Reviewed on OpenReview: *https://openreview.net/forum?id=x4hmIsWu7e*

∗GK and YY are listed in alphabetical order. †Supported by an NSERC Discovery Grant, an unrestricted gift from Google, and a University of Waterloo startup grant. ‡Supported by an NSERC Discovery Grant, Canada CIFAR AI chair program and WHJIL.

## Abstract

Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective on neural networks. We present efficient implementations by parameterizing the attacker and allowing simultaneous and coordinated generation of tens of thousands of poisoned points, in contrast to most existing methods that generate poisoned points one by one. We further perform extensive experiments that empirically explore the effect of data poisoning attacks on deep neural networks. Our paper sets a new benchmark on the possibility of performing indiscriminate data poisoning attacks on modern neural networks.

## 1 Introduction

Adversarial attacks have repeatedly exposed critical vulnerabilities in modern machine learning (ML) models (Nelson et al., 2008; Szegedy et al., 2013; Kumar et al., 2020). As ML systems are deployed in increasingly important settings, significant effort has been levied in understanding attacks and defenses towards *robust* machine learning.

In this paper, we focus on *data poisoning attacks*. ML models require a large amount of data to achieve good performance, and thus practitioners frequently gather data by scraping content from the web (Gao et al., 2020; Wakefield, 2016). This gives rise to an attack vector, in which an adversary may manipulate part of the training data by injecting poisoned samples. For example, an attacker can *actively* manipulate datasets by sending corrupted samples directly to a dataset aggregator such as a chatbot, a spam filter, or user profile databases; the attacker can also *passively* manipulate datasets by placing poisoned data on the web and waiting for collection. Moreover, in *federated learning*, adversaries can also inject malicious data into a diffuse network (Shejwalkar et al., 2021; Lyu et al., 2020).

A spectrum of such data poisoning attacks exists in the literature, including targeted, *indiscriminate* and backdoor attacks. We focus on indiscriminate attacks for image classification, where the attacker aims at decreasing the overall test accuracy of a model by adding a small portion of poisoned points. Current indiscriminate attacks are most effective against convex models (Biggio et al., 2011; Koh & Liang, 2017; Koh et al., 2018; Shumailov et al., 2021), and several defenses have also been proposed (Steinhardt et al., 2017; Diakonikolas et al., 2019). However, existing poisoning attacks are less adequate against more complex non-convex models, especially deep neural networks, either because their formulations are inherently tied to convexity or due to computational limitations.
For example, many prior attacks generate poisoned points sequentially. Thus, when applied to deep models or large datasets, these attacks quickly become computationally infeasible. To our knowledge, a systematic analysis of indiscriminate data poisoning attacks on deep neural networks is still largely missing—a gap we aim to fill in this work.

To address this difficult problem, we design more versatile data poisoning attacks by formulating the problem as a non-zero-sum Stackelberg game, in which the attacker crafts some poisoned points with the aim of decreasing the test accuracy, while the defender optimizes its model on the poisoned training set. We exploit second-order information and apply the Total Gradient Descent Ascent (TGDA) algorithm to address the attacker's objective, even on non-convex models. We also examine the effectiveness of alternative formulations, including the simpler zero-sum setting as well as when the defender leads the optimization. Moreover, we address computational challenges by proposing an efficient architecture for poisoning attacks, where we parameterize the attacker as a separate network rather than optimizing the poisoned points directly. By applying TGDA to update the attacker model directly, we are able to generate tens of thousands of poisoned points simultaneously in one pass, potentially even in a coordinated way.

In this work, we make the following contributions:

- We construct a new data poisoning attack based on TGDA that incorporates second-order optimization. In comparison to prior data poisoning attacks, ours is significantly more effective and runs at least an order of magnitude faster.
- We summarize and classify existing data poisoning attacks (specifically, indiscriminate attacks) in both theoretical formulations and experimental settings.
- We propose an efficient attack architecture, which enables a more efficient clean-label attack.
- We conduct experiments to demonstrate the effectiveness of our attack on neural networks and its advantages over previous methods.

Notation. Throughout this paper, we denote training data as Dtr, validation data as Dv, test data as Dtest, and poisoned data as Dp. We use L to denote the leader in a Stackelberg game, ℓ for its loss function, x for its action, and θ for its model parameters (if they exist). Similarly, we use F to denote the follower, f for its loss function, and w for its model parameters. Finally, we use ε as the poison budget, namely that |Dp| = ε|Dtr|.

## 2 Background

In this section, we categorize existing data poisoning attacks according to the attacker's *power* and *objectives*, and specify the type of attack we study in this paper.

## 2.1 Power Of An Attacker

Injecting poisoned samples. Normally, without breaching the defender's database (i.e., changing the existing training data Dtr), an attacker can only *inject* poisoned data, actively or passively, into the defender's database, such that its objective is achieved when the model is retrained after collection. Such a situation may be realistic when the defender gathers data from several sources, some of which may be untrusted (e.g., when scraping data from the public Internet). The goal of the attacker can be presented as:

$$\mathbf{w}_{*}=\mathbf{w}_{*}({\mathcal{D}}_{p})\in\operatorname*{arg\,min}_{\mathbf{w}}\;{\mathcal{L}}({\mathcal{D}}_{tr}\cup{\mathcal{D}}_{p},\mathbf{w}),\tag{1}$$

where w∗ is the attacker's desired model parameter, which realizes the attacker's objective, and L(·) is the loss function.
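To make this threat model concrete, here is a minimal sketch of the defender's side of Equation (1): standard retraining on the union of the clean and injected data. The `model`, `loss_fn`, and dataset arguments are generic placeholders, not components prescribed by the paper.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def retrain(model, loss_fn, d_tr, d_p, epochs=10, lr=0.1):
    # Defender's problem: minimize L(D_tr ∪ D_p, w) by ordinary training;
    # the attacker influences w only through the injected dataset d_p.
    loader = DataLoader(ConcatDataset([d_tr, d_p]), batch_size=128, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model  # w_*(D_p) in the notation of Equation (1)
```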
We focus on such attacks and further categorize them in the next subsection.

Perturbing training data. Some work makes the assumption that the attacker can directly change the training data Dtr. This is perhaps less realistic, as it assumes the attacker has compromised the defender's database. We note that this threat model may be more applicable in an alternate setting, where the defender wishes to prevent the data from being used downstream to train a machine learning model. This research direction focuses on so-called *unlearnable examples* (Huang et al., 2021; Yu et al., 2021; Fowl et al., 2021b;a), and has faced some criticism that it provides "a false sense of security" (Radiya-Dixit et al., 2022). We provide more details of this line of research in Appendix B. In this paper, we focus on injecting poisoned samples, as it is a more realistic attack.

## 2.2 Objective Of An Attacker

Data poisoning attacks can be further classified into three categories according to the adversary's objective (Cinà et al., 2022; Goldblum et al., 2022).

Targeted attack. The attacker adds poisoned data Dp resulting in a w∗ such that a particular target example from the test set is misclassified as the *base* class (Shafahi et al., 2018; Aghakhani et al., 2020; Guo & Liu, 2020; Zhu et al., 2019). This topic is well studied in the literature, and we refer the reader to Schwarzschild et al. (2021) for an excellent summary of existing methods.

Backdoor attack. This attack aims at misclassifying any test input with a particular trigger pattern (Gu et al., 2017; Tran et al., 2018; Chen et al., 2018; Saha et al., 2020). Note that backdoor attacks require access to both the training data as well as the input at inference time to plant the trigger.

Indiscriminate attack. This attack aims to induce a parameter vector w∗ that broadly decreases the model utility. We consider image classification tasks where the attacker aims to reduce the overall classification accuracy. Existing methods make different assumptions on the attacker's knowledge:

- Perfect knowledge attack: the attacker has access to both training and test data (Dtr and Dtest), the target model, and the training procedure (e.g., the min-max attack of Koh et al. 2018).
- Training-only attack: the attacker has access to training data Dtr, the target model, and the training procedure (e.g., Muñoz-González et al. 2017; Biggio et al. 2012).
- Training-data-only attack: the attacker only has access to the training data Dtr (e.g., the label flip attack of Biggio et al. 2011).

In Appendix A we give a more detailed summary of the existing indiscriminate data poisoning attacks. In this work, we focus on training-only attacks because perfect knowledge attacks are not always feasible due to the proprietary nature of the test data, while existing training-data-only attacks are weak and often fail for deep neural networks, as we show in Section 5.

## 3 Total Gradient Descent Ascent Attack

In this section, we formulate the indiscriminate attack and introduce our attack algorithm. We first briefly introduce the Stackelberg game and then link it to data poisoning.

## 3.1 Preliminaries On Stackelberg Game

The Stackelberg competition is a strategic game in Economics in which two parties move sequentially (von Stackelberg, 1934).
Specifically, we consider two players, a leader L and a follower F, in a Stackelberg game, where the follower F chooses w to best respond to the action x of the leader L, through minimizing its loss function f:

$$\forall\,\mathbf{x}\in\mathbf{X}\subseteq\mathbb{R}^{d},\ \ \mathbf{w}_{*}(\mathbf{x})\in\operatorname*{arg\,min}_{\mathbf{w}\in\mathbf{W}}f(\mathbf{x},\mathbf{w}),\tag{2}$$

and the leader L chooses x to maximize its loss function ℓ:

$$\mathbf{x}_{*}\in\operatorname*{arg\,max}_{\mathbf{x}\in\mathbf{X}}\ell(\mathbf{x},\mathbf{w}_{*}(\mathbf{x})),\tag{3}$$

where (x∗, w∗(x∗)) is known as a Stackelberg equilibrium. We note that an early work of Liu & Chawla (2009) already applied the Stackelberg game formulation to learning a linear discriminant function, where an adversary perturbs the whole training set first. In contrast, we consider the poisoning problem where the adversary can only add a small amount of poisoned data to the *unperturbed* training set. Moreover, instead of the genetic algorithm in Liu & Chawla (2009), we solve the resulting Stackelberg game using gradient algorithms that are more appropriate for neural network models. The follow-up work of Liu & Chawla (2010) further considered a constant-sum simplification, effectively crafting unlearnable examples (see more discussion in Appendix B) for support vector machines and logistic regression. Finally, we mention the early work of Dalvi et al. (2004), who essentially considered a game-theoretic formulation of adversarial training. However, the formulation of Dalvi et al. (2004) relied on the notion of Nash equilibrium, where both players move simultaneously, while in their implementation the attacker perturbs the whole training set w.r.t. a fixed surrogate model (naive Bayes).

When f = ℓ we recover the zero-sum setting, where the problem can be written compactly as:

$$\operatorname*{max}_{\mathbf{x}\in\mathbf{X}}\ \operatorname*{min}_{\mathbf{w}\in\mathbf{W}}\ \ell(\mathbf{x},\mathbf{w}),\tag{4}$$

see, e.g., Zhang et al. (2021) and the references therein. For simplicity, we assume W = R^p and the functions f and ℓ are smooth, hence the follower problem is an instance of unconstrained smooth minimization.

## 3.2 On Data Poisoning Attacks

There are two possible ways to formulate data poisoning as a Stackelberg game, according to the acting order. Here we assume the attacker is the leader and acts first, and the defender is the follower. This assumption can easily be reversed such that the defender acts first. Both of these settings are realistic, depending on the defender's awareness of data poisoning attacks. We will show in Section 5 that the ordering of the two parties affects the results significantly.

Non-zero-sum formulation. In this section, we only consider the attacker as the leader, as the other case is analogous.
Here recall that the follower F (i.e., the defender) aims at minimizing its loss function f = L(Dtr ∪ Dp, w) under data poisoning:

$$\mathbf{w}_{*}=\mathbf{w}_{*}(\mathcal{D}_{p})\in\operatorname*{arg\,min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}),\tag{5}$$

while the leader L (i.e., the attacker) aims at maximizing a different loss function ℓ = L(Dv, w∗) on the validation set Dv:

$$\mathcal{D}_{p*}\in\operatorname*{arg\,max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\tag{6}$$

where the loss function L(·) can be any task-dependent target criterion, e.g., the cross-entropy loss. Thus we have arrived at the following non-zero-sum Stackelberg formulation of data poisoning attacks (a.k.a. a bilevel optimization problem, see e.g. Muñoz-González et al. 2017; Huang et al. 2020; Koh et al. 2018):

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\ \ \mathrm{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}).\tag{7}$$

Note that we assume that the attacker can inject εN poisoned points, where N = |Dtr| and ε is the power of the attacker, measured as a fraction of the training set size. We identify that Equation (7) is closely related to the formulation of unlearnable examples (Liu & Chawla, 2010; Huang et al., 2021; Yu et al., 2021; Fowl et al., 2021b;a; Sandoval-Segura et al., 2022; Fu et al., 2021):

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\ \ \mathrm{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{p},\mathbf{w}),\tag{8}$$

where Dp = {(xi + σi, yi)}, i = 1, ..., N, and σi is the bounded sample-wise or class-wise perturbation (∥σi∥p ≤ εσ). The main difference lies in the direct modification of the training set Dtr (often all of it). In comparison, adding poisoned points to a clean training set would never result in a 100% modification of the augmented training set. This seemingly minor difference can cause a significant difference in algorithm design and performance. We direct interested readers to Appendix B for details.

Previous approaches. Next, we mention three previous approaches for solving Equation (7).

(1) A direct approach: While the inner minimization can be solved via gradient descent, the outer maximization problem is non-trivial, as the dependence of L(Dv, w∗) on Dp is only *indirect*, through the parameter w of the poisoned model. Thus, applying simple algorithms (e.g., gradient descent ascent) directly would result in zero gradients in practice. Nevertheless, we can rewrite the desired derivative using the chain rule:

$$\frac{\partial{\mathcal{L}}({\mathcal{D}}_{v},\mathbf{w}_{*})}{\partial{\mathcal{D}}_{p}}=\frac{\partial{\mathcal{L}}({\mathcal{D}}_{v},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}}\,\frac{\partial\mathbf{w}_{*}}{\partial{\mathcal{D}}_{p}}.\tag{9}$$

The difficulty lies in computing ∂w∗/∂Dp, i.e., measuring how much the model parameter w changes with respect to the poisoned points Dp. Biggio et al. (2011) and Koh & Liang (2017) compute ∂w∗/∂Dp exactly via KKT conditions, while Muñoz-González et al. (2017) approximate it using gradient ascent. We observe that the Back-gradient attack in Muñoz-González et al. (2017) can be understood as the k-step unrolled gradient descent ascent (UGDA) algorithm in Metz et al.
(2017):

$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\nabla_{\mathbf{x}}\ell^{[k]}(\mathbf{x}_{t},\tilde{\mathbf{w}}_{t}),\tag{10}$$
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\nabla_{\mathbf{w}}f(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{11}$$

where ℓ[k](x, w) is the k-time composition, i.e., we perform k steps of gradient descent for the leader. Furthermore, Huang et al. (2020) propose to use a meta-learning algorithm for solving a similar bilevel optimization problem in targeted attacks, which can be understood as running UGDA for M models and taking the average.

(2) Zero-sum reduction: Koh et al. (2018) also proposed a reduced problem of Equation (7), where the leader and follower share the same loss function (i.e., f = ℓ):

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \operatorname*{min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}).\tag{12}$$

This relaxation enables attack algorithms to optimize the outer problem directly. However, this formulation may be problematic, as its training objective does not necessarily reflect its true influence on test data. For unlearnable examples, on the other hand, the zero-sum reduction is feasible, and might be the only viable approach. See Appendix B for more details. This problem is addressed by Koh et al. (2018) with the assumption that the attacker can acquire a *target* model parameter, usually using a label flip attack which considers a much larger poisoning fraction ε. By adding a constraint involving the target parameter wtar, the attacker can search for poisoned points that maximize the loss ℓ while keeping a low loss on wtar. However, such target parameters are hard to obtain since, as we will demonstrate, non-convex models appear to be robust to label flip attacks, and there are no guarantees that wtar is the solution of Equation (7).

(3) Fixed follower (model): Geiping et al. (2020) propose a gradient matching algorithm for crafting targeted poisoning attacks, which can also be easily adapted to *unlearnable examples* (Fowl et al., 2021a). This method fixes the follower and supposes it acquires the clean parameter w on clean data Dtr. We define a reversed function f′, where f′ can be the reversed cross-entropy loss for classification problems (Fowl et al., 2021a). As f′ discourages the network from classifying clean samples, one can mimic its gradient ∇wf′(w; Dtr) by adding poisoned data such that:

$$\nabla_{\mathbf{w}}f^{\prime}(\mathbf{w};\mathcal{D}_{tr})\approx\nabla_{\mathbf{w}}f(\mathbf{w};\mathcal{D}_{tr}\cup\mathcal{D}_{p}).\tag{13}$$

To accomplish this goal, Geiping et al. (2020) define a similarity function S for gradient matching, leading to the attack objective:

$$\mathcal{L}=S\big(\nabla_{\mathbf{w}}f^{\prime}(\mathbf{w};\mathcal{D}_{tr}),\ \nabla_{\mathbf{w}}f(\mathbf{w};\mathcal{D}_{tr}\cup\mathcal{D}_{p})\big),\tag{14}$$

where we minimize L w.r.t. Dp. This method has not yet been studied in the indiscriminate attack literature, but would be an interesting direction for future work.

TGDA attack.
In this paper, we solve Equation (7) and avoid the calculation of ∂w∗/∂Dp using the total gradient descent ascent (TGDA) algorithm (Evtushenko, 1974; Fiez et al., 2020).1 TGDA takes a total gradient ascent step for the leader and a gradient descent step for the follower:

$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\mathsf{D}_{\mathbf{x}}\ell(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{15}$$
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\nabla_{\mathbf{w}}f(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{16}$$

where

$$\mathsf{D}_{\mathbf{x}}:=\nabla_{\mathbf{x}}\ell-\nabla_{\mathbf{x}\mathbf{w}}f\,\nabla_{\mathbf{w}\mathbf{w}}^{-1}f\,\nabla_{\mathbf{w}}\ell$$

is the total derivative of ℓ with respect to x, which implicitly measures the change of w with respect to Dp. As ℓ does not involve the attacker's action directly, ∇xℓ vanishes and we can rewrite Dx := −∇xw f · ∇ww−1 f · ∇wℓ. Here, the product (∇ww−1 f · ∇wℓ) can be efficiently computed using conjugate gradient (CG) equipped with Hessian-vector products computed by autodiff. As CG is essentially a Hessian *inverse-free* approach (Martens, 2010), each step requires only linear time. Note that TGDA can also be treated as letting k → ∞ in UGDA. We thus apply the total gradient descent ascent algorithm and call this the **TGDA attack**.

Avoiding computing ∂w∗/∂Dp enables us to parameterize Dp and generate points indirectly by treating L as a separate model, namely Dp = Lθ(D′tr), where θ is the model parameter and D′tr is the part of the training set to be poisoned. Therefore, we can rewrite Equation (15) as:

$$\theta_{t+1}=\theta_{t}+\eta_{t}\mathsf{D}_{\theta}\ell(\theta_{t},\mathbf{w}_{t}).\tag{17}$$

Thus, we have arrived at a poisoning attack that generates Dp in a batch rather than individually, which greatly improves the attack efficiency (see Algorithm 1). Note that the TGA update does not depend on the choice of ε. This is a significant advantage over previous methods, as the running time does not increase when the attacker is allowed a larger budget of introduced poisoned points, thus enabling data poisoning attacks on larger training sets.

## Algorithm 1: TGDA Attack

Input: training set Dtr = {xi, yi}, i = 1, ..., N; validation set Dv; training steps T; attacker step size α; attacker number of steps m; defender step size β; defender number of steps n; poisoning fraction ε; leader L with θpre and ℓ = L(Dv, w∗); follower F with wpre and f = L(Dtr ∪ Dp, w).

1. Initialize the poisoned data set D0p ← {(x′1, y′1), ..., (x′εN, y′εN)}
2. for t = 1, ..., T do
3.   for i = 1, ..., m do
4.     θ ← θ + α Dθℓ(θ, wt)   // TGA on L
5.   for j = 1, ..., n do
6.     w ← w − β ∇wf(θ, w)   // GD on F
7. return model Lθ and poisoned set Dp = Lθ(D0p)

Necessity of Stackelberg game. Although Equation (7) is equivalent to the bilevel optimization problem in Muñoz-González et al. (2017); Huang et al. (2020); Koh et al. (2018), our sequential Stackelberg formulation is more suggestive of the data poisoning problem, as it reveals the subtlety in the order of the attacker and the defender.

1There are other possible solvers for Equation (7); we list them in Appendix C.

## 4 Implementation

In this section, we (1) discuss the limitations of existing data poisoning attacks and how to address them, and (2) set up an efficient attack architecture for the TGDA attack.
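Before turning to the implementation details, here is a minimal PyTorch sketch of the total gradient Dθℓ at the heart of line 4 of Algorithm 1, using conjugate gradient with Hessian-vector products as described above. This is a sketch under stated assumptions, not the authors' exact code: `f_loss` must be the follower loss evaluated on Dtr ∪ Lθ(D0p) (so that it depends on both θ and w), and `ell_loss` the leader loss on the validation set.

```python
import torch

def flat_grad(loss, params, create_graph=False):
    # Gradients of `loss` w.r.t. `params`, flattened into a single vector.
    grads = torch.autograd.grad(loss, params, create_graph=create_graph,
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def cg_solve(hvp_fn, b, iters=10):
    # Solve H x = b matrix-free by conjugate gradient, given an HVP oracle;
    # the Hessian itself is never materialized.
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r @ r
    for _ in range(iters):
        hp = hvp_fn(p)
        alpha = rs / (p @ hp + 1e-12)
        x, r = x + alpha * p, r - alpha * hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def total_grad_theta(f_loss, ell_loss, w_params, theta_params, cg_iters=10):
    # D_theta = -(grad^2_{theta w} f) (grad^2_{ww} f)^{-1} (grad_w ell).
    b = flat_grad(ell_loss, w_params)                     # grad_w ell
    gw = flat_grad(f_loss, w_params, create_graph=True)   # grad_w f, graph kept
    v = cg_solve(lambda u: flat_grad(gw @ u, w_params), b, cg_iters)
    # Mixed second derivative applied to v: differentiate <grad_w f, v> w.r.t. theta.
    return [-g for g in torch.autograd.grad(gw @ v, theta_params)]
```

The returned list can be used directly for the ascent step on θ; each Hessian-vector product is one extra backward pass, so the cost per CG iteration is linear in the number of follower parameters.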
Table 1: Summary of existing poisoning attack algorithms, evaluations, and their respective code. While some papers may include experiments on other datasets, we only cover vision datasets as our main focus is image classification. The attacks: Random label flip and Adversarial label flip attacks (Biggio et al., 2011), P-SVM: PoisonSVM attack (Biggio et al., 2011), Min-max attack (Steinhardt et al., 2017), KKT attack (Koh et al., 2018), i-Min-max: improved Min-max attack (Koh et al., 2018), MT: Model Targeted attack (Suya et al., 2021), BG: Back-gradient attack (Muñoz-González et al., 2017).

| Attack | Dataset | Model | |Dtr| | |Dtest| | ε | Code | Multiclass | Batch |
|------------------------|------------------|---------|------------|-----------|-------|--------|--------------|---------|
| Random label flip | toy | SVM | / | / | 0-40% | ✓ | ✓ | ε|Dtr| |
| Adversarial label flip | toy | SVM | / | / | 0-40% | ✓ | × | ε|Dtr| |
| P-SVM | MNIST-17 | SVM | 100 | 500 | 0-9% | ✓ | × | 1 |
| Min-max | MNIST-17/Dogfish | SVM | 60000 | 10000 | 0-30% | ✓ | ✓ | 1 |
| KKT | MNIST-17/Dogfish | SVM, LR | 13007/1800 | 2163/600 | 3% | ✓ | × | 1 |
| i-Min-max | MNIST | SVM | 60000 | 10000 | 3% | ✓ | ✓ | 1 |
| MT | MNIST-17/Dogfish | SVM, LR | 13007/1800 | 2163/600 | / | ✓ | ✓ | 1 |
| BG | MNIST | SVM, NN | 1000 | 8000 | 0-6% | ✓ | ✓ | 1 |

## 4.1 Current Limitations

We observe two limitations of existing data poisoning attacks.

Limitation 1: Inconsistent assumptions. We first summarize existing indiscriminate data poisoning attacks in Table 1, where we identify that such attacks work under subtly different assumptions on, for example, the attacker's knowledge, the attack formulation, and the training set size. These inconsistencies result in somewhat unfair comparisons between methods.

Solution: We set an experimental protocol for generalizing existing attacks and benchmarking data poisoning attacks for systematic analysis in the future. Here we fix three key variants: (1) the attacker's knowledge: as discussed in Section 2, we consider training-only attacks; (2) the attack formulation: in Section 3, we introduce three possible formulations, namely non-zero-sum, zero-sum, and zero-sum with target parameters. We will show in the experiment section that the latter two are ineffective against neural networks. (3) the dataset size: existing works measure attack efficacy with respect to the size of the poisoned dataset, where size is measured as a *fraction* ε of the training dataset. However, some works subsample and thus reduce the size of the training dataset. As we show in Figure 1, attack efficacy is not invariant to the size of the training set: larger training sets appear to be harder to poison. Furthermore, keeping ε fixed, a smaller training set reduces the number of poisoned data points and thus the time required for methods that generate points sequentially, potentially concealing a prohibitive runtime for poisoning the full training set. Thus we consider not only a fixed ε, but also the complete training set for attacks.

Figure 1: Comparing the efficacy of poisoning MNIST-17 with the PoisonSVM and Back-gradient attacks. The training set size is varied, while the ratio of the number of poisoned points to the training set size is fixed at 3%. These attacks become less effective as training set sizes increase.

Limitation 2: Running time. As discussed in Section 3, many existing attacks approach the problem by optimizing individual points directly, thus having to generate poisoned points one by one. Such an implementation takes an enormous running time (see Section 5) and does not scale to bigger models or datasets.

Solution: We design a new poisoning scheme that allows simultaneous and coordinated generation of Dp in batches, requiring only one pass. Thanks to the TGDA attack in Section 3, we can treat L as a separate model (typically a neural network such as an autoencoder) that takes part of Dtr as input and generates Dp correspondingly. Thus we fix the input and optimize only the parameters of L, as sketched below.
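A minimal sketch of one such parameterization follows, assuming MNIST-sized 28×28 grayscale inputs with labels appended as one-hot vectors; the architecture details here are illustrative placeholders rather than the exact network used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attacker(nn.Module):
    """Autoencoder-style generator L_theta: maps clean (image, label) pairs to
    poisoned images x_p in one batched pass; labels are inputs only and are
    never modified (a clean-label attack)."""
    def __init__(self, img_dim=28 * 28, n_classes=10, hidden=256):
        super().__init__()
        self.n_classes = n_classes
        self.encoder = nn.Sequential(nn.Linear(img_dim + n_classes, hidden),
                                     nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, img_dim),
                                     nn.Sigmoid())  # keep pixels in [0, 1]

    def forward(self, x, y):
        y_onehot = F.one_hot(y, self.n_classes).float()
        z = self.encoder(torch.cat([x.flatten(1), y_onehot], dim=1))
        return self.decoder(z).view_as(x)  # poisoned images x_p

# One forward pass generates the entire poisoned batch; labels stay untouched.
attacker = Attacker()
x0, y0 = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
x_p = attacker(x0, y0)
```

Because the number of attacker parameters is fixed, the cost of the TGA update is independent of how many poisoned points εN are requested.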
Thanks to the TGDA attack in Section 3, we can treat L as a separate model (typically a neural network such as an autoencoder) that takes part of Dtr as input and generates Dp correspondingly. Thus we fix the input and optimize only the parameters of L.

Figure 1: Comparing the efficacy of poisoning MNIST-17 with the PoisonSVM and Back-gradient attacks. The training set size is varied, while the ratio of the number of poisoned points to the training set size is fixed at 3%. These attacks become less effective as training set sizes increase.

## 4.2 A More Efficient Attack Architecture

Once we have fixed the attack assumptions and the poisoned data generation process, we are ready to specify the complete three-stage attack architecture, which enables us to compare poisoning attacks fairly. One can easily apply this unified framework to more advanced attacks in the future.

(1) Pretrain: The goals of the attacker L are to: (a) reduce the test accuracy (i.e., successfully attack); and (b) generate Dp that is close to Dtr (i.e., thwart potential defenses). The attacker achieves the first objective during the attack by optimizing ℓ. However, ℓ does not enforce that the distribution of the poisoned points will resemble that of the training set. To this end, we pretrain L to reconstruct Dtr, producing a parameter vector θpre. This process is identical to training an autoencoder. For the defender, we assume that F is fully trained to convergence. Thus we perform standard training on Dtr to acquire F with wpre. Here we record the performance of F on Dtest (denoted acc1 for image classification tasks) as the benchmark we are poisoning.

(2) Attack: We generate poisoned points using the TGDA attack. We assume that the attacker can inject εN poisoned points, where N = |Dtr| and ε is the power of the attacker, measured as a fraction of the training set size. We summarize the attack procedure in Figure 2.

Initialization: We take the pretrained model L with parameter θpre and F with pretrained parameter wpre as the initialization of the two networks; the complete training set Dtr; a validation set Dv; and part of the training set as the initialization of the poisoned points D0p = Dtr[0 : εN].

TGDA attack: In this paper, we run the TGDA attack to generate poisoned data, but it can be replaced by any suitable attack for comparison. Specifically, we follow Algorithm 1 and perform m steps of TGA updates for the attacker and n steps of GD updates for the defender in one pass. We discuss the role of m and n in Section 5. Note that previous works (e.g., Koh et al. 2018; Muñoz-González et al. 2017) choose n = 1 by default. However, we argue that this is not necessarily appropriate. When a system is deployed, the model is generally trained until convergence rather than for only a single step. Thus we recommend choosing a much larger n (e.g., n = 20 in our experiments) to better resemble the testing scenario.

Label Information: We specify that D0p = {(xi, yi)}, i = 1, ..., εN. Prior works (e.g., Koh et al. 2018; Muñoz-González et al. 2017) optimize x to produce xp, and perform a label flip on y to produce yp (more details in Appendix A). This approach neglects label information during optimization. In contrast, we fix yp = y, and concatenate x and y into D0p as input to L. Thus we generate poisoned points by taking the label information into account. We emphasize that we do not optimize or change the label during the attack, but merely use it to aid the construction of the poisoned xp. Thus, our attack can be categorized as clean label.

Figure 2: Our experimental protocol benchmarks data poisoning attacks. (1) Pretrain: the attacker and the defender are first trained on Dtr to yield a good autoencoder/classifier, respectively. (2) During the attack, the attacker generates the optimal θ∗ (and thus Dp) w.r.t. Dv and the optimal w∗; the defender generates the optimal w∗ w.r.t. D′tr = Dtr ∪ Dp and the optimal θ∗ (which mimics testing).
(3) Testing: Finally, we discuss how we measure the effectiveness of an attack. In a realistic setting, the testing procedure should be identical to the pretrain procedure, such that we can measure the effectiveness of Dp fairly. The consistency between pretraining and testing is crucial as the model F is likely to underfit with fewer training steps. Given the final θ, we produce the poisoned points Dp = Lθ(D0p) and train F from scratch on Dtr ∪ Dp. Finally, we acquire the performance of F on Dtest (denoted acc2 for image classification tasks). By comparing the discrepancy between pretraining and testing, acc1 − acc2, we can evaluate the efficacy of an indiscriminate data poisoning attack.

## 5 Experiments

We evaluate our TGDA attack on various models for image classification tasks and show the efficacy of our method for poisoning neural networks. In comparison to existing indiscriminate data poisoning attacks, we show that our attack is superior in terms of both effectiveness and efficiency. Specifically, our results confirm the following: (1) By applying the Stackelberg game formulation and incorporating second-order information, we can attack neural networks with improved efficiency and efficacy using the TGDA attack. (2) The efficient attack architecture further enables the TGDA attack to generate Dp in batches. (3) The poisoned points are visually similar to clean data, making the attack intuitively resistant to defenses.

## 5.1 Experimental Settings

Hardware and package: Experiments were run on a cluster with T4 and P100 GPUs. The platform we use is PyTorch (Paszke et al., 2019). Specifically, autodiff can be easily implemented using torch.autograd. As for the total gradient calculation, we follow Zhang et al. (2021) and apply conjugate gradient for calculating Hessian-vector products.

Dataset: We consider image classification on MNIST (Deng, 2012) (60,000 training and 10,000 test images) and CIFAR-10 (Krizhevsky, 2009) (50,000 training and 10,000 test images). We are not aware of prior work that performs indiscriminate data poisoning on a dataset more complex than MNIST or CIFAR-10, and, as we will see, even these settings give rise to significant challenges in designing efficient and effective attacks. Indeed, some prior works consider only a simplified subset of MNIST (e.g., binary classification on 1's and 7's, or subsampling the training set to 1,000 points) or CIFAR-10 (e.g., binary classification on dogs and fish). In contrast, we set a benchmark by using the full datasets for multiclass classification.

Training and validation set: During the attack, we need to split the clean training data into the training set Dtr and validation set Dv. Here we split the data into 70% training and 30% validation. Thus, for the MNIST dataset, we have |Dtr| = 42000 and |Dv| = 18000; for the CIFAR-10 dataset, we have |Dtr| = 35000 and |Dv| = 15000.
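As a concrete illustration of stage (1) above, here is a minimal PyTorch sketch of an MNIST-scale attacker L (three fully connected layers with leaky ReLU, consuming an image concatenated with its one-hot label, matching the description in the next subsection) and its reconstruction pretraining. The class and function names, layer widths, and the MSE reconstruction loss are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attacker(nn.Module):
    """Attacker L for MNIST: maps (image, one-hot label) to a poisoned image."""
    def __init__(self, img_dim=784, num_classes=10, hidden=256):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(img_dim + num_classes, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, img_dim), nn.Sigmoid(),  # keep pixels in [0, 1]
        )

    def forward(self, x, y):
        y_onehot = F.one_hot(y, num_classes=self.num_classes).float()
        return self.net(torch.cat([x.flatten(1), y_onehot], dim=1))

def pretrain_attacker(L, loader, epochs=100, lr=0.1):
    """Stage (1): pretrain L to reconstruct the clean training images,
    exactly like training an autoencoder, yielding theta_pre."""
    opt = torch.optim.SGD(L.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x = x.flatten(1)
            loss = F.mse_loss(L(x, y), x)  # reconstruction objective on D_tr
            opt.zero_grad()
            loss.backward()
            opt.step()
    return {k: v.detach().clone() for k, v in L.state_dict().items()}
```

The defender F is pretrained separately by ordinary supervised training on Dtr to obtain wpre and the benchmark accuracy acc1; in stage (3), F is retrained from scratch on Dtr ∪ Dp and acc2 is read off the test set.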
Attacker models and Defender models: (1) For the attacker model on MNIST, we use a three-layer neural network, with three fully connected layers and leaky ReLU activations; for CIFAR-10, we use an autoencoder with three convolutional layers and three transposed-convolution layers. The attacker takes the concatenation of the image and the label as input, and generates the poisoned points. (2) For the defender, we examine three target models for MNIST: logistic regression (LR), a neural network (NN) with three layers, and a convolutional neural network (CNN) with two convolutional layers, max pooling, and one fully connected layer; for CIFAR-10 we examine only the CNN model and ResNet-18 (He et al., 2016) (as CIFAR-10 contains RGB images).

Hyperparameters: (1) Pretrain: we use a batch size of 1,000 for MNIST and 256 for CIFAR-10, and optimize the network using our own implementation of gradient descent with torch.autograd. We choose a learning rate of 0.1 and train for 100 epochs. (2) Attack: for the attacker, we choose α = 0.01 and m = 1 by default; for the defender, we choose β = 0.1 and n = 20 by default. We set the batch size to 1,000 for MNIST and 256 for CIFAR-10, and train for 200 epochs, where the attacker is updated using total gradient ascent and the defender is updated using gradient descent. We follow Zhang et al. (2021) and implement TGA using conjugate gradient. We choose the poisoning fraction ε = 3% by default. Note that choosing a bigger ε will not increase our running time, but we choose a small ε to resemble the realistic setting in which the attacker is limited in their access to the training data. (3) Testing: we choose exactly the same setting as pretraining to keep the defender's training scheme consistent.

Table 2: The attack accuracy/accuracy drop (%) and attack running time (hours) on the MNIST dataset. We only record the attack running time since pretrain and testing time are fixed across different methods. As the label flip attack does not involve optimization, its running time is always 0. We take three different runs for TGDA to get the mean and the standard deviation. Our attack outperforms the Min-max, i-Min-max and Back-gradient attacks in terms of both effectiveness and efficiency across neural networks.

| Model | Clean Acc | Label Flip Acc/Drop (Time) | Min-max Acc/Drop (Time) | i-Min-max Acc/Drop (Time) | BG Acc/Drop (Time) | TGDA (ours) Acc/Drop (Time) |
|-------|-----------|----------------------------|--------------------------|----------------------------|---------------------|------------------------------|
| LR | 92.35 | 90.83 / 1.52 (0 hrs) | 89.80 / 2.55 (0.7 hrs) | 89.56 / 2.79 (19 hrs) | 89.82 / 2.53 (27 hrs) | 89.56 / 2.79±0.07 (1.1 hrs) |
| NN | 98.04 | 97.99 / 0.05 (0 hrs) | 98.07 / -0.03 (13 hrs) | 97.82 / 0.22 (73 hrs) | 97.67 / 0.37 (239 hrs) | 96.54 / 1.50±0.02 (15 hrs) |
| CNN | 99.13 | 99.12 / 0.01 (0 hrs) | 99.55 / -0.42 (63 hrs) | 99.05 / 0.06 (246 hrs) | 99.02 / 0.09 (2153 hrs) | 98.02 / 1.11±0.01 (75 hrs) |

Baselines: There is a spectrum of data poisoning attacks in the literature. However, due to their attack formulations, only a few attacks can be directly compared with our method; see Table 1 in Appendix A for a complete summary. For instance, the PoisonSVM (Biggio et al., 2011) and KKT (Koh et al., 2018) attacks can only be applied to convex models for binary classification; the Min-max (Steinhardt et al., 2017) and Model Targeted (Suya et al., 2021) attacks can only be applied to convex models.
However, it is possible to modify the Min-max (Steinhardt et al., 2017) and i-Min-max (Koh et al., 2018) attacks to attack neural networks. Moreover, we compare with two baseline methods that can originally attack neural networks: the Back-gradient attack (Muñoz-González et al., 2017) and the Label flip attack (Biggio et al., 2011). It is also possible to apply certain targeted attack methods (e.g., MetaPoison, Huang et al. 2020) in the context of indiscriminate attacks. Thus we compare with MetaPoison on CIFAR-10 under our unified architecture. We follow Huang et al. (2020) and choose K = 2 unrolled inner steps, 60 outer steps, and an ensemble of 24 inner models.

## 5.2 Comparison With Benchmarks

MNIST. We compare our attack with the Min-max, i-Min-max, Back-gradient and Label flip attacks with ε = 3% on MNIST in Table 2. Since the Min-max, i-Min-max, and Back-gradient attacks rely on generating poisoned points sequentially, we cannot adapt them into our unified architecture and instead run their code directly for comparison. For the label flip attack, we flip the label according to the rule y ← 10 − y, as there are 10 classes in MNIST. We observe that the label flip attack, though very efficient, is not effective against neural networks. The Min-max attack, due to its zero-sum formulation, does not work on neural networks. The i-Min-max attack is effective against LR, but performs poorly on neural networks, where the assumption of convexity fails. Although Muñoz-González et al. (2017) show empirically that the Back-gradient attack is effective when attacking subsets of MNIST (1,000 training samples, 5,000 testing samples), we show that the attack is much less effective on the full dataset. We also observe that the complexity of the target model affects the attack effectiveness significantly. Specifically, we find that neural networks are generally more robust against indiscriminate data poisoning attacks, and among them, the CNN architecture is even more robust. Overall, our method outperforms the baseline methods across the three target models. Moreover, with our unified architecture, we significantly reduce the running time of poisoning attacks.

CIFAR-10. We compare our attack with the Label flip attack and the MetaPoison attack with ε = 3% on CIFAR-10 in Table 3. We omit comparison with the Back-gradient attack as it is too computationally expensive to run on CIFAR-10. We observe that running the TGDA attack following Algorithm 1 directly is computationally expensive on large models (e.g., ResNet, He et al. 2016). However, it is possible to run TGDA on such models by slightly changing Algorithm 1: we split the dataset into 8 partitions and run TGDA separately on different GPUs. This simple trick enables us to poison deeper models, and we find it works well in practice. We observe that the TGDA attack is very effective at poisoning the CNN and ResNet-18 architectures. Also, MetaPoison is a more efficient attack (meta-learning with two unrolled steps is much quicker than calculating the total gradient), but since its original objective is to perform targeted attacks, its application to indiscriminate attacks is not effective. Moreover, the difference between the efficacy of the TGDA attack on MNIST and CIFAR-10 suggests that indiscriminate attacks may be dataset dependent, with MNIST being harder to poison than CIFAR-10.

Table 3: The attack accuracy/accuracy drop (%) and attack running time (hours) on CIFAR-10. Note that TGDA experiments are performed on 8 GPUs for parallel training. We take three different runs for TGDA and MetaPoison to get the mean and the standard deviation.
| Model | Clean Acc | Label Flip Acc/Drop (Time) | MetaPoison Acc/Drop (Time) | TGDA (ours) Acc/Drop (Time) |
|-----------|-----------|----------------------------|------------------------------|------------------------------|
| CNN | 69.44 | 68.99/0.45 (0 hrs) | 68.14/1.13±0.12 (35 hrs) | 65.15/4.29±0.09 (42 hrs) |
| ResNet-18 | 94.95 | 94.79/0.16 (0 hrs) | 92.90/2.05±0.07 (108 hrs) | 89.41/5.54±0.03 (162 hrs) |

Table 4: Comparing the TGDA attack with different orders, attacker as the leader and defender as the leader, in terms of test accuracy/accuracy drop (%). Attacks are more effective when the attacker is the leader.

| Target Model | Clean | Attacker as leader | Defender as leader |
|--------------|-------|--------------------|--------------------|
| LR | 92.35 | 89.56 / 2.79 | 89.79 / 2.56 |
| NN | 98.04 | 96.54 / 1.50 | 96.98 / 1.06 |
| CNN | 99.13 | 98.02 / 1.11 | 98.66 / 0.47 |

## 5.3 Ablation Studies

To better understand our TGDA attack, we perform ablation studies on the order in the Stackelberg game, the attack formulation, roles in our unified attack framework, and the choice of hyperparameters. For computational considerations, we run all ablation studies on the MNIST dataset unless specified. Furthermore, we include an empirical comparison with *unlearnable examples* in Appendix B.

Who acts first. In Section 3, we assume that the attacker is the leader and the defender is the follower, i.e., that the attacker acts first. Here, we examine the outcome of reversing the order, where the defender acts first. Table 4 shows the comparison. We observe that across all models, reversing the order results in a less effective attack. This result shows that even without any defense strategy, the target model would be more robust if the defender acts one step ahead of the attacker.

Figure 3: We visualize the poisoned data generated by the TGDA attack with/without pretraining the leader L on the MNIST dataset.

Attack formulation. In Section 3, we discuss a relaxed attack formulation, where ℓ = f and the game is zero-sum. We perform experiments on this setting and show results in Table 5. We observe that the non-zero-sum formulation is significantly more effective, and in some cases, the zero-sum setting actually *increases* the accuracy after poisoning. We also find that using target parameters would not work for neural networks, as they are robust to label flip attacks even when ε is large. We ran a label flip attack with ε = 100% and observed only 0.1% and 0.07% accuracy drops on the NN and CNN architectures, respectively. This provides further evidence that neural networks are robust to massive label noise, as previously observed by Rolnick et al. (2017).

Role of pretraining. In Section 4, we propose two desired properties of L, among which L should generate Dp that is visually similar to Dtr. This requires pretraining L to reconstruct images. We perform experiments without pretraining L to examine its effect on the attack.
Figure 3 confirms that without pretraining, the attacker generates images that are visually different from the Dtr distribution, and thus fragile to possible defenses. Moreover, Table 6 indicates that without pretraining L, the attack is also ineffective. Thus we have demonstrated the necessity of the visual similarity between Dp and Dtr.

Different ε. We have set ε = 3% in the previous experiments. However, unlike prior methods which generate points one at a time, the running time of our attack does not scale with ε, and thus we can consider significantly larger ε and compare with other feasible methods. Figure 4 shows that attack efficacy increases with ε, but the accuracy drop is significantly less than ε when ε is very large. Moreover, TGDA outperforms the baseline methods across any choice of ε.

Table 5: Comparing the TGDA attack with different formulations, non-zero-sum and zero-sum, in terms of test accuracy/accuracy drop (%). The non-zero-sum formulation is more effective at generating poisoning attacks.

| Target Model | Clean | Non Zero-sum | Zero-sum |
|----------------|---------|----------------|---------------|
| LR | 92.35 | 89.56 / 2.79 | 92.33 / 0.02 |
| NN | 98.04 | 96.54 / 1.50 | 98.07 / -0.03 |
| CNN | 99.13 | 98.02 / 1.11 | 99.55 / -0.42 |

Table 6: Comparing the TGDA attack with/without pretraining the attacker L in terms of test accuracy/accuracy drop (%). Pretraining strongly improves attack efficacy.

| Target Model | Clean | With Pretrain | Without Pretrain |
|----------------|---------|-----------------|--------------------|
| LR | 92.35 | 89.56 / 2.79 | 92.09 / 0.26 |
| NN | 98.04 | 96.54 / 1.50 | 97.47 / 0.57 |
| CNN | 99.13 | 98.02 / 1.11 | 98.72 / 0.41 |

Number of steps m and n. We discuss the choice of m and n, the numbers of steps of L and F, respectively. We compare several choices of m and n in Table 7. We observe that 20 steps of TGA and 1 step of GD result in the most effective attack. This indicates that when m > n, the outer maximization problem is better solved with more TGA updates. However, setting 4 (m = 20, n = 1) takes 10 times more computation than setting 3 (m = 1, n = 20), due to the fact that the TGA update is expensive. When m = n = 1, the attack is less effective as the defender might not be fully trained to respond to the attack. When n = 0, the attack is hardly effective at all as the target model is not retrained. We conclude that different choices of m and n result in a trade-off between effectiveness and efficiency.

Cold-Start. The methods we compare in this work all belong to the partial warm-start category for bilevel optimization (Vicol et al., 2022). It is also possible to formulate the cold-start Stackelberg game for data poisoning. Specifically, we follow Vicol et al. (2022) such that in Algorithm 1, the follower update is modified to w ← wpre − β∇wf(θ, w). We report the results on the MNIST dataset in Table 8. We observe that in indiscriminate data poisoning, partial warm-start is overall a better approach than cold-start, and our outer problem (an autoencoder for generating poisoned points) does not appear to be highly over-parameterized.

## 5.4 Visualization Of Attacks

Finally, we visualize some poisoned points Dp generated by the TGDA attack in Figure 5. The poisoning samples against NN and CNN are visually very similar to Dtr, as our attack is a clean label attack (see Section 4). Moreover, we evaluate the magnitude of perturbation by calculating the maximum pixel-level difference.
Both visual similarity and magnitude of perturbation provide heuristic evidence that the TGDA attack may be robust against data sanitization algorithms. Note that Dp against LR is visually distinguishable; the reason behind this discrepancy between the convex model and the neural networks may be that the attacker L is not expressive enough to generate extremely strong poisoning attacks against neural networks.

Figure 4: Accuracy drop induced by our TGDA poisoning attack and baseline methods versus ε (left three figures: MNIST; right two figures: CIFAR-10). Attack efficacy increases modestly with ε. Note that when ε = 1, only 50% of the training set is filled with poisoned data.

Table 7: Comparing different numbers of steps of the attacker (m) and defender (n) in terms of test accuracy/accuracy drop (%). Many attacker steps and a single defender step produce the most effective attacks.

| Model | Clean | m = 1, n = 0 | m = 1, n = 1 | m = 1, n = 20 | m = 20, n = 1 | m = n = 20 |
|-------|-------|--------------|--------------|---------------|---------------|--------------|
| LR | 92.35 | 92.30 / 0.05 | 89.97 / 2.38 | 89.56 / 2.79 | 89.29 / 3.06 | 89.77 / 2.57 |
| NN | 98.04 | 98.02 / 0.02 | 97.03 / 1.01 | 96.54 / 1.50 | 96.33 / 1.71 | 96.85 / 1.19 |

Table 8: Comparing the TGDA attack with partial warm-start (our original setting) and cold-start in terms of test accuracy/accuracy drop (%). Cold-start training is less effective overall.

| Model | Clean | Partial Warm-start | Cold-start |
|-------|-------|----------------------|--------------|
| LR | 92.35 | 89.56 / 2.79 | 89.84 / 2.41 |
| NN | 98.04 | 96.54 / 1.50 | 96.77 / 1.27 |
| CNN | 99.13 | 98.02 / 1.11 | 98.33 / 0.80 |

## 5.5 Transferability Of The TGDA Attack

Even for training-only attacks, the assumption on the attacker's knowledge can be too strong. Thus we study the scenario in which the attacker has limited knowledge of the defender's model F and training process, and has to use a surrogate model to simulate the defender. We report the transferability of the TGDA attack across different surrogate models in Table 9. We observe that poisoned points generated against LR and NN have a much lower impact on other models, while applying CNNs as the surrogate model is effective on all models.

## 5.6 Against Defenses

To further evaluate the robustness of the TGDA attack against data sanitization algorithms: (a) We perform the loss defense (Koh et al., 2018) by removing 3% of training points with the largest loss. We compare with pGAN (Muñoz-González et al., 2019), which includes a constraint on the similarity between the clean and poisoned samples, and is thus inherently robust against defenses. In Table 10, we observe that although we do not add an explicit constraint on detectability in our loss function, our method still reaches robustness against such defenses comparable with pGAN.

Figure 5: We visualize the poisoned data generated by the TGDA attack and report the magnitude of perturbation (left: CIFAR-10; right: MNIST).

Table 9: Transferability experiments on MNIST: accuracy drop (%) of each target model when poisoned points are generated against each surrogate model.

| Surrogate \ Target | LR | NN | CNN |
|--------------------|------|------|------|
| LR | 2.79 | 0.12 | 0.27 |
| NN | 0.13 | 1.50 | 0.62 |
| CNN | 3.22 | 1.47 | 1.11 |
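To make the loss defense of item (a) concrete, here is a minimal sketch of the sanitization step, assuming standard PyTorch dataset and model objects; the function name and the per-sample evaluation loop are our own illustration.

```python
import torch
from torch.utils.data import Subset

def loss_defense(model, dataset, loss_fn, frac=0.03):
    """Sanitize a (possibly poisoned) dataset by discarding the `frac`
    fraction of points with the largest loss under the trained model,
    in the spirit of Koh et al. (2018)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in dataset:
            out = model(x.unsqueeze(0))                      # add batch dimension
            losses.append(loss_fn(out, torch.tensor([y])).item())
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    keep = order[: int(len(dataset) * (1 - frac))]           # drop the top-loss tail
    return Subset(dataset, keep)
```

The defender would then retrain on the returned subset; the influence and Sever defenses discussed next follow the same remove-and-retrain template but score points by influence and by the top singular value of the gradient matrix, respectively.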
Table 10: Comparison with pGAN on MNIST with the loss defense: accuracy drop (%) without/with the defense.

| Target Model | TGDA (wo/w defense) | pGAN (wo/w defense) |
|--------------|---------------------|----------------------|
| LR | 2.79 / 2.56 | 2.52 / 2.49 |
| NN | 1.50 / 1.49 | 1.09 / 1.07 |
| CNN | 1.11 / 1.104 | 0.74 / 0.73 |

Table 11: TGDA attack on MNIST with the Influence/Sever/MaxUp defenses: accuracy drop (%) without/with each defense.

| Model | Influence (wo/w defense) | Sever (wo/w defense) | MaxUp (wo/w defense) |
|-------|--------------------------|-----------------------|-----------------------|
| LR | 2.79 / 2.45 | 2.79 / 2.13 | 2.79 / 2.77 |
| NN | 1.50 / 1.48 | 1.50 / 1.32 | 1.50 / 1.50 |
| CNN | 1.11 / 1.10 | 1.11 / 0.98 | 1.11 / 1.11 |

(b) Other defenses remove suspicious points according to their effect on the learned parameters, e.g., through influence functions (influence defense in Koh & Liang 2017) or gradients (Sever in Diakonikolas et al. 2019). Specifically, the influence defense removes 3% of training points with the highest influence, defined using their gradients and Hessian-vector products (Koh & Liang, 2017); Sever removes 3% of training points with the highest outlier scores, defined using the top singular value of the matrix of gradients. Here we examine the robustness of TGDA against these two strong defenses. We observe in Table 11 that TGDA is robust against the influence defense, but its effectiveness is significantly reduced by Sever. Thus, we conclude that Sever is potentially a good defense against the TGDA attack, and breaking its sanitization might require a special design (e.g., an explicit constraint on the singular value).

(c) We examine the robustness of our TGDA attack against strong data augmentations, e.g., the MaxUp defense of Gong et al. (2020), for which we follow the implementation at https://github.com/Yunodo/maxup. In a nutshell, MaxUp generates a set of augmented data with random perturbations and then aims at minimizing the worst-case loss over the augmented data. This training technique addresses overfitting and serves as a possible defense against adversarial examples. However, it is not clear whether MaxUp is a good defense against indiscriminate data poisoning attacks. Thus, we implement MaxUp under our testing protocol, where we add random perturbations to the training and the poisoned data, i.e., Dtr ∪ Dp, and then minimize the worst-case loss over the augmented set. We report the results in Table 11, where we observe that even though MaxUp is a good defense against adversarial examples, it is not readily an effective defense against indiscriminate data poisoning attacks. Part of the reason, we believe, is that in our formulation the attacker anticipates the retraining done by the defender, in contrast to the adversarial example setting.

## 6 Conclusions

While indiscriminate data poisoning attacks have been well studied under various formulations and settings on convex models, non-convex models remain significantly underexplored. Our work serves as a first exploration into poisoning neural networks under a unified architecture. While prior state-of-the-art attacks failed at this task due to either the attack formulation or a computationally prohibitive algorithm, we propose a novel Total Gradient Descent Ascent (TGDA) attack by exploiting second-order information, which enables generating thousands of poisoned points in only one pass. We perform experiments on (convolutional) neural networks and empirically demonstrate the feasibility of poisoning them.
Moreover, the TGDA attack produces poisoned samples that are visually indistinguishable from unpoisoned data (i.e., it is a clean-label attack), which is desired in the presence of a curator who may attempt to sanitize the dataset.

Our work has some limitations. While our algorithm is significantly faster than prior methods, it remains computationally expensive to poison deeper models such as ResNet, or larger datasets such as ImageNet. Similarly, while our attacks are significantly more effective than prior methods, we would ideally like a poison fraction of ε to induce an accuracy drop far larger than ε, as appears to be possible for simpler settings (Lai et al., 2016; Diakonikolas et al., 2016; 2019). We believe our work will set an effective benchmark for future work on poisoning neural networks.

## Acknowledgement

We thank the action editor and reviewers for the constructive comments and additional references, which have greatly improved our presentation and discussion.

## References

Hojjat Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, and Giovanni Vigna. Bullseye polytope: A scalable clean-label poisoning attack with improved transferability. arXiv preprint arXiv:2005.00191, 2020.

Samyadeep Basu, Philip Pope, and Soheil Feizi. Influence functions in deep learning are fragile. In *International Conference on Learning Representations (ICLR)*, 2021.

Battista Biggio, Blaine Nelson, and Pavel Laskov. Support vector machines under adversarial label noise. In *Proceedings of the Asian Conference on Machine Learning (ACML)*, pp. 97–112, 2011.

Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In *Proceedings of the 29th International Conference on Machine Learning (ICML)*, pp. 1467–1474, 2012.

Mengjie Chen, Chao Gao, and Zhao Ren. Robust covariance and scatter matrix estimation under Huber's contamination model. *The Annals of Statistics*, 46(5):1932–1960, 2018. URL https://doi.org/10.1214/17-AOS1607.

Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, and Fabio Roli. Wild patterns reloaded: A survey of machine learning security against training data poisoning. *arXiv preprint arXiv:2205.01992*, 2022.

Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, and Deepak Verma. Adversarial classification. In *Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining*, pp. 99–108, 2004. URL https://dl.acm.org/doi/abs/10.1145/1014052.1014066.

Li Deng. The MNIST database of handwritten digit images for machine learning research [best of the web]. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.

Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high dimensions without the computational intractability. In *Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science*, pp. 655–664, 2016.

Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta-algorithm for stochastic optimization. In *Proceedings of the 36th International Conference on Machine Learning*, pp. 1596–1606, 2019.

Ürün Dogan, Tobias Glasmachers, and Christian Igel. A unified view on multi-class support vector classification. *Journal of Machine Learning Research*, 17(45):1–32, 2016.

Yu. G. Evtushenko.
Iterative methods for solving minimax problems. *USSR Computational Mathematics* and Mathematical Physics, 14(5):52–63, 1974. Tanner Fiez, Benjamin Chasnov, and Lillian J Ratliff. Implicit learning dynamics in Stackelberg games: Equilibria characterization, convergence analysis, and empirical study. In Proceedings of the International Conference on Machine Learning (ICML), 2020. Liam Fowl, Ping-yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, and Tom Goldstein. Preventing unauthorized use of proprietary data: Poisoning for secure dataset release. arXiv preprint arXiv:2103.02683, 2021a. Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, and Tom Goldstein. Adversarial examples make strong poisons. arXiv preprint arXiv:2106.10807, 2021b. Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data privacy against adversarial learning. In *International Conference on Learning Representations*, 2021. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. Jonas Geiping, Liam Fowl, W Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, and Tom Goldstein. Witches' brew: Industrial scale data poisoning via gradient matching. *arXiv preprint* arXiv:2009.02276, 2020. Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022. URL https://doi.org/10.1109/TPAMI.2022.3162397. Chengyue Gong, Tongzheng Ren, Mao Ye, and Qiang Liu. Maxup: A simple way to improve generalization of neural network training. *arXiv preprint arXiv:2002.09024*, 2020. Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv:1708.06733, 2017. Junfeng Guo and Cong Liu. Practical poisoning attacks on neural networks. In *European Conference on* Computer Vision, 2020. Frank R Hampel. The influence curve and its role in robust estimation. *Journal of the American Statistical* Association, 69(346):383–393, 1974. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR '16, pp. 770–778, Washington, DC, USA, 2016. IEEE Computer Society. Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. Unlearnable examples: Making personal data unexploitable. arXiv preprint arXiv:2101.04898, 2021. W Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, and Tom Goldstein. Metapoison: Practical general-purpose clean-label data poisoning. In *Advances in Neural Information Processing Systems*, volume 33, pp. 12080–12091, 2020. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 1885–1894, 2017. Pang Wei Koh, Jacob Steinhardt, and Percy Liang. Stronger data poisoning attacks break data sanitization defenses. arXiv:1811.00741, 2018. Alex Krizhevsky. Learning multiple layers of features from tiny images. tech. report, 2009. 
URL https: //www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf. Ram Shankar Siva Kumar, Magnus Nyström, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, and Sharon Xia. Adversarial machine learning-industry perspectives. In *IEEE* Security and Privacy Workshops (SPW), pp. 69–75, 2020. Kevin A. Lai, Anup B. Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science, pp. 665–674, 2016. Wei Liu and Sanjay Chawla. A game theoretical model for adversarial learning. In *IEEE International* Conference on Data Mining Workshops, pp. 25–30, 2009. URL https://ieeexplore.ieee.org/ abstract/document/5360532. Wei Liu and Sanjay Chawla. Mining adversarial patterns via regularized loss minimization. *Machine learning*, 81(1):69–83, 2010. URL https://link.springer.com/article/10.1007/ s10994-010-5199-2. Lingjuan Lyu, Han Yu, and Qiang Yang. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133, 2020. James Martens. Deep learning via hessian-free optimization. In *ICML*, pp. 735–742, 2010. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In International Conference on Learning Representation (ICLR), 2017. Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, and Fabio Roli. Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec), pp. 27–38, 2017. Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, and Emil C Lupu. Poisoning attacks with generative adversarial nets. arXiv preprint arXiv:1906.07773, 2019. Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D Joseph, Benjamin IP Rubinstein, Udam Saini, Charles Sutton, J Doug Tygar, and Kai Xia. Exploiting machine learning to subvert your spam filter. LEET, 8:1–9, 2008. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, NeurIPS '19, pp. 8026–8037. Curran Associates, Inc., 2019. Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, and Florian Tramer. Data poisoning won't save you from facial recognition. In *Proceedings of the 10th International Conference on Learning Representations*, ICLR '22, 2022. David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017. Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. Hidden trigger backdoor attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, and David W Jacobs. Autoregressive perturbations for data poisoning. arXiv preprint arXiv:2206.03693, 2022. Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P Dickerson, and Tom Goldstein. Just how toxic is data poisoning? A unified benchmark for backdoor and data poisoning attacks. In *Proceedings of the* 38th International Conference on Machine Learning (ICML), 2021. Ali Shafahi, W. 
Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! Targeted clean-label poisoning attacks on neural networks. In *Advances in Neural Information Processing Systems (NeurIPS)*, pp. 6103–6113, 2018.

Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage. Back to the drawing board: A critical evaluation of poisoning attacks on federated learning. arXiv:2108.10241, 2021.

Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A Erdogdu, and Ross Anderson. Manipulating SGD with data ordering attacks. arXiv:2104.09667, 2021.

Jacob Steinhardt, Pang Wei Koh, and Percy Liang. Certified defenses for data poisoning attacks. In *Advances in Neural Information Processing Systems (NeurIPS)*, pp. 3520–3532, 2017.

Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, and Yuan Tian. Model-targeted poisoning attacks with provable convergence. In *Proceedings of the 38th International Conference on Machine Learning (ICML)*, pp. 10000–10010, 2021.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Brandon Tran, Jerry Li, and Aleksander Madry. Spectral signatures in backdoor attacks. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2018.

Paul Vicol, Jonathan P Lorraine, Fabian Pedregosa, David Duvenaud, and Roger B Grosse. On implicit bias in overparameterized bilevel optimization. In *International Conference on Machine Learning*, pp. 22234–22259, 2022.

Heinrich von Stackelberg. *Market structure and equilibrium*. Springer, 1934.

Jane Wakefield. Microsoft chatbot is taught to swear on twitter. BBC News, 2016.

Yuanhao Wang, Guodong Zhang, and Jimmy Ba. On solving minimax optimization locally: A follow-the-ridge approach. In *The 8th International Conference on Learning Representations (ICLR)*, 2020.

Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Indiscriminate poisoning attacks are shortcuts. arXiv preprint arXiv:2111.00898, 2021.

Guojun Zhang, Kaiwen Wu, Pascal Poupart, and Yaoliang Yu. Newton-type methods for minimax optimization. In *ICML workshop on Beyond First-Order Methods in ML Systems*, 2021.

Chen Zhu, W Ronny Huang, Hengduo Li, Gavin Taylor, Christoph Studer, and Tom Goldstein. Transferable clean-label poisoning attacks on deep neural nets. In *International Conference on Machine Learning*, 2019.

## A Indiscriminate Data Poisoning Attacks

We first show that perfect knowledge attacks and training-only attacks can be executed by solving a non-zero-sum bi-level optimization problem.

## A.1 Non-Zero-Sum Setting

For perfect knowledge and training-only attacks, recall that we aim at the following bi-level optimization problem:

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\ \ \text{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}),\tag{18}$$

where we constrain |Dp| = ε|Dtr| to limit the amount of poisoned data the attacker can inject. The attacker can solve (18) in the training-only attack setting. With a stronger assumption where Dtest is available, we substitute Dv with Dtest and arrive at the perfect knowledge attack setting.
Existing attacks generate poisoned points one by one by considering the problem:

$$\operatorname*{max}_{x_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\ \ \text{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}).\tag{19}$$

While the inner minimization problem can be solved via gradient descent, the outer maximization problem is non-trivial as the dependency of L(Dv, w∗) on xp is indirectly encoded through the parameter w of the poisoned model. As a result, we rewrite the desired derivative using the chain rule:

$$\frac{\partial\mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*})}{\partial x_{p}}=\frac{\partial\mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}}\frac{\partial\mathbf{w}_{*}}{\partial x_{p}},\tag{20}$$

where the difficulty lies in computing ∂w∗/∂xp, i.e., measuring how much the model parameter w changes with respect to the poisoning point xp. Various approaches compute ∂w∗/∂xp by solving this problem exactly, using either influence functions (Koh & Liang, 2017) (Influence attack) or KKT conditions (Biggio et al., 2011) (PoisonSVM attack³). Another solution is to approximate the problem using gradient descent (Muñoz-González et al., 2017). We discuss each of these approaches below.

³While this might naturally suggest the name "KKT attack," this name is reserved for a different attack covered in Section A.3.

Influence attack. The influence function (Hampel, 1974) tells us how the model parameters change as we modify a training point by an infinitesimal amount. Borrowing the presentation from Koh & Liang (2017), we compute the desired derivative as:

$$\frac{\partial\mathbf{w}_{*}}{\partial x_{p}}=-H_{\mathbf{w}_{*}}^{-1}\frac{\partial^{2}\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}\,\partial x_{p}},\tag{21}$$

where Hw∗ is the Hessian of the training loss at w∗:

$$H_{\mathbf{w}_{*}}:=\lambda I+\frac{1}{|\mathcal{D}_{tr}\cup\mathcal{D}_{p}|}\sum_{(x,y)\in\mathcal{D}_{tr}\cup\mathcal{D}_{p}}\frac{\partial^{2}\mathcal{L}((x,y),\mathbf{w}_{*})}{\partial(\mathbf{w}_{*})^{2}}.\tag{22}$$

Influence functions are well-defined for convex models like SVMs and are generally accurate for our settings. However, they have been shown to be inaccurate for neural networks (Basu et al., 2021).

PoisonSVM attack. Biggio et al. (2012) replace the inner problem with its stationarity (KKT) conditions. According to the KKT condition, we write the implicit function:

$$\frac{\partial\mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}}=0,\tag{23}$$

which yields the linear system:

$$\frac{\partial^{2}\mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}\,\partial x_{p}}+\frac{\partial^{2}\mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial(\mathbf{w}_{*})^{2}}\frac{\partial\mathbf{w}_{*}}{\partial x_{p}}=0,\tag{24}$$

and thus we can solve the desired derivative as:

$$\frac{\partial\mathbf{w}_{*}}{\partial x_{p}}=-\left(\frac{\partial^{2}\mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial(\mathbf{w}_{*})^{2}}\right)^{-1}\frac{\partial^{2}\mathcal{L}(\mathcal{D}_{tr}\cup\{x_{p},y_{p}\},\mathbf{w}_{*})}{\partial\mathbf{w}_{*}\,\partial x_{p}}.\tag{25}$$

Note that despite their differences in approaching the derivative, both the influence attack and the PoisonSVM attack involve the inverse Hessian.

Back-gradient attack.
Muñoz-González et al. (2017) avoid solving Equation (19) exactly by replacing the inner problem with a fixed number of iterations of an optimization method such as gradient descent. This incomplete optimization of the inner problem allows the algorithm to run faster than the two methods above and makes poisoning neural networks feasible.

## A.2 Zero-Sum Setting

Steinhardt et al. (2017) reduce Equation (18) to a zero-sum game by replacing L(Dv, w∗) with L(Dtr ∪ Dp, w∗), so the original problem can be written as:

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}_{*}),\ \ \text{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}),\tag{26}$$

which is identical to the saddle-point or zero-sum problem:

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \operatorname*{min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}).\tag{27}$$

For an SVM model, given that the loss function is convex, we can solve (27) by swapping the min and max and expanding the problem to:

$$\operatorname*{min}_{\mathbf{w}}\ \sum_{(x,y)\in\mathcal{D}_{tr}}\mathcal{L}(\{x,y\},\mathbf{w})+\operatorname*{max}_{\{x_{p},y_{p}\}}\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}).\tag{28}$$

However, we emphasize that this relaxed gradient-based attack is problematic and could be ineffective, since the loss on the clean data Dtr could still be low. In other words, the inner maximization does not address the true objective, where we want to change the model parameter to cause wrong predictions on clean data. This can be addressed by keeping the loss on the poisoned data small, but this contradicts the problem formulation. One solution is to use target parameters, as in Section A.3.

## A.3 Zero-Sum Setting With Target Parameters

Gradient-based attacks solve a difficult optimization problem in which the poisoned data Dp affects the objective through the model parameter w∗. As a result, evaluating the gradient usually involves computing a Hessian, a computationally expensive operation which cannot be done in many realistic settings. Koh et al. (2018) propose that if we have a target parameter wtar∗ which maximizes the loss on the test data L(Dtest, w∗), then the problem simplifies to:

$$\text{find}\ \mathcal{D}_{p},\ \ \text{s.t.}\ \ \mathbf{w}_{*}^{tar}=\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w}).\tag{29}$$

KKT attack.
Since the target parameter wtar∗ is pre-specified, the condition can be rewritten as:

$$\mathbf{w}_{*}^{tar}=\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \mathcal{L}(\mathcal{D}_{tr}\cup\mathcal{D}_{p},\mathbf{w})\tag{30}$$
$$\quad\;=\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ \sum_{\{x,y\}\in\mathcal{D}_{tr}}\mathcal{L}(\{x,y\},\mathbf{w})+\sum_{\{x_{p},y_{p}\}\in\mathcal{D}_{p}}\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}).\tag{31}$$

Again we can use the KKT optimality condition to solve the argmin problem for convex losses:

$$\sum_{\{x,y\}\in\mathcal{D}_{tr}}\nabla\mathcal{L}(\{x,y\},\mathbf{w}_{*}^{tar})+\sum_{\{x_{p},y_{p}\}\in\mathcal{D}_{p}}\nabla\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}_{*}^{tar})=0.\tag{32}$$

Thus we can rewrite the problem as:

$$\text{find}\ \mathcal{D}_{p},\ \ \text{s.t.}\ \sum_{\{x,y\}\in\mathcal{D}_{tr}}\nabla\mathcal{L}(\{x,y\},\mathbf{w}_{*}^{tar})+\sum_{\{x_{p},y_{p}\}\in\mathcal{D}_{p}}\nabla\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}_{*}^{tar})=0.\tag{33}$$

If this problem has a solution, we can find it by solving the equivalent norm-minimization problem:

$$\operatorname*{min}_{\mathcal{D}_{p}}\ \left\|\sum_{\{x,y\}\in\mathcal{D}_{tr}}\nabla\mathcal{L}(\{x,y\},\mathbf{w}_{*}^{tar})+\sum_{\{x_{p},y_{p}\}\in\mathcal{D}_{p}}\nabla\mathcal{L}(\{x_{p},y_{p}\},\mathbf{w}_{*}^{tar})\right\|_{2}^{2},\tag{34}$$

where the problem can only be minimized if the KKT condition is satisfied. This attack is called the KKT attack. Of course, the success of this attack relies on the target parameter wtar∗. Koh et al. (2018) propose to use the label flip attack for this purpose, where we use the trained parameter as the target. This attack achieves comparable results to other attacks while being much faster, since it can be solved efficiently using grid search for binary classification. Note that for multi-class classification, this algorithm quickly becomes infeasible.

Improved min-max. Koh et al. (2018) apply the target parameters to address the issue of the relaxed gradient-based attack, where we add the following constraint during training:

$$\mathcal{L}(\{x,y\},\mathbf{w}_{*}^{tar})\leq\tau,\tag{35}$$

where τ is a fixed threshold. Thus the attacker can search for poisoned points that maximize the loss under the current parameter w while keeping the loss on the target parameter wtar∗ low.

Model Targeted Poisoning. Suya et al. (2021) propose another algorithm for generating poisoned points using target parameters. This attack considers a different attack strategy from the others, where the attacker adopts an online learning procedure. In this case, the attacker does not have a poison fraction ε to generate a specific amount of poisoned data. Instead, the attacker aims at reaching a stopping criterion (either a desired accuracy drop or a desired distance to the target parameter). However, such an attack procedure may cause the poison fraction ε to be large, and it is hard to measure the success of the attack. Thus, we use the same setting as the others for a fair comparison.

## A.4 Training-Data-Only Attack

In the training-data-only attack setting, since the attacker does not have access to the training procedure, the bi-level optimization methods are not applicable. The remaining strategies focus on modifying the labels only (i.e., label flip attacks).

Random label flip attack. Random label flipping is a very simple attack, which constructs a set of poisoned points by randomly selecting training points and flipping their labels:

$$\mathcal{D}_{p}=\{(\mathbf{x}_{i},\bar{y}_{i}):(\mathbf{x}_{i},y_{i})\in\mathcal{D}_{tr}\}\ \ \text{s.t.}\ \ |\mathcal{D}_{p}|=\varepsilon|\mathcal{D}_{tr}|,\tag{36}$$

where for each class j = 1, . . . , c, we set

$$\bar{y}_{i}=j\ \text{with probability}\ p_{j}.\tag{37}$$

Note that the weights {pj} may depend on the true label yi. For instance, for binary classification (i.e., c = 2), we may set p_{c+1−yi} = 1, in which case ȳi simply flips the true label yi. A sketch of this attack is given below.
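The following is a minimal sketch of the random label flip attack of Equations (36)-(37), assuming an indexable PyTorch-style dataset; resampling a selected label uniformly from the remaining classes is one simple choice of the weights pj, and the function name is ours.

```python
import torch

def random_label_flip(dataset, eps, num_classes, seed=0):
    """Select an eps-fraction of training points uniformly at random and
    flip each selected label to a different class chosen uniformly."""
    g = torch.Generator().manual_seed(seed)
    n_poison = int(eps * len(dataset))
    idx = torch.randperm(len(dataset), generator=g)[:n_poison]
    poisoned = []
    for i in idx.tolist():
        x, y = dataset[i]
        shift = torch.randint(1, num_classes, (1,), generator=g).item()
        poisoned.append((x, (y + shift) % num_classes))  # guaranteed to differ from y
    return poisoned
```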
Adversarial label flip attack. Biggio et al. (2011) consider an adversarial variant of the label flip attack, where the choice of the poisoned points is not random. This attack requires access to the model and training procedure, and thus is not a training-data-only attack. Biggio et al. (2011) design an attack focused on SVMs: they choose to poison non-support vectors, as these are likely to become support vectors when an SVM is trained on the dataset including these points with flipped labels.

Label flip for multi-class classification. For binary classification, the label flip is trivial. Koh et al. (2018) provide a solution for the multi-class classification problem. For margin-based models (for example, SVMs), we can write the multi-class hinge loss (Dogan et al., 2016):

$$\mathcal{L}(\mathbf{w},(x_{i},y_{i}))=\operatorname*{max}\{0,1+\operatorname*{max}_{j\neq y_{i}}\mathbf{w}_{j}x_{i}-\mathbf{w}_{y_{i}}x_{i}\},\tag{38}$$

where the choice of j is obvious: we choose the index with the highest function score except the target class yi. Naturally, we can use this index j as the optimal label flip. As for non-convex models, the choice of the optimal label flip is not clear. In this case, one can use a heuristic by choosing the class with the biggest training loss.

## B Unlearnable Examples

## B.1 Stackelberg Game On Unlearnable Examples

We recall the general Stackelberg game in Section 3, where the follower F chooses w to best respond to the action x of the leader L, through minimizing its loss function f:

$$\forall\mathbf{x}\in\mathbb{X}\subseteq\mathbb{R}^{d},\ \ \mathbf{w}_{*}(\mathbf{x})\in\operatorname*{arg\,min}_{\mathbf{w}\in\mathbb{W}}\ f(\mathbf{x},\mathbf{w}),\tag{39}$$

and the leader L chooses x to maximize its loss function ℓ:

$$\mathbf{x}_{*}\in\operatorname*{arg\,max}_{\mathbf{x}\in\mathbb{X}}\ \ell(\mathbf{x},\mathbf{w}_{*}(\mathbf{x})),\tag{40}$$

where (x∗, w∗(x∗)) is a Stackelberg equilibrium. We then formulate unlearnable examples as a non-zero-sum Stackelberg game (Liu & Chawla, 2010; Huang et al., 2021; Yu et al., 2021; Fowl et al., 2021b;a; Sandoval-Segura et al., 2022; Fu et al., 2021):

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{v},\mathbf{w}_{*}),\ \ \text{s.t.}\ \ \mathbf{w}_{*}\in\operatorname*{arg\,min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{p},\mathbf{w}),\tag{41}$$

where Dp = {(xi + σi, yi)}, i = 1, ..., N, and σi is the bounded sample-wise perturbation (∥σi∥p ≤ εσ), which can be generalized to a class-wise perturbation (Huang et al., 2021). Similar to indiscriminate data poisoning attacks, this primal formulation is difficult, as for the outer maximization problem the dependence of L(Dv, w∗) on Dp (or σ) is *indirect*, through the parameter w of the poisoned model.
However, in practice, we can perform a zero-sum reduction similar to that in Section 3 (Liu & Chawla, 2010):

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \operatorname*{min}_{\mathbf{w}}\ \mathcal{L}(\mathcal{D}_{p},\mathbf{w}).\tag{42}$$

We recall that in indiscriminate data poisoning attacks, such a formulation is problematic as the attacker may simply perform well on the poisoned points Dp but poorly on clean points. However, we do not encounter such a problem here, as the perturbations are applied across the entire training set Dtr. Now we are ready to categorize existing algorithms on unlearnable examples:

- Error-Minimizing Noise (EMN) (Huang et al., 2021): By slightly modifying Equation (42) to

$$\operatorname*{min}_{\mathbf{w}}\ \operatorname*{min}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{p},\mathbf{w}),\tag{43}$$

Huang et al. (2021) construct the perturbation σ to fool the model into learning a strong correlation between σ and the labels.

- Robust Unlearnable Examples (Fu et al., 2021): Fu et al. (2021) further propose a min-min-max formulation following Equation (43):

$$\operatorname*{min}_{\mathbf{w}}\ \operatorname*{min}_{\sigma^{u}}\ \operatorname*{max}_{\sigma^{a}}\ \mathcal{L}(\mathcal{D}_{p}^{\prime},\mathbf{w}),\tag{44}$$

where ∥σ^u_i∥p ≤ εu is the defensive perturbation, which is forced to be imperceptible; ∥σ^a_i∥p ≤ εa is the adversarial perturbation, which controls the robustness against adversarial training; and D′p = {(xi + σ^u_i + σ^a_i, yi)}, i = 1, ..., N. Fu et al. (2021) find that this formulation generates robust unlearnable examples against adversarial training.

- Adversarial Poisoning (Error-Maximizing) (Fowl et al., 2021b): By freezing the follower entirely in Equation (42), Fowl et al. (2021b) propose to solve the maximization problem:

$$\operatorname*{max}_{\mathcal{D}_{p}}\ \mathcal{L}(\mathcal{D}_{p},\mathbf{w}),\tag{45}$$

such that it is similar to an adversarial example problem.

- Gradient Matching (Fowl et al., 2021a): Fowl et al. (2021a) solve the same maximization problem in Equation (45) and apply the gradient matching algorithm in Geiping et al. (2020) (see more details in Section 3).

Table 12: Indiscriminate data poisoning attacks: the attack accuracy/accuracy drop (%) and attack running time (hours) on CIFAR-10.

| Model | Clean Acc | Label Flip Acc/Drop (Time) | EMN Acc/Drop (Time) | TGDA (ours) Acc/Drop (Time) |
|-----------|-----------|----------------------------|----------------------|------------------------------|
| CNN | 69.44 | 68.99/0.45 (0 hrs) | 69.00/0.44 (2.2 hrs) | 65.15/4.29 (42 hrs) |
| ResNet-18 | 94.95 | 94.79/0.16 (0 hrs) | 94.76/0.19 (8.4 hrs) | 89.41/5.54 (162 hrs) |

Table 13: Unlearnable examples: model accuracy (%) under different unlearnable percentages on CIFAR-10 with the ResNet-18 model. The percentage of unlearnable examples is defined as |Dp| / (|Dtr| + |Dp|).

| Method | 0% | 20% | 40% | 60% | 80% | 100% |
|----------|-------|-------|-------|-------|-------|--------|
| EMN | 94.95 | 94.38 | 93.10 | 91.90 | 86.85 | 19.93 |
| TGDA | 94.95 | 93.22 | 92.80 | 91.85 | 85.77 | 16.65 |

## B.2 Comparison With Indiscriminate Data Poisoning Attacks

Despite their differences in problem formulation, it is possible to compare algorithms for unlearnable examples (we take EMN as an example here) with our TGDA attack.
We identify two possible scenarios where we may fairly compare TGDA and EMN empirically:

- Indiscriminate Data Poisoning Attacks: for EMN, we first craft perturbations using the original algorithm. After the attack, we take $\mathcal{D}_p=\{(\mathbf{x}_i+\delta_i,y_i)\}_{i=1}^{\varepsilon N}$, where ε is the attack budget (or poison rate). Then, we follow our experimental protocol to perform the attack.

- Unlearnable Examples: for TGDA, we follow Equation (12) and perform the zero-sum reduction of TGDA to perturb the entire training set. Note that we only consider sample-wise perturbation across all experiments. Similar to our test protocol, we retrain the perturbed model and test the performance of the attack on the test set.

We report the experimental results in Table 12 and Table 13, where we observe that:

- For indiscriminate data poisoning attacks: In Table 12, although EMN is efficient, its attack efficacy is poor. Such poor performance is expected as the objective of EMN does not reflect the true influence of an attack on clean test data.

- For unlearnable examples: In Table 13, we observe that TGDA (after simplification) can be directly comparable with EMN, and the zero-sum simplification allows it to scale up to large models (i.e., ResNet) easily (training time for 100% unlearnable examples is 1.8 hours). However, the perturbation introduced by TGDA is not explicitly bounded.

## C Other Solvers Than TGDA

We recall that in Section 3, we solve Equation (7) and approximate the calculation of $\frac{\partial\mathbf{w}_*}{\partial\mathcal{D}_p}$ using the total gradient descent ascent (TGDA) algorithm (Evtushenko, 1974; Fiez et al., 2020):

$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\mathrm{D}_{\mathbf{x}}\ell(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{46}$$
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\nabla_{\mathbf{w}}f(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{47}$$

where $\mathrm{D}_{\mathbf{x}}:=\nabla_{\mathbf{x}}\ell-\nabla_{\mathbf{w}\mathbf{x}}f\cdot\nabla_{\mathbf{w}\mathbf{w}}^{-1}f\cdot\nabla_{\mathbf{w}}\ell$ is the total derivative of ℓ with respect to x. Furthermore, it is possible to apply two other algorithms to solve Equation (7):

- Follow the ridge (FR) (Evtushenko, 1974; Wang et al., 2020):

$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\mathrm{D}_{\mathbf{x}}\ell(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{48}$$
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\nabla_{\mathbf{w}}f(\mathbf{x}_{t},\mathbf{w}_{t})+\eta_{t}\nabla_{\mathbf{w}\mathbf{w}}^{-1}f\cdot\nabla_{\mathbf{x}\mathbf{w}}f\cdot\mathrm{D}_{\mathbf{x}}\ell(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{49}$$

- Gradient descent Newton (GDN) (Evtushenko, 1974; Zhang et al., 2021):

$$\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta_{t}\mathrm{D}_{\mathbf{x}}\ell(\mathbf{x}_{t},\mathbf{w}_{t}),\tag{50}$$
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta_{t}\nabla_{\mathbf{w}\mathbf{w}}^{-1}f\cdot\nabla_{\mathbf{w}}f(\mathbf{x}_{t},\mathbf{w}_{t}).\tag{51}$$

Zhang et al. (2021) showed that both TGDA and FR are first-order approximations of GDN, while all three have similar computational complexity. In our preliminary experiments, TGDA appears to be the most effective, which is why we chose it as our main algorithm.
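To illustrate the TGDA update in Equations (46)-(47), here is a rough PyTorch sketch in which the inverse-Hessian-vector product is approximated with a few conjugate-gradient iterations; treat every name and the CG budget as assumptions for exposition, not as our exact implementation.

```python
import torch

def tgda_step(x, w, ell, f, eta, cg_iters=10):
    """One TGDA step, cf. Eqs. (46)-(47).

    x      : leader variable (poisoned points), tensor with requires_grad=True
    w      : list of follower (model) parameters
    ell, f : callables (x, w) -> scalar leader / follower losses
    """
    loss_l = ell(x, w)
    g_x = torch.autograd.grad(loss_l, x, retain_graph=True)[0]
    g_w = torch.autograd.grad(loss_l, w)
    gf_w = torch.autograd.grad(f(x, w), w, create_graph=True)

    def hess_vec(v):  # (grad^2_ww f) v via double backprop
        s = sum((g * vi).sum() for g, vi in zip(gf_w, v))
        return torch.autograd.grad(s, w, retain_graph=True)

    # v ~= (grad^2_ww f)^{-1} grad_w ell, via matrix-free conjugate gradient
    v = [torch.zeros_like(b) for b in g_w]
    r = [b.clone() for b in g_w]
    p = [ri.clone() for ri in r]
    rs = sum((ri * ri).sum() for ri in r)
    for _ in range(cg_iters):
        Ap = hess_vec(p)
        a = rs / sum((pi * Api).sum() for pi, Api in zip(p, Ap))
        v = [vi + a * pi for vi, pi in zip(v, p)]
        r = [ri - a * Api for ri, Api in zip(r, Ap)]
        rs_new = sum((ri * ri).sum() for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new

    # mixed term grad_wx f . v, obtained as d/dx of <grad_w f, v>
    s = sum((g * vi).sum() for g, vi in zip(gf_w, v))
    mixed = torch.autograd.grad(s, x)[0]

    with torch.no_grad():
        x_new = x + eta * (g_x - mixed)                     # leader ascent, Eq. (46)
        w_new = [wi - eta * gi for wi, gi in zip(w, gf_w)]  # follower descent, Eq. (47)
    return x_new, w_new
```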
Review 1: Summary: This review is about TMLR submission 495. The submission "Indiscriminate Data Poisoning Attacks on Neural Networks" reviews and categorizes poisoning attacks on model availability and discusses a new attack. The main focus of this work is on model availability attacks (alternatively described as indiscriminate attacks) where the fraction of poisoned samples is small. The attack is evaluated on MNIST and CIFAR10 with linear models and 3-layer neural networks. Strengths and Weaknesses: In broad strokes I do think that this work engages an important topic. Model availability attacks with small budgets (which I will refer to as indiscriminate attacks following the authors' notation from here on) against neural networks are a hard problem, where it is currently unclear if strong attacks are even possible, and I think it makes sense to approach this work with this framing in mind. I like the derivation of a new attack from the bilevel literature via TGDA, which I find well motivated. Yet, to me at least, the current version of this submission falls short of delivering on the overall goal of providing a great attack. To put it briefly, I have concerns clustered around two main areas. First, the submission promises a systematic analysis of poisoning deep neural networks, but mostly reviews Cinà et al., "Wild patterns reloaded: A survey of machine learning security against training data poisoning," and other surveys, with fewer new insights than I would have hoped. Then, existing analysis based on Stackelberg games in the context of data poisoning is reviewed. Here, the submission opens several interesting questions that could go beyond previous treatments of Stackelberg games in poisoning (especially in terms of sequential Stackelberg games), but does not really act on this. Second, the submission promises indiscriminate data poisoning attacks on modern neural networks, and deep neural networks, but the experimental results are not convincing enough to verify this, showing minimal performance degradations on 3-layer models at the cost of astronomical compute effort. I'll provide a bullet-point list of more details below: * The submission dismisses comparisons to attacks that perturb training data, but this seems an oversimplification to me. There is a decent number of works that investigate these "unlearnable examples", but these are also poisoning availability attacks. The only difference is that perturbations are constrained in input space, but not in the number of poisoned examples. It is straightforward to apply these attacks in scenarios where subsets of the training set are poisoned (or poisoned data is added), as analyzed in this work. This task of mixing clean and poisoned data is evaluated in several works on untrainable data, for example, in Sandoval-Segura, "Autoregressive Perturbations for Data Poisoning," in Table 5. Methods tuned for untrainable data attacks might underperform in the 3% scenario investigated in this work, but the burden of proof there is on the submission to show this. A large range of attacks from this angle of work, especially error-minimizing and error-maximizing perturbations, might still have a measurable impact when used in the scenario investigated in this work, especially when epsilon bounds are increased (or lifted entirely).
* More broadly, when setting out to categorize poison availability attacks, it seems surprising to dismiss these attacks, especially as many attacks can be cast as min-max optimization problems in line with the analysis based on zero-sum Stackelberg games, and works such as Fu et al., "Robust Unlearnable Examples: Protecting Data Privacy Against Adversarial Learning," can actually be understood as sequential Stackelberg games. I think it would be an insightful addition to include these works in the categorization. * The authors discuss the use of TGDA as an algorithm to approximate Eq.(5). It would be great to characterize previous attacks in relationship to TGDA. For example, gradient descent steps on the follower are also prominently discussed in related work. Huang et al., 2020, include a single descent step after evaluating K gradient steps on the leader [and average over many followers]. Muñoz-González et al., 2017, alternate single steps on both leader and follower (same as Koh 2017), and Geiping et al. 2021, "Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching," approximate the leader dynamics via gradient matching and freeze the pretrained follower entirely. The update on the leader in Huang and Muñoz-González is actually not so different from the leader step arising from TGDA, as truncated backpropagation can converge to the same objective (see discussion in Shaban 2019, "Truncated back-propagation for bilevel optimization"), and I think it would be interesting to conceptualize these works and others as instances of TGDA to strengthen the desired overview of poison attacks against neural networks, aside from only minor remarks in Sec. 4.2. * A central problem in previous work on data poisoning with formulations like TGDA is warm-starting, best summarized in Vicol 2022, "On Implicit Bias in Overparameterized Bilevel Optimization". It would be great if the submission could include a discussion of the problems discussed in Vicol 2022 for overparametrized models, in the context of this work. * It is unclear to me why this attack is so expensive for these tiny machine learning models, especially on CIFAR-10. It would be great to at least include a breakdown of costs, but maybe even better would be to address this problem and provide a feasible algorithm that can be evaluated on modern neural networks. For example, have the authors considered the suggestions in Koh and Liang 2017 (Sec. 3), such as fast Pearlmutter approximations for the Hessian-vector product? How many CG iterations are required? In its current state, it feels hard to call this algorithm an "attack with improved efficiency" for neural networks. * If I understand correctly, the attack of Muñoz-González is slow in Table 3 as poisons are created sequentially? While this arguably follows Muñoz-González to the letter, a more modern implementation could just compute poisoned data points in batches (or a single batch) by the same algorithm. I think this somewhat clouds the runtime comparison with the proposed approach. * The submission claims that the proposed attack is very effective, but over all experiments (which really should include standard deviations), the effective drop is arguably small. The submission attempts to attack with a budget of 3%, which really is a hard ask. I think it would be much more convincing if, aside from the 3% scenario, attacks were plotted for a range of budgets, as done in Fig.4, and compared to other attacks on the full range. * Speaking of Fig.4 (which I think shows MNIST?
it would be great to mark this and also show the CIFAR variant), it is surprising that the proposed attack is not able to break the model even as epsilon goes to 1. Also, just for clarification: at eps=1, is half the training set filled with poisoned data, or is the entire dataset poisoned? * Concerning the auto-encoder formulation to generate poisoned data: Overall I find this to be an interesting idea, but I do think it detracts somewhat from the central messaging in Sec. 3, as this component could be combined independently with any number of previous attacks. Essentially, this appears as a vague constraint on allowable images (it is, for example, not clear what the worst-case image from this generator would look like and whether it would still be in-distribution relative to real data) that I would either formalize as part of the threat model or re-investigate. For example, another interpretation of the data in Sec. 5.3 might be that this component regularizes poisoned data and helps to generate generalizable poisoning data points, with inconspicuousness more of a side effect. * Table 7, what happens for m=n=1? Or variations with n=0? * I liked the connection to seq. Stackelberg games briefly discussed in Sec. 5.3 and would have liked to see more analysis of this case. I do think that here again the critique of Radiya-Dixit et al. would be appropriate to discuss, namely that ultimately the defender always moves last in data poisoning. Minor comments: * Section 3 could be strengthened by tying the discussion to older works analyzing adversarial games from this lens, for example Dalvi et al., 2004, "Adversarial classification", and Liu and Chawla, 2009, "A Game Theoretical Model for Adversarial Learning", and 2010, "Mining adversarial patterns via regularized loss minimization". * Sec. 3.2: the submission notes that gradient descent ascent is infeasible due to zero gradients. While I agree with the sentiment, this seems false in general. It would be better to clarify the writing here, especially as the submission then turns to a decomposition of the total gradient (which also might be zero/undefined in large regions). * CG is not a Hessian-free approach; I think the intention here was matrix-free? * The defense section describes MaxUp as a good defense against adversarial examples. This is a statement that I would be careful with. In general, the only strong defense against adversarial examples is adversarial training. Requested Changes: In summary, I would like for this submission to be revised, based on the concerns discussed above. Most important for me would be to: * Improve the categorization of poisoning attacks in Sec. 3. * Include attacks based on untrainable examples. * Provide stronger attack scenarios in which the attack works well. The current attack budget is small, and so is attack success, and this makes it hard to evaluate the work done in Sec. 3 and 4. * Improve the engineering of the attack to make it feasible to attack deep neural networks, for example ResNets. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper summarizes a novel approach to data poisoning attacks on Neural Networks. The approach leverages the TGDA (Total Gradient Descent Ascent) algorithm, which aims to address the attacker's objective. The paper talks about numerous approaches, such as how the defensive party can act to combat the attack. Their technique aims to be more effective and to operate at a larger scale.
The experiment is based on the Stackelberg game, in which the attacker uses poisoned data to decrease accuracy while the defender optimizes the model using the poisoned data. Strengths and Weaknesses: Strengths: 1. All assumptions are explained well 2. The Stackelberg game is explained nicely 3. The different types of Data Poisoning attacks are explained in detail Requested Changes: N/A Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper studies indiscriminate data poisoning attacks on machine learning classifiers. The paper formulates data poisoning attacks and defenses as Stackelberg games, and based on the formulation, the paper proposes the Total Gradient Descent Ascent (TGDA) poisoning attack (a type of clean-label poisoning attack). The proposed attack allows an adversary to efficiently generate a large number of poisoning samples. The paper evaluates the TGDA attacks against MNIST and CIFAR-10 classifiers and shows that the attacks achieve high accuracy drops while reducing the computational time for generating poisons. Strengths and Weaknesses: Strengths 1. This paper studies the cost of indiscriminate poisoning attacks. 2. The paper proposes an efficient technique for generating a large number of poisons. 3. The paper is well-written and easy to follow. Weaknesses 1. The motivation of this study is a bit weak. 2. The formulation of indiscriminate poisoning as Stackelberg games is somewhat unclear. 3. The techniques proposed for the TGDA attacks are less novel. 4. The experimental results are not sufficient to back up the paper's claims. Detailed comments [Weak Motivation] The TGDA attack reduces the computational costs of generating poisoning samples. However, I do not understand why the attacker may want to reduce the crafting costs. In indiscriminate poisoning, the objective is to reduce the accuracy of the target classifiers. Thus, it makes more sense to focus on reducing the number of poisoning samples the attacker injects or, with the same amount of poisoning samples, to increase the accuracy drop. [Stackelberg Game] I am a bit unclear about the benefits that the Stackelberg formulation brings. Typically, when we formulate a game between an attacker and a defender, we expect to explore and bound the limits of the attacker (e.g., similar to the work [1]). But, in this paper, the formulation seems to be only useful for making the poisoning sample generation faster. It raises a concern that we may not need the formulation to achieve the same goal, i.e., the TGDA attack. [1] Steinhardt et al., Certified Defenses for Data Poisoning Attacks [Technical Advances in the TGDA Attack] The novelty is weak if the technical advance only reduces the time it takes to craft poisoning samples. (1) I am again not sure why it's a limitation of a prior work that they craft poisoning samples one by one, not in a batch. What would be the scenario where the attacker is under time pressure? In my opinion, the attacker can use as much time as they want as long as they can increase the accuracy drop of the target model after poisoning. (2) It's also a bit unclear whether the TGDA attack performs better than the others in terms of accuracy. The accuracy drop that this paper improves is from 1.52% (vs. label flipping) / 2.53% (vs. back-gradient) to 2.79% for LR trained on MNIST. I believe that this should be evaluated with multiple baselines; for example, even those two baselines, I think, will be weaker than the min-max attack proposed by Steinhardt et al.
[Weak Evaluations] First, several benchmarks [1] evaluate the effectiveness of poisoning attacks. Thus, I believe that the claim that "there is a lack of systematic evaluation of poisoning attacks" should be toned down. At least, this should be stated only in the context of indiscriminate poisoning. [1] Schwarzschild et al., Just How Toxic Is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks Second, the paper states that there is an unfair comparison between methods. However, I don't see any evaluation that backs up this claim. It's unclear (1) what "fairness" means and (2) whether (if some benchmark exists) it is "unfair." Third, the clean models on CIFAR10 achieve a clean accuracy of 69%. Most recent models achieve at least 90% accuracy on CIFAR10; thus, I wouldn't trust the reported impact of poisoning on those models. The effects could be amplified because the models used are outdated and weak. Fourth, in Sec 4.2, the attack process is designed to "thwart defenses," but I couldn't see the evaluation against the most recent defenses (I only see one defense that seems far more outdated than existing ones), such as Steinhardt et al.'s or some basic sanitization defenses. Some other work uses influence functions to identify poisoning samples. To make such claims, I think it's necessary to evaluate the attack against existing defenses. Fifth, it's a clean-label poisoning attack. I think it is necessary to compare the magnitude of the perturbations the TGDA attack introduces with those of the baseline attacks. [Minor comments] 1. Backdoor attacks assume a completely different threat model (they assume the attacker can manipulate the test-time data). I would remove the discussion about backdoor attacks in the introduction and related work. 2. It may not be important to show the code is on GitHub in Table 1. 3. It's unclear what "perfect autoencoder" in Figure 2 means. Requested Changes: 1. Clarification of the motivation of this attack. 2. Clarification of the scientific advances this new formulation with Stackelberg makes. 3. Clarification of the technical advances this attack makes (other than just proposing an efficient attack). 4. Evaluation against many baseline indiscriminate attacks. (e.g., the paper claims the min-max attack only works for convex models, but that seems not to be true. It does not guarantee the worst case for non-convex models, but it still works.) 5. Evaluation to back the "fairness of evaluation" claims. 6. Evaluation with CIFAR10 models that achieve more than 90% accuracy. 7. Evaluation against potential defense mechanisms proposed in the community. 8. Comparison of the poisoning examples (clean-label) in terms of perturbations. Broader Impact Concerns: No concern. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper aims to answer the question of whether indiscriminate poisoning attacks are feasible on modern deep neural networks. It formulates indiscriminate poisoning attacks and defenses as Stackelberg games. The Stackelberg games formulation gives a little more insight into the attacker/defender scenario by showing that who acts first matters. Reviewer tVSb was a solicited reviewer and they ended up giving a very terse review with little substance; therefore I put little weight on their recommendation. However, both reviewers mWKi and especially TZgv gave outstanding reviews that were exceedingly detailed, and they engaged with the authors. Both of them recommended Leaning Accept.
A common point of confusion was whether the reduction of computation time from the TGDA attack was of importance. I think this is crucial to the point of this work: the authors push for computational feasibility rather than attack efficacy. Overall, I think this paper's results are neither significant nor its methods novel. However, based on TMLR's acceptance criteria (https://www.jmlr.org/tmlr/acceptance-criteria.html), which say ... "Crucially, it should not be used as a reason to reject work that isn't considered "significant" or "impactful" because it isn't achieving a new state-of-the-art on some benchmark. Nor should it form the basis for rejecting work on a method considered not "novel enough", as novelty of the studied method is not a necessary criteria for acceptance. We explicitly avoid these terms ("significant", "impactful", "novel"), and focus instead on the notion of "interest". If the authors make it clear that there is something to be learned by some researchers in their area from their work, then the criteria of interest is considered satisfied" ... I think it's reasonable to give this paper an accept because it provided clear, evidence-backed results, along with an interesting formulation, that are of interest to a portion (albeit limited) of the data poisoning community. Authors: before camera-ready, please address the revisions from the reviewers. Further, please revise the following: Tone down the claim that other works crafted poison points sequentially. For example, both MetaPoison and Witches' Brew crafted their poison points in batches, and those poison points can coordinate with one another during the crafting. ==================================================
# TensorVAE: A Simple And Efficient Generative Model For Conditional Molecular Conformation Generation

Hongyang Yu *hyyu@anticancerbio.com* Anticancer Bioscience (ACB), Ltd., Sydney, Australia

Hongjiang Yu hjyu@anticancerbio.com Anticancer Bioscience (ACB), Ltd., Sydney, Australia

Reviewed on OpenReview: *https://openreview.net/forum?id=rQqzt4gYcc*

## Abstract

Efficient generation of 3D conformations of a molecule from its 2D graph is a key challenge in in-silico drug discovery. Deep learning (DL) based generative modelling has recently become a potent tool for tackling this challenge. However, many existing DL-based methods are either indirect, leveraging inter-atomic distances, or direct but requiring numerous sampling steps to generate conformations. In this work, we propose a simple model abbreviated TensorVAE capable of generating conformations directly from a 2D molecular graph in a single step. The main novelty of the proposed method is *focused on feature engineering*. We develop a novel encoding and feature extraction mechanism relying solely on *standard* convolution operations to generate a token-like feature vector for each atom. These feature vectors are then transformed through *standard transformer encoders* under a conditional Variational Autoencoder framework for generating conformations directly. We show through experiments on two benchmark datasets that, with intuitive feature engineering, a relatively simple and standard model can provide promising generative capability outperforming more than a dozen state-of-the-art models employing more sophisticated and specialized generative architectures. Code is available at https://github.com/yuh8/TensorVAE.

## 1 Introduction

Recent advances in deep learning have enabled significant progress in computational drug design (Chen et al., 2018). In particular, capable graph-based generative models have been proposed to generate valid 2D graph representations of novel drug-like molecules (Honda et al., 2019; Mahmood et al., 2021; Yu & Yu, 2022), and there is an increasing interest in extending these methods to generating 3D molecular structures, which are essential for structure-based drug discovery (Li et al., 2021; Simm et al., 2021; Gebauer et al., 2022). A stable 3D structure or conformation of a molecule is specified by the 3D Cartesian coordinates of all its atoms. Traditional molecular dynamics or statistical mechanics driven Monte Carlo methods are computationally expensive, making them unviable for generating 3D molecular structures at scale (Hawkins, 2017). In this regard, deep learning (DL)-based generative methods have become an attractive alternative.

DL-based generative methods may be broadly classified into 4 categories: distance-based, reconstruction-based, sequential and energy-based, and direct methods. The main goal of distance-based methods is learning a probability distribution over the inter-atomic distances. During inference, distance matrices are sampled from the learned distribution and converted to valid 3D conformations through post-processing algorithms. Two representative methods of this category are GraphDG (Simm & Hernández-Lobato, 2020) and CGCF (Xu et al., 2021a). An advantage of modeling distance is its roto-translation invariance property, an important inductive bias for molecular geometry modeling (Köhler et al., 2020). Additional virtual edges and their distances between 2nd and 3rd neighbors are often introduced to constrain the bond angles and dihedral angles crucial to generating a valid conformation.
However, Luo et al. (2021) have argued that these additional bonds are still inadequate to capture the structural relationship between distant atoms. To alleviate this issue, DGSM (Luo et al., 2021) proposed to add higher-order virtual bonds between atoms in an expanded neighborhood region. Another weakness of the distance-based methods is the error accumulation problem; random noise in the predicted distances can be exacerbated by a Euclidean Distance Geometry algorithm, leading to the generation of inaccurate conformations (Xu et al., 2022; 2021b).

To address the above weaknesses, reconstruction-based methods directly model a distribution over 3D coordinates. Their main idea is to reconstruct valid conformations from distorted coordinates. GeoDiff (Xu et al., 2022) and Uni-Mol (Zhou et al., 2023) are pioneering studies in this respect. Though they share a similar idea, they differ in the process of transforming corrupted coordinates into stable conformations. While GeoDiff adapts a reverse diffusion process (Sohl-Dickstein et al., 2015), Uni-Mol treats conformation reconstruction as an optimization problem. Despite their promising performance, both methods require the design of task-specific and complex coordinate transformation methods. This is to ensure the transformation is roto-translation or SE(3)-equivariant. To achieve this, GeoDiff proposed a specialized SE(3)-equivariant Markov transition kernel. On the other hand, Uni-Mol accomplished the same by combining a task-specific adaptation of the transformer (Vaswani et al., 2017), inspired by AlphaFold's Evoformer (Jumper et al., 2021), with another specialized equivariant prediction head (Satorras et al., 2021). Furthermore, GeoDiff requires numerous diffusion steps to attain satisfactory generative performance, which can be time-consuming.

While promising generative performance has been achieved by directly learning a distribution over the 3D geometries (coordinates or pair-wise distances) of molecules, energy-based learning methods have also recently been shown to yield competitive performance in molecular conformation generation. A unique advantage of using energy minimization as the reward mechanism is that energy-based models can better explore the low-energy regions of the conformational space of a molecule, leading to the generation of conformations with both high quality and diversity. On the other hand, methods relying directly on minimizing distance metrics to ground-truth conformations may result in generating very similar conformations with high energy strain. Two recent methods adopting the energy minimization paradigm for conformation generation are TorsionNet (Gogineni et al., 2020) and GFlowNet (Volokhova et al., 2023). Both are sequential energy-based methods that iteratively move an original molecular conformation towards a lower energy state by modifying the torsion angles of all rotatable bonds. Despite their promising potential and advantages, sequential methods are relatively inefficient compared to the direct methods. For instance, GFlowNet requires iterating through 40,000 training steps per molecule to achieve satisfactory performance. CVGAE (Mansimov et al., 2019) and DMCG (Zhu et al., 2022) have attempted to resolve the generative efficiency issue by developing models that can produce a valid conformation directly from a 2D molecular graph in a single sampling step.
Regrettably, the performance of CVGAE is significantly worse than that of its distance-based counterparts, mainly due to the use of an inferior graph neural network for information aggregation (Zhu et al., 2022). DMCG aimed to improve the performance of its predecessor by using a more sophisticated graph neural network and a loss function invariant to symmetric permutation of molecular substructures. Although DMCG achieved superior performance, acquiring such a loss function requires enumerating all permutations of a molecular graph, which can become computationally expensive for long-sequence molecules.

Regardless of their category, a common recipe for the success of these models can be distilled to developing model architectures with ever-increasing sophistication and complexity. Little attention is paid to input feature engineering. In this work, we forgo building a specialized model architecture and instead focus on intuitive input feature engineering. We propose to encode a molecular graph using a fully-connected and symmetric tensor. For preliminary information aggregation, we run a rectangular kernel filter through the tensor in a 1D convolution manner. This operation has a profound implication; with a filter width of 3, the information from two immediate neighbors as well as all their connected atoms can be aggregated onto the focal atom in a single operation. It also generates a token-like feature vector per atom, which can be directly consumed by a standard transformer encoder for further information aggregation. The generative framework follows the standard conditional variational autoencoder (CVAE) setup. We start with building two input tensors, with one encoding only the 2D molecular graph and the other also encoding 3D coordinates and distances. Both tensors go through the same feature engineering step, and the generated feature vectors are fed through two separate transformer encoders. The outputs of these two encoders are then combined in an intuitive way to form the input for another transformer encoder for generating conformations directly. The complete generative model is abbreviated as TensorVAE.

In summary, the proposed method has 4 main advantages. (1) *Direct and efficient*, generating a conformation directly from a 2D molecular graph in a single step. (2) *Simple*, not requiring task-specific design of the neural network architecture, relying only on simple convolution and off-the-shelf transformer architecture. (3) *Easy to implement*, with no custom module required, as both PyTorch and TensorFlow offer ready-to-use convolution and transformer implementations. These advantages translate directly to excellent practicality of the TensorVAE method. (4) *Achieving competitive performance through simplicity*: we demonstrate through extensive experiments on two benchmark datasets that the proposed TensorVAE, despite its simplicity, can perform competitively against **22 recent state-of-the-art methods** for conformation generation and molecular property prediction.

## 2 Method

## 2.1 Preliminaries

Problem Definition. We formulate molecular conformation generation as a conditional generation task. Given a set of molecular graphs G and their corresponding i.i.d. conformations R, the goal is to train a generative model that approximates the Boltzmann distribution, and from which a valid conformation conditioned on a molecular graph can be easily sampled in a single step.

Story Line. In the ensuing sections, we break down the formulation of the proposed method into three novel ideas.
We first introduce how a molecular graph can be encoded using a 3D tensor. Then, we demonstrate how a token-like feature vector can be generated from the input tensor by using a 1D convolution operation. The generated feature tokens resemble those used in language modelling, thereby allowing the use of standard transformer encoders for effective information aggregation. Finally, we propose a novel mechanism to combine the outputs of the transformer encoders under a conditional-VAE framework to arrive at the final generative model.

## 2.2 Input Tensor Graph

Message-passing graph neural network (GNN) is a popular feature extraction backbone for DL-based molecular conformation generation. The input for this backbone is often composed of three components: atom features, edge features and an adjacency matrix. Atom and edge features normally pass through separate embedding steps before being fed to the GNN. The adjacency matrix is then used to determine neighboring atoms for layer-wise information aggregation. Although bond features are aggregated onto atom features and vice versa, these two features are maintained separately throughout the message-passing layers (Gilmer et al., 2017; Satorras et al., 2021). Instead of having separate inputs, **our first simple idea** is to combine them into a single input. Specifically, we add an additional dimension to the adjacency matrix, making it a 3D tensor.

Each on-diagonal cell of the tensor holds the *focal* atom feature vector, onto which information from nearby connected atoms is aggregated. We consider three types of atom features comprising atom type, charge and chirality. Each feature is one-hot encoded, and they are stacked together to form a single atom feature vector. There are two variants of the atom feature vector, corresponding to two input tensors for the two encoders of the CVAE framework: an encoder conditioned only on the graph (referred to as the G tensor) and the other conditioned on both graph and coordinates (referred to as the GDR tensor). For the GDR tensor, every focal atom feature vector has three additional channels incorporating the 3D coordinate of the respective atom, and a distance channel filled with zeros.

Each off-diagonal cell holds the **stacked** *neighbour* atom and bond features. The considered bond features are bond type, bond stereo-chemistry type, ring size and normalized bond length. A virtual bond is also included in the bond type. It is worth noting that all virtual bonds share the same virtual bond type; they only differ in their normalized bond length. The normalized bond length is calculated as the edge length (1 for a direct neighbor, 2 for a 2nd neighbor, etc.) divided by the longest chain length. To construct the off-diagonal feature vector, we first sum the atom feature vectors of the connected atoms. This vector is then stacked with the one-hot encoded bond type vector, the normalized bond length, and the one-hot encoded ring size vector to become the off-diagonal feature vector. Since there are no bond features for a focal atom, the bond feature vector channels on-diagonal are also filled with 0s. Therefore, **both on- and off-diagonal feature vectors have the same dimension**. There are also two variants of the off-diagonal feature vector. For the G tensor, the coordinate and distance channels are excluded (see the sketch below).

![3_image_0.png](3_image_0.png)

Figure 1: Benzene ring tensor graph example. Note that the values in the feature vector and its dimension are for demonstration purpose only. We explain how they are determined in Sec.3.
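To make the construction concrete, here is a minimal NumPy sketch of assembling the G tensor just described; the feature dimensions and the dictionary-based bond input are illustrative assumptions (the GDR variant would additionally carry the coordinate and distance channels described next).

```python
import numpy as np

def build_g_tensor(atom_feats, bond_feats, n_max):
    """Assemble the fully-connected, symmetric G tensor (sketch).

    atom_feats : (n, d_a) stacked one-hot atom features (type, charge, chirality)
    bond_feats : dict {(i, j): (d_b,) vector} for every atom pair i < j, holding
                 bond type (incl. virtual), stereo type, ring size and
                 normalized bond length
    """
    n, d_a = atom_feats.shape
    d_b = next(iter(bond_feats.values())).shape[0]
    t = np.zeros((n_max, n_max, d_a + d_b), dtype=np.float32)
    for i in range(n):
        t[i, i, :d_a] = atom_feats[i]  # focal atom on-diagonal; bond channels stay 0
    for (i, j), b in bond_feats.items():
        off = np.concatenate([atom_feats[i] + atom_feats[j], b])  # summed atoms + bond features
        t[i, j] = t[j, i] = off  # symmetric off-diagonal
    return t
```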
For the GDR tensor, to match the size of the on-diagonal feature vector, every off-diagonal feature vector has three more coordinate channels filled with 0s, and an additional distance channel holding the Euclidean distance between two connected atoms. This off-diagonal feature vector is obtained for all atom pairs, making the proposed tensor fully-connected and symmetric. A tensor encoding of the benzene ring is illustrated in Fig.1.

Having obtained the tensor representation, a naive way of building a generative model is to apply a convolutional neural network directly on the tensor, and train it to predict a distribution over the inter-atomic distances. We utilize a standard UNet (Ronneberger et al., 2015) structure to map the input tensor to a probability distribution over a distance matrix containing all pair-wise Euclidean distances. Distance matrices are then sampled and converted to valid conformations following the same method presented in GraphDG (Simm & Hernández-Lobato, 2020). We refer to this model as the NaiveUNet. More details of the NaiveUNet can be found in Sec.A.4. This naive model achieves unsatisfactory performance, as shown in Tab.1 and Tab.9, merely outperforming GraphDG and falling far short of the state-of-the-art. There are two major issues with this approach. First, with a small kernel size (3 × 3 used in the UNet), it takes many convolution layers to achieve information aggregation between atoms that are far apart; it does not take full advantage of the high-order bonds (chemical or virtual) already made available in the input tensor. Secondly, the output size grows quadratically with the number of atoms, compared to only linear growth in the reconstruction-based or direct generation methods. The solution to the first issue is rather simple, obtained by increasing the kernel size to expand its "field of view". On the other hand, solving the second issue requires elevating the naive two-step generative model to a direct one.

## 2.3 Extended Kernel And Attention Mechanism

We observe that every row or column of the proposed tensor contains global information encompassing a focal atom and all of its connected atoms (by both chemical and virtual bonds). This motivates our **second main idea**, which is to extend the length of the kernel to the length of the tensor graph while keeping the width unaltered. This idea has a profound implication; *global* information from the immediate neighbors, all their connected atoms, and all the bond features can be aggregated onto the focal atom in a single convolution operation. In contrast, achieving the same aggregation may require many layers of propagation for the naive model and other GNN-based models. A direct consequence of this modification is that only 1D convolution is permitted. With multiple kernels being applied simultaneously, each stride of these kernels generates a feature vector for a single atom. An illustration of the 1D convolution operation is shown in Fig.2.

![4_image_0.png](4_image_0.png)

Figure 2: Extending kernel and 1D convolution.

We further observe that the generated feature vectors resemble the token-like feature vectors used in language modeling. This observation, combined with the proven success of the attention mechanism in other related work, leads to the selection of the transformer architecture as the backbone of our generative model. A significant advantage of using the transformer's self-attention mechanism is that, similar to the extended kernel, it enables global information aggregation from and for all atoms.
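A minimal TensorFlow sketch of this extended-kernel tokenization might look as follows; the orientation of the kernel, the padding scheme and all dimensions are illustrative assumptions rather than the paper's implementation.

```python
import tensorflow as tf

N, C, D = 69, 46, 256  # atoms, input channels (G tensor), token width; illustrative

inputs = tf.keras.Input(shape=(N, N, C))
# pad the strided atom axis so that every atom yields a token, then slide an
# (N x 3) kernel along it: each stride aggregates the focal atom's full
# row/column plus those of its two neighbors in the diagonal ordering
x = tf.keras.layers.ZeroPadding2D(padding=((0, 0), (1, 1)))(inputs)
tokens = tf.keras.layers.Conv2D(filters=D, kernel_size=(N, 3))(x)  # (batch, 1, N, D)
tokens = tf.keras.layers.Reshape((N, D))(tokens)  # one token-like vector per atom
tokenizer = tf.keras.Model(inputs, tokens)
```

Each of the N output vectors then plays the role of a token, ready to be consumed by a standard transformer encoder.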
Self-attention also eliminates the need to maintain separate atom and bond features at each step of feature transformation. We present further insight and a more detailed analysis of the advantage of this input feature engineering in Sec.A.1. There is also an interesting equivalence between the information aggregation achieved by a fully-connected MPNN (Gilmer et al., 2017) and running a 1 × 1 convolution operation over the proposed input tensor, as detailed in Sec.A.2.

## 2.4 Putting Everything Together

Conditional variational autoencoder framework. We aim at obtaining a generative model pθ(R|G) that approximates the Boltzmann distribution through Maximum Likelihood Estimation. Particularly, given a set of molecular graphs G and their respective ground-truth conformations R, we wish to maximize the following objective:

$$\log p_{\theta}\left(R|G\right)=\log\int p\left(z\right)p_{\theta}\left(R|z,G\right)dz\tag{1}$$

A molecular graph can have many random conformations. We assume this randomness is driven by a latent random variable z ∼ p (z), where p (z) is a known distribution, e.g., a standard normal distribution. As pθ (R|z, G) is often modeled by a complex function, e.g., a deep neural network, evaluation of the integral in Eq.1 is intractable. Instead, we resort to the same techniques proposed in the original VAE (Kingma & Welling, 2014) to establish a tractable lower bound for Eq.1:

$$\log p_{\theta}\left(R|G\right)\geq\mathbb{E}_{q_{w}\left(z|R,G\right)}\left[\log p_{\theta}\left(R|z,G\right)\right]-D_{KL}\left[q_{w}\left(z|R,G\right)||p\left(z\right)\right]\tag{2}$$

where DKL is the Kullback-Leibler divergence and qw (z|R, G) is a variational approximation of the true posterior p (z|R, G). We assume p (z) = N (0, I) and that qw (z|R, G) is a diagonal Gaussian distribution whose means and standard deviations are modeled by a transformer encoder. The input of this transformer encoder is the proposed tensor containing both the coordinate and distance information; we denote this tensor the GDR tensor. On the other hand, pθ (R|z, G) is further decomposed into two parts: a decoder pθ2 (R|z, σθ1 (G)) for predicting conformations directly and another encoder σθ1 (G) for encoding the 2D molecular graph. The input tensor for σθ1 (G) is absent of coordinate and distance information, and is therefore denoted the G tensor. Both encoders share the same standard transformer encoder structure. However, there is a minor modification to the transformer structure for the decoder. Specifically, the Query and Key matrices for the first multi-head attention layer are computed based on the output vectors of σθ1 (G), and the Value matrices come directly from the reparameterization of the output of qw (z|R, G), as z = µw + Σwϵ, where µw and Σw are the predicted mean and standard deviation respectively, and ϵ is sampled from N (0, I). We present the complete picture of how the two encoders and the decoder are arranged in a CVAE framework in Fig.3.

![5_image_0.png](5_image_0.png)

Figure 3: Variational AutoEncoder framework (left) and modified multi-head attention (right)

Intuition behind the modified attention. There are multiple ways to join together the outputs of the two encoders to form the input to the final decoder. Popular methods include stacking or addition. We tried both of these methods with unsatisfactory performance.
We notice that, due to direct stacking or addition of the sampled output of qw onto the output of σθ1, the attention weights computed in the first layer of the decoder are easily overwhelmed by the random noise of the sampled values, and become almost indiscernible¹. This leads to ineffective information aggregation, which is then further cascaded through the remaining attention layers. Intuitively, in the first attention layer, the attention weights dictating how much influence an atom exerts on another should predominantly be determined by the graph structure, and remain stable for the same molecule. Further, attention weights are computed by the Query and Key matrices. Therefore, these two matrices should stay stable for the same graph. This motivates **our third and final main idea**; that is, we compute the Query and Key matrices only from the output $h_1^L,\dots,h_N^L$ of σθ1, and attribute the variation in conformation to the Value matrices, which are directly sampled from $\{z_1,\dots,z_N\}\sim q_w$. The resultant information aggregation is much more meaningful, and each output vector corresponding to an individual atom carries distinct features, facilitating information aggregation in the ensuing attention layers.

¹Imagine a mixture model with randomly varying mixture weights.

Learning to achieve an approximately roto-translation invariant loss. Following ConfVAE (Xu et al., 2021b), we formulate the reconstruction loss as

$$-\log p_{\theta}\left(R|z,G\right)=\sum_{i=1}^{N}\sum_{j=1}^{3}\left(R_{ij}-A\left(\hat{R},R\right)_{ij}\right)^{2}\tag{3}$$

where A (·) is a function aligning the predicted conformation Rˆ onto the reference conformation R. We choose the Kabsch algorithm (Arun et al., 1987) as the alignment method, which translates and rotates the predicted conformation onto its corresponding ground truth before loss computation. This makes the reconstruction loss roto-translation invariant. Simultaneously, minimizing the KL-loss component DKL [qw (z|R, G)||p (z)] compels the output of the posterior encoder to adhere to a standard normal distribution. Despite this minimization promoting convergence, achieving exact equality qw (z|R, G) = p (z) in practice is challenging, especially in the presence of SE(3) transformations of the input R. Consequently, upon convergence, the objective function defined in Eq.2 only achieves approximate roto-translation invariance.

Direct conformation generation at inference time. To generate a single conformation, we first construct the G tensor of a molecular graph and obtain a single latent sample {z1, . . . , zN} from a standard diagonal Gaussian distribution. The G tensor is passed through the σθ1 encoder to produce $h_1^L,\dots,h_N^L$, which is then combined with the latent sample via the modified multi-head attention mechanism. The output of this modified attention layer further goes through L − 1 standard attention layers to be transformed into the final conformation. **The entire generation process depends only on a 2D molecular graph, and requires a single sampling step and a single pass of the TensorVAE model.**

## 3 Experiment

In this section, we first elaborate on the implementation details of the TensorVAE model, including determining the size of the input tensors, the network architecture, and how the entire framework is trained end-to-end. We then present conformation generation experiment results for the proposed TensorVAE on three benchmark datasets: GEOM-QM9, GEOM-Drugs and Platinum.
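Before moving to the experiments, the modified attention and the single-pass generation described in Sec.2.4 can be sketched as follows; the layer and variable names are illustrative assumptions rather than the exact architecture.

```python
import tensorflow as tf

class GraphConditionedAttention(tf.keras.layers.Layer):
    """Sketch of the decoder's first attention layer: Query and Key come
    from the graph encoder output h (so the attention pattern depends only
    on the molecular graph), while Value is the sampled latent z."""

    def __init__(self, d_model=256, num_heads=8):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=d_model // num_heads)

    def call(self, h, z):
        return self.mha(query=h, key=h, value=z)

# single-pass generation (pseudo-usage): encode the G tensor once,
# draw z ~ N(0, I), and decode coordinates in one shot
# h = graph_encoder(g_tensor)            # (batch, N, d_model)
# z = tf.random.normal(tf.shape(h))
# coords = decoder(h, z)                 # (batch, N, 3)
```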
While the GEOM datasets contain unbound conformations of molecules, the Platinum dataset contains molecular conformations bound to their respective protein targets. The generative performance of the proposed model is compared to those of 15 state-of-the-art baselines. In addition to conformation generation, in Sec.A.8, we further demonstrate the effectiveness of the information aggregation of the proposed TensorVAE architecture by briefly comparing the molecular property prediction performance of the proposed method against 7 more state-of-the-art baselines on the MoleculeNet (Wu et al., 2018) benchmark.

## 3.1 Experiment Setup

Dataset. Following existing work (Luo et al., 2021; Shi et al., 2021; Xu et al., 2021b;a; 2022; Zhou et al., 2023), we utilize the GEOM dataset for evaluating the performance of the proposed TensorVAE. GEOM contains 37 million energy- and statistical-weight-annotated molecular conformations corresponding to 450,000 molecules (Axelrod & Gómez-Bombarelli, 2022). This dataset is further divided into two constituent datasets, Drugs and QM9. The Drugs dataset covers 317,000 median-sized molecules averaging 44.4 atoms. The QM9 dataset contains 133,000 smaller molecules averaging only 18 atoms. We follow Xu et al. (2022) to randomly select 40,000 molecules from each dataset to form the training set. For each molecule, we choose the top 5 most likely² conformations. This results in 200,000 training conformations for each training set. For the validation set, we randomly sample 2,500 conformations for both the Drugs and QM9 experiments. Finally, for testing, following (Shi et al., 2021; Xu et al., 2022), we randomly select 200 molecules each with more than 50 and less than 500 annotated conformations from QM9, and another 200 with more than 50 and less than 100 annotated conformations from Drugs³.

²Ranked by their Boltzmann weight.

The GEOM dataset contains conformations of molecules that are not bound to any specific target. To assess the proposed model's ability to generate ligand-protein bound conformations, we additionally evaluate its performance using the Platinum dataset (Friedrich et al., 2017). The Platinum dataset is derived from the Protein Data Bank (Berman et al., 2000) and consists of two high-quality ligand-protein bound conformation datasets: a comprehensive dataset and a diversified subset of 4,626 and 2,912 structures, respectively. Following the setup in (Friedrich et al., 2017), we test the performance of the proposed TensorVAE on the diversified subset.

Determining input tensor size and atom ordering. We conduct a basic data analysis on the entire Drugs dataset to determine the 98.5th percentile of the number of atoms to be 69; the percentage of molecules having more than 69 atoms and with more than 50 but less than 100 conformations is only 0.19%. Accordingly, we set the size of the input tensor to 69 × 69 for the Drugs experiment. On the other hand, we use the maximum number of atoms, 30, for the QM9 experiment. The channel features for the input tensor include atom type, atom charge, atom chirality, bond type, bond stereo-chemistry and bond in-ring size. For the GDR tensor, we also include 3D coordinate channels and a distance channel. The resulting channel depth is 50 for the GDR tensor and 46 for the G tensor. The detailed information on these features and their encoding method is listed in Sec.A.5. The ordering of the atoms along the diagonal of the tensor is determined by a random Depth-First Traversal (DFT) of the molecular graph.
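As an illustration, a small RDKit sketch of such a random depth-first traversal is given below, assuming a connected molecular graph; the function name is ours.

```python
import random
from rdkit import Chem

def random_dft_order(mol, seed=None):
    """Random depth-first traversal over the atoms of an RDKit molecule;
    the visit order places atoms along the diagonal of the input tensor."""
    rng = random.Random(seed)
    start = rng.randrange(mol.GetNumAtoms())
    order, seen, stack = [], set(), [start]
    while stack:
        i = stack.pop()
        if i in seen:
            continue
        seen.add(i)
        order.append(i)
        nbrs = [a.GetIdx() for a in mol.GetAtomWithIdx(i).GetNeighbors()
                if a.GetIdx() not in seen]
        rng.shuffle(nbrs)  # randomize the branch visiting order
        stack.extend(nbrs)
    return order

# e.g. random_dft_order(Chem.MolFromSmiles("c1ccccc1O"), seed=0)
```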
Implementation details. We implement the proposed TensorVAE using TensorFlow 2.5.0. All three transformer encoders of TensorVAE follow the standard TensorFlow implementation at https://www.tensorflow.org/text/tutorials/transformer. All of them have 4 layers, 8 heads and a latent dimension of 256. Both the QM9 and Drugs experiments share the same network architecture and hyper-parameter configuration. We present the detailed training hyper-parameter configuration in Sec.A.3.

Evaluation metrics. We adopt the widely accepted coverage score (COV) and matching score (MAT) (Shi et al., 2021) to evaluate the performance of the proposed TensorVAE model. These two scores are computed as

$$\mathrm{COV}\left(\mathbb{C}_{g},\mathbb{C}_{r}\right)=\frac{1}{|\mathbb{C}_{r}|}\left|\left\{R\in\mathbb{C}_{r}\,\middle|\,\mathrm{RMSD}\left(R,\hat{R}\right)\leq\delta,\ \exists\hat{R}\in\mathbb{C}_{g}\right\}\right|\tag{4}$$

$$\mathrm{MAT}\left(\mathbb{C}_{g},\mathbb{C}_{r}\right)=\frac{1}{|\mathbb{C}_{r}|}\sum_{R\in\mathbb{C}_{r}}\min_{\hat{R}\in\mathbb{C}_{g}}\mathrm{RMSD}\left(R,\hat{R}\right)\tag{5}$$

where Cg is the set of generated conformations and Cr is the corresponding reference set. The size of Cg is twice that of Cr, as for every molecule we follow (Xu et al., 2022) to generate twice the number of conformations as that of the reference conformations. δ is a predefined threshold and is set to 0.5 Å for QM9 and 1.25 Å for Drugs, respectively (Shi et al., 2021). RMSD stands for the root-mean-square deviation between R and Rˆ, and is computed using the GetBestRMS method in the RDKit (Riniker & Landrum, 2015) package. While the COV score measures the ability of a model to generate diverse conformations covering all reference conformations, the MAT score measures how well the generated conformations match the ground truth. A good generative model should have a high COV score and a low MAT score. To evaluate the accuracy of the proposed model on the Platinum dataset, we employ two metrics: the root-mean-square deviation (RMSD) for four ensemble sizes (10, 50, 250, and 500), and the percentage of molecules with RMSD within specified thresholds (0.5, 1.0, 1.5, and 2 Å) for two ensemble sizes (50 and 250). In terms of generative speed evaluation, we calculate and compare the mean and median generation times for the four ensemble sizes across all 2,912 molecules.

Baselines. We first compare the generative performance of the proposed TensorVAE model to those of 1 classical RDKit method; 5 distance-based methods including GraphDG, CGCF, ConfVAE, ConfGF and DGSM; 2 reconstruction-based methods including GeoDiff and Uni-Mol; and 3 direct methods including CVGAE, GeoMol, and DMCG. For the Platinum dataset, we also incorporate 4 classical methods, namely Balloon DG and Balloon GA (Vainio & Johnson, 2007), MultiConf-Dock (Sauton et al., 2008) and ETKDG (Riniker & Landrum, 2015). We then compare the molecular property prediction performance of the proposed model (specifically the GDR encoder) to 7 more strong baselines comprising D-MPNN (Yang et al., 2019), AttentiveFP (Xiong et al., 2019), N-Gram (Liu et al., 2019), PretrainingGNN (Hu et al., 2020), GROVER (Rong et al., 2020), GEM (Fang et al., 2022) and, finally, again Uni-Mol.

³This limit on the number of conformations for the testing molecules is taken directly from https://github.com/DeepGraphLearning/ConfGF, which is also followed by all other compared methods in the GEOM experiment.
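Before turning to the results, here is a small sketch of how the COV and MAT scores of Eqs.4 and 5 can be computed with RDKit; it is illustrative and not the evaluation code behind the reported numbers.

```python
import numpy as np
from rdkit.Chem import rdMolAlign

def cov_mat(gen_mols, ref_mols, delta=1.25):
    """COV and MAT, cf. Eqs. (4)-(5); gen_mols and ref_mols are lists of
    RDKit molecules, each carrying a single embedded conformer."""
    best = np.array([min(rdMolAlign.GetBestRMS(g, r) for g in gen_mols)
                     for r in ref_mols])
    cov = float((best <= delta).mean())  # fraction of references covered
    mat = float(best.mean())             # mean best-match RMSD
    return cov, mat
```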
## 3.2 Results And Discussion

Unbound conformation generation. The COV and MAT scores for all compared methods on both the QM9 and Drugs datasets are presented in Tab.1. The proposed TensorVAE achieves state-of-the-art generative performance. Additionally, **we have conducted 4 ablation studies** on the input feature engineering method in Sec.3.3 to demonstrate why 1D convolution with an N × 3 kernel is crucial to achieving good generative performance. While none of the cited baselines quantify the confidence of their results, we have included standard deviations in all our results.

In Tab.1, TensorVAEREF results are obtained by running the test on the same set of test data⁴ that is adopted by all other baselines. While TensorVAE employs the same test set for evaluation, it is essential to note that the training dataset differs from that in ConfGF. Nevertheless, we have conducted a thorough examination to confirm that none of the molecules in the ConfGF test set are included in our training dataset. Additionally, we observed that the **ConfGF Drugs test set has a maximum of 71 heavy atoms per molecule**, exceeding our predetermined maximum of 69 atoms by 2. While there are only 3 molecules with more than 69 heavy atoms, we do not anticipate a significant performance change by allowing TensorVAE to handle an additional 2 atoms. Therefore, we opt not to retrain TensorVAE for this test set. To ensure a fair comparison, for these 3 molecules, we assume a worst-case scenario where the trained TensorVAE can only achieve a MAT score of 2 Å and a COV score of 0%. TensorVAEREF's mean/median MAT and COV scores for the Drugs dataset are computed under this worst-case scenario. For the QM9 dataset, as we have already used the maximum number of heavy atoms, TensorVAEREF's results are obtained as usual.

On the other hand, TensorVAE1 results have been obtained on a random test set, selected based on the same filtering condition proposed in ConfGF and having a maximum of 69 heavy atoms per molecule. This set of 200 molecules contains 23,079 and 14,396 testing conformations for QM9 and Drugs, respectively. TensorVAE1 results and standard deviations are obtained by running 10 experiments, each with a different random seed, on the same 200 testing molecules. TensorVAE2 results are obtained by running 10 experiments, each with a different random seed as well as a different set of 200 testing molecules. In this setting, both test sets contain 2,000 testing molecules in total, amounting to more than 280k and 140k testing conformations for QM9 and Drugs, respectively. The number of testing conformations is **more than 70% of that of the training conformations**. Attaining consistent performance on this much larger test set consolidates the generalization capability of the proposed TensorVAE, and **verifies its robustness under random permutation of the atom ordering**. Additionally, as noted by Xu et al. (2022), Eqs.4 and 5 are only the *recall* scores. We also present the *precision* score results in Tab.11 of Sec.A.6, where TensorVAE again achieves state-of-the-art performance with a considerable margin.

Xu et al. (2021b) discovered that the quality of conformations generated by deep generative models can be further improved by an additional empirical force field (FF) (Halgren, 1996) optimization procedure. Uni-Mol also leverages FF optimization to improve its generative performance.
Different from GeoDiff, which reconstructs a valid conformation directly from random noisy coordinates, Uni-Mol simply refines an initial conformation optimized by RDKit (using ETKDG with FF (Riniker & Landrum, 2015)).

⁴This dataset is available for download at https://github.com/DeepGraphLearning/ConfGF. Originally generated in ConfGF, it serves as the common dataset across all compared baselines. For the 200 testing molecules, the total numbers of annotated conformations are 22,408 and 14,324 for QM9 and Drugs, respectively. As all compared baselines utilize the same test set, including a standard deviation as an uncertainty measure is unnecessary.

| | QM9 | QM9 | QM9 | QM9 | Drugs | Drugs | Drugs | Drugs |
| Models | COV Mean (%) ↑ | COV Median (%) ↑ | MAT Mean (Å) ↓ | MAT Median (Å) ↓ | COV Mean (%) ↑ | COV Median (%) ↑ | MAT Mean (Å) ↓ | MAT Median (Å) ↓ |
| RDKit | 83.26 | 90.78 | 0.3447 | 0.2935 | 60.91 | 65.70 | 1.2026 | 1.1252 |
| CVGAE | 0.09 | 0.00 | 1.6713 | 1.6088 | 0.00 | 0.00 | 3.0702 | 2.9937 |
| GraphDG | 73.33 | 84.21 | 0.4245 | 0.3973 | 8.27 | 0.00 | 1.9722 | 1.9845 |
| CGCF | 78.05 | 82.48 | 0.4219 | 0.3900 | 53.96 | 57.06 | 1.2487 | 1.2247 |
| ConfVAE | 80.42 | 85.31 | 0.4066 | 0.3891 | 53.14 | 53.98 | 1.2392 | 1.2447 |
| ConfGF | 88.49 | 94.13 | 0.2673 | 0.2685 | 62.15 | 70.93 | 1.1629 | 1.1596 |
| GeoMol | 71.26 | 72.00 | 0.3731 | 0.3731 | 67.16 | 71.71 | 1.0875 | 1.0586 |
| DGSM | 91.49 | 95.92 | 0.2139 | 0.2137 | 78.73 | 94.39 | 1.0154 | 0.9980 |
| GeoDiff | 92.65 | 95.75 | 0.2016 | 0.2006 | 88.45 | 97.09 | 0.8651 | 0.8598 |
| DMCG | 94.98 | 98.47 | 0.2365 | 0.2312 | 91.27 | 100 | 0.8287 | 0.7908 |
| TensorVAEREF | 97.79 | 100 | 0.1985 | 0.1951 | 93.05 | 98.98 | 0.8087 | 0.7866 |
| TensorVAE1 | 98.11 ±0.25 | 100 ±0 | 0.1970 ±0.0016 | 0.1926 ±0.0027 | 94.91 ±0.35 | 100 ±0 | 0.7789 ±0.0027 | 0.7585 ±0.0076 |
| TensorVAE2 | 97.11 ±0.31 | 100 ±0 | 0.2041 ±0.0046 | 0.1920 ±0.007 | 93.34 ±1.17 | 99.90 ±0.31 | 0.8074 ±0.0135 | 0.7927 ±0.0186 |

*Bold font indicates best result. Results for RDKit, CVGAE, GraphDG, CGCF, ConfGF are taken from (Shi et al., 2021); all other results are taken from (Zhou et al., 2023). Values following ± are standard deviations.

Table 1: Performance comparison between TensorVAE and 10 baselines on the GEOM dataset.

For a fair comparison, we exclude deep generative models relying on FF optimization from Tab.1 and compare their performances separately in Tab.2. Again, the proposed TensorVAE with FF optimization outperforms all of them by a significant margin.
| Method | COV Mean (%) ↑ | COV Median (%) ↑ | MAT Mean (Å) ↓ | MAT Median (Å) ↓ |
| CVGAE | 83.08 | 95.21 | 0.9829 | 0.9177 |
| GraphDG | 84.68 | 93.94 | 0.9129 | 0.9090 |
| Uni-Mol | 91.91 | 100 | 0.7863 | 0.7794 |
| CGCF | 92.28 | 98.15 | 0.7740 | 0.7338 |
| ConfVAE | 91.88 | 100 | 0.7634 | 0.7312 |
| GeoDiff | 92.27 | 100 | 0.7618 | 0.7340 |
| TensorVAEREF | 93.36 | 98.18 | 0.7267 | 0.7032 |
| TensorVAE2 | **94.74** ±0.66 | **100** ±0 | **0.6985** ±0.012 | **0.6845** ±0.0196 |

*Results for CVGAE, GraphDG, CGCF, and ConfVAE are taken from (Xu et al., 2021b); GeoDiff and Uni-Mol results are from their source papers.

Table 2: Performance comparison between methods **with FF optimization** on the GEOM Drugs dataset.

In terms of simplicity, the proposed TensorVAE uses a standard transformer encoder and a simple Kabsch alignment loss. On the other hand, due to the lack of effective input feature engineering, both DMCG and Uni-Mol require the design of sophisticated network architectures and complex loss functions to achieve good generative performance. A direct consequence of these complicated designs is a large number of model parameters, as shown in Tab.3.

| Method | Number of parameters |
| DMCG | 128M |
| Uni-Mol | 47.81M |
| TensorVAE training | 11.5M |
| TensorVAE inference | 6.65M |

Table 3: Comparison of the number of parameters among TensorVAE, DMCG and Uni-Mol.

In terms of efficiency, TensorVAE is a direct generative model capable of producing a conformation from a 2D molecular graph in a single step. It takes only 62 seconds on a single Xeon 8163 CPU to decode 200 QM9 molecules, and 128 seconds for 200 Drugs molecules. In comparison, GeoDiff requires 5,000 diffusion steps per conformation, and takes around 8,500 seconds for decoding 200 QM9 molecules and 11,500 seconds for decoding 200 Drugs molecules on a single Tesla V100 GPU. **The proposed TensorVAE achieves more than a 100× speed-up**. Finally, some samples of the TensorVAE-generated conformations are shown in Fig.4.

![10_image_0.png](10_image_0.png)

Figure 4: Generated samples by the TensorVAE.

Protein-ligand bound conformation generation. The performance evaluation of the proposed model on generating ligand-bound conformations is vital to establish its potential application in high-throughput virtual screening of drug candidates. Following the setup in Friedrich et al. (2017), we further compare the performance of the proposed TensorVAE model with 5 popular baselines on the Platinum dataset. We took the TensorVAE model trained on the GEOM-Drugs conformations and applied it directly to the **Platinum diverse dataset** for conformer ensemble generation. Before presenting the evaluation results on the Platinum dataset, we would first like to emphasize the difference between the Platinum dataset, which was proposed in Friedrich et al. (2017), and the GEOM dataset, which we used to train TensorVAE. While the GEOM-Drugs dataset mainly contains vacuum conformer-rotamer ensembles that are generated using semi-empirical density functional theory, the Platinum dataset only includes protein-bound ligand conformations. The energy states of conformers bound to a protein target are different from those of the stable unbound conformers. The underlying distributions governing the generation of these two datasets also differ significantly.
Testing on the Platinum dataset without any retraining or fine-tuning creates a distribution shift from that of the GEOM training data. Inevitably, this leads to performance degradation of the proposed TensorVAE. However, evaluating the proposed TensorVAE on the Platinum dataset remains valuable for assessing its ability to generalize and accurately generate valid ligand-protein bound conformations, despite being trained solely on unbound conformations. We have repeated the experiments from Table 3 to Table 6 of Friedrich et al. (2017). The results of these experiments are presented below.

| Maximum ensemble size | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |
| | Mean | Median | Mean | Median | Mean | Median | Mean | Median |
| Balloon DG | 1.10 | 0.97 | 1.00 | 0.86 | 0.92 | 0.77 | 0.89 | 0.74 |
| Balloon GA | 1.22 | 1.10 | 0.90 | 0.80 | 0.72 | 0.63 | 0.67 | 0.58 |
| RDKit | 1.00 | 0.89 | 0.77 | 0.64 | 0.63 | 0.52 | 0.59 | 0.48 |
| ETKDG | 0.98 | 0.87 | 0.77 | 0.66 | 0.63 | 0.54 | 0.59 | 0.51 |
| MultiConf-DOCK | 0.99 | 0.89 | 0.84 | 0.72 | 0.80 | 0.69 | 0.80 | 0.69 |
| TensorVAE | 1.02 | 0.95 | 0.85 | 0.77 | 0.73 | 0.67 | 0.69 | 0.63 |

Table 4: Arithmetic Mean and Median RMSD in Å Obtained for the Platinum Diverse Dataset.

| Maximum ensemble size | 50 | 50 | 50 | 50 | 250 | 250 | 250 | 250 |
| Minimum accuracy [Å] | 0.5 | 1.0 | 1.5 | 2.0 | 0.5 | 1.0 | 1.5 | 2.0 |
| Balloon DG | 0.29 | 0.57 | 0.77 | 0.92 | 0.33 | 0.62 | 0.81 | 0.92 |
| Balloon GA | 0.30 | 0.72 | 0.90 | 0.97 | 0.43 | 0.84 | 0.96 | 0.99 |
| RDKit | 0.39 | 0.71 | 0.89 | 0.96 | 0.48 | 0.82 | 0.95 | 0.98 |
| ETKDG | 0.36 | 0.72 | 0.91 | 0.97 | 0.45 | 0.83 | 0.95 | 0.99 |
| MultiConf-DOCK | 0.32 | 0.68 | 0.87 | 0.96 | 0.34 | 0.71 | 0.89 | 0.97 |
| TensorVAE | 0.27 | 0.65 | 0.89 | 0.97 | 0.34 | 0.76 | 0.95 | 0.99 |

Table 5: Fraction of Structures of the Platinum Diverse Dataset Successfully Reproduced within a Specified RMSD Threshold.

Although the proposed TensorVAE is trained solely on unbound conformations, it demonstrates comparable performance to the 5 popular baselines in terms of accurately generating ligand-protein bound conformations (Tab.4 and Tab.5), which serves to validate its generalization capability. More specifically, it demonstrates a slight performance advantage over Balloon DG/GA and MultiConf-DOCK; however, it falls short of matching the performance achieved by RDKit and ETKDG. This result appears to contradict the findings obtained on the GEOM dataset, where the proposed TensorVAE outperformed RDKit. This contradiction can mainly be attributed to the distribution shift, or **dataset shift**, between training and testing. Additionally, for constructing the training dataset, we sampled 40,000 molecules from the GEOM-Drugs dataset and retained only the top-5 conformations with the highest Boltzmann weight for each molecule. These conditions further restrict the energy search space for conformation generation. Consequently, the Boltzmann distribution approximated (and learned) by the proposed TensorVAE might not be directly suited to the prediction of ligand-bound conformations without further fine-tuning.
| Maximum ensemble size | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |
| | Mean | Median | Mean | Median | Mean | Median | Mean | Median |
| Balloon DG | 10 | 10 | 50 | 50 | 249 | 250 | 498 | 500 |
| Balloon GA | 9 | 10 | 49 | 50 | 244 | 250 | 487 | 500 |
| RDKit | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |
| ETKDG | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |
| MultiConf-DOCK | 9 | 10 | 36 | 50 | 78 | 57 | 80 | 57 |
| TensorVAE | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |

Table 6: Arithmetic Mean and Median Ensemble Sizes Measured for the Platinum Diverse Dataset.

| Maximum ensemble size | 10 | 10 | 50 | 50 | 250 | 250 | 500 | 500 |
| | Mean | Median | Mean | Median | Mean | Median | Mean | Median |
| Balloon DG | 6 | 5 | 27 | 24 | 132 | 117 | 260 | 260 |
| Balloon GA | 4 | 3 | 19 | 17 | 105 | 98 | 256 | 234 |
| RDKit | 1 | 1 | 5 | 4 | 22 | 18 | 42 | 34 |
| ETKDG | 1 | 1 | 4 | 3 | 16 | 12 | 32 | 23 |
| MultiConf-DOCK | 5 | 1 | 8 | 2 | 15 | 3 | 15 | 3 |
| TensorVAE | <1 | <1 | <1 | <1 | 1 | 1 | 2 | 2 |

Table 7: Arithmetic Mean and Median Runtimes in Seconds Measured for the Platinum Diverse Dataset.

In terms of generative capability (Tab.6), the proposed TensorVAE is able to generate the complete 10-, 50-, 250-, and 500-conformer ensembles for all molecules, which puts it head-to-head with RDKit and ETKDG. In terms of generative speed (Tab.7), as the proposed TensorVAE only needs a single pass of the neural network to generate conformations for each ensemble size, its mean and median runtimes (measured on a single core of a Xeon 8163 CPU) are significantly faster than those of the other compared methods.

## 3.3 Ablation Studies

In this section, we further demonstrate the effectiveness and necessity of running a 1D convolution with N × 3 kernels over the proposed input tensor through 4 ablation studies on the GEOM Drugs dataset. We also show that the transformer attention mechanism is an important contributing factor for competitive generative performance.

**Why is 1D convolution necessary.** We have presented a model based on a 3 × 3 kernel, called NaiveUNet, in Sec.A.4. Here, we provide a more detailed analysis of why NaiveUNet produces unsatisfactory results. The primary reason for this poor performance is that the "field of view" of a conventional d × d (d < N) kernel only sees a partial connection pattern of a focal atom. In comparison, an N × 3 kernel's "field of view" encompasses the complete connection pattern of a focal atom. We further observe that when applying a 3 × 3 kernel filter to the top-left region of the proposed tensor, its field of view only includes a focal atom, its two neighboring atoms and how the focal atom is connected to them. There are two main disadvantages associated with this. Firstly, it only achieves a 1-hop information aggregation. Secondly, when the 3 × 3 kernel moves to an off-diagonal part of the tensor, where most connections are virtual bonds (as atoms of a molecule are often sparsely connected), information aggregation occurs mostly between atoms that are not chemically connected and is therefore less meaningful than that on the diagonal part of the tensor. For these two reasons, NaiveUNet's performance on the GEOM Drugs dataset is the worst, as shown in Tab.9.

**What happens if we remove all virtual bonds.**
Notice that if we remove all the virtual bonds in each column and still run an N × 3 kernel through the tensor, its "field of view" becomes a "2-hop atomic environment" (because the focal atom can "see" how neighboring atoms are chemically connected to all of their direct neighbors). Another observation is that after removing all virtual bonds, each column no longer corresponds to a fully-connected MPNN, and therefore no longer enables global information aggregation. The conformation generation results of this variant of TensorVAE on the Drugs dataset are shown as TensorVAE abla1 in Tab.9. We observe that, due to the less effective local information aggregation resulting from removing all virtual bonds (and the related atom features), its performance is worse than that of the complete TensorVAE version.

**What happens if an N × 1 kernel is used.** The third ablation study concerns using an N × 1 kernel, which has a smaller "field of view" than an N × 3 kernel. Its performance on the Drugs dataset is shown as TensorVAE abla2 in Tab.9. It performs slightly better than the ablation removing all virtual bonds. The reason is that, although its field of view is smaller, it still achieves a global information aggregation for the focal atom. Nevertheless, it underperforms the complete TensorVAE version due to a smaller "field of view" for information aggregation.

**What happens if a 1 × 1 kernel is used.** This setup corresponds to connecting a fully-connected MPNN (GNN) to a standard transformer backbone for conformation generation. Since using a 1 × 1 kernel leads to a model with significantly lower capacity than the models in the previous ablation studies, we experimented with the 6 hyper-parameter configurations listed below to ensure this variant has roughly the same model capacity (number of parameters).

| Model name | Embedding size | KL weight schedule | No. of transformer layers | No. of parameters |
| GNN_base | 256 | same as TensorVAE | 4 | 6.5M |
| GNN_large1 | 320 | same as TensorVAE | 4 | 11M |
| GNN_large2 | 256 | same as TensorVAE | 6 | 10M |
| GNN_large3 | 320 | 1e-5 doubling every 16 epochs | 6 | 11M |
| GNN_large4 | 320 | 1e-6 doubling every 16 epochs | 6 | 11M |
| GNN_large5 | 320 | 1e-7 doubling every 16 epochs | 6 | 11M |

Table 8: Experimental setups for the 1 × 1 kernel.

There are two ways to increase the number of parameters of the MPNN-based variant to match that of the TensorVAE employing an N × 3 kernel for a fair comparison: a larger embedding size and more transformer layers. These two setups correspond to GNN_large1 and GNN_large2. Unfortunately, training for these 3 setups (GNN_base, GNN_large1 and GNN_large2) failed to reduce the RMSD error after more than 10 epochs of training; we kept facing the KL vanishing problem. To tackle this, we experimented with 3 more configurations (GNN_large3, 4 and 5) with much lower KL weights and a shorter step period to force training to focus more on reducing the RMSD loss. Unfortunately, after more than 40 epochs (25+ hours) of training, all three efforts also failed to resolve this issue. We have included the training and validation curves for all 6 experiments in Fig.5. We observe that in all cases, while the KL error quickly decreases to close to zero, the RMSD loss stays almost constant at 4.0, indicating the model's inability to learn.
It seems that the MPNN-based models struggle to learn any meaningful information that contributes to producing valid conformations. Instead, they always resort to reducing the KL loss, which is a much easier learning task. **This fact, combined with the previous 3 ablation studies, manifests an emerging trend: the TensorVAE model's capacity to learn the difficult conformation generation task improves with the expressive power of its aggregation mechanism.** In other words, the extra flexibility introduced by the increased kernel size (from 1 × 1 to N × 3) is a main contributing factor to the promising performance of the TensorVAE model. Therefore, we conclude that the design choice to use an N × 3 kernel is sensible and fully justified.

**What happens if the transformer architecture is replaced by an MLP.** In this setup, we replace the transformer encoder block with an MLP block as follows:

$$m_{i}^{l}=W_{2}\left(\mathbf{ReLU}\left(W_{1}h_{i}^{l}+b_{1}\right)\right)+b_{2}$$

$$h_{i}^{l+1}=\mathbf{DropOut}\left(\mathbf{LayerNorm}\left(m_{i}^{l}\right)\right)$$

![14_image_1.png](14_image_1.png)

![14_image_0.png](14_image_0.png)

Figure 5: Ablation study: performance comparison between using an N × 3 kernel and a 1 × 1 kernel. The architectural differences between the GNN_large models and the GNN_base model can be found in Tab.8.

where $h_i^l \in \mathbb{R}^{256\times1}$ is the $l^{th}$-layer output for the $i^{th}$ atom, $W_1 \in \mathbb{R}^{1024\times256}$, $b_1 \in \mathbb{R}^{1024\times1}$, $W_2 \in \mathbb{R}^{256\times1024}$ and $b_2 \in \mathbb{R}^{256\times1}$. Additionally, due to the absence of the attention mechanism, the output $\{h_1^L, ..., h_N^L\}$ of the graph encoder $\sigma_{\theta_1}(G)$ and the sampled latent output $\{z_1, ..., z_N\}$ of the posterior encoder $q_w(z|R, G)$ are simply summed to form the input of the decoder $p_{\theta_2}(R|z, \sigma_{\theta_1}(G))$, which generates the conformation directly. Akin to the TensorVAE, all three components consist of 4 MLP blocks. The total number of parameters corresponding to this setup is 11M, which is similar to that of the TensorVAE.
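For reference, the MLP block defined above is straightforward to express in TensorFlow. The following is a minimal sketch of ours matching the stated sizes; the class and argument names are our own, not those of the released code:

```python
import tensorflow as tf

class MLPBlock(tf.keras.layers.Layer):
    """One ablation MLP block: h -> DropOut(LayerNorm(W2 ReLU(W1 h + b1) + b2))."""
    def __init__(self, d_model=256, d_hidden=1024, rate=0.1):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(d_hidden, activation="relu")  # W1, b1 + ReLU
        self.dense2 = tf.keras.layers.Dense(d_model)                      # W2, b2
        self.norm = tf.keras.layers.LayerNormalization()
        self.drop = tf.keras.layers.Dropout(rate)

    def call(self, h, training=False):
        m = self.dense2(self.dense1(h))                     # m_i^l
        return self.drop(self.norm(m), training=training)   # h_i^{l+1}
```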
The dropout rate was initially set to 0.1. In this configuration, we trained the model for 100 epochs and observed a severe overfitting issue, as illustrated in Fig.6, where the training and validation curves of the MLP variant and TensorVAE are compared. The MLP variant exhibited not only a significantly higher KL loss but also a much higher RMSD validation loss compared to both its own training loss and that of TensorVAE. Upon observing this behavior, we decided to experiment with higher dropout rates, including 0.3, 0.5, and 0.7, to mitigate overfitting. After training the model for 100 epochs at each dropout rate, we found that a dropout rate of 0.3 achieved the best validation KL and RMSD losses without encountering any overfitting issues. However, with an increase in the dropout rate, the RMSD training loss decreased (while the validation RMSD error remained the same), and the KL loss increased significantly. This behavior mirrors that of the 1 × 1 convolution kernel (fully-connected MPNN) variant mentioned earlier. Essentially, the model increasingly relied on posterior-encoder information to reconstruct the conformation and reduce the RMSD error, which is an easier task than reconstructing the conformation from a 2D molecular graph in the absence of any coordinate information. This trend suggests that a higher dropout rate leads to a reduction in the model's capacity to learn.

Despite achieving the best performance among all tested dropout rates, after 120 epochs of training the MLP variant with 0.3 dropout still performed significantly worse than the TensorVAE with a transformer backbone. Although its RMSD validation loss matched that of the TensorVAE, its KL validation loss was more than double that of the TensorVAE, indicating a significantly lower learning capacity. Observing this behavior led us to conclude that it was not necessary to complete the training in order to demonstrate that the transformer architecture is needed for TensorVAE to obtain competitive performance. This experiment suggests that the attention mechanism is also a crucial contributing factor for effective information aggregation among atoms.

![15_image_0.png](15_image_0.png)

![15_image_1.png](15_image_1.png)

Figure 6: Ablation study: performance comparison among MLP backbones with different dropout rates. While the base TensorVAE with a transformer backbone and the base MLP ablation study have a default dropout rate of 0.1, the other ablation studies have dropout rates ranging from dr_0.3 to dr_0.7.

Table 9: Performance comparison among models with different input feature engineering setups on the GEOM Drugs dataset.

| Method | COV Mean (%) | COV Median (%) | MAT Mean (Å) | MAT Median (Å) |
| NaiveUNet | 52.14 ± 1.48 | 51.69 ± 1.17 | 1.4322 ± 0.0247 | 1.3861 ± 0.0173 |
| TensorVAE abla1 | 90.72 ± 1.54 | 99.53 ± 0.64 | 0.8748 ± 0.0161 | 0.8619 ± 0.0214 |
| TensorVAE abla2 | 91.04 ± 1.21 | 99.74 ± 0.42 | 0.8706 ± 0.0131 | 0.8561 ± 0.0204 |
| TensorVAE | 93.34 ± 0.35 | 99.90 ± 0.31 | 0.8074 ± 0.0135 | 0.7927 ± 0.0186 |

## 4 Reproducibility Statement

We did not introduce any task-specific neural network architecture. The results presented in this study can be straightforwardly reproduced using publicly available datasets and the ready-to-use implementations of convolution and Transformer layers in either PyTorch or TensorFlow. We have also provided detailed hyper-parameter setups to ensure reproducibility. We have included the complete code for reproducing the conformation generation results at https://anonymous.4open.science/r/TensorVAE-4576/ and the code for reproducing the property prediction results at https://anonymous.4open.science/r/TensorVAE-0DE7.

## 5 Conclusion

We develop TensorVAE, a simple yet powerful model able to generate a 3D conformation directly from a 2D molecular graph. Unlike much existing work focusing on designing complex neural network structures, we focus on developing novel input feature engineering techniques. We decompose these techniques into three main ideas, and explain how one idea naturally connects to the next. We first propose a tensor representation of a molecular graph. Then, we demonstrate that sliding a rectangular kernel through this tensor in a 1D-convolution manner can achieve global information aggregation. Finally, we present the complete CVAE-based framework featuring 2 transformer-based encoders and a transformer-based decoder, and propose a novel modification to the first multi-head attention layer of the decoder to enable sensible integration of the outputs of the other two encoders. The proposed TensorVAE demonstrates state-of-the-art generative performance compared to recently proposed deep-learning-based generative models on the GEOM dataset, utilizing DFT-generated unbound conformations.
When directly applied to the Platinum dataset, which contains ligand-protein bound conformations, the proposed method offers faster generation speed while maintaining competitive accuracy compared to 5 popular classical methods.

Limitations and Future Directions. Despite achieving promising performance in conformation generation, the current work has three major limitations that pave the way for future improvements. Firstly, the proposed tensor graph representation lacks invariance under random permutations of atom ordering. While experimentally robust, achieving true invariance to such transformations would enhance the stability of conformation generation. Secondly, the training process only achieves approximate SE(3) invariance due to the presence of $R$ in the input of the posterior encoder. Aiming for exact invariance has the potential to further improve the TensorVAE framework's performance. To address this, we plan to replace $q_w(z|R, G)$ with the recently proposed equivariant posterior encoder component from the Geometric AutoEncoder framework in GeoLDM (Xu et al., 2023). Thirdly, the current TensorVAE can only predict the local structure of molecules, specifically heavy-atom coordinates with respect to an arbitrary origin. It lacks the capability to predict the SE(3) transformation necessary to obtain the binding pose with respect to a protein target, a crucial aspect of structure-based drug discovery tasks. Our next objective involves expanding TensorVAE's architecture to take both unbound ligand and protein conformers as input and produce a valid bound ligand conformation as output. To achieve this, we are integrating the SE(3)-equivariant convolution operation proposed in Tensor Field Networks (Thomas et al., 2018) into the TensorVAE framework. This expansion aims to enhance the model's ability to generate conformer ensembles suitable for docking to specific protein targets.

## References

K Somani Arun, Thomas S Huang, and Steven D Blostein. Least-squares fitting of two 3-d point sets. *IEEE Transactions on pattern analysis and machine intelligence*, (5):698–700, 1987.

Simon Axelrod and Rafael Gómez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. *Scientific Data*, 9(1):185, 2022. doi: 10.1038/s41597-022-01288-4. URL https://doi.org/10.1038/s41597-022-01288-4.

Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. *Nucleic acids research*, 28(1):235–242, 2000.

Hongming Chen, Ola Engkvist, Yinhai Wang, Marcus Olivecrona, and Thomas Blaschke. The rise of deep learning in drug discovery. *Drug discovery today*, 23(6):1241–1250, 2018.

Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. *Nature Machine Intelligence*, 4(2):127–134, 2022.

Nils-Ole Friedrich, Agnes Meyder, Christina de Bruyn Kops, Kai Sommer, Florian Flachsenberg, Matthias Rarey, and Johannes Kirchmair. High-quality dataset of protein-bound ligand conformations and its application to benchmarking conformer ensemble generators. *Journal of chemical information and modeling*, 57(3):529–539, 2017.

Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. Cyclical annealing schedule: A simple approach to mitigating kl vanishing.
In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 240–250, 2019.

Niklas WA Gebauer, Michael Gastegger, Stefaan SP Hessmann, Klaus-Robert Müller, and Kristof T Schütt. Inverse design of 3d molecular structures with conditional generative neural networks. *Nature communications*, 13(1):1–11, 2022.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International conference on machine learning*, pp. 1263–1272. PMLR, 2017.

Tarun Gogineni, Ziping Xu, Exequiel Punzalan, Runxuan Jiang, Joshua Kammeraad, Ambuj Tewari, and Paul Zimmerman. Torsionnet: A reinforcement learning approach to sequential conformer search. *Advances in Neural Information Processing Systems*, 33:20142–20153, 2020.

Thomas A Halgren. Merck molecular force field. v. extension of mmff94 using experimental data, additional computational data, and empirical rules. *Journal of Computational Chemistry*, 17(5-6):616–641, 1996.

Paul CD Hawkins. Conformation generation: the state of the art. *Journal of chemical information and modeling*, 57(8):1747–1756, 2017.

Shion Honda, Hirotaka Akita, Katsuhiko Ishiguro, Toshiki Nakanishi, and Kenta Oono. Graph residual flow for molecular graph generation. *arXiv preprint arXiv:1909.13521*, 2019.

Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=HJlWWJSFDH.

John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In *International Conference on Learning Representations*, 2014. URL https://openreview.net/forum?id=33X9fd2-9FyZd.

Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: exact likelihood generative learning for symmetric densities. In *International conference on machine learning*, pp. 5361–5370. PMLR, 2020.

Yibo Li, Jianfeng Pei, and Luhua Lai. Structure-based de novo drug design using 3d deep generative models. *Chemical science*, 12(41):13664–13675, 2021.

Shengchao Liu, Mehmet F Demirel, and Yingyu Liang. N-gram graph: Simple unsupervised representation for graphs, with applications to molecules. *Advances in neural information processing systems*, 32, 2019.

Shitong Luo, Chence Shi, Minkai Xu, and Jian Tang. Predicting molecular conformation via dynamic graph score matching. *Advances in Neural Information Processing Systems*, 34:19784–19795, 2021.

Omar Mahmood, Elman Mansimov, Richard Bonneau, and Kyunghyun Cho. Masked graph modeling for molecule generation. *Nature communications*, 12(1):1–12, 2021.

Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. Molecular geometry prediction using a deep generative graph neural network. *Scientific reports*, 9(1):1–13, 2019.

Sereina Riniker and Gregory A Landrum. Better informed distance geometry: using what we know to improve conformation generation.
*Journal of chemical information and modeling*, 55(12):2562–2574, 2015.

Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. *Advances in Neural Information Processing Systems*, 33:12559–12571, 2020.

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In *International Conference on Medical image computing and computer-assisted intervention*, pp. 234–241. Springer, 2015.

Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In *International conference on machine learning*, pp. 9323–9332. PMLR, 2021.

Nicolas Sauton, David Lagorce, Bruno O Villoutreix, and Maria A Miteva. Ms-dock: accurate multiple conformation generator and rigid docking protocol for multi-step virtual ligand screening. *BMC bioinformatics*, 9(1):1–12, 2008.

Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In *International Conference on Machine Learning*, pp. 9558–9568. PMLR, 2021.

Gregor N. C. Simm, Robert Pinsler, Gábor Csányi, and José Miguel Hernández-Lobato. Symmetry-aware actor-critic for 3d molecular design. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=jEYKjPE1xYN.

Gregor NC Simm and José Miguel Hernández-Lobato. A generative model for molecular distance geometry. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 8949–8958, 2020.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015.

Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3d point clouds. *arXiv preprint arXiv:1802.08219*, 2018.

Mikko J Vainio and Mark S Johnson. Generating conformer ensembles using a multiobjective genetic algorithm. *Journal of chemical information and modeling*, 47(6):2462–2474, 2007.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017.

Alexandra Volokhova, Michał Koziarski, Alex Hernández-García, Cheng-Hao Liu, Santiago Miret, Pablo Lemos, Luca Thiede, Zichao Yan, Alán Aspuru-Guzik, and Yoshua Bengio. Towards equilibrium molecular conformation generation with gflownets. *arXiv preprint arXiv:2310.14782*, 2023.

Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. *Chemical science*, 9(2):513–530, 2018.

Zhaoping Xiong, Dingyan Wang, Xiaohong Liu, Feisheng Zhong, Xiaozhe Wan, Xutong Li, Zhaojun Li, Xiaomin Luo, Kaixian Chen, Hualiang Jiang, et al. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. *Journal of medicinal chemistry*, 63(16):8749–8760, 2019.

Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, and Jian Tang. Learning neural generative dynamics for molecular conformation generation. In *International Conference on Learning Representations*, 2021a. URL https://openreview.net/forum?id=pAbm1qfheGk.
Minkai Xu, Wujie Wang, Shitong Luo, Chence Shi, Yoshua Bengio, Rafael Gomez-Bombarelli, and Jian Tang. An end-to-end framework for molecular conformation generation via bilevel programming. In *International Conference on Machine Learning*, pp. 11537–11547. PMLR, 2021b.

Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geometric diffusion model for molecular conformation generation. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=PzcvxEMzvQC.

Minkai Xu, Alexander S Powers, Ron O Dror, Stefano Ermon, and Jure Leskovec. Geometric latent diffusion models for 3d molecule generation. In *International Conference on Machine Learning*, pp. 38592–38610. PMLR, 2023.

Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, et al. Analyzing learned molecular representations for property prediction. *Journal of chemical information and modeling*, 59(8):3370–3388, 2019.

Hongyang K Yu and Hongjiang C Yu. Powerful molecule generation with simple convnet. *Bioinformatics*, 38(13):3438–3443, 2022.

Gengmo Zhou, Zhifeng Gao, Qiankun Ding, Hang Zheng, Hongteng Xu, Zhewei Wei, Linfeng Zhang, and Guolin Ke. Uni-mol: A universal 3d molecular representation learning framework. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=6K2RM6wVqKu.

Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, and Tie-Yan Liu. Direct molecular conformation generation. *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=lCPOHiztuw.

## A Appendix

## A.1 Global Information Aggregation Beyond The Nth-Hop

A geometric interpretation of a GNN's message passing layer is that it aggregates information between atoms (and their bonds) that are 1 hop away. With $L$ layers, information from atoms that are $L$ hops apart can be aggregated. Here, we define global information aggregation as $N^{th}$-hop aggregation, with $N$ being the total number of atoms, where each atom is able to aggregate information from all other atoms. It is worth noting that for a fully-connected GNN, 1-hop message passing can already achieve this global information aggregation. A transformer's self-attention can be considered a type of fully-connected GNN. However, a vanilla transformer can only aggregate features from each token/atom; if edge features are not included, they need to be incorporated through additional inputs (e.g. the pair interaction matrix of Uni-Mol). The primary reason motivating the creation of the fully-connected tensor representation is that we want each generated token to contain both atom and bond features, such that we can eliminate the pair interaction or bond matrix. To achieve this, we fill each column of the fully-connected tensor with:

- focal atom features;
- chemical and virtual bond features indicating how the focal atom is connected to all other atoms;
- atom features of all connected atoms, since for each off-diagonal cell, we sum the atom features of both the neighbour atom and the focal atom.

Running an N × 1 kernel filter on the proposed tensor also achieves a global information aggregation. By increasing the kernel width to 3, the aggregation window includes global information from two immediate neighbours. This type of information aggregation extends far beyond just the $N^{th}$ hop.
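To make the tokenization concrete, the following TensorFlow sketch (ours, with placeholder sizes; not the released implementation) runs an N × 3 kernel over a batch of the proposed tensors and yields one feature token per focal atom:

```python
import tensorflow as tf

N, C, F = 69, 36, 256  # placeholder sizes: atoms, channel depth, number of kernels

# A batch of hypothetical N x N x C molecular graph tensors.
x = tf.random.normal((8, N, N, C))

# An N x 3 kernel spans all N rows (the full connection pattern of each focal
# atom) and 3 adjacent columns; padding only the column axis keeps N tokens.
x = tf.pad(x, [[0, 0], [0, 0], [1, 1], [0, 0]])
conv = tf.keras.layers.Conv2D(filters=F, kernel_size=(N, 3), padding="valid")
tokens = tf.squeeze(conv(x), axis=1)  # shape (8, N, F): one token per focal atom
```

Note that the kernel only slides along the column axis, so despite using a 2D convolution layer, the operation is effectively a 1D convolution.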
More interestingly, when multiple kernels are applied simultaneously to the same N × 3 × C region, each kernel is free to choose whichever group of atom/bond features to attend to, depending on its kernel weights. This resembles the multi-head attention mechanism of a transformer, where each kernel (head) contributes a portion of the generated feature token. We believe the effective global information aggregation driven by these two simple yet intuitive ideas (tensor representation + 1D convolution) is the main reason why the proposed TensorVAE achieves superior performance with far fewer parameters.

## A.2 Connection To A Fully-Connected Message Passing GNN

We show that the information aggregation achieved by running a 1 × 1 convolution over the proposed tensor representation is similar to that achieved by a fully-connected MPNN (Gilmer et al., 2017). When running a 1 × 1 convolutional operation over the proposed tensor, a kernel matrix $W \in \mathbb{R}^{1\times1\times F\times C}$ is shared among all $N \times N$ cells of the tensor, where $F$ is the number of kernels and $C$ is the channel depth. Since each cell, regardless of whether it is on-diagonal or off-diagonal, is stacked with an atom feature vector and a bond feature vector, the weight matrix can be decomposed into two parts, $W_v \in \mathbb{R}^{F\times C_v}$ and $W_e \in \mathbb{R}^{F\times C_e}$, where $C_v$ is the atom feature vector size and $C_e$ is the bond feature vector size. The bond feature vector for on-diagonal cells is filled with zeros, since there is no self-connection for focal atoms. Subsequently, for each column $n$ of the tensor, a 1 × 1 convolution operation followed by a sum-aggregation over the rows can be decomposed into 3 steps (a toy numerical check follows this list):

- **Off-diagonal cell aggregation**. For each off-diagonal cell, we first sum the atom feature vectors of the focal atom and its cell-specific neighbour atom, as described in Fig.1. Due to the convolution operation, the dot product of the summed vector and $W_v$ is then computed. Simultaneously, the dot product between the bond feature vector and $W_e$ is also computed. The resulting two feature vectors are added together. This aggregation process can be expressed as

$$W_{v}h_{n}^{0}+W_{v}h_{m}^{0}+W_{e}e_{n,m}\in\mathbb{R}^{F\times1}$$

where $h_n^0 \in \mathbb{R}^{C_v\times1}$, $h_m^0 \in \mathbb{R}^{C_v\times1}$ and $e_{n,m} \in \mathbb{R}^{C_e\times1}$ are the focal atom feature of the $n^{th}$ column, the neighbour atom feature of the $m^{th}$ cell in column $n$, and the bond feature between the $n^{th}$ focal atom and its $m^{th}$ neighbour atom, respectively. If we concatenate $h_n^0$, $h_m^0$, and $e_{n,m}$ into a single vector $(h_n^0, h_m^0, e_{n,m})$⁵, this operation can also be represented as

$$M\left(h_{n}^{0},h_{m}^{0},e_{n,m}\right)=\underbrace{(W_{v},W_{v},W_{e})}_{\in\mathbb{R}^{F\times(C+C_{v})}}\cdot\underbrace{(h_{n}^{0},h_{m}^{0},e_{n,m})}_{\in\mathbb{R}^{(C+C_{v})\times1}}$$

⁵We define $(\bullet)$ as a concatenation operator, as in MPNN (Gilmer et al., 2017).

- **Row-wise aggregation**. The above aggregation operation generates a feature vector (of size $\mathbb{R}^{F\times1}$) for each off-diagonal row of column $n$. The sum-aggregation over these rows generates a feature vector which contains the aggregated information from all neighbour atoms:

$$m_{n}^{1}=\sum_{m\in N\backslash n}M\left(h_{n}^{0},h_{m}^{0},e_{n,m}\right)$$

- **Complete aggregation**. Finally, we aggregate this feature vector $m_n^1$ onto the focal atom feature to complete the sum-aggregation operation over all the rows of column $n$:

$$h_{n}^{1}=U\left(h_{n}^{0},m_{n}^{1}\right)=\mathbf{ReLU}\left(W_{v}h_{n}^{0}+m_{n}^{1}\right)$$
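As a toy numerical check of this decomposition, the three steps can be instantiated directly in NumPy; all sizes and variable names below are arbitrary illustrations of ours:

```python
import numpy as np

# Toy instantiation of the three aggregation steps for one focal column n.
N, Cv, Ce, F = 5, 7, 4, 16  # toy sizes: atoms, atom/bond feature dims, kernels
rng = np.random.default_rng(0)
Wv, We = rng.normal(size=(F, Cv)), rng.normal(size=(F, Ce))
h = rng.normal(size=(N, Cv))      # atom features h_m^0
e = rng.normal(size=(N, N, Ce))   # bond features e_{n,m}

n = 0  # focal column index
# Step 1: off-diagonal cell aggregation M(h_n^0, h_m^0, e_{n,m})
msgs = [Wv @ h[n] + Wv @ h[m] + We @ e[n, m] for m in range(N) if m != n]
# Step 2: row-wise aggregation m_n^1
m1 = np.sum(msgs, axis=0)
# Step 3: complete aggregation with a ReLU update, yielding h_n^1 of size F
h1 = np.maximum(Wv @ h[n] + m1, 0.0)
```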
Noticeably, the $M$ and $U$ operators correspond exactly to the message passing phase of a single forward pass of a fully-connected MPNN, as described by Eqs.1 and 2 of the MPNN paper (Gilmer et al., 2017). Similarly, the feature aggregation operation of an N × 1 kernel can be expressed as

$$h_{n}^{1}=\mathbf{ReLU}\left(W_{v}^{n}h_{n}^{0}+\sum_{m\in N\backslash n}\left(W_{v}^{m}h_{n}^{0}+W_{v}^{m}h_{m}^{0}+W_{e}^{m}e_{n,m}\right)\right)$$

This type of aggregation is more flexible and has more expressive power, as different node and edge features are weighted differently. This flexibility is further increased with an N × 3 kernel, whose corresponding aggregation can be expressed as

$$h_{j}^{1}=\mathbf{ReLU}\left(\sum_{c\in\{i,j,k\}}W_{v}^{cc}h_{c}^{0}+\sum_{c\in\{i,j,k\}}\sum_{m\in N\backslash c}\left(W_{v}^{cm}h_{c}^{0}+W_{v}^{cm}h_{m}^{0}+W_{e}^{cm}e_{c,m}\right)\right)$$

where $i, j, k$ are the indices of three adjacent columns. In this respect, the information aggregation achieved by a fully-connected MPNN is a special case (the simplest form) of the more general framework embodied by a single convolution operation over the proposed tensor representation.

## A.3 Training Hyperparameters

Conformation generation. Training is conducted on a single Tesla V100 GPU. We follow a learning rate schedule similar to that given by Eq.3 of the original Transformer paper (Vaswani et al., 2017), but with $d_{model} = 9612$. This results in a maximum learning rate of 1.6e-4. To tackle the notorious issue of KL vanishing (Fu et al., 2019), we set a minimum KL weight of 1e-4 and double it every 62.5e3 iterations until a maximum weight of 0.0256 is reached. We select the Adam optimizer's (Kingma & Ba, 2015) default hyper-parameters for training. We present some interesting observations on the training/validation curves corresponding to this setup in Sec.A.7. For both experiments, the TensorVAE is trained for 1e6 iterations with a batch size of 128. The implementation details of NaiveUNet are explained in Sec.A.4.
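In code, both schedules take only a few lines. The sketch below is ours and assumes the Transformer paper's default of 4,000 warmup steps (an assumption, as the warmup length is not stated above), which reproduces the quoted maximum learning rate of 1.6e-4 for $d_{model} = 9612$:

```python
import tensorflow as tf

class TransformerLR(tf.keras.optimizers.schedules.LearningRateSchedule):
    """lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5) (Vaswani et al., Eq.3)."""
    def __init__(self, d_model=9612, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        return tf.math.rsqrt(self.d_model) * tf.minimum(
            tf.math.rsqrt(step), step * self.warmup_steps ** -1.5)

def kl_weight(iteration, w_min=1e-4, w_max=0.0256, period=62_500):
    # Double the minimum KL weight every `period` iterations, capped at w_max.
    return min(w_min * 2.0 ** (iteration // period), w_max)

optimizer = tf.keras.optimizers.Adam(TransformerLR())  # Adam defaults otherwise
```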
Molecular property prediction. For the molecular property prediction task, we use the same GDR transformer encoder structure (4 attention layers and approximately 5M parameters) and add an additional mean pooling layer, followed by a linear layer for property prediction. We follow the same train-val-test data split as Uni-Mol and GEM and standardize the output property data. We train the GDR model for 300 epochs with a batch size of 128. The learning rate schedule is the same as that of the TensorVAE.

## A.4 NaiveUNet Model Architecture

![23_image_0.png](23_image_0.png)

Figure 7: Naive UNet model. N = 69.

We train the above NaiveUNet on the Drugs dataset for 30 epochs with a constant learning rate of 1e-4 and a batch size of 32. We follow the same method presented in GraphDG (Simm & Hernández-Lobato, 2020) to convert the predicted distance matrix to a conformation.

## A.5 Atom And Bond Features

We list the atom features and bond features, together with the encoding methods used to construct the proposed tensor, in Tab.10.

| Feature name | Feature value | Encoding method |
| Atom type | H, C, N, O, F, S, Cl, Br, P, I, Na, B, Si, Se, K, Bi | one-hot |
| Atom charge | -2, -1, 0, 1, 2, 3 | one-hot |
| Atom chirality | Unspecified, Tetrahedral_CW, Tetrahedral_CCW, Other | one-hot |
| Bond type | Single, Double, Triple, Aromatic, Virtual | one-hot |
| Normalized bond length | - | real-value |
| Bond stereochem | StereoNone, StereoAny, StereoZ, StereoE, StereoCIS, StereoTrans | one-hot |
| Bond in-ring size | 3 - 10 | one-hot |
| Coordinate (3 channels) | - | real-value |
| Pair-wise atom distance | - | real-value |

Table 10: Atom and bond features used to construct the input tensor.

## A.6 COV And MAT Precision Results

The precision COV and MAT scores are defined as:

$$\operatorname{COV}_{P}\left(\mathbb{C}_{r},\mathbb{C}_{g}\right)=\frac{1}{|\mathbb{C}_{g}|}\left|\left\{\hat{R}\in\mathbb{C}_{g}\;\middle|\;\exists R\in\mathbb{C}_{r},\;\operatorname{RMSD}\left(R,\hat{R}\right)\leq\delta\right\}\right|$$

$$\operatorname{MAT}_{P}\left(\mathbb{C}_{r},\mathbb{C}_{g}\right)=\frac{1}{|\mathbb{C}_{g}|}\sum_{\hat{R}\in\mathbb{C}_{g}}\min_{R\in\mathbb{C}_{r}}\operatorname{RMSD}\left(R,\hat{R}\right)$$
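Concretely, these scores can be computed with RDKit's GetBestRMS. The helper below is our own sketch, assuming each conformation is stored in a separate RDKit Mol; with `target_set` as the references and `probe_set` as the generated conformations it yields the recall scores of Eqs.4 and 5, and swapping the two sets yields the precision scores above:

```python
import numpy as np
from rdkit.Chem import AllChem

def cov_mat(probe_set, target_set, delta=1.25):
    """Best-RMSD COV/MAT of each Mol in target_set against probe_set."""
    best = np.array([min(AllChem.GetBestRMS(p, t) for p in probe_set)
                     for t in target_set])
    return (best <= delta).mean(), best.mean()  # COV fraction, MAT in Angstroms
```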
| Method | COVP Mean (%) ↑ | COVP Median (%) ↑ | MATP Mean (Å) ↓ | MATP Median (Å) ↓ |
| GraphDG | 2.08 | 0.00 | 2.4340 | 2.4100 |
| CGCF | 21.68 | 13.72 | 1.8571 | 1.8066 |
| ConfVAE | 22.96 | 14.05 | 1.8287 | 1.8159 |
| ConfGF | 23.42 | 15.52 | 1.7219 | 1.6863 |
| GeoDiff | 61.47 | 64.55 | 1.1712 | 1.1232 |
| TensorVAE2 | 72.12 ±1.5 | 79.02 ±1.9 | 1.0655 ±0.0145 | 1.0355 ±0.0166 |

*Results for GraphDG, CGCF, ConfVAE, ConfGF and GeoDiff are taken from (Xu et al., 2022).

Table 11: Precision performance comparison on the GEOM Drugs dataset.

## A.7 Training And Validation Curve

We present the training and validation plots for the KL and reconstruction losses on the Drugs dataset in Fig.8a and Fig.8b, respectively. Both plots are based on an initial KL weight of 1e-4 doubled every 62.5k iterations (40 epochs). While the KL validation loss reached 18.29 after 1e6 iterations (640 epochs), the reconstruction/RMSD loss reached 0.64 Å at the end of training.

During the first 5 epochs of training, model learning focused on reducing the KL loss, as it is orders of magnitude larger than the RMSD loss. We were expecting this trend to continue for a while until both losses converged roughly to the same range. However, much to our surprise, the model seemed to find a way to drastically reduce the RMSD loss much earlier by leveraging the information from the GDR encoder: it learned to "cheat" by directly reversing the coordinate information embedded in the output of the GDR encoder back to the original conformation. The RMSD loss dropped to as low as 0.08 Å. On the other hand, the KL loss climbed to almost 800, signaling significant divergence from the standard normal distribution. At this stage, the output of the GDR encoder contains informative features of the original 3D coordinates. As the KL loss weight increases, it becomes more difficult for the model to cheat, since training forces the output of the GDR encoder to conform to a standard uninformative Gaussian distribution. The KL loss then started to drop while the RMSD loss remained steady, indicating increasing reliance on the output of the G encoder for reconstructing the conformation. As the output of the GDR encoder becomes less informative, the model learned to rely almost entirely on the aggregated features from the G encoder to decode the conformation.

We attempted to initiate the training with a much larger initial KL weight (1e-2) to prevent "cheating" from the beginning. However, this quickly led to the notorious KL vanishing issue (Fu et al., 2019). We conjecture that "cheating" is actually beneficial in that it reduces the learning difficulty, particularly for the decoder: its weights are tuned on an easy training task, simply reversing what the GDR encoder has done. In other words, the tuned weights of the decoder already hold crucial information on how to decode highly informative input features. As the KL weight increases, model learning shifts to making the output of the G encoder more informative. This may also be an easier learning task, since the RMSD loss is already very low (back-propagation of this loss contributes little to the weight update); instead, model learning primarily focuses on optimizing the KL loss. This two-stage iterative loss optimization is much easier than optimizing both losses simultaneously throughout the training process.

![25_image_0.png](25_image_0.png)

Figure 8: Training and validation plots for the Drugs dataset. Orange line: Train; Blue line: Validation.

## A.8 Molecular Property Prediction

Following Uni-Mol (Zhou et al., 2023) and GEM (Fang et al., 2022), we report property prediction results on the MoleculeNet (Wu et al., 2018) QM9 regression task. The goal of this task is to estimate the *homo*, *lumo*, and *homo-lumo gap* properties of molecules in the QM9 dataset based on their molecular structure. We adapt the proposed GDR encoder to this regression task by changing its prediction head. We defer the details of this adaptation and the training procedure to Sec.A.3. We report the mean absolute error (MAE) over all test samples. The result of the adapted model is compared to those of 7 other models, including:

- D-MPNN (Yang et al., 2019), AttentiveFP (Xiong et al., 2019) and GEM, which are GNN-based models without pretraining;
- N-Gram (Liu et al., 2019), PretrainingGNN (Hu et al., 2020) and GROVER (Rong et al., 2020), which use pretraining;
- a variant of Uni-Mol without pretraining.

The MAE for all compared methods is summarized in Tab.12. The proposed GDR encoder produces SOTA performance with less than 5M parameters. This experiment demonstrates that the proposed feature engineering method is very effective at information aggregation.

Table 12: Property prediction result comparison based on the MoleculeNet QM9 benchmark.

| Method | MAE |
| D-MPNN | 0.00814 (0.00001) |
| AttentiveFP | 0.00812 (0.00001) |
| N-Gram | 0.00964 (0.00031) |
| PretrainGNN | 0.00922 (0.00004) |
| GROVER base | 0.00984 (0.00055) |
| GROVER large | 0.00986 (0.00025) |
| GEM | 0.00746 (0.00001) |
| Uni-Mol w/o pretraining | 0.00653 (0.00040) |
| GDR encoder (ours) | 0.00553 (0.00012) |

*All results are taken from (Zhou et al., 2023).
Review 1: Summary: The paper proposes a new method for generating 3D molecular conformers from 2D graphs. The primary contribution of the method is encoding the input features of the 2D molecular graph as a tensor graph which includes atomic encodings (atom type, atom charge, atom chirality, atom coordinate) as well as bond encodings (bond type, bond stereochemistry, normalized bond length, distances) for all of the atoms in the system. In the tensor graph, the diagonal elements include the primary atom information, while the off-diagonal elements represent information about the neighbors of the atom. Based on this tensor graph, the paper then proposes using a 1D convolution as the primary neural network operation to process the tensor graph, followed by a self-attention transformer. Taken together, the above make up the primary components of TensorVAE, which is the paper's proposed method. Next, the paper describes the formulation of the problem, including the VAE training loss and how it applies to the conformer generation case study, and outlines its primary experiments. The experiments focus on three main datasets: GEOM-QM9, GEOM-Drugs, and Platinum. The paper outlines the experimental conditions and metrics in detail before presenting the results on GEOM, where TensorVAE generally outperforms the other methods the authors compare against. The results include methods that apply force fields for conformer generation in Table 2 and methods with and without force fields in Table 1. In Table 3, the paper claims that TensorVAE outperforms other methods while having fewer parameters, and outlines details about the compute efficiency of TensorVAE compared to other methods. Next, in Table 4, Table 5 and Table 6, the paper provides an analysis of the zero-shot performance of TensorVAE on the Platinum dataset, which includes molecular conformations in a protein environment. The results generally show reasonable performance on the Platinum dataset. Lastly, the authors provide a brief study on molecular property prediction, along with a reproducibility statement and a conclusion.

Strengths and Weaknesses: Strengths:

* The paper introduces a new representation for 2D molecular graphs that appears useful for conformer generation and additional tasks, such as property prediction.
* The paper provides an extensive set of experiments across relevant datasets (GEOM, Platinum) and compares with relevant methods. TensorVAE generally shows performance improvements and distinct advantages, such as better compute efficiency.
* The paper provides significant detail on the method and experimental settings.

Weaknesses:

* I think the claims of single-step generation of conformers and simple architectures are somewhat overstated. Parts of the text and the ablations in the appendix seem to indicate that neural network architecture does matter for performance, so it would be good to be clearer about the paper's claims.
* It is unclear why the paper includes zero-shot experiments on Platinum only and why molecular property prediction experiments are also included in the main text. Given the focus on conformer generation, I think the results of the ablation would be more interesting for understanding the important components of the proposed method.
* Related work is discussed primarily in the introduction and is missing a discussion of sequence-based methods for conformer generation, such as reinforcement learning [1] and GFlowNets [2].

[1] Gogineni, Tarun, et al. "Torsionnet: A reinforcement learning approach to sequential conformer search."
Advances in Neural Information Processing Systems 33 (2020): 20142-20153.

[2] Volokhova, Alexandra, et al. "Towards equilibrium molecular conformation generation with GFlowNets." arXiv preprint arXiv:2310.14782 (2023).

Requested Changes: Important requests that would sway my opinion:

* Please adjust the claims related to single-step generation and simple architectures. Are there other methods that perform conformer generation in a single step? How do they compare to TensorVAE?
* You mention that your method does not require sophisticated neural network design, so it would be good to see performance with even simpler parts, such as MLPs.
* Add related work on sequence-based methods for conformer generation.
* Could you clarify why you only ran zero-shot experiments for Platinum? It seems like fine-tuning or training experiments for TensorVAE on Platinum would be appropriate.

Additional requests (nice to have):

* I would prefer seeing the results of the ablation in the main text and would be OK with moving the property prediction experiments to the appendix in exchange. Feel free to explain why you included property prediction in the main text.
* Could you clarify whether the tensor graph representation is symmetric? If so, is there a way to take advantage of the symmetry? This could be interesting to discuss in the conclusion and future work section.
* It would be nice to expand on potential future work directions.

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: In this submission, the authors propose a simple but effective method for conditional molecular conformation generation, which generates 3D molecular conformations from the corresponding 2D graphs. Technically, this work 1) proposes a tensor-based graph representation, encoding the atom and bond information jointly in a tensor format, and 2) applies 1D convolution to tokenize the tensor, feeding the subsequent Transformer-based encoder. Therefore, the main technical contributions of this work are the feature engineering and the encoder architecture, in my opinion. Experimental results demonstrate the usefulness of the proposed method on several datasets.

Strengths and Weaknesses: Strengths:

1. The paper is well-written and easy to follow. The key ideas and contributions are claimed clearly.
2. Many baselines are considered, and the experimental results seem promising.
3. The method itself is simple and easy to reproduce.

Weaknesses:

1. If my understanding is correct, the 1D convolution applied to the graph tensor is not permutation-invariant, i.e., permuting the order of the rows and columns of the graph tensor leads to different token sequences. Accordingly, although the method is simple, I think it will lead to different 3D conformation coordinates when changing the order of atoms in the 2D graphs, which harms the rationality of the proposed method.
2. The proposed method seems to focus on generating conformations conditioned on 2D graphs. It might be more suitable to change the title to "conditional generation".

Requested Changes:

1. Each molecule may have various 3D conformations; is the proposed method able to generate various 3D conformations from a single 2D graph?
2. What is the full name of "GDR"?
3. It seems that the outputs of the decoder are 3D atom coordinates. Did the authors use other tools like OpenBabel to generate conformations from the coordinates?
4. When implementing the 1D convolution, a kernel with size N x 3 is applied.
If my understanding is correct, N is the number of atoms in a molecule, which varies for different molecules. How is such a size-varying kernel learned?

Broader Impact Concerns: N/A

==================================================

Review 3: Summary: This paper proposes TensorVAE: a VAE based on a specific tensor representation which can be used to both generate 3D conformations of molecules and predict their properties. The contributions are: - The TensorVAE architecture and data format. However, I am uncertain to what degree this can be considered a contribution: TensorVAE seems to fall into the category of "specialized GNN" or "special case of transformer", but it is described in a somewhat unclear way that makes me uncertain about this equivalence. - A transformer-VAE hybrid with a modified attention (although again this is not very clearly described so its contribution is unclear) - Experiments showing strong results for conformer generation

Strengths and Weaknesses: An overarching weakness of this paper is its clarity. Although I have certainly read less clear papers, many things were not described very clearly in this paper: particularly the model (e.g. how $p_\theta(R|z,G)$ decomposes into several sequential decoders and how the attention masking works). I think an explicit algorithmic statement of the model would help for clarity. This issue makes it difficult to properly assess the correctness and impact of the claims made in the paper. The following review content is therefore based on my best guess about what I think the authors are doing. The next thing I would comment on is the model and featurization (the efficacy of which is a key claimed contribution from the authors). The proposed featurization seems to just be an adjacency matrix with features, differing from a standard MPNN only in that the atom and edge features are explicitly mixed rather than being maintained separately. This seemed like a fairly small implementation detail to me. Next, the authors propose a kind of masked convolution, which seemed very similar to the attention mechanism of a transformer. In the majority of the paper the authors seem to refer to their model simply as a transformer, making me believe that it is just a transformer variant. The authors go on to use this transformer as the encoder/decoder of a VAE, which is not very novel (e.g. https://arxiv.org/abs/2207.13529). Finally, I will focus on the experiments. The reported results of the experiments are quite strong, and if correct would be one of the strongest parts of the paper. However, it is not totally clear to me that these results are correct/comparable with previous work. The authors state that their method is "simple" and mainly relies on a good choice of features. It is therefore surprising that it outperforms a lot of other methods by such a large margin. My intuition is that the authors have made some kind of mistake here (I would bet ~3:1 odds). Possible sources of this are: - Different random selection of train/test molecules (it was not clear if the exact same set of molecules from Xu et al 2022 was used or whether it was just generated with a similar procedure) - Data discarding: the authors seem to have discarded a fraction of the data which had too many atoms. Even though the fraction is small, presumably the other methods did not discard it. Maybe it has an outsized influence on the overall error? - Subtle data leakage: for example, the authors use a depth-first traversal to order the nodes in the tensor.
Maybe there is actually some spurious correlation here which the model picks up on? - Some other small inconsistency (possible because the authors are copying results from previous papers rather than re-running things), e.g. a different normalization or a different value of $\delta$ in equation 4. I should clarify that I have not *found* a bug, just that I am highly suspicious of the results. Summing up: Strengths of the paper: a generally sensible method, good experimental results (if correct). Weaknesses of the paper: lack of clarity, little novelty.

Requested Changes: I am not really sure what to write for this: TMLR asks for claims which are well-supported and of interest to the audience. I can't really decide what the key claims of the paper are. I would guess either: - "TensorVAE is a great method for predicting conformers". This claim seems fairly well-supported, but given that TensorVAE seems to just be a transformer variant, I'm not sure how interesting such a claim is: have the authors essentially just tuned a transformer? - "The featurization of TensorVAE somehow really improves upon previous work": this would be an interesting claim, but it is not well-supported, since when comparing to other methods the authors change not just the featurization but also the model, so it is not clear what is actually causing the performance gains. Ultimately my request to the authors is to clarify what the key claims are for the paper and explain how they are supported. For this reason I have written "claims not supported" in my review below. Also, it is not clear that the method is actually equivariant as the authors suggest: when describing equation 3, the authors say that the encoder "does not involve any coordinate", implying it is equivariant. However, it appears that it *does* actually involve coordinates, as these are included in $R$, no?

Broader Impact Concerns: None

==================================================

Metareview: Recommendation: Accept with minor revision Comment: This work proposes a method for conditional molecular conformation generation. After extensive engagement from the authors, all reviewers agree that the work meets TMLR's acceptance criteria. I thus recommend accepting the paper, and ask that the authors please revise figures 5 and 6 as requested by reviewer 1HGp.

==================================================
# Hyperspherical Prototype Node Clustering

Jitao Lu *dianlujitao@gmail.com*
School of Computer Science, School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University

Danyang Wu *danyangwu.cs@gmail.com*
State Key Laboratory for Manufacturing Systems Engineering, School of Electronic and Information Engineering, Xi'an Jiaotong University

Feiping Nie∗ *feipingnie@gmail.com*
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University

Rong Wang *wangrong07@tsinghua.org.cn*
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University

Xuelong Li *li@nwpu.edu.cn*
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University

∗Corresponding author.

Reviewed on OpenReview: *https://openreview.net/forum?id=z3ZlnaOM0d*

## Abstract

The general workflow of deep node clustering is to encode the nodes into node embeddings via graph neural networks and uncover clustering decisions from them, so clustering performance is heavily affected by the embeddings. However, existing works only consider preserving the semantics of the graph but ignore the inter-cluster separability of the nodes, so there is no guarantee that the embeddings can present a clear clustering structure. To remedy this deficiency, we propose Hyperspherical Prototype Node Clustering (HPNC), an end-to-end clustering paradigm that explicitly enhances the inter-cluster separability of learned node embeddings. Concretely, we constrain the embedding space to a unit-hypersphere, enabling us to scatter the cluster prototypes over the space with maximized pairwise distances. Then, we employ a graph autoencoder to map nodes onto the same hypersphere manifold. Consequently, cluster affinities can be directly retrieved from cosine similarities between node embeddings and prototypes. A clustering-oriented loss is imposed to sharpen the affinity distribution so that the learned node embeddings are encouraged to have small intra-cluster distances and large inter-cluster distances. Based on the proposed HPNC paradigm, we devise two schemes (HPNC-IM and HPNC-DEC) with distinct clustering backbones. Empirical results on popular benchmark datasets demonstrate the superiority of our method compared to other state-of-the-art clustering methods, and visualization results illustrate improved separability of the learned embeddings.

## 1 Introduction

Graph-structured data are ubiquitous in numerous domains, including social networks, recommendation systems, physics, and biology. Community structure is one of the most important characteristics of a graph and is useful in many downstream tasks. Finding communities from a graph can be formulated as a node clustering problem. Node clustering aims to categorize the nodes of a graph into a set of disjoint groups such that similar nodes are assigned to a common group. Spectral clustering (von Luxburg, 2007) has been one of the most successful and well-known node clustering methods in the past two decades. Given a weighted undirected graph, spectral clustering first uses Laplacian eigenmaps (Belkin & Niyogi, 2003) to embed the nodes into a low-dimensional feature space, then employs K-means to uncover clustering results from them. Being the first step of spectral clustering, the task of encoding nodes into feature vectors is called *node embedding* (Cui et al., 2019), and Laplacian eigenmap is essentially the earliest node embedding method.
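As a concrete reference for this classical pipeline, the sketch below runs a Laplacian-eigenmap embedding followed by K-means with SciPy and scikit-learn; the function name and the choice of the normalized Laplacian are illustrative assumptions on our part, not prescriptions from the cited papers.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def spectral_node_clustering(A, c):
    """Classical spectral clustering: embed nodes with Laplacian eigenmaps,
    then uncover clusters from the embeddings with K-means."""
    L = laplacian(np.asarray(A, dtype=float), normed=True)  # normalized graph Laplacian
    # Eigenvectors of the c smallest eigenvalues serve as the node embeddings.
    _, vecs = eigh(L, subset_by_index=[0, c - 1])
    return KMeans(n_clusters=c, n_init=50).fit_predict(vecs)
```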
Hence, the output of general-purpose node embedding methods can also be passed to K-means or other clustering methods for node clustering. However, real-world graph data are usually too complex for "shallow" embedding methods to capture the underlying relationships, leading to suboptimal performance on downstream tasks including clustering. Recently, a surge in research on graph neural networks (GNNs) has led to state-of-the-art results on numerous downstream tasks. Moreover, the encoder can be easily chained with downstream tasks to customize the embeddings for them, which is non-trivial for previous methods. Methods employing a graph neural network (GNN) to do node embedding and perform clustering after that are called *deep clustering*. For example, graph autoencoder (GAE, Kipf & Welling 2016) uses stacked graph convolution network (GCN, Kipf & Welling 2017) layers to encode the input graph and node features into a low-dimensional space, then reconstructs the graph by inner products of the latent embeddings and minimizes the reconstruction error to self-supervise the training. After converging, the embeddings are passed to K-means to obtain clustering results. Later autoencoder-based approaches (Wang et al., 2017; Pan et al., 2020; Cui et al., 2020; Wang et al., 2020; Hou et al., 2022; Zhang et al., 2022) improved the GNN towards better latent embeddings. However, they mostly focus on having the learned embeddings better reconstruct the input graph and/or node features. That is, optimizing their objectives does not explicitly lead to better clusterability, so the resulting embeddings may not be suitable for clustering. To be specific, these methods overlooked the distribution of cluster prototypes¹: different cluster prototypes should be far from each other for their affiliated node embeddings to be distinguishable. A concurrent work (Liu et al., 2023b) noticed this issue and proposed to push away different cluster centroids by penalizing the smallest centroid distance, but this is implemented as a regularization term, so the effect is not guaranteed in practice. On the other hand, most existing works either adopt the same sequential pipeline as GAE or follow (Yang et al., 2017) to iteratively optimize a reconstruction loss and a K-means loss. This is not limited to autoencoder-based approaches but also applies to contrastive-learning-based approaches (Park et al., 2022; Devvrit et al., 2022; Liu et al., 2022a) that have become popular in recent years. Exceptions are DAEGC (Wang et al., 2019) and SDCN (Bo et al., 2020). They pretrain the GAE as usual in the first step, then use K-means to obtain cluster centroids from the latent embeddings and follow (Xie et al., 2016) to form soft labels based on sample–centroid distances. After that, the embeddings are refined by self-training, and final clustering results are obtained from the soft labels after refinement. SDCN additionally normalized the latent embeddings by the softmax function so soft labels can also be obtained there, but K-means is still unavoidable to obtain the centroids. The most critical drawback of K-means in a deep learning pipeline is that K-means' objective is not differentiable. As a result, it cannot be optimized by gradient descent, thus prohibiting chained training with other downstream tasks. Moreover, the network parameters and clustering centroids have to be alternately updated as in the original K-means, which leads to error propagation, prevents parallelism, and is prone to getting stuck in bad local optima.
In order for the learned embeddings to present a clear clustering structure, their intra-cluster distances should be as small as possible and inter-cluster distances should be as large as possible. Motivated by this criterion, this work mainly focuses on enhancing the separability of different clusters in unsupervised settings. In fact, this is similar to the objective of distance metric learning (Wang et al., 2014; Zhao et al., 2021b), which treats samples from the same class as positive pairs and samples from different classes as negative pairs, and trains the neural network to generate similar embeddings for positive pairs and dissimilar embeddings for negative pairs. For instance, the Proxy-NCA loss (Movshovitz-Attias et al., 2017) assigns a proxy to each class. During training, a data point is encouraged to be close to its corresponding class proxy and far apart from other class proxies. As long as the proxies are separable, the learned embeddings are also separable. However, these class proxies are selected according to ground truth labels of the training set, so it's impossible to follow its selection strategy in unsupervised tasks, and metric learning methods cannot be trivially adopted for clustering. Nonetheless, our method is heavily inspired by Proxy-NCA.

¹Also known as *cluster centroids* in some clustering methods.

![2_image_0.png](2_image_0.png)

Figure 1: The workflow of HPNC. The input nodes are passed to several GNN layers to generate node embeddings. Then, the embeddings are passed to a GNN layer to reconstruct node features and an inner product decoder to reconstruct edges. L2-normalization is applied to make them fit on a unit-hypersphere manifold. Given uniformly distributed cluster prototypes on the same manifold, the soft labels Q can be obtained by the cosine similarities between node embeddings (gray circles) and cluster prototypes (colored triangles). A clustering loss is imposed on Q to sharpen it so that confident predictions are emphasized and unconfident predictions are suppressed. The clustering loss is jointly minimized with the reconstruction errors towards discriminative node embeddings. Clustering results are directly obtained from Q.

In this paper, we simultaneously address the issues mentioned above and propose Hyperspherical Prototype Node Clustering (HPNC), a *fully differentiable* and *end-to-end* clustering paradigm which explicitly considers the separability of different clusters. Unlike previous works that infer clustering centroids from preliminary node embeddings, we use predefined centroids and maximize their margins to encourage the separability of different clusters. To this end, we constrain the embedding space to a unit-hypersphere where the maximum distance between points is bounded. Then, we scatter the cluster prototypes on the hypersphere manifold so they have equal and maximized pairwise distances. After that, we use a graph autoencoder to map the input nodes onto the same manifold. Consequently, the soft labels can be obtained from the cosine similarities between node embeddings and prototypes. A clustering-oriented objective is jointly optimized with the GAE to push nodes to their corresponding prototypes. Remarkably, our HPNC is a general pipeline and can be used with any unsupervised representation learning method and any deep clustering objective for joint representation learning and clustering. We demonstrate such flexibility by devising two clustering schemes based on HPNC. The general pipeline of HPNC is illustrated in Figure 1.
Our main contributions are:

- We propose a novel node clustering paradigm called HPNC for joint representation learning and node clustering. HPNC explicitly maximizes the inter-cluster distances of the learned embeddings to provide sufficient discriminative power for clustering. Moreover, HPNC is fully differentiable so it can be easily integrated into a deep learning pipeline and jointly optimized with other modules.
- We develop a learning-based prototype rotation strategy to assist the GNN in finding matching prototypes. It allows the predefined prototypes to rotate on the hypersphere manifold so HPNC becomes less sensitive to their initial coordinates. Ablation study results verify that it is crucial to the clustering performance.
- Based on the proposed HPNC paradigm, we devise two schemes (i.e., HPNC-IM and HPNC-DEC) to demonstrate the flexibility of HPNC to work with different clustering backbones. HPNC can also combine with other backbones to enjoy their advantages.
- Empirical results on widely adopted node clustering benchmark datasets consistently verify the effectiveness of our proposal compared to other state-of-the-art methods. To explore the characteristics of the learned embeddings, we further apply K-means clustering and t-SNE visualization on them, and the results significantly reveal that our method indeed leads to more discriminative and separable latent embeddings.

## 2 Related Work

In this section, we briefly review the techniques closely related to our work, including unsupervised graph representation learning and deep clustering methods.

## 2.1 Unsupervised Graph Representation Learning

Unsupervised graph representation learning (UGRL) aims at projecting the nodes, edges, subgraphs, or the entire graph to low-dimensional vectors in R^m, without access to ground truth labels, so that they can be effectively handled by downstream machine learning algorithms. Traditional UGRL methods utilize matrix factorization (Belkin & Niyogi, 2003; Cao et al., 2015) and random walks (Grover & Leskovec, 2016) to capture graph characteristics. Specifically, DeepWalk (Perozzi et al., 2014) defines the similarity between two nodes as the probability that they co-occur on a random walk, so it first estimates these probabilities from fixed-length and unbiased random walks sampled over the graph, then employs the skip-gram model to learn node embeddings that reconstruct these probabilities. node2vec (Grover & Leskovec, 2016) improved the walk strategy by using flexible, biased random walks that can trade off between local and global views of the graph. NetRA (Yu et al., 2018b) proposed to learn the node embeddings with adversarially regularized autoencoders. Unlike skip-gram based models, it employs a long short-term memory (LSTM) network to map one-hot vertex encodings into latent representations and trains them to reconstruct the input. In addition to static graphs, NetWalk (Yu et al., 2018a) proposed a novel reservoir sampling strategy to incrementally maintain the learned embeddings to effectively handle dynamic graphs. However, these methods utilize the graph topology only. For attributed graphs where each node is also associated with a feature vector, they are unable to exploit such extra information, thus leading to suboptimal performance on downstream tasks. With the advance of deep learning on graphs, recent UGRL methods employ a GNN to perform the projection and design various unsupervised objectives to train the GNN.
Autoencoder-based methods are the most popular thanks to their simplicity and effectiveness. GAE and VGAE (Kipf & Welling, 2016) take the dot products of latent embeddings as predicted edges and let them reconstruct the input graph. ARGA and ARVGA (Pan et al., 2020) introduced an adversarial regularizer on top of GAE and VGAE to encourage the latent embeddings to also follow a prior distribution. MGAE (Wang et al., 2017) passes corrupted node features to the GNN and lets the latent embeddings reconstruct uncorrupted node features. GALA (Park et al., 2019) designed a decoder to reconstruct node features from latent embeddings. Another fruitful avenue of UGRL is based on contrastive learning (Zhu et al., 2021), which learns meaningful embeddings by pulling predefined positive pairs and pushing negative pairs. DGI (Velickovic et al., 2019) generates graph embeddings of the input graph and its corrupted counterpart, then treats them as positive and negative samples of the uncorrupted node embeddings, respectively. InfoGraph (Sun et al., 2020) extends DGI to handle a batch of graphs: node embeddings from the same graph are positive pairs, and vice versa. MVGRL (Hassani & Ahmadi, 2020) employs graph diffusion (Klicpera et al., 2019) to generate an augmented view of the graph, then regards node embeddings from one view and the graph embedding of the other view as positive pairs. GRACE (Zhu et al., 2020) and SimGRACE (Xia et al., 2022) generate two augmented views of the graph, then follow the instance discrimination task (Wu et al., 2018) to treat a pair of embeddings of the same node in different views as positive, and all other pairs as negative. GGD (Zheng et al., 2022) simplified DGI and improved it to scale to large data. Compared to traditional UGRL methods, GNN-based methods usually incorporate the node attributes when learning latent embeddings, hence performing better on downstream tasks.

## 2.2 Deep Clustering

Real-world data are usually high-dimensional and non-globular, so shallow clustering methods such as K-means cannot effectively handle them. Methods employing a deep neural network (DNN) to do non-linear mapping and perform clustering after that are called *deep clustering*, whose assumption is that mapped data distributions are more suitable for clustering than raw features. However, directly combining a DNN with K-means is not a viable approach, because the global optimum of K-means is achieved when the DNN maps all data samples to one single point, which is called *cluster collapse*. To avoid this issue, most works employ an autoencoder to do the feature mapping, because the autoencoder's objective requires the latent embeddings to reconstruct the original inputs, which ensures that critical semantic information is preserved. GraphEncoder (Tian et al., 2014) is the earliest deep clustering method; it sequentially uses an autoencoder to perform feature mapping and K-means to cluster the latent embeddings. Beyond that, DCN (Yang et al., 2017) summed up the K-means and autoencoder objectives and iteratively optimizes their parameters, indirectly avoiding cluster collapse by minimizing the autoencoder's reconstruction error. DEC (Xie et al., 2016) developed a self-supervised loss to iteratively refine autoencoder embeddings by confident predictions. Other than autoencoders, there are also deep clustering methods based on spectral clustering (Shaham et al., 2018), subspace clustering (Ji et al., 2017), mixture of experts (Tsai et al., 2021), etc.
We suggest the readers refer to (Zhou et al., 2022) for a comprehensive survey on deep clustering methods.

As described above, the basic assumption of deep clustering is that the learned representations are more suitable for clustering, i.e., more distinguishable across different clusters compared to raw features. Nonetheless, we argue that the assumption hardly holds for previous deep clustering methods. First, the representation learning modules aim at preserving the semantics (Kipf & Welling, 2016; Bo et al., 2020) or consistency (Bachman et al., 2019; Ji et al., 2019) of the latent embeddings, or making the embeddings discriminative at the *instance* level (Zhao et al., 2021a; Tao et al., 2021; Devvrit et al., 2022). Second, clustering centroids and soft labels are inferred from the learned embeddings and (optionally) employed to finetune the embeddings. Unfortunately, none of the representation learning objectives contribute to *cluster-wise* discrimination due to inaccessible ground truth labels, hence the ambiguous clustering centroids and soft labels, leading to suboptimal clustering performance. In a word, this chicken-or-egg dilemma prohibits conventional deep clustering from learning unambiguous clustering results.

## 3 Method

An attributed undirected graph dataset G = ⟨V, E, X⟩ consists of a node set V, an edge set E, and a tabular data matrix X ∈ R^{n×d}. Each node is associated with a feature vector x_i ∈ R^d. The edge set E can also be represented by an adjacency matrix A, where

$$A_{u,v}=\begin{cases}1,&\text{if }\langle u,v\rangle\in\mathcal{E},\\ 0,&\text{otherwise}.\end{cases}\tag{1}$$

Node clustering algorithms aim to partition the nodes into c disjoint clusters without ground truth labels so that nodes from the same cluster are similar to each other, and nodes from different clusters are not. Next, we give an overview of the proposed HPNC method and analyze how it differs from conventional deep clustering paradigms. After that, we introduce its pipeline and the details of each submodule.

## 3.1 Overview

Intuitively, cluster prototypes should be as far as possible from each other, so that the data samples distributed around them are more distinguishable across different clusters. Unlike previous works that infer cluster prototypes from the learned node embeddings in a K-means-like fashion (Xie et al., 2016; Devvrit et al., 2022), we directly maximize their distances in a *data-independent* approach. However, it is impossible to give the coordinates of the farthest pair of points in unconstrained Euclidean space, because distances can be arbitrarily large. Hence, the embedding space needs to be constrained to make the distances bounded. In this work, we simply restrict the embedding space to a unit hypersphere, which is widely adopted and massively studied in various machine learning tasks (Davidson et al., 2018; Xu et al., 2019; Wang & Isola, 2020; Tan et al., 2022). On a hypersphere manifold, the sum of pairwise distances of cluster prototypes is maximized when they are uniformly scattered, which implies the best separability among different clusters. Once the prototypes are obtained, their coordinates are fixed and no longer updated during training, so that their separability is always guaranteed and the cluster collapse issue is thoroughly avoided.
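To make the overview concrete, the following is a minimal PyTorch sketch of the data-independent prototype scattering described above (formalized as Eq. (2) in Section 3.2.1 below). The optimizer choice and learning rate are our assumptions; the 3000-epoch budget follows the observation in Section 3.5.

```python
import torch
import torch.nn.functional as F

def pretrain_prototypes(c, m, epochs=3000, lr=0.1):
    """Scatter c cluster prototypes on the unit hypersphere S^{m-1} by
    minimizing the maximum pairwise cosine similarity (see Eq. (2)).
    Data-independent: only the c x m prototype matrix is optimized."""
    mu = torch.randn(c, m, requires_grad=True)        # unnormalized prototypes
    opt = torch.optim.SGD([mu], lr=lr, momentum=0.9)  # optimizer choice is an assumption
    eye = torch.eye(c, dtype=torch.bool)
    for _ in range(epochs):
        mu_n = F.normalize(mu, dim=1)                   # project onto the hypersphere
        cos = (mu_n @ mu_n.t()).masked_fill(eye, -2.0)  # exclude self-similarity
        loss = cos.max(dim=1).values.mean()             # mean of per-prototype maxima
        opt.zero_grad(); loss.backward(); opt.step()
    return F.normalize(mu.detach(), dim=1)              # fixed constants afterwards
```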
## 3.2 Hyperspherical Prototype Node Clustering

Next, we elaborate on the pipeline of HPNC. It merely consists of a representation learning module to map the nodes onto the same hypersphere manifold as the cluster prototypes, and a clustering module to push them close to their corresponding prototypes. In this way, the intra-cluster distances are also naturally minimized, leading to discriminative embeddings. Moreover, the clustering labels can be directly retrieved from the indices of the nearest prototypes for each node, without reliance on external clustering algorithms.

## 3.2.1 Pretrain Prototypes

The prerequisite step of HPNC is to obtain the coordinates of the cluster prototypes. Specifically, we propose to use uniformly scattered points on a unit-hypersphere as cluster prototypes. Uniformly placing c points on an m-dimensional unit-hypersphere S^{m−1} so that the minimum distance between any two points gets maximized is known as the Tammes problem (Tammes, 1930) in geometry. For S^1 (i.e., a unit-circle on a 2D plane), this is as easy as splitting it into c equal slices, and there are optimal solutions for S^2 as well. Unfortunately, no such solutions exist for m > 3 given an arbitrary c (Musin & Tarasov, 2015). To this end, we follow (Mettes et al., 2019) to adopt a gradient-based approach. Concretely, we minimize the maximum pairwise cosine similarities by gradient descent:

$$\mathcal{L}_{\mathrm{HP}}=\frac{1}{c}\sum_{i=1}^{c}\max_{\boldsymbol{\mu}_{j}}\frac{\boldsymbol{\mu}_{i}^{\top}\boldsymbol{\mu}_{j}}{\|\boldsymbol{\mu}_{i}\|\cdot\|\boldsymbol{\mu}_{j}\|},\ j\in1,\ldots,c,\,j\neq i,\tag{2}$$

where µ_i ∈ R^m are unnormalized prototypes. For convenience, we use µ̃_i = µ_i/∥µ_i∥ to denote the L2-normalized prototypes. Thus, µ̃_i will uniformly scatter on S^{m−1} after convergence. It is worth noting that the optimization is very fast because there are just c×m scalars to update. The pretraining is data-independent and thus only needs to be done once for each pair of ⟨c, m⟩. It is worthwhile to emphasize again that the prototypes are *fixed* after pretraining; they act as constants and are no longer updated in later steps of HPNC.

## 3.2.2 Representation Learning

The next step is to map the input graph into low-dimensional node embeddings. Without access to ground truth labels in a clustering context, an unsupervised objective is required to produce meaningful embeddings. Motivated by recent advances in generative graph representation learning, we adopt a masked graph autoencoder (GraphMAE) (Hou et al., 2022) as the representation learning backbone. We briefly introduce its objective in this section.

Masked feature reconstruction. Unlike conventional GAEs that reconstruct edges, GraphMAE developed a masked feature reconstruction strategy to recover masked node features X ∈ R^{n×d}. To be specific, a large random subset (e.g., 50%) of nodes V_mask ⊂ V is sampled, and their node features are replaced by a shared, trainable *mask token*. Then, the partially masked features and the unaltered input graph are fed into multiple GNN layers to generate latent embeddings Z ∈ R^{n×m}. A re-mask strategy is applied to Z before feeding it to the decoder, which replaces the embeddings of V_mask again, but with zeros.
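A minimal sketch of the mask/re-mask bookkeeping just described, assuming PyTorch tensors and generic GNN `encoder`/`decoder` modules (the module names and signatures are placeholders, not GraphMAE's actual API):

```python
import torch

def masked_autoencode(x, edge_index, encoder, decoder, mask_token, mask_rate=0.5):
    """One masked-reconstruction pass: replace the features of a random node
    subset V_mask with a shared trainable [MASK] token, encode, then re-mask
    the latent embeddings of V_mask with zeros before decoding."""
    n = x.size(0)
    v_mask = torch.randperm(n)[: int(mask_rate * n)]  # sample V_mask

    x_masked = x.clone()
    x_masked[v_mask] = mask_token                     # mask input features
    z = encoder(x_masked, edge_index)                 # latent embeddings Z

    z_remask = z.clone()
    z_remask[v_mask] = 0.0                            # re-mask before decoding
    x_hat = decoder(z_remask, edge_index)             # reconstructed features
    return z, x_hat, v_mask
```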
Denoting the reconstructed node features as X̂ ∈ R^{n×d}, GraphMAE introduced the scaled cosine error to minimize the reconstruction error as

$$\mathcal{L}_{\text{fea}}=\frac{1}{|\mathcal{V}_{\text{mask}}|}\sum_{v_{i}\in\mathcal{V}_{\text{mask}}}\left(1-\frac{\mathbf{x}_{i}^{\top}\hat{\mathbf{x}}_{i}}{\|\mathbf{x}_{i}\|\cdot\|\hat{\mathbf{x}}_{i}\|}\right)^{\gamma},\ \gamma\geqslant1.\tag{3}$$

During inference, the original node features without mask and re-mask strategies are used. As a result, it potentially leads to a mismatch between training and inference because the mask token is not observed in the later stage. To mitigate this issue, GraphMAE adopts a "random-substitution" method that randomly substitutes the node features of a small subset of V_mask (e.g., 15%) with node features sampled from X.

Edge reconstruction. GraphMAE focuses on classification and the authors argue that feature reconstruction is more beneficial than edge reconstruction (Hou et al., 2022), so they propose to reconstruct node features only. However, reconstructing the edges A captures more node pair-level information (Liu et al., 2023a) and thus is beneficial for link prediction and clustering (Pan et al., 2020). Hence, we also reconstruct the edges in addition to node features (Gao & Huang, 2018; Salehi & Davulcu, 2020). To be specific, we adopt the widely used inner product decoder (Kipf & Welling, 2016) to predict edges based on the latent embeddings:

$$\hat{A}_{u,v}=\sigma\left(\frac{\mathbf{z}_{u}^{\top}\mathbf{z}_{v}}{\|\mathbf{z}_{u}\|\cdot\|\mathbf{z}_{v}\|}\right),\tag{4}$$

where σ(·) is the sigmoid function. Then, we measure the discrepancy between predicted and ground-truth edges by binary cross-entropy as

$$\mathcal{L}_{\mathrm{edge}}=-\frac{1}{|\mathcal{E}|}\left(\sum_{\langle u,v\rangle\in\mathcal{E}}\log\hat{A}_{u,v}+\sum_{\langle\bar{u},\bar{v}\rangle\in\bar{\mathcal{E}}}\log(1-\hat{A}_{\bar{u},\bar{v}})\right),\tag{5}$$

where Ē is the set of unconnected node pairs obtained by negative sampling and |Ē| = |E|. By applying just L_fea and L_edge, the GNN can generate semantically meaningful node embeddings, and clustering results can be obtained by applying external clustering algorithms such as K-means. Next, we describe how to further refine them for better clustering performance and obtain clustering labels without reliance on K-means.

## 3.2.3 Rotated Clustering Affinity

In order to prevent collapsed clusters and encourage large inter-cluster distances of the latent embeddings, we aim to map nodes around the scattered cluster prototypes while leaving these prototypes unchanged. However, we empirically find that it is too challenging for the GNN to map nodes to appropriate prototypes. To mitigate this issue, we propose allowing the prototypes to *rotate* on the hypersphere. With a shared rotation matrix, their pairwise similarities remain unchanged so the large separation property is preserved, while their coordinates can change to some extent. In other words, the GNN no longer needs to learn to rotate the nodes, so it will converge more easily. To be specific, we first apply L2-normalization to the latent embeddings and denote them as z̃_i = z_i/∥z_i∥; then the cosine similarity of the i-th node and the j-th rotated prototype is simply their inner product z̃_i^⊤Rµ̃_j, where R ∈ R^{m×m} with R^⊤R = I is the rotation matrix.
Finally, we apply the softmax function to make them sum up to 1:

$$Q_{i,j}=\frac{\exp(\tilde{\mathbf{z}}_{i}^{\top}\mathbf{R}\tilde{\boldsymbol{\mu}}_{j})}{\sum_{j'=1}^{c}\exp(\tilde{\mathbf{z}}_{i}^{\top}\mathbf{R}\tilde{\boldsymbol{\mu}}_{j'})},\quad\text{s.t. }\mathbf{R}^{\top}\mathbf{R}=\mathbf{I}.\tag{6}$$

Hence, Q ∈ R^{n×c} can be interpreted as (soft) clustering labels. The rotation matrix R is learnable and updated by gradient descent². After obtaining preliminary clustering affinities, we devise two schemes to refine them (hence the node embeddings) as we elaborate below.

²We initialize R as an identity matrix and enforce the orthogonal constraint with the PyTorch built-in function torch.nn.utils.parametrizations.orthogonal during training.

## 3.3 Scheme 1: HPNC-IM

The clustering affinities Q should be unambiguous and sharp for Z̃ to be discriminative, so that intra-cluster distances are minimized and inter-cluster distances are maximized. To this end, we can employ the information maximization (IM) loss (Gomes et al., 2010; Hu et al., 2017; Liang et al., 2020) to encourage the cluster distributions to be individually unambiguous and globally uniform. Let X and Y denote the domains of data samples and cluster assignments; IM minimizes the following objective:

$$\mathcal{L}_{\mathrm{IM}}=-(H(Y)-H(Y|X)),\tag{7}$$

where X ∈ X and Y ∈ Y denote the random variables for data samples and cluster assignments, and H(·) and H(·|·) are the marginal and conditional entropy, which can be estimated as

$$\mathcal{L}_{\text{bal}}=H(Y)=h(\mathbf{p}_{\theta}(\mathbf{y}))=h\left(\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{X}}(\mathbf{p}_{\theta}(\mathbf{y}|\mathbf{x}_{i}))\right)=h\left(\frac{1}{n}\sum_{i=1}^{n}\mathbf{Q}_{i,:}\right)=\log c-D_{\text{KL}}\left(\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{X}}(\mathbf{p}_{\theta}(\mathbf{y}|\mathbf{x}_{i}))\,\middle\|\,\frac{1}{c}\mathbf{1}\right),\tag{8}$$

$$\mathcal{L}_{\text{ent}}=H(Y|X)=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{X}}(h(\mathbf{p}_{\theta}(\mathbf{y}|\mathbf{x}_{i})))=\frac{1}{n}\sum_{i=1}^{n}h(\mathbf{Q}_{i,:}),\tag{9}$$

where p_θ(y) and p_θ(y|·) are the marginal and conditional probabilities over label values y ∈ {1, . . . , c} modeled by a GNN with parameters θ, $h(\mathbf{p})=-\sum_{j=1}^{c}p_{j}\log p_{j}$ denotes the entropy function, and 1 is a vector of all ones. Intuitively, maximizing Eq. (8) encourages the average cluster probabilities E_{x_i∈X}(p_θ(y|x_i)) to converge to the uniform distribution, so data samples tend to be evenly assigned to different clusters. On the other hand, minimizing Eq. (9) encourages the individual cluster probabilities p_θ(y|x_i) to approximate a one-hot distribution, so the cluster assignment becomes unambiguous and the samples are pushed far from the decision boundaries. In summary, minimizing Eq. (7) trains the GNN to learn embeddings that agree with the *cluster assumption* (Chapelle & Zien, 2005), hence improving clustering performance. Combining Eqs. (3), (5) and (7) together, the full objective of HPNC-IM is

$$\mathcal{L}_{\text{HPNC-IM}}=\mathcal{L}_{\text{fea}}+\alpha\mathcal{L}_{\text{edge}}-\beta\mathcal{L}_{\text{bal}}+\gamma\mathcal{L}_{\text{ent}}.\tag{10}$$

Unlike previous works that require bootstrapping the autoencoder for hundreds of epochs with the reconstruction loss only, we randomly initialize the network parameters and train HPNC-IM with the full objective from scratch.
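Putting Eqs. (6), (8) and (9) together, a minimal PyTorch sketch of the HPNC-IM head follows. It uses the same orthogonal parametrization as footnote 2, while the module structure itself is our illustrative assumption rather than the released implementation:

```python
import torch
import torch.nn.functional as F

class HPNCIMHead(torch.nn.Module):
    """Rotated clustering affinity (Eq. (6)) plus the IM terms (Eqs. (8)-(9))."""
    def __init__(self, prototypes):                   # prototypes: (c, m), L2-normalized, fixed
        super().__init__()
        self.register_buffer("mu", prototypes)        # constants, never updated
        m = prototypes.size(1)
        self.rot = torch.nn.Linear(m, m, bias=False)  # rotation matrix R
        torch.nn.init.eye_(self.rot.weight)           # R initialized as identity (footnote 2)
        torch.nn.utils.parametrizations.orthogonal(self.rot)  # enforce R^T R = I

    def forward(self, z, eps=1e-8):
        q = torch.softmax(F.normalize(z, dim=1) @ self.rot(self.mu).t(), dim=1)  # Q, Eq. (6)
        p_mean = q.mean(dim=0)                                # average cluster probabilities
        l_bal = -(p_mean * (p_mean + eps).log()).sum()        # H(Y), Eq. (8), to be maximized
        l_ent = -(q * (q + eps).log()).sum(dim=1).mean()      # H(Y|X), Eq. (9), to be minimized
        return q, l_bal, l_ent

# Full HPNC-IM objective (Eq. (10)), with L_fea / L_edge from the autoencoder:
# loss = L_fea + alpha * L_edge - beta * l_bal + gamma * l_ent
```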
Finally, clustering results are directly obtained as the indices of the largest Q_{i,j} without relying on external clustering algorithms such as K-means. Nonetheless, we will show later in Section 4.2.2 that the clustering performance of Q and of K-means on Z̃ is very close, verifying that our objective actually leads to discriminative node embeddings.

## 3.4 Scheme 2: HPNC-DEC

In addition to the IM loss, we demonstrate that our proposed HPNC paradigm can also integrate with a simplified version of the DEC loss (Xie et al., 2016). As introduced in Section 1, the original version of DEC is a three-stage process:

1. Employing an autoencoder to learn preliminary embeddings {z_1, . . . , z_n} from the input data.
2. Performing K-means on the embeddings to obtain clustering centroids {µ_1, . . . , µ_c}, then calculating the cluster assignment distribution Q with the Student's t-distribution as the similarity measurement between the embeddings and clustering centroids:
$$Q_{i,j}=\frac{(1+\|\mathbf{z}_{i}-\boldsymbol{\mu}_{j}\|^{2}/\sigma)^{-\frac{\sigma+1}{2}}}{\sum_{j'}(1+\|\mathbf{z}_{i}-\boldsymbol{\mu}_{j'}\|^{2}/\sigma)^{-\frac{\sigma+1}{2}}},\tag{11}$$
where σ is the degree of freedom of the Student's t-distribution.
3. Defining an auxiliary distribution P by raising Q to the second power and normalizing it to sum up to 1:
$$P_{i,j}=\frac{Q_{i,j}^{2}/\sum_{i'=1}^{n}Q_{i',j}}{\sum_{j'=1}^{c}(Q_{i,j'}^{2}/\sum_{i'=1}^{n}Q_{i',j'})}.\tag{12}$$
Then, their Kullback–Leibler (KL) divergence is minimized to finetune the encoder network:
$$\mathcal{L}_{\mathrm{DEC}}=D_{\mathrm{KL}}(\mathbf{P}\|\mathbf{Q})=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c}P_{i,j}\log\frac{P_{i,j}}{Q_{i,j}}.\tag{13}$$

P is sharper than Q because confident predictions (ones with large Q_{i,j}) are emphasized and unconfident predictions (ones with small Q_{i,j}) are suppressed. Hence, minimizing the KL divergence will sharpen Q, leading to discriminative and cluster-aware embeddings. DEC iteratively performs 2) and 3) until convergence. However, the reliance on K-means and iterative optimization prohibits the potential use of DEC in an end-to-end deep learning pipeline. We propose a crucial simplification to integrate DEC with the HPNC paradigm, that is, to replace the Student's t-distribution-based cluster assignment distribution (i.e., step 2) with Eq. (6). Benefiting from the pipeline of HPNC, we no longer need K-means because the clustering centroids are given in advance. The iterative optimization of centroids and encoder parameters is also discarded because the centroids are not updated while training HPNC. Intuitively, DEC shares a similar purpose with the conditional entropy minimization part of IM (9) to make the target cluster assignment certain. However, it does not consider balancing the scale of different clusters, so it may produce empty clusters, which is undesired when we would like to ensure that all clusters are assigned data samples. Thus, we simply employ the marginal entropy maximization part of IM (8) to prevent degenerate solutions with empty clusters. Finally, the full objective of HPNC-DEC is as follows:

$$\mathcal{L}_{\text{HPNC-DEC}}=\mathcal{L}_{\text{fea}}+\alpha\mathcal{L}_{\text{edge}}-\beta\mathcal{L}_{\text{bal}}+\gamma\mathcal{L}_{\text{DEC}}.\tag{14}$$

Like HPNC-IM, HPNC-DEC is trained from randomly initialized network parameters by end-to-end gradient descent, without any form of pretraining.
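For completeness, the simplified DEC term can be sketched in a few lines. Treating the sharpened target P as a constant (detached) within each step is standard DEC practice and an assumption here, since the text does not state it explicitly:

```python
import torch

def dec_loss(Q, eps=1e-8):
    """Simplified DEC objective of HPNC-DEC: build the auxiliary target P
    from Q (Eq. (12)) and minimize KL(P || Q) (Eq. (13))."""
    weight = Q ** 2 / Q.sum(dim=0, keepdim=True)             # Q_ij^2 / sum_i' Q_i'j
    P = (weight / weight.sum(dim=1, keepdim=True)).detach()  # row-normalize; constant target
    return (P * ((P + eps).log() - (Q + eps).log())).sum(dim=1).mean()  # KL(P||Q)

# Full HPNC-DEC objective (Eq. (14)):
# loss = L_fea + alpha * L_edge - beta * l_bal + gamma * dec_loss(Q)
```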
The loss functions of both schemes, Eqs. (10) and (14), are fully differentiable and thus can be easily integrated into a deep learning pipeline and jointly optimized with other modules.

## 3.5 Complexity Analysis

When pretraining prototypes, the time complexity of calculating their pairwise similarities is $O(c^2 m)$, and there are $cm$ parameters to be updated, so the complexity of pretraining is $O(c^2 m t_{\text{pre}})$, where $t_{\text{pre}}$ refers to the number of pretraining epochs. We empirically find that setting $t_{\text{pre}}$ to 3000 is sufficient, so this step usually finishes in half a minute. The complexity of representation learning depends on the specific backbone and is usually linear in the number of nodes and edges. The complexity of calculating the rotated clustering affinity (6) is $O(ncm + m^2 c)$ by applying the rotation to the prototypes first and obtaining their inner products with samples later. There are $m^2$ trainable parameters in R, and its orthogonal parametrization is also $O(m^2)$. Hence, this step needs $O((n + m)cmt)$ in total, where $t$ is the number of training epochs. The complexity of the clustering backbone also depends on the specific choice, which is $O(nct)$ for both IM and DEC as employed in this work. In summary, the time complexity of HPNC without representation learning is $O(c^2 m t_{\text{pre}} + (n + m)cmt)$. The time complexity of K-means clustering is $O(ncmt)$, so HPNC is comparable to applying K-means after representation learning.

## 4 Experiments

In this section, we conduct extensive experiments to evaluate the effectiveness of the proposed HPNC paradigm. The experiments are designed to answer the following research questions:

- **RQ1:** What is the clustering performance of the proposed HPNC paradigm along with the two devised schemes?
- **RQ2:** How useful is the proposed *rotated clustering affinity* compared to conventional cosine similarity?
- **RQ3:** Although clustering assignments can be directly retrieved from the soft labels Q, how different are they from the ones obtained by performing K-means on the embeddings Z̃?
- **RQ4:** What is the relationship between the centroids found by performing K-means on Z̃ and the rotated HPNC prototypes $\{R\tilde{\boldsymbol{\mu}}_j\}_{j=1}^c$?
- **RQ5:** Is the separability of the learned embeddings really improved?

Table 1: Averaged clustering performance. The best results are in **bold**, second-best results are in *italics*.

| Method | Cora ACC | Cora NMI | Cora ARI | CiteSeer ACC | CiteSeer NMI | CiteSeer ARI | PubMed ACC | PubMed NMI | PubMed ARI | ACM ACC | ACM NMI | ACM ARI | DBLP ACC | DBLP NMI | DBLP ARI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GAE | 53.3±0.2 | 40.7±0.3 | 30.5±0.2 | 41.3±0.4 | 18.3±0.3 | 19.1±0.3 | 63.1±0.4 | 24.9±0.3 | 21.7±0.2 | 84.5±1.4 | 55.4±1.9 | 59.5±3.1 | 61.2±1.2 | 30.8±0.9 | 22.0±1.4 |
| VGAE | 56.0±0.3 | 38.5±0.4 | 34.7±0.3 | 44.4±0.2 | 22.7±0.3 | 20.6±0.3 | 65.5±0.2 | 25.0±0.4 | 20.3±0.2 | 84.1±0.2 | 53.2±0.5 | 57.7±0.7 | 58.6±0.1 | 26.9±0.1 | 17.9±0.1 |
| MGAE | 63.4±0.5 | 45.6±0.3 | 43.6±0.4 | 63.5±0.4 | 39.7±0.4 | 42.5±0.5 | 59.3±0.5 | 28.2±0.2 | 24.8±0.4 | 87.6±0.0 | 62.5±0.0 | 67.1±0.0 | 75.6±0.1 | 43.3±0.2 | 47.6±0.2 |
| ARGA | 63.9±0.4 | 45.1±0.3 | 35.1±0.5 | 57.3±0.5 | 35.2±0.3 | 34.0±0.4 | 68.0±0.5 | 27.6±0.4 | 29.0±0.4 | 86.3±0.4 | 56.2±0.8 | 63.4±0.9 | 64.8±0.6 | 29.4±0.9 | 28.0±0.9 |
| ARVGA | 64.0±0.5 | 44.9±0.4 | 37.4±0.5 | 54.4±0.5 | 26.1±0.5 | 24.5±0.3 | 51.3±0.4 | 11.7±0.3 | 7.8±0.2 | 83.9±0.5 | 51.9±1.0 | 57.8±1.2 | 54.4±0.4 | 25.9±0.3 | 19.8±0.4 |
| NetRA | 30.4±0.7 | 9.5±2.6 | 4.7±0.7 | 25.7±3.3 | 1.9±1.0 | 1.5±1.0 | 40.0±0.0 | 0.0±0.0 | 0.0±0.0 | 36.7±0.2 | 0.6±0.1 | 0.5±0.1 | 33.3±0.4 | 1.5±0.2 | 0.8±0.2 |
| NetWalk | 27.2±3.1 | 4.5±0.5 | -0.2±0.2 | 24.9±0.2 | 3.5±0.2 | 1.9±0.1 | 35.8±2.3 | 0.18±0.3 | 0.2±0.3 | 37.9±0.4 | 1.6±0.4 | 0.9±0.0 | 34.0±0.5 | 2.0±0.3 | 1.8±0.0 |
| AGC | 68.9±0.5 | 53.7±0.3 | 48.6±0.3 | 66.9±0.5 | 41.1±0.4 | 41.9±0.5 | 69.8±0.4 | 31.6±0.3 | 31.8±0.4 | 79.9±0.0 | 49.6±0.0 | 51.2±0.0 | 64.4±0.1 | 34.6±0.0 | 28.2±0.1 |
| DAEGC | 70.2±0.4 | 52.6±0.3 | 49.7±0.4 | 67.2±0.5 | 39.7±0.5 | 41.1±0.4 | 66.8±0.5 | 26.6±0.2 | 27.7±0.3 | 86.9±2.8 | 56.2±4.2 | 59.4±3.9 | 62.1±0.5 | 32.5±0.5 | 21.0±0.5 |
| AGE | 72.8±0.5 | 58.1±0.6 | **56.3±0.4** | 70.0±0.3 | 44.6±0.4 | 45.4±0.5 | 69.9±0.5 | 30.1±0.4 | 31.4±0.6 | 90.6±0.2 | 68.7±0.5 | 74.3±0.4 | 75.1±0.6 | 45.5±0.3 | 47.6±0.8 |
| SDCN | 48.5±0.5 | 24.6±0.4 | 20.6±0.3 | 66.0±0.3 | 38.7±0.3 | 40.2±0.4 | 64.2±1.3 | 22.9±2.0 | 22.3±2.0 | 90.5±0.2 | 68.3±0.3 | 73.9±0.4 | 68.1±1.8 | 39.5±1.3 | 39.2±2.0 |
| DCRN | 63.4±0.5 | 46.2±0.4 | 36.3±0.9 | 70.9±0.2 | 45.9±0.4 | 47.6±0.3 | 69.9±0.1 | 32.2±0.1 | 31.4±0.1 | 91.9±0.2 | 71.6±0.6 | 77.6±0.5 | 79.7±0.3 | 49.0±0.4 | 53.6±0.5 |
| GraphMAE | 68.0±2.0 | 57.6±1.1 | 50.2±2.8 | 69.0±0.4 | 43.6±0.5 | 44.4±0.5 | 69.9±0.5 | *34.4±0.5* | 32.9±0.7 | 89.6±0.2 | 66.7±0.3 | 71.8±0.4 | 73.6±0.5 | 45.7±0.3 | 43.6±0.7 |
| SUBLIME | 71.2±0.1 | 53.6±0.4 | 50.6±0.4 | 68.1±0.5 | 43.2±0.3 | 43.4±0.7 | 63.8±1.3 | 27.4±1.4 | 24.5±2.3 | 88.9±0.3 | 66.0±0.8 | 70.1±0.7 | 54.8±2.9 | 30.2±2.6 | 18.1±2.4 |
| NAFS | 70.4±0.0 | 56.6±0.0 | 48.0±0.0 | 71.8±0.0 | 45.1±0.0 | 47.6±0.0 | *70.5±0.0* | 33.9±0.0 | 33.2±0.0 | 81.2±0.1 | 51.4±0.3 | 52.9±0.2 | 52.8±0.0 | 25.7±0.1 | 14.7±0.0 |
| CONVERT | *74.1±1.5* | 55.6±1.1 | 50.5±2.0 | 68.4±0.7 | 41.6±0.7 | 42.8±1.6 | 67.1±1.8 | 30.0±1.5 | 29.3±1.8 | 84.4±2.9 | 55.3±4.4 | 59.6±6.8 | 52.9±2.8 | 20.4±2.1 | 17.2±2.36 |
| HPNC-IM | *74.1±0.5* | *58.9±0.6* | 51.5±0.9 | *72.4±0.5* | *46.3±0.7* | *48.4±0.7* | 70.4±0.4 | **34.6±0.7** | **33.5±0.7** | *92.1±0.1* | *72.1±0.3* | *78.1±0.3* | *80.0±0.4* | *49.8±0.4* | **55.0±0.6** |
| HPNC-DEC | **74.4±0.5** | **59.4±0.9** | *51.9±1.2* | **73.0±0.5** | **46.9±0.7** | **49.7±0.7** | **70.6±0.9** | 33.1±1.5 | *33.3±1.4* | **92.3±0.1** | **72.4±0.3** | **78.4±0.2** | **80.1±0.5** | **49.9±0.5** | *54.9±0.6* |

Baselines. We compare HPNC with 16 classical and state-of-the-art node clustering and node embedding models, including GAE, VGAE (Kipf & Welling, 2016), MGAE (Wang et al., 2017), ARGA, ARVGA (Pan et al., 2020), NetRA (Yu et al., 2018b), NetWalk (Yu et al., 2018a), AGC (Zhang et al., 2019), DAEGC (Wang et al., 2019), AGE (Cui et al., 2020), SDCN (Bo et al., 2020), DCRN (Liu et al., 2022b), GraphMAE (Hou et al., 2022), SUBLIME (Liu et al., 2022a), NAFS (Zhang et al., 2022), and CONVERT (Yang et al., 2023). NetRA and NetWalk are random-walk-based node embedding methods that utilize the graph topology only, and the remaining competitors are designed to handle attributed graphs.

Evaluation protocol. For node embedding methods that cannot directly produce cluster assignments, we perform K-means on the learned embeddings to obtain labels for comparison. K-means is reinitialized 50 times to eliminate randomness, so that their clustering performance solely depends on the quality of the learned embeddings. We run all competitors five times with distinct random seeds and report the averaged results and standard deviations of the best epochs. We present the selected datasets, evaluation metrics and implementation details in Appendices A.1 to A.3, respectively.

## 4.1 Node Clustering Performance Comparison (RQ1)

Table 1 reports the clustering performance of our method and the other baselines. As shown in the table, both schemes of our proposed HPNC consistently achieved the best or second-best clustering performance on all five datasets, revealing the effectiveness of HPNC. In contrast, some competitors achieved appealing performance on certain datasets but failed on others. For instance, the ACC of AGE on Cora is only 1.6 lower than that of HPNC-DEC, but it failed to handle DBLP, where its ACC is 5.0 lower, which is a severe performance drop. DCRN's ACC on ACM is only 0.2 lower than that of HPNC-IM, but 10.7 lower on Cora. Generally speaking, the performance of HPNC is quite competitive compared with recent state-of-the-art models. Apart from clustering performance, HPNC also benefits from its end-to-end training strategy: other strong baselines, including but not limited to AGE, GraphMAE, and NAFS, all rely on K-means or spectral clustering to obtain clustering labels, increasing hardware requirements on CPU/memory and prohibiting end-to-end training with other downstream tasks. Notably, HPNC-DEC seems to perform slightly better than HPNC-IM, but their differences are marginal (less than 0.5) in most cases owing to the similar objectives of IM (9) and DEC (13).
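As a side note on the evaluation protocol, the metric definitions are deferred to Appendix A.2 (not shown here). For reference, clustering accuracy (ACC) is commonly computed with a Hungarian best-match between predicted clusters and classes, while NMI and ARI are permutation-invariant; a common implementation is sketched below (our assumption, not the paper's released code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: match predicted clusters to classes with the Hungarian algorithm."""
    c = max(y_true.max(), y_pred.max()) + 1
    count = np.zeros((c, c), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize matched pair counts
    return count[rows, cols].sum() / len(y_true)

# NMI and ARI need no matching:
# nmi = normalized_mutual_info_score(y_true, y_pred)
# ari = adjusted_rand_score(y_true, y_pred)
```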
## 4.2 Ablation Study

In this section, we further perform ablation studies to confirm the effectiveness of individual design choices of HPNC.

## 4.2.1 Prototype Rotation (RQ2)

When calculating the affinities between nodes and cluster prototypes, we propose to introduce a learnable rotation matrix R applied to these prototypes to allow them to rotate on the hypersphere manifold. To verify whether it is useful for assisting the GNN to converge, we replace R in Eq. (6) with an identity matrix such that the coordinates of the prototypes are strictly equal to their initial states, i.e., ones randomly initialized and refined by minimizing Eq. (2). The clustering results are reported in Table 2, where the first four row groups do not apply rotation and the last four are their counterparts with rotation applied. We observe that the clustering performances with prototype rotation outperform their counterparts without rotation. Specifically, HPNC-IM and HPNC-DEC suffer drastic 24.3 and 31.1 NMI drops on the Cora dataset after disabling prototype rotation. What's worse, the NMI on the PubMed dataset without rotation is only 9.3 for HPNC-IM and 8.5 for HPNC-DEC, which means the clustering algorithm is not even working. In contrast, they give plausible performance with rotation enabled, and the standard deviations are much lower. The results strongly suggest that allowing the prototypes to rotate on the hypersphere manifold is indeed helpful and essential. The reason is that we consider the pairwise distances of prototypes only in Eq. (2) while their coordinates are ignored. As a result, an unsuitable random initialization of prototypes may fail to match up with the semantic centroids of the node embeddings, leading to performance degradation. Our prototype rotation can thus also be regarded as a learning-based strategy to reduce sensitivity to initial states and stabilize training.

## 4.2.2 K-Means Labels vs. Soft Labels (RQ3)

The clustering results of our proposed HPNC are obtained by directly applying arg max to Eq. (6), so external clustering algorithms are not needed at all. Nonetheless, the learned node embeddings Z̃ can also be used for clustering or potentially other downstream tasks. Here, we feed them into K-means and report the clustering results in Table 2 (marked as Z̃+KM). We observe that:

- With prototype rotation applied, the clustering performance of K-means and soft labels are very comparable, which means HPNC can indeed guide the GNN towards discriminative node embeddings so that traditional clustering algorithms can also work well.
- The clustering performance of both K-means and soft labels degenerates without prototype rotation, but K-means works better than soft labels in most cases. The reason might be that the autoencoder backbone preserved the clustering structure of the latent embeddings to some extent for K-means to discover, but their geometric centroids deviate far from the predefined prototypes so it is too challenging for HPNC to match them up.

To further investigate the consistency between K-means predictions and soft labels (HPNC-DEC is employed in this experiment), we calculate the proportion of them being the same and plot the results in Figure 2.
Table 2: Clustering performance of HPNC with different ablation settings. Z̃+KM denotes K-means results on the latent embeddings; Q denotes results of Eq. (6).

| Rotate prototype | Scheme | Target | Metric | Cora | CiteSeer | PubMed | ACM | DBLP |
|---|---|---|---|---|---|---|---|---|
| ✗ | HPNC-IM | Z̃+KM | ACC | 60.2±5.0 | 59.9±10.3 | 69.1±1.0 | 80.4±9.8 | 64.4±4.3 |
| | | | NMI | 50.2±2.7 | 37.1±7.0 | 32.3±1.9 | 58.0±8.3 | 35.4±2.2 |
| | | | ARI | 40.0±3.3 | 36.7±8.4 | 31.3±1.7 | 59.9±11.5 | 37.6±4.3 |
| | | Q | ACC | 49.3±4.0 | 44.9±2.0 | 52.5±6.7 | 87.5±2.5 | 63.9±5.1 |
| | | | NMI | 34.6±7.2 | 26.3±3.3 | 9.3±5.6 | 63.0±5.1 | 35.7±3.0 |
| | | | ARI | 26.5±5.6 | 22.4±3.2 | 9.6±7.3 | 67.4±5.6 | 37.8±5.1 |
| | HPNC-DEC | Z̃+KM | ACC | 62.3±2.9 | 61.1±3.8 | 69.3±1.2 | 92.1±0.2 | 64.7±4.6 |
| | | | NMI | 52.5±1.8 | 38.0±1.4 | 32.5±2.0 | 72.1±0.5 | 36.3±2.5 |
| | | | ARI | 42.5±1.7 | 36.6±2.4 | 31.5±1.9 | 78.0±0.4 | 38.8±4.6 |
| | | Q | ACC | 44.9±2.5 | 53.5±3.5 | 52.0±5.4 | 92.1±0.1 | 64.2±4.5 |
| | | | NMI | 28.3±4.2 | 32.4±3.7 | 8.5±3.7 | 72.2±0.5 | 36.1±2.8 |
| | | | ARI | 20.0±3.2 | 27.8±4.0 | 8.3±5.2 | 78.1±0.4 | 38.4±4.7 |
| ✓ | HPNC-IM | Z̃+KM | ACC | 71.4±2.9 | 70.1±1.5 | 69.7±0.5 | 92.1±0.1 | 80.0±0.4 |
| | | | NMI | 57.5±1.6 | 45.4±0.9 | 33.5±0.6 | 72.0±0.2 | 49.8±0.3 |
| | | | ARI | 49.1±1.4 | 47.1±1.3 | 32.3±0.8 | 77.9±0.2 | 55.0±0.6 |
| | | Q | ACC | 74.1±0.5 | 72.4±0.5 | 70.4±0.4 | 92.1±0.1 | 80.0±0.4 |
| | | | NMI | 58.9±0.6 | 46.3±0.7 | 34.6±0.7 | 72.1±0.3 | 49.8±0.4 |
| | | | ARI | 51.5±0.9 | 48.4±0.7 | 33.5±0.7 | 78.1±0.3 | 54.9±0.6 |
| | HPNC-DEC | Z̃+KM | ACC | 70.6±0.8 | 70.6±2.1 | 69.8±0.5 | 92.2±0.1 | 80.2±0.5 |
| | | | NMI | 57.4±0.5 | 46.2±0.9 | 33.8±0.2 | 72.3±0.3 | 50.0±0.4 |
| | | | ARI | 48.8±1.3 | 48.3±1.5 | 32.6±0.8 | 78.3±0.2 | 55.1±0.7 |
| | | Q | ACC | 74.4±0.5 | 73.0±0.5 | 70.6±0.9 | 92.3±0.1 | 80.1±0.5 |
| | | | NMI | 59.4±0.9 | 46.9±0.7 | 33.1±1.5 | 72.4±0.3 | 49.9±0.5 |
| | | | ARI | 51.9±1.2 | 49.7±0.7 | 33.3±1.4 | 78.4±0.2 | 55.0±0.7 |

![11_image_0.png](11_image_0.png)

Figure 2: The proportion of Eq. (6) and K-means making the same predictions.

We observe that they are highly consistent on all datasets when prototype rotation is applied but deviate far from each other without it. This again confirms our earlier analysis. To summarize, prototype rotation is a very important component of our proposed HPNC paradigm. It not only helps HPNC to match semantic and predefined prototypes but also leads to more discriminative latent embeddings for potential use as input to other downstream algorithms.

![12_image_0.png](12_image_0.png)

Figure 3: Cosine distances between rotated prototypes of HPNC and K-means centroids.

![12_image_1.png](12_image_1.png)

Figure 4: t-SNE visualization of learned latent embeddings on the CiteSeer dataset. Pentagrams in HPNC denote cluster prototypes. HPNC embeddings are the most discriminative and separable among all the competitors.

## 4.2.3 K-Means Centroids vs. Prototypes (RQ4)

To investigate the relationship between the clustering centroids inferred from the latent embeddings by K-means and HPNC's rotated prototypes $\{R\tilde{\boldsymbol{\mu}}_j\}_{j=1}^c$ (HPNC-DEC is employed in this experiment), we calculate their pairwise cosine distances and plot the confusion matrices in Figure 3. We observe that K-means centroids and rotated prototypes corresponding to the same cluster lie very close to each other, which means they are consistent.
On the other hand, the pairwise distances between centroids of different clusters are nearly the same, which means different clusters are successfully scattered on the hypersphere manifold. Since K-means centroids are inferred from the latent embeddings, we conclude that HPNC indeed successfully mapped nodes from different clusters to their corresponding prototypes, and made them uniformly distribute on the hypersphere manifold to have large margins. ## 4.3 Visualization (Rq5) We employ t-SNE (Van der Maaten & Hinton, 2008) to visualize the learned latent embeddings of HPNC (HPNC-DEC is employed in this experiment) on the CiteSeer dataset and compare it with different baseline methods. The results are illustrated in Figure 4. In addition, the cluster prototypes of HPNC are marked in pentagrams. It's obvious that the embeddings learned by HPNC are the most discriminative and separable among all the competitors. Also, data samples from the same cluster distribute evenly around corresponding prototypes, which is a desired property of our motivation. ## 5 Conclusion This paper presents a novel HPNC paradigm for end-to-end node clustering. In HPNC, uniformly scattered points on a unit-hypersphere are regarded as cluster prototypes. Then, a graph autoencoder is employed to encode the nodes onto the same manifold with the guidance to push them close to the prototypes. Unlike previous works that overlook the separability of learned embeddings, HPNC explicitly maximizes their intercluster distances to provide sufficient discriminative power. Moreover, HPNC does not rely on external clustering algorithms to uncover clustering labels and its objective is fully differentiable, which enables its use as a module in deep learning pipelines. We devise two different schemes based on HPNC to demonstrate its flexibility to work with different clustering backbones. Extensive experiments on real-world graph datasets strongly confirm the effectiveness of HPNC. ## Acknowledgments This work was supported in part by the China Postdoctoral Science Foundation under Grant 2022M722532. ## References Philip Bachman, R. Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In *NeurIPS*, pp. 15509–15519, 2019. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput., 15(6):1373–1396, 2003. Deyu Bo, Xiao Wang, Chuan Shi, Meiqi Zhu, Emiao Lu, and Peng Cui. Structural deep clustering network. In WWW, pp. 1400–1410, 2020. Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. In *CIKM*, pp. 891–900, 2015. Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In *AISTATS*, 2005. Ganqu Cui, Jie Zhou, Cheng Yang, and Zhiyuan Liu. Adaptive graph encoder for attributed graph embedding. In KDD, pp. 976–985, 2020. Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. A survey on network embedding. *IEEE Trans. Knowl.* Data Eng., 31(5):833–852, 2019. Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. Hyperspherical variational auto-encoders. In UAI, pp. 856–865, 2018. Fnu Devvrit, Aditya Sinha, Inderjit S Dhillon, and Prateek Jain. S3GC: Scalable self-supervised graph clustering. In *NeurIPS*, pp. 3248–3261, 2022. Hongchang Gao and Heng Huang. Deep attributed network embedding. In *IJCAI*, pp. 3364–3370, 2018. Ryan Gomes, Andreas Krause, and Pietro Perona. 
Discriminative clustering by regularized information maximization. In *NIPS*, pp. 775–783, 2010.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In KDD, pp. 855–864, 2016.

Kaveh Hassani and Amir Hosein Khas Ahmadi. Contrastive multi-view representation learning on graphs. In *ICML*, volume 119, pp. 4116–4126, 2020.

Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. Graphmae: Self-supervised masked graph autoencoders. In KDD, pp. 594–604, 2022.

Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. In *ICML*, pp. 1558–1567, 2017.

Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian D. Reid. Deep subspace clustering networks. In *NIPS*, pp. 24–33, 2017.

Xu Ji, Andrea Vedaldi, and João F. Henriques. Invariant information clustering for unsupervised image classification and segmentation. In *ICCV*, pp. 9864–9873, 2019.

Thomas N. Kipf and Max Welling. Variational graph auto-encoders. *CoRR*, abs/1611.07308, 2016.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.

Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. In NeurIPS, pp. 13333–13345, 2019.

Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In *ICML*, pp. 6028–6039, 2020.

Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, and Shirui Pan. Towards unsupervised deep graph structure learning. In WWW, pp. 1392–1403, 2022a.

Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, and Philip S. Yu. Graph self-supervised learning: A survey. *IEEE Trans. Knowl. Data Eng.*, 35(6):5879–5900, 2023a.

Yue Liu, Wenxuan Tu, Sihang Zhou, Xinwang Liu, Linxuan Song, Xihong Yang, and En Zhu. Deep graph clustering via dual correlation reduction. In *AAAI*, pp. 7603–7611, 2022b.

Yue Liu, Ke Liang, Jun Xia, Sihang Zhou, Xihong Yang, Xinwang Liu, and Stan Z. Li. Dink-net: Neural clustering on large graphs. In *ICML*, 2023b.

Pascal Mettes, Elise van der Pol, and Cees Snoek. Hyperspherical prototype networks. In *NeurIPS*, pp. 1485–1495, 2019.

Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies. In *ICCV*, pp. 360–368, 2017.

Oleg R Musin and Alexey S Tarasov. The Tammes problem for n = 14. *Exp. Math.*, 24(4):460–468, 2015.

Shirui Pan, Ruiqi Hu, Sai-Fu Fung, Guodong Long, Jing Jiang, and Chengqi Zhang. Learning graph embedding with adversarial training methods. *IEEE Trans. Cybern.*, 50(6):2475–2487, 2020.

Jiwoong Park, Minsik Lee, Hyung Jin Chang, Kyuewang Lee, and Jin Young Choi. Symmetric graph convolutional autoencoder for unsupervised graph representation learning. In *ICCV*, pp. 6518–6527, 2019.

Namyong Park, Ryan A. Rossi, Eunyee Koh, Iftikhar Ahamath Burhanuddin, Sungchul Kim, Fan Du, Nesreen K. Ahmed, and Christos Faloutsos. CGC: contrastive graph clustering for community detection and tracking. In WWW, pp. 1115–1126, 2022.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: online learning of social representations. In KDD, pp. 701–710, 2014.

Amin Salehi and Hasan Davulcu. Graph attention auto-encoders. In *ICTAI*, pp. 989–996, 2020.

Uri Shaham, Kelly P. Stanton, Henry Li, Ronen Basri, Boaz Nadler, and Yuval Kluger.
Spectralnet: Spectral clustering using deep neural networks. In *ICLR*, 2018. Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In *ICLR*, 2020. Pieter Merkus Lambertus Tammes. On the origin of number and arrangement of the places of exit on the surface of pollen-grains. *Recueil Trav. Bot. Néerl.*, 27(1):1–84, 1930. Cheng Tan, Zhangyang Gao, Lirong Wu, Siyuan Li, and Stan Z. Li. Hyperspherical consistency regularization. In *CVPR*, pp. 7234–7245, 2022. Yaling Tao, Kentaro Takagi, and Kouta Nakata. Clustering-friendly representation learning via instance discrimination and feature decorrelation. In *ICLR*, 2021. Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graph clustering. In *AAAI*, pp. 1293–1299, 2014. Tsung Wei Tsai, Chongxuan Li, and Jun Zhu. Mice: Mixture of contrastive experts for unsupervised image clustering. In *ICLR*, 2021. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *JMLR*, 9(11), 2008. Petar Velickovic, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R. Devon Hjelm. Deep graph infomax. In *ICLR*, 2019. Ulrike von Luxburg. A tutorial on spectral clustering. *Stat. Comput.*, 17(4):395–416, 2007. Chun Wang, Shirui Pan, Guodong Long, Xingquan Zhu, and Jing Jiang. MGAE: marginalized graph autoencoder for graph clustering. In *CIKM*, pp. 889–898, 2017. Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, and Chengqi Zhang. Attributed graph clustering: A deep attentional embedding approach. In *IJCAI*, pp. 3670–3676, 2019. Chun Wang, Bo Han, Shirui Pan, Jing Jiang, Gang Niu, and Guodong Long. Cross-graph: Robust and unsupervised embedding for attributed graphs with corrupted structure. In *ICDM*, pp. 571–580, 2020. Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In *CVPR*, pp. 1386–1393, 2014. Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *ICML*, pp. 9929–9939, 2020. Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In *CVPR*, pp. 3733–3742, 2018. Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, and Stan Z. Li. Simgrace: A simple framework for graph contrastive learning without data augmentation. In WWW, pp. 1070–1079, 2022. Junyuan Xie, Ross B. Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In ICML, pp. 478–487, 2016. Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In *ICCV*, pp. 1426–1435, 2019. Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In *ICML*, pp. 3861–3870, 2017. Xihong Yang, Cheng Tan, Yue Liu, Ke Liang, Siwei Wang, Sihang Zhou, Jun Xia, Stan Z. Li, Xinwang Liu, and En Zhu. CONVERT: contrastive graph clustering with reliable augmentation. In *ACM Multimedia*, pp. 319–327, 2023. Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In *ICML*, pp. 40–48, 2016. Jiaxuan You, Zhitao Ying, and Jure Leskovec. Design space for graph neural networks. In *NeurIPS*, 2020. 
Wenchao Yu, Wei Cheng, Charu C. Aggarwal, Kai Zhang, Haifeng Chen, and Wei Wang. Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks. In KDD, pp. 2672–2681, 2018a.

Wenchao Yu, Cheng Zheng, Wei Cheng, Charu C. Aggarwal, Dongjin Song, Bo Zong, Haifeng Chen, and Wei Wang. Learning deep network representations with adversarially regularized autoencoders. In KDD, pp. 2663–2671, 2018b.

Wentao Zhang, Zeang Sheng, Mingyu Yang, Yang Li, Yu Shen, Zhi Yang, and Bin Cui. NAFS: A simple yet tough-to-beat baseline for graph representation learning. In *ICML*, pp. 26467–26483, 2022.

Xiaotong Zhang, Han Liu, Qimai Li, and Xiao-Ming Wu. Attributed graph clustering via adaptive graph convolution. In *IJCAI*, pp. 4327–4333, 2019.

Han Zhao, Xu Yang, Zhenru Wang, Erkun Yang, and Cheng Deng. Graph debiased contrastive learning with joint representation clustering. In *IJCAI*, pp. 3434–3440, 2021a.

Wenliang Zhao, Yongming Rao, Ziyi Wang, Jiwen Lu, and Jie Zhou. Towards interpretable deep metric learning with structural matching. In *ICCV*, pp. 9867–9876, 2021b.

Yizhen Zheng, Shirui Pan, Vincent C. S. Lee, Yu Zheng, and Philip S. Yu. Rethinking and scaling up graph contrastive learning: An extremely efficient approach with group discrimination. In *NeurIPS*, 2022.

Sheng Zhou, Hongjia Xu, Zhuonan Zheng, Jiawei Chen, Zhao Li, Jiajun Bu, Jia Wu, Xin Wang, Wenwu Zhu, and Martin Ester. A comprehensive survey on deep clustering: Taxonomy, challenges, and future directions. *CoRR*, abs/2206.07579, 2022.

Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Deep graph contrastive representation learning. *CoRR*, abs/2006.04131, 2020.

Yanqiao Zhu, Yichen Xu, Qiang Liu, and Shu Wu. An empirical study of graph contrastive learning. In NeurIPS Datasets and Benchmarks, 2021.

## A Appendix: Experimental Setups

## A.1 Datasets

We evaluate the clustering performance on five widely adopted attributed graph datasets: Cora, CiteSeer, PubMed (Yang et al., 2016)3, ACM, and DBLP (Bo et al., 2020)4.

- Cora is a citation graph of a number of machine-learning papers divided into seven research topics. Each edge represents a citation.
- CiteSeer is a citation graph of scientific publications classified into one of six machine-learning areas. Each edge represents a citation.
- PubMed is a citation graph from the PubMed database, where nodes are papers about three diabetes types and edges are citations among them.
- ACM is a paper graph from the ACM Digital Library. The papers are categorized into three research areas. There is an edge between two papers if they are written by the same author.
- DBLP is an author graph from the DBLP computer science bibliography. Edges denote co-authorship between two authors. The authors are categorized into four research areas.

Statistics of these datasets are summarized in Table 3.

3https://github.com/kimiyoung/planetoid
4https://github.com/bdy9527/SDCN

## A.2 Evaluation Metrics

We employ three widely adopted metrics to evaluate node clustering performance: clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI). The larger the values, the better the clustering performance.
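As a concrete reference, these metrics can be computed as in the sketch below; this is an illustrative example (not the authors' code), assuming ground-truth labels `y_true` and predicted cluster assignments `y_pred`. NMI and ARI come directly from scikit-learn, while ACC requires an optimal cluster-to-class matching, commonly obtained with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best accuracy over all one-to-one mappings between
    predicted cluster ids and ground-truth class ids."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                      # contingency table: cluster p vs class t
    rows, cols = linear_sum_assignment(cost.max() - cost)  # Hungarian matching
    return cost[rows, cols].sum() / y_true.size

def evaluate(y_true, y_pred):
    return {
        "ACC": clustering_accuracy(y_true, y_pred),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "ARI": adjusted_rand_score(y_true, y_pred),
    }
```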
| Dataset | Nodes | Edges | Clusters | Dimension |
|----------|-------|-------|----------|-----------|
| Cora | 2708 | 5278 | 7 | 1433 |
| CiteSeer | 3327 | 4552 | 6 | 3703 |
| PubMed | 19717 | 44324 | 3 | 500 |
| ACM | 3025 | 13128 | 3 | 1870 |
| DBLP | 4057 | 3528 | 4 | 334 |

Table 3: Statistics for benchmark datasets.

## A.3 Implementation Details

We implement our proposed HPNC in PyTorch 2.0 and PyG 2.3.0 and use GraphGym (You et al., 2020) for experiment management to ensure reproducibility. Throughout the experiments, the encoders of HPNC are all composed of two GAT layers with four 128-dimensional attention heads and dropout probability 0.1 for the attention coefficients. We also apply dropout with probability 0.2 between the GAT layers and use PReLU as the activation function. The decoder is a single GAT layer without non-linear activation. The coefficients α, β, and γ in Eqs. (10) and (14) are tuned by random search within the following ranges: α ∈ {0.0, 0.01, 0.02}, β, γ ∈ (0.0, 0.1]. Detailed hyperparameter settings can be found in the YAML configuration files in our code.
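To make the architecture description concrete, the following is a minimal PyG sketch of the encoder/decoder just described. It is an illustrative reconstruction, not the released implementation: the module names are our own, whether head outputs are concatenated or averaged is our assumption, and only the layer configuration (two 4-head, 128-dimensional GAT layers with PReLU and dropout, plus a single GAT decoder without non-linearity) follows the text above.

```python
import torch.nn as nn
from torch_geometric.nn import GATConv

class HPNCEncoder(nn.Module):
    """Two GAT layers: 4 heads x 128 dims, attention dropout 0.1,
    inter-layer dropout 0.2, PReLU activation (as in Appendix A.3)."""
    def __init__(self, in_dim, hid_dim=128, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads, dropout=0.1)
        self.act = nn.PReLU()
        self.drop = nn.Dropout(0.2)
        self.gat2 = GATConv(hid_dim * heads, hid_dim, heads=heads, dropout=0.1)

    def forward(self, x, edge_index):
        h = self.drop(self.act(self.gat1(x, edge_index)))
        return self.gat2(h, edge_index)      # (n, heads * hid_dim) latent embeddings

class HPNCDecoder(nn.Module):
    """Single GAT layer without non-linear activation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.gat = GATConv(in_dim, out_dim)

    def forward(self, z, edge_index):
        return self.gat(z, edge_index)
```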
Review 1: Summary: The paper proposes a new node clustering method called Hyperspherical Prototype Node Clustering (HPNC). The general idea is to leverage the clustering-oriented loss with the learned node embeddings to encourage small intra-cluster distances and large inter-cluster distances. Some experiments are conducted to demonstrate the effectiveness of HPNC compared with many baseline methods. Ablation study and analytical results are also provided. Strengths and Weaknesses: Strengths + It is interesting to improve node embeddings for node clustering by enforcing small intra-cluster distances and large inter-cluster distances. The proposed method looks reasonable to me. + Experiments on 5 datasets are conducted to show that HPNC outperforms many baseline methods. + The presentation is overall good. The motivation is clearly described and the paper is easy to follow. I have some concerns regarding the method and experiments. - The novelty of this paper is incremental to me. Leveraging loss to encourage small intra-cluster distances and large inter-cluster distances is a common idea for improving the clustering algorithm. In addition, it is relatively straightforward that incorporating clustering-based loss into self-supervised loss will improve the clustering performance. - The improvement of HPNC over baseline methods is not significant. In many cases, the improvements are less than 1%. In addition, the authors should consider comparing more recent graph self-supervised learning baseline methods. Moreover, in the ablation study, it is necessary to report the results of all metrics rather than just NMI. It is also necessary to report the standard deviations of the results over several repeated experiments. - The current experimental datasets are relatively small. It is better to consider large-scale graph datasets, such as large datasets in OGB. Requested Changes: In my opinion, the novelty of this work is incremental and the experiments should be improved. Please see the above weakness for any changes. Broader Impact Concerns: NA ================================================== Review 2: Summary: This work proposes a new method for node clustering tasks. The main innovation of the work is to enforce cluster separation in the optimization objective. Two measures are used to achieve separation: 1) explicitly setting cluster centers and maximizing distances between them; and 2) encouraging the sharpness of the cluster membership matrix. The results indicate the proposed method achieves clear separation and better clustering performance than previous methods. Strengths and Weaknesses: Strength: 1) The work identifies the problem that clusters need to be separated and then devises optimization terms to encourage separation. 2) The experiment results are generally solid. They verify the hypothesis in the work. Weakness: The paper writing should be improved. Here is a list of detailed comments. 1) Section 3.1 should be merged into the Introduction and Related Work sections. 2) Section 3.2.2 is mostly from previous work and not this work's contribution. It should be in a "Background" section 3) Equation (6) it is unclear how R^TR = I is enforced. 4) Section 3.3 introduces new notations y and Q. They should be defined at the problem definition. 5) In figure 1, the word "disambiguate" is not even used in the text. Does it mean "sharpening"? 6) The word "prototype" is not great. 
Its meaning is "a first, typical or preliminary model of something, especially a machine, from which other forms are developed or copied.", which doesn't seem to be appropriate for a cluster center. Maybe "cluster centers" are better? 7) The experiment settings should be included in the text because they are important for the analysis of the results. Requested Changes: Please check the list of comments in the "Weakness" section. Broader Impact Concerns: No concerns detected. ================================================== Review 3: Summary: The paper introduces an approach to deep node clustering called Hyperspherical Prototype Node Clustering (HPNC). The proposed method enhances the inter-cluster separability of learned node embeddings by constraining the embedding space to a unit hypersphere. The paper provides a detailed explanation of the HPNC paradigm and its benefits over existing approaches to deep node clustering. It also includes experimental results demonstrating the superior clustering performance of HPNC on various real-world datasets. Overall, the paper presents an interesting technique that has the potential to advance the field of deep node clustering. However, the novelty of the idea is not encouraging. Basically, "enhancing the inter-cluster separability" is a very traditional research point for clustering. It is not clear what is the very unique part of node clustering. Strengths and Weaknesses: Pros: - HPNC enhances the inter-cluster separability of learned node embeddings, which can lead to improved clustering performance. - The method constrains the embedding space to a unit hypersphere, which enables the scattering of cluster prototypes over the space with maximized pairwise distances. - HPNC employs a graph autoencoder to map nodes onto the same hypersphere manifold, which can simplify the clustering process. - The proposed method is an end-to-end clustering paradigm, which eliminates the need for additional clustering algorithms such as K-means or spectral clustering. - HPNC has demonstrated competitive clustering performance compared to recent state-of-the-art models on various real-world datasets. Requested Changes: Cons: - The major concern is that the novelty of the idea is not encouraging. Basically, "enhancing the inter-cluster separability" is a very traditional research point for clustering. It is not clear what is the very unique part of node clustering. The authors are suggested to include more description on this point. - Some unsupervised node embedding approaches are not cited and compared, such as: NetWalk: A Flexible Deep Embedding Approach for Anomaly Detection in Dynamic Networks. KDD'18; Learning Deep Network Representations with Adversarially Regularized Autoencoders. KDD'18. It is suggested to have a more thorough literature survey on the graph embedding part. - The paper does not provide a detailed comparison of the computational efficiency of HPNC compared to other clustering algorithms. - The paper does not provide a detailed analysis of the robustness of HPNC to noise or outliers in the data. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: Overall, the reviewers agree that this paper is technically correct, and I am confident it is of interest to the TMLR audience. 2 out of 3 reviewers recommended to accept the paper; the 3rd reviewer leaned towards rejection on the grounds of 1) novelty, 2) datasets, and 3) marginal improvement.
After careful consideration I decided to accept the submission for publication, mainly because even though all reviewers noticed to some extent that the novelty may be limited, the TMLR guidelines clearly state that **Papers should be accepted if they meet the criteria [claims/evidence and audience], even if the contribution or significance of the work is modest.** I believe that the submission meets both these criteria and hence should be published in TMLR. Please prepare the final version of the manuscript; while doing so, I suggest renaming the new section 3.5 to "Complexity Analysis" (instead of "Complexity Analyses"). ==================================================
# A Simple Unified Method For Node Classification On Homophilous And Heterophilous Graphs

Anonymous authors
Paper under double-blind review

## Abstract

In graph learning, there have been two predominant inductive biases regarding graph-inspired architectures: On the one hand, higher-order interactions and message passing work well on homophilous graphs and are leveraged by GCNs and GATs. Such architectures, however, cannot easily scale to large real-world graphs. On the other hand, shallow (or node-level) models using ego features and adjacency embeddings work well in heterophilous graphs. In this work, we propose a novel scalable shallow method - GLINKX - that can work both on homophilous and heterophilous graphs. GLINKX leverages (i) novel monophilous label propagations, (ii) ego/node features, (iii) knowledge graph embeddings as positional embeddings, (iv) node-level training, and (v) low-dimensional message passing. Formally, we prove novel error bounds and justify the components of GLINKX. Experimentally, we show its effectiveness on several homophilous and heterophilous datasets with up to millions of nodes and tens of millions of edges.

## 1 Introduction

In recent years, graph learning methods have emerged with strong performance on various ML tasks. Graph ML methods leverage the topology of graphs underlying the data (Battaglia et al., 2018) to improve their performance. Two very important design options for proposing graph-ML-based architectures in the context of *node classification* are related to whether the data is homophilous or *heterophilous*.

For homophilous data - where neighboring nodes share similar labels (McPherson et al., 2001; Altenburger & Ugander, 2018a) - Graph Neural Network (GNN)-based methods can achieve high accuracy. Specifically, a broad subclass of successful GNNs are Graph Convolutional Networks (GCNs) (e.g., GCN, GAT, etc.) (Kipf & Welling, 2016; Veličković et al., 2017; Zhu et al., 2020). In the GCN paradigm, *message passing* and *higher-order interactions* help node classification tasks in the homophilous setting since such inductive biases tend to bring the learned representations of linked nodes close to each other. However, GCN-based architectures suffer from *scalability issues*. Performing (higher-order) propagations during the training stage is hard to scale in large graphs because the number of nodes grows exponentially with the increase of the filter receptive field. Thus, for practical purposes, GCN-based methods require *node sampling*, substantially increasing their training time. For this reason, architectures (Huang et al., 2020; Zhang et al., 2022b; Sun et al., 2021; Maurya et al., 2021; Rossi et al., 2020) that leverage propagations outside of the training loop (as a preprocessing step) have shown promising results in terms of scaling to large graphs since they do not require neighborhood sampling. Moreover, deploying GCN-based methods on an industrial scale incurs infrastructure overhead since training GNNs in the large-scale regime with node sampling requires significant computing resources (i.e., more GPU memory) compared to simpler shallow methods that perform i.i.d.-based random sampling.

In *heterophilous* datasets (Rogers et al., 2014), the connected nodes tend to have different labels. Currently, many works that address heterophily can be classified into two categories concerning scale.
On the one hand, recent successful architectures (in terms of accuracy) (Rossi et al., 2023; Di Giovanni et al., 2022; Zheng et al., 2022b; Luan et al., 2021; Chien et al., 2020; Lei et al., 2022) that address heterophily resemble GCNs in terms of design and therefore face the same scalability problems as GCNs/GATs. On the other hand, shallow or node-level models (see, e.g., (Lim et al., 2021; Zhong et al., 2022)) - i.e., models that treat graph data as tabular/i.i.d. data, resemble simple multi-layer perceptrons (MLPs) in their architecture, and do not involve propagations during training - have shown a lot of promise for large heterophilous graphs. For instance, in Lim et al. (2021), it is shown that combining ego embeddings (node features) and adjacency embeddings has good accuracy and scalability in the heterophilous setting (see Section 2). However, their design is still impractical on real-world data since the LINKX method presented in their paper is not inductive (see Section 2), and embedding the adjacency matrix exactly requires many parameters in a model, which can be millions to billions of extra parameters for industry-scale networks.

In LINKX, the adjacency embedding of a node can alternatively be thought of as a *positional embedding* (PE)1 of the node in the graph, and recent developments (Kim et al., 2022; Dwivedi et al., 2021; Lim et al., 2021; Wang et al., 2022) have shown the importance of PEs in both homophilous and heterophilous settings. However, most of these works suggest PE parameterizations (e.g., Laplacian eigenvectors) that are difficult to compute in large-scale directed graphs, such as modern social networks. We argue that a way to circumvent this problem is to rely on *knowledge graph* embeddings (Bordes et al., 2013; Yang et al., 2014), which can be used to perform a non-linear factorization of the adjacency matrix of the network and have recently been shown to be trainable on billion-scale networks (El-Kishky et al., 2022; Lerer et al., 2019).

## Goal & Contribution

We present GLINKX, a simple, scalable, and effective shallow method that works on both homophilous and heterophilous graphs and addresses the scalability problems of GNNs as well as the accuracy and memory inefficiency problems that shallow heterophilous methods face. For a method to be scalable, we argue that it should: (i) run models on node-scale (thus leveraging i.i.d. minibatching), (ii) avoid doing message passing during training and do it a constant number of times before training, and (iii) transmit small messages along the edges (to reduce memory footprint). To develop GLINKX, our main structural intuition is that many real-world homophilous and heterophilous datasets exhibit the well-documented *monophily* property (see Section 2.5, and Altenburger & Ugander (2018a); Lim et al. (2021); Altenburger & Ugander (2018b)); namely, a node tends to associate with nodes that have the same class as one another (i.e., the node has unique types of "friends"). This property can hold regardless of the graph being homophilous or heterophilous (see Figures 1(b) and 1(c)). Given this real-world intuition, GLINKX tackles the problems that current methods suited for homophily and heterophily face by combining three simple, novel, and effective components:
1. A novel 2-hop propagation scheme called *MLaP* (see Section 3 and Figure 3), which performs propagations outside of the training loop and addresses the infrastructure bottlenecks of message-passing architectures, as well as the decreased performance of shallow models on homophilous graphs. MLaP relies on the structural assumption that many real-world homophilous and heterophilous graphs are monophilous.

2. Knowledge Graph Embeddings (KGEs) (see also Sections 2.6 and 3) to compress the adjacency graph representations that existing methods such as LINKX use and to provide positional information (positional embeddings) about the nodes.

3. The ego embeddings of each node, which have been shown to work in both the homophilous and the heterophilous context2.

We show that GLINKX can perform well on a variety of homophilous and heterophilous datasets ranging from a few thousand nodes and edges to millions and tens of millions of nodes and edges (Section 4). Moreover, we provide novel theoretical error bounds to justify the components of GLINKX (see Section 3.3). Even though state-of-the-art methods (see, e.g., Luan et al. (2021); Rossi et al. (2023)) outperform our method in terms of accuracy, such methods are inherently not scalable to large-scale datasets because they perform propagations (message passing) during training, which makes neighborhood sampling mandatory in order to run on large datasets. In these cases, training GNNs on large datasets is still feasible; yet, it is very costly in terms of time, cost, and resources and takes up considerable time compared to our method (cf. Frasca et al. (2020); Bojchevski et al. (2020) and the references therein, and the runtime comparison in Section 4). Our method overcomes this bottleneck by performing propagations twice and out of the training loop, which makes it easier to deploy in industrial applications since our propagations can be implemented with modern distributed storage and processing software such as Apache Hadoop. Moreover, we also argue that GLINKX is complementary to what other methods propose, and such complementary information can be included in GLINKX without sacrificing performance. Finally, GLINKX is suitable for industrial applications since it can work in both a *transductive* and an *inductive*3 setting.

1We use the word *"positional embedding"* to talk broadly about embeddings that correspond to the nodes of a graph and encode the structural characteristics of each node.
2We use ego embeddings and node features interchangeably.

## 2 Preliminaries

## 2.1 Notation

We denote scalars with lower-case, vectors with bold lower-case letters, and matrices with bold upper-case. We consider a directed graph G = G(V, E) with vertex set V with |V| = n nodes, edge set E with |E| = m edges, and adjacency matrix $\mathbf{A}$. $\mathbf{X} \in \mathbb{R}^{n \times d_X}$ represents the $d_X$-dimensional features and $\mathbf{P} \in \mathbb{R}^{n \times d_P}$ represents the $d_P$-dimensional PE matrix (see Section 2.6 and Appendix A.2). A node $i$ has a feature vector $\mathbf{x}_i \in \mathbb{R}^{d_X}$ and a positional embedding $\mathbf{p}_i \in \mathbb{R}^{d_P}$, and belongs to a class $y_i \in \{1, \ldots, c\}$. The training set is denoted by $G_{\text{train}}(V_{\text{train}}, E_{\text{train}})$, the validation set by $G_{\text{valid}}(V_{\text{valid}}, E_{\text{valid}})$, and the test set by $G_{\text{test}}(V_{\text{test}}, E_{\text{test}})$. $\mathbb{I}\{\cdot\}$ is the indicator function. $\Delta_c$ is the $c$-dimensional simplex.

## 2.2 Graph Convolutional Neural Networks

In homophilous datasets, GCN-based methods have been used for node classification. GCNs (Kipf & Welling, 2016) utilize feature propagations together with non-linearities to produce node embeddings. Specifically, a GCN consists of multiple layers where each layer $i$ collects $i$-th hop information from the nodes through propagations and forwards this information to the $(i+1)$-th layer. More specifically, if $G$ has a symmetrically-normalized adjacency matrix $\mathbf{A}^{\prime}_{sym}$ (with self-loops, ignoring the directionality of edges), then a GCN layer has the form

$$\mathbf{H}^{(0)}=\mathbf{X},\quad\mathbf{H}^{(i+1)}=\sigma\left(\mathbf{A}^{\prime}_{sym}\mathbf{H}^{(i)}\mathbf{W}^{(i)}\right)\quad\forall i\in\{0,\ldots,L-1\},$$
$$\mathbf{Y}=\text{softmax}\left(\mathbf{H}^{(L)}\right).$$

Here $\mathbf{H}^{(i)}$ is the embedding from the previous layer, $\mathbf{W}^{(i)}$ is a learnable projection matrix, and $\sigma(\cdot)$ is a non-linearity (e.g., ReLU, sigmoid, etc.).
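As a minimal illustration of the layer above, the sketch below implements the propagation rule in PyTorch with a dense adjacency matrix for clarity; it is a didactic example (no sampling, no sparse kernels), not a scalable implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sym_norm_adj(A: torch.Tensor) -> torch.Tensor:
    """A'_sym = D^{-1/2} (A + I) D^{-1/2} with self-loops (dense, for clarity)."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1).pow(-0.5)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

class GCN(nn.Module):
    def __init__(self, dims):  # e.g., dims = [d_X, 64, num_classes]
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out, bias=False) for d_in, d_out in zip(dims, dims[1:])
        )

    def forward(self, X, A_sym):
        H = X
        for i, lin in enumerate(self.layers):
            H = A_sym @ lin(H)                 # H^{(i+1)} = A'_sym H^{(i)} W^{(i)}
            if i < len(self.layers) - 1:
                H = F.relu(H)                  # the non-linearity sigma(.)
        return F.softmax(H, dim=-1)            # Y = softmax(H^{(L)})
```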
## 2.3 Message-Passing Architectures vs. Efficient Shallow Methods

Message-Passing Architectures: To train a GCN-based model (or, generally, whenever message passing is involved) on a large network (that cannot fit in the GPU memory), one has to do *minibatching* through neighbor sampling. For large-scale networks, mini-batching takes much longer than full-batch training and requires substantially more infrastructure, which is one of the reasons that GCNs are not preferred in real-world settings (see, e.g., Jin et al. (2022b); Zhang et al. (2022a); Zheng et al. (2022a); Lim et al. (2021); Maurya et al. (2021); Rossi et al. (2020)).

Efficient Shallow Methods: *Shallow* (or node-level) models are based on manipulating the node features $\mathbf{X}$ and the graph topology $\mathbf{A}$ so that propagations do not occur during training. Such methods treat the input embeddings as tabular data and pass them through a feed-forward neural network (MLP) to produce the predictions. Thus, they avoid the message-passing bottlenecks and instead rely on simple tabular minibatching. For this reason, most methods that can scale to real-world settings are *shallow*. In heterophilous datasets, the simple method of LINKX has been shown to perform well.
The model is trained with back-propagation. Once trained, the model can be used for the prediction of labels of nodes in the test set. There are two training regimes: *transductive* and *inductive*. In the transductive training regime, we have full knowledge of the graph topology (for the train, test, and validation sets) and the node features, and the task is to predict the labels of the validation and test set. In the inductive regime, only the graph induced by Vtrain is known at the time of training, and then the full graph is revealed for prediction on the validation and test sets. In real-world scenarios, such as online social networks, the dynamic nature of problems makes the inductive regime particularly useful. ## 2.5 Homophily, Heterophily & Monophily Homophily and Heterophily: There are various measures of homophily in the GNN literature like node homophily and edge homophily (Lim et al., 2021). Intuitively, homophily in a graph implies that nodes with similar labels are connected. GNN-based approaches like GCN, GAT, etc., leverage this property to improve the node classification performance. Alternatively, if a graph has low homophily - namely, nodes that connect tend to have dierent labels - it is said to be *heterophilous*. In other words, a graph is heterophilous if neighboring nodes do not share similar labels. Monophily: Generally, we define a graph to be monophilous if the label of a node is similar to that of its neighbors' neighbors4. Etymologically, the word "monophily" is derived from the Greek words *"monos"* (unique) and *"philos"* (friend), which in our context means that a node - regardless of its label - has neighbors of primarily one label. In the context of a directed graph, monophily can be thought of as a structure that resembles Figure 3(a) where similar nodes (in this case, three green nodes connected to a yellow node) are connected to a node of dierent/same label. ## 2.6 Knowledge Graph Embeddings As Positional Embeddings Knowledge Graphs: Knowledge graph embeddings are a way to present knowledge about the world in a structured way. They consist of triplets (*h, r, t*), which correspond to a head (h), a relation (r), and a tail (t), such that the tail t is related to the head h via the relation r. The union of all such triplets defines a heterogeneous graph, the Knowledge Graph G(V,E1*,...,E*R) where the number of relations is R. Knowledge graphs can have multiple relations that represent dierent associations of objects, for example, (Paris, isCapitalOf, France), and (Baguette, isEatenIn, France). Knowledge Graph Embeddings: The aim of knowledge graph embeddings (KGEs) is to provide continuousspace representations for the entities {pu}iœV R|V |◊dP and the relations {rl}lœ[R] œ RR◊dP , such that for a triplet (h = pu, r = rl, t = pv), h+r ¥ t where "¥" corresponds to minimizing a distance criterion. Training is done by sampling negative examples for each positive example (h, r, t) and minimizing a contrastive-type loss. There have been numerous methods proposed for modeling and training KGEs, see Wang et al. (2017) for an in-depth literature review. In our paper, we use the DistMult method introduced by Yang et al. (2014) where the distance criterion is the triple Hadamard (element-wise) product h § r § t. Moreover, we note that *homogeneous* graphs - namely graphs where there is only one relation (for example the simple edge relation) - are a subset of knowledge graphs where all nodes and edges have the same types. 
In this paper, we use KGEs on homogeneous graphs as a way to extract embeddings for the nodes of the graph.

KGEs as Positional Embeddings: Apart from representing knowledge in a continuous space, KGEs can provide positional information - i.e., positional embeddings - for the graph nodes. Specifically, training knowledge graph embeddings can be seen as performing **a non-linear factorization on the adjacency matrices** in order to generate embeddings for the nodes. These embeddings can then be used as positional embeddings (PEs) representing the "position of a node" in the graph (see Section 3), which serves as a useful additional signal for node classification in both homophilous and heterophilous graphs (cf. Lim et al. (2021); Dwivedi et al. (2021); Srinivasan & Ribeiro (2019); Wang et al. (2022)). Finally, KGEs can be trained scalably for graphs with billions of nodes (see, e.g., El-Kishky et al. (2022)). For our paper, we describe the KGE training method in Appendix A.2.

## 3 Method

## 3.1 Components & Motivation

The desiderata we laid down in Section 1 can be realized by three components: (i) PEs, (ii) ego embeddings, and (iii) label propagations that encode monophily. More specifically, ego embeddings and PEs are used as primary features, which have been shown to work for both homophilous and heterophilous graphs for the models we end up training. Finally, the propagation step is used to encode monophily to provide additional information to our final prediction.

Positional Embeddings: We use PEs to provide our model information about the position of each node and hypothesize that PEs are an important piece of information in the context of large-scale node classification.

4A similar definition of monophily has appeared in (Altenburger & Ugander, 2018a), whereby many nodes have extreme preferences for connecting to a certain class.

Figure 2: Block diagrams of GLINKX stages.

PEs have been used to help discriminate isomorphic graph (sub)-structures (Kim et al., 2022; Dwivedi et al., 2021; Srinivasan & Ribeiro, 2019), and architectures that jointly learn node representations and PEs have also been developed (Wang et al., 2022). Specifically, Wang et al. (2022) develop a method to learn PEs during GNN training by using a separate channel to update the original node features and the PEs. Their architecture (PEG layer) is permutation-invariant w.r.t. the node features, rotationally and reflectively invariant w.r.t. the PEs, and has good stability guarantees. We note that the difference with our method is that in our method the PEs are pre-trained and provided externally, rather than trained in an end-to-end manner. PEs are useful for both homophily (Kim et al., 2022; Dwivedi et al., 2021) and heterophily (Lim et al., 2021) because isomorphic (sub)-structures can exist in both settings. In the homophilous case, adding positional information can help distinguish nodes that have the same neighborhood but distinct positions (Dwivedi et al., 2021; Morris et al., 2019; Xu et al., 2019), circumventing the need to do higher-order propagations (Dwivedi et al., 2021; Li et al., 2019; Bresson & Laurent, 2017), which are prone to over-squashing (Alon & Yahav, 2021). In heterophily, structural similarity among nodes is important for classification, as in the case of LINKX, where the adjacency embedding can be considered a PE. However, in large graphs, using adjacency embeddings or Laplacian eigenvectors can be a computational bottleneck and may be infeasible (cf. Kim et al. (2022)).
In this work, we leverage *knowledge graph embeddings* (KGEs) to encode positional information about the nodes and embed the graph, as a way to perform *a non-linear factorization on the adjacency matrix of the graph*. The resulting factorization can serve as a graph embedding - as we describe in Section 2.6 - which can be utilized as a PE. For our paper, we consider the simple case of the graph having *only* one relation - also called a *homogeneous graph* - which represents the topological links in the graph (i.e., edges). Using KGEs has two benefits: Firstly, KGEs can be trained easily and efficiently in many real-world scenarios (cf. El-Kishky et al. (2022)). This is because KGEs compress the adjacency matrix into a fixed-sized embedding. Further, KGEs are lower-dimensional than the adjacency matrix (e.g., $d_P \sim 10^2$), allowing for faster training and inference times, as well as better utilization of machine learning infrastructure. Secondly, KGEs can be pre-trained efficiently on such graphs (Lerer et al., 2019) and can be used off-the-shelf for other downstream tasks, including node classification (El-Kishky et al., 2022)5. So, in the 1st Stage of GLINKX in Algorithm 1 (Figure 2(a)), we train a KGE model on the available graph structure. We fix these positional encodings once they are pre-trained for downstream usage. Finally, we note that this step is transductive, but we can easily make it inductive (El-Kishky et al., 2022; Albooyeh et al., 2020).

5Positional information can also be provided by other methods such as node2vec (Grover & Leskovec, 2016) or LINE (Tang et al., 2015); however, most such methods are less scalable.

Ego Embeddings: We get ego embeddings from the node features. Such embeddings have been used in homophilous and heterophilous settings (Lim et al., 2021; Zhu et al., 2020). Node embeddings are useful for tasks where the graph structure provides little/no information about the task.

Monophilous Label Propagations: We now propose a novel monophily-inspired label propagation (see Section 2.5), which we refer to as Monophilous Label Propagation (MLaP). MLaP has the advantage that we can use it both for homophilous and heterophilous graphs, or in a scenario with varying levels of graph homophily (see, e.g., Figure 1).

Why Encode Monophily? We argue that encoding monophily into a model can be helpful for both heterophilous and homophilous graphs (see Figures 1(b) and 1(c)), which is one of the main motivators behind our work. In homophilous graphs, monophily will fundamentally encode the 2nd-hop neighbors' label information, and since in such graphs neighboring nodes have similar labels, it can provide a helpful signal for node classification. In heterophily, neighboring nodes have different labels, but the 2nd-hop neighbors may share the same label, providing helpful information for node classification. Monophily is an effective signal for heterophilous graphs (Lim et al., 2021). Therefore, an approach encoding monophily has an advantage over methods designed specifically for homophilous and heterophilous graphs, especially when varying levels of homophily can exist between different sub-regions in the same graph (see Figure 1). It may also not be apparent whether a (sub-)graph is purely homophilous/heterophilous (since these are not binary constructs), which makes a unified architecture that can leverage graph information for both settings all the more important.

How does MLaP encode monophily?
To understand how MLaP encodes monophily, we consider the example in Figure 3. In this example, we have three green nodes connected to a yellow node and two nodes of different colors connected to the yellow node. Then, one way to encode monophily in Figure 3(a) while predicting the label for $j_\ell$, $\ell \in [5]$, is to get a *distribution* of the labels of nodes connected to node $i$, thus encoding its neighbors' distribution. The fact that there are more nodes with green color than other colors can be used by the model to make a prediction. But this information may not always be present, or there may be few labeled nodes around node $i$. Consequently, we propose a 2-step approach to get this information. First, we train a model that predicts the label distribution of nodes connected to $i$. We use the node features ($\mathbf{x}_i$) and PE ($\mathbf{p}_i$) of node $i$ to build this model, since nodes that are connected to node $i$ share similar labels and thus the features of node $i$ must be predictive of its neighbors. So, in Figure 3(a), we train a model to predict a distribution of $i$'s neighbors. Next, we provide $j_\ell$ the learned distribution of $i$'s neighbors by propagating the learned distribution from $i$ back to $j_\ell$, so that $j_\ell$ now has information about $i$'s neighbors. Equations (1) to (3) correspond to MLaP. We train a final model that leverages this information together with node features and PEs (Figure 3(b)).

## 3.2 Our Method: GLINKX

We put the components discussed in Section 3.1 together into three stages. In the first stage, we pre-train the PEs by using KGEs. Next, we encode monophily into our model by training a model that predicts a node's neighbors' distribution and by propagating the soft labels from the fitted model. Finally, we combine the propagated information, node features, and PEs to train a final model. GLINKX is described in Algorithm 1 and consists of three main components detailed as block diagrams in Figure 2. Figure 3 shows the GLINKX stages from Algorithm 1 on a toy graph:

1st Stage (KGEs): We train DistMult KGEs with PyTorch-BigGraph (Yang et al., 2014), treating G as a knowledge graph with only one relation (see Appendix A.2 for more details). Here, we have decided to use DistMult, but one can use their method of choice to embed the graph.

2nd Stage (MLaP): First (2nd Stage in Algorithm 1, Figure 2(b), and Figure 3(a)), for a node we want to learn the distribution of *its neighbors*. To achieve this, we propagate the labels from a node's neighbors (we call this step MLaP Forward), i.e., we calculate

$$\hat{\mathbf{y}}_{i}=\frac{\sum_{j\in V_{\text{train}}:(j,i)\in E_{\text{train}}}\mathbf{y}_{j}}{|\{j\in V_{\text{train}}:(j,i)\in E_{\text{train}}\}|}\qquad\forall i\in V_{\text{train}}.\tag{1}$$

## Algorithm 1 GLINKX Algorithm

Input: Graph G(V, E) with train set $V_{\text{train}} \subseteq V$, node features $\mathbf{X}$, labels $\mathbf{Y}$
Output: Node Label Predictions $\mathbf{Y}_{\text{final}}$

1st Stage (KGEs). Pre-train knowledge graph embeddings $\mathbf{P}$ with PyTorch-BigGraph.

2nd Stage (MLaP). Propagate labels and predict the neighbor distribution:

1. **MLaP Forward:** Calculate the distribution of each training node's neighbors, i.e., $\hat{\mathbf{y}}_i = \frac{\sum_{j\in V_{\text{train}}:(j,i)\in E_{\text{train}}}\mathbf{y}_j}{|\{j\in V_{\text{train}}:(j,i)\in E_{\text{train}}\}|}$ for all $i \in V_{\text{train}}$.
2. **Learn distribution of a node's neighbors:**
   (a) For each epoch, calculate $\tilde{\mathbf{y}}_i = f_1(\mathbf{x}_i, \mathbf{p}_i; \theta_1)$ for $i \in V_{\text{train}}$.
   (b) Update the parameters s.t. the negative cross-entropy $\mathcal{L}_{\text{CE},1}(\theta_1) = \sum_{i\in V_{\text{train}}} \text{CE}(\hat{\mathbf{y}}_i, \tilde{\mathbf{y}}_i; \theta_1)$ is maximized, in order to bring $\tilde{\mathbf{y}}_i$ statistically close to $\hat{\mathbf{y}}_i$.
   (c) Let $\theta_1^*$ be the parameters at the end of the training that correspond to the epoch with the best validation accuracy.
3. **MLaP Backward:** Calculate $\mathbf{y}'_i = \frac{\sum_{j\in V:(i,j)\in E}\tilde{\mathbf{y}}_j}{|\{j\in V:(i,j)\in E\}|}$ for all $i \in V_{\text{train}}$, where $\tilde{\mathbf{y}}_j = f_1(\mathbf{x}_j, \mathbf{p}_j; \theta_1^*)$.

3rd Stage (Final Model). Predict a node's own label distribution:

1. For each epoch, calculate $\mathbf{y}_{\text{final},i} = f_2(\mathbf{x}_i, \mathbf{p}_i, \mathbf{y}'_i; \theta_2)$.
2. Update the parameters s.t. the negative cross-entropy $\mathcal{L}_{\text{CE},2}(\theta_2) = \sum_{i\in V_{\text{train}}} \text{CE}(\mathbf{y}_i, \mathbf{y}_{\text{final},i}; \theta_2)$ is maximized.

Return $\mathbf{Y}_{\text{final}}$

Figure 3: Example. (a) MLaP Forward & Neighbor Model. (b) MLaP Backward & Final Model. For node $i$ we want to learn a model that takes $i$'s features $\mathbf{x}_i \in \mathbb{R}^{d_X}$ and PEs $\mathbf{p}_i \in \mathbb{R}^{d_P}$ and predicts a value $\tilde{\mathbf{y}}_i \in \mathbb{R}^c$ that matches the label distribution $\hat{\mathbf{y}}_i$ of its neighbors using a shallow model. Next, we want to propagate (outside the training loop) the (predicted) distribution of a node back to its neighbors and use it together with the ego features and the PEs to make a prediction about a node's own label. We propagate $\tilde{\mathbf{y}}_i$ to its neighbors $j_1$ to $j_5$. For example, for $j_1$, we encode the propagated distribution estimate $\tilde{\mathbf{y}}_i$ from $i$ to form $\mathbf{y}'_{j_1}$. We predict the label by using $\mathbf{y}'_{j_1}$, $\mathbf{x}_{j_1}$, $\mathbf{p}_{j_1}$.

In our example in Figure 3(a), we calculate the distribution of node $i$'s neighbors, which is (3/5, 1/5, 1/5, 0). Then, we train a model that predicts the distribution of neighbors, which we denote by $\tilde{\mathbf{y}}_i$, using the ego features $\{\mathbf{x}_i\}_{i\in V_{\text{train}}}$ and the PEs $\{\mathbf{p}_i\}_{i\in V_{\text{train}}}$, and maximize the negative cross-entropy treating $\{\hat{\mathbf{y}}_i\}_{i\in V_{\text{train}}}$ as ground-truth labels; namely, we maximize

$${\mathcal{L}}_{\mathrm{CE},1}(\theta_{1})=\sum_{i\in V_{\mathrm{train}}}\sum_{l\in[c]}{\hat{y}}_{i,l}\log({\tilde{y}}_{i,l}),\tag{2}$$

where $\tilde{\mathbf{y}}_i = f_1(\mathbf{x}_i, \mathbf{p}_i; \theta_1)$ and $\theta_1$ is a learnable parameter vector. Although in this paper we assume we are in the *transductive setting*, this step allows us to be inductive (see Appendix B). In Section 3.3 we give a theoretical justification of this step, namely, *"why is it good to use a parametric model to predict the distribution of neighbors (i.e., a parametric model vs. neighborhood statistics)?"*. Again, in the example of Figure 3(a), we train a model to learn the distribution of $i$'s neighbors.

Finally, we propagate the predicted soft labels $\tilde{\mathbf{y}}_i$ back to the original nodes, i.e., we calculate
Finally, we maximize the negative cross-entropy with respect to a node's own labels, $$\mathcal{L}_{\text{CE},\ 2}(\boldsymbol{\theta}_{2})=\sum_{i\in V_{\text{train}}}\sum_{l\in[c]}\mathbb{I}\{y_{i}=l\}\log(\boldsymbol{y}_{\text{final},\ i,l}),\tag{1}$$ $$\left(4\right)$$ Finally, in our example in Figure 3(b), this corresponds to using the propagated distribution as one of the inputs in the models we train for each of the nodes j1*,...,j*5. Overall, Stage 2 corresponds to learning the neighbor distributions and propagating these distributions, and Stage 3 uses these distributions to train a new model which predicts a node's labels. In Section 3.3, we prove that such a two-step procedure incurs lower errors than directly using the features to predict a node's labels. Complexity: GLINKX is highly scalable as it can utilize existing machine learning architecture eciently since it performs message passing a constant number of times by paying an O(mc) cost, where the dimensionality of classes c is usually small (compared to dX that GCNs rely on). In both Stages 2 and 3 of Algorithm 1, we train node-level MLPs, which allow us to leverage i.i.d. (row-wise) mini-batching, like tabular data; our complexity is similar to other shallow methods (LINKX, FSGNN) (Lim et al., 2021; Maurya et al., 2021). This, combined with the propagation outside the training loops, circumvents the bottlenecks faced by GNNs. Finally, as with every method, the inference complexity is also a function of how many parameters the model has, which also aects the runtime. For more details, refer Appendix A.1. Complementarity: Dierent components of GLINKX provide a *complementary* signal to components proposed in the GNN literature (Maurya et al., 2021; Zhang et al., 2022b; Rossi et al., 2020). One can combine GLINKX with existing architectures (e.g. feature propagations (Maurya et al., 2021; Rossi et al., 2020), label propagations (Zhang et al., 2022b)) for potential metric gains. For example, SIGN computes a series of r œ N feature propagations [X, X, 2*X, . . . ,* rX] where is a matrix (e.g., normalized adjacency or normalized Laplacian) as a preprocessing step. We can include this complementary signal, namely, embed each of the propagated features and combine them in the 3rd Stage to GLINKX. Overall, although in this paper we want to keep GLINKX simple to highlight its main components, we conjecture that adding more components to GLINKX would improve its performance on datasets with highly variable homophily (see Figure 1). Varying Homophily: Graphs with monophily experience homophily, heterophily, or both. For instance, in the yelp-chi dataset - where we classify a review as spam/non-spam (see Figure 1) - we observe a case of monophily together with varying homophily. Specifically in this dataset, spam reviews are linked to non-spam reviews, and non-spam reviews usually connect to other non-spam reviews, which makes the node homophily distribution bimodal. Here the 2nd-order similarity makes the MLaP mechanism eective. ## 3.3 Theoretical Analysis Justification of MLaP (Stage 2): In the MLaP stage, we train a parametric model to learn the distribution of a node's neighbors from the node features ›i 6. Arguably, we can learn such a distribution naïvely by counting the neighbors i that belong to each class. This motivates our first theoretical result. In summary, we show that training a parametric model for learning the distribution of a node's neighbors (as in Stage 2) yields a lower error than the naïve solution. 
Below we present Theorem 1 for undirected graphs (the case of directed graphs is the same, but we omit it for simplicity of exposition):

Theorem 1. Let $G([n], E)$ be an undirected graph of minimum degree $K > c^2$ and let $Q_i \in \Delta_c$ be the distribution of $i$'s neighbors, for every $i \in [n]$. The following two facts are true (under standard assumptions for SGD and the losses):

1. Let $\widehat{Q}_i$ be the sample average of $Q_i$, i.e.,

$${\widehat{Q}}_{i,j}={\frac{1}{|{\mathcal{N}}(i)|}}\sum_{k\in{\mathcal{N}}(i)}\mathbb{I}\{y_{k}=j\}.$$

Then, for every $i \in [n]$, we have that

$$\max_{j\in[c]}\mathbb{E}_{Q_{i}}[|Q_{i,j}-\widehat{Q}_{i,j}|]\leq\mathbb{E}_{Q_{i}}[\|Q_{i}-\widehat{Q}_{i}\|_{\infty}]\leq O\left(\sqrt{\frac{\log(Kc)}{K}}\right).$$

2. Let $q(\cdot|\xi_i;\theta)$ be a model parametrized by $\theta \in \mathbb{R}^D$ that uses the features $\xi_i$ of each node $i$ to predict $Q_i$. We estimate the parameter $\theta_I$ by running SGD for $t = n$ steps to maximize $\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c} Q_{i,j}\log q(j|\xi_i;\theta)$. Then, for every $i \in [n]$, we have that

$$\max_{j\in[c]}\mathbb{E}[|q(j|\xi_{i};\theta_{I})-Q_{i,j}|]\leq O\left(\sqrt{\frac{\log n}{n}}\right).$$

The expectation is taken over $Q_i$ and the randomness of the SGD.

The proof can be found in Appendix F. It is evident here that if the minimum degree $K$ is much smaller than $n$, then the parametric model has a lower error than the naïve approach, namely $\tilde{O}(n^{-1/2})$ compared to $\tilde{O}(K^{-1/2})$.

Justification of MLaP and Final Model Stages (Stages 2 and 3): We now provide theoretical foundations for the two-stage approach. Specifically, we argue that first learning a node's 2nd-hop neighbor distributions (we assume for simplicity, again, that the graph is undirected) with a parametric model as in Theorem 1, and then running a two-phase algorithm to learn a parametric model that predicts a node's label, yields a lower error than naïvely training a shallow parametric model to learn a node's labels. The first phase of the two-phase algorithm trains the model by minimizing the cross-entropy between the predictions and the 2nd-hop neighborhood distribution. The second phase trains a joint objective that uses the learned neighbor distributions and the actual labels, starting from the model learned in the first phase.

Theorem 2. Let $G([n], E)$ be an undirected graph of minimum degree $K > c^2$, let $P_i$ be the likelihood of node $i$ to be assigned to a different class, and let $Q_i$ and $q(\cdot|\xi_i;\theta_I)$ be defined as in Theorem 1. Let $p(\cdot|\xi_i;\mathbf{w})$ be a model parametrized by $\mathbf{w} \in \mathbb{R}^D$ that is used to predict the class assignments $y_i \sim p(\cdot|\xi_i;\mathbf{w})$. Let $\mathbf{w}_*$ be the optimal parameter. The following are true (under standard assumptions for SGD and the losses):
1. The naïve optimization scheme that runs SGD to maximize $\mathcal{G}(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c} P_{i,j}\log p(j|\xi_i;\mathbf{w})$ for $n$ steps has error

$$\mathbb{E}[{\mathcal{G}}(\mathbf{w}_{n+1})-{\mathcal{G}}(\mathbf{w}_{*})]\leq O\left({\frac{\log n}{n}}\right),$$

where the expectation is taken over $P_i$ and the randomness of the SGD.

6In Section 3.1, the $\xi_i$'s correspond to the augmented features $\xi_i = [\mathbf{x}_i; \mathbf{p}_i]$.

| | PubMed | ogbn-arxiv | squirrel | yelp-chi | arxiv-year |
|---|---|---|---|---|---|
| | Homophilous | Homophilous | Heterophilous | Heterophilous | Heterophilous |
| n | 19.7K | 169.3K | 5.2K | 169.3K | 45.9K |
| m | 44.3K | 1.16M | 216.9K | 7.73M | 1.16M |
| Homophily (Lim et al., 2021) | 0.66 | 0.41 | 0.02 | 0.05 | 0.27 |
| $d_X$ | 500 | 128 | 2089 | 32 | 128 |
| c | 27 | 40 | 5 | 2 | 5 |
| GLINKX w/ KGEs | 87.95 ±0.30 | 69.27 ±0.25 | 45.83 ±2.89 | 87.82 ±0.20 | 54.09 ±0.61 |
| GLINKX w/ Adjacency | 88.03 ±0.30 | 69.09 ±0.13 | 69.15 ±1.87 | 89.32 ±0.45 | 53.07 ±0.29 |
| Label Propagation (1-hop) | 83.02 ±0.35 | 69.59 ±0.00 | 32.22 ±1.45 | 85.98 ±0.28 | 43.71 ±0.22 |
| LINKX (from (Lim et al., 2021)) | 87.86 ±0.77 | 67.32 ±0.24 | 61.81 ±1.80 | 85.86 ±0.40 | 56.00 ±1.34 |
| LINKX (our runs) | 87.55 ±0.37 | 63.91 ±0.18 | 61.46 ±1.60 | 88.25 ±0.24 | 53.78 ±0.06 |
| GCN w/ 1 Layer | 86.43 ±0.74 | 50.76 ±0.20 | 26.17 ±2.49 | 85.57 ±0.15 | 44.82 ±0.18 |
| GAT w/ 1 Layer | 86.41 ±0.53 | 54.42 ±0.10 | 30.13 ±1.55 | 86.02 ±1.00 | 45.66 ±0.36 |
| FSGNN w/ 1 Layer | 88.93 ±0.31 | 61.82 ±0.84 | 64.06 ±2.69 | 86.36 ±0.36 | 42.86 ±0.22 |
| Higher-order GCN | 86.29 ±0.46 | 71.18 ±0.27 (*) | 24.81 ±1.70 | 85.60 ±0.15 | 44.58 ±0.28 |
| Higher-order GAT | 86.64 ±0.40 | 73.66 ±0.11 (*) | 27.00 ±1.51 | 85.63 ±0.18 | 45.77 ±0.41 |
| Higher-order FSGNN | 89.37 ±0.49 | 69.26 ±0.36 | 68.04 ±2.19 | 86.33 ±0.30 | 44.89 ±0.29 |
| Label Propagation (2-hop) | 83.44 ±0.35 | 69.78 ±0.00 | 43.41 ±1.44 | 85.95 ±0.26 | 46.30 ±0.27 |
| Label Prop. on $\mathbb{I}[A^2 - A - I \geq 0]$ | 82.14 ±0.33 | 9.87 ±0.00 | 24.43 ±1.18 | 85.68 ±0.32 | 23.08 ±0.13 |

Table 1: Small-scale and medium-scale experimental results. (*) = results from the OGB leaderboard.

| Ablation Type | Dataset | Stages | All | Remove ego embeddings | Remove propagation | Remove PEs |
|---|---|---|---|---|---|---|
| Heterophilous | arxiv-year | All Stages | 54.09 ±0.61 | 53.52 ±0.77 | 50.83 ±0.24 | 39.06 ±0.35 |
| | arxiv-year | 3rd Stage | 54.09 ±0.61 | 53.69 ±0.65 | 50.83 ±0.24 | 49.13 ±1.10 |
| Homophilous | ogbn-arxiv | All Stages | 69.27 ±0.25 | 61.26 ±0.33 | 62.70 ±0.34 | 65.64 ±0.18 |
| | ogbn-arxiv | 3rd Stage | 69.27 ±0.25 | 67.60 ±0.39 | 62.70 ±0.34 | 69.62 ±0.15 |

Table 2: Ablation Study. We use the hyperparameters of the best run from Table 1 with KGEs.

2. The two-phase optimization scheme that runs SGD to maximize
We observe that the two-phase optimization scheme can reduce the error by a factor of $\sqrt{\log n/\log\log n}$, highlighting the importance of using the distribution of the 2nd-hop neighbors of a node to predict its label; this holds regardless of the homophily properties of the graph. Also, note that the above two-phase optimization scheme differs from the description of the method we gave in Algorithm 1. The difference is that the distribution of a node's neighbors is embedded into the model in the case of Algorithm 1, whereas in Theorem 2 it is embedded into the loss function as a regularizer. In Algorithm 1, we chose to incorporate this information in the model because using multiple losses harms scalability and makes training harder in practice. In the same spirit, GCNs (Kipf & Welling, 2016) replace explicit regularization with the graph Laplacian by building the topology into the model (see also Hamilton et al. (2017); Yang et al. (2016)).

## 4 Experiments & Conclusions

Small-scale and Medium-scale datasets. We experiment with homophilous and heterophilous datasets (see Table 1). We train KGEs with Pytorch-Biggraph (Lerer et al., 2019; Yang et al., 2014; Wang et al., 2017). For homophilous datasets, we compare with vanilla GCN and GAT, FSGNN, and Label Propagation (LP). For a fair comparison, we compare with one-layer GCN/GAT/FSGNN/LP since GLINKX is one-hop. We also compare with higher-order (h.o.) GCN/GAT/FSGNN/LP with 2 and 3 layers. In the heterophilous case, we compare with LINKX7 because it is scalable and has been shown to work better than other baselines (e.g., H2GCN, MixHop, etc.), and with FSGNN. For completeness, we also report numbers for GCN and GAT (1-layer and h.o.), which are known to underperform in heterophilous settings and suffer from message-passing bottlenecks, as well as LP. Note that we do not compare GLINKX with other more complex methods because (i) many GNN-based methods are not scalable8, (ii) GLINKX is complementary to them (see Section 3.2), and we can incorporate these designs into GLINKX, and (iii) for the heterophilous datasets, GLINKX outperforms or matches LINKX, which we believe is a strong baseline that outperforms or matches many recent methods (cf. He et al. (2021); Tang et al. (2019); Dai et al. (2022)) while being substantially more scalable and tested on bigger datasets. Finally, we use a *ResNet* module to combine our algorithm's components from Stages 2 and 3. Details about the hyperparameters we use are in Appendix C.

7 We have run GLINKX with a hyperparameter space that is a subset of the sweeps reported in (Lim et al., 2021) due to resource constraints. A bigger hyperparameter search would improve our results.

8 Here we have added a comparison with vanilla GCN and GAT to motivate the architecture of our method and the usefulness of the MLaP layer, and we should note that the vanilla GCN and GAT have the same scalability problems.

Table 1: Small-scale and medium-scale experimental results. (*) = results from the OGB leaderboard.

| | Homophilous Datasets | | Heterophilous Datasets | | |
| | PubMed | ogbn-arxiv | squirrel | yelp-chi | arxiv-year |
| n | 19.7K | 169.3K | 5.2K | 169.3K | 45.9K |
| m | 44.3K | 1.16M | 216.9K | 7.73M | 1.16M |
| Homophily (Lim et al., 2021) | 0.66 | 0.41 | 0.02 | 0.05 | 0.27 |
| dX | 500 | 128 | 2089 | 32 | 128 |
| c | 27 | 40 | 5 | 2 | 5 |
| GLINKX w/ KGEs | 87.95 ±0.30 | 69.27 ±0.25 | 45.83 ±2.89 | 87.82 ±0.20 | 54.09 ±0.61 |
| GLINKX w/ Adjacency | 88.03 ±0.30 | 69.09 ±0.13 | 69.15 ±1.87 | 89.32 ±0.45 | 53.07 ±0.29 |
| Label Propagation (1-hop) | 83.02 ±0.35 | 69.59 ±0.00 | 32.22 ±1.45 | 85.98 ±0.28 | 43.71 ±0.22 |
| LINKX (from (Lim et al., 2021)) | 87.86 ±0.77 | 67.32 ±0.24 | 61.81 ±1.80 | 85.86 ±0.40 | 56.00 ±1.34 |
| LINKX (our runs) | 87.55 ±0.37 | 63.91 ±0.18 | 61.46 ±1.60 | 88.25 ±0.24 | 53.78 ±0.06 |
| GCN w/ 1 Layer | 86.43 ±0.74 | 50.76 ±0.20 | 26.17 ±2.49 | 85.57 ±0.15 | 44.82 ±0.18 |
| GAT w/ 1 Layer | 86.41 ±0.53 | 54.42 ±0.10 | 30.13 ±1.55 | 86.02 ±1.00 | 45.66 ±0.36 |
| FSGNN w/ 1 Layer | 88.93 ±0.31 | 61.82 ±0.84 | 64.06 ±2.69 | 86.36 ±0.36 | 42.86 ±0.22 |
| Higher-order GCN | 86.29 ±0.46 | 71.18 ±0.27 (*) | 24.81 ±1.70 | 85.60 ±0.15 | 44.58 ±0.28 |
| Higher-order GAT | 86.64 ±0.40 | 73.66 ±0.11 (*) | 27.00 ±1.51 | 85.63 ±0.18 | 45.77 ±0.41 |
| Higher-order FSGNN | 89.37 ±0.49 | 69.26 ±0.36 | 68.04 ±2.19 | 86.33 ±0.30 | 44.89 ±0.29 |
| Label Propagation (2-hop) | 83.44 ±0.35 | 69.78 ±0.00 | 43.41 ±1.44 | 85.95 ±0.26 | 46.30 ±0.27 |
| Label Prop. on $\mathbb{I}[A^2 - A - I \geq 0]$ | 82.14 ±0.33 | 9.87 ±0.00 | 24.43 ±1.18 | 85.68 ±0.32 | 23.08 ±0.13 |

In the heterophilous datasets, GLINKX is better than or competitive with the baselines. Moreover, the performance gap between using KGEs and adjacency embeddings shrinks as the dataset grows. In the homophilous datasets, GLINKX outperforms 1-layer GCN/GAT/LP/FSGNN and LINKX. In PubMed, GLINKX beats h.o. GCN/GAT, and in arxiv-year GLINKX is very close to the performance of GCN/GAT.
It is important to highlight that the higher-order GNN methods (GCN/GAT) are as good as GLINKX or better **only** in the case where the graph is homophilous, since GCNs/GATs have been shown to perform poorly on heterophilous graphs; see Zhu et al. (2020); Lim et al. (2021); Di Giovanni et al. (2022); Jin et al. (2022a); Luan et al. (2021), and the references therein.

Finally, we note that GLINKX *produces consistent results across regime shifts*. In detail, in the heterophilous regime, GLINKX performs on par with LINKX; however, when we shift to the homophilous regime, LINKX's performance drops, whereas GLINKX's performance continues to be high. Similarly, while FSGNN performs similarly to GLINKX on the homophilous datasets, we observe a significant performance drop on the heterophilous datasets (see arxiv-year).

Scalability Experiment. To show the scalability of GLINKX, we experiment with the ogbn-products dataset from the OGB benchmark (Hu et al., 2020), which has n = 2.44M nodes and m = 61.8M edges. The features have dimension dX = 100 and our aim is to predict the correct class out of c = 47 classes. The graph is categorized as a homophilous graph and has a homophily equal to 0.45. As shown in Table 3, GLINKX outperforms LINKX by a big margin. Moreover, GLINKX performs much better than the h.o. GCN and FSGNN, despite their higher number of hops, and has performance comparable to the h.o. GAT. However, at the same time, GCN and GAT require neighbor sampling, which raises the training time to days, compared to the simple i.i.d. sampling GLINKX performs, which is able to complete within hours, on par with LINKX. The ogbn-products dataset took 17.53 seconds on average per epoch to train (for both phases), including the propagation costs. At the same time, the 1-layer GCN took 104.39 seconds per epoch on average to train, and the 1-layer GAT took 107.39 seconds on average per epoch to train. This indicates at least *an 83% reduction* in training time (or equivalently, our method is approximately 6× faster).

Table 3: Results for ogbn-products. (*) = results from the OGB leaderboard. We have omitted the 1-layer GCN/GAT and LP-based methods since GLINKX repeatedly outperforms these by a big margin in the small- and medium-scale datasets.

| GLINKX w/ KGEs | 78.15 ±0.10 |
| LINKX | 69.02 ±0.61 |
| GCN w/ 1 Layer | 66.28 ±0.12 |
| GAT w/ 1 Layer | 65.31 ±0.27 |
| FSGNN w/ 1 Layer | 70.44 ±0.15 |
| Higher-order GCN | 75.64 ±0.21 (*) |
| Higher-order GAT | 79.45 ±0.59 (*) |
| Higher-order FSGNN | 76.03 ±0.33 |

Ablation Study. We ablate each component of Algorithm 1 to see each component's performance contribution. We use the hyperparameters of the best model from Table 1. We perform two types of ablations: (i) we remove each of the components from all stages of the training, and (ii) we remove the corresponding components only from the 3rd Stage. Except for removing the PEs from the 3rd Stage only on ogbn-arxiv, all components contribute to increased performance on both datasets. Note that adding PEs in the 1st Stage does improve performance, suggesting that this is the primary use case of PEs.

Table 2: Ablation Study. We use the hyperparameters of the best run from Table 1 with KGEs.

| Ablation Type | Dataset | Stages | All | Remove ego embeddings | Remove propagation | Remove PEs |
| Heterophilous | arxiv-year | All Stages | 54.09 ±0.61 | 53.52 ±0.77 | 50.83 ±0.24 | 39.06 ±0.35 |
| | arxiv-year | 3rd Stage | 54.09 ±0.61 | 53.69 ±0.65 | 50.83 ±0.24 | 49.13 ±1.10 |
| Homophilous | ogbn-arxiv | All Stages | 69.27 ±0.25 | 61.26 ±0.33 | 62.70 ±0.34 | 65.64 ±0.18 |
| | ogbn-arxiv | 3rd Stage | 69.27 ±0.25 | 67.60 ±0.39 | 62.70 ±0.34 | 69.62 ±0.15 |
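Relating to the scalability comparison above, the pattern that keeps GLINKX's per-epoch cost low can be sketched as follows (a simplified illustration under our own naming assumptions, not the authors' code): the neighborhood propagation is a one-off sparse operation performed outside the training loop, after which training uses plain i.i.d. node minibatches instead of neighbor sampling.

```python
import torch

def precompute_propagation(edge_index, node_feats):
    """One-off sparse mean-aggregation over neighbors, done before training."""
    n = node_feats.shape[0]
    src, dst = edge_index
    agg = torch.zeros_like(node_feats).index_add_(0, src, node_feats[dst])
    deg = torch.bincount(src, minlength=n).clamp(min=1).unsqueeze(1)
    return agg / deg

def train_epoch(model, opt, feats, propagated, labels, batch_size=4096):
    # Propagated features are treated as fixed inputs, so each SGD step
    # touches only a batch of rows, never the graph itself.
    perm = torch.randperm(labels.shape[0])
    for idx in perm.split(batch_size):
        logits = model(torch.cat([feats[idx], propagated[idx]], dim=-1))
        loss = torch.nn.functional.cross_entropy(logits, labels[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
```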
## 5 Conclusion

We present GLINKX, a simple, scalable shallow method for node classification in homophilous and heterophilous graphs that combines three components: (i) ego embeddings, (ii) PEs, and (iii) monophilous propagations. As future work, (i) GLINKX can be extended to heterogeneous graphs, (ii) more expressive methods such as attention or Wasserstein barycenters (Cuturi & Doucet, 2014) can be used for averaging the low-dimensional messages, and (iii) complementary signals can be added. While our method is outperformed by the current GNN-based state-of-the-art methods (Rossi et al., 2023; Luan et al., 2021), our *simple* and *scalable* design avoids the scalability bottlenecks of GNNs, which include neighborhood sampling, a very costly operation, by leveraging propagations outside of the training loop, and can be easily and efficiently deployed at an industrial scale with modern infrastructure. Our extensive evaluation on six datasets of various sizes from the homophilous and heterophilous regimes, and the theoretical error bounds we provide, justify our design choices and show that GLINKX is able to perform well and consistently across regime shifts.

## References

Marjan Albooyeh, Rishab Goel, and Seyed Mehran Kazemi. Out-of-sample representation learning for knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 2657–2666, 2020.

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=i80OPhOCVH2.

Kristen M Altenburger and Johan Ugander. Monophily in social networks introduces similarity among friends-of-friends. *Nature Human Behaviour*, 2(4):284–290, 2018a.

Kristen M Altenburger and Johan Ugander. Node attribute prediction: An evaluation of within-versus across-network tasks. In *NeurIPS Workshop on Relational Representation Learning*, 2018b.

Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. *arXiv preprint arXiv:1806.01261*, 2018.

K. Bhatia, K. Dahiya, H. Jain, P. Kar, A. Mittal, Y. Prabhu, and M. Varma. The extreme classification repository: Multi-label datasets and code, 2016. URL http://manikvarma.org/downloads/XC/XMLRepository.html.

Aleksandar Bojchevski, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. Scaling graph neural networks with approximate pagerank. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 2464–2473, 2020.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. *Advances in Neural Information Processing Systems*, 26, 2013.

Xavier Bresson and Thomas Laurent. Residual gated graph convnets. *arXiv preprint arXiv:1711.07553*, 2017.

Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 257–266, 2019.
Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. *arXiv preprint arXiv:2006.07988*, 2020.

Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In Eric P. Xing and Tony Jebara (eds.), *Proceedings of the 31st International Conference on Machine Learning*, volume 32 of *Proceedings of Machine Learning Research*, pp. 685–693, Bejing, China, 22–24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/cuturi14.html.

Enyan Dai, Shijie Zhou, Zhimeng Guo, and Suhang Wang. Label-wise graph convolutional network for heterophilic graphs. In *Learning on Graphs Conference*, pp. 26–1. PMLR, 2022.

Francesco Di Giovanni, James Rowbottom, Benjamin P Chamberlain, Thomas Markovich, and Michael M Bronstein. Graph neural networks as gradient flows. *arXiv preprint arXiv:2206.10991*, 2022.

Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In *Proceedings of the 29th ACM International Conference on Information & Knowledge Management*, pp. 315–324, 2020.

Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. *arXiv preprint arXiv:2110.07875*, 2021.

Ahmed El-Kishky, Thomas Markovich, Serim Park, Chetan Verma, Baekjin Kim, Ramy Eskander, Yury Malkov, Frank Portman, Sofía Samaniego, Ying Xiao, et al. Twhin: Embedding the twitter heterogeneous information network for personalized recommendation. *arXiv preprint arXiv:2202.05387*, 2022.

Fabrizio Frasca, Benjamin Paul Chamberlain, Davide Eynard, Emanuele Rossi, and Federico Monti. Simple scalable graph neural networks, 2020. URL https://blog.twitter.com/engineering/en_us/topics/insights/2021/simple-scalable-graph-neural-networks. Blog post.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 855–864, 2016.

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. *Advances in Neural Information Processing Systems*, 30, 2017.

Mingguo He, Zhewei Wei, Hongteng Xu, et al. Bernnet: Learning arbitrary graph spectral filters via bernstein approximation. *Advances in Neural Information Processing Systems*, 34:14239–14251, 2021.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. *Advances in Neural Information Processing Systems*, 33:22118–22133, 2020.

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R Benson. Combining label propagation and simple models out-performs graph neural networks. *arXiv preprint arXiv:2010.13993*, 2020.

Di Jin, Rui Wang, Meng Ge, Dongxiao He, Xiang Li, Wei Lin, and Weixiong Zhang. Raw-gnn: Random walk aggregation based graph neural network. *arXiv preprint arXiv:2206.13953*, 2022a.

Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, and Neil Shah. Graph condensation for graph neural networks. In *International Conference on Learning Representations*, 2022b. URL https://openreview.net/forum?id=WLEx3Jo4QaB.

Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. *arXiv preprint arXiv:2207.02505*, 2022.

Thomas N Kipf and Max Welling.
Semi-supervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*, 2016.

Runlin Lei, Zhen Wang, Yaliang Li, Bolin Ding, and Zhewei Wei. Evennet: Ignoring odd-hop neighbors improves robustness of graph neural networks. *arXiv preprint arXiv:2205.13892*, 2022.

Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. Pytorch-biggraph: A large scale graph embedding system. *Proceedings of Machine Learning and Systems*, 1:120–131, 2019.

Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. Deepgcns: Can gcns go as deep as cnns? In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9267–9276, 2019.

Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. *Advances in Neural Information Processing Systems*, 34:20887–20902, 2021.

Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. Is heterophily a real nightmare for graph neural networks to do node classification? *arXiv preprint arXiv:2109.05641*, 2021.

Sunil Kumar Maurya, Xin Liu, and Tsuyoshi Murata. Improving graph neural networks with simple architecture design. *arXiv preprint arXiv:2105.07634*, 2021.

Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. *Annual Review of Sociology*, pp. 415–444, 2001.

Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 4602–4609, 2019.

Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. *arXiv preprint arXiv:2002.05287*, 2020.

Everett M Rogers, Arvind Singhal, and Margaret M Quinlan. Diffusion of innovations. In *An Integrated Approach to Communication Theory and Research*, pp. 432–448. Routledge, 2014.

Emanuele Rossi, Fabrizio Frasca, Ben Chamberlain, Davide Eynard, Michael Bronstein, and Federico Monti. Sign: Scalable inception graph neural networks. *arXiv preprint arXiv:2004.11198*, 7:15, 2020.

Emanuele Rossi, Bertrand Charpentier, Francesco Di Giovanni, Fabrizio Frasca, Stephan Günnemann, and Michael Bronstein. Edge directionality improves learning on heterophilic graphs. *arXiv preprint arXiv:2305.10498*, 2023.

Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. *Journal of Complex Networks*, 9(2):cnab014, 2021.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. *AI Magazine*, 29(3):93–93, 2008.

Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. *arXiv preprint arXiv:2009.03509*, 2020.

Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. An overview of microsoft academic service (mas) and applications. In *Proceedings of the 24th International Conference on World Wide Web*, pp. 243–246, 2015.

Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. *arXiv preprint arXiv:1910.00452*, 2019.
Chuxiong Sun, Hongming Gu, and Jie Hu. Scalable and adaptive graph neural networks with self-label-enhanced training. *arXiv preprint arXiv:2104.09376*, 2021.

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In *Proceedings of the 24th International Conference on World Wide Web*, pp. 1067–1077, 2015.

Shanshan Tang, Bo Li, and Haijun Yu. Chebnet: Efficient and stable constructions of deep neural networks with rectified power units using chebyshev approximations. *arXiv preprint arXiv:1911.05467*, 2019.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017.

Haorui Wang, Haoteng Yin, Muhan Zhang, and Pan Li. Equivariant and stable positional encoding for more powerful graph neural networks. *arXiv preprint arXiv:2203.00199*, 2022.

Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. Knowledge graph embedding: A survey of approaches and applications. *IEEE Transactions on Knowledge and Data Engineering*, 29(12):2724–2743, 2017.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. *arXiv preprint arXiv:1412.6575*, 2014.

Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In *International Conference on Machine Learning*, pp. 40–48. PMLR, 2016.

Shichang Zhang, Yozen Liu, Yizhou Sun, and Neil Shah. Graph-less neural networks: Teaching old MLPs new tricks via distillation. In *International Conference on Learning Representations*, 2022a. URL https://openreview.net/forum?id=4p6_5HBWPCw.

Wentao Zhang, Ziqi Yin, Zeang Sheng, Yang Li, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, and Bin Cui. Graph attention multi-layer perceptron. *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, 2022b.

Wenqing Zheng, Edward W Huang, Nikhil Rao, Sumeet Katariya, Zhangyang Wang, and Karthik Subbian. Cold brew: Distilling graph node representations with incomplete or missing neighborhoods. In *International Conference on Learning Representations*, 2022a. URL https://openreview.net/forum?id=1ugNpm7W6E.

Xin Zheng, Yixin Liu, Shirui Pan, Miao Zhang, Di Jin, and Philip S Yu. Graph neural networks for graphs with heterophily: A survey. *arXiv preprint arXiv:2202.07082*, 2022b.

Zhiqiang Zhong, Sergey Ivanov, and Jun Pang. Simplifying node classification on heterophilous graphs with compatible label propagation. *arXiv preprint arXiv:2205.09389*, 2022.

Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. *Advances in Neural Information Processing Systems*, 33:7793–7804, 2020.

Jiong Zhu, Ryan A Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K Ahmed, and Danai Koutra. Graph neural networks with heterophily. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11168–11176, 2021.
Review 1: Summary: This work proposes a new graph deep learning approach to improve the node classification performance over both homophilic and heterophilic graphs. The key idea is to leverage the property that the labels of nodes that share a common neighbor tend to be monophilous. The algorithm first trains a model based on single-node features to predict the aggregated labels of neighbors. Then, use the learned model to predict the unlabeled nodes and propagate the obtained pseudo labels to neighbors. A second model uses the received aggregated pseudo labels combined with node-self features to make the final prediction. The work also finds that node positional embeddings derived from knowledge graph embedding algorithms are effective. Overall, the algorithm achieves good prediction accuracy. Strengths and Weaknesses: Strengths: 1. The overall algorithm of this work mainly combines two previous frameworks: LINKX [1] (combining with node positional embeddings without keeping permutation equivariant) and C&S [2] (label propagation and combine it with NNs), but the combination is not trivial to me. It is quite interesting to see such a kind of combination work. 2. The idea of leveraging monophilous labels of the neighbors of a node is also new. 3. The empirical performance of the model is good. 4. Overall the paper is written well, and it is easy to follow. 5. The work also provides some theory to explain why the label propagation may help. [1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods, Lim et al., NeurIPS 2021 [2] Combining label propagation and simple models out-performs graph neural networks, Huang et al., ICLR 2021 Weaknesses: 1. The performance of the model is not uniformly the best. In particular, over homophilic graphs, there are some non-trivial gaps between the proposed approach and the state-of-the-art method. Over some heterophilic graphs, say arxiv-year, the performance of the proposed method is not the best as well. 2. In most cases, using node positional encoding achieves even worse performance than using rows in the adjacency matrix (LINKX adopts). 3. Miss the discussion on the key reference [3] that also uses generalized network embedding methods as positional encoding for node-level tasks. [3] Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks, Wang et al. ICLR 2022 Requested Changes: I expect that the authors can find more datasets that can show the effectiveness of the proposed approach. Also, add discussion on [3]. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper considers the node classification task on one graph in a supervised setting. The authors propose a new algorithm based on local computations that can scale to large graphs. Strengths and Weaknesses: There are some main issues to be addressed before any possible publication: 1- The theoretical analysis does not make any sense. In Theorem 1, the authors introduce a likelihood but never introduce a probabilistic model. I do not understand at all what they mean here. What is Q_i? and what is the measure of probability used for the expectation? Looking at the proof does not help. In the proof, there is a new notation P_k, that has never been introduced. Why can you assume that the loss is a smooth function of the parameters? In the proof, what do you mean by: "Let \theta_1 = \theta_{n+1}" (top of page 23)? 
2- I do not understand why in stage 2 of the algorithm, you propagate the predicted soft-labels instead of propagating the true \hat{y} given in equation (1). 3- In the experiments, details need to be provided. The measure of performance is never defined. The main motivation for the work is to derive a scalable algorithm. In the introduction, the authors say that GNNs do not scale but Table 3 provides performances for GNN on large datasets. To make their points, the authors should provide numerical evidence that their algorithm is faster than those GNNs. Requested Changes: As explained above, the paper is not ready for publication. I am not sure if the proofs can be repaired. The experimental section is weak. The organization of the paper is also problematic. In particular, there are a lot of places in the Introduction where some notions are defined only much later in the paper. As a result, the paragraph at the beginning of page 2 cannot be understood with the context provided. I am not sure the authors should introduce knowledge graphs as they are actually using them only with one type of relation. Broader Impact Concerns: NA ================================================== Review 3: Summary: This paper proposes GLINKX, a method that works for both homophilous and heterophilous node classification, and is also scalable. The proposed GLINKX leverages a novel notion called monophilous (i.e. the neighbors of a node tend to be of the same class). To leverage the property, the authors propose a two-stage bilevel label propagation method to propagate the 2nd-order neighbor label distribution back to the original node. The authors also design a positional encoding scheme based on knowledge graph embedding (KGE). Experiments show that the proposed method is competitive in terms of both homophilous and heterophilous graphs, and is also scalable compared to GNN methods. Strengths and Weaknesses: ## Strengths - The paper aims to unify homophilous and heterophilous node classification with a unified method, which is a novel and interesting attempt as existing works may primarily tackle one of them. - The introduced notion of monophily seems interesting and has the potential for unifying homophily and heterophily. - The proposed method is competitive in both homophilous and heterophilous classification and is also scalable. - Experiments are all done on relatively large datasets. ## Weaknesses - Although monophily is conceptually appealing to unify the homophily and heterophily graphs, there is no quantitative analysis on how monophily holds true on the real-world graphs. It would be better if the authors can use some quantitative metrics to verify the monophily property. - It is unclear why the authors specifically choose knowledge graph embeddings for learning positional encodings. Specifically, there are many methods for ordinary graph embedding (i.e. not knowledge graphs), such as DeepWalk (KDD 2014), Node2vec (KDD 2016), etc. Moreover, they are also implicitly factorizing the adjacency matrix (Levy and Goldberg 2014). Therefore, it is unclear why the authors specifically use KGE (namely also embedding the relation instead of nodes). More justifications should be needed. - Are the experiments in Table 1 and Table 3 in the transductive or inductive setting? The authors say on Page 2 that the paper focuses on the transductive setting, but this does not seem convincing to demonstrate the contribution of GLINKX. 
First, the authors say in Section 1 that LINKX is not practical because it is not inductive, and failing to show GLINKX in the inductive setting means that a key advantage of GLINKX is not demonstrated. Moreover, GNN baselines are often evaluated with inductive settings. It is unclear whether the comparison is fair. - The authors use the subtitle 'scalability experiment' in Section 4, but the authors fail to show the training time of GLINKX, which does not seem convincing as a 'scalability experiment'. - It seems that the proposed method GLINKX relies heavily on ground truth labels (to obtain the label distribution). Therefore, GLINKX may be sensitive to the number of labels. It is good to have a sensitivity analysis of GLINKX (as well as other semi-supervised GNN baselines). - It seems that the proposed notion of monophily is similar to the 'second-order similarity' proposed in LINE (Tang et al. 2015). The authors are suggested to justify the similarity and difference. - (Minor) The arrangement of figures and algorithms makes the paper hard to read. For example, Figure 3 is referenced on Page 2 but lies on 9. Figure 2 is referenced on Page 4 but lies on Page 8. Algorithm 1 is referenced on Page 6 but given on Page 3. Please consider revising the organization. (Levy and Goldberg 2014) Neural Word Embedding as Implicit Matrix Factorization, NIPS 2014. (Tang et al. 2015) LINE: Large-scale Information Network Embedding, WWW 2015. Requested Changes: Please see 'Weaknesses' and try to address them with revisions or clarifications. Broader Impact Concerns: No. ================================================== Metareview: Recommendation: Reject Comment: Concretely, the proposed MLaP method first calculates the averaged label distribution for each node from the labels of its neighbors, \hat{y}, then trains a parametric model to predict this $\hat{y}$. The analysis claims that the prediction made by the parametric model is more accurate compared to $\hat{y}$ itself in terms of approximating the true label distribution. Furthermore, the analysis claims that the approximation error of the parametric model goes down as the number of SGD steps, n, increases. This analysis is wrong even just looking at the claims themselves. When optimizing a model to approximate $\hat{y}$, even assuming perfect approximation, the model would at best do as good as the target $\hat{y}$. The approximation error, therefore, should not be better than achievable by $\hat{y}$. If the claim that the approximation error decreases with SGD steps n were true, then as $n \rightarrow \infty$ we would have 0 approximation error, which is obviously wrong. The authors should probably check if all the simplification and independence assumptions hold in their setup. I also found that the proofs and claims made in the analysis didn’t state the setting and assumptions very clearly, which should be improved. In my opinion the good thing about learning a parametric model to predict the smoothed label distribution is that the parametric model can then be used for nodes that e.g. does not have labeled neighbors and can therefore generalize, and can also potentially be used in the inductive setting, that’s not possible by a pure label propagation approach. In the ablation experiments, the authors also did not show that this learned parametric model works better than just the empirical average (or maybe that’s not possible). It would be good to demonstrate this somehow. ==================================================
# Masked Capsule Autoencoders

Anonymous authors Paper under double-blind review

## Abstract

We propose Masked Capsule Autoencoders (MCAE), the first Capsule Network that utilises pretraining in a self-supervised manner. Capsule Networks have emerged as a powerful alternative to Convolutional Neural Networks (CNNs), and have shown favourable properties when compared to Vision Transformers (ViT), but have struggled to effectively learn when presented with more complex data, leading to Capsule Network models that do not scale to modern tasks. Our proposed MCAE model alleviates this issue by reformulating the Capsule Network to use masked image modelling as a pretraining stage before finetuning in a supervised manner. Across several experiments and ablation studies we demonstrate that, similarly to CNNs and ViTs, Capsule Networks can also benefit from self-supervised pretraining, paving the way for further advancements in this neural network domain. For instance, when pretraining on the Imagenette dataset, a dataset of 10 classes of Imagenet-sized images, we achieve not only state-of-the-art results for Capsule Networks but also a 9% improvement compared to purely supervised training. Thus we propose that Capsule Networks benefit from and should be trained within a masked image modelling framework, with a novel capsule decoder, to improve a Capsule Network's performance on realistically sized images.

## 1 Introduction

Capsule Networks are an evolution of Convolutional Neural Networks (CNNs) which remove pooling operations and replace scalar neurons with a fixed number of vector or matrix representations, known as capsules, at each location in the feature map. At each location there will be multiple capsules, each theoretically representing a different concept. Each of these capsules has a corresponding activation value between 0 and 1 which represents how strongly the network believes the concept which the capsule represents is present at that location in the feature map. Capsule Networks have shown promising signs, such as being naturally strong in invariant and equivariant tasks (Sabour et al., 2017; Hinton et al., 2018; De Sousa Ribeiro et al., 2020; Hahn et al., 2019; Ribeiro et al., 2020; 2022) while having low parameter counts, but have yet to scale to the more complex datasets with realistic resolutions on which CNNs and Vision Transformers (ViTs) are typically benchmarked.

Masked Image Modelling (MIM) is a Self Supervised Learning (SSL) technique with roots in language modelling (Devlin et al., 2018). In language modelling, words are removed from passages of text, and the network is then trained to predict the correct words to fill in the gaps. This technique can be extended to image modelling by splitting an image into equal regions called patches, randomly removing some of these patches, and then requiring the network to predict the pixel values of the removed patches. This has been shown to require the network to form a world model strong enough to reconstruct occluded areas from the remaining visible areas, in both Vision Transformers (ViTs) (He et al., 2021) and CNNs (Woo et al., 2023). When this technique is combined with supervised finetuning, accuracy can be significantly improved compared to not using any pretraining (He et al., 2021; Woo et al., 2023).
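To make the masking operation concrete, the following is a minimal sketch (in PyTorch; the patch size, mask ratio, and function names are illustrative assumptions rather than any cited implementation) of patchifying an image and dropping a random subset of patches:

```python
import torch

def patchify_and_mask(images, patch_size=16, mask_ratio=0.5):
    """Split images (B, C, H, W) into flattened patches and drop a random subset.

    Returns the visible patches, the indices of the kept patches, and all
    patches (the latter serve as reconstruction targets).
    """
    b, c, h, w = images.shape
    p = patch_size
    # (B, num_patches, C*p*p): each row is one flattened, non-overlapping patch
    patches = (images.unfold(2, p, p).unfold(3, p, p)
                     .permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p))
    num = patches.shape[1]
    keep = int(num * (1 - mask_ratio))
    shuffle = torch.rand(b, num).argsort(dim=1)      # random permutation per image
    visible_idx = shuffle[:, :keep]                  # indices of unmasked patches
    visible = torch.gather(patches, 1,
                           visible_idx.unsqueeze(-1).expand(-1, -1, c * p * p))
    return visible, visible_idx, patches
```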
We propose that MIM pretraining should be added to the training paradigm of Capsule Networks to mitigate these weaknesses, as it will force the Capsule Network to learn better representations at each area of the image in order to allow for accurate reconstruction. These better local representations can then be utilised at the finetuning stage for better activation of the correct global class capsules, which are added after pretraining. The main contributions of this work can be summarised as follows:

![1_image_0.png](1_image_0.png)

Figure 1: Our Masked Capsule Autoencoder architecture. During pretraining we randomly select a number of patches from the original image to be processed. The Capsule Network will then create a representation for each patch. Masked patch capsule representations are then re-added before the capsule decoder, where the unmasked capsules can contribute to the masked positions, which are finally decoded by a single linear layer to the original patch dimensions. The pretraining objective is the mean squared error between the reconstructed patches and the target patches. The dog image used is sourced from the Imagewoof validation set (Howard, 2019b).

1. We propose a novel adaptation of Capsule Networks to accommodate masked image modelling.

2. We show that classification accuracy with a Capsule Network can be improved via self-supervised pretraining followed by supervised finetuning, compared to supervised training alone.

3. We improve the state-of-the-art on multiple benchmark datasets for Capsule Networks, including realistically sized images where Capsule Networks typically perform poorly.

4. We implement a fully capsule decoder layer, replacing the CNN decoders typically used for reconstruction tasks in Capsule Networks, ensuring that our proposed MCAE model does not need a handcrafted decoder.

5. We provide the first investigation into the use of ViTs to replace the traditional convolutional stem.

The rest of this paper presents the necessary background on Capsule Networks, highlighting previous research that has inspired the work presented here. We then formally define our new self-supervised capsule formulation, called Masked Capsule Autoencoders, and present several experiments and ablation studies on benchmark datasets. We conclude the paper by highlighting the main advantages of our new methods, with some key takeaway messages and some future directions that could further support the future development of large-scale self-supervised Capsule Network models.

## 2 Related Works

## 2.1 Capsule Networks

Capsule Networks are a variation of CNNs which replace scalar neurons with vector or matrix capsules and construct a parse tree representing part-whole relationships within the network. Each type of capsule in a layer can be thought of as representing a specific concept at the current level of the parse tree, which is itself part of a bigger concept. Capsules in deeper layers are closer to the final class label than capsules in shallower layers. The capsules in the lowest layer can be thought of as the most basic parts, which could belong to any of the end classes, and are thus denoted the primary capsules, signifying that they are the base parts of the parse tree. Capsules in lower layers decide their contribution to capsules in higher layers through a process called routing. Capsule routing, in brief, is a non-linear, cluster-like process that takes place between adjacent capsule layers.
This part of the network has been the predominant research focus for state-of-the-art Capsule Networks, seeking better or more efficient ways to decide the contribution of lower capsules to higher capsules. In brief, the purpose of capsule routing is to assign *part* capsules $i = 1, \ldots, N$ in layer $\ell$ to *object* capsules $j = 1, \ldots, M$ in layer $\ell+1$, by iteratively adjusting coupling coefficients $\gamma \in \mathbb{R}^{N \times M}$, where $0 \leq \gamma_{ij} \leq 1$. These coupling coefficients are similar to an attention matrix (Vaswani et al., 2017), which modulates the outputs as a weighted average of the inputs. For more information on the numerous routing algorithms proposed for Capsule Networks, see (Ribeiro et al., 2022).

Dynamic Routing Capsule Networks are the original Capsule Network architecture, as described in (Sabour et al., 2017). DR Caps employs a technique called dynamic routing to iteratively refine the connections between capsules. This approach introduces the concept of coupling coefficients, which represent the strength of each connection, and updates them using a softmax function to ensure that each capsule in a lower layer must split its contribution amongst the capsules that it deems relevant in the higher layer. The update process relies on agreement values calculated as the dot product between a lower-level capsule's output and a predicted output from a higher-level capsule. After a pre-determined number of iterations, the activation of each higher-level capsule is calculated as the weighted sum of the lower-level capsule activations, where the weights are the final coupling coefficients.

Self-Routing Capsule Networks (SR-Caps) (Hahn et al., 2019) address the heavy computational burden of iterative routing algorithms in Capsule Networks by introducing a novel, independent routing mechanism. Each capsule in an SR-CapsNet utilises a dedicated routing network to directly compute its coupling coefficients, eliminating the need for iterative agreement-based approaches. This approach draws inspiration from the concept of a mixture-of-experts network (Masoudnia & Ebrahimpour, 2014), where each capsule acts as an expert specialising in a specific concept of the feature space. SR Caps achieve this by employing two trainable weight matrices, $W^{\text{route}}$ and $W^{\text{pose}}$. These matrices represent fully connected layers for each capsule in the subsequent layer. Within each routing network layer, capsule pose vectors ($u_i$) are multiplied by $W^{\text{route}}$ to directly generate coupling coefficients. These coefficients are then normalised using softmax and multiplied by the capsule's activation scalar ($a_i$) to generate weighted votes. Finally, the activation ($a_j$) of the capsule in the higher layer is calculated by summing these weighted votes across spatial dimensions ($H \times W$), or across $K \times K$ dimensions for convolutions. While SR Caps achieve competitive performance on standard benchmarks, their reliance on pre-learned routing network parameters limits the network's ability to dynamically adjust routing weights based on the specific input, a characteristic advantage of agreement-based routing approaches.
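The self-routing computation above admits a compact sketch. The following PyTorch-style code (restricted to the fully connected case at a single location, with assumed tensor shapes and a simplified activation normalisation) illustrates the mechanism; it is not the reference SR-Caps implementation.

```python
import torch

def self_route(u, a, W_route, W_pose):
    """One self-routing step (fully connected case, hypothetical shapes).

    u:       (B, N, D)    pose vectors of the N lower-layer capsules
    a:       (B, N)       activations of the lower-layer capsules
    W_route: (N, D, M)    per-capsule routing weights to M upper capsules
    W_pose:  (N, D, M*D)  per-capsule pose-transformation weights
    """
    # Each lower capsule computes its own coupling coefficients directly.
    logits = torch.einsum('bnd,ndm->bnm', u, W_route)        # (B, N, M)
    gamma = logits.softmax(dim=-1)                           # couplings sum to 1 per lower capsule
    votes = gamma * a.unsqueeze(-1)                          # weight couplings by activations
    a_up = votes.sum(dim=1) / a.sum(dim=1, keepdim=True)     # (B, M) upper activations
    # Pose of upper capsules: vote-weighted average of transformed poses.
    u_hat = torch.einsum('bnd,nde->bne', u, W_pose)          # (B, N, M*D)
    u_hat = u_hat.view(*u_hat.shape[:2], -1, u.shape[-1])    # (B, N, M, D)
    u_up = (votes.unsqueeze(-1) * u_hat).sum(dim=1)          # (B, M, D)
    return u_up, a_up
```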
## 2.2 Masked Autoencoders

Masked Autoencoders (He et al., 2021) are a specific variant of ViTs which are pretrained via a patch-specific reconstruction loss, tasking the network to reconstruct masked patches based upon the information which can be learnt from the visible patches; this can be seen visually in figure 4.

An image is first split into N × N patches of equal size, which are flattened, allowing for the tokenisation of an image akin to text in a standard transformer (Vaswani et al., 2017). To mask patches of the image, tokens are chosen randomly up to a specified percentage of the total tokens and removed from the sequence, removing their information from the feature map. The remaining visible patches are then processed via a ViT. Once the encoder has finished processing the visible patches, masked tokens are reinserted at the positions from which tokens were removed in the masking process. The network now uses a ViT decoder to make predictions for these masked tokens, utilising the attention mechanism and multi-layer perceptrons within the standard ViT blocks. This process requires the network to learn how local areas might correspond to their neighbouring patches by predicting the removed patches.

## 3 Masked Capsule Autoencoders

To create the MCAE we must first define how Capsule Networks can have their feature maps masked. In CNNs this is a difficult task that is usually achieved by setting areas of the feature map to 0, but this does not mask in the same way as the masked autoencoder (He et al., 2021), as 0 masking has been shown to change the distribution of pixels in the image (Balasubramanian & Feizi, 2023) and thus affect results. As such, in the following section, we will discuss the changes we have made to allow for correct masking within our MCAE.

## 3.1 Flattened Feature Map

![3_image_0.png](3_image_0.png)

Figure 2: A visual representation of how a 2D patch feature map or capsule feature map with height and width is flattened into a 1D feature map with a length instead. At each location, there is the same number of different capsule types, each corresponding to a different part or concept in the part-whole parse tree. The dog image used is sourced from the Imagewoof validation set (Howard, 2019b).

Vision Transformers can easily perform masking on a feature map, as patches of the image can be removed from computation by simply removing the selected patches from the flattened sequence of patches after the patch embedding layer. Capsule Networks, on the other hand, have traditionally used a 2D feature map, which comes with the drawback that masking can only be achieved either by replacing masked regions with 0's or by utilising sparse operations (Woo et al., 2023), which come with their own drawbacks (Balasubramanian & Feizi, 2023; Tian et al., 2023). Thus we propose that by flattening the 2D feature map into a 1D feature map, mimicking the design of a ViT feature map, masking can be achieved in the same way as in the Masked Autoencoder (He et al., 2021). We thus achieve masking by simply removing all capsules at a specific location along the length dimension of our feature map.
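A minimal sketch of this flatten-then-mask operation on a capsule feature map (PyTorch; the tensor layout and names are our assumptions, not the exact implementation):

```python
import torch

def mask_capsule_feature_map(pose, act, mask_ratio=0.5):
    """Flatten a 2D capsule feature map and drop whole locations.

    pose: (B, H, W, N, D)  N capsule types with D-dim poses per location
    act:  (B, H, W, N)     corresponding activations in [0, 1]
    """
    b, h, w, n, d = pose.shape
    pose = pose.reshape(b, h * w, n, d)      # 1D feature map of length L = H*W
    act = act.reshape(b, h * w, n)
    keep = int(h * w * (1 - mask_ratio))
    order = torch.rand(b, h * w).argsort(dim=1)
    vis = order[:, :keep]                    # locations that survive masking
    pose_vis = torch.gather(pose, 1, vis[..., None, None].expand(-1, -1, n, d))
    act_vis = torch.gather(act, 1, vis[..., None].expand(-1, -1, n))
    return pose_vis, act_vis, vis            # vis is needed to reinsert placeholders
```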
## 3.2 Building Upon Self Routing Capsule Networks

![4_image_0.png](4_image_0.png)

Figure 3: A visual representation of the masking process. An image is split into non-overlapping patches of N × N pixels. Randomly, a percentage, in this case 50%, of patches are removed in order to deprive the network of the information available in these patches. The patches are then flattened into a 1D sequence of the remaining patches, ready to be processed by our encoder. The dog image used is sourced from the Imagewoof validation set (Howard, 2019b).

We use the SR Caps Network (Hahn et al., 2019) as a starting point due to its simplicity and speed. We adjust the routing procedure such that rather than merging local capsules within a H × W sliding kernel, we simply use a 1 × 1 region and only route to the capsules in the upper layer at the same location in the 1D feature map, meaning our network is fully isotropic in the encoder. This allows a per-patch parse tree to be constructed, which is used to provide a pose representation for each capsule at each patch in the feature map.

When pretraining, we do not route to a class capsule; instead, we reinsert a masked capsule placeholder at the locations in the feature map which were previously removed after the encoding stage, ensuring the feature map is ready for decoding to the original shape. This feature map, which now contains both encoded capsule representations and random-noise masked capsule placeholders, is then fed through a capsule layer which considers all capsules at all locations in the lower layer when creating the pose vectors and activation values of all capsules at all locations in the higher layer, meaning that the encoded capsules can predict the values of the masked regions. We call this layer the fully capsule decoder. These reconstructed regions are then fed through a single linear projection layer which projects the activation-scaled pose vectors at each location into the correctly sized pixel values of the original image's patch at that location.

When finetuning, we remove the capsule decoder and add an additional class capsule layer on top of the encoder. This new layer averages the activations per capsule type along the H × W feature map, allowing class predictions to be made for supervised finetuning while leveraging the improved representations from the pretrained encoder.

## 3.3 Loss Function

A crucial aspect of the pretraining stage of the MCAE involves training the network to accurately reconstruct the masked portions of the input image. To achieve this, we use the Mean Squared Error (MSE) loss, which quantifies the difference between the actual pixel values of the masked patches and the predicted pixel values generated by the capsule decoder. The MSE loss is defined as:

$$\mathrm{MSE}={\frac{1}{N}}\sum_{i=1}^{N}(y_{i}-{\hat{y}}_{i})^{2}\qquad(1)$$

![5_image_0.png](5_image_0.png)

Figure 4: A visual representation of how our pretrain loss function selects patches for the loss function defined in equation 1. The dog image used is sourced from the Imagewoof validation set (Howard, 2019b).

where N represents the total number of pixels across all masked patches in the training batch, $y_i$ is the actual value of the i-th pixel in the masked patches, and $\hat{y}_i$ denotes the predicted value from our capsule decoder for the same pixel. A visual representation of patch selection from the target and prediction can be seen in figure 4. The MSE loss aligns with our objective to minimize the difference between the reconstructed and original patches, ensuring precise prediction of masked patch pixel values by the capsule decoder. It accentuates larger discrepancies by squaring errors, thereby pushing the model to improve on significant deviations and enhance reconstruction of each masked patch.

When finetuning for classification, the MSE loss is replaced with the cross entropy (CE) loss, defined by:

$$CE=-\sum_{i=1}^{N}\sum_{c=1}^{C}y_{ic}\log({\hat{y}}_{ic})\qquad(2)$$

where N is the number of samples, C the number of classes, $y_{ic}$ indicates if class c is correct for sample i, and $\hat{y}_{ic}$ is the average activation for class c of sample i. This loss encourages the model to activate the correct class capsules with high confidence.
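For concreteness, a short sketch of the pretraining loss of equation (1) restricted to masked patches (PyTorch; the index bookkeeping mirrors the hypothetical helpers sketched earlier and is an assumption, not the authors' code):

```python
import torch

def masked_patch_mse(pred_patches, target_patches, visible_idx):
    """MSE over masked patches only, as in equation (1).

    pred_patches, target_patches: (B, L, P)  all L patch positions, P pixels each
    visible_idx: (B, K) indices of the patches that were never masked
    """
    b, l, p = target_patches.shape
    masked = torch.ones(b, l, dtype=torch.bool)
    masked.scatter_(1, visible_idx, False)          # True only at masked positions
    diff = (pred_patches - target_patches) ** 2     # per-pixel squared error
    # average over every pixel of every masked patch in the batch
    return diff[masked].mean()
```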
## 3.4 Backbone Selection

To ensure that information is completely masked out, we replace a standard ResNet (He et al., 2016) or ConvNet backbone with a ConvMixer (Trockman & Kolter, 2022). This architecture's first layer uses a kernel size and stride of equal size, known as a patch embedding layer, allowing our feature map to contain no overlapping information. This ensures that when regions of the image are masked, information cannot be leaked via an overlapping sliding convolutional kernel.

We also provide a set of architectures with a ViT backbone. This is achieved by setting the dimension of each token's representation to Number of Primary Capsules × Primary Capsule Embedding Dimension, allowing for an easy reshape into the primary capsule tensor dimensions. To create the activations for the primary capsules, we use a simple linear layer with sigmoid activation to ensure that the value of each activation remains between 0 and 1.
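This token-to-capsule conversion can be sketched briefly (PyTorch; the class name and default dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TokensToPrimaryCapsules(nn.Module):
    """Reshape ViT tokens into primary capsules with learned activations."""

    def __init__(self, num_caps=16, caps_dim=16):
        super().__init__()
        token_dim = num_caps * caps_dim           # token width = N_caps * D_caps
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.to_act = nn.Linear(token_dim, num_caps)

    def forward(self, tokens):                    # tokens: (B, L, token_dim)
        b, l, _ = tokens.shape
        pose = tokens.view(b, l, self.num_caps, self.caps_dim)
        act = torch.sigmoid(self.to_act(tokens))  # activations kept in [0, 1]
        return pose, act
```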
![6_image_0.png](6_image_0.png)

Figure 5: A visual depiction of the pretrain and finetuning components. We show how the feature-extracting CNN and capsule encoder are kept from the pretrain to finetune step. The capsule decoder is discarded after pretraining and replaced with a class capsules layer which maps the capsule encoder network to a classification output.

## 4 Experiments

To validate that our method is successful, we have run numerous experiments with various ablations on multiple datasets. These experiments validate that masking is indeed effective for pushing the boundaries of Capsule Networks.

## 4.1 Experimental Setup

All of our experiments follow the same experimental setup, which is to optionally pretrain the network, minus the class capsules, for 50 epochs with 50% of patches removed, targeting either removed-patch or whole-image reconstruction. We then add the class capsules to our network and fully finetune the network for 350 epochs, following the supervised training settings of (Hahn et al., 2019; Everett et al., 2023). A visual depiction of the components of pretraining and finetuning can be found in figure 5. All models use the SGD optimizer with default settings and the cosine annealing learning rate scheduler with a 0.1 initial learning rate. When a validation dataset has not been predefined, we randomly split off 10% of the training dataset to act as our validation dataset. The best model, chosen based on the epoch with the lowest validation loss, is tested once on the test set of each dataset.

## 4.2 Datasets

We validate our results on multiple datasets. For all of our benchmark datasets, we use the augmentation strategy proposed in (Everett et al., 2023), which aligns with the augmentations used in other capsule papers; as we are the first to provide results on Imagenette, we define its augmentations to be exactly the same as the augmentations for Imagewoof. Initially, we provide a sanity check on the MNIST dataset (LeCun et al., 2010), allowing quick experimentation to ensure that our methods work at all. Next, we use both the FashionMNIST and CIFAR-10 datasets (Xiao et al., 2017; Krizhevsky et al., 2009), two datasets which are well within the abilities of a standard Capsule Network and allow us to ensure that we are not limited to the simplest of experiments. The SmallNORB dataset (LeCun et al., 2004) allows us to ensure that we are maintaining the equivariant properties and generalisation abilities of Capsule Networks, as the test set is specifically chosen to vary substantially from the train set while remaining within a similar distribution. In addition to standard classification accuracy on the SmallNORB dataset, we also follow (Hahn et al., 2019; Ribeiro et al., 2020; Hinton et al., 2018) and test our model on the novel azimuth and elevation tasks to verify generalisation capabilities. Finally, we use the Imagenette and Imagewoof datasets (Howard, 2019a;b) to test our network's performance on larger, more realistic datasets. Imagenette and Imagewoof each take 10 different classes from the Imagenet dataset (Deng et al., 2009). Imagenette is designed to be easily differentiable and simply tests our network's ability to process larger, more complex images, while Imagewoof consists of ten classes of dogs and is designed to be more difficult to differentiate due to the highly overlapping features shared between classes.

Table 1: The results for a number of foundational Capsule Network models compared to the MCAE with and without masked pretraining, showing the effectiveness of masked pretraining when applied to Capsule Networks. We show results on the four datasets that Capsule Networks are traditionally benchmarked on, as well as providing results for the Imagenette and Imagewoof datasets, which are subsets of the Imagenet dataset. Unfortunately, it is computationally infeasible to train DR, EM or VB Caps on these larger datasets due to their heavy VRAM requirements.

| | MNIST | FashionMNIST | CIFAR-10 | SmallNORB | Imagenette | Imagewoof |
| DR Caps (Sabour et al., 2017) | 99.5 | 82.5 | 91.4 | 97.3 | - | - |
| EM Caps (Hinton et al., 2018) | 99.4 | - | 87.5 | - | - | - |
| VB Caps (Ribeiro et al., 2020) | 99.7 | 94.8 | 88.9 | 98.5 | - | - |
| SR Caps (Hahn et al., 2019) | 99.6 | 91.5 | 92.2 | 92 | 45.2 | 32.5 |
| ProtoCaps (Everett et al., 2023) | 99.5 | 92.5 | 87.1 | 94.4 | 74.4 | 59.0 |
| MCAE no PT | 99.6 | 92.1 | 91.9 | 93.1 | 73.1 | 55.9 |
| MCAE | 99.6 | 95.0 | 92.8 | 95.0 | 82.1 | 61.8 |

## 4.3 Results

## 4.3.1 Results on Image Classification:

Table 1 presents the classification results of key state-of-the-art Capsule Networks compared to our approach, with and without pretraining, on the datasets proposed in our experimental design. MCAE with no pretraining is architecturally similar to the SR Caps Network, but with the 1D modification to the feature map and 1 × 1 kernels, along with the other changes to the computation required to allow for this. This method yields improved results over SR-Caps, but does not achieve state-of-the-art on any dataset. However, when we apply the masked pretraining paradigm, our results improve on all datasets except MNIST, pushing the MCAE with pretraining to state-of-the-art for Capsule Networks on all datasets except SmallNORB, which is still dominated by iterative routing methods.

## 4.3.2 Backbone Choice:

Leveraging a ConvMixer backbone (Trockman & Kolter, 2022) aligns with our model's requirement of a patch embedding layer to provide non-overlapping patches of the image. ConvMixer's feature maps are by default patchified, while ViTs (Dosovitskiy et al., 2020) utilise a patch embedding layer. Prompted by this similarity, we explored this as an ablation study.
Our observation reveals that models with a ViT backbone underperform compared to those employing a convolutional backbone. Although the ViT-backbone models yielded better performance than vanilla ViTs on smaller datasets, such as CIFAR-10 or SmallNORB, the overall results suggest that ConvMixers offer a more suitable architecture for the MCAE.

![8_image_0.png](8_image_0.png)

Figure 6: Graphs depicting how the top-1 accuracy changes based on different ablations of the MCAE per dataset. Full Image Reconstruction refers to a ConvMixer backbone MCAE pretrained for 50 epochs on full image reconstruction. No PT refers to a ConvMixer backbone MCAE with no pretraining epochs. ViT Backbone refers to a ViT backbone MCAE pretrained for 50 epochs on masked patch reconstruction. MCAE refers to our best-performing model, which utilises a ConvMixer backbone and masked patch reconstruction. All models use the same linear SR Caps model, which contains 3 layers with 16 capsules per layer, and are finetuned for 350 epochs.

## 4.3.3 Reconstruction Target:

While the masked autoencoder (He et al., 2021) framework that we build upon only reconstructs masked patches, we also provide results where the reconstruction objective includes the visible patches. Reconstructing the whole image is inspired by DR Caps (Sabour et al., 2017), which uses a full-image reconstruction objective alongside the classification objective in order to regularise the network. The results are shown in table 3 and show that reconstructing only the masked patches is the best method, with reconstructing all patches providing significantly worse results.

Table 2: Results of experimentation with a ViT (Dosovitskiy et al., 2020) backbone of depth 4 compared to Capsule Networks with a standard CNN backbone. The specific CNN we use is a ConvMixer (Trockman & Kolter, 2022) of depth 4, due to its easily scalable esoteric design being based on the presumption that the image has been patchified, ensuring no information leakage of masked regions through a sliding window of overlapping convolutional kernels.

| | Vision Transformer | ConvMixer |
| MNIST | 99.6 | 99.6 |
| FashionMNIST | 91.1 | 95.0 |
| SmallNORB | 91.4 | 95.0 |
| CIFAR-10 | 90.3 | 92.8 |
| Imagenette | 68.4 | 82.1 |
| Imagewoof | 55.4 | 61.8 |

Table 3: This table compares performance across our target datasets for MCAE pretraining based upon reconstructing both visible and masked patches versus focusing on masked patches only. Results show equal or superior performance for models reconstructing masked patches only, across all datasets.

| | Visible and Masked Patches | Masked Patches Only |
| MNIST | 99.6 | 99.6 |
| FashionMNIST | 88.4 | 95.0 |
| SmallNORB | 82.0 | 95.0 |
| CIFAR-10 | 84.8 | 92.8 |
| Imagenette | 45.1 | 82.1 |
| Imagewoof | 32.5 | 61.8 |

## 4.3.4 SmallNORB Novel Viewpoint:

Table 4: Comparing novel viewpoint generalisation on the SmallNORB novel azimuth and elevation tasks (LeCun et al., 2004). Results for DR, EM and SR Caps are from (Hahn et al., 2019) and results for VB Caps are taken from (Ribeiro et al., 2020).

| | Azimuth (Familiar) | Azimuth (Novel) | Elevation (Familiar) | Elevation (Novel) |
| DR Caps | 93.1 | 79.7 | 94.2 | 83.6 |
| EM Caps | 92.6 | 79.8 | 94.0 | 82.5 |
| VB Caps | 96.3 | 88.7 | 95.7 | 88.4 |
| SR Caps | 92.4 | 80.1 | 94.0 | 84.1 |
| MCAE | 93.2 | 85.6 | 95.3 | 86.1 |
In order to verify that we retain the novel viewpoint generalisation capabilities of Capsule Networks, we use the novel azimuth and elevation tasks of the SmallNORB dataset. We replicate the experimental design of (Hahn et al., 2019; Hinton et al., 2018) and conduct two experiments: 1) training only on azimuths in (300, 320, 340, 0, 20, 40) and testing on azimuths in the range of 60 to 280; 2) training on elevations of (30, 35, 40) degrees from horizontal and testing on elevations in the range of 45 to 70 degrees. In Table 4 we compare our accuracy on the test set on both the seen and unseen viewpoints. We pretrain for 50 epochs and finetune for 350 epochs, the same as our best model for SmallNORB in Table 1. We do not achieve state-of-the-art results on this task, but we do outperform all Capsule Networks except for VB Caps (Ribeiro et al., 2020), showing that masked pretraining does not remove the generalisation capabilities of our network.

## 5 Conclusion

We have proposed the Masked Capsule Autoencoder model, the first capsule architecture trained in a self-supervised manner, which can be a step change in the development of scalable Capsule Network models. Extensive experiments demonstrate that MCAE outperforms other Capsule Network architectures on almost all datasets, with particularly favourable results on higher-resolution images. Considering the unique and well-established advantages that Capsule Networks have around capturing viewpoint equivariance and viewpoint invariance (Ribeiro et al., 2022) compared with Transformers and CNNs, our model is a step towards developing large and scalable Capsule Network models that can compete on equal terms with the likes of Transformers and CNNs.

We would consider the main drawback of our method to be the fully capsule decoder. In the Masked Autoencoder paper (He et al., 2021), the authors state that the pretraining loss was still decreasing when they stopped pretraining at 1600 epochs. Our reconstruction loss plateaus much more quickly: it does not decrease any further after the 50 epochs for which we pretrain, indicating that there is a point at which our model has reached the best reconstructions it can achieve. While we have shown that the pretraining stage improves the maximum classification accuracy for all datasets except MNIST (due to very fine margins for quantifiable improvement), if an improved decoding mechanism could be found to benefit from additional masked pretraining, the peak classification accuracy could likely be higher. In addition, the decoder is computationally heavy due to the need to consider the entire feature map, thus increasing training time and VRAM requirements significantly compared to when no pretraining is used.

## References

Sriram Balasubramanian and Soheil Feizi. Towards improved input masking for convolutional neural networks, 2023.

Fabio De Sousa Ribeiro, Georgios Leontidis, and Stefanos Kollias. Introducing routing uncertainty in capsule networks. *Advances in Neural Information Processing Systems*, 33:6490–6502, 2020.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Miles Anthony Everett, Mingjun Zhong, and Georgios Leontidis. Protocaps: A fast and non-iterative capsule network routing method. *Transactions on Machine Learning Research*, 2023.

Taeyoung Hahn, Myeongjang Pyeon, and Gunhee Kim. Self-routing capsule networks. *Advances in Neural Information Processing Systems*, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners, 2021.

Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with em routing. In *International Conference on Learning Representations*, 2018.

Jeremy Howard. Imagenette: A smaller subset of 10 easily classified classes from imagenet, March 2019a. URL https://github.com/fastai/imagenette.

Jeremy Howard. Imagewoof: a subset of 10 classes from imagenet that aren't so easy to classify, March 2019b. URL https://github.com/fastai/imagenette#imagewoof.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

Yann LeCun, Fu Jie Huang, Leon Bottou, et al. Learning methods for generic object recognition with invariance to pose and lighting. In *CVPR (2)*, pp. 97–104. Citeseer, 2004.

Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. *ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist*, 2, 2010.

Saeed Masoudnia and Reza Ebrahimpour. Mixture of experts: a literature survey. *The Artificial Intelligence Review*, 42(2):275, 2014.

Fabio De Sousa Ribeiro, Georgios Leontidis, and Stefanos Kollias. Capsule routing via variational bayes. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 3749–3756, 2020.

Fabio De Sousa Ribeiro, Kevin Duarte, Miles Everett, Georgios Leontidis, and Mubarak Shah. Learning with capsules: A survey. *arXiv preprint arXiv:2206.02664*, 2022.

Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. *Advances in Neural Information Processing Systems*, 30, 2017.

Keyu Tian, Yi Jiang, Qishuai Diao, Chen Lin, Liwei Wang, and Zehuan Yuan. Designing bert for convolutional networks: Sparse and hierarchical masked modeling, 2023.

Asher Trockman and J Zico Kolter. Patches are all you need?, 2022. URL https://openreview.net/forum?id=TVHS5Y4dNvM.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. *CoRR*, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.

Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext v2: Co-designing and scaling convnets with masked autoencoders, 2023.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.
# Stochastic Batch Acquisition: A Simple Baseline For Deep Active Learning

Andreas Kirsch∗ *andreas.kirsch@cs.ox.ac.uk*
OATML, Department of Computer Science, University of Oxford†

Sebastian Farquhar∗ *sebastian.farquhar@cs.ox.ac.uk*
OATML, Department of Computer Science, University of Oxford†

Parmida Atighehchian *parmi.atg@gmail.com*
ServiceNow†

Andrew Jesson *andrew.jesson@cs.ox.ac.uk*
OATML, Department of Computer Science, University of Oxford†

Frédéric Branchaud-Charron *frederic.branchaud.charron@gmail.com*
ServiceNow†

Yarin Gal *yarin.gal@cs.ox.ac.uk*
OATML, Department of Computer Science, University of Oxford

Reviewed on OpenReview: *https://openreview.net/forum?id=vcHwQyNBjW*

## Abstract

We examine a simple stochastic strategy for adapting well-known single-point acquisition functions to allow batch active learning. Unlike acquiring the top-K points from the pool set, score- or rank-based sampling takes into account that acquisition scores change as new data are acquired. This simple strategy for adapting standard single-sample acquisition strategies can even perform just as well as compute-intensive state-of-the-art batch acquisition functions, like BatchBALD or BADGE, while using orders of magnitude less compute. In addition to providing a practical option for machine learning practitioners, the surprising success of the proposed method in a wide range of experimental settings raises a difficult question for the field: when are these expensive batch acquisition methods pulling their weight?

## 1 Introduction

Active learning is a widely used strategy for efficient learning in settings where unlabelled data are plentiful, but labels are expensive (Atlas et al., 1989; Settles, 2010). For example, labels for medical image data may require highly trained annotators, and when labels are the results of scientific experiments, each one can require months of work. Active learning uses information about unlabelled data and the current state of the model to acquire labels for those samples that are most likely to be informative. While many acquisition schemes are designed to acquire labels one at a time (Houlsby et al., 2011; Gal et al., 2017), recent work has highlighted the importance of *batch acquisition* (Kirsch et al., 2019; Ash et al., 2020). Acquiring in a batch lets us parallelise labelling. For example, we could hire hundreds of annotators to work in parallel or run more than one experiment at once. Batch acquisition also saves compute, as single-point selection incurs the cost of retraining the model for every new data point.

∗Joint first authors
†Work done while there.

Table 1: Acquisition runtime (in seconds, 5 trials, ± *s.d.*). The stochastic acquisition methods are as fast as top-K, here with BALD scores, and **orders of magnitude** faster than BADGE or BatchBALD. Synthetic pool set with M = 10,000 pool points with 10 classes. BatchBALD & BALD: 20 parameter samples. In superscript, mean accuracy over acquisition steps from Figure 2 on Repeated-MNIST with 4 repetitions (5 trials).

| K   | Top-K (BALD) 80%   | Stochastic (PowerBALD) 90%   | BatchBALD 90%     | BADGE 86%   |
|-----|--------------------|------------------------------|-------------------|-------------|
| 10  | 0.2 ± 0.0          | 0.2 ± 0.0                    | 566.0 ± 17.4      | 9.2 ± 0.3   |
| 100 | 0.2 ± 0.0          | 0.2 ± 0.0                    | 5,363.6 ± 95.4    | 82.1 ± 2.5  |
| 500 | 0.2 ± 0.0          | 0.2 ± 0.0                    | 29,984.1 ± 598.7  | 409.3 ± 3.7 |

Unfortunately, existing performant batch acquisition schemes are computationally expensive (Table 1).
Intuitively, this is because batch acquisition schemes face combinatorial complexity when accounting for the interactions between possible acquisition points. Recent works (Pinsler et al., 2019; Ash et al., 2020; 2021) trade off a principled motivation with various approximations to remain tractable. A commonly used, though extreme, heuristic is to take the top-K highest scoring points from an acquisition scheme designed to select a single point ("top-K *acquisition*").

This paper examines a simple baseline for batch active learning that is competitive with methods that cost orders of magnitude more across a wide range of experimental contexts. We notice that single-acquisition score methods such as BALD (Houlsby et al., 2011) do not take correlations between the samples into account and thus only act as a noisy proxy for future acquisition scores, as we motivate in Figure 1. This leads us to stochastically acquire points following a distribution determined by the single-acquisition scores instead of acquiring the top-K samples. In sequential decision making, stochastic multi-armed bandits and reinforcement learning, variants of this are also known as Boltzmann exploration (Cesa-Bianchi et al., 2017). We examine other related methods in §4. In deep active learning this simple approach remains under-explored, yet despite its simplicity it can match the performance of earlier, more complex methods for batch acquisition (e.g., BatchBALD, Kirsch et al. (2019); see §5). Indeed, this acquisition scheme has a time complexity of only $O(M \log K)$ in the pool size M and acquisition size K, just like top-K acquisition. We show empirically that the presented stochastic strategy performs as well as or better than top-K acquisition at almost identical computational cost on several commonly used acquisition scores, empirically making it a strictly better batch strategy. Strikingly, the empirical comparisons between this stochastic strategy and the evaluated more complex methods cast doubt on whether they function as well as claimed. Concretely, in this paper we:

- examine a family of three computationally cheap stochastic batch acquisition strategies (softmax, power and soft-rank), the latter two of which have not been explored and compared to in detail before;
- demonstrate that these strategies are preferable to the commonly used top-K acquisition heuristic; and
- identify the failure of two existing batch acquisition strategies to outperform this vastly cheaper and more heuristic strategy.

Outline. In §2, we present active learning notation and commonly used acquisition functions. We propose stochastic extensions in §3, relate them to previous work in §4, and validate them empirically in §5 on various datasets, showing that these extensions are competitive with some much more complex active learning approaches despite being orders of magnitude computationally cheaper. Finally, we further validate some of the underlying theoretical motivation in §6 and discuss limitations in §7.

## 2 Background & Problem Setting

The stochastic approach we examine applies to batch acquisition for active learning in a pool-based setting (Settles, 2010), where we have access to a large unlabelled *pool* set, but we can only label a small subset of the points. The challenge of active learning is to use what we already know to pick which points to label in the most efficient way. Generally, we want to avoid labelling points similar to those already labelled.
In the pool-based active learning setting, we are given a large pool of unlabeled data points and can request labels for only a small subset, the acquisition batch. To formalize this, we first introduce some notation.

Notation. Following Farquhar et al. (2021), we formulate active learning over *indices* instead of over data points. This simplifies the notation. The large, initially fully unlabelled, pool set containing M input points is

$$\mathcal{D}^{\mathrm{pool}}=\{x_{i}\}_{i\in\mathcal{I}^{\mathrm{pool}}},\tag{1}$$

where $\mathcal{I}^{\mathrm{pool}} = \{1,\ldots,M\}$ is the initial full index set. We initialise a training dataset with $N_0$ randomly selected points from $\mathcal{D}^{\mathrm{pool}}$ by acquiring their labels, $y_i$,

$$\mathcal{D}^{\mathrm{train}}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}^{\mathrm{train}}},\tag{2}$$

where $\mathcal{I}^{\mathrm{train}}$ is the index set of $\mathcal{D}^{\mathrm{train}}$, *initially* containing $N_0$ indices between 1 and M. A model of the predictive distribution, $p(y \mid x)$, can then be trained on $\mathcal{D}^{\mathrm{train}}$.

## 2.1 Active Learning

At each acquisition step, we select additional points for which to acquire labels. Although many methods acquire one point at a time (Houlsby et al., 2011; Gal et al., 2017), one can alternatively acquire a whole batch of K examples. An acquisition function $a$ takes $\mathcal{I}^{\mathrm{train}}$ and $\mathcal{I}^{\mathrm{pool}}$ and returns K indices from $\mathcal{I}^{\mathrm{pool}}$ to be added to $\mathcal{I}^{\mathrm{train}}$. We then label those K datapoints and add them to $\mathcal{I}^{\mathrm{train}}$ while making them unavailable from the pool set. That is,

$$\mathcal{I}^{\mathrm{train}}\leftarrow\mathcal{I}^{\mathrm{train}}\cup a(\mathcal{I}^{\mathrm{train}},\mathcal{I}^{\mathrm{pool}}),\tag{3}$$
$$\mathcal{I}^{\mathrm{pool}}\leftarrow\mathcal{I}^{\mathrm{pool}}\setminus\mathcal{I}^{\mathrm{train}}.\tag{4}$$

A common way to construct the acquisition function is to define some scoring function, $s$, and then select the point(s) that score the highest.

Probabilistic Model. We assume classification with inputs X, labels Y, and a discriminative classifier $p(y \mid x)$. In the case of Bayesian models, we further assume a subjective probability distribution over the parameters, $p(\omega)$, and we have $p(y \mid x) = \mathbb{E}_{p(\omega)}[p(y \mid x, \omega)]$. Two commonly used acquisition scoring functions are Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) and predictive entropy (Gal et al., 2017):

BALD. *Bayesian Active Learning by Disagreement* (Houlsby et al., 2011) computes the expected information gain between the predictive distribution and the parameter distribution $p(\omega \mid \mathcal{D}^{\mathrm{train}})$ for a Bayesian model. For each candidate pool index $i$, with mutual information I and entropy H, the score is

$$s_{\mathrm{BALD}}(i;\mathcal{I}^{\mathrm{train}}):=\mathrm{I}[Y;\Omega\mid X=x_{i},\mathcal{D}^{\mathrm{train}}]=\mathrm{H}[Y\mid X=x_{i},\mathcal{D}^{\mathrm{train}}]-\mathbb{E}_{p(\omega\mid\mathcal{D}^{\mathrm{train}})}[\mathrm{H}[Y\mid X=x_{i},\omega]].\tag{5}$$

Entropy. The *(predictive) entropy* (Gal et al., 2017) does not require Bayesian models, unlike BALD, and performs worse for data with high observation noise (Mukhoti et al., 2021). It is identical to the first term of the BALD score

$$s_{\mathrm{entropy}}(i;\mathcal{I}^{\mathrm{train}}):=\mathrm{H}[Y\mid X=x_{i},\mathcal{D}^{\mathrm{train}}].\tag{6}$$
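As a concrete reference for these scoring functions, the sketch below (our own, not the paper's implementation) computes the entropy and BALD scores of Equations (5) and (6) from Monte-Carlo predictive samples, e.g. from MC dropout; the array layout and helper names are assumptions.

```python
import numpy as np

def entropy_scores(probs):
    """probs: [S, N, C] class probabilities from S posterior samples for N
    pool points. Returns the predictive entropy H[Y | x, D_train] of Eq. (6)."""
    mean = probs.mean(axis=0)                                    # predictive distribution
    return -(mean * np.log(np.clip(mean, 1e-12, None))).sum(-1)  # [N]

def bald_scores(probs):
    """Expected information gain I[Y; Omega | x, D_train] of Eq. (5)."""
    conditional = -(probs * np.log(np.clip(probs, 1e-12, None))).sum(-1).mean(0)
    return entropy_scores(probs) - conditional                   # mutual information

# Toy check: a confidently predicted point scores ~0; a point on which the
# posterior samples disagree scores highly.
probs = np.array([[[0.98, 0.02], [0.98, 0.02]],
                  [[0.98, 0.02], [0.02, 0.98]]])                 # [S=2, N=2, C=2]
print(bald_scores(probs))                                        # ~[0.0, 0.60]
```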
## 2.2 Acquisition Functions

These scoring functions were introduced for single-point acquisition:

$$a_{s}(\mathcal{I}^{\mathrm{train}}):=\arg\max_{i\in\mathcal{I}^{\mathrm{pool}}}s(i;\mathcal{I}^{\mathrm{train}}).\tag{7}$$

For deep learning in particular, single-point acquisition is computationally expensive due to retraining the model for every acquired sample. Moreover, it also means that labelling can only happen sequentially instead of in bulk. Thus, single-point acquisition functions were expanded to multi-point acquisition via acquisition batches in batch active learning. The most naive batch acquisition function selects the highest K scoring points

$$a_{s}^{\mathrm{batch}}(\mathcal{I}^{\mathrm{train}};K):=\arg\max_{I\subseteq\mathcal{I}^{\mathrm{pool}},|I|=K}\sum_{i\in I}s(i;\mathcal{I}^{\mathrm{train}}).\tag{8}$$

Maximizing this sum is equivalent to taking the top-K scoring points, which cannot account for the interactions between points in an acquisition batch because individual points are scored independently. For example, if the most informative point is duplicated in the pool set, all instances will be acquired, which is likely wasteful when we assume no label noise (see also Figure 1 in Kirsch et al. (2019)).

## 2.3 Batch Acquisition Functions

Some acquisition functions are explicitly designed for batch acquisition (Kirsch et al., 2019; Pinsler et al., 2019; Ash et al., 2020). They try to account for the interaction between points, which can improve performance relative to simply selecting the top-K scoring points. However, existing methods can be computationally expensive. For example, BatchBALD rarely scales to acquisition sizes of more than 5–10 points due to its long runtime (Kirsch et al., 2019), as we evidence in Table 1.

ACS-FW. Pinsler et al. (2019) propose *"Active Bayesian CoreSets with Frank-Wolfe optimization"*, which successively minimizes the distance between the (expected) loss on the pool set and the loss on a potential training set using a greedy Frank-Wolfe optimization in a Hilbert space. Either a posterior-weighted Fisher inner-product or a posterior-weighted loss inner-product between two samples is used. For non-linear models, random projections of the gradients speed up the Fisher inner-product.

BADGE. Ash et al. (2020) propose *"Batch Active learning by Diverse Gradient Embeddings"*: it motivates its batch selection approach using a k-Determinantal Point Process (Kulesza & Taskar, 2011) based on the (inner product) similarity matrix of the scores (gradients of the log loss) using hard pseudo-labels (the highest-probability class according to the model's prediction) for each pool sample. See also Kirsch & Gal (2022) for a more detailed analysis. In practice, they use the initialization step of k-MEANS++ with Euclidean distances between the scores to select an acquisition batch. BADGE is also computationally expensive, as we elaborate in Section 4.

BatchBALD. Kirsch et al. (2019) extend BALD to batch acquisition using the mutual information between the parameter distribution and the *joint* distribution of the predictions of multiple points in an acquisition batch: this mutual information is the expected information gain for a full acquisition batch:

$$s_{\mathrm{BatchBALD}}(i_{1},\ldots,i_{K};\mathcal{I}^{\mathrm{train}}):=\mathrm{I}[Y_{1},\ldots,Y_{K};\Omega\mid X_{1}=x_{i_{1}},\ldots,X_{K}=x_{i_{K}},\mathcal{D}^{\mathrm{train}}].\tag{9}$$

Kirsch et al.
(2019) greedily construct an acquisition batch by iteratively selecting the next unlabelled pool point that maximizes the joint score with the already selected points. This is $1-1/e$-optimal as the expected information gain is submodular (Krause & Golovin, 2014). They note that their approach is computationally expensive, and they only consider acquisition batches of up to size 10.

## 3 Method

Selecting the top-K points at acquisition step t amounts to the assumption that the informativeness of these points is independent of each other. This leads to the pathology that if the most informative pool point is duplicated in the pool set, each instance would be selected (up to the acquisition size). This is clearly wrong.

Table 2: *Summary of stochastic acquisition variants.* Perturbing the scores $s_i$ themselves with $\epsilon_i \sim \mathrm{Gumbel}(0;\beta^{-1})$ i.i.d. yields a softmax distribution. Log-scores result in a power distribution, with assumptions that are reasonable for active learning. Using the score-ranking $r_i$ finally is a robustifying assumption. β is included for completeness; we use β := 1 in our experiments, except for the ablation in §6.1.

| Perturbation              | Distribution | Probability mass          |
|---------------------------|--------------|---------------------------|
| $s_i + \epsilon_i$        | Softmax      | $\propto \exp(\beta s_i)$ |
| $\log s_i + \epsilon_i$   | Power        | $\propto s_i^{\beta}$     |
| $-\log r_i + \epsilon_i$  | Soft-rank    | $\propto r_i^{-\beta}$    |

![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png)

Figure 1: Acquisition scores at an individual acquisition step t *are only a loose proxy for later scores at* t + n *(here:* t = 0). Specifically, the Spearman rank-correlation between acquisition scores on the zeroth and n-th time-step falls with n. While top-K acquisition incorrectly implicitly assumes the rank-correlation remains 1, stochastic acquisitions do not. Using a Monte-Carlo Dropout BNN trained on MNIST with 20 initial points and 73% initial accuracy; score ranks computed over the test set.

Figure 2: *Performance on Repeated-MNIST with 4 repetitions (5 trials).* Up and to the right is better (↗). PowerBALD outperforms (top-K) BALD and BADGE and is on par with BatchBALD, despite being orders of magnitude faster. Acquisition size: 10, except for BatchBALD (5). See Figure 10 in the appendix for an ablation study of BADGE's acquisition size, and Figure 11 for a comparison of all BALD variants at acquisition size 5.

**Top-K Acquisition Pathologies.** Another way to see that this is wrong is to think step by step, that is, to split the batch acquisition into multiple steps of size 1: We select the top pool sample by acquisition score and retrain the model once for each possible class label for this point. We then compute the averaged acquisition scores on the pool set given each of these models, weighted by the original model's probability of each class label. We select the top pool sample using this new (averaged) score, and repeat the process, branching out exponentially as necessary. This is equivalent to the joint acquisition batch selection in BatchBALD (Kirsch et al., 2019):

$$\mathrm{I}[Y_{1},\ldots,Y_{K};\Omega\mid X_{1}=x_{i_{1}},\ldots,X_{K}=x_{i_{K}},\mathcal{D}^{\mathrm{train}}]\tag{10}$$
$$=\sum_{j=1}^{K}\mathbb{E}_{p(y_{i_{1}},\ldots,y_{i_{j-1}}\mid x_{i_{1}},\ldots,x_{i_{j-1}},\mathcal{D}^{\mathrm{train}})}\,\mathrm{I}[Y_{j};\Omega\mid X_{j}=x_{i_{j}},X_{1}=x_{i_{1}},Y_{1}=y_{i_{1}},\ldots,X_{j-1}=x_{i_{j-1}},Y_{j-1}=y_{i_{j-1}},\mathcal{D}^{\mathrm{train}}]$$
$$\neq\sum_{j=1}^{K}\mathrm{I}[Y_{j};\Omega\mid X_{j}=x_{i_{j}},\mathcal{D}^{\mathrm{train}}]=\sum_{j=1}^{K}s_{\mathrm{BALD}}(i_{j};\mathcal{I}^{\mathrm{train}})\tag{11}$$

using the chain rule of the mutual information. We see that the informativeness of the samples will usually not be independent of each other.
Of course, the acquisition scores for models trained with these additional points will be quite different from the first set of scores. After all, the purpose of active learning is to add the most informative points—those that will update the model the most. In contrast, selecting a top-K batch using the same scores implicitly assumes that the score ranking will not change due to other points.

![5_image_0.png](5_image_0.png)

Figure 3: *BALD scores for 1000 randomly-chosen points from the MNIST dataset (handwritten digits).* The points are colour-coded by digit label and sorted by score. The model used for scoring has been trained to 90% accuracy first. If we were to pick the top scoring points (e.g. scores above 0.6), most of them would be 8s, even though we can assume that after acquiring the first couple of them, the model would consider them less informative than other available data. Points are slightly jittered on the x-axis by digit label to avoid overlaps.

We provide empirical confirmation in Figure 1 that, in fact, the ranking of acquisition scores at step t and t + K is decreasingly correlated as K grows when we retrain the model for each acquired point. Figure 3 also illustrates this on MNIST. Moreover, as we will see in §6, this effect is the strongest for the most informative points.

Instead, we investigate the use of stochastic sampling as an alternative to top-K acquisition, which implicitly acknowledges the uncertainty within the batch acquisition step using a simple noise-process model governing how scores change. We motivate and investigate the theory behind this in §6, but given how simple the examined methods are, the theory would only obscure their simplicity here. Specifically, we examine three simple stochastic extensions of single-sample scoring functions $s(i;\mathcal{I}^{\mathrm{train}})$ that make slightly different assumptions. These methods are compatible with conventional active learning frameworks that typically take the top-K highest scoring samples. For example, it is straightforward to adapt entropy, BALD, and other scoring functions for use with these extensions.

Gumbel Noise. These stochastic acquisition extensions assume that future scores differ from the current score by a perturbation. We model the noise distribution of this perturbation as the addition of Gumbel-distributed noise $\epsilon_i \sim \mathrm{Gumbel}(0;1)$, which is frequently used to model the distribution of extrema. At the same time, the choice of a Gumbel distribution for the noise is also one of mathematical convenience, in the spirit of a straightforward baseline: the maximum of sets of many other standard distributions, such as the Gaussian distribution, is not analytically tractable. On the other hand, taking the highest-scoring points from a distribution perturbed with Gumbel noise is equivalent to sampling from a softmax distribution¹ without replacement. We investigate the choice of Gumbel noise further in §6. This follows from the *Gumbel-Max* trick (Gumbel, 1954; Maddison et al., 2014) and, more specifically, the Gumbel-Top-K trick (Kool et al., 2019). We provide a short proof in appendix §B.2. Expanding on Maddison et al. (2014):

Proposition 3.1. *For scores $s_i$, $i \in \{1,\ldots,n\}$, and $k \leq n$ and $\beta > 0$, if we draw $\epsilon_i \sim \mathrm{Gumbel}(0;\beta^{-1})$ independently, then $\arg\mathrm{top}_k\{s_i + \epsilon_i\}_i$ is an (ordered) sample without replacement from the categorical distribution $\mathrm{Categorical}\big(\exp(\beta s_i)/\sum_j \exp(\beta s_j),\; i \in \{1,\ldots,n\}\big)$.*

$\beta \geq 0$ is a 'coldness' parameter.
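A minimal numerical illustration of Proposition 3.1 (our own sketch; `gumbel_top_k` is an assumed name): perturbing the scores with Gumbel(0; β⁻¹) noise and taking the top k indices draws an ordered sample without replacement from the corresponding softmax distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_top_k(scores, k, beta=1.0):
    """Perturb-and-top-k: equivalent to sampling k indices without replacement
    from Categorical(exp(beta * s_i) / sum_j exp(beta * s_j))."""
    eps = rng.gumbel(loc=0.0, scale=1.0 / beta, size=len(scores))
    return np.argsort(-(scores + eps))[:k]

# Empirical check against the softmax probabilities for the first draw.
scores = np.array([0.1, 1.0, 2.0, 3.0])
counts = np.zeros(len(scores))
for _ in range(100_000):
    counts[gumbel_top_k(scores, k=1)[0]] += 1
softmax = np.exp(scores) / np.exp(scores).sum()
print(np.round(counts / counts.sum(), 3))  # closely matches the line below
print(np.round(softmax, 3))
```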
In the spirit of providing a simple and surprisingly effective baseline without hyperparameters, we fix β := 1 for our main experiments. However, to understand its effect, we examine ablations of β in §6.1. Overall, for β → ∞, this distribution will converge towards top-K acquisition; whereas for β → 0, it will converge towards uniform acquisition.

¹Also known as the Boltzmann/Gibbs distribution.

We apply the perturbation to three quantities in the three sampling schemes: the scores themselves, the log scores, and the rank of the scores. Perturbing the log scores assumes that scores are non-negative and uninformative points should be avoided. Perturbing the ranks can be seen as a robustifying assumption that requires the relative scores to be reliable but allows the absolute scores to be unreliable. Table 2 summarizes the three stochastic acquisition variants with their associated sampling distributions, which make slightly different assumptions about the acquisition scores:

Soft-Rank Acquisition. This first variant only relies on the rank order of the scores and makes no assumptions on whether the acquisition scores are meaningful beyond that. It thus uses the *least* amount of information from the acquisition scores. It only requires the *relative score order* to be useful and ignores the *absolute score values*. If the absolute scores provide useful information, we would expect this method to perform worse than the variants below, which make use of the score values. As we will see, this is indeed sometimes the case. Ranking the scores $s(i;\mathcal{I}^{\mathrm{train}})$ with descending ranks $\{r_i\}_{i\in\mathcal{I}^{\mathrm{pool}}}$ such that $s(r_i;\mathcal{I}^{\mathrm{train}}) \geq s(r_j;\mathcal{I}^{\mathrm{train}})$ for $r_i \leq r_j$, with the smallest rank being 1, we sample index $i$ with probability $p_{\mathrm{softrank}}(i) \propto r_i^{-\beta}$ with coldness β. This is invariant to the actual scores. We can draw $\epsilon_i \sim \mathrm{Gumbel}(0;\beta^{-1})$ and create a perturbed 'rank'

$$s^{\mathrm{softrank}}(i;\mathcal{I}^{\mathrm{train}}):=-\log r_{i}+\epsilon_{i}.\tag{12}$$

Following Proposition 3.1, taking the top-K points from $s^{\mathrm{softrank}}$ is equivalent to sampling without replacement from the rank distribution $p_{\mathrm{softrank}}(i)$.

Softmax Acquisition. The next simplest variant uses the actual scores instead of the ranks. Again, it perturbs the scores by a Gumbel-distributed random variable $\epsilon_i \sim \mathrm{Gumbel}(0;\beta^{-1})$

$$s^{\mathrm{softmax}}(i;\mathcal{I}^{\mathrm{train}}):=s(i;\mathcal{I}^{\mathrm{train}})+\epsilon_{i}.\tag{13}$$

However, this makes no assumptions about the semantics of the absolute values of the scores: the softmax function is invariant to constant shifts. Hence, the sampling distribution will only depend on the *relative* scores and not their absolute value.

Power Acquisition. For many scoring functions, the scores are non-negative, and a score close to zero means that the sample is not informative in the sense that we do not expect it will improve the model: we do not want to sample it. This is the case with commonly used score functions such as BALD and entropy. BALD measures the expected information gain. When it is zero for a sample, we do not expect anything to be gained from acquiring a label for that sample. Similarly, entropy upper-bounds BALD, and the same consideration applies. This assumption also holds for other scoring functions such as the standard deviation and variation ratios; see Appendix B.1.
To take this into account, the last variant models the future log scores as perturbations of the current log score with Gumbel-distributed noise

$$s^{\mathrm{power}}(i;\mathcal{I}^{\mathrm{train}}):=\log s(i;\mathcal{I}^{\mathrm{train}})+\epsilon_{i}.\tag{14}$$

By Proposition 3.1, this is equivalent to sampling from a power distribution

$$p_{\mathrm{power}}(i)\propto\left(\frac{1}{s(i;\mathcal{I}^{\mathrm{train}})}\right)^{-\beta}.\tag{15}$$

This may be seen by noting that $\exp(\beta \log s(i;\mathcal{I}^{\mathrm{train}})) = s(i;\mathcal{I}^{\mathrm{train}})^{\beta}$. Importantly, as scores → 0, the (perturbed) log scores → −∞ and will have probability mass → 0 assigned. This variant takes the absolute scores into account and avoids data points with score 0.

Given the above considerations, when using BALD, entropy, and other appropriate scoring functions, power acquisition is the most sensible. Thus, we expect it to work best. Indeed, we find this to be the case in the toy experiment on Repeated-MNIST (Kirsch et al., 2019) depicted in Figure 2. However, even soft-rank acquisition often works well in practice compared to top-K acquisition; see also appendix §D for a more in-depth comparison. In the rest of the main paper, we mostly focus on power acquisition. We include results for all methods in §C. In Appendix G, we present the simple implementation of the stochastic acquisition variants we use in our experiments.

## 4 Related Work

Researchers in active learning (Atlas et al., 1989; Settles, 2010) have identified the importance of *batch* acquisition as well as the failures of top-K acquisition using straightforward extensions of single-sample methods in a range of settings including support vector machines (Campbell et al., 2000; Schohn & Cohn, 2000; Brinker, 2003; Hoi et al., 2006; Guo & Schuurmans, 2007), GMMs (Azimi et al., 2012), and neural networks (Sener & Savarese, 2018; Kirsch et al., 2019; Ash et al., 2020; Baykal et al., 2021). Many of these methods aim to introduce structured diversity to batch acquisition that accounts for the interaction of the points acquired in the learning process. In most cases, the computational complexity scales poorly with the acquisition size (K) or pool size (M), for example because of the estimation of joint mutual information (Kirsch et al., 2019); the $O(KM)$ complexity of using a k-means++ initialisation scheme (Ash et al., 2020), which approximates k-DPP-based batch active learning (Bıyık et al., 2019), or Frank-Wolfe optimization (Pinsler et al., 2019); or the $O(M^2 \log M)$ complexity of methods based on K-centre coresets (Sener & Savarese, 2018) (although heuristics and continuous relaxations can improve this somewhat). In contrast, we examine simple and efficient stochastic strategies for adapting well-known single-sample acquisition functions to the batch setting. The proposed stochastic strategies are based on observing that acquisition scores would change as new points are added to the acquisition batch and modelling this difference for additional batch samples in the most naive way, using Gumbel noise. The presented stochastic extensions have the same complexity $O(M \log K)$ as naive top-K batch acquisition, yet outperform it, and they can perform on par with the more complex methods above. For multi-armed bandits, softmax acquisition is also known as Boltzmann exploration (Cesa-Bianchi et al., 2017).
It has also been shown that adding noise to the scores, specifically via Thompson sampling, is effective for choosing informative batches (Kalkanli & Özgür, 2021). Similarly, in reinforcement learning, stochastic prioritisation has been employed as *prioritized replay* (Schaul et al., 2016), which may be effective for reasons analogous to those motivating the approach examined in this work. While stochastic sampling has not been extensively explored for acquisition in deep active learning, most recently it has been used as an auxiliary step in diversity-based active learning methods that rely on clustering as the main mechanism (Ash et al., 2020; Citovsky et al., 2021). Kirsch et al. (2019) empirically find that additional noise in the acquisition scores seems to benefit batch acquisition but do not investigate further. Fredlund et al. (2010) suggest modeling single-point acquisition as sampling from a "*query density*" modulated by the (unknown) sample density p(x) and analyze a binary classification toy problem. Farquhar et al. (2021) propose stochastic acquisition as part of de-biasing actively learned estimators. Most relevant to this work, and building on Fredlund et al. (2010) and Farquhar et al. (2021), Zhan et al. (2022) propose a stochastic acquisition scheme that is asymptotically optimal. They normalize the acquisition scores via the softmax function to obtain a query density function for unlabeled samples and draw an acquisition batch from it, similar to SoftmaxEntropy. Their method aims to achieve asymptotic optimality for active learning processes by mitigating the impact of bias. In contrast, in this work, we examine multiple stochastic acquisition strategies based on score-based or rank-based distributions and apply these strategies to several single-sample acquisition functions, such as BALD and entropy (and standard deviation and variation ratios, see Figure 12); and we focus on active learning in a (Bayesian) deep learning setting. As such, our empirical results and additional proposed strategies can be seen as complementary to their work. Thus, while stochastic sampling is generally well-known within acquisition functions, to our knowledge, this work is the first² to investigate simple stochastic sampling methods entirely as alternatives to naive top-K acquisition in (Bayesian) deep active learning and to compare them to more complex approaches in various settings.

²A workshop version was presented at ICML 2021, and the first submission of this work was concurrent to Zhan et al. (2022).

## 5 Experiments

In this section, we empirically verify that the presented stochastic acquisition methods (a) outperform top-K acquisition and (b) are competitive with specially designed batch acquisition schemes like BADGE (Ash et al., 2020) and BatchBALD (Kirsch et al., 2019), while being vastly cheaper than these more complicated methods. To demonstrate the seriousness of the possible weakness of recent batch acquisition methods, we use a range of datasets. These experiments show that the performance of the stochastic extensions is not dependent on the specific characteristics of any particular dataset. Our experiments include computer vision, natural language processing (NLP), and causal inference (in §6.1).
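For reference while reading the experiments, the three stochastic strategies of Table 2 reduce to a few lines of code. The sketch below is our own illustration (the paper's actual implementation lives in its Appendix G); `stochastic_batch` and the clipping constant are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

def stochastic_batch(scores, k, mode="power", beta=1.0):
    """Sample an acquisition batch of k pool indices by Gumbel-perturbing
    a transform of the scores and taking the top k (cf. Table 2)."""
    scores = np.asarray(scores, dtype=float)
    if mode == "softmax":                   # s_i + eps      ->  p ∝ exp(beta * s_i)
        base = scores
    elif mode == "power":                   # log s_i + eps  ->  p ∝ s_i^beta
        base = np.log(np.clip(scores, 1e-12, None))
    elif mode == "softrank":                # -log r_i + eps ->  p ∝ r_i^(-beta)
        ranks = rankdata(-scores, method="ordinal")   # rank 1 = highest score
        base = -np.log(ranks)
    else:
        raise ValueError(mode)
    eps = rng.gumbel(0.0, 1.0 / beta, size=len(scores))
    return np.argsort(-(base + eps))[:k]

pool_scores = rng.random(10_000)            # e.g. BALD scores of the pool
batch = stochastic_batch(pool_scores, k=100, mode="power")
```

Like top-K selection, the cost is dominated by handling the M pool scores (a full sort here, for brevity), in line with the complexity discussion above.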
We show that stochastic acquisition helps avoid selecting redundant samples on Repeated-MNIST (Kirsch et al., 2019), and we examine performance in active learning for computer vision on EMNIST (Cohen et al., 2017), MIO-TCD (Luo et al., 2018), and Synbols (Lacoste et al., 2020), as well as on CLINC-150 (Larson et al., 2019) for intent classification in NLP. MIO-TCD is especially close to real-world datasets in size and quality. In appendix §C.5, we further investigate edge cases using the Synbols dataset under different types of biases and noise, and in Appendix C.7, we also separately examine stochastic batch acquisition using last-layer MFVI models on CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), Repeated-MNIST, and Fashion-MNIST (Xiao et al., 2017) and compare to ACS-FW (Pinsler et al., 2019).

In the main paper, we consider BALD as the scoring function. We examine predictive entropy on many of these datasets in the appendix. Additionally, other scoring functions are examined on Repeated-MNIST in appendix §C.2.1. Overall, we observe similar results as for BALD. For the sake of legible figures, we mainly focus on power acquisition in this section, as it fits BALD and entropy best: the scores are non-negative, and zero scores imply uninformative samples. We show that all three methods (power, softmax, soft-rank) can perform similarly in appendix §D.

We are not always able to compare to BADGE and BatchBALD because of the computational limitations of those methods. BatchBALD is computationally infeasible for large acquisition sizes (≥ 10) because of time constraints, cf. Table 1. When possible, we use BatchBALD with acquisition size 5 as a baseline. Note that this gives BatchBALD an advantage as it is known to perform better with smaller acquisition sizes (Kirsch et al., 2019). Similarly, BADGE ran out of memory for large dataset sizes, such as EMNIST 'ByMerge' with 814,255 examples, independently of the acquisition size. Figures interpolate linearly between available points, and we show 95% confidence intervals.

Experimental Setup & Compute. We document the experimental setup and model architectures in detail in appendix §C.1. Our experiments used about 25,000 compute hours on Titan RTX GPUs.

Runtime Measurements. We emphasize that the stochastic acquisition strategies are much more computationally efficient than specialised batch-acquisition approaches like BADGE and BatchBALD. Runtimes, shown in Table 1, are essentially identical for top-K and the stochastic versions. Both are orders of magnitude faster than BADGE and BatchBALD even for small batches. Unlike those methods, stochastic acquisition scales *linearly* in pool size and *logarithmically* in acquisition size. Runtime numbers do not include the cost of retraining models (identical in each case). The runtimes for top-K and stochastic acquisition appear constant over K because the execution time is dominated by fixed-cost memory operations. The synthetic dataset used for benchmarking has 4,096 features, 10 classes, and 10,000 pool points.

Repeated-MNIST. Repeated-MNIST (Kirsch et al., 2019) duplicates MNIST a specified number of times and adds Gaussian noise to prevent perfect duplicates. Redundant data are incredibly common in industrial applications but are usually removed from standard benchmark datasets. The controlled redundancies in the dataset allow us to showcase pathologies in batch acquisition methods. We use an acquisition size of 10 and 4 dataset repetitions. Figure 2 shows that PowerBALD outperforms top-K BALD.
While much cheaper computationally (cf. Table 1), PowerBALD also outperforms BADGE and even performs on par with BatchBALD. We use an acquisition size of 10 for all methods, except for BatchBALD, for which we use an acquisition size of 5, as explained above. Note that BatchBALD performs better for smaller acquisition sizes while BADGE (counterintuitively) can perform better for larger ones; see Figure 10 in the appendix for an ablation. BatchBALD, BALD, and the stochastic variants all become equivalent for acquisition size 1, when points are acquired individually, which performs best (Kirsch et al., 2019).

![9_image_1.png](9_image_1.png) ![9_image_0.png](9_image_0.png)

Figure 4: *Performance on various datasets.* BatchBALD took infeasibly long on these datasets & acquisition sizes. (a) *EMNIST 'Balanced':* On 132k samples, PowerBALD (acq. size 10) outperforms BatchBALD (acq. size 5) and BADGE (acq. size 10). (b) *EMNIST 'ByMerge':* On 814k samples, PowerBALD (acq. size 10) outperforms BatchBALD (acq. size 5). BADGE (not shown) OOM'ed, and BatchBALD took > 12 days for 115 acquisitions. (c) *MIO-TCD:* PowerBALD performs better than BALD and on par with BADGE (all acq. size 100). (d) *Synbols with minority groups:* PowerBALD performs on par with BADGE (all acq. size 100).

Computer Vision: EMNIST. EMNIST (Cohen et al., 2017) contains handwritten digits and letters and comes with several splits: we examine the 'Balanced' split with 131,600 samples in Figure 4a³ and the 'ByMerge' split with 814,255 samples in Figure 4b. Both have 47 classes. We use an acquisition size of 5 for BatchBALD, and 10 otherwise. We see that the stochastic methods outperform BatchBALD on 'ByMerge' and both BADGE and BatchBALD on 'Balanced' (Figure 4a). They do not have any issues with the huge pool set in 'ByMerge' (Figure 4b). In the appendix, Figures 27 and 28 show results for all three stochastic extensions, and Figure 17 shows an ablation of different acquisition sizes for BADGE. For 'ByMerge', BADGE ran out of memory on our machines, and BatchBALD took more than 12 days for 115 acquisitions, at which point we halted execution.

³This result exactly reproduces BatchBALD's trajectory in Figure 7 from Kirsch et al. (2019).

Computer Vision: MIO-TCD. The Miovision Traffic Camera Dataset (MIO-TCD) (Luo et al., 2018) is a vehicle classification and localisation dataset with 648,959 images designed to exhibit realistic data characteristics like class imbalance, duplicate data, compression artefacts, varying resolution (between 100 and 2,000 pixels), and uninformative examples; see Figure 9 in the appendix. As depicted in Figure 4c, PowerBALD performs better than BALD and essentially matches BADGE despite being much cheaper to compute. We use an acquisition size of 100 for all methods.

Computer Vision: Synbols. Synbols (Lacoste et al., 2020) is a character dataset generator which can demonstrate the behaviour of batch active learning under various edge cases (Lacoste et al., 2020; Branchaud-Charron et al., 2021). In Figure 4d, we evaluate PowerBALD on a dataset with minority character types and colours. PowerBALD outperforms BALD and matches BADGE. Further details, as well as an examination of the 'spurious correlation' and 'missing synbols' edge cases (Lacoste et al., 2020; Branchaud-Charron et al., 2021), can be found in appendix §C.5.

Natural Language Processing: CLINC-150. We perform intent classification on CLINC-150 (Larson et al., 2019), which contains 150 intent classes plus an out-of-scope class.
This setting captures data seen in production for chatbots. We fine-tune a pretrained DistilBERT model from HuggingFace (Wolf et al., 2020) on CLINC-150 for 5 epochs with Adam as the optimiser. In appendix §C.6, we see that PowerEntropy shows strong performance: it performs better than Entropy and almost on par with BADGE. This demonstrates that our technique is domain-independent and can easily be reused for other tasks.

MFVI Last-Layer Comparison with ACS-FW. In Appendix C.7, we provide a comparison of the proposed stochastic acquisition functions with BALD and ACS-FW (Pinsler et al., 2019) in a Bayesian last-layer setting using variational inference with a mean-field Gaussian approximation (instead of Monte-Carlo dropout). We find that for smaller acquisition sizes, the stochastic acquisition functions also outperform BALD on Fashion-MNIST and Repeated-MNIST. For larger acquisition sizes on CIFAR-10 and SVHN, this is not the case, however: no single method seems to perform better than the others. We also find that ACS-FW overall takes much longer to run than stochastic and top-K acquisition (Figure 25). We leave a more thorough investigation of this setting to future work.

In Summary. We have verified that stochastic acquisition functions outperform top-K batch acquisition in several settings and perform on par with more complex methods such as BADGE or BatchBALD. Moreover, we refer the reader to Jesson et al. (2021), Murray et al. (2021), Tigas et al. (2022), Holmes et al. (2022), Malik et al. (2023), and Rubashevskii et al. (2023) for additional works that use the stochastic acquisition functions proposed in this paper and provide further empirical validation.

## 6 Further Investigations

In this section, we examine and validate assumptions about the underlying score dynamics by examining the scores across acquisitions. We further hypothesise about when top-K acquisition is the most detrimental to active learning.

Why Gumbel Noise? Intuitively, to select the k-th point in the acquisition batch, we want to take into account how much additional information (increase in acquisition scores) the still-to-be-selected additional K − k points will provide. As such, we want to model the maximum over all possible additional candidate points that are still to be selected to complete the acquisition batch. Empirically, acquisition scores are similar to a truncated exponential distribution ('80/20' rule), as visualized in Figures 3 and 7c. Note that this is a rough approximation: we do not claim that the distribution of acquisition scores is really truncated exponential. A sum of i.i.d. exponential variables follows the Erlang distribution (which is a special case of the Gamma distribution) and has an exponential tail. The maximum of a set of i.i.d. random variables that follow an exponential distribution, or a distribution with an exponential tail, is known to be well approximated by a Gumbel distribution in the sample limit (Gumbel, 1954)⁴, and thus the maximum over sums of such random variables also follows a Gumbel distribution. Concretely, we can use the following result from Garg et al. (2023):

Theorem 6.1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). *For i.i.d. random variables $X_1,\ldots,X_n \sim f_X$ with exponential tails, $\lim_{n\to\infty}\max_i(X_i)$ follows the Gumbel (GEV-1) distribution $G$. Furthermore, $G$ is max-stable, i.e. if $X_i \sim G$, then $\max_i(X_i) \sim G$ holds.*
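As a quick numerical check of this motivation (our own sketch, separate from the simulation in the paper's appendix §H): the maximum of n i.i.d. Exponential(1) variables, shifted by log n, is approximately standard Gumbel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Maxima of n i.i.d. Exponential(1) variables over many trials.
n, trials = 500, 10_000
maxima = rng.exponential(1.0, size=(trials, n)).max(axis=1)

# For Exponential(1), max - log(n) converges to a standard Gumbel.
shifted = maxima - np.log(n)

# Standard Gumbel quantiles are -log(-log(q)); compare a few of them.
qs = np.array([0.1, 0.5, 0.9])
print(np.round(np.quantile(shifted, qs), 2))  # approximately matches the line below
print(np.round(-np.log(-np.log(qs)), 2))      # [-0.83, 0.37, 2.25]
```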
However, it is unlikely that the increase in acquisition scores can be modeled well as i.i.d. exponential random variables with the *same* rate, but the Gumbel approximation also seems to hold empirically for the hypoexponential distribution, which is a sum of exponential distributions with *different* rates. We do not have a proof for this, but we present a numerical simulation in appendix §H. Overall, this motivates us to use a Gumbel distribution as a simple model for the increase in acquisition scores that the still-to-be-selected additional K − k points will provide, under the modelling assumption that the increase in acquisition scores at each step is exponentially distributed with *different* rates at each step. Zhan et al. (2022) provide a different analysis for the use of Gumbel noise in the context of active learning, as we relate in §4.

⁴See also the following Math StackExchange thread.

Acquisition Asymptotics of Bayesian Models. For well-specified and well-defined Bayesian parametric models, the posterior distribution of the model parameters converges to the true parameters as the number of data points increases (Van der Vaart, 2000). For such models, and assuming that the predictions are independent given the model parameters, the total correlation between the predictions decreases as the number of training points increases, as the posterior distribution of the model parameters becomes more concentrated around the true parameters:

$$\mathrm{TC}[Y_{1},\ldots,Y_{K}\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]\to0\quad\mathrm{as}\quad|\mathcal{D}^{\mathrm{train}}|\to\infty.\tag{16}$$

This can be proved by noting that in the infinite data limit, the posterior parameter distribution converges to the true model parameters, and the marginal distribution then factorizes. This means that the predictions become more independent as the number of training points increases and fully independent in the infinite data limit. The total correlation is defined as:

$$\mathrm{TC}[Y_{1},\ldots,Y_{K}\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]:=\underbrace{\sum_{i}\mathrm{H}[Y_{i}\mid x_{i},\mathcal{D}^{\mathrm{train}}]}_{\text{top-}K\ \text{Entropy}}-\underbrace{\mathrm{H}[Y_{1},\ldots,Y_{K}\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]}_{\text{Batch Entropy}}.\tag{17}$$

We can also write the total correlation as the difference between top-K BALD and BatchBALD:

$$\mathrm{TC}[Y_{1},\ldots,Y_{K}\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]=\underbrace{\sum_{i}\mathrm{I}[Y_{i};\Omega\mid x_{i},\mathcal{D}^{\mathrm{train}}]}_{\text{top-}K\ \text{BALD}}-\underbrace{\mathrm{I}[Y_{1},\ldots,Y_{K};\Omega\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]}_{\text{BatchBALD}}.\tag{18}$$

As the total correlation converges to 0, the top-K BALD term (the first term) becomes equal to the BatchBALD term (the second term on the right side), and the same happens for top-K entropy and 'Batch Entropy', which we could similarly define. Thus, for well-specified and well-defined Bayesian parametric models, the top-K acquisition functions will eventually become equivalent to the BatchBALD and 'Batch Entropy' acquisition functions as the number of training points increases. This tells us that top-K acquisition is the most detrimental to active learning in the earlier stages of learning, when the total correlation between the predictions is still high.
This is consistent with our empirical results below ('Increasing Top-K Analysis'). At the same time, as the number of training points increases and the model parameters concentrate, the expected information gain (BALD) also decreases. The mutual information with a deterministic variable is always 0, and thus:

$$\mathrm{I}[Y;\Omega\mid x,\mathcal{D}^{\mathrm{train}}]\to0\quad\mathrm{as}\quad|\mathcal{D}^{\mathrm{train}}|\to\infty.\tag{19}$$

This asymptotic behavior is a trivial but important result, as it tells us that the expected information gain (BALD) will eventually become uninformative as the number of training points increases, and no better than random acquisition, and the important question is: when? Given that we only have noisy estimators, this determines until when active learning is of use compared to random acquisition. Many different active learning methods that are considered non-Bayesian nevertheless approximate the expected information gain or the expected predictive information (Kirsch & Gal, 2022; Smith et al., 2023), which is an expected total correlation. Hence, the considerations apply to those methods, too. Finally, we observe that estimators such as BatchBALD, which utilize Monte-Carlo samples for parameter approximations, are inherently limited by the logarithm of the total number M of these samples, $\log M$.

![12_image_0.png](12_image_0.png)

Figure 5: *Rank correlations for BALD scores on MNIST between the initial scores and later scores of the top- or bottom-scoring 1%, 10% and 100% of test points (smoothed with a size-10 Parzen window).* Rank-orders decorrelate faster for the most informative samples and in the early stages of training. The top scorers' ranks *anti-correlate* after roughly 40 (100) acquisitions, unlike the bottom ones. Later in training, the acquisition scores stay more strongly correlated. This suggests the acquisition size could be increased later in training.

Figure 6: *Top-K acquisition hurts less later in training (BALD on MNIST).* At training set sizes 20 (t = 0) and 120 (t = 100) (blue), we keep acquiring samples using the BALD scores from those two steps. With scores frozen at size 20 (orange), the model performs well for ≈ 20 acquisitions; frozen at size 120 (green), for ≈ 50; see §6.

This constraint implies that their informativeness can diminish rapidly. Concretely, consider an empirical estimator $\hat{\mathrm{I}}[\cdot;\Omega]$ built using Monte-Carlo samples $\omega_1,\ldots,\omega_M$. This is analogous to computing the exact mutual information $\mathrm{I}[\cdot;\hat{\Omega}]$ with the 'empirical' random variable $\hat{\Omega}$, which uniformly samples from $\omega_1,\ldots,\omega_M$. Given that the discrete mutual information is restricted by the entropy of its terms, we have:

$$\hat{\mathrm{I}}[\cdot;\Omega]=\mathrm{I}[\cdot;\hat{\Omega}]\leq\mathrm{H}[\hat{\Omega}]=\log M.$$

For instance, BatchBALD employs a greedy approach to select the t-th acquisition sample in its batch. It does this by maximizing the empirical $\hat{\mathrm{I}}[Y;\Omega\mid x,Y_{t-1},x_{t-1},\ldots,Y_{1},x_{1},\mathcal{D}^{\mathrm{train}}]$ over the subsequent candidate samples denoted by $x$. We can represent this relationship as:

$$\log M\geq\hat{\mathrm{I}}[Y_{1},\ldots,Y_{K};\Omega\mid x_{1},\ldots,x_{K},\mathcal{D}^{\mathrm{train}}]=\sum_{i=1}^{K}\hat{\mathrm{I}}[Y_{i};\Omega\mid x_{i},Y_{i-1},x_{i-1},\ldots,Y_{1},x_{1},\mathcal{D}^{\mathrm{train}}].$$

From the above equation, as K grows, the estimated $\hat{\mathrm{I}}[Y_{K};\Omega\mid x_{K},Y_{K-1},x_{K-1},\ldots,Y_{1},x_{1},\mathcal{D}^{\mathrm{train}}]$ approaches zero since it is restricted by $\log M$.
For a scenario with M = 100 parameter samples (leading to $\log_{10} M = 2$), BatchBALD might rapidly lose its informativeness after just two acquisitions in a classification scenario involving 10 categories. This situation arises if the pool set contains a minimum of two highly diverse, that is uncorrelated, data points with maximum disagreement.

Rank Correlations Across Acquisitions. In Section 3, we made the following assumptions: (1) the acquisition scores $s_t$ at step $t$ are a proxy for the scores $s_{t'}$ at step $t' > t$; (2) the larger $t' - t$ is, the worse a proxy $s_t$ is for $s_{t'}$; (3) this effect is the largest for the most informative points. We demonstrate these empirically by examining the Spearman rank correlation between scores during acquisition. Specifically, we train a model for n steps using BALD as a single-point acquisition function. We compare the rank order at each step to the starting rank order at step t. To denoise the rankings across n, we smooth the rank correlations with a Parzen window of size 10, and to reduce the effect of noise on the rank order, we round all scores to 2 decimal places. This especially removes unimportant rank changes for points with low scores around 0. Figure 1 shows that acquisition scores become less correlated as more points are acquired. Figure 5a shows this in more detail for the top and bottom 1%, 10% or 100% of scorers of the test set across acquisitions starting at step t = 0 for a model initialised with 20 points. The top-10% scoring points (solid green) quickly become uncorrelated across acquisitions and even become *anti-correlated*. In contrast, the points overall (solid blue) correlate well over time (although they have a much weaker training signal on average). This result supports all three of our hypotheses.

![13_image_0.png](13_image_0.png)

Figure 7: *AUROCs for BALD scores on MNIST between the initial scores and later scores of the top- or bottom-scoring 1%, 10% and 50% of test points (smoothed with a size-10 Parzen window).* AUROC between original points as 'ground truth' and later scores as predictors. This is equivalent to the probability that the acquisition score at n for a point in t = 0's top or bottom 1%, etc., is larger than that of points outside. This tells us how likely other points outside the batch are to have higher acquisition scores. It ignores the ranking of points otherwise. (a, b) Points in the top quantiles are superseded by other points in the top quantiles in the later acquisitions to a large degree. This is much more pronounced early in the training than later. The bottom quantiles are more stable. (c) The overall score distributions at steps t = 0, 100 are visualized and the relevant top and bottom quantiles are marked.

At the same time, we see that as training progresses, and we converge towards the best model, the order of scores becomes more stable across acquisitions. In Figure 5b the model begins with 120 points (t = 100), rather than 20 (t = 0). Here, the most informative points are less likely to change their rank: even the top-1% ranks do not become *anti-correlated*, only decorrelated. Thus, we hypothesise that further in training, we might be able to choose a larger K. We do not display the correlations for the bottom 1% of scorers as their scores are close to 0 throughout training and thus noisy and uninformative; see also Figure 7c for this.
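A minimal sketch of this rank-correlation diagnostic (ours; it assumes a `scores_per_step` array of per-step pool scores has been logged, and uses a uniform moving average as a simple stand-in for the Parzen window):

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.ndimage import uniform_filter1d

def rank_correlation_curve(scores_per_step, window=10, decimals=2):
    """scores_per_step: [T, N] acquisition scores of N points over T steps.
    Returns smoothed Spearman correlations of each step against step 0."""
    rounded = np.round(scores_per_step, decimals)      # drop meaningless swaps near 0
    corrs = np.array([spearmanr(rounded[0], rounded[t])[0]
                      for t in range(len(rounded))])
    return uniform_filter1d(corrs, size=window)        # moving-average smoothing

# To reproduce e.g. the top-10% curve, restrict to the initially top-scoring points:
# top = np.argsort(-scores_per_step[0])[: scores_per_step.shape[1] // 10]
# curve = rank_correlation_curve(scores_per_step[:, top])
```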
Overall, this analysis can be confounded by noisy samples and by swaps in rank order between samples with similar scores that are not meaningful in the sense that they would not influence batch acquisition. Thus, to provide a different analysis, we also consider the more direct question in Figure 7 of how likely other samples are to have higher acquisition scores at t + n than the top samples from t, for different quantiles (1%, 10%, 50%) of the test set. As a sanity check, we also examine the bottom quantiles. This is equivalent to computing the *AUROC* between the original points as 'ground truth' and later scores as predictors: for acquisition step n, the AUROC is

$$p\left(S_{n}^{t,\,\text{top/bottom }p\%}\gtrless S_{n}^{t,\,\text{bottom/top }1-p\%}\right)\quad\text{with}\quad S_{n}^{t,\,\text{top/bottom }p\%}:=\left\{s_{t+n,i}:s_{t,i}\in\text{top/bottom }p\%\text{ of }\{s_{t,j}\}_{j}\right\}.$$

Specifically, we set up a binary classification with the top or bottom 1%, 10% or 50% of the test set as positive and the rest as negative. This again helps us quantify how much the scores meaningfully change across acquisition steps. These results match the previous ones and provide another validation of the mentioned assumptions.

Increasing Top-K Analysis. Another way to investigate the effect of top-K selection is to freeze the acquisition scores during training and then continue single-point 'active learning' as if those were the correct scores. Comparing this to the performance of regular active learning with updated single-point scores allows us to examine how well earlier scores perform as proxies for later scores. We perform this toy experiment on MNIST, showing that freezing scores early on greatly harms performance while doing so later has only a small effect (Figure 6). For frozen scores at a training set size of 20 (73% accuracy, t = 0), the accuracy matches single-acquisition BALD up to a training set size of roughly 40 (dashed orange lines) before diverging to a lower level. But when freezing the scores of a more accurate model, at a training set size of 120 labels (93% accuracy, t = 100), selecting the next fifty points according to those frozen scores performs indistinguishably from step-by-step acquisition (dashed green lines). This result shows that top-K acquisition hurts less later in training but can negatively affect performance at the beginning of training.

![14_image_0.png](14_image_0.png)

Figure 8: *Effect of changing β.* (a) *Repeated-MNIST x4 (5 trials):* PowerBALD outperforms BatchBALD for β = 8. (b) *EMNIST 'ByMerge' (5 trials):* SoftmaxBALD with β = 4 performs best. (c) *IHDP (400 trials):* At a high temperature (β = 0.1), CausalBALD with power acquisition behaves like random acquisition. As the temperature decreases, the performance improves (lower $\sqrt{\epsilon_{\mathrm{PEHE}}}$), surpassing top-K acquisition. Both experiments use an acquisition size of 10.

These observations lead us to ask whether we could dynamically change the acquisition size: with smaller acquisition batches at the beginning and larger ones towards the end of active learning. We leave the exploration of this for future work.

## 6.1 Ablation: Changing β

So far, we have set β = 1 in the spirit of examining a simple baseline without additional hyperparameters. The results above show that this already works well and matches the performance of much more expensive methods, raising questions about their value. In addition, however, tuning β may be able to further improve performance.
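As a reminder of how β enters the acquisition step (cf. Proposition 3.1 and the reference implementation in Appendix G), the following sketch perturbs scores with Gumbel(0, β⁻¹) noise and takes the top-K, which draws a batch from Categorical ∝ exp(β s) without replacement; power acquisition applies the same trick to log-scores. The scores below are placeholders, not real BALD values.

```python
# Sketch of the role of the coldness parameter beta in batch selection:
# beta -> 0 approaches uniform sampling; large beta approaches top-K.
import numpy as np

def stochastic_batch(scores_N, k, beta, rng):
    noise_N = rng.gumbel(loc=0.0, scale=1.0 / beta, size=len(scores_N))
    return np.argsort(scores_N + noise_N)[-k:][::-1]  # acquired batch indices

rng = np.random.default_rng(0)
scores_N = np.linspace(0.0, 1.0, 1000)  # hypothetical scores; best index is 999
for beta in (0.1, 1.0, 8.0, 100.0):
    batch = stochastic_batch(scores_N, k=10, beta=beta, rng=rng)
    print(beta, sorted(batch.tolist(), reverse=True)[:3])  # top acquired indices
```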
In the following, we show that other values of β can yield even higher performance on Repeated-MNIST, EMNIST, and when estimating causal treatment effects; we provide many additional results in Appendix F.

Repeated-MNIST & EMNIST. In Figure 8a, we see that for PowerBALD the best-performing value, β = 8, even outperforms BatchBALD. Similarly, tuning β can improve the performance of other strategies as well, as we see for SoftmaxBALD on EMNIST, where β = 4 performs better than β = 8 and β = 1, showing that the best β depends on the dataset.

Causal Treatment Effects: Infant Health Development Programme. Active learning for Conditional Average Treatment Effect (CATE) estimation (Heckman et al., 1997; 1998; Hahn, 1998; Abrevaya et al., 2015) on data from the Infant Health and Development Program (IHDP) estimates the causal effect of treatments on an infant's health from observational data. Statistical estimands of the CATE are obtainable from observational data under certain assumptions. Jesson et al. (2021) show how to use active learning to acquire data for label-efficient estimation. Among other subtleties, this prioritises the data for which matched treated/untreated pairs are available. We follow the experiments of Jesson et al. (2021) on both synthetic data and the semi-synthetic IHDP dataset (Hill, 2011), a commonly used benchmark for causal-effects estimation. In Figure 8c, we show that power acquisition performs significantly better than both top-K and uniform acquisition, using an acquisition size of 10 in all cases. We provide additional results on semi-synthetic data in Appendix F.3. Note that methods such as BADGE and BatchBALD are not well-defined for causal-effect estimation, while stochastic batch acquisition remains applicable and is effective when fine-tuning β. Performance on these tasks is measured using the expected Precision in Estimation of Heterogeneous Effect (PEHE) (Hill, 2011) such that $\sqrt{\epsilon_{\mathrm{PEHE}}}=\sqrt{\mathbb{E}[(\tilde{\tau}(X)-\tau(X))^{2}]}$ (Shalit et al., 2017), where $\tilde{\tau}$ is the estimated CATE and τ is the CATE (i.e., a form of RMSE).

Limitations. Although we highlight the possibility for future work to adapt β to specific datasets or score functions, our aim is not to offer a practical recipe for this to practitioners. Our focus is on showing how even the simplest form of stochastic acquisition already raises questions for some recent, more complex methods.

## 7 Discussion & Conclusion

Our experiments demonstrate that the stochastic sampling approach we have examined is orders of magnitude faster than sophisticated batch-acquisition strategies like BADGE and BatchBALD while retaining comparable performance in settings across computer vision, NLP, and causal inference. Compared to the flawed top-K batch acquisition heuristic, it is never worse: we see no reason to continue using top-K acquisition. Importantly, our work raises serious questions about these current methods. If they fail to outperform such a simple baseline in a wide range of settings, do they model the interaction between points sufficiently well? If so, are the scores themselves unreliable? We call on future work in batch active learning to at least demonstrate that it can outperform simple stochastic batch acquisition strategies. At the same time, this opens doors for improved methods. Although we only put forward a naive model due to its computational and mathematical simplicity, future work can explore more sophisticated modelling of the predicted score changes that takes the current model and dataset into account.
In its simplest form, this might mean choosing the β hyperparameter of the acquisition distribution based on the dataset and adapting it online. Our experiments also highlight that the acquisition size could be adapted dynamically, with larger batch sizes acceptable in later acquisition steps. ## Acknowledgements The authors would like to thank their anonymous TMLR reviewers for their kind, constructive and helpful feedback during the review process, which has significantly improved this work. We would also like to thank Freddie Bickford Smith and Tom Rainforth as well as the members of OATML in general for their feedback at various stages of the project. SF is supported by the EPSRC via the Centre for Doctoral Training in Cybersecurity at the University of Oxford as well as Christ Church, University of Oxford. AK is supported by the UK EPSRC CDT in Autonomous Intelligent Machines and Systems (grant reference EP/L015897/1). ## References Jason Abrevaya, Yu-Chin Hsu, and Robert P Lieli. Estimating conditional average treatment effects. Journal of Business & Economic Statistics, 33(4):485–505, 2015. Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=ryghZJBKPS. Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, and Sham M. Kakade. Gone fishing: Neural active learning with fisher embeddings. In *Advances in Neural Information Processing Systems 34: Annual* Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 2021. Parmida Atighehchian, Frédéric Branchaud-Charron, and Alexandre Lacoste. Bayesian active learning for production, a systematic study and a reusable library. *arXiv preprint arXiv:2006.09916*, 2020. Les E. Atlas, David A. Cohn, and Richard E. Ladner. Training connectionist networks with queries and selective sampling. In *Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver,* Colorado, USA, November 27-30, 1989], 1989. Javad Azimi, Alan Fern, Xiaoli Zhang Fern, Glencora Borradaile, and Brent Heeringa. Batch active learning via coordinated matching. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress, 2012. URL http://icml.cc/2012/papers/607.pdf. Cenk Baykal, Lucas Liebenwein, Dan Feldman, and Daniela Rus. Low-regret active learning. arXiv preprint arXiv:2104.02822, 2021. Sarah Bird, Miro Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. Fairlearn: A toolkit for assessing and improving fairness in AI. Technical Report MSR-TR-2020-32, Microsoft, 2020. Erdem Bıyık, Kenneth Wang, Nima Anari, and Dorsa Sadigh. Batch active learning using determinantal point processes. *arXiv preprint arXiv:1906.07975*, 2019. Frédéric Branchaud-Charron, Parmida Atighehchian, Pau Rodríguez, Grace Abuhamad, and Alexandre Lacoste. Can active learning preemptively mitigate fairness issues? *ICLR Workshop on Responsable AI*, 2021. Klaus Brinker. Incorporating diversity in active learning with support vector machines. In Tom Fawcett and Nina Mishra (eds.), Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA, pp. 59–66. 
AAAI Press, 2003. URL http://www.aaai. org/Library/ICML/2003/icml03-011.php. Colin Campbell, Nello Cristianini, and Alexander J. Smola. Query learning with large margin classifiers. In Pat Langley (ed.), Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Stanford University, Stanford, CA, USA, June 29 - July 2, 2000, pp. 111–118. Morgan Kaufmann, 2000. Nicolò Cesa-Bianchi, Claudio Gentile, Gábor Lugosi, and Gergely Neu. Boltzmann exploration done right. Advances in neural information processing systems, 30, 2017. Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. Batch active learning at scale. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 2021. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: an extension of MNIST to handwritten letters. *arXiv preprint arXiv:1702.05373*, 2017. Sebastian Farquhar, Yarin Gal, and Tom Rainforth. On statistical bias in active learning: How and when to fix it. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021. Ronald Aylmer Fisher and Leonard Henry Caleb Tippett. Limiting forms of the frequency distribution of the largest or smallest member of a sample. In *Mathematical proceedings of the Cambridge philosophical* society. Cambridge University Press, 1928. Richard Fredlund, Richard M. Everson, and Jonathan E. Fieldsend. A bayesian framework for active learning. In *International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 18-23 July, 2010*, 2010. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 1183–1192. PMLR, 2017. Divyansh Garg, Joey Hejna, Matthieu Geist, and Stefano Ermon. Extreme q-learning: Maxent rl without entropy. *arXiv preprint arXiv:2301.02328*, 2023. Emil Julius Gumbel. *Statistical theory of extreme values and some practical applications: a series of lectures*, volume 33. US Government Printing Office, 1954. Yuhong Guo and Dale Schuurmans. Discriminative batch mode active learning. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis (eds.), *Advances in Neural Information Processing Systems 20,* Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007, pp. 593–600. Curran Associates, Inc., 2007. URL https: //proceedings.neurips.cc/paper/2007/hash/ccc0aa1b81bf81e16c676ddb977c5881-Abstract.html. Jinyong Hahn. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. *Econometrica*, pp. 315–331, 1998. James J Heckman, Hidehiko Ichimura, and Petra E Todd. Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. *The review of economic studies*, 64(4):605–654, 1997. James J Heckman, Hidehiko Ichimura, and Petra Todd. Matching as an econometric evaluation estimator. The review of economic studies, 65(2):261–294, 1998. Jennifer L Hill. Bayesian nonparametric modeling for causal inference. 
*Journal of Computational and* Graphical Statistics, 20(1):217–240, 2011. Steven CH Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Batch mode active learning and its application to medical image classification. In *Proceedings of the 23rd international conference on Machine learning*, pp. 417–424, 2006. Geoff Holmes, Eibe Frank, Dale Fletcher, and Corey Sterling. Efficiently correcting machine learning: considering the role of example ordering in human-in-the-loop training of image classification models. In IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022, 2022. Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. *arXiv preprint arXiv:1112.5745*, 2011. Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, and Yarin Gal. Causalbald: Deep bayesian active learning of outcomes to infer treatment-effects from observational data. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information* Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 2021. Cem Kalkanli and Ayfer Özgür. Batched thompson sampling. In *Advances in Neural Information Processing* Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 2021. Andreas Kirsch and Yarin Gal. Unifying approaches in active learning and active sampling via fisher information and information-theoretic quantities. *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=UVDAKQANOW. Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 7024–7035, 2019. Wouter Kool, Herke van Hoof, and Max Welling. Stochastic beams and where to find them: The gumbel-top-k trick for sampling sequences without replacement. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019,* Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 3499–3508. PMLR, 2019. Andreas Krause and Daniel Golovin. Submodular function maximization. *Tractability*, 3:71–104, 2014. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Alex Kulesza and Ben Taskar. k-dpps: Fixed-size determinantal point processes. In *Proceedings of the 28th* International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, 2011. Alexandre Lacoste, Pau Rodríguez López, Frederic Branchaud-Charron, Parmida Atighehchian, Massimo Caccia, Issam Hadj Laradji, Alexandre Drouin, Matt Craddock, Laurent Charlin, and David Vázquez. Synbols: Probing learning algorithms with synthetic datasets. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 
URL https://proceedings.neurips.cc/paper/2020/hash/ 0169cf885f882efd795951253db5cdfb-Abstract.html. Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, 2019. Zhiming Luo, Frederic Branchaud-Charron, Carl Lemaire, Janusz Konrad, Shaozi Li, Akshaya Mishra, Andrew Achkar, Justin A. Eichel, and Pierre-Marc Jodoin. MIO-TCD: A new benchmark dataset for vehicle classification and localization. *IEEE Trans. Image Process.*, 2018. Chris J. Maddison, Daniel Tarlow, and Tom Minka. A* sampling. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 3086–3094, 2014. Shreshth A Malik, Salem Lahlou, Andrew Jesson, Moksh Jain, Nikolay Malkin, Tristan Deleu, Yoshua Bengio, and Yarin Gal. Batchgfn: Generative flow networks for batch active learning. arXiv preprint arXiv:2306.15058, 2023. Alexander McFarlane Mood. Introduction to the theory of statistics. 1950. Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deep deterministic uncertainty: A simple baseline. *arXiv e-prints*, pp. arXiv–2102, 2021. Chelsea Murray, James Urquhart Allingham, Javier Antorán, and José Miguel Hernández-Lobato. Depth uncertainty networks for active learning. *arXiv preprint arXiv:2112.06796*, 2021. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. Jerzy Neyman. edited and translated by dorota m. dabrowska and terrence p. speed (1990). on the application of probability theory to agricultural experiments. essay on principles. section 9. *Statistical Science*, 5(4): 465–472, 1923. Robert Pinsler, Jonathan Gordon, Eric T. Nalisnick, and José Miguel Hernández-Lobato. Bayesian batch active learning as sparse subset approximation. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 6356–6367, 2019. URL https://proceedings.neurips.cc/ paper/2019/hash/84c2d4860a0fc27bcf854c444fb8b400-Abstract.html. Aleksandr Rubashevskii, Daria Kotova, and Maxim Panov. Scalable batch acquisition for deep bayesian active learning. In *Proceedings of the 2023 SIAM International Conference on Data Mining (SDM)*, pp. 739–747. SIAM, 2023. Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. *Journal* of educational Psychology, 66(5):688, 1974. Donald B Rubin. Randomization analysis of experimental data: The fisher randomization test comment. Journal of the American Statistical Association, 75(371):591–593, 1980. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. 
In Yoshua Bengio and Yann LeCun (eds.), *4th International Conference on Learning Representations, ICLR 2016,* San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/ abs/1511.05952. Greg Schohn and David Cohn. Less is more: Active learning with support vector machines. In Pat Langley (ed.), *Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Stanford* University, Stanford, CA, USA, June 29 - July 2, 2000, pp. 839–846. Morgan Kaufmann, 2000. Jasjeet S Sekhon. The neyman-rubin model of causal inference and estimation via matching methods. The Oxford handbook of political methodology, 2:1–32, 2008. Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/ forum?id=H1aIuk-RW. Burr Settles. Active Learning Literature Survey. *Machine Learning*, 2010. Uri Shalit, Fredrik D. Johansson, and David A. Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3076–3085. PMLR, 2017. URL http://proceedings.mlr. press/v70/shalit17a.html. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http: //arxiv.org/abs/1409.1556. Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, and Tom Rainforth. Prediction-oriented bayesian active learning. *arXiv preprint arXiv:2304.08151*, 2023. Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, and Stefan Bauer. Interventions, where and how? experimental design for causal models at scale. *arXiv preprint arXiv:2203.02016*, 2022. Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, and Yarin Gal. Improving deterministic uncertainty estimation in deep learning for classification and regression. *arXiv preprint arXiv:2102.11409*, 2021. Aad W Van der Vaart. *Asymptotic statistics*, volume 3. Cambridge university press, 2000. Sahil Verma and Julia Rubin. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, FairWare@ICSE 2018, Gothenburg, Sweden, May 29, 2018, 2018. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, 2020. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. Xueying Zhan, Yaowei Wang, and Antoni B. Chan. Asymptotic optimality for active learning processes. 
In *Uncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands*, 2022.

## A Ethical Impact

We do not foresee any ethical risks related to this work. Insofar as the sampling methods we have examined reduce computational costs, applications might benefit from reduced resource consumption. The methods appear to be as good as or better than alternatives on evaluations examining the ability to learn from data with under-represented groups and on evaluations that measure the difference between performance for the most- and least-represented groups, which may aid algorithmic fairness (see Appendix C.5).

## B Method

B.1 Other Scoring Functions

Following Gal et al. (2017), we also examine using variation ratios (least confidence) and standard deviation as scoring functions.

Variation Ratio. Also known as *least confidence*, the variation ratio is the complement of the probability of the most confident class prediction:

$$S_{\mathrm{variation\text{-}ratios}}(i;\mathcal{I}^{\mathrm{train}}):=1-\max_{y}\mathrm{p}(y\mid X=x_{i}).\tag{20}$$

This scoring function is non-negative, and a score of 0 means that the sample is uninformative: the respective prediction is one-hot, which means that the expected information gain is also 0, as can easily be verified. Thus, variation ratios matches the intuitions behind power acquisition.

Standard Deviation. The standard deviation score function measures the sum of the class probability deviations and is closely related to the BALD scores:

$$S_{\mathrm{std\text{-}dev}}(i;\mathcal{I}^{\mathrm{train}}):=\sum_{y}\sqrt{\mathrm{Var}_{\mathrm{p}(\omega)}[\mathrm{p}(y\mid X=x_{i},\omega)]}.\tag{21}$$

This scoring function is also non-negative, and no variance for the predictions implies a zero expected information gain and thus an uninformative sample. Thus, the standard deviation should also perform well with power acquisition.

## B.2 Proof Of Proposition 3.1

First, we remind the reader that a random variable G is Gumbel distributed, G ∼ Gumbel(µ; β), when its cumulative distribution function follows $\mathrm{p}(G\leq g)=\exp(-\exp(-\frac{g-\mu}{\beta}))$. Furthermore, the Gumbel distribution is closed under translation and positive scaling:

Lemma B.1. *Let G ∼ Gumbel(µ; β) be a Gumbel distributed random variable, then:*

$$\alpha G+d\sim\mathrm{Gumbel}(d+\alpha\mu;\alpha\beta).\tag{22}$$

Proof. We have $\mathrm{p}(\alpha G+d\leq x)=\mathrm{p}(G\leq\frac{x-d}{\alpha})$. Thus, we have:

$$\mathrm{p}(\alpha G+d\leq x)=\exp\left(-\exp\left(-\frac{\frac{x-d}{\alpha}-\mu}{\beta}\right)\right)\tag{23}$$
$$=\exp\left(-\exp\left(-\frac{x-(d+\alpha\mu)}{\alpha\beta}\right)\right)\tag{24}$$
$$\Leftrightarrow\quad\alpha G+d\sim\mathrm{Gumbel}(d+\alpha\mu;\alpha\beta).\tag{25}$$

□

We can then easily prove Proposition 3.1 using Theorem 1 from Kool et al. (2019), which we present here slightly reformulated to fit our notation:

![21_image_0.png](21_image_0.png)

Figure 9: *MIO-TCD Dataset* is designed to include common artifacts from production data. The size and quality of the images vary greatly between crops; from high-quality cameras on sunny days to low-quality cameras at night. (a) shows an example of clean samples that can be clearly assigned to a class. (b), (c), (d) and (e) show the different categories of noise. (b) shows an example of duplicates that exist in the dataset. (c) is a good example where the assigned class is subject to interpretation: motorcycle or bicycle?
(d) is a sample with heavy compression artefacts and (e) is an example of samples with low resolution, which again is considered a hard example for the model to learn.

Lemma B.2. *For k ≤ n, let* $I_1^*,\ldots,I_k^*=\arg\operatorname{top}_k\{s_i+\epsilon_i\}_i$ *with* $\epsilon_i\sim\mathrm{Gumbel}(0;1)$, *i.i.d. Then* $I_1^*,\ldots,I_k^*$ *is an (ordered) sample without replacement from the* $\mathrm{Categorical}\left(\frac{\exp s_i}{\sum_{j\in N}\exp s_j},i\in\{1,\ldots,n\}\right)$ *distribution, i.e., for a realization* $i_1^*,\ldots,i_k^*$ *it holds that*

$$P\left(I_{1}^{*}=i_{1}^{*},\ldots,I_{k}^{*}=i_{k}^{*}\right)=\prod_{j=1}^{k}{\frac{\exp s_{i_{j}^{*}}}{\sum_{\ell\in N_{j}^{*}}\exp s_{\ell}}}$$

*where* $N_j^*=N\setminus\{i_1^*,\ldots,i_{j-1}^*\}$ *is the domain (without replacement) for the j-th sampled element.*

Now, it is easy to prove the proposition:

Proposition 3.1. *For scores* $s_i$, $i\in\{1,\ldots,n\}$, *and* $k\leq n$ *and* $\beta>0$, *if we draw* $\epsilon_i\sim\mathrm{Gumbel}(0;\beta^{-1})$ *independently, then* $\arg\operatorname{top}_k\{s_i+\epsilon_i\}_i$ *is an (ordered) sample without replacement from the categorical distribution* $\mathrm{Categorical}(\exp(\beta s_i)/\sum_j\exp(\beta s_j),i\in\{1,\ldots,n\})$.

Proof. As $\epsilon_i\sim\mathrm{Gumbel}(0;\beta^{-1})$, define $\epsilon_i':=\beta\epsilon_i\sim\mathrm{Gumbel}(0;1)$. Further, let $s_i':=\beta s_i$. Applying Lemma B.2 to $s_i'$ and $\epsilon_i'$, $\arg\operatorname{top}_k\{s_i'+\epsilon_i'\}_i$ yields (ordered) samples without replacement from the categorical distribution $\mathrm{Categorical}(\frac{\exp(\beta s_i)}{\sum_j\exp(\beta s_j)},i\in\{1,\ldots,n\})$. However, multiplication by β does not change the resulting indices of $\arg\operatorname{top}_k$:

$$\arg\operatorname{top}_{k}\{s_{i}^{\prime}+\epsilon_{i}^{\prime}\}_{i}=\arg\operatorname{top}_{k}\{s_{i}+\epsilon_{i}\}_{i},\tag{26}$$

concluding the proof. □

## C Experiments

C.1 Experimental Setup & Compute

Frameworks. We use PyTorch. Repeated-MNIST and EMNIST experiments use PyTorch Ignite. Synbols and MIO-TCD experiments use the BaaL library: https://github.com/baal-org/baal (Atighehchian et al., 2020). Predictive parity is calculated using FairLearn (Bird et al., 2020). The CausalBALD experiments use https://github.com/anndvision/causal-bald (Jesson et al., 2021). The experiments comparing to ACS-FW (Pinsler et al., 2019) use the original authors' implementation with added support for stochastic batch acquisitions: https://github.com/BlackHC/active-bayesian-coresets/releases/tag/stoch_batch_acq_paper. The Repeated-MNIST experiments were run using https://github.com/BlackHC/active_learning_redux/releases/tag/stoch_batch_acq, and the results are also available on WandB (https://wandb.ai/oatml-andreas-kirsch/oatml-snow-stoch-acq).

Compute. Results shown in Table 1 were run inside Docker containers with 8 CPUs (2.2 GHz) and 32 GB of RAM. Other experiments were run on similar machines with Titan RTX GPUs. The Repeated-MNIST and EMNIST experiments take about 5000 GPU hours. The MIO, Synbols and CLINC-150 experiments take about 19000 GPU hours. The CausalBALD experiments take about 1000 GPU hours.

Dataset Licenses. Repeated-MNIST is based on MNIST, which is made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license. The EMNIST dataset is made available as CC0 1.0 Universal Public Domain Dedication. Synbols is a dataset generator. MIO-TCD is made available under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. CLINC-150 is made available under the terms of the Creative Commons Attribution 3.0 Unported License.
## C.1.1 Runtime Measurements

The synthetic dataset used for benchmarking has 4,096 features, 10 classes, and 10,000 pool points. VGG-16 models (Simonyan & Zisserman, 2015) were used to sample predictions and latent embeddings.

## C.1.2 Repeated-MNIST

The Repeated-MNIST dataset is constructed following Kirsch et al. (2019), with duplicated examples from MNIST and isotropic Gaussian noise added to the input images (standard deviation 0.1). We use the same setup as Kirsch et al. (2019): a LeNet-5-like architecture with ReLU activations instead of tanh and added dropout. The model obtains 99% test accuracy when trained on the full MNIST dataset. Specifically, the model is made up of two blocks of a convolution, dropout, max-pooling, and ReLU with 32 and 64 channels and 5x5 kernel size, respectively. As classifier head, a two-layer MLP with 128 hidden units (and 10 output units) is used that includes dropout between the layers. We use a dropout probability of 0.5 everywhere. The model is trained with early stopping using the Adam optimiser and a learning rate of 0.001. We sample predictions using 100 MC-dropout samples for BALD. Weights are reinitialized after each acquisition step.

## C.1.3 EMNIST

We follow the setup from Kirsch et al. (2019) with 20 MC-dropout samples. We use a similar model as for Repeated-MNIST but with three blocks instead of two. Specifically, we use 32, 64, and 128 channels and 3x3 kernel size. This is followed by a 2x2 max-pooling layer before the classifier head. The classifier head is a two-layer MLP, but with 512 hidden units instead of 128. Again, we use a dropout probability of 0.5 everywhere.

## C.1.4 Synbols & MIO-TCD

The full list of hyperparameters for the Synbols and MIO-TCD experiments is presented in Table 3. Our experiments are built using the BaaL library (Atighehchian et al., 2020). We compute the predictive parity using FairLearn (Bird et al., 2020). We use a VGG-16 model (Simonyan & Zisserman, 2015) trained for 10 epochs, using Monte Carlo dropout for acquisition (Gal et al., 2017) with 20 dropout samples.

Table 3: Hyper-parameters used in Section 5 and C.5

| Hyperparameter             | Value         |
|----------------------------|---------------|
| Learning rate              | 0.001         |
| Optimiser                  | SGD           |
| Weight decay               | 0             |
| Momentum                   | 0.9           |
| Loss function              | Cross-entropy |
| Training duration (epochs) | 10            |
| Batch size                 | 32            |
| Dropout p                  | 0.5           |
| MC iterations              | 20            |
| Query size                 | 100           |
| Initial set                | 500           |

In Figure 9, we show a set of images with common problems that can be found in MIO-TCD.

## C.1.5 CLINC-150

We fine-tune a pretrained DistilBERT model from HuggingFace (Wolf et al., 2020) on CLINC-150 for 5 epochs with Adam as optimiser. Estimating epistemic uncertainty in transformer models is an open research question, and hence, we do not report results using BALD and focus on entropy instead.

## C.1.6 CausalBALD

Using the Neyman-Rubin framework (Neyman, 1923; Rubin, 1974; Sekhon, 2008), the CATE is formulated in terms of the potential outcomes, $Y_t$, of treatment levels t ∈ {0, 1}. Given observable covariates, X, the CATE is defined as the expected difference between the potential outcomes at the measured value X = x: $\tau(x)=\mathbb{E}[Y_1-Y_0\mid X=x]$. This causal quantity is fundamentally unidentifiable from observational data without further assumptions because it is not possible to observe both $Y_1$ and $Y_0$ for a given unit.
However, under the assumptions of consistency, non-interference, ignorability, and positivity, the CATE is identifiable as the statistical quantity $\tilde{\tau}(x)=\mathbb{E}[Y\mid T=1,X=x]-\mathbb{E}[Y\mid T=0,X=x]$ (Rubin, 1980). Jesson et al. (2021) define BALD acquisition functions for actively learning CATE functions from observational data when the cost of acquiring an outcome, y, for a given covariate and treatment pair, (x, t), is high. Because we do not have labels for $Y_1$ and $Y_0$ for each (x, t) pair in the dataset, their acquisition function focusses on acquiring data points (x, t) for which it is likely that a matched pair (x, 1 − t) exists in the pool data or has already been acquired at a previous step. We follow their experiments on their synthetic dataset with limited positivity and the semi-synthetic IHDP dataset (Hill, 2011). Details of the experimental setup are given in Jesson et al. (2021); we use their provided code and implement the power acquisition function. The settings for the causal inference experiments are identical to those used in Jesson et al. (2021), using the IHDP dataset (Hill, 2011). Like them, we use a Deterministic Uncertainty Estimation model (van Amersfoort et al., 2021), which is initialised with 100 data points; we acquire 10 data points per acquisition batch for 38 steps. The dataset has 471 pool points and a 201-point validation set.

## C.1.7 ACS-FW

We compare to the experiments in the paper 'Bayesian Batch Active Learning as Sparse Subset Approximation' (Pinsler et al., 2019), which introduces the ACS-FW algorithm for batch acquisition. For its deep learning experiments, the paper uses a ResNet feature extractor, followed by a Bayesian multi-class classification model on the final layer. Since exact inference is intractable, variational inference is used with a factorized Gaussian posterior approximation. As prior, a zero-mean Gaussian is used, and Monte-Carlo samples are drawn to approximate the predictive distribution.

C.2 Repeated-MNIST

![24_image_0.png](24_image_0.png)

Figure 10: *Repeated-MNIST x4 (5 trials): acquisition size ablation for BADGE.* Acquisition size 40 performs best out of {10, 20, 40} for repetition factor 1, acquisition size 20 performs best for repetition factor 2, and all three perform similarly for repetition factor 4.

BADGE Ablation. In Figure 10, we see that BADGE performs best with acquisition size 20 on Repeated-MNIST x4 overall. BADGE 40 and BADGE 20 have the highest final accuracy compared to BADGE 10, while BADGE 20 performs better than BADGE 40 for small training set sizes.

![24_image_1.png](24_image_1.png)

Figure 11: *Repeated-MNIST x4 (5 trials): acquisition size 5.* To compare BatchBALD, BALD and the stochastic acquisition strategies more fairly, we also show the performance for acquisition size 5. This is in line with Figure 2.

Acquisition Size 5. In Figure 11, we see that the stochastic acquisition strategies also perform better than BALD and as well as BatchBALD for acquisition size 5. Thus, we see no qualitative difference to the results in Figure 2.

C.2.1 Other scoring functions

![25_image_0.png](25_image_0.png)

Figure 12: *Repeated-MNIST x4 (5 trials): Performance for other scoring functions.* Entropy, std dev, and variation ratios behave like BALD when applying a stochastic sampling scheme (Power from §3).

Figure 12 shows the performance of scoring functions other than BALD on Repeated-MNIST x4.
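For reference, the following is a minimal sketch of how these scoring functions can be computed from MC-dropout predictions; the input shape and names (probs_NKC, with axes pool points × MC samples × classes) are assumptions rather than our experiment code. The last two functions follow Eqs. (20) and (21); BALD and entropy follow their standard definitions.

```python
# Sketch of the scoring functions compared in Figure 12, computed from
# Monte-Carlo dropout predictions probs_NKC of shape (N points, K samples, C classes).
import torch

def entropy(p, dim=-1):
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=dim)

def score_bald(probs_NKC):
    # Entropy of the mean prediction minus the mean entropy of the predictions.
    return entropy(probs_NKC.mean(dim=1)) - entropy(probs_NKC).mean(dim=1)

def score_entropy(probs_NKC):
    return entropy(probs_NKC.mean(dim=1))

def score_variation_ratios(probs_NKC):  # Eq. (20)
    return 1 - probs_NKC.mean(dim=1).max(dim=-1).values

def score_std_dev(probs_NKC):  # Eq. (21)
    return probs_NKC.std(dim=1).sum(dim=-1)

probs_NKC = torch.softmax(torch.randn(1000, 20, 10), dim=-1)  # dummy predictions
print(score_bald(probs_NKC).shape)  # one score per pool point
```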
C.2.2 Redundancy ablation

![25_image_1.png](25_image_1.png)

Figure 13: *Repeated-MNIST (5 trials): Performance ablation for different repetition counts.*

In Figure 13, we see the same behaviour in an ablation for different repetition counts of Repeated-MNIST.

## C.3 MIO-TCD

![26_image_1.png](26_image_1.png)

(a) BALD

![26_image_0.png](26_image_0.png)

(b) Entropy

Figure 14: *MIO-TCD (5 trials).*

In Figure 14, we see that power acquisition performs on par with BADGE, with both BALD and entropy as underlying score functions.

## C.4 EMNIST

![26_image_3.png](26_image_3.png)

Figure 15: *EMNIST (Balanced) (5 trials): Performance with BALD.*

![26_image_2.png](26_image_2.png)

Figure 16: *EMNIST (ByMerge) (5 trials): Performance with BALD.*

In Figures 15 and 16, we see that PowerBALD outperforms BALD, BatchBALD, and BADGE.

![27_image_0.png](27_image_0.png)

Figure 17: *EMNIST (Balanced) (5 trials): acquisition size ablation for BADGE.*

BADGE Ablation. In Figure 17, we see that BADGE performs similarly with all three acquisition sizes. Acquisition size 10 is the smoothest.

## C.5 Edge Cases In Synbols

We use Synbols (Lacoste et al., 2020) to demonstrate the behaviour of batch active learning in artificially constructed edge cases. Synbols is a character dataset generator for classification where a user can specify the type and proportion of bias and insert artefacts, backgrounds, masking shapes, and so on. We selected three datasets with strong biases supplied by Lacoste et al. (2020); Branchaud-Charron et al. (2021) to evaluate stochastic batch acquisition methods. We use an acquisition size of 100. The experimental settings are described in Appendix C.1.

For these tasks, performance evaluation includes 'predictive parity', also known as 'accuracy difference', which is the maximum difference in accuracy between subgroups (in this case, different coloured characters). This measure is used most widely in domain adaptation and ethics (Verma & Rubin, 2018). We want to maximise the accuracy while minimising the predictive parity.

![27_image_2.png](27_image_2.png)

![27_image_1.png](27_image_1.png)

(b) Predictive parity (**Down and left is better.**)

Figure 18: *Performance on Synbols Spurious Correlations (3 trials) with BALD.* Stochastic acquisition matches BADGE and BALD's predictive parity and performance, which is reassuring as stochastic acquisition functions might be affected by spurious correlations.

Spurious Correlations. This dataset includes spurious correlations between character colour and class. As shown in Branchaud-Charron et al. (2021), active learning is especially strong here, as characters that do not follow the correlation will be informative and thus selected. We compare the predictive parity between methods in Fig. 18b. We do not see any significant difference between the stochastic batch acquisition method and BADGE or BALD. This is encouraging, as stochastic approaches might select more examples following the spurious correlation and thus have higher predictive parity, but this is not the case.

![28_image_0.png](28_image_0.png)

Figure 19: *Synbols Minority Groups (3 trials): Performance with BALD.* PowerBALD outperforms BALD and matches BADGE for both accuracy and predictive parity.

Minority Groups. This dataset includes a subgroup of the data that is under-represented; specifically, most characters are red while few are blue. As Branchaud-Charron et al. (2021) show, active learning can improve the accuracy for these groups.
Our stochastic approach lets batch acquisition better capture under-represented subgroups. In Figure 19a, PowerBALD has an accuracy almost identical to that of BADGE, despite being much cheaper, and outperforms BALD. At the same time, we see in Figure 19b that PowerBALD has a lower predictive parity than BALD, demonstrating a fairer predictive distribution given the unbalanced dataset.

![28_image_1.png](28_image_1.png)

![28_image_2.png](28_image_2.png)

Figure 20: BALD Figure 21: Entropy

Figure 22: *Performance on Synbols Missing Characters (3 trials).* In this dataset with high aleatoric uncertainty, PowerBALD matches BADGE and BALD performance. PowerEntropy significantly outperforms Entropy, which confounds aleatoric and epistemic uncertainty.

Missing Synbols. This dataset has high aleatoric uncertainty (input noise). Some images are missing information required to make high-probability predictions (shapes randomly occlude the character), so even a perfect model would remain uncertain. Lacoste et al. (2020) demonstrated that entropy is ineffective on this data as it cannot distinguish between aleatoric and epistemic uncertainty (input noise and model uncertainty), while BALD can do so. As a consequence, entropy will unfortunately prefer samples with occluded characters, resulting in degraded active learning performance. For predictive entropy, stochastic acquisition largely corrects the failure of entropy acquisition to account for missing data (Figure 22), although PowerEntropy still underperforms BADGE here. For BALD, we show in Figure 20 that, as before, the stochastic batch acquisition method performs on par with BADGE and marginally better than BALD.

## C.6 CLINC-150

![29_image_0.png](29_image_0.png)

![29_image_1.png](29_image_1.png)

Figure 23: *Performance on CLINC-150 (10 trials).* PowerEntropy performs much better than entropy, which only performs marginally better than uniform, and almost on par with BADGE.

In Figure 23, we see that PowerEntropy performs much better than entropy, which only performs marginally better than the uniform baseline. PowerEntropy also performs better than BADGE at low training set sizes, but BADGE performs better in the second half. Between ≈ 2300 and 4000 samples, BADGE and PowerEntropy perform the same. We use an acquisition size of 100, starting from an initial training set of 1510 points (10 points per intent class).

## C.7 ACS-FW

![30_image_0.png](30_image_0.png)

![30_image_1.png](30_image_1.png)

(c) Repeated-MNIST

![30_image_2.png](30_image_2.png)

Figure 24: *MFVI Last-Layer Performance (3 trials).*

![30_image_3.png](30_image_3.png)

(c) Repeated-MNIST

Figure 25: *MFVI Last-Layer Walltime (3 trials).*

We run comparisons for several datasets using last-layer mean-field variational inference: CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), Repeated-MNIST (Kirsch et al., 2019), and Fashion-MNIST (Xiao et al., 2017). We use 5000 initial training samples with an acquisition size of 4000 for CIFAR-10, 1000 initial training samples with an acquisition size of 2000 for SVHN, 20 initial training samples with an acquisition size of 100 for Repeated-MNIST, and 20 initial training samples with an acquisition size of 25 for Fashion-MNIST. The same hyperparameters as in Pinsler et al. (2019) are used.

In Figure 24, we see that PowerBALD performs best on Fashion-MNIST and Repeated-MNIST with smaller acquisition sizes, but ACS-FW and BALD perform similarly on CIFAR-10 and SVHN at larger acquisition sizes.
This is in line with Section 6, where we have seen in Figures 5 and 7 that the top 1% of the samples are more sensitive than the top 10%: larger acquisition sizes can be less sensitive to the issues of top-K. The performance of ACS-FW contrasts with the results in Pinsler et al. (2019), where ACS-FW performs better than BALD on CIFAR-10 and SVHN. In Appendix E.3, we provide ablations with different acquisition sizes and initial training set sizes for these four datasets. Beyond what we have noted above, however, these results are not conclusive, and the performance curves are very close to each other. On the other hand, the wall-time curves in Figure 25 show that ACS-FW is significantly slower than PowerBALD. This is congruent with the motivation that stochastic sampling methods are fast yet competitive and do not underperform top-K acquisition, making them a good substitute for it.

## D Comparing Power, Softmax And Soft-Rank

![31_image_0.png](31_image_0.png)

## D.1 Empirical Evidence

Figure 26: *Repeated-MNIST (5 trials): Performance with all three stochastic strategies.*

Repeated-MNIST. In Figure 26, power acquisition performs best overall, followed by soft-rank and then softmax.

![32_image_0.png](32_image_0.png)

Figure 27: *EMNIST (Balanced) (5 trials): Performance with all three stochastic strategies with BALD.* PowerBALD performs best.

![32_image_1.png](32_image_1.png)

Figure 28: *EMNIST (ByMerge) (5 trials): Performance with all three stochastic strategies with BALD.* PowerBALD performs best.

EMNIST. In Figures 27 and 28, we see that PowerBALD performs best, but Softmax- and SoftrankBALD also outperform other methods. BADGE did not run on EMNIST (ByMerge) due to out-of-memory issues, and BatchBALD took very long to run, as EMNIST (ByMerge) has more than 800,000 samples.

![32_image_2.png](32_image_2.png)

![32_image_3.png](32_image_3.png)

(a) BALD (b) Entropy

Figure 29: *MIO-TCD (3 trials): Performance with all three stochastic strategies.*

MIO-TCD. In Figure 29, we see that all three stochastic acquisition methods perform about equally well.

![33_image_0.png](33_image_0.png)

(b) Entropy

Figure 30: *Synbols edge cases (3 trials): Performance with all three stochastic strategies.*

Synbols. In Figure 30, power acquisition seems to perform better overall, mainly due to the performance on Synbols Missing Characters.

![33_image_1.png](33_image_1.png)

Figure 31: *CLINC-150 (10 trials): Performance with all three stochastic strategies.*

CLINC-150. In Figure 31, all three stochastic methods perform similarly.

## D.2 Investigation

To further examine the three stochastic acquisition variants, we plot their score distributions, extracted from the same MNIST toy example, in Figure 32. The power and softmax acquisition distributions are similar for β = 8 (power, softmax) and β = 4 (softmax). This might explain why active learning with these β shows similar accuracy trajectories. We find that power and softmax acquisition are quite insensitive to β, and thus selecting β = 1 might generally work quite well.

![34_image_0.png](34_image_0.png)

Figure 32: *Score distribution for power and softmax acquisition of BALD scores on MNIST for varying coldness β at t = 0.* Linear and log plot over samples sorted by their BALD score. At β = 8, both softmax and power acquisition have essentially the same distribution for high-scoring points (closely followed by the power distribution for β = 4).
This might explain why the coldness ablation shows that these β have very similar AL trajectories on MNIST. Yet, while softmax and power acquisition seem to transfer to Repeated-MNIST, this is not the case for soft-rank, which is much more sensitive to β. At the same time, power acquisition avoids low-scoring points more than softmax acquisition.

## E Effect Of Changing The Acquisition Sizes

![34_image_1.png](34_image_1.png)

E.1 Repeated-MNIST

Figure 33: *Repeated-MNIST: Acquisition size ablation grouped by acquisition size.* Stochastic acquisition strategies (except for Softrank at acquisition size 5) always perform better than top-K BALD and also better than BADGE. Softrank is the most sensitive to acquisition size changes because it is independent of the scores.

![35_image_0.png](35_image_0.png)

![35_image_1.png](35_image_1.png)

Figure 34: *Repeated-MNIST: Acquisition size ablation grouped by acquisition function.* PowerBALD and SoftmaxBALD are less sensitive to acquisition size changes, while SoftrankBALD is more sensitive to them. Overall, all three are much less sensitive to acquisition size changes than top-K BALD.

## E.2 EMNIST

![36_image_0.png](36_image_0.png)

Figure 35: *EMNIST 'Balanced': Acquisition size ablation.* For (top-K) BALD, β makes no difference, so β = 1 is used as a placeholder. The other two subplots are intentionally left out as they are very similar to this one.

![37_image_0.png](37_image_0.png)

![37_image_1.png](37_image_1.png)

Figure 36: *EMNIST 'ByMerge': Acquisition size ablation.* For (top-K) BALD, β makes no difference, so β = 1 is used as a placeholder. The other two subplots are intentionally left out as they are very similar to this one.

## E.3 ACS-FW

![38_image_0.png](38_image_0.png)

Figure 37: *MFVI Last-Layer Classification on CIFAR-10: Acquisition size ablation.*

![39_image_0.png](39_image_0.png)

![39_image_1.png](39_image_1.png)

Figure 38: *MFVI Last-Layer Classification on CIFAR-10: Acquisition size ablation (Walltime).*

![40_image_0.png](40_image_0.png)

Figure 39: *MFVI Last-Layer Classification on SVHN: Acquisition size ablation.*

![41_image_0.png](41_image_0.png)

Figure 40: *MFVI Last-Layer Classification on SVHN: Acquisition size ablation (Walltime).*

![42_image_0.png](42_image_0.png)

Figure 41: *MFVI Last-Layer Classification on Repeated-MNIST: Acquisition size ablation.*

![43_image_0.png](43_image_0.png)

Figure 42: *MFVI Last-Layer Classification on Repeated-MNIST: Acquisition size ablation (Walltime).*

![44_image_0.png](44_image_0.png)

Figure 43: *MFVI Last-Layer Classification on Fashion-MNIST: Acquisition size ablation.*

![45_image_0.png](45_image_0.png)

![45_image_1.png](45_image_1.png)

Figure 44: *MFVI Last-Layer Classification on Fashion-MNIST: Acquisition size ablation (Walltime).*

## F Effect Of Changing β

## F.1 Repeated-MNIST

![46_image_1.png](46_image_1.png)

![46_image_0.png](46_image_0.png)

Figure 45: Repeated-MNIST: β ablation for *BALD.

## F.1.1 MIO-TCD and Synbols

![47_image_3.png](47_image_3.png)

![47_image_1.png](47_image_1.png)

![47_image_0.png](47_image_0.png)

![47_image_2.png](47_image_2.png)

Figure 46: MIO-TCD and Synbols: β ablation for *BALD.

![47_image_7.png](47_image_7.png)

![47_image_5.png](47_image_5.png)

![47_image_4.png](47_image_4.png)

![47_image_6.png](47_image_6.png)

Figure 47: MIO-TCD and Synbols: β ablation for *Entropy.
## F.2 EMNIST

![48_image_0.png](48_image_0.png)

![48_image_1.png](48_image_1.png)

Figure 48: *EMNIST 'Balanced': β ablation for Softmax- and PowerBALD.*

![49_image_0.png](49_image_0.png)

![49_image_1.png](49_image_1.png)

Figure 49: *EMNIST 'ByMerge': β ablation for Softmax- and PowerBALD.*

## F.3 CausalBALD: Synthetic Dataset

![50_image_0.png](50_image_0.png)

![50_image_1.png](50_image_1.png)

(c) High Temperature Only

Figure 50: *CausalBALD: Synthetic Dataset.* (a) At a very high temperature (β = 0.1), PowerBALD behaves very much like random acquisition, and as the temperature decreases, the performance of the acquisition function improves (lower $\sqrt{\epsilon_{\mathrm{PEHE}}}$). (b) Eventually, the performance reaches an inflection point (β = 4.0), and any further decrease in temperature results in the acquisition strategy performing more like top-K. We see that under the optimal temperature, power acquisition significantly outperforms both random acquisition and top-K over a wide range of temperature settings.

We provide further β ablations for CausalBALD on the entirely synthetic dataset used by Jesson et al. (2021). This demonstrates the ways in which β interpolates between uniform and top-K acquisition.

## F.4 CLINC-150

![50_image_2.png](50_image_2.png)

Figure 51: Performance on CLINC-150: β ablation for *Entropy.

## G Simple Implementation Of Stochastic Batch Acquisition

```python
import numpy as np
import scipy.stats
import torch


def get_random_samples(scores_N: torch.Tensor, *, acquisition_batch_size: int):
    N = len(scores_N)
    acquisition_batch_size = min(acquisition_batch_size, N)
    indices = np.random.choice(N, size=acquisition_batch_size, replace=False)
    return indices.tolist()


def get_softmax_samples(scores_N: torch.Tensor, *, beta: float, acquisition_batch_size: int):
    # As beta -> 0, we obtain random sampling.
    if beta == 0.0:
        return get_random_samples(scores_N, acquisition_batch_size=acquisition_batch_size)
    N = len(scores_N)
    # Gumbel(0, 1/beta) noise implements sampling from softmax(beta * scores)
    # without replacement (Proposition 3.1).
    gumbel_noise_N = scipy.stats.gumbel_r.rvs(loc=0, scale=1 / beta, size=N, random_state=None)
    noised_scores_N = scores_N + torch.as_tensor(gumbel_noise_N, dtype=scores_N.dtype)
    return torch.topk(noised_scores_N, k=acquisition_batch_size).indices.tolist()


def get_power_samples(scores_N: torch.Tensor, *, beta: float, acquisition_batch_size: int):
    # Power acquisition is softmax acquisition on the log scores.
    return get_softmax_samples(torch.log(scores_N), beta=beta, acquisition_batch_size=acquisition_batch_size)


def get_softrank_samples(scores_N: torch.Tensor, *, beta: float, acquisition_batch_size: int):
    # Soft-rank acquisition only depends on the rank order of the scores.
    sorted_indices_N = torch.argsort(scores_N, descending=True)
    ranks_N = torch.argsort(sorted_indices_N) + 1
    return get_power_samples(1 / ranks_N, beta=beta, acquisition_batch_size=acquisition_batch_size)
```

Listing 1: *Code for stochastic batch acquisition.* Colab here.
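A brief usage sketch for the listing above, assuming the functions from Listing 1 are in scope; the scores are placeholders standing in for real (non-negative) BALD scores over a pool set.

```python
# Usage sketch for Listing 1 with hypothetical pool scores.
import torch

pool_scores_N = torch.rand(10_000)  # placeholder non-negative scores
batch = get_power_samples(pool_scores_N, beta=1.0, acquisition_batch_size=100)
print(len(batch), batch[:5])  # 100 acquired pool indices
```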
## H Hypoexponential Distribution Evaluation

```python
import numpy as np
from scipy.stats import expon, gumbel_r
import matplotlib.pyplot as plt

# Set the rate parameter.
# Change here to get different scales of exponentials.
lam = 1 / 2

# Generate 5000 samples from the exponential distribution.
samples = expon.rvs(scale=1 / lam, size=5000)

# Now sample from exponential distributions using the previous samples
# as the rate parameters.
new_samples = expon.rvs(scale=1 / samples, size=(4000, 5000))
what_are_you = new_samples.mean(axis=1)

# Fit a Gumbel distribution to the data.
params = gumbel_r.fit(what_are_you)
x = np.linspace(np.min(what_are_you), np.max(what_are_you), 1000)
gumbel_cdf = gumbel_r.cdf(x, *params)

# Plot the empirical CDF and the fitted Gumbel CDF in the same plot.
plt.plot(np.sort(what_are_you), np.linspace(0, 1, len(what_are_you), endpoint=False),
         label="Simulated Hypoexponential")
plt.plot(x, gumbel_cdf, label="Gumbel Fit")
plt.legend()
plt.show()
```

![52_image_0.png](52_image_0.png)

Listing 2: *Code for fitting a Gumbel distribution to the mean of samples from a hypoexponential distribution.* Colab here.
Review 1: Summary: This paper shows that complex batch acquisition strategies commonly used in active learning provide comparable predictive performance to the simple baseline of stochastic sampling from a softmax (or other distribution) based on the scores of the pool set, despite the latter strategy providing a vastly reduced computational complexity. This result calls into question the conventional wisdom in the field – that sophisticated batch acquisition strategies are worth their additional computational cost. The paper also discusses 3 different methods for constructing the sampling distribution, each making different assumptions about the scores. The paper shows that the power distribution performs slightly better than the softmax and soft-rank distributions for commonly used active learning score functions. Strengths and Weaknesses: ## Strengths 1. Challenging the conventional wisdom in batch active learning is both important and interesting. This paper has the potential to act as a stepping stone for the development of better theory and methods for batch active learning. 2. The paper advocates for a simple but effective baseline which is underutilized in the literature and likely in practice too. 3. The paper provides an interesting discussion of 3 different methods for constructing a distribution from which to sample from the pool set given scores. The power and soft-rank distributions are novel, as far as I am aware. ## Weaknesses 1. **Clarity:** the paper is unclear in several places; see the detailed list under "Requested Changes". The main issue in this regard is the connection between sampling from the acquisition distributions and the idea that scores at time $t+1$ are the scores at time $t$ with some perturbation applied. This idea is mentioned a few times in the paper but is not clearly described or obvious to me. 2. **Evidence:** not all claims are well supported by evidence. I believe that most of these issues can be addressed by adding citations. However, one issue I am concerned about is that the paper only uses BatchBALD and BADGE as examples of batch acquisition strategies. The problem is that these are *extremely* computationally expensive (either in memory or compute time). However, there exist well-performing batch acquisition strategies that scale much more gracefully. In particular, ACS-FW (with random projections) from Pinsler et al. (2019) scales linearly with pool-set size. (The authors also mention in the related work that methods exist for improving the scaling of core-set-based methods). Without including comparisons to some batch acquisition methods with better scaling, and thereby providing a more comprehensive picture of such methods, I do not believe that the paper's (interesting and potentially important) question about whether or not batch acquisition methods are "pulling their weight" can be taken seriously. 3. **Positioning:** the positioning of the paper is, in my opinion, somewhat strange and misleading. In particular, the idea of sampling from a distribution based on active learning scores in order to improve diversity in batch acquisition might not be well-known (and could definitely benefit from some advocacy) but has certainly been used before. It is a straightforward implementation of Boltzmann exploration to the active learning setting. 
Yet the paper presents this as a novel method, frequently referring to it as "our method", and the second claimed contribution of the paper is the demonstration that the sampling strategy improves on naive top-K acquisition. The specific implementations using the power and soft-rank distributions are novel as far as I know, but this paper is not clear enough in disentangling what is novel and what is not. (To be clear, I am not criticising this paper for lack of novelty. I am aware that this is not a criterion for acceptance in TMLR. Furthermore, I think there is plenty of novel/interesting content in the paper. However, the paper should be clear about its contributions and the sources of novelty.)

Robert Pinsler, Jonathan Gordon, Eric T. Nalisnick, José Miguel Hernández-Lobato: Bayesian Batch Active Learning as Sparse Subset Approximation. NeurIPS 2019: 6356-6367

Requested Changes: I have grouped my requested changes into "Major" and "Minor", which are the changes which are critical for securing my recommendation and those that would simply strengthen the paper, respectively.

## Major changes

1. Please clarify the following. While some of these are minor issues in isolation, they add up to make the paper significantly less clear than it could be.
   1. What does figure 1 have to do with the statement "... and it was assumed deep learning models hardly change after adding a single new point to the training set."? Is the large change in rank correlation at the initial acquisition steps being referenced here?
   2. Some discussion should accompany equation 8 to highlight why this is a naive strategy. Figure 1 from the BatchBALD paper does a great job at this. A figure isn't necessary, but some discussion would be very helpful.
   3. How does the $\beta$ parameter of the softmax correspond to the "expected rate at which the scores changed as more data is acquired"?
   4. After equation 9, it isn't obvious how taking the top-k samples corresponds to sampling from the soft-rank distribution.
   5. Regarding power acquisition, what is meant by "ideally" in "This assumption holds ideally for other scoring functions ..."?
   6. Regarding power acquisition, how does the notion of time (e.g., "future log scores"/"current log scores") relate to equation 11?
   7. Similarly, in section 6, the assumptions regarding rank correlations across acquisitions were really not made clear to me.
2. Please add at least a few experiments comparing to Pinsler et al., as discussed above. (Pinsler et al. should also be discussed in the related work).
3. Please provide citations (or other demonstrations/evidence) for the following statements:
   1. In section 2, regarding entropy scoring, "... performs worse for data with high observation noise ...".
   2. In section 2, regarding acquisition functions, "... and it was assumed deep learning models hardly change after adding a single new point to the training set." Who assumed this?
   3. In section 3, "... which is frequently used for modelling extrema.".
   4. In section 5, "... BatchBALD, which was SotA for small batch sizes.".
   5. Similarly, in the caption for Figure 6, BatchBALD is mentioned as being SotA without a reference.
4. In figures 2, 3a and 3b, why does PowerBALD use an acquisition size of 10? This seems like it might make the comparison between it and BatchBALD unfair. Can you provide results for PowerBALD with an acquisition size of 5? And/or an ablation for the acquisition size for PowerBALD like was done for BADGE in figure 14?
In general, I also think that the choices of hyperparameters should be better motivated, and any sweeps done should be described.

5. Similarly, why was the acquisition size for figures 3c and 3d raised to 100? My concern is that with a larger acquisition size, the impact of low diversity in the acquired examples can be mitigated. Would it be possible to see these results for an acquisition size of, say, 10?
6. It should be mentioned that for the CLINIC results, whose figure is somewhat hidden in the appendix, PowerEntropy does perform slightly worse than BADGE.
7. This might be a big misunderstanding on my part, but in Figure 4, it is not clear to me that the "Top" and "Bot" lines should be so similar. In fact, I think this contradicts assumption (3) – aren't the bottom 1% or 10% supposed to be less informative, so shouldn't the change in correlation in these two cases be closer to, if not even worse than for 100%? Could you clarify this? This currently leaves me somewhat sceptical of the results of these further investigations. Perhaps providing the additional experiment (just on MNIST, for example) that K can indeed be increased would provide additional evidence here.
8. An important missing sensitivity study is to use different BNN methods. Currently, the paper only uses MC Dropout for the BNN, so it is unclear whether these results generalize to other BNN approximations. In particular, I think that the linearised Laplace approximation (e.g., see Daxberger et al. (2021)) would be a great addition. Results for a single dataset would suffice.

Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig: Laplace Redux - Effortless Bayesian Deep Learning. NeurIPS 2021: 20089-20103

## Minor changes

1. After equation 2, I would suggest saying, "... the index set of $\mathcal{D}^\text{train}$ **initially** containing ...".
2. The footnote on page 2 should be removed.
3. In section 2, provide the expansion of the acronym for BALD as it helps provide intuition behind the method.
4. In section 2, provide brief descriptions of BatchBALD and BADGE since these are the most important baselines in the experiments.
5. Regarding the soft-rank acquisition, to be clear, there is only invariance insofar as the ranking is unchanged when the scores change.
6. Since BADGE and BatchBALD are (understandably) not included in every figure, it would be helpful to always include "Uniform", so that the improvements over BALD from the stochastic acquisition have a consistent point of comparison.
7. In the summary for section 5, the papers listed are not exactly "new", two being from 2021.
8. It would be great to see the $\beta$ ablation for the EMNIST dataset too.

Broader Impact Concerns: No concerns.

==================================================

Review 2:

Summary: This paper proposes a simple way to extend any single-datapoint acquisition function for active learning to batch settings, where many labels are collected at once in a particular round before the model is re-trained. The method is to simply sample from a distribution that upweights the datapoints with the highest acquisition values. This naturally encourages greater diversity than simply choosing the datapoints with K highest acquisition values, yet is not more computationally intensive. I find the proposed method highly practical and the empirical experiments on many image datasets and some text datasets convincing. Everything looks correctly done as far as I can tell.
The one thing I am not sure about is whether this or similar methods have been previously proposed, given how straightforward they are.

Strengths and Weaknesses:

Strengths: How to properly extend acquisition functions to batch settings is an important problem in making active learning more practical, especially as model-training times grow larger for today's large neural networks. Many of the methods previously published in this area are highly complex and computationally intensive. Papers like this one, which show that a straightforward and cheap-to-implement/compute baseline outperforms large numbers of complicated publications, serve as important reality-checks for the field. These are usually the papers with methods that end up actually being used in applications (e.g., in industry/science). Overall the method is clearly presented and the writing is easy to understand. The experiments cover many different datasets.

Weaknesses: The experiments do not compare against many other batch baseline methods. There have been many recent batch active learning methods published that could also be compared against. That said, I do find the existing experiments sufficient since they cover so many different datasets.

Requested Changes: If you want to present a more convincing case, consider including other recently-published batch active learning methods in your experimental comparisons. That said, I am already convinced that the proposed method is a good idea by the current experiments, especially given the method's simplicity. Adding more active learning methods would help convince me that your method truly matches the state-of-the-art. For instance, methods which use stochastic acquisition with explicit encouragement of diversity such as: choosing top acquisition values sampled from different clusters of the data, or using the acquisition values and data features to define a determinantal point process distribution.

Batch Active Learning Using Determinantal Point Processes https://arxiv.org/abs/1906.07975

Related Work should mention other related ideas like "kriging believer". That is another very simple approach to extend single datapoint acquisition values to batch settings, but does still involve a lot of model retraining to compute the batch scores.

Broader Impact Concerns: I don't have any broader impact concerns about this work.

==================================================

Review 3:

Summary: The authors consider the task of batch acquisition in active learning, i.e., acquiring multiple labels at a time, rather than single ones as is done in many popular approaches. Methods such as BatchBALD (Kirsch et al., 2019) or BADGE (Ash et al., 2020) are designed to model this task properly, while the simplest heuristic would be to take the top-K points at each labeling round as proposed by the acquisition function. The authors discuss and demonstrate the flaw in this naive heuristic (its assumption on the stability of the ranking between labeling rounds) and instead propose a simple sampling-based approach that performs as well as/better than methods designed for batch acquisition while remaining at the computation cost of top-K acquisition. The approach is tested extensively in a variety of experimental settings and ablations.

Strengths and Weaknesses:

## Strengths

The approach provides a simple and computationally cheap solution to the task of batch acquisition. It is well-motivated and evaluated on a large array of different data sets, where the approach can provide consistent results.
The paper is well written. As the authors acknowledge in the related work, similar stochastic approaches appear repeatedly within the active learning literature, often hidden in experimental sections and appendices as approximate heuristics that seem to help without being evaluated on their own. The contribution is therefore minor in the sense of coming up with the proposed idea. But given that, it is even more valuable as being a proper study that finally evaluates these approaches with surprisingly strong results. Given these results presented in this work, the challenge to the community of providing a model that can improve upon this heuristic is indeed justified.

## Weaknesses

- The reference section is a mess. A lot of references simply have a year without further information on the venue, while others do. Provided venues vary a lot, e.g., NIPS, NeurIPS, Advances in Neural Information Processing Systems. Please reformat all references into a consistent style.

### Typos

- Fig 3: Plot sizes and font sizes vary a lot.
- p8 last line: the ranks of the top-**1%** scoring...

Requested Changes: See weaknesses.

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The reviewers found the contribution of this paper to be useful: a simple strategy can work as a convincingly effective baseline for batch mode active learning. The AE concurred and recommended acceptance. In addition to incorporating the reviewers' comments in the final version (if not yet), the AE also suggests the following revisions in the final version:

(Main suggestions)
- In the comparison plots, using the same K for all algorithms except for BatchBALD (and clarify the computational challenges & anticipated performance decline of running BatchBALD with the same K).
- Clarify the choices of K in all experiments for all algorithms.

(Minor comments)
- Is it possible to clarify the motivation of the Gumbel perturbation on the acquisition scores in pages 5-6 more? If I understand correctly, the last paragraph of page 5 says that, for choosing the k-th point in the batch, ideally we would like to model the information gain of this k-th point (relative to the batches of examples queried so far and the k-1 examples already selected in this batch) assuming that the remaining K-k points are selected optimally. The paper then mentioned the maximum of a set of exponential random variables; however I had difficulty in understanding what are the iid exponential random variables in the context of batch active learning. It might be better to remove the discussion if this connection is not so precise.
- Could you clarify why in Eq. (17), log M is greater than the mutual information expression? M depends on the number of samples, whereas the mutual information expression does not?
- Could you provide a mathematical expression of the AUROC evaluated in Figure 7?
- In Figure 9(b), why is it called a near-duplicate? Is it because it is very similar to Figure 9(a)?
- Typo in B.2: Gumbel -> Gumble

==================================================
# Reducing Training Sample Memorization In GANs By Training With Memorization Rejection

Anonymous authors

Paper under double-blind review

## Abstract

Generative adversarial network (GAN) continues to be a popular research direction due to its high generation quality. It is observed that many state-of-the-art GANs generate samples that are more similar to the training set than a holdout testing set from the same distribution, hinting that some training samples are implicitly memorized in these models. This memorization behavior is unfavorable in many applications that demand the generated samples to be sufficiently distinct from known samples. Nevertheless, it is unclear whether it is possible to reduce memorization without compromising the generation quality. In this paper, we propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training. Our scheme is simple, generic and can be directly applied to any GAN architecture. Experiments on multiple datasets and GAN models validate that memorization rejection effectively reduces training sample memorization, and in many cases does not sacrifice the generation quality.

## 1 Introduction

There has been much progress made on improving the generation quality of Generative Adversarial Networks (GANs) (Brock et al., 2019; Goodfellow et al., 2014; Karras et al., 2020; Wu et al., 2019; Zhao et al., 2020; Zhang et al., 2019). Despite GANs being capable of generating high-fidelity samples, it has been recently observed that they tend to memorize training samples due to the high model complexity coupled with a finite amount of training samples (Meehan et al., 2020; Lopez-Paz & Oquab, 2018; Gulrajani et al., 2020; Borji, 2021). This naturally leads to the following questions: Are GANs learning the underlying distribution or merely memorizing training samples? More fundamentally, what is the relationship between learning and memorizing for GANs? Studying these questions is important since generative models that output near-duplicates of the training data are undesirable for many applications. For example, Repecka et al. (2021) proposed to learn the diversity of natural protein sequencing with GANs and generate new protein structures to aid medicine development. Frid-Adar et al. (2018) leveraged GANs to generate augmented medical images and increase the size of training data for improving liver lesion classification.

Although measuring and preventing memorization in supervised learning is well-studied, handling memorization in generative modeling is non-trivial. For supervised learning, training sample memorization typically results in overfitting and can be diagnosed by benchmarking on a holdout testing dataset. In contrast, a generative model that completely memorizes the training data and *only* generates near-duplicates of the training data can still perform well on common distribution-matching-based quality metrics, even when evaluated on a holdout testing set. Recently various metrics and detection methods have been proposed to analyze the severity of memorization after GAN models are trained (Borji, 2021; Bounliphone et al., 2016; Esteban et al., 2017; Lopez-Paz & Oquab, 2018; Liu et al., 2017; Gulrajani et al., 2020; Thanh-Tung & Tran, 2020; Nalisnick et al., 2019). Some of these methods rely on training a new neural network for measuring sample distance while others rely on traditional statistical tests.
However, it is still unclear how to actively reduce memorization during GAN training. We thus aim to answer the following questions in this paper: is it possible to efficiently reduce memorization during the training phase? If so, to what extent can memorization be reduced without sacrificing the generation quality? Our contributions are as follows:

1. We confirmed that while the distance of a generated instance to the training data is generally correlated with its quality, it is not the case for instances that are already sufficiently close. Therefore, it is possible to reduce memorization without sacrificing generation quality.
2. We propose memorization rejection, a simple training scheme that can effectively reduce memorization in GAN. The method is based on the key insight that a generated sample being sufficiently similar to its nearest neighbor in the training data implies good enough quality, and further optimizing it causes the model to overfit and memorize. To the best of our knowledge, this is the first method proposed for reducing training data memorization in GAN training.
3. Experimental results demonstrate that our proposed method is effective in reducing training sample memorization. We provide a guideline for estimating the optimal hyperparameter that maximally reduces memorization while minimally impacting the generation quality.

## 2 Preliminaries

Consider an input space X and an N-dimensional code space Z = R^N. For instance, when considering RGB images, X is simply R^{3×w×h}, where w and h are respectively the width and height of the image (in this paper, X = R^{3×w×h} if not specified otherwise). Generative adversarial networks (GANs; Goodfellow et al., 2014) typically consist of a generator function and a discriminator function. The generator function Gθ : Z → X, parameterized by θ ∈ Θ, decodes from Z to X. The discriminator function Dϕ : X → R, parameterized by ϕ ∈ Φ, maps any x ∈ X to a real value that reflects how likely x comes from an underlying distribution p(X). A typical objective of a GAN optimizes the minimax loss between Gθ and Dϕ

$$\operatorname*{min}_{\theta\in\Theta}\operatorname*{max}_{\phi\in\Phi}\operatorname*{\mathbb{E}}_{x\sim p(\mathcal{X})}[\log D_{\phi}(x)]+\operatorname*{\mathbb{E}}_{z\sim q(\mathcal{Z})}[\log(1-D_{\phi}(G_{\theta}(z)))],$$

where q(Z) is a controllable distribution (e.g. Gaussian). GANs aim to approximate p(X) by Gθ(q(Z)) with the adversarial help of the discriminator Dϕ(x). In particular, the generator is optimized to increase the likelihood of generated instances, with the likelihood gauged by the discriminator, while the discriminator is optimized to increase the likelihood of instances sampled from the real distribution p(X) and decrease the likelihood of instances generated from the fake distribution Gθ(q(Z)). Since it is infeasible to sample from p(X) directly, a training set XT ⊆ X of N instances is used to approximate the population instead.

## 2.1 Quantitative Evaluation Of Sample Similarity

The most commonly used method to detect training sample memorization is by visualizing nearest neighbors of generated images in the training data (Brock et al., 2019; Borji, 2018). If the visualized samples look similar to their nearest neighbors in the training data, it is reasonable to suspect that the model is trying to memorize the training data.
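To make this diagnostic concrete, the retrieval step can be sketched as follows. This is an illustration rather than the authors' code; it assumes `gen_feats` and `train_feats` are hypothetical (num_samples × k) arrays of features already extracted by some embedding f (the paper uses the penultimate layer of an ImageNet-pretrained Inception v3), and it anticipates the cosine distance defined formally below:

```python
import numpy as np

def nearest_neighbors(gen_feats, train_feats):
    """For each generated sample, return the index of its nearest training
    sample and the cosine nearest-neighbor distance in feature space."""
    # Normalize rows so that dot products equal cosine similarities.
    g = gen_feats / np.linalg.norm(gen_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = g @ t.T                    # (num_gen, num_train) cosine similarities
    nn_idx = sims.argmax(axis=1)      # nearest training sample per generated sample
    nn_dist = 1.0 - sims.max(axis=1)  # distance = 1 - cosine similarity
    return nn_idx, nn_dist
```

The retrieved indices can then be used to display each generated image next to its closest training image for visual inspection.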
Given a generated sample x ∼ Gθ(q(Z)) and an embedding function f : X → R^k, the nearest neighbor of x in the training set is defined as

$$\mathrm{NN}_{f,X_{T}}(x)=\operatorname*{arg\,min}_{x^{\prime}\in X_{T}}1-{\frac{\langle f(x),f(x^{\prime})\rangle}{\|f(x)\|\cdot\|f(x^{\prime})\|}}.$$

The cosine similarity is conventionally used for evaluating the similarity of latent vectors (Salton & Buckley, 1988; Le-Khac et al., 2020; Borji, 2021), but other distance metrics could also be chosen. To avoid sensitivity to noise in the input space, f is usually chosen to project to a latent space embedded with higher-level semantics. It is widely believed that a pretrained image classification model can extract high-level semantics and serve as a robust latent space for distance measurement. For example, calculation of FID involves first passing the set of images through the Inception v3 (Szegedy et al., 2016) classification model pretrained on ImageNet for feature extraction. A well-chosen f retrieves nearest neighbors that align well with human perception. Following this definition, the distance to the nearest neighbor can serve as a quantitative measure of sample similarity

$$d_{f,X_{T}}(x)=\operatorname*{min}_{x^{\prime}\in X_{T}}\;1-{\frac{\langle f(x),f(x^{\prime})\rangle}{\|f(x)\|\cdot\|f(x^{\prime})\|}}.$$

Thus, the problem of reducing memorization can be formulated as regulating the nearest neighbor distance of generated samples, which motivates our proposed algorithm.

## 2.2 Quantitative Evaluation Of Memorization

Meehan et al. (2020) proposed a non-parametric test score C_T for measuring the degree of training sample memorization of a generative model based on sample similarity. Their key insight is that a model should generate samples that are, on average, as similar to the training data as an independently drawn test sample from the same distribution. The model is memorizing if the generated samples are, on average, *more* similar to the training data than an independently drawn test sample from the same distribution. The memorization test is based on the Mann-Whitney U test, a non-parametric statistical test for testing the ordinal relationship with the null hypothesis that the given two sets of samples are from the same distribution. In this case, the two sets of samples are the nearest neighbor distances (with respect to the training data) of a generated set and a reference testing set. The more severe the memorization, the more negative the U statistic, and vice versa. Additionally, to better detect local memorization, the input domain can be divided into subspaces and the test score is aggregated over memorization tests performed on each of the subspaces. In this paper, we adopt the definition of memorization as characterized by the C_T values.

## 2.3 Generation Quality And Memorization

Good generation quality and reduced memorization can coexist. In the ideal case, if the generator perfectly fits the underlying data distribution, then the generated samples have perfect quality and are in no way more similar to the training data than another independent sample from the distribution. However, GAN models are imperfect. Figure 1 shows the nearest neighbor distance distribution (approximated by 2K samples) of a generated set from BigGAN and a reference testing set (CIFAR10.1). If the model successfully learned the data distribution, the expectations of the two nearest neighbor distributions should be identical.
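As a rough illustration of the test in Section 2.2 (our sketch, not the authors' implementation, and omitting the subspace partitioning), the global statistic can be computed from the two sets of nearest-neighbor distances with a z-scored Mann-Whitney U test; `nn_dist_generated` and `nn_dist_test` are assumed to come from a helper like the one in Section 2.1:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def ct_score_global(nn_dist_generated, nn_dist_test):
    """Z-scored Mann-Whitney U statistic comparing nearest-neighbor distances
    of generated samples against those of a held-out test set. Values near 0
    suggest no memorization; negative values suggest the generated samples
    are systematically closer to the training data."""
    u, _ = mannwhitneyu(nn_dist_generated, nn_dist_test)
    n, m = len(nn_dist_generated), len(nn_dist_test)
    mean_u = n * m / 2.0
    std_u = np.sqrt(n * m * (n + m + 1) / 12.0)
    return (u - mean_u) / std_u
```

With this statistic in hand, the comparison in Figure 1 can be read quantitatively.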
However, samples generated from BigGAN (orange line) are in fact closer to the training data than samples from the reference testing set (highlighted in orange), which indicates the memorization phenomenon.

In general, it is true that generated samples with smaller nearest neighbor distances are associated with better quality. Smaller distances imply being closer to the training distribution. Figure 2 visualizes a subset of 5k samples from a BigGAN trained on CIFAR10. The images are sorted by their nearest neighbor distance. From top to bottom, each row shows 10 images from the 20%, 40%, 60%, 80%, and 100% percentile, respectively. The upper rows with lower nearest neighbor distance are associated with better perceptual quality. This confirms that nearest neighbor distance is in general an indicator of quality.

![2_image_0.png](2_image_0.png)

Figure 1: Nearest neighbor distance distribution of the reference testing set (CIFAR10.1) versus BigGAN.

However, the nearest neighbor distance is not an indicator of quality when the distance is sufficiently small. If a generated sample is already close to the data distribution, a smaller nearest neighbor distance only implies higher similarity with the training sample. Figure 3 visualizes a subset of 5k samples from a BigGAN trained on CIFAR10, sorted by their nearest neighbor distance. From top to bottom, each row shows 10 images from the 4%, 8%, 12%, 16%, and 20% percentile, respectively. There is no perceptible quality difference between rows. Thus, for generated samples already sufficiently close to the training data, their nearest neighbor distances are indicative of potential memorization, not quality.

![3_image_0.png](3_image_0.png)

Figure 2: Visualization of CIFAR10 "horse" samples from BigGAN, sorted by nearest neighbor distance. From top to bottom each row shows the 20%, 40%, 60%, 80%, and 100% percentile.

![3_image_1.png](3_image_1.png)

Figure 3: Visualization of CIFAR10 "horse" samples from BigGAN, sorted by nearest neighbor distance. From top to bottom each row shows the 4%, 8%, 12%, 16%, and 20% percentile.

The issue for learning the distribution with GANs is only having access to a finite number of training samples. The data distribution is approximated by a joint Dirac delta distribution of the training samples. As training progresses, the generated samples become more similar to the training data (see Figure 4). Coupled with model over-parametrization, the learned implicit likelihood is overly high for neighborhoods near training samples. Yang & E (2021) proved that the distribution learned with GANs either diverges or converges weakly to the empirical training data distribution. The authors proved that early stopping allows quality measured by the Wasserstein metric to escape from the curse of dimensionality, despite inevitable memorization in the long run.

![4_image_0.png](4_image_0.png)

Figure 4: tSNE plot of CIFAR10 "car" training and generated samples from different stages of training BigGAN.

Typically the training of GANs is terminated when the quality metric starts to deteriorate (e.g. FID starts to increase). However, Meehan et al. (2020) observed memorization in state-of-the-art GAN models, implying that deterioration of quality metrics is not a sufficient criterion to prevent memorization. It is also unreasonable to expect the entire distribution to be learned at equal speed. Some easier parts of the distribution might already be well-fitted and starting to memorize while other more difficult parts of the distribution require more epochs to learn.
For example, in conditional generation tasks such as CIFAR10 (Krizhevsky et al., 2009), the difficulty of learning each class is different. It is much easier to learn manmade objects with more clearly defined borders (e.g. cars, trucks, and ships) than animals with similar color as the background (e.g. deer, birds, and frogs). Thus, early stopping a model potentially leads to both underfitting and overfitting (memorization) for different parts of the distribution.

## 3 Methods

Recall that the difference in nearest neighbor distance distributions shown in Figure 1 reflects memorization in GAN. To reduce memorization, the likelihood of the orange region should be reduced. We propose a simple and effective method to achieve this goal.

## 3.1 Memorization Rejection

We want to regularize the model to avoid generating samples overly similar to the training data. As shown in Section 2.3, samples that are already close to the training data have good-enough quality. Pushing them further towards the training data results in memorization instead of quality improvement. Based on this premise, we propose **Memorization Rejection (MR)**, which rejects generated samples that resemble near-duplicates of the training data (see Algorithm 1). This is achieved by setting a predefined rejection threshold τ on the nearest neighbor distance. The new objective is modified as follows

$$\operatorname*{min}_{\theta\in\Theta}\operatorname*{max}_{\phi\in\Phi}\operatorname*{\mathbb{E}}_{x\sim X_{T}}[\log D_{\phi}(x)]+\operatorname*{\mathbb{E}}_{\hat{x}\sim G_{\theta}(q^{\prime}(\mathcal{Z}))}[\log(1-D_{\phi}(\hat{x}))]$$

$$q^{\prime}(z):=\begin{cases}\frac{q(z)}{Q},&\mathrm{if}\ d_{f,X_{T}}(G_{\theta}(z))\geq\tau\\ 0,&\mathrm{otherwise}\end{cases}$$

where Q is the normalizing constant. To understand the effect of memorization rejection, we can rewrite the latter component of the objective

$$\operatorname*{\mathbb{E}}_{\hat{x}\sim G_{\theta}(q^{\prime}(\mathcal{Z}))}[\log(1-D_{\phi}(\hat{x}))]=\operatorname*{\mathbb{E}}_{\hat{x}\sim G_{\theta}(q(\mathcal{Z}))}[\frac{\log(1-D_{\phi}(\hat{x}))}{Q}]+l_{r}$$

$$l_{r}=-\int_{z}\log(1-D_{\phi}(G_{\theta}(z)))\cdot q^{\prime\prime}(z)\mathrm{d}z$$

$$q^{\prime\prime}(z):=\begin{cases}\frac{q(z)}{Q},&\text{if }d_{f,X_{T}}(G_{\theta}(z))<\tau\\ 0,&\text{otherwise}\end{cases}$$

The new objective becomes the original GAN minimax loss plus the regularization term l_r, which penalizes Gθ for generating samples with nearest neighbor distance less than τ, i.e., overly similar to the training data. Memorization rejection can be viewed as a form of adaptive early stopping. The training stops for generated samples with sufficiently good quality (nearest neighbor distance less than τ) while other samples continue to be updated and improved. Memorization rejection is performed when updating the generator and discriminator according to the objective. However, in practice performing MR only when training the generator is sufficient. We suspect the discriminator requires all the generated (fake) samples to accurately estimate the likelihood, which is related to how discriminators are updated multiple times before updating the generator once (Arjovsky et al., 2017). A partially converged discriminator can provide better feedback to the generator for generating realistic samples. The method is effective as long as the generator is penalized for memorization. Note that rejection is only performed during training.
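To make the training-time rejection concrete, the following is a minimal sketch (our illustration, not the authors' released code). It assumes `generator` maps a latent code to a sample, `nn_distance` computes d_{f,X_T} as in Section 2.1, and `sample_z` draws from q(Z); the second helper anticipates the threshold guideline of Section 5.7:

```python
import numpy as np

def sample_with_rejection(generator, nn_distance, sample_z, tau, max_tries=100):
    """Draw latent codes until the generated sample's nearest-neighbor
    distance to the training set clears the threshold tau, i.e.,
    approximately sample x_hat ~ G_theta(q'(Z))."""
    for _ in range(max_tries):
        z = sample_z()
        x_hat = generator(z)
        if nn_distance(x_hat) >= tau:
            return x_hat
    # Fall back to the last draw: early in training, or for a large tau,
    # the threshold may rarely be cleared.
    return x_hat

def average_train_nn_distance(train_feats):
    """Leave-one-out average nearest-neighbor (cosine) distance of the
    training set -- a natural initial value for tau (see Section 5.7)."""
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = t @ t.T
    np.fill_diagonal(sims, -np.inf)  # exclude each sample itself
    return float(np.mean(1.0 - sims.max(axis=1)))
```

In the generator step, only samples returned by `sample_with_rejection` would contribute to the loss, while the discriminator step keeps using unfiltered samples, matching the practice described above.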
For testing, samples are drawn from the original distribution x̂ ∼ Gθ(q(Z)), as MR is effectively a regularization term l_r and should not be applied during evaluation. The goal is still to learn the mapping from latent codes q(Z) to the real data distribution p(X).

**Algorithm 1:** Training GAN with Memorization Rejection

```
procedure RejectionSampling(τ, Gθ, d(·), q):
    d ← 0
    while d ≤ τ:
        sample z from q(Z)
        x̂ ← Gθ(z)
        d ← d(x̂)
    return x̂

procedure TrainWithMR(X_T, Gθ, Dϕ, N, τ, d(·), q):
    for i = 1, ..., N:
        sample x uniformly from X_T
        sample z from q(Z)
        x̂ ← Gθ(z)
        update Dϕ with x and x̂
        x̂ ← RejectionSampling(τ, Gθ, d(·), q)
        update Gθ with x̂
    return Gθ and Dϕ
```

## 3.2 Computational Complexity

Performing memorization rejection requires projecting samples to the latent space and calculating the nearest neighbor distance in each generator update. For each generator update, the additional forward pass through the embedding function f and the distance calculation result in twice the total amount of training time in our experiments. We conducted exact nearest neighbor search in all our experiments as the overhead is tolerable (less than 2x). To further speed up training on larger scale datasets, approximate nearest neighbor search (Li et al., 2020; Malkov & Yashunin, 2018) can be applied instead. Open source libraries (Guo et al., 2020; Jayaram Subramanya et al., 2019) allow efficient approximate nearest neighbor search on billion-scale datasets.

## 4 Related Work

## 4.1 GAN Memorization Metrics

Many works have studied different definitions of memorization in GANs, and some of them proposed metrics to quantify the severity of memorization (Meehan et al., 2020; Lopez-Paz & Oquab, 2018; Bounliphone et al., 2016; Gulrajani et al., 2020; Borji, 2021; Webster et al., 2019; Adlam et al., 2019; Feng et al., 2021; Bai et al., 2021). One line of studies relies on sample-based statistical tests. Lopez-Paz & Oquab (2018) applied the two-sample nearest neighbor non-parametric test to evaluate the leave-one-out accuracy of the nearest neighbor classifier evaluated on a dataset consisting of generated samples and samples from the original training set. Esteban et al. (2017) adopted the result of the maximum mean discrepancy three-sample test (Bounliphone et al., 2016) as the null hypothesis and evaluated the averaged p-values. Meehan et al. (2020) proposed a non-parametric test which estimates the likelihood of the nearest neighbor distance of a generated sample being greater than that of a sample from a reference test set. Another line of studies relies on the Neural Network Divergence for measuring the overall generalization of generative models (Liu et al., 2017; Gulrajani et al., 2020). The method requires training a neural network to differentiate samples from two distributions and using the converged loss after training as a proxy for the discriminability of the two distributions. From a tangential perspective, Webster et al. (2019) measure memorization by retrieving latent codes that map to near-duplicates of training samples.

## 4.2 Data Augmentation In GAN Training

Data augmentation has been applied to reduce overfitting in GANs (Zhao et al., 2020; Karras et al., 2020; Tran et al., 2021; Liu et al., 2021; Tseng et al., 2021; Yang et al., 2021). However, the overfitting that can be solved by data augmentation refers to overfitting of the discriminator when only a limited amount of training data is available. The augmented training data improves generation quality by preventing the discriminator from making high confidence predictions.
However, studies on GAN data augmentation techniques have not been shown to reduce generator training sample memorization, as defined in this work. In fact, we later show in Section 5.4 that data augmentation (Zhao et al., 2020) is ineffective at reducing memorization.

## 4.3 Sampling With Rejection In GANs

Sampling is essential to training GANs since the GAN objective is based on the two-sample test of real and fake distributions. One common technique to efficiently sample from a complex distribution is rejection sampling. Rejection sampling can be further generalized to "sampling with rejection", where the criterion for rejecting a sample depends on a custom function as opposed to the probability density. This allows straightforward filtering of unfavorable samples. The criterion for rejection can be customized for different needs. Lim & Ye (2017) proposed to adopt the hinge loss as the objective and reject a sample (from being used to update the model) if it falls within the margin. Azadi et al. (2018) proposed a post-processing scheme where generated samples are rejected based on the likelihood estimated by the discriminator. Their key insight is that it is easier for the discriminator to determine when the distribution is not being modeled precisely. Sinha et al. (2020) reject samples associated with lower likelihood estimated by the discriminator during training and only update the generator with "good quality samples" to improve generation quality.

## 5 Experimental Results

Our goal is to analyze how training GANs with memorization rejection affects the performance in terms of generation quality and memorization severity. We demonstrate that it is possible to reduce memorization with minimal (non-perceivable) impact on the generation quality. The code for all the experiments will be open sourced.

## 5.1 Experimental Setting

We trained GAN models with different rejection thresholds τ. Higher rejection thresholds imply generated samples must be more distinct from the training data to be used for updating the model. The models are benchmarked with FID for generation quality and the C_T score defined in Section 2.2 for memorization severity. Each experiment is repeated 4 times for consistency and the average performance is reported. A latent projection function f is required to calculate FID and C_T. We chose the same projection f for both metrics, allowing the evaluation of quality and memorization severity to be in the same latent space. Following the convention for FID (Heusel et al., 2018), we chose the penultimate layer of the Inception v3 model (Szegedy et al., 2016) pretrained on ImageNet as the embedding function f.

## 5.1.1 Datasets And Models

We conducted conditional generation experiments on the CIFAR10 dataset (Krizhevsky et al., 2009). The dataset consists of 50k training samples of natural images in 10 classes. We experimented on models of different complexity: SAGAN (Zhang et al., 2019), BigGAN (Brock et al., 2019), and BigGAN with differentiable augmentation (Zhao et al., 2020). The three models are intentionally selected to be incrementally more complex. BigGAN with differentiable augmentation is built upon BigGAN, while BigGAN adopts the self-attention mechanism of SAGAN. We expect that higher complexity causes models to memorize the training data more. The training hyperparameters are set as follows (no additional finetuning):

- Batch size: 64.
- Total number of steps for training: 100,000.
- Learning rate: 0.0002 (for the generator and discriminator).
- Optimized with ADAM (β1: 0.5, β2: 0.999).
- The discriminator is updated for 5 steps per one step update on the generator.

## 5.1.2 Reference Testing Set

It is important to obtain an unbiased memorization measurement by using an independent test set disjoint from the training set. The more distinct and independent the reference testing set is, the more accurate the evaluation of memorization severity. Although the CIFAR10 testing set is the most accessible choice, Barz & Denzler (2020) identified the issue of high overlap between the training and testing sets of CIFAR10. They mined duplicate images in the testing set using the projected cosine similarity as the nearest neighbor distance (measured with respect to the training set) and manually replaced near-duplicates to construct a new dataset, ciFAIR10. Unfortunately, upon closer inspection, many near-duplicates still exist in the ciFAIR10 dataset, possibly due to only considering the 1-nearest neighbor when mining. Recht et al. (2018) constructed CIFAR10.1 to serve as a new benchmark dataset for CIFAR10 to verify whether state-of-the-art classification models can generalize to new samples. They resampled from the Tiny Images repository and went through the exact process of creating the CIFAR10 dataset with additional rigorous data-cleaning procedures. This includes manually inspecting the 10 nearest neighbors and removing instances of near-duplicates. They observed that good model performance on the existing CIFAR10 testing set does not necessarily transfer to CIFAR10.1, suggesting that important modes are missing from the existing testing set. Thus, we selected CIFAR10.1 (Recht et al., 2018) as the reference testing set for metric evaluation since it is curated to have the least overlap with the training data.

## 5.2 Effectiveness Of Memorization Rejection

Recall we showed in Figure 1 that samples generated from GANs are distributed closer to the training data than a reference testing set. In Figure 5, in addition to the BigGAN distribution and the reference testing set (CIFAR10.1), we further plot the distribution generated by BigGAN with memorization rejection. We observe that the distribution of BigGAN with MR is more similar to the reference testing set, indicating that training with memorization rejection reduces the similarity to the training data. This provides qualitative evidence that memorization rejection is effective in reducing training sample memorization. Next we analyze the generation quality and memorization severity quantitatively. Figure 6 shows the FID and C_T of BigGANs trained with different rejection thresholds. Recall that a C_T value of 0 implies the generated samples are as similar to the training data as the reference testing set, i.e., no memorization. Negative C_T values imply memorization. We observe that for rejection thresholds up to 0.13, C_T becomes less negative with minimal impact on the FID. The memorization is reduced without degrading generation quality within this region. Figure 7 visualizes generated samples from BigGAN trained with no rejection and with τ = 0.13. There is no visually perceptible generation quality difference between the two sets of samples. For thresholds greater than 0.13, the reduction in memorization comes with a tradeoff in quality. For τ = 0.16 the C_T remains negative, indicating slight memorization. Although training sample memorization is not completely removed (which may not even be possible), we demonstrated that training GANs with memorization rejection reduces the severity.
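A comparison in the style of Figure 5 can be sketched as follows (our illustration, not the evaluation code used for the paper; the `dists_by_label` arrays are assumed to come from the nearest-neighbor helper of Section 2.1):

```python
import matplotlib.pyplot as plt

def plot_nn_distance_distributions(dists_by_label):
    """Overlay histograms of nearest-neighbor distances (to the training set)
    for, e.g., a baseline GAN, a GAN trained with MR, and a reference test set."""
    for label, dists in dists_by_label.items():
        plt.hist(dists, bins=50, density=True, histtype="step", label=label)
    plt.xlabel("nearest neighbor distance to training set")
    plt.ylabel("density")
    plt.legend()
    plt.show()

# Hypothetical usage; the labels and distance arrays are ours:
# plot_nn_distance_distributions({"BigGAN": d_gan, "BigGAN + MR": d_mr,
#                                 "CIFAR10.1": d_test})
```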
The rejection threshold serves as a control knob to regularize the model, and in this experiment a properly tuned threshold of τ = 0.13 improves C_T while maintaining (even improving) FID.

![8_image_1.png](8_image_1.png)

![8_image_0.png](8_image_0.png)

![8_image_2.png](8_image_2.png)

Figure 5: Nearest neighbor distance distribution (CIFAR10 train) of the reference testing sets.

![8_image_3.png](8_image_3.png)

Figure 6: Relation between quality (FID) and memorization (C_T) with varying rejection threshold τ.

Figure 7: Visualization of non-curated, generated samples. Left: BigGAN trained *without* MR. Right: BigGAN trained *with* MR. The generation quality of the model with and without MR is similar.

## 5.3 Class-Wise Memorization Severity

One incentive to adopt memorization rejection is to serve as a form of adaptive early stopping that allows regions of the distribution to be learned at different speeds. For conditional generation, the difficulty of learning each class is different. As mentioned in Section 2.3, manmade object classes (e.g. car, ship, plane) are easier to learn than natural object classes (e.g. bird, cat, frog). Figure 8 shows the class-wise C_T values for models trained with different rejection thresholds. As a sanity check, C_T values indeed reflect how easily a class is learned. Coincidentally, classes that are easier to learn are also more easily memorized. MR reduces the memorization severity of highly memorized classes. Besides reducing memorization of memorized classes, one thing to be aware of is whether memorization rejection causes underfitted classes to underfit even more. It is not ideal if memorization rejection shifts the entire generated distribution uniformly away from the training data. The target for memorization rejection is the highly memorized regions only. According to our experiments, MR is effective for classes with more severe memorization, i.e., more negative C_T values. On the other hand, classes that are not memorized, i.e., have more positive C_T values, are barely affected.

![9_image_0.png](9_image_0.png)

Figure 8: Effect of rejection threshold τ on memorization (C_T) of each CIFAR10 class.

Figure 9: Relation between quality (FID) and memorization (C_T) with varying τ for different models.

## 5.4 Experiments On Model Architectures

Figure 9 shows that not only is memorization rejection generic and applicable to any GAN model, it is also effective. The complexity of the models from lowest to highest is SAGAN, BigGAN, and BigGAN with differentiable augmentation. The complexity is reflected in the performance on both metrics, where more complex models are associated with lower FID scores (better quality) and more negative C_T values (more memorization). We observe that for all three models, there exists some rejection threshold that improves the C_T value without compromising the quality. Differentiable augmentation (Zhao et al., 2020) is a technique for augmenting training data to avoid discriminator overfitting while preventing the generator from learning the augmented samples. Figure 9 shows that BigGAN with differentiable augmentation exhibits the most severe memorization when no MR is applied. This indicates that augmentation techniques in GAN training, albeit useful for improving generation quality, are not effective at reducing training sample memorization. However, when coupled with MR, the C_T values can be significantly improved by increasing the rejection threshold without any degradation in quality as measured by FID.
This showcases the compatibility of memorization rejection with other tangential GAN training techniques.

## 5.5 Evaluation With Various Distance Metrics

To perform memorization rejection, a distance metric d_{f,X_T} is required to evaluate the nearest neighbor distance during training. Although the GAN model does not directly optimize for the distance, the rejection can be viewed as a regularization term on the original GAN objective, as derived in Section 3.1. Thus, there is a risk of the generator adversarially learning to generate samples that are dissimilar to the training data as gauged by the distance metric used during training, without an actual reduction in memorization. If the memorization reduction is indeed legitimate, we should observe the same improvement in C_T when evaluating with other distance metrics. We constructed different distance metrics by changing the embedding function f. Specifically, we selected the penultimate layers of different ImageNet-pretrained models as the embedding spaces, which are expected to be rich in high-level semantics but not equivalent. Figure 10 shows the C_T values evaluated with different distance metrics. The universal trend holds across all the metrics: higher rejection thresholds yield less negative C_T values. This provides quantitative evidence that the observed improvement in C_T is not merely an artifact of the distance metric used during training, but represents an actual reduction in memorization severity.

![10_image_0.png](10_image_0.png)

Figure 10: Effect of rejection threshold τ on memorization (C_T) evaluated with different distance functions d_{f,X_T}. The result suggests applying memorization rejection *does not* cause the model to adversarially optimize for the distance function applied during training. Rather, it effectively reduces memorization.

Figure 11: Effect of rejection threshold τ on memorization (C_T) evaluated with respect to different testing sets. The positive correlation between the rejection threshold and increased C_T value is consistent across the testing sets.

## 5.6 Evaluation With Various Reference Testing Sets

Recall that we emphasized the importance of selecting a proper reference testing set in Section 5.1.2. The distinctiveness of the reference testing set from the training data reflects the strictness of the criteria for memorization. Our choice of CIFAR10.1 (Recht et al., 2018) as the reference in our experiments indicates a higher standard for the models. What happens if a testing set with more overlap with the training data is chosen as reference instead? Figure 11 shows the C_T values when using other testing sets as reference. CIFAR10 test is the testing set included in the original CIFAR10 dataset, which is found to be highly overlapping with the training set (Recht et al., 2018). ciFAIR10 is the testing set curated by Barz & Denzler (2020) as an attempt to remove some near-duplicates of the training data, but not all of them. The testing sets ranked by degree of overlap with the training data, from high to low, are CIFAR10 test, ciFAIR10, then CIFAR10.1. The absolute C_T values of the testing sets reflect their distinctiveness from the training data: the less distinctive, the higher the C_T value, and the weaker the criterion for training sample memorization. On the other hand, if we compare the relative C_T values within the same testing set, the positive correlation between the rejection threshold and the C_T value holds.
This suggests that even if a non-overlapping, independent reference isn't available, the effectiveness of MR can still be observed.

## 5.7 Guideline For Threshold Selection

We have shown in previous sections that a well-chosen rejection threshold allows reduced memorization with minimal impact on generation quality. Due to the tradeoff nature between quality and memorization illustrated in Section 2.3, we showed through experiments that the optimal rejection threshold can and should be tuned. It is only natural to ask: is it possible to estimate the neighborhood where the optimal threshold lies without brute-force trial and error? The initial intention of performing memorization rejection is to avoid updating generated samples that are too similar to their nearest neighbors in the training data. How the data is distributed in the latent space used for distance evaluation is key to determining the rejection threshold. We can estimate the average density of the data distribution by measuring the average nearest neighbor distance of the training data

$$\bar{d}_{X_{T}}=\frac{1}{|X_{T}|}\sum_{x\in X_{T}}d_{f,X_{T}\setminus\{x\}}(x).$$

A generated sample with nearest neighbor distance less than d̄_{X_T} is likely close to the data distribution, which implies "good enough" quality. This makes d̄_{X_T} a natural initial choice for performing memorization rejection. The average nearest neighbor distance d̄_{X_T} of the CIFAR10 training set, with the Inception v3 model pretrained on ImageNet as the embedding function f, is 0.15. Figure 12 shows the tSNE figures for CIFAR10 generated samples partitioned according to their nearest neighbor distance d_{f,X_T}. Generated samples with d less than 0.15 cover most of the data distribution, while samples with d greater than 0.15 fall outside the data distribution. This suggests that the average nearest neighbor distance of the training samples can be used to determine whether a generated sample is close enough to the data distribution, and can serve as the rejection threshold.

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

Figure 12: tSNE plot of 50k CIFAR10 "car" generated samples from BigGAN, split at d = 0.15. The left figure, with generated samples of d ≤ 0.15, covers most of the data distribution, while the generated samples in the right figure, with d > 0.15, lie outside of the distribution.

## 6 Conclusion

Training sample memorization is a known issue for GANs but is often addressed as a caution only after models are trained. To the best of our knowledge, we are the first to directly tackle the issue of memorization reduction during GAN training. Specifically, we proposed a training strategy to reject memorized samples when updating the generator. We showed through experiments that our method is effective at reducing memorization and that the rejection threshold serves as a control knob for tuning the magnitude of regularization. Selecting a good rejection threshold allows the model to learn to generate from the training distribution without replicating near-duplicates of training samples. Currently our method discards the update information provided by memorized samples to reduce memorization. The information could potentially be better utilized in other ways to further improve generation quality. We hope that the foundation we established inspires future work to explore active memorization reduction techniques for GANs, improving the Pareto frontier of reduced memorization and better generation quality.

## References

Ben Adlam, Charles Weill, and Amol Kapoor.
Investigating under and overfitting in wasserstein generative adversarial networks, 2019.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan, 2017.

Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, and Augustus Odena. Discriminator rejection sampling. *arXiv preprint arXiv:1810.06758*, 2018.

Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, and Wendy Chi-wen Kan. On training sample memorization: Lessons from benchmarking generative modeling with a large-scale competition. *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, Aug 2021. doi: 10.1145/3447548.3467198. URL http://dx.doi.org/10.1145/3447548.3467198.

Björn Barz and Joachim Denzler. Do we train on test data? purging cifar of near-duplicates. *Journal of Imaging*, 6(6):41, Jun 2020. ISSN 2313-433X. doi: 10.3390/jimaging6060041. URL http://dx.doi.org/10.3390/jimaging6060041.

Ali Borji. Pros and cons of gan evaluation measures, 2018.

Ali Borji. Pros and cons of gan evaluation measures: New developments. *arXiv preprint arXiv:2103.09396*, 2021.

Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, and Arthur Gretton. A test of relative similarity for model selection in generative models, 2016.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis, 2019.

Cristóbal Esteban, Stephanie L. Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional gans, 2017.

Qianli Feng, Chenqi Guo, Fabian Benitez-Quiroz, and Aleix M. Martinez. When do gans replicate? on the choice of dataset size. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 6701–6710, October 2021.

Maayan Frid-Adar, Eyal Klang, Michal Amitai, Jacob Goldberger, and Hayit Greenspan. Synthetic data augmentation using gan for improved liver lesion classification, 2018.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.

Ishaan Gulrajani, Colin Raffel, and Luke Metz. Towards gan benchmarks which require generalization, 2020.

Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerating large-scale inference with anisotropic vector quantization. In *International Conference on Machine Learning*, pp. 3887–3896. PMLR, 2020.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018.

Suhas Jayaram Subramanya, Fnu Devvrit, Harsha Vardhan Simhadri, Ravishankar Krishnawamy, and Rohan Kadekodi. Diskann: Fast accurate billion-point nearest neighbor search on a single node. *Advances in Neural Information Processing Systems*, 32, 2019.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8110–8119, 2020.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Phuc H. Le-Khac, Graham Healy, and Alan F. Smeaton. Contrastive representation learning: A framework and review. *IEEE Access*, 8:193907–193934, 2020. ISSN 2169-3536. doi: 10.1109/access.2020.3031549. URL http://dx.doi.org/10.1109/ACCESS.2020.3031549.
Wen Li, Ying Zhang, Yifang Sun, Wei Wang, Mingjie Li, Wenjie Zhang, and Xuemin Lin. Approximate nearest neighbor search on high dimensional data - experiments, analyses, and improvement. *IEEE Transactions on Knowledge and Data Engineering*, 32(8):1475–1488, 2020. doi: 10.1109/TKDE.2019.2909204.

Jae Hyun Lim and Jong Chul Ye. Geometric gan, 2017.

Bingchen Liu, Yizhe Zhu, Kunpeng Song, and Ahmed Elgammal. Towards faster and stabilized GAN training for high-fidelity few-shot image synthesis. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=1Fqg133qRaI.

Shuang Liu, Olivier Bousquet, and Kamalika Chaudhuri. Approximation and convergence properties of generative adversarial learning, 2017.

David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests, 2018.

Yu A Malkov and Dmitry A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 42(4):824–836, 2018.

Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. A non-parametric test to detect data-copying in generative models. *arXiv preprint arXiv:2004.05675*, 2020.

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know?, 2019.

Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do cifar-10 classifiers generalize to cifar-10?, 2018.

Donatas Repecka, Vykintas Jauniskis, Laurynas Karpus, Elzbieta Rembeza, Irmantas Rokaitis, Jan Zrimec, Simona Poviloniene, Audrius Laurynenas, Sandra Viknander, Wissam Abuajwa, et al. Expanding functional protein sequence spaces using generative adversarial networks. *Nature Machine Intelligence*, 3(4):324–333, 2021.

Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. *Information Processing & Management*, 24(5):513–523, 1988.

Samarth Sinha, Zhengli Zhao, Anirudh Goyal, Colin A Raffel, and Augustus Odena. Top-k training of gans: Improving gan performance by throwing away bad samples. *Advances in Neural Information Processing Systems*, 33:14638–14649, 2020.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2818–2826, 2016.

Hoang Thanh-Tung and Truyen Tran. Toward a generalization metric for deep generative models, 2020.

Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Trung-Kien Nguyen, and Ngai-Man Cheung. On data augmentation for gan training. *IEEE Transactions on Image Processing*, 30:1882–1897, 2021. doi: 10.1109/TIP.2021.3049346.

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, and Weilong Yang. Regularizing generative adversarial networks under limited data. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 7921–7931, June 2021.

Ryan Webster, Julien Rabin, Loic Simon, and Frederic Jurie. Detecting overfitting of deep generative networks via latent recovery, 2019.

Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, and Timothy Lillicrap. Logan: Latent optimisation for generative adversarial networks. *arXiv preprint arXiv:1912.00953*, 2019.

Ceyuan Yang, Yujun Shen, Yinghao Xu, and Bolei Zhou. Data-efficient instance generation from instance discrimination, 2021.

Hongkang Yang and Weinan E.
Generalization error of gan from the discriminator's perspective, 2021. Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In *International conference on machine learning*, pp. 7354–7363. PMLR, 2019. Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient gan training. *arXiv preprint arXiv:2006.10738*, 2020.
Review 1: Summary: This paper tries to solve the sample memorization issue of deep generative models by first detecting if a generated sample is sufficiently close to the training set and then using early stopping if this is the case. Experimental results on the CIFAR-10 dataset are provided to empirically justify the claims. Strengths and Weaknesses: The paper tries to tackle a well-known issue in generative models - sample memorization. It is true that most popular evaluation metrics such as FID or inception score cannot effectively detect if a generative model has the sample memorization issue, and indeed, in many application scenarios such as drug/material discovery and art, generating novel samples instead of simply memorizing the training set is important. So I agree this is an important question to study. The main methodological contribution is to detect if a generated sample is sufficiently close to the training set in terms of some distance metric (e.g. an embedding function + cosine similarity) and then perform early stopping on those samples. Based on my understanding, both the memorization detection method and the idea of early stopping have been studied previously, and from my point of view, it seems the combination of them does not provide enough theoretical insights/novelty. Below are some detailed questions/comments: - How to efficiently detect if the model is memorizing the training data is perhaps the first and most important question to consider in this setting. For example, we definitely should not measure L2-distance in pixel space, as shifting an image by one pixel is almost just memorizing the image despite a large L2-distance. So employing a feature extraction and measuring similarity in latent space is necessary. But is using a standard architecture like inception-v3 good enough? Most modern deep NNs have adversarial examples. If we add negligible noise to an image (the resulting image looks almost the same as before, thus memorization), in the latent space, the new image may be quite different (classified as a different class). So we should be more careful about this part, and to really tackle this problem, we may need additional technical considerations instead of plug-and-play solutions. - "the nearest neighbor distance is not an indicator of quality when the distance is sufficiently small": this kind of argument is more like an intuition rather than a rigorous theoretical statement. For example, how would you prove this (under what assumptions on the function f, and how do you define quality mathematically)? - Is there any theoretical guarantee (convergence, unbiasedness, etc.) for the new objective function (some kind of adaptive importance sampling with a dynamically changing proposal distribution), which corresponds to some **modified version** of a well-defined divergence function like f-divergence? - If f is the inception network later used in evaluation (e.g. FID), then this may be a leak of information from the test to the train stage, and the model may have the chance to overfit to/exploit this evaluation metric (the comparison may not be fair for other models). - At the end of Section 3.1, the algorithm is hard to parse in its current form. - In the experiments, only CIFAR-10 is used, which cannot really justify the arguments about scaling up to larger datasets, as the computation still seems quite expensive. Also, the memorization issue seems to be a common problem shared by all deep generative models.
So it would be more convincing to include more datasets and more scenarios, such as other SOTA GANs and generative models like VAEs, diffusion models/EBMs, etc. Requested Changes: Please see the "Strengths And Weaknesses" section for questions/comments. Broader Impact Concerns: No concerns on the ethical implications of the work. ================================================== Review 2: Summary: The paper addresses the problem of sample memorization in GANs. The paper observes that for generated images with a sufficiently small distance to the nearest training image, further reducing the distance would result in memorization but does not improve sample quality much. Motivated by this observation, the paper proposes to discard generated images whose distance to the nearest training image is within a threshold during training. It can be seen as a regularization that penalizes the generator for copying the training images. Experiments show that the proposed approach can reduce memorization without impacting sample quality scores too much across several GAN architectures on the CIFAR10 dataset. Strengths and Weaknesses: Strengths: * The idea is simple and intuitive. It is compatible with a wide range of GAN architectures and formulations. * The results on the CIFAR10 dataset look promising. Weaknesses: * The paper needs a careful pass on the writing. There are many typos and writing issues. * Although the abstract claims "Experiments on multiple datasets", I only see experiments on the CIFAR10 dataset. Experiments on a single, low-resolution dataset are insufficient to demonstrate the value of the work. It is unclear whether the claimed benefits generalize to other more realistic datasets. Requested Changes: [Experiments] * [Critical] The abstract claims "Experiments on multiple datasets", but there are only experiments on the CIFAR10 dataset. It is important to have experiments on other datasets for supporting the claims and demonstrating the value of the work. * [Less critical] It would be better to have an ablation study of performing MR only on the generator vs. performing MR on both the generator and the discriminator. [Writing] * [Less critical] Section 2: "X_T\subseteq \mathcal{X}" -> "X_T\subseteq \mathcal{X}^N" * [Critical] Figure 3: Visual inspection of images is too subjective. For me, I would say that the last row is indeed visually worse than the first row, as opposed to "there is no perceptible quality difference between rows" claimed in the paper. To support the claim, it is better to have a quantitative evaluation besides this qualitative visualization. * [Less critical] Figure 4: are you doing tSNE from the pixel space or an embedding space? It is not stated in the paper. * [Less critical] Section 3.1: Compared with the vanilla GAN, this approach has additional undesired equilibria, where the generator mode-collapses to a single training image, and the discriminator outputs a constant no matter what the input is. I understand that the modified version where MR is only performed on the generator would eliminate this equilibrium, but there could be others. It is probably good to have discussions around this in the paper. * [Critical] The format of algorithm 3.1 is broken. * [Critical] Section 3.2 mentioned that an embedding space is used for computing the nearest distance. Which embedding space is used? This should be mentioned in the paper. * [Less critical] Section 5.2: figure -> Figure * [Less critical] Figure 5: which tau value is used for BigGAN+MR?
* [Critical] Section 5.4 states "Figure 9 shows that BigGAN with differential augmentation exhibits the most severe memorization when no MR is applied." But in Figure 9, it is unclear which points are the ones without MR. * [Critical] Figure 8: In the previous text, it was mentioned that negative C_T means memorization, and a zero C_T means no memorization. What is the meaning of a positive C_T then? * [Critical] A related question to the above. I don't quite get the discussion in Section 5.6 "The absolute C_T values of the testing ... and C_T value holds." Would you mind explaining it in the rebuttal? Thank you! * [Less critical] Section 5.6: The dataset names used here are different from those used in Section 5.1.2. For example, Barz & Denzler (2020) is marked with CIFAIR10 in 5.6, but ciFAIR10 in 5.1.2. * [Critical] Section 5.7: I understand Figure 12. But why does that imply that "the average nearest neighbor distance of the training samples can be used to determine whether a generated sample is close enough"? Broader Impact Concerns: I don't have concerns about this. ================================================== Review 3: Summary: The paper tries to analyze the sample memorization issue in GANs and proposes a simple trick called memorization rejection to solve it. Some empirical results are insightful but the whole paper lacks novelty and has limited impact. Strengths and Weaknesses: Overall, this is an incremental paper with limited novelty and insufficient empirical studies. Basically, the memorization rejection is to early-stop the training of the generator on some well-trained samples $z$. As agreed by the authors, the early-stopping trick has been investigated by the GAN community, so the technical novelty of the paper is poor. Besides, I worry that the proposed method has poor scalability and perhaps relies on intensive hyperparameter tuning in practice. Moreover, the authors conducted experiments on only the CIFAR-10 dataset. I want to know if the memorization issue still exists for large-scale datasets like ImageNet and if the proposal still works. Requested Changes: Other detailed comments are listed as follows: - Regarding the argument that "the nearest neighbor distance is not an indicator of quality when the distance is sufficiently small": It seems that the only evidence is the qualitative results in Figure 3. Do you cherry-pick the images? I admit that these rows have imperceptible differences, but are they really close enough to each other quantitatively (e.g., measured by FID/IS)? - "However, in practice performing MR only when training the generator is sufficient": Is this because GAN's discriminator converges easily, so you need to include those hard fake samples (which are very close to the training data) to avoid trivial convergence of the discriminator? BTW, do you have an ablation study on this to strengthen this part of the paper? - The last paragraph of Sec 3.1 may be an algorithm, but its formatting collapses. - The computational complexity is high in large-scale cases. On one hand, the method needs to store the embeddings of all training instances; on the other hand, at each training iteration, the method has to perform hundreds (i.e., the batch size) of exact/approximate nearest-neighbor searches over potentially millions of candidates. Both of these are unacceptable when processing modern datasets like ImageNet and beyond. - In Figure 6, is the result of $\tau=0.13$ an average value or from one single trial?
It is very strange to see that it leads to better FID than $\tau<0.13$. Why? - Comparisons to SOTA GAN models in terms of both visual quality and memorization severity are necessary. Large-scale evaluation is also desired as CIFAR-10 data is less diverse than popular real-world image datasets. - It seems that different GAN models correspond to different C_T vs. FID plots (Figure 9). Do they prefer the same threshold $\tau$? Namely, does the proposed method require intensive hyper-parameter tuning when facing different GAN methods/datasets? Considering Figure 9, Sec 5.7 cannot *generally* solve the problem of the specification of $\tau$. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: The three initial reviews raised significant doubts regarding the evidence in support of the claims and also asked clarification questions. One of the reviews also discussed novelty. As novelty (and significance) are not part of TMLR's evaluation criteria, I did not take these comments into account. The authors then provided a response that addressed the most significant criticisms raised by reviewers. The reviewers' evaluation did not change in light of these responses, and all suggest that the paper is not ready for publication at TMLR. Below I summarize the two main aspects that had the most influence on the decision: + A single dataset is studied. The three reviewers found that it was difficult to support the claim about the method's efficacy since it was validated on a single dataset (as a minor point: the abstract claims it is validated on multiple datasets). The size of that dataset was also discussed, but this is a secondary concern. In their reply, the authors state that to obtain accurate memorization results a reference test set disjoint from the training data must be available. This is the case for CIFAR10.1---a "cleaned-up" version of the original CIFAR10 test set---but test sets from large-scale object recognition datasets likely contain near-duplicates from the training set. While one reviewer explicitly recognized this reference-set challenge, the reviewers all seem to believe that it would still be valuable to study existing larger-scale datasets. That is, *relative* improvements in memorization still seem meaningful even though an additional level of analysis might be required. In addition, setting aside size, studying synthetic datasets might be able to at least show that the method generalizes beyond CIFAR10. It was also suggested that another possibility would be to show that the proposed method generalizes to other deep generative models. + The claim that "the nearest neighbor distance is not an indicator of quality when the distance is sufficiently small" was also questioned by reviewers. This is a core assumption for the development of the method. While some qualitative evidence is provided in support (Figure 3), it would be worth confirming this assumption with either further empirical evidence, ideally including quantitative evidence, and/or a theoretical development. There were a few more minor comments that I suggest the authors should address: + Scalability. In their response, the authors say that their method can use approximate neighbor search and so should scale to larger datasets. I would suggest adding a short discussion in the paper about this and ideally verifying it empirically. + Writing quality. Two reviewers also suggest that the quality of the presentation could be improved (e.g.
the presentation of Algorithm 3.1, see details in the reviews) and further proofreading of the paper was suggested. + Unanswered questions. The authors did answer two of the main concerns raised by reviewers in their response, but the reviewers also asked several other clarification questions which seemed not to have been answered at this stage. I would be willing to consider a significantly revised version of the manuscript. ==================================================
# Analysis Of Convolutions, Non-Linearity And Depth In Graph Neural Networks Using Neural Tangent Kernel

Mahalakshmi Sabanayagam sabanaya@cit.tum.de School of Computation, Information and Technology Technical University of Munich Pascal Esser *esser@cit.tum.de* School of Computation, Information and Technology Technical University of Munich Debarghya Ghoshdastidar *ghoshdas@cit.tum.de* School of Computation, Information and Technology Technical University of Munich Reviewed on OpenReview: *https://openreview.net/forum?id=xgYgDEof29* ## Abstract The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating the neighboring nodes using a 'graph convolution' in conjunction with a suitable choice for the network architecture, such as depth and activation functions. Therefore, understanding the influence of each of the design choices on the network performance is crucial. Convolutions based on the graph Laplacian have emerged as the dominant choice, with the symmetric normalization of the adjacency matrix as the most widely adopted one. However, some empirical studies show that row normalization of the adjacency matrix outperforms it in node classification. Despite the widespread use of GNNs, there is no rigorous theoretical study on the representation power of these convolutions that could explain this behavior. Similarly, the empirical observation that the performance of linear GNNs is on par with that of non-linear ReLU GNNs lacks rigorous theory. In this work, we theoretically analyze the influence of different aspects of the GNN architecture using the *Graph Neural Tangent Kernel* in a semi-supervised node classification setting. Under the population *Degree Corrected Stochastic Block Model*, we prove that: (i) linear networks capture the class information as well as ReLU networks; (ii) row normalization preserves the underlying class structure better than other convolutions; (iii) performance degrades with network depth due to over-smoothing, but the loss in class information is the slowest in row normalization; (iv) skip connections retain the class information even at infinite depth, thereby eliminating over-smoothing. We finally validate our theoretical findings numerically and on real datasets such as *Cora* and *Citeseer*. ## 1 Introduction With the advent of Graph Neural Networks (GNNs), there has been tremendous progress in the development of computationally efficient state-of-the-art methods in various graph-based tasks, including drug discovery, community detection and recommendation systems (Wieder et al., 2020; Fortunato & Hric, 2016; van den Berg et al., 2017). Many of these problems depend on the structural information of the data, represented by the graph, along with the features of the nodes. Because GNNs exploit this topological information encoded in the graph, they can learn better representations of the nodes or the entire graph than traditional deep learning techniques, thereby achieving state-of-the-art performance. In order to accomplish this, GNNs apply an aggregation function to each node in a graph that combines the features of the neighboring nodes, and GNN variants differ principally in their methods of aggregation.
For instance, graph convolution networks use mean neighborhood aggregation through spectral approaches (Bruna et al., 2014; Defferrard et al., 2016; Kipf & Welling, 2017) or spatial approaches (Hamilton et al., 2017; Duvenaud et al., 2015; Xu et al., 2019), graph attention networks apply multi-head attention based aggregation (Velickovic et al., 2018) and graph recurrent networks employ complex computational modules (Scarselli et al., 2008; Li et al., 2016). Of all the aggregation policies, the spectral graph Laplacian based approach is most widely used in practice, specifically the one proposed by Kipf & Welling (2017), owing to its simplicity and empirical success. In this work, we focus on such graph Laplacian based aggregations in Graph Convolution Networks (GCNs), which we refer to as graph convolutions or *diffusion operators*. Kipf & Welling (2017) propose a GCN for node classification, a semi-supervised task, where the goal is to predict the label of a node using its feature and neighboring node information. They suggest the symmetric normalization $\mathbf{S}_{sym} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$ as the graph convolution, where $\mathbf{A}$ and $\mathbf{D}$ are the adjacency and degree matrix of the graph, respectively. Ever since its introduction, $\mathbf{S}_{sym}$ has remained the popular choice. However, subsequent works such as Wang et al. (2018); Wang & Leskovec (2020); Ragesh et al. (2021) explore the row normalization $\mathbf{S}_{row} = \mathbf{D}^{-1}\mathbf{A}$ and, particularly, Wang et al. (2018) observes that $\mathbf{S}_{row}$ outperforms $\mathbf{S}_{sym}$ for a two-layered GCN empirically. Intrigued by this observation, and the fact that both $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$ are simply degree normalized adjacency matrices, we study the behavior over depth and observe that $\mathbf{S}_{row}$ performs better than $\mathbf{S}_{sym}$ in general, as illustrated in Figure 1 (details of the experiment in Appendix C.1). Furthermore, another striking observation from Figure 1 is that the performance of GCN without skip connections decreases considerably with depth for both $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$. This contradicts the conventional wisdom about standard neural networks, which exhibit improvement in the performance as depth increases. Several works (Kipf & Welling, 2017; Chen et al., 2018b; Wu et al., 2019) observe this behavior empirically and attribute it to the over-smoothing effect from the repeated application of the diffusion operator, resulting in averaging out of the feature information to a degree where it becomes uninformative (Li et al., 2018; Oono & Suzuki, 2019; Esser et al., 2021). As a solution to this problem, Chen et al. (2020) and Kipf & Welling (2017) propose different forms of skip connections that overcome the smoothing effect and thus outperform the vanilla GCN. Extending it to the comparison of graph convolutions, Figure 1 shows $\mathbf{S}_{row}$ is preferable to $\mathbf{S}_{sym}$ over depth in general for different GCNs. Naturally, we ask: what characteristics of $\mathbf{S}_{row}$ enable better representation learning than $\mathbf{S}_{sym}$ *in GCNs?* Another behavior contrasting with standard deep networks is that *linear GCNs perform on par or even better than non-linear GCNs*, as demonstrated in Wu et al. (2019). While standard neural networks with non-linear activations are proven to be universal function approximators, hence an essential component in a network, this behavior of GCNs is surprising.

Figure 1: GCN performance on Cora dataset (accuracy of class prediction (%) vs. depth of GCN, for linear and ReLU GCNs with $\mathbf{S}_{row}$ and $\mathbf{S}_{sym}$).
Rigorous theoretical analysis is particularly challenging in GCNs compared to standard neural networks because of the added complexity due to the graph convolution. Adding skip connections and non-linearity further increases the complexity of the analysis. To overcome these difficulties, we consider GCNs in the infinite-width limit, wherein the *Neural Tangent Kernel (NTK)* captures the network characteristics very well (Jacot et al., 2018). The infinite-width assumption is not restrictive for our analysis as the NTK model shows the same general trends as the trained GCN. Moreover, the NTK enables the analysis to be parameter-free and thus eliminates additional complexity induced, for example, by optimization. Through the lens of the NTK, we study the impact of different graph convolutions under a random graph model: the Degree Corrected Stochastic Block Model (DC-SBM) (Karrer & Newman, 2011). The node degree heterogeneity induced in the DC-SBM allows us to analyze the effect of different types of normalization of the adjacency matrix, thus revealing the characteristic difference between $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$. Additionally, this model enables analysis of graphs that have *homophilic, heterophilic and core-periphery structures*. In this paper, we present a formal approach to analyze GCNs and, specifically, *the effect of activations, the representation power of different graph convolutions, the influence of depth and the role of skip connections*. This is a significant step toward understanding GCNs as it enables more informed network design choices like the convolution, depth and activations, as well as the development of competitive methods based on grounded theoretical reasoning rather than heuristics. Contributions. We provide a rigorous theoretical analysis of the discussed empirical observations in GCNs under the DC-SBM distribution using the graph NTK, leading to the following contributions. (i) In Sections 2–3, we present the NTK for GCN in the infinite-width limit in the node classification setting and our general framework of analysis, respectively. (ii) In Section 4, we derive the NTK under DC-SBM and show that linear GCNs capture the class structure similarly to ReLU GCNs (or slightly better) and, hence, linear GCNs perform as well as ReLU GCNs. For convenience, we restrict the subsequent analysis to linear GCNs. (iii) In Section 5, we show that for both homophilic and heterophilic graphs, row normalization preserves the class structure better, but is not useful in core-periphery models. We also derive that there is over-smoothing in vanilla GCN since the class separability decreases with depth. (iv) In Section 6, we leverage the power of the NTK to analyze different skip connections (Kipf & Welling, 2017; Chen et al., 2020). We derive the corresponding NTKs and show that skip connections retain class information even at infinite depth, along with numerical validation. Throughout the paper we illustrate the results numerically on planted models and validate the theoretical results on the real dataset *Cora* in Section 7 and *Citeseer* in Appendix C.5, and conclude in Section 8 with a discussion of the impact of the results and related work. We provide all proofs, experimental details and more experiments in the appendix. Notations. We represent matrices and vectors by boldfaced uppercase and lowercase letters, respectively, the matrix Hadamard (entry-wise) product by $\odot$ and the scalar product by $\langle \cdot, \cdot \rangle$. $\mathbf{M}^{\odot k}$ denotes the Hadamard product of $\mathbf{M}$ with itself repeated $k$ times. We use $\dot{\sigma}(\cdot)$ for the derivative of a function $\sigma(\cdot)$, $\mathbb{E}[\cdot]$
for expectation, and $[d] = \{1, 2, \ldots, d\}$. ## 2 Neural Tangent Kernel For Graph Convolutional Network Before going into a detailed analysis of graph convolutions, we provide a brief background on the Neural Tangent Kernel (NTK) and derive its formulation in the context of node-level prediction using infinitely-wide GCNs. Jacot et al. (2018); Arora et al. (2019); Yang (2019) show that the behavior and generalization properties of randomly initialized wide neural networks trained by gradient descent with infinitesimally small learning rate are equivalent to a kernel machine. Furthermore, Jacot et al. (2018) also shows that the change in the kernel during training decreases as the network width increases, and hence, asymptotically, one can represent an infinitely wide neural network by a deterministic NTK, defined by the gradient of the network with respect to its parameters as $$\mathbf{\Theta}(\mathbf{x},\mathbf{x}^{\prime}):=\operatorname*{\mathbb{E}}_{\mathbf{W}\sim{\mathcal{N}}(\mathbf{0},\mathbf{I})}\left[\left\langle{\frac{\partial F(\mathbf{W},\mathbf{x})}{\partial\mathbf{W}}},{\frac{\partial F(\mathbf{W},\mathbf{x}^{\prime})}{\partial\mathbf{W}}}\right\rangle\right].\tag{1}$$ Here $F(\mathbf{W}, \mathbf{x})$ represents the output of the network at data point $\mathbf{x}$ parameterized by $\mathbf{W}$, and the expectation is with respect to $\mathbf{W}$, where all the parameters of the network are randomly sampled from the standard Gaussian distribution $\mathcal{N}(0, 1)$. Although the 'infinite width' assumption is too strong to model real (finite width) neural networks, and the absolute performance may not exactly match, the empirical trends of the NTK match the corresponding network counterpart, allowing us to draw insightful conclusions. This trade-off is worth considering as this allows the analysis of over-parameterized neural networks without having to consider hyper-parameter tuning and training. Formal GCN Setup and Graph NTK. We present the formal setup of GCN and derive the corresponding NTK, using which we analyze different graph convolutions, skip connections and activations. Given a graph with $n$ nodes and a set of node features $\{\mathbf{x}_i\}_{i=1}^{n} \subset \mathbb{R}^f$, we may assume without loss of generality that the set of observed labels $\{\mathbf{y}_i\}_{i=1}^{m}$ corresponds to the first $m$ nodes. We consider $K$ classes, thus $\mathbf{y}_i \in \{0, 1\}^K$, and the goal is to predict the $n - m$ unknown labels $\{\mathbf{y}_i\}_{i=m+1}^{n}$. We represent the observed labels of $m$ nodes as $\mathbf{Y} \in \{0, 1\}^{m \times K}$, and the node features as $\mathbf{X} \in \mathbb{R}^{n \times f}$ with the assumption that the entire $\mathbf{X}$ is available during training. We define $\mathbf{S} \in \mathbb{R}^{n \times n}$ to be the graph convolution operator using the adjacency matrix $\mathbf{A}$ and the degree matrix $\mathbf{D}$. The GCN of depth $d$ is given by $$F_{\mathbf{W}}(\mathbf{X},\mathbf{S}):=\sqrt{\frac{c_{\sigma}}{h_{d}}}\mathbf{S}\sigma\left(\ldots\sigma\left(\sqrt{\frac{c_{\sigma}}{h_{1}}}\mathbf{S}\sigma\left(\mathbf{S}\mathbf{X}\mathbf{W}_{1}\right)\mathbf{W}_{2}\right)\ldots\right)\mathbf{W}_{d+1}\tag{2}$$ where $\mathbf{W} := \{\mathbf{W}_i \in \mathbb{R}^{h_{i-1} \times h_i}\}_{i=1}^{d+1}$ is the set of learnable weight matrices with $h_0 = f$ and $h_{d+1} = K$, $h_i$ is the size of layer $i \in [d]$ and $\sigma : \mathbb{R} \to \mathbb{R}$ is the point-wise activation function, where $\sigma(x) := x$ for linear and $\sigma(x) := \max(0, x)$ for ReLU activations. Note that linear $\sigma(x)$ is the same as the Simplified GCN (Wu et al., 2019). We initialize all the weights to be i.i.d. standard Gaussian $\mathcal{N}(0, 1)$ and optimize them using gradient descent. We derive the NTK for the GCN in the infinite-width setting, that is, $h_1, \ldots, h_d \to \infty$.
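To make the finite-width model in (2) concrete, the following is a minimal numpy sketch of the forward pass; it is not the authors' released code, and the toy graph, layer sizes and the choice $c_\sigma = 2$ (the standard value for ReLU, since $\mathbb{E}[\max(0,u)^2] = 1/2$ for $u \sim \mathcal{N}(0,1)$) are illustrative assumptions.

```python
import numpy as np

def gcn_forward(X, S, weights, sigma=lambda z: np.maximum(z, 0.0), c_sigma=2.0):
    """Finite-width forward pass of the GCN in Eq. (2).

    X : (n, f) node features, S : (n, n) graph convolution operator,
    weights : list of d+1 weight matrices [W_1, ..., W_{d+1}], drawn i.i.d. N(0,1).
    """
    H = S @ X @ weights[0]                                   # innermost term: S X W_1
    for W in weights[1:]:
        h_prev = H.shape[1]                                  # width of the previous layer
        H = np.sqrt(c_sigma / h_prev) * S @ sigma(H) @ W     # sqrt(c_sigma/h_i) S sigma(.) W
    return H

# Toy usage: n=5 nodes, f=3 features, depth d=2, hidden width 64, K=2 classes.
rng = np.random.default_rng(0)
n, f, h, K = 5, 3, 64, 2
A = rng.integers(0, 2, size=(n, n)); A = np.triu(A, 1); A = A + A.T   # symmetric 0/1 adjacency
S = A / np.maximum(A.sum(axis=1, keepdims=True), 1)                   # S_row = D^{-1} A
X = rng.standard_normal((n, f))
weights = [rng.standard_normal(s) for s in [(f, h), (h, h), (h, K)]]  # i.i.d. N(0,1) init
print(gcn_forward(X, S, weights).shape)                               # (5, 2)
```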
While this setup is similar to Kipf & Welling (2017), it is important to note that we consider a linear output layer so that the NTK remains constant during training (Liu et al., 2020) and a normalization $\sqrt{c_\sigma/h_i}$ for layer $i$ to ensure that the input norm is approximately preserved, where $c_\sigma^{-1} = \mathbb{E}_{u \sim \mathcal{N}(0,1)}\left[(\sigma(u))^2\right]$ (similar to Du et al. (2019a)). The following theorem states the NTK between every pair of nodes, as an $n \times n$ matrix that can be computed at once. Theorem 1 (NTK for Vanilla GCN) *For the vanilla GCN defined in* (2), the NTK $\mathbf{\Theta}$ *at depth* $d$ is $$\mathbf{\Theta}^{(d)}=\sum_{k=1}^{d+1}\underbrace{\mathbf{S}\cdots\mathbf{S}}_{d+1-k\ \text{terms}}\left(\mathbf{\Sigma}_{k}\odot\dot{\mathbf{E}}_{k}\right)\mathbf{S}^{T}\odot\dot{\mathbf{E}}_{k+1}\mathbf{S}^{T}\odot\cdots\odot\dot{\mathbf{E}}_{d}\mathbf{S}^{T}.\tag{3}$$ Here $\mathbf{\Sigma}_k \in \mathbb{R}^{n \times n}$ is the co-variance between nodes of layer $k$, and is given by $\mathbf{\Sigma}_1 = \mathbf{S}\mathbf{X}\mathbf{X}^T\mathbf{S}^T$, $\mathbf{\Sigma}_k = \mathbf{S}\mathbf{E}_{k-1}\mathbf{S}^T$ with $\mathbf{E}_k = c_\sigma \mathbb{E}_{\mathbf{F} \sim \mathcal{N}(\mathbf{0}, \mathbf{\Sigma}_k)}\left[\sigma(\mathbf{F})\sigma(\mathbf{F})^T\right]$, $\dot{\mathbf{E}}_k = c_\sigma \mathbb{E}_{\mathbf{F} \sim \mathcal{N}(\mathbf{0}, \mathbf{\Sigma}_k)}\left[\dot{\sigma}(\mathbf{F})\dot{\sigma}(\mathbf{F})^T\right]$ and $\dot{\mathbf{E}}_{d+1} = \mathbf{1}_{n \times n}$. Comparison to Du et al. (2019b). While the NTK in (3) is similar to the graph NTK in Du et al. (2019b), the main difference is that the NTK in our case is computed for all pairs of nodes in a graph as we focus on semi-supervised node classification, whereas Du et al. (2019b) considers supervised graph classification where the input is many graphs and so the NTK is evaluated for all pairs of graphs. Moreover, the significant difference is in using the NTK to analytically characterize the influence of convolutions, non-linearity, depth and skip connections on the performance of GCN. ## 3 Theoretical Framework Of Our Analysis In this section we discuss the general framework of our analysis that enables substantiating different empirical observations in GCNs. We use the derived NTK in Theorem 1 for our analysis on various aspects of the GCN architecture and consider four different graph convolutions as defined in Definition 1, with Assumption 1 on the network. Definition 1 Symmetric degree normalized $\mathbf{S}_{sym} := \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$, row normalized $\mathbf{S}_{row} := \mathbf{D}^{-1}\mathbf{A}$, *column normalized* $\mathbf{S}_{col} := \mathbf{A}\mathbf{D}^{-1}$ and unnormalized $\mathbf{S}_{adj} := \frac{1}{n}\mathbf{A}$ *convolutions.* Assumption 1 (GCN with orthonormal features) *GCN in* (2) *is said to have orthonormal features if* $\mathbf{X}\mathbf{X}^T := \mathbf{I}_n$, where $\mathbf{I}_n$ *is the identity matrix of size* $n$. Remark on Assumption 1. The orthonormal features assumption eliminates the influence of the features and facilitates identification of the influence of different convolution operators clearly. Additionally, it helps in quantifying the exact interplay between the graph structure and different activation functions in the network. Nevertheless, the analysis including the features can be done using the *Contextual Stochastic Block Model* (Deshpande et al., 2018), resulting in similar theoretical conclusions as detailed in Appendix B.9. Besides, the evaluation of our theoretical results without this assumption on real datasets is in Section 7 and Appendix C.5, which substantiate our findings. While the NTK in (3) gives a precise characterization of the infinitely wide GCN, we cannot directly draw conclusions about the convolution operators or activation functions without further assumptions on the input graph. Therefore, we consider a planted random graph model as described below. Random Graph Model. We consider that the underlying graph is from the *Degree Corrected Stochastic Block Model* (DC-SBM) (Karrer & Newman, 2011) since it enables us to distinguish between $\mathbf{S}_{sym}$, $\mathbf{S}_{row}$, $\mathbf{S}_{col}$ and $\mathbf{S}_{adj}$ by allowing a non-uniform degree distribution on the nodes. The model is defined as follows: Consider a set of $n$ nodes divided into $K$ latent classes (or communities), $\mathcal{C}_i \in [1, K]$.
The DC-SBM model generates a random graph with $n$ nodes that has mutually independent edges with edge probabilities specified by the population adjacency matrix $\mathbf{M} = \mathbb{E}[\mathbf{A}] \in \mathbb{R}^{n \times n}$, where $$\mathbf{M}_{i j}={\begin{cases}p\pi_{i}\pi_{j}&{\mathrm{if~}}{\mathcal{C}}_{i}={\mathcal{C}}_{j}\\ q\pi_{i}\pi_{j}&{\mathrm{if~}}{\mathcal{C}}_{i}\neq{\mathcal{C}}_{j}\end{cases}}$$ with the parameters $p, q \in [0, 1]$ governing the edge probabilities inside and outside classes, and the degree correction $\pi_i \in [0, 1]\ \forall i \in [n]$ with $\sum_i \pi_i = cn$ for a positive $c$ that controls the graph sparsity. The constant $c$ should be in $\left[\frac{1}{\sqrt{n}}, 1\right]$ since the expected number of edges in this DC-SBM is $O\left((cn)^2\right)$ and is bounded by $\left[n, n^2\right]$. Note that we deviate from the original condition $\sum_i \pi_i = K$ in Karrer & Newman (2011) to ensure that the analysis even holds for dense graphs. One can easily verify that the analysis holds for $\sum_i \pi_i = K$ as well. We denote $\pi = (\pi_1, \ldots, \pi_n)$ for ease of representation. DC-SBM allows us to model different graphs: **Homophilic graphs:** $0 \leq q < p \leq 1$, **Heterophilic graphs:** $0 \leq p < q \leq 1$ and **Core-Periphery graphs:** $p = q$ (no assumption on class structure) and $\pi$ encodes core and periphery. It is evident that the NTK is a complex quantity and computing its expectation is challenging given the dependency of terms from the degree normalization in $\mathbf{S}$, its powers $\mathbf{S}^i$ and $\mathbf{S}\mathbf{S}^T$. To simplify our analysis, we make the following assumption on the DC-SBM, Assumption 2 (Population DC-SBM) *The graph has a weighted adjacency* $\mathbf{A} = \mathbf{M}$. Remark on Assumption 2. Assuming $\mathbf{A} = \mathbf{M}$ is equivalent to analyzing the DC-SBM in the expected setting, and it further enables the computation of an analytic expression for the population NTK instead of the expected NTK. Moreover, we empirically show that this analysis holds for the random DC-SBM setting as well in Figure 5. Furthermore, this also implies the addition of self-loops with probability $p$. Analysis Framework. We analyze the observations of different GCNs by deriving the population NTK for each model and compare the preservation of class information in the kernel. Note that the true class information in the graph is determined by the blocks of the underlying DC-SBM - formally by $p$ and $q$ and independent of the degree correction $\pi$. Consequently, we define the *class separability of the DC-SBM* as $r := \frac{p-q}{p+q}$. Hence, in order to capture the class information, the *kernel should ideally have a block structure that aligns with the one of the DC-SBM*. Therefore, we measure the *class separability of the kernel* as the average difference between in-class and out-of-class blocks. The best case is indeed when the class separability of the kernel is proportional (due to scale invariance of the kernel) to $p - q$ and independent of $\pi$. ## 4 Linear Activation Captures Class Information As Well As ReLU Activation While Kipf & Welling (2017) proposes ReLU GCNs, Wu et al. (2019) demonstrates that linear GCNs perform on par or even better than ReLU GCNs in a wide range of real-world datasets, seemingly going against the notion that non-linearity is essential in neural networks. To understand this behavior, we derive the population NTK under DC-SBM for linear and ReLU GCNs, and compare the class separability of the kernels (average in-class and out-of-class block difference). Since our objective is to compare linear and ReLU GCNs, we consider homogeneous degree correction $\pi$, that is, $\forall i,\ \pi_i := c$.
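As a small simulation sketch of this model (ours, not from the paper; all parameter values are illustrative), the following numpy snippet builds the population adjacency $\mathbf{M}$ of Assumption 2 and the four convolution operators of Definition 1 from it.

```python
import numpy as np

def dcsbm_population(p, q, pi, labels):
    """Population adjacency M = E[A] of the DC-SBM:
    M_ij = p*pi_i*pi_j if labels match, q*pi_i*pi_j otherwise."""
    same = labels[:, None] == labels[None, :]
    return np.where(same, p, q) * np.outer(pi, pi)

def convolutions(A):
    """The four graph convolutions of Definition 1, built from a (possibly weighted) A."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    D_inv, D_isqrt = np.diag(1.0 / deg), np.diag(1.0 / np.sqrt(deg))
    return {"sym": D_isqrt @ A @ D_isqrt,   # D^{-1/2} A D^{-1/2}
            "row": D_inv @ A,               # D^{-1} A
            "col": A @ D_inv,               # A D^{-1}
            "adj": A / n}                   # A / n

# Illustrative two-class model with heterogeneous degree corrections pi.
rng = np.random.default_rng(1)
n, p, q = 1000, 0.8, 0.1
labels = np.repeat([0, 1], n // 2)
pi = rng.uniform(0.2, 1.0, size=n)          # degree heterogeneity
M = dcsbm_population(p, q, pi, labels)      # Assumption 2: use A = M
S = convolutions(M)                         # dict of the four operators
```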
In this case of homogeneous degree correction, the population NTKs for symmetric, row and column normalized adjacencies are equivalent, and the unnormalized adjacency differs by a scaling that does not impact the block difference comparison. The following theorems state the population NTK for linear and ReLU GCNs of depth $d$ for normalized adjacency $\mathbf{S}$ and $K = 2$. The results hold for $K > 2$ as presented in Appendix B.3.5. Theorem 2 (Population NTK $\tilde{\mathbf{\Theta}}$ **for linear GCN)** *Let Assumption 1 and 2 hold,* $1[\cdot]$ *be the indicator function,* $K = 2$, $r := \frac{p-q}{p+q}$, $\delta_{ij} := (-1)^{1[\mathcal{C}_i \neq \mathcal{C}_j]}$ *and* $\forall i,\ \pi_i := c$. *Then* $\forall i, j$*, the population NTK for a linear GCN of depth* $d$, $\tilde{\mathbf{\Theta}}^{(d)}_{lin}$*, is* $$\left(\tilde{\mathbf{\Theta}}_{lin}^{(d)}\right)_{ij}=\frac{d+1}{n}\left(1+\delta_{ij}r^{2(d+1)}\right).$$ ![5_image_0.png](5_image_0.png) Figure 2: **Linear as good as ReLU activation. Left:** analytical plot of in-class and out-of-class block difference of the population NTK $\tilde{\mathbf{\Theta}}^{(d)}$ for a graph of size $n = 1000$, depths $d = \{1, 2, 4, 8\}$ and varying class separability $r$ of linear and ReLU GCNs (in log scale). **Right:** performance of trained linear and ReLU GCNs on *Cora* for $d = \{2, 4, 8\}$. Theorem 3 (Population NTK $\tilde{\mathbf{\Theta}}$ **for ReLU GCN)** *Let the assumptions of Theorem 2 hold and* $\kappa_0(x) := \frac{1}{\pi}\left(\pi - \arccos(x)\right)$, $\kappa_1(x) := \frac{1}{\pi}\left(x\left(\pi - \arccos(x)\right) + \sqrt{1-x^2}\right)$, $\Delta_1 := \frac{1-r^2}{1+r^2}$ *and* $\Delta_k := \frac{(1-r^2)+(1+r^2)\kappa_1(\Delta_{k-1})}{(1+r^2)+(1-r^2)\kappa_1(\Delta_{k-1})}$. *Furthermore,* $\Delta_k^n$ *and* $\Delta_k^d$ *denote the numerator and denominator of* $\Delta_k$*, respectively. Then* $\forall i, j$*, the population NTK for a ReLU GCN of depth* $d$, $\tilde{\mathbf{\Theta}}^{(d)}_{ReLU}$*, is computed using* (3) *with* $$\left(\mathbf{\Sigma}_{k}\right)_{ij}=\frac{1}{2^{k-1}n}\left(1[\delta_{ij}=1]\,\Delta_{k}^{d}+1[\delta_{ij}=-1]\,\Delta_{k}^{n}\right)\prod_{k^{\prime}=1}^{k-1}\Delta_{k^{\prime}}^{d}$$ $$\left(\mathbf{E}_{k}\right)_{ij}=\frac{1}{2^{k-1}n}\left(\kappa_{1}\left(\Delta_{k}\right)\right)^{1\left[\delta_{ij}=-1\right]}\prod_{k^{\prime}=1}^{k}\Delta_{k^{\prime}}^{d}\quad;\quad\left(\dot{\mathbf{E}}_{k}\right)_{ij}=\left(\kappa_{0}\left(\Delta_{k}\right)\right)^{1\left[\delta_{ij}=-1\right]}.$$ Comparison of Linear and ReLU GCNs. The left of Figure 2 shows the analytic in-class and out-of-class block difference $\tilde{\mathbf{\Theta}}^{(d)}_{\mathcal{C}_i=\mathcal{C}_j} - \tilde{\mathbf{\Theta}}^{(d)}_{\mathcal{C}_i\neq\mathcal{C}_j}$ of the population NTKs of linear and ReLU GCNs with input graph size $n = 1000$ for different depths $d$ and class separability $r$. Given the class separability $r$ is large enough, theoretically *linear GCN preserves the class information as well as or slightly better than the ReLU GCN*. Particularly for $d = 1$, the difference is $O\left(\frac{r^2}{n}\right)$ as shown in Appendix B.8. With depth, the difference persists, showing that the effect of over-smoothing is stronger in ReLU than linear GCN; however, larger depth proves to be detrimental for GCN as discussed in later sections. As a validation, we train linear and ReLU GCNs of depths $\{2, 4, 8\}$ on the *Cora* dataset for both the popular convolutions $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$, and observe on-par performance as shown in the right plot of Figure 2. ## 5 Convolution Operator $\mathbf{S}_{row}$ **Preserves Class Information** In order to analyze the representation power of different graph convolutions $\mathbf{S}$, we derive the population NTKs under DC-SBM with non-homogeneous degree correction $\pi$ to distinguish the operators. We restrict our analysis to linear GCNs for convenience. In the following theorem, we state the population NTKs for graph convolutions $\mathbf{S}_{sym}$, $\mathbf{S}_{row}$, $\mathbf{S}_{col}$ and $\mathbf{S}_{adj}$ for $K = 2$ with Assumption 1 and 2. The result extends to $K > 2$ (Appendix B.3.5).
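Before turning to the four convolutions, here is a small numerical sketch (ours; constants illustrative) of the linear-vs-ReLU comparison above: it runs the covariance recursion of Theorem 1 on a population SBM with homogeneous $\pi$ and $\mathbf{X}\mathbf{X}^T = \mathbf{I}_n$, using the standard arc-cosine closed forms of $\mathbf{E}_k$ and $\dot{\mathbf{E}}_k$ for ReLU (with $c_\sigma = 2$, these are exactly the $\kappa_0$, $\kappa_1$ of Theorem 3), and compares the in-class/out-of-class kernel gap.

```python
import numpy as np

def kappa0(x):
    return (np.pi - np.arccos(x)) / np.pi

def kappa1(x):
    return (x * (np.pi - np.arccos(x)) + np.sqrt(1.0 - x**2)) / np.pi

def gcn_ntk(S, XXT, d, relu=True):
    """NTK of Theorem 1 via the recursion
    Theta_1 = Sigma_1, Theta_{k+1} = S (Theta_k . Edot_k) S^T + Sigma_{k+1}.
    For linear GCN, E_k = Sigma_k and Edot_k is all-ones (c_sigma = 1)."""
    Sigma = S @ XXT @ S.T
    Theta = Sigma.copy()
    for _ in range(d):
        if relu:  # arc-cosine Gaussian expectations with c_sigma = 2
            sd = np.sqrt(np.diag(Sigma))
            rho = np.clip(Sigma / np.outer(sd, sd), -1.0, 1.0)
            E, Edot = np.outer(sd, sd) * kappa1(rho), kappa0(rho)
        else:
            E, Edot = Sigma, np.ones_like(Sigma)
        Sigma = S @ E @ S.T
        Theta = S @ (Theta * Edot) @ S.T + Sigma
    return Theta

# Population SBM with homogeneous pi_i = c (the setting of Theorems 2-3).
n, p, q, c = 400, 0.8, 0.2, 0.5
labels = np.repeat([0, 1], n // 2)
same = labels[:, None] == labels[None, :]
M = np.where(same, p, q) * c**2                      # population adjacency, A = M
S = M / np.sqrt(np.outer(M.sum(1), M.sum(1)))        # S_sym = D^{-1/2} M D^{-1/2}
for relu in (False, True):
    T = gcn_ntk(S, np.eye(n), d=2, relu=relu)
    print("relu" if relu else "linear",
          T[same].mean() - T[~same].mean())          # in-class minus out-of-class gap
```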
Theorem 4 (Population NTKs $\tilde{\mathbf{\Theta}}$ and their class separability $\zeta$ **for the four graph convolutions** $\mathbf{S}$) *Let Assumption 1 and 2 hold,* $K = 2$ *and* $r := \frac{p-q}{p+q}$, $\delta_{ij} := (-1)^{1[\mathcal{C}_i \neq \mathcal{C}_j]}$. $\pi$ *is chosen such that* $\sum_{i=1}^n \pi_i 1[\mathcal{C}_i = k] = \frac{cn}{K}$, $\sum_{i=1}^n \sqrt{\pi_i}\, 1[\mathcal{C}_i = k] = \tau\ \forall k$ *and* $\sum_{i=1}^n \pi_i^2\, 1[\mathcal{C}_i = k] = \gamma\ \forall k$*, where* $\tau$ *and* $\gamma$ *are constants. Then* $\forall i, j$*, the population NTKs* $\tilde{\mathbf{\Theta}}_{sym}$, $\tilde{\mathbf{\Theta}}_{row}$, $\tilde{\mathbf{\Theta}}_{col}$ *and* $\tilde{\mathbf{\Theta}}_{adj}$ *and the class separabilities of the population NTKs* $\zeta^{(d)}_{sym}$, $\zeta^{(d)}_{row}$, $\zeta^{(d)}_{col}$ *and* $\zeta^{(d)}_{adj}$ *of depth* $d$ *for* $\mathbf{S} = \mathbf{S}_{sym}, \mathbf{S}_{row}, \mathbf{S}_{col}$ *and* $\mathbf{S}_{adj}$*, respectively, are,* $$\left(\tilde{\mathbf{\Theta}}^{(d)}_{sym}\right)_{ij}=(d+1)\left(1+\delta_{ij}r^{2d+2}\right)\frac{\sqrt{\pi_i\pi_j}}{cn}\ ;\quad \zeta^{(d)}_{sym}=\frac{16\tau^2(d+1)}{n^2(cn)}\,r^{2d+2}$$ $$\left(\tilde{\mathbf{\Theta}}^{(d)}_{row}\right)_{ij}=(d+1)\left(1+\delta_{ij}r^{2d+2}\right)\frac{2\gamma}{(cn)^2}\ ;\quad \zeta^{(d)}_{row}=\frac{8\gamma(d+1)}{(cn)^2}\,r^{2d+2}$$ $$\left(\tilde{\mathbf{\Theta}}^{(d)}_{col}\right)_{ij}=(d+1)\left(1+\delta_{ij}r^{2d+2}\right)\frac{n\pi_i\pi_j}{(cn)^2}\ ;\quad \zeta^{(d)}_{col}=\frac{4(d+1)}{n}\,r^{2d+2}$$ $$\left(\tilde{\mathbf{\Theta}}^{(d)}_{adj}\right)_{ij}=(d+1)\,\pi_i\pi_j\,\frac{\gamma^{2(d+1)-1}}{n^{2d+2}}\left(1[\delta_{ij}=1]\;2\sum_{l=0}^{d}\binom{2(d+1)}{2l}p^{2(d+1)-2l}q^{2l}\ +\ 1[\delta_{ij}=-1]\;2\sum_{l=0}^{d-1}\binom{2(d+1)}{2l+1}p^{2(d+1)-2l-1}q^{2l+1}\right);$$ $$\zeta^{(d)}_{adj}=(d+1)\,c^2\,\frac{\gamma^{2(d+1)-1}}{n^{2d+2}}\,(p-q)^{2d+2}.$$ Note that the three assumptions on $\pi$ are only to express the kernel in a simplified, easy to comprehend format. It is derived without the assumptions on $\pi$ in Appendix B.3. Furthermore, the numerical validation of our result in Section 5.2 is without both these assumptions. Comparison of graph convolutions. The population NTKs $\tilde{\mathbf{\Theta}}^{(d)}$ of depth $d$ in Theorem 4 describe the information that the kernel has after $d$ convolutions with $\mathbf{S}$. To classify the nodes perfectly, the kernels should retain the class information of the nodes according to the underlying DC-SBM. That is, the average in-class and out-of-class block difference of the population NTKs (class separability of the kernel) is proportional to $p - q$ and independent of $\pi$. On this basis, only $\tilde{\mathbf{\Theta}}_{row}$ exhibits a block structure unaffected by the degree correction $\pi$, and the average block difference is determined by $r^2$ and $d$, making $\mathbf{S}_{row}$ preferable over $\mathbf{S}_{sym}$, $\mathbf{S}_{adj}$ and $\mathbf{S}_{col}$. On the other hand, $\tilde{\mathbf{\Theta}}_{sym}$, $\tilde{\mathbf{\Theta}}_{col}$ and $\tilde{\mathbf{\Theta}}_{adj}$ are influenced by the degree correction $\pi$, which obscures the class information especially with depth. Although $\tilde{\mathbf{\Theta}}_{sym}$ and $\tilde{\mathbf{\Theta}}_{col}$ seem similar, the influence of $\pi$ for $\tilde{\mathbf{\Theta}}_{col}$ is $O(\pi_i^2)$, which is stronger compared to $O(\pi_i)$ for $\tilde{\mathbf{\Theta}}_{sym}$, making it less desirable than $\mathbf{S}_{sym}$. As a result, the preference order from the theory is $\tilde{\mathbf{\Theta}}_{row} \succ \tilde{\mathbf{\Theta}}_{sym} \succ \tilde{\mathbf{\Theta}}_{col} \succ \tilde{\mathbf{\Theta}}_{adj}$. ## 5.1 Impact Of Depth In Vanilla GCN Given that $r := \frac{p-q}{p+q} < 1$, Theorem 4 shows that the difference between in-class and out-of-class blocks decreases with depth monotonically, which in turn leads to a decrease in performance with depth, therefore explaining the observation in Figure 1. Corollary 1 characterizes the impact of depth as $d \to \infty$. Corollary 1 (Class separability of population NTK $\zeta^{(\infty)}$ as $d \to \infty$) *From Theorem 4, the class separability of the population NTKs of the four different convolutions for fixed* $n$ *and as* $d \to \infty$ *converges to* 0. Corollary 1 presents the class separability of the population NTKs for fixed $n$ and $d \to \infty$ for all the four convolutions $\mathbf{S}_{sym}$, $\mathbf{S}_{row}$, $\mathbf{S}_{col}$ and $\mathbf{S}_{adj}$, showing that a very deep GCN has zero class information. From this we also infer that, as $d \to \infty$, the population NTKs converge to a constant kernel, and thus a zero average in-class and out-of-class block difference, for all the convolutions. Therefore, deeper GCNs have zero class information for any choice of convolution operator $\mathbf{S}$. The class separability of the population kernels at depth $d$ for $\mathbf{S}_{sym}$, $\mathbf{S}_{row}$ and $\mathbf{S}_{col}$ is $O\left(\frac{d\,r^{2d}}{n}\right)$ since $\tau$ and $\gamma$ are $O(n)$. Therefore, it shows that *the class separation decreases* at an exponential rate in $d$.
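A small numeric sketch of this decay (ours; parameter values illustrative): for the linear GCN with $\mathbf{X}\mathbf{X}^T = \mathbf{I}_n$, the recursion of Theorem 1 collapses to $\mathbf{\Theta}^{(d)} = (d+1)\,\mathbf{S}^{d+1}\left(\mathbf{S}^{d+1}\right)^T$, so one can directly measure the kernel class separability over depth for the four convolutions on a population DC-SBM.

```python
import numpy as np

def block_separability(K_mat, labels):
    """Average in-class minus out-of-class entry of a kernel matrix."""
    same = labels[:, None] == labels[None, :]
    return K_mat[same].mean() - K_mat[~same].mean()

rng = np.random.default_rng(2)
n, p, q = 500, 0.8, 0.2
labels = np.repeat([0, 1], n // 2)
pi = rng.uniform(0.3, 1.0, size=n)                  # heterogeneous degree corrections
same = labels[:, None] == labels[None, :]
M = np.where(same, p, q) * np.outer(pi, pi)         # population adjacency (Assumption 2)
deg = M.sum(axis=1)
S_ops = {"sym": M / np.sqrt(np.outer(deg, deg)),
         "row": M / deg[:, None],
         "col": M / deg[None, :],
         "adj": M / n}

# Linear GCN with XX^T = I_n: Theta^(d) = (d+1) S^{d+1} (S^{d+1})^T.
for name, S in S_ops.items():
    zetas = []
    for d in [1, 2, 4, 8]:
        Sd = np.linalg.matrix_power(S, d + 1)
        theta = (d + 1) * Sd @ Sd.T
        zetas.append(block_separability(theta, labels))
    print(name, np.round(zetas, 6))   # separability decays exponentially with depth
```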
This exponential decay explains the performance degradation of GCN with depth. To further understand the impact of depth, we plot the average in-class and out-of-class block difference for homophilic and heterophilic graphs using the theoretically derived population NTK $\tilde{\mathbf{\Theta}}^{(d)}$ for depths $[1, 10]$ and $n = 1000$ in a well-separated DC-SBM (row 2, column 1 of Figure 3 and column 4 of Figure 4, respectively). ![7_image_0.png](7_image_0.png) Figure 3: **Numerical validation of Theorem 4 using homophilic ($q < p$) DC-SBM** (Row 1, Column 1). Row 1, Columns 2–5 illustrate the exact NTKs of depth=2 and a graph of size $n = 1000$ sampled from the DC-SBM for $\mathbf{S}_{row}$, $\mathbf{S}_{sym}$, $\mathbf{S}_{col}$ and $\mathbf{S}_{adj}$. Row 2 shows the respective analytic population NTKs from Theorem 4. Row 2, column 1 shows the average gap between in-class and out-of-class blocks from theory, that is, the average of $\tilde{\mathbf{\Theta}}^{(d)}_{\mathcal{C}_i=\mathcal{C}_j} - \tilde{\mathbf{\Theta}}^{(d)}_{\mathcal{C}_i\neq\mathcal{C}_j}$. This validates that $\mathbf{S}_{row}$ preserves class information better than other convolutions. It clearly shows the exponential degradation of class separability with depth, and the gap goes to 0 for large depths in all the four convolutions. Additionally, the gap in $\tilde{\mathbf{\Theta}}^{(d)}_{row}$ is the highest, showing that the class information is better preserved, illustrating the strong representation power of $\mathbf{S}_{row}$. Therefore, *large depth is undesirable for all the convolutions in vanilla GCN, and the theory suggests $\mathbf{S}_{row}$ as the best choice for shallow GCN.* ## 5.2 Numerical Validation For Random Graphs Theorem 4 and Corollary 1 show that $\mathbf{S}_{row}$ has better representation power under Assumption 1 and 2, that is, for the linear GCN with orthonormal features and the population DC-SBM. We validate this on homophilous and heterophilous random graphs of size $n = 1000$ with equal-sized classes generated from the DC-SBM. Figure 3 illustrates the results for depth=2 in the homophily case, where the DC-SBM is presented in row 1 and column 1. We plot the NTKs of all the convolution operators computed from the sampled graph and the population NTKs as per the theory as heatmaps in rows 1 and 2, respectively. The heatmaps corresponding to the exact and the population NTKs clearly show that the class information for all the nodes is well preserved in $\mathbf{S}_{row}$, as there is a clearer block structure than in the other convolutions, in which each node is diffused unequally due to the degree correction. Among $\mathbf{S}_{sym}$, $\mathbf{S}_{col}$ and $\mathbf{S}_{adj}$, $\mathbf{S}_{sym}$ retains the class structure better, and $\mathbf{S}_{adj}$ has very small values (see the colorbar scale) and no clear structure. Thus, they exhibit the theoretically derived preference order. We plot both the exact and the population NTKs to show that the population NTKs are a good representative of the exact NTKs, especially for large graphs. We show this by plotting the norm of the relative kernel difference, $\left\|\frac{\tilde{\mathbf{\Theta}}^{(d)}-\mathbf{\Theta}^{(d)}}{\tilde{\mathbf{\Theta}}^{(d)}}\right\|_2$, with graph size $n$ for $d = 2$ in Figure 5. Figure 4 shows the analogous result for the heterophily DC-SBM. The experimental details are provided in Appendix C.3. ## 5.3 $\mathbf{S}_{sym}$ May Be Preferred Over $\mathbf{S}_{row}$ In Core-Periphery Networks (No Class Structure) While we showed that the graph convolution $\mathbf{S}_{row}$ preserves the underlying class structure, it is natural to wonder about the random graphs that have no communities ($p = q$). One such case is graphs with core-periphery structure, where the graph has core nodes that are highly interconnected and periphery nodes that are sparsely connected to the core and other periphery nodes.
Such a graph can be modeled using only the degree correction $\pi$ such that $\pi_j \ll \pi_i\ \forall j \in \textit{periphery},\ i \in \textit{core}$ (similar to Jia & Benson (2019)). Extending Theorem 4, we derive the following Corollary 2 and show that the convolution $\mathbf{S}_{sym}$ contains the graph information while $\mathbf{S}_{row}$ is a constant kernel. ![8_image_0.png](8_image_0.png) Figure 4: **Numerical validation of Theorem 4 using heterophilic ($p < q$) DC-SBM** (Column 1). Columns 2–3 illustrate the exact NTKs of depth=2 and a graph of size $n = 1000$ sampled from the DC-SBM for $\mathbf{S}_{row}$ and $\mathbf{S}_{sym}$. Column 4 shows the average gap between in-class and out-of-class blocks from theory. Figure 5: Norm of the relative kernel difference $\left\|\frac{\tilde{\mathbf{\Theta}}^{(2)}-\mathbf{\Theta}^{(2)}}{\tilde{\mathbf{\Theta}}^{(2)}}\right\|_2$ for depth $d = 2$ with graph size $n$. Corollary 2 (Population NTKs $\tilde{\mathbf{\Theta}}$ for $p = q$) *Let Assumption 1 and 2 hold,* $K = 2$ *and* $p = q$. *Furthermore,* $\pi$ *is chosen such that* $\sum_{i \in \textit{core}} \pi_i^2 = \lambda$ *and* $\sum_{i \in \textit{periphery}} \pi_i^2 = \mu$. *Then* $\forall i$ *and* $j$*, the population NTKs* $\tilde{\mathbf{\Theta}}_{sym}$ *and* $\tilde{\mathbf{\Theta}}_{row}$ *of depth* $d$ *for* $\mathbf{S} = \mathbf{S}_{sym}$ *and* $\mathbf{S}_{row}$*, respectively, are,* $$\left(\tilde{\mathbf{\Theta}}_{sym}^{(d)}\right)_{ij}=(d+1)\frac{\sqrt{\pi_{i}\pi_{j}}}{cn}\quad\text{and}\quad\left(\tilde{\mathbf{\Theta}}_{row}^{(d)}\right)_{ij}=(d+1)\frac{\lambda+\mu}{(cn)^{2}}.$$ From Corollary 2, it is evident that $\mathbf{S}_{sym}$ retains the graph information and hence could be preferred when there is no community structure. We validate it experimentally and discuss the results in Figure 18 of Appendix C.3. While $\mathbf{S}_{row}$ results in a constant kernel for core-periphery without community structure, it is important to note that when there exists a community structure and each community has core-periphery nodes, then $\mathbf{S}_{row}$ is still preferable over $\mathbf{S}_{sym}$ as it is simply a special case of homophilic networks. This is demonstrated in Figure 19 of Appendix C.3. ## 6 Skip Connections Retain Class Information Even At Infinite Depth Skip connections are the most common way to overcome the performance degradation with depth in GCNs, but little is known about the effectiveness of different skip connections and their interplay with the convolutions. While our focus is to understand the interplay with convolutions, we also include the impact of convolving with and without the feature information. Hence, we consider the following two variants: Skip-PC (pre-convolution), where the skip is added to the features before applying convolution (Kipf & Welling, 2017); and Skip-α, which gives importance to the features by adding them to each layer without convolving with $\mathbf{S}$ (Chen et al., 2020). To facilitate skip connections, we need to enforce a constant layer size, that is, $h_i = h_{i-1}$. Therefore, we transform the input layer using a random matrix $\mathbf{W}$ to $\mathbf{H}_0 := \mathbf{X}\mathbf{W}$ of size $n \times h$, where $\mathbf{W}_{ij} \sim \mathcal{N}(0, 1)$ and $h$ is the hidden layer size. Let $\mathbf{H}_i$ be the output of layer $i$. Definition 2 (Skip-PC) *In a Skip-PC (pre-convolution) network, the transformed input* $\mathbf{H}_0$ *is added to the hidden layers before applying the graph convolution* $\mathbf{S}$*, that is,* $\forall i \in [d]$, $\mathbf{H}_i := \sqrt{\frac{c_\sigma}{h}}\,\mathbf{S}\left(\mathbf{H}_{i-1} + \sigma_s\left(\mathbf{H}_0\right)\right)\mathbf{W}_i$*, where* $\sigma_s(\cdot)$ *can be linear or ReLU.* The Skip-PC definition deviates from Kipf & Welling (2017) in that we skip to the input layer instead of the previous layer. The following defines the skip connection similar to Chen et al. (2020).
Definition 3 (Skip-α) *Given an interpolation coefficient* $\alpha \in (0, 1)$*, a Skip-*$\alpha$ *network is defined such that the transformed input* $\mathbf{H}_0$ *and the hidden layer are interpolated linearly, that is,* $\mathbf{H}_i := \sqrt{\frac{c_\sigma}{h}}\left((1-\alpha)\,\mathbf{S}\mathbf{H}_{i-1} + \alpha\,\sigma_s\left(\mathbf{H}_0\right)\right)\mathbf{W}_i\ \forall i \in [d]$*, where* $\sigma_s(\cdot)$ *can be linear or ReLU.* ## 6.1 NTK For GCN With Skip Connections We derive NTKs for the skip connections - Skip-PC and Skip-α - by considering the hidden layer width $h \to \infty$. Both the NTKs maintain the form presented in Theorem 1 with the following changes to the co-variance matrices. Let $\tilde{\mathbf{E}}_0 = \mathbb{E}_{\mathbf{F} \sim \mathcal{N}(\mathbf{0}, \mathbf{\Sigma}_0)}\left[\sigma_s(\mathbf{F})\sigma_s(\mathbf{F})^T\right]$. Corollary 3 (NTK for Skip-PC) *The NTK for an infinitely wide Skip-PC network is as presented in Theorem 1 where* $\mathbf{E}_k$ *is defined as in the theorem, but* $\mathbf{\Sigma}_k$ *is defined as* $$\mathbf{\Sigma}_{0}=\mathbf{X}\mathbf{X}^{T},\quad\mathbf{\Sigma}_{1}=\mathbf{S}\tilde{\mathbf{E}}_{0}\mathbf{S}^{T}\quad\text{and}\quad\mathbf{\Sigma}_{k}=\mathbf{S}\mathbf{E}_{k-1}\mathbf{S}^{T}+\mathbf{\Sigma}_{1}.$$ Corollary 4 (NTK for Skip-α) *The NTK for an infinitely wide Skip-*$\alpha$ *network is as presented in Theorem 1 where* $\mathbf{E}_k$ *is defined as in the theorem, but* $\mathbf{\Sigma}_k$ *is defined with* $\mathbf{\Sigma}_0 = \mathbf{X}\mathbf{X}^T$, $\mathbf{\Sigma}_1 = (1-\alpha)^2\,\mathbf{S}\mathbf{E}_0\mathbf{S}^T + \alpha(1-\alpha)\left(\mathbf{S}\mathbf{E}_0 + \mathbf{E}_0\mathbf{S}^T\right) + \alpha^2\mathbf{E}_0$ *and* $\mathbf{\Sigma}_k = (1-\alpha)^2\,\mathbf{S}\mathbf{E}_{k-1}\mathbf{S}^T + \alpha^2\tilde{\mathbf{E}}_0$. ## 6.2 Impact Of Depth In GCNs With Skip Connection Similar to the previous section, we use the NTK for Skip-PC and Skip-α (Corollaries 3 and 4) and analyze the graph convolutions $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$ under the same considerations detailed in Section 5. Since $\mathbf{S}_{adj}$ and $\mathbf{S}_{col}$ are theoretically worse and not popular in practice, we do not consider them for the skip connection analysis. The linear orthonormal feature NTK, $\mathbf{\Theta}^{(d)}$, for depth $d$ is the same as $\mathbf{\Theta}^{(d)}_{lin}$ with changes to $\mathbf{\Sigma}_k$ as follows, $$\text{Skip-PC:}\ \mathbf{\Sigma}_{k}=\mathbf{S}^{k}\mathbf{S}^{k\,T}+\mathbf{S}\mathbf{S}^{T},\quad\text{and}$$ $$\text{Skip-}\alpha\text{:}\ \mathbf{\Sigma}_{k}=(1-\alpha)^{2k}\,\mathbf{S}^{k}\mathbf{S}^{k\,T}+\alpha\,(1-\alpha)^{2k-1}\,\mathbf{S}^{k-1}\left(\mathbf{S}+\mathbf{S}^{T}\right)\mathbf{S}^{(k-1)\,T}+\alpha^{2}\sum_{l=1}^{k-1}\left(1-\alpha\right)^{2l}\mathbf{S}^{l}\mathbf{S}^{l\,T}+\alpha^{2}\mathbf{I}_{n}.\tag{4}$$ We derive the population NTK $\tilde{\mathbf{\Theta}}^{(d)}$ and, for convenience, only state the result as $d \to \infty$ in the following theorems. Expressions for fixed $d$ are presented in Appendices B.5 and B.6. Theorem 5 (Class separability of population NTK for Skip-PC $\zeta^{(\infty)}_{PC}$ as $d \to \infty$) *Under the assumptions of Theorem 4,* $$\zeta_{PC,sym}^{(\infty)}=\frac{16\tau^{2}r^{2}}{n^{2}(cn)(1-r^{2})},\quad\text{and}\quad\zeta_{PC,row}^{(\infty)}=\frac{8\gamma r^{2}}{(cn)^{2}(1-r^{2})}.$$ Theorem 6 (Class separability of population NTK for Skip-α $\zeta^{(\infty)}_{\alpha}$ as $d \to \infty$) *Under the assumptions of Theorem 4,* $$\zeta^{(\infty)}_{\alpha,\text{sym}}=\frac{16r^{2}\alpha^{2}}{(cn)n^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1}{1-r^{2}}\right),\quad\text{and}\quad\zeta^{(\infty)}_{\alpha,\text{row}}=\frac{8\gamma\alpha^{2}}{(cn)^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1}{1-r^{2}}\right).\tag{5}$$ Theorems 5 and 6 present the class separability of the population NTKs of $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$ for Skip-PC and Skip-α, respectively. Similar to Theorem 4, the assumptions on $\pi$ in the above theorems are only to simplify the results. Note that $\mathbf{S}_{row}$ is better than $\mathbf{S}_{sym}$ in the case of skip connections as well, due to its independence of $\pi$, and the underlying block structures are well preserved by $\mathbf{S}_{row}$.
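To make the contrast with the vanilla covariances concrete, the following is a minimal numpy sketch (ours; graph parameters and $\alpha$ are illustrative) of the $\mathbf{\Sigma}_k$ expressions in (4) under Assumptions 1–2: it compares the in-class/out-of-class block separation of the vanilla, Skip-PC and Skip-α covariances as depth grows.

```python
import numpy as np

def separability(K_mat, same):
    return K_mat[same].mean() - K_mat[~same].mean()

rng = np.random.default_rng(3)
n, p, q, alpha = 500, 0.8, 0.2, 0.2
labels = np.repeat([0, 1], n // 2)
same = labels[:, None] == labels[None, :]
pi = rng.uniform(0.3, 1.0, size=n)
M = np.where(same, p, q) * np.outer(pi, pi)      # population DC-SBM adjacency
S = M / M.sum(axis=1, keepdims=True)             # S_row

for d in [2, 8, 32]:
    Sk = np.linalg.matrix_power(S, d)
    vanilla = Sk @ Sk.T                          # Sigma_d without skip: S^d S^{dT}
    skip_pc = Sk @ Sk.T + S @ S.T                # Eq. (4), Skip-PC
    skip_a = ((1 - alpha) ** (2 * d)) * Sk @ Sk.T + alpha**2 * np.eye(n)
    for l in range(1, d):                        # alpha^2 sum_l (1-a)^{2l} S^l S^{lT}
        Sl = np.linalg.matrix_power(S, l)
        skip_a += alpha**2 * ((1 - alpha) ** (2 * l)) * Sl @ Sl.T
    Sm = np.linalg.matrix_power(S, d - 1)
    skip_a += alpha * ((1 - alpha) ** (2 * d - 1)) * Sm @ (S + S.T) @ Sm.T
    print(d, [round(separability(K, same), 6) for K in (vanilla, skip_pc, skip_a)])
# The vanilla gap decays toward 0 with d, while both skip variants retain
# a non-vanishing separation, consistent with Theorems 5-6.
```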
The theorems show that the class separation in the kernel is *not zero* even at infinite depth for both Skip-PC and Skip-α. In fact, in the case of large $n$ and $d \to \infty$, it is $O\left(\frac{r^2}{n}\right)$ and $O\left(\frac{\alpha^2}{n\left(1-(1-\alpha)^2 r^2\right)}\right)$ for Skip-PC and Skip-α, respectively, since $\tau$ and $\gamma$ are $O(n)$. Furthermore, to understand the role of skip connections, we plot in Figure 6 the gap between in-class and out-of-class blocks at infinite depth for different values of the true class separability $r$ in small and large graph settings, for vanilla linear GCN, Skip-PC and Skip-α using Corollary 1 and Theorems 5–6, respectively. The plot clearly shows that the block difference is bounded away from 0 for both the skip connections in both the small and large $n$ cases given a reasonable true separation $r$, whereas the block difference in vanilla GCN is zero for both small and large $n$. Thus, this analytical plot shows that the class information is retained in skip connections even at infinite depth. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) ![10_image_2.png](10_image_2.png) Figure 6: **Skip connection retains class information even at infinite depth. Left:** average in-class and out-of-class block difference at $d = \infty$ for small and large $n$ and different true class separability $r$ (in log scale). **Heatmaps:** exact NTKs $\mathbf{\Theta}^{(8)}$ for $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$ for linear GCN and Skip-PC. ## 6.3 Numerical Validation For Random Graphs We validate our theoretical result using the same setup detailed in Section 5.2, and compute the exact NTKs for Skip-PC and Skip-α for both $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$. We show the result on homophilic graphs, but it extends equally to the heterophilic case. While $\mathbf{S}_{sym}$ has no class information for depth=8 in vanilla GCN, it is retained reasonably in Skip-PC (right of Figure 6, column 1). In the case of $\mathbf{S}_{row}$, we clearly observe the blocks in both cases, with a more prevalent gap in Skip-PC, illustrating our theoretical results (right of Figure 6, column 2). A similar observation is made for Skip-α, despite considering $\mathbf{X}\mathbf{X}^T = \mathbf{I}_n$, as the model interpolates with the features; this is discussed in Appendix C.3. Validation of the results for heterophilic graphs is also included in Appendix C.3. While both $\mathbf{S}_{sym}$ and $\mathbf{S}_{row}$ retain the class information at larger depths, we observe that the degree correction plays a significant role in $\mathbf{S}_{sym}$, as elucidated in our theoretical analysis. ## 7 Empirical Analysis On Real Data In this section, we explore how well the theoretical results translate to the real dataset *Cora* with features, that is, $\mathbf{X}\mathbf{X}^T \neq \mathbf{I}_n$ and $\mathbf{A} \neq \mathbf{M}$. We consider multi-class node classification for Cora ($K = 7$). The NTKs for linear and ReLU GCNs, and GCN with Skip-PC, are illustrated in Figure 7. Experimental details and additional results for Skip-α and *Citeseer* are in Appendices C.4 and C.5, respectively. We make the following observations from the experiments that validate the theory even in a much relaxed setting: (i) clear block structures show up in both GCN with and without skip connections for $\mathbf{S}_{row}$, thus illustrating that the class information is better retained by $\mathbf{S}_{row}$ than by $\mathbf{S}_{sym}$; (ii) linear and ReLU GCNs show similar class preservation qualitatively. Thus, although the theoretical result is based on DC-SBM with mild assumptions, the conclusions hold reasonably well in real settings as well. ![10_image_3.png](10_image_3.png) Figure 7: **Evaluation on Cora dataset.** Heatmaps show exact NTKs $\mathbf{\Theta}^{(8)}$ for linear, ReLU and Skip-PC GCNs for both symmetric and row normalized adjacency. ## 8 Discussion Related Work.
While GNNs are extensively used in practice, their understanding is limited, and the analysis is mostly restricted to empirical approaches (Bojchevski et al., 2018; Zhang et al., 2018; Ying et al., 2018; Wu et al., 2020). Beyond empirical methods, rigorous theoretical analyses using *learning-theoretic* bounds such as VC dimension (Scarselli et al., 2018), PAC-Bayes (Liao et al., 2021), Lipschitzness analysis (Tang & Liu, 2023), or sample complexity using graph topology sampling (Li et al., 2022) have been propounded. Rademacher complexity bounds (Garg et al., 2020; Esser et al., 2021) show that normalized graph convolution is beneficial, but those works do not provide insight into the influence of different normalizations on the GCN performance. Another possible tool is the NTK, using which interesting theoretical insights in deep neural networks are derived (e.g. Du et al. (2019a)). In the context of GNNs, Du et al. (2019b) derives the NTK in the supervised setting (each graph is a data instance to be classified) and empirically studies the NTK performance; however, it does not extend this to a theoretical analysis. Krishnagopal & Ruiz (2023) uses the graph NTK to study the convergence of large graphs. In contrast, we derive the NTK in the *semi-supervised* setting for GCN with and without skip connections, and use it to further theoretically analyze the influence of different convolutions with respect to over-smoothing. Theoretical studies (Oono & Suzuki, 2019; Cai & Wang, 2020) show that over-smoothing causes the expressive power of GNNs to decrease exponentially with depth, while Keriven (2022) proves that in linear GNNs a finite number of convolutions improves learning before over-smoothing kicks in. On the other hand, Cong et al. (2021) argues that over-smoothing does not necessarily happen in practice, and a deeper model is provably expressive. While over-smoothing and the role of skip connections in GNNs are theoretically analyzed in some works (Esser et al., 2021), the influence of the different convolutions that cause over-smoothing and their interplay with skip connections is not studied. For a comprehensive theory survey, see Jegelka (2022). Conclusion. The performance of GCNs is significantly influenced by the architecture choices, but existing learning-theoretic bounds for GCNs do not provide insights specifically into the representation power of the graph convolutions and the influence of activation functions. We present an NTK-based analysis that characterizes different convolutions, thereby proving the strong representation power of $\mathbf{S}_{row}$ in community detection and explaining why $\mathbf{S}_{row}$, and to some extent $\mathbf{S}_{sym}$, are preferred in practice (Theorem 4). In contrast to applying spectral analysis of the convolutions to explain over-smoothing, our explicit characterization of the network provides a more exact quantification of the impact of over-smoothing in deep GCNs (Corollary 1, see Figures 3 and 4). In addition, the NTKs for GCNs with skip connections enable a precise understanding of the role of skip connections in countering the over-smoothing effect (Theorems 5–6). Another value addition of our analysis is the exact quantification of the role of non-linearity (Theorem 3). While the DC-SBM assumption may seem restrictive, it is important to note that the impact of depth is derived for different convolutions exactly, therefore making our result stronger and more precise than a general comment on the effect of over-smoothing resulting from these convolutions.
Moreover, the experiments on *Cora* and *Citeseer* show that the general trends of our theoretical results extend beyond the DC-SBM, although formally characterizing such behavior is difficult without model assumptions.

Possible extensions. *(i) Theoretical Analysis.* Considering a random A would be more precise, but the concentration inequalities for the NTK are more complex than those for graph Laplacians. We note that our analysis could be extended to feature information (XXT ̸= In) using the Contextual Stochastic Block Model, as discussed in Appendix B.9; this would require a more involved analysis but could provide further insights into GCNs, such as the interplay between graph and feature information. *(ii) Graph Models.* The present NTK-based setup allows for the analysis of different graphs having homophilic, heterophilic and core-periphery structures, and can be extended to other graph-generating processes. *(iii) GCN Models.* Furthermore, the general formulation of the NTK for vanilla GCNs (Theorem 1) and for GCNs with skip connections (Corollaries 3–4) can be used to analyze new convolutions, such as topological-structure-preserving convolutions; to obtain a rigorous understanding of GCNs by deriving statistical consistency results or information-theoretic limits; and to theoretically analyze other graph learning problems, such as link prediction. *(iv) Analysis.* We consider class separability as the main measure for comparing different NTKs. However, while we empirically observe that this measure captures the main overall trends in MSE and accuracy, there are also cases where it does not capture all the trends. We therefore leave further ways of characterizing the connection between changes in the NTK and the performance of the neural network for future study.

## 9 Acknowledgment

This work has been supported by projects from the German Research Foundation (Research Training Group GRK 2428 and Priority Program SPP 2298, project GH 257/2-1).

## References

Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In *Conference on Neural Information Processing Systems*, 2019.

Alberto Bietti and Julien Mairal. On the inductive bias of neural tangent kernels. In *Conference on Neural Information Processing Systems*, volume 32, pp. 12873–12884, 2019.

Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, and Stephan Günnemann. Netgan: Generating graphs via random walks. In *International Conference on Machine Learning*, 2018.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and deep locally connected networks on graphs. In *International Conference on Learning Representations*, 2014.

Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. *arXiv preprint arXiv:2006.13318*, 2020.

Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In *International Conference on Machine Learning*, pp. 1725–1735. PMLR, 2020.

Minmin Chen, Jeffrey Pennington, and Samuel Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. In *International Conference on Machine Learning*, pp. 873–882. PMLR, 2018a.

Zhengdao Chen, Lisha Li, and Joan Bruna. Supervised community detection with line graph neural networks. In *International Conference on Learning Representations*, 2018b.

Weilin Cong, Morteza Ramezani, and Mehrdad Mahdavi.
On provable benefits of depth in training graph convolutional networks. *Advances in Neural Information Processing Systems*, 34:9936–9949, 2021. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In *Conference on Neural Information Processing Systems*, 2016. Yash Deshpande, Subhabrata Sen, Andrea Montanari, and Elchanan Mossel. Contextual stochastic block models. *Advances in Neural Information Processing Systems*, 31, 2018. Pedro Domingos. Every model learned by gradient descent is approximately a kernel machine. arXiv preprint arXiv:2012.00152, 2020. Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In *International Conference on Machine Learning*, pp. 1675–1685. PMLR, 2019a. Simon S Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In *Conference on Neural* Information Processing Systems, 2019b. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán AspuruGuzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. Neural Information Processing Systems, 28, 2015. Pascal Mattia Esser, Leena C. Vankadara, and Debarghya Ghoshdastidar. Learning theory can (sometimes) explain generalisation in graph neural networks. In *Proceedings of the 34th International Conference on* Neural Information Processing Systems, 2021. Santo Fortunato and Darko Hric. Community detection in networks: A user guide. *Physics reports*, 659: 1–44, 2016. Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In *International Conference on Machine Learning*, pp. 3419–3430. PMLR, 2020. Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S Schoenholz, Ed H Chi, and Jeffrey Pennington. Dynamical isometry and a mean field theory of lstms and grus. *arXiv preprint arXiv:1901.08987*, 2019. William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Conference on Neural Information Processing Systems, pp. 1025–1035, 2017. Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. On the impact of the activation function on deep neural networks training. In *International conference on machine learning*, pp. 2672–2680. PMLR, 2019. Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: convergence and generalization in neural networks. In *Conference on Neural Information Processing Systems*, pp. 8580–8589, 2018. Stefanie Jegelka. Theory of graph neural networks: Representation and learning, 2022. Junteng Jia and Austin R Benson. Random spatial network models for core-periphery structure. In *Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining*, pp. 366–374, 2019. Brian Karrer and Mark EJ Newman. Stochastic blockmodels and community structure in networks. *Physical* review E, 83(1):016107, 2011. Tatsuro Kawamoto, Masashi Tsubaki, and Tomoyuki Obuchi. Mean-field theory of graph neural networks in graph partitioning. *Advances in Neural Information Processing Systems*, 31, 2018. Nicolas Keriven. Not too little, not too much: a theoretical analysis of graph (over) smoothing. *arXiv* preprint arXiv:2205.12156, 2022. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. 
In International Conference on Learning Representations (ICLR), 2017. Sanjukta Krishnagopal and Luana Ruiz. Graph neural tangent kernel: Convergence on large graphs. arXiv preprint arXiv:2301.10808, 2023. Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha SohlDickstein. Deep neural networks as gaussian processes. In *International Conference on Learning Representations*, 2018. Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. Generalization guarantee of training graph convolutional networks with graph topology sampling. In International Conference on Machine Learning, pp. 13014–13051. PMLR, 2022. Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semisupervised learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In *International Conference on Learning Representations*, 2016. Renjie Liao, Raquel Urtasun, and Richard Zemel. A pac-bayesian approach to generalization bounds for graph neural networks. In *International Conference on Learning Representations*, 2021. Chaoyue Liu, Libin Zhu, and Misha Belkin. On the linearity of large non-linear models: when and why the tangent kernel is constant. In *Conference on Neural Information Processing Systems*, volume 33, pp. 15954–15964, 2020. Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In *International Conference on Learning Representations*, 2019. Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in neural information processing systems, 29, 2016. Rahul Ragesh, Sundararajan Sellamanickam, Arun Iyer, Ramakrishna Bairi, and Vijay Lingam. Hetegcn: Heterogeneous graph convolutional networks for text classification. In *Proceedings of the 14th ACM International Conference on Web Search and Data Mining*, pp. 860–868, 2021. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE transactions on neural networks*, 20(1):61–80, 2008. Franco Scarselli, Ah Chung Tsoi, and Markus Hagenbuchner. The vapnik–chervonenkis dimension of graph and recursive neural networks. *Neural Networks*, 108:248 - 259, 2018. Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In *International Conference on Learning Representations*, 2017. URL https://openreview.net/ forum?id=H1W1UN9gg. Huayi Tang and Yong Liu. Towards understanding the generalization of graph neural networks. arXiv preprint arXiv:2305.08048, 2023. Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263, 2017. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *stat*, 1050:4, 2018. Hongwei Wang and Jure Leskovec. Unifying graph convolutional neural networks and label propagation. arXiv preprint arXiv:2002.06755, 2020. Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, and Felix Wu. Attack graph convolutional networks by adding fake nodes. In Proceedings of Woodstock'18: ACM Symposium on Neural Gaze Detection, Woodstock, NY, 2018. 
Oliver Wieder, Stefan Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, and Thierry Langer. A compact review of molecular property prediction with graph neural networks. *Drug Discovery Today: Technologies*, 37:1–12, 2020.

Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In *International Conference on Machine Learning*, pp. 6861–6871. PMLR, 2019.

Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 2020.

Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In *International Conference on Machine Learning*, pp. 5393–5402. PMLR, 2018.

Lechao Xiao, Jeffrey Pennington, and Samuel Schoenholz. Disentangling trainability and generalization in deep neural networks. In *International Conference on Machine Learning*, pp. 10462–10472. PMLR, 2020.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In *International Conference on Learning Representations*, 2019.

Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. *Advances in Neural Information Processing Systems*, 30, 2017.

Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. *arXiv preprint arXiv:1902.04760*, 2019.

Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In *Advances in Neural Information Processing Systems*, 2018.

Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In *AAAI Conference on Artificial Intelligence*, 2018.

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. *AI Open*, 1:57–81, 2020.

## A Other Related Works

In contrast to the infinite-width analysis, mean field analysis of finitely wide neural networks has been conducted for various architectures at initialization (Poole et al., 2016; Schoenholz et al., 2017; Yang & Schoenholz, 2017; Xiao et al., 2018; Chen et al., 2018a; Gilboa et al., 2019; Xiao et al., 2020). This analysis initializes the weights such that their variance in every layer is scaled down by the number of neurons in that layer, so that the input contribution of each neuron from the activations of the previous layer remains O(1). The primary objective of these works is to study the trainability, generalization and expressivity of neural networks. Poole et al. (2016) shows that depth, rather than width, gives networks the capacity to express highly non-linear functions. This is extended to deriving conditions for the trainability of extremely deep neural networks in Schoenholz et al. (2017). Using a similar analysis, Yang & Schoenholz (2017) shows exponential input-space collapse and vanishing/exploding gradients for deep feedforward networks, whereas these become subexponential, and in some cases even polynomial, with residual connections, and Hayou et al.
(2019) derives initialization parameters for different activations to accelerate training. Consequently, better initialization schemes for trainability for extremely deep neural networks based on the conditioning of input-output Jacobian matrix are established for Convolutional Neural Networks (Xiao et al., 2018), Recurrent Neural Networks and Long Short Term Memory Networks Chen et al. (2018a); Gilboa et al. (2019). Interestingly, Xiao et al. (2020) studies the trainability and generalization of networks using the condition number of the NTK and the NTK predictor, and shows that the trainability and generalizability are at odds in very wide and deep networks. In the context of GNNs, Kawamoto et al. (2018) extends the mean field analysis to graph partitioning, however exploring the potential of the analysis is still nascent. ## B Mathematical Derivations And Proofs We first derive the NTK (Theorem 1) for GCN defined in (2) and prove Theorems 2, 4, 5 and 6, Corollaries 1, 2, 3 and 4 by considering linear GCN and computing the population NTK Θ˜ (d)for different graph convolutions S. We then derive Theorem 3 for ReLU GCN similar to the analysis of linear GCN. We represent the u-th row of a matrix M as Mu., and use 1n to denote a vector of n dimension with all 1s and 1ˆn for a vector of n dimension with −1 as first n 2 entries and +1 as the remaining n 2 entries, and 1n×n for the n × n matrix of ones. ## B.1 Theorem 1: Ntk For Vanilla Gcn We rewrite the GCN FW(X, S) defined in (2) using the following recursive definitions: $$\mathbf{G}_{1}=\mathbf{S}\mathbf{X},\qquad\mathbf{G}_{i}=\sqrt{\frac{c_{\sigma}}{h_{i-1}}}\mathbf{S}\sigma(\mathbf{F}_{i-1})\ \forall i\in\{2,\ldots,d+1\},\quad\mathbf{F}_{i}=\mathbf{G}_{i}\mathbf{W}_{i}\ \forall i\in[d+1].\tag{6}$$ Thus, FW(X, S) = Fd+1. Since all the output neurons behave similarly in the infinite width limit, we consider Wd+1 to be h × 1 and using the definitions in (6), the gradient with respect to Wi of node u is $$\left(\frac{\partial F_{\mathbf{W}}(\mathbf{X},\mathbf{S})}{\partial\mathbf{W}_{i}}\right)_{\mathbf{u}}=(\mathbf{G}_{i})^{T}(\mathbf{B}_{i})_{\mathbf{u}}\quad\text{with}\quad(\mathbf{B}_{i})_{\mathbf{u}}=\begin{cases}(\mathbf{1}_{n})_{n}&\text{if}i=d+1\\ \sqrt{\frac{\sigma_{n}}{\mu_{n}}}(\mathbf{S})_{\mathbf{u}}^{T}(\mathbf{B}_{d+1})_{n}\mathbf{W}_{d+1}^{T}\odot(\phi(\mathbf{F}_{i}))_{n}.&\text{if}i=d\\ \sqrt{\frac{\sigma_{n}}{\mu_{n}}}\mathbf{S}^{T}(\mathbf{B}_{i+1})_{n}\mathbf{W}_{d+1}^{T}\odot(\phi(\mathbf{F}_{i}))_{n}.&\text{if}i<d\end{cases}\tag{7}$$ where (Bi)u ∈ R n×hi. We derive the NTK, as defined in (1), using the recursive definition of FW(X, S) in (6) and its derivative in (7). Note that the derivatives in (7) are computed for every node output following the approach in Arora et al. (2019), hence ∂FW(X,S) ∂Wi u ∈ R hi−1×hi. We give the gradients in B.2. Co-variance between Nodes. We will first derive the co-variance matrix of size n × n for each layer comprising of co-variance between any two nodes u and v. The co-variance between u and v in F1 and Fi are derived below. We denote u-th row of matrix Z as Zu. throughout our proofs. E [(F1)uk (F1)vk′ ] = E [(G1W1)uk (G1W1)vk′ ] = E "X h0 r=1 (G1)ur (W1)rkX h0 s=1 (G1)vs (W1)sk′ #(W1)xy∼N(0,1) = 0 ; if r ̸= s or k ̸= k ′ E [(F1)uk (F1)vk] r=s = k=k′ E "X h0 r=1 (G1)ur (G1)vr (W1) 2 rk# (W1)xy∼N(0,1) =X h0 r=1 (G1)ur (G1)vr = ⟨(G1)u. ,(G1)v.⟩ (8) E [(Fi)uk (Fi)vk] r=s = k=k′ E h Xi−1 r=1 (Gi)ur (Gi)vr (Wi) 2 rk (Wi)xy∼N(0,1) = h Xi−1 r=1 (Gi)ur (Gi)vr = ⟨(Gi)u. 
,(Gi)v.⟩ (9) Evaluating (8) and (9) in terms of the graph in the following, (8) : ⟨(G1)u. ,(G1)v.⟩ = ⟨(SX)u. ,(SX)v.⟩ = Su.XXT S T .v = (Σ1)uv (10) (9) : ⟨(Gi)u. ,(Gi)v.⟩ = cσ hi−1 ⟨(Sσ(Fi−1))u. ,(Sσ(Fi−1))v.⟩ =cσ hi−1 h Xi−1 k=1 (Sσ(Fi−1))uk (Sσ(Fi−1))vk hi−1→∞ = cσE [(Sσ(Fi−1))uk (Sσ(Fi−1))vk] ; law of large numbers = cσE " Xn r=1 Surσ (Fi−1)rk! Xn s=1 Svsσ (Fi−1)sk!# = cσE "Xn r=1 Xn s=1 SurSvsσ (Fi−1)rk σ (Fi−1)sk# $(\mathbf{\Sigma}_{i-1})_{rs}$ and the definition of $\mathbf{E}_{i-1}$ in Theorem 1. (a) = Xn r=1 Xn s=1 Sur (Ei−1)rs S T sv = Su.Ei−1S T .v = (Σi)uv (11) (a): using E [(Fi−1)rk (Fi−1)sk] = (Σi−1)rs and the definition of Ei−1 in Theorem 1. NTK for Vanilla GCN. Let us first evaluate the tangent kernel component from Wk respective to nodes u and v. The following two results are needed to derive it. To compute the NTK we need to evaluate the sum of all parameters gradient dot product between two nodes u and v. To do so, we first evaluate D ∂F ∂Wk u , ∂F ∂Wk v Ein the following. v = hkX−1,hk i=1,j=1 ∂F ∂Wk ∂F ∂Wk u , ∂F ∂Wk u ij ∂F ∂Wk v ij = hkX−1,hk i=1,j=1 GT k (Bk)u ij GT k (Bk)v ij = hkX−1,hk i=1,j=1 Xn,n a=1,b=1 GT k ia ((Bk)u )aj GT k ib ((Bk)v )bj = X hk j=1 Xn,n a=1,b=1 cσ hk S T(Bk+1)u WT k+1aj ( ˙σ (Fk))aj GkGT k ab S T(Bk+1)u WT k+1bj ( ˙σ (Fk))bj (12) = hk,hkX+1,hk+1 j=1,l=1,m=1 Xn,n a=1,b=1 cσ hk S T(Bk+1)u al WT k+1lj ( ˙σ (Fk))aj GkGT k ab S T(Bk+1)u bm WT k+1mj ( ˙σ (Fk))bj hk→∞ = hk+1→∞ cσ hkX ,hk+1 j=1,l=1 Xn,n a=1,b=1 S T(Bk+1)u al ( ˙σ (Fk))aj GkGT k ab S T(Bk+1)u bl ( ˙σ (Fk))bj = cσ h Xk+1 l=1 Xn,n a=1,b=1 S T(Bk+1)u al S T(Bk+1)u bl GkGT k ab E hσ˙ (Fk) ˙σ (Fk) T ab i (b) = h Xk+1 l=1 S T(Bk+1)u T GkGT k ⊙ E˙k S T(Bk+1)u ll = tr((Bk+1) T u SΣk ⊙ E˙k S T(Bk+1)v ) (c) = tr((Bd+1) T u Su *. . .* SSΣk ⊙ E˙k S T ⊙ E˙k+1S T ⊙ *. . .* ⊙ E˙d S T v (Bd+1)v ) = Su. *. . .* SSΣk ⊙ E˙k S T ⊙ E˙k+1S T ⊙ *. . .* ⊙ E˙d S T v. (13) (b): cσE hσ˙ (Fk) ˙σ (Fk) T ab i=E˙k ab. (c): Expanding Bk+1 will result in the expression similar to (12), and repeated expansion until Bd+1. The final equation is obtained by substituting (Bd+1)u = 1 from its definition in (3). Extending (13) to all n nodes which will result in n × n matrix, we get ∂F ∂Wk ,∂F ∂Wk = S. . . SSΣk ⊙ E˙k S T ⊙ E˙k+1S T ⊙ . . . ⊙ E˙d S T E Wk ∂F ∂Wk ,∂F ∂Wk = S. . . SSΣk ⊙ E˙k S T ⊙ E˙k+1S T ⊙ . . . ⊙ E˙d S T(14) Finally, NTK Θ is, $$\mathbf{\Theta}=\sum_{k=1}^{d+1}\mathbb{E}\left[\left\langle\frac{\partial\mathbf{F}}{\partial\mathbf{W}_{k}},\frac{\partial\mathbf{F}}{\partial\mathbf{W}_{k}}\right\rangle\right]$$ $$=\sum_{k=1}^{d+1}\mathbf{S}\left(\ldots\mathbf{S}\left(\mathbf{S}\left(\mathbf{\Sigma}_{k}\odot\dot{\mathbf{E}}_{k}\right)\mathbf{S}^{T}\odot\dot{\mathbf{E}}_{k+1}\right)\mathbf{S}^{T}\odot\ldots\odot\dot{\mathbf{E}}_{d}\right)\mathbf{S}^{T}\tag{15}$$ $$\quad(14)$$ with definition of Σk and E˙k mentioned in the theorem. □ ## B.2 Gradients Of Functions With Scalar Output We list here the aggregation of gradients for different functions that enable deriving the equation (7). The following ∂f ∂W are derived assuming f ∈ R. Hence the derivative will be of same dimension as W. $\square$ ∂XW ∂W= XT 1 ; ∂σ(XW) ∂W= XT σ˙(XW) ∂XWY ∂W= XT 1YT; ∂σ(XWY) ∂W= XT σ˙(XWY)YT ∂Zσ(XW)Y ∂W= XTZ T 1YT ⊙ σ˙(XW) ∂σ(Z1σ(Z2σ(XW)Y1)Y2) ∂W= XTZ T 2 Z T 1 σ˙(Z1σ(Z2σ(XW)Y1)Y2)YT 2 ⊙ σ˙(Z2σ(XW)Y1)YT 1 ⊙ σ˙(XW) In the above, all 1 are scalars. These derivatives are used to derive (7). 
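Since these scalar-output identities are easy to get wrong, a quick finite-difference check can be reassuring. The following minimal NumPy sketch is our own illustration and not part of the paper's released code; tanh stands in for a smooth σ, and the shapes are chosen so that the output is a scalar, as assumed in B.2. It verifies ∂σ(XWY)/∂W = X^T σ̇(XWY) Y^T:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 4, 3
X = rng.normal(size=(1, m))      # row vector, so X W Y is a scalar
W = rng.normal(size=(m, k))
Y = rng.normal(size=(k, 1))
sigma, sigma_dot = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2  # smooth stand-in for sigma

def f(W):
    return float(sigma(X @ W @ Y))   # scalar output, as assumed in B.2

# closed form from B.2: d sigma(XWY)/dW = X^T sigma_dot(XWY) Y^T (an outer product)
analytic = X.T * float(sigma_dot(X @ W @ Y)) * Y.T

# central finite differences, entry by entry
eps, numeric = 1e-6, np.zeros_like(W)
for i in range(m):
    for j in range(k):
        E = np.zeros_like(W); E[i, j] = eps
        numeric[i, j] = (f(W + E) - f(W - E)) / (2 * eps)

print(np.abs(analytic - numeric).max())   # ~1e-9: the identity checks out
```

The same pattern extends to the other identities above by swapping in the corresponding closed form for `analytic`.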
## B.3 Theorems 2, 4 And Corollary 1: Population Ntk Θ˜ **For Different Convolutions** S We consider linear GCN with Assumption 1, that is, orthonormal features and Assumption 2. We derive it generally without the assumption on γ. We first prove it for K = 2 and then extend it to K classes. We consider that all nodes are sorted per class for ease of analysis which implies A is a n × n matrix with pπiπj entries in [1, n 2 ][1, n 2 ] and [ n 2 + 1, n][ n 2 + 1, n] blocks and qπiπj entries in [1, n 2 ][ n 2 + 1, n] and [ n 2 + 1, n][1, n 2 ] blocks. Therefore, $$\mathbf{A}=\boldsymbol{\pi}\boldsymbol{\pi}^{T}\odot\left(\frac{p+q}{2}\mathbf{11}^{T}+\frac{p-q}{2}\mathbf{i}\mathbf{1}^{T}\right)$$ $$=\frac{p+q}{2}\boldsymbol{\pi}\boldsymbol{\pi}^{T}+\frac{p-q}{2}\boldsymbol{\hat{\pi}}\boldsymbol{\hat{\pi}}^{T}\tag{16}$$ where the entries of πˆ are −πi ∀ i ∈ [1, n 2 ] and +πi ∀ i ∈ [ n 2 +1, n]. The degree matrix D is D = (p+q)cn 2diag(π). ## B.3.1 Symmetric Degree Normalized Adjacency Ssym Now, lets compute Ssym using A (16) and its degree matrix D. Ssym = D− 12 AD− 12 =2 (p + q) cn diag(π) − 12 p + q 2ππT + p − q 2πˆπˆ T diag(π) − 12 = 1 cn π 1 2π 1 2 T + p − q p + q πˆ 1 2πˆ 1 2 T = √ √ π1 cn − √ √ π1 cn ...... √ √ πn cn + √ √ πn cn n×2 1 0 0 r 2×2 √ √ π1 cn − √ √ π1 cn ...... √ √ πn cn + √ √ πn cn T 2×n = UΛUT(17) $$(17)$$ Note that π 1 2 Tπ 1 2 = πˆ 1 2 Tπˆ 1 2 = cn, π 1 2 Tπˆ 1 2 = 0 since Pi∈Ck π = cn K and UT U = I2, thus (17) is the singular value decomposition of Ssym. To compute the population NTK Θ˜ (d) sym for linear GCN with orthonormal features, we need S k symS kT sym. Using (17), S k symS kT sym (17) = UΛ2kUT = √ √ π1 cn − √ √ π1 cn ...... √ √ πn cn + √ √ πn cn n×2 1 0 0 r 2k 2×2 √ √ π1 cn − √ √ π1 cn ...... √ √ πn cn + √ √ πn cn T 2×n S k symS kT symij =1 + δij r 2k √πiπj cn; δij = (−1)1[Ci̸=Cj ] $$\mathbf{S}_{sym}^{k}\mathbf{S}_{sym}^{kT}\underset{\text{notation}}{=}\left(cn\right)^{-1}\left[\frac{\left(1+r^{2k}\right)\sqrt{\pi_{i}\pi_{j}}\ \left|\ \left(1-r^{2k}\right)\sqrt{\pi_{i}\pi_{j}}\ \right.}{\underbrace{\left(1-r^{2k}\right)\sqrt{\pi_{i}\pi_{j}}}_{\frac{n}{2}\ \text{entries}}\right]_{n\times n}\tag{18}$$ $$(18)$$ $$(19)$$ Consequently, population NTK Θ˜ (d) sym for nodes i and j using (18) is as follows, $\begin{gathered}\left({\tilde{\Theta}_{sym}^{(d)}}\right)_{ij}=\sum\limits_{k=1}^{d+1}{\mathbf{S}_{sym}^{d+1}\mathbf{S}_{sym}^{(d+1)T}}\\ =\left({d+1}\right)\left({1+\delta_{ij}r^{2d+2}}\right)\frac{{\sqrt{\pi_i\pi_j}}}{{cn}}\\ \end{gathered}$ are of the population NTK which we refer to class separability of the L. Hence, the average block difference of the population NTK which we refer to class separability of the kernel ζ (d) sym is derived with Pn i=1 √πi1[Ci = k] = τk ∀ k ζ (d) sym = 4(d + 1) n2(cn) X n/2 i=1 X n/2 j=1 1 + r 2d+2 √πiπj +Xn i=n/2+1 Xn j=n/2+1 1 + r 2d+2 √πiπj − X n/2 j=n/2+1 1 − r 2d+2 √πiπj −Xn i=n/2+1 X n/2 i=1 Xn $$(20)$$ j=1 1 − r 2d+2 √πiπj ! = 4(d + 1) n2(cn) 1 + r 2d+2 τ 2 1 + τ 2 2 − 21 − r 2d+2(τ1τ2) = d + 1 cn 4 n2 (τ1 − τ2) 2 + 4 n2 r 2d+2 (τ1 + τ2) 2(20) In (20), τ1 is of same order as τ2 and τ1 ≈ τ2 for large n with Pi∈Ck π = cn K . Hence, considering τ1 = τ2 = τ , we get the block difference as 16τ 2(d+1) n2(cn)r 2d+2. It is of O( dr2d n ), since (τ1 + τ2) 2has n 2terms, each of O(1). 
Therefore, the block difference of the population NTK Θ˜ (d) sym at d → ∞ is interfaces of the population $N$11 $O_{5/N}$ at $d\to\infty$ is $$\lim_{d\to\infty}\frac{16r^{2}\left(d+1\right)}{n^{2}\left(cn\right)}r^{2d+2}=\lim_{d\to\infty}\frac{16r^{2}}{n^{2}\left(cn\right)}\frac{d+1}{r^{-\left(2d+2\right)}}\tag{21}$$ $$=\lim_{d\to\infty}\frac{16r^{2}}{n^{2}\left(cn\right)}\frac{1}{r^{-\left(2d+2\right)}\log(r)(-2)}=0$$ Apart from the block difference, we can also see that the population kernel at ij is proportional to √πiπj cnas d → ∞, thus converging to a constant kernel. Equations (19) and (21) prove the population NTK Θ˜ (d) sym and class separability of Θ˜ (∞) sym in Theorem 4 and Corollary 1, respectively. Substituting d = 1 and ∀*i, π*i = 1 n , Theorem 2 can be derived. □ ## B.3.2 Row Degree Normalized Adjacency Srow The assumption on γ in Assumption 2 is only to simplify the expression of population NTK for Srow. We derive it without this assumption in the following. We first derive S k rowS kT row. Srow = D−1A = D− 12 D− 12 AD− 12 D+ 12 = D− 12 UΛUT D+ 12 S k row = D− 12 UΛkUT D+ 12 S k rowS kT row = D− 12 UΛkUT D+ 12 D+ 12 UΛkUT D− 12 = D− 12 UΛkUT DUΛkUT D− 12 = D− 12 UΛkUT D− 12 D+ 12 DD+ 12 D− 12 UΛkUT D− 12 = UΛb kUb TD2UΛb kUb T; Ub = D− 12 U = √2 cn√p + q 1 T n 1ˆTn n×2 $$\left(\mathbf{S}_{r o u s}^{k T}\mathbf{S}_{r o u s}^{k T}\right)_{i j}=(c n)^{-2}\begin{cases}\left(1+r^{k}\right)^{2}\lambda+\left(1-r^{k}\right)^{2}\mu&\text{if}i\text{and}j\in\operatorname{class1}\\ \left(1+r^{k}\right)\left(1-r^{k}\right)\left(\lambda+\mu\right)&\text{if}i\text{and}j\notin\operatorname{same~class};\;\lambda=\sum_{s=1}^{8}\pi_{s}^{2};\;\mu=\sum_{s=\frac{8}{2}+1}^{8}\pi_{s}^{2};\\ \left(1-r^{k}\right)^{2}\lambda+\left(1+r^{k}\right)^{2}\mu&\text{if}i\text{and}j\in\operatorname{class2}\end{cases}$$ $$\mathbf{S}_{\text{F}_{\text{GMT}}}^{F}\mathbf{S}_{\text{F}_{\text{GMT}}}^{kT}\equiv\text{net}.\ \left(cn\right)^{-2}\left[\underbrace{\frac{\left(1+r^{k}\right)^{2}\lambda+\left(1-r^{k}\right)^{2}\mu}{\left(1+r^{k}\right)\left(1-r^{k}\right)\left(\lambda+\mu\right)}}_{\text{$\frac{\alpha}{2}$entire}}\right]_{\text{$\frac{\alpha}{2}$entire}}\underbrace{\left(1-r^{k}\right)\left(1-r^{k}\right)\left(\lambda+\mu\right)}_{\text{$\frac{\alpha}{2}$entire}}\underbrace{\left(1-r^{k}\right)^{2}\lambda+\left(1+r^{k}\right)^{2}\mu}_{\text{$\frac{\alpha}{2}$entire}}\right]_{\text{$\frac{\alpha}{2}$entire}}\tag{22}$$ $$(24)$$ Note that each block is a constant and independent of individual πi. Using (22) and the assumption λ = µ = γ in Theorem 4, the population NTK for nodes i and j is, $$\left(\tilde{\mathbf{\Theta}}_{row}^{(d)}\right)_{ij}\stackrel{{(22)}}{{=}}\sum_{k=1}^{d+1}\mathbf{S}_{row}^{d+1}\mathbf{S}_{row}^{(d+1)T}$$ $$=(d+1)\left(1+\delta_{ij}r^{2d+2}\right)\frac{2\gamma}{\left(cn\right)^{2}}\tag{23}$$ Using (23), we derive the class separability of the kernel ζ (d) row. $$\zeta_{r o w}^{(d)}=\frac{2\gamma(d+1)}{(c n)^{2}}4r^{2d+2}$$ 2d+2 (24) Similar to (20), ζ (d) row is of O( dr2d n ) since γ is O(n), and the class separability of the population NTK Θ˜ (d) row at d → ∞ is 0. Likewise, the population kernel at ij is proportional to 2γ (cn) 2 as d → ∞, thus converging to a constant kernel proving Theorem 4 and Corollary 1, respectively. □ ## B.3.3 Column Normalized Adjacency Scol In this section we derive the population NTK Θ˜ (d) col. 
Scol = AD−1 = D+ 12 UΛUT D− 12 S k col = D+ 12 UΛkUT D− 12 S k colS kT col = D+ 12 UΛkUT D− 12 D− 12 UΛkUT D+ 12 =UΛ˜ kU˜ TD−2UΛ˜ kU˜ T; U˜ = D+ 12 U = rp + q 2 π T πˆ T n×2 matrix not. =n (cn) 2 πiπj 1 + r 2kπiπj 1 − r 2k n×n (25) πiπj 1 − r 2k πiπj 1 + r 2k | {z } n 2 entries | {z } n 2 entries Therefore, Θ˜ (d) col for all i and j is $$(26)$$ $$(27)$$ $$\left(\tilde{\Theta}_{col}^{(d)}\right)_{ij}\stackrel{{(25)}}{{=}}\sum_{k=1}^{d+1}\mathbf{S}_{col}^{d+1}\mathbf{S}_{col}^{(d+1)T}$$ $$=(d+1)\left(1+\delta_{ij}r^{2d+2}\right)\frac{n\pi_{i}\pi_{j}}{\left(cn\right)^{2}}$$ Using (26) and Pi∈Ck π = cn K , the class separability of the kernel ζ (d) col is $$\zeta_{col}^{(d)}=\frac{4(d+1)}{n}r^{2d+2}\tag{1}$$ which is of O( dr2d n ) and the class separability of the population NTK Θ˜ (d) row at d → ∞ is 0 similar to symmetric and row normalization cases. Likewise, the population kernel at ij is proportional to nπiπj (cn) 2 as d → ∞, thus converging to a constant kernel. Hence, equations (26) and (27) prove the population NTK Θ˜ (d) col and ζ (d) col in Theorem 4 and Corollary 1, respectively. □ ## B.3.4 Unnormalized Adjacency Sadj We can rewrite A as follows, $$\mathbf{A}=\pi\pi^{T}\odot\left[\underbrace{p}_{\frac{q}{2}\text{entries}}\underbrace{q}_{\frac{p}{2}\text{entries}}\right]_{n\times n}$$ $$=\left[\begin{array}{cccc}\pi_{1}&&&\\ &\ddots&\\ &&&\pi_{n}\end{array}\right]_{n\times n}\left[\begin{array}{cccc}p&q&&&\\ q&p\\ q&p\end{array}\right]_{n\times n}\left[\begin{array}{cccc}\pi_{1}&&&\\ &\ddots&\\ &&&\pi_{n}\end{array}\right]_{n\times n}\tag{28}$$ We consider γ assumption for the analysis of unnormalised adjacency to simplify the computation. But the result holds without this assumption. $$\mathbf{A}^{2}\stackrel{{\eqref{eq:28}}}{{=}}\begin{bmatrix}\pi_{1}&&\\ &\ddots&\\ &&&\pi_{n}\end{bmatrix}\begin{bmatrix}\begin{bmatrix}\begin{array}{c|c}\left(p^{2}+q^{2}\right)\gamma&2pq\gamma\\ \hline\\ 2pq\gamma&\end{array}\\ \end{bmatrix}\begin{bmatrix}\pi_{1}&&\\ &\ddots&\\ &&\pi_{n}\end{bmatrix}$$ $$\mathbf{A}^{4}=\begin{bmatrix}\pi_{1}&&\\ &\ddots&\\ &&&\pi_{n}\end{bmatrix}\begin{bmatrix}\begin{array}{c|c}\left(p^{4}+q^{4}+6p^{2}q^{2}\right)\gamma^{3}&\left(4p^{3}q+4pq^{3}\right)\gamma^{3}&\\ \hline\\ \end{array}\begin{bmatrix}\pi_{1}&&\\ &\ddots&\\ \end{array}\\ \end{bmatrix}$$ Note that in the above shown A2kit is the even powers of binomial expansion of (p + q) 2 k for *i, j* in same class whereas it is the odd powers for *i, j* not in the same class. We compute the filter Sadj using this fact. $$\mathbf{S}_{adj}=\frac{1}{n}\mathbf{A}$$ $$\mathbf{S}_{adj}^{k}=\frac{1}{n^{k}}\mathbf{A}^{k}$$ $$\mathbf{S}_{adj}^{k}\mathbf{S}_{adj}^{kT}=\frac{1}{n^{2k}}\mathbf{A}^{2k}$$ $$=\begin{cases}\pi_{i}\pi_{j}\frac{2^{2^{k}-1}}{n^{2k}}\sum\limits_{l=0}^{2^{k}-1}\binom{2^{k}}{2l}p^{2^{k}-2l}q^{2l}&\text{if$i$and$j\in$same class}\\ \\ \pi_{i}\pi_{j}\frac{2^{2^{k}-1}}{n^{2k}}\sum\limits_{l=0}^{2^{k}-1}\binom{2^{k}}{2l+1}p^{2^{k}-2l-1}q^{2l+1}&\text{if$i$and$j\in$different class}\end{cases}$$ $$\tilde{\Theta}_{a d j}^{(d)}=(d+1)\mathbf{S}_{a d j}^{d+1}\mathbf{S}_{a d j}^{(d+1)T}$$ adj 2 Pd l=0 2 d+1 2l p 2 d+1−2lq 2lif i and j ∈ same class = (d + 1)πiπj γ 2 d+1−1 n2d+2 2 Pd−1 l=0 2 d+1 2l+1p 2 d+1−2l−1q 2l+1 if i and j ∈ different class The class separability in this case is ζ (d) adj = (d + 1)c 2 γ 2 d+1−1 n2d+2 (p − q) 2d+2. The above form is not simplified as it is not an interesting case where the gap between the two blocks disappears rapidly and Θ˜ (∞) adj ij = 0. 
There is no information in the kernel proving both Theorem 4 and Corollary 1. □ ## B.3.5 Number Of Classes K > 2 From the above derivation for K = 2, it can be seen that once S k symS kT sym is computed, the population NTK for all the graph convolutions can be derived using it. Therefore, we derive it for K > 2 and it suffices to show the conclusions of Theorem 4 and Corollary 1. We denote the vector πˆ1k with −πi∀i ∈-1, n K , +πi∀i ∈ hn(k−1) K, nk K iand 0 for the rest. With this definition, A is $$\mathbf{A}={\frac{p+(K-1)q}{K}}\pi\pi^{T}+{\frac{p-q}{K}}\sum_{l=2}^{K}{\hat{\pi}}_{1l}{\hat{\pi}}_{1l}^{T}.$$ $$(29)$$ . (29) D for K classes is (p+(K−1)q)cn Kdiag(π) from (29). We can compute Ssym using A and D as follows, $$\mathbf{S}_{s y m}=\mathbf{D}^{-{\frac{1}{2}}}\mathbf{AD}^{-{\frac{1}{2}}}$$ Ssym = D− 12 AD− 12 =K (p + (K − 1)q) cn diag(π − 12 ) p + (K − 1)q KππT + p − q K X K l=2 πˆ1lπˆ T 1l ! diag(π − 12 ) l=2 πˆ 1 2 1lπˆ 1 2 T 1l cn = π 1 2π 1 2 T cn+ p − q (p + (K − 1)q) cn X K (Ssym)ij = √πiπj cn 1 + δij p − q p + (K − 1)q X K K l + l 2 ! l=2 S k symij = √πiπj cn 1 + δij p − q p + (K − 1)q kX K K l + l 2 ! l=2 S k symS kT symij = √πiπj cn 1 + δij p − q p + (K − 1)q 2kX K K l + l 2 ! (30) l=2 It is noted that the equation (30) is very much similar to (18) for K = 2. The further derivations of the population NTKs Θ˜ for all the convolutions are similar and the theoretical results extend without any issues. □ ## B.4 Corollary 3 And 4: Ntk For Gcn With Skip Connections We observe that the definitions of Gi ∀i ∈ [1, d + 1] are different for GCN with skip connections from the vanilla GCN. Despite the difference, the definition of gradient with respect to Wiin (7) does not change as Giin the gradient accounts for the change and moreover, there is no new learnable parameter since the input transformation H0 = XW0 where (W0)ij is sampled from N (0, 1) is not learnable in our setting. Given the fact that the gradient definition holds for GCN with skip connection, the NTK will retain the form from NTK for vanilla GCN as evident from the derivation of NTK for vanilla GCN in Section B.1. The change in Gi will only affect the co-variance between nodes. Hence, we will derive the co-variance matrix for Skip-PC and Skip-α in the following. Skip-PC: Co-variance between nodes. The co-variance between nodes u and v in F1 and Fi are derived below. E [(F1)uk (F1)vk] = ⟨(G1)u. ,(G1)v.⟩ = cσ h ⟨(Sσs(H0))u. ,(Sσs(H0))v.⟩ = cσ h X h k=1 (Sσs(H0))uk (Sσs(H0))vk h→∞= cσE [(Sσs(H0))uk (Sσs(H0))vk] ; law of large numbers = Su.E˜0S T .v ; E˜0 = cσ E F∼N(0,XXT ) -σs(F)σs(F) T = (Σ1)uv (31) $$(31)$$ E [(Fi)uk (Fi)vk] = ⟨(Gi)u. ,(Gi)v.⟩ = cσ h ⟨(S (σ(Fi−1) + σs(H0)))u. ,(S (σ(Fi−1) + σs(H0)))v.⟩ = cσ h X h k=1 (Sσ(Fi−1) + Sσs(H0))uk (Sσ(Fi−1) + Sσs(H0))vk h→∞= cσE [(Sσ(Fi−1) + Sσs(H0))uk (Sσ(Fi−1) + Sσs(H0))vk] ; law of large numbers = cσ hE [(Sσ(Fi−1))uk (Sσ(Fi−1))vk] + E [(Sσ(Fi−1))uk (Sσs(H0))vk] + E [(Sσs(H0))uk (Sσ(Fi−1))vk] + E [(Sσs(H0))uk (Sσs(H0))vk] i = Su.Ei−1S T .v + cσE [(Sσ(Fi−1))uk (Sσs(XW0))vk] + cσE [(Sσs(XW0))uk (Sσ(Fi−1))vk] + cσE "Xn r=1 Xn s=1 SurSqsσs (XW0)rk σs (XW0)sk# (f) = Su.Ei−1S T .v + cσSu.E [σs (XW0)rk σs (XW0)sk] S T .v = Su.Ei−1S T .v + Su.E˜0S T .v = Su.Ei−1S T .v + (Σ1)uv = (Σi)uv (32) $$(32)$$ $)))_{uk}\left(\mathbf{S}\sigma_s(\mathbf{XW}_0)\right)_{vk}]$ a. (f): E [(Sσ(Fi−1))uk (Sσs(XW0))vk] and E [(Sσs(XW0))uk (Sσ(Fi−1))vk] evaluate to 0 by conditioning on W0 first and rewriting the expectation based on this conditioning. 
The terms within expectation are independent when conditioned on W0, and hence it is E W0 by taking h in W0 going to infinity first. dependent when conditioned on $\mathbf{W}_{0}$, and hence it is $$\begin{bmatrix}\mathbb{E}\\ \mathbf{\Sigma}_{i-1}|\mathbf{W}_{0}\end{bmatrix}[(\mathbf{S}\sigma(\mathbf{F}_{i-1}))_{uk}\,|\,\mathbf{W}_{0}]\underset{\mathbf{\Sigma}_{i-1}|\mathbf{W}_{0}}{\mathbb{E}}[(\mathbf{S}\sigma_{s}(\mathbf{X}\mathbf{W}_{0}))_{vk}\,|\,\mathbf{W}_{0}]\\ \mathbf{e},\quad\underset{\mathbf{\Sigma}_{i-1}|\mathbf{W}_{0}}{\mathbb{E}}[(\mathbf{S}\sigma_{s}(\mathbf{X}\mathbf{W}_{0}))_{vk}\,|\,\mathbf{W}_{0}]=0.\end{bmatrix}$$ Here, E We get the co-variance matrix for all pairs of nodes Σ1 = SE˜0S T and Σi = SEi−1S T + Σ1 from (31) and (32). Skip-α**: Co-variance between nodes.** Let u and v be two nodes and the co-variance between u and v in F1 and Fi are derived below. $$\mathbb{E}\left[\left(\mathbf{F}_{1}\right)_{u k}\left(\mathbf{F}_{1}\right)_{v k}\right]=\left\langle\left(\mathbf{G}_{1}\right)_{u.},\left(\mathbf{G}_{1}\right)_{v.}\right\rangle$$ = cσ h X h k=1 ((1 − α)Sσs(H0) + ασs(H0))uk ((1 − α)Sσs(H0) + ασs(H0))vk h→∞= cσE [((1 − α)Sσs(H0) + ασs(H0))uk ((1 − α)Sσs(H0) + ασs(H0))vk] = cσ h(1 − α) 2E [(Sσs(H0))uk (Sσs(H0))vk] + (1 − α)α E [(Sσs(H0))uk (σs(H0))vk] + E [(Sσs(H0))vk (σs(H0))uk] + α 2E [(σs(H0))uk (σs(H0))vk] = (1 − α) 2Su.E˜0S T .v + (1 − α)αSu. E˜0 .v +E˜0 u. S T .v+ α 2E˜0 uv = (Σ1)uv (33) Using E [(F1)uk (F1)vk], we recursively evalaue E [(Fi)uk (Fi)vk] in the following, $$\mathbb{E}\left[\left(\mathbf{F}_{i}\right)_{u k}\left(\mathbf{F}_{i}\right)_{v k}\right]=\left\langle\left(\mathbf{G}_{i}\right)_{u_{i}},\left(\mathbf{G}_{i}\right)_{v_{i}}\right\rangle$$ = cσ h X h k=1 ((1 − α)Sσ(Fi−1) + ασs(H0))uk ((1 − α)Sσ(Fi−1) + ασs(H0))vk h→∞= cσE [((1 − α)Sσ(Fi−1) + ασs(H0))uk ((1 − α)Sσ(Fi−1) + ασs(H0))vk] = cσ h(1 − α) 2E [(Sσ(Fi−1))uk (Sσ(Fi−1))vk] + α 2E [(σs(H0))uk (σs(H0))vk] + (1 − α)α E [(Sσ(Fi−1))uk (σs(H0))vk] + E [(σs(H0))uk (Sσ(Fi−1))vk] i (g) = (1 − α) 2Su.Ei−1S T .v + α 2E˜0 uv = (Σi)uv (34) (g): same argument as (f) in derivation of Σiin Skip-PC. We get the co-variance matrix for all pairs of nodes Σ1 = (1 − α) 2SE˜0S T + α(1 − α)SE˜0 + E˜0S T+ α 2E˜0 and Σi = (1 − α) 2SEi−1S T + α 2E˜0 from (33) and (34). ## B.5 Theorem 5: Class Separability Of Population Ntk Θ˜ **For Skip-Pc** NTK at depth d, Θ (d) P C for Skip-PC with linear activations is $$\mathbf{\Theta}_{PC}^{(d)}=\sum_{k=1}^{d+1}\mathbf{S}^{d+1-k}\mathbf{\Sigma}_{k}\mathbf{S}^{(d+1-k)T}$$ $$=\sum_{k=1}^{d+1}\mathbf{S}^{d+1-k}\left(\mathbf{S}^{k}\mathbf{S}^{kT}+\mathbf{SS}^{T}\right)\mathbf{S}^{(d+1-k)T}$$ $$=\sum_{k=1}^{d+1}\underbrace{\mathbf{S}^{d+1}\mathbf{S}^{(d+1)T}}_{I}+\underbrace{\mathbf{S}^{d+2-k}\mathbf{S}^{(d+2-k)T}}_{II}\tag{35}$$ In (35), I is NTK without skip connection and II is computed for Srow and Ssym as follows. Computing II for population NTK Θ˜ (d)for Ssym: for nodes i and j, k=1 S d+2−k sym S (d+2−k)T sym ij = X d+1 X d+1 k=1 1 + δij r 2d+4−2k √πiπj (cn) −1 = (d + 1) √πiπj cn+ δij √πiπj cn X d+1 k=1 r 2k = (d + 1) √πiπj cn+ δij √πiπj cn r 21 − r 2(d+1) 1 − r 2(36) (d) $$(37)$$ Combining (36) with (19), the class separability of the kernel ζ P C,sym as d → ∞ is determined only by the last term in (36) as the other terms give 0 separation. Hence, the influence of skip connection gives $$\zeta_{P C,s y m}^{(\infty)}=\frac{16\tau^{2}r^{2}}{n^{2}(c n)(1-r^{2})}$$ where τ is defined as in Theorem 4. √πiπj 2 + δij r 2. 
Thus showing class separation information retained even at ∞ depth and graph size. □ Similarly, computing II for Srow without assumption on γ, i and j in class 1, $$\sum_{k=1}^{d+1}\left(\mathbf{S}_{row}^{d+2-k}\mathbf{S}_{row}^{d+2-k)T}\right)_{ij}=\left(cn\right)^{-2}\sum_{k=1}^{d+1}\left(1+r^{2k}+2r^{k}\right)\lambda+\left(1+r^{2k}-2r^{k}\right)\mu$$ $$=\left(cn\right)^{-2}\left(\left(\lambda+\mu\right)\left(\left(d+1\right)+\frac{r^{2}\left(1-r^{2\left(d+1\right)}\right)}{1-r^{2}}\right)+2\left(\lambda-\mu\right)\frac{r\left(1-r^{d+1}\right)}{1-r}\right)\tag{38}$$ For $i$ and $j$ in class 2, For $i$ and $j$ in class 2, $$\sum_{k=1}^{d+1}\left(\mathbf{S}_{rw}^{d+k-1}\mathbf{S}_{rw}^{(d+k)T}\right)_{ij}=\left(cn\right)^{-2}\sum_{k=1}^{d+1}\left(1+r^{2k}-2r^{k}\right)\lambda+\left(1+r^{2k}+2r^{k}\right)\mu$$ $$=\left(cn\right)^{-2}\left(\left(\lambda+\mu\right)\left(\left(d+1\right)+\frac{r^{2}\left(1-r^{2\left(d+1\right)}\right)}{1-r^{2}}\right)+2\left(-\lambda+\mu\right)\frac{r\left(1-r^{d+1}\right)}{1-r}\right)\tag{39}$$ For i and j in different class, $$\sum_{k=1}^{d+1}\left(\mathbf{S}_{r o w}^{d+2-k}\mathbf{S}_{r o w}^{(d+2-k)T}\right)_{i j}=\left(c n\right)^{-2}\sum_{k=1}^{d+1}\left(1-r^{2k}\right)\left(\lambda+\mu\right)$$ $$=\left(c n\right)^{-2}\left(\lambda+\mu\right)\left(\left(d+1\right)-\frac{r^{2}\left(1-r^{2(d+1)}\right)}{1-r^{2}}\right)$$ $$(40)$$ Therefore, the influence of the skip connection in the class separability of population NTK Θ˜ (∞) P C,row with γ assumption is obtained by substituting λ + µ = 2γ and λ − µ = 0 in (38), (39) and (40) . $$\zeta_{P C,r o w}^{(\infty)}=\frac{8\gamma r^{2}}{(c n)^{2}(1-r^{2})}$$ hence deriving Theorem 5. □ B.6 Theorem 6: Population NTK Θ˜ **for Skip-**α We expand Σ1 and Σk of Skip-α first to derive the population NTK. Σ1 = (1 − α) 2SST + α (1 − α)S + S T+ α 2In Σk = (1 − α) 2SΣk−1S T + α 2In = (1 − α) 2kS kS kT + α (1 − α) 2k−1S k−1S + S TS k−1 T+ α 2 k X−1 l=0 (1 − α) 2lS lS $\square$ $$(41)$$ lT (41) Exact NTK of depth d for Skip-α is expanded using the above as follows. $$\mathbf{\Theta}_{n}^{(\ell)}=\sum_{k=1}^{d+1}\mathbf{S}^{d+1-k}\mathbf{\Sigma}_{k}\mathbf{S}^{(d+1-k)T}$$ $$=\sum_{k=1}^{d+1}\underbrace{\left(1-\alpha\right)^{2k}\mathbf{S}^{d+1}\mathbf{S}^{(d+1)T}}_{f}+\underbrace{\alpha\left(1-\alpha\right)^{2k-1}\mathbf{S}^{d}\left(\mathbf{S}+\mathbf{S}^{T}\right)\mathbf{S}^{d^{T}}}_{f}+\underbrace{\alpha^{2}\sum_{l=0}^{k-1}\left(1-\alpha\right)^{2l}\mathbf{S}^{d+1-k+l}\mathbf{S}^{(d+1-k+l)T}}_{f^{\prime}f}\tag{42}$$ We compute the class separability of the kernel Θ (∞) α as d → ∞ for Ssym and Srow. From (42), it is clear that terms I and II lead to 0 class separation as derived in previous cases. So, we evaluate III of (42) in the following. IIIij = α 2X d+1 k=1 k X−1 l=0 (1 − α) 2lS d+1−k+l sym S (d+1−k+l)T sym = √πiπj cnα 2X d+1 k=1 k X−1 l=0 (1 − α) 2l1 + δij r 2d+2−2k+2l 1 − (1 − α) 2 + δij r 2(d+1−k) 1 − (1 − α) 2r 2k = √πiπj cnα 2X d+1 k=1 1 − (1 − α) 2k 1 − (1 − α) 2r 2 = √πiπjα 2 cn "(d + 1) 1 − (1 − α) 2 − (1 − α) 21 − (1 − α) 2(d+1) 1 − (1 − α) 22 + 1 − r 2(d+1) 1 − r 2− r 2(d+1) (1 − α) 21 − (1 − α) 2(d+1) δij 1 − (1 − α) 2r 2 # (43) 1 − (1 − α) 2 The class separability of kernel is non zero only for the last term in (43). 
Hence, the class separability ζ (d) α,sym is $$\zeta_{\alpha,s y m}^{(d)}=\frac{16\tau^{2}\alpha^{2}}{(c n)n^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1-r^{2(d+1)}}{1-r^{2}}-r^{2(d+1)}\frac{\left(1-\alpha\right)^{2}\left(1-\left(1-\alpha\right)^{2(d+1)}\right)}{1-\left(1-\alpha\right)^{2}}\right)$$ $$\zeta_{\alpha,s y m}^{(\infty)}=\frac{16\tau^{2}\alpha^{2}}{(c n)n^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1}{1-r^{2}}\right)$$ proving Theorem 6. □ We now compute III for population NTK Θ˜ (∞) α using Srow under λ = µ = γ. The derivation holds without this consideration as well. IIIij = α 2X d+1 k=1 k X−1 l=0 (1 − α) 2lS d+1−k+l row S (d+1−k+l)T row =2γ (cn) 2 α 2X d+1 k=1 k X−1 l=0 (1 − α) 2l1 + δij r 2d+2−2k+2l 1 − (1 − α) 2 + δij r 2(d+1−k) 1 − (1 − α) 2r 2k =2γ (cn) 2 α 2X d+1 k=1 1 − (1 − α) 2k 1 − (1 − α) 2r 2 = 2γα2 (cn) 2 "(d + 1) 1 − (1 − α) 2 − (1 − α) 21 − (1 − α) 2(d+1) 1 − (1 − α) 22 + 1 − r 2(d+1) 1 − r 2− r 2(d+1) (1 − α) 21 − (1 − α) 2(d+1) δij 1 − (1 − α) 2r 2 # (44) 1 − (1 − α) 2 Similar to Ssym, the class separability of kernel is non zero only for the last term in (44). Hence, the class separability ζ (d) α,row is $$\zeta_{\alpha,row}^{(d)}=\frac{8\gamma\alpha^{2}}{\left(cn\right)^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1-r^{2\left(d+1\right)}}{1-r^{2}}-r^{2\left(d+1\right)}\frac{\left(1-\alpha\right)^{2}\left(1-\left(1-\alpha\right)^{2\left(d+1\right)}\right)}{1-\left(1-\alpha\right)^{2}}\right)$$ $$\zeta_{\alpha,row}^{(\infty)}=\frac{8\gamma\alpha^{2}}{\left(cn\right)^{2}\left(1-\left(1-\alpha\right)^{2}r^{2}\right)}\left(\frac{1}{1-r^{2}}\right)$$ $\square$ proving Theorem 6. □ ## B.7 Theorem 3: Population Ntk Θ˜ **For Relu Gcn For Normalized Adjacency** S We first state the NTK for ReLU GCN using the general NTK Theorem 1 and result from Bietti & Mairal (2019) in the following corollary. Note that cσ = 2 for ReLU activation. Corollary 5 (ReLU GCN) *Consider* σ(x) := ReLU(x) in FW(X, S)*. The NTK is computed as in* (3), where given Σk at each layer, one can evaluate the entries of Ek and E˙k using a result from Bietti & Mairal (2019) as $$\begin{array}{c}{{\left(\mathbf{E}_{k}\right)_{i j}=\sqrt{\left(\mathbf{\Sigma}_{k}\right)_{i i}\left(\mathbf{\Sigma}_{k}\right)_{j j}}\ \kappa_{1}\left(\frac{\left(\mathbf{\Sigma}_{k}\right)_{i j}}{\sqrt{\left(\mathbf{\Sigma}_{k}\right)_{i i}\left(\mathbf{\Sigma}_{k}\right)_{j j}}}\right)}}\\ {{\left(\mathbf{\dot{E}}_{k}\right)_{i j}=\kappa_{0}\left(\frac{\left(\mathbf{\Sigma}_{k}\right)_{i j}}{\sqrt{\left(\mathbf{\Sigma}_{k}\right)_{i i}\left(\mathbf{\Sigma}_{k}\right)_{j j}}}\right),}}\end{array}$$ where $\kappa_{0}(x)=\frac{1}{\pi}\left(\pi-\arccos\left(x\right)\right)$ and $\kappa_{1}(x)=\frac{1}{\pi}\left(x\left(\pi-\arccos\left(x\right)\right)+\sqrt{1-x^{2}}\right)$. Using Corollary 5, we derive Theorem 3, the population NTK of the ReLU GCN for depth d, Θ˜ (d) ReLU considering homogeneous degree correction π. That is, π = (*c, . . . , c*) T. Therefore, symmetric, row and column normalized adjacencies are equivalent and is, $$\begin{array}{l}{\mathbf{S}=\mathbf{D}^{-{\frac{1}{2}}}\mathbf{AD}^{-{\frac{1}{2}}}}\\ {\quad={\frac{1}{n}}\left(\mathbf{11}^{T}+r\mathbf{\hat{1}}\mathbf{\hat{1}}^{T}\right)}\end{array}$$ Therefore, using S, κ0(.) and κ1(.) 
we compute Σ1, E1 and E˙1 as, Σ1 = SST = 1 n 1 + r 2 1 − r 2 1 − r 2 1 + r 2 n×n E1 = 1 n 1 + r 2 1 κ1 1−r 2 1+r 2 κ1 1−r 2 1+r 2 1 n×n n×n ; ∆1 := 1 − r 2 1 + r 2 = 1 n 1 + r 21 κ1 (∆1) κ1 (∆1) 1 E˙1 = 1 κ0 (∆1) κ0 (∆1) 1 n×n $$(45)$$ Now, lets define ∆k := (1 − r 2) + (1 + r 2)κ1(∆k−1) (1 + r 2) + (1 − r 2)κ1(∆k−1) . Furthermore, ∆n k and ∆dk denote the numerator and denominator of ∆k, respectively. With this definition, we compute Σk, Ek and E˙k recursive as follows to compute the population NTK Θ˜ (d), $$\begin{array}{l}{{\mathbf{\Sigma}_{2}=\mathbf{S E}_{1}\mathbf{S}^{T}=\frac{\Delta_{1}^{d}}{2n}\left[\begin{array}{l}{{\frac{\Delta_{2}^{d}}{\Delta_{2}^{2}}}}\\ {{\frac{1}{\kappa_{1}\left(\Delta_{2}\right)}}}\end{array}\right]_{n\times n}}}\\ {{\mathbf{E}_{2}=\frac{\Delta_{1}^{d}\Delta_{2}^{d}}{2n}\left[\begin{array}{l|l}{{1}}\\ {{\kappa_{1}\left(\Delta_{2}\right)}}\end{array}\right]_{n\times n};\quad\tilde{\mathbf{E}}_{2}=\left[\begin{array}{l|l}{{1}}&{{\kappa_{0}\left(\Delta_{2}\right)}}\\ {{\kappa_{0}\left(\Delta_{2}\right)}}&{{1}}\end{array}\right]_{n\times n}}}\end{array}$$ to $k$, n×n n×n Extending to k, Σk = ∆d 1 . . . ∆dk−1 2 k−1n ∆dk ∆n k ∆n k ∆dk n×n Ek = ∆d 1 . . . ∆dk 2 k−1n 1 κ1 (∆k) κ1 (∆k) 1 n×n ; E˙k = 1 κ0 (∆k) κ0 (∆k) 1 n×n (46) We obtain population NTK for ReLU GCN in Theorem 3 by substituting Σk, Σ1 and E˙k in the NTK equation in (3). □ ## B.8 Difference Between Block Difference Of Linear And Relu Gcns For Depth D = 1 First, lets compute the average in-class and out-of-class block differences for d = 1 linear and ReLU GCNs. To do so, lets consider homogeneous degree correction as in Section B.7. Therefore, population NTKs for linear and ReLU GCNs Θ˜ (1) and Θ˜ (1) ReLU are, $$\tilde{\Theta}^{(1)}=\frac{2}{n}\left[\frac{1+r^{2}}{1-r^{2}}\frac{1-r^{2}}{1+r^{2}}\right]_{n\times n}\tag{47}$$ $$\tilde{\Theta}_{R\epsilon LUE}^{(1)}=\frac{1}{2n}\left[\frac{\left(1+r^{2}\right)^{2}+\left(1-r^{2}\right)^{2}\kappa_{0}(\Delta_{1})}{\left(1-r^{4}\right)+\left(1-r^{4}\right)\kappa_{0}(\Delta_{1})}\right]\frac{\left(1-r^{4}\right)+\left(1-r^{4}\right)\kappa_{0}(\Delta_{1})}{\left(1+r^{2}\right)^{2}+\left(1-r^{2}\right)^{2}\kappa_{0}(\Delta_{1})}\right]_{n\times n}+\frac{\Delta_{1}^{4}}{2n}\left[\frac{\Delta_{2}^{4}}{\Delta_{2}^{2}}\right]\frac{\Delta_{2}^{2}}{\Delta_{2}^{2}}\right]_{n\times n}\tag{48}$$ Let the average block difference for linear and ReLU GCNs of depth 1 be denoted by ζlin and ζ*ReLU* , respectively. Using (47) and (48), we get $$\zeta_{l i n}=\frac{8r^{2}}{n}=\mathcal{O}\left(\frac{r^{2}}{n}\right)$$ $$\zeta_{R e L U}=\frac{4r^{2}\left(r^{2}+1+\left(r^{2}-1\right)\kappa_{0}(\Delta_{1})\right)}{2n}+\frac{4r^{2}\left(1+r^{2}\right)\left(1-\kappa_{1}(\Delta_{1})\right)}{2n}$$ $$=\mathcal{O}\left(\frac{r^{2}}{n}\right)$$ Therefore, theoretically linear GCN and ReLU GCN of depth 1 retains similar class information for large graphs and hence they perform similarly. □ ## B.9 Analysis Without Orthonormal Feature Assumption Xxt ̸= In To include the features so that XXT ̸= In, we consider *Contextual Stochastic Block Models* (Deshpande et al., 2018) in which the features of node i, xi ∼ ziµ + N (0, σ2If ), where µ ∈ R f and zi = +1 if node i ∈ C1, −1 if i ∈ C2 for K = 2. The analysis can be extended to K > 2 as well. Under this model, the population version of XXTis zµ T µz T = ||µ||2zzT where z = (z1*, . . . , z*n) ∈ R n. For simplicity, we present the average in-class and out-of-class block difference of linear (ζlin) and ReLU GCNs (ζ*ReLU* ) for depth d = 1. 
These are ζlin = 2||µ||²r⁴ and ζReLU = 4||µ||²r⁴(1 + 1/n), respectively. Consequently, ζlin ≤ ζReLU for all r ∈ [0, 1] and all n. However, both are of O(r⁴). As the population NTK for depth d is a more complex expression under the Contextual SBM, we show the result for d = 1 for simplicity, but we note that it extends to general d. □

## C Empirical Analysis

We provide the code for the NTK and the block model in https://github.com/mahalakshmi-sabanayagam/NTK_GCN.

## C.1 Experimental Details Of Figure 1

We use the code for the GCN without skip connections from the GitHub repository of Kipf & Welling (2017) and for the skip connections from the GitHub repository of Chen et al. (2020). The following hyperparameters are used for the GCN without skip connections: learning rate 0.01, weight decay 5e−4, hidden layer width 64, and 500, 1500 and 2000 epochs for depths 2, 4 and 8, respectively. For the skip connections, we use the GCNII model with the same parameters as the vanilla GCN and α = 0.1. The performance is averaged over 5 runs.

In Figure 8, we showcase the performance degradation of the GCN with depth. The right plot is a zoomed-in version of the left plot, showing the performance drop more clearly. Note that depth refers to the number of hidden layers in the definition of the GCN (2); hence, depth = 0 means there is no hidden layer.

![31_image_0.png](31_image_0.png)

Figure 8: **Performance of GCN with depth on Cora.** Depth = 0 refers to no hidden layer in the GCN. The right plot shows the zoomed-in version of the left plot.

## C.2 Comparison Of GCN And NTK

Although it is theoretically clear that the infinite-width assumption should not affect the observations made in Figure 1 on the performance of the GCN with Ssym and Srow, we illustrate the same using the graph NTK. Figure 9 shows that the observation holds for the graph NTK as well, thus supporting our theoretical argument.

![32_image_0.png](32_image_0.png)

Figure 9: Comparison of the accuracy of a trained finite-width GCN and the corresponding NTK. The NTK captures the performance trend of the GCN, although the exact performance does not match.

## C.3 Numerical Validation For DC-SBM For Vanilla GCN And Skip-α

Experimental Details. For the experiments, we fix the size of the sampled graphs to n = 1000, with p = 0.8 and q = 0.1 for the homophilic DC-SBM, p = 0.1 and q = 0.8 for the heterophilic DC-SBM, and p = q = 1 for the core-periphery DC-SBM. π is sampled uniformly from [0, 1] for homophily and heterophily, and πi ∼ Unif(0.5, 1) ∀i ∈ *core* and πi ∼ Unif(0, 0.5) ∀i ∈ *periphery* for the core-periphery DC-SBM.

Illustration of impact of depth in Vanilla GCN using Homophily DC-SBM. We show the impact of depth in the vanilla GCN using the homophilic DC-SBM in Figure 10. The DC-SBM is shown in the first column, and columns 2 and 3 show the exact NTKs at depths 1 and 8 for symmetric and row normalization, respectively. The plots clearly illustrate the complete loss of class information in symmetric normalization with depth (column 2). While the prominence of the block difference decreases with depth under row normalization (column 3), the block/community structure is still retained, showing the strong representation power of Srow.

![32_image_1.png](32_image_1.png)

Figure 10: **Numerical validation of Theorem 4 using DC-SBM** shown in the first plot of column 1. Columns 2 and 3 illustrate the exact NTKs of depth=1 and 8 for Ssym and Srow, respectively. The second plot in column 1 shows the average gap between in-class and out-of-class blocks from theory.

Illustration of Scol and Sadj in Vanilla GCN using Homophily DC-SBM.
We extend the experiments on numerical validation for random graphs using the vanilla GCN described in Section 5.2 to the column normalized adjacency Scol and the unnormalized adjacency Sadj here. We use the same setup described in Section 5.2, and Figure 11 illustrates the results. We observe that even at depth 1 both convolutions are influenced by the degree correction, and there is no class information in the kernels at higher depth. Thus, this validates the theoretical result in Theorem 4.

![33_image_0.png](33_image_0.png)

Figure 11: **Numerical validation of DC-SBM for Vanilla GCN.** The first two heatmaps show the exact NTK Θ(d) for the column normalized adjacency convolution Scol and the other two for the unnormalized adjacency Sadj, for depths d = 1 and 8.

Validation of the theoretical filter ordering based on the population kernel block difference. We validate the theoretical ordering of the filters, Θ˜row ≻ Θ˜sym ≻ Θ˜col ≻ Θ˜adj, based on the population kernel block difference by sampling a graph from a DC-SBM and measuring the Mean Squared Error (MSE) of the prediction from the exact kernel for various depths of the GCN. Figure 12 illustrates that the ordering of the convolution filters obtained theoretically holds very well in practice.

![33_image_1.png](33_image_1.png)

Figure 12: Numerical validation of the theoretical filter ordering based on the kernel class separability. The left plot shows the result from theory based on the block difference of the population NTK. The right plot shows the Mean Squared Error (MSE) of the prediction from the exact kernel of a sampled graph. The order of convolutions based on MSE clearly validates the theory.

Illustration of impact of depth in Skip-PC and Skip-α using Homophily DC-SBM. We present a complementary result to Section 6.3 here. We use the same setting as described in Section 6.3 and plot the exact NTKs of depths 1 and 8 for symmetric and row normalization. Figure 13 shows the results for Skip-PC: the gap between the in-class and out-of-class blocks decreases with depth for both Srow and Ssym, but the class information is still retained at larger depth and the gap does not vanish. Between Srow and Ssym, the heatmaps show that Srow retains the block structure better than Ssym and is devoid of the influence of the degree corrections.

![33_image_2.png](33_image_2.png)

Figure 13: **Numerical validation of DC-SBM for Skip-PC.** It shows the exact NTKs Θ(d) for Ssym and Srow for depths d = 1 and 8.

In the case of Skip-α, we use α = 0.1 to obtain the result illustrated in Figure 14. Similar conclusions are derived from this experiment. Although we consider XXT = In for Skip-α, which fundamentally relies on the feature information to interpolate, the results are still meaningful and demonstrate the theoretical findings.

![34_image_0.png](34_image_0.png)

Figure 14: **Numerical validation of DC-SBM for Skip-**α. It shows the exact NTKs Θ(d) for Ssym and Srow for depths d = 1 and 8.

Numerical analysis of the results using Heterophily DC-SBM. We extend the analysis to the heterophilic setting by sampling a graph of size n = 1000 and validate our theoretical results on the impact of depth in the vanilla GCN, Skip-PC and Skip-α. We plot the NTKs for depths d = 1 and d = 8 for the symmetric and row normalized adjacency matrices, using a linear GCN in all cases. Figure 15 illustrates the results for the vanilla GCN, where the plot in the first column shows the heterophilic DC-SBM from which the graph is sampled.
Observations are similar to the homophilic setting, validating our theoretical results from Theorem 4.

![34_image_1.png](34_image_1.png)

Figure 15: **Numerical validation of Theorem 4 using DC-SBM** shown in the first plot of column 1. Columns 2 and 3 illustrate the exact NTKs of depth=1 and 8 for Ssym and Srow, respectively. The second plot in column 1 shows the average gap between in-class and out-of-class blocks from theory.

Validation of the theoretical filter ordering based on the population kernel block difference. Similar to the homophilic case, we validate the theoretical ordering of the filters, Θ˜row ≻ Θ˜sym ≻ Θ˜col ≻ Θ˜adj, based on the population kernel block difference by sampling a graph from a DC-SBM and measuring the Mean Squared Error (MSE) of the prediction from the exact kernel for various depths of the GCN. Figure 16 illustrates that the theoretically obtained ordering of the convolution filters holds very well in practice.

![35_image_0.png](35_image_0.png)

Figure 16: Numerical validation of the theoretical filter ordering based on the kernel class separability. The left plot shows the result from theory based on the block difference of the population NTK. The right plot shows the Mean Squared Error (MSE) of the prediction from the exact kernel of a sampled graph. The order of convolutions based on MSE clearly validates the theory.

Figure 17 shows the impact of depth for the symmetric and row normalized adjacency in the Skip-PC and Skip-α GCNs. We again observe results similar to the homophilic case, and the theoretical findings continue to hold: the class information is still retained at larger depth and the gap does not vanish, and between Srow and Ssym, the heatmaps show that Srow retains the block structure better than Ssym and is devoid of the influence of the degree corrections.

![35_image_1.png](35_image_1.png)

![35_image_2.png](35_image_2.png)

![35_image_3.png](35_image_3.png)

Figure 17: **Numerical validation of DC-SBM for Skip-PC and Skip-**α. It shows the exact NTKs Θ(d) for Ssym and Srow for depths d = 1 and 8.

Numerical Validation of Core-Periphery DC-SBM. In this section, we validate the two scenarios discussed in Section 5.3: core-periphery without community structure and core-periphery with community structure. For the first case, we consider a core-periphery DC-SBM with n/4 nodes as core and the rest as periphery, as shown in the first heatmap of Figure 18. We plot the exact NTKs of depth 2 for symmetric and row normalization using the vanilla GCN, shown in the second and third heatmaps of Figure 18. This clearly demonstrates the theoretical result presented in Corollary 2, where the symmetric normalization exhibits the graph structure and the row normalization yields a constant kernel.

![35_image_4.png](35_image_4.png)

Figure 18: **Numerical validation of Core-Periphery DC-SBM.** It shows the exact NTKs Θ(d) for Ssym and Srow for depth 2.

In the second setting, we consider two communities of equal size n/2 with a core-periphery structure in each, where the link probabilities between the cores of the two communities are higher than those between core and periphery or between the peripheries, as shown in the first heatmap of Figure 19. The exact NTKs for symmetric and row normalization are illustrated in the second and third heatmaps of Figure 19, where we again see that row normalization retains the community structure.

![36_image_0.png](36_image_0.png)

Figure 19: **Numerical validation of Core-Periphery DC-SBM with community structure.** It shows the exact NTKs Θ(d) for Ssym and Srow for depth 2.
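As a pointer for reproducing the flavor of these checks, the sketch below is our own illustration (the released repository linked above may organize this differently). It builds the population adjacency M of the homophilic DC-SBM described in the Experimental Details and evaluates the population NTK of a depth-d linear GCN with XXT = In, Θ˜(d) = Σ_{k=1}^{d+1} S^{d+1}S^{(d+1)T} = (d + 1)S^{d+1}S^{(d+1)T} (cf. (19) and term I of (35)). The printed in-class/out-of-class block gap decays with depth for both normalizations, as Theorem 4 and Corollary 1 predict:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 1000, 0.8, 0.1                 # homophilic DC-SBM as in the Experimental Details
z = np.repeat([0, 1], n // 2)            # two balanced classes, nodes sorted per class
pi = rng.uniform(0, 1, size=n)           # degree corrections

# population ("expected") adjacency M of the DC-SBM
M = np.outer(pi, pi) * np.where(z[:, None] == z[None, :], p, q)
deg = M.sum(axis=1)
S_sym = M / np.sqrt(np.outer(deg, deg))  # D^{-1/2} M D^{-1/2}
S_row = M / deg[:, None]                 # D^{-1} M

def ntk_linear(S, d):
    # population NTK of a depth-d linear GCN with XX^T = I_n:
    # Theta^(d) = (d+1) S^{d+1} S^{(d+1)T}, since all d+1 terms of (15) coincide
    P = np.linalg.matrix_power(S, d + 1)
    return (d + 1) * P @ P.T

def block_gap(K):
    # average in-class block minus average out-of-class block
    in_cls = (K[:n//2, :n//2].mean() + K[n//2:, n//2:].mean()) / 2
    return in_cls - K[:n//2, n//2:].mean()

for name, S in (("S_sym", S_sym), ("S_row", S_row)):
    print(name, [f"{block_gap(ntk_linear(S, d)):.2e}" for d in (1, 2, 4, 8)])
```

With these parameters r = (p − q)/(p + q) ≈ 0.78, so the gap shrinks roughly like r^{2d+2}, mirroring the fading block structure in the heatmaps above.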
## C.4 Experiments On Real Dataset: Cora

**Orthonormal Feature $XX^\top = I_n$ Assumption.** In this section, we present additional experiments on Cora. Since our theory assumes orthonormal features $XX^\top = I_n$, we validate this assumption experimentally in a setup similar to the one described in Section 7. Figure 20 shows the result for $S_{sym}$ and $S_{row}$ for depths 1 and 8. The conclusions derived in the real setting hold here as well, showing that $S_{row}$ preserves the class information better than $S_{sym}$.

![36_image_1.png](36_image_1.png)

Figure 20: **Evaluation on Cora with** $XX^\top = I_n$. The plot shows $S_{sym}$ and $S_{row}$ for depths d = 1 and 8.

**ReLU GCN.** We present the result for the ReLU GCN in this section. Figure 21 shows the result, where the conclusions derived in Section 7 hold very well. Additionally, we plot the average in-class and out-of-class block difference for the vanilla GCN (line plots in the first row of Figure 21) and observe that this difference degrades with depth for each class in Cora, showing the negative impact of depth, which aligns well with the theoretical result.

![36_image_2.png](36_image_2.png)

Figure 21: **Evaluation on Cora dataset.** Heatmaps show the results of the vanilla GCN and the decrease in class separability with depth for $S_{sym}$ and $S_{row}$. The last two show the NTKs of Skip-PC, where minimum and maximum thresholds at the 30th and 70th percentiles are set for better visualization.

Another experimental question is how easy it is to learn the classes that showed good preservation of the in-class and out-of-class gap in the above experiment. The line plot in Figure 21 shows that classes C2 and C5 are well represented by both $S_{sym}$ and $S_{row}$. To study how well this carries over to a trained GCN, we considered a depth-4 vanilla GCN with ReLU activations and used the same hyperparameters mentioned in Section C.1. The results are shown in Figure 22, where we observe that C2 and C5 are indeed learnt well. On the other hand, some classes that showed a small gap are also learnt well by the trained GCN. This needs further investigation, as it is related to the data split: some classes, for instance C6, are poorly represented in the training data. We therefore leave it for further analysis.

![37_image_0.png](37_image_0.png)

Figure 22: **Class wise performance** of a trained GCN of depth 4.

**Linear GCN.** We present the result for the linear GCN with the same setup as described in Section 7 to check the goodness of our theory. The results are illustrated in Figure 23, where we observe that the theory holds even better for the linear GCN than for the ReLU GCN. The class information is better preserved in $S_{row}$ than in $S_{sym}$, especially at higher depth, for GCNs both with and without skip connections. All the conclusions derived in the main section hold here as well.

![37_image_1.png](37_image_1.png)

Figure 23: **Evaluation on Cora using linear GCN.** The first row shows the results for the vanilla GCN for depths 1 and 8. The second row shows the result for Skip-PC and Skip-α for depth 8. The last column shows the average in-class and out-of-class block difference per class for both the symmetric and row normalized adjacencies.

## C.5 Experiments On Real Dataset: Citeseer

![37_image_2.png](37_image_2.png)

Figure 24: **Evaluation on Citeseer dataset using linear GCN.** The first row shows the results for the vanilla GCN for depths 1 and 8. The second row shows the result for Skip-PC and Skip-α for depth 8.

In this section, we validate our theoretical findings on Citeseer while relaxing most of the assumptions.
We consider multi-class node classification (K = 6) using a GCN with linear activations and relax the orthonormal feature condition, so $XX^\top \neq I_n$. The NTKs of the vanilla GCN and of the GCNs with Skip-PC and Skip-α are computed for depths d = 1, 2, 4, 8, 16, and Figure 24 illustrates the results. All the observations made in Section 7 hold here as well, and clear blocks emerge for $S_{row}$, making it the preferable choice, as suggested by the theory.
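The class-separability and prediction scores used to compare kernels throughout this appendix admit simple implementations. The sketch below shows one plausible way to compute the in-class versus out-of-class block difference of a kernel matrix, together with the MSE of kernel-regression predictions from an exact NTK; the exact averaging and thresholding used in our figures may differ in details.

```python
import numpy as np

def block_difference(K, labels):
    """Mean in-class (off-diagonal) minus mean out-of-class kernel entry."""
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    return K[same & off_diag].mean() - K[~same].mean()

def kernel_prediction_mse(K, y, train, test, ridge=1e-8):
    """MSE of kernel regression predictions computed from an exact kernel K."""
    K_tt = K[np.ix_(train, train)] + ridge * np.eye(len(train))
    alpha = np.linalg.solve(K_tt, y[train])
    return float(np.mean((K[np.ix_(test, train)] @ alpha - y[test]) ** 2))
```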
Review 1:

Summary: The paper analyzes deep ReLU and linear GNNs in the Neural Tangent Kernel (NTK) regime under the Degree Corrected Stochastic Block Model (DC-SBM). The authors measure the class separability of the NTK (the mean absolute difference between in- and out-of-class entries), which is suggested to be indicative of the performance of the respective GNN, and establish that:

* The linear GNTK has class separability equal to or better than that of the ReLU GNTK;
* Row normalization aggregation has better class separability than the other alternatives;
* Class separability degrades slowest with depth for the row normalized aggregation GNTK (yet degrades to random chance at infinite depth);
* Skip connections allow the GNTK to not degenerate at infinite depth and to preserve class separability.

The authors perform some preliminary experiments on two real datasets to establish the claims, suggesting that the presented theory could explain why linear GNNs have performed on par with ReLU GNNs in prior works, and why row normalization sometimes empirically outperforms symmetric normalization.

Strengths and Weaknesses:

Disclaimer: I am not an expert on the literature and did not verify the mathematical details.

### Strengths:
* The idea of analyzing the nonlinearity and the aggregation mechanism using the population NTK under the DC-SBM appears novel and well-executed. The theoretical results appear robust.
* The paper is overall well-written and clear.

### Weaknesses:
* The presentation does not fully convince me that the presented theory indeed explains the empirical performance of GNNs. There is only one experiment (Figure 1/2) measuring the actual accuracy of GNNs, and the trends in it aren't robust enough (large error bars) or don't always match the theory's predictions. For example:
  * Section 4 predicts ReLU to do more over-smoothing at large depth than linear networks, but Figure 1/2 shows the opposite trend for row normalized networks.
  * Figure 3 predicts that row normalization will outperform symmetric normalization most at shallow depths, but Figure 1/2 doesn't really show such a trend for linear networks (and the ReLU trend is inverted).
  * Section 6 predicts better performance of skip connections at large depth, but it's not visible in Figure 7 (leftmost vs rightmost heatmaps; in general, it's hard to analyze this figure visually).
* Minor: one of the conclusions of Section 5.1 is that depth is detrimental to GCNs. A degenerating NTK at infinite depth is a common observation for most architectures, but it isn't actionable for depth selection and is unlikely to explain the downward trend in Figure 1/2, as these depths are still too small (see more on this in Requested Changes).
* See more actionable concerns in Requested Changes.

On the whole, I find the paper to study a very interesting theoretical setting with robust theoretical findings, but I think that it lacks actual experiments (GNN and GNTK accuracy, real-world and toy) to convince me that this setting is relevant to real-world GNN tasks. As such, I find this work to be of interest to the TMLR audience, but the claims and evidence are only convincing within the specific theoretical setting; extensions to real GNNs / datasets aren't very compelling.

Requested Changes:
* To demonstrate the best agreement with theory, I believe Figures 1 and 2 should include results for depth $d = 1$.
It is common for any dataset and architecture combination to have a depth sweet spot where accuracy falls off monotonically on both sides of the ideal depth (see for example https://arxiv.org/pdf/2010.09610.pdf), and I wonder if $d = 2$ could just happen to give the best accuracy for *Cora* for reasons unrelated to the theory.
* Highly optionally: the results would look even more robust if the full range of depths up to 8 were evaluated.
* Moreover, could you extend your analysis and all experiments to the depth $d = 0$ NTK / GCN? My potential concern is that at $d = 0$ the class separability of the kernel might be good, while the empirical performance of such shallow GCNs may be lacking, which also risks invalidating the theory.
* I wish all plots demonstrating kernel class separability were accompanied by the actual performance (accuracy, mean squared error) of the respective exact / population NTK(s) (and, optionally, even GNNs) on the graph(s). In addition or alternatively, I wish it were spelled out in more detail / more formally why the kernel class separability as defined in Section 3 is the metric most predictive of performance (i.e., why look at the absolute difference of the in- and out-of-class entries and not, say, their ratios, squared distance, or some other properties of these matrices).
* Further, could the class separability of the kernel be measured on the Cora dataset, to plot alongside Figures 1 and 2 to further validate the theory?
* Page 10, "Infact" -> "In fact"

Broader Impact Concerns: No concerns.

==================================================

Review 2:

Summary: The authors use the machinery of the neural tangent kernel to attempt to understand the influence of convolutions based on the graph Laplacian with row normalisation. Specifically, they prove that linear networks capture class information as well as ReLU networks, that row normalisation preserves the underlying class structure better than other convolution choices, that performance decreases with network depth, and that skip connections retain class information even for infinitely deep models.

Strengths and Weaknesses:

Strengths:
- The paper appears to cite a good amount of relevant literature.
- The problem and content of this paper are likely to be of interest to the TMLR community.
- The results and flow of the paper are mostly well presented, and the theorem statements are relatively clear. I did not check the proofs in the appendix.

Weaknesses:
- Precision can generally be improved. For example, the notation $\dot{\sigma}$ is used for the derivative in the notation section. However, the ReLU activation, which does not admit a classical derivative, is mentioned earlier and throughout the text. Then, in Theorem 1, a statement is made involving $\dot{\sigma}$. It is not clear whether the authors intend to claim that this theorem applies to the ReLU, since no qualification on $\sigma$ is provided in the theorem conditions. If the authors are claiming that this theorem applies, how should one understand $\dot{\sigma}$?
- Assumption 1 seems very strong to me. If $X \in \mathbb{R}^{n \times f}$, does $X X^\top = I$ require that $f \geq n$, i.e. that the dimension of the node features is greater than the number of nodes? Even under this requirement, one still requires the features to be orthogonal, which seems implausible and also not expressive.

Requested Changes: Please address the weaknesses I mentioned above.

Broader Impact Concerns: I do not have any immediate broader impact concerns.

==================================================

Review 3:

Summary:
1. This paper provides a more general result on the NTK of GCNs for node classification settings. The analysis covers multi-layer GCNs for multi-community, homophilous and heterophilous graphs.
2. Theoretically and empirically, the paper studies the comparisons between linear GCNs and ReLU GCNs and between different graph convolutions. It also gives an explanation of over-smoothing from the NTK perspective.
3. This work also covers the analysis of skip connections in GCNs using the NTK.

Strengths and Weaknesses:

Strengths:
1. This analysis is quite novel, from my understanding. The contributions are significant and attractive to the graph neural network community.
2. The analysis seems solid, and the conclusions look correct.
3. The framework is general because it can cover homophilous/heterophilous graphs, graphs with skip connections, and different GCNs.

Weaknesses:
1. Some claims and discussions are not clear or rigorous. For example, in the paragraph on page 7 titled "Comparison of graph convolutions", it is very unclear how you make the comparison by saying whether the kernels are affected by $\pi$ or not. Actually, whenever you want to have some discussion by comparing kernels, I feel the reasoning is not very rigorous, although there are supportive experiments. Another example is right below Figure 4 on page 8; what does it mean by "the population NTKs converge to a constant kernel, **thus** 0 class separability for all the convolutions"? I thought the class separability r should be a given condition for the analysis. Why does it read like a result by "thus 0 class separability"?
2. It is interesting to cover the analysis of heterophily. However, it seems the experiments are only implemented on Cora and Citeseer, which are homophilous graphs. Why not verify the effect of heterophily, even if you use synthetic graphs?

Requested Changes: I will consider a higher final score if the following can be addressed.
1. Please examine the logic in Section 5 and improve the presentation.
2. The introduction to the random graph model in Section 3 also needs improvement. For example, I don't understand what "degree correction vector" means.
3. "Linear GCN" was confusing to me at first; later I realized it refers to SGC. If you want to give it a new name, please clarify it.
4. Some related works on generalization and NTK analysis for GNNs are missing. Please include them in the related works.
[1] Cong et al., 2021, "On Provable Benefits of Depth in Training Graph Convolutional Networks."
[2] Li et al., 2022, "Generalization guarantee of training graph convolutional networks with graph topology sampling."
[3] Tang et al., 2023, "Towards Understanding Generalization of Graph Neural Networks."

Broader Impact Concerns: There are no concerns on the ethical implications of the work.

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper cites a good amount of relevant literature, but I think there is still room for improvement. A novel contribution of the paper is to apply signal propagation (propagation of the NTK from layer to layer) to analyze properties (e.g., expressivity) of graph neural networks. There is a line of work, dating back to 2016, that uses this technique (signal propagation) to analyze the expressivity, spectrum, and trainability of neural networks. A partial list includes:
1. MLP (https://arxiv.org/abs/1606.05340, https://arxiv.org/abs/1611.01232),
2. residual networks (https://arxiv.org/abs/1712.08969),
3. CNN (https://arxiv.org/abs/1806.05393),
4. RNN & LSTM (https://arxiv.org/abs/1806.05394, https://arxiv.org/abs/1901.08987),
5. general deep networks (https://arxiv.org/abs/1912.13053),
6. Tensor programs (https://arxiv.org/abs/1910.12478, cited in the paper).

It would be good to discuss the connection between the current work and the above work, bridging the two communities.

==================================================
# Decentralized Feature-Distributed Optimization For Generalized Linear Models

Anonymous authors
Paper under double-blind review

## Abstract

We consider the "all-for-one" decentralized learning problem for generalized linear models. The features of each sample are partitioned among several collaborating agents in a connected network, but only one agent observes the response variables. To solve the regularized empirical risk minimization in this distributed setting, we apply the Chambolle–Pock primal–dual algorithm to an equivalent saddle-point formulation of the problem. The primal and dual iterations are either in closed form or reduce to coordinate-wise minimization of scalar convex functions. We establish convergence rates for the empirical risk minimization under two different assumptions on the loss function (Lipschitz and square root Lipschitz), and show how they depend on the characteristics of the design matrix and the Laplacian of the network.

## 1 Introduction

Let $\ell : \mathbb{R}\times\mathbb{R}\to\mathbb{R}_{\geq 0}$ denote a given *sample loss function* that is convex and, for simplicity, differentiable in its first argument. Given data points $(x_1, y_1), \ldots, (x_n, y_n)\in\mathbb{R}^d\times\mathbb{R}$ and a convex regularization function $r(\cdot)$, we consider the minimization of the regularized empirical risk in generalized linear models, i.e.,

$$\min_{\theta\in\mathbb{R}^{d}}\ \frac{1}{n}\sum_{i=1}^{n}\ell(x_{i}^{\mathsf{T}}\theta,y_{i})+r(\theta)\,,$$

in a *non-standard* distributed setting where the data features, rather than the samples, are distributed among m agents that communicate through a connected network. The problem can be formally stated as follows. With $A_1,\ldots,A_m$ denoting a partition of $[d]\stackrel{\text{def}}{=}\{1,\ldots,d\}$ into m disjoint blocks, each agent $j\in[m]$ observes the local features $x_{j,i}\stackrel{\text{def}}{=}(x_i)_{A_j}\in\mathbb{R}^{d_j}$ for every $i\in[n]$, where $(u)_A$ denotes the restriction of u to the coordinates enumerated by the index set A. Without loss of generality, we may assume that each $A_j$ is a set of $d_j$ consecutive indices and simply write¹

$$x_{i}=\left[x_{1,i};\quad\cdots;\quad x_{m,i}\right]\,.$$

We also denote the $n\times d_j$ *local design matrix* for agent $j\in[m]$ by

$$X_{j}=\left[x_{j,1}\quad\cdots\quad x_{j,n}\right]^{\mathsf{T}}\,,$$

and the full $n\times d$ design matrix by

$$X=\begin{bmatrix}X_{1}&\cdots&X_{m}\end{bmatrix}=\begin{bmatrix}x_{1}&\cdots&x_{n}\end{bmatrix}^{\mathsf{T}}\,.$$

We assume that only one of the agents, say the first agent, observes the responses $(y_i)_{i=1}^n$, and the other agents only have access to their local features. There is an underlying communication network which can be abstracted by a *connected* undirected graph G over the vertex set V = [m]. If distinct agents j and j′ can communicate directly, then they are adjacent in G and we write $j\sim_G j'$. The *Laplacian* of the communication graph, which is central in the distributed computations of the optimization algorithms, is denoted by L.

¹We denote vertical concatenations using semicolons as the delimiters.
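To make the notation concrete, the following small Python sketch (with arbitrary toy sizes and a ring topology chosen purely for illustration) forms the local design matrices from a feature partition and the Laplacian of the communication graph.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 6, 3                            # samples, features, agents (toy sizes)
X = rng.standard_normal((n, d))

# Partition [d] into m blocks A_1, ..., A_m of consecutive indices and
# form the local design matrices X_j = X[:, A_j].
blocks = np.array_split(np.arange(d), m)
X_local = [X[:, A_j] for A_j in blocks]
assert np.allclose(np.hstack(X_local), X)    # X = [X_1  ...  X_m]

# Laplacian L = D - A of a ring over the m agents (any connected graph works).
A = np.zeros((m, m))
for j in range(m):
    A[j, (j + 1) % m] = A[(j + 1) % m, j] = 1.0
L = np.diag(A.sum(axis=1)) - A
```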
Using the shorthand

$$\ell_{i}(\cdot)\ \stackrel{\text{def}}{=}\ \ell(\cdot,y_{i})$$

that we use henceforth to simplify the notation, we seek an approximation to the (regularized) empirical risk minimizer

$$\widehat{\theta}=\operatorname*{argmin}_{\theta\in\mathbb{R}^{d}}\ \frac{1}{n}\sum_{i=1}^{n}\ell_{i}(x_{i}^{\mathsf{T}}\theta)+r(\theta)\,,\tag{1}$$

where the regularizer $r(\cdot)$ is typically used to induce a certain structure (e.g., sparsity) in $\widehat{\theta}$. To solve this optimization problem in our distributed setting, we use a primal–dual formulation that accommodates local calculations. Specifically, with $\ell_i^* : \mathbb{R}\to\mathbb{R}$ denoting the *convex conjugate* of the function $\ell_i(\cdot)$, the minimization in (1) can be formulated as the saddle-point problem

$$\min_{\theta\in\mathbb{R}^{d}}\max_{\lambda_{1}\in\mathbb{R}^{n}}\ \frac{1}{n}\lambda_{1}^{\mathsf{T}}X\theta-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})+r(\theta)\,,$$

where $\lambda_1=[\lambda_{1,1};\ \cdots;\ \lambda_{1,n}]$ is the dual variable. The regularizer $r(\theta)$ might also be represented using its conjugate, making the objective of the resulting saddle-point problem linear in the primal variable θ. However, to avoid the need for the "dualization" of the regularizer, we focus on the special but important case in which the regularizer is *separable* with respect to the agents. Partitioning the coordinates of the primal variable θ according to the partitioning of the features among the agents as

$$\theta=\left[\theta_{1};\quad\cdots;\quad\theta_{m}\right]\,,$$

with $\theta_j\in\mathbb{R}^{d_j}$, we assume that the regularizer takes the form

$$r(\theta)=\sum_{j=1}^{m}r_{j}(\theta_{j})\,,\tag{2}$$

where for each $j\in[m]$ the convex function $r_j(\cdot)$ has a simple *proximal mapping* that is available to the jth agent. Giving each agent its own version of the dual variable, denoted by $\lambda_j\in\mathbb{R}^n$, we can express (1) in a form which is amenable to distributed computations as

$$\begin{aligned}
\min_{\theta\in\mathbb{R}^{d}}\ \max_{\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}^{n}}\quad&\sum_{j=1}^{m}\Big[r_{j}(\theta_{j})+\frac{1}{n}\lambda_{j}^{\mathsf{T}}X_{j}\theta_{j}\Big]-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})\\
&\text{subject to}\quad L\begin{bmatrix}\lambda_{1}&\cdots&\lambda_{m}\end{bmatrix}^{\mathsf{T}}=0\,.
\end{aligned}\tag{3}$$

The constraint involving the Laplacian simply enforces $\lambda_j=\lambda_{j'}$ for all $j\sim_G j'$. With ⟨·,·⟩ denoting the usual (Frobenius) inner product henceforth, we can use the Lagrangian form of the inner optimization to express (3) equivalently as

$$\begin{aligned}
&\min_{\theta\in\mathbb{R}^{d}}\ \max_{\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}^{n}}\ \min_{V\in\mathbb{R}^{n\times m}}\ \frac{1}{n}\Big\langle V^{\mathsf{T}},\,L\begin{bmatrix}\lambda_{1}&\cdots&\lambda_{m}\end{bmatrix}^{\mathsf{T}}\Big\rangle+\sum_{j=1}^{m}\Big[r_{j}(\theta_{j})+\frac{1}{n}\lambda_{j}^{\mathsf{T}}X_{j}\theta_{j}\Big]-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})\\
&=\min_{\theta\in\mathbb{R}^{d}}\ \min_{V\in\mathbb{R}^{n\times m}}\ \max_{\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}^{n}}\ \sum_{j=1}^{m}\Big[r_{j}(\theta_{j})+\frac{1}{n}\lambda_{j}^{\mathsf{T}}X_{j}\theta_{j}\Big]-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})+\frac{1}{n}\Big\langle VL,\begin{bmatrix}\lambda_{1}&\cdots&\lambda_{m}\end{bmatrix}\Big\rangle\\
&=\min_{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}}\ \max_{\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}^{n}}\ -\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})+\sum_{j=1}^{m}\Big[r_{j}(\theta_{j})+\frac{1}{n}\lambda_{j}^{\mathsf{T}}\big(X_{j}\theta_{j}+VLe_{j}\big)\Big]\,,
\end{aligned}\tag{4}$$

where the second line follows from strong duality.
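As a simple illustration of the dual representation above, take the squared loss $\ell_i(u)=\frac{1}{2}(u-y_i)^2$. Its convex conjugate is

$$\ell_{i}^{*}(\lambda)=\sup_{u\in\mathbb{R}}\ \lambda u-\frac{1}{2}(u-y_{i})^{2}=\lambda(y_{i}+\lambda)-\frac{1}{2}\lambda^{2}=\lambda y_{i}+\frac{1}{2}\lambda^{2}\,,$$

with the supremum attained at $u=y_i+\lambda$, so in this case the inner maximization over each $\lambda_{1,i}$ in the saddle-point problem is a strongly concave quadratic and can be carried out in closed form.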
In Section 2 we describe the iterations, based on the Chambolle–Pock primal–dual algorithm (Chambolle & Pock, 2016), that solve the saddle-point problem (4). Our main result and the assumptions under which it holds are provided in Section 3. Some numerical experiments are provided in Section 4. Proofs of the main result can be found in Appendix A.

## 1.1 Related Work

Minimization of a sum of (convex) functions is the most studied problem in distributed optimization due to its prevalence in machine learning. The most commonly considered setting in the literature is by far the *sample-distributed* setting, where each agent merely has access to one of the summands of the objective function, which can be computed using the locally available samples. The literature primarily considers two different communication models. *Centralized* first-order methods have a main computing agent that aggregates the local (sub)gradient evaluations of the other agents, updates the iterate, and sends it back to the other agents. Therefore, the communication time for these methods grows linearly with the *diameter* of the underlying network. In contrast, *decentralized* first-order methods do not rely on a single aggregating agent; every agent maintains and updates a local copy of the candidate minimizer through local computations and communications with its immediate neighbors, and consistency of the solution across the agents is achieved either through local averaging or through consensus constraints. Due to the diffusion-style nature of the iterations, the convergence rate of these methods depends on a certain notion of *spectral gap* of the communication graph. Many algorithms have been introduced for sample-distributed decentralized convex optimization; surveys of the literature can be found in (Yang et al., 2019; Gorbunov et al., 2020), and prominent references include (Johansson et al., 2008; Nedić & Ozdaglar, 2009; Wang & Elia, 2011; Zhu & Martinez, 2012; Duchi et al., 2012; Scaman et al., 2017). In general, the computation and communication complexities of these algorithms for finding an ε-accurate solution range from the "slow rate" of $O(\varepsilon^{-2})+O(\varepsilon^{-1})$ for Lipschitz-continuous convex functions to the "linear rate" of $O(\log(1/\varepsilon))$ for smooth and strongly convex functions. Lower bounds and (nearly) optimal algorithms for a few common objective classes are established in (Scaman et al., 2019).

The "feature-distributed" setting that we consider has been studied to a lesser extent, but has found important applications such as *sensor fusion* (Sasiadek, 2002) and *cross-silo federated learning* (Kairouz et al., 2021). This setting is also relevant in *parallelized computing* to amplify the performance of resource-limited computing agents in large-scale problems. Centralized federated learning protocols with distributed features, in which the agents communicate with a server, are proposed in (Hu et al., 2019) and (Chen et al., 2020). Hu et al. (2019) proposed the FDML method and, under convexity and smoothness of the objective, established a regret bound for SGD that decays with the number of iterations T at the rate of $O(1/\sqrt{T})$. It is also assumed in this result that the iterates never exit a neighborhood of the true parameter, which implicitly imposes strong convexity on the objective. Chen et al. (2020) proposed a method called VAFL, in which a server maintains a global parameter and each client operates on local features and parameters that determine the client's corresponding predictor. The clients and the server communicate in an asynchronous fashion and exchange the values of the clients' predictors and the gradients of the sample loss with respect to these predictors. Under certain models of the communication delays that impose the asynchrony, a variant of stochastic gradient descent is shown to converge at a rate of O(1/T) under strong convexity.
The performance of VAFL in the case of smooth nonconvex objectives and nonlinear predictors that are separable across the agents is also considered in (Chen et al., 2020). However, in this general setting, where the guarantees are inevitably weaker, only the temporal average of the squared norm of the gradients (in expectation with respect to the SGD samples) is shown to converge at a rate of $O(1/\sqrt{T})$.

The CoLa algorithm of He et al. (2018) considers a ubiquitous class of convex minimization problems in machine learning and statistics that involve *linear predictors*, in the decentralized distributed setting. Following the formulation of (Smith et al., 2018), a pair of convex programs that are dual to each other are considered in (He et al., 2018), depending on whether the data is split across the samples or across the features. The latter setting is the closest related work in the literature to the present paper. The main step in each iteration of the CoLa algorithm is a regularized convex quadratic minimization. This minimization step is generally nontrivial and needs to be performed by a dedicated subroutine, though the analysis accommodates subroutines that compute inexact solutions. In contrast, our convex–concave saddle-point formulation of the problem leads to iterations in which every agent evaluates either a closed-form expression or a simple proximal operator, except for one agent whose computations are as simple as performing a one-dimensional strongly convex minimization for each dual coordinate. Furthermore, while our algorithm achieves an accuracy of O(1/T) after T iterations, similar to CoLa (in the general convex setting), our convergence analysis applies to the broader class of square root Lipschitz loss functions, defined below in Section 3, which includes the usual smooth loss functions as a special case (Srebro et al., 2010, Lemma 2.1).

Arablouei et al. (2015) and Gratton et al. (2018) present algorithms based on ADMM for solving decentralized least-squares problems with distributed features, and establish asymptotic convergence. A feature-decentralized algorithm for logistic regression is presented in (Slavković et al., 2007), though no convergence guarantees are given. Finally, the primal–dual algorithm we present in the next section is related to an application of the distributed saddle-point algorithm of Mateos-Núñez & Cortés (2017), where the goal is minimizing a sum of functions of independent variables subject to linear inequality constraints (see Remark III.1 in that paper). The general algorithm considered in (Mateos-Núñez & Cortés, 2017) is a (projected) gradient descent–ascent method. Consequently, its convergence analysis relies on certain boundedness assumptions on the iterates and the corresponding gradients. Furthermore, while it is applicable to a broader set of saddle-point problems than our method, this gradient descent–ascent method is only shown to converge at the rate $1/\sqrt{T}$.

## 1.2 Contributions

Using the dual representation of the loss functions $\ell_i(\cdot)$ in (1), we convert the corresponding minimization problem to a saddle-point problem that enables us to perform the decentralized minimization in the unconventional feature-distributed setting. Using the Chambolle–Pock primal–dual algorithm as a natural method to solve the resulting saddle-point problem, we provide convergence guarantees for the algorithm in terms of the primal objective (rather than the primal–dual gap).
In particular, we show convergence to the minimum primal value at a rate of 1/T, assuming that the loss functions $\ell_i(\cdot)$ are either Lipschitz or square root Lipschitz. The square root Lipschitz functions include the more common smooth (Lipschitz-gradient) loss functions. In each iteration, the updates to the primal variables are either in closed form or involve the evaluation of a simple proximal mapping. Updating the dual variable corresponding to the "main agent" among the m agents only requires computing the Moreau envelopes of n univariate functions, which is highly parallelizable and can be solved efficiently. Updating the dual variables for the rest of the agents is even simpler and is expressed in closed form.

## 2 The Primal–Dual Algorithm

Let f and g be convex functions such that f is smooth and has a tractable first-order oracle, and the possibly nonsmooth g admits a tractable proximal mapping. Furthermore, let h be a convex function whose convex conjugate, denoted by $h^*$, admits a tractable proximal mapping. The Chambolle–Pock primal–dual algorithm (Chambolle & Pock, 2016) solves the saddle-point problem

$$\min_{z}\max_{\lambda}\ f(z)+g(z)+\lambda^{\mathsf{T}}Kz-h^{*}(\lambda)\,,$$

for a given matrix K. Denoting the columns of V by $v_1,\ldots,v_m$, and the Kronecker product by ⊗, the optimization problem (4) fits into the above formulation by choosing

$$z=\left[\theta_{1};\quad\cdots;\quad\theta_{m};\quad v_{1};\quad\cdots;\quad v_{m}\right]\,,\qquad\lambda=\left[\lambda_{1};\quad\cdots;\quad\lambda_{m}\right]\,,$$

$$K=\frac{1}{n}\begin{bmatrix}\operatorname{blkdiag}\big(X_{1},\,X_{2},\,\ldots,\,X_{m}\big)&\ L\otimes I_{n}\end{bmatrix}\,,$$

$$f\equiv 0\,,\qquad g(z)=r(\theta)=\sum_{j=1}^{m}r_{j}(\theta_{j})\,,\qquad h^{*}(\lambda)=\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})\,.$$

The update rule of the Chambolle–Pock algorithm can be summarized as

$$z_{t+1}=\operatorname*{argmin}_{z\in\mathbb{R}^{d+nm}}\ f(z_{t})+\langle\nabla f(z_{t}),z-z_{t}\rangle+g(z)+\lambda_{t}^{\mathsf{T}}Kz+\frac{1}{2\tau}\|z-z_{t}\|_{2}^{2}\,,$$

$$\lambda_{t+1}=\operatorname*{argmin}_{\lambda\in\mathbb{R}^{nm}}\ h^{*}(\lambda)-\lambda^{\mathsf{T}}K\left(2z_{t+1}-z_{t}\right)+\frac{1}{2\sigma}\|\lambda-\lambda_{t}\|_{2}^{2}\,,$$

for appropriately chosen parameters τ, σ > 0.
Writing this update explicitly for our special case, we have

$$z_{t+1}=\operatorname*{argmin}_{z\in\mathbb{R}^{d+mn}}\ r\big((z)_{[d]}\big)+\lambda_{t}^{\mathsf{T}}Kz+\frac{1}{2\tau}\|z-z_{t}\|_{2}^{2}\,,$$

$$\lambda_{t+1}=\operatorname*{argmin}_{\lambda\in\mathbb{R}^{mn}}\ \frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})-\lambda^{\mathsf{T}}K(2z_{t+1}-z_{t})+\frac{1}{2\sigma}\|\lambda-\lambda_{t}\|_{2}^{2}\,.$$

Expanding the linear term in the primal update, the equivalent local primal update for each agent $j\in[m]$ can be written as

$$\theta_{j,t+1}=\operatorname*{argmin}_{\theta_{j}\in\mathbb{R}^{d_{j}}}\ r_{j}(\theta_{j})+\frac{1}{n}\lambda_{j,t}^{\mathsf{T}}X_{j}\theta_{j}+\frac{1}{2\tau}\|\theta_{j}-\theta_{j,t}\|_{2}^{2}\,,\tag{5}$$

$$v_{j,t+1}=\operatorname*{argmin}_{v_{j}\in\mathbb{R}^{n}}\ \frac{1}{n}\Big(\sum_{j'\in[m]:\ j\sim_{G}j'}\lambda_{j,t}-\lambda_{j',t}\Big)^{\mathsf{T}}v_{j}+\frac{1}{2\tau}\|v_{j}-v_{j,t}\|_{2}^{2}\,.\tag{6}$$

Similarly, the equivalent local dual update for each agent $j\in[m]\setminus\{1\}$ is

$$\lambda_{j,t+1}=\operatorname*{argmin}_{\lambda_{j}\in\mathbb{R}^{n}}\ -\frac{1}{n}\lambda_{j}^{\mathsf{T}}\Big(\sum_{j'\in[m]:\ j\sim_{G}j'}2(v_{j,t+1}-v_{j',t+1})-v_{j,t}+v_{j',t}\Big)-\frac{1}{n}\lambda_{j}^{\mathsf{T}}X_{j}\left(2\theta_{j,t+1}-\theta_{j,t}\right)+\frac{1}{2\sigma}\|\lambda_{j}-\lambda_{j,t}\|_{2}^{2}\,.\tag{7}$$

The fact that $h^*(\cdot)$ depends entirely on $\lambda_1$ makes the local dual update for the first agent (i.e., j = 1) different, taking the form

$$\lambda_{1,t+1}=\operatorname*{argmin}_{\lambda_{1}\in\mathbb{R}^{n}}\ \frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})-\frac{1}{n}\lambda_{1}^{\mathsf{T}}\Big(\sum_{j'\in[m]:\ 1\sim_{G}j'}2(v_{1,t+1}-v_{j',t+1})-v_{1,t}+v_{j',t}\Big)-\frac{1}{n}\lambda_{1}^{\mathsf{T}}X_{1}\left(2\theta_{1,t+1}-\theta_{1,t}\right)+\frac{1}{2\sigma}\|\lambda_{1}-\lambda_{1,t}\|_{2}^{2}\,,\tag{8}$$

where the scalars $(\lambda_{1,i})_i$ denote the coordinates of $\lambda_1$ and should not be confused with the vectors $(\lambda_{1,t})_t$. The primal update (5) is simply an evaluation of the proximal mapping of $\tau r_j$, denoted by $\operatorname{prox}_{\tau r_j}(u)=\operatorname{argmin}_{u'}\ \tau r_j(u')+\|u'-u\|_2^2/2$. The updates (6) and (7) can also be solved in closed form. While (8) does not admit a similar closed-form expression, it can be equivalently written in terms of the functions $\ell_1(\cdot),\ldots,\ell_n(\cdot)$ using the separability of the objective function and the relation between the *Moreau envelope* of a function and its convex conjugate (Bauschke & Combettes, 2011, Proposition 13.24).
Therefore, we can summarize the iterations as

$$\theta_{j,t+1}=\operatorname{prox}_{\tau r_{j}}\Big(\theta_{j,t}-\frac{\tau}{n}\,X_{j}^{\mathsf{T}}\lambda_{j,t}\Big)\,,\qquad\text{for }j\in[m]\,,\tag{9}$$

$$v_{j,t+1}=v_{j,t}-\frac{\tau}{n}\sum_{j'\in[m]:\ j\sim_{G}j'}\lambda_{j,t}-\lambda_{j',t}\,,\qquad\text{for }j\in[m]\,,\tag{10}$$

$$\lambda_{j,t+1}=\lambda_{j,t}+\frac{\sigma}{n}X_{j}\left(2\theta_{j,t+1}-\theta_{j,t}\right)+\frac{\sigma}{n}\sum_{j'\in[m]:\ j\sim_{G}j'}2(v_{j,t+1}-v_{j',t+1})-v_{j,t}+v_{j',t}\,,\qquad\text{for }j\in[m]\setminus\{1\}\,,\tag{11}$$

$$\lambda_{1,t+1}=\operatorname*{argmin}_{\lambda_{1}\in\mathbb{R}^{n}}\ \frac{1}{n}\sum_{i=1}^{n}\ell_{i}\Big(\frac{n}{\sigma}\big(\lambda_{1,t+1/2}-\lambda_{1}\big)_{i}\Big)+\frac{1}{2\sigma}\|\lambda_{1}\|_{2}^{2}\,,\tag{12}$$

where $(u)_i$ denotes the ith coordinate of a vector u, and the "intermediate dual iterate" $\lambda_{1,t+1/2}$ is defined as

$$\lambda_{1,t+1/2}=\lambda_{1,t}+\frac{\sigma}{n}X_{1}\left(2\theta_{1,t+1}-\theta_{1,t}\right)+\frac{\sigma}{n}\sum_{j'\in[m]:\ 1\sim_{G}j'}2(v_{1,t+1}-v_{j',t+1})-v_{1,t}+v_{j',t}\,.\tag{13}$$

Interestingly, (12) is a separable optimization with respect to the coordinates of $\lambda_1$, i.e., for each i ∈ [n] we have

$$\left(\lambda_{1,t+1}\right)_{i}=\operatorname*{argmin}_{\lambda\in\mathbb{R}}\ \frac{1}{n}\,\ell_{i}\Big(\frac{n}{\sigma}\big((\lambda_{1,t+1/2})_{i}-\lambda\big)\Big)+\frac{1}{2\sigma}\lambda^{2}\,.$$

Therefore, (12) admits efficient and parallelizable solvers.
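To illustrate how the iterations (9)–(13) fit together, the following minimal Python sketch instantiates them for unregularized least squares: with $\ell_i(u)=(u-y_i)^2/2$ the coordinate-wise problem (12) is solvable in closed form, and with $r_j\equiv 0$ the proximal step in (9) reduces to the identity. The complete communication graph, the iteration count, and the step sizes (which satisfy $\tau\sigma\|K\|^2\leq 1$) are illustrative choices, not the tuned values of the theorem below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 256, 32, 4                         # samples, features, agents
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
blocks = np.array_split(np.arange(d), m)     # feature partition A_1, ..., A_m
Xs = [X[:, b] for b in blocks]

A = np.ones((m, m)) - np.eye(m)              # complete communication graph
L = np.diag(A.sum(axis=1)) - A               # its Laplacian

chi = np.linalg.norm(X, 2)                   # ||X|| <= chi
D = np.linalg.norm(L, 2)                     # ||L|| <= D, so ||K|| <= (chi + D)/n
sigma = n / (chi + D)
tau = n**2 / ((chi + D) ** 2 * sigma)        # tau * sigma * ||K||^2 <= 1

theta = [np.zeros(len(b)) for b in blocks]
V = np.zeros((n, m))                         # columns v_1, ..., v_m
lam = np.zeros((n, m))                       # columns lambda_1, ..., lambda_m

for t in range(2000):
    # (9): prox of r_j = 0 is the identity; (10): column j of lam @ L equals
    # the sum over neighbors j' of (lambda_j - lambda_j').
    theta_new = [theta[j] - (tau / n) * Xs[j].T @ lam[:, j] for j in range(m)]
    V_new = V - (tau / n) * (lam @ L)

    lam_new = np.empty_like(lam)
    for j in range(m):
        # common "drive" term appearing in both (11) and (13)
        drive = Xs[j] @ (2 * theta_new[j] - theta[j]) + (2 * V_new - V) @ L[:, j]
        if j == 0:
            a = lam[:, 0] + (sigma / n) * drive          # lambda_{1,t+1/2}, eq. (13)
            # (12) for the squared loss: minimizing, per coordinate in s,
            # (1/n) * ((n/sigma)(a_i - s) - y_i)^2 / 2 + s^2 / (2 sigma)
            # gives s = (n a_i - sigma y_i) / (sigma + n).
            lam_new[:, 0] = (n * a - sigma * y) / (sigma + n)
        else:
            lam_new[:, j] = lam[:, j] + (sigma / n) * drive   # eq. (11)
    theta, V, lam = theta_new, V_new, lam_new

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("distance to least-squares solution:",
      np.linalg.norm(np.concatenate(theta) - theta_hat))
```

For losses other than the squared loss, the closed-form line for agent 1 is replaced by any one-dimensional strongly convex solver (e.g., bisection on the derivative), applied independently, and hence in parallel, to each of the n coordinates.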
## 3 Convergence Guarantees

We begin by stating a few assumptions that will be used to provide convergence guarantees for the primal iterates $(\theta_{j,t})_{t\geq 1}$. Recall the assumptions that the loss function ℓ(·,·) is *nonnegative* and that the regularizer is separable as in (2). We will provide convergence rates for two different classes of loss functions. First, the Lipschitz loss functions, for which there exists a constant ρ ≥ 0 such that

$$|\ell(u,w)-\ell(v,w)|\leq\rho\,|u-v|\,,\qquad\text{for all }u,v,w\in\mathbb{R}\,.$$

By differentiability of ℓ(·,·) in its first argument, the condition above is equivalent to

$$\left|\frac{\mathrm{d}\ell(u,v)}{\mathrm{d}u}\right|\leq\rho\,,\qquad\text{for all }u,v\in\mathbb{R}\,.\tag{Lip.}$$

Examples of Lipschitz loss functions are the absolute loss, the Huber loss, and the logistic loss. Second, the *square root Lipschitz* loss functions, for which there exists a constant ρ ≥ 0 such that

$$\big|\sqrt{\ell(u,w)}-\sqrt{\ell(v,w)}\big|\leq\frac{\rho}{2}\,|u-v|\,,\qquad\text{for all }u,v,w\in\mathbb{R}\,.$$

Again, invoking the differentiability of ℓ(·,·), we can equivalently write

$$\left|\frac{\mathrm{d}\ell(u,v)}{\mathrm{d}u}\right|\leq\rho\sqrt{\ell(u,v)}\,,\qquad\text{for all }u,v\in\mathbb{R}\,.\tag{√-Lip.}$$

Examples of square root Lipschitz loss functions are the squared loss and the Huber loss.

Furthermore, we assume that for some known constant R > 0 the empirical risk minimizer $\widehat{\theta}$ is bounded as

$$\big\|\widehat{\theta}\big\|_{2}\leq R\,.\tag{minimizer bound}$$

We also assume that the agents are provided with a constant χ that bounds the usual operator norm of the design matrix as

$$\|X\|\leq\chi\,.\tag{design bound}$$

A constant δ > 0 that bounds the spectral gap of the network as

$$\big\|L^{\dagger}\big\|\leq\delta^{-1}\,,\tag{spectral gap}$$

with $M^{\dagger}$ denoting the Moore–Penrose pseudoinverse of the matrix M, as well as a constant D > 0 that bounds the operator norm of the Laplacian as

$$\|L\|\leq D\,,\tag{Laplacian bound}$$

are also provided to the agents. Because $n\|K\|\leq\max_{j\in[m]}\|X_{j}\|+\|L\otimes I\|\leq\|X\|+\|L\|$, instead of assuming an additional bound for ∥K∥ we will use the bound $\|K\|\leq(\chi+D)/n$.

**Theorem 1.** *Suppose that the m agents are given the positive constants R, χ, δ and D that respectively satisfy (minimizer bound), (design bound), (spectral gap), and (Laplacian bound), so that they can choose*

$$\sigma=\frac{m^{1/2}n^{3/2}\rho}{(\chi+D)R\sqrt{1+2\chi^{2}/\delta^{2}}}\qquad\text{and}\qquad\tau=\frac{n^{2}}{(\chi+D)^{2}\,\sigma}\,.$$

*Denote the temporal average of the vectors $\theta_{j,t}$ over the first T ≥ 1 iterations by*

$$\overline{\theta}_{j}=\frac{1}{T}\sum_{t=1}^{T}\theta_{j,t}\,,\qquad\text{for }j\in[m]\,,\tag{14}$$

*and let $\overline{\theta}=[\overline{\theta}_{1};\ \cdots;\ \overline{\theta}_{m}]$. Under the Lipschitz loss model (Lip.) we have*

$$\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)+r(\overline{\theta})\leq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{2(\chi+D)R\rho}{(n/m)^{1/2}\,T}\sqrt{1+\frac{2\chi^{2}}{\delta^{2}}}\,.\tag{15}$$

*Similarly, under the square root Lipschitz loss model (√-Lip.) and for $T\geq 2mn\rho^{2}/\sigma$ we have an "isomorphic convergence" given by*

$$\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)+r(\overline{\theta})\leq\left(1+\frac{2(\chi+D)R\rho}{m^{1/2}n^{3/2}\,T}\sqrt{1+\frac{2\chi^{2}}{\delta^{2}}}\right)\left(\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{(\chi+D)R\rho}{(n/m)^{1/2}\,T}\sqrt{1+\frac{2\chi^{2}}{\delta^{2}}}\right).\tag{16}$$

The prescribed τ and σ are "optimized" for the Lipschitz model. The well-tuned choice of τ and σ under the square root Lipschitz model is slightly different and depends on the minimum value of the objective; for simplicity, we used the former in the theorem for both models.

For a better understanding of the convergence bounds (15) and (16), it is worth considering more interpretable approximations of the quantities D and δ. With ∆(G) denoting the maximum degree of the graph G, we have the elementary bound ∥L∥ ≤ 2∆(G), so it suffices to choose D ≥ 2∆(G). Furthermore, for a connected graph, $\|L^{\dagger}\|$ is the reciprocal of the second smallest eigenvalue of L, and invoking an inequality due to Mohar (1991, Theorem 2.3) that relates the spectral gap, the diameter, and the maximum degree of a graph, we have $\|L^{\dagger}\|\geq 2\left(\operatorname{diam}(G)-1-\log(m-1)\right)/\Delta(G)$, which can provide a general bound on how large δ can possibly be.
Another inequality (Mohar, 1991, Theorem 4.2), attributed to Brendan McKay, provides the bound $\|L^{\dagger}\|\leq m\operatorname{diam}(G)/4$, which implies a conservative choice of $\delta\leq 4/(m\operatorname{diam}(G))$. Networks that are *(spectral) expanders* are more favorable, as they typically have a larger spectral gap and a smaller maximum degree simultaneously. For instance, for k-regular *Ramanujan graphs* we can choose $\delta=k-2\sqrt{k-1}$ (Lubotzky et al., 1988; Mohar, 1992).

The algorithm can be generalized by assigning weights to the edges of the network and choosing L to be the Laplacian of the weighted network. The effect of using weighted edges is that the simple summations over the neighboring agents in (10), (11), and (13) (and thereby in (12)) become weighted summations. Using weighted edges allows us, in principle, to optimize the bounds (15) and (16) by adjusting the edge weights.

We have shown that we can solve (1) in the feature-distributed setting and achieve a convergence rate of O(1/T) under relatively simple assumptions. The iterations each agent has to solve are rather simple, including (12) thanks to its separability. However, there are a few limitations of the proposed framework that have to be considered. First, the agents cannot rely only on local information to choose τ and σ; in general, they can obtain the required global information at the cost of extra communications. Second, the scope of the algorithm is limited by the fact that the loss function acts on the linear predictors $x_{i}^{\mathsf{T}}\theta$. It is worth mentioning, however, that this limitation is basically necessary to stay in the realm of convex optimization; we are not aware of any widely used nonlinear predictor whose composition with standard loss functions is convex. Third, the considered saddle-point formulation incurs a significant communication and computation cost associated with the iterates $(\lambda_{j,t})$ and $(v_{j,t})$; it is not clear whether this is inherent to the problem.

## 4 Numerical Experiments

We provide several numerical experiments to illustrate the behavior of the proposed algorithm with a varying number of agents and different communication graphs. In the case where computation is of greater cost than communication, we find that our algorithm can exploit parallelism to improve performance. We solve the least squares problem

$$\operatorname*{minimize}_{\theta}\ \frac{1}{2}\|X\theta-y\|_{2}^{2}$$

for a synthetic dataset of $2^{14}=16384$ samples and $2^{11}=2048$ features, so that X is a 16384 × 2048 matrix. To construct the synthetic dataset, the design matrix X, the ground truth vector $\theta_{\star}$, and the noise vector e are all populated by i.i.d. samples of the standard normal distribution. The corresponding noisy response vector y is then computed as $y=X\theta_{\star}+e$. In all experiments, the features are partitioned equally among the agents, i.e., each agent has access to exactly d/m features. We explore the following communication graph structures:

- *Complete Graph*: All agents are connected to all other agents.
- *Star Graph*: All agents are connected only to the first agent.
- *Erdős–Rényi Graph*: Each of the $\binom{m}{2}$ possible pairs of agents is connected with probability p ∈ {0.1, 0.5}, independent of the other connections. To avoid violating the connectivity requirement of the communication graph (with high probability), we only consider graphs of 8 or more agents in the case p = 0.5, and graphs of 32 or more agents in the case p = 0.1.
- *2D Lattice Graph*: The agents are arranged in 2D space as a square lattice. Each agent is connected to its cardinal and diagonal neighbors. The first agent is located at one of the four center-most lattice points.
- *Random Geometric Graph*: Agents are assigned positions in the 2D unit square uniformly at random. A pair of agents is connected if the Euclidean distance between their positions is less than 0.3. Again, to avoid violating the connectivity requirement of the communication graph (with high probability), we only consider 32 agents or more.

![8_image_0.png](8_image_0.png)

Figure 1: Plots depicting algorithm progress for varying communication graph structures and numbers of agents. The single-agent progress is included in all plots for reference. With $L_t$ denoting the objective (i.e., the regularized empirical risk) at $\theta_t$, and $L_\star$ denoting the minimum value of the objective, the vertical axis represents the base-10 logarithm of the relative error, defined as $\log_{10}\frac{L_t-L_\star}{L_0-L_\star}$. The horizontal axis represents the number of iterations completed.

As a baseline, we solve the single-agent problem using the proposed primal–dual algorithm but with the Lagrange multiplier v terms fixed at zero; we recognize, however, that this problem could also be solved by other algorithms, e.g., gradient descent. (For the single-agent case, the Laplacian constraints of (3) are trivially satisfied and can be omitted.) Figure 1 shows the convergence behavior of the proposed algorithm for each of the aforementioned communication graph structures. The complete graph tends to converge faster than any other graph for a fixed number of agents, and performs best at 64 agents (with 32 features per agent) instead of continually improving with an increasing number of agents. Similarly, the Erdős–Rényi graphs perform best at 128 and 256 agents for p = 0.5 and p = 0.1, respectively. Convergence degrades as p decreases. The random geometric graph performs very similarly to the Erdős–Rényi graph with p = 0.1. Both the star and 2D lattice graphs perform increasingly worse as the number of agents increases. We speculate this is caused by a large number of comparatively small eigenvalues of the associated Laplacian matrices.

![9_image_0.png](9_image_0.png)

Figure 2: Plots depicting algorithm progress for Erdős–Rényi (p = 0.1) and random geometric graphs under the given cost paradigm. With $L_t$ denoting the objective (i.e., the regularized empirical risk) at $\theta_t$, and $L_\star$ denoting the minimum value of the objective, the vertical axis represents the base-10 logarithm of the relative error, defined as $\log_{10}\frac{L_t-L_\star}{L_0-L_\star}$. The horizontal axis represents units of operations per agent completed (not iterations), normalized such that the single-agent case completes one iteration per unit of operation (i.e., the single agent completes 32 iterations). Explicitly, iteration t corresponds to $\frac{n(4(d/m)+2\Delta(G)+7)+5(d/m)}{n(4d+1)+5d}\,t$ on the horizontal axis (except for the single-agent case, where iteration t corresponds to t on the horizontal axis). In short, settings with fewer operations per agent per iteration complete more iterations.

If we assume a situation where cost is dominated by computation rather than communication, the proposed algorithm can achieve performance comparable to the single-agent case even on relatively sparse graphs. Recall that n, m, and d represent the number of samples, agents, and features, respectively, and that ∆(G) denotes the maximum degree of the communication graph G.
One can show that each iteration of the proposed algorithm requires each agent to complete $n(4(d/m)+2\Delta(G)+7)+5(d/m)$ floating point operations.² In the single-agent case, one can show that $n(4d+1)+5d$ floating point operations are needed per iteration.³ We also compare scenarios for a fixed number of operations per agent. As the number of agents increases, X and θ are split over more and more agents, effectively parallelizing the problem. This leads to a decrease in the number of operations per agent for the matrix–vector multiplies in (9) and (11), which dominate the operation cost. Figure 2 illustrates how, under this cost paradigm, the relatively sparse Erdős–Rényi (p = 0.1) and random geometric graphs with 256 agents achieve performance comparable to that of the single-agent case. This speaks to the promise of the proposed algorithm for very large problem sizes over relatively sparse graphs.

²On a per-iteration per-agent basis, updating θ according to (9) equates to 2n(d/m) + 4(d/m) operations, updating v according to (10) equates to n(∆(G)+3) operations, and updating λ according to (11) equates to n(2(d/m)+∆(G)+4)+(d/m) operations. We omit the presumed negligible cost of the first agent solving (12), which for the specific case of least squares would be an extra 2n operations. For the specific case of *non-regularized* least squares, we could also omit 3(d/m) operations from the θ updates.

³To compute the required operations in the single-agent case, a calculation similar to that of Footnote 2 is performed, with caveats. The v quantities are absent, leading to a reduction of n(∆(G) + 3) from the updates in (10) as well as n(∆(G) + 3) from the updates in (11).

## References

Reza Arablouei, Kutluyil Doğançay, Stefan Werner, and Yih-Fang Huang. Model-distributed solution of regularized least-squares problem over sensor networks. In *2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3821–3825, 2015. doi: 10.1109/ICASSP.2015.7178686.

Heinz H. Bauschke and Patrick L. Combettes. *Convex Analysis and Monotone Operator Theory in Hilbert Spaces*. Springer New York, New York, NY, 2011. ISBN 978-1-4419-9467-7. doi: 10.1007/978-1-4419-9467-7. URL https://doi.org/10.1007/978-1-4419-9467-7.

Antonin Chambolle and Thomas Pock. On the ergodic convergence rates of a first-order primal–dual algorithm. *Math. Program.*, 159(1–2):253–287, September 2016. ISSN 0025-5610. doi: 10.1007/s10107-015-0957-3.

Tianyi Chen, Xiao Jin, Yuejiao Sun, and Wotao Yin. VAFL: a method of vertical asynchronous federated learning. arXiv preprint arXiv:2007.06081 [cs.LG], 2020.

J. C. Duchi, A. Agarwal, and M. J. Wainwright. Dual averaging for distributed optimization: convergence analysis and network scaling. *IEEE Trans. Auto. Control*, 57(3):592–606, 2012.

Eduard Gorbunov, Alexander Rogozin, Aleksandr Beznosikov, Darina Dvinskikh, and Alexander Gasnikov. Recent theoretical advances in decentralized distributed convex optimization. arXiv preprint arXiv:2011.13259 [math.OC], 2020.

Cristiano Gratton, Naveen K. D. Venkategowda, Reza Arablouei, and Stefan Werner. Distributed ridge regression with feature partitioning. In *2018 52nd Asilomar Conference on Signals, Systems, and Computers*, pp. 1423–1427, 2018. doi: 10.1109/ACSSC.2018.8645549.

Lie He, An Bian, and Martin Jaggi. CoLa: Decentralized linear learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31.
Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/05a70454516ecd9194c293b0e415777f-Paper.pdf.

Yaochen Hu, Di Niu, Jianming Yang, and Shengping Zhou. FDML: A collaborative machine learning framework for distributed features. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, KDD '19, pp. 2232–2240, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450362016. doi: 10.1145/3292500.3330765. URL https://doi.org/10.1145/3292500.3330765.

B. Johansson, T. Keviczky, M. Johansson, and K. H. Johansson. Subgradient methods and consensus algorithms for solving convex optimization problems. In *Proc. IEEE Conf. Decision and Control*, pp. 4185–4190, Cancun, Mexico, 2008.

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. In Peter Kairouz and H. Brendan McMahan (eds.), *Foundations and Trends in Machine Learning*, volume 14. 2021. doi: 10.1561/2200000083. URL http://dx.doi.org/10.1561/2200000083.

Alexander Lubotzky, Ralph Phillips, and Peter Sarnak. Ramanujan graphs. *Comb.*, 8(3):261–277, 1988. doi: 10.1007/BF02126799. URL https://doi.org/10.1007/BF02126799.

David Mateos-Núñez and Jorge Cortés. Distributed saddle-point subgradient algorithms with Laplacian averaging. *IEEE Transactions on Automatic Control*, 62(6):2720–2735, 2017. doi: 10.1109/TAC.2016.2616646.

Bojan Mohar. Eigenvalues, diameter, and mean distance in graphs. *Graph. Comb.*, 7(1):53–64, March 1991. ISSN 0911-0119. doi: 10.1007/BF01789463. URL https://doi.org/10.1007/BF01789463.

Bojan Mohar. Laplace eigenvalues of graphs—a survey. *Discrete Mathematics*, 109(1):171–183, 1992. ISSN 0012-365X. doi: https://doi.org/10.1016/0012-365X(92)90288-Q. URL https://www.sciencedirect.com/science/article/pii/0012365X9290288Q.

A. Nedić and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. *IEEE Trans. Auto. Control*, 54(1):48–61, 2009.

J. Z. Sasiadek. Sensor fusion. *Annual Reviews in Control*, 26(2):203–228, 2002. ISSN 1367-5788. doi: https://doi.org/10.1016/S1367-5788(02)00045-7. URL https://www.sciencedirect.com/science/article/pii/S1367578802000457.

K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, and L. Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks. In *Proc. Int. Conf. Machine Learning*, volume 70, pp. 3027–3036, 2017.

Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié. Optimal convergence rates for convex distributed optimization in networks. *Journal of Machine Learning Research*, 20(159):1–31, 2019. URL http://jmlr.org/papers/v20/19-543.html.
Aleksandra B. Slavković, Yuval Nardi, and Matthew M. Tibbits. "Secure" logistic regression of horizontally and vertically partitioned distributed databases. In *Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007)*, pp. 723–728, 2007. doi: 10.1109/ICDMW.2007.114.

Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, and Martin Jaggi. CoCoA: A general framework for communication-efficient distributed optimization. *Journal of Machine Learning Research*, 18(230):1–49, 2018. URL http://jmlr.org/papers/v18/16-512.html.

Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta (eds.), *Advances in Neural Information Processing Systems*, volume 23. Curran Associates, Inc., 2010. URL https://proceedings.neurips.cc/paper/2010/file/76cf99d3614e23eabab16fb27e944bf9-Paper.pdf.

J. Wang and N. Elia. A control perspective for centralized and distributed convex optimization. In *Proc. IEEE Conf. Decision and Control*, pp. 3800–3805, Orlando, FL, 2011.

Tao Yang, Xinlei Yi, Junfeng Wu, Ye Yuan, Di Wu, Ziyang Meng, Yiguang Hong, Hong Wang, Zongli Lin, and Karl H. Johansson. A survey of distributed optimization. *Annual Reviews in Control*, 47:278–305, 2019. ISSN 1367-5788. doi: https://doi.org/10.1016/j.arcontrol.2019.05.006. URL https://www.sciencedirect.com/science/article/pii/S1367578819300082.

M. Zhu and S. Martinez. On distributed convex optimization under inequality and equality constraints. *IEEE Trans. Auto. Control*, 57(1):151–164, 2012.

## A Proof Of **Theorem 1**

As the dual parameters are not important for our purposes, our goal is to convert the established saddle-point convergence rates of the Chambolle–Pock algorithm (Chambolle & Pock, 2016) into primal convergence rates. Similar to (14), define the temporal averages of the other iterates over the first T iterations as

$$\overline{v}_{j}=\frac{1}{T}\sum_{t=1}^{T}v_{j,t}\,,\qquad\overline{\lambda}_{j}=\frac{1}{T}\sum_{t=1}^{T}\lambda_{j,t}\,,\qquad\text{for }j\in[m]\,,$$

and let $\overline{V}=[\overline{v}_{1}\ \cdots\ \overline{v}_{m}]$ and $\overline{\lambda}=[\overline{\lambda}_{1};\ \cdots;\ \overline{\lambda}_{m}]$. Furthermore, denote the objective of the saddle-point problem (4) by

$$\mathcal{E}(\theta,V,\lambda)=\underbrace{\sum_{j=1}^{m}r_{j}(\theta_{j})}_{=r(\theta)}+\frac{1}{n}\sum_{j=1}^{m}\lambda_{j}^{\mathsf{T}}\left(X_{j}\theta_{j}+VLe_{j}\right)-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\lambda_{1,i})\,.\tag{17}$$

With the iterates initialized at zero (i.e., $\theta_{j,0}=0$, $v_{j,0}=0$, and $\lambda_{j,0}=0$ for all $j\in[m]$), and observing that $\tau\sigma\|K\|^{2}\leq 1$, we can apply the convergence rate established in (Chambolle & Pock, 2016, Theorem 1 and Remark 2) to obtain

$$\begin{aligned}
\mathcal{E}(\overline{\theta},\overline{V},\lambda)-\mathcal{E}(\theta,V,\overline{\lambda})
&\leq\frac{1}{T}\sum_{j=1}^{m}\Bigg[\frac{1}{2\tau}\|\theta_{j}-\theta_{j,0}\|_{2}^{2}+\frac{1}{2\tau}\|v_{j}-v_{j,0}\|_{2}^{2}+\frac{1}{2\sigma}\|\lambda_{j}-\lambda_{j,0}\|_{2}^{2}\\
&\qquad\qquad-\frac{1}{n}(\lambda_{j}-\lambda_{j,0})^{\mathsf{T}}\Bigg(X_{j}(\theta_{j}-\theta_{j,0})+\sum_{j'\in[m]:\ j\sim_{G}j'}v_{j}-v_{j,0}-v_{j'}+v_{j',0}\Bigg)\Bigg]\\
&\leq\frac{1}{T}\sum_{j=1}^{m}\Bigg[\frac{1}{\tau}\|\theta_{j}-\theta_{j,0}\|_{2}^{2}+\frac{1}{\tau}\|v_{j}-v_{j,0}\|_{2}^{2}+\frac{1}{\sigma}\|\lambda_{j}-\lambda_{j,0}\|_{2}^{2}\Bigg]\\
&=\frac{1}{T}\bigg(\frac{1}{\tau}\|\theta\|_{2}^{2}+\frac{1}{\tau}\|V\|_{\mathrm{F}}^{2}+\frac{1}{\sigma}\|\lambda\|_{2}^{2}\bigg)\,,
\end{aligned}$$

for all θ, V, and λ.
Rearranging the terms, we equivalently have

$$\mathcal{E}(\overline{\theta},\overline{V},\lambda)-\frac{1}{T\sigma}\|\lambda\|_{2}^{2}\leq\mathcal{E}(\theta,V,\overline{\lambda})+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right)\,.$$

Recalling (17), taking the maximum of the left-hand side with respect to λ, and applying Lemma 1 to the part corresponding to λ₁, we have

$$\begin{aligned}
r(\overline{\theta})&+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)-\frac{1}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i})\big)^{2}+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\\
&\leq\min_{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}}\;r(\theta)+\frac{1}{n}\sum_{j=1}^{m}\overline{\lambda}_{j}^{\intercal}\left(X_{j}\theta_{j}+VLe_{j}\right)-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\overline{\lambda}_{1,i})+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right).\tag{18}
\end{aligned}$$

Next we establish a few more inequalities, depending on the characteristics of the loss function, that together with (18) yield the desired convergence rates.

## A.1 Lower Bound For The Left-Hand Side Of (18)

## A.1.1 Lipschitz Loss

We first consider the case of Lipschitz loss functions (Lip.). Using convexity of ℓ_i(·), we can write

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)&\geq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)-\ell_{i}'((X\overline{\theta})_{i})\Big(\sum_{j=2}^{m}X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\Big)_{i}\\
&\geq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)-\frac{m-1}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X\overline{\theta})_{i})\big)^{2}-\frac{T\sigma}{4n^{2}}\sum_{j=2}^{m}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\,,
\end{aligned}$$

where the second inequality is an application of the basic inequality 2ab ≤ a² + b². By construction, we have

$$X_{1}\overline{\theta}_{1}+\overline{V}Le_{1}+\sum_{j=2}^{m}X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}=X\overline{\theta}\,.$$

Therefore, in view of (Lip.), what we have shown is

$$\begin{aligned}
r(\overline{\theta})&+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)-\frac{1}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i})\big)^{2}+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\\
&\geq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)+r(\overline{\theta})-\frac{mn\rho^{2}}{T\sigma}\,.\tag{19}
\end{aligned}$$

## A.1.2 Square Root Lipschitz Loss

The second case we consider is that of the square root Lipschitz loss functions (√-Lip.). It follows from (√-Lip.) that

$$\sum_{i=1}^{n}\big(\ell_{i}'((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i})\big)^{2}\leq\rho^{2}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)\,.$$

For sufficiently large T we have γ ≝ nρ²/(Tσ) < 1/m, and we can lower bound the left-hand side of (18), excluding the term r(θ̄), as

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n}&\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)-\frac{1}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i})\big)^{2}+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\\
&\geq(1-\gamma)\,\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\\
&\geq(1-\gamma)\,\frac{1}{n}\sum_{i=1}^{n}\Big(\ell_{i}\big((X\overline{\theta})_{i}\big)-\ell_{i}'((X\overline{\theta})_{i})\Big(\sum_{j=2}^{m}X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\Big)_{i}\Big)+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\,,\tag{20}
\end{aligned}$$

where we used the convexity of the function ℓ_i(·) in the second line. Again using the basic inequality 2ab ≤ a² + b², we have

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n}\ell_{i}'((X\overline{\theta})_{i})\Big(\sum_{j=2}^{m}X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\Big)_{i}&\leq\sum_{i=1}^{n}\frac{(1-\gamma)(m-1)}{T\sigma}\big(\ell_{i}'((X\overline{\theta})_{i})\big)^{2}+\frac{T\sigma}{4(1-\gamma)n^{2}}\sum_{j=2}^{m}\big(X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big)_{i}^{2}\\
&=\frac{(1-\gamma)(m-1)}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X\overline{\theta})_{i})\big)^{2}+\frac{T\sigma}{4(1-\gamma)n^{2}}\sum_{j=2}^{m}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\,.\tag{21}
\end{aligned}$$

By (√-Lip.)
we also have

$$\sum_{i=1}^{n}\big(\ell_{i}'((X\overline{\theta})_{i})\big)^{2}\leq\rho^{2}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)\,,$$

which together with (20) and (21), and by adding back the term r(θ̄), yields

$$\begin{aligned}
r(\overline{\theta})&+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i}\big)-\frac{1}{T\sigma}\sum_{i=1}^{n}\big(\ell_{i}'((X_{1}\overline{\theta}_{1}+\overline{V}Le_{1})_{i})\big)^{2}+\sum_{j=2}^{m}\frac{T\sigma}{4n^{2}}\big\|X_{j}\overline{\theta}_{j}+\overline{V}Le_{j}\big\|_{2}^{2}\\
&\geq r(\overline{\theta})+(1-\gamma)\big(1-\gamma(1-\gamma)(m-1)\big)\,\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)\\
&\geq(1-m\gamma)\left(\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)+r(\overline{\theta})\right).\tag{22}
\end{aligned}$$

## A.2 Upper Bound For The Right-Hand Side Of (18)

Furthermore, the right-hand side of the inequality (18) can be bounded as

$$\begin{aligned}
\min_{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}}\;&r(\theta)+\frac{1}{n}\sum_{j=1}^{m}\overline{\lambda}_{j}^{\intercal}\left(X_{j}\theta_{j}+VLe_{j}\right)-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\overline{\lambda}_{1,i})+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right)\\
&\leq\min_{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}}\;r(\theta)+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\theta_{1}+VLe_{1})_{i}\big)+\frac{1}{n}\sum_{j=2}^{m}\overline{\lambda}_{j}^{\intercal}\left(X_{j}\theta_{j}+VLe_{j}\right)+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right).
\end{aligned}$$

Imposing the constraints X_jθ_j + VLe_j = 0 for j = 2, …, m can only increase the value of the minimum on the right-hand side. Namely, we have

$$\begin{aligned}
\min_{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}}\;&r(\theta)+\frac{1}{n}\sum_{j=1}^{m}\overline{\lambda}_{j}^{\intercal}\left(X_{j}\theta_{j}+VLe_{j}\right)-\frac{1}{n}\sum_{i=1}^{n}\ell_{i}^{*}(\overline{\lambda}_{1,i})+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right)\\
&\leq\min_{\substack{\theta\in\mathbb{R}^{d},\,V\in\mathbb{R}^{n\times m}\\ X_{j}\theta_{j}+VLe_{j}=0\,,\;j\in[m]\setminus\{1\}}}\;r(\theta)+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X_{1}\theta_{1}+VLe_{1})_{i}\big)+\frac{1}{T\tau}\left(\|\theta\|_{2}^{2}+\|V\|_{\mathrm{F}}^{2}\right)\\
&\leq\min_{\substack{V\in\mathbb{R}^{n\times m}\\ X_{j}\widehat{\theta}_{j}+VLe_{j}=0\,,\;j\in[m]\setminus\{1\}}}\;\frac{1}{T\tau}\|V\|_{\mathrm{F}}^{2}+\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\big\|\widehat{\theta}\big\|_{2}^{2}\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\big\|\widehat{\theta}\big\|_{2}^{2}+\frac{1}{T\tau}\big\|(L\otimes I)^{\dagger}\big\|^{2}\,\Big\|\Big[\sum_{j=2}^{m}X_{j}\widehat{\theta}_{j};\;-X_{2}\widehat{\theta}_{2};\;\cdots;\;-X_{m}\widehat{\theta}_{m}\Big]\Big\|_{2}^{2}\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\big\|\widehat{\theta}\big\|_{2}^{2}+\frac{2}{T\tau}\big\|(L\otimes I)^{\dagger}\big\|^{2}\|X\|^{2}\big\|\widehat{\theta}\big\|_{2}^{2}\,,\tag{23}
\end{aligned}$$

where θ̂ is the empirical risk minimizer given by (1), and we used the bound

$$\Big\|\Big[\sum_{j=2}^{m}X_{j}\widehat{\theta}_{j};\;-X_{2}\widehat{\theta}_{2};\;\cdots;\;-X_{m}\widehat{\theta}_{m}\Big]\Big\|_{2}^{2}\leq\|X\|^{2}\big\|\widehat{\theta}\big\|_{2}^{2}+\max_{j\in[m]\setminus\{1\}}\|X_{j}\|^{2}\big\|\widehat{\theta}\big\|_{2}^{2}\,.$$

## A.3 Convergence Of The Regularized Empirical Risk

We are now ready to derive the convergence rates under the loss models (Lip.) and (√-Lip.).
## A.3.1 Lipschitz Loss

In the case of the Lipschitz loss model (Lip.), the bounds (18), (19), and (23) guarantee that

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)+r(\overline{\theta})&\leq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\left(1+2\big\|(L\otimes I)^{\dagger}\big\|^{2}\|X\|^{2}\right)\big\|\widehat{\theta}\big\|_{2}^{2}+\frac{mn\rho^{2}}{T\sigma}\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\left(1+\frac{2\chi^{2}}{\delta^{2}}\right)R^{2}+\frac{mn\rho^{2}}{T\sigma}\,.
\end{aligned}$$

Using the values of σ and τ prescribed by Theorem 1, we get (15).

## A.3.2 Square Root Lipschitz Loss

Similarly, for square root Lipschitz losses (√-Lip.), it follows from (18), (22), and (23) that for T ≥ 2mnρ²/σ we have

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\overline{\theta})_{i}\big)&\leq\left(1-\frac{mn\rho^{2}}{T\sigma}\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\left(1+2\big\|(L\otimes I)^{\dagger}\big\|^{2}\|X\|^{2}\right)\big\|\widehat{\theta}\big\|_{2}^{2}\right)\\
&\leq\left(1+\frac{2mn\rho^{2}}{T\sigma}\right)\left(\frac{1}{n}\sum_{i=1}^{n}\ell_{i}\big((X\widehat{\theta})_{i}\big)+r(\widehat{\theta})+\frac{1}{T\tau}\left(1+\frac{2\chi^{2}}{\delta^{2}}\right)R^{2}\right).
\end{aligned}$$

Using the values of σ and τ prescribed by Theorem 1 yields (16).

## A.4 Auxiliary Lemma

Lemma 1. Let f₁ and f₂ be differentiable (closed) convex functions defined over a linear space X. Denote their corresponding convex conjugate functions, defined on the dual space X*, respectively by f₁* and f₂*. For all u ∈ X *we have*

$$\max_{v\in\mathcal{X}^{*}}\left\langle u,v\right\rangle-\left(f_{1}^{*}(v)+f_{2}^{*}(v)\right)\geq f_{1}(u)-f_{2}^{*}(\nabla f_{1}(u))\,.$$

Proof. The result follows from the duality of summation and *infimal convolution* (Bauschke & Combettes, 2011, Proposition 13.24), that is,

$$\begin{aligned}
\max_{v\in\mathcal{X}^{*}}\left\langle u,v\right\rangle-\left(f_{1}^{*}(v)+f_{2}^{*}(v)\right)&=\min_{w\in\mathcal{X}}f_{1}(u-w)+f_{2}(w)\\
&\geq\min_{w\in\mathcal{X}}f_{1}(u)-\left\langle\nabla f_{1}(u),w\right\rangle+f_{2}(w)\\
&=f_{1}(u)-f_{2}^{*}(\nabla f_{1}(u))\,.
\end{aligned}$$
$$\square$$

## B Supplementary Code And Figures

All code may be found in the supplementary materials file, along with a read-me file which details how to reproduce the results. In an attempt to mimic the results of Figure 1, 101 trials were run for each setting. The random number generator was given a unique seed in each trial to generate different values. As a result, all of the problem tensors (X, θ⋆, e, and by consequence y) along with the graph structure (for random graphs) were re-sampled in each trial. Figure 3 shows the statistical results from these trials. It appears that convergence is most influenced by the choice of graph structure, as the random graphs tend to have a higher variance. However, intuitively, this influence diminishes as the number of agents increases. A non-random graph structure yields nearly no perceivable spread, implying that the problem description is sufficiently robust for our purposes.

![16_image_0.png](16_image_0.png)

Figure 3: Plots highlighting the effect of randomness on convergence results. All plots are analogous to those in Figure 1; however, the series' values now portray the median loss while the shaded regions portray the 10th to 90th percentile over 101 trials, iteration-wise. For the results of non-random graphs (i.e.,
the single agent, complete graph, star graph, and 2D lattice graph) the difference between 10th and 90th percentile is not readily visible, but has indeed been plotted.
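For readers without access to the supplementary materials, the trial protocol described above can be sketched in a few lines of Python. This is our own illustration rather than the released code: `run_algorithm` is a hypothetical stand-in for the proposed distributed primal-dual solver, and the problem dimensions are arbitrary placeholders.

```python
import numpy as np

def run_algorithm(X, y, m, graph, num_iters, rng):
    # Hypothetical stand-in for the distributed primal-dual solver from the
    # supplementary materials; returns a dummy per-iteration loss curve.
    return np.abs(rng.standard_normal(num_iters)) / np.arange(1, num_iters + 1)

def run_trials(num_trials=101, num_iters=2000, d=50, n=200, m=4, graph="complete"):
    """Re-sample the problem in every trial and aggregate losses iteration-wise."""
    losses = np.empty((num_trials, num_iters))
    for trial in range(num_trials):
        rng = np.random.default_rng(seed=trial)   # unique seed per trial
        X = rng.standard_normal((n, d))           # re-sampled design matrix
        theta_star = rng.standard_normal(d)       # ground-truth parameters
        e = rng.standard_normal(n)                # noise vector
        y = X @ theta_star + e                    # responses
        losses[trial] = run_algorithm(X, y, m, graph, num_iters, rng)
    # Median and the 10th/90th percentiles plotted in Figure 3.
    return (np.percentile(losses, 50, axis=0),
            np.percentile(losses, 10, axis=0),
            np.percentile(losses, 90, axis=0))
```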
Review 1:

Summary: This paper studies regularized ERM for generalized linear models where both the loss functions and the regularizer are convex. Further, the paper considers the decentralized setting specified using a time-invariant communication graph, where each machine has access to a disjoint set of features for all data points, and the regularizer is separable across these features. Among other scenarios, this distributed setting arises when collecting data through multiple sensors, but it isn't very well understood. The authors reduce the original undistributed optimization problem to a distributed saddle-point problem and then solve it using a known primal-dual algorithm given appropriate proximal accesses, and they provide convergence guarantees for Lipschitz and "square-root Lipschitz" loss functions. Numerical experiments study the effect of varying the communication graphs and the number of machines for the proposed algorithm.

Strengths and Weaknesses:

**Strength:** The paper studies an important understudied problem with broad applications. I like the general flow of the paper; the authors are clear about reducing the original non-distributed problem to a distributed one and proceeding from there on.

**Weakness:** I believe the authors can add several clarifying remarks to the paper to improve the writing. In particular, the final algorithm is not exposed very well, making it difficult to understand the total number and nature of the oracle queries required. The final convergence results can be described more simply in settings with natural scaling, making the contribution clear. Some relevant papers are also missing in the related works section. Please see the next section for more details.

**Verdict:** This paper studies a critical problem that has not been studied extensively. The main contribution is casting the problem into a tractable saddle-point problem which can be solved using a known algorithm given appropriate proximal accesses. The algorithmic insight in the paper is limited to showing how to implement the Chambolle-Pock algorithm in their setting. Further, I believe the authors can improve their writing in several places. I recommend revising the paper to alleviate these issues (see next section) and adding more results. I do not recommend accepting the paper in its current form, but my review is borderline and might improve with a revised version.

Requested Changes:

1. Authors should remark on which common regularizers are separable and which are not.

2. I agree that the problem in (3) is amenable to decentralized distributed optimization and is more complicated than (1), so solving it should give a guarantee for (1). But the paper doesn't discuss why/how the Laplacian constraint in (3) helps their analysis.

3. I believe [this](https://arxiv.org/abs/2003.10422) is a missing reference for decentralized first-order methods.

4. The paper discussed federated learning literature but didn't mention *vertical federated learning*, which considers a similar distributed features problem. [This](https://arxiv.org/abs/2202.04309) is a representative paper.

5. I like the flow of section 2, showing how to implement the Chambolle-Pock algorithm in their setting. However, it would be helpful to summarize all the machines' updates in closed form or through a prox-oracle in pseudocode. This code should also emphasize when communication happens between the nodes. I would further recommend defining the prox oracles in section two and using specific access to them in the pseudocode.
This presentation will make the computational and communication complexity very clear. Finally, the pseudocode should highlight the critical point that (12) can be parallelized. Please clarify if this happens on new machines or the machines in the communication cluster of machine one.

6. Given the direct access to the prox oracles, discussing when such access can be efficiently implemented in practice is essential. This can be achieved by considering several example functions in section 2, either through remarks or a separate subsection. Mentioning when such oracles are infeasible to implement is also important to give the unfamiliar reader a balanced view of the contribution. Extending the work to inexact prox-oracles can be an excellent direction to add technical novelty.

7. Instead of tuning the hyper-parameters sub-optimally for two different loss classes, consider splitting the theorem into separate theorems.

8. It is good that the authors discuss some natural bounds on their problem parameters below the theorem. However, it would be better if corollaries were provided below the theorem to re-state the theorem in terms of these fundamental properties of $G$. It will also make it easier to interpret the provided guarantee.

9. Several different communication graphs are considered in the experiments. Do the theoretical guarantees for those graphs, which can be specialized by estimating their different bounds, predict the actual performance well? The authors should consider adding corollaries for all these special cases, at least in the appendix. The overall aim of this exercise is to expose the theoretical contribution well. As of now, it is unclear what to make of the different experiments. It would also be helpful to remark if these communication graphs capture real-world behavior.

Broader Impact Concerns: There are no apparent ethical concerns with the submission.

==================================================

Review 2:

Summary: The paper studies decentralized feature-distributed optimization for generalized linear models. The contribution can be summarized as follows:

* Providing a tight theoretical analysis for the case of feature-distributed optimization in decentralized learning is interesting and crucial.
* The paper applies the Chambolle-Pock primal-dual algorithm to a reformulation of the decentralized feature-distributed optimization problem, so as to give rates for a broader class of loss functions.

Strengths and Weaknesses:

# Strengths

* The manuscript in general is well-written and well-structured.
* The studied problem is interesting and crucial to the decentralized optimization community.

# Weaknesses

The main issues of the current manuscript are its novelty and significance, due to the following:

1. Incomplete related work. The considered feature-distributed decentralized optimization has been extensively studied by the federated learning community as vertical federated learning, a special variant of decentralized learning with the star topology. However, the latest relevant work mentioned in the current manuscript was Chen et al., 2020. There exists a large volume of Vertical Federated Learning (VFL) research, and the manuscript needs to discuss it.

2. The authors need to compare their theoretical results (by modifying the L to match the star topology) with existing rates developed in Vertical Federated Learning (VFL).

3. The numerical results omit the comparison with other methods, e.g., COLA, and other VFL methods (for the star topology).
As the manuscript argues that its analysis can be applied to a broader class of square root Lipschitz loss functions, it would be great if the authors could construct some synthetic cases to empirically verify this point, so as to highlight the tightness of the proposed analysis.

4. The reviewer is also uncertain about (did not carefully check) the technical difficulty of extending the Chambolle-Pock primal-dual algorithm to the considered feature-distributed decentralized learning scenario. The authors are required to justify this point.

Requested Changes: Please check the four weak points mentioned above.

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This work proposes to solve the problem of distributed minimization with linear predictors and a regularizer, where the features have been partitioned across the nodes (agents). The authors rewrite the problem as a primal-dual one using convex conjugates of the empirical losses. On top of that, the authors introduce a consensus constraint to make it solvable in a distributed fashion; for the dualized problem with the constraints, they use the Lagrangian as the (minmax) objective and apply the Chambolle-Pock algorithm to it. The authors then study the proposed algorithm numerically on the problem of quadratic minimization.

I found the overall contribution to be too small. The work does not offer a substantial theoretical study, does not have significant algorithmic contributions, and the numerical studies were done only to validate the theory. Therefore, I suggest rejecting this paper.

Strengths and Weaknesses:

## Strengths

**Writing**. I found the writing easy to follow, with almost no typos or mistakes. The presentation is clear, and the related work seems to be covered.

**Correctness**. I did not find any correctness issues.

## Weaknesses

**Motivation**. The paper lacks motivation. Parts missing motivation:

1. Why is the problem where the features are partitioned of interest? Is it motivated by a practical scenario where each agent has access to a subset of features, or is the goal to make distributed optimization on a cluster run faster?

2. "We assume that only one of the agents, say the first agent, observes the response." I wish this assumption was explained better and justified.

3. What is the purpose of formulating the problem as a decentralized one? Is it to link it to a specific application or just to make the problem more general?

**Lack of novelty**. There is hardly much new in how the problem was dualized and how the Lagrangian was used to make it distributed. In particular, dualization is standard in the literature on primal-dual methods, while the feature partitioning has been previously considered in the work of Gratton et al. (2018), who used a similar approach in the context of quadratic problems and used an ADMM-based solver for it.

**Questionable practicality**. The methods presented here are somewhat old-fashioned and rely on evaluations of proximal operators instead of gradients. There are, of course, applications where proximal operators can be computed efficiently, but it is still a limitation, as the gradient oracle is available more commonly.

**Limited numerical comparison**. The experiments do not compare the method to any other in the literature and do not make the method's motivation more apparent.

Requested Changes: I do not expect this work to pass the acceptance threshold and, thus, do not request any significant changes.
I did find a couple of typos and one missing reference, listed below:

- Equation (1) should end with a comma rather than a period.
- Page 4: "square root Lipschitz" -> "square-root Lipschitz"
- Missing reference: The authors claim to consider a "non-standard distributed setting where the data features, rather than samples, are distributed among m agents". This is, however, not that unusual, as it has been considered, for instance, by Richtarik & Takac (2016), "Distributed Coordinate Descent Method for Learning with Big Data". Citing this reference would put the work into context.

Broader Impact Concerns: This work is mostly theoretical and is unlikely to have any ethical implications.

==================================================

Metareview:

Recommendation: Reject

Comment: This paper studies a *potentially* interesting problem, but fails to give proper justification for the studied setup. In other words, while I believe the problem *can* be well motivated, the authors did not do so to a degree that would be deemed sufficient by the reviewers and myself. In this regard, one reviewer wrote: "The setting considered in the paper is motivated by applications such as federated learning and parallel computing. When motivating the applications in federated learning, the authors simply mention the cross-silo setting, without explaining how the specific issues addressed in their paper would be helpful there."

Another reviewer wrote: "I concede with the other reviewers that the oracle considered in the paper is not very well motivated. And even for the given oracle, the authors are unclear about the final oracle complexity and implemented algorithm (in terms of parallelism). One could expect the authors to make some changes to improve these issues, but overall the paper will need more than one round of changes to be acceptable."

Moreover, the authors seem to be unaware of large swaths of related literature, mainly related to distributed variants of coordinate descent (there are more works on this topic than the one that was missed and pointed out by one reviewer; another example: "Fast distributed coordinate descent for minimizing non-strongly convex losses", MLSP 2014), and vertical FL (there is a very large body of recent work in this area). Their results need to be contrasted to what is known so that the readers can appreciate them in context. This is a basic requirement of the scientific method.

Besides these issues, the contribution of this paper was described by one reviewer as "extremely small". The derivation done in the paper merely amounts to dualizing the problem (a standard technique) and subsequently applying a known method (Chambolle-Pock) to the saddle-point reformulation. This can be viewed as a contribution that is closer to an "exercise" difficulty level than to a "substantial research" difficulty level. This could in principle have been somewhat alleviated if the authors provided additional insights, guidelines, and contributions besides this, but the reviewers concluded the paper lacks in this kind of development as well.

So, the reviewers and myself find it hard to see where the original contributions are, and how they relate to the literature. Because of this, readers could be easily misled rather than enlightened. After a short discussion, none of the reviewers proposed acceptance. I concur with these views, and therefore have no choice but to propose rejection. However, I encourage the authors to keep developing these ideas further.
I believe that a very substantial revision, with more work invested into the research, could eventually lead to a nice paper. AC ==================================================
# Cold Start Streaming Learning For Deep Networks

Anonymous authors

Paper under double-blind review

## Abstract

The ability to dynamically adapt neural networks to newly-available data without performance deterioration would revolutionize deep learning applications. Streaming learning (i.e., learning from one data example at a time) has the potential to enable such real-time adaptation, but current approaches i) freeze a majority of network parameters during streaming and ii) are dependent upon offline, base initialization procedures over large subsets of data, which damages performance and limits applicability. To mitigate these shortcomings, we propose Cold Start Streaming Learning (CSSL), a simple, end-to-end approach for streaming learning with deep networks that uses a combination of replay and data augmentation to avoid catastrophic forgetting. Because CSSL updates all model parameters during streaming, the algorithm is capable of beginning streaming from a random initialization, making base initialization optional. Going further, the algorithm's simplicity allows theoretical convergence guarantees to be derived using analysis of the Neural Tangent Random Feature (NTRF). In experiments, we find that CSSL outperforms existing baselines for streaming learning in experiments on CIFAR100, ImageNet, and Core50 datasets. Additionally, we propose a novel multi-task streaming learning setting and show that CSSL performs favorably in this domain. Put simply, CSSL performs well and demonstrates that the complicated, multi-step training pipelines adopted by most streaming methodologies can be replaced with a simple, end-to-end learning approach without sacrificing performance.

## 1 Introduction

Background. Many autonomous applications would benefit from real-time, dynamic adaptation of models to new data. As such, online learning1 has become a popular topic in deep learning; e.g., continual (Lopez-Paz & Ranzato, 2017; Zenke et al., 2017), lifelong (Aljundi et al., 2017; Chaudhry et al., 2018b), incremental (Rebuffi et al., 2017; Castro et al., 2018), and streaming (Hayes et al., 2018; Hayes et al., 2020) learning. However, the (potentially) non-i.i.d. nature of incoming data causes catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), thus complicating the learning process.

Batch-incremental learning (Rebuffi et al., 2017; Castro et al., 2018), where batches of data—typically sampled from disjoint sets of classes or tasks within a dataset—become available to the model sequentially, is a widely-studied form of online learning. Within this setup, however, one must wait for a sizeable batch of data2 to accumulate before the model is updated with an expensive, offline training procedure over new data, thus introducing latency that prevents real-time model updates. To avoid such latency, we adopt the streaming learning setting where i) each data example is seen once and ii) the dataset is learned in a single pass (Hayes et al., 2018). Streaming learning performs brief, online updates (i.e., one or a few forward/backward passes) for each new data example, forcing learning of new data to occur in real-time. Additionally, streaming learning techniques can be adapted to cope with batches of data instead of single examples (Wang et al., 2018), while batch-incremental learning techniques tend to

1We use "online learning" to generically describe methodologies that perform training in a sequential manner.
2These "batches" are large (e.g., a 100-class subset of ImageNet with >100,000 data examples (Rebuffi et al., 2017)) and using smaller batches damages performance (Belouadah et al., 2020; Hayes et al., 2020). deteriorate drastically given smaller batch sizes (Hayes et al., 2020). As such, the streaming learning setting, which has recently been explored for applications with deep neural networks (Hayes et al., 2018; Hayes et al., 2020; Hayes & Kanan, 2020), is generic and has the potential to enable low-cost updates of deep networks to incoming data. Current streaming methodologies for deep neural networks learn in two-phases: base initialization and streaming. During base initialization, network parameters (and other modules, if needed) are pre-trained over a subset of data. Then, a majority of network parameters are frozen (i.e., not updated) throughout the learning process. During streaming, most approaches maintain a replay buffer—though other techniques exist (Hayes & Kanan, 2020)—to avoid catastrophic forgetting by allowing prior data to be sampled and included in each online update. The current, multi-stage streaming learning pipeline suffers a few notable drawbacks. Namely, the learning process is dependent upon a latency-inducing, offline pre-training procedure and a majority of network parameters are not updated during the streaming process. As a result, the underlying network i) has reduced representational capacity, ii) cannot adapt a large portion of its parameters to new data, and iii) is dependent upon high-quality pre-training (during base initialization) to perform well. Given these considerations, one may begin to wonder whether a simpler streaming procedure could be derived to realize the goal of adapting deep networks to new data in real time. This Work. Inspired by the simplicity of offline training, we propose a novel method for streaming learning with deep networks, called Cold Start Streaming Learning (CSSL), that updates all network parameters in an end-to-end fashion throughout the learning process. Because no parameters are fixed by CSSL, base initialization and pre-training procedures are optional—streaming can begin from a completely random initialization (hence, "cold start" streaming learning). By leveraging a basic replay mechanism coupled with sophisticated data augmentation techniques, CSSL outperforms existing streaming techniques in a variety of domains. Furthermore, the simplicity of the approach makes our algorithm more apt to theoretical analysis, as well as easier to implement and deploy in practice. A summary of our contributions is as follows: - We propose CSSL, a simple streaming methodology that combines replay buffers with sophisticated data augmentation, and provide extensive comparison to existing techniques on common class-incremental streaming problems. We show that CSSL often outperforms baseline methodologies by a large margin. - We leverage techniques related to neural tangent random feature (NTRF) analysis (Cao & Gu, 2019) to prove a theoretical bound on the generalization loss of CSSL over streaming iterations. - We propose a multi-task streaming learning benchmark, where multiple, disjoint classification datasets are presented to the model sequentially and in a streaming fashion. We show that CSSL enables significant performance improvements over baseline methodologies in this new domain. 
- We extensively analyze models trained via CSSL, showing that resulting models i) achieve impressive streaming performance even when beginning from a random initialization (i.e., a cold start); ii) are robust to the compression of examples in the replay buffer for reduced memory overhead; and iii) provide highly-calibrated confidence scores due to our proposed data augmentation policy.

## 2 Related Work

Online Learning. Numerous experimental setups have been considered for online learning, but they all share two properties: i) the sequential nature of the training process and ii) performance deterioration due to catastrophic forgetting when incoming data is non-i.i.d. (McCloskey & Cohen, 1989; Kemker et al., 2018). Replay mechanisms, which maintain a buffer of previously-observed data (or a generative model to produce such data (Rannen et al., 2017; Shin et al., 2017)) to include in online updates, are highly effective at preventing catastrophic forgetting at scale (Douillard et al., 2020; Chaudhry et al., 2019; Hayes et al., 2020), leading us to base our proposed methodology upon replay. Similarly, knowledge distillation (Hinton et al., 2015) can prevent performance deterioration by stabilizing feature representations throughout the online learning process (Hou et al., 2019; Wu et al., 2019), even while training the network end-to-end (e.g., end-to-end incremental learning (Castro et al., 2018) and iCarl (Rebuffi et al., 2017)). Though distillation and replay are widely used, numerous other approaches to online learning also exist (e.g., architectural modification (Rusu et al., 2016; Draelos et al., 2017), regularization (Dhar et al., 2019; Li & Hoiem, 2017), and dual memory (Kemker & Kanan, 2017; Belouadah & Popescu, 2019)).

Streaming Learning. Streaming, which we study in this work, performs a single pass over the dataset, observing each sample once (Hayes et al., 2018). Having recently become popular in deep learning, streaming learning (Hayes et al., 2020; Acharya et al., 2020; Hayes & Kanan, 2020; Gallardo et al., 2021) trains the model in two phases: base initialization and streaming. Base initialization uses a subset of data to pre-train the model and initialize relevant network modules. Then, the streaming phase learns the dataset in a single pass with a majority of network parameters fixed. Within this training paradigm, replay-based techniques perform well at scale (Hayes et al., 2020), though other approaches may also be effective (Hayes & Kanan, 2020).

Data Augmentation. The success of the proposed methodology is enabled by our data augmentation policy. Though data augmentation for computer vision is well-studied (Shorten & Khoshgoftaar, 2019), we focus upon interpolation methods and learned augmentation policies. Interpolation methods (e.g., Mixup (Zhang et al., 2017; Wolfe & Lundgaard, 2019; Inoue, 2018) and CutMix (Yun et al., 2019)) take stochastically-weighted combinations of images and label pairs during training, which provides regularization benefits. Learned augmentation policies (e.g., AutoAugment (Cubuk et al., 2018; Lim et al., 2019; Hataya et al., 2020) and RandAugment (Cubuk et al., 2020)), on the other hand, consider a wide scope of augmentation techniques and adopt a data-centric approach to learn an optimal augmentation policy (i.e., using reinforcement learning or gradient-based techniques).

Confidence Calibration. Beyond the core methodology of CSSL, we extensively explore the confidence calibration properties of resulting models.
Put simply, confidence calibration is the ability of a model to accurately predict the probability of its own correctness—poor predictions should be made with low confidence and vice versa. Numerous methodologies have been proposed for producing calibrated models, including post-hoc softmax temperature scaling (Guo et al., 2017), ensemble-based techniques (Lakshminarayanan et al., 2017), Mixup (Zhang et al., 2017; Thulasidasan et al., 2019), Monte-Carlo dropout, and uncertainty estimates in Bayesian networks (Neal, 2012; Gal & Ghahramani, 2016). Confidence calibration has also been applied to problems including domain shift and out-of-distribution detection due to its ability to filter incorrect or low-confidence predictions (Hendrycks & Gimpel, 2016; Hendrycks & Dietterich, 2019).

Multi-task learning. *Our work is the first to consider multi-task streaming learning*, though multi-task learning has been previously explored for both offline and batch-incremental settings. (Hou et al., 2018) studies the sequential learning of several datasets via knowledge distillation and replay, and several other works study the related problem of domain shift (Jung et al., 2016; Furlanello et al., 2016), where two datasets are learned in a sequential fashion. For the offline setting, multi-task learning has been explored extensively, leading to a variety of successful learning methodologies for computer vision (Lu et al., 2020), natural language processing (Stickland & Murray, 2019), multi-modal learning (Wolfe & Lundgaard, 2021; Nguyen & Okatani, 2019), and more. The scope of work in offline multi-task learning is too broad to provide a comprehensive summary here, though numerous surveys on the topic are available (Ruder, 2017; Worsham & Kalita, 2020).

## 3 Methodology

The proposed streaming methodology, formulated in Algorithm 1, begins either from pre-trained parameters or a random initialization (Initialize in Algorithm 1) and utilizes a replay buffer R to prevent catastrophic forgetting. At each iteration, CSSL receives a new data example xt, yt := Dt, combines this data with random samples from the replay buffer, performs a stochastic gradient descent (SGD) update with the combined data, and stores the new example within the buffer for later replay. Within this section, we will first provide relevant preliminaries and definitions, then each component of CSSL will be detailed and contrasted with prior work.

Algorithm 1: Cold Start Streaming Learning

Parameters: W model parameters, D data stream, R replay buffer, C maximum replay buffer size, B number of replay samples per iteration

\# **Initialize** model parameters randomly or via pre-training (if possible)
W := Initialize()
R := ∅
for t = 1, . . . , |D| do
    \# **Sample** data from the replay buffer to combine with new data from D
    xnew, ynew := Dt
    Xreplay, Yreplay := ReplaySample(R, B)
    X, Y := {xnew} ∪ Xreplay, {ynew} ∪ Yreplay
    \# **Update** all model parameters over augmented new and replayed data
    StreamingUpdate(W, Augment(X, Y))
    \# **Compress** and **Store** the new data example for replay
    ReplayStore(R, (Compress(xnew), ynew))
    \# **Evict** data from the replay buffer to maintain capacity
    if |R| > C then
        ReplayEvict(R)
    end
end
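For concreteness, the control flow of Algorithm 1 can be rendered in a few lines of PyTorch-style Python. The sketch below is our own illustration rather than the authors' implementation: `augment` and `compress` are trivial stand-ins for the augmentation pipeline and Compress operation described later in Section 3.2, and details such as decompressing replayed images before stacking are elided.

```python
import random
import torch
import torch.nn.functional as F

def augment(x, y):
    # Stand-in for the full pipeline (crops/flips, Mixup, CutMix, AutoAugment).
    return x, y

def compress(x):
    # Stand-in for the data-independent Compress operation of Section 3.2.
    return x

def cssl_stream(model, optimizer, stream, capacity, num_replay):
    """Single-pass streaming loop mirroring Algorithm 1 (illustrative sketch)."""
    replay = []  # list of (image, label) pairs
    for x_new, y_new in stream:
        # Sample replay data uniformly and combine it with the new example.
        samples = random.sample(replay, min(num_replay, len(replay)))
        x_batch = torch.stack([x_new] + [x for x, _ in samples])
        y_batch = torch.tensor([y_new] + [y for _, y in samples])
        x_batch, y_batch = augment(x_batch, y_batch)

        # One SGD update over *all* model parameters (StreamingUpdate).
        optimizer.zero_grad()
        F.cross_entropy(model(x_batch), y_batch).backward()
        optimizer.step()

        # Store the compressed new example, then evict if over capacity.
        replay.append((compress(x_new), y_new))
        if len(replay) > capacity:
            by_class = {}
            for i, (_, y) in enumerate(replay):
                by_class.setdefault(int(y), []).append(i)
            largest = max(by_class.values(), key=len)  # most-populated class
            replay.pop(random.choice(largest))
    return model
```

The eviction step implements the simple policy described below in Section 3.2: find the class with the most stored examples and drop one of its examples at random.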
## 3.1 Preliminaries

Problem Setting. Following prior work (Hayes et al., 2020), we consider the problem of streaming for image classification.3 In most cases, we perform class-incremental streaming experiments, in which examples from each distinct semantic class within the dataset are presented to the learner sequentially. However, experiments are also performed using other data orderings; see Section 6.2. Within our theoretical analysis only, we consider a binary classification problem that maps vector inputs to binary labels; see Section 5 for further details. Such a problem setup is inspired by prior work (Cao & Gu, 2019; Allen-Zhu et al., 2019).

Streaming Learning Definition. Streaming learning considers an incoming data stream D = {(xt, yt)}nt=1 4 (i.e., t denotes temporal ordering) and adopts the following rule set during training:

1. Each unique data example appears once in D.
2. The order of data within D is arbitrary and possibly non-i.i.d.
3. Model evaluation may occur at any time t.

Notably, these requirements make no assumptions about model state prior to the commencement of the streaming process. Previous methodologies perform offline, base initialization procedures before streaming begins, while CSSL may either begin streaming from randomly-initialized or pre-trained parameters. As such, CSSL is capable of performing streaming even without first observing examples from D, while other methodologies are reliant upon data-dependent base initialization.

Evaluation Metric. In most experiments, performance is measured using Ωall (Hayes et al., 2018), defined as

$$\Omega_{\mathrm{all}}=\frac{1}{T}\sum_{t=1}^{T}\frac{\alpha_{t}}{\alpha_{\mathrm{offline},t}}\,,$$

where αt is streaming performance at testing event t, αoffline,t is offline performance at testing event t, and T is the number of total testing events. Ωall captures aggregate streaming performance (relative to offline training) throughout streaming. A higher score indicates better performance.

As an example, consider a class-incremental streaming setup with two testing events: one after 1⁄2 of the classes have been observed and one at the end of streaming. The streaming learner is evaluated at each testing event, yielding accuracy α1 and α2. Two models are trained offline over 1⁄2 of the classes and the full dataset, respectively, then evaluated to yield αoffline,1 and αoffline,2. From here, Ωall is given by 1⁄2 (α1/αoffline,1 + α2/αoffline,2).

3Streaming learning has been considered in the object detection domain (Acharya et al., 2020), but we consider this application beyond the scope of our work.

4E.g., xt ∈ R3×H×W (RGB image) and yt ∈ Z≥0 (class label) for computer vision applications.
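A direct transcription of this metric into code is straightforward; the short sketch below (with our own function and variable names) reproduces the two-event example above.

```python
def omega_all(streaming_acc, offline_acc):
    """Compute Omega_all = (1/T) * sum_t alpha_t / alpha_offline_t."""
    assert len(streaming_acc) == len(offline_acc) and len(streaming_acc) > 0
    return sum(a / o for a, o in zip(streaming_acc, offline_acc)) / len(streaming_acc)

# Two-event example from the text: Omega_all = 1/2 * (a1/o1 + a2/o2).
print(omega_all([0.60, 0.55], [0.80, 0.75]))  # ~0.7417
```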
By freezing a majority of network layers during streaming, previous methods can leverage learned quantization modules and pre-trained feature representations to drastically reduce the size of replay data (Hayes & Kanan, 2020; Hayes et al., 2020). For CSSL, full images, which induce a larger memory overhead, are stored in R and no network parameters are fixed, meaning that Compress cannot rely on pre-trained, fixed network layers. We explore several, data-independent Compress operations, such as resizing images, quantizing the integer representations of image pixels, and storing images on disk with JPEG compression; see Appendix B.2. Such compression methodologies significantly reduce memory overhead without impacting performance. Because storing the replay buffer on disk is not always possible (e.g., edge devices may lack disk-based storage), we present this as a supplemental approach and always present additional results without JPEG compression. Figure 1: Illustration of CSSL following the notation ![4_image_0.png](4_image_0.png) of Algorithm 1. Due to storing full images in the replay buffer, the proposed methodology has more memory overhead relative to prior techniques, making it less appropriate for memory-limited scenarios. However, many streaming applications (e.g., e-commerce recommendation systems, pre-labeling for data annotation, etc.) store and update models on cloud servers, making memory efficiency less of a concern. *We recommend CSSL for such* scenarios, as it outperforms prior methods given sufficient memory for replay. Model Updates. For each new example within D, CSSL updates learner parameters Θ (StreamingUpdate in Algorithm 1) using the new example and N replay buffer samples obtained via ReplaySample. Similar to prior work, ReplaySample uniformly samples data from R—alternative sampling strategies provide little benefit at scale (Aljundi et al., 2019b;a; Hayes et al., 2020); see Appendix B.4. StreamingUpdate is implemented as a simple SGD update of all network parameters over new and replayed data. Streaming updates pass the mixture of new and replayed data through a data augmentation pipeline (Augment in Algorithm 1). Though prior work uses simple augmentations (Castro et al., 2018; Verma et al., 2019; Hayes et al., 2020), we explore data interpolation (Zhang et al., 2017; Yun et al., 2019) and learned augmentation policies (Cubuk et al., 2018). In particular, CSSL combines random crops and flips, Mixup, Cutmix, and Autoaugment into a single, sequential augmentation policy5; see Appendix B.1 for further details. *Curating* a high quality data augmentation pipeline is the key CSSL's impressive performance. Our Methodology. In summary, CSSL, illustrated in Figure 1, initializes the network either randomly or with pre-trained parameters (Initialize). Then, for each new data example, N examples are sampled uniformly from the replay buffer (ReplaySample), combined with the new data, passed through a data augmentation pipeline (Augment), and then used to perform a training iteration (StreamingUpdate). Each new data example is added to the replay buffer (ReplayStore), and an example may be randomly removed from the replay buffer via random, uniform selection (ReplayEvict) if the capacity C is exceeded. ## 4 Why Is This Useful? The proposed methodology has two major benefits: - **Full Plasticity** (i.e., no fixed parameters) ## - Pre-Training Is Optional Such benefits are unique to CSSL; see Appendix C. 
However, one may question the validity of these "benefits"—*do they provide any tangible value?*

Full Plasticity. CSSL updates all model parameters throughout the streaming process.6 Some incremental learning methodologies train networks end-to-end (Castro et al., 2018; Hou et al., 2019; Wu et al., 2019), but these approaches either i) cannot be applied or ii) perform poorly when adapted to the streaming domain (Hayes et al., 2020); see Appendix B.6. Fixing network parameters is detrimental to the learning process. To show this, we pre-train a ResNet18 on ImageNet and fix different ratios of network parameters during fine-tuning on CIFAR10/100, finding that final accuracy deteriorates monotonically with respect to the ratio of frozen parameters; see Figure 2. Even given high-quality pre-training, fixing network parameters i) limits representational capacity and ii) prevents network representations from being adapted to new data, making end-to-end training a more favorable approach.

Table 1: Ωall of REMIND on ImageNet with different amounts of base initialization data. *REMIND performance deteriorates with less data*.

| No. Base Init. Classes | Top-1 Ωall | Top-5 Ωall |
|------------------------|------------|------------|
| 10                     | 0.478      | 0.651      |
| 50                     | 0.727      | 0.848      |
| 100                    | 0.835      | 0.856      |

Pre-Training is Optional. Prior streaming methods perform base initialization (i.e., fitting of network parameters and other modules over a subset of data) prior to streaming. Not only does base initialization introduce an expensive, offline training procedure, but it makes model performance dependent upon the availability of sufficient (possibly unlabeled (Gallardo et al., 2021)) pre-training data prior to streaming. Streaming performance degrades with less base initialization data; see Table 1. Thus, if little (or worse, no) data is available when streaming begins, such methods are doomed to poor performance.

![5_image_0.png](5_image_0.png)

Figure 2: Test accuracy of ResNet18 models that are pre-trained on ImageNet and fine-tuned on CIFAR10/100 with different ratios of frozen parameters.

5Mixup and Cutmix are not performed simultaneously. Our policy randomly chooses between Mixup or Cutmix for each data example with equal probability.

6Training the network end-to-end within CSSL does make model inference and updates more computationally expensive, which adds latency to streaming updates; see Appendix B.7 for further analysis.
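The parameter-freezing protocol behind the Figure 2 experiment can be sketched as follows. This is our own illustration: the paper does not specify the order in which parameters are frozen, so we assume freezing proceeds from the input side of the network until the target ratio is reached.

```python
import torch
import torchvision

def freeze_ratio(model: torch.nn.Module, ratio: float) -> None:
    """Freeze roughly the first `ratio` fraction of parameters in the model.

    Parameters are frozen in registration order (input side first); this
    ordering is an assumption, as the paper does not specify it.
    """
    total = sum(p.numel() for p in model.parameters())
    frozen = 0
    for p in model.parameters():
        if frozen < ratio * total:
            p.requires_grad = False
            frozen += p.numel()
        else:
            p.requires_grad = True

model = torchvision.models.resnet18(pretrained=True)  # ImageNet weights
freeze_ratio(model, 0.75)  # e.g., freeze ~75% of parameters before fine-tuning
```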
Consider, for example, pre-labeling for data annotation, where a streaming learner works with a human annotator to improve labeling efficiency. In this case, the annotated dataset may be built from scratch, meaning that no data is yet available when streaming begins. Similarly, cold-start scenarios for recommendation or classification systems often lack prior data from which to learn. Though pre-trained models may sometimes be downloaded online, this is not possible for domains that align poorly with public models and datasets (e.g., medical imaging). While prior approaches fail in these scenarios due to a dependence upon high-quality base initialization, CSSL can simply begin streaming from a random initialization and still perform well. ## 5 Theoretical Results CSSL's simple training pipeline enables the algorithm to be analyzed theoretically. To do this, we extend analysis of the neural tangent random feature (NTRF) (Cao & Gu, 2019) to encompass multi-layer networks trained via streaming to perform binary classification using a replay mechanism and data augmentation. The analytical setup is similar to Algorithm 1 with a few minor differences. We begin by providing relevant notation and definitions, then present the main theoretical result. ## 5.1 Preliminaries And Definitions Notation. We denote scalars and vectors with lower case letters, distinguished by context, and matrices with upper case letters. We define [l] = {1, 2*, . . . , l*} and kvkp = (Pd i=1 |vi| p) 1 p for v = (v1, v2*, . . . , v*d) > ∈ R d and 1 ≤ p < ∞. We also define kvk∞ = maxi|vi|. For matrix A ∈ R m×n, we use kAk0 to denote the number of non-zero entries in A. We use i ∼ N to denote a random sample i from distribution N , and N (*µ, σ*) to denote the normal distribution with mean µ and standard deviation σ. 1{·} is the indicator function. We then define kAkF = qPi,j A2 i,j and kAkp = maxkvkp=1 kAvkp for p ≥ 1 and v ∈ R n. For two matrices A, B ∈ R m×n, we have h*A, B*i = Tr(A>B). For vector v ∈ R d, diag(v) ∈ R d×dis a diagonal matrix with the entries of v on the diagonal. We also define the asymptotic notations O(·) and Ω(·) as follows. If an and bn are two sequences, an = O(bn) if lim supn→∞ | an bn | < ∞ and an = Ω(bn) if lim supn→∞ | an bn | > 0. Network Formulation. We consider fully forward neural networks with width m, depth L, input dimension d, and an output dimension of one. The weight matrices of the network are W1 ∈ R m×d, W` ∈ R m×m for ` = 2*, . . . , L* − 1, and WL ∈ R 1×m. 7 We denote W = {W1*, . . . , W*L} and define hW,W0i =PL i=1hWi, W0 i i when W,W0 are two sets containing corresponding weight matrices with equal dimension. All of such sets with corresponding weight matrices of the same dimension form the set W. Then, a forward pass with weights W = {W1, . . . , WL*} ∈ W* for input x ∈ R dis given by: fW(x) = √m · WLσ(WL−1σ(WL−2 . . . σ(W1(x))*. . .*)), (1) where σ(·) is the ReLU activation function. Following (Cao & Gu, 2019), the first and last weight matrices are not updated during training. Objective Function. For data stream D = {(xt, yt)} n t=1, we define LD(W) , E(x,y)∼DL(*x,y,ξ*)(W), where L(*x,y,ξ*)(W) = `[y · fW(x + ξ)] is the loss over (x, y) ∈ D with arbitrary perturbation vector ξrepresenting data augmentation—and `(z) = log(1 + e −z) is the cross-entropy loss. We use the notation 7Assuming the widths of each hidden layer are the same is a commonly-used simplification. See (Allen-Zhu et al., 2019; Cao & Gu, 2019) for papers that have adopted the same assumption. 
We use the notation L_{(t,ξ)}(·) = L_{(x_t,y_t,ξ)}(·) for convenience, and denote the 0-1 generalization error over the entire data stream D as L^{0−1}_D(W) ≜ E_{(x,y)∼D}[1{y · f_W(x + ξ) < 0}] with arbitrary perturbation vector ξ.

CSSL Formulation. Following Algorithm 1, our analysis considers a random initialization (Initialize) of model parameters W^{(0)} = {W^{(0)}₁, . . . , W^{(0)}_L} as shown below:

$$W_{j}^{(0)}\sim\mathcal{N}\left(0,\frac{2}{m}\right),\ \forall\ j\in[L-1],\qquad W_{L}^{(0)}\sim\mathcal{N}\left(0,\frac{1}{m}\right).\tag{2}$$

A buffer R of size C is used to maintain data for replay. All incoming data is added to R (ReplayStore), and an entry is randomly (uniformly) evicted from R (ReplayEvict) if |R| > C. Each online update takes B uniform samples from R (ReplaySample), forming a set S_t of replay examples at iteration t. The t-th update of model parameters W (StreamingUpdate) over new and replayed data is given by the following expression:

$$\mathbf{W}^{(t+1)}=\mathbf{W}^{(t)}-\eta\left(\nabla_{\mathbf{W}^{(t)}}L_{(x_{t},y_{t},\xi_{t})}(\mathbf{W}^{(t)})+\sum_{(x_{\mathrm{rep}},y_{\mathrm{rep}})\in S_{t}}\nabla_{\mathbf{W}^{(t)}}L_{(x_{\mathrm{rep}},y_{\mathrm{rep}},\xi_{\mathrm{rep}})}(\mathbf{W}^{(t)})\right),\tag{3}$$

where η is the learning rate, ξ_t is an arbitrary perturbation vector for the t-th data stream example, and ξ_rep is an arbitrary perturbation vector that is separately generated for each individual replay example. Our theoretical analysis does not consider the compression of replay examples (Compress(x) = x).

Data Augmentation. The t-th example in the data stream (x_t, y_t) ∈ D is augmented (Augment) using the perturbation vector ξ_t; i.e., the network always observes an augmented version of the data, given by (x_t + ξ_t, y_t). No assumptions are made about the vectors ξ_t, though our final rate depends on their magnitude. Similarly, all replay samples are augmented via ξ_rep, such that the network observes (x_rep + ξ_rep, y_rep) for each (x_rep, y_rep) ∈ S_t. All replay samples have unique perturbation vectors ξ_rep, but we use ξ_rep to denote all replay perturbation vectors for simplicity. We denote the set of all perturbation vectors used for new and replayed data throughout streaming as 𝒳 and define Ξ = sup_{ξ∈𝒳} ‖ξ‖²₂.

Neural Tangent Random Feature. For W ∈ 𝒲, we define the ω-neighborhood as follows:

$$\mathcal{B}(\mathbf{W},\omega)\triangleq\{\mathbf{W}'\in\mathcal{W}:\|W_{l}'-W_{l}\|_{\mathrm{F}}\leq\omega,\ \forall\,l\in[L]\}.$$

Given a set of all-zero weight matrices 0 ∈ 𝒲 and randomly initialized (according to equation 2) W^{(0)}, the Neural Tangent Random Feature (NTRF) (Cao & Gu, 2019) is then defined as the following function set:

$$\mathcal{F}(\mathbf{W}^{(0)},R)=\{f(\cdot)=f_{\mathbf{W}^{(0)}}(\cdot)+\langle\nabla_{\mathbf{W}}f_{\mathbf{W}^{(0)}}(\cdot),\mathbf{W}\rangle:\mathbf{W}\in\mathcal{B}(\mathbf{0},R)\},$$

where R > 0 controls the size of the NTRF. We use the following shorthand to denote the first-order Taylor approximation of the network output, given weights W, W′ ∈ 𝒲 and input x ∈ R^d:

$$F_{\mathbf{W},\mathbf{W}'}(x)\triangleq f_{\mathbf{W}}(x)+\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x),\mathbf{W}'-\mathbf{W}\rangle.$$

Using this shorthand, the NTRF can be alternatively formulated as:

$$\mathcal{F}(\mathbf{W}^{(0)},R)=\{f(\cdot)=F_{\mathbf{W}^{(0)},\mathbf{W}'}(\cdot):\mathbf{W}'-\mathbf{W}^{(0)}\in\mathcal{B}(\mathbf{0},R)\}.$$
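As a concrete (and purely illustrative) rendering of this setup, the initialization (2), forward pass (1), and streaming update (3) can be written in a few lines of PyTorch. Two assumptions are flagged in the comments: we treat 2/m and 1/m as variances, the standard reading in NTK-style analyses, and we hold the first and last matrices fixed as stated above (so the sketch assumes L ≥ 3).

```python
import torch

def init_weights(d, m, L):
    """Random initialization following equation (2); 2/m and 1/m read as variances."""
    Ws = [torch.randn(m, d) * (2.0 / m) ** 0.5]                    # W_1
    Ws += [torch.randn(m, m) * (2.0 / m) ** 0.5 for _ in range(L - 2)]
    Ws += [torch.randn(1, m) * (1.0 / m) ** 0.5]                   # W_L
    for W in Ws:
        W.requires_grad_()
    return Ws

def f(Ws, x):
    """Forward pass of equation (1): sqrt(m) * W_L sigma(... sigma(W_1 x))."""
    m = Ws[0].shape[0]
    h = x
    for W in Ws[:-1]:
        h = torch.relu(W @ h)
    return (m ** 0.5) * (Ws[-1] @ h).squeeze()

def loss(Ws, x, y, xi):
    """Loss l(y * f_W(x + xi)) with l(z) = log(1 + exp(-z))."""
    return torch.nn.functional.softplus(-y * f(Ws, x + xi))

def streaming_update(Ws, new_example, replay_batch, eta):
    """One step of equation (3) over the new example and B replay samples."""
    x_t, y_t, xi_t = new_example
    total = loss(Ws, x_t, y_t, xi_t)
    for x_rep, y_rep, xi_rep in replay_batch:
        total = total + loss(Ws, x_rep, y_rep, xi_rep)
    params = Ws[1:-1]  # first and last matrices stay fixed, per Section 5.1
    grads = torch.autograd.grad(total, params)
    with torch.no_grad():
        for W, g in zip(params, grads):
            W -= eta * g
```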
## 5.2 Convergence Guarantees

Here, we present the main result of our theoretical analysis for CSSL—an upper bound on the zero-one generalization error of networks trained via Algorithm 1, as described in Section 5.1. Proofs are deferred to Appendix E, but we provide a sketch in this section. Our analysis adopts the following set of assumptions.

Assumption 1. For all data (x_t, y_t) ∈ D, we assume ‖x_t‖₂ = 1. Given two data examples (x_i, y_i), (x_j, y_j) ∈ D, we assume ‖x_i − x_j‖₂ ≥ λ.

The conditions in Assumption 1 are widely used in analysis of overparameterized neural network generalization (Cao & Gu, 2019; Allen-Zhu et al., 2019; Oymak & Soltanolkotabi, 2020; Zou et al., 2018; Li & Liang, 2018). The unit norm assumption on the data is adopted for simplicity but can be relaxed to c₁ ≤ ‖x_i‖ ≤ c₂ for all (x_i, y_i) ∈ D and absolute constants c₁, c₂ > 0. Given these assumptions, we derive the following:

Theorem 1. *Assume Assumption 1 holds,* $\omega\leq\mathcal{O}\big(L^{-6}\log^{-3}(m)\big)$*, and* $m\geq\mathcal{O}\big(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m)\big)$*. Then, defining* Ŵ *as a uniformly-random sample from iterates* {W^{(0)}, . . . , W^{(n)}} *obtained via Algorithm 1 with* $\eta=\mathcal{O}\big(m^{-\frac{3}{2}}\big)$ *and* B *replay samples chosen at every iteration from buffer* R*, we have the following with probability at least* 1 − δ:

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}\big(\hat{\mathbf{W}}\big)\right]\leq\inf_{f\in\mathcal{F}(\mathbf{W}^{(0)},R/m)}\left(\frac{4}{n}\sum_{i=1}^{n}\ell\big(y_{i}\cdot f(x_{i}+\xi_{i})\big)\right)+\sqrt{\frac{2\log\left(\frac{1}{\delta}\right)}{n}}+\mathcal{O}\left(\frac{(1+\Xi)^{\frac{1}{2}}B^{\frac{1}{2}}n+R^{2}}{L^{2}\log^{\frac{3}{2}}(m)}\right),$$

where ξ_t *is an arbitrary perturbation vector representing data augmentation applied at streaming iteration* t, R > 0 *is a constant controlling the size of the NTRF function class, and* δ ∈ [0, 1).

Discussion. Theorem 1 provides an upper bound on the 0-1 generalization error over D for a network trained via Algorithm 1. This bound has three components. The first component—shaded in red—captures the 0-1 error achieved by the best function within the NTRF F(W^{(0)}, R/m). Intuitively, this term captures the "classifiability" of the data—it has a large value when no function in the ω-neighborhood of W^{(0)} can fit the data well and vice versa. Given fixed m, however, this term can be made arbitrarily small by increasing R to enlarge the NTRF and, in turn, increase the size of the function space considered in the infimum.

The second term—shaded in green—is a probabilistic term that arises after invoking an online-to-batch conversion argument (Cesa-Bianchi et al., 2004) to relate training error to 0-1 generalization error. This term contains log(1/δ) in the numerator and n in the denominator. Smaller values of δ will make the bound looser but allow it to hold with higher probability. On the other hand, larger values of n make this term arbitrarily small, meaning that the bound can be improved by observing more data.

The final term—shaded in blue—considers all other additive terms that arise from technical portions of the proof. This asymptotic expression grows with Ξ, R, B, and n. However, the term $L^{2}\log^{\frac{3}{2}}(m)$ occurs in the denominator, revealing that this final term will shrink as the model is made wider and deeper.8 Thus, Theorem 1, as a whole, shows that the network generalizes well given sufficiently large m, L, and n—meaning that the network is sufficiently overparameterized and enough data has been observed.

Sketch. The proof of Theorem 1—provided in Appendix E—proceeds as follows:

- We first show that the average 0-1 loss of iterates {W^{(0)}, . . . , W^{(n)}} obtained during streaming can be upper bounded by a constant multiple of the average cross-entropy loss of the same iterates.

- Invoking Lemma 1, we convert this average loss of iterates {W^{(0)}, . . . , W^{(n)}} to an average loss of weights W⋆ ∈ B(W^{(0)}, R/m), where R > 0.
- Using an online-to-batch conversion argument (see Proposition 1 in (Cesa-Bianchi et al., 2004)), we lower bound the average 0-1 loss with the expected 0-1 loss over $\mathcal{D}$ with probability $1 - \delta$.
- From here, we leverage Lemma 3 to relate the average loss of $\mathbf{W}^{\star}$ over streaming iterations to the NTRF, taking an infimum over all functions $f \in \mathcal{F}(\mathbf{W}^{(0)}, R/m)$ to yield the final bound.

8Observe that $n$ appears in the denominator of the green term and the numerator of the blue term of Theorem 1. However, because the model is assumed to be overparameterized (i.e., $m \gg n$), the blue term is still negligible even given values of $n$ that are sufficiently large to make the green term small.

| Method | Base Init. Required | Ratio of Params Frozen | Uses Replay |
|--------------------------------|---|-----|---|
| ExStream (Hayes et al., 2018)  | ✓ | 95% | ✓ |
| DeepSLDA (Hayes & Kanan, 2020) | ✓ | 95% | ✗ |
| REMIND (Hayes et al., 2020)    | ✓ | 75% | ✓ |
| CSSL                           | ✗ | 0%  | ✓ |

Table 2: A basic outline of properties for each of the considered streaming algorithms.

## 6 Experiments

Following prior work, we use ResNet18 to perform class-incremental streaming experiments on CIFAR100 and ImageNet (Krizhevsky et al., 2009; Deng et al., 2009), experiments with various different data orderings on Core50 (Lomonaco & Maltoni, 2017), and experiments using a novel, multi-task streaming learning setting. As baselines, we adopt the widely-used streaming algorithms ExStream (Hayes et al., 2018), Deep SLDA (Hayes & Kanan, 2020), and REMIND (Hayes et al., 2020); see Table 2 for details. Because REMIND freezes most network layers after base initialization, we also study a REMIND variant ("REMIND + Extra Params") with added, trainable layers such that the number of trainable parameters is equal to that of CSSL.9

All experiments split the dataset(s) into several batches, and data ordering is fixed between comparable experiments. Evaluation occurs after each batch, and baselines use the first batch for base initialization—the first batch is seen both during base initialization and streaming. CSSL performs no base initialization, but may begin the streaming process with pre-trained parameters—such experiments are identified accordingly. We first present class-incremental streaming experiments, followed by an analysis of different data orderings and multi-task streaming. The section concludes with supplemental analysis that considers training other network architectures and studies behavioral properties of networks trained via CSSL.

## 6.1 Class-Incremental Learning

We perform class-incremental streaming learning experiments using a ResNet18 architecture on CIFAR100 and ImageNet. The results of these experiments are summarized below, and a discussion of the results follows.

CIFAR100. The dataset is divided into batches containing 20 classes, and results are averaged over three different data orderings. Experiments are conducted using replay buffer memory capacities ranging from 30Mb (i.e., 10K CIFAR100 images) to 150Mb. These capacities are not relevant to Deep SLDA or REMIND, as Deep SLDA does not use a replay buffer and REMIND can store the full dataset using < 30Mb of memory. CSSL compresses replay examples by quantizing pixels to 6 bits and resizing images to 66% of their original area. In some cases, CSSL is initialized with parameters that are pre-trained over the first 20 classes of CIFAR100; see Appendix A.1 for details. The results of these experiments are illustrated in Figure 3.
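This replay compression amounts to a downscaling followed by pixel quantization. Below is a minimal sketch under the settings described above; the helper name is ours, with 6-bit quantization and a 66% area reduction as in the CIFAR100 experiments (the ImageNet experiments described next use 4 bits and 75% for in-memory replay).

```python
import numpy as np
from PIL import Image

def compress_for_replay(img: Image.Image, bits: int = 6,
                        area_frac: float = 0.66) -> Image.Image:
    """Resize an image to `area_frac` of its original area, then quantize each
    8-bit pixel value down to `bits` bits."""
    scale = area_frac ** 0.5  # side length scales with the square root of the area
    w, h = img.size
    img = img.resize((max(1, round(w * scale)), max(1, round(h * scale))),
                     Image.BILINEAR)
    arr = np.asarray(img)
    shift = 8 - bits
    arr = (arr >> shift) << shift  # keep only the `bits` high-order bits per pixel
    return Image.fromarray(arr)
```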
![9_image_0.png](9_image_0.png)

Figure 3: Streaming performance on CIFAR100 using different replay buffer capacities. CSSL outperforms all existing approaches, no matter the buffer capacity, and performs even better with pre-training.

9This modified version of REMIND has the same number of trainable parameters as a normal ResNet18 model and is included as a baseline to ensure the benefit of CSSL is not simply due to the presence of more trainable parameters within the underlying network.

ImageNet. The dataset is divided into batches of 100 classes. Following (Hayes et al., 2020), we perform experiments with replay buffer memory capacities of 1.5Gb and 8Gb. For reference, 1.5Gb and 8Gb experiments with (without) JPEG compression store 100,000 (10,000) and 500,000 (50,000) images for replay, respectively, given standard pre-processing. Such capacities are not relevant to Deep SLDA. Several CSSL variants are tested that i) optionally pre-train over the base initialization set and ii) may store the replay buffer on disk. For in-memory replay, data is compressed by quantizing each pixel to four bits and resizing images to 75% of their original area. No quantization or resizing is employed when the replay buffer is stored on disk; see Appendix A.1 for details. ImageNet results are provided in Table 3.

Discussion. CSSL far outperforms baselines on CIFAR100. Even at the lowest buffer capacity with a random parameter initialization, our methodology exceeds the performance of REMIND by 3.6% absolute Ωall given an equal number of trainable parameters, revealing that i) the performance improvement of CSSL is not solely due to having more trainable parameters and ii) the proposed approach is capable of achieving impressive performance even with limited memory overhead. When a pre-trained parameter initialization is used, the performance improvement of CSSL is even more pronounced, reaching an absolute Ωall of 0.974 that nearly matches offline performance. For comparison, REMIND achieves an Ωall of 0.790 with an equal number of trainable parameters and unlimited replay capacity.

| Method | 1.5Gb | 8Gb |
|-------------------------|-------|-------|
| ExStream                | 0.569 | 0.594 |
| Deep SLDA               | 0.752 | 0.752 |
| REMIND                  | 0.855 | 0.856 |
| REMIND + Extra Params   | 0.869 | 0.873 |
| CSSL                    | 0.740 | 0.873 |
| CSSL + Pre-Train        | 0.750 | 0.903 |
| CSSL + JPEG             | 0.899 | 0.951 |
| CSSL + Pre-Train + JPEG | 0.909 | 0.964 |

Table 3: Top-5 Ωall on ImageNet with 1.5Gb and 8Gb replay buffer capacities. Methods that store the replay buffer in main memory (upper rows) and on disk (bottom two rows) are listed separately. REMIND performs well under memory constraints, but CSSL surpasses REMIND performance as the replay buffer grows in size.

On ImageNet, the proposed methodology without JPEG compression performs comparably to Deep SLDA at a memory capacity of 1.5Gb. Performance improves significantly—and surpasses baseline performance—as the replay buffer grows. For example, CSSL without JPEG compression matches or exceeds the performance of all baselines given a replay buffer capacity of 8Gb, and performance continues to improve when a pre-trained parameter initialization is used and as more images are retained for replay. In fact, using pre-trained parameters and 8Gb of memory on disk for replay, the proposed methodology can achieve a Top-5 Ωall of 0.964, nearly matching offline training performance.
What's possible with more memory? REMIND is capable of storing the replay buffer with limited memory overhead and is the highest-performing method under strict memory limitations; e.g., REMIND achieves the best performance given a 1.5Gb buffer capacity on ImageNet assuming no access to disk storage. Despite the applicability of streaming learning to edge-device scenarios (Hayes et al., 2018; Hayes & Kanan, 2022), many streaming applications exist that do not impose strict memory requirements due to ease of access to high-end hardware on the cloud. Thus, one may begin to wonder whether better performance is possible when memory constraints upon the replay buffer are relaxed.

As outlined in Table 3, REMIND achieves an Ωall of 0.856 (0.873 with trainable parameters equal to CSSL) when the entire dataset is stored for replay. Such performance is > 9% absolute Ωall below the best-performing CSSL experiment (i.e., 8Gb capacity with JPEG storage and pre-training), which, despite increased memory overhead relative to REMIND, does not store the whole dataset in the replay buffer. The proposed approach continues improving as the replay buffer becomes larger, reaching a Top-5 Ωall of 0.975 given unlimited replay capacity and a pre-trained parameter initialization. Although REMIND may be preferable in memory-constrained scenarios, the proposed approach continues improving as more memory is allocated for replay, reaching performance levels much closer to that of offline training.

## 6.2 Different Data Orderings

We perform streaming experiments on the Core50 dataset using i.i.d. (ID), class i.i.d. (CID), instance (I), and class-instance (CI) orderings (Lomonaco & Maltoni, 2017). For each ordering, ten different permutations are generated, and performance is reported as an average across permutations. The dataset is split into batches of 1200 samples. Experiments are conducted with replay buffer memory capacities of 100Mb (i.e., 2,000 images from Core50, or 1/3 of the full dataset), 200Mb, and 300Mb. These memory capacities only apply to CSSL, while baselines are given unlimited replay capacity. The proposed methodology compresses images using 4-bit integer quantization and resizing to 75% of original area; see Appendix A.2 for details. Experiments are performed both with and without ImageNet pre-trained weights, allowing performance to be observed in scenarios without a high-quality parameter initialization.

| Method | Top-1 Ωall without Pre-Training | | | | Top-1 Ωall with Pre-Training | | | |
|-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|
|                       | ID    | CID   | I     | CI    | ID    | CID   | I     | CI    |
| ExStream              | 0.719 | 0.635 | 0.734 | 0.627 | 0.959 | 0.936 | 0.954 | 0.935 |
| Deep SLDA             | 0.721 | 0.660 | 0.730 | 0.626 | 0.971 | 0.952 | 0.962 | 0.957 |
| REMIND                | 0.719 | 0.628 | 0.741 | 0.600 | 0.985 | 0.971 | 0.986 | 0.973 |
| REMIND + Extra Params | 0.636 | 0.586 | 0.693 | 0.594 | 0.994 | 0.961 | 0.996 | 0.966 |
| CSSL (100Mb Buffer)   | 1.106 | 0.977 | 1.071 | 0.990 | 0.963 | 0.979 | 0.962 | 0.976 |
| CSSL (200Mb Buffer)   | 1.107 | 1.005 | 1.104 | 0.995 | 0.968 | 0.981 | 0.977 | 0.976 |
| CSSL (300Mb Buffer)   | 1.153 | 1.009 | 1.112 | 1.024 | 0.976 | 0.986 | 0.974 | 0.979 |

Table 4: Core50 streaming performance for different data orderings, averaged across ten permutations.
In experiments without ImageNet pre-training, baseline methodologies perform base initialization over the first 1200 dataset examples, while CSSL begins streaming from a random initialization. When ImageNet pre-training is used, CSSL begins the streaming process with these ImageNet pre-trained parameters, while baseline methodologies perform base initialization beginning from the pre-trained weights. *CSSL observes no data from the target domain prior to the commencement of streaming.* Core50 results are provided in Table 4.

Discussion. Without ImageNet pre-training, our methodology surpasses baseline performance at all memory capacities and often exceeds offline training performance. In this case, Ωall > 1 indicates that models trained via CSSL outperform offline-trained models during evaluation. This result is likely due to the use of simple augmentations (i.e., random crops and flips) for training the offline models used to compute Ωall; see Appendix A.2. Nonetheless, these findings highlight the broad applicability of CSSL. The proposed approach flourishes in this setting because it i) has no dependence upon base initialization and ii) is capable of learning all model parameters via streaming. In contrast, baseline methodologies struggle with the low-quality base initialization provided by only 1200 data examples. When initialized with pre-trained weights, baseline methodologies perform much better, though the proposed methodology still matches or exceeds this performance. We emphasize that such pre-trained parameter initializations are not always possible; see Section 4. In numerous practical scenarios (e.g., surgical video, medical imaging, satellite or drone imaging, etc.), well-aligned, annotated public datasets may not exist. The proposed methodology performs well with or without pre-training due to its end-to-end approach to streaming learning and even surpasses offline training performance when beginning streaming from a cold start.

## 6.3 Multi-Task Streaming Learning

The Setting. In addition to our proposal of CSSL, we propose and explore a new, multi-task streaming learning domain. Although similar setups have been explored for batch incremental learning (Hou et al., 2018), no work has attempted multi-task learning in a streaming fashion, where multiple datasets with disjoint output spaces are learned sequentially, one example at a time. Such a setting is particularly interesting because the introduction of new datasets causes large shifts in properties of the data stream. In such scenarios, the learner must adapt to significant changes in incoming data, making the multi-task streaming learning domain an interesting test case for CSSL. We consider multi-task streaming learning over several image classification datasets.
In particular, we perform multi-task streaming learning over the CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), MIT Scenes (Quattoni & Torralba, 2009), and FGVC-Aircrafts (Maji et al., 2013) datasets. The datasets are learned in that order and are introduced to the data stream one example at a time in a class-incremental fashion; see Appendix A.3 for further details. Though the ordering of datasets is never changed, the ordering of data within each dataset may be changed. Comparable experiments use the same ordering of data. To handle the disjoint output spaces of each dataset, models for both the proposed methodology and baselines are modified to have a separate output layer for each dataset.

| ImageNet Pre-Training | Method | MIT Scenes | CUB-200 | Flowers | FGVC |
|---|-----------|--------------|--------------|--------------|--------------|
| ✓ | Offline   | 71.42 ± 0.60 | 73.70 ± 0.01 | 91.98 ± 0.02 | 76.05 ± 0.17 |
| ✓ | ExStream  | 58.96 ± 0.70 | 26.04 ± 0.46 | 69.16 ± 0.27 | 33.33 ± 0.56 |
| ✓ | Deep SLDA | 60.05 ± 0.35 | 29.85 ± 0.04 | 72.56 ± 0.13 | 34.16 ± 0.20 |
| ✓ | REMIND    | 54.23 ± 0.58 | 46.27 ± 1.01 | 78.06 ± 1.47 | 36.56 ± 3.24 |
| ✓ | CSSL      | 54.30 ± 1.95 | 59.94 ± 0.97 | 88.36 ± 0.98 | 45.29 ± 2.11 |
| ✗ | Offline   | 51.60 ± 0.56 | 46.95 ± 0.25 | 83.10 ± 0.57 | 60.12 ± 0.44 |
| ✗ | ExStream  | 31.84 ± 0.34 | 9.37 ± 0.08  | 40.31 ± 0.15 | 16.58 ± 0.20 |
| ✗ | Deep SLDA | 35.05 ± 0.19 | 11.30 ± 0.10 | 44.04 ± 0.09 | 16.89 ± 0.06 |
| ✗ | REMIND    | 28.56 ± 0.71 | 17.97 ± 0.41 | 43.97 ± 1.29 | 16.53 ± 0.68 |
| ✗ | CSSL      | 42.49 ± 0.73 | 40.08 ± 0.45 | 78.70 ± 0.45 | 25.98 ± 2.30 |

Table 5: Top-1 test accuracy of models trained with different methodologies for multi-task streaming learning. CSSL is able to better adapt its representations to each of the new datasets over time, resulting in significant performance improvements relative to baselines.

Methodology. For simplicity, all methodologies are given unlimited replay capacity for multi-task streaming learning experiments. Baseline methodologies perform base initialization over the first half of the MIT indoor dataset, optionally beginning with ImageNet pre-trained parameters. CSSL begins the streaming process with either random or ImageNet pre-trained parameters, such that no data from the target domain is observed prior to the commencement of streaming. All tests are performed over three distinct permutations of the data, and performance is averaged across permutations. During streaming learning, each update performs i) an update with replayed and new data for the dataset currently being learned and ii) a separate, full update with replayed data for each of the datasets that have been learned previously; a minimal sketch of this scheme is given below. We find that such a strategy for multi-task replay performs better than other strategies, such as performing a single update that encompasses all datasets; see Appendix B.5. Deep SLDA and ExStream do not adopt this replay strategy, as only the final classification layers of the model are updated during streaming. Thus, multi-task streaming learning is identical to performing separate streaming experiments on each dataset. Performance in terms of Top-1 test accuracy is recorded separately over each dataset at the end of the streaming process, as shown in Table 5.10
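The sketch below illustrates this multi-task update scheme under stated assumptions: the names are ours, and `replay_batch` stands in for whatever buffer sampling routine is used. The model keeps one output head per dataset, performs one update over new plus replayed data for the current dataset, and then one replay-only update per previously seen dataset.

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared backbone with a separate output layer per dataset (disjoint label spaces)."""

    def __init__(self, backbone, feat_dim, classes_per_dataset):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(nn.Linear(feat_dim, c) for c in classes_per_dataset)

    def forward(self, x, dataset_id):
        return self.heads[dataset_id](self.backbone(x))

def multitask_update(model, optimizer, loss_fn, buffers, x_new, y_new, dataset_id):
    # (i) update with replayed and new data for the dataset currently being learned
    x, y = buffers[dataset_id].replay_batch(extra=(x_new, y_new))  # hypothetical helper
    optimizer.zero_grad()
    loss_fn(model(x, dataset_id), y).backward()
    optimizer.step()
    # (ii) a separate, full update with replayed data for each previous dataset
    for prev_id in range(dataset_id):
        x_rep, y_rep = buffers[prev_id].replay_batch()  # hypothetical helper
        optimizer.zero_grad()
        loss_fn(model(x_rep, prev_id), y_rep).backward()
        optimizer.step()
```

Performing the replay-only updates separately per dataset, rather than folding all datasets into a single batch, matches the strategy found to work best in Appendix B.5.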
Discussion. As shown in Table 5, CSSL, both with and without pre-training, far outperforms all baseline methodologies on nearly all datasets in the multi-task streaming domain. For example, when ImageNet pre-training is used, CSSL outperforms REMIND by 10% absolute test accuracy (or more) on the CUB-200, Oxford Flowers, and FGVC-Aircrafts datasets. When no ImageNet pre-training is used, the performance improvement is even more drastic. In particular, CSSL provides a 22% and 35% absolute boost in Top-1 accuracy on the CUB-200 and Oxford Flowers datasets, respectively.

Interestingly, CSSL achieves lower performance than ExStream and Deep SLDA on the MIT Scenes dataset when ImageNet pre-training is used. The impressive performance of these baselines on MIT Scenes is caused by the fact that i) ExStream and Deep SLDA only update the model's classification layer during streaming and ii) the fixed feature extractor being used is pre-trained on ImageNet and fine-tuned on over half of the MIT Scenes dataset. Thus, ExStream and Deep SLDA perform well on this dataset only because their feature extractor has been extensively pre-trained and exposed to a large portion of the dataset during base initialization. When ImageNet pre-training is not included, the fixed feature extractor is of lower quality, leading ExStream and Deep SLDA to be outperformed by CSSL even on the MIT Scenes dataset.

10We choose to avoid reporting performance in terms of Ωall and only report performance at the end of streaming to match the evaluation strategy of (Hou et al., 2018) and to make results less difficult to interpret.

Adapting the model's representations to new datasets is pivotal to achieving competitive performance in the multi-task streaming learning domain. Baseline methodologies—due to the fixing of parameters after base initialization—cannot adapt a majority of the model's parameters, leading their representations to be biased towards data observed during base initialization. Such a dilemma emphasizes the importance of developing streaming algorithms, such as CSSL, that learn end-to-end. By using CSSL for multi-task streaming learning, model representations are not fixed based on a subset of data that does not reflect future, incoming datasets, and model performance can benefit from the positive inductive transfer that occurs by updating all model parameters over multiple tasks or datasets simultaneously.

## 6.4 Supplemental Experiments And Analysis

We now present supplemental results that explore the properties of models trained with CSSL, as well as other model architectures.

| Dataset  | Method | Average ECE |
|----------|-------------------------------------------|---------------|
| CIFAR100 | ExStream                                  | 0.301 ± 0.008 |
|          | Deep SLDA                                 | 0.296 ± 0.012 |
|          | REMIND                                    | 0.023 ± 0.000 |
|          | CSSL + Pre-Train (30Mb Buffer)            | 0.031 ± 0.005 |
|          | CSSL + Pre-Train (150Mb Buffer)           | 0.023 ± 0.003 |
|          | CSSL + Pre-Train + No Aug. (150Mb Buffer) | 0.161 ± 0.031 |
|          | CSSL (30Mb Buffer)                        | 0.040 ± 0.001 |
|          | CSSL (150Mb Buffer)                       | 0.036 ± 0.002 |
|          | CSSL + No Aug. (150Mb Buffer)             | 0.149 ± 0.015 |
| ImageNet | REMIND                                    | 0.021 ± 0.005 |
|          | CSSL                                      | 0.043 ± 0.003 |
|          | CSSL + No Aug.                            | 0.266 ± 0.009 |

Table 6: Average ECE of models trained via different streaming methodologies on class-incremental streaming experiments with the CIFAR100 and ImageNet datasets. *Both REMIND and CSSL are found to facilitate favorable calibration properties within the underlying model.*

Confidence Calibration Analysis. Confidence calibration characterizes the ability of a model to provide an accurate probability of correctness along with any of its predictions.
In many cases, model softmax scores are erroneously interpreted as valid confidence estimates, though it is widely known that deep networks make incorrect or poor predictions with high confidence (Guo et al., 2017). Ideally, deep models should be highly calibrated, as such a property would allow incorrect model predictions to be identified and discarded. For streaming learning, confidence calibration is especially useful, as one must decide during the streaming process whether the model's predictions should start to be relied upon. If the underlying model is highly calibrated throughout streaming, this problem solves itself: just use predictions that are made with high confidence, indicating a high probability of correctness.

Previous work indicates that using Mixup encourages good calibration properties (Thulasidasan et al., 2019), suggesting that models obtained from CSSL and REMIND—both of which use some form of Mixup—should be highly calibrated. In this section, we measure confidence calibration, in terms of average expected calibration error (ECE) across testing events, of models during class-incremental streaming experiments on CIFAR100 and ImageNet with different streaming methodologies; see Appendix A.4 for further details. As shown in Table 6, CSSL and REMIND both aid model calibration. Interestingly, the confidence calibration of CSSL degrades significantly if only random crops and flips are used for augmentation (i.e., "CSSL + No Aug."), *highlighting the positive impact of a proper data augmentation policy on calibration.* Using an i.i.d. data ordering, we measure calibration more frequently throughout the streaming process in Figure 4 and find that CSSL is highly calibrated throughout streaming when beginning from a random initialization and improves upon the calibration of REMIND when beginning from pre-trained parameters.

![13_image_0.png](13_image_0.png)

Figure 4: ECE of models trained via REMIND and CSSL on an i.i.d. ordering of CIFAR100. *CSSL maintains better anytime calibration compared to REMIND.*

Performance Impact of a Cold Start. To better understand how model performance evolves throughout the streaming process, we perform class-incremental learning experiments on CIFAR100 and measure model accuracy after every new class that is introduced during streaming; see Appendix A.5 for more details.

![14_image_0.png](14_image_0.png)

Figure 5: CIFAR100 class-incremental streaming performance measured after each new class is learned.

As shown in Figure 5, baseline methodologies begin with a high accuracy but sharply decline in performance when the model encounters data not included in base initialization, and they continue to slowly degrade in performance throughout the remainder of the learning process. In contrast, randomly-initialized CSSL begins with relatively poor performance that improves gradually as more data is observed, eventually reaching a stable plateau in performance that surpasses all baseline methodologies. When beginning streaming from a pre-trained parameter initialization, this initial period of poor performance is eliminated, and the model still reaches a relatively stable plateau of higher performance. Baseline methodologies perform well initially because of the data that was observed during base initialization, which is made evident by the sharp drop in baseline performance upon encountering new data; see Figure 5.
Notably, CSSL does not experience this drop in accuracy even when beginning from pre-trained parameters, revealing that end-to-end training mitigates bias towards data used for base initialization. Furthermore, the ability of CSSL to find a stable plateau in performance reveals that streaming learners need not always decay in performance as the data stream moves further away from data seen during base initialization. With CSSL, model representations can be adapted to changes in data over time to facilitate stable performance.

Different Network Architectures. Because CSSL trains the underlying network in an end-to-end fashion and is not dependent upon base initialization, its implementation is simple and agnostic to the particular choice of model architecture (i.e., substituting different architectures requires no implementation changes). Because a majority of prior work only studies ResNet architectures (Hayes et al., 2018; Hayes et al., 2020), we use the proposed methodology to study the behavior of ResNet101 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), DenseNet121 (Huang et al., 2017), and Wide ResNet50-2 (Zagoruyko & Komodakis, 2016) architectures in class-incremental streaming experiments on ImageNet; see Appendix A.6 for further details. The replay buffer is stored on disk using JPEG compression with a memory capacity of 1.5Gb. The results of these experiments, recorded in terms of Top-5 Ωall, are provided in Table 7, where it can be seen that CSSL achieves competitive performance with each of the different model architectures.

| Model | Top-5 Ωall |
|-----------------|-------|
| ResNet101       | 0.872 |
| MobileNetV2     | 0.904 |
| DenseNet121     | 0.898 |
| Wide ResNet50-2 | 0.880 |

Table 7: Class-incremental streaming performance with CSSL and various model architectures. CSSL performs competitively with all architectures.

## 7 Conclusion

We present CSSL, a new approach to streaming learning with deep neural networks that trains the network end-to-end and uses a combination of replay mechanisms and sophisticated data augmentation to prevent catastrophic forgetting. Because the underlying network is learned end-to-end with CSSL, no base initialization is required prior to streaming, yielding a single-stage learning approach that is both simple and performant. In experiments, CSSL is found to surpass baseline performance—and even match or exceed offline training performance—on numerous established experimental setups, as well as on a multi-task streaming learning setup proposed in this work. We hope that CSSL inspires further developments in deep streaming learning by demonstrating the surprising effectiveness of simple learning techniques.

## References

Manoj Acharya, Tyler L Hayes, and Christopher Kanan. Rodeo: Replay for online object detection. arXiv preprint arXiv:2008.06439, 2020.

Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3366–3375, 2017.

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 139–154, 2018.

Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, and Tinne Tuytelaars. Online continual learning with maximally interfered retrieval. *arXiv preprint* arXiv:1908.04742, 2019a.
Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. *arXiv preprint arXiv:1903.08671*, 2019b. Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In *International Conference on Machine Learning*, pp. 242–252. PMLR, 2019. Eden Belouadah and Adrian Popescu. Il2m: Class incremental learning with dual memory. In *Proceedings* of the IEEE/CVF International Conference on Computer Vision, pp. 583–592, 2019. Eden Belouadah and Adrian Popescu. Scail: Classifier weights scaling for class incremental learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1266–1275, 2020. Eden Belouadah, Adrian Popescu, and Ioannis Kanellos. A comprehensive study of class incremental learning algorithms for visual tasks. *Neural Networks*, 2020. Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. *Advances in neural information processing systems*, 32, 2019. Francisco M Castro, Manuel J Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. Endto-end incremental learning. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 233–248, 2018. Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. *IEEE Transactions on Information Theory*, 50(9):2050–2057, 2004. Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 532–547, 2018a. Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. *arXiv preprint arXiv:1812.00420*, 2018b. Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning. *arXiv* preprint arXiv:1902.10486, 2019. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. *arXiv preprint arXiv:1805.09501*, 2018. Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition Workshops, pp. 702–703, 2020. Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, and Nasser M Nasrabadi. Supermix: Supervising the mixing data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13794–13803, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. Learning without memorizing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5138–5146, 2019. Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle. Podnet: Pooled outputs distillation for small-tasks incremental learning. In *Computer Vision–ECCV 2020: 16th European* Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16, pp. 86–102. Springer, 2020. 
Timothy J Draelos, Nadine E Miner, Christopher C Lamb, Jonathan A Cox, Craig M Vineyard, Kristofor D Carlson, William M Severa, Conrad D James, and James B Aimone. Neurogenesis deep learning: Extending deep networks to accommodate new classes. In *2017 International Joint Conference on Neural* Networks (IJCNN), pp. 526–533. IEEE, 2017. Tommaso Furlanello, Jiaping Zhao, Andrew M Saxe, Laurent Itti, and Bosco S Tjan. Active long term memory networks. *arXiv preprint arXiv:1606.02355*, 2016. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059. PMLR, 2016. Jhair Gallardo, Tyler L Hayes, and Christopher Kanan. Self-supervised training enhances online continual learning. *arXiv preprint arXiv:2103.14010*, 2021. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pp. 1321–1330. PMLR, 2017. Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, and Hideki Nakayama. Faster autoaugment: Learning augmentation strategies using backpropagation. In *European Conference on Computer Vision*, pp. 1–16. Springer, 2020. Tyler L. Hayes. Exstream. https://github.com/tyler-hayes/ExStream, 2019. Tyler L. Hayes. Remind. https://github.com/tyler-hayes/REMIND, 2020a. Tyler L. Hayes. Deep slda. https://github.com/tyler-hayes/Deep_SLDA, 2020b. Tyler L Hayes and Christopher Kanan. Lifelong machine learning with deep streaming linear discriminant analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 220–221, 2020. Tyler L Hayes and Christopher Kanan. Online continual learning for embedded devices. arXiv preprint arXiv:2203.10681, 2022. Tyler L. Hayes, Nathan D. Cahill, and Christopher Kanan. Memory Efficient Experience Replay for Streaming Learning. *arXiv e-prints*, art. arXiv:1809.05922, September 2018. Tyler L Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, and Christopher Kanan. Remind your neural network to prevent catastrophic forgetting. In *European Conference on Computer Vision*, pp. 466–483. Springer, 2020. Tyler L Hayes, Giri P Krishnan, Maxim Bazhenov, Hava T Siegelmann, Terrence J Sejnowski, and Christopher Kanan. Replay in deep learning: Current approaches and missing biological elements. *arXiv preprint* arXiv:2104.04132, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. *arXiv preprint arXiv:1610.02136*, 2016. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint* arXiv:1503.02531, 2015. Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Lifelong learning via progressive distillation and retrospection. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 437–452, 2018. Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 831–839, 2019. 
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017. Hiroshi Inoue. Data augmentation by pairing samples for images classification. *arXiv preprint* arXiv:1801.02929, 2018. Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. *IEEE* transactions on pattern analysis and machine intelligence, 33(1):117–128, 2010. Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122, 2016. Ronald Kemker and Christopher Kanan. Fearnet: Brain-inspired model for incremental learning. arXiv preprint arXiv:1711.10563, 2017. Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the national academy of sciences*, 114(13):3521–3526, 2017. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017. Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee. Overcoming catastrophic forgetting with unlabeled data in the wild. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 312–321, 2019. Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. *Advances in Neural Information Processing Systems*, 31, 2018. Zhizhong Li and Derek Hoiem. Learning without forgetting. *IEEE transactions on pattern analysis and* machine intelligence, 40(12):2935–2947, 2017. Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. Advances in Neural Information Processing Systems, 32:6665–6675, 2019. Zhiqiu Lin, Deva Ramanan, and Aayush Bansal. Streaming self-training via domain-agnostic unlabeled images. *arXiv preprint arXiv:2104.03309*, 2021. Kuang Liu. Pytorch-cifar. https://github.com/kuangliu/pytorch-cifar, 2017. Vincenzo Lomonaco and Davide Maltoni. Core50: a new dataset and benchmark for continuous object recognition. In *Conference on Robot Learning*, pp. 17–26. PMLR, 2017. David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30:6467–6476, 2017. Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision* and Pattern Recognition, pp. 10437–10446, 2020. Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. *arXiv preprint arXiv:1306.5151*, 2013. Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In *Proceedings of the IEEE conference on Computer Vision and Pattern Recognition*, pp. 7765–7773, 2018. 
Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 67–82, 2018. Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109–165. Elsevier, 1989. Radford M Neal. *Bayesian learning for neural networks*, volume 118. Springer Science & Business Media, 2012. Duy-Kien Nguyen and Takayuki Okatani. Multi-task learning of hierarchical vision-language representation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10492– 10501, 2019. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing*, pp. 722–729. IEEE, 2008. Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jahnichen, and Moin Nabi. Learning to remember: A synaptic plasticity driven framework for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11321–11329, 2019. Samet Oymak and Mahdi Soltanolkotabi. Toward moderate overparameterization: Global convergence guarantees for training shallow neural networks. *IEEE Journal on Selected Areas in Information Theory*, 1(1):84–105, 2020. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In *2009 IEEE conference on computer* vision and pattern recognition, pp. 413–420. IEEE, 2009. Amal Rannen, Rahaf Aljundi, Matthew B Blaschko, and Tinne Tuytelaars. Encoder based lifelong learning. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1320–1328, 2017. Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In *Proceedings of the IEEE conference on Computer Vision and* Pattern Recognition, pp. 2001–2010, 2017. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Efficient parametrization of multi-domain deep neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 8119–8127, 2018. Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured laplace approximations for overcoming catastrophic forgetting. *arXiv preprint arXiv:1805.07810*, 2018. Amir Rosenfeld and John K Tsotsos. Incremental learning through deep adaptation. IEEE transactions on pattern analysis and machine intelligence, 42(3):651–663, 2018. Deboleena Roy, Priyadarshini Panda, and Kaushik Roy. Tree-cnn: a hierarchical deep convolutional neural network for incremental learning. *Neural Networks*, 121:148–160, 2020. Sebastian Ruder. An overview of multi-task learning in deep neural networks. *arXiv preprint* arXiv:1706.05098, 2017. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. 
In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 4510–4520, 2018. Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In *International Conference on Machine Learning*, pp. 4548–4557. PMLR, 2018. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. arXiv preprint arXiv:1705.08690, 2017. Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019. Asa Cooper Stickland and Iain Murray. Bert and pals: Projected attention layers for efficient adaptation in multi-task learning. In *International Conference on Machine Learning*, pp. 5986–5995. PMLR, 2019. Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei, and Yihong Gong. Few-shot classincremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12183–12192, 2020. Alexander V Terekhov, Guglielmo Montone, and J Kevin O'Regan. Knowledge transfer in deep blockmodular neural networks. In *Conference on Biomimetic and Biohybrid Systems*, pp. 268–279. Springer, 2015. Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. Advances in Neural Information Processing Systems, 32, 2019. Ragav Venkatesan, Hemanth Venkateswara, Sethuraman Panchanathan, and Baoxin Li. A strategy for an uncompromising incremental learner. *arXiv preprint arXiv:1705.00744*, 2017. Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In *International* Conference on Machine Learning, pp. 6438–6447. PMLR, 2019. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. 2011. Shuo Wang, Leandro L Minku, and Xin Yao. A systematic study of online class imbalance learning with concept drift. *IEEE transactions on neural networks and learning systems*, 29(10):4802–4821, 2018. Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Growing a brain: Fine-tuning by increasing model capacity. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2471– 2480, 2017. Cameron R Wolfe and Keld T Lundgaard. E-stitchup: Data augmentation for pre-trained embeddings. arXiv preprint arXiv:1912.00772, 2019. Cameron R Wolfe and Keld T Lundgaard. Exceeding the limits of visual-linguistic multi-task learning. *arXiv* preprint arXiv:2107.13054, 2021. Joseph Worsham and Jugal Kalita. Multi-task learning for natural language processing in the 2020s: where are we going? *Pattern Recognition Letters*, 136:120–126, 2020. Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 374–382, 2019. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032, 2019. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016. 
Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pp. 3987–3995. PMLR, 2017.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017.

Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes overparameterized deep relu networks. *arXiv preprint arXiv:1811.08888*, 2018.

## A Experimental Details

## A.1 Class-Incremental Learning

Here, we provide all details for the class-incremental streaming learning experiments presented in Section 6.1. For both CIFAR100 and ImageNet, the ordering of data (i.e., both by class and by example) is fixed for all methodologies, such that data is always encountered in the same order.

Offline Training Details. For CIFAR100, Top-1 Ωall is computed using an offline-trained ResNet18 model that achieves a Top-1 accuracy of 78.61%. This model is trained using standard data augmentation for CIFAR100 (i.e., random crops and flips) using the SGD optimizer with momentum of 0.9 and an initial learning rate of 0.1. The learning rate is decayed throughout training using a cosine learning rate decay schedule over 200 total epochs. A weight decay of 5 × 10−4 is used. These offline training settings are adopted from a widely-used repository for achieving state-of-the-art performance on CIFAR10 and CIFAR100 (Liu, 2017). Similarly, Top-5 Ωall on ImageNet is computed using an offline-trained ResNet18 model that achieves a Top-5 accuracy of 89.09%. This pre-trained model is made publicly available via torchvision (Paszke et al., 2017) and matches the offline normalization model used to evaluate class-incremental streaming learning on ImageNet in previous work (Hayes et al., 2020).

Data Details. On CIFAR100, the dataset is divided into batches of 20 classes, where the first 20 classes are reserved for base initialization. Similarly, ImageNet is divided into batches of 100 classes, and the first 100 classes are used for base initialization. Evaluation occurs after each batch of data in the streaming process, and all testing events are aggregated within the Ωall score. For all baselines, standard test augmentations for both CIFAR100 and ImageNet are adopted. Namely, because the feature extractors of ExStream, Deep SLDA, and REMIND are fixed, we perform appropriate resizing, center cropping, and normalization (i.e., following standard test settings of CIFAR100 and ImageNet) prior to passing each image into the fixed feature extractor. REMIND also leverages random resized crops and manifold Mixup (Verma et al., 2019) on the feature representations stored within the replay buffer throughout streaming. The proposed methodology leverages the data augmentation policy described in Section 3, and images are cropped and normalized following standard practice prior to augmentation.

Baseline Details. For ExStream, Deep SLDA, and REMIND, we utilize official, public implementations within all experiments (Hayes, 2019; 2020b;a). ExStream is optimized using Adam with a learning rate of 2 × 10−3 and no weight decay, as in the official implementation. For CIFAR100, we modify the number of class exemplars (i.e., 10K, 20K, ..., 50K exemplars) to study ExStream's behavior with different replay buffer sizes. For ImageNet, we follow the settings of (Hayes et al., 2020) and train ExStream using 20 and 100 exemplars per class for buffer capacities of 1.5Gb and 8Gb, respectively.
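Both REMIND (described next) and CSSL anneal the learning rate from 0.1 to 0.001 over the examples of each class; the following is a minimal sketch of this per-class linear decay (the helper name is ours, not from any released implementation):

```python
def per_class_lr(example_idx: int, n_class_examples: int,
                 lr_start: float = 0.1, lr_end: float = 0.001) -> float:
    """Learning rate for the example_idx-th (0-indexed) example of the current class,
    decayed linearly from lr_start at the first example to lr_end at the last."""
    if n_class_examples <= 1:
        return lr_end
    frac = example_idx / (n_class_examples - 1)
    return lr_start + frac * (lr_end - lr_start)
```

The schedule resets to the initial rate whenever a new class begins streaming.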
REMIND uses the SGD with momentum optimizer with a learning rate that decays from 0.1 to 0.001 from the beginning to the end of each class. Momentum is set to 0.9 and weight decay to 1 × 10−5. REMIND takes 100 and 50 replay samples during each online update for CIFAR100 and ImageNet, respectively. For Deep SLDA, we adopt the variant that does not fix the covariance matrix during the streaming process for all experiments. For REMIND, we perform experiments with a version that exactly matches the settings of the original papers, as well as with a version that adds two extra layers to the trainable portion of the ResNet18 model such that the number of trainable parameters between REMIND and CSSL is identical. All other experimental settings for each of the baseline methodologies exactly match the settings within each of the respective papers and/or public implementations (Hayes, 2019; 2020b;a).

All baseline methodologies perform base initialization over the first batch of data within the dataset. This base initialization includes 50 epochs of fine-tuning, followed by the initialization of any algorithm-specific modules to be used during the streaming process (e.g., the PQ module for REMIND). For fine-tuning, we adopt the same hyperparameter settings used for training the offline baseline models. After base initialization, streaming begins at the first class within the base initialization batch, meaning that data used within base initialization is re-visited during streaming. For ExStream and Deep SLDA, models are initialized with pre-trained ImageNet weights during base initialization on CIFAR100 experiments to enable more comparable experimental results on the smaller dataset.

CSSL Details. The proposed methodology is trained with an SGD optimizer with momentum of 0.9 and a learning rate that decays linearly from 0.1 to 0.001 from the first to last example of each class (i.e., the same learning rate decay strategy as (Hayes et al., 2020) is adopted). We use a weight decay of 1 × 10−5. 100 replay samples are used within each online update for both CIFAR100 and ImageNet. We utilize the augmentation strategy described in Section 3 in all experiments. For Mixup and Cutmix, we utilize α values of 0.1 and 0.8, respectively. For AutoAugment, we adopt the CIFAR and ImageNet learned augmentation policies for each of the respective datasets. See Appendix B.1 for details regarding the derivation of the optimal augmentation policy and associated hyperparameters. For experiments that begin with a pre-trained parameter setting, we use identical training settings as used in base initialization for baseline methodologies to train model parameters over the first 20 classes of CIFAR100 or 100 classes of ImageNet, then use the resulting model parameters to initialize CSSL for streaming.

## A.2 Different Data Orderings

Here, we overview the experimental details of the experiments performed using the Core50 dataset (Lomonaco & Maltoni, 2017) in Section 6.2.

Data Orderings. Experiments are performed using four different data orderings: i.i.d. (ID), class i.i.d. (CID), instance (I), and class-instance (CI). Such data orderings are described in detail in (Lomonaco & Maltoni, 2017), but we also provide a brief description of each ordering scheme below:

- i.i.d.: batches contain a uniformly random selection of images from the dataset.
- class i.i.d.: batches contain all images from two classes (i.e., out of ten total classes) in the dataset.
- instance: batches contain all images, in temporal order, corresponding to 80 unique object instances within the dataset.
- class-instance: batches contain images, in temporal order, corresponding to two classes in the dataset.

For each of the four possible orderings, ten unique data permutations are generated. Then, all experiments are repeated over each of these permutations, and performance is recorded for each ordering scheme as an average over all possible permutations.

Offline Training Details. Top-1 Ωall on Core50 is computed using an offline-trained ResNet18 model that achieves a Top-1 accuracy of 43.91%. This model is trained for 40 epochs from a random initialization using SGD with momentum of 0.9 and an initial learning rate of 0.01. We use a weight decay of 1 × 10−4 and decay the learning rate by 10× at epochs 15 and 30 during training. The settings for training our offline model for Core50 exactly match those provided in (Hayes et al., 2020), aside from not initializing the model with pre-trained ImageNet weights. We attempted to improve the performance of the offline model by utilizing a greater number of epochs and modifying hyperparameter settings, but such tuning resulted in only negligible performance improvements.

Data Details. We sample the Core50 dataset at 1 frame per second, resulting in 600 and 225 training and testing images for each class (Hayes et al., 2018). The Core50 dataset has 10 total classes, resulting in 6000 total training images and 2250 total testing images. We adopt the same bounding box crops and splits from (Lomonaco & Maltoni, 2017). The dataset is split into batches of 1200 examples, where the content of each batch is determined by the ordering scheme and particular data permutation chosen for a particular experiment. Comparable experiments utilize both the same ordering scheme and the same permutation, where 10 unique permutations exist for each ordering scheme. The first batch is utilized for base initialization, and evaluation occurs after each batch to compute the final Ωall score. To match the settings of (Hayes et al., 2020), we do not utilize data augmentation within any of the baseline methodologies (i.e., adding data augmentation was shown to degrade performance on Core50). The proposed methodology utilizes the same data augmentation policy that is described in Section 3.

Baseline Details. Again, we utilize official, public implementations of ExStream, Deep SLDA, and REMIND (Hayes et al., 2018; Hayes & Kanan, 2020; Hayes et al., 2020). ExStream is optimized using Adam with a learning rate of 2 × 10−3 and no weight decay, and the memory buffer is allowed to store the full dataset (i.e., this can be done with < 100Mb of memory). REMIND is trained using SGD with momentum of 0.9 and a fixed learning rate of 0.01. A weight decay of 1 × 10−4 is used and 20 samples are taken during each online update, which matches the settings of (Hayes et al., 2020). For Deep SLDA, we again adopt the variant that does not fix the covariance matrix during the streaming process. All other experimental settings match the hyperparameters provided in the respective papers and/or public implementations (Hayes, 2019; 2020b;a). No data augmentation is used during baseline streaming experiments, as outlined in (Hayes et al., 2020). The first batch of data is used for base initialization within all streaming baseline methodologies.
During base initialization, the fine-tuning procedure again adopts the same hyperparameters as the offline training procedure described above. After the network has been fine-tuned over base initialization data, all algorithm-specific modules are initialized using the same data, and streaming begins from the first class of the base initialization step. We do not utilize pre-trained ImageNet weights to initialize baseline model parameters within Core50 experiments, as we are attempting to accurately assess model performance in low-resource scenarios on Core50 experiments.

CSSL Details. The proposed methodology is trained with SGD with momentum of 0.9 and a fixed learning rate of 0.01. We use a weight decay of 1 × 10−4. 100 replay samples are observed during each online update, and we adopt the data augmentation policy described in Section 3. For AutoAugment, we utilize the ImageNet augmentation policy, and α values of 0.1 and 0.8 are again adopted for Mixup and CutMix, respectively. The proposed methodology is tested with replay buffer capacities of 100, 200, and 300Mb. For the 200Mb experiment, image pixels are quantized to four bits, while for the 100Mb experiment we both quantize pixels and resize images to 75% of their original area. The 300Mb experiment performs no quantization or resizing, as the full dataset can be stored as raw images with memory overhead slightly below 300Mb.

## A.3 Multi-Task Streaming Learning

Here, we overview the details of the multi-task streaming learning experiments in Section 6.3.

Datasets. We perform multi-task streaming learning with the CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), MIT Scenes (Quattoni & Torralba, 2009), and FGVC-Aircrafts (Maji et al., 2013) datasets. The details of each of these datasets are provided in Table 8. Following the settings of (Hou et al., 2018), the datasets are learned in the following order: MIT Scenes, CUB-200, Oxford Flowers, then FGVC-Aircrafts.

| Dataset | Classes | Training Examples | Testing Examples |
|----------------|-----|------|------|
| CUB-200        | 200 | 5994 | 5794 |
| Oxford Flowers | 102 | 2040 | 6149 |
| MIT Scenes     | 67  | 5360 | 1340 |
| FGVC-Aircrafts | 100 | 6667 | 3333 |

Table 8: Details of datasets used for multi-task streaming learning experiments.

Offline Training Details. The offline-trained models for multi-task streaming experiments are trained separately for each of the datasets listed in Table 8. For each dataset, offline models are trained for 50 total epochs with cosine learning rate decay. Initial learning rates of 0.1, 0.01, and 0.001 are tested, and results of the best-performing model are reported in Section 6.3. We also test baseline models with step learning rate schedules but find that cosine learning rate decay tends to produce models with better performance.

Data Details. For multi-task streaming learning performance, testing is performed after each dataset has been observed. We choose to not perform more regular evaluation (e.g., after subsets of each individual dataset have completed streaming) to make performance simple to interpret. All results are reported in terms of Top-1 accuracy computed separately on each dataset at the end of the streaming process. During streaming, each dataset is observed one example at a time in a class-incremental order. Three separate class-incremental data permutations are generated, and results are averaged across these three different permutations.
Because the feature extractors of ExStream, Deep SLDA, and REMIND are fixed, we perform appropriate resizing, center cropping, and normalization prior to passing each image into the fixed feature extractor. REMIND also leverages random resized crops and manifold mixup (Verma et al., 2019) on the feature representations stored within the replay buffer throughout streaming. The proposed methodology leverages the data augmentation policy described in Section 3, and images are cropped and normalized following standard practice prior to augmentation. Baselines utilize standard training and testing augmentations as used on ImageNet, which matches the settings of (Hou et al., 2018).

Baseline Details. Again, official, public implementations of baseline methodologies with the ResNet18 architecture are used in all experiments. For baseline methodologies, base initialization is performed over 50% of the MIT Scenes dataset (i.e., the first dataset to be observed during streaming), encompassing 34 of the 67 total classes. All baseline methodologies are allowed unlimited memory capacity for replay. ExStream is optimized using Adam with a learning rate of $2\times10^{-3}$ and no weight decay. REMIND uses the stochastic gradient descent (SGD) optimizer with a learning rate that decays from 0.1 to 0.001 from the beginning to the end of each class. Momentum is set to 0.9 and weight decay to $1\times10^{-5}$. REMIND samples 50 examples from the replay buffer for each online update. Furthermore, we modify REMIND's replay strategy to sample a batch of replayed data from each of the seen tasks when performing each online update. Such a change is necessary to not forget previous tasks when learning each new dataset, and we do not enforce any memory capacity on the replay buffer (i.e., all data can be kept for replay). For Deep SLDA, we adopt the variant that does not fix the covariance matrix during the streaming process for all experiments.

All baseline methodologies perform base initialization over the first 34 classes of the MIT Scenes dataset. This base initialization includes 50 epochs of fine-tuning, followed by the initialization of any algorithm-specific modules to be used during the streaming process (e.g., the PQ module for REMIND). For fine-tuning, we adopt the same hyperparameter settings used for training the offline baseline models. After base initialization, streaming begins at the first class within the base initialization batch, meaning that data used within base initialization is re-visited during streaming. For each of the baseline methodologies, we perform experiments both with and without pre-training on ImageNet. Experiments with pre-training simply initialize the underlying ResNet18 architecture with ImageNet pre-trained weights prior to performing base initialization.

CSSL Details. The proposed methodology is trained with an SGD optimizer with momentum of 0.9 and a learning rate that decays linearly from 0.1 to 0.001 from the beginning to the end of each class. Weight decay of $1\times10^{-5}$ is used. For all datasets, 100 replay samples are taken at each online update, and we modify the replay strategy to sample a batch of replayed data from each of the seen tasks when performing an online update. The augmentation strategy described in Section 3 is used in all experiments. Mixup and Cutmix use α values of 0.1 and 0.8, respectively, and AutoAugment adopts the ImageNet learned augmentation policy. See Appendix B.1 for details regarding the derivation of the optimal augmentation strategy.
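For reference, the Mixup and CutMix operations with these α values can be sketched as follows, assuming NumPy images in HWC format and one-hot label vectors; this is a minimal illustration, not the exact implementation used in our experiments.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.1):
    """Mixup: convex combination of two images and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def cutmix(x1, y1, x2, y2, alpha=0.8):
    """CutMix: paste a random crop of x2 into x1; labels mix by pasted area."""
    lam = np.random.beta(alpha, alpha)
    h, w = x1.shape[:2]
    ch, cw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y0, y1_ = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x0, x1_ = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    out = x1.copy()
    out[y0:y1_, x0:x1_] = x2[y0:y1_, x0:x1_]
    lam_adj = 1 - (y1_ - y0) * (x1_ - x0) / (h * w)  # area actually kept from x1
    return out, lam_adj * y1 + (1 - lam_adj) * y2
```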
For all experiments, CSSL is fine-tuned on the first 34 classes of the MIT Scenes dataset with identical hyperparameters as are used for base initialization with other methodologies. Some experiments utilize ImageNet pre-training and are identified as such.

## A.4 Confidence Calibration Analysis

Here, we present experimental details for the confidence calibration analysis performed in Section 6.4.

Expected Calibration Error. Confidence calibration is measured in terms of expected calibration error (ECE). Assume that the i-th example within the testing set is associated with a true label, a model prediction, and a confidence value. Additionally, assume N total examples exist within the test set. Given $N_{bin}$ bins, ECE groups data into uniformly-spaced bins based on confidence values. For example, if $N_{bin}=2$, all predictions are separated into two groups, one group with confidence values in the range [0.0, 0.5) and another with confidence values in the range [0.5, 1.0]. Denote the i-th bin as $\mathrm{bin}_i$. Once model predictions are separated into such bins, ECE computes the average accuracy and confidence of predictions within each bin. Then, using $a_i$ and $c_i$ to denote the accuracy and average confidence within the i-th bin, ECE can be computed as shown in the equation below.

$$\mathrm{ECE}=\sum_{i=1}^{N_{bin}}\frac{|\mathrm{bin}_{i}|}{N}\left|a_{i}-c_{i}\right|$$

| Model | Top-5 Acc. |
|-----------------|------------|
| ResNet101 | 93.54% |
| MobileNetV2 | 90.29% |
| DenseNet121 | 91.98% |
| Wide ResNet50-2 | 94.09% |

Table 9: Top-5 accuracies achieved by offline models of different architectures on ImageNet.

For all experiments within Section 6.4, we follow the settings of (Guo et al., 2017) and set $N_{bin}=15$. Performance is sometimes reported in terms of average ECE. In such cases, average ECE is computed by measuring ECE separately at each testing event during the streaming process, then taking a uniform average of ECE values across all testing events.

CIFAR100 Experiments. For the CIFAR100 experiments reported in Table 6, we perform class-incremental streaming using the same data ordering described in Appendix A.1. CSSL experiments use two different memory capacities: 30Mb (10K CIFAR100 images) and 150Mb (full CIFAR100 dataset). The version of CSSL denoted as "No Aug." replaces the CSSL augmentation strategy outlined in Section 3 with a simple augmentation policy that performs only random crops and flips. Furthermore, the pre-trained variant of CSSL simply initializes model parameters with pre-trained ImageNet weights at the beginning of the streaming process. All other experimental settings are identical to that of the class-incremental learning experiments outlined in Appendix A.1.

For the streaming experiment depicted in Figure 4, we perform streaming with an i.i.d. ordering of the CIFAR100 dataset. Because the data ordering is i.i.d., all testing events perform evaluation over the entire CIFAR100 testing dataset. For CSSL, a memory capacity of 150Mb is used for the replay buffer. All experiments with CSSL utilize the augmentation strategy described in Section 3, except for the "No Aug." experiment that utilizes only random crops and flips for data augmentation. REMIND is allowed to store the full dataset for replay. All other experimental settings match those of the class-incremental learning experiments for CIFAR100 described in Appendix A.1.
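Returning to the ECE metric defined above, a minimal NumPy sketch of its computation (uniform confidence bins, with $N_{bin}=15$ by default) is given below; the function name is illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE: bin predictions by confidence, then take the bin-size-weighted
    average of |accuracy - mean confidence| over all bins."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    n = len(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        in_bin = (confidences >= edges[i]) & (confidences < edges[i + 1])
        if i == n_bins - 1:  # close the final bin on the right to include 1.0
            in_bin |= confidences == 1.0
        if not in_bin.any():
            continue
        a_i = (predictions[in_bin] == labels[in_bin]).mean()  # bin accuracy
        c_i = confidences[in_bin].mean()                      # bin confidence
        ece += (in_bin.sum() / n) * abs(a_i - c_i)
    return ece
```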
ImageNet. For the confidence calibration experiments on ImageNet presented in Table 6, we utilize a class-incremental data ordering for streaming. For CSSL, we use a 1.5Gb replay buffer capacity and store all images with JPEG compression on disk. For the "No Aug." variant of CSSL, we again use random crops and flips for data augmentation instead of the augmentation strategy described in Section 3. All other settings for these experiments are identical to those of the class-incremental learning experiments on ImageNet described in Appendix A.1.

## A.5 Performance Impact Of A Cold Start

The experiments in Section 6.4 utilize the same experimental setup for CIFAR100 as described in Section 6.1 and Appendix A.1. All results are recorded in terms of Top-1 Ωall using the same offline normalization model from Section 6.1 that achieves a Top-1 accuracy of 78.61%. The only difference between the two experimental setups is that model accuracy is recorded after every new class is observed during the streaming process, whereas experiments in Section 6.1 only record model accuracy after every 20-class batch. Within Section 6.4, both the proposed methodology and all baselines are allowed to retain the full dataset within the replay buffer. The proposed methodology does not perform any quantization or resizing on the images. All hyperparameter settings are the same as in Appendix A.1 for all methodologies.

## A.6 Different Network Architectures

The experiments with different network architectures in Section 6.4 follow the same settings as the class-incremental streaming experiments on ImageNet provided in Section 6.4; see Appendix A.1 for further details. However, the ResNet18 architecture used in previous experiments is substituted for several different model architectures. The experimental results presented in Section 6.4 store the replay buffer on disk using JPEG compression, and a replay buffer capacity of 1.5Gb is adopted. All results are reported in terms of Top-5 Ωall, and the offline models used for normalization (i.e., pre-trained models taken from the torchvision package (Paszke et al., 2017)) achieve the Top-5 accuracies listed in Table 9.

| MU | CM | AA | RA | Top-1 Ωall |
|----|----|----|----|---------------|
| | | | | 0.689 ± 0.001 |
| X | | | | 0.807 ± 0.012 |
| | X | | | 0.863 ± 0.020 |
| X | X | | | 0.877 ± 0.004 |
| | | X | | 0.822 ± 0.037 |
| X | | X | | 0.886 ± 0.018 |
| | X | X | | 0.852 ± 0.019 |
| X | X | X | | 0.886 ± 0.002 |
| X | | | X | 0.856 ± 0.025 |
| | X | | X | 0.857 ± 0.008 |
| X | X | | X | 0.875 ± 0.005 |

Table 10: Performance of the proposed methodology for CSSL with numerous augmentation policies that combine Mixup (MU), CutMix (CM), AutoAugment (AA), and RandAugment (RA) for class-incremental streaming experiments on CIFAR100. All tests use random crops and flips, and X indicates the use of a particular augmentation technique. The best performance is achieved with a combination of Mixup, Cutmix, and AutoAugment.

## A.7 Closing The Performance Gap With Offline Learning

Here, we present the details of experiments performed in Appendix B.6. Separate comparisons are performed between incremental learning methodologies (i.e., iCarl and E2EIL) and streaming learning methodologies (i.e., REMIND) for class-incremental learning on the CIFAR100 dataset.
In all cases, we use a ResNet18 model, but the architecture is slightly modified to yield the best-possible performance on the CIFAR100 dataset, as demonstrated in (Liu, 2017). Within these experiments, a class-specific ordering of the CIFAR100 dataset is induced that is used to ensure all experiments receive data in the same order. In all cases, the first 20 classes of CIFAR100 are used for base initialization.

For REMIND, we use the public implementation, but adapt it to utilize the modified ResNet18 architecture and perform training over CIFAR100. The learning rate is decayed from 0.1 to 0.001 for each class during streaming (i.e., we perform tests with different learning rates and find that 0.1 still performs best), and we leverage random crops and manifold mixup within REMIND's feature representations. We allow all dataset examples to be stored within the replay buffer, due to the memory-efficiency of REMIND's memory indexing approach. Settings for PQ are kept identical to ImageNet experiments. We find that a weight decay of $5\times10^{-4}$ yields the best performance when evaluated using grid search over a validation set.

For iCarl and E2EIL, we adopt the official, public implementations of both approaches. Because the CIFAR100 dataset is small, we allow both iCarl and E2EIL to store the full dataset within the replay buffer (i.e., we do not enforce a fixed buffer size) during the online learning process. To tune hyperparameters in the streaming setting, we perform a grid search over a validation set of CIFAR100 to obtain the optimal settings. We train iCarl and E2EIL in the typical, incremental fashion so that the impact of class imbalance can be observed. Within the incremental learning process, we use 20-class subsets of CIFAR100 as each sequential batch during the learning process. From here, we adopt identical hyperparameter settings as the original publication to replicate the original experimental settings.

## B Further Experimental Results

## B.1 Data Augmentation Policies

We test all combinations of the Mixup, Cutmix, AutoAugment, and RandAugment data augmentation strategies to arrive at the final, optimal augmentation policy used within the proposed methodology for CSSL. All experiments perform random crops and flips in addition to other augmentation strategies. The comparison of different data augmentation policies is performed with class-incremental learning experiments on CIFAR100. These experiments adopt identical experimental settings as CIFAR100 experiments in Section 6.1; see Appendix A.1 for further details. For these experiments, the entire dataset is allowed to be stored in the replay buffer, and the replay buffer is kept in main memory without the use of any compression schemes (i.e., no integer quantization or resizing). All reported results represent average performance recorded over three independent trials. For Mixup and Cutmix, we perform experiments using all combinations of α values in the set {0.1, 0.4, 0.8, 1.0, 1.2}.
We leverage the CIFAR100 AutoAugment policy, and experiments are performed with RandAugment using 1-2 augmentations and all magnitudes in the set {4, 9, 14}. All combinations of considered data augmentation methodologies are tested, and results with the best-performing hyperparameters are presented. These results inform the data augmentation policy that is adopted within the proposed methodology for CSSL used in later experiments.

The results of these experiments, reported in terms of Top-1 Ωall, are provided in Table 10, where it can be seen that the best results are achieved by combining Mixup, Cutmix, and AutoAugment techniques into a single augmentation policy. Interestingly, this augmentation policy provides a nearly 20% improvement in absolute Ωall compared to a simple crop-and-flip augmentation strategy, *thus demonstrating the massive impact of proper data augmentation on streaming performance*.

## B.2 Memory Efficient Replay

The proposed methodology for CSSL compresses replay examples using integer quantization (i.e., on the uint8 pixels of each image) and image resizing. Within this section, we analyze the performance impact of this strategy in class-incremental streaming experiments on CIFAR100 using the same experimental setup as outlined in Section 6.1. All combinations of quantizing uint8 pixels to six or four bits11 and resizing images to 66% or 50% of their original area are tested. For each possible compression strategy, the corresponding experiment stores the maximum number of images within the replay buffer without exceeding 30Mb of storage. Three separate trials are performed for each possible compression strategy, and average results across the three trials are reported. These results inform the optimal strategy of reducing memory overhead for the replay mechanism used within the proposed methodology for CSSL.

11 The integer quantization procedure is simulated by integer dividing each pixel value by $2^{8-b}$, where $b$ is the number of bits present in the quantized pixel values, then re-scaling the result into the range [0, 256).

Adopting a fixed buffer capacity of 30Mb, we explore different levels of quantization and resizing within the replay buffer.12 The results are reported in terms of Top-1 Ωall in Table 11, where it can be seen that the best performance is achieved by quantizing pixel values to 6 bits and resizing images to 66% of their original area (i.e., the exact strategy adopted for CIFAR100 experiments in Section 6.1).

12 When images are compressed, more images are added into the replay buffer to reach the 30Mb capacity.

| Quant. | Resize Ratio | Top-1 Ωall |
|--------|--------------|---------------|
| 8 bit | 100% | 0.810 ± 0.009 |
| 6 bit | 100% | 0.822 ± 0.014 |
| 4 bit | 100% | 0.813 ± 0.025 |
| 8 bit | 66% | 0.819 ± 0.001 |
| 8 bit | 50% | 0.764 ± 0.029 |
| 6 bit | 66% | 0.826 ± 0.002 |
| 4 bit | 66% | 0.715 ± 0.038 |

Table 11: Performance of the proposed methodology on class-incremental streaming experiments with CIFAR100, where different levels of quantization and resizing are used when adding examples into the replay buffer.
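The compression procedure from footnote 11, together with area-based resizing, can be sketched as follows; this assumes uint8 NumPy images in HWC format, and the nearest-neighbor resize is a stand-in for whatever interpolation a real implementation might use.

```python
import numpy as np

def compress_for_replay(image, bits=6, resize_ratio=0.66):
    """Simulate replay-buffer compression: quantize uint8 pixels to `bits`
    bits and resize the image to `resize_ratio` of its original area."""
    # integer quantization: divide by 2^(8 - b), then re-scale into [0, 256)
    shift = 2 ** (8 - bits)
    quantized = (image.astype(np.uint8) // shift) * shift
    # resizing to ratio r of the original area scales each side by sqrt(r)
    side = np.sqrt(resize_ratio)
    h, w = image.shape[:2]
    new_h, new_w = int(h * side), int(w * side)
    # nearest-neighbor resize via index sampling (keeps the sketch dependency-free)
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return quantized[rows][:, cols]
```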
Interestingly, the proposed methodology is robust to high levels of quantization; e.g., using 4-bit quantization slightly exceeds the performance of storing full images for replay. Similar behavior is observed on ImageNet, where 4-bit quantization is found to outperform 6-bit quantization given a fixed buffer capacity. Moreover, resizing images beyond 66% of their original area noticeably degrades performance; e.g., 50% resizing without quantization yields noticeably degraded performance in Table 11. For datasets with higher-resolution images, we observe that performance is even more sensitive to the resizing ratio, leading us to adopt a ratio of 75% within ImageNet and Core50 experiments in Sections 6.1 and 6.2.

## B.3 Consolidating Replay Examples

In addition to the approaches for memory efficient replay described in Section 3, we attempt to use data interpolation methods to consolidate examples within the replay buffer. In particular, when a replay example is about to be evicted from the replay buffer, interpolation is performed between two replay examples, thus consolidating two entries in the replay buffer into one. Such an approach avoids evicting examples within the replay buffer without increasing memory overhead.

In particular, we perform experiments with Mixup, CutMix, and SamplePairing interpolation strategies13 for class-incremental learning on the CIFAR100 dataset, adopting the same experimental setting as described in Section 6.1. We use a fixed replay buffer capacity of 30Mb and report the results of these experiments in terms of Top-1 Ωall within Table 12. As can be seen, these eviction strategies are outperformed by the quantization and resizing strategies utilized in Section 6.4. We also consider setting a fixed number of times a certain example can be interpolated, such that a replay example is evicted after it has been interpolated more than two or three times, but such an approach still does not meaningfully improve performance and introduces an extra hyperparameter.

13 The α values used for Mixup and Cutmix are tuned by searching over the set {0.1, 0.4, 0.8, 1.0, 1.2} using a hold-out validation set, and we present the results of the best-performing α for each method. By definition, SamplePairing always takes an average between two images (i.e., there are no hyperparameters to tune).

| Eviction Strategy | Top-1 Ωall |
|-----------------------------|---------------|
| Mixup (Zhang et al., 2017) | 0.746 ± 0.019 |
| CutMix (Yun et al., 2019) | 0.778 ± 0.014 |
| SamplePairing (Inoue, 2018) | 0.735 ± 0.023 |

Table 12: Performance of CSSL when various interpolation strategies are used to combine replay examples instead of evicting them. *These eviction policies are found to perform worse than the quantization and resizing strategy used within the main experiments.*
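A minimal sketch of this consolidation-instead-of-eviction policy with Mixup interpolation is shown below, assuming the buffer is a Python list of (image, one-hot label) NumPy pairs; all names are illustrative.

```python
import random
import numpy as np

def consolidate_and_insert(buffer, new_example, alpha=0.8):
    """Instead of evicting an entry, merge two random replay entries into
    one via Mixup, freeing a slot for the incoming example."""
    i, j = random.sample(range(len(buffer)), 2)
    lam = np.random.beta(alpha, alpha)
    (xi, yi), (xj, yj) = buffer[i], buffer[j]
    buffer[i] = (lam * xi + (1 - lam) * xj,   # interpolated image
                 lam * yi + (1 - lam) * yj)   # interpolated (soft) label
    del buffer[j]                             # two entries become one
    buffer.append(new_example)
```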
## B.4 Alternative Sampling Approaches

The methodology described in Section 3 assumes that, when a new example is added into an already-full replay buffer, an example must be removed via ReplayEvict in Algorithm 1. Within the experiments in Section 6, a random eviction procedure is adopted as described in Section 3. However, such random maintenance of the replay buffer is not the only option. Namely, different approaches such as herding selection (Rebuffi et al., 2017; Castro et al., 2018; Hou et al., 2019) or reservoir sampling (Chaudhry et al., 2019; Aljundi et al., 2019a) could be adopted instead to add and remove elements from the replay buffer in a more sophisticated manner.

Although herding selection is a popular method in the incremental learning literature (Rebuffi et al., 2017; Castro et al., 2018; Hou et al., 2019) for determining which examples within each incremental batch should be added into the replay buffer, it is not a viable approach given the restrictions of CSSL. Namely, herding selection requires that, for each class, a mean feature vector be computed across all available data. Then, a representative subset of exemplars is selected within that class, such that the mean feature vector of that subset closely matches the overall mean feature vector. Such an approach is easily adoptable within incremental learning, as one can compute mean feature vectors and relevant exemplars for the disjoint classes that appear within each incremental batch. Further, herding selection can be modified to the streaming setting (i.e., based on a greedy approach) if fixed feature representations are available for each input image throughout the streaming process. However, within CSSL, the entire network is updated during streaming, meaning that the feature representations of data change over time, making the application of herding selection difficult.

Despite the difficulty of adopting herding selection within CSSL, we compare the performance of our random eviction policy with reservoir sampling in class-incremental learning experiments on the CIFAR100 dataset, where we adopt the same setup as described in Section 6.1. Reservoir sampling adds items into the replay buffer until the buffer is full. Assuming the capacity of the replay buffer is C examples, once the k-th example is received (i.e., k > C), we sample a random integer i in the set {0, 1, . . . , k − 1}. If i ∈ {0, 1, . . . , C − 1}, then the i-th example in the replay buffer is replaced with the k-th example (i.e., the new data). Otherwise, the k-th example is discarded.

The performance of reservoir sampling in comparison to the random eviction strategy described in Section 3 is provided in Table 13, where we measure performance across multiple different buffer capacities (i.e., 150Mb is the full capacity for CIFAR100).

| Buffer Capacity | Random | Reservoir |
|-----------------|---------------|---------------|
| 30Mb | 0.810 ± 0.019 | 0.709 ± 0.099 |
| 60Mb | 0.855 ± 0.011 | 0.764 ± 0.010 |
| 90Mb | 0.889 ± 0.024 | 0.795 ± 0.004 |
| 120Mb | 0.874 ± 0.009 | 0.830 ± 0.001 |

Table 13: Comparison of different strategies for maintaining the replay buffer for class-incremental CSSL experiments on CIFAR100. A random eviction strategy is shown to outperform reservoir sampling for all different replay buffer capacities that were considered.

As can be seen, the more sophisticated reservoir sampling approach is outperformed by the random eviction strategy outlined in Section 3.
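For concreteness, the two buffer-maintenance policies compared in Table 13 can be sketched as follows, with buffer capacity measured in examples rather than megabytes and illustrative class names.

```python
import random

class ReservoirBuffer:
    """Replay buffer maintained with reservoir sampling (capacity C examples)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.num_seen = 0  # k: total examples observed so far

    def add(self, example):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            i = random.randrange(self.num_seen)  # uniform over {0, ..., k-1}
            if i < self.capacity:
                self.buffer[i] = example         # replace; otherwise discard

class RandomEvictionBuffer:
    """Replay buffer with the random eviction policy: always insert the new
    example, evicting a uniformly random resident when the buffer is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def add(self, example):
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            self.buffer[random.randrange(self.capacity)] = example
```

Under this sketch, random eviction always admits the new example, whereas reservoir sampling admits it only with probability C/k.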
## B.5 Different Strategies For Multi-Task Streaming Learning

Within Section 6.3, we generalize the replay strategy of CSSL and REMIND to the multi-task streaming learning domain. In this domain, each model has multiple classification layers and replay buffers (i.e., one for each dataset).14 Thus, the previously-used strategy for replay—sampling a fixed number of replay samples to be included with each new data example during an online update—is no longer valid, as one must incorporate replay examples from previous tasks to avoid catastrophic forgetting.

14 The separate replay buffers for each dataset could be considered as one replay buffer that encompasses all datasets; the resulting algorithm and discussion would be the same.

The main strategies that were explored for multi-task replay were differentiated by i) whether each task is updated with B replay examples or if B total replay examples are split between each task (i.e., Full or Split) and ii) whether each task performs a separate update or if gradients are summed over all tasks to perform a single update (i.e., Sep or Sum). Considering these options forms four different strategies for multi-task replay. Using a fixed replay size of B = 100 and adopting the experimental settings explained in Appendix A.3 for multi-task streaming learning, these different strategies for multi-task streaming learning are explored for CSSL in Table 14. Models are initialized with ImageNet pre-trained parameters, but we do not perform any fine-tuning on the MIT Scenes dataset prior to streaming.

| Full | Split | Sep | Sum | MIT Scenes | CUB-200 | Flowers | FGVC | Average |
|------|-------|-----|-----|------------|---------|---------|-------|---------|
| X | | X | | 56.11 | 60.79 | 88.08 | 45.22 | 62.55 |
| | X | X | | 53.13 | 57.232 | 84.50 | 36.78 | 57.91 |
| X | | | X | 54.40 | 62.12 | 84.39 | 49.09 | 62.50 |
| | X | | X | 54.48 | 60.32 | 86.57 | 43.05 | 61.11 |

Table 14: Multi-task streaming learning performance of CSSL with different replay strategies, reported as Top-1 accuracy on each dataset and the average Top-1 accuracy across datasets.

As can be seen in Table 14, the best-performing strategy utilizes the full replay size for each task during the online update and performs a separate model update for each task. A strategy of using the full replay size for each task but performing a single update that sums the gradients of each task together performs similarly. However, this strategy is slightly worse than performing a separate update for each task. Thus, within Section 6.3, we adopt the strategy of, at each online update, sampling B replay examples—or B − 1 replay examples for the current task, plus the new example—for each of the tasks that have been seen so far and performing a separate model update for each task. The added computation of performing a separate update for each task is minimal relative to performing a single update across all tasks.
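A minimal PyTorch-style sketch of the adopted Full + Sep update is shown below; the multi-head model, the per-task buffers stored as tensor pairs, and all names are assumptions for illustration rather than our exact implementation.

```python
import torch
from torch import nn

class MultiHeadNet(nn.Module):
    """Stand-in multi-head model: a shared backbone with one
    classification head (layer) per dataset/task."""
    def __init__(self, in_dim=128, feat_dim=32, classes_per_task=(67, 200, 102, 100)):
        super().__init__()
        self.backbone = nn.Linear(in_dim, feat_dim)
        self.heads = nn.ModuleList(nn.Linear(feat_dim, c) for c in classes_per_task)

    def forward(self, x, task):
        return self.heads[task](torch.relu(self.backbone(x)))

def online_update(model, opt, buffers, new_x, new_y, new_task, B=100):
    """One Full + Sep online update: every task seen so far receives a full
    batch of B replay examples and its own separate gradient step; the
    current task's batch holds B - 1 replay examples plus the new example."""
    loss_fn = nn.CrossEntropyLoss()
    for task in range(new_task + 1):           # only tasks seen so far
        xs, ys = buffers[task]                 # per-task replay buffer (tensors)
        n_replay = B - 1 if task == new_task else B
        idx = torch.randint(len(xs), (n_replay,))
        bx, by = xs[idx], ys[idx]
        if task == new_task:
            bx = torch.cat([bx, new_x.unsqueeze(0)])
            by = torch.cat([by, new_y.unsqueeze(0)])
        opt.zero_grad()
        loss_fn(model(bx, task), by).backward()
        opt.step()                             # separate update per task ("Sep")
```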
## B.6 Closing The Performance Gap With Offline Learning

We claim that streaming learning methodologies have the potential for drastically-improved performance, but cannot be derived via simple modifications to existing techniques. To substantiate this claim, we perform class-incremental learning experiments on CIFAR100 (Krizhevsky et al., 2009) with two batch-incremental methods—iCarl (Rebuffi et al., 2017) and End-to-End Incremental Learning (E2EIL) (Castro et al., 2018)—and a state-of-the-art streaming approach—REMIND (Hayes et al., 2020); see Appendix A.7 for details. Performance is measured using Ωall (Hayes et al., 2018) and displayed in Figure 6.

Figure 6: Top-1 test accuracy for ResNet18 trained on CIFAR100 with both online and offline approaches. Accuracy is measured over "seen" classes at different points during online learning.

As can be seen, a performance gap exists between online and offline-trained models; e.g., in Figure 6, the final Top-1 accuracy of REMIND is > 20% below that of offline training. To be of use to most practitioners, this performance gap between online and offline training must be reduced—*no viable, real-time alternatives* to offline training yet exist. Considering possible sources of such a performance gap, one may look to the freezing of network layers within existing approaches to online learning. Such an approach is shown to significantly degrade network performance in Section 4. Thus, an approach with the potential to perform more comparably to offline training likely should not freeze large portions of network parameters.

Figure 7: Confusion matrices (excluding base initialization data) for incremental and streaming learning models after class-incremental CIFAR100 training. Incremental learning techniques (iCarl and E2EIL) are biased towards recently-observed classes (see the many incorrect predictions for classes 80-100 in the two leftmost plots). In contrast, incorrect predictions for REMIND are concentrated on the last class observed during streaming (see the line of incorrect predictions at the bottom of the third plot). This bias starts getting corrected when 100 replay samples—instead of 50—are used during each online update (see the rightmost plot).

| Method | PQ Encoding Time | Forward Pass Time | Backward Pass Time | Parameter Update Time |
|-----------------------|------------------|-------------------|--------------------|-----------------------|
| CSSL | - | 3.67 ± 0.45 | 5.41 ± 0.97 | 1.53 ± 0.06 |
| REMIND | 23.37 ± 1.13 | 2.65 ± 0.02 | 1.62 ± 0.03 | 0.52 ± 0.01 |
| REMIND + Extra Params | 23.37 ± 1.13 | 2.95 ± 0.12 | 1.84 ± 0.09 | 0.69 ± 0.04 |

Table 15: Timing metrics (in milliseconds) for performing model inference and updates with different streaming methodologies on ImageNet. Because CSSL is fully-plastic, backward passes and parameter updates take longer to complete, but REMIND's use of product quantization to encode model hidden representations makes its inference time an order of magnitude slower in comparison to CSSL.

Interestingly, many incremental learning approaches exist that perform end-to-end training (Castro et al., 2018; Hou et al., 2019; Wu et al., 2019). Thus, one may wonder whether a high-quality streaming approach—one that comes closer to matching offline learning performance—could be developed by simple modifications to such techniques. Unfortunately, these techniques, which employ replay, bias correction, and knowledge distillation to prevent catastrophic forgetting, are known to perform poorly in the streaming domain (Hayes et al., 2020). To understand why this is the case, it should be noted that knowledge distillation is known to provide minimal added benefit when combined with replay mechanisms (Belouadah & Popescu, 2020; 2019). Furthermore, we find that correcting bias in the network's classification layer is unnecessary for streaming learning. Though batch-incremental techniques are biased towards recently-observed classes, REMIND is only biased towards a single class (i.e., the last streamed class), which is easily corrected by increasing the number of replay samples observed during each online update; see Figure 7 and Appendix A.7 for experimental details.15 Such observations reveal that findings relevant to batch-incremental learning may not translate to the streaming setting. Additionally, a better streaming methodology cannot be derived by simply adopting and modifying established techniques for batch-incremental learning. To close the performance gap with offline learning, streaming techniques likely need to avoid freezing network parameters and leverage more effective forms of replay, as other techniques like knowledge distillation and bias correction provide minimal benefit.

15 Many incremental learning works have been proposed to correct such bias towards recently-observed classes (Hou et al., 2019; Wu et al., 2019). We emphasize that our focus here is not to compare streaming learning to the most recent approaches for incremental learning, but rather to demonstrate that bias towards recent classes is not a consideration for the streaming domain that we consider.

## B.7 Computational Impact Of Full Plasticity

Given that CSSL makes all model parameters trainable, one may wonder about the impact of such an approach on the computational complexity of performing updates and inferences with the underlying model throughout streaming.
To better understand the complexity impact of full plasticity, we measure the time taken to i) perform a forward pass, ii) perform a backward pass, and iii) update model parameters with CSSL over several streaming iterations. For comparison, we perform the same test with REMIND, which freezes a majority of the underlying network's early layers. These timing tests use a ResNet18 model and are performed on the ImageNet dataset using the same procedure described in Appendix A.1. At each iteration, 100 replay examples are sampled for both CSSL and REMIND. For REMIND, we also perform a supplemental test in which the underlying model has added layers to make the number of trainable parameters equal to that of CSSL. In all cases, we measure timing across five streaming iterations and report the average time; see Table 15.

CSSL has a more expensive forward pass, backward pass, and parameter update in comparison to REMIND, due to the fact that all network layers are updated by CSSL. However, REMIND encodes the output of the network's frozen feature extractor with a product quantization (PQ) (Jegou et al., 2010) module, which is quite slow in comparison to the other operations. As such, when data is passed through the REMIND model—aside from replay examples, for which the model's encoded hidden representations can be cached—it must be passed through the frozen portion of the network, encoded via PQ, then passed through the remaining, trainable layers, making the total inference time of REMIND an order of magnitude slower than that of CSSL.

## C Adapting Baselines To CSSL

CSSL is capable of randomly initializing the full network—and all network modules—immediately prior to the commencement of streaming; see Section 3. The baseline methodologies used within Section 6.1 (i.e., ExStream, Deep SLDA, and REMIND), however, all utilize pre-trained networks within their respective streaming algorithms. In particular, ExStream (Hayes et al., 2018) and Deep SLDA (Hayes & Kanan, 2020) train only the final, fully-connected layer of the neural network, while REMIND trains the last few convolutional layers and the final fully-connected layer during streaming—all other network layers are fixed either by using pre-trained network parameters or performing offline training over a subset of data during base initialization.

To be capable of starting streaming from a random initialization, baseline methodologies would either have to i) allow nearly all (or a majority of) network layers to remain random throughout streaming or ii) be adapted to train the network end-to-end. Forcing network layers to remain random throughout streaming catastrophically degrades performance, and adapting such methodologies to perform end-to-end training is highly non-trivial, as a majority of techniques within each methodology would break down (e.g., compressing the feature representations with PQ, clustering feature vectors, computing feature covariance, etc.). As such, we simply compare CSSL to baseline approaches in their original form, without forcing baselines to conform to a cold-start setting (i.e., baselines do not satisfy rule four from Section 3).

## D Technical Results

Within this section, we outline the technical theoretical results (i.e., Lemmas and Corollaries) that are established in the process of proving Theorem 1. Again, full proofs of each result are deferred to Section E.
Lemma 1. Assume Assumption 1 holds and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$. Consider iterates $\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ obtained via Algorithm 1 with $B$ replay examples taken from buffer $\mathcal{R}$ per iteration with $\eta=\mathcal{O}(m^{-\frac{3}{2}})$. Given sufficient overparameterization characterized by

$$m\geq\mathcal{O}\left(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m)\right)$$

we have

$$\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{(t)})\leq\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\mathcal{O}\left(LR^{2}m^{-\frac{1}{2}}\right)+\mathcal{O}\left(nm^{-\frac{1}{2}}BL(1+\Xi)\right),$$

for $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},R/m)$ with $R>0$ and probability at least $1-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}$.

Lemma 2. Given Assumption 1 and taking $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$, we have that $\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ over $n$ successive iterations of Algorithm 1 with $B$ replay samples taken from buffer $\mathcal{R}$ per iteration, arbitrary perturbation vectors applied to both new and replayed data, $\eta=\mathcal{O}(m^{-\frac{3}{2}})$, and sufficient overparameterization given by

$$m\geq\mathcal{O}\left(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m)\right)$$

with probability at least $1-\mathcal{O}(nLB)e^{-\Omega(m\omega^{2/3}L)}$.

Lemma 3. Assume Assumption 1 holds and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$. Then, when $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, we have

$$\left|f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)-\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle\right|\leq\mathcal{O}\left(\sqrt{m\left(1+\|\xi\|_{2}^{2}\right)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

for an arbitrary perturbation vector $\xi$16 and all $t\in[n]$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$.

Corollary 1. Assume Assumption 1 holds and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$. Then, when $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, we can extend Lemma 3 to show

$$L_{(t,\xi)}(\mathbf{W}')-L_{(t,\xi)}(\mathbf{W})\geq\sum_{l=1}^{L}\left\langle\nabla_{W_{l}}L_{(t,\xi)}(\mathbf{W}),W_{l}'-W_{l}\right\rangle-\mathcal{O}\left(\sqrt{m(1+\|\xi\|_{2}^{2})}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

for all $t\in[n]$ and arbitrary perturbation vector $\xi$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$.

Lemma 4. Assume Assumption 1 holds and $\omega\leq\mathcal{O}(L^{-\frac{9}{2}}\log^{-3}(m))$. Then, for $\mathbf{W}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ we have

$$\|g_{(t,l,\xi)}-g_{(t,l,\xi)}^{(0)}\|_{2},\ \|h_{(t,l,\xi)}-h_{(t,l,\xi)}^{(0)}\|_{2}\leq\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_{2}^{2})\log(m)}\right)$$

for all $t\in[n]$, $l\in[L-1]$, and arbitrary perturbation vector $\xi$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$.

Lemma 5. Assume Assumption 1 holds and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$. Then, with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$ we have for all $l\in[L-1]$ and $t\in[n]$

$$\left\|W_{L}'\prod_{r=l}^{L-1}\left(D_{(t,r,\xi)}'+D_{(t,r)}''\right)W_{r}'-W_{L}\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_{r}\right\|_{2}\leq\mathcal{O}\left(\omega^{\frac{1}{3}}L^{2}\sqrt{\log(m)}\right)$$

where $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, $D_{(t,r)}''\in[-1,1]^{m\times m}$ is any random diagonal matrix with at most $\mathcal{O}(m\omega^{\frac{2}{3}}L)$ non-zero entries, and $\xi$ is an arbitrary perturbation vector.

16 Here, we do not assign a subscript $\xi_{t}$ to the perturbation vector intentionally. This is because the perturbation vector used for this result is arbitrary. Namely, different perturbation vectors can be used for any $t\in[n]$ and both when the data is encountered newly or sampled for replay. The result holds for any perturbation vector chosen for each of these scenarios, where the final bound then depends on the factor $\|\xi\|_{2}^{2}$.
Lemma 6. Given Assumption 1, randomly initialized weights $\mathbf{W}^{(0)}$, and arbitrary perturbation vector $\xi$, we have

$$\left\|h_{(t,j,\xi)}^{(0)}\right\|_{2}^{2}\in\left[1-\|\xi\|_{2}^{2},\,1+\|\xi\|_{2}^{2}\right]$$

for all $t\in[n]$ and $j\in[L-1]$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m/L)}$.

Corollary 2. Given Assumption 1 and assuming $\omega\leq\mathcal{O}(L^{-\frac{9}{2}}\log^{-3}(m))$, we have for $\mathbf{W}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ and arbitrary perturbation vector $\xi$

$$\|h_{(t,j,\xi)}\|_{2}\leq\mathcal{O}\left(\sqrt{1+\|\xi\|_{2}^{2}}\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\right),$$

for all $t\in[n]$ and $j\in[L-1]$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$.

Lemma 7. Given Assumption 1 and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$, the following can be shown when $\mathbf{W}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ for arbitrary perturbation vector $\xi$

$$\|\nabla_{W_{l}}f_{\mathbf{W}}(x_{t}+\xi)\|_{F},\ \|\nabla_{W_{l}}L_{(t,\xi)}(\mathbf{W})\|_{F}\leq\mathcal{O}\left(\sqrt{m(1+\|\xi\|_{2}^{2})}\right)$$

for all $t\in[n]$ and $l\in[L-1]$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$.

## E Proofs

Within this section, we provide full proofs for all theoretical results derived within this work. We begin with an overview of further notation used within the analysis.

Notation. For some arbitrary set of weight matrices $\mathbf{W}=\{W_{1},\ldots,W_{L}\}$, we define $h_{(t,0,\xi)}=x_{t}+\xi$ and $h_{(t,l,\xi)}=\sigma(W_{l}h_{(t,l-1,\xi)})$ for $l\in[L-1]$ and arbitrary perturbation vector $\xi$. Similarly, we define $g_{(t,0,\xi)}=x_{t}+\xi$ and $g_{(t,l,\xi)}=W_{l}h_{(t,l-1,\xi)}$ for $l\in[L-1]$. For the weight matrices at initialization $\mathbf{W}^{(0)}=\{W_{1}^{(0)},\ldots,W_{L}^{(0)}\}$, we similarly define such vectors with the notation $h_{(t,l,\xi)}^{(0)}$ and $g_{(t,l,\xi)}^{(0)}$ (or $h_{(t,l,\xi)}'$, $g_{(t,l,\xi)}'$ for weight matrices $\mathbf{W}'$, and so on). We also define $D_{(t,l,\xi)}=\mathrm{diag}(\mathbb{1}\{(W_{l}h_{(t,l-1,\xi)})_{1}>0\},\ldots,\mathbb{1}\{(W_{l}h_{(t,l-1,\xi)})_{m}>0\})$ for all $t\in[n]$ and $l\in[L-1]$.

## E.1 Proof Of Theorem 1

Proof. For $t\in[n]$ and associated perturbation vectors $\xi_{t}$, let $L_{(t,\xi_{t})}^{0-1}(\mathbf{W}^{(t)})=\mathbb{1}\{y_{t}\cdot f_{\mathbf{W}^{(t)}}(x_{t}+\xi_{t})<0\}$, where $L_{(t,\xi_{t})}^{0-1}(\mathbf{W}^{(t)})\leq4L_{(t,\xi_{t})}(\mathbf{W}^{(t)})$ due to properties of the cross-entropy loss function (i.e., $\mathbb{1}\{z\leq0\}\leq4\ell(z)$). With this in mind, when $m\geq\mathcal{O}(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m))$ and $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$, we can invoke Lemma 1 to yield

$$\frac{1}{n}\sum_{t=1}^{n}L_{(t,\xi_{t})}^{0-1}(\mathbf{W}^{(t)})\leq\frac{4}{n}\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\mathcal{O}\left(\frac{LR^{2}}{nm^{\frac{1}{2}}}\right)+\mathcal{O}\left(m^{-\frac{1}{2}}BL(1+\Xi)\right),$$

with probability at least $1-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}$. Then, because the model is trained in a streaming fashion as described in Section 5.1, the $t$-th new data example is seen for the first time only at iteration $t$ within the data stream. Thus, we have that weights $\mathbf{W}^{(t)}$ only depend on data examples $(x_{1},y_{1}),\ldots,(x_{t-1},y_{t-1})$. In other words, $\mathbf{W}^{(t)}$ is independent of the $t$-th new data example $(x_{t},y_{t})$. Thus, we can invoke an online-to-batch conversion argument (see Proposition 1 in (Cesa-Bianchi et al., 2004)) to yield

$$\frac{1}{n}\sum_{t=1}^{n}L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\leq\frac{1}{n}\sum_{t=1}^{n}L_{(t,\xi_{t})}^{0-1}(\mathbf{W}^{(t)})+\sqrt{\frac{2\log(1/\delta)}{n}}$$

with probability at least $1-\delta$ for $\delta\in(0,1]$. From here, we define $\hat{\mathbf{W}}$ as a random entry chosen uniformly from the set $\{\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\}$.
Thus, we have the following by definition

$$\frac{1}{n}\sum_{t=1}^{n}L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})=\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]$$

Then, the following can be derived by combining the above expressions

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]\leq\frac{4}{n}\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\mathcal{O}\left(\frac{LR^{2}}{nm^{\frac{1}{2}}}\right)+\mathcal{O}\left(m^{-\frac{1}{2}}BL(1+\Xi)\right)+\sqrt{\frac{2\log(1/\delta)}{n}}\tag{4}$$

with probability at least $1-\delta-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}$ for all $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},R/m)$. Now, we can compare the neural network function $f_{\mathbf{W}^{\star}}$ with $F_{\mathbf{W}^{(0)},\mathbf{W}^{\star}}$ as follows

$$L_{(t,\xi_{t})}(\mathbf{W}^{\star})\overset{(i)}{\leq}\ell(y_{t}\cdot F_{\mathbf{W}^{(0)},\mathbf{W}^{\star}}(x_{t}+\xi_{t}))+\mathcal{O}\left(\sqrt{m\,(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

where $(i)$ holds from the 1-Lipschitz continuity of $\ell(\cdot)$ and Lemma 3 with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$. Then, we can plug this inequality into equation 4 to yield

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]\leq\frac{4}{n}\sum_{t=1}^{n}\ell\left[y_{t}\cdot F_{\mathbf{W}^{(0)},\mathbf{W}^{\star}}(x_{t}+\xi_{t})\right]+\mathcal{O}\left(\sqrt{m\,(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)+\mathcal{O}\left(\frac{LR^{2}}{nm^{\frac{1}{2}}}\right)+\mathcal{O}\left(m^{-\frac{1}{2}}BL(1+\Xi)\right)+\sqrt{\frac{2\log(1/\delta)}{n}}$$

Then, by taking an infimum over all $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},R/m)$, we get the following

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]\leq\inf_{f\in\mathcal{F}(\mathbf{W}^{(0)},R/m)}\left(\frac{4}{n}\sum_{t=1}^{n}\ell(y_{t}\cdot f(x_{t}+\xi_{t}))\right)+\mathcal{O}\left(\sqrt{m\,(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)+\mathcal{O}\left(\frac{LR^{2}}{nm^{\frac{1}{2}}}\right)+\mathcal{O}\left(m^{-\frac{1}{2}}BL(1+\Xi)\right)+\sqrt{\frac{2\log(1/\delta)}{n}}$$

From here, by noting that $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$ and $m\geq\mathcal{O}(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m))$, the expression can be simplified as follows

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]\leq\inf_{f\in\mathcal{F}(\mathbf{W}^{(0)},R/m)}\left(\frac{4}{n}\sum_{t=1}^{n}\ell(y_{t}\cdot f(x_{t}+\xi_{t}))\right)+\sqrt{\frac{2\log(1/\delta)}{n}}+\mathcal{O}\left(\frac{(1+\Xi)^{\frac{3}{4}}\sqrt{nB}}{L^{2}\log^{\frac{3}{2}}(m)}+\frac{R^{2}}{n^{\frac{3}{2}}B^{\frac{1}{2}}(1+\Xi)^{\frac{1}{4}}L^{2}\log^{\frac{3}{2}}(m)}+\frac{B^{\frac{1}{2}}(1+\Xi)^{\frac{3}{4}}}{n^{\frac{1}{2}}L^{2}\log^{\frac{3}{2}}(m)}\right)\tag{5}$$

We then analyze the asymptotic portion of the expression above (the final $\mathcal{O}(\cdot)$ term) as follows.

$$\mathcal{O}\left(\frac{(1+\Xi)^{\frac{3}{4}}\sqrt{nB}}{L^{2}\log^{\frac{3}{2}}(m)}+\frac{R^{2}}{n^{\frac{3}{2}}B^{\frac{1}{2}}(1+\Xi)^{\frac{1}{4}}L^{2}\log^{\frac{3}{2}}(m)}+\frac{B^{\frac{1}{2}}(1+\Xi)^{\frac{3}{4}}}{n^{\frac{1}{2}}L^{2}\log^{\frac{3}{2}}(m)}\right)=\mathcal{O}\left(\frac{(1+\Xi)B(n^{2}+1)+R^{2}}{L^{2}\log^{\frac{3}{2}}(m)\,nB^{\frac{1}{2}}(1+\Xi)^{\frac{1}{4}}}\right)$$
$$=\mathcal{O}\left(\frac{(1+\Xi)^{\frac{3}{4}}B^{\frac{1}{2}}n}{L^{2}\log^{\frac{3}{2}}(m)}\right)+\mathcal{O}\left(\frac{R^{2}}{L^{2}\log^{\frac{3}{2}}(m)\,nB^{\frac{1}{2}}(1+\Xi)^{\frac{1}{4}}}\right)\overset{(i)}{\leq}\mathcal{O}\left(\frac{(1+\Xi)^{\frac{3}{4}}B^{\frac{1}{2}}n+R^{2}}{L^{2}\log^{\frac{3}{2}}(m)}\right)$$

where $(i)$ holds due to the fact that $nB^{\frac{1}{2}}(1+\Xi)^{\frac{1}{4}}>1$. Then, by substituting this simplified asymptotic expression, we have

$$\mathbb{E}\left[L_{\mathcal{D}}^{0-1}(\hat{\mathbf{W}})\right]\leq\inf_{f\in\mathcal{F}(\mathbf{W}^{(0)},R/m)}\left(\frac{4}{n}\sum_{t=1}^{n}\ell(y_{t}\cdot f(x_{t}+\xi_{t}))\right)+\sqrt{\frac{2\log(1/\delta)}{n}}+\mathcal{O}\left(\frac{(1+\Xi)^{\frac{3}{4}}B^{\frac{1}{2}}n+R^{2}}{L^{2}\log^{\frac{3}{2}}(m)}\right)$$

where the asymptotic portion of the expression can be made arbitrarily small by increasing the value of $m$. Realizing that this result holds with probability $1-\delta-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}\approx1-\delta$ as the value of $m$ increases yields the desired result.

## E.2 Proof Of Lemma 1

Proof.
We assume $\mathbf{W}^{(0)}$ is initialized as described in Section 5.1, then updated according to Algorithm 1 with $B$ replay samples taken from buffer $\mathcal{R}$ at each iteration and distinct, arbitrary data perturbation vectors—representing data augmentation—applied to both new and replayed data throughout streaming. For $R>0$, we have $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},R/m)$, where $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ whenever $m\geq\mathcal{O}(L^{6}\log^{3}(m))$ (i.e., $R$ is a constant that does not appear in the asymptotic bound), which is looser than the overparameterization requirement within Lemma 2. Similarly, by Lemma 2, we have $\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ with probability at least $1-\mathcal{O}(nLB)e^{-\Omega(m\omega^{2/3}L)}$ whenever $m\geq\mathcal{O}(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m))$ and $\eta=\mathcal{O}(m^{-\frac{3}{2}})$. Thus, for $\mathbf{W}^{(t)},\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, by Corollary 1 we have with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$ that

$$L_{(t,\xi_{t})}(\mathbf{W}^{(t)})-L_{(t,\xi_{t})}(\mathbf{W}^{\star})\leq\left\langle\nabla_{\mathbf{W}^{(t)}}L_{(t,\xi_{t})}(\mathbf{W}^{(t)}),\mathbf{W}^{(t)}-\mathbf{W}^{\star}\right\rangle+\mathcal{O}\left(\sqrt{m(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)\tag{6}$$
$$=\sum_{l=1}^{L}\left\langle\nabla_{W_{l}^{(t)}}L_{(t,\xi_{t})}(\mathbf{W}^{(t)}),W_{l}^{(t)}-W_{l}^{\star}\right\rangle+\mathcal{O}\left(\sqrt{m(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

We can focus on bounding the inner-product term within the expression above as follows

$$\sum_{l=1}^{L}\left\langle\nabla_{W_{l}^{(t)}}L_{(t,\xi_{t})}(\mathbf{W}^{(t)}),W_{l}^{(t)}-W_{l}^{\star}\right\rangle\overset{(i)}{=}\sum_{l=1}^{L}\frac{1}{\eta}\left\langle W_{l}^{(t)}-W_{l}^{(t+1)}-\eta\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{t}}\nabla_{W_{l}^{(t)}}L_{(x_{rep},y_{rep},\xi_{rep})}(\mathbf{W}^{(t)}),\,W_{l}^{(t)}-W_{l}^{\star}\right\rangle$$
$$\overset{(ii)}{\leq}\sum_{l=1}^{L}\frac{1}{2\eta}\left(\left\|W_{l}^{(t)}-W_{l}^{(t+1)}-\eta\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{t}}\nabla_{W_{l}^{(t)}}L_{(x_{rep},y_{rep},\xi_{rep})}(\mathbf{W}^{(t)})\right\|_{F}^{2}+\left\|W_{l}^{(t)}-W_{l}^{\star}\right\|_{F}^{2}-\left\|W_{l}^{(t+1)}-W_{l}^{\star}+\eta\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{t}}\nabla_{W_{l}^{(t)}}L_{(x_{rep},y_{rep},\xi_{rep})}(\mathbf{W}^{(t)})\right\|_{F}^{2}\right)$$
$$=\sum_{l=1}^{L}\frac{1}{2\eta}\left(\left\|\eta\nabla_{W_{l}^{(t)}}L_{(t,\xi_{t})}(\mathbf{W}^{(t)})\right\|_{F}^{2}+\left\|W_{l}^{(t)}-W_{l}^{\star}\right\|_{F}^{2}-\left\|W_{l}^{(t+1)}-W_{l}^{\star}+\eta\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{t}}\nabla_{W_{l}^{(t)}}L_{(x_{rep},y_{rep},\xi_{rep})}(\mathbf{W}^{(t)})\right\|_{F}^{2}\right)$$
$$\overset{(iii)}{\leq}\sum_{l=1}^{L}\frac{1}{2\eta}\left(\left\|\eta\nabla_{W_{l}^{(t)}}L_{(t,\xi_{t})}(\mathbf{W}^{(t)})\right\|_{F}^{2}+\left\|W_{l}^{(t)}-W_{l}^{\star}\right\|_{F}^{2}-\left\|W_{l}^{(t+1)}-W_{l}^{\star}\right\|_{F}^{2}+\left\|\eta\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{t}}\nabla_{W_{l}^{(t)}}L_{(x_{rep},y_{rep},\xi_{rep})}(\mathbf{W}^{(t)})\right\|_{F}^{2}\right)$$
$$\overset{(iv)}{\leq}\sum_{l=1}^{L}\frac{1}{2\eta}\left(\left\|W_{l}^{(t)}-W_{l}^{\star}\right\|_{F}^{2}-\left\|W_{l}^{(t+1)}-W_{l}^{\star}\right\|_{F}^{2}\right)+\mathcal{O}\left(\eta BLm(1+\Xi)\right)$$

where $(i)$ follows from equation 3, $(ii)$ holds from the identity that $2\langle A,B\rangle\leq\|A\|_{F}^{2}+\|B\|_{F}^{2}-\|A-B\|_{F}^{2}$, $(iii)$ holds due to the lower triangle inequality, and $(iv)$ holds due to the upper triangle inequality and Lemma 7 with probability at least $1-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}$ by taking a union bound across all data, replay examples, and network layers. Plugging this bound into equation 6, we get

$$L_{(t,\xi_{t})}(\mathbf{W}^{(t)})-L_{(t,\xi_{t})}(\mathbf{W}^{\star})\leq\sum_{l=1}^{L}\frac{1}{2\eta}\left(\left\|W_{l}^{(t)}-W_{l}^{\star}\right\|_{F}^{2}-\left\|W_{l}^{(t+1)}-W_{l}^{\star}\right\|_{F}^{2}\right)+\mathcal{O}\left(\eta BLm(1+\Xi)+\sqrt{m(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right).$$

Then, telescoping this expression over $t\in[n]$ yields

$$\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{(t)})\leq\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\sum_{l=1}^{L}\frac{\left\|W_{l}^{(0)}-W_{l}^{\star}\right\|_{F}^{2}}{2\eta}+\mathcal{O}\left(n\eta BLm(1+\Xi)+n\sqrt{m(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$
$$\overset{(i)}{\leq}\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\frac{LR^{2}}{2\eta m^{2}}+\mathcal{O}\left(n\eta BLm(1+\Xi)+n\sqrt{m(1+\Xi)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$
$$\overset{(ii)}{\leq}\sum_{t=1}^{n}L_{(t,\xi_{t})}(\mathbf{W}^{\star})+\mathcal{O}\left(LR^{2}m^{-\frac{1}{2}}\right)+\mathcal{O}\left(nm^{-\frac{1}{2}}BL(1+\Xi)\right)$$

where $(i)$ follows from the fact that $\mathbf{W}^{\star}\in\mathcal{B}(\mathbf{W}^{(0)},R/m)$ and $(ii)$ holds for sufficiently small $\omega$ with $\eta=\frac{\kappa}{m^{3/2}}$, where $\kappa$ is a small, positive constant. Thus, the desired result holds with probability at least $1-\mathcal{O}(nBL)e^{-\Omega(m\omega^{2/3}L)}$ with overparameterization given by the expression below.

$$m\geq\mathcal{O}\left(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m)\right)$$

## E.3 Proof Of Lemma 2

Proof.
We assume $\mathbf{W}^{(0)}$ is initialized as described in Section 5.1 and updated according to Algorithm 1 with $B$ replay examples sampled uniformly from the replay buffer $\mathcal{R}$ at each update and arbitrary perturbation vectors—representing data augmentation—applied separately to both new and replayed data throughout streaming. We take $\omega=C_{1}L^{-6}\log^{-3}(m)$ such that $C_{1}$ is a small enough constant to satisfy assumptions on $\omega$ in Lemma 7. From here, we can show that $\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ through induction.

Base Case. The base case $\mathbf{W}^{(0)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ holds trivially.

Inductive Case. Assume that $\mathbf{W}^{(t)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$. By Lemma 7, we have

$$\left\|\nabla_{W_{l}}L_{(t,\xi_{t})}(\mathbf{W})\right\|_{F}\leq\mathcal{O}\left(\sqrt{m(1+\|\xi_{t}\|_{2}^{2})}\right)\tag{7}$$

with probability at least $1-e^{-\Omega(m\omega^{2/3}L)}$. Denoting the indices of data elements within our replay buffer as $\mathcal{R}$, we can then show the following over iterations of Algorithm 1.

$$\left\|W_{l}^{(t+1)}-W_{l}^{(0)}\right\|_{F}\overset{(i)}{\leq}\sum_{j=1}^{t}\left\|W_{l}^{(j+1)}-W_{l}^{(j)}\right\|_{F}\overset{(ii)}{=}\sum_{j=1}^{t}\left\|-\eta\left(\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{j}}\nabla_{W_{l}}L_{(x_{rep},y_{rep},\xi_{rep})}\left(\mathbf{W}^{(j)}\right)+\nabla_{W_{l}}L_{(j,\xi_{j})}\left(\mathbf{W}^{(j)}\right)\right)\right\|_{F}$$
$$\overset{(iii)}{\leq}\eta\sum_{j=1}^{t}\left(\sum_{(x_{rep},y_{rep})\sim\mathcal{S}_{j}}\left\|\nabla_{W_{l}}L_{(x_{rep},y_{rep},\xi_{rep})}\left(\mathbf{W}^{(j)}\right)\right\|_{F}+\left\|\nabla_{W_{l}}L_{(j,\xi_{j})}\left(\mathbf{W}^{(j)}\right)\right\|_{F}\right)\overset{(iv)}{\leq}\mathcal{O}\left(\eta nB\sqrt{m\,(1+\Xi)}\right)$$

where $(i)$ and $(iii)$ hold due to the upper triangle inequality, $(ii)$ holds from equation 3, and $(iv)$ holds due to equation 7 and the definition of $\Xi$. Thus, for $\omega=C_{1}L^{-6}\log^{-3}(m)$, if we set $\eta=\frac{\kappa}{m^{3/2}}$ we have

$$\left\|W_{l}^{(t+1)}-W_{l}^{(0)}\right\|_{F}\leq\mathcal{O}\left(\eta nB\sqrt{m\left(1+\Xi\right)}\right)=\frac{\kappa nB\sqrt{1+\Xi}}{\sqrt{m}}\leq\omega$$

for some small enough absolute constant $\kappa$ and $m\geq\mathcal{O}(nB\sqrt{1+\Xi}\,L^{6}\log^{3}(m))$. Thus, the inductive step is complete and we have $\mathbf{W}^{(0)},\ldots,\mathbf{W}^{(n)}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$ with probability at least $1-\mathcal{O}(nLB)e^{-\Omega(m\omega^{2/3}L)}$ by taking a union bound over all data examples, replay examples, and network layers.

## E.4 Proof Of Lemma 3

Proof. We consider some fixed $t\in[n]$ with perturbation vector $\xi$ and two weight matrices $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, where $\mathbf{W}^{(0)}$ is initialized as described in Section 3.1. It should be noted that $\xi$ is an arbitrary perturbation vector that can be different for any $t\in[n]$. We omit the subscript $\xi_{t}$ to emphasize that the result can hold with different perturbations for any $t\in[n]$ and both when data is encountered newly or used for replay. In particular, the result only depends on the value of $\|\xi\|_{2}^{2}$, thus allowing arbitrary settings of $\xi$ to be used for new or replayed data. From the formulation of the forward pass in equation 1, it can be shown that $f_{\mathbf{W}'}(x_{t}+\xi)=\sqrt{m}\cdot W_{L}'h_{(t,L-1,\xi)}'$ and $f_{\mathbf{W}}(x_{t}+\xi)=\sqrt{m}\cdot W_{L}h_{(t,L-1,\xi)}$. These identities allow us to derive the following via direct calculation

$$f_{\mathbf{W}'}(x_{t}+\xi)-F_{\mathbf{W},\mathbf{W}'}(x_{t}+\xi)=-\sqrt{m}\cdot\sum_{l=1}^{L-1}W_{L}\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_{r}\right)D_{(t,l,\xi)}\left(W_{l}'-W_{l}\right)h_{(t,l-1,\xi)}+\sqrt{m}\cdot W_{L}'\left(h_{(t,L-1,\xi)}'-h_{(t,L-1,\xi)}\right)\tag{8}$$

Then, from Claim 11.2 in (Allen-Zhu et al., 2019), it is known that, for all $t\in[n]$, $h_{(t,L-1,\xi)}-h_{(t,L-1,\xi)}'$ (i.e., the difference appearing in the final term of the above expression) can be re-written as

$$h_{(t,L-1,\xi)}-h_{(t,L-1,\xi)}'=\sum_{l=1}^{L-1}\left(\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}'+D_{(t,r)}''\right)W_{r}'\right)\left(D_{(t,l,\xi)}'+D_{(t,l)}''\right)\left(W_{l}-W_{l}'\right)h_{(t,l-1,\xi)}$$

where $D_{(t,l)}''\in\mathbb{R}^{m\times m}$ for $l\in[L-1]$ is any random diagonal matrix with at most $\mathcal{O}(m\omega^{\frac{2}{3}}L)$ non-zero entries in the range $[-1,1]$. With this in mind, we can then rewrite equation 8 as follows.
$$f_{\mathbf{W}'}(x_{t}+\xi)-F_{\mathbf{W},\mathbf{W}'}(x_{t}+\xi)=\sqrt{m}\cdot\sum_{l=1}^{L-1}W_{L}'\left(\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}'+D_{(t,r)}''\right)W_{r}'\right)\left(D_{(t,l,\xi)}'+D_{(t,l)}''\right)\left(W_{l}-W_{l}'\right)h_{(t,l-1,\xi)}$$
$$-\sqrt{m}\cdot\sum_{l=1}^{L-1}W_{L}\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_{r}\right)D_{(t,l,\xi)}\left(W_{l}'-W_{l}\right)h_{(t,l-1,\xi)}$$

Now, given that $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$, we can unroll this expression to arrive at the final result as follows

$$\left|f_{\mathbf{W}'}(x_{t}+\xi)-F_{\mathbf{W},\mathbf{W}'}(x_{t}+\xi)\right|\overset{(i)}{\leq}\sqrt{m}\cdot\sum_{l=1}^{L-1}\left\|W_{L}'\left(\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}'+D_{(t,r)}''\right)W_{r}'\right)\left(D_{(t,l,\xi)}'+D_{(t,l)}''\right)+W_{L}\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_{r}\right)D_{(t,l,\xi)}\right\|_{2}\cdot\left\|W_{l}-W_{l}'\right\|_{2}\cdot\left\|h_{(t,l-1,\xi)}\right\|_{2}$$
$$\overset{(ii)}{\leq}\mathcal{O}\left(\sqrt{m}\,\omega^{\frac{1}{3}}L^{2}\log(m)\right)\sum_{l=1}^{L-1}\left\|W_{l}-W_{l}'\right\|_{2}\cdot\left\|h_{(t,l-1,\xi)}\right\|_{2}$$
$$\overset{(iii)}{\leq}\mathcal{O}\left(\sqrt{m\left(1+\|\xi\|_{2}^{2}\right)}\,\omega^{\frac{1}{3}}L^{2}\log(m)\right)\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\sum_{l=1}^{L-1}\left\|W_{l}-W_{l}'\right\|_{2}$$
$$\overset{(iv)}{\leq}\mathcal{O}\left(\sqrt{m\left(1+\|\xi\|_{2}^{2}\right)}\,\omega^{\frac{1}{3}}L^{2}\log(m)\right)\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)(\omega L)\overset{(v)}{\leq}\mathcal{O}\left(\sqrt{m\left(1+\|\xi\|_{2}^{2}\right)}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

where $(i)$ holds due to the triangle inequality, $(ii)$ holds due to Lemma 5 with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$, $(iii)$ holds due to Corollary 2 with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$, $(iv)$ holds due to the definition of the $\omega$ neighborhood, and $(v)$ holds for sufficiently small $\omega$.

## E.5 Proof Of Corollary 1

Proof. Consider some fixed $t\in[n]$ and arbitrary perturbation vector $\xi$, where we again omit the subscript $\xi_{t}$ to emphasize that the result holds with different perturbations for any $t\in[n]$ and both when data is encountered newly or sampled for replay. We consider $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, where $\mathbf{W}^{(0)}$ is initialized as described within Section 5.1. Recall that we utilize a standard cross-entropy loss function $\ell(z)=\log(1+e^{-z})$. We denote the derivative of this function as $\ell'(z)$, where $\ell'(z)=\frac{d}{dz}\log(1+e^{-z})=\frac{-1}{e^{z}+1}$. For $\mathbf{W},\mathbf{W}'\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, we can derive the following

$$L_{(t,\xi)}(\mathbf{W}')-L_{(t,\xi)}(\mathbf{W})=\ell\left(y_{t}f_{\mathbf{W}'}(x_{t}+\xi)\right)-\ell\left(y_{t}f_{\mathbf{W}}(x_{t}+\xi)\right)\overset{(i)}{\geq}\ell'\left(y_{t}f_{\mathbf{W}}(x_{t}+\xi)\right)\cdot y_{t}\cdot\left(f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)\right)$$

where $(i)$ holds due to the convexity of $\ell(\cdot)$. From here, we have

$$\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\cdot\left(f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)\right)=\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\cdot\left(f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)\pm\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle\right)$$
$$=\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\cdot\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle+\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\cdot\left(f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)-\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle\right)$$
$$\geq\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\cdot\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle-\left|\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\right|\cdot\left|f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)-\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle\right|$$
$$=\sum_{l=1}^{L}\left\langle\nabla_{W_{l}}L_{(t,\xi)}(\mathbf{W}),W_{l}'-W_{l}\right\rangle-\left|\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}\right|\cdot\left|f_{\mathbf{W}'}(x_{t}+\xi)-f_{\mathbf{W}}(x_{t}+\xi)-\langle\nabla_{\mathbf{W}}f_{\mathbf{W}}(x_{t}+\xi),\mathbf{W}'-\mathbf{W}\rangle\right|$$

Then, by noticing that $|\ell'(y_{t}f_{\mathbf{W}}(x_{t}+\xi))\cdot y_{t}|\leq1$ and invoking Lemma 3, we can derive the following with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m\omega^{2/3}L)}$

$$L_{(t,\xi)}(\mathbf{W}')-L_{(t,\xi)}(\mathbf{W})\geq\sum_{l=1}^{L}\left\langle\nabla_{W_{l}}L_{(t,\xi)}(\mathbf{W}),W_{l}'-W_{l}\right\rangle-\mathcal{O}\left(\sqrt{m(1+\|\xi\|_{2}^{2})}\,\omega^{\frac{4}{3}}L^{3}\log(m)\right)$$

whenever $\omega\leq\mathcal{O}(L^{-6}\log^{-3}(m))$.

## E.6 Proof Of Lemma 4

Proof.
Consider some $t\in[n]$ and arbitrary perturbation vector $\xi$, where we omit the subscript $\xi_{t}$ to emphasize that the result holds with different perturbations for any $t\in[n]$ and both when data is newly encountered or sampled for replay. Consider random weight matrices $\mathbf{W}^{(0)}$ initialized as in Section 5.1, and $\mathbf{W}$ such that $\mathbf{W}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$. From Lemma 6, we have with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m/L)}$ that $\|h_{(t,j,\xi)}^{(0)}\|_{2}^{2},\|g_{(t,j,\xi)}^{(0)}\|_{2}^{2}\in\left[1-\|\xi\|_{2}^{2},1+\|\xi\|_{2}^{2}\right]$. Furthermore, if $m\geq\Omega(nL\log(nL))$, we have that for all $t\in[n]$ and all $1\leq a\leq b\leq L$

$$\left\|\prod_{l=a}^{b}W_{l}^{(0)}D_{(t,l-1,\xi)}^{(0)}\right\|_{2}\leq\mathcal{O}(\sqrt{L})\tag{9}$$

with probability at least $1-e^{-\Omega(m/L)}$ by Lemma 7.3 in (Allen-Zhu et al., 2019).17

17 Equation 9 has one extra factor of $D_{(t,l-1,\xi)}^{(0)}$ within the expression in comparison to Lemma 7.3 in (Allen-Zhu et al., 2019), but this does not impact the norm of the overall expression because $\|D_{(t,l-1,\xi)}^{(0)}\|_{2}\leq1$.

Now, we can prove the desired result with an induction argument over layers of the network.

Base Case. $\|g_{(t,0,\xi)}-g_{(t,0,\xi)}^{(0)}\|_{2}=\|x_{t}+\xi-(x_{t}+\xi)\|_{2}=0$, so the base case trivially holds. The same is true for $\|h_{(t,0,\xi)}-h_{(t,0,\xi)}^{(0)}\|_{2}$.

Inductive Case. Assume the inductive hypothesis holds for $l-1$. We can derive the following

$$g_{(t,l,\xi)}-g_{(t,l,\xi)}^{(0)}=\left(W_{l}^{(0)}+W_{l}-W_{l}^{(0)}\right)\left(D_{(t,l-1,\xi)}^{(0)}+D_{(t,l-1,\xi)}-D_{(t,l-1,\xi)}^{(0)}\right)\left(g_{(t,l-1,\xi)}^{(0)}+g_{(t,l-1,\xi)}-g_{(t,l-1,\xi)}^{(0)}\right)-W_{l}^{(0)}D_{(t,l-1,\xi)}^{(0)}g_{(t,l-1,\xi)}^{(0)}$$
$$=\left(W_{l}-W_{l}^{(0)}\right)\left(D_{(t,l-1,\xi)}^{(0)}+D_{(t,l-1,\xi)}-D_{(t,l-1,\xi)}^{(0)}\right)\left(g_{(t,l-1,\xi)}^{(0)}+g_{(t,l-1,\xi)}-g_{(t,l-1,\xi)}^{(0)}\right)+W_{l}^{(0)}\left(D_{(t,l-1,\xi)}-D_{(t,l-1,\xi)}^{(0)}\right)\left(g_{(t,l-1,\xi)}^{(0)}+g_{(t,l-1,\xi)}-g_{(t,l-1,\xi)}^{(0)}\right)+W_{l}^{(0)}D_{(t,l-1,\xi)}^{(0)}\left(g_{(t,l-1,\xi)}-g_{(t,l-1,\xi)}^{(0)}\right)$$

From here, we can telescope over the $g_{(t,l,\xi)}-g_{(t,l,\xi)}^{(0)}$ terms to obtain the expression below

$$g_{(t,l,\xi)}-g_{(t,l,\xi)}^{(0)}=\sum_{a=1}^{l}\left(\prod_{b=l}^{a+1}W_{b}^{(0)}D_{(t,b-1,\xi)}^{(0)}\right)\Bigg[\left(W_{a}-W_{a}^{(0)}\right)\left(D_{(t,a-1,\xi)}^{(0)}+D_{(t,a-1,\xi)}-D_{(t,a-1,\xi)}^{(0)}\right)\left(g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right)+W_{a}^{(0)}\left(D_{(t,a-1,\xi)}-D_{(t,a-1,\xi)}^{(0)}\right)\left(g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right)\Bigg]\tag{10}$$

Now, we focus on a single summation element of the first bracketed term in equation 10 and provide an upper bound on the norm of this expression.

$$\left\|\left(\prod_{b=l}^{a+1}W_{b}^{(0)}D_{(t,b-1,\xi)}^{(0)}\right)\left(W_{a}-W_{a}^{(0)}\right)D_{(t,a-1,\xi)}\left(g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right)\right\|_{2}$$
$$\leq\left\|\prod_{b=l}^{a+1}W_{b}^{(0)}D_{(t,b-1,\xi)}^{(0)}\right\|_{2}\cdot\left\|W_{a}-W_{a}^{(0)}\right\|_{2}\cdot\left\|D_{(t,a-1,\xi)}\right\|_{2}\cdot\left\|g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right\|_{2}$$
$$\overset{(i)}{\leq}\mathcal{O}\left(\sqrt{L}\right)\cdot\left\|W_{a}-W_{a}^{(0)}\right\|_{2}\cdot\left\|D_{(t,a-1,\xi)}\right\|_{2}\cdot\left\|g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right\|_{2}$$
$$\overset{(ii)}{\leq}\mathcal{O}\left(\sqrt{L}\omega\right)\cdot\left\|D_{(t,a-1,\xi)}\right\|_{2}\cdot\left\|g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right\|_{2}\overset{(iii)}{\leq}\mathcal{O}\left(\sqrt{L}\omega\right)\cdot\left\|g_{(t,a-1,\xi)}^{(0)}+g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right\|_{2}$$
$$\overset{(iv)}{\leq}\mathcal{O}\left(\sqrt{L}\omega\right)\left[\left\|g_{(t,a-1,\xi)}^{(0)}\right\|_{2}+\left\|g_{(t,a-1,\xi)}-g_{(t,a-1,\xi)}^{(0)}\right\|_{2}\right]\overset{(v)}{\leq}\mathcal{O}\left(\sqrt{L}\omega\sqrt{1+\|\xi\|_{2}^{2}}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_{2}^{2})\log(m)}\right)$$

where $(i)$ holds due to equation 9, $(ii)$ holds because $\mathbf{W}\in\mathcal{B}(\mathbf{W}^{(0)},\omega)$, $(iii)$ holds because $\|D_{(t,a-1,\xi)}\|_{2}\leq1$ by construction, $(iv)$ holds by the upper triangle inequality, and $(v)$ holds by Lemma 6 and the inductive hypothesis. Now that we have bounded the first bracketed term, we can focus on a single summation element of the second bracketed term in equation 10.
First, we make the following definition

$$\begin{aligned}
\zeta&\triangleq\left(D_{(t,a-1,\xi)}-D^{(0)}_{(t,a-1,\xi)}\right)\left(g^{(0)}_{(t,a-1,\xi)}+g_{(t,a-1,\xi)}-g^{(0)}_{(t,a-1,\xi)}\right)\\
&=\left(D_{(t,a-1,\xi)}-D^{(0)}_{(t,a-1,\xi)}\right)\left(W^{(0)}_{a-1}h^{(0)}_{(t,a-2,\xi)}+g_{(t,a-1,\xi)}-g^{(0)}_{(t,a-1,\xi)}\right).
\end{aligned}$$

Then, by Claim 8.3 and Corollary 8.4 in (Allen-Zhu et al., 2019), we have that with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$

$$\left\|\frac{1}{c}\zeta\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)\quad\text{and}\quad\left\|\frac{1}{c}\zeta\right\|_2\leq\mathcal{O}\left(\omega L^{\frac{3}{2}}\right)\tag{11}$$

where $c=\|h^{(0)}_{(t,a-2,\xi)}\|_2\in\left[\sqrt{1-\|\xi\|_2^2},\sqrt{1+\|\xi\|_2^2}\right]$. Additionally, we define

$$\begin{aligned}
\gamma&\triangleq\left(\prod_{b=l}^{a+1}W^{(0)}_b D_{(t,b-1,\xi)}\right)\cdot W^{(0)}_a\cdot\left[\left(D_{(t,a-1,\xi)}-D^{(0)}_{(t,a-1,\xi)}\right)\left(g^{(0)}_{(t,a-1,\xi)}+g_{(t,a-1,\xi)}-g^{(0)}_{(t,a-1,\xi)}\right)\right]\\
&=c\left(\prod_{b=l}^{a+1}W^{(0)}_b D_{(t,b-1,\xi)}\right)\cdot W^{(0)}_a\cdot\left[\frac{1}{c}\zeta\right]
\end{aligned}$$

By invoking Claim 8.5 from (Allen-Zhu et al., 2019), we can reformulate γ as

$$\gamma=\left(c\left\|\frac{1}{c}\zeta\right\|_2\right)(\gamma_1+\gamma_2)$$

where with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\log(m)\right)}$ the γ1 and γ2 terms can be bounded as

$$\|\gamma_1\|_2\leq\mathcal{O}\left(L^{\frac{1}{2}}\omega^{\frac{1}{3}}\log(m)\right)\quad\text{and}\quad\|\gamma_2\|_\infty\leq\mathcal{O}\left(\sqrt{\frac{\log(m)}{m}}\right)\tag{12}$$

Combining all of this together, we get

$$\begin{aligned}
\|g_{(t,l,\xi)}-g^{(0)}_{(t,l,\xi)}\|_2&=\left\|\sum_{a=1}^{l}\left(\mathcal{O}\left(\sqrt{L}\,\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+c\left\|\tfrac{1}{c}\zeta\right\|_2(\gamma_1+\gamma_2)\right)\right\|_2\\
&\overset{i}{\leq}\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+L\,c\left\|\tfrac{1}{c}\zeta\right\|_2\|\gamma_1+\gamma_2\|_2\\
&\overset{ii}{\leq}\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+\sqrt{1+\|\xi\|_2^2}\,\mathcal{O}\left(\omega L^{\frac{5}{2}}\right)\|\gamma_1+\gamma_2\|_2\\
&\overset{iii}{\leq}\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+\sqrt{1+\|\xi\|_2^2}\,\mathcal{O}\left(\omega L^{\frac{5}{2}}\right)(\|\gamma_1\|_2+\|\gamma_2\|_2)\\
&\overset{iv}{\leq}\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+\sqrt{1+\|\xi\|_2^2}\,\mathcal{O}\left(\omega L^{\frac{5}{2}}\right)\left(\mathcal{O}\left(L^{\frac{1}{2}}\omega^{\frac{1}{3}}\log(m)\right)+\mathcal{O}\left(\sqrt{\log(m)}\right)\right)\\
&=\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}\right)+\mathcal{O}\left(\omega^2 L^4\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+\mathcal{O}\left(\omega^{\frac{4}{3}}L^3\sqrt{1+\|\xi\|_2^2}\log(m)\right)+\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)
\end{aligned}$$

where i holds by the upper triangle inequality and properties of norms, ii holds by Lemma 6 and equation 11, iii holds by the triangle inequality, and iv holds by equation 12 and invoking the upper bound $\|\gamma_2\|_2\leq\sqrt{m}\|\gamma_2\|_\infty$. Then, when ω is sufficiently small, we get

$$\|g_{(t,l,\xi)}-g^{(0)}_{(t,l,\xi)}\|_2\leq\mathcal{O}\left(L^{\frac{3}{2}}\omega\sqrt{1+\|\xi\|_2^2}+\omega L^{\frac{5}{2}}\sqrt{1+\|\xi\|_2^2}\sqrt{\log(m)}\right)\leq\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)\tag{13}$$

thus completing the inductive portion of the proof for $\|g_{(t,l,\xi)}-g^{(0)}_{(t,l,\xi)}\|_2$. Then, to finish the inductive portion of the proof for $\|h_{(t,l,\xi)}-h^{(0)}_{(t,l,\xi)}\|_2$, we note that

$$\begin{aligned}
\left\|h_{(t,l,\xi)}-h^{(0)}_{(t,l,\xi)}\right\|_2&=\left\|D_{(t,l,\xi)}\left(g_{(t,l,\xi)}-g^{(0)}_{(t,l,\xi)}\right)+\left(D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right)g_{(t,l,\xi)}\right\|_2\\
&\overset{i}{\leq}\|D_{(t,l,\xi)}\|_2\cdot\left\|g_{(t,l,\xi)}-g^{(0)}_{(t,l,\xi)}\right\|_2+\left\|D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right\|_2\cdot\|g_{(t,l,\xi)}\|_2\\
&\overset{ii}{\leq}1\cdot\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{1+\|\xi\|_2^2}\sqrt{\log(m)}\right)+1\cdot\omega L^{\frac{3}{2}}\sqrt{1+\|\xi\|_2^2}\\
&\leq\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{1+\|\xi\|_2^2}\sqrt{\log(m)}\right)
\end{aligned}$$

where i holds from the upper triangle inequality and properties of the spectral norm and ii holds from equation 11 and equation 13. Thus, we have completed the inductive case for $\|h_{(t,l,\xi)}-h^{(0)}_{(t,l,\xi)}\|_2$.
From here, a union bound can be taken over all t ∈ [n] and l ∈ [L − 1] to yield the desired result with probability $1-\mathcal{O}(nL)e^{-\Omega(m/L)}-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\log(m)\right)}=1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$, where the last equality again holds when ω is sufficiently small.

Proof. We consider some fixed t ∈ [n] and arbitrary perturbation vector ξ, where we omit the subscript ξt to emphasize that the result holds with different perturbations for any t ∈ [n] and both when data is newly encountered or sampled for replay. We define random diagonal matrices $D''_{(t,1)},\dots,D''_{(t,L-1)}$ as arbitrary diagonal matrices, each with at most $\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)$ nonzero entries taking values in [−1, 1]. Consider two sets of model parameters W, W′ ∈ B(W(0), ω), where W(0) is initialized as described in Section 5.1. From equation 11, we know that

$$\left\|\frac{1}{\|h^{(0)}_{(t,l-1,\xi)}\|_2}\left(D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right)g_{(t,l,\xi)}\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)$$

with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$. This bound holds for both W and W′. Then, noticing that right multiplication by $g_{(t,l,\xi)}$ and division by a constant factor cannot increase the number of non-zero entries within the matrix $D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}$ (i.e., recall that this matrix is diagonal), we have

$$\left\|D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)\tag{14}$$

From here, we can apply the upper triangle inequality to yield

$$\begin{aligned}
\left\|D_{(t,l,\xi)}-D'_{(t,l,\xi)}\right\|_0&=\left\|\left(D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right)-\left(D'_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right)\right\|_0\\
&\overset{i}{\leq}\left\|D_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right\|_0+\left\|D'_{(t,l,\xi)}-D^{(0)}_{(t,l,\xi)}\right\|_0\\
&\overset{ii}{\leq}\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)
\end{aligned}$$

where i holds by the upper triangle inequality and ii is due to equation 14. From here, we apply a union bound over all t ∈ [n] and l ∈ [L − 1] to yield

$$\left\|D_{(t,l,\xi)}-D'_{(t,l,\xi)}\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)\tag{15}$$

for all t ∈ [n] and l ∈ [L − 1] with probability at least $1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$. Equation 15 can also be extended to show the following properties with identical probability

$$\left\|D_{(t,l,\xi)}+D''_{(t,l)}-D^{(0)}_{(t,l,\xi)}\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)\quad\text{and}\quad\left\|D'_{(t,l,\xi)}+D''_{(t,l)}-D^{(0)}_{(t,l,\xi)}\right\|_0\leq\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)\tag{16}$$

which hold due to the number of non-zero entries assumed to be within each random diagonal matrix $D''_{(t,l)}$ by construction. From here, we first note that with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\log(m)\right)}$ we have

$$\left\|\prod_{r=l}^{L-1}\left(D_{(t,r,\xi)}+D''_{(t,r)}\right)W_r\right\|_2\leq\mathcal{O}\left(\sqrt{L}\right)\tag{17}$$

due to Lemma 8.6 in (Allen-Zhu et al., 2019).
Then, we consider bounding the following expression

$$\begin{aligned}
\Bigg\|W'_L\prod_{r=l}^{L-1}&\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r-W_L\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r\Bigg\|_2\\
&=\left\|\left(W'_L-W^{(0)}_L+W^{(0)}_L\right)\prod_{r=l}^{L-1}\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r-\left(W_L-W^{(0)}_L+W^{(0)}_L\right)\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r\right\|_2\\
&\overset{i}{\leq}\left\|\left(W'_L-W^{(0)}_L\right)\prod_{r=l}^{L-1}\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r\right\|_2+\left\|\left(W_L-W^{(0)}_L\right)\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r\right\|_2\\
&\qquad+\left\|W^{(0)}_L\prod_{r=l}^{L-1}\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r-W^{(0)}_L\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r\right\|_2\\
&\overset{ii}{\leq}\mathcal{O}(\sqrt{L}\,\omega)+\left\|W^{(0)}_L\prod_{r=l}^{L-1}\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r-W^{(0)}_L\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r\right\|_2\\
&\overset{iii}{\leq}\mathcal{O}(\sqrt{L}\,\omega)+\left\|W^{(0)}_L\prod_{r=l}^{L-1}\left(D'_{(t,r,\xi)}+D''_{(t,r)}\right)W'_r-W^{(0)}_L\prod_{r=l}^{L-1}D^{(0)}_{(t,r,\xi)}W^{(0)}_r\right\|_2\\
&\qquad+\left\|W^{(0)}_L\prod_{r=l}^{L-1}D_{(t,r,\xi)}W_r-W^{(0)}_L\prod_{r=l}^{L-1}D^{(0)}_{(t,r,\xi)}W^{(0)}_r\right\|_2\\
&\overset{iv}{\leq}\mathcal{O}(\sqrt{L}\,\omega)+\mathcal{O}\left(\omega^{\frac{1}{3}}L^2\sqrt{\log(m)}\right)\\
&\overset{v}{\leq}\mathcal{O}\left(\omega^{\frac{1}{3}}L^2\sqrt{\log(m)}\right)
\end{aligned}$$

where i and iii hold due to the upper triangle inequality, ii holds due to equation 17, iv holds by Lemma 8.7 in (Allen-Zhu et al., 2019) with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\log(m)\right)}$, and v holds for $\omega\leq\mathcal{O}\left(L^{-6}\log^{-3}(m)\right)$. Then, the desired result follows with probability at least $1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\log(m)\right)}=1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$ by taking a union bound across all t ∈ [n] and l ∈ [L − 1].

Proof. Consider some fixed t ∈ [n], j ∈ [L − 1], and arbitrary perturbation vector ξ, where we omit the subscript ξt to emphasize that the result holds with different perturbations for any t ∈ [n] and both when data is newly encountered or sampled for replay. Assume the neural network weights at initialization W(0) follow the initialization scheme described in Section 5.1. For l ∈ [L], we can define $\Delta^{(0)}_l\triangleq\|h^{(0)}_{(t,l,\xi)}\|_2^2\,/\,\|h^{(0)}_{(t,l-1,\xi)}\|_2^2$. Applying a logarithm (with arbitrary base a > 1) to this definition, we have

$$\log\left(\left\|h^{(0)}_{(t,j,\xi)}\right\|_2^2\right)=\log(\|x_t+\xi\|_2^2)+\sum_{l=0}^{j}\log\left(\Delta^{(0)}_l\right)$$

Given some ε ∈ (0, 1], we can invoke Fact 7.2 and the proof of Lemma 7.1 from (Allen-Zhu et al., 2019) to show that

$$\left|\sum_{l=0}^{j}\log\left(\Delta^{(0)}_l\right)\right|\leq\epsilon$$

with probability at least $1-\mathcal{O}(e^{-\Omega(\epsilon^2 m/L)})$. Thus, we have

$$\log(\|x_t+\xi\|_2^2)-\epsilon\leq\log\left(\left\|h^{(0)}_{(t,j,\xi)}\right\|_2^2\right)\leq\log(\|x_t+\xi\|_2^2)+\epsilon\tag{18}$$

We first expand the upper bound to derive a bound on $\|h^{(0)}_{(t,j,\xi)}\|_2^2$. We begin by exponentiating both sides of the inequality in equation 18 to obtain the following

$$\begin{aligned}
\left\|h^{(0)}_{(t,j,\xi)}\right\|_2^2&\leq a^{\log(\|x_t+\xi\|_2^2)+\epsilon}\\
&=(\|x_t+\xi\|_2^2)\cdot a^\epsilon\\
&\overset{i}{\leq}(\|x_t\|_2^2+\|\xi\|_2^2)\cdot a^\epsilon\\
&\overset{ii}{\leq}(\|x_t\|_2^2+\|\xi\|_2^2)\cdot a\\
&\overset{iii}{=}(1+\|\xi\|_2^2)\cdot a
\end{aligned}$$

where i follows from the upper triangle inequality, ii is implied by the fact that a > 1 and ε ∈ (0, 1], and iii follows from the unit norm assumption on input data. From here, we note that the base a chosen for the logarithm is arbitrary, and that any base greater than one can be chosen. With this in mind, we note that $\lim_{a\to 1^+}(1+\|\xi\|_2^2)\cdot a=1+\|\xi\|_2^2$, which yields the upper bound $\|h^{(0)}_{(t,j,\xi)}\|_2^2\leq 1+\|\xi\|_2^2$.
We can similarly derive a lower bound on $\|h^{(0)}_{(t,j,\xi)}\|_2^2$ as follows, where we begin by exponentiating both sides of equation 18

$$\begin{aligned}
\|h^{(0)}_{(t,j,\xi)}\|_2^2&\geq a^{\log(\|x_t+\xi\|_2^2)-\epsilon}\\
&=(\|x_t+\xi\|_2^2)\cdot a^{-\epsilon}\\
&\overset{i}{\geq}(\|x_t\|_2^2-\|\xi\|_2^2)\cdot a^{-\epsilon}\\
&\overset{ii}{\geq}(1-\|\xi\|_2^2)\cdot a^{-\epsilon}\\
&\overset{iii}{\geq}(1-\|\xi\|_2^2)\cdot\frac{1}{a}
\end{aligned}$$

where i follows from the lower triangle inequality, ii follows from the norm assumption on the data, and iii follows from the fact that ε ∈ (0, 1] and a > 1. Noting that the base a chosen for the logarithm is arbitrary, we have $\lim_{a\to 1^+}(1-\|\xi\|_2^2)\cdot\frac{1}{a}=1-\|\xi\|_2^2$. By invoking both the upper and lower bounds derived above, we end up with $\|h^{(0)}_{(t,j,\xi)}\|_2^2\in\left[1-\|\xi\|_2^2,\,1+\|\xi\|_2^2\right]$ with probability at least $1-\mathcal{O}\left(e^{-\Omega(m/L)}\right)$ due to the fact that ε ∈ (0, 1]. From here, we can take a union bound over all t ∈ [n] and j ∈ [L − 1] to yield the final result with probability $1-\mathcal{O}(nL)e^{-\Omega(m/L)}$.

## E.9 Proof Of Corollary 2

Proof. Consider some fixed t ∈ [n], j ∈ [L − 1], and arbitrary perturbation vector ξ, where we omit the subscript ξt to emphasize that the result holds with different perturbations for any t ∈ [n] and both when data is newly encountered or sampled for replay. Assume the neural network weights at initialization W(0) follow the initialization scheme described in Section 5.1. From here, we take W ∈ B(W(0), ω). If we then assume $\omega\leq\mathcal{O}\left(L^{-\frac{9}{2}}\log^{-3}(m)\right)$, then we have from Lemma 4 that

$$\left\|h_{(t,j,\xi)}-h^{(0)}_{(t,j,\xi)}\right\|_2\leq\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)\tag{19}$$

with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$. Similarly, from Lemma 6, we have

$$\left\|h^{(0)}_{(t,j,\xi)}\right\|_2\leq\sqrt{1+\|\xi\|_2^2}$$

with probability at least $1-e^{-\Omega(m/L)}$. Then, we can use these expressions to derive a bound on $\|h_{(t,j,\xi)}\|_2$ as follows

$$\begin{aligned}
\left\|h_{(t,j,\xi)}-h^{(0)}_{(t,j,\xi)}\right\|_2&\overset{i}{\geq}\left|\left\|h_{(t,j,\xi)}\right\|_2-\left\|h^{(0)}_{(t,j,\xi)}\right\|_2\right|\\
&\overset{ii}{\geq}\left|\left\|h_{(t,j,\xi)}\right\|_2-\sqrt{1+\|\xi\|_2^2}\right|\\
&\geq\|h_{(t,j,\xi)}\|_2-\sqrt{1+\|\xi\|_2^2}
\end{aligned}$$

where i follows from the lower triangle inequality and ii follows from Lemma 6. Then, combining the expression above with equation 19, we derive

$$\|h_{(t,j,\xi)}\|_2\leq\mathcal{O}\left(\omega L^{\frac{5}{2}}\sqrt{(1+\|\xi\|_2^2)\log(m)}\right)+\sqrt{1+\|\xi\|_2^2}=\mathcal{O}\left(\sqrt{1+\|\xi\|_2^2}\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\right)$$

Then, by taking a union bound over all t ∈ [n] and j ∈ [L − 1], we have the desired result with probability at least $1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$.

Proof. Consider some fixed t ∈ [n], l ∈ [L − 1], and arbitrary perturbation vector ξ, where we omit the subscript ξt to emphasize that the result holds with different perturbations for any t ∈ [n] and both when data is newly encountered or sampled for replay. Given W ∈ B(W(0), ω) with W(0) initialized as described in Section 5.1, we have

$$\left\|\nabla_{W_L}f_{\mathbf{W}}(x_t+\xi)\right\|_2\overset{i}{=}\left\|\sqrt{m}\cdot h_{(t,L-1,\xi)}\right\|_2\overset{ii}{\leq}\mathcal{O}\left(\sqrt{m(1+\|\xi\|_2^2)}\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\right)$$

where i holds because $\nabla_{W_L}f_{\mathbf{W}}(x_t+\xi)=\sqrt{m}\cdot h^\top_{(t,L-1,\xi)}$ and ii holds due to Corollary 2 with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$.
For layers l ∈ [L − 1], we have

$$\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)=\sqrt{m}\cdot\left(h_{(t,l-1,\xi)}W_L\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_r\right)D_{(t,l,\xi)}\right)^\top$$

which allows us to show

$$\begin{aligned}
\|\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\|_F&=\sqrt{m}\cdot\left\|h_{(t,l-1,\xi)}W_L\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_r\right)D_{(t,l,\xi)}\right\|_F\\
&\overset{i}{=}\sqrt{m}\cdot\|h_{(t,l-1,\xi)}\|_2\cdot\left\|W_L\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_r\right)D_{(t,l,\xi)}\right\|_2\\
&\overset{ii}{\leq}\mathcal{O}\left(\sqrt{m(1+\|\xi\|_2^2)}\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\right)\cdot\left\|W_L\left(\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_r\right)\right\|_2
\end{aligned}\tag{20}$$

where i holds due to properties of norms and ii holds due to Corollary 2 with probability at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$. Now, we must bound the red term within the expression above to complete the proof

$$\begin{aligned}
\left\|W_L\prod_{r=l+1}^{L-1}D_{(t,r,\xi)}W_r\right\|_2&\overset{i}{\leq}\left\|W_L\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}+D''_{(t,r)}\right)W_r\right\|_2\\
&=\left\|W_L\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}+D''_{(t,r)}\right)W_r\pm W^{(0)}_L\prod_{r=l+1}^{L-1}D^{(0)}_{(t,r,\xi)}W^{(0)}_r\right\|_2\\
&\overset{ii}{\leq}\left\|W_L\prod_{r=l+1}^{L-1}\left(D_{(t,r,\xi)}+D''_{(t,r)}\right)W_r-W^{(0)}_L\prod_{r=l+1}^{L-1}D^{(0)}_{(t,r,\xi)}W^{(0)}_r\right\|_2+\left\|W^{(0)}_L\prod_{r=l+1}^{L-1}D^{(0)}_{(t,r,\xi)}W^{(0)}_r\right\|_2\\
&\overset{iii}{\leq}\mathcal{O}\left(\omega^{\frac{1}{3}}L^2\sqrt{\log(m)}\right)+\mathcal{O}(1)\\
&=\mathcal{O}(1)
\end{aligned}\tag{21}$$

where i holds for some random diagonal matrix $D''_{(t,r)}\in[-1,1]^{m\times m}$ with at most $\mathcal{O}\left(m\omega^{\frac{2}{3}}L\right)$ non-zero entries and ii is due to the upper triangle inequality. iii holds due to Lemma 5 when $\omega\leq\mathcal{O}\left(L^{-6}\log^{-3}(m)\right)$ and Lemma 7.4 in (Allen-Zhu et al., 2019) with probabilities at least $1-e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$ and $1-e^{-\Omega(m/L)}$, respectively. Thus, we have from equation 20 and equation 21

$$\|\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\|_F\leq\mathcal{O}\left(\sqrt{m(1+\|\xi\|_2^2)}\left(\omega L^{\frac{5}{2}}\sqrt{\log(m)}+1\right)\right)\leq\mathcal{O}\left(\sqrt{m(1+\|\xi\|_2^2)}\right)$$

where the final inequality holds given sufficiently small ω. Then, a union bound can be taken over all t ∈ [n] and l ∈ [L − 1] to yield the desired bound on $\|\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\|_F$ with probability at least $1-\mathcal{O}(nL)e^{-\Omega(m/L)}-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}=1-\mathcal{O}(nL)e^{-\Omega\left(m\omega^{\frac{2}{3}}L\right)}$. From here, we translate this result

$$\begin{aligned}
\left\|\nabla_{W_l}L_{(t,\xi)}(\mathbf{W})\right\|_F&=\left\|\ell'\left(y_t\cdot f_{\mathbf{W}}(x_t+\xi)\right)\cdot y_t\cdot\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\right\|_F\\
&=\left|\ell'\left(y_t\cdot f_{\mathbf{W}}(x_t+\xi)\right)\cdot y_t\right|\cdot\left\|\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\right\|_F\\
&\leq\mathcal{O}\left(\sqrt{m(1+\|\xi\|_2^2)}\right)
\end{aligned}$$

where the final inequality is derived by noticing that $|\ell'(y_t\cdot f_{\mathbf{W}}(x_t+\xi))\cdot y_t|\leq 1$ and invoking the bound on $\|\nabla_{W_l}f_{\mathbf{W}}(x_t+\xi)\|_F$ derived above. Thus, we have arrived at the desired result.
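Many of the bounds above rest on the activations approximately preserving the norm of the perturbed input at initialization (Lemma 6 and Corollary 2). A minimal numerical sketch of this effect, assuming a standard He-style N(0, 2/m) initialization in place of the precise scheme of Section 5.1 (which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
m, L = 2048, 12
x = rng.normal(size=m); x /= np.linalg.norm(x)           # unit-norm input x_t
xi = rng.normal(size=m); xi *= 0.3 / np.linalg.norm(xi)  # perturbation with ||xi||_2 = 0.3
h = x + xi
target = h @ h                                           # ||x_t + xi||_2^2
for _ in range(L):
    W = rng.normal(size=(m, m)) * np.sqrt(2.0 / m)       # assumed N(0, 2/m) init
    h = np.maximum(W @ h, 0.0)                           # ReLU layer
print(target, h @ h)  # the squared norm stays close to the target across all L layers
```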
Review 1:
Summary:
The submission presents a novel approach for the continuous training of DNN models in a streaming setting (each data point is seen only once in a sequential stream of data). Extant models heavily rely on a multistage strategy: (a) initialization using pretraining, followed by (b) learning in a streaming setting. Pretraining may induce inductive biases reducing the capability of the model to adapt to new data. To retain the benefits of pretraining, extant models also freeze a majority of network parameters during streaming, reducing their learning capacity. The presented approach proposes a single-phase strategy removing the above constraints, allowing the model to start cold (i.e., without the necessity of pretraining) and all network weights to change during streaming. It relies on the well-known techniques of memory replay and data augmentation, in a simple training setup allowing novel theoretical convergence guarantees. The approach is also tested in a multitask learning setup. The simple approach is shown to be surprisingly effective, demonstrating highly competitive performance against the state of the art on several image classification benchmarks.
Strengths and Weaknesses:
## Strengths
### Approach & Impact
- The approach at its core is simple and follows a standard methodology similar to offline batch training, with the following differences: (a) each data point is seen only once except when in the replay buffer, which is dynamically and simply updated, and (b) each training batch is constructed using a sophisticated data augmentation strategy on the sample comprising the new data and a random sample from the replay buffer. This formulation, while being closer to offline training in spirit, eschews the necessity of pretraining and the freezing of network parameters to retain the gains of pretraining. It bridges the gap between offline and online training, and may also be useful in continuous/never-ending learning setups.
- Incorporation of sophisticated data augmentation techniques is claimed to be the primary driver, and shown to be effective in obtaining good performance in the proposed, otherwise simple, formulation. Data augmentation incorporates necessary inductive biases (invariances and regularization) to reduce the need for training data. It can be easily performed in both the offline and online settings.
### Theoretical soundness
Theoretical results are presented for asymptotic bounds on generalization error under standard assumptions. This follows standard methodologies available in the literature, but involves several steps and requires incorporating different results from the literature.
### Novelty
The novelty is moderate and sufficient for publication. Multitask learning in the streaming setting is introduced, and the approach is evaluated on this task and compared against the SOTA approaches. Significant performance gains are shown.
### Performance
The proposed approach shows competitive performance against the state of the art on the benchmarks. The performance gains appear robust to various experimental settings and data orderings.
### Experimental Evaluation
There is an extensive experimental evaluation of the proposed approach on benchmarks against the SOTA. Several performance settings are considered and an empirical evaluation of design choices is presented.
## Weaknesses:
### Scope of the submission
The manuscript is too large - 16 pages of main paper + 32 pages of Appendix with experiments and proofs. In addition, code is provided.
The paper contains (a) a new approach (with somewhat limited novelty, which is not a concern), (b) an additional, novel task, (c) a large set of experiments and evaluation, and (d) convergence theorems for the new approach backed by 22 pages of technical proofs. This not only makes reviewing hard, but also dilutes the primary takeaways and conclusions.
### Lack of clarity
The section on experiments (Section 6) is confusing because of the large variety of experimental setups and associated variations in hyperparameter choices (data augmentation, memory buffer sizes, buffer stored in memory/on disk …) and reporting metrics (top-1, top-5), leading to difficulty in drawing comparative conclusions about the benefits of different system choices as well as comparative gains over SOTA.
### Proof and Practice
The practical import of the proof for the implemented learning system should be discussed. Even if there is a gap between ML theory and practice, I recommend doing simulations that provide controlled, empirical validation of the theoretical results.
### Overall
While there seems to be merit to the work: (a) a simple streaming model shown to be effective with powerful data augmentation schemes, and (b) a corresponding proof of convergence, it seems to me that the paper tries to take on and present too much – leading to a lack of clarity, careful analysis and discussion, and a convincing and clear articulation of merit.
Requested Changes:
## Recommended changes for acceptance
My primary recommendations are to make the experimental evaluation more focussed along the following lines, to more systematically communicate and support the gains in system performance.
### Comparison with a single, CSSL model (or a proposed set)
It is recommended that a single CSSL model, or an appropriately-tagged set, is identified and a performance graph (with appropriately selected axes) demonstrating the tradeoffs between these and a comparison with other SOTA models is added.
### Data Augmentation
It seems that the primary gains in the proposed approach come from the utilization of sophisticated data augmentation. It'll be instructive to apply the same data augmentation to the main baseline SOTA approaches along with a comparison. Also, kindly add how much extra time is needed to compute the augmentations for each training batch.
### Batch vs streaming
While "streaming learning methodologies … cannot be derived via simple modifications to existing techniques" (B.6), CSSL can be trained in the batch-incremental mode. It will be illustrative to compare the performance of CSSL with iCarl and E2EIL in the batch-incremental mode. (Figure 6)
### Replay Buffer Size (Section 6.1, Figure 3)
Equalizing for replay buffer size allows for different numbers of samples in the memory buffer for different strategies. This confounds the comparative impact of the number of historical samples used in training. Kindly either modify the experiments accordingly or explain if it is difficult to do so.
### Metrics Used
- Some experimental results are reported using Top-1 $\Omega_{all}$ while others use Top-5 $\Omega_{all}$. Kindly incorporate both, or use the same one, or explain the logistical difficulties.
- $\Omega_{all}$ measures the streaming-to-offline performance ratio. It seems that offline performance should be an upper bound on the achievable streaming performance. If so, how do you explain numbers > 1 in Table 4, even when the buffer size doesn't admit the entire data to be stored? A buffer smaller than the entire dataset can indeed be sufficient.
However, the metric and the experimental setup that produce numbers > 1 seem to indicate an inadequacy of both. Kindly explain.
### Nonstationary settings/ System performance in time
Performance curves as a function of time should be added, as they will illustrate the system's behavior in adapting to the non-stationary data distribution.
### Multi-task setting
- (a) The task setup is not clear - more clarity, and even a system architecture, are needed. It would seem that some part of the representation space would be shared between tasks using some backbone architecture. Several different task heads are then attached to the backbone. Is this correct?
- (b) While the data seems to be ordered similarly to the class-incremental setting and hence follows that logic (different y-spaces), different memory buffers are used - why?
## Suggestions for quality improvement
Suggestions below will improve the clarity of the writeup.
- **Data stream settings**: The ordering of the data, for the single and multi-task settings, should be made clearer by using a figure to describe the data schema.
- **(Section 6.1, Figure 3)**: Since the discussion reports performance to the third decimal place (0.974), which is difficult to ascertain from the figure, it is encouraged to add a table to the Appendix.
- **"REMIND + Extra Params"** - kindly point to/add details as to how exactly the architecture is modified to equalize the number of parameters.
- **Typos and Grammar**
- **(Appendix B.3)** "entrties" → entities
- **(Appendix B.4)** rephrase paragraph 1, last line.
- **(Section B.5)** Table B.5 → Table 14.
- **(Appendix B.3)** Why are interpolation-based eviction strategies a mutually exclusive choice with the quantization and resizing strategies in Section 6.4? Why can't the two be done together?
- **(Section 3.2)** Kindly clearly identify where lossy and lossless compression is used, and discuss the impact on training time and on computational and statistical performance, as applicable.
Broader Impact Concerns: None
==================================================
Review 2:
Summary:
The paper presents a simple online continual learning (or streaming learning) method, Cold Start Streaming Learning (CSSL), which combines replay and data augmentation to avoid forgetting. Theoretical convergence guarantees are derived via the Neural Tangent Random Feature (NTRF). Experimental results on baseline methods demonstrate the effectiveness of the proposed method to some extent. Finally, the authors propose a new multi-task streaming learning setting, where CSSL performs favorably well.
Strengths and Weaknesses:
Strengths:
- The paper is well written and easy to understand.
- The theoretical guarantees derived via the NTRF are quite interesting.
Weaknesses:
- The method itself seems incremental, because replay [1-2], data augmentation [3-4], and data compression [5-6] are all widely adopted in the continual learning community. Although not all the aforementioned methods were proposed for online continual learning, many of them could be modified trivially for this setting. Please see [7] for a detailed review of this topic.
- The major benefits of the proposed method are not unique! From a scientific perspective, comparing with only three methods in Section C cannot support such a conclusion. In fact, the benefits are quite common in existing methods: for example, almost all replay-based methods assume full plasticity with optional pre-training. Please refer to [7] and more recent papers on online continual learning for more details.
- The claim that "Fixing network parameters is detrimental to the learning process." is not necessarily true: [8] assumes a frozen pre-trained transformer backbone, and achieves state-of-the-art performance.
- How is the theory related to *catastrophic forgetting*, the key challenge in continual learning?
- For the experiment part, a *lot* of classic and recent work is missing for comparison. Take the following two recent papers on online continual learning (streaming learning) as examples:
1. [9] is a NeurIPS 2022 paper, which discusses a very similar topic to this paper. Therefore, all compared methods in Table 1 of [9] should be taken into consideration.
2. [10] is an ICML 2022 paper, which also focuses on online continual learning, though from a different perspective. Similarly, all compared methods in Table 1 of [10] should be taken into consideration.
Given this, the current comparison results cannot properly justify the effectiveness of the proposed method.
- The so-called multi-task streaming learning setting is not novel; I would rather call it a class-incremental setting with a larger domain gap.

[1] Buzzega, Pietro, et al. "Dark experience for general continual learning: a strong, simple baseline." NeurIPS 2020.
[2] Fini, Enrico, et al. "Online continual learning under extreme memory constraints." ECCV 2020.
[3] Zhu, Fei, et al. "Class-Incremental Learning via Dual Augmentation." NeurIPS 2021.
[4] Qin, Chengwei, and Shafiq Joty. "Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation." arXiv preprint arXiv:2203.02135 (2022).
[5] Caccia, Lucas, et al. "Online learned continual compression with adaptive quantization modules." ICML 2020.
[6] Wang, Liyuan, et al. "Memory Replay with Data Compression for Continual Learning." ICLR 2022.
[7] Mai, Zheda, et al. "Online continual learning in image classification: An empirical survey." Neurocomputing 469 (2022): 28-51.
[8] Wang, Zifeng, et al. "Learning to prompt for continual learning." CVPR 2022.
[9] Zhang, Yaqian, et al. "A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal." NeurIPS 2022.
[10] Guo, Yiduo, Bing Liu, and Dongyan Zhao. "Online continual learning through mutual information maximization." ICML 2022.
Requested Changes:
Major points that are all critical:
- Justify the novelty of the proposed method by including and comparing with all recent similar work (representative papers published in top venues such as ICML, NeurIPS, ICLR, CVPR, etc. in the last two years).
- Justify the uniqueness of the benefits of the proposed method.
- Include the most up-to-date state-of-the-art methods in the experiments. Please see the weaknesses part for more details and add corresponding discussions involving [1-10].
Broader Impact Concerns: N/A
==================================================
Review 3:
Summary:
This paper proposes a streaming learning approach that claims to make all model parameters trainable and does not depend on a beforehand pre-training process. The authors prove the convergence of the proposed approach under certain conditions, and empirically demonstrate its superiority over previous approaches.
Strengths and Weaknesses:
## Strengths
- The empirical results look pretty strong to me, although I'm not familiar with this field and all compared baselines. The proposed CSSL even outperforms previous approaches without pre-training (Table 4).
## Weaknesses
- I would hope that the authors can provide more insights into their improvement.
The main components of CSSL, like the replay buffer and data compression, are not novel. Besides, the "REMIND + Extra Params" baseline, which has the same amount of trainable parameters, still performs worse than CSSL, even without pre-training. So I cannot see the intuition for why it's so much better. I would recommend more control experiments.
- The claim that CSSL does not rely on base initialization is too strong, as it also gains a lot from the pre-training process.
- Some of the analysis looks pretty straightforward, so it may not be worth putting it in the main text: for example, Table 1, which analyses the initialization data for REMIND, and Figure 2, which analyses the effects of frozen parameters. It would be better if the authors could ablate their own proposed method.
- The experimental results only include image classification settings; are there more realistic online/streaming learning scenarios, like recommender systems? It would be better to have support from multiple domains.
Requested Changes:
Please refer to the weaknesses part.
Broader Impact Concerns:
I don't see any potential ethical concern in this submission.
==================================================
# The Minimum-Norm Gauge For Deep Learning

Anonymous authors Paper under double-blind review

## Abstract

Feedforward neural networks with homogeneous activation functions possess a gauge symmetry: the functions they compute do not change when the incoming and outgoing weights at any hidden unit are rescaled by reciprocal positive values. There are other important properties of these networks, however, that are not invariant under such transformations. For example, deep networks with highly unbalanced weights may be slower to train or harder to compare and interpret. We describe a simple procedure for gauge-fixing in these networks; this procedure computes multiplicative rescaling factors—one at each hidden unit—that rebalance the weights of these networks without changing the end-to-end functions that they compute. Specifically, given an initial network with arbitrary weights, the procedure determines the functionally equivalent network whose weights are as small as possible (as measured by their ℓp-norm); this transformed network also has the property that the norms of incoming and outgoing weights at each hidden unit are exactly balanced. The rescaling factors that perform this transformation are found by solving a convex optimization, and we derive simple multiplicative updates that provably converge to its solution. Next we analyze the optimization landscape in these networks and derive conditions under which this minimum-norm solution is preserved during learning. Finally we explore the effects of gauge-fixing on the speed and outcomes of learning by stochastic gradient descent. On multiple problems in classification we find that gauge-fixing leads to faster descent in the regularized log-loss.

## 1 Introduction

Many recent studies of deep learning have focused on the important role of symmetries (Bronstein et al., 2021; Kunin et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). In large part these studies were inspired by the role that symmetries play in our understanding of the physical world (Anderson, 1972; Zee, 2016). Of particular interest, in both physics and machine learning, are so-called *gauge* symmetries (Gross, 1992); these are symmetries that arise when a model is overparameterized—that is, when the model is formulated or expressed in terms of more parameters than its essential degrees of freedom.

One such model in machine learning is a feedforward neural network with rectified linear hidden units. Such a network is specified by the values of its weights, but the function it computes does not change when the incoming and outgoing weights at any hidden unit are inversely rescaled by some positive value (Glorot et al., 2011). This invariance is illustrated in Fig. 1. In this case, the gauge symmetry arises from the freedom to choose an arbitrary rescaling factor at each hidden unit.

This particular symmetry of deep learning has already led to many important findings. For example, it is known that this symmetry gives rise to a conservation law: at each hidden unit, there is a *synaptic balance*—defined as the difference in the sums of squared incoming and outgoing weights—that does not change when networks are trained by gradient flow (i.e., gradient descent in the limit of an infinitesimally small learning rate) (Du et al., 2018).
From this conservation law follows another important observation: if the weights are initially balanced across layers, then they remain so during training, a key condition for proving certain

![1_image_0.png](1_image_0.png)

Figure 1: A rectified linear unit (ReLU) has the same effect if its incoming weights w and bias b are rescaled by some factor a>0 while its outgoing weights w are rescaled by the inverse factor $a^{-1}$.

convergence results (Arora et al., 2019). It is also possible, by analyzing the synaptic flows across adjacent layers, to devise more powerful pruning algorithms (Tanaka et al., 2020). Finally, a number of authors have proposed more sophisticated learning rules that are invariant to these rescaling transformations (Neyshabur et al., 2015a; Meng et al., 2019) or that break this invariance in purposeful ways (Badrinarayanan et al., 2015; Armenta et al., 2021; Zhao et al., 2022). These latter works highlight an important distinction: though the functions computed by deep networks may be invariant to rescaling transformations, there are other important properties of these networks—such as the speed and eventual outcomes of learning—that are not.

Notwithstanding these contributions, as well as progress on many other fronts (Arora et al., 2018; Belkin et al., 2019; Papyan et al., 2020; Zhang et al., 2021a), our theoretical understanding of deep learning has not kept pace with its demand in real-world applications (LeCun et al., 2015). This gap provides the larger context for this work. Large multilayer networks can be fickle to train (Glorot & Bengio, 2010; Sutskever et al., 2013) and hard to interpret (Zhang et al., 2021b). Despite these challenges—which are often magnified for new or unfamiliar tasks—most practitioners are still drawn to highly overparameterized models. In the further development and analysis of these models, there may be no tool more broadly useful than symmetry.

To proceed, we consider how similar ideas have been developed to study the physical world. Perhaps the most familiar gauge symmetry arises in classical electrodynamics (Jackson, 2002). Electric and magnetic fields are often calculated from their corresponding potentials; these are the scalar electric potential Φ(r, t) and the magnetic vector potential A(r, t). When the fields are expressed in terms of these potentials, Maxwell's equations are invariant under the gauge transformation

$$\begin{aligned}\Phi&\leftarrow\Phi-\frac{\partial\chi}{\partial t},\\ \mathbf{A}&\leftarrow\mathbf{A}+\nabla\chi,\end{aligned}\tag{1}$$

where χ(r, t) is any twice-differentiable function of space and time. In practice, this gauge degree of freedom is often *fixed* to simplify or highlight certain physics of interest. For example, one choice is the Coulomb gauge, satisfying ∇·A = 0, which has the property that it minimizes the volume integral of $\|\mathbf{A}\|^2$ (Gubarev et al., 2001). Another choice is the Lorenz gauge, satisfying $\nabla\cdot\mathbf{A}=-\frac{1}{c}\frac{\partial\Phi}{\partial t}$ (where c is the speed of light), which simplifies certain wave equations. It is natural to ask if there are analogous gauge-fixing conditions for the weights of deep neural networks—for example, a rescaling transformation that *minimizes the norm* of the weights, or that *simplifies the dynamics of learning*. We will see that in both cases the answer is yes; in fact, there is a single rescaling transformation that achieves both these objectives. It is also one that is simple to formulate and compute.
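The rescaling invariance of Fig. 1 is easy to verify numerically; a minimal Python sketch with a single ReLU (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), rng.normal()   # incoming weights and bias
v = rng.normal(size=3)                    # outgoing weights
x = rng.normal(size=4)                    # an arbitrary input
a = 2.7                                   # any rescaling factor a > 0

relu = lambda z: np.maximum(z, 0.0)
out1 = v * relu(w @ x + b)                        # original unit
out2 = (v / a) * relu((a * w) @ x + a * b)        # rescaled unit
print(np.allclose(out1, out2))                    # True, by homogeneity of relu
```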
With these goals in mind, this paper investigates a family of norm-minimizing gauge transformations for feedforward networks with rescaling symmetries. These gauge transformations are designed to minimize the ℓp-norm of the weights in a network without changing the end-to-end function that the network computes. (The value of p>0 may be chosen by the practitioner.) A particular gauge transformation is specified by a set of multiplicative rescaling factors, one for each hidden unit of the network. The paper's main technical contribution is to show how to obtain these rescaling factors by solving a problem in convex optimization. More concretely, we derive simple multiplicative updates for performing this optimization, analogous to those for nonnegative matrix factorization (Lee & Seung, 1999; Gillis, 2021), and we prove that these updates converge monotonically to fixed points that represent minimum-norm solutions (Theorem 2.1). Notably, these multiplicative updates are parameter-free in the sense that they do not require the setting or tuning of learning rates.

Norm-minimizing gauge transformations also have the interesting property that they rebalance the weights of a network in a special way. In particular, we obtain the following result (Lemma 2.2): given a network with arbitrary weights, the norm-minimizing gauge transformation is also the gauge transformation that equates the p-norms of incoming and outgoing weights at each hidden unit in the network. The balancing of weights across layers has been a key tool for proving convergence theorems in deep learning (Du et al., 2018; Arora et al., 2019). Thus, a second contribution of this paper is to show that for a large class of multilayer networks, this balance can be restored at any time during the learning process by a well-chosen gauge transformation. In particular, this can be done no matter how the weights of the network were initialized and subsequently adapted.

A final contribution of this paper is to explore the effects of gauge-fixing on the dynamics of learning (and vice-versa). Here we identify a natural pairing of gauge-fixing conditions and regularization penalties for deep learning in networks with homogeneous activation functions. In these networks, we show (Theorem 3.4) that if the gauge and regularizer are appropriately paired (e.g., minimum ℓ2-norm gauge-fixing and ℓ2-norm weight decay), then the gauge-fixing condition is preserved by the dynamics of gradient flow in the regularized loss function. Minimum-norm gauge transformations also provide an experimental probe for the implicit effect of weight decay on learning rates (van Laarhoven, 2017). In practice, deep networks are not trained by gradient flow, so for empirical results, we investigate how gauge-fixing affects the performance of stochastic gradient descent (SGD). On multiple data sets we find that gauge-fixing leads to consistently faster descent in the regularized loss.

## 2 Minimum-Norm Gauge Fixing

In this section we consider how to shrink the weights in feedforward networks with homogeneous activation functions without changing the end-to-end function that these networks compute. Specifically, section 2.1 describes the symmetry group of rescaling transformations in these networks, section 2.2 presents multiplicative updates to optimize over the elements of this group, section 2.3 proves the convergence of these updates to a global minimizer, and section 2.4 illustrates how these updates work in practice.
## 2.1 Preliminaries

Our interest lies in feedforward networks that parameterize vector-valued functions f : ℜ^d → ℜ^r. The following notation will be useful. We denote the indices of a network's input, hidden, and output units, respectively, by I, H, and O, and we order the units so that I = {1, . . . , d}, H = {d+1, . . . , n−r}, and O = {n−r+1, . . . , n}, where n is the total number of units. Let x ∈ ℜ^d denote an input to the network and f(x) ∈ ℜ^r its corresponding output. The mapping x → f(x) is determined by the network's weight matrix W ∈ ℜ^{n×n}, biases b ∈ ℜ^n, and activation functions g_i : ℜ → ℜ at each non-input unit (where i ∈ H ∪ O). The mapping x → f(x) can be computed by the feedforward procedure that sequentially activates all of the units in the network—that is, setting h_i = x_i for the input units, propagating

$$h_i=g_i\Big(\sum_j W_{ij}h_j+b_i\Big)\tag{2}$$

to the remaining units, and setting f_i(x) = h_{n+i−r} according to the i-th output unit. Since the network is feedforward, its weight matrix is strictly lower triangular. For simplicity, we assume that the output units of the network are linear (with g_i(z) = z for all i ∈ O) and unconnected (with W_ij = 0 if j ∈ O and i > j).

A rescaling symmetry arises at each hidden unit of the network whose activation function is positive homogeneous of degree one (Neyshabur et al., 2015a; Dinh et al., 2017)—that is, whose activation function satisfies

$$g_i(az)=ag_i(z)\tag{3}$$

for all a > 0. In this paper we focus on networks of rectified linear hidden units (ReLUs), with the activation function g_i(z) = max(0, z) at all i ∈ H, but the property in eq. (3) is also satisfied by linear units (Saxe et al., 2013), leaky ReLUs (He et al., 2015), and maxout units (Goodfellow et al., 2013). As illustrated in Fig. 1, when a hidden unit's activation function satisfies eq. (3), the function f computed by the network does not change when the unit's incoming weights (including the bias) and the outgoing weights are rescaled by reciprocal amounts. Note that in layered architectures with homogeneous activation functions, the network's overall mapping f will also be a homogeneous function of its parameters (i.e., weights and biases) (Lyu & Li, 2020) with a degree equal to the depth of the network; we refer to such architectures as *homogeneous* networks. Our theoretical results in this paper apply generally to feedforward networks with homogeneous activation functions, but our empirical investigations focus on the special case of layered architectures.

In these networks, a gauge transformation is specified by a set of rescaling factors, one for each hidden unit. We use

$$\mathcal{A}=\{\mathbf{a}\in\Re^n\;|\;a_i>0\;\text{if}\;i\in\mathcal{H},\;a_i=1\;\text{otherwise}\}\tag{4}$$

to denote the set of these gauge transformations. Then under a particular rescaling, represented by some a ∈ A, the network's weights and biases are transformed multiplicatively as

$$W_{ij}\leftarrow W_{ij}\,(a_i/a_j),\tag{5}$$
$$b_i\leftarrow b_i\,a_i.\tag{6}$$

It may seem redundant in eq. (4) to introduce rescaling factors at non-hidden units only to constrain them to be equal to one. With this notation, however, we can express the transformations in eqs. (5–6) without distinguishing between the network's different types of units.
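A short numpy sketch makes eqs. (2) and (5–6) concrete; the network below is a small random instance with illustrative sizes, and the two printed outputs agree to machine precision:

```python
import numpy as np

def forward(W, b, hidden, x):
    """Feedforward pass of eq. (2); units 0..n-1 are topologically ordered,
    with inputs first, then ReLU hidden units, then linear output units."""
    n, d = W.shape[0], len(x)
    h = np.zeros(n)
    h[:d] = x
    for i in range(d, n):
        z = W[i, :i] @ h[:i] + b[i]
        h[i] = max(z, 0.0) if i in hidden else z
    return h

rng = np.random.default_rng(0)
d, n = 3, 8                                  # 3 inputs, 4 hidden units, 1 output
hidden = set(range(d, n - 1))
W = np.tril(rng.normal(size=(n, n)), k=-1)   # strictly lower-triangular weights
b = rng.normal(size=n)

# gauge transformation of eqs. (5-6): a_i > 0 at hidden units, a_i = 1 elsewhere
a = np.ones(n)
for i in hidden:
    a[i] = rng.uniform(0.5, 2.0)
Wg, bg = W * a[:, None] / a[None, :], b * a

x = rng.normal(size=d)
print(forward(W, b, hidden, x)[-1], forward(Wg, bg, hidden, x)[-1])  # identical outputs
```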
As shorthand, we write (W′, b′) ∼ (W, b) to denote that two networks are equivalent up to some (unspecified) rescaling, and we write (W′, b′) $\overset{\mathbf{a}}{\sim}$ (W, b) to denote that they are equivalent up to the particular rescaling a ∈ A.

## 2.2 Multiplicative Updates

We can now state the first main result of this paper; it is a solution to the following problem. Let p > 0, and let

$$\|\mathbf{W}\|_{p,p}=\left(\sum_{ij}|W_{ij}|^p\right)^{1/p}\tag{7}$$

denote the entry-wise p-norm of the weight matrix W. Given an initial weight matrix W0, we consider how to compute the gauge transformation in eqs. (5–6) that minimizes the p-norm in eq. (7). Algorithm 1 describes an iterative procedure to compute this *norm-minimizing* gauge transformation via a sequence of multiplicative updates of the form in eqs. (5–6). For each update, the key step is to compute the rescaling factor a_i at each hidden unit i ∈ H from a ratio comparing the magnitudes of its incoming and outgoing weights; this step is derived later in eq. (12). Our first main result is contained in the following theorem:

**Theorem 2.1** (Convergence of Multiplicative Updates). *For all* W0 ∈ ℜ^{n×n} *and* b0 ∈ ℜ^n*, the multiplicative updates in Algorithm 1 converge to a global minimizer of the entry-wise p-norm*

$$\operatorname*{argmin}_{\mathbf{W},\mathbf{b}}\|\mathbf{W}\|_{p,p}\quad\text{such that}\quad(\mathbf{W},\mathbf{b})\sim(\mathbf{W}_0,\mathbf{b}_0).\tag{8}$$

*In addition, the intermediate solutions from these updates yield monotonically decreasing values for these p-norms.*

**Algorithm 1** Procedure to compute a *norm-minimizing* gauge transformation for a network with weights W0 ∈ ℜ^{n×n} and biases b0 ∈ ℜ^n. The procedure computes rescaled weights W and biases b via a sequence of multiplicative updates that minimize the sum $\sum_{ij}|W_{ij}|^p$ up to some tolerance δ > 0. The network's hidden units are specified by the set H. The key step per update (shaded) is to compute the rescaling factor a_i at each hidden unit i ∈ H.

procedure (W, b) = MinNorm(W0, b0, H, p, δ)
  W ← W0, b ← b0 ▷ *Initialize the rescaled weights and biases.*
  repeat
    for all i ∈ H do ▷ *Compute the rescaling factor at each hidden unit.*
      $a_i\leftarrow\left(\sum_{j>i}|W_{ji}|^p\,\big/\,\sum_{j<i}|W_{ij}|^p\right)^{\frac{1}{4p}}$
    for all i ∈ H do ▷ *Rescale the biases.*
      b_i ← b_i a_i
    for all i and all j < i do ▷ *Rescale the weights, with a_i = 1 at non-hidden units.*
      W_ij ← W_ij (a_i/a_j)
  until max_i |a_i − 1| < δ ▷ *Iterate until convergence.*

Before proving the theorem, it is worth contrasting the goals and guarantees of this gauge-fixing procedure versus those of typical learning algorithms for deep networks. Most importantly, the updates in Algorithm 1 do not constitute a learning algorithm; they only optimize over the set A of gauge transformations in eq. (4), and they do not change the end-to-end functions of the networks to which they are applied. The main content of the theorem is that the optimization in eq. (8) is considerably more tractable than the problem of learning, and as a result, the multiplicative updates for gauge fixing have much stronger guarantees than (say) learning via back-propagated gradients. In particular, these updates converge monotonically to a global minimizer of the p-norm ∥W∥p,p, and they do not require the tuning of any hyperparameters, such as learning rates.
One might hope, therefore, that such gauge-fixing procedures—wherever beneficial—could piggyback on top of existing learning algorithms without much extra cost. We explore these ideas further in subsequent sections, but for now, we take up the task of proving Theorem 2.1.

## 2.3 Proof Of Convergence

Theorem 2.1 is proved by showing that the multiplicative updates in Algorithm 1 satisfy the preconditions for Meyer's convergence theorem (Meyer, 1976). Meyer's result itself builds on the convergence theory of Zangwill (1969). Our tools are similar to those that have been used to derive the convergence of other multiplicative updates with nonnegativity constraints (Lee & Seung, 1999; Saul et al., 2003; Sha et al., 2007) as well as more general iterative procedures in statistical learning (Dempster et al., 1977; Wu, 1983; Yuille & Rangarajan, 2003; Gunawardana & Byrne, 2005; Sriperumbudur & Lanckriet, 2012). At the same time, we have needed to develop some more specialized machinery, not only for the specific updates in Algorithm 1, but also to prove convergence to a global (as opposed to local) minimizer.

We prove the theorem via a sequence of lemmas. Our first lemma answers the following question: given a fixed weight matrix W, when is it possible (or more precisely, when *isn't* it possible) to find a matrix W′ ∼ W such that ∥W′∥p,p < ∥W∥p,p?

**Lemma 2.2** (Weight Balancedness). *Suppose that the p-norms of incoming and outgoing weights at each hidden unit are equal: namely,*

$$\sum_j|W_{ij}|^p=\sum_j|W_{ji}|^p\tag{9}$$

*for all* i ∈ H. *Then there exists no weight matrix* W′ ∼ W *such that* ∥W′∥p,p < ∥W∥p,p.

Proof. Suppose that W′ $\overset{\mathbf{a}}{\sim}$ W, and consider how $\|\mathbf{W}'\|_{p,p}^p$ depends on the gauge transformation a ∈ A. This dependence is captured (up to a multiplicative constant) by the continuous function F : A → ℜ, where

$$F(\mathbf{a})=\frac{1}{p}\sum_{ij}|W_{ij}|^p(a_i/a_j)^p.\tag{10}$$

Note that minimizing F over A is tantamount to a convex optimization in the transformed variable log a = (log a_1, log a_2, ..., log a_n); therefore, any stationary point of F corresponds to a global minimum of F. The partial derivatives of F are given by

$$\frac{\partial F}{\partial a_i}=\frac{1}{a_i}\left[\sum_j|W_{ij}|^p(a_i/a_j)^p-\sum_j|W_{ji}|^p(a_j/a_i)^p\right].\tag{11}$$

Now suppose that $\sum_j|W_{ij}|^p=\sum_j|W_{ji}|^p$ as in eq. (9). Then from eq. (11), it follows that a global minimum of F is obtained by the *identity* gauge transformation—that is, the gauge transformation with a = 1 where 1 ∈ ℜ^n is the vector of all ones. But if a global minimum occurs at a = 1, then by definition there exists no a′ ∈ A such that F(a′) < F(1), or equivalently, there exists no W′ ∼ W such that ∥W′∥p,p < ∥W∥p,p.

The previous lemma shows that the balancedness of weights implies the minimality of their norm. The next lemma considers the effect of a single rescaling transformation in the gauge-fixing procedure of Algorithm 1. Here we prove that each update reduces the entry-wise p-norm of the weight matrix.

**Lemma 2.3** (Monotone Improvement). *Let* F : A → ℜ *be defined as in eq. (10), and let* ã ∈ A *be the vector with elements*

$$\tilde{a}_i=\left(\frac{\sum_{j>i}|W_{ji}|^p}{\sum_{j<i}|W_{ij}|^p}\right)^{\frac{1}{4p}}\tag{12}$$

*for* i ∈ H *and* ã_i = 1 *otherwise. Then* F(ã) ≤ F(1)*, and the inequality is strict unless* F(1) *is a global minimum with* ã = 1.

Proof.
The proof is based on constructing an auxiliary function, as in the derivations of the Expectation-Maximization algorithm (Dempster et al., 1977), nonnegative matrix factorization (Lee & Seung, 2000), and the convex-concave procedure (Yuille & Rangarajan, 2003). Here we define the auxiliary function G : A → ℜ by

$$G(\mathbf{a})=\frac{1}{2p}\sum_{ij}|W_{ij}|^p\left(a_i^{2p}+a_j^{-2p}\right).\tag{13}$$

Note that $(a_i/a_j)^p\leq\frac{1}{2}\left(a_i^{2p}+a_j^{-2p}\right)$ from the inequality of arithmetic and geometric means. Hence it follows from eq. (10) that

$$F(\mathbf{a})\leq G(\mathbf{a})\tag{14}$$

for all a ∈ A. The partial derivatives of this auxiliary function are given by

$$\frac{\partial G}{\partial a_i}=\sum_j|W_{ij}|^p a_i^{2p-1}-\sum_j|W_{ji}|^p a_i^{-2p-1},\tag{15}$$

and for i ∈ H they vanish at ã ∈ A where the elements ã_i are defined by eq. (12). We claim that this stationary point at ã is the unique global minimizer of G. To see this, note that minimizing G over A is tantamount to a convex optimization in the variable $\mathbf{a}^p=(a_1^p,a_2^p,\dots,a_n^p)$; moreover, this optimization is strongly convex because $\sum_j|W_{ij}|^p>0$ and $\sum_j|W_{ji}|^p>0$ for all i ∈ H. (The first condition is necessary for a hidden unit to respond to the network's inputs, and the second is necessary for it to influence the network's outputs; if either fails, then the hidden unit has no effect on the network's input-output mapping, a situation we can exclude without loss of generality.) From this property of strong convexity, it follows that ã is the unique global minimizer. Combining this observation with eq. (14), we have

$$F(\tilde{\mathbf{a}})\leq G(\tilde{\mathbf{a}})\leq G(\mathbf{1})=F(\mathbf{1}),\tag{16}$$

which proves the first part of the lemma. Now if ã ≠ 1, then G(ã) < G(1) (since G has a unique global minimizer) and by extension F(ã) < F(1). To prove the second part of the lemma, we suppose that ã = 1, or equivalently that G(1) is the minimum of G. Then the partial derivatives in eq. (15) must vanish at a = 1, implying that $\sum_j|W_{ij}|^p=\sum_j|W_{ji}|^p$ for all i ∈ H. But this is exactly the condition from the previous lemma—namely, that the p-norms of incoming and outgoing weights are exactly balanced—and this exact balancing occurs only when F(1) is a global minimum.

Lemma 2.3 shows that each multiplicative update in Algorithm 1 serves to decrease the entry-wise p-norm ∥W∥p,p; it also rules out oscillations between distinct global minima. But this by itself is not enough to prove that the updates converge to a finite (i.e., bounded) solution. For this we also need the following lemma.

**Lemma 2.4** (Compactness of sublevel sets). *Let* F : A → ℜ *be defined as in eq. (10). Then the sublevel set given by* F1 = {a ∈ A | F(a) ≤ F(1)} *is compact.*

Proof. It follows from the continuity of F that its sublevel sets are closed; thus it remains only to show that F1 is bounded. At a high level, this result will follow from the fact that the network has bounded depth. In particular, suppose a ∈ F1 with F(a) ≤ F(1). Then if W_ij ≠ 0, it must be the case that

$$\frac{a_i}{a_j}\leq\frac{\|\mathbf{W}\|_{p,p}^p}{|W_{ij}|^p},\tag{17}$$

because otherwise the ij-th term of the sum in eq. (10) would by itself exceed F(1).
Let $j_0\to j_1\to\cdots\to j_m$ denote an m-step path through the network that starts at some input unit (j_0 ∈ I), passes through the i-th hidden unit after k steps (so that j_k = i), ends at some output unit (j_m ∈ O), and traverses only nonzero weights $W_{j_\ell j_{\ell-1}}\neq 0$ in the process. Note that there must exist at least one such path if the i-th hidden unit contributes in some way to the function computed by the network. Since $a_{j_0}=1$ and $a_{j_k}=a_i$, it follows that

$$a_i=\frac{a_{j_k}}{a_{j_0}}=\prod_{\ell=1}^{k}\frac{a_{j_\ell}}{a_{j_{\ell-1}}}\leq\prod_{\ell=1}^{k}\frac{\|\mathbf{W}\|_{p,p}^p}{|W_{j_\ell j_{\ell-1}}|^p},\tag{18}$$

where the inequality follows from eq. (17). Likewise, since $a_{j_m}=1$ and $a_{j_k}=a_i$, it follows that

$$\frac{1}{a_i}=\frac{a_{j_m}}{a_{j_k}}=\prod_{\ell=k+1}^{m}\frac{a_{j_\ell}}{a_{j_{\ell-1}}}\leq\prod_{\ell=k+1}^{m}\frac{\|\mathbf{W}\|_{p,p}^p}{|W_{j_\ell j_{\ell-1}}|^p}.\tag{19}$$

Eqs. (18–19) provide upper and lower bounds on a_i for all a ∈ F1. Thus F1 is closed and bounded, hence compact.

With these results from the previous two lemmas, we can now prove Theorem 2.1.

Proof of Theorem 2.1. It follows from Lemma 2.4 that the set C = {W | W ∼ W0, ∥W∥p,p ≤ ∥W0∥p,p} is compact. Likewise it follows from Lemma 2.3 that starting from W0, each multiplicative update yields a weight matrix W ∈ C whose p-norm is less than or equal to the previous one (with equality occurring only when W is both a global minimizer and a fixed point of the updates). Finally we note that the multiplicative coefficients in eq. (12) are a continuous function of the weights from which they are derived. The procedure in Algorithm 1 therefore satisfies the preconditions of compactness, strict monotonicity, and continuity for Meyer's convergence theorem (Meyer, 1976) in the setting where fixed points occur at (and only at) global minima of ∥W∥p,p in C.

![7_image_0.png](7_image_0.png)

Figure 2: Convergence of gauge-fixing multiplicative updates in three randomly initialized networks with differing numbers of hidden layers but the same overall numbers of input, hidden, and output units. The panels show the results, respectively, for updates that minimize the ℓ2-norm (*top*) and ℓ1-norm (*bottom*) of the weights. Likewise they plot the value of the objective function (*left*) and the maximum degree of imbalance (*right*) in eq. (20) across all hidden units.

## 2.4 Demonstration Of Convergence

Fig. 2 plots the convergence of the multiplicative updates (with p = 1, 2) for three randomly initialized networks with differing numbers of hidden layers but the same overall numbers of input (200), hidden (3750), and output (10) units. From shallowest to deepest, the networks had 200-2500-1250-10 units, 200-2000-1000-500-250-10 units, and 200-1000-750-750-500-500-250-10 units. The networks were initialized with zero-valued biases and zero-mean Gaussian random weights whose variances were inversely proportional to the fan-in at each unit (He et al., 2015).
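This setup is straightforward to reproduce. The sketch below implements Algorithm 1 in numpy and applies it to a scaled-down version of the shallowest network (one tenth the layer widths, to keep the sketch fast); the imbalance it reports is the measure γ defined in eq. (20) just below. Variable names and the reduced sizes are our own choices.

```python
import numpy as np

def min_norm(W0, b0, hidden, p=2.0, delta=1e-8, max_iters=10000):
    """Algorithm 1: rescale (W0, b0) to minimize sum_ij |W_ij|^p without changing
    the network's function (eqs. 5-6). Assumes every hidden unit has nonzero
    incoming and outgoing weights."""
    W, b = W0.astype(float).copy(), b0.astype(float).copy()
    a = np.ones(W.shape[0])                     # a_i = 1 at non-hidden units
    for _ in range(max_iters):
        for i in hidden:                        # key step: eq. (12)
            inc = np.sum(np.abs(W[i, :i]) ** p)     # incoming weights (row i)
            out = np.sum(np.abs(W[i+1:, i]) ** p)   # outgoing weights (column i)
            a[i] = (out / inc) ** (1.0 / (4.0 * p))
        W *= a[:, None] / a[None, :]            # W_ij <- W_ij a_i/a_j
        b *= a                                  # b_i  <- b_i a_i
        if np.max(np.abs(a[hidden] - 1.0)) < delta:
            break
    return W, b

# a layered 20-250-125-10 ReLU network packed into one strictly
# lower-triangular weight matrix, with He-style fan-in scaling
sizes = [20, 250, 125, 10]
edges = np.cumsum([0] + sizes)
n = edges[-1]
rng = np.random.default_rng(0)
W = np.zeros((n, n))
for l in range(1, len(sizes)):
    W[edges[l]:edges[l+1], edges[l-1]:edges[l]] = (
        rng.normal(size=(sizes[l], sizes[l-1])) * np.sqrt(2.0 / sizes[l-1]))
b = np.zeros(n)
hidden = np.arange(sizes[0], n - sizes[-1])

def imbalance(W, hidden, p=2.0):                # the measure gamma of eq. (20) below
    inc = (np.abs(W) ** p).sum(axis=1)
    out = (np.abs(W) ** p).sum(axis=0)
    return np.max((inc - out)[hidden] / (inc + out)[hidden])

print((np.abs(W)**2).sum(), imbalance(W, hidden))    # before gauge-fixing
W2, b2 = min_norm(W, b, hidden)
print((np.abs(W2)**2).sum(), imbalance(W2, hidden))  # smaller norm, gamma near 0
```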
The left panels of the figure plot the value of $\|\mathbf{W}\|_{p,p}^{p}$, and the right panels plot the maximal imbalance γ ∈ [0, 1] across all hidden units, computed as

$$\gamma=\max_{i\in{\cal H}}\left[\frac{\sum_{j}(|W_{ij}|^{p}-|W_{ji}|^{p})}{\sum_{j}(|W_{ij}|^{p}+|W_{ji}|^{p})}\right].\tag{20}$$

As expected, the updates take longer to converge in deeper networks (where imbalances must propagate through more layers), but in general a high degree of convergence is obtained for a modest number of updates. The panels show that conventionally initialized networks are (i) far from minimal as measured by the ℓp-norm of their weights and (ii) easily rebalanced by a sequence of rescaling transformations. Finally we note that the results in Fig. 2 did not depend sensitively on the value of the random seed used to generate them (though they might in networks with much smaller numbers of weights).

## 3 Minimum-Norm Learning With Gradient Flow

The rescaling symmetry in Fig. 1 also has important consequences for learning (Kunin et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021; Neyshabur et al., 2015a; Meng et al., 2019; Badrinarayanan et al., 2015; Armenta et al., 2021; Zhao et al., 2022). In this section we examine the conditions for learning under which the incoming and outgoing weights at each hidden unit remain balanced, as defined in Lemma 2.2; equivalently, these are the conditions for learning under which the entry-wise p-norm of the weight matrix remains minimal with respect to rescaling transformations. Our interest lies mainly in the following question: are there conditions such that the gauge-fixing procedure of the last section can be *one and done* at the outset of learning? To answer this question, we must understand whether learning and gauge-fixing are complementary procedures or whether they are operating in some way at cross-purposes (e.g., the former undoing the latter).

There are many forms of learning in deep networks. Perhaps the simplest to analyze is gradient flow (Elkabetz & Cohen, 2021), where a network's weights and biases are adapted in continuous time to decrease its loss function. Gradient flow provides a reasonable approximation to the behavior of deep networks with small learning rates. In this section we analyze how the entry-wise p-norm of the weight matrix evolves under gradient flow. Section 3.1 analyzes this evolution for an unregularized loss function (i.e., no weight decay), while Section 3.2 does the same for a regularized one. Our main result (Theorem 3.4) is to derive a regularized gradient flow under which the entry-wise p-norm of the weight matrix remains minimal with respect to rescaling transformations; this is done for any p > 0 and for any amount of regularization.

## 3.1 Unregularized Flows

As in the previous section, we focus on feedforward networks with homogeneous activation functions. Let C(y, f(x)) denote the cost when a network's actual output f(x) is evaluated against some reference output y. Suppose further that the network is trained to minimize the average empirical loss

$$L=\frac{1}{T}\sum_{t=1}^{T}C(\mathbf{y}_{t},\mathbf{f}(\mathbf{x}_{t})),\tag{21}$$

on some training set $\{(\mathbf{x}_{t},\mathbf{y}_{t})\}_{t=1}^{T}$ of labeled examples. Note that while the loss in eq. (21) may depend in a complicated way on the network's weights and biases, it is necessarily invariant to the rescaling transformations of these parameters in eqs. (5–6). Many authors have observed that the rescaling symmetry in Fig.
1 gives rise to a conservation law when these networks are trained by gradient flow (Du et al., 2018; Kunin et al., 2021; Bronstein et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). The connection between symmetries and conservation laws is well known from Noether's theorem (Noether, 1918), but it is worth noting that the dynamics of gradient flow were not historically derived from a Lagrangian. Some recent works, however, have explored the connection to the dynamics of damped Lagrangian systems (Wibisono et al., 2016; Tanaka & Kunin, 2021).

In this work we will consider a slightly generalized family of gradient flows in which the learning rate of each parameter is modulated by its magnitude. In particular, we suppose that

$$\dot{W}_{ij}=-|W_{ij}|^{2-p}\cdot\frac{\partial L}{\partial W_{ij}},\tag{22}$$
$$\dot{b}_{i}\;=-|b_{i}|^{2-p}\;\cdot\;\frac{\partial L}{\partial b_{i}},\tag{23}$$

where the additional modulating terms are the magnitude-dependent factors $|W_{ij}|^{2-p}$ and $|b_{i}|^{2-p}$. It is easy to see that the standard form of gradient descent is recovered for p = 2. This small bit of extra generality will be all that is needed for us to obtain results for the evolution of the entry-wise matrix p-norm ∥W∥p,p, where p > 0. It also leads to our first result of this section.

Lemma 3.1 (Conservation Law for Gradient Flow). *Consider a feedforward network with homogeneous activation functions. At each hidden unit i ∈ H, let*

$$\Delta_{i}=|b_{i}|^{p}+\sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}\tag{24}$$

*measure the difference in norms of incoming and outgoing weights (including the bias) for some p > 0. Then this difference is a constant of the motion under the gradient flow in eqs. (21–23).*

Proof. The proof is an extension of previous results for the special case p = 2 (Du et al., 2018; Kunin et al., 2021; Bronstein et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). Differentiating ∆i in eq. (24) with respect to time, we find:

$$\dot{\Delta}_{i}=\dot{\mathbf{b}}\cdot\frac{\partial\Delta_{i}}{\partial\mathbf{b}}+\dot{\mathbf{W}}\cdot\frac{\partial\Delta_{i}}{\partial\mathbf{W}},\tag{25}$$
$$=\dot{b}_{i}\frac{\partial\Delta_{i}}{\partial b_{i}}+\sum_{j}\left[\dot{W}_{ij}\frac{\partial\Delta_{i}}{\partial W_{ij}}+\dot{W}_{ji}\frac{\partial\Delta_{i}}{\partial W_{ji}}\right],\tag{26}$$
$$=\dot{b}_{i}\frac{p|b_{i}|^{p}}{b_{i}}+\sum_{j}\left[\dot{W}_{ij}\frac{p|W_{ij}|^{p}}{W_{ij}}-\dot{W}_{ji}\frac{p|W_{ji}|^{p}}{W_{ji}}\right].\tag{27}$$

Next we use the gradient flow in eqs. (22–23) to replace the time-derivatives in eq. (27) of the network's weights and biases:

$$\dot{\Delta}_{i}=-p\,\frac{\partial L}{\partial b_{i}}b_{i}-p\sum_{j}\left[\frac{\partial L}{\partial W_{ij}}W_{ij}-\frac{\partial L}{\partial W_{ji}}W_{ji}\right],\tag{28}$$
$$=-p\left[\frac{\partial L}{\partial\mathbf{b}}\cdot\frac{d\mathbf{b}}{da_{i}}+\frac{\partial L}{\partial\mathbf{W}}\cdot\frac{d\mathbf{W}}{da_{i}}\right]\bigg{|}_{\mathbf{a}=\mathbf{1}},\tag{29}$$
$$=-p\left[\frac{dL}{da_{i}}\right]\bigg{|}_{\mathbf{a}=\mathbf{1}},\tag{30}$$
$$=0.\tag{31}$$

Here, the final steps follow from the form of the rescaling transformations in eqs. (5–6) and the invariance of the loss in eq. (21) to these transformations.

The conservation law in Lemma 3.1 was derived from the gradient flows in eqs. (22–23). To proceed, we focus on networks in which the biases at all hidden units are **frozen at zero**. By this we simply mean that for all i ∈ H, we set $\dot{b}_{i}=b_{i}=0$ in place of the flow of eq.
(23). Doing so, we obtain the following result as an immediate but important corollary.

Corollary 3.2. *Consider a feedforward network with homogeneous activation functions whose biases at all hidden units are frozen at zero. Then in place of eq. (24), we obtain the conserved quantities*

$$\Delta_{i}\ =\ \sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}.\tag{32}$$

This corollary has important implications for minimum-norm gauge-fixing. The mathematical connection is the following: the difference that appears in eq. (32) as a conserved quantity is also the partial derivative that appears in eq. (11) for minimizing the p-norm of the network's weight matrix. Put another way, the network's symmetry group of rescaling transformations gives rise to multiple constants of the motion—one per hidden unit—and it is precisely when all these constants of the motion *vanish* that the network's weight matrix has a minimal entry-wise p-norm with respect to these transformations. From this observation we obtain the following theorem.

Theorem 3.3 (Minimality-Preserving Flows). *Suppose that the weight matrix W has been initialized and/or rescaled by the gauge-fixing procedure of Algorithm 1 such that ∥W∥p,p cannot be further minimized by any rescaling transformation. Then this property of minimality is preserved by the gradient flow in eqs. (21–23) if in addition the biases at all hidden units are frozen at zero.*

Proof. When a hidden unit has zero bias (i.e., bi = 0), the conserved quantity in eq. (24) reduces to the simple difference $\Delta_{i}=\sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}$ in eq. (32). Since ∥W∥p,p cannot be further minimized, the partial derivative in eq. (11) must vanish at the identity gauge transformation a = 1. It follows that $\sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}=0$ at each hidden unit i ∈ H. But this quantity is conserved by the gradient flow in eqs. (21–23) subject to the additional constraint $b_{i}=\dot{b}_{i}=0$. It follows that the weights at future times must also satisfy $\sum_{j}|W_{ij}|^{p}=\sum_{j}|W_{ji}|^{p}$, and thus by Lemma 2.2, the minimality property is preserved.

The significance of Theorem 3.3 is that it establishes relatively mild conditions under which the network's weight matrix W never leaves the submanifold of weight space in which its entry-wise p-norm ∥W∥p,p is minimized with respect to rescaling transformations. The first of these conditions is that the hidden (but not the output) units have zero biases, and the second is that the learning rates of weights and biases are modulated by a power of their magnitudes. It is interesting that the minimality-preserving flows in Theorem 3.3 provide independent motivation for these practices, both of which have been previously studied for different reasons. It has been noted, for example, that zero-valued biases are required for rectified-linear units to learn *intensity-equivariant* representations of sensory inputs (Hinton et al., 2011; Mohan et al., 2020); these are representations in which the hidden-layer activations scale in proportion to the intensity of visual or auditory signals. Such networks also have certain margin-maximizing properties when they are trained by gradient descent (Lyu & Li, 2020). Additionally, we note that for p = 1, the modulated gradient flow in eqs. (22–23) approximates a discrete update that is additive in the log domain; in such an update, parameters of fixed sign are multiplied by the elements of an exponentiated gradient. Similar updates have been studied in many different contexts (Kivinen & Warmuth, 1997; Arora et al., 2012; Bernstein et al., 2020).
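As a rough illustration, the modulated flow in eqs. (22–23) can be discretized with a plain Euler step; the helper below is our own sketch, not an update rule stated in the paper.

```python
import numpy as np

def modulated_step(param, grad, lr, p):
    # One Euler step of the modulated gradient flow in eqs. (22-23): each
    # parameter's learning rate is scaled by its magnitude to the power (2-p).
    # Ordinary gradient descent is recovered at p = 2, and for p < 2 a
    # parameter frozen at zero stays at zero, since |0|^(2-p) = 0.
    return param - lr * np.abs(param) ** (2 - p) * grad
```

Under Theorem 3.3, a step of this form would be applied to all weights and to the output-unit biases only, with the hidden-unit biases held at zero.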
## 3.2 Regularized Flows

Our goal in this section is to generalize Theorem 3.3 to learning in the presence of a regularizer. Regularization is a common practice to avoid overfitting of the training data by large networks (Goodfellow et al., 2016). In this paper we consider regularized losses of the form

$$L_{\lambda}\ =\ \frac{1}{T}\sum_{t}C(\mathbf{y}_{t},\mathbf{f}(\mathbf{x}_{t}))+\frac{\lambda}{p}\|\mathbf{W}\|_{p,p}^{p}\tag{33}$$

with λ > 0. Regularizers are designed to prevent overfitting by penalizing large weights. But if regularizers were originally introduced for this reason, it is now widely appreciated that they also serve other purposes. It has been observed, for example, that regularizers help to learn better models on the *training* data (Krizhevsky et al., 2012), suggesting that smaller weights lead to better behaved gradients. Likewise, it has been observed that highly unbalanced weights lead to much slower training in homogeneous networks; the reason is that partial derivatives such as ∂L/∂Wij scale *inversely* as the weights under rescaling transformations (Neyshabur et al., 2015a; Dinh et al., 2017). More generally, it has been argued (van Laarhoven, 2017) that "by decreasing the scale of the weights, weight decay increases the effective learning rate" and that "if no regularization is used the weights can grow unbounded, and the effective learning rate goes to 0."

These observations suggest a natural pairing of regularization and minimum-norm gauge-fixing. In particular, if regularization has purely dynamical benefits—if (say) smaller or more balanced weights lead to faster learning—then one might expect similar benefits by minimizing the norm of the weights with respect to rescaling transformations. Our final theorem examines the interplay between regularization and gauge-fixing during learning. Intuitively, it states that if the regularizer and gauge-fixing condition are appropriately paired, then the minimality of the weight matrix is also preserved under gradient flow in the *regularized* loss function of eq. (33). Specifically, we consider the dynamics

$$\dot{W}_{ij}=-|W_{ij}|^{2-p}\cdot\frac{\partial L_{\lambda}}{\partial W_{ij}},\tag{34}$$
$$\dot{b}_{i}\;=-|b_{i}|^{2-p}\;\cdot\;\frac{\partial L_{\lambda}}{\partial b_{i}}\quad\text{for}\quad i\in\mathcal{O},\tag{35}$$

where the biases at all hidden units are frozen at zero (i.e., $b_{i}=\dot{b}_{i}=0$ for all i ∈ H). With this flow we obtain the following result.

Theorem 3.4 (Pairing Theorem). *Suppose that the weight matrix W has been initialized and/or rescaled by the gauge-fixing procedure of Algorithm 1 such that ∥W∥p,p cannot be further minimized. Then this property of minimality is preserved by the regularized gradient flow in eqs. (34–35); specifically this is the flow (for any λ > 0) that pairs the gauge-fixing condition with the corresponding entry-wise p-norm regularizer.*

Proof. Again we proceed by differentiating the difference $\Delta_{i}=\sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}$ with respect to time. Since the hidden-unit biases are frozen at zero, we have

$$\dot{\Delta}_{i}=\dot{\mathbf{W}}\cdot\frac{\partial\Delta_{i}}{\partial\mathbf{W}}=\sum_{j}\left[\dot{W}_{ij}\frac{\partial\Delta_{i}}{\partial W_{ij}}+\dot{W}_{ji}\frac{\partial\Delta_{i}}{\partial W_{ji}}\right].\tag{36}$$

Next write Lλ = L0 + λR where L0 is the unregularized loss in eq. (21) and $R=\frac{1}{p}\|\mathbf{W}\|_{p,p}^{p}$. Note that the gradient flow for $\dot{\mathbf{W}}$ has two components, one generated by the gradient of L0, the other by the gradient of the regularizer, R.
We have already shown, via Theorem 3.3, that ∆i is unchanged by the component from L0. It therefore suffices to consider only the component from R. From this component, we have

$$\dot{\Delta}_{i}=-\lambda\sum_{j}\left[|W_{ij}|^{2-p}\frac{\partial R}{\partial W_{ij}}\frac{\partial\Delta_{i}}{\partial W_{ij}}\,+\,|W_{ji}|^{2-p}\frac{\partial R}{\partial W_{ji}}\frac{\partial\Delta_{i}}{\partial W_{ji}}\right],$$
$$=-\lambda\sum_{j}\left[\frac{W_{ij}^{2}}{|W_{ij}|^{p}}\frac{|W_{ij}|^{p}}{W_{ij}}\frac{p|W_{ij}|^{p}}{W_{ij}}-\frac{W_{ji}^{2}}{|W_{ji}|^{p}}\frac{|W_{ji}|^{p}}{W_{ji}}\frac{p|W_{ji}|^{p}}{W_{ji}}\right],$$
$$=-\lambda p\sum_{j}\left[|W_{ij}|^{p}-|W_{ji}|^{p}\right],$$
$$=-\lambda p\,\Delta_{i}.\tag{37}$$

By assumption W is initialized such that ∥W∥p,p is minimal with respect to rescaling transformations. Thus as before, the partial derivative in eq. (11) must vanish at the identity gauge transformation a = 1, implying that $\Delta_{i}=\sum_{j}|W_{ij}|^{p}-\sum_{j}|W_{ji}|^{p}=0$ at each hidden unit i ∈ H. But this also implies, via eq. (37), that $\dot{\Delta}_{i}=0$, and hence that ∆i is a *vanishing* constant of the motion for all i ∈ H. The theorem then follows from Lemma 2.2.

The calculation in this proof yields another insight. From eq. (37), we see that ∆i decays exponentially to zero in the presence of regularization, so that *even in the absence of a gauge-fixing procedure*, the weights at hidden units are driven to satisfy the balancedness condition of Lemma 2.2. Note, however, that the time constant of this decay varies *inversely* with the magnitude of the regularization hyperparameter λ > 0. In practice, typical values of λ are quite small. Thus gauge-fixing can be viewed as a way of balancing the weights *at the outset and throughout the entire course of learning*, rather than relying on the asymptotic effects of regularization to do so in the limit.

## 4 Minimum-Norm Learning With Stochastic Gradient Descent

In previous sections, we have analyzed the interplay between gauge-fixing and learning via gradient flow. In this section, we investigate empirically how gauge-fixing affects the performance of stochastic gradient descent. Our goal is to observe these effects in a controlled setting where experiments can be meaningfully compared along a few essential axes of variation. There are, of course, many aspects of deep learning for which the theorems of the previous section do not have any obvious implications, and here we attempt to limit the potentially combinatorial number of experimental choices to a few essential ones.

## 4.1 Experimental Results

We focus on the cases of ℓp-norm regularization with p = 1, 2. For ℓ2-norm regularization, the gradient flow in eqs. (34–35) can be approximated by the additive updates

$$W_{ij}\leftarrow W_{ij}-\eta\left\langle\frac{\partial C}{\partial W_{ij}}\right\rangle-\eta\lambda W_{ij},\tag{38}$$
$$b_{i}\leftarrow b_{i}-\eta\left\langle\frac{\partial C}{\partial b_{i}}\right\rangle\quad\text{for}\quad i\in\mathcal{O},\tag{39}$$

where η > 0 is a learning rate and ⟨·⟩ indicates a mini-batch estimate of the gradient of the cost in eq. (21). Likewise for ℓ1-norm regularization, the gradient flow in eqs.
(34–35) can be approximated by the multiplicative updates

$$W_{ij}\leftarrow W_{ij}\exp\left(-\eta\operatorname{sign}(W_{ij})\left\langle\frac{\partial C}{\partial W_{ij}}\right\rangle-\eta\lambda\right),\tag{40}$$
$$b_{i}\leftarrow b_{i}\exp\left(-\eta\operatorname{sign}(b_{i})\left\langle\frac{\partial C}{\partial b_{i}}\right\rangle\right)\quad\text{for}\quad i\in\mathcal{O}.\tag{41}$$

The updates in eqs. (40–41) can be viewed as exponentiated gradient updates for parameters whose signs are fixed at initialization but whose magnitudes are adapted over time (Kivinen & Warmuth, 1997; Arora et al., 2012; Bernstein et al., 2020). As shorthand, we refer to eqs. (38–39) as updates for stochastic gradient descent (SGD) and eqs. (40–41) as updates for exponentiated (stochastic) gradient descent (EGD).

We experimented with the gradient-based updates in eqs. (38–41) both with and without an additional procedure for gauge-fixing. For the latter, we simply called Algorithm 1 before each step of SGD or EGD to minimize ∥W∥p,p up to some tolerance δ. In general, this additional procedure did not incur much overhead relative to the mini-batch updates for learning. Note that the amount of overhead is determined essentially by the number of rescaling transformations computed by Algorithm 1. This number may be moderate at initialization (as shown by Figure 2), but once the gauge has been fixed, the property of norm-minimality is preserved by gradient flow (as shown by Theorem 3.4) and only violated to O(η²) by each discrete-time update in eqs. (38–41). Therefore many fewer rescaling transformations are required to minimize ∥W∥p,p throughout the course of learning. The relative overhead of the gauge-fixing procedure also decreases in proportion to the size of the mini-batch.

Our experiments examined the effects of gauge-fixing on the optimization of the regularized loss in eq. (33). We studied problems in multiway classification where each input xt in the training set was labeled by a unary vector yt that provided a one-hot encoding of the correct class. In these experiments we optimized the regularized *log-loss* where the cost per example was computed via the softmax operation:

$$C(\mathbf{y},\mathbf{f}(\mathbf{x}))=-\sum_{\alpha}y_{\alpha}\log\rho_{\alpha}\quad\text{where}\quad\rho_{\alpha}=\frac{e^{f_{\alpha}(\mathbf{x})}}{\sum_{\beta}e^{f_{\beta}(\mathbf{x})}}.\tag{42}$$

We experimented on the four publicly available datasets shown in Table 1.

| Dataset | Examples | Dimension | Classes | Abbrev. |
|---------|----------|-----------|---------|---------|
| mnist-digits (LeCun et al., 1998) | 60000 | 784 | 10 | digits |
| Fashion-mnist (Xiao et al., 2017) | 60000 | 784 | 10 | fashion |
| emnist-balanced (Cohen et al., 2017) | 112800 | 784 | 47 | chars |
| cifar10 (Krizhevsky, 2009) | 50000 | 3072 | 10 | cifar10 |

Table 1: Data sets of grayscale (LeCun et al., 1998; Xiao et al., 2017; Cohen et al., 2017) and color (Krizhevsky, 2009) images that were used to train the networks in this paper.

Also, for each data set, we performed a singular value decomposition (Eckart & Young, 1936) to project the inputs into a smaller space of 200 dimensions; this was done to reduce training times. We trained three-layer, five-layer, and seven-layer networks on these datasets, where each network had a total of 3750 hidden units. From shallowest to deepest, the networks had hidden layers with 2500-1250 hidden
units, 2000-1000-500-250 hidden units, and 1000-750-750-500-500-250 hidden units; the number of output units was equal to the number of classes (either 10 or 47). We used an initial learning rate of η = 0.001 and a weight decay of λ = 10−4 in the networks with ℓ2-norm regularization and an initial learning rate of η = 0.1 and a weight decay of λ = 5×10−6 in those with ℓ1-norm regularization. All networks were trained for 30 epochs with a mini-batch size of 32, and we reduced the learning rate by a factor of 0.95 after each epoch. As suggested by Theorem 3.4, we only adapted the biases for output units, while those for hidden units were frozen at zero. We allowed the multiplicative updates in Algorithm 1 to converge to a precision of δ = 10−3. In all, we present a summary of results from 240 experiments—one for each type of network (with three, five, or seven layers) on each data set in Table 1, with either ℓ1-norm or ℓ2-norm regularization, starting from five different random seeds, and evaluating the performance with and without an additional procedure for gauge-fixing.

Fig. 3 plots the regularized log-loss and training error rates in the experiments with ℓ2-norm regularization as a function of the number of epochs of training. The left panels in this figure show that the minimum-norm gauge leads to consistently faster descent in the regularized log-loss. The right panels show that gauge-fixing also leads to lower error rates on the training set; this is an indication that gauge-fixing affects the course of learning in ways beyond the mere rescaling of weights. (Note that two networks cannot be equivalent up to rescaling transformations if they have different error rates.)

Fig. 4 shows the same plots for the experiments with ℓ1-norm regularization. Here again we observe that the minimum-norm gauge leads to faster descent in the regularized log-loss, but in this case we do not see a similar effect on the training error rates. It should be noted, however, that the magnitude of these effects is largely controlled by the amount of regularization. By increasing λ, one can (arbitrarily) increase the gap between the curves in the left panels, while by decreasing λ, one can weaken the regularizer in an attempt to obtain smaller error rates. We have fixed the value of λ in these experiments to understand, in a controlled setting, how gauge-fixing affects the optimization of the regularized log-loss (which is the actual quantity being optimized). However, the right panels of Fig. 4 suggest that weaker regularizers (i.e., smaller λ) may be more appropriate when the weights are being continually rebalanced by rescaling transformations.

## 4.2 Related Work

Many previous studies have investigated how to reformulate gradient-based learning in networks with rescaling symmetries. In a seminal paper, Neyshabur et al. (2015a) showed that SGD performs poorly in highly unbalanced networks, and in its place, they proposed PathSGD, a rescaling-invariant procedure that approximates steepest descent with respect to a special path-based regularizer. Notably, this regularizer has the distinguishing property that it computes the minimum value of a max-norm regularizer, where the minimum is performed over all networks equivalent up to rescaling (Neyshabur et al., 2015b). PathSGD was followed by other formulations of rescaling-invariant learning. For example, Badrinarayanan et al.
(2015) fixed the rescaling degrees of freedom in multilayer networks by constraining certain weight vectors to have unit norm, while Meng et al. (2019) showed how to perform SGD in the vector space of paths (as opposed to weights), where the rescaling-invariant value of a path is given by the product of its weights. ![14_image_0.png](14_image_0.png) Figure 3: Comparison of SGD with and without rescaling transformations to minimize the sum of the squares of the weights. The left and right panels plot the ℓ2-regularized log loss and the training error rate, respectively, as a function of the number of epochs. Results are shown for three different architectures and four different data sets. Each curve is the result from averaging over five random seeds. ![14_image_1.png](14_image_1.png) Figure 4: Comparison of EGD with and without rescaling transformations to minimize the sum of the magnitudes of the weights. The left and right panels plot the ℓ1-regularized log loss and the training error rate, respectively, as a function of the number of epochs. Results are shown for three different architectures and four different data sets. Each curve is the result from averaging over five random seeds. More recent work has considered how to accelerate learning with particular rescaling transformations. For instance, Armenta et al. (2021) showed that the magnitudes of backpropagated gradients are boosted on average by randomly rescaling the weights before or in the middle of the learning—a process they call neural teleportation. Likewise, Zhao et al. (2022) explored how to choose symmetry group transformations that purposefully increase or maximize the norms of gradients for learning. Because these gradients are computed with respect to particular training examples, this approach can be viewed as a *data-driven* procedure for manipulating the optimization landscape via symmetry group transformations. Within this body of work, our main contribution is to derive the gauge-fixing multiplicative updates in Algorithm 1 that minimize the ℓp-norms of weights under rescaling transformations. This gauge-fixing criterion yields another rescaling-invariant procedure for learning, one motivated by the same underlying appeals to symmetry as in earlier studies (Neyshabur et al., 2015a; Badrinarayanan et al., 2015; Meng et al., 2019), but more closely aligned (via Theorem 3.4) with the widespread use of ℓp-norm regularization. Our approach differs from work on neural teleportation (Armenta et al., 2021; Zhao et al., 2022) by employing rescaling transformations to minimize the norms of weights rather than to increase the norms of gradients. These approaches may have similar effects in practice; we note, however, that the norms of gradients are unbounded above with respect to the (non-compact) group of rescaling transformations, and therefore one must be careful to identify the regime in which they serve as a reliable proxy for rates of convergence. ## 5 Discussion Deep learning is a revolutionary technology whose workings are not fully understood. In this paper, we have shown that further understanding may be gained from the symmetries of multilayer networks and the analogies they suggest to physical systems (Bronstein et al., 2021; Kunin et al., 2021; Gluch & Urbanke, 2021; Armenta & Jodoin, 2021). As in Lagrangian mechanics, these symmetries lead to conserved quantities when networks are trained by dynamical flows; as in gauge theories, they reveal redundant degrees of freedom that can be fixed in opportune ways. 
In this paper we have focused specifically on the rescaling symmetries of homogeneous networks. Inspired by these symmetries, we have shown how to rebalance the weights W of a feedforward network without changing the function that it computes. Specifically, we derived closed-form multiplicative updates that minimize the entry-wise p-norm ∥W∥p,p over the equivalence class of networks that are related by rescaling transformations. We also showed that this property of minimality was preserved by gradient flow in a correspondingly regularized loss function. Finally we considered how these rescaling symmetries might be exploited in conjunction with SGD. Our experimental results provide further evidence that learning can be accelerated by fixing (or otherwise controlling for) the degrees of freedom associated with these symmetries (Neyshabur et al., 2015a; Badrinarayanan et al., 2015; Meng et al., 2019; Armenta et al., 2021; Zhao et al., 2022). There are many questions deserving of further investigation. One important question is how to combine gauge-fixing with accelerated gradient-based methods, such as those involving momentum (Polyak, 1964) or adaptive learning schedules (Kingma & Ba, 2015; Duchi et al., 2010; Tieleman & Hinton, 2018). These methods may already be compensating, to some extent, for the rescaling symmetries in the optimization landscape, but if so, these effects need to be further clarified. Other potential benefits of gauge-fixing are suggested in the more familiar setting of matrix factorization (Horn & Johnson, 2012). The basic problem of factorization is underdetermined: any matrix can be written in an infinite number of ways as the product of two or more other matrices. But consider the wealth of information that is revealed by certain canonical factorizations of large matrices: for example, from the singular value decomposition, it is straightforward to compute the low-rank approximation that is optimal in a least-squares sense (Eckart & Young, 1936). It is natural to ask whether the functions computed by multilayer networks can be represented in a similarly canonical way, and if so, whether such representations might suggest more effective strategies for pruning, compressing, or otherwise approximating their weight matrices. The search for such representations provides yet another motivation for gauge-fixing. Finally we note that there are many possible criteria for gauge-fixing in multilayer networks with rescaling symmetries. In this paper we studied how to minimize the entry-wise p-norm of the weight matrix, a problem we found to be especially tractable. It would be interesting to study other criteria for gauge-fixing and to derive the conditions (analogous to those in Theorems 3.3–3.4) under which these criteria are preserved by gradient-based learning. We believe that the present work can provide a template for these further investigations—and also that such investigations will reveal a similarly rich mathematical structure. ## References P. W. Anderson. More is different. *Science*, 177(4047):393–396, 1972. M. Armenta and P.-M. Jodoin. The representation theory of neural networks. *Mathematics*, 9(24), 2021. M. A. Armenta, T. Judge, N. Painchaud, Y. Skandarani, C. Lemaire, G. G. Sanchez, P. Spino, and P. M. Jodoin. Neural teleportation, 2021. arXiv:2012.01118. S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. *Theory of Computing*, 8(1):121–164, 2012. S. Arora, N. Cohen, and E. Hazan. 
On the optimization of deep networks: Implicit acceleration by overparameterization. In J. Dy and A. Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, pp. 244–253, 2018.

S. Arora, N. Cohen, N. Golowich, and W. Hu. A convergence analysis of gradient descent for deep linear neural networks. In *Proceedings of the 8th International Conference on Learning Representations*, 2019.

V. Badrinarayanan, B. Mishra, and R. Cipolla. Understanding symmetries in deep networks. In *Proceedings of the 8th NeurIPS Workshop on Optimization for Machine Learning*, 2015.

M. Belkin, D. Hsu, S. Ma, and S. Mandal. Reconciling modern machine-learning practice and the classical bias-variance trade-off. *Proceedings of the National Academy of Sciences USA*, 116(32):15849–15854, 2019.

J. Bernstein, J. Zhao, M. Meister, M.-Y. Liu, A. Anandkumar, and Y. Yue. Learning compositional functions via multiplicative weight updates. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems 33*, pp. 13319–13330, 2020.

M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković. Geometric deep learning: grids, groups, graphs, geodesics, and gauges, 2021. arXiv:2104.13478.

G. Cohen, S. Afshar, J. Tapson, and A. Van Schaik. EMNIST: an extension of MNIST to handwritten letters, 2017. arXiv:1702.05373.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society B*, 39:1–38, 1977.

L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In D. Precup and Y. W. Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, pp. 1019–1028, 2017.

S. S. Du, W. Hu, and J. D. Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 31*, pp. 382–393, 2018.

J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. In *Proceedings of the 23rd Conference on Learning Theory*, pp. 257–269, 2010.

C. Eckart and G. Young. The approximation of one matrix by another of lower rank. *Psychometrika*, 1(3):211–218, 1936.

O. Elkabetz and N. Cohen. Continuous vs. discrete optimization of deep neural networks. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.), *Advances in Neural Information Processing Systems 34*, pp. 4947–4960, 2021.

N. Gillis. *Nonnegative Matrix Factorization*. SIAM, 2021.

X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Y. W. Teh and D. M. Titterington (eds.), *Proceedings of the 13th International Conference on Artificial Intelligence and Statistics*, pp. 249–256, 2010.

X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In G. Gordon, D. Dunson, and M. Dudík (eds.), *Proceedings of the 14th International Conference on Artificial Intelligence and Statistics*, pp. 315–323, 2011.

G. Gluch and R. Urbanke. Noether: the more things change, the more they stay the same, 2021. arXiv:2104.05508.

I. Goodfellow, Y. Bengio, and A. Courville. *Deep Learning*. MIT Press, 2016.

I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In *Proceedings of the 30th International Conference on Machine Learning*, pp.
1319–1327, 2013.

D. J. Gross. Gauge theory—past, present, and future? *Chinese Journal of Physics*, 30:955–972, 1992.

F. V. Gubarev, L. Stodolsky, and V. I. Zakharov. On the significance of the vector potential squared. *Physical Review Letters*, 86(11):2220–2222, 2001.

A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. *Journal of Machine Learning Research*, 6:2049–2073, 2005.

K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *Proceedings of the 2015 IEEE International Conference on Computer Vision*, pp. 1026–1034, 2015.

G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In *Proceedings of the International Conference on Artificial Neural Networks (ICANN-11)*, pp. 44–51, 2011.

R. A. Horn and C. R. Johnson. *Matrix Analysis*. Cambridge University Press, 2012.

J. D. Jackson. From Lorenz to Coulomb and other explicit gauge transformations. *American Journal of Physics*, 70:917–928, 2002.

D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun (eds.), *Proceedings of the 3rd International Conference on Learning Representations*, 2015.

J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. *Information and Computation*, 132(1):1–63, 1997.

A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems 25*, pp. 1106–1114, 2012.

D. Kunin, J. Sagastuy-Breña, S. Ganguli, D. L. K. Yamins, and H. Tanaka. Neural mechanics: Symmetry and broken conservation laws in deep learning dynamics. In *Proceedings of the 9th International Conference on Learning Representations*, 2021.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.

Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. *Nature*, 521:436–444, 2015.

D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. *Nature*, 401:788–791, 1999.

D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In T. K. Leen, T. G. Dietterich, and V. Tresp (eds.), *Advances in Neural Information Processing Systems 13*, pp. 556–562. MIT Press, 2000.

K. Lyu and J. Li. Gradient descent maximizes the margin of homogeneous neural networks. In *Proceedings of the 8th International Conference on Learning Representations*, 2020.

Q. Meng, S. Zheng, H. Zhang, W. Chen, Q. Ye, Z.-M. Ma, N. Yu, and T.-Y. Liu. G-SGD: Optimizing ReLU neural networks in its positively scale-invariant space. In *Proceedings of the 7th International Conference on Learning Representations*, 2019.

R. R. Meyer. Sufficient conditions for the convergence of monotonic mathematical programming algorithms. *Journal of Computer and System Sciences*, 12(1):108–121, 1976.

S. Mohan, Z. Kadkhodaie, E. P. Simoncelli, and C. Fernandez-Granda. Robust and interpretable blind image denoising via bias-free convolutional neural networks. In *Proceedings of the 8th International Conference on Learning Representations (ICLR-20)*, 2020. URL https://openreview.net/forum?id=HJlSmC4FPS.

B. Neyshabur, R. Salakhutdinov, and N. Srebro.
Path-SGD: Path-normalized optimization in deep neural networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 28*, pp. 2422–2430, 2015a.

B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In *Proceedings of the 28th Conference on Learning Theory*, pp. 1376–1401, 2015b.

E. Noether. Invariante Variationsprobleme. *Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse*, pp. 235–257, 1918.

V. Papyan, X. Y. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. *Proceedings of the National Academy of Sciences USA*, 117(40):24652–24663, 2020.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4(5):1–17, 1964.

L. K. Saul, F. Sha, and D. D. Lee. Statistical signal processing with nonnegativity constraints. In *Proceedings of the 8th European Conference on Speech Communication and Technology*, pp. 1001–1004, 2003.

A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Y. Bengio and Y. LeCun (eds.), *Proceedings of the 2nd International Conference on Learning Representations*, 2013.

F. Sha, Y. Lin, L. K. Saul, and D. D. Lee. Multiplicative updates for nonnegative quadratic programming. *Neural Computation*, 19:2004–2031, 2007.

B. K. Sriperumbudur and G. R. G. Lanckriet. A proof of convergence of the concave-convex procedure using Zangwill's theory. *Neural Computation*, 24:1391–1407, 2012.

I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In *Proceedings of the 30th International Conference on Machine Learning*, pp. 1139–1147, 2013.

H. Tanaka and D. Kunin. Noether's learning dynamics: Role of symmetry breaking in neural networks. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan (eds.), *Advances in Neural Information Processing Systems 34*, pp. 25646–25660, 2021.

H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems 33*, pp. 6377–6389, 2020.

T. Tieleman and G. E. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. *COURSERA: Neural networks for machine learning*, 4(2):26–31, 2018.

T. van Laarhoven. L2 regularization versus batch and weight normalization, 2017. arXiv:1706.04340.

A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. *Proceedings of the National Academy of Sciences USA*, 113(47):E7351–E7358, 2016.

C. F. J. Wu. On the convergence properties of the EM algorithm. *Annals of Statistics*, 11(1):95–103, 1983.

H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017. arXiv:1708.07747.

A. L. Yuille and A. Rangarajan. The concave-convex procedure. *Neural Computation*, 15:915–936, 2003.

W. J. Zangwill. *Nonlinear programming: A unified approach*. Prentice Hall, 1969.

A. Zee. *Fearful Symmetry: The Search for Beauty in Modern Physics*. Princeton University Press, 2016.

C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals.
Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021a.

Y. Zhang, P. Tiňo, A. Leonardis, and K. Tang. A survey on neural network interpretability. *IEEE Transactions on Emerging Topics in Computational Intelligence*, 5(5):726–742, 2021b.

B. Zhao, N. Dehmamy, R. Walters, and R. Yu. Symmetry teleportation for accelerated optimization. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems 35*, 2022.
# Revisiting Active Learning In The Era Of Vision Foundation Models

Sanket Rajan Gupte∗ *sanketg@stanford.edu*
Department of Computer Science
Stanford University

Josiah Aklilu∗ *josaklil@stanford.edu*
Department of Biomedical Data Science
Stanford University

Jeffrey J. Nirschl *jnirschl@stanford.edu*
Department of Pathology
Stanford University

Serena Yeung-Levy *syyeung@stanford.edu*
Department of Biomedical Data Science
Stanford University

∗Equal contribution.

Reviewed on OpenReview: *https://openreview.net/forum?id=u8K83M9mbG*

## Abstract

Foundation vision or vision-language models are trained on large unlabeled or noisy data and learn robust representations that can achieve impressive zero- or few-shot performance on diverse tasks. Given these properties, they are a natural fit for *active learning* (AL), which aims to maximize labeling efficiency. However, the full potential of foundation models has not been explored in the context of AL, specifically in the low-budget regime. In this work, we evaluate how foundation models influence three critical components of effective AL, namely, 1) initial labeled pool selection, 2) ensuring diverse sampling, and 3) the trade-off between representative and uncertainty sampling. We systematically study how the robust representations of foundation models (DINOv2, OpenCLIP) challenge existing findings in active learning. Our observations inform the principled construction of a new simple and elegant AL strategy that balances uncertainty estimated via dropout with sample diversity. We extensively test our strategy on many challenging image classification benchmarks, including natural images as well as out-of-domain biomedical images that are relatively understudied in the AL literature. We also provide a highly performant and efficient implementation of modern AL strategies (including our method) at https://github.com/sanketx/AL-foundation-models.

## 1 Introduction

Foundation models (Oquab et al., 2023; Cherti et al., 2022; Bommasani et al., 2022) have become ubiquitous in computer vision. The advent of large vision and vision-language models, pretrained on web-scale data corpora, has led to the significant enhancement of visual representation spaces. Despite differences in pretraining (e.g. contrastive frameworks with language supervision as in Cherti et al. (2022) or vision-only instance discrimination objectives as in Oquab et al. (2023)), the shared theme among vision foundation models is robust learned representations that can be effectively leveraged for state-of-the-art performance on multiple downstream tasks. These learned visual features have direct implications for machine learning paradigms designed to mitigate data paucity challenges, most notably deep active learning.

Active learning (AL) (Sener & Savarese, 2018; Ash et al., 2020; Gal et al., 2017; Kirsch et al., 2022; Aklilu & Yeung, 2022; Hacohen et al., 2022; Yehuda et al., 2022; Parvaneh et al., 2022; Mahmood et al., 2022) is a machine learning framework that addresses the scarcity of labeled data within a limited label budget. It achieves this by iteratively requesting labels from an external oracle, such as a subject matter expert or clinician. Stated differently, the challenge is determining the most beneficial labels to query when constrained by a limited labeling budget to maximize model performance. This approach is especially pertinent in practical computer vision applications that necessitate the creation of domain-specific datasets for custom tasks.
A well-designed AL strategy can optimize resource allocation and substantially lower acquisition costs in situations where labeled data are scarce and labor-intensive to produce - such as annotating high-resolution histopathology images for tumor classification. Previous research has tested the benefit of using pretrained representations during AL by leveraging representations of unlabeled data to 1) address the cold-start problem by selecting a good initial pool of candidates for labeling (Hacohen et al., 2022; Chen et al., 2023; Chandra et al., 2020), 2) improve query functions by selecting points that maximize the coverage of the representation space (Yehuda et al., 2022; Sener & Savarese, 2018), and 3) incorporate unlabeled samples in the training process via semi-supervised learning (Gao et al., 2020; Simeoni et al., 2021). However, these studies limit their focus to supervised pretraining on ImageNet or self-supervised pretraining on available unlabeled data, which often proves impractical in many real-world scenarios. For instance, biomedical imaging datasets exhibit distribution shifts from natural images and may not have sufficient samples for effective self-supervised representation learning.

Foundation models offer a compelling solution to these challenges. Image embeddings extracted from these models are semantically organized in the representation space, as evidenced by impressive zero- and few-shot performance with simple linear models, even on fine-grained classification datasets. Evidence also suggests they are robust to domain shifts and generalize well to out-of-distribution data. Since these embeddings "just work" out of the box, building dataset-specific feature extractors is no longer necessary. For these reasons, we envision that future research in AL will build upon the ever-increasing capabilities (Dehghani et al., 2023; Sun et al., 2023) of Vision and Vision-Language foundation models to formulate efficient and scalable acquisition strategies. However, there is limited research (Tran et al., 2022) on the impact of foundation models on AL. Our work seeks to re-examine the pillars of effective AL strategies in this context, specifically in the low-budget regime in which only a handful of images per class can be annotated. We hope our study's compelling results and novel insights will spur the use of foundation models as a "batteries included" launchpad for future AL research.

## Our Key Contributions Are The Following:

1. We investigate the impact of large-scale vision foundation models on the four pillars of active learning strategies, specifically in the low-budget regime, and contrast our findings with established results. Our analysis systematically explores the following dimensions of a successful AL strategy:

- We study the impact of initial pool selection and highlight differences with existing research on the cold-start problem.
- We explore the importance of querying diverse samples and demonstrate that with a few simple modifications, poorly performing AL query strategies can match those that explicitly incorporate sample diversity.
- We compare representative sampling with uncertainty-based sampling and counter the existing notion that uncertainty-based query methods are ineffective in the low-budget regime.
- We demonstrate the difficulties of leveraging semi-supervised learning as a complementary approach for building accurate models with limited labeled data.

2.
As a direct consequence of the results of our investigation, we construct a simple, performant, and scalable AL strategy, **DropQuery**. This method, built atop the rich semantic features generated by powerful foundation models, leverages an intelligent strategy for initial pool selection, utilizes an uncertainty-based criterion to generate a pool of query candidates, and selects a diverse subset of candidates for annotation. Our strategy outperforms the current state-of-the-art on a diverse collection of datasets, including fine-grained image classification, out-of-distribution biomedical images, and image classification at scale, all of which are relatively understudied in AL.

## 2 Background And Related Works

Traditional research in AL for image classification has largely focused on the development of acquisition functions to query samples for labeling by an oracle. These strategies have spanned uncertainty-based sampling approaches (Shannon, 1948), Bayesian methods (Gal et al., 2017; Kirsch et al., 2022), strategies to maximize sample diversity, query-by-committee strategies (Melville & Mooney, 2004), and techniques to estimate the largest update to the model parameters (Ash et al., 2020).

Uncertainty-based methods supply a single measure of uncertainty (epistemic or aleatoric) for unlabeled samples so as to rank them for acquisition. **Entropy** selects instances with maximum predictive entropy, **Uncertainty** selects instances with the lowest confidence, and **Margins** selects instances with the lowest margin between the highest and second-highest predicted class probabilities. The Bayesian approaches to AL (Gal et al., 2017; Kirsch et al., 2019; Woo, 2023) leverage Bayesian neural networks to model uncertainty and the potential informativeness of unlabeled instances. Bayesian Active Learning by Disagreement, or **BALD** (Gal et al., 2017), aims to query instances that maximize the mutual information gain between model parameters and predictive outputs. Additionally, the **PowerBALD** (Kirsch et al., 2022) query challenges the traditional top-B acquisition approach by noting the correlation of queried instances *between* each iteration of AL, which is alleviated by introducing stochasticity in the query.

The prior works that have placed an emphasis on diversity sampling argue that the classifier should learn good decision boundaries in the representation space early. Through the lens of these diversity-based queries, the AL paradigm can be re-framed as "what are the diverse instances that are representative of the data distribution?". The **Coreset** algorithm (Sener & Savarese, 2018) can be approximated as solving the k-Center greedy problem (Farahani & Hekmatfar, 2009), where the distance between any instance and its nearest cluster center (the labeled instances from previous AL iterations) is minimized. Following the Coreset query, the recently developed **ProbCover** query aims to avoid the selection of outlier instances and query more representative points. This is done by recasting the AL problem as a max coverage problem (Nemhauser et al., 1978).

A hybrid technique, **BADGE** (Ash et al., 2020), uses an AL query that inspects the gradients of a neural network's final layer relative to the current model's prediction. This identifies unlabeled instances that could induce significant model updates. The underlying intuition is that samples prompting large gradient updates require substantial adjustments, signifying uncertainty in samples with new features.
While this method yields diverse and highly uncertain samples, it is computationally intensive due to the storage of penultimate activations. **Alfa-Mix** (Parvaneh et al., 2022) solicits unlabeled samples with the greatest classification prediction variability when their representations are linearly interpolated with labeled instances, suggesting that the model needs to learn new features from unlabeled data points. **Typiclust** (Hacohen et al., 2022) queries typical instances during the early AL stages leveraging the representation space. Clustering in a semantically meaningful feature space after self-supervised learning ensures maximum diversity sampling. Sampling dense regions within clusters aids in better classifier learning within the feature space.

However, in most of the settings considered by these works, a model is trained from scratch at each iteration using a random initialization or an ImageNet-pretrained backbone. The resulting models are often poorly calibrated (Guo et al., 2017) and fail to provide robust estimates of uncertainty. These settings also require a relatively large labeling budget, as training deep non-linear models with limited labels can be challenging (Yuan et al., 2020). Consequently, a number of such strategies have been shown to underperform random sampling in the low-budget regime (Simeoni et al., 2021; Hacohen et al., 2022), or do not perform significantly better than random when strong regularization is applied (Munjal et al., 2022).

Recent methods designed for the low-budget regime (Hacohen et al., 2022; Yehuda et al., 2022) take a more holistic approach to developing an effective AL strategy by leveraging pretrained model embeddings to cluster features and select representative points, which has been shown to improve the selection of the initial labeled pool. Other approaches aim to exploit unlabeled points using self-supervised or semi-supervised learning, or a combination of both (Chan et al., 2020; Bai et al., 2021). Their findings demonstrate that leveraging rich representations significantly improves AL performance in the context of a small labeling budget (Lüth et al., 2023). Given these promising results, it is reasonable to conjecture that embeddings from large-scale vision transformers trained on billions of images would amplify the efficacy of these complementary components, constructing an effective strategy that surpasses a simpler baseline consisting of only a standalone acquisition function.

To comprehensively investigate these facets of AL, we compare the impact of a simple pool initialization strategy that leverages the representation space's implicit structure to sample diverse and representative points. Further, we investigate different classes of query functions in this context to compare uncertainty-based approaches with typicality-based methods, as the latter have been shown to significantly outperform the former in the low-budget regime. Orthogonal to the query function, we investigate whether semi-supervision via label propagation can be an effective aid in improving the performance of an active learner. Although some recent work has explored the effects of pretraining on the AL query (Hacohen et al., 2022; Tamkin et al., 2022), there has not been a rigorous evaluation of AL queries with respect to developments in foundation models and vision-language pretraining that have yielded highly expressive and rich visual representation spaces.
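For reference, the three stand-alone uncertainty queries discussed above have simple top-B forms operating on predicted class probabilities. The sketch below states the textbook definitions; it is our own illustration, not the released implementation linked in Section 1.

```python
import numpy as np

def entropy_query(probs, budget):
    # Select the points with maximum predictive entropy.
    scores = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-scores)[:budget]

def least_confidence_query(probs, budget):
    # Select the points whose most likely class has the lowest probability.
    return np.argsort(probs.max(axis=1))[:budget]

def margin_query(probs, budget):
    # Select the points with the smallest gap between the top two
    # predicted class probabilities.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return np.argsort(top2[:, 1] - top2[:, 0])[:budget]
```

Here `probs` is an (N_U × K) array of softmax outputs for the unlabeled pool, and each function returns the indices of the B instances to send to the oracle.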
In the following sections, we outline the critical components necessary for an effective AL query in the context of large foundation models.

## 3 Investigating The Pillars Of Effective Active Learning Strategies

Formally, we consider the batch AL setting: let D = {D_pool ∪ D_test} be a dataset with a held-out test set and training pool D_pool = {D_U ∪ D_L}, consisting of unlabeled instances {x_i} for i ∈ {N_U} and labeled instances {(x_j, y_j)} for j ∈ {N_L}. Let f : R^{H×W×3} → R^d be a feature extractor such that z_i := f(x_i) ∈ R^d is a feature vector and d is the embedding dimension. Let g(z; θ) : R^d → R^K be a classifier parameterized by θ. The typical AL procedure consists of a series of sequential iterations beginning at t = 0, where the labeled pool D_L is the empty set. At each iteration, a querying function a({z_i : i ∈ {N_U}}; θ_t), which given the current model parameters θ_t receives as input all unlabeled instances, selects B instances to be labeled by an external oracle (e.g. a clinical expert). Once labels are acquired, these labeled instances are subsumed in D_L and removed from the unlabeled pool, and the model is retrained on this slightly larger labeled pool. We simulate the AL procedure by hiding labels y_i until queried for a chosen instance x_i.

In this work, we employ frozen vision-only or vision-language foundation models as the feature extractor f(.) and a linear head as the classifier g(.). We only require a single forward pass through the feature extractor to generate embeddings, which are saved for use in our various experiments. This enables us to study a wide range of experimental settings for AL efficiently. Unless mentioned otherwise, we use a DINOv2 ViT-g14 (Oquab et al., 2023) transformer as the feature extractor. Simple linear models trained on features extracted by this model achieve performance that is just shy of state-of-the-art, making them a compelling alternative to end-to-end fine-tuning of the backbone with scarce labels. Following the recommendations of Munjal et al. (2022), we apply strong regularization to our models in the form of weight decay (1e-2) and aggressive dropout (0.75). Note that the benchmarks used to evaluate the various AL methods in this work were not used in the pretraining of the foundation models we investigated (we refer the reader to the data curation processes outlined by Oquab et al. (2023)).

In our experiments, we set a query budget B of 1 sample per class per iteration. For instance, the CIFAR100 dataset contains images corresponding to 100 classes, so the query budget is set to 100 samples. Note that setting B to 1 sample per class per iteration does not necessarily imply that exactly one sample is chosen from each class, since the class labels are unknown at the time of querying. We run our active learning loop for 20 iterations and average our results over 5 random seeds for each experiment. We analyze the following query functions: a Random Sampling baseline, Entropy, Uncertainty, Margins, Coreset (greedy k-centers), BALD, PowerBALD, BADGE, Alfa-Mix, Typiclust, and ProbCover. The hyper-parameters for these query functions, if any, are taken from the original publications.

## 3.1 The Impact Of Initial Pool Selection

We begin our study by measuring the impact of different initial pool selection strategies in AL, particularly in the *cold-start* scenario where no data has been used for training the classifier. It has been shown in the recent literature by Hacohen et al. (2022), Yehuda et al. (2022), and Chen et al.
## 3.1 The Impact Of Initial Pool Selection

We begin our study by measuring the impact of different initial pool selection strategies in AL, particularly in the *cold-start* scenario where no data has yet been used for training the classifier. It has been shown in the recent literature (Hacohen et al., 2022; Yehuda et al., 2022; Chen et al., 2023) that the initial pool is crucial for establishing a good rudimentary classifier that enjoys performance gains throughout AL. Hacohen et al. (2022) and Yehuda et al. (2022) also argue that the AL acquisition function need not be decoupled from the initial pool selection. Rather, an AL query should acquire representative samples so the classifier can learn good decision boundaries from the first query. Intuitively, in order for an AL query to be effective, it must rely on the current classifier's estimates of uncertainty or informativeness.

To assess initialization strategies, we contrast AL query performance when the initial training pool is randomly selected for labeling against a centroid-seeking approach following Pourahmadi et al. (2021), where the initial pool consists of the samples from $D_U$ closest to the cluster centers after K-means clustering in the feature space with $B$ clusters. **Typiclust** and **ProbCover** utilize different initialization strategies, based on sample density and maximizing coverage, respectively, so we keep these queries true to their own initialization approaches.

Table 1 compares the deltas (∆) in the test set performance of a linear classifier actively trained on foundation model features using a randomly selected initial pool versus a centroid-based initial pool. Note that a random initial pool is suboptimal compared to methods that explicitly take advantage of semantically meaningful embedding spaces (i.e., **Typiclust** or **ProbCover**). The tremendous performance gains in early AL iterations for uncertainty-based AL queries like **Entropy** sampling enable these methods to surpass queries like **Typiclust** and **ProbCover**. Table 1 also demonstrates the impact of initial pool selection on AL queries acting on foundation model features. Some queries, like **Uncertainty**, **Entropy**, and **Coreset**, enjoy performance gains throughout AL given a centroid-based initialization, a major boost over random initialization. Interestingly, some queries like **Alfa-Mix** and **BADGE** have deltas that converge within 2-4 iterations, and in later iterations (8-16) we observe higher accuracy with a randomly selected initial pool. Since the **Alfa-Mix** query crucially relies on interpolation in the feature space, we hypothesize that the separability of the foundation model representation space renders differences between initial pooling strategies negligible. After a few iterations, the robust visual representations from foundation models enable the selection of highly representative samples, which help the classifier establish good class boundaries in the few-shot setting.

The results from this experiment stand in contrast to previous preliminary findings from Chandra et al. (2020), which report no significant difference between initial pool selection strategies in the long run. While we also see diminishing returns as we progress to later iterations, we emphasize that the experimental results in Chandra et al. (2020) also showed little to no difference caused by most initialization strategies, even in the very first iteration of AL. This observation does not align with our results, in which we see stark differences caused by intelligent selection of the initial labeled pool in the beginning iterations. However, this discrepancy can be resolved when taking into account the sizes of the labeling budgets.
Our experimental setting is the very low-budget regime, and the number of labeled samples in the final iterations of our studies barely overlaps with the starting budgets studied in Chandra et al. (2020). We conclude that in the low-budget regime, where only a few examples per class have been acquired, initialization is crucial for AL performance; in later iterations, as the number of labeled samples grows or as we transition to higher-budget regimes, it is not as significant, and a randomly selected initial pool works just as well.

## 3.2 On The Importance Of Diversity In Query Selection

In line with the initial pool selection, we evaluate whether partitioning the representation space via clustering remains crucial for AL performance in subsequent iterations, even after the initial query. To conduct these experiments, we allow AL queries that use stand-alone measures of uncertainty (i.e., **Uncertainty**, **Entropy**, **Margins**, and **BALD**) to deviate from the top-B acquisition pattern prevalent in the AL literature. As noted by Kirsch et al. (2022), top-B acquisition can potentially lead to a correlation between batches queried at iterations $t_i$ and $t_j$, which can hurt performance.

Table 1: Effect of initial pool selection on performance. Test set accuracy using our centroid-based initialization. In parentheses, we show the difference (∆) in performance when utilizing our centroid initialization vs. random initialization, where a positive ∆ shown in green indicates improvement over random initialization. We show AL iterations t for the datasets CIFAR100 (Krizhevsky, 2009), Food101 (Bossard et al., 2014), ImageNet-100 (Gansbeke et al., 2020), and DomainNet-Real (Peng et al., 2019) (from top to bottom) with DINOv2 ViT-g14 as the feature extractor f. For both the Typiclust and ProbCover queries, we report the test accuracy using their own respective initialization strategies for the cold start.
| t | Random | Uncertainty | Entropy | Margins | BALD | pBALD | Coreset | BADGE | Alfa-mix | Typiclust | ProbCover |
|---|--------|-------------|---------|---------|------|-------|---------|-------|----------|-----------|-----------|
| | CIFAR100 | | | | | | | | | | |
| 1 | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 72.4 (+24.4) | 64.6 | 62.3 |
| 2 | 78.1 (+13.6) | 76.7 (+14.0) | 76.5 (+18.4) | 79.1 (+9.2) | 78.6 (+11.3) | 80.3 (+9.0) | 77.7 (+13.2) | 79.8 (+7.3) | 80.7 (+2.7) | 80.8 | 76.2 |
| 4 | 82.5 (+3.7) | 80.6 (+5.9) | 78.8 (+8.4) | 84.0 (+1.4) | 82.1 (+1.7) | 85.2 (+1.3) | 81.9 (+3.7) | 84.6 (+0.5) | 83.7 (-0.1) | 86.8 | 81.9 |
| 8 | 86.4 (+0.4) | 85.3 (+1.1) | 84.0 (+2.6) | 88.3 (+0.4) | 86.3 (+1.1) | 88.8 (+0.4) | 84.5 (+0.2) | 88.8 (-0.1) | 87.3 (-0.5) | 88.4 | 86.5 |
| 16 | 89.2 (-0.0) | 89.2 (-0.0) | 88.6 (+0.8) | 90.9 (+0.2) | 89.0 (+0.6) | 90.9 (+0.1) | 88.0 (-0.3) | 90.7 (-0.2) | 90.4 (-0.4) | 89.3 | 89.1 |
| | Food101 | | | | | | | | | | |
| 1 | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 69.6 (+22.3) | 68.3 | 66.1 |
| 2 | 74.5 (+10.1) | 69.6 (+18.8) | 69.4 (+19.5) | 73.0 (+10.3) | 69.7 (+18.9) | 73.7 (+6.6) | 69.9 (+18.5) | 72.9 (+8.2) | 77.1 (+3.0) | 79.1 | 72.3 |
| 4 | 79.3 (+2.1) | 73.2 (+9.7) | 72.1 (+13.5) | 78.1 (+2.7) | 72.1 (+9.9) | 79.0 (+0.8) | 71.7 (+8.7) | 78.7 (+1.2) | 80.5 (+0.9) | 83.1 | 78.1 |
| 8 | 83.9 (+0.1) | 79.4 (+2.2) | 77.9 (+4.9) | 85.0 (+0.3) | 78.0 (+3.7) | 85.4 (-0.3) | 76.8 (+3.5) | 85.5 (+0.2) | 85.1 (-0.5) | 86.0 | 81.9 |
| 16 | 87.3 (-0.2) | 85.6 (+0.4) | 84.3 (+1.6) | 89.4 (-0.1) | 83.8 (+1.5) | 89.4 (+0.0) | 81.7 (+0.8) | 89.5 (+0.1) | 88.8 (-0.3) | 87.3 | 85.1 |
| | ImageNet-100 | | | | | | | | | | |
| 1 | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 80.8 (+26.0) | 76.7 | 76.6 |
| 2 | 85.4 (+8.9) | 83.8 (+23.8) | 82.5 (+24.7) | 86.2 (+7.2) | 86.6 (+7.9) | 88.0 (+4.1) | 84.8 (+8.3) | 87.0 (+5.1) | 87.3 (+2.4) | 89.5 | 89.1 |
| 4 | 88.8 (+2.0) | 86.2 (+11.8) | 85.6 (+17.0) | 88.9 (-0.1) | 87.9 (-0.3) | 90.6 (-0.1) | 86.6 (+0.9) | 89.6 (-0.2) | 90.2 (-0.5) | 92.3 | 91.7 |
| 8 | 91.6 (+0.6) | 88.4 (+2.7) | 87.8 (+6.0) | 91.5 (-0.7) | 89.2 (-0.6) | 92.3 (-0.7) | 88.6 (-0.5) | 92.3 (-0.3) | 93.2 (-0.2) | 93.3 | 92.7 |
| 16 | 93.0 (-0.3) | 91.0 (+0.5) | 90.5 (+0.8) | 93.5 (-0.8) | 91.3 (-0.1) | 94.1 (-0.3) | 90.4 (-0.7) | 93.8 (-0.4) | 94.3 (-0.1) | 93.4 | 93.3 |
| | DomainNet-Real | | | | | | | | | | |
| 1 | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 68.5 (+23.6) | 64.8 | 63.9 |
| 2 | 71.9 (+10.1) | 70.6 (+16.5) | 70.1 (+19.7) | 72.0 (+10.1) | 71.7 (+11.5) | 73.1 (+7.8) | 71.1 (+12.2) | 72.6 (+7.4) | 73.6 (+2.1) | 73.0 | 73.8 |
| 4 | 76.0 (+2.9) | 72.9 (+8.4) | 72.2 (+13.5) | 75.7 (+3.3) | 73.9 (+4.6) | 77.1 (+1.4) | 73.7 (+4.5) | 76.6 (+1.2) | 76.2 (-0.4) | 74.8 | 77.8 |
| 8 | 79.6 (+0.4) | 77.1 (+3.0) | 76.5 (+6.2) | 80.1 (+0.2) | 77.5 (+1.2) | 81.0 (+0.3) | 77.0 (+1.1) | 80.8 (+0.1) | 78.4 (-0.3) | 76.2 | 80.6 |
| 16 | 82.2 (+0.1) | 81.3 (+0.9) | 80.7 (+1.7) | 84.0 (-0.0) | 81.2 (+0.1) | 84.0 (-0.0) | 80.5 (+0.1) | 84.3 (-0.1) | 79.5 (-0.2) | 78.0 | 82.3 |
We modify these queries to select the top-(K · B) samples based on their uncertainty metrics (K = 50), cluster these samples by k-means into B clusters, and then select the points closest to the cluster centroids. We also experiment with enabling dropout at inference time to add an element of stochasticity that disrupts the classifier's estimates of uncertainty, allowing for diverse sample selection. We report the results of these experiments in Table 2. To decouple the influence of the initial pool selection, all queries, including **Typiclust** and **ProbCover**, use an identical randomly selected pool of initial samples. Our results show that by imposing simple diversity measures, uncertainty-based queries like **Uncertainty**, **Entropy**, **Margins**, and **BALD** surpass the performance of AL strategies that explicitly incorporate diversity in their queries.
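As a concrete illustration, the sketch below shows this modification for predictive entropy; the function name and entropy-based shortlist are illustrative stand-ins (any of the four stand-alone uncertainty measures above could be substituted), and the interface follows the AL-loop sketch from Section 3.

```python
# Sketch of the clustered top-(K·B) modification for a stand-alone uncertainty query,
# here using predictive entropy as the uncertainty metric (K = 50 in our experiments).
import numpy as np
from sklearn.cluster import KMeans

def clustered_entropy_query(clf, Z, unlabeled, B, K=50, seed=0):
    probs = clf.predict_proba(Z[unlabeled])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    shortlist = unlabeled[np.argsort(entropy)[-K * B:]]   # top-(K·B) most uncertain
    km = KMeans(n_clusters=B, n_init=10, random_state=seed).fit(Z[shortlist])
    nearest = km.transform(Z[shortlist]).argmin(axis=0)   # closest shortlist point per centroid
    return shortlist[nearest]
```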
## 3.3 Representative Versus Uncertainty Sampling And The Phase Transition

Much prior art in AL has placed an emphasis on the distinction between sampling the most uncertain instances in $D_U$ for the active learner and sampling diverse, representative instances for establishing well-defined decision boundaries. It is well known that querying diverse but relatively representative instances early in the AL procedure allows the classifier to learn the structure of the representation space, making uncertainty estimates in later iterations more reliable (Hacohen et al., 2022). This shift from diversity sampling in the low-data regime to uncertainty sampling afterwards is known as the *phase transition* (Hacohen et al., 2022). However, when leveraging large foundation models as feature extractors, we observe a phenomenon that is a striking deviation from the existing literature. As demonstrated in Table 2, when controlling for random initialization, uncertainty sampling is actually competitive with representative sampling methods like **Typiclust** as early as the 2nd AL iteration (CIFAR100, DomainNet-Real) and even beats these methods in later iterations. Using foundation models on our diverse selection of datasets, we see no evidence of a phase transition and find no support for the notion that uncertainty sampling is ineffective in low-budget AL. Further, we witness that **Typiclust** underperforms the uncertainty-based queries in later iterations (8-16, Food101 & DomainNet-Real).

Table 2: Marginal (∆) in performance for stand-alone uncertainty measures (i.e., softmax uncertainty, predictive entropy, margins sampling, or BALD) when using clustering (∆k) or when using clustering with dropout during inference (∆k+d). For fairness, we hold all queries to use a randomly selected initial pool. Power BALD, Coreset, BADGE, Alfa-mix, Typiclust, and ProbCover inherently enforce diversity in their respective queries, so we do not show ∆s for these methods. The cells are color-coded according to the magnitude of ∆ for better visualization.

| t | Uncertainty | | | Entropy | | | Margins | | | BALD | | pBALD | Coreset | BADGE | Alfa-mix | Typiclust | ProbCover |
| | Acc. | ∆k | ∆k+d | Acc. | ∆k | ∆k+d | Acc. | ∆k | ∆k+d | Acc. | ∆k | Acc. | Acc. | Acc. | Acc. | Acc. | Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | CIFAR100 | | | | | | | | | | | | | | | | |
| 2 | 79.6 | +13.1 | +17.0 | 79.4 | +16.6 | +21.4 | 80.4 | +7.9 | +10.5 | 76.9 | +9.5 | 71.2 | 64.5 | 72.5 | 78.0 | 78.9 | 74.6 |
| 4 | 86.7 | +12.0 | +12.0 | 86.7 | +15.9 | +16.3 | 87.1 | +4.1 | +4.5 | 86.6 | +6.2 | 83.9 | 78.2 | 84.1 | 83.9 | 86.4 | 82.2 |
| 8 | 89.9 | +5.6 | +5.7 | 89.8 | +8.2 | +8.4 | 89.8 | +1.8 | +1.9 | 89.3 | +4.1 | 88.4 | 88.3 | 90.9 | 90.8 | 90.0 | 89.1 |
| 16 | 91.2 | +2.2 | +1.9 | 91.4 | +3.5 | +3.6 | 91.4 | +0.9 | +0.7 | 91.0 | +2.7 | 90.8 | 88.3 | 90.9 | 90.8 | 90.0 | 89.1 |
| | Food101 | | | | | | | | | | | | | | | | |
| 2 | 70.3 | +16.1 | +19.4 | 68.8 | +14.3 | +18.9 | 72.3 | +7.4 | +9.6 | 64.0 | +13.2 | 67.1 | 51.5 | 64.8 | 74.1 | 77.0 | 73.8 |
| 4 | 81.8 | +17.0 | +18.3 | 80.0 | +17.6 | +20.2 | 81.7 | +5.7 | +6.2 | 77.5 | +15.3 | 78.2 | 63.0 | 77.5 | 79.6 | 82.5 | 78.6 |
| 8 | 86.4 | +9.3 | +9.1 | 85.7 | +11.1 | +12.7 | 86.1 | +1.5 | +1.5 | 84.3 | +10.0 | 85.6 | 73.4 | 85.3 | 85.6 | 85.8 | 82.3 |
| 16 | 88.9 | +4.5 | +3.6 | 88.7 | +5.9 | +6.1 | 89.0 | +0.1 | -0.4 | 87.9 | +5.6 | 89.4 | 80.8 | 89.4 | 89.2 | 87.4 | 85.3 |
| | ImageNet-100 | | | | | | | | | | | | | | | | |
| 2 | 88.5 | +21.6 | +28.5 | 88.4 | +21.1 | +30.5 | 88.5 | +8.5 | +9.5 | 86.7 | +8.0 | 83.9 | 76.5 | 81.8 | 84.9 | 89.1 | 87.0 |
| 4 | 90.6 | +15.6 | +16.2 | 90.4 | +20.9 | +21.8 | 91.2 | +2.3 | +2.2 | 90.2 | +2.0 | 90.6 | 85.7 | 89.8 | 90.7 | 92.4 | 91.8 |
| 8 | 93.3 | +7.6 | +7.6 | 92.0 | +11.3 | +10.3 | 93.3 | +1.3 | +1.1 | 91.4 | +1.6 | 92.9 | 89.0 | 92.6 | 93.4 | 93.3 | 92.7 |
| 16 | 94.1 | +4.1 | +3.6 | 94.2 | +4.7 | +4.5 | 94.4 | +0.3 | +0.2 | 93.3 | +1.9 | 94.4 | 91.1 | 94.1 | 94.4 | 93.8 | 93.8 |
| | DomainNet-Real | | | | | | | | | | | | | | | | |
| 2 | 71.5 | +12.6 | +17.4 | 71.2 | +15.4 | +20.8 | 72.3 | +7.1 | +10.4 | 68.1 | +7.8 | 65.3 | 58.8 | 65.2 | 71.4 | 72.4 | 71.3 |
| 4 | 79.4 | +13.9 | +14.9 | 78.8 | +19.5 | +20.1 | 79.1 | +6.5 | +6.6 | 78.0 | +8.7 | 75.6 | 69.2 | 75.4 | 76.6 | 75.2 | 77.3 |
| 8 | 82.7 | +8.3 | +8.5 | 82.5 | +12.2 | +12.2 | 82.4 | +2.6 | +2.5 | 81.7 | +5.4 | 80.7 | 76.0 | 80.7 | 78.7 | 77.0 | 80.4 |
| 16 | 84.6 | +4.4 | +4.2 | 84.5 | +5.8 | +5.5 | 84.4 | +0.7 | +0.4 | 84.5 | +3.3 | 84.0 | 80.4 | 84.3 | 79.7 | 78.3 | 82.1 |

## 3.4 Leveraging Unlabeled Instances For Training The Active Learner

Even with a good initial query, the classifier $g(z_i; \theta)$ may perform poorly in the low-data regime since there is only a limited number of training examples. In cases where the labeling budget is prohibitively small, acquiring enough training instances for a classifier to establish good decision boundaries early on in AL may be difficult. This motivates the use of unlabeled instances in a principled way to ensure that the classifier maximizes performance even in the low-data regime. However, our experiments find that the efficacy of semi-supervised learning in this setting is questionable at best, with wide variation across query methods and datasets. While we do see an initial boost in performance, contrary to Gao et al. (2020) and Simeoni et al. (2021), the gap quickly narrows, and label propagation underperforms the reference query. Propagating labels from uncertain queried samples may cause points across the decision boundary to be assigned to incorrect classes, hampering performance in the long run. We point the reader to the Appendix for an in-depth analysis using a popular semi-supervised algorithm.
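Since the specific semi-supervised procedure is detailed in the Appendix, the following generic sketch only illustrates the mechanism of offline label propagation over frozen features, using scikit-learn's `LabelPropagation`; the k-NN kernel and hyper-parameter choices are assumptions, not the configuration evaluated in our experiments.

```python
# Generic offline label-propagation sketch over frozen features (-1 marks unlabeled).
import numpy as np
from sklearn.semi_supervised import LabelPropagation

def propagate_labels(Z, labeled_idx, y_labeled, n_neighbors=20):
    y = np.full(len(Z), -1)
    y[labeled_idx] = y_labeled
    lp = LabelPropagation(kernel="knn", n_neighbors=n_neighbors, max_iter=1000)
    lp.fit(Z, y)
    return lp.transduction_   # pseudo-labels for every instance in the pool
```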
## 3.5 Summary Of Results

Our experiments with initial pool selection, in which we compare centroid-based initialization with random initialization, motivate the use of a centroid-based initialization strategy to overcome the cold-start problem. Additionally, we modified existing uncertainty-based approaches to enforce diversity in query selection, which motivates the use of clustering to select diverse samples during active learning. Our findings also indicate that a clustering approach similar to **Alfa-Mix** selects diverse candidates that span the representation space. Furthermore, our investigation into the previously identified phase transition indicates the need for an uncertainty-based query function instead of representative sampling throughout active learning: we revisited the notion that uncertainty-based queries are outperformed by representative sample selection methods such as **Typiclust** in the low-budget regime and find that this is not the case. Finally, our inquiry into enhancing the active learner with unlabeled instances cautions against the use of semi-supervised learning as a complementary approach to active learning. Based on our findings, an effective AL strategy would initialize the labeled set with representative samples, employ an uncertainty-based query function for shortlisting unlabeled candidates, and select a diverse subset of these candidates.

## 4 DropQuery, A Simple, Effective Active Learning Strategy

The observations from the previous sections directly inform the principled construction of a new AL strategy leveraging the robust visual features of foundation models, which we refer to as **DropQuery**. A detailed description of the query strategy is provided in Algorithm 1. Below, we briefly review the key results from Section 3 that motivate the choice of components and the construction of **DropQuery**.

- **Centroid-based initial pool selection:** Our experimental results in Section 3.1 demonstrated the utility of intelligently selecting the initial pool of candidates to label. Informed by these results, and given the semantic clusters characteristic of the latent spaces of foundation models, **DropQuery** employs a centroid-based initial pool selection strategy to overcome the cold-start problem.
- **Uncertainty-based sampling approach:** Based on our investigation into the trade-off between uncertainty-based and representative query sampling in Section 3.3, **DropQuery** favors an uncertainty-based query strategy. We use dropout perturbations to measure uncertainty and select candidates for annotation. Given features $z_i$ of an unlabeled instance, we produce $M$ dropout samples of these inputs ($\rho = 0.75$) to get $\{z_i^1, \ldots, z_i^M\}$. The current classifier at iteration $t$ then makes predictions on these $M$ samples and the original instance $z_i$, and we measure the consistency of the classifier predictions $\hat{y}_i = g(z_i; \theta)$. If more than half of the $M$ predictions are inconsistent, we add $z_i$ to the candidate set $Z_c$. In all of our experiments, we set $M = 3$.
- **Selecting a diverse pool of candidates:** Our analysis in Section 3.2 provided evidence in favor of choosing a diverse set of candidates to annotate. **DropQuery** leverages this result by taking the candidate set $Z_c$ and clustering it into $B$ clusters, selecting the point closest to the center of each cluster. These constitute a diverse subset of points to be annotated.
- **Leveraging unlabeled instances:** The results from our investigation in Section 3.4 did not provide conclusive evidence in favor of adding a semi-supervised learning component to **DropQuery**; hence, we choose to omit this particular variant of label propagation from our strategy.

## Algorithm 1 DropQuery

| for $z_i \in Z_U$ do | |
| $\quad \{z_i^1, \ldots, z_i^M\} \leftarrow \mathrm{Dropout}(z_i; \rho)$ | ▷ Apply dropout to the input features |
| $\quad n_i \leftarrow \sum_{m=1}^{M} \mathbb{1}[g(z_i^m; \theta) \neq g(z_i; \theta)]$ | ▷ Measure inconsistency among classifier predictions |
| end for | |
| $Z_c \leftarrow \{z_i \in Z_U \mid n_i > 0.5M\}$ | ▷ Use prediction inconsistency as a proxy for uncertainty |
| $C_k \leftarrow \text{K-means}(Z_c, B)$ where $k \in \{B\}$ | ▷ Cluster points to enforce diversity |
| $S_k \leftarrow \{z_i : \arg\min_{z_i \in Z_c} \lVert z_i - c_k \rVert_2^2 \text{ where } c_k \in C_k\}$ | ▷ Select the point closest to each centroid |
| $Y \leftarrow \phi(S_k)$ | ▷ Retrieve labels from samples closest to centroids |

We take inspiration from works like Ducoffe & Precioso (2017) and Beluch et al. (2018), which leverage query-by-committee to distinguish informative samples. It is well known that applying dropout to the weights of a classifier during inference simulates an ensemble of models (Warde-Farley et al., 2014). However, our strategy is a nuanced but critical change from these works. **DropQuery** is more similar in spirit to **Alfa-Mix**, which interpolates unlabeled points with anchors to perturb them and tests the consistency of their class predictions. Applying dropout perturbs features, and those lying close to the decision boundary will have inconsistent predictions, making them good candidates for querying. Candidate clustering ensures the diversity of selected points, as shown in Table 3. Since label propagation does not help queries in our experiments, we do not consider it a core component.
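The following minimal Python sketch implements one such query round on precomputed features, complementing Algorithm 1. It is illustrative only: the inverted-dropout rescaling, the random-mask generation, and the fallback for a too-small candidate set are implementation assumptions rather than details prescribed by the algorithm, and all names are hypothetical.

```python
# Minimal sketch of one DropQuery iteration (cf. Algorithm 1) on precomputed features.
import numpy as np
from sklearn.cluster import KMeans

def drop_query(clf, Z, unlabeled, B, M=3, rho=0.75, seed=0):
    rng = np.random.default_rng(seed)
    Zu = Z[unlabeled]
    base = clf.predict(Zu)
    n_inconsistent = np.zeros(len(Zu), dtype=int)
    for _ in range(M):
        keep = rng.random(Zu.shape) > rho              # drop each feature with prob. rho
        n_inconsistent += clf.predict(Zu * keep / (1.0 - rho)) != base
    cand = np.where(n_inconsistent > 0.5 * M)[0]       # Z_c: inconsistent under dropout
    if len(cand) < B:                                  # fallback: most inconsistent points
        cand = np.argsort(n_inconsistent)[-B:]
    km = KMeans(n_clusters=B, n_init=10, random_state=seed).fit(Zu[cand])
    nearest = km.transform(Zu[cand]).argmin(axis=0)    # closest candidate per centroid
    return unlabeled[cand[nearest]]                    # indices to send to the oracle
```

Plugged into the AL loop sketched in Section 3, `drop_query` plays the role of the acquisition function, and the labeled pool is extended with the returned indices.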
Table 3: Test accuracy for our method (with random acquisition as a reference) utilizing dropout at inference time (DQ), with a centroid-based initial pool (DQc), and dropout with semi-supervised learning in the form of label propagation (DQssl).

| t | CIFAR100 | | | | ImageNet-100 | | | | Food101 | | | | DomainNet-Real | | | |
| | Random | DQ | DQc | DQssl | Random | DQ | DQc | DQssl | Random | DQ | DQc | DQssl | Random | DQ | DQc | DQssl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 48.1 | 48.1 | 72.6 | 53.3 | 54.7 | 54.7 | 79.1 | 57.8 | 47.4 | 47.4 | 71.3 | 49.7 | 44.8 | 44.8 | 68.0 | 46.8 |
| 2 | 64.5 | 81.0 | 83.5 | 79.2 | 76.5 | 88.8 | 89.4 | 85.9 | 64.4 | 72.7 | 76.1 | 70.6 | 61.8 | 73.0 | 75.0 | 68.9 |
| 4 | 78.7 | 87.4 | 87.7 | 87.1 | 86.8 | 91.7 | 91.5 | 90.9 | 77.2 | 81.8 | 82.2 | 80.9 | 73.1 | 79.1 | 78.7 | 76.7 |
| 8 | 86.1 | 89.8 | 90.1 | 89.6 | 91.0 | 93.2 | 93.0 | 92.8 | 83.8 | 86.3 | 86.3 | 84.7 | 79.2 | 82.4 | 82.2 | 80.0 |
| 16 | 89.2 | 91.4 | 91.5 | 91.0 | 93.3 | 94.1 | 94.1 | 93.7 | 87.5 | 89.5 | 89.4 | 86.7 | 82.1 | 84.8 | 84.7 | 82.1 |

## 5 Experimental Results

Our strategy's effectiveness is demonstrated on several natural, out-of-domain biomedical, and large-scale image datasets. Initially, we assess **DropQuery**'s performance, keeping the dataset constant while altering the underlying representation space (see Figure 1). Our experiments confirm that larger models generate more resilient visual features, thereby enhancing linear classifier performance in low-budget scenarios. An intriguing phenomenon, observed during iterations 4-16 in Figure 1, reveals that *smaller* models produce embeddings that our AL strategy better accommodates in later iterations. The performance benefits of larger models in early iterations (limited data availability) diminish rapidly, likely because their features are already highly expressive compared to those of smaller models, so dropout-motivated uncertainty sampling is not as advantageous. We further note that, on average, vision-language foundation models outperform their vision-only counterparts.

Figure 1: Results of our AL strategy on different representation spaces (i.e., DINOv2 (Oquab et al., 2023) and OpenCLIP (Cherti et al., 2022)). The y-axis is the delta in accuracy between iteration i and i/2. In early iterations, the improvements to AL query performance are more pronounced for larger models.

## 5.1 Natural Image Datasets

Our proposed AL strategy is evaluated through a comprehensive set of experiments on diverse natural image datasets sourced from the VTAB+ benchmark (Schuhmann et al., 2022). The VTAB+ benchmark, an expanded version of the VTAB benchmark (Zhai et al., 2020), encompasses a range of challenging visual perception tasks, including fine-grained image classification, and is typically used for assessing zero-shot performance. A notable drawback of many prior AL studies is the excessive reliance on datasets such as CIFAR100 or TinyImageNet, whose small image sizes are not representative of real-world scenarios where AL would be necessary. Therefore, our experiments include several larger, more complex natural image datasets that closely mirror real-world scenarios. We describe extensive implementation and training details in the Appendix.

Figure 2: (Top row) Results on fine-grained natural image classification tasks Stanford Cars (Krause et al., 2013), FVGC Aircraft (Maji et al., 2013), Oxford-IIIT Pets (Parkhi et al., 2012), and the Places365 (Zhou et al., 2017) datasets. (Bottom row) AL curves for biomedical datasets, including images of peripheral blood smears (Acevedo et al., 2020), retinal fundoscopy (Kaggle & EyePacs, 2015), HeLa cell structures (Murphy et al., 2000), and skin dermoscopy (Tschandl et al., 2018), covering pathology, ophthalmology, cell biology, and dermatology domains using various imaging modalities. Additional biomedical datasets are explored in the Appendix.

Among the fine-grained natural image classification datasets, **DropQuery** outperforms all other AL queries in each iteration on Stanford Cars (Krause et al., 2013) and Oxford-IIIT Pets (Parkhi et al., 2012), and it outperforms complex query approaches like **Alfa-Mix** and **Typiclust** in the low-budget regime on FVGC Aircraft (Maji et al., 2013) (see Figure 2). Our approach, which is agnostic to dataset and model, outperforms state-of-the-art AL queries that often necessitate additional hyperparameter tuning given the underlying data distribution. We also test our method on a large-scale dataset with 365 classes, Places365 (Zhou et al., 2017), which contains approximately 1.8 million images, and our strategy beats all other modern AL queries. These results exemplify the scalability of our method on large datasets, where AL would be used to probe the unlabeled data space for important samples.
## 5.2 Out-Of-Domain Datasets

In addition, we conduct experiments on various biomedical datasets, which significantly deviate from the training distribution of the foundation models discussed in this study. The aim of these experiments is to underscore the effectiveness of AL queries when applied to datasets that are out-of-domain or underrepresented in pretraining. This kind of evaluation is especially pertinent in situations where a model trained in one institution or setting is deployed in another. An efficient AL query helps obtain a minimal number of samples to label for fine-tuning the model to the new task, a realistic scenario often neglected in previous studies. Figure 2 (bottom row) illustrates our strategy's performance on challenging out-of-domain data. In real-world contexts, **DropQuery** excels as a practical, efficient method for querying hard-to-classify samples to improve the model.

Figure 3: Win matrices for all AL strategies investigated in our study evaluated on natural image datasets and out-of-domain biomedical image datasets. (a) CIFAR100, Food101, ImageNet-100, and DomainNet-Real using DINOv2 ViT-g/14 features, and Stanford Cars, FVGC Aircraft, Oxford-IIIT Pets, and Places365 using OpenCLIP ViT-G/14 features (8 total settings). Due to computational costs, **ProbCover** was not evaluated on Places365, so the maximum value of cells in the **ProbCover** row/column is 7. (b) Blood Smear, Diabetic Retinopathy, IICBU HeLa, and Skin Cancer datasets using DINOv2 ViT-g/14 features (4 total settings). **DropQuery** outperforms all other methods on the natural image datasets and is a strong competitor to all other methods on the biomedical image datasets with statistical significance.

## 5.3 Statistical Significance Study

In order to verify the empirical strength of **DropQuery**, we furnish our results with statistical significance tests following the procedure in Parvaneh et al. (2022) and Ash et al. (2020). We conduct paired t-tests to determine whether a particular AL strategy $i$ outperforms another strategy $j$. We say that query strategy $i$ beats or surpasses strategy $j$ at iteration $r$ if

$$c_{i,j}^{r}=\frac{\sqrt{5}\,\mu^{r}}{\sigma^{r}}>2.776,\quad\mu^{r}=\frac{1}{5}\sum_{k=1}^{5}\left(a_{i,k}^{r}-a_{j,k}^{r}\right),\quad\sigma^{r}=\sqrt{\frac{1}{5}\sum_{k=1}^{5}\left(a_{i,k}^{r}-a_{j,k}^{r}-\mu^{r}\right)^{2}}$$

where $a_{i,k}^{r}$ is the accuracy of strategy $i$ at iteration $r$ for random seed $k$. This corresponds to the $p = 0.05$ threshold for significance. We compute these $c_{i,j}^{r}$ for all iterations and sum the number of times strategy $i$ surpassed strategy $j$ at each iteration $r$ (divided by the total number of iterations) to indicate the score of $i$ beating $j$ in a specific dataset-model setting. Figure 3 presents two "win" matrices illustrating the proportion of times strategy $i$ has a significantly higher ($p < 0.05$) mean accuracy compared to strategy $j$. For each pair, at each iteration, the mean accuracies of methods $i$ and $j$ are compared using a t-test with $p < 0.05$, and the outcome (win, lose, or tie) yields a fraction of wins in $[0, 1]$. This is repeated for all datasets considered, and the values are summed to yield the proportion of wins. For example, in Figure 3a, **DropQuery** beats random sampling in a 7.25/8 fraction of the total AL iterations across the 8 dataset settings.
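Below is a minimal sketch of this win criterion, written directly from the formula above (5 seeds, hence 4 degrees of freedom and a critical value of 2.776 at p = 0.05); the function name is hypothetical.

```python
# Sketch: paired t-test "win" check between two AL strategies at one iteration.
import numpy as np

def beats(acc_i, acc_j, t_crit=2.776):
    """acc_i, acc_j: length-5 per-seed accuracies of strategies i and j at iteration r."""
    d = np.asarray(acc_i) - np.asarray(acc_j)   # per-seed accuracy differences
    mu = d.mean()
    sigma = np.sqrt(((d - mu) ** 2).mean())     # 1/5 normalization, as in the formula
    return np.sqrt(len(d)) * mu / sigma > t_crit  # assumes sigma > 0
```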
## 5.4 Ablations For **DropQuery**

## 5.4.1 Impact Of The Dropout Ratio

We explore the influence of the dropout ratio on both the overall performance of the models and the number of selected candidates at each iteration step. We experiment with a range of values, [0.15, 0.30, 0.45, 0.60], in addition to **0.75**, the setting used in our main experiments. Our findings, highlighted in Appendix Table 1, indicate that higher dropout ratios for the features result in slightly better-performing models, a trend consistent across datasets. This is perhaps unsurprising, as stronger regularization in the form of harsher dropout ratios would improve the generalizability of models in the low-budget regime. Interestingly, lower dropout ratios lead to fewer samples being added to the candidate set. This may be a complementary factor affecting the performance of the model, as a smaller candidate set would likely be less diverse.

## 5.4.2 Impact Of M, The Number Of Dropout Iterations

The number of dropout iterations, M, is a tunable hyper-parameter for our query function. We experiment with a range of values, [5, 7, 9], in addition to 3, the value used in our main experiments, and present the results in Appendix Table 2. Our findings indicate that this hyper-parameter has a minimal impact on both the performance of the model and the number of selected candidates, with fewer iterations having a slight advantage. The robustness of our models' performance with respect to M is a desirable trait for an AL query function designed for the low-budget regime, since the existence of a validation set for hyper-parameter tuning cannot be assumed due to the paucity of labels. Over the course of our experiments, we found that other query functions could perform better with the right combination of hyper-parameters: **ProbCover** performed better with lower purity levels, and **Alfa-Mix** performed better with a lower value of ϵ. The maximum number of clusters was set arbitrarily for **Typiclust**, as was the number of neighbors used to calculate typicality. Ideally, these hyper-parameters would be determined based on the distribution of features, the number of dimensions, and the number of samples in each dataset. These observations strengthen our belief that **DropQuery** is a strong competitor to existing methods due to its simplicity and excellent performance without tuning M.

## 6 Conclusions

In this work, we systematically study four critical components of effective active learning in the context of foundation models. Given our observations, we propose a new AL strategy, named **DropQuery**, that combines the benefits of strong initial pool selection by class centroids and a dropout-motivated uncertainty measure for querying unlabeled instances. This method outperforms other AL queries on a variety of natural image, biomedical, and large-scale datasets. It represents a paradigm shift toward leveraging the properties of foundation model representations in AL.

**Limitations and Societal Impact** Our work is a principled investigation focused on the interplay of AL and foundation models, and we recognize certain limitations. Firstly, our approach presumes the availability of public foundation models, which may not be universally accessible. Secondly, inherent biases and ethical dilemmas tied to these models could be reflected in our method, possibly leading to biased outcomes or the propagation of harmful stereotypes. Thirdly, our experiments focus on datasets with a relatively even distribution of labels, as has been the norm in most works exploring AL queries. Given that long-tailed datasets are quite common in the real world, there is a concern that our findings may not generalize to scenarios with heavily imbalanced class labels.
This limitation warrants further investigation, particularly in the biomedical domain, where class imbalances and long-tailed distributions are widespread. Despite these issues, the growth of foundation models encourages their use in AL, even with out-of-domain datasets. Finally, we acknowledge that the semi-supervised learning method implemented in this study is not the only way to leverage unlabeled instances. Because our experimental setup involves a fixed representation space generated by frozen foundation model backbones, we are constrained to the subset of SSL algorithms that operate in an offline mode. However, we encourage future work to investigate further along these lines and emphasize that our recommendation is limited to the use of a particular label propagation algorithm. We urge future research to address these concerns while considering the broader ethical implications.

## References

Andrea Acevedo, Anna Merino, Santiago Alférez, Ángel Molina, Laura Boldú, and José Rodellar. A dataset of microscopic peripheral blood cell images for development of automatic recognition systems. *Data Brief*, 30(105474):105474, June 2020.

Josiah Aklilu and Serena Yeung. Alges: Active learning with gradient embeddings for semantic segmentation of laparoscopic surgical images. In Zachary Lipton, Rajesh Ranganath, Mark Sendak, Michael Sjoding, and Serena Yeung (eds.), *Proceedings of the 7th Machine Learning for Healthcare Conference*, volume 182 of *Proceedings of Machine Learning Research*, pp. 892–911. PMLR, 05–06 Aug 2022. URL https://proceedings.mlr.press/v182/aklilu22a.html.

Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=ryghZJBKPS.

Haoping Bai, Meng Cao, Ping Huang, and Jiulong Shan. Self-supervised semi-supervised learning for data labeling and quality evaluation, 2021.

William Beluch, Tim Genewein, Andreas Nürnberger, and Jan Kohler. The power of ensembles for active learning in image classification. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 9368–9377, June 2018. doi: 10.1109/CVPR.2018.00976.

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, et al. On the opportunities and risks of foundation models, 2022.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision*, 2014.

Yao-Chun Chan, Mingchen Li, and Samet Oymak. On the marginal benefit of active learning: Does self-supervision eat its cake? In *ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3455–3459, 2021.

Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, and Vineeth N. Balasubramanian. On initial pools for deep active learning. In *Preregister@NeurIPS*, 2020.

Liangyu Chen, Yutong Bai, Siyu Huang, Yongyi Lu, Bihan Wen, Alan Yuille, and Zongwei Zhou. Making your first choice: To address cold start problem in medical active learning. In *Medical Imaging with Deep Learning*, 2023. URL https://openreview.net/forum?id=5iSBMWm3ln.
Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning, 2022.

Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters, 2023.

Mélanie Ducoffe and Frédéric Precioso. Active learning strategy for cnn combining batchwise dropout and query-by-committee. In *The European Symposium on Artificial Neural Networks*, 2017.

Reza Zanjirani Farahani and Masoud Hekmatfar. *Facility Location: Concepts, Models, Algorithms and Case Studies*. Springer-Verlag Berlin Heidelberg, 2009.

Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian active learning with image data. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1183–1192. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/gal17a.html.

Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels, 2020.

Mingfei Gao, Zizhao Zhang, Guo Yu, Sercan Ö. Arık, Larry S. Davis, and Tomas Pfister. Consistency-based semi-supervised active learning: Towards minimizing labeling cost. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), *Computer Vision - ECCV 2020*, pp. 510–526, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58607-2.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning*, 2017.

Guy Hacohen, Avihu Dekel, and Daphna Weinshall. Active learning on a budget: Opposite strategies suit high and low budgets. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 8175–8195. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/hacohen22a.html.

Kaggle and EyePacs. Kaggle diabetic retinopathy detection, July 2015. URL https://www.kaggle.com/c/diabetic-retinopathy-detection/data.

Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. BatchBALD: Efficient and diverse batch acquisition for deep bayesian active learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/95323660ed2124450caaac2c46b5ed90-Paper.pdf.

Andreas Kirsch, Sebastian Farquhar, Parmida Atighehchian, Andrew Jesson, Frederic Branchaud-Charron, and Yarin Gal. Stochastic batch acquisition for deep active learning, 2022.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *2013 IEEE International Conference on Computer Vision Workshops*, pp. 554–561, 2013. doi: 10.1109/ICCVW.2013.77.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Carsten Tim Lüth, Till J. Bungert, Lukas Klein, and Paul F Jaeger. Navigating the pitfalls of active learning evaluation: A systematic framework for meaningful performance assessment. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https://openreview.net/forum?id=Dqn715Txgl.

Rafid Mahmood, Sanja Fidler, and Marc T Law. Low-budget active learning via wasserstein distance: An integer programming approach. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=v8OlxjGn23S.

S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.

Prem Melville and Raymond J. Mooney. Diverse ensembles for active learning. In *Proceedings of the Twenty-First International Conference on Machine Learning*, ICML '04, pp. 74, New York, NY, USA, 2004. Association for Computing Machinery. ISBN 1581138385. doi: 10.1145/1015330.1015385. URL https://doi.org/10.1145/1015330.1015385.

Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, and Shadab Khan. Towards robust and reproducible active learning using neural networks, 2022.

R F Murphy, M V Boland, and M Velliste. Towards a systematics for protein subcellular location: quantitative description of protein localization patterns and automated analysis of fluorescence microscope images. *Proc. Int. Conf. Intell. Syst. Mol. Biol.*, 8:251–259, 2000.

George Nemhauser, Laurence Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions - I. *Mathematical Programming*, 14:265–294, 12 1978. doi: 10.1007/BF01588971.

Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.

Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2012.

Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Gholamreza (Reza) Haffari, Anton van den Hengel, and Javen Qinfeng Shi. Active learning by feature mixing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 12237–12246, June 2022.

Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1406–1415, 2019.

Kossar Pourahmadi, Parsa Nooralinejad, and Hamed Pirsiavash. A simple baseline for low-budget active learning. *arXiv preprint arXiv:2110.12033*, 2021.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev.
Laion-5b: An open large-scale dataset for training next generation image-text models, 2022.

Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=H1aIuk-RW.

C. E. Shannon. A mathematical theory of communication. *The Bell System Technical Journal*, 27(3):379–423, 1948. doi: 10.1002/j.1538-7305.1948.tb01338.x.

O. Simeoni, M. Budnik, Y. Avrithis, and G. Gravier. Rethinking deep active learning: Using unlabeled data at model training. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pp. 1220–1227, Los Alamitos, CA, USA, January 2021. IEEE Computer Society. doi: 10.1109/ICPR48806.2021.9412716. URL https://doi.ieeecomputersociety.org/10.1109/ICPR48806.2021.9412716.

Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. Eva-clip: Improved training techniques for clip at scale, 2023.

Alex Tamkin, Dat Pham Nguyen, Salil Deshpande, Jesse Mu, and Noah Goodman. Active learning helps pretrained models learn the intended task. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=0Ww7UVEoNue.

Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, and Balaji Lakshminarayanan. Plex: Towards reliability using pretrained large model extensions, 2022.

Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multisource dermatoscopic images of common pigmented skin lesions. *Sci. Data*, 5(1):180161, August 2018.

David Warde-Farley, Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio. An empirical analysis of dropout in piecewise linear networks, 2014.

Jae Oh Woo. Active learning in bayesian neural networks with balanced entropy learning principle. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=ZTMuZ68B1g.

Ofer Yehuda, Avihu Dekel, Guy Hacohen, and Daphna Weinshall. Active learning through a covering lens. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=u6MpfQPx9ck.

Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. Cold-start active learning through self-supervised language modeling. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 7935–7948, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.637. URL https://aclanthology.org/2020.emnlp-main.637.

Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, and Neil Houlsby. A large-scale study of representation learning with the visual task adaptation benchmark, 2020.

Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition.
*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2017.
Review 1:

Summary: This paper systematically studies four critical components of effective active learning in the context of foundation models. Based on the results of this study, the paper challenges established research on the cold-start problem and counters the notion that uncertainty-based queries are ineffective in the low-budget regime. Finally, the paper introduces a new AL strategy called DropQuery.

Strengths and Weaknesses:

**Strengths**
1. This paper carries out in-depth research on critical components of active learning.
2. The advantages of the proposed strategy are verified from several angles.

**Weaknesses**
1. My main concern is that the main idea of this article is not clear. After reading the contributions of this article, I think the main contribution is the new strategy. But in fact, not only the title but also the main part of the article is about the effects of the four critical components of effective active learning. I think the authors should be clear about what they most want to show and modify the structure of the article accordingly.
2. In Section 3.2, this paper carries out a large number of experiments and shows us the results. However, the paper only gives the experimental settings and lacks an analysis of the experimental results. This is also the case in Sections 3.1, 3.3, and 3.4.
3. The findings of the experiments should be clearly stated.
4. The setting of parameter M in Section 4 is not clear. It should be explained why M = 3 rather than another value.
5. There is a lack of comparison with baselines in the experiments in Section 5.2. The results can only tell us that DropQuery works on biomedical datasets. But is it really better than other existing strategies?

Requested Changes: Please kindly refer to the above comments.

Broader Impact Concerns: No.

==================================================

Review 2:

Summary: This work studies several design choices in Active Learning (AL) in the context of image classification based on frozen vision foundation models (e.g., DINO and CLIP), and proposes a new AL strategy, DropQuery. Through analytical experiments with DINOv2-ViT-g/14, various existing AL methods, and several image classification datasets, this work demonstrates that 1) the initial pool selection greatly influences the early stage of AL; 2) sample diversity and representativeness should be emphasized in AL selection; and 3) utilization of semi-supervised learning may not be a necessary option. Based on these observations, this work proposes DropQuery, an effective AL strategy that 1) measures the uncertainty of samples via dropout, 2) clusters high-priority samples, and 3) queries labels for the samples closest to the cluster centroids. Experiments validate the efficacy of DropQuery on natural images and out-of-domain biomedical images.

Strengths and Weaknesses:

Strengths
1. This work studies AL for image classification based on pre-trained large-scale vision/vision-language models. This study can bring insights for applications of vision foundation models in domain-specific, low-data tasks.
2. The evaluated datasets are comprehensive, including both natural images and out-of-domain (medical) images.
3. The comparison between the proposed AL strategy DropQuery and prior methods clearly shows the advantages of DropQuery.
4. The writing is generally clear and easy to follow.

Weaknesses
1. [Impact of initial pool (in the long run)] In Section 3.1, AL methods starting with a randomly sampled initial pool are compared with those starting from a centroid-based initial pool, and the results "dispel previous preliminary findings Chandra et al. (2020) that emphasize that there is no significant difference in initial pool selection strategies for AL." However, from the reviewer's perspective, the conclusions of this work and Chandra et al. (2020) do not seem to contradict each other. In this work, Table 1 indeed demonstrates that the centroid-based initial pool leads to greatly improved performance at the very beginning of AL (e.g., t = 1). However, as the number of iterations increases, the benefits diminish or even become negative (e.g., t = 8 or 16 in ImageNet-100). The observation from Chandra et al. (2020) is that "Experimental results could not conclusively prove that intelligently sampled initial pools are better for AL than random initial pools *in the long run*," which does not seem to be very different from the results in Section 3.1. It is suggested to provide some clarification or revise the related claims to avoid misunderstandings of the conclusions in this work.
2. [Utilization of unlabeled instances] In Section 3.4, it is claimed that "the efficacy of *semi-supervised learning* in this setting is questionable at best, with wide variation across query methods and datasets," based on experiments with (an offline variant of) label propagation (Iscen et al. (2019)). However, this method of utilizing unlabeled instances is just one example of semi-supervised learning, and the conclusion that semi-supervised learning is ineffective may not generalize well to other methods, such as Mean Teacher (Tarvainen and Valpola (2017)), self-training (Yalniz et al. (2019)), or Meta Pseudo Labels (Pham et al. (2021)).
3. [Transition from observations to proposal] The connection between the analytical experiments (Section 3) and the proposed new AL strategy (Section 4) does not seem very strong. It may be helpful to directly point out which parts of DropQuery originate from the observations.

Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, and Vineeth N. Balasubramanian. On initial pools for deep active learning. In Preregister@NeurIPS, 2020.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, 2017.
I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. arXiv, 2019.
Hieu Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le. Meta Pseudo Labels. In CVPR, 2021.

Requested Changes: Most of the requested changes have been illustrated above. In addition, these minor issues may be fixed:

1. In the paragraph right before Section 3.1, the query budget is set as "1 sample per class per iteration." Using the word "class" may cause some confusion, because at the time of querying, the class labels are unknown. According to later sections, "1 sample per cluster" may be more suitable.
2. The analytical experiments in Section 3 are all performed with DINOv2-ViT-g/14, which may lack generality. It may be helpful to show some more results considering other models and scales.
3. The evaluated datasets do not seem to involve long-tail distributions, which are actually common in real-world scenarios. It is unclear if the proposed clustering-based AL strategy is still helpful on long-tail classification tasks.
Some attempts at long-tail classification, or at least a discussion, may be beneficial.

4. Grammar issues:
- [Abstract] "... efficiency, but …" → "... efficiency. But …"
- [Introduction] "... where labeled data is scarce and …" → "... where labeled data are scarce and …"
- [Sec 3.1] "The results some queries like…" is grammatically incorrect.
- [Sec 3.4] "However, in our experiments find that" is grammatically incorrect.
- Additional errors throughout the manuscript necessitate thorough proofreading and correction.

Broader Impact Concerns: No significant adverse effects are apparent from this study. On the contrary, the proposed method enhances the training environment in data-scarce situations, thereby potentially broadening the accessibility of models for individuals who have restricted access to large-scale datasets.

==================================================

Review 3:

Summary: This paper mainly targets active learning. The authors explore the usage of foundation models in AL with regard to several important components. Based on their findings, they propose a new AL strategy which utilizes the inconsistency among predictions of input features under different dropout perturbations. The authors conduct several experiments to show the effectiveness of the proposed method.

Strengths and Weaknesses:

Strengths:
1. This paper provides meaningful experimental results explaining how foundation models can help in AL.
2. The proposed DropQuery is simple and effective.

Weaknesses:
1. It would be better to reorganize the paper so that more ablation study results can be presented in the main paper.
2. I wonder if such a method can be applied to supervised-trained backbone models. Also, can such a method be applied to non-contrastive SSL backbone models, e.g., MAE?
3. Fig. 2 can be improved for better understanding.

Requested Changes: Please refer to the weaknesses.

Broader Impact Concerns: NA

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The paper explores the integration of vision foundation models into active learning (AL) strategies, particularly focusing on low-budget scenarios. The authors evaluate the impact of using foundation models on three crucial aspects of AL: initial labeled pool selection, diversity sampling, and the balance between representativeness and uncertainty in sampling. Their findings inform a newly developed AL strategy, "DropQuery," which combines uncertainty estimation through dropout with sample diversity to enhance AL efficiency. This strategy is extensively tested across various challenging image classification benchmarks, including natural and biomedical images, demonstrating its potential to refine active learning processes in complex image datasets.

The paper is overall solid. The authors thoroughly investigate critical components of effective AL, offering new insights particularly useful for domain-specific, low-data tasks. The investigation of AL in the context of foundation models is interesting and valuable. The proposed AL strategy, DropQuery, is simple yet effective across various tests. On the weakness side, several reviewers noted that the main ideas and contributions could be clearer and more distinctively articulated within the paper. The authors were able to address some concerns with their rebuttal, but the paper could further benefit from appropriate rewriting and re-organization. The authors should better address reviewer sEZP's questions with the following changes:
Add the MAE backbone experiment besides DINOv2. 2. Re-organize and improve the paper according to the suggestions. ==================================================
# Conformal Data Cleaning: Statistical Guarantees For Data Quality Automation In Tables

Anonymous authors Paper under double-blind review

## Abstract

Machine Learning (ML) components have become ubiquitous in modern software systems. In practice, there remain major challenges associated with both the translation of research innovations to real-world applications as well as the maintenance of ML components. Many of these challenges, such as high predictive performance, robustness, and ethical concerns, are related to data quality and, in particular, to the lack of automation in data pipelines upstream of ML components. While many approaches have been developed for automating data quality monitoring and improvement, it remains an open research question to what extent data cleaning can be automated. Many of the solutions proposed are tailored to specific use cases or application scenarios, require manual heuristics, or cannot be applied to heterogeneous data sets. More importantly, most approaches do not lend themselves easily to full automation. Here, we propose a novel cleaning approach, *Conformal Data Cleaning* (CDC), combining an application-agnostic ML-based data cleaning approach with conformal prediction (CP). CP is a model-agnostic and distribution-free method to calibrate ML models that equips their outputs with statistical guarantees, allowing CDC to automatically identify and fix data errors in single cells of heterogeneous tabular data. We demonstrate in extensive empirical evaluations that the proposed approach improves downstream ML tasks in the majority of our experiments. At the same time, it allows full automation and integration into existing ML pipelines. We believe that CDC has the potential to improve data quality with little to no manual effort for researchers and practitioners and, thereby, contribute to more responsible usage of ML technology. Our code is available on GitHub: *redacted GitHub link*.

## 1 Introduction

Machine Learning (ML) components have become ubiquitous in modern software applications. While researchers have made significant progress in developing better models, much of this innovation is only slowly translated into real-world applications. Research at the intersection of ML and database management systems has identified several potential causes, including the difficulty of maintaining an ML system (Sculley et al., 2015) and challenges related to data quality (Breck et al., 2019; Schelter et al., 2018). While many aspects of modern ML workflows have become simpler thanks to standardized APIs and libraries, data quality control remains one of the most impactful and difficult-to-automate parts (Biessmann et al., 2021), especially during ML deployment. Here, we focus on one of the most common and relevant use cases of ML applications: we assume that an ML model was trained on clean data and that, at inference time, the data quality deteriorates, impacting predictive performance. When implementing ML systems, it is common to measure the data quality and remove erroneous examples, e.g., via outlier detection (Zhao et al., 2019). However, in many scenarios, low-quality data points cannot be discarded, and a prediction would be desirable. For this reason, one line of research focuses on training robust ML algorithms that are agnostic to data quality issues.
For tabular data, these techniques, e.g., regularization (Kadra et al., 2021) and data augmentation (Machado et al., 2022), have been reported to be not as useful yet as for other data modalities (Borisov et al., 2022). On the other hand, there is a substantial body of literature in the ML and database management communities spotlighting data quality and trying to detect which attributes of the examples are erroneous to enable data cleaning. Those approaches often lack the necessary degree of *automation* to apply them to production systems, because they rely on user input, e.g., constraints, rules (Rekatsinas et al., 2017), or cleaning suggestions (Mahdavi & Abedjan, 2020), or because they only work with specific attribute or error types (Qahtan et al., 2018).

Contributions In this study, we aim at improving automation in cleaning tasks by leveraging statistical dependencies inherent to the data. We propose a novel data cleaning approach, Conformal Data Cleaning (CDC), that detects and cleans erroneous attributes of heterogeneous tabular data by exploiting the dependency structure in a table. We combine ML-based data imputation approaches (Jäger et al., 2021) with conformal prediction (CP) (Vovk et al., 2005). Using ML models allows for exploiting the columns' dependencies, but calibrating the outputs of such imputation methods can be challenging. We propose to build on the ideas of conformal prediction to ensure that the cleaning results enjoy strong statistical guarantees about whether an attribute, i.e., cell, is conforming to the training data (given other columns' values). In that sense, CDC is a procedure allowing users to combine a wide range of ML models, outlier detection, or data imputation methods to automatically clean (unseen) test data. For this reason, we benchmark CDC on 18 heterogeneous tabular data sets and show that it improves downstream ML tasks in the majority of our experiments.

Structure of this study In Section 2, we cover related work. Section 3 describes the theoretical foundation and the main idea of CDC. Section 4 describes the implementation of our experimental study, and Section 5 describes its results, which are discussed in Section 6. Finally, we conclude in Section 7 and sketch ideas for future work.

## 2 Related Work

The term *data cleaning* has different meanings depending on the research area. Here, we briefly describe the differences and highlight how these relate to and influence our work. We focus on tabular data and do not consider methods for other data types (e.g., texts or images).

Outlier detection Outlier detection (also known as anomaly detection) identifies data points (examples) that differ substantially from others and is well-known in the machine learning (ML) community. Unsupervised approaches that predict whether or not a data point is an outlier have been studied extensively, based on classical machine learning (Ramaswamy et al., 2000; Liu et al., 2008), empirical cumulative distribution functions (Li et al., 2020; 2022), and neural networks (Hawkins et al., 2002; Chen et al., 2017; Wang et al., 2020). Many of the best-performing methods (Han et al., 2022) are available through software packages (Zhao et al., 2019; 2021). The idea of outlier detection is to remove outliers, i.e., entire data points, from a data set. In many application scenarios, a complementary goal is to detect and potentially clean anomalous attributes of a data point.
While standard outlier detection methods can be applied to this task with minor modifications, these methods are trained on the statistical distribution of a single attribute or dimension and neglect statistical dependencies between attributes.

Cell-based error detection and correction Cell-based error detection and correction focuses on errors in individual attributes or dimensions of data points. This task is less studied in the statistical learning community than outlier detection. In contrast, there is a large body of literature in the data management community investigating cell-based error detection and correction methods for relational databases, i.e., tabular data. However, these approaches are often specialized for detecting specific error types, e.g., violations of rules, constraints, or functional dependencies (Dallachiesa et al., 2013; Rekatsinas et al., 2017), inconsistencies (Ham, 2013), or (disguised) missing values ("999999" for a phone number) (Qahtan et al., 2018), and rely on user input. Also, in the data management community, ML methods are increasingly being used for data cleaning, employing semi-supervision (Mahdavi et al., 2019), active learning (Neutatz et al., 2019), or self-supervision (Liu et al., 2022) to exploit user input more efficiently. Most correction approaches can tackle all error types but require user input (e.g., Krishnan et al., 2016; Rekatsinas et al., 2017; Mahdavi & Abedjan, 2020). Abdelaal et al. (2023) show in an extensive benchmark that many of these detection and correction methods can be combined, which is similar to our approach. Since we focus on improving the automation of cleaning tasks, we build on methods that do not rely on user input.

Missing data and data imputation Missing values are common data quality issues but do not require complex detection mechanisms. However, after detecting erroneous cells in tabular data sets, one can treat error correction as a data imputation problem. The ML community has developed many strategies to handle missing values, ranging from discarding rows or columns, over column-wise imputation (Rubin, 1976) and imputation based on other columns' values (Rubin, 1987; Stekhoven & Buhlmann, 2012; Biessmann et al., 2019), to deep generative models (Yoon et al., 2018; Camino et al., 2019; Yoon & Sull, 2020; Nazábal et al., 2020). Here, we leverage recent work in imputation for heterogeneous data as a central component in our cleaning approach.

Label errors So far, we have presented data quality issues that relate to the input data. However, Northcutt et al. (2021a) show that mislabeled examples are common in computer vision, natural language processing, and audio processing benchmarks. Therefore, different methods are available that detect these label errors (Pleiss et al., 2020; Northcutt et al., 2021b; Chen et al., 2021). This type of cleaning is important to obtain a trustworthy training data set. It usually requires manual inspection of the cleaning results to ensure responsible usage of the models trained on the presumably clean training data. Another line of work develops approaches that are robust against label errors, e.g., by adapting the loss (Patrini et al., 2017) or using specialized layers (Goldberger & Ben-Reuven, 2022). In this work, we consider a complementary problem setting where we aim for automated data (not label) cleaning at inference time.
To summarize, our work differs from the mentioned studies as we aim for an unsupervised (no user-labeled data, constraints, thresholds, or rules required) approach that detects and cleans erroneous cells of tabular data independent of their error type. Further, we focus on an automated process that can be readily integrated into existing ML pipelines.

## 3 Methodology

In this section, we introduce the theoretical foundation of the conformal framework and develop a generic data cleaning procedure. In short, we build on established ML-based data cleaning approaches and combine them with conformal predictors, allowing us to detect erroneous cells of heterogeneous tabular data and to clean them. This simple cleaning procedure, detailed in Section 3.2, is generic enough to account for heterogeneous types of data errors and can be readily implemented in standard ML libraries (see Appendix A).

## 3.1 Conformal Predictors

Conformal predictors are uncertainty quantification methods that allow the calculation of statistically rigorous confidence intervals (regression) or sets (classification) from any point estimator. Vovk et al. (2005) originally described transductive and inductive conformal predictors. However, inductive conformal predictors are much more computationally efficient and were quickly adapted for more complex machine learning (ML) applications (e.g., Papadopoulos, 2008; Balasubramanian et al., 2014). In the following, we refer to inductive conformal predictors as conformal predictors (CP) if not stated otherwise.

## 3.1.1 Conformal Prediction Framework

Assume $\mathcal{D} := \mathcal{X} \times \mathcal{Y} \subset \mathbb{R}^d \times \mathbb{R}$ is the data space for a regression problem. We sample a training dataset $D_{train} := \{X \times Y\}$ and a calibration dataset $D_{calib} := \{X_{n \times d} \times Y_n\}$; the two data sets are independent and identically distributed (i.i.d.). To obtain a conformal predictor, we first fit a machine learning regression model $f : \mathbb{R}^d \to \mathbb{R}$ to the training data $D_{train}$ and obtain the predictor $\hat{f}$. Using the trained model, we compute calibration nonconformity scores $R_{calib} = r_1, ..., r_n$ with $r_i \in [0, \infty]$ for the calibration data $D_{calib}$ using a *nonconformity score function* $S : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$:

$$\hat{y}_{calib} = \hat{f}(X_{calib}), \qquad R_{calib} = S(\hat{y}_{calib}, y_{calib}). \tag{1}$$

Intuitively, nonconformity scores represent how different one data point $(x_i, y_i)$ is from what the fitted predictor $\hat{f}$ expects it to be $(x_i, \hat{y}_i)$. Since smaller scores are better, the calibration nonconformity scores $R_{calib}$ will be relatively small if $\hat{f}$ represents $D_{train}$ well. In this example, assume $S(\hat{y}, y) = |\hat{y} - y|$, the absolute error between the prediction and the label. This nonconformity score function is commonly used for regression problems. Then, we compute $\hat{q}$, the $k$-th empirical quantile of $R_{calib}$, as follows:

$$k = \frac{\lceil (n+1)(1-\alpha) \rceil}{n}, \qquad \hat{q} = quantile(R_{calib}, k), \tag{2}$$

where $\alpha \in [0, 1]$ is the given significance level.1 Lastly, for new and unseen test data $X_{test}$, we need to construct the prediction interval $\mathcal{T}$. Since we are using the absolute errors as nonconformity scores, we compute the prediction intervals as $\mathcal{T}(X_{test}) = \hat{y}_{test} \pm \hat{q}$. Since $\hat{q}$ is based on the calibration nonconformity scores and the given significance level $\alpha$, the conformal framework guarantees that $\mathcal{T}(X_{test})$ contains $y_{test}$ (the true label) with probability at least $1 - \alpha$, or in other words, with a confidence level of $1 - \alpha$.
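To make this procedure concrete, the following is a minimal sketch of the split conformal steps in Equations (1) and (2), using synthetic data and a random forest as the point predictor (both are illustrative choices, not the AutoGluon-based models described in Section 4.1):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative synthetic regression data (not from the benchmark).
rng = np.random.default_rng(0)
X = rng.normal(size=(2500, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=2500)

# Disjoint training and calibration splits (i.i.d. assumption).
X_train, y_train = X[:2000], y[:2000]
X_calib, y_calib = X[2000:], y[2000:]

alpha = 0.1  # significance level, i.e., 90% target coverage
f_hat = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Equation (1): absolute-error nonconformity scores on the calibration set.
r_calib = np.abs(f_hat.predict(X_calib) - y_calib)

# Equation (2): conformal quantile of the calibration scores
# (method="higher" gives the conservative empirical quantile; numpy >= 1.22).
n = len(r_calib)
k = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(r_calib, k, method="higher")

def predict_interval(X_new):
    """Prediction interval covering y_test with probability >= 1 - alpha."""
    y_hat = f_hat.predict(X_new)
    return y_hat - q_hat, y_hat + q_hat
```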
More formally, conformal predictors (CPs) ensure that:

$$\mathbb{P}(y_{test} \in \mathcal{T}(X_{test})) \geq 1 - \alpha. \tag{3}$$

If the model $\hat{f}$ fits the data $D_{train}$ well, the prediction intervals $\mathcal{T}$ will be narrow. However, if $\hat{f}$ performs poorly, the prediction intervals will be broader to satisfy Statement (3). This property is known as marginal coverage (Lei & Wasserman, 2014; Lei et al., 2018). To summarize, by applying the conformal prediction framework, the model predicts intervals or sets that are statistically guaranteed to satisfy Statement (3), independent of the model's choice, its predictive performance, or the data's distribution.2

1 Originally, Vovk et al. (2005) calculated *p-values* for each nonconformity score. However, it has been proven that relying on the fitted residual distribution and $\hat{q}$, as described above, is equivalent (Lei et al., 2018) and commonly used in modern ML applications (Zeni et al., 2020).
2 There are assumptions for CP, e.g., $|\mathcal{X}| > 0$, $|\mathcal{Y}| > 1$, and that the data is *exchangeable*. However, in a typical ML setting, where the data is i.i.d., these are negligible because a sequence of i.i.d. random variables is exchangeable. For more details, we refer the reader to, e.g., Vovk et al. (2005); Zeni et al. (2020).

## 3.1.2 Conformal Quantile Regression

The marginal coverage property has one major drawback: it does not guarantee good prediction intervals (Lei & Wasserman, 2014) because they have constant widths and cannot vary depending on $x$. Among other approaches, *Conformal Quantile Regression (CQR)* was developed to address this shortcoming (Romano et al., 2019). The main idea is to fit $f : \mathbb{R}^d \to \mathbb{R}^2$ to $D_{train}$'s *lower* $q_{\alpha_{lo}}$ and *upper* $q_{\alpha_{up}}$ empirical quantiles to obtain $\hat{f}$, where $q_{\alpha_{lo}} = \alpha/2$ and $q_{\alpha_{up}} = 1 - \alpha/2$. Using the nonconformity score function $S(\hat{y}, y) = \max(\hat{y}_{\alpha_{lo}} - y,\ y - \hat{y}_{\alpha_{up}})$ in the above-described conformal framework and computing the prediction intervals as $\mathcal{T}(X_{test}) = [\hat{y}_{\alpha_{lo}} - \hat{q},\ \hat{y}_{\alpha_{up}} + \hat{q}]$, it is possible to calibrate the predicted quantiles to satisfy Statement (3). Since $\hat{y}$ is two-dimensional, $\hat{y}_{\alpha_{lo}}$ represents the predicted lower quantile and $\hat{y}_{\alpha_{up}}$ the upper quantile. For proofs or intuitions about the score function, we refer the reader to Romano et al. (2019).

## 3.1.3 Conformal Classification

Transferring the conformal framework to classifiers is straightforward. First, we choose an appropriate nonconformity score function and, second, select a suitable method to construct the prediction sets. For classification, the label space differs slightly: $\mathcal{Y} \subset \mathbb{N}$. However, many classifiers, especially neural networks using softmax activation, predict probability values for each class. Therefore, classifiers can be seen as $f : \mathbb{R}^d \to [0, 1]^{|\mathcal{Y}|}$. It is worth mentioning the worst-case scenario, where these probabilities can be poorly calibrated (Guo et al., 2017). Nevertheless, they are good starting points for defining the nonconformity score function. $S(\hat{y}_c) = 1 - \hat{y}_c$ is commonly used, where $\hat{y}_c$ denotes the predicted probability of the true class $c$. The prediction set $\mathcal{T}$ for an unseen test example contains all classes whose nonconformity score does not exceed $\hat{q}$, i.e., $\mathcal{T}(X_{test}) = \{c : S(\hat{y}_{test_c}) < \hat{q}\}$. Similarly to regression CP, the marginal coverage property does not guarantee good prediction sets: the coverage of the classes can vary heavily. A simple yet powerful extension is *class-conditioned* conformal classification.3
In this approach, calibration nonconformity scores are stratified by class, and multiple $\hat{q}^{(c)}$ are calculated. The following equations represent the necessary adaptations, where $\bullet^{(c)}$ represents the subset for class $c$:

$$R_{calib}^{(c)} = S\big(\hat{y}_c^{(c)}\big), \qquad k^{(c)} = \frac{\big\lceil \big(n^{(c)}+1\big)(1-\alpha) \big\rceil}{n^{(c)}}, \qquad \hat{q}^{(c)} = quantile\big(R_{calib}^{(c)}, k^{(c)}\big), \qquad \mathcal{T}(X_{test}) = \big\{c : S\big(\hat{y}_{test_c}\big) < \hat{q}^{(c)}\big\}. \tag{4}$$

A class-conditioned conformal classifier is guaranteed to satisfy the stronger *class-conditioned* coverage:

$$\mathbb{P}(y_{test} \in \mathcal{T}(X_{test}) \mid y_{test} = y) \geq 1 - \alpha, \quad \forall y \in \mathcal{Y}. \tag{5}$$

In the remainder of this work, we refer to class-conditioned conformal classifiers (CCP) as conformal classifiers.

3 Class-conditioned conformal predictors are also known as *Mondrian conformal predictors*. For details and proofs, we refer the reader to Vovk et al. (2005).

## 3.2 Data Cleaning With Conformal Predictors

In the following, we demonstrate that the concepts of conformal prediction can help to automate data cleaning routines. We follow an ML-based approach, introduced by van Buuren & Oudshoorn (1999), allowing us to exploit the columns' dependencies, and use conformal predictors to detect which cells are erroneous given the information of all other cells in that row. Therefore, during training, we fit an ML model for each column, where all other columns are the model's features, and calibrate its output. During deployment, we test column-wise which values are erroneous, i.e., which values do not belong to the prediction sets/intervals, and replace them with the underlying ML point prediction.

Formally, let $D_{train}$ be a dataset and *cleaner* our proposed method. Then, *cleaner* fits $d$ models on subsets of the data, where $X_c^{train} = D_{\{1,...,d\}\setminus\{c\}}^{train}$ is the training data and $y_c^{train} = D_c^{train}$ the labels to fit $cleaner_c$ to clean column $c \in \{1, ..., d\}$.

Outlier detection ML models' predictions are subject to uncertainties, but most model types do not explicitly state them. Even if models predict uncertainties, these are not necessarily well-calibrated (see Section 3.1.3 for an example). This means users cannot rely on the model's uncertainty information. Therefore, we use CPs that predict statistically rigorous confidence sets/intervals for a given significance level $\alpha$. For new and unseen test data $D_{n \times d}^{test}$ and a given significance level, e.g., $\alpha = 0.01$, each of the fitted CPs $\widehat{cleaner}_c$ predicts sets/intervals $\mathcal{T}_{i,c}$, for every test example $i \in \{1, ..., n\}$ of the corresponding column $c \in \{1, ..., d\}$, as follows:

$$X_{i,c}^{test} = D_{i,\{1,...,d\}\setminus\{c\}}^{test}, \qquad \mathcal{T}_{i,c} = \widehat{cleaner}_c\big(X_{i,c}^{test}\big). \tag{6}$$

If $D_{i,c}^{test}$ is drawn from the same distribution as both the training and calibration data, Statement (3) holds. Hence, if $D_{i,c}^{test} \in \mathcal{T}_{i,c}$, we assume that the data point $D_{i,c}^{test}$ is an *inlier*. Otherwise, if the data point $D_{i,c}^{test}$ falls outside the confidence interval $\mathcal{T}_{i,c}$, we assume $D_{i,c}^{test}$ is an outlier. Note that the statistical guarantee in Equation (3) is defined for inliers, not for outliers.
One can think of computing a matrix $B_{n \times d}^{test} \subset \{0, 1\}$ according to:

$$B_{i,c}^{test} = \begin{cases} 1, & \text{if } D_{i,c}^{test} \notin \mathcal{T}_{i,c} \\ 0, & \text{otherwise.} \end{cases} \tag{7}$$

In other words, the matrix $B^{test}$ represents outliers of $D^{test}$ as 1, which is valuable for outlier cleaning.

Outlier cleaning After detecting outliers, i.e., computing the matrix $B^{test}$, it is straightforward to clean them. With $B^{test}$ representing the outliers of $D^{test}$, we can remove the outliers (cell-based) from $D^{test}$ and treat the situation as a missing value problem. However, this requires the application of another, potentially independent tool, for example, as shown by Jäger et al. (2021); Nazábal et al. (2020); Yoon et al. (2018); Yoon & Sull (2020); Biessmann et al. (2019). Since the conformal framework, described in Section 3.1.1, relies on an underlying ML model, we use its predictions to clean outliers. In detail, our calculations for $B^{test}$ differ slightly. Instead of using the values $\{0, 1\}$, we use the *best* prediction of the ML model. Formally, these are minor changes to Statement (7). The $\widehat{cleaner}_c$ CPs return not only a confidence set/interval but also the best prediction for an example, which is used to clean outliers:

$$\mathcal{T}_{i,c},\ \hat{y}_{i,c} = \widehat{cleaner}_c\big(X_{i,c}^{test}\big), \qquad B_{i,c}^{test} = \begin{cases} \hat{y}_{i,c}, & \text{if } D_{i,c}^{test} \notin \mathcal{T}_{i,c} \\ NAN, & \text{otherwise,} \end{cases} \tag{8}$$

where $NAN$ is a placeholder representing non-existing values. Replacing all values of $D^{test}$ with $B^{test}$ wherever $B_{i,c}^{test} \neq NAN$ effectively cleans $D^{test}$.

## 4 Implementation And Experimental Setup

In this section, we give implementation details of CDC and our baseline and describe our experimental benchmark and the metrics used to compare the results.

## 4.1 Conformal Data Cleaner Implementation

The Conformal Data Cleaner (CDC) implementation, described in Section 3.2, uses our own conformal framework implementation (see Section 3.1.1).4 Among others, it contains classes for CQR (Romano et al., 2019) and CCP based on AutoGluon (Erickson et al., 2020), allowing us to test many ML model types and optimize their hyperparameters (HPO). Since AutoGluon offers a unified API, we do not need to implement the necessary code for each model. CDC's implementation iterates over each column and, depending on its type, fits a CQR optimizing pinball loss5 or a CCP optimizing the *F1* metric (two categories) or *macro F1* (more than two categories). Internally, AutoGluon finds the best model from random forests, extremely randomized trees, k-nearest neighbors, linear regression, and a FastAI-based neural network (NN) (Howard & Gugger, 2020). For NNs, it further uses 50 trials to optimize their hyperparameters, where we use the default search spaces. Unfortunately, for the other model types, HPO is not implemented, but some predefined hyperparameter settings are tested. We disable model stacking and bagging and expose the best-performing model (without model ensembles) for data cleaning through the API described in Appendix A. Directly afterward, as part of the fit interface, we use 1000 data points6 (not used during training) to calibrate the model.

4 Our conformal prediction framework is publicly available on GitHub: *redacted GitHub link*
5 *Pinball loss* is the most common metric for quantile regression. See also Section 3.1.2.
6 Angelopoulos & Bates (2021) discussed the effect of the calibration set size and show that 1000 data points work well. However, tabular datasets are typically smaller, and calibration set sizes < 1000 could be sufficient. We leave this for future research.
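To make the detect-and-clean loop of Equations (6)-(8) concrete, here is a minimal sketch for numeric columns. All names are hypothetical (in particular, `make_conformal_model` and `predict_with_interval`, which mirror the earlier conformal regression sketch); the actual implementation uses AutoGluon-based CQR/CCP models:

```python
import pandas as pd

def fit_cleaners(df_train, make_conformal_model):
    """Fit one calibrated conformal model per column (Section 3.2)."""
    cleaners = {}
    for col in df_train.columns:
        X = df_train.drop(columns=[col])  # all other columns are features
        y = df_train[col]                 # the column itself is the target
        cleaners[col] = make_conformal_model().fit(X, y)
    return cleaners

def clean(df_test, cleaners):
    """Equations (6)-(8): flag cells outside their prediction interval and
    replace them with the model's point prediction (numeric case only)."""
    cleaned = df_test.copy()
    for col, cleaner in cleaners.items():
        X = df_test.drop(columns=[col])
        (lo, hi), y_hat = cleaner.predict_with_interval(X)  # hypothetical API
        outlier = (df_test[col] < lo) | (df_test[col] > hi)
        cleaned.loc[outlier, col] = y_hat[outlier.to_numpy()]
    return cleaned
```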
Data shifts and empty prediction sets The conformal framework applied to classification problems allows empty set predictions. This typically means that the input data was very different from the training and calibration data, i.e., the underlying stationarity assumption was likely violated. In such a scenario, the conformal framework is no longer valid and loses its guarantees. This is, however, not only true for conformal prediction, as almost all machine learning algorithms implicitly assume stationarity of the data. While some research investigates to what extent algorithms can be made robust against covariate shifts (Sugiyama & Kawanabe, 2012), many of these approaches are designed for special types of algorithms or data distributions (von Bünau et al., 2009). Complementary to this line of work, other researchers have recently proposed several model-agnostic approaches to detect covariate shifts (Rabanser et al., 2018; Schelter et al., 2020) or label shifts (Lipton et al., 2018). In principle, these model-agnostic solutions could be applied as a preprocessing step before applying conformal data cleaning. Empirically, however, we observed that considering cells for which conformal cleaning yields an empty prediction set as inliers and refraining from cleaning these data points resulted in better downstream performance. For this reason, we slightly modify the CDC implementation for categorical columns and check whether the prediction set is non-empty before applying Statement (7).

## 4.2 Baseline Implementation

As a baseline cleaning method, we use *PyOD* (Zhao et al., 2019) to detect outliers, more precisely *ECOD*7 (Li et al., 2022), which is currently, in many scenarios, one of the best-performing outlier detectors. PyOD combines many algorithms and makes them available through a unified API, which allows us to fit and predict whether or not a given input example is an outlier. We therefore fit an outlier detector iteratively for each column, which allows us to compute cell-wise outlier information. Note that, when using column vectors as training data, we trade information about column dependencies for cell-wise (instead of row-wise) outlier information. After detecting outliers, there are many possibilities to correct these data errors (e.g., Chu et al., 2016; Li et al., 2019). However, a widely used strategy is using the column-wise mean for numerical and the mode for categorical columns, which we use for simplicity. We expose the PyOD-based baseline cleaning approach through the cleaning API described in Appendix A; a simplified sketch is shown below.

## 4.3 Cleaning Benchmark

To benchmark the proposed CDC thoroughly, we use openly available datasets from OpenML (Vanschoren et al., 2014) for the three most common task types: regression, binary classification, and multi-class classification. We start from a data imputation benchmark (Jäger et al., 2021) that focuses on the same downstream tasks and remove those datasets that do not fulfill the criteria for tabular datasets formulated by Grinsztajn et al. (2022). We choose datasets with at least 50,000 cells and prefer fewer columns because CDC fits one model for each column, which can get time-consuming. Appendix B presents the 18 datasets that fulfill our requirements and information about their size and columns.
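As an illustration of the Section 4.2 baseline, the following minimal sketch fits one ECOD detector per column and replaces flagged cells with the training data's column-wise mean or mode. It is a simplified stand-in, not the exact benchmark code; in particular, the ad-hoc category encoding would need to be shared between train and test in practice:

```python
import pandas as pd
from pyod.models.ecod import ECOD

def baseline_clean(df_train, df_test, contamination=0.1):
    """Per-column ECOD detection with mean/mode correction (sketch)."""
    cleaned = df_test.copy()
    for col in df_train.columns:
        if pd.api.types.is_numeric_dtype(df_train[col]):
            detector = ECOD(contamination=contamination)
            detector.fit(df_train[[col]].to_numpy())
            # PyOD's predict returns 1 for outliers and 0 for inliers.
            is_outlier = detector.predict(df_test[[col]].to_numpy()) == 1
            cleaned.loc[is_outlier, col] = df_train[col].mean()
        else:
            # Categorical columns: encode as integer codes before fitting
            # (simplified; a shared train/test encoding is needed in practice).
            codes_train = df_train[col].astype("category").cat.codes.to_frame()
            codes_test = df_test[col].astype("category").cat.codes.to_frame()
            detector = ECOD(contamination=contamination).fit(codes_train)
            is_outlier = detector.predict(codes_test) == 1
            cleaned.loc[is_outlier, col] = df_train[col].mode().iloc[0]
    return cleaned
```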
We split the datasets 80/20 into training and test sets and apply *Jenga*8 (Schelter et al., 2021) to each of the test sets multiple times to create several corrupted test sets. Jenga can corrupt datasets with realistic error types that model real-world scenarios. However, besides missing values, which are out of the scope of this study, we do not distinguish the error types and refer to them as *outliers*, i.e., observations that differ substantially from others. We use four error types (*swapped values*, *scaling*, *Gaussian noise*, and *categorical shift*) with 1%, 5%, 10%, 30%, and 50% error fractions, which results in 20 corrupted test sets for each dataset. Since we do not mix the error types, the actual error fraction after applying Jenga can be smaller than expected, which is caused by the dependency between the error type and the datasets' columns. For example, applying Gaussian noise to a dataset without numerical columns would result in no outliers. Figure 1 gives an overview of our benchmark, containing a wide range of error fractions for each downstream task.

7 Unsupervised outlier detection using empirical cumulative distribution functions. For more details, we refer the reader to Li et al. (2022).
8 "Jenga is an experimentation library that allows data science practitioners and researchers to study the effect of common data corruptions." GitHub: https://github.com/schelterlabs/jenga

![7_image_0.png](7_image_0.png)

Figure 1: Error distribution. The error fractions are distributed similarly for all downstream tasks. About 22% of the test sets do not have errors, 50% have about 5% or fewer, 75% have about 25% or less, and 25% have between 25% and 50% errors.

## 4.4 Experiments

In ML projects, researchers and practitioners often use high-quality and clean datasets for training and testing. However, in production, data quality can degrade, and the (downstream) model's performance drops. Data cleaning methods as part of the ML pipeline have the potential to reduce this effect. To simulate this scenario and empirically benchmark CDC's performance against the PyOD-based baseline, we use the following experimental setup. We use clean, high-quality training data to fit the downstream model9 and the two cleaning methods with multiple hyperparameter settings. All experiments are repeated three times to collect information about their robustness. For CDC, this entails resampling the 1000 calibration data points (see Section 4.1) for each repetition. To compare the two cleaning methods, we use three performance measures: *baseline performance*, *corrupted performance*, and *cleaned performance*. These measure the downstream performance on the original (clean) test datasets and on the erroneous test datasets without and with data cleaning (by each method separately); a sketch of this protocol follows below.

Hyperparameter settings Both cleaning methods require exactly one hyperparameter: the *confidence level* for CDC and *contamination* for the PyOD-based baseline cleaner. CDC's confidence level describes how well the cells are required to conform with the training data to be seen as inliers (see Section 3.2). Intuitively, the larger the confidence level, the fewer cells are cleaned. Here, we run six experiments with confidence levels 0.9, 0.99, 0.999, 0.9999, 0.99999, and 0.999999. On the other hand, PyOD's contamination is the proportion of expected outliers10. Therefore, the larger it is, the more cells are cleaned. We run five experiments with contamination equal to 0.1, 0.2, 0.3, 0.4, and 0.499.
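The following sketch summarizes the three performance measures described above (all names are hypothetical; `score` stands for the task metric from Section 4.5):

```python
def evaluate_setting(downstream_model, cleaner, X_test, X_test_corrupt, y_test, score):
    """Compute the three performance measures of Section 4.4 (sketch).

    downstream_model: fitted on the clean training data
    cleaner:          fitted/calibrated on the same clean training data
    """
    baseline = score(y_test, downstream_model.predict(X_test))           # clean test set
    corrupted = score(y_test, downstream_model.predict(X_test_corrupt))  # no cleaning
    cleaned = score(y_test, downstream_model.predict(cleaner.transform(X_test_corrupt)))
    return baseline, corrupted, cleaned
```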
In the following, we refer to these cleaning method-hyperparameter combinations as *experiment settings*, *experiments*, or *settings*.

## 4.5 Comparison Metrics

Depending on the downstream task, we use *F1* for binary classification, *macro F1* for multi-class classification, and the root mean square error (*RMSE*) for regression datasets to report the performance measures.

Normalizing performance metrics across datasets Comparing results across datasets with different difficulties and metrics is not directly possible. For this reason, we normalize the results for each dataset separately to range between 0 (worst) and 1 (best). This is similar to the distance-to-the-minimum metric used by Grinsztajn et al. (2022) and Feurer et al. (2022). To represent how much cleaning improves (or reduces) the downstream performance, we calculate the downstream performance improvement (corrupted vs. cleaned) relative to the corrupted performance and scale the values for each dataset separately between −1 and 0 for performance degradation and between 0 and 1 for performance improvement. This separation is necessary to preserve the information on whether or not the downstream performance improved.

9 We use Jenga, which builds on scikit-learn's SGDClassifier for classification or SGDRegressor for regression tasks, preprocesses the columns (scaling, missing value imputation, and one-hot encoding), and optimizes some hyperparameters (grid search).
10 In real-world scenarios, this cannot be known upfront, which is a major drawback of this method.

Finally, to report the outlier detection performance, we use the *true positive rate (TPR)*, i.e., the probability of detection, and the *false positive rate (FPR)*, i.e., the probability of false alarm.

## 5 Results

Before evaluating the results, we average over the three repetitions of each experiment; the averaged results are presented in the following. Since there are 120 (corrupted) test sets for each dataset and, therefore, multiple results for each experiment, we use box plots to visualize their distributions. Box plots visually show five summary statistics: minimum, maximum, first quartile, median, and third quartile. We group the results by experiment setting and error fraction to reveal potential trends depending on these variables. Appendix C shows and discusses the results for each downstream task separately.

## 5.1 Outlier Detection

![8_image_0.png](8_image_0.png)

Figure 2 visualizes the outlier detection performance.

Figure 2: (*Left*) Outlier detection TPR (↑) vs. (*Right*) FPR (↓). CDC generally detects fewer outliers than the baseline (left), especially for higher error fractions. CDC with higher confidence levels is more robust regarding its hyperparameter than the baseline, which optimizes the TPR. On the other hand, CDC balances TPR and FPR.

True positive rate In general, CDC's outlier detection TPR (↑) is worse than the baseline's but more robust regarding its hyperparameter. Lower confidence levels (0.9 and 0.99) are the exception and tend to achieve higher TPR. However, the baseline's TPR increases with increasing contamination, which is more pronounced for higher error fractions.

False positive rate The outlier detection FPR (↓) shows similar effects. Compared to the baseline, CDC's results are, especially for high confidence levels, more robust to changing hyperparameters. Increasing error fractions generally degrade CDC's FPR, whereas the baseline's FPR increases only slightly. However, the baseline's FPR is almost directly determined by its hyperparameter.
TPR vs. FPR An optimal cleaner would detect all outliers, i.e., $TPR = 1$, and would not flag any inliers, i.e., $FPR = 0$. Figure 2 clearly shows that the baseline cleaner focuses on outlier detection without minimizing errors. Our conformal data cleaning approach, on the other hand, aims for both a high TPR and a low FPR.

## 5.2 Outlier Cleaning

Figure 3 shows the normalized results of the downstream model applied to cleaned data and the relative improvement over the corrupted performance. The cleaned performance gives insights into which approach works better but does not show whether the performance improved or degraded relative to not using a data cleaning step.

![9_image_0.png](9_image_0.png)

Figure 3: (*Left*) Cleaned performance and (*Right*) downstream improvement (normalized). CDC achieves higher cleaned performance (left) and improves the downstream performance (right) in about 75% of the experiments (first quartile ≥ 0). The baseline's results are more dispersed and among the worst results.

Cleaned performance Unsurprisingly, increasing error fractions decrease the downstream performance. However, using CDC performs better than the baseline, especially with higher confidence levels. Furthermore, CDC's box plots are shorter, meaning the results are less dispersed.

Cleaned improvement The baseline's first quartile is always negative. In some cases, the median is close to zero or even negative, meaning that in more than 25% (or 50% for the median) of those experiments, the baseline cleaning approach leads to worse predictive performance in the downstream task. In the range of 0-10% error fraction (about one-third of the experiments do not have errors, see Figure 1), the baseline leads to degradation in about 75% of the experiments, while CDC achieves improvements in about 50% of the experiments. Further, in about 67% of the experiment settings, at least one hyperparameter for CDC leads to an improved downstream performance. For the baseline, this is the case in only 43% of the settings. Generally, CDC's results are less dispersed (shorter boxes) and lead to improved predictive performance in the downstream task.

## 5.3 Comparing Best-Performing Results For The Experiments

Figure 4 shows the best-performing experiment settings for CDC and the PyOD-based cleaner. Each colored cross represents one experiment and distinguishes the datasets, while its size represents the error fraction. The gray dashed lines visualize the identity. Therefore, marks in the upper left half show experiments where CDC outperforms the baseline.

![10_image_0.png](10_image_0.png)

Figure 4: (*Left*) CDC vs. PyOD baseline data cleaning performance and (*Right*) improvement in predictive performance in the downstream task (normalized). Colors distinguish datasets, cross sizes represent the error fraction, and diagonal dashed lines visualize identical performance of the approaches. Experiments above the diagonal lines show that CDC performs better than the baseline, which is the case in 75% of the experiments when comparing the cleaned performance and, respectively, in 72.5% for the downstream improvement.

In 75% of the experiments, CDC's cleaned performance (normalized) outperforms the baseline's, respectively in 72.5% for the downstream improvement (normalized). Further, in about 67%, CDC leads to improved downstream performance, while the baseline only achieves this in about 42% of the cases. Results for datasets with more errors tend to show smaller cleaning performance but better downstream improvement.
Further, the PyOD-based cleaner shows the best results (around 1) for many datasets with ≥ 40% error fraction, while CDC performs worse. This is in line with Figure 3, which shows that for high error fractions the baseline's downstream improvement is, in many cases, better than CDC's, represented by larger median and third quartile values.

## 6 Discussion

We investigate the performance of CDC on many datasets and a wide range of artificially created but realistic errors. The results are compared to a PyOD-based cleaning method that does not incorporate column dependencies. In the following, we highlight some of the key findings.

## 6.1 CDC Outperforms PyOD-Based Cleaning

Our experimental results, visualized in Figure 3, show that CDC is superior to the PyOD-based baseline cleaner. First, CDC's results are less dispersed (smaller boxes in Figure 3), meaning users can expect fewer performance outliers. Second, CDC achieves higher median values for both the cleaned performance and the downstream improvement in the majority of experiments. Lastly, the first quartiles are higher (in many cases ≥ 0) for the downstream improvement, indicating that applying CDC increases the downstream performance in more than 75% of our experiments compared to not cleaning the test data. Taking Figure 4 into account, this effect also holds when comparing the best-performing model settings for the experiments. These results demonstrate that fully automated cleaning at inference time leads to improvements in the majority of cases, even though Figure 2 shows that the baseline has a better outlier detection TPR. In combination with the other results, it is evident that other factors are also important, e.g., a smaller FPR and accurate cleaning.

Influence of error fraction Comparing the box plots' median values in Figure 3, higher error fractions (≥ 30%) decrease the performance differences between CDC and the baseline, and, eventually, the PyOD-based cleaner outperforms CDC. Figure 4 (right) shows many experiments with larger error fractions where the baseline outperforms CDC. Multiple errors in a single test example degrade CDC's performance because the underlying ML models suffer from erroneous attributes in (test) samples. Jäger et al. (2021) show that ML-based approaches using column dependencies for data imputation problems are superior to column-based approaches not using column dependencies. However, they also showed that in the high error fraction regime, ML methods could perform worse. Typically, multiple iterations of data imputation (or cleaning) (Little & Rubin, 2002) improve the results in these scenarios. We leave the implementation and testing of *multiple CDC* for future research.

## 6.2 Influence Of CDC's Hyperparameter **Confidence Level**

We describe in Section 3.2 that CDC's hyperparameter *confidence level* directly influences how strong the evidence has to be for a value to be marked as an outlier. Accordingly, Figure 2 shows decreasing TPR and FPR with increasing confidence levels. Surprisingly, with confidence levels ≥ 0.999, this trend stagnates and reverses, which is more pronounced for the FPR than for the TPR. The cleaned performance in Figure 3 (left) similarly shows that increasing confidence levels perform better and finally degrade with confidence levels > 0.99999. These plots show that driving the confidence level towards 1 is not necessarily desirable and that CDC produces good results for confidence levels between 0.999 and 0.99999.

## 6.3 Tree-Based Models Perform Best In The Majority Of Cases

To clean values, CDC fits one ML model for every column.
As mentioned in Section 4.1, we use AutoGluon for our experiments to find the best model-hyperparameter combination. In about 46% of these cases, AutoGluon finds a FastAI NN to be best performing, in about 31% an extremely randomized tree (XT), and in about 23% a random forest (RF). Given that for the FastAI NN we optimize 50 different hyperparameter settings (random search) but only three each for RF and XT, it is surprising that tree-based models (54%) outperform the FastAI NN. However, for data imputation using the same approach, Jäger et al. (2021) already showed that RFs work well, which is in line with a study by Grinsztajn et al. (2022). They provide evidence that tree-based models still outperform neural networks on tabular data. In the future, CDC's performance could be increased by focusing on tree-based models and optimizing their hyperparameters, which we leave for future research.

## 6.4 Limitations

In this study, we use tabular datasets as defined by Grinsztajn et al. (2022) with four to eleven columns (categorical, numerical, and mixed) and about 4,800 to 89,000 rows without missing values to benchmark three common downstream tasks: regression, binary classification, and multi-class classification. Thus, our approach and results cannot be transferred to other data modalities, such as text- or image-based datasets. However, since we benchmarked CDC on a wide range of dataset sizes, we assume that it generalizes well to larger or smaller tabular datasets.

We focus on discriminative ML approaches as they are typically used for data cleaning (Abdelaal et al., 2023) and data imputation (Jäger et al., 2021) problems. There is evidence demonstrating that (generative) neural networks are often outperformed by tree-based models (Grinsztajn et al., 2022), which is why we focus our empirical comparison on discriminative methods. Applying the ideas of conformal prediction to generative methods is, however, an interesting extension for future research.

As described in Section 4.4, we assume that high-quality training data is available and data errors are tackled at inference time. While this assumption is often violated, we believe that it is a sensible simplification of the problem: First, it allows for substantial improvements in the degree of automation. Our results demonstrate that fully automated cleaning will improve the downstream predictive performance in over 75% of the cases. Second, the curation of training data is often a necessary part of model development cycles in ML. Model development typically involves several iterations, often related to the data preparation stages. Improving data quality and curating the initial training data is an essential step to ensure responsible usage of ML components; the curated data used for training the ML model can usually be used for training the cleaning models without additional curation efforts. Other approaches focus on application scenarios where there is no high-quality training data available to calibrate the cleaning models. Such cleaning systems often circumvent the assumption of high-quality training data by requiring additional user input (e.g., Mahdavi et al., 2019; Mahdavi & Abedjan, 2020; Neutatz et al., 2019; Krishnan et al., 2017; 2016).

## 7 Conclusion And Future Work

In this study, we present how conformalized machine learning models can be used to detect and clean erroneous values of heterogeneous tabular data without requiring user input.
We benchmark conformal data cleaning (CDC) on 18 datasets to experimentally compare its performance to a baseline approach. To this end, we use a wide range of error fractions (5) and error types (4) to create 360 corrupted test data sets. Our results show that in about 75% of our experiments, using CDC increases the downstream performance while being robust to hyperparameter changes. Importantly, these results were obtained without any manual intervention and could readily be used to mitigate data quality problems at inference time in a variety of application scenarios. In almost all experiments, using CDC outperforms the PyOD-based baseline cleaner. Lastly, we found that in 54% of the cases, tree-based models perform better than FastAI NNs, nearest neighbors, or linear regression models. Therefore, we recommend applying CDC with tree-based models and optimizing their hyperparameters more thoroughly. In our experiments, using CDC with a confidence level between 0.999 and 0.99999 worked well. In the future, we plan to test an iterative cleaning approach similar to multiple imputation, which has the potential to further increase CDC's performance, especially with many erroneous values. Further, we plan to investigate the usage of generative ML models.

## References

Mohamed Abdelaal, Christian Hammacher, and Harald Schoening. Rein: A comprehensive benchmark framework for data cleaning methods in ml pipelines. *Proceedings of the VLDB Endowment (PVLDB)*, 2023.

Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification, 2021. URL http://arxiv.org/abs/2107.07511.

Vineeth Balasubramanian, Shen-Shyang Ho, and Vladimir Vovk. *Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications*. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1st edition, 2014. ISBN 978-0-12-398537-8.

Felix Biessmann, Tammo Rukat, Phillipp Schmidt, Prathik Naidu, Sebastian Schelter, Andrey Taptunov, Dustin Lange, and David Salinas. DataWig: Missing Value Imputation for Tables. *Journal of Machine Learning Research*, 20(175):1–6, 2019. ISSN 1533-7928. URL http://jmlr.org/papers/v20/18-753.html.

Felix Biessmann, Jacek Golebiowski, Tammo Rukat, Dustin Lange, and Philipp Schmidt. Automated data validation in machine learning systems. *Bulletin of the IEEE Computer Society Technical Committee on Data Engineering*, 2021.

Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. Deep Neural Networks and Tabular Data: A Survey. *IEEE Transactions on Neural Networks and Learning Systems*, pp. 1–21, 2022. ISSN 2162-2388. doi: 10.1109/TNNLS.2022.3229161.

Eric Breck, Neoklis Polyzotis, Sudip Roy, Steven Euijong Whang, and Martin Zinkevich. Data Validation for Machine Learning. In *SysML*, pp. 1–14, 2019.

Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake Vanderplas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In *ECML PKDD Workshop: Languages for Data Mining and Machine Learning*, pp. 108–122, September 2013.

Ramiro D. Camino, Christian A. Hammerschmidt, and Radu State. Improving Missing Data Imputation with Deep Generative Models. *arXiv:1902.10666 [cs, stat]*, February 2019.
URL http://arxiv.org/abs/1902.10666.

Derek Chen, Zhou Yu, and Samuel R. Bowman. Clean or Annotate: How to Spend a Limited Data Collection Budget, October 2021. URL https://arxiv.org/abs/2110.08355v2.

Jinghui Chen, Saket Sathe, Charu Aggarwal, and Deepak Turaga. Outlier Detection with Autoencoder Ensembles. In *Proceedings of the 2017 SIAM International Conference on Data Mining (SDM)*, pp. 90–98. Society for Industrial and Applied Mathematics, June 2017. doi: 10.1137/1.9781611974973.11. URL https://epubs.siam.org/doi/10.1137/1.9781611974973.11.

Xu Chu, Ihab F. Ilyas, Sanjay Krishnan, and Jiannan Wang. Data Cleaning: Overview and Emerging Challenges. In *Proceedings of the 2016 International Conference on Management of Data*, SIGMOD '16, pp. 2201–2206, New York, NY, USA, June 2016. Association for Computing Machinery. ISBN 978-1-4503-3531-7. doi: 10.1145/2882903.2912574. URL https://dl.acm.org/doi/10.1145/2882903.2912574.

Michele Dallachiesa, Amr Ebaid, Ahmed Eldawy, Ahmed Elmagarmid, Ihab F. Ilyas, Mourad Ouzzani, and Nan Tang. NADEEF: a commodity data cleaning system. In *Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data*, SIGMOD '13, pp. 541–552, New York, NY, USA, June 2013. Association for Computing Machinery. ISBN 978-1-4503-2037-5. doi: 10.1145/2463676.2465327. URL https://doi.org/10.1145/2463676.2465327.

Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. March 2020. doi: 10.48550/arXiv.2003.06505. URL http://arxiv.org/abs/2003.06505. arXiv:2003.06505 [cs, stat].

Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. *Journal of Machine Learning Research*, 23(261):1–61, 2022. ISSN 1533-7928. URL http://jmlr.org/papers/v23/21-0992.html.

Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. July 2022. URL https://openreview.net/forum?id=H12GRgcxg.

Léo Grinsztajn, Edouard Oyallon, and Gaël Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? In *NeurIPS 2022 Datasets and Benchmarks Track*, New Orleans, United States, November 2022. URL https://hal.archives-ouvertes.fr/hal-03723551.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 1321–1330. PMLR, July 2017. URL https://proceedings.mlr.press/v70/guo17a.html. ISSN: 2640-3498.

Kelli Ham. OpenRefine (version 2.5). http://openrefine.org. Free, open-source tool for cleaning and transforming data. *Journal of the Medical Library Association: JMLA*, 101(3):233, July 2013. ISSN 1536-5050. doi: 10.3163/1536-5050.101.3.020. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3738091/. Publisher: Medical Library Association.

Songqiao Han, Xiyang Hu, Hailiang Huang, Mingqi Jiang, and Yue Zhao. ADBench: Anomaly Detection Benchmark, September 2022. URL http://arxiv.org/abs/2206.09426. arXiv:2206.09426 [cs].

Simon Hawkins, Hongxing He, Graham Williams, and Rohan Baxter. Outlier Detection Using Replicator Neural Networks. In Yahiko Kambayashi, Werner Winiwarter, and Masatoshi Arikawa (eds.), *Data Warehousing and Knowledge Discovery*, Lecture Notes in Computer Science, pp. 170–180, Berlin, Heidelberg, 2002. Springer. ISBN 978-3-540-46145-6.
doi: 10.1007/3-540-46145-0_17.

Jeremy Howard and Sylvain Gugger. Fastai: A Layered API for Deep Learning. *Information*, 11(2):108, February 2020. ISSN 2078-2489. doi: 10.3390/info11020108. URL https://www.mdpi.com/2078-2489/11/2/108.

Sebastian Jäger, Arndt Allhorn, and Felix Bießmann. A benchmark for data imputation methods. *Frontiers in Big Data*, 4, 2021. ISSN 2624-909X. doi: 10.3389/fdata.2021.693674. URL https://www.frontiersin.org/articles/10.3389/fdata.2021.693674.

Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Well-tuned Simple Nets Excel on Tabular Datasets. In *Advances in Neural Information Processing Systems*, volume 34, pp. 23928–23941. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/hash/c902b497eb972281fb5b4e206db38ee6-Abstract.html.

Sanjay Krishnan, Jiannan Wang, Eugene Wu, Michael J. Franklin, and Ken Goldberg. ActiveClean: interactive data cleaning for statistical modeling. *Proceedings of the VLDB Endowment*, 9(12):948–959, August 2016. ISSN 2150-8097. doi: 10.14778/2994509.2994514. URL https://doi.org/10.14778/2994509.2994514.

Sanjay Krishnan, Michael J. Franklin, Ken Goldberg, and Eugene Wu. BoostClean: Automated Error Detection and Repair for Machine Learning, November 2017. URL http://arxiv.org/abs/1711.01299. arXiv:1711.01299 [cs].

Jing Lei and Larry Wasserman. Distribution-free prediction bands for non-parametric regression. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 76(1):71–96, January 2014. ISSN 1369-7412. doi: 10.1111/rssb.12021. URL https://onlinelibrary.wiley.com/doi/10.1111/rssb.12021.

Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. Distribution-Free Predictive Inference for Regression. *Journal of the American Statistical Association*, 113(523):1094–1111, July 2018. ISSN 0162-1459. doi: 10.1080/01621459.2017.1307116. URL https://doi.org/10.1080/01621459.2017.1307116. Publisher: Taylor & Francis.

Peng Li, Xi Rao, Jennifer Blase, Yue Zhang, Xu Chu, and Ce Zhang. CleanML: A benchmark for joint data cleaning and machine learning [experiments and analysis]. Technical report, 2019.

Zheng Li, Yue Zhao, Nicola Botta, Cezar Ionescu, and Xiyang Hu. COPOD: Copula-Based Outlier Detection. In *2020 IEEE International Conference on Data Mining (ICDM)*, pp. 1118–1123, November 2020. doi: 10.1109/ICDM50108.2020.00135. ISSN: 2374-8486.

Zheng Li, Yue Zhao, Xiyang Hu, Nicola Botta, Cezar Ionescu, and George Chen. ECOD: Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions. *IEEE Transactions on Knowledge and Data Engineering*, pp. 1–1, 2022. ISSN 1558-2191. doi: 10.1109/TKDE.2022.3159580.

Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. Detecting and correcting for label shift with black box predictors. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 3122–3130. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/lipton18a.html.

Roderick J. A. Little and Donald B. Rubin. *Statistical Analysis with Missing Data. 2nd Edition*. John Wiley & Sons, Inc., 2002. ISBN 0471802549.

Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation Forest. In *2008 Eighth IEEE International Conference on Data Mining*, pp. 413–422, December 2008. doi: 10.1109/ICDM.2008.17. ISSN: 2374-8486.
Zifan Liu, Zhechun Zhou, and Theodoros Rekatsinas. Picket: guarding against corrupted data in tabular data during learning and inference. *The VLDB Journal*, 31(5):927–955, September 2022. ISSN 0949-877X. doi: 10.1007/s00778-021-00699-w. URL https://doi.org/10.1007/s00778-021-00699-w.

Pedro Machado, Bruno Fernandes, and Paulo Novais. Benchmarking Data Augmentation Techniques for Tabular Data. In Hujun Yin, David Camacho, and Peter Tino (eds.), *Intelligent Data Engineering and Automated Learning - IDEAL 2022*, Lecture Notes in Computer Science, pp. 104–112, Cham, 2022. Springer International Publishing. ISBN 978-3-031-21753-1. doi: 10.1007/978-3-031-21753-1_11.

Mohammad Mahdavi and Ziawasch Abedjan. Baran: effective error correction via a unified context representation and transfer learning. *Proceedings of the VLDB Endowment*, 13(12):1948–1961, July 2020. ISSN 2150-8097. doi: 10.14778/3407790.3407801. URL https://doi.org/10.14778/3407790.3407801.

Mohammad Mahdavi, Ziawasch Abedjan, Raul Castro Fernandez, Samuel Madden, Mourad Ouzzani, Michael Stonebraker, and Nan Tang. Raha: A Configuration-Free Error Detection System. In *Proceedings of the 2019 International Conference on Management of Data*, SIGMOD '19, pp. 865–882, New York, NY, USA, June 2019. Association for Computing Machinery. ISBN 978-1-4503-5643-5. doi: 10.1145/3299869.3324956. URL https://doi.org/10.1145/3299869.3324956.

Alfredo Nazábal, Pablo M. Olmos, Zoubin Ghahramani, and Isabel Valera. Handling incomplete heterogeneous data using VAEs. *Pattern Recognition*, 107:107501, November 2020. ISSN 0031-3203. doi: 10.1016/j.patcog.2020.107501. URL https://linkinghub.elsevier.com/retrieve/pii/S0031320320303046.

Felix Neutatz, Mohammad Mahdavi, and Ziawasch Abedjan. ED2: A Case for Active Learning in Error Detection. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, CIKM '19, pp. 2249–2252, New York, NY, USA, November 2019. Association for Computing Machinery. ISBN 978-1-4503-6976-3. doi: 10.1145/3357384.3358129. URL https://doi.org/10.1145/3357384.3358129.

Curtis Northcutt, Anish Athalye, and Jonas Mueller. Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks*, 1, December 2021a. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/f2217062e9a397a1dca429e7d70bc6ca-Abstract-round1.html.

Curtis Northcutt, Lu Jiang, and Isaac Chuang. Confident Learning: Estimating Uncertainty in Dataset Labels. *Journal of Artificial Intelligence Research*, 70:1373–1411, April 2021b. ISSN 1076-9757. doi: 10.1613/jair.1.12125. URL https://jair.org/index.php/jair/article/view/12125.

Harris Papadopoulos. *Inductive Conformal Prediction: Theory and Application to Neural Networks*. IntechOpen, August 2008. ISBN 978-953-7619-03-9. doi: 10.5772/6078. URL https://www.intechopen.com/chapters/5294. Publication Title: Tools in Artificial Intelligence.

Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 2233–2241, July 2017. doi: 10.1109/CVPR.2017.240. ISSN: 1063-6919.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12(85):2825–2830, 2011. ISSN 1533-7928. URL http://jmlr.org/papers/v12/pedregosa11a.html.

Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. Identifying Mislabeled Data using the Area Under the Margin Ranking. In *Advances in Neural Information Processing Systems*, volume 33, pp. 17044–17056. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/hash/c6102b3727b2a7d8b1bb6981147081ef-Abstract.html.

Abdulhakim A. Qahtan, Ahmed Elmagarmid, Raul Castro Fernandez, Mourad Ouzzani, and Nan Tang. FAHES: A Robust Disguised Missing Values Detector. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, KDD '18, pp. 2100–2109, New York, NY, USA, July 2018. Association for Computing Machinery. ISBN 978-1-4503-5552-0. doi: 10.1145/3219819.3220109. URL https://doi.org/10.1145/3219819.3220109.

Stephan Rabanser, Stephan Günnemann, and Zachary C. Lipton. Failing loudly: An empirical study of methods for detecting dataset shift. 2018. URL www.tensorflow.org/tfx/data_validation/get_started#checking_data_skew_and_drift.

Sridhar Ramaswamy, Rajeev Rastogi, and Kyuseok Shim. Efficient algorithms for mining outliers from large data sets. *ACM SIGMOD Record*, 29(2):427–438, May 2000. ISSN 0163-5808. doi: 10.1145/335191.335437. URL https://doi.org/10.1145/335191.335437.

Theodoros Rekatsinas, Xu Chu, Ihab F. Ilyas, and Christopher Ré. HoloClean: holistic data repairs with probabilistic inference. *Proceedings of the VLDB Endowment*, 10(11):1190–1201, August 2017. ISSN 2150-8097. doi: 10.14778/3137628.3137631. URL https://dl.acm.org/doi/10.14778/3137628.3137631.

Yaniv Romano, Evan Patterson, and Emmanuel J. Candès. Conformalized quantile regression. In *Proceedings of the 33rd International Conference on Neural Information Processing Systems*, number 318, pp. 3543–3553. Curran Associates Inc., Red Hook, NY, USA, December 2019.

D. B. Rubin. *Multiple Imputation for Nonresponse in Surveys*. Wiley, 1987.

Donald B. Rubin. Inference and missing data. *Biometrika*, 63(3):581–592, 1976. ISSN 0006-3444. URL http://www.jstor.org/stable/2335739.

Sebastian Schelter, Felix Biessmann, Tim Januschowski, David Salinas, Stephan Seufert, and Gyuri Szarvas. On Challenges in Machine Learning Model Management. *Bull. IEEE Comput. Soc. Tech. Comm. Data Eng.*, pp. 5–13, 2018.

Sebastian Schelter, Tammo Rukat, and Felix Biessmann. Learning to Validate the Predictions of Black Box Classifiers on Unseen Data. In *Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data*, pp. 1289–1299, Portland, OR, USA, June 2020. ACM. ISBN 978-1-4503-6735-6. doi: 10.1145/3318464.3380604. URL https://dl.acm.org/doi/10.1145/3318464.3380604.

Sebastian Schelter, Tammo Rukat, and Felix Biessmann. JENGA - A Framework to Study the Impact of Data Errors on the Predictions of Machine Learning Models. OpenProceedings.org, 2021. doi: 10.5441/002/EDBT.2021.63. URL https://openproceedings.org/2021/conf/edbt/p134.pdf.

D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean François Crespo, and Dan Dennison.
Hidden technical debt in machine learning systems. In *Advances in Neural Information Processing Systems*, volume 28, pp. 2503–2511, 2015.

D. J. Stekhoven and P. Bühlmann. MissForest–non-parametric missing value imputation for mixed-type data. *Bioinformatics*, 28(1):112–118, January 2012. ISSN 1367-4803, 1460-2059. doi: 10.1093/bioinformatics/btr597. URL https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btr597.

Masashi Sugiyama and Motoaki Kawanabe. *Machine Learning in Non-Stationary Environments: Introduction to Covariate Shift Adaptation*. The MIT Press, March 2012. ISBN 978-0-262-01709-1. doi: 10.7551/mitpress/9780262017091.001.0001. URL https://direct.mit.edu/books/book/3774.

Stef van Buuren and Karin Oudshoorn. *Flexible Multivariate Imputation by MICE*, volume PG/VGZ/99.054. TNO Prevention and Health, 1999.

Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: networked science in machine learning. *ACM SIGKDD Explorations Newsletter*, 15(2):49–60, 2014. ISSN 1931-0145. doi: 10.1145/2641190.2641198. URL https://doi.org/10.1145/2641190.2641198.

Paul von Bünau, Frank C. Meinecke, and Klaus-Robert Müller. Stationary subspace analysis. In Tülay Adali, Christian Jutten, João Marcos Travassos Romano, and Allan Kardec Barros (eds.), *Independent Component Analysis and Signal Separation, 8th International Conference, ICA 2009, Paraty, Brazil, March 15-18, 2009. Proceedings*, volume 5441 of *Lecture Notes in Computer Science*, pp. 1–8. Springer, 2009. doi: 10.1007/978-3-642-00599-2_1. URL https://doi.org/10.1007/978-3-642-00599-2_1.

Vladimir Vovk, A. Gammerman, and Glenn Shafer. *Algorithmic learning in a random world*. Springer, New York, 2005. ISBN 978-0-387-00152-4.

Hu Wang, Guansong Pang, Chunhua Shen, and Congbo Ma. Unsupervised Representation Learning by Predicting Random Distances, July 2020. URL http://arxiv.org/abs/1912.12186. arXiv:1912.12186 [cs, stat].

Jinsung Yoon, James Jordon, and Mihaela van der Schaar. GAIN: Missing Data Imputation using Generative Adversarial Nets. arXiv preprint arXiv:1806.02920, June 2018. URL http://arxiv.org/abs/1806.02920.

Seongwook Yoon and Sanghoon Sull. GAMIN: Generative Adversarial Multiple Imputation Network for Highly Missing Data. In *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 8453–8461, June 2020. doi: 10.1109/CVPR42600.2020.00848.

Gianluca Zeni, Matteo Fontana, and Simone Vantini. Conformal Prediction: a Unified Review of Theory and New Challenges. arXiv preprint arXiv:2005.07972, May 2020. URL http://arxiv.org/abs/2005.07972.

Yue Zhao, Zain Nasrullah, and Zheng Li. PyOD: A Python Toolbox for Scalable Outlier Detection. *Journal of Machine Learning Research*, 20(96):1–7, 2019. ISSN 1533-7928. URL http://jmlr.org/papers/v20/19-011.html.

Yue Zhao, Xiyang Hu, Cheng Cheng, Cong Wang, Changlin Wan, Wen Wang, Jianing Yang, Haoping Bai, Zheng Li, Cao Xiao, Yunlong Wang, Zhi Qiao, Jimeng Sun, and Leman Akoglu. SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection. *Proceedings of Machine Learning and Systems*, 3:463–478, March 2021. URL https://proceedings.mlsys.org/paper/2021/hash/98dce83da57b0395e163467c9dae521b-Abstract.html.

## A Automated Data Cleaning API

In the ML field, it is, for good reasons, best practice to implement streamlined APIs. Well-known and widely used is the *scikit-learn* API (Pedregosa et al., 2011).
Its main components are estimators providing the fit method to learn models and transformers offering the transform method, which returns a transformed version of the input data (Buitinck et al., 2013). Further, scikit-learn allows the implementation of pipelines to combine different steps, e.g., pre-processing the data (transformer) and then fitting/predicting using the ML model (estimator). The cleaning API we propose is integrable into scikit-learn pipelines and consists of four methods:

```
fit(training_data)
    # Fit conformal predictor for each column of training_data
    return fitted_estimator

remove_outliers(test_data)
    # Test: are test_data's values in the prediction sets/intervals?
    # If not: remove them
    return data_with_NAN, outlier_mask

impute(test_data)
    # Impute missing values with the CP's best prediction
    return data_without_NAN, imputed_mask

transform(test_data)
    # Combine 'remove_outliers' and 'impute' methods
    return data_without_NAN_or_outlier, cleaned_mask
```

The methods remove_outliers and impute are custom interfaces that expose the functionality of detecting and correcting outliers (see Section 3.2). The method transform combines these and, therefore, fulfills scikit-learn's transformer abstraction.
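For illustration, the following is a minimal, self-contained sketch of how such a cleaner could be realized as a scikit-learn transformer for purely numerical columns, using split conformal prediction intervals. The class name `ConformalDataCleaner` and all implementation details here are illustrative assumptions, not the actual implementation; in particular, the real method also handles categorical columns via conformal prediction sets.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


class ConformalDataCleaner(BaseEstimator, TransformerMixin):
    """Illustrative sketch: one split-conformal regressor per numerical column."""

    def __init__(self, alpha=0.01):
        self.alpha = alpha  # miscoverage level; 1 - alpha is the target coverage

    def fit(self, X, y=None):
        X = np.asarray(X, dtype=float)
        self.models_, self.q_hats_ = [], []
        for j in range(X.shape[1]):
            other = np.delete(X, j, axis=1)
            X_tr, X_cal, y_tr, y_cal = train_test_split(
                other, X[:, j], test_size=0.25, random_state=0
            )
            model = RandomForestRegressor(n_estimators=50, random_state=0)
            model.fit(X_tr, y_tr)
            # Split conformal: calibrate the interval width on held-out residuals.
            scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
            k = min(int(np.ceil((len(scores) + 1) * (1 - self.alpha))) - 1,
                    len(scores) - 1)
            self.models_.append(model)
            self.q_hats_.append(scores[k])
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=float).copy()
        for j, (model, q) in enumerate(zip(self.models_, self.q_hats_)):
            pred = model.predict(np.delete(X, j, axis=1))
            outlier = np.abs(X[:, j] - pred) > q  # value outside its interval
            X[outlier, j] = pred[outlier]         # remove the outlier and impute
        return X
```

Because transform here both removes and imputes, the wrapped cleaner satisfies scikit-learn's transformer abstraction and can precede any downstream estimator in a Pipeline.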
## B Datasets

Table 1: Datasets overview. ID is the assigned OpenML id, # denotes the number of, *Cat.* and *Num.* stand for categorical and numerical columns, and *Obs.* means observations, i.e., the number of rows of the tabular dataset.

| ID | Task Type | #Cat. | #Num. | #Obs. | #Cells |
|-------|-------------|-------|-------|--------|---------|
| 725 | Binary | 1 | 7 | 8,192 | 65,536 |
| 310 | Binary | 1 | 5 | 11,183 | 67,098 |
| 1046 | Binary | 1 | 4 | 15,545 | 77,725 |
| 823 | Binary | 1 | 7 | 20,640 | 165,120 |
| 42493 | Binary | 4 | 3 | 26,969 | 188,783 |
| 4135 | Binary | 5 | 4 | 32,769 | 294,921 |
| 251 | Binary | 1 | 8 | 39,366 | 354,294 |
| 151 | Binary | 2 | 6 | 45,312 | 362,496 |
| 40498 | Multi Class | 9 | 2 | 4,898 | 53,878 |
| 30 | Multi Class | 1 | 9 | 5,473 | 54,730 |
| 198 | Regression | 4 | 2 | 9,517 | 57,102 |
| 23515 | Regression | 0 | 6 | 10,081 | 60,486 |
| 1199 | Regression | 3 | 6 | 17,496 | 157,464 |
| 1193 | Regression | 7 | 2 | 31,104 | 279,936 |
| 218 | Regression | 0 | 8 | 22,784 | 182,272 |
| 23395 | Regression | 3 | 1 | 89,640 | 358,560 |
| 42225 | Regression | 6 | 3 | 53,940 | 485,460 |
| 1200 | Regression | 0 | 9 | 59,049 | 531,441 |

## C Results Separated By Downstream Task

To reveal potential relations between downstream tasks and model performance after cleaning, Figure 5 presents the results from Section 5 (Figure 3), additionally grouped by downstream task type. Note that there are eight regression and eight binary classification datasets but only two multi-class classification datasets (see Table 1). The normalized cleaned performance for the regression datasets (bottom left sub-plot) shows that almost all results are around 1. This is because RMSE, the underlying metric, does not have a defined range; in adverse cases, a few large values can dominate the normalization. This typically happens when the error fraction is larger, visualized by the diamond markers around 0–0.2, because many erroneous cells decrease the downstream model's performance. Since the cleaning methods rely on the columns' dependencies, errors also affect their performance. Besides this, the other sub-plots reflect the trends shown in Figure 3 and discussed in Section 5.2. Namely, an increasing error fraction decreases the normalized cleaned performance.

While CDC typically leads to better results, its performance gap over the baseline shrinks with larger error fractions, similar to the downstream improvement. However, for regression with error fractions > 40%, the baseline cleaner outperforms CDC (comparing median values). Generally, the baseline leads to more dispersed results (larger boxes), except for the normalized downstream improvement on regression tasks, where both methods show exceptionally large variance.

Not surprisingly, comparing the best-performing settings separately for each downstream task, we do not find a systematic effect of the downstream task in Figure 6. As mentioned in Section 5.3, in 75% of the experiments, CDC's cleaned performance (normalized) outperforms the baseline, and in 72.5% for the downstream improvement (normalized). Noteworthy, this trend is also valid for all downstream tasks: multi-class classification 90% and 87.5%, binary classification 78.1% and 75.6%, and regression 68.1% and 65.6%. Further, CDC improves the downstream performance in about 55% of the multi-class classification tasks, whereas the baseline achieves improvements in only 20% of the experiments. Similarly for binary classification tasks with 63% and 39%, and for regression tasks with 74% and 50%.

Figure 5: (*Left*) Cleaned performance, (*Right*) downstream improvement (normalized); rows represent downstream tasks. For regression tasks, the downstream improvements are more dispersed, less pronounced, and, with larger error fractions, worse than the baseline.

Figure 6: (*Left*) CDC vs. PyOD baseline data cleaning performance, (*Right*) improvement in predictive performance in the downstream task (normalized); rows represent downstream tasks. We observe empirically that the improvement with our proposed method appears to be less pronounced for downstream regression tasks.
Review 1:

Summary: The authors apply the idea of conformal prediction to the task of cleaning tabular data. First, conformal data cleaning (CDC) trains a classifier to detect statistical outliers. Next, rather than a (0, 1) prediction, this is replaced by an (N/A, pred) output, where pred is the output from the classifier. Effectively, examples with N/A see no change, whereas examples that have a "pred" have their values replaced and are thereby cleaned. This is applied to 18 datasets and compared against the PyOD data cleaning method. In about 75% of cases, using CDC leads to an improvement in downstream performance.

Strengths and Weaknesses:

Strengths: Tabular data cleaning is an incredibly important problem with many practical applications. Almost every company in the world has spreadsheet data somewhere, and more often than not, this data is noisy.
- CDC often leads to improvements in quality and downstream performance
- CDC is statistically sound and many details of the algorithm were included (maybe even too many; some could be moved to the appendix)
- The paper is reasonably well written and the experiments are well structured, with 3 trial runs across many settings and datasets.

Weaknesses: The technique just doesn't seem to work that well, even given all the advantages of a nice clean setup.
- The training set is assumed to be clean, which is often not true in real life
- The method gets access to thousands of cells for training a cleaner, which again is often unrealistic
- CDC does not seem to help at all in a quarter of the cases
- The technique does not apply to text or image type data
- The process is slow, and a new model must be trained for every column
- The authors only compare against one baseline (PyOD), when there are many data cleaning and outlier options available
- Even compared to this one baseline, CDC has poor performance in TPR
- While CDC is better than doing nothing, it is really hard to tell (i.e., based on Fig. 4) that it does better than other alternatives.

Requested Changes: The authors already suggested a number of improvements:
- Working on optimizing tree-based models
- Trying generative NN methods

Other options include testing against more baselines as well:
- Adaptation Layer (Goldberger and Ben-Reuven) https://openreview.net/forum?id=H12GRgcxg
- Clean or Annotate (Chen et al., https://arxiv.org/abs/2110.08355) offers 4 methods of data cleaning: AUM, Cartography, Large Loss, Prototype, as well as a number of data denoising techniques to consider.
- Loss Correction (Patrini et al., https://ieeexplore.ieee.org/document/8099723)
- Confident Learning (Northcutt et al., https://arxiv.org/abs/1911.00068)
- Using LLMs (Chong et al., https://aclanthology.org/2022.emnlp-main.618/)

The above papers should all be cited as well and discussed in related work. (Some are already discussed, but others are missing.)

Overall: The results under the proposed method don't seem strong enough to outperform existing techniques.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The authors propose an automatic data cleaning framework based on conformal regression and classification. Essentially, the confidence intervals and sets are used to determine whether a cell in a tabular dataset is an outlier or not, based on a model trained to predict this column from the others. If so, the data is imputed by the model prediction. If the confidence set is empty, nothing is done.
Strengths and Weaknesses:

Strengths:
- Promising combination of approaches - having data cleaning with guarantees is definitely interesting.
- Good background and related work.
- Thorough experiments and detailed description of experiments.
- Encouraging results in the sense that the approach improves over baselines.

Weaknesses:
- I am not convinced that the authors' interpretation of the conformal guarantee is correct, specifically after Equation (6). Essentially, the coverage guarantee only holds on inliers (assuming the model was trained and calibrated on clean data). Conformal prediction does not allow one to say anything about outliers unless specific methods for conformal prediction under distribution shift are used. That is, the conclusion that the column value is not in the confidence set for an outlier with at least probability 1 - alpha does not hold, to my understanding. Even using conformal methods for outlier detection, guarantees only hold for inliers (inliers are correctly detected as inliers with probability 1 - alpha), not for outliers. This is a critical flaw of the approach in my opinion.
- The treatment of empty confidence sets is also not meaningful, as outliers might as well get empty confidence sets (since we cannot say anything about outliers, see above).
- In the experiments it is unclear how the calibration set is chosen - how large is it, does it contain errors, etc.? I assume the training set does not contain errors, in which case I would expect clean training and calibration sets to be rather small in practice. Experiments with smaller training and calibration sets would be interesting.
- I am also missing whether random trials of the calibration/test split are performed.
- It would also be great to see coverage and confidence set sizes for individual columns as examples, versus error rate.
- A distinction between performance on regression and classification would also be interesting.
- In terms of writing, the paper is very verbose and could be shortened a bit in my opinion. The algorithm is not too insightful, and while I appreciate providing a nice API, I would defer this to the code documentation or appendix. The background is also longer than required at times.

Conclusion: I like the problem and idea but am not convinced that conformal prediction has been applied correctly in this setting. As a result, the experiments are purely empirical, and using conformal prediction does not have advantages. If this could be fixed and there were, e.g., a guarantee on inliers, I believe it would be much more compelling. Experiments would also have to be adapted to give insights into the conformal prediction aspects (coverage, set sizes by error rates, calibration/test trials, etc.). In its current form, I believe the paper should not be published at TMLR.

Requested Changes: See above.

Broader Impact Concerns: No concerns.

==================================================

Review 3:

Summary: The authors propose a novel cleaning approach by combining an application-agnostic ML-based data cleaning method with conformal prediction. The method is evaluated on multiple tabular datasets and against a state-of-the-art baseline, i.e., data cleaning based on the PyOD approach (Zhao et al., 2019), one of the best approaches for detecting outliers. The experimental results show that the proposed approach achieves higher cleaning performance and improves the performance of downstream tasks (i.e., classification, regression).

Strengths and Weaknesses:

Strengths:
1. The paper provides a solution that can be integrated into data analytics pipelines for tabular data to automate data cleaning processes.
2. The proposed approach inherits some desired properties and conformity guarantees from conformal prediction.
3. The paper is well-written and the approach well-presented.
4. The topic of the paper is important for many real-world applications using tabular data.

Weaknesses:
1. The proposed solution combines well-known techniques.
2. There are also more advanced approaches for data cleaning which could be used on top of the PyOD approach (Zhao et al., 2019); see, for example, Borisov et al. (2022), "Language models are realistic tabular data generators," arXiv preprint arXiv:2210.06280. Other examples are:
- Zhang, Yishuo, et al. "GANBLR: a tabular data generation model." 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 2021.
- Rajabi, Amirarsalan, and Ozlem Ozmen Garibay. "Tabfairgan: Fair tabular data generation with generative adversarial networks." Machine Learning and Knowledge Extraction 4.2 (2022): 488-501.
- Xu, Lei, et al. "Modeling tabular data using conditional gan." Advances in Neural Information Processing Systems 32 (2019).

Requested Changes: The authors should check/evaluate whether more recent tabular data generation approaches (see the references provided above) used on top of PyOD can provide a stronger baseline.

Broader Impact Concerns: --

==================================================

Metareview:

Recommendation: Reject

Comment: While an interesting idea, the reviewers are all leaning towards rejection, as they believe that the conformal prediction framework has not been applied appropriately. All reviewers suggested more baselines and metrics that would benefit this work in a potential future version.

==================================================
# Identifying And Clustering Counter Relationships Of Team Compositions In PvP Games For Efficient Balance Analysis

Anonymous authors Paper under double-blind review

## Abstract

How can balance be quantified in game settings? This question is crucial for game designers, especially in player-versus-player (PvP) games, where analyzing the strength relations among predefined team compositions—such as hero combinations in multiplayer online battle arena (MOBA) games or decks in card games—is essential for enhancing gameplay and achieving balance. We have developed two advanced measures that extend beyond the simplistic win rate to quantify balance in zero-sum competitive scenarios. These measures are derived from win value estimations, which employ strength rating approximations via the Bradley-Terry model and counter relationship approximations via vector quantization, significantly reducing the computational complexity associated with traditional win value estimations. Throughout the learning process of these models, we identify useful categories of compositions and pinpoint their counter relationships, aligning with the experiences of human players without requiring specific game knowledge. Our methodology hinges on a simple technique to enhance codebook utilization in discrete representation with a deterministic vector quantization process for an extremely small state space. Our framework has been validated in popular online games, including *Age of Empires II*, *Hearthstone*, *Brawl Stars*, and *League of Legends*. The accuracy of the observed strength relations in these games is comparable to traditional pairwise win value predictions, while also offering a more manageable complexity for analysis. Ultimately, our findings contribute to a deeper understanding of PvP game dynamics and present a methodology that significantly improves game balance evaluation and design.

## 1 Introduction

In the dynamic landscape of player-versus-player (PvP) games, team compositions, or "comps," such as hero combinations or decks formed before matches commence, are pivotal (Costa et al., 2019; de Mesentier Silva et al., 2019; Reis et al., 2021). The gaming industry, now approximately a 200 billion US dollar market (Kristianto, 2023), thrives on the diversity and engagement offered by these compositions, reflecting players' individuality and sustaining market competitiveness (Figueira et al., 2018; Fontaine et al., 2019). However, the key to optimizing player engagement and competitive fairness lies in maintaining reasonable strength relations among diverse team compositions—a challenge for both players aiming for victory and designers striving for balance (Levkoff, 2014; Bakkes et al., 2014; Beyer et al., 2016). A quantitative measure for game balancing is thus essential for addressing this challenge.

Currently, win or success rate, use rate, or even the entropy of strategy distributions are available measures across various game genres, targeting optimizations from detailed game parameters to skill-based matchmaking among players (Morosan & Poli, 2017; Hunicke, 2005; Rupp et al., 2023; Nikolakaki et al., 2020; Pendurkar et al., 2023). However, the prevailing reliance on these measures for balance assessment overlooks critical factors such as player skill variability and the counter relationships between compositions, rendering evaluations imprecise.
Traditional player skill ratings, including Elo, TrueSkill, and Matchmaking Rating, predominantly focus on individual prowess, leaving a gap in the strength assessment of team compositions (Elo, 1966; Herbrich et al., 2006; Pramono et al., 2018).

Figure 1: Radar chart comparison of two team compositions across matchups with six different opponents. The left panel illustrates a scenario with no domination, where both compositions exhibit their strengths against specific opponents. The right panel shows a case where Comp1 dominates Comp2, achieving higher win rates against all opponents, illustrating clear dominance in overall performance.

To better understand strength relations in compositions and analyze game balance, we pose the question: "**How many compositions are not dominated?**" If a composition shows no advantage over others, it could be considered redundant. Hence, our goal in this paper is to define measures that answer this question.

We first integrate the Bradley-Terry model with Siamese neural networks (Bromley et al., 1993) to predict the strengths of team compositions from game outcomes in the competitive scenario (Bradley & Terry, 1952; Li et al., 2021). This scalar strength rating lets us identify the strongest or dominating composition with a simple numerical max operation rather than pairwise comparisons over all compositions. However, a single scalar strength often fails to provide precise predictions, because players may alter their playstyle under different states (Lin et al., 2021) and because of the inherent intransitivity present in competitive scenarios (Chen & Joachims, 2016; Balduzzi et al., 2018). Accurate predictions often need to consider cyclic dominance, such as the Rock-Paper-Scissors dynamic, which the Bradley-Terry model does not capture. By analyzing discrepancies between actual outcomes and Bradley-Terry model predictions, we learn a counter table through neural discrete representation learning (van den Oord et al., 2017), thereby enhancing prediction accuracy and offering insights into counter dynamics without specific game knowledge. During the learning of counter tables, we found that vanilla vector quantization (VQ) training leads to poor codebook utilization (Zhang et al., 2023), especially with small codebook sizes; hence, we propose a new VQ Mean Loss to improve codebook utilization for this use case. Leveraging these methods, we define new measures of game balance based on counting non-dominated compositions, a quantity whose computation from simple win rates is challenging due to high time complexity.

Our contributions are threefold: First, we establish two measures for balance by counting the non-dominated compositions: **Top-D Diversity**, which counts playable compositions given a tolerant win value gap, where the tolerant gap is defined by game designers and can be due to factors like skill or luck that make players willing to play those compositions; and **Top-B Balance**, which considers counter relationships in counting non-dominated compositions, i.e., how many meaningful counter relationships exist in the game. Next, we introduce the learning of composition strength and counter relationships, reducing the space complexity of analyzing composition strength relations from $O(N^2)$ to $O(N + M^2)$, where $N$ is the number of compositions and $M$ is the category count of the counter table.
This reduction in space complexity is crucial not only for storage reasons but also for generating a feasible size of balance report for game designers. Additionally, the time complexity of **Top-D Diversity** shifts from $O(N^2)$ to $O(N)$ and **Top-B Balance** from $O(N^3)$ to $O(N + M^3)$. To clarify, this time complexity is only for the strength relationship analysis. The time for collecting game records and obtaining the strength prediction models is not included in this complexity, as it depends on how many game records the game designers plan to collect within a given time period and does not necessarily increase with the number of compositions. Lastly, the rating and counter relationships derived from learning align with the experiences of human players without requiring specific game knowledge.

We validate our methods across popular online games such as *Age of Empires II*, *Hearthstone*, *Brawl Stars*, and *League of Legends*, demonstrating precision on par with pairwise strength predictions using neural networks and also showcasing better generality compared to tabular statistics. Our methodology not only exhibits broad applicability but also underscores its potential to transform the evaluation and design processes of game balance. Furthermore, we believe these balance measures are not limited to games, as various competitive scenarios—including sports, movie preferences, peer grading, and elections—exhibit intransitivity in comparisons similar to games (Chen & Joachims, 2016). Additionally, the strength measurement of recent large language models (LLMs) also incorporates PvP paradigms (Zheng et al., 2023).

## 2 Game Balance

Game designers are tasked with devising engaging mechanisms and numerical frameworks that enhance player experiences (Schell, 2008). Developing an immersive game loop not only encourages participation but also assists players in forming a mental model of the game's mechanics (Sellers, 2017). Designers often apply the Yerkes-Dodson law to optimize player satisfaction, suggesting an optimal arousal level for peak performance that aligns in-game challenges with player skill progression (Dodson, 1915). This dynamic interaction is crucial for maintaining players in a state of mental flow (Csíkszentmihályi, 1990), where game balance plays a pivotal role in sustaining appropriate levels of difficulty and challenge.

As a critical research field within game design and operations (Schell, 2008; Novak et al., 2012; Sellers, 2017), game balance significantly influences player engagement through diverse strategies and playstyles. It extends beyond mere difficulty adjustments to encompass strategy, matchmaking, and game parameter tuning (Becker & Görlich, 2020). Understanding balance definitions and metrics is vital for effectively addressing these components. Traditional metrics such as win rate, win value difference, and game scores have driven the evolution of game balancing techniques, refining the interplay between game mechanics and player satisfaction (Jaffe et al., 2012; Budijono et al., 2022; Mahlmann et al., 2012).

In PvP scenarios, win value estimation is a common approach, with values often normalized to scales like [0, 1] or [-1, 1] to simplify payoff calculations between competitors (Budijono et al., 2022). However, calibrating the strength of a composition with win values typically requires comparisons against multiple opponents (Fontaine et al., 2019).
While strength rating systems like Elo, TrueSkill, and Matchmaking Rating can identify strength from a single scalar rating, with higher ratings suggesting greater strength (Elo, 1966; Herbrich et al., 2006; Pramono et al., 2018), capturing intransitivity or cyclic dominance in scalar ratings is challenging. This necessitates multi-dimensional ratings (Chen & Joachims, 2016; Balduzzi et al., 2018), which reintroduce complexity into balance analysis. Thus, this paper aims to propose a solution that considers intransitivity while maintaining feasible complexity in balance analysis.

Acquiring accurate game data is also crucial for balance analysis, often involving the deployment of rule-based agents during early development phases and integrating human testers later to capture realistic gameplay data. Advances in artificial intelligence, demonstrated by AlphaZero's performance in board games, have enabled learning-based agents to contribute to game balance data collection (Tomašev et al., 2022). Community discussions about strategies also provide valuable insights, often grounded in game theory principles or in empirical relationship graphs drawn by humans to explain game scenarios (Schmitz, 2022; Hernández et al., 2020). Although the entropy of a strategy reaching Nash equilibrium can serve as a measure of strategic balance (Pendurkar et al., 2023), computing Nash equilibrium policies at the game action level in complex games is resource-intensive, posing a challenge for practical application in the game design loop (Bowling et al., 2015; Perolat et al., 2022). A simpler alternative is to train adversarial agents or exploiters that identify the weaknesses of the main playing strategies (Reis et al., 2024; Vinyals et al., 2019), but using rule-based agents or human players as the data source for game balancing is usually more affordable and remains the main approach.

With a comprehensive understanding of game balance, this paper focuses on analyzing balance directly through win-lose outcomes from human players and counting the number of meaningful compositions, particularly in two-team zero-sum PvP games. Given the variability of opposing team compositions in matches, our exploration spans multiple game types, from the civilization choices in *Age of Empires II* to hero combinations in *League of Legends*. Our methodology confronts the challenge of cataloging and evaluating an extensive array of possible team compositions, aiming to enhance the understanding and application of game balance. Before introducing our methods, let us formally define the target, "domination", in PvP balance analysis.

Definition 2.1. Define $\mathrm{Win}: (c_1, c_2) \mapsto w$ as a way to estimate the winner, where $c_1$ and $c_2$ are the compositions of players 1 and 2, respectively, and $w \in [0, 1]$ is the estimated win value.

Proposition 2.2. We say composition $c_1$ dominates $c_2$ if $\mathrm{Win}(c_1, c) > \mathrm{Win}(c_2, c)$ for all compositions $c$.

When Proposition 2.2 is true, $c_2$ is considered useless in terms of win values because $c_1$ can perform better than $c_2$ in all cases. If game designers can validate all compositions with Proposition 2.2, they can analyze game balance by identifying overly strong or useless compositions.

If some compositions are very weak or meaningless, such that most compositions can defeat them 100% of the time, thus preventing the strict domination relation from holding, designers can either manually eliminate these compositions from the set of all compositions $c$ or iteratively eliminate dominated compositions by running Proposition 2.2 several times. However, the time complexity of validating Proposition 2.2 is $O(N^3)$ over $N$ team compositions with a pairwise win value estimation $\mathrm{Win}(c_1, c_2)$. We will try to reduce this complexity with approximations later.
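To make the cost of this validation concrete, the following sketch checks Proposition 2.2 by brute force; `win_value` is a placeholder for any pairwise estimator Win(c1, c2), not a function defined in this paper. With $N$ compositions, the nested scans make the full validation $O(N^3)$.

```python
def dominates(win_value, c1, c2, comps):
    """Proposition 2.2: c1 dominates c2 if it achieves a strictly higher
    win value against every composition c."""
    return all(win_value(c1, c) > win_value(c2, c) for c in comps)


def non_dominated(win_value, comps):
    """Keep compositions not dominated by any other; the two nested loops
    over comps plus the inner scan in dominates() give O(N^3) time."""
    return [
        c2 for c2 in comps
        if not any(dominates(win_value, c1, c2, comps) for c1 in comps if c1 != c2)
    ]
```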
## 3 Learning Rating Table And Counter Table

To understand the strength relations between compositions (comps) and perform efficient balance analysis, we first need to quantify the strength and counter relationships. Our methodology begins with the application of the Bradley-Terry model to allocate a scalar value representing the strength of each comp based on win estimations. This process is elaborated upon in Section 3.1. To tackle the issue of cyclic dominance or intransitivity of win values efficiently, epitomized by the Rock-Paper-Scissors dynamic, we devise a counter table. This involves examining the variances between actual win outcomes from specific comps and the predictions made by the Bradley-Terry model, a process detailed in Section 3.2. Furthermore, the overarching framework that integrates these components into our learning process is delineated in Section 3.3.

## 3.1 Neural Rating Table

Win rates in PvP games, while useful as a conventional metric, do not fully encapsulate the actual strengths of individual players or team compositions. A player's or comp's true prowess is better reflected in their ability to triumph over comparable opponents, as victories against both weaker and stronger opponents contribute equally to the win rate but signify different levels of strength. The Elo rating system, commonly utilized in chess and similar two-player zero-sum games, offers a scalar strength rating for entities, aligning with the principles of the Bradley-Terry model (Elo, 1966; Bradley & Terry, 1952). This model predicts the probability of player $i$ defeating player $j$, as delineated in Equation 1:

$$P(i>j)=\frac{\gamma_{i}}{\gamma_{i}+\gamma_{j}},\tag{1}$$

where $\gamma_x$ represents the positive real-valued strength of player $x$. To manage the scale of $\gamma$, it is often reparameterized using a rating value $\lambda$ in an exponential function, as shown in Equation 2:

$$P(i>j)=\frac{e^{\lambda_{i}}}{e^{\lambda_{i}}+e^{\lambda_{j}}}.\tag{2}$$

Adopting this model, we treat comps analogously to individual players, estimating each comp's strength to predict win probabilities. Given the impracticality of analyzing an extensive $N \times N$ win rate table for a large number of comps, we harness the Bradley-Terry model in conjunction with neural networks to overcome this challenge. Our approach employs a Siamese neural network architecture to deduce the ratings $e^{\lambda}$ for each comp, utilizing mean square error (MSE) as a regression loss function $D$ for model approximations. In our early experiments, we tried using binary cross entropy as the loss function for its probabilistic nature. However, this approach encouraged the rating values to become very large and prevented the model from converging, similar to using hinge loss. Therefore, we focused on using MSE for stable training. This integration allows our neural network to learn the rating table $R_\theta$ from match outcomes, assigning a rating to each comp $c$ through $R_\theta(c)$. The ratings are computed using an exponential activation function to ensure appropriate scaling. The loss function, focused on the match outcome $W_m$, is formalized as follows:

$$L_{R_{\theta}}=\mathbb{E}\left[D\!\left(W_{m},\frac{e^{\lambda_{m_{i}}}}{e^{\lambda_{m_{i}}}+e^{\lambda_{m_{j}}}}\right)\right]=\mathbb{E}\left[D\!\left(W_{m},\frac{R_{\theta}(c_{m_{i}})}{R_{\theta}(c_{m_{i}})+R_{\theta}(c_{m_{j}})}\right)\right].\tag{3}$$
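As a concrete illustration, here is a minimal PyTorch sketch of such a Siamese rating model trained with the loss in Equation 3; the encoder width and the random multi-hot comp encodings are assumptions for illustration only, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class NeuralRatingTable(nn.Module):
    """Illustrative Siamese Bradley-Terry rating model: one shared encoder
    maps a comp feature vector to a positive rating exp(lambda)."""

    def __init__(self, comp_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(comp_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def rating(self, comp):
        # Exponential activation keeps ratings positive: gamma = exp(lambda).
        return torch.exp(self.encoder(comp))

    def forward(self, comp_i, comp_j):
        r_i, r_j = self.rating(comp_i), self.rating(comp_j)
        return r_i / (r_i + r_j)  # Bradley-Terry win probability of comp_i


model = NeuralRatingTable(comp_dim=20)
loss_fn = nn.MSELoss()  # the regression loss D applied to match outcomes W_m
comp_i = torch.randint(0, 2, (32, 20)).float()  # e.g., multi-hot comp encodings
comp_j = torch.randint(0, 2, (32, 20)).float()
outcome = torch.randint(0, 2, (32, 1)).float()  # W_m: 1 if comp_i won, else 0
loss = loss_fn(model(comp_i, comp_j), outcome)
loss.backward()
```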
By adopting this methodology, our network efficiently processes diverse comp combinations, offering a robust and scalable solution for predicting team composition strengths. We can efficiently identify the strongest composition by tracing the ratings over all compositions with a time complexity of $O(N)$.

## 3.2 Neural Counter Table

Within the framework of adapting the Bradley-Terry model through neural networks, we can list the strengths of all $N$ team compositions with a space complexity of $O(N)$. However, the precision of strength relations provided by this method may not match the precision afforded by direct pairwise comparisons for each composition, a process that inherently bears a space complexity of $O(N^2)$. While theoretically feasible via neural networks, such direct prediction incurs a high space complexity, making it challenging to check these ratings and analyze balance, especially with a large $N$. The phenomenon of cyclic dominance or intransitivity of win values, a common challenge in analyzing game balance, introduces further complications. An $N \times N$ counter table, which would record adjustments from rating predictions to enhance accuracy, becomes impractical due to its high space complexity and cognitive load. In practice, players intuitively grasp counter relationships without the need for exhaustive memorization of large tables. To navigate this, we propose a more manageable $M \times M$ counter table that serves as an approximation of the full $N \times N$ relationships, where $M$ represents a manageable number of discrete categories. Beginning with a minimum of 3 to capture basic cyclic dominance patterns, $M$ can be adjusted to strike a balance between prediction accuracy and table interpretability.

For the task of learning discrete categories, we employ Vector Quantization (VQ), a technique of neural discrete representation learning celebrated for its effectiveness (van den Oord et al., 2017). It acts as an end-to-end analog to K-means clustering within neural networks, primarily introduced in the context of VQ-VAE (van den Oord et al., 2017; Baevski et al., 2020). Our goal diverges from traditional autoencoder objectives; rather than reconstructing inputs, our focus is on developing a counter table that learns from the residuals between direct win predictions and those derived from the Bradley-Terry model.

## 3.2.1 Neural Discrete Representation Learning

Before introducing our design for learning the counter table, we first discuss a popular method for discrete representation learning with neural networks, VQ-VAE (van den Oord et al., 2017). In scenarios that require discrete representations, clustering is a common approach, with k-means clustering being a widely used method for several decades (MacQueen et al., 1967). This clustering idea is based on finding $k$ reference points in the feature space to represent corresponding groups of actual features using the nearest neighbor method. However, obtaining an effective feature space from raw observations for this clustering process is a critical problem.
With the growth of deep learning, the variational autoencoder (VAE) (Kingma & Welling, 2014) was proposed to learn effective latent feature spaces for several tasks. Building on the ideas of autoencoders and k-means clustering, VQ-VAE prepares an embedding codebook and employs the nearest neighbor method to convert continuous latent features into discrete indexes of the codebook, thereby obtaining a discrete representation. Afterward, the discrete representations can be restored to continuous vectors for decoding tasks. This discretization process can be formulated with the following notations: Given an observation $o$, its latent features $z_e(o)$ produced by encoder layers, and an embedding space (codebook) $E = \{e_1, \cdots, e_K\}$ of size $K$, we can define the following probability function for mapping $o$ to the discrete space:

$$q(z=k|o)=\begin{cases}1&\text{for }k=\arg\min_{j}\|z_{e}(o)-e_{j}\|^{2},\\0&\text{otherwise}.\end{cases}\tag{4}$$

Through this mapping, we can train a discrete encoder in an end-to-end manner without the need for prior feature engineering as in k-means clustering. For training this neural network, we use the following loss functions:

$$L_{rec}=\mathbb{E}[D(o,o^{\prime})],\quad L_{vq}=\mathbb{E}[D(sg[z_{e}],z_{q})],\quad L_{commit}=\mathbb{E}[D(z_{e},sg[z_{q}])]\tag{5}$$

Here, $D$ is a distance function, with mean square error (MSE) being a common choice. $z_e$ and $z_q$ are the latent codes before and after nearest neighbor replacement, respectively, and $sg[\cdot]$ denotes the stop-gradient operator. The standard VQ-VAE minimizes the combined loss function $L = L_{rec} + L_{vq} + \beta \times L_{commit}$, where $\beta$ is a weight term that encourages the encoder to produce latent codes closer to the discrete representations. For applying this loss to the encoder, we can use the gradient copy trick with the chain rule, as follows:

$$\nabla\theta_{encoder}=\frac{\partial L_{rec}}{\partial z_{q}}\times\frac{\partial z_{e}}{\partial\theta_{encoder}}+\beta\times\frac{\partial L_{commit}}{\partial\theta_{encoder}}\tag{6}$$

For applying this loss to the codebook, it is treated as a simple regression optimization problem.

## 3.2.2 Applying Vector Quantization To The Counter Table

After this brief review of vector quantization with neural networks, we extend the idea of discrete representation to our counter table application. Given the symmetrical nature of residual win values and our aim to classify compositions into $M$ discrete categories, we utilize Siamese network architectures for both the learning of discrete representations and the prediction of residual win values, as illustrated in Figure 2. The residual win value, $W_{res}$, is defined as:

$$W_{res}(c_{m_{i}},c_{m_{j}}|R_{\theta})=W_{m}-\frac{R_{\theta}(c_{m_{i}})}{R_{\theta}(c_{m_{i}})+R_{\theta}(c_{m_{j}})}\tag{7}$$

The counter table, denoted as $C_\theta$, comprises a discrete encoder $Ce_\theta$ and a residual win value decoder $Cd_\theta$, functioning as follows:

$$C_{\theta}(c_{m_{i}},c_{m_{j}})=Cd_{\theta}(Ce_{\theta}(c_{m_{i}}),Ce_{\theta}(c_{m_{j}})).\tag{8}$$

Here, every output of $Ce_\theta(x)$ belongs to $Ck_\theta$, the embedding space optimized for vector quantization.
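The deterministic nearest-neighbor mapping of Equation 4, together with the gradient copy trick of Equation 6, can be sketched in a few lines of PyTorch; the tensor shapes here are illustrative assumptions.

```python
import torch


def quantize(z_e, codebook):
    """Map continuous latent codes to their nearest codebook entries.

    z_e:      (batch, dim) continuous latent codes from the encoder Ce_theta.
    codebook: (M, dim) embedding vectors, one per counter category.
    Returns the counter category indices and the quantized codes z_q.
    """
    dist = torch.cdist(z_e, codebook) ** 2  # squared distances to all entries
    idx = dist.argmin(dim=1)                # Equation 4: nearest-neighbor index
    z_q = codebook[idx]
    # Straight-through estimator: the backward pass copies gradients from
    # z_q to z_e, as in the gradient copy trick of Equation 6.
    z_q = z_e + (z_q - z_e).detach()
    return idx, z_q
```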
The core loss function focuses on the residual win values:

$$L_{res}=\mathbb{E}[D(W_{res}(c_{m_{i}},c_{m_{j}}|R_{\theta}),C_{\theta}(c_{m_{i}},c_{m_{j}}))]\tag{9}$$

complemented by a vector quantization loss:

$$L_{vq}=\mathbb{E}\left[\frac{D(z_{e}(c_{m_{i}}),z_{q}(c_{m_{i}}))+D(z_{e}(c_{m_{j}}),z_{q}(c_{m_{j}}))}{2}\right]\tag{10}$$

where $z_e$ represents the latent code before vector quantization, and $z_q$ denotes the code post-quantization.

In our application, we require a minimal discrete state space to learn the counter table effectively. We observed low utilization of the vectors within the embedding space $Ck_\theta$, leading to unselected vectors during training, which cannot construct a comprehensive $M \times M$ counter table. This low codebook utilization problem is common in VQ, and there are many techniques to improve it (van den Oord et al., 2017; Yu et al., 2022; Shin et al., 2023). Reg-VQ (Zhang et al., 2023) specifically discusses this codebook utilization problem and suggests adopting KL divergence in stochastic VQ and leveraging Gumbel sampling over convolutional feature blocks. However, this raises the question of whether there is a simpler way to improve codebook utilization that is conceptually sound and can be easily implemented on top of the vanilla VQ process for our use case. To handle this, we propose a new loss term for the embedding vectors, termed VQ Mean Loss, which calculates the distance from the mean vector of the embedding space to the continuous latent code $z_e$. This mechanism can be seen as another K-means clustering, encouraging the vectors in $Ck_\theta$ to gravitate towards $z_e$, thus increasing their likelihood of being selected by the nearest neighbor in subsequent iterations. For a more concrete explanation, we provide an example in Appendix A.1. We define this additional loss as:

$$L_{mean}=\mathbb{E}\left[\frac{D(z_{e}(c_{m_{i}}),\overline{e}_{k})+D(z_{e}(c_{m_{j}}),\overline{e}_{k})}{2}\right],\quad\text{where }\overline{e}_{k}=\frac{1}{M}\sum_{e_{k}\in Ck_{\theta}}e_{k}.\tag{11}$$

The gradients for each component in the counter table learning process are calculated with the hyperparameters $\beta_N$ and $\beta_M$; $\beta_N$ is utilized as in VQ-VAE to ensure the continuous latent codes $z_e$ closely align with their quantized versions $z_q$, and $\beta_M$ activates $L_{mean}$:

$$\nabla Ce_{\theta}=\frac{\partial L_{res}}{\partial z_{q}}\times\frac{\partial z_{e}}{\partial Ce_{\theta}}+\beta_{N}\times\frac{\partial L_{vq}}{\partial Ce_{\theta}},\quad\nabla Ck_{\theta}=\frac{\partial L_{vq}}{\partial Ck_{\theta}}+\beta_{M}\times\frac{\partial L_{mean}}{\partial Ck_{\theta}},\quad\nabla Cd_{\theta}=\frac{\partial L_{res}}{\partial Cd_{\theta}}.\tag{12}$$

By employing this $M \times M$ counter table, win values $W_\theta$ that consider counter relationships can be computed via Equation 13, where the counter table output $C_\theta$ serves as the learned estimate of the residual win value:

$$W_{\theta}(c_{m_{i}},c_{m_{j}})=\frac{R_{\theta}(c_{m_{i}})}{R_{\theta}(c_{m_{i}})+R_{\theta}(c_{m_{j}})}+C_{\theta}(c_{m_{i}},c_{m_{j}}).\tag{13}$$

This approach reduces the space complexity of analyzing strength relations for $N$ compositions from $O(N^2)$ to $O(N + M^2)$. When $M$ is a small, constant value, the complexity simplifies further to $O(N)$.
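A minimal sketch of the codebook-side losses from Equations 10 and 11 may help; this computes the terms for one comp of the Siamese pair, and detaching the latent codes in the mean term reflects our reading of the gradient assignment in Equation 12, not necessarily the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def codebook_losses(z_e, z_q, codebook):
    """VQ loss (Eq. 10) and the proposed VQ Mean Loss (Eq. 11) for one comp."""
    # Standard VQ term: pull the selected embeddings and latent codes together.
    l_vq = F.mse_loss(z_e, z_q)
    # VQ Mean Loss: pull the mean of the whole codebook toward the latent
    # codes, so unused embeddings drift into regions where the nearest
    # neighbor search can select them in later iterations.
    e_mean = codebook.mean(dim=0, keepdim=True).expand_as(z_e)
    l_mean = F.mse_loss(e_mean, z_e.detach())
    return l_vq, l_mean

# Following Section 4.3, the codebook gradient would combine these as
# l_vq + beta_m * l_mean, with beta_n = 0.01 on the encoder commitment term
# and beta_m = 0.25.
```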
## 3.3 Learning Procedure

The methodology underlying the construction of the rating and counter tables is encapsulated in the learning framework depicted in Figure 3. This framework requires a dataset consisting of match results, including the team compositions of the competing sides alongside the ultimate win-lose outcomes. The representation of these compositions is adaptable, ranging from simple binary encodings to more nuanced feature descriptions, according to the preferences and requirements set by game designers. Crucially, the derivation of the Neural Counter Table $C_\theta$ is predicated on the prior establishment of the Neural Rating Table $R_\theta$. This sequential approach ensures that the foundational ratings of team compositions are accurately determined before their interrelations and counter dynamics are analyzed. The development and refinement of these tables pave the way for the introduction of novel measures aimed at enhancing diversity and balance within the gaming environment. A comprehensive discussion of these newly introduced balance measures is forthcoming in Section 5, while the effectiveness and precision of the rating and counter tables will be evaluated in Section 4.

Figure 2: Architecture of the Neural Counter Table $C_\theta$. The diagram illustrates the process of estimating residual win values between team compositions. Team Comp A and Team Comp B are encoded into latent representations $z_e(c_A)$ and $z_e(c_B)$ through shared encoder weights. These latent codes are then quantized into embedding vectors $z_q(c_A)$ and $z_q(c_B)$ using the nearest neighbor search. The embedding vectors are classified into counter categories A and B. The decoded quantized vectors $[z_q(c_A), z_q(c_B)]$ and $[z_q(c_B), z_q(c_A)]$ are fed into fully connected (FC) layers with tanh activation functions to produce intermediate values $x_{12}$ and $x_{21}$. The residual win value prediction is calculated as the average of the differences between these intermediate values, providing an estimation of the residual win value $W_{res}$. The dashed layer implies shared weights.

Figure 3: This diagram illustrates the learning procedure for deriving the Neural Rating Table $R_\theta$ and the Neural Counter Table $C_\theta$. The process begins with team compositions (Team Comp A and Team Comp B) and their corresponding match outcomes (win-lose results). In the first stage, team compositions are processed through a shared rating encoder to obtain composition ratings (Comp Rating A and Comp Rating B). These ratings are then utilized in the Bradley-Terry model to predict win values, forming the Neural Rating Table $R_\theta$. In the second stage, these compositions are further processed through a shared category encoder to determine counter categories. The residual win value predictor uses these categories to refine win value predictions, accounting for cyclic dominance, thus forming the Neural Counter Table $C_\theta$.

## 4 Accuracy Of Strength Relations

With our rating table and counter table, we can approximate the win value of a match given two compositions and identify the strength relations for balance. In this section, we examine the accuracy of strength relations using these tables across different games and investigate the impact of the hyperparameters $\beta_N$ and $\beta_M$ on counter table training. There are 5 models for each method in our experiments, each trained from a different random seed. The results in the tables are the average values of these models.

## 4.1 PvP Games

To assess the accuracy of strength relations with our tables, we constructed simple games that emulate practical game scenarios for experiments. Additionally, we applied our methods to several open-access esports game records to confirm their real-world applicability.

## 4.1.1 Simple Combination Game

A simple combination game was designed with 20 elements, each assigned a score equal to its index from 1 to 20. A comp consists of three distinct elements. The score $s_c$ of a comp $c$ is the sum of its elements' scores, and the win-lose outcome is binary, sampled via the probability function $P(c_1 > c_2) = \frac{s_{c_1}^2}{s_{c_1}^2 + s_{c_2}^2}$. **There are $\binom{20}{3} = 1140$ possible compositions in this game.** The dataset, comprising 100,000 matches, was generated by uniformly sampling two comps.

## 4.1.2 Rock-Paper-Scissors

We adhered to the Rock-Paper-Scissors rules, with **only 3 compositions** and win values of 0/0.5/1 for lose/tie/win, respectively. The dataset, consisting of 100,000 matches, was generated by uniform sampling.

## 4.1.3 Advanced Combination Game

This game combines the simple combination game with Rock-Paper-Scissors rules. The primary rule mirrors the simple combination game, with an additional rule: $T = s_c \bmod 3$, assigning $T$ as the counter category of the comp, with 0/1/2 corresponding to Rock/Paper/Scissors. A winning Rock-Paper-Scissors comp receives a +60 score bonus during comp score calculation. **There are still $\binom{20}{3} = 1140$ possible compositions in this setting.** The dataset consists of 100,000 matches generated uniformly.

## 4.1.4 Age Of Empires II (AoE2)

Age of Empires II is a popular real-time strategy game. We utilized the statistics (as of January 2024) from aoestats, an open-access statistics website. The game features 45 civilizations (comps) in 1v1 random map mode across all Elo ranges. **Thus, there are 45 compositions in this game.** Further combinations of civilizations in team mode or on specific maps are not discussed in this paper. The dataset contains 1,261,288 matches.

## 4.1.5 Hearthstone

For Hearthstone, a popular collectible card game, we accessed the statistics (as of January 2024) from HSReplay (https://hsreplay.net/), an open-access statistics website. We considered 91 named decks as compositions for standard ranking at the Gold level. **Therefore, only up to 91 compositions were used**, and the detailed hero or card selections within these compositions were not considered, for simplicity. The dataset comprises 10,154,929 matches.

## 4.1.6 Brawl Stars

Brawl Stars is a popular Multiplayer Online Battle Arena (MOBA) game. We focused on the Trio Modes of Brawl Stars, where teams of three compete for victory. Data were sourced from the "Brawl Stars Logs & Metadata 2023" dataset on Kaggle, initially collected via the public API. With 64 characters, 43 maps, and 6 modes, the composition count could reach $\binom{64}{3} \times 43 \times 6$; **i.e., the maximum number of compositions can reach 10,749,312.** The dataset includes 179,995 matches, with 94,235 unique compositions observed.

## 4.1.7 League Of Legends

For League of Legends, a renowned MOBA game with 5-on-5 team competition, we used the "League of Legends Ranked Matches" dataset from Kaggle, which features 136 champions. **Thus, the maximum number of compositions is $\binom{136}{5} = 359{,}933{,}112$.** The dataset covers 182,527 ranked solo games with 348,498 unique compositions observed, which implies that almost all compositions are different.
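As a quick sanity check, the composition counts quoted above can be reproduced directly with binomial coefficients:

```python
import math

print(math.comb(20, 3))           # 1140 comps in both combination games
print(math.comb(64, 3) * 43 * 6)  # 10,749,312 possible Brawl Stars comps
print(math.comb(136, 5))          # 359,933,112 possible League of Legends comps
```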
## 4.2 Comparisons Of Strength Relation Prediction To better understand whether win value predictions can provide accurate strength relations, we examine the accuracy of the strength relation classification task (weaker/same/stronger) rather than the prediction error of win values. For example, if there are two compositions, A and B, and the oracle win value is Win(A,B)=0.55, the actual outcome we care about is whether A is stronger than B. Now, consider two approximations: Win'(A,B)=0.49 and Win"(A,B)=0.62. It is clear that Win' has a smaller absolute error (0.06) compared to Win" (0.07), but Win' suggests the wrong strength relation. In this strength relation classification task, if the win value falls within the range of [0.499, 0.501], we designate the prediction as the same; a value below 0.499 indicates weaker, and a value above 0.501 indicates stronger. The classification label is calculated based on the average pairwise win value in the dataset. For example, if CA has a 60% average win value against CB in the dataset, CA is deemed stronger when calculating accuracy. All models are approximated with neural networks and trained for 100 epochs on datasets using 5-fold cross-validation. A linear decay learning rate from 0.00025 to 0 over 100 epochs with the Adam optimizer is employed. We then compare the following five methods of win value prediction and provide their definitions with formulations in Section A.4.1: 1. **WinValue**: Predicts the win value for a given composition and compares the win values of the two compositions to determine the winner. If the absolute value of the win value difference is not greater than 0.1%, they are considered to be at the same level. This is a common method in game statistics that does not require maintaining a large table. 2. **PairWin**: Directly predicts the pairwise win value. Some game statistics provide this kind of result when the number of compositions is not too large, and it is a straightforward measure to examine counter relationships. If we have sufficient match results, this method represents the upper bound of strength relation accuracy. 3. BT: Utilizes linear approximation to perform the Bradley-Terry model. This method assumes the rating of a composition can be derived from the sum of the element ratings within the composition. Common generalized Bradley-Terry models for team setups or Elo ratings in team games use this kind of approach (Coulom, 2007). 
4. **NRT**: Employs non-linear approximation to perform the Bradley-Terry model. In many games, combinations of elements in a composition change the strength of the composition non-linearly.

5. **NCT**: Enhances NRT with an additional neural counter table of size M × M.

2https://hsreplay.net/

| | WinValue | PairWin | BT | NRT | NCT M=81 |
|----------------------|------------|-----------|------|-------|------------|
| Simple Combination | 64.5 | 71.2 | 63.9 | 64.8 | 66.4 |
| Rock-Paper-Scissors | 51.3 | 100 | 51.3 | 51.3 | 100 |
| Advanced Combination | 57.7 | 83.5 | 56.6 | 57.9 | 79.4 |
| Age of Empires II | 68.7 | 97.3 | 68.7 | 68.7 | 97.7 |
| Hearthstone | 81.1 | 97.8 | 81.4 | 83.4 | 97.4 |
| Brawl Stars | 90.2 | 94.3 | 53.2 | 95.9 | 97.2 |
| League of Legends | 79.6 | 78.9 | 54.0 | 88.2 | 90.9 |
| Simple Combination | 64.7 | 61.8 | 65.5 | 64.9 | 63.9 |
| Rock-Paper-Scissors | 51.1 | 100 | 51.1 | 51.1 | 100 |
| Advanced Combination | 56.5 | 79.1 | 57.5 | 56.5 | 79.7 |
| Age of Empires II | 64.5 | 75.7 | 64.5 | 64.5 | 75.4 |
| Hearthstone | 80.9 | 95.4 | 81.1 | 81.2 | 94.8 |
| Brawl Stars | 79.7 | 82.4 | 53.0 | 82.8 | 83.4 |
| League of Legends | 51.1 | 50.9 | 53.6 | 51.1 | 51.0 |

Table 1: Accuracies (%) in training (above) and testing (below) for various games, illustrating the effectiveness of NCT with M = 81 in achieving high prediction accuracy across different games.

Table 1 presents the accuracy of the strength comparison task. Notably, NCT with M = 81 achieves accuracy comparable to **PairWin** across all games. In games with complex compositions, such as Brawl Stars and League of Legends, a non-linear approximation is essential for estimating comp strength. For games with explicit counter relationships, such as Rock-Paper-Scissors, the Advanced Combination Game, and Age of Empires II, which exhibit a significant accuracy discrepancy between **PairWin** and NRT, our counter table offers a viable solution. The simplistic win value estimation approach, **WinValue**, usually does not provide the best strength predictions. These results affirm the precision of our rating and counter tables in predicting win values. Notably, in Table 2, as the parameter M increases, so does accuracy, allowing for detailed tracing and analysis of complex counter relationships through the counter table. We provide further discussion of the counter table results in Appendix A.3.

Additionally, we can use these accuracies to analyze properties of the underlying games. If the difference in accuracy between training and testing is small, it implies that there is no significant overfitting and the strength relations generalize to unobserved matches. In cases with clear overfitting, such as League of Legends, more match results are needed to generalize the known strength relations to unknown scenarios. Since League of Legends includes a hero ban and pick phase before a match starts, players tend to prefer compositions that counter their opponents' in order to improve their win rate; this increases the difficulty of win rate prediction for unobserved cases. The large space of possible compositions also increases the complexity of accurate model training. For such games, we recommend collecting more game results for training. Nonetheless, we can still analyze the strength relations in the training datasets, as our goal is to understand these relations.
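For reference, the strength relation classification described above reduces to a few lines of code. A minimal sketch, where `predict_win` stands in for any of the five predictors and `oracle_win` for the dataset's average pairwise win value; both names are illustrative.

```python
def classify_strength(win_value, low=0.499, high=0.501):
    """Map a pairwise win value to a strength relation label."""
    if win_value < low:
        return "weaker"
    if win_value > high:
        return "stronger"
    return "same"

def strength_relation_accuracy(pairs, predict_win, oracle_win):
    """pairs: iterable of (c1, c2) composition pairs; the ground-truth
    label comes from the average pairwise win value in the dataset."""
    hits = total = 0
    for c1, c2 in pairs:
        hits += classify_strength(predict_win(c1, c2)) == \
                classify_strength(oracle_win(c1, c2))
        total += 1
    return 100.0 * hits / total
```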
Initial insights from observed cases can be valuable for early game balancing, although conclusions drawn from these datasets may not necessarily apply to unobserved scenarios.

| | PairWin | NRT | NCT M=3 | NCT M=9 | NCT M=27 | NCT M=81 |
|----------------------|-------|-----------|-----------|------------|------------|------|
| Simple Combination | 71.2 | 64.8 | 64.8 | 65.2 | 65.8 | 66.4 |
| Rock-Paper-Scissors | 100 | 51.3 | 100 | 100 | 100 | 100 |
| Advanced Combination | 83.5 | 57.9 | 57.9 | 79.4 | 79.8 | 79.4 |
| Age of Empires II | 97.3 | 68.7 | 68.7 | 73.1 | 83.8 | 97.7 |
| Hearthstone | 97.8 | 83.4 | 81.3 | 85.4 | 91.7 | 97.4 |
| Brawl Stars | 94.3 | 95.9 | 96.3 | 97.3 | 97.2 | 97.2 |
| League of Legends | 78.9 | 88.2 | 89.5 | 91.6 | 92.6 | 90.9 |
| Simple Combination | 61.8 | 64.9 | 64.9 | 64.4 | 64.2 | 63.9 |
| Rock-Paper-Scissors | 100 | 51.1 | 100 | 100 | 100 | 100 |
| Advanced Combination | 79.1 | 56.5 | 56.5 | 79.7 | 80.1 | 79.7 |
| Age of Empires II | 75.7 | 64.5 | 64.5 | 67.7 | 72.5 | 75.4 |
| Hearthstone | 95.4 | 81.2 | 81.3 | 85.2 | 91.3 | 94.8 |
| Brawl Stars | 82.4 | 82.8 | 82.9 | 83.3 | 83.3 | 83.4 |
| League of Legends | 50.9 | 51.1 | 51.1 | 50.9 | 51.0 | 51.0 |

Table 2: Accuracies (%) in training (above) and testing (below) for various games with different sizes of counter tables. As the size of the counter table increases, we can obtain better accuracy. Additionally, we can use the difference in accuracy between **PairWin** and NRT to identify the magnitude of counter relationships in the game, as these are cases that a single scalar rating system cannot handle.

| Setting | Accuracy (%) | Utilized M |
|------------|----------------|--------------|
| βN = 0.01 | 83.8 | 26.0 |
| βN = 0.125 | 71.4 | 5.4 |
| βN = 0.25 | 68.7 | 1.0 |

Table 3: Training accuracy and the number of utilized categories (M) in AoE2 M=27 under different βN with a fixed βM = 0.25. The common βN in VQ-VAE is 0.25.

## 4.3 Counter Table Utilization

Given the need for a counter table for strength relation analysis, we adopt a vector quantization process in our NCT training. Vector quantization suffers from low codebook utilization when the state space is extremely small, so we introduced a VQ Mean Loss to maximize the number of utilized categories M. As described in Section 3.2, the standard hyperparameters for vector quantization are set to βN = 0.01 and βM = 0.25. We explore different configurations of these hyperparameters in Age of Empires II using NCT M=27, since this game requires a large counter table for good strength relation accuracy, as shown in Tables 3 and 4. We found that the commonly suggested βN = 0.25 (van den Oord et al., 2017) leads to low utilization; we therefore selected a nearly zero coefficient, βN = 0.01, for regular VQ training. However, even with βN = 0.01, if we do not introduce the VQ Mean Loss (i.e., βM = 0), the utilized M is still far from the upper bound of 27. We suggest βM = 0.25 for better accuracy and codebook utilization. A VQ Mean Loss coefficient greater than 1 is also not reasonable, since it would pull the mean embedding more strongly than the nearest embedding, which breaks the original idea of VQ and results in worse performance.

| Setting | Accuracy (%) | Utilized M |
|------------|----------------|--------------|
| βM = 0 | 76.6 | 13.8 |
| βM = 0.125 | 83.6 | 26.0 |
| βM = 0.25 | 83.8 | 26.0 |
| βM = 0.5 | 83.4 | 25.6 |
| βM = 1.0 | 81.0 | 21.6 |

Table 4: Training accuracy and the number of utilized categories (M) in AoE2 M=27 under different βM with a fixed βN = 0.01. A βM that is too small or too large fails to provide good codebook utilization.
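To make the roles of βN and βM concrete, here is a minimal PyTorch sketch of one plausible reading of this objective (the exact formulation is given in Section 3.2): βN weights the standard VQ-VAE commitment term, while the VQ Mean Loss, weighted by βM, is assumed here to pull the mean of the codebook embeddings toward the encoder outputs so that unused codes stay close to the data distribution.

```python
import torch
import torch.nn.functional as F

def vq_losses(z_e, codebook, beta_n=0.01, beta_m=0.25):
    """z_e: (batch, dim) encoder outputs; codebook: (M, dim) learnable embeddings.

    Sketch of a VQ objective: standard codebook/commitment terms plus an
    assumed 'VQ Mean Loss' that pulls the codebook mean toward the data so
    that unused codes do not drift away (improving utilization)."""
    dists = torch.cdist(z_e, codebook)              # (batch, M) pairwise distances
    e_k = codebook[dists.argmin(dim=1)]             # nearest embedding per input

    codebook_loss = F.mse_loss(e_k, z_e.detach())            # weight 1: move e_k to z_e
    commit_loss = beta_n * F.mse_loss(z_e, e_k.detach())     # commitment (VQ-VAE beta)
    mean_loss = beta_m * F.mse_loss(                          # assumed VQ Mean Loss
        codebook.mean(dim=0).expand_as(z_e), z_e.detach())

    z_q = z_e + (e_k - z_e).detach()                # straight-through estimator
    return z_q, codebook_loss + commit_loss + mean_loss
```

Under this reading, a βM greater than 1 would pull the codebook mean more strongly than the nearest embedding (whose pull has weight 1), matching the remark above.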
## 4.4 Tabular Version Of Baselines

In Section 4.2, all baselines were trained as neural networks for better generalization to unseen compositions. One may ask why we do not use a simple tabular approach, as in common rating or statistical analysis; thus, we also report tabular versions of WinValue and PairWin, along with the Elo rating and a multidimensional variant, mElo2 (Balduzzi et al., 2018). We test the following five types of methods:

1. **WinValue N→T**: The same as **WinValue**, but using a tabular method to obtain predictions. In other words, this method averages the game results to directly report the average win values instead of using a neural network approximation. For unseen compositions on either side in the test dataset, the prediction is undefined and always counted as wrong, since the method has no default strength value.

2. **PairWin N→T**: The same as **PairWin**, but using a tabular method to obtain predictions. In other words, this method averages the composition-by-composition game results to give the prediction. For unseen matches, the prediction is likewise undefined and always counted as wrong. We can expect this method to have the best training accuracy but very poor generalizability, since a tabular approximator has minimal approximation error but no generalization.

3. **Elo N→T**: We apply the standard Elo rating method on compositions (a minimal sketch of the update appears at the end of this subsection). Each composition is a player, and the initial rating is 1000. The constant K for updating the Elo rating is 16. We use NRT as the baseline for Elo, since both are derived from the Bradley-Terry model but use different implementations.

4. **mElo2**: We implement the mElo2 method proposed by Balduzzi et al. (2018), which assigns each composition a scalar rating r and a two-dimensional vector c. The initial rating is 1000, and the update step K is 16. For the initial vectors c, we follow a public implementation provided by Lazewatsky (2024), sampling uniformly from the range [-10, 10].

5. **NCT**: Enhances NRT with an additional neural counter table of size M × M.

These tabular methods share the same training process as the neural network approaches, including access to 100 epochs of training data and the random swapping of the two compositions in a match with the corresponding adjustment of the win outcome. The results in Table 5 show that tabular methods can predict better on training data since they have less approximation error. However, they lack generalizability and may fall into severe overfitting. If we focus on popular online games, using the neural network version of PairWin or **NCT M=81** as the win value predictor is still the better choice.
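As referenced in the list above, a minimal sketch of the tabular **Elo N→T** update (initial rating 1000, K = 16); the 400-point logistic scale is the standard Elo choice and is assumed here.

```python
from collections import defaultdict

K = 16.0

def expected_score(r_a, r_b):
    # Standard Elo win expectation on the usual 400-point logistic scale.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def fit_elo(matches, epochs=100):
    """matches: iterable of (comp_a, comp_b, outcome) with outcome in
    {0, 0.5, 1} from comp_a's perspective; each comp is one 'player'."""
    rating = defaultdict(lambda: 1000.0)
    for _ in range(epochs):
        for a, b, s_a in matches:
            e_a = expected_score(rating[a], rating[b])
            rating[a] += K * (s_a - e_a)
            rating[b] += K * (e_a - s_a)   # symmetric update for the opponent
    return rating
```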
| | WinValue N→T | PairWin N→T | Elo N→T | mElo2 | NCT M=81 |
|----------------------|----------------|---------------|-------------|---------|------------|
| Simple Combination | 64.5 → 64.5 | 71.2 → 99.9 | 64.8 → 64.3 | 56.6 | 66.4 |
| Rock-Paper-Scissors | 51.3 → 51.3 | 100 → 100 | 51.3 → 73.3 | 100 | 100 |
| Advanced Combination | 57.7 → 57.7 | 83.5 → 99.9 | 57.9 → 57.1 | 52.7 | 79.4 |
| Age of Empires II | 68.7 → 68.7 | 97.3 → 100 | 68.7 → 53.1 | 51.2 | 97.7 |
| Hearthstone | 81.1 → 81.1 | 97.8 → 98.0 | 83.4 → 74.8 | 61.4 | 97.4 |
| Brawl Stars | 90.2 → 95.8 | 94.3 → 99.7 | 95.9 → 98.0 | 97.5 | 97.2 |
| League of Legends | 79.6 → 99.9 | 78.9 → 100 | 88.2 → 100 | 100 | 90.9 |
| Simple Combination | 64.7 → 64.7 | 61.8 → 6.5 | 64.9 → 64.4 | 57.2 | 63.9 |
| Rock-Paper-Scissors | 51.1 → 55.8 | 100 → 100 | 51.1 → 73.6 | 100 | 100 |
| Advanced Combination | 56.5 → 56.7 | 79.1 → 8.2 | 56.5 → 56.1 | 51.3 | 79.7 |
| Age of Empires II | 64.5 → 64.0 | 75.7 → 75.4 | 64.5 → 52.4 | 51.0 | 75.4 |
| Hearthstone | 80.9 → 81.5 | 95.4 → 95.0 | 81.2 → 74.9 | 61.1 | 94.8 |
| Brawl Stars | 79.7 → 69.8 | 82.4 → 69.3 | 82.8 → 77.8 | 83.1 | 83.4 |
| League of Legends | 51.1 → 0.1 | 50.9 → 0 | 51.1 → 6.3 | 50.2 | 51.0 |

Table 5: Accuracies (%) in training (above) and testing (below) with different baselines. Each arrow indicates the change from the neural network version to its tabular counterpart (N→T).

## 5 New Balance Measures

The creation of rating and counter tables allows us to devise new ways to measure balance in games, going beyond simple win rates to consider domination relations as described in Proposition 2.2. In games where two players compete against each other, a common goal is to equalize win rates, aiming for each player to have a win rate near 50%. This is easier to achieve in real-time games with symmetric settings for each player, but it is harder in turn-based games because the player who goes first often has an advantage (Beau & Bakkes, 2016). The main challenge in balancing is determining which player has the upper hand and the extent of their advantage. We propose two new ways to measure balance based on estimated win values and explain how to calculate these measures using our approximations to reduce computational complexity. Next, we will examine the diversity of comps players might choose in Section 5.1 and identify how many comps might give players an advantage in Section 5.2. First, let us define some important concepts:

Assumption 5.1. *The Bradley-Terry rating function $R_\theta(c)$ provides an estimate of the win value, $\text{Win}(c_1, c_2) = \frac{R_\theta(c_1)}{R_\theta(c_1) + R_\theta(c_2)}$.*

Proposition 5.2. *The composition $c_{top}$ with the highest rating $R_\theta(c_{top})$ over a rating function $R_\theta$ is considered to dominate all others with lower ratings.*

Considering Definition 2.1, Proposition 2.2, and Assumption 5.1, we can conclude that Proposition 5.2 is true because $\frac{x_1}{x_1 + y} > \frac{x_2}{x_2 + y}$ when $x_1 > x_2 > 0$. According to Proposition 5.2, if there is only one composition $c_{top}$ with the highest rating, it is considered the best choice before considering counter strategies. This information is often sufficient for some balancing methods, such as identifying and adjusting the strongest comp (Fontaine et al., 2019). Identifying $c_{top}$ takes O(N) time over N comps, whereas traditional win rates cannot directly provide a $c_{top}$: averaging the win rates over N comps takes O(N²). In the following sections, we aim to determine how many compositions might be acceptable to players compared to this top composition, and we use the new counter table to gain a better understanding of balance through domination with counter relationships.

## 5.1 Top-D Diversity Measure

We are examining how many different game compositions (comps) players might prefer to play.
More choices can enrich the game content for fun and also help designers generate revenue by selling these comps. We want to know which comps players will pick based on their chances of winning. Here are some definitions and assumptions for this measure.

Definition 5.3. *Let $G \in [0, 1]$ be an acceptable win value gap.*

Assumption 5.4. *Players believe that a small difference in win value, up to G, can be attributed to factors like skill or luck, and they are willing to play again under this belief.*

Assumption 5.5. *If a comp c is considered not dominated by $c_{top}$, it is considered not dominated by any comp.*

Lemma 5.6. *Players are likely to choose comps c where $\frac{R(c)}{R(c) + R(c_{top})} + G \geq 0.5$.*

By accepting Definition 5.3, Assumption 5.1, Assumption 5.4, Assumption 5.5, and Proposition 2.2, Lemma 5.6 is true, since comps that meet this condition are considered not dominated by any other comps. We use Algorithm 1 to count how many comps meet this condition, and this number is the game's Top-D Diversity measure, where a larger D implies more diverse game content. The time complexity of this algorithm is O(N) over N comps. Without the property of a single scalar rating, checking and defining win value gaps on pairwise compositions would take O(N²).

**Algorithm 1** Compute Top-D Diversity Measure
**Input:** neural rating table $R_\theta$, top-rated comp $c_{top}$, acceptable win value gap $G$
**Output:** Top-D Diversity Measure $D$
Initialize count $D \leftarrow 0$
**for** each comp $c$ in the set of all comps **do**
  **if** $\frac{R_\theta(c)}{R_\theta(c) + R_\theta(c_{top})} + G \geq 0.5$ **then** $D \leftarrow D + 1$
**end for**

## 5.2 Top-B Balance Measure

To further explore game balance, we recognize that the dynamics of counterplay are vital, and it is rare to have a single dominating comp. Our goal is to identify the number of comps that are not dominated by any other comps, considering their counter relationships.

Assumption 5.7. *The Bradley-Terry model with the rating function $R_\theta$, enhanced with a counter table $C_\theta$, provides more reliable predictions than using the Bradley-Terry model alone.*

Based on Assumption 5.7, we derive the following:

Proposition 5.8. *A comp $c_1$ dominates $c_2$ if, for every comp c, $\text{Win}(c_1, c) + C_\theta(c_1, c) > \text{Win}(c_2, c) + C_\theta(c_2, c)$.*

However, verifying this for all comps is still computationally challenging (O(N³)). To mitigate this, we can categorize comps and record the top comp of each category, leading to the following propositions:

Proposition 5.9. *If comps $c_1$ and $c_2$ fall into the same category in $C_\theta$, then $c_1$ dominates $c_2$ when $R_\theta(c_1) > R_\theta(c_2)$.*

Proposition 5.10. *If a comp $c_1$ dominates $c_2$ and $c_2$ dominates $c_3$, then $c_1$ also dominates $c_3$.*

Deriving from Proposition 5.8 and Assumption 5.7, Proposition 5.9 simplifies the determination of domination within the same category, and Proposition 5.10 establishes a transitive relationship in domination. These propositions collectively support Lemma 5.11:

Lemma 5.11. *Given a rating table $R_\theta$ following the Bradley-Terry model and a counter table $C_\theta$ covering N comps across M categories, all non-dominated comps can be identified among the highest-rated ones in each of the M categories.*

After finding the top comp in each category (O(N + M)), we use Lemma 5.11 to identify non-dominated comps. This methodology is detailed in Algorithm 2 below, calculating the **Top-B Balance** measure in O(N + M³), where a larger B implies more balanced game content.
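Both measures are straightforward to prototype. A minimal Python sketch of Algorithm 1 above and Algorithm 2 (shown after this sketch), where `R(c)` returns a comp's rating, `category(c)` its counter category, and `C(c1, c2)` the counter adjustment $C_\theta$ looked up from the two comps' categories; all names are illustrative.

```python
def top_d_diversity(comps, R, c_top, G):
    """Algorithm 1: count comps within the acceptable win value gap G of c_top."""
    return sum(1 for c in comps
               if R(c) / (R(c) + R(c_top)) + G >= 0.5)

def top_b_balance(comps, R, C, category):
    """Algorithm 2: count non-dominated top comps across counter categories."""
    tops = {}                          # highest-rated comp per category
    for c in comps:
        k = category(c)
        if k not in tops or R(c) > R(tops[k]):
            tops[k] = c
    tops = list(tops.values())

    def win(c1, c2):                   # Bradley-Terry win estimate plus counter term
        return R(c1) / (R(c1) + R(c2)) + C(c1, c2)

    def dominates(c1, c2):             # Proposition 5.8 restricted to top comps
        return all(win(c1, x) > win(c2, x) for x in tops)

    return sum(1 for c in tops
               if not any(dominates(o, c) for o in tops if o is not c))
```

The categorization pass is O(N); the domination check runs over the M category winners only, giving the O(N + M³) bound stated above.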
**Algorithm 2** Compute Top-B Balance Measure
**Input:** rating table $R$, counter table $C_\theta$, set of all comps
**Output:** number of non-dominated comps $B$
List all comps based on $R$ and categorize them using $C_\theta$
Keep only the highest-rated comp in each category
Initialize count $B \leftarrow 0$
**for** each top comp $c$ in each category **do**
  Assume $c$ is not dominated
  **for** each other top comp $c'$ in other categories **do**
    $domi \leftarrow$ true
    **for** each top comp $c''$ in all categories **do**
      **if** $\frac{R(c')}{R(c') + R(c'')} + C_\theta(c', c'') \leq \frac{R(c)}{R(c) + R(c'')} + C_\theta(c, c'')$ **then** $domi \leftarrow$ false **and break**
    **end for**
    **if** $domi$ **then** mark $c$ as dominated **and break**
  **end for**
  **if** $c$ is not dominated **then** $B \leftarrow B + 1$
**end for**

## 6 Case Study Of Top-D Diversity And Top-B Balance

With our new balance measures, previous works on game balancing, as mentioned in Sections 1 and 2, can now incorporate these measures to adjust game mechanisms beyond merely achieving a 50% win rate in PvP scenarios. In this section, we conduct two case studies using our measures for direct balance change suggestions in Age of Empires II and Hearthstone, employing our first model of rating and counter tables in the experiments to demonstrate an application. The actual ratings and counter categories of these tables can be found in Appendix A.3. We also discuss the use cases and the information available for suggesting balance updates with our methods and existing approaches, including win rate observations and entropy-based methods.

## 6.1 Case Study On Age Of Empires II

In Age of Empires II, there are 45 civilizations as compositions in 1v1 mode. We first examine the Top-D Diversity measure in Table 6(a). The top composition identified was the **Romans**, with a strength of 1.08145. The Top-D Diversity result suggests that, to enhance general balance, it is reasonable to keep the win value's standard deviation larger than 4%. Such adjustments could be implemented through matchmaking mechanisms, map randomness, game rule variations, etc. A lower randomness level implies imbalance, forcing almost every player to choose **Romans**, especially since it is part of the DLC (requiring purchase for a competitive advantage).

Regarding Top-B Balance in Table 6(b), when we assume there are 9 categories of counter relationships, one category turns out to be dominated. According to our ad-hoc analysis in Appendix A.3.1, it is an economic powerhouse category, and the top civilization in this category is **Poles**. This indicates the potential necessity of enhancing the economic bonuses of the civilizations within this category to improve balance, as even the best among them, **Poles**, is dominated by other top civilizations. When we extend the counter table to M = 27, **Aztecs** and **Chinese** are identified as dominated. This suggests that a review of the counter relationships within their categories might be warranted to discuss potential improvements. Generally, the balance is commendable, with 24 non-dominated civilizations.

(a)
| Setting | Top-D Diversity |
|----------|-----------------|
| G = 0.01 | 2 |
| G = 0.02 | 9 |
| G = 0.04 | 25 |
| G = 0.08 | 44 |

(b)
| Setting | Used M | Top-B Balance | Training Accuracy |
|---------|--------|---------------|-------------------|
| M = 3 | 1 | 1 | 69.6 |
| M = 9 | 9 | 8 | 77.0 |
| M = 27 | 26 | 24 | 86.0 |
| M = 81 | 45 | 45 | 98.9 |

Table 6: Top-D Diversity and Top-B Balance in Age of Empires II with different settings.
When we assume the counter relationships can be very complex, and that it is worthwhile for expert players to memorize a large counter table, we find that there is no truly dominated civilization in the M = 81 setting: all civilizations are assigned to distinct categories and show advantages in specific matchups. The accuracy of strength relations at M = 81 is 98.9%. This evidence shows that Age of Empires II is balanced when counter relationships are meticulously examined. The game features a complex counter loop, allowing for a counter civilization to almost any other. However, there is still room for balance improvement for beginner players if we do not want such a large counter table. These insights provide game developers with guidance on addressing balance weaknesses in future updates. They also offer players a deeper understanding that even a generally strong civilization, like Romans, has specific counter civilizations. Traditional game balance techniques, which often aim for a fair win rate (e.g., 50% in 2-player zero-sum games), might overlook the intricacies of counter relationships and game theory when merely weakening strong comps or strengthening weak ones.

## 6.2 Case Study On Hearthstone

(a)
| Setting | Top-D Diversity |
|----------|-----------------|
| G = 0.01 | 1 |
| G = 0.02 | 1 |
| G = 0.04 | 3 |
| G = 0.08 | 9 |

(b)
| Setting | Used M | Top-B Balance | Training Accuracy |
|---------|--------|---------------|-------------------|
| M = 3 | 3 | 1 | 81.3 |
| M = 9 | 9 | 9 | 87.0 |
| M = 27 | 25 | 23 | 90.6 |
| M = 81 | 54 | 53 | 97.2 |

Table 7: Top-D Diversity and Top-B Balance in Hearthstone with different settings.

When we examine another game, Hearthstone, we first discard decks with fewer than 100 game records in our dataset to ensure the analysis is not biased by outliers; 58 decks remain after this filtering. From the Top-D Diversity in Table 7(a), the balance is not good: even with a 4% tolerant win value gap, only 3 decks are considered not dominated. The top deck is **Treant Druid** with a strength of 1.48915 (see Figure 11 for the strengths of other decks), and it is clearly too strong. When we check Top-B Balance in Table 7(b) under M = 3, **Treant Druid** dominates the others. If we extend the counter table to M = 9 (we provide an ad-hoc category analysis in Appendix A.3.2), the balance is surprisingly good, with a Top-B Balance of 9. This is because there is a clear counter deck, **Aggro Paladin** (1.17278), to **Treant Druid**. Even though **Aggro Paladin** has a general strength of only 1.17278, it strongly counters **Treant Druid**, and all top decks at M = 9 have their own advantages. When we use a larger counter table, M = 27 or M = 81, the game is balanced enough, since the difference between Used M and Top-B Balance is not significant. This analysis suggests that, for this version of Hearthstone, **Treant Druid** requires specific adjustments, while the counter relationships themselves are balanced.

## 6.3 Discussion On Different Types Of Balance Measures

When considering game balance measures, the major concerns are the type of information these measures provide and how this information can help modify the game mechanics. We focus on measures with fewer subjective factors, specifically those related to improving players' chances of winning. Measures dependent on player preferences, such as use rate, learning difficulty, popularity, and other factors, are not included in the main discussion. We start with the most common measure, win rates, which are clear and broadly applicable in PvP games.
Whether using a simple win rate or a more detailed win rate against specific opponent compositions, making each composition have similar strength is a common idea (Becker & Görlich, 2020). Achieving an average 50% win rate for each composition, without considering its opponents, is one specific solution. Designers can check the win rate of each composition and increase the strength of those below 50%, commonly referred to as "buffing" in the player community. For compositions with over 50% win rates, especially those with a large advantage, designers may try to weaken their strength, referred to as "nerfing" in the player community, or develop new compositions that specifically counter them. However, these changes may not always align with players' desires for entertainment or satisfaction with game depth. Players often pursue diversity or want to demonstrate individuality with different strategy settings (Rheinberg, 2020). Keeping every setting or composition at the same strength level can violate this intention. Therefore, studying different distributions of strength over win rates can provide various solutions, not just achieving 50% average-case win rates. Our Top-D Diversity and Top-B Balance measures also rely on win rates but focus on different aspects: the tolerant win value gap and the size of meaningful counter relationships.

Another possible balance measure, the entropy of a strategy, especially a strategy that reaches a Nash equilibrium (Pendurkar et al., 2023), can be considered a further application of win rates. A policy reaching a Nash equilibrium implies that its opponent cannot change its own policy to gain more benefit. When balance is defined as maximizing the entropy of this policy, it suggests increasing the strength of strategies with low sampling probability and reducing the advantages of strategies with high sampling probability. This idea is similar to the goal of achieving 50% win rates or the same average-case strength, but defined on a more complex relationship with the Nash equilibrium. However, it may share the same limitation as targeting 50% win rates, since it cannot explicitly differentiate compositions that share the same strength relations: for example, it may drive all strategies to be nearly the same. Therefore, Pendurkar et al. (2023) also proposed a parameter regularization term to trade off the entropy objective, which may push parameters toward identical strength, against the inherent diversity of game settings.

Previous measures can help with game balance; however, they cannot guarantee that the resulting updates will not reduce the inherent diversity of the game mechanics, since there is usually a globally optimal solution that sets the parameters of different compositions to be nearly the same. Although such a result is very balanced, it is also boring for players: there is effectively one composition decorated as several different ones, or at least several compositions sharing the same distribution of win values. If we analyze the potential effect of these balance changes from the angle of player experience, the game converges to a state that does not feel more balanced: the actual content resembles one strong composition dominating the game, since players no longer need to study which composition is better when all have the same strength and differ only in mechanism or playstyle. This raises the question: what result are players pursuing for game balance? We propose a new explanation: the size of counter relationships.
Explicitly increasing the complexity of counter relationships can increase the game's depth, requiring players to study and practice more to master the game, learning the different counter categories so they can dynamically adjust to the strategies of other players. This ensures a diverse game setting, which is what Top-B Balance can help achieve. Also, accepting uncertainty from player skill or luck can enrich the game content. If there are weaker compositions, we do not necessarily have to change them to improve balance. Keeping them within a tolerant win value gap allows players to try them occasionally, increasing the diversity of game mechanics. This is what Top-D Diversity can help with. There is no clear advantage of one balance measure over another, since they can be used simultaneously to provide suggestions; the final decision on which suggestions to adopt is the responsibility of game designers. Different balance measures provide different information, and we believe our new balance measures can enrich the choices for balance suggestions.

Original game:
| | Rock | Paper | Scissors |
|----------|------|-------|----------|
| Rock | 0 | -1 | 1 |
| Paper | 1 | 0 | -1 |
| Scissors | -1 | 1 | 0 |

Upper variant:
| | Rock | Paper | Scissors | Rock2 | Paper2 | Scissors2 |
|-----------|------|-------|----------|-------|--------|-----------|
| Rock | 0 | -1 | 1 | 0 | 0 | 0 |
| Paper | 1 | 0 | -1 | 0 | 0 | 0 |
| Scissors | -1 | 1 | 0 | 0 | 0 | 0 |
| Rock2 | 0 | 0 | 0 | 0 | 0 | 0 |
| Paper2 | 0 | 0 | 0 | 0 | 0 | 0 |
| Scissors2 | 0 | 0 | 0 | 0 | 0 | 0 |

Lower variant:
| | Rock | Paper | Scissors | Rock2 | Paper2 | Scissors2 |
|-----------|------|-------|----------|-------|--------|-----------|
| Rock | 0 | -1 | 1 | 0 | 0 | 0 |
| Paper | 1 | 0 | -1 | 0 | 0 | 0 |
| Scissors | -1 | 1 | 0 | 0 | 0 | 0 |
| Rock2 | 0 | 0 | 0 | 0 | -1 | 1 |
| Paper2 | 0 | 0 | 0 | 1 | 0 | -1 |
| Scissors2 | 0 | 0 | 0 | -1 | 1 | 0 |

Figure 4: An example of extending the classical Rock-Paper-Scissors to more complex cases.

Additionally, balance is just one of many factors considered in game design (Schell, 2008). Designers sometimes compromise on balance in favor of more important themes or features that enhance the game experience (Schell, 2008), or players may prioritize playstyle diversity to express their individuality (Lin et al., 2024). To address these kinds of requirements, the regularization term in the entropy-based balance measure (Pendurkar et al., 2023) serves as an example of implementing this trade-off, similar to how the tolerance gap functions in our Top-D Diversity measure.

We also give two simple examples to demonstrate advantages of our new measures that previous measures cannot explicitly report. Case one: a game without counter relationships, as seen in our Simple Combination Game. There is a theoretically optimal composition: (18,19,20). If we consider the strategy reaching the Nash equilibrium, there is only one strategy, which always selects this composition, and the entropy of this strategy is zero, indicating extreme imbalance. However, if we consider some slightly weaker compositions, (17,19,20) and (13,17,19), their expected win rates against the best composition are 49.115% and 42.496% (e.g., the comp scores are 56 versus 57, so $56^2/(56^2+57^2) \approx 49.115\%$), with standard deviations ($\sqrt{p(1-p)}$) of 49.992% and 49.434%, respectively. With small samples, there is therefore a non-trivial probability of observing (17,19,20) or (13,17,19) beating (18,19,20) more often than not, which may make players feel the game is balanced.
In this case, Top-D Diversity would help quantify this type of scenario and would not strongly require all compositions to have the same strength, which would otherwise demand exhaustive effort to develop mechanisms that differ yet have exactly the same strength. Another example: we can imagine two extended variants of Rock-Paper-Scissors with the payoff matrices in Figure 4. In the original game, we already know that the mixed strategy reaching the Nash equilibrium is (1/3, 1/3, 1/3), the uniform distribution. For both the upper variant and the lower variant, the uniform distribution is still a Nash equilibrium strategy, and using it results in a 50% average-case win rate. Thus, both entropy-based and simple win rate measures would suggest they are at the same level of balance. However, when applying our Top-B Balance measure, we can use only a 4 × 4 counter table to fully describe the counter relationships in the upper variant (Rock2, Paper2, and Scissors2 share the same counter category), whereas in the lower variant we need a 6 × 6 counter table. This result implies that the lower variant has a larger counter-relationship structure and players need to study and adjust their strategy for every composition, aligning with our new explanation of improving game balance.

## 7 Conclusion And Future Works

The quantification of balance in competitive scenarios is crucial for analyzing how many participants remain meaningful choices. This paper focuses on a special case of two-team zero-sum competitive scenarios: PvP game compositions. With our approximations of rating and counter relationships, domination relationships can now be quantified efficiently. In the past, most balancing techniques have primarily relied on win rate analysis. Our experiments, conducted on popular online games, underscore the efficacy and practicality of our approach. We believe our work enhances the tools available for game balancing, leading to more balanced and engaging player experiences.

There are still many topics to explore further in the realm of PvP game compositions. For example, we have only considered pre-built compositions, but measuring the balance of the elements that form a composition is also important for games with a vast number of element combinations, such as the individual cards in Hearthstone, the specific tech tree in Age of Empires II, or even the equipment in League of Legends. For games where it is difficult to enumerate all compositions, considering composition building first is essential.

Expanding our focus to broader applications, our approach can be applied to domains where the assessment of competitor strength is crucial in competitive scenarios, such as sports, movie preference, peer grading, elections, and language model agents (Chen & Joachims, 2016; Zheng et al., 2023). In the realm of cutting-edge artificial intelligence research, our approach could offer insights into multi-agent training with counter relationships to exploit weaknesses (Vinyals et al., 2019) and potentially benefit fields like AI safety for attack and defense analysis (Amodei et al., 2016). Thus, our methods hold promise for a wide range of applications, marking a step forward in the quantitative analysis of competitive dynamics.

## Broader Impact Statement

Our rating and counter tables are learned models and do not guarantee that the results will always be the same. It is necessary to carefully check and train several models for critical applications to ensure the results are not based on random guesses.
Our balance measures focus on helping pinpoint the advantages and weaknesses of each composition rather than identifying a single dominating composition. However, if misused, this approach could potentially aid in creating market monopolies rather than improving balance. It is important to use these measures ethically and with the intention of promoting fair competition. Additionally, our methodology is built on a scalar rating system and tested on two-team zero-sum symmetric games, which may limit the applicability of our balance measures to other types of games, especially those without a win rate. ## References Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. *CoRR*, abs/1606.06565, 2016. Alexei Baevski, Steffen Schneider, and Michael Auli. vq-wav2vec: Self-supervised learning of discrete speech representations. In *International Conference on Learning Representations (ICLR)*, 2020. Sander Bakkes, Shimon Whiteson, Guangliang Li, George Viorel Vişniuc, Efstathios Charitos, Norbert Heijne, and Arjen Swellengrebel. Challenge balancing for personalised game spaces. In IEEE Games Media Entertainment (GEM), 2014. David Balduzzi, Karl Tuyls, Julien Pérolat, and Thore Graepel. Re-evaluating evaluation. In *Advances in* Neural Information Processing Systems (NeurIPS), 2018. Philipp Beau and Sander Bakkes. Automated game balancing of asymmetric video games. In *IEEE conference* on computational intelligence and games (CIG), 2016. Alexander Becker and Daniel Görlich. What is game balancing? - an examination of concepts. *ParadigmPlus*, 2020. Marlene Beyer, Aleksandr Agureikin, Alexander Anokhin, Christoph Laenger, Felix Nolte, Jonas Winterberg, Marcel Renka, Martin Rieger, Nicolas Pflanzl, Mike Preuss, et al. An integrated process for game balancing. In *IEEE Conference on Computational Intelligence and Games (CIG)*, 2016. Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up limit hold'em poker is solved. *Science*, 347(6218):145–149, 2015. Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. *Biometrika*, 1952. Jane Bromley, James W. Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using A "Siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI), 7, 1993. Nathaniel Budijono, Phoebe Goldman, Jack Maloney, Joseph B Mueller, Phillip Walker, Jack Ladwig, and Richard G Freedman. Ludus: An optimization framework to balance auto battler cards. In *AAAI* Conference on Artificial Intelligence (AAAI), 2022. Shuo Chen and Thorsten Joachims. Modeling intransitivity in matchup and comparison data. In *International Conference on Web Search and Data Mining (WSDM)*, 2016. Lincoln Magalhães Costa, Alinne Crintinne Corrêa Souza, and Francisco Carlos M. Souza. An approach for team composition in league of legends using genetic algorithm. In *Brazilian Symposium on Computer* Games and Digital Entertainment (SBGames), 2019. Rémi Coulom. Computing "elo ratings" of move patterns in the game of go. *Journal of the International* Computer Games Association (ICGA journal), 2007. Mihály Csíkszentmihályi. *Flow: The Psychology of Optimal Experience*. Harper & Row, 1990. Fernando de Mesentier Silva, Rodrigo Canaan, Scott Lee, Matthew C. Fontaine, Julian Togelius, and Amy K. Hoover. Evolving the hearthstone meta. 
In *IEEE Conference on Games (CoG)*, 2019. Jacob D. Dodson. The relation of strength of stimulus to rapidity of habit-formation in the kitten. *Journal* of Animal Behavior, 1915. Arpad E. Elo. *The USCF Rating System: Its Development, Theory, and Applications*. United States Chess Federation, 1966. Felipe Machado Figueira, Lucas Nascimento, Jose da Silva Junior, Troy Kohwalter, Leonardo Murta, and Esteban Clua. Bing: A framework for dynamic game balancing using provenance. In *Brazilian Symposium* on Computer Games and Digital Entertainment (SBGames), 2018. Matthew C Fontaine, Scott Lee, Lisa B Soros, Fernando de Mesentier Silva, Julian Togelius, and Amy K Hoover. Mapping hearthstone deck spaces through map-elites with sliding boundaries. In *The Genetic* and Evolutionary Computation Conference (GECCO), 2019. Ralf Herbrich, Tom Minka, and Thore Graepel. TrueSkill™: A Bayesian skill rating system. In *Advances in* Neural Information Processing Systems (NIPS), 2006. Daniel Hernández, Charles Takashi Toyin Gbadamosi, James Goodman, and James Alfred Walker. Metagame autobalancing for competitive multiplayer games. In *IEEE Conference on Games (CoG)*, 2020. Robin Hunicke. The case for dynamic difficulty adjustment in games. In *ACM SIGCHI International* Conference on Advances in computer entertainment technology (ACE), 2005. Alexander Jaffe, Alex Miller, Erik Andersen, Yun-En Liu, Anna Karlin, and Zoran Popovic. Evaluating competitive game balance with restricted play. In *AAAI Conference on Artificial Intelligence and Interactive* Digital Entertainment (AIIDE), 2012. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *International Conference on* Learning Representations (ICLR), 2014. Donny Kristianto. 2023 gaming spotlight: Mobile is set to surpass $108 billion this year - maintaining its 2.7x global revenue lead over pc/mac, 2023. URL https://www.data.ai/en/insights/mobile-gaming/ 2023-gaming-spotlight-report. Daniel Lazewatsky. melo - re-evaluating evaluation. https://dclaz.github.io/mELO/index.html, 2024. Accessed: 2024-07-22. Steve Levkoff. On balance-optimizing the pvp experience, 2014. URL https://www.gamedeveloper.com/ business/on-balance-optimizing-the-pvp-experience. Shiyu Li, Hao Ma, and Xiangyu Hu. Neural image beauty predictor based on bradley-terry model. *CoRR*, abs/2111.10127, 2021. Chiu-Chou Lin, Wei-Chen Chiu, and I-Chen Wu. An unsupervised video game playstyle metric via state discretization. In *Conference on Uncertainty in Artificial Intelligence (UAI)*, 2021. Chiu-Chou Lin, Wei-Chen Chiu, and I-Chen Wu. Perceptual similarity for measuring decision-making style and policy diversity in games. *Transactions on Machine Learning Research*, 2024. James MacQueen et al. Some methods for classification and analysis of multivariate observations. In Berkeley symposium on mathematical statistics and probability, volume 1. Oakland, CA, USA, 1967. Tobias Mahlmann, Julian Togelius, and Georgios N. Yannakakis. Evolving card sets towards balancing dominion. In *IEEE Congress on Evolutionary Computation (CEC)*, 2012. Mihail Morosan and Riccardo Poli. Automated game balancing in ms pacman and starcraft using evolutionary algorithms. In Giovanni Squillero and Kevin Sim (eds.), *Applications of Evolutionary Computation* (EvoApplications), 2017. Sofia Maria Nikolakaki, Ogheneovo Dibie, Ahmad Beirami, Nicholas Peterson, Navid Aghdaie, and Kazi A. Zaman. Competitive balance in team sports games. In *IEEE Conference on Games (CoG)*, 2020. 
Jeannie Novak, Meaghan O'Brien, and Jim Gish. *Game development essentials*, volume 3. Delmar Cengage Learning, 2012. Sumedh Pendurkar, Chris Chow, Luo Jie, and Guni Sharon. Bilevel entropy based mechanism design for balancing meta in video games. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023. Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, et al. Mastering the game of stratego with model-free multiagent reinforcement learning. *Science*, 2022. Muhammad Farrel Pramono, Kevin Renalda, and Harco Leslie Hendric Spits Warnars. Matchmaking problems in moba games. *Indonesian Journal of Electrical Engineering and Computer Science*, 2018. Simão Reis, Luís Paulo Reis, and Nuno Lau. VGC AI competition - A new model of meta-game balance AI competition. In *IEEE Conference on Games (CoG)*, 2021. Simão Reis, Rita Novais, Luís Paulo Reis, and Nuno Lau. An adversarial approach for automated pokémon team building and metagame balance. *IEEE Trans. Games*, 16, 2024. Falko Rheinberg. Intrinsic motivation and flow. *Motivation Science*, 2020. Florian Rupp, Manuel Eberhardinger, and K. Eckert. Balancing of competitive two-player game levels with reinforcement learning. *2023 IEEE Conference on Games (CoG)*, 2023. Jesse Schell. *The Art of Game Design: A book of lenses*. CRC press, 2008. Tim Schmitz. A better way to measure game balance using game theory, 2022. URL https:// quantimschmitz.com/2022/12/07/a-better-way-to-measure-game-balance-using-game-theory/. Michael Sellers. *Advanced game design: a systems approach*. Addison-Wesley Professional, 2017. Woncheol Shin, Gyubok Lee, Jiyoung Lee, Eunyi Lyou, Joonseok Lee, and Edward Choi. Exploration into translation-equivariant image quantization. In *International Conference on Acoustics, Speech and Signal* Processing (ICASSP), 2023. Nenad Tomašev, Ulrich Paquet, Demis Hassabis, and Vladimir Kramnik. Reimagining chess with alphazero. Communications of the ACM, 2022. Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Advances in Neural Information Processing Systems (NeurIPS), 2017. Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, L. Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander Sasha Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom Le Paine, Caglar Gulcehre, Ziyun Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy P. Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 2019. Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved VQGAN. In International Conference on Learning Representations (ICLR), 2022. Jiahui Zhang, Fangneng Zhan, Christian Theobalt, and Shijian Lu. Regularized vector quantization for tokenized image synthesis. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2023. 
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2023.

## A Appendix

## A.1 Enhancing Codebook Utilization Through VQ Mean Loss

We tackle the issue of underutilization in codebooks for Vector Quantization (VQ) by introducing the VQ Mean Loss technique. This method significantly improves the use of the embedding space in VQ models. As depicted in Figure 5, traditional VQ-VAE approaches depend on nearest neighbor selection for determining latent vectors, which can lead to sparse utilization of the codebook. The integration of VQ Mean Loss obviates the need for meticulous codebook initialization or hyperparameter fine-tuning to overcome low utilization problems.

![23_image_0.png](23_image_0.png)

Figure 5: Demonstration of codebook utilization improvement using VQ Mean Loss.

## A.2 Illustrative Examples Of Top-D Diversity Measure And Top-B Balance Measure

We elucidate the Top-D Diversity Measure with an example in Figure 6. When utilizing a single scalar for strength measurement of compositions, identifying the composition with maximum strength (denoted as A in the figure) is straightforward. By defining an acceptable win value gap G, for instance, G = 5%, which may represent a game's error margin, compositions with a rating equal to or greater than 1.47 are considered playable; by Lemma 5.6, this threshold is $R(c) \geq R(c_{top}) \cdot \frac{0.5 - G}{0.5 + G}$. Such a demarcation is akin to the tier tables commonly created within gaming communities, suggesting that our rating table could facilitate the construction of insightful analyses or tier tables.

![23_image_1.png](23_image_1.png)

Figure 6: Demonstration of the Top-D Diversity Measure, showcasing compositions with ratings above a specified threshold as playable (here, the Top-D Diversity Measure is 4).

For the Top-B Balance Measure, we present a simplified example in Figure 7. Here, compositions are assigned ratings and categorized according to the Rock-Paper-Scissors framework. In this extended gameplay, the highest-rated composition within each category is presumed to be dominant. Therefore, to assess game balance, one only needs to examine the win value relationships between these top compositions. Compositions that are dominated may represent beginner-level or less costly options, requiring players to progress and level up to obtain higher-strength compositions for ultimately fair competition.

| Rock-Comp | Rating | Paper-Comp | Rating | Scissors-Comp | Rating |
|-----------|--------|------------|--------|---------------|--------|
| R1 | 1.7 | P1 | 1.9 | S1 | 1.3 |
| R2 | 1.6 | P2 | 1.4 | S2 | 1.2 |
| R3 | 1.3 | P3 | 1.1 | S3 | 1.1 |
| R4 | 1.2 | P4 | 0.9 | S4 | 1.0 |
| R5 | 1.0 | P5 | 0.7 | S5 | 0.9 |
| R6 | 0.8 | P6 | 0.2 | S6 | 0.8 |
| ... | | ... | | ... | |

| Counter | Rock-Comp | Paper-Comp | Scissors-Comp |
|---------------|-----------|------------|---------------|
| Rock-Comp | 0 | -50% | +50% |
| Paper-Comp | +50% | 0 | -50% |
| Scissors-Comp | -50% | +50% | 0 |

Figure 7: Explanation of the Top-B Balance Measure, illustrating the assessment of balance by analyzing top compositions within each Rock-Paper-Scissors category (here, the Top-B Balance Measure is 3).

## A.3 Rating Tables And Counter Tables In Online Games

This appendix provides a discussion of the rating tables and counter tables derived from our first model in Age of Empires II, Hearthstone, and Brawl Stars. Each game's tables reveal interesting insights into the dynamics of team compositions and their interactions. We also explore the rationality and categorization inferred from these tables.

## A.3.1 Age Of Empires II

The 9 × 9 counter table for Age of Empires II (Figure 8) elucidates the complex interplay of civilization strengths, weaknesses, and counter strategies in the game.
With each civilization boasting unique attributes that cater to different playstyles, the table categorizes them into nine distinct groups, reflecting their strategic affinities and shared advantages. The groups are delineated as follows:

1. **Technological and Age Advancement Group**: Civilizations such as the Malay with accelerated age advancement, the Malians with their fast university research, and the Bulgarians with free militia-line upgrades and significant cost reductions for Blacksmith and Siege Workshop technologies enjoy a strategic edge in pacing the game.

2. **Heavy Cavalry Group**: Notable for their robust cavalry units, civilizations like the Franks and Lithuanians dominate open-field engagements with their superior mobility and combat prowess.

3. **Anti-Cavalry and Anti-Archer Group**: This cohort, including the Goths with their economical infantry and the Indians with their camel units, specializes in countering cavalry and archers, altering the flow of battle with their unique troop compositions.

4. **Infantry Dominance Group**: Comprising civilizations with formidable infantry units, this group excels in foot-soldier combat, sustaining front-line engagements and applying pressure through sheer force.

5. **Gunpowder-Intensive Group**: Civilizations in this category leverage the destructive capacity of gunpowder units to gain a decisive advantage in warfare.

6. **Cavalry and Horse Archer Factions**: Balancing strengths between cavalry and horse archers, these civilizations maintain flexible and dynamic combat tactics, adept at swift raids and retreats.

7. **Archery Group**: Dominated by civilizations with powerful ranged units, this group wields archers as the cornerstone of their military strategy, excelling in long-range engagements.

8. **Economic Powerhouses**: These civilizations thrive on economic prowess, underpinning their military campaigns with robust economies and resource accumulation.

9. **Trash and Counter Unit Group**: With a focus on cost-effective 'trash units' and specialized counter units, this group adeptly negates enemy strategies, maximizing efficiency in resource management.

![25_image_0.png](25_image_0.png)

Figure 8: The 9 × 9 counter table for Age of Empires II, illustrating the intricate balance of civilization match-ups.
Top-rated civilization in each category (excerpt of the full table):

| | Category 1 | Category 2 | Category 3 | Category 4 | Category 5 | Category 6 | Category 7 | Category 8 | Category 9 |
|------------------|------------|------------|-------------|------------|------------|------------|------------|------------|------------|
| Top civilization | Malay | Franks | Hindustanis | Romans | Turks | Persians | Mongols | Poles | Incas |
| Rating | 1.00146 | 0.99444 | 1.06402 | 1.08145 | 1.03038 | 0.99886 | 0.99319 | 0.91685 | 1.02497 |

Figure 9: The rating table for Age of Empires II, depicting the overall strength and viability of each civilization.

The rating table (Figure 9) complements the counter table by offering a quantitative assessment of each civilization's overall strength. This allows for a broader perspective beyond direct match-ups, giving insight into how each civilization fares in the general meta.

The counter table not only outlines how civilizations within the same group perform against each other but also illustrates the inherent strengths and weaknesses they possess against other groups, shaped by their distinct technologies, units, and economic bonuses. For instance, the Heavy Cavalry Group is susceptible to the powerful Anti-Cavalry capabilities of civilizations like the Hindustanis or Goths. They may also find themselves at a disadvantage against Infantry-dominated factions but can leverage their cavalry's mobility to gain an upper hand against civilizations that rely heavily on gunpowder or archery for ranged attacks. Goths, while typically facing a disadvantage in matchups against other Infantry-focused civilizations, prove to be highly effective against Archery-based groups. It is commonly believed that Infantry-centric civilizations can be countered by those that specialize in gunpowder units and archers; however, they are usually quite efficient against civilizations that rely on 'trash units'. Economically focused civilizations display more flexibility and tend to have less pronounced counter relationships.

Recognizing and understanding these counter dynamics is essential for competitive play. It enables players to predict and neutralize the strategies of their opponents, resulting in more sophisticated and informed decision-making both prior to and during matches. This strategic depth accentuates the long-standing charm of Age of Empires II as a competitive real-time strategy game, where tactical knowledge and strategic planning are as crucial as agility and execution.
## A.3.2 Hearthstone

The 9 × 9 Hearthstone counter table (Figure 10) represents a strategic breakdown of deck matchups, reflecting how various playstyles interact within the game. Each category is defined by distinct tactical approaches, and the table illustrates the expected performance of these categories against one another, with green indicating a favorable matchup and red indicating a disadvantage.

![27_image_0.png](27_image_0.png)

Figure 10: The 9 × 9 counter table for Hearthstone, detailing the strengths and weaknesses of various deck archetypes.

1. **High-Value Control Archetypes (Category 1)**: These decks, often Singleton with no duplicate cards, rely on high-value individual cards and powerful board clears, such as the notorious Reno Jackson hero card, to win by value over time. Examples include Highlander Shaman, Paladin, and Demon Hunter, as well as Control Priest and Blood Death Knight.

2. **Tempo-Dependent Decks (Category 2)**: These archetypes require a specific mana curve to play cards efficiently and can dominate when curving out correctly. Decks like Treant Druid and Big Rogue, which need to establish a board and then buff it, fall into this category.

3. **Combo-Reliant Archetypes (Category 3)**: These decks hinge on key cards to enable combinations but lack exceptionally fast draw engines. Naga Priest and Dragon Druid, which require a mix of Naga and spells, or Dragon cards in hand, respectively, are typical of this category.

4. **Midrange Archetypes with Strong Creatures (Category 4)**: This category thrives on deploying minions with both high attack and health, sustaining board presence to win. Earthen Paladin is a prime example, using minions that can endure and dominate board trades. Their area-of-effect spells also give them an advantage against swarm-based strategies (Category 7).

5. **Midrange Value Decks (Category 5)**: These decks aim to overwhelm the opponent with the sheer quality of their cards, not necessarily through a quick victory but through sustained pressure and superior trades.

6. **Diverse Midrange Archetypes (Category 6)**: A mix of decks that do not fit neatly into one archetype but generally win by card value. They often do not have stark counter relationships and tend to have more even matchups.

7. **Aggro and Snowball Archetypes (Category 7)**: These decks look to establish an early board presence and snowball to victory. They perform well against decks that struggle to clear multiple threats, such as general aggro decks (Category 9).

8. **Value-Oriented Decks with Combo Finishers (Category 8)**: This group features decks that maintain board control with ample resources and are capable of executing a one-turn-kill (OTK) combo in the late stages of the game. Notably, Control Warrior can build up a significant armor stack to unleash a massive hit, while Rainbow Mage is known for its combo potential to achieve an OTK.

9. **Aggressive Paladin Archetypes (Category 9)**: These decks spread the board with multiple threats early on and aim to end the game before the opponent stabilizes.

The analysis reveals that control archetypes (Category 1) are effective against high-value creature decks (Category 4) due to their abundance of removal options. However, they struggle against decks with late-game OTK capabilities (Category 8) because such decks can bypass control strategies with a sudden win condition.
Tempo (Category 2) decks falter against stable midrange (Category 4) due to board clears disrupting their momentum, while also struggling against the faster-paced aggro decks (Category 9) that can establish a quicker board presence. Combination-reliant decks (Category 3) tend to underperform against aggressive Paladin strategies (Category 9) due to the Paladins' ability to conclude games before combos can be assembled. Meanwhile, the snowball potential of decks in Categories 4 and 7 makes them strong against tempo decks but vulnerable to control archetypes with multiple board clears. Value-oriented decks with combo finishers (Category 8) excel against control decks by circumventing their gradual value game with a sudden win condition, yet they might struggle against decks with large minions that their additional resources can't efficiently counter.

Through the counter table, Hearthstone players can better strategize their deck choices and gameplay, considering the prevalent matchups in the current meta. The table thus serves as a critical tool for players aiming to optimize their strategies and achieve a higher win rate in competitive play.

| Category1 | Category2 | Category3 | Category4 | Category5 | Category6 | Category7 | Category8 | Category9 |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| Highlander Shaman (1.02839) | Treant Druid (1.48915) | Naga Priest (1.20033) | Earthen Paladin (1.36393) | Highlander Hunter (1.27484) | Mining Rogue (1.00962) | Enrage Warrior (0.99643) | Control Mage (1.13171) | Aggro Paladin (1.17278) |
| Highlander Paladin (0.91191) | Big Rogue (1.21318) | Dragon Druid (1.15328) | Pure Paladin (1.02695) | Arcane Hunter (1.12621) | Sludge Warlock (0.91738) | Undead Priest (0.78334) | Mining Warrior (0.98667) | Showdown Paladin (1.00791) |
| Highlander Priest (0.81985) | Cleave Hunter (1.03308) | Mech Rogue (1.00164) | Secret Mage (1.05124) | Silver Hand Paladin (0.85971) | Relic Demon Hunter (0.49378) | Control Warrior (0.87570) | | |
| Control Priest (0.75925) | Totem Shaman (0.98028) | Secret Hunter (0.95550) | Highlander Blood Death Knight (1.03096) | Elemental Mage (0.84178) | Unholy Death Knight (0.49092) | Rainbow Mage (0.80490) | | |
| Highlander Demon Hunter (0.65419) | Rock 'n' Roll Warrior (0.89733) | Elemental Shaman (0.79008) | Highlander Druid (1.00740) | Aggro Demon Hunter (0.81080) | Ogre Priest (0.70467) | | | |
| Blood Death Knight (0.47515) | Curse Warlock (0.82253) | Mech Paladin (0.77766) | Plague Death Knight (0.96237) | Wishing Rogue (0.77634) | Highlander Mage (0.63054) | | | |
| Highlander Warrior (0.56461) | Taunt Warrior (0.64670) | Big Shaman (0.90223) | Thaddius Warlock (0.61308) | Moonbeam Druid (0.77235) | | | | |
| Thaddius Druid (0.55146) | Secret Rogue (0.62855) | Control Warlock (0.88740) | Ogre Rogue (0.70925) | Breakfast Hunter (0.37031) | | | | |

Figure 11: Rating table for Hearthstone decks, showcasing the top eight decks within each category based on their performance in the current meta (rating in parentheses). The ratings are indicative of the deck's overall strength and potential to win matches within their respective categories.
## A.3.3 Brawl Stars

In games like Brawl Stars, where compositions are built from complex combinations of elements, much more game knowledge is required to explain the possible meaning of the counter categories obtained from learning. For example, if we consider compositions of only three heroes in Brawl Stars, there are $C_{64}^3 = 41664$ available compositions.

| Method | Training Accuracy | Testing Accuracy |
|--------|-------------------|------------------|
| WinValue | 56.7 | 56.7 |
| PairWin | 80.0 | 74.9 |
| BT | 50.9 | 50.7 |
| NRT | 58.0 | 57.1 |
| NCT M=3 | 58.1 | 57.1 |
| NCT M=9 | 61.5 | 60.2 |
| NCT M=27 | 63.5 | 61.9 |
| NCT M=81 | 65.6 | 63.7 |

Table 8: Strength relation accuracies (%) in training and testing for Brawl Stars 2 Heroes.

It is not easy to check all these compositions, let alone their pairwise relationships. According to Table 1, the counter relationship is not clear in Brawl Stars with three heroes and the corresponding game modes and maps: using NRT already achieves 95.9% average accuracy, and adding an M = 81 counter table improves accuracy by only an extra 1.3%. This implies that the scalar rating value already reflects most strength relationships and there is not much significant information that an extra counter table can contribute.

Thus, we trained another composition setting in Brawl Stars. We only considered two-hero combinations as the compositions, and for each game match, we split it into $C_3^2 \times C_3^2 = 9$ game matches as the training and testing dataset. The corresponding accuracies of this setting are listed in Table 8. In this setting, the accuracy improvement with the M = 9 counter table is 3.5%, implying that some meaningful categories are identified for describing counter relationships.

We selected the best model of the M = 9 counter table among five models, with a training accuracy of 62.0% and a testing accuracy of 60.2%. The corresponding accuracy improvements over the NRT model are 4.0% and 3.2%. Figure 12 shows the result of this counter table. We only list the first nine compositions for each category and try to analyze these compositions with our game knowledge to check whether they also have some clear meaning. The groups are delineated as follows:

1. **High Mobility and Sustained Output**: Heroes like Sandy, Darryl, and Edgar possess high mobility, while others like 8-Bit, Sprout, and Mr. P provide sustained output.
2. **Versatile Skills and Comprehensive Abilities**: Heroes such as Janet, Lola, and Gray have transformative abilities, allowing them to switch states.
3. **Strong Crowd Control and Suppression**: Heroes like Barley, Frank, and Buster have powerful control and suppression abilities.
4. **High Burst Damage and Support**: Heroes like Edgar, Brock, and Fang offer high burst damage, while others like Gus, Gene, and Tara provide support and assistance.
5. **Long-Range Control and Area Damage**: Heroes like Sprout, Squeak, Barley, and Grom possess long-range control and area damage abilities.
6. **Multi-Skill Coordination**: Heroes like Bonnie, Ruffs, and Byron can coordinate multiple skills, providing sustained output and support.
7. **Sustained Output and Support**: Heroes like Mandy, 8-Bit, and Pam provide sustained output and support.
8. **Versatility and Flexibility**: Heroes like Gale, Gus, and Edgar offer versatility and flexibility, adapting to various tactics.
9. **Quick Burst and Control**: Heroes like Chester, Shelly, and Buzz possess quick burst and control abilities.

![30_image_0.png](30_image_0.png)

Figure 12: The 9 × 9 counter table for Brawl Stars under two-hero combinations.

Since there are $C_{64}^2$ available compositions and our game knowledge is limited, we do not guarantee that these analyses are entirely correct, but we can find some interesting relationships. **High Mobility and Sustained Output** has an advantage over the Long-Range Control and Area Damage category due to its high mobility, but may be countered by Sustained Output and Support in modes requiring prolonged engagement. The three categories with versatility (2, 6, and 8) tend to form a cycle where 2 > 6 > 8 > 2. These kinds of ad-hoc explanations may help game designers change the game mechanisms from a more general perspective. If a game's counter table is very hard to explain, or the tables diverge when trained from different random seeds, we may need more detailed attributes of the combination elements to summarize a reasonable update direction.

## A.4 Neural Network Architectures And Details

In our study, we employ distinct neural network architectures tailored to the specific requirements of each player-versus-player (PvP) game in our dataset. Below we provide a detailed description of the network designs and input features for each team composition:

- Simple Combination Game: 20-dimensional binary vector representing elements.
- Rock-Paper-Scissors: 3-dimensional one-hot vector for category representation.
- Advanced Combination Game: 23-dimensional vector combining 20-dimensional binary element encoding with 3-dimensional one-hot category encoding.
- Age of Empires II: 45-dimensional one-hot vector for civilization representation.
- Hearthstone: 91-dimensional one-hot vector for deck naming.
- Brawl Stars: 115-dimensional vector encoding the complex dynamics of Trio Modes. This includes a binary encoding for the presence of 64 unique heroes, one-hot encodings for 43 distinct maps plus an indicator for any map not listed, and a similar encoding scheme for the 6 game modes and any unrecorded mode. This encoding captures the essence of team compositions, map strategies, and game modes, essential for predicting match outcomes in Brawl Stars.
- League of Legends: 136-dimensional binary vector for champion representation.

Each network is constructed to handle the dimensionality and characteristics of the input features:

- WinValue Network: Processes individual compositions to output direct win rates.
- PairWin Network: Predicts the pairwise win rate between two compositions.
- BT Network: Implements a linear approximation of the Bradley-Terry model.
- NRT Network: Offers a non-linear approximation of the Bradley-Terry model, capturing complex relationships.
- NCT Network: The neural counter table, which additionally considers counter relationships.

For each neural network configuration, we present architectural diagrams that illustrate the structure and flow of data through the network; these network configurations are shared across all games. These visuals serve to complement the textual description and provide an at-a-glance understanding of each model's design.
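To make the input formats above concrete, here is a minimal sketch of how two of these composition encoders might look in Python. The helper names (`encode_aoe2`, `encode_lol`) and index maps are our own illustrative assumptions, not the code used in the paper.

```python
import numpy as np

# Hypothetical dimensions taken from the feature list above; the real
# ID-to-index tables are dataset-specific and not shown here.
AOE2_CIVS = 45       # one civilization per composition (one-hot)
LOL_CHAMPIONS = 136  # five champions per team composition (multi-hot)

def encode_aoe2(civ_index: int) -> np.ndarray:
    """45-dim one-hot vector for an Age of Empires II civilization."""
    x = np.zeros(AOE2_CIVS, dtype=np.float32)
    x[civ_index] = 1.0
    return x

def encode_lol(champion_indices: list[int]) -> np.ndarray:
    """136-dim binary (multi-hot) vector for a League of Legends team."""
    x = np.zeros(LOL_CHAMPIONS, dtype=np.float32)
    x[champion_indices] = 1.0
    return x

# Example: a team of five champions becomes a 136-dim multi-hot vector.
team = encode_lol([3, 17, 42, 88, 120])
assert team.sum() == 5
```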
## A.4.1 Formulations Of Strength Relation Methods

For a more formal definition of the methods used in the strength relation classification task, we use the following definitions. For each match outcome, we have two compositions, A and B, and a strength relation label with three valid states: weaker/same/stronger.

The ground truth of each matchup $(A, B)$ is determined by the tabular PairWin method, which simply counts the average win value of this composition matchup. For example, if a matchup $(A, B)$ has 100 game results with 60 wins, 30 losses, and 10 ties, the win value is $x = (60 \times 1.0 + 30 \times 0.0 + 10 \times 0.5)/100 = 0.65$. For $x > 0.501$, the strength relation label is stronger; for $x < 0.499$, the strength relation label is weaker; and for $0.499 \le x \le 0.501$, the strength relation label is same.

- WinValue: Given a win value estimator $\mathrm{WinValue}_{\theta}(C)$ of composition $C$ without considering its opponent: if $\mathrm{WinValue}_{\theta}(A) - \mathrm{WinValue}_{\theta}(B) > 0.001$, the prediction is stronger; if $\mathrm{WinValue}_{\theta}(B) - \mathrm{WinValue}_{\theta}(A) > 0.001$, the prediction is weaker; if $|\mathrm{WinValue}_{\theta}(A) - \mathrm{WinValue}_{\theta}(B)| \le 0.001$, the prediction is same.
- PairWin: Given a win value estimator $\mathrm{PairWin}_{\theta}(C_1, C_2)$ of compositions $C_1$ and $C_2$ in a matchup: if $\mathrm{PairWin}_{\theta}(A, B) > 0.501$, the prediction is stronger; if $\mathrm{PairWin}_{\theta}(A, B) < 0.499$, the prediction is weaker; if $0.499 \le \mathrm{PairWin}_{\theta}(A, B) \le 0.501$, the prediction is same.
- BT Network: Given a strength estimator $R_{\theta}(C)$ for composition $C$: if $\frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} > 0.501$, the prediction is stronger; if $\frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} < 0.499$, the prediction is weaker; if $0.499 \le \frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} \le 0.501$, the prediction is same.
- NRT: This method is the same as BT but uses a non-linear activation function in the neural networks.
- NCT Network: Given a strength estimator $R_{\theta}(C)$ for composition $C$ and an extra counter table $W_{\theta}(C_1, C_2)$ for adjusting the win value between compositions $C_1$ and $C_2$ in a matchup: if $\frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} + W_{\theta}(A, B) > 0.501$, the prediction is stronger; if $\frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} + W_{\theta}(A, B) < 0.499$, the prediction is weaker; if $0.499 \le \frac{R_{\theta}(A)}{R_{\theta}(A)+R_{\theta}(B)} + W_{\theta}(A, B) \le 0.501$, the prediction is same.

We count the accuracy of the strength relation label predictions.

![32_image_0.png](32_image_0.png)

Figure 13: The architecture of the WinValue neural network.

![33_image_0.png](33_image_0.png)

Figure 14: The architecture of the PairWin neural network.

![34_image_0.png](34_image_0.png)

Figure 15: The architecture of the linear Bradley-Terry (BT) model network.

![35_image_0.png](35_image_0.png)

Figure 16: The architecture of the Non-linear Rating Table (NRT) network.

![36_image_0.png](36_image_0.png)

Figure 17: The architecture of the Neural Counter Table (NCT) network.
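The BT, NRT, and NCT decision rules above share one thresholding pattern. The following is a compact sketch of that rule, assuming trained estimators have already produced the scalar ratings and, for NCT, the counter-table adjustment; the function name and defaults are illustrative, not the paper's code.

```python
def predict_relation(rating_a: float, rating_b: float,
                     counter_adjust: float = 0.0,
                     lo: float = 0.499, hi: float = 0.501) -> str:
    """Classify a matchup as stronger/weaker/same from a Bradley-Terry-style
    win probability plus an optional counter-table adjustment (NCT)."""
    p = rating_a / (rating_a + rating_b) + counter_adjust
    if p > hi:
        return "stronger"
    if p < lo:
        return "weaker"
    return "same"

# BT/NRT prediction: no counter adjustment.
print(predict_relation(1.2, 1.0))         # "stronger" (0.545 > 0.501)
# NCT prediction: the counter table shifts the predicted win value.
print(predict_relation(1.0, 1.0, -0.05))  # "weaker" (0.45 < 0.499)
```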
Review 1:

Summary: Strategies in strategic interactions can be given scalar ratings that capture in some sense their "strength". However, scalar ratings cannot predict cycles that may appear in a strategic interaction, and instead can only capture some transitive relationship between the strategies. Knowing if a strategy dominates another from ratings is useful, but one cannot be sure that there are no cycles. This paper focuses on PvP games for the "strategic interactions" and team compositions for the "strategies". It proposes a multidimensional rating that attempts to provide a more accurate win-rate prediction compared to single-dimensional approaches (averages, Elo, etc.). It is based on finding a base win-rate and adding a residual network to further hone the win-rate. Importantly, this residual has smaller dimensionality, because it clusters compositions, allowing the approach to scale. To demonstrate the utility of their rating method, they propose balance measures which may help game designers better balance their compositions so that each composition fits some sort of niche (and is not strictly dominated by all other compositions).

Strengths and Weaknesses:

Strengths
1. Single-dimensional ratings are indeed problematic and identifying cycles is important. This work is tackling an interesting problem.
2. Attempts to tackle the scale problem using category clustering.
3. I enjoyed the case studies of real games in the appendix.

Weaknesses
1. There are no non-neural-network baselines. I would have liked to see baselines against simple average, Elo, and multidimensional Elo.
2. I feel the space/time complexity analysis is misleading or at least should be clarified.
3. I feel the wrong parts of the paper are emphasized (clarified below).

Minor Comments
1. "MOBA" is undefined in the abstract.
2. "Beginning with 3 to capture basic cycle dominance…". Cycles can appear with just two strategies (matching pennies). I suppose this paper mainly focuses on symmetric two-player zero-sum, which requires three actions.
3. Figure 2: Some boxes are dashed while others are not.
4. Lemma 5.6 - is this a Lemma?
5. I'm unsure of the distinction between "PairWin" and "WinValue".

Requested Changes:

[Important] Please add simple baselines (average, Elo, mElo) in Table 1.

[Important] The authors need to be careful when talking about space complexity. Significant information will be stored in the weights of the neural network. Having approximately 6x128x128 weights in a neural network to rate compositions with N << 6x128x128 and describing it as O(N) will raise a few eyebrows. Please reword or de-emphasise the space complexity. Or only mention it in the O(M^2) part - where it is better motivated.

[Important] Furthermore, the authors need to be careful with time complexity. A lot of network pre-training is involved and the neural network weights cannot be used for different datasets. Therefore it is misleading to say that it only takes O(N+M^3) to do top B balance, because it involves so much pre-training.

There are a bunch of definitions, assumptions, propositions and lemmas scattered in the text. I did not find this format easy to follow. Is Lemma 5.6 a lemma, for example? The proofs could be more precise.

[Important] The use of neural networks is under-motivated. The main advantage of using a function approximator is to exploit either its a) generalization capabilities, b) representation efficiencies, or c) both.
I guess the LOL dataset does have many more possible compositions than examples, but the rest do not, and therefore do not utilize generalization. And for many of the datasets, computing statistics over the whole dataset requires fewer parameters than the number of weights in the network. I think it is particularly important to address this question because the authors use the network to enumerate the O(N) statistics after training it. Why not just compute the exact statistics directly? (I get that there is an implicit defense of this for the O(M^2) residual part, but I think this needs to be addressed directly in the text).

The method proposed only works with symmetric, 2-player, zero-sum games. The team games are analyzed as two-player games, where each team is a player. This is not a weakness, as many other rating schemes have this limitation, but I think it should be mentioned in the text.

The "secret sauce" of the paper seems to be the clustering part which, using a differentiable model, clusters the compositions to allow one to deal with statistics that are only O(M^2). This allows one to scale. I *like* this scalability property - I think that is the interesting part of the paper. I suppose I would have preferred the paper to focus on selling that part rather than distracting the readers with lemmas that do not feel tight, over-emphasising the importance of neural network training, or being a bit misleading with complexity.

I hope this review is helpful and improves your paper.

Broader Impact Concerns: I have no ethics concerns.

==================================================

Review 2:

Summary: The paper introduces methodologies for comparing the strengths of team compositions, and measuring the degree of balance between compositions in games, with an emphasis on video games. The introduced methods aim to address weaknesses of approaches based on analysis of win-rates by taking into account counter relationships, and by using neural representations in embedding space to avoid high-cost enumeration over the combinatorial space of compositions. The paper makes contributions in several areas:

* The design and training of neural networks to represent team composition strength and counter relationships. This entails encoding team compositions, and using vector quantization to cluster them into a set number of categories, to reduce the complexity of analysing pairs. The authors address the known problem of low codebook utilisation in vector quantization by introducing a new loss term to encourage all codes to move towards the embedded composition, rather than only the nearest neighbour.
* Introduction of metrics to quantify the level of balance in a game, in terms of the number of viable compositions. These metrics build on the learned counter relationships to reduce the algorithmic complexity to scale with the number of categories, rather than the number of possible compositions.
* The paper includes analysis of case studies using the learned ratings and counter relationships in order to validate the learned relationships with human subject matter expert knowledge, and provide examples of how the knowledge can be used to inform interventions to improve the gaming experience.

Strengths and Weaknesses:

# Strengths

* The introduction and application of the dominance based metrics to measure strength and balance in combinatorial spaces with transitivity demonstrates value towards understanding and improving balance in games.
* The NCT method of learning counter relationships combines several good ideas.
  * The learning signal is derived from the "residual win value", identifying that the difference between actual outcome and predicted outcome based on scalar strength contains the composition counter information.
  * The methods use neural encoding in general and vector quantization in particular to represent the large composition space in a much smaller and more manageable space with interpretable membership qualities.

# Weaknesses

* In Tables 1 and 2, several games exhibit large discrepancies between the train and test accuracies, which usually indicates overfitting. This is not addressed in the paper. In particular, the proposed NCT method achieves >90% train accuracy on League of Legends with only a 51% test accuracy, indicating almost no gain over random guessing.
* The baseline models of WinValue, PairWin, and Bradley-Terry are not described as clearly as NRT and NCT. Additional information is given in the Appendix but this is not mentioned in the baseline descriptions. WinValue and BT appear nearly identical to NRT, with WinValue having a difference in the label (win rate vs rating), and BT lacking non-linear activation functions in the network layers. These may be more clearly represented as an ablation of variants of the NRT method.
* The paper introduces a new method to address low codebook utilisation in vector quantization. While they mention other methods which address this problem, they do not provide a comparison of VQ Mean Loss to any other method.
* In the case studies, analysis of game balance is made under different experimental settings such as the number of categories. There are some instances of unclear conclusions and apparent contradictions. In the Hearthstone study the authors state that "From Top-D diversity… the balance is not good", and "If we extend the counter table to M = 9, … the balance is surprisingly good", and "When we use a larger counter table, M = 27 or M = 81, the game is balanced enough." All of these experiments are done on the same dataset representing the game in the same state of balance. The descriptions are referring to how the metrics portray balance differently under different settings, not real changes to the balance of the game.
* In the AoE II and Hearthstone case studies, experiments are run with M=81, indicating 81 composition categories. However, in the datasets used there are only 45 comps in AoE II and 58 for Hearthstone after filtering to comps with more than 100 appearances. This means there are more possible categories than compositions in these experiments, which runs contrary to the motivation of reducing the comparison space.

Requested Changes:

We request additional information and clarification in the following areas:

* Strengthen
  * In section 4.1, counts of the number of instances and possible compositions are listed for each of the datasets. It would also be informative to include the number of unique compositions represented in the datasets. This can highlight the differences between games like AoE II, where a composition is defined by one of 45 factions, all of which are represented, and League of Legends, where the composition space is combinatorial and much larger than the number of instances.
  * In eqn. 3, the training objective for the NRT method is described as the mean squared error between the match outcome and the predicted match outcome probability.
    It is more common in Bradley-Terry-inspired models to minimise cross entropy due to the probabilistic nature of the variables. An explanation for the choice of mean squared error would strengthen the understanding.
  * In order to demonstrate the effectiveness of the clustering on the high-dimensional composition spaces, the authors may consider adding a case study on a game with a combinatorial composition space such as League of Legends or Brawl Stars. While full analysis of all comps is not feasible, distribution-level statistics on the cluster membership and champion cluster membership could be enlightening.
  * Authors should provide definitions of gaming terminologies such as "buffing" and "nerfing."
* Critical
  * An explanation for the train/test discrepancy and low test accuracy for the League of Legends experiments is necessary.

Broader Impact Concerns: The included broader impact statement adequately covers concerns.

==================================================

Review 3:

Summary: The paper focuses on the problem of `game balance' that is prevalent in the current video game industry. Although the problem is prevalent, quantifiable metrics are not standardized to measure game balance. Although some measures based on win rates and the entropy of strategies at Nash equilibrium have been proposed in the current literature, the authors claim such methods can be resource-intensive. They propose learning two models, namely the machine-learning-based Bradley-Terry model and the machine-learning-based counter table. The authors use these two models to propose two novel metrics, namely Top-D Diversity and Top-B Balance, which have better time/space complexity when compared to win-rate-based metrics. The authors conclude their paper with an in-depth discussion (ablation study) of the proposed methods, suggesting guidelines for tuning the hyperparameters for specific use cases.

Strengths and Weaknesses:

Strengths
1. The paper presents a good ablation study providing insightful conclusions. Such a study makes it easier for the reader to understand the effect of hyperparameters on performance based on use cases.
2. The paper seems very reproducible, as the paper + appendix provides most of the details required to reproduce the results. (In any case, I encourage authors to release their code).
3. The paper discusses literature from both the video games and ML communities, which helps to motivate the importance of the problem being tackled.

Weakness
1. The writing is not clear in some parts, especially the counter table implementation section. See Requested Changes for more details.
2. The paper does not discuss the related work in an appropriate context.
   1. E.g., How do the papers Bowling et al. 2015, and Perolat et al. 2022, support your claim that computing Nash equilibrium is expensive in general? Yes, it might be expensive for the games considered, but not for all games (like rock paper scissors). Pendurkar et al. 2023 present such a study.
   2. Some related (and recent) papers are not discussed (e.g., Hernandez et al. 2020, Reis et al. 2023).
   3. The paper cites Pendurkar et al. 2023 in the section that discusses win rate based metrics (Section 1 Introduction). However, the original paper proposed an entropy-based objective orthogonal to win-rate-based metrics (not cited when you discuss this in the paper). The authors should avoid citing work in the wrong places.
3. Comparison with previous work is missing, making the claims of faster evaluation of metrics weaker.
The paper needs an empirical study of when the proposed metrics are faster/slower to evaluate, and how much these metrics differ from one another (discussing the 'quality' of balance).

Hernandez, Daniel, et al. "Metagame autobalancing for competitive multiplayer games." 2020 IEEE Conference on Games (CoG). IEEE, 2020.

Reis, Simão, et al. "An Adversarial Approach for Automated Pokémon Team Building and Meta-Game Balance." IEEE Transactions on Games (2023).

Requested Changes:

Requested Changes
1. I would like to see a comparison of the compositions considered balanced by (1) the win-rate-based method, (2) the entropy-of-strategies-at-Nash-equilibrium-based method (Pendurkar et al. 2023), and (3) the proposed methods. That is, highlighting the differences not only between the proposed methods but also with the current literature, to support the claim that calculating the proposed metrics is easier (at least in some cases) and better/comparable. These experiments + addressing the related work concerns would address my concerns regarding `Claims And Evidence'.

Writing Changes
1. 'Treant Druid needs a specific nerf': The ML audience is not familiar with terms like 'nerf' and 'buff'. Please define them.
2. In Section 4.1 please mention the composition size per game (similar to 'k' and '|M|' in Pendurkar et al. 2023).
3. Please cite the original work on the siamese neural network (page 2 line 2) (Bromley et al. 1993).
4. The section on the counter table model is not very clear, especially because the background is not appropriately introduced. I would suggest authors add a subsection at the beginning that provides some idea of vector quantization training and defines terms like `codebook'. VQ Mean Loss seems to be a contribution of the paper; the differences from the literature should be clearly highlighted.

Questions (I hope authors use my questions to update their paper as well for clarity).
1. What gameplay policies are used (after composition) for each game to collect the data?
2. When should one use Top-D Diversity and when should one use Top-B Balance? Such a discussion is missing. Based on my understanding, it seems Top-B is more useful when using counters (see the rock-paper-scissors-fire-water variant used in Pendurkar et al. 2023, which might serve as an easy example to demonstrate this).
3. Are the proposed metrics limited to two-player zero-sum games? What are the challenges if one were to scale them to multi-player games (especially with the counter table)? If yes, then please clearly state this while discussing the limitations.
4. Page 2 para 2: "which counts.. given a tolerant win value gap" - what do you mean by `tolerant win value gap'?
5. How is Proposition 5.9 true? Do you assume the ML model for the BT rating is accurate? Further, maybe countering c1 might be easier than countering c2 (this might happen if the categories are not appropriately identified). In such cases, 5.9 might not hold. Please clarify your assumptions here.

Bromley, Jane, et al. "Signature verification using a "siamese" time delay neural network." Advances in neural information processing systems 6 (1993).

Minor
1. Just above Definition 2.1 define the target; ''domination,'' -> ''domination'',
2. Ralf Herbrich, Tom Minka, and Thore Graepel. Trueskilltm: A bayesian skill rating system. In Advances in Neural Information Processing Systems (NIPS), 2006. (Apologies for bad formatting here) ™ should be in upper case (so should be B in Bayesian).
Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: All three reviewers (as well as myself) agree on the relevance of this work to (a subset of) the TMLR audience. We also mostly agree on the claims of the paper being supported by the experiments and discussion, just with one potential concern remaining regarding the discussion and/or comparison to other types of measures of game balance proposed in prior research. In light of this, **I have the following suggestion for what I think would be a very minor revision** (where I invite the authors to also explain if they feel that this would be a bad change to make):

At the end of the third paragraph of Section 6.3, there is currently one sentence alluding to the regularization term that was proposed by Pendurkar et al. (2023) for their entropy-based approach, which, without such a term, could suffer from the existence of undesirable solutions that maximise entropy simply by making every comp behave exactly the same (which would be a "balanced" solution with equal win percentages for all comps, but also uninteresting in terms of player experience). More specifically (not yet discussed in your paper), this regularization term punishes the balancing procedure of Pendurkar et al. (2023) for moving game parameters too far away from the initial parameters, where the initial parameters are assumed to have been set by the game designers. This initial game design is likely not solely informed by wishing to produce a balanced game, but also the wish to ensure that certain comps/decks/characters fit specifically designed playstyles, match a certain theme/personality of a character, and so on. My impression is that such considerations would not solely be relevant to the specific algorithm of Pendurkar et al. (2023), but could be a relevant consideration (maybe: a constraint) for **any** measure of game balance. Regardless of which measure of balance a game designer would use (any of the ones proposed in this paper, or others discussed as related work in Section 6.3 of this paper), there would often be a constraint not to drift too far from the original game design. I would suggest that it may be worth adding a few (e.g., two or three) sentences to discuss these ideas in the larger discussion of Section 6.3.

---

Aside from the above suggestion, I would like to remark to the authors:
- Please do not forget to turn all text back to the regular black text for the camera-ready version.
- Throughout the Appendix, the letter "x" is used in a few places where $\times$ (typeset using `$\times$`) would look much nicer: specifically, when discussing "9x9 counter tables" for Age of Empires 2 and Hearthstone.

==================================================
# Expressive Higher-Order Link Prediction Through Hypergraph Symmetry Breaking

Anonymous authors

![0_image_0.png](0_image_0.png)

Paper under double-blind review

Figure 1: An illustration of a hypergraph of recipes. The nodes are the ingredients and the hyperedges are the recipes. The task of higher order link prediction is to predict hyperedges in the hypergraph. A negative hyperedge sample would be the dotted hyperedge. The Asian ingredient nodes (α) and the European ingredient nodes (β) form two separate isomorphism classes. However, GWL-1 cannot distinguish between these classes and will predict a false positive for the negative sample.

## Abstract

A hypergraph consists of a set of nodes along with a collection of subsets of the nodes called hyperedges. Higher order link prediction is the task of predicting the existence of a missing hyperedge in a hypergraph. A hyperedge representation learned for higher order link prediction is fully expressive when it does not lose distinguishing power up to an isomorphism. Many existing hypergraph representation learners are bounded in expressive power by the Generalized Weisfeiler Lehman-1 (GWL-1) algorithm, a generalization of the Weisfeiler Lehman-1 algorithm. However, GWL-1 has limited expressive power. In fact, induced subhypergraphs with identical GWL-1 valued nodes are indistinguishable. Furthermore, message passing on hypergraphs can already be computationally expensive, especially on GPU memory. To address these limitations, we devise a preprocessing algorithm that can identify certain regular subhypergraphs exhibiting symmetry. Our preprocessing algorithm runs once with complexity linear in the size of the input hypergraph. During training, we randomly replace subhypergraphs identified by the algorithm with covering hyperedges to break symmetry. We show that our method improves the expressivity of GWL-1. Our extensive experiments¹ also demonstrate the effectiveness of our approach for higher-order link prediction on both graph and hypergraph datasets with negligible change in computation.

## 1 Introduction

In real world networks, it is common for a relation amongst nodes to be defined beyond a pair of nodes. Hypergraphs are the most general examples of this. These have applications in recommender systems Lü et al. (2012), visual classification Feng et al. (2019), and social networks Li et al. (2013). Given an unattributed hypergraph, our goal is to perform higher order link prediction (finding missing hyperedges) with deep learning methods while also respecting symmetries of the hypergraph.

Current approaches towards hypergraph representation are based on the Generalized Weisfeiler Lehman 1 (GWL-1) algorithm Huang & Yang (2021), a hypergraph isomorphism testing approximation algorithm that generalizes the message passing algorithm called Weisfeiler Lehman 1 (WL-1) Weisfeiler & Leman (1968) to hypergraphs. Due to message passing locally within a node's neighborhood, Weisfeiler Lehman 1 views a graph as a collection of trees rooted at each node. However, this view can misrepresent the true structure of the graph, in particular its symmetry properties. GWL-1 also views the hypergraph as a collection of rooted trees. These hypergraph representation methods, called hyperGNNs, are parameterized versions of GWL-1. To improve the expressivity of hypergraph/graph isomorphism approximators like GWL-1 or WL-1, it is common to augment the nodes with extra information You et al. (2021); Sato et al. (2021).

¹https://anonymous.4open.science/r/HypergraphSymmetryBreaking-B07F/
We devise a method that instead selectively breaks the symmetry of the hypergraph topology itself, targeting the limitations of the hyperGNN architecture. Since message passing on hypergraphs can be very computationally expensive, our method is designed as a preprocessing algorithm that can improve the expressive power of GWL-1 for higher order link prediction. Since the preprocessing only runs once with complexity linear in the input, we add no computational overhead to training. Similar to a substructure counting algorithm Bouritsas et al. (2022), we identify certain symmetries in induced subhypergraphs. However, unlike in existing work where node attributes are modified, we directly target and modify the symmetries in the topology. During training, we randomly replace the hyperedges of the identified symmetric regular induced subhypergraphs with single hyperedges that cover the nodes of each subhypergraph. We show that our method can increase the expressivity of existing hypergraph neural networks. We summarize our contributions as follows:

- We characterize the expressive power and limitations of GWL-1.
- We devise an efficient hypergraph preprocessing algorithm to improve the expressivity of GWL-1 for higher order link prediction.
- We perform extensive experiments on real world datasets to demonstrate the effectiveness of our approach.

## 2 Background

We go over what a hypergraph is and how these structures are represented as tensors. We then define what a hypergraph isomorphism is.

## 2.1 Isomorphisms On Higher Order Structures

A hypergraph is a generalization of a graph. Hypergraphs allow for all possible subsets over a set of vertices, called hyperedges. We can thus formally define a hypergraph as:

Definition 2.1. *An undirected hypergraph is a pair $\mathcal{H} = (\mathcal{V}, \mathcal{E})$ consisting of a set of vertices $\mathcal{V}$ and a set of hyperedges $\mathcal{E} \subset 2^{\mathcal{V}} \setminus (\{\emptyset\} \cup \{\{v\} \mid v \in \mathcal{V}\})$, where $2^{\mathcal{V}}$ is the power set of the vertex set $\mathcal{V}$.*

We will assume all hypergraphs are undirected as in Definition 2.1. For a given hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$, a hypergraph $\mathcal{G} = (\mathcal{V}', \mathcal{E}')$ is a subhypergraph of $\mathcal{H}$ if $\mathcal{V}' \subseteq \mathcal{V}$ and $\mathcal{E}' \subseteq \mathcal{E}$. For a $\mathcal{W} \subseteq \mathcal{V}$, an induced hypergraph is a subhypergraph $(\mathcal{W}, \mathcal{F} = 2^{\mathcal{W}} \cap \mathcal{E})$. For a given hypergraph $\mathcal{H}$, we also use $\mathcal{V}_\mathcal{H}$ and $\mathcal{E}_\mathcal{H}$ to denote the sets of vertices and hyperedges of $\mathcal{H}$ respectively.

According to the definition, a hyperedge is a nonempty subset of the vertices. A hypergraph with all hyperedges of the same size $d$ is called a $d$-uniform hypergraph. A 2-uniform hypergraph is an undirected graph, or just graph. When viewed combinatorially, a hypergraph can include some symmetries, which are called isomorphisms. On a hypergraph, isomorphisms are defined by bijective structure preserving maps. Such maps are pairs of maps that respect hyperedge structure.

Definition 2.2. *For two hypergraphs $\mathcal{H}$ and $\mathcal{D}$, a structure preserving map $\rho : \mathcal{H} \to \mathcal{D}$ is a pair of maps $\rho = (\rho_\mathcal{V} : \mathcal{V}_\mathcal{H} \to \mathcal{V}_\mathcal{D},\ \rho_\mathcal{E} : \mathcal{E}_\mathcal{H} \to \mathcal{E}_\mathcal{D})$ such that $\forall e \in \mathcal{E}_\mathcal{H},\ \rho_\mathcal{E}(e) \triangleq \{\rho_\mathcal{V}(v_i) \mid v_i \in e\} \in \mathcal{E}_\mathcal{D}$. A hypergraph isomorphism is a structure preserving map $\rho = (\rho_\mathcal{V}, \rho_\mathcal{E})$ such that both $\rho_\mathcal{V}$ and $\rho_\mathcal{E}$ are bijective. Two hypergraphs are said to be isomorphic, denoted as $\mathcal{H} \cong \mathcal{D}$, if there exists an isomorphism between them. When $\mathcal{H} = \mathcal{D}$, an isomorphism $\rho$ is called an automorphism on $\mathcal{H}$. All the automorphisms form a group, which we denote as $\mathrm{Aut}(\mathcal{H})$.*

A graph isomorphism is the special case of a hypergraph isomorphism between 2-uniform hypergraphs according to Definition 2.2.
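To make Definitions 2.1 and 2.2 concrete, below is a minimal brute-force sketch in Python; the class and helper names are ours and purely illustrative, and the permutation search is exponential, so it is only meant for tiny examples.

```python
from itertools import permutations

class Hypergraph:
    """Undirected hypergraph: hyperedges are node subsets of size >= 2 (Def. 2.1)."""
    def __init__(self, nodes, hyperedges):
        self.nodes = frozenset(nodes)
        self.hyperedges = {frozenset(e) for e in hyperedges}
        assert all(len(e) >= 2 and e <= self.nodes for e in self.hyperedges)

def is_isomorphism(h1, h2, node_map):
    """Check that a node bijection preserves hyperedge structure (Def. 2.2)."""
    mapped = {frozenset(node_map[v] for v in e) for e in h1.hyperedges}
    return mapped == h2.hyperedges

def isomorphic(h1, h2):
    """Brute-force hypergraph isomorphism test; exponential, demo only."""
    if len(h1.nodes) != len(h2.nodes):
        return False
    n1, n2 = sorted(h1.nodes), sorted(h2.nodes)
    return any(is_isomorphism(h1, h2, dict(zip(n1, perm)))
               for perm in permutations(n2))

c3 = Hypergraph(range(3), [{0, 1}, {1, 2}, {0, 2}])             # 3-cycle
t3 = Hypergraph(range(3), [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}])  # filled triangle
print(isomorphic(c3, t3))  # False: the triangle has an extra 3-node hyperedge
```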
A *neighborhood* $N(v) \triangleq \left(\bigcup_{e \ni v} e,\ \{e : v \in e\}\right)$ of a node $v \in \mathcal{V}$ of a hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$ is the subhypergraph of $\mathcal{H}$ induced by the set of all hyperedges incident to $v$. The *degree* of $v$ is denoted $\deg(v) = |\mathcal{E}_{N(v)}|$.

A simple but very common symmetric hypergraph is of importance to our task, namely the neighborhood-regular hypergraph, or just regular hypergraph.

Definition 2.3. *A neighborhood-regular hypergraph is a hypergraph where all neighborhoods of each node are isomorphic to each other.*

A $d$-uniform neighborhood of $v$ is the set of all hyperedges of size $d$ in the neighborhood of $v$. Thus, in a neighborhood-regular hypergraph, all nodes have their $d$-uniform neighborhoods of the same degree for all $d \in \mathbb{N}$.

Representing Higher Order Structures as Tensors: There are many data structures one can define on a higher order structure like a hypergraph. An $n$-order tensor Maron et al. (2018), as a generalization of an adjacency matrix on graphs, can be used to characterize the higher order connectivities. For simplicial complexes, which are hypergraphs where all subsets of a hyperedge are also hyperedges, a Hasse diagram, which is a multipartite graph induced by the poset relation of subset amongst hyperedges, or simplices, differing in exactly one node, is a common data structure Birkhoff (1940). Similarly, the star expansion matrix Agarwal et al. (2006) can be used to characterize hypergraphs up to isomorphism. In order to define the star expansion matrix, we define the star expansion bipartite graph.

Definition 2.4 (star expansion bipartite graph). *Given a hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$, the star expansion bipartite graph $B_{\mathcal{V},\mathcal{E}}$ is the bipartite graph with vertices $\mathcal{V} \sqcup \mathcal{E}$ and edges $\{(v, e) \in \mathcal{V} \times \mathcal{E} \mid v \in e\}$.*

Definition 2.5. *The star expansion incidence matrix $H$ of a hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$ is the $|\mathcal{V}| \times 2^{|\mathcal{V}|}$ 0-1 incidence matrix $H$ where $H_{v,e} = 1$ iff $v \in e$ for $(v, e) \in \mathcal{V} \times \mathcal{E}$, for some fixed orderings on both $\mathcal{V}$ and $2^{\mathcal{V}}$.*

In practice, as data to machine learning algorithms, the matrix $H$ is sparsely represented by its nonzeros. To study the symmetries of a given hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$, we consider the permutation group on the vertices $\mathcal{V}$, denoted as $\mathrm{Sym}(\mathcal{V})$, which acts jointly on the rows and columns of star expansion adjacency matrices. For an introduction to group theory, see Dummit & Foote (2004). We assume the rows and columns of a star expansion adjacency matrix have some canonical ordering, say lexicographic ordering, given by some prefixed ordering of the vertices. Therefore, each hypergraph $\mathcal{H}$ has a unique canonical matrix representation $H$. We define the action of a permutation $\pi \in \mathrm{Sym}(\mathcal{V})$ on a star expansion adjacency matrix $H$:

$$(\pi \cdot H)_{v,\ e=(u_1 \ldots v \ldots u_k)} \triangleq H_{\pi^{-1}(v),\ \pi^{-1}(e)=(\pi^{-1}(u_1) \ldots \pi^{-1}(v) \ldots \pi^{-1}(u_k))} \tag{1}$$

Based on the group action, consider the stabilizer subgroup of $\mathrm{Sym}(\mathcal{V})$ on an incidence matrix $H$:

$$\mathrm{Stab}_{\mathrm{Sym}(\mathcal{V})}(H) = \{\pi \in \mathrm{Sym}(\mathcal{V}) \mid \pi \cdot H = H\} \tag{2}$$

For simplicity we omit the lower index of $\mathrm{Sym}(\mathcal{V})$ when the permutation group is clear from context. It can be checked that $\mathrm{Stab}(H) \subseteq \mathrm{Sym}(\mathcal{V})$ is a subgroup. Intuitively, $\mathrm{Stab}(H)$ consists of all permutations that fix $H$. These are equivalent to hypergraph automorphisms on the original hypergraph $\mathcal{H}$.

Proposition 2.1. *$\mathrm{Aut}(\mathcal{H}) \cong \mathrm{Stab}(H)$ are equivalent as isomorphic groups.*

We can also define a notion of isomorphism between $k$-node sets using the stabilizers on $H$.

Definition 2.6. *For a given hypergraph $\mathcal{H}$ with star expansion matrix $H$, two $k$-node sets $S, T \subseteq \mathcal{V}$ are called isomorphic, denoted as $S \simeq T$, if $\exists \pi \in \mathrm{Stab}(H),\ \pi(S) = T$ and $\pi(T) = S$.*

Such isomorphism is an equivalence relation on $k$-node sets. When $k = 1$, we have isomorphic nodes, denoted $u \cong_\mathcal{H} v$ for $u, v \in \mathcal{V}$. Node isomorphism is also studied as the so-called structural equivalence in Lorrain & White (1971). Furthermore, when $S \simeq T$ we can then say that there is a matching amongst the nodes in sets $S$ and $T$ so that matched nodes are isomorphic.
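Before moving on, here is a small companion sketch (ours, not the authors' code) of the sparse representation of Definition 2.5 mentioned above: only the columns of $H$ corresponding to realized hyperedges are materialized.

```python
import numpy as np
from scipy.sparse import csr_matrix

def star_expansion(num_nodes, hyperedges):
    """Sparse |V| x |E| slice of the star expansion incidence matrix H:
    H[v, j] = 1 iff node v belongs to hyperedge j (Definition 2.5).
    The full matrix has 2^|V| columns; only realized hyperedges are stored."""
    rows = [v for j, e in enumerate(hyperedges) for v in e]
    cols = [j for j, e in enumerate(hyperedges) for _ in e]
    data = np.ones(len(rows), dtype=np.int8)
    return csr_matrix((data, (rows, cols)), shape=(num_nodes, len(hyperedges)))

H = star_expansion(4, [{0, 1, 2}, {2, 3}])
print(H.toarray())
# [[1 0]
#  [1 0]
#  [1 1]
#  [0 1]]
```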
## 2.2 Invariance And Expressivity

For a given hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{E})$, we want to do hyperedge prediction on $\mathcal{H}$, which is to predict missing hyperedges from $k$-node sets for $k \ge 2$. Let $|\mathcal{V}| = n$, $|\mathcal{E}| = m$, and $H \in \mathbb{Z}_2^{n \times 2^n}$ be the star expansion adjacency matrix of $\mathcal{H}$. To do hyperedge prediction, we study $k$-node representations $h : [\mathcal{V}]^k \times \mathbb{Z}_2^{n \times 2^n} \to \mathbb{R}^d$ that map $k$-node sets of hypergraphs to $d$-dimensional Euclidean space. Ideally, we want a most-expressive $k$-node representation for hyperedge prediction, which is intuitively a $k$-node representation that is injective on $k$-node set isomorphism classes from $\mathcal{H}$. We break up the definition of most-expressive $k$-node representation into possessing two properties, as follows:

Definition 2.7. *Let $h : [\mathcal{V}]^k \times \mathbb{Z}_2^{n \times 2^n} \to \mathbb{R}^d$ be a $k$-node representation on a hypergraph $\mathcal{H}$. Let $H \in \mathbb{Z}_2^{n \times 2^n}$ be the star expansion adjacency matrix of $\mathcal{H}$ for $n$ nodes. The representation $h$ is $k$-node most expressive if $\forall S, S' \subset \mathcal{V}, |S| = |S'| = k$, the following two conditions are satisfied:*

1. *$h$ is **$k$-node invariant**: $\exists \pi \in \mathrm{Stab}(H), \pi(S) = S' \implies h(S, H) = h(S', H)$.*
2. *$h$ is **$k$-node expressive**: $\nexists \pi \in \mathrm{Stab}(H), \pi(S) = S' \implies h(S, H) \ne h(S', H)$.*

The first condition of a most expressive $k$-node representation states that the representation must be well defined on the $k$ nodes up to isomorphism. The second condition requires the injectivity of our representation. These two conditions mean that the representation does not lose any information when doing prediction for missing $k$-sized hyperedges on a set of $k$ nodes.

## 2.3 Generalized Weisfeiler-Lehman-1

We describe a generalized Weisfeiler-Lehman-1 (GWL-1) hypergraph isomorphism test similar to Huang & Yang (2021); Feng et al. (2023) based on the WL-1 algorithm for graph isomorphism testing. There have been many parameterized variants of the GWL-1 algorithm implemented as neural networks, see Section 3. Let $H$ be the star expansion matrix for a hypergraph $\mathcal{H}$. We define the GWL-1 algorithm as the following two-step procedure on $H$ at iteration number $i \ge 0$:

$$f_e^0 \leftarrow \{\}, \quad h_v^0 \leftarrow \{\}$$
$$f_e^{i+1} \leftarrow \{\{(f_e^i, h_v^i)\}\}_{v \in e}, \ \forall e \in \mathcal{E}_\mathcal{H}(H) \tag{3}$$
$$h_v^{i+1} \leftarrow \{\{(h_v^i, f_e^{i+1})\}\}_{e \ni v}, \ \forall v \in \mathcal{V}_\mathcal{H}(H)$$

This is slightly different from the algorithm presented in Huang & Yang (2021) at the $f_e^{i+1}$ update step. Our update step involves an edge representation $f_e^i$, which is not present in their version. Thus our version of GWL-1 is more expressive than that in Huang & Yang (2021). However, they both possess some of the same issues that we identify. We denote $f_e^i(H)$ and $h_v^i(H)$ as the hyperedge and node $i$th iteration GWL-1, called $i$-GWL-1, values on an unattributed hypergraph $\mathcal{H}$ with star expansion $H$. If GWL-1 is run to convergence, then we omit the iteration number $i$. We also mean this when we say $i = \infty$.
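To make the update rule in Equation 3 concrete, here is a minimal hash-based sketch of GWL-1 on a hypergraph given as a list of hyperedges. This is our own illustrative implementation (hash collisions are ignored for the demo); the final lines reproduce the limitation discussed in Section 4.1, where two neighborhood-regular hypergraphs of different order receive identical node values.

```python
def gwl1(hyperedges, num_nodes, num_iters):
    """Hash-based GWL-1 (Eq. 3): alternately refine hyperedge and node
    colors; multisets are represented as sorted tuples before hashing."""
    f = {j: hash(()) for j in range(len(hyperedges))}  # hyperedge colors
    h = {v: hash(()) for v in range(num_nodes)}        # node colors
    incident = {v: [j for j, e in enumerate(hyperedges) if v in e]
                for v in range(num_nodes)}
    for _ in range(num_iters):
        f = {j: hash(tuple(sorted((f[j], h[v]) for v in e)))
             for j, e in enumerate(hyperedges)}        # f^{i+1} from f^i, h^i
        h = {v: hash(tuple(sorted((h[v], f[j]) for j in incident[v])))
             for v in range(num_nodes)}                # h^{i+1} uses f^{i+1}
    return f, h

# Two 3-uniform, 3-regular hypergraphs of different order (cf. Figure 2).
c34 = [{0, 1, 2}, {1, 2, 3}, {2, 3, 0}, {3, 0, 1}]
c35 = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 0}, {4, 0, 1}]
_, h4 = gwl1(c34, 4, 3)
_, h5 = gwl1(c35, 5, 3)
print(set(h4.values()) == set(h5.values()))  # True: GWL-1 cannot tell them apart
```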
For a hypergraph $\mathcal{H}$ with star expansion matrix $H$, GWL-1 is strictly more expressive than WL-1 on $A = H \cdot D_e^{-1} \cdot H^T$ with $D_e = \mathrm{diag}(H^T \cdot \mathbf{1}_n)$, the node-to-node adjacency matrix, also called the clique expansion of $\mathcal{H}$. This follows since a triangle with its 3-cycle boundary, $T$, and a 3-cycle $C_3$ have exactly the same clique expansions. Thus WL-1 will give the same node values for both $T$ and $C_3$. GWL-1 on the star expansions $H_T$ and $H_{C_3}$, on the other hand, will identify the triangle as different from its bounding edges.

Let $f^i(H) \triangleq [f^i_{e_1}(H), \cdots, f^i_{e_m}(H)]$ and $h^i(H) \triangleq [h^i_{v_1}(H), \cdots, h^i_{v_n}(H)]$ be two vectors whose entries are ordered by the column and row order of $H$, respectively.

Proposition 2.2. *The update steps $f^i(H)$ and $h^i(H)$ of GWL-1 are permutation equivariant; for any $\pi \in \mathrm{Sym}(\mathcal{V})$, let $\pi \cdot f^i(H) \triangleq [f^i_{\pi^{-1}(e_1)}(H), \cdots, f^i_{\pi^{-1}(e_m)}(H)]$ and $\pi \cdot h^i(H) \triangleq [h^i_{\pi^{-1}(v_1)}(H), \cdots, h^i_{\pi^{-1}(v_n)}(H)]$:*

$$\forall i \in \mathbb{N},\ \pi \cdot f^i(H) = f^i(\pi \cdot H),\ \pi \cdot h^i(H) = h^i(\pi \cdot H) \tag{4}$$

Define the operator AGG as a $k$-set map to representation space $\mathbb{R}^d$. Define the following representation of a $k$-node subset $S \subset \mathcal{V}$ of hypergraph $\mathcal{H}$ with star expansion matrix $H$:

$$h(S, H) = \mathrm{AGG}[\{h^i_v(H)\}_{v \in S}] \tag{5}$$

where $h^i_v(H)$ is the node value of $i$-GWL-1 on $H$ for node $v$. The representation $h(S, H)$ preserves hyperedge isomorphism classes as shown below:

Proposition 2.3. *Let $h(S, H) = \mathrm{AGG}_{v \in S}[h^i_v(H)]$ with injective AGG and $h^i_v$ permutation equivariant. The representation $h(S, H)$ is $k$-node invariant but not necessarily $k$-node expressive for $S$ a set of $k$ nodes.*

It follows that we can guarantee a $k$-node invariant representation by using GWL-1. For deep learning, we parameterize AGG as a universal set learner. The node representations $h^i_v(H)$ are also parameterized and rewritten into a message passing hypergraph neural network with matrix equations Huang & Yang (2021).

## 3 Related Work And Existing Issues

There are many hyperlink prediction methods. Most message passing based methods for hypergraphs are based on the GWL-1 algorithm. These include Huang & Yang (2021); Yadati et al. (2019); Feng et al. (2019); Gao et al. (2022); Dong et al. (2020); Srinivasan et al. (2021); Chien et al. (2022); Zhang et al. (2018). Examples of message passing based approaches that incorporate positional encodings on hypergraphs include SNALS Wan et al. (2021). The paper Zhang et al. (2019) uses a pair-wise node attention mechanism to do higher order link prediction. For a survey on hyperlink prediction, see Chen & Liu (2022).

There have been methods to improve the expressive power of graph representations in the presence of symmetries. In Papp & Wattenhofer (2022), substructure labeling is formally analyzed. One of the methods analyzed includes labeling fixed radius ego-graphs as in You et al. (2021); Zhang & Li (2021). Other methods include appending random node features Sato et al. (2021), labeling breadth-first and depth-first search trees Li et al. (2023b), and encoding substructures Zeng et al. (2023); Wijesinghe & Wang (2021). All of the previously mentioned methods depend on a fixed subgraph radius size. This prevents capturing symmetries that span long ranges across the graph. Zhang et al. (2023) proposes to add metric information of each node relative to all other nodes to improve WL-1. This would be very computationally expensive on hypergraphs. Cycles are a common symmetric substructure.
There are many methods that identify this symmetry. Cy2C Choi et al. is a method that encodes cycles to cliques. It has the issue that if the cycle-basis algorithm is not permutation invariant, isomorphic graphs could get different cycle bases and thus get encoded by Cy2C differently, violating the invariance of WL-1. Similarly, the CW Network Bodnar et al. (2021) is a method that attaches cells to cycles to improve upon the distinguishing power of WL-1 for graph classification. However, inflating the input topology with cells as in Bodnar et al. (2021) would not work for link prediction since it would shift the hyperedge distribution to become much denser. Other works include cell attention networks Giusti et al. (2022) and cycle basis based methods Zhang et al. (2022). For more related work, see the Appendix.

## 4 A Characterization Of GWL-1

A hypergraph can be represented by a bipartite graph $B_{\mathcal{V},\mathcal{E}}$ from $\mathcal{V}$ to $\mathcal{E}$ where there is an edge $(v, e)$ in the bipartite graph iff node $v$ is incident to hyperedge $e$. This bipartite graph is called the star expansion bipartite graph. We introduce a more structured version of graph isomorphism called a 2-color isomorphism to characterize hypergraphs. It is a map on 2-colored graphs, which are graphs that can be colored with two colors so that no two nodes of the same color are connected by an edge. We define a 2-colored isomorphism formally here:

Definition 4.1. *A 2-colored isomorphism is a graph isomorphism on two 2-colored graphs that preserves node colors. It is denoted by $\cong_c$.*

A bipartite graph always has a 2-coloring. In this paper, we canonically fix a 2-coloring on all star expansion bipartite graphs by assigning red to all the nodes in the node partition and blue to all the nodes in the hyperedge partition. See Figure 2(a) as an example. We let $B_\mathcal{V}$, $B_\mathcal{E}$ be the red and blue colored nodes in $B_{\mathcal{V},\mathcal{E}}$ respectively.

Proposition 4.1. *We have two hypergraphs $(\mathcal{V}_1, \mathcal{E}_1) \cong (\mathcal{V}_2, \mathcal{E}_2)$ iff $B_{\mathcal{V}_1,\mathcal{E}_1} \cong_c B_{\mathcal{V}_2,\mathcal{E}_2}$, where $B_{\mathcal{V}_i,\mathcal{E}_i}$ is the star expansion bipartite graph of $(\mathcal{V}_i, \mathcal{E}_i)$.*

We define a topological object for a graph originally from algebraic topology called a universal cover:

Definition 4.2 (Hatcher (2005)). *The universal covering of a connected graph $G$ is a (potentially infinite) graph $\tilde{G}$ together with a map $p_G : \tilde{G} \to G$ such that:*

1. *$\forall x \in V(\tilde{G})$, $p_G|_{N(x)}$ is an isomorphism onto $N(p_G(x))$.*
2. *$\tilde{G}$ is simply connected (a tree).*

We call such $p_G$ the *universal covering map* and $\tilde{G}$ the *universal cover* of $G$. A covering graph is a graph that satisfies property 1 but not necessarily 2 in Definition 4.2. The universal covering $\tilde{G}$ is essentially unique Hatcher (2005) in the sense that it can cover all connected covering graphs of $G$. Furthermore, define a rooted isomorphism $G_x \cong H_y$ as an isomorphism between graphs $G$ and $H$ that maps $x$ to $y$ and vice versa. It is a known result that:

Theorem 4.2. *[Krebs & Verbitsky (2015)] Let $G$ and $H$ be two connected graphs. Let $p_G : \tilde{G} \to G$, $p_H : \tilde{H} \to H$ be the universal covering maps of $G$ and $H$ respectively. For any $i \in \mathbb{N}$, for any two nodes $x \in G$ and $y \in H$: $\tilde{G}^i_{\tilde{x}} \cong \tilde{H}^i_{\tilde{y}}$ iff the WL-1 algorithm assigns the same value to nodes $x = p_G(\tilde{x})$ and $y = p_H(\tilde{y})$.*

We generalize the result stated above, a topological characterization of WL-1, to GWL-1 for hypergraphs. In order to do this, we need to generalize the definition of a universal covering to suit the requirements of a bipartite star expansion graph.
To do this, we lift $B_{\mathcal{V},\mathcal{E}}$ to a 2-colored tree universal cover $\tilde{B}_{\mathcal{V},\mathcal{E}}$ where the red/blue nodes of $B_{\mathcal{V},\mathcal{E}}$ are lifted to red/blue nodes in $\tilde{B}_{\mathcal{V},\mathcal{E}}$. Furthermore, the labels $\{\}$ are placed on the blue nodes corresponding to the hyperedges in the lift, and the labels $X_v$ are placed on all of the red nodes in the lift corresponding to node $v$. Let $(\tilde{B}^k_{\mathcal{V},\mathcal{E}})_{\tilde{x}}$ denote the $k$-hop rooted 2-colored subtree with root $\tilde{x}$ and $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{x}) = x$ for any $x \in V(B_{\mathcal{V},\mathcal{E}})$.

Theorem 4.3. *Let $\mathcal{H}_1 = (\mathcal{V}_1, \mathcal{E}_1)$ and $\mathcal{H}_2 = (\mathcal{V}_2, \mathcal{E}_2)$ be two connected hypergraphs. Let $B_{\mathcal{V}_1,\mathcal{E}_1}$ and $B_{\mathcal{V}_2,\mathcal{E}_2}$ be two canonically colored bipartite graphs for $\mathcal{H}_1$ and $\mathcal{H}_2$ (vertices colored red and hyperedges colored blue). Let $p_{B_{\mathcal{V}_1,\mathcal{E}_1}} : \tilde{B}_{\mathcal{V}_1,\mathcal{E}_1} \to B_{\mathcal{V}_1,\mathcal{E}_1}$, $p_{B_{\mathcal{V}_2,\mathcal{E}_2}} : \tilde{B}_{\mathcal{V}_2,\mathcal{E}_2} \to B_{\mathcal{V}_2,\mathcal{E}_2}$ be the universal coverings of $B_{\mathcal{V}_1,\mathcal{E}_1}$ and $B_{\mathcal{V}_2,\mathcal{E}_2}$ respectively. For any $i \in \mathbb{N}^+$, for any of the nodes $x_1 \in B_{\mathcal{V}_1}, e_1 \in B_{\mathcal{E}_1}$ and $x_2 \in B_{\mathcal{V}_2}, e_2 \in B_{\mathcal{E}_2}$:*

$$(\tilde{B}^{2i-1}_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{e}_1} \cong_c (\tilde{B}^{2i-1}_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{e}_2} \ \text{iff} \ f^i_{e_1} = f^i_{e_2}$$
$$(\tilde{B}^{2i}_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{x}_1} \cong_c (\tilde{B}^{2i}_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{x}_2} \ \text{iff} \ h^i_{x_1} = h^i_{x_2}$$

*with $f^i_\bullet, h^i_\bullet$ the $i$th GWL-1 values for the hyperedges and nodes respectively, where $e_1 = p_{B_{\mathcal{V}_1,\mathcal{E}_1}}(\tilde{e}_1)$, $x_1 = p_{B_{\mathcal{V}_1,\mathcal{E}_1}}(\tilde{x}_1)$, $e_2 = p_{B_{\mathcal{V}_2,\mathcal{E}_2}}(\tilde{e}_2)$, $x_2 = p_{B_{\mathcal{V}_2,\mathcal{E}_2}}(\tilde{x}_2)$.*

![6_image_0.png](6_image_0.png)

Figure 2: An illustration of hypergraph symmetry breaking. (c,d) 3-regular hypergraphs $C^3_4, C^3_5$ with 4 and 5 nodes respectively and their corresponding universal covers centered at any hyperedge $(\tilde{B}_{C^3_4})_{e_{*,*,*}}, (\tilde{B}_{C^3_5})_{e_{*,*,*}}$ with universal covering maps $p_{B_{C^3_4}}, p_{B_{C^3_5}}$. (b,e) the hypergraphs $\hat{C}^3_4, \hat{C}^3_5$, which are $C^3_4, C^3_5$ with 4- and 5-sized hyperedges attached to them, and their corresponding universal covers and universal covering maps. (a,f) are the corresponding bipartite graphs of $\hat{C}^3_4, \hat{C}^3_5$. (c,d) are indistinguishable by GWL-1 and thus will give identical node values by Theorem 4.3. On the other hand, (b,e) gives node values which are now sensitive to the orders of the hypergraphs, 4 and 5, also by Theorem 4.3.

See Figure 2 for an illustration of the universal covering of two 3-uniform neighborhood regular hypergraphs and their corresponding bipartite graphs. Notice that by Theorems 4.2 and 4.3, GWL-1 reduces to computing WL-1 on the bipartite graph up to 2-colored isomorphism.

## 4.1 A Limitation Of GWL-1

For two neighborhood-regular hypergraphs $C_1$ and $C_2$, the red/blue colored universal covers $\tilde{B}_{C_1}, \tilde{B}_{C_2}$ of the star expansions of $C_1$ and $C_2$ are isomorphic, with the same GWL-1 values on all nodes. However, two neighborhood-regular hypergraphs of different order become distinguishable if a single hyperedge covering all the nodes of each neighborhood-regular hypergraph is added. Furthermore, deleting the original hyperedges does not change the node isomorphism classes of each hypergraph.

Referring to Figure 2, consider the hypergraph $C = C^3_4 \sqcup C^3_5$, the hypergraph with two 3-regular hypergraphs $C^3_4$ and $C^3_5$ acting as two connected components of $C$. As shown in Figure 2, the node representations of the two hypergraphs are identical due to Theorem 4.3.

Given a hypergraph $\mathcal{H}$, we define a special induced subhypergraph $\mathcal{R} \subset \mathcal{H}$ whose node set GWL-1 cannot distinguish from other such special induced subhypergraphs.

Definition 4.3. *An $L$-GWL-1 symmetric induced subhypergraph $\mathcal{R} \subset \mathcal{H}$ of $\mathcal{H}$ is a connected induced subhypergraph determined by $\mathcal{V}_\mathcal{R} \subset \mathcal{V}_\mathcal{H}$, some subset of nodes that are all indistinguishable amongst each other by $L$-GWL-1:*

$$h^L_u(H) = h^L_v(H), \ \forall u, v \in \mathcal{V}_\mathcal{R} \tag{6}$$

*When $L = \infty$, we call such $\mathcal{R}$ a GWL-1 symmetric induced subhypergraph.*
Furthermore, if R = H, then we say H is GWL-1 symmetric. This definition is similar to that of a symmetric graph from graph theory Godsil & Royle (2001), except that isomorphic nodes are determined by the GWL-1 approximator instead of an automorphism. The following observation follows from the definitions.

Observation 1. A hypergraph H is GWL-1 symmetric if and only if it is L*-GWL-1 symmetric for all* L ≥ 1 *if and only if* H *is neighborhood regular.*

Our goal is to find GWL-1 symmetric induced subhypergraphs in a given hypergraph and break their symmetry without affecting any other nodes.

## 5 Method

Our goal is to predict higher order links in a hypergraph transductively. This can be formulated as follows:

Problem 1. *Given a hypergraph* H = (V, E) *and a ground truth hypergraph* Hgt = (V, Egt), E ⊂ Egt*, where* E *is observable: predict the existence of the unobserved hyperedges* Egt \ E.

We will assume that the unobservable hyperedges are of the same size k, so that we only need to predict on k-node sets. In order to preserve the most information while still respecting topological structure, we aim to start with an invariant multi-node representation to predict hyperedges and increase its expressiveness, as defined in Definition 2.7. For an input hypergraph H with matrix representation H, to predict a missing hyperedge on a node subset S ⊂ V(H), we use the multi-node representation h(S, H) of Equation 5 due to its simplicity and guaranteed invariance, and we improve its expressivity. We aim to not affect the computational complexity, since message passing on hypergraphs is already quite expensive, especially in GPU memory. Our method is a preprocessing algorithm that operates on the input hypergraph. In order to increase expressivity, we search for potentially indistinguishable regular induced subhypergraphs so that they can be replaced with hyperedges that span the subhypergraph, breaking the symmetries that prevent GWL-1 from being more expressive. We devise an algorithm, shown in Algorithm 1. It takes as input a hypergraph H with star expansion matrix H. The idea of the algorithm is to identify nodes of the same GWL-1 value that are maximally connected and use this collection of node subsets to break the symmetry of H. First we introduce some combinatorial definitions for hypergraph data that we will use in our algorithm:

Definition 5.1. *A hypergraph* H = (V, E) is **connected** if BV,E *is a connected graph.* A **connected component** of H *is a connected induced subhypergraph which is not properly contained in any connected subhypergraph of* H.

Definition 5.2. Chitra & Raphael (2019) A **random walk** *on a hypergraph* H = (V, E) *is a Markov chain with state space* V *and transition probabilities*

$$P_{u,v} \triangleq \sum_{e \in \mathcal{E}:\, e \supseteq \{u,v\}} \frac{\omega(e)}{\deg(u)\,|e|},$$

*where* ω : E → [0, 1] *is some discrete probability distribution on the hyperedges. When not specified, this is the uniform distribution.*

Definition 5.3. A **stationary distribution** π : V → [0, 1] *for a Markov chain with transition probabilities* Pu,v *is defined by the relationship* $\sum_{u \in \mathcal{V}} P_{u,v}\,\pi(u) = \pi(v)$.

For a hypergraph random walk we have the closed form $\pi(v) = \frac{\deg(v)}{\sum_{u \in \mathcal{V}} \deg(u)}$ for v ∈ V, assuming H *is a connected hypergraph.*
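To make Definitions 5.2 and 5.3 concrete, here is a minimal numerical sketch (ours, not from the paper's code; it takes ω(e) = 1 for all e, the unweighted convention under which each row of P sums to one) that builds the transition matrix and verifies the closed-form stationary distribution.

```python
import numpy as np

def hypergraph_walk(n, hyperedges):
    """P[u, v] = sum over hyperedges e containing both u and v of
    1 / (deg(u) * |e|): pick an incident hyperedge uniformly, then a
    node in it uniformly. Assumes omega(e) = 1 (unweighted case)."""
    deg = np.zeros(n)
    for e in hyperedges:
        for u in e:
            deg[u] += 1.0
    P = np.zeros((n, n))
    for e in hyperedges:
        for u in e:
            for v in e:
                P[u, v] += 1.0 / (deg[u] * len(e))
    return P, deg

P, deg = hypergraph_walk(4, [(0, 1, 2), (1, 2, 3), (2, 3)])
pi = deg / deg.sum()              # closed form: pi(v) = deg(v) / sum_u deg(u)
assert np.allclose(P.sum(axis=1), 1.0)   # rows are probability distributions
assert np.allclose(pi @ P, pi)           # sum_u pi(u) P[u, v] = pi(v)
```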
Algorithm: Our method is explicitly given in Algorithm 1. For a given L ∈ N+ and any L-GWL-1 node value cL, we construct the induced subhypergraph HcL from the L-GWL-1 class of nodes:

$$\mathcal{V}_{c_L} \triangleq \{v \in \mathcal{V} : c_L = h_v^L(H)\}, \qquad (7)$$

where $h_v^L$ denotes the L-GWL-1 class of node v. We then compute the connected components of HcL . Denote CcL as the set of all connected components of HcL . If L = ∞, we drop the subscript L. Each of these connected components is a subhypergraph of H, denoted RcL,i, where RcL,i ⊂ HcL ⊂ H for i = 1...|CcL |.

Downstream Training: After executing Algorithm 1, we collect its output (RV , RE). During training, for each i = 1...|CcL | we randomly perturb RcL,i by:

- Attaching a single hyperedge that covers VRcL,i with probability qi and not attaching one with probability 1 − qi.
- Dropping all the hyperedges in RcL,i with probability p and keeping them with probability 1 − p.

Let Hˆ be the estimator of the input hypergraph H determined by the random drop and attach operations. Since Hˆ is random, each sample of Hˆ has a stationary distribution πˆ. The expected stationary distribution, denoted E[ˆπ], is the expectation of πˆ over the distribution determined by Hˆ. We show in Proposition 5.6 that the probabilities p, qi, i = 1...|CcL |, can be chosen so that πˆ is unbiased. Our method is similar to the concept of adding virtual nodes Hwang et al. (2022) in graph representation learning, due to the equivalence between virtual nodes and hyperedges given by Proposition 4.1. For a guarantee of improved expressivity, see Lemma 5.2 and Theorems 5.3 and 5.4. For an illustration of the data augmentation, see Figure 2. Alternatively, downstream training using the output of Algorithm 1 can be done, similar to subgraph NNs, by applying an ensemble of models Alsentzer et al. (2020); Papp et al. (2021); Tan et al. (2023), with each model trained on transformations of H with its symmetric subhypergraphs randomly replaced. This, however, is computationally expensive.

Algorithm 1: A Symmetry Finding Algorithm
Data: Hypergraph H = (V, E), represented by its star expansion matrix H. L ∈ N+ is the number of iterations to run GWL-1.
Result: A pair of collections (RV = {VRj }, RE = ∪j{ERj }), where the Rj are disconnected subhypergraphs exhibiting symmetry in H that are indistinguishable by L-GWL-1.
1 UL[v] ← h^L_v(H) for all v ∈ V; GL ← {UL[v] : v ∈ V} ; /* UL[v] is the L-GWL-1 value of node v ∈ V. */
2 BVH,EH ← Bipartite(H) ; /* Construct the bipartite graph from H. */
3 RV ← {}; RE ← {}
4 for cL ∈ GL do
5     VcL ← {v ∈ V : UL[v] = cL}; EcL ← {e ∈ E : u ∈ VcL , ∀u ∈ e}
6     CcL ← ConnectedComponents(HcL = (VcL , EcL ))
7     for RcL,i ∈ CcL do
          /* There should be at least 3 nodes to form a nontrivial hyperedge */
8         if |VRcL,i | ≥ 3 then
9             RV ← RV ∪ {VRcL,i }; RE ← RE ∪ ERcL,i
10        end
11    end
12 end
13 return (RV , RE)

Algorithm Guarantees: We show some guarantees for the output of Algorithm 1. Notation: Let H = (V, E) be a hypergraph with star expansion matrix H and let (RV , RE ) be the output of Algorithm 1 on H for L ∈ N+. Let HˆL ≜ (V, E ∪ RV ) be H after adding all the hyperedges from RV and let HˆL be the star expansion matrix of the resulting hypergraph HˆL. Let VcL,s ≜ {v ∈ VcL : v ∈ R, R ∈ CcL , |VR| = s} be the set of all nodes of L-GWL-1 class cL belonging to a connected component in CcL of s ≥ 1 nodes in HcL , the induced subhypergraph of L-GWL-1 class cL. Let GL ≜ {h^L_v(H) : v ∈ V} be the set of all L-GWL-1 values on H. Let ScL ≜ {|VRcL,i | : RcL,i ∈ CcL } be the set of node set sizes of the connected components in HcL .

Proposition 5.1. If L = ∞, for any GWL-1 node value c computed on H, all connected component subhypergraphs Rc,i ∈ Cc *are GWL-1 symmetric as hypergraphs.*
Lemma 5.2. *If* L ∈ N+ *is small enough so that, after running Algorithm 1 with* L*, for any* L*-GWL-1 node class* cL *on* V*, none of the discovered* VRcL,i *are within* L *hyperedges away from any* VRcL,j *for all* i, j ∈ 1...|CcL |, i ̸= j*, then after forming* HˆL*, the new* L*-GWL-1 node classes of* VRcL,i *for* i = 1...|CcL | *in* HˆL *are all the same class* c′L*, which is distinguishable from* cL *and depends on* |VRcL,i |.

We also have the following guarantee on the number of pairs of distinguishable k-node sets on HˆL:

Theorem 5.3. Let |V| = n and L ∈ N+. If $vol(v) \triangleq \sum_{e \in \mathcal{E}: e \ni v} |e| = O(\log^{\frac{1-\epsilon}{4L}} n)$ for all v ∈ V and any constant ϵ > 0; |ScL | ≤ S for all cL ∈ GL, with S constant; and $|\mathcal{V}_{c_L,s}| = O\big(n^{\epsilon} / \log^{\frac{1}{2k}}(n)\big)$ for all s ∈ ScL ; then for k ∈ N+ and any k-tuple C = (cL,1...cL,k), cL,i ∈ GL, i = 1...k, *there exist* ω(n^{2kϵ}) *many pairs of* k*-node sets* S1 ̸≃ S2 *such that* (h^L_u(H))u∈S1 = (h^L_v(H))v∈S2 = C *as ordered* k*-tuples, while* h(S1, HˆL) ̸= h(S2, HˆL)*, also by* L *steps of GWL-1.*

We show that our algorithm increases expressivity (Definition 2.7) for h(S, H) of Equation 5.

Theorem 5.4 (Invariance and Expressivity). If L = ∞, GWL-1 enhanced by Algorithm 1 is still invariant to node isomorphism classes of H *and can be strictly more expressive than GWL-1 in determining node isomorphism classes.*

Proposition 5.5 provides the time complexity of our algorithm.

Proposition 5.5 (Complexity). *Algorithm 1 runs in time* O(nnz(H)L + (n + m))*, which is linear in the size of the input star expansion matrix* H *for hypergraph* H = (V, E) *if* L *is independent of* nnz(H)*, where* n = |V|, m = |E|*, and* $nnz(H) = vol(\mathcal{V}) \triangleq \sum_{v \in \mathcal{V}} \deg(v)$.

Since Algorithm 1 runs in time linear in the size of the input when L is constant, in practice it only takes a small fraction of the training time for hypergraph neural networks. For the downstream training, we show that there are Bernoulli hyperedge drop/attachment probabilities p and qi, respectively, for each RcL,i so that the stationary distribution does not change. This shows that our data augmentation can still preserve the low frequency random walk signal.

Proposition 5.6. *For a connected hypergraph* H = (V, E), let (RV , RE) *be the output of Algorithm 1 on* H. *Then there are Bernoulli probabilities* p *for dropping hyperedges and* qi, i = 1...|RV |*, for attaching a covering hyperedge so that* πˆ *is an unbiased estimator of* π.
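The following is a compressed Python sketch (ours, not the released implementation) of Algorithm 1 together with the downstream perturbation described above. GWL-1 values are approximated by iterated hashing on uniform initial labels; `gwl1`, `symmetry_finding`, and `perturb` are illustrative names.

```python
import random
from collections import defaultdict

def gwl1(nodes, hyperedges, L):
    """L rounds of GWL-1 on uniform initial labels, compressed by hashing."""
    h = {v: 0 for v in nodes}
    f = {j: 0 for j in range(len(hyperedges))}
    for _ in range(L):
        f = {j: hash((f[j], tuple(sorted(h[v] for v in e))))
             for j, e in enumerate(hyperedges)}
        h = {v: hash((h[v], tuple(sorted(
                 f[j] for j, e in enumerate(hyperedges) if v in e))))
             for v in nodes}
    return h

def components(V_c, E_c):
    """Connected components of the induced subhypergraph (V_c, E_c)."""
    adj = defaultdict(set)
    for e in E_c:
        for u in e:
            adj[u].update(e)
    seen, comps = set(), []
    for v in V_c:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                seen.add(u)
                stack.extend(adj[u] - comp)
        comps.append(comp)
    return comps

def symmetry_finding(nodes, hyperedges, L):
    """Algorithm 1: return (R_V, R_E) for the L-GWL-1 symmetric parts."""
    h = gwl1(nodes, hyperedges, L)
    R_V, R_E = [], []
    for c in set(h.values()):
        V_c = {v for v in nodes if h[v] == c}
        E_c = [e for e in hyperedges if set(e) <= V_c]   # induced hyperedges
        for comp in components(V_c, E_c):
            if len(comp) >= 3:           # nontrivial covering hyperedge
                R_V.append(comp)
                R_E.append([e for e in E_c if set(e) <= comp])
    return R_V, R_E

def perturb(hyperedges, R_V, R_E, p=0.5, q=0.5):
    """One training-time sample: per component, drop its hyperedges
    with probability p and attach a covering hyperedge with probability q."""
    out = list(hyperedges)
    for comp, comp_edges in zip(R_V, R_E):
        if random.random() < p:
            out = [e for e in out if e not in comp_edges]
        if random.random() < q:
            out.append(frozenset(comp))
    return out
```

Calling `perturb` once per training epoch on the output of `symmetry_finding` corresponds to sampling one realization of the estimator Hˆ described above.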
## 6 Evaluation

Table 1: Transductive hyperedge prediction PR-AUC scores on six different hypergraph datasets. The highest score per HyperGNN architecture (row) is colored. Red text denotes the highest average scoring method. Orange text denotes a two-way tie and brown text denotes a three-way tie.

(a) cat-edge-DAWN

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.98 ± 0.03 | 0.99 ± 0.08 | 0.96 ± 0.02 |
| HGNNP | 0.98 ± 0.02 | 0.98 ± 0.09 | 0.96 ± 0.10 |
| HNHN | 0.98 ± 0.01 | 0.96 ± 0.07 | 0.97 ± 0.04 |
| HyperGCN | 0.98 ± 0.07 | 0.98 ± 0.11 | 0.98 ± 0.03 |
| UniGAT | 0.99 ± 0.06 | 0.99 ± 0.03 | 0.99 ± 0.07 |
| UniGCN | 0.99 ± 0.00 | 0.99 ± 0.03 | 0.99 ± 0.08 |
| UniGIN | 0.87 ± 0.02 | 0.86 ± 0.10 | 0.85 ± 0.08 |
| UniSAGE | 0.86 ± 0.04 | 0.86 ± 0.05 | 0.84 ± 0.09 |

(b) cat-edge-music-blues-reviews

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.90 ± 0.13 | 1.00 ± 0.00 | 0.90 ± 0.13 |
| HGNNP | 0.90 ± 0.09 | 1.00 ± 0.07 | 1.00 ± 0.03 |
| HNHN | 0.90 ± 0.09 | 0.91 ± 0.02 | 0.90 ± 0.08 |
| HyperGCN | 1.00 ± 0.00 | 1.00 ± 0.03 | 1.00 ± 0.02 |
| UniGAT | 0.90 ± 0.06 | 1.00 ± 0.03 | 1.00 ± 0.06 |
| UniGCN | 1.00 ± 0.01 | 0.91 ± 0.01 | 0.82 ± 0.09 |
| UniGIN | 0.90 ± 0.12 | 0.95 ± 0.06 | 0.90 ± 0.11 |
| UniSAGE | 0.90 ± 0.16 | 1.00 ± 0.08 | 0.90 ± 0.17 |

(c) contact-high-school

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.96 ± 0.10 | 0.98 ± 0.05 | 0.96 ± 0.04 |
| HGNNP | 0.96 ± 0.05 | 0.98 ± 0.09 | 0.97 ± 0.07 |
| HNHN | 0.96 ± 0.02 | 0.97 ± 0.08 | 0.97 ± 0.06 |
| HyperGCN | 0.93 ± 0.05 | 0.98 ± 0.07 | 0.96 ± 0.09 |
| UniGAT | 0.96 ± 0.01 | 0.98 ± 0.14 | 0.97 ± 0.04 |
| UniGCN | 0.96 ± 0.04 | 0.96 ± 0.11 | 0.96 ± 0.09 |
| UniGIN | 0.97 ± 0.03 | 0.97 ± 0.11 | 0.96 ± 0.05 |
| UniSAGE | 0.96 ± 0.10 | 0.96 ± 0.10 | 0.96 ± 0.02 |

(d) contact-primary-school

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.95 ± 0.03 | 0.96 ± 0.01 | 0.95 ± 0.03 |
| HGNNP | 0.95 ± 0.02 | 0.96 ± 0.09 | 0.96 ± 0.07 |
| HNHN | 0.94 ± 0.07 | 0.97 ± 0.10 | 0.95 ± 0.05 |
| HyperGCN | 0.97 ± 0.01 | 0.97 ± 0.05 | 0.96 ± 0.08 |
| UniGAT | 0.95 ± 0.02 | 0.98 ± 0.07 | 0.98 ± 0.02 |
| UniGCN | 0.96 ± 0.00 | 0.97 ± 0.14 | 0.97 ± 0.10 |
| UniGIN | 0.95 ± 0.09 | 0.97 ± 0.02 | 0.95 ± 0.05 |
| UniSAGE | 0.96 ± 0.08 | 0.95 ± 0.05 | 0.96 ± 0.02 |

(e) email-Eu

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.95 ± 0.07 | 0.97 ± 0.08 | 0.96 ± 0.07 |
| HGNNP | 0.95 ± 0.07 | 0.96 ± 0.02 | 0.96 ± 0.01 |
| HNHN | 0.94 ± 0.01 | 0.97 ± 0.02 | 0.95 ± 0.06 |
| HyperGCN | 0.92 ± 0.01 | 0.94 ± 0.06 | 0.94 ± 0.08 |
| UniGAT | 0.94 ± 0.08 | 0.98 ± 0.14 | 0.97 ± 0.08 |
| UniGCN | 0.97 ± 0.08 | 0.97 ± 0.14 | 0.97 ± 0.06 |
| UniGIN | 0.93 ± 0.07 | 0.94 ± 0.11 | 0.93 ± 0.09 |
| UniSAGE | 0.93 ± 0.07 | 0.93 ± 0.08 | 0.92 ± 0.04 |

(f) cat-edge-madison-restaurants

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|----------|----------|------|---------------|
| HGNN | 0.75 ± 0.09 | 0.85 ± 0.09 | 0.71 ± 0.14 |
| HGNNP | 0.83 ± 0.09 | 0.85 ± 0.08 | 0.85 ± 0.04 |
| HNHN | 0.72 ± 0.09 | 0.82 ± 0.03 | 0.74 ± 0.09 |
| HyperGCN | 0.87 ± 0.08 | 0.83 ± 0.05 | 1.00 ± 0.07 |
| UniGAT | 0.80 ± 0.09 | 0.83 ± 0.03 | 0.78 ± 0.05 |
| UniGCN | 0.84 ± 0.08 | 0.89 ± 0.10 | 0.71 ± 0.07 |
| UniGIN | 0.69 ± 0.14 | 0.76 ± 0.05 | 0.61 ± 0.11 |
| UniSAGE | 0.72 ± 0.11 | 0.71 ± 0.10 | 0.64 ± 0.10 |
All datasets involve predicting hyperedges of size 3. We evaluate our method on higher order link prediction with many of the standard hypergraph neural network methods. Due to potential class imbalance, we measure the PR-AUC of higher order link prediction on the hypergraph datasets. These datasets are: cat-edge-DAWN, cat-edge-music-blues-reviews, contact-high-school, contact-primary-school, email-Eu, and cat-edge-madison-restaurants. They range from social interactions as they develop over time, to collections of reviews, to drug combinations recorded before an overdose. We also evaluate on the amherst41 dataset, which is a graph dataset. All of our datasets are unattributed hypergraphs/graphs.

Table 2: PR-AUC on the graph dataset amherst41. Each column compares the baseline PR-AUC scores against the PR-AUC score of our method (first row) applied to a standard HyperGNN architecture. The coloring scheme is the same as in Table 1. The GNN baselines do not use a HyperGNN architecture, so their scores are constant across columns and are listed once.

| PR-AUC ↑ | HGNN | HGNNP | HNHN | HyperGCN | UniGAT | UniGCN | UniGIN | UniSAGE |
|----------|------|-------|------|----------|--------|--------|--------|---------|
| Ours | 0.73 ± 0.10 | 0.61 ± 0.05 | 0.64 ± 0.06 | 0.71 ± 0.09 | 0.72 ± 0.08 | 0.70 ± 0.08 | 0.73 ± 0.03 | 0.73 ± 0.06 |
| HyperGNN Baseline | 0.62 ± 0.09 | 0.62 ± 0.10 | 0.63 ± 0.04 | 0.71 ± 0.07 | 0.70 ± 0.06 | 0.69 ± 0.07 | 0.73 ± 0.06 | 0.73 ± 0.09 |
| HyperGNN Baseln.+edrop | 0.61 ± 0.03 | 0.61 ± 0.03 | 0.61 ± 0.09 | 0.71 ± 0.06 | 0.71 ± 0.02 | 0.69 ± 0.05 | 0.73 ± 0.09 | 0.73 ± 0.04 |

| GNN baseline (architecture-independent) | PR-AUC ↑ |
|-----------------------------------------|----------|
| APPNP | 0.42 ± 0.07 |
| APPNP+edrop | 0.42 ± 0.03 |
| GAT | 0.49 ± 0.06 |
| GAT+edrop | 0.49 ± 0.06 |
| GCN2 | 0.56 ± 0.12 |
| GCN2+edrop | 0.54 ± 0.02 |
| GCN | 0.40 ± 0.03 |
| GCN+edrop | 0.65 ± 0.04 |
| GIN | 0.73 ± 0.10 |
| GIN+edrop | 0.73 ± 0.10 |
| GraphSAGE | 0.44 ± 0.01 |
| GraphSAGE+edrop | 0.44 ± 0.10 |

Data Splitting: For the hypergraph datasets, each hyperedge is paired with a timestamp (a real number). These timestamps are the physical times at which a higher order interaction, represented by a hyperedge, occurs.
We form a train-val-test split by letting the training hyperedges be those associated with timestamps up to the 80th percentile, and the validation hyperedges be those associated with timestamps between the 80th and 85th percentiles. The test hyperedges are the remaining hyperedges. The train, validation, and test hyperedges thus form a partition of the observed hyperedges. We do the task of hyperedge prediction for sets of nodes of size 3, also known as triangle prediction. Half of the size-3 hyperedges in each of train, validation, and test are used as positive examples. For each split, we select random subsets of nodes of size 3 that do not form hyperedges as negative samples. We maintain positive/negative class balance by sampling the same number of negative samples as positive samples. Since the test distribution comes from later timestamps than those seen in training, certain datasets may be out-of-distribution if the hyperedge distribution changes over time. For the graph dataset, the single graph is deterministically split 80/5/15 into train/val/test. We remove 10% of the edges in training and let them be the positive examples Ptr to predict. For validation and test, we remove 50% of the edges from both validation and test to serve as the positive examples Pval, Pte to predict. For train, validation, and test, we sample |Ptr|, |Pval|, |Pte| negative link samples, respectively.

## 6.1 Architecture And Training

Our algorithm serves as a preprocessing step for selective data augmentation. Given a single training hypergraph H, Algorithm 1 is applied, and during training the hyperedges of the identified symmetric induced subhypergraphs of H are randomly replaced with single hyperedges that cover all the nodes of each induced subhypergraph. Each symmetric subhypergraph has a p = 0.5 probability of being selected. To get a large set of symmetric subhypergraphs, we run 2 iterations of GWL-1. We implement h(S, H) from Equation 5 as follows. Upon extracting the node representations from the hypergraph neural network, we apply a multi-layer perceptron (MLP) to each node representation, sum the results across the nodes of S, then apply a final MLP after the aggregation. We use the binary cross entropy loss on this multi-node representation for training. We always use 5 layers of hyperGNN convolutions, a hidden dimension of 1024, and a learning rate of 0.01.

## 6.2 Higher Order Link Prediction Results

We show in Table 1 the comparison of PR-AUC scores amongst the baseline methods HGNN, HGNNP, HNHN, HyperGCN, UniGAT, UniGCN, UniGIN, UniSAGE, their hyperedge-dropped versions, and "Our" method, which preprocesses the hypergraph to break symmetry during training. For the hyperedge drop baselines, there is a uniform 50% chance of dropping any hyperedge. We use the Laplacian eigenmap Belkin & Niyogi (2003) positional encoding on the clique expansion of the input hypergraph. This is common practice in (hyper)link prediction and is required for using a hypergraph neural network on an unattributed hypergraph. We show in Table 2 the PR-AUC scores on amherst41. Along with the HyperGNN architectures we use for the hypergraph experiments, we also compare with standard GNN architectures: APPNP Gasteiger et al. (2018), GAT Veličković et al. (2017), GCN2 Chen et al. (2020), GCN Kipf & Welling (2016a), GIN Xu et al. (2018), and GraphSAGE Hamilton et al. (2017). For every HyperGNN/GNN architecture, we also apply drop-edge Rong et al. (2019) to the input graph and use this as an additional baseline.
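For reference, here is a minimal sketch (ours) of the positional encoding used above. It forms the clique expansion of the hypergraph, builds the unnormalized graph Laplacian (a simplification of the generalized eigenproblem in Belkin & Niyogi (2003)), and keeps the eigenvectors for the smallest nonzero eigenvalues; it assumes the clique expansion is connected.

```python
import numpy as np

def laplacian_eigenmap_pe(n, hyperedges, dim):
    """Laplacian eigenmap node features on the clique expansion."""
    A = np.zeros((n, n))
    for e in hyperedges:                  # clique expansion: connect all pairs
        for u in e:
            for v in e:
                if u != v:
                    A[u, v] = 1.0
    L = np.diag(A.sum(axis=1)) - A        # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending eigenvalues
    # Skip the constant eigenvector of eigenvalue 0; keep the next `dim`.
    return eigvecs[:, 1:dim + 1]

pe = laplacian_eigenmap_pe(4, [(0, 1, 2), (2, 3)], dim=2)  # 4 x 2 features
```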
The number of layers of each GNN is set to 5 and the hidden dimension to 1024. For APPNP and GCN2, one MLP is used on the initial node positional encodings. Overall, our method performs well across a diverse range of higher order network datasets. We observe that our method can often outperform the baseline that performs no data perturbations, as well as the same baseline with uniformly random hyperedge dropping. Our method has the added advantage of being explainable, since our algorithm works at the data level. Computational time was also not much of a concern, since our algorithm runs in time O(nnz(H) + n + m), which is optimal since this is the size of the input.

## 6.3 Empirical Observations On The Components Discovered By The Algorithm

As we are primarily concerned with symmetries in a hypergraph, we empirically measure the size and frequency of the components found by the Algorithm on real-world datasets. For the real-world datasets listed in Appendix D, in Figure 3a we plot the fraction of connected components of the same L-GWL-1 value (L = 2) that are large enough to be used by Algorithm 1, as a function of the number of nodes of the hypergraph. We notice that the fraction of connected components is not large; however, every dataset has a nonzero fraction. On the right, in Figure 3b, we show the distribution of the sizes of the connected components found by Algorithm 1. We see that, on average, the connected components are at least an order of magnitude smaller than the total number of nodes. Common to both plots, the graph datasets appear to have more nodes and a consistent fraction and size of components, while the hypergraph datasets have higher variance in the fraction of components, which is expected since there are more possibilities for the connections in a hypergraph.

![11_image_0.png](11_image_0.png)

Figure 3: (a) The fraction of connected components selected by Algorithm 1. (b) The sizes of the connected components found by Algorithm 1.

In terms of the number of identified connected components, there are at least exponentially many interventions that can be imposed on the hypergraph by simply dropping components. Thus, even finding just 10 components results in at least $2^{10} \approx 10^3$ counterfactual hypergraphs. It is known that a large set of data augmentations during learning improves learner generalization.

## 7 Conclusion

We have characterized and identified the limitations of GWL-1, a hypergraph isomorphism testing algorithm that underlies many existing HyperGNN architectures. A common issue is the inability to distinguish regular hypergraphs. More generally, maximally connected subsets of nodes that share the same GWL-1 value, which act like regular hypergraphs, are indistinguishable. To address this issue while respecting the structure of a hypergraph, we have devised a preprocessing algorithm that improves the expressivity of any GWL-1 based learner. The algorithm searches for indistinguishable regular subhypergraphs and simplifies each of them by a single hyperedge that covers the nodes of the subhypergraph. We perform extensive experiments to evaluate the effectiveness of our approach and make empirical observations about the output of the algorithm on hypergraph data.

## References

Sameer Agarwal, Kristin Branson, and Serge Belongie. Higher order learning with graphs. In Proceedings of the 23rd international conference on Machine learning, pp. 17–24, 2006.

Emily Alsentzer, Samuel Finlayson, Michelle Li, and Marinka Zitnik. Subgraph neural networks. Advances in Neural Information Processing Systems, 33:8017–8029, 2020.
Ilya Amburg, Nate Veldt, and Austin R. Benson. Clustering in graphs and hypergraphs with categorical edge labels. In *Proceedings of the Web Conference*, 2020a. Ilya Amburg, Nate Veldt, and Austin R Benson. Fair clustering for diverse and experienced groups. arXiv:2006.05645, 2020b. Devanshu Arya, Deepak K Gupta, Stevan Rudinac, and Marcel Worring. Hypersage: Generalizing inductive representation learning on hypergraphs. *arXiv preprint arXiv:2010.04558*, 2020. Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021. Pierre Baldi and Peter Sadowski. The dropout learning algorithm. *Artificial intelligence*, 210:78–122, 2014. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003. Austin R. Benson, Rediet Abebe, Michael T. Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial closure and higher-order link prediction. *Proceedings of the National Academy of Sciences*, 2018a. ISSN 0027-8424. doi: 10.1073/pnas.1800683115. Austin R Benson, Rediet Abebe, Michael T Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial closure and higher-order link prediction. *Proceedings of the National Academy of Sciences*, 115(48):E11221–E11230, 2018b. Garrett Birkhoff. *Lattice theory*, volume 25. American Mathematical Soc., 1940. Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. Advances in Neural Information Processing Systems, 34:2625–2640, 2021. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper_files/paper/2013/file/ 1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf. Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, Shenda Hong, and Hongyan Li. Hypergraph structure learning for hypergraph neural networks. In *Proceedings of the Thirty-First International Joint Conference* on Artificial Intelligence, IJCAI-22, pp. 1923–1929, 2022. Can Chen and Yang-Yu Liu. A survey on hyperlink prediction. *arXiv preprint arXiv:2207.02911*, 2022. Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In *International conference on machine learning*, pp. 1725–1735. PMLR, 2020. Samantha Chen, Sunhyuk Lim, Facundo Memoli, Zhengchao Wan, and Yusu Wang. Weisfeiler-lehman meets gromov-Wasserstein. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 3371–3416. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/chen22o.html. Samantha Chen, Sunhyuk Lim, Facundo Mémoli, Zhengchao Wan, and Yusu Wang. The weisfeiler-lehman distance: Reinterpretation and connection with gnns. *arXiv preprint arXiv:2302.00713*, 2023. 
Eli Chien, Chao Pan, Jianhao Peng, and Olgica Milenkovic. You are allset: A multiset function framework for hypergraph neural networks. *arXiv preprint arXiv:2106.13264*, 2021. Eli Chien, Chao Pan, Jianhao Peng, and Olgica Milenkovic. You are allset: A multiset function framework for hypergraph neural networks. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=hpBTIv2uy_E. Uthsav Chitra and Benjamin Raphael. Random walks on hypergraphs with edge-dependent vertex weights. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference* on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 1172–1181. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/chitra19a.html. Yun Young Choi, Sun Woo Park, Youngho Woo, and U Jin Choi. Cycle to clique (cy2c) graph neural network: A sight to see beyond neighborhood aggregation. In *The Eleventh International Conference on* Learning Representations. Nicolas A Crossley, Andrea Mechelli, Petra E Vértes, Toby T Winton-Brown, Ameera X Patel, Cedric E Ginestet, Philip McGuire, and Edward T Bullmore. Cognitive relevance of the community structure of the human brain functional coactivation network. *Proceedings of the National Academy of Sciences*, 110 (28):11583–11588, 2013. Yihe Dong, Will Sawin, and Yoshua Bengio. Hnhn: Hypergraph networks with hyperedge neurons. arXiv preprint arXiv:2006.12278, 2020. David Steven Dummit and Richard M Foote. *Abstract algebra*, volume 3. Wiley Hoboken, 2004. Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 3558–3565, 2019. Yifan Feng, Jiashu Han, Shihui Ying, and Yue Gao. Hypergraph isomorphism computation. arXiv preprint arXiv:2307.14394, 2023. Yue Gao, Yifan Feng, Shuyi Ji, and Rongrong Ji. Hgnn+: General hypergraph neural networks. *IEEE* Transactions on Pattern Analysis and Machine Intelligence, 2022. Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. *arXiv preprint arXiv:1810.05997*, 2018. Lorenzo Giusti, Claudio Battiloro, Lucia Testa, Paolo Di Lorenzo, Stefania Sardellitti, and Sergio Barbarossa. Cell attention networks. *arXiv preprint arXiv:2209.08179*, 2022. Chris Godsil and Gordon F Royle. *Algebraic graph theory*, volume 207. Springer Science & Business Media, 2001. Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. Link prediction on n-ary relational data. In *Proceedings of the 28th International Conference on World Wide Web (WWW'19)*, pp. 583–593, 2019. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. Allen Hatcher. *Algebraic topology*. Web, 2005. Yang Hu, Xiyuan Wang, Zhouchen Lin, Pan Li, and Muhan Zhang. Two-dimensional weisfeiler-lehman graph neural networks for link prediction. *arXiv preprint arXiv:2206.09567*, 2022. Jing Huang and Jie Yang. Unignn: a unified framework for graph and hypergraph neural networks. *arXiv* preprint arXiv:2105.00956, 2021. EunJeong Hwang, Veronika Thost, Shib Sankar Dasgupta, and Tengfei Ma. An analysis of virtual nodes in graph neural networks for link prediction. In *The First Learning on Graphs Conference*, 2022. Jinwoo Kim, Saeyoon Oh, Sungjun Cho, and Seunghoon Hong. Equivariant hypergraph neural networks. 
In European Conference on Computer Vision, pp. 86–103. Springer, 2022. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv* preprint arXiv:1609.02907, 2016a. Thomas N Kipf and Max Welling. Variational graph auto-encoders. *arXiv preprint arXiv:1611.07308*, 2016b. Oliver Knill. A brouwer fixed-point theorem for graph endomorphisms. *Fixed Point Theory and Applications*, 2013(1):1–24, 2013. Andreas Krebs and Oleg Verbitsky. Universal covers, color refinement, and two-variable counting logic: Lower bounds for the depth. In *2015 30th Annual ACM/IEEE Symposium on Logic in Computer Science*, pp. 689–700. IEEE, 2015. Dongjin Lee and Kijung Shin. I'm me, we're us, and i'm us: Tri-directional contrastive learning on hypergraphs. *arXiv preprint arXiv:2206.04739*, 2022. Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graph evolution: Densification and shrinking diameters. *ACM Transactions on Knowledge Discovery from Data*, 1(1), 2007. doi: 10.1145/1217299.1217301. URL https://doi.org/10.1145/1217299.1217301. Dong Li, Zhiming Xu, Sheng Li, and Xin Sun. Link prediction in social networks based on hypergraph. In Proceedings of the 22nd international conference on world wide web, pp. 41–42, 2013. Mengran Li, Yong Zhang, Xiaoyong Li, Yuchen Zhang, and Baocai Yin. Hypergraph transformer neural networks. *ACM Transactions on Knowledge Discovery from Data*, 17(5):1–22, 2023a. Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. *Advances in Neural Information Processing* Systems, 33:4465–4478, 2020. Shouheng Li, Dongwoo Kim, and Qing Wang. Local vertex colouring graph neural networks. 2023b. Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34:20887–20902, 2021. Francois Lorrain and Harrison C White. Structural equivalence of individuals in social networks. *The Journal* of mathematical sociology, 1(1):49–80, 1971. Linyuan Lü, Matúš Medo, Chi Ho Yeung, Yi-Cheng Zhang, Zi-Ke Zhang, and Tao Zhou. Recommender systems. *Physics reports*, 519(1):1–49, 2012. Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018. Rossana Mastrandrea, Julie Fournet, and Alain Barrat. Contact patterns in a high school: A comparison between data collected using wearable sensors, contact diaries and friendship surveys. *PLOS ONE*, 10(9): e0136497, 2015. doi: 10.1371/journal.pone.0136497. URL https://doi.org/10.1371/journal.pone. 0136497. Pál András Papp and Roger Wattenhofer. A theoretical comparison of graph neural network extensions. In International Conference on Machine Learning, pp. 17323–17345. PMLR, 2022. Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. Dropgnn: Random dropouts increase the expressiveness of graph neural networks. *Advances in Neural Information Processing Systems*, 34:21997–22009, 2021. Petar Ristoski and Heiko Paulheim. Rdf2vec: Rdf graph embeddings for data mining. In The Semantic Web–ISWC 2016: 15th International Semantic Web Conference, Kobe, Japan, October 17–21, 2016, Proceedings, Part I 15, pp. 498–514. Springer, 2016. Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. 
Dropedge: Towards deep graph convolutional networks on node classification. *arXiv preprint arXiv:1907.10903*, 2019. Nicolò Ruggeri, Federico Battiston, and Caterina De Bacco. A framework to generate hypergraphs with community structure. *arXiv preprint arXiv:2212.08593*, 22, 2023. Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In *Proceedings of the 2021 SIAM international conference on data mining (SDM)*, pp. 333–341. SIAM, 2021. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia. Technical Report UM-CS-2012-015, 2012. Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. An overview of microsoft academic service (MAS) and applications. In *Proceedings of the 24th International* Conference on World Wide Web. ACM Press, 2015. doi: 10.1145/2740908.2742839. URL https://doi. org/10.1145/2740908.2742839. Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. *arXiv preprint arXiv:1910.00452*, 2019. Balasubramaniam Srinivasan, Da Zheng, and George Karypis. Learning over families of sets-hypergraph representation learning for higher order tasks. In *Proceedings of the 2021 SIAM International Conference* on Data Mining (SDM), pp. 756–764. SIAM, 2021. Juliette Stehlé, Nicolas Voirin, Alain Barrat, Ciro Cattuto, Lorenzo Isella, Jean-François Pinton, Marco Quaggiotto, Wouter Van den Broeck, Corinne Régis, Bruno Lina, et al. High-resolution measurements of face-to-face contact patterns in a primary school. *PloS one*, 6(8):e23176, 2011. Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen, Soo-Hyun Choi, and Xia Hu. Bring your own view: Graph neural networks for link prediction with personalized subgraph selection. In *Proceedings* of the Sixteenth ACM International Conference on Web Search and Data Mining, pp. 625–633, 2023. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017. Changlin Wan, Muhan Zhang, Wei Hao, Sha Cao, Pan Li, and Chi Zhang. Principled hyperedge prediction with structural spectral features and neural networks. *arXiv preprint arXiv:2106.04292*, 2021. Haorui Wang, Haoteng Yin, Muhan Zhang, and Pan Li. Equivariant and stable positional encoding for more powerful graph neural networks. *arXiv preprint arXiv:2203.00199*, 2022. Xiyuan Wang, Pan Li, and Muhan Zhang. Improving graph neural networks on multi-node tasks with labeling tricks. *arXiv preprint arXiv:2304.10074*, 2023. Tianxin Wei, Yuning You, Tianlong Chen, Yang Shen, Jingrui He, and Zhangyang Wang. Augmentations in hypergraph contrastive learning: Fabricated and generative. *arXiv preprint arXiv:2210.03801*, 2022. Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. *nti, Series*, 2(9):12–16, 1968. Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. On the representation and embedding of knowledge bases beyond binary relations. *arXiv preprint arXiv:1604.08642*, 2016. Asiri Wijesinghe and Qing Wang. A new perspective on" how graph neural networks go beyond weisfeilerlehman?". In *International Conference on Learning Representations*, 2021. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. 
A comprehensive survey on graph neural networks. *IEEE transactions on neural networks and learning systems*, 32(1):4–24, 2020. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. Hypergcn: A new method for training graph convolutional networks on hypergraphs. *Advances in neural* information processing systems, 32, 2019. Hao Yin, Austin R. Benson, Jure Leskovec, and David F. Gleich. Local higher-order graph clustering. In *Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data* Mining. ACM Press, 2017. doi: 10.1145/3097983.3098069. URL https://doi.org/10.1145/3097983. 3098069. Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In *Proceedings of the AAAI conference on artificial intelligence*, volume 35, pp. 10737–10745, 2021. Dingyi Zeng, Wanlong Liu, Wenyu Chen, Li Zhou, Malu Zhang, and Hong Qu. Substructure aware graph neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 11129– 11137, 2023. Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the expressive power of gnns via graph biconnectivity. *arXiv preprint arXiv:2301.09505*, 2023. Muhan Zhang and Yixin Chen. Weisfeiler-lehman neural machine for link prediction. In *Proceedings of the* 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 575–583, 2017. Muhan Zhang and Pan Li. Nested graph neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 15734–15747. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/ paper/2021/file/8462a7c229aea03dde69da754c3bbcc4-Paper.pdf. Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. Beyond link prediction: Predicting hyperlinks in adjacency space. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. *Advances in Neural Information Processing Systems*, 34: 9061–9073, 2021. Ruochi Zhang, Yuesong Zou, and Jian Ma. Hyper-sagnn: a self-attention based graph neural network for hypergraphs. *arXiv preprint arXiv:1911.02613*, 2019. Simon Zhang, Soham Mukherjee, and Tamal K Dey. Gefl: Extended filtration learning for graph classification. In *Learning on Graphs Conference*, pp. 16–1. PMLR, 2022. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. *AI open*, 1:57–81, 2020. # Appendix ## A More Background We discuss in this section about the basics of graph representation learning and link prediction. Graphs are hypergraphs with all hyperedges of size 2. Simplicial complexes and hypergraphs are generalizations of graphs. We also discuss more related work. ## A.1 Graph Neural Networks And Weisfeiler-Lehman 1 The Weisfeiler-Lehman (WL-1) algorithm is an isomorphism testing approximation algorithm. It involves repeatedly message passing all nodes with their neighbors, a step called node label refinement. 
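The refinement can be sketched in a few lines of Python (our illustration; label compression is done with Python's hash, and the update pairs a node's label with the sorted multiset of its neighbors' labels, the standard variant of the update formalized in Equation 8 below).

```python
def wl1(adj, labels, iterations):
    """adj: dict node -> iterable of neighbors; labels: dict node -> label."""
    h = dict(labels)
    for _ in range(iterations):
        h = {v: hash((h[v], tuple(sorted(h[u] for u in adj[v]))))
             for v in adj}
    return h

# Two non-isomorphic graphs can collide: a 6-cycle and two disjoint
# triangles are both 2-regular, so WL-1 assigns every node the same value.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
h1 = wl1(cycle6, {v: 0 for v in cycle6}, 3)
h2 = wl1(triangles, {v: 0 for v in triangles}, 3)
assert len(set(h1.values()) | set(h2.values())) == 1
```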
The WL-1 algorithm never gives false negatives when predicting whether two graphs are isomorphic. In other words, two isomorphic graphs are always indistinguishable by WL-1. The WL-1 algorithm is the following successive vertex relabeling, applied until convergence on a graph G = (X, A) (a pair of the set of node attributes and the graph's adjacency structure):

$$h_v^0 \leftarrow X_v, \;\forall v \in \mathcal{V}_G$$
$$h_v^{i+1} \leftarrow \{\{(h_v^i, h_u^i)\}\}_{u \in Nbr_A(v)}, \;\forall v \in \mathcal{V}_G \qquad (8)$$

The algorithm terminates after the vertex labels converge. For graph isomorphism testing, the concatenation of the histograms of vertex labels for each iteration is output as the graph representation. Since we are only concerned with node isomorphism classes, we ignore this step and just consider the node labels h^i_v for every v ∈ VG. The WL-1 isomorphism test can be characterized in terms of rooted tree isomorphisms between the universal covers of connected graphs Krebs & Verbitsky (2015). There have also been characterizations of WL-1 in terms of counting homomorphisms Knill (2013) as well as the Wasserstein distance Chen et al. (2022) and Markov chains Chen et al. (2023).

A graph neural network (GNN) is a message passing based node representation learner modeled after the WL-1 algorithm. It has the important inductive bias of being equivariant to node indices. As a neural model of the WL-1 algorithm, it learns neural weights common across all nodes in order to obtain a vector representation for each node. A GNN must use some initial node attributes in order to update its neural weights. There are many variations on GNNs, including those that improve the distinguishing power beyond WL-1. For two surveys on GNNs and their applications, see Zhou et al. (2020); Wu et al. (2020).

## A.2 Link Prediction

The task of link prediction on graphs involves predicting the existence of links. There are two kinds of link prediction: transductive link prediction, where the same nodes are used for all of training, validation, and testing, and inductive link prediction, where the test, validation, and training nodes can all be disjoint. Some existing works on link prediction include Zhang & Chen (2017). Higher order link prediction is a generalization of link prediction to hypergraph data. A common way to do link prediction is to compute a node-based GNN and, for a pair of nodes, aggregate the node representations in the target pair in order to obtain a 2-node representation, similar to graph auto encoders Kipf & Welling (2016b). Such aggregations are of the form:

$$h(S = \{u, v\}) = \sigma(h_u \cdot h_v) \qquad (9)$$

where S is a pair of nodes. As shown in Proposition B.4, this guarantees an equivariant 2-node representation but can often give false predictions even with a fully expressive node-based GNN Wang et al. (2023). A common remedy for this problem is to introduce positional encodings such as SEAL Wang et al. (2022) and DistanceEncoding Li et al. (2020). Positional encodings encode the relative distances amongst nodes, via a low distortion embedding for example. In the related work section we have gone over many of these embeddings. We have also used them in our evaluation since they are common practice and must exist in order to compute a hypergraph neural network when there are no ground truth node attributes.
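As a concrete illustration of such an invariant aggregation, generalized from the pairwise form of Equation 9 to the sum-pooled multi-node form described in Section 6.1, here is a minimal PyTorch sketch (ours; `MultiNodeScore` is an illustrative name, not the paper's released code).

```python
import torch
import torch.nn as nn

class MultiNodeScore(nn.Module):
    """Score a k-node set S: an MLP per node embedding, a sum over the
    nodes of S (making the score invariant to the ordering of S), then a
    final MLP producing a hyperedge probability."""
    def __init__(self, dim, hidden=1024):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, node_embeddings):          # shape (k, dim) for a set S
        pooled = self.phi(node_embeddings).sum(dim=0)
        return torch.sigmoid(self.rho(pooled))   # probability of a hyperedge

score = MultiNodeScore(dim=64)(torch.randn(3, 64))  # a size-3 candidate set
```

The sum pooling is what makes the representation permutation invariant; the trade-off, as discussed next, is that invariance alone does not guarantee expressivity.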
According to Srinivasan & Ribeiro (2019), fully expressive pairwise node representations, as defined by 2-node invariance and expressivity, can be represented by some fully expressive positional embedding, which is a positional embedding that is injective on the node pair isomorphism classes. It is not clear how one would achieve this in practice, however. Another remedy is to increase the expressive power of WL-1 to WL-2 for link prediction Hu et al. (2022).

## A.3 More Related Work

The work of Wei et al. (2022) also does a data augmentation scheme. It considers randomly dropping edges and generating data through a generative model on hypergraphs. The work of Lee & Shin (2022) also performs data augmentation on a hypergraph so that homophilic relationships are maintained. It does this through contrastive losses at the node-to-node, hyperedge-to-hyperedge, and intra-hyperedge levels. Neither of these methods provides guarantees for its data augmentations. As mentioned in the main text, an ensemble of neural networks can be used with a drop-out Baldi & Sadowski (2014) like method on the output of the Algorithm. Subgraph neural networks Alsentzer et al. (2020); Tan et al. (2023) are ensembles of models on subgraphs of the input graph. Some more of the many existing hypergraph neural network architectures include: Kim et al. (2022); Cai et al. (2022); Chien et al. (2021); Bai et al. (2021); Li et al. (2023a); Arya et al. (2020).

## B Proofs

In this section we provide the proofs for all of the results in the main paper along with some additional theory.

## B.1 Hypergraph Isomorphism

We first repeat the definition of a hypergraph and its corresponding matrix representation, called the star expansion matrix:

Definition B.1. *An undirected hypergraph is a pair* H = (V, E) *consisting of a set of vertices* V *and a set of hyperedges* E ⊂ 2^V \ ({∅} ∪ {{v} | v ∈ V})*, where* 2^V *is the power set of the vertex set* V.

Definition B.2. *The star expansion incidence matrix* H *of a hypergraph* H = (V, E) *is the* |V| × 2^{|V|} *0-1 incidence matrix where* Hv,e = 1 *iff* v ∈ e *for* (v, e) ∈ V × E*, for some fixed orderings on both* V *and* 2^V.

We recall the definition of an isomorphism between hypergraphs:

Definition B.3. For two hypergraphs H and D, a structure preserving map ρ : H → D is a pair of maps ρ = (ρV : VH → VD, ρE : EH → ED) such that ∀e ∈ EH, ρE (e) ≜ {ρV (vi) | vi ∈ e} ∈ ED. A hypergraph isomorphism is a structure preserving map ρ = (ρV , ρE ) such that both ρV and ρE *are bijective.* Two hypergraphs are said to be isomorphic, denoted as H ∼= D*, if there exists an isomorphism between them.* When H = D, an isomorphism ρ is called an automorphism on H. All the automorphisms form a group, which we denote as Aut(H).

The action of π ∈ Sym(V) on the star expansion adjacency matrix H is repeated here for convenience:

$$(\pi \cdot H)_{v,\,e=(u_1...v...u_k)} \triangleq H_{\pi^{-1}(v),\,\pi^{-1}(e)=(\pi^{-1}(u_1)...\pi^{-1}(v)...\pi^{-1}(u_k))} \qquad (10)$$

Based on the group action, consider the stabilizer subgroup of Sym(V) on the star expansion adjacency matrix H, defined as follows:

$$Stab_{Sym(\mathcal{V})}(H) = \{\pi \in Sym(\mathcal{V}) \mid \pi \cdot H = H\} \qquad (11)$$

For simplicity we omit the lower index when the permutation group is clear from the context. It can be checked that Stab(H) ≤ Sym(V) is a subgroup. Intuitively, Stab(H) consists of all permutations that leave H fixed. For a given hypergraph H = (V, E), there is a relationship between the group of hypergraph automorphisms Aut(H) and the stabilizer group Stab(H) on the star expansion adjacency matrix.

Proposition B.1.
Aut(H) ∼= Stab(H) *as isomorphic groups.*

Proof. Consider ρ ∈ Aut(H) and define the map Φ : ρ ↦ π := ρ|V(H). The group element π ∈ Sym(V) acts as a stabilizer of H since, for any entry (v, e) in H, Hπ−1(v),π−1(e) = (π · H)v,e = 1 iff π−1(e) ∈ EH iff e ∈ EH iff Hv,e = 1 = Hπ◦π−1(v),π◦π−1(e). Since (v, e) was arbitrary, π preserves the positions of the nonzeros. We can check that Φ is a well defined injective homomorphism, being a restriction map. Furthermore, it is surjective since for any π ∈ Stab(H) we must have Hv,e = 1 iff (π · H)v,e = Hπ−1(v),π−1(e) = 1, which is equivalent to v ∈ e ∈ E iff π(v) ∈ π(e) ∈ E, which implies e ∈ E iff π(e) ∈ E. Thus Φ is a group isomorphism from Aut(H) to Stab(H).

In other words, to study the symmetries of a given hypergraph H, we can equivalently study the automorphisms Aut(H) and the stabilizer permutations Stab(H) on its star expansion adjacency matrix H. Intuitively, the stabilizer group Stab(H) ≤ Sym(V) characterizes the symmetries in a graph. When the graph has rich symmetries, say a complete graph, Stab(H) = Sym(V) can be as large as the whole permutation group. Nontrivial symmetries can be represented by isomorphic node sets, which we define as follows:

Definition B.4. For a given hypergraph H with star expansion matrix H, two k-node sets S, T ⊆ V are called isomorphic, denoted as S ≃ T, if ∃π ∈ Stab(H), π(S) = T and π(T) = S. When k = 1, we have isomorphic nodes, denoted u ∼=H v for u, v ∈ V.

Node isomorphism is also studied as the so-called structural equivalence in Lorrain & White (1971). Furthermore, if S ≃ T we can then say that there is a matching amongst the nodes in the two node subsets so that matched nodes are isomorphic.

Definition B.5. A k-node representation h is **k-permutation equivariant** if for all π ∈ Sym(V) and S ∈ 2^V with |S| = k: h(π · S, H) = h(S, π · H).

Proposition B.2. If a k-node representation h is k-permutation equivariant, then h is k*-node invariant.*

Proof. Given S, S′ ∈ C with |S| = |S′| = k, if there exists a π ∈ Stab(H) (meaning π · H = H) with π(S) = S′, then

$$h(S', H) = h(S', \pi \cdot H) \;\;\text{(by } \pi \cdot H = H\text{)} = h(S, H) \;\;\text{(by } k\text{-permutation equivariance of } h \text{ and } \pi(S) = S'\text{)} \qquad (12)$$

## B.2 Properties Of Gwl-1

The steps of the GWL-1 algorithm on the star expansion matrix H with node attributes X are repeated here for convenience:

$$f_e^0 \leftarrow \{\}, \; h_v^0 \leftarrow X_v$$
$$f_e^{i+1} \leftarrow \{\{(f_e^i, h_v^i)\}\}_{v \in e}, \;\forall e \in \mathcal{E}(H)$$
$$h_v^{i+1} \leftarrow \{\{(h_v^i, f_e^{i+1})\}\}_{v \in e}, \;\forall v \in \mathcal{V}(H) \qquad (13)$$

where E(H) denotes the nonzero columns of H and V(H) denotes the rows of H. We make the following observations about each of the two steps of the GWL-1 algorithm:

Observation 2. For all e ∈ E(H):

$$\{\{(f_e^i, h_v^i)\}\}_{v \in e} = \{\{(f_e'^i, h_v'^i)\}\}_{v \in e} \;\text{ iff }\; (f_e^i, \{\{h_v^i\}\}_{v \in e}) = (f_e'^i, \{\{h_v'^i\}\}_{v \in e}) \qquad (14a)$$

and, for all v ∈ V(H):

$$\{\{(h_v^i, f_e^{i+1})\}\}_{v \in e} = \{\{(h_v'^i, f_e'^{i+1})\}\}_{v \in e} \;\text{ iff }\; (h_v^i, \{\{f_e^{i+1}\}\}_{v \in e}) = (h_v'^i, \{\{f_e'^{i+1}\}\}_{v \in e}) \qquad (14b)$$

Proof.
Equation 14a follows since, for all e ∈ E(H):

$$\{\{(f_e^i, h_v^i)\}\}_{v \in e} = \{\{(f_e'^i, h_v'^i)\}\}_{v \in e} \qquad (15a)$$
$$\text{iff } f_e^i = f_e'^i \text{ and } \{\{h_v^i\}\}_{v \in e} = \{\{h_v'^i\}\}_{v \in e} \qquad (15b)$$
$$\text{iff } (f_e^i, \{\{h_v^i\}\}_{v \in e}) = (f_e'^i, \{\{h_v'^i\}\}_{v \in e}) \qquad (15c)$$

For Equation 14b we have, for all v ∈ V(H):

$$\{\{(h_v^i, f_e^{i+1})\}\}_{v \in e} = \{\{(h_v'^i, f_e'^{i+1})\}\}_{v \in e} \qquad (16a)$$
$$\text{iff } \{\{(h_v^i, \{\{(f_e^i, h_u^i)\}\}_{u \in e})\}\}_{v \in e} = \{\{(h_v'^i, \{\{(f_e'^i, h_u'^i)\}\}_{u \in e})\}\}_{v \in e} \qquad (16b)$$
$$\text{iff } h_v^i = h_v'^i \text{ and } \{\{(f_e^i, h_u^i)\}\}_{u \in e, v \in e} = \{\{(f_e'^i, h_u'^i)\}\}_{u \in e, v \in e} \qquad (16c)$$
$$\text{iff } h_v^i = h_v'^i \text{ and } \{\{f_e^{i+1}\}\}_{v \in e} = \{\{f_e'^{i+1}\}\}_{v \in e}$$

These follow by the definition of multiset equality and since there is no loss of information upon factoring out a constant tuple entry of each pair in the multisets.

Proposition B.3. *The update steps of GWL-1,* $f^i(H) \triangleq [f^i_{e_1}(H), \cdots, f^i_{e_m}(H)]$ *and* $h^i(H) \triangleq [h^i_{v_1}(H), \cdots, h^i_{v_n}(H)]$*, are permutation equivariant; in other words, for any* π ∈ Sym(V)*, letting* $\pi \cdot f^i(H) \triangleq [f^i_{\pi^{-1}(e_1)}(H), \cdots, f^i_{\pi^{-1}(e_m)}(H)]$ *and* $\pi \cdot h^i(H) \triangleq [h^i_{\pi^{-1}(v_1)}(H), \cdots, h^i_{\pi^{-1}(v_n)}(H)]$*, we have, for all* i ∈ N*,* π · f^i(H) = f^i(π · H) *and* π · h^i(H) = h^i(π · H).

Proof. We prove by induction on i.

Base case, i = 0: $[\pi \cdot f^0(H)]_{e=(v_1...v_k)} = \{\} = f^0_{\pi^{-1}(e)=(\pi^{-1}(v_1)...\pi^{-1}(v_k))}(H) = f^0_e(\pi \cdot H)$, since π cannot affect a list of empty sets, by the definition of the action of π on H in Equation 10. Also, $[\pi \cdot h^0(H)]_v = [\pi \cdot X]_v = X_{\pi^{-1}(v)} = h^0_{\pi^{-1}(v)}(H) = h^0_v(\pi \cdot H)$, by definition of the group action of Sym(V) on the node indices of a node attribute tensor as in Equation 10.

Induction Hypothesis:

$$[\pi \cdot f^i(H)]_e = f^i_{\pi^{-1}(e)}(H) = f^i_e(\pi \cdot H) \;\text{ and }\; [\pi \cdot h^i(H)]_v = h^i_{\pi^{-1}(v)}(H) = h^i_v(\pi \cdot H) \qquad (17)$$

Induction Step:

$$[\pi \cdot h^{i+1}(H)]_v = \{\{([\pi \cdot h^i(H)]_v, [\pi \cdot f^{i+1}(H)]_e)\}\}_{v \in e} = \{\{([\pi \cdot h^i(H)]_v, \{\{([\pi \cdot f^i(H)]_e, [\pi \cdot h^i(H)]_u)\}\}_{u \in e})\}\}_{v \in e} = \{\{(h^i_v(\pi \cdot H), \{\{(f^i_e(\pi \cdot H), h^i_u(\pi \cdot H))\}\}_{u \in e})\}\}_{v \in e} = h^{i+1}_v(\pi \cdot H) \qquad (18)$$

$$[\pi \cdot f^{i+1}(H)]_e = \{\{([\pi \cdot f^i(H)]_e, [\pi \cdot h^i(H)]_v)\}\}_{v \in e} = \{\{(f^i_e(\pi \cdot H), h^i_v(\pi \cdot H))\}\}_{v \in e} = f^{i+1}_e(\pi \cdot H) \qquad (19)$$

Definition B.6. Let $h : [\mathcal{V}]^k \times \mathbb{Z}_2^{n \times 2^n} \to \mathbb{R}^d$ *be a* k*-node representation on a hypergraph* H*. Let* $H \in \mathbb{Z}_2^{n \times 2^n}$ *be the star expansion adjacency matrix of* H *for* n *nodes. The representation* h *is* k*-node most expressive if* ∀S, S′ ⊂ V, |S| = |S′| = k*, the following two conditions are satisfied:*

1. h *is* **k-node invariant**: ∃π ∈ Stab(H), π(S) = S′ =⇒ h(S, H) = h(S′, H).
2. h *is* **k-node expressive**: ∄π ∈ Stab(H), π(S) = S′ =⇒ h(S, H) ̸= h(S′, H).

Let AGG be a permutation invariant map from a set of node representations to R^d.

Proposition B.4. Let h(S, H) = AGG_{v∈S}[h^i_v(H)] *with injective* AGG *and* h^i_v *permutation equivariant. The representation* h(S, H) *is* k*-node invariant but not necessarily* k*-node expressive for* S *a set of* k *nodes.*

Proof. Suppose ∃π ∈ Stab(H) s.t.
π(S) = S′; that is, π · H = H and π(vi) = v′i for i = 1...|S|. Then h^i_{π(v)}(H) = h^i_v(π · H) = h^i_v(H), by permutation equivariance of h^i_v and π · H = H, and therefore AGG_{v∈S}[h^i_v(H)] = AGG_{v′∈S′}[h^i_{v′}(H)], by Proposition B.2 and AGG being permutation invariant. The converse, that h(S, H) is k-node expressive, is not necessarily true, since we cannot guarantee that h(S, H) = h(S′, H) implies the existence of a permutation that maps S to S′ (see Zhang et al. (2021)).

A hypergraph can be represented by a bipartite graph BV,E from V to E where there is an edge (v, e) in the bipartite graph iff node v is incident to hyperedge e. This bipartite graph BV,E is called the star expansion bipartite graph. We introduce a more structured version of graph isomorphism called a 2-color isomorphism to characterize hypergraphs. It is a map on 2-colored graphs, which are graphs that can be colored with two colors so that no two nodes of the same color are connected by an edge. We define a 2-colored isomorphism formally here:

Definition B.7. A 2-colored isomorphism is a graph isomorphism on two 2-colored graphs that preserves node colors. In particular, between two graphs G1 and G2*, the vertices of one color in* G1 *must map to vertices of the same color in* G2*. It is denoted by* ∼=c.

A bipartite graph must always have a 2-coloring. In fact, the 2-coloring with all the nodes in the node bipartition colored red and all the nodes in the hyperedge bipartition colored blue forms a canonical 2-coloring of BV,E . Assume that all star expansion bipartite graphs are canonically 2-colored.

Proposition B.5. We have two hypergraphs (V1, E1) ∼= (V2, E2) iff BV1,E1 ∼=c BV2,E2 *where* BV,E *is the star expansion bipartite graph of* (V, E).

Proof. Denote L(BVi,Ei ) as the left hand (red) bipartition of BVi,Ei , representing the nodes Vi of (Vi, Ei), and R(BVi,Ei ) as the right hand (blue) bipartition of BVi,Ei , representing the hyperedges Ei of (Vi, Ei). We use the left/right bipartitions and Vi/Ei interchangeably since they are in bijection.

⇒: If there is an isomorphism π : V1 → V2, this means

- π is a bijection, and
- π has the structure preserving property that (u1...uk) ∈ E1 iff (π(u1)...π(uk)) ∈ E2.

We may induce a 2-colored isomorphism π∗ : V(BV1,E1 ) → V(BV2,E2 ) so that π∗|L(BV1,E1 ) = π, where equality here means that π∗|L(BV1,E1 ) acts on L(BV1,E1 ) the same way that π does on V1. Furthermore, π∗ has the property that π∗|R(BV1,E1 )(u1...uk) = (π(u1)...π(uk)), ∀(u1...uk) ∈ E1, following the structure preserving property of the isomorphism π. The map π∗ is a bijection, by definition of being an extension of a bijection. The map π∗ is also a 2-colored map since it maps L(BV1,E1 ) to L(BV2,E2 ) and R(BV1,E1 ) to R(BV2,E2 ). We can also check that the map is structure preserving, and thus a 2-colored isomorphism, since (ui,(u1...ui...uk)) ∈ E(BV1,E1 ), ∀i = 1...k iff (ui ∈ V1 and (u1...ui...uk) ∈ E1) iff π(ui) ∈ V2 and (π(u1)...π(ui)...π(uk)) ∈ E2 iff (π∗(ui), π∗(u1...ui...uk)) ∈ E(BV2,E2 ), ∀i = 1...k. This follows from π being structure preserving and the definition of π∗.

⇐: If there is a 2-colored isomorphism π∗ : BV1,E1 → BV2,E2 then it has the properties that

- π∗ is a bijection,
- (it is 2-colored): π∗|L(BV1,E1 ) : L(BV1,E1 ) → L(BV2,E2 ) and π∗|R(BV1,E1 ) : R(BV1,E1 ) → R(BV2,E2 ),
- (it is structure preserving): (ui,(u1...ui...uk)) ∈ E(BV1,E1 ), ∀i = 1...k iff (π∗(ui), π∗(u1...ui...uk)) ∈ E(BV2,E2 ), ∀i = 1...k.

This then means that we may induce a π : V1 → V2 so that π = π∗|L(BV1,E1 ).
Proposition B.5. We have two hypergraphs $(\mathcal{V}_1,\mathcal{E}_1)\cong(\mathcal{V}_2,\mathcal{E}_2)$ iff $B_{\mathcal{V}_1,\mathcal{E}_1}\cong_c B_{\mathcal{V}_2,\mathcal{E}_2}$, where $B_{\mathcal{V},\mathcal{E}}$ is the star expansion bipartite graph of $(\mathcal{V},\mathcal{E})$.

Proof. Denote by $L(B_{\mathcal{V}_i,\mathcal{E}_i})$ the left hand (red) bipartition of $B_{\mathcal{V}_i,\mathcal{E}_i}$, representing the nodes $\mathcal{V}_i$ of $(\mathcal{V}_i,\mathcal{E}_i)$, and by $R(B_{\mathcal{V}_i,\mathcal{E}_i})$ the right hand (blue) bipartition of $B_{\mathcal{V}_i,\mathcal{E}_i}$, representing the hyperedges $\mathcal{E}_i$ of $(\mathcal{V}_i,\mathcal{E}_i)$. We use the left/right bipartition and $\mathcal{V}_i$/$\mathcal{E}_i$ interchangeably since they are in bijection.

⇒ If there is an isomorphism $\pi:\mathcal{V}_1\to\mathcal{V}_2$, this means

- π is a bijection, and
- π has the structure preserving property that $(u_1...u_k)\in\mathcal{E}_1$ iff $(\pi(u_1)...\pi(u_k))\in\mathcal{E}_2$.

We may induce a 2-colored isomorphism $\pi^*:\mathcal{V}(B_{\mathcal{V}_1,\mathcal{E}_1})\to\mathcal{V}(B_{\mathcal{V}_2,\mathcal{E}_2})$ so that $\pi^*|_{L(B_{\mathcal{V}_1,\mathcal{E}_1})}=\pi$, where equality here means that $\pi^*|_{L(B_{\mathcal{V}_1,\mathcal{E}_1})}$ acts on $L(B_{\mathcal{V}_1,\mathcal{E}_1})$ the same way that π does on $\mathcal{V}_1$. Furthermore, $\pi^*$ has the property that $\pi^*|_{R(B_{\mathcal{V}_1,\mathcal{E}_1})}((u_1...u_k))=(\pi(u_1)...\pi(u_k))$, $\forall(u_1...u_k)\in\mathcal{E}_1$, following the structure preserving property of the isomorphism π. The map $\pi^*$ is a bijection by definition of being an extension of a bijection. The map $\pi^*$ is also 2-colored since it maps $L(B_{\mathcal{V}_1,\mathcal{E}_1})$ to $L(B_{\mathcal{V}_2,\mathcal{E}_2})$ and $R(B_{\mathcal{V}_1,\mathcal{E}_1})$ to $R(B_{\mathcal{V}_2,\mathcal{E}_2})$. We can also check that the map is structure preserving and thus a 2-colored isomorphism, since $(u_i,(u_1...u_i...u_k))\in\mathcal{E}(B_{\mathcal{V}_1,\mathcal{E}_1})\ \forall i=1...k$ iff ($u_i\in\mathcal{V}_1$ and $(u_1...u_i...u_k)\in\mathcal{E}_1$) iff $\pi(u_i)\in\mathcal{V}_2$ and $(\pi(u_1)...\pi(u_i)...\pi(u_k))\in\mathcal{E}_2$ iff $(\pi^*(u_i),\pi^*((u_1...u_i...u_k)))\in\mathcal{E}(B_{\mathcal{V}_2,\mathcal{E}_2})\ \forall i=1...k$. This follows from π being structure preserving and the definition of $\pi^*$.

⇐ If there is a 2-colored isomorphism $\pi^*:B_{\mathcal{V}_1,\mathcal{E}_1}\to B_{\mathcal{V}_2,\mathcal{E}_2}$, then it has the properties that

- $\pi^*$ is a bijection,
- (it is 2-colored): $\pi^*|_{L(B_{\mathcal{V}_1,\mathcal{E}_1})}:L(B_{\mathcal{V}_1,\mathcal{E}_1})\to L(B_{\mathcal{V}_2,\mathcal{E}_2})$ and $\pi^*|_{R(B_{\mathcal{V}_1,\mathcal{E}_1})}:R(B_{\mathcal{V}_1,\mathcal{E}_1})\to R(B_{\mathcal{V}_2,\mathcal{E}_2})$,
- (it is structure preserving): $(u_i,(u_1...u_i...u_k))\in\mathcal{E}(B_{\mathcal{V}_1,\mathcal{E}_1})\ \forall i=1...k$ iff $(\pi^*(u_i),\pi^*((u_1...u_i...u_k)))\in\mathcal{E}(B_{\mathcal{V}_2,\mathcal{E}_2})\ \forall i=1...k$.

This then means that we may induce a $\pi:\mathcal{V}_1\to\mathcal{V}_2$ so that $\pi=\pi^*|_{L(B_{\mathcal{V}_1,\mathcal{E}_1})}$. We can check that π is a bijection since π is the 2-colored bijection $\pi^*$ restricted to $L(B_{\mathcal{V}_1,\mathcal{E}_1})$, thus remaining a bijection. We can also check that π is structure preserving. This means that $(u_1...u_k)\in\mathcal{E}_1$ iff $(u_i,(u_1...u_i...u_k))\in\mathcal{E}(B_{\mathcal{V}_1,\mathcal{E}_1})\ \forall i=1...k$ iff $(\pi^*(u_i),(\pi^*(u_1...u_i...u_k)))\in\mathcal{E}(B_{\mathcal{V}_2,\mathcal{E}_2})\ \forall i=1...k$ iff $\pi^*((u_1...u_k))\in R(B_{\mathcal{V}_2,\mathcal{E}_2})$ iff $(\pi(u_1)...\pi(u_k))\in\mathcal{E}_2$.

We define a topological object for a graph, originally from algebraic topology, called a universal cover:

Definition B.8. (Hatcher (2005)) A universal covering of a connected graph G is a (potentially infinite) graph $\tilde{G}$, s.t. there is a map $p_G:\tilde{G}\to G$ called the universal covering map, where:

1. $\forall x\in\mathcal{V}(\tilde{G})$, $p_G|_{N(x)}$ is an isomorphism onto $N(p_G(x))$.
2. $\tilde{G}$ is simply connected (a tree).

A covering graph is a graph that satisfies property 1 but not necessarily property 2 in Definition B.8. It is known that a universal covering $\tilde{G}$ covers all the graph covers of the graph G. Let $T^G_x$ denote a tree with root x. Furthermore, define a rooted isomorphism $G_x\cong H_y$ as an isomorphism between graphs G and H that maps x to y and vice versa. We will use the following result to prove a characterization of GWL-1:

Lemma B.6 (Krebs & Verbitsky (2015)). Let T and S be trees and $x\in V(T)$ and $y\in V(S)$ be their vertices of the same degree with neighborhoods $N(x)=\{x_1,...,x_k\}$ and $N(y)=\{y_1,...,y_k\}$. Let $r\geq 1$. Suppose that $T^{r-1}_x\cong S^{r-1}_y$ and $T^r_{x_i}\cong S^r_{y_i}$ for all $i\leq k$. Then $T^{r+1}_x\cong S^{r+1}_y$.

A universal cover of a 2-colored bipartite graph is still 2-colored. When we lift nodes v and hyperedge nodes e to their universal cover, we keep their respective red and blue colors. Define a rooted colored isomorphism $T^k_{\tilde{e}_1}\cong_c T^k_{\tilde{e}_2}$ as a colored tree isomorphism where the blue/red node $\tilde{e}_1/\tilde{v}_1$ maps to the blue/red node $\tilde{e}_2/\tilde{v}_2$ and vice versa. In fact, Lemma B.6 holds for 2-colored isomorphisms, which we show below:

Lemma B.7. Let T and S be 2-colored trees and $x\in V(T)$ and $y\in V(S)$ be their vertices of the same degree with neighborhoods $N(x)=\{x_1,...,x_k\}$ and $N(y)=\{y_1,...,y_k\}$. Let $r\geq 1$. Suppose that $T^{r-1}_x\cong_c S^{r-1}_y$ and $T^r_{x_i}\cong_c S^r_{y_i}$ for all $i\leq k$. Then $T^{r+1}_x\cong_c S^{r+1}_y$.

Proof. Certainly 2-colored isomorphisms are rooted isomorphisms on 2-colored trees. The converse is true if the roots match in color, since recursively all descendants of the root must then match in color. If $T^{r-1}_x\cong_c S^{r-1}_y$ and $T^r_{x_i}\cong_c S^r_{y_i}$ for all $i\leq k$, with $N(x)=\{x_1...x_k\}$ and $N(y)=\{y_1...y_k\}$, the roots x and y must match in color. The neighborhoods N(x) and N(y) then must both be of the opposing color. Since rooted colored isomorphisms are rooted isomorphisms, we must have $T^{r-1}_x\cong S^{r-1}_y$ and $T^r_{x_i}\cong S^r_{y_i}$ for all $i\leq k$. By Lemma B.6, we have $T^{r+1}_x\cong S^{r+1}_y$. Once the roots match in color, a rooted tree isomorphism is the same as a rooted 2-colored tree isomorphism. Thus, since x and y share the same color, $T^{r+1}_x\cong_c S^{r+1}_y$.
Theorem B.8. Let $\mathcal{H}_1=(\mathcal{V}_1,\mathcal{E}_1)$ and $\mathcal{H}_2=(\mathcal{V}_2,\mathcal{E}_2)$ be two connected hypergraphs. Let $B_{\mathcal{V}_1,\mathcal{E}_1}$ and $B_{\mathcal{V}_2,\mathcal{E}_2}$ be the two canonically colored bipartite graphs for $\mathcal{H}_1$ and $\mathcal{H}_2$ (vertices colored red and hyperedges colored blue). For any $i\in\mathbb{N}^+$, for any of the nodes $x_1\in\mathcal{V}_1$, $e_1\in\mathcal{E}_1$ and $x_2\in\mathcal{V}_2$, $e_2\in\mathcal{E}_2$:

$$(\tilde{B}^{2i-1}_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{e}_1}\cong_c(\tilde{B}^{2i-1}_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{e}_2}\ \text{iff}\ f^i_{e_1}=f^i_{e_2},\qquad(\tilde{B}^{2i}_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{x}_1}\cong_c(\tilde{B}^{2i}_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{x}_2}\ \text{iff}\ h^i_{x_1}=h^i_{x_2},$$

with $f^i_\bullet$, $h^i_\bullet$ the ith GWL-1 values for the hyperedges and nodes respectively, where $e_1=p_{B_{\mathcal{V}_1,\mathcal{E}_1}}(\tilde{e}_1)$, $x_1=p_{B_{\mathcal{V}_1,\mathcal{E}_1}}(\tilde{x}_1)$, $e_2=p_{B_{\mathcal{V}_2,\mathcal{E}_2}}(\tilde{e}_2)$, $x_2=p_{B_{\mathcal{V}_2,\mathcal{E}_2}}(\tilde{x}_2)$. The maps $p_{B_{\mathcal{V}_1,\mathcal{E}_1}}:\tilde{B}_{\mathcal{V}_1,\mathcal{E}_1}\to B_{\mathcal{V}_1,\mathcal{E}_1}$ and $p_{B_{\mathcal{V}_2,\mathcal{E}_2}}:\tilde{B}_{\mathcal{V}_2,\mathcal{E}_2}\to B_{\mathcal{V}_2,\mathcal{E}_2}$ are the universal covering maps of $B_{\mathcal{V}_1,\mathcal{E}_1}$ and $B_{\mathcal{V}_2,\mathcal{E}_2}$ respectively.

Proof. We prove by induction. Let $T^k_{\tilde{e}_1}:=(\tilde{B}^k_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{e}_1}$, where $\tilde{e}_1$ is a pullback of a hyperedge, meaning $p_{B_{\mathcal{V}_1,\mathcal{E}_1}}(\tilde{e}_1)=e_1$. Similarly, let $T^k_{\tilde{e}_2}:=(\tilde{B}^k_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{e}_2}$, $T^k_{\tilde{x}_1}:=(\tilde{B}^k_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{x}_1}$, $T^k_{\tilde{x}_2}:=(\tilde{B}^k_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{x}_2}$, $\forall k\in\mathbb{N}$, where $\tilde{e}_1,\tilde{e}_2,\tilde{x}_1,\tilde{x}_2$ are the respective pullbacks of $e_1,e_2,x_1,x_2$. Define a (2-colored) isomorphism of multisets to mean that there exists a bijection between the two multisets so that each element in one multiset is (2-colored) isomorphic with exactly one other element in the other multiset.

By Observation 3 we can rewrite GWL-1 as:

$$f^0_e\leftarrow\{\},\quad h^0_v\leftarrow X_v \tag{20}$$
$$f^{i+1}_e\leftarrow(f^i_e,\{\{h^i_v\}\}_{v\in e})\ \forall e\in\mathcal{E}_\mathcal{H} \tag{21}$$
$$h^{i+1}_v\leftarrow(h^i_v,\{\{f^{i+1}_e\}\}_{v\in e})\ \forall v\in\mathcal{V}_\mathcal{H} \tag{22}$$

Base Case, i = 1:

$$T^1_{\tilde{e}_1}\cong_c T^1_{\tilde{e}_2}\ \text{iff}\ (T^0_{\tilde{e}_1}\cong_c T^0_{\tilde{e}_2}\ \text{and}\ \{\{T^0_{\tilde{x}_1}\}\}_{x_1\in N(e_1)}\cong_c\{\{T^0_{\tilde{x}_2}\}\}_{x_2\in N(e_2)})\ \text{(By Lemma B.7)} \tag{23a}$$
$$\text{iff}\ (f^0_{e_1}=f^0_{e_2}\ \text{and}\ \{\{h^0_{x_1}\}\}=\{\{h^0_{x_2}\}\})\ \text{(By Equation 20)} \tag{23b}$$
$$\text{iff}\ f^1_{e_1}=f^1_{e_2}\ \text{(By Equation 21)} \tag{23c}$$

$$T^2_{\tilde{x}_1}\cong_c T^2_{\tilde{x}_2}\ \text{iff}\ (T^0_{\tilde{x}_1}\cong_c T^0_{\tilde{x}_2}\ \text{and}\ \{\{T^1_{\tilde{e}_1}\}\}_{\tilde{e}_1\in N(\tilde{x}_1)}\cong_c\{\{T^1_{\tilde{e}_2}\}\}_{\tilde{e}_2\in N(\tilde{x}_2)})\ \text{(By Lemma B.7)} \tag{24a}$$
$$\text{iff}\ (h^0_{x_1}=h^0_{x_2}\ \text{and}\ \{\{f^1_{e_1}\}\}=\{\{f^1_{e_2}\}\})\ \text{(By Equation 20)} \tag{24b}$$
$$\text{iff}\ h^1_{x_1}=h^1_{x_2}\ \text{(By Equation 22)} \tag{24c}$$

Induction Hypothesis: For $i\geq 1$, $T^{2i-1}_{\tilde{e}_1}\cong_c T^{2i-1}_{\tilde{e}_2}$ iff $f^i_{e_1}=f^i_{e_2}$, and $T^{2i}_{\tilde{x}_1}\cong_c T^{2i}_{\tilde{x}_2}$ iff $h^i_{x_1}=h^i_{x_2}$.

Induction Step:

$$T^{2i+1}_{\tilde{e}_1}\cong_c T^{2i+1}_{\tilde{e}_2}\ \text{iff}\ (T^{2i-1}_{\tilde{e}_1}\cong_c T^{2i-1}_{\tilde{e}_2}\ \text{and}\ \{\{T^{2i}_{\tilde{x}_1}\}\}_{\tilde{x}_1\in N(\tilde{e}_1)}\cong_c\{\{T^{2i}_{\tilde{x}_2}\}\}_{\tilde{x}_2\in N(\tilde{e}_2)})\ \text{(By Lemma B.7)} \tag{25a}$$
$$\text{iff}\ (f^i_{e_1}=f^i_{e_2}\ \text{and}\ \{\{h^i_{x_1}\}\}=\{\{h^i_{x_2}\}\})\ \text{(By Induction Hypothesis)} \tag{25b}$$
$$\text{iff}\ f^{i+1}_{e_1}=f^{i+1}_{e_2}\ \text{(By Equation 21)} \tag{25c}$$

$$T^{2i}_{\tilde{x}_1}\cong_c T^{2i}_{\tilde{x}_2}\ \text{iff}\ (T^{2i-2}_{\tilde{x}_1}\cong_c T^{2i-2}_{\tilde{x}_2}\ \text{and}\ \{\{T^{2i-1}_{\tilde{e}_1}\}\}_{\tilde{e}_1\in N(\tilde{x}_1)}\cong_c\{\{T^{2i-1}_{\tilde{e}_2}\}\}_{\tilde{e}_2\in N(\tilde{x}_2)})\ \text{(By Lemma B.7)} \tag{26a}$$
$$\text{iff}\ (h^{i-1}_{x_1}=h^{i-1}_{x_2}\ \text{and}\ \{\{f^i_{e_1}\}\}=\{\{f^i_{e_2}\}\})\ \text{(By Induction Hypothesis)} \tag{26b}$$
$$\text{iff}\ h^i_{x_1}=h^i_{x_2}\ \text{(By Equation 22)} \tag{26c}$$

Observation 3. If the node values for nodes x and y from GWL-1 for i iterations on two hypergraphs $\mathcal{H}_1$ and $\mathcal{H}_2$ are the same, then for all j with $0\leq j\leq i$, the node values from GWL-1 for j iterations on x and y also agree. In particular, $\deg(x)=\deg(y)$.

Proof. There is a 2-color isomorphism on the subtrees $(\tilde{B}^j_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{x}}$ and $(\tilde{B}^j_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{y}}$ of the i-hop subtrees of the universal covers rooted about the nodes $x\in\mathcal{V}_1$ and $y\in\mathcal{V}_2$ for $0\leq j\leq i$, since $(\tilde{B}^i_{\mathcal{V}_1,\mathcal{E}_1})_{\tilde{x}}\cong_c(\tilde{B}^i_{\mathcal{V}_2,\mathcal{E}_2})_{\tilde{y}}$. By Theorem B.8, we have that GWL-1 returns the same value for x and y for each $0\leq j\leq i$.
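To make the updates of Equations 20–22 concrete, here is a minimal sketch (our illustration, not the released implementation) of GWL-1 refinement in which each tuple is replaced by a hash so the values stay constant-size; `hyperedges` is a list of node tuples and `node_attrs` stands in for X:

```python
def gwl1(hyperedges, node_attrs, L):
    """L iterations of GWL-1 (Equations 20-22) with hashed values."""
    h = {v: hash(("h0", a)) for v, a in node_attrs.items()}  # h^0_v <- X_v
    f = {j: hash(("f0",)) for j in range(len(hyperedges))}   # f^0_e <- {}
    for _ in range(L):
        # f^{i+1}_e <- (f^i_e, {{h^i_v}}_{v in e}); sorting encodes the multiset
        f = {j: hash((f[j], tuple(sorted(h[v] for v in e))))
             for j, e in enumerate(hyperedges)}
        # h^{i+1}_v <- (h^i_v, {{f^{i+1}_e}}_{v in e})
        incident = {v: [] for v in h}
        for j, e in enumerate(hyperedges):
            for v in e:
                incident[v].append(f[j])
        h = {v: hash((h[v], tuple(sorted(fs)))) for v, fs in incident.items()}
    return h, f  # node values h^L_v and hyperedge values f^L_e
```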
Proposition B.9. If GWL-1 cannot distinguish two connected hypergraphs $\mathcal{H}_1$ and $\mathcal{H}_2$, then HyperPageRank will not either.

Proof. HyperPageRank is defined on a hypergraph with star expansion matrix H as the following stationary distribution Π:

$$\lim_{n\to\infty}(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)^n=\Pi \tag{27}$$

If H is a connected bipartite graph, Π must be the eigenvector of $(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)$ for eigenvalue 1. In other words, Π must satisfy

$$(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)\cdot\Pi=\Pi \tag{28}$$

By Theorem 1 of Huang & Yang (2021), we know that the UniGCN defined by:

$$h^{i+1}_e\leftarrow\phi_2(h^i_e,h^i_v)=W_e\cdot H^T\cdot h^i_v \tag{29a}$$
$$h^{i+1}_v\leftarrow\phi_1(h^i_v,h^{i+1}_e)=W_v\cdot H\cdot h^{i+1}_e \tag{29b}$$

for constant weight matrices $W_e$ and $W_v$, is equivalent to GWL-1 provided that $\phi_1$ and $\phi_2$ are both injective as functions. Without injectivity, we can only guarantee that if UniGCN distinguishes $\mathcal{H}_1,\mathcal{H}_2$ then GWL-1 distinguishes $\mathcal{H}_1,\mathcal{H}_2$. In fact, each matrix power of order n in Equation 27 corresponds to $h^n_v$ so long as we satisfy the following constraints:

$$W_e\leftarrow D_e^{-1},\quad W_v\leftarrow D_v^{-1}\quad\text{and}\quad h^0_v\leftarrow I \tag{30}$$

We show that the matrix powers are UniGCN under the constraints of Equation 30 by induction.

Base Case, n = 0: $h^0_v=I$.

Induction Hypothesis, n > 0:

$$(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)^n=h^n_v \tag{31}$$

Induction Step:

$$h^{n+1}_v=(D_v^{-1}\cdot H\cdot h^{n+1}_e) \tag{32a}$$
$$=(D_v^{-1}\cdot H\cdot((D_e^{-1}\cdot H^T)\cdot h^n_v)) \tag{32b}$$
$$=(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)\cdot(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)^n \tag{32c}$$
$$=(D_v^{-1}\cdot H\cdot D_e^{-1}\cdot H^T)^{n+1} \tag{32d}$$

Since we cannot guarantee that the maps $\phi_1$ and $\phi_2$ are injective in Equation 32b, it must be that the output $h^n_v$, coming from UniGCN with the constraints of Equation 30, is at most as powerful as GWL-1. In general, injectivity preserves more information. For example, if $\phi_1$ is injective and $\phi'_1$ is an arbitrary map (not guaranteed to be injective), then:

$$\phi_1(h_1)=\phi_1(h_2)\Rightarrow h_1=h_2\Rightarrow\phi'_1(h_1)=\phi'_1(h_2) \tag{33}$$

HyperPageRank is exactly as powerful as UniGCN under the constraints of Equation 30. Thus HyperPageRank is at most as powerful as GWL-1 in distinguishing power.
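As a quick illustration of Equation 27, the following toy sketch (ours, assuming dense `numpy` arrays and a connected hypergraph) approximates Π by taking a large matrix power of $D_v^{-1}HD_e^{-1}H^T$:

```python
import numpy as np

def hyper_pagerank(H, iters=100):
    """Approximate the HyperPageRank limit of Equation 27.

    H is the n x m star expansion incidence matrix: H[v, e] = 1
    iff node v is incident to hyperedge e."""
    Dv_inv = np.diag(1.0 / H.sum(axis=1))  # inverse node degrees
    De_inv = np.diag(1.0 / H.sum(axis=0))  # inverse hyperedge degrees
    P = Dv_inv @ H @ De_inv @ H.T          # one random-walk step
    return np.linalg.matrix_power(P, iters)

H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(hyper_pagerank(H))  # rows converge to the stationary distribution
```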
![Figure 4](28_image_0.png)

Figure 4: An illustration of hypergraph symmetry breaking. (c,d) 3-regular hypergraphs $C^3_4$, $C^3_5$ with 4 and 5 nodes respectively and their corresponding universal covers centered at any hyperedge, $(\tilde{B}_{C^3_4})_{e_{*,*,*}}$, $(\tilde{B}_{C^3_5})_{e_{*,*,*}}$, with universal covering maps $p_{B_{C^3_4}}$, $p_{B_{C^3_5}}$. (b,e) The hypergraphs $\hat{C}^3_4$, $\hat{C}^3_5$, which are $C^3_4$, $C^3_5$ with 4- and 5-sized hyperedges attached to them, and their corresponding universal covers and universal covering maps. (a,f) The corresponding bipartite graphs of $\hat{C}^3_4$, $\hat{C}^3_5$. (c,d) are indistinguishable by GWL-1 and thus will give identical node values by Theorem B.8. On the other hand, (b,e) give node values which are now sensitive to the orders 4, 5 of the hypergraphs, also by Theorem B.8.

## B.3 Method

We repeat here from the main text the symmetry finding algorithm:

**Algorithm 2: A Symmetry Finding Algorithm**

Data: Hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, represented by its star expansion matrix H. $L\in\mathbb{N}^+$ is the number of iterations to run GWL-1.
Result: A pair of collections $(R_\mathcal{V}=\{\mathcal{V}_{\mathcal{R}_j}\},R_\mathcal{E}=\cup_j\{\mathcal{E}_{\mathcal{R}_j}\})$, where the $\mathcal{R}_j$ are disconnected subhypergraphs exhibiting symmetry in $\mathcal{H}$ that are indistinguishable by L-GWL-1.

1. $E_{deg}\leftarrow\{\{\deg(v):v\in e\}:\forall e\in\mathcal{E}\}$
2. $U_L\leftarrow h^L_v(H)$; $\mathcal{G}_L\leftarrow\{U_L[v]:\forall v\in\mathcal{V}\}$ /* $U_L[v]$ is the L-GWL-1 value of node $v\in\mathcal{V}$. */
3. $B_{\mathcal{V}_\mathcal{H},\mathcal{E}_\mathcal{H}}\leftarrow Bipartite(H)$ /* Construct the bipartite graph from H. */
4. $R_\mathcal{V}\leftarrow\{\}$; $R_\mathcal{E}\leftarrow\{\}$
5. **for** $c_L\in\mathcal{G}_L$ **do**
6. &nbsp;&nbsp; $\mathcal{V}_{c_L}\leftarrow\{v\in\mathcal{V}:U_L[v]=c_L\}$, $\mathcal{E}_{c_L}\leftarrow\{e\in\mathcal{E}:u\in\mathcal{V}_{c_L},\forall u\in e\}$
7. &nbsp;&nbsp; $\mathcal{C}_{c_L}\leftarrow ConnectedComponents(\mathcal{H}_{c_L}=(\mathcal{V}_{c_L},\mathcal{E}_{c_L}))$
8. &nbsp;&nbsp; **for** $\mathcal{R}_{c_L,i}\in\mathcal{C}_{c_L}$ **do**
9. &nbsp;&nbsp;&nbsp;&nbsp; **if** $|\mathcal{V}_{\mathcal{R}_{c_L,i}}|\geq 3$ and $\{\deg(v):v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}\}\notin E_{deg}$ **then**
10. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $R_\mathcal{V}\leftarrow R_\mathcal{V}\cup\{\mathcal{V}_{\mathcal{R}_{c_L,i}}\}$; $R_\mathcal{E}\leftarrow R_\mathcal{E}\cup\mathcal{E}_{\mathcal{R}_{c_L,i}}$
11. &nbsp;&nbsp;&nbsp;&nbsp; **end**
12. &nbsp;&nbsp; **end**
13. **end**
14. **return** $(R_\mathcal{V},R_\mathcal{E})$
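For concreteness, here is a minimal Python sketch of Algorithm 2 (our own illustration; `gwl1` refers to the refinement sketch above, and connected components are computed with `networkx` on a graph that chains the nodes of each hyperedge):

```python
import networkx as nx

def find_symmetries(hyperedges, node_attrs, L):
    """Sketch of Algorithm 2: return (R_V, R_E) for L-GWL-1 symmetric regions."""
    deg = {v: 0 for v in node_attrs}
    for e in hyperedges:
        for v in e:
            deg[v] += 1
    E_deg = {tuple(sorted(deg[v] for v in e)) for e in hyperedges}  # line 1
    U, _ = gwl1(hyperedges, node_attrs, L)                          # line 2
    R_V, R_E = [], []
    for c in set(U.values()):                                       # line 5
        Vc = {v for v in U if U[v] == c}                            # line 6
        Ec = [e for e in hyperedges if all(u in Vc for u in e)]
        G = nx.Graph()                                              # line 7
        G.add_nodes_from(Vc)
        for e in Ec:
            G.add_edges_from(zip(e, e[1:]))  # chain nodes of e for connectivity
        for comp in nx.connected_components(G):                     # line 8
            degs = tuple(sorted(deg[v] for v in comp))
            if len(comp) >= 3 and degs not in E_deg:                # line 9
                R_V.append(set(comp))                               # line 10
                R_E += [e for e in Ec if set(e) <= comp]
    return R_V, R_E
```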
We also repeat here for convenience some definitions used in the proofs. Given a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, let

$$\mathcal{V}_{c_L}:=\{v\in\mathcal{V}:c_L=h^L_v(H)\} \tag{34}$$

be the set of nodes of the same class $c_L$ as determined by L-GWL-1. Let $\mathcal{H}_{c_L}$ be the induced subhypergraph of $\mathcal{H}$ on $\mathcal{V}_{c_L}$.

Definition B.9. An L-GWL-1 symmetric induced subhypergraph $\mathcal{R}\subset\mathcal{H}$ of $\mathcal{H}$ is a connected induced subhypergraph determined by $\mathcal{V}_\mathcal{R}\subset\mathcal{V}_\mathcal{H}$, some subset of nodes that are all indistinguishable amongst each other by L-GWL-1:

$$h^L_u(H)=h^L_v(H),\ \forall u,v\in\mathcal{V}_\mathcal{R} \tag{35}$$

When $L=\infty$, we call such an $\mathcal{R}$ a GWL-1 symmetric induced subhypergraph. Furthermore, if $\mathcal{R}=\mathcal{H}$, then we say $\mathcal{H}$ is GWL-1 symmetric.

Definition B.10. A neighborhood-regular hypergraph is a hypergraph where all neighborhoods of the nodes are isomorphic to each other.

Observation 4. A hypergraph $\mathcal{H}$ is GWL-1 symmetric if and only if it is L-GWL-1 symmetric for all $L\geq 1$, if and only if $\mathcal{H}$ is neighborhood regular.

Proof.

**1. First if and only if:** By Theorem B.8, a GWL-1 symmetric hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ means that for every pair of nodes $u,v\in\mathcal{V}$, $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$. This implies that for any $L\geq 1$, $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, by restricting the rooted isomorphism to 2L-hop rooted subtrees, which means that $h^L_u(H)=h^L_v(H)$. The converse is true since L is arbitrary: if there are no cycles, we can just take the isomorphism for the largest L; otherwise, an isomorphism can be constructed for $L=\infty$ by infinite extension.

**2. Second if and only if:** Let $p_{B_{\mathcal{V},\mathcal{E}}}$ be the universal covering map for $B_{\mathcal{V},\mathcal{E}}$. Denote by $\tilde{v},\tilde{u}$ the lifts of nodes $v,u\in\mathcal{V}$ by $p_{B_{\mathcal{V},\mathcal{E}}}$. Let $(\tilde{N}(\tilde{u}))_{\tilde{u}}$ be the rooted bipartite lift of $(N(u))_u$. If $\mathcal{H}$ is L-GWL-1 symmetric for all $L\geq 1$, then with L = 1, $(\tilde{B}^2_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{N}(\tilde{u}))_{\tilde{u}}\cong_c(\tilde{N}(\tilde{v}))_{\tilde{v}}\cong_c(\tilde{B}^2_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, iff $(N(u))_u\cong(N(v))_v$, $\forall u,v\in\mathcal{V}$, since N(u) and N(v) are cycle-less for any $u,v\in\mathcal{V}$. For the converse, assume all nodes $v\in\mathcal{V}$ have $(N(v))_v\cong(N^1)_x$ for some 1-hop rooted tree $(N^1)_x$ rooted at a node x, independent of any $v\in\mathcal{V}$. We prove by induction that for all $L\geq 1$ and for all $v\in\mathcal{V}$, $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cong_c(\tilde{N}^{2L})_x$ for a 2L-hop tree $(\tilde{N}^{2L})_x$ rooted at node x. Base case: L = 1 is by assumption. Inductive step: if $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cong_c(\tilde{N}^{2L})_x$, we can form $(\tilde{B}^{2L+2}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ by attaching $(\tilde{N}(\tilde{u}))_{\tilde{u}}$ to each node $\tilde{u}$ in the 2L-th layer of $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cong_c(\tilde{N}^{2L})_x$. Each $(\tilde{N}(\tilde{u}))_{\tilde{u}}$ is independent of the root $\tilde{v}$, since every $u\in\mathcal{V}$ has $(\tilde{N}(\tilde{u}))_{\tilde{u}}\cong_c(\tilde{N}^2)_x$ iff $(N(u))_u\cong(N^1)_x$ for an x independent of $u\in\mathcal{V}$. This means $(\tilde{B}^{2L+2}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cong_c(\tilde{N}^{2L+2})_x$ for the same root node x, where $(\tilde{N}^{2L+2})_x$ is constructed in the same manner as $(\tilde{B}^{2L+2}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, $\forall v\in\mathcal{V}$.

## B.3.1 Algorithm Guarantees

Continuing with the notation as before, let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ be a hypergraph with star expansion matrix H and let $(R_\mathcal{V},R_\mathcal{E})$ be the output of Algorithm 1 on $\mathcal{H}$ for $L\in\mathbb{N}^+$. Denote $\mathcal{C}_{c_L}$ as the set of all connected components of $\mathcal{H}_{c_L}$:

$$\mathcal{C}_{c_L}\triangleq\{C_{c_L}:\text{conn. comp. }C_{c_L}\text{ of }\mathcal{H}_{c_L}\} \tag{36}$$

If $L=\infty$, then drop the L. Thus, the hypergraphs represented by $(R_\mathcal{V},R_\mathcal{E})$ come from $\mathcal{C}_{c_L}$ for each $c_L$. Let

$$\hat{\mathcal{H}}_L\triangleq(\mathcal{V},\mathcal{E}\cup R_\mathcal{V}) \tag{37}$$

be $\mathcal{H}$ after adding all the hyperedges from $R_\mathcal{V}$, and let $\hat{H}_L$ be the star expansion matrix of the resulting hypergraph $\hat{\mathcal{H}}_L$. Let

$$\mathcal{G}_L\triangleq\{h^L_v(H):v\in\mathcal{V}\} \tag{38}$$

be the set of all L-GWL-1 values on $\mathcal{H}$. Let

$$\mathcal{V}_{c_L,s}\triangleq\{v\in\mathcal{V}_{c_L}:v\in\mathcal{R},\mathcal{R}\in\mathcal{C}_{c_L},|\mathcal{V}_\mathcal{R}|=s\} \tag{39}$$

be the set of all nodes of L-GWL-1 class $c_L$ belonging to a connected component of $s\geq 1$ nodes in $\mathcal{H}_{c_L}$, the induced subhypergraph of L-GWL-1. Let

$$\mathcal{S}_{c_L}\triangleq\{|\mathcal{V}_{\mathcal{R}_{c_L,i}}|:\mathcal{R}_{c_L,i}\in\mathcal{C}_{c_L}\} \tag{40}$$

be the set of node set sizes of the connected components in $\mathcal{H}_{c_L}$.

Proposition B.10. If $L=\infty$, for any GWL-1 node value c for $\mathcal{H}$, the connected component induced subhypergraphs $\mathcal{R}_{c,i}$, for $i=1...|\mathcal{C}_c|$, are GWL-1 symmetric and neighborhood-regular.

Proof. Let $p_{B_{\mathcal{V},\mathcal{E}}}$ be the universal covering map for $B_{\mathcal{V},\mathcal{E}}$. Denote by $\tilde{v},\tilde{u},\tilde{v}',\tilde{u}'$ the lifts of nodes $v,u,v',u'\in\mathcal{V}$ by $p_{B_{\mathcal{V},\mathcal{E}}}$. Let $L=\infty$ and let $\mathcal{H}_c=(\mathcal{V}_c,\mathcal{E}_c)$. For any i, since $u,v\in\mathcal{V}_c$, $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ for all $u,v\in\mathcal{V}_{\mathcal{R}_{c,i}}$. Since $\mathcal{R}_{c,i}$ is maximally connected, we know that every neighborhood $N_{\mathcal{H}_c}(u)$ for $u\in\mathcal{V}_c$ induced by $\mathcal{H}_c$ has $N_{\mathcal{H}_c}(u)\cong N(u)\cap\mathcal{H}_c$. Since $L=\infty$, we have that $N_{\mathcal{H}_c}(u)\cong N_{\mathcal{H}_c}(v)$, $\forall u,v\in\mathcal{V}_{\mathcal{R}_{c,i}}$: otherwise, WLOG there are $u',v'\in\mathcal{V}_{\mathcal{R}_{c,i}}$ with $N_{\mathcal{H}_c}(u')\not\cong N_{\mathcal{H}_c}(v')$; then WLOG there is some hyperedge $e\in\mathcal{E}_{N_{\mathcal{H}_c}(u')}$ with some $w\in e$, $w\neq u'$, where e cannot be in isomorphism with any $e'\in\mathcal{E}_{N_{\mathcal{H}_c}(v')}$. For two hyperedges to be in isomorphism means that their constituent nodes can be bijectively mapped to each other by a restriction of an isomorphism ϕ between $N_{\mathcal{H}_c}(u')$ and $N_{\mathcal{H}_c}(v')$ to one of the hyperedges. This means that $(\tilde{B}_{\mathcal{V}\setminus\{u'\},\mathcal{E}})_{\tilde{w}}$ is the rooted universal covering subtree centered about $\tilde{w}$ not passing through $\tilde{u}'$ that is connected to $\tilde{u}'\in(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}$ by e. However, v′ has no e, and thus cannot have a $T_x$ for $x\in\mathcal{V}((\tilde{N}(\tilde{v}'))_{\tilde{v}'})$ satisfying $T_x\cong_c(\tilde{B}_{\mathcal{V}\setminus\{u'\},\mathcal{E}})_{\tilde{w}}$ with x connected to $\tilde{v}'$ by a hyperedge e′ isomorphic to e in its neighborhood in $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$. This contradicts that $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}\cong_c(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$. We have thus shown that all nodes in $\mathcal{V}_c$ have isomorphic induced neighborhoods. By Observation 4, this is equivalent to saying that $\mathcal{R}_{c,i}$ is GWL-1 symmetric and neighborhood regular.

Definition B.11. A star graph $N_x$ is defined as a tree rooted at x of depth 1. The root x is the only node that can have degree more than 1.
Lemma B.11. If $L\in\mathbb{N}^+$ is small enough so that, after running Algorithm 1 with L, for any L-GWL-1 node class $c_L$ on $\mathcal{V}$, none of the discovered $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ are within L hyperedges away from any $\mathcal{V}_{\mathcal{R}_{c_L,j}}$ for all $i,j\in 1...|\mathcal{C}_{c_L}|$, $i\neq j$, then after forming $\hat{\mathcal{H}}_L$, the new L-GWL-1 node classes of the $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ for $i=1...|\mathcal{C}_{c_L}|$ in $\hat{\mathcal{H}}_L$ are all the same class $c'_L$ but are distinguishable from $c_L$ depending on $|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$.

Proof. After running Algorithm 1 on $\mathcal{H}=(\mathcal{V},\mathcal{E})$, let $\hat{\mathcal{H}}_L=(\hat{\mathcal{V}},\hat{\mathcal{E}}\triangleq\mathcal{E}\cup\bigcup_{c_L,i}\{\mathcal{V}_{\mathcal{R}_{c_L,i}}\})$ be the hypergraph formed by attaching a hyperedge to each $\mathcal{V}_{\mathcal{R}_{c_L,i}}$. For any $c_L$, an L-GWL-1 node class, let $\mathcal{R}_{c_L,i}$, $i=1...|\mathcal{C}_{c_L}|$, be a connected component subhypergraph of $\mathcal{H}_{c_L}$. Over all $(c_L,i)$ pairs, all the $\mathcal{R}_{c_L,i}$ are disconnected from each other, and for each $c_L$ each $\mathcal{R}_{c_L,i}$ is maximally connected on $\mathcal{H}_{c_L}$. Upon covering all the nodes $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ of each induced connected component subhypergraph $\mathcal{R}_{c_L,i}$ with a single hyperedge $e=\mathcal{V}_{\mathcal{R}_{c_L,i}}$ of size $s=|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$, we claim that every node of class $c_L$ becomes $c_{L,s}$, an L-GWL-1 node class depending on the original L-GWL-1 node class $c_L$ and the size s of the hyperedge.

Consider for each $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$ the 2L-hop rooted tree $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ for $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{v})=v$. Also, for each $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$, define the tree

$$T_e\triangleq(\tilde{B}^{2L-1}_{\hat{\mathcal{V}}\setminus\{v\},\hat{\mathcal{E}}})_{\tilde{e}} \tag{41}$$

We do not index the tree $T_e$ by v since it does not depend on $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$. We prove this in the following.

**Proof that $T_e$ does not depend on $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$:** Let node $\tilde{e}$ be the lift of e to $(\tilde{B}^{2L-1}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{e}}$. Define the star graph $(N(\tilde{e}))_{\tilde{e}}$ as the 1-hop neighborhood of $\tilde{e}$ in $(\tilde{B}^{2L-1}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{e}}$. We must have:

$$(\tilde{B}^{2L-1}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{e}}\cong_c\Big((N(\tilde{e}))_{\tilde{e}}\cup\bigcup_{\tilde{u}\in\mathcal{V}_{(N(\tilde{e}))_{\tilde{e}}}\setminus\{\tilde{e}\}}(\tilde{B}^{2L-2}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\Big)_{\tilde{e}} \tag{42}$$

Define for each node $v\in e$ with lift $\tilde{v}$:

$$(N(\tilde{e},\tilde{v}))_{\tilde{e}}\triangleq(\mathcal{V}_{(N(\tilde{e}))_{\tilde{e}}}\setminus\{\tilde{v}\},\ \mathcal{E}_{(N(\tilde{e}))_{\tilde{e}}}\setminus\{(\tilde{e},\tilde{v})\})_{\tilde{e}} \tag{43}$$

The tree $(N(\tilde{e},\tilde{v}))_{\tilde{e}}$ is the star graph with the node $\tilde{v}$ deleted from $(N(\tilde{e}))_{\tilde{e}}$. The star graphs $(N(\tilde{e},\tilde{v}))_{\tilde{e}}\subset(N(\tilde{e}))_{\tilde{e}}$ do not depend on $\tilde{v}$ as long as $\tilde{v}\in\mathcal{V}_{(N(\tilde{e}))_{\tilde{e}}}$. In other words,

$$(N(\tilde{e},\tilde{v}))_{\tilde{e}}\cong_c(N(\tilde{e},\tilde{v}'))_{\tilde{e}},\ \forall\tilde{v},\tilde{v}'\in\mathcal{V}_{(N(\tilde{e}))_{\tilde{e}}}\setminus\{\tilde{e}\} \tag{44}$$

Since the rooted tree $(\tilde{B}^{2L-1}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{e}}$, where $\tilde{e}$ is the lift of e by the universal covering map $p_{B_{\mathcal{V},\mathcal{E}}}$, has all pairs of nodes $\tilde{u},\tilde{u}'\in\tilde{e}$ in it with $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}$, this implies

$$(\tilde{B}^{2L-2}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\cong_c(\tilde{B}^{2L-2}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'},\ \forall\tilde{u},\tilde{u}'\in\tilde{e} \tag{45}$$

By Equations 45 and 44, we thus have:

$$(\tilde{B}^{2L-1}_{\hat{\mathcal{V}}\setminus\{v\},\hat{\mathcal{E}}})_{\tilde{e}}\cong_c\Big((N(\tilde{e},\tilde{v}))_{\tilde{e}}\cup\bigcup_{\tilde{u}\in\mathcal{V}_{(N(\tilde{e},\tilde{v}))_{\tilde{e}}}\setminus\{\tilde{e}\}}(\tilde{B}^{2L-2}_{\mathcal{V},\mathcal{E}})_{\tilde{u}}\Big)_{\tilde{e}} \tag{46}$$

This proves that $T_e$ does not need to be indexed by $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$. We continue with the proof that all nodes in $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ become the L-GWL-1 node class $c_{L,s}$ for $s=|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$.
Since every $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$ becomes connected to a hyperedge $e=\mathcal{V}_{\mathcal{R}_{c_L,i}}$ in $\hat{\mathcal{H}}$, we must have:

$$(\tilde{B}^{2L}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{v}}\cong_c((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cup_{(\tilde{v},\tilde{e})}T_e)_{\tilde{v}},\ \forall v\in\mathcal{V}_{\mathcal{R}_{c_L,i}} \tag{47}$$

The notation $((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}\cup_{(\tilde{v},\tilde{e})}T_e)_{\tilde{v}}$ denotes a tree rooted at $\tilde{v}$ that is the attachment of the tree $T_e$, rooted at $\tilde{e}$, to the node $\tilde{v}$ by the edge $(\tilde{v},\tilde{e})$. As usual, we assume $\tilde{v},\tilde{e}$ are the lifts of $v\in\mathcal{V}$ and $e\in\mathcal{E}$ respectively. We only need to consider the single e since L was chosen small enough that the 2L-hop tree $(\tilde{B}^{2L}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{v}}$ does not contain a node $\tilde{u}$ satisfying $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{u})=u$ with $u\in\mathcal{V}_{\mathcal{R}_{c_L,j}}$ for all $j=1...|\mathcal{C}_{c_L}|$, $j\neq i$. Since $T_e$ does not depend on $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$,

$$(\tilde{B}^{2L}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{u}}\cong_c(\tilde{B}^{2L}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{v}},\ \forall u,v\in\mathcal{V}_{\mathcal{R}_{c_L,i}} \tag{48}$$

This shows that $h^L_u(\hat{H})=h^L_v(\hat{H})$, $\forall u,v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$, by Theorem B.8. Furthermore, since each $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}\subset\hat{\mathcal{V}}$ in $\hat{\mathcal{H}}$ is now incident to a new hyperedge $e=\mathcal{V}_{\mathcal{R}_{c_L,i}}$, we must have that the L-GWL-1 class $c_L$ of $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ on $\mathcal{H}$ is now distinguishable by $|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$.

We will need the following definition to prove the next lemma.

Definition B.12. A partial universal cover of a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ with an unexpanded induced subhypergraph $\mathcal{R}$, denoted $U(\mathcal{H},\mathcal{R})_{\mathcal{V},\mathcal{E}}$, is a graph cover of $B_{\mathcal{V},\mathcal{E}}$ where we freeze $B_{\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}}\subset\tilde{B}_{\mathcal{V},\mathcal{E}}$ as an induced subgraph. An l-hop rooted partial universal cover of a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ with an unexpanded induced subhypergraph $\mathcal{R}$, denoted $(U^l(\mathcal{H},\mathcal{R})_{\mathcal{V},\mathcal{E}})_{\tilde{u}}$ for $u\in\mathcal{V}$ or $(U^l(\mathcal{H},\mathcal{R})_{\mathcal{V},\mathcal{E}})_{\tilde{e}}$ for $e\in\mathcal{E}$, where $\tilde{u},\tilde{e}$ are lifts of u, e, is a rooted graph cover of $B_{\mathcal{V},\mathcal{E}}$ where we freeze $B_{\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}}\subset\tilde{B}_{\mathcal{V},\mathcal{E}}$ as an induced subgraph.

Lemma B.12. Assume the same conditions as Lemma B.11, where $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is a hypergraph and, for all L-GWL-1 node classes $c_L$ with connected components $\mathcal{R}_{c_L,i}$ as discovered by Algorithm 1, $L\geq diam(\mathcal{R}_{c_L,i})$. Instead of only adding the hyperedges $\{\mathcal{V}_{\mathcal{R}_{c_L,i}}\}_{c_L,i}$ to $\mathcal{E}$ as stated in the main paper, let $\hat{\mathcal{H}}^\dagger\triangleq(\mathcal{V},(\mathcal{E}\setminus R_\mathcal{E})\cup R_\mathcal{V})$, meaning $\mathcal{H}$ with each $\mathcal{R}_{c_L,i}$ for $i=1...|\mathcal{C}_{c_L}|$ having all of its hyperedges dropped and with a single hyperedge that covers $\mathcal{V}_{\mathcal{R}_{c_L,i}}$, and let $\hat{\mathcal{H}}=(\mathcal{V},\mathcal{E}\cup R_\mathcal{V})$. Then: the GWL-1 node classes of the $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ for $i=1...|\mathcal{C}_{c_L}|$ in $\hat{\mathcal{H}}$ are all the same class $c'_L$ but are distinguishable from $c_L$ depending on $|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$.

Proof. For any $c_L$, an L-GWL-1 node class, let $\mathcal{R}_{c_L,i}$, $i=1...|\mathcal{C}_{c_L}|$, be a connected component subhypergraph of $\mathcal{H}_{c_L}$. These connected components are discovered by the algorithm. Over all $(c_L,i)$ pairs, all the $\mathcal{R}_{c_L,i}$ are disconnected from each other. Upon arbitrarily deleting all hyperedges in each such induced connected component subhypergraph $\mathcal{R}_{c_L,i}$ and adding a single hyperedge of size $s=|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$, we claim that every node of class $c_L$ becomes $c_{L,s}$, an L-GWL-1 node class depending on the original L-GWL-1 node class $c_L$ and the size s of the hyperedge. Define the subhypergraph made up of the disconnected components $\mathcal{R}_{c_L,i}$ as:

$$\mathcal{R}:=\bigcup_{c_L,i}\mathcal{R}_{c_L,i} \tag{49}$$

Since $L\geq diam(\mathcal{R}_{c_L,i})$, we can construct the 2L-hop rooted partial universal cover with unexpanded induced subhypergraph $\mathcal{R}$, denoted by $(U^{2L}(\mathcal{H},\mathcal{R})_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, $\forall v\in\mathcal{V}$ of $\mathcal{H}$, as given in Definition B.12.
Denote the hyperedge nodes, or right hand nodes, of the bipartite graph $B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})$ by $R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}))$. Their corresponding hyperedges are $\mathcal{E}_\mathcal{R}\subset\mathcal{E}(U(\mathcal{H},\mathcal{R}))\subset\mathcal{E}$. Since each $\mathcal{R}_{c_L,i}$ is maximally connected, for any nodes $u,v\in\mathcal{V}_\mathcal{R}$ we have:

$$(U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{u}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))_{\tilde{u}}\cong_c(U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{v}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))_{\tilde{v}} \tag{50}$$

by Proposition B.10, where $U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{v}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}))$ denotes removing the nodes $R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}))$ from $U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{v}}$. This follows since removing $R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}))$ removes an isomorphic neighborhood of hyperedges from each node in $\mathcal{V}_\mathcal{R}$. This requires assuming maximal connectedness of each $\mathcal{R}_{c_L,i}$. Upon adding the hyperedge

$$e_{c_L,i}\triangleq\mathcal{V}_{\mathcal{R}_{c_L,i}} \tag{51}$$

covering all of $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ after the deletion of $\mathcal{E}_{\mathcal{R}_{c_L,i}}$, for every $(c_L,i)$ pair we see that any node $u\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$ is connected to any other node $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$ through $e_{c_L,i}$ in the same way for all nodes $u,v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$. In fact, we claim that all the nodes in $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ still have the same GWL-1 class. We can write $\hat{\mathcal{H}}^\dagger$ equivalently as $(\mathcal{V},\bigcup_{c_L,i}(\mathcal{E}\setminus\mathcal{E}(\mathcal{R}_{c_L,i})\cup\{e_{c_L,i}\}))$, which is the hypergraph formed by the algorithm. The replacement operation on $\mathcal{H}$ can be viewed in the universal covering space $\tilde{B}_{\mathcal{V},\mathcal{E}}$ as taking $U(\mathcal{H},\mathcal{R})$ and replacing the frozen subgraph $B_{\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R}}$ with the star graphs $(N_{\hat{\mathcal{H}}^\dagger}(\tilde{e}_{c_L,i}))_{\tilde{e}_{c_L,i}}$ with root node $\tilde{e}_{c_L,i}$, determined by the hyperedge $e_{c_L,i}$ for each connected component indexed by $(c_L,i)$. Since the star graphs $(N_{\hat{\mathcal{H}}^\dagger}(\tilde{e}_{c_L,i}))_{\tilde{e}_{c_L,i}}$ are cycle-less, we have that:

$$(U(\mathcal{H},\mathcal{R})\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))\cup\bigcup_{c_L,i}(N_{\hat{\mathcal{H}}^\dagger}(\tilde{e}_{c_L,i}))_{\tilde{e}_{c_L,i}}\cong\tilde{B}_{\mathcal{V}_{\hat{\mathcal{H}}^\dagger},\mathcal{E}_{\hat{\mathcal{H}}^\dagger}} \tag{52}$$

Viewing Equation 52 locally, by our assumptions on L, for any $v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$ we must also have:

$$(U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{v}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))\cup(N_{\hat{\mathcal{H}}^\dagger}(\tilde{e}_{c_L,i}))_{\tilde{e}_{c_L,i}}\cong_c\tilde{B}_{\mathcal{V}_{\hat{\mathcal{H}}^\dagger},\mathcal{E}_{\hat{\mathcal{H}}^\dagger}} \tag{53}$$

We thus have $(\tilde{B}^{2L}_{\mathcal{V}_{\hat{\mathcal{H}}^\dagger},\mathcal{E}_{\hat{\mathcal{H}}^\dagger}})_{\tilde{u}}\cong_c(\tilde{B}^{2L}_{\mathcal{V}_{\hat{\mathcal{H}}^\dagger},\mathcal{E}_{\hat{\mathcal{H}}^\dagger}})_{\tilde{v}}$ for every $u,v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$, with $\tilde{u},\tilde{v}$ being the lifts of u, v by $p_{B_{\mathcal{V},\mathcal{E}}}$, since $(U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{u}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))_{\tilde{u}}\cong_c(U^{2L}(\mathcal{H},\mathcal{R})_{\tilde{v}}\setminus R(B(\mathcal{V}_\mathcal{R},\mathcal{E}_\mathcal{R})))_{\tilde{v}}$ for every $u,v\in\mathcal{V}_{\mathcal{R}_{c_L,i}}$, as in Equation 50. These rooted universal covers now depend on a new hyperedge $e_{c_L,i}$ and thus depend on its size s. This proves the claim that all the nodes in $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ retain the same L-GWL-1 node class upon changing $\mathcal{H}$ to $\hat{\mathcal{H}}^\dagger$, and that this new class is distinguishable by $s=|\mathcal{V}_{\mathcal{R}_{c_L,i}}|$. In other words, the new class can be determined by $c_s$. Furthermore, $c_{L,s}$ on the hyperedge $e_{c_L,i}$ cannot become the same class as an existing class, due to the algorithm.
Theorem B.13. Let $|\mathcal{V}|=n$, $L\in\mathbb{N}^+$. If $vol(v)\triangleq\sum_{e\in\mathcal{E}:e\ni v}|e|=O(\log^{\frac{1-\epsilon}{4L}}n)$, $\forall v\in\mathcal{V}$, for some constant $\epsilon>0$; $|\mathcal{S}_{c_L}|\leq S$, $\forall c_L\in\mathcal{G}_L$, for a constant S; and $|\mathcal{V}_{c_L,s}|=O(\frac{n^\epsilon}{\log^{1/2k}(n)})$, $\forall s\in\mathcal{S}_{c_L}$, then for $k\in\mathbb{N}^+$ and a k-tuple $C=(c_{L,1}...c_{L,k})$, $c_{L,i}\in\mathcal{G}_L$, $i=1..k$, there exist $\omega(n^{2k\epsilon})$ many pairs of k-node sets $S_1\not\simeq S_2$ such that $(h^L_u(H))_{u\in S_1}=(h^L_v(H))_{v\in S_2}=C$, as ordered k-tuples, while $h(S_1,\hat{H}_L)\neq h(S_2,\hat{H}_L)$, also by L steps of GWL-1.

Proof.

**1. Constructing forests from the rooted universal cover trees:** The first part of the proof is similar to the first part of the proof of Theorem 2 of Zhang et al. (2021). Consider an arbitrary node $v\in\mathcal{V}$ and denote the 2L-hop tree rooted at v from the universal cover as $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, as in Theorem B.8. As each node $v\in\mathcal{V}$ has volume $vol(v)=\sum_{v\in e}|e|=O(\log^{\frac{1-\epsilon}{4L}}n)$, every edge $e\in\mathcal{E}$ has $|e|=O(\log^{\frac{1-\epsilon}{4L}}n)$ and every $v\in\mathcal{V}$ has $\deg(v)=O(\log^{\frac{1-\epsilon}{4L}}n)$, so every node in $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ has degree $d=O(\log^{\frac{1-\epsilon}{4L}}n)$. Thus, the number of nodes in $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, denoted $|\mathcal{V}((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}})|$, satisfies $|\mathcal{V}((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}})|\leq\sum_{i=0}^{2L}d^i=O(d^{2L})=O(\log^{\frac{1-\epsilon}{2}}n)$. We set $K\triangleq\max_{v\in\mathcal{V}}|\mathcal{V}((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}})|$ as the maximum number of nodes of $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$, and thus $K=O(\log^{\frac{1-\epsilon}{2}}n)$. For all $v\in\mathcal{V}$, expand the trees $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ by adding $K-|\mathcal{V}((\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}})|$ independent nodes. Then all the $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ have the same number of nodes, which is K, becoming forests instead of trees.

**2. Counting $|\mathcal{G}_L|$:** Next, we consider the number of non-isomorphic forests over K nodes. The number of non-isomorphic graphs over K nodes is bounded by $2^{\binom{K}{2}}=\exp(O(\log^{1-\epsilon}n))=o(n^{1-\epsilon})$. Therefore, by the pigeonhole principle, there exist $\frac{n}{o(n^{1-\epsilon})}=\omega(n^\epsilon)$ many nodes v whose $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{v}}$ are isomorphic to each other. Denote $\mathcal{G}_L$ as the set of all L-GWL-1 values. Denote the set of these nodes as $\mathcal{V}_{c_L}$, which consists of nodes whose L-GWL-1 values are all the same value $c_L\in\mathcal{G}_L$ after L iterations of GWL-1, by Theorem B.8. For a fixed L, the sets $\mathcal{V}_{c_L}$ form a partition of $\mathcal{V}$; in other words, $\bigsqcup_{c_L\in\mathcal{G}_L}\mathcal{V}_{c_L}=\mathcal{V}$. Next, we focus on k-sets of nodes that are not equivalent by GWL-1. For any $c_L\in\mathcal{G}_L$, there is a partition $\mathcal{V}_{c_L}=\bigsqcup_s\mathcal{V}_{c_L,s}$, where $\mathcal{V}_{c_L,s}$ is the set of nodes all of which have L-GWL-1 class $c_L$ and belong to a connected component of size s in $\mathcal{H}_{c_L}$. Let $\mathcal{S}_{c_L}\triangleq\{|\mathcal{V}_{\mathcal{R}_{c_L,i}}|:\mathcal{R}_{c_L,i}\in\mathcal{C}_{c_L}\}$ denote the set of sizes $s\geq 1$ of connected component node sets of $\mathcal{H}_{c_L}$. We know that $|\mathcal{S}_{c_L}|\leq S$, where S is independent of n.

**3. Computing the lower bound:** Let Y denote the number of pairs of k-node sets $S_1\not\simeq S_2$ such that $(h^L_u(H))_{u\in S_1}=(h^L_v(H))_{v\in S_2}=C=(c_{L,1}...c_{L,k})$, as ordered tuples, from L steps of GWL-1. If any pair of nodes u, v has the same L-GWL-1 value $c_L$, then they become distinguishable by the size of the connected component in $\mathcal{H}_{c_L}$ that they belong to. We can therefore lower bound Y by counting over all pairs of k-tuples of nodes $((u_1...u_k),(v_1...v_k))\in(\prod_{i=1}^k\mathcal{V}_{c_{L,i}})\times(\prod_{i=1}^k\mathcal{V}_{c_{L,i}})$ that both have L-GWL-1 values $(c_{L,1}...c_{L,k})$, where there is at least one $i\in\{1..k\}$ for which $u_i$ and $v_i$ belong to different sized connected components $s_i,s'_i\in\mathcal{S}_{c_{L,i}}$ with $s_i\neq s'_i$:

$$Y\geq\frac{1}{k!}\Bigg[\sum_{((s_i)_{i=1}^k,(s'_i)_{i=1}^k)\in[\prod_{i=1}^k\mathcal{S}_{c_{L,i}}]^2:\ (s_i)_{i=1}^k\neq(s'_i)_{i=1}^k}\ \prod_{i=1}^k|\mathcal{V}_{(c_{L,i}),s_i}||\mathcal{V}_{(c_{L,i}),s'_i}|\Bigg] \tag{54a}$$

$$=\frac{1}{k!}\Bigg[\prod_{i=1}^k\Big(\sum_{s_i\in\mathcal{S}_{c_{L,i}}}|\mathcal{V}_{(c_{L,i}),s_i}|\Big)^2-\sum_{(s_i)_{i=1}^k\in\prod_{i=1}^k\mathcal{S}_{c_{L,i}}}\Big(\prod_{i=1}^k|\mathcal{V}_{(c_{L,i}),s_i}|^2\Big)\Bigg] \tag{54b}$$

Using the fact that for each $i\in\{1...k\}$, $|\mathcal{V}_{c_{L,i}}|=\sum_{s_i\in\mathcal{S}_{c_{L,i}}}|\mathcal{V}_{(c_{L,i}),s_i}|$, and by the assumption $|\mathcal{V}_{(c_{L,i}),s_i}|=O(\frac{n^\epsilon}{\log^{1/2k}n})$ for any $s_i\in\mathcal{S}_{c_{L,i}}$, we have:
$$Y\geq\omega(n^{2k\epsilon})-O\Big(|S|^k\frac{n^{2k\epsilon}}{\log n}\Big)=\omega(n^{2k\epsilon}) \tag{55}$$

For the following proof, we will denote $\cong_\mathcal{H}$ as a node or hypergraph isomorphism with respect to a hypergraph $\mathcal{H}$.

Theorem B.14 (Invariance and Expressivity). If $L=\infty$, GWL-1 enhanced by Algorithm 1 is still invariant to the node isomorphism classes of $\mathcal{H}$ and can be strictly more expressive than GWL-1 at determining node isomorphism classes.

Proof.

**1. Expressivity:** Let $L\in\mathbb{N}^+$ be arbitrary. We first prove that L-GWL-1 enhanced by Algorithm 1 is strictly more expressive for node distinguishing than L-GWL-1 on some hypergraph(s). Let $C^3_4$ and $C^3_5$ be the two 3-regular hypergraphs from Figure 2. Let $\mathcal{H}=C^3_4\bigsqcup C^3_5$ be the disjoint union of the two regular hypergraphs. L iterations of GWL-1 will assign the same node class to all of $\mathcal{V}_\mathcal{H}$. These two subhypergraphs can be distinguished by L-GWL-1 for $L\geq 1$ after editing the hypergraph $\mathcal{H}$ with the output of Algorithm 1, so that it becomes $\hat{\mathcal{H}}=\hat{C}^3_4\cup\hat{C}^3_5$. This is all shown in Figure 2. Since L was arbitrary, this is true for $L=\infty$.

**2. Invariance:** For any hypergraph $\mathcal{H}$, let $\hat{\mathcal{H}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ be $\mathcal{H}$ modified by the output of Algorithm 1 by adding hyperedges to the $\mathcal{V}_{\mathcal{R}_{c,i}}$. GWL-1 remains invariant to the node isomorphism classes of $\mathcal{H}$ on $\hat{\mathcal{H}}$.

**a. Case 1:** Let $L\in\mathbb{N}^+$ be arbitrary. For any node u with L-GWL-1 class c changed to $c_s$ in $\hat{\mathcal{H}}$, if $u\cong_\mathcal{H}v$ for any $v\in\mathcal{V}$, then the GWL-1 class of v must also be $c_s$; in other words, both u and v belong to s-sized connected components in $\mathcal{H}_c$. We prove this by contradiction. Say u belongs to an L-GWL-1 symmetric induced subhypergraph $\mathcal{S}$ with $|\mathcal{V}_\mathcal{S}|=s$. Say v is originally of L-GWL-1 class c and changes to L-GWL-1 class $c_{s'}$ for $s'<s$ on $\hat{\mathcal{H}}$, WLOG. If this is the case, then v belongs to an L-GWL-1 symmetric induced subhypergraph $\mathcal{S}'$ with $|\mathcal{V}_{\mathcal{S}'}|=s'$. Since there is a $\pi\in Aut(\mathcal{H})$ with $\pi(u)=v$ and since $s'<s$, by the pigeonhole principle some node $w\in\mathcal{V}_\mathcal{S}$ must have $\pi(w)\notin\mathcal{V}_{\mathcal{S}'}$. Since $\mathcal{S}$ and $\mathcal{S}'$ are maximally connected, $\pi(w)$ cannot share the same L-GWL-1 class as w. Thus, it must be that $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\widetilde{\pi(w)}}\not\cong_c(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{w}}$, where $\tilde{w},\widetilde{\pi(w)}$ are the lifts of $w,\pi(w)$ by the universal covering map $p_{B_{\mathcal{V},\mathcal{E}}}$. However, w and π(w) both belong to L-GWL-1 class c in $\mathcal{H}$, which by Theorem B.8 means $(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\widetilde{\pi(w)}}\cong_c(\tilde{B}^{2L}_{\mathcal{V},\mathcal{E}})_{\tilde{w}}$, a contradiction. The argument for when v does not change its class c after the algorithm follows by noticing that $c\neq c_s$ as GWL-1 node classes of u and v implies $u\not\cong_\mathcal{H}v$, once again by the contrapositive of Theorem B.8. Since L was arbitrary, this is true for $L=\infty$.

**b. Case 2:** Now assume $L=\infty$. Let $p_{B_{\mathcal{V},\mathcal{E}}}$ be the universal covering map of $B_{\mathcal{V},\mathcal{E}}$. For all other nodes $u'\cong_\mathcal{H}v'$, $u',v'\in\mathcal{V}$, unaffected by the replacement, meaning they do not belong to any $\mathcal{R}_{c,i}$ discovered by the algorithm: if the rooted universal covering tree rooted at node $\tilde{u}'$ connects to a node $\tilde{w}$ in l hops in $(\tilde{B}^l_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}$, where $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{u}')=u'$, $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{w})=w$ and where w has any class c in $\mathcal{H}$, then $\tilde{v}'$ must also connect to a node $\tilde{z}$ in l hops in $(\tilde{B}^l_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$, where $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{z})=z$ and $w\cong_\mathcal{H}z$. Furthermore, if w becomes class $c_s$ in $\hat{\mathcal{H}}$ due to the algorithm, then z also becomes class $c_s$ in $\hat{\mathcal{H}}$. This will follow by the previous result on isomorphic w and z, both of class c, with w becoming class $c_s$ in $\hat{\mathcal{H}}$.
Since $L=\infty$: for any $w\in\mathcal{V}$ connected by some path of hyperedges to $u'\in\mathcal{V}$, consider the smallest l for which $(\tilde{B}^l_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}$, the l-hop universal covering tree of $\mathcal{H}$ rooted at $\tilde{u}'$, the lift of u′, contains the lift $\tilde{w}$ of $w\in\mathcal{V}$ with GWL-1 node class c at layer l. Since $u'\cong_\mathcal{H}v'$ by π, we can use π to find $z=\pi(w)$. We claim that $\tilde{z}$ is l hops away from $\tilde{v}'$. Since $u'\cong_\mathcal{H}v'$ due to some $\pi\in Aut(\mathcal{H})$ with $\pi(u')=v'$, using Proposition B.2 for singleton nodes and by Theorem B.8 we must have $(\tilde{B}^l_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}\cong_c(\tilde{B}^l_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$ as isomorphic rooted universal covering trees, due to an induced isomorphism $\tilde{\pi}$ of π, where we define the induced isomorphism $\tilde{\pi}:(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}\to(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$ between the rooted universal covers $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{u}'}$ and $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$ for $\tilde{u}',\tilde{v}'\in\mathcal{V}(\tilde{B}_{\mathcal{V},\mathcal{E}})$ by $\tilde{\pi}(\tilde{a})=\tilde{b}$ if $\pi(a)=b$, $\forall a,b\in\mathcal{V}(B_{\mathcal{V},\mathcal{E}})$ connected to u′ and v′ respectively, with $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{a})=a$, $p_{B_{\mathcal{V},\mathcal{E}}}(\tilde{b})=b$. Since l is the shortest path distance from $\tilde{u}'$ to $\tilde{w}$, there must exist some shortest (as defined by the path length in $B_{\mathcal{V},\mathcal{E}}$) path P of hyperedges from u′ to w with no cycles. Using π, we must map P to another acyclic shortest path of the same length from v′ to z. This path corresponds to an l-length shortest path from $\tilde{v}'$ to $\tilde{z}$ in $(\tilde{B}_{\mathcal{V},\mathcal{E}})_{\tilde{v}'}$. If w has a GWL-1 class c in $\mathcal{H}$ that is not affected by the algorithm, then z also has GWL-1 class c in $\mathcal{H}$ since $w\cong_\mathcal{H}z$. If w has class c and becomes $c_s$ in $\hat{\mathcal{H}}$, then by the previous result, since $w\cong_\mathcal{H}z$, we must have the GWL-1 classes c′ and c″ of w and z in $\hat{\mathcal{H}}$ both equal to $c_s$. The node w connected to u′ was arbitrary, and so both $\tilde{w}$ and the isomorphism-induced $\tilde{z}$ are l hops away from $\tilde{u}'$ and $\tilde{v}'$ respectively, with the same GWL-1 class c′ in $\hat{\mathcal{H}}$; thus $(\tilde{B}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{u}'}\cong_c(\tilde{B}_{\hat{\mathcal{V}},\hat{\mathcal{E}}})_{\tilde{v}'}$. We have thus shown that if $u\cong_\mathcal{H}v$ for $u,v\in\mathcal{H}$, then in $\hat{\mathcal{H}}$ we have $h^L_u(\hat{H})=h^L_v(\hat{H})$, using the duality between universal covers and GWL-1 from Theorem B.8.

Proposition B.15 (Complexity). Let H be the star expansion matrix of $\mathcal{H}$. Algorithm 1 runs in time $O(L\cdot nnz(H)+(n+m))$, the size of the input hypergraph when viewing L as constant, where n is the number of nodes, $nnz(H)=vol(\mathcal{V})\triangleq\sum_{v\in\mathcal{V}}\deg(v)$, and m is the number of hyperedges.

Proof. Computing $E_{deg}$, which requires computing the degrees of all the nodes in each hyperedge, takes time $O(nnz(H))$. The set $E_{deg}$ can be stored as a hashset data structure; constructing it takes $O(nnz(H))$. Computing GWL-1 takes $O(L\cdot nnz(H))$ time, assuming a constant number of iterations L. Constructing the bipartite graph for H takes time $O(nnz(H)+n+m)$, since it is an information preserving data structure change. Define for each $c\in C$: $n_c:=|\mathcal{V}_c|$ and $m_c:=|\mathcal{E}_c|$. Since the classes partition $\mathcal{V}$, we must have:

$$n=\sum_{c\in C}n_c;\quad m=\sum_{c\in C}m_c;\quad nnz(H)=\sum_{c\in C}nnz(H_c) \tag{56}$$

where $H_c$ is the star expansion matrix of $\mathcal{H}_c$. Extracting the subgraphs can be implemented as a masking operation on the nodes, taking time $O(n_c)$ to form $\mathcal{V}_c$, followed by searching over the neighbors of $\mathcal{V}_c$ in time $O(m_c)$ to construct $\mathcal{E}_c$. Computing the connected components of $\mathcal{H}_c$ for a predicted node class c takes time $O(n_c+m_c+nnz(H_c))$. Iterating over each connected component for a given c and extracting its nodes and hyperedges takes time $O(n_{c_i}+m_{c_i})$, where $n_c=\sum_i n_{c_i}$ and $m_c=\sum_i m_{c_i}$. Checking that a connected component has size at least 3 takes O(1) time. Computing the degree on $\mathcal{H}$ for all nodes in the connected component takes time $O(n_{c_i})$, since computing a single degree takes O(1) time.
Checking that the set of node degrees of the connected component does not belong to $E_{deg}$ can be implemented as a check that the hash of the set of degrees is not in the hashset data structure for $E_{deg}$. Adding up all the time complexities, we get that the total complexity is:

$$O(nnz(H))+O(nnz(H)+n+m)+\sum_{c\in C}\Big(O(n_c+m_c+nnz(H_c))+\sum_{\text{conn. comp. }i\text{ of }c}O(n_{c_i}+m_{c_i})\Big) \tag{57a}$$
$$=O(nnz(H)+n+m)+\sum_{c\in C}\big(O(n_c+m_c+nnz(H_c))+O(n_c+m_c)\big) \tag{57b}$$
$$=O(nnz(H)+n+m) \tag{57c}$$

Proposition B.16. For a connected hypergraph $\mathcal{H}$, let $(R_\mathcal{V},R_\mathcal{E})$ be the output of Algorithm 1 on $\mathcal{H}$. Then there are Bernoulli probabilities p, $q_i$ for $i=1...|R_\mathcal{V}|$ for attaching a covering hyperedge so that $\hat{\pi}$ is an unbiased estimator of π.

Proof. Let $\mathcal{C}_{c_L}=\{\mathcal{R}_{c_L,i}\}_i$ be the maximally connected components induced by the vertices with L-GWL-1 values $c_L$. The set of vertex sets $\{\mathcal{V}(\mathcal{R}_{c_L,i})\}$ and the set of all hyperedges $\cup_i\{\mathcal{E}(\mathcal{R}_{c_L,i})\}$ over all the connected components $\mathcal{R}_{c_L,i}$ for $i=1...|\mathcal{C}_{c_L}|$ form the pair $(R_\mathcal{V},R_\mathcal{E})$. For a hypergraph random walk on connected $\mathcal{H}=(\mathcal{V},\mathcal{E})$, its stationary distribution π on $\mathcal{V}$ is given by the closed form:

$$\pi(v)=\frac{\deg(v)}{\sum_{u\in\mathcal{V}}\deg(u)} \tag{58}$$

for $v\in\mathcal{V}$. Let $\hat{\mathcal{H}}=(\mathcal{V},\hat{\mathcal{E}})$ be the random hypergraph as determined by p and $q_i$ for $i=1...|R_\mathcal{V}|$. These probabilities determine $\hat{\mathcal{H}}$ by the following operations on the hypergraph $\mathcal{H}$:

- Attaching a single hyperedge that covers $\mathcal{V}_{\mathcal{R}_{c_L,i}}$ with probability $q_i$, and not attaching it with probability $1-q_i$.
- All the hyperedges in $\mathcal{R}_{c_L,i}$ are dropped/kept with probability p and $1-p$ respectively.

**1. Setup:** Let $\deg(v)\triangleq|\{e:e\ni v\}|$ for $v\in\mathcal{V}(\mathcal{H})$ and $\deg(\mathcal{S})\triangleq\sum_{u\in\mathcal{V}(\mathcal{S})}\deg(u)$ for a subhypergraph $\mathcal{S}\subset\mathcal{H}$. Let

$$Bernoulli(p)\triangleq\begin{cases}1&\text{prob. }p\\0&\text{prob. }1-p\end{cases} \tag{59}$$

and

$$Binom(n,p)\triangleq\sum_{i=1}^n Bernoulli(p) \tag{60}$$

Define for each $v\in\mathcal{V}$, C(v) to be the unique $\mathcal{R}_{c_L,i}$ where $v\in\mathcal{R}_{c_L,i}$. This means that we have the following independent random variables:

$$X_e\triangleq Bernoulli(1-p),\ \forall e\in\mathcal{E}\ \text{(i.i.d. across all }e\in\mathcal{E}\text{)} \tag{61a}$$
$$X_{C(v)}\triangleq Bernoulli(q_i) \tag{61b}$$

as well as the following constant, depending only on C(v):

$$m_{C(v)}\triangleq\sum_{u\in\mathcal{V}\setminus C(v)}\deg(u) \tag{62}$$

where $C(v)\subset\mathcal{V}$, $\forall v\in\mathcal{V}$. Let $\hat{\pi}$ be the stationary distribution of $\hat{\mathcal{H}}$. It can be written as:

$$\hat{\pi}(v)\triangleq\frac{\sum_{e\ni v}X_e+X_{C(v)}}{m_{C(v)}+\sum_{e\ni u:u\in C(v)}X_e+X_{C(v)}} \tag{63}$$

Letting

$$N_v\triangleq\sum_{e\ni v}X_e=Binom(\deg(v),1-p) \tag{64a}$$
$$N\triangleq N_v+X_{C(v)} \tag{64b}$$
$$D\triangleq m_{C(v)}+\sum_{e\ni u:u\in C(v)}X_e+X_{C(v)} \tag{64c}$$
$$C\triangleq D-\Big(\sum_{e\ni v}|e|\Big)N_v-m_{C(v)}=\sum_{e\ni u:u\in C(v),v\notin e}X_e-m_{C(v)}=Binom(F_v,1-p)-m_{C(v)} \tag{64d}$$

where $F_v\triangleq|\{e\ni u:v\notin e,u\in C(v)\}|$, we have $\hat{\pi}(v)=\frac{N}{D}$. We also have the joint independence $N_v\perp X_{C(v)}\perp C$, due to the fact that each random variable describes disjoint hyperedge sets.
**2. Computing the expectation:** Writing out the expectation with conditioning on the joint distribution $P(D,N_v,X_{C(v)})$, we have:

$$\mathbb{E}[\hat{\pi}(v)]=\sum_{i=0}^1\sum_{j=0}^{\deg(v)}\sum_{k=m_{C(v)}}^{\deg(C(v))}\mathbb{E}[\hat{\pi}(v)\mid D=k,N_v=j,X_{C(v)}=i]\,P(D=k,N_v=j,X_{C(v)}=i) \tag{65a}$$
$$=\sum_{i=0}^1\sum_{j=0}^{\deg(v)}\sum_{k=m_{C(v)}}^{\deg(C(v))}\frac{j+i}{k}\,P(D=k,N_v=j,X_{C(v)}=i) \tag{65b}$$
$$=\sum_{i=0}^1\sum_{j=0}^{\deg(v)}\sum_{k=m_{C(v)}}^{\deg(C(v))}\frac{j+i}{k}\,P(D=k)\,P(N_v=j)\,P(X_{C(v)}=i)\quad\text{(joint independence)} \tag{65c}$$
$$=\sum_{i=0}^1\sum_{j=0}^{\deg(v)}\sum_{k=m_{C(v)}}^{\deg(C(v))}\frac{j+i}{k}\,P(C=k-\deg(v)j-m_{C(v)})\,P(Binom(\deg(v),1-p)=j)\,P(Bernoulli(q)=i) \tag{65d}$$
$$=\sum_{j=0}^{\deg(v)}\sum_{k=m_{C(v)}}^{\deg(C(v))}\binom{F_v}{k-\deg(v)j-m_{C(v)}}(1-p)^{k-\deg(v)j-m_{C(v)}}p^{F_v-(k-\deg(v)j-m_{C(v)})}\binom{\deg(v)}{j}(1-p)^jp^{\deg(v)-j}\Big[(1-q)\frac{j}{k}+q\frac{j+1}{k}\Big] \tag{65e}$$
$$\triangleq C_1(p)+qC_2(p) \tag{65f}$$

**3. Pick p and q:** We want to find p and q so that $\mathbb{E}[\hat{\pi}(v)]=C_1(p)+qC_2(p)=\pi(v)$. We know that for a given $p\in[0,1]$, we must have:

$$q=\frac{\pi(v)-C_1(p)}{C_2(p)} \tag{66}$$

In order for $q\in[0,1]$, we must have $\pi(v)\geq C_1(p)$ and $\pi(v)-C_1(p)\leq C_2(p)$.

**a. Pick p sufficiently large:** Notice that

$$0\leq C_1(p)\leq O\Big(\mathbb{E}\Big[\frac{1}{Binom(F_v,1-p)+m_{C(v)}}\Big]\cdot\mathbb{E}[Binom(\deg(v),1-p)]\Big)=O\Big(\frac{1}{m_{C(v)}}\deg(v)(1-p)\Big) \tag{67}$$

and that

$$0\leq C_1(p)\leq C_2(p) \tag{68}$$

for $p\in[0,1]$ sufficiently large. This is because

$$C_1(p)\leq O\Big(\frac{1}{m_{C(v)}}\deg(v)(1-p)\Big) \tag{69}$$

and

$$\Omega\Big(\frac{1}{m_{C(v)}}\deg(v)(1-p)\Big)\leq C_2(p) \tag{70}$$

Piecing these two inequalities together gives the desired inequality 68. We can then pick a $p\in[0,1]$ even larger than the previous p so that, for the $C'>0$ which gives $C_1(p)\leq\frac{C'}{m_{C(v)}}\deg(v)(1-p)$, we achieve

$$C_1(p)\leq\frac{C'}{m_{C(v)}}\deg(v)(1-p)<\pi(v)=\frac{\deg(v)}{m_{C(v)}+\sum_{u\in C(v)}\deg(u)} \tag{71}$$

We then have that there exists an $s>1$ so that

$$sC_1(p)=\pi(v) \tag{72}$$

Using this relationship, we can then prove that for a sufficiently large $p\in[0,1]$ we must have $q\in[0,1]$.

**b. $p\in[0,1]$ sufficiently large implies $q\geq 0$:** We have $q\geq 0$ since its numerator is nonnegative:

$$\pi(v)-C_1(p)=(s-1)C_1(p)\geq 0\Rightarrow q\geq 0 \tag{73}$$

**c. $p\in[0,1]$ sufficiently large implies $q\leq 1$:**

$$\pi(v)-C_1(p)=sC_1(p)-C_1(p)=(s-1)C_1(p)\leq C_2(p)\Rightarrow q\leq 1 \tag{74}$$
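As a sanity check on the closed form of Equation 58, the following small simulation (our sketch, assuming `numpy`) compares the empirical visit frequencies of the hypergraph random walk with $\deg(v)/\sum_u\deg(u)$:

```python
import numpy as np

def walk_stationary(hyperedges, n_nodes, steps=200_000, seed=0):
    """Empirical stationary distribution of the hypergraph random walk:
    from node v pick an incident hyperedge uniformly, then a node in it."""
    rng = np.random.default_rng(seed)
    incident = [[] for _ in range(n_nodes)]
    for e in hyperedges:
        for v in e:
            incident[v].append(e)
    counts = np.zeros(n_nodes)
    v = 0
    for _ in range(steps):
        e = incident[v][rng.integers(len(incident[v]))]
        v = e[rng.integers(len(e))]
        counts[v] += 1
    return counts / counts.sum()

edges = [(0, 1, 2), (2, 3), (1, 2, 3)]
deg = np.array([1.0, 2.0, 3.0, 2.0])  # deg(v) = number of incident hyperedges
print(walk_stationary(edges, 4))      # approaches deg / deg.sum()
print(deg / deg.sum())
```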
In Figure 5b we sample hypergraphs with 3 communities where every node in a community has a weight of 1 to stay in its community and a weight of 0.2 to move out to any other community. We plot the boxplots as a function of an increasing number of nodes. We notice that the more communication there is between communities, the more spread there is in the possible connected component sizes as the number of nodes grows. Isolated communities make for predictable clusters/connected components.

![Figure 5a](39_image_0.png)
![Figure 5b](39_image_1.png)

Figure 5: (a) Boxplot of the sizes of the connected components with equal GWL-1 node values from the Hy-MMSBM sampling algorithm where there are three independent communities. (b) Boxplot of the sizes of the connected components with equal GWL-1 node values from the Hy-MMSBM sampling algorithm where any two of the three communities can communicate.

## C.1 Additional Experiments On Hypergraphs

In Table 3, we show the PR-AUC scores for four additional hypergraph datasets, cat-edge-Brain, cat-edge-vegas-bar-reviews, WikiPeople-0bi, and JF17K, for predicting size-3 hyperedges.

(a) cat-edge-Brain

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|---|---|---|---|
| HGNN | 0.75 ± 0.01 | 0.79 ± 0.11 | 0.74 ± 0.09 |
| HGNNP | 0.75 ± 0.05 | 0.78 ± 0.10 | 0.74 ± 0.12 |
| HNHN | 0.74 ± 0.04 | 0.74 ± 0.02 | 0.74 ± 0.05 |
| HyperGCN | 0.74 ± 0.09 | 0.50 ± 0.07 | 0.50 ± 0.12 |
| UniGAT | 0.73 ± 0.07 | 0.81 ± 0.10 | 0.81 ± 0.09 |
| UniGCN | 0.78 ± 0.04 | 0.81 ± 0.09 | 0.71 ± 0.08 |
| UniGIN | 0.74 ± 0.09 | 0.74 ± 0.03 | 0.74 ± 0.07 |
| UniSAGE | 0.74 ± 0.03 | 0.74 ± 0.12 | 0.74 ± 0.01 |

(b) cat-edge-vegas-bar-reviews

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|---|---|---|---|
| HGNN | 0.95 ± 0.10 | 0.99 ± 0.04 | 0.96 ± 0.09 |
| HGNNP | 0.95 ± 0.06 | 0.96 ± 0.09 | 0.96 ± 0.08 |
| HNHN | 1.00 ± 0.08 | 0.99 ± 0.09 | 0.95 ± 0.10 |
| HyperGCN | 0.76 ± 0.03 | 0.67 ± 0.14 | 0.68 ± 0.09 |
| UniGAT | 0.87 ± 0.07 | 1.00 ± 0.09 | 0.99 ± 0.08 |
| UniGCN | 0.99 ± 0.07 | 0.96 ± 0.09 | 0.92 ± 0.05 |
| UniGIN | 0.98 ± 0.06 | 0.96 ± 0.08 | 0.95 ± 0.06 |
| UniSAGE | 0.94 ± 0.05 | 0.98 ± 0.07 | 0.97 ± 0.07 |

(c) WikiPeople-0bi

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|---|---|---|---|
| HGNN | 0.52 ± 0.01 | 0.57 ± 0.08 | 0.54 ± 0.10 |
| HGNNP | 0.52 ± 0.03 | 0.54 ± 0.07 | 0.54 ± 0.06 |
| HNHN | 0.73 ± 0.03 | 0.73 ± 0.07 | 0.73 ± 0.00 |
| HyperGCN | 0.54 ± 0.05 | 0.55 ± 0.02 | 0.49 ± 0.10 |
| UniGAT | 0.49 ± 0.09 | 0.54 ± 0.04 | 0.53 ± 0.04 |
| UniGCN | 0.46 ± 0.08 | 0.68 ± 0.08 | 0.51 ± 0.08 |
| UniGIN | 0.73 ± 0.09 | 0.73 ± 0.01 | 0.73 ± 0.02 |
| UniSAGE | 0.73 ± 0.06 | 0.73 ± 0.02 | 0.73 ± 0.08 |

(d) JF17K

| PR-AUC ↑ | Baseline | Ours | Baseln.+edrop |
|---|---|---|---|
| HGNN | 0.59 ± 0.04 | 0.63 ± 0.04 | 0.45 ± 0.09 |
| HGNNP | 0.71 ± 0.07 | 0.63 ± 0.07 | 0.57 ± 0.04 |
| HNHN | 0.73 ± 0.04 | 0.73 ± 0.03 | 0.73 ± 0.04 |
| HyperGCN | 0.59 ± 0.05 | 0.58 ± 0.09 | 0.48 ± 0.01 |
| UniGAT | 0.61 ± 0.07 | 0.61 ± 0.04 | 0.51 ± 0.08 |
| UniGCN | 0.58 ± 0.00 | 0.60 ± 0.03 | 0.59 ± 0.02 |
| UniGIN | 0.80 ± 0.04 | 0.77 ± 0.08 | 0.75 ± 0.05 |
| UniSAGE | 0.79 ± 0.02 | 0.77 ± 0.08 | 0.74 ± 0.01 |

Table 3: PR-AUC on four other hypergraph datasets. The top average scores for each hyperGNN method, or row, are colored. Red scores denote the top scores in a row. Orange scores denote a two-way tie and brown scores denote a three-way tie.
## C.2 Experiments On Graph Data

We show in Tables 4, 5, 6 the PR-AUC test scores for link prediction on some non-attributed graph datasets. The train-val-test splits are predefined for FB15k-237; for the other graph datasets, a single graph is deterministically split 80/5/15 into train/val/test. We remove 10% of the edges in training and let them be the positive examples $P_{tr}$ to predict. For validation and test, we remove 50% of the edges from both validation and test to set as the positive examples $P_{val}$, $P_{te}$ to predict. For train, validation, and test, we sample $1.2|P_{tr}|$, $1.2|P_{val}|$, $1.2|P_{te}|$ negative link samples from the links of train, validation, and test (a minimal sketch of this protocol is shown after Table 4). Along with the HyperGNN architectures we use for the hypergraph experiments, we also compare with standard GNN architectures: APPNP Gasteiger et al. (2018), GAT Veličković et al. (2017), GCN2 Chen et al. (2020), GCN Kipf & Welling (2016a), GIN Xu et al. (2018), and GraphSAGE Hamilton et al. (2017). For every HyperGNN/GNN architecture, we also apply drop-edge Rong et al. (2019) to the input graph and use this as an additional baseline. The number of layers of each GNN is set to 5 and the hidden dimension to 1024. For APPNP and GCN2, one MLP is used on the initial node positional encodings. Since graphs do not have any hyperedges beyond size 2, graph neural networks fit the inductive bias of the graph data more easily and thus may perform better than the hypergraph neural network baselines more often than expected.

| PR-AUC ↑ | HGNN | HGNNP | HNHN | HyperGCN | UniGAT | UniGCN | UniGIN | UniSAGE |
|---|---|---|---|---|---|---|---|---|
| Ours | 0.71 ± 0.04 | 0.71 ± 0.09 | 0.69 ± 0.09 | 0.75 ± 0.14 | 0.75 ± 0.09 | 0.74 ± 0.09 | 0.65 ± 0.08 | 0.65 ± 0.07 |
| HyperGNN Baseline | 0.68 ± 0.00 | 0.69 ± 0.06 | 0.67 ± 0.02 | 0.75 ± 0.04 | 0.74 ± 0.02 | 0.74 ± 0.00 | 0.65 ± 0.05 | 0.64 ± 0.08 |
| HyperGNN Baseln.+edrop | 0.67 ± 0.02 | 0.70 ± 0.07 | 0.66 ± 0.00 | 0.75 ± 0.03 | 0.73 ± 0.08 | 0.74 ± 0.05 | 0.63 ± 0.01 | 0.64 ± 0.03 |
| APPNP | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 | 0.40 ± 0.03 |
| APPNP+edrop | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 | 0.40 ± 0.13 |
| GAT | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 | 0.49 ± 0.03 |
| GAT+edrop | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 | 0.51 ± 0.05 |
| GCN2 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 | 0.50 ± 0.09 |
| GCN2+edrop | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 |
| GCN | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 | 0.73 ± 0.02 |
| GCN+edrop | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 |
| GIN | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 | 0.73 ± 0.06 |
| GIN+edrop | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 | 0.73 ± 0.01 |
| GraphSAGE | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 | 0.73 ± 0.08 |
| GraphSAGE+edrop | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 | 0.73 ± 0.09 |

Table 4: PR-AUC on graph dataset johnshopkins55. Each column is a comparison of the baseline PR-AUC scores against the PR-AUC score for our method (first row) applied to a standard HyperGNN architecture. Red color denotes the highest average score in the column. Orange color denotes a two-way tie in the column, and brown color denotes a three-or-more-way tie in the column.
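The following is a minimal sketch (ours; the function name is hypothetical and uniform negative sampling is assumed) of the edge split and negative sampling protocol described at the start of this subsection:

```python
import random

def split_links(edges, nodes, frac_pos, neg_ratio=1.2, seed=0):
    """Remove a fraction of edges as positive examples and sample
    neg_ratio times as many non-edges as negatives."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_pos = int(frac_pos * len(edges))
    pos, kept = edges[:n_pos], edges[n_pos:]
    edge_set = set(map(frozenset, edges))
    neg = []
    while len(neg) < int(neg_ratio * len(pos)):
        u, v = rng.sample(nodes, 2)
        if frozenset((u, v)) not in edge_set:
            neg.append((u, v))
    return kept, pos, neg

# Train uses frac_pos = 0.1; validation and test use frac_pos = 0.5.
kept, pos, neg = split_links([(0, 1), (1, 2), (2, 3), (0, 3)], [0, 1, 2, 3], 0.5)
```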
| PR-AUC ↑ | HGNN | HGNNP | HNHN | HyperGCN | UniGAT | UniGCN | UniGIN | UniSAGE |
|---|---|---|---|---|---|---|---|---|
| Ours | 0.66 ± 0.06 | 0.78 ± 0.02 | 0.63 ± 0.07 | 0.82 ± 0.10 | 0.75 ± 0.05 | 0.74 ± 0.03 | 0.75 ± 0.03 | 0.75 ± 0.06 |
| HyperGNN Baseline | 0.65 ± 0.06 | 0.65 ± 0.06 | 0.65 ± 0.04 | 0.82 ± 0.09 | 0.74 ± 0.04 | 0.74 ± 0.05 | 0.75 ± 0.03 | 0.77 ± 0.01 |
| HyperGNN Baseln.+edrop | 0.65 ± 0.09 | 0.65 ± 0.00 | 0.64 ± 0.05 | 0.82 ± 0.00 | 0.72 ± 0.00 | 0.74 ± 0.07 | 0.73 ± 0.03 | 0.72 ± 0.07 |
| APPNP | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 | 0.72 ± 0.10 |
| APPNP+edrop | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 | 0.71 ± 0.05 |
| GAT | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 | 0.64 ± 0.06 |
| GAT+edrop | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 | 0.61 ± 0.09 |
| GCN2 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 | 0.66 ± 0.03 |
| GCN2+edrop | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 | 0.65 ± 0.10 |
| GCN | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 | 0.69 ± 0.03 |
| GCN+edrop | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 | 0.71 ± 0.06 |
| GIN | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 | 0.73 ± 0.03 |
| GIN+edrop | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 | 0.56 ± 0.07 |
| GraphSAGE | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 |
| GraphSAGE+edrop | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 |

Table 5: PR-AUC on graph dataset FB15k-237. Each column is a comparison of the baseline PR-AUC scores against the PR-AUC score for our method (first row) applied to a standard HyperGNN architecture. Red color denotes the highest average score in the column. Orange color denotes a two-way tie in the column, and brown color denotes a three-or-more-way tie in the column.

| PR-AUC ↑ | HGNN | HGNNP | HNHN | HyperGCN | UniGAT | UniGCN | UniGIN | UniSAGE |
|---|---|---|---|---|---|---|---|---|
| Ours | 0.79 ± 0.11 | 0.73 ± 0.10 | 0.73 ± 0.02 | 0.85 ± 0.07 | 0.75 ± 0.10 | 0.84 ± 0.09 | 0.72 ± 0.03 | 0.72 ± 0.12 |
| HyperGNN Baseline | 0.72 ± 0.07 | 0.72 ± 0.07 | 0.72 ± 0.06 | 0.85 ± 0.05 | 0.75 ± 0.09 | 0.84 ± 0.05 | 0.72 ± 0.07 | 0.72 ± 0.06 |
| HyperGNN Baseln.+edrop | 0.72 ± 0.05 | 0.72 ± 0.08 | 0.72 ± 0.06 | 0.85 ± 0.07 | 0.73 ± 0.09 | 0.84 ± 0.06 | 0.72 ± 0.03 | 0.72 ± 0.07 |
| APPNP | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 | 0.81 ± 0.12 |
| APPNP+edrop | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 | 0.80 ± 0.05 |
| GAT | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 | 0.50 ± 0.02 |
| GAT+edrop | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 | 0.33 ± 0.02 |
| GCN2 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 | 0.83 ± 0.05 |
| GCN2+edrop | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 | 0.78 ± 0.04 |
| GCN | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 | 0.73 ± 0.14 |
| GCN+edrop | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 | 0.75 ± 0.08 |
| GIN | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 | 0.73 ± 0.00 |
| GIN+edrop | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 | 0.73 ± 0.10 |
| GraphSAGE | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 | 0.46 ± 0.15 |
| GraphSAGE+edrop | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 | 0.47 ± 0.01 |

Table 6: PR-AUC on graph dataset AIFB. Each column is a comparison of the baseline PR-AUC scores against the PR-AUC score for our method (first row) applied to a standard HyperGNN architecture. Red color denotes the highest average score in the column. Orange color denotes a two-way tie in the column, and brown color denotes a three-or-more-way tie in the column.

## D Dataset And Hyperparameters

Table 7 lists the datasets and hyperparameters used in our experiments. All datasets are originally from Benson et al. (2018b) or are general hypergraph datasets provided in Sinha et al. (2015); Amburg et al. (2020a). We list the total number of hyperedges |E|, the total number of vertices |V|, the positive-to-negative label ratios for train/val/test, and the percentage of the connected components searched over by our algorithm that are of size at least 3. A node isomorphism class is determined by our isomorphism testing algorithm. By Proposition B.2 we can guarantee that if two nodes are in separate isomorphism classes by our isomorphism tester, then they are actually nonisomorphic. We use 1024 dimensions for all HyperGNN/GNN layer latent spaces, 5 layers for all hypergraph/graph neural networks, and a common learning rate of 0.01. Exactly 2000 epochs are used for training. The HyperGNN architecture baselines are described in the following:

- HGNN Feng et al. (2019): A neural network that generalizes the graph convolution to hypergraphs with hyperedge weights. Its architecture can be described by the following update step for the (l+1)-th layer from the l-th layer:

$$X^{(l+1)}=\sigma(D_v^{-\frac{1}{2}}HWD_e^{-1}H^TD_v^{-\frac{1}{2}}X^{(l)}W^{(l)}) \tag{75}$$

where $D_v\in\mathbb{R}^{n\times n}$ is the diagonal node degree matrix, $D_e\in\mathbb{R}^{m\times m}$ is the diagonal hyperedge degree matrix, $H\in\mathbb{R}^{n\times m}$ is the star incidence matrix, $X^{(l)}\in\mathbb{R}^{n\times d}$ is a node signal matrix, $W^{(l)}\in\mathbb{R}^{d\times d}$ is a weight matrix, and σ is a nonlinear activation. Following the matrix products, as a message passing neural network, HGNN is GWL-1 based since the nodes pass to the hyperedges and back.

- HGNNP Feng et al. (2023): An improved version of HGNN where asymmetry is introduced into the message passing weightings to distinguish the vertices from the hyperedges. This is also a GWL-1 based message passing neural network. It is described by the following node signal update equation:

$$X^{(l+1)}=\sigma(D_v^{-1}HWD_e^{-1}H^TX^{(l)}W^{(l)}) \tag{76}$$

where the matrices are exactly the same as in HGNN.
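To make Equation 75 concrete, here is a minimal `numpy` sketch (our illustration; unit hyperedge weights, i.e., W = I, are assumed) of one HGNN layer:

```python
import numpy as np

def hgnn_layer(X, H, W_l, act=np.tanh):
    """One HGNN layer (Equation 75) with unit hyperedge weights W = I."""
    Dv_half_inv = np.diag(H.sum(axis=1) ** -0.5)  # D_v^{-1/2}
    De_inv = np.diag(1.0 / H.sum(axis=0))         # D_e^{-1}
    return act(Dv_half_inv @ H @ De_inv @ H.T @ Dv_half_inv @ X @ W_l)

H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)  # 3 nodes, 2 hyperedges
X = np.eye(3)                                        # toy node signals
out = hgnn_layer(X, H, np.random.randn(3, 3))
print(out.shape)  # (3, 3)
```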
- HyperGCN Yadati et al. (2019) computes a GCN on a clique expansion of a hypergraph. This has an updatable adjacency matrix defined as follows:

$$A_{i,j}^{(l)}=\begin{cases}1&(i,j)\in E^{(l)}\\ 0&(i,j)\notin E^{(l)}\end{cases}\tag{77}$$

where

$$E^{(l)}=\{(i_{e},j_{e})=\arg\max_{i,j\in e}|X_{i}^{(l)}-X_{j}^{(l)}|:e\in\mathcal{E}\}\tag{78}$$

$$X_{v}^{(l+1)}=\sigma\Big(\sum_{u\in N(v)}[A^{(l)}]_{v,u}X_{u}^{(l)}W^{(l)}\Big)\tag{79}$$

Here $X^{(l)}\in\mathbb{R}^{n\times d}$ is the node signal matrix at layer l, $W^{(l)}\in\mathbb{R}^{d\times d}$ is the weight matrix at layer l, and σ is some nonlinear activation. This architecture has less expressive power than GWL-1.

- HNHN Dong et al. (2020): This is like HGNN, but the message passing is explicitly broken up into a hyperedge-to-node layer and a node-to-hyperedge layer:

$$X_{E}^{(l)}=\sigma(H^{T}X_{V}^{(l)}W_{E}^{(l)}+b_{E}^{(l)})\tag{80a}$$

and

$$X_{V}^{(l+1)}=\sigma(HX_{E}^{(l)}W_{V}^{(l)}+b_{V}^{(l)})\tag{80b}$$

where $H\in\mathbb{R}^{n\times m}$ is the star expansion incidence matrix, $W_{E}^{(l)},W_{V}^{(l)}\in\mathbb{R}^{d\times d}$, $b_{E}^{(l)}\in\mathbb{R}^{m}$, and $b_{V}^{(l)}\in\mathbb{R}^{n}$ are weights and biases, $X_{E}^{(l)},X_{V}^{(l)}$ are the hyperedge and node signal matrices at layer l, and σ is a nonlinear activation function. The bias vectors prevent HNHN from being permutation equivariant.

- UniGNN Huang & Yang (2021): The idea is directly related to generalizing WL-1 GNNs to hypergraphs. Define the following hyperedge representation for hyperedge e ∈ E:

$$h_{e}^{(l)}=\frac{1}{|e|}\sum_{u\in e}X_{u}^{(l)}\tag{81}$$

  - UniGCN: a generalization of GCN to hypergraphs:

$$X_{v}^{(l+1)}=\frac{1}{\sqrt{d_{v}}}\sum_{e\ni v}\frac{1}{\sqrt{d_{e}}}W^{(l)}h_{e}^{(l)}\tag{82}$$

  - UniGAT: a generalization of GAT to hypergraphs:

$$\alpha_{ue}=\sigma(a^{T}[X_{i}^{(l)}W^{(l)};X_{j}^{(l)}W^{(l)}])\tag{83a}$$

$$\tilde{\alpha}_{ue}=\frac{e^{\alpha_{ue}}}{\sum_{v\in e}e^{\alpha_{ve}}}\tag{83b}$$

$$X_{v}^{(l+1)}=\sum_{e\ni v}\tilde{\alpha}_{ve}h_{e}^{(l)}W^{(l)}\tag{83c}$$

  - UniGIN: a generalization of GIN to hypergraphs:

$$X_{v}^{(l+1)}=(1+\epsilon)X_{v}^{(l)}+\sum_{e\ni v}h_{e}^{(l)}\tag{84}$$

  - UniSAGE: a generalization of GraphSAGE to hypergraphs:

$$X_{v}^{(l+1)}=X_{v}^{(l)}+\sum_{e\ni v}h_{e}^{(l)}\tag{85}$$
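To make the shared message-passing pattern concrete, the following is a minimal PyTorch sketch of the HGNN update in Eq. (75). The class and argument names (`HGNNLayer`, `incidence`, `edge_weight`) are our own illustration rather than the DeepHypergraph API, and the sketch assumes a dense star incidence matrix and node degrees weighted by hyperedge weights; a practical implementation would use sparse operations.

```python
import torch
import torch.nn as nn

class HGNNLayer(nn.Module):
    """Illustrative HGNN layer: X' = sigma(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X W^(l))."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)  # W^(l)

    def forward(self, x: torch.Tensor, incidence: torch.Tensor,
                edge_weight: torch.Tensor) -> torch.Tensor:
        # x: (n, d) node signals; incidence: (n, m) star incidence H;
        # edge_weight: (m,) diagonal of the hyperedge weight matrix W (assumption).
        dv = incidence @ edge_weight          # weighted node degrees, shape (n,)
        de = incidence.sum(dim=0)             # hyperedge degrees |e|, shape (m,)
        dv_inv_sqrt = dv.clamp(min=1e-12).pow(-0.5)
        de_inv = de.clamp(min=1e-12).reciprocal()
        # Nodes -> hyperedges -> nodes, matching Eq. (75) applied right to left.
        msg = dv_inv_sqrt[:, None] * x
        msg = incidence.t() @ msg             # gather node signals into hyperedges
        msg = (edge_weight * de_inv)[:, None] * msg
        msg = incidence @ msg                 # scatter back to nodes
        msg = dv_inv_sqrt[:, None] * msg
        return torch.relu(self.weight(msg))

# Toy usage: 4 nodes, 2 hyperedges {0, 1, 2} and {2, 3}.
H = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.]])
layer = HGNNLayer(in_dim=8, out_dim=8)
out = layer(torch.randn(4, 8), H, edge_weight=torch.ones(2))
print(out.shape)  # torch.Size([4, 8])
```

The UniGIN and UniSAGE updates in Eqs. (84)-(85) follow the same gather/scatter pattern, only without the degree normalizations.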
| Dataset | \|E\| | \|V\| | ∆+,tr / ∆−,tr | ∆+,val / ∆−,val | ∆+,te / ∆−,te | % of Conn. Comps. Selected |
|---|---|---|---|---|---|---|
| cat-edge-DAWN | 87,104 | 2,109 | 8,802/10,547 | 1,915/2,296 | 1,867/2,237 | 0.05% |
| email-Eu | 234,760 | 998 | 1,803/2,159 | 570/681 | 626/749 | 0.6% |
| contact-primary-school | 106,879 | 242 | 1,620/1,921 | 461/545 | 350/415 | 9.3% |
| cat-edge-music-blues-reviews | 694 | 1,106 | 16/19 | 7/6 | 3/3 | 0.14% |
| cat-edge-vegas-bars-reviews | 1,194 | 1,234 | 72/86 | 12/14 | 11/13 | 0.7% |
| contact-high-school | 7,818 | 327 | 2,646/3,143 | 176/208 | 175/205 | 5.6% |
| cat-edge-Brain | 21,180 | 638 | 13,037/13,817 | 2,793/3,135 | 2,794/3,020 | 9.9% |
| johnshopkins55 | 298,537 | 5,163 | 29,853/35,634 | 9,329/11,120 | 27,988/29,853 | 2.0% |
| AIFB | 46,468 | 8,083 | 4,646/5,575 | 1,452/1,739 | 4,356/5,222 | 0.02% |
| amherst41 | 145,526 | 2,234 | 14,552/17,211 | 4,547/5,379 | 16,125/13,643 | 4.4% |
| FB15k-237 | 272,115 | 14,505 | 27,211/32,630 | 8,767/10,509 | 10,233/12,271 | 2.1% |
| WikiPeople-0bi | 18,828 | 43,388 | 27,211/32,630 | 10,254/12,301 | 1,164/1,396 | 0.05% |
| JF17K | 76,379 | 28,645 | 11,907/14,287 | 1,341/1,608 | 1,341/1,608 | 0.6% |

Table 7: Dataset statistics and training hyperparameters used for all datasets in scoring all experiments.

All positional encodings are computed from the training hyperedges before data augmentation. The loss we use for higher-order link prediction is the binary cross-entropy loss over all positive and negative samples. Hypergraph neural network implementations were mostly taken from https://github.com/iMoonLab/DeepHypergraph, which uses the Apache License 2.0. We describe here some more information about each dataset we use in our experiments, as provided by Benson et al. (2018b):

- Amburg et al. (2020a) cat-edge-DAWN: Here nodes are drugs, hyperedges are combinations of drugs taken by a patient prior to an emergency room visit, and edge categories indicate the patient disposition (e.g., "sent home", "surgery", "released to detox").

- Benson et al. (2018a); Yin et al. (2017); Leskovec et al. (2007) email-Eu: This is a temporal higher-order network dataset, which here means a sequence of timestamped simplices, or hyperedges with all their node subsets existing as hyperedges, where each simplex is a set of nodes. In email communication, messages can be sent to multiple recipients. In this dataset, nodes are email addresses at a European research institution. The original data source only contains (sender, receiver, timestamp) tuples, where timestamps are recorded at 1-second resolution. Simplices consist of a sender and all receivers such that the emails between them have the same timestamp. We restricted to simplices that consist of at most 25 nodes.

- Stehlé et al. (2011) contact-primary-school: This is a temporal higher-order network dataset, which here means a sequence of timestamped simplices where each simplex is a set of nodes. The dataset is constructed from interactions recorded by wearable sensors worn by people at a primary school. The sensors record interactions at a resolution of 20 seconds (recording all interactions from the previous 20 seconds). Nodes are the people, and simplices are maximal cliques of interacting individuals from an interval.

- Amburg et al. (2020b) cat-edge-vegas-bars-reviews: Hypergraph where nodes are Yelp users and hyperedges are users who reviewed an establishment of a particular category (different types of bars in Las Vegas, NV) within a one-month timeframe.
- Benson et al. (2018a); Mastrandrea et al. (2015) contact-high-school: This is a temporal higher-order network dataset, which here means a sequence of timestamped simplices where each simplex is a set of nodes. The dataset is constructed from interactions recorded by wearable sensors worn by people at a high school. The sensors record interactions at a resolution of 20 seconds (recording all interactions from the previous 20 seconds). Nodes are the people, and simplices are maximal cliques of interacting individuals from an interval.

- Crossley et al. (2013) cat-edge-Brain: This is a graph whose edges have categorical edge labels. Nodes represent brain regions from an MRI scan. There are two edge categories: one for connecting regions with high fMRI correlation and one for connecting regions with similar activation patterns.

- Lim et al. (2021) johnshopkins55: A non-homophilous graph dataset from the facebook100 dataset.

- Ristoski & Paulheim (2016) AIFB: The AIFB dataset describes the AIFB research institute in terms of its staff, research groups, and publications. The dataset was first used to predict the affiliation (i.e., research group) for people in the dataset. The dataset contains 178 members of five research groups; however, the smallest group contains only four people and is removed from the dataset, leaving four classes.

- Lim et al. (2021) amherst41: A non-homophilous graph dataset from the facebook100 dataset.

- Bordes et al. (2013) FB15k-237: A subset of entities that are also present in the Wikilinks database (Singh et al., 2012) and that also have at least 100 mentions in Freebase (for both entities and relationships). Relationships like '!/people/person/nationality', which just reverse the head and tail compared to the relationship '/people/person/nationality', are removed. This resulted in 592,213 triplets with 14,951 entities and 1,345 relationships, which were randomly split.

- Guan et al. (2019) WikiPeople-0bi: The Wikidata dump was downloaded, and the facts concerning entities of type human were extracted. These facts were denoised. Subsequently, the subset of elements with at least 30 mentions was selected, and the facts related to these elements were kept. Further, each fact was parsed into a set of its role-value pairs. The remaining facts were randomly split into training, validation, and test sets by a percentage of 80%/10%/10%. All binary relations are removed for simplicity. This modifies WikiPeople to WikiPeople-0bi.

- Wen et al. (2016) JF17K: The full Freebase data in RDF format was downloaded. Entities involved in very few triples and the triples involving String, Enumeration Type, and Numbers were removed. A fact representation was recovered from the remaining triples. Facts from meta-relations having only a single role were removed. From each meta-relation containing more than 10,000 facts, 10,000 facts were randomly selected.

## D.1 Timings

We perform experiments on a cluster of machines equipped with AMD MI100 GPUs and 112 shared AMD EPYC 7453 28-core processors with 2.6 PB shared RAM. We show here the times for computing each method. The timings may vary heavily across machines, as the memory we used is shared and there is a lot of paging during peak usage. Although our data preprocessing algorithm involves seemingly costly steps such as GWL-1 and connected components, the complexity of the entire preprocessing algorithm is linear in the size of the input, as shown in Proposition B.15.
Thus these operations are actually very efficient in practice, as shown by Tables 9 and 10 for the hypergraph and graph datasets, respectively. The preprocessing algorithm is run on CPU, while the training is run on GPU for 2000 epochs.

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 2m:45s±108s | 35m:9s±13s |
| HGNNP | 1m:52s±0s | 35m:16s±0s |
| HNHN | 1m:55s±0s | 35m:0s±1s |
| HyperGCN | 1m:50s±0s | 58m:17s±79s |
| UniGAT | 1m:54s±0s | 1h:19m:34s±0s |
| UniGCN | 1m:50s±2s | 35m:19s±2s |
| UniGIN | 1m:50s±1s | 35m:12s±1288s |
| UniSAGE | 1m:51s±0s | 35m:16s±0s |

(a) cat-edge-DAWN

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 1.72s±5s | 2m:11s±11s |
| HGNNP | 1.42s±0s | 2m:10s±0s |
| HNHN | 1.99s±0s | 3m:43s±2s |
| HyperGCN | 1.47s±2s | 4m:12s±3s |
| UniGAT | 1.85s±0s | 3m:54s±287s |
| UniGCN | 2.93s±0s | 3m:15s±19s |
| UniGIN | 2.24s±0s | 3m:17s±18s |
| UniSAGE | 2.04s±0s | 3m:13s±3s |

(b) cat-edge-music-blues-reviews

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 4.17s±0s | 2m:34s±1954s |
| HGNNP | 4.54s±0s | 2m:41s±53s |
| HNHN | 3.06s±0s | 2m:27s±15s |
| HyperGCN | 1.81s±1s | 2m:27s±0s |
| UniGAT | 1.91s±0s | 2m:27s±306s |
| UniGCN | 2.84s±0s | 2m:30s±72s |
| UniGIN | 3.20s±0s | 2m:27s±1189s |
| UniSAGE | 1.65s±0s | 2m:27s±0s |

(c) cat-edge-vegas-bars-reviews

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 5.84s±1s | 6m:49s±8s |
| HGNNP | 5.82s±0s | 9m:8s±19s |
| HNHN | 5.95s±0s | 8m:21s±19s |
| HyperGCN | 5.74s±0s | 10m:16s±1s |
| UniGAT | 8.80s±0s | 2m:31s±282s |
| UniGCN | 6.35s±0s | 6m:9s±957s |
| UniGIN | 5.99s±0s | 10m:41s±43s |
| UniSAGE | 5.97s±0s | 9m:50s±0s |

(d) contact-primary-school

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 23.25s±1s | 25m:41s±17s |
| HGNNP | 23.25s±0s | 19m:52s±49s |
| HNHN | 24.27s±1s | 5m:12s±63s |
| HyperGCN | 24.00s±0s | 21m:16s±0s |
| UniGAT | 14.27s±0s | 5m:13s±243s |
| UniGCN | 25.44s±0s | 5m:51s±1019s |
| UniGIN | 13.71s±1s | 19m:10s±3972s |
| UniSAGE | 14.08s±2s | 36m:29s±5s |

(e) email-Eu

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 4.89s±6s | 1m:27s±8s |
| HGNNP | 2.12s±0s | 2m:42s±30s |
| HNHN | 2.12s±0s | 2m:39s±42s |
| HyperGCN | 2.11s±0s | 40.11s±3s |
| UniGAT | 2.13s±0s | 3m:18s±8s |
| UniGCN | 2.11s±0s | 3m:21s±2s |
| UniGIN | 2.11s±0s | 2m:24s±70s |
| UniSAGE | 2.11s±0s | 2m:8s±49s |

(f) cat-edge-madison-restaurant-reviews

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 15.11s±4s | 4m:59s±1s |
| HGNNP | 12.72s±0s | 2m:29s±0s |
| HNHN | 12.17s±0s | 3m:6s±0s |
| HyperGCN | 12.47s±0s | 49.25s±0s |
| UniGAT | 12.74s±0s | 2m:1s±1s |
| UniGCN | 12.50s±0s | 2m:29s±3s |
| UniGIN | 12.57s±0s | 2m:16s±3s |
| UniSAGE | 12.67s±0s | 1m:50s±29s |

(a) contact-high-school

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 11.34s±10s | 4m:24s±6s |
| HGNNP | 6.02s±0s | 4m:13s±2s |
| HNHN | 6.01s±0s | 5m:31s±1s |
| HyperGCN | 6.32s±0s | 1m:33s±0s |
| UniGAT | 6.04s±0s | 4m:11s±0s |
| UniGCN | 5.79s±0s | 4m:12s±0s |
| UniGIN | 6.64s±1s | 3m:4s±1s |
| UniSAGE | 5.79s±0s | 3m:2s±0s |

(b) cat-edge-Brain
| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 3m:30s±5s | 1h:29m:33s±6s |
| HGNNP | 3m:34s±1s | 1h:48m:57s±1s |
| HNHN | 3m:41s±1s | 2h:9m:34s±1s |
| HyperGCN | 3m:24s±1s | 58m:27s±1s |
| UniGAT | 3m:50s±1s | 4h:21m:24s±1s |
| UniGCN | 3m:38s±1s | 29m:14s±1s |
| UniGIN | 3m:50s±1s | 27m:50s±1s |
| UniSAGE | 3m:41s±1s | 27m:22s±1s |

(c) WikiPeople-0bi

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 8m:11s±52s | 37m:18s±9s |
| HGNNP | 7m:34s±1s | 47m:56s±1s |
| HNHN | 6m:21s±1s | 49m:33s±1s |
| HyperGCN | 8m:20s±1s | 28m:25s±1s |
| UniGAT | 10m:40s±1s | 1h:54m:36s±1s |
| UniGCN | 7m:25s±1s | 2h:40m:20s±1s |
| UniGIN | 10m:37s±1s | 2h:48m:35s±1s |
| UniSAGE | 6m:58s±1s | 2h:35m:4s±1s |

(d) JF17K

Table 9: Timings for our method, broken up into the preprocessing and training phases (2000 epochs), for the hypergraph datasets.

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 11m:14s±75s | 53m:21s±2845s |
| HGNNP | 11m:10s±21s | 1h:34m:25s±35s |
| HNHN | 5m:15s±395s | 1h:35m:15s±419s |
| HyperGCN | 33.98s±0s | 5m:8s±0s |
| UniGAT | 1m:59s±120s | 2h:2m:47s±25s |
| UniGCN | 34.37s±0s | 1h:17m:38s±2s |
| UniGIN | 34.05s±0s | 1h:16m:38s±7s |
| UniSAGE | 34.36s±0s | 1h:16m:34s±3s |

(a) johnshopkins55

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 17m:9s±164s | 12m:38s±549s |
| HGNNP | 15m:34s±61s | 20m:26s±124s |
| HNHN | 15m:31s±83s | 18m:11s±30s |
| HyperGCN | 15m:46s±32s | 4m:17s±80s |
| UniGAT | 1m:27s±6s | 16m:30s±0s |
| UniGCN | 15m:57s±24s | 18m:42s±170s |
| UniGIN | 16m:14s±73s | 16m:22s±39s |
| UniSAGE | 8m:42s±610s | 8m:49s±324s |

(b) AIFB

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 4m:1s±11s | 22m:30s±1177s |
| HGNNP | 3m:53s±4s | 39m:30s±3s |
| HNHN | 3m:16s±22s | 44m:7s±71s |
| HyperGCN | 3m:35s±23s | 5m:22s±25s |
| UniGAT | 11.92s±0s | 1h:51m:53s±123s |
| UniGCN | 3m:20s±6s | 39m:18s±51s |
| UniGIN | 3m:21s±8s | 38m:3s±0s |
| UniSAGE | 11.27s±0s | 58m:48s±956s |

(c) amherst41

| Method | Preprocessing Time | Training Time |
|---|---|---|
| HGNN | 3m:32s±9s | 1h:19m:5s±4684s |
| HGNNP | 3m:26s±10s | 2h:19m:44s±3586s |
| HNHN | 3m:27s±0s | 1h:55m:48s±22s |
| HyperGCN | 3m:28s±0s | 10m:31s±18s |
| UniGAT | 3m:24s±5s | 3h:50m:24s±91s |
| UniGCN | 3m:19s±4s | 1h:39m:46s±13s |
| UniGIN | 3m:17s±0s | 1h:36m:47s±35s |
| UniSAGE | 3m:25s±13s | 1h:37m:16s±102s |

(d) FB15k-237

Table 10: Timings for our method, broken up into the preprocessing and training phases (2000 epochs), for the graph datasets.
Review 1:

Summary: The paper studies link prediction on hypergraphs, which are representations of higher-order relationships on a graph. The paper devises a generalized Weisfeiler-Lehman test, which gives a condition for telling whether two hypergraphs are equivalent under this test. The paper then moves on to propose a method to predict higher-order links. The experiments focus on six different hypergraph datasets, comparing the proposed method to a couple of hypergraph neural networks.

Strengths and Weaknesses:

S1) The paper is detailed, especially regarding the theoretical conditions of the Weisfeiler-Lehman test.

S2) The appendix (which I did not check in detail) provides great detail about the theoretical statements.

W1) The experimental comparison is weak---from looking at Table 1 and Table 2, I could not see a significant advantage brought by the proposed method.

W2) There is no description of the implementation of the network architectures, and the baselines are not described in enough detail for me to appreciate the experimental results.

W3) There are no results concerning the convergence of the training dynamics---even though it is stated in Proposition 5.5 that the algorithm runs in time linear in the input, the number of iterations required to train this method is not clear.

W4) I didn't get a great sense of the intuition behind the benefit of adding this symmetry breaking to the method---to better explain this, a simple working example would help. The fact that the empirical gains are marginal added to my concern regarding the benefit of this idea.

Requested Changes: On Page 9, the authors write Algorithm 14---should it be Algorithm 1? Other requested changes include addressing the weaknesses mentioned above.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The paper addresses limitations in existing hypergraph representation learning by introducing a preprocessing algorithm to identify and break symmetry in regular subhypergraphs. Through extensive experiments, the authors demonstrate the effectiveness of their approach for higher-order link prediction on both graph and hypergraph datasets with a minimal increase in computation.

Strengths and Weaknesses:
- The idea of breaking symmetry is novel.
- The proposed method has good experimental results.

Requested Changes:
- Hypergraph symmetry breaking does not always enhance performance. If symmetry is a crucial pattern for a graph learning task, disrupting it will hinder the network's ability to acquire information.
- The theoretical contribution is limited, and the theorems are not hard to extend from ordinary graphs. The definition of GWL is also obtained with minor modifications to existing work.
- Lack of experiments on WikiPeople and JF17K.

Broader Impact Concerns: N/A.

==================================================

Review 3:

Summary: The present work studies the problem of predicting the hyperedges of a hypergraph by using hypergraph embedding techniques based on the Weisfeiler-Lehman algorithm. The work presents both theoretical and experimental results.

Strengths and Weaknesses: The main strength is that Weisfeiler-Lehman techniques are actively being investigated, and therefore any new piece of work on them is welcome. The main weakness is that the work is, at least to me, almost unintelligible. I read it carefully through the first 4-5 pages, but then I had to give up.
Several technical notions are used without being defined, complex mathematical statements are made without any introduction or commentary, and so on. Details:

1. Section 1 assumes familiarity with GWL, hypergraph prediction, and/or message-passing algorithms. If one is not an expert, then it will be very difficult to figure out what the section is saying.
2. What does it mean for two neighborhoods (i.e., two sets of vertices) to be isomorphic? That the subgraphs they induce are isomorphic?
3. The "star expansion incidence matrix" is simply the incidence matrix of H.
4. Proposition 2.1: It says "Aut ≈ Stab are equivalent as isomorphic groups". I do not understand what "equivalent" means, and moreover I guess "Aut ≈ Stab" should be "Aut and Stab", or alternatively the proposition should just say "Aut ≈ Stab."
5. Above and below Proposition 2.1 it is said "[the elements of Stab] are equivalent to automorphisms on the original hypergraph H" and "Intuitively, hypergraph automorphisms and stablizer permutations on star expansion adjacancy matrices are equivalent". This is just repeating what the proposition says; there is no further intuition or information.
6. Section 2 uses non-elementary mathematics without properly introducing it. In particular, it uses group actions, orbits, and orbit stabilizers (see e.g. Equation (2)) without introducing any of them.
7. Section 2.2 says "These two conditions mean that the representation does not lose any information when doing prediction for missing hyperedges on k nodes". If that is true, then one should be able to retrieve from the representation the subgraph induced by the k nodes. I doubt this holds here.
8. In Section 2.3 there is no intuition or introduction for the GWL-1 method, and in particular for Equation (3), which is therefore very hard to understand. It is said that X is a vector of attributes, but the paper was only talking about unattributed hypergraphs so far. Moreover, I do not see the relationship with the k-node representations of the previous section; the set of equations defined here seems to yield a sequence of nested tuples and not a vector in R^d.
9. In Section 2.3, the discussion in the paragraph after Eq (3) is totally cryptic to me. What is the goal now? In particular, I was totally lost at "Define the operator AGG as a permutation invariant map on a set of vectors to representation space R^d". What does that mean? What is the purpose of this?
10. The statement of the problem (Problem 1) is sloppy. What does "Predict Egt \ E" mean? One needs to define a cost, or a loss, and possibly a distribution. In any case, there must be a way to measure the error of the algorithm, as well as some assumption on the relationship between Egt and E. Here there is none of that.

Requested Changes: Rewrite the manuscript so that it is possible to follow it.

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Reject

Comment: I struggled with this decision since I can see value in the approach. But I defer to the expert reviewers, who are concerned with the technically dense explanation of the claims, paired with minor empirical performance. I hope the feedback is helpful to the authors.

==================================================
# Revisiting Discrete Soft Actor-Critic

Anonymous authors Paper under double-blind review

## Abstract

We study the adaptation of Soft Actor-Critic (SAC), which is considered a state-of-the-art reinforcement learning (RL) algorithm, from continuous action spaces to discrete action spaces. We revisit vanilla discrete SAC and provide an in-depth understanding of its Q-value underestimation and performance instability issues when applied to discrete settings. We thereby propose Stable Discrete SAC (SDSAC), an algorithm that leverages entropy-penalty and double average Q-learning with Q-clip to address these issues. Extensive experiments on typical benchmarks with discrete action spaces, including Atari games and a large-scale MOBA game, show the efficacy of our proposed method. Our code is at: https://github.com/revisiting-sac/Revisiting-Discrete-SAC.git.

## 1 Introduction

In the conventional model-free reinforcement learning (RL) paradigm, an agent can be trained by learning an approximator of the action-value (Q) function (Mnih et al., 2015; Bellemare et al., 2017). The class of actor-critic algorithms (Mnih et al., 2016; Fujimoto et al., 2018) evaluates the policy function by approximating the value function. Motivated by maximum-entropy RL (Ziebart et al., 2008; Rawlik et al., 2012; Abdolmaleki et al., 2018), soft actor-critic (SAC) (Haarnoja et al., 2018a) introduces action entropy into the actor-critic framework to achieve an exploitation-exploration trade-off. It has achieved remarkable performance in a range of environments with continuous action spaces (Haarnoja et al., 2018b) and is considered the state-of-the-art algorithm for domains with continuous action spaces, e.g., MuJoCo (Todorov et al., 2012). However, while SAC solves problems with continuous action spaces, it cannot be directly applied to discrete domains, since it relies on the reparameterization of Gaussian policies to sample actions, whereas actions in discrete domains are categorical. Soft-DQN (Vieillard et al., 2020) provides a simple way to discretize SAC by applying maximum-entropy RL to DQN (Mnih et al., 2013). However, Soft-DQN utilizes only a Q-value parametrization to bypass the policy parameterization. Directly discretizing the continuous action output and Q value of vanilla SAC is another obvious strategy, suggested by Christodoulou (2019), to adapt SAC to discrete domains, resulting in the discrete version of SAC, denoted as discrete SAC (DSAC) throughout the paper. Counter-intuitively, however, empirical experiments in subsequent work (Xu et al., 2021) indicate that discrete SAC performs poorly in discrete domains, e.g., Atari games. We believe that the idea of maximum-entropy RL applies to both discrete and continuous domains. However, extending the maximum-entropy-based SAC algorithm to discrete domains still lacks a commonly accepted practice in the community. Therefore, in this paper, similar to the motivation of DDPG (deep deterministic policy gradient) (Lillicrap et al., 2016), which adapts DQN (deep Q networks) (Mnih et al., 2013) from discrete action spaces to continuous action spaces, we aim to optimize the SAC algorithm for discrete domains. Previous studies (Xu et al., 2021; Wang & Ni, 2020) have analyzed the reasons for the performance disparity of SAC between continuous and discrete domains. Viewed from the perspective of automating entropy adjustment, an unreasonable setting of the target entropy for the temperature α may break the SAC value-entropy trade-off (Wang & Ni, 2020; Xu et al., 2021).
Furthermore, the function approximation errors of the Q-value are known to lead to estimation bias and hurt performance in actor-critic methods (Fujimoto et al., 2018). To avoid overestimation bias, both discrete SAC and continuous SAC resort to clipped double Q-learning (Fujimoto et al., 2018). On the contrary, using the lower-bound approximation to the critic can lead to underestimation bias, which makes the policy fall into pessimistic under-exploration, as pointed out by (Ciosek et al., 2019; Pan et al., 2020), especially when the reward is sparse. However, existing works focus only on continuous domains (Ciosek et al., 2019; Pan et al., 2020), while SAC for discrete cases remains less explored. In addition to the above issues, we conjecture that discrete SAC also fails due to the absence of policy-update constraints. Intuitively, unstable training causes a shift in the Q function distribution and policy entropy, which generates a rapidly changing target for the critic network due to the soft Q-learning objective. Meanwhile, the critic network in SAC needs time to adapt to the oscillating target, exacerbating policy instability. To address the above challenges, we first design test cases to replicate the failure modes of vanilla discrete SAC, exposing its inherent weaknesses regarding training instability and Q-value underestimation. Accordingly, we then propose Stable Discrete SAC to stabilize the training. We develop an entropy penalty on the policy optimization objective to constrain policy updates. We also develop double average Q-learning with Q-clip to confine the Q value within a reasonable range. We use Atari games (the default testbed for RL algorithms with discrete action spaces) to verify the effectiveness of our optimizations. We also deploy our method to the Honor of Kings 1v1 game, a large-scale MOBA game used extensively in recent RL advances (Ye et al., 2020b;c;a; Wei et al., 2022), to demonstrate the scale-up capacity of our Stable Discrete SAC. To sum up, our contributions are:

- We pinpoint two failure modes of discrete SAC, regarding training instability and underestimated Q values. We find that the underlying causes are the environment's deceptive rewards and SAC's double Q-learning, respectively.
- To alleviate the training instability issue, we propose an entropy penalty to constrain the policy update in discrete SAC.
- To deal with the underestimation bias of the Q value in discrete SAC, we propose double average Q-learning with Q-clip to estimate the state-action value.

With the above contributions, we obtain the Stable Discrete SAC algorithm. Extensive experiments on Atari games and a large-scale MOBA game show SDSAC's superiority over the baselines, with a 68% improvement in normalized scores on Atari and around a 100% ELO increase in the MOBA game.

## 2 Related Work

Adaptation of Action Space. The most relevant works to this paper are vanilla discrete SAC (Christodoulou, 2019), TES-SAC (Xu et al., 2021) and Soft-DQN (Vieillard et al., 2020). Discrete SAC replaces the Gaussian policy with a categorical one and discretizes the Q-value output to adapt SAC from continuous to discrete action spaces. However, as we will point out, a direct discretization of SAC exhibits specific failure modes with poor performance. TES-SAC proposes a new scheduling method for the target entropy parameters in discrete SAC.
Soft-DQN discretizes SAC by applying maximum-entropy RL to DQN, utilizing only a Q-value parametrization and directly applying a softmax operation to the Q-values to select actions.

Q Estimation. Previous works (Fujimoto et al., 2018; Ciosek et al., 2019; Pan et al., 2020; Duan et al., 2021) have already expressed concerns about the estimation bias of the Q value for SAC. SD3 (Pan et al., 2020) proposes to apply the softmax operator to the original Q-value output to reduce overestimation bias. OAC (Ciosek et al., 2019) constrains the Q-value approximation objective by calculating the upper and lower bounds of two Q-networks. DSAC (Duan et al., 2021) replaces the Q-learning target with the expected reward sum obtained from the current state to the end of the episode and uses a multi-frame estimated target to reduce overestimation. Maxmin Q-learning (Lan et al., 2020) controls estimation bias by minimizing over the complete ensemble in the target. MME (Han & Sung, 2021) extends the max-min operation to the entropy framework to adapt it to SAC. REM (Agarwal et al., 2020) ensembles Q-value estimations with a random convex combination to enhance generalization in the offline setting. REDQ (Chen et al., 2021b) reduces the estimation bias by minimizing over a random subset of Q-functions. AEQ (Gong et al., 2023) adjusts the estimation bias by using the mean of the Q-functions minus their standard deviation. However, little research targets discrete settings. Our approach focuses on reducing the underestimation bias of the double Q-estimators to enhance exploration.

Performance Stability. Flow-SAC (Ward et al., 2019) applies a technique called normalizing-flows policies to continuous SAC, leading to finer transformations that improve training stability when exploring complex states. However, applying normalizing flows to discrete domains causes a degeneracy problem (Horvat & Pfister, 2021), making it difficult to transfer to discrete actions. SAC-AWMP (Hou et al., 2020) improves the stability of the final policy by using a weighted mixture to combine multiple policies; with this method, the cost of network parameters and inference time increases significantly. ISAC (Banerjee et al., 2022) increases SAC stability by mixing prioritized and on-policy samples, enabling the actor to repeatedly learn states with drastic changes. Repeatedly learning priority samples, however, runs the risk of settling into a local optimum. By comparison, our method improves policy stability under drastic state changes with an entropy constraint.

## 3 Preliminaries

This section briefly overviews the symbol definitions of SAC for discrete action spaces. Following the maximum-entropy framework, SAC adds an entropy term $\mathcal{H}(\pi(\cdot\mid s))$ as regularization to the policy gradient objective:

$$\pi^{*}=\operatorname*{argmax}_{\pi}\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim\rho_{\pi}}[r(s_{t},a_{t})+\alpha\mathcal{H}(\pi(\cdot\mid s_{t}))],\tag{1}$$

$$\mathcal{H}(\pi(\cdot\mid s))=-\int\pi(a\mid s)\log\pi(a\mid s)\,\mathrm{d}a=\mathbb{E}_{a\sim\pi(\cdot\mid s)}[-\log\pi(a\mid s)],\tag{2}$$

where π is a policy, π* is the optimal policy, and α is the temperature parameter that determines the relative importance of the entropy term versus the reward r and thus controls the stochasticity of the optimal policy.
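For a categorical policy, the integral in Eq. (2) reduces to a finite sum that can be computed exactly. The snippet below is a minimal illustration in PyTorch; the variable names are ours and not tied to any particular SAC implementation.

```python
import torch

# Logits of a categorical policy over 4 discrete actions for a batch of 2 states.
logits = torch.tensor([[2.0, 0.5, 0.1, -1.0],
                       [0.0, 0.0, 0.0, 0.0]])
probs = torch.softmax(logits, dim=-1)          # pi(a|s)
log_probs = torch.log_softmax(logits, dim=-1)  # log pi(a|s)

# Eq. (2) for discrete actions: H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s).
entropy = -(probs * log_probs).sum(dim=-1)
print(entropy)  # the uniform policy (second row) attains the maximum, log(4) ~ 1.386
```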
Soft Bellman Backup. The soft Q-function, parametrized by θ, is updated by reducing the following soft Bellman error:

$$J_{Q}(\theta)=\frac{1}{2}\left(r(s_{t},a_{t})+\gamma V(s_{t+1})-Q_{\theta}(s_{t},a_{t})\right)^{2},\tag{3}$$

where $V(s_t)$ is the soft state value function, which represents the expected return the policy obtains from the current state to the end of the trajectory:

$$V(s_{t})=\mathbb{E}_{a_{t}\sim\pi}[Q_{\theta}(s_{t},a_{t})-\alpha\log(\pi(a_{t}\mid s_{t}))].\tag{4}$$

Soft actor-critic minimizes the soft Q-function with the final soft Bellman error:

$$J_{Q}(\theta)=\mathbb{E}_{(s_{t},a_{t})\sim D}\Big[\frac{1}{2}\big(Q_{\theta}(s_{t},a_{t})-(r(s_{t},a_{t})+\gamma\,\mathbb{E}_{s_{t+1}\sim p(\cdot\mid s_{t},a_{t})}[V(s_{t+1})])\big)^{2}\Big],\tag{5}$$

where D is a replay buffer replenished by rollouts of the policy π interacting with the environment. In the implementation, SAC (Haarnoja et al., 2018a) uses the minimum of two delayed-update target-critic network outputs as the soft Bellman learning target to reduce overestimation:

$$V(s_{t+1})=\min_{i=1,2}\mathbb{E}_{a_{t+1}\sim\pi}[Q_{\theta_{i}^{\prime}}(s_{t+1},a_{t+1})-\alpha\log(\pi(a_{t+1}\mid s_{t+1}))],\tag{6}$$

where $Q_{\theta_{i}^{\prime}}$ denotes the i-th target-critic network.

Policy Update Iteration. The policy, parameterized by ϕ, distills the softmax policy induced by the soft Q-function. The discrete SAC policy directly outputs the probability of each discrete action, in contrast to the continuous SAC policy, which outputs the two parameters of a Gaussian distribution. The discrete SAC policy is then updated by minimizing the KL divergence between the policy distribution and the softmax distribution over the soft Q-function:

$$\pi_{\phi_{\mathrm{new}}}=\operatorname*{argmin}_{\pi^{\prime}\in\Pi}D_{\mathrm{KL}}\left(\pi^{\prime}(\cdot\mid s_{t})\,\Big\|\,\frac{\exp(\frac{1}{\alpha}Q^{\pi_{\phi_{\mathrm{old}}}}(s_{t},\cdot))}{Z^{\pi_{\phi_{\mathrm{old}}}}(s_{t})}\right).\tag{7}$$

Note that the partition function $Z^{\pi_{\phi_{\mathrm{old}}}}(s_{t})$ is a normalization term that can be ignored since it does not affect the gradient with respect to the new policy. The resulting optimization objective of the policy is:

$$J_{\pi}(\phi)=\mathbb{E}_{s_{t}\sim D}[\mathbb{E}_{a_{t}\sim\pi_{\phi}}[\alpha\log(\pi_{\phi}(a_{t}\mid s_{t}))-Q_{\theta}(s_{t},a_{t})]].\tag{8}$$

Automating Entropy Adjustment. The temperature α regulates the value-entropy balance in soft Q-learning. The SAC paper proposes using a temperature Lagrange term to tune α automatically. The following can be regarded as the optimization objective under an entropy constraint:

$$\max_{\pi_{0:T}}\mathbb{E}_{\rho_{\pi}}\Big[\sum_{t=0}^{T}r(s_{t},a_{t})\Big]\quad\text{s.t.}\;\mathbb{E}_{(s_{t},a_{t})\sim\rho_{\pi}}[-\log(\pi_{t}(a_{t}\mid s_{t}))]\geq\mathcal{H},\;\forall t,\tag{9}$$

where H is the desired minimum expected entropy. Optimizing the Lagrangian term over α amounts to minimizing:

$$J(\alpha)=\mathbb{E}_{a_{t}\sim\pi_{t}}[-\alpha\log\pi_{t}(a_{t}\mid s_{t})-\alpha\mathcal{H}].\tag{10}$$

By setting a loose upper limit on the target entropy H, SAC achieves automatic adjustment of the temperature α. Typically, the target entropy is set to $0.98\cdot\left(-\log\frac{1}{\dim(\text{Actions})}\right)$ for discrete actions (Christodoulou, 2019) and $-\dim(\text{Actions})$ for continuous actions (Haarnoja et al., 2018b).
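Because the action space is discrete, the expectations over $a\sim\pi$ in Eqs. (6) and (8) can be evaluated exactly as probability-weighted sums instead of via reparameterized sampling. The following sketch illustrates this for the critic and actor losses; it is a simplified illustration under our own naming (e.g., `discrete_sac_losses`), not the Tianshou implementation used later in the experiments.

```python
import torch
import torch.nn.functional as F

def discrete_sac_losses(logits, action, q1, q2,
                        logits_next, q1_target_next, q2_target_next,
                        reward, done, alpha, gamma=0.99):
    """Illustrative discrete SAC losses. Q tensors have shape (batch, n_actions)."""
    # --- Critic target: exact-expectation form of Eqs. (5)-(6). ---
    probs_next = torch.softmax(logits_next, dim=-1)
    log_probs_next = torch.log_softmax(logits_next, dim=-1)
    q_next = torch.min(q1_target_next, q2_target_next)
    v_next = (probs_next * (q_next - alpha * log_probs_next)).sum(dim=-1)
    target = (reward + gamma * (1.0 - done) * v_next).detach()

    q1_a = q1.gather(1, action.unsqueeze(1)).squeeze(1)  # Q_theta1(s_t, a_t)
    q2_a = q2.gather(1, action.unsqueeze(1)).squeeze(1)  # Q_theta2(s_t, a_t)
    critic_loss = F.mse_loss(q1_a, target) + F.mse_loss(q2_a, target)

    # --- Policy objective, Eq. (8), again as an exact sum over actions. ---
    probs = torch.softmax(logits, dim=-1)
    log_probs = torch.log_softmax(logits, dim=-1)
    q_min = torch.min(q1, q2).detach()
    actor_loss = (probs * (alpha * log_probs - q_min)).sum(dim=-1).mean()
    return critic_loss, actor_loss
```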
## 4 Failure Modes Of Vanilla Discrete SAC

We start by outlining the failure modes of vanilla discrete SAC and then analyze under what circumstances its standard design choices perform poorly.

## 4.1 Unstable Coupling Training

The first failure mode comes from the instability caused by fluctuations in the Q function distribution and the policy entropy during training. The maximum-entropy mechanism in SAC effectively balances exploration and exploitation. However, due to the entropy term in the soft Bellman error and the mechanism in discrete SAC that aligns the policy with the Q function, the policy update iteration (Eq. 8) is strongly coupled with the Q-learning iteration (Eq. 5). In environments with deceptive rewards (Hong et al., 2018), the agent can gain substantial returns in the early stages of training through short-term rewards, causing the Q value of specific actions to rise rapidly and the Q function distribution to become sharper. The coupled learning paradigm of discrete SAC then leads to a sharper policy distribution and, thus, a decline in entropy. Consequently, the Q-learning target becomes unstable, which can, in turn, deteriorate policy learning. As a result, the agent falls into local optima and struggles to discover alternative strategies with larger long-term payoffs. To illustrate this issue more concretely, we take the training process of discrete SAC in the Atari game Asterix as an example. As shown in Fig. 1(a), the player controls Asterix, which can move horizontally and vertically. In each round, horizontally moving objects appear. Asterix scores points by collecting objects and loses a life when collecting a lyre. In the early stage of the game, rounds often appear in which there are only scoring objects and no life-losing lyres (Fig. 1(b)), allowing the agent to score quickly by collecting objects, resulting in deceptive rewards. These rewards make the Q function sharper, thereby reducing the entropy of the policy. In Fig. 2(a), we sample a fixed set of states and measure the variance of the Q function across different actions for these states. We find that the Q function variance increases rapidly, indicating that Q becomes sharp quickly. Policy entropy also decreases during this period.

![4_image_0.png](4_image_0.png)

(a) Gameplay screenshot of the Atari game Asterix (b) Deceptive rewards in Asterix

Figure 1: Gameplay screenshot of the Atari game Asterix, including the player-controlled Asterix (yellow box), scoring objects (green box) and life-losing lyres (orange box) that appear in rounds. Deceptive rewards appear in the early stage of the game when there are only scoring objects.

![4_image_2.png](4_image_2.png)

![4_image_1.png](4_image_1.png)

Figure 2: Measuring Q variance, estimation of Q-value, policy entropy, episode length, steps with rewards, and score on the Atari game Asterix with discrete SAC over 10 million timesteps.

As the learning process continues, the policy entropy drops rapidly, and the action probabilities become deterministic quickly (Fig. 2(c)). The agent can collect objects but struggles to avoid obstacles effectively. After the policy entropy reaches its lowest point at around 2 million steps, neither the episode length (Fig. 2(d)) nor the number of steps with rewards (Fig. 2(e)) increases significantly. At the same time, the drastic change of policy entropy misleads the learning process, and thus both the Q-value and the policy fall into a local optimum (Fig. 2(c) and Fig. 2(b)).
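As a concrete reference, the sharpness statistic plotted in Fig. 2(a) can be obtained by evaluating the Q-network on a fixed probe set of states and averaging the variance of Q across actions. A minimal sketch follows; the formulation and the assumed `q_network` callable are our own illustration, not the paper's instrumentation code.

```python
import torch

@torch.no_grad()
def q_sharpness(q_network, probe_states):
    """Mean variance of Q across actions over a fixed set of probe states."""
    q = q_network(probe_states)        # (n_states, n_actions)
    return q.var(dim=-1).mean().item()
```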
Since both the policy and the Q-value converge to a local optimum, it becomes hard for the policy to explore efficiently in the later training stage. Even though the policy entropy rises again in the later stage (Fig. 2(c)), the performance of the policy does not improve anymore (Fig. 2(f)). Similar situations also occur in other Atari environments, and we provide more examples in Appendix A.5. To better understand why this undesirable behavior occurs, we inspect the gradient of the soft Bellman objective in Eq. (5):

$$\hat{\nabla}_{\theta}J_{Q}(\theta)=\nabla_{\theta}Q_{\theta}(s_{t},a_{t})\Big(Q_{\theta}(s_{t},a_{t})-\big(r(s_{t},a_{t})+\gamma(Q_{\theta}(s_{t+1},a_{t+1})-\alpha\log(\pi_{\phi}(a_{t+1}\mid s_{t+1})))\big)\Big).\tag{11}$$

As shown in Eq. (11), the improvement of $Q_{\theta}(s_t, a_t)$ relies on the Q-estimation of the following states and the policy entropy. A sharper Q function causes drastically shifting entropy, increasing the uncertainty of gradient updates and misleading the learning of the Q-network. Since the soft Q-network induces the policy, the policy can also become misled, hurting performance. To mitigate this phenomenon, the key is to ensure the smoothness of policy entropy changes so as to maintain stable training. In the next section, we will introduce how to constrain the policy's randomness to ensure smooth policy changes.

## 4.2 Pessimistic Exploration

The second failure mode comes from pessimistic exploration due to the double Q-learning mechanism. The double Q-learning trick has been widely used in value-based and actor-critic RL algorithms for both discrete (e.g., Double DQN (Van Hasselt et al., 2016)) and continuous (e.g., SAC (Haarnoja et al., 2018a)) domains. Due to the max operator, DQN tends to suffer from overestimation bias in discrete domains; Double DQN uses the double Q-learning trick to mitigate this issue. In continuous domains, inspired by Double DQN, TD3 (Fujimoto et al., 2018) and SAC adopt clipped double Q-learning to mitigate overestimation. Empirical results demonstrate that the clipped double Q-learning trick can boost SAC performance in continuous domains, but its impact remains unclear in discrete domains. Therefore, we need to revisit clipped double Q-learning for discrete SAC. In our experiments in discrete domains, we find that discrete SAC tends to suffer from underestimation bias instead of overestimation bias. This underestimation bias can cause pessimistic exploration, especially in the case of sparse rewards. Here, we illustrate how the popularly used clipped double Q-learning trick causes underestimation bias and how a policy trained with this trick tends to converge to suboptimal actions in discrete action spaces. Our work complements previous work with a more in-depth analysis of clipped double Q-learning. We demonstrate the existence of underestimation bias and then illustrate its impact on Atari games. To analyze the estimation bias ϵ, we introduce the mathematical expression of the soft state-value function:

$$V(s_{t})=\mathbb{E}_{a_{t}\sim\pi}[Q(s_{t},a_{t})-\alpha\log(\pi(a_{t}\mid s_{t}))],\tag{12}$$

where $Q(s_t, a_t)$ represents the true Q-value. In practice, SAC uses the clipped double Q-learning trick.
The learning target of the soft state-value function can then be written as:

$$V_{\mathrm{approx}}(s_{t})=\mathbb{E}_{a_{t}\sim\pi}\Big[\min_{i=1,2}Q_{\theta_{i}^{\prime}}(s_{t},a_{t})-\alpha\log(\pi(a_{t}\mid s_{t}))\Big],\tag{13}$$

where $Q_{\theta_{i}^{\prime}}$ denotes the estimate of the i-th target-critic network, parameterized by $\theta_{i}^{\prime}$. The estimation bias of $Q_{\theta_{i}^{\prime}}$ can be calculated as $\epsilon_{i}=Q_{\theta_{i}^{\prime}}(s,a)-Q(s,a)$. On the one hand, when $\epsilon_{1}>\epsilon_{2}>0$, the clipped double Q-learning trick helps mitigate overestimation error due to the min operation. On the other hand, when $\epsilon_{1}<\epsilon_{2}<0$ or $\epsilon_{1}<0<\epsilon_{2}$, the clipped double Q-learning trick leads to underestimation (i.e., $V_{\mathrm{approx}}<V$) and consequently results in pessimistic exploration (Pan et al., 2020; Ciosek et al., 2019). Does this theoretical underestimation occur in practice for discrete SAC, and does it hurt performance? We answer this question by showing the influence of the clipped double Q-learning trick on discrete SAC in Atari games, as shown in Fig. 3. Here, we compare the true value to the estimated value. The results are averaged over three independent experiments with different random seeds. We find that, in Fig. 3(a), the approximate values are lower than the true value over time, demonstrating the underestimation bias. At the same time, we also run experiments for discrete SAC with a single Q (DSAC-S), which uses a single Q-value for bootstrapping instead of clipped double Q-values. As shown in Fig. 3(b), without the clipped double Q-learning trick, the estimated value of DSAC-S is higher than the true value and thus exhibits an overestimation bias. However, in Fig. 3(c), we discover that even though DSAC-S suffers from overestimation bias, it performs much better than discrete SAC, which adopts the clipped double Q-learning mechanism. This indicates that the clipped double Q-learning trick can lead to pessimistic exploration and hurt the agent's performance.
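The underestimation effect of the min operator is easy to reproduce in isolation. In the toy simulation below (our own illustration, not taken from any codebase), two critics observe the same true Q-value corrupted by independent zero-mean noise; each estimate is unbiased on its own, yet their minimum is biased low:

```python
import numpy as np

rng = np.random.default_rng(0)
q_true = 10.0
n = 1_000_000

# Two unbiased critic estimates with independent zero-mean Gaussian errors.
q1 = q_true + rng.normal(0.0, 1.0, n)
q2 = q_true + rng.normal(0.0, 1.0, n)

print(np.mean(q1))                   # ~10.00: each estimate is unbiased alone
print(np.mean(np.minimum(q1, q2)))   # ~9.44:  E[min(Q1, Q2)] < Q_true
print(np.mean(0.5 * (q1 + q2)))      # ~10.00: averaging stays unbiased
```

This simple calculation also previews the remedy developed in Section 5.2: replacing the min with an average removes the systematic downward bias.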
## 5 Improvements Of SAC Failure Modes

We provide two simple alternatives, a surrogate objective with entropy-penalty and double average Q-learning with Q-clip, to avoid the two failure modes of discrete SAC discussed in Section 4. Combining these two modifications, we propose a new discrete SAC algorithm with entropy-penalty and double average Q-learning with Q-clip. The pseudo-code is provided in Algorithm 1.

![6_image_0.png](6_image_0.png)

Figure 3: Results on the Atari games Frostbite/MsPacman over 2/5 million time steps: a) Q-value estimates of discrete SAC; b) Q-value estimates of discrete SAC with single Q; c) score comparison between discrete SAC and discrete SAC with single Q.

## 5.1 Entropy-Penalty

The drastic change of the Q function distribution and entropy affects the optimization of the Q-value. Because the Q function and the policy are coupled during training in discrete SAC, we regularize the policy entropy to alleviate the training instability caused by a sharp Q function distribution and a rapid drop in entropy. Simply removing the entropy term would injure the exploration ability of the maximum-entropy RL framework. An intuitive solution is to introduce an entropy penalty into the policy objective to avoid entropy chattering. We now describe how to incorporate the entropy penalty into the learning process of discrete SAC. Recall the objective of the policy in SAC, Eq. (8). For a mini-batch of transition tuples $(s_t, a_t, r_t, s_{t+1})$ sampled from the replay buffer, we add an extra entropy term $\mathcal{H}_{\pi_{\mathrm{old}}}$, which reflects the randomness of the policy that collected the data, to each transition tuple (i.e., $(s_t, a_t, r_t, s_{t+1}, \mathcal{H}_{\pi_{\mathrm{old}}})$), where $\pi_{\mathrm{old}}$ denotes the policy used for data sampling. We calculate the entropy penalty by measuring the distance between $\mathcal{H}_{\pi_{\mathrm{old}}}$ and $\mathcal{H}_{\pi}$. Formally, the objective of the policy is:

$$J_{\pi}(\phi)=\mathbb{E}_{s_{t}\sim D}[\mathbb{E}_{a_{t}\sim\pi_{\phi}}[\alpha\log(\pi_{\phi}(a_{t}\mid s_{t}))-Q_{\theta}(s_{t},a_{t})]]+\beta\cdot\frac{1}{2}\mathbb{E}_{s_{t}\sim D}\big(\mathbb{E}_{a_{t}\sim\pi_{\phi_{\mathrm{old}}}}[-\log(\pi_{\phi_{\mathrm{old}}}(a_{t}\mid s_{t}))]-\mathbb{E}_{a_{t}\sim\pi_{\phi}}[-\log(\pi_{\phi}(a_{t}\mid s_{t}))]\big)^{2},\tag{14}$$

where the first expectation inside the squared term is the policy entropy of $\pi_{\phi_{\mathrm{old}}}$, the second is the policy entropy of $\pi_{\phi}$, and β is a coefficient for the penalty term, set to 0.5 in this paper. By constraining the policy objective with this penalty term, we increase the stability of the policy learning process. Fig. 4 shows training curves demonstrating how the entropy penalty mitigates the failure mode of drastic policy change. In Fig. 4(b), the entropy of discrete SAC (the purple curve) drops quickly, and the policy falls into a local optimum at the early training stage. Later, the policy stops improving and even suffers from performance deterioration, as shown by the purple curves in Fig. 4(c) and Fig. 4(d). On the contrary, our proposed method (discrete SAC with entropy-penalty) demonstrates better stability than discrete SAC. As shown in Fig. 4(a), the entropy penalty effectively constrains the sharpness of the Q function; as a result, the policy changes smoothly during training (Fig. 4(b)). Consequently, compared with discrete SAC, the policy in our approach keeps improving during the whole training stage and does not suffer from a performance drop at the later training stage (the red curves in Fig. 4(c) and Fig. 4(d)).

![7_image_0.png](7_image_0.png)

Figure 4: Measuring Q function variance, policy action entropy, estimation of Q-value, and score on the Atari game Asterix, comparing discrete SAC, discrete SAC with KL-penalty, and discrete SAC with entropy-penalty over 10 million time steps.

It is worth noting that, since the training instability mainly stems from the policy entropy term in the optimization objective, imposing constraints in entropy space is more effective than imposing constraints in policy space. Other common methods, such as a KL penalty, limit the magnitude of policy updates and impose additional restrictions on the policy. This is borne out in experiments: the KL penalty (the yellow curve) cannot effectively constrain the rise in Q variance (Fig. 4(a)) or the decrease in entropy (Fig. 4(b)). Consequently, the final Q-value and score with the KL penalty are lower than those with the entropy penalty, by 12% and 23%, respectively. The entropy-penalty term $\frac{1}{2}\mathbb{E}_{s_{t}\sim D}\big(\mathbb{E}_{a_{t}\sim\pi_{\phi_{\mathrm{old}}}}[-\log(\pi_{\phi_{\mathrm{old}}})]-\mathbb{E}_{a_{t}\sim\pi_{\phi}}[-\log(\pi_{\phi})]\big)^{2}$, in conjunction with the temperature α, jointly regulates the exploration of the policy. Different from trust-region methods such as the KL constraint (Schulman et al., 2015) or the clipped surrogate objective (Schulman et al., 2017), our method penalizes the change of action entropy between old and new policies to address policy instability during training.
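A minimal sketch of the penalized objective in Eq. (14) is given below. It assumes the behavior entropy was stored with each transition at collection time, as described above; the function and argument names are illustrative rather than taken from our released code.

```python
import torch

def actor_loss_with_entropy_penalty(logits, q_min, entropy_old,
                                    alpha, beta=0.5):
    """Eq. (14): discrete SAC actor loss plus the entropy-penalty term.

    logits:      (batch, n_actions) current policy logits at s_t
    q_min:       (batch, n_actions) detached min of the two critics
    entropy_old: (batch,) stored entropy of the data-collecting policy at s_t
    """
    probs = torch.softmax(logits, dim=-1)
    log_probs = torch.log_softmax(logits, dim=-1)

    # Standard discrete SAC objective, Eq. (8).
    sac_term = (probs * (alpha * log_probs - q_min)).sum(dim=-1).mean()

    # Penalize drift between the stored behavior entropy and the current entropy.
    entropy_new = -(probs * log_probs).sum(dim=-1)
    penalty = 0.5 * (entropy_old - entropy_new).pow(2).mean()

    return sac_term + beta * penalty
```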
By adding regularization in entropy space instead of policy space, our method mitigates the drastic changes of policy entropy while maintaining the inherent exploratory ability of SAC (as shown in Fig. 4(b), the policy entropy changes smoothly and stays relatively high to encourage exploration).

## 5.2 Double Average Q-Learning With Q-Clip

While several approaches (Ciosek et al., 2019; Pan et al., 2020) have been proposed to reduce underestimation bias, they cannot be applied to discrete SAC straightforwardly due to their reliance on Gaussian policies. In this section, we introduce a novel variant of double Q-learning to mitigate the underestimation bias for discrete SAC. In practice, discrete SAC uses clipped double Q-learning with a pair of target critics $(Q_{\theta_{1}^{\prime}}, Q_{\theta_{2}^{\prime}})$, and the learning target of the two critics is:

$$y=r+\gamma\min_{i=1,2}Q_{\theta_{i}^{\prime}}(s^{\prime},\pi(s^{\prime})).\tag{15}$$

When neural networks approximate the Q-function, there exists an unavoidable bias in the critics. Since the policy is optimized with respect to the lower bound of the double critics, for some states we will have $Q_{\theta_{2}^{\prime}}(s,\pi_{\phi}(s))>Q_{\mathrm{true}}>Q_{\theta_{1}^{\prime}}(s,\pi_{\phi}(s))$. This is problematic because $Q_{\theta_{1}^{\prime}}(s,\pi_{\phi}(s))$ will generally underestimate the true value, and this underestimation bias can be further exaggerated throughout training, resulting in pessimistic exploration. To address this problem, we propose to mitigate the underestimation bias by replacing the min operator with an avg operator. This amounts to taking the average of the two estimates, which we refer to as *double average Q-learning*:

$$y=r+\gamma\cdot\mathrm{avg}(Q_{\theta_{1}^{\prime}}(s^{\prime},\pi(s^{\prime})),Q_{\theta_{2}^{\prime}}(s^{\prime},\pi(s^{\prime}))).\tag{16}$$

By doing so, each critic can offset the underestimation bias of the other. To improve the stability of the Q-learning process, inspired by value clipping in PPO (Schulman et al., 2017), we further add a clip operator on the Bellman error to prevent drastic updates of the Q-network. The modified Bellman loss of the Q-network is:

$$\mathcal{L}(\theta_{i})=\max\Big((Q_{\theta_{i}}-y)^{2},\;\big(Q_{\theta_{i}^{\prime}}+\mathrm{clip}(Q_{\theta_{i}}-Q_{\theta_{i}^{\prime}},-c,c)-y\big)^{2}\Big),\tag{17}$$

where $Q_{\theta_{i}}$ is the critic network's estimate, $Q_{\theta_{i}^{\prime}}$ is the estimate of the target-critic network, and c is a hyperparameter denoting the clip range. This clipping operator prevents the Q-network from making aggressive updates beyond the clip range. In this way, the Q-learning process is more robust to abrupt changes in the data distribution. Combining the clipping mechanism (Eq. 17) with double average Q-learning (Eq. 16), we refer to our proposed approach as *double average Q-learning with Q-clip*.
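For concreteness, a compact sketch of the per-critic loss in Eq. (17), with the double-average target of Eq. (16) passed in as `y`, is shown below; as before, the naming is our own illustration rather than the released implementation.

```python
import torch

def critic_loss_avg_qclip(q, q_target_detached, y, c=0.5):
    """Eq. (17): Bellman loss with Q-clip for one critic.

    q:                 (batch,) current critic estimate Q_theta(s_t, a_t)
    q_target_detached: (batch,) target-critic estimate Q_theta'(s_t, a_t)
    y:                 (batch,) double-average target from Eq. (16),
                       y = r + gamma * 0.5 * (Q1'(s', a') + Q2'(s', a'))
    c:                 clip range (hyperparameter; the value here is an assumption)
    """
    unclipped = (q - y).pow(2)
    clipped = (q_target_detached
               + torch.clamp(q - q_target_detached, -c, c) - y).pow(2)
    # Taking the max keeps whichever error term is larger, as in PPO value clipping.
    return torch.max(unclipped, clipped).mean()
```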
![8_image_0.png](8_image_0.png)

Figure 5: Measuring estimation of Q-value and score on the Atari games Frostbite/MsPacman, comparing discrete SAC, discrete SAC with REDQ, discrete SAC with REM, and ours (discrete SAC with double average Q-learning with Q-clip) over 10 million steps.

Fig. 5 demonstrates the effectiveness of our approach. We compare discrete SAC and various ensemble Q-estimation methods, including Randomized Ensembled Double Q-learning (REDQ) (Chen et al., 2021b) and Random Ensemble Mixture (REM) (Agarwal et al., 2020), with our proposed double average Q-learning with Q-clip. In Fig. 5(a), the Q-value estimate of discrete SAC is lower than the true value. Therefore, the policy of discrete SAC suffers from pessimistic exploration, resulting in poor performance (blue curve in Fig. 5(e)). On the contrary, in Fig. 5(d), with double average Q-learning and Q-clip, the Q-value estimate eliminates the underestimation bias and improves quickly at the early training stage. The improvement in the Q-value carries over to the performance of the policy. Consequently, our approach outperforms baseline discrete SAC by a large margin (Fig. 5(e)). The results also demonstrate that even though REDQ has less estimation bias (Fig. 5(b)), it still suffers from underestimation bias, leading to suboptimal performance due to pessimistic exploration. Although REM addresses the underestimation issue (Fig. 5(c)), its overestimation bias significantly exceeds that of our proposed method, resulting in a rapid decline in performance at 8 million steps. In Fig. 5(d), we also notice that the Q-value overestimates the true value during the early training stage but finally converges to the true value by the end of training. This encourages early exploration, consistent with the principle of optimism in the face of uncertainty (Kearns & Singh, 2002).

## 6 Experiments

## 6.1 Experimental Setup

To evaluate our algorithm, we compare our SDSAC with the most related baselines, i.e., discrete SAC (Christodoulou, 2019), TES-SAC (Xu et al., 2021), Soft-DQN (Vieillard et al., 2020), and Rainbow (Hessel et al., 2018), a widely accepted algorithm in the discrete domain. We measure their performance on 20 Atari games, chosen to match (Christodoulou, 2019) for a fair comparison. We evaluate for 10 episodes every 50000 steps during training and execute 3 random seeds for each algorithm for 10 million environment steps (or 40 million frames).
For the baseline implementation of discrete SAC, we use Tianshou¹. We find that Tianshou's implementation performs better than the original paper by Christodoulou (Christodoulou, 2019); thus we use the default hyperparameters in Tianshou on all 20 games. We start the game with up to 30 no-op actions, similar to (Mnih et al., 2013), to provide the agent with a random starting position. To obtain summary statistics across games, following (Van Hasselt et al., 2016), we normalize the score for each game as follows:

$$\text{Score}_{\mathrm{normalized}}=\frac{\text{Score}_{\mathrm{agent}}-\text{Score}_{\mathrm{random}}}{\text{Score}_{\mathrm{human}}-\text{Score}_{\mathrm{random}}}.$$

Table 2: Scores on 20 Atari games at 1M and 10M steps. Discrete SAC (1M) and TES-SAC (1M) scores come from the corresponding papers, and NE means the score does not exist in the original paper.

| Game | Discrete SAC (1M) | TES-SAC (1M) | Soft-DQN (1M) | Ours (1M) | Rainbow (10M) | Discrete SAC (10M) | Soft-DQN (10M) | Ours (10M) |
|---|---|---|---|---|---|---|---|---|
| Alien | 216.90 | 685.93 | 726.33 | 981.67 | 1798.33 | 2717.67 | 2018.00 | 2158.33 |
| Amidar | 7.9 | 42.07 | 130.03 | 132.97 | 394.23 | 354.77 | 438.80 | 407.20 |
| Assault | 350.0 | 337.03 | 881.97 | 1664.77 | 1802.53 | 7189.97 | 7258.87 | 6785.60 |
| Asterix | 272.0 | 378.5 | 676.67 | 733.33 | 5853.33 | 2860.00 | 3761.67 | 5993.33 |
| BattleZone | 4386.7 | 5790 | 7933.33 | 6266.67 | 24266.67 | 16850.00 | 24733.33 | 9466.67 |
| BeamRider | 432.1 | NE | 3321.60 | 3468.60 | 3310.40 | 7169.60 | 7048.20 | 10506.60 |
| Breakout | 0.7 | 2.65 | 46.17 | 11.47 | 492.93 | 29.03 | 155.83 | 60.43 |
| CrazyClimber | 3668.7 | 4.0 | 25390.00 | 20753.33 | 30286.67 | 126320.00 | 95156.67 | 140726.67 |
| Enduro | 0.8 | NE | 54.23 | 0.93 | 1517.70 | 1326.77 | 1144.07 | 2246.40 |
| Freeway | 4.4 | 13.57 | 17.70 | 20.17 | 20.13 | 15.73 | 32.30 | 20.17 |
| Frostbite | 59.4 | 81.03 | 294.33 | 347.00 | 4163.67 | 646.33 | 2959.00 | 4806.00 |
| Jamesbond | 68.3 | 31.33 | 273.33 | 368.33 | 656.67 | 1386.67 | 965.00 | 2085.00 |
| Kangaroo | 29.3 | 307.33 | 160.00 | 120.00 | 3716.67 | 2426.67 | 2703.33 | 5556.67 |
| MsPacman | 690.9 | 1408 | 1528.00 | 1639.00 | 2738.67 | 3221.33 | 2386.33 | 3175.67 |
| Pong | -20.98 | -20.84 | 20.00 | 15.53 | 20.93 | 20.37 | 20.73 | 20.37 |
| Qbert | 280.5 | 74.93 | 1400.83 | 986.67 | 15299.17 | 12946.67 | 14293.33 | 15325.83 |
| RoadRunner | 305.3 | NE | 5510.00 | 12793.33 | 45173.33 | 34043.33 | 33370.00 | 43203.33 |
| SpaceInvaders | 160.8 | NE | 488.83 | 383.50 | 1330.50 | 458.83 | 816.00 | 586.50 |
| Seaquest | 211.6 | 116.73 | 681.33 | 744.00 | 2105.33 | 1853.33 | 3438.67 | 2764.00 |
| UpNDown | 250.7 | 207.6 | 8727.33 | 8114.67 | 9110.00 | 17803.33 | 79313.00 | 63441.33 |

Table 1: Mean and median normalized scores of each method across all 20 Atari games at 1M and 10M steps.

| | Discrete SAC (1M) | TES-SAC (1M) | Soft-DQN (1M) | Ours (1M) | Rainbow (10M) | Discrete SAC (10M) | Soft-DQN (10M) | Ours (10M) |
|---|---|---|---|---|---|---|---|---|
| Mean | 0.5% | 3.0% | 41.7% | 38.5% | 187.4% | 151.4% | 199.2% | 220.0% |
| Median | 0.4% | 2.1% | 20.0% | 11.1% | 79.2% | 90.8% | 107.7% | 114.1% |

## 6.2 Overall Performance

Table 1 provides an overview of the results, and detailed results are presented in Table 2 and Appendix A.1.
## 6.2 Overall Performance

Table 1 provides an overview of the results, and detailed results are presented in Table 2 and Appendix A.1. Since TES-SAC is not open-sourced and our re-implementation following the paper underperforms the reported results, we adopt the normalized scores of discrete SAC and TES-SAC reported in the corresponding publication (Xu et al., 2021). Compared to discrete SAC and TES-SAC, our method increases the mean normalized score by 38% and 35.5%, respectively, and improves the median normalized score by 10.7% and 9.0%. To verify the effect of a longer training process, Table 1 also compares discrete SAC, Rainbow, Soft-DQN, and our method at 10 million steps. Compared with discrete SAC, our method improves the normalized scores by 68.6% and 23.3% on mean and median, respectively. Additionally, our proposed method outperforms Rainbow by 32.6% on the mean and by 34.9% on the median. Better Q-estimation and steady policy updates are responsible for the increase in average scores. The experimental results demonstrate that, benefiting from the deterministic greedy policy and the entropy regularization in the evaluation step, Soft-DQN's performance improves rapidly in the early stages and achieves the best results at 1 million steps. However, due to the early convergence of the deterministic greedy policy, Soft-DQN's performance stagnates after 4 million steps, as seen in Fig. 9. Our method outperforms Soft-DQN at the final 10 million steps by 20.8% on mean and 6.4% on median, owing to the training stability brought by the entropy penalty and the optimistic exploration afforded by double average Q-learning with Q-clip.

![10_image_0.png](10_image_0.png)

Figure 6: Scores of variants of discrete SAC (discrete SAC, discrete SAC with entropy-penalty, and discrete SAC with double average Q-learning with Q-clip) for the Atari games Assault, Asterix, Enduro, Freeway, Kangaroo and Seaquest.

## 6.3 Ablation Study

Fig. 6 shows the learning curves for 6 environments. Entropy-penalty (red curve) increases performance compared to discrete SAC in each of the six environments, and even doubles the score in Assault. This shows that discrete SAC can perform excellently once the training instability is removed. The alternative to clipped double Q-learning, double average Q-learning with Q-clip (yellow curve), also shows improvement over discrete SAC in 5 of the environments, Asterix being the exception. Additional improvements are obtained when both alternative design choices are used simultaneously. We also conduct a hyperparameter analysis in Appendix A.2.

## 6.4 Qualitative Analysis

![10_image_1.png](10_image_1.png)

Figure 7: The loss surfaces of discrete SAC and our method on the Atari game Seaquest with trained weights at 3 million, 5 million and 10 million steps.

Fig. 7 shows the loss surfaces of discrete SAC and our method, obtained with the visualization method proposed in (Li et al., 2018; Ota et al., 2021) applied to the TD-error loss of the Q functions. Judging by the sharpness/flatness of the two sub-figures, our method has a nearly convex surface, while discrete SAC has a more complex one. The surface of our method has fewer saddle points than that of discrete SAC, which further indicates that it can be optimized more smoothly during training.
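To make the two design choices ablated above concrete, the sketch below expresses the entropy-penalty actor loss of Algorithm 1 in PyTorch. Tensor shapes and the use of detached critic values are assumptions; the defaults β = 0.5 and α = 0.05 follow Appendix A.2 and Table 4.

```python
import torch

def policy_loss(q, log_pi, h_old, alpha=0.05, beta=0.5):
    """Discrete SAC actor loss plus the entropy-penalty term
    beta/2 * (H_old - H)^2 of Algorithm 1 (a sketch; shapes assumed).

    q, log_pi: [B, |A|] detached critic values and current log-policy;
    h_old: the policy entropy stored with each transition, shape [B]."""
    pi = log_pi.exp()
    # Closed-form expectation over the discrete action set.
    actor = (pi * (alpha * log_pi - q)).sum(dim=-1)
    # Penalize drastic changes of the policy entropy with respect to the
    # behaviour policy that collected the data.
    h_new = -(pi * log_pi).sum(dim=-1)
    return (actor + 0.5 * beta * (h_old - h_new) ** 2).mean()
```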
![11_image_0.png](11_image_0.png)

Figure 8: a) A screenshot of Honor of Kings 1v1. b) ELO scores of discrete SAC and our method, evaluated at three training snapshots (24, 36, and 48 hours).

## 7 Case Study In Honor Of Kings

We further deploy our method in Honor of Kings 1v1, a commercial game, to investigate the scale-up ability of our proposed SAC algorithm. Honor of Kings is the world's most popular MOBA (Multiplayer Online Battle Arena) game and a popular testbed for RL research (Ye et al., 2020b;c;a; Chen et al., 2021a; Wei et al., 2022). The game descriptions can be found in (Ye et al., 2020c;a). In our experiments, we use the one-versus-one mode (1v1 solo), with both sides playing the same hero: Diao Chan. We use the default training settings (e.g., computing resources, self-play settings, initializations, etc.) from the officially released Honor of Kings 1v1 game environment (Wei et al., 2022) (corresponding code available at: https://github.com/tencent-ailab/hok_env). The state of the game is represented by feature vectors, as reported in (Ye et al., 2020c; Wei et al., 2022). The action space is discrete, i.e., we discretize the direction of movement and skills, same as (Ye et al., 2020c;a). The goal of the game is to destroy the opponent's turrets and base crystals while protecting one's own turrets and base crystals. The ELO rating system, calculated from the win rate, is used to measure the relative strength of two agents. The results are shown in Fig. 8. Throughout the entire training period, our method outperforms discrete SAC by a significant margin, which indicates our method's efficiency in large-scale settings.

## 8 Conclusions And Future Work

We highlight that, for soft actor-critic (SAC), design choices that are widely accepted in continuous action spaces do not necessarily generalize to discrete environments. We conduct a failure mode analysis and obtain two main insights: 1) due to deceptive rewards, the unstable coupled update of the policy and the Q function further disturbs training; 2) the underestimation bias caused by double Q-learning results in pessimistic exploration and inefficient sample usage. We thereby propose two alternative design choices for discrete SAC: entropy-penalty and double average Q-learning with Q-clip. Experiments show that our alternative design choices increase the training stability and the accuracy of the Q-value estimates, which ultimately improves overall performance. In addition, we apply our method to the large-scale MOBA game Honor of Kings 1v1 to show the scalability of our optimizations. Finally, this success obscures certain flaws, one of which is that our improved discrete SAC still performs poorly in settings involving long-term decision-making. One possible reason is that SAC cannot accurately estimate the future from the reward of the current frame alone. To accomplish long-term decisions with SAC, our next study will concentrate on improving the usage of the reward signal across the whole episode.

## References

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920, 2018.

Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 104–114. PMLR, 2020. URL http://proceedings.mlr.press/v119/agarwal20c.html.

Chayan Banerjee, Zhiyong Chen, and Nasimul Noman. Improved soft actor-critic: Mixing prioritized off-policy samples with on-policy experiences.
IEEE Transactions on Neural Networks and Learning Systems, 2022. Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, pp. 449–458. PMLR, 2017. Sheng Chen, Menghui Zhu, Deheng Ye, Weinan Zhang, Qiang Fu, and Wei Yang. Which heroes to pick? learning to draft in moba games with neural networks and tree search. IEEE Transactions on Games, 13 (4):410–421, 2021a. Xinyue Chen, Che Wang, Zijian Zhou, and Keith W. Ross. Randomized ensembled double q-learning: Learning fast without a model. In International Conference on Learning Representations, 2021b. URL https://openreview.net/forum?id=AY8zfZm0tDd. Petros Christodoulou. Soft actor-critic for discrete action settings. arXiv preprint arXiv:1910.07207, 2019. Kamil Ciosek, Quan Vuong, Robert Loftin, and Katja Hofmann. Better exploration with optimistic actor critic. Advances in Neural Information Processing Systems, 32, 2019. Jingliang Duan, Yang Guan, Shengbo Eben Li, Yangang Ren, Qi Sun, and Bo Cheng. Distributional soft actor-critic: Off-policy reinforcement learning for addressing value estimation errors. IEEE transactions on neural networks and learning systems, 2021. Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pp. 1587–1596. PMLR, 2018. Xiaoyu Gong, Shuai Lü, Jiayu Yu, Sheng Zhu, and Zongze Li. Adaptive estimation q-learning with uncertainty and familiarity. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pp. 3750–3758. ijcai.org, 2023. doi: 10.24963/ijcai.2023/417. URL https://doi.org/10.24963/ijcai.2023/417. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861–1870. PMLR, 2018a. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018b. Seungyul Han and Youngchul Sung. A max-min entropy framework for reinforcement learning. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 25732–25745, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ d7b76edf790923bf7177f7ebba5978df-Abstract.html. Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In Thirty-second AAAI conference on artificial intelligence, 2018. Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Tsu-Jui Fu, and Chun-Yi Lee. Diversitydriven exploration strategy for deep reinforcement learning. Advances in neural information processing systems, 31, 2018. Christian Horvat and Jean-Pascal Pfister. Denoising normalizing flow. Advances in Neural Information Processing Systems, 34:9099–9111, 2021. Zhimin Hou, Kuangen Zhang, Yi Wan, Dongyu Li, Chenglong Fu, and Haoyong Yu. 
Off-policy maximum entropy reinforcement learning: Soft actor-critic with advantage weighted mixture policy (sac-awmp). arXiv preprint arXiv:2002.02829, 2020. Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine learning, 49(2):209–232, 2002. Qingfeng Lan, Yangchen Pan, Alona Fyshe, and Martha White. Maxmin q-learning: Controlling the estimation bias of q-learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id= Bkg0u3Etwr. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31, 2018. Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1509. 02971. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013. URL http://arxiv.org/abs/1312.5602. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. nature, 518(7540):529–533, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928–1937. PMLR, 2016. Kei Ota, Devesh K Jha, and Asako Kanezaki. Training larger networks for deep reinforcement learning. arXiv preprint arXiv:2102.07920, 2021. Ling Pan, Qingpeng Cai, and Longbo Huang. Softmax deep double deterministic policy gradients. Advances in Neural Information Processing Systems, 33:11767–11777, 2020. Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. Proceedings of Robotics: Science and Systems VIII, 2012. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1889–1897, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/schulman15.html. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ international conference on intelligent robots and systems, pp. 5026–5033. IEEE, 2012. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016. Nino Vieillard, Olivier Pietquin, and Matthieu Geist. Munchausen reinforcement learning. 
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/2c6a0bae0f071cbbf0bb3d5b11d90a82-Abstract.html.

Yufei Wang and Tianwei Ni. Meta-sac: Auto-tune the entropy temperature of soft actor-critic via meta-gradient. In Proceedings of the International Conference on Machine Learning workshop, 2020.

Patrick Nadeem Ward, Ariella Smofsky, and Avishek Joey Bose. Improving exploration in soft-actor-critic with normalizing flows policies. In Proceedings of the International Conference on Machine Learning workshop, 2019.

Hua Wei, Jingxiao Chen, Xiyang Ji, Hongyang Qin, Minwen Deng, Siqin Li, Liang Wang, Weinan Zhang, Yong Yu, Liu Linc, Lanxiao Huang, Deheng Ye, Qiang Fu, and Yang Wei. Honor of kings arena: an environment for generalization in competitive reinforcement learning. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=7e6W6LEOBg3.

Yaosheng Xu, Dailin Hu, Litian Liang, Stephen McAleer, Pieter Abbeel, and Roy Fox. Target entropy annealing for discrete soft actor-critic. Advances in Neural Information Processing Systems workshop, 2021.

Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, et al. Towards playing full moba games with deep reinforcement learning. Advances in Neural Information Processing Systems, 33:621–632, 2020a.

Deheng Ye, Guibin Chen, Peilin Zhao, Fuhao Qiu, Bo Yuan, Wen Zhang, Sheng Chen, Mingfei Sun, Xiaoqian Li, Siqin Li, et al. Supervised learning achieves human-level performance in moba games: A case study of honor of kings. IEEE Transactions on Neural Networks and Learning Systems, 2020b.

Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. Mastering complex control in moba games with deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 6672–6679, 2020c.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.

## A Appendix

## A.1 Detailed Experiment Results On 20 Atari Game Environments

In Figure 9, we present the learning curves of all 20 experiments.

![15_image_0.png](15_image_0.png)

Figure 9: Learning curves for discrete SAC, Rainbow, Soft-DQN, and ours, for each game. Every curve is smoothed with a moving average of 10 to improve readability.

## A.2 Hyperparameter Analysis

Our alternative design introduces two hyperparameters, the entropy-penalty coefficient β and the Q-clip range c. Fig. 10 compares various values of β and c.

![16_image_0.png](16_image_0.png)

Figure 10: Scores on Seaquest: a) variants of the entropy-penalty coefficient β with 0.1, 0.2, 0.5 and 1. b) variants of the Q-clip range c with 0.5, 1, 2 and 5.

The entropy-penalty coefficient β determines how strongly changes of the policy entropy are constrained. Intuitively, an excessive penalty term will lead to policy under-optimization. We experiment with different β in {0.1, 0.2, 0.5, 1} and find that β = 0.5 effectively limits entropy randomness while improving performance.
Similarly, the Q-clip range c constrains how far the Q-value estimate may move from the target critic; experiments with different ranges c in {0.5, 1, 2, 5} show that 0.5 is a reasonable constraint value.

## A.3 Computation Overhead

We test the computational speed on a machine equipped with an Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz with 24 cores and a single Tesla T4 GPU. The unit "it/s" denotes the number of steps of interaction with the environment per second. Detailed figures are shown in Table 3 below. The results demonstrate that our method is 10.86% slower (265.41 → 236.58 it/s) than vanilla discrete SAC, while maintaining the same parameter size.

Table 3: Computational speed of our method and discrete SAC.

| Algorithm | Speed |
|---|---|
| discrete SAC | 265.41 it/s |
| discrete SAC + entropy-penalty | 246.83 it/s (-18.58) |
| discrete SAC + avg-q + q-clip | 250.27 it/s (-15.14) |
| discrete SAC + avg-q + q-clip + entropy-penalty (ours) | 236.58 it/s (-28.83) |

## A.4 Pseudo Code

Please refer to Algorithm 1.

Algorithm 1: Discrete SAC with entropy-penalty and double average Q-learning with Q-clip

Input: θ1, θ2, ϕ ▷ Initial parameters
Output: θ1, θ2, ϕ ▷ Optimized parameters
Hyperparameters: γ, β, c, τ
Initialise Qθ1 : S → R^|A|, Qθ2 : S → R^|A|, πϕ : S → [0, 1]^|A| ▷ Initialise local networks
Initialise Qθ′1 : S → R^|A|, Qθ′2 : S → R^|A| ▷ Initialise target networks
θ′1 ← θ1, θ′2 ← θ2 ▷ Equalise target and local network weights
D ← ∅ ▷ Initialise an empty replay buffer
for each iteration do
  for each environment step do
    at ∼ πϕ(at | st) ▷ Sample action from the policy
    st+1 ∼ p(st+1 | st, at) ▷ Sample transition from the environment
    Hπold ← E_{a∼πϕ(·|st)}[− log πϕ(a | st)] ▷ Calculate the entropy Hπold of the current policy πϕ
    D ← D ∪ {(st, at, r(st, at), st+1, Hπold)} ▷ Store the transition in the replay buffer
  end for
  for each gradient step do
    y ← r(st, at) + γ · avg(Qθ′1(st+1, π(st+1)), Qθ′2(st+1, π(st+1))) ▷ Double average Q-value estimation
    L(θi) ← max((Qθi − y)², ((Qθ′i + clip(Qθi − Qθ′i, −c, c)) − y)²) for i ∈ {1, 2} ▷ Clip the Q-value estimate towards the target critic network
    θi ← θi − λQ ∇θi L(θi) for i ∈ {1, 2} ▷ Update the Q-function parameters
    Hπ ← E_{a∼πϕ(·|st)}[− log πϕ(a | st)] ▷ Calculate the entropy Hπ of policy πϕ
    Jπ(ϕ) ← E_{st∼D}[E_{at∼πϕ}[α log(πϕ(at | st)) − Qθ(st, at)]] + β · ½(Hπold − Hπ)² ▷ Policy loss with entropy-penalty
    ϕ ← ϕ − λπ ∇ϕ Jπ(ϕ) ▷ Update policy weights
    α ← α − λ ∇α J(α) ▷ Update temperature
    Qθ′i ← τ Qθi + (1 − τ) Qθ′i for i ∈ {1, 2} ▷ Update target network weights
  end for
end for

## A.5 Other Atari Environments For Unstable Coupling Training Of DSAC

We conduct cross-validation in other Atari environments, as presented in Fig. 12-14. The results show that in other environments with deceptive rewards, the rapid decrease in policy entropy due to large Q variance similarly affects training.

## A.5.1 Games With Deceptive Rewards

We take the Atari games Assault, Jamesbond and MsPacman as examples to further illustrate the manifestation and impact of deceptive rewards in game environments (Fig. 11).

![17_image_0.png](17_image_0.png)

Figure 11: Three examples of Atari game environments with deceptive rewards.

In the game Assault, a mothership releases different kinds of aliens. They move along the screen, with the bottom-most alien firing various types of weapons.
The player controls a cannon that can shoot bullets horizontally or vertically to attack the aliens and the fireballs they shoot. Hitting an alien scores points, while being hit or overheating the cannon results in a loss of life. In Jamesbond, the player controls a craft that needs to complete various missions to achieve final victory. In the first mission, the player must navigate through a desert with craters, avoid overhead satellite scans and helicopter bombings, and score points by hitting diamonds through fixed-angle shooting.

![18_image_0.png](18_image_0.png)

Figure 12: Plots of Q function variance, estimation of Q-value, policy action entropy, episode length, number of steps with rewards and score on the Atari game Assault environment with discrete SAC over 10 million time steps.

As for MsPacman, the player controls Pacman, who scores points by eating dots in a maze while avoiding floating ghosts. When Pacman eats an energy pill, she can attack the ghosts to gain higher scores. In all three environments, the agent can quickly gain deceptive rewards through short-term payoffs. For Assault and Jamesbond, all points come from shooting actions that hit specific targets, while avoiding obstacles prevents the loss of life but does not bring any explicit reward. Thus, agents often excel at shooting but struggle with dodging. In MsPacman, the numerous dots in the maze provide many rewards for the agent's movement. As a result, the agent finds it difficult to learn advanced strategies such as avoiding ghosts and picking up energy pills to attack ghosts. The presence of deceptive rewards leads the training process into local optima, making it challenging to explore better, long-term strategies.

## A.5.2 Plots Of Training Process

We present the training process in the three aforementioned environments with deceptive rewards in Fig. 12-14.
It can be observed that in each case, deceptive rewards cause a rapid increase in the Q variance and a decrease in the policy entropy, driving the training process into local optima.

## A.6 Various Learning Rates For Drastic Changes Of Policy

We run experiments with various learning rates on Asterix using vanilla discrete SAC in Fig. 15. An excessively high learning rate leads to early convergence of the entropy, while an excessively low learning rate results in insufficient optimization. The experiments show that the entropy instability of discrete SAC is not caused by inappropriate learning rate settings.

## A.7 Hyperparameters

Please refer to Table 4.

![19_image_0.png](19_image_0.png)

Figure 13: Plots of Q function variance, estimation of Q-value, policy action entropy, episode length, number of steps with rewards and score on the Atari game Jamesbond environment with discrete SAC over 10 million time steps.

![19_image_1.png](19_image_1.png)

Figure 14: Plots of Q function variance, estimation of Q-value, policy action entropy, episode length, number of steps with rewards and score on the Atari game MsPacman environment with discrete SAC over 10 million time steps.

![20_image_0.png](20_image_0.png)

Figure 15: Measuring policy action entropy, estimation of Q-value and score on the Atari game Asterix environment with discrete SAC over 10 million time steps using different learning rates.

Table 4: Hyperparameters for discrete SAC and ours.

| Hyperparameter | Discrete SAC | Ours |
|---|---|---|
| learning rate | 10⁻⁵ | 10⁻⁵ |
| optimizer | Adam | Adam |
| mini-batch size | 64 | 64 |
| discount (γ) | 0.99 | 0.99 |
| buffer size | 10⁵ | 10⁵ |
| hidden layers | 2 | 2 |
| hidden units per layer | 512 | 512 |
| target smoothing coefficient (τ) | 0.005 | 0.005 |
| learning iterations per round | 1 | 1 |
| alpha | 0.05 | 0.05 |
| n-step | 3 | 3 |
| β | n/a | 0.5 |
| c | n/a | 0.5 |

## A.8 Cosine Similarity Comparison

Please refer to Figure 16.

![20_image_1.png](20_image_1.png)

Figure 16: Measuring cosine similarity of states on the Atari game Asterix compared between discrete SAC and discrete SAC with entropy-penalty over 10 million time steps.

## A.9 Different Choices Of Clip Ratio

In Figure 17, we compare the clip ratio and final scores of different values of c in our Q-clip.

![21_image_0.png](21_image_0.png)

Figure 17: Measuring clip-ratio and score on the Atari game Seaquest environment with our method over 10 million time steps using variants of the Q-clip range c with 0.1, 0.2, 0.5, 0.8 and 1.0.

## A.10 Different Choices Of Temperature α In Discrete SAC

In Figure 18, we compare the scores of discrete SAC using different α values. None of the plots show significant differences.

![21_image_1.png](21_image_1.png)

Figure 18: Measuring scores on Asterix by discrete SAC using variants of α with 0.01, 0.025, 0.05, 0.075 and 0.1 over 10 million time steps.
# Transductive Decoupled Variational Inference For Few-Shot Classification

Anuj Singh 1,2, Hadi Jamali-Rad 1,2

{a.r.singh, h.jamalirad}@tudelft.nl, {anuj.singh2, hadi.jamali-rad}@shell.com

1Delft University of Technology, The Netherlands
2*Shell Global Solutions International B.V., Amsterdam, The Netherlands*

Reviewed on OpenReview: *https://openreview.net/forum?id=bomdTc9HyL*

## Abstract

The versatility to learn from a handful of samples is the hallmark of human intelligence. Few-shot learning is an endeavour to bring this capability down to machines. Inspired by the promise and power of probabilistic deep learning, we propose a novel variational inference network for few-shot classification (coined as TRIDENT) to decouple the representation of an image into *context* and *label* latent variables, and simultaneously infer them in an intertwined fashion. To induce *task-awareness*, as part of the inference mechanics of TRIDENT, we exploit information across both query and support images of a few-shot task using a novel built-in attention-based transductive feature extraction module (we call AttFEX). Our extensive experimental results corroborate the efficacy of TRIDENT and demonstrate that, using the simplest of backbones and a meta-learning strategy, it sets a new state-of-the-art on the most commonly adopted datasets *mini*ImageNet and *tiered*ImageNet (offering up to 4% and 5% improvements, respectively), as well as for the recent challenging cross-domain *mini*Imagenet → CUB scenario, offering a significant margin (up to 20% improvement) beyond the best existing baselines¹.

## 1 Introduction

Deep learning algorithms are usually data hungry and require massive amounts of training data to reach a satisfactory level of performance on any task. To tackle this limitation, few-shot classification aims to learn to classify images from various unseen tasks in a data-deficient setting. In this exciting space, *metric learning* proposes to learn a shared feature extractor to embed the samples into a metric space of aggregated class embeddings (Sung et al., 2018; Vinyals et al., 2016; Snell et al., 2017; Wang et al., 2019; Liu et al., 2020). Due to limited data per class, these embeddings suffer from sample-bias and fail to efficiently represent class characteristics. Furthermore, sharing a feature extractor across tasks implies that the discriminative information learnt from the seen classes is equally effective on arbitrary unseen classes, which is not true in most cases. *Transductive task-aware* few-shot learning approaches (Bateni et al., 2022; Ye et al., 2020; Cui & Guo, 2021) address these limitations by exploiting information hidden in the unlabeled data. As a result, the model learns task-specific embeddings by aligning the features of the labelled and unlabelled task instances for optimal distance-metric-based label assignment. Since the alignment of these embeddings is still subject to the relevance of the characteristics captured by the shared feature extractors, task-aware methods sometimes fail to extract meaningful representations particularly relevant to classification. *Probabilistic* methods address sample-bias by relaxing the need to find point estimates to approximate data-dependent distributions of either high-dimensional model weights (Nguyen et al., 2019; Ravi & Beatson, 2019; Gordon et al., 2019; Hu et al., 2020) or lower-dimensional class prototypes (Sun et al., 2021; Zhang et al., 2019).
However, inferring a high-dimensional posterior of model parameters is inefficient in low-data regimes, and estimating distributions of class prototypes involves hand-crafted non-parametric aggregation techniques which may not be well suited to every unseen task.

¹Codebase available at https://github.com/anujinho/trident.

![1_image_0.png](1_image_0.png)

Figure 1: High-level process flow of TRIDENT. The inferred label latent variable zl contains class-characterizing information, as is reflected by the better separation of the distributions when compared to their context latent counterparts zc. The AttFEX module generates *task-aware* feature maps by exploiting information from both support and query images, which compensates for the lack of label vectors Y in inferring zl.

Although fit for purpose, all these approaches seem to overlook an important perspective. An image is composed of different attributes such as style, design, backdrop and setting, which are not necessarily relevant discriminative characteristics for classification. Here, we refer to these attributes as *contextual* information. On the other hand, other class-characterizing attributes (such as the wings of a bird, the trunk of an elephant, the hump on a camel's back) are critical for classification, irrespective of context. We refer to such attributes as *label* information. Typically, contextual information is majorly governed by context attributes, whereas the label characteristics are subtly embedded throughout an image. In other words, contextual information can be predominantly present across an image, whereas *attending* to subtle label information determines how effective a classification algorithm will be. Thus, we argue that attention to label-specific information should be ingrained into the mechanics of the classifier, decoupling it from contextual information. This becomes even more important in a few-shot setting where the network has to quickly learn from little data. Building upon this idea, we propose transductive variational inference of decoupled latent variables (coined as TRIDENT), to simultaneously infer decoupled label and context information using two intertwined variational networks. To induce task-awareness while constructing the variational inference mechanics of TRIDENT, we introduce a novel attention-based transductive feature extraction module (we call AttFEX), which further enhances the discriminative power of the inferred label attributes. This way, TRIDENT infers distributions instead of point estimates and injects a handcrafted inductive-bias into the network to guide the classification process. Our main contributions can be summarized as:

1. We propose TRIDENT, a variational inference network to simultaneously infer two salient *decoupled* attributes of an image (*label* and *context*) using two intertwined variational sub-networks (Fig. 1).

2. We introduce an attention-based transductive feature extraction module, AttFEX, to enable TRIDENT to see through and compare all images within a task, inducing transductive task-cognizance in the inference of label information.

3. We perform extensive evaluations to demonstrate that TRIDENT sets a new state-of-the-art by outperforming all existing baselines on the most commonly adopted datasets *mini*Imagenet and *tiered*Imagenet (up to 4% and 5%), as well as for the challenging cross-domain scenario of *mini*Imagenet → CUB (up to 20% improvement).

## 2 Related Work

Metric-based learning.
This body of work involves mapping input samples into a lower-dimensional embedding space and then classifying the unlabelled samples based on a distance or similarity metric. By parameterizing these mappings with neural networks and using differentiable similarity metrics for classification, these networks can be trained in an episodic manner (Vinyals et al., 2016) to perform few-shot classification. Prototypical Nets (Snell et al., 2017), SimpleShot (Wang et al., 2019), FRN (Wertheimer et al., 2021), Relation Networks (Sung et al., 2018), Matching Networks (Vinyals et al., 2016), and variants of Graph Neural Nets (Satorras & Estrach, 2018; Yang et al., 2020) are a few examples of seminal ideas here.

Transductive Feature-Extraction and Inference. Transductive feature extraction, or transductive task-aware learning, is a variant of metric-learning with an adaptation mechanism that *aligns* support and query feature vectors in the embedding space for a better representation of task-specific discriminative information. This not only improves the discriminative ability of classifiers across tasks, but also alleviates the problem of overfitting on the limited support set, since information from the query set is also used for extracting features of images in a task. CNAPS (Requeima et al., 2019), Transductive-CNAPS (Bateni et al., 2022), FEAT (Ye et al., 2020), Assoc-Align (Afrasiyabi et al., 2020), TPMN (Wu et al., 2021) and CTM (Li et al., 2019) are prime examples of such methods. Next to transduction for task-aware feature extraction, there are methods that use *transductive inference* to classify all the query samples at once by jointly assigning them labels, as opposed to their inductive counterparts where prediction is done on the samples one at a time. This is done either by iteratively propagating labels from the support to the query samples, or by fine-tuning a pre-trained backbone using an additional entropy loss on all query samples, which encourages confident class predictions at query samples. TPN (Liu et al., 2019), Ent-Min (Dhillon et al., 2020), TIM (Boudiaf et al., 2020), Transductive-CNAPS (Bateni et al., 2022), LaplacianShot (Ziko et al., 2020), DPGN (Yang et al., 2020) and ReRank (SHEN et al., 2021) are a few notable examples in this space that usually report state-of-the-art results in certain few-shot classification settings (Liu et al., 2019). That being said, TRIDENT can be regarded as a transductive feature-extraction method, owing to AttFEX's unique ability to see through and compare all images within a task.

Optimization-based meta-learning. These methods optimize for model parameters that are sensitive to task objective functions for fast gradient-based adaptation to new tasks. MAML (Finn et al., 2017) and its variants (Rajeswaran et al., 2019; Nichol et al., 2018b; Oh et al., 2021) are a few prominent examples, while LEO (Rusu et al., 2019) efficiently meta-updates its parameters in a lower-dimensional latent space. Meta-learner LSTM (Ravi & Larochelle, 2017b) uses a separate meta-learner model to learn the exact optimization algorithm used to train another 'learner' neural network classifier.

Probabilistic learning. The estimated parameters of the typical gradient-based meta-learning methods discussed earlier (Finn et al., 2017; Rusu et al., 2019; Mishra et al., 2018; Nichol et al., 2018b; Rajeswaran et al., 2019) have high variance due to the small task sample size.
To deal with this, a natural extension is to model the uncertainty by treating these parameters as latent variables in a Bayesian framework, as proposed in Neural Statistician (Edwards & Storkey, 2017), PLATIPUS (Finn et al., 2018), VAMPIRE (Nguyen et al., 2019), ABML (Ravi & Beatson, 2019), VERSA (Gordon et al., 2019), SIB (Hu et al., 2020) and SAMOVAR (Iakovleva et al., 2020). Methods like ABPML (Sun et al., 2021) and VariationalFSL (Zhang et al., 2019) infer latent variables of class prototypes to perform classification and avoid inferring high-dimensional model parameters. ABPML (Sun et al., 2021) and VariationalFSL (Zhang et al., 2019) are the closest to our approach. In contrast to these two methods, we avoid hand-crafting class-level aggregations. Additionally, we enhance variational inference by incorporating a classification-relevant inductive bias through the decoupling of label and context information.

## 3 Problem Definition

Consider a labelled dataset $D = \{(\mathbf{x}_i, y_i) \,|\, i \in [1, N']\}$ of images $\mathbf{x}_i$ and class labels $y_i$. This dataset D is divided into three disjoint subsets, $D = \{D_{tr} \cup D_{val} \cup D_{test}\}$, referring to the training, validation, and test subsets, respectively. The validation dataset $D_{val}$ is used for model selection and the testing dataset $D_{test}$ for final evaluation. Following standard few-shot classification settings, as proposed in Vinyals et al. (2016); Sung et al. (2018); Snell et al. (2017), we use episodic training on a set of tasks $T_i \sim p(T)$. The tasks are constructed by drawing K random samples from N different classes, which we denote as an (N-way, K-shot) task. Concretely, each task $T_i$ is composed of a *support* and a *query* set. The support set $S = \{(\mathbf{x}^S_{kn}, y^S_{kn}) \,|\, k \in [1, K], n \in [1, N]\}$ contains K samples per class and the query set $Q = \{(\mathbf{x}^Q_{kn}, y^Q_{kn}) \,|\, k \in [1, Q], n \in [1, N]\}$ contains Q samples per class. For a given task, the NQ query and NK support images are disjoint to assess the generalization performance.

![3_image_0.png](3_image_0.png)

Figure 2: Generative model of TRIDENT. Dotted lines indicate variational inference and solid lines refer to generative processes. The inference and generative parameters are color-coded to correspond to their respective architectures indicated in Fig. 1 and Fig. 4.
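To make the episodic setting concrete, the following sketch draws one (N-way, K-shot) task with disjoint support and query sets as defined above; the dataset layout (a list of (image, label) pairs) and per-task relabelling are assumptions for illustration.

```python
import random
from collections import defaultdict

def sample_task(dataset, n_way=5, k_shot=1, q_queries=10):
    """Draw one (N-way, K-shot) episode with disjoint support and query
    sets. `dataset` is an assumed list of (image, label) pairs; the N
    sampled classes are relabelled 0..N-1 within the task."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    support, query = [], []
    for n, cls in enumerate(random.sample(list(by_class), n_way)):
        samples = random.sample(by_class[cls], k_shot + q_queries)
        support += [(x, n) for x in samples[:k_shot]]
        query += [(x, n) for x in samples[k_shot:]]
    return support, query
```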
## 4 The Proposed Method: TRIDENT

Let us start with the high-level idea. The proposed approach is devised to learn meaningful representations that capture two pivotal characteristics of an image by modelling them as separate latent variables: (i) zc representing *context*, and (ii) zl embodying class *labels*. Inferring these two latent variables simultaneously allows zl to learn meaningful distributions of class-discriminating characteristics *decoupled* from the context features represented by zc. We argue that learning zl as the sole latent variable for classification results in capturing a mixture of true label and other context information. This in turn can lead to sub-optimal classification performance, especially in a few-shot setting where the information per class is scarce and the network has to adapt and generalize quickly. By inferring decoupled label and context latent variables, we inject a handcrafted inductive-bias that incorporates only relevant characteristics, and thus ameliorates the network's classification performance.

## 4.1 Generative Process

The directed graphical model in Fig. 2 illustrates the common underlying generative process p such that $p_i = p(\mathbf{x}_i, y_i \,|\, \mathbf{z}_{l_i}, \mathbf{z}_{c_i})$. For the sake of brevity, in the following we drop the sample index i, as we always refer to terms associated with a single data sample. We work on the logical premise that the label latent variable zl is responsible for generating the class label as well as for image reconstruction, whereas the context latent variable zc is only responsible for image reconstruction (solid lines in the figure). Formally, the data is explained by the generative processes $p_{\theta_1}(y \,|\, \mathbf{z}_l) = \text{Cat}(y \,|\, \mathbf{z}_l)$ and $p_{\theta_2}(\mathbf{x} \,|\, \mathbf{z}_l, \mathbf{z}_c) = g_{\theta_2}(\mathbf{x}; \mathbf{z}_l, \mathbf{z}_c)$, where Cat(.) refers to a multinomial distribution and $g_{\theta_2}(\mathbf{x}; \mathbf{z}_l, \mathbf{z}_c)$ is a suitable likelihood function such as a Gaussian or Bernoulli distribution. The likelihoods of both these generative processes are parameterized using deep neural networks, and the priors of the latent variables are chosen to be standard multivariate Gaussian distributions (Kingma & Welling, 2014; Kingma et al., 2014): $p(\mathbf{z}_c) = \mathcal{N}(\mathbf{z}_c \,|\, \mathbf{0}, \mathbf{I})$ and $p(\mathbf{z}_l) = \mathcal{N}(\mathbf{z}_l \,|\, \mathbf{0}, \mathbf{I})$.

## 4.2 Variational Inference Of Decoupled zl And zc

Computing exact posterior distributions is intractable due to the high dimensionality and non-linearity of the deep neural network parameter space. Following Kingma & Welling (2014); Kingma et al. (2014), we instead construct an approximate posterior over the latent variables by introducing a fixed-form distribution q(zl, zc | x, y) parameterized by ϕ. By using qϕ(.) as an inference network, the inference is rendered tractable, scalable and amortized, since ϕ now acts as the global variational parameter. We assume qϕ has the factorized form $q_\phi(\mathbf{z}_c, \mathbf{z}_l \,|\, \mathbf{x}, y) = q_{\phi_1}(\mathbf{z}_l \,|\, \mathbf{x}, \mathbf{z}_c)\, q_{\phi_2}(\mathbf{z}_c \,|\, \mathbf{x})$, where $q_{\phi_1}(.)$ and $q_{\phi_2}(.)$ are assumed to be multivariate Gaussian distributions. As is also depicted in Fig. 2, we use zc as input to $q_{\phi_1}(.)$ to infer zl because of their conditional dependence given x. This way we forge a path to allow *necessary* context latent information to flow through the label inference network. On the other hand, the opposite direction (using zl to infer zc) is unnecessary, because label information does not directly contribute to the extraction of context features. We will further reflect on this design choice in the next subsection. Neural networks are then used to parameterize both inference networks as:

$$\begin{array}{l}{{q_{\phi_{2}}\left(\mathbf{z}_{c}\,|\,\mathbf{x}\right)={\mathcal{N}}\left(\mathbf{z}_{c}\,|\,\boldsymbol{\mu}_{\phi_{2}}(\mathbf{x}),\mathrm{diag}(\boldsymbol{\sigma}_{\phi_{2}}^{2}(\mathbf{x}))\right),}}\\ {{q_{\phi_{1}}\left(\mathbf{z}_{l}\,|\,\mathbf{x},\mathbf{z}_{c}\right)={\mathcal{N}}\left(\mathbf{z}_{l}\,|\,\boldsymbol{\mu}_{\phi_{1}}(\mathbf{x},\mathbf{z}_{c}),\mathrm{diag}(\boldsymbol{\sigma}_{\phi_{1}}^{2}(\mathbf{x},\mathbf{z}_{c}))\right).}}\end{array}\qquad(1)$$

To find the optimal *approximate* posterior, we derive the evidence lower bound (ELBO) on the marginal likelihood of the data to form our objective function:

$$p(\mathbf{x},y)=\iint p(\mathbf{x},y\,|\,\mathbf{z}_{c},\mathbf{z}_{l})\,p(\mathbf{z}_{c},\mathbf{z}_{l})\,d\mathbf{z}_{c}\,d\mathbf{z}_{l}=\mathbb{E}_{q(\mathbf{z}_{c},\mathbf{z}_{l}\,|\,\mathbf{x})}\left[\frac{p(\mathbf{x}\,|\,\mathbf{z}_{l},\mathbf{z}_{c})\,p(y\,|\,\mathbf{z}_{l})\,p(\mathbf{z}_{l})\,p(\mathbf{z}_{c})}{q(\mathbf{z}_{l},\mathbf{z}_{c}\,|\,\mathbf{x})}\right],$$

$$\ln p(\mathbf{x},y)\geq\mathbb{E}_{q(\mathbf{z}_{c},\mathbf{z}_{l}\,|\,\mathbf{x})}\left[\ln\frac{p(\mathbf{x}\,|\,\mathbf{z}_{l},\mathbf{z}_{c})\,p(y\,|\,\mathbf{z}_{l})\,p(\mathbf{z}_{l})\,p(\mathbf{z}_{c})}{q(\mathbf{z}_{c},\mathbf{z}_{l}\,|\,\mathbf{x})}\right]=\mathbb{E}_{q_{\phi_{2}}}\mathbb{E}_{q_{\phi_{1}}}\left[\ln\frac{p(\mathbf{x}\,|\,\mathbf{z}_{c},\mathbf{z}_{l})\,p(y\,|\,\mathbf{z}_{l})\,p(\mathbf{z}_{c})\,p(\mathbf{z}_{l})}{q(\mathbf{z}_{c}\,|\,\mathbf{x})\,q(\mathbf{z}_{l}\,|\,\mathbf{x},\mathbf{z}_{c})}\right].$$
Denoting Ψ = (θ1, θ2, ϕ1, ϕ2), the negative ELBO can be given by

$$\begin{split}\mathcal{L}(\Psi)=&-\mathbb{E}_{q_{\phi_{2}}}\mathbb{E}_{q_{\phi_{1}}}\left[\ln p_{\theta_{2}}(\mathbf{x}\,|\,\mathbf{z}_{c},\mathbf{z}_{l})+\ln p_{\theta_{1}}(y\,|\,\mathbf{z}_{l})\right]\\&+\mathbb{E}_{q_{\phi_{2}}}\left[D_{KL}\big(q_{\phi_{1}}(\mathbf{z}_{l}\,|\,\mathbf{x},\mathbf{z}_{c})\,\|\,p(\mathbf{z}_{l})\big)\right]+D_{KL}\big(q_{\phi_{2}}(\mathbf{z}_{c}\,|\,\mathbf{x})\,\|\,p(\mathbf{z}_{c})\big),\end{split}\qquad(2)$$

where the second line follows the graphical model in Fig. 2, and E(.) and ln(.) denote the expectation operator and the natural logarithm, respectively. We avoid computing biased gradients by following the re-parameterization trick from Kingma & Welling (2014). Note that in equation 1 we deliberately choose to exclude the label information y as input to $q_{\phi_1}(.)$, to be able to exploit the associated generative network $p_{\theta_1}(y \,|\, \mathbf{z}_l)$ as a classifier. The consequence of this design choice, and the proposed solution to accommodate it, are discussed in the next subsection.
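For intuition, the sketch below shows a one-sample Monte-Carlo estimate of this negative ELBO in PyTorch. The model interface (encode_context, encode_label, decode, classify) is an illustrative assumption rather than the authors' API, and the scaling factors α1, α2 anticipate equation 9.

```python
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def negative_elbo(x, y, model, alpha1=1e-2, alpha2=100.0):
    """One-sample Monte-Carlo estimate of the negative ELBO in equation 2
    (a sketch; `model` and its methods are assumed stand-ins)."""
    prior = Normal(0.0, 1.0)
    q_zc = model.encode_context(x)        # q_phi2(z_c | x), a Normal
    zc = q_zc.rsample()                   # reparameterization trick
    q_zl = model.encode_label(x, zc)      # q_phi1(z_l | x, z_c), a Normal
    zl = q_zl.rsample()
    # Negative log-likelihoods: Gaussian -> MSE, categorical -> cross-entropy,
    # scaled as in equation 9.
    recon = alpha1 * F.mse_loss(model.decode(zl, zc), x, reduction="sum")
    ce = alpha2 * F.cross_entropy(model.classify(zl), y, reduction="sum")
    # Analytic KL terms against the standard Gaussian priors.
    kl = kl_divergence(q_zl, prior).sum() + kl_divergence(q_zc, prior).sum()
    return recon + ce + kl
```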
## 4.3 AttFEX For Transductive Feature Extraction

Our design choice to omit the label information y when inferring zl (as discussed for equation 1) can create an information bottleneck and be counter-productive to the discriminative power zl holds. However, it allows us to employ zl for classification rather than for reconstruction of the label. To compensate for this bottleneck, we introduce an attention-based transductive feature extractor (AttFEX) module that allows the network $q_{\phi_1}(\mathbf{z}_l \,|\, \mathbf{x}, \mathbf{z}_c)$ to see through and compare images across all classes within each task (irrespective of being from the query or support sets), and thus induces *task-cognizance* in the inference network. We first extract the feature maps of all images in the task using a convolutional block F = ConvEnc(X), where $\mathbf{X} \in \mathbb{R}^{N(K+Q)\times C\times W\times H}$ and $\mathbf{F} \in \mathbb{R}^{N(K+Q)\times C'\times W'\times H'}$. The feature map tensor F is then transposed into $\mathbf{F}' \in \mathbb{R}^{C'\times N(K+Q)\times W'\times H'}$ and fed into two consecutive 1 × 1 convolution blocks. This helps the network utilize information across corresponding pixels of all images in a task $T_i$, which can be considered a parametric comparison of classes. We leverage the fact that ConvEnc already extracts local pixel information using larger kernels, and thus use parameter-light 1 × 1 convolutions subsequently to focus only on individual pixels. Let $\mathbf{F}'_i$ denote the i-th channel (or feature map layer) out of the C′ available, and let ReLU denote the rectified linear unit activation. The 1 × 1 convolution block (Conv1×1) is formulated as follows:

$$\begin{array}{l}\mathbf{M}_{i}=\texttt{ReLU}\big(\texttt{Conv}_{1\times1}(\mathbf{F}^{\prime}_{i},\mathbf{W}_{M})\big),\ \forall i\in[1,C^{\prime}];\\ \mathbf{N}_{j}=\texttt{ReLU}\big(\texttt{Conv}_{1\times1}(\mathbf{M}_{j},\mathbf{W}_{N})\big),\ \forall j\in[1,C^{\prime}];\end{array}\qquad(3)$$

where $\mathbf{N} \in \mathbb{R}^{C'\times 32\times W'\times H'}$, and $\mathbf{W}_M \in \mathbb{R}^{64\times N(K+Q)\times 1\times 1}$, $\mathbf{W}_N \in \mathbb{R}^{32\times 64\times 1\times 1}$ denote the learnable weights.

![5_image_0.png](5_image_0.png)

Figure 3: AttFEX module depicting colors as images and shades as feature maps. We illustrate only 3 image feature maps and 3 channels instead of 32 for N, for the sake of simplicity.

Next, we want to blend information across feature maps, for which we use a self-attention mechanism (Vaswani et al., 2017) across $\mathbf{N}_j$, ∀j ∈ [1, 32]. To do so, we feed N to the query, key and value extraction networks $f_q(.; \mathbf{W}_Q)$, $f_k(.; \mathbf{W}_K)$, $f_v(.; \mathbf{W}_V)$, which are also designed to be 1 × 1 convolutions:

$$\begin{array}{l}\mathbf{Q}_{i}=\texttt{ReLU}\big(\texttt{Conv}_{1\times1}(\mathbf{N}_{i},\mathbf{W}_{Q})\big),\ \forall i\in[1,C^{\prime}];\\ \mathbf{K}_{i}=\texttt{ReLU}\big(\texttt{Conv}_{1\times1}(\mathbf{N}_{i},\mathbf{W}_{K})\big),\ \forall i\in[1,C^{\prime}];\\ \mathbf{V}_{i}=\texttt{ReLU}\big(\texttt{Conv}_{1\times1}(\mathbf{N}_{i},\mathbf{W}_{V})\big),\ \forall i\in[1,C^{\prime}];\end{array}\qquad(4)$$

where $\mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^{1\times 32\times 1\times 1}$ are the learnable weights and $\mathbf{Q}, \mathbf{K}, \mathbf{V} \in \mathbb{R}^{C'\times 1\times W'\times H'}$ are the query, key and value tensors. Next, each feature map $\mathbf{N}_j$ is mapped to its output tensor $\mathbf{G}_j$ by computing a weighted sum of the values, where each weight (within parentheses in equation 5) measures the compatibility (or similarity) between the query and its corresponding key tensor using an inner product:

$$\mathbf{G}_{i}=\sum_{j=1}^{C^{\prime}}\left(\frac{\exp\left(\mathbf{Q}_{i}\cdot\mathbf{K}_{j}\right)}{\sqrt{d_{k}}\,\sum_{k=1}^{C^{\prime}}\exp\left(\mathbf{Q}_{i}\cdot\mathbf{K}_{k}\right)}\right)\mathbf{V}_{i},\qquad(5)$$

where $d_k = W' \times H'$ and $\mathbf{G}_i \in \mathbb{R}^{1\times C'\times W'\times H'}$, ∀i. Finally, we transform the original feature maps F by applying a Hadamard product between the feature mask G and F, thus rendering the required feature maps transductive:

$$\tilde{\mathbf{F}}^{S}=\mathbf{G}\circ\mathbf{F}^{S}\quad\text{or}\quad\tilde{\mathbf{F}}^{Q}=\mathbf{G}\circ\mathbf{F}^{Q}.\qquad(6)$$

Here, $\mathbf{F}^S$ and $\mathbf{F}^Q$ represent the feature maps corresponding to the support and query images, respectively. As a result of operating on this channel-pixel distribution across the images in a task, $\tilde{\mathbf{F}}^S$ and $\tilde{\mathbf{F}}^Q$ are rendered transductive. Unlike other attention-based few-shot learning methods (Ye et al., 2020; Vinyals et al., 2016), we do not compute an attention-based transform on the flattened support and query vectors, but rather on the outputs of Conv1×1(.; WN) to effectively fuse information from multiple class-pixel comparisons. Note that the query tensor Q must not be confused with the query set Q of a task.
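The following PyTorch sketch instantiates AttFEX as described by equations 3-6. The 64- and 32-channel intermediate maps follow the text, while the exact attention bookkeeping (flattening pixels for a standard scaled dot-product over channels) is one reading of equations 4-5, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttFEX(nn.Module):
    """A sketch of the AttFEX module (equations 3-6); layer sizes follow
    the text, other implementation details are assumptions."""

    def __init__(self, n_images):
        super().__init__()
        # 1x1 convs mix information across the N(K+Q) images of the task.
        self.conv_m = nn.Conv2d(n_images, 64, kernel_size=1)
        self.conv_n = nn.Conv2d(64, 32, kernel_size=1)
        # Query/key/value maps, also 1x1 convs (equation 4).
        self.to_q = nn.Conv2d(32, 1, kernel_size=1)
        self.to_k = nn.Conv2d(32, 1, kernel_size=1)
        self.to_v = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, f):                      # f: [N(K+Q), C', W', H']
        fp = f.transpose(0, 1)                 # F': [C', N(K+Q), W', H']
        n = F.relu(self.conv_n(F.relu(self.conv_m(fp))))  # [C', 32, W', H']
        q = F.relu(self.to_q(n)).flatten(1)    # [C', W'*H']
        k = F.relu(self.to_k(n)).flatten(1)
        v = F.relu(self.to_v(n))               # [C', 1, W', H']
        d_k = q.size(-1)                       # d_k = W' * H'
        att = torch.softmax(q @ k.t() / d_k ** 0.5, dim=-1)  # [C', C']
        g = (att @ v.flatten(1)).view_as(v).squeeze(1)        # mask G
        return f * g.unsqueeze(0)              # Hadamard product with F
```

For a (5-way, 1-shot) task with 10 queries per class, n_images would be 5 · (1 + 10) = 55; since conv_m fixes this count at construction, the sketch is tied to one task configuration, which is consistent with AttFEX operating on all N(K + Q) samples of a task.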
## 4.4 TRIDENT's Transductive ELBO

AttFEX's transductive feature extraction process introduces task-level dependencies in the variational formulation of $q_{\phi_1}$. To incorporate this dependency in equation 2, we now revise the derivation of our negative ELBO so that it is defined in terms of the entire task set and not individual data points. Let $\mathbf{X} = \mathbf{X}^S \cup \mathbf{X}^Q$ denote the tensor containing all images sampled in a task, $Y = Y^S \cup Y^Q$ denote all the labels corresponding to the images in the task, and $N' = NK + NQ$ be the total number of samples in a task. Considering all samples to be independently and identically distributed (i.i.d.), the likelihood of the entire task can be written as:

$$p(\mathbf{X},Y)=\prod_{i=1}^{N^{\prime}}\iint p(\mathbf{x}_{i},y_{i}\,|\,\mathbf{z}_{c_i},\mathbf{z}_{l_i})\,p(\mathbf{z}_{c_i},\mathbf{z}_{l_i})\,d\mathbf{z}_{c_i}\,d\mathbf{z}_{l_i}.\qquad(7)$$

Since the generative networks $p_{\theta_2}(\mathbf{x} \,|\, \mathbf{z}_c, \mathbf{z}_l)$ and $p_{\theta_1}(y \,|\, \mathbf{z}_l)$ remain inductive, while the approximate inference network $q_{\phi_1}(\mathbf{z}_l \,|\, \mathbf{X}, \mathbf{z}_c)$ becomes transductive (via AttFEX), the log-likelihood now becomes:

$$\ln p(\mathbf{X},Y)\geq\sum_{i=1}^{N^{\prime}}\mathbb{E}_{q_{\phi_{2}}}\left[\mathbb{E}_{q_{\phi_{1}}}\left[\ln\left(\frac{p(\mathbf{x}_{i}\,|\,\mathbf{z}_{c_i},\mathbf{z}_{l_i})\,p(y_i\,|\,\mathbf{z}_{l_i})\,p(\mathbf{z}_{c})\,p(\mathbf{z}_{l})}{q(\mathbf{z}_{c_i}\,|\,\mathbf{x}_{i})\,q(\mathbf{z}_{l_i}\,|\,\mathbf{X},\mathbf{z}_{c_i})}\right)\right]\right].$$

Finally, the overall negative ELBO for the entire task can be given by

$$\begin{split}\mathcal{L}(\Psi)=&-\sum_{i=1}^{N'}\mathbb{E}_{q_{\phi_{2}}}\mathbb{E}_{q_{\phi_{1}}}\left[\ln p_{\theta_{2}}(\mathbf{x}_{i}\,|\,\mathbf{z}_{c_i},\mathbf{z}_{l_i})+\ln p_{\theta_{1}}(y_{i}\,|\,\mathbf{z}_{l_i})\right]\\&+\mathbb{E}_{q_{\phi_{2}}}\left[D_{KL}\big(q_{\phi_{1}}(\mathbf{z}_{l_i}\,|\,\mathbf{X},\mathbf{z}_{c_i})\,\|\,p(\mathbf{z}_{l})\big)\right]+D_{KL}\big(q_{\phi_{2}}(\mathbf{z}_{c_i}\,|\,\mathbf{x}_{i})\,\|\,p(\mathbf{z}_{c})\big).\end{split}\qquad(8)$$

Assuming Gaussian distributions for the priors as well as the variational distributions allows us to compute the KL divergences of zl and zc (the last two terms in equation 8) analytically (Kingma & Welling, 2014). By considering a multivariate Gaussian distribution and a multinomial distribution as the likelihood functions for $p_{\theta_2}(\mathbf{x} \,|\, \mathbf{z}_c, \mathbf{z}_l)$ and $p_{\theta_1}(y \,|\, \mathbf{z}_l)$, respectively, the negative log-likelihood of x becomes the mean squared error (MSE) between the reconstructed images x̃ and the ground-truth images x, while the negative log-likelihood of y becomes the cross-entropy between the actual labels y and the predicted labels ỹ. After working equation 8 out, we arrive at our overall objective function L = LR + LC, where:

$$\begin{split}\mathcal{L}_{R}&=\alpha_{1}\sum_{i=1}^{N'}\lVert\mathbf{x}_{i}-\tilde{\mathbf{x}}_{i}\rVert^{2}-KL(\boldsymbol{\mu}_{c_i},\boldsymbol{\sigma}_{c_i}),\\ \mathcal{L}_{C}&=-\alpha_{2}\sum_{i=1}^{N'}\sum_{n=1}^{N}[y_{i}]_{n}\ln p_{\theta_{1}}(\tilde{y}_{i}=n\,|\,\mathbf{z}_{l_i})-KL(\boldsymbol{\mu}_{l_i},\boldsymbol{\sigma}_{l_i}),\end{split}\qquad(9)$$

where $KL(\boldsymbol{\mu},\boldsymbol{\sigma})=\frac{1}{2}\sum_{d=1}^{D}\left(1+2\ln(\sigma^{d})-(\mu^{d})^{2}-(\sigma^{d})^{2}\right)$, $[y_i]_n$ denotes the n-th dimension of the i-th one-hot encoded ground-truth vector y, D denotes the dimension of the latent space, N is the total number of classes in an (N-way, K-shot) task, α1, α2 are constant scaling factors, µc and σ²c denote the mean and variance vectors of the context latent distribution, and µl and σ²l denote the mean and variance vectors of the label latent distribution. The hyper-parameters α1, α2 only scale the evidence lower bound appropriately, since the reconstruction loss is in practice three orders of magnitude greater than the cross-entropy loss. Moreover, these scaling factors can be understood as gradient-scaling parameters which help improve training with heterogeneous likelihoods (Gaussian and categorical in our case) (Javaloy et al., 2022).
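Since the sign convention of the closed-form KL(µ, σ) above is easy to get wrong, here is a direct transcription; note that, as defined, it equals the *negative* KL divergence to the standard Gaussian prior, which is why it appears with a minus sign in LR and LC. The log-variance parameterization is an implementation assumption.

```python
import torch

def kl_term(mu, log_sigma):
    """KL(mu, sigma) of equation 9:
    0.5 * sum_d (1 + 2 ln sigma_d - mu_d^2 - sigma_d^2).
    This is the negative KL divergence to N(0, I)."""
    return 0.5 * torch.sum(
        1 + 2 * log_sigma - mu.pow(2) - (2 * log_sigma).exp(), dim=-1
    )
```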
## 4.5 Algorithmic Overview And Training Strategy

Overview of TRIDENT. The complete architecture of TRIDENT is illustrated in Fig. 4. The ConvEnc feature extractor and the linear layers $\mu_{\phi_2}(.)$, $\sigma^2_{\phi_2}(.)$ constitute the inference network $q_{\phi_2}$ of the context latent variable (bottom row of Fig. 4). The AttFEX module, another ConvEnc, and the linear layers $\mu_{\phi_1}(.)$ and $\sigma^2_{\phi_1}(.)$ make up the inference network $q_{\phi_1}$ of the label latent variable (top row of Fig. 4). The proposed approach, TRIDENT, is described in Algorithm 1. Note that TRIDENT is trained in a MAML (Finn et al., 2017) fashion, where, depending on the inner or outer loop, the support or query set (g ∈ {S, Q}) will be the reference, respectively.

![7_image_0.png](7_image_0.png)

Figure 4: TRIDENT is comprised of two intertwined variational networks. $\mathbf{Z}^g_c$ is concatenated with the output of AttFEX and used for inferring $\mathbf{Z}^g_l$, where g ∈ {S, Q}. Next, both $\mathbf{Z}^g_l$ and $\mathbf{Z}^g_c$ are used to reconstruct images $\tilde{\mathbf{X}}^g$, while $\mathbf{Z}^g_l$ is used to extract $\tilde{Y}^g$.

Algorithm 1: TRIDENT

Require: $\mathbf{X}^S$, $\mathbf{X}^Q$, $Y^g$, $\mathbf{X}^g_{CE}$, where g ∈ {S, Q}
1 Sample: $\mathbf{Z}^g_c \sim q_{\phi_2}\big(\mathbf{Z}_c \,|\, \mu_{\phi_2}(\mathbf{X}^g_{CE}), \mathrm{diag}(\sigma^2_{\phi_2}(\mathbf{X}^g_{CE}))\big)$
2 Compute *task-cognizant* embeddings: $[\tilde{\mathbf{F}}^S, \tilde{\mathbf{F}}^Q] = \texttt{AttFEX}(\texttt{ConvEnc}(\mathbf{X}))$; $\mathbf{X} = \mathbf{X}^S \cup \mathbf{X}^Q$
3 Concatenate $\mathbf{Z}^g_c$ and $\tilde{\mathbf{F}}^g$ into $[\tilde{\mathbf{F}}^g, \mathbf{Z}^g_c]$ and sample: $\mathbf{Z}^g_l \sim q_{\phi_1}\big(\mathbf{Z}_l \,|\, \mu_{\phi_1}([\tilde{\mathbf{F}}^g, \mathbf{Z}^g_c]), \mathrm{diag}(\sigma^2_{\phi_1}([\tilde{\mathbf{F}}^g, \mathbf{Z}^g_c]))\big)$
4 Reconstruct $\mathbf{X}^g$ using $\tilde{\mathbf{X}}^g = p_{\theta_2}(\mathbf{X} \,|\, \mathbf{Z}^g_l, \mathbf{Z}^g_c)$
5 Extract class-conditional probabilities using: $p(\tilde{Y}^g \,|\, \mathbf{Z}^g_l) = \texttt{softmax}\big(p_{\theta_1}(Y^g \,|\, \mathbf{Z}^g_l)\big)$
6 Compute $\mathcal{L}^g = \mathcal{L}^g_R + \mathcal{L}^g_C$ using equation 9
Return: $\mathcal{L}^g$

First, the lower ConvEnc block extracts feature maps $\mathbf{X}^g_{CE} = \texttt{ConvEnc}(\mathbf{X}^g)$. The $\mathbf{X}^g_{CE}$'s are then flattened and passed onto $\mu_{\phi_2}(.)$, $\sigma^2_{\phi_2}(.)$, which respectively output the mean and variance vectors of the context latent distribution, as discussed in equation 1. This is done either for the entire support or the query images $\mathbf{X}^g$, where g ∈ {S, Q} for a given task $T_i$. We then sample a set of vectors $\mathbf{Z}^g_c$ (subscript c for *context*) from their corresponding Gaussian distributions using the re-parameterization trick (line 1, Algorithm 1). Upon passing $\mathbf{X} = \mathbf{X}^S \cup \mathbf{X}^Q$ through the upper ConvEnc, the AttFEX module of $q_{\phi_1}$ comes into play to create the task-cognizant feature maps $\tilde{\mathbf{F}}^g$ for either S or Q (line 2). $\mathbf{Z}^g_c$ together with $\tilde{\mathbf{F}}^g$ are passed onto the linear layers $\mu_{\phi_1}(.)$, $\sigma^2_{\phi_1}(.)$ to generate the mean and variance vectors of the *label* latent Gaussian distributions (line 3). After sampling the set of vectors $\mathbf{Z}^g_l$ (subscript l for *label*) from their corresponding distributions, we use $\mathbf{Z}^g_l$ and $\mathbf{Z}^g_c$ to reconstruct the images $\tilde{\mathbf{X}}^g$ using the generative network $p_{\theta_2}$ (line 4). Next, the $\mathbf{Z}^g_l$'s are input to the classifier network $p_{\theta_1}$ to generate the class logits, which are normalized using a softmax(.), resulting in the class-conditional probabilities $p(\tilde{Y}^g \,|\, \mathbf{Z}^g_l)$ (line 5). Finally (in line 6), using the outputs of all the components discussed earlier, we calculate the loss $\mathcal{L}^g$ as formulated in equations 8 and 9.
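Read as code, one pass of Algorithm 1 for a reference split g could look as follows; the `model` bundle, its method names, and the index-based split selection are illustrative assumptions tying together the sub-networks of Fig. 4.

```python
import torch

def trident_step(model, x_all, idx_g, x_g, y_g):
    """One pass of Algorithm 1 for the reference split g in {S, Q}.
    `x_all` holds all N(K+Q) task images; `idx_g` selects the rows of
    split g. `model` is an assumed stand-in, not the authors' API."""
    q_zc = model.q_context(x_g)                          # line 1: q_phi2
    zc = q_zc.rsample()
    feats = model.attfex(model.conv_enc(x_all))          # line 2: AttFEX
    f_g = feats[idx_g].flatten(start_dim=1)
    q_zl = model.q_label(torch.cat([f_g, zc], dim=-1))   # line 3: q_phi1
    zl = q_zl.rsample()
    x_rec = model.decode(zl, zc)                         # line 4: p_theta2
    logits = model.classify(zl)                          # line 5: p_theta1
    return model.loss(x_g, x_rec, y_g, logits, q_zc, q_zl)  # line 6: eq. 9
```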
Training strategy. An important aspect of the training procedure of TRIDENT is that its set of parameters Ψ = (θ1, θ2, ϕ1, ϕ2) is meta-learnt by back-propagating through the adaptation procedure on the support set, as proposed in MAML (Finn et al., 2017) and illustrated here in Algorithm 2. This increases the sensitivity of the parameters Ψ towards the loss function for fast adaptation to unseen tasks, and reduces generalization errors on the query set Q, as discussed from a dynamical systems standpoint in Finn et al. (2017). First, we randomly initialize the parameters Ψ (line 1, Algorithm 2), compute the objective function over the support set $\mathcal{L}^{S_i}(\Psi)$ using equation 9, and perform a number of gradient descent steps on the parameters Ψ to adapt them to the support set (lines 5 to 9). This is called the *inner-update* and is done separately for all the support sets corresponding to the B different tasks (line 3). Once the inner-update is computed for each of the B parameter sets, the loss is evaluated on the query set $\mathcal{L}^{Q_i}(\Psi'_i)$ (line 12), following which a *meta-update* is conducted over all the corresponding query sets; this involves computing a gradient through a gradient, as described in Finn et al. (2017) (line 13).

Algorithm 2: End-to-End Meta-Training of TRIDENT

Require: $D_{tr}$, α, β, B
1 Randomly initialise Ψ = (ϕ1, ϕ2, θ1, θ2)
2 while *not converged* do
3   Sample B tasks $T_i = S_i \cup Q_i$ from $D_{tr}$
4   for *each task* $T_i$ do
5     for *number of adaptation steps* do
6       Compute $\mathcal{L}^{S_i}(\Psi) = \texttt{TRIDENT}(T_i - \{Y^{Q_i}\})$
7       Evaluate $\nabla_\Psi \mathcal{L}^{S_i}(\Psi)$
8       $\Psi \leftarrow \Psi - \alpha \nabla_\Psi \mathcal{L}^{S_i}(\Psi)$
9     end
10    $(\Psi')_i = \Psi$
11  end
12  Compute $\mathcal{L}^{Q_i}(\Psi'_i) = \texttt{TRIDENT}(T_i - \{Y^{S_i}\})$; ∀i ∈ [1, B]
13  Meta-update on $Q_i$: $\Psi \leftarrow \Psi - \beta \nabla_\Psi \sum_{i=1}^{B} \mathcal{L}^{Q_i}(\Psi'_i)$
14 end
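Because the implementation (Section 5) uses learn2learn, the meta-training loop of Algorithm 2 can be sketched with its MAML wrapper. Here `trident`, `trident_loss`, `sample_tasks`, and the loop-range names are assumed stand-ins, not the released code.

```python
import learn2learn as l2l
import torch

# `trident` is assumed to be the model of Fig. 4 (an nn.Module), and
# `trident_loss(model, split)` an assumed wrapper around equation 9.
maml = l2l.algorithms.MAML(trident, lr=alpha, first_order=False)
opt = torch.optim.Adam(maml.parameters(), lr=beta)

for iteration in range(num_iterations):
    opt.zero_grad()
    for task in sample_tasks(B):            # lines 3-4: B tasks per batch
        learner = maml.clone()              # per-task copy of Psi
        for _ in range(n_adapt_steps):      # lines 5-9: inner-update
            learner.adapt(trident_loss(learner, task.support))
        # Line 12: query loss, differentiated through the adaptation.
        trident_loss(learner, task.query).backward()
    opt.step()                              # line 13: meta-update of Psi
```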
We use a commonly adopted Conv4 architecture (Ravi & Larochelle, 2017a; Finn et al., 2017; Patacchiola et al., 2020; Afrasiyabi et al., 2020; Wang et al., 2019; Boudiaf et al., 2020) as ConvEnc to obtain the generic feature maps. Following the standard setting in the literature (Finn et al., 2017; Ravi & Larochelle, 2017a), the Conv4 has four convolutional blocks, where each block has a 3 × 3 convolution layer with 32 feature maps, followed by a batch normalization (BN) (Ioffe & Szegedy, 2015) layer, a 2 × 2 max-pooling layer and a LeakyReLU(0.2) activation. The generative network pθ1 for z_l is a classifier with two linear layers and a LeakyReLU(0.2) activation in between, while pθ2 for z_c consists of four blocks of a 2-D upsampling layer, followed by a 3 × 3 convolution and a LeakyReLU(0.2) activation. Both latent variables z_l and z_c have a dimensionality of 64. Following Nichol et al. (2018a); Liu et al. (2019); Vaswani et al. (2017), images are resized to 84 × 84 for all configurations, and we train and report test accuracy of (5-way, 1- and 5-shot) settings with 10 query images per class for all datasets.

The hyperparameter (H.P.) values used for training TRIDENT on *mini*Imagenet and *tiered*Imagenet are shown in Table 1. For the cross-domain testing scenario of *mini*Imagenet → CUB, we apply the same hyperparameters used for training TRIDENT on *mini*Imagenet, for the given (N-way, K-shot) configuration. Hyperparameters are kept fixed throughout training, validation and testing for a given configuration. The Adam (Kingma & Ba, 2015) optimizer is used for inner and meta-updates. Finally, the query, key and value extraction networks f_q(·; W_Q), f_k(·; W_K), f_v(·; W_V) of the AttFEX module only use Conv1×1(·), and not the LeakyReLU(0.2) activation function, for (5-way, 1-shot) tasks, irrespective of the dataset. We observed that utilizing BatchNorm (Ioffe & Szegedy, 2015) in the decoder of z_c (pθ2) when training TRIDENT on (5-way, 5-shot) tasks of *mini*Imagenet and on (5-way, 1-shot) tasks of *tiered*Imagenet leads to better scores and improved stability during training. We used the ReLU activation function instead of LeakyReLU(0.2) to carry out training on (5-way, 1-shot) tasks of *tiered*Imagenet.

Meta-learning objectives can lead to unstable optimization processes in practice, especially when coupled with stochastic sampling in latent spaces, as previously observed in Antoniou et al. (2019); Rusu et al. (2019). For ease of experimentation, we clip the meta-gradient norm at an absolute value of 1. Since AttFEX operates on all samples available in a task, scaling to a larger number of ways and shots per task requires more computational resources. TRIDENT converges in 82,000 and 22,500 epochs for (5-way, 1-shot) and (5-way, 5-shot) tasks of *mini*Imagenet, respectively, and takes 67,500 and 48,000 epochs for convergence on (5-way, 1-shot) and (5-way, 5-shot) tasks of *tiered*Imagenet, respectively. This translates to an average training time of 110 hours on an 11GB NVIDIA 1080Ti GPU. Note that we did not employ any data augmentation, feature averaging or any other data apart from the corresponding training subset Dtr during training.

Table 1: H.P. values when training TRIDENT.

| H.P. | *mini*Imagenet 5-way, 1-shot | *mini*Imagenet 5-way, 5-shot | *tiered*Imagenet 5-way, 1-shot | *tiered*Imagenet 5-way, 5-shot |
|------|------|------|------|------|
| α1 | 1e-2 | 1e-2 | 1e-2 | 1e-2 |
| α2 | 100 | 100 | 150 | 150 |
| α | 1e-3 | 1e-3 | 1.5e-3 | 1.7e-3 |
| β | 1e-4 | 1e-4 | 1.5e-4 | 1.7e-4 |
| B | 20 | 20 | 20 | 20 |
| n | 5 | 5 | 5 | 5 |
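For concreteness, the ConvEnc backbone described above amounts to the following sketch (our own illustrative rendering, not the authors' exact code):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch=32):
    # One Conv4 block: 3x3 conv (32 maps) -> BN -> 2x2 max-pool -> LeakyReLU(0.2)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.MaxPool2d(2),
        nn.LeakyReLU(0.2),
    )

class Conv4(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(in_ch), conv_block(32), conv_block(32), conv_block(32))

    def forward(self, x):
        # 3 x 84 x 84 input -> 32 x 5 x 5 generic feature maps
        return self.blocks(x)
```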
## 5.1 Evaluation Results

We report test accuracies with 95% confidence intervals over 600 tasks for *mini*Imagenet, and 2000 tasks for both *tiered*Imagenet and CUB, as is customary across the literature (Chen et al., 2019; Dhillon et al., 2020; Bateni et al., 2022). We compare our performance against a wide variety of state-of-the-art few-shot classification methods, namely: (i) metric-learning (Wang et al., 2019; Bateni et al., 2020; Afrasiyabi et al., 2020; Yang et al., 2020), (ii) transductive feature-extraction based (Oreshkin et al., 2018; Ye et al., 2020; Li et al., 2019; Xu et al., 2021), (iii) optimization-based (Finn et al., 2017; Mishra et al., 2018; Oh et al., 2021; Lee et al., 2019; Rusu et al., 2019), (iv) transductive inference-based (Bateni et al., 2022; Boudiaf et al., 2020; Ziko et al., 2020; Liu et al., 2019), and (v) Bayesian (Iakovleva et al., 2020; Zhang et al., 2019; Hu et al., 2020; Patacchiola et al., 2020; Ravi & Beatson, 2019) approaches. Previous works such as Liu et al. (2019) and Hou et al. (2019) have demonstrated the superiority of transductive inference methods over their inductive counterparts. In this light, we compare against a larger number of transductive (18 baselines) than inductive (7 baselines) methods for a fair comparison. It is important to note that TRIDENT is only a *transductive feature-extraction* based method, as we utilize the query-set images to extract task-aware feature embeddings; it is not a transductive inference based method, since we perform inference of class labels over the entire domain of definition and not just for the selected query samples (Vapnik, 2006; Gammerman et al., 1998).

The results on *mini*Imagenet and *tiered*Imagenet for both (5-way, 1- and 5-shot) settings are summarized in Table 2. We emphasize that we also compare against Transd-CNAPS+FETI (Bateni et al., 2022), where the authors pre-train the ResNet-18 backbone on the entire train split of ImageNet. We, however, avoid training on additional datasets, in favor of a fair comparison with the rest of the literature. Regardless of the choice of backbone (the simplest in our case), TRIDENT sets a new state-of-the-art on *mini*Imagenet and *tiered*Imagenet for both (5-way, 1- and 5-shot) settings, offering up to 5% gain over the prior art. Recently, a more challenging *cross-domain* setting has been proposed for few-shot classification to assess its generalization capabilities to unseen datasets. The commonly adopted setting is one where the model trains on *mini*Imagenet and tests on CUB (Chen et al., 2019). The results of this experiment are also presented in Table 2. We compare against all existing baselines for which this cross-domain experiment has been conducted. As can be seen, and to the best of our knowledge, TRIDENT again sets a new state-of-the-art, by a significant margin of 20% for the (5-way, 1-shot) setting and 1.5% for the (5-way, 5-shot) setting.

Table 2: Few-shot classification results. TRIDENT is transductive feature-extraction based (TF) and uses the simplest of backbones (Conv4).

| Methods | Backbone | Approach | *mini* 1-shot | *mini* 5-shot | *tiered* 1-shot | *tiered* 5-shot | *mini*→CUB 1-shot | *mini*→CUB 5-shot |
|---|---|---|---|---|---|---|---|---|
| MAML (Finn et al., 2017) | Conv4 | Ind. | 48.70 ± 1.84 | 63.11 ± 0.92 | 51.67 ± 1.81 | 70.30 ± 0.08 | 34.01 ± 1.25 | 48.83 ± 0.62 |
| ABML (Ravi & Beatson, 2019) | Conv4 | Ind. | 40.88 ± 0.25 | 58.19 ± 0.17 | - | - | 31.51 ± 0.32 | 47.80 ± 0.51 |
| OVE(PL) (Patacchiola et al., 2020) | Conv4 | Ind. | 48.00 ± 0.24 | 67.14 ± 0.23 | - | - | 37.49 ± 0.11 | 57.23 ± 0.31 |
| DKT+Cos (Patacchiola et al., 2020) | Conv4 | Ind. | 48.64 ± 0.45 | 62.85 ± 0.37 | - | - | 40.22 ± 0.54 | 55.65 ± 0.05 |
| BOIL (Oh et al., 2021) | Conv4 | Ind. | 49.61 ± 0.16 | 48.58 ± 0.27 | 66.45 ± 0.37 | 69.37 ± 0.12 | - | - |
| LFWT (Tseng et al., 2020) | RN10 | TF+TI | 66.32 ± 0.80 | 81.98 ± 0.55 | - | - | 47.47 ± 0.75 | 66.98 ± 0.68 |
| FRN (Wertheimer et al., 2021) | RN12 | Ind. | 66.45 ± 0.19 | 82.83 ± 0.13 | 71.16 ± 0.22 | 86.01 ± 0.15 | 54.11 ± 0.19 | 77.09 ± 0.15 |
| DPGN (Yang et al., 2020) | RN12 | TF+TI | 67.77 | 84.6 | 72.45 | 87.24 | - | - |
| PAL (Ma et al., 2021) | RN12 | TF+TI | 69.37 ± 0.64 | 84.40 ± 0.44 | 72.25 ± 0.72 | 86.95 ± 0.47 | - | - |
| Proto-Completion (Zhang et al., 2021a) | RN12 | TF+TI | 73.13 ± 0.85 | 82.06 ± 0.54 | 81.04 ± 0.89 | 87.42 ± 0.57 | - | - |
| TPMN (Wu et al., 2021) | RN12 | TF+TI | 67.64 ± 0.63 | 83.44 ± 0.43 | 72.24 ± 0.70 | 86.55 ± 0.63 | - | - |
| LIF-EMD (Li et al., 2021) | RN12 | TF+TI | 68.94 ± 0.28 | 85.07 ± 0.50 | 73.76 ± 0.32 | 87.83 ± 0.59 | - | - |
| Transd-CNAPS (Bateni et al., 2022) | RN18 | TF+TI | 55.6 ± 0.9 | 73.1 ± 0.7 | 65.9 ± 1.0 | 81.8 ± 0.7 | - | - |
| Baseline++ (Chen et al., 2019) | RN18 | TF | 51.87 ± 0.77 | 75.68 ± 0.63 | - | - | 42.85 ± 0.69 | 62.04 ± 0.76 |
| FEAT (Ye et al., 2020) | RN18 | TF | 66.78 | 82.05 | 70.80 | 84.79 | 50.67 ± 0.78 | 71.08 ± 0.73 |
| SimpleShot (Wang et al., 2019) | WRN | Ind. | 63.32 | 80.28 | 69.98 | 85.45 | 48.56 | 65.63 |
| Assoc-Align (Afrasiyabi et al., 2020) | WRN | TF | 65.92 ± 0.60 | 82.85 ± 0.55 | 74.40 ± 0.68 | 86.61 ± 0.59 | 47.25 ± 0.76 | 72.37 ± 0.89 |
| ReRank (Shen et al., 2021) | WRN | TF+TI | 72.4 ± 0.6 | 80.2 ± 0.4 | 79.5 ± 0.6 | 84.8 ± 0.4 | - | - |
| TIM-GD (Boudiaf et al., 2020) | WRN | TI | 77.8 | 87.4 | 82.1 | 89.8 | - | 71 |
| LaplacianShot (Ziko et al., 2020) | WRN | TI | 74.9 | 84.07 | 80.22 | 87.49 | 55.46 | 66.33 |
| S2M2 (Mangla et al., 2020) | WRN | TF | 64.93 ± 0.18 | 83.18 ± 0.11 | 73.71 ± 0.22 | 88.59 ± 0.14 | 48.24 ± 0.84 | 70.44 ± 0.75 |
| MetaQDA (Zhang et al., 2021b) | WRN | TF | 67.83 ± 0.64 | 84.28 ± 0.69 | 74.33 ± 0.65 | 89.56 ± 0.79 | 53.75 ± 0.72 | 71.84 ± 0.66 |
| BAVARDAGE (Hu et al., 2022b) | WRN | TI | 82.7 | 89.5 | 83.5 | 89.0 | - | - |
| EASY (Bendou et al., 2022) | WRN | TF+TI | 84.04 ± 0.23 | 89.14 ± 0.11 | 84.29 ± 0.24 | 89.76 ± 0.14 | - | - |
| PT+MAP (Hu et al., 2021) | WRN | TF+TI | 82.92 ± 0.26 | 88.82 ± 0.13 | 85.67 ± 0.26 | 90.45 ± 0.14 | 62.49 ± 0.32 | 76.51 ± 0.18 |
| PEMnE-BMS (Hu et al., 2022a) | WRN | TF+TI | 83.35 ± 0.25 | 89.53 ± 0.13 | 86.07 ± 0.25 | 91.09 ± 0.14 | 63.90 ± 0.31 | 79.15 ± 0.18 |
| Transd-CNAPS+FETI (Bateni et al., 2022) | RN18† | TF+TI | 79.9 ± 0.8 | 91.50 ± 0.4 | 73.8 ± 0.1 | 87.7 ± 0.6 | - | - |
| TRIDENT (Ours) | Conv4 | TF | 86.11 ± 0.59 | 95.95 ± 0.28 | 86.97 ± 0.50 | 96.57 ± 0.17 | 84.61 ± 0.33 | 80.74 ± 0.35 |

(† backbone pre-trained on the entire train split of ImageNet.)

Computational Complexity. Most of the reported baselines in Table 2 use stronger backbones such as ResNet12, ResNet18 and WRN, which contain 11.5, 12.4 and 36.4 million parameters, respectively. On the other hand, we use three Conv4s along with two fully connected layers and an AttFEX module, which accounts for 410,958 and 412,238 parameters in the (5-way, 1-shot) and (5-way, 5-shot) scenarios, respectively. This is summarized in detail in Table 3. Even though we are more parameter-heavy than approaches that use a single Conv4 as feature extractor, TRIDENT's total parameter count still lies in the same order of magnitude as those approaches.
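Parameter tallies like those in Table 3 reduce to a one-line helper in PyTorch; a minimal sketch (the module names in the comment are illustrative):

```python
def count_params(module):
    # Number of trainable parameters, as reported in Table 3.
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# e.g., summing TRIDENT's components (three Conv4s, linear heads, AttFEX):
# total = sum(count_params(m) for m in (enc_c, enc_l, dec, heads, attfex))
```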
In summary, when it comes to complexity in parameter space, we are considerably more efficient than the vast majority of the cited competitors.

Table 3: Parameter count of TRIDENT against competitors.

| | Conv4 | µϕ | σϕ | AttFEX |
|---|---|---|---|---|
| qϕ1 | 28,896 | 51,264 | 51,264 | 6,994 |
| qϕ2 | 28,896 | 51,264 | 51,264 | - |
| pθ1 + pθ2 | 2,245 + 132,009 | | | |

| **TRIDENT** | Conv4 | RN18 | WRN |
|---|---|---|---|
| **412,238** | 190,410 | 12.4M | 36.482M |

Table 6: Ablation study for *mini*Imagenet (5-way, 1-shot) tasks. Accuracies in (% ± std.).

| (B, n) | (5, 3) | (5, 5) | (10, 3) | (10, 5) | (20, 3) | (20, 5) |
|---|---|---|---|---|---|---|
| Accuracy | - | 67.43 ± 0.75 | 69.21 ± 0.66 | 74.6 ± 0.84 | 80.82 ± 0.68 | 86.11 ± 0.59 |

| (dim(z_l), dim(z_c)) | (32, 32) | (32, 64) | (32, 128) | (64, 32) | (64, 64) | (64, 128) | (128, 32) | (128, 64) | (128, 128) |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 76.29 ± 0.72 | 75.44 ± 0.81 | 79.1 ± 0.57 | 82.93 ± 0.8 | 86.11 ± 0.59 | 85.62 ± 0.52 | 81.49 ± 0.65 | 82.89 ± 0.48 | 84.42 ± 0.59 |

| (dim(W_M), dim(W_N)) | (32, 32) | (32, 64) | (32, 128) | (64, 32) | (64, 64) | (64, 128) | (128, 32) | (128, 64) | (128, 128) |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 78.4 ± 0.23 | 77.89 ± 0.39 | 79.55 ± 0.87 | 86.11 ± 0.59 | 84.87 ± 0.45 | 82.11 ± 0.35 | 84.67 ± 0.7 | 85.8 ± 0.58 | 83.92 ± 0.63 |

Reliability Metrics. A complementary set of metrics is typically used in probabilistic settings to measure the uncertainty and reliability of predictions. More specifically, the expected calibration error (ECE) and the maximum calibration error (MCE) respectively measure the expected and maximum binned difference between confidence and accuracy (Guo et al., 2017). As illustrated in Table 4, TRIDENT offers superior calibration on *mini*Imagenet (5-way, 1- and 5-shot) compared to other probabilistic approaches and MAML (Finn et al., 2017). To further examine the reliability and calibration of our method, we assess the ECE, MCE (Guo et al., 2017) and Brier scores (Brier, 1950) of TRIDENT on the challenging *cross-domain* scenario of *mini*Imagenet → CUB for (5-way, 5-shot) tasks. When compared against other baselines that report these metrics on the aforementioned scenario, TRIDENT proves to be the most calibrated, with the best reliability scores, as shown in Table 5.

Table 4: Calibration errors of TRIDENT.

| Setting | Metric | MAML | PLATIPUS | ABPML | ABML | BMAML | VAMPIRE | TRIDENT |
|---|---|---|---|---|---|---|---|---|
| 5-way, 1-shot | ECE | 0.046 | 0.032 | 0.013 | 0.026 | 0.025 | 0.008 | 0.0036 |
| 5-way, 1-shot | MCE | 0.073 | 0.108 | 0.037 | 0.058 | 0.092 | 0.038 | 0.029 |
| 5-way, 5-shot | ECE | 0.032 | - | 0.006 | - | 0.027 | - | 0.0015 |
| 5-way, 5-shot | MCE | 0.044 | - | 0.030 | - | 0.049 | - | 0.018 |

Table 5: Reliability metrics on the cross-domain *mini*Imagenet → CUB (5-way, 5-shot) scenario.

| Methods | ECE | MCE | Brier |
|---|---|---|---|
| Feature Transfer (Chen et al., 2019) | 0.275 | 0.646 | 0.772 |
| Baseline (Chen et al., 2019) | 0.315 | 0.537 | 0.716 |
| Proto Nets (Snell et al., 2017) | 0.009 | 0.025 | 0.604 |
| DKT+Cos (Patacchiola et al., 2020) | 0.236 | 0.426 | 0.670 |
| BMAML+Chaser (Yoon et al., 2018) | 0.066 | 0.260 | 0.639 |
| LogSoftGP(ML) (Galy-Fajou et al., 2020) | 0.220 | 0.513 | 0.709 |
| LogSoftGP(PL) (Galy-Fajou et al., 2020) | 0.022 | 0.042 | 0.564 |
| OVE(ML) (Snell & Zemel, 2021) | 0.049 | 0.066 | 0.576 |
| OVE(PL) (Snell & Zemel, 2021) | 0.020 | 0.032 | 0.556 |
| TRIDENT (Ours) | 0.009 | 0.02 | 0.276 |
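For reference, the binned calibration errors reported in Tables 4 and 5 can be computed along the following lines — a minimal sketch following Guo et al. (2017), noting that binning conventions vary across papers:

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """ECE/MCE: expected / maximum gap between mean confidence and
    accuracy over confidence bins (lo, hi]."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap   # weighted by bin population
            mce = max(mce, gap)
    return ece, mce
```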
## 5.2 Decoupling Analysis

As a qualitative demonstration, we visualize the *label* and *context* latent means (µ_l and µ_c) of query images for a randomly selected (5-way, 5-shot) task from the test split of *mini*Imagenet, before and after the MAML meta-update procedure. The UMAP (McInnes et al., 2018) plots in Fig. 5 illustrate a significant improvement in the class-conditional separation of query samples in the *label* latent space upon meta-update, whereas only a negligible improvement is visible in the context latent space. This is qualitative evidence that Z_l captures more class-discriminating information than Z_c. To substantiate this quantitatively, the clustering capacity of these latent spaces is also measured by the Davies-Bouldin score (DBI) (Davies & Bouldin, 1979), where the lower the DBI score, the better both the inter-cluster separation and the intra-cluster "tightness" (a code sketch of this computation is given at the end of Section 5.3). Fig. 5 shows that the DBI score drops significantly more after meta-update in the case of Z_l than in that of Z_c, indicating better clustering of features in the former. This aligns with the proposed decoupling strategy of TRIDENT and corroborates the validity of our proposition to put an emphasis on label latent information for the downstream few-shot tasks.

Figure 5: Better class separation upon meta-update is confirmed by lower DBI scores. Different colors/markers indicate classes.

## 5.3 Ablation Study

We analyze the classification performance of TRIDENT across various parameters and hyperparameters, as summarized in Table 6. We use the *mini*Imagenet (5-way, 1-shot) setting to carry out the ablation experiments. To cover different design perspectives, we carry out ablations on: (i) MAML-style training parameters: meta-batch size B and number of inner adaptation steps n; (ii) latent space dimensionality: z_l and z_c, to assess the impact of their size; (iii) AttFEX features: the number of features extracted by W_M and W_N. Looking at the results, TRIDENT's performance improves consistently with the number of tasks and inner-adaptation steps, as previously demonstrated for MAML-based training in Antoniou et al. (2019); Finn et al. (2017). Regarding latent space dimensions, higher dimensions of z_l and z_c correlate with better performance; however, the results show that increasing both dimensions beyond 64 leads to performance degradation, so (64, 64) appears to be the sweet spot. Finally, on the feature space dimensions of AttFEX, performance improves when dim(W_M) > dim(W_N), and the best performance is achieved when the parameters are set to (64, 32). Notably, the same set of parameters returns the best performance for the (5-way, 5-shot) setting. To sum up, (B, n, dim(z_l), dim(z_c), dim(W_M), dim(W_N)) = (20, 5, 64, 64, 64, 32) turns out to be the best setting for (5-way, 1-shot), and consistently the same for (5-way, 5-shot).
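As promised in Section 5.2, the DBI comparison is a one-liner with scikit-learn; a minimal sketch, where `mu_l`/`mu_c` (the query-image latent means) and `class_ids` are illustrative variable names:

```python
from sklearn.metrics import davies_bouldin_score

# Lower DBI = tighter, better-separated clusters; compute before and
# after the meta-update to reproduce the comparison in Fig. 5.
dbi_label = davies_bouldin_score(mu_l, class_ids)    # label latent space
dbi_context = davies_bouldin_score(mu_c, class_ids)  # context latent space
```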
## 5.4 Impact of AttFEX and the Decoupled Inference Strategy

In order to study the impact of the transductive feature extractor AttFEX, we exclude it during training and train the remaining architecture. Training proceeds exactly as described in Algorithm 2. As can be seen in Table 7, the exclusion of AttFEX from TRIDENT (AttFEX OFF) results in a substantial drop in classification performance across both datasets and task settings. Empirically, this further substantiates the importance of AttFEX's ability to render the feature maps transductive/task-aware.

As explained earlier in Section 4.3, the derivation of TRIDENT's ELBO implies that y should be included as an input to qϕ1 due to its dependence on z_l. However, in order to utilize TRIDENT as a classification network rather than a label reconstruction network, we choose not to input y to qϕ1(·), but rather do so indirectly, by inducing a semblance of label characteristics in the features extracted from the images in a task. Thus, it is important to realize that this ability of AttFEX to render feature maps transductive is not just an ad-hoc performance enhancer, but rather an essential part of TRIDENT. To further understand the impact of AttFEX on TRIDENT, we train TRIDENT with transductive feature extraction modules different from AttFEX. The three modules that we replace AttFEX with are:

(i) Embedding propagation module (EP): adapted from Embedding Propagation Networks (Rodríguez et al., 2020). Here, a non-parametric graph-based propagation matrix helps smoothen the embedding manifold to remove undesirable noise from the support and query feature vectors;

(ii) Attention-based feature adaptation module (FEAT): adapted from FEAT (Ye et al., 2020). A self-attention module is used to transform the support and query sets by computing a weighted average of all the feature vectors in a task. The weights are calculated using a dot-product between each pair of feature vectors;

(iii) LSTM-based feature adaptation module (LSTM): we introduce the LSTM-based transductive task-encoding procedure from Transductive CNAPS (Bateni et al., 2022) in place of AttFEX and carry out the same training procedure.

The results of each of these experiments, when trained with TRIDENT on *mini*Imagenet and *tiered*Imagenet, are shown in Table 7. TRIDENT's superior results corroborate the importance of our design choices in AttFEX.

Table 7: Impact of AttFEX on classification accuracies.

| | *mini*Imagenet (5-way, 1-shot) | *mini*Imagenet (5-way, 5-shot) | *tiered*Imagenet (5-way, 1-shot) | *tiered*Imagenet (5-way, 5-shot) |
|---|---|---|---|---|
| AttFEX OFF | 67.68 ± 0.55 | 78.53 ± 0.21 | 69.32 ± 0.76 | 79.32 ± 0.76 |
| TRIDENT (EP) | 69.84 ± 0.5 | 80.15 ± 0.67 | 73.29 ± 0.60 | 82.17 ± 0.65 |
| TRIDENT (FEAT) | 80.11 ± 0.43 | 87.61 ± 0.12 | 82.39 ± 0.45 | 88.78 ± 0.39 |
| TRIDENT (LSTM) | 75.41 ± 0.49 | 83.89 ± 0.45 | 79.72 ± 0.52 | 86.20 ± 0.92 |
| ConvFEX | 51.46 ± 0.91 | 62.35 ± 0.72 | 55.89 ± 0.31 | 64.56 ± 0.29 |
| TRIDENT (Ours) | 86.11 ± 0.59 | 95.95 ± 0.28 | 86.97 ± 0.50 | 96.57 ± 0.17 |

Furthermore, to empirically verify the contribution of the decoupled variational inference versus AttFEX, we trained a simplified network ConvFEX = Conv4 + AttFEX as the inference network q(z | x), generating class labels y using an MLP p(y | z). ConvFEX embodies the inference and generative mechanics of z_l while omitting the second latent variable z_c, thus dropping the decoupled inference strategy. As shown in Table 7, the classification accuracies of ConvFEX across both datasets and task settings corroborate that coupling label-specific and context information incurs a significant performance degradation compared to TRIDENT, reaffirming the importance of our *decoupled* variational inference strategy.

## 5.5 Impact of End-to-End Meta-Learning

To understand the importance of end-to-end meta-training of the entire network architecture, we train parts of TRIDENT in different steps. More specifically, we pre-train a ConvEnc on the training split of *mini*Imagenet to perform 64-way classification.
Note that during this pre-training phase, training proceeds by sampling random batches from the entire training split, without defining support or query sets. We use the pre-trained feature extractors in TRIDENT's inference networks qϕ1 and qϕ2 for fine-tuning. We then conduct three different experiments for fine-tuning the network: (i) freeze both ConvEncs and fine-tune episodically without any MAML-style meta-learning; (ii) fine-tune the entire architecture episodically without any MAML-style meta-learning; (iii) freeze both ConvEncs and fine-tune using MAML-style meta-learning. Fine-tuning proceeds by sampling (N-way, K-shot) tasks from the training split of *mini*Imagenet. Notably, in (i) and (ii) we do not have separate updates for the support and query sets, following simple episodic training. Therefore, employing an MLP for classification would be a suboptimal utilization of the labelled samples; to address this, we use a prototypical classification framework as proposed in Prototypical Networks (Snell et al., 2017). The results of all these experiments are shown in Table 8. It can be observed that episodic fine-tuning is not as effective as meta-learning the entire network architecture. This can be attributed to the ability of MAML-style meta-learning to render the network's weights sensitive to the loss function, thus enabling quicker generalization to unseen tasks (Finn et al., 2017).

Table 8: Impact of meta-learning on accuracies (*mini*Imagenet).

| | (5-way, 1-shot) | (5-way, 5-shot) |
|---|---|---|
| Frozen ConvEnc (Episodic) | 67.68 ± 0.55 | 78.53 ± 0.21 |
| Fine-tune ConvEnc (Episodic) | 69.84 ± 0.5 | 80.15 ± 0.67 |
| Frozen ConvEnc (Meta-Learn) | 80.11 ± 0.43 | 87.61 ± 0.12 |
| TRIDENT (Ours) | 86.11 ± 0.59 | 95.95 ± 0.28 |

## 6 Concluding Remarks

We introduce a novel variational inference network (coined TRIDENT) that simultaneously infers decoupled latent variables representing the context and label information of an image. The proposed network is comprised of two intertwined variational sub-networks responsible for inferring the context and label information separately, the latter being enhanced using an attention-based transductive feature extraction module (AttFEX). Our extensive experimental results corroborate the efficacy of this transductive decoupling strategy on a variety of few-shot classification settings, demonstrating superior performance and setting a new state-of-the-art for the most commonly adopted datasets *mini*- and *tiered*Imagenet, as well as for the recent challenging cross-domain scenario of *mini*Imagenet → CUB. As future work, we plan to demonstrate the applicability of TRIDENT in semi-supervised and unsupervised settings by including the likelihood of unlabelled samples derived from the graphical model. This would render TRIDENT an all-inclusive, holistic approach towards solving few-shot classification.

## References

Arman Afrasiyabi, Jean-François Lalonde, and Christian Gagné. Associative alignment for few-shot image classification. In *European Conference on Computer Vision*, pp. 18–35. Springer, 2020.

Antreas Antoniou, Harrison Edwards, and Amos J. Storkey. How to train your maml. In *ICLR (Poster)*. OpenReview.net, 2019.

Sébastien M R Arnold, Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. learn2learn: A library for Meta-Learning research. August 2020. URL http://arxiv.org/abs/2008.12284.

Peyman Bateni, Raghav Goyal, Vaden Masrani, Frank Wood, and Leonid Sigal. Improved few-shot visual classification.
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2020.

Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, and Frank Wood. Enhancing few-shot image classification with unlabelled examples. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)*, pp. 2796–2805, January 2022.

Yassir Bendou, Yuqing Hu, Raphael Lafargue, Giulia Lioi, Bastien Pasdeloup, Stéphane Pateux, and Vincent Gripon. Easy: Ensemble augmented-shot y-shaped learning: State-of-the-art few-shot classification with simple ingredients, 2022. URL https://arxiv.org/abs/2201.09699.

Malik Boudiaf, Imtiaz Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, and Ismail Ben Ayed. Information maximization for few-shot learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 2445–2457. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Paper.pdf.

Glenn W. Brier. Verification of forecasts expressed in terms of probability. *Monthly Weather Review*, 78(1):1–3, 1950. doi: 10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2. URL https://journals.ametsoc.org/view/journals/mwre/78/1/1520-0493_1950_078_0001_vofeit_2_0_co_2.xml.

Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Wang, and Jia-Bin Huang. A closer look at few-shot classification. In *International Conference on Learning Representations*, 2019.

Wentao Cui and Yuhong Guo. Parameterless transductive feature re-representation for few-shot learning. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 2212–2221. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/cui21a.html.

David L. Davies and Donald W. Bouldin. A cluster separation measure. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PAMI-1(2):224–227, 1979. doi: 10.1109/TPAMI.1979.4766909.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In *International Conference on Learning Representations*, 2020.

Harrison Edwards and Amos J. Storkey. Towards a neural statistician. *ArXiv*, abs/1606.02185, 2017.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1126–1135, 2017.

Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/8e2c381d4dd04f1c55093f22c59c3a08-Paper.pdf.

Théo Galy-Fajou, Florian Wenzel, Christian Donner, and Manfred Opper. Multi-class gaussian process classification made conjugate: Efficient inference via data augmentation. In Ryan P.
Adams and Vibhav Gogate (eds.), *Proceedings of The 35th Uncertainty in Artificial Intelligence Conference*, volume 115 of *Proceedings of Machine Learning Research*, pp. 755–765. PMLR, 22–25 Jul 2020. URL https://proceedings.mlr.press/v115/galy-fajou20a.html.

A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In *Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI'98)*, 1998.

Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard Turner. Meta-learning probabilistic inference for prediction. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=HkxStoC5F7.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1321–1330. PMLR, 06–11 Aug 2017.

Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Cross attention network for few-shot classification. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/01894d6f048493d2cacde3c579c315a3-Paper.pdf.

Shell Xu Hu, Pablo Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil Lawrence, and Andreas Damianou. Empirical bayes transductive meta-learning with synthetic gradients. In *International Conference on Learning Representations (ICLR)*, 2020. URL https://openreview.net/forum?id=Hkg-xgrYvH.

Yuqing Hu, Vincent Gripon, and Stéphane Pateux. Leveraging the feature distribution in transfer-based few-shot learning. In *International Conference on Artificial Neural Networks*, pp. 487–499. Springer, 2021.

Yuqing Hu, Stéphane Pateux, and Vincent Gripon. Squeezing backbone feature distributions to the max for efficient few-shot learning. *Algorithms*, 15(5):147, 2022a.

Yuqing Hu, Stéphane Pateux, and Vincent Gripon. Adaptive dimension reduction and variational inference for transductive few-shot classification, 2022b. URL https://arxiv.org/abs/2209.08527.

Ekaterina Iakovleva, Jakob Verbeek, and Karteek Alahari. Meta-learning with shared amortized variational inference. In *Proceedings of the 37th International Conference on Machine Learning*, Proceedings of Machine Learning Research. PMLR, 13–18 Jul 2020.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 448–456. PMLR, 07–09 Jul 2015.

Adrian Javaloy, Maryam Meghdadi, and Isabel Valera. Mitigating modality collapse in multimodal VAEs via impartial optimization. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 9938–9964. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/javaloy22a.html.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2014.

Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In *Advances in Neural Information Processing Systems*, volume 27, 2014.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In *CVPR*, 2019.

Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, and Xiaogang Wang. Finding task-relevant features for few-shot learning by category traversal. In *CVPR*, 2019.

Junjie Li, Zilei Wang, and Xiaoming Hu. Learning intact features by erasing-inpainting for few-shot classification. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(9):8401–8409, May 2021. URL https://ojs.aaai.org/index.php/AAAI/article/view/17021.

Jinlu Liu, Liang Song, and Yongqiang Qin. Prototype rectification for few-shot learning. In *Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I*, 2020.

Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sungju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. In *International Conference on Learning Representations*, 2019.

Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Jiawei Ma, Hanchen Xie, Guangxing Han, Shih-Fu Chang, Aram Galstyan, and Wael Abd-Almageed. Partner-assisted learning for few-shot image classification. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 10573–10582, 2021.

Puneet Mangla, Nupur Kumari, Abhishek Sinha, Mayank Singh, Balaji Krishnamurthy, and Vineeth N Balasubramanian. Charting the right manifold: Manifold mixup for few-shot learning. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2218–2227, 2020.

Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. *arXiv preprint arXiv:1802.03426*, 2018.

Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In *International Conference on Learning Representations*, 2018.

Cuong C. Nguyen, Thanh-Toan Do, and Gustavo Carneiro. Uncertainty in model-agnostic meta-learning using variational inference. *CoRR*, 2019.

Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. *CoRR*, abs/1803.02999, 2018a.

Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms, 2018b.

Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, and Se-Young Yun. Boil: Towards representation change for few-shot learning. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=umIdUL8rMH.

Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. In *Advances in Neural Information Processing Systems*, volume 31, 2018.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, 2019.

Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael F. P. O'Boyle, and Amos J. Storkey. Bayesian meta-learning for the few-shot setting via deep kernels. In *NeurIPS*, 2020.
URL https://proceedings.neurips.cc/paper/2020/hash/b9cfe8b6042cf759dc4c0cccb27a6737-Abstract.html.

Aravind Rajeswaran, Chelsea Finn, Sham M Kakade, and Sergey Levine. Meta-learning with implicit gradients. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/072b030ba126b2f4b2374f342be9ed44-Paper.pdf.

Sachin Ravi and Alex Beatson. Amortized bayesian meta-learning. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=rkgpy3C5tX.

Sachin Ravi and H. Larochelle. Optimization as a model for few-shot learning. In *ICLR*, 2017a.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In *International Conference on Learning Representations*, 2017b. URL https://openreview.net/forum?id=rJY0-Kcll.

Mengye Ren, Sachin Ravi, Eleni Triantafillou, Jake Snell, Kevin Swersky, Josh B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few-shot classification. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=HJcSzz-CZ.

James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E Turner. Fast and flexible multi-task classification using conditional neural adaptive processes. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

Pau Rodríguez, Issam Laradji, Alexandre Drouin, and Alexandre Lacoste. Embedding propagation: Smoother manifold for few-shot classification. In *European Conference on Computer Vision*, pp. 121–138. Springer, 2020.

Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=BJgklhAcK7.

Victor Garcia Satorras and Joan Bruna Estrach. Few-shot learning with graph neural networks. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=BJj6qGbRW.

Xi Shen, Yang Xiao, Shell Xu Hu, Othman Sbai, and Mathieu Aubry. Re-ranking for image retrieval and transductive few-shot classification. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=sneJD9juaNl.

Jake Snell and Richard Zemel. Bayesian few-shot classification with one-vs-each pólya-gamma augmented gaussian processes. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=lgNx56yZh8a.

Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In *Advances in Neural Information Processing Systems*, volume 30, 2017.

Zhuo Sun, Jijie Wu, Xiaoxu Li, Wenming Yang, and Jing-Hao Xue. Amortized bayesian prototype meta-learning: A new probabilistic meta-learning approach to few-shot image classification. In *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, Proceedings of Machine Learning Research, 2021.

Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2018.
Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. Cross-domain few-shot classification via learned feature-wise transformation. *arXiv preprint arXiv:2001.08735*, 2020.

Vladimir Naumovich Vapnik. *Estimation of Dependences Based on Empirical Data*. 2006.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30, 2017.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In *Advances in Neural Information Processing Systems*, volume 29, 2016.

Yan Wang, Wei-Lun Chao, Kilian Q. Weinberger, and Laurens van der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning, 2019.

P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.

Davis Wertheimer, Luming Tang, and Bharath Hariharan. Few-shot classification with feature map reconstruction networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 8012–8021, June 2021.

Jiamin Wu, Tianzhu Zhang, Yongdong Zhang, and Feng Wu. Task-aware part mining network for few-shot learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8433–8442, 2021.

Weijian Xu, Yifan Xu, Huaijin Wang, and Zhuowen Tu. Attentional constellation nets for few-shot learning. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=vujTf_I8Kmc.

Ling Yang, Liangliang Li, Zilun Zhang, Xinyu Zhou, Erjin Zhou, and Yu Liu. Dpgn: Distribution propagation graph network for few-shot learning. In *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 13387–13396, 2020.

Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. Few-shot learning via embedding adaptation with set-to-set functions. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 8808–8817, 2020.

Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In *Advances in Neural Information Processing Systems*, volume 31, 2018.

Baoquan Zhang, Xutao Li, Yunming Ye, Zhichao Huang, and Lisai Zhang. Prototype completion with primitive knowledge for few-shot learning. In *CVPR*, pp. 3754–3762, 2021a.

Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, and Xiaokang Yang. Variational few-shot learning. In *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 1685–1694, 2019. doi: 10.1109/ICCV.2019.00177.

Xueting Zhang, Debin Meng, Henry Gouk, and Timothy M Hospedales. Shallow bayesian meta learning for real-world few-shot recognition. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 651–660, 2021b.

Imtiaz Masud Ziko, Jose Dolz, Eric Granger, and Ismail Ben Ayed. Laplacian regularized few-shot learning. In *ICML*, pp. 11660–11670, 2020.
Review 1:
Summary:
The paper proposes a technique for few-shot learning by jointly learning semantic and label latent representations.

Strengths and Weaknesses:
Strengths
- Interesting idea
- Relevant topic
- Paper is clearly written and easy to follow
- SOTA results

Weaknesses
- The effectiveness of the key ingredient of the idea, i.e. jointly learning representations of label and semantics, is not supported with empirical evidence
- No qualitative examples demonstrating the proposed intuition in action
- A few unsubstantiated claims
- Ablation studies are vacuous

Requested Changes:
- I do not understand what decoupled stands for in the title and throughout the paper. How come $z_s$ and $z_l$ are decoupled if $z_l$ clearly depends on $z_s$ in Figure 2 and equation (1)? Please fix this.
- The equation deriving the ELBO uses a factorization of $p(z_s, z_l)$ that assumes $z_s$ and $z_l$ are independent. First, what is the justification for this assumption? Is it really possible that the label is independent of semantic information? It is hardly believable that this independence is actually viable in practice. Since this is foundational to the theory of the paper, I believe it is very important to motivate theoretically and empirically that this assumption holds in practice, or otherwise modify the theory accordingly so that the algorithm does not need to rely on this assumption. Second, Figure 1 does show the arrow conditioning $z_l$ on $z_s$, which again contradicts the theoretical foundation of the paper.
- Related work fails to demonstrate where the current work falls in the literature and which research gaps it helps to bridge. Please rewrite accordingly.
- The relationship and novelty w.r.t. matching networks and other few-shot approaches using attention must be more clearly articulated.
- "This helps the network utilize information across corresponding pixels of all images in a task $T_i$ which acts as a parametric comparison of classes". Can you provide empirical support of this claim in the manuscript?
- "This increases the sensitivity of the parameters Ψ towards the loss function for fast adaptation to unseen tasks and reduces generalization errors on the query set $Q$." Can you provide empirical support of this claim in the manuscript?
- The main idea and claim of the paper is "Thus, we argue that attention to label-specific information should be ingrained into the mechanics of the classifier, decoupling it from semantic information. This becomes even more important in a few-shot setting where the network has to quickly learn from little data." I do not see a proof of this concept in the empirical results. Where is the proof that performance is worse when label-specific information and semantic information are coupled, and that performance improves when what the authors propose is implemented? There is no ablation study like this in the paper; moreover, the section "Ablation study" is doing hyperparameter search instead of an ablation study. In an ablation study we would like to see the effects of adding and removing key components of the proposed architecture, as well as observing any interaction effects between them. Please fix this.
- Another claim in the paper is that $z_s$ learns a semantic representation of an image. Please provide empirical evidence supporting this claim. At the moment it is not at all clear what exactly is learned in $z_s$ and why it helps (if indeed it does) to learn a better $z_l$.
- Please prove that $z_s$ helps to learn better $z_l$.
What if we do not supervise $z_s$ at all; maybe we still get the same performance level?

Broader Impact Concerns: none

==================================================

Review 2:
Summary:
The authors propose TRIDENT, a transductive few-shot learning method whose generative model disentangles label-relevant from label-irrelevant information in the latent space. To this end, they utilize a variational objective that has a reconstruction term, based on the MSE between the generated (via both latent semantic and label information) images and the original images, as well as a classification term, where the class-label prediction is made from the label latent variables. They also propose the AttFEX module, a transductive one that utilizes unlabeled information from both the support and query sets, in order to obtain a set of 'task-cognizant' features, which are used for the branch of the inference network that predicts the label-related latent variables. For this, they extract features from all images in the task using a convolutional feature extractor, and then utilize 1x1 convolutions for dimensionality reduction over (individual pixels of) all of the task's images, thus blending information across classes. Next, they blend information across feature maps via a self-attention layer, the result of which is multiplied with the original non-transductive features (with a Hadamard product) to yield the transductive features. This is done independently for each of the support and query sets of the task.

Overall, they utilize a meta-learning framework to train the parameters in the convolutional feature extractors, AttFEX, and inference networks. Their meta-learning objective is a gradient-based one, where a few steps of gradient descent are performed on the support set to update all parameters ('inner loop'), followed by an update on the query set ('outer loop'). In both the inner and outer loop, the transductive branch sees both the support and the query set (though without labels), whereas the non-transductive branch uses only the corresponding set (support set in the inner loop, query set in the outer loop).

Strengths and Weaknesses:
Strengths:
========
- the idea of disentangling latent factors into those relating to class labels and those that don't is a good one, as utilizing only the former set for classification can intuitively be a better fit for few-shot classification problems, due to focusing on a more 'robust' set of features, which is important in the face of limited task-specific data.
- the proposed method performs very strongly on the problems it is tested on, which is particularly impressive given that it uses a much smaller architecture compared to other works they compare against.

Weaknesses:
===========
- the paper has several clarity and writing issues
- the proposed architecture is more complicated than some of the previous work compared against, which makes it harder to understand and deconstruct the improved performance over previous work. While the authors perform some ablations which are helpful, they are not sufficient. For instance, they ablate entirely removing the AttFEX module, but don't consider simpler architectural designs for that module.
- in terms of experimental setup, the paper focuses on simple benchmarks that the community is largely moving away from, in favor of more diverse few-shot learning benchmarks.
Requested Changes:
Clarity and correctness
==================
Metric learning is described as learning a "shared feature extractor to embed the samples into a metric space of class prototypes". This is not correct, as building class prototypes is not a necessary part of all metric learning approaches. For example, out of the ones cited, there are several that don't create prototypes, like Matching Networks, which instead have a nonparametric flavor. Further, the next sentence talks about how sharing a feature extractor across tasks isn't necessarily ideal, but not all cited methods in this context share the feature extractor. E.g. Bateni et al. (2020) conditions it using an amortized meta-learned model to output parameters for each new task, so this citation for instance should be part of the "task-aware" category instead. Also, the definition of "task-aware" in the paper is an unconventional one that involves exploiting unlabeled data. This isn't necessarily true - there is a vast literature of task-aware methods (also referred to as task-conditioning methods) where task-awareness is built based on the (labeled) support set of the task, like TADAM, CNAPs, Simple CNAPs etc. Perhaps a more appropriate term for what the authors are talking about here is "transductive task-aware", since they are talking about a specific way of achieving task-awareness that makes additional assumptions (the availability of (additional) unlabeled data) over other task-aware methods.

In related work, SNAIL is incorrectly classified into the optimization-based meta-learning category. SNAIL is actually a black-box meta-learner that does not apply gradient descent at inference time to solve each new task; therefore it's not optimization-based. Overall, from reading the intro and related work, it felt on several occasions like the authors haven't read the papers they cited.

Please correct the citation style used throughout the paper, which is incorrectly used and can be distracting. Use citep to refer to a paper and citet to refer to the authors of the paper. For instance, "Bateni et al. (2022) proposes a method for…" should use the latter, but "A method for … was proposed in (Bateni et al. 2022)" should use the former.

Style, design and context don't sound like "semantic" information to me. The term semantics is typically used to refer to information relating to e.g. class information, so I find this terminology odd. A more appropriate term would perhaps be nuisance, style or context variables.

Problem definition: it should be noted that the split into train/valid/test is done at a *class* level (assuming the authors are following the standard few-shot classification setup). "The NQ query and NK support images are mutually exclusive" - disjoint is a more appropriate word than mutually exclusive, since we're referring to sets.

Section 3 presents a framework for variational inference for a graphical model that assumes that images are generated from latent factors corresponding to both "semantics" and "label" information, whereas the latter is responsible for generating the label. However, it is unclear in this section how the loss function presented here fits into the episodic framework. The authors say that the loss is applied separately to both the support and query sets, which makes me wonder: why is it useful to have a support/query separation? The role of the support and query becomes clear later, but at this stage it causes confusion.
I would recommend not mentioning support/query in this section and only describing the variational inference model there.

Do the two feature extractors shown in Figure 4 have shared parameters with each other? The use of different colors in this figure made me guess that they don't, but that would be unintuitive and perhaps wasteful. If they don't have shared parameters, it would be good to include an empirical comparison to a version that does share them.

I did not understand the difference between transductive feature extraction and transductive inference. It would be good to explain this in more detail.

In Figure 5, for the visualization, it should be clarified whether the query images used to create it were from the meta-training or meta-validation/meta-testing set. Also, in the paragraph describing this figure (section 5.2), the authors introduce the term 'meta-adaptation', while the caption of Fig. 5 calls this 'meta-update', neither of which has been defined. I assume this means task-adaptation using the support set, but this needs to be clarified. Note that, if my assumption is correct, the terminology used here is confusing, since in meta-learning 'meta-update' refers to the update to the meta-parameters, which is based on the query set loss. Either way, meta-adaptation and meta-update are not synonyms.

Minor: in equation 4, maybe use a symbol other than N to denote the result of the 1x1 convolution, since N denotes the number of classes in each task.

Experimental setup
===============
It would be useful to also have ablations with no meta-learning, or less meta-learning, as these would better aid in understanding the success of the proposed approach. This is important as the proposed system is quite complex and involves several components, and it's not clear whether all of them need to be end-to-end meta-learned, or what the effect of that design choice is. For instance, can one pre-train the convolutional feature extractor and inference network for the variational objective to reconstruct images, using batches of examples from the meta-training set, without separate support/query sets? This could then be kept frozen during the meta-learning phase, which can learn e.g. the AttFEX parameters. Pushing further along this direction, could one use a pre-trained convolutional feature extractor (e.g. trained on ImageNet) and integrate that (frozen) into this architecture? Some experiments in this direction would strengthen the paper.

It would also be great to have other ablations for the transductive component, aside from the current ablation that removes it entirely. What are the architectures used in the cited related works to handle incorporating unlabeled information from the entire task? Can one of these be subbed in in the place of AttFEX?

The experimental setting is focused on older benchmarks (and thus also precludes comparisons against newer methods). The community has largely moved away from simpler benchmarks like mini- and tiered-Imagenet, which are relatively homogeneous and evidently don't require adaptation of the feature extractor for good performance on new few-shot tasks. While the cross-domain scenario explored here is useful, it is quite limited. It would be much more informative to instead use more recent benchmarks for few-shot learning like Meta-Dataset [1], Meta-Dataset + VTAB [2], Orbit [3], MetaAlbum [4], etc.

References
=========
[1] Meta-Dataset: A dataset of datasets for learning to learn from few examples. Triantafillou et al. ICLR 2020.
[2] Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark. Dumoulin et al. NeurIPS 2021.
[3] ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition. Massiceti et al. ICCV 2021.
[4] Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification. NeurIPS 2022.

Broader Impact Concerns: I have no broader impact concerns about this paper.

==================================================

Review 3:
Summary:
This work proposes a novel approach to few-shot learning for classification. In layman's terms, the proposed method is a hierarchical VAE in which the first latent variable $z_s$ provides semantic information (the information not important for classification), and the second latent variable $z_l$ embodies the information useful for classification. The authors make a number of modifications to make the model work in their setting, the most important being:

1. The inference model does not depend on the label (so that it can be used during testing).
2. To cope with the loss of information, the authors shift to the transductive setting, using attention on query and support data *at once* to extract the relevant features (making it "task-aware").
3. Finally, the model is trained using meta-learning in the same fashion as MAML models.

These key changes enable a model which convincingly outperforms existing methods using far fewer parameters than the average model, as empirically demonstrated in their experiments.

Strengths and Weaknesses:
**Strengths:**
- The paper is clearly written and easy to follow for the most part. Figures help a lot to understand the proposed approach.
- The empirical results are strong and the qualitative results are convincing.
- The literature review is properly conducted, and the paper is mostly self-contained.
- The idea is intuitive, and all proposed changes are reasonable and well-argued.

**Weaknesses:**
- The definition of the probabilistic model has some issues that can be misleading (see the next section), and some claims regarding the probabilistic methods are arguable.
- Limitations of the method are not properly discussed. For example, the model should not scale really well, as it needs to process the entire dataset at once.
- To my understanding, all results from other models are taken from the original papers. I would have appreciated reproducing at least one of them with the actual codebase of this paper to make sure that the setups are equivalent.
- The last experiment (ConvFEX) in section 5.4 is unclear and (personally) unconvincing. Unclear in that I do not understand whether everything was exactly as in the proposed method (e.g., trained using meta-learning), as well as why a simplified model was used. Unconvincing in that I would require a deeper discussion or stronger results to show a counter-intuitive result: if the only goal is classification, why wouldn't the model drop all irrelevant information? Is it because reconstruction helps align the features of the query and support variables?
- As far as I could see, there are no details regarding hyperparameter optimization.

Requested Changes:
Critical changes to secure my recommendation:

1. The probabilistic model introduced in section 4.2 is misleading, in the sense that it is defined per sample $(x, y)$, but the proposed AttFEX introduces dataset-dependent dependencies that are not contemplated anywhere.
To my eyes, the probabilistic model should be defined in terms of the entire dataset $\mathcal{D}$, rather than individual points. While the generative network is inductive, $p(\mathcal{D}|\mathcal{Z}) = \prod_i p(x_i | z_i)$, the inference network is transductive, $q(\mathcal{Z}|\mathcal{D}) = \prod_i q(z_i | \mathcal{D})$. This is a *huge* difference which is not reflected in the model. To help rewrite the probabilistic model, I would suggest looking at [Neural Processes](https://arxiv.org/abs/1807.01622), which also perform variational inference over the entire dataset.

2. Equation 2 is wrong, as the first KL should be inside an expectation w.r.t. $q_{\phi_2}(z_s|x)$. Also, that equation is the *negative* ELBO.

3. While I can understand $\alpha_1$ as the normalizing constant of a fixed-variance Gaussian likelihood, $\alpha_2$ does not make sense in probabilistic terms, and there is no such thing as "being considered variational inference by consensus." Scaling factors such as those proposed can instead be understood as gradient-scaling parameters *during training*; see [Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization](https://arxiv.org/abs/2206.04496). This same paper also showed that properly tuning these parameters can improve training with heterogeneous likelihoods (such as Gaussian + Categorical).

4. In Section 5.4, the phrase "it is imperative to include y in the input to $q_{\phi_1}(.)$ for mathematical correctness of the variational inference formulation" is wrong. The choice of variational family is up to the user and will always be correct (another matter is whether it is feasible or sensible).

Changes that would strengthen the paper:

5. Formatting could be improved: properly using `\citep` and `\citet`, moving line numbers of algorithms outside the margins, removing the short line at the beginning of page 11, correcting typos (e.g. removing the doubled `equation equation` within the text), etc.

6. I'd make sure that the $86.11 \pm 0.59$ in Table 6 is correct, as it appears twice in the table.

7. I would mention meta-learning somewhere in the abstract, and definitely earlier than where it appears right now.

8. In the problem definition, the letter $K$ is used twice for different things. Something similar goes for $\alpha$, $\alpha_1$ and $\alpha_2$. The indexes of $G_i$ after Equation 6 are wrong: $1 \times C'$ instead of $C' \times 1$.

9. It is a bit odd to pass $X_{CE}^g$ as input to Algorithm 1, rather than computing it within the algorithm.

Broader Impact Concerns: I don't have any concerns regarding the broader impact.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: This paper presents a model for few-shot classification that makes use of two latent representations of an image, each with a different semantic content (one for context information, one for label information). The paper also introduces an attention-based network to extract features that exploits information from both query and support images of the few-shot task. The effectiveness of the proposed model and network, trained with variational inference, is demonstrated through a thorough set of experiments.

In the initial version of the paper, the reviewers raised some valid concerns, including: clarity/writing issues (including some mistakes in the equations), missing ablation studies, choice of baselines, and incomplete citations.
This was mostly fixed by the authors through an extensive revision, in line with the reviewers' suggestions. After the authors' rebuttal, Reviewers MEaT and Y8UK recognized that the paper had been significantly improved and that their concerns had been addressed. Reviewer sHKr, however, still has some concerns and leans towards rejection. Specifically:

+ The concerns about "decoupling" and "inference" seem to be caused by a mere nomenclature misunderstanding. Here, "decoupling" refers to being able to recover two latent representations, each with different semantic content. Perhaps a less confusing way of stating that would be to drop the word "decouple" and simply state that the model considers both types of latent variables, but I think it is also fine to leave as-is. The term "inference" refers to the process of learning the latent variables through approximate posterior inference (as opposed to "at test time"). In this sense, the model and inference are allowed to be different, and any choice of the variational model will lead to a valid ELBO - this should also address the point about the "theoretical justification".
+ Regarding the *missing citations*, the authors have now added citations to the original work in Section 4.5, as well as a connection to matching networks in Section 4.3.
+ The *ablation analysis* in Table 7 shows the importance of the proposed architecture, while Section 5.4 (last paragraph) also analyzes the importance of having two decoupled latent variables.
+ *Figure 5* shows empirically how $z_l$ captures label information whereas $z_c$ doesn't. As a **suggestion to the authors**, another way to quantify that would be to compute the mutual information between $Z_l$ and the labels (and between $Z_c$ and the labels); it may be nice to include that in the text as well.

I thus recommend acceptance. In the discussion period, the reviewers also mentioned that the paper can be improved, as there are some remaining writing issues. I encourage the authors to take a full pass over the text and also to take care of any of the suggestions from the reviewers that haven't been incorporated yet.

Finally, a couple of minor comments:
+ Please replace "Fig. 5.2" with "Fig. 5".
+ Please add a cross-link in the last paragraph of Section 5.4.

==================================================
# Bayesian Quadrature For Neural Ensemble Search

Saad Hamid *saad@robots.ox.ac.uk* University of Oxford

Xingchen Wan *xwan@robots.ox.ac.uk* University of Oxford

Binxin Ru *robin@robots.ox.ac.uk* University of Oxford

Michael Osborne *mosb@robots.ox.ac.uk* University of Oxford

Martin Jørgensen *martinj@robots.ox.ac.uk* University of Oxford

Reviewed on OpenReview: *https://openreview.net/forum?id=T5sXdAO3EQ*

## Abstract

Ensembling can improve the performance of Neural Networks, but existing approaches struggle when the architecture likelihood surface has dispersed, narrow peaks. Furthermore, existing methods construct equally weighted ensembles, and these are likely to be vulnerable to the failure modes of the weaker architectures. By viewing ensembling as approximately marginalising over architectures we construct ensembles using the tools of Bayesian Quadrature - tools which are well suited to the exploration of likelihood surfaces with dispersed, narrow peaks. Additionally, the resulting ensembles consist of architectures weighted commensurate with their performance. We show empirically - in terms of test likelihood, accuracy, and expected calibration error - that our method outperforms state-of-the-art baselines, and verify via ablation studies that its components do so independently.

## 1 Introduction

Neural Networks (NNs) are extremely effective function approximators. Their architectures are, however, typically designed by hand, a painstaking process. Therefore, there has been significant interest in the automatic selection of NN architectures. In addition to a search strategy, this involves defining a search space from which to select the architecture - a non-trivial task which is also an active area of research. Recent work shows ensembles of networks of different architectures from a given search space can outperform the single best architecture or ensembles of networks of the same architecture (Zaidi et al., 2022; Shu et al., 2022). Finding the single best architecture is typically referred to as Neural Architecture Search (NAS) (Zoph & Le, 2016; Elsken et al., 2019; He et al., 2021). Such ensembles improve performance on a range of metrics, including the test set's predictive accuracy, likelihood, and expected calibration error. The latter two metrics measure the quality of the model's uncertainty estimates, which can be poor for single architectures in some cases (Guo et al., 2017). Performant models in these metrics are crucial for systems which make critical decisions, such as self-driving vehicles.

Ensemble selection is an even more difficult problem to tackle manually than selecting a single architecture, as it requires a combinatorial search over the same space. Hence, interest in methods for automatic ensemble construction is growing. This paper targets exactly this problem.

Conceptually, Neural Ensemble Search (NES) algorithms can be split into two stages. The first is the candidate selection stage, which seeks to characterise the posterior distribution, p(α | D), given the training data D, over architectures from a given search space α ∈ 𝒜. Multiple approaches have been proposed. One such is an evolutionary strategy which seeks the modes of this distribution (Zaidi et al., 2022). An alternative is training a "supernet" and using it to learn the parameters of a variational approximation to this distribution (Shu et al., 2022). This involves evaluating the likelihood of a set of architectures from the search space, an evaluation which requires first training the architecture weights.
The second stage is ensemble selection, where the ensemble members are selected from the candidate set and each member's weight is chosen. Several approaches have also been suggested for ensemble selection, such as beam search and sampling from the (approximate) posterior over architectures.

In this work, we investigate novel approaches to both stages of a NES algorithm. We view ensembling, the averaging over architectures, as marginalisation with respect to a particular distribution over architectures. When this distribution is the posterior over architectures, we are taking the hierarchical Bayesian approach. The key advantage of this approach is the principled accounting of uncertainty, which also improves accuracy by preventing overconfidence in a single architecture. Additionally, this paradigm allows us to bring the tools of Bayesian Quadrature to bear upon the problem of Neural Ensemble Search. Specifically, the contributions of this work are as follows:¹

- We propose using an acquisition function for adaptive Bayesian Quadrature to select the candidate set of architectures to train. It is from this candidate set that the ensemble members are later selected.
- We show how recombination of the approximate posterior over architectures can be used to construct a weighted ensemble from the candidate set.
- We undertake an empirical comparison of our proposals against state-of-the-art baselines. Additionally, we conduct ablation studies to understand the effect of our proposals for each stage of the NES pipeline.

¹An implementation of our proposals can be found at https://github.com/saadhamidml/bq-nes.

## 2 Background

## 2.1 Neural Architecture Search (NAS)

NAS aims to automatically discover high-performing NN architectures and has shown promising performance in various tasks (Real et al., 2017; Zoph et al., 2018; Liu et al., 2019a). It is typically formulated as an optimisation problem, i.e. maximising some measure of performance f over a space of NN architectures 𝒜,

$$\alpha^{*}=\operatorname*{argmax}_{\alpha\in\mathcal{A}}f(\alpha).\tag{1}$$

Elsken et al. (2019) identify three conceptual elements of a NAS pipeline: a search space, a search strategy, and a performance estimation strategy.

The first part of a NAS pipeline - the search space - is the way in which the possible space of NN architectures is defined. In this paper, we require the search space to be such that a Gaussian Process (GP) can be defined upon it. In particular, we focus on the two types of search space most common in the literature. The first is a *cell-based* search space, which consists of architectures made by swapping out "cells" in a fixed macro-skeleton (Pham et al., 2018; Dong et al., 2021; Liu et al., 2019b). These cells are represented as directed acyclic graphs where each edge corresponds to an operation from a pre-defined set of operations. Typically, the macro-skeleton will be structured so that repeating copies of the same cell are stacked within the macro-skeleton. This structure allows for the representation of the architecture by the corresponding cell. The second is a *macro* search space defined by varying structural parameters such as kernel size, number of layers, and layer widths. An example of such a search space is the *Slimmable network* (Yu et al., 2019; Yu & Huang, 2019) on the MobileNet search space (Sandler et al., 2018): the largest possible network is trained and all other networks in the search space are given as sub-networks or "slices".

A NAS pipeline's second phase is the search strategy. This is a procedure for selecting which architectures to query the performance of.
All strategies will exhibit an exploration-exploitation trade-off, where exploration is covering the search space well, and exploitation is selecting architectures that are similar to the well-performing architectures in the already queried history.

The final element of a NAS pipeline is the performance estimation strategy, which is the method for querying the performance of a given architecture. Typically, this is done by training the NN weights, given the architecture, on a training dataset, and evaluating its performance on a validation set. This demands significant computation and practically limits the total number of architecture evaluations available to the search strategy. However, for some search spaces - where network weights are shared - performance estimation is considerably cheaper.

There exists a large body of literature devoted to NAS, pursuing a range of strategies such as one-shot NAS (Liu et al., 2019b; Xu et al., 2020; Chen et al., 2019; Yu et al., 2020; Bender et al., 2018), evolutionary strategies (Real et al., 2017; Liang et al., 2018; Liu et al., 2021), and Bayesian Optimisation.

## 2.2 Bayesian Optimisation For Neural Architecture Search

An effective approach to NAS is Bayesian Optimisation (BO) (Kandasamy et al., 2019; White et al., 2020; Ru et al., 2021; Wan et al., 2022; Shi et al., 2020; Zhou et al., 2023). On a high level, BO models the objective function f and sequentially selects where to query next based on an acquisition function, with the goal of finding the optimal value of the objective function in a sample-efficient manner.

Typically, the objective function is modelled using a Gaussian Process (GP) - a stochastic process for which all finite subsets of random variables are jointly normally distributed (Rasmussen & Williams, 2006). A GP is defined using a mean function m(α) that specifies the prior mean at α, and a kernel function k(α, α′) that specifies the prior covariance between f(α) and f(α′). The posterior, conditioned on a set of observations $A = \{\alpha_i\}_{i=1}^N$ and $y = [f(\alpha_1), \ldots, f(\alpha_N)]^T$, is also a GP with moments

$$m_{A}(\cdot)=m(\cdot)+K_{\cdot A}K_{AA}^{-1}\left(y-m(A)\right)\quad\text{and}\tag{2}$$
$$k_{A}(\cdot,\cdot')=K_{\cdot\cdot'}-K_{\cdot A}K_{AA}^{-1}K_{A\cdot},\tag{3}$$

where $K_{XY}$ indicates a matrix generated by evaluating the kernel function between all pairs of points in the sets X and Y. The prior mean function m is typically set to zero.

Ru et al. (2021) showed that the Weisfeiler-Lehman graph kernel (WL kernel) (Shervashidze, 2011) is an appropriate choice for modelling NN performance metrics on a cell-based NAS search space with a GP. To apply the WL kernel, a cell first needs to be represented as a labelled DAG. Next, a feature vector is built up by aggregating labels for progressively wider neighbourhoods of each node, and building a histogram of the resulting aggregated labels. The kernel is then computed as the dot product of the feature vectors for a pair of graphs.

A common acquisition function for BO is Expected Improvement (Garnett, 2021),

$$a_{EI}(\alpha)=\mathbb{E}_{p(f\mid D)}\left[\max\bigl(f(\alpha)-f(\hat{\alpha}),0\bigr)\right]\tag{4}$$

where α̂ is the best architecture found so far. Using this acquisition function in conjunction with a GP using the WL kernel was shown by Ru et al. (2021) to be effective for NAS.
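For concreteness, the sketch below implements Equations (2)–(4) with a zero prior mean, assuming the kernel matrices have already been computed (in the paper they would come from the WL kernel over cells; the kernel is left abstract here). This is a minimal illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(K_sA, K_AA, k_ss_diag, y, jitter=1e-8):
    """GP posterior moments (Eqs. 2-3) with zero prior mean.

    K_sA: kernel between test and observed points, shape (S, N).
    K_AA: kernel between observed points, shape (N, N).
    k_ss_diag: prior variances at the test points, shape (S,).
    y: observed function values, shape (N,).
    """
    K_reg = K_AA + jitter * np.eye(len(y))
    mean = K_sA @ np.linalg.solve(K_reg, y)
    # diag of K_sA K_AA^{-1} K_As, computed column-wise
    var = k_ss_diag - np.sum(K_sA.T * np.linalg.solve(K_reg, K_sA.T), axis=0)
    return mean, np.maximum(var, 0.0)

def expected_improvement(mean, var, f_best):
    """Closed form of Eq. (4) under the Gaussian predictive marginal."""
    std = np.sqrt(var) + 1e-12
    z = (mean - f_best) / std
    return std * (z * norm.cdf(z) + norm.pdf(z))
```

Maximising `expected_improvement` over the (finite) search space then selects the next architecture to train.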
## 2.3 Neural Ensemble Search

Neural Ensemble Search (Zaidi et al., 2022) is a method for automatically constructing ensembles of a given size, M, from a NAS search space 𝒜. First, a candidate set of architectures, A ⊂ 𝒜, is selected using a regularised evolutionary strategy (NES-RE), or random sampling from the search space. The authors propose several ensemble selection methods to subsequently select a subset of M architectures A_M ⊂ A. Of particular interest in this work are Beam Search (BS) and Weighted Stacking (WS). BS initially adds the best-performing architecture to the ensemble and then greedily adds the architecture from the candidate set (without replacement) that most improves the validation loss of the ensemble. WS optimises the ensemble weights over the whole candidate set on the validation loss (subject to the weights being non-negative and summing to one). The members with the highest M weights are included in the ensemble, and their corresponding weights renormalised. The authors compare BS to WS on the CIFAR-10 dataset, and find performance in terms of the log-likelihood of the test set to be better for BS for small ensembles, but similar for larger ensembles.

Neural Ensemble Search via Bayesian Sampling (Shu et al., 2022) approximates the posterior distribution over architectures p(α | D) with a variational distribution of the form $q(\alpha) = \prod_i q_i(o \mid \theta_i)$, where i iterates over the connections within a cell, o is the operation for connection i, and $\theta_i$ are the variational parameters for $q_i$. The form of $q_i$ is chosen to be a softmax over $\theta_i$. The ensemble is then selected by using Stein Variational Gradient Descent with Regularised Diversity to select a diverse set of M samples from (a continuous relaxation of) the variational distribution.

Deep Ensembles (Lakshminarayanan et al., 2017) seek to approximately marginalise over the parameters of a given NN architecture. The architecture is trained from several random initialisations, and the ensemble makes a prediction as an equally weighted sum of these. Hyper-deep ensembles (Wenzel et al., 2020) build on this idea by training architectures from several randomly selected hyperparameter settings. They then construct an ensemble by using Beam Search, with replacement.

Relatedly, there has been interest in ensembling for improving uncertainty calibration in the context of Reinforcement Learning (Osband et al., 2018; Dwaracherla et al., 2022). Additionally, methods for marginalising over the parameters of a fixed architecture, such as Shui et al. (2018); He et al. (2020); D'Angelo & Fortuin (2021), frequently require constructing ensembles. Such methods are orthogonal to our work, which seeks to construct ensembles of different architectures, rather than ensembles of different parameter settings of the same architecture.
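As a reference point for the BS procedure described above, a minimal version might look like the following; the interface (`probs` as precomputed validation predictions, a negative log-likelihood criterion) is an assumption for illustration, not the authors' code. Hyper-deep ensembles correspond to the same loop with `replace=True`.

```python
import numpy as np

def beam_search_ensemble(probs, y_val, M, replace=False):
    """Greedy ensemble selection in the style of Beam Search (BS).

    probs: (N, V, C) class probabilities of N candidate networks on V
    validation points; y_val: (V,) integer labels.
    """
    def nll(p):  # validation negative log-likelihood of an averaged predictor
        return -np.mean(np.log(p[np.arange(len(y_val)), y_val] + 1e-12))

    ensemble = [int(np.argmin([nll(p) for p in probs]))]  # best single network
    while len(ensemble) < M:
        pool = range(len(probs)) if replace else \
               [i for i in range(len(probs)) if i not in ensemble]
        best = min(pool, key=lambda i: nll(np.mean(probs[ensemble + [i]], axis=0)))
        ensemble.append(int(best))
    return ensemble
```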
## 2.4 Bayesian Quadrature

Bayesian Quadrature (BQ) (O'Hagan, 1991; Minka, 2000) is a probabilistic numerical (Hennig & Osborne, 2022) integration technique that targets the computation of $Z = \int f(\cdot)\,d\pi(\cdot)$ based on evaluations of the integrand f (assuming a given prior π). Similar to BO, it maintains a surrogate model over the integrand f, which induces a posterior over the integral value Z. BQ also makes use of an acquisition function to iteratively select where next to query the integrand. The surrogate model for BQ is usually chosen to be a GP, and this induces a Gaussian posterior over $Z \sim \mathcal{N}(\mu_Z, \sigma_Z)$. The moments of this posterior are given by

$$\mu_{Z}=\int K(\cdot,X)\,d\pi(\cdot)\,K_{XX}^{-1}\mathbf{f},\quad\text{and}\tag{5}$$
$$\sigma_{Z}=\iint K(\cdot,\cdot')-K(\cdot,X)K_{XX}^{-1}K(X,\cdot')\,d\pi(\cdot)\,d\pi(\cdot'),\tag{6}$$

where X is the set of query points, and **f** are the corresponding integrand observations. Note that the posterior mean $\mu_Z$ takes the form of a quadrature rule - a weighted sum of function evaluations $\sum_i w_i f(x_i)$, where the $w_i$ are the elements of the vector $\int K(\cdot,X)\,d\pi(\cdot)\,K_{XX}^{-1}$.

Frequently, the integrand of interest is non-negative. Important examples of such integrands are likelihood functions (which are integrated with respect to a prior to compute a model evidence) and predictive densities (which are integrated with respect to a posterior to compute a posterior predictive density). Warped Bayesian Quadrature (Osborne et al., 2012; Gunter et al., 2014; Chai & Garnett, 2019) allows practitioners to incorporate the prior knowledge that the integrand is non-negative into the surrogate model. Of particular interest in this work will be the WSABI-L model (Gunter et al., 2014), which models the square root of the integrand with a GP, $\sqrt{2(f(x)-\beta)} \sim \mathcal{GP}(\mu_D(x), \Sigma_D(x,x'))$. This induces a (non-central) chi-squared distribution over f, which can be approximated with a GP, with moments

$$m(x)=\beta+\frac{1}{2}\mu_{D}(x)^{2},\tag{7}$$
$$k(x,x')=\mu_{D}(x)\Sigma_{D}(x,x')\mu_{D}(x').\tag{8}$$

Gunter et al. (2014) established, empirically, that the uncertainty sampling acquisition function works well for Bayesian Quadrature. This acquisition function targets the variance of the integrand,

$$a_{US}(x)=\Sigma_{D}(x,x)\,\mu_{D}(x)^{2}\,\pi(x)^{2}.\tag{9}$$

This naturally trades off between exploration (regions where $\Sigma_D(x,x)$ is high) and exploitation (regions where $\mu_D(x)$ is high - most of the volume under the integrand is concentrated here).

Just as BO is a natural choice for NAS - an expensive black-box optimisation problem - so BQ is a natural choice for NES - an expensive black-box marginalisation problem. It is this realisation that inspires our proposals in Section 3.
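The quantities in Equations (5)–(9) are straightforward to compute when the prior π is a discrete distribution over a finite pool of architectures, as in this paper, since the integrals become prior-weighted sums. Below is a small NumPy sketch of the BQ posterior mean, the WSABI-L moment approximation, and the uncertainty-sampling score; it assumes precomputed kernel matrices and is illustrative only.

```python
import numpy as np

def bq_mean(prior, K_domain_X, K_XX, f, jitter=1e-8):
    """Eq. (5) for a discrete prior: quadrature weights w = pi^T K(.,X) K_XX^-1.

    prior: (D,) prior mass on each domain point; K_domain_X: (D, N);
    K_XX: (N, N); f: (N,) integrand observations.
    """
    z = prior @ K_domain_X                                   # ∫ K(.,X) dπ as a sum
    w = np.linalg.solve(K_XX + jitter * np.eye(len(f)), z)   # quadrature weights
    return w @ f

def wsabi_l_moments(mu_D, Sigma_D_diag, beta):
    """Diagonal of the approximate GP moments of the warped integrand (Eqs. 7-8)."""
    m = beta + 0.5 * mu_D ** 2
    k_diag = mu_D ** 2 * Sigma_D_diag
    return m, k_diag

def uncertainty_sampling(mu_D, Sigma_D_diag, prior):
    """Eq. (9): high where the integrand is both uncertain and large."""
    return Sigma_D_diag * mu_D ** 2 * prior ** 2
```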
## 2.5 Recombination

Typically, Neural Ensemble Search requires selecting a subset of the trained architectures to build the ensemble, as making predictions with the whole candidate set is too computationally burdensome. To achieve this within the Bayesian Quadrature framework will require reducing the support of the quadrature rule. This problem is referred to as recombination (Litterer & Lyons, 2012). Given a non-negative measure supported on N points $\{(w_n, x_n)\}_{n=1}^N$, where $w_n \ge 0$ and $\sum_{n=1}^N w_n = 1$, and M − 1 "test" functions $\{\phi_t(\cdot)\}_{t=1}^{M-1}$, it is possible to find a subset of M < N points $\{x_m\}_{m=1}^M \subset \{x_n\}_{n=1}^N$ for which

$$\sum_{m=1}^{M}w_{m}\phi_{t}(x_{m})=\sum_{n=1}^{N}w_{n}\phi_{t}(x_{n})\tag{10}$$

for all $\phi_t$, with $w_m \ge 0$ and $\sum_{m=1}^{M} w_m = 1$ (Tchernychova, 2015).

For Kernel Quadrature, one can use the Nyström approximation of the kernel matrix to obtain a set of test functions (Hayakawa et al., 2022; Adachi et al., 2022). Using a subset, S, of M − 1 data points, the kernel can be approximated as $\tilde{k}(x,x') = k(x,S)k(S,S)^{-1}k(S,x')$. By taking an eigendecomposition, $k(S,S) = U\Lambda U^{T}$, the approximate kernel can be expressed as

$$\tilde{k}(x,x')=\sum_{t}^{M-1}\frac{1}{\lambda_{t}}\big(u_{t}^{T}k(S,x)\big)\big(u_{t}^{T}k(S,x')\big)\tag{11}$$

where $u_t$ are the columns of U, and $\lambda_t$ the diagonal elements of Λ. We can then use $\phi_t(\cdot) = u_t^T k(S,\cdot)$ as test functions.
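To illustrate Equations (10)–(11), the sketch below builds the Nyström test functions and reduces a discrete measure by a simple Carathéodory-style null-space elimination: any null vector of the test-function matrix leaves the integrals unchanged, so pushing along it until a weight hits zero drops one atom at a time. This is a plain reference implementation under the assumption of a finite domain and precomputed kernel matrices; Tchernychova (2015) and Hayakawa et al. (2022) describe faster algorithms.

```python
import numpy as np

def nystrom_test_functions(K_SS, K_S_domain, eps=1e-10):
    """Test functions phi_t(.) = u_t^T k(S, .) from Eq. (11), evaluated on a
    finite domain. Rows of the returned matrix are the phi_t."""
    lam, U = np.linalg.eigh(K_SS)
    return U[:, lam > eps].T @ K_S_domain

def recombine(phis, w, tol=1e-12):
    """Reduce the measure {(w_n, x_n)} so that Eq. (10) holds for every test
    function. phis: (T, N) test-function values; w: (N,) weights summing to 1."""
    Phi = np.vstack([np.ones(len(w)), phis])      # also preserve total mass
    active = np.arange(len(w))
    w = w.astype(float).copy()
    while len(active) > Phi.shape[0]:
        A = Phi[:, active]
        v = np.linalg.svd(A)[2][-1]               # a null vector of A
        if not np.any(v > 0):
            v = -v
        t = np.min(np.where(v > 0, w[active] / v, np.inf))
        w[active] -= t * v                        # A @ w[active] is unchanged
        active = active[w[active] > tol]          # drop the zeroed atom(s)
    return active, w[active]
```

Applied with the approximate posterior over architectures as the input measure, `recombine` returns the retained points and their new non-negative weights - the pattern used by Algorithm 2 in Section 3.2.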
## 3 Bayesian Quadrature For Neural Ensemble Search

We decompose NES into two sub-problems:

1. The selection of a candidate set of architectures $\{\alpha_i\}_{i=1}^N = A \subset \mathcal{A}$ for which to train the architecture parameters.
2. The selection of a set of M members from the candidate set to include in the ensemble, and their weights, $w \in \mathbb{R}^M$.

We take novel approaches to each of these sub-problems, described respectively in the following two subsections. Algorithms 1, 2 and 3 summarise our proposals.

## 3.1 Building The Candidate Set

An ensemble's prediction is a weighted sum of the predictions of its constituent members, $A_M$, and this can always be viewed as approximating an expectation with respect to a distribution, π, over architectures,

$$\mathbb{E}_{\pi(\alpha)}\big[p(c\mid x,\alpha)\big]=\sum_{\alpha\in\mathcal{A}}p(c\mid x,\alpha)\,\pi(\alpha)\approx\sum_{\alpha\in A_{M}}p(c\mid x,\alpha)\,\pi_{M}(\alpha).\tag{12}$$

![5_image_0.png](5_image_0.png)

Figure 1: A schematic representation of our proposal. The plot on the left shows a Gaussian Process modelling the likelihood over the space of architectures. The architectures to train and evaluate the likelihood for are selected by maximising a Bayesian Quadrature acquisition function, as described in Section 3.1. One of the algorithms described in Section 3.2 is then used to select the subset of architectures to include in the ensemble, along with their weights. The final prediction is then a linear combination of the predictions of each ensemble member, as shown by the bar plots on the right (where each bar indicates the probability assigned to a particular class).

p(c | x, α, D) is the predictive probability assigned to class c ∈ {1, . . . , C} by the architecture α (conditioned on the training data D) given the input x ∈ 𝒳. The expectation in Equation (12) corresponds to a marginalisation over architectures. The set of architectures $A_M$ and their weights $\pi_M$ can be seen as a quadrature rule to approximate this marginalisation. When π is the posterior over architectures p(α | D), we are performing hierarchical Bayesian inference, and the result is the posterior predictive distribution,

$$p(c\mid x,D)=\sum_{\alpha\in\mathcal{A}}p(c\mid x,\alpha,D)\,p(\alpha\mid D)=\frac{\sum_{\alpha\in\mathcal{A}}p(c\mid x,\alpha,D)\,p(D\mid\alpha)\,p(\alpha)}{\sum_{\alpha\in\mathcal{A}}p(D\mid\alpha)\,p(\alpha)}.\tag{13}$$

By taking the view of ensembling as marginalisation, the practitioner has the ability to make their belief over 𝒜 explicit. As the training data is finite, it is rarely appropriate to concentrate all of the probability mass of π on a single architecture. Arguably, π(α) = p(α | D) is the most appropriate choice of π as it is the distribution implied by the prior and the architectures' ability to explain the observed data. The hierarchical Bayesian framework should offer the most principled accounting of uncertainty in the choice of architecture (a more concentrated π would over-fit, a less concentrated π would under-fit).

From Equation (13) we see that, to compute the posterior predictive, we need to compute C sums of products of functions of the architecture and the architecture likelihoods. Intuitively, we expect a quadrature scheme that approximates well the sum in the denominator of (13) will also approximate the sum in the numerator well. Therefore, we propose using a Bayesian Quadrature acquisition function to build up the candidate set, as these architectures will form the nodes of a query-efficient quadrature scheme for (13) and so a good basis for an ensemble.

The likelihood of an architecture p(D | α) is not typically available, as this would require marginalisation over the NN weights, w, of the architecture. We instead approximate it using the MLE ŵ, which is equivalent to assuming the prior over the architecture weights is a Dirac delta distribution at the maximiser of the (architecture weights') likelihood function:

$$p(D\mid\alpha)=\int p(D\mid w,\alpha)\,p(w\mid\alpha)\,dw\approx p(D\mid\hat{w},\alpha),\tag{14}$$
$$\hat{w}=\operatorname*{argmax}_{w}\,p(D\mid w,\alpha)\,p(w\mid\alpha).\tag{15}$$

Computing p(c | x, α, D) requires an analogous intractable marginalisation. We approximate it similarly, noting that it depends only indirectly on the training data, through the optimisation procedure, i.e.

$$p(c\mid x,\alpha,D)=\int p(c\mid x,w,\alpha,D)\,p(w\mid\alpha,D)\,dw\approx p(c\mid x,\hat{w},\alpha).\tag{16}$$

Concretely, we place a functional prior on the architecture likelihood surface, warped using the square-root transform, $\sqrt{2(p(D\mid\hat{w},\alpha)-\beta)} \sim \mathcal{GP}$, and use uncertainty sampling to make observations of the likelihood at a set of architectures $\{\alpha_i\}_{i=1}^N = A \subset \mathcal{A}$. This provides us with an estimate of the model evidence $Z = \sum_{\alpha\in\mathcal{A}} p(D\mid\hat{w},\alpha)\,p(\alpha)$, which we denote Ẑ. The computation of this estimate requires Monte Carlo sampling to approximate sums of (products of) the WL kernel over 𝒜. Note this is far more feasible than approximating the original sums in Equation (13) with Monte Carlo sampling, as $K(\alpha_j, A)$ is far cheaper to evaluate than $p(D\mid\hat{w},\alpha_j)$ or $p(c\mid x,\hat{w},\alpha_j)$ - either would require training architecture $\alpha_j$.

Algorithm 1 Candidate set selection algorithm using a BQ acquisition function. Returns architectures $A = \{\alpha_i\}_{i=1}^N$ and their corresponding likelihoods $L = \{p(D \mid \hat{w}, \alpha_i)\}_{i=1}^N$.

A, L ← sample(n, 𝒜) ▷ Initial samples.
θ ← argmax_θ p(L | A, θ) ▷ Optimise WL kernel.
while i > 0 do ▷ i: remaining evaluation budget.
    α ← argmax_{α∈𝒜} acquisition_function(α, A, L, θ)
    A ← {A, α}
    L ← {L, p(D | ŵ, α)}
    θ ← argmax_θ p(L | A, θ)
end while
return A, L
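A schematic Python rendering of Algorithm 1 is given below. The callables `evaluate_likelihood`, `fit_surrogate`, and `acquisition` are placeholders for, respectively, training an architecture and evaluating p(D | ŵ, α), fitting the WSABI-L surrogate (including kernel hyperparameters θ), and scoring Eq. (9); none of these names come from the paper's codebase.

```python
import numpy as np

def select_candidate_set(search_space, evaluate_likelihood, fit_surrogate,
                         acquisition, n_init=10, budget=150, seed=0):
    """Schematic translation of Algorithm 1 over a finite search space."""
    rng = np.random.default_rng(seed)
    chosen = list(rng.choice(len(search_space), size=n_init, replace=False))
    A = [search_space[i] for i in chosen]
    L = [evaluate_likelihood(a) for a in A]        # p(D | w_hat, alpha)
    model = fit_surrogate(A, L)                    # optimise kernel hyperparameters
    while len(A) < budget:
        pool = [i for i in range(len(search_space)) if i not in chosen]
        i_next = max(pool, key=lambda i: acquisition(model, search_space[i]))
        chosen.append(i_next)
        A.append(search_space[i_next])
        L.append(evaluate_likelihood(A[-1]))
        model = fit_surrogate(A, L)                # refit after each observation
    return A, L
```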
## 3.2 Selecting The Ensemble

In principle, the ensemble can be constructed using the weights provided by the quadrature scheme, as these weights naturally trade off between member diversity and member performance. However, we wish to select a subset of the candidate set for the ensemble (as it is assumed that an ensemble of the whole candidate set is too costly to be practical for deployment). Concretely, we seek a subset $A_M \subset A$, along with weights $w \in \mathbb{R}^M$, such that

$$p(c\mid x,D)\approx\sum_{n}^{N}\frac{1}{\hat{Z}}\,p(D\mid\alpha_{n})\,p(\alpha_{n})\,p(c\mid x,\alpha_{n},D)+\epsilon\tag{17}$$
$$\approx\sum_{m}^{M}w_{m}\,p(c\mid x,\alpha_{m},D)+\epsilon.\tag{18}$$

We expect ϵ to be small if regions of high likelihood have been well explored by the acquisition function in the building of the candidate set. To select the weights w and the set $A_M$ we can use any recombination algorithm, using the Nyström approximation to generate the test functions, as described in Section 2.5, and the estimated posterior over architectures as the measure to recombine. We refer to this algorithm as Posterior Recombination (PR).

A second approach, which we refer to as Re-weighted Stacking (RS), is a modification of Weighted Stacking. Similar to WS, we optimise the weights of an ensemble of the whole candidate set to minimise the validation loss. The ensemble members are then chosen by selecting the members with the M highest weights. However, rather than renormalising the corresponding weights, as suggested in Zaidi et al. (2022), we reallocate the weight assigned to excluded architectures proportionally to the relative covariance between them and the ensemble members. Concretely, let $\{(\alpha_m, \omega_m)\}_{m=1}^M$ be the ensemble members and their optimised weights, and $\{(\alpha_l, \omega_l)\}_{l=1}^{N-M}$ be the excluded architectures and their optimised weights. The weights of the ensemble $w \in \mathbb{R}^M$ are given by

$$w_{m}=\omega_{m}+\sum_{l=1}^{N-M}\frac{k(\alpha_{m},\alpha_{l})}{\sum_{m'=1}^{M}k(\alpha_{m'},\alpha_{l})}\,\omega_{l}.\tag{19}$$

Algorithm 2 Posterior recombination.

µ ← [p(D | α_n) p(α_n) / Ẑ]_{n=1}^N
T ← nystrom_test_functions(K_AA, A) ▷ From Eq (11).
w, A_M ← recombination(T, µ)

Algorithm 3 Re-weighted stacking.

ω ← argmin_{ω∈∆} loss(Σ_i ω_i p(c | x, α_i, D), D_val)
I ← select_top(M, ω) ▷ Select top M.
for m in I do
    w_m ← reweight(m, I, ω, k(A, A)) ▷ Eq (19).
end for

Our proposals can be combined to yield two possible algorithms. Both share the same candidate selection strategy, which uses a WSABI-L surrogate model with the uncertainty sampling acquisition function to select the set of architectures to train (Algorithm 1). "BQ-R" then uses posterior recombination (Algorithm 2) to select a subset of architectures from the candidate set to include in the ensemble, and to choose their corresponding weights. "BQ-S" instead uses re-weighted stacking (Algorithm 3) to select, and weight, the ensemble members from the candidate set. Note that "BQ-R" performs approximate hierarchical Bayesian inference using BQ, but "BQ-S" is a heuristic inspired by BQ. Figure 1 is a schematic representation of these algorithms.
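The re-weighting step of Algorithm 3 (Eq. 19) fits in a few lines; the sketch below assumes the optimised stacking weights over all N candidates and the N × N kernel matrix are already available, and is illustrative only.

```python
import numpy as np

def reweight_stacking(omega, K, M):
    """Eq. (19): keep the M highest-weight members and reallocate the weight
    of each excluded architecture in proportion to its kernel similarity
    to the kept members. omega: (N,) optimised weights; K: (N, N) kernel."""
    keep = np.argsort(omega)[-M:]
    drop = np.setdiff1d(np.arange(len(omega)), keep)
    w = omega[keep].astype(float).copy()
    for l in drop:
        sim = K[keep, l]
        w += sim / sim.sum() * omega[l]
    return keep, w
```

Because the WL kernel is a dot product of non-negative histogram features, the similarities are non-negative, so the resulting weights remain non-negative and still sum to one.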
## 4 Experiments

We begin by performing comparisons on the NATS-Bench benchmark (Dong et al., 2021). Specifically, we use the provided topology search space, which consists of cells with 4 nodes, 6 connections, and 5 possible operations (including "zeroise", which is equivalent to removing a connection) in a fixed macro-skeleton. The architecture weights are trained for 200 epochs on the CIFAR-100 and ImageNet16-120 (a smaller version of ImageNet with 16×16 pixel input images, and 120 classes) datasets. We will compare ensemble performance as measured by test accuracy, test likelihood, and expected calibration error on the test set for a range of ensemble sizes. The log-likelihood of the test set measures a model's performance both in terms of accuracy and uncertainty calibration, as placing a very low probability on the true class (i.e. being confidently wrong) is heavily penalised by this metric.

First, we verify that our chosen surrogate model (WSABI-L) performs well. Table 1 shows model performance, measured by root mean square error (RMSE) and negative log predictive density (NLPD) on a test set. The test set is selected by ranking all the architectures in the search space by validation loss, and selecting every 25th architecture. This ensures that the test set contains architectures across the full range of performance. We build on the results of Ru et al. (2021), who showed that a GP with a WL kernel is able to model the architecture likelihood surface well. Our results show that WSABI-L (with a WL kernel) is a consistently better model than an ordinary GP (with a WL kernel).

| | CIFAR-100 | | ImageNet16-120 | |
|---------|---------------|----------------|---------------|----------------|
| Model | RMSE | NLPD | RMSE | NLPD |
| GP | 6.165 ± 0.116 | 0.124 ± 0.013 | 9.610 ± 0.626 | 0.121 ± 0.012 |
| WSABI-L | 5.797 ± 0.043 | -2.741 ± 0.095 | 4.078 ± 0.058 | -3.437 ± 0.040 |

Table 1: The (normalised) RMSE and NLPD of a WSABI-L surrogate and a GP surrogate on the test sets.

Next, we examine the effect of the candidate selection algorithm, shown in Table 2. In all cases, we use our variant of weighted stacking, described in Section 3.2, to select and weight the ensemble members. We compare Expected Improvement (EI) with a GP surrogate with a WL kernel, Uncertainty Sampling with a WSABI-L surrogate using a WL kernel (US), and Regularised Evolution (RE). We find that the US candidate set performs best for ImageNet16-120 in terms of accuracy and LL, but that the RE candidate set performs best for ECE on ImageNet16-120, and across all metrics for CIFAR-100.

| | CIFAR-100 | | | ImageNet16-120 | | |
|-----------|------------|---------------|---------------|------------|---------------|---------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 RE | 77.1 ± 0.2 | 0.018 ± 0.001 | -4385 ± 24.89 | 51.9 ± 0.2 | 0.029 ± 0.002 | -5595 ± 12.15 |
| EI | 76.1 ± 0.2 | 0.024 ± 0.001 | -4472 ± 29.74 | 51.4 ± 0.2 | 0.034 ± 0.002 | -5632 ± 11.91 |
| US | 76.6 ± 0.2 | 0.021 ± 0.001 | -4417 ± 35.85 | 52.2 ± 0.1 | 0.029 ± 0.001 | -5543 ± 10.87 |
| M = 5 RE | 78.5 ± 0.2 | 0.033 ± 0.001 | -4013 ± 19.08 | 53.3 ± 0.2 | 0.043 ± 0.002 | -5417 ± 12.90 |
| EI | 77.4 ± 0.2 | 0.039 ± 0.001 | -4126 ± 22.25 | 52.6 ± 0.3 | 0.053 ± 0.003 | -5479 ± 15.55 |
| US | 77.8 ± 0.2 | 0.040 ± 0.002 | -4077 ± 33.60 | 53.6 ± 0.1 | 0.050 ± 0.002 | -5380 ± 12.31 |
| M = 10 RE | 79.4 ± 0.1 | 0.053 ± 0.002 | -3759 ± 16.38 | 54.5 ± 0.2 | 0.065 ± 0.002 | -5280 ± 16.85 |
| EI | 78.2 ± 0.2 | 0.055 ± 0.002 | -3889 ± 23.79 | 53.4 ± 0.2 | 0.071 ± 0.002 | -5368 ± 19.47 |
| US | 78.6 ± 0.2 | 0.059 ± 0.001 | -3843 ± 22.71 | 54.7 ± 0.1 | 0.072 ± 0.001 | -5262 ± 9.964 |

Table 2: Test accuracy, expected calibration error, and log-likelihood on CIFAR-100 and ImageNet16-120 for our candidate set selection method (US) and baselines. The numbers shown are means and standard errors of the mean over 10 repeats. Each candidate set selection method is initialised with 10 random architectures, and used to build a set of 150 architectures. The ensemble is chosen and weighted using our variant of weighted stacking. We see that the RE candidate set performs best for CIFAR-100 and in terms of ECE for ImageNet16-120. The US candidate set performs best in terms of accuracy and LL for ImageNet16-120.

We then move on to comparing the effect of the ensemble selection algorithm, shown in Table 3. In all cases, we use uncertainty sampling with a WSABI-L surrogate to build the candidate set.
We initialise with 10 architectures randomly selected from a uniform prior over the search space, and use the acquisition function to build a set of 150 architectures. We compare beam search (BS), weighted stacking (WS), recombination of the approximate posterior (PR), and re-weighted stacking (RS). We find that the stacking variants consistently perform best (with RS slightly improving upon WS) in terms of accuracy and LL, and PR in terms of ECE for larger datasets.

| | CIFAR-100 | | | ImageNet16-120 | | |
|-----------|------------|---------------|---------------|------------|---------------|---------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 BS | 75.2 ± 0.2 | 0.030 ± 0.002 | -4500 ± 41.04 | 52.2 ± 0.1 | 0.036 ± 0.002 | -5572 ± 13.17 |
| WS | 76.4 ± 0.2 | 0.021 ± 0.001 | -4426 ± 35.87 | 52.1 ± 0.1 | 0.029 ± 0.001 | -5545 ± 10.57 |
| PR | 71.9 ± 0.8 | 0.075 ± 0.025 | -5259 ± 300.9 | 46.7 ± 2.3 | 0.052 ± 0.021 | -6347 ± 480.5 |
| RS | 76.6 ± 0.2 | 0.021 ± 0.001 | -4417 ± 35.85 | 52.2 ± 0.1 | 0.029 ± 0.001 | -5543 ± 10.87 |
| M = 5 BS | 76.4 ± 0.2 | 0.048 ± 0.002 | -4233 ± 36.48 | 53.4 ± 0.1 | 0.058 ± 0.002 | -5410 ± 10.70 |
| WS | 77.7 ± 0.2 | 0.036 ± 0.002 | -4088 ± 34.13 | 53.6 ± 0.1 | 0.049 ± 0.001 | -5382 ± 12.55 |
| PR | 73.3 ± 0.9 | 0.040 ± 0.004 | -4768 ± 174.3 | 50.7 ± 0.3 | 0.028 ± 0.004 | -5647 ± 50.58 |
| RS | 77.8 ± 0.2 | 0.040 ± 0.002 | -4077 ± 33.60 | 53.6 ± 0.1 | 0.050 ± 0.002 | -5380 ± 12.31 |
| M = 10 BS | 76.9 ± 0.3 | 0.063 ± 0.001 | -4079 ± 50.29 | 54.1 ± 0.1 | 0.076 ± 0.001 | -5307 ± 9.795 |
| WS | 78.5 ± 0.2 | 0.055 ± 0.002 | -3848 ± 23.96 | 54.6 ± 0.1 | 0.070 ± 0.001 | -5264 ± 10.11 |
| PR | 75.5 ± 0.9 | 0.037 ± 0.002 | -4309 ± 172.6 | 52.3 ± 0.3 | 0.018 ± 0.001 | -5412 ± 22.96 |
| RS | 78.6 ± 0.2 | 0.059 ± 0.001 | -3843 ± 22.71 | 54.7 ± 0.1 | 0.072 ± 0.001 | -5262 ± 9.964 |

Table 3: Test accuracy, expected calibration error, and log-likelihood on CIFAR-100 and ImageNet16-120 for Beam Search (BS), Weighted Stacking (WS), Posterior Recombination (PR), and Re-weighted Stacking (RS). The numbers shown are means and standard errors of the mean over 10 repeats. The candidate set selection method is our method - Uncertainty Sampling with a WSABI-L surrogate - initialised with 10 random architectures, and used to build a set of 150 architectures. We see that the stacking variants consistently perform best for accuracy and LL, with RS slightly improving upon WS. For ECE, RS and WS perform well for small ensembles, but PR works best for larger ensembles.

We then proceed to compare the two variants of our algorithm - BQ-R and BQ-S - with several baselines.

**Random** The ensemble is an evenly weighted combination of M architectures randomly sampled from the prior p(α) over the search space.

**Hyper-DE** The candidate set is selected by randomly sampling from the prior p(α). The ensemble is then chosen using beam search, with replacement.

**NES-RE** The candidate set is selected using regularised evolution, and the ensemble members are chosen using beam search. The ensemble members are equally weighted.

**NES-BS** The posterior over architectures p(α | D) is approximated using a variational distribution. The ensemble is constructed by sampling M architectures from the variational distribution using Stein Variational Gradient Descent. As no implementation is publicly available, we provide our own.
However, our implementation learns the variational parameters by approximating the expected log-likelihood term of the ELBO using a subset of the search space, rather than by backpropagating through a "supernet" as described by Shu et al. (2022). The subset we use is the 150 architectures in the search space with the highest likelihoods on the validation set. (Of course, this is only possible when working with a NAS benchmark.) We argue that our approximation is suitable as most posterior mass will be concentrated on these architectures, so a good variational distribution will concentrate mass on them as well. Additionally, our approximation is much faster as it does not require training a supernet.

Table 4 presents the results on CIFAR-100 and ImageNet16-120 for a range of ensemble sizes. Whilst NES-RE matches or does slightly better than our proposals in terms of accuracy and LL on CIFAR-100, we find that both BQ-S and BQ-R often perform better in terms of expected calibration error. BQ-S achieves the best performance on ImageNet16-120 in terms of LL across all ensemble sizes, is joint best with NES-RE in terms of accuracy, and often outperforms NES-RE in terms of ECE.

Next, we perform a study on a larger search space defined by a "slimmable network" (Yu et al., 2019), consisting of 614,625 architectures. Sub-networks or "slices" of this supernet constitute architectures within this search space. The architectures are structured as a chain of 7 blocks, each of which can have up to 4 layers. These sub-networks can be represented in a 28-dimensional ordinal space (with 4 options along each dimension). We compare the best-performing variant of our method, BQ-S, and the best-performing baseline, NES-RE, from the smaller NATS-Bench search space. We use an RBF kernel with WSABI-L for Uncertainty Sampling with our method BQ-S, and compare it to NES-RE. The results are shown in Table 5. We see that BQ-S consistently outperforms NES-RE in terms of the log-likelihood of the test set and, for CIFAR-100, in terms of expected calibration error as well.

Finally, we perform experiments to examine robustness to dataset shift. Previous work has provided evidence that ensembling of Neural Networks provides robustness to shifts in the underlying data distribution (Zaidi et al., 2022; Shu et al., 2022). However, these investigations have assumed the availability of a validation set from the shifted distribution, which we argue is unrealistic in practice. Instead, we examine the setting where only the test set is shifted, and the validation set is representative of the training set.
| | CIFAR-100 | | | ImageNet16-120 | | |
|---------------|------------|---------------|---------------|------------|---------------|---------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| Best Single | 69.1 | 0.088 | -5871 | 45.9 | 0.062 | -6386 |
| M = 3 Random | 69.2 ± 1.5 | 0.075 ± 0.007 | -5778 ± 291.3 | 39.7 ± 2.2 | 0.097 ± 0.007 | -7459 ± 309.6 |
| Hyper-DE | 76.2 ± 0.2 | 0.030 ± 0.002 | -4390 ± 20.95 | 51.4 ± 0.2 | 0.040 ± 0.002 | -5659 ± 24.21 |
| NES-RE | 76.6 ± 0.2 | 0.026 ± 0.002 | -4340 ± 19.58 | 52.0 ± 0.2 | 0.033 ± 0.002 | -5582 ± 8.858 |
| NES-BS | 66.2 ± 1.5 | 0.073 ± 0.009 | -6477 ± 203.0 | 45.7 ± 0.3 | 0.058 ± 0.003 | -6403 ± 28.04 |
| BQ-R | 71.9 ± 0.8 | 0.075 ± 0.025 | -5259 ± 300.9 | 46.7 ± 2.3 | 0.052 ± 0.021 | -6347 ± 480.5 |
| BQ-S | 76.6 ± 0.2 | 0.021 ± 0.001 | -4417 ± 35.85 | 52.2 ± 0.1 | 0.029 ± 0.001 | -5543 ± 10.87 |
| M = 5 Random | 72.2 ± 0.9 | 0.111 ± 0.009 | -5304 ± 180.9 | 42.7 ± 1.5 | 0.129 ± 0.008 | -7135 ± 216.1 |
| Hyper-DE | 77.6 ± 0.1 | 0.048 ± 0.002 | -4099 ± 12.67 | 52.4 ± 0.2 | 0.061 ± 0.001 | -5535 ± 22.38 |
| NES-RE | 78.2 ± 0.1 | 0.042 ± 0.002 | -4002 ± 17.11 | 53.4 ± 0.2 | 0.051 ± 0.001 | -5404 ± 12.59 |
| NES-BS | 65.9 ± 1.5 | 0.073 ± 0.009 | -6481 ± 208.7 | 45.7 ± 0.3 | 0.058 ± 0.003 | -6403 ± 28.04 |
| BQ-R | 73.3 ± 0.9 | 0.040 ± 0.004 | -4768 ± 174.3 | 50.7 ± 0.3 | 0.028 ± 0.004 | -5647 ± 50.58 |
| BQ-S | 77.8 ± 0.2 | 0.040 ± 0.002 | -4077 ± 33.60 | 53.6 ± 0.1 | 0.050 ± 0.002 | -5380 ± 12.31 |
| M = 10 Random | 74.7 ± 0.3 | 0.150 ± 0.010 | -5018 ± 82.21 | 45.1 ± 0.4 | 0.159 ± 0.008 | -6916 ± 73.21 |
| Hyper-DE | 78.6 ± 0.1 | 0.066 ± 0.001 | -3862 ± 9.328 | 53.1 ± 0.2 | 0.075 ± 0.002 | -5466 ± 18.60 |
| NES-RE | 79.4 ± 0.1 | 0.060 ± 0.001 | -3763 ± 15.16 | 54.5 ± 0.2 | 0.069 ± 0.001 | -5269 ± 17.83 |
| NES-BS | 69.1 ± 0.4 | 0.085 ± 0.005 | -6119 ± 36.31 | 45.6 ± 0.3 | 0.068 ± 0.004 | -6442 ± 24.47 |
| BQ-R | 75.5 ± 0.9 | 0.037 ± 0.002 | -4309 ± 172.6 | 52.3 ± 0.3 | 0.018 ± 0.001 | -5412 ± 22.96 |
| BQ-S | 78.6 ± 0.2 | 0.059 ± 0.001 | -3843 ± 22.71 | 54.7 ± 0.1 | 0.072 ± 0.001 | -5262 ± 9.964 |

Table 4: Test accuracy, expected calibration error (ECE), and log-likelihood (LL) on CIFAR-100 and ImageNet16-120 for our proposals (BQ-R and BQ-S) and baselines. For reference, we also include the performance of the best architecture (measured by validation loss) on the test set (labelled Best Single). The numbers shown are means and standard errors of the mean over 10 repeats. Where applicable, the candidate set selection method is initialised with 10 random architectures and used to build a set of 150 architectures. For ImageNet16-120 we see that BQ-S performs best across ensemble sizes in terms of LL, and joint best with NES-RE in terms of accuracy. For CIFAR-100 we find that NES-RE performs best in terms of accuracy and LL. Particularly for larger ensembles, BQ-R performs best in terms of ECE.

| | CIFAR-10 | | | CIFAR-100 | | |
|---------------|------------|---------------|---------------|------------|---------------|---------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 NES-RE | 93.8 ± 0.0 | 0.029 ± 0.001 | -1165 ± 5.602 | 74.2 ± 0.2 | 0.072 ± 0.004 | -5136 ± 61.49 |
| BQ-S | 93.7 ± 0.1 | 0.030 ± 0.000 | -1152 ± 5.215 | 74.4 ± 0.1 | 0.063 ± 0.002 | -5021 ± 22.71 |
| M = 5 NES-RE | 93.8 ± 0.0 | 0.030 ± 0.001 | -1165 ± 5.503 | 74.3 ± 0.2 | 0.071 ± 0.004 | -5134 ± 60.72 |
| BQ-S | 93.7 ± 0.1 | 0.032 ± 0.000 | -1113 ± 4.123 | 74.5 ± 0.1 | 0.055 ± 0.002 | -4897 ± 25.66 |
| M = 10 NES-RE | 93.8 ± 0.0 | 0.030 ± 0.001 | -1159 ± 5.959 | 74.3 ± 0.2 | 0.069 ± 0.004 | -5083 ± 58.42 |
| BQ-S | 93.8 ± 0.0 | 0.031 ± 0.000 | -1098 ± 3.752 | 74.7 ± 0.1 | 0.045 ± 0.001 | -4766 ± 15.89 |

Table 5: Test accuracy, expected calibration error (ECE), and log-likelihood (LL) on CIFAR-10 and CIFAR-100 for BQ-S (our proposal) and NES-RE (the strongest baseline) for the "Slimmable Network" search space. We see that BQ-S consistently outperforms NES-RE in terms of ECE and LL, whilst maintaining the same accuracy.
We use the benchmark established by Hendrycks & Dietterich (2019) to generate shifted datasets by applying one of 30 corruption types to each image for CIFAR-10 and CIFAR-100. Each corruption type has a severity level on a 1–5 scale. Table 6 shows a comparison between NES-RE and BQ-S in this setting (on the slimmable network search space). We see that, whilst our proposal performs similarly in terms of accuracy, it produces ensembles that perform significantly better in terms of expected calibration error and test set log-likelihood. This trend holds across corruption severity levels.

Severity Level 1

| | CIFAR-10 | | | CIFAR-100 | | |
|---------------|--------------|---------------|--------------------|--------------|---------------|----------------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 NES-RE | 86.20 ± 0.04 | 0.046 ± 0.001 | -59259.6 ± 595.907 | 62.36 ± 0.08 | 0.151 ± 0.004 | -169235 ± 1632.43166 |
| BQ-S | 86.26 ± 0.08 | 0.036 ± 0.001 | -54283.4 ± 642.383 | 62.52 ± 0.10 | 0.093 ± 0.002 | -149022 ± 364.57480 |
| M = 5 NES-RE | 86.25 ± 0.04 | 0.046 ± 0.001 | -59178.3 ± 851.719 | 62.36 ± 0.09 | 0.155 ± 0.003 | -169999 ± 1553.96433 |
| BQ-S | 86.16 ± 0.06 | 0.032 ± 0.001 | -52173.6 ± 202.350 | 62.58 ± 0.09 | 0.103 ± 0.004 | -152466 ± 1249.96873 |
| M = 10 NES-RE | 86.26 ± 0.04 | 0.043 ± 0.001 | -57010.4 ± 722.311 | 62.46 ± 0.07 | 0.145 ± 0.002 | -164816 ± 975.78975 |
| BQ-S | 86.22 ± 0.05 | 0.029 ± 0.001 | -50504.6 ± 443.984 | 62.54 ± 0.08 | 0.093 ± 0.002 | -149022 ± 364.57480 |

Severity Level 3

| | CIFAR-10 | | | CIFAR-100 | | |
|---------------|--------------|---------------|--------------------|--------------|---------------|----------------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 NES-RE | 73.16 ± 0.08 | 0.147 ± 0.002 | -133205 ± 1537.15 | 49.18 ± 0.07 | 0.235 ± 0.005 | -270710 ± 2628.75462 |
| BQ-S | 73.31 ± 0.12 | 0.131 ± 0.002 | -123113 ± 1498.55 | 49.45 ± 0.12 | 0.193 ± 0.007 | -250337 ± 3304.95 |
| M = 5 NES-RE | 73.18 ± 0.09 | 0.148 ± 0.002 | -133239 ± 1904.50 | 49.20 ± 0.09 | 0.239 ± 0.004 | -272004 ± 2482.09961 |
| BQ-S | 73.23 ± 0.07 | 0.125 ± 0.001 | -118756 ± 614.899 | 49.57 ± 0.09 | 0.175 ± 0.005 | -241407 ± 2438.78 |
| M = 10 NES-RE | 73.23 ± 0.08 | 0.143 ± 0.002 | -128663 ± 1664.07 | 49.29 ± 0.07 | 0.227 ± 0.003 | -263639 ± 1596.01214 |
| BQ-S | 73.39 ± 0.11 | 0.120 ± 0.002 | -114613 ± 1247.44 | 49.57 ± 0.06 | 0.163 ± 0.003 | -235152 ± 881.120 |

Severity Level 5

| | CIFAR-10 | | | CIFAR-100 | | |
|---------------|--------------|---------------|--------------------|--------------|---------------|----------------------|
| Algorithm | Accuracy | ECE | LL | Accuracy | ECE | LL |
| M = 3 NES-RE | 55.49 ± 0.08 | 0.285 ± 0.002 | -239927 ± 2187.24 | 34.04 ± 0.06 | 0.339 ± 0.005 | -415182 ± 3710.25 |
| BQ-S | 55.67 ± 0.14 | 0.265 ± 0.003 | -226433 ± 2523.79 | 34.24 ± 0.11 | 0.291 ± 0.008 | -385063 ± 5355.88 |
| M = 5 NES-RE | 55.53 ± 0.08 | 0.286 ± 0.003 | -240154 ± 2835.89 | 34.04 ± 0.07 | 0.344 ± 0.005 | -417110 ± 3476.95 |
| BQ-S | 55.51 ± 0.05 | 0.260 ± 0.002 | -220196 ± 1198.20 | 34.35 ± 0.10 | 0.270 ± 0.006 | -371575 ± 4083.67 |
| M = 10 NES-RE | 55.54 ± 0.08 | 0.279 ± 0.002 | -233441 ± 2474.23 | 34.11 ± 0.07 | 0.331 ± 0.003 | -405068 ± 2327.88 |
| BQ-S | 55.61 ± 0.11 | 0.254 ± 0.003 | -214126 ± 2030.20 | 34.35 ± 0.07 | 0.257 ± 0.003 | -361876 ± 1666.07 |

Table 6: Test accuracy, expected calibration error (ECE), and log-likelihood (LL) on CIFAR-10 and CIFAR-100 for NES-RE (the strongest baseline) and BQ-S (our strongest proposal) using the "Slimmable Network" search space for a range of corruption severities. We see that BQ-S is more robust than NES-RE to dataset shift, especially in terms of LL and ECE.
## 5 Discussion And Future Work

We proposed a method for building ensembles of Neural Networks using the tools provided by Bayesian Quadrature. Specifically, by viewing ensembling as approximately performing marginalisation over architectures, we used the warped Bayesian Quadrature framework to select a candidate set of architectures to train. We then suggested two methods of constructing the ensemble based upon this candidate set: one based upon recombination of the approximate posterior over architectures (BQ-R), and one based upon optimisation of the ensemble weights (BQ-S) using a validation set. BQ-R approximately performs hierarchical Bayesian inference using BQ, whereas BQ-S is a heuristic inspired by BQ.

The discrepancy in performance is likely due to the fact that BQ-R does not make use of the validation set, as it takes the Bayesian perspective and performs hierarchical inference over both architecture weights and architectures using the training set. (In principle, BQ-R can use the union of the training and validation sets to perform hierarchical inference. However, we did not run experiments in this setting as it would obviously allow BQ-R significantly more compute than the alternative methods.) For the same reason, BQ-R is more sensitive to any errors introduced by approximating the architecture likelihood using the MLE (or any other approximation). BQ-S (and all the baselines), however, makes use of a separate validation set to select the ensemble weights.

We additionally showed that BQ-S outperforms state-of-the-art baselines when the search space is large, and on the largest datasets for smaller search spaces. This is likely because it is more exploratory than alternative methods, and so less likely to become stuck near local minima of the architecture likelihood. Lastly, we demonstrated that BQ-S is more robust to dataset shift than state-of-the-art baselines.

A limitation of our proposals is that they do not outperform existing methods in some cases, notably on the CIFAR-100 dataset for the NATS-Bench search space. Additionally, it will be challenging to scale our proposals to larger evaluation budgets (greater than 1000), as the computational burden of inverting the GP's covariance matrix will become too large. An interesting direction for future work is to examine the effect of marginalising over architecture weights as well as over architectures.

We introduce a general-purpose method, so its societal impacts will depend on the specific tasks to which it is applied. We find it difficult to anticipate what those tasks will be, and even more difficult to speculate meaningfully about what any societal impacts might be.

## Acknowledgments

The authors would like to thank the anonymous reviewers for detailed and constructive feedback. S.H. and M.O. are grateful for funding from the EPSRC and AIMS at the University of Oxford.
S.H. is additionally supported by a scholarship from Kellogg College. X.W. and B.R. are supported by the Clarendon Scholarship at the University of Oxford. M.J. is supported by the Carlsberg Foundation.

## References

Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, and Michael A. Osborne. Fast Bayesian inference with batch Bayesian quadrature via kernel recombination. *Advances in Neural Information Processing Systems*, 35, 2022.

Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In *International Conference on Machine Learning*, pp. 550–559. PMLR, 2018.

Henry Chai and Roman Garnett. Improving quadrature for constrained integrands. In *Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics*, 2019.

Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, 2019.

Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys. NATS-Bench: Benchmarking NAS algorithms for architecture topology and size. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–1, 2021. ISSN 0162-8828, 2160-9292, 1939-3539. doi: 10.1109/TPAMI.2021.3054824.

Vikranth Dwaracherla, Zheng Wen, Ian Osband, Xiuyuan Lu, Seyed Mohammad Asghari, and Benjamin Van Roy. Ensembles for uncertainty estimation: Benefits of prior functions and bootstrapping, 2022.

Francesco D'Angelo and Vincent Fortuin. Repulsive deep ensembles are Bayesian. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. *Journal of Machine Learning Research*, 20(55):1–21, 2019.

Roman Garnett. *Bayesian Optimization*. Cambridge University Press, 2021.

Tom Gunter, Michael A. Osborne, Roman Garnett, Philipp Hennig, and Stephen J. Roberts. Sampling for inference in probabilistic models with fast Bayesian quadrature. In *Proceedings of the 28th Annual Conference on Neural Information Processing Systems*, pp. 2789–2797, 2014.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning*, 2017.

Satoshi Hayakawa, Harald Oberhauser, and Terry Lyons. Positively weighted kernel quadrature via subsampling. arXiv:2107.09597, 2022.

Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian deep ensembles via the neural tangent kernel. In *Advances in Neural Information Processing Systems*, volume 33, 2020.

Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art. *Knowledge-Based Systems*, 212:106622, 2021.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In *Proceedings of the International Conference on Learning Representations*, 2019.

P. Hennig and M. A. Osborne. *Probabilistic Numerics*. Cambridge University Press, 2022.

Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric Xing. Neural architecture search with Bayesian optimisation and optimal transport. *Advances in Neural Information Processing Systems*, 31, 2019.

Ian Kivlichan, Zi Lin, Jeremiah Liu, and Lucy Vasserman. Measuring and improving model-moderator collaboration using uncertainty estimation.
In *Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)*, pp. 36–53. Association for Computational Linguistics, 2021.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in Neural Information Processing Systems*, 30, 2017.

Jason Liang, Elliot Meyerson, and Risto Miikkulainen. Evolutionary architecture search for deep multitask networks. In *Proceedings of the Genetic and Evolutionary Computation Conference*, pp. 466–473, 2018.

C. Litterer and T. Lyons. High order recombination and an application to cubature on Wiener space. *The Annals of Applied Probability*, 22(4), 2012.

Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 82–92, 2019a.

Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In *Proceedings of the International Conference on Learning Representations*, 2019b.

Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Kay Chen Tan. A survey on evolutionary neural architecture search. *IEEE Transactions on Neural Networks and Learning Systems*, 2021.

Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. A simple baseline for Bayesian uncertainty in deep learning. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

Thomas Minka. Deriving quadrature rules from Gaussian processes. Technical report, 2000.

A. O'Hagan. Bayes–Hermite quadrature. *Journal of Statistical Planning and Inference*, 29(3):245–260, 1991.

Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. In *Advances in Neural Information Processing Systems*, volume 31, 2018.

Michael A Osborne, David Duvenaud, Roman Garnett, Carl E Rasmussen, Stephen J Roberts, and Zoubin Ghahramani. Active learning of model evidence using Bayesian quadrature. In *Advances in Neural Information Processing Systems 26*, pp. 46–54, 2012.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In *International Conference on Machine Learning*, pp. 4095–4104. PMLR, 2018.

Carl Rasmussen and Christopher Williams. *Gaussian Processes for Machine Learning*. MIT Press, 2006.

Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In *International Conference on Machine Learning*, pp. 2902–2911. PMLR, 2017.

Jože M. Rožanec, Luka Bizjak, Elena Trajkova, Patrik Zajec, Jelle Keizer, Blaž Fortuna, and Dunja Mladenić. Active learning and novel model calibration measurements for automated visual inspection in manufacturing. 2023.

Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne. Interpretable neural architecture search via Bayesian optimisation with Weisfeiler-Lehman kernels.
In *Proceedings of the 9th International Conference on Learning Representations*, 2021.

Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 4510–4520, 2018.

Nino Shervashidze. Weisfeiler-Lehman graph kernels. *Journal of Machine Learning Research*, 12:2539–2561, 2011.

Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang. Bridging the gap between sample-based and one-shot neural architecture search with BONAS. *Advances in Neural Information Processing Systems*, 33:1808–1819, 2020.

Yao Shu, Yizhou Chen, Zhongxiang Dai, and Bryan Kian Hsiang Low. Neural ensemble search via Bayesian sampling. In *Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence*, 2022.

Changjian Shui, Azadeh Sadat Mozafari, Jonathan Marek, Ihsen Hedhli, and Christian Gagné. Diversity regularization in deep ensembles. In *International Conference on Learning Representations*, 2018.

Maria Tchernychova. Caratheodory cubature measures. 2015.

Xingchen Wan, Binxin Ru, Pedro M. Esparança, and Fabio Maria Carlucci. Approximate neural architecture search via operation distribution learning. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)*, pp. 2377–2386, January 2022.

Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenatton. Hyperparameter ensembles for robustness and uncertainty quantification. In *Advances in Neural Information Processing Systems*, volume 33, 2020.

Colin White, Willie Neiswanger, and Yash Savani. BANANAS: Bayesian optimization with neural architectures for neural architecture search. *Advances in Neural Information Processing Systems*, 36, 2020.

Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. PC-DARTS: Partial channel connections for memory-efficient architecture search. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43:2953–2970, 2020.

Jiahui Yu and Thomas S Huang. Universally slimmable networks and improved training techniques. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1803–1811, 2019.

Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. In *International Conference on Learning Representations*, 2019.

Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, and Quoc Le. BigNAS: Scaling up neural architecture search with big single-stage models. In *Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VII*, pp. 702–717. Springer, 2020.

Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris Holmes, Frank Hutter, and Yee Whye Teh. Neural ensemble search for uncertainty estimation and dataset shift. In *Advances in Neural Information Processing Systems*, volume 36, 2022.

Han Zhou, Xingchen Wan, Ivan Vulić, and Anna Korhonen. AutoPEFT: Automatic configuration search for parameter-efficient fine-tuning. *arXiv preprint arXiv:2301.12132*, 2023.

Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. *arXiv preprint arXiv:1611.01578*, 2016.

Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp.
8697–8710, 2018.

## A Computational Complexity

The computational complexity of candidate set selection strategies that require a GP surrogate (such as our proposals) is dominated by the cost of inverting the kernel matrix, which is, at each iteration, cubic in the size of the candidate set, $O(|A|^3)$. The complexity of re-weighted stacking is $O(|A|\log(|A|) + |A_M|(|A| - |A_M|))$. The first term is due to the sorting of the candidate set, and the second to the computation of the relevant covariances. The complexity of posterior recombination is dominated by the eigendecomposition of the kernel matrix, $O(|A|^3)$.

Table 7 compares the run-time of our proposals against the most performant baseline on the NATS-Bench topology search space. Note that all architecture-related evaluations are cached (i.e. the trained weights are loaded from the NATS-Bench API, and the logits for the forward pass through the train/validation set are loaded from disk).

| Algorithm | CIFAR-100 | ImageNet16-120 |
|-------------|-------------------|-------------------|
| NES-RE | 24236.3 ± 109.551 | 12020.6 ± 98.1485 |
| BQ-R | 308.872 ± 7.05121 | 258.444 ± 3.76138 |
| BQ-S | 348.650 ± 11.7402 | 280.689 ± 9.76650 |

Table 7: The runtime of our proposals, BQ-R and BQ-S, against the most competitive (in terms of performance) baseline on CIFAR-100 and ImageNet16-120 over the NATS-Bench topology search space. The total evaluation budget is 150 architectures, and we select ensembles of size 5. We report the means and standard error of the means over 3 runs. All architecture-related evaluations are cached (i.e. the trained weights are loaded from the NATS-Bench API, and the logits for the forward pass through the train/validation set are loaded from disk).

The key reason that NES-RE is slower than our proposals is the iterative nature of Beam Search (required at each step of candidate set selection and then for ensemble selection). Recall that it greedily builds up the set of parent candidates (resp. final ensemble) by iterating through the whole pool (resp. population). Each iteration requires loading a set of logits from disk which, whilst cheaper than loading and performing a forward pass through the architecture, still incurs significant computational cost in aggregate.

Note that, for most search spaces, the computational cost of Neural Ensemble Search is dominated by the cost of evaluating the likelihoods of the architectures for the candidate set. For some spaces, such as those defined by a supernet, this cost is lower as the supernet is only trained once. However, this initial training is still computationally intensive. Therefore, in either case, the computation related to training and evaluating architectures is likely to be significantly larger than the computation required for the Neural Ensemble Search method.

## B Further Analysis Of Results

Our results over the NATS-Bench topology search space show that NES-RE performs best for the CIFAR-100 dataset, and BQ-S performs best for ImageNet16-120 (see Table 4). The ablation studies in Tables 2 and 3 suggest that this is due to the candidate selection strategy. Recall that NES-RE uses regularised evolution for candidate set selection and beam search for ensemble selection, whereas BQ-S uses uncertainty sampling with a WSABI-L surrogate and re-weighted stacking. Table 2 suggests that (for a fixed ensemble selection strategy) regularised evolution is the best method for CIFAR-100 but that uncertainty sampling with a WSABI-L surrogate is the best method for ImageNet16-120.
Table 3 shows that re-weighted stacking consistently outperforms beam search given a fixed candidate set. From these results, we can infer that regularised evolution is a better candidate selection strategy for CIFAR-100 and uncertainty sampling is better for ImageNet16-120. This must be due to the nature of the architecture likelihood surfaces for each dataset. As these surfaces are defined over an input space of architectures, they are difficult to visualise. One possible method is shown in Figure 2. We consider the 500 architectures with the highest likelihood. We then visualise the covariance matrix between them using the WL kernel of a trained GP. The architectures are sorted using the GP's estimate of the likelihood (to smooth out noise).

We observe larger clusters of architectures along the diagonal of the covariance matrix for CIFAR-100. This means that they covary strongly and have similar likelihoods. Further, note that these clusters covary strongly with each other. This provides some evidence that the peaks of the CIFAR-100 likelihood surface are broader and closer together (based on the metric implied by the WL kernel) than those for the ImageNet16-120 likelihood surface. On this basis, we suggest that uncertainty sampling with a WSABI-L surrogate is better suited to architecture likelihood surfaces with dispersed, narrow peaks and regularised evolution is better suited to architecture likelihood surfaces with wider peaks.

## C Additional Uncertainty Metrics

To verify that the quality of the uncertainty is well measured by the expected calibration error, we check that it correlates well with the calibration AUC (Kivlichan et al., 2021; Rožanec et al., 2023). These results are shown in Table 8 (for the same experimental setup as in Table 4). Note that well-calibrated uncertainty is indicated by high calibration AUC but low expected calibration error. The correlation between the (means of the) two metrics is -0.71 for CIFAR-100 and -0.20 for ImageNet16-120. They are, therefore, generally in agreement about the quality of a model's calibration estimates.

## D Architecture Likelihood Approximation

We examine the impact of the approximations suggested in Equations 15 and 16 by comparing their performance to an alternative: Stochastic Weight Averaging-Gaussian (SWAG) (Maddox et al., 2019). This instead approximates the posterior over architecture weights using a diagonal Gaussian, whose moments are the empirical mean and variance of several SGD iterates, obtained by continuing training. The results are shown on CIFAR-100 for the NATS-Bench search space in Table 9. We see significant improvements across all metrics when using the SWAG approximation, suggesting that there is much to be gained by marginalising over both architecture weights and architectures. This would be a promising direction for future work.

![17_image_0.png](17_image_0.png)

(a) CIFAR-100 (b) ImageNet16-120

Figure 2: Visualisation of the (WL) covariance matrix for the 500 architectures with the highest likelihoods in the search space for each dataset, sorted by a smoothed (GP-based) estimate of the likelihood. The colour scale varies from 1 (yellow) to 0 (blue). We observe larger blocks of architectures within the top 500 that covary strongly for CIFAR-100 than for ImageNet16-120, which implies that the modes of the architecture likelihood surface are wider for CIFAR-100. This suggests that a more exploratory strategy will do better on ImageNet16-120, and a more exploitative strategy for CIFAR-100.
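To make the visualisation procedure behind Figure 2 concrete, the following is a minimal sketch of how such a plot could be produced. It is our own illustration, not code from the paper: `kernel_matrix` and `gp_likelihood` are hypothetical stand-ins for the trained GP's WL kernel and its smoothed likelihood estimates.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_top_architecture_covariance(archs, kernel_matrix, gp_likelihood, k=500):
    # Rank architectures by the GP's (smoothed) likelihood estimate.
    scores = np.asarray([gp_likelihood(a) for a in archs])
    top = [archs[i] for i in np.argsort(scores)[::-1][:k]]
    # k x k covariance matrix under the WL kernel, sorted by likelihood.
    K = np.asarray([[kernel_matrix(a, b) for b in top] for a in top])
    plt.imshow(K, vmin=0.0, vmax=1.0)  # colour scale from 0 to 1, as in Figure 2
    plt.xlabel("architecture rank")
    plt.ylabel("architecture rank")
    plt.show()
```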
## E Experimental Setup

The codebase uses PyTorch (Paszke et al., 2019) to handle deep learning and backpropagation. Except where otherwise stated, the experimental results report means and standard error of the mean over 10 repeats. Where applicable, each candidate selection method is initialised with 10 architectures randomly selected from the search space, and allowed to select an additional 140.

Our proposal over the cell-based search space uses the WL kernel, with its level hyperparameter chosen from {1, 2} using the GP's marginal likelihood. For the macro-based search space, our proposal uses an ARD RBF kernel, whose hyperparameters are optimised using LBFGS. The lengthscales are constrained between the minimum and maximum distances between observations along the relevant dimensions. Architecture likelihoods are normalised so that the maximum observed is 1 before modelling with the GP surrogate. The noise hyperparameter is also selected to optimise the probability density assigned to the observed data under the GP prior. It is constrained in the range $[10^{-5}, 10^{-1}]$.

The acquisition function is always optimised using an evolutionary strategy, using a pool size of 1024. Per iteration, we allow 128 mutations, of which 16 are modifications of the architecture with the highest acquisition value, and the remainder are selected uniformly at random from the pool. A minimal sketch of this optimiser appears at the end of this appendix.

| | CIFAR-100 | | ImageNet16-120 | |
|---------------|---------------|---------------|---------------|---------------|
| Algorithm | ECE | C-AUC | ECE | C-AUC |
| M = 3 NES-RE | 0.026 ± 0.002 | 0.873 ± 0.002 | 0.033 ± 0.002 | 0.823 ± 0.002 |
| NES-BS | 0.073 ± 0.009 | 0.848 ± 0.003 | 0.058 ± 0.003 | 0.809 ± 0.003 |
| BQ-R | 0.075 ± 0.025 | 0.866 ± 0.002 | 0.052 ± 0.021 | 0.815 ± 0.003 |
| BQ-S | 0.021 ± 0.001 | 0.874 ± 0.001 | 0.029 ± 0.001 | 0.824 ± 0.002 |
| M = 5 NES-RE | 0.042 ± 0.002 | 0.875 ± 0.001 | 0.051 ± 0.001 | 0.826 ± 0.002 |
| NES-BS | 0.073 ± 0.009 | 0.848 ± 0.003 | 0.058 ± 0.003 | 0.809 ± 0.003 |
| BQ-R | 0.040 ± 0.004 | 0.871 ± 0.001 | 0.028 ± 0.004 | 0.820 ± 0.002 |
| BQ-S | 0.040 ± 0.002 | 0.877 ± 0.001 | 0.050 ± 0.002 | 0.825 ± 0.002 |
| M = 10 NES-RE | 0.060 ± 0.001 | 0.877 ± 0.001 | 0.069 ± 0.001 | 0.825 ± 0.001 |
| NES-BS | 0.085 ± 0.005 | 0.848 ± 0.001 | 0.068 ± 0.004 | 0.808 ± 0.002 |
| BQ-R | 0.037 ± 0.002 | 0.872 ± 0.001 | 0.018 ± 0.001 | 0.819 ± 0.002 |
| BQ-S | 0.059 ± 0.001 | 0.879 ± 0.001 | 0.072 ± 0.001 | 0.826 ± 0.002 |

Table 8: Expected calibration error (ECE) and calibration AUC (C-AUC) on CIFAR-100 and ImageNet16-120 for our proposals (BQ-R and BQ-S) and baselines.
Well-calibrated uncertainty is indicated by high calibration AUC but low expected calibration error. We see that the two metrics are generally in agreement about the quality of a model's uncertainty estimates.

| | CIFAR-100 | | |
|-------------------|-------------|---------------|---------------|
| Algorithm | Accuracy | ECE | LL |
| M = 3 BQ-R (MLE) | 71.9 ± 0.8 | 0.075 ± 0.025 | -5259 ± 300.9 |
| BQ-R (SWAG) | 75.1 ± 0.6 | 0.035 ± 0.007 | -4516 ± 140.2 |
| BQ-S (MLE) | 76.6 ± 0.2 | 0.021 ± 0.001 | -4417 ± 35.85 |
| BQ-S (SWAG) | 77.4 ± 0.3 | 0.022 ± 0.000 | -4345 ± 38.19 |
| M = 5 BQ-R (MLE) | 73.3 ± 0.9 | 0.040 ± 0.004 | -4768 ± 174.3 |
| BQ-R (SWAG) | 75.0 ± 1.1 | 0.039 ± 0.012 | -4335 ± 133.4 |
| BQ-S (MLE) | 77.8 ± 0.2 | 0.040 ± 0.002 | -4077 ± 33.60 |
| BQ-S (SWAG) | 78.8 ± 0.4 | 0.033 ± 0.001 | -3944 ± 25.47 |
| M = 10 BQ-R (MLE) | 75.5 ± 0.9 | 0.037 ± 0.002 | -4309 ± 172.6 |
| BQ-R (SWAG) | 77.9 ± 0.2 | 0.063 ± 0.006 | -3964 ± 8.734 |
| BQ-S (MLE) | 78.6 ± 0.2 | 0.059 ± 0.001 | -3843 ± 22.71 |
| BQ-S (SWAG) | 79.9 ± 0.1 | 0.053 ± 0.001 | -3680 ± 18.40 |

Table 9: Test accuracy, expected calibration error (ECE), and log-likelihood (LL) on CIFAR-100 for BQ-R and BQ-S with MLE and SWAG approximations for the architecture likelihood. Note that, due to their computational expense, results for the SWAG approximations are means and standard error of the mean over 2 runs. For the MLE approximations, we report the mean and standard error of the mean over 10 runs.
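Returning to the evolutionary acquisition optimisation described at the start of this appendix, here is a minimal sketch. It is our own illustration under stated assumptions: `mutate` and `acquisition` are hypothetical stand-ins for the search-space-specific mutation operator and the GP-based acquisition value, and the rule for truncating the pool back to a fixed size is our assumption, as the text does not specify it.

```python
import random

POOL_SIZE, N_MUTATIONS, N_ELITE = 1024, 128, 16

def evolve(init_architectures, mutate, acquisition, n_iterations=10):
    pool = list(init_architectures)[:POOL_SIZE]
    for _ in range(n_iterations):
        best = max(pool, key=acquisition)
        # 16 mutations of the architecture with the highest acquisition value;
        # the remaining 112 parents are drawn uniformly from the pool.
        parents = [best] * N_ELITE + random.choices(pool, k=N_MUTATIONS - N_ELITE)
        pool.extend(mutate(p) for p in parents)
        # Keep the pool at a fixed size by retaining the top architectures
        # (our assumed truncation rule).
        pool = sorted(pool, key=acquisition, reverse=True)[:POOL_SIZE]
    return max(pool, key=acquisition)
```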
Review 1: Summary: This work proposes using Bayesian quadrature (BQ) for neural ensemble search (NES), which is a natural counterpart of using Bayesian optimization for neural architecture search (NAS). The authors propose two approaches for NES based on a Gaussian process model over the architectural DAGs: a more faithful implementation of BQ (BQ-R), and a stacking variant (BQ-S) that reweights the learned ensemble using additional validation samples. Across several benchmarks constructed out of the CIFAR and ImageNet datasets, the BQ-S method outperforms recent NES baselines in terms of uncertainty-related metrics.

Strengths and Weaknesses: **Strengths:** + The idea of using Bayesian quadrature for NES is quite sensible and may deserve further investigation. + A variant of the BQ approach demonstrates promising empirical performance. **Weaknesses:** - The BQ-R method, which is closer to a full implementation of Bayesian quadrature, performs worse than most baselines. Thus, the evidence for the efficacy of BQ is somewhat limited. - In the implementation, two marginalization operations over the NN parameters are replaced with an MLE approximation. This is understandable given the increased computational cost of the latter, but it deviates from the BQ method and may be related to the previous point, the worsened performance of BQ-R. The authors argued that this is because the baselines have access to validation samples, but I think there should also be discussion around the MLE approximations, since (in many related settings) a fully Bayesian approach would not be hindered by the lack of validation samples. (Note that I'm not familiar with the recent empirical results on NAS and NES, and my evaluation of the experiments is purely based on the authors' report.)

Requested Changes: - (Enhancement) More discussion about the performance of BQ-R. - (Enhancement) In Tables 3 and 5 the BQ-R method is only competitive in the log-likelihood (LL) metric. The authors argued that the ECE and LL metrics are "crucial for systems which make critical decisions", but this is only obviously true for ECE (or other metrics that directly reflect calibration error), so there should be more discussion around the use of the LL metric.

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: This paper uses Bayesian Quadrature to construct ensembles of neural networks. The key observation is that constructing an ensemble of neural networks can be seen as marginalizing over different neural network architectures and that this marginalization can be approximated using quadrature. This is analogous to the way in which neural architecture search can be seen as an optimization problem over the space of different architectures that can be approximated using Bayesian optimization. The paper views ensemble construction as a two-step process. Firstly, a set of candidate architectures is selected using an acquisition function. Secondly, a (weighted) ensemble is constructed from a subset of the candidate architectures. The first problem is solved with a novel use of Bayesian Quadrature. Two potential methods are proposed for solving the second problem, each of which is better in some settings.

Strengths and Weaknesses: ## Strengths + **Novelty:** While novelty is not an acceptance criterion for TMLR, this paper does propose a novel and interesting approach to ensemble construction.
I suspect that many people would find this work stimulating and that it could lead to further interesting work in the future. + **Solid Experiments:** The experimental evaluation seems solid, and most of the claims are well supported. ## Weaknesses - **Clarity:** I found the paper hard to understand in several places. The main issue is that the authors do not guide the reader through a few key sections. As a result, many statements seem arbitrary and do not follow previous statements, and I struggled to gain insight into some parts of the method. Making connections more explicit and assuming less knowledge/understanding on the reader's part would help greatly. I make concrete suggestions in the next section. - **Evidence:** There are a small number of key statements for which I do not believe there is sufficient evidence in the paper. 1. Existing approaches to ensembling struggle when the architecture likelihood surface has dispersed and narrow peaks. 2. Bayesian Quadrature is well suited to exploring such likelihood surfaces. 3. The proposed method improves calibration. The paper uses two metrics–LL and ECE–as measures of calibration. However, LL is not an ideal choice because it combines both accuracy and uncertainty calibration, making it difficult to disentangle where the improvement is coming from. Furthermore, in almost all cases, when LL improves, so does the accuracy, which does not help. ECE, on the other hand, is not entangled with accuracy but has several other pathologies. For example, a classifier can get perfect ECE while predicting only the marginal probabilities for each class. Another issue is that it is not obvious how ECE should be applied in a multi-class setting. 4. Bayesian Quadrature is being used to construct the ensemble. Or rather, the Bayesian Quadrature used to construct the ensemble is well approximated. My concern is the approximations used in equations (14) and (16). These seem like very strong approximations, making it unclear whether the proposed method is doing something close to proper quadrature or if we should view the method as a well-performing heuristic inspired by quadrature. Requested Changes: I use [major] and [minor] to indicate the changes that would be crucial to securing my recommendation or would simply strengthen the work, respectively. Although most clarity issues are listed individually as [minor], they represent a more major issue when taken together. 1. [major] Add a citation (or other evidence) for the claim that "... existing approaches to ensembling struggle when the architecture likelihood surface has dispersed, narrow peaks". 2. [major] Add a citation for the claim that Bayesian Quadrature is "... well suited to exploring likelihood surfaces with dispersed, narrow peaks". 3. [minor] Add a discussion of Wenzel et al. (2020), alongside Zaidi et al. (2022) and Shu et al. (2022) when discussing the idea that ensembles of different architectures outperform ensembles of the same architecture. Wenzel et al. (2020) show that ensembles with different hyperparameters (which could include architectural hyperparameters) outperform standard ensembles. 4. [minor] The claim that single architectures provide poor uncertainty estimates is not well supported. Firstly, Guo et al. (2017) focus on a narrow range of architectures (namely CNNs), and a follow-up study (Minderer et al., 2021) shows that this is not true for other architectures (e.g. ViTs). Secondly, several techniques can be applied to single models to improve calibration. 
I would change this statement and instead say that ensembles tend to be better calibrated than single models. 5. [major] Add additional uncertainty metrics. I recommend using Calibration AUC (Kivlichan et al., 2021, Tran et al., 2022), which does not suffer from the same pathologies as ECE. Oracle Collaborative AUC and OOD detection AUC could also be good choices. I acknowledge that adding these additional metrics to all experiments might require a large amount of work. However, I think that adding just Calibration AUC to only Table 4 would be good enough: if there is a good correlation between ECE and Calibration AUC there, then we can assume that the ECE results are reliable elsewhere. 6. [minor] Section 2.5 must be better integrated into the rest of the text. Reading the paper for the first time, this section seemed very arbitrary, and I was unsure how it would connect to the rest of the work. Even after a second reading, I felt it would be helpful to make connections between this section and the rest of the paper more explicit. 7. [minor] Add a legend to figure 1. What do the different colors mean? 8. [minor] On page 6, 3rd paragraph, the sentence "Therefore, we propose using a Bayesian Quadrature acquisition function to build the candidate set ..." did not follow from the previous sentence. Adding additional explanation and intuition would be helpful. 9. [major] Add an ablation study to show that the approximations in equations (14) and (16) are reasonable. Even in a small-scale setting and using some approximate inference scheme (I would suggest linearised Laplace), it would be useful to compare the performance of the proposed method with and without marginalizing the weights in (14) and (16). Otherwise, it is impossible to know what kind of impact the approximation has and how close the proposed method is in practice. 10. [minor] I think it would be useful to be more explicit about the different levels of Bayesian inference that are taking place. There is a first level in which we do inference over weights. And a second level where we do inference over architectures. The evidence of the first level is the likelihood of the second level. Chapter 28 of "Information Theory, Inference, and Learning Algorithms" by David J.C. MacKay explains this well. 11. [minor] The statement "computation of this estimate requires Monte-Carlo sampling to approximate sums of (products of) the WL kernel over A" needs to be made more explicit by writing out the MC sampling equation. 12. [minor] A few preliminary results showing the impact of using the union of the train and validation sets for BQ-R would be an interesting and useful addition. 13. [minor] Add a short discussion of the limitations of the method. 14. [major] Add an appendix section which describes the experimental setup in detail. Details such as which deep learning framework was used, how many random seeds were used for each experiment, and any hyper-parameter sweeps and settings should be included.

**References**

* Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton: Hyperparameter Ensembles for Robustness and Uncertainty Quantification. NeurIPS 2020
* Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, Mario Lucic: Revisiting the Calibration of Modern Neural Networks. NeurIPS 2021: 15682-15694
* Ian D. Kivlichan, Zi Lin, Jeremiah Z. Liu, Lucy Vasserman: Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation. CoRR abs/2107.04212 (2021)
* Dustin Tran, Jeremiah Z.
Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan: Plex: Towards Reliability using Pretrained Large Model Extensions. CoRR abs/2207.07411 (2022)

Broader Impact Concerns: None

==================================================

Review 3: Summary: The authors consider the problem of Neural Ensemble Search (NES), that is, Neural Architecture Search over the space of neural network ensembles. They phrase this as a Bayesian Quadrature (BQ) problem, in which the selected ensemble members are used to approximate the marginal predictive over all architectures. They show that using acquisition functions from BQ can help with this problem and also propose a new recombination strategy for model weighting.

Strengths and Weaknesses: Strengths: - The paper is clearly written. - NES is an important problem. - Using ideas from BQ is a creative approach and seems to be useful. Weaknesses: - Related work is missing. - Similarly, some important baselines are missing. - There is little discussion and few empirical results on the computational cost / runtimes.

Major comments: - In the Algorithm boxes, you use a lot of shorthand functions (acquisition_function, nystrom_test_function, reweight, etc). I think it would be more useful to write these out in maths, so one can see at a glance what is happening. - While the experiments provide useful ablations between the different strategies, I feel like they are missing more common baselines. I would expect to see a standard deep ensemble (with the fixed largest architecture from the search space), a hyper-deep ensemble [1] and maybe some kind of diversity-regularized ensemble [e.g., 2-8]. - The paper is entirely missing a related work section. What have people done before to do NES? How does it compare to existing NAS approaches? What should at least be mentioned are the very related hyper-deep ensembles [1] and probably some examples of other diversity-regularized ensembles [e.g., 2-8].

Minor comments: - In Tab. 2, why do US and RE perform differently on ImageNet and CIFAR? Do they encourage different kinds of architectures or amounts of diversity? It would be interesting to examine this more closely. - How do the methods compare in terms of runtime? It would be useful for practitioners to know the tradeoffs, so reporting wallclock or GPU runtimes in all experiments would be helpful.

[1] https://arxiv.org/abs/2006.13570 [2] https://arxiv.org/abs/1806.03335 [3] https://arxiv.org/abs/2206.03633 [4] https://openreview.net/forum?id=BJlahxHYDS [5] https://proceedings.neurips.cc/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html [6] https://arxiv.org/abs/1802.07881 [7] https://arxiv.org/abs/2106.10760 [8] https://proceedings.neurips.cc/paper/2021/hash/1c63926ebcabda26b5cdb31b5cc91efb-Abstract.html

Requested Changes: - Add a related work section - Add some baselines (see above) - Add runtimes to the tables.

Broader Impact Concerns: No broader impact concerns.

==================================================

Metareview: Recommendation: Accept as is Comment: The paper introduces a novel approach for Neural Ensemble Search using Bayesian Quadrature.
A number of issues were raised, but the main ones concerning the method itself focused on the approximations involved (BQ-R vs BQ-S) and the use of log-likelihood to make claims about modelling uncertainty. There was relatively little discussion needed, as the authors' subsequent draft resolved all of these issues along with many other minor ones. The reviewers unanimously agreed that the paper should be accepted to TMLR.

==================================================
# Expected Pinball Loss For Quantile Regression And Inverse CDF Estimation

Taman Narayan *tamann@google.com* Google Research

Serena Wang *serenawang@google.com* Google Research University of California, Berkeley

Kevin Canini canini@google.com Google Research

Maya R. Gupta *relativeentropy@gmail.com* University of Washington

Reviewed on OpenReview: *https://openreview.net/forum?id=Eg8Rnb0Hdd*

## Abstract

We analyze and improve a recent strategy to train a quantile regression model by minimizing an expected pinball loss over all quantiles. Through an asymptotic convergence analysis, we show that minimizing the expected pinball loss can be more efficient at estimating single quantiles than training with the standard pinball loss for that quantile, an insight that generalizes the known deficiencies of the sample quantile in the unconditioned setting. Then, to guarantee a legitimate inverse CDF, we propose using flexible deep lattice networks with a monotonicity constraint on the quantile input to guarantee non-crossing quantiles, and show lattice models can be regularized to the same location-scale family. Our analysis and experiments on simulated and real datasets show that the proposed method produces state-of-the-art legitimate inverse CDF estimates that are likely to be as good or better for specific target quantiles.

## 1 Introduction

Real world applications often seek estimates of the quantiles of a random variable. For example, an airline would like to tell passengers that 90% of the time their flight will arrive within 4 hours, so they know they can make their connecting flights. In addition, one might condition that estimate on features. For example, conditioned on a forecast of 8 inches of snow at the destination airport, the conditional estimate might worsen to 90% of the time their flight arriving within 6 hours and 30 minutes. Quantile regression learns a model from training examples to produce such estimates for quantiles. If the model can estimate all quantiles, it is an inverse CDF estimator.

## 1.1 Formal Definitions

Formally, let random variable $Y \in \mathbb{R}$ have cumulative distribution function (CDF) $F$, and for $\tau \in (0, 1)$, the $\tau$-quantile of $Y$ is defined as $q_\tau = F^{-1}(\tau) = \inf\{q : P(Y \leq q) \geq \tau\}$. In the *conditional* setting where one also has random feature vector $X \in \mathbb{R}^D$, the *conditional* $\tau$-quantile of $Y$ for feature vector $X$ is defined as $q_\tau(x) = \inf\{q : P(Y \leq q \mid X = x) \geq \tau\}$.

Quantile regression takes as training data a set of $n$ pairs $(x, y) \in \mathbb{R}^D \times \mathbb{R}$ from a joint distribution over $(X, Y)$, and forms an estimator for one or more of the conditional quantiles of $Y$ for any value of $X$. A standard objective to train a model to estimate the quantile for $\tau$ is to minimize the *pinball loss* (Koenker & Bassett, 1978), $L_\tau(y, \hat{y}) = \max(\tau (y - \hat{y}), (\tau - 1)(y - \hat{y}))$ for $y, \hat{y} \in \mathbb{R}$. In the unconditioned case with no feature vector $X$, the training data is simply a set of $n$ scalar values $\{y_i\}_{i=1}^n$, and minimizing the pinball loss has the satisfying property that it selects the empirical quantile of the training set (Chernozhukov, 2005).

## 1.2 Expected Pinball Loss

Recent work by Tagasovska & Lopez-Paz (2019) proposed training one deep neural network (DNN) model $f(x, \tau)$ that takes $\tau$ as an input, and is trained to minimize an *expected* pinball loss over a random $\mathcal{T}$, drawn from a uniform distribution. That work is important because it provides *simultaneous quantile regression* of the entire inverse CDF.
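To make these two training objectives concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' released code). It assumes a model that takes the quantile level as an extra input feature concatenated to $x$, and labels of shape (n, 1); the function names are ours.

```python
import torch

def pinball_loss(y, y_hat, tau):
    # L_tau(y, y_hat) = max(tau * (y - y_hat), (tau - 1) * (y - y_hat))
    diff = y - y_hat
    return torch.maximum(tau * diff, (tau - 1.0) * diff)

def expected_pinball_step(model, optimizer, x, y):
    # One SGD step on the expected pinball loss: draw a fresh
    # T ~ Unif(0, 1) per example and feed it to the model as an input.
    tau = torch.rand(x.shape[0], 1)
    y_hat = model(torch.cat([x, tau], dim=1))
    loss = pinball_loss(y, y_hat, tau).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training for a single target quantile corresponds to replacing the `torch.rand` draw with a constant `tau`; sampling `tau` from a Beta distribution instead recovers the variant studied later in Section 3.3.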
However, it left open a couple of important theoretical and practical issues that we address in this paper.

## 1.3 Is Expected Pinball Loss Worse At Specific Quantiles?

The first issue we address is whether one pays a price in estimation accuracy for a model $f(x, \tau)$ that can predict any quantile, compared to a model trained specifically for one target quantile. Surprisingly, we show theoretically with an analysis of asymptotic convergence properties that learning the full inverse CDF model $f(x, \tau)$ can produce a more efficient estimator for a single quantile than minimizing the pinball loss for just a single quantile, depending on the true distribution, quantile, and function class. Our simulations and real-world experiments confirm that one does often win on single quantile estimates by training with the expected pinball loss, though the full inverse CDF model $f(x, \tau)$ may take a bit longer to train and a bit more memory to store. We also demonstrate a novel use case of the expected pinball loss for single quantile regression, training with a Beta distribution over $\tau$, and show how that can also outperform single-quantile regression (without the added bonus of accurately estimating the full inverse CDF).

## 1.4 The Problem Of Non-Crossing Quantiles

The second issue we address is that training a DNN model with the expected pinball loss as proposed by Tagasovska & Lopez-Paz (2019) may fail to produce a legitimate inverse CDF because it does not guarantee *non-crossing quantiles*: that any two quantile estimates satisfy the common sense requirement that $q_\tau(x) \geq q_{\tau'}(x)$ for $\tau \geq \tau'$ at every $x$. For example, a model that does not guarantee non-crossing quantiles might tell a passenger there is a 90% chance their flight will arrive in 4 hours, but that there is an 80% chance their flight will arrive in 4 hours and 12 minutes. Such crossing quantile estimates have long been thought objectionable (Koenker & Bassett, 1978).

To test whether non-expert people would actually notice or mind crossing estimates, we emailed 100 frequent customers of Artifact Puzzles, who are estimated to be 70% college-educated and 20% with postgraduate education, and told them we were considering giving them estimates of how long it would take for their package to arrive. We gave them example arrival time estimates for the 50%, 80%, and 90% quantiles, and asked for their feedback on how useful they would find such estimates. However, the example estimates were all made-up: we gave 50 of the customers estimates with crossing 80% and 90% quantiles (3 days, 8 days, and 7 days), and the other 50 customers got estimates with non-crossing quantile predictions (3 days, 7 days, 8 days). From the 50 emails with non-crossing estimates, we received 9 replies saying they would appreciate such estimates, and no concerns. From the 50 emails with crossing estimates, we received 16 replies, all expressing enthusiasm for having such predictions, but 11 of the 16 pointed out the crossing quantiles with negative adjectives including "wrong," "broken," "buggy," and "didn't make sense to me."

Embarrassing quantile-crossing mistakes are known to happen often in quantile regression (He, 1997; Koenker & Bassett, 1978); see also Table 1. Tagasovska & Lopez-Paz (2019) hypothesized that training with an expected pinball loss induces smoothing across $\tau$ that would reduce crossing.
However, we show experimentally (Table 3) that a flexible model like a DNN optimized to minimize the expected pinball loss easily suffers a problematic amount of quantile crossing. To address non-crossing without losing model flexibility, we propose using deep lattice networks (DLNs) (You et al., 2017) with a monotonicity shape constraint on $\tau$ to guarantee non-crossing quantiles, thus providing legitimate inverse CDF estimates. Additionally, we show that the DLN function class is amenable to two additional kinds of useful regularization for quantile regression: restricting the learned inverse CDF to a location-scale family, and restricting other features to have only monotonic effect on the predictions.

## 1.5 Other Uncertainty Estimation Problems

In this paper, we focus on estimating multiple quantiles and the extreme case of all quantiles: inverse CDF estimation. Such problems are only part of the landscape of estimating uncertainty. A closely related problem is estimating *prediction intervals* such that an $\alpha$-interval contains the true label $\alpha$ of the time and is sharp. Prediction intervals can be generated by optimizing for the calibration and sharpness of the interval (Pearce et al., 2018; Chung et al., 2021). Tagasovska & Lopez-Paz (2019) instead formed them by choosing a pair of quantiles from a multiple quantile regression model, which works well because the standard pinball loss used for training quantile regression models balances calibration and sharpness. A different type of uncertainty estimation problem is estimating the uncertainty of a specific statistic, such as the mean in a regression model; such problems are often handled with Bayesian methods. All of these estimation problems become more challenging in the out-of-domain (OOD) setting.

## 1.6 Paper Organization

Next, in Section 2 we formalize the problem of producing legitimate inverse CDF estimates with quantile regression. Then in Section 3 we dig into what is known about beating the standard *pinball loss* by smoothing over multiple quantile estimates, and we give a new asymptotic convergence result for minimizing the expected pinball loss. In Section 4 we propose using DLNs as the model architecture. Experiments in Section 5 on simulations and real-world data show that the proposed non-crossing DLNs provide competitive, trustworthy estimates.

## 2 Estimating A Legitimate Inverse CDF

We give a constrained optimization problem to train a quantile regression estimator without crossing quantiles, which can produce a legitimate inverse CDF. Let $\{(x_i, y_i)\}_{i=1}^n$ be a training set where each $x_i \in \mathbb{R}^D$ is a feature vector and $y_i \in \mathbb{R}$ is a corresponding label. Denote the model $f(x, \tau; \theta)$, where $\tau \in (0, 1)$ specifies the quantile of interest, and the model $f : \mathbb{R}^{D+1} \to \mathbb{R}$ is parameterized by $\theta \in \mathbb{R}^m$. Note that we sometimes write $f(x, \tau)$ for notational simplicity. Recall the standard pinball loss given $\tau$ is $L_\tau(y, \hat{y}) = \max(\tau(y - \hat{y}), (\tau - 1)(y - \hat{y}))$. Let $\mathcal{T} \sim P_\mathcal{T}$ denote a random $\tau$ so we can minimize the *expected* pinball loss over $\mathcal{T}$ to train $f(x, \tau; \theta)$ as in Tagasovska & Lopez-Paz (2019), though here we generalize to $P_\mathcal{T}$, rather than assuming a uniform distribution for $\mathcal{T}$. The standard pinball loss can be written as the special case where $P_\mathcal{T}$ is the Dirac delta distribution on the target quantile.
Also, we constrain the empirical loss minimization with non-crossing constraints, producing the following constrained training objective:

$$\min_{\theta} \; \mathbb{E}_{\mathcal{T}}\sum_{i=1}^{n}L_{\mathcal{T}}(y_{i},f(x_{i},\mathcal{T};\theta)) \tag{1}$$

$$\text{s.t. } f(x,\tau^{+};\theta)\geq f(x,\tau^{-};\theta)\;\;\forall x\in\mathbb{R}^{D},\,\tau^{+}\geq\tau^{-}. \tag{2}$$

## 2.1 Related Work In Minimizing Sum Of Pinball Losses

The idea of training one model that can predict multiple quantiles by minimizing a sum of pinball losses with non-crossing constraints dates back to at least the 2006 work of Takeuchi et al. (2006) for a *pre-specified, discrete set* of quantiles. Other prior work used a similar mechanism for a pre-specified discrete set of quantiles, and for *more-restrictive* function classes that are monotonic on $\tau$, such as linear models (Bondell et al., 2010), and two-layer *monotonic* neural networks (Cannon, 2018), which are known to have limited flexibility (Daniels & Velikova, 2010). Monotonic DNN models with more layers (Minin et al., 2010; Lang, 2005) or min-max layers (Sill, 1998) can theoretically provide universal approximations, but have not performed as well experimentally as DLNs (You et al., 2017). Monotonic neural nets have also been proposed for estimating a CDF (Chilinski & Silva, 2018). Training only for (1) without (2) (Tagasovska & Lopez-Paz, 2019) does not guarantee a legitimate inverse CDF because the estimated quantiles can cross. An in-between strategy is to instead add to (1) a penalty on violations of the monotonicity constraint on training samples (and one can add virtual examples as well for more coverage) (Sill & Abu-Mostafa, 1997), but applying that strategy to quantile regression cannot guarantee non-crossing quantiles everywhere.

## 2.2 Related Work In Estimating The Inverse CDF

There are two main prior approaches to estimating a *complete* legitimate inverse CDF. The first is nonparametric. For example, k-NN can be extended to predict quantiles by taking the quantiles rather than the mean from within a neighborhood (Bhattacharya & Gangopadhyay, 1990). Similarly, quantile regression forests (Meinshausen, 2006) use random forest leaf nodes to generate local empirical quantiles.

The second strategy is to predict a parametric inverse CDF for any $x$. Traditionally these methods have been fairly rigid. He (1997) developed a method to fit a shared but learned location-scale family across $x$. Yan et al. (2018) proposed a modified 4-parameter Gaussian whose skew and variance depend on $x$. Recently, Gasthaus et al. (2019) proposed the *spline quantile function DNN* (SQF-DNN), which is a DNN model that takes an input $x$ and outputs the parameters for a monotonic piecewise-linear quantile function on $\tau$. SQF-DNN can fit any continuous bounded distribution, given sufficient output parameters, and, because for any $x$ the output is a monotonic function of $\tau$, it guarantees no quantile crossing. Though Gasthaus et al. (2019) focused on RNNs, their framework is easy to apply to the generic quantile regression setting as well, which we do in our experiments.

## 3 Comparison To The Single Quantile Pinball Loss

We show through asymptotic analysis and simulations that minimizing the expected pinball loss can be better at estimating the $\tau$-quantile than minimizing the pinball loss just for $\tau$, and that these positive results are foreshadowed by classic results for unconditional quantile estimates.
This section focuses largely on the unconditioned case (no feature vector $x$); we will return to the conditional problem given features $x$ in Section 4.

## 3.1 Deficiency Of The Sample Quantile

Given only samples from $Y$, minimizing a pinball loss produces the corresponding sample quantile (Chernozhukov, 2005). However, it has long been known that simply taking the sample quantile is not necessarily the best quantile estimator, and in particular that it can be beaten by strategies that smooth multiple sample quantiles (Harrell & Davis, 1982; Kaigh & Lachenbruch, 1982; David & Steinberg, 1986; Reiss, 1980; Falk, 1984). For example, one important classic result that follows from the Lehmann–Scheffé theorem says that if the data are known to be from a uniform $[a, b]$ distribution, then the minimum variance unbiased estimator for the median is not the sample median; rather, it is the average of the sample min and sample max.

Even if we do not know anything about the true distribution, general-purpose methods that smooth multiple quantiles of the data when estimating a particular quantile have proven effective. In fact, the widely used Harrell-Davis estimator is a weighted average of all the sample order statistics where the weights depend on $\tau$, and the estimator for the median is asymptotically equivalent to computing the mean of bootstrapped medians (Harrell & Davis, 1982).

The prior work above suggests that there can be a better training loss than the $\tau$-specific pinball loss for the general problem of quantile regression given features, one that increases efficiency by better smoothing across the quantiles. Indeed, by minimizing the expected pinball loss as in (1), the estimate of any quantile is affected by the other quantiles (given some smoothness in $f$). We provide some further intuition about how it smooths in Appendix A.7.

## 3.2 Asymptotic Properties

We give a new asymptotic convergence result for estimating the full inverse CDF by minimizing the expected pinball loss (1), optionally with a monotonicity constraint (2), and compare the resulting asymptotic variance to that of single-quantile estimation learned by minimizing an individual pinball loss. We show that depending on the underlying data distribution and the function class chosen to estimate the inverse CDF, the simultaneous estimation can produce a more efficient estimator for a single target quantile than minimizing the pinball loss for only that quantile.

For tractability, we work in the setting where there are no features $x$ to condition on, only a set of training samples $\{y_i\}$, and we are interested in estimating the quantile $F^{-1}(\tau)$ for a random variable $Y \in \mathbb{R}$ given IID samples $\{y_i\}_{i=1}^n$. Such analysis is relevant to the conditional case because function classes that are flexible over $X$ intuitively will have similar dynamics at each value of $X$ as the unconditioned case we work with here. We formalize this in a basic corollary in Section 3.4.

We also focus our theory on the case where the quantile model is *well-specified*, i.e. the function class $f(x, \tau; \theta)$ used contains the true data-generating model. In practice, this is often not the case, but we further explore in Section 3.5 how and why the use of a (tunably) flexible function class can still improve quantile estimation in the general case. In particular, we explore how adding more flexibility to the function class moves us from global smoothing across all of the data towards more local smoothing in the vicinity of the target quantiles.
General theory for each of these issues (conditional CDF estimation and misspecified function classes) remains an open problem. Our main experiments in Section 5.5, though, do consider the general case of conditional quantile regression with unknown output distributions.

## 3.2.1 Review Of Single Pinball Loss Asymptotic Properties

We start with the well-understood case of single-quantile estimation. Let $\theta^*_\tau = \arg\min_\theta E_Y[L_\tau(Y, \theta)]$ be the true minimizer for the corresponding pinball loss and $\hat{\theta}^{(n)}_\tau = \arg\min_\theta \frac{1}{n}\sum_{i=1}^n L_\tau(y_i, \theta)$ be the empirical minimizer over our data. (In our context, $\theta^*_\tau$ is the true $\tau$-quantile of $Y$ and $\hat{\theta}^{(n)}_\tau$ is the $\tau$ sample quantile of the data (Chernozhukov, 2005).) The asymptotic distribution for this single quantile estimate is given by

$$\sqrt{n}(\hat{\theta}_{\tau}^{(n)}-\theta_{\tau}^{*})\stackrel{d}{\rightarrow}\mathcal{N}\left(0,\frac{\tau(1-\tau)}{p(\theta_{\tau}^{*})^{2}}\right), \tag{3}$$

where $p(y)$ is the density for $Y$ (Koenker, 2005).

## 3.2.2 Expected Pinball Loss Asymptotic Properties

Next, consider learning the full inverse CDF $f(\tau; \theta)$ parameterized by $\theta \in \Theta \subseteq \mathbb{R}^m$ by minimizing the expected pinball loss:

$$\theta^{*}=\operatorname*{arg\,min}_{\theta\in\Theta}E_{\mathcal{T}}[E_{Y}[L_{\mathcal{T}}(Y,f(\mathcal{T};\theta))]],$$

where $\mathcal{T} \in (0, 1)$ is a random variable drawn from some distribution independently of $Y$. Note that we do not separately analyze the monotonicity constraint in Equation (2) except to note that said constraints can be wrapped into the function class $f$ and feasible parameter space $\Theta$. It may be that explicitly modeling such constraints would improve the asymptotic variance we derive; conversely, imposing the monotonicity constraint on certain function classes may lead to violations of some of the sufficient conditions set forth in Lemma 1.

Now let $\hat{\theta}^{(n)}$ be the corresponding minimizer under an empirical distribution over $Y$ with samples $\{y_i\}_{i=1}^n$. In Theorem 1 below, we present a general asymptotic distribution for $\hat{\theta}^{(n)}$ that depends on the properties of the parametric function $f$, the feasible parameter space $\Theta$, and the distribution of $\mathcal{T}$. A natural choice for the distribution of $\mathcal{T}$ is Unif(0, 1), but in fact, depending on the distribution of $Y$ and properties of the function class $f(\tau; \theta)$, other distributions could lead to lower asymptotic variance. In choosing the distribution of $\mathcal{T}$, note that choosing distributions with wider support will make it easier to satisfy the requirements for asymptotic normality in Theorem 1 below (see note in Appendix A.4).

Theorem 1. *Suppose $f(\tau; \theta)$ is well specified with a unique $\theta^* \in \Theta \subseteq \mathbb{R}^m$ such that $f(\tau; \theta^*) = F^{-1}(\tau)$ for all $\tau \in (0, 1)$, and that $\theta^*$ is also the unique minimizer for the risk $R(\theta) = E_{Y,\mathcal{T}}[L_{\mathcal{T}}(Y, f(\mathcal{T}; \theta))]$. Suppose the estimator $\hat{\theta}^{(n)}$ is weakly consistent, $\hat{\theta}^{(n)} \xrightarrow{P} \theta^*$. Suppose further that the function $\theta \mapsto f(\tau; \theta)$ is continuous, locally Lipschitz, and twice differentiable at $\theta = \theta^*$ for all $\tau \in (0, 1)$.*
*Then,*

$$\sqrt{n}(f(\tau;\hat{\theta}^{(n)})-f(\tau;\theta^{*}))\stackrel{d}{\rightarrow}\mathcal{N}\left(0,\nabla_{\theta}f(\tau;\theta^{*})^{\top}Q^{-1}V(Q^{-1})^{\top}\nabla_{\theta}f(\tau;\theta^{*})\right) \tag{4}$$

*where $Q=E_{\mathcal{T}}[p(f(\mathcal{T};\theta^{*}))\Gamma(\mathcal{T})]$ and $V=E_{\mathcal{T}}[\mathcal{T}(1-\mathcal{T})\Gamma(\mathcal{T})]$, with $\Gamma(\tau)=\nabla_{\theta}f(\tau;\theta^{*})\nabla_{\theta}f(\tau;\theta^{*})^{\top}$.*

Lemma 1 provides a simple example of sufficient conditions for the consistency condition in Theorem 1, which in general depends on properties of the function $f$ and parameter space $\Theta$.

Lemma 1. *If $f(\tau, \theta)$ is convex in $\theta$ for all $\tau \in (0, 1)$, $\Theta$ is convex, and $\theta^*$ is in the interior of $\Theta$, then $\hat{\theta}^{(n)}$ is weakly consistent, $\hat{\theta}^{(n)} \xrightarrow{P} \theta^*$.*

Lemma 1 applies when the optimization problem in (1) is convex. Aside from convexity, other example conditions could include compactness of $\Theta$, or further Lipschitz continuity or bounded moment conditions on $f$ (Powell, 2010). Given a specific distribution of $\mathcal{T}$, a function class of the inverse CDF $f$, and a data distribution $Y$, we can directly compare the asymptotic variance of the direct quantile estimates (Equation (3)) to that of quantile estimates from the learned inverse CDF (Equation (4)). We illustrate this below with several examples.

## 3.2.3 Example 1: Single Parameter Uniform

To build intuition for the implications of Theorem 1, we consider a centered uniform distribution $Y \sim \mathrm{Unif}(-\frac{\alpha}{2}, \frac{\alpha}{2})$, whose inverse CDF is $F^{-1}(\tau) = -\frac{\alpha}{2} + \tau\alpha$. Let $f(\tau; \theta) = \theta(\tau - \frac{1}{2})$. Expanding the asymptotic variance term in Theorem 1,

$$Q^{-1}V(Q^{-1})^{\top}=\frac{\alpha^{2}E_{\mathcal{T}}[-\mathcal{T}^{4}+2\mathcal{T}^{3}-\frac{5}{4}\mathcal{T}^{2}+\frac{1}{4}\mathcal{T}]}{E_{\mathcal{T}}[\mathcal{T}^{2}-\mathcal{T}+\frac{1}{4}]^{2}}.$$

This depends on the first four moments of the distribution of $\mathcal{T}$, the choice of which is up to the algorithm designer. For the natural choice of $\mathcal{T} \sim \mathrm{Unif}(0, 1)$, the asymptotic variance for estimating a specific $\tau_0$-quantile using the function $f(\tau_0; \hat{\theta}^{(n)}) = \hat{\theta}^{(n)}(\tau_0 - \frac{1}{2})$ is $1.2\alpha^2(\tau_0 - \frac{1}{2})^2$. This variance is lowest when $\tau_0 = 0.5$, and increases to $0.3\alpha^2$ at the most extreme quantiles, when $\tau_0 = 0$ or $\tau_0 = 1$.

We next compare this to estimating the $\tau_0$-quantile using the single pinball loss $L_{\tau_0}$. Equation (3) shows that the estimate $\hat{\theta}^{(n)}_{\tau_0}$ has asymptotic variance $\alpha^2\tau_0(1 - \tau_0)$. This shows that depending on the exact value of $\tau_0$, the single quantile estimate could be less or more efficient than the estimate resulting from evaluating the function $f(\tau_0; \hat{\theta}^{(n)})$. The single quantile estimate is more efficient for the extreme quantiles, whereas the inverse CDF estimate is more efficient for estimating the median. Intuitively, the inverse CDF median estimate benefits from fitting the inverse CDF function to the surrounding data points, which is reminiscent of classic quantile smoothing.

## 3.2.4 Example 2: Two-Parameter Uniform

We next consider a two-parameter uniform distribution example: for $Y \sim \mathrm{Unif}(\alpha, \beta)$, $p(y) = \frac{1}{\beta-\alpha}$ and $F^{-1}(\tau) = \alpha + (\beta - \alpha)\tau$. Let $\theta \in \mathbb{R}^2$ with $f(\tau; \theta) = \theta_0 + \theta_1\tau$. That is, we learn a two-parameter line as our inverse CDF.
We then have $\nabla_\theta f(\tau; \theta) = \begin{bmatrix} 1 \\ \tau \end{bmatrix}$ and can therefore write the $Q$ and $V$ matrices from Theorem 1 as

$$Q=\frac{1}{\beta-\alpha}E_{\mathcal{T}}\begin{bmatrix}1&\mathcal{T}\\ \mathcal{T}&\mathcal{T}^{2}\end{bmatrix};\qquad V=E_{\mathcal{T}}\begin{bmatrix}\mathcal{T}-\mathcal{T}^{2}&\mathcal{T}^{2}-\mathcal{T}^{3}\\ \mathcal{T}^{2}-\mathcal{T}^{3}&\mathcal{T}^{3}-\mathcal{T}^{4}\end{bmatrix}.$$

Evaluating each of these matrices for $\mathcal{T} \sim \mathrm{Unif}(0, 1)$ we arrive at

$$Q^{-1}V(Q^{-1})^{\top}=(\beta-\alpha)^{2}\begin{bmatrix}\frac{7}{15}&-\frac{3}{5}\\ -\frac{3}{5}&\frac{6}{5}\end{bmatrix},$$

and therefore an asymptotic variance for a particular $\tau_0$-quantile estimate $f(\tau_0; \hat{\theta}^{(n)}) = \hat{\theta}_0 + \hat{\theta}_1\tau_0$ of $(\beta - \alpha)^2(\frac{7}{15} + \frac{6}{5}\tau_0^2 - \frac{6}{5}\tau_0)$. This is a quadratic function that finds its minimum at $\tau = 0.5$ and is highest (on the unit interval) at $\tau = 0$ and $\tau = 1$. We can compare it to the asymptotic variance for single-pinball regression of $(\beta - \alpha)^2\tau(1 - \tau)$, which is a quadratic function that attains its *maximum* at $\tau = 0.5$. As we see in Figure 1, the variance is smaller for the expected pinball regression for quantiles between roughly $\tau = 0.3$ and $\tau = 0.7$. The opposite is true for the extreme quantiles.

![6_image_0.png](6_image_0.png)

Figure 1: Plots of the asymptotic variances for the expected pinball regression (orange) and individual single-pinball regressions (blue) for different quantiles, when we assume that the true distribution is Uniform$(\alpha, \beta)$, the expected pinball loss regression learns a linear model, and we assume $\beta - \alpha = 1$. Relaxing the last assumption would scale the y-axis but not change the intersection points.

We validate these results empirically through simulation in Table 1. In each simulation, we draw 1,000 samples of $Y$ from Uniform(0,1) and seek to best estimate the true 10th, 50th, and 90th quantiles (i.e. 0.1, 0.5, and 0.9). We train a linear model over $\tau$ using the expected pinball loss (equivalent to a 2-keypoint 1-dimensional DLN as explained in Section 4), and compare it to taking the sample quantile or using the Harrell-Davis estimator. We find the expected pinball loss model performs best at estimating the median but worse at more extreme quantiles. Note that this is specific to the uniform distribution simulation; we would not necessarily expect the same outcome for any quantile estimation problem.

Table 1: Simulation results for unconditional quantile estimation given 1,000 samples of a Uniform(0,1) random variable. Results are shown for monotonic DLNs trained with the expected pinball loss over uniformly sampled $\tau$, sample quantiles, and the Harrell-Davis estimator. DLNs are learned with 2 keypoints in order to embed the correct linear assumption on the inverse CDF.

| Model | τ = 0.1 | τ = 0.5 | τ = 0.9 |
|-----------------------|---------------------|---------------------|---------------------|
| DLN, Expected Pinball | 0.000109 ± 0.000005 | 0.000077 ± 0.000004 | 0.000109 ± 0.000005 |
| Sample Quantile | 0.000097 ± 0.000004 | 0.000245 ± 0.000010 | 0.000086 ± 0.000004 |
| Harrell-Davis | 0.000089 ± 0.000004 | 0.000247 ± 0.000011 | 0.000083 ± 0.000004 |

Overall, these examples illustrate the sometimes counterintuitive fact that it can be more asymptotically efficient to estimate a quantile through learning a full inverse CDF than it would be to just estimate the single quantile alone.
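The following is a minimal NumPy sketch of the kind of simulation summarized in Table 1 (our own illustration, not the authors' code): it fits the two-parameter line $f(\tau; \theta) = \theta_0 + \theta_1\tau$ by stochastic gradient descent on the expected pinball loss with $\mathcal{T} \sim \mathrm{Unif}(0, 1)$, then compares its quantile estimates against the sample quantiles. The step size and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_inverse_cdf(y, steps=2000, lr=0.05):
    theta = np.zeros(2)  # f(tau; theta) = theta[0] + theta[1] * tau
    for _ in range(steps):
        tau = rng.uniform(size=y.shape)  # fresh T ~ Unif(0, 1) per example
        y_hat = theta[0] + theta[1] * tau
        # Subgradient of the pinball loss with respect to y_hat:
        # -tau where y > y_hat, and (1 - tau) where y <= y_hat.
        g = np.where(y - y_hat > 0, -tau, 1.0 - tau)
        theta -= lr * np.array([g.mean(), (g * tau).mean()])
    return theta

y = rng.uniform(size=1000)  # 1,000 draws from Uniform(0, 1)
theta = fit_linear_inverse_cdf(y)
for tau0 in (0.1, 0.5, 0.9):
    print(tau0, theta[0] + theta[1] * tau0, np.quantile(y, tau0))
```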
## 3.3 Expected Pinball Loss With A Beta Distribution

So far, our examples have analyzed the case where the expected pinball loss is taken over the uniform distribution; that is, equal weight is put on the pinball loss for each quantile. Theorem 1 is more general, though, allowing for any distribution over T. We now explore the case where we put more weight on pinball losses around a desired quantile using the Beta distribution. This makes it less likely that the overall inverse CDF will be high quality, but offers the possibility that an individual quantile estimate may improve by focusing the Beta distribution on it.

We parameterize the Beta distribution by its mode and concentration. This allows us to fix the mode parameter M (the peak of the Beta distribution) as our desired quantile, with the concentration parameter C smoothly interpolating between the uniform distribution (C = 2) and the point-mass distribution with all weight at the mode (C → ∞).

Consider again Figure 1, which found that taking the sample quantile achieved similar performance to training with an expected pinball loss when it comes to estimating the 0.7 quantile. What if we only cared about that quantile; could we do better?

We analyze this with a slight change to our setup in Section 3.2.4, where we replace T ∼ Unif(0, 1) with T ∼ Beta(M = 0.7, C = x) for different values of x. We can reuse the Q and V matrices and only change the moments we plug in. The Beta(α, β) distribution has $E(X^{k})=\prod_{r=0}^{k-1}\frac{\alpha+r}{\alpha+\beta+r}$, and we can convert from {M, C} to {α, β} via the formulae α = M(C − 2) + 1 and β = (1 − M)(C − 2) + 1.

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 2: Plots of the asymptotic variances for the 0.7 quantile for linear models trained with a Beta distribution expected pinball loss. The Beta distributions have modes of 0.7 and varied concentrations. The dotted lines show the asymptotic variances for a uniform expected pinball loss estimator and the sample quantile; these values are equivalent to the values of the curves in Figure 1 for the 0.7 quantile. As in Figure 1, we assume the true distribution is Uniform(α, β) with β − α = 1.

Figure 2 shows the results. C = 2 is equivalent to the uniform pinball loss, which is slightly worse than the sample quantile. As the concentration parameter increases, we improve to a point before getting worse and asymptoting to the performance of the sample quantile. These results show the influence of local quantile smoothing; we can sometimes do better by primarily considering nearby data points instead of all data points, even when our chosen function matches the true distribution. We will see in Section 3.5 that local smoothing becomes particularly important when the function class is misspecified. We further explore the performance of the Beta distribution in Section 5.
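The following small numeric sketch (an illustration; it assumes the two-parameter uniform setup of Section 3.2.4 with β − α = 1, and the concentration values shown are arbitrary) converts (M, C) to (α, β), plugs the Beta moments into the Q and V matrices, and evaluates the asymptotic variance of the 0.7-quantile estimate, tracing the curve in Figure 2.

```python
import numpy as np

def beta_moments(a, b, k_max=4):
    # E[X^k] = prod_{r=0}^{k-1} (a + r) / (a + b + r), with E[X^0] = 1.
    return [np.prod([(a + r) / (a + b + r) for r in range(k)]) for k in range(k_max + 1)]

def asymptotic_variance(tau0, a, b):
    # Q and V from Section 3.2.4, with beta - alpha = 1.
    m = beta_moments(a, b)
    Q = np.array([[m[0], m[1]], [m[1], m[2]]])
    V = np.array([[m[1] - m[2], m[2] - m[3]],
                  [m[2] - m[3], m[3] - m[4]]])
    g = np.array([1.0, tau0])            # gradient of f = theta0 + theta1 * tau0
    Qinv = np.linalg.inv(Q)
    return g @ Qinv @ V @ Qinv.T @ g

M = 0.7
for C in [2, 5, 10, 30, 100, 1000]:
    a, b = M * (C - 2) + 1, (1 - M) * (C - 2) + 1   # mode/concentration -> (a, b)
    print(f"C = {C:5d}: asymptotic variance {asymptotic_variance(M, a, b):.4f}")
# The sample-quantile reference value is tau0 * (1 - tau0) = 0.21.
```

At C = 2 this recovers the uniform-weighting variance of roughly 0.215, slightly above the sample quantile's 0.21, and intermediate concentrations dip below it.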
## 3.4 Extension To Conditional Quantile Estimation

The asymptotic theory so far shows that expected pinball loss estimators can sometimes outperform single quantile estimators in the absence of conditioning on any additional features X. Conditional quantile estimation in general adds a considerable amount of theoretical complexity which we do not fully address here, and to the best of our knowledge, characterization of asymptotic properties for full conditional inverse CDF estimation remains an open problem.

However, the provided unconditional asymptotic theory can yield intuition and extreme examples for cases when estimating the full conditional inverse CDF would outperform single conditional quantile estimation. For instance, when X is a finite categorical feature, and the function f and parameter space Θ are expressive enough in X, Theorem 1 can be directly applied to the individual quantile estimation problems conditioned on each finite value of X. We formalize this in the following Corollary.

Corollary 1. Let X be distributed uniformly over a finite set of categorical values $\mathcal{X}$, letting $m=|\mathcal{X}|$. Suppose that a dataset $\{(x_{i},y_{i})\}_{i=1}^{n}$ is created by sampling $\{y_{j}\}_{j=1}^{n/m}$ values IID from the conditional distribution Y|X = x for each value of x (assuming m divides n), and taking the union of these sets. Let $f:\mathcal{X}\times(0,1)\times\Theta\rightarrow\mathbb{R}$ be fully separable over x; that is, let $\Theta=\Theta_{1}\times...\times\Theta_{m}$, where $\Theta_{x}$ represents a copy of a given parameter space $\Theta_{0}$ for each value of $x\in\mathcal{X}$. Let f take the form $f(x,\tau;\theta)=\sum_{x_{0}\in\mathcal{X}}\mathbb{1}(x=x_{0})g(\tau;\theta_{x_{0}})$ for some function $g:(0,1)\times\Theta_{0}\rightarrow\mathbb{R}$. Suppose f(x, τ; θ) is well specified with a unique θ* ∈ Θ ⊆ $\mathbb{R}^{m}$ such that $f(x,\tau;\theta^{*})=q_{\tau}(x)$ for all τ ∈ (0, 1) and all $x\in\mathcal{X}$, and that θ* is also a unique minimizer for the risk $R(\theta)=\frac{1}{m}\sum_{x\in\mathcal{X}}E_{Y,\mathcal{T}}[L_{\mathcal{T}}(Y,f(x,\mathcal{T};\theta))\,|\,X=x]$. Suppose the estimator $\hat{\theta}^{(n)}$ is weakly consistent, $\hat{\theta}^{(n)}\stackrel{P}{\rightarrow}\theta^{*}$. Suppose further that the function $\theta\mapsto f(x,\tau;\theta)$ is continuous, locally Lipschitz, and twice differentiable at θ = θ* for all τ ∈ (0, 1) and all $x\in\mathcal{X}$. Then for each $x\in\mathcal{X}$,

$$\sqrt{n/m}\,(f(x,\tau;\hat{\theta}^{(n)})-f(x,\tau;\theta^{*}))\stackrel{d}{\rightarrow}\mathcal{N}\left(0,\nabla_{\theta}f(x,\tau;\theta^{*})^{\top}Q^{-1}V(Q^{-1})^{\top}\nabla_{\theta}f(x,\tau;\theta^{*})\right)$$

where $Q=E_{\mathcal{T}}[p_{x}(f(x,\mathcal{T};\theta^{*}))\Gamma(\mathcal{T})]$ and $V=E_{\mathcal{T}}[\mathcal{T}(1-\mathcal{T})\Gamma(\mathcal{T})]$, with $p_{x}(y)$ being the density of Y|X = x and $\Gamma(\tau)=\nabla_{\theta}f(x,\tau;\theta^{*})\nabla_{\theta}f(x,\tau;\theta^{*})^{\top}$.

More generally, function classes with a high degree of flexibility such as neural networks may approach this extreme categorical separability described above. We describe one such flexible function class below which, given enough parameters, can be specified to meet this separability condition for finite categorical features. Of course, computational constraints and generalizability must also be considered.

## 3.5 Model Misspecification And Local Quantile Smoothing

We now expand our simulations to a case where the model's functional form f(τ; θ) is not well-specified, specifically estimating the true quantiles of the exponential distribution (with λ = 1) given a sample of data. While the true quantile function is $-\ln(1-\tau)$, we learn a linear or piecewise-linear f (i.e. a 1-dimensional DLN; see Section 4) trained with the proposed expected pinball loss and monotonicity constraint on τ. We compare to the sample quantile and the Harrell-Davis estimator (discussed in Section 3.1), which also does not rely on knowing the underlying distribution, but does not extend to quantile regression with features.

Results in Table 2 show the average MSE when we use 10-fold cross-validation over the expected pinball loss to choose the number of knots K in the piecewise-linear function f(τ). The DLN is better than the sample quantile at predicting the median, though not as good as the Harrell-Davis estimator; it outperforms both for the more extreme quantile τ = 0.99.

Table 2: Estimating the median and 99th percentile of an exponential distribution via cross-validated DLN training.
Here, the number of knots in the DLN (a piecewise linear function on τ) is tuned via 10-fold cross-validation over the N = 51 or N = 505 training points, respectively, and the model is then retrained on the full data with the selected number of keypoints.

| Model | τ = 0.5 | τ = 0.99 |
|--------------------------|--------------------|------------------|
| CV-DLN, Expected Pinball | 0.019263 ± 0.00086 | 0.17566 ± 0.0080 |
| Sample Quantile | 0.019968 ± 0.00046 | 0.19552 ± 0.0043 |
| Harrell-Davis | 0.018081 ± 0.00038 | 0.18887 ± 0.0042 |

We can better understand how the chosen flexibility of f(τ) affects the quality of our quantile estimator by considering the performance for different numbers of keypoints K in the piecewise linear function, as shown in the top row of Figure 3. At a high level, we see a consistent and intuitive pattern: low performance for a very low number of keypoints, which oversmooths across τ; great performance for an intermediate number of keypoints, which regularizes; and close-to-sample-quantile performance for a high number of keypoints, as the effect of a very flexible model f(τ) is similar to the individual pinball loss.

![9_image_0.png](9_image_0.png)

Figure 3: Estimating the median (left plots) and 99th percentile (right plots) of an exponential distribution (no features) by the sample median, by Harrell-Davis, and by solving (1) and (2) with a DLN function class. The median estimates were from N = 51 training samples. The τ = 0.99 estimates were from N = 505 training samples. Error bars are computed over 1,000 repeats. Top: $P_{\mathcal{T}}$ is the uniform distribution, and the DLN is a piecewise-linear function (PLF) with K keypoints. Larger K makes the model more flexible, and imposes less smoothing across quantiles. Results for different choices of K are shown across the x-axis. Bottom: The DLN is just a linear model on τ ∈ [0, 1] with θ ∈ $\mathbb{R}^{2}$, and $P_{\mathcal{T}}$ is a Beta distribution. Results are shown for six choices of the Beta concentration across the x-axis, where a bigger concentration makes $P_{\mathcal{T}}$ more peaked around the target quantile, imposing less smoothing across quantiles.

With a low number of keypoints (note: results are even worse for very low K, which we cut off for ease of visualization), the model misspecification is the dominant effect. Limited to only a few keypoints, the function class f cannot capture the exponential distribution's quantile function, and so the quantile estimates are poor. With a high number of keypoints, meanwhile, we mechanistically approach the performance of the sample quantile; note that at the extreme when we have the same number of keypoints as data points, our expected pinball loss model will exactly recover every sample quantile. In the middle, we find that the disadvantages of misspecification are outweighed by the benefit of smoothing across nearby data samples when estimating any one quantile. For example, consider the high-performing model with K = 100 when predicting the τ = 0.99 quantile on an N = 505 dataset; that suggests that each keypoint is influenced by around 5 data points, and a locally linear assumption on that scale improves performance.

We confirm the key role of local smoothing across nearby quantiles by running another experiment. Again, we seek to predict the exponential distribution; but this time, we do so with a linear model f rather than a piecewise-linear model.
We focus this linear model around the neighborhood of our desired quantile by training with a Beta distribution over pinball losses instead of a Uniform distribution over pinball losses. This setup is very similar to the theoretical analysis conducted in Figure 2. We observe, in the bottom row of Figure 3, a qualitatively similar pattern as in the top row. A Beta distribution with low concentration around the desired quantile performs poorly, and one with a very high concentration performs similarly to the sample quantile. Intermediate concentrations, i.e. concentrations that fit a linear approximation over nearby sample quantiles, see improved performance.

In practice, whether we train with a flexible f over all pinball losses equally or with a potentially less-flexible f over neighboring pinball losses depends on whether we are interested in the full distribution, as well as the particulars of our problem. Section 5.5 contains a comprehensive set of experiments that compare various strategies.

To recap, we have shown that we can replicate or even beat the performance of not only the sample quantile but unconditional quantile estimators such as the Harrell-Davis estimator as well. We do this, though, with a strategy that can seamlessly scale to the problems of conditional quantile regression and full inverse CDF estimation.

## 4 DLNs For Legitimate But Flexible Inverse CDFs

We propose using DLNs (You et al., 2017) as the function class in (1) because of their flexibility and because they efficiently enable three key regularization strategies: non-crossing constraints, restricting the learned distribution to location-scale families, and monotonic features, as explained below.

## 4.1 Review Of Deep Lattice Networks

DLNs are multi-layer models where some layers are lattices or ensembles of lattices. Lattices build upon a well-established strategy for approximating functions over a bounded domain (Campbell-Kelly et al., 2003): record the function's values on a regular grid of inputs, and then one can linearly interpolate between the recorded values. Generalizing to multiple dimensions, a lattice function is a multi-dimensional look-up table whose parameters are the look-up table values, and the function is produced by linearly interpolating the look-up table values surrounding an input point. See Fig. 4 for example 2D lattices. In the special case of a one-dimensional domain, a lattice is just a piecewise-linear function. We recommend reading Gupta et al. (2016) for the basics of lattices with monotonicity constraints. See also Garcia et al. (2012) for basics of lattice models and their relationship to other splines.

Mathematically, a lattice function $f(x):\mathbb{R}^{D}\rightarrow\mathbb{R}$ can be expressed $f(x)=\theta^{\top}\psi(x)$, where the parameters $\theta\in\mathbb{R}^{p}$ are the look-up table values, and the nonlinear transformation $\psi(x)\in\mathbb{R}^{p}$ maps an input $x\in\mathbb{R}^{D}$ to the ultra-sparse vector of linear interpolation weights over the look-up table values. Garcia & Gupta (2009) showed that lattice models can be trained using a standard empirical risk minimization framework. Lattices can approximate any continuous bounded function if they are sampled on a regular grid with enough knots. A key reason to use lattice functions in an application like quantile regression is that the regular structure of their parameters makes them amenable to shape constraints.
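To make the interpolation concrete, here is a minimal sketch (our own illustration, not the TensorFlow Lattice implementation) of ψ(x) for a lattice with two knots per dimension. With more knots per dimension, only the corners of the grid cell containing x receive nonzero weight, which is what makes ψ(x) ultra-sparse.

```python
import itertools
import numpy as np

def psi(x):
    # Multilinear interpolation weights over the 2^D corners of [0, 1]^D;
    # the weights are nonnegative and sum to 1.
    D = len(x)
    weights = np.zeros(2 ** D)
    for idx, corner in enumerate(itertools.product([0, 1], repeat=D)):
        w = 1.0
        for xd, cd in zip(x, corner):
            w *= xd if cd == 1 else (1.0 - xd)
        weights[idx] = w
    return weights

# A 2D lattice as in Figure 4: theta holds the function values at the corners
# (0,0), (0,1), (1,0), (1,1), in the corner order produced by psi above.
theta = np.array([0.0, 1.0, 0.5, 2.0])
x = np.array([0.3, 0.8])
print(psi(x))          # [0.14 0.56 0.06 0.24]
print(theta @ psi(x))  # bilinear interpolation of the four corner values
```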
Here, the constraint (2) requires f(x, τ; θ) to be monotonically increasing in τ, which is imposed efficiently with sparse linear inequality constraints on the lattice parameters, corresponding to forcing any neighboring gridpoints in the direction of τ to be increasing (Gupta et al., 2016).

To more efficiently represent functions, lattice models are usually architected with a first layer that separately calibrates each input with a learned one-dimensional piecewise-linear function (PLF) whose knots need not be regularly spaced, referred to as a calibrator, and then fuses the calibrated inputs together with a coarser multi-dimensional lattice (Garcia et al., 2012; Gupta et al., 2016).

The main drawback to lattices is that the number of lattice parameters needed to approximate a D-dimensional function blows up exponentially as $O(2^{D})$. This problem is handled by making an ensemble of calibrated lattices (Canini et al., 2016), much as one makes a random forest, and these calibration and lattice layers can be mixed with other layers like linear layers, and cascaded ad infinitum, forming arbitrarily deep lattice networks (DLNs) (You et al., 2017; Cotter et al., 2019). Importantly, as long as each layer of a DLN is trained to respect a monotonic feature like τ, the entire DLN will be monotonic in τ.

![11_image_0.png](11_image_0.png)

Figure 4: Example 2D lattices sized 2 × 2 have four parameters. Each parameter represents the value the lattice function takes at one of the four corners of the input domain $[0,1]^{2}$. The lattice function values in-between the vertices are computed by bilinear interpolation of the four parameter values, so these functions can be re-parameterized and expressed as bilinear polynomial functions over $[0,1]^{2}$ of the form $f(x)=a+bx_{1}+cx_{2}+dx_{1}x_{2}$. A 2D lattice with more knots, e.g. 3 × 3, would be locally bilinear across each cell of the lattice.

More generally, one could use unrestricted ReLU or embedding layers for the first few model layers on x, then fuse in τ later with τ-monotonic DLN layers. Figure 5 illustrates how an arbitrary neural net architecture can be used to reduce the dimensionality of the input features x before fusing with τ, allowing our approach to be used for essentially any type and size of input.

![11_image_1.png](11_image_1.png)

Figure 5: Block diagram showing how an arbitrary neural net architecture can be used to reduce the dimensionality of x to a more manageable size, before combining in a DLN architecture with τ, which is constrained to be monotonic.

In our experiments, our DLNs are built with the standard two-layer calibrated lattice (Gupta et al., 2016) and calibrated-ensemble-of-lattices (Canini et al., 2016) architectures offered by the open-source TensorFlow Lattice library (TensorFlow Blog Post, 2020). The TensorFlow Lattice library enables solving the constrained optimization problem (1) and (2) with Dykstra's projection algorithm. In our experiments, we found our DLN models trained in a similar amount of time as the DNN and SQF-DNN models.
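The sketch below (a simplified illustration in our own notation, not the library's API) assembles these pieces into the forward pass of a two-layer calibrated lattice f(x, τ) with one feature, one τ input, and two knots per lattice dimension. Monotonicity in τ holds here because the τ calibrator's output knots are nondecreasing and the lattice corner values increase in the τ direction. With two knots in the τ direction, this forward pass reduces to f(x, τ) = f(x, 0) + c(τ)(f(x, 1) − f(x, 0)), which is exactly the location-scale structure formalized in Lemma 2 below.

```python
import numpy as np

def plf(v, knots_in, knots_out):
    # 1D piecewise-linear calibrator; monotone iff knots_out is nondecreasing.
    return np.interp(v, knots_in, knots_out)

def calibrated_lattice_2d(x, tau, cal_x, cal_tau, theta):
    # theta[i][j] is the look-up value at corner (x = i, tau = j), i, j in {0, 1};
    # the calibrated inputs are fused by bilinear interpolation.
    cx, ct = plf(x, *cal_x), plf(tau, *cal_tau)
    return ((1 - cx) * (1 - ct) * theta[0][0] + (1 - cx) * ct * theta[0][1]
            + cx * (1 - ct) * theta[1][0] + cx * ct * theta[1][1])

cal_x = (np.array([0.0, 5.0, 10.0]), np.array([0.0, 0.7, 1.0]))   # feature calibrator
cal_tau = (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.4, 1.0]))  # c(tau), nondecreasing
theta = [[1.0, 3.0], [2.0, 7.0]]   # corner values increase in the tau direction
print(calibrated_lattice_2d(2.0, 0.25, cal_x, cal_tau, theta))
```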
## 4.2 Location-Scale Family Models

We show that different architecture choices for the DLN model can regularize the estimated inverse CDF in semantically meaningful ways, similar to prior, more rigid distributional approaches. As a simple example, if one constrains the DLN architecture to not have any interactions between τ and any of the x features, then one learns a regression model with homoskedastic errors.

As another example, the basic two-layer DLN called a *calibrated lattice* model (Gupta et al., 2016) described in Section 4.1 can be constrained to learn distributions across x that come from a shared, learned, location-scale family:

Lemma 2. Let f(x, τ) be a calibrated lattice model with piecewise-linear calibrator c(τ) : [0, 1] → [0, 1] for τ, let there be two knots in the regular grid in the direction of τ, and let the lattice parameters be interpolated with standard multilinear interpolation. Then f(x, τ) represents an inverse CDF function $F^{-1}(y|x)$ where the estimated distribution for every x is from the same location-scale family as the calibrator c(τ).

The proof is in the Appendix. Note the number of keypoints in c(τ) controls the complexity of the learned base distribution, allowing the model to approximate location-scale families like the Gaussian, gamma, or Pareto distributions. Then in the second layer of f, the number of knots in the lattice over the c(τ) input controls how much the distribution should be allowed to vary across x. Two knots in the lattice limits us to a shared location-scale family, as noted above, while three knots in the lattice gives an extra degree of freedom to shrink or stretch one side of the distribution differently across x. More lattice knots across τ, or ensembling of lattices, or more layers can all be used to move the model towards full generality.

## 4.3 Other Monotonic Features

Using DLNs also enables imposing monotonicity on the features x if domain knowledge says they should have a monotonic impact on the estimates (Gupta et al., 2016). For example, a pizza delivery time estimator might constrain an input quantifying how bad traffic is to only increase estimated quantiles (that is, worse traffic never makes the estimated delivery time shorter).

Our experience with real-world applications of quantile estimation is that they often use features that are past measurements (or otherwise strong correlates) to predict the future distribution of measurements. For example, in the real-world Puzzles experiment in Section 5, we predict the quantiles of how long a Hoefnagel Puzzle Club member will keep a rented puzzle, and one of the features is how long they kept their last puzzle. We constrain that input to have a monotonically positive effect on the estimate. (However, earlier hold-times are not necessarily monotonically indicative of the next future hold-time, because they may indicate a strongly decreasing trend over time).

Domain knowledge about the relationship between the model inputs and output that can be captured with monotonicity constraints is a tuning-free, semantically-meaningful regularizer that can improve test performance and increase model explainability (Gupta et al., 2016), and where applicable, even help capture ethical rules (Wang & Gupta, 2020). DLNs also enable multi-dimensional shape constraints to capture domain knowledge like complementary features (Gupta et al., 2020), and functions defined on sets (Cotter et al., 2019).

## 5 Experiments

Using simulations and real data, we first show that training a DLN that is monotonic in τ to satisfy both (1) and (2) works well compared to training a DNN for (1) as done in Tagasovska & Lopez-Paz (2019) or the SQF-DNN parametric approach of Gasthaus et al. (2019); the results are in Tables 3 and 5.
Then we use the proposed τ-monotonic DLN models to compare training to minimize the expected pinball loss versus training to minimize the pinball loss for a specific τ; the results are in Tables 7 and 8. Bolded results in tables indicate that the metric is not statistically significantly different from the best metric among the models being compared, using an unpaired t-test.

## 5.1 Model Training Details

All hyperparameters were optimized on validation sets. We used Keras models in TensorFlow 2.2 for the unrestricted DNN comparisons that optimize (1) (Tagasovska & Lopez-Paz, 2019), and the SQF-DNN (Gasthaus et al., 2019), which optimizes the same objective in a different manner while also guaranteeing non-crossing quantile estimates. For DLNs, we used the TensorFlow Lattice library (TensorFlow Blog Post, 2020). For all DNN and DLN experiments, we use the Adam optimizer (Kingma & Ba, 2015) with its default learning rate of 0.001, except where noted.

For DNN models, we validated the number of hidden layers and the hidden dimension. For the SQF-DNN, we also validated the number of distribution keypoints. For the smaller DLN models, we used the common two-layer calibrated lattice architecture (Gupta et al., 2016) and validated over its number of calibration keypoints and lattice vertices. For the larger datasets, we used an ensemble of calibrated lattices for the DLN model, and then also validated over the number and dimensionality of the base models in the ensemble (Canini et al., 2016). For both DLNs and DNNs, we additionally validated over the number of training epochs.

## 5.2 Benchmark And Real Datasets Used

We tested on three publicly available datasets and one proprietary dataset for real-world problems. Code is available at github.com/google-research/google-research/tree/master/quantile_regression.

Air Quality: The Beijing Multi-Site Air-Quality dataset from UCI (Zhang et al., 2017) contains hourly air quality data from 12 monitoring regions around Beijing. We predict the quantiles of the PM2.5 concentration from D = 7 features: temperature, pressure, dew point, rain, wind speed, region, and wind direction. The DLN model is an ensemble of two-layer calibrated lattice models. We split the data by time (not IID), with earlier examples forming a training set of size 252,481, later examples a validation set of size 84,145, and the most recent examples a test set of size 84,145.

Puzzles: This dataset is from the Hoefnagel Puzzle Club, a private library that rents wooden jigsaw puzzles. We predict how long a library member will hold a borrowed puzzle given their five previous hold-times and a sixth categorical feature that indicates a user's past activity (one of {active users, high-variance users, new users}). The DLN model is an ensemble of two-layer calibrated lattice models. Based on their domain knowledge, we also constrain the DLN output to be monotonically increasing in the most-recent-past hold-time feature. The 984 train and 247 validation examples are IID from past data, while the 211 test samples are the most recent samples (so the test samples are not IID with the train-and-validation set). The anonymized dataset is publicly available at www.mayagupta.org/data/PuzzleClub_HoldTimes.csv.

Traffic: This is a proprietary dataset from Google for estimating the time to drive a route. The DLN is a two-layer calibrated lattice whose D = 4 inputs are 1 categorical and 3 continuous features.
We used 1,000 examples each for training, validation, and testing, with the training examples occurring earlier in time than the validation and test examples (not IID).

Wine: We used the Wine Reviews dataset from Kaggle (Bahri, 2018). We predict the quantiles of wine quality on a 100-point scale. We used D = 42 features, including price, a 1-d learned embedding for the country of origin (aka categorical calibration (Gupta et al., 2016)), and 40 Boolean features describing each wine. The DLN model is an ensemble of two-layer calibrated lattice models. The DLN output is constrained to be monotonically increasing in the *price* feature, as suggested in prior work (Gupta et al., 2018). The data was split IID with 84,641 examples for training, 12,091 for validation, and 24,184 for testing.

## 5.3 Model Architecture Experiments

We start by demonstrating the efficacy of using monotonic DLNs to predict the inverse CDF on simulations, and then on the real data.

## 5.3.1 Simulations

We used simulations from a recent quantile regression survey paper (Torossian et al., 2020) based on the sine, Griewank, Michalewicz, and Ackley functions with carefully designed noise distributions to represent a range of variances and skews across the respective input domains. We used 250 training examples for the 1-D sine and Michalewicz functions, 1,000 examples for the 2-D Griewank function, and 10,000 examples for the 9-D Ackley function.

Our metric is the average squared difference between the estimated and true quantile curves sampled at 99 uniform quantiles τ ∈ {0.01, . . . , 0.99}, averaged over 100 repeats, and averaged over values of x across the domain (we use fine grids of 1,000 and 10,000 x values for the 1-D and 2-D experiments, respectively, and a random sample of 10,000 x values for the 9-D experiment). We also report the fraction of test points for which at least two of their 99 quantiles crossed.

Results in Table 3 demonstrate that the proposed monotonic-in-τ DLNs are the best or statistically tied for the best across all four disparate simulations. The DNNs were trained with the same sampled expected pinball loss, and are sometimes close in performance, but suffer substantially from crossing quantiles. For example, on the sine-skew distribution, quantile-crossing was observed for the DNN model between at least two quantiles on 38% of test x values! Spline quantile functions (Gasthaus et al., 2019) avoid crossing by construction, but their accuracy was much worse than the DLN accuracy on three of the four datasets, and not better on the fourth. In particular, the DLNs do much better at the Griewank and Ackley simulations, which have the lowest signal-to-noise ratios (Torossian et al., 2020).

| Model | Sine-skew (1,7) MSE, Crossing | Griewank MSE, Crossing | Michalewicz MSE, Crossing | Ackley MSE, Crossing |
|---------|------------------|-----------------|--------------------|---------------|
| DNN | 3.53 ± 0.09, 38% | 1.29 ± 0.02, 5% | 0.311 ± 0.011, 12% | 237 ± 6, 0.3% |
| SQF-DNN | 5.68 ± 0.11, 0% | 1.05 ± 0.01, 0% | 0.232 ± 0.006, 0% | 667 ± 81, 0% |
| DLN | 3.51 ± 0.13, 0% | 0.55 ± 0.01, 0% | 0.219 ± 0.006, 0% | 206 ± 2, 0% |

Table 3: Simulations: Quantile MSE and percent crossing violations for τ ∈ {0.01, 0.02, . . . , 0.99}.

## 5.4 Training Time Comparison

We also provide timing results in Table 4 for the Ackley simulation, which as a complex 9-dimensional problem required non-trivial training times.
We compared timings of the different models with their validated hyperparameters. The validated DNN chose half the number of training batches as the DLN, and this caused it to be twice as fast. Each training batch took roughly the same amount of time though, and more granular hyperparameter tuning of the number of training batches could have led to different results. The SQF-DNN, meanwhile, is noticeably slower to train. However, while these implementations were comparable in their use of software and processors, all of these approaches could certainly be better optimized for speed. For example, compiler-optimized DLN models can execute ten times faster than an optimized C++ implementation, which was 100 times faster than a TensorFlow implementation (Zhang et al., 2021).

| Model | Train Time (s) | Training Steps | Time Per Batch (ms) |
|---------|------------------|------------------|-----------------------|
| DNN | 3446 | 500000 | 6.9 |
| SQF-DNN | 28042 | 100000 | 280 |
| DLN | 8444 | 1000000 | 8.4 |

Table 4: Simulations: Training time, in seconds, for the validated model of each type on the Ackley simulation from Table 3. We also report the number of training steps used by each model (i.e. the number of batches seen by the model) and the time taken per training step.

## 5.4.1 Real Data

Table 5 compares these models on the four real datasets. We report the pinball loss averaged over τ ∈ {0.01, 0.02, . . . , 0.99}, which is a standard metric for judging conditional quantile estimates, and one which approximates the *continuous ranked probability score* metric used for measuring the quality of probabilistic forecasts (Gasthaus et al., 2019; Yan et al., 2018). The τ-monotonic DLNs performed the best on average for all four problems. Unlike the simulation results in Table 3, the SQF-DNN (monotonic by construction) performs more similarly to the DNN on these real datasets.

We also report the *absolute deviation calibration error* (Chung et al., 2021) in Table 6 (some authors use a squared variant (Kuleshov et al., 2018)); the formula is given in A.8. By that metric, the DLN is better than or statistically tied with the SQF-DNN, with substantial wins over the SQF-DNN on Traffic and Wine.

## 5.5 Expected Pinball Loss Experiments

We next empirically show that training a full inverse CDF by minimizing the expected pinball loss is at least as good at estimating specific quantiles as training with the pinball loss just for those specific quantiles.

Table 5: Real data experiments: Pinball loss on the test set, averaged over τ ∈ {0.01, 0.02, . . . , 0.99}.

| Model | Air Quality | Puzzles | Traffic | Wine |
|---------|----------------|---------------|------------------|-----------------|
| DNN | 18.041 ± 0.088 | 3.229 ± 0.017 | 0.0494 ± 0.00037 | 0.6603 ± 0.0088 |
| SQF-DNN | 17.356 ± 0.106 | 3.634 ± 0.142 | 0.0491 ± 0.00022 | 0.6552 ± 0.0037 |
| DLN | 17.280 ± 0.157 | 3.188 ± 0.041 | 0.0479 ± 0.00003 | 0.6381 ± 0.0005 |

| Model | Air Quality | Puzzles | Traffic | Wine |
|---------|-----------------|-----------------|-----------------|-----------------|
| DNN | 0.0761 ± 0.0038 | 0.0821 ± 0.0032 | 0.0562 ± 0.0014 | 0.0253 ± 0.0010 |
| SQF-DNN | 0.0798 ± 0.0042 | 0.0614 ± 0.0038 | 0.0600 ± 0.0062 | 0.0671 ± 0.0083 |
| DLN | 0.0588 ± 0.0057 | 0.0739 ± 0.0078 | 0.0210 ± 0.0027 | 0.0125 ± 0.0031 |

Table 6: Real data experiments: Calibration on the test set, averaged over τ ∈ {0.01, 0.02, . . . , 0.99}.

## 5.5.1 Simulations

We ran simulations on the 1D sine-skew function as per Torossian et al. (2020),
with three noise scenarios illustrated in Figure 6: noise parameters (a, b) = (1, 1) for symmetric low-noise, (7, 7) for symmetric high-noise, and (1, 7) for asymmetric high-noise. These simulations are the simplest extension of the unconditioned context studied empirically and theoretically in Section 3 to the conditioned case, in that there is a single x feature, and enough data for a flexible quantile regression model to approximate the corresponding sample quantile of y within a small neighborhood of any value of x.

![15_image_0.png](15_image_0.png)

Figure 6: The colored lines show the true quantiles for τ = 0.1, 0.4, 0.5, 0.6, 0.9 for the sine-skew distribution simulations (Torossian et al., 2020).

The results shown in Table 7 are the error in predicting the median given either N = 100 or N = 1,000 training samples. Overall, the single pinball loss model struggles relative to the expected pinball loss models, even as we increase the number of data points from 100 to 1,000, at which point there are about 50 data points within each 0.1 region of x ∈ [−1, 1]. The uniform pinball loss model, which trains equally across all quantiles, performs quite well in both symmetric-noise cases (1, 1) and (7, 7). However, for the simulation with highly skewed noise (1, 7) it struggles a bit relative to the models trained with a Beta pinball loss, perhaps because the extreme quantiles start becoming very uninformative about the median. Still, even in that case, it performs similarly to the single pinball loss models.

Table 7: Sine-skew experiment with different sine-skew noise choices (1,1), (7,7) and (1,7). We present results for DLNs trained with four different distributions over τ: the uniform distribution, the Beta distribution with a mode of 0.5 and concentration of 10, the Beta distribution with a mode of 0.5 and concentration of 100, and just the 0.5 pinball loss. The metric is the MSE between the true median and estimated median, averaged over x ∼ Unif(−1, 1).

| (a, b) | N | min $E_{\mathcal{T}}$, Unif | min $E_{\mathcal{T}}$, Beta(10) | min $E_{\mathcal{T}}$, Beta(100) | min single-τ loss |
|----------|-------|-----------------|---------------------|------------------|---------------------|
| (1, 1) | 100 | 0.421 ± 0.025 | 0.348 ± 0.015 | 0.301 ± 0.016 | 0.588 ± 0.036 |
| (1, 1) | 1,000 | 0.050 ± 0.003 | 0.057 ± 0.002 | 0.056 ± 0.002 | 0.067 ± 0.003 |
| (7, 7) | 100 | 7.334 ± 0.367 | 6.654 ± 0.330 | 7.402 ± 0.397 | 9.860 ± 0.560 |
| (7, 7) | 1,000 | 0.967 ± 0.058 | 0.861 ± 0.036 | 1.057 ± 0.037 | 1.470 ± 0.086 |
| (1, 7) | 100 | 4.458 ± 0.259 | 3.515 ± 0.178 | 3.031 ± 0.179 | 4.416 ± 0.386 |
| (1, 7) | 1,000 | 0.451 ± 0.037 | 0.336 ± 0.022 | 0.338 ± 0.020 | 0.542 ± 0.067 |

## 5.5.2 Real Data

In Table 8 we give the accuracy at predicting three specific target quantiles on the four real datasets by either training a single model estimating the full inverse CDF by minimizing the expected pinball loss with T drawn uniformly from (0, 1); or training a single model to minimize the discrete sum of the three pinball losses for the τs considered; or training three separate models that minimize the expected pinball loss with T concentrated around the target quantiles; or training three separate models that minimize each specific τ's pinball loss. The test metric is computed on finite test sets, unlike the simulations where we could compare to the true quantiles.
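As a short sketch of these four strategies (an illustration; the target quantile, concentration, and discrete set below are arbitrary example values), here is how the quantile level τ can be drawn for each training example; in every case the model then takes a subgradient step on the pinball loss at the drawn τ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tau(strategy, target=0.9, concentration=10.0,
               discrete=(0.5, 0.9, 0.99)):
    if strategy == "uniform":            # tau ~ Unif(0, 1): full inverse CDF
        return rng.uniform()
    if strategy == "discrete":           # sum of pinball losses over a fixed set
        return rng.choice(discrete)
    if strategy == "beta":               # mode/concentration parameterization
        a = target * (concentration - 2) + 1
        b = (1 - target) * (concentration - 2) + 1
        return rng.beta(a, b)
    return target                        # "single": one fixed pinball loss

# Inside a training loop, each example (x_i, y_i) is paired with a fresh tau,
# and the model takes a subgradient step on L_tau(y_i, f(x_i, tau; theta)).
print([round(sample_tau("beta"), 3) for _ in range(5)])
```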
When measuring the accuracy of the predicted median, via the τ = 0.5 pinball loss, there is no statistically significant difference between the model trained for the single quantile individually and the model trained to estimate the full inverse CDF using a distribution of τ ∼ Unif(0, 1) for the loss. In other words, the estimate of the median does not suffer from trying to estimate the full inverse CDF.

At the more extreme quantiles, the single-τ estimator sometimes outperforms the full inverse CDF estimator, e.g. the 99th percentile for Air Quality, and the 10th and 90th percentiles for Wine. This is possibly due to the fact that the smoothing from the expected pinball loss is more one-sided at extreme quantiles. Additionally, for Wine, we hypothesize this is because it is a fairly easy regression problem with a large number of training samples and one continuous feature (wine price) that correlates highly with the label (wine quality), so the extra regularization is not helping. That said, the single-τ estimator is still not always the best even on extreme quantiles; for the large Traffic dataset, the 99th percentile is statistically tied, while on Puzzles the single-τ estimator for the 90th percentile is worse.

In all cases, smoothing from τ ∼ Beta is still beneficial: training with τ sampled from a Beta distribution is statistically similar to or better than training with a single pinball loss on all datasets. This tracks with the simulated results and remains true even on extreme quantiles.

## 6 Conclusions And Some Open Questions

We gave theoretical and empirical evidence that training quantile regression models by minimizing the expected pinball loss can perform as well or better on any individual quantile than models trained for that quantile alone. We showed that minimizing the expected pinball loss is not sufficient to provide legitimate inverse CDF estimates; one must also produce a model whose quantiles are guaranteed to not cross. We showed that DLN models can be effectively constrained to be monotonic in τ and produce state-of-the-art quantile estimates. The non-crossing issue is important not just for theoretical reasons, but because it is confusing to many non-experts if a model's quantile estimates cross, eroding trust in that model, and by association, all AI.

We provide a new asymptotic convergence analysis for estimation of the full inverse CDF, assuming an oracle that minimizes the empirical loss. We leave open the question of non-asymptotic variances, and whether some of our assumptions can be relaxed, or whether lower variances can be had with some more stringent assumptions. Theorem 1 is for unconditioned quantile estimation, with a simple extension to separable conditional quantile estimation in Corollary 1.

Table 8: Accuracy of single quantile estimates, comparing one model trained to minimize the expected pinball loss with respect to uniform $P_{\mathcal{T}}$; one model trained to minimize the sum of the three pinball losses for the three quantiles shown (Discrete); three separate models trained with individual Beta distributions $P_{\mathcal{T}}$ with the mode set at the target quantile and the Beta concentration chosen on the validation set; and three separate models trained with individual τ pinball losses (Single). All models were DLNs with the same architecture as in Table 5, and monotonic on τ and any input features noted in the text.
Air Quality:

| Model | Pinball loss (τ = 0.5) | Pinball loss (τ = 0.9) | Pinball loss (τ = 0.99) |
|----------------|--------------------------|--------------------------|---------------------------|
| τ ∼ Unif(0, 1) | 23.576 ± 0.047 | 14.850 ± 0.075 | 2.947 ± 0.025 |
| τ ∼ Discrete | 24.083 ± 0.073 | 15.439 ± 0.062 | 3.042 ± 0.012 |
| τ ∼ Beta | 23.586 ± 0.183 | 13.942 ± 0.054 | 2.709 ± 0.029 |
| Single τ ∈ T | 23.634 ± 0.068 | 14.908 ± 0.092 | 2.700 ± 0.013 |

Traffic:

| Model | Pinball loss (τ = 0.5) | Pinball loss (τ = 0.9) | Pinball loss (τ = 0.99) |
|----------------|--------------------------|--------------------------|---------------------------|
| τ ∼ Unif(0, 1) | 0.064386 ± 0.00014 | 0.040159 ± 0.00018 | 0.010645 ± 0.00018 |
| τ ∼ Discrete | 0.064578 ± 0.00028 | 0.039804 ± 0.00014 | 0.011290 ± 0.00014 |
| τ ∼ Beta | 0.064988 ± 0.00030 | 0.039549 ± 0.00009 | 0.01064 ± 0.00011 |
| Single τ ∈ T | 0.064791 ± 0.00012 | 0.039900 ± 0.00010 | 0.01070 ± 0.00018 |

Wine:

| Model | Pinball loss (τ = 0.1) | Pinball loss (τ = 0.5) | Pinball loss (τ = 0.9) |
|----------------|--------------------------|--------------------------|--------------------------|
| τ ∼ Unif(0, 1) | 0.4099 ± 0.0006 | 0.8889 ± 0.0017 | 0.3773 ± 0.0019 |
| τ ∼ Discrete | 0.4094 ± 0.0005 | 0.8891 ± 0.0010 | 0.3706 ± 0.0003 |
| τ ∼ Beta | 0.4047 ± 0.0005 | 0.8871 ± 0.0008 | 0.3665 ± 0.0004 |
| Single τ ∈ T | 0.4049 ± 0.0004 | 0.8867 ± 0.0006 | 0.3663 ± 0.0003 |

Puzzles:

| Model | Pinball loss (τ = 0.5) | Pinball loss (τ = 0.7) | Pinball loss (τ = 0.9) |
|----------------|--------------------------|--------------------------|--------------------------|
| τ ∼ Unif(0, 1) | 4.437 ± 0.148 | 4.427 ± 0.202 | 2.696 ± 0.091 |
| τ ∼ Discrete | 4.265 ± 0.052 | 4.192 ± 0.044 | 2.686 ± 0.035 |
| τ ∼ Beta | 4.188 ± 0.049 | 4.259 ± 0.054 | 2.680 ± 0.020 |
| Single τ ∈ T | 4.258 ± 0.056 | 4.255 ± 0.057 | 2.793 ± 0.049 |

Important open work would be providing stronger results for conditional quantile estimation where there is smoothing across the features.

A key question is the choice of distribution of T when minimizing the expected pinball loss. We note one can control the regularization across quantiles both by the shape of $P_{\mathcal{T}}$ and by the model flexibility with respect to τ. Thus we showed a uniform distribution $P_{\mathcal{T}}$ generally provides good results if one is validating the model flexibility over τ, and a uniform distribution has no hyperparameters and gives you a good estimate of the entire inverse CDF for free. However, if one is only interested in a single τ, we also showed that one may do better with a Beta distribution for $P_{\mathcal{T}}$, and that a concentration of 10 likely makes a good default choice, but for best results one would want to validate the concentration.

One open question is whether the proposed methodology improves the accuracy of prediction intervals (Pearce et al., 2018), as measured by traditional metrics such as interval width and calibration. Tagasovska & Lopez-Paz (2019) have shown that minimizing the expected pinball loss can be a useful strategy in that regard, and our results in this paper about improved quantile accuracy and calibration suggest there may be improved performance in prediction intervals as well.

Our experiments, like most published quantile regression experiments, were limited to datasets with D = 42 features or fewer. This is partly because one needs a large amount of data to make decent estimates of tail quantiles. In theory the proposed methodology can be applied to any size problem by only bringing τ in for the last layers as shown in Fig. 5,
but an open question is how well this or any quantile regression works in practice with a large number of features, and whether there are novel scaling issues.

In particular, we hypothesize that for larger models there may be an increase in *quantile collapse* (Lopez-Martin et al., 2021). Quantile collapse is the edge case of non-crossing, defined as a flat f(τ) over some range of τ. For example, the airline tells passengers there is an 80% chance the flight will arrive in 4 hours and 6 minutes, and that there is a 90% chance the flight will arrive in 4 hours and 6 minutes. Our monotonicity constraint (2) allows quantile collapse because it only requires monotonicity in τ, not *strict* monotonicity. Quantile collapse can happen in regions of sparse data where there are no training samples to fit, so the model only has to satisfy (2), which it can do with a flat f(x, τ). We believe this can partly be solved by initializing with an f(x, τ) that is strictly monotonically increasing (as we do in our experiments), as then one only gets quantile collapse if there is enough relevant training data to pull the model to the edge of the feasibility of (2).

Quantile collapse is also a risk if the model f(x, τ) has too much flexibility in x such that the data is sparse with respect to that flexibility. For example, assume x is a categorical feature with 1 million possible values (e.g. a language model), and that f(x, τ) is flexible enough to learn an independent estimate for each value of x. Suppose for one of those categorical values there is only one training sample $(x_{i},y_{i})$. Then since the pinball loss (expected or standard) will be minimized when $f(x,\tau)=y_{i}$, there is no τ dependence, and for that category you will have quantile collapse. More generally, if you have a very flexible model and sparse data in some regions, we expect you may get quantile collapse if no other regularization to avoid it is used.

## Broader Impact Statement

We hope the broader impact of this work will be positive in increasing the deserved trustworthiness of AI by promoting quantile regression models with no crossing quantiles. The alternative of quantile estimates that cross will undoubtedly erode public confidence in AI. For example, prior methods could launch a quantile regression model that predicts there is a 90 percent chance your pizza will be delivered in 30 minutes, and that there is an 80 percent chance it will be delivered in 34 minutes. Such models are unnecessarily confusing and untrustworthy for non-experts, and using them in practice may then reduce the ability of worthy AI to have a positive impact on the world.

## References

D. Bahri. Wine reviews. *Kaggle*, 2018. URL https://www.kaggle.com/dbahri/wine-ratings.

P. K. Bhattacharya and A. K. Gangopadhyay. Kernel and nearest-neighbor estimation of a conditional quantile. *The Annals of Statistics*, 1990.

H. D. Bondell, B. J. Reich, and H. Wang. Non-crossing quantile regression curve estimation. *Biometrika*, 2010.

M. Campbell-Kelly, M. Croaken, and E. Robson. *The History of Mathematical Tables: From Sumer to Spreadsheets*. Oxford University Press, Oxford, England, 2003.

K. Canini, A. Cotter, M. M. Fard, M. R. Gupta, and J. Pfeifer. Fast and flexible monotonic functions with ensembles of lattices. *NeurIPS*, 2016.

A. J. Cannon. Non-crossing nonlinear regression quantiles. *Stochastic Environmental Research and Risk Assessment*, 32:3207–3225, 2018.

V. Chernozhukov. Extremal quantile regression. *Annals of Statistics*, 33(2), 2005.
P. Chilinski and R. Silva. Neural likelihoods via cumulative distribution functions. *arXiv*, 2018.

Y. Chung, W. Neiswanger, I. Char, and J. Schneider. Beyond pinball loss: Quantile methods for calibrated uncertainty quantification. *NeurIPS*, 2021.

A. Cotter, M. R. Gupta, H. Jiang, E. Louidor, J. Muller, T. Narayan, S. Wang, and T. Zhu. Shape constraints for set functions. *ICML*, 2019.

H. Daniels and M. Velikova. Monotone and partially monotone neural networks. *IEEE Trans. Neural Networks*, 21(6):906–917, 2010.

C. E. David and S. M. Steinberg. Quantile estimation. In *Encyclopedia of Statistical Sciences*, volume 7. Wiley, New York, 1986.

M. Falk. Relative deficiency of kernel type estimators of quantiles. *Annals of Statistics*, 12, 1984.

E. K. Garcia and M. R. Gupta. Lattice regression. *NeurIPS*, 2009.

E. K. Garcia, R. Arora, and M. R. Gupta. Optimized regression for efficient function evaluation. *IEEE Trans. Image Processing*, 21(9):4128–4140, September 2012.

J. Gasthaus, K. Benidis, Y. Wang, S. Rangapuram, D. Salinas, V. Flunkert, and T. Januschowski. Probabilistic forecasting with spline quantile function RNNs. *AIStats*, 2019.

M. R. Gupta, A. Cotter, J. Pfeifer, K. Voevodski, K. Canini, A. Mangylov, W. Moczydlowski, and A. Van Esbroeck. Monotonic calibrated interpolated look-up tables. *Journal of Machine Learning Research (JMLR)*, 17(109):1–47, 2016. URL http://jmlr.org/papers/v17/15-243.html.

M. R. Gupta, D. Bahri, A. Cotter, and K. Canini. Diminishing returns shape constraints for interpretability and regularization. *NeurIPS*, 2018.

M. R. Gupta, E. Louidor, O. Mangylov, N. Morioka, T. Narayan, and S. Zhao. Multidimensional shape constraints. *ICML*, 2020.

F. E. Harrell and C. E. Davis. A new distribution-free quantile estimator. *Biometrika*, 1982.

F. Hayashi. *Econometrics*. Princeton University Press, 2011.

X. He. Quantile curves without crossing. *The American Statistician*, 51(2):186–192, 1997.

W. D. Kaigh and P. A. Lachenbruch. A generalized quantile estimator. *Communications in Statistics - Theory and Methods*, 1982.

D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. *International Conference on Learning Representations*, 2015.

R. Koenker. *Quantile Regression*. Cambridge University Press, 2005.

R. Koenker and G. Bassett. Regression quantiles. *Econometrica*, 1978.

V. Kuleshov, N. Fenner, and S. Ermon. Accurate uncertainties for deep learning using calibrated regression. *ICML*, 2018.

B. Lang. Monotonic multi-layer perceptron networks as universal approximators. *Artificial Neural Networks: Formal Models and Their Applications - ICANN*, 2005.

M. Lopez-Martin, A. Sanchez-Esguevillas, L. Hernandez-Callejo, J. I. Arribas, and B. Carro. Additive ensemble neural network with constrained weighted quantile loss for probabilistic electric-load forecasting. *Sensors*, 2021.

N. Meinshausen. Quantile regression forests. *Journal of Machine Learning Research (JMLR)*, 2006.

A. Minin, M. Velikova, B. Lang, and H. Daniels. Comparison of universal approximators incorporating partial monotonicity by structure. *Neural Networks*, 23(4):471–475, 2010.

T. Pearce, A. Brintrup, M. Zaki, and A. Neely. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. *ICML*, 2018.

J. L. Powell. Notes on median and quantile regression, 2010.

R.-D. Reiss. Estimation of quantiles in certain nonparametric models. *Annals of Statistics*, 8, 1980.

J. Sill. Monotonic networks. *NeurIPS*, 1998.

J. Sill and Y. S. Abu-Mostafa. Monotonicity hints. *NeurIPS*, 1997.
N. Tagasovska and D. Lopez-Paz. Single-model uncertainties for deep learning. *NeurIPS*, 2019.

I. Takeuchi, Q. V. Le, T. D. Sears, and A. J. Smola. Nonparametric quantile estimation. *Journal of Machine Learning Research (JMLR)*, 2006.

TensorFlow Blog Post. TensorFlow Lattice: flexible, controlled, and interpretable ML, 2020. URL https://blog.tensorflow.org/2020/02/tensorflow-lattice-flexible-controlled-and-interpretable-ML.html.

L. Torossian, V. Picheny, R. Faivre, and A. Garivier. A review on quantile regression for stochastic computer experiments. *Reliability Engineering & System Safety*, 2020.

A. W. van der Vaart. *Asymptotic Statistics*. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.

S. Wang and M. R. Gupta. Deontological ethics by monotonicity shape constraints. *AIStats*, 2020.

X. Yan, W. Zhang, L. Ma, W. Liu, and Q. Wu. Parsimonious quantile regression of financial asset tail dynamics via sequential learning. *NeurIPS*, 2018.

S. You, K. Canini, D. Ding, J. Pfeifer, and M. R. Gupta. Deep lattice networks and partial monotonic functions. *NeurIPS*, 2017.

N. Zhang, K. Canini, S. Silva, and M. R. Gupta. Fast linear interpolation. *ACM Journal on Emerging Technologies in Computing Systems*, 2021.

S. Zhang, B. Guo, A. Dong, J. He, Z. Xu, and S.X. Chen. Cautionary tales on air-quality improvement in Beijing. *Proceedings of the Royal Society A*, 473, 2017.

## A Appendix

## A.1 Proof Of Lemma 2

Lemma 2. Let f(x, τ) be a calibrated lattice model with piecewise-linear calibrator c(τ) : [0, 1] → [0, 1] for τ, let there be two knots in the regular grid in the direction of τ, and let the lattice parameters be interpolated with standard multilinear interpolation. Then f(x, τ) represents an inverse CDF function $F^{-1}(y|x)$ where the estimated distribution for every x is from the same location-scale family as the calibrator c(τ).

Proof. If a random variable Y conditioned on X belongs to the location-scale family, then for τ ∈ (0, 1) and a ∈ $\mathbb{R}$ and b > 0, it must hold that the conditional inverse CDF satisfies $F^{-1}_{Y|X=z}(\tau)=a+bF^{-1}_{Y|X=x}(\tau)$. Note that for τ ∈ (0, 1), interpolating a lattice with two lattice vertices in the τ dimension yields the estimate $\hat{F}^{-1}_{Y|X=z}(\tau)=f(z,\tau)=f(z,0)+c(\tau)(f(z,1)-f(z,0))$. Thus, mapping to the location-scale property, a = f(z, 0), b = f(z, 1) − f(z, 0), and $\hat{F}^{-1}_{Y|X=x}(\tau)=c(\tau)$. Thus every estimated conditional inverse CDF $\hat{F}^{-1}_{Y|X=z}(\tau)$ is a translation and scaling of the piecewise-linear function c(τ).

## A.2 Proof Of Theorem 1

Theorem 1. Suppose f(τ; θ) is well specified with a unique θ* ∈ Θ ⊆ $\mathbb{R}^{m}$ such that $f(\tau;\theta^{*})=F^{-1}(\tau)$ for all τ ∈ (0, 1), and let T be distributed such that θ* is also a unique minimizer for the risk $R(\theta)=E_{Y,\mathcal{T}}[L_{\mathcal{T}}(Y,f(\mathcal{T};\theta))]$. Suppose the estimator $\hat{\theta}^{(n)}$ is weakly consistent, $\hat{\theta}^{(n)}\stackrel{P}{\rightarrow}\theta^{*}$. Suppose further that the function $\theta\mapsto f(\tau;\theta)$ is continuous, locally Lipschitz, and twice differentiable at θ = θ* for all τ ∈ (0, 1). Then,

$$\sqrt{n}(f(\tau;\hat{\theta}^{(n)})-f(\tau;\theta^{*}))\stackrel{d}{\rightarrow}\mathcal{N}\left(0,\nabla_{\theta}f(\tau;\theta^{*})^{\top}Q^{-1}V(Q^{-1})^{\top}\nabla_{\theta}f(\tau;\theta^{*})\right)$$

where $Q=E_{\mathcal{T}}[p(f(\mathcal{T};\theta^{*}))\Gamma(\mathcal{T})]$ and $V=E_{\mathcal{T}}[\mathcal{T}(1-\mathcal{T})\Gamma(\mathcal{T})]$, with $\Gamma(\tau)=\nabla_{\theta}f(\tau;\theta^{*})\nabla_{\theta}f(\tau;\theta^{*})^{\top}$.

Proof. Consider the loss function $\bar{L}(y,\theta)=E_{\mathcal{T}}[L_{\mathcal{T}}(y,f(\mathcal{T};\theta))]$. Let $\hat{\theta}^{(n)}$ minimize the risk with an empirical distribution over Y with IID samples $\{y_{i}\}_{i=1}^{n}$, $\hat{\theta}^{(n)}=\arg\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\bar{L}(y_{i},\theta)$.
We assume that Y is a continuous random variable with density p(y), and T is independent of Y. We apply Theorem 5.23 from van der Vaart (1998), which says that under some regularity conditions,

$$\sqrt{n}(\hat{\theta}^{(n)}-\theta^{*})=-\nabla^{2}R(\theta^{*})^{-1}\cdot\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\nabla_{\theta}\bar{L}(y_{i},\theta^{*})+o_{P}(1),$$

where $o_{P}(1)$ refers to a sequence of random variables that converges in probability to 0: $X_{n}=o_{P}(1)$ if and only if $X_{n}\stackrel{P}{\rightarrow}0$. Specifically, Theorem 5.23 from van der Vaart (1998) requires the following conditions:

(i) $\hat{\theta}^{(n)}$ is weakly consistent for θ*: $\hat{\theta}^{(n)}\stackrel{P}{\rightarrow}\theta^{*}$. This is assumed, but in general this consistency depends on properties of the function f and the feasible parameter space Θ. See Lemma 1 for an example.

(ii) $\frac{1}{n}\sum_{i=1}^{n}\bar{L}(y_{i},\hat{\theta}^{(n)})\leq\inf_{\theta}\frac{1}{n}\sum_{i=1}^{n}\bar{L}(y_{i},\theta)+o_{P}(1/n)$: true by optimality of $\hat{\theta}^{(n)}$.

(iii) The loss $\bar{L}(y,\theta)$ is locally Lipschitz near θ = θ*. This holds as long as f(τ; θ) is locally Lipschitz near θ = θ*.

(iv) For P-almost all y, the function $\theta\mapsto\bar{L}(y,\theta)$ is differentiable at θ = θ*.

(v) R(θ) is twice differentiable at θ* with positive definite Hessian $\nabla^{2}R(\theta^{*})\succ0$.

To verify condition (iv), note that $\nabla_{\theta}\bar{L}(y,\theta^{*})=\nabla_{\theta}E_{\mathcal{T}}[L_{\mathcal{T}}(y,f(\mathcal{T};\theta^{*}))]=E_{\mathcal{T}}[\nabla_{\theta}L_{\mathcal{T}}(y,f(\mathcal{T};\theta^{*}))]$. Since Y is a continuous random variable with a density near f(τ; θ*), $\nabla_{\theta}L_{\tau}(y,f(\tau;\theta^{*}))$ exists for P-almost every y as long as f(τ; θ) is also differentiable at θ = θ*. This is because for continuous Y, the τ-quantile is unique, and the derivative $\nabla_{\gamma}L_{\tau}(y,\gamma)$ exists with probability 1. The full gradient is:

$$\nabla_{\theta}L_{\tau}(y,f(\tau;\theta^{*}))=\left((1-\tau)\mathbb{1}(f(\tau;\theta^{*})\geq y)-\tau\mathbb{1}\left(f(\tau;\theta^{*})\leq y\right)\right)\nabla_{\theta}f(\tau;\theta^{*}).$$

Therefore, $\nabla_{\theta}\bar{L}(y,\theta^{*})=E_{\mathcal{T}}[\nabla_{\theta}L_{\mathcal{T}}(y,f(\mathcal{T};\theta^{*}))]$ is also well defined and exists for P-almost every y.

To verify condition (v), we explicitly compute the Hessian:

$$\begin{aligned}\nabla R(\theta^{*})&=E_{Y}[\nabla_{\theta}\bar{L}(Y,\theta^{*})]=E_{Y}[E_{\mathcal{T}}[\nabla_{\theta}L_{\mathcal{T}}(Y,f(\mathcal{T};\theta^{*}))]]\\ &=E_{\mathcal{T}}[E_{Y}[\nabla_{\theta}L_{\mathcal{T}}(Y,f(\mathcal{T};\theta^{*}))]]\qquad\text{since }\mathcal{T}\perp\!\!\!\perp Y\\ &=E_{\mathcal{T}}[\left((1-\mathcal{T})P(f(\mathcal{T};\theta^{*})\geq Y)-\mathcal{T}P(f(\mathcal{T};\theta^{*})\leq Y)\right)\nabla_{\theta}f(\mathcal{T};\theta^{*})]\\ &=E_{\mathcal{T}}[(P(Y\leq f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})]\\ &=E_{\mathcal{T}}[(F(f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})].\end{aligned}$$

$$\begin{aligned}\nabla^{2}R(\theta^{*})&=E_{\mathcal{T}}[\nabla_{\theta}\left\{(F(f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})\right\}]\\ &=E_{\mathcal{T}}[\nabla_{\theta}f(\mathcal{T};\theta^{*})F^{\prime}(f(\mathcal{T};\theta^{*}))\nabla_{\theta}f(\mathcal{T};\theta^{*})^{\top}+(F(f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}^{2}f(\mathcal{T};\theta^{*})]\\ &=E_{\mathcal{T}}[p(f(\mathcal{T};\theta^{*}))\nabla_{\theta}f(\mathcal{T};\theta^{*})\nabla_{\theta}f(\mathcal{T};\theta^{*})^{\top}]\\ &=Q\succ0.\end{aligned}$$

The third equality in the computation of the Hessian above follows by the assumption that f is well specified, since $f(\tau;\theta^{*})=F^{-1}(\tau)$, so $F(f(\tau;\theta^{*}))-\tau=0$.
With all conditions verified, applying Theorem 5.23 from van der Vaart (1998) we have

$$\sqrt{n}(\hat{\theta}^{(n)}-\theta^{*})=-Q^{-1}\cdot\frac{1}{\sqrt{n}}\sum_{i=1}^{n}E_{\mathcal{T}}[\nabla_{\theta}L_{\mathcal{T}}(y_{i},f(\mathcal{T};\theta^{*}))]+o_{P}(1).\tag{5}$$

Focusing on the sum term,

$$\begin{aligned}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}E_{\mathcal{T}}[\nabla_{\theta}L_{\mathcal{T}}(y_{i},f(\mathcal{T};\theta^{*}))]&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}E_{\mathcal{T}}\left[\left((1-\mathcal{T})\mathbb{1}(f(\mathcal{T};\theta^{*})\geq y_{i})-\mathcal{T}\mathbb{1}(f(\mathcal{T};\theta^{*})\leq y_{i})\right)\nabla_{\theta}f(\mathcal{T};\theta^{*})\right]\\ &=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}E_{\mathcal{T}}\left[(\mathbb{1}(y_{i}\leq f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})\right].\end{aligned}$$

Since the $\mathbb{1}(y_{i}\leq f(\mathcal{T};\theta^{*}))$ are IID Bernoulli(T) samples given T, and T is independent of Y, the random variable $E_{\mathcal{T}}[(\mathbb{1}(Y\leq f(\mathcal{T};\theta^{*}))-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})]$ has mean 0 and variance

$$V=E_{\mathcal{T}}[\mathcal{T}(1-\mathcal{T})\nabla_{\theta}f(\mathcal{T};\theta^{*})\nabla_{\theta}f(\mathcal{T};\theta^{*})^{\top}].$$

By the central limit theorem, the sum converges in distribution to N(0, V). Combining this with Equation (5) yields

$$\sqrt{n}(\hat{\theta}^{(n)}-\theta^{*})\stackrel{d}{\rightarrow}\mathcal{N}(0,Q^{-1}V(Q^{-1})^{\top}).$$

Applying the delta method with f(τ; θ) as a function of θ for a fixed desired quantile τ, we have the final result,

$$\sqrt{n}(f(\tau;\hat{\theta}^{(n)})-f(\tau;\theta^{*}))\stackrel{d}{\rightarrow}\mathcal{N}\left(0,\nabla_{\theta}f(\tau;\theta^{*})^{\top}Q^{-1}V(Q^{-1})^{\top}\nabla_{\theta}f(\tau;\theta^{*})\right).$$

## A.3 Sufficient Conditions For Consistency

Lemma 1. If f(τ; θ) is convex in θ for all τ ∈ (0, 1), Θ is convex, and θ* is in the interior of Θ, then $\hat{\theta}^{(n)}$ is weakly consistent, $\hat{\theta}^{(n)}\stackrel{P}{\rightarrow}\theta^{*}$.

Proof. The result follows from Proposition 7.4 from Hayashi (2011), provided we show that $\bar{L}(y,\theta)$ is also convex.

$$\bar{L}(y,\theta)=E_{\mathcal{T}}[L_{\mathcal{T}}(y,f(\mathcal{T};\theta))]=E_{\mathcal{T}}[\max(\mathcal{T}(y-f(\mathcal{T};\theta)),(\mathcal{T}-1)(y-f(\mathcal{T};\theta)))]$$

For any fixed scalar value τ, the term τ(y − f(τ; θ)) is convex in θ since it is a linear mapping of f(τ; θ). The same applies to the term (τ − 1)(y − f(τ; θ)). Therefore, max(τ(y − f(τ; θ)), (τ − 1)(y − f(τ; θ))), as the pointwise maximum of two convex functions, is also convex in θ. Finally, the linearity of the $E_{\mathcal{T}}[\cdot]$ operator gives that $\bar{L}(y,\theta)$ is convex in θ.

## A.4 A Note On Uniqueness Of θ*

Theorem 1 and Lemma 1 assume that f(τ; θ) is well specified with a unique θ* ∈ Θ ⊆ $\mathbb{R}^{m}$ such that $f(\tau;\theta^{*})=F^{-1}(\tau)$ for all τ ∈ (0, 1), and that θ* is also the unique minimizer for the risk $R(\theta)=E_{Y,\mathcal{T}}[L_{\mathcal{T}}(Y,f(\mathcal{T};\theta))]$. The property that $f(\tau;\theta^{*})=F^{-1}(\tau)$ directly implies that θ* is a minimizer of R(θ); however, whether θ* is the *unique* minimizer of R(θ) depends on the distribution of T and the function class defined by f and Θ.

Lemma 3. Suppose f(τ; θ) is well specified with a unique θ* ∈ Θ ⊆ $\mathbb{R}^{m}$ such that $f(\tau;\theta^{*})=F^{-1}(\tau)$ for all τ ∈ (0, 1). For θ ∈ Θ, define the set $T_{\theta}:=\{\tau:f(\tau;\theta)\neq f(\tau;\theta^{*})\}$. If for all θ ∈ Θ where θ ≠ θ*, the set $T_{\theta}$ has measure greater than 0 under random variable T, then θ* is also the unique minimizer for the risk $R(\theta)=E_{Y,\mathcal{T}}[L_{\mathcal{T}}(Y,f(\mathcal{T};\theta))]$.

Proof. Suppose there exists θ′ where θ′ ≠ θ* but θ′ also minimizes the risk R(θ).
Since θ′ ≠ θ*, the set $T_{\theta'}$ is nonempty, and for all $\tau \in T_{\theta'}$,

$$f(\tau;\theta^{\prime})\neq F^{-1}(\tau)\implies\theta^{\prime}\neq\operatorname*{arg\,min}_{\theta}E_{Y}[L_{\tau}(Y,f(\tau;\theta))]\implies E_{Y}[L_{\tau}(Y,f(\tau;\theta^{\prime}))]>E_{Y}[L_{\tau}(Y,f(\tau;\theta^{*}))].$$

Then since $T_{\theta'}$ has measure greater than 0 under the random variable T, this implies

$$E_{T}[E_{Y}[L_{T}(Y,f(T;\theta^{\prime}))]]>E_{T}[E_{Y}[L_{T}(Y,f(T;\theta^{*}))]],$$

which contradicts that θ′ minimizes the risk R(θ). $\square$

The conditions for uniqueness can be satisfied by a combination of wide enough support of T and a limited enough function class given by f, Θ. For example, if T is supported on the full interval (0, 1), and $f(\tau;\theta) = \sum_{j=1}^{m}\theta_j\tau^j$ is a polynomial function over $\Theta \subseteq \mathbb{R}^m$, then any pair θ ≠ θ′ would yield a set $\{\tau : f(\tau;\theta) \neq f(\tau;\theta')\}$ with measure greater than 0.

Uniqueness breaks down if the function class is too expressive. For example, if $f(\tau;\theta)$ is infinitely expressive over Θ (e.g. $\Theta = \mathbb{R} \times (0,1)$, and $f(\tau;\theta) = \theta_0\,\mathbb{1}(\theta_1 = \tau)$), then it is possible to find a pair θ ≠ θ′ that yields a set $\{\tau : f(\tau;\theta) \neq f(\tau;\theta')\}$ that contains only a single value of τ with measure 0.

Uniqueness can also break down if the support of T is too limited. For example, if T is only supported on a single value τ₀, and $f(\tau;\theta) = \sum_{j=1}^{m}\theta_j\tau^j$ is a polynomial function, then it is possible to find a pair θ ≠ θ′ where $f(\tau_0;\theta) = f(\tau_0;\theta')$, and thus $\{\tau : f(\tau;\theta) \neq f(\tau;\theta')\}$ has measure 0 under T.

## A.5 Basic Extension To Categorical Features

Corollary 1. *Let* X *be distributed uniformly over a finite set of categorical values* $\mathcal{X}$. *For notational convenience, let* $\mathcal{X} = \{1,...,m\} \subset \mathbb{N}$. *Suppose that a dataset* $\{(x_i, y_i)\}_{i=1}^n$ *is created by sampling* $\{y_j\}_{j=1}^{n/m}$ *values IID from the conditional distribution* $Y|X=x$ *for each value of* x *(assuming m divides n), and taking the union of these sets. Let* $f : \mathcal{X} \times (0,1) \times \Theta \to \mathbb{R}$ *be fully separable over* x; *that is, let* $\Theta = \Theta_1 \times ... \times \Theta_m$, *where* $\Theta_x$ *represents a copy of a given parameter space* $\Theta_0$ *for each value of* $x \in \mathcal{X}$. *Let* f *take the form*

$$f(x,\tau;\theta)=\sum_{x_{0}\in{\mathcal{X}}}\mathbb{1}(x=x_{0})g(\tau;\theta_{x_{0}})$$

*for some function* $g : (0,1) \times \Theta_0 \to \mathbb{R}$. *Suppose* $f(x,\tau;\theta)$ *is well specified with a unique* $\theta^* \in \Theta$ *such that* $f(x,\tau;\theta^*) = q_\tau(x)$ *for all* $\tau \in (0,1)$ *and all* $x \in \mathcal{X}$, *and that* θ* *is also the unique minimizer of the risk* $R(\theta) = \frac{1}{m}\sum_{x\in\mathcal{X}} E_{Y,T}[L_T(Y, f(x,T;\theta))\,|\,X=x]$. *Suppose the estimator* $\hat{\theta}^{(n)}$ *is weakly consistent,* $\hat{\theta}^{(n)} \xrightarrow{P} \theta^*$. *Suppose further that the function* $\theta \mapsto f(x,\tau;\theta)$ *is continuous, locally Lipschitz, and twice differentiable at* $\theta = \theta^*$ *for all* $\tau \in (0,1)$ *and all* $x \in \mathcal{X}$. *Then for each* $x \in \mathcal{X}$,

$$\sqrt{\frac{n}{m}}(f(x,\tau;\hat{\theta}^{(n)})-f(x,\tau;\theta^{*}))\xrightarrow{d}\mathcal{N}\left(0,\nabla_{\theta}f(x,\tau;\theta^{*})^{\top}Q^{-1}V(Q^{-1})^{\top}\nabla_{\theta}f(x,\tau;\theta^{*})\right)$$

*where*

$$Q=E_{T}[p_{x}(f(x,T;\theta^{*}))\Gamma(T)],\quad V=E_{T}[T(1-T)\Gamma(T)],$$

*with* $p_x(y)$ *being the density of* $Y|X=x$ *and* $\Gamma(\tau) = \nabla_\theta f(x,\tau;\theta^*)\nabla_\theta f(x,\tau;\theta^*)^{\top}$.

Proof. For notational convenience, let $\mathcal{X} = \{1,...,m\} \subset \mathbb{N}$. Define the loss function $\bar{L}(x,y,\theta) = E_T[L_T(y, f(x,T;\theta))]$.
Let $\hat{\theta}^{(n)}$ minimize the empirical risk over the samples $\{(x_i,y_i)\}_{i=1}^n$, $\hat{\theta}^{(n)} = \arg\min_\theta \frac{1}{n}\sum_{i=1}^n \bar{L}(x_i, y_i, \theta)$. By the assumed separability of f and Θ over x,

$$\begin{aligned}
\hat{\theta}^{(n)} \in \operatorname*{arg\,min}_{\theta\in\Theta} \frac{1}{n}\sum_{i=1}^{n}\bar{L}(x_i,y_i,\theta)
&= \operatorname*{arg\,min}_{\theta\in\Theta} \sum_{x\in\mathcal{X}}\frac{1}{m}\left(\frac{1}{n/m}\sum_{i=1}^{n}\bar{L}(x,y_i,\theta)\,\mathbb{1}(x_i=x)\right) \\
&= \operatorname*{arg\,min}_{\theta\in\Theta} \sum_{x\in\mathcal{X}}\frac{1}{m}\left(\frac{1}{n/m}\sum_{i=1}^{n}E_{T}[L_{T}(y_i,f(x,T;\theta))]\,\mathbb{1}(x_i=x)\right) \\
&= \operatorname*{arg\,min}_{\theta\in\Theta} \sum_{x\in\mathcal{X}}\frac{1}{m}\left(\frac{1}{n/m}\sum_{i=1}^{n}E_{T}[L_{T}(y_i,g(T;\theta_x))]\,\mathbb{1}(x_i=x)\right).
\end{aligned}$$

Since each summand depends only on the component $\theta_x$, the minimization decouples over $x \in \mathcal{X}$. Therefore, each component $\hat{\theta}^{(n)}_x$ minimizes an empirical loss over the points with $x_i = x$:

$$\hat{\theta}_{x}^{(n)}\in\operatorname*{arg\,min}_{\theta_{x}\in\Theta_{x}}{\frac{1}{n/m}}\sum_{i=1}^{n}E_{T}[L_{T}(y_{i},g(T;\theta_{x}))]\,\mathbb{1}(x_{i}=x).$$

By assumption, the components of the optimum $\theta^*_x$ also uniquely minimize each conditional risk,

$$\theta_x^* = \operatorname*{arg\,min}_{\theta_x\in\Theta_x} E_{Y,T}[L_T(Y, f(x,T;\theta))\,|\,X=x].$$

Therefore, we may apply Theorem 1 to describe the convergence of each component $\hat{\theta}^{(n)}_x$ to its respective optimum $\theta^*_x$ for each $x \in \mathcal{X}$, yielding the result.

## A.6 Additional Examples Illustrating Theorem 1

Single parameter uniform anchored at 0. Suppose the true distribution of interest is $Y \sim \mathrm{Unif}(0,\alpha)$. Let $f(\tau;\theta) = \theta\tau$, which correctly parameterizes the true linear inverse CDF. By Theorem 1, the asymptotic variance of $\sqrt{n}(\hat{\theta}^{(n)} - \theta^*)$ is

$$Q^{-1}V(Q^{-1})^{\top}=\alpha^{2}(E_{T}[T^{3}]-E_{T}[T^{4}])/E_{T}[T^{2}]^{2}.$$

Thus, the asymptotic variance depends on the distribution of T through its third and fourth moments, and depends on the distribution of Y through α. For $T \sim \mathrm{Unif}(0,1)$, the exact asymptotic variance is $0.45\alpha^2$. To estimate a specific τ₀-quantile, we would evaluate the function $f(\tau_0;\hat{\theta}^{(n)}) = \hat{\theta}^{(n)}\tau_0$, which would have an asymptotic variance of $0.45\alpha^2\tau_0^2$.

## A.7 A Connection To Regression On Sample Quantiles

We provide some more intuition here about how the training procedure we describe in this paper - learning a function f(τ) with an expected pinball loss - smooths sample quantiles in the case that there are no conditioning x-features. In particular, we can show that our proposed method (except with a uniform discrete rather than uniform continuous distribution for τ) is equivalent to a regression on the observed sample quantiles with a particular cost function.

Proposition 1. *Given a dataset* $\{y_i\}_{i=1}^n$ *of n elements, construct the dataset* $\{(\tau^{(i)}, y^{(i)})\}_{i=1}^n$ *where* $y^{(i)}$ *represents the i-th order statistic of the data and* $\tau^{(i)} = \frac{i-1}{n-1}$ *is the sample quantile that order statistic represents. Then training a model with an expected pinball loss (1) where* τ *is drawn uniformly from the discrete distribution* $\{\tau^{(i)}\}_{i=1}^n$ *is equivalent to learning a single-variable regression over the data points* $\{(\tau^{(i)}, y^{(i)})\}_{i=1}^n$. *The cost function c for the regression at the specific point* $(\tau^{(i)}, y^{(i)})$ *is 0 when* $f(\tau^{(i)}) = y^{(i)}$ *and otherwise piecewise-linear with slope*

$$c^{\prime}(f(\tau^{(i)}))=\begin{cases}-\tau^{(i)}-\sum_{j=1}^{i-1}\mathbb{1}(f(\tau^{(i)})<y^{(j)})&\text{for }f(\tau^{(i)})<y^{(i)}\\(1-\tau^{(i)})+\sum_{j=i+1}^{n}\mathbb{1}(f(\tau^{(i)})>y^{(j)})&\text{for }f(\tau^{(i)})>y^{(i)}\end{cases}$$

Proof.
The τ-pinball loss can be expressed as

$$L_{\tau}(y,f(\tau))=\begin{cases}\tau(y-f(\tau))&\text{if }f(\tau)<y\\(1-\tau)(f(\tau)-y)&\text{if }f(\tau)\geq y\end{cases}$$

When training with an expected pinball loss over the discrete values of τ corresponding to the sample quantiles of the data, this means that the expected pinball loss is a function only of the $f(\tau^{(i)})$, i.e. the quantile predictions of f at those sample quantiles. This is precisely the context of ordinary regression; the loss we face is a function only of the values our function takes at the x-coordinates of our data points, which here are set up to be exactly the $\{\tau^{(i)}\}_{i=1}^n$.

Let us consider the pinball loss applied to $\tau^{(i)}$. To optimize (1), we average the losses across our data points $\{y_i\}_{i=1}^n$; for this proof, we (equivalently) work with the sums instead. Let our regression-equivalent cost $c(f(\tau^{(i)}))$ be equal to the relative increase in the $\tau^{(i)}$-pinball loss we face, summed over the data points, compared to if we had predicted $f(\tau^{(i)}) = y^{(i)}$.

By the definition of the $\tau^{(i)}$-pinball loss, decreasing $f(\tau^{(i)})$ by ε will mean that we face an additional cost of $\tau^{(i)}\varepsilon$ for every data point we are below and moving away from, and recover $(1-\tau^{(i)})\varepsilon$ for every data point we are above and moving towards. Say we start at $f(\tau^{(i)}) = y^{(i)}$ and choose an ε such that $f(\tau^{(i)}) - \varepsilon > y^{(i-1)}$. The point $y^{(i)}$ has $\tau^{(i)}(n-1)$ data points below it and $(1-\tau^{(i)})(n-1)$ data points above it by definition. So our marginal loss from decreasing $f(\tau^{(i)})$ by ε is $\tau^{(i)}(1-\tau^{(i)})(n-1)\varepsilon$ due to the points above $y^{(i)}$, while we recover $(1-\tau^{(i)})\tau^{(i)}(n-1)\varepsilon$ due to the points below it. These are equal! But we also lose $\tau^{(i)}\varepsilon$ because we are now moving away from the point $y^{(i)}$ itself, making the slope of $c(f(\tau^{(i)}))$ equal to $-\tau^{(i)}$ in the region between $y^{(i-1)}$ and $y^{(i)}$. Each time we pass a neighboring data point, we start losing $\tau^{(i)}$ from that point instead of gaining $(1-\tau^{(i)})$, increasing the steepness of the slope by 1. This is exactly what we want to show: $c'(f(\tau^{(i)}))$ is equal to $-\tau^{(i)}$ minus the number of data points smaller than $y^{(i)}$ that $f(\tau^{(i)})$ is smaller than.

The opposite case - analyzing what happens when we increase $f(\tau^{(i)})$ by ε - is essentially analogous. Increasing $f(\tau^{(i)})$ by ε leads to an additional cost of $(1-\tau^{(i)})\varepsilon$ for every data point we are above and moving away from, and a decreased cost of $\tau^{(i)}\varepsilon$ for every data point we are below and moving towards. This implies the slope of the cost function between $y^{(i)}$ and $y^{(i+1)}$ is $(1-\tau^{(i)})$, as the cost accrued and recovered by the points above and below $y^{(i)}$ cancel out and we only have the net cost due to moving away from $y^{(i)}$ itself. And for the same reason described above, passing data points adds to the slope by 1. See Figure 7 for some example cost functions.

This cost function is nonstandard in a couple of ways. First, it varies per point. And second, its steepness is a function of how many neighboring sample data points the prediction f(τ) is off by rather than just how far away it is from the sample quantile in an absolute sense. Still, it otherwise follows the normal properties of a cost function, such as taking the value 0 when the prediction is exactly correct and (weakly) rising in a convex fashion as the prediction is too high or too low. This makes it clear why a sufficiently flexible function f(τ) will exactly pass through all of the sample quantiles and also why a more limited f(τ) will serve as a smoother, passing below some sample quantiles and above others.

Worked Example: Consider the dataset {1, 2, 5, 6, 8}.
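Before working through the numbers, here is a minimal sketch of this example (our own illustration, not code from the paper): it fits a linear f(τ) = a + bτ by minimizing the expected pinball loss over the five discrete quantile levels and compares it against a squared-loss regression on the sample quantiles. The linear form and the use of scipy's Nelder-Mead optimizer are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([1.0, 2.0, 5.0, 6.0, 8.0])          # the worked-example dataset, sorted
n = len(y)
taus = np.arange(n) / (n - 1)                     # sample quantiles {0, 0.25, 0.5, 0.75, 1}

def expected_pinball(params):
    # average tau-pinball loss over all data points and all discrete quantile levels
    a, b = params
    preds = a + b * taus                          # linear model f(tau) = a + b * tau
    diff = y[:, None] - preds[None, :]            # diff[i, j] = y_i - f(tau_j)
    return np.maximum(taus * diff, (taus - 1.0) * diff).mean()

a_pin, b_pin = minimize(expected_pinball, x0=[np.mean(y), 1.0], method="Nelder-Mead").x
b_ls, a_ls = np.polyfit(taus, y, deg=1)           # squared-loss regression on (tau^(i), y^(i))

print(f"expected-pinball fit: f(tau) = {a_pin:.2f} + {b_pin:.2f} tau")
print(f"least-squares fit:    f(tau) = {a_ls:.2f} + {b_ls:.2f} tau")
print(f"median estimates: {a_pin + 0.5 * b_pin:.2f} (pinball), "
      f"{a_ls + 0.5 * b_ls:.2f} (LS), sample median {np.median(y):.1f}")
```

Both fits should estimate the median below the observed sample median of 5, matching the discussion of Figure 7 below.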
These correspond to the sample quantiles {0, 0.25, 0.5, 0.75, 1}. Figure 7 plots the sample quantiles on a graph, the least squares regression fit, and the expected pinball loss fit with linear f(τ). We can see that both are quite similar. For example, both estimate the median as being lower than the observed sample median. The figure also shows the regression-equivalent cost function for each of the quantile predictions, if we were to think of our method as a regression on sample quantiles.

![27_image_0.png](27_image_0.png)

Figure 7: Given the dataset {1, 2, 5, 6, 8}, we show visually how learning f(τ) via the expected pinball loss with τ as a feature can be seen as a type of regression on the sample quantiles. For simplicity, we take f(τ) to be linear, which would be the correct functional form if the data are drawn from a uniform distribution. On the left, we plot the sample quantiles and compare the line learned by a direct squared-loss regression on those data points (i.e. $\{(\tau^{(i)}, y^{(i)})\}_{i=1}^5$) with the line learned by our method (i.e. learning f(τ) with the expected pinball loss). On the right, we show the equivalent costs imposed on predictions for each sample quantile if we reframed training f(τ) with an expected pinball loss over τ = {0, 0.25, 0.5, 0.75, 1} as a regression on sample quantiles. The cost is 0 when the prediction is exactly the corresponding sample quantile and weakly rising as it varies.

## A.8 Calibration Error Metric Definition

A predicted conditional inverse CDF $f(x,\tau;\theta)$ is *calibrated* if

$$P(Y\leq f(X,\tau;\theta))=\tau.\tag{6}$$

A calibration error metric measuring the difference between $P(Y \leq f(X,\tau;\theta))$ and τ can be approximated over a finite dataset with data points $\{(x_i,y_i)\}_{i=1}^n$ and quantiles $\tau_1,...,\tau_m$. We consider the absolute deviation:

$$\frac{1}{m}\sum_{j=1}^{m}\left|\left(\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}(y_{i}\leq f(x_{i},\tau_{j};\theta))\right)-\tau_{j}\right|\tag{7}$$

Others such as Kuleshov et al. (2018) have considered the squared deviation.

## A.9 Hyperparameter Tuning Details

For each real data experiment, we tune hyperparameters over a validation dataset, where the validation dataset is selected according to Section 5.2. The tuning criterion was the validation pinball loss averaged over τ ∈ {0.01, ..., 0.99}. For DLNs, the search spaces for real data experiments were the following:

- The number of calibration keypoints for the piecewise-linear calibration function over τ was tuned over {10, 20, 50, 100}. Note that as this number goes up, you get more model flexibility with respect to τ (only), which makes it effectively like just training separate models for each τ.
- The number of lattice keypoints for τ was tuned over {2, 3, 5, 7, 10}. This controls the flexibility of the interactions between τ and the other features x.
- Other feature calibration keypoints were tuned over {5, 10, 15, 20}, which are common values for DLNs.
- Step sizes were tuned over {0.001, 0.005, 0.01, 0.05, 0.1}.
- Minibatch sizes were tuned over {1000, 10000}.
- The number of steps was tuned over {100, 1000, 10000}.
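The search over the spaces listed above amounts to a plain exhaustive grid search. A minimal generic sketch is below (our own illustration; `train_dln` and `validation_pinball_loss` are hypothetical placeholders for the actual DLN training and evaluation routines, which are not shown in the paper):

```python
import itertools

# hypothetical search space mirroring the lists above
SEARCH_SPACE = {
    "calib_keypoints_tau": [10, 20, 50, 100],
    "lattice_keypoints_tau": [2, 3, 5, 7, 10],
    "feature_calib_keypoints": [5, 10, 15, 20],
    "step_size": [0.001, 0.005, 0.01, 0.05, 0.1],
    "minibatch_size": [1000, 10000],
    "num_steps": [100, 1000, 10000],
}

def grid_search(train_dln, validation_pinball_loss):
    """train_dln and validation_pinball_loss are placeholder callables for the
    actual DLN training / evaluation routines (not provided here)."""
    best_config, best_loss = None, float("inf")
    for values in itertools.product(*SEARCH_SPACE.values()):
        config = dict(zip(SEARCH_SPACE.keys(), values))
        model = train_dln(**config)
        loss = validation_pinball_loss(model)   # averaged over tau in {0.01, ..., 0.99}
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss
```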
To tune the DLN architecture for experiments with real data, we searched over the following hyperparameters:

- Air Quality: number of lattices ∈ {4, 8}, number of features per lattice = 6.
- Puzzles: number of lattices ∈ {8, 16, 32}, number of features per lattice = 5.
- Traffic: a single lattice which includes all features (not an ensemble).
- Wine: number of lattices ∈ {100, 200, 400, 800}, number of features per lattice ∈ {2, 4, 8}.
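Returning to the calibration error metric of Section A.8, equation 7 is straightforward to compute from a matrix of quantile predictions. The following is a minimal sketch (our own illustration, not code accompanying the paper), checked on an assumed toy case where the predictions come from the true standard-normal quantile function, so the error should be near zero:

```python
import numpy as np
from scipy.stats import norm

def calibration_error(y, preds, taus):
    """Absolute-deviation calibration error of equation (7).

    y:     (n,) observed targets
    preds: (n, m) predicted quantiles, preds[i, j] = f(x_i, tau_j; theta)
    taus:  (m,) quantile levels tau_1, ..., tau_m
    """
    coverage = (y[:, None] <= preds).mean(axis=0)   # empirical P(Y <= f(X, tau_j; theta))
    return np.abs(coverage - np.asarray(taus)).mean()

# toy check: predictions from the true N(0, 1) quantile function are calibrated
rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)
taus = np.linspace(0.01, 0.99, 99)
preds = np.tile(norm.ppf(taus), (len(y), 1))        # same quantiles for every point
print(calibration_error(y, preds, taus))            # close to 0
```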
Review 1:

Summary: The paper provides strong evidence, using both theory and real-world data analysis, that performing quantile regression at different quantiles is more beneficial than doing it at just one specific quantile, even if we are primarily interested in that particular quantile. Moreover, the paper suggests using Deep Lattice Networks [1] for estimating quantiles, as these networks address significant problems observed in previous methods [2], such as the issue of quantiles crossing each other. The paper also highlights the practical implications of these problems.

**Refs**

[1] S. You, K. Canini, D. Ding, J. Pfeifer, and M. R. Gupta. Deep lattice networks and partial monotonic functions. NeurIPS, 2017.

[2] N. Tagasovska and D. Lopez-Paz. Single-model uncertainties for deep learning. NeurIPS, 2019.

Strengths and Weaknesses: **Strengths:** Suggestion of deep lattice networks for quantile estimators to solve the crossing-quantile issue. **Weakness:** Writing and organization of the paper.

Requested Changes: **Writing:** The writing feels not up to the mark. I will just highlight a few examples. Page 1-2: "*That* work is important because it provides simultaneous quantile regression of the entire inverse CDF. However, *that* work left open a couple of important theoretical and practical issues that we address in this paper." Page 11: "Lattices build upon a truly *ancient* strategy for approximating functions over a bounded domain..." **Relation with Calibration:** It would be beneficial to have a discussion and explore the practical applications of quantile estimators for calibration purposes.

Broader Impact Concerns: NA

==================================================

Review 2:

Summary: This paper investigates the training of a quantile regression model by minimizing the expected pinball loss across all quantiles. The asymptotic convergence rate indicates that minimizing the expected pinball loss can more effectively estimate single quantiles than training with the standard pinball loss for that quantile. Additionally, the authors utilize deep lattice networks to address the issue of crossing quantiles.

Strengths and Weaknesses: Strengths: 1. The research addresses an interesting problem. 2. The paper provides several theoretical results. 3. The research includes a thorough analysis of numerical studies. Weakness: The revised version effectively addresses the previously mentioned comments.

Requested Changes: The revised version effectively addresses the previously mentioned comments.

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This is a resubmission of a manuscript I reviewed before. The authors have addressed my previous concerns, and I don't have additional major comments. However, the writing, especially some notation, needs some clarification.

Strengths and Weaknesses: No additional comments.

Requested Changes: Both $f(x, \tau)$ and $f(\tau; x)$ are used in Section 1.3, which may be a typo. Later $f(x, \tau; \theta)$ is used. I guess $f(x, \tau)$ means the one-sample problem and $f(x, \tau; \theta)$ is for the conditional quantile, but it will save readers a lot of effort if the authors explicitly state their meanings. In the paper, $f(x, \tau; \theta)$ is a model, so it is not appropriate to call it an estimator in Sections 2, 3.2, and probably some other places. I believe by "i.e. our function class contains the true distribution." you mean the working model contains the true data generating model.
Please go over the paper and try to improve this type of vague statement. Are the two models correctly specified in Examples 1 and 2? It would be helpful to write the true inverse CDF in the discussion. The title for Section 3.3 could be more accurate, as could some statements in the abstract: the section is about the sampling properties (or asymptotic properties) of the estimator. The rate of convergence for the proposed estimator and that of the estimator based on a single quantile are the same; it is their asymptotic variances that are different.

Broader Impact Concerns: No additional comments

==================================================

Metareview:

Recommendation: Accept as is

Comment: Reviewers agree that this resubmission addresses the major concerns raised in the previous round, especially around empirical evaluation, assumptions in theory, and discussion of related work. The work makes a valuable contribution to quantile regression that will be of broad interest to the community. The theory has been improved with clearer assumptions (in particular, a well-specified assumption). Some of these assumptions are strong, and so empirical evaluations are important. The empirical work is now extensive enough to paint a clear picture that the method offers a valuable tool.

==================================================
# Distributed SGD In Overparameterized Linear Regression

Anonymous authors

Paper under double-blind review

## Abstract

We consider distributed learning using constant-stepsize SGD over several devices, each sending a final model update to a central server. In a final step, the local estimates are aggregated. We prove, in the setting of overparameterized linear regression, general upper bounds with matching lower bounds and derive learning rates for specific data generating distributions. We show that the excess risk is of the order of the variance provided the number of local nodes does not grow too large with the global sample size. We further compare distributed SGD with distributed ridge regression and provide an upper bound of the excess SGD-risk in terms of the excess RR-risk for a certain range of the sample size.

## 1 Introduction

Deep neural networks possess powerful generalization properties in various machine learning applications, despite being overparameterized. It is generally believed that the optimization algorithm itself, e.g., stochastic gradient descent (SGD), implicitly regularizes such overparameterized models. This regularizing effect due to the choice of the optimization algorithm is often referred to as *implicit regularization*. A refined understanding of this phenomenon was recently gained in the setting of linear regression (to be considered as a reasonable approximation of neural network learning) for different variants of SGD. Constant-stepsize SGD (with last iterate or tail-averaging) is investigated in Jain et al. (2018), in Dieuleveut & Bach (2016) in an RKHS framework, and also in Mücke et al. (2019) with additional mini-batching; see also Mücke & Reiss (2020) for a more general analysis in Hilbert scales. In Zou et al. (2021b;a) it is shown that benign overfitting also occurs for SGD. Multi-pass SGD is analyzed in Lin et al. (2016); Jain et al. (2016); Lin & Rosasco (2017); Zou et al. (2022), while last iterate bounds can be found in Jain et al. (2019); Wu et al. (2022); Varre et al. (2021).

Despite the attractive statistical properties of all these SGD variants, the complexity of computing regression estimates prevents them from being routinely used in large-scale problems. More precisely, the time and space complexity of SGD and other regularization methods in a standard implementation scale as $O(n^\alpha)$, α ∈ [2, 3]. Such scalings are prohibitive when the sample size n is large.

Distributed learning (DL) based on a divide-and-conquer approach is an effective way to analyze large scale data that cannot be handled by a single machine. In this paper we study a distributed learning strategy in linear regression (including both underparameterized and overparameterized regimes) via (tail-) averaged stochastic gradient descent with constant stepsize (DSGD). The approach is quite simple and communication efficient: the training data is distributed across several computing nodes where on each a local SGD is run. In a final step, these local estimates are aggregated (a.k.a. *one-shot SGD*). Local SGD has become state of the art in large scale distributed learning, showing a linear speed-up in the number of workers for convex problems, see e.g. Mcdonald et al. (2009); Zinkevich et al. (2010); Dieuleveut & Patel (2019); Stich (2018); Spiridonoff et al. (2021) and references therein. The field of DL has gained increasing attention in statistical learning theory with the aim of deriving conditions under which minimax optimal rates of convergence can be guaranteed, see e.g.
Chen & Xie (2014), Mackey et al. (2011), Xu et al. (2019), Fan et al. (2019), Shi et al. (2018), Battey et al. (2018), Fan et al. (2021), Bao & Xiong (2021). Indeed, the learning properties of DL in regression settings over Hilbert spaces are by now well understood. The authors in Zhang et al. (2015) analyze distributed (kernel) ridge regression and show optimal learning rates with appropriate regularization, provided the number of machines increases sufficiently slowly with the sample size, though under restrictive assumptions on the eigenfunctions of the kernel integral operator. This has been alleviated in Lin et al. (2017). However, in these works the number of machines *saturates* if the target is very smooth, meaning that large parallelization seems not possible in this regime. An extension of these works to more general spectral regularization algorithms for nonparametric least squares regression in (reproducing kernel) Hilbert spaces is given in Guo et al. (2017), Mücke & Blanchard (2018), including gradient descent (Lin & Zhou, 2018) and stochastic gradient descent (Lin & Cevher, 2018). The recent work Tong (2021) studies DL for functional linear regression. We finally mention the work of Mücke et al. (2022), where distributed ordinary least squares (DOLS) in overparameterized linear regression is studied, i.e. one-shot OLS without any explicit or implicit regularization. It is shown that the number of workers acts as a regularization parameter itself.

Contributions. We analyze the performance of DSGD with constant stepsize in overparameterized linear regression and provide upper bounds with matching lower bounds for the excess risk under suitable noise assumptions. Our results show that optimal rates of convergence can be achieved if the number of local nodes grows sufficiently slowly with the sample size. The excess risk as a function of data splits remains constant until a certain threshold is reached. This threshold depends on the structural assumptions imposed on the problem, i.e. on the eigenvalue decay of the Hessian and the coefficients of the true regression parameter. We additionally perform a comparison between DSGD and DRR, showing that the excess risk of DSGD is upper bounded by the excess risk of DRR under an assumption on the sample complexity (SC) of DSGD, depending on the same structural assumptions. We show that the SC of DSGD remains within constant factors of the SC of DRR. Our analysis extends known results in this direction from Zou et al. (2021b;a) for the single machine case to the distributed learning setting and from DOLS in Mücke et al. (2022) to SGD with implicit regularization.

Organization. In Section 2 we define the mathematical framework needed to present our main results in Section 3, where we provide a theoretical analysis of DSGD with a discussion of our results. In Section 4 we compare DSGD with DRR, while Section 5 is devoted to showing some numerical illustrations. The proofs are deferred to the Appendix.

Notation. By $\mathcal{L}(\mathcal{H}_1, \mathcal{H}_2)$ we denote the space of bounded linear operators between real Hilbert spaces $\mathcal{H}_1, \mathcal{H}_2$. We write $\mathcal{L}(\mathcal{H}, \mathcal{H}) = \mathcal{L}(\mathcal{H})$. For $A \in \mathcal{L}(\mathcal{H})$ we denote by $A^T$ the adjoint operator. By $A^\dagger$ we denote the pseudoinverse of A, and for $w \in \mathcal{H}$ we write $\|w\|_A^2 := \|A^{\frac{1}{2}}w\|^2$ for a PSD operator A. We let $[n] = \{1,...,n\}$ for every $n \in \mathbb{N}$. For two positive sequences $(a_n)_n$, $(b_n)_n$ we write $a_n \lesssim b_n$ if $a_n \leq c\,b_n$ for some $c > 0$ and $a_n \simeq b_n$ if both $a_n \lesssim b_n$ and $b_n \lesssim a_n$.

## 2 Setup

In this section we provide the mathematical framework for our analysis.
More specifically, we introduce distributed SGD and state the main assumptions on our model.

## 2.1 SGD And Linear Regression

We consider a linear regression model over a real separable Hilbert space $\mathcal{H}$ in random design. More precisely, we are given a random covariate vector $x \in \mathcal{H}$ and a random output $y \in \mathbb{R}$ following the model

$$y=\langle w^{*},x\rangle+\epsilon\ ,\tag{1}$$

where $\epsilon \in \mathbb{R}$ is a noise variable. We will impose some assumptions on the noise model in Section 3. The true regression parameter $w^* \in \mathcal{H}$ minimizes the least squares test risk, i.e.

$$L(w^{*})=\operatorname*{min}_{w\in{\mathcal{H}}}L(w)\;,\quad L(w):=\frac{1}{2}\mathbb{E}[(y-\langle w,x\rangle)^{2}]\;,$$

where the expectation is taken with respect to the joint distribution P of the pair $(x,y) \in \mathcal{H}\times\mathbb{R}$. More specifically, we let $w^*$ be the minimum norm element in the set of all minimizers of L. To derive an estimator $\hat{w} \in \mathcal{H}$ for $w^*$ we are given an i.i.d. dataset

$$D:=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\subset{\mathcal{H}}\times\mathbb{R}\;,$$

following the above model equation 1, i.e.,

$$\mathbf{Y}=\mathbf{X}w^{*}+\varepsilon\ ,$$

with i.i.d. noise $\varepsilon = (\varepsilon_1,...,\varepsilon_n) \in \mathbb{R}^n$. The corresponding random vector of outputs is denoted as $\mathbf{Y} = (y_1,...,y_n)^T \in \mathbb{R}^n$ and we arrange the data $x_j \in \mathcal{H}$ into a data matrix $\mathbf{X} \in \mathcal{L}(\mathcal{H}, \mathbb{R}^n)$ by setting $(\mathbf{X}v)_j = \langle x_j, v\rangle$ for $v \in \mathcal{H}$, $1 \leq j \leq n$. If $\mathcal{H} = \mathbb{R}^d$, then $\mathbf{X}$ is an $n \times d$ matrix (with row vectors $x_j$). We are particularly interested in the overparameterized regime, i.e. where $\dim(\mathcal{H}) > n$.

In the classical setting of stochastic approximation with constant stepsize, the SGD iterates are computed by the recursion

$$w_{t+1} = w_t - \gamma(\langle w_t, x_t\rangle - y_t)x_t\ ,\quad t=1,...,n\ ,$$

with some initialization $w_1 \in \mathcal{H}$ and where $\gamma > 0$ is the stepsize. The tail average of the iterates is denoted by

$$\bar{w}_{\frac{n}{2}:n}:=\frac{1}{n-n/2}\sum_{t=n/2+1}^{n}w_{t}\;,\tag{2}$$

and we denote by $\bar{w}_n := \bar{w}_{0:n}$ the full (uniform) average. Various forms of SGD (with iterate averaging, tail averaging, multiple passes) in the setting of overparameterized linear regression have been analyzed recently in Zou et al. (2021b), Wu et al. (2022), Zou et al. (2022), respectively. In particular, the phenomenon of *benign overfitting* is theoretically investigated in these works. It could be shown that benign overfitting occurs in this setting, i.e. the SGD estimator fits the training data very well and still generalizes. We are interested in this phenomenon for localized SGD, i.e. when our training data is distributed over several computing devices.

## 2.2 Local SGD

In the distributed setting, our data are evenly divided into $M \in \mathbb{N}$ local disjoint subsets

$$D=D_{1}\cup...\cup D_{M}$$

of size $|D_m| = \frac{n}{M}$, for $m=1,...,M$. To each local dataset we associate a local design matrix $\mathbf{X}_m \in \mathcal{L}(\mathcal{H}, \mathbb{R}^{\frac{n}{M}})$ (built with local row vectors $x_j^{(m)}$) with local output vector $\mathbf{Y}_m \in \mathbb{R}^{\frac{n}{M}}$ and a local noise vector $\varepsilon_m \in \mathbb{R}^{\frac{n}{M}}$. The local SGD iterates are defined as

$$w_{t+1}^{(m)}=w_{t}^{(m)}-\gamma\Big(\Big\langle w_{t}^{(m)},x_{t}^{(m)}\Big\rangle-y_{t}\Big)x_{t}^{(m)}\ ,$$

for $t=1,...,\frac{n}{M}$ and $m=1,...,M$. The averaged local iterates $\bar{w}^{(m)}_{\frac{n}{M}}$ are computed according to equation 2. We are finally interested in the uniform average of the local SGD iterates, building a global estimator:

$$\overline{{{w}}}_{M}:=\frac{1}{M}\sum_{m=1}^{M}\bar{w}_{\frac{n}{M}}^{(m)}\ .$$

Distributed learning in overparameterized linear regression is studied in Mücke et al. (2022) for the ordinary least squares estimator (OLS), i.e.
without any implicit or explicit regularization and with local interpolation. It is shown there that local overfitting is harmless and regularization is done by the number of data splits. We aim at finding optimal bounds for the excess risk

$$\mathbb{E}\big[L(\overline{{{w}}}_{M})\big]-L(w^{*})\;,$$

of distributed SGD (DSGD) with potential local overparameterization, as a function of the number of local nodes M and under various model assumptions, to be given in the next section.

## 3 Main Results

In this section we present our main results. To do so, we first impose some model assumptions.

Definition 3.1.

1. We define the second moment of $x \sim \mathbb{P}_x$ to be the operator $\mathbf{H} : \mathcal{H} \to \mathcal{H}$, given by
$$\mathbf{H} := \mathbb{E}[x \otimes x] = \mathbb{E}[\langle \cdot, x\rangle x]\ .$$
2. The fourth moment operator $\mathbf{M} : \mathcal{L}(\mathcal{H}) \to \mathcal{L}(\mathcal{H})$ *is defined by*
$$\mathbf{M} := \mathbb{E}[x \otimes x \otimes x \otimes x]\ ,$$
with $\mathbf{M}(A)(w) = \mathbb{E}[\langle x, Ax\rangle\langle w, x\rangle x]$, for all $w \in \mathcal{H}$.
3. *The covariance operator of the gradient noise at* $w^*$ *is defined as* $\mathbf{\Sigma} : \mathcal{H} \to \mathcal{H}$,
$$\mathbf{\Sigma} := \mathbb{E}[(\langle w^*, x\rangle - y)^2\; x \otimes x]\ .$$

Assumption 3.2 (Second Moment Condition). *We assume that* $\mathbb{E}[y^2|x] < \infty$ *almost surely. Moreover, we assume that the trace of* H *is finite, i.e.,* $\mathrm{Tr}[\mathbf{H}] < \infty$.

Assumption 3.3 (Fourth Moment Condition). *We assume there exists a positive constant* $\tau > 0$ *such that for any PSD operator* A, *we have*

$$\mathbf{M}(A)\preceq\tau\operatorname{Tr}[\mathbf{H}A]\mathbf{H}\ .$$

Note that this assumption holds if $\mathbf{H}^{-1}x$ is sub-Gaussian, being a standard assumption in least squares regression, see e.g. Bartlett et al. (2020), Zou et al. (2021b), Tsigler & Bartlett (2020).

Assumption 3.4 (Noise Condition). *Assume that*

$$\sigma^{2}:=||\mathbf{H}^{-{\frac{1}{2}}}\mathbf{\Sigma}\mathbf{H}^{-{\frac{1}{2}}}||<\infty\ .$$

This assumption on the noise is standard in the literature about averaged SGD, see e.g. Zou et al. (2021b), Dieuleveut & Bach (2016). We introduce some further notation involving the second moment operator H: We denote the eigendecomposition as

$$\mathbf{H}=\sum_{j=1}^{\infty}\lambda_{j}v_{j}\otimes v_{j}\ ,$$

where $\lambda_1 \geq \lambda_2 \geq ...$ are the eigenvalues of H and the $v_j$'s are the corresponding eigenvectors. For $k \geq 1$, we let

$$\mathbf{H}_{0:k}:=\sum_{j=1}^{k}\lambda_{j}v_{j}\otimes v_{j}~,~\mathbf{H}_{k:\infty}:=\sum_{j=k+1}^{\infty}\lambda_{j}v_{j}\otimes v_{j}~.$$

Similarly,

$$\mathbf{I}_{0:k}=\sum_{j=1}^{k}v_{j}\otimes v_{j}\ ,\ \ \mathbf{I}_{k:\infty}:=\sum_{j=k+1}^{\infty}v_{j}\otimes v_{j}\ .$$

A short calculation shows that for all $w \in \mathcal{H}$ we have

$$||w||_{\mathbf{H}_{0:k}^{\dagger}}^{2}=\sum_{j=1}^{k}{\frac{\langle w,v_{j}\rangle^{2}}{\lambda_{j}}}\,,\quad||w||_{\mathbf{H}_{k:\infty}}^{2}=\sum_{j=k+1}^{\infty}\lambda_{j}\langle w,v_{j}\rangle^{2}\;.$$

We finally set

$$V_{k}(n,M):=\frac{k}{n}+\gamma^{2}\frac{n}{M^{2}}\sum_{j=k+1}^{\infty}\lambda_{j}^{2}\;.\tag{3}$$

## 3.1 Upper Bound

We now present an upper bound for the averaged local SGD iterates. The proof relies on a bias-variance decomposition and is given in Appendix A.1.

Theorem 3.5 (DSGD Upper Bound). *Suppose Assumptions 3.2, 3.3 and 3.4 are satisfied and let* $\gamma < \frac{1}{\tau\,\mathrm{Tr}[\mathbf{H}]}$, $w_1 = 0$.
*The excess risk of the averaged local SGD estimate satisfies*

$$\mathbb{E}\big[L(\overline{{{w}}}_{M})\big]-L(w^{*})\ \leq\ 2\,\mathrm{Bias}(\overline{{{w}}}_{M})\ +\ 2\,\mathrm{Var}(\overline{{{w}}}_{M})\ ,$$

*where*

$$\mathrm{Bias}(\overline{{{w}}}_{M})\leq\frac{M^{2}}{\gamma^{2}n^{2}}\|w^{*}\|_{\mathbf{H}_{0:k^{*}}^{\dagger}}^{2}+\|w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}\ +\ \frac{2\tau M^{2}\big(\|w^{*}\|_{\mathbf{I}_{0:k^{*}}}^{2}+\gamma\frac{n}{M}\|w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}\big)}{\gamma n(1-\gamma\tau\,\mathrm{Tr}[\mathbf{H}])}\cdot V_{k^{*}}(n,M)$$

*and*

$$\mathrm{Var}(\overline{{{w}}}_{M})\;\leq\;\frac{\sigma^{2}}{1-\gamma\tau\,\mathrm{Tr}[{\bf H}]}\cdot V_{k^{*}}(n,M)\;,$$

*with* $k^* = \max\{k : \lambda_k \geq \frac{M}{\gamma n}\}$.

The excess risk is upper bounded in terms of the bias and variance. Both terms crucially depend on the effective dimension $k^* = \max\{k : \lambda_k \geq \frac{M}{\gamma n}\}$, dividing the full Hilbert space $\mathcal{H}$ into two parts. On the part associated to the largest $k^*$ eigenvalues, the bias may decay faster than on the remaining tail part that is associated to the smaller eigenvalues, see Zou et al. (2021b) in the context of single machine SGD, Bartlett et al. (2020); Tsigler & Bartlett (2020) in the context of single machine ridge regression, and Mücke et al. (2022) for distributed ordinary least squares. Our Theorem 3.5 reveals that the excess risk converges to zero if

$$||w^{*}||_{\bf H_{k^{*}:\infty}}^{2}\to0\;,\quad\frac{2M^{2}}{\gamma^{2}n^{2}}||w^{*}||_{\bf H_{0:k^{*}}^{\dagger}}^{2}\to0$$

and $V_{k^*}(n,M) \to 0$ as $n \to \infty$. This requires the eigenvalues of H to decay sufficiently fast and to choose the number of local nodes $M = M_n$ to be a sequence in n. Note that we naturally have to assume $M_n \lesssim n$. In Subsection 3.3 we provide two specific examples of data distributions with specific choices for $(M_n)_{n\in\mathbb{N}}$ such that the above conditions are met, granting not only convergence but also providing explicit rates of convergence.

## 3.2 Lower Bound

Before we state the lower bounds for the excess risk of the DSGD estimator we need to impose some assumptions.

Assumption 3.6 (Fourth Moment Lower Bound). *We assume there exists a positive constant* $\theta > 0$ *such that for any PSD operator* A, *we have*

$$\mathbf{M}(\mathbf{A})-\mathbf{H}\mathbf{A}\mathbf{H}\succeq\theta\operatorname{Tr}[\mathbf{H}\mathbf{A}]\mathbf{H}\;.$$

Assumption 3.7 (Well-Specified Noise). *The second moment operator* H *is strictly positive definite with* $\mathrm{Tr}[\mathbf{H}] < \infty$. *Moreover, the noise* ϵ *in equation 1 is independent of* x *and satisfies*

$$\epsilon\sim{\mathcal{N}}(0,\sigma_{noise}^{2})\ .$$

We now come to the main result, whose proof can be found in Appendix A.2.

Theorem 3.8 (DSGD Lower Bound). *Suppose Assumptions 3.6 and 3.7 are satisfied. Assume* $w_1 = 0$. *The excess risk of the DSGD estimator satisfies*

$$\mathbb{E}\big[L(\overline{{{w}}}_{M})\big]-L(w^{*})\geq\frac{M(M-1)}{100\gamma^{2}n^{2}}\bigg(||w^{*}||_{\mathbf{H}_{0:k^{*}}^{\dagger}}^{2}+\frac{\gamma^{2}n^{2}}{M^{2}}||w^{*}||_{\mathbf{H}_{k^{*}:\infty}}^{2}\bigg)+\frac{\sigma_{noise}^{2}}{100}\cdot V_{k^{*}}(n,M)\;,$$

*where* $V_{k^*}(n,M)$ *is defined in equation 3.*

The lower bound for the excess risk also decomposes into a bias part (first term) and a part associated to the variance (second term). Comparing the bias with the upper bound for the bias from Theorem 3.5 shows that both are of the same order.
Comparing the variances reveals that they are of the same order if

$$\frac{2\tau M^{2}\big(||w^{*}||_{\mathbf{I}_{0:k^{*}}}^{2}+\gamma\frac{n}{M}||w^{*}||_{\mathbf{H}_{k^{*}:\infty}}^{2}\big)}{\gamma n(1-\gamma\tau\operatorname{Tr}[\mathbf{H}])}\leq1\ .$$

In the next section, we will provide specific conditions and examples when this is satisfied.

## 3.3 Fast Rates Of Convergence For Specific Distributions

We now consider two particular cases of data distributions, namely the *spiked covariance model* (with local overparameterization) and the case where the eigenvalues of the second moment operator H decay *polynomially*. These are standard assumptions for the model, see e.g. Tsigler & Bartlett (2020); Zou et al. (2021b); Mücke et al. (2022). In both cases, we determine a range of the number of local nodes $M_n$ depending on the global sample size such that the bias is dominated by the variance. The final error is then of the order of the variance if the number of local nodes grows sufficiently slowly with the sample size. The optimal¹ number exactly balances bias and variance.

Corollary 3.9 (Spiked Covariance Model). *Suppose all assumptions of Theorem 3.5 are satisfied. Assume that* $||w^*|| \leq R$ *for some* $R > 0$ *and* $\mathbf{H} \in \mathbb{R}^{d\times d}$. *Let* $d = \left(\frac{n}{M}\right)^q$ *for some* $q > 1$ *and* $\tilde{d} = \left(\frac{n}{M}\right)^r < d$ *for some* $0 < r \leq 1$. *Suppose the spectrum of* H *satisfies*

$$\lambda_{j}=\begin{cases}\frac{1}{\tilde{d}}&:\ j\leq\tilde{d}\\ \frac{1}{d-\tilde{d}}&:\ \tilde{d}+1\leq j\leq d\end{cases}.$$

*If*

$$M_{n}\leq\sqrt{\frac{\gamma(1-2\gamma\tau)n}{R^{2}}}$$

*then for any* n *sufficiently large, we have*

$$\mathbb{E}\big[L(\overline{{{w}}}_{M_{n}})\big]-L(w^{*})\ \leq\ c\,\frac{1}{\gamma M_{n}}\,\left(\frac{M_{n}}{n}\right)^{\nu}\,,$$

*where* $\nu = \min\{1-r, q-1\}$ *and for some* $c < \infty$, *depending on* $\tau, \gamma, \sigma$. Choosing the maximum number of local nodes $M_n \simeq \sqrt{n}$ gives the fast rate of order

$$\mathbb{E}\left[L(\overline{{{w}}}_{M_{n}})\right]-L(w^{*})\ \lesssim\ \left(\frac{1}{n}\right)^{\frac{\nu+1}{2}}$$

for the excess risk.

¹*Optimal* in the sense of the maximal possible number of local nodes that balances bias and variance.

Corollary 3.10 (Polynomial Decay). *Suppose all assumptions of Theorem 3.5 are satisfied with* $\gamma < \min\left\{1, \frac{1}{\tau\,\mathrm{Tr}[\mathbf{H}]}\right\}$. *Assume that* $||w^*|| \leq R$ *for some* $R > 0$. *Suppose the spectrum*² *of* H *satisfies for some* $r > 0$

$$\lambda_j = j^{-(1+r)}\ .$$

*If*

$$M_{n}\leq\left(\frac{\gamma}{R^{2}}\right)^{\frac{1+r}{2+r}}\cdot(\gamma n)^{\frac{1}{2+r}}\ ,$$

*then for any* n *sufficiently large, we have*

$$\mathbb{E}\big[L(\overline{{{w}}}_{M_n})\big]-L(w^{*})\;\leq\;c\;\frac{\gamma}{M_{n}}\;\left(\frac{M_{n}}{n}\right)^{\frac{r}{1+r}}\;,$$

*for some* $c < \infty$, *depending on* $\tau, \gamma, \sigma$. Choosing the maximum number of local nodes $M_n \simeq n^{\frac{1}{2+r}}$ gives the fast rate of order

$$\mathbb{E}\big[L(\overline{{{w}}}_{M_{n}})\big]-L(w^{*})\ \lesssim\ \left(\frac{1}{n}\right)^{\frac{1+r}{r+2}}$$

for the excess risk.

²Note that the choice $\lambda_j = j^{-(1+r)}$ ensures that $\mathrm{Tr}[\mathbf{H}] < \infty$.

## 3.4 Discussion

Comparison to single machine SGD. We compare the DSGD algorithm with the single machine SGD algorithm, i.e. when M = 1. For this case, we recover the results from Zou et al. (2021b) under the same assumptions. Our Corollaries 3.9, 3.10 show that the excess risk is dominated by the variance as long as M grows sufficiently slowly with the sample size. But we can say even more: In the spiked covariance model, if $M_n \simeq n^\beta$ for $\beta \in [0, 1/2]$, we see that DSGD performs as well as single machine SGD, provided $\nu \leq 1$.
Indeed, a direct comparison shows that

$$\frac{1}{\gamma M_{n}}\left(\frac{M_{n}}{n}\right)^{\nu}\simeq\frac{1}{\gamma n^{\beta}}\left(\frac{n^{\beta}}{n}\right)^{\nu}\simeq\frac{1}{\gamma}\left(\frac{1}{n}\right)^{\nu},$$

for any $\beta \in [0, 1/2]$ and $\nu \leq 1$. Recall that all our bounds are of optimal order, hence the relative efficiency remains of constant order until the critical threshold for $M_n$ is reached. However, if $M_n$ is larger than the threshold, i.e. if $\beta \in (1/2, 1]$, then the bias term is dominating. In this case, the excess risk is of order

$$\frac{2M^{2}}{n^{2}}||w^{*}||_{\mathbf{H}_{0:k^{*}}^{\dagger}}^{2}+||w^{*}||_{\mathbf{H}_{k^{*}:\infty}}^{2}\simeq\left(\frac{M_{n}}{n}\right)^{2-r}+\left(\frac{M_{n}}{n}\right)^{q}\simeq\left(\frac{n^{\beta}}{n}\right)^{2-r}+\left(\frac{n^{\beta}}{n}\right)^{q},$$

being larger than the variance, see the proof of Corollary 3.9 in Appendix A.3. The same observations can be made for the setting in Corollary 3.10 when the eigenvalues are polynomially decaying. If we let $M_n \simeq n^\beta$ with $\beta \in [0, 1/(2+r)]$, then the variance dominates and for all $r > 0$, the test error satisfies

$$\frac{1}{\gamma M_{n}}\left(\frac{M_{n}}{n}\right)^{\frac{r}{r+1}}\simeq\frac{1}{\gamma n^{\beta}}\left(\frac{n^{\beta}}{n}\right)^{\frac{r}{r+1}}\simeq\frac{1}{\gamma}\left(\frac{1}{n}\right)^{\frac{r}{r+1}}\,.$$

We refer to Section 5 and Section C for some numerical experiments.

Comparison to distributed learning in RKHSs. We emphasize that all our results above hold for a constant stepsize $0 < \gamma < \min\left\{1, \frac{1}{\tau\,\mathrm{Tr}[\mathbf{H}]}\right\}$. In particular, γ does not depend on the number M of local nodes. This result is in line with the results for regularized distributed learning over reproducing kernel Hilbert spaces, see Zhang et al. (2015); Lin et al. (2017); Mücke & Blanchard (2018) and references therein. In this setting it is shown for a large class of spectral regularization methods³ that the optimal regularization parameter λ that leads to minimax optimal bounds depends on the global sample size only and is of order $n^{-\alpha}$, $\alpha \in (0,1]$. In particular, this parameter is chosen as in the single machine setting and each local subproblem is underregularized. This leads to a roughly constant bias (unchanged by averaging) in the distributed setting and an increase in variance, but averaging reduces the variance sufficiently to obtain optimal excess risk bounds. The same phenomenon occurs in our DSGD setting. On each local node the same stepsize γ as for the M = 1 case is applied.

Comparison to distributed ordinary least squares (DOLS). We also compare our results with those recently obtained in Mücke et al. (2022) for DOLS in random design linear regression. The general observation in this work is that in the presence of overparameterization, the number of local nodes acts as a regularization parameter, balancing bias and variance. Recall that this is in contrast to what we observe for DSGD, due to the implicit regularization. The optimal number of splits $M^{\mathrm{OLS}}_{\mathrm{opt}}$ depends on structural assumptions, i.e. eigenvalue decay and decay of the Fourier coefficients of $w^*$ (a.k.a. source condition). For the spiked covariance model, the optimal number $M^{\mathrm{OLS}}_n$ of DOLS is of order

$$M_{n}^{\mathrm{OLS}}\simeq\left(\frac{n^{3/2}}{\tilde{d}}\right)^{2/5}\simeq n^{\frac{3-2r}{5-2r}}\ ,$$

see Corollary 3.14 in Mücke et al. (2022).

³This class contains, among others, gradient descent and accelerated methods like Heavy ball and Nesterov, ridge regression or PCA.
Comparing with our maximum number $M_n \simeq n^{1/2}$ from Corollary 3.9, we observe that $M^{\mathrm{OLS}}_n \lesssim M^{\mathrm{SGD}}_n$ if $\frac{1}{2} \leq r \leq 1$, i.e., DSGD allows for more parallelization in this regime. For polynomially decaying eigenvalues $\lambda_j \sim j^{-(1+r)}$, $r > 0$, the optimal number of data splits in Corollary 3.9 (Mücke et al., 2022) scales as $M^{\mathrm{OLS}}_n \simeq n^{1/3}$. Compared to our result from Corollary 3.10 we have

$$M_{n}^{\mathrm{SGD}}\simeq n^{\frac{1}{2+r}}\lesssim n^{1/3}$$

for all $r \geq 1$. Thus, DOLS seems to allow more data splits under optimality guarantees for fast polynomial decay, i.e. large r.

## 4 Comparison Of Sample Complexity Of DSGD And DRR

In this section we compare the distributed tail-averaged SGD estimator with the distributed ridge regression (RR) estimator (see Zhang et al. (2015); Lin et al. (2017); Mücke & Blanchard (2018); Sheng & Dobriban (2020), or Tsigler & Bartlett (2020) for RR in the single machine case). Recall that RR reduces to ordinary least squares (OLS) if the regularization parameter is set to zero. As a special case, we compare our results to local OLS from Mücke et al. (2022) and analyze the benefit of implicit regularization of local SGD in the presence of local overparameterization.

We recall that for any $m \in [M]$, $\lambda \geq 0$, the local RR estimates are defined by

$$\hat{w}_{m}^{\mathrm{RR}}(\lambda)={\bf X}_{m}^{T}({\bf X}_{m}{\bf X}_{m}^{T}+\lambda)^{-1}{\bf Y}_{m}\ .$$

The average is

$$\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda)=\frac{1}{M}\sum_{m=1}^{M}\hat{w}_{m}^{\mathrm{RR}}(\lambda)\;.$$

We aim at showing that the excess risk of DSGD is upper bounded by the excess risk of DRR under suitable assumptions on the sample complexity. To this end, we first derive a lower bound for DRR to compare with. The proof follows by combining Proposition B.3 and Proposition B.5 with Lemma B.2.

Assumption 4.1. *The variable* $\mathbf{H}^{-1}x$ *is sub-Gaussian and has independent components.*

Similarly to the bounds for DSGD, our bounds for DRR depend on the effective dimension

$$k_{\mathrm{RR}}^{*}:=\operatorname*{min}\left\{k:\lambda_{k+1}\leq{\frac{M\big(\lambda+\sum_{j>k}\lambda_{j}\big)}{b n}}\right\}\ ,$$

for $\lambda > 0$ and some $b > 1$.

Theorem 4.2 (Lower Bound Distributed RR). *Suppose Assumption 4.1 holds and that* H *is strictly positive definite with* $\mathrm{Tr}[\mathbf{H}] < \infty$. *Assume that* $k^*_{\mathrm{RR}} \leq \frac{n}{c'M}$ *for some* $c' > 1$. *There exist constants* $b, c > 1$ *such that the excess risk of the averaged RR estimator satisfies*

$$\begin{aligned}\mathbb{E}\big[L(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\big]-L(w^{*})&\geq\|w^{*}\|_{\mathbf{H}_{k_{\mathrm{RR}}^{*}:\infty}}^{2}+\frac{M^{2}\big(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\big)^{2}}{n^{2}}\cdot\|w^{*}\|_{\mathbf{H}_{0:k_{\mathrm{RR}}^{*}}^{-1}}^{2}\\&\quad+\frac{\sigma^{2}}{c}\bigg(\frac{k_{\mathrm{RR}}^{*}}{n}+\frac{n}{M^{2}}\cdot\frac{\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}}{(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j})^{2}}\bigg)\ .\end{aligned}$$

We do our risk comparison particularly for tail-averaged DSGD and derive a bias-improved upper bound. The proof is given in Section B.2 and is an extension of Lemma 6.1 in Zou et al. (2021a) to DSGD.

Theorem 4.3 (Upper Bound Tail-averaged DSGD). *Suppose Assumption 3.7 is satisfied. Let* $\overline{w}_M$ *denote the tail-averaged distributed estimator with n training samples and assume* $\gamma < 1/\mathrm{Tr}[\mathbf{H}]$.
*For arbitrary* $k_1, k_2 \in [d]$,

$$\mathbb{E}\left[L(\overline{w}_{M})\right]-L(w^{*})\ =\mathrm{Bias}(\overline{w}_{M})+\mathrm{Var}(\overline{w}_{M})$$

*with*

$$\mathrm{Bias}(\overline{w}_{M})\leq\frac{c_{b}M^{2}}{\gamma^{2}n^{2}}\cdot\Big\|\exp\Big(-\frac{n}{M}\gamma\mathbf{H}\Big)w^{*}\Big\|_{\mathbf{H}_{0:k_{1}}^{-1}}^{2}+||w^{*}||_{\mathbf{H}_{k_{1}:\infty}}^{2}\ ,$$

$$\mathrm{Var}(\overline{w}_{M})\leq c_{v}(1+R^{2})\cdot\sigma^{2}\bigg(\frac{k_{2}}{n}+\frac{n\gamma^{2}}{M^{2}}\cdot\sum_{j>k_{2}}\lambda_{j}^{2}\bigg)\ ,$$

*for some universal constants* $c_b, c_v > 0$.

To derive the risk comparison we fix a sample size $n_{\mathrm{RR}}$ and $n_{\mathrm{SGD}}$ for DRR and tail-averaged DSGD, respectively, and derive conditions on the sample complexities such that, individually, the bias and variance of DSGD are upper bounded by the bias and variance of DRR, respectively. Combining both of the above theorems then leads to the risk comparison result. A detailed computation is given in Section B.3.

Theorem 4.4 (Comparison DSGD with DRR). *Let* $\overline{w}_{M}^{n_{\mathrm{SGD}}}$ *denote the tail-averaged distributed estimator with* $n_{\mathrm{SGD}}$ *training samples. Let further* $\overline{w}^{\mathrm{RR}}_{n_{\mathrm{RR}}}(\lambda)$ *denote the distributed RR estimator with* $n_{\mathrm{RR}}$ *training samples and with regularization parameter* $\lambda \geq 0$. *Suppose all assumptions from Theorems 4.2, 4.3 are satisfied. There exist constants* $b, c > 1$ *and* $0 < L_{\lambda,\gamma} \leq L'_{\lambda,\gamma}$ *such that for*

$$C^{*}:=c\left(1+\frac{||w^{*}||^{2}}{\sigma^{2}}\right),\quad C_{\lambda}^{*}:=\lambda+\sum_{j>k_{\rm RR}^{*}}\lambda_{j}\ ,\quad\gamma<\min\biggl\{\frac{1}{{\rm Tr}[\mathbf{H}]},\frac{1}{\sqrt{c}\,C^{*}C_{\lambda}^{*}}\biggr\}\tag{4}$$

*and*

$$L_{\lambda,\gamma}\cdot n_{\mathrm{RR}}\leq n_{\mathrm{SGD}}\leq L_{\lambda,\gamma}^{\prime}\cdot n_{\mathrm{RR}},$$

*the excess risks of DSGD and DRR satisfy*

$$\mathbb{E}\big[L(\overline{w}_{M})\big]-L(w^{*})\;\leq\;\mathbb{E}\big[L(\overline{{{w}}}_{n_{\mathrm{RR}}}^{\mathrm{RR}}(\lambda))\big]-L(w^{*})\;.\tag{5}$$

*The constants* $L_{\lambda,\gamma}, L'_{\lambda,\gamma}$ *are explicitly given by*

$$L_{\lambda,\gamma}=\operatorname*{max}\left\{C^{*},\frac{\sqrt{c(1-\gamma\lambda_{k_{\mathrm{RR}}^{*}})}}{\gamma C_{\lambda}^{*}}\right\}\,,\quad L_{\lambda,\gamma}^{\prime}=\frac{1}{C^{*}\gamma^{2}(C_{\lambda}^{*})^{2}}\,\,.$$

Note that in the above theorem, assumption equation 4 on the stepsize ensures that $0 < L_{\lambda,\gamma} \leq L'_{\lambda,\gamma}$. Our result shows that DSGD performs better than DRR/DOLS if the sample complexity (SC) of SGD differs from the SC of RR/OLS by no more than a constant. This constant depends on the amount of regularization λ, the stepsize γ, and the tail behavior of the eigenvalues of the Hessian. We refer to Section B.3.2 for a more detailed discussion. Our bound slightly differs from Zou et al. (2021a) for the case M = 1 in two respects: We scale our SC such that the constant in equation 5 is equal to one, while Zou et al. (2021a) show that both risks are of the same order (with a constant larger than one). Second, we also show that the SC of DSGD is upper bounded by a factor of the SC of DRR/DOLS, while Zou et al. (2021a) only derive a lower bound. However, we remark that $n_{\mathrm{SGD}}$ in our theorem is larger than $n_{\mathrm{RR}}$, as the constant $L_{\lambda,\gamma} \geq 1$. A look at Figure 1 reveals that optimally tuned DSGD may perform better than optimally tuned DRR even with the same or smaller sample size for certain problem instances. This suggests that our bound may be refined.

## 5 Numerical Experiments

We illustrate our theoretical findings with experiments on simulated and real data. The reader may find additional experiments in Section C.

Simulated Data. In a first experiment in Figure 1 (left) we analyze the test error of DSGD as a function of the local nodes M. We generate n = 500 i.i.d.
training data with $x_j \sim \mathcal{N}(0, \mathbf{H})$ with mild overparameterization d = 700. The target $w^*$ satisfies three different decay conditions $w^*_j = j^{-\alpha}$, α ∈ {0, 1, 10}. The eigenvalues of H follow a polynomial decay $\lambda_j = j^{-2}$. The local nodes satisfy $M_n = n^\beta$, β ∈ {0, 0.1, ..., 0.9}. According to Corollary 3.10 we see that a fast decay of $w^*_j$ (i.e. a smaller norm $||w^*||$) allows for more parallelization until the test error blows up.

In a second experiment we compare the sample complexity of optimally tuned tail-averaged DSGD and DRR for different sources $w^*$, see Figures 1 (right), 2. Here, the data are generated as above with d = 200, $\lambda_j = j^{-2}$ and $w^*_j = j^{-\alpha}$, α ∈ {0, 1, 10}. The number of local nodes is fixed at $M_n = n^{1/3}$ for each n ∈ {100, ..., 6000}. For this problem instance, DSGD may perform even better than DRR for sparse targets (α = 10), i.e., DSGD achieves the same accuracy as DRR with fewer samples in this regime. For less sparse targets (α = 1), the sample complexities of DSGD and DRR are comparable, while for non-sparse targets (α = 0), DRR outperforms DSGD.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

Figure 1: **Left:** Test error for DSGD with $\lambda_j = j^{-2}$ for different sources $w^*$ as a function of M. **Right:** Comparison of optimally tuned tail-averaged DSGD with DRR with $\lambda_j = j^{-2}$, $w^*_j = j^{-10}$, $M_n = n^{1/3}$.

![10_image_2.png](10_image_2.png)

Figure 2: Comparison of optimally tuned tail-averaged DSGD with DRR for different sources $w^*$, with $\lambda_j = j^{-2}$ and $M_n = n^{1/3}$. **Left:** $w^*_j = j^{-1}$. **Right:** $w^*_j = 1$.

Real Data. To analyze the performance of DSGD on real data, we considered the classification problem of the Gisette data set (http://archive.ics.uci.edu/ml/datasets/Gisette), containing pictures of the digits four and nine. We used the first 3000 samples of the original train data set for training and the second 3000 samples for evaluation. The feature dimension of one picture is d = 5000. Hyperparameters were fine-tuned on the validation data set to achieve the best performance. The first experiment in Figure 3 (left) again analyzes the test error of DSGD as a function of the local nodes M. Because the feature dimension is quite large, the optimal stepsize is small (γ ∼ 10⁻¹⁰). Theorem 3.8 therefore explains why in our example the bias term, and thus the test error, grows rather quickly with the number of local nodes. In Figure 3 (right) we compare DRR with tail- and full-averaged DSGD. We observe that DRR slightly outperforms DSGD. According to Theorem 4.4, we need sparsity of $w^*$ so that DSGD can keep up with DRR. This might not be the case for the Gisette data set.

## 6 Summary

We analyzed the performance of distributed constant-stepsize (tail-) averaged SGD for linear regression in an overparameterized regime. We find that the relative efficiency as a function of the number of workers remains largely unchanged until a certain threshold is reached. This threshold depends on the structural assumptions imposed by the problem at hand (eigenvalue decay of the Hessian H and the norm of the target $w^*$). This is in contrast to distributed OLS without any implicit or explicit regularization with local overparameterization, where the number of workers itself acts as a regularization parameter, see Figure 4 in Appendix C. We also compared the sample complexity of DSGD and DRR and find that the sample complexity of DSGD remains within constant factors of the sample complexity of DRR.
For some problem instances, tail-averaged SGD may outperform DRR, i.e., achieves the same or better accuracy with fewer samples. Our bound is not sharp and may be improved in future research.

![11_image_0.png](11_image_0.png)

Figure 3: **Left:** Test error for DSGD with n = 1000, 2000, 3000 and different M. **Right:** Comparison of DSGD with DRR for $M_n = n^{1/4}$.

## References

Yajie Bao and Weijia Xiong. One-round communication efficient distributed m-estimation. In *International Conference on Artificial Intelligence and Statistics*, pp. 46–54. PMLR, 2021.

Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. *Proceedings of the National Academy of Sciences*, 2020.

Heather Battey, Jianqing Fan, Han Liu, Junwei Lu, and Ziwei Zhu. Distributed testing and estimation under sparse high dimensional models. *Annals of Statistics*, 46(3):1352, 2018.

Xueying Chen and Min-ge Xie. A split-and-conquer approach for analysis of extraordinarily large data. *Statistica Sinica*, pp. 1655–1684, 2014.

Aymeric Dieuleveut and Francis Bach. Nonparametric stochastic approximation with large step-sizes. *Ann. Statist.*, 44(4):1363–1399, 08 2016. doi: 10.1214/15-AOS1391.

Aymeric Dieuleveut and Kumar Kshitij Patel. Communication trade-offs for local-sgd with large step size. *Advances in Neural Information Processing Systems*, 32, 2019.

Jianqing Fan, Dong Wang, Kaizheng Wang, and Ziwei Zhu. Distributed estimation of principal eigenspaces. *Annals of Statistics*, 47(6):3009, 2019.

Jianqing Fan, Yongyi Guo, and Kaizheng Wang. Communication-efficient accurate statistical estimation. *Journal of the American Statistical Association*, pp. 1–11, 2021.

Zheng-Chu Guo, Shao-Bo Lin, and Ding-Xuan Zhou. Learning theory of distributed spectral algorithms. *Inverse Problems*, 33(7):074009, 2017.

P. Jain, S.M. Kakade, R. Kidambi, P. Netrapalli, and A. Sidford. Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging and model misspecification. *arXiv:1610.03774v3*, 2016.

Prateek Jain, Sham M Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, and Aaron Sidford. A markov chain theory approach to characterizing the minimax optimality of stochastic gradient descent (for least squares). In *37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science*, 2018.

Prateek Jain, Dheeraj Nagaraj, and Praneeth Netrapalli. Making the last iterate of sgd information theoretically optimal. In *Conference on Learning Theory*, pp. 1752–1755. PMLR, 2019.

Junhong Lin and Volkan Cevher. Optimal distributed learning with multi-pass stochastic gradient methods. In *International Conference on Machine Learning*, pp. 3092–3101. PMLR, 2018.

Junhong Lin and Lorenzo Rosasco. Optimal rates for multi-pass stochastic gradient methods. *Journal of Machine Learning Research*, 18, 2017.

Junhong Lin, Raffaello Camoriano, and Lorenzo Rosasco. Generalization properties and implicit regularization for multiple passes sgm. *International Conference on Machine Learning*, 2016.

Shao-Bo Lin and Ding-Xuan Zhou. Distributed kernel-based gradient descent algorithms. *Constructive Approximation*, 47(2):249–276, 2018.

Shao-Bo Lin, Xin Guo, and Ding-Xuan Zhou. Distributed learning with regularized least squares. *The Journal of Machine Learning Research*, 18(1):3202–3232, 2017.

Lester Mackey, Ameet Talwalkar, and Michael I Jordan. Divide-and-conquer matrix factorization. *Advances in Neural Information Processing Systems*, 24, 2011.
Ryan Mcdonald, Mehryar Mohri, Nathan Silberman, Dan Walker, and Gideon Mann. Efficient large-scale distributed training of conditional maximum entropy models. *Advances in Neural Information Processing Systems*, 22, 2009.

Nicole Mücke and Gilles Blanchard. Parallelizing spectrally regularized kernel algorithms. *The Journal of Machine Learning Research*, 19(1):1069–1097, 2018.

Nicole Mücke and Enrico Reiss. Stochastic gradient descent in hilbert scales: Smoothness, preconditioning and earlier stopping. *stat*, 1050:18, 2020.

Nicole Mücke, Gergely Neu, and Lorenzo Rosasco. Beating sgd saturation with tail-averaging and minibatching. In *Advances in Neural Information Processing Systems*, pp. 12568–12577, 2019.

Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, and Markus Klein. Data-splitting improves statistical performance in overparameterized regimes. In *International Conference on Artificial Intelligence and Statistics*, pp. 10322–10350. PMLR, 2022.

Yue Sheng and Edgar Dobriban. One-shot distributed ridge regression in high dimensions. In *International Conference on Machine Learning*, pp. 8763–8772. PMLR, 2020.

Chengchun Shi, Wenbin Lu, and Rui Song. A massive data framework for m-estimators with cubic-rate. *Journal of the American Statistical Association*, 113(524):1698–1709, 2018.

Artin Spiridonoff, Alex Olshevsky, and Yannis Paschalidis. Communication-efficient sgd: From local sgd to one-shot averaging. *Advances in Neural Information Processing Systems*, 34:24313–24326, 2021.

Sebastian U Stich. Local sgd converges fast and communicates little. In *International Conference on Learning Representations*, 2018.

Hongzhi Tong. Distributed least squares prediction for functional linear regression. *Inverse Problems*, 38(2):025002, 2021.

Alexander Tsigler and Peter L Bartlett. Benign overfitting in ridge regression. *arXiv preprint arXiv:2009.14286*, 2020.

Aditya Vardhan Varre, Loucas Pillaud-Vivien, and Nicolas Flammarion. Last iterate convergence of sgd for least-squares in the interpolation regime. *Advances in Neural Information Processing Systems*, 34:21581–21591, 2021.

Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, and Sham Kakade. Last iterate risk bounds of sgd with decaying stepsize for overparameterized linear regression. In *International Conference on Machine Learning*, pp. 24280–24314. PMLR, 2022.

Ganggang Xu, Zuofeng Shang, and Guang Cheng. Distributed generalized cross-validation for divide-and-conquer kernel ridge regression and its asymptotic optimality. *Journal of Computational and Graphical Statistics*, 28(4):891–908, 2019.

Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression: A distributed algorithm with minimax optimal rates. *The Journal of Machine Learning Research*, 16(1):3299–3340, 2015.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex Smola. Parallelized stochastic gradient descent. *Advances in Neural Information Processing Systems*, 23, 2010.

Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P Foster, and Sham Kakade. The benefits of implicit regularization from sgd in least squares problems. *Advances in Neural Information Processing Systems*, 34:5456–5468, 2021a.

Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham Kakade. Benign overfitting of constant-stepsize sgd for linear regression. In *Conference on Learning Theory*, pp. 4633–4635. PMLR, 2021b.
Benign overfitting of constant-stepsize sgd for linear regression. In *Conference on Learning Theory*, pp. 4633–4635. PMLR, 2021b.

Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham M Kakade. Risk bounds of multipass sgd for least squares in the interpolation regime. *arXiv preprint arXiv:2203.03159*, 2022.

Notation. By L(H1, H2) we denote the space of bounded linear operators between real Hilbert spaces H1, H2, with operator norm || · ||. We write L(H, H) = L(H). For A ∈ L(H) we denote by A^T the adjoint operator. For two PSD operators A, B on H we write A ⪰ B if ⟨(A − B)v, v⟩ ≥ 0 for all v ∈ H. We further let ⟨A, B⟩_op = Tr[A^T B].

## A Proofs Section 3 (Bounds For DSGD)

## A.1 Proofs Upper Bound

## A.1.1 Bias-Variance Decomposition

We will use an iterative bias-variance decomposition which has been extensively studied before in the non-distributed case (see Jain et al. (2016), Zou et al. (2021b)). First we need a couple of definitions.

-) **Centered local iterates:** Set η_t^{(m)} := w_t^{(m)} − w* and

$$\bar{\eta}_{n}^{(m)}:=\frac{M}{n}\sum_{t=1}^{n/M}\eta_{t}^{(m)}\;,\quad\overline{\overline{\eta}}_{M}:=\frac{1}{M}\sum_{m=1}^{M}\bar{\eta}_{n}^{(m)}.$$

-) **Local bias:** For m = 1, ..., M we set b_1^{(m)} = w_1 − w*,

$$b_{t}^{(m)}:=(\mathbf{I}-\gamma x_{t}^{(m)}\otimes x_{t}^{(m)})b_{t-1}^{(m)}\;,\quad t=2,...,\frac{n}{M}\;,$$

$$\bar{b}_{n}^{(m)}:=\frac{M}{n}\sum_{t=1}^{n/M}b_{t}^{(m)}\;,\quad\overline{\overline{b}}_{M}:=\frac{1}{M}\sum_{m=1}^{M}\bar{b}_{n}^{(m)}.$$

-) **Local variance:** For m = 1, ..., M we set v_1^{(m)} = 0 and

$$v_{t}^{(m)}:=(\mathbf{I}-\gamma x_{t}^{(m)}\otimes x_{t}^{(m)})v_{t-1}^{(m)}+\gamma\epsilon_{t}^{(m)}x_{t}^{(m)}\;,\quad t=2,...,\frac{n}{M}\;,$$

$$\bar{v}_{n}^{(m)}:=\frac{M}{n}\sum_{t=1}^{n/M}v_{t}^{(m)}\;,\quad\overline{\overline{v}}_{M}:=\frac{1}{M}\sum_{m=1}^{M}\bar{v}_{n}^{(m)},$$

where we let $\epsilon_{t}^{(m)}:=y_{t}^{(m)}-\left\langle x_{t}^{(m)},w^{*}\right\rangle$.

Note that for any m = 1, ..., M and t ≥ 1 one has

$$\mathbb{E}[b_{t+1}^{(m)}]=\mathbb{E}\big[\mathbb{E}[b_{t+1}^{(m)}\,|\,b_{t}^{(m)}]\big]=\mathbb{E}\big[\mathbb{E}[(\mathbf{I}-\gamma x_{t+1}^{(m)}\otimes x_{t+1}^{(m)})b_{t}^{(m)}\,|\,b_{t}^{(m)}]\big]=(\mathbf{I}-\gamma\mathbf{H})\,\mathbb{E}[b_{t}^{(m)}]\;.\tag{6}$$

Moreover, from B.4 in Zou et al. (2021b), we find

$$\mathbb{E}[v_{t+1}^{(m)}]=(\mathbf{I}-\gamma\mathbf{H})\,\mathbb{E}[v_{t}^{(m)}]=(\mathbf{I}-\gamma\mathbf{H})^{t}\,\mathbb{E}[v_{1}^{(m)}]=0\;.\tag{7}$$

It is easy to see that η_t^{(m)} = b_t^{(m)} + v_t^{(m)} and therefore

$$\overline{\overline{\eta}}_{M}=\overline{\overline{b}}_{M}+\overline{\overline{v}}_{M}.\tag{8}$$

Lemma A.1. *Define*

$$\mathrm{Bias}(\overline{\overline{w}}_{M}):=\frac{1}{2}\Big\langle\mathbf{H},\mathbb{E}\Big[\overline{\overline{b}}_{M}\otimes\overline{\overline{b}}_{M}\Big]\Big\rangle_{op}\;,\quad\mathrm{Var}(\overline{\overline{w}}_{M}):=\frac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\overline{\overline{v}}_{M}\otimes\overline{\overline{v}}_{M}\big]\big\rangle_{op}\;.$$

a) *We have the following decomposition for the excess risk,*

$$\mathbb{E}\big[L(\overline{\overline{w}}_{M})\big]-L(w^{*})\leq\left(\sqrt{\mathrm{Bias}(\overline{\overline{w}}_{M})}+\sqrt{\mathrm{Var}(\overline{\overline{w}}_{M})}\right)^{2}.$$
b) *Suppose the model noise* $\epsilon_{t}^{(m)}:=y_{t}^{(m)}-\left\langle x_{t}^{(m)},w^{*}\right\rangle$ *is well-specified, i.e.,* $\epsilon_{t}^{(m)}$ *and* $x_{t}^{(m)}$ *are independent and* $\mathbb{E}[\epsilon_{t}^{(m)}]=0$*. Then we have the following equality for the excess risk,*

$$\mathbb{E}\big[L(\overline{\overline{w}}_{M})\big]-L(w^{*})=\mathrm{Bias}(\overline{\overline{w}}_{M})+\mathrm{Var}(\overline{\overline{w}}_{M})\;.$$

Proof of Lemma A.1. The proof strategy is similar to the non-distributed case (see Zou et al. (2021b), Lemma B2 and Lemma C1). For completeness we include it here.

a) By definition of the excess risk we have

$$L(\overline{\overline{w}}_{M})-L(w^{*})=\frac{1}{2}\int_{\mathcal{H}}\langle\overline{\overline{w}}_{M}-w^{*},x\rangle^{2}\,\mathbb{P}_{\mathbf{x}}(d\mathbf{x})=\frac{1}{2}\langle\mathbf{H}(\overline{\overline{w}}_{M}-w^{*}),\overline{\overline{w}}_{M}-w^{*}\rangle=\frac{1}{2}\big\|\mathbf{H}^{\frac{1}{2}}(\overline{\overline{w}}_{M}-w^{*})\big\|^{2}=\frac{1}{2}\big\|\overline{\overline{b}}_{M}+\overline{\overline{v}}_{M}\big\|_{\mathbf{H}}^{2}\;,$$

where we used (8) for the last equality. Using the Cauchy-Schwarz inequality we obtain

$$\mathbb{E}[L(\overline{\overline{w}}_{M})-L(w^{*})]\leq\left(\sqrt{\tfrac{1}{2}\mathbb{E}\big\|\overline{\overline{b}}_{M}\big\|_{\mathbf{H}}^{2}}+\sqrt{\tfrac{1}{2}\mathbb{E}\big\|\overline{\overline{v}}_{M}\big\|_{\mathbf{H}}^{2}}\right)^{2}=\left(\sqrt{\tfrac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\overline{\overline{b}}_{M}\otimes\overline{\overline{b}}_{M}\big]\big\rangle_{op}}+\sqrt{\tfrac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\overline{\overline{v}}_{M}\otimes\overline{\overline{v}}_{M}\big]\big\rangle_{op}}\right)^{2}\;.$$

b) Set P_t^{(m)} = I − γx_t^{(m)} ⊗ x_t^{(m)}. Note that we have

$$b_{t}^{(m)}=\prod_{k=1}^{t}P_{k}^{(m)}b_{0}^{(m)},\qquad v_{t}^{(m)}=\gamma\sum_{i=1}^{t}\epsilon_{i}^{(m)}\prod_{j=i+1}^{t}P_{j}^{(m)}x_{i}^{(m)}.$$

By assumption, we therefore have for all s, t ≤ n/M and m, m′ ≤ M,

$$\mathbb{E}\left[b_{s}^{(m)}\otimes v_{t}^{(m^{\prime})}\right]=\gamma\,\mathbb{E}\left[\prod_{k=1}^{s}P_{k}^{(m)}b_{0}^{(m)}\otimes\sum_{i=1}^{t}\epsilon_{i}^{(m^{\prime})}\prod_{j=i+1}^{t}P_{j}^{(m^{\prime})}x_{i}^{(m^{\prime})}\right]=\gamma\sum_{i=1}^{t}\mathbb{E}\left[\prod_{k=1}^{s}P_{k}^{(m)}b_{0}^{(m)}\otimes\prod_{j=i+1}^{t}P_{j}^{(m^{\prime})}x_{i}^{(m^{\prime})}\right]\mathbb{E}[\epsilon_{i}^{(m^{\prime})}]=0.$$

This implies

$$\mathbb{E}\Big[\overline{\overline{b}}_{M}\otimes\overline{\overline{v}}_{M}\Big]=0.\tag{9}$$

From (8) we therefore have

$$\mathbb{E}\left[\overline{\overline{\eta}}_{M}\otimes\overline{\overline{\eta}}_{M}\right]=\mathbb{E}\left[\overline{\overline{b}}_{M}\otimes\overline{\overline{b}}_{M}\right]+\mathbb{E}\left[\overline{\overline{v}}_{M}\otimes\overline{\overline{v}}_{M}\right].\tag{10}$$

Finally, by definition of the excess risk we have

$$\mathbb{E}[L(\overline{\overline{w}}_{M})-L(w^{*})]=\frac{1}{2}\mathbb{E}\bigg[\int_{\mathcal{H}}\langle\overline{\overline{w}}_{M}-w^{*},x\rangle^{2}\,\mathbb{P}_{\mathbf{x}}(d\mathbf{x})\bigg]=\frac{1}{2}\mathbb{E}\big[\langle\mathbf{H}(\overline{\overline{w}}_{M}-w^{*}),\overline{\overline{w}}_{M}-w^{*}\rangle\big]=\frac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\overline{\overline{\eta}}_{M}\otimes\overline{\overline{\eta}}_{M}\big]\big\rangle_{op}\tag{11}$$

$$=\mathrm{Bias}(\overline{\overline{w}}_{M})+\mathrm{Var}(\overline{\overline{w}}_{M})\;,\tag{12}$$

where we used (10) for the last equality.

## A.1.2 Upper Bound

For the non-distributed case, Zou et al. (2021b) (see Lemma B.11 and Lemma B.6) already established upper bounds.
More precisely, we have for the local bias and variance terms:

Proposition A.2. *Set* k* = max{k : λ_k ≥ M/(nγ)}. *If the step size satisfies* γ < 1/(τ tr(H))*, we have for every* m ∈ [M]:

a) *Under Assumptions 3.2 and 3.3, it holds that*

$$\mathrm{Bias}\Big(\bar{w}_{\frac{n}{M}}^{(m)}\Big):=\frac{1}{2}\Big\langle\mathbf{H},\mathbb{E}\Big[\bar{b}_{n}^{(m)}\otimes\bar{b}_{n}^{(m)}\Big]\Big\rangle_{op}\leq\frac{M^{2}}{\gamma^{2}n^{2}}\cdot\|w_{0}-w^{*}\|_{\mathbf{H}_{0:k^{*}}^{-1}}^{2}+\|w_{0}-w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}+\frac{2\tau M^{2}\big(\|w_{0}-w^{*}\|_{\mathbf{I}_{0:k^{*}}}^{2}+\gamma\frac{n}{M}\|w_{0}-w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}\big)}{\gamma n(1-\gamma\tau\operatorname{tr}(\mathbf{H}))}\cdot\Bigg(\frac{k^{*}}{n}+\frac{n}{M^{2}}\gamma^{2}\sum_{i>k^{*}}\lambda_{i}^{2}\Bigg)\;.$$

b) *Under Assumptions 3.2 - 3.4, it holds that*

$$\mathrm{Var}\Big(\bar{w}_{\frac{n}{M}}^{(m)}\Big):=\frac{1}{2}\Big\langle\mathbf{H},\mathbb{E}\Big[\bar{v}_{n}^{(m)}\otimes\bar{v}_{n}^{(m)}\Big]\Big\rangle_{op}\leq\frac{\sigma^{2}}{1-\gamma\tau\operatorname{tr}(\mathbf{H})}\Bigg(\frac{k^{*}M}{n}+\gamma^{2}\frac{n}{M}\cdot\sum_{i>k^{*}}\lambda_{i}^{2}\Bigg)\;.$$

Lemma A.3. *Set* k* = max{k : λ_k ≥ M/(nγ)}. *If the step size satisfies* γ < 1/(τ tr(H))*, we have for every* m ∈ [M]:

a) *Under Assumptions 3.2 and 3.3, it holds that*

$$\mathrm{Bias}(\overline{\overline{w}}_{M})\leq\frac{M^{2}}{\gamma^{2}n^{2}}\cdot\|w_{0}-w^{*}\|_{\mathbf{H}_{0:k^{*}}^{-1}}^{2}+\|w_{0}-w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}+\frac{2\tau M^{2}\big(\|w_{0}-w^{*}\|_{\mathbf{I}_{0:k^{*}}}^{2}+\gamma\frac{n}{M}\|w_{0}-w^{*}\|_{\mathbf{H}_{k^{*}:\infty}}^{2}\big)}{\gamma n(1-\gamma\tau\operatorname{tr}(\mathbf{H}))}\cdot\Bigg(\frac{k^{*}}{n}+\frac{n}{M^{2}}\gamma^{2}\sum_{i>k^{*}}\lambda_{i}^{2}\Bigg)\;.$$

b) *Under Assumptions 3.2 - 3.4, it holds that*

$$\mathrm{Var}(\overline{\overline{w}}_{M})\leq\frac{\sigma^{2}}{1-\gamma\tau\operatorname{tr}(\mathbf{H})}\left(\frac{k^{*}}{n}+\gamma^{2}\frac{n}{M^{2}}\cdot\sum_{i>k^{*}}\lambda_{i}^{2}\right)\;.$$

Proof of Lemma A.3. a) For the bias term we simply use

$$\mathrm{Bias}(\overline{\overline{w}}_{M})=\frac{1}{2}\mathbb{E}\Big\|\overline{\overline{b}}_{M}\Big\|_{\mathbf{H}}^{2}=\frac{1}{2}\mathbb{E}\Big\|\frac{1}{M}\sum_{m=1}^{M}\bar{b}_{n}^{(m)}\Big\|_{\mathbf{H}}^{2}\leq\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\mathbb{E}\Big\|\bar{b}_{n}^{(m)}\Big\|_{\mathbf{H}}^{2}=\frac{1}{M}\sum_{m=1}^{M}\mathrm{Bias}\Big(\bar{w}_{\frac{n}{M}}^{(m)}\Big).\tag{13}$$

Taking the bound on the local bias term Bias(w̄^{(m)}_{n/M}) from A.2 proves the claim.

b) First we split the expectation operator as follows

$$\mathbb{E}\big[\overline{\overline{v}}_{M}\otimes\overline{\overline{v}}_{M}\big]=\frac{1}{M^{2}}\sum_{m,m^{\prime}=1}^{M}\mathbb{E}\Big[\bar{v}_{n}^{(m)}\otimes\bar{v}_{n}^{(m^{\prime})}\Big]=\frac{1}{M^{2}}\sum_{m=1}^{M}\mathbb{E}\Big[\bar{v}_{n}^{(m)}\otimes\bar{v}_{n}^{(m)}\Big]+\frac{1}{M^{2}}\sum_{m\neq m^{\prime}}\mathbb{E}\Big[\bar{v}_{n}^{(m)}\otimes\bar{v}_{n}^{(m^{\prime})}\Big]=:I_{1}+I_{2}.\tag{14}$$

Now we prove that the second operator I_2 equals zero. First rewrite I_2 as

$$I_{2}=\frac{1}{M^{2}}\sum_{m\neq m^{\prime}}\frac{M^{2}}{n^{2}}\sum_{s,t=1}^{n/M}\mathbb{E}[v_{t}^{(m)}\otimes v_{s}^{(m^{\prime})}].$$

It is therefore enough to prove E[v_t^{(m)} ⊗ v_s^{(m′)}] = 0 for any m ≠ m′. Since we assume our data sets to be independent, we have E[v_t^{(m)} ⊗ v_s^{(m′)}] = E[v_t^{(m)}] ⊗ E[v_s^{(m′)}] = 0, where the last equality follows from (7). This proves I_2 = 0. To sum up, we have from (14) for the variance term

$$\mathrm{Var}(\overline{\overline{w}}_{M})=\frac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\overline{\overline{v}}_{M}\otimes\overline{\overline{v}}_{M}\big]\big\rangle_{op}=\frac{1}{M^{2}}\sum_{m=1}^{M}\frac{1}{2}\big\langle\mathbf{H},\mathbb{E}\big[\bar{v}_{n}^{(m)}\otimes\bar{v}_{n}^{(m)}\big]\big\rangle_{op}=\frac{1}{M^{2}}\sum_{m=1}^{M}\mathrm{Var}\Big(\bar{w}_{\frac{n}{M}}^{(m)}\Big).\tag{15}$$

Using the bound on the local variance term from A.2 completes the proof.

Proof of Theorem 3.5.
Using lemma A.1 a) we have $$\mathbb{E}\big[L(\overline{{{w}}}_{M})\big]-L(w^{*})\leq2\mathrm{Bias}(\overline{{{w}}}_{M})+2\mathrm{Var}(\overline{{{w}}}_{M}).$$ The claim now follows from lemma A.3. $$\square$$ ## A.2 Proofs Lower Bound A.2.1 Lower Bound Bias Proposition A.4 (Lower Bound Bias). Suppose Assumptions 3.2 and 3.6 are satisfied and let γ < 1 ||H|| . Recall the definition of Bias(wM) in Lemma A.1. The bias of the distributed SGD estimator satisfies the lower bound $$\mathrm{Bias}(\overline{{{w}}}_{M})\geq\frac{M(M-1)}{100\gamma^{2}n^{2}}\biggl(||w_{1}-w^{*}||_{\mathbf{H}_{0,k^{*}}^{1}}^{2}+\frac{\gamma^{2}n^{2}}{M^{2}}||w_{1}-w^{*}||_{\mathbf{H}_{k^{*},\infty}}^{2}\biggr)\ .$$ Proof of Proposition A.4. From the definition of the bias in Lemma A.1, we have Bias(wM) = 12 DH, E hbM ⊗ bM iE op =1 2M2 X M m1=1 X M m2=1 DH, E h¯b (m1) n ⊗ ¯b (m2) niEop =1 2M2 X M m=1 DH, E h¯b (m) n ⊗ ¯b (m) niEop +1 2M2 X M m1̸=m2 DH, E h¯b (m1) n ⊗ ¯b (m2) niEop . (16) We show that the first term in the above decomposition can be lower bounded by zero. Indeed, from (C.2) and (C.4) in Zou et al. (2021b) we have for all m = 1*, ..., M* the local lower bound $$\left\langle\mathbf{H},\mathbb{E}\Big[\tilde{b}_{n}^{(m)}\otimes\tilde{b}_{n}^{(m)}\Big]\right\rangle_{op}\geq\frac{M^{2}}{n^{2}}\sum_{t=1}^{\frac{\partial\mathbf{H}}{\partial t}}\sum_{k=t}^{\frac{\partial\mathbf{H}}{\partial t}}\Big{\langle}(\mathbf{I}-\gamma\mathbf{H})^{k-t}\mathbf{H},\mathbb{E}\Big[b_{t}^{(m)}\otimes b_{t}^{(m)}\Big]\Big{\rangle}_{op}$$ $$\geq\frac{M^{2}}{\gamma n^{2}}\Big{\langle}\mathbf{I}-(\mathbf{I}-\gamma\mathbf{H})^{\frac{\partial\mathbf{H}}{\partial t}},\mathbf{S}_{\frac{\partial\mathbf{H}}{\partial t}}^{(m)}\Big{\rangle}_{op}\;,$$ where we set $$\mathbf{S}_{\frac{n}{2M}}^{(m)}:=\sum_{t=1}^{\frac{n}{2M}}\mathbb{E}\Big[b_{t}^{(m)}\otimes b_{t}^{(m)}\Big]\ .$$ Setting B1 = b (m) 1 ⊗ b (m) 1 = (w1 − w ∗) ⊗ (w1 − w ∗) and applying Lemma C.4 from Zou et al. (2021b) gives then for all m = 1*, ..., M* $$\mathbf{S}_{\frac{\pi\mathbf{H}}{2\mathbf{H}}}^{(m)}\ \geq\ \underbrace{\frac{\theta}{4}\operatorname{Tr}\bigl{[}\bigl{(}\mathbf{I}-(\mathbf{I}-\gamma\mathbf{H})^{\frac{\pi\mathbf{H}}{2\mathbf{H}}}\bigr{)}\mathbf{B}_{1}\bigr{]}\cdot\bigl{(}\bigl{(}\mathbf{I}-(\mathbf{I}-\gamma\mathbf{H})^{\frac{\pi\mathbf{H}}{2\mathbf{H}}}\bigr{)}\bigr{)}}_{PSD}+\underbrace{\sum_{t=1}^{\oplus}(\mathbf{I}-\gamma\mathbf{H})^{t}\cdot\mathbf{B}_{1}\cdot(\mathbf{I}-\gamma\mathbf{H})^{t}}_{PSD}$$ $$\geq0\,.$$ $$\left(17\right)$$ Hence, $$\begin{array}{c}{{\frac{1}{2M^{2}}\sum_{m=1}^{M}\left\langle\mathbf{H},\mathbf{E}\left[\tilde{b}_{n}^{(m)}\otimes\tilde{b}_{n}^{(m)}\right]\right\rangle_{o p}\geq\frac{1}{2M^{2}}\cdot\frac{M^{2}}{\gamma n^{2}}\sum_{m=1}^{M}\left\langle\mathbf{I}-(\mathbf{I}-\gamma\mathbf{H})^{\frac{m}{2M}},\mathbf{S}_{\frac{2M}{2M}}^{(m)}\right\rangle_{o p}}}\\ {{\geq0\;.}}\end{array}$$ We now bound the second term in equation 16. 
Note that by independence of the local nodes and with equation 6 we may write for any fixed m1 ̸= m2 $$\mathbb{E}\Big{[}\bar{b}_{n}^{(m_{1})}\otimes\bar{b}_{n}^{(m_{2})}\Big{]}=\frac{M^{2}}{n^{2}}\sum_{t=1}^{\frac{n}{M}}\sum_{k=1}^{\frac{n}{M}}\mathbb{E}\Big{[}b_{t}^{(m_{1})}\Big{]}\otimes\mathbb{E}\Big{[}b_{k}^{(m_{2})}\Big{]}$$ $$=\frac{M^{2}}{n^{2}}\sum_{t=1}^{\frac{n}{M}}\sum_{k=1}^{\frac{n}{M}}(\mathbf{I}-\gamma\mathbf{H})^{t}\cdot\mathbf{B}_{1}\cdot(\mathbf{I}-\gamma\mathbf{H})^{k}\.$$ Hence, n n 1 2M2 X M m1̸=m2 DH, E h¯b (m1) n ⊗ ¯b (m2) niEop =1 2M2 M2 n2 X M XM XM k=1 H,(I − γH) t· B1 · (I − γH) kop m1̸=m2 t=1 = M(M − 1) 2γn2 * nXM k=1 (I − γH) kI − (I − γH) n M +1, B1 + op = M(M − 1) 2γ 2n2 DI − (I − γH) n M +12H−1, B1 E op . $$\square$$ Following now the lines of the proof of Lemma C.5 in Zou et al. (2021b) (adapted to our local setting) gives $$\frac{1}{2M^{2}}\sum_{m_{1}\neq m_{2}}^{M}\left\langle\mathbf{H},\mathbb{E}\left[\widetilde{b}_{n}^{(m_{1})}\otimes\widetilde{b}_{n}^{(m_{2})}\right]\right\rangle_{q p}\geq\frac{M(M-1)}{100\gamma^{2}n^{2}}\bigg(\|w_{1}-w^{*}\|_{\mathbf{H}_{0,\infty}^{+}}^{2}+\frac{\gamma^{2}n^{2}}{M^{2}}\|w_{1}-w^{*}\|_{\widetilde{\mathbf{H}}_{\Phi,\infty}^{+}}^{2}\bigg)\;.$$ Combining now the last bound with equation 17 and equation 16 finally gives $$\mathrm{Bias}(\overline{{{\w}}}_{M})\geq\frac{M(M-1)}{100\gamma^{2}n^{2}}\biggl(||w_{1}-w^{*}||_{\mathbf{H}_{0,\mathbf{k}^{*}}^{\dagger}}^{2}+\frac{\gamma^{2}n^{2}}{M^{2}}||w_{1}-w^{*}||_{\mathbf{H}_{\mathbf{k}^{*},\infty}}^{2}\biggr)\ .$$ ## A.2.2 Lower Bound Variance Proposition A.5 (Lower Bound Variance). *Suppose Assumptions 3.2 and 3.6 are satisfied and let* nM ≥ 500, γ < 1 ||H|| *. Recall the definition of* Var(wM) *in Lemma A.1. The variance of the distributed SGD estimator* satisfies the lower bound $$\mathrm{Var}(\overline{{{w}}}_{M})\geq\frac{\sigma_{n o i s e}^{2}}{100}\cdot\left(\frac{k^{*}}{n}+\frac{\gamma^{2}n}{M^{2}}\sum_{j>k^{*}}\lambda_{j}^{2}\right)\,.$$ Proof of Proposition A.5. From the definition of the variance in Lemma A.1, we have Var(wM) = 12 H, E-vM ⊗ vM op =1 2M2 X M m1=1 X M m2=1 DH, E hv¯ (m1) n ⊗ v¯ (m2) niEop =1 2M2 X M m=1 DH, E hv¯ (m) n ⊗ v¯ (m) niEop +1 2M2 X M m1̸=m2 DH, E hv¯ (m1) n ⊗ v¯ (m2) niEop . (18) We first lower bound the first term. By Eq. (C.3) and Lemma C.3 in Zou et al. (2021b) (adapted to our local setting) we obtain Solving, we obtain $$\begin{aligned} \frac{1}{2M^2}\sum_{m=1}^M\left\langle\mathbf{H},\mathbb{E}\left[\bar{v}_n^{(m)}\otimes\bar{v}_n^{(m)}\right]\right\rangle_{op}&\geq\frac{1}{2M^2}\frac{M^2}{n^2}\sum_{m=1}^M\sum_{t=0}^{\frac{m}{2}-1}\left\langle\mathbf{I}-\gamma\mathbf{H}\right\rangle^{k-t}\mathbf{H},\mathbb{E}\left[v_t^{(m)}\otimes v_t^{(m)}\right]\right\rangle_{op}\\ &\geq\frac{\sigma_{one}^2}{100M^2}\sum_{m=1}^M\left(\frac{M}{n}k^*+\frac{\gamma^2n}{M}\sum_{j>k^*}\lambda_j^2\right)\\ &=\frac{\sigma_{one}^2\text{isets}}{100}V_{k^*}(n,M)\;, \end{aligned}$$ where $$V_{k^{*}}(n,M):=\left(\frac{k^{*}}{n}+\frac{\gamma^{2}n}{M^{2}}\sum_{j>k^{*}}\lambda_{j}^{2}\right)\,.$$ To derive the final bound we argue that the second term in equation 18 is zero. 
Indeed, by independence of the local nodes we may write for any m1 ̸= m2 with equation 7 $$\mathbb{E}\left[v_{t}^{(m_{1})}\otimes v_{k}^{(m_{2})}\right]=\mathbb{E}\Big{[}v_{t}^{(m_{1})}\Big{]}\otimes\mathbb{E}\Big{[}v_{k}^{(m_{2})}\Big{]}$$ $$=(\mathbf{I}-\gamma\mathbf{H})^{t}(v_{0}^{(m_{1})}\otimes v_{0}^{(m_{2})})(\mathbf{I}-\gamma\mathbf{H})^{k}\,$$ $$=0\,$$ since v (m) 0 = 0 for all m = 1*, ..., M*. Hence, $$\frac{1}{2M^{2}}\sum_{m_{1}\neq m_{2}}^{M}\left\langle\mathbf{H},\mathbb{E}\Big[\bar{v}_{n}^{(m_{1})}\otimes\bar{v}_{n}^{(m_{2})}\Big]\right\rangle_{o p}=0\;.$$ this finishes the proof. ## A.3 Proofs Rates Of Convergence Proof of Corollary 3.9. Let the sequence Mn ≤ qγ(1−2γτ)n R2 . By definition of k ∗ we know that k ∗ = ˜d = n Mn rand hence λk∗ =Mn n r. We first bound the bias from Theorem 3.5. Since ||w ∗||2 ≤ R by assumption, we find $$||w^{*}||_{{\bf H}^{1}_{0,{\rm k}*}}^{2}\leq\frac{||w^{*}||_{2}^{2}}{\lambda_{k^{*}}}\leq R^{2}\bigg{(}\frac{n}{M_{n}}\bigg{)}^{r}.\tag{19}$$ Similarly, since n Mn → ∞ as n → ∞, there exists n0 ∈ N such that $$||w^{*}||_{\widehat{\mathbf{H}}_{\mathbf{k}^{*},\infty}}^{2}\leq\frac{R^{2}}{\left(\frac{n}{M_{n}}\right)^{q}-\left(\frac{n}{M_{n}}\right)^{r}}\leq c_{n_{0}}R^{2}\bigg{(}\frac{M_{n}}{n}\bigg{)}^{q}\,,\tag{20}$$ $\mathbf{H}_{\mathbf{k}^{*},\infty}$\(\mathbf{H}_{\mathbf{k}^{*},\infty} for any n ≥ n0 and some cn0 < ∞. Using that Tr[H] = 2 and ||w ∗||2 I0:k∗ ≤ R2, we find for all n ≥ n0, for some n0 ∈ N, that $$\frac{2\tau M^{2}\big(||w^{*}|_{\mathrm{I}_{0-k}}^{2}+\gamma\frac{n}{M}||w^{*}||_{\mathrm{H}_{\mathrm{r}\infty}}^{2}\big)}{\gamma n\big(1-\gamma\tau\,\mathrm{Tr}[\mathbf{H}]\big)}\leq4\max\{1,c_{n_{0}}\}\ \frac{\tau R^{2}}{1-2\gamma\tau}\ \frac{M_{n}^{2}}{\gamma n}\ .$$ Note that we also use that Mn ≤ n and hence Mn n q−1≤ 1, since q > 1. Since $$M_{n}\leq\sqrt{\frac{\gamma(1-2\gamma\tau)n}{R^{2}}}$$ and hence we have $$\frac{\tau R^{2}}{1-2\gamma\tau}\;\frac{M_{n}^{2}}{\gamma n}\leq1$$ $$\frac{2\tau M_{n}^{2}\big(||w^{*}|_{\mathbf{f}_{6+\phi}}^{2}+\gamma\frac{n}{M}||w^{*}||_{\mathbf{H}_{k},\infty}^{2}\big)}{\gamma n(1-\gamma\tau\,\mathrm{Tr}[\mathbf{H}])}\leq4\max\{1,c_{n_{0}}\}\;.$$ We further observe that by the definition of the spectrum of H $$\sum_{j>k^{*}}\lambda_{l}^{2}=\sum_{j=d}^{d}\frac{1}{d-\bar{d}}=\frac{1}{\left(\frac{n}{M_{n}}\right)^{q}-\left(\frac{n}{M_{n}}\right)^{r}}\leq c_{n_{0}}\left(\frac{M_{n}}{n}\right)^{q},$$ for any n sufficiently large, by using the argumentation as above. Hence, $$V_{k^{*}}(n,M_{n}):=\frac{k^{*}}{n}+\gamma^{2}\frac{n}{M_{n}^{2}}\sum_{j=k^{*}+1}^{\infty}\lambda_{j}^{2}$$ $$\leq\max\{1,c_{n_{0}}\}\ \frac{1}{M_{n}}\cdot\left(\ \left(\frac{M_{n}}{n}\right)^{1-r}+\gamma^{2}\bigg{(}\frac{M_{n}}{n}\bigg{)}^{q-1}\ \right).\tag{22}$$ $$(21)$$ (23) $$\begin{array}{l}\left(24\right)\end{array}$$ . Combining (19), (20), (21) and (22), we find for the bias term Bias(Mn) ≤R2 γ 2(n/Mn) 2 n Mn r+ cn0R 2 Mn n q+ 4 max{1, cn0 }Vk∗ (n, Mn) ≤ max{1, cn0 } R 2 1 γ 2 Mn n 2−r+ Mn n q!+ (23) 4 max{1, cn0 } 21 Mn · Mn n 1−r+ γ 2 Mn n q−1!. (24) We now turn to the bound of the variance term. 
From equation 22 we have $$\mathrm{Var}(M_{n})\leq\operatorname*{max}\{1,c_{n_{0}}\}\left({\frac{\sigma^{2}}{1-\gamma\tau\,\mathrm{Tr}[{\bf H}]}}\right)\cdot{\frac{1}{M_{n}}}\cdot\left(\,\left({\frac{M_{n}}{n}}\right)^{1-r}+\gamma^{2}\!\left({\frac{M_{n}}{n}}\right)^{q-1}\,\right)\,.$$ Combining the bounds for bias and variance leads to the total error bound $$\mathbb{E}\left[L\big{(}\overline{{{u}}}M_{n}\big{)}\right]-L(u^{*})\leq$$ $$2\tilde{c}_{n_{0}}\cdot c_{\gamma,\tau,\sigma}\cdot\left(R^{2}\Bigg{(}\frac{1}{\gamma^{2}}\Bigg{(}\frac{M_{n}}{n}\Bigg{)}^{2-\tau}+\Bigg{(}\frac{M_{n}}{n}\Bigg{)}^{\eta}\Bigg{)}+\cdot\frac{1}{M_{n}}\cdot\Bigg{(}\Bigg{(}\frac{M_{n}}{n}\Bigg{)}^{1-\tau}+\gamma^{2}\Bigg{(}\frac{M_{n}}{n}\Bigg{)}^{q-1}\Bigg{)}\right),$$ with $$c_{\gamma,\tau,\sigma}:=1+\frac{\sigma^{2}}{1-\gamma\tau\,\mathrm{Tr}[{\bf H}]}\;,\quad\tilde{c}_{n_{0}}=4\operatorname*{max}\{1,c_{n_{0}}\}^{2}\;,$$ holding for any n sufficiently large. We proceed by further simplifying the right hand side of the above inequality. Since τ ≥ 1 and 1 − γτ Tr[H] < 1, the assumption on Mn implies that $$M_{n}^{2}\leq\frac{n\gamma}{R^{2}}\ ,$$ further implying that $$\frac{R^{2}}{\gamma}\biggl(\frac{M_{n}}{n}\biggr)^{2-r}\leq\frac{1}{M_{n}}\biggl(\frac{M_{n}}{n}\biggr)^{1-r}$$ and $$R^{2}\biggl(\frac{M_{n}}{n}\biggr)^{q}\leq\frac{\gamma}{M_{n}}\biggl(\frac{M_{n}}{n}\biggr)^{q-1}\;.$$ $$\square$$ As a result, applying Theorem 3.5, the excess risk can be bounded by $$\mathbb{E}\left[L(\overline{\overline{w}}_{M_{n}})\right]-L(w^{*})\leq4\bar{c}_{m_{0}}\cdot c_{\gamma,r,\sigma}\cdot\frac{1}{M_{n}}\left(\left(\frac{1}{\gamma}+1\right)\left(\frac{M_{n}}{n}\right)^{1-r}+(\gamma+\gamma^{2})\left(\frac{M_{n}}{n}\right)^{q-1}\right)\cdot$$ $$\leq4\bar{c}_{m_{0}}\cdot c_{\gamma,r,\sigma}\cdot\frac{1}{\gamma M_{n}}\left(\left(\frac{M_{n}}{n}\right)^{1-r}+\left(\frac{M_{n}}{n}\right)^{q-1}\right).$$ In the last step we use that γ < 1 2τ < 1 2 . Proof of Corollary 3.10. Assume the sequence (Mn)n satisfies Mn/n → 0 as n → ∞. We use Theorem 3.5 to bound the excess risk and find estimates for bias and variance. By the definition of k ∗ we have $$k^{*}=\operatorname*{max}\left\{k\in\mathbb{N}:k\leq\left({\frac{\gamma n}{M_{n}}}\right)^{\frac{1}{1+r}}\right\}=\left\lfloor\left({\frac{\gamma n}{M_{n}}}\right)^{\frac{1}{1+r}}\right\rfloor.$$ Hence, there exists n0 ∈ N such that for all n ≥ n0 $$c_{n_{0}}\biggl(\frac{\gamma n}{M_{n}}\biggr)^{\frac{1}{1+r}}\leq k^{*}\leq C_{n_{0}}\biggl(\frac{\gamma n}{M_{n}}\biggr)^{\frac{1}{1+r}}\;,$$ for some constants 0 < cn0 ≤ Cn0 . Therefore, $$\lambda_{k^{*}}=(k^{*})^{-(1+r)}\leq\left({\frac{1}{c_{n_{0}}}}\right)^{1+r}\cdot{\frac{M_{n}}{\gamma n}}$$ and1 $$\frac{1}{\lambda_{k^{*}}}=(k^{*})^{1+r}\leq C_{n_{0}}^{1+r}\cdot\frac{n}{\gamma M_{n}}\ .$$ We therefore get for the first two terms of the bias $$\frac{M_{n}^{2}}{\gamma^{2}n^{2}}\cdot||w^{*}||_{\mathbf{H}_{0+k^{*}}^{\dagger}}^{2}\leq\frac{R^{2}M_{n}^{2}}{\gamma^{2}n^{2}\lambda_{k^{*}}}$$ $$\leq C_{n_{0}}^{1+r}\frac{R^{2}}{\gamma}\cdot\frac{M_{n}}{n}\.$$ (25) $\binom{26}{25}$ . $$(27)$$ and $$||w^{*}||_{{\bf H}_{k^{*},\infty}}^{2}\leq R^{2}\lambda_{k^{*}}\leq R^{2}\bigg(\frac{1}{c_{n_{0}}}\bigg)^{1+r}\cdot\frac{M_{n}}{\gamma n}\;.$$ We now bound the last term of the bias. 
To this end, we apply a well known bound for sums over decreasing functions, i.e., $$\sum_{j\geq k}f(j)\leq\int_{k}^{\infty}f(x)d x\ .$$ This gives $$\sum_{j>k^{*}}\lambda_{j}^{2}\leq\int_{k^{*}}^{\infty}x^{-2(r+1)}d x\leq{\frac{1}{2r+1}}(k^{*})^{-(2r+1)}\leq{\frac{1}{2r+1}}c_{n_{0}}^{-(2r+1)}\biggl({\frac{M_{n}}{\gamma n}}\biggr)^{1+{\frac{r}{1+r}}}\,.$$ Thus, Vk∗ (n, Mn) = k ∗ n + γ 2 n M2 X∞ j=k∗+1 λ 2 j ≤ 1 n Cn0 γn Mn 1 1+r + γ 2 n M2 n c −(2r+1) n0 2r + 1 Mn γn 1+ r 1+r ≤ c ′ r,n0 1 n γn Mn 1 1+r +γ Mn Mn γn r 1+r! ≤ 2c ′ r,n0 ·γ Mn Mn γn r 1+r , (28) $$(28)$$ with $$c_{r,n_{0}}^{\prime}=\operatorname*{max}\Biggl\{C_{n_{0}},\frac{c_{n_{0}}^{-(2r+1)}}{2r+1}\Biggr\}\;.$$ Moreover, 2τM2 n ||w ∗||2 I0:k∗ + γ n Mn ||w ∗||2Hk∗:∞ γn(1 − γτ Tr[H]) ≤ 2τM2 n γn(1 − γτ Tr[H])R 2 + R 2γ n Mn λk∗ ≤ c ′′ n0 2τM2 n γn(1 − γτ Tr[H])R 2 1 + γ n Mn Mn γn ≤ 2c ′′ n0 2τ (1 − γτ Tr[H]) · R2M2 n γn, $$(29)$$ with $$c_{n_{0}}^{\prime\prime}=\operatorname*{max}\Biggl\{1,\left(\frac{1}{c_{n_{0}}}\right)^{1+r}\Biggr\}\;.$$ Hence, combining this with equation 28 and choosing $$M_{n}\leq{\frac{\sqrt{\gamma n}}{R}}$$ R(29) leads to 2τM2 n ||w ∗||2 I0:k∗ + γ n Mn ||w ∗||2Hk∗:∞ γn(1 − γτ Tr[H]) · Vk∗ (n, Mn) ≤ 2c ′′ n0 2τ (1 − γτ Tr[H]) · R2M2 n γn· 2c ′ r,n0 ·γ Mn Mn γn r 1+r ≤ cr,n0τ (1 − γτ Tr[H]) · R2M2 n γn· γ Mn Mn γn r 1+r ≤ cr,n0τ (1 − γτ Tr[H]) · γ Mn Mn γn r 1+r , (30) . Combining (26), (27) and (30), we find for all n ≥ n0 where cr,n0 = 8c ′′ n0 · c ′ r,n0 $$\mathrm{Bias}(M_{n})\leq\bar{c}_{r,n_{0}}\cdot\frac{R^{2}}{\gamma}\cdot\frac{M_{n}}{n}+c_{r,n_{0}}\frac{\tau}{(1-\gamma\tau\,\mathrm{Tr}[{\bf H}])}\cdot\frac{\gamma}{M_{n}}\left(\frac{M_{n}}{\gamma n}\right)^{\frac{\tau}{1+\tau}},$$ , (31) where we set $$\tilde{c}_{r,n_{0}}=2\operatorname*{max}\left\{C_{n_{0}}^{1+r},\left(\frac{1}{c_{n_{0}}}\right)^{1+r}\right\}\,.$$ $$(30)$$ $$(31)$$ We now turn to bounding the variance. Using equation 28 once more, the variance can be bounded by $$\mathrm{Var}(M_{n})\leq\frac{\sigma^{2}}{1-\gamma\tau\,\mathrm{Tr}[{\bf H}]}\cdot2c_{r,n_{0}}^{\prime}\cdot\frac{\gamma}{M_{n}}\biggl(\frac{M_{n}}{\gamma n}\biggr)^{\frac{r}{1+r}}\ .$$ Combining the bias bound equation 31 with the variance bound, we obtain for the excess risk $$\mathbb{E}\left[L(\overline{{{\overline{{w}}}}}_{M_{n}})\right]-L(w^{*})\leq c_{r,n_{0},\tau,\sigma}\Bigg(\frac{R^{2}}{\gamma}\cdot\frac{M_{n}}{n}+\frac{\gamma}{M_{n}}\bigg(\frac{M_{n}}{\gamma n}\bigg)^{\frac{r}{1+r}}\Bigg)\;.$$ $$c_{r,n_{0},\tau,\sigma}:=\max\Bigg\{\tilde{c}_{r,n_{0}},2c_{r,n_{0}}\cdot c_{r,n_{0}}^{\prime}\cdot\frac{\max\{\tau,\sigma^{2}\}}{1-\gamma\tau\;\mathrm{Tr}[\mathbf{H}]}\Bigg\}\;.$$ Note that the choice $$M_{n}\leq\left(\frac{\gamma}{R^{2}}\right)^{\frac{1+r}{2+r}}\cdot(\gamma n)^{\frac{1}{2+r}}\tag{32}$$ leads to a dominating variance part, i.e. $${\frac{R^{2}}{\gamma}}\cdot{\frac{M_{n}}{n}}\leq{\frac{\gamma}{M_{n}}}{\bigg(}{\frac{M_{n}}{\gamma n}}{\bigg)}^{\frac{r}{1+r}}$$ and $$\mathbb{E}\big[L(\overline{{{w}}}_{M_{n}})\big]-L(w^{*})\leq2c_{\tau,n_{0},\tau,\sigma}\frac{\gamma}{M_{n}}\bigg(\frac{M_{n}}{\gamma n}\bigg)^{\frac{r}{1+r}}\;.$$ Note that the choice equation 32 is compatible with the choice equation 29, i.e., $$M_{n}\leq\left(\frac{\gamma}{R^{2}}\right)^{\frac{1+r}{2+r}}\cdot(\gamma n)^{\frac{1}{2+r}}\leq\frac{\sqrt{\gamma n}}{R}\,,$$ following from the fact that r > 0, provided that n is sufficiently large. 
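To make the quantities appearing in these proofs concrete, the following is a small numerical sketch (our own illustration, with arbitrary choices of γ and M_n, which are not prescribed by the corollary) that evaluates the effective dimension k* = ⌊(γn/M_n)^{1/(1+r)}⌋ and the two competing terms of V_{k*}(n, M_n) for the polynomial decay λ_j = j^{−(1+r)}:

```python
import numpy as np

r, gamma, d = 0.5, 0.1, 100_000  # polynomial spectral decay lambda_j = j^{-(1+r)}
lam = np.arange(1, d + 1, dtype=float) ** -(1 + r)

for n in [1_000, 10_000, 100_000]:
    M = round(n ** (1 / 3))                          # illustrative choice of local nodes
    k_star = int((gamma * n / M) ** (1 / (1 + r)))   # effective dimension
    head = k_star / n                                # "low-frequency" part of V_{k*}(n, M)
    tail = gamma**2 * (n / M**2) * np.sum(lam[k_star:] ** 2)  # tail part of V_{k*}(n, M)
    print(f"n={n:7d}  M={M:3d}  k*={k_star:5d}  k*/n={head:.2e}  tail={tail:.2e}")
```

Both terms shrink as n grows, with the first term k*/n dominating for this configuration, in line with the variance-dominated regime discussed above.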
## B Proofs Section 4 ( Comparison Of Sample Complexity Of Dsgd And Drr) B.1 Lower Bound For Distributed Ridge Regression In this section we derive a lower bound for the distributed RR estimator. We adopt the following notation and assumptions from Tsigler & Bartlett (2020). - H−1/2x, where x ∈ R dis sub-Gaussian with independent components $\mathbf{X}=(\sqrt{\lambda_1}z_1,...,\sqrt{\lambda_d}z_d)$ with $z_j$ being sub-$z_j$. - X = (√λ1z1*, ...,* √λdzd) with zj being sub-Gaussian with independent components $$\mathbf{\partial}\cdot\mathbf{A}:=\mathbf{X}\mathbf{X}^{T}+\lambda I_{n},\quad\mathbf{A}_{m}:=\mathbf{X}_{m}\mathbf{X}_{m}^{T}+\lambda I_{n}$$ $$\mathbf{\partial}\mathbf{\partial}\mathbf{A}_{-j}=\sum_{i j^{\prime}}\lambda_{i}z_{i}z_{i}^{T}+\lambda I_{n}$$ Crucial for our analysis is the following quantity, called the *local effective dimension* for the RR problem: $$k_{\rm RR}^{*}:=\min\Biggl{\{}k:\lambda_{k+1}\leq\frac{M\Bigl{(}\lambda+\sum_{j>k}\lambda_{j}\Bigr{)}}{bn}\Biggr{\}}.\tag{33}$$ ## B.1.1 Bias-Variance Decomposition Drr Definition B.1 (Bias and Variance of Distributed RR). Let $$\Pi_{m}(\lambda):=\left(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda\right)^{-1}\mathbf{X}_{m}^{T}\mathbf{X}_{m}-I d\ ,$$ $$\widehat{\mathrm{Bias}}(\overline{w}_{n}^{\mathrm{RR}}(\lambda)):=\left|\left|\mathbf{H}^{1/2}\!\left(\frac{1}{M}\sum_{m=1}^{M}\Pi_{m}(\lambda)w^{*}\right)\right|\right|^{2},$$ $$\widehat{\mathrm{Var}}(\overline{w}_{n}^{\mathrm{RR}}(\lambda)):=\left|\left|\mathbf{H}^{1/2}\!\left(\frac{1}{M}\sum_{m=1}^{M}(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda)^{-1}\mathbf{X}_{m}^{T}\epsilon_{m}\right)\right|\right|^{2}.$$ We call $$\operatorname{Bias}({\overline{{w}}}_{n}^{\mathrm{RR}}(\lambda))=\mathbb{E}\left[{\widehat{\operatorname{Bias}}}({\overline{{w}}}_{n}^{\mathrm{RR}}(\lambda))\right]$$ the (expected) bias of the distributed RR estimator and $$\operatorname{Var}({\overline{{w}}}_{n}^{\mathrm{RR}}(\lambda))=\mathbb{E}\Bigl[{\widehat{\operatorname{Var}}}({\overline{{w}}}_{n}^{\mathrm{RR}}(\lambda))\Bigr]$$ the (expected) variance. We immediately obtain: Lemma B.2. *The excess risk satisfies* $$\mathbb{E}\Big[||\mathbf{H}^{1/2}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda)-w^{*})||^{2}\Big]=\mathrm{Bias}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))+\mathrm{Var}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\ .$$ Proof of Lemma B.2. We split the excess risk as m=1 wˆ RR m (λ) − w ∗ ! 2 ||H1/2(w RR n(λ) − w ∗)||2 = H1/2 1 M X M m=1 (XTmXm + λ) −1XTmYm − w ∗ ! 2 = H1/2 1 M X M m=1 (XTmXm + λ) −1XTm(Xmw ∗ + ϵm) − w ∗ ! 2 = H1/2 1 M X M = Bias( d w RR n(λ)) + Var( d w RR n(λ)) +2 M2 X M m=1 X M m′=1 H Πm(λ)w ∗,(XTmXm + λ) −1XTmϵm . We argue that the expectation with respect to the noise (i.e. conditioned on X) of the last term is equal to zero. 
Indeed, by linearity and since ϵm is centered (conditioned on Xm) for all m ∈ [M], we find $$\begin{array}{l l}{{\mathbb{E}_{\epsilon_{m}}\left[\langle\mathbf{H}\;\Pi_{m}(\lambda)w^{*},(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda)^{-1}\mathbf{X}_{m}^{T}\epsilon_{m}\rangle\right]=\langle\mathbf{H}\;\Pi_{m}(\lambda)w^{*},(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda)^{-1}\mathbf{X}_{m}^{T}\mathbb{E}_{\epsilon_{m}}[\epsilon_{m}]\rangle}}\\ {{}}&{{=0\;.}}\end{array}$$ Hence, $$\mathbb{E}\left[||\mathbf{H}^{1/2}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda)-w^{*})||^{2}\right]=\mathbb{E}\left[\widehat{\mathrm{Bias}}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\right]+\mathbb{E}\left[\widehat{\mathrm{Var}}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\right]\,.$$ ## B.1.2 Lower Bound Of Bias For Drr Proposition B.3 (Lower Bound of Bias for local RR). Assume H *is strictly positive definite with* Tr[H] < ∞. There exist absolute constants b > 1, c > 1 *such that* $$\mathrm{Bias}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\geq\frac{M-1}{c M}\cdot\left(\frac{M^{2}\Big{(}\lambda+\sum_{j>k_{\mathrm{RR}}^{+}}\lambda_{j}\Big{)}^{2}}{n^{2}}\cdot||w^{*}||_{\mathbf{H}_{0}^{-1}k_{\mathrm{RR}}^{+}}^{2}+||w^{*}||_{\mathbf{H}_{k_{\mathrm{RR}}^{+}}^{-}}^{2}\right)\right).$$ , where k ∗ RR *is defined in equation 33.* For proving this Proposition we need the following Lemma. Lemma B.4. Let X˜ ∈ R n×d*be an independent copy of* X ∈ R n×d and set A˜ = X˜ X˜ T + λ*. Define further* the operator B := (Id − XT A−1X)H(Id − X˜ T A˜ −1X˜ ) . 1. For any i ̸= j*, we have* $$\mathbb{E}_{\mathbf{X,\tilde{X}}}[\mathbf{B}_{i j}]=0\ .$$ 2. *The diagonal elements satisfy for any* k $$\mathbb{E}_{\mathbf{X,\tilde{x}}}[\mathbf{B}_{i i}]\geq{\frac{1}{c}}\cdot{\frac{\lambda_{i}}{\left(1+{\frac{\lambda_{i}}{\lambda_{k+1}}}\cdot{\frac{n}{\rho_{k}}}\right)^{2}}}\ ,$$ for some absolute constant c > 1 *and where we define* $$\rho_{k}=\frac{\lambda+\sum_{j>k}\lambda_{j}}{\lambda_{k+1}}\;.$$ Proof of Lemma B.4. Recall that H = diag{λ1*, ..., λ*d}j and $${\bf X}=(\sqrt{\lambda_{1}}z_{1},...,\sqrt{\lambda_{d}}z_{d})\;,\quad\check{\bf X}=(\sqrt{\lambda_{1}}\tilde{z}_{1},...,\sqrt{\lambda_{d}}\tilde{z}_{d})\;.$$ 1. Let i ̸= j. We expand −pλi ej , HXT A−1zi −pλj ei, HX˜ T A˜−1z˜j + √zizj zi, A−1XHX˜ T A˜ −1z˜j Bij = ⟨ei, Hej ⟩ | {z } =0 = −λj pλiλj zj , A−1zi − λi pλiλj z˜i, A˜ −1z˜j +pλiλj zi, A−1XHX˜ T A˜ −1z˜j . We define the map F(zj ) := zj , A−1zi . Following the lines of the proof of Lemma C.7 in Zou et al. (2021a) shows that Ezj [F(zj )] = 0. Using similar arguments, the same is true for the second and last term, showing the result. 2. We expand Bii = DH(ei −pλiXT A−1zi), ei −pλiX˜ T A˜ −1z˜i E +λi HXT A−1zi, X˜ T A˜ −1z˜i −pλi Hei, X˜ T A˜ −1z˜i −pλi Hei, XT A−1zi = ⟨Hei, ei⟩ | {z } =λi = λi -1 − λi zi, A−1zi +z˜i, A˜ −1z˜i + λi HXT A−1zi, X˜ T A˜ −1z˜i . 
Setting $$a_{i}:=\left\langle z_{i},{\bf A}^{-1}z_{i}\right\rangle\,,\quad\tilde{a}_{i}:=\left\langle\tilde{z}_{i},\tilde{\bf A}^{-1}\tilde{z}_{i}\right\rangle\,,$$ we further find that I find that $$\lambda_i\langle\mathbf{H}\mathbf{X}^T\mathbf{A}^{-1}z_i,\hat{\mathbf{X}}^T\hat{\mathbf{A}}^{-1}\bar{z}_i\rangle=\lambda_i\sum_{j=1}^d\lambda_j(\mathbf{X}^T\mathbf{A}^{-1}z_i)_j\cdot(\hat{\mathbf{X}}^T\hat{\mathbf{A}}^{-1}\bar{z}_i)_j$$ $$=\lambda_i\sum_{j=1}^d\lambda_j^2\langle z_j,\mathbf{A}^{-1}z_i\rangle\cdot\langle\bar{z}_j,\hat{\mathbf{A}}^{-1}z_i\rangle$$ $$=\lambda_i^3\cdot a_i\cdot\bar{a}_i+\lambda_i\sum_{j\neq i}\lambda_j^2\langle z_j,\mathbf{A}^{-1}z_i\rangle\cdot\langle\bar{z}_j,\hat{\mathbf{A}}^{-1}z_i\rangle\;.$$ and once, the last term is non-negative in expectation, i.e. By independence, the last term is non-negative in expectation, i.e. $$\begin{array}{l}\mathbb{E}_{\mathbf{X},\tilde{\mathbf{X}}}\left[\lambda_{i}\sum_{j\neq i}\lambda_{j}^{2}\langle z_{j},\mathbf{A}^{-1}z_{i}\rangle\cdot\langle\tilde{z}_{j},\tilde{\mathbf{A}}^{-1}z_{i}\rangle\right]\\ =\lambda_{i}\sum_{j\neq i}\lambda_{j}^{2}\mathbb{E}\mathbf{x}\left[\langle z_{j},\mathbf{A}^{-1}z_{i}\rangle\right]\cdot\mathbb{E}_{\tilde{\mathbf{X}}}\left[\langle\tilde{z}_{j},\tilde{\mathbf{A}}^{-1}z_{i}\rangle\right]\\ =\lambda_{i}\sum_{j\neq i}\lambda_{j}^{2}\cdot\mathbb{E}\mathbf{x}\left[\langle z_{j},\mathbf{A}^{-1}z_{i}\rangle\right]^{2}\\ \geq0\.\end{array}$$ Hence, for deriving a lower bound in expectation it is sufficient to lower bound the expression λi· [1 − λi(ai + ˜ai)] + λ 3 i · ai· a˜i = λi· (1 − λiai) · (1 − λia˜i) . Using independence once more we find $$\mathbb{E}_{\mathbf{X,\tilde{X}}}[\mathbf{B}_{i i}]\geq\lambda_{i}\cdot\mathbb{E}_{\mathbf{X,\tilde{X}}}[(1-\lambda_{i}a_{i})\cdot(1-\lambda_{i}\tilde{a}_{i})]\ .$$ We proceed as in the proof of Lemma C.7 in Zou et al. (2021a). Recall that **Lemma 1**.: _Let $\mathcal{A}$ be a finite set of $\mathcal{A}$. Then $\mathcal{A}$ is a finite set of $\mathcal{A}$._ Proof.: Let $\mathcal{A}$ be a finite set of $\mathcal{A}$. Let $\mathcal{A}$ be a finite set of $\mathcal{A}$. and for all $ k$ ? $$\left\langle z_{i},\mathbf{A}_{-i}^{-1}z_{i}\right\rangle\leq c\cdot{\frac{n}{\lambda_{k+1}\rho_{k}}}\ ,$$ for some c > 0, with high probability. Concluding as in Zou et al. (2021a) and using independence finishes the proof. Proof of Proposition B.3. Setting wm(λ) = H1/2Πm(λ)w ∗(see Definition B.1), we decompose the bias as Bias(w RR $$(\lambda))=\mathbb{E}\left[\widehat{\text{Bias}}(\overline{w}_{n}^{\text{RR}}(\lambda))\right]$$ $$=\mathbb{E}\left[\left|\left|\frac{1}{M}\sum_{m=1}^{M}w_{m}(\lambda)\right|\right|^{2}\right]$$ $$=\frac{1}{M^{2}}\mathbb{E}\left[\text{Tr}\left[\left(\sum_{m=1}^{M}w_{m}(\lambda)\right)\otimes\left(\sum_{m^{\prime}=1}^{M}w_{m^{\prime}}(\lambda)\right)\right]\right]$$ $$=\frac{1}{M^{2}}\sum_{m=1}^{M}\mathbb{E}\left[\text{Tr}[w_{m}(\lambda)\otimes w_{m}(\lambda)]\right]+\frac{1}{M^{2}}\sum_{m\neq m^{\prime}}\mathbb{E}[\text{Tr}[w_{m}(\lambda)\otimes w_{m^{\prime}}(\lambda)]]\;.\tag{34}$$ We aim to find a lower for the above expression. Since $$\sum_{m=1}^{M}\mathbb{E}[\mathrm{Tr}[w_{m}(\lambda)\otimes w_{m}(\lambda)]]\geq0$$ we proceed to lower bound the second term in equation 34 . 
Setting $$\mathbf{B}_{m,m^{\prime}}:=\Pi_{m}(\lambda)\circ\mathbf{H}\circ\Pi_{m^{\prime}}(\lambda)$$ for *m, m*′ ∈ [M] we may write Bias(w RR n(λ)) ≥ 1 M2 X m̸=m′ E[Tr[wm(λ) ⊗ wm′ (λ)]] =1 M2 X m̸=m′ E[⟨H ◦ Πm(λ)w ∗, Πm′ (λ)⟩] =1 M2 X m̸=m′ E[⟨Bm,m′w ∗, w∗⟩] =1 M2 X m̸=m′ X i E[(Bm,m′ )ii](w ∗ i ) 2 + 2X i>j E[(Bm,m′ )ij ]w ∗ i · w ∗ j . (35) We now apply Lemma B.4 and follow the lines of the proof of Theorem C.8 in Zou et al. (2021a) to obtain for every k Bias(w RR n(λ)) ≥ 1 M2 X m̸=m′ X i E[(Bm,m′ )ii](w ∗ i ) 2 λi· (w ∗ i ) 2 ≥1 cM2 X m̸=m′ X i 1 + λi λk+1 ·n Mρk 2 λi· (w ∗ i ) 2 = M − 1 cM X i 1 + λi λk+1 ·n Mρk 2 ≥ M − 1 cM· M2λ +Pj>k∗RR λj 2 n2· ||w ∗||2H−1 0:k∗RR + ||w ∗||2Hk∗RR:∞ , (36) for some c > 1. ## B.1.3 Lower Bound Of Variance For Drr Proposition B.5 (Lower Bound of Bias for local RR). *Suppose* k ∗ RR <n c ′M *, for some universal constant* c ′ > 1. There exist constants b, c > 1 *such that* $$\mathrm{Var}(\overline{{{w}}}_{n}^{\mathrm{RR}}(\lambda))\geq\frac{\sigma^{2}}{c}\left(\frac{k_{\mathrm{RR}}^{*}}{n}+\frac{n}{M^{2}\cdot(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j})}\cdot\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}\right)\,,$$ $$\square$$ where k ∗ RR *is defined in equation 33.* Proof of Proposition B.5. By definition of the variance, we may write Var(w RR n(λ)) = E hVar( d w RR n(λ))i = E m=1 (XTmXm + λ) −1XTmϵm ! 2 H1/2 1 M X M =1 M2 X M m,m′=1 EX hTrhH1/2(XTmXm + λ) −1XTmEϵm[ϵm ⊗ ϵm′ ]Xm(XTm′Xm′ + λ) −1H1/2ii = σ 2 M2 X M m=1 EX hTrhH1/2(XTmXm + λ) −1XTmXm(XTmXm + λ) −1H1/2ii =1 M2 X M m=1 Var( ˆw RR m (λ)) , where the local variance is given by $$\mathrm{Var}(\hat{w}_{m}^{\mathrm{RR}}(\lambda))=\sigma^{2}\mathbb{E}\mathbf{x}\left[\mathrm{Tr}\Big[\mathbf{H}^{1/2}(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda)^{-1}\mathbf{X}_{m}^{T}\mathbf{X}_{m}(\mathbf{X}_{m}^{T}\mathbf{X}_{m}+\lambda)^{-1}\mathbf{H}^{1/2}\Big]\right]\,.$$ To lower bound the variance we utilize Theorem C.5 from Zou et al. (2021a) (see also Bartlett et al. (2020)) and obtain $$\mathrm{Var}(\overline{w}_{n}^{\mathrm{RR}}(\lambda))\geq\frac{\sigma^{2}}{cM^{2}}\sum_{m=1}^{M}\left(\frac{M\cdot k_{\mathrm{RR}}^{*}}{n}+\frac{n}{M\cdot(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j})}\cdot\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}\right)$$ $$=\frac{\sigma^{2}}{c}\left(\frac{k_{\mathrm{RR}}^{*}}{n}+\frac{n}{M^{2}\cdot(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j})}\cdot\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}\right)\,,$$ $$\square$$ provided k ∗ RR <n c ′M , for some universal constants *c, c*′ > 1. ## B.1.4 Proof Of Theorem 4.2 The proof follows by combining Proposition B.3 and Proposition B.5 with Lemma B.2. ## B.2 Upper Bound Excess Risk Tail-Averaged Dsgd Theorem B.6 (Upper Bound Tail-averaged DSGD). Suppose Assumption 3.7 is satisfied. Let wMn denote the tail-averaged distributed estimator with n training samples and assume γ < 1/ Tr[H]*. For arbitrary* k1, k2 ∈ [d] $$\mathbb{E}\big[L(\overline{{{w}}}_{M})\big]-L(w^{*})\ =\mathrm{Bias}(\overline{{{w}}}_{M})+\mathrm{Var}(\overline{{{w}}}_{M})$$ with Bias(wM) ≤ cbM2 γ 2n2 · exp− n M γH w ∗ 2 H−1 0:k1 + ||w ∗||2Hk1:∞ , Var(wM) ≤ cv(1 + R 2) · σ 2 k2 n + nγ2 M2 · X j>k2 λ 2 j , for some universal constants cb, cv > 0. Proof of Theorem B.6. Utilizing equation 13 and Lemma 6.1 in Zou et al. (2021a), we have Bias(wM) = 1M X M m=1 Biasw¯ (m) n M ≤1M X M m=1 cbM2 γ 2n2 · exp− n M γH 2 H−1 0:k1 + ||w ∗||2Hk1:∞ = cbM2 γ 2n2 · exp− n M γH w ∗ 2 H−1 0:k1 + ||w ∗||2Hk1:∞ , for some universal constant cb > 0. 
For the variance, we utilize equation 15 and Lemma 6.1 in Zou et al. (2021a) once more to obtain $$\mathrm{Var}(\overline{\overline{w}}_{M})\leq\frac{1}{M^{2}}\sum_{m=1}^{M}\mathrm{Var}\!\left(\bar{w}_{\frac{m}{M}}^{(m)}\right)$$ $$\leq c_{v}\frac{(1+R^{2})\cdot\sigma^{2}}{M^{2}}\sum_{m=1}^{M}\left(\frac{k_{2}M}{n}+\frac{n\gamma^{2}}{M}\cdot\sum_{j>k_{2}}\lambda_{j}^{2}\right)$$ $$=c_{v}(1+R^{2})\cdot\sigma^{2}\!\left(\frac{k_{2}}{n}+\frac{n\gamma^{2}}{M^{2}}\cdot\sum_{j>k_{2}}\lambda_{j}^{2}\right)\,,$$ for some universal constant cv > 0. ## B.3 Comparing Dsgd With Drr B.3.1 Proof Of Theorem 4.4 To prove Theorem 4.4 we derive conditions on nRR and nSGD such that the upper bound for the excess risk of wM for DSGD from Theorem 4.3 can be upper bounded by the lower bound of w RR n(λ) for DRR from Theorem 4.2, i.e. such that $$\frac{c_{\rm b}M^{2}}{\gamma^{2}n_{\rm SGD}^{2}}\cdot\left|\left|\exp\Bigl{(}-\frac{n_{\rm SGD}}{M}\gamma{\bf H}\Bigr{)}w^{*}\right|\right|_{{\bf H}_{0}^{-1}{}_{\rm b}{}_{\rm SR}^{-1}}^{2}+\left|\left|w^{*}\right|\right|_{{\bf H}_{\rm b}{}_{\rm SR}^{-1}\infty}^{2}$$ $$\leq\frac{M^{2}\Bigl{(}\lambda+\sum_{j>k_{\rm SR}^{*}}\lambda_{j}\Bigr{)}^{2}}{cn_{\rm RR}^{2}}\cdot\left|\left|w^{*}\right|\right|_{{\bf H}_{0}^{-1}{}_{\rm b}{}_{\rm RR}^{-1}}^{2}+\left|\left|w^{*}\right|\right|_{{\bf H}_{\rm b}{}_{\rm SR}^{-1}\infty}^{2}\tag{37}$$ and $$c_{\sigma}\bigg{(}1+\frac{||w^{*}||^{2}}{\sigma^{2}}\bigg{)}\cdot\sigma^{2}\bigg{(}\frac{k_{\rm RR}^{\prime}}{n_{\rm SGD}}+\frac{n_{\rm SGD}\gamma^{2}}{M^{2}}\cdot\sum_{j>k_{\rm RR}}\lambda_{j}^{2}\bigg{)}\leq\frac{\sigma^{2}}{c}\bigg{(}\frac{k_{\rm RR}^{\prime}}{n_{\rm RR}}+\frac{n_{\rm RR}}{M^{2}}\cdot\frac{\sum_{j>k_{\rm RR}}\lambda_{j}^{2}}{(\lambda+\sum_{j>k_{\rm RR}}\lambda_{j})^{2}}\bigg{)}\;.\tag{38}$$ We start with equation 38. 
For $$c_{v}\bigg(1+\frac{||w^{*}||^{2}}{\sigma^{2}}\bigg)\cdot\sigma^{2}\frac{k_{\mathrm{RR}}^{*}}{n_{\mathrm{SGD}}}\leq\frac{\sigma^{2}}{c}\frac{k_{\mathrm{RR}}^{*}}{n_{\mathrm{RR}}}$$ to hold we need that $$C^{*}n_{\mathrm{RR}}\leq n_{\mathrm{SGD}}\ ,\quad C^{*}:=c_{v}\cdot c\cdot\left(1+\frac{||w^{*}||^{2}}{\sigma^{2}}\right)\,.$$ $$(39)$$ To $$c_{v}\bigg(1+\frac{||w^{*}||^{2}}{\sigma^{2}}\bigg)\cdot\sigma^{2}\frac{n_{\mathrm{SGD}}\gamma^{2}}{M^{2}}\cdot\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}\leq\frac{\sigma^{2}}{c}\frac{n_{\mathrm{RR}}}{M^{2}}\cdot\frac{\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}^{2}}{(\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j})^{2}}$$ to hold we need $$n_{\mathrm{SGD}}\leq\frac{n_{\mathrm{RR}}}{C^{*}\cdot(C_{\lambda}^{*})^{2}\gamma^{2}}\;,\quad C_{\lambda}^{*}:=\lambda+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\;.$$ Finally, from equation 37 we need $$\frac{c_{\rm b}M^{2}}{\gamma^{2}n_{\rm SD}^{2}}\cdot\left|\left|\exp\Bigl{(}-\frac{n_{\rm SGD}}{M}\gamma{\bf H}\Bigr{)}w^{*}\right|\right|_{{\bf H}_{\rm b}^{-1}{\bf H}_{\rm RR}^{-1}}^{2}\leq\frac{M^{2}(C_{\rm A}^{*})^{2}}{cn_{\rm RR}^{2}}\cdot\left|\left|w^{*}\right|\right|_{{\bf H}_{\rm b}^{-1}{\bf H}_{\rm RR}^{-1}}^{2}\cdot\tag{10}$$ $$(40)$$ $$(41)$$ To ensure this, note that $$\left|\left|\exp\left(-\frac{n_{\mathrm{SGD}}}{M}\gamma\mathbf{H}\right)w^{*}\right|\right|_{\mathbf{H}_{0:\tilde{k}_{\mathrm{RB}}}^{-1}}^{2}\leq e^{-\frac{n_{\mathrm{SGD}}}{M}\gamma\lambda_{k_{\mathrm{RB}}}^{*}}\cdot\left|\left|w^{*}\right|\right|_{\mathbf{H}_{0:\tilde{k}_{\mathrm{RB}}}^{-1}}^{2}\leq\left(1-\gamma\lambda_{k_{\mathrm{RB}}}\right)\cdot\left|\left|w^{*}\right|\right|_{\mathbf{H}_{0:\tilde{k}_{\mathrm{RB}}}^{-1}}^{2}.$$ Hence, equation 41 is implied if $$\frac{c_{b}}{\gamma^{2}n_{\mathrm{SGD}}^{2}}(1-\gamma\lambda_{k_{\mathrm{RR}}^{*}})\leq\frac{\left(C_{\lambda}^{*}\right)^{2}}{c n_{\mathrm{RR}}^{2}}\;,$$ $$(42)$$ being equivalent to $$\frac{\sqrt{cc_{b}(1-\gamma\lambda_{k_{\rm RR}^{*}})}}{\gamma C_{\lambda}^{*}}n_{\rm RR}\leq n_{\rm SGD}\ .$$ Combining conditions equation 39, equation 40 and equation 42 we need $$\operatorname*{max}\left\{C^{*},{\frac{\sqrt{c c_{b}(1-\gamma\lambda_{\mathrm{R}^{*}\mathrm{R}})}}{\gamma C_{\lambda}^{*}}}\right\}\cdot n_{\mathrm{RR}}\leq n_{\mathrm{SGD}}\leq{\frac{1}{C^{*}\cdot(C_{\lambda}^{*})^{2}\gamma^{2}}}\cdot n_{\mathrm{RR}}\ .$$ A short calculation shows that the condition $$\gamma<\operatorname*{min}\biggl\{{\frac{1}{\mathrm{Tr}[H]}},{\frac{1}{\sqrt{c}C^{*}C_{\lambda}^{*}}}\biggr\}$$ $$\operatorname*{max}\left\{C^{*},{\frac{\sqrt{c c_{b}(1-\gamma\lambda_{k_{\mathrm{RR}}^{*}})}}{\gamma C_{\lambda}^{*}}}\right\}\leq{\frac{1}{C^{*}\cdot(C_{\lambda}^{*})^{2}\gamma^{2}}}\ .$$ implies that This finishes the proof. ## B.3.2 Discussion We give a more detail discussion about the sample complexities (SCs) of DSGD and DRR. In particular, we derive conditions under which the SCs are of the same order to ensure that $$\mathbb{E}\Big[L(\overline{{{w}}}_{M_{n_{\mathrm{SGD}}}})\Big]-L(w^{*})\;\leq\;\mathbb{E}\big[L(\overline{{{w}}}_{n_{\mathrm{RR}}}^{\mathrm{RR}}(\lambda_{n_{\mathrm{RR}}}))\big]-L(w^{*})\;.$$ Recall that in order to achieve this bound we would need $$L_{\lambda_{n_{\mathrm{RR}}},\gamma}\cdot n_{\mathrm{RR}}\leq n_{\mathrm{SGD}}\leq L_{\lambda_{n_{\mathrm{RR}}},\gamma}^{\prime}\cdot n_{\mathrm{RR}}\;,$$ for a suitable choice of the regularization parameter λnRR and number of machines MnSGD . 
To ensure that n_RR ≲ n_SGD we need to require that

$$1\lesssim L_{\lambda_{n_{\mathrm{RR}}},\gamma}=\operatorname*{max}\left\{C^{*},\frac{\sqrt{c(1-\gamma\lambda_{k_{\mathrm{RR}}^{*}})}}{\gamma C_{\lambda_{n_{\mathrm{RR}}}}^{*}}\right\},$$

with C*_λ := λ + Σ_{j>k*_RR} λ_j. Recall that

$$\gamma<\operatorname*{min}\left\{\frac{1}{\mathrm{Tr}[H]},\frac{1}{\sqrt{c}\,C^{*}C_{\lambda_{n_{\mathrm{RR}}}}^{*}}\right\}\,,$$

and that 1 − γλ_{k*_RR} < 1. A short calculation shows that

$$1\leq\frac{\sqrt{c(1-\gamma\lambda_{k_{\mathrm{RR}}^{*}})}}{\gamma C_{\lambda_{n_{\mathrm{RR}}}}^{*}}$$

if

$$\gamma\left(\lambda_{n_{\mathrm{RR}}}+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\right)\lesssim1\;.$$

Furthermore, to ensure that n_SGD ≲ n_RR we have to require that

$$L_{\lambda_{n_{\mathrm{RR}}},\gamma}^{\prime}=\frac{1}{C^{*}\gamma^{2}(C_{\lambda_{n_{\mathrm{RR}}}}^{*})^{2}}\lesssim1\;.$$

This is satisfied if

$$1\lesssim\gamma\cdot C_{\lambda_{n_{\mathrm{RR}}}}^{*}=\gamma\left(\lambda_{n_{\mathrm{RR}}}+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\right)\;.$$

We summarize our findings in the following

Corollary B.7. *Suppose all assumptions of Theorem 4.4 are satisfied. If*

$$\gamma\left(\lambda_{n_{\mathrm{RR}}}+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\right)\simeq1\tag{43}$$

holds, then the sample complexities of DSGD and DRR are of the same order, i.e. n_SGD ≃ n_RR and

$$\mathbb{E}\Big[L(\overline{\overline{w}}_{M_{n_{\mathrm{SGD}}}})\Big]-L(w^{*})\;\leq\;\mathbb{E}\big[L(\overline{w}_{n_{\mathrm{RR}}}^{\mathrm{RR}}(\lambda_{n_{\mathrm{RR}}}))\big]-L(w^{*})\;.$$

Example B.8 (Spiked Covariance Model). We show that condition equation 43 is satisfied in the spiked covariance model from Corollary 3.9 under a suitable choice for λ_{n_RR} and M_{n_SGD}. Here, we assume that

$$M_{n_{\mathrm{SGD}}}=M_{n_{\mathrm{RR}}}\simeq n_{\mathrm{RR}}^{\frac{3-2r}{5-2r}}$$

for 1/2 ≤ r ≤ 1, see our discussion in Section 3.4 (comparison with DOLS). A short calculation shows that

$$\frac{n_{\mathrm{RR}}}{M_{n_{\mathrm{RR}}}}\simeq n_{\mathrm{RR}}^{\frac{2}{5-2r}}\;,\qquad k_{\mathrm{RR}}^{*}\simeq\tilde{d}\simeq\left(\frac{n_{\mathrm{RR}}}{M_{n_{\mathrm{RR}}}}\right)^{r}\simeq n_{\mathrm{RR}}^{\frac{2r}{5-2r}}\;.$$

Moreover, for λ_{n_RR} ≃ n_RR^{−ζ}, ζ ≥ 0 and γ = const., we have

$$\gamma\left(\lambda_{n_{\mathrm{RR}}}+\sum_{j>k_{\mathrm{RR}}^{*}}\lambda_{j}\right)\simeq\gamma\left(n_{\mathrm{RR}}^{-\zeta}+1\right)\simeq1\;.$$

Hence, for a wide range of regularization, the condition equation 43 is met and the SCs of DSGD and DRR in the spiked covariance model are of the same order.

## C Further Numerical Experiments

In this section we collect further experimental results conducted on simulated data from Section 5.

Figure 4: Test error for distributed ridgeless regression with λj = j^{−2} for different sources w* as a function of M = n^α, α ∈ {0, 0.1, ..., 0.9}. The number of local nodes acts as a regularization parameter.

We generate n = 500 i.i.d. training data with xj ∼ N(0, H) with mild overparameterization d = 700. We compare the sample complexity of optimally tuned full-averaged DSGD, tail-averaged DSGD and last-iterate DSGD with optimally tuned DRR for different sources w*, see Figures 5, 6 and 7. Here, the data are generated as in Section 5 with d = 200, λj = j^{−2} and w*_j = j^{−α}, α ∈ {0, 1, 10}. The number of local nodes is fixed at Mn = n^{1/3} for each n ∈ {100, ..., 6000}.
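For reference, the following is a minimal sketch of the simulation pipeline just described, written by us purely for illustration (it is not the authors' code; the noise level, step size, and the tail-averaging window are our own assumptions):

```python
import numpy as np

def dsgd(X, y, M, gamma, tail=False):
    """One-shot averaged distributed SGD for least squares: each of the M nodes
    runs a single pass of constant-stepsize SGD over its local split, averages
    its iterates, and the M local averages are averaged once at the end."""
    local_avgs = []
    for Xm, ym in zip(np.array_split(X, M), np.array_split(y, M)):
        w = np.zeros(X.shape[1])
        iterates = []
        for x_t, y_t in zip(Xm, ym):
            w = w + gamma * (y_t - x_t @ w) * x_t  # SGD step for the squared loss
            iterates.append(w)
        start = len(iterates) // 2 if tail else 0  # tail-average the last half (one common convention)
        local_avgs.append(np.mean(iterates[start:], axis=0))
    return np.mean(local_avgs, axis=0)             # single round of communication

# Simulated data as above: x ~ N(0, H) with H = diag(j^{-2}), w*_j = j^{-alpha}.
rng = np.random.default_rng(0)
n, d, alpha = 500, 700, 1.0
lam = np.arange(1, d + 1.0) ** -2.0
w_star = np.arange(1, d + 1.0) ** -alpha
X = rng.standard_normal((n, d)) * np.sqrt(lam)   # column j has variance lambda_j
y = X @ w_star + 0.1 * rng.standard_normal(n)    # noise level 0.1 is an arbitrary choice
w_hat = dsgd(X, y, M=round(n ** (1 / 3)), gamma=0.1)
print("excess risk:", 0.5 * np.sum(lam * (w_hat - w_star) ** 2))
```

The reported quantity is the excess risk (1/2)‖H^{1/2}(ŵ − w*)‖², which for diagonal H reduces to the weighted sum in the last line; sweeping n, M and the averaging scheme reproduces the kind of comparison shown in the figures below.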
Figure 5: Left: λj = j^{−10}, w*_j = 1. Middle: λj = j^{−10}, w*_j = j^{−1}. Right: λj = j^{−10}, w*_j = j^{−10}.

Figure 6: Left: λj = j^{−2}, w*_j = 1. Middle: λj = j^{−2}, w*_j = j^{−1}. Right: λj = j^{−2}, w*_j = j^{−10}.

Figure 7: Left: λj = j^{−1}, w*_j = 1. Middle: λj = j^{−1}, w*_j = j^{−1}. Right: λj = j^{−1}, w*_j = j^{−10}.
Review 1: Summary: The paper analyzes one-shot averaging in the overparameterized least squares setting. It derives a bias-variance decomposition and compares the algorithm against other distributed variants of ordinary least squares and ridge regression estimators. The paper also studies specific distributions and spectra to highlight the benefits of using one-shot averaging over these algorithms. Some experiments are provided to compare the baselines with each other. Strengths and Weaknesses: The math and the results are stated pretty clearly and rigorously. However, I have two major concerns about the paper: 1) It is unclear if the analysis here is a straightforward extension of Zou et al.'s analysis. Intuitively it is not hard to get the on-shot averaging analysis from the single machine analysis: one reduces the variance on the machines. This seems especially simple in the homogeneous setting. If there is not a significant technical challenge compared to Zou et al., then the paper doesn't have much of a contribution. 2) My biggest qualm with the paper is that it seems unfamiliar with very related results in the homogenous convex setting, which are dimension dependent (Woodworth et al. [a](https://arxiv.org/abs/2002.07839), [b](https://arxiv.org/abs/2102.01583)). In particular, the loss functions in this paper satisfy smoothness and bounded optimal property, which means we can apply Theorem 1 from [Woodowrth et al. a]((https://arxiv.org/abs/2002.07839)). This shows that one-shot averaging is optimal without looking at any specific spectral structure or distribution (up to acceleration). What is the setting where this result is insufficient and more fine-grained spectral analysis is needed? If there is no such setting, why would we po Requested Changes: 1) I encourage the authors to discuss in detail how the analysis differs from Zou et al. and if there are additional challenges in analyzing one-shot averaging (which have not been dealt with in the general convex literature.) 2) Discuss the relevance of the results in this paper when comparing them against known dimension-independent results. One can't appreciate this paper's results without highlighting which functions are missed by the dimension-independent analyses. 3) Experimentally, it would b good to add local SGD with multiple rounds of communication besides one-shot averaging. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper considers the setting of distributed learning with SGD over multiple machines, where each machine runs one epoch on its local dataset and then sends the model to a central server, which then aggregates the models into the global model. For the setting of overparameterized linear regression, where each machine has i.i.d. data, they prove upper and lower bounds for the excess risk, under some assumptions that have been also used in prior work. They also show that under certain settings (if the number of machines grows sufficiently slowly as compared to the total dataset size), the upper and the lower bounds match. Strengths and Weaknesses: Strengths: 1. Tightness of analysis: The upper bound and the lower bound provided in the paper match under certain conditions. 2. Insights into system design: The discussion in the paper on the upper and lower bounds provides some insight into how systems can be designed to minimize excess risk and achieve optimal sample complexity. 3. 
Comparison with distributed ridge regression: The paper shows that under some assumptions, the excess risk of distributed SGD is less than the excess risk of distributed ridge regression. Weaknesses: 1. The setting of local SGD with single round of communication is motivated by use cases such as distributed training over user devices (like smartphones), because in that setup frequent communication could be expensive. In that case however, the dataset in different devices is not i.i.d. . On the other hand, the dataset is usually i.i.d. in use cases such as training in a data center, but then there the communication is cheap and we do not need a single round of communication. 2. The problem setup only considers a single epoch of local SGD (Section 2.2). This might be restrictive. Can the results be extended to the case where the local devices run multiple epochs, that is, the local SGD algorithm does multiple passes over the local datasets? 3. I am not able to grasp Assumption 3.6 (the fourth moment lower bound). In particular, for which kind of distributions would it hold? 4. Assumption 4.1. might be a strong assumption. If $z$ is some Gaussian random variable with identity covariance, and $x:=Az$, where $A$ is some matrix. Then $H = AA^T$, and $H^{-1}x = (AA^T)^{-1}Az$. In this case, I think $H^{-1}x$ does not have independent coordinates unless $(AA^T)^{-1}A$ is orthogonal. Requested Changes: I request the authors to address the concerns pointed out in the weakness section (in particular concerns 2, 3, and 4). Broader Impact Concerns: N/A ================================================== Review 3: Summary: In the overparametrised linear resgression setting, the authors analyse averaged constant stepsize local SGD (one iteration over the whole dataset) and provide an upperbound for the excess risk which depends on structural assumptions on the data. They then compute these upperbounds for two particular data distributions and compare these to those of distributed ridge regression. The sample complexity of distributed SGD is no worse than a constant factor of that of distributed ridge regression. Strengths and Weaknesses: **Strengths:** - from a point of view of the "form", the paper is well written: the setting, notations and results are clear and commented. The results seem sound and previously known results can be recovered. **Weaknesses:** I am unfortunately highly skeptical concerning many of the messages conveyed by this work and believe there is a huge gap between the original motivations and the considered setting. In my opinion, many of the statements in the paper are confusing, misleading or even wrong. Also, the major and relevant line of work concerning implicit regularisation is totally omitted: see *On the Implicit Bias in Deep-Learning Algorithms* from Gal Vardi and the references therein for a recent survey. The authors start by introducing the concept of implicit regularisation. They then provide some references (Jain et al, Dieuleveut & Bach, Lin et al Varre et al etc) which have no link with implicit regularisation: one-pass SGD is known to directly optimise the true risk. The notion of implicit regularisation makes sense when reaching zero loss on the **empirical risk** (doing multi-pass SGD on it for example). In this case, since there are infinitely many global minimisers, it makes sense to look at which one the algorithm has converged to. The authors then introduce distributed learning and local SGD, for which the motivation is clear. 
But the analysed algorithm in the paper is quite far from the actual local SGD algorithm since only one pass over the dataset is considered. This is never done in practice as the dataset is passed through many times. In fact, the actual local SGD algorithm would be much easier to analyse: if we initialise everything at $0$, all the updates remain in the span of the dataset and therefore we converge towards the OLS solution (which is the solution with the minimum $\ell_2$ norm), for which the benign overfitting phenomenon is well known from the Bartlett et al. paper (Benign overfitting in linear regression). Therefore I don't understand the goal of the paper: if it is to understand the implicit regularisation of the actual multipass local SGD algorithm, then this is already provided by the results on the OLS estimator. How does one-pass provide informations on the practical performances observed in practice? Some sentences I disagree with: - In the introduction: *In Zou et al. (2021b;a) it is shown that benign overfitting also occurs for SGD*, they consider one-pass SGD, for which we cannot overfit the dataset. - Page 3: *It could be shown that benign overfitting occurs in this setting, i.e. the SGD estimator fits training data very well and still generalizes.* Zou et al. (2021b), Wu et al. (2022) consider one-pass SGD, therefore very far from overfitting the train set. How do authors support the claim that *SGD estimator fits training data very well*? I have never seen practical settings where doing one-pass SGD leads to a solution close to zero-training error. - Top of page 4: *Distributed learning in overparameterized linear regression is studied in Mücke et al. (2022) for the ordinary least squares estimator (OLS), i.e. without any implicit or explicit regularization and with local interpolation.* Choosing the OLS estimator is already an explicit choice of solution! there are many other interpolating solutions, by choosing the OLS solution the authors choose the minimum $\ell_2$ norm interpolating solution. Such sentences are very misleading. Other comments: - the *Introduction* section feels like a mix of an introduction and of related works. It would be much clearer to separate an actual introduction (with clear motivations), and a dedicated *related works* section. - *It is generally believed that the optimization algorithm itself, e.g., stochastic gradient descent (SGD), implicitly regularizes such overparameterized models.*: I of course agree with this but it would be good to add at least a reference to support this claim - speaking of linear regression: "*(to be considered as a reasonable approximation of neural network learning)*, how can we consider linear regression as a reasonable approximation of neural networks?! The only way I can see this is through the NTK regime, which has been shown many times to be far from the actual performance regime which NNs work in. - *Comparison to distributed ordinary least squares (DOLS)* paragraph: the authors only compare the optimal number of nodes but not the optimal rates, how do they compare? - Figures: the figures are quite disappointing and could highly be improved: the figures are squashed, the labels are hard to read: "w^*_j~j^-10" Minor comments: - First paragraph: *frameowrk* - Contributions paragraph: the acronym DRR is not introduced before - Notation paragraph: $\Vert w \Vert^2_A = \Vert A^{1/2} w \Vert$ should be $\Vert w \Vert^2_A = \Vert A^{1/2} w \Vert^2$ - equation (2): is this notation used anywhere in the paper? 
- Theorem 3.5: parameter $\tau$ is not introduced - Theorem 3.5: the definitions of $\mathrm{Bias}(w)$ and $\mathrm{Var}(w)$ are not given - Page 8: *Section5* --> *Section 5* - Using the notation $\lambda$ for the ridge regression regularisation parameter and for the eigenvalues of $H$ can be confusing Requested Changes: Though I think the results can be interesting to some of TMLR's audience, I think they should be motivated in a totally different manner: the implicit regularisation / benign overfitting motivations are currently very misleading and, in my opinion, wrong. Thus I believe the introduction and motivations should be rewritten to be convincing. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: I can relate to the objections of the reviewers. Although I think the paper has some merit and some results might be of interest to the community (note that the reviewers' comments focus mainly on the main result), it is not ideal if they are hidden and difficult to find. The paper should clearly state the differences from previous work and the new findings that have been brought to light. It appears that some of the reviewers' concerns could have been addressed in a discussion. Unfortunately, the authors did not engage in a discussion and did not submit a revision. Therefore, I recommend rejection of the manuscript in its present form (as suggested by all three reviewers), with encouragement to submit a major revision at a later time. ==================================================
# Using The Krylov Subspace Formulation To Improve Regularisation And Interpretation In Partial Least Squares Regression

Anonymous authors Paper under double-blind review

## Abstract

Partial least squares regression (PLS-R) has been an important regression method in the life sciences and many other fields for decades. However, PLS-R is typically solved using an algorithmic approach, rather than through an optimisation formulation and procedure. There is a clear optimisation formulation of the PLS-R problem based on a Krylov subspace formulation, but it is only rarely considered. The popularity of PLS-R is attributed to the ability to interpret the data through the model components, but the model components are not available when solving the PLS-R problem using the Krylov subspace formulation. We therefore highlight a simple reformulation of the PLS-R problem using the Krylov subspace formulation as a promising modelling framework for PLS-R, and illustrate one of the main benefits of this reformulation, namely that it allows arbitrary penalty terms of the regression coefficients to be included in the PLS-R model. Further, we propose an approach to estimate the PLS-R model components for the solution found through the Krylov subspace formulation; these correspond to the components we would have obtained had we been able to use the common algorithms for estimating the PLS-R model. We illustrate the utility of the proposed method on simulated and real data.

## 1 Introduction

Partial least squares regression (PLS-R) was proposed by Wold et al. (1983) as an alternative to principal component regression (PCR, Massy, 1965) and ridge regression (Hoerl & Kennard, 1970) for the problem of approximating concentrations of constituents in a chemical sample from spectroscopic data of the sample. With spectroscopic data, the variables are typically strongly correlated, and numerical instability problems may therefore arise when using ordinary least squares regression. PLS-R solves this by projecting the data onto an orthogonal set of basis vectors, derived from the data itself, and linear regression is then performed within this subspace. This is very similar to how PCR works, but instead of using the singular value decomposition (SVD) of the data, PLS-R is based on the partial least squares path modelling method in mode A (Wold et al., 1983), where the linear subspace is derived using both the data and the target variables. This leads to a non-linear shrinkage estimator (Krämer, 2007), with some unusual properties (Butler & Denham, 2000). Nevertheless, PLS-R has been widely used and is a very common choice of regression method in the chemical and biological literature (Frank & Friedman, 1993; Wold et al., 2001; Boulesteix & Strimmer, 2007; Gromski et al., 2015), but also in *e.g.* neuroimaging (Krishnana et al., 2011) and many other fields. PLS-R has been a popular method within these fields not only for regression; extensions have also been developed for classification (Sjöström et al., 1986; Ståhle & Wold, 1987; Barker & Rayens, 2003), outlier detection (Valderrama et al., 2007), and even for unsupervised problems such as *e.g.* clustering (Kloss et al., 2015). One of the reasons for the popularity of PLS-R is likely the possibility to interpret the resulting regression vector in terms of the basis vectors of the solution subspace: the PLS-R score and loading vectors give information about linear relationships between samples, and also about which variables correlate with those relationships.
This is of course an important aspect, especially considering the recent growth in the field of interpretable and explainable machine learning (Linardatos et al., 2020). The fact that PLS-R allowed better interpretations to be made, compared to PCR and ridge regression at the time, is likely to have added to the popularity of the PLS-R method. PLS-R has also been extended in many other ways, for instance to handle multiple target variables (Bisani et al., 1983; Mateos-Aparicio, 2011). There are also several similar but different formulations of the underlying optimisation problem, that make their own trade-offs and typically give different but related or similar solutions (Helland, 1988; de Jong, 1993). There are also different forms of preconditioning or preprocessing methods, that are sometimes equivalent to the PLS-R problem, but allow different interpretations of the resulting subspace (*e.g.*, Trygg & Wold, 2002; Ergon, 2005; Kvalheim et al., 2009). Regularised versions of PLS-R have also been proposed, such as sparse PLS-R (Lê Cao et al., 2008), elastic net PLS-R (Chun & Keleş, 2010), and non-negative PLS-R (Allen et al., 2013). However, those methods regularise the score and/or loading vectors, which means that the resulting regression vector need not be sparse.

While it is possible to solve the PLS-R optimisation problem using any general-purpose solver, the most common approaches appear to be to use the SVD, or the non-linear iterative partial least squares (NIPALS) algorithm—an instance of the power method (Abdi, 2010). While there are also accelerated versions of the power method (Xu et al., 2018; Rabbani et al., 2021), the original NIPALS algorithm (Wold et al., 1983) appears to still be one of the most common solvers for the PLS-R problem. It has been shown that the PLS-R regression vector lies in a Krylov subspace (Helland, 1988; Rosipal & Krämer, 2006; Krämer, 2007), and through that has connections to both Lanczos bidiagonalization (Eldén, 2004) and the conjugate gradient method (Wold et al., 1984). However, these relations do not seem to be exploited in practice. A reason for this could be that while it is easy to solve the problem using the Krylov subspace formulation, it is not immediately possible to obtain the scores and loadings from the obtained regression vector, reducing the model interpretability.

In this work, we present solutions to two main problems. First, we show how to use the Krylov subspace formulation to allow general-purpose regularisation terms to be added to the PLS-R problem. In particular, we analyse a regularised version of the Krylov formulation of the PLS-R problem that results in a sparse regression vector. The regularisation we used was the elastic net, *i.e.* a linear combination of $\ell_1$ and squared $\ell_2$ penalties (Zou & Hastie, 2005). Second, we propose a means to estimate the score and loading vectors for the found regression vector that we would have gotten, had we been able to use the traditional NIPALS solver for the regularised PLS-R problem. This procedure thus allows the same interpretation of the model, data, and target values as in classical PLS-R, but now also when using the Krylov subspace formulation.

## 2 Method

## 2.1 Partial Least Squares Regression

We consider the standard linear regression problem, *i.e.*,

$$y_i = \mathbf{x}_i^{\mathrm{T}}\mathbf{w} + \varepsilon_i, \quad \text{for } i = 1, \ldots, n,$$

where $y_i \in \mathbb{R}$ is a continuous target variable, $\mathbf{x}_i \in \mathbb{R}^p$ is a data sample of $p$ measured variables, and $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ is zero-mean additive Gaussian noise with variance $\sigma^2$.
The data samples, $\mathbf{x}_i$, and target variables, $y_i$, are assumed to be zero-mean. In this setting, the PLS-R problem is typically formulated as (Höskuldsson, 1988),

$$\underset{\mathbf{w}_1 \in \mathbb{R}^p}{\mathrm{maximise}}\ \mathbf{y}^{\mathrm{T}}\mathbf{X}_0\mathbf{w}_1, \tag{1}$$

where $\mathbf{y} \in \mathbb{R}^n$ is the vector of all target variables (all vectors are assumed to be column vectors), $\mathbf{X}_0 := \mathbf{X} \in \mathbb{R}^{n \times p}$ contains the $n$ data samples in the rows, and $\mathbf{w}_1 \in \mathbb{R}^p$ is a weight vector. Once the weight vector is found, a score vector is computed as $\mathbf{t}_1 = \mathbf{X}_0\mathbf{w}_1$ and a loading vector as $\mathbf{p}_1 = \mathbf{X}_0^{\mathrm{T}}\mathbf{t}_1/(\mathbf{t}_1^{\mathrm{T}}\mathbf{t}_1)$. We also compute a $y$-loading, $c_1 = \mathbf{y}^{\mathrm{T}}\mathbf{t}_1/(\mathbf{t}_1^{\mathrm{T}}\mathbf{t}_1)$. Once all score and loading vectors are found, the data matrix, $\mathbf{X}_0$, is *deflated*, by anti-projecting on the found score vector,

$$\mathbf{X}_1 = \mathbf{X}_0 - \mathbf{t}_1\mathbf{p}_1^{\mathrm{T}} = \mathbf{X}_0 - \frac{\mathbf{t}_1\mathbf{t}_1^{\mathrm{T}}\mathbf{X}_0}{\mathbf{t}_1^{\mathrm{T}}\mathbf{t}_1} = \left(\mathbf{I} - \frac{\mathbf{t}_1\mathbf{t}_1^{\mathrm{T}}}{\mathbf{t}_1^{\mathrm{T}}\mathbf{t}_1}\right)\mathbf{X}_0.$$

After deflation, the optimisation program in Equation 1 is run again, using $\mathbf{X}_1$, to find a second set of weights, $\mathbf{w}_2$, scores, $\mathbf{t}_2$, and loadings, $\mathbf{p}_2$ and $c_2$. A sequence of $K \leq \mathrm{rank}(\mathbf{X}) \leq \min(n, p)$ such vectors is thus constructed, and we collect them as the columns in the matrices

$$\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_K], \quad \mathbf{T} = [\mathbf{t}_1, \ldots, \mathbf{t}_K], \quad \mathbf{P} = [\mathbf{p}_1, \ldots, \mathbf{p}_K], \quad \text{and} \quad \mathbf{C} = [c_1, \ldots, c_K].$$

This procedure leads to mutually orthogonal weight and score vectors. A final regression vector is computed as

$$\beta_{\mathrm{PLS}} = \mathbf{W}(\mathbf{P}^{\mathrm{T}}\mathbf{W})^{-1}(\mathbf{T}^{\mathrm{T}}\mathbf{T})^{-1}\mathbf{T}^{\mathrm{T}}\mathbf{y} = \mathbf{W}(\mathbf{P}^{\mathrm{T}}\mathbf{W})^{-1}\mathbf{C}^{\mathrm{T}}, \tag{2}$$

and new samples are predicted as

$$\hat{y}_{\mathrm{new}} = \mathbf{x}_{\mathrm{new}}^{\mathrm{T}}\beta_{\mathrm{PLS}}.$$

The PLS-R method is thus a complicated procedure, and the steps leading to Equation 2 are fairly opaque, and typically in need of careful individual study to fully understand. Further, the deflation procedure is sensitive to numerical precision in the solution, and any errors are propagated to higher-order components (Björck & Indahl, 2017). It would be very difficult to include regularisation terms in Equation 1 that penalise the regression vector, $\beta_{\mathrm{PLS}}$ (through Equation 2), since the problem would become highly non-linear and a very complicated function of the weight vectors, $\mathbf{w}$, and especially so with more elaborate regularisers.
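For concreteness, the whole procedure of this section can be sketched in a few lines of numpy. This is a minimal single-target sketch (the function name is ours, and the unit-norm scaling of each weight vector, which is implicit in the standard NIPALS algorithm, makes Equation 1 well-posed):

```python
import numpy as np

def pls_nipals(X, y, K):
    """Sketch of PLS-R with NIPALS-style deflation for a single target.

    X (n x p) and y (n,) are assumed centred. Each weight vector is the
    (normalised) maximiser of Equation 1, followed by deflation of X.
    """
    Xk = np.array(X, dtype=float)
    W, T, P, C = [], [], [], []
    for _ in range(K):
        w = Xk.T @ y                     # maximises y^T X_k w under ||w|| = 1
        w = w / np.linalg.norm(w)
        t = Xk @ w                       # score vector
        p = Xk.T @ t / (t @ t)           # loading vector
        c = (y @ t) / (t @ t)            # y-loading
        Xk = Xk - np.outer(t, p)         # deflation (anti-projection on t)
        W.append(w); T.append(t); P.append(p); C.append(c)
    W, P, C = np.column_stack(W), np.column_stack(P), np.asarray(C)
    # Equation 2: beta_PLS = W (P^T W)^{-1} C^T
    return W @ np.linalg.solve(P.T @ W, C)
```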
In the next section, we present an alternative, but equivalent, formulation of the PLS-R problem in which we can trivially incorporate penalties of the regression vector.

## 2.2 Partial Least Squares And Krylov Subspaces

Helland (1988) showed that an alternative basis for the weight vectors is the sequence $\mathbf{X}^{\mathrm{T}}\mathbf{y}, (\mathbf{X}^{\mathrm{T}}\mathbf{X})\mathbf{X}^{\mathrm{T}}\mathbf{y}, \ldots, (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{K-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}$, generating a Krylov subspace (Watkins, 2007). We have the following definition.

Definition 1. *A Krylov subspace of order $K$, generated by a matrix $\mathbf{A} \in \mathbb{R}^{m \times m}$ and a vector $\mathbf{v} \in \mathbb{R}^m$, is the linear subspace spanned by the first $K$ powers of $\mathbf{A}$, and is denoted by*

$$\mathcal{K}_K(\mathbf{A}, \mathbf{v}) = \mathrm{span}\{\mathbf{v}, \mathbf{A}\mathbf{v}, \mathbf{A}^2\mathbf{v}, \ldots, \mathbf{A}^{K-1}\mathbf{v}\}.$$

Now, since the PLS-R weight vectors all lie in $\mathrm{span}\{\mathbf{X}^{\mathrm{T}}\mathbf{y}, (\mathbf{X}^{\mathrm{T}}\mathbf{X})\mathbf{X}^{\mathrm{T}}\mathbf{y}, \ldots, (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{K-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}\}$ (Helland, 1988), we immediately obtain the following result. This result is known, but we have failed to find a direct proof of it; we therefore provide a simple proof, for completeness.

Lemma 1. *If $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$ is a basis for the weight vectors, $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_K]$, then the PLS-R regression vector, $\beta_{\mathrm{PLS}}$, lies in the Krylov subspace of order $K$ generated by $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ and $\mathbf{X}^{\mathrm{T}}\mathbf{y}$, i.e.*

$$\beta_{\mathrm{PLS}} \in \mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y}).$$

Proof. It is well-known that the Krylov subspace $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$ is a basis for the weight vectors (see *e.g.*, Helland, 1988). Hence, if we let $\mathbf{K} \in \mathbb{R}^{p \times K}$ be some basis for $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$, then

$$\mathbf{W} = \mathbf{K}\mathbf{A},$$

for some matrix $\mathbf{A} \in \mathbb{R}^{K \times K}$. From Equation 2, we have

$$\beta_{\mathrm{PLS}} = \mathbf{W}(\mathbf{P}^{\mathrm{T}}\mathbf{W})^{-1}(\mathbf{T}^{\mathrm{T}}\mathbf{T})^{-1}\mathbf{T}^{\mathrm{T}}\mathbf{y} = \mathbf{W}\mathbf{v},$$

with $\mathbf{v} = (\mathbf{P}^{\mathrm{T}}\mathbf{W})^{-1}(\mathbf{T}^{\mathrm{T}}\mathbf{T})^{-1}\mathbf{T}^{\mathrm{T}}\mathbf{y}$. Thus

$$\beta_{\mathrm{PLS}} = \mathbf{W}\mathbf{v} = \mathbf{K}\mathbf{A}\mathbf{v} = \mathbf{K}\boldsymbol{\alpha},$$

where $\boldsymbol{\alpha} = \mathbf{A}\mathbf{v}$ is a vector. This concludes the proof.

Hence, we see that the PLS-R problem can be cast in the form of a linear least squares problem, as

$$\begin{aligned} \underset{\beta \in \mathbb{R}^p}{\mathrm{minimise}}\ & \frac{1}{2n}\|\mathbf{y} - \mathbf{X}\beta\|_2^2 \\ \text{subject to}\ & \beta \in \mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y}), \end{aligned}$$

and by Lemma 1 an equivalent reformulation is thus

$$\underset{\boldsymbol{\alpha} \in \mathbb{R}^K}{\mathrm{minimise}}\ \frac{1}{2n}\|\mathbf{y} - \mathbf{X}\mathbf{K}\boldsymbol{\alpha}\|_2^2, \tag{3}$$

where $\mathbf{K} \in \mathbb{R}^{p \times K}$ again is a basis for the Krylov subspace $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$ and $\boldsymbol{\alpha} \in \mathbb{R}^K$. We assume that $\mathbf{K}$ is an orthonormal basis. An analytical solution is thus

$$\boldsymbol{\alpha} = (\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K})^{-1}\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{y},$$

assuming that $\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}$ is invertible, but any numerical optimisation algorithm that solves Equation 3 can of course also be used. Finally, the PLS-R regression coefficient vector is retrieved as

$$\beta_{\mathrm{PLS}} = \mathbf{K}\boldsymbol{\alpha}.$$

Note that this is the *same* regression vector as that found in Equation 2.
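To make the reformulation concrete, here is a minimal numpy sketch of one way to construct an orthonormal basis $\mathbf{K}$ and solve Equation 3 (function names are ours; the basis is obtained by a QR factorisation of the raw Krylov sequence, which suffices for illustration, although a Lanczos-type construction would be numerically preferable since the raw sequence is ill-conditioned):

```python
import numpy as np

def krylov_basis(X, y, K):
    """Orthonormal basis of K_K(X^T X, X^T y) via QR of the raw sequence
    X^T y, (X^T X) X^T y, ..., (X^T X)^{K-1} X^T y."""
    S = X.T @ X
    cols = [X.T @ y]
    for _ in range(K - 1):
        cols.append(S @ cols[-1])
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q                                  # p x K, orthonormal columns

def pls_krylov(X, y, K):
    """Unregularised Krylov PLS-R: solve Equation 3 by least squares."""
    Kmat = krylov_basis(X, y, K)
    alpha, *_ = np.linalg.lstsq(X @ Kmat, y, rcond=None)
    return Kmat @ alpha                       # beta_PLS = K alpha
```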
## 2.3 Regularising The Regression Vector In Partial Least Squares Regression

With Equation 3, the PLS-R problem is in a familiar form, and we can apply any regularisation we want to the least squares objective. *E.g.*, we can add a squared $\ell_2$ norm penalty, and obtain a ridge PLS-R hybrid model, where the $\ell_2$ regularisation parameter and $K$ would control the trade-off between linear least squares regression, ridge regression, and PLS-R. Equivalently, we can add an $\ell_1$ norm penalty, for a Lasso PLS-R hybrid model, with a trade-off between linear least squares regression, the Lasso, and PLS-R. This is particularly interesting, since the $\ell_1$ norm penalty performs variable selection, and thus yields a truly sparse PLS-R model.

We chose to add both the $\ell_1$ and squared $\ell_2$ norm penalties, to obtain an elastic net (Zou & Hastie, 2005) PLS-R hybrid model, where we thus can find an optimal trade-off between $\ell_1$, squared $\ell_2$, and PLS-R regularisation of the regression coefficient vector, *i.e.* $\mathbf{K}\boldsymbol{\alpha}$. This model thus also performs variable selection in the regression coefficient vector, *i.e.* it is a sparse PLS-R model, where the sparsity is with respect to the regression coefficients instead of the weights and/or scores. The optimisation problem thus becomes

$$\underset{\boldsymbol{\alpha} \in \mathbb{R}^K}{\mathrm{minimise}}\ \frac{1}{2n}\|\mathbf{y} - \mathbf{X}\mathbf{K}\boldsymbol{\alpha}\|_2^2 + \frac{\gamma}{2}\|\mathbf{K}\boldsymbol{\alpha}\|_2^2 + \lambda\|\mathbf{K}\boldsymbol{\alpha}\|_1, \tag{4}$$

where $\lambda > 0$ and $\gamma > 0$ are regularisation parameters (or rather, Lagrange multipliers) controlling the trade-off between the main objective and the regularisation terms. We obtain our final sparse regression vector as $\hat{\beta}_{\mathrm{PLS}} = \mathbf{K}\boldsymbol{\alpha}$. The optimisation problem in Equation 4 was the main object of our attention in this work, and in the examples that follow, we used the alternating direction method of multipliers (ADMM, Gabay & Mercier, 1976; Boyd et al., 2010) to solve it.

## 2.3.1 The Steps Of The Alternating Direction Method Of Multipliers

We cast the program in Equation 4 in the following form, renaming $\boldsymbol{\alpha}$ to $\mathbf{x}$ for the ADMM derivation,

$$\begin{aligned} \underset{\mathbf{x} \in \mathbb{R}^K,\ \mathbf{z} \in \mathbb{R}^p}{\mathrm{minimise}}\ & \frac{1}{2n}\|\mathbf{y} - \mathbf{X}\mathbf{K}\mathbf{x}\|_2^2 + \frac{\gamma}{2}\|\mathbf{K}\mathbf{x}\|_2^2 + \lambda\|\mathbf{z}\|_1 \\ \text{subject to}\ & \mathbf{K}\mathbf{x} = \mathbf{z}. \end{aligned} \tag{5}$$

We formulate the augmented Lagrangian of Equation 5,

$$L_\rho(\mathbf{x}, \mathbf{z}, \mathbf{v}) = \frac{1}{2n}\|\mathbf{y} - \mathbf{X}\mathbf{K}\mathbf{x}\|_2^2 + \frac{\gamma}{2}\|\mathbf{K}\mathbf{x}\|_2^2 + \lambda\|\mathbf{z}\|_1 + \mathbf{v}^{\mathrm{T}}(\mathbf{K}\mathbf{x} - \mathbf{z}) + \frac{\rho}{2}\|\mathbf{K}\mathbf{x} - \mathbf{z}\|_2^2, \tag{6}$$

with $\rho > 0$ the penalty parameter and $\mathbf{v} \in \mathbb{R}^p$ a vector of Lagrange multipliers. For the ADMM algorithm, we must minimise Equation 6 with respect to $\mathbf{x}$ and with respect to $\mathbf{z}$. We see that, by setting the gradient of $L_\rho$ with respect to $\mathbf{x}$ to zero and solving for $\mathbf{x}$, we obtain

$$\underset{\mathbf{x} \in \mathbb{R}^K}{\arg\min}\ L_\rho(\mathbf{x}, \mathbf{z}, \mathbf{v}) = \left(\frac{1}{n}\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K} + (\gamma + \rho)\mathbf{K}^{\mathrm{T}}\mathbf{K}\right)^{-1}\left(\frac{1}{n}\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{y} - \mathbf{K}^{\mathrm{T}}\mathbf{v} + \rho\mathbf{K}^{\mathrm{T}}\mathbf{z}\right).$$

To minimise $L_\rho$ with respect to $\mathbf{z}$, we first see that

$$\begin{aligned} L_\rho(\mathbf{x}, \mathbf{z}, \mathbf{v}) &= \lambda\|\mathbf{z}\|_1 - \mathbf{v}^{\mathrm{T}}\mathbf{z} + \frac{\rho}{2}\|\mathbf{K}\mathbf{x} - \mathbf{z}\|_2^2 + C_1(\mathbf{x}, \mathbf{v}) \\ &= \lambda\|\mathbf{z}\|_1 + \frac{\rho}{2}\left\|\left(\mathbf{K}\mathbf{x} + \frac{\mathbf{v}}{\rho}\right) - \mathbf{z}\right\|_2^2 + C_2(\mathbf{x}, \mathbf{v}), \end{aligned}$$

where $C_1$ and $C_2$ are constant *wrt.* $\mathbf{z}$.
We recognise this as the proximal operator of the $\ell_1$ norm, and thus arrive at

$$\underset{\mathbf{z} \in \mathbb{R}^p}{\arg\min}\ L_\rho(\mathbf{x}, \mathbf{z}, \mathbf{v}) = \mathrm{prox}_{\frac{\lambda}{\rho}\|\cdot\|_1}\left(\mathbf{K}\mathbf{x} + \frac{\mathbf{v}}{\rho}\right).$$

Finally, the steps of ADMM are

$$\begin{aligned} \mathbf{x}^{(s+1)} &= \underset{\mathbf{x} \in \mathbb{R}^K}{\arg\min}\ L_\rho(\mathbf{x}, \mathbf{z}^{(s)}, \mathbf{v}^{(s)}), \\ \mathbf{z}^{(s+1)} &= \underset{\mathbf{z} \in \mathbb{R}^p}{\arg\min}\ L_\rho(\mathbf{x}^{(s+1)}, \mathbf{z}, \mathbf{v}^{(s)}), \\ \mathbf{v}^{(s+1)} &= \mathbf{v}^{(s)} + \rho(\mathbf{K}\mathbf{x}^{(s+1)} - \mathbf{z}^{(s+1)}). \end{aligned}$$
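These iterations are straightforward to implement. Below is a minimal numpy sketch of the scheme (our own illustration, not the authors' code; the fixed iteration count, the zero initialisation of $\mathbf{z}$ and $\mathbf{v}$, and the choice $\rho = 1$ are our assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def elastic_net_pls_admm(X, y, Kmat, gamma, lam, rho=1.0, n_iter=500):
    """Sketch of the ADMM iterations for Equations 4-5.

    Kmat is an orthonormal Krylov basis (so K^T K = I), gamma and lam are
    the squared-l2 and l1 penalty parameters, rho is the ADMM penalty
    parameter. All names here are our own.
    """
    n = X.shape[0]
    XK = X @ Kmat
    # constant part of the x-update (the arg min over x derived above)
    A = XK.T @ XK / n + (gamma + rho) * np.eye(Kmat.shape[1])
    b = XK.T @ y / n
    z = np.zeros(Kmat.shape[0])   # z and the dual variable v live in R^p
    v = np.zeros(Kmat.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(A, b - Kmat.T @ v + rho * (Kmat.T @ z))
        z = soft_threshold(Kmat @ x + v / rho, lam / rho)
        v = v + rho * (Kmat @ x - z)
    return Kmat @ x               # the regularised regression vector K alpha
```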
## 2.4 Reconstructing The Components Of Partial Least Squares Regression

The regression coefficient vector we obtain by solving Equation 4 will not coincide with the one obtained by PLS-R in Equation 2, nor with the equivalent one we obtain by solving Equation 3. While we still select it from the Krylov subspace $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$, the found vector, $\boldsymbol{\alpha}$, will be different in Equation 4 from that found in Equation 3, and so the corresponding regression vectors will also be different.

What we can do is to set up an optimisation problem, searching for weights, scores, and loadings fulfilling the properties of the components of PLS-R, at least approximately, and that give a regression vector that is *close* to $\hat{\beta}_{\mathrm{PLS}}$. We therefore want to solve the following optimisation problem, where parameters with a tilde, such as $\widetilde{\mathbf{w}}_{k+1}$, are the component $k+1$ parameters that we want to find. Parameters without a tilde, such as $\mathbf{w}$, are those found using unregularised PLS-R (from Section 2.1), and $\hat{\beta}_{\mathrm{PLS}} = \mathbf{K}\boldsymbol{\alpha}$ is the regularised PLS-R regression vector found by Equation 5. A matrix with an index has a number of columns containing the components already found, in order, *e.g.* $\widetilde{\mathbf{W}}_k = [\widetilde{\mathbf{w}}_1, \ldots, \widetilde{\mathbf{w}}_k]$. Hence, $\widetilde{\mathbf{W}}_{k+1}$ also includes the current sought parameter vector. The exact problem is

$$\begin{aligned} \underset{\widetilde{\mathbf{w}}_{k+1} \in \mathbb{R}^p}{\mathrm{maximise}}\ & \mathbf{y}^{\mathrm{T}}\mathbf{X}\widetilde{\mathbf{w}}_{k+1} && (7) \\ \text{subject to}\ & \|\widetilde{\mathbf{w}}_{k+1}\|_2^2 = 1, && (8) \\ & \widetilde{\mathbf{W}}_k^{\mathrm{T}}\widetilde{\mathbf{w}}_{k+1} = \mathbf{0}, && (9) \\ & \widetilde{\mathbf{T}}_k^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1} = \mathbf{0}, && (10) \\ & \widetilde{\mathbf{W}}_k^{\mathrm{T}}\widetilde{\mathbf{p}}_{k+1} = \mathbf{0}, && (11) \\ & \|\mathbf{w}_{k+1} - \widetilde{\mathbf{w}}_{k+1}\|_2^2 \leq \alpha, && (12) \\ & \|\mathbf{t}_{k+1} - \widetilde{\mathbf{t}}_{k+1}\|_2^2 \leq \beta, && (13) \\ & \|\mathbf{p}_{k+1} - \widetilde{\mathbf{p}}_{k+1}\|_2^2 \leq \gamma, && (14) \\ & \|c_{k+1} - \widetilde{c}_{k+1}\|_2^2 \leq \delta, && (15) \\ & \|\hat{\beta}_{\mathrm{PLS}} - \widetilde{\mathbf{W}}_{k+1}(\widetilde{\mathbf{P}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{W}}_{k+1})^{-1}\widetilde{\mathbf{C}}_{k+1}^{\mathrm{T}}\|_2^2 \leq \varepsilon. && (16) \end{aligned}$$

The objective, Equation 7, is the original PLS-R objective (from Equation 1), which still is what we want to maximise. Equation 8 is the unit norm constraint. Equation 9 is an orthogonality constraint on the weight vector, $\widetilde{\mathbf{w}}_{k+1}$, to the $k$ already found weight vectors, $\widetilde{\mathbf{W}}_k$. Equation 10 is a corresponding orthogonality constraint for the score vectors. The loadings, $\widetilde{\mathbf{p}}_{k+1}$, should be orthogonal to the already found weight vectors, encoded in Equation 11 (see *e.g.*, Manne, 1987). Further, we may want the found weights, scores, and loadings to be close to the PLS-R weights, scores, and loadings, which is encoded in Equations 12–15. Finally, Equation 16 forces the regression coefficient vector, computed using Equation 2, to be near to the one obtained from the regularised Krylov formulation of the PLS-R problem in Equation 5.

This formulation, in Equation 7, poses the exact problem that we want to solve, but it becomes a very difficult problem in practice. We have multiple constraints of which several are non-linear, non-convex, and there may not even be a non-empty feasible set. We therefore propose the following slightly relaxed problem, employing the method of Lagrange multipliers,

$$\begin{aligned} \underset{\widetilde{\boldsymbol{\omega}}_{k+1} \in \mathbb{R}^K}{\mathrm{minimise}}\ & f(\widetilde{\boldsymbol{\omega}}_{k+1}) && (17) \\ =\ & -\mathbf{y}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} && (18) \\ & + \lambda\|\hat{\beta}_{\mathrm{PLS}} - \widetilde{\mathbf{W}}_{k+1}(\widetilde{\mathbf{P}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{W}}_{k+1})^{-1}\widetilde{\mathbf{C}}_{k+1}^{\mathrm{T}}\|_2^2 && (19) \\ & + \mu\|\mathbf{w}_{k+1} - \mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2 && (20) \\ & + \nu\|\mathbf{t}_{k+1} - \mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2 && (21) \\ & + \xi\|(\mathbf{t}_{k+1}^{\mathrm{T}}\mathbf{t}_{k+1})\mathbf{p}_{k+1} - \mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2 && (22) \\ & + \pi\|(\mathbf{t}_{k+1}^{\mathrm{T}}\mathbf{t}_{k+1})c_{k+1} - \mathbf{y}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2 && (23) \\ \text{subject to}\ & \|\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2 \leq 1, && (24) \\ & \widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} = \mathbf{0}, && (25) \\ & \widetilde{\mathbf{T}}_k^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} = \mathbf{0}, && (26) \\ & \widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} = \mathbf{0}, && (27) \end{aligned}$$

where $\widetilde{\mathbf{w}}_{k+1} = \mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}$; $\lambda, \mu, \nu, \xi$, and $\pi$ are regularisation parameters (Lagrange multipliers); the constraints from Equations 8–16 for which the projection operators are easy to compute have been kept as constraints in Equations 24–27; and those that are more difficult, or alternatively very easy to optimise over in their penalty form, have been put as penalty terms instead. We now give the derivation and interpretation of the terms in Equations 19–27 in order:

- Equation 19: Note that $\widetilde{\mathbf{W}}_{k+1} = [\widetilde{\mathbf{w}}_1, \ldots, \widetilde{\mathbf{w}}_k, \mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}]$, and similarly for $\widetilde{\mathbf{P}}_{k+1}$ and $\widetilde{\mathbf{C}}_{k+1}$. Corresponding to Equation 16, this is a non-linear function in $\widetilde{\boldsymbol{\omega}}_{k+1}$, for which we know neither the projection operator nor the proximal operator. It is easier to minimise in penalty form.
- Equation 20: Corresponds to Equation 12. This projection operator is known and easy to compute, and it is a smooth function in $\widetilde{\boldsymbol{\omega}}_{k+1}$, which is why we make it into a penalty instead.
- Equation 21: Corresponds to Equation 13. We express this as a function of $\widetilde{\boldsymbol{\omega}}_{k+1}$, with $\widetilde{\mathbf{t}}_{k+1} = \mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}$, and since it is smooth and convex, we put it as a penalty.
- Equation 22: Corresponds to Equation 14. Note that PLS-R defines the loadings as

$$\widetilde{\mathbf{p}}_{k+1} = \frac{\mathbf{X}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}}{\widetilde{\mathbf{t}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}} = \frac{\mathbf{X}^{\mathrm{T}}\mathbf{X}\widetilde{\mathbf{w}}_{k+1}}{\|\mathbf{X}\widetilde{\mathbf{w}}_{k+1}\|_2^2} = \frac{\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}}{\|\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2},$$

which is non-linear in $\widetilde{\boldsymbol{\omega}}_{k+1}$. To get rid of the denominator we make an approximation in that we multiply both terms by the squared norm of their corresponding score vector, and instead ask that $\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}$ be close to $(\mathbf{t}_{k+1}^{\mathrm{T}}\mathbf{t}_{k+1})\mathbf{p}_{k+1}$. Hence, another smooth convex function put as a penalty.
- Equation 23: Corresponds to Equation 15. Recall that PLS-R defines the $y$-loadings as

$$\widetilde{c}_{k+1} = \frac{\mathbf{y}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}}{\widetilde{\mathbf{t}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}} = \frac{\mathbf{y}^{\mathrm{T}}\mathbf{X}\widetilde{\mathbf{w}}_{k+1}}{\|\mathbf{X}\widetilde{\mathbf{w}}_{k+1}\|_2^2} = \frac{\mathbf{y}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}}{\|\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2},$$

which is non-linear in $\widetilde{\boldsymbol{\omega}}_{k+1}$. To get rid of the denominator, we again make an approximation in that we multiply both terms by the squared norm of their corresponding score vector, and ask that $\mathbf{y}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}$ be close to $(\mathbf{t}_{k+1}^{\mathrm{T}}\mathbf{t}_{k+1})c_{k+1}$. Hence, a smooth convex function in $\widetilde{\boldsymbol{\omega}}_{k+1}$, put as a penalty.
- Equation 24: This is a convex relaxation of Equation 8.
- Equations 25–26: Same as Equations 9–10, with the difference that Equation 26 is expressed as a function of $\widetilde{\boldsymbol{\omega}}_{k+1}$.
- Equation 27: Corresponds to Equation 11. We know that the PLS-R loadings satisfy (Höskuldsson, 2003),

$$\widetilde{\mathbf{W}}_k^{\mathrm{T}}\widetilde{\mathbf{p}}_{k+1} = \frac{\widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}}{\widetilde{\mathbf{t}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{t}}_{k+1}} = \frac{\widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}}{\|\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1}\|_2^2} = \mathbf{0},$$

which clearly is equivalent to

$$\widetilde{\mathbf{W}}_k^{\mathrm{T}}\widetilde{\mathbf{p}}_{k+1} = \widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} = \mathbf{0},$$

assuming $\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} > 0$, but since $K \leq \mathrm{rank}(\mathbf{X})$, this is achieved.

With these changes, consisting of several reformulations, one convex relaxation, and two approximations, we have an objective function that is the sum of a number of functions, all but one of which are convex, with four convex constraints (of which three are linear). We can solve this problem using *e.g.* projected gradient descent (Bertsekas, 1999), or any other optimisation algorithm of choice. In order to apply projected gradient descent, we need to know the gradient of $f$ and the projection operator corresponding to the four constraints in Equations 24–27. These are straight-forward, but need to be outlined in more detail.

## 2.4.1 Projection Operators

Each constraint in Equations 24–27 corresponds to a set of feasible points, denoted $\mathcal{S}_1, \ldots, \mathcal{S}_4$, respectively. In order for all four constraints to be satisfied, the solution must lie in all of them, *i.e.* we seek a point that lies in their intersection,

$$\mathcal{S} = \{\mathbf{x} : \mathbf{x} \in \mathcal{S}_1 \land \cdots \land \mathbf{x} \in \mathcal{S}_4\} = \left\{\mathbf{x} : \mathbf{x} \in \bigcap_{i=1}^4 \mathcal{S}_i\right\}.$$

We see from Equations 24–27 that for all $\mathcal{S}_i$, for $i = 1, \ldots, 4$, we have at least $\mathbf{0} \in \mathcal{S}_i$, and so $\mathcal{S} \neq \emptyset$. We note that each $\mathcal{S}_i$, for $i = 1, \ldots, 4$, is a convex set, and since the intersection of convex sets is convex, $\mathcal{S}$ is also a convex set.

The single projection operator corresponding to the four constraints in Equations 24–27 is the projection onto their intersection, *i.e.* the projection onto $\mathcal{S}$. The projection of a point $\mathbf{w} \in \mathbb{R}^p$ onto a convex set, $\mathcal{S} \subseteq \mathbb{R}^p$, is defined as

$$\mathrm{proj}_{\mathcal{S}}(\mathbf{w}) = \underset{\mathbf{x} \in \mathbb{R}^p}{\arg\min}\ \|\mathbf{x} - \mathbf{w}\|_2^2 + \chi_{\mathcal{S}}(\mathbf{x}), \tag{28}$$

where $\chi_{\mathcal{S}}$ is the characteristic function over $\mathcal{S}$, *i.e.*,

$$\chi_{\mathcal{S}}(\mathbf{x}) = \begin{cases} 0 & \text{if } \mathbf{x} \in \mathcal{S}, \\ \infty & \text{if } \mathbf{x} \notin \mathcal{S}. \end{cases}$$

We can numerically compute the projection onto the intersection of the four sets, $\mathcal{S}_1, \ldots, \mathcal{S}_4$, *i.e.* onto $\mathcal{S}$, by using a parallel Dykstra-like proximal algorithm, as outlined by *e.g.* Combettes & Pesquet (2011). We give the two projection operators, and start with Equation 24.
The proximal operator for $\lambda\|\mathbf{K}\mathbf{x}\|_2^2$ is trivially

$$\mathrm{prox}_{\lambda\|\mathbf{K}\cdot\|_2^2}(\mathbf{x}) = (\mathbf{I} + 2\lambda\mathbf{K}^{\mathrm{T}}\mathbf{K})^{-1}\mathbf{x} = \frac{1}{1 + 2\lambda}\mathbf{x},$$

since $\mathbf{K}$ is assumed orthonormal (and thus $\mathbf{K}^{\mathrm{T}}\mathbf{K} = \mathbf{I}$), and we seek the smallest $\lambda$ such that Equation 24 is fulfilled, *i.e.*, such that $\mathbf{x} \in \mathcal{S}_1 = \{\mathbf{x} \in \mathbb{R}^p : \|\mathbf{K}\,\mathrm{prox}_{\lambda\|\mathbf{K}\cdot\|_2^2}(\mathbf{x})\|_2^2 \leq 1\}$, which we achieve by finding the smallest $\lambda^*$ such that

$$\left\|\mathbf{K}\left(\frac{1}{1 + 2\lambda^*}\mathbf{x}\right)\right\|_2^2 \leq 1 \quad \Longleftrightarrow \quad \lambda^* \geq \frac{\|\mathbf{K}\mathbf{x}\|_2}{2} - \frac{1}{2}.$$

Hence, using this $\lambda^*$, the projection operator becomes

$$\mathrm{proj}_{\mathcal{S}_1}(\mathbf{x}) = \begin{cases} \mathbf{x} & \text{if } \|\mathbf{K}\mathbf{x}\|_2^2 \leq 1, \\ \frac{1}{1 + 2\lambda^*}\mathbf{x} & \text{otherwise.} \end{cases}$$

The constraints in Equations 25–27 all have the general form $\mathbf{A}_i\mathbf{x} = \mathbf{0}$, with $\mathbf{A}_2 = \widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{K}$, $\mathbf{A}_3 = \widetilde{\mathbf{T}}_k^{\mathrm{T}}\mathbf{X}\mathbf{K}$, and $\mathbf{A}_4 = \widetilde{\mathbf{W}}_k^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}$, respectively. The projection operator onto the sets $\mathcal{S}_i = \{\mathbf{x} \in \mathbb{R}^p : \mathbf{A}_i\mathbf{x} = \mathbf{0}\}$, for $i \in \{2, 3, 4\}$, has the analytic solution (Bauschke & Kruk, 2004),

$$\mathrm{proj}_{\mathcal{S}_i}(\mathbf{x}) = \begin{cases} \mathbf{x}, & \text{if } \mathbf{A}_i\mathbf{x} = \mathbf{0}, \\ \mathbf{x} - \mathbf{A}_i^{\dagger}\mathbf{A}_i\mathbf{x}, & \text{otherwise}, \end{cases}$$

where $\mathbf{A}^{\dagger}$ is the Moore–Penrose pseudo-inverse of $\mathbf{A}$.

## 2.4.2 Gradient Of The Objective

We need to compute the gradient of $f$ in Equation 17. Now, Equation 18 and Equations 20–23 are straightforward linear and quadratic functions, with trivial gradients, but the gradient of Equation 19 is more difficult to find, which is why we present it here. We have that (Höskuldsson, 2003),

$$\widetilde{\mathbf{w}}_{k+1}^* = \widetilde{\mathbf{w}}_{k+1} - \left[\sum_{i=1}^k \widetilde{\mathbf{w}}_i^*\widetilde{\mathbf{p}}_i^{\mathrm{T}}\right]\widetilde{\mathbf{w}}_{k+1} = \mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} - \left[\sum_{i=1}^k \widetilde{\mathbf{w}}_i^*\widetilde{\mathbf{p}}_i^{\mathrm{T}}\right]\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1} = (\mathbf{I} - \mathbf{D})\mathbf{K}\widetilde{\boldsymbol{\omega}}_{k+1},$$

where $\mathbf{I}$ is an identity matrix and $\mathbf{D}$ is a constant matrix. Let $\hat{\beta}_{\mathrm{PLS},k}$ be the regression coefficient vector approximated using the $k$ previously found components, and further let $\mathbf{a} = \mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{y}$, $\mathbf{b} = \hat{\beta}_{\mathrm{PLS}} - \hat{\beta}_{\mathrm{PLS},k}$, $\mathbf{A} = (\mathbf{I} - \mathbf{D})\mathbf{K}$, and $\mathbf{B} = \mathbf{K}^{\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}\mathbf{K}$. Then the gradient becomes

$$\begin{aligned} \nabla_{\widetilde{\boldsymbol{\omega}}_{k+1}} &\left\|\hat{\beta}_{\mathrm{PLS}} - \widetilde{\mathbf{W}}_{k+1}(\widetilde{\mathbf{P}}_{k+1}^{\mathrm{T}}\widetilde{\mathbf{W}}_{k+1})^{-1}\widetilde{\mathbf{C}}_{k+1}^{\mathrm{T}}\right\|_2^2 \\ &= -\frac{2(\mathbf{A}^{\mathrm{T}}\mathbf{b}\mathbf{a}^{\mathrm{T}} + \mathbf{a}\mathbf{b}^{\mathrm{T}}\mathbf{A})\widetilde{\boldsymbol{\omega}}_{k+1}}{\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1}} + \frac{4\mathbf{b}^{\mathrm{T}}\mathbf{A}\widetilde{\boldsymbol{\omega}}_{k+1}\,\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{a}\,\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1}}{(\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1})^2} \\ &\quad + \frac{2(\mathbf{A}^{\mathrm{T}}\mathbf{A}\widetilde{\boldsymbol{\omega}}_{k+1}\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{a} + \widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\mathbf{A}\widetilde{\boldsymbol{\omega}}_{k+1}\,\mathbf{a})\,\mathbf{a}^{\mathrm{T}}\widetilde{\boldsymbol{\omega}}_{k+1}}{(\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1})^2} - \frac{4\mathbf{a}^{\mathrm{T}}\widetilde{\boldsymbol{\omega}}_{k+1}\,\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\mathbf{A}\widetilde{\boldsymbol{\omega}}_{k+1}\,\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{a}\,\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1}}{(\widetilde{\boldsymbol{\omega}}_{k+1}^{\mathrm{T}}\mathbf{B}\widetilde{\boldsymbol{\omega}}_{k+1})^3}. \end{aligned} \tag{29}$$

## 2.4.3 Minimising The Objective Function

We used projected gradient descent (Bertsekas, 1999) to solve the non-linear program in Equation 17, which amounts to iterating the weight update scheme,

$$\widetilde{\boldsymbol{\omega}}_{k+1}^{(s+1)} \leftarrow \mathrm{proj}_{\mathcal{S}}\left(\widetilde{\boldsymbol{\omega}}_{k+1}^{(s)} - \eta\nabla f(\widetilde{\boldsymbol{\omega}}_{k+1}^{(s)})\right),$$

where $s$ is a sequence index, $\eta > 0$ is a step size, the gradient of $f$ is Equation 29 plus the gradients of the rest of the terms, *i.e.* from Equation 18 and Equations 20–23, and the projection operator, $\mathrm{proj}_{\mathcal{S}}$, was computed numerically by solving Equation 28 using a parallel Dykstra-like proximal algorithm (Combettes & Pesquet, 2011).
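A sketch of these operators and the resulting projected gradient scheme is given below (our own illustration, not the authors' code; for simplicity the intersection projection is approximated by plain alternating projections, whereas the paper uses a parallel Dykstra-like algorithm, which adds correction terms to obtain the exact projection onto the intersection):

```python
import numpy as np

def proj_s1(x, Kmat):
    """Projection onto S1 = {x : ||K x||_2^2 <= 1}; for orthonormal Kmat
    this reduces to rescaling by 1 / ||K x||_2 when outside the ball."""
    nrm = np.linalg.norm(Kmat @ x)
    return x if nrm <= 1.0 else x / nrm

def proj_affine(x, A):
    """Projection onto {x : A x = 0} via x - A^+ A x (Equations 25-27)."""
    return x - np.linalg.pinv(A) @ (A @ x)

def proj_intersection(x, projections, n_sweeps=50):
    """Stand-in for the parallel Dykstra-like scheme: plain alternating
    projections (a simplification; Dykstra's corrections are needed for
    the exact projection onto the intersection)."""
    for _ in range(n_sweeps):
        for proj in projections:
            x = proj(x)
    return x

def projected_gradient(omega0, grad_f, projections, eta=1e-3, n_iter=1000):
    """Projected gradient descent for Equation 17: a gradient step on f
    followed by (approximate) projection onto S."""
    omega = omega0.copy()
    for _ in range(n_iter):
        omega = proj_intersection(omega - eta * grad_f(omega), projections)
    return omega
```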
## 2.4.4 The Found Regression Coefficient Vector

We have the following immediate result about the regression coefficient vector, $\hat{\beta}_{\mathrm{PLS},K} = \widetilde{\mathbf{W}}_K(\widetilde{\mathbf{P}}_K^{\mathrm{T}}\widetilde{\mathbf{W}}_K)^{-1}\widetilde{\mathbf{C}}_K^{\mathrm{T}}$, found by using the program in Equation 17.

Theorem 1. *The PLS-R regression vector,* $\hat{\beta}_{\mathrm{PLS},K}$*, found through Equation 17, lies in the Krylov subspace of order $K$ generated by $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ and $\mathbf{X}^{\mathrm{T}}\mathbf{y}$, i.e.*

$$\hat{\beta}_{\mathrm{PLS},K} \in \mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y}).$$

Proof. All weight vectors found through Equation 17 are written in the form $\widetilde{\mathbf{w}}_k = \mathbf{K}\widetilde{\boldsymbol{\omega}}_k$, where $\mathbf{K}$ is a basis for the Krylov subspace $\mathcal{K}_K(\mathbf{X}^{\mathrm{T}}\mathbf{X}, \mathbf{X}^{\mathrm{T}}\mathbf{y})$. We can then write

$$\widetilde{\mathbf{W}}_K = \mathbf{K}\boldsymbol{\Omega}_K,$$

with $\boldsymbol{\Omega}_K = [\widetilde{\boldsymbol{\omega}}_1, \ldots, \widetilde{\boldsymbol{\omega}}_K]$. *I.e.*, all weight vectors lie in the Krylov subspace. Hence, by Lemma 1, the PLS-R regression vector, $\hat{\beta}_{\mathrm{PLS},K}$, lies in that Krylov subspace of order $K$ generated by $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ and $\mathbf{X}^{\mathrm{T}}\mathbf{y}$.

Hence, the solution found using the Krylov formulation has the same property as the PLS-R solution, namely that both the weights, $\widetilde{\mathbf{W}}$, and the regression vector, $\hat{\beta}_{\mathrm{PLS}}$, lie in a Krylov subspace. This property may lead to better future algorithms for solving the PLS-R problem.

## 3 Examples

To illustrate the utility of the proposed Krylov-based PLS-R formulation, we present two examples. The first example is based on simulated data and the second example is based on near infrared reflectance (NIR) scans of soil samples. Both examples are analysed *without* regularisation, to show that the proposed method is able to reconstruct the scores and loadings of a standard PLS-R model computed using the NIPALS algorithm (Helland, 1988; Abdi, 2010), and both are also analysed *with* elastic net ($\ell_1$ and squared $\ell_2$) regularisation, to illustrate the extended regularisation, the variable selection, and the interpretation of the regression coefficient vector and the reconstructed scores and loadings.

## 3.1 Example 1: Simulated Data

The first example illustrates how the proposed regularised PLS-R method works in comparison to standard PLS-R. We will illustrate that without regularisation, the proposed method gives the same result as standard PLS-R, and further how the regularised PLS-R differs from the regular PLS-R in terms of the regression vector, and the weights and scores.

![9_image_0.png](9_image_0.png)

Figure 1: (a) The loading profiles that make up the data matrix. (b) The regression coefficient vectors found using regular PLS-R. Note that the green and blue curves are indistinguishable. (c) The first weight vectors for each method. (d) The first score vectors for each method.

The data were collected in a matrix, $\mathbf{X} \in \mathbb{R}^{n \times p}$, with $n = 50$ and $p = 101$, and were composed of three spectra, each with a Gaussian profile, as seen in Figure 1 (a). The data were constructed as

$$\mathbf{X} = \mathbf{t}_1\mathbf{p}_1^{\mathrm{T}} + \mathbf{t}_2\mathbf{p}_2^{\mathrm{T}} + \mathbf{t}_3\mathbf{p}_3^{\mathrm{T}} + \mathbf{E},$$

where $\mathbf{y} = \mathbf{t}_1 \sim \mathcal{N}(\mathbf{0}_n, \mathbf{I}_n)$ was sampled from a standard normal, $\mathbf{t}_2 = \frac{1}{2}|\mathbf{z}_2| + 0.35$ with $\mathbf{z}_2 \sim \mathcal{N}(\mathbf{0}_n, \mathbf{I}_n)$, $\mathbf{t}_3 = \mathbf{z}_3 - 1.25$ with $\mathbf{z}_3 \sim \mathcal{N}(\mathbf{0}_n, \mathbf{I}_n)$, and the entries of $\mathbf{E}$ are independent zero-mean normal with variance 0.01. There was thus a perfect correlation between $\mathbf{y}$ and $\mathbf{t}_1$; the correlation between $\mathbf{y}$ and $\mathbf{t}_2$ was about 0.213, and the correlation between $\mathbf{y}$ and $\mathbf{t}_3$ was about 0.005.
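A data-generating sketch along these lines is given below (assuming numpy; the centres and widths of the three Gaussian profiles are illustrative guesses, as the exact profiles are only shown in Figure 1 (a)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 101

# Three Gaussian loading profiles over the p variables (illustrative
# centres and widths; the paper does not specify them numerically).
grid = np.arange(p)
def gauss(mu, sigma):
    return np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
p1, p2, p3 = gauss(25, 6), gauss(50, 6), gauss(75, 6)

t1 = rng.standard_normal(n)                       # y = t1
t2 = 0.5 * np.abs(rng.standard_normal(n)) + 0.35
t3 = rng.standard_normal(n) - 1.25
E = 0.1 * rng.standard_normal((n, p))             # variance 0.01 noise

X = np.outer(t1, p1) + np.outer(t2, p2) + np.outer(t3, p3) + E
y = t1
# X and y should be centred column-wise before model fitting, as assumed
# in Section 2.1.
```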
We fit one regular PLS-R model, one PLS-R model based on the Krylov subspace formulation without regularisation, and one PLS-R model based on the Krylov subspace formulation with elastic net regularisation. The number of components extracted was $K = 20$, and for the elastic net regularisation, we used $\gamma = 0.00125$ and $\lambda = 0.02375$.

The PLS-R regression vectors are illustrated in Figure 1 (b). We see that the regular PLS-R regression vector, $\beta_{\mathrm{PLS}}$, picked up much noise in the data. We further see the unregularised PLS-R vector, $\hat{\beta}_{\mathrm{PLS}}$, found using the unregularised Krylov subspace formulation, and that it is almost indistinguishable from the regular PLS-R regression vector (the green and blue curves in Figure 1 (b)). In fact, the differences are attributed to numerical instability in the noise dimensions, because the differences disappeared for small numbers of components, and when the number of components was near the rank of the data matrix. The regression vector found when using the regularised Krylov subspace formulation had much less noise, and had seven variables (about 7 %) that were smaller than $5 \cdot 10^{-7}$ (considered as zero). Note that the regression vector for the PLS-R model computed using the NIPALS algorithm did not have any coefficients that were near zero.

The found and reconstructed weight and score vectors of the models are illustrated in Figure 1 (c) and Figure 1 (d), respectively. They are all highly correlated, implying that it would be possible to interpret them in a similar way.

![10_image_0.png](10_image_0.png)

Figure 2: (a) The regression vectors from the ordinary least squares model, $\beta_{\mathrm{OLS}}$, the PLS-R model computed using the NIPALS algorithm, $\beta_{\mathrm{PLS}}$, the PLS-R model computed using the Krylov formulation without regularisation, $\hat{\beta}_{\mathrm{PLS}}$ (No Reg.), and the PLS-R model computed using the Krylov formulation with elastic net regularisation, $\hat{\beta}_{\mathrm{PLS}}$ (Reg.). Note that the purple and blue lines are indistinguishable. (b) The first and second weight vectors for the PLS-R model computed using the NIPALS algorithm (red curve), and for the PLS-R model computed using the Krylov formulation with elastic net regularisation (blue curve). The numbers indicate the wavelengths (in nm). (c) The first and second score vectors for the PLS-R model computed using the NIPALS algorithm (red points), and for the PLS-R model computed using the Krylov formulation with elastic net regularisation (blue points). The numbers indicate the sample index.

## 3.2 Example 2: Soil Samples Measured With NIR

The second example contains soil samples originating from a long-term field experiment in Abisko, Sweden, described by Rinnan & Rinnan (2007); the data were obtained from http://www.models.life.ku.dk/NIRsoil. Each of 36 samples was collected from the 5 to 10 cm depth with three repetitions, yielding a total of $n = 108$ samples. The samples were scanned using NIR, in the wavelength range of 400–2498 nm at $p = 1050$ wavelengths. The target variable was soil organic matter (SOM, *e.g.* plant residues), which was measured as loss on ignition at 550 °C.

Again, we fit one regular PLS-R model and one PLS-R model based on the Krylov subspace formulation with elastic net regularisation. The number of components extracted was $K = 13$ for the regular PLS-R model, and it was $K = 50$ with $\gamma = 1.0 \cdot 10^{-5}$ and $\lambda = 1.0 \cdot 10^{-3}$ for the elastic net regularised PLS-R model based on the Krylov formulation. We also fit an ordinary least squares (OLS) regression model, and one PLS-R model using the Krylov formulation without the elastic net regularisation.

The PLS-R regression vectors are illustrated in Figure 2 (a). We see that the OLS vector is very noisy (green curve), and that the PLS-R regression vectors are less so (purple and blue curves).
Further, we see that the regression vector from the regularised PLS-R model (red curve) has many values close to zero, especially in the range of about 900–1750 nm. Many of the regression coefficient values are close to zero, and the vector has a sparse structure, with 27 coefficients, or about 2.5 %, being zero (smaller than $5 \cdot 10^{-7}$). Note that the regression vector for the PLS-R model computed using the NIPALS algorithm did not have any coefficients that were near zero.

The reconstructed first and second weight and score vectors, from the regularised PLS-R model, are illustrated in Figure 2 (b) and (c), respectively, together with the weight and score vectors from the PLS-R model computed using the NIPALS algorithm. We see that they are very close, implying that it would be possible to interpret them in a similar way.

## 4 Discussion And Conclusions

We have presented a simple way to use the Krylov formulation to solve the PLS-R problem, which allows additional regularisation terms to be added to the model. We illustrated the use of elastic net regularisation ($\ell_1$ and squared $\ell_2$ terms) for additional regularisation and variable selection, and demonstrated that the found regression vectors were sparse. Note, however, that while we illustrated that the proposed formulation allows sparse regression vectors, it also allows an analyst to impose any conceivable problem-relevant penalties in the PLS-R model. Further, using the Krylov formulation allows other solvers, for instance more efficient Krylov-based solvers, to be used for the PLS-R problem.

Further, we proposed an approach to approximate the scores and loadings for the PLS-R regression vector found using the Krylov formulation, which allows interpretations of the model in the same way as PLS-R models are interpreted when they are computed using *e.g.* the NIPALS algorithm.

We illustrated the utility of the model on simulated data, and on a real data set with soil sample data. Both examples showed that the Krylov PLS-R method gave regression coefficient vectors that had coefficients that were zero or close to zero, meaning that variable selection was performed. Further, both examples showed that it was possible to approximate weight and score vectors for the Krylov PLS-R model that were close to the PLS-R equivalents, and that thus can be used for model and data interpretation.

The proposed PLS-R model formulation opens the door to more elaborate regularisation in a PLS-R model, while still allowing corresponding scores, weights, and loadings to be approximated. Follow-up research could also focus on computational aspects, to *e.g.* speed up the computations required for the component reconstructions.

## References

Hervé Abdi. Partial least squares regression and projection on latent structure regression (PLS regression). *WIREs Computational Statistics*, 2(1):97–106, 2010.

Genevera I. Allen, Christine Peterson, Marina Vannucci, and Mirjana Maletić-Savatić. Regularized partial least squares with an application to NMR spectroscopy. *Statistical Analysis and Data Mining*, 6(4):302–314, 2013.

Matthew Barker and William Rayens. Partial least squares for discrimination. *Journal of Chemometrics*, 17:166–173, 2003.

Heinz H. Bauschke and Serge G. Kruk. Reflection-projection method for convex feasibility problems with an obtuse cone. *Journal of Optimization Theory and Applications*, 120(3):503–531, 2004.

Dimitri P. Bertsekas. *Nonlinear Programming*. Athena Scientific, Belmont, Ma., U.S.A., 1999.
M. Laura Bisani, Domenico Faraone, Sergio Clementi, Kim H. Esbensen, and Svante Wold. Principal components and partial least-squares analysis of the geochemistry of volcanic rocks from the aeolian archipelago. *Analytica Chimica Acta*, 150:129–143, 1983.

Åke Björck and Ulf G. Indahl. Fast and stable partial least squares modelling: A benchmark study with theoretical comments. *Journal of Chemometrics*, 31, 2017.

Anne-Laure Boulesteix and Korbinian Strimmer. Partial least squares: a versatile tool for the analysis of high-dimensional genomic data. *Briefings in Bioinformatics*, 8(1):32–44, 2007.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. *Foundations and Trends in Machine Learning*, 3(1):1–122, 2010.

Neil A. Butler and Michael C. Denham. The peculiar shrinkage properties of partial least squares regression. *Journal of the Royal Statistical Society*, 62(3):585–593, 2000.

Hyonho Chun and Sündüz Keleş. Sparse partial least squares regression for simultaneous dimension reduction and variable selection. *Journal of the Royal Statistical Society*, 72(1):3–25, 2010.

Patrick L. Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz (eds.), *Fixed-Point Algorithms for Inverse Problems in Science and Engineering*, pp. 185–212. New York: Springer, 2011.

Sijmen de Jong. SIMPLS: an alternative approach to partial least squares regression. *Chemometrics and Intelligent Laboratory Systems*, 18:251–263, 1993.

Lars Eldén. Partial least-squares vs. Lanczos bidiagonalization—I: analysis of a projection method for multiple regression. *Computational Statistics & Data Analysis*, 46:11–31, 2004.

Rolf Ergon. PLS post-processing by similarity transformation (PLS+ST): a simple alternative to OPLS. *Journal of Chemometrics*, 19(1):1–4, 2005.

Ildiko E. Frank and Jerome H. Friedman. A statistical view of some chemometrics regression tools. *Technometrics*, 35(2):109–135, 1993.

Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. *Computers & Mathematics with Applications*, 2(1):17–40, 1976.

Piotr S. Gromski, Howbeer Muhamadali, David I. Ellis, Yun Xu, Elon Correa, Michael L. Turner, and Royston Goodacre. A tutorial review: Metabolomics and partial least squares-discriminant analysis - a marriage of convenience or a shotgun wedding. *Analytica Chimica Acta*, 879(16):10–23, 2015.

Inge S. Helland. On the structure of partial least squares regression. *Communications in Statistics—Simulation and Computation*, 17(2):581–607, 1988.

Arthur E. Hoerl and Robert W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. *Technometrics*, 8:27–51, 1970.

Agnar Höskuldsson. PLS regression methods. *Journal of Chemometrics*, 2:211–228, 1988.

Agnar Höskuldsson. Analysis of latent structures in linear models. *Journal of Chemometrics*, 17:630–645, 2003.

Ricardo Barbosa Kloss, Marcos Vinicius Mussel Cirne, Samira Silva, Helio Pedrini, and William Robson Schwartz. Partial least squares image clustering. In L. R. Oliveira, A. L. Apolinário Junior, and R. P. Lemes (eds.), *Proceedings of the 28th Conference on Graphics, Patterns and Images (SIBGRAPI 2015)*, pp. 41–48, 2015.

Nicole Krämer. An overview on the shrinkage properties of partial least squares regression. *Computational Statistics*, 22:249–273, 2007.
Anjali Krishnana, Lynne J. Williams, Anthony Randal McIntosh, and Hervé Abdi. Partial least squares (PLS) methods for neuroimaging: A tutorial and review. *NeuroImage*, 56(2):455–475, 2011.

Olav M. Kvalheim, Tarja Rajalahti, and Reidar Arneberg. X-tended target projection (XTP)—comparison with orthogonal partial least squares (OPLS) and PLS post-processing by similarity transformation (PLS+ST). *Journal of Chemometrics*, 23(1):49–55, 2009.

Kim-Anh Lê Cao, Debra Rossouw, Christèle Robert-Granié, and Philippe Besse. A sparse PLS for variable selection when integrating omics data. *Statistical Applications in Genetics and Molecular Biology*, 7(1), 2008.

Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable AI: A review of machine learning interpretability methods. *Entropy (Basel)*, 23:1, 2020.

Rolf Manne. Analysis of two partial-least-squares algorithms for multivariate calibration. *Chemometrics and Intelligent Laboratory Systems*, 2:187–197, 1987.

William F. Massy. Principal components regression in exploratory statistical research. *Journal of the American Statistical Association*, 60:234–246, 1965.

Gregoria Mateos-Aparicio. Partial least squares (PLS) methods: Origins, evolution, and application to social sciences. *Communications in Statistics—Theory and Methods*, 40:2305–2317, 2011.

Tahseen Rabbani, Apollo Jain, Arjun Rajkumar, and Furong Huang. Practical and fast momentum-based power methods. In *Proceedings of the 2nd Annual Conference on Mathematical and Scientific Machine Learning*, volume 145 of *Proceedings of Machine Learning Research*, pp. 1–36. PMLR, 2021.

Riikka Rinnan and Åmund Rinnan. Application of near infrared reflectance (NIR) and fluorescence spectroscopy to analysis of microbiological and chemical properties of arctic soil. *Soil Biology and Biochemistry*, 39(7):1664–1673, 2007.

Roman Rosipal and Nicole Krämer. Overview and recent advances in partial least squares. In Craig Saunders, Marko Grobelnik, Steve Gunn, and John Shawe-Taylor (eds.), *Subspace, Latent Structure and Feature Selection*, pp. 34–51. Springer Berlin Heidelberg, 2006.

Michael Sjöström, Svante Wold, and Bengt Söderström. PLS discriminant plots. In E. S. Gelsema and L. N. Kanal (eds.), *Pattern Recognition in Practice*, pp. 461–470. Elsevier Science Publishers BV, North-Holland, 1986.

Lars Ståhle and Svante Wold. Partial least squares analysis with cross-validation for the two-class problem: a Monte Carlo study. *Journal of Chemometrics*, 1:185–196, 1987.

Johan Trygg and Svante Wold. Orthogonal projections to latent structures (O-PLS). *Journal of Chemometrics*, 16:119–128, 2002.

Patrícia Valderrama, Jez Willian B. Braga, and Ronei Jesus Poppi. Variable selection, outlier detection, and figures of merit estimation in a partial least-squares regression multivariate calibration model. A case study for the determination of quality parameters in the alcohol industry by near-infrared spectroscopy. *Journal of Agricultural Food Chemistry*, 55(21):8331–8338, 2007.

David S. Watkins. *The Matrix Eigenvalue Problem*. Society for Industrial and Applied Mathematics, Philadelphia, Pa., U.S.A., 2007.

Svante Wold, Harald Martens, and Herman Wold. The multivariate calibration problem in chemistry solved by the PLS method. *Lecture Notes in Mathematics*, 973:286–293, 1983.

Svante Wold, A. Ruhe, Herman Wold, and William J. Dunn III. The collinearity problem in linear regression. The partial least squares (PLS) approach to generalized inverses. *SIAM Journal on Scientific and Statistical Computing*, 5(3):735–743, 1984.
Svante Wold, Michael Sjöström, and Lennart Eriksson. PLS-regression: a basic tool of chemometrics. *Chemometrics and Intelligent Laboratory Systems*, 58(2):109–130, 2001.

Peng Xu, Bryan He, Christopher De Sa, Ioannis Mitliagkas, and Christopher Re. Accelerated stochastic power iteration. In Amos Storkey and Fernando Perez-Cruz (eds.), *Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics*, volume 84 of *Proceedings of Machine Learning Research*, pp. 58–67. PMLR, 2018.

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. *Journal of the Royal Statistical Society*, 67(2):301–320, 2005.
Review 1: Summary: Devised an algorithm to solve the (regularized) partial least squares (PLS) regression problem using the Krylov subspace formulation, where the objective is to minimize the squared error (on the subspace bases) plus an elastic net penalty ($\ell_1$ + $\ell_2$ regularization). The objective is solved via ADMM, after which a second optimization problem is introduced to construct parameters (scores and loadings) that remain close to the regularized solution $\hat{\boldsymbol{\beta}}_{PLS}$ and also satisfy the PLS constraints. The algorithm is applied to solve several real-world regression problems. Strengths and Weaknesses: I am not familiar with the PLS literature so I cannot evaluate the novelty and significance of the proposed method. In my opinion the submission does a good job introducing the problem setting (PLS, Krylov subspace, etc.) and explaining the optimization objective (relaxation/approximation, etc.). Experiments also illustrate the effectiveness of the added regularizations. I have the following concerns & questions. 1. After reading the main text, I'm still not sure if I understand the motivation for using PLS, which involves a rather opaque formulation. As mentioned by the authors in the introduction, the main advantage seems to be that dimension reduction is performed by taking the target information into account. If this is the case, then some empirical or theoretical comparison against alternative methods (such as PCR) would strengthen the manuscript. 2. Related to the previous point, for PCR it is well-known that a certain notion of "alignment" between the target and the data covariance affects the prediction error (for example see [Huang et al. 2020] and references therein). It would be nice to at least empirically evaluate the performance of the proposed method on synthetic datasets, where the ground truth and alignment conditions can be manipulated. Huang et al. 2020. Dimensionality reduction, regularization, and generalization in overparameterized regressions. 3. A large part of the main text is devoted to constructing the score/loading parameters from the approximate solution $\hat{\boldsymbol{\beta}}_{PLS}$. How does this compare to directly solving for the minimum $\ell_1 + \ell_2$ norm solution, instead of using explicit regularizations as in Equation (4)? 4. The time and space complexity of the proposed method is not analyzed. I suspect that due to the non-convexity of the formulation, a global optimization guarantee is probably difficult to obtain; but such limitations should be discussed more explicitly. 5. (minor) Please double-check the typos. For example, wouldn't the equality constraint after Equation (5) be $\boldsymbol{K\alpha}=\boldsymbol{z}$ instead? Requested Changes: Please refer to the weakness section above. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper studies the Partial Least Squares Regression (PLSR) method, a method for linear regression with dimensionality reduction. The main contribution of the paper is to propose a regularized version of this method utilizing a Krylov subspace reformulation, along with an approximate optimization method for the regularized problem. Strengths and Weaknesses: My overall feeling is that the topic of the paper may not be of wide interest to the community, and the contribution of the paper on this topic is quite questionable as well. The main contributions on the theory / methodology side seem to be in two sections, Section 2.3 and 2.4.
Section 2.3 utilizes the known Krylov subspace reformulation to incorporate a regularization into the problem, turning Eq (3) into Eq (5), and deriving an optimization procedure using ADMM. All of this seems to be standard, well-known practice. Then I have a hard time understanding the purpose of Section 2.4: is it some approximate optimization method? If so, then what is the advantage of the approximation? (Faster runtime, or some reformulation into some standard problem or software?) In contrast, what is the issue with the ADMM method in Section 2.3.1? For the experiments, I also don't understand what the conclusions are. Upon reading Section 3 it feels like the main claims are that the proposed optimization via the reformulation successfully reproduces the existing implementation (for the unregularized case), and then can also produce the regularized case. This feels quite standard and does not sound like a novel research result. Also, the figures seem mostly like basic visualizations of the learned parameters, and do not report more concrete comparisons of quantitative metrics, such as train / test errors, or runtime. I also think the topic of the paper (the PLSR method) sounds quite nuanced, and I am not sure how interesting it is to the broader ML/Stats community (at least from reading the paper). As the authors mentioned, the method is widely used in certain application areas in science, which may be a better audience for the present paper (where the method is considered standard). As for a general ML/stats method, the best connection I can draw is its similarity to Principal Components Regression (PCR), as the authors mentioned, which is another classical regression method based on data-dependent dimensionality reduction. However, I don't think either PCR or PLSR is as widely used or considered, at least as general methodologies, compared with e.g. Lasso, ridge regression, or elastic net. The theoretical advantages of PLSR are unclear (the weightings depend on both X and y, unlike PCR). The practical advantages such as interpretability are only briefly mentioned in the paper, without further explanations and/or corresponding results. Requested Changes: Please find my questions in the Strengths and Weaknesses part. Broader Impact Concerns: I don't see broader impact or ethical concerns regarding this paper. ================================================== Review 3: Summary: This paper proposes a regularization framework for partial least squares regression by exploiting a Krylov subspace formulation of the problem and adding an elastic net penalty on the regression coefficients. Furthermore, the paper formulates an optimization problem for reconstructing the weights, and hence the scores and loadings, from the estimated regression coefficients. The paper derives an ADMM algorithm for solving the former problem and a projected gradient descent algorithm for solving the latter. The proposed method is illustrated on simulated and real data. Overall, the paper addresses an interesting problem, and the proposed method is potentially useful for applications where simultaneous dimension reduction and variable selection are desired. However, the proposed optimization algorithms are standard, and no statistical or algorithmic properties are presented. Also, the practical issues of tuning parameter selection are not discussed. Therefore, the contributions of the paper seem fairly limited.
Strengths and Weaknesses:

Strengths:
1) The Krylov subspace formulation is less explored for computing purposes in the PLS regression literature. The idea of directly penalizing the regression coefficients in this formulation seems novel.
2) The proposed elastic net regularized problem leads to sparse regression coefficients, and hence performs variable selection and improves the interpretability of the PLS method.

Weaknesses:
1) Given the basis $\mathbf{K}$, problem (4) is just a generalized elastic net problem, or equivalently a subspace constrained elastic net problem, which has been studied by, e.g., Mouret, Brault, and Vahid Partovinia (2013, "Generalized elastic net regression," *JSM Proceedings*, 3457-3464). With the PLS regression and Krylov subspace structures taken into account, of course, the statistical properties of the proposed estimators may not be trivial. However, the paper did not go further in this direction.
2) The proposed optimization problem for reconstructing the weights is nonconvex, and its convergence properties under projected gradient descent are unclear. The term (19) in the objective function is highly nonconvex, with the weights appearing in a matrix inverse, which may cause instability.
3) One major advantage of the Krylov subspace formulation is that the $K$ vectors of regression coefficients can be obtained simultaneously. This important advantage, however, is lost in the reconstruction algorithm, which finds the $K$ vectors of weights in a sequential manner.
4) The tuning parameters $\gamma$ and $\lambda$ are critical to the performance of the elastic net regularized estimator. In the simulated and real data examples, these parameters seem to be arbitrarily specified, resulting in a very low level of sparsity (7% and 2.5%, respectively). With such low sparsity levels, the elastic net penalty does not essentially play a role, and the trade-off between data fitting and sparsity is not well illustrated.
5) The optimization problem for reconstruction involves five tuning parameters, which are too many and could be very difficult to choose in practice. The paper did not even mention how these parameters were specified in the data examples.

Requested Changes:
1) It would be interesting to understand how the proposed method performs in high-dimensional settings, in view of the recent theoretical developments in PLS regression (e.g., Cook and Forzani, 2019, "Partial least squares prediction in high-dimensional regression," *Ann. Statist.*, 47, 884-908).
2) The convergence properties of the projected gradient descent algorithm need to be better understood.
3) Is it possible to compute the $K$ vectors of weights simultaneously?
4) Why are the weights, scores, and loadings shrunk toward their unregularized counterparts rather than toward zero? What if the regularized regression coefficients are far away from the unregularized version?
5) Tuning parameter selection is an important part of the regularization problem, and should be carefully discussed. Cross-validation is usually a safe option, but would be too expensive for optimizing so many tuning parameters. Is it possible to derive an information criterion, such as BIC, in this case? To this end, an estimate of the degrees of freedom may be needed; see, e.g., Krämer and Sugiyama (2011, "The degrees of freedom of partial least squares regression," *JASA*, 106, 697-705).

Broader Impact Concerns: None.
==================================================

Review 4:

Summary: This paper studies the classical partial least squares regression (PLS-R) problem and its regularized version. The paper proposes an alternative way of expressing the regularized PLS-R problem as a constrained minimization problem and introduces an ADMM-type approach to solve it.

Strengths and Weaknesses:

Strengths:
- This paper considers an alternative framework for expressing partial least squares.
- This paper shows how this alternative expression can be easily combined with regularization.

Weaknesses:
- The writing of this paper can be improved a lot.
- The motivation of some extensions in this paper was not very clear.
- There is a lack of analysis of the proposed method.

Requested Changes:

1. Lemma 1. The presentation of Lemma 1 is strange and needs some revision.
- First, the statement requires \emph{If ${\mathcal{K}_K(...)}$ is a basis for weight vectors, ...}. But in the first sentence of the proof, the authors directly argue that this is a known result. If this is a known result, it should not be cast in the `if' statement.
- Avoid reusing the same notations. The notation $\mathbf{A}, \mathbf{v}$ already appears in Definition 1, so other notations should be used here to avoid confusion.

2. An explicit example of the matrix $\mathbf{K}$. Since the matrix $\mathbf{K}$ appears very frequently in the paper but is not explicitly defined, I would recommend that the authors at least provide a clear way to construct it. For instance, one can first form a matrix whose columns are $X^Ty, (X^TX)^{-1}X^Ty, ...$ and then perform an orthonormalization.

3. Section 2.3.1. Some clarifications are needed to make this part clearer.
- Is $\mathbf{\alpha} = \mathbf{x}$?
- The authors should add some comments on the meaning of $\mathbf{x}, \mathbf{z}$ and how they appear in a usual ADMM algorithm.
- Running the algorithm requires an initial choice of $\mathbf{x}^{(0)}, \mathbf{z}^{(0)}$. This should be clarified.
- Will the output of the optimization from this part always be the same as the output of equation (4)? If so, the authors should provide a proof. If not, then the authors should argue why we want to do it.

4. Motivation of Section 2.4. I was confused about why we need to match the result from equation (4) to equation (2). It is known that the regularized solution will generally not be the same as the unregularized solution. Why do we need this procedure? A common motivation for regularization is the high-dimensional scenario where the unregularized problem cannot be solved. But if this is the case, equation (2) is not solvable, so the method in Section 2.4 cannot be used.

5. Why will the solution $\hat \beta_{PLS,k}$ be different from $\hat \beta_{PLS}$? It seems to me that a simple solution to equations (7)-(16) is just setting all tilde versions to be the same as the non-tilde versions, which eventually leads to $\hat \beta_{PLS}=\hat \beta_{PLS,k}$ when $k=K$. Why is this not a feasible solution? Does the problem come from $k\neq K$? However, $k$ has to be smaller than or equal to $K$, otherwise the formulation is ill-defined. In the case $k\leq K$, the solution seems to be simply the PLS-R with $k$ basis vectors rather than $K$ basis vectors. Also, how does this relate to the problem in Section 2.3?

6. The definition of $\tilde{{W}}$, $\tilde{{P}}$, $\tilde{{C}}$. The definition of $\tilde W_k$ can be simply written as $\tilde W_{k+1} = [\tilde w_{1},\cdots, \tilde w_{k+1}]$, where each $\tilde w_{j} = \mathbf{K} \tilde \omega_{j}$.
The presentation on page 7, line 4 makes it more involved. Also, it should be clarified that $\tilde P_{k+1}, \tilde C_{k+1}$ consist of the column vectors $\tilde p_{j}, \tilde c_{j}$. But these column vectors are defined later; they should be defined along with these matrices.

7. Definition of $\hat \beta_{PLS,k}$. This vector is used and stated in equation (29), but its formal definition is only given later, in Section 2.4.4. Please define this vector when it first appears.

8. Theorem 1. Why is this result stated as a theorem rather than a lemma (compared with Lemma 1)? A theorem is generally a more important and useful result than a lemma. How does Theorem 1 help us? It only shows that the solution lies in the Krylov space, but it says nothing about its optimality, so its prediction power could be very bad.

9. Choice of the step size $\eta$. The step size always plays a key role in a gradient descent algorithm. The authors should provide some discussion of its choice and even some analysis of its feasible range.

Broader Impact Concerns: I do not see any concerns on the broader impact.

==================================================

Metareview:

Recommendation: Reject

Comment: This paper proposed a reformulation of partial least squares regression (PLS-R) using a Krylov subspace formulation, studied an optimization problem for reconstructing the weights, and used ADMM and projected GD to solve it. The authors argued that the reformulation of PLS-R is more principled and brings a few benefits, such as allowing arbitrary regularizations. However, a few reviewers found that these benefits are not of interest to the ML community, and one reviewer argued that it would be more interesting to show the advantage in metrics such as training/test error. Furthermore, one reviewer suggested a comparison to PCR to justify the advantage of the proposed method, but the authors did not respond. Another reviewer pointed out that the practical advantages of the studied formulation, such as interpretability, are only briefly mentioned, with no corresponding results to support them. I agree with the reviewers that the benefit of "allowing any regularization" is not of sufficient interest, and other benefits that may be of interest are not well demonstrated. Due to these reasons, I recommend rejection.

==================================================
# EdiBERT: A Generative Model For Image Editing

Thibaut Issenhuth t.issenhuth@criteo.com Criteo AI Lab, Paris, France LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France

Ugo Tanielian *u.tanielian@criteo.com* Criteo AI Lab, Paris, France

Jérémie Mary j.mary@criteo.com Criteo AI Lab, Paris, France

David Picard *david.picard@enpc.fr* LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France

Reviewed on OpenReview: *https://openreview.net/forum?id=GRBbtkW3Lp*

## Abstract

Advances in computer vision are pushing the limits of image manipulation, with generative models sampling highly realistic, detailed images on various tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, inpainting, or image compositing, one always aims at generating a realistic image from a low-quality one. In this paper, we aim at making a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer that re-samples image patches conditionally to a given image. Using one generic objective, we show that the model resulting from a single training matches state-of-the-art GAN inversion methods on several tasks: image denoising, image completion, and image composition. We also provide several insights on the latent space of vector-quantized auto-encoders, such as locality and reconstruction capabilities. The code is available at https://github.com/EdiBERT4ImageManipulation/EdiBERT.

## 1 Introduction

Significant progress in image generation has been made in the past few years, thanks notably to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). For example, the StyleGAN architecture (Karras et al., 2019; 2020b) yields state-of-the-art results in data-driven unconditional generative image modeling. Empirical studies have also shown the usefulness of GAN architectures when it comes to image manipulation. By following specific directions in the latent space, one can modify an image attribute such as the gender, age, or pose of a person (Shen et al., 2020), or the angle (Jahanian et al., 2019). However, since the whole picture is generated from a Gaussian vector, changing some undesired elements while keeping the others frozen is difficult. To solve this problem, editing algorithms involving optimization procedures have been proposed (Abdal et al., 2019; 2020), but with one main caveat: the results are not convincing when manipulating complex visuals (Niemeyer & Geiger, 2021) (cf. the experimental section for visual results).

Independently, Van den Oord et al. (2017) proposed VQVAE, a promising latent representation obtained by training an encoder/decoder with a discrete latent space. The authors demonstrate the possibility of embedding images in sequences of discrete tokens by borrowing ideas from vector quantization (VQ), paving the way for the generation of images with autoregressive transformer models (Ramesh et al., 2021; Esser et al., 2021b). Building on this literature, we argue that one of the benefits of this representation is that each token in the sequence mostly codes for a localized patch of pixels (see Section 3.4), thus opening the possibility for efficient localized latent editing.

![1_image_0.png](1_image_0.png)

Figure 1: Using a single and straightforward training, EdiBERT can tackle a wide variety of image editing tasks.
In this image, the top row is the input, while the second and third rows are different samples from EdiBERT, showing realism, consistency, and variety.

Aiming to build a unified approach for image manipulation, we propose a method that leverages both the spatial properties of the discrete vector-quantized representation and a model that performs attention over the whole image. To do so, we train a bidirectional transformer network based on ideas from the language model BERT (Devlin et al., 2019), and name the resulting model EdiBERT. During training, EdiBERT tries to recover the original tokens of a perturbed sequence through a bidirectional attention scheme. In computer vision, this approach has mainly been studied in the context of self-supervised representation learning (Bao et al., 2022; He et al., 2022). We advocate that training a single model using this generic objective provides a sound way to obtain a model able to tackle several editing tasks. Finally, to practically handle these tasks, we derive two different sampling algorithms: one dedicated to image denoising and editing, and a second one for inpainting. To better visualize the abilities of EdiBERT after a single training, we show in Figure 1 that the same model can be used in many different image manipulation tasks such as denoising, inpainting (or completion), compositing, and scribble-based editing. To sum up, our contributions are the following:

+ We analyze the VQ latent representations, illustrate their spatial properties, and show how to improve the reconstruction capabilities of VQGAN using a post-processing procedure that better recovers the pixel content outside of the edited region.

+ We show how to derive two different sampling algorithms from a single bidirectional transformer: one for the task of image denoising, where the locations of the edits are unknown, and a second one for inpainting or image compositing, where the mask specifying the area to edit is known.

+ Finally, we show that this simple, generic training algorithm, along with its companion post-processing, allows us to achieve competitive results on various image manipulation tasks.

## 2 Related Work

We start this section by introducing transformer models for image generation. Then, we motivate the use of the VQ representation and bidirectional models for image manipulation.

## 2.1 Autoregressive Image Generation

The use of autoregressive transformers in the field of generative modeling (Parmar et al., 2018) has been made possible by two simultaneous research branches. First, the extensive deployment of attention mechanisms such as non-local means algorithms (Buades et al., 2005), non-local neural networks (Wang et al., 2018), and also attention layers in GANs (Zhang et al., 2019; Hudson & Zitnick, 2021). Second, the development of both classifiers and generative models sequentially inferring pixels via autoregressive convolutional networks such as PixelCNN (Van den Oord et al., 2016a;b).

The self-attention mechanism (Vaswani et al., 2017), which has now become ubiquitous in computer vision, is quickly recalled here: a sequence $X \in \mathbb{R}^{L\times d}$, where $L$ is the length of the sequence, is mapped by a position-wise linear layer to a query $Q \in \mathbb{R}^{L\times d_k}$, a key $K \in \mathbb{R}^{L\times d_k}$, and a value $V \in \mathbb{R}^{L\times d_v}$.
The self-attention layer is then:

$$\mathrm{attn}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{t}}{\sqrt{d_{k}}}\right)V\in\mathbb{R}^{L\times d_{v}}\qquad(1)$$

While autoregressive transformers allow a principled log-likelihood estimation of the data, attention layers have a complexity scaling with the square of the sequence length, a clear bottleneck when scaling to high-resolution images. To reduce the size of these sequences, Van den Oord et al. (2017) proposed the use of a discrete representation. In this framework, an encoder E, a decoder D, and a codebook/dictionary Z are learned simultaneously to represent images with a single sequence of tokens. Esser et al. (2021b) later trained an autoregressive model on these token sequences, stressing that high-capacity transformers can generate realistic high-resolution images. The framework consists of three steps:

1. Training simultaneously a set of encoder/decoder/codebook (*E, D, Z*) by combining reconstruction, commitment, and adversarial losses. The reconstruction loss is a perceptual distance (Zhang et al., 2018). The commitment loss (Van den Oord et al., 2017) pushes the codebook towards the output of the encoder using a quantization loss. The adversarial loss is the vanilla GAN loss defined in Goodfellow et al. (2014). The training objective is:

$$E^{\star},D^{\star},Z^{\star}=\operatorname*{arg\,min}_{E,D,Z}\ \left[L_{\text{rec.}}(E,D,Z)+L_{\text{commit.}}(E,Z)+\lambda L_{\text{adv.}}(\{E,D,Z\})\right].\qquad(2)$$

2. Training an autoregressive transformer to maximize the log-likelihood of the encoded sequences.

3. At inference, sampling a sequence with the transformer and decoding it with the decoder D.

This vector-quantized representation was later improved by Yu et al. (2021a) and used by Yu et al. (2022) to create PARTI, a state-of-the-art text-to-image generative model. Interestingly, our work EdiBERT builds on top of the first step of VQGAN, and also requires the training of the triplet (*E, D, Z*) following equation 2.

## 2.2 Bidirectional Attention

The main property of autoregressive models is that they only perform attention on previous tokens, making them inadequate when dealing with image manipulation (Esser et al., 2021a). Some works alleviate this bias in different ways. Yang et al. (2019) learn an autoregressive model on random permutations of the ordering. Cao et al. (2021) propose a model where missing tokens are inferred autoregressively, conditionally on the set of kept tokens. Similarly, Wan et al. (2021) use an auto-regressive procedure conditioned on the masked image, while Yu et al. (2021b) use BERT training with [MASK] tokens and Gibbs sampling. While this setting is ideal for tasks with masked tokens such as inpainting, it is ill-posed for scribble-based editing and insertion without existing paired datasets. By contrast, EdiBERT tackles all tasks without any need for supervision.

Finally, Esser et al. (2021a) train ImageBART, a multinomial diffusion process (Hoogeboom et al., 2021) in the discrete latent space of VQGAN. Each generated sequence is conditioned on the previous one and performs attention over the whole image. However, this method is computationally heavy since it requires making $N \times L$ inferences, where $N$ is the number of generated sequences and $L$ is the number of tokens in the sequence.
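As a concrete reference for the attention operation recalled in equation 1, the following is a minimal NumPy sketch (illustrative only; the function and variable names are ours, not from any released code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K: (L, d_k); V: (L, d_v). Returns (L, d_v), as in equation 1.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (L, L) token-to-token similarities
    return softmax(scores, axis=-1) @ V

# Toy usage: L = 4 tokens, d_k = d_v = 8.
rng = np.random.default_rng(0)
L, d_k, d_v = 4, 8, 8
out = attention(rng.normal(size=(L, d_k)),
                rng.normal(size=(L, d_k)),
                rng.normal(size=(L, d_v)))
assert out.shape == (L, d_v)
```

Note that the `scores` matrix has shape (L, L), which makes the quadratic cost in the sequence length explicit.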
A more efficient way to perform bidirectional attention for image generation has been proposed in MaskGIT (Chang et al., 2022). MaskGIT consists of training with a BERT-like objective (Devlin et al., 2019) on sequences randomly perturbed with [MASK] tokens, and generating images with a parallel decoding scheme. Similarly, Zhang et al. (2021) propose to use a masking-based strategy to perform conditional image editing with bidirectional attention mechanisms. However, they still require specific conditional data to learn their editing model. In this paper, we argue that by performing bidirectional attention over all the tokens and learning with a denoising objective (tokens perturbed by randomization, not with [MASK] tokens), it is possible to train a single model tackling many editing tasks.

## 2.3 Unifying Image Manipulation

Initially, image manipulation methods were implemented without any trainable parameters. Image completion was tackled using nearest-neighbor techniques along with a large dataset of scenes (Hays & Efros, 2007). As for image insertion, blending methods were widely used, such as Laplacian pyramids (Burt & Adelson, 1987).

In recent years, image manipulation has benefited from the advances of deep generative models. A first line of work has consisted of gathering datasets of corrupted and target images to train conditional generative models. By doing so, one can learn a mapping from any corrupted image to a real one. For example, Liu et al. (2021) propose an encoder-decoder architecture for sketch-guided image inpainting. However, in all cases, a dataset with both types of images is required, thereby limiting the applicability.

To avoid this dependency, a second idea - known as GAN inversion - leverages pre-trained unconditional GANs. These methods work by projecting edited images on the manifold of real images learned by the pre-trained GAN. The projection can be solved either by optimization (Abdal et al., 2019; 2020; Daras et al., 2021), or with an encoder mapping to the latent space (Chai et al., 2021; Richardson et al., 2021; Tov et al., 2021). The advantage of these GAN-based methods is that one benefits from the outstanding properties of StyleGAN, the state of the art in image generation. However, these methods rely on a task-specific loss function that needs to be defined and optimized.

More recently, another line of research is based on the development of score-based models (Song et al., 2020): Meng et al. (2022) use Langevin dynamics for image editing, and Esser et al. (2021a) combine discrete diffusion models (Hoogeboom et al., 2021; Austin et al., 2021) with the discrete vector-quantized representations from VQGANs.

## 3 Motivating EdiBERT For Image Editing

This section gives a global description of the proposed EdiBERT model. We start with notations before describing the different steps leading to BERT-based editing.

## 3.1 Discrete Auto-Encoder VQGAN

Let $I$ be an image with width $w$, height $h$, and $c$ channels; $I$ thus belongs to $\mathbb{R}^{h\times w\times c}$. Let (*E, D, Z*) be respectively the encoder, decoder, and codebook defined in VQVAE and VQGAN (Van den Oord et al., 2017; Esser et al., 2021b). The codebook $Z$ consists of a finite number of tokens with fixed vectors in an embedding space: $Z = \{t_1, \ldots, t_N\}$, with $t_k \in \mathbb{R}^d$ and $N$ being the cardinality of the codebook.
For any given image $I$, the encoder $E$ outputs a vector $E(I) \in \mathbb{R}^{L\times d}$, which is then quantized and reshaped into a sequence $s$ of length $L$ as follows:

$$s=\left(\operatorname*{arg\,min}_{z\in Z}\|E(I)_{1}-z\|,\ \ldots,\ \operatorname*{arg\,min}_{z\in Z}\|E(I)_{L}-z\|\right)=Q_{Z}(E(I)),\qquad(3)$$

where $E(I)_l = E(I)_{l,:} \in \mathbb{R}^d$ is the feature vector of $E(I)$ at position $l$, and $Q_Z$ refers to the quantization operation using the codebook $Z$. Recall that, after the quantization step, one gets a sequence composed of $L$ codebook elements, thus $s \in Z^L$. After feeding the codebook embeddings to the decoder $D$, the reconstructed image becomes $\hat{I} = D(Q_Z(E(I)))$.

Let $\mathcal{D}$ denote the available image dataset. From a pre-trained encoder $E$ and codebook $Z$, one can transform the image dataset $\mathcal{D}$ into a dataset of token sequences $\mathcal{D}_s := \{Q_Z(E(I)),\ I \in \mathcal{D}\}$. When learning transformers on sequences of tokens, the practitioner directly works with $\mathcal{D}_s$.

## 3.2 Learning Sequences With Autoregressive Models

The following sections aim at motivating the training objective of the EdiBERT model. To begin with, let $p_\theta$ be a transformer model, parameterized with $\theta \in \Theta$, trained on $\mathcal{D}_s$. For each position $i$ in $s$, we write $p^i_\theta(\cdot\,|\,s)$ for the modeled distribution of tokens at position $i$ conditionally on $s$. When training an autoregressive transformer on the discrete sequences of tokens $\mathcal{D}_s$ (Esser et al., 2021b), one needs to compute the likelihood $p_\theta(s)$ of each given sequence $s = (s_1, \ldots, s_L) \in \mathcal{D}_s$ as follows:

$$p_{\theta}(s)=\prod_{i=1}^{L}\ p_{\theta}^{i}(s_{i}\,|\,s_{<i}),\quad\text{with }s_{<i}=(s_{1},\ldots,s_{i-1}).\qquad(4)$$

Conditional distributions $p^i_\theta(s_i\,|\,s_{<i})$ are computed using a causal left-to-right attention mask. The final objective of the autoregressive model is to find the best set of parameters within $\Theta$:

$$\operatorname*{arg\,max}_{\theta\in\Theta}\ \mathbb{E}_{s\in\mathcal{D}_{s}}\ \log p_{\theta}(s).\qquad(5)$$

Limitations of the model. While this setting is well suited for unconditional image generation, it is ill-posed for image manipulation tasks, as shown by Esser et al. (2021a). In the case of scribble-based editing or inpainting, one wants to resample tokens conditionally on the whole image, so that the model has all the information at its disposal.

## 3.3 A Unique Training Objective For EdiBERT

Let us define the training objective for EdiBERT. For a sequence $s = (s_1, \ldots, s_L)$, a function $\varphi$ randomly selects a subset of $k$ indices $\{\varphi_1, \ldots, \varphi_k\}$ with $\varphi_k < L$. At each selected position $\varphi_i$, a perturbation is applied to the token $s_{\varphi_i}$: we draw a random token with probability $p$, or keep the same token with probability $1-p$. Consequently, the perturbed token $\tilde{s}_{\varphi_i}$ becomes:

$$\tilde{s}_{\varphi_i} \sim \mathbb{U}(Z) \quad \text{with probability } p, \qquad \tilde{s}_{\varphi_i} = s_{\varphi_i} \quad \text{with probability } 1-p,$$

where $\mathbb{U}(Z)$ refers to the uniform distribution on the space of tokens $Z$. Similarly to Bao et al. (2022), the sampling function $\varphi$ is defined with a 2D selection strategy, and the positions are selected by drawing random 2D rectangles; see Figure 2. Contrary to Bao et al. (2022) and Devlin et al. (2019), we only use random tokens from the codebook and no [MASK] tokens. We argue that this setting corresponds better to the cases of denoising and editing, where tokens have to be sampled conditionally on an entire perturbed sequence.
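To make this corruption procedure concrete, here is a minimal NumPy sketch of the 2D rectangle perturbation described above (illustrative only: all names are ours, and the rectangle-size bounds follow the values reported in Appendix A.1):

```python
import numpy as np

def perturb_tokens(seq_2d, codebook_size, p=0.9, rng=None):
    """Draw a random rectangle on the (h, w) token grid and, inside it,
    replace each token by a uniformly random codebook index with
    probability p (tokens keep their value with probability 1 - p)."""
    rng = rng or np.random.default_rng()
    h, w = seq_2d.shape
    # Rectangle sides drawn uniformly in [0.2, 0.5] of the grid size (Appendix A.1).
    rh = rng.integers(int(0.2 * h), int(0.5 * h) + 1)
    rw = rng.integers(int(0.2 * w), int(0.5 * w) + 1)
    top = rng.integers(0, h - rh + 1)
    left = rng.integers(0, w - rw + 1)

    perturbed = seq_2d.copy()
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + rh, left:left + rw] = True
    randomize = mask & (rng.random((h, w)) < p)
    perturbed[randomize] = rng.integers(0, codebook_size, size=randomize.sum())
    return perturbed, mask

# Toy usage on a 16x16 token grid with a codebook of 1024 entries.
tokens = np.random.default_rng(0).integers(0, 1024, size=(16, 16))
noisy, mask = perturb_tokens(tokens, codebook_size=1024, p=0.9)
```

During training, only positions inside the rectangle (`mask`) would contribute to the loss, mirroring the back-propagation rule given in Appendix A.1.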
Let us now call $\tilde{s} = (s_1, \ldots, \tilde{s}_{\varphi_1}, \ldots, \tilde{s}_{\varphi_k}, \ldots, s_L)$ the perturbed sequence, and $\tilde{\mathcal{D}}_s = \{\tilde{s},\ s \in \mathcal{D}_s\}$ the perturbed dataset. The training of EdiBERT optimizes the following objective:

$$\operatorname*{arg\,max}_{\theta\in\Theta}\ \mathbb{E}_{s\in\mathcal{D}_{s}}\ \frac{1}{k}\ \sum_{i=1}^{k}\log p_{\theta}^{\varphi_i}(s_{\varphi_{i}}\,|\,\tilde{s}).\qquad(6)$$

![5_image_0.png](5_image_0.png)

Figure 2: The 2D selection and randomization strategy for the training of our bidirectional transformer: EdiBERT is trained on sequences where localized patches of tokens have been perturbed.

Contrary to equation 5, we note that the objective in equation 6 does not require a causal left-to-right attention mask. Instead, attention can be performed over the whole input sequence.

Sampling from EdiBERT: Wang & Cho (2019) show that it is possible to generate realistic samples with a BERT model starting from a random initialization. However, compared with standard autoregressive language models, the authors stress that BERT generations are more diverse but of slightly worse quality. Building on the findings of Wang & Cho (2019), we do not aim to use BERT for pure unconditional sequence generation, but rather to improve an already existing sequence of tokens. In EdiBERT, for any given position $i$ in $s$, a token is sampled according to the multinomial distribution $p^i_\theta(\cdot\,|\,s)$.

## 3.4 On The Locality Of Vector Quantization Encoding

In this paper, we argue that one of the main advantages of EdiBERT comes from the VQ latent space proposed by Van den Oord et al. (2017), where each image is encoded in a discrete sequence of tokens. In this section, we illustrate with simple visualizations the properties of this VQGAN encoding. We explore the spatial correspondence between the position of a token in the sequence and a set of pixels in the encoded image. We aim at answering the following question: do local modifications of the image lead to local modifications of the latent representation, and *vice versa*?

Modifying the image. To answer this question, images are voluntarily perturbed with grey masks ($i \rightarrow i_m$). Then, we encode the two images, quantize their representations using a pre-trained codebook, and plot the distance between the two latent representations: $\|Q_Z(E(i)) - Q_Z(E(i_m))\|_2^2$. The results are shown in the first row of Figure 3. Due to the large receptive field of the encoder, tokens can be influenced by distant parts of the image: the down-sampled mask does not cover all of the modified tokens. However, tokens that are largely modified are either inside, or very close to, the down-sampled mask.

Modifying the latent space. To understand the correspondence between tokens and pixels, we stress how one can easily manipulate images using the discrete latent space. In Figure 3, we show that cutting a specific area of a source image to insert it in a different location of another image is possible simply by replacing the corresponding tokens in both sequences. This spatial correspondence between the VQGAN latent space and the image is interesting for localized image editing tasks, *i.e.* tasks that require modifying only a subset of pixels without altering the other ones.

![6_image_0.png](6_image_0.png)

Figure 3: Each token in the sequence is tied to a small spatial area in the decoded image. In the 1st row: we voluntarily perturb images and display the variations among the tokens in the latent space.
The heatmaps represent the distance (red is high) between the tokens of the original image and the tokens of the perturbed image. In the 2nd row: we stress how collages of images can easily be done with this discrete latent representation: the third and fourth images are generated by the decoder from a latent-space collage.

## 3.5 On The Reconstruction Capabilities Of Vector Quantization Encoding

A limitation of the framework resides in the use of the vector quantization operation and the induced loss of information. Indeed, we observe in Figure 4 that VQGAN struggles to reconstruct high-frequency details, for example complex backgrounds on the FFHQ dataset (Karras et al., 2019). To improve the reconstruction capabilities of VQGANs, we propose a simple optimization procedure over the latent-space vectors. The objective is to find the latent vectors that minimize the LPIPS distance (Zhang et al., 2018) between the target image and the decoded reconstruction. We initialize the procedure from the output of the encoder $E(I)$, and optimize the objective with gradient descent. Figure 4 shows how this procedure improves the inversion capabilities of VQGAN, making it better than GAN inversion methods (Abdal et al., 2020). A potential explanation of the limited reconstruction capabilities of VQGAN is displayed in Figure 5: the latent vectors of the codebook might suffer from a very low rank. The optimization procedure seems to solve this, since the latent vectors span many more dimensions of the embedding space after a few hundred optimization steps.

## 4 Image Editing With EdiBERT

Baselines. For each task, we run comparisons with baselines and state-of-the-art models based on GAN inversion. On FFHQ, we compare to Image2StyleGAN++ (Abdal et al., 2020) applied to pre-trained StyleGANs: StyleGAN2 (Karras et al., 2020b) and StyleGAN2-ADA (Karras et al., 2020a). Besides, we run the solution proposed by Chai et al. (2021), where a StyleGAN2 model is inverted using a trained encoder. Finally, we use In-Domain GAN (Zhu et al., 2020), a hybrid method combining an encoder with an optimization procedure minimizing reconstruction losses. We also compare to Co-Modulated GANs (Zhao et al., 2020), a conditional GAN for inpainting.

Metrics. We follow the work of Chai et al. (2021) and use metrics assessing both fidelity and distribution fitting. The masked L1 metric (Chai et al., 2021) measures the closeness between the generated image and the source image outside the edited areas. The density/coverage metrics (Naeem et al., 2020) are robust versions of the precision/recall metrics. Intuitively, density measures fidelity while coverage measures diversity. Finally, the FID (Heusel et al., 2017) quantifies the distance between generated and target distributions. Moreover, we perform a user study on FFHQ image compositing.

![7_image_0.png](7_image_0.png)

Figure 4: Comparison of the reconstruction capabilities of VQGAN + optimization to two GAN inversion methods, ID-GAN (Zhu et al., 2020) and I2SG++ (Abdal et al., 2020). Averaged LPIPS is computed on the FFHQ validation set.

![7_image_1.png](7_image_1.png)

Figure 5: Analysis of the reconstruction capabilities of VQGAN. On the left, we see that both the L1 and perceptual (LPIPS) losses between original and reconstructed images significantly decrease when optimizing LPIPS over the latent vectors of VQGAN. This may be a consequence of the higher number of dimensions spanned by the latent vectors (on the right) after the optimization (allowing for more complex reconstructions).
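As an illustration of the latent refinement of Section 3.5, a minimal PyTorch-style sketch is given below (assuming the `lpips` package of Zhang et al. (2018); `decoder` and `z_init` are hypothetical handles on a trained VQGAN decoder and its encoder output, and the hyper-parameters are indicative):

```python
import torch
import lpips  # pip install lpips (perceptual metric of Zhang et al., 2018)

def refine_latents(image, z_init, decoder, steps=200, lr=0.05):
    """Gradient descent on the continuous latent vectors so that the
    decoded image gets perceptually closer to the target (Section 3.5)."""
    perceptual = lpips.LPIPS(net='vgg')
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = perceptual(decoder(z), image).mean()
        loss.backward()
        opt.step()
    return z.detach()
```

The procedure only requires the decoder to be differentiable; the quantization step is bypassed, since the refined vectors are no longer constrained to lie on the codebook.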
More details and quantitative results on LSUN Bedroom are presented in the Appendix.

## 4.1 Localized Image Denoising

Image denoising aims at improving the quality of a pre-generated image or of a locally perturbed one. The model has to work without any information on the localization of the perturbations. This means we need to find and replace the perturbed tokens with more likely ones to recover a realistic image. Thus, given a sequence $s = (s_1, \ldots, s_L)$, we want to:

1. Detect the tokens that do not fit properly in the sequence $s$.

2. Change them for new tokens increasing the likelihood of the new sequence.

![8_image_0.png](8_image_0.png)

Figure 6: Image denoising with EdiBERT: the color in the 4 different heatmaps is proportional to the negative likelihood of the token. Tokens with a lower likelihood appear in red in the heatmap and have a higher probability of being sampled and edited. Consequently, the conditional distributions output by EdiBERT are an efficient tool to detect anomalies and artifacts in the image.

We desire a significantly more likely sequence with as few token amendments as possible. To do so, we measure the likelihood of each token $s_i$ given the whole sequence $s$, aiming to compute $p(s_i|s)$, and replace the least probable tokens, considering them independently. That is, we propose to use the conditional probabilities output by the model in order to detect and re-sample the *least likely* tokens. Some examples of image denoising are presented in Figure 6, and we observe that EdiBERT is able to detect artifacts and replace them with more likely tokens. The full algorithm is given in Algorithm 1.

Algorithm 1: Image denoising
Requires: Sequence $(s_1, \ldots, s_L)$, BERT model $p_\theta$, number of iterations $T$;
for *iterations in [0, T]* do
    Compute $p_i = \mathrm{logit}(-p^i_\theta(s_i|s)),\ \forall i \in [1, L]$;
    Sample a position $p \sim (p_1, \ldots, p_L)$ (*less likely positions are more likely to be drawn*);
    Sample a token $t \in Z \sim p^p_\theta(\cdot|s)$;
    Insert the sampled token: $s_p \leftarrow t$;
end
Image ← Decoder($s$);
Result: Image

## 4.2 Image Inpainting

In this setting, we have access to a masked image $i_m \in \mathbb{R}^{h\times w\times c}$ along with the location of the binary mask $m \in \mathbb{R}^{h\times w}$. $i_m$ has been obtained by masking an image $i \in \mathbb{R}^{h\times w\times c}$ as follows: $i_m = i \odot m$, with $\odot$ the pointwise multiplication. The goal of image inpainting is to generate an image $\hat{i}$ that is both realistic (high density) and conserves the non-masked parts, that is $\hat{i} \odot (1 - m) = i \odot (1 - m)$.

Among the different tasks in image manipulation, image inpainting stands out. Indeed, when masking a specific area of an image, one should not rely on the pixels within the mask to recover the target image.

![9_image_0.png](9_image_0.png)

Figure 7: Image inpainting with EdiBERT compared to baselines such as LC (Chai et al., 2021) and I2SG (Abdal et al., 2020). Note that Com-GAN (Zhao et al., 2020) is specialized for image inpainting and was trained on paired datasets (masked image, target image); it cannot perform other image editing tasks.

The image inpainting task thus requires specific care to reach state-of-the-art performance; this is why we added five different elements to our approach, and validated these elements with visual results in Figure 13.

1. **Randomization:** to erase the mask's influence, the tokens within the mask are given random values.

2. **Dilation of the mask:** as shown in Figure 3, some tokens outside of the down-sampled mask in the latent space are also impacted by the mask on the image. Modifying only tokens inside the down-sampled mask might not be enough and could lead to images with irregularities on the borders.
As a solution, we apply a dilation to the down-sampled mask and show in Figure 13 that it helps better blend the completion into the target image, since the boundaries are removed.

3. **Spiral ordering:** since there is no pre-defined ordering of positions in EdiBERT, one can look for an optimal sampling of positions. We argue that by sampling positions randomly, one does not fully leverage the spatial location of the mask. Instead, we propose a spiral ordering that goes from the border to the inside of the mask. Qualitative and quantitative results in Figure 13 and Table 2 confirm the advantage of this ordering.

4. **Periodic image collage:** to preserve fidelity to the original image, we periodically perform a collage between the masked image and the decoded image. We observe in Figure 13 that, without this collage trick, the reconstruction can diverge too much from the input image.

5. **Online optimization on latent sequences:** to improve fidelity to the masked image $i_m$, the final stage of the algorithm consists of an optimization procedure on the latent sequence $s \in \mathbb{R}^{h\times w\times d}$. The objective function is defined as:

$$L=L_{p}\left(\left(D(s)-i_{m}\right)\odot m\right)+L_{p}\left(\left(D(s)-D(s^{0})\right)\odot(1-m)\right)\qquad(7)$$

where $L_p$ is a perceptual loss (Zhang et al., 2018), and $s^0$ is the initial sequence from EdiBERT. Intuitively, the first term enforces the decoded image to get closer to the masked image $i_m$, while the second term is a regularization enforcing the decoded image to stay similar to the completion proposed by the transformer. We illustrate in Figure 5 and Figure 13 that the optimization leads to a better-preserved source image.

Analyzing the results: we see from Table 1 and Figure 7 that the specialized method Com-GAN (Zhao et al., 2020) outperforms non-specialized methods on image inpainting. This was expected since it is the only method that has been trained specifically for this task. Note that the trained Co-Modulated GAN cannot be used in any other image manipulation task. Compared with the non-specialized methods, EdiBERT always provides better fidelity to the source image (lower masked L1) and realism (best FID and top-2 density). An ablation study is available in Table 2 and validates our choices. Finally, more details regarding the sampling algorithm for the task of inpainting are given in the Appendix.

## 4.3 Image Composition

In this setting, we have access to a non-realistically edited image $i_e \in \mathbb{R}^{h\times w\times c}$. The edited image $i_e$ is obtained by a composition between a source image $i_s \in \mathbb{R}^{h\times w\times c}$ and a target image $i_t \in \mathbb{R}^{h\times w\times c}$. The target image can be a user-drawn scribble or another real image in the case of image compositing. Besides, pixels are edited inside a binary mask $m \in \mathbb{R}^{h\times w}$, which indicates the areas modified by the user. Thus, the final edited image is computed pointwise as:

$$i_{e}=i_{s}\odot m+i_{t}\odot(1-m).\qquad(8)$$

Image composition aims at transforming an edited image $i_e$ to make it more realistic and faithful, without limiting the changes outside the mask. We keep the source image $i_s$ outside the mask and the edits from the target image $i_t$ for the inserted elements inside the editing mask. Three tasks fall under this umbrella: *scribble-based editing*, *image compositing*, and *image crossovers*.

![11_image_0.png](11_image_0.png)

Figure 8: Ablation study for inpainting.
Components removed are (a) optimization, (b) dilation, (c) randomization, (d) collage, (e) spiraling (random order instead). Optimization improves fidelity to the source image, while the other components help increase image quality.

Table 1: Image inpainting and compositing on FFHQ 256 × 256. The left block reports inpainting metrics (masked L1, FID, density, coverage) and the right block compositing metrics (masked L1, density, user study). Com-GAN is a model specific to image inpainting, ID-GAN handles several editing tasks but not inpainting, while the other methods handle both. We remove I2SG++ from the user study, since I2SG†++ is the same method with a better GAN backbone, *i.e.* StyleGAN2-ADA (Karras et al., 2020a). **Bold:** 1st rank, blue: 2nd rank.

| Method | Masked L1 ↓ | FID ↓ | Dens. ↑ | Cover. ↑ | Masked L1 ↓ | Dens. ↑ | User study ↑ |
|---|---|---|---|---|---|---|---|
| I2SG++ (Abdal et al., 2020) | 0.0767 | 23.6 | 0.99 | 0.88 | 0.0851 | 0.77 | - |
| I2SG†++ (Abdal et al., 2020) | 0.0763 | 22.1 | 1.25 | 0.91 | 0.0866 | 1.07 | 8.3% |
| LC (Chai et al., 2021) | 0.1027 | 27.9 | 1.12 | 0.84 | 0.1116 | 1.00 | 14.8% |
| EdiBERT (⋆) | 0.0290 | 13.8 | 1.16 | 0.98 | 0.0307 | 0.94 | 61.2% |
| Com-GAN (Zhao et al., 2020) | 0.0086 | 10.3 | 1.42 | 1.00 | - | - | - |
| ID-GAN (Zhu et al., 2020) | - | - | - | - | 0.0570 | 0.75 | 15.7% |

Results of image compositing on FFHQ are presented in Table 1 and Figure 9. EdiBERT always has the lowest masked L1. We also present the results from a user study in Table 1. 30 users were shown 40 original and edited images, along with four results (EdiBERT and baselines). They were asked which one is preferable, accounting for both fidelity and realism. The survey shows that, on average, users prefer EdiBERT over the competing approaches. We give more visual results along with the detailed answers of the user study in the Appendix.

Table 2: Inpainting: ablation study on the components of the EdiBERT sampling algorithm. EdiBERT (1st row) shows the best tradeoff between fidelity (masked L1) and quality (FID, density/coverage). **Bold:** 1st rank, blue: 2nd rank.

| Ordering | Optimization | Randomization | Collage | Dilation | Masked L1 ↓ | FID ↓ | Density ↑ | Coverage ↑ |
|---|---|---|---|---|---|---|---|---|
| Spiral | ✓ | ✓ | ✓ | ✓ | 0.0201 | 19.4 | 1.14 | 0.96 |
| Random | ✓ | ✓ | ✓ | ✓ | 0.0206 | 20.7 | 1.13 | 0.95 |
| Spiral | X | ✓ | ✓ | ✓ | 0.0299 | 20.3 | 1.20 | 0.94 |
| Spiral | ✓ | X | ✓ | ✓ | 0.0198 | 20.5 | 1.26 | 0.92 |
| Spiral | ✓ | ✓ | X | ✓ | 0.0252 | 19.9 | 1.11 | 0.95 |
| Spiral | ✓ | ✓ | ✓ | X | 0.0197 | 23.3 | 0.96 | 0.91 |

## 5 Discussions

EdiBERT is a bidirectional transformer model that can tackle multiple editing tasks from one single training. One of the key elements of the proposed method is that it does not require access to paired (source, target) datasets or unpaired image datasets. This property shows how flexible EdiBERT is and why it can easily be applied to different tasks. Overall, the proposed framework is simple and tractable: 1) train a VQGAN (Esser et al., 2021b); 2) train an EdiBERT model following the objective defined in equation 6. Interestingly, for simple applications, one can directly train EdiBERT based on the representations output by a VQGAN pre-trained on ImageNet.
However, for more complex data or when dealing with multiple domains, one might have to train a specialized codebook, which requires a large auto-encoder and a lot of data. Another drawback of EdiBERT is related to the core interest of image editing. Since the tokens are predominantly localized, EdiBERT is perfectly suited for small manipulations that only require amending a small number of tokens. However, some manipulations, such as zooms or rotations, require changing large areas of the source image. In these cases, modifying a large number of tokens might be more demanding.

## Broader Impact Statement

Similarly to other image generative models, EdiBERT might be used to create and propagate fake beliefs via deepfakes, as discussed in Fallis (2020).

## 6 Conclusion

In this paper, we demonstrated the possibility of performing several editing tasks with the same pre-trained model. The proposed framework is simple and aims at making a step towards a unified model able to do any conceivable manipulation task on images. An exciting direction of research would be to extend the editing capabilities of EdiBERT to global transformations (*e.g.* zoom, rotation).

![13_image_0.png](13_image_0.png)

Figure 9: Image compositing with EdiBERT compared to baselines such as I2SG (Abdal et al., 2019). EdiBERT better preserves fidelity to the source image while also being able to fit the inserted object properly. This confirms the quantitative results in Table 1: EdiBERT seems to be leading in both fidelity and realism.

## References

Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 4432–4441, 2019.

Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan++: How to edit the embedded images? In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8296–8305, 2020.

Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. *Advances in Neural Information Processing Systems*, 34:17981–17993, 2021.

Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEit: BERT pre-training of image transformers. In *International Conference on Learning Representations*, 2022.

Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In *2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)*, volume 2, pp. 60–65. IEEE, 2005.

Peter J Burt and Edward H Adelson. The laplacian pyramid as a compact image code. In *Readings in computer vision*, pp. 671–679. Elsevier, 1987.

Chenjie Cao, Yuxin Hong, Xiang Li, Chengrong Wang, Chengming Xu, Yanwei Fu, and Xiangyang Xue. The image local autoregressive transformer. In *Advances in Neural Information Processing Systems*, 2021.

Lucy Chai, Jonas Wulff, and Phillip Isola. Using latent space regression to analyze and leverage compositionality in gans. *arXiv:2103.10426*, 2021.

Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11315–11325, 2022.

Giannis Daras, Joseph Dean, Ajil Jalal, and Alex Dimakis. Intermediate layer optimization for inverse problems using deep generative models. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 2421–2432. PMLR, 18–24 Jul 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Conference of the North American Chapter of the Association* for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019. Patrick Esser, Robin Rombach, Andreas Blattmann, and Bjorn Ommer. Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. *Advances in Neural Information Processing* Systems, 34, 2021a. Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12873– 12883, 2021b. Don Fallis. The epistemic threat of deepfakes. *Philosophy & Technology*, pp. 1–21, 2020. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. James Hays and Alexei A Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (ToG), 26(3), 2007. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 16000–16009, 2022. M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In *Advances in Neural Information Processing Systems*, pp. 6626–6637, 2017. Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems, 34:12454–12465, 2021. Drew A Hudson and Larry Zitnick. Generative adversarial transformers. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings* of Machine Learning Research, pp. 4487–4499. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr. press/v139/hudson21a.html. Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. In International Conference on Learning Representations, 2019. T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410, 2019. Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 12104– 12114, 2020a. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8110–8119, 2020b. Hongyu Liu, Ziyu Wan, Wei Huang, Yibing Song, Xintong Han, Jing Liao, Bin Jiang, and Wei Liu. Deflocnet: Deep image editing via flexible low-level controls. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10765–10774, 2021. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 
SDEdit: Guided image synthesis and editing with stochastic differential equations. In *International Conference on Learning Representations*, 2022.

Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. Reliable fidelity and diversity metrics for generative models. In *International Conference on Machine Learning*, pp. 7176–7185. PMLR, 2020.

Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11453–11464, 2021.

Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In *International Conference on Machine Learning*, pp. 4055–4064. PMLR, 2018.

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. *arXiv:2102.12092*, 2021.

Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: a stylegan encoder for image-to-image translation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2287–2296, 2021.

Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of gans for semantic face editing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9243–9252, 2020.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. *arXiv preprint arXiv:2011.13456*, 2020.

Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. *ACM Transactions on Graphics (TOG)*, 40(4):1–14, 2021.

Aaron Van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In *International Conference on Machine Learning*, pp. 1747–1756. PMLR, 2016a.

Aaron Van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, pp. 4797–4805, 2016b.

Aaron Van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, pp. 6309–6318, 2017.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017.

Ziyu Wan, Jingbo Zhang, Dongdong Chen, and Jing Liao. High-fidelity pluralistic image completion with transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 4692–4701, 2021.

Alex Wang and Kyunghyun Cho. Bert has a mouth, and it must speak: Bert as a markov random field language model. *arXiv:1902.04094*, 2019.

Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7794–7803, 2018.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32, 2019.
Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. *arXiv preprint arXiv:2110.04627*, 2021a.

Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. *arXiv preprint arXiv:2206.10789*, 2022.

Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, and Chunyan Miao. Diverse image inpainting with bidirectional and autoregressive transformers. In *Proceedings of the 29th ACM International Conference on Multimedia*, pp. 69–78, 2021b.

Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 7354–7363, 2019.

Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 586–595, 2018.

Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie Tang, Jingren Zhou, and Hongxia Yang. M6-ufc: Unifying multi-modal controls for conditional image synthesis. *arXiv preprint arXiv:2105.14211*, 2021.

Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, I Eric, Chao Chang, and Yan Xu. Large scale image completion via co-modulated generative adversarial networks. In *International Conference on Learning Representations*, 2020.

Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain gan inversion for real image editing. In *European conference on computer vision*, pp. 592–608. Springer, 2020.

## Appendix A Implementation Details

The code for the implementation of EdiBERT is available on GitHub at the following link: https://github.com/EdiBERT4ImageManipulation/EdiBERT. A pre-trained model on FFHQ is available on a linked Google Drive. Notebooks showcasing the model have also been developed.

## A.1 Training Hyper-Parameters

We use the same architectures as Esser et al. (2021b) for both the VQGAN and the transformer. On LSUN Bedroom and FFHQ, we use a codebook size of 1024. For the transformer, we use a model with 32 layers of width 1024. To train the transformer with the 2D masking strategy, we generate random rectangles before flattening $Q_Z(E(I))$. The height of the rectangles is drawn uniformly from $[0.2\times h, 0.5\times h]$. Similarly, the width of the rectangles is drawn uniformly from $[0.2 \times w, 0.5 \times w]$. In our experiments, since we work at resolution 256 × 256 and follow the four downsampling stages (a factor of 16) from Esser et al. (2021b), we have $h = w = 256/16 = 16$. Tokens outside the rectangle are used as input, to give context to the transformer, but not for back-propagation. Tokens inside the rectangle are used for back-propagation. $p_{\text{rand}} = 90\%$ of the tokens inside the mask are replaced by random tokens, while $p_{\text{same}} = 1 - p_{\text{rand}} = 10\%$ keep their initial value. Although we did not perform a large hyper-parameter study on this parameter, we feel it is an important one. The lower $p_{\text{rand}}$, the more the learned distributions $p^i_\theta(\cdot|s)$ will be biased towards the observed token $s_i$. However, setting $p_{\text{rand}} = 1$ leads to a model that diverges too fast from the observed sequence.

## A.2 Inference Hyper-Parameters

## A.2.1 Image Inpainting
## A.2 Inference Hyper-Parameters

## A.2.1 Image Inpainting.

We set the number of epochs to 2, collage frequency to 4 per epoch, top-k sampling to 100, dilation to 1, and the number of optimization steps to 200. We apply a Gaussian filter on the mask for the periodic image collage. Additionally, we rely on two implementation details. 1) We use two latent masks: the latent down-sampled mask latent_mask1, and the dilated mask latent_mask2, obtained by a dilation of latent_mask1. The randomization is done with latent_mask1, such that no information from the unmasked parts of the image is erased. However, the selection of positions that are re-sampled by EdiBERT is done with latent_mask2. 2) At the second epoch, we randomize the token value at the position that is being replaced. This is only done for image inpainting.

## A.2.2 Image Compositing.

We set the number of epochs to 2, collage frequency to 4 per epoch, top-k sampling to 100, dilation to 1, and the number of optimization steps to 200. We apply a Gaussian filter on the mask for the periodic image collage. Unlike inpainting, we do not randomize, such that EdiBERT samples stay closer to the original sequence. The full algorithm is presented below in Algorithm 2 (a code sketch follows the algorithm).

## B Additional Experimental Results

We give additional comparisons on FFHQ and LSUN Bedroom for the following tasks: image inpainting in Table 3, image crossovers in Table 4, and image composition in Table 5. All these experiments are run on the test set of EdiBERT. Note that concurrent methods based on StyleGAN2 were trained on the full dataset, which gives them an advantage.

**Algorithm 2: Image inpainting/composition**

Requires: masked (or edited) image $im$, mask $m$, encoder $E$, decoder $D$, BERT model $p_\theta$, epochs, periodic collage $c$, optimization steps optim_steps

- $s \leftarrow Q(E(im))$; latent_mask $\leftarrow$ get_mask_in_latent_space($m$)
- if task is inpainting then $s \leftarrow s \times$ latent_mask $+$ rand $\times (1 -$ latent_mask$)$
- for $e$ in $[0,$ epochs$]$:
  - for $p$ in chosen_order(latent_mask):
    - sample token $t \in \mathcal{Z} \sim p_\theta^p(\cdot|s)$ and insert it: $s_p \leftarrow t$
    - if $p \,\%\, c = 0$ (collage): encode the image post-collage, $s \leftarrow E(im \odot m + D(s) \odot (1 - m))$
- $s^0 \leftarrow s$
- for $i$ in $[0,$ optim_steps$]$: $\mathcal{L} = L_p\big((D(s) - im) \odot m\big) + L_p\big((D(s) - D(s^0)) \odot (1 - m)\big)$; $s \leftarrow s + \epsilon \cdot \mathrm{Adam}(\nabla_s \mathcal{L}, s)$
- Image $\leftarrow$ Decoder($s$)

Result: Image
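For readers who prefer code, below is a minimal Python sketch of the token re-sampling stage of Algorithm 2 (before the latent optimization). The `encoder`, `quantize`, `decoder`, and `bert` callables are placeholders for the corresponding VQGAN and transformer components, not the repository's actual API, and all names are ours.

```python
import torch

@torch.no_grad()
def edibert_sample(im, m, s, latent_mask, encoder, quantize, decoder, bert,
                   vocab_size, epochs=2, collage_every=50, top_k=100):
    """Sketch of the sampling loop of Algorithm 2.

    im: image tensor, m: pixel-space mask, s: 1D tensor of token ids,
    latent_mask: 1D bool tensor marking tokens to re-sample."""
    positions = latent_mask.nonzero().flatten().tolist()
    # Inpainting: randomize the masked tokens before re-sampling.
    s[latent_mask] = torch.randint(0, vocab_size, (int(latent_mask.sum()),))
    for _ in range(epochs):
        for i, p in enumerate(positions):
            logits = bert(s.unsqueeze(0))[0, p]          # p_theta(. | s) at position p
            top = logits.topk(top_k)                     # top-k sampling
            probs = torch.softmax(top.values, dim=-1)
            s[p] = top.indices[torch.multinomial(probs, 1)].item()
            if i % collage_every == 0:                   # periodic image collage
                composite = im * m + decoder(s) * (1 - m)
                s = quantize(encoder(composite))
    return s
```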
Image Inpainting. We use 2500 images. On FFHQ, we provide results for free-form masks and rectangular masks. The height of rectangular masks is drawn uniformly from [0.4 × h, 0.6 × h] with h = 256, and similarly for the width. For non-rectangular mask generations, we follow the procedure of Chai et al. (2021): we draw a binary mask at low resolution 6 × 6 and upsample it to 256 × 256 with bilinear interpolation. The ablation study in Table 2 of the main paper is performed on free-form masks. Results in Table 1 of the main paper are on rectangular masks. On LSUN Bedroom, we provide results for rectangular masks.

Crossovers. We generate 2500 crossovers from random pairs of images, on both FFHQ and LSUN Bedroom.

Editing/Compositing. We create small datasets of 100 images from the test set of EdiBERT for FFHQ scribble-based editing, FFHQ compositing, and LSUN Bedroom compositing. A user study on FFHQ compositing is presented in the main paper with a statistically significant number of votes. We also provide some metrics in Table 5. Because of the small size of the dataset, we only report masked L1 and density. For density, the support of the real distribution is estimated with 2500 real points, and density is averaged over the individual density of the 100 generated images.

| Method | Masked L1 ↓ | FID ↓ | Density ↑ | Coverage ↑ |
|---|---|---|---|---|
| **FFHQ: rect. masks** | | | | |
| I2SG++ (Abdal et al., 2020) | 0.0767 | 23.6 | 0.99 | 0.88 |
| I2SG†++ (Abdal et al., 2020; Karras et al., 2020a) | 0.0763 | 22.1 | 1.25 | 0.91 |
| LC (Chai et al., 2021) | 0.1027 | 27.9 | 1.12 | 0.84 |
| EdiBERT (⋆) | 0.0290 | 13.8 | 1.16 | 0.98 |
| Co-mod. GAN (Zhao et al., 2020) | 0.0128 | 4.7 | 1.24 | 0.99 |
| **FFHQ: free-form masks** | | | | |
| I2SG++ (Abdal et al., 2020) | 0.0440 | 22.3 | 0.92 | 0.89 |
| I2SG†++ (Abdal et al., 2020; Karras et al., 2020a) | 0.0435 | 21.1 | 1.17 | 0.91 |
| LC (Chai et al., 2021) | 0.0620 | 27.9 | 1.22 | 0.85 |
| EdiBERT (⋆) | 0.0201 | 19.4 | 1.14 | 0.96 |
| Com-GAN (Zhao et al., 2020) | 0.0086 | 10.3 | 1.42 | 1.00 |
| **LSUN Bedroom: rect. masks** | | | | |
| I2SG (Abdal et al., 2019) | 0.1125 | 50.2 | 0.04 | 0.04 |
| MaskGIT (Chang et al., 2022) | 0.0209 | 11.4 | 1.09 | 0.96 |
| EdiBERT (⋆) | 0.0288 | 11.3 | 0.89 | 0.97 |

Table 3: Image inpainting.

| Method | Masked L1 ↓ | FID ↓ | Density ↑ | Coverage ↑ |
|---|---|---|---|---|
| **FFHQ** | | | | |
| I2SG++ (Abdal et al., 2020) | 0.1141 | 29.4 | 0.97 | 0.78 |
| I2SG†++ (Abdal et al., 2020; Karras et al., 2020a) | 0.1133 | 26.9 | 1.35 | 0.82 |
| ID-GAN (Zhu et al., 2020) | 0.0631 | 23.2 | 0.88 | 0.83 |
| LC (Chai et al., 2021) | 0.1491 | 31.9 | 1.17 | 0.77 |
| EdiBERT (⋆) | 0.0364 | 19.7 | 1.05 | 0.88 |
| **LSUN Bedroom** | | | | |
| I2SG (Abdal et al., 2019) | 0.1123 | 45.7 | 0.12 | 0.20 |
| ID-GAN (Zhu et al., 2020) | 0.0682 | 21.4 | 0.35 | 0.57 |
| EdiBERT (⋆) | 0.0369 | 12.4 | 0.64 | 0.84 |

Table 4: Image crossover.

| Method | Masked L1 ↓ | Density ↑ |
|---|---|---|
| **FFHQ scribble-edits** | | |
| I2SG++ (Abdal et al., 2020) | 0.7811 | 0.91 |
| I2SG†++ (Abdal et al., 2020; Karras et al., 2020a) | 0.0777 | 1.11 |
| ID-GAN (Zhu et al., 2020) | 0.0461 | 0.79 |
| LC (Chai et al., 2021) | 0.1016 | 1.14 |
| EdiBERT (⋆) | 0.0281 | 0.96 |
| **FFHQ compositing** | | |
| I2SG++ (Abdal et al., 2020) | 0.0851 | 0.77 |
| I2SG†++ (Abdal et al., 2020; Karras et al., 2020a) | 0.0866 | 1.07 |
| ID-GAN (Zhu et al., 2020) | 0.0570 | 0.75 |
| LC (Chai et al., 2021) | 0.1116 | 1.00 |
| EdiBERT (⋆) | 0.0307 | 0.94 |
| **LSUN Bedroom compositing** | | |
| I2SG (Abdal et al., 2019) | 0.1285 | 0.25 |
| ID-GAN (Zhu et al., 2020) | 0.0484 | 1.45 |
| EdiBERT (⋆) | 0.0247 | 1.49 |

Table 5: Image editing.

## C Baselines

We use the implementation and pre-trained models from the following repositories.

ID-GAN (Zhu et al., 2020): https://github.com/genforce/idinvert_pytorch, which has pre-trained models on FFHQ 256x256 and LSUN Bedroom 256x256.

I2SG++ and I2SG†++ (Karras et al., 2020b;a; Abdal et al., 2020): https://github.com/NVlabs/stylegan2-ada-pytorch. We tested projections with the following pre-trained models on FFHQ: StyleGAN2 (Karras et al., 2020b) at resolution 256x256, and StyleGAN2-Ada (Karras et al., 2020a) at resolution 1024x1024. For evaluation, we downsample the 1024x1024 generated images to 256x256.

LC (Chai et al., 2021): https://github.com/chail/latent-composition. We use the pre-trained encoder and StyleGAN2 generator, for FFHQ at resolution 1024x1024. For evaluation, we downsample the 1024x1024 generated images to 256x256.

Com-GAN (Zhao et al., 2020): https://github.com/zsyzzsoft/co-mod-gan.
We use the pre-trained network for image inpainting on FFHQ at resolution 512x512. We downsample the generated images to 256x256 for evaluation.

MaskGIT (Chang et al., 2022): https://github.com/google-research/maskgit. We use the tokenizer and transformer trained for conditional image generation and editing on ImageNet 256x256. To perform comparisons with EdiBERT on LSUN Bedroom image inpainting, we condition the transformer on the ImageNet class '843: studio couch, day bed'.

## D Qualitative Results On Image Composition

We present more examples of image compositions, with image compositing and scribble-based editing on FFHQ and LSUN Bedroom, in Figures 9, 10, and 11.

Preservation of non-masked parts. Thanks to its VQGAN auto-encoder, EdiBERT generally preserves areas outside the mask better than GAN inversion methods. This is particularly visible for images with complex backgrounds on FFHQ (Figure 10, 5th and last rows).

Insertion of edited parts. Since EdiBERT is a probabilistic model and the tokens inside the modified area are re-sampled, the inserted object can be modified and mapped to a more likely object given the context. It thus generates more realistic images, but can alter the fidelity to the inserted object. For example, on row 1 of Figure 11, the green becomes lighter and the perspective of the inserted window is improved. Although it can be a downside for image compositing, note that this property is interesting for scribble-based editing, where the scribbles have to be largely transformed to get a realistic image. By contrast, GAN inversion methods tend to preserve the inserted object too much, even if this results in a highly unrealistic generated image. We can observe this phenomenon in the last row of Figure 10.

![23_image_0.png](23_image_0.png)

![24_image_0.png](24_image_0.png)

![25_image_0.png](25_image_0.png)

![26_image_0.png](26_image_0.png)

## E Survey On Ffhq Image Compositing

The survey was presented as a Google Form with 40 questions. For each question, the user was shown 6 images: Source, Composite, EdiBERT, ID-GAN (Zhu et al., 2020), I2SG†++ (Abdal et al., 2019; Karras et al., 2020a), and LC (Chai et al., 2021). The generated images were referred to as Algorithm 1, ..., Algorithm 4. The user was asked to vote for their preferred generated image, taking into account both realism and fidelity criteria. The user had no time limit. 30 users answered our poll. We provide the detailed answers for each image in Table 6.

| EdiBERT | ID-GAN (Zhu et al., 2020) | LC (Chai et al., 2021) | I2SG†++ (Abdal et al., 2019; Karras et al., 2020a) |
|---|---|---|---|
| 17 | 5 | 7 | 1 |
| 15 | 1 | 8 | 6 |
| 22 | 4 | 1 | 3 |
| 19 | 4 | 5 | 2 |
| 22 | 0 | 4 | 4 |
| 6 | 7 | 8 | 9 |
| 21 | 1 | 5 | 3 |
| 23 | 1 | 4 | 2 |
| 20 | 5 | 5 | 0 |
| 11 | 13 | 6 | 0 |
| 27 | 0 | 3 | 0 |
| 12 | 3 | 3 | 12 |
| 16 | 4 | 6 | 4 |
| 25 | 2 | 1 | 2 |
| 18 | 8 | 1 | 3 |
| 8 | 13 | 9 | 0 |
| 26 | 0 | 4 | 0 |
| 7 | 0 | 21 | 2 |
| 14 | 9 | 1 | 6 |
| 27 | 0 | 1 | 2 |
| 11 | 19 | 0 | 0 |
| 14 | 9 | 4 | 3 |
| 16 | 14 | 0 | 0 |
| 21 | 1 | 3 | 5 |
| 8 | 2 | 18 | 2 |
| 19 | 3 | 3 | 5 |
| 22 | 7 | 0 | 1 |
| 23 | 2 | 1 | 4 |
| 18 | 0 | 2 | 10 |
| 27 | 2 | 1 | 0 |
| 22 | 2 | 1 | 5 |
| 24 | 0 | 5 | 1 |
| 3 | 25 | 2 | 0 |
| 28 | 0 | 2 | 0 |
| 24 | 0 | 6 | 0 |
| 27 | 0 | 3 | 0 |
| 27 | 1 | 2 | 0 |
| 22 | 7 | 1 | 0 |
| 9 | 15 | 6 | 0 |
| 14 | 0 | 14 | 2 |
| Total: 735 (61.25%) | 189 (15.75%) | 177 (14.75%) | 99 (8.25%) |

Table 6: Detailed results of the user study. Each line corresponds to an image, with the associated number of votes per method.
Review 1:

Summary: This paper presents EdiBERT, a bidirectional transformer-based model trained with a simple masked token prediction task in the discrete latent space of a VQGAN and capable of inpainting, scribble-based image processing, and image harmonization/composition. Particular emphasis is placed on the ability to achieve these functions with a single model. The model is trained on 256x256 images of the FFHQ and LSUN datasets and achieves results that rival other more specialized models.

Strengths and Weaknesses:

**Strengths:** The paper is well written and easy to follow, and the proposed approach is simple. Further, the manuscript does a good job of motivating a bidirectional approach over AR models for image manipulation tasks. Interesting analyses of the latent space of an f16-VQGAN are presented.

**Weaknesses:**
- The proposed approach is effectively making an independence assumption for the masked tokens. The discussion of where this fails is too short (the only reference I can find is on p.14, "However, some manipulations such as zooms or rotations require changing large areas of the source image. In these cases, modifying a large number of tokens might be more demanding.").
- Further, the paper builds on AR factorization as a motivation to introduce bidirectional attention, i.e. a generative model that learns a "full" density, but itself presents an approach that makes a strong assumption on the density (i.e. independence of individual tokens). Again, advantages and disadvantages of such a modeling choice should be discussed (e.g., trading speed and control vs generative capabilities of complex content?). I also suspect that the approach will not work as well for larger masks, more complex scenes/datasets and VQ-Models with a downsampling factor of 8 or 4 instead of 16.
- Related: The paper should compare to approaches that use bidirectional models in latent space, most notably MaskGIT [1] and M6 [2], and to ones that build discrete diffusion [3, 4] models in latent space [5]. Furthermore, a discrete diffusion formulation is highly related to the one described here and opens up a different view on the approach as a few-step decoding sampler of a multinomial diffusion process. The only reference to diffusion models is to SDEdit, with a reference to decoding speed.
- A discussion/ablation on the number of inference iterations T is missing.
- The results are focused on FFHQ, a simple and highly structured dataset. How does the inpainting approach perform on more complex datasets? A common benchmark for inpainting models (such as the mentioned CoModGAN) is the Places dataset [6].
- Missing evaluation of applying the method in different latent spaces. E.g., is it possible to train an EdiBERT in an f=8 latent space?

**Minor remarks/Questions:**
- p.18, A.1. "In our experiments, since we work at resolution 256 × 256 and follow the downsampling factor of 4 from Esser et al. (2021b), we have h = w = 256/4 = 16." --> My understanding from the main paper is that the downsampling factor is 16.
- It would be nice to see more qualitative results on LSUN-Bedrooms, especially for inpainting.
- A comparison to other models in terms of sampling speed is missing. While the approach is significantly faster than AR models, it aims to fulfill a different task (image manipulation vs full generation) and should thus be compared to related approaches. How expensive is the "post-collage" procedure described in Alg. 2? How much quality is lost without post-hoc latent optimization, and how much speed is gained?
- p.1: litterature -> literature

_References_
- [1]: MaskGIT: Masked Generative Image Transformer, Chang et al, https://arxiv.org/abs/2202.04200
- [2]: M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis via Non-Autoregressive Generative Transformers, Zhang et al, https://arxiv.org/abs/2105.14211
- [3]: Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions, Hoogeboom et al, https://arxiv.org/abs/2102.05379
- [4]: Structured Denoising Diffusion Models in Discrete State-Spaces, Austin et al, https://arxiv.org/abs/2107.03006
- [5]: Vector Quantized Diffusion Model for Text-to-Image Synthesis, Gu et al, https://arxiv.org/abs/2111.14822
- [6]: Places: A 10 million Image Database for Scene Recognition, B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017

Requested Changes: Detailed requests/clarifications can be found in the **weaknesses** section above. In general, I think that the paper should address the possible shortcomings more and discuss the related work mentioned above. I am also somewhat concerned about the applicability/relevance of the proposed approach. It seems to be limited to "simple datasets" such as FFHQ and LSUN and does not provide a clear advantage over other approaches. The more general discrete diffusion models seem more promising in this regard (while also providing a "unified" model), and the proposed approach could be interpreted as a particular denoising sampling scheme.

Broader Impact Concerns: Ethical implications of the model are briefly mentioned. While the discussion is quite brief and only mentions deepfakes as a potential negative impact, a full discussion of the societal implications of powerful generative image models is likely beyond the scope of this paper and an ongoing community effort.

==================================================

Review 2:

Summary: In this paper, the authors propose a generative model for image editing. It is claimed that a single pretrained model can be applied to image denoising, image inpainting, and image composition tasks. The problem of a previous method, VQGAN, is analyzed, which helps to identify an opportunity to improve that model. In particular, an additional optimization step is incorporated, namely, optimizing the LPIPS metric over the latent space. It is visualized that by adding this additional optimization step, more high-frequency details are recovered. The authors then present how to adapt a single pretrained network to two different image manipulation tasks, namely image denoising and image inpainting. It is also shown that competitive results can be achieved with the generic training algorithm.

Strengths and Weaknesses:

Strengths:
1. It is a good idea to design a single model for image manipulation tasks. Unified models are already proven to be useful for language tasks. And for high-level vision tasks, CLIP models are shown to generalize quite well to images with a domain gap. This paper is inspired by this general idea and tries to propose a unified approach for image manipulation.
2. According to the experimental results, the proposed method achieves comparable subjective and objective performance, although only a single pretrained model is used.

Weaknesses:
1. The main contribution of this paper is the change of the optimization method of VQGAN, that is, incorporating an additional LPIPS loss optimization over the latent space. The main contribution is a little bit trivial.
2. The paper is not well-written.
A lot of descriptions are not self-contained; a couple of issues are listed below.
- The pipeline of the proposed framework is not clearly presented at the beginning of this paper. A figure that describes the general pipeline would help to improve the readability of this paper.
- To adapt a single pretrained network to two different image manipulation tasks including denoising and image inpainting, the resampling strategy seems to be essential. But it is not explained well in the introduction why this resampling strategy is important.
- The equations and notations are not well explained. For example, what does $E(I)_{l}$ in Eqn. 1 mean? Only after reading another paper did I understand the correct meaning. Again, what does the $Z^l$ in the following line mean?
- Some descriptions are not self-contained. For example, after Eqn. 6, the authors mention that "the sampling function φ is defined with a 2D masking strategy, and the training positions are selected by drawing random 2D rectangles in the 2D patch extracted from the encoder." Please explain explicitly what the 2D masking strategy means.
- Figures need to be well explained. For example, in Figure 2 (Modifying the image via the latent space), what do the four images mean? How the third and fourth images are formed is not explained.

[1] Taming Transformers for High-Resolution Image Synthesis

Requested Changes:

Questions:
1. The authors mentioned that "An autoregressive model is trained to generate these token sequences directly, stressing that high-capacity expressive transformers can generate realistic high-resolution images. To summarize, this framework consists of three main steps:" Could the authors explain the difference between the three main steps mentioned here and the main steps used by the proposed method?
2. Please revise the paper according to the suggestions given above. Please make the figures and contents more self-contained. Please explain each equation more clearly.

Broader Impact Concerns: The authors base their experiments on existing public datasets. Thus, although the experiments are conducted on human faces, there are no ethical issues, at least for the algorithm part.

==================================================

Review 3:

Summary: In this work the authors propose a method to perform image editing/manipulation using VQ-GAN and attention models. Starting from the VQ-GAN model with its tokens, an image can be represented as a sequence of such tokens. During training, the sequence of tokens is randomly perturbed and the model is trained to recover the original sequence from the full perturbed input sequence. This objective is different from the more commonly used autoregressive model for unconditional image generation. After training the model, the authors analyze the effect of image-space manipulation on the latent space. Another aspect that needs to be addressed is the reconstruction capability: using the default token sequence estimation results in images that are further away from the input. To address this, the authors use latent space optimization with the objective of reconstructing the input image. After all this, image editing/manipulation is expressed as an optimization problem over the sequence of tokens. The exact steps and terms of this optimization vary depending on the exact problem (denoising vs inpainting).
Strengths and Weaknesses:

# Strengths
- Use of VQ-GAN for image denoising and manipulation.
- Quantitative evaluation shows competitive performance, and the user study gives the proposed method a clear advantage.
- Overall the paper is clear and easy to follow.
- The ablation study shows the necessity of the different tricks proposed by the authors to achieve the best results and avoid divergence from the input image.

# Weaknesses
1. The presentation of the model assumes the reader knows the VQ-GAN (Esser et al. 2021b) in detail. Not enough details are provided for the different distributions and attention mechanisms. This is important to judge the novelty of the proposed model.
2. Although the reconstruction is improved thanks to the latent space optimization, there is still an important remaining gap.
3. Currently the proposed optimization algorithm relies on several tricks and guides to avoid divergence. It would be interesting to have some more comments on how feasible it would be to shape the latent space to render those tricks unnecessary.
4. Limited visual comparisons: only the composition task has images, and the images are provided in the supplemental material.
5. Exploring an additional dataset would be interesting. What is the expected performance in a more general setting (beyond faces or interior scenes)?

Requested Changes: Currently I would recommend the paper for acceptance. The necessary changes are:
1. More details about the model, in particular the different distributions and the attention mechanism.
2. A visual example with other state-of-the-art methods focusing on the reconstruction gap.
3. Visual comparisons with existing methods on other tasks should appear in the main paper.

Addressing the remaining points from the weaknesses section would strengthen the paper.

Broader Impact Concerns: The authors acknowledge that the model might be used to create and propagate fake beliefs via deepfakes, a concern shared with other image generative models.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The paper presents a bidirectional transformer that re-samples image patches conditioned on an input image. The main advantage of the proposed method is that a *single* trained model can be applied to solve a wide variety of different image editing tasks. Reviewer ucMa and Reviewer DPNU are satisfied with the authors' detailed revision and response, including a discussion on missed references in the related work section and various clarifications on method exposition. After the discussions with the authors, both Reviewer ucMa and Reviewer DPNU are leaning toward acceptance. Reviewer ANEH, however, is leaning toward rejection. The main complaints were a lack of comparisons with MaskGIT and M6-UFC, simple, structured datasets (e.g., FFHQ), and some method exposition issues. The authors' responses partially address these concerns, including the revision of the related work to include relevant prior work and an explanation of the lack of other larger-scale datasets. The AE thinks the responses and revision sufficiently address the concerns of Reviewer ANEH. That being said, the AE agrees with Reviewer ANEH that a comparison with methods that also use bidirectional models in latent space, e.g., MaskGIT and M6-UFC, would be beneficial for the readers to get a clearer picture of the literature. The AE thus recommends "Accept with minor revision".
Specifically, the authors should 1) cite and discuss the differences between the proposed method and MaskGIT and M6-UFC, and 2) include at least one direct comparison with MaskGIT (code available here: https://github.com/google-research/maskgit). Note that the acceptance is *not* tied to the performance comparison with MaskGIT. MaskGIT requires training one specific model per task, so there are still merits to the proposed method even if it underperforms MaskGIT on the compared tasks.

==================================================
# AnCer: Anisotropic Certification via Sample-Wise Volume Maximization

Francisco Eiras∗ *eiras@robots.ox.ac.uk*
University of Oxford, Five AI Ltd., UK

Motasem Alfarra∗ *motasem.alfarra@kaust.edu.sa*
King Abdullah University of Science and Technology (KAUST)

M. Pawan Kumar *pawan@robots.ox.ac.uk*
University of Oxford

Philip H. S. Torr *philip.torr@eng.ox.ac.uk*
University of Oxford

Puneet K. Dokania *puneet@robots.ox.ac.uk*
University of Oxford, Five AI Ltd., UK

Bernard Ghanem *bernard.ghanem@kaust.edu.sa*
King Abdullah University of Science and Technology (KAUST)

Adel Bibi∗ *adel.bibi@eng.ox.ac.uk*
University of Oxford

∗Equal contribution; order of first two authors decided by 3 coin flips.

Reviewed on OpenReview: *https://openreview.net/forum?id=7j0GI6tPYi*

## Abstract

Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale. All prior art on randomized smoothing has focused on isotropic $\ell_p$ certification, which has the advantage of yielding certificates that can be easily compared among isotropic methods via the $\ell_p$-norm radius. However, isotropic certification limits the region that can be certified around an input to *worst-case* adversaries, *i.e.* it cannot reason about other "close", potentially large, constant-prediction safe regions. To alleviate this issue, (i) we theoretically extend the isotropic randomized smoothing $\ell_1$ and $\ell_2$ certificates to their generalized *anisotropic* counterparts following a simplified analysis. Moreover, (ii) we propose evaluation metrics allowing for the comparison of general certificates - a certificate is superior to another if it certifies a *superset* region - with the quantification of each certificate through the volume of the certified region. We introduce AnCer, a framework for obtaining anisotropic certificates for a given test set sample via volume maximization. We achieve it by generalizing memory-based certification of data-dependent classifiers. Our empirical results demonstrate that AnCer achieves state-of-the-art $\ell_1$ and $\ell_2$ certified accuracy on CIFAR-10 and ImageNet in the data-dependent setting, while certifying larger regions in terms of volume, highlighting the benefits of moving away from isotropic analysis. Our code is available in this repository.

## 1 Introduction

The well-studied fact that Deep Neural Networks (DNNs) are vulnerable to additive imperceptible noise perturbations has led to a growing interest in developing robust classifiers (Goodfellow et al., 2015; Szegedy et al., 2014).

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of the landscape of $f^y$ (blue corresponds to a higher confidence in $y$, the true label) for a region around an input in a toy, 2-dimensional radially separable dataset. For two dataset examples, in (a) and (b) we show the boundaries of the optimal $\ell_1$ isotropic and anisotropic certificates, while (c) and (d) are the boundaries of the optimal $\ell_2$ isotropic and anisotropic certificates. A thorough discussion of this figure is presented in Section 3.

A recent promising approach to achieve state-of-the-art provable robustness (*i.e.* a theoretical bound on the output around every input) at the scale of ImageNet (Deng et al., 2009) is *randomized smoothing* (Lecuyer et al., 2019; Cohen et al., 2019). Given an input $x$ and a network $f$, randomized smoothing constructs $g(x) = \mathbb{E}_{\epsilon\sim\mathcal{D}}[f(x + \epsilon)]$ such that $g(x) = g(x + \delta)$ $\forall \delta \in \mathcal{R}$, where the certification region $\mathcal{R}$ is characterized by $x$, $f$, and the smoothing distribution $\mathcal{D}$.
For instance, Cohen et al. (2019) showed that if $\mathcal{D} = \mathcal{N}(0, \sigma^2 I)$, then $\mathcal{R}$ is an $\ell_2$-ball whose radius is determined by $x$, $f$ and $\sigma$. Since then, there has been significant progress towards the design of $\mathcal{D}$ leading to the largest $\mathcal{R}$ for all inputs $x$. The interplay between $\mathcal{R}$ characterized by $\ell_1$, $\ell_2$ and $\ell_\infty$-balls, and a notion of optimal distribution $\mathcal{D}$, has been previously studied (Yang et al., 2020).

Despite this progress, current randomized smoothing approaches provide certification regions that are *isotropic* in nature, limiting them to certifying smaller, *worst-case* regions. We provide an intuitive example of this behavior in Figure 1. The isotropic nature of $\mathcal{R}$ in prior art is due to the common assumption that the smoothing distribution $\mathcal{D}$ is identically distributed (Yang et al., 2020; Kumar et al., 2020; Levine & Feizi, 2021). Moreover, comparisons between various randomized smoothing approaches were limited to methods that produce the same $\ell_p$ certificate, with no clear metrics for comparing with other certificates. In this paper, we address both concerns and present new state-of-the-art certified accuracy results on both the CIFAR-10 and ImageNet datasets.

Our contributions are threefold. (i) We provide a general and simpler analysis compared to prior art (Cohen et al., 2019; Yang et al., 2020) that paves the way for the certification of *anisotropic* regions characterized by any norm, holding prior art as special cases. We then specialize our result to regions that, for a positive definite $A$, are ellipsoids, *i.e.* $\|A\delta\|_2 \le c$, $c > 0$, and generalized cross-polytopes, *i.e.* $\|A\delta\|_1 \le c$, generalizing both $\ell_2$ (Cohen et al., 2019) and $\ell_1$ (Lecuyer et al., 2019; Yang et al., 2020) certification (Section 4). **(ii)** We introduce a new evaluation framework to compare methods that certify general (isotropic or anisotropic) regions. We compare two general certificates by defining that a method certifying $\mathcal{R}_1$ is superior to another certifying $\mathcal{R}_2$ if $\mathcal{R}_1$ is a strict superset of $\mathcal{R}_2$. Further, we define a standalone quantitative metric as the volume of the certified region, and specialize it for the cases of ellipsoids and generalized cross-polytopes (Section 5). **(iii)** We propose AnCer, an anisotropic certification method that performs sample-wise (*i.e.* per sample in the test set) region volume maximization (Section 6), generalizing the data-dependent, memory-based solution from Alfarra et al. (2022). Through experiments on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009), we show that restricting AnCer's certification region to $\ell_1$- and $\ell_2$-balls outperforms state-of-the-art $\ell_1$ and $\ell_2$ results from previous works (Yang et al., 2020; Alfarra et al., 2022). Further, we show that the volumes of the certified regions are significantly larger than those of all existing methods, thus setting a new state-of-the-art in certified accuracy. We highlight that while we effectively achieve state-of-the-art performance, it comes at a high cost given the data-dependency requirements. A discussion of the limitations of the solution is presented in Section 6.

Notation. We consider a base classifier $f : \mathbb{R}^n \to \mathcal{P}(K)$, where $\mathcal{P}(K)$ is a probability simplex over $K$ classes, *i.e.* $f^i \ge 0$ and $\mathbf{1}^\top f = 1$, for $i \in \{1, \dots, K\}$. Further, we use $(x, y)$ to be a sample input $x$ and its corresponding true label $y$ drawn from a test set $\mathcal{D}_t$, and $f^y$ to be the output of $f$ at the correct class.
We use $\ell_p$ to denote the typically defined $\|\cdot\|_p$ norm ($p \ge 1$), and $\ell_p^A$ or $\|\cdot\|_{A,p}$ for $p \in \{1, 2\}$ to denote a composite norm defined with respect to a positive definite matrix $A$ as $\|A^{-1/p}v\|_p$.

## 2 Related Work

Verified Defenses. Since the discovery that DNNs are vulnerable against input perturbations (Goodfellow et al., 2015; Szegedy et al., 2014), a range of methods have been proposed to build classifiers that are verifiably robust (Huang et al., 2017; Gowal et al., 2019; Bunel et al., 2018; Salman et al., 2019b). Despite this progress, these methods do not yet scale to the networks the community is interested in certifying (Tjeng et al., 2019; Weng et al., 2018).

Randomized Smoothing. The first works on randomized smoothing used Laplacian (Lecuyer et al., 2019; Li et al., 2019) and Gaussian (Cohen et al., 2019) distributions to obtain $\ell_1$- and $\ell_2$-ball certificates, respectively. Several subsequent works improved the performance of smooth classifiers by training the base classifier using adversarial augmentation (Salman et al., 2019a), regularization (Zhai et al., 2019), or general adjustments to training routines (Jeong & Shin, 2020). Recent work derived $\ell_p$-norm certificates for other isotropic smoothing distributions (Yang et al., 2020; Levine & Feizi, 2020; Zhang et al., 2019). Concurrently, Dvijotham et al. (2020) developed a framework to handle arbitrary smoothing measures in any $\ell_p$-norm; however, the certification process requires significant hyperparameter tuning. Similarly, Mohapatra et al. (2020) introduce larger certificates that require higher-order information, yet do not provide a closed-form solution. This was followed by a complementary data-dependent smoothing approach, where the parameters of the smoothing distribution were optimized per test set *sample* to maximize the certified radius at an individual input (Alfarra et al., 2022). All prior works considered smoothing with *isotropic* distributions and hence certified isotropic $\ell_p$-ball regions. In this paper, we extend randomized smoothing to certify *anisotropic* regions, by pairing it with a generalization of the data-dependent framework (Alfarra et al., 2022) to maximize the certified region at each input point.

## 3 Motivating Anisotropic Certificates

Certification approaches aim to find the *safe* region $\mathcal{R}$, where $\arg\max_i f^i(x) = \arg\max_i f^i(x + \delta)$ $\forall \delta \in \mathcal{R}$. Recent randomized smoothing techniques perform this certification by explicitly optimizing the isotropic $\ell_p$ certified region around each input (Alfarra et al., 2022), obtaining state-of-the-art performance as a result. Despite this $\ell_p$ optimality, we note that any $\ell_p$-norm certificate is *worst-case* from the perspective of that norm, as it avoids adversary regions by limiting its certificate to the $\ell_p$-closest adversary. This means that it can only enjoy a radius that is at most equal to the distance to the closest decision boundary. However, decision boundaries of general classifiers are complex, non-linear, and non-radially distributed with respect to a generic input sample (Karimi et al., 2019). This is evidenced by the fact that, within a reasonably small $\ell_p$-ball around an input, there are often only a small set of adversary directions (Tramèr et al., 2017; 2018) (*e.g.* see the decision boundaries in Figure 1).
As such, while $\ell_p$-norm certificates are useful to reason about worst-case performance and are simple to obtain given previous works (Cohen et al., 2019; Yang et al., 2020; Lee et al., 2019), they are otherwise uninformative in terms of the shape of decision boundaries, *i.e.* which regions around the input are safe.

To visualize these concepts, we illustrate the decision boundaries of a base classifier $f$ trained on a toy 2-dimensional, radially separable (with respect to the origin) binary classification dataset, and consider two different input test samples (see Figure 1). We compare the *optimal* isotropic and anisotropic certified regions of different shapes at these points. In Figures 1a and 1b, we compare an isotropic cross-polytope (of the form $\|\delta\|_1 \le r$) with an anisotropic generalized cross-polytope (of the form $\|A\delta\|_1 \le r$), while in Figures 1c and 1d we compare an isotropic $\ell_2$-ball (of the form $\|\delta\|_2 \le r$) with an anisotropic ellipsoid (of the form $\|A\delta\|_2 \le r$). Notice that in Figures 1a and 1c, due to the curvature of the classification boundary (shown in white), the optimal certification region is isotropic in nature, which is evidenced by the similarities of the optimal isotropic and anisotropic certificates. On the other hand, in Figures 1b and 1d, the location of the decision boundary allows the anisotropic certified regions to be considerably larger than their isotropic counterparts, as they are not as constrained by the closest decision boundary, *i.e.* the *worst-case* performance. We note that these differences are further highlighted in higher dimensions, and we study them for a single CIFAR-10 test set sample in Appendix A.1. As shown, anisotropic certification reasons more closely about the shape of the decision boundaries, allowing for further insights into constant prediction (safe) directions.

![3_image_0.png](3_image_0.png)

Figure 2: Visualization of a CIFAR-10 image $x$ and an example $x + \delta$ of an imperceptible change that *is not* inside the optimal isotropic certified region, but is covered by the anisotropic certificate.

In Figure 2, we present a series of test set images $x$, as well as practically indistinguishable $x + \delta$ images which *are not inside* the optimal certified isotropic $\ell_2$-balls for each input sample, yet *are within* the anisotropic certified regions. This showcases the merits of using anisotropic certification for characterizing larger safe regions.

## 4 Anisotropic Certification

One of the main obstacles in enabling anisotropic certification is the complexity of the analysis required. To alleviate this, we follow a Lipschitz argument first observed by Salman et al. (2019a) and Jordan & Dimakis (2020) and propose a simple and general certification analysis. We start with the following two observations. All proofs are in Appendix B.

Proposition 1. *Consider a differentiable function* $g : \mathbb{R}^n \to \mathbb{R}$. If $\sup_x \|\nabla g(x)\|_* \le L$ where $\|\cdot\|_*$ *has a dual norm* $\|z\| = \max_x z^\top x$ s.t. $\|x\|_* \le 1$, then $g$ is $L$-Lipschitz under norm $\|\cdot\|_*$, that is $|g(x) - g(y)| \le L\|x - y\|$.

Given the previous proposition, we formalize $\|\cdot\|$ certification as follows:

Theorem 1. Let $g : \mathbb{R}^n \to \mathbb{R}^K$, $g^i$ be $L$-Lipschitz continuous under norm $\|\cdot\|_*$ $\forall i \in \{1, \dots, K\}$*, and* $c_A = \arg\max_i g^i(x)$*. Then, we have* $\arg\max_i g^i(x + \delta) = c_A$ for all $\delta$ *satisfying:*

$$\|\delta\|\leq\frac{1}{2L}\left(g^{c_{A}}(x)-\operatorname*{max}_{c\neq c_{A}}g^{c}(x)\right).$$

Theorem 1 provides a $\|\cdot\|$-norm robustness certificate for any $L$-Lipschitz classifier $g$ under $\|\cdot\|_*$.
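As a quick illustration of Theorem 1, the snippet below computes the certified radius for a vector of class scores given a known Lipschitz constant. This is a minimal sketch with names of our own choosing, not code from the paper.

```python
import numpy as np

def lipschitz_certified_radius(scores: np.ndarray, L: float) -> float:
    """Certified ||.||-radius from Theorem 1: half the gap between the top
    two class scores, scaled by the Lipschitz constant L of each g^i."""
    top2 = np.sort(scores)[-2:]          # runner-up and top score
    return (top2[1] - top2[0]) / (2.0 * L)

# Example: three class scores of a 1-Lipschitz classifier.
print(lipschitz_certified_radius(np.array([0.1, 0.2, 0.9]), L=1.0))  # 0.35
```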
The certificate is only informative when one can attain a tight *non-trivial* estimate of $L$, ideally $\sup_x \|\nabla g(x)\|_*$, which is generally difficult when $g$ is an arbitrary neural network.

Framework Recipe. In light of Theorem 1, randomized smoothing can be viewed **differently** as an instance of Theorem 1 with the favorable property that the constructed smooth classifier $g$ enjoys an analytical form for $L = \sup_x \|\nabla g(x)\|_*$ by design. As such, to obtain an informative $\|\cdot\|$ certificate, one must, for an arbitrary choice of a smoothing distribution, compute the analytic Lipschitz constant $L$ under $\|\cdot\|_*$ for $g$. While there can exist a notion of "optimal" smoothing distribution for a given choice of $\|\cdot\|$ certificate, as in part addressed earlier for the isotropic $\ell_1$, $\ell_2$ and $\ell_\infty$ certificates (Yang et al., 2020), this is not the focus of this paper. The choice of the smoothing distribution in later sections is inspired by previous work for the purpose of granting anisotropic certificates. This recipe complements randomized smoothing works based on Neyman-Pearson's lemma (Cohen et al., 2019) or the Level-Set and Differential Method (Yang et al., 2020). We will deploy this framework recipe to show two specializations for anisotropic certification, namely ellipsoids (Section 4.1) and generalized cross-polytopes (Section 4.2).¹

¹Our analysis also grants a certificate for a mixture of Gaussians smoothing distribution (see Appendix B.1).

## 4.1 Certifying Ellipsoids

In this section, we consider certification under the $\ell_2^\Sigma$ norm, *i.e.* $\|\delta\|_{\Sigma,2} = \sqrt{\delta^\top \Sigma^{-1}\delta}$, which has dual norm $\|\delta\|_{\Sigma^{-1},2}$. Note that both $\|\delta\|_{\Sigma,2} \le r$ and $\|\delta\|_{\Sigma^{-1},2} \le r$ define an ellipsoid. Although the following results hold for any positive definite $\Sigma$, we assume for efficiency reasons that $\Sigma$ is diagonal throughout. First, we consider the anisotropic Gaussian smoothing distribution $\mathcal{N}(0, \Sigma)$ with the smooth classifier defined as $g_\Sigma(x) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,\Sigma)}[f(x + \epsilon)]$. Considering the classifier $\Phi^{-1}(g_\Sigma(x))$, where $\Phi$ is the standard Gaussian CDF, and following Theorem 1 to grant an $\ell_2^\Sigma$ certificate for $\Phi^{-1}(g_\Sigma(x))$, we derive the Lipschitz constant $L$ under $\|\cdot\|_{\Sigma^{-1},2}$ in the following proposition.

Proposition 2. $\Phi^{-1}(g_\Sigma(x))$ is $1$*-Lipschitz (i.e.* $L = 1$) under the $\|\cdot\|_{\Sigma^{-1},2}$ *norm.*

Since $\Phi^{-1}$ is a strictly increasing function, by combining Proposition 2 with Theorem 1, we have:

Corollary 1. Let $c_A = \arg\max_i g^i_\Sigma(x)$*; then* $\arg\max_i g^i_\Sigma(x + \delta) = c_A$ for all $\delta$ *satisfying:*

$$\|\delta\|_{\Sigma,2}\leq\frac{1}{2}\left(\Phi^{-1}\left(g_{\Sigma}^{c_{A}}(x)\right)-\Phi^{-1}\left(\operatorname*{max}_{c\neq c_{A}}g_{\Sigma}^{c}(x)\right)\right).$$

Corollary 1 holds the $\ell_2$ certification from Zhai et al. (2019) as a special case for when $\Sigma = \sigma^2 I$.²
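To make Corollary 1 concrete, here is a minimal Monte Carlo sketch (with our own naming) that estimates $g_\Sigma$ and the resulting ellipsoid certificate for a diagonal $\Sigma$; the statistical confidence bounds used in practice (à la Cohen et al. (2019)) are omitted, and probabilities are simply clamped away from 0 and 1.

```python
import torch
from scipy.stats import norm

@torch.no_grad()
def ellipsoid_certificate(f, x, sigma_diag, n_samples=1000):
    """Estimate the l2^Sigma certificate of Corollary 1 at input x.

    f: classifier mapping a batch of inputs to class probabilities,
    sigma_diag: per-coordinate standard deviations (sqrt of diag(Sigma))."""
    noise = torch.randn(n_samples, *x.shape) * sigma_diag
    probs = f(x.unsqueeze(0) + noise).mean(dim=0)   # Monte Carlo estimate of g_Sigma(x)
    top2, idx = probs.topk(2)
    p_a = min(top2[0].item(), 1 - 1e-6)             # clamp to keep Phi^-1 finite
    p_b = max(top2[1].item(), 1e-6)
    # Gap of Corollary 1: the certified region is {delta : ||delta||_{Sigma,2} <= r}.
    r = 0.5 * (norm.ppf(p_a) - norm.ppf(p_b))
    return idx[0].item(), max(r, 0.0)
```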
## 4.2 Certifying Generalized Cross-Polytopes

Here we consider certification under the $\ell_1^\Lambda$ norm defining a generalized cross-polytope, *i.e.* the set $\{\delta : \|\delta\|_{\Lambda,1} = \|\Lambda^{-1}\delta\|_1 \le r\}$, as opposed to the $\ell_1$-bounded set that defines a cross-polytope, *i.e.* $\{\delta : \|\delta\|_1 \le r\}$. As with the ellipsoid case, and although the following results hold for any positive definite $\Lambda$, for the sake of efficiency we assume $\Lambda$ to be diagonal throughout. For generalized cross-polytope certification, we consider an anisotropic Uniform smoothing distribution, which defines the smooth classifier $g_\Lambda(x) = \mathbb{E}_{\epsilon\sim\mathcal{U}[-1,1]^n}[f(x + \Lambda\epsilon)]$. Following Theorem 1 and to certify under the $\ell_1^\Lambda$ norm, we compute the Lipschitz constant of $g_\Lambda$ under the $\|\Lambda x\|_\infty$ norm, which is the dual norm of $\|\cdot\|_{\Lambda,1}$ (see Appendix B), in the next proposition.

Proposition 3. The classifier $g_\Lambda$ is $1/2$-Lipschitz (i.e. $L = 1/2$) under the $\|\Lambda x\|_\infty$ *norm.*

Similar to Corollary 1, by combining Proposition 3 with Theorem 1, we have that:

Corollary 2. Let $c_A = \arg\max_i g^i_\Lambda(x)$*; then* $\arg\max_i g^i_\Lambda(x + \delta) = c_A$ for all $\delta$ *satisfying:*

$$\|\delta\|_{\Lambda,1}=\|\Lambda^{-1}\delta\|_{1}\leq g_{\Lambda}^{c_{A}}(x)-\operatorname*{max}_{c\neq c_{A}}g_{\Lambda}^{c}(x).$$

Corollary 2 holds the $\ell_1$ certification from Yang et al. (2020) as a special case for when $\Lambda = \lambda I$.

## 5 Evaluating Anisotropic Certificates

With the anisotropic certification framework presented in the previous section, the question arises: "Given two general (isotropic or anisotropic) certification regions $\mathcal{R}_1$ and $\mathcal{R}_2$, how can one effectively compare them?". We propose the following definition to address this issue.

Definition 1. *For a given input point* $x$, consider the two certification regions $\mathcal{R}_1$ and $\mathcal{R}_2$ obtained for two classifiers $f_1$ and $f_2$*, i.e.* $\mathcal{A}_1 = \{\delta : \arg\max_c f_1^c(x) = \arg\max_c f_1^c(x + \delta),\ \forall \delta \in \mathcal{R}_1\}$ and $\mathcal{A}_2 = \{\delta : \arg\max_c f_2^c(x) = \arg\max_c f_2^c(x + \delta),\ \forall \delta \in \mathcal{R}_2\}$ *where* $\arg\max_c f_1^c(x) = \arg\max_c f_2^c(x)$. We say $\mathcal{A}_1$ *is a "superior certificate" to* $\mathcal{A}_2$ (i.e. $\mathcal{A}_1 \succeq \mathcal{A}_2$), if and only if $\mathcal{A}_1 \supset \mathcal{A}_2$.

This definition is a natural extension of the radius-based comparison of $\ell_p$-ball certificates, providing a basis for evaluating anisotropic certification. To compare an anisotropic to an isotropic region of certification, it is not immediately clear how to (i) check that an anisotropic region is a superset of the isotropic region, and **(ii)** if it were a superset, how to quantify the improvement of the anisotropic region over the isotropic counterpart. In Sections 5.1 and 5.2, we tackle these issues for the particular cases of ellipsoid and generalized cross-polytope certificates.

²A similar result was derived in the appendix of Fischer et al. (2020); Li et al. (2020) with a more involved analysis by extending Neyman-Pearson's lemma.

## 5.1 Evaluating Ellipsoid Certificates

Comparing $\ell_2$-Balls to $\ell_2^\Sigma$-Ellipsoids (Specialization of Definition 1). Recall that if $\Sigma = \sigma^2 I$, our ellipsoid certification in Corollary 1 recovers as a special case the isotropic $\ell_2$-ball certification of Cohen et al. (2019); Salman et al. (2019a); Zhai et al. (2019). Consider the certified regions $\mathcal{R}_1 = \{\delta : \|\delta\|_2 \le \tilde{\sigma} r_1\}$ and $\mathcal{R}_2 = \{\delta : \|\delta\|_{\Sigma,2} = \sqrt{\delta^\top \Sigma^{-1}\delta} \le r_2\}$ for given $r_1, r_2 > 0$. Since we take $\Sigma = \mathrm{diag}(\{\sigma_i^2\}_{i=1}^n)$, the maximum enclosed $\ell_2$-ball for the ellipsoid $\mathcal{R}_2$ is given by the set $\mathcal{R}_3 = \{\delta : \|\delta\|_2 \le \min_i \sigma_i r_2\}$, and thus $\mathcal{R}_2 \supseteq \mathcal{R}_3$. Therefore, it suffices that $\mathcal{R}_3 \supseteq \mathcal{R}_1$ (*i.e.* $\min_i \sigma_i r_2 \ge \tilde{\sigma} r_1$) to say that $\mathcal{R}_2$ is a superior certificate to the isotropic $\mathcal{R}_1$ as per Definition 1.

Quantifying $\ell_2^\Sigma$ **Certificates.** The aforementioned specialization is only concerned with whether our ellipsoid certified region $\mathcal{R}_2$ is "superior" to the isotropic $\ell_2$-ball, without quantifying it. A natural solution is to directly compare the volumes of the certified regions. Since the volume of an ellipsoid given by $\mathcal{R}_2$ is $V(\mathcal{R}_2) = r_2^n \sqrt{\pi^n}/\Gamma(n/2 + 1) \prod_{i=1}^n \sigma_i$ (Kendall, 2004), we directly compare the *proxy radius* $\tilde{R}$ defined for $\mathcal{R}_2$ as $\tilde{R} = r_2 \sqrt[n]{\prod_i^n \sigma_i}$, since a larger $\tilde{R}$ corresponds to a certified region with larger volume. Note that $\tilde{R}$, which is the $n$-th root of the volume up to a constant factor, can be seen as a generalization of the certified radius, which it recovers in the case when $\sigma_i = \sigma\ \forall i$.
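The Section 5.1 quantities are straightforward to compute. The small helper below (ours, for illustration) returns both the maximum enclosed $\ell_2$ radius used in the superset check and the proxy radius $\tilde{R}$ for a diagonal $\Sigma$.

```python
import numpy as np

def ellipsoid_metrics(sigma_diag: np.ndarray, r2: float):
    """Metrics for the certified ellipsoid {delta : ||delta||_{Sigma,2} <= r2}
    with Sigma = diag(sigma_i^2).

    Returns the radius of the maximum enclosed l2-ball (for the superset test
    of Definition 1) and the proxy radius r2 * (prod_i sigma_i)^(1/n)."""
    max_enclosed_l2 = sigma_diag.min() * r2
    # Geometric mean via logs for numerical stability in high dimension n.
    proxy_radius = r2 * np.exp(np.mean(np.log(sigma_diag)))
    return max_enclosed_l2, proxy_radius

# Example: an anisotropic certificate beats an isotropic one of radius 0.5
# whenever its maximum enclosed ball is at least as large.
enclosed, proxy = ellipsoid_metrics(np.array([0.6, 0.8, 1.2]), r2=1.0)
print(enclosed >= 0.5, proxy)   # True, proxy radius approx. 0.832
```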
## 5.2 Evaluating Generalized Cross-Polytope Certificates

Comparing $\ell_1$-Balls to $\ell_1^\Lambda$-Generalized Cross-Polytopes (Specialization of Definition 1). Consider the certificates $\mathcal{S}_1 = \{\delta : \|\delta\|_1 \le \tilde{\lambda} r_1\}$, $\mathcal{S}_2 = \{\delta : \|\delta\|_{\Lambda,1} = \|\Lambda^{-1}\delta\|_1 \le r_2\}$, and $\mathcal{S}_3 = \{\delta : \|\delta\|_1 \le \min_i \lambda_i r_2\}$, where we take $\Lambda = \mathrm{diag}(\{\lambda_i\}_{i=1}^n)$. Note that since $\mathcal{S}_2 \supseteq \mathcal{S}_3$, then as per Definition 1, it suffices that $\mathcal{S}_3 \supseteq \mathcal{S}_1$ (*i.e.* $\min_i \lambda_i r_2 \ge \tilde{\lambda} r_1$) to say that the anisotropic generalized cross-polytope $\mathcal{S}_2$ is superior to the isotropic $\ell_1$-ball $\mathcal{S}_1$.

Quantifying $\ell_1^\Lambda$ **Certificates.** Following the approach proposed in the $\ell_2^\Sigma$ case, we quantitatively compare the generalized cross-polytope certification of Corollary 2 to the $\ell_1$ certificate through the volumes of the two regions. We first present the volume of the generalized cross-polytope.

Proposition 4. $V\left(\{\delta : \|\Lambda^{-1}\delta\|_1 \le r\}\right) = \frac{(2r)^n}{n!}\prod_i \lambda_i$.

Following this definition, we define the *proxy radius* for $\mathcal{S}_2$ in this case to be $\tilde{R} = r_2 \sqrt[n]{\prod_{i=1}^n \lambda_i}$. As with the $\ell_2$ case, a larger $\tilde{R}$ corresponds to a certified region with larger volume. As in the ellipsoid case, $\tilde{R}$ recovers the certified radius when $\lambda_i = \lambda\ \forall i$.

## 6 AnCer: Sample-Wise Volume Maximization for Anisotropic Certification

Given the results from the previous sections, we are now equipped to certify anisotropic regions, in particular ellipsoids and generalized cross-polytopes. As mentioned in Section 4, these regions are generally defined as $\mathcal{R} = \{\delta : \|\delta\|_{\Theta,p} \le r^p\}$ for a given parameter of the smoothing distribution $\Theta = \mathrm{diag}(\{\theta_i\}_{i=1}^n)$, an $\ell_p$-norm ($p \in \{1, 2\}$), and a gap value of $r^p \in \mathbb{R}^+$. At this point, one could simply take an anisotropic distribution with arbitrarily chosen parameters $\Theta$ and certify a trained network at any input point $x$, in the style of what was done in the previous randomized smoothing literature with isotropic distributions. However, the choice of $\Theta$ is more complex in the anisotropic case. A fixed choice of anisotropic $\Theta$ could severely underperform the isotropic case - take, for example, the anisotropic distribution of Figure 1d applied to the input of Figure 1c. Instead of taking a fixed $\Theta$, we generalize the framework introduced by Alfarra et al. (2022), where parameters of the smoothing distribution are optimized per input test point (*i.e.* in a *sample-wise* fashion) so as to maximize the resulting certificate.

The goal of the optimization in Alfarra et al. (2022) is, at a point $x$, to maximize the isotropic $\ell_2$ region described in Section 4.1, *i.e.* $\{\delta : \|\delta\|_2 \le \sigma^x r^p(x, \sigma^x)\}$, where $r^p$ is the gap and a function of $x$ and $\sigma^x \in \mathbb{R}^+$. In the isotropic $\ell_p$ case, this generalizes to maximizing the region $\{\delta : \|\delta\|_p \le \theta^x r^p(x, \theta^x)\}$, which can be achieved by maximizing the radius $\theta^x r^p(x, \theta^x)$ over $\theta^x \in \mathbb{R}^+$, obtaining $r^*_{\mathrm{iso}}$ (Alfarra et al., 2022).

**Algorithm 1: AnCer Optimization**

Function AnCer($f_\theta$, $x$, $\alpha$, $\Theta^0$, $n$, $K$, $\kappa$):
- Initialize: $\Theta_x^0 \leftarrow \Theta^0$
- for $k = 0 \dots K - 1$ do
  - sample $\hat{\epsilon}_1, \dots, \hat{\epsilon}_n \sim \mathcal{D}$
  - $\psi(\Theta_x^k) = \frac{1}{n}\sum_{i=1}^n f_\theta(x + \Theta_x^k \hat{\epsilon}_i)$
  - $E_A(\Theta_x^k) = \max_c \psi^c$; $y_A = \arg\max_c \psi^c$; $E_B(\Theta_x^k) = \max_{c \neq y_A} \psi^c$
  - $r^p(x, \Theta_x^k) = E_A - E_B$ if $p = 1$, and $r^p(x, \Theta_x^k) = \frac{1}{2}\left(\Phi^{-1}(E_A) - \Phi^{-1}(E_B)\right)$ if $p = 2$
  - $R(\Theta_x^k) = r^p(x, \Theta_x^k)\left(\prod_i^d \Theta_{ii}^k\right)^{1/d} + \kappa\, r^p(x, \Theta_x^k) \min_i \Theta_{ii}^k$
  - $\Theta_x^{k+1} \leftarrow \Theta_x^k + \alpha \nabla_{\Theta_x^k} R(\Theta_x^k)$
  - $\Theta_x^{k+1} \leftarrow \max\left(\Theta_x^{k+1}, \Theta^0\right)$ (element-wise maximum - projection step)
- return $\Theta_x^K$

For the general anisotropic case, we propose AnCer, whose objective is to maximize the volume of the certified region through the *proxy radius*, while satisfying the *superset* condition with respect to the maximum isotropic $\ell_2$ radius, $r^*_{\mathrm{iso}}$.
In the case of the ellipsoids and generalized cross-polytopes as presented in Sections 5.1 and 5.2, respectively, AnCer's optimization problem can be written as:

$$\operatorname*{arg\,max}_{\Theta^{x}}\;\;r^{p}\left(x,\Theta^{x}\right)\sqrt[n]{\prod_{i}\theta_{i}^{x}}\quad\mathrm{s.t.}\quad\operatorname*{min}_{i}\;\theta_{i}^{x}\,r^{p}\left(x,\Theta^{x}\right)\geq r_{\mathrm{iso}}^{*}\qquad(1)$$

where $r^p(x, \Theta^x)$ is the gap value under the anisotropic smoothing distribution. That is,

$$r^{p}\left(x,\Lambda^{x}\right)=g_{\Lambda}^{c_{A}}(x)-\operatorname*{max}_{c\neq c_{A}}g_{\Lambda}^{c}(x),\qquad r^{p}\left(x,\Sigma^{x}\right)=\frac{1}{2}\left(\Phi^{-1}\left(g_{\Sigma}^{c_{A}}(x)\right)-\Phi^{-1}\left(\operatorname*{max}_{c\neq c_{A}}g_{\Sigma}^{c}(x)\right)\right)$$

for $\ell_1$ and $\ell_2$, respectively. This is a nonlinear constrained optimization problem that is challenging to solve. As such, we relax it, and solve instead:

$$\operatorname*{arg\,max}_{\Theta^{x}}\;\;r^{p}\left(x,\Theta^{x}\right)\sqrt[n]{\prod_{i}\theta_{i}^{x}}+\kappa\operatorname*{min}_{i}\;\theta_{i}^{x}\,r^{p}\left(x,\Theta^{x}\right)\quad\mathrm{s.t.}\quad\theta_{i}^{x}\geq\bar{\theta}^{x}$$

given a hyperparameter $\kappa \in \mathbb{R}^+$. While the constraint $\theta_i^x \ge \bar{\theta}^x$ is not explicitly required to enforce the superset condition over the isotropic case, it proved beneficial from an empirical perspective. To sample from the distribution parameterized by $\Theta^x$ (in our case, either a Gaussian or Uniform), we make use of the reparameterization trick, as in Alfarra et al. (2022). The solution of this optimization problem can be found iteratively by performing projected gradient ascent, as detailed in Algorithm 1. A standalone implementation of the AnCer optimization stage is presented in Listing 1 in Appendix C.
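Since the authors' Listing 1 lives in the appendix, we include here a brief, self-contained PyTorch sketch of the relaxed objective and the projected gradient ascent loop of Algorithm 1. All names are ours, and details are simplified (e.g., probabilities are clamped rather than estimated with confidence bounds); it is a sketch under these assumptions, not a definitive implementation.

```python
import torch

def ancer_optimize(f, x, theta0, p=2, kappa=2.0, lr=0.04, iters=100, n_samples=100):
    """Sketch of Algorithm 1: maximize gap * geomean(theta)
    + kappa * min_i(theta_i) * gap, with theta_i >= theta0 (projection)."""
    normal = torch.distributions.Normal(0.0, 1.0)
    theta = theta0.clone().requires_grad_(True)          # diagonal of Theta
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(iters):
        eps = torch.randn(n_samples, *x.shape)           # reparameterization trick
        probs = f(x.unsqueeze(0) + theta * eps).mean(0)  # psi(Theta), soft class scores
        top2, _ = probs.topk(2)
        e_a = top2[0].clamp(1e-4, 1 - 1e-4)
        e_b = top2[1].clamp(1e-4, 1 - 1e-4)
        if p == 2:   # Gaussian gap: 0.5 * (Phi^-1(E_A) - Phi^-1(E_B))
            gap = 0.5 * (normal.icdf(e_a) - normal.icdf(e_b))
        else:        # Uniform gap: E_A - E_B
            gap = e_a - e_b
        # Proxy radius term (geometric mean of theta) plus the min-theta term.
        objective = gap * torch.exp(torch.log(theta).mean()) + kappa * gap * theta.min()
        opt.zero_grad()
        (-objective).backward()
        opt.step()
        with torch.no_grad():
            theta.clamp_(min=theta0)                     # element-wise projection step
    return theta.detach()
```

A usage note: `f` is assumed to be a differentiable soft classifier (probabilities, not hard predictions), so the gap term can be maximized by gradient ascent, matching the sample-wise optimization view of Alfarra et al. (2022).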
Memory-based Anisotropic Certification. While each of the classifiers induced by the parameter $\Theta^x$, *i.e.* $g_{\Theta^x}$, is robust by definition as presented in Section 4, the certification of the overall data-dependent classifier is not necessarily sound due to the optimization procedure for each $x$. This is a known issue in certifying data-dependent classifiers, and is addressed by Alfarra et al. (2022) through the use of a memory-based procedure. In Appendix D, we present a version of this algorithm adapted to AnCer. All subsequent results are obtained following this procedure.

Limitations of AnCer. Given that AnCer uses a memorization procedure similar to the one presented in Alfarra et al. (2022), it incurs limitations on memory and runtime complexity. Note that in memory-based data-dependent certification there is a single procedure for both certification and inference, in contrast with the fixed-$\sigma$ setting of Cohen et al. (2019). The main limitations of memory-based certification are outlined in Appendix E of Alfarra et al. (2022). The anisotropic case adds to the complexity of the isotropic framework through the increased runtime of specific functions presented in Appendix D. Certification runtime comparisons are in Section 7.4. The memory-based procedure incurs the same memory cost as the one presented in Alfarra et al. (2022), *i.e.* it has a memory complexity of $\mathcal{O}(N)$, where $N$ is the total number of inferred samples. This is because the memory-based method requires saving the observed instances along with their smoothing parameters. While the linear runtime dependency on memory size might appear daunting for the deployment of such a system, there are a few factors that could mitigate the cost. Firstly, in practice deployed models get regularly updated, and the memory should be reset in those situations. Secondly, there are possible solutions which might attain sublinear runtime for the post-certification stage, such as the application of k-d trees to reduce the space of comparisons and speed up the process. As such, we believe AnCer to be suited to applications in offline scenarios, where improved robustness is desired and inference time is not a critical issue.

A further limitation of the memorization procedure has to do with the impact of the order in which inputs are certified on the overall statistics obtained. Within a memory-based framework, certifying $x_2$ with $x_1$ in memory can be different from certifying $x_1$ with $x_2$ in memory if they intersect. In practice, given the low number of intersections observed with the original certified regions, this effect was almost negligible in the results presented in Section 7. For fairness of comparison with non-memory based methods, we report "worst-case" results for AnCer in which we abstain from deciding whenever an intersection of two certified regions occurs.

## 7 Experiments

We now study the empirical performance of AnCer to obtain $\ell_2^\Sigma$, $\ell_1^\Lambda$, $\ell_2$ and $\ell_1$ certificates on networks trained using randomized smoothing methods found in the literature. In this section, we show that AnCer is able to achieve (i) improved performance on those networks in terms of $\ell_2$ and $\ell_1$ certification when compared to certification baselines that smooth using a fixed isotropic $\sigma$ (Fixed $\sigma$) (Cohen et al., 2019; Yang et al., 2020; Salman et al., 2019a; Zhai et al., 2019) or a data-dependent, memory-based isotropic one (Isotropic DD) (Alfarra et al., 2022); and **(ii)** a significant improvement in terms of the $\ell_2^\Sigma$- and $\ell_1^\Lambda$-norm certified regions obtained by the same methods - compared by computing the *proxy radius* of the certified regions - thus generally satisfying the conditions of a superior certificate proposed in Definition 1. Note that both data-dependent approaches (Isotropic DD and AnCer) use memory-based procedures. As such, the gains described in this section constitute a trade-off given the limitations of the method described in Section 6.

We follow an evaluation procedure as similar as possible to the ones described in Cohen et al. (2019); Yang et al. (2020); Salman et al. (2019a); Zhai et al. (2019) by using code and pre-trained networks whenever available and by performing experiments on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009), certifying the entire CIFAR-10 test set and a subset of 500 examples from the ImageNet test set. For the implementation of AnCer, we solve Equation (1) with Adam for 100 iterations, where the certification gap $r^p(x, \Theta^x)$ is estimated at each iteration using 100 noise samples per test point (see Appendix C), and $\Theta^x$ in Equation (1) is initialized with the Isotropic DD solution from Alfarra et al. (2022). Further details of the setup can be found in Appendix E.

As in previous works, $\ell_p$ **certified accuracy** at radius $R$ is defined as the portion of the test set $\mathcal{D}_t$ for which the smooth classifier correctly classifies with an $\ell_p$ certification radius of at least $R$. In a similar fashion, we define the anisotropic $\ell_2^\Sigma$/$\ell_1^\Lambda$ certified accuracy at a proxy radius of $\tilde{R}$ (as defined in Section 5) to be the portion of $\mathcal{D}_t$ on which the smooth classifier classifies correctly with an $\ell_2^\Sigma$/$\ell_1^\Lambda$-norm certificate of an $n$-th root volume of at least $\tilde{R}$. We also report the **average certified radius** (ACR) defined as $\mathbb{E}_{x,y\sim\mathcal{D}_t}[R_x \mathbb{1}(g(x) = y)]$ (Alfarra et al., 2022; Zhai et al., 2019), as well as the **average certified proxy radius** (ACR̃) defined as $\mathbb{E}_{x,y\sim\mathcal{D}_t}[\tilde{R}_x \mathbb{1}(g(x) = y)]$, where $R_x$ and $\tilde{R}_x$ denote the radius and proxy radius at $x$ with a true label $y$ for a smooth classifier $g$. Recall that in the isotropic case, the proxy radius is, by definition, the same as the radius for a given $\ell_p$-norm.
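As an illustration of these metrics, the helper below (with our own, hypothetical naming) computes the certified accuracy at a radius $R$ and the ACR from per-sample certification results.

```python
import numpy as np

def certified_accuracy(radii, correct, R):
    """Fraction of the test set certified correctly with radius at least R."""
    return float(np.mean((radii >= R) & correct))

def average_certified_radius(radii, correct):
    """ACR = E[R_x * 1(g(x) = y)]; misclassified samples contribute 0."""
    return float(np.mean(radii * correct))

# Example: three test points with certified radii and correctness flags.
radii = np.array([0.9, 0.0, 1.5])        # 0.0 radius: abstained / not certified
correct = np.array([True, False, True])
print(certified_accuracy(radii, correct, R=0.5))   # 0.666...
print(average_certified_radius(radii, correct))    # (0.9 + 0 + 1.5)/3 = 0.8
```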
For each classifier, we ran experiments on the $\sigma$ values reported in the original work (with the exception of Yang et al. (2020), see Section 7.2). For the sake of brevity, we report in this section the top-1 certified accuracy plots, ACR and ACR̃ per radius across $\sigma$, as in Salman et al. (2019a); Zhai et al. (2019); Alfarra et al. (2022). The performance of each method per $\sigma$ is presented in Appendix G.

![8_image_0.png](8_image_0.png)

Figure 3: Distribution of top-1 certified accuracy as a function of $\ell_2$ radius (top) and $\ell_2^\Sigma$-norm proxy radius (bottom) obtained by different certification methods on CIFAR-10 and ImageNet.

## 7.1 Ellipsoid Certification ($\ell_2$ and $\ell_2^\Sigma$-Norm Certificates)

We perform the comparison of $\ell_2$-ball vs. $\ell_2^\Sigma$-ellipsoid certificates via Gaussian smoothing using networks trained following the procedures defined in Cohen et al. (2019), Salman et al. (2019a), and Zhai et al. (2019). For each of these, we report results on ResNet18 trained using $\sigma \in \{0.12, 0.25, 0.5, 1.0\}$ for CIFAR-10, and ResNet50 using $\sigma \in \{0.25, 0.5, 1.0\}$ for ImageNet. For details of the training procedures, see Appendix E.1. Figure 3 plots top-1 certified accuracy as a function of the $\ell_2$ radius (top) and of the $\ell_2^\Sigma$-norm proxy radius (bottom) per trained network and dataset, while Table 1 presents an overview of the certified accuracy at various $\ell_2$ radii, as well as $\ell_2$ ACR and $\ell_2^\Sigma$-norm ACR̃. Recall that, following the considerations in Section 5.1, the $\ell_2$ certificate obtained through AnCer is the maximum enclosed isotropic $\ell_2$-ball in the $\ell_2^\Sigma$ ellipsoid.

First, we note that sample-wise certification (Isotropic DD and AnCer) achieves higher certified accuracy than fixed $\sigma$ across the board. This mirrors the findings in Alfarra et al. (2022), since certifying with a fixed $\sigma$ for all samples struggles with the robustness/accuracy trade-off first mentioned in Cohen et al. (2019), whereas the data-dependent solutions explicitly optimize $\sigma$ per sample to avoid it. More importantly, AnCer achieves new state-of-the-art $\ell_2$ certified accuracy at most radii in Table 1, *e.g.* at radius 0.5 AnCer brings certified accuracy to 77% (from 66%) and 70% (from 62%) on CIFAR-10 and ImageNet, respectively, yielding relative percentage improvements in ACR between 13% and 47% when compared to Isotropic DD. While the results are significant, it might not be immediately clear why maximizing the volume of an ellipsoid with AnCer results in a larger maximum enclosed $\ell_2$-ball certificate in the $\ell_2^\Sigma$ ellipsoid when compared to optimizing the $\ell_2$-ball with Isotropic DD. We explore this phenomenon in Section 7.3.

![8_image_1.png](8_image_1.png)

Figure 4: Distribution of top-1 certified accuracy as a function of $\ell_1$ radius (top) and $\ell_1^\Lambda$-norm proxy radius (bottom) obtained by different certification methods on CIFAR-10 and ImageNet.

As expected, AnCer substantially improves $\ell_2^\Sigma$ ACR̃ compared to Isotropic DD in all cases - with relative improvements in ACR̃ between 38% and 63% over both datasets.
The joint results, certification with $\ell_2$ and $\ell_2^\Sigma$, establish that AnCer certifies the $\ell_2$-ball region obtained by previous approaches in addition to a much larger region captured by the $\ell_2^\Sigma$ certified accuracy and $\widetilde{\mathrm{ACR}}$, and is therefore, according to Definition 1, generally superior to the Isotropic DD certificate.

Table 1: Comparison of top-1 certified accuracy at different $\ell_2$ radii, $\ell_2$ average certified radius (ACR) and $\ell_2^\Sigma$ average certified proxy radius ($\widetilde{\mathrm{ACR}}$) obtained by using the isotropic σ used for training the networks (Fixed σ); the isotropic data-dependent (Isotropic DD) optimization scheme from Alfarra et al. (2022); and AnCer's data-dependent anisotropic optimization. The accuracy columns give the top-1 certified accuracy (%) at the indicated $\ell_2$ radius.

| CIFAR-10 | Certification | 0.0 | 0.25 | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | $\ell_2$ ACR | $\ell_2^\Sigma$ $\widetilde{\mathrm{ACR}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| Cohen (Cohen et al., 2019) | Fixed σ | 86 | 71 | 51 | 27 | 14 | 6 | 2 | 0.722 | 0.722 |
| | Isotropic DD | 82 | 76 | 62 | 39 | 24 | 14 | 8 | 1.117 | 1.117 |
| | AnCer | 86 | 85 | 77 | 53 | 31 | 17 | 10 | 1.449 | 1.772 |
| SmoothAdv (Salman et al., 2019a) | Fixed σ | 82 | 72 | 55 | 32 | 19 | 9 | 5 | 0.834 | 0.834 |
| | Isotropic DD | 82 | 75 | 63 | 40 | 25 | 15 | 7 | 1.011 | 1.011 |
| | AnCer | 83 | 81 | 73 | 48 | 30 | 17 | 8 | 1.224 | 1.573 |
| MACER (Zhai et al., 2019) | Fixed σ | 87 | 76 | 59 | 37 | 24 | 14 | 9 | 0.970 | 0.970 |
| | Isotropic DD | 88 | 80 | 66 | 40 | 17 | 9 | 6 | 1.007 | 1.007 |
| | AnCer | 84 | 80 | 67 | 34 | 15 | 11 | 9 | 1.136 | 1.481 |

| ImageNet | Certification | 0.0 | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | $\ell_2$ ACR | $\ell_2^\Sigma$ $\widetilde{\mathrm{ACR}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| Cohen (Cohen et al., 2019) | Fixed σ | 70 | 56 | 41 | 31 | 19 | 14 | 12 | 1.098 | 1.098 |
| | Isotropic DD | 71 | 59 | 46 | 36 | 24 | 19 | 15 | 1.234 | 1.234 |
| | AnCer | 70 | 70 | 62 | 61 | 42 | 36 | 29 | 1.810 | 1.981 |
| SmoothAdv (Salman et al., 2019a) | Fixed σ | 65 | 59 | 44 | 38 | 26 | 20 | 18 | 1.287 | 1.287 |
| | Isotropic DD | 66 | 62 | 53 | 41 | 32 | 24 | 20 | 1.428 | 1.428 |
| | AnCer | 66 | 66 | 62 | 58 | 44 | 37 | 32 | 1.807 | 1.965 |

## 7.2 Generalized Cross-Polytope Certification ($\ell_1$ and $\ell_1^\Lambda$-Norm Certificates)

To investigate $\ell_1$-ball vs. $\ell_1^\Lambda$-generalized cross-polytope certification via uniform smoothing, we compare AnCer to the $\ell_1$ state-of-the-art results from RS4A (Yang et al., 2020). The authors of the original work report best certified accuracy based on 15 networks trained at different σ levels between 0.15 and 3.5 on CIFAR-10 (WideResNet40) and ImageNet (ResNet50); due to limited computational resources, we perform the analysis on a subset of those networks with σ ∈ {0.25, 0.5, 1.0}. We reproduce the results in Yang et al. (2020) as closely as possible, with details of the training procedure presented in Appendix E.2. Figure 4 shows the top-1 certified accuracy as a function of the $\ell_1$ radius (top) and of the $\ell_1^\Lambda$-norm proxy radius (bottom) for RS4A, and Table 2 shows an overview of the certified accuracy at various $\ell_1$ radii, as well as the $\ell_1$ ACR and $\ell_1^\Lambda$ $\widetilde{\mathrm{ACR}}$. As with the ellipsoid case, we notice that AnCer outperforms both Fixed σ and Isotropic DD at most $\ell_1$ radii, establishing new state-of-the-art results on CIFAR-10 at radii 0.5 and 1.0, and on ImageNet at radius 0.5 (compared to previous results reported in Yang et al. (2020)).
Once more and as expected, AnCer significantly improves the $\ell_1^\Lambda$ $\widetilde{\mathrm{ACR}}$ for all radii, pointing to substantially larger certificates than the isotropic case. These results also establish that AnCer certifies the $\ell_1$-ball region obtained by previous work, in addition to the larger region obtained by the $\ell_1^\Lambda$ certificate, and thus we can consider it superior (with respect to Definition 1) to Isotropic DD.

## 7.3 Why Does AnCer Improve Upon Isotropic DD's $\ell_p$ Certificates?

As observed in Sections 7.1 and 7.2, AnCer's $\ell_2$ and $\ell_1$ certificates outperform the corresponding certificates obtained by Isotropic DD. To explain this, we compare the $\ell_2$ certified region obtained by AnCer, defined in Section 6 as $\{\delta : \|\delta\|_2 \leq \min_i \sigma^x_i\, r(x, \Sigma^x)\}$, to the one obtained by Isotropic DD, defined as $\{\delta : \|\delta\|_2 \leq \sigma^x r(x, \sigma^x)\}$. The radius of each of these certificates separates into a σ-factor ($\sigma^x$ vs. $\sigma^x_{\min} = \min_i \sigma^x_i$) and a gap-factor ($r(x, \sigma^x)$ vs. $r(x, \Sigma^x)$). We posit that the seemingly surprising result can be attributed to the computation of the gap-factor $r$ using an anisotropic, optimized distribution. However, another potential explanation would be that AnCer simply benefits from a prematurely stopped initialization provided by Isotropic DD, achieving a better $\sigma^x_{\min}$ than the isotropic $\sigma^x$ when given further optimization iterations.

Table 2: Comparison of top-1 certified accuracy at different $\ell_1$ radii, $\ell_1$ average certified radius (ACR) and $\ell_1^\Lambda$ average certified proxy radius ($\widetilde{\mathrm{ACR}}$) obtained by using the isotropic σ used for training the networks (Fixed σ); the isotropic data-dependent (Isotropic DD) optimization scheme from Alfarra et al. (2022); and AnCer's data-dependent anisotropic optimization. The accuracy columns give the top-1 certified accuracy (%) at the indicated $\ell_1$ radius.

| CIFAR-10 | Certification | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.5 | 2.0 | $\ell_1$ ACR | $\ell_1^\Lambda$ $\widetilde{\mathrm{ACR}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| RS4A (Yang et al., 2020) | Fixed σ | 92 | 83 | 75 | 71 | 46 | 0 | 0 | 0.775 | 0.775 |
| | Isotropic DD | 92 | 89 | 82 | 76 | 58 | 6 | 2 | 0.946 | 0.946 |
| | AnCer | 92 | 90 | 84 | 80 | 63 | 6 | 2 | 0.980 | 1.104 |

| ImageNet | Certification | 0.0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.5 | 2.0 | $\ell_1$ ACR | $\ell_1^\Lambda$ $\widetilde{\mathrm{ACR}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| RS4A (Yang et al., 2020) | Fixed σ | 78 | 73 | 67 | 63 | 0 | 0 | 0 | 0.683 | 0.683 |
| | Isotropic DD | 79 | 76 | 70 | 65 | 46 | 0 | 0 | 0.729 | 0.729 |
| | AnCer | 78 | 76 | 70 | 66 | 48 | 0 | 0 | 0.730 | 1.513 |

To investigate this, we take the optimized parameters from the Isotropic DD experiments on SmoothAdv for an initial σ = 0.25 on CIFAR-10, and run the optimization step of Isotropic DD for 100 iterations more than its default number of iterations from Alfarra et al. (2022), so as to match the total number of optimization steps between Isotropic DD and AnCer. The histograms of $\sigma^x$ (or $\sigma^x_{\min}$) and of the gap-factor $r$, *i.e.* the two factors of the $\ell_2$ certification results, are presented in Figure 5. While $\sigma^x$ for Isotropic DD is similar in distribution to AnCer's $\sigma^x_{\min}$, the distributions of the two gaps, $r(x, \sigma^x)$ and $r(x, \Sigma^x)$, are quite different. In particular, the AnCer certification gap is significantly larger than that of Isotropic DD, and is the main contributor to the improvement in AnCer's $\ell_2$-ball certificate. That is to say, AnCer generates a $\Sigma^x$ that is better aligned with the decision boundaries, and hence increases the confidence of the smooth classifier.
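To make the two-factor decomposition above concrete, here is a minimal sketch (function names are ours; `pA` and `pB` denote the estimated class-probability bounds, with the Gaussian gap taking the form $\Phi^{-1}(p_A) - \Phi^{-1}(p_B)$ following the Lipschitz argument of Section 4):

```python
import numpy as np
from scipy.stats import norm

def gaussian_gap(pA: float, pB: float) -> float:
    # gap-factor r: larger when the smoothing distribution is better aligned
    # with the decision boundaries around the input
    return 0.5 * (norm.ppf(pA) - norm.ppf(pB))

def l2_radius_isotropic(sigma: float, pA: float, pB: float) -> float:
    # sigma-factor times gap-factor, as in the isotropic certificate
    return sigma * gaussian_gap(pA, pB)

def l2_radius_ancer(sigmas: np.ndarray, pA: float, pB: float) -> float:
    # maximum enclosed l2-ball in the certified ellipsoid: the sigma-factor
    # becomes min_i sigma_i, while pA/pB are estimated under the anisotropic
    # smoothing distribution (which is what enlarges the gap in practice)
    return float(sigmas.min()) * gaussian_gap(pA, pB)
```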
## 7.4 Certification Runtime

![10_image_0.png](10_image_0.png)

Figure 5: Histograms of the values of the gap $r$ (left) and the σ-factor (right) obtained by AnCer initialized with Isotropic DD, and by Isotropic DD when allowed to run for 100 iterations more than the baseline. Vertical lines mark the median of the data.

The certification procedures of Isotropic DD and AnCer trade improved certified accuracy for runtime, since they require a sample-wise optimization to be run prior to the Certify step described in Cohen et al. (2019), as well as a memory-based step as per Alfarra et al. (2022). The runtime of the optimization and certification procedures is roughly equal for $\ell_1$, $\ell_2$, $\ell_2^\Sigma$ and $\ell_1^\Lambda$ certification, and mostly depends on the network architecture. As such, we report in Table 3 the average certification runtime for a test set sample on an NVIDIA Quadro RTX 6000 GPU for Fixed σ, Isotropic DD and AnCer (including the isotropic initialization step). We observe that the runtime overhead for AnCer is modest compared to its certification gains. Finally, due to the memory-based step in our approach, the inference and certification runtimes are the same.

Table 3: Average certification time for each sample per architecture used: (a) ResNet18 ($\ell_2$, $\ell_2^\Sigma$ on CIFAR-10), (b) WideResNet40 ($\ell_1$, $\ell_1^\Lambda$ on CIFAR-10), and (c) ResNet50 (ImageNet).

| | Fixed σ | Isotropic DD | AnCer |
|---|---|---|---|
| (a) | 1.6s | 1.8s | 2.7s |
| (b) | 7.4s | 9.5s | 11.5s |
| (c) | 109.5s | 136.0s | 147.0s |

## 8 Conclusion

We lay the theoretical foundations for anisotropic certification through a simple analysis, propose a metric for comparing general robustness certificates, and introduce AnCer, a certification procedure that estimates the parameters of the anisotropic smoothing distribution so as to maximize the certificate. Our experiments show that AnCer achieves state-of-the-art $\ell_1$ and $\ell_2$ certified accuracy in the data-dependent setting.

## Acknowledgments

This publication is based upon work supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems [EP/S024050/1] and Five AI Limited; by King Abdullah University of Science and Technology (KAUST) under Award No. ORA-CRG10-2021-4648, as well as the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI); and by the Royal Academy of Engineering under the Research Chair and Senior Research Fellowships scheme, EPSRC/MURI grant EP/N019474/1.

## References

Motasem Alfarra, Adel Bibi, Philip H. S. Torr, and Bernard Ghanem. Data dependent randomized smoothing. In *Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence*, volume 180 of *Proceedings of Machine Learning Research*, pp. 64-74. PMLR, 01-05 Aug 2022.

Rudy Bunel, Ilker Turkaslan, Philip HS Torr, Pushmeet Kohli, and M Pawan Kumar. A unified view of piecewise linear neural network verification. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2018.

Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. In *International Conference on Machine Learning (ICML)*, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2009.

Krishnamurthy Dj Dvijotham, Jamie Hayes, Borja Balle, Zico Kolter, Chongli Qin, András György, Kai Xiao, Sven Gowal, and Pushmeet Kohli.
A framework for robustness certification of smoothed classifiers using f-divergences. In *International Conference on Learning Representations (ICLR)*, 2020.

Marc Fischer, Maximilian Baader, and Martin Vechev. Certified defense to image transformations via randomized smoothing. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Igor Gilitschenski and Uwe D Hanebeck. A robust computational test for overlap of two arbitrary-dimensional ellipsoids in fault-detection of kalman filters. In *2012 15th International Conference on Information Fusion*, 2012.

Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In *International Conference on Learning Representations (ICLR)*, 2015.

Sven Gowal, Krishnamurthy Dj Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. Scalable verified training for provably robust image classification. In *IEEE International Conference on Computer Vision (ICCV)*, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2016.

Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In *International Conference on Computer Aided Verification (CAV)*, 2017.

Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Matt Jordan and Alexandros G Dimakis. Exactly computing the local lipschitz constant of relu networks. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Hamid Karimi, Tyler Derr, and Jiliang Tang. Characterizing the decision boundary of deep neural networks. *arXiv preprint arXiv:1912.11460*, 2019.

Maurice G Kendall. *A Course in the Geometry of n Dimensions*. Courier Corporation, 2004.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Curse of dimensionality on randomized smoothing for certifiable robustness. In *International Conference on Machine Learning (ICML)*, 2020.

Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In *IEEE Symposium on Security and Privacy (SP)*, 2019.

Guang-He Lee, Yang Yuan, Shiyu Chang, and Tommi S Jaakkola. Tight certificates of adversarial robustness for randomly smoothed classifiers. *arXiv preprint arXiv:1906.04948*, 2019.

Alexander Levine and Soheil Feizi. Robustness certificates for sparse adversarial attacks by randomized ablation. In *Association for the Advancement of Artificial Intelligence (AAAI)*, 2020.

Alexander Levine and Soheil Feizi. Improved, deterministic smoothing for l1 certified robustness. *arXiv preprint arXiv:2103.10834*, 2021.

Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified adversarial robustness with additive noise. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.

Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Tao Xie, Ce Zhang, and Bo Li. Provable robust learning based on transformation-specific smoothing. *arXiv preprint arXiv:2002.12398*, 2020.

Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel.
Higher-order certification for randomized smoothing. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.

Lluís Ros, Assumpta Sabater, and Federico Thomas. An ellipsoidal calculus based on propagation and fusion. *IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)*, 2002.

Hadi Salman, Jerry Li, Ilya P Razenshteyn, Pengchuan Zhang, Huan Zhang, Sébastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019a.

Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to tight robust verification of neural networks. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019b.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In *International Conference on Learning Representations (ICLR)*, 2014.

Vincent Tjeng, Kai Xiao, and Russ Tedrake. Evaluating robustness of neural networks with mixed integer programming. In *International Conference on Learning Representations (ICLR)*, 2019.

Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. *arXiv preprint arXiv:1704.03453*, 2017.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In *International Conference on Learning Representations (ICLR)*, 2018.

Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S Dhillon, and Luca Daniel. Towards fast computation of certified robustness for relu networks. In *International Conference on Machine Learning (ICML)*, 2018.

Greg Yang, Tony Duan, J Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized smoothing of all shapes and sizes. In *International Conference on Machine Learning (ICML)*, 2020.

Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. Macer: Attack-free and scalable robust training via maximizing certified radius. In *International Conference on Learning Representations (ICLR)*, 2019.

Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, and Qiang Liu. Filling the soap bubbles: Efficient black-box adversarial certification with non-gaussian smoothing. https://openreview.net/forum?id=Skg8gJBFvr, 2019.

## A Qualitative Motivation Of Anisotropic Certification
## A.1 Visualizing CIFAR-10 Optimized Isotropic vs. Anisotropic Certificates

To extend the illustration in Figure 1 to a higher-dimensional input, we now analyze an example of the isotropic $\ell_2$ certification of randomized smoothing with $\mathcal{N}(0, \sigma^2 I)$, where σ is optimized per input (Alfarra et al., 2022), against AnCer, which certifies an anisotropic region characterized by a diagonal $\ell_2^\Sigma$-norm. To do so, we consider a CIFAR-10 (Krizhevsky, 2009) data point $x$ of size 32×32×3. We perform a 2D analysis by considering the regions closest to a decision boundary. Following Moosavi-Dezfooli et al. (2019), we compute the Hessian of $f^y(x)$ with respect to $x$, where $y$ is the true label of $x$ and $f$ classifies $x$ correctly, *i.e.* $y = \arg\max_i f^i(x)$. In addition to the Hessian, we also compute its eigenvector decomposition, yielding the eigenvectors $\{\nu_i\}$, $i \in \{1, \ldots, 3072\}$, ordered by descending absolute value of the respective eigenvalues. In Figure 6a, we show the projection of the landscape of $f^y$ onto the highest-curvature directions, *i.e.* $\nu_1$ and $\nu_2$. Note that in these two dimensions the isotropic certification, much as in Figure 1c, is nearly optimal when compared to the anisotropic region. However, if we take the same projection with respect to the eigenvectors with the highest and lowest eigenvalues, *i.e.* $\nu_1$ and $\nu_{3072}$, the advantages of the anisotropic certification become clear, as shown in Figure 6b.

Figure 6: Illustration of the landscape of $f^y$ for points around an input point $x$, and two projections of an isotropic $\ell_2$ certified region and an anisotropic $\ell_2^\Sigma$-norm region on a CIFAR-10 example onto a subset of two eigenvectors of the Hessian of $f^y$ (blue regions correspond to higher confidence in $y$; panels (a) and (b) show the $(\nu_1, \nu_2)$ and $(\nu_1, \nu_{3072})$ projections, respectively).
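A sketch of the Hessian-based analysis above (illustrative only; it assumes a CIFAR-10-shaped input and a model mapping NCHW batches to logits):

```python
import torch

def top_curvature_directions(model: torch.nn.Module, x: torch.Tensor, y: int):
    """Eigen-directions of the Hessian of the true-class logit f^y at x.

    x is a single flattened input of shape (3072,). The full 3072x3072
    Hessian is computed densely, which is slow but feasible at this size.
    """
    def f_y(inp: torch.Tensor) -> torch.Tensor:
        return model(inp.view(1, 3, 32, 32))[0, y]

    H = torch.autograd.functional.hessian(f_y, x)   # shape (3072, 3072)
    eigvals, eigvecs = torch.linalg.eigh(H)         # symmetric eigendecomposition
    order = torch.argsort(eigvals.abs(), descending=True)
    return eigvals[order], eigvecs[:, order]        # nu_1 is the first column
```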
## B Anisotropic Certification And Evaluation Proofs

**Proposition 1** (restatement). *Consider a differentiable function $g : \mathbb{R}^n \to \mathbb{R}$. If $\sup_x \|\nabla g(x)\|_* \le L$, where $\|\cdot\|_*$ is the dual norm of $\|\cdot\|$, i.e. $\|z\| = \max_x z^\top x$ s.t. $\|x\|_* \le 1$, then $g$ is $L$-Lipschitz under the norm $\|\cdot\|$, that is, $|g(x) - g(y)| \le L\|x - y\|$.*

*Proof.* Consider some $x, y \in \mathbb{R}^n$ and the parameterization $\gamma(t) = (1-t)x + ty$ for $t \in [0, 1]$; note that $\gamma(0) = x$ and $\gamma(1) = y$. By the Fundamental Theorem of Calculus we have:

$$|g(y)-g(x)|=|g(\gamma(1))-g(\gamma(0))|=\left|\int_{0}^{1}\frac{dg(\gamma(t))}{dt}dt\right|=\left|\int_{0}^{1}\nabla g(\gamma(t))^{\top}\dot{\gamma}(t)\,dt\right|\leq\int_{0}^{1}\left|\nabla g(\gamma(t))^{\top}\dot{\gamma}(t)\right|dt$$
$$\leq\int_{0}^{1}\|\nabla g(\gamma(t))\|_{*}\,\|\dot{\gamma}(t)\|\,dt\leq L\|y-x\|.$$

**Theorem 1** (restatement). *Let $g : \mathbb{R}^n \to \mathbb{R}^K$ with $g^i$ $L$-Lipschitz continuous under norm $\|\cdot\|$ for all $i \in \{1, \ldots, K\}$, and $c_A = \arg\max_i g^i(x)$. Then we have $\arg\max_i g^i(x+\delta) = c_A$ for all $\delta$ satisfying:*

$$\|\delta\|\leq\frac{1}{2L}\left(g^{c_{A}}(x)-\operatorname*{max}_{c}g^{c\neq c_{A}}(x)\right).$$

*Proof.* Take $c_B = \arg\max_{c \ne c_A} g^c(x)$. By Proposition 1, we get:

$$|g^{c_{A}}(x+\delta)-g^{c_{A}}(x)|\leq L\|\delta\|\implies g^{c_{A}}(x+\delta)\geq g^{c_{A}}(x)-L\|\delta\|,$$
$$|g^{c_{B}}(x+\delta)-g^{c_{B}}(x)|\leq L\|\delta\|\implies g^{c_{B}}(x+\delta)\leq g^{c_{B}}(x)+L\|\delta\|.$$

By subtracting the inequalities and re-arranging terms, we have that as long as $g^{c_A}(x) - L\|\delta\| > g^{c_B}(x) + L\|\delta\|$, *i.e.* the bound in the theorem, then $g^{c_A}(x+\delta) > g^{c_B}(x+\delta)$, completing the proof.

**Proposition 2** (restatement). *Consider $g_\Sigma(x) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,\Sigma)}[f(x+\epsilon)]$. Then $\Phi^{-1}(g_\Sigma(x))$ is 1-Lipschitz (i.e. $L = 1$) under the $\|\cdot\|_{\Sigma^{-1},2}$ norm.*

*Proof.* To prove Proposition 2, one needs to show that $\Phi^{-1}(g^i_\Sigma(x))$ is 1-Lipschitz under the $\|\cdot\|_{\Sigma^{-1},2}$ norm for all $i$. For ease of notation, we drop the superscript in $g^i_\Sigma$ and write only $g_\Sigma$. We want to show that $\|\nabla\Phi^{-1}(g_\Sigma(x))\|_{\Sigma^{-1},2} = \|\Sigma^{1/2}\nabla\Phi^{-1}(g_\Sigma(x))\|_2 \le 1$. Following the argument presented in Salman et al. (2019a), it suffices to show that, for any unit-norm direction $u$ and $p = g_\Sigma(x)$, we have:

$$u^{\top}\Sigma^{\frac{1}{2}}\nabla g_{\Sigma}(x)\leq\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}(\Phi^{-1}(p))^{2}\right).\tag{2}$$

We start by noticing that:

$$u^{\top}\Sigma^{\frac{1}{2}}\nabla g_{\Sigma}(x)=\frac{1}{(\sqrt{2\pi})^{n}\sqrt{|\Sigma|}}\int_{\mathbb{R}^{n}}f(t)\,u^{\top}\Sigma^{\frac{1}{2}}\Sigma^{-1}(t-x)\exp\left(-\frac{1}{2}(x-t)^{\top}\Sigma^{-1}(x-t)\right)d^{n}t$$
$$=\mathbb{E}_{s\sim\mathcal{N}(0,\mathrm{I})}[f(x+\Sigma^{\frac{1}{2}}s)\,u^{\top}s]=\mathbb{E}_{v\sim\mathcal{N}(0,\Sigma)}[f(x+v)\,u^{\top}\Sigma^{-\frac{1}{2}}v].$$

We now need to find the optimal $f^* : \mathbb{R}^n \to [0,1]$ that satisfies $g_\Sigma(x) = \mathbb{E}_{v\sim\mathcal{N}(0,\Sigma)}[f(x+v)] = p$ while maximizing the left-hand side $\mathbb{E}_{v\sim\mathcal{N}(0,\Sigma)}[f(x+v)\,u^\top\Sigma^{-1/2}v]$. We argue that the maximizer is the following function:

$$f^{*}(x+v)=\mathbb{1}\left\{u^{\top}\Sigma^{-\frac{1}{2}}v\geq-\Phi^{-1}(p)\right\}.$$

To prove that $f^*$ is indeed the optimal maximizer, we first show feasibility. (i): It is clear that $f^* : \mathbb{R}^n \to [0,1]$. (ii): Note that:

$$\mathbb{E}_{v\sim\mathcal{N}(0,\Sigma)}\left[\mathbb{1}\left\{u^{\top}\Sigma^{-\frac{1}{2}}v\geq-\Phi^{-1}(p)\right\}\right]=\mathbb{P}_{x\sim\mathcal{N}(0,1)}(x\geq-\Phi^{-1}(p))=1-\Phi(-\Phi^{-1}(p))=p.$$

To show the optimality of $f^*$, we show that it attains the upper bound:

$$\mathbb{E}_{v\sim\mathcal{N}(0,\Sigma)}\Big[u^{\top}\Sigma^{-\frac{1}{2}}v\,\mathbb{1}\left\{u^{\top}\Sigma^{-\frac{1}{2}}v\geq-\Phi^{-1}(p)\right\}\Big]=\mathbb{E}_{x\sim\mathcal{N}(0,1)}\Big[x\,\mathbb{1}\left\{x\geq-\Phi^{-1}(p)\right\}\Big]=\frac{1}{\sqrt{2\pi}}\int_{-\Phi^{-1}(p)}^{\infty}x\exp\left(-\frac{1}{2}x^{2}\right)dx=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}(\Phi^{-1}(p))^{2}\right),$$

obtaining the bound from Equation 2 and thus completing the proof.

**Proposition 3** (restatement). *Consider $g_\Lambda(x) = \mathbb{E}_{\epsilon\sim\mathcal{U}[-1,1]^n}[f(x+\Lambda\epsilon)]$. The classifier $g^i_\Lambda$ is $1/2$-Lipschitz (i.e. $L = 1/2$) under the $\|\Lambda x\|_\infty$ norm, for all $i$.*

*Proof.* We begin by observing that the dual norm of $\|x\|_{\Lambda,1} = \|\Lambda^{-1}x\|_1$ is $\|x\|_* = \|\Lambda x\|_\infty$, since:

$$\operatorname*{max}_{\|\Lambda^{-1}x\|_{1}\leq1}x^{\top}y=\operatorname*{max}_{\|z\|_{1}\leq1}y^{\top}\Lambda z=\|\Lambda y\|_{\infty}.$$

Without loss of generality, we analyze $\partial g^i/\partial x_1$. Let $\hat{x} = [x_2, \ldots, x_n] \in \mathbb{R}^{n-1}$; then:

$$\lambda_{1}\frac{\partial g^{i}}{\partial x_{1}}=\frac{\lambda_{1}}{2^{n}}\frac{\partial}{\partial x_{1}}\int_{[-1,1]^{n-1}}\int_{-1}^{1}f^{i}(x_{1}+\lambda_{1}\epsilon_{1},\hat{x}+\hat{\Lambda}\hat{\epsilon})\,d\epsilon_{1}\,d^{n-1}\hat{\epsilon}=\frac{1}{2^{n}}\int_{[-1,1]^{n-1}}\left(f^{i}(x_{1}+\lambda_{1},\hat{x}+\hat{\Lambda}\hat{\epsilon})-f^{i}(x_{1}-\lambda_{1},\hat{x}+\hat{\Lambda}\hat{\epsilon})\right)d^{n-1}\hat{\epsilon}.$$

Thus,

$$\left|\lambda_{1}\frac{\partial g^{i}}{\partial x_{1}}\right|\leq\frac{1}{2^{n}}\int_{[-1,1]^{n-1}}\left|f^{i}(x_{1}+\lambda_{1},\hat{x}+\hat{\Lambda}\hat{\epsilon})-f^{i}(x_{1}-\lambda_{1},\hat{x}+\hat{\Lambda}\hat{\epsilon})\right|d^{n-1}\hat{\epsilon}\leq\frac{1}{2}.$$
The second equality follows by the change of variables $t = x_1 + \lambda_1\epsilon_1$ and the Leibniz rule, and the final bound follows since $f^i \in [0,1]$ and the integration domain has volume $2^{n-1}$. Following a symmetric argument, $\lambda_j\,|\partial g^i/\partial x_j| \le 1/2$ for all $j$, resulting in $\|\Lambda\nabla g^i(x)\|_\infty = \max_j \lambda_j\,|\partial g^i/\partial x_j| \le 1/2$ for all $i$, concluding the proof.

**Proposition 4** (restatement). *$\mathcal{V}\big(\{\delta : \|\Lambda^{-1}\delta\|_1 \le r\}\big) = \frac{(2r)^n}{n!}\prod_i \lambda_i$.*

*Proof.* Take $A = (r\Lambda)^{-1} = \mathrm{diag}(1/(r\lambda_1), \ldots, 1/(r\lambda_n)) = \mathrm{diag}(a_1, \ldots, a_n)$. We can re-write the region as $\{x : \sum_i a_i|x_i| \le 1\}$, from which it is clear that this region is an origin-centered, axis-aligned cross-polytope with the set of vertices $V = \{\pm e_i/a_i\}_{i=1}^n$, where $e_i$ is the $i$-th standard basis vector. Define the sets of vertices $V^t = V \setminus \{-e_n/a_n\}$ and $V^b = V \setminus \{e_n/a_n\}$. Given the symmetry around the origin, each of these sets defines an $n$-dimensional *hyperpyramid* with a shared *base* $\mathcal{B}_{n-1}$ given by the $(n-1)$-dimensional hyperplane defined by all vertices where $x_n = 0$, and an *apex* at the vertex $e_n/a_n$ (or $-e_n/a_n$ in the case of $V^b$). The volume of each of these $n$-dimensional hyperpyramids is given by $\mathcal{V}(\mathcal{B}_{n-1})/(n a_n)$ (Kendall, 2004), yielding a total volume of $\mathcal{V}_n = \frac{2}{n a_n}\mathcal{V}(\mathcal{B}_{n-1})$. The same argument can be applied to compute $\mathcal{V}(\mathcal{B}_{n-1})$, which is a union of two $(n-1)$-dimensional hyperpyramids. This forms a recursion that completes the proof.

*Proof (alternative).* We consider the case where $\Lambda^{-1}$ is a general positive definite matrix that is not necessarily diagonal. Note that $\mathcal{V}\big(\{\delta : \|\Lambda^{-1}\delta\|_1 \le r\}\big) = \mathcal{V}\big(\{\delta : \|(r\Lambda)^{-1}\delta\|_1 \le 1\}\big) = r^n|\Lambda|\,\mathcal{V}\big(\{\delta : \|\delta\|_1 \le 1\}\big)$, where $|\cdot|$ denotes the determinant. The last equality follows from the behavior of volumes under a linear map, noting that $\{\delta : \|(r\Lambda)^{-1}\delta\|_1 \le 1\} = \{r\Lambda\delta : \|\delta\|_1 \le 1\}$. At last, $\{\delta : \|\delta\|_1 \le 1\}$ can be expressed as the disjoint union of $2^n$ simplexes, each of volume $1/n!$, so $\mathcal{V}\big(\{\delta : \|\Lambda^{-1}\delta\|_1 \le r\}\big) = \frac{(2r)^n}{n!}|\Lambda|$, completing the proof.

For completeness, we supplement the previous result with bounds on the volume that may be useful for future readers.

**Proposition 5.** *For any positive definite $\Lambda^{-1} \in \mathbb{R}^{n\times n}$, we have the following:*

$$\left(\frac{2r}{n}\right)^{n}\mathcal{V}\left(\mathcal{Z}(\Lambda)\right)\leq\mathcal{V}\Big(\{\delta:\|\Lambda^{-1}\delta\|_{1}\leq r\}\Big)\leq(2r)^{n}\mathcal{V}\left(\mathcal{Z}(\Lambda)\right),$$

*where $\mathcal{V}(\mathcal{Z}(\Lambda)) = \sqrt{|\Lambda^\top\Lambda|}$ is the volume of the zonotope with generator matrix $\Lambda$.*

*Proof.* Let $S_1 = \{\delta : \|\Lambda^{-1}\delta\|_1 \le r\}$, $S_\infty = \{\delta : \|\Lambda^{-1}\delta\|_\infty \le r\}$ and $S^n_\infty = \{\delta : n\|\Lambda^{-1}\delta\|_\infty \le r\}$. Since $\|\Lambda^{-1}\delta\|_\infty \le \|\Lambda^{-1}\delta\|_1 \le n\|\Lambda^{-1}\delta\|_\infty$, we have $S_\infty \supseteq S_1 \supseteq S^n_\infty$, and therefore $\mathcal{V}(S_\infty) \ge \mathcal{V}(S_1) \ge \mathcal{V}(S^n_\infty)$. At last, note that $S^n_\infty = \{\frac{r}{n}\Lambda\delta : \|\delta\|_\infty \le 1\}$, and with the change of variables $\delta = 2u - \mathbf{1}_n$, where $\mathbf{1}_n$ is the vector of all ones, we have $S^n_\infty = \mathcal{Z}\left(\frac{2r}{n}\Lambda\right) \oplus \{-\frac{r}{n}\Lambda\mathbf{1}_n\}$, where $\oplus$ is a Minkowski sum and $\frac{r}{n}\Lambda\mathbf{1}_n$ is a single point in $\mathbb{R}^n$. Therefore, $\mathcal{V}\left(\mathcal{Z}\left(\frac{2r}{n}\Lambda\right) \oplus \{-\frac{r}{n}\Lambda\mathbf{1}_n\}\right) = \left(\frac{2r}{n}\right)^n\mathcal{V}(\mathcal{Z}(\Lambda))$. The upper bound follows with a similar argument, completing the proof.
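As a quick numeric sanity check of the volume formula in Proposition 4 (a sketch; the helper name is ours):

```python
import math
import numpy as np

def cross_polytope_volume(r: float, lambdas: np.ndarray) -> float:
    # V({delta : ||Lambda^{-1} delta||_1 <= r}) = (2r)^n / n! * prod_i lambda_i
    n = len(lambdas)
    return (2 * r) ** n / math.factorial(n) * float(np.prod(lambdas))

# n = 2 with Lambda = I: the l1-ball of radius r is a square of diagonal 2r,
# i.e. area 2 r^2; the formula gives (2r)^2 / 2! = 2 r^2 as expected
assert abs(cross_polytope_volume(1.0, np.array([1.0, 1.0])) - 2.0) < 1e-12
```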
## B.1 Certification Under Gaussian Mixture Smoothing Distribution

We consider a general, $K$-component, zero-mean Gaussian mixture smoothing distribution $\mathcal{G}$ such that:

$$\mathcal{G}(\{\alpha_{i},\Sigma_{i}\}_{i=1}^{K}):=\sum_{i=1}^{K}\alpha_{i}\mathcal{N}(0,\Sigma_{i}),\quad\text{s.t.}\quad\sum_{i}\alpha_{i}=1,\;0<\alpha_{i}\leq1.\tag{3}$$

Given $f$ and as per the recipe described in Section 4, we are interested in the Lipschitz constant of the smooth classifier $g_{\mathcal{G}}(x) = (f * \mathcal{G})(x) = \sum_{i=1}^K \alpha_i (f * \mathcal{N}(0,\Sigma_i))(x) = \sum_i \alpha_i g_{\Sigma_i}(x)$, where $g_{\Sigma_i}$ is defined as in the Gaussian case. Note the weaker bound, when compared to Proposition 2, for each of the Gaussian components presented in the following proposition.

**Proposition 6.** *$g_\Sigma$ is $\sqrt{2/\pi}$-Lipschitz under the $\|\cdot\|_{\Sigma^{-1},2}$ norm.*

*Proof.* Following a similar argument to the proof of Proposition 2, we get:

$$u^{\top}\Sigma^{\frac{1}{2}}\nabla g_{\Sigma}(x)\leq\frac{1}{(2\pi)^{n/2}\sqrt{|\Sigma|}}\int_{\mathbb{R}^{n}}\left|u^{\top}\Sigma^{-\frac{1}{2}}(t-x)\right|\exp\left(-\frac{1}{2}(x-t)^{\top}\Sigma^{-1}(x-t)\right)d^{n}t=\mathbb{E}_{s\sim\mathcal{N}(0,\mathbf{I})}\left[\left|u^{\top}s\right|\right]=\mathbb{E}_{v\sim\mathcal{N}(0,1)}\left[\left|v\right|\right]=\sqrt{2/\pi}.$$

With Proposition 6, we obtain a Lipschitz constant for a Gaussian mixture smoothing distribution as follows.

**Proposition 7.** *$g_{\mathcal{G}}$ is $\sqrt{\pi/2}$-Lipschitz under the $\|\delta\|_{\mathcal{B}^{-1},2}$ norm, where $\mathcal{B}^{-1} = \sum_{i=1}^K \alpha_i\Sigma_i^{-1}$.*

*Proof.*

$$|g_{\mathcal{G}}(x+\delta)-g_{\mathcal{G}}(x)|\leq\sum_{i}\alpha_{i}|g_{\Sigma_{i}}(x+\delta)-g_{\Sigma_{i}}(x)|\leq\sqrt{\frac{\pi}{2}}\sum_{i}\alpha_{i}\|\delta\|_{\Sigma_{i},2}\leq\sqrt{\frac{\pi}{2}}\sqrt{\delta^{\top}\left(\sum_{i}\alpha_{i}\Sigma_{i}^{-1}\right)\delta}=\sqrt{\frac{\pi}{2}}\|\delta\|_{\mathcal{B},2},$$

obtained by first applying the triangle inequality, then Proposition 6, followed by Jensen's inequality. Combining Proposition 7 and Theorem 1 yields the following certificate.

**Corollary 3.** *Let $c_A = \arg\max_i g^i_{\mathcal{G}}(x)$; then $\arg\max_i g^i_{\mathcal{G}}(x+\delta) = c_A$ for all $\delta$ satisfying:*

$$\|\delta\|_{\mathcal{B},2}\leq\frac{1}{\sqrt{2\pi}}\left(g_{\mathcal{G}}^{c_{A}}(x)-\operatorname*{max}_{c}g_{\mathcal{G}}^{c\neq c_{A}}(x)\right),$$

*where $\mathcal{B}^{-1} = \sum_{i=1}^K \alpha_i\Sigma_i^{-1}$.*

## C AnCer Optimization

In this section we detail the implementation choices required to solve Equation 1. For ease of presentation, we restate the AnCer optimization problem (with $\Theta^x = \mathrm{diag}(\{\theta^x_i\}_{i=1}^n)$):

$$\operatorname*{arg\,max}_{\Theta^{x}}\;\;r^{p}\left(x,\Theta^{x}\right)\sqrt[n]{\prod_{i}\theta_{i}^{x}}\qquad\mathrm{s.t.}\quad\operatorname*{min}_{i}\,\theta_{i}^{x}\,r^{p}\left(x,\Theta^{x}\right)\geq r_{\mathrm{iso}}^{*},$$

where $r^p(x, \Theta^x)$ is the gap value under the anisotropic smoothing distribution, and $r^*_{\mathrm{iso}}$ is the optimal isotropic radius, *i.e.* $\bar{\theta}^x r^p(x, \bar{\theta}^x)$ for $\bar{\theta}^x \in \mathbb{R}^+$. This is a nonlinear constrained optimization problem that is challenging to solve. As such, we relax it and solve instead:

$$\operatorname*{arg\,max}_{\Theta^{x}}\quad r^{p}\left(x,\Theta^{x}\right)\sqrt[n]{\prod_{i}\theta_{i}^{x}}+\kappa\operatorname*{min}_{i}\,\theta_{i}^{x}\,r^{p}\left(x,\Theta^{x}\right)\quad\mathrm{s.t.}\quad\theta_{i}^{x}\geq\bar{\theta}^{x},$$

given a hyperparameter $\kappa \in \mathbb{R}^+$. While the constraint $\theta^x_i \ge \bar{\theta}^x$ is not explicitly required to enforce the superset condition over the isotropic case, it proved beneficial from an empirical perspective. To sample from the distribution parameterized by $\Theta^x$ (in our case, either a Gaussian or a Uniform), we make use of the reparameterization trick, as in Alfarra et al. (2022). The solution of this optimization problem can be found iteratively by performing projected gradient ascent. A standalone implementation of the AnCer optimization stage is presented in Listing 1, whereas the full code integrated in our code base is available as supplementary material. To perform certification, we simply feed the output of this optimization to the certification procedure from Cohen et al. (2019).
```python
import numpy as np
import torch
from scipy.stats import norm  # needed by L2Certificate.compute_gap
from torch.autograd import Variable
from torch.distributions.normal import Normal


class Certificate:
    """Interface for norm-specific certification helpers."""

    def compute_proxy_gap(self, logits: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError

    def sample_noise(self, batch: torch.Tensor, repeated_theta: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError

    def compute_gap(self, pABar: float) -> float:
        raise NotImplementedError


class L2Certificate(Certificate):
    """Gaussian smoothing, certifying l2 / l2-Sigma regions."""

    def __init__(self, batch_size: int, device: str = "cuda:0"):
        self.m = Normal(torch.zeros(batch_size).to(device),
                        torch.ones(batch_size).to(device))
        self.device = device
        self.norm = "l2"

    def compute_proxy_gap(self, logits: torch.Tensor) -> torch.Tensor:
        return self.m.icdf(logits[:, 0].clamp_(0.001, 0.999)) - \
            self.m.icdf(logits[:, 1].clamp_(0.001, 0.999))

    def sample_noise(self, batch: torch.Tensor, repeated_theta: torch.Tensor) -> torch.Tensor:
        return torch.randn_like(batch, device=self.device) * repeated_theta

    def compute_gap(self, pABar: float) -> float:
        return norm.ppf(pABar)


class L1Certificate(Certificate):
    """Uniform smoothing, certifying l1 / l1-Lambda regions."""

    def __init__(self, device: str = "cuda:0"):
        self.device = device
        self.norm = "l1"

    def compute_proxy_gap(self, logits: torch.Tensor) -> torch.Tensor:
        return logits[:, 0] - logits[:, 1]

    def sample_noise(self, batch: torch.Tensor, repeated_theta: torch.Tensor) -> torch.Tensor:
        return 2 * (torch.rand_like(batch, device=self.device) - 0.5) * repeated_theta

    def compute_gap(self, pABar: float) -> float:
        return 2 * (pABar - 0.5)


def ancer_optimization(model: torch.nn.Module, batch: torch.Tensor,
                       certificate: Certificate, learning_rate: float,
                       isotropic_theta: torch.Tensor, iterations: int,
                       samples: int, kappa: float, device: str = "cuda:0"):
    """Optimize batch using AnCer, assuming an isotropic initialization point.

    Args:
        model: trained network
        batch: inputs to certify around
        certificate: instance of the desired certification object
        learning_rate: optimization learning rate for AnCer
        isotropic_theta: initial isotropic value per input in batch
            (assumed to already live on `device`)
        iterations: number of iterations to run the optimization
        samples: number of samples per input and iteration
        kappa: relaxation hyperparameter
    """
    batch_size = batch.shape[0]
    img_size = np.prod(batch.shape[1:])

    # define a variable, the optimizer, and the initial sigma values
    theta = Variable(isotropic_theta, requires_grad=True).to(device)
    optimizer = torch.optim.Adam([theta], lr=learning_rate)
    initial_theta = theta.detach().clone()

    # reshape vectors to have `samples` copies of each input in the batch
    new_shape = [batch_size * samples]
    new_shape.extend(batch[0].shape)
    new_batch = batch.repeat((1, samples, 1, 1)).view(new_shape)

    # solve iteratively by projected gradient ascent
    for _ in range(iterations):
        optimizer.zero_grad()  # added: avoid accumulating gradients across iterations
        theta_repeated = theta.repeat(1, samples, 1, 1).view(new_shape)

        # reparameterization trick
        noise = certificate.sample_noise(new_batch, theta_repeated)
        out = model(new_batch + noise).reshape(batch_size, samples, -1).mean(dim=1)

        vals, _ = torch.topk(out, 2)
        gap = certificate.compute_proxy_gap(vals)

        prod = torch.prod(theta.reshape(batch_size, -1) ** (1 / img_size), dim=1)
        proxy_radius = prod * gap

        # maximize the relaxed objective: proxy radius + kappa * min_i theta_i * gap
        radius_maximizer = -(
            proxy_radius.sum() +
            kappa * (torch.min(theta.view(batch_size, -1), dim=1).values * gap).sum()
        )
        radius_maximizer.backward()
        optimizer.step()

        # project back onto the constraint theta_i >= initial (isotropic) theta
        with torch.no_grad():
            torch.max(theta, initial_theta, out=theta)

    return theta
```

Listing 1: Python implementation of the AnCer optimization routine using PyTorch (Paszke et al., 2019).
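A hypothetical usage of the routine above (the model, batch and several hyperparameter values here are placeholders, not the paper's tuned settings; the paper uses 100 iterations and 100 samples per point, while the learning rate and κ shown are assumptions):

```python
import torch

# toy stand-ins for a trained classifier and a batch of CIFAR-10-sized inputs
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10)).to("cuda:0")
batch = torch.rand(8, 3, 32, 32, device="cuda:0")
sigma_iso = 0.5 * torch.ones_like(batch)   # Isotropic DD initialization

certificate = L2Certificate(batch_size=batch.shape[0])
theta = ancer_optimization(
    model, batch, certificate,
    learning_rate=0.04,        # hypothetical value
    isotropic_theta=sigma_iso,
    iterations=100, samples=100,
    kappa=2.0,                 # hypothetical relaxation weight
)
```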
## D Memory-Based Certification For AnCer

To guarantee the soundness of the AnCer classifier, we use an adapted version of the data-dependent memory-based solution presented in Alfarra et al. (2022). The modified algorithm involves a post-processing certification step that obtains adjusted certification statistics based on the memory procedure from Alfarra et al. (2022) (see the original paper for more details). We present the version of this post-processing memory-based step adapted to AnCer in Algorithm 2.

```
Algorithm 2: Memory-Based Certification
Input: input point x_{N+1}, certified region R_{N+1}, prediction C_{N+1}, and memory M
Result: prediction for x_{N+1} and a certified region at x_{N+1} that does not
        intersect any certified region in M
for (x_i, C_i, R_i) in M do
    if C_{N+1} != C_i then
        if x_{N+1} in R_i then
            return Abstain, 0
        else if MaxIntersect(R_{N+1}, R_i) and Intersect(R_{N+1}, R_i) then
            R'_{N+1} = LargestOutSubset(R_i, R_{N+1})
            R_{N+1} <- R'_{N+1}
end
add (x_{N+1}, C_{N+1}, R_{N+1}) to M
return C_{N+1}, R_{N+1}
```

Note that the proposed certified region $R_{N+1}$ emerges from our certification bounds presented in Sections 4.1 and 4.2. There are a few differences between our proposed Algorithm 2 and the original variant presented in Alfarra et al. (2022). The first is that we remove the computation of the largest certifiable subset of a certified region $R_{N+1}$ when there exists an $i$ such that $x_{N+1} \in R_i$ with a different class prediction (LargestInSubset in Alfarra et al. (2022)), due to the complexity of that operation in the anisotropic case; as an example, it is generally difficult to find the largest-volume ellipsoid contained in another ellipsoid. Due to this complexity, we choose to simply abstain instead. Given the high dimensionality of the data, we empirically never encountered a certificate in this situation within our experiments. Further, to ease the computational burden of the Intersect function, we introduce the function MaxIntersect, which is run first and checks whether the $\ell_p$-ball over-approximation of the region $R_{N+1}$ intersects an $\ell_p$-ball over-approximation of $R_i$. This is sound since, when the $\ell_p$-ball over-approximations of the anisotropic regions $R_{N+1}$ and $R_i$ do not intersect, then $R_{N+1}$ and $R_i$ do not intersect either. Only in cases in which those over-approximations intersect do we run the more expensive Intersect procedure. We present practical implementations of MaxIntersect, Intersect and LargestOutSubset for the ellipsoids and generalized cross-polytopes considered in this paper.

## D.1 Implementing MaxIntersect(RA, RB) In The Ellipsoid And Generalized Cross-Polytope Cases

Given the two regions $\mathcal{R}_A$ and $\mathcal{R}_B$, consider $\ell_p$-ball approximations of those regions, $\mathcal{R}_{\tilde{A}} = \{x \in \mathbb{R}^n : \|x - a\|_p \le r_a\}$ and $\mathcal{R}_{\tilde{B}} = \{x \in \mathbb{R}^n : \|x - b\|_p \le r_b\}$, such that $\mathcal{R}_A \subseteq \mathcal{R}_{\tilde{A}}$ and $\mathcal{R}_B \subseteq \mathcal{R}_{\tilde{B}}$.

**Lemma 1.** *If $\|a - b\|_p > r_a + r_b$, then $\mathcal{R}_A \cap \mathcal{R}_B = \emptyset$.*

*Proof.* For the sake of contradiction, let $\|a - b\|_p > r_a + r_b$ and suppose there exists $x \in \mathcal{R}_{\tilde{A}} \cap \mathcal{R}_{\tilde{B}}$. Then we have $\|x - a\|_p \le r_a$ and $\|x - b\|_p \le r_b$. However, $r_a + r_b < \|a - b\|_p \le \|x - a\|_p + \|x - b\|_p \le r_a + r_b$, forming a contradiction. Thus $\mathcal{R}_{\tilde{A}} \cap \mathcal{R}_{\tilde{B}} = \emptyset$, which in turn implies $\mathcal{R}_A \cap \mathcal{R}_B = \emptyset$ since $\mathcal{R}_A$ and $\mathcal{R}_B$ are subsets of $\mathcal{R}_{\tilde{A}}$ and $\mathcal{R}_{\tilde{B}}$, respectively.

This forms a fast maximum-intersection check for ellipsoids, *i.e.* $p = 2$, and generalized cross-polytopes, *i.e.* $p = 1$: the MaxIntersect function returns False if $\|a - b\|_p > r_a + r_b$, and True otherwise.
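A minimal sketch of this check (function name ours), where `a` and `b` are the region centers and `ra`, `rb` the radii of their $\ell_p$-ball over-approximations:

```python
import numpy as np

def max_intersect(a: np.ndarray, ra: float, b: np.ndarray, rb: float, p: int) -> bool:
    """Conservative overlap test from Lemma 1: returns False only when the
    l_p-ball over-approximations (and hence the regions) cannot intersect."""
    return bool(np.linalg.norm(a - b, ord=p) <= ra + rb)
```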
## D.2 Implementing Intersect(RA, RB) In The Ellipsoid Case

The problem of efficiently checking whether two ellipsoids intersect is not trivial. We rely on the work of Ros et al. (2002) and Gilitschenski & Hanebeck (2012), supplying proofs missing from Gilitschenski & Hanebeck (2012) for completeness.

**Lemma 2.** *Let $\mathcal{R}_A = \{x \in \mathbb{R}^n : (x-a)^\top\mathbf{A}(x-a) \le 1\}$ and $\mathcal{R}_B = \{x \in \mathbb{R}^n : (x-b)^\top\mathbf{B}(x-b) \le 1\}$ define two ellipsoids centered at $a$ and $b$, respectively. We have that $\mathcal{R} = \{x : t(x-a)^\top\mathbf{A}(x-a) + (1-t)(x-b)^\top\mathbf{B}(x-b) \le 1\}$, for any $t \in [0,1]$, satisfies $\mathcal{R}_A \cap \mathcal{R}_B \subseteq \mathcal{R} \subseteq \mathcal{R}_A \cup \mathcal{R}_B$.*

*Proof.* By considering the convex combination of the left-hand sides of the inequalities defining the regions $\mathcal{R}_A$ and $\mathcal{R}_B$, it is clear that $x \in \mathcal{R}_A \cap \mathcal{R}_B \implies x \in \mathcal{R}$, giving the left inclusion. As for the right inclusion, it suffices to show that if $x \notin \mathcal{R}_A$ and $x \in \mathcal{R}$ then $x \in \mathcal{R}_B$ and, similarly, that if $x \notin \mathcal{R}_B$ and $x \in \mathcal{R}$ then $x \in \mathcal{R}_A$. We show the first case, since the second follows by symmetry. Without loss of generality, we assume that $a = b = 0_n$. Let $x$ be such that $x^\top\mathbf{A}x > 1$ and $t\,x^\top\mathbf{A}x + (1-t)\,x^\top\mathbf{B}x \le 1$, i.e. $x \notin \mathcal{R}_A$ and $x \in \mathcal{R}$. Then $(1-t)\,x^\top\mathbf{B}x \le 1 - t\,x^\top\mathbf{A}x \le 1 - t$, since $x^\top\mathbf{A}x > 1$, which implies $x^\top\mathbf{B}x \le 1$, i.e. $x \in \mathcal{R}_B$.

Note that taking radius 1 in the previous result is without loss of generality, as any radius can be absorbed into $\mathbf{A}$ and $\mathbf{B}$. As the following lemma was stated by Gilitschenski & Hanebeck (2012) without proof, we complement it below for completeness.

**Lemma 3.** *The set $\mathcal{R}$ is equivalent to the following ellipsoid $\mathcal{R} = \{x : (x-m)^\top\mathbf{E}_t(x-m) \le K(t)\}$, where $\mathbf{E}_t = t\mathbf{A} + (1-t)\mathbf{B}$, $m = \mathbf{E}_t^{-1}(t\mathbf{A}a + (1-t)\mathbf{B}b)$, and $K(t) = 1 - ta^\top\mathbf{A}a - (1-t)b^\top\mathbf{B}b + m^\top\mathbf{E}_t m$.*

*Proof.*

$$\begin{array}{l}{{t(x-a)^{\top}{\bf A}(x-a)+(1-t)(x-b)^{\top}{\bf B}(x-b)\leq1}}\\ {{\Leftrightarrow x^{\top}\underbrace{(t{\bf A}+(1-t){\bf B})}_{{\bf E}_{t}}x-2x^{\top}\underbrace{(t{\bf A}a+(1-t){\bf B}b)}_{{\bf E}_{t}m}\leq1-ta^{\top}{\bf A}a-(1-t)b^{\top}{\bf B}b}}\\ {{\Leftrightarrow(x-m)^{\top}{\bf E}_{t}(x-m)\leq1-ta^{\top}{\bf A}a-(1-t)b^{\top}{\bf B}b+m^{\top}{\bf E}_{t}m.}}\end{array}$$

The last equivalence follows by adding and subtracting $m^\top\mathbf{E}_t m$, concluding the proof.

**Proposition 8.** *The set of points satisfying $\mathcal{R}$ for $t \in (0,1)$ is either an empty set, a single point, or the ellipsoid $\mathcal{R}$.*

*Proof.* We first observe that since $\mathbf{A}$ and $\mathbf{B}$ are positive definite, $\mathbf{E}_t$ is positive definite. Then observe that for a choice of $t \in (0,1)$ such that $K(t) < 0$, the set $\mathcal{R}$ is empty, and since $\mathcal{R} \supseteq \mathcal{R}_A \cap \mathcal{R}_B$, the two sets do not intersect. If $K(t) = 0$, the only point satisfying $\mathcal{R}$ is the center $m$ and, following a similar argument, the two ellipsoids intersect in at most a point. At last, for a choice of $t$ such that $K(t) > 0$, $\mathcal{R}$ defines an ellipsoid.

As per Proposition 8, it suffices to find some $t \in [0,1]$ under which $K(t) < 0$ to guarantee that the ellipsoids do not intersect. To that end, we solve the convex optimization problem $t^* = \arg\min_{t\in[0,1]} K(t)$ and check whether $K(t^*) < 0$. Moreover, as shown by Ros et al. (2002) and Gilitschenski & Hanebeck (2012), $K(t)$ is convex on the domain $t \in (0,1)$.
With several algebraic manipulations, one can show that $K(t)$ has the following equivalent forms:

$$\begin{array}{l}{{K(t)=1-ta^{\top}{\bf A}a-(1-t)b^{\top}{\bf B}b+m^{\top}{\bf E}_{t}m}}\\ {{K(t)=1-t(1-t)(b-a)^{\top}{\bf B}{\bf E}_{t}^{-1}{\bf A}(b-a)}}\\ {{K(t)=1-(b-a)^{\top}\left(\frac{1}{1-t}{\bf B}^{-1}+\frac{1}{t}{\bf A}^{-1}\right)^{-1}(b-a).}}\end{array}$$

Observe that for AnCer, both $\mathbf{A}$ and $\mathbf{B}$ are diagonal, with diagonal elements $\{\mathbf{A}_{ii}\}_{i=1}^n$ and $\{\mathbf{B}_{ii}\}_{i=1}^n$ respectively, resulting in the following simple form for $K(t)$:

$$K(t)=1-\sum_{i=1}^{n}(b_{i}-a_{i})^{2}\frac{t(1-t)\mathbf{A}_{ii}\mathbf{B}_{ii}}{t\mathbf{A}_{ii}+(1-t)\mathbf{B}_{ii}}.$$

The Intersect function in the ellipsoid case returns False if there exists a $t \in (0,1)$ such that $K(t) < 0$, *i.e.* the ellipsoids do not intersect, and True otherwise.

## D.3 Implementing Intersect(RA, RB) In The Generalized Cross-Polytope Case

Let $\mathcal{R}_A$ and $\mathcal{R}_B$ be two generalized cross-polytopes $\mathcal{R}_A = \{x \in \mathbb{R}^n : \|\mathbf{A}(x-a)\|_1 \le 1\}$ and $\mathcal{R}_B = \{x \in \mathbb{R}^n : \|\mathbf{B}(x-b)\|_1 \le 1\}$, where $\mathbf{A}$ and $\mathbf{B}$ are positive definite diagonal matrices with elements $\{\mathbf{A}_{ii}\}_{i=1}^n$ and $\{\mathbf{B}_{ii}\}_{i=1}^n$, respectively. We are interested in deciding whether $\mathcal{R}_A$ and $\mathcal{R}_B$ intersect. However, given the conservative context in which Intersect is used in Algorithm 2, we only need to make sure that the function returns False only if it is guaranteed that $\mathcal{R}_A \cap \mathcal{R}_B = \emptyset$. As such, we are able to simplify the complex problem of generalized cross-polytope intersection to the much simpler one of ellipsoid over-approximation intersection. We do this by considering the over-approximating, *i.e.* superset, ellipsoids $\mathcal{R}_{\tilde{A}} = \{x \in \mathbb{R}^n : \|\mathbf{A}(x-a)\|_2 \le 1\}$ and $\mathcal{R}_{\tilde{B}} = \{x \in \mathbb{R}^n : \|\mathbf{B}(x-b)\|_2 \le 1\}$, and performing the ellipsoid intersection check presented in Appendix D.2. If $\mathcal{R}_{\tilde{A}} \cap \mathcal{R}_{\tilde{B}} = \emptyset$, then this implies that $\mathcal{R}_A \cap \mathcal{R}_B = \emptyset$ and we can safely return False. Otherwise, we conservatively assume the generalized cross-polytopes intersect and return True, triggering the reduction procedure detailed in Appendix D.5.
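For the diagonal matrices used by AnCer, the ellipsoid check of Appendix D.2 that both cases rely on reduces to a one-dimensional minimization of $K(t)$; a sketch follows (function name ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ellipsoids_intersect(a, A_diag, b, B_diag) -> bool:
    """Intersection test for {(x-a)^T A (x-a) <= 1} and {(x-b)^T B (x-b) <= 1}
    with diagonal A, B, via the K(t) criterion of Gilitschenski & Hanebeck
    (2012). Returns False only when some t in (0, 1) gives K(t) < 0."""
    d2 = (b - a) ** 2

    def K(t):
        return 1.0 - np.sum(d2 * t * (1 - t) * A_diag * B_diag /
                            (t * A_diag + (1 - t) * B_diag))

    # K is convex on (0, 1), so a bounded scalar minimizer suffices
    res = minimize_scalar(K, bounds=(1e-9, 1 - 1e-9), method="bounded")
    return bool(res.fun >= 0)
```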
## D.4 Implementing LargestOutSubset(RA, RB) In The Ellipsoid Case

Given two ellipsoids $\mathcal{R}_A = \{x \in \mathbb{R}^n : (x-a)^\top\mathbf{A}(x-a) \le 1\}$ and $\mathcal{R}_B = \{x \in \mathbb{R}^n : (x-b)^\top\mathbf{B}(x-b) \le 1\}$ that do intersect, where $\mathbf{A}$ and $\mathbf{B}$ are positive definite diagonal matrices, the task is to find the largest possible ellipsoid $\mathcal{R}_{\tilde{B}}$ centered at $b$ such that $\mathcal{R}_{\tilde{B}} \subseteq \mathcal{R}_B$ and $\mathcal{R}_A \cap \mathcal{R}_{\tilde{B}} = \emptyset$. Finding a maximum ellipsoid that satisfies those conditions is not trivial, so instead we consider a maximum enclosed $\ell_2$-ball of $\mathcal{R}_B$, $\mathcal{R}_{\tilde{B}} = \{x \in \mathbb{R}^n : \|x - b\|_2 \le r\}$, that does not intersect $\mathcal{R}_A$. To obtain this ball, we project the center $b$ of $\mathcal{R}_B$ onto the ellipsoid $\mathcal{R}_A$. In particular, we formulate the problem as the projection of the vector $y = b - a$ onto an ellipsoid with the same shape as $\mathcal{R}_A$ centered at $0_n$. This is equivalent to solving the following optimization problem for a symmetric positive definite matrix $\mathbf{A}$:

$$\operatorname*{min}_{x}\frac{1}{2}\left\|x-y\right\|_{2}^{2}\qquad\mathrm{s.t.}\quad x^{\top}\mathbf{A}x\leq1.$$

Note that the objective function is convex, and the constraint forms a convex set. Forming the Lagrangian of this problem, we obtain:

$$\mathcal{L}(x,\lambda)=\frac{1}{2}\left\|x-y\right\|_{2}^{2}+\lambda\left(x^{\top}\mathbf{A}x-1\right),$$

where $\lambda > 0$. Therefore, the global optimal solution must satisfy the KKT conditions below:

$$\begin{array}{l}{{\frac{\partial{\mathcal{L}}}{\partial x}=0\;\to\;x^{*}=\left(2\lambda\mathbf{A}+I\right)^{-1}y,}}\\ {{\frac{\partial{\mathcal{L}}}{\partial\lambda}=0\;\to\;\underbrace{y^{\top}\left(2\lambda\mathbf{A}+I\right)^{-\top}\mathbf{A}\left(2\lambda\mathbf{A}+I\right)^{-1}y-1}_{f(\lambda)}=0.}}\end{array}$$

Thus, to project the vector $y$ onto the ellipsoid characterized by $\mathbf{A}$, one needs to solve the scalar equation $f(\lambda) = 0$ and then substitute back into the formula for $x^*$. Further, given $\mathbf{A} = \mathrm{diag}(\mathbf{A}_{11}, \ldots, \mathbf{A}_{nn})$, we can simplify the problem to:

$$f(\lambda)=\sum_{i=1}^{n}\frac{y_{i}^{2}\mathbf{A}_{ii}}{(1+2\lambda\mathbf{A}_{ii})^{2}}-1=0.$$

Once $x^*$ is obtained, we can define the maximum radius of the $\ell_2$-ball centered at $b$ that does not intersect $\mathcal{R}_A$ as:

$$r^{*}=\|(x^{*}+a)-b\|_{2}-\epsilon,$$

for an arbitrarily small $\epsilon$. Finally, we obtain $\mathcal{R}_{\tilde{B}}$ as the maximum ball contained within $\mathcal{R}_B$ that has a radius smaller than $r^*$, that is:

$$\mathcal{R}_{\tilde{B}}=\{x\in\mathbb{R}^{n}:\|x-b\|_{2}\leq\operatorname*{min}\{r^{*},\operatorname*{min}_{i}\mathbf{B}_{ii}\}\}.$$

Note that while choosing the radius of $\mathcal{R}_{\tilde{B}}$ to be $r^*$ guarantees that $\mathcal{R}_{\tilde{B}} \cap \mathcal{R}_A = \emptyset$, this does not guarantee that $\mathcal{R}_{\tilde{B}} \subseteq \mathcal{R}_B$. To guarantee both properties, we take the minimum of $r^*$ and $\min_i \mathbf{B}_{ii}$. This approach finds the solution to the projection of the point onto the ellipsoid $\{x \in \mathbb{R}^n : x^\top\mathbf{A}x \le 1\}$; it does not work in the case where $b \in \mathcal{R}_A$, since the problem would be trivially solved by setting $x^* = y$. Thus, our classifier must abstain in that situation.
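A sketch of this projection for the diagonal case (helper name ours; it assumes $y$ lies outside the ellipsoid, which is the case in which the projection is needed):

```python
import numpy as np
from scipy.optimize import brentq

def project_to_ellipsoid(y: np.ndarray, A_diag: np.ndarray) -> np.ndarray:
    """l2-projection of y (outside the region) onto {x : x^T A x <= 1},
    A diagonal, by solving f(lambda) = 0 from the KKT conditions."""
    def f(lam):
        return np.sum(y**2 * A_diag / (1 + 2 * lam * A_diag)**2) - 1.0

    # f is decreasing in lambda and f(0) > 0 when y is outside the ellipsoid,
    # so bracket the root with a geometrically grown upper bound
    hi = 1.0
    while f(hi) > 0:
        hi *= 2.0
    lam = brentq(f, 0.0, hi)
    return y / (1 + 2 * lam * A_diag)   # x* = (2 lambda A + I)^{-1} y
```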
## D.5 Implementing LargestOutSubset(RA, RB) In The Generalized Cross-Polytope Case

Let $\mathcal{R}_A$ and $\mathcal{R}_B$ be two generalized cross-polytopes $\mathcal{R}_A = \{x \in \mathbb{R}^n : \|\mathbf{A}(x-a)\|_1 \le 1\}$ and $\mathcal{R}_B = \{x \in \mathbb{R}^n : \|\mathbf{B}(x-b)\|_1 \le 1\}$, where $\mathbf{A}$ and $\mathbf{B}$ are positive definite diagonal matrices with elements $\{\mathbf{A}_{ii}\}_{i=1}^n$ and $\{\mathbf{B}_{ii}\}_{i=1}^n$, respectively. The task is to find the largest possible generalized cross-polytope $\mathcal{R}_{\tilde{B}}$ centered at $b$ such that $\mathcal{R}_{\tilde{B}} \subseteq \mathcal{R}_B$ and $\mathcal{R}_A \cap \mathcal{R}_{\tilde{B}} = \emptyset$. As with the ellipsoid case, solving this problem for a generalized cross-polytope is not trivial, so instead we consider a maximum enclosed cross-polytope (i.e., $\ell_1$-ball) $\mathcal{R}_{\tilde{B}} = \{x \in \mathbb{R}^n : \|x - b\|_1 \le r\}$ that does not intersect $\mathcal{R}_A$ and is a subset of $\mathcal{R}_B$. To obtain this $\ell_1$-ball, we project the center $b$ of $\mathcal{R}_B$ onto the generalized cross-polytope $\mathcal{R}_A$, in a similar fashion to the ellipsoid case in Appendix D.4. We formulate the problem as the projection of the vector $y = b - a$ onto the $0_n$-centered generalized cross-polytope $\{x \in \mathbb{R}^n : \|\mathbf{A}x\|_1 \le 1\}$.

**Lemma 4.** *Consider the hyperplane $H = \{x \in \mathbb{R}^n : w^\top x - k = 0\}$ and a point $y \in \mathbb{R}^n$. The $\ell_2$ projection of $y$ onto the hyperplane is the point $x^* = y - \frac{(w^\top y - k)w}{\|w\|_2^2}$.*

*Proof.* We define the projection problem in a similar fashion to the ellipsoid case:

$$\operatorname*{min}_{x}\frac{1}{2}\left\|x-y\right\|_{2}^{2}\qquad\mathrm{s.t.}\quad w^{\top}x-k=0,$$

and obtain the Lagrangian $\mathcal{L}(x,\lambda) = \frac{1}{2}\|x-y\|_2^2 + \lambda(w^\top x - k)$, from which the KKT conditions give $x^* = y - \lambda^* w$ and $\lambda^* = \frac{w^\top y - k}{\|w\|_2^2}$, thus obtaining $x^* = y - \frac{(w^\top y - k)w}{\|w\|_2^2}$.

While this formulation does not yield the closest point of the hyperplane when measured with the $\ell_1$ norm, the fact that $\|x - x^*\|_1 \ge \|x - x^*\|_2$ implies that the certified set obtained in the $\ell_1$ norm via this method is a subset of the $\ell_2$-ball of the minimum-projection point. Crucially, this $\ell_2$ projection has the advantage of having a closed-form solution, while an $\ell_1$ projection would require solving the problem with an iterative linear programming solver. As such, for the sake of computational complexity, we decided to use this projection, despite the sub-optimality of the result from the $\ell_1$ perspective; empirically, we have found this does not affect our results.

Since the set of vertices of the generalized cross-polytope $\{x \in \mathbb{R}^n : \|\mathbf{A}x\|_1 \le 1\}$ is given by $\{e_i/\mathbf{A}_{ii}, -e_i/\mathbf{A}_{ii}\}_{i=1}^n$, and considering the distance between the projections and the original $y$, the hyperplane that minimizes it is defined by the set of vertices $\{\mathrm{sign}(y_i)\,e_i/\mathbf{A}_{ii}\}_{i=1}^n$. By writing this as a system of $n$ equations, we obtain the hyperplane defined by $w = [-\mathrm{sign}(y_1)\mathbf{A}_{11}, \ldots, -\mathrm{sign}(y_n)\mathbf{A}_{nn}]$ and $k = 1$. Finally, after computing $x^*$ as per Lemma 4, we can define the maximum radius of the $\ell_1$-ball centered at $b$ that does not intersect $\mathcal{R}_A$ as:

$$r^{*}=\|(x^{*}+a)-b\|_{1}-\epsilon,$$

for an arbitrarily small $\epsilon$. Finally, and similar to the ellipsoid case, we obtain $\mathcal{R}_{\tilde{B}}$ as the maximum generalized cross-polytope contained within $\mathcal{R}_B$ that has a radius smaller than $r^*$, that is:

$$\mathcal{R}_{\tilde{B}}=\{x\in\mathbb{R}^{n}:\|x-b\|_{1}\leq\operatorname*{min}\{r^{*},\operatorname*{min}_{i}\mathbf{B}_{ii}\}\}.$$

Similar to before, to guarantee that the $\ell_1$-ball $\mathcal{R}_{\tilde{B}}$ is still a subset of $\mathcal{R}_B$, we take the minimum between $r^*$ and $\min_i \mathbf{B}_{ii}$ to be its radius. As with the ellipsoid case, this approach does not work in the case where $b \in \mathcal{R}_A$, since the assumption about the closest plane to $y$ would not hold. Thus, our classifier must abstain in that situation.

## E Experimental Setup

The experiments reported in the paper used the CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) datasets, and trained ResNet18, WideResNet40 and ResNet50 networks (He et al., 2016). Experiments used the typical data split for these datasets found in the PyTorch implementation (Paszke et al., 2019). The procedures to obtain the baseline networks used in the experiments are detailed in Appendices E.1 and E.2 for ellipsoids and generalized cross-polytopes, respectively. Source code to reproduce the AnCer optimization and certification results of this paper is available as supplementary material. CIFAR-10 is available here (url), under an MIT license; ImageNet is available here (url), with terms of access detailed in its Download page.

**Isotropic DD Optimization.** We used the available code of Alfarra et al. (2022) (the Data Dependent Randomized Smoothing source code, available here) to obtain the isotropic data-dependent smoothing parameters. To train our models from scratch, we used an adapted version of the code provided in the same repository.

**Certification.** Following Cohen et al. (2019); Salman et al. (2019a); Zhai et al. (2019); Yang et al. (2020); Alfarra et al. (2022), all results were certified with $N_0 = 100$ Monte Carlo samples for selection and $N = 100{,}000$ estimation samples, with a failure probability of $\alpha = 0.001$.

## E.1 Ellipsoid Certification Baseline Networks

In terms of ellipsoid certification, the baselines we considered were Cohen (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a) and MACER (Zhai et al., 2019), with source code for each available in the respective open-source repositories. In the CIFAR-10 experiments, we used a ResNet18 architecture instead of the ResNet110 used in Cohen et al. (2019); Salman et al. (2019a); Zhai et al. (2019), due to constraints on computational power. As such, we had to train each of the networks from scratch following the procedures available in the source code of each of the baselines.
We did so under our own framework, and the training scripts are available in the supplementary material. For the ImageNet experiments, we used the ResNet50 networks provided by each of the baselines in their respective open-source repositories. We trained the ResNet18 networks for 120 epochs, with a batch size of 256 and stochastic gradient descent with a learning rate of $10^{-2}$ and momentum of 0.9.

## E.2 Generalized Cross-Polytope Certification Baseline Networks

For the certification of generalized cross-polytopes, we considered RS4A (Yang et al., 2020), whose source code is also publicly available. As described in RS4A (Yang et al., 2020), we take $\lambda = \sigma/\sqrt{3}$ and report results as a function of σ for ease of comparison. As with the baseline, we ran experiments on CIFAR-10 with a WideResNet40 architecture, and on ImageNet with a ResNet50 (Yang et al., 2020). However, due to limited computational power, we were not able to run experiments on the wide range of distributional parameters the original work considers, *i.e.* σ ∈ {0.15, 0.25, 0.5, 0.75, 1.0, 1.125, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5} on CIFAR-10 and σ ∈ {0.25, 0.5, 0.75, 1.0, 1.125, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5} on ImageNet. Instead, matching the requirements of the ellipsoid section, we chose the subset σ ∈ {0.25, 0.5, 1.0} and performed our analysis at that level.

While the trained models are available in the source code of RS4A, we ran into several issues when attempting to use them, the most problematic being that the clean accuracy of those models was very low for both the WideResNet40 and ResNet50 networks. To avoid these issues, we trained the models from scratch using the stability training loss as presented in the source code of RS4A; all of these models achieved clean accuracy of over 70%. Following the procedures described in the original work, we trained the WideResNet40 models with the stability loss used in Yang et al. (2020) for 120 epochs, with a batch size of 128 and stochastic gradient descent with a learning rate of $10^{-2}$ and momentum of 0.9, along with a step learning-rate scheduler with γ = 0.1. For the ResNet50 networks on ImageNet, we trained from scratch with the stability loss for 90 epochs, with a learning rate of 0.1 that drops by a factor of 0.1 after every 30 epochs and a batch size of 256.

## F Superset Argument

The results we present in Section 7 support the argument that AnCer achieves, in general, a certificate that is a *superset* of the Fixed σ and Isotropic DD ones. To confirm this at the level of individual test set samples, we compare the $\ell_2$, $\ell_1$, $\ell_2^\Sigma$ and $\ell_1^\Lambda$ certification results across the different methods, and obtain the percentage of the test set in which AnCer performs at least as well as all other methods on each sample's certificate. Results of this analysis are presented in Tables 4 and 5. For most networks and datasets, we observe that AnCer achieves a larger $\ell_p$ certificate than the baselines on a significant portion of the dataset, showcasing the fact that it obtains a superset of the isotropic region per sample. This is further confirmed by the comparison of the anisotropic certificates, in which, for all trained networks except MACER on CIFAR-10, AnCer's certificate is superior in over 90% of the test set samples.
Table 4: Superset in top-1 $\ell_2$ and $\ell_2^\Sigma$ (percentage of test set samples, rounded to the nearest percent, in which AnCer's certificate is the best).

| | % AnCer $\ell_2$ is the best | % AnCer $\ell_2^\Sigma$ is the best |
|---|---|---|
| CIFAR-10: Cohen | 83 | 93 |
| CIFAR-10: SmoothAdv | 73 | 90 |
| CIFAR-10: MACER | 50 | 69 |
| ImageNet: Cohen | 94 | 96 |
| ImageNet: SmoothAdv | 90 | 93 |

Table 5: Superset in top-1 $\ell_1$ and $\ell_1^\Lambda$ (percentage of test set samples, rounded to the nearest percent, in which AnCer's certificate is the best).

| | % AnCer $\ell_1$ is the best | % AnCer $\ell_1^\Lambda$ is the best |
|---|---|---|
| CIFAR-10: RS4A | 100 | 100 |
| ImageNet: RS4A | 97 | 99 |

## G Experimental Results Per σ

## G.1 Certifying Ellipsoids: $\ell_2$ and $\ell_2^\Sigma$ Certification Results Per σ

In this section we report certified accuracy at various $\ell_2$ radii and $\ell_2^\Sigma$ proxy radii, following the metrics defined in Section 7, for each training method (Cohen (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a) and MACER (Zhai et al., 2019)), dataset (CIFAR-10 and ImageNet) and σ (σ ∈ {0.12, 0.25, 0.5, 1.0}). Figures 7 and 8 show certified accuracy at different $\ell_2$ radii for CIFAR-10 and ImageNet, respectively, whereas Figures 9 and 10 plot certified accuracy at different $\ell_2^\Sigma$ proxy radii for CIFAR-10 and ImageNet, respectively.

## G.2 Certifying Generalized Cross-Polytopes: $\ell_1$ and $\ell_1^\Lambda$ Certification Results Per σ

In this section we report certified accuracy at various $\ell_1$ radii and $\ell_1^\Lambda$ proxy radii, following the metrics defined in Section 7, for RS4A, per dataset (CIFAR-10 and ImageNet) and σ (σ ∈ {0.25, 0.5, 1.0}). Figures 11 and 12 show certified accuracy at different $\ell_1$ radii for CIFAR-10 and ImageNet, respectively, whereas Figures 13 and 14 plot certified accuracy at different $\ell_1^\Lambda$ proxy radii for CIFAR-10 and ImageNet, respectively.

## H Visual Comparison Of Parameters In Ellipsoid Certificates

Anisotropic certification allows for a better characterization of the decision boundaries of the base classifier $f$. For example, the directions aligned with the major axes of the ellipsoid $\|\delta\|_{\Sigma,2} = r$, *i.e.* directions where $\Sigma$ is large, are, by definition, expected to be less sensitive to perturbations compared to the minor-axis directions. To visualize this concept, Figure 15 shows CIFAR-10 images along with their corresponding optimized $\ell_2$ isotropic parameters obtained by Isotropic DD, and $\ell_2^\Sigma$ anisotropic parameters obtained by AnCer. First, we note the richness of information provided by the anisotropic parameters when compared to the $\ell_2$ worst-case, isotropic one. Interestingly, pixel locations where the intensity of $\Sigma$ is large (higher intensity in Figure 15) are generally the ones corresponding least to the underlying true class and overlapping more with background pixels.

![26_image_0.png](26_image_0.png)

![26_image_1.png](26_image_1.png)

Figure 7: CIFAR-10 certified accuracy as a function of $\ell_2$ radius, per model and σ (used as initialization in the isotropic data-dependent case and AnCer).

![26_image_2.png](26_image_2.png)

Figure 8: ImageNet certified accuracy as a function of $\ell_2$ radius, per model and σ (used as initialization in the isotropic data-dependent case and AnCer).

A particular insight one can get from AnCer certification is that the decision boundaries are not distributed isotropically around each input. To quantify this in higher dimensions, we plot in Figure 16 a histogram of the ratio between the maximum and minimum elements of our optimized smoothing parameters for the experiments on SmoothAdv (with an initial σ = 1.0) on CIFAR-10.
We note that this ratio can be as high as 5 for some of the input points, meaning the decision boundaries in that case could be 5 times closer to a given input in some directions than in others.

![27_image_0.png](27_image_0.png)

![27_image_1.png](27_image_1.png)

Figure 9: CIFAR-10 certified accuracy as a function of $\ell_2^\Sigma$ proxy radius, per model and σ (used as initialization in the isotropic data-dependent case and AnCer).

![27_image_2.png](27_image_2.png)

Figure 10: ImageNet certified accuracy as a function of $\ell_2^\Sigma$ proxy radius, per model and σ (used as initialization in the isotropic data-dependent case and AnCer).

## I Non-Data-Dependent Anisotropic Certification

As mentioned briefly in Section 6, it is our intuition that anisotropic certification requires a data-dependent approach, as different points will have fairly different decision boundaries and the certified regions will extend in different directions (as exemplified in Figure 1). To validate this claim, we perform certification of SmoothAdv (Salman et al., 2019a) with an initial σ = 1 on CIFAR-10 using a Σ which is the average of all the optimized $\Sigma_x$. The results of the certified accuracy, ACR and $\widetilde{\text{ACR}}$ are presented in Table 6, along with the same results for the methods reported in the main paper.

![28_image_0.png](28_image_0.png)

Figure 11: CIFAR-10 certified accuracy as a function of $\ell_1$ radius per σ (used as initialization in the isotropic data-dependent case and AnCer).

![28_image_1.png](28_image_1.png)

Figure 12: ImageNet certified accuracy as a function of $\ell_1$ radius per σ (used as initialization in the isotropic data-dependent case and AnCer).

As can be observed, moving away from data-dependent certification in the anisotropic scenario leads to a significant performance drop in terms of robustness.

## J Theoretical And Empirical Comparison With Mohapatra et al. (2020)

In regards to the theoretical results, the certified regions of Mohapatra et al. (2020) unfortunately do not admit a closed-form solution as ours do. Thus, a direct theoretical volume bound comparison is not possible. As for the empirical comparison, AnCer's performance on both $\ell_2$ and $\ell_1$ certificates far exceeds that of Mohapatra et al. (2020). For example, with $\ell_2$ certificates at a radius of 0.5, Cohen certified with AnCer achieves 77% certified accuracy (see Table 1), while Mohapatra et al. (2020) achieves under 60% certified accuracy. Note that Mohapatra et al. (2020) obtains only a marginal improvement over Cohen et al. As for the $\ell_1$ certificates, Mohapatra et al. (2020) uses the Gaussian distribution of Cohen et al., resulting in worse performance than the existing state-of-the-art in $\ell_1$ (Yang et al., 2020), which uses a uniform distribution. Our approach improves further upon the performance of Yang et al. (2020). For example, as per Table 2, RS4A with AnCer certification achieves 84% certified accuracy at an $\ell_1$ radius of 0.5, Yang et al. (2020) achieves 75% certified accuracy, while Mohapatra et al. (2020) achieves below 60%. However, we believe that the combination of both approaches, AnCer and Mohapatra et al. (2020), can further boost performance, as also hinted at in the abstract of Mohapatra et al. (2020) on the use of data-dependent smoothing.

![29_image_0.png](29_image_0.png)

Figure 13: CIFAR-10 certified accuracy as a function of $\ell_1^\Lambda$ proxy radius per σ (used as initialization in the isotropic data-dependent case and AnCer).
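The certified-accuracy curves in Figures 7–14 are all of the same form: the fraction of the test set that is correctly classified and certified at radius at least r, swept over r. A minimal sketch of that computation follows; the per-sample radii and correctness flags are hypothetical inputs, with abstentions encoded as radius 0 and `correct = False`.

```python
import numpy as np

def certified_accuracy(radii, correct, grid):
    """Fraction of the test set that is both correctly classified and
    certified at radius >= r, for each r in `grid`.

    radii:   (n,) certified radius per sample (0 for abstentions).
    correct: (n,) bool, whether the certified prediction matches the label.
    """
    radii = np.asarray(radii)
    correct = np.asarray(correct, dtype=bool)
    return np.array([(correct & (radii >= r)).mean() for r in grid])

# Toy usage: accuracy at l2 radii 0.0, 0.5, 1.0.
radii = np.array([0.9, 0.2, 1.4, 0.0, 0.6])
correct = np.array([True, True, True, False, True])
print(certified_accuracy(radii, correct, grid=[0.0, 0.5, 1.0]))
# -> [0.8 0.6 0.2]
```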
![29_image_1.png](29_image_1.png)

![29_image_2.png](29_image_2.png)

Figure 14: ImageNet certified accuracy as a function of $\ell_1^\Lambda$ proxy radius per σ (used as initialization in the isotropic data-dependent case and AnCer).

![29_image_3.png](29_image_3.png)

Figure 15: Visualization of an input CIFAR-10 image x (top), and the optimized parameters σ (middle) and Σ (bottom) - higher intensity corresponds to a higher $\sigma_i$ in that pixel and channel - of the smoothing distributions in the isotropic and anisotropic case, respectively.

![29_image_4.png](29_image_4.png)

Figure 16: Distribution of the maximum over the minimum AnCer $\sigma^x$ at each dataset point for SmoothAdv (Salman et al., 2019a) on CIFAR-10 (for initial σ = 1.0).

Table 6: Comparison of different certification methods on SmoothAdv with an initial σ = 1.0 on CIFAR-10. The columns 0.0–2.5 give certified accuracy (%) at the corresponding $\ell_2$ radius.

| Method | 0.0 | 0.25 | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | $\ell_2$ ACR | $\ell_2^\Sigma$ $\widetilde{\text{ACR}}$ |
|--------------|----|------|-----|-----|-----|-----|-----|-------|-------|
| Fixed σ | 45 | 40 | 35 | 25 | 16 | 9 | 5 | 0.565 | 0.565 |
| Isotropic DD | 41 | 39 | 36 | 29 | 21 | 14 | 7 | 0.694 | 0.694 |
| AnCer | 44 | 43 | 41 | 35 | 26 | 15 | 8 | 0.871 | 0.992 |
| Average Σ | 29 | 25 | 21 | 14 | 9 | 5 | 2 | 0.329 | 0.379 |
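As a sketch of the Average Σ baseline in Table 6: instead of smoothing each input with its own optimized anisotropic parameters, a single Σ is formed by averaging the per-sample ones and reused for every input. The array shapes and values below are stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-sample diagonal anisotropic parameters optimized by
# AnCer: one row per test input, one entry per input dimension (C*H*W).
n_samples, d = 500, 3 * 32 * 32
sigmas_per_sample = 0.5 + rng.random((n_samples, d))

# "Average Sigma" baseline: a single shared smoothing matrix obtained by
# averaging the per-sample parameters over the whole test set.
sigma_avg = sigmas_per_sample.mean(axis=0)

# Certification then proceeds as before, but every input is smoothed with
# the same sigma_avg instead of its own optimized parameters, e.g.:
noisy_input = lambda x: x + sigma_avg * rng.standard_normal(d)
```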
Review 1: Summary: The paper proposes a modification of randomized smoothing to support certifying anisotropic regions around each datapoint, as compared to previous methods which do so only for isotropic regions ($\ell_p$-balls). Specifically, the paper utilizes the idea of data-dependent certification in previous work, and maximizes the certified volume per datapoint to get anisotropic certified regions. The authors evaluate their method on CIFAR-10 and ImageNet, properly comparing with previous isotropic randomized smoothing methods. Strengths and Weaknesses: ### Strengths: + The paper is very well written and easy to follow. It was a nice read for me. + The proposed approach is well-motivated and supported with intuitive diagrams that help grasp the idea. Specifically, S3 motivates why anisotropic certification is important and would be better than isotropic certifications (though I have some concerns, see below), with proper connections to previous works. + The authors do a good job evaluating their method and comparing it with previous methods on proper benchmarks. + The authors explicitly discuss the limitations of their method in S6, highlighting the memory-based and data-dependent requirements for their anisotropic certification. ### Weaknesses + The contribution of the paper is minor and the results are not very significant (though I understand that TMLR does not focus on these aspects). + It is unclear why anisotropic certification matters in practice to guarantee more robustness. Relatedly, I think the proxy radius \tilde{R} might be misleading and does not really reflect the robustness of the model. Larger \tilde{R} reflects a larger stable (irregular/anisotropic) region of the smoothed model around a given datapoint, but does not mean that the model is more robust there. In fact, an adversary does not care about how large that stable region is; all it cares about is finding the closest decision boundary, and this closest decision boundary won't change if we make the certified region more flexible/anisotropic. Thus it is not clear why larger *irregular* stable regions imply higher adversarial robustness. I would appreciate it if the authors can clarify/comment on this. Overall, I like this paper, and I believe it will be useful for the community as an extension of isotropic certification. Requested Changes: + **A discussion on why anisotropic certification means more robustness:** This is regarding my second point in weaknesses. I understand that the proposed method can certify larger volumes by relaxing the isotropy of the certified regions, thus we can have larger stable neighborhoods around each given data-point. But does this mean more robustness? As I mentioned above, the closest decision region will be the same for an isotropic vs anisotropic region. Nevertheless, I still believe the anisotropy can be of advantage, but I believe that such a discussion would be useful in the paper. + **Memory requirement of data-dependent certification:** I appreciate that the authors include certification runtime in section 7.4, but I also think a similar discussion on the memory requirement of data-dependent certification would be useful for the reader. I understand that this might be already discussed in Alfarra et al., but I think it would be useful in this paper too. Broader Impact Concerns: No concerns on the ethical implications of the work.
==================================================
Review 2: Summary: The paper proposes ANCER as a robustness certification tool based on randomized smoothing through anisotropic noise. The approach is built upon existing randomized smoothing tools that use isotropic Gaussian and uniform noise to certify the prediction against $\ell_2,\ell_\infty$ norm-bounded adversarial noise. The ANCER optimization problem is defined in Equation (1) and Appendix C suggests relaxing the optimization problem and applying projected gradient ascent to solve the task. Numerical results are presented in section 7, claiming that ANCER outperforms the fixed-variance smoothing and Isotropic DD baselines in terms of certified accuracy for CIFAR10 and ImageNet datasets. Strengths and Weaknesses: Strengths: 1) The paper's idea of an anisotropic certification region sounds novel and potentially useful. Weaknesses: 1) The paper's motivation for developing ANCER needs to be further clarified. What kind of applications may require anisotropic certification regions with bounded volumes? The introduction only mentions that it is possible to find a larger anisotropic certification region that is a superset of the maximal isotropic region (Figure 1). However, ANCER is only a certification tool that applies to an already trained classifier. Therefore, as long as the anisotropic region in ANCER is found through extending the maximal isotropic certification region such that the output label does not change for an input $x$, ANCER does not make any changes to the original classification rule coming from the isotropic randomized smoothing certification method. In other words, ANCER may find a larger anisotropic certification region, but it essentially uses the same classification rule as (Alfarra et al 2022)'s method. 2) The optimization task in Equation (1) is vaguely explained. I could not find the mathematical definition of $r^p(x,\Theta^x)$, which is vaguely referred to as "the gap value under the anisotropic smoothing distribution". Is $r^p(x,\Theta^x)$ a function of model $f$? I am wondering how the authors take the gradient of this function w.r.t. $\Theta^x$, which is not explained in the text. Since these key details are missing in section 6, I could not see whether ANCER results in a different classification rule from the smoothed classifier with fixed variance parameter $r^*_{iso}$. It seems Equation (1) tries to expand the $r^*_{iso}$-radius isotropic region as long as the classifier's prediction remains unchanged. If this statement is true, the classification rule remains essentially the same. 3) The numerical discussion in section 7 needs further clarification. The paper mainly uses $\ell_p$ certified accuracy as the evaluation metric for the proposed ANCER and the baseline methods. However, a higher certified accuracy only implies that the certification method could guarantee the prediction in a larger radius and does not mean the classifier enjoys higher adversarial robustness. This point needs to be clarified in the text. 4) As another comment on numerical results, the results in Figures 3, 4 and Table 1 show ANCER improves the certified accuracy over isotropic DD (Alfarra et al 2022), while the certified accuracy is in terms of the isotropic $\ell_p$ accuracy. At first, this claim sounded rather unexpected to me, since ANCER is the anisotropic version of (Alfarra et al 2022)'s method. However, the paper uses the optimized variables of (Alfarra et al 2022)'s method as the ANCER initialization.
Therefore, the better numerical performance is only a consequence of the suboptimality of (Alfarra et al 2022)'s method in finding the optimal isotropic radius, and it seems they do not imply that an anisotropic method could improve the maximal isotropic radius over the optimal isotropic method. This point needs to be discussed clearly in the paper, and the claim that an anisotropic method like ANCER improves the isotropic certified accuracy should be revised and moderated accordingly. 5) I think the comparison vs. isotropic DD is to some extent unfair, since the current implementation initializes ANCER's optimization variables using the baseline isotropic DD's solution. What happens if ANCER's initialization becomes independent of the baseline method's solution, e.g. random or zero initialization? How much does the performance deteriorate as the initialization changes? 6) As a comment on the writing style, the paper delays several key details to the Appendix. For example, the discussion in Appendix C, or at least a significant part of it, should be moved back to the main text. In particular, a clean pseudocode version of the Python code in Listing 1 should be added to the main text. Currently, the main text does not have any algorithm descriptions, while ANCER is a new algorithm for computing anisotropic certification regions. Requested Changes: As already mentioned in my comments on the paper's weaknesses, the paper needs to be revised to address the following: 1) The introduction should better explain the motivation for an anisotropic certification region. What types of machine learning applications can use anisotropic certification regions? 2) Equation (1) needs to be explained clearly. What is the mathematical definition of $r^p(x,\Theta^x)$, and how can one compute the gradient of this function for solving the optimization problem? Also, an algorithmic block for ANCER needs to be added to Section 6. 3) The numerical results in section 7 should be further clarified, and the paper needs to investigate the suboptimality of (Alfarra et al 2022)'s method in finding the maximal isotropic certification region. Also, ANCER's numerical results without the baseline-based initialization should be included for a fair comparison. Broader Impact Concerns: I have no specific ethical concerns regarding this submission.
==================================================
Review 3: Summary: This paper proposes an anisotropic certification method for l1-norm and/or l2-norm by adapting (isotropic) randomized smoothing. It performs sample-wise region volume maximization, generalizing the data-dependent, memory-based solution from Alfarra et al. (2022). Experiments demonstrate that the proposed method achieves a larger certified region compared with existing state-of-the-art isotropic certification methods. Strengths and Weaknesses: Strengths: + The paper is well-written and easy to follow. I found the idea of generalizing the smoothing distribution to be anisotropic insightful and interesting. + The paper presents extensive experiments to demonstrate the effectiveness of the proposed anisotropic certification methods. Compared with the state-of-the-art, it achieves improved certified accuracy and a larger certified radius. + The authors spent a decent amount of effort discussing the limitations of the proposed method (Section 6), allowing readers to clearly understand the trade-offs between improved performance and limitations on memory and computational costs.
Weaknesses: - The proposed method is heavily based on the memory-based data-dependent certification approach (Alfarra et al, 2022). As the authors mentioned in Section 6, one limitation caused by this is that the overall certified statistics depend on the ordering of the test inputs. This raises the question of whether the presented comparisons with non-data-dependent approaches are fair. While the authors seem to propose a fix for this by showing "the worst-case results for AnCer in which we abstain from deciding whenever an intersection of two certified regions occurs", it is still unclear to me how their inference phase is implemented and what the run time complexity of this phase is. - No standard deviation results are presented for the reported statistics (i.e., certified accuracy and certified radius). Given the probabilistic nature of randomized smoothing (i.e., sampling from the smoothed distribution) and the optimization procedure with respect to Equation (1), I am not confident in the consistency of the claimed improvement. This is particularly the case for the l1-norm, as the l1-certified accuracies of ANCER and Isotropic DD are quite close to each other. Requested Changes: I hope the authors provide more detailed discussions and more experiments on the effect of ordering test inputs on the certification results. From my perspective, the soundness of a certification method is of top priority. In the worst-case scenario, if an adversary is capable of varying the test input ordering, how much improvement can the proposed method achieve? What about the average-case scenario? Clarifications on this will largely strengthen the work. Broader Impact Concerns: N/A
==================================================
Metareview: Recommendation: Accept as is Comment: The reviewers were satisfied with the author responses, and have reached a consensus that the paper meets the TMLR acceptance criteria. I therefore recommend accepting the paper.
==================================================
# Variational Elliptical Processes

Maria Bånkestad *maria.bankestad@it.uu.se* RISE Research Institutes of Sweden and Uppsala University, Sweden

Jens Sjölund *jens.sjolund@it.uu.se* Department of Information Technology, Uppsala University, Sweden

Jalil Taghia *jalil.taghia@ericsson.com* Ericsson Research, Sweden

Thomas B. Schön *thomas.schon@it.uu.se* Department of Information Technology, Uppsala University, Sweden

Reviewed on OpenReview: https://openreview.net/forum?id=djN3TaqbdA

## Abstract

We present elliptical processes—a family of non-parametric probabilistic models that subsumes Gaussian processes and Student's t processes. This generalization includes a range of new heavy-tailed behaviors while retaining computational tractability. Elliptical processes are based on a representation of elliptical distributions as a continuous mixture of Gaussian distributions. We parameterize this mixture distribution as a spline normalizing flow, which we train using variational inference. The proposed form of the variational posterior enables a sparse variational elliptical process applicable to large-scale problems. We highlight advantages compared to Gaussian processes through regression and classification experiments. Elliptical processes can supersede Gaussian processes in several settings, including cases where the likelihood is non-Gaussian or when accurate tail modeling is essential.

## 1 Introduction

Systems for autonomous decision-making are increasingly dependent on predictive models. To ensure safety and reliability, it is essential that these models capture uncertainties and risks accurately. Gaussian processes (GPs) offer a framework for probabilistic modeling that is widely used partly because it provides uncertainty estimates. However, these estimates are only reliable to the extent that the model is correctly specified, i.e., that the assumptions of Gaussianity hold true. On the contrary, heavy-tailed data arise in many real-world applications, including finance (Mandelbrot, 1963), signal processing (Zoubir et al., 2012), and geostatistics (Diggle et al., 1998). We use a combination of normalizing flows and modern variational inference techniques to extend the modeling capabilities of GPs to the more general class of elliptical processes (EPs).

**Elliptical processes.** Elliptical processes subsume Gaussian processes and Student's t processes (Rasmussen & Williams, 2006; Shah et al., 2014). They are based on the elliptical distribution—a scale-mixture of Gaussian distributions that is attractive mainly because it can describe heavy-tailed distributions while retaining most of the Gaussian distribution's computational tractability (Fang et al., 1990).

![0_image_0.png](0_image_0.png)

Figure 1: Posterior distributions of elliptical and Gaussian processes with equal kernel hyperparameters and covariance. The shaded areas are credible intervals of the posterior processes. The elliptical credible intervals are wider due to the process' heavier tail.

We use a normalizing flow (Rezende & Mohamed, 2015; Papamakarios et al., 2021) to model the continuous scale-mixture, which provides an added flexibility that can benefit a range of applications. We explore using elliptical processes as both a prior (over functions) and a likelihood, as well as the combination thereof. We also explore using EPs as a variational posterior that can adapt its shape to match complex posterior distributions.

**Variational inference.**
Variational inference is a tool for approximate inference that uses optimization to find a member of a predefined family of distributions that is close to the target distribution (Wainwright et al., 2008; Blei et al., 2017). Significant advances in the last decade have made variational inference the method of choice for scalable approximate inference in complex parametric models (Ranganath et al., 2014; Hoffman et al., 2013; Kingma & Welling, 2014; Rezende et al., 2014). It is thus not surprising that the quest for more expressive and scalable variations of Gaussian processes has gone hand-in-hand with the developments in variational inference. For instance, sparse GPs use variational inference to select inducing points to approximate the prior (Titsias, 2009). Inducing points are a common building block in deep probabilistic models such as deep Gaussian processes (Damianou & Lawrence, 2013; Salimbeni et al., 2019) and can also be applied in Bayesian neural networks (Maroñas et al., 2021; Ober & Aitchison, 2021). Similarly, the combination of inducing points and variational inference enables scalable approximate inference in models with non-Gaussian likelihoods (Hensman et al., 2013), such as when performing GP classification (Hensman et al., 2015; Wilson et al., 2016b). However, the closeness of the variational distribution to the target distribution is bounded by the flexibility of the variational distribution. Consequently, the success of deep (neural network) models has inspired various suggestions on flexible yet tractable variational distributions, often based on parameterized transformations of a simple base distribution (Tran et al., 2016). In particular, models using a composition of invertible transformations, known as normalizing flows, have been especially popular (Rezende & Mohamed, 2015; Papamakarios et al., 2021).

![1_image_0.png](1_image_0.png)

Figure 2: **Left:** A contour plot of an elliptical two-dimensional, correlated distribution with zero mean. The name derives from its elliptical level sets. **Right:** Three examples of one-dimensional elliptical distributions with zero means and varying tail-heaviness. Elliptical distributions are symmetric around the mean E[X] = µ.

**Our contributions.** We propose an adaptation of elliptical distributions and processes in the same spirit as modern Gaussian processes. Our EP construction is similar to that of Maume-Deschamps et al. (2017) but aims to turn EPs into a practical modeling tool. Constructing elliptical distributions based on a normalizing flow provides a high degree of flexibility without sacrificing computational tractability. This makes it possible to sidestep the "curse of Gaussianity" and adapt to heavy-tailed behavior when called for. We thus foresee many synergies between EPs and recently developed GP methods. We make a first exploration of these, and simultaneously demonstrate the versatility of the elliptical process as a model for the prior and/or the likelihood, or as the variational posterior. In more detail, our contributions are:

- a construction of the elliptical process and the elliptical likelihood as a continuous scale-mixture of Gaussian processes parameterized by a normalizing flow;
- a variational approximation that can either learn an elliptical likelihood or handle known non-Gaussian likelihoods, such as in classification problems;
- formulating a sparse variational approximation for large-scale problems; and
- describing extensions to heteroscedastic data.
## 2 Background

This section provides the necessary background on elliptical distributions, elliptical processes, and normalizing flow models. Throughout, we consider a regression problem, where we are given a set of N scalar observations, y = [y1, . . . , yN]⊤, at the locations X = [x1, . . . , xN]⊤, where xi is D-dimensional. The measurement yi is assumed to be noisy, such that

$$y_{i}=f(\mathbf{x}_{i})+\epsilon_{i},\tag{1}$$

where ϵi is zero-mean i.i.d. noise. The task is to infer the underlying function f : R^D → R.

## 2.1 Elliptical Distributions

Elliptical processes are based on elliptical distributions (Figure 2), which include Gaussian distributions as well as more heavy-tailed distributions, such as the Student's t distribution and the Cauchy distribution. The probability density of a random variable Y ∈ R^N that follows the elliptical distribution can be expressed as

$$p(u;\,\eta)=c_{N,\eta}\,|\Sigma|^{-1/2}\,g_{N}(u;\,\eta),\tag{2}$$

where u := (y − µ)⊤Σ⁻¹(y − µ) is the squared Mahalanobis distance, µ is the location vector, Σ is the non-negative definite scale matrix, and c_{N,η} is a normalization constant. The density generator g_N(u; η) is a non-negative function with finite integral, parameterized by η, which determines the shape of the distribution. Elliptical distributions are consistent, i.e., closed under marginalization, if and only if p(u; η) is a scale-mixture of Gaussian distributions (Kano, 1994). The density can be expressed as

$$p(u;\,\eta)=|\Sigma|^{-\frac{1}{2}}\int_{0}^{\infty}\left(\frac{1}{2\pi\xi}\right)^{\frac{N}{2}}e^{-\frac{u}{2\xi}}\,p(\xi;\,\eta_{\xi})\,d\xi,\tag{3}$$

using a mixing variable ξ ∼ p(ξ; ηξ). Any mixing distribution p(ξ; ηξ) that is strictly positive can be used to define a consistent elliptical process. In particular, we recover the Gaussian distribution if the mixing distribution is a Dirac delta function, and the Student's t distribution if it is a scaled inverse chi-square distribution. For more information on the elliptical distribution, see Appendix A.

## 2.2 Elliptical Processes

The elliptical process is defined analogously to a Gaussian process as:

Definition 1 An elliptical process (EP) is a collection of random variables such that every finite subset has a consistent elliptical distribution, where the scale matrix is given by a covariance kernel.

This means that an EP is specified by a mean function µ(x), a scale matrix (a kernel) k(x, x′), and the mixing distribution p(ξ; ηξ). Since the EP is built upon consistent elliptical distributions, it is closed under marginalization. The marginal mean µ is the same as the mean for the Gaussian distribution, and the covariance is Cov[Y] = E[ξ]Σ, where Y is an elliptical random variable, Σ is the covariance for a Gaussian distribution, and ξ is the mixing variable. Formally, a stochastic process {X_t : t ∈ T} on a probability space (Ω, F, P) consists of random maps X_t : ω → S_t, t ∈ T, for measurable spaces (S_t, S_t), t ∈ T (Bhattacharya & Waymire, 2007). We focus on the setting where S = R and the index set T is a subset of R^N, in particular, the half-line [0, ∞). Due to Kolmogorov's extension theorem, we may construct the EP from the family of finite-dimensional, consistent, elliptical distributions, which is due to the restriction to S = R (which is a Polish space) and Kano's characterization above.
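To make the scale-mixture construction in Equation (3) concrete, the following sketch draws samples from a consistent elliptical distribution by first drawing the mixing variable ξ and then a Gaussian with covariance ξΣ. Choosing a scaled inverse chi-square mixing distribution recovers the Student's t case mentioned above; all parameter values here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_elliptical(mu, Sigma, sample_xi, n_samples):
    """Draw from an elliptical distribution defined as a Gaussian scale
    mixture: xi ~ p(xi), then y | xi ~ N(mu, xi * Sigma)."""
    d = len(mu)
    L = np.linalg.cholesky(Sigma)
    xis = np.array([sample_xi() for _ in range(n_samples)])
    z = rng.standard_normal((n_samples, d))  # z @ L.T has covariance Sigma
    return mu + np.sqrt(xis)[:, None] * (z @ L.T)

# Scaled inverse chi-square mixing, xi = nu / chi2(nu), yields a
# multivariate Student's t distribution with nu degrees of freedom.
nu = 4
scale_inv_chi2 = lambda: nu / rng.chisquare(nu)

mu = np.zeros(2)
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
samples = sample_elliptical(mu, Sigma, scale_inv_chi2, n_samples=10_000)
print(samples.shape)  # (10000, 2); heavier tails than N(mu, Sigma)
```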
Note that the above definition of the elliptical process is essentially the same as that in Maume-Deschamps et al. (2017) but makes the use of a covariance kernel explicit. (This definition also appears in Bånkestad et al. (2020), which is an early preprint version of the present article.) Deceptively, there are also definitions of elliptical processes that differ in important ways from ours, notably ones based on Lévy processes (Bingham & Kiesel, 2002), which assume independent increments.

**Identifiability.** When using a GP for regression or classification, it is typically assumed that the data originate from a single sample path, representing a realization from the GP. However, an elliptical process introduces a hierarchical model wherein ξ is first sampled from the distribution p(ξ; ηξ), followed by sampling f from GP(f; µ, Kξ). In this structure, inferring the mixing distribution p(ξ; ηξ) from a single path is impossible since we only have one observation from p(ξ; ηξ). In other words, to infer the mixing distribution p(ξ; ηξ), we must have multiple paths drawn from the same EP. Therefore, to infer an elliptical prior from data, the dataset would have to contain multiple sample paths, such as multiple time series, all originating from the same prior.

![3_image_0.png](3_image_0.png)

Figure 3: Graphical models of (a) the elliptical likelihood, (b) the EP prior, and (c) the EP with independent elliptical noise, where ω is sampled from the likelihood mixing distribution p(ω; ηω).

**Prediction.** To use the EP for predictions, we need the predictive mean and covariance of the corresponding elliptical distribution. The predictive distribution is guaranteed to be a consistent elliptical distribution but not necessarily the same as the original one—the shape depends on the training samples. (Recall that consistency only concerns the marginal distribution.) The noise-free predictive distribution can be derived analytically (see Appendix B), but to account for additive elliptical noise, we will instead solve it by approximating the posterior p(ξ | y; ηξ) with a variational distribution q(ξ; φξ). The approximate inference framework also lets us incorporate (non-Gaussian) noise according to the graphical models in Figure 3. We aim to model mixing distributions that can capture any shape of the elliptical noise in the data. Thus, to improve upon the piecewise constant parameterization in an earlier version of this work (Bånkestad et al., 2020), we use normalizing flows: a class of methods for learning complex probability distributions.

## 2.3 Flow-Based Models

Normalizing flows are a family of generative models that map simple distributions to complex ones through a series of learned transformations (Papamakarios et al., 2021). Suppose we have a random variable x that follows an unknown probability distribution p_x(x). Then, the main idea of a normalizing flow is to express x as a transformation T_γ of a variable z with a known, simple probability distribution p_z(z). The transformation T_γ has to be bijective, and it can have learnable parameters γ. Both T and its inverse have to be differentiable. A change of variables obtains the probability density of x:

$$p_{x}(\mathbf{x})=p_{z}(\mathbf{z})\left|\operatorname*{det}\left({\frac{\partial T_{\gamma}(\mathbf{z})}{\partial\mathbf{z}}}\right)\right|^{-1}.\tag{4}$$

We focus on one-dimensional flows since we are interested in modeling the mixing distribution; the sketch below illustrates the change of variables in this one-dimensional setting.
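As a minimal one-dimensional illustration of this change of variables, with a fixed Softplus bijection standing in for a learned spline flow:

```python
import numpy as np

# Base density: standard normal.
log_p_z = lambda z: -0.5 * (z**2 + np.log(2 * np.pi))

# A simple bijection T(z) = softplus(z) mapping R onto (0, inf),
# standing in for a learned spline flow followed by Softplus.
T = lambda z: np.log1p(np.exp(z))
dT_dz = lambda z: 1.0 / (1.0 + np.exp(-z))  # softplus' = sigmoid

def log_p_x(z):
    """Log-density of x = T(z) via the change of variables formula:
    log p_x(x) = log p_z(z) - log |dT/dz|."""
    return log_p_z(z) - np.log(dT_dz(z))

z = np.random.randn(5)
x = T(z)           # samples from the transformed (positive) distribution
print(x, log_p_x(z))
```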
In particular, we use *linear rational spline flows* (Dolatabadi et al., 2020; Durkan et al., 2019), wherein the mapping T_γ is an elementwise, monotonic linear rational spline: a piecewise function where each piece is a linear rational function. The parameters are the number of pieces (bins) and the knot locations.

## 3 Method

We propose the variational EP with elliptical noise, where the variational EP can learn any consistent elliptical process, and the elliptical noise can capture any consistent elliptical noise. The key idea is to model the mixing distributions with a normalizing flow. The joint probability distribution of the model (see Figure 3c) is

$$p(\mathbf{y},\mathbf{f},\omega,\xi;\,\mathbf{\eta})=\underbrace{p(\mathbf{f}|\xi;\,\mathbf{\eta}_{f})p(\xi;\,\mathbf{\eta}_{\xi})}_{\text{prior}}\ \underbrace{\prod_{i=1}^{N}p(y_{i}|f_{i},\omega_{i})p(\omega_{i};\,\mathbf{\eta}_{\omega})}_{\text{likelihood}}.\tag{5}$$

Here, p(f|ξ; ηf) ∼ N(0, Kξ) is a regular GP prior with the covariance kernel K containing the parameters ηf, p(ξ; ηξ) is the process mixing distribution, and p(ω; ηω) is the noise mixing distribution. To learn the mixing distributions p(ξ; ηξ) and p(ω; ηω) by gradient-based optimization, they need to be differentiable with respect to the parameters ηξ and ηω, in addition to being flexible and computationally efficient to evaluate and sample from. Based on these criteria, a spline flow (Section 2.3) is a natural fit. We construct the mixing distributions by transforming a sample from a standard normal distribution with a spline flow. The output of the spline flow is then projected onto the positive real axis using a differentiable function such as *Softplus* or *Sigmoid*. In the following sections, we detail the construction of the model and show how to train it using variational inference. For clarity, we describe the likelihood first before combining it with the prior and describing a (computationally efficient) sparse approximation.

## 3.1 Likelihood

By definition, the likelihood (Figure 3a) describes the measurement noise ϵi (Equation (1)). The probability distribution of the independent elliptical likelihood is

$$p(\epsilon_{i};\sigma,\mathbf{\eta}_{\omega})=\int\mathcal{N}(\epsilon_{i};0,\sigma^{2}\omega_{i})\,p(\omega_{i};\,\mathbf{\eta}_{\omega})\,d\omega_{i},\tag{6}$$

where σ > 0 can be set to unity without loss of generality. In other words, the likelihood is a continuous mixture of Gaussian distributions where, e.g., ϵi follows a Student's t distribution if ω is scaled inverse chi-squared distributed.

**Parameterization.** We parameterize p(ω; ηω) as a spline flow,

$$p(\omega;\,\eta_{\omega})=p(\zeta)\left|\frac{\partial T(\zeta;\,\eta_{\omega})}{\partial\zeta}\right|^{-1},\tag{7}$$

although it could be, in principle, any positive, finite probability distribution. Here, p(ζ) ∼ N(0, 1) is the base distribution and ω = T(ζ; ηω) represents the spline flow transformation followed by a *Softplus* transformation to guarantee positivity of ω. The flexibility of this flow-based construction lets us capture a broad range of elliptical likelihoods, but we could also specify an appropriate likelihood ourselves. For instance, using a categorical likelihood enables EP classification; see Section 4.5.

**Training objective.** Now, suppose that we observe N independent and identically distributed residuals ϵi = yi − fi between the observations y and some function f.
We are primarily interested in estimating the noise for the purpose of "denoising" the measurements. Hence, we fit an elliptical distribution to the residuals by maximizing the (log) marginal likelihood with respect to the parameters ηω, that is,

$$\log p(\mathbf{\epsilon};\,\mathbf{\eta}_{\omega})=\sum_{i=1}^{N}\log\int\mathcal{N}\left(\epsilon_{i};0,\omega_{i}\right)p(\omega_{i};\,\mathbf{\eta}_{\omega})\,d\omega_{i}.\tag{8}$$

For general mixing distributions, this integral lacks a closed-form expression, but since it is one-dimensional we can approximate it efficiently by numerical integration (for example, using the trapezoidal rule). Ultimately, we arrive at the likelihood

$$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^{N}\int{\mathcal{N}}\left(y_{i};f_{i},\omega_{i}\right)p(\omega_{i};\,\eta_{\omega})\,d\omega_{i}.\tag{9}$$

## 3.2 Prior

Recall that our main objective is to infer the latent *function* f∗ = f(x∗) at arbitrary locations x∗ ∈ R^D given a finite set of noisy observations y. In probabilistic machine learning, the mapping y ↦ f∗ is often defined by the posterior predictive distribution

$$p(f^{*}|\mathbf{y})=\int p(f^{*}|\mathbf{f})\,p(\mathbf{f}|\mathbf{y})\,d\mathbf{f},\tag{10}$$

which turns modeling into a search for suitable choices of p(f∗|f) and p(f|y). Accordingly, the noise estimation described in the previous section is only done in pursuit of this higher purpose.

**Sparse formulation.** For an elliptical process (EP) we can rewrite the posterior predictive distribution as

$$p(f^{*}|\mathbf{y})=\int p(f^{*}|\mathbf{f},\xi)\,p(\mathbf{f},\mathbf{u},\xi|\mathbf{y})\,d\mathbf{f}\,d\mathbf{u}\,d\xi,\tag{11}$$

where we are marginalizing not only over the mixing variable ξ and the function values f (at the given inputs X), but also over the function values u at the M so-called inducing inputs Z. Introducing inducing points lets us derive a *sparse* variational EP—a computationally scalable version of the EP similar to the sparse variational GP (Titsias, 2009). The need for approximation arises because of the intractable second factor p(f, u, ξ|y) in Equation (11). (The first factor p(f∗|f, ξ) is simply a Normal distribution.) We summarize the sparse variational EP below and refer to Appendices D and E for additional details.

**Variational approximation.** We make the variational ansatz p(f, u, ξ|y) ≈ p(f|u, ξ) q(u|ξ) q(ξ), where u is the latent function at the inducing input locations Z, and parameterize this variational posterior as an elliptical distribution. We do so for two reasons: first, this makes the variational posterior similar to the true posterior, and second, we can then use the conditional distribution to make predictions. In full detail, we factorize the posterior as

$$q(\mathbf{f},\mathbf{u},\xi;\,\varphi)=p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{\mathbf{f}})\,q(\mathbf{u}|\xi;\,\varphi_{\mathbf{u}})\,q(\xi;\,\varphi_{\xi}),\tag{12}$$

where φ = (φf, φu, φξ) are the variational parameters, q(u|ξ; φu) = N(m, Sξ) is a Gaussian distribution with variational parameters m and S, and the mixing distribution is ξ ∼ q(ξ; φξ). Again, q(ξ; φξ) could be any positive finite distribution, but since we only know that the posterior is elliptical with a data-dependent mixing distribution, and since it is impossible to "overfit" a variational distribution (Bishop & Nasrabadi, 2006), we choose a very flexible yet tractable model, namely a spline flow.
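To illustrate the numerical integration used for Equation (8), the sketch below evaluates the elliptical log marginal likelihood of residuals with the trapezoidal rule. A scaled inverse chi-square density (an inverse gamma) stands in for the learned spline flow, and the grid and parameter values are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm, invgamma

def elliptical_log_marginal(residuals, mixing_pdf, grid):
    """log p(eps) = sum_i log  int N(eps_i; 0, w) p(w) dw, with the
    one-dimensional integral over w approximated by the trapezoidal rule."""
    # (n_residuals, n_grid) matrix of Gaussian densities N(eps_i; 0, w).
    dens = norm.pdf(residuals[:, None], loc=0.0, scale=np.sqrt(grid[None, :]))
    integrand = dens * mixing_pdf(grid)[None, :]
    marginals = np.trapz(integrand, grid, axis=1)
    return np.sum(np.log(marginals))

# Scaled inverse chi-square mixing density, standing in for the learned
# spline flow; nu = 4 recovers a Student's t likelihood.
nu = 4.0
mixing_pdf = lambda w: invgamma(a=nu / 2, scale=nu / 2).pdf(w)

residuals = np.array([0.1, -0.5, 2.3, -0.2])
grid = np.linspace(1e-3, 20.0, 2000)
print(elliptical_log_marginal(residuals, mixing_pdf, grid))
```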
Note that, because of the conditioning on ξ, the first two factors in Equation (12) are a Gaussian conjugate pair in u. Thus, marginalization over u results in a Gaussian distribution, for which the marginals of f∗ only depend on the corresponding input x∗ (Salimbeni et al., 2019):

$$q(f^{*}|\xi;\,\varphi)={\mathcal{N}}(f^{*}|\mu_{f}(\mathbf{x}^{*}),\sigma_{f}(\mathbf{x}^{*})\xi),\tag{13}$$

where

$$\mu_{f}(\mathbf{x}^{*})=\mathbf{k}_{*}^{\top}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{m},\tag{14}$$
$$\sigma_{f}(\mathbf{x}^{*})=k_{**}-\mathbf{k}_{*}^{\top}\left(\mathbf{K}_{\mathbf{uu}}^{-1}-\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{S}\mathbf{K}_{\mathbf{uu}}^{-1}\right)\mathbf{k}_{*},\tag{15}$$

and k∗∗ = k(x∗, x∗), k∗ = k(Z, x∗), and K_uu = k(Z, Z). Predictions on unseen data points x∗ are then computed according to (see Appendix E)

$$q(f^{*}|\mathbf{y};\mathbf{x}^{*})=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\mathcal{N}(f^{*}|\mu_{f}(\mathbf{x}^{*}),\sigma_{f}(\mathbf{x}^{*})\xi)\right].\tag{16}$$

For training, we use variational inference (VI), i.e., maximizing the evidence lower bound (ELBO) to indirectly maximize the marginal likelihood. We train the model using stochastic gradient descent and black-box variational inference (Bingham et al., 2019; Wingate & Weber, 2013; Ranganath et al., 2014).

**VI training.** The marginal likelihood is

$$p(\mathbf{y};\,\mathbf{\eta}_{\mathbf{f}},\mathbf{\eta}_{\mathbf{u}},\mathbf{\eta}_{\xi})=\int p(\mathbf{y},\mathbf{f},\mathbf{u},\xi;\,\mathbf{\eta}_{\mathbf{f}},\mathbf{\eta}_{\mathbf{u}},\mathbf{\eta}_{\xi})\,d\mathbf{f}\,d\mathbf{u}\,d\xi=\int p(\mathbf{y}|\mathbf{f})\,p(\mathbf{f}|\mathbf{u},\xi;\,\mathbf{\eta}_{\mathbf{f}})\,p(\mathbf{u},\xi;\,\mathbf{\eta}_{\mathbf{u}},\mathbf{\eta}_{\xi})\,d\mathbf{f}\,d\mathbf{u}\,d\xi.\tag{17}$$

This integral is intractable since p(ξ; ηξ) is parameterized by a spline flow. To overcome this, we approximate the marginal likelihood with the ELBO

$$\mathcal{L}_{\mathrm{ELBO}}(\eta_{f},\eta_{u},\eta_{\xi};\,\varphi_{f},\varphi_{u},\varphi_{\xi})=\mathbb{E}_{q(\mathbf{f},\xi;\,\varphi)}\left[\log p(\mathbf{y}|\mathbf{f})\right]-D_{\mathrm{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,\|\,p(\mathbf{u},\xi;\,\eta)\right)\tag{18}$$
$$=\sum_{i=1}^{N}\mathbb{E}_{q(f_{i},\xi;\,\varphi)}\left[\log p(y_{i}|f_{i})\right]-D_{\mathrm{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,\|\,p(\mathbf{u},\xi;\,\eta)\right).\tag{19}$$

Had the likelihood been Gaussian, the expectation E_{q(fi,ξ; φ)}[log p(yi|fi)] could have been computed analytically. In our case, however, it is elliptical, and we use a Monte Carlo estimate instead. Inserting the elliptical likelihood (Equation (9)) from the previous section yields

$${\mathcal{L}}(\eta,\varphi)=\sum_{i=1}^{N}\mathbb{E}_{q(f_{i},\xi;\,\varphi)}\left[\log\left({\mathcal{N}}\left(y_{i};f_{i},\omega_{i}\right)p(\omega_{i};\,\eta_{\omega})\right)\right]-D_{\mathrm{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,\|\,p(\mathbf{u},\xi;\,\eta)\right).$$

## 3.3 Extension To Heteroscedastic Noise

We now extend the elliptical likelihood to capture heteroscedastic (input-dependent) noise. The main idea is to let the parameters ηωi of the likelihood's mixing distribution depend on the input location xi; that is, in heteroscedastic regression the noise distribution varies with the input. For example, heteroscedastic elliptical noise can be useful in a time series where the noise variance and tail-heaviness change over time.
Examples of this can be found in statistical finance (Liu et al., 2020) and robotics (Kersting et al., 2007). To model this, we use a neural network g_{γω}(·) with parameters γω to represent the mapping from input location to spline flow parameters, ηωi = g_{γω}(xi). We train the model by maximizing the log-likelihood

$${\mathcal{L}}(\boldsymbol{\gamma}_{\omega})=\sum_{i=1}^{N}\log\int p(y_{i}|f_{i},\,\omega_{i})\,p(\omega_{i};\,\boldsymbol{\eta}_{\omega_{i}}=g_{\boldsymbol{\gamma}_{\omega}}(\boldsymbol{x}_{i}))\,d\omega_{i}.\tag{20}$$

Additional information, such as time of day or season for time series data, can be incorporated by simply passing it as extra inputs to the neural network g_{γω}(·).

## 4 Experiments

We examined the variational elliptical processes using four different experiments. In the **first** experiment, we investigated how well the elliptical likelihood (Section 3.1) recovers known elliptical noise in synthetic data. In the **second** experiment, we demonstrated the use of the heteroscedastic EP on a synthetic dataset. We evaluated regression performance on seven standard benchmarks in the **third** experiment, in which we compared the sparse and the heteroscedastic EP formulations with both sparse GP (*SVGP*) (Hensman et al., 2013) and full GP baselines. Finally, in the **fourth** experiment, we examined whether using an EP is beneficial in classification tasks.

**Implementation.** The mixing distribution of the variational EP used a quadratic rational spline flow, where we transformed the likelihood flow p(ω) using *Softplus* and the posterior flow p(ξ) using *arctan* to ensure that they were positive (remember that a mixing distribution must be positive, see Equation (3)). We used a squared exponential kernel with independent length scales in all experiments. See Appendix F for further implementation details. Accompanying code is found at .

![7_image_0.png](7_image_0.png)

Figure 4: The posterior predictive distribution when using a GP with elliptical noise modeled by a spline flow. Each row represents a synthetic dataset with different noise. The **first** row adds Gaussian noise ω ∼ δ(ω − 0.04), the **second** row adds Student's t noise ω ∼ Scale-Inv-χ²(ν = 4), and the **third** row adds Cauchy noise ω ∼ Scale-Inv-χ²(ν = 1). The shaded areas show the 90% credibility areas of the latent function posterior f∗ and the noisy posterior y∗. The histograms show the learned and the true noise mixing distributions.

## 4.1 Noise Identification

To examine how well the elliptical likelihood, described in Section 3.1, captures different types of elliptical noise, we created three synthetic datasets with the same latent function fi = sin(3xi)/2, sampled independently at N = 200 locations xi ∼ U(−2, 2). Further, each dataset was contaminated with independent elliptical noise ϵi that was added to the latent function, yi = fi + ϵi. The added noise varied between the three datasets. The first was Gaussian distributed, which is the same as ω being Dirac delta distributed, ω ∼ δ(ω − 0.04). The second was Student's t distributed, which means that ω follows the scaled inverse chi-squared distribution, ω ∼ Scale-Inv-χ²(ν = 4). The third was Cauchy distributed, ω ∼ Scale-Inv-χ²(ν = 1). We trained a sparse variational GP for each dataset with an elliptical likelihood. Figure 4 illustrates the results from the experiments. The histograms in the right column compare the learned mixing distribution p(ω; ηω) to the true mixing distribution (the red curve) from which the noise ϵi originated.
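For reference, the three noise processes compared in these histograms can be simulated along the following lines (a sketch under the settings stated above; the random generator and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = rng.uniform(-2, 2, size=N)
f = np.sin(3 * x) / 2

def scale_inv_chi2(nu, size):
    # Scaled inverse chi-square mixing variable: nu / chi2(nu) per sample.
    return nu / rng.chisquare(nu, size=size)

datasets = {
    # Dirac delta mixing at omega = 0.04 is plain Gaussian noise.
    "gaussian":  f + np.sqrt(0.04) * rng.standard_normal(N),
    # Scale-Inv-chi2 mixing gives Student's t noise (nu = 4)...
    "student_t": f + np.sqrt(scale_inv_chi2(4, N)) * rng.standard_normal(N),
    # ...and nu = 1 gives Cauchy noise.
    "cauchy":    f + np.sqrt(scale_inv_chi2(1, N)) * rng.standard_normal(N),
}
```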
The learned distributions follow the shape of the true mixing distribution reasonably well, considering the small number of samples, indicating that we can learn the correct likelihood regardless of the noise variance. Furthermore, if the noise is actually Gaussian, then so is the learned likelihood. The left column shows the predictive posterior of the final models, demonstrating that the models managed to learn suitable kernel parameters jointly with the likelihood.

**Robust regression on synthetic data.** An elliptical likelihood is better at handling outliers and non-Gaussian noise than a Gaussian likelihood because it can better match the whole distribution of the noise rather than just a single variance. This is shown in Figure 5, where a GP with an elliptical and a Gaussian likelihood were trained on a small synthetic dataset with additive Student's t noise, with η = 4. The Gaussian likelihood approximates the mixing distribution with a single variance at approximately ω = 0.4, while the elliptical likelihood fits the entire mixing distribution. As a result, the GP with the Gaussian likelihood needs to use a shorter length scale to compensate for the thin tail of the likelihood. In contrast, the GP with the elliptical likelihood can focus on the slower variations, thus producing a better fit to the data.

## 4.2 Elliptical Heteroscedastic Noise

In this experiment, we aimed to exemplify the benefits of using heteroscedastic elliptical noise as described in Section 3.3. To this end, we created the synthetic dataset shown in Figure 6. It consisted of 150 samples generated by adding heteroscedastic noise to the function f(xi) = sin(5xi) + xi, where xi ∼ U(0, 4). Specifically, we added Student's t noise, ϵ(xi) ∼ St(ν(xi), σ(xi)), where the degrees of freedom followed ν(xi) = 25 − 11|xi + 1|^0.9 and the standard deviation σ(xi) = 0.5|xi + 1|^1.6 + 0.001. We used a variational sparse GP with heteroscedastic noise as described in Section 3.3.

![8_image_0.png](8_image_0.png)

Figure 5: The predictive posterior after training on a small synthetic dataset. The shaded areas show the 95% credibility area of the latent function posterior f∗ (blue) and the noisy posterior y∗ (magenta) when using (**top row**) a GP with elliptical noise modeled by a spline flow and (**bottom row**) a GP with Gaussian noise. The histograms show the learned and the true noise mixing distribution.

The experimental results, depicted in Figure 6, show that, qualitatively, despite the rapid change in the noise distribution and the low number of training samples, the model captures the varying noise in terms of the scale and the increasing heaviness of the tail. Remember that a single spike in the mixing distribution, as at x = 0.7, indicates that the noise is Gaussian, and the *wider* the mixing distribution is, as at x = −0.7, the heavier-tailed the noise is. When the synthetic data has Gaussian noise, so does the learned elliptical noise. Similarly, when the synthetic noise is heavier-tailed, so is the learned mixing distribution. This indicates that this model could be helpful for data with varying elliptical noise.

## 4.3 Regression

We conducted experiments on seven datasets from the UCI repository (Dua & Graff, 2017) to study the impact of the elliptical noise, elliptical posterior, and heteroscedastic noise. We used various regression models based on a Gaussian process (GP) prior; see Table 1 for a summary. As baselines, we compare with the sparse variational GP model of Hensman et al.
(2013), which we call *SVGP*, and an exact GP.

**Models evaluated.** We used a GP model with elliptical noise (*EP-GP*) to compare its performance to the traditional GP model with Gaussian noise. Theoretically, an elliptical posterior should result from combining a Gaussian prior and an elliptical likelihood, but in this case, we approximated the posterior with a Gaussian. We also included a model that used an elliptical posterior (*EP-EP*) to explore the potential benefits of using the theoretically more accurate elliptical posterior. Additionally, we tested a heteroscedastic elliptical noise model (Het-EP) and a heteroscedastic Gaussian noise model (Het-GP) to compare their performance. The difference between these two is that in Het-GP the neural network only predicts the noise variance, whereas the Het-EP model predicts the 26 parameters corresponding to nine bins of the spline flow.

![9_image_0.png](9_image_0.png)

Figure 6: The results from training a GP with heteroscedastic elliptical noise on a synthetic dataset. The **top** row shows the posterior distribution, where the shaded areas are the 95% credibility areas of the latent posterior f∗ (magenta) and the noisy posterior y∗ (blue). The histograms in the **middle** row show the noise mixing distributions at the different x-values indicated by the vertical dashed lines in the top plot. The **bottom** row shows the mixing distribution used when creating the synthetic data.

![9_image_1.png](9_image_1.png)

Figure 7: Predictive negative log-likelihood (LL) (**top**) and mean squared error (MSE) (**bottom**) on held-out test data from the regression benchmarks (smaller is better). We show the average of the ten splits as a dot and the standard deviation as a line. The models in bold font are our models. Note that the spread of the error varies between the datasets. For example, the MSE for the Bike dataset is low for all six models. Overall, the EP posterior outperforms the GP posterior, in terms of log-likelihood, for the five larger datasets.

First, we summarize the results in Figure 7, and then we discuss the results of each method in more detail. The figure displays the mean and standard deviation of ten randomly chosen training, validation, and test data splits. The training procedure for all models optimizes—directly or indirectly—the log-likelihood. Therefore, the most relevant figure of merit is the negative test log-likelihood (LL), shown in the top row of Figure 7. We stress that the log-likelihood is more critical than the MSE because the mean squared error does not consider the predictive variance. However, we show the MSE on held-out test sets in the bottom row for completeness. Additional details on the experiments can be found in Appendix G.

Table 1: The different types of models we trained on the regression datasets.

| NAME | APPROXIMATION | LOSS | LIKELIHOOD | POSTERIOR |
|----------|-----------------|---------------------|--------------|-------------|
| Exact GP | Exact | Marginal likelihood | Gaussian | Gaussian |
| SVGP | Variational | ELBO | Gaussian | Gaussian |
| EP-GP | Variational | ELBO | Elliptic | Gaussian |
| EP-EP | Variational | ELBO | Elliptic | Elliptic |
| Het-GP | Variational | ELBO | Gaussian | Gaussian |
| Het-EP | Variational | ELBO | Elliptic | Gaussian |

**GP baseline.** To assess the quality of the approximations introduced, we first establish an exact GP baseline that made predictions without any approximation. We trained the GP hyperparameters using L-BFGS and early stopping on a validation dataset.
For this to be feasible on large datasets, we used the Blackbox Matrix-Matrix multiplication inference procedure (Gardner et al., 2018; Wang et al., 2019). In the following sections, we discuss each method in detail.

**Variational GP approximation.** First, we compare the exact GP baseline with its variational approximation, i.e., *SVGP*. Consider first the results on MPG and Concrete, where *SVGP* did not make use of inducing points due to the small sample size. Consequently, *SVGP*'s worse performance on Concrete is only due to the change of inference method. For the other datasets, we investigated the effect of the number of inducing points on the predictive log-likelihood; see Figure 11 in Appendix H. The dependence is very similar for all methods. In particular, the performance saturates at roughly 500 inducing points on all datasets except Kin40k, which continues to improve. However, the relative performance of the different methods on Kin40k is fairly stable.

**Elliptical likelihood.** Next, we consider whether it is advantageous to use an elliptical likelihood instead of a Gaussian. To this end, we compare the performance of *SVGP* and EP-GP, which only differ in this respect. The results show that switching to an elliptical likelihood improves the log-likelihood on most datasets, as would be expected theoretically.

**Elliptical posterior.** We now compare EP-GP to EP-EP to analyze the potential benefit of having an elliptical posterior. On three of the datasets (Bike, Kin40k, and Protein), which are all relatively large, the elliptical posterior produces a clear improvement in log-likelihood. In contrast, on the other datasets, it is similar, possibly because the posterior is well approximated by a Gaussian. Regardless, we conclude that, when using an elliptical likelihood, an elliptical posterior is preferable over a Gaussian one.

**Heteroscedastic models.** Is there an additional benefit of having heteroscedastic noise? On the two smallest datasets (MPG and Concrete), the answer is clearly no: the heteroscedastic models perform worse than *SVGP* and EP-GP in terms of both log-likelihood and mean squared error, indicating potential overfitting and that regularization may be warranted. (Note that the most relevant comparisons are Het-GP vs. SVGP and Het-EP vs. EP-GP.) On the remaining datasets, however, the heteroscedastic models clearly outperform *SVGP* and EP-GP in terms of log-likelihood. On the other hand, they perform poorly in terms of mean squared error; in fact, worse than *SVGP* on all datasets. Hypothetically, this is because the heteroscedastic models attribute too much variation to the likelihood, thus sacrificing the mean-function prediction. This could potentially be mitigated by decoupling the mean and the covariance models (Salimbeni et al., 2018; Jankowiak et al., 2020). Another option would be to increase the weight of the KL-divergence term in the ELBO (Higgins et al., 2017). We expect such improvements to be more critical for the Het-EP model, which has a more flexible likelihood than Het-GP. Still, Het-EP already performs slightly better than Het-GP on the three largest datasets. Note, however, that the EP-EP model often achieves both a log-likelihood similar to the heteroscedastic models and a mean squared error similar to *SVGP*.

![11_image_0.png](11_image_0.png)

Figure 9: The predictive posterior after training the elliptical process on the wind power dataset.
The shaded areas show the 95% credibility area of the latent function posterior f∗ and the noisy posterior y∗.

**Computational considerations.** Empirically, we found that replacing the Gaussian likelihood with the elliptical likelihood had a minor impact on the computational demand. Further changing to an elliptical posterior increased the computational time per iteration, but a faster convergence partially compensated for this. Finally, modeling heteroscedastic noise with a neural network adds significant complexity, but this was offset by running it on a GPU.

**Prediction accuracy.** In summary, the results show that an elliptical likelihood results in better or equal predictive log-likelihoods than a Gaussian likelihood. However, the advantage is less significant on small datasets. Similarly, the more flexible elliptical posterior tends to produce better results. However, when looking at the mean squared error (MSE), the exact GP outperforms the other models. Thus, if predictive performance is the main objective, an exact GP (or, even better, a neural network) may be the best choice. However, the well-known scalability issues of the exact GP clearly limit its applicability. In such scenarios, our results suggest that EP-EP is a better choice than *SVGP*.

## 4.4 Application: Forecasting Wind Power Production

Wind power stands for a significant and increasing share of global power production. However, since wind power generation is inherently stochastic and hard to control, it poses severe challenges to power system balance and stability (Impram et al., 2020). In principle, these could be addressed via sophisticated control-theoretical tools such as stochastic model predictive control (Schildbach et al., 2014), but that requires accurate and reliable wind power forecasts. In this section, we illustrate the applicability of the elliptical process to time series forecasting. Specifically, we consider the task of making a probabilistic forecast of the German country-wide total of wind power production at the end of 2016, based on data from the preceding seven years. The data comes from Wiese et al. (2019).

![11_image_1.png](11_image_1.png)

Figure 8: The mixing distribution p(ω) of the elliptical noise.

Since the data is strictly positive, we model the time series in the log domain, using an elliptical process with elliptical noise (*EP-EP*). To capture the periodicity in the data, we use a sum of a periodic kernel and a linear kernel. See Appendix I for more details. Figure 9 presents the predictive distribution from the elliptical process. The model successfully captures the underlying annual periodicity and slowly increasing trend while producing qualitatively reasonable credibility regions. Figure 8 shows the mixing distribution of the trained elliptical likelihood, which is rather broad and hence non-Gaussian. The two modes discernible in the mixing distribution reflect the seasonal changes in the underlying data. For comparison, Appendix I shows the corresponding results when using a full GP and a heteroscedastic EP, where the full GP seems to be more likely to overfit to short-scale trends in the data and produces credible regions that are too wide. The heteroscedastic EP, on the other hand, produces a fit that is overall on par with the EP model but disentangles the seasonal variations.

## 4.5 Binary Classification

To evaluate the EP on classification tasks, we perform variational EP and GP classification by simply replacing the likelihood with a binary one, as sketched below.
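In code form, the Monte Carlo estimate of the expected log-likelihood in Equation (18) for this binary case can be sketched as follows; the next paragraph details the construction. This is a sketch only, with a fixed sampler standing in for the learned posterior mixing distribution q(ξ).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def mc_expected_log_lik(y, mu_f, sigma_f, sample_xi, n_mc=64):
    """Monte Carlo estimate of E_q(f, xi)[log Ber(y | sigmoid(f))], where
    f | xi ~ N(mu_f, sigma_f * xi), per data point."""
    xi = sample_xi(n_mc)                                 # (n_mc,)
    f = mu_f[None, :] + np.sqrt(sigma_f[None, :] * xi[:, None]) \
        * rng.standard_normal((n_mc, len(mu_f)))
    p = sigmoid(f)
    log_lik = y * np.log(p) + (1 - y) * np.log1p(-p)     # Bernoulli log-pmf
    return log_lik.mean(axis=0)                          # average over draws

# Toy usage with a stand-in posterior mixing distribution q(xi).
sample_xi = lambda n: 4.0 / rng.chisquare(4.0, size=n)
y = np.array([1, 0, 1])
mu_f = np.array([1.2, -0.4, 0.1])
sigma_f = np.array([0.3, 0.5, 0.2])
print(mc_expected_log_lik(y, mu_f, sigma_f, sample_xi))
```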
To derive the expectation in Equation (18), we first sample $f_i \sim \mathcal{N}(f_i|\mu_f(x_i), \sigma_f(x_i)\xi)$ and then evaluate the likelihood as $\mathrm{Ber}(\mathrm{Sigmoid}(f_i))$. This setting is interesting since there is no noise term in the likelihood that can absorb variability in the data; instead, the process itself has to capture it. It therefore lets us assess the value of the elliptical process itself, without the elliptical noise.

We compare two variational EP models with a variational GP model. The two EPs differ in their prior mixing distributions: the first model has a GP prior and an EP posterior, and for the second model, we replace the GP prior with an elliptical one. We can view the trainable prior mixing distribution as defining a continuous scale mixture of Gaussian processes, which can be more expressive than a single GP.

![12_image_0.png](12_image_0.png)

Figure 10: The classification AUC (Area Under the Curve), accuracy, and predictive log-likelihood from the ten-fold cross-validation (higher is better). We show the average of the ten splits as a dot and the standard deviation as a line.

To evaluate the models, we performed a ten-fold cross-validation where we trained the models on three classification datasets, described in Appendix J. Figure 10 presents the results from the ten folds. From the area under the curve and the log-likelihood score, we see that the EP separates the two classes better, especially when using the sparse models. The variational elliptical distribution is sufficient to obtain a high log-likelihood, while training the mixing distribution of the EP did not further improve the score.

## 5 Related Work

In general, attempts at modeling heavy-tailed stochastic processes modify either the likelihood or the stochastic process prior—rarely both. Approximate inference is typically needed when going beyond Gaussian likelihoods (Neal, 1997; Jylänki et al., 2011), e.g., for robust regression, but approximations that preserve analytical tractability have been proposed (Shah et al., 2014). Elliptical processes gain flexibility by learning the mixing distribution, which makes training and inference reasonably efficient and the resulting predictions interpretable.

It is, however, also possible to construct deep process priors by composing several stochastic process layers (Damianou & Lawrence, 2013) and including non-linear transformations as activation functions (Aitchison et al., 2021). The tractability of such deep processes depends on the specifics of the distributions involved. For instance, the deep inverse Wishart process (Aitchison et al., 2021) uses an inverse Wishart prior over the kernel just as the Student's t process (Shah et al., 2014). This suggests it might be possible to generalize this approach to deep *elliptical* processes. While intriguing, we leave this to future work.

Other attempts at creating more expressive GP priors include Maroñas et al. (2021), who used a GP in combination with a normalizing flow, and Luo & Sun (2017), who used a discrete mixture of Gaussian processes. Similar ideas combining mixtures and normalizing flows have also been proposed to create more expressive likelihoods (Abdelhamed et al., 2019; Daemi et al., 2019; Winkler et al., 2019; Rivero & Dvorkin, 2020) and variational posteriors (Nguyen & Bonilla, 2014).

Non-stationary extensions of Gaussian processes, such as when modeling heteroscedastic noise, are somewhat rare. Examples include Zhao et al. (2021), who propose a hierarchical model in parameter space, the mixture model of Li et al.
(2021), and the variational model of Lázaro-Gredilla & Titsias (2011). Deep kernel learning (Calandra et al., 2016; Wilson et al., 2016a;b) is another class of deep GPs that uses a neural network to learn the input features of a GP. A similar approach was taken by Ma et al. (2019), who describe a class of stochastic processes where the finite-dimensional distributions are only defined implicitly as a parameterized transformation of some base distribution, thereby generalizing earlier work on warped Gaussian processes (Snelson et al., 2003; Rios & Tobar, 2019). However, the price of this generality is that standard variational inference is no longer possible. Assuming a Gaussian likelihood, they describe an alternative based on the wake-sleep algorithm of Hinton et al. (1995).

In the statistics literature, it is well-known that elliptical processes can be defined as scale-mixtures of Gaussian processes (Huang & Cambanis, 1979; O'Hagan, 1991; O'Hagan et al., 1999). However, unlike in machine learning, little emphasis is placed on building the models from data (i.e., training). These models have found applications in environmental statistics because of the field's inherent interest in modeling spatial extremes (Davison et al., 2012). Like us, several works take the mixing distribution as the starting point and make localized predictions of quantiles (Maume-Deschamps et al., 2017) or other tail-risk measures (Opitz, 2016).

## 6 Conclusions

The Gaussian distribution is the default choice in statistical modeling for good reasons. Even so, far from everything is Gaussian, and casually pretending otherwise comes at a risk. The elliptical distribution offers a computationally tractable alternative that can capture heavy-tailed distributions. The same reasoning applies when comparing the Gaussian process to the elliptical process. A sensible approach in many applications would be to start from the weaker assumptions of the elliptical process and let the data decide whether the evidence supports Gaussianity.

We constructed the elliptical process as a scale mixture of Gaussian distributions. By parameterizing the mixing distribution using a normalizing flow, we showed how a corresponding elliptical process can be trained using variational inference. The variational approximation we propose enables us to capture heavy-tailed posteriors and makes it straightforward to create a sparse variational elliptical process that scales to large datasets.

We performed experiments on both regression and classification. In particular, we investigated the benefits of various combinations of elliptical posterior and elliptical likelihood and their heteroscedastic counterparts. We concluded that using an elliptical likelihood and an elliptical posterior often achieves a better log-likelihood and a similar mean squared error compared with the sparse variational GP.

The added flexibility of elliptical processes could benefit a range of classical and new applications. However, advanced statistical models are not a cure-all, and one needs to avoid overreliance on such models, especially in safety-critical applications.

## 7 Acknowledgments

We thank Sebastian Mair and Zheng Zhao for providing feedback on the manuscript. We also want to thank the reviewers whose comments significantly improved the quality and clarity of our paper.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research grant SM19-0029, and by Kjell och Märta Beijer Foundation. ## References Abdelrahman Abdelhamed, Marcus A Brubaker, and Michael S Brown. Noise flow: Noise modeling with conditional normalizing flows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3165–3173, 2019. Laurence Aitchison, Adam Yang, and Sebastian W Ober. Deep kernel processes. In *International Conference* on Machine Learning (ICML), pp. 130–140. PMLR, 2021. Jesús Alcalá-Fdez, Alberto Fernández, Julián Luengo, Joaquín Derrac, Salvador García, Luciano Sánchez, and Francisco Herrera. Keel data-mining software tool: data set repository, integration of algorithms and experimental analysis framework. *Journal of Multiple-Valued Logic & Soft Computing*, 17, 2011. Maria Bånkestad, Jens Sjölund, Jalil Taghia, and Thomas Schön. The elliptical processes: a family of fat-tailed stochastic processes. *arXiv preprint arXiv:2003.07201*, 2020. Rabindra Nath Bhattacharya and Edward C Waymire. *A basic course in probability theory*, volume 69. Springer, 2007. Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. *Journal of Machine Learning Research*, 20(1):973–978, 2019. Nicholas H Bingham and Rüdiger Kiesel. Semi-parametric modelling in finance: theoretical foundations. Quantitative Finance, 2(4):241, 2002. Christopher M Bishop and Nasser M Nasrabadi. *Pattern recognition and machine learning*, volume 4. Springer, 2006. David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American statistical Association, 112(518):859–877, 2017. Roberto Calandra, Jan Peters, Carl Edward Rasmussen, and Marc Peter Deisenroth. Manifold Gaussian processes for regression. In *2016 International joint conference on neural networks (IJCNN)*, pp. 3338– 3345. IEEE, 2016. Atefeh Daemi, Hariprasad Kodamana, and Biao Huang. Gaussian process modelling with Gaussian mixture likelihood. *Journal of Process Control*, 81:209–220, 2019. Andreas Damianou and Neil D Lawrence. Deep Gaussian processes. In *International Conference on Artificial* Intelligence and Statistics (AISTATS), pp. 207–215, 2013. Anthony C Davison, Simone A Padoan, Mathieu Ribatet, et al. Statistical modeling of spatial extremes. Statistical science, 27(2):161–186, 2012. Peter J Diggle, Jonathan A Tawn, and Rana A Moyeed. Model-based geostatistics. Journal of the Royal Statistical Society: Series C (Applied Statistics), 47(3):299–350, 1998. Hadi Mohaghegh Dolatabadi, Sarah Erfani, and Christopher Leckie. Invertible generative modeling using linear rational splines. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 4236–4246, 2020. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ ml. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems (NeurIPS), volume 32, 2019. Hadi Fanaee-T and Joao Gama. Event labeling combining ensemble detectors and background knowledge. 
Progress in Artificial Intelligence, 2(2):113–127, 2014. Kai-Tai Fang, Samuel Kotz, and Kai Wang Ng. *Symmetric multivariate and related distributions*. Chapman and Hall, 1990. Jacob Gardner, Geoff Pleiss, Kilian Q Weinberger, David Bindel, and Andrew G Wilson. Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration. In *Advances in Neural Information* Processing Systems (NeurIPS), volume 31, 2018. James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Ann Nicholson and Padhraic Smyth (eds.), *Uncertainty in Artificial Intelligence (UAI)*, volume 29. AUAI Press, 2013. James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 351– 360, 2015. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. In *International conference on learning representations (ICLR)*, 2017. Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The wake-sleep algorithm for unsupervised neural networks. *Science*, 268(5214):1158–1161, 1995. Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 2013. Steel T Huang and Stamatis Cambanis. Spherically invariant processes: Their nonlinear structure, discrimination, and estimation. *Journal of Multivariate Analysis*, 9(1):59–83, 1979. Semich Impram, Secil Varbak Nese, and Bülent Oral. Challenges of renewable energy penetration on power system flexibility: A survey. *Energy Strategy Reviews*, 31:100539, 2020. Martin Jankowiak, Geoff Pleiss, and Jacob Gardner. Parametric Gaussian process regressors. In *International* Conference on Machine Learning (ICML), pp. 4702–4712. PMLR, 2020. Pasi Jylänki, Jarno Vanhatalo, and Aki Vehtari. Robust Gaussian process regression with a Student-t likelihood. *Journal of Machine Learning Research*, 12(Nov):3227–3257, 2011. Yutaka Kano. Consistency property of elliptic probability density functions. *Journal of Multivariate Analysis*, 51(1):139–147, 1994. Kristian Kersting, Christian Plagemann, Patrick Pfaff, and Wolfram Burgard. Most likely heteroscedastic Gaussian process regression. In *International Conference on Machine Learning (ICML)*, pp. 393–400, 2007. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations (ICLR)*, 2015. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *International Conference on* Learning Representations (ICLR), 2014. Miguel Lázaro-Gredilla and Michalis K Titsias. Variational heteroscedastic Gaussian process regression. In International Conference on Machine Learning (ICML), 2011. Tao Li, Di Wu, and Jinwen Ma. Mixture of robust Gaussian processes and its hard-cut EM algorithm with variational bounding approximation. *Neurocomputing*, 452:224–238, 2021. Bingqing Liu, Ivan Kiskin, and Stephen Roberts. An overview of Gaussian process regression for volatility forecasting. In *International Conference on Artificial Intelligence in Information and Communication* (ICAIIC), pp. 681–686, 2020. Chen Luo and Shiliang Sun. Variational mixtures of Gaussian processes for classification. In International Joint Conferences on Artificial Intelligence (IJCAI), volume 357, pp. 
4603–4609, 2017. Chao Ma, Yingzhen Li, and José Miguel Hernández-Lobato. Variational implicit processes. In International Conference on Machine Learning (ICML), pp. 4222–4233, 2019. Benoit Mandelbrot. The variation of certain speculative prices. *The Journal of Business*, 36(4):394–419, 1963. Juan Maroñas, Oliver Hamelijnck, Jeremias Knoblauch, and Theodoros Damoulas. Transforming gaussian processes with normalizing flows. In Arindam Banerjee and Kenji Fukumizu (eds.), *International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2021. Véronique Maume-Deschamps, Didier Rullière, and Antoine Usseglio-Carleve. Quantile predictions for elliptical random fields. *Journal of Multivariate Analysis*, 159:1–17, 2017. Radford M Neal. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Technical Report 9702, Department of Statistics, University of Toronto, 1997. Trung V Nguyen and Edwin V Bonilla. Automated variational inference for Gaussian process models. Advances in Neural Information Processing Systems (NeurIPS), 27, 2014. Sebastian W Ober and Laurence Aitchison. Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes. In *International Conference on Machine Learning (ICML)*, pp. 8248–8259, 2021. Anthony O'Hagan. Bayes–Hermite quadrature. *Journal of statistical planning and inference*, 29(3):245–260, 1991. Anthony O'Hagan, Marc C Kennedy, and Jeremy E Oakley. Uncertainty analysis and other inference tools for complex computer codes. In *Bayesian statistics 6*, pp. 503–524. Oxford University Press, 1999. Thomas Opitz. Modeling asymptotically independent spatial extremes based on Laplace random fields. Spatial Statistics, 16:1–18, 2016. R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. *Statistics & Probability Letters*, 33(3): 291–297, 1997. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. *Journal of Machine Learning* Research, 22(57):1–64, 2021. Rajesh Ranganath, Sean Gerrish, and David Blei. Black Box Variational Inference. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 814–822, 2014. Carl Edward Rasmussen and Christopher K I Williams. *Gaussian processes for machine learning*. The MIT Press, 2006. p. 194. Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International Conference on Machine Learning (ICML)*, pp. 1530–1538, 2015. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning (ICML)*, pp. 1278– 1286, 2014. Gonzalo Rios and Felipe Tobar. Compositionally-warped Gaussian processes. *Neural Networks*, 118:235–246, 2019. Ana Diaz Rivero and Cora Dvorkin. Flow-based likelihoods for non-Gaussian inference. *Physical Review D*, 102(10):103507, 2020. Hugh Salimbeni, Ching-An Cheng, Byron Boots, and Marc Deisenroth. Orthogonally decoupled variational Gaussian processes. *Advances in neural information processing systems (NeurIPS)*, 31, 2018. Hugh Salimbeni, Vincent Dutordoir, James Hensman, and Marc Deisenroth. Deep Gaussian processes with importance-weighted variational inference. In *International Conference on Machine Learning (ICML)*, pp. 5589–5598, 2019. Georg Schildbach, Lorenzo Fagiano, Christoph Frei, and Manfred Morari. 
The scenario approach for stochastic model predictive control with bounds on closed-loop constraint violations. *Automatica*, 50(12):3009– 3018, 2014. Amar Shah, Andrew Wilson, and Zoubin Ghahramani. Student-t processes as alternatives to Gaussian processes. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 877–885, 2014. Jack W Smith, James E Everhart, WC Dickson, William C Knowler, and Robert Scott Johannes. Using the adap learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the annual symposium on computer application in medical care, pp. 261. American Medical Informatics Association, 1988. Edward Snelson, Zoubin Ghahramani, and Carl Rasmussen. Warped gaussian processes. In *Advances in* Neural Information Processing Systems (NeurIPS), volume 16, 2003. Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In *International* Conference on Artificial Intelligence and Statistics (AISTATS), pp. 567–574, 2009. Dustin Tran, Rajesh Ranganath, and David M Blei. Variational Gaussian process. In *International Conference on Learning Representations (ICLR)*, 2016. Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends® *in Machine Learning*, 1(1–2):1–305, 2008. Ke Wang, Geoff Pleiss, Jacob Gardner, Stephen Tyree, Kilian Q Weinberger, and Andrew Gordon Wilson. Exact gaussian processes on a million data points. In *Advances in Neural Information Processing Systems* (NeurIPS), volume 32, 2019. Frauke Wiese, Ingmar Schlecht, Wolf-Dieter Bunke, Clemens Gerbaulet, Lion Hirth, Martin Jahn, Friedrich Kunz, Casimir Lorenz, Jonathan Mühlenpfordt, Juliane Reimann, et al. Open power system data– frictionless data for electricity system modelling. *Applied Energy*, 236:401–409, 2019. Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 370–378. PMLR, 2016a. Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016b. David Wingate and Theophane Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013. Christina Winkler, Daniel Worrall, Emiel Hoogeboom, and Max Welling. Learning likelihoods with conditional normalizing flows. *arXiv preprint arXiv:1912.00042*, 2019. I-C Yeh. Modeling of strength of high-performance concrete using artificial neural networks. *Cement and* Concrete research, 28(12):1797–1808, 1998. Zheng Zhao, Muhammad Emzir, and Simo Särkkä. Deep state-space Gaussian processes. Statistics and Computing, 31:1–26, 2021. Abdelhak M Zoubir, Visa Koivunen, Yacine Chakhchoukh, and Michael Muma. Robust estimation in signal processing: A tutorial-style treatment of fundamental concepts. *IEEE Signal Processing Magazine*, 29(4): 61–80, 2012. ## A The Elliptical Distribution The Gaussian distribution—the basic building block of Gaussian processes—has several attractive properties that we wish the elliptical process to inherit, namely (i) closure under marginalization, (ii) closure under conditioning, and (iii) straightforward sampling. This leads us to consider the family of *consistent* elliptical distributions. 
Following Kano (1994), we say that a family of elliptical distributions $\{p(u(\mathbf{y}_N);\,\boldsymbol{\eta})\,|\,N\in\mathbb{N}\}$ is consistent if and only if

$$\int_{-\infty}^{\infty}p\left(u(\mathbf{y}_{N+1});\,\boldsymbol{\eta}\right)dy_{N+1}=p\left(u(\mathbf{y}_{N});\,\boldsymbol{\eta}\right).\tag{21}$$

In other words, a consistent elliptical distribution is closed under marginalization. Far from all elliptical distributions are consistent, but the complete characterization of those that are is provided by the following theorem (Kano, 1994).

**Theorem 1** *An elliptical distribution is consistent if and only if it originates from the integral*

$$p(u;\,\boldsymbol{\eta})=|\boldsymbol{\Sigma}|^{-\frac{1}{2}}\int_{0}^{\infty}\left(\frac{1}{2\pi\xi}\right)^{\frac{N}{2}}e^{-\frac{u}{2\xi}}\,p(\xi;\,\eta_{\xi})\,d\xi,\tag{22}$$

*where* ξ *is a mixing variable with a corresponding strictly positive and finite mixing distribution* p(ξ; η_ξ) *that is independent of* N.

This shows that consistent elliptical distributions p(u; η) are scale-mixtures of Gaussian distributions, with a mixing variable ξ ∼ p(ξ; η). Note that any mixing distribution fulfilling Theorem 1 can be used to define a consistent elliptical process. We recover the Gaussian distribution if the mixing distribution is a Dirac delta function and the Student's t distribution if it is a scaled inverse chi-square distribution.

If p(u; η) is a scale-mixture of normal distributions, it has the stochastic representation

$$\mathbf{Y}\,|\,\xi\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma}\xi),\quad\xi\sim p(\xi;\,\boldsymbol{\eta}).\tag{23}$$

By using the following representation of the elliptical distribution,

$$\mathbf{Y}=\boldsymbol{\mu}+\boldsymbol{\Sigma}^{1/2}\mathbf{Z}\,\xi^{1/2},\tag{24}$$

where Z follows the standard normal distribution, we get the mean

$$\mathbb{E}[\mathbf{Y}]=\boldsymbol{\mu}+\boldsymbol{\Sigma}^{1/2}\,\mathbb{E}[\mathbf{Z}]\,\mathbb{E}[\xi^{1/2}]=\boldsymbol{\mu}\tag{25}$$

and the covariance

$$\begin{aligned}\operatorname{Cov}(\mathbf{Y})&=\mathbb{E}\left[(\mathbf{Y}-\boldsymbol{\mu})(\mathbf{Y}-\boldsymbol{\mu})^{\top}\right]\\&=\mathbb{E}\left[(\boldsymbol{\Sigma}^{1/2}\mathbf{Z}\sqrt{\xi})(\boldsymbol{\Sigma}^{1/2}\mathbf{Z}\sqrt{\xi})^{\top}\right]\\&=\mathbb{E}\left[\xi\,\boldsymbol{\Sigma}^{1/2}\mathbf{Z}\mathbf{Z}^{\top}(\boldsymbol{\Sigma}^{1/2})^{\top}\right]\\&=\mathbb{E}[\xi]\,\boldsymbol{\Sigma}.\end{aligned}\tag{26}$$

The covariance is thus a scaled version of the scale matrix Σ, and to obtain it we have to derive E[ξ]. Note that if ξ follows the scaled inverse chi-square distribution, E[ξ] = ν/(ν − 2). We recognize this from the Student's t distribution, where Cov(Y) = ν/(ν − 2)Σ.
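As a quick numerical illustration of the stochastic representation in Equation (24), the following self-contained sketch (with an arbitrarily chosen scale matrix and a scaled inverse chi-square mixing variable, i.e., the Student-t case; not part of the paper's experiments) checks that the sample mean and covariance match Equations (25) and (26):

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n_samples = 5.0, 200_000
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
L = np.linalg.cholesky(Sigma)  # plays the role of Sigma^{1/2}

# Scaled inverse chi-square mixing variable: xi = nu / chi2(nu), so
# E[xi] = nu / (nu - 2); this choice recovers a Student-t with nu dofs.
xi = nu / rng.chisquare(nu, size=n_samples)
Z = rng.standard_normal((n_samples, 2))
Y = mu + (Z @ L.T) * np.sqrt(xi)[:, None]  # Y = mu + Sigma^{1/2} Z xi^{1/2}

print(Y.mean(axis=0))                 # approximately mu, Equation (25)
print(np.cov(Y.T) / (nu / (nu - 2)))  # approximately Sigma, since Cov(Y) = E[xi] Sigma
```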
We have the following result:

**Proposition 1** *If the data* y = [y1, y2] *originate from the consistent elliptical distribution in Equation (3), the conditional distribution originates from the distribution*

$$p_{\mathbf{y}_2|u_1}(\mathbf{y}_2)=\frac{c_{N_1,\eta}}{|\boldsymbol{\Sigma}_{22|1}|^{\frac{1}{2}}(2\pi)^{\frac{N_2}{2}}}\int_{0}^{\infty}\xi^{-\frac{N}{2}}\,e^{-(u_{2|1}+u_1)\frac{1}{2\xi}}\,p(\xi;\,\boldsymbol{\eta})\,d\xi\tag{27}$$

*with the conditional mean* E[y2|y1] = µ2|1 *and the conditional covariance*

$$\operatorname{Cov}[\mathbf{Y}_{2}|\mathbf{Y}_{1}=\mathbf{y}_{1}]=\mathbb{E}[\hat{\xi}]\,\boldsymbol{\Sigma}_{22|1},\quad\hat{\xi}\sim\xi|\mathbf{y}_{1},\tag{28}$$

*where* $u_1=(\mathbf{y}_1-\boldsymbol{\mu}_1)^{\top}\boldsymbol{\Sigma}_{11}^{-1}(\mathbf{y}_1-\boldsymbol{\mu}_1)$, $u_{2|1}=(\mathbf{y}_2-\boldsymbol{\mu}_{2|1})^{\top}\boldsymbol{\Sigma}_{22|1}^{-1}(\mathbf{y}_2-\boldsymbol{\mu}_{2|1})$, *and* $c_{N_1,\eta}$ *is a normalization constant. The conditional mean vector* µ2|1 *and the conditional scale matrix* Σ22|1 *are the same as the conditional mean and covariance matrix of a Gaussian distribution.*

The conditional distribution is guaranteed to be a consistent elliptical distribution but not necessarily the same as the original one; the shape depends on the training samples. (Recall that consistency only concerns the marginal distribution.)

**Proof of Proposition 1.** The joint distribution of [y1, y2] is p(y1, y2|ξ)p(ξ; η) and the conditional distribution of y2 given y1 is p(y2|y1, ξ)p(ξ|y1; η). For a given ξ, p(y2|y1, ξ) is the conditional normal distribution, and so

$$p(\mathbf{y}_{2}|\mathbf{y}_{1},\xi)\sim\mathcal{N}(\boldsymbol{\mu}_{2|1},\boldsymbol{\Sigma}_{22|1}\hat{\xi}),\quad\hat{\xi}\sim p(\xi|\mathbf{y}_{1};\,\boldsymbol{\eta}),\tag{29}$$

where

$$\boldsymbol{\mu}_{2|1}=\boldsymbol{\mu}_{2}+\boldsymbol{\Sigma}_{21}\boldsymbol{\Sigma}_{11}^{-1}(\mathbf{y}_{1}-\boldsymbol{\mu}_{1})\tag{30}$$
$$\boldsymbol{\Sigma}_{22|1}=\boldsymbol{\Sigma}_{22}-\boldsymbol{\Sigma}_{21}\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12},\tag{31}$$

the same as for the conditional Gaussian distribution. We obtain the conditional distribution p(ξ|y1; η) by remembering that

$$p(\mathbf{y}_{1}|\xi)\sim\mathcal{N}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{11}\xi).\tag{32}$$

Using Bayes' theorem, we get

$$\begin{aligned}p(\xi|\mathbf{y}_{1};\,\boldsymbol{\eta})&\propto p(\mathbf{y}_{1}|\xi)\,p(\xi;\,\boldsymbol{\eta})\\&\propto|\boldsymbol{\Sigma}_{11}\xi|^{-1/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\boldsymbol{\eta})\\&\propto\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\boldsymbol{\eta}).\end{aligned}\tag{33}$$

Recall that $u_1=(\mathbf{y}_1-\boldsymbol{\mu}_1)^{\top}\boldsymbol{\Sigma}_{11}^{-1}(\mathbf{y}_1-\boldsymbol{\mu}_1)$. We normalize the distribution by

$$c_{N_{1},\eta}^{-1}=\int_{0}^{\infty}\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\boldsymbol{\eta})\,d\xi.\tag{34}$$

The conditional mixing distribution is

$$p(\xi|\mathbf{y}_{1};\,\boldsymbol{\eta})=c_{N_{1},\eta}\,\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\boldsymbol{\eta}).\tag{35}$$

The conditional distribution of y2 given y1 is derived by using the consistency formula

$$p(\mathbf{y}_{2}|\mathbf{y}_{1})=\frac{1}{|\boldsymbol{\Sigma}_{22|1}|^{1/2}(2\pi)^{N_{2}/2}}\int_{0}^{\infty}\xi^{-N_{2}/2}\exp\left\{-\frac{u_{2|1}}{2\xi}\right\}p(\xi|\mathbf{y}_{1})\,d\xi,\tag{36}$$

where $u_{2|1}=(\mathbf{y}_2-\boldsymbol{\mu}_{2|1})^{\top}\boldsymbol{\Sigma}_{22|1}^{-1}(\mathbf{y}_2-\boldsymbol{\mu}_{2|1})$. Using Equation (35), we get

$$p(\mathbf{y}_{2}|\mathbf{y}_{1})=\frac{c_{N_{1},\eta}}{|\boldsymbol{\Sigma}_{22|1}|^{1/2}(2\pi)^{N_{2}/2}}\int_{0}^{\infty}\xi^{-N/2}e^{-(u_{2|1}+u_{1})/(2\xi)}\,p(\xi;\,\boldsymbol{\eta})\,d\xi.\tag{37}$$
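The following is a small numerical sanity check of Proposition 1 for the Student-t case (our own toy setup, not part of the paper's experiments): we evaluate the conditional mixing distribution of Equation (35) by quadrature and compare the resulting E[ξ̂] with the closed form (ν + u1)/(ν + N1 − 2) known for the Student's t process.

```python
import numpy as np

nu, N1 = 5.0, 3
u1 = 2.4  # an arbitrary value of the observed quadratic form

xi = np.linspace(1e-3, 200.0, 400_000)
dx = xi[1] - xi[0]
# Scaled inverse chi-square prior (unit scale): p(xi) ∝ xi^{-(nu/2+1)} e^{-nu/(2 xi)}
prior = xi ** (-(nu / 2 + 1)) * np.exp(-nu / (2 * xi))
# Equation (35), unnormalized, then normalized numerically
post = xi ** (-N1 / 2) * np.exp(-u1 / (2 * xi)) * prior
post /= post.sum() * dx

E_xi = (xi * post).sum() * dx
print(E_xi)                      # quadrature estimate of E[xi | y1]
print((nu + u1) / (nu + N1 - 2))  # closed form for the Student-t case
```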
## C Derivation Of The Credible Intervals Of The Elliptical Process

We derive the credible interval of the elliptical process by using the Monte Carlo approximation of the integral, as

$$\begin{aligned}p(-z\sigma<x<z\sigma)&=\frac{1}{\sigma\sqrt{2\pi}}\int_{-z\sigma}^{z\sigma}\int_{0}^{\infty}\xi^{-1/2}e^{-x^{2}/(2\xi\sigma^{2})}\,p(\xi)\,d\xi\,dx&&(38)\\&=\frac{1}{\sigma\sqrt{2\pi}}\int_{-z\sigma}^{z\sigma}\frac{1}{m}\sum_{i=1}^{m}\xi_{i}^{-1/2}e^{-x^{2}/(2\xi_{i}\sigma^{2})}\,dx&&(39)\\&=\frac{1}{\sigma m\sqrt{2\pi}}\sum_{i=1}^{m}\xi_{i}^{-1/2}\int_{-z\sigma}^{z\sigma}e^{-x^{2}/(2\xi_{i}\sigma^{2})}\,dx&&(40)\\&=\frac{2}{m\sqrt{\pi}}\sum_{i=1}^{m}\int_{0}^{\frac{z}{\sqrt{2\xi_{i}}}}e^{-u^{2}}\,du&&(41)\\&=\frac{1}{m}\sum_{i=1}^{m}\operatorname{erf}\left(\frac{z}{\sqrt{2\xi_{i}}}\right).&&(42)\end{aligned}$$

For every mixing distribution, we can derive the credibility of the prediction. It is the number of samples m we take that decides the accuracy of the credible interval.
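The estimator in Equation (42) is straightforward to implement; a minimal sketch (assuming Monte Carlo draws of the mixing variable are available) is:

```python
import numpy as np
from scipy.special import erf

# Estimate p(-z*sigma < x < z*sigma) for an elliptical marginal, given
# Monte Carlo samples xi_samples of the mixing variable (Equation (42)).
def credible_mass(z, xi_samples):
    return np.mean(erf(z / np.sqrt(2.0 * xi_samples)))

# With xi fixed at 1 this reduces to the Gaussian value, e.g. ~0.954 for z = 2.
print(credible_mass(2.0, np.ones(10)))
```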
## D Details On The Non-Sparse Variational Elliptical Process

For a Gaussian process, the posterior of the latent variables f is

$$p(\mathbf{f}|\mathbf{y})\propto p(\mathbf{y}|\mathbf{f})\,p(\mathbf{f}).\tag{43}$$

Here, the prior p(f|X) ∼ N(0, K) is a Gaussian process with kernel K and the likelihood p(y|f) ∼ N(f, σ²I) is Gaussian. The posterior is then

$$p(\mathbf{f}|\mathbf{y})\sim\mathcal{N}\left(\mathbf{f}\,\middle|\,\mathbf{K}\left(\mathbf{K}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y},\ \left(\mathbf{K}^{-1}+\sigma^{-2}\mathbf{I}\right)^{-1}\right)\tag{44}$$

and the predictive distribution at an arbitrary input location x∗ is

$$p(f^{*}|\mathbf{y})=\int p(f^{*}|\mathbf{f})\,p(\mathbf{f}|\mathbf{y})\,d\mathbf{f},\tag{45}$$

where p(f∗|f, x, x∗) is the conditional distribution, which is again Gaussian with

$$\mathcal{N}\left(f^{*}\,\middle|\,\mathbf{k}_{*}^{\top}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{y},\ k_{**}-\mathbf{k}_{*}^{\top}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{k}_{*}\right).\tag{46}$$

Going back to the elliptical process, we want to derive the predictive distribution. The problem, though, is that the posterior is now intractable. To obtain a tractable posterior, we train the model using variational inference, where we approximate the intractable posterior with a tractable one,

$$p(\mathbf{f},\xi|\mathbf{y};\,\boldsymbol{\eta}_{\mathbf{f}},\eta_{\xi})\approx q(\mathbf{f},\xi;\,\boldsymbol{\varphi}_{\mathbf{f}},\varphi_{\xi})=q(\mathbf{f}|\xi;\,\boldsymbol{\varphi}_{\mathbf{f}})\,q(\xi;\,\varphi_{\xi}).\tag{47}$$

Here ηf are the parameters of the prior GP process (such as the kernel parameters), ηξ are the parameters of the mixing distribution, and q(f|ξ; φf) ∼ N(mf, Sf ξ), where mf and Sf are variational parameters. The posterior q(ξ; φξ) is parameterized with any positive distribution, such as a normalizing flow.

We use this approximation when we derive the predictive distribution

$$p(f^{*}|\mathbf{y})=\int p(f^{*}|\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,p(\mathbf{f},\xi|\mathbf{y};\,\boldsymbol{\eta}_{\mathbf{f}},\eta_{\xi})\,d\mathbf{f}\,d\xi\tag{48}$$
$$\approx\int p(f^{*}|\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,q(\mathbf{f}|\xi;\,\boldsymbol{\varphi}_{\mathbf{f}})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\xi.\tag{50}$$

By first taking a look at the prior distribution p(f∗, f|ξ) when ξ is constant,

$$\begin{bmatrix}f^{*}\\ \mathbf{f}\end{bmatrix}\Bigg|\,\xi\sim\mathcal{N}\left(0,\begin{bmatrix}k_{**}&\mathbf{k}_{*}^{\top}\\ \mathbf{k}_{*}&\mathbf{K}\end{bmatrix}\xi\right),\tag{51}$$

we arrive at the conditional distribution

$$p(f^{*}|\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})=\mathcal{N}\left(\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{f},\ \left(k_{**}-\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{k}_{*}\right)\xi\right).\tag{52}$$

We use this expression together with the variational approximation to derive the posterior predictive distribution

$$\begin{aligned}p(f^{*}|\mathbf{y})&=\int p(f^{*}|\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,q(\mathbf{f}|\xi;\,\boldsymbol{\varphi}_{\mathbf{f}})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\xi&&(53)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\int p(f^{*}|\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,q(\mathbf{f}|\xi;\,\boldsymbol{\varphi}_{\mathbf{f}})\,d\mathbf{f}\right]&&(54)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\int\mathcal{N}\left(f^{*}\,\middle|\,\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{f},\ (k_{**}-\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{k}_{*})\xi\right)\mathcal{N}(\mathbf{f}|\mathbf{m},\mathbf{S}\xi)\,d\mathbf{f}\right]&&(55)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\mathcal{N}\left(f^{*}\,\middle|\,\mu_{f}(\mathbf{x}^{*}),\ \sigma_{f}(\mathbf{x}^{*})\xi\right)\right],&&(56)\end{aligned}$$

where

$$\mu_{f}(\mathbf{x}^{*})=\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{m},\tag{57}$$
$$\sigma_{f}(\mathbf{x}^{*})=k_{**}-\mathbf{k}_{*}^{\top}\left(\mathbf{K}^{-1}-\mathbf{K}^{-1}\mathbf{S}\mathbf{K}^{-1}\right)\mathbf{k}_{*}.\tag{58}$$

We obtain the variance as E[ξ]σf(x∗).

**Optimizing the ELBO.** We train the model by optimizing the evidence lower bound (ELBO), given by

$$\mathcal{L}_{\mathrm{ELBO}}(\boldsymbol{\eta}_{\mathbf{f}},\eta_{\xi},\boldsymbol{\varphi}_{\mathbf{f}},\varphi_{\xi})=\mathbb{E}_{q(\mathbf{f},\xi;\,\boldsymbol{\varphi})}\left[\log p(\mathbf{y}|\mathbf{f})\right]-D_{\mathrm{KL}}\left(q(\mathbf{f},\xi;\,\boldsymbol{\varphi}_{\mathbf{f}},\varphi_{\xi})\,\|\,p(\mathbf{f},\xi;\,\boldsymbol{\eta}_{\mathbf{f}},\eta_{\xi})\right).\tag{59}$$
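A single-sample Monte Carlo estimator of this ELBO can be sketched as follows (an illustrative interface of our own; the distribution objects and log-density callables are assumed to be supplied by the model):

```python
# Sketch of a one-sample ELBO estimate for Equation (59). q_xi and the
# distribution returned by q_f_given_xi must support reparameterized
# sampling; log_lik, log_prior, and log_q evaluate log p(y|f),
# log p(f, xi), and log q(f, xi), respectively (assumed helpers).
def elbo_estimate(q_xi, q_f_given_xi, log_lik, log_prior, log_q):
    xi = q_xi.rsample()             # xi ~ q(xi; phi_xi)
    f = q_f_given_xi(xi).rsample()  # f ~ q(f | xi; phi_f) = N(m_f, S_f xi)
    # E_q[log p(y|f)] - KL(q(f, xi) || p(f, xi)), both estimated by Monte Carlo
    return log_lik(f) + log_prior(f, xi) - log_q(f, xi)
```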
## E Details On Sparse Elliptical Processes

With the variational inference framework, we create a sparse version of the model

$$\int p(\mathbf{f},\mathbf{u},\xi;\,\boldsymbol{\eta})\,d\xi=\int p(\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,p(\mathbf{u}|\xi;\,\boldsymbol{\eta}_{\mathbf{u}})\,p(\xi;\,\eta_{\xi})\,d\xi,\tag{60}$$

where u are outputs of the elliptical process located at the inducing inputs Z. We approximate the posterior with

$$p(\mathbf{f},\mathbf{u},\xi|\mathbf{y};\,\boldsymbol{\eta})\approx p(\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,q(\mathbf{u}|\xi;\,\boldsymbol{\varphi}_{\mathbf{u}})\,q(\xi;\,\varphi_{\xi}).\tag{61}$$

The posterior predictive distribution is then given by

$$\begin{aligned}p(f^{*}|\mathbf{y})&=\int p(f^{*}|\mathbf{f},\mathbf{u},\xi;\,\boldsymbol{\eta})\,p(\mathbf{f},\mathbf{u},\xi|\mathbf{y};\,\boldsymbol{\eta})\,d\mathbf{f}\,d\mathbf{u}\,d\xi\\&\approx\int p(f^{*}|\mathbf{f},\mathbf{u},\xi;\,\boldsymbol{\eta})\,p(\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,q(\mathbf{u}|\xi;\,\boldsymbol{\varphi}_{\mathbf{u}})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\mathbf{u}\,d\xi\\&=\int\left[\int p(f^{*}|\mathbf{f},\mathbf{u},\xi;\,\boldsymbol{\eta})\,p(\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta}_{\mathbf{f}})\,d\mathbf{f}\right]q(\mathbf{u}|\xi;\,\boldsymbol{\varphi}_{\mathbf{u}})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{u}\,d\xi.\end{aligned}\tag{62}$$

We can simplify the inner expression by using the fact that the elliptical distribution is consistent,

$$\int p(f^{*}|\mathbf{f},\mathbf{u},\xi;\,\boldsymbol{\eta})\,p(\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta})\,d\mathbf{f}=\int p(f^{*},\mathbf{f}|\mathbf{u},\xi;\,\boldsymbol{\eta})\,d\mathbf{f}=p(f^{*}|\mathbf{u},\xi;\,\boldsymbol{\eta}).\tag{63}$$

Hence, Equation (62) simplifies to

$$p(f^{*}|\mathbf{y})=\int p(f^{*}|\mathbf{u},\xi;\,\boldsymbol{\eta})\,q(\mathbf{u}|\xi;\,\boldsymbol{\varphi}_{\mathbf{u}})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{u}\,d\xi,\tag{64}$$

where q(u|ξ; φu) = N(mu, Su ξ) with the variational parameters mu and Su, and ξ is parameterized, e.g., by a normalizing flow. Finally, we obtain the posterior $p(f^{*}|\mathbf{x}^{*})=\mathbb{E}_{q(\xi;\varphi_{\xi})}[\mathcal{N}(f^{*}|\mu_{f}(\mathbf{x}^{*}),\,\xi\sigma_{f}(\mathbf{x}^{*}))]$, where

$$\mu_{f}(\mathbf{x}^{*})=\mathbf{k}_{*}^{\top}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{m}\tag{65}$$
$$\sigma_{f}(\mathbf{x}^{*})=k_{**}-\mathbf{k}_{*}^{\top}\left(\mathbf{K}_{\mathbf{uu}}^{-1}-\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{S}\mathbf{K}_{\mathbf{uu}}^{-1}\right)\mathbf{k}_{*}.\tag{66}$$

Here, k∗ = k(x∗, Z), k∗∗ = k(x∗, x∗), and Kuu = k(Z, Z).

## F Implementation: Variational Inference

We used the Pyro library (Bingham et al., 2019), a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. In Pyro, we trained a model with variational inference (Kingma & Welling, 2014) by creating "stochastic functions" called a **model** and a **guide**, where the **model** samples from the prior latent distributions p(f, ξ, ω; η) and the observed distribution p(y|f, ω), and the **guide** samples the approximate posterior q(f|ξ; φf)q(ξ; φξ). We then trained the model by maximizing the evidence lower bound (ELBO), where we simultaneously optimized the model parameters η and the variational parameters φ.

To implement the model in Pyro, we created the guide and the model (see Algorithm 1) by building upon the already implemented variational Gaussian process. We used the guide and the model to derive the ELBO, which we then optimized with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2015).

**Algorithm 1** Pyro implementation of the variational sparse elliptical process (VI-EP-EP).

1: **procedure** MODEL(X, y)
2: Sample ξ from p(ξ; ηξ) (normalizing flow)
3: Sample u from N(0, ξKuu) ▷ Take a sample of the latent u and ξ
4: Derive the variational posterior $\prod_{i=1}^{N}q(f_{i}|\xi;\,\varphi)$ with $q(f_{i}|\xi;\,\varphi)=\mathcal{N}(\mu_{f}(x_{i}),\sigma_{f}(x_{i})\xi)$. ▷ During training, ξ is sampled from the posterior/guide.
5: Take a Monte Carlo sample $\hat{f}_{i}$ from each $q(f_{i}|\xi;\,\varphi)$
6: For each $y_{i}$, approximate $\ell_{y_{i}}=\log\int\mathcal{N}(y_{i}|f_{i},\omega)\,p(\omega;\,\eta_{\omega})\,d\omega$ using the trapezoid rule.
7: Get the log probability of y by $\sum_{i=1}^{N}\ell_{y_{i}}$.
8: **end procedure**
9: **procedure** GUIDE
10: Sample ξ from q(ξ; φξ) (normalizing flow)
11: Sample u from N(m, Sξ)
12: **end procedure**
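To make Algorithm 1 more concrete, the following condensed sketch shows what the model/guide pair might look like in Pyro. This is not the authors' code: `p_xi_prior`, `q_xi_posterior`, `predictive_moments`, and `log_marginal_y` are hypothetical helpers standing in for the flow-based mixing distributions, the conditional moments in line 4, and the trapezoid-rule likelihood in line 6; the positive-definiteness constraint on S is omitted.

```python
import pyro
import pyro.distributions as dist
import torch

def model(X, y, Kuu):
    xi = pyro.sample("xi", p_xi_prior)  # mixing variable from the flow prior
    u = pyro.sample("u", dist.MultivariateNormal(
        torch.zeros(len(Kuu)), covariance_matrix=xi * Kuu))
    # Conditional moments at the inputs (would use u and the kernel matrices)
    mu_f, sigma_f = predictive_moments(X, u, Kuu)
    f = pyro.sample("f", dist.Normal(mu_f, (sigma_f * xi).sqrt()).to_event(1))
    # Trapezoid-rule marginal likelihood over the noise mixing variable omega
    pyro.factor("log_p_y", log_marginal_y(y, f))

def guide(X, y, Kuu):
    xi = pyro.sample("xi", q_xi_posterior)  # mixing variable from the flow posterior
    m = pyro.param("m", torch.zeros(len(Kuu)))
    S = pyro.param("S", torch.eye(len(Kuu)))
    pyro.sample("u", dist.MultivariateNormal(m, covariance_matrix=xi * S))
```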
## G Regression Experiment Setup

In the regression experiments in Section 4.3, we ran all experiments using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01. For the full GP, we used the L-BFGS optimizer to train the hyperparameters. Here, as for the other models, we used early stopping on a validation dataset, which operated by saving the model with the best validation log predictive likelihood. For all experiments, we created ten random train/val/test splits with the proportions 0.6/0.2/0.2, except for the two smallest datasets (MPG and Concrete), where we omitted the validation dataset and used a train/test proportion of 0.7/0.3. For the test set evaluation, we used the model with the highest predictive probability on the validation set. For the large datasets (N > 1000), we used 500 inducing points. We did not use a sparse version of the model for the small datasets but instead set Z = X*train* and kept the inducing inputs fixed during training. We ran the optimizer for 500 epochs on the large datasets and 2000 epochs on the small datasets.

**Elliptical process setup.** The likelihood mixing distribution uses a spline flow with nine bins and *Softplus* as its output transformation. The elliptical posterior mixing distribution uses a spline flow with five bins and a *Sigmoid* output transformation. These parameters were not tuned and were fixed for all experiments. For the heteroscedastic noise models, we used two-layer neural networks with hidden dimensions of 128. For the elliptical noise, we learned a spline flow with nine bins, which results in 26 hyperparameters to learn, while for the heteroscedastic Gaussian likelihood we learned only the variance.
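As a rough illustration of this setup, a strictly positive spline-flow mixing distribution along these lines can be constructed in Pyro as follows (a minimal sketch under our assumptions about the one-dimensional base distribution and transform composition):

```python
import torch
import pyro.distributions as dist
from pyro.distributions.transforms import Spline
from torch.distributions.transforms import SoftplusTransform

# Trainable spline flow with nine bins, pushed through a Softplus so that
# samples of the mixing variable are strictly positive (cf. Appendix G).
base = dist.Normal(torch.zeros(1), torch.ones(1))
spline = Spline(1, count_bins=9)  # holds the trainable flow parameters
p_xi = dist.TransformedDistribution(base, [spline, SoftplusTransform()])

xi = p_xi.rsample()        # a positive sample of the mixing variable
log_p = p_xi.log_prob(xi)  # density, usable inside the ELBO
```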
## H Results

The regression results from Figure 7 are presented in Tables 2 and 3. Figure 11 presents the outcome for different numbers of inducing points. We see that the results have stabilized for almost all datasets at 500 inducing points. We also notice that the relative log-likelihood between the models stays constant after 400-500 inducing points.

Table 2: Predictive mean squared error (MSE) on the hold-out test sets from the experiments. We show the average of the ten random splits and one standard deviation in parentheses.

| Model    | MPG           | CONCRETE      | ELEVATORS     | BIKE          | CALIFORNIA    | KIN40K        | PROTEIN       |
|----------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| Het-EP   | 0.144 (0.018) | 0.127 (0.014) | 0.149 (0.005) | 0.018 (0.002) | 0.223 (0.012) | 0.141 (0.008) | 0.531 (0.009) |
| Het-GP   | 0.142 (0.020) | 0.178 (0.022) | 0.148 (0.005) | 0.020 (0.002) | 0.230 (0.011) | 0.112 (0.007) | 0.508 (0.007) |
| EP-EP    | 0.122 (0.027) | 0.176 (0.022) | 0.145 (0.005) | 0.007 (0.001) | 0.223 (0.012) | 0.042 (0.002) | 0.433 (0.008) |
| EP-GP    | 0.121 (0.026) | 0.128 (0.013) | 0.145 (0.005) | 0.011 (0.001) | 0.226 (0.013) | 0.056 (0.003) | 0.481 (0.007) |
| SVGP     | 0.122 (0.018) | 0.128 (0.011) | 0.143 (0.004) | 0.007 (0.001) | 0.219 (0.012) | 0.047 (0.001) | 0.477 (0.007) |
| Exact GP | 0.135 (0.135) | 0.103 (0.103) | 0.134 (0.134) | 0.002 (0.002) | 0.134 (0.134) | 0.006 (0.000) | 0.357 (0.006) |

Table 3: Negative log likelihood (Neg LL) on the hold-out test sets from the experiments. We show the average of the ten random splits and one standard deviation in parentheses.

| Model    | MPG           | CONCRETE      | ELEVATORS     | BIKE            | CALIFORNIA    | KIN40K         | PROTEIN       |
|----------|---------------|---------------|---------------|-----------------|---------------|----------------|---------------|
| Het-EP   | 0.463 (0.089) | 0.332 (0.033) | 0.400 (0.018) | -1.327 (0.020)  | 0.506 (0.022) | -0.397 (0.012) | 0.921 (0.017) |
| Het-GP   | 0.530 (0.155) | 0.456 (0.074) | 0.399 (0.017) | -1.532 (0.047)  | 0.376 (0.021) | -0.316 (0.010) | 0.986 (0.013) |
| EP-EP    | 0.266 (0.074) | 0.443 (0.083) | 0.425 (0.015) | -1.406 (0.028)  | 0.506 (0.022) | -0.246 (0.020) | 0.976 (0.008) |
| EP-GP    | 0.268 (0.071) | 0.344 (0.031) | 0.427 (0.014) | -1.304 (0.023)  | 0.515 (0.021) | -0.053 (0.049) | 1.056 (0.006) |
| SVGP     | 0.352 (0.062) | 0.382 (0.030) | 0.446 (0.013) | -0.865 (0.016)  | 0.649 (0.022) | -0.028 (0.004) | 1.056 (0.006) |
| Exact GP | 0.387 (0.387) | 0.206 (0.206) | 0.463 (0.463) | -1.103 (-1.103) | 0.463 (0.463) | -0.166 (0.097) | 0.970 (0.005) |

![24_image_0.png](24_image_0.png)

Figure 11: The train and validation negative log-likelihood for the datasets using a varying number of inducing points.

## I Application: Wind Power Production

In the wind power production experiment, we used an EP with an elliptical likelihood, an elliptical posterior, and 300 inducing points; a heteroscedastic EP with a heteroscedastic elliptical likelihood and an elliptical posterior; and a full GP with a Gaussian likelihood. They all used a kernel that was a sum of a periodic kernel and a linear kernel. Before training, we transformed the data to log scale and then normalized it. See Figure 12 for a visualization of the data before and after the transformation. This transformation made the data more symmetric around its mean and removed the nonnegativity constraint.

Figure 9 in Section 4.4 shows the results when training the data with an EP with an elliptical likelihood. For comparison, we here plot the results when using a full GP (Figure 13) and a heteroscedastic EP (Figure 14).

![25_image_0.png](25_image_0.png)

Figure 12: The data used in the wind power production experiments, where the left plot illustrates the data before it is transformed and the right plot shows the data after it is transformed, which is what we used as input to the model.
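For concreteness, the preprocessing and kernel described above could be set up as in the following sketch (our reconstruction, not the authors' code; we use GPyTorch kernel classes for illustration, and `y_raw` is a stand-in for the raw wind power series):

```python
import torch
import gpytorch

y_raw = torch.rand(100) + 0.1  # stand-in for the strictly positive series

# Strictly positive data -> log domain, then normalization
y_log = torch.log(y_raw)
y = (y_log - y_log.mean()) / y_log.std()

# Sum of a periodic kernel (annual seasonality) and a linear kernel (trend);
# hyperparameter initialization and priors are omitted.
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel()) \
    + gpytorch.kernels.ScaleKernel(gpytorch.kernels.LinearKernel())
```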
The main takeaway is that all three models are able to capture the periodicity in the data. The full GP seems to have a tendency to overfit the data and also to create overly wide credibility regions. This is probably because the data is non-Gaussian, and the thin tails of the GP force the variance to be high. The heteroscedastic EP fits the data well, although the difference from using a non-heteroscedastic likelihood is small. Figure 15 illustrates the mixing distribution during some of the months of the year, where we can clearly see a seasonal change in the noise.

![25_image_1.png](25_image_1.png)

Figure 13: Wind power forecast using an exact GP. The plot shows the predictive posterior, where the shaded areas show the 95 % credibility area of the latent function posterior f ∗ and the noisy posterior y ∗.

![26_image_0.png](26_image_0.png)

Figure 14: The predictive posterior after training the elliptical process with heteroscedastic noise on the wind power dataset. The shaded areas show the 95 % credibility area of the latent function posterior f ∗ and the noisy posterior y ∗.

![26_image_1.png](26_image_1.png)

Figure 15: The mixing distribution of the elliptical heteroscedastic likelihood sampled during different months of the year.

## J Datasets

**Bike dataset** (Fanaee-T & Gama, 2014) is obtained from bike sharing data; specifically, it contains the hourly and daily counts of rental bikes between the years 2011 and 2012, together with the corresponding weather and seasonal information.

**Elevators dataset** (Dua & Graff, 2017) is obtained from the task of controlling an F16 aircraft, and the objective is related to an action taken on the elevators of the aircraft according to the status attributes of the airplane.

**Physicochemical properties of protein tertiary structure dataset.** The dataset is taken from CASP 5-9. There are 45730 decoys with sizes varying from 0 to 21 angstroms.

**California housing dataset** was originally published by Pace & Barry (1997). There are 20 640 samples and 9 feature variables in this dataset. The targets are the prices of houses in the California area.

**The Concrete dataset** (Yeh, 1998) has 8 input variables and 1030 observations. The target variable is the concrete compressive strength.

**Auto MPG dataset** (Alcalá-Fdez et al., 2011) is originally from the StatLib library, which is maintained at Carnegie Mellon University. The data concerns city-cycle fuel consumption in miles per gallon and consists of 392 samples with five features each.

**Pima Indians Diabetes Database** (Smith et al., 1988) is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to predict diagnostically whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. The dataset consists of 768 samples with eight attributes.

**The Cleveland Heart Disease dataset** consists of 13 input variables and 270 samples. The target classifies whether a person is suffering from heart disease or not.

**The Mammography Mass dataset** predicts the severity (benign or malignant) of a mammography mass lesion from BI-RADS attributes and the patient's age. This dataset consists of 961 samples with six attributes.
Review 1: Summary: The paper proposes elliptical processes (EPs), a new family of stochastic processes inspired on GPs (Rasmussen & Williams, 2006) and Student-T processes (SPs) (Shah, ICML 2014). The underlying principle of these new EPs is that they are based on elliptical distributions, which are a more general family of distributions, typically more heavy-tailed, that includes Student-T, Cauchy as well as the Gaussian density. The key point of the elliptical distributions is their density generator, which in the end, depends of a parametric mixing variable \xi. The authors consider this mechanism for both the function prior and the likelihood distribution, as they hope to better capture the uncertainty with the parametric heavy-tailed distribution. For the mixing distribution on the likelihood function, they consider a normalising flow based on splines. Experimental results show some evidence of the performance in regression, classification and heteroscedastic tasks, mainly in the well-known UCI datasets. Strengths and Weaknesses: The paper brings an interesting topic to the audience of GP models, which is the problem of correctly calibrating the uncertainty in probabilistic non-linear regression and classification tasks. The use of more heavy-tailed distributions instead of just Gaussian densities makes sense to me, and authors motivated it better in this new version of the manuscript. The general presentation of the method and the technical details are clear, and easy to follow, apart from some details around the concept of identifiability and the theoretical conditions to be a stochastic process in Section 2.2, which are a bit short and unclear to me. (More details around these two points would be highly appreciated). On the modelling side, I saw the main point of the model, playing around with the parametric mixing distribution to control the elliptical model in both the likelihood and prior functions of the GP (or EP). The application of the proposed model to several datasets and comparison with SOTA methods is also a valuable point that I remark. Requested Changes: Authors followed the requested changes {1,2} in the previous revision (see my previous comments as Reviewer CEkx), which have now improved quite a lot the quality of the manuscript. Additionally, removing the multi-task details made the manuscript much clearer and detailed, allowing the reader to focus on the main contributions and the technical utility of the proposed method. To me, even if some extra experiments have been added, I still have a general concern around the limitation of the empirical results. To give an example, the performance of HetEP, EP-EP and EP-GP is kind of in-between the rest of SOTA metrics in Figure 7. Even if some analysis/conclusions are included in the end of Section 4.3, this is not entirely sufficient to me. I remark that I am not looking for metrics or datasets that indicate a better performance for the proposed model, but I would like to see that there is a clear utility on using the model indicated by some empirical results. > One advise would be to think about scenarios, datasets or problems (i.e. regression) where the use of Gaussians is predominantly worse than other heavy-tailed distributions. The authors indicated "finance" as one of the applications for more heavy-tailed stochastic processes, perhaps that could be a good one. But at least, I would like to see a real example of this effect and how the proposed EP correctly captures the whole uncertainty, improving standard GPs. 
> Other extra (minor) points could be to improve the explanations around the effect of using the elliptical distribution in the likelihood and the function prior, for example. Also the presentation of the sparse/inducing points could be fixed a bit, particularly the use of X_u and x_n, which seemed a bit odd to me. Broader Impact Concerns: I do not detect particular ethical implications of the work. ================================================== Review 2: Summary: The paper proposes the use of the elliptical distribution for a priori modeling and a posteriori approximation of random processes and observations. Noting that a (consistent) elliptical distribution only arises as a scale mixture of Gaussians, the paper proposes modeling elliptical distributions generically using a normalizing flow for mixing distribution (for both a priori and a posteriori cases). A sparse variational solution for the case of elliptical process prior is provided. Synthetic experiments in regression exploring elliptical noise and heteroscedastic noise settings is provided as well as experiments on real regression and classification datasets. Strengths and Weaknesses: Strengths Synthetic evidence : I liked the experiment of Figure 5 where the consequence of utilizing the more straightforward GP w/ Gaussian likelihood results in an overfit latent function with smaller estimated lengthscales. This is among the clearest examples provided for how using an elliptical distribution for noise can benefit modeling. The heteroscedastic experiment in Section 4.2 is also nicely illustrative. Weaknesses Clarity : For the use case where an elliptical distribution is used as observation likelihood, there is either reader confusion in interpreting the joint probability (5) and noise likelihood (6) or typos in these equations : The mixing distribution $p(\omega; \eta_\omega)$ for the likelihood can be defined either per observation or hierarchically as (5) and (6) specify. In the former case, the mixing distribution can be estimated from observations IF the same elliptical distribution is used across observations and IF it happens to be true that the equivalence of two univariate elliptical distributions implies equivalence of their mixing distributions. In the latter (hierarchical) case, the mixing distribution cannot be estimated from observations for the same reason that it cannot be estimated when an elliptical distribution is used to model a random process, i.e., only a single draw of the mixing variable is observed. Figure 3a,c and the experiments of Figure 4 imply the paper is assuming the former model, but the a priori and a posteriori model specifications are for the latter model. For the use case where an elliptical distribution is used within a random process prior, there are some inconsistencies in the exposition that create confusion. In Section 2.2, "Prediction" paragraph, in the statement "The conditional distribution can be derived analytically (see Appendix B), but we will instead solve it by approximating the posterior $p(\xi|y; \eta_\xi)" what does "conditional distribution" refer to? The results of Appendix B refer to a pure elliptical distribution while the statement of Section 2.2 refers to conditioning on observations, i.e., drawn from a pure elliptical distribution plus i.i.d. noise. 
Experimental methodology : The paper does not provide a procedure for how the number of bins in an a priori or a posteriori mixing distribution spine flow should be set and how an output transformation (softplus, sigmoid, arctan) after the flow should be specified. In other words, it should be clear to a reader whether or not the settings described in appendix G were tuned for the experiments (preferably, not tuned). Similarly, for implementing a heteroscedastic elliptical distribution noise model, how would one select an architecture for the input-varying network model $g_{\gamma_\omega}( )$ of Section 3.3 in a novel problem? Regression experiments : The EP-EP results w.r.t. MSE plotted in Figure 7 are worse than the exact GP results in 5/7 datasets and not very different in the remaining 2. The claim that "...log-likelihood is more critical than MSE because mean-squared error (MSE) does not consider the predictive variance" is not unconditionally true. One would have to show that, for a given value of MSE, no value of variance could yield a given log likelihood. In other words, both metrics generally provide information. In the 5/7 cases where the exact GP has worse log likelihood, it could be argued that the exact GP is somehow poorly calibrated. The EP-GP method does not seem to perform any better than the EP-EP method across the metrics and datasets. The heteroscedastic EP method appears to be trading log likelihood for MSE even more than the vanilla EP-EP method. Finally, it isn't clear what additional value SVGP provides for interpreting the proposed method's performance. Classification experiments : The evidence among the non-sparse variants in support of the proposed method seems marginal. One dataset (mammographic) shows little difference among the methods in both performance metrics. Another dataset (heart) shows some improvement in one metric (accuracy) but not much in the other (AUC). The final dataset (pima) does show what appears to be statistically significant improvement in one metric (AUC). In total, one could say that the use of EP distributions does not hurt modeling binary classification and can sometimes provide a little benefit. The included sparse variants are not necessary since the datasets are all small (<1000 sample size) and obfuscate interpreting the results. Requested Changes: Section 2.2, "Prediction" paragraph : Please correct the statement "The conditional distribution can be..." Section 3 : Please correct the likelihood definitions in (5), (6), (8), and (9) to include an $i$ subscript on the mixing variable or clarify how the mixing distribution can be estimated as in the experiments of Figure 4 from a single draw of the mixing distribution for all observations. In other words, I believe the correct expression for the likelihood term in (5) is $\prod_{i=1}^N p(y_i | f_i, \omega_i) p(\omega_i; \eta_\omega)$. Section 3.2 : Please provide some motivatation for selecting a spline flow for the variational posterior $q(\xi; \phi_\xi)$. Since the prior model (5) selects a single draw from the mixing distribution $p(\xi; \eta_\xi)$ it isn't clear why a complicated posterior is necessary. Section 4.4 : If the paper would like to include a comparison among sparse variants, please select a dataset large enough to warrant them, e.g., the airlines dataset of the sparse GP classification paper by Hensman (2015) et al. 
Section 4.4 : Please comment on the estimated test-set log predictive densities of the two (full) EP methods compared to the GP method (or include results and discussion). Appendix D : Please explain the distinction b/w $\eta$ in (48) and $\eta_f$ in (49). Appendix G : Please elaborate on the choices of output transformations utilized in the likelihood mixing distribution (softplus) and posterior mixing distribution (sigmoid). The included hypothesis that the posterior is inherently "more difficult to learn" should be backed up with experimental evidence. Minor: Section 3 : "...is a regular EP prior with the covariance kernel..." -> a regular GP prior (?) Section 3.2 : Please modify the title "Prior" of this section. It is not obvious why this title applies. Section 3.2 : Extra vertical bar in (17) in expression $p(u, \xi |; \eta_u, \eta_\xi)$. Section 4.2, 1st para : The expression for the true function $f( x_n )$ is missing the additive noise. Section 4.4, 2nd para : "Therefore, we can indicate the value of the elliptical..." What does it mean to "indicate the value" of the process? Please clarify or re-word. Section 4.4, 3rd para : "performed a ton-fold cross-validation" -> ten-fold Appendix B : At the end of the line starting with "Proof of proposition 1.", the variable $M$ in $p(\xi|y_1 M ; \eta)$ is undefined. Appendix F : There is a missing $\xi$ in the expression in the line above (66). Broader Impact Concerns: N/A ================================================== Review 3: Summary: In this work, the authors present a novel interpretation of elliptical processes that leverages variational inference in order to approximate intractable likelihoods and reduce computational complexity. This class of models is expected to outperform Gaussian processes (GPs) on datasets having heavier-tailed distributions, while also subsuming the former along with Student-$t$ processes. Via an extensive combination of experiments on synthetic and real-world datasets, the authors demonstrate how different aspects of variational $\mathcal{EP}$s (i.e. their likelihood, posterior approximation, etc) contribute towards the overall performance of the model in comparison to the more widely-used GPs. Strengths and Weaknesses: **Strengths** - The paper is well-written and a pleasure to read. I believe it does an especially great job in communicating the theoretical contributions required to formulate $\mathcal{EP}$s, while also including multiple illustrative examples that convey the utility of the proposed model in practical set-ups. Crucially, the paper comes across as fairly "complete", and is especially enhanced by the ablation studies and discussion contained in Section 4. I also appreciated that the experiments highlight both strengths and weaknesses of the proposed model (such as the greater risk of overfitting when accounting for heteroscedastic noise). - The problem itself is very interesting and relevant, and as the authors indicate, highly-relevant to practitioners working with data having heavy-tailed distributions (without compromising on the effectiveness of GPs for normal-distributed data). There would definitely be large value in having the model proposed here being integrated in a more generic toolkit such as $sklearn$ in order to improve usability. **Weaknesses** - One aspect that sometimes comes across as unclear is the core contribution of itself, i.e. is it the fundamental formulation of elliptical processes, or the incorporation of variational inference to enable tractability? 
The initial description of $\mathcal{EP}s$ is provided in the *Background* section, but there is no reference to the unpublished arXiv manuscript on a similar topic (https://arxiv.org/abs/2003.07201).
- A small suggestion for the Experiments section is to perhaps include a critical difference diagram that can more succinctly summarise how GPs and different variations of $\mathcal{EP}s$ compare against each other. Beyond commenting on computational considerations, a more formal description of computational complexity would also be welcome.
- In the related work section, although there are several references to alternative models permitting more complex and flexible posteriors than standard GPs (such as deep GPs), these methods are not featured in the experimental evaluation itself. While I wouldn't expect the evaluation section to be extended further, a little more commentary on what performance can be expected from these competing models when applied to heavy-tailed distributions could be just as illuminating.
- [nitpick] The final sentence of the paper reads as unnecessarily pessimistic, and could do with either a citation or a supporting argument.

Requested Changes: The main changes I would like to see in a future revision of the paper are closely tied to the weaknesses listed in the previous section. For clarity, these are:

- *[major]* Clarifying whether the specification of elliptical processes is a core contribution of the paper itself, or whether the focus is on the variational approximation. The current placement of the introduction to $\mathcal{EP}$s in the Background section makes this unclear.
- *[major]* Expanding the Related Work section to more clearly express connections to alternative methods such as deep GPs and deep kernel learning methods.
- *[minor]* Slight tweaks to the Experiments section to make key takeaways more clear.
- *[minor]* Tiny spotted typos:
  - pg.1 - "Gaussian distributions *is* attractive..."
  - pg.12 - "a *ten*-fold cross-validation where *we*..."

Broader Impact Concerns: I do not foresee any broader impact concerns resulting from this work.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Overall, this paper is of interest to the community and the clarity and focus of the paper are much improved from the previous submission. Indeed, those reviewers who reviewed both versions were very complimentary about the improvements in this version. While there are some concerns about the experimental results (which show marginal improvements over alternative methods), the reviewers all agree that EPs, and the proposed methodology, are of interest to the community. Therefore, I do not request any further experiments. I *would* like to see clearer delineation of the contributions. Two out of three reviewers were of the impression that the Elliptical process was a new concept; however as the authors address in response to reviewer g9nQ, the idea exists already in the statistical literature (albeit, not as a practical modeling method). Making EPs a practical modeling tool is certainly an important contribution, and under "Our Contributions", the authors do indeed say that their contribution is a new construction of EPs, not the introduction of EPs. However, I would like to see this distinction made clearer in the Introduction and in Section 2.2 (where it is unclear whether Definition 1 is a new contribution since it has no attribution).
In addition, the reviewers noted that, while the paper is much improved, it would benefit from a final proofread -- for example, in (5) the subscript should be $n$ not $i$

==================================================
# Learning With Kan Extensions

Anonymous authors Paper under double-blind review

## Abstract

A common problem in machine learning is "use this function defined over this small set to generate predictions over that larger set." Extrapolation, interpolation, statistical inference and forecasting all reduce to this problem. The Kan extension is a powerful tool in category theory that generalizes this notion. In this work we explore applications of the Kan extension to machine learning problems. We begin by deriving a simple classification algorithm as a Kan extension and experimenting with this algorithm on real data. Next, we use the Kan extension to derive a procedure for learning clustering algorithms from labels and explore the performance of this procedure on real data. Although the Kan extension is usually defined in terms of categories and functors, this paper assumes no knowledge of category theory. We hope this will enable a wider audience to learn more about this powerful mathematical tool.

## 1 Introduction

A popular slogan in category theoretic circles, popularized by Saunders Mac Lane, is: "all concepts are Kan extensions" (Mac Lane, 1971). While Mac Lane was partially referring to the fundamental way in which many elementary category theoretic structures can be formulated as Kan extensions, there are many applied areas that have Kan extension structure lying beneath the surface as well. Casting a problem as a Kan extension can unveil hidden structure and suggest new avenues for exploration. In this paper we aim to demonstrate what Kan extensions, and applied category theory more broadly, can offer to machine learning researchers. We hope that our work will inspire more researchers to explore this direction.

As a machine learning researcher it may be easiest to think of the Kan extension as a tool for extrapolation. We can use the Kan extension to expand a function over a small set to a similar function over a larger set. However, the Kan extension perspective on extrapolation is fundamentally different from traditional machine learning perspectives. Intuitively, traditional perspectives focus on means and sums whereas the Kan extension perspective focuses on minimums and maximums. That is, a traditional machine learning algorithm may try to extrapolate from data in a way that minimizes the total observed error. In contrast, an algorithm derived from the Kan extension may try to solve a problem like "minimize false positives subject to no false negatives on some set". In this paper we explore the ramifications of this difference across supervised and unsupervised learning applications.

To do this, we cast basic machine learning problems in category theoretic language, apply the Kan extension, translate the result back to machine learning language, and study the behavior of the resulting algorithms. First, we derive a simple classification algorithm as a Kan extension and demonstrate experimentally that this algorithm can learn to classify images. Next, we use Kan extensions to derive a novel method for learning a clustering algorithm from labeled data and demonstrate experimentally that this method can learn to cluster images. All code is available on GitHub.

For interested readers we include two additional examples of how Kan extensions can be applied to machine learning in the Appendix. In Section A.2 we explore the structure of meta-supervised learning and use Kan extensions to derive supervised learning algorithms from sets of labeled datasets and trained functions.
In Section A.3 we use Kan extensions to characterize the process of approximating a complex function with a simpler minimum description length (MDL) function.

## 2 Preliminaries

The foundational structures that we use in this paper are preorders and monotonic maps.

Definition 2.1. A preorder (P, ≤) is a tuple of a set of objects P and a reflexive, transitive relation ≤ on P.

Definition 2.2. A monotonic map $f : (P_1, \leq_1) \to (P_2, \leq_2)$ from the preorder $(P_1, \leq_1)$ to the preorder $(P_2, \leq_2)$ is an order-preserving function from $P_1$ to $P_2$. That is, for any $x, y \in P_1$ where $x \leq_1 y$ we have $f(x) \leq_2 f(y)$.

For example, consider the preorder $(\mathbb{R}^n, \leq_{\|\cdot\|})$ where for $v, u \in \mathbb{R}^n$ we have $v \leq_{\|\cdot\|} u$ when $\|v\| \leq \|u\|$. Consider also the preorder $(\mathbb{R}^n, \leq_{\forall})$ where $v \leq_{\forall} u$ when $|v_i| \leq |u_i|$ for all $i = 1, \cdots, n$. The identity function $id : (\mathbb{R}^n, \leq_{\forall}) \to (\mathbb{R}^n, \leq_{\|\cdot\|})$ is a monotonic map, but the identity function $id : (\mathbb{R}^n, \leq_{\|\cdot\|}) \to (\mathbb{R}^n, \leq_{\forall})$ is not a monotonic map.

Definition 2.3. $(P, \leq)$ is a discrete preorder if $p_1 \leq p_2$ in $P$ implies $p_1 = p_2$.

Definition 2.4. $(P_1, \leq_1)$ is a subpreorder of the preorder $(P_2, \leq_2)$ if $P_1 \subseteq P_2$ and the identity function $id : (P_1, \leq_1) \to (P_2, \leq_2)$ is a monotonic map.

For example, $(\mathbb{R}^n, \leq_{\forall})$ is a subpreorder of $(\mathbb{R}^n, \leq_{\|\cdot\|})$. As another example, consider the preorder $(U^n, \leq_{\forall})$ where $U^n$ is the set of unit-norm vectors in $\mathbb{R}^n$. $(U^n, \leq_{\forall})$ is a subpreorder of $(\mathbb{R}^n, \leq_{\forall})$.

In order to keep notation simple we will use bold characters like **A** to represent the preorder (Ob(**A**), ≤**A**). In addition, given two monotonic maps f1, f2 : **A** → **B** we will write f1 ≤ f2 to indicate that for all a ∈ **A** we have f1(a) ≤ f2(a).

## 2.1 Kan Extensions

Now suppose we have three preorders **A**, **B**, **C** and two monotonic maps G : **A** → **B**, K : **A** → **C** and we would like to derive the "best" monotonic map F : **B** → **C**:

![1_image_0.png](1_image_0.png)

There are two canonical ways that we can do this.

Definition 2.5. The left Kan extension of K : **A** → **C** along G : **A** → **B** is the minimal monotonic map $Lan_GK$ : **B** → **C** such that K ≤ ($Lan_GK$ ∘ G). That is, for any other monotonic map m : **B** → **C** such that K ≤ (m ∘ G) we have $Lan_GK$ ≤ m.

Definition 2.6. The right Kan extension of K : **A** → **C** along G : **A** → **B** is the maximal monotonic map $Ran_GK$ : **B** → **C** such that ($Ran_GK$ ∘ G) ≤ K. That is, for any other monotonic map m : **B** → **C** such that (m ∘ G) ≤ K we have m ≤ $Ran_GK$.

If G : **A** ↪ **B** is the inclusion map then the Kan extensions of K along G are interpolations or extrapolations of K from **A** to all of **B**. For example, suppose we want to interpolate a monotonic function K : ℤ → ℝ to a monotonic function F : ℝ → ℝ such that F ∘ G = K where G : ℤ ↪ ℝ is the inclusion map.

![2_image_0.png](2_image_0.png)

We have that $Lan_GK$ : ℝ → ℝ is simply K ∘ floor and $Ran_GK$ : ℝ → ℝ is simply K ∘ ceil, where floor, ceil are the rounding down and rounding up functions respectively.

In this paper we explore a few applications of Kan extensions to machine learning. In each of these applications we first define preorders **A**, **B**, **C** and a monotonic function K : **A** → **C** such that **A** is a subpreorder of **B** and G : **A** ↪ **B** is the inclusion map. Then, we take the left and right Kan extensions $Lan_GK$, $Ran_GK$ of K along G and study their behavior.

## 3 Classification

We start with a simple application of Kan extensions to supervised learning. Suppose that **I** is a preorder, **I**′ ⊆ **I** is a subpreorder of **I**, and {false, true} is the two element preorder where false < true.
Suppose also that K : **I**′ → {false, true} is a mapping into {false, true} and we would like to learn a monotonic function F : **I** → {false, true} that approximates K on **I**′. That is, K defines a finite training set of points S = {(x, K(x)) | x ∈ **I**′} from which we wish to learn a monotonic function F : **I** → {false, true}. Of course, it may not be possible to find a monotonic function that agrees with K on all the points in **I**′.

![2_image_1.png](2_image_1.png)

We can solve this problem with the left and right Kan extensions of K along the inclusion map G : **I**′ ↪ **I**.

Proposition 3.1. The left and right Kan extensions of K : **I**′ → {false, true} along the inclusion map G : **I**′ ↪ **I** are respectively $Lan_GK : \mathbf{I}\to\{\text{false},\text{true}\}$ and $Ran_GK : \mathbf{I}\to\{\text{false},\text{true}\}$ where:

$$Lan_GK(x)=\begin{cases}\text{true}&\exists x'\in\mathbf{I}',\ x'\leq x,\ K(x')=\text{true}\\ \text{false}&\text{else}\end{cases}$$

$$Ran_GK(x)=\begin{cases}\text{false}&\exists x'\in\mathbf{I}',\ x\leq x',\ K(x')=\text{false}\\ \text{true}&\text{else}\end{cases}$$

(Proof in Appendix A.1.1)

In the extreme case that Ob(**I**′) = ∅, for x ∈ **I** we have that:

$$Lan_GK(x)=\begin{cases}\text{true}&\exists x'\in\mathbf{I}',\ x'\leq x,\ K(x')=\text{true}\\ \text{false}&\text{else}\end{cases}=\text{false}$$

$$Ran_GK(x)=\begin{cases}\text{false}&\exists x'\in\mathbf{I}',\ x\leq x',\ K(x')=\text{false}\\ \text{true}&\text{else}\end{cases}=\text{true}$$

Similarly, in the extreme case that Ob(**I**′) = Ob(**I**) we have by the monotonicity of K that for x ∈ **I** both of the following hold if and only if K(x) = true:

$$\exists x'\in\mathbf{I}',\ x'\leq x,\ K(x')=\text{true}\qquad\quad\nexists x'\in\mathbf{I}',\ x\leq x',\ K(x')=\text{false}$$

Therefore in this extreme case we have $Lan_GK(x) = Ran_GK(x) = K(x)$.

Now suppose that **I**′ contains at least one x′ such that K(x′) = true and at least one x′ such that K(x′) = false. In this case $Lan_GK$ and $Ran_GK$ split **I** into three regions: a region where both map all points to false, a region where both map all points to true, and a disagreement region. Note that $Ran_GK$ has no false positives on **I**′ and $Lan_GK$ has no false negatives on **I**′.

For example, suppose **I** = ℝ, **I**′ = {1, 2, 3, 4} and we have:

$$K(1)=\text{false}\qquad K(2)=\text{false}\qquad K(3)=\text{true}\qquad K(4)=\text{true}$$

Then we have that:

$$Lan_GK(x)=\begin{cases}\text{true}&\exists x'\in\mathbf{I}',\ x'\leq x,\ K(x')=\text{true}\\ \text{false}&\text{else}\end{cases}=\begin{cases}\text{true}&x\geq 3\\ \text{false}&\text{else}\end{cases}$$

$$Ran_GK(x)=\begin{cases}\text{false}&\exists x'\in\mathbf{I}',\ x\leq x',\ K(x')=\text{false}\\ \text{true}&\text{else}\end{cases}=\begin{cases}\text{true}&x>2\\ \text{false}&\text{else}\end{cases}$$

In this case the disagreement region for $Lan_GK$, $Ran_GK$ is (2, 3) and for any x ∈ (2, 3) we have $Lan_GK(x) < Ran_GK(x)$.
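When **I**′ is finite, the formulas in Proposition 3.1 can be evaluated directly. The snippet below is a minimal sketch in Python for **I** = ℝ with its usual order, reproducing the example above; the helper name is an illustrative choice and not from the paper's released code:

```python
# a minimal sketch of Proposition 3.1 for I = R; `train` is a list of (x, label) pairs
def make_kan_classifiers(train):
    def lan(x):   # left Kan extension: true iff some true-labeled point lies at or below x
        return any(xp <= x and k for xp, k in train)
    def ran(x):   # right Kan extension: false iff some false-labeled point lies at or above x
        return not any(x <= xp and not k for xp, k in train)
    return lan, ran

lan, ran = make_kan_classifiers([(1, False), (2, False), (3, True), (4, True)])
assert lan(2.5) is False and ran(2.5) is True  # (2, 3) is the disagreement region
```

For a general preorder **I** one only needs to swap the `<=` comparisons for the order relation on **I**.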
As another example, suppose **I** = ℝ, **I**′ = {5, 6, 7, 8} and we have:

$$K(5)=\text{false}\qquad K(6)=\text{true}\qquad K(7)=\text{false}\qquad K(8)=\text{true}$$

Then we have that:

$$Lan_GK(x)=\begin{cases}\text{true}&\exists x'\in\mathbf{I}',\ x'\leq x,\ K(x')=\text{true}\\ \text{false}&\text{else}\end{cases}=\begin{cases}\text{true}&x\geq 6\\ \text{false}&\text{else}\end{cases}$$

$$Ran_GK(x)=\begin{cases}\text{false}&\exists x'\in\mathbf{I}',\ x\leq x',\ K(x')=\text{false}\\ \text{true}&\text{else}\end{cases}=\begin{cases}\text{true}&x>7\\ \text{false}&\text{else}\end{cases}$$

In this case the disagreement region for $Lan_GK$, $Ran_GK$ is [6, 7] and for any x ∈ [6, 7] we have $Ran_GK(x) < Lan_GK(x)$.

While this approach is effective for learning very simple mappings, there are many choices of K for which $Lan_GK$, $Ran_GK$ do not approximate K particularly well on **I**′ and therefore the disagreement region is large. In such a situation we can use a similar strategy to the one leveraged by kernel methods (Hofmann et al., 2008) and transform **I** to minimize the size of the disagreement region. That is, we choose a preorder **I**∗ and transformation f : **I** → **I**∗ such that the size of the disagreement region for $Lan_{f\circ G}K \circ f$, $Ran_{f\circ G}K \circ f$ is minimized.

![3_image_0.png](3_image_0.png)

For example, if **I**∗ = $\mathbb{R}^a$ we can choose f to minimize the following loss:

Definition 3.2. Suppose we have a set **I**′ ⊆ **I** and function K : **I**′ → {false, true} such that:

$$\exists x', x'' \in \mathbf{I}',\ K(x')=\text{true},\ K(x'')=\text{false}$$

Then the ordering loss l maps a function f : **I** → $\mathbb{R}^a$ to an approximation of the size of the disagreement region for $Lan_{f\circ G}K \circ f$, $Ran_{f\circ G}K \circ f$. Formally, we define the ordering loss l to be:

$$l:(\mathbf{I}\to\mathbb{R}^{a})\to\mathbb{R}$$

$$l(f)=\sum_{i\leq a}\max\Big(0,\ \max\{f(x)[i]\ \mid\ x\in\mathbf{I}',K(x)=\text{false}\}-\min\{f(x)[i]\ \mid\ x\in\mathbf{I}',K(x)=\text{true}\}\Big)$$

where f(x)[i] is the ith component of the vector f(x) ∈ $\mathbb{R}^a$.

We can show that minimizing the ordering loss l will also minimize the size of the disagreement region:

Proposition 3.3. The ordering loss l (Definition 3.2) is nonnegative and is only equal to 0 when ∀x ∈ **I**′ we have:

$$K(x)=(Lan_{f\circ G}K\circ f)(x)=(Ran_{f\circ G}K\circ f)(x)$$

Proof. First note that l must be nonnegative since each term can be expressed as max(0, _). Next, suppose that l(f) = 0. Then it must be that for any x0, x1 ∈ **I**′ such that K(x0) = false, K(x1) = true we have that f(x0) ≤ f(x1). As a result, for any x ∈ **I**′ there can only exist some x′ ∈ **I**′ where f(x) ≤ f(x′), K(x′) = false when K(x) = false. Similarly, there can only exist some x′ ∈ **I**′ where f(x′) ≤ f(x), K(x′) = true when K(x) = true. Therefore:

$$K(x)=(Lan_{f\circ G}K\circ f)(x)=(Ran_{f\circ G}K\circ f)(x)$$

It is relatively straightforward to minimize the ordering loss with an optimizer like subgradient descent (Boyd et al., 2003).
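For example, the ordering loss can be implemented in TensorFlow (Abadi et al., 2015). The listing below is a minimal sketch, assuming each training batch has been pre-split into the embeddings of the false-labeled and the true-labeled points; the linear model and Adam optimizer mirror the experiment reported below, and the helper names are illustrative:

```python
import tensorflow as tf

def ordering_loss(emb_false, emb_true):
    # Definition 3.2: per coordinate i, hinge on (max over false samples) - (min over true samples)
    max_false = tf.reduce_max(emb_false, axis=0)  # shape [a]
    min_true = tf.reduce_min(emb_true, axis=0)    # shape [a]
    return tf.reduce_sum(tf.maximum(0.0, max_false - min_true))

f = tf.keras.layers.Dense(10)      # a simple linear model f : R^784 -> R^10
opt = tf.keras.optimizers.Adam()

@tf.function
def train_step(x_false, x_true):
    with tf.GradientTape() as tape:
        loss = ordering_loss(f(x_false), f(x_true))
    grads = tape.gradient(loss, f.trainable_variables)
    opt.apply_gradients(zip(grads, f.trainable_variables))
    return loss
```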
| Model | Dataset | True Positive Rate | True Negative Rate |
|----------------------|-----------|----------------------|----------------------|
| Left Kan Classifier | Training | 1.000 (±0.000) | 0.612 (±0.042) |
| Right Kan Classifier | Training | 0.705 (±0.035) | 1.000 (±0.000) |
| Left Kan Classifier | Testing | 0.815 (±0.020) | 0.593 (±0.044) |
| Right Kan Classifier | Testing | 0.691 (±0.044) | 0.837 (±0.026) |

Table 1: True positive rate and true negative rate of the left Kan classifier $Lan_{f\circ G}K \circ f$ and the right Kan classifier $Ran_{f\circ G}K \circ f$ where f is a linear map trained to minimize the ordering loss l(f) (Definition 3.2) on the Fashion-MNIST "T-shirt" vs "shirt" task (Xiao et al., 2017). We run a bootstrap experiment by repeatedly selecting 9000 training samples and 1000 testing samples, running the training procedure, and computing true positive rate and true negative rate metrics. Mean and two standard error confidence bounds from 10 such bootstrap iterations are shown.

In Table 1 we demonstrate that we can use this strategy to distinguish between the "T-shirt" (false) and "shirt" (true) categories in the Fashion MNIST dataset (Xiao et al., 2017). Samples in this dataset have 784 features (pixels), so we train a simple linear model f : $\mathbb{R}^{784} \to \mathbb{R}^{10}$ with Adam (Kingma & Ba, 2014) to minimize the ordering loss l(f) over a training set that contains 90% of samples in the dataset. We then evaluate the performance of the left Kan classifier $Lan_{f\circ G}K \circ f$ and the right Kan classifier $Ran_{f\circ G}K \circ f$ over both this training set and a testing set that contains the remaining 10% of the dataset. We look at two metrics over both sets: the true positive rate and the true negative rate. Recall that the true positive rate of a classifier is the proportion of all true samples which the classifier correctly labels as true and the true negative rate of a classifier is the proportion of all false samples which the classifier correctly labels as false.

As we would expect from the definition of Kan extensions, the left Kan classifier $Lan_{f\circ G}K \circ f$ has no false negatives and the right Kan classifier $Ran_{f\circ G}K \circ f$ has no false positives on the training set. The metrics on the testing set are in line with our expectations as well: the left Kan classifier has a higher true positive rate and the right Kan classifier has a higher true negative rate.

## 4 Clustering With Supervision

Clustering algorithms allow us to group points in a dataset together based on some notion of similarity between them. Formally, we can consider a clustering algorithm as mapping a finite metric space (X, d_X) to a partition of X. In most applications of clustering the points in the metric space (X, d_X) are grouped together based solely on the distances between the points and the rules embedded within the clustering algorithm itself. This is an unsupervised clustering strategy since no labels or supervision influence the algorithm output. For example, agglomerative clustering algorithms like HDBSCAN (McInnes & Healy, 2017) and single linkage partition points in X based on graphs formed from the points (vertices) and distances (edges) in (X, d_X).
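For reference, δ-single linkage clustering (which returns below, both as the experimental baseline and as a special case of the left Kan supervised clustering map) computes the connected components of the graph whose edges join points at distance at most δ. A minimal sketch, with illustrative names:

```python
from collections import deque

def single_linkage(points, dist, delta):
    # two points share a cluster exactly when a chain of points connects them
    # with every consecutive distance at most delta
    unseen, clusters = set(points), []
    while unseen:
        queue = deque([unseen.pop()])
        cluster = set(queue)
        while queue:
            x = queue.popleft()
            near = {y for y in unseen if dist(x, y) <= delta}
            unseen -= near
            cluster |= near
            queue.extend(near)
        clusters.append(cluster)
    return clusters
```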
However, there are some circumstances under which we have a few ground truth examples of pre-clustered training datasets and want to learn an algorithm that can cluster new data as similarly as possible to these ground truth examples. We can define the supervised clustering problem as follows. Given a collection of tuples

$$S=\{(X_{1},d_{X_{1}},\mathbb{P}_{X_{1}}),(X_{2},d_{X_{2}},\mathbb{P}_{X_{2}}),\cdots,(X_{n},d_{X_{n}},\mathbb{P}_{X_{n}})\}$$

where each $(X_i, d_{X_i})$ is a finite metric space and $\mathbb{P}_{X_i}$ is a partition of $X_i$, we would like to learn a general function f that maps a finite metric space (X, d_X) to a partition $\mathbb{P}_X$ of X such that for each $(X_i, d_{X_i}, \mathbb{P}_{X_i}) \in S$ the difference between $f(X_i, d_{X_i})$ and $\mathbb{P}_{X_i}$ is small. In order to frame this objective in terms of Kan extensions we will first construct our preorder of metric spaces.

Definition 4.1. A nonexpansive map from the metric space (X, d_X) to the metric space (Y, d_Y) is a function f : X → Y such that for x1, x2 ∈ X we have:

$$d_Y(f(x_1), f(x_2)) \leq d_X(x_1, x_2)$$

Definition 4.2. In the preorder **Met**_id the set Ob(**Met**_id) consists of all metric spaces (X, d_X) and (X, d_X) ≤**Met**_id (Y, d_Y) when X ⊆ Y and the inclusion map ι : (X, d_X) ↪ (Y, d_Y) is nonexpansive.

We can represent a clustering of a set X with a partition $\mathbb{P}_X$ of that set. We can now construct our preorder of partitions.

Definition 4.3. Consider the tuples $(X, \mathbb{P}_X), (Y, \mathbb{P}_Y)$ where $\mathbb{P}_X$ is a partition of X and $\mathbb{P}_Y$ is a partition of Y. Then a consistent map $f : (X, \mathbb{P}_X) \to (Y, \mathbb{P}_Y)$ is a function f : X → Y such that for any set $S_X \in \mathbb{P}_X$ there exists some set $S_Y \in \mathbb{P}_Y$ such that $f(S_X) \subseteq S_Y$.

Definition 4.4. In the preorder **Part**_id the set Ob(**Part**_id) consists of all partitions $(X, \mathbb{P}_X)$ and $(X, \mathbb{P}_X)$ ≤**Part**_id $(Y, \mathbb{P}_Y)$ when X ⊆ Y and the inclusion map $\iota : (X, \mathbb{P}_X) \hookrightarrow (Y, \mathbb{P}_Y)$ is consistent.

We need one more condition to corral our definition of a clustering map.

Definition 4.5. We say that a monotonic map f from a subpreorder of **Met**_id to a subpreorder of **Part**_id is well-behaved if for all (X, d_X) in the domain of f we have that $f(X, d_X) = (X, \mathbb{P}_X)$ for some partition $\mathbb{P}_X$ of X.

Intuitively a well-behaved monotonic map from **Met**_id to **Part**_id acts as the identity on underlying sets. Now given a subpreorder **D** ⊆ **Met**_id, a discrete preorder **T** ⊆ **D**, and a well-behaved monotonic map K : **T** → **Part**_id, our goal is to find the best well-behaved monotonic map F : **D** → **Part**_id such that F ∘ G = K where G : **T** ↪ **D** is the inclusion map.

![6_image_0.png](6_image_0.png)

Intuitively, Ob(**T**) is the set of unlabelled training samples, K defines the labels on these training samples, and Ob(**D**) is the set of testing samples. We would like to use the Kan extensions of K along G to find this best clustering map. However, these Kan extensions are not guaranteed to be well-behaved. For example, consider the case in which **T** is the discrete preorder that contains the single-element metric space as its only object and **D** is the discrete preorder that contains two objects: the single-element metric space and ℝ equipped with the Euclidean distance metric¹.

![6_image_1.png](6_image_1.png)

¹This counterexample is due to Sam Staton.

Since **D** is a discrete preorder, the behavior of K on ({∗}, d_{∗}) will not affect the behavior of the left and right Kan extensions of K along G on (ℝ, d_ℝ). For example, the left Kan extension of K along G will map (ℝ, d_ℝ) to the empty set and is therefore not well-behaved. In order to solve this problem with Kan extensions we need to add a bit more structure.
Suppose Ob(**D**) is the discrete preorder with the same objects as **D** and define the following:

Definition 4.6. The monotonic map $K_L$ : Ob(**D**) → **Part**_id is equal to K on **T** and maps each object (X, d_X) in Ob(**D**) − Ob(**T**) to (X, {{x} | x ∈ X}).

Definition 4.7. The monotonic map $K_R$ : Ob(**D**) → **Part**_id is equal to K on **T** and maps each object (X, d_X) in Ob(**D**) − Ob(**T**) to (X, {X}).

Intuitively, $K_L$ and $K_R$ are extensions of K to all of the objects in **D**. For any metric space (X, d_X) ∈ **Met**_id not in Ob(**T**) the monotonic map $K_L$ maps (X, d_X) to the finest possible partition of X and $K_R$ maps (X, d_X) to the coarsest possible partition of X.

Suppose we go back to the previous example in which **T** is the discrete preorder containing only the single-element metric space and **D** is the discrete preorder containing both the single-element metric space and (ℝ, d_ℝ). Since:

$$K_{L}(\mathbb{R},d_{\mathbb{R}})=(\mathbb{R},\{\{x\}\;\mid\;x\in\mathbb{R}\})$$

the left Kan extension of $K_L$ along the inclusion G : Ob(**D**) ↪ **D** must map (ℝ, d_ℝ) to the $\leq_{\mathbf{Part}_{id}}$-smallest $(X, \mathbb{P}_X)$ such that:

$$(\mathbb{R},\{\{x\}\ \mid\ x\in\mathbb{R}\})\leq_{\mathbf{Part}_{id}}(X,\mathbb{P}_{X})$$

which is $(X, \mathbb{P}_X) = (\mathbb{R},\{\{x\} \mid x\in\mathbb{R}\})$. Similarly, since:

$$K_R(\mathbb{R}, d_{\mathbb{R}}) = (\mathbb{R}, \{\mathbb{R}\})$$

the right Kan extension of $K_R$ along the inclusion G : Ob(**D**) ↪ **D** must map (ℝ, d_ℝ) to the $\leq_{\mathbf{Part}_{id}}$-largest $(X, \mathbb{P}_X)$ such that:

$$(X,\mathbb{P}_{X})\leq_{\mathbf{Part}_{id}}{\big(}\mathbb{R},\{\mathbb{R}\}{\big)}$$

which is $(X, \mathbb{P}_X) = (\mathbb{R}, \{\mathbb{R}\})$. We can apply the same logic to the behavior of the Kan extensions on the single-element metric space as well, so both Kan extensions are well-behaved monotonic maps. We can now build on this perspective to construct optimal extensions of K.

Proposition 4.8. Consider the monotonic map $Lan_GK_L$ : **D** → **Part**_id that sends the metric space (X, d_X) ∈ **D** to the partition of X defined by the transitive closure of the relation R where for x1, x2 ∈ X we have x1 R x2 if and only if there exists some metric space (X′, d_X′) ∈ **T** where (X′, d_X′) ≤**D** (X, d_X) and x1, x2 are in the same cluster in K(X′, d_X′). The map $Lan_GK_L$ : **D** → **Part**_id is a well-behaved monotonic map. (Proof in Appendix A.1.2)

Proposition 4.9. Consider the map $Ran_GK_R$ : **D** → **Part**_id that sends the metric space (X, d_X) ∈ **D** to the partition of X defined by the transitive closure of the relation R where for x1, x2 ∈ X we have x1 R x2 if and only if there exists no metric space (X′, d_X′) ∈ **T** where (X, d_X) ≤**D** (X′, d_X′) and x1, x2 are in different clusters in K(X′, d_X′). The map $Ran_GK_R$ : **D** → **Part**_id is a well-behaved monotonic map. (Proof in Appendix A.1.3)

We can now construct $Lan_GK_L$, $Ran_GK_R$ as Kan extensions.

Proposition 4.10. Suppose there exists some well-behaved monotonic map F : **D** → **Part**_id where F ∘ G = K. Then $Lan_GK_L$ : **D** → **Part**_id (Proposition 4.8) is the left Kan extension of $K_L$ : Ob(**D**) → **Part**_id along the inclusion map G : Ob(**D**) ↪ **D**.

![8_image_0.png](8_image_0.png)

In addition $Ran_GK_R$ : **D** → **Part**_id (Proposition 4.9) is the right Kan extension of $K_R$ : Ob(**D**) → **Part**_id along the inclusion map G : Ob(**D**) ↪ **D**.

![8_image_1.png](8_image_1.png)

(Proof in Appendix A.1.4)

We will call $Lan_GK_L$ the left Kan supervised clustering map and $Ran_GK_R$ the right Kan supervised clustering map.
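When **D** is finite, Proposition 4.8 translates directly into an algorithm. The listing below is a minimal sketch, assuming each training example is a pair of a point set X′ and its clusters, and assuming that (X′, d_X′) ≤**D** (X, d_X) reduces to X′ ⊆ X (the training distances being restrictions of d_X); the names are illustrative:

```python
# a minimal sketch of the left Kan supervised clustering map Lan_G K_L
def left_kan_cluster(X, training):
    # union-find computes the transitive closure of "same cluster in some
    # training space included in X"; untouched points stay as singletons
    parent = {x: x for x in X}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for subset, partition in training:      # training: list of (frozenset, list of clusters)
        if subset <= X:                     # (X', d_{X'}) <=_D (X, d_X)
            for cluster in partition:
                first = next(iter(cluster))
                for x in cluster:
                    parent[find(x)] = find(first)
    groups = {}
    for x in X:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())
```

The right Kan map $Ran_GK_R$ can be sketched dually: start from the single cluster X and separate x, x∗ whenever some training space above (X, d_X) in ≤**D** places them in different clusters.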
When Ob(**T**) = ∅ we have for any (X, d_X) ∈ **D** that:

$$Lan_GK_L(X,d_X)=K_L(X,d_X)=(X,\{\{x\}\mid x\in X\})$$

$$Ran_GK_R(X,d_X)=K_R(X,d_X)=(X,\{X\})$$

In general for any metric space (X, d_X) ∈ **D** − Ob(**T**) the monotonic maps $Lan_GK_L$, $Ran_GK_R$ respectively map (X, d_X) to the finest (most clusters) and coarsest (fewest clusters) partitions of X such that for any metric space (X′, d_X′) ∈ **T** we have:

$$K(X^{\prime},d_{X^{\prime}})=Lan_GK_L(X^{\prime},d_{X^{\prime}})=Ran_GK_R(X^{\prime},d_{X^{\prime}})$$

and $Lan_GK_L$, $Ran_GK_R$ are monotonic maps.

For example, suppose we have a metric space (X, d_X) where X = {x1, x2, x3}. We can form the subpreorders **T** ⊆ **D** ⊆ **Met**_id where:

$$Ob(\mathbf{T})=\{(\{x_{1},x_{2}\},d_{X}),(\{x_{1},x_{3}\},d_{X}),(\{x_{2},x_{3}\},d_{X})\}$$

$$Ob(\mathbf{D})=Ob(\mathbf{T})\cup\{(\{x_{1},x_{2},x_{3}\},d_{X})\}$$

**T** is a discrete preorder and we define ≤**D** such that S1 ≤**D** S2 when S1 ≤**Met**_id S2. Now define K : **T** → **Part**_id to be the following monotonic map:

$$\begin{array}{l}{K(\{x_{1},x_{2}\},d_{X})=\{\{x_{1},x_{2}\}\}}\\ {K(\{x_{1},x_{3}\},d_{X})=\{\{x_{1}\},\{x_{3}\}\}}\\ {K(\{x_{2},x_{3}\},d_{X})=\{\{x_{2}\},\{x_{3}\}\}}\end{array}$$

In this case we have that:

$$K_L(\{x_1,x_2,x_3\},d_X)=\{\{x_1\},\{x_2\},\{x_3\}\}\qquad K_R(\{x_1,x_2,x_3\},d_X)=\{\{x_1,x_2,x_3\}\}$$

Since the only points that need to be put together are x1, x2 and there is no metric space (X, d_X) in **D** where ({x1, x2, x3}, d_X) <**D** (X, d_X), we have:

$$Lan_GK_L(\{x_1,x_2,x_3\},d_X)=\{\{x_1,x_2\},\{x_3\}\}\qquad Ran_GK_R(\{x_1,x_2,x_3\},d_X)=\{\{x_1,x_2,x_3\}\}$$

As another example, suppose **D** is **Met**_id and **T** is the discrete subpreorder of **D** whose objects are all metric spaces with no more than 2 elements. Define the following well-behaved monotonic map:

$$K(\{x_{1},x_{2}\},d)={\begin{cases}\{\{x_{1},x_{2}\}\}&d(x_{1},x_{2})\leq\delta\\ \{\{x_{1}\},\{x_{2}\}\}&{{\mathrm{else}}}\end{cases}}$$

Now for some metric space (X, d_X) ∈ **D** with |X| > 2 and points x1, x2 ∈ X we have that $Lan_GK_L$ maps x1, x2 to the same cluster if and only if there exists some chain of points x1, · · · , x2 in X where for each pair of adjacent points $x_1', x_2'$ in this chain there exists some metric space $(\{x_1', x_2'\}, d_{X'})$ ∈ **T** where $(\{x_1', x_2'\}, d_{X'})$ ≤**D** (X, d_X) and $x_1', x_2'$ are in the same cluster in $K(\{x_1', x_2'\}, d_{X'})$. This is the case if and only if $d_X(x_1', x_2') \leq \delta$. Therefore, $Lan_GK_L$ maps x1, x2 to the same cluster if and only if x1, x2 are in the same connected component of the δ-Vietoris-Rips complex of (X, d_X). That is, $Lan_GK_L$ performs the single linkage clustering algorithm. In contrast, since |X| > 2 there is no (X′, d_X′) in **T** where (X, d_X) ≤**D** (X′, d_X′). Therefore:

$$Ran_GK_R(X,d_X)=(X,\{X\})$$

We can use this strategy to learn a clustering algorithm from real-world data. Recall that the Fashion MNIST dataset (Xiao et al., 2017) contains images of clothing and the categories that each image falls into. Suppose that we have two subsets of this dataset: a training set $X_{tr}$ in which images are grouped by category and a testing set $X_{te}$ of ungrouped images. We can use UMAP (McInnes et al., 2018) to construct metric spaces $(X_{tr}, d_{X_{tr}})$ and $(X_{te}, d_{X_{te}})$ from these sets. Now suppose we would like to group the images in $X_{te}$ as similarly as possible to the grouping of the images in $X_{tr}$. For any collection of nonexpansive maps between $(X_{tr}, d_{X_{tr}})$ and $(X_{te}, d_{X_{te}})$ we can define subpreorders **T** ⊆ **D** ⊆ **Met**_id and a monotonic map K : **T** → **Part**_id as follows:
1. Initialize **T** to an empty preorder and **D** to be the discrete preorder with the single object $(X_{te}, d_{X_{te}})$.

2. For every nonexpansive map $f : (X_{tr}, d_{X_{tr}}) \to (X_{te}, d_{X_{te}})$ in our collection and pair $(x_1, x_2) \in X_{tr}$ of samples in the same clothing category, add the object $(\{f(x_1), f(x_2)\}, d_{X_{te}})$ to **T** and **D** where:

$$(\{f(x_1), f(x_2)\}, d_{X_{te}}) \leq_{\mathbf{D}} (X_{te}, d_{X_{te}})$$

and define $K(\{f(x_1), f(x_2)\}, d_{X_{te}})$ to map $f(x_1)$ and $f(x_2)$ to the same cluster.

3. For every nonexpansive map $f : (X_{te}, d_{X_{te}}) \to (X_{tr}, d_{X_{tr}})$ in our collection define a metric space $(X_{te}', d_{X_{te}'})$ where $X_{te} = X_{te}'$ and $d_{X_{te}} = d_{X_{te}'}$. Add the object $(X_{te}', d_{X_{te}'})$ to **T** and **D** where:

$$(X_{te}, d_{X_{te}}) \leq_{\mathbf{D}} (X_{te}', d_{X_{te}'})$$

and define $K(X_{te}', d_{X_{te}'})$ to be the partition of $X_{te}'$ defined by the preimages of the function (h ∘ f) where h maps each element of $X_{tr}$ to the category of clothing it belongs to.

We can now use $Lan_GK_L$ and $Ran_GK_R$ to partition $X_{te}$. In Figure 1 we compare the clusterings produced by $Lan_GK_L$, $Ran_GK_R$ to the ground truth clothing categories. As a baseline we compute the δ-single linkage clustering algorithm with δ chosen via line search to maximize the adjusted Rand score (Hubert & Arabie, 1985; Pedregosa et al., 2011) with the ground truth labels. As expected, we see that $Lan_GK_L$ produces a finer clustering (more clusters) than does $Ran_GK_R$ and that the clusterings produced by $Lan_GK_L$ and $Ran_GK_R$ are better than the clustering produced by single linkage in the sense of adjusted Rand score with ground truth.

![10_image_0.png](10_image_0.png)

Figure 1: Cluster assignments of a 100 point testing set $X_{te}$ from the Fashion MNIST dataset (Xiao et al., 2017) shown in UMAP space (McInnes et al., 2018). Each color corresponds to a unique cluster, and points without clusters are shown as black squares. We show ground truth clothing categories, unsupervised δ-single linkage cluster assignments (δ chosen via line search), and the $Lan_GK_L$, $Ran_GK_R$ supervised cluster assignments. The $Lan_GK_L$, $Ran_GK_R$ algorithms are trained on a separate 1000 point random sample $X_{tr}$ from the Fashion MNIST dataset.

| Frequency that Left Kan Clustering Beats Best δ-Single Linkage | Frequency that Right Kan Clustering Beats Best δ-Single Linkage |
|----------------------------------------------------------------|-----------------------------------------------------------------|
| 0.860 (±0.068) | 0.680 (±0.091) |

Table 2: We compare the performance of the left and right Kan supervised clustering maps on the Fashion MNIST dataset to the performance of δ-single linkage clustering with an optimal choice of δ. We select 100 bootstrap training and testing samples of the Fashion MNIST dataset. We then train and evaluate each method on each such sample. To perform this evaluation we use the scikit-learn implementation of the Adjusted Rand Score (Pedregosa et al., 2011) to compare the algorithmically generated clusterings with the ground truth categorization. We then compute the frequency with which the left and right Kan supervised clustering maps perform better (have a higher Adjusted Rand Score with ground truth) than choosing the optimal value of δ for single linkage. The win rates and two standard error confidence bounds from the 100 experiments are shown. We see that the left and right Kan clustering maps both perform consistently better than single linkage.

## 5 Related Work

Some authors have begun to explore Kan extension structure in topological data analysis. For example, Bubenik et al.
(2017) describe how three mechanisms for interpolating between persistence modules can be characterized as the left Kan extension, right Kan extension, and natural map from left to right Kan extension. Similarly, McCleary & Patel (2021) use Kan extensions to characterize deformations of filtrations. Furthermore, Botnan & Lesnick (2018) use Kan extensions to generalize stability results from block decomposable persistence modules to zigzag persistence modules and Curry (2013) uses Kan extensions to characterize persistent structures from the perspective of sheaf theory.

Other authors have explored the application of Kan extensions to databases. For example, in categorical formulations of relational database theory (Spivak & Wisnesky, 2015; Schultz & Wisnesky, 2017; Schultz et al., 2016), the left Kan extension can be used for data migration. Spivak & Wisnesky (2020) exploit the characterization of data migrations as Kan extensions to apply the chase algorithm from relational database theory to the general computation of the left Kan extension.

## 6 Future Work

In this paper we demonstrate that Kan extensions can be used to derive many different kinds of supervised learning algorithms. However, these algorithms are inherently focused on extreme values (minimums and maximums) rather than averages. Averages are required to build algorithms that are robust to noise, and a potential future direction for this work is to extend these algorithms to incorporate averages. For example, we may be able to combine multiple Kan classifiers together to generate a robust Kan classifier ensemble. It may even be possible to apply a boosting approach in which we minimize the ordering loss, fit Kan classifiers, and then repeat on the samples in the disagreement region.

## References

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems. 2015. URL https://www.tensorflow.org/.

Magnus Botnan and Michael Lesnick. Algebraic stability of zigzag persistence modules. *Algebraic & Geometric Topology*, 18(6), Oct 2018. ISSN 1472-2747. doi: 10.2140/agt.2018.18.3133.

Stephen Boyd, Lin Xiao, and Almir Mutapcic. Subgradient methods. *Course notes at Stanford University*, 2003. URL https://web.stanford.edu/class/ee392o/subgrad_method.pdf.

Peter Bubenik, Vin de Silva, and Vidit Nanda. Higher interpolation and extension for persistence modules. *SIAM Journal on Applied Algebra and Geometry*, 1(1), Jan 2017. ISSN 2470-6566. doi: 10.1137/16m1100472.

Justin M. Curry. Sheaves, cosheaves and applications. 2013. doi: 10.1.1.363.2881.

Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. *The Annals of Statistics*, 36(3), 2008.

Lawrence Hubert and Phipps Arabie. Comparing partitions. *Journal of Classification*, 2(1), 1985.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2014. URL https://arxiv.org/abs/1412.6980.

Saunders Mac Lane. *Categories for the Working Mathematician*. New York, 1971.

Alexander McCleary and Amit Patel. Edit distance and persistence diagrams over lattices. 2021. URL https://arxiv.org/abs/2010.07337.

Leland McInnes and John Healy. Accelerated hierarchical density clustering. 2017. URL https://arxiv.org/abs/1705.07321.

Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. 2018. URL https://arxiv.org/abs/1802.03426.

Fabian Pedregosa et al. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12, 2011.

Jorma Rissanen.
Modeling by shortest data description. *Automatica*, 14(5):465–471, 1978. ISSN 0005-1098. doi: https://doi.org/10.1016/0005-1098(78)90005-5. URL https://www.sciencedirect.com/science/article/pii/0005109878900055.

Patrick Schultz and Ryan Wisnesky. Algebraic data integration. 2017. URL https://arxiv.org/abs/1503.03571.

Patrick Schultz, David I Spivak, and Ryan Wisnesky. Algebraic model management: A survey. In *International Workshop on Algebraic Development Techniques*. Springer, 2016.

David I. Spivak and Ryan Wisnesky. Relational foundations for functorial data migration. 2015. URL https://arxiv.org/abs/1212.5303.

David I Spivak and Ryan Wisnesky. Fast left-Kan extensions using the chase. Preprint. Available at www.categoricaldata.net, 2020.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. 2017. URL https://arxiv.org/abs/1708.07747.

## A Appendix

## A.1 Proofs

## A.1.1 Proof of Proposition 3.1

Proof. We first need to show that $Lan_GK$, $Ran_GK$ are monotonic. For any x1 ≤ x2 ∈ **I** suppose that $Lan_GK(x_1)$ = true. Then ∃x′ ∈ **I**′, x′ ≤ x1, K(x′) = true. By transitivity we have x′ ≤ x2, so:

$$Lan_GK(x_2)=\begin{cases}\text{true}&\exists x'\in\mathbf{I}',\ x'\leq x_2,\ K(x')=\text{true}\\ \text{false}&\text{else}\end{cases}=\text{true}$$

and $Lan_GK$ is therefore monotonic. Next, for any x1 ≤ x2 ∈ **I** suppose that $Ran_GK(x_2)$ = false. Then ∃x′ ∈ **I**′, x2 ≤ x′, K(x′) = false. By transitivity we have x1 ≤ x′, so:

$$Ran_GK(x_1)=\begin{cases}\text{false}&\exists x'\in\mathbf{I}',\ x_1\leq x',\ K(x')=\text{false}\\ \text{true}&\text{else}\end{cases}=\text{false}$$

and $Ran_GK$ is therefore monotonic.

Next we will show that $Lan_GK$ is the left Kan extension of K along G. If for some x′ ∈ **I**′ we have that K(x′) = true then:

$$Lan_GK(x')=\begin{cases}\text{true}&\exists x''\in\mathbf{I}',\ x''\leq x',\ K(x'')=\text{true}\\ \text{false}&\text{else}\end{cases}=\text{true}$$

so we can conclude that K ≤ ($Lan_GK$ ∘ G). Now consider any other monotonic map $M_L$ : **I** → {false, true} such that ∀x′ ∈ **I**′, K(x′) ≤ $M_L(x')$. We must show that ∀x ∈ **I**, $Lan_GK(x)$ ≤ $M_L(x)$. For some x ∈ **I** suppose $M_L(x)$ = false. Then since $M_L$ is a monotonic map it must be that ∀x′ ∈ **I**′, x′ ≤ x, $M_L(x')$ = false. Since K ≤ ($M_L$ ∘ G) it must be that ∀x′ ∈ **I**′, x′ ≤ x, K(x′) = false. Therefore $Lan_GK(x)$ = false.

Next we will show that $Ran_GK$ is the right Kan extension of K along G. If for some x′ ∈ **I**′ we have that K(x′) = false then:

$$Ran_GK(x')=\begin{cases}\text{false}&\exists x''\in\mathbf{I}',\ x'\leq x'',\ K(x'')=\text{false}\\ \text{true}&\text{else}\end{cases}=\text{false}$$

so we can conclude that ($Ran_GK$ ∘ G) ≤ K. Now consider any other monotonic map $M_R$ : **I** → {false, true} such that ∀x′ ∈ **I**′, $M_R(x')$ ≤ K(x′). We must show that ∀x ∈ **I**, $M_R(x)$ ≤ $Ran_GK(x)$. For some x ∈ **I** suppose $M_R(x)$ = true. Then since $M_R$ is a monotonic map it must be that ∀x′ ∈ **I**′, x ≤ x′, $M_R(x')$ = true. Since ($M_R$ ∘ G) ≤ K it must be that ∀x′ ∈ **I**′, x ≤ x′, K(x′) = true.
Therefore $Ran_GK(x)$ = true.

## A.1.2 Proof of Proposition 4.8

Proof. $Lan_GK_L$ trivially acts as the identity on underlying sets so we simply need to show that when:

$$(X,d_{X})\leq_{\mathbf{D}}(Y,d_{Y})$$

then:

$$Lan_GK_L(X,d_{X})\leq_{\mathbf{Part}_{id}}Lan_GK_L(Y,d_{Y})$$

Suppose there exists some x, x∗ ∈ X in the same cluster in $Lan_GK_L(X, d_X)$. Then by the definition of $Lan_GK_L$ there must exist some sequence

$$(X_{1},d_{X_{1}}),(X_{2},d_{X_{2}}),\cdots,(X_{n},d_{X_{n}})\in\mathbf{T}$$

where x ∈ X1, x∗ ∈ Xn and each:

$$(X_{i},d_{X_{i}})\leq_{\mathbf{D}}(X,d_{X})$$

as well as some sequence x1, x2, · · · , xn−1 such that xi ∈ Xi, xi ∈ Xi+1 where the pair (x, x1) is in the same cluster in $K(X_1, d_{X_1})$, the pair (xn−1, x∗) is in the same cluster in $K(X_n, d_{X_n})$, and for each 1 < i < n the pair (xi−1, xi) is in the same cluster in $K(X_i, d_{X_i})$. Since it must be that each:

$$(X_{i},d_{X_{i}})\leq_{\mathbf{D}}(Y,d_{Y})$$

as well then by the definition of $Lan_GK_L$ it must be that x, x∗ are in the same cluster in $Lan_GK_L(Y, d_Y)$.

## A.1.3 Proof of Proposition 4.9

Proof. $Ran_GK_R$ trivially acts as the identity on underlying sets so we simply need to show that when:

$$(X,d_{X})\leq_{\mathbf{D}}(Y,d_{Y})$$

then:

$$Ran_GK_R(X,d_{X})\leq_{\mathbf{Part}_{id}}Ran_GK_R(Y,d_{Y})$$

Suppose the points x, x∗ ∈ X are in the same cluster in $Ran_GK_R(X, d_X)$. Then by the definition of $Ran_GK_R$ there cannot be any (X′, d_X′) in **T** such that:

$$(X,d_{X})\leq_{\mathbf{D}}(X^{\prime},d_{X^{\prime}})$$

and x, x∗ are in different clusters in $Ran_GK_R(X', d_{X'})$. By transitivity this implies that there cannot be any (X′′, d_X′′) in **T** such that:

$$(Y,d_{Y})\leq_{\mathbf{D}}(X^{\prime\prime},d_{X^{\prime\prime}})$$

and x, x∗ are in different clusters in $Ran_GK_R(X'', d_{X''})$. By the definition of $Ran_GK_R$ the points x, x∗ must therefore be in the same cluster in $Ran_GK_R(Y, d_Y)$.

## A.1.4 Proof of Proposition 4.10

We use the following Proposition in the proof below:

Proposition A.1. Suppose there exists some well-behaved monotonic map F : **D** → **Part**_id where F ∘ G = K. Then for (X, d_X) ∈ **T** we have that:

$$F(X,d_X)=K(X,d_X)=Lan_GK_L(X,d_X)=Ran_GK_R(X,d_X)$$

Proof. Since each of:

$$F:\mathbf{D}\to\mathbf{Part}_{id}\qquad Ran_GK_R:\mathbf{D}\to\mathbf{Part}_{id}\qquad Lan_GK_L:\mathbf{D}\to\mathbf{Part}_{id}$$

are well-behaved monotonic maps we simply need to prove that all three maps generate the same partition of X for any input (X, d_X) ∈ **T**. Consider some (X, d_X) ∈ **T** and two points x, x∗ ∈ X. Suppose x, x∗ are in different clusters in K(X, d_X) = F(X, d_X). Then since F is a well-behaved monotonic map it must be that for any sequence

$$(X_{1},d_{X_{1}}),(X_{2},d_{X_{2}}),\cdots,(X_{n},d_{X_{n}})\in\mathbf{T}$$

where x ∈ X1, x∗ ∈ Xn and each:

$$(X_{i},d_{X_{i}})\leq_{\mathbf{D}}(X,d_{X})$$

and any sequence x1, x2, · · · , xn−1 such that xi ∈ Xi, xi ∈ Xi+1 one of the following must be true:

- The pair (x, x1) are in different clusters in $F(X_1, d_{X_1})$
- The pair (xn−1, x∗) are in different clusters in $F(X_n, d_{X_n})$
- For some 1 < i < n the pair (xi−1, xi) are in different clusters in $F(X_i, d_{X_i})$

This implies that in $Lan_GK_L(X, d_X)$ the points x, x∗ must be in different clusters. Similarly, since (X, d_X) ≤**D** (X, d_X), by Proposition 4.9 it must be that x, x∗ are in different clusters in $Ran_GK_R(X, d_X)$.

Now suppose x, x∗ are in the same cluster in:

$$K(X,d_X)=F(X,d_X)$$

Since (X, d_X) ≤**D** (X, d_X), by Proposition 4.8 it must be that x, x∗ are in the same cluster in $Lan_GK_L(X, d_X)$.
Similarly, since F is a well-behaved monotonic map there cannot exist any metric space (X′, d_X′) ∈ **T** where:

$$(X,d_{X})\leq_{\mathbf{D}}(X^{\prime},d_{X^{\prime}})$$

and x, x∗ are in different clusters in:

$$K(X^{\prime},d_{X^{\prime}})=F(X^{\prime},d_{X^{\prime}})$$

Therefore x, x∗ are in the same cluster in $Ran_GK_R(X, d_X)$.

Now we can prove Proposition 4.10:

Proof. To start, note that Proposition A.1 implies that for any (X, d_X) ∈ **T** we have:

$$Lan_GK_L(X,d_X)=K(X,d_X)=Ran_GK_R(X,d_X)$$

By the definition of $K_L$, $K_R$ we can therefore conclude that for any (X, d_X) ∈ **D** we have:

$$K_L(X,d_X)\leq_{\mathbf{Part}_{id}}Lan_GK_L(X,d_X)\qquad Ran_GK_R(X,d_X)\leq_{\mathbf{Part}_{id}}K_R(X,d_X)$$

Next, consider any monotonic map $M_L$ : **D** → **Part**_id such that for all (X, d_X) ∈ **D** we have:

$$K_L(X,d_X)\leq_{\mathbf{Part}_{id}}(M_L\circ G)(X,d_X)$$

We must show that for any (X, d_X) ∈ **D** we have:

$$Lan_GK_L(X,d_X)\leq_{\mathbf{Part}_{id}}M_L(X,d_X)$$

To start, note that for any x, x∗ ∈ X that are in the same cluster in $Lan_GK_L(X, d_X)$ by the definition of $Lan_GK_L$ there must exist some sequence:

$$(X_{1},d_{X_{1}}),(X_{2},d_{X_{2}}),\cdots,(X_{n},d_{X_{n}})\in\mathbf{T}$$

where x ∈ X1, x∗ ∈ Xn and each:

$$(X_{i},d_{X_{i}})\leq_{\mathbf{D}}(X,d_{X})$$

as well as some sequence x1, x2, · · · , xn−1 such that xi ∈ Xi, xi ∈ Xi+1 where the pair (x, x1) is in the same cluster in $K_L(X_1, d_{X_1})$, the pair (xn−1, x∗) is in the same cluster in $K_L(X_n, d_{X_n})$, and for each 1 < i < n the pair (xi−1, xi) is in the same cluster in $K_L(X_i, d_{X_i})$.

Now since for each (Xi, d_Xi) in this sequence we have that:

$$K_L(X_i,d_{X_i})\leq_{\mathbf{Part}_{id}}M_L(X_i,d_{X_i})$$

it must be that the pair (x, x1) is in the same cluster in $M_L(X_1, d_{X_1})$, the pair (xn−1, x∗) is in the same cluster in $M_L(X_n, d_{X_n})$, and for each 1 < i < n the pair (xi−1, xi) is in the same cluster in $M_L(X_i, d_{X_i})$. Since $M_L$ is a monotonic map it must therefore be that the pair x, x∗ is in the same cluster in $M_L(X, d_X)$ and therefore:

$$Lan_GK_L(X,d_X)\leq_{\mathbf{Part}_{id}}M_L(X,d_X)$$

Next, consider any monotonic map $M_R$ : **D** → **Part**_id such that for all (X, d_X) in **D**:

$$(M_R\circ G)(X,d_X)\leq_{\mathbf{Part}_{id}}K_R(X,d_X)$$

We must show that for any (X, d_X) in **D** we have:

$$M_R(X,d_X)\leq_{\mathbf{Part}_{id}}Ran_GK_R(X,d_X)$$

To start, note that for any x, x∗ ∈ X such that x, x∗ are not in the same cluster in $Ran_GK_R(X, d_X)$ by the definition of $Ran_GK_R$ there must exist some:

$$(X^{\prime},d_{X^{\prime}})\in\mathbf{D},\ (X,d_{X})\leq_{\mathbf{D}}(X^{\prime},d_{X^{\prime}})$$

where x, x∗ are not in the same cluster in $K_R(X', d_{X'})$. Now since:

$$M_R(X^{\prime},d_{X^{\prime}})\leq_{\mathbf{Part}_{id}}K_R(X^{\prime},d_{X^{\prime}})$$

it must be that x, x∗ are not in the same cluster in $M_R(X', d_{X'})$. Since $M_R$ is a monotonic map we have:

$$M_R(X,d_X)\leq_{\mathbf{Part}_{id}}M_R(X^{\prime},d_{X^{\prime}})$$

so x, x∗ are also not in the same cluster in $M_R(X, d_X)$ and therefore:

$$M_R(X,d_X)\leq_{\mathbf{Part}_{id}}Ran_GK_R(X,d_X)$$

## A.2 Meta-Supervised Learning

Suppose I is a set and O is a partial order. A supervised learning algorithm maps a labeled dataset (a set of pairs of points in I × O) to a function f : I → O. For example, both $Lan_GK$ and $Ran_GK$ from Section 3 are supervised learning algorithms. In this section we use Kan extensions to derive supervised learning algorithms from pairs of datasets and functions. Our construction combines elements of Section 3's point-level algorithms and Section 4's dataset-level algorithms. Suppose we have a finite partial order $S_f$ ⊆ (I → O) of functions where for f, f′ ∈ $S_f$ we have f ≤ f′ when ∀x ∈ I, f(x) ≤ f′(x).

Proposition A.2.
For any subset $S_f^* \subseteq S_f$ the upper antichain of $S_f^*$ is the set:

$$\{f\;\mid\;f\in S_{f}^{*},\;\nexists f^{*}\in S_{f}^{*},f<f^{*}\}$$

The upper antichain of $S_f^*$ is an antichain in $S_f^*$, and for any function $f \in S_f^*$ there exists some function $f^*$ in the upper antichain of $S_f^*$ such that $f \leq f^*$.

Proof. Suppose f1, f2 are in the upper antichain of $S_f^* \subseteq S_f$ and f1 ≤ f2. Then since $\nexists f_1^* \in S_f^*, f_1 < f_1^*$ it must be that f1 = f2 and we can conclude that the upper antichain is an antichain. Next, for any function $f \in S_f^*$ consider the set $\{f^* \in S_f^*, f < f^*\}$. Since $S_f$ is finite this set must have finite size. If this set is empty then f is in the upper antichain of $S_f^*$. If this set has size n then for any $f^*$ in this set the set $\{f^{**} \in S_f^*, f^* < f^{**}\}$ must have size strictly smaller than n. We can therefore conclude by induction that the upper antichain of $S_f^*$ contains at least one function $f^*$ where $f \leq f^*$.

Intuitively the upper antichain of $S_f^*$ is the collection of all functions $f \in S_f^*$ that are not strictly upper bounded by any other function in $S_f^*$. The upper antichain of an empty set is of course itself an empty set.

Definition A.3. We can form the following preorders:

**D**_C : The objects in **D**_C are ≤-antichains of functions $X_f \subseteq S_f$. **D**_C is a preorder in which $X_f \leq X_f'$ if for $f \in X_f$ there must exist some $f' \in X_f'$ where $f \leq f'$.

**D**_B : The objects in **D**_B are labeled datasets, or sets of pairs U = {(x, y) | x ∈ I, y ∈ O}. **D**_B is a preorder such that U ≤ U′ when for all (x, y′) ∈ U′ there exists (x, y) ∈ U where y ≤ y′.

**D**_A : A subpreorder of **D**_B such that if U ≤ U′ ∈ **D**_B then U ≤ U′ ∈ **D**_A.

Proposition A.4. **D**_B and **D**_C are preorders.

Proof. **D**_C: We trivially have $X_f \leq X_f$ in **D**_C. To see that ≤ is transitive in **D**_C simply note that if $X_{f_1} \leq X_{f_2}$ and $X_{f_2} \leq X_{f_3}$ then for $f_1 \in X_{f_1}$ there must exist $f_2 \in X_{f_2}, f_1 \leq f_2$, which implies that there must exist $f_3 \in X_{f_3}, f_1 \leq f_2 \leq f_3$.

**D**_B: We trivially have U ≤ U in **D**_B. To see that ≤ is transitive in **D**_B simply note that if U1 ≤ U2 and U2 ≤ U3 in **D**_B then for (x, y3) ∈ U3 there must exist (x, y2) ∈ U2, y2 ≤ y3 which implies that there must exist (x, y1) ∈ U1, y1 ≤ y2 ≤ y3.

Intuitively, **D**_A is a collection of labeled training datasets and **D**_B is a collection of labeled testing datasets. We can define a monotonic map that maps each training dataset to all of the trained models that agree with that dataset.

Proposition A.5. The map K : **D**_A → **D**_C that maps the object U ∈ **D**_A to the upper antichain of the following set:

$$S_K(U)=\{f\ \mid\ f\in S_f,\ \forall(x,y)\in U,\ f(x)\leq y\}$$

is a monotonic map.

Proof. To start, note that K maps objects in **D**_A to objects in **D**_C since the upper antichain of $S_K(U)$ must be an antichain in $S_f$ by Proposition A.2. Next, we need to show that if U ≤ U′ then K(U) ≤ K(U′). For any (x, y′) ∈ U′ it must be that there exists (x, y) ∈ U where y ≤ y′, so if f ∈ K(U) then by the definition of K we have f(x) ≤ y ≤ y′. Therefore $f \in S_K(U')$, so by Proposition A.2 K(U′) contains f′ where f ≤ f′. Therefore K(U) ≤ K(U′).

Now define G : **D**_A ↪ **D**_B to be the inclusion map. A monotonic map F : **D**_B → **D**_C such that F ∘ G commutes with K will then be a mapping from the testing datasets in **D**_B to collections of trained models.

![18_image_0.png](18_image_0.png)

We can take the left and right Kan extensions of K along the inclusion map G : **D**_A ↪ **D**_B to find the optimal such mapping.

Proposition A.6.
The map $Lan_GK$ that maps the object U ∈ **D**_B to the upper antichain of the following set:

$$S_{L}(U)=\bigcup_{\{U^{\prime}\ |\ U^{\prime}\in{\bf D}_{A},\ U^{\prime}\leq U\}}K(U^{\prime})$$

is the left Kan extension of K along G. Next, the map $Ran_GK$ that maps the object U ∈ **D**_B to the upper antichain of the following set:

$$S_R(U)=\{f\ \mid\ f\in S_f,\ \forall U'\in\{U'\ \mid\ U'\in\mathbf{D}_A,\ U\leq U'\},\ \exists f'\in K(U'),\ f\leq f'\}$$

is the right Kan extension of K along G.

Proof. We first need to show that $Lan_GK$ is a monotonic map **D**_B → **D**_C. Note that $Lan_GK$ maps objects in **D**_B to objects in **D**_C since the upper antichain of $S_L(U)$ must be an antichain in $S_f$. Next, suppose U1 ≤ U2 and that $f \in Lan_GK(U_1)$. Consider the set of all U′ ∈ **D**_A where U′ ≤ U1. Since U1 ≤ U2 this is a subset of the set of all U′ ∈ **D**_A where U′ ≤ U2. Since $S_L(U_1)$ is defined to be a union of the elements in the set we have that $S_L(U_1) \subseteq S_L(U_2)$. Since $f \in Lan_GK(U_1)$ implies that $f \in S_L(U_1)$ this implies that $f \in S_L(U_2)$ as well. Proposition A.2 then implies that there must exist $f' \in Lan_GK(U_2)$ where f ≤ f′ and therefore $Lan_GK(U_1) \leq Lan_GK(U_2)$.

Next, we will show that $Lan_GK$ is the left Kan extension of K along G.

- Consider some U ∈ **D**_A and f ∈ K(U). Since U ≤ U we have by the definition of $S_L$ that $f \in S_L(U)$. Proposition A.2 then implies that $\exists f' \in Lan_GK(U)$ such that f ≤ f′. This implies that K ≤ $Lan_GK$ ∘ G.
- Now consider any monotonic map $M_L$ : **D**_B → **D**_C such that K ≤ ($M_L$ ∘ G). We must show that $Lan_GK \leq M_L$. For some U ∈ **D**_B suppose $f \in Lan_GK(U)$. By the definition of $S_L$ there must exist some U′ ∈ **D**_A where U′ ≤ U such that f ∈ K(U′). Since K(U′) ≤ $M_L(U')$ there must exist some $f' \in M_L(U')$ where f ≤ f′. Since $M_L$ is a monotonic map we have $M_L(U') \leq M_L(U)$ which implies that there must exist some $f^* \in M_L(U)$ where f ≤ f′ ≤ f∗. Therefore $Lan_GK \leq M_L$.

Next, we need to show that $Ran_GK$ is a monotonic map **D**_B → **D**_C. Note that $Ran_GK$ maps objects in **D**_B to objects in **D**_C since the upper antichain of $S_R(U)$ must be an antichain in $S_f$. Next, suppose U1 ≤ U2 and that $f \in Ran_GK(U_1)$. Consider the set of all U′ ∈ **D**_A where U2 ≤ U′. Since U1 ≤ U2 this is a subset of the set of all U′ ∈ **D**_A where U1 ≤ U′. Therefore by the definition of $S_R$ we have that $S_R(U_1) \subseteq S_R(U_2)$. Since $f \in Ran_GK(U_1)$ implies that $f \in S_R(U_1)$ this implies that $f \in S_R(U_2)$ as well. Proposition A.2 then implies that there must exist $f' \in Ran_GK(U_2)$ where f ≤ f′ and therefore $Ran_GK(U_1) \leq Ran_GK(U_2)$.

Next, we will show that $Ran_GK$ is the right Kan extension of K along G.

- For U ∈ **D**_A, since U ≤ U, when $f \in S_R(U)$ we have by the definition of $S_R$ that $\exists f' \in K(U)$ such that f ≤ f′. Since $Ran_GK(U)$ is a subset of $S_R(U)$ this implies that $Ran_GK$ ∘ G ≤ K.
- Now consider any monotonic map $M_R$ : **D**_B → **D**_C such that ($M_R$ ∘ G) ≤ K. We must show that $M_R \leq Ran_GK$. For some U ∈ **D**_B suppose $f \in M_R(U)$. Since $M_R$ is a monotonic map it must be that for all U′ ∈ **D**_A where U ≤ U′ we have that $M_R(U) \leq M_R(U')$ and therefore $\exists f'_{M_R} \in M_R(U'), f \leq f'_{M_R}$. Since ($M_R$ ∘ G) ≤ K this implies that for all U′ ∈ **D**_A where U ≤ U′ we have that $\exists f'_K \in K(U'), f \leq f'_{M_R} \leq f'_K$. By the definition of $S_R$ this implies that $f \in S_R(U)$. Proposition A.2 therefore implies that there exists $f'_R \in Ran_GK(U)$ such that $f \leq f'_R$, and therefore $M_R(U) \leq Ran_GK(U)$.

Intuitively the functions in $Ran_GK(U)$ and $Lan_GK(U)$ are as large as possible subject to constraints imposed by the selection of sets in Ob(**D**_A). The functions in $Lan_GK(U)$ are subject to a membership constraint and grow smaller when we remove objects from Ob(**D**_A).
The functions in $Ran_GK(U)$ are subject to an upper-boundedness constraint and grow larger when we remove objects from Ob(**D**_A).

Consider the extreme case where Ob(**D**_A) = ∅. For any U ∈ **D**_B we have that:

$$S_{L}(U)=\bigcup_{\{U^{\prime}\ |\ U^{\prime}\in\emptyset,\cdots\}}K(U^{\prime})=\emptyset$$

$$S_{R}(U)=\{f\ |\ f\in S_{f},\ \forall U^{\prime}\in\emptyset,\ \cdots\}=S_{f}$$

so $Lan_GK(U)$ is empty and $Ran_GK(U)$ is the upper antichain of $S_f$. Now consider the extreme case where Ob(**D**_A) = Ob(**D**_B). For any U ∈ **D**_B and f ∈ K(U) the monotonicity of K implies that:

$$\forall U^{\prime}\in\{U^{\prime}\ |\ U^{\prime}\in{\bf D}_{A},U\leq U^{\prime}\},\ \exists f^{\prime}\in K(U^{\prime}),\ f\leq f^{\prime}$$

and therefore $f \in S_R(U)$. This implies K(U) ≤ $Ran_GK(U)$. Similarly, for any $f \in Lan_GK(U)$ it must be that:

$$\exists U^{\prime}\in\mathbf{D}_{A},\ U^{\prime}\leq U,\ f\in K(U^{\prime})$$

which by the monotonicity of K implies that:

$$\exists f^{*}\in K(U),\ f\leq f^{*}$$

and therefore $Lan_GK(U) \leq K(U)$. Therefore in this extreme case we have:

$$Ran_GK(U)=Lan_GK(U)=K(U)$$

Let's now consider a more concrete example. Suppose I = $\mathbb{R}^2_{\geq 0}$, O = {false, true}, and $S_f$ is the finite set of linear classifiers $l : \mathbb{R}^2_{\geq 0} \to \{\text{false}, \text{true}\}$ that can be expressed as:

$$l_{a,b}(x_{1},x_{2})={\begin{cases}\mathrm{true}&x_{2}\leq a*x_{1}+b\\ \mathrm{false}&\mathrm{else}\end{cases}}$$

where a, b are integers in (−100, 100). Intuitively:

- The classifiers in $Lan_GK(U)$ are selected to be the classifiers that predict true as often as possible among the set of all classifiers that have no false positives on some U′ ∈ **D**_A where U′ ≤ U.
- The classifiers in $Ran_GK(U)$ are constructed to predict true as often as possible subject to a constraint imposed by the selection of sets in **D**_A. For every set U′ ∈ **D**_A where U ≤ U′ it must be that each classifier in $Ran_GK(U)$ is upper bounded at each point in I by some classifier in $S_f$ with no false positives on U′.

A concrete example will demonstrate this. Suppose that **D**_A contains the datasets

$$\{((2,2),\mathrm{true}),((1,3),\mathrm{true}),((4,4),\mathrm{false})\}$$

$$\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{true})\}$$

$$\{((2,2),\mathrm{false}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\}$$

where the all-false dataset is ≤ each of the other two, and that **D**_B additionally contains the dataset

$$\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\}$$

which lies above the all-false dataset and below both of the other two.
We can see the following:

- $l_{(1,1)}\in K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{true})\})$ since:

$$l_{(1,1)}(1,3)={\begin{cases}\mathrm{true}&3\leq 1*1+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}$$

but we have that:

$$l_{(1,2)}(1,3)={\begin{cases}\mathrm{true}&3\leq 1*1+2\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}\qquad l_{(2,1)}(1,3)={\begin{cases}\mathrm{true}&3\leq 2*1+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}$$

- $l_{(0,2)}\in K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{true})\})$ since:

$$l_{(0,2)}(1,3)={\begin{cases}\mathrm{true}&3\leq 0*1+2\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}$$

but we have that:

$$l_{(0,3)}(1,3)={\begin{cases}\mathrm{true}&3\leq 0*1+3\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}\qquad l_{(1,2)}(1,3)={\begin{cases}\mathrm{true}&3\leq 1*1+2\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}$$

- $l_{(0,3)}\in K(\{((2,2),\mathrm{true}),((1,3),\mathrm{true}),((4,4),\mathrm{false})\})$ since:

$$l_{(0,3)}(4,4)={\begin{cases}\mathrm{true}&4\leq 0*4+3\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}$$

but we have that:

$$l_{(1,3)}(4,4)={\begin{cases}\mathrm{true}&4\leq 1*4+3\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}\qquad l_{(0,4)}(4,4)={\begin{cases}\mathrm{true}&4\leq 0*4+4\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}$$
- $l_{(0,1)}\in K(\{((2,2),\mathrm{false}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})$ since:

$$l_{(0,1)}(2,2)={\begin{cases}\mathrm{true}&2\leq 0*2+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}\qquad l_{(0,1)}(1,3)={\begin{cases}\mathrm{true}&3\leq 0*1+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}\qquad l_{(0,1)}(4,4)={\begin{cases}\mathrm{true}&4\leq 0*4+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{false}$$

but we have that:

$$l_{(1,1)}(4,4)={\begin{cases}\mathrm{true}&4\leq 1*4+1\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}\qquad l_{(0,2)}(2,2)={\begin{cases}\mathrm{true}&2\leq 0*2+2\\ \mathrm{false}&\mathrm{else}\end{cases}}=\mathrm{true}$$

By the definition of $\mathrm{Lan}_G K$ we have that:

$$\mathrm{Lan}_G K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})$$

must contain $l_{(0,1)}$ since we have that:

$$l_{(0,1)}\in K(\{((2,2),\mathrm{false}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})$$

but:

$$l_{(0,2)}\notin K(\{((2,2),\mathrm{false}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})\qquad l_{(1,1)}\notin K(\{((2,2),\mathrm{false}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})$$

Similarly, by the definition of $\mathrm{Ran}_G K$ we have that:

$$\mathrm{Ran}_G K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{false})\})$$

must contain $l_{(0,2)}$ since we have that:

$$l_{(0,2)}\leq l_{(0,3)}\qquad l_{(0,2)}\leq l_{(1,2)}$$

but there is no $l_{(a,b)}$ such that $l_{(0,2)}<l_{(a,b)}$ that is in both:

$$K(\{((2,2),\mathrm{true}),((1,3),\mathrm{true}),((4,4),\mathrm{false})\})$$

and:

$$K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{true})\})$$

since:

$$l_{(1,2)}\notin K(\{((2,2),\mathrm{true}),((1,3),\mathrm{true}),((4,4),\mathrm{false})\})\qquad l_{(0,3)}\notin K(\{((2,2),\mathrm{true}),((1,3),\mathrm{false}),((4,4),\mathrm{true})\})$$

![22_image_0.png](22_image_0.png)

Figure 2: *The decision boundaries defined by* $l_{(1,1)}$, $l_{(0,3)}$, *and* $l_{(0,1)}$.
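The membership claims in this worked example can be checked mechanically. Below is a small self-contained script for doing so; the helper names are our own and this is illustrative code, not the authors'. A classifier is in $K(U)$ exactly when it has no false positives on $U$ and every strictly larger classifier does, which the prints below exhibit for $l_{(1,1)}$.

```python
# Classifiers l_{a,b} predict true iff x2 <= a*x1 + b.
def l(a, b):
    return lambda x1, x2: x2 <= a * x1 + b

def no_false_positives(a, b, U):
    # U is a set of ((x1, x2), label) pairs; only false-labeled points
    # can produce a false positive.
    return all(label or not l(a, b)(x1, x2) for (x1, x2), label in U)

U = {((2, 2), True), ((1, 3), False), ((4, 4), True)}
print(no_false_positives(1, 1, U))  # True: l_(1,1) has no false positives
print(no_false_positives(1, 2, U))  # False: l_(1,2) misfires on (1, 3)
print(no_false_positives(2, 1, U))  # False: l_(2,1) misfires on (1, 3)
```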
## A.3 Function Approximation

In many learning applications there may be multiple functions in a class that fit a particular set of data similarly well. In such a situation Occam's razor suggests that we are best off choosing the simplest such function. For example, we can choose the function with the smallest Kolmogorov complexity, also known as the minimum description length (MDL) function (Rissanen, 1978). In this section we will explore how we can use Kan extensions to find the MDL function that fits a dataset.

Suppose $I$ is a set, $O$ is a partial order, and $S$ is a finite subset of $I$. We can define the following preorder:

Definition A.7. *Define the preorder* $\leq_S$ on $(I\to O)$ *such that* $f_1\leq_S f_2$ *if and only if* $\forall x\in S,\ f_1(x)\leq f_2(x)$. *If* $f_1\leq_S f_2,\ f_2\not\leq_S f_1$ *then write* $f_1<_S f_2$, *and if* $f_1\leq_S f_2\leq_S f_1$ *then write* $f_1=_S f_2$.

Now suppose also that $C_{\leq_c}$ is some finite subset of the space of all functions $(I\to O)$ equipped with a total order $\leq_c$ such that $f_1\leq_c f_2$ whenever the Kolmogorov complexity of $f_1$ is no larger than that of $f_2$. Note that functions with the same Kolmogorov complexity may be ordered arbitrarily in $C_{\leq_c}$.

Proposition A.8. *Given a set of functions* $S_f\subseteq C_{\leq_c}$ *we can define a map that sends each function* $f\in S_f$ *to the function:*

$$f_{c}=\operatorname*{min}_{\leq_{c}}\{f'\ \mid\ f'\in S_{f},\ f'=_{S}f\}$$

*where* $f_c$ *satisfies* $f_c\leq_c f$. *This map is guaranteed to exist, and we can define the minimum Kolmogorov subset* $S_{f_c}$ of $S_f$ *to be the image of this map.* $S_{f_c}$ *contains exactly one function* $f_c$ *where* $f=_S f_c$.

Proof. For any function $f\in S_f$ there must exist some $f_c=\min_{\leq_c}\{f'\mid f'\in S_f,\ f'=_S f\}$ since $\{f'\mid f'\in S_f,\ f'=_S f\}$ is a nonempty finite total $\leq_c$-order. Therefore we can define a map that sends each $f\in S_f$ to $f_c$, and we can define $S_{f_c}$ to be the image of this map. Since this map sends all $f\in S_f$ in the same $=_S$ equivalence class to the same function in that $=_S$ equivalence class, $S_{f_c}$ contains exactly one function $f_c$ where $f=_S f_c$. This function $f_c$ satisfies $f_c\leq_c f$.

We can use these constructions to define the following preorders:

Definition A.9. *Given the sets of functions* $S^1_f\subseteq S^2_f\subseteq C_{\leq_c}$, *define* $S^1_{f_c}$ *to be the minimum Kolmogorov subset of* $S^1_f$. *We can construct the preorders* $\mathbf{F_A},\mathbf{F_B},\mathbf{F_C}$ *as follows.*

- *The set of objects in the discrete preorder* $\mathbf{F_A}$ is $S^1_{f_c}$.
- *The set of objects in* $\mathbf{F_B}$ is $S^2_f$. $\mathbf{F_B}$ *is a preorder under* $\leq_S$.
- $\mathbf{F_C}$ *is the subpreorder of* $\mathbf{F_B}$ *under* $\leq_S$ *in which objects are functions in* $S^1_{f_c}$.

Intuitively, a monotonic map $\mathbf{F_B}\to\mathbf{F_C}$ acts as a choice of a minimum Kolmogorov complexity function in $S^1_{f_c}$ for each function in $S^2_f$. For example, if $S^1_f$ contains all linear functions and $S^2_f$ is the class of all polynomials, then we can view a monotonic map $\mathbf{F_B}\to\mathbf{F_C}$ as selecting a linear approximation for each polynomial in $S^2_f$.
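As a sketch of Proposition A.8's construction, the following assumes the functions of $C_{\leq_c}$ are given as a Python list already sorted by $\leq_c$ and that $=_S$ is tested by pointwise agreement on $S$; these representation choices are our assumptions, not the paper's.

```python
def minimum_kolmogorov_subset(S_f, C_sorted, eq_S):
    # C_sorted: all candidate functions, in nondecreasing Kolmogorov
    # complexity order (<=_c). S_f: a subset of C_sorted (as a set).
    # eq_S(f, g): True iff f and g agree on every point of S.
    def f_c(f):
        # The <=_c-least member of f's =_S equivalence class within S_f;
        # at worst this returns f itself.
        return next(g for g in C_sorted if g in S_f and eq_S(g, f))
    return {f_c(f) for f in S_f}
```

A usage sketch: with `S = [0, 1, 2]`, one could pass `eq_S = lambda f, g: all(f(x) == g(x) for x in S)` and obtain one representative per $=_S$ class, exactly as the proposition describes.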
Proposition A.10. *For some function* $g\in S^2_f$, *define its minimal* $S$*-overapproximation to be the function* $h\in S^1_{f_c}$ *where* $g\leq_S h$ and $\forall h'\in S^1_{f_c}$ *where* $g\leq_S h'$ *we have* $h\leq_S h'$. *If this function exists it is unique.*

Proof. Suppose $h_1,h_2$ are both minimal $S$-overapproximations of $g$. Then $h_1\leq_S h_2$ and $h_2\leq_S h_1$, which by the definition of $S^1_{f_c}$ implies that $h_1=h_2$.

Proposition A.11. *For some function* $g\in S^2_f$, *define its maximal* $S$*-underapproximation to be the function* $h\in S^1_{f_c}$ *where* $h\leq_S g$ and $\forall h'\in S^1_{f_c}$ *where* $h'\leq_S g$ *we have* $h'\leq_S h$. *If this function exists it is unique.*

Proof. Suppose $h_1,h_2$ are both maximal $S$-underapproximations of $g$. Then $h_2\leq_S h_1$ and $h_1\leq_S h_2$, which by the definition of $S^1_{f_c}$ implies that $h_1=h_2$.

Proposition A.12. *Suppose that for some* $g\in S^2_f$ *there exists some* $h\in S^1_{f_c}$ *such that* $h=_S g$. *Then* $h$ *will be both the minimal* $S$*-overapproximation and the maximal* $S$*-underapproximation of* $g$.

Proof. To start, note that $h$ must satisfy $g\leq_S h$, and for any $h'\in S^1_{f_c}$ where $g\leq_S h'$ we have:

$$h=_S g\leq_S h'$$

so $h$ is the minimal $S$-overapproximation of $g$. Next, note that $h$ must satisfy $h\leq_S g$, and for any $h'\in S^1_{f_c}$ where $h'\leq_S g$ we have:

$$h'\leq_S g=_S h$$

so $h$ is also the maximal $S$-underapproximation of $g$.

We can now show the following:

Proposition A.13. *Define both* $K:\mathbf{F_A}\hookrightarrow\mathbf{F_C}$ and $G:\mathbf{F_A}\hookrightarrow\mathbf{F_B}$ *to be inclusion maps. Then:*

- *Suppose that for any function* $g\in S^2_f$ *there exists a minimal* $S$*-overapproximation* $h$ of $g$. *Then the left Kan extension of* $K$ *along* $G$ *is the monotonic map* $\mathrm{Lan}_G K$ *that maps* $g$ to $h$.
- *Suppose that for any function* $g\in S^2_f$ *there exists a maximal* $S$*-underapproximation* $h$ of $g$. *Then the right Kan extension of* $K$ *along* $G$ *is the monotonic map* $\mathrm{Ran}_G K$ *that maps* $g$ to $h$.

![24_image_0.png](24_image_0.png)

Proof. We first show that $\mathrm{Lan}_G K$ is monotonic when it exists. Since $\mathbf{F_B},\mathbf{F_C}$ are preorders we simply need to show that when $f_1\leq_S f_2$ then $\mathrm{Lan}_G K(f_1)\leq_S \mathrm{Lan}_G K(f_2)$. Since $f_2\leq_S \mathrm{Lan}_G K(f_2)$ by the definition of the minimal $S$-overapproximation of $f_2$, we have that $f_1\leq_S \mathrm{Lan}_G K(f_2)$. Then $\mathrm{Lan}_G K(f_1)\leq_S \mathrm{Lan}_G K(f_2)$ by the definition of the minimal $S$-overapproximation of $f_1$.

We next show that $\mathrm{Ran}_G K$ is monotonic when it exists. Since $\mathbf{F_B},\mathbf{F_C}$ are preorders we simply need to show that when $f_1\leq_S f_2$ then $\mathrm{Ran}_G K(f_1)\leq_S \mathrm{Ran}_G K(f_2)$. Since $\mathrm{Ran}_G K(f_1)\leq_S f_1$ by the definition of the maximal $S$-underapproximation of $f_1$, we have that $\mathrm{Ran}_G K(f_1)\leq_S f_2$. Then $\mathrm{Ran}_G K(f_1)\leq_S \mathrm{Ran}_G K(f_2)$ by the definition of the maximal $S$-underapproximation of $f_2$.

Next, we will show that $\mathrm{Lan}_G K$ and $\mathrm{Ran}_G K$ are respectively the left and right Kan extensions when they exist. First, by Proposition A.12, if $f\in S^1_{f_c}$ then $f$ must be both the minimal $S$-overapproximation and maximal $S$-underapproximation of $f$. Therefore we have:

$$K(f)=\mathrm{Lan}_G K(f)=\mathrm{Ran}_G K(f)$$

Next, consider any monotonic map $M_L:\mathbf{F_B}\to\mathbf{F_C}$ such that $\forall f\in S^1_{f_c},\ K(f)\leq_S M_L(f)$. Since $f=_S K(f)$ this implies $f\leq_S M_L(f)$, so by the definition of the minimal $S$-overapproximation, $\mathrm{Lan}_G K(f)\leq_S M_L(f)$. Next, consider any monotonic map $M_R:\mathbf{F_B}\to\mathbf{F_C}$ such that $\forall f\in S^1_{f_c},\ M_R(f)\leq_S K(f)$. Since $K(f)=_S f$ this implies $M_R(f)\leq_S f$, so by the definition of the maximal $S$-underapproximation, $M_R(f)\leq_S \mathrm{Ran}_G K(f)$.

Intuitively, the Kan extensions of the inclusion map $K:\mathbf{F_A}\to\mathbf{F_C}$ along the inclusion map $G:\mathbf{F_A}\to\mathbf{F_B}$ map a function $g\in S^2_f$ to its best $S^1_f$-approximations over the points in $S$.

For example, suppose $I=O=\mathbb{R}$, $g$ is a polynomial, $S^1_f$ is the set of lines defined by all pairs of points in $S$, and $S^2_f=S^1_f\cup\{g\}$. $\mathrm{Lan}_G K$ and $\mathrm{Ran}_G K$ may or may not exist depending on the choice of $S$ and $g$. In Figure 3 we give an example $S,g$ in which $\mathrm{Lan}_G K$ exists and $\mathrm{Ran}_G K$ does not (left) and an example $S,g$ in which $\mathrm{Ran}_G K$ exists and $\mathrm{Lan}_G K$ does not (right).

As another example, suppose $I=O=\mathbb{R}$, $S^1_f$ is a subset of all polynomials of degree $|S|-1$ and $S^2_f$ is a subset of all functions $\mathbb{R}\to\mathbb{R}$. Since there always exists a unique $n-1$ degree polynomial through $n$ unique points, for any $S$ there exists some $S^1_f$ so that both $\mathrm{Lan}_G K$ and $\mathrm{Ran}_G K$ exist and map $g\in S^2_f$ to the unique $|S|-1$ degree polynomial that passes through the points $\{(x,g(x))\ \mid\ x\in S\}$.

![25_image_0.png](25_image_0.png)

Figure 3: *Left and right Kan extensions of* $K:\mathbf{F_A}\to\mathbf{F_C}$ *along* $G:\mathbf{F_A}\to\mathbf{F_B}$ *for two example sets* $S$ *and polynomials* $g$, *where* $S^1_f$ *is the class of lines and* $S^2_f=S^1_f\cup\{g\}$.
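For intuition, here is a rough sketch of computing the minimal $S$-overapproximation and maximal $S$-underapproximation from Proposition A.13 in the lines-approximating-a-polynomial setting. The specific polynomial, sample set, and line family are arbitrary illustrative choices of ours; as in Figure 3, one or both approximations may fail to exist, which the sketch signals by returning `None`.

```python
import itertools

S = [-1.0, 0.0, 1.0, 2.0]
g = lambda x: x ** 3 - x                       # polynomial to approximate

# Candidate lines h_{a,b}(x) = a*x + b with small integer coefficients.
lines = {(a, b): (lambda x, a=a, b=b: a * x + b)
         for a, b in itertools.product(range(-3, 4), repeat=2)}

def leq_S(f1, f2):                             # f1 <=_S f2: pointwise on S
    return all(f1(x) <= f2(x) for x in S)

overs = {ab: h for ab, h in lines.items() if leq_S(g, h)}
unders = {ab: h for ab, h in lines.items() if leq_S(h, g)}

# Minimal S-overapproximation: a <=_S-least element of `overs`, if any.
lan = next((ab for ab, h in overs.items()
            if all(leq_S(h, hp) for hp in overs.values())), None)
# Maximal S-underapproximation: a <=_S-greatest element of `unders`.
ran = next((ab for ab, h in unders.items()
            if all(leq_S(hp, h) for hp in unders.values())), None)
print(lan, ran)   # here: (2, 2) and None, so Ran_G K does not exist
```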
Review 1:

Summary: The paper proposes ways of using Kan extensions to functionally define methods for a few learning tasks, in particular varieties of classification tasks. The first task is simply binary classification with real-valued features, and the classifier has the form of a threshold on some (unspecified) norm of a linear transformation of the instance's features. In each case, there are two classifiers (obtained from the "left" and "right" Kan extensions), which are respectively biased towards positive and negative classification. The second task is essentially a multi-class classification problem where various sets of points lie in various metric spaces (i.e., are labeled by various distances that obey the triangle inequality, etc.) -- it is framed as a clustering problem, but the objective does not (seem to) have much to do with the geometry of the points, but rather recovering a specified set of partitions that we can view as class labels. Some small experiments on Fashion MNIST demonstrate that the proposed classifiers obtain nontrivial (but surely not state-of-the-art) performance on these tasks.

Strengths and Weaknesses: The main strength of the work is the unusual perspective it takes. As far as I am aware, the proposed approach to obtaining classifiers via Kan extensions is novel. I have not seen these notions used in machine learning previously. There are a handful of weaknesses, however: most critically, the paper is unclear on some key points and misses some very relevant prior work (see the requested changes, below). The experiments are somewhat weak, especially the first experiment, which does not even include a baseline. I also generally found the formulation of the "clustering" task confusing. It is very strange to me that the resulting "clustering" map acts on metric spaces and produces partitions. I'd intuitively have expected it to map points from Ob(D) to an index for some part of the partition of Ob(D) (or simply output a partition rather than a partition-valued mapping). Finally, it seems a bit piecemeal: we can use Kan extensions to obtain constructions of classifiers for some tasks, but it isn't clear why these classifiers would be desirable in any particular way. Also, in each case, it appears that some ad hoc choices were made in the constructions, e.g., the use of UMAP to induce a metric on Fashion MNIST. I would view it more favorably if there were some general principles that defined the classifiers, indicated when and why this approach would be likely to be useful, etc.

Requested Changes: I am confused about a few key points in the proposed (example) constructions. First, what norm is being used by the classifier in the first, binary classification task? Second, what role does the collection of nonexpansive maps play in the second task? These must be clarified in the text. Some related work is missing, and consequently the discussion of the Kan extension approach versus "traditional machine learning" borders on factually incorrect: learning tasks of precisely the form "minimize false positives subject to no false negatives on some set" have been considered previously, under various names. See for example the "heuristic learning" task considered by Pitt and Valiant [1] and "positive/negative reliable learning" as considered by Kalai et al. [2]. The work should note that the Kan extension approach solves tasks in such models (and must not suggest that this is a novel formulation).
Ideally the work would compare to the existing algorithms for reliable/heuristic learning in the experiments, especially the first classification experiment, which currently has no baseline (except, e.g., "trivial performance").

[1] L. Pitt and L. G. Valiant. Computational limitations on learning from examples. Journal of the ACM (JACM), 35(4):965-984, 1988.

[2] A. T. Kalai, V. Kanade, and Y. Mansour. Reliable agnostic learning. Journal of Computer and System Sciences, 78(5):1481-1495, 2012.

Broader Impact Concerns: No concerns.

==================================================

Review 2:

Summary: I have to state that I know nothing about the category theory that the authors are using in this paper, so I understand very little of it. If I represent a regular ML researcher, this paper does not seem to be a good fit for TMLR because it uses too much mathematical content beyond the knowledge of a regular ML researcher. The authors should consider submitting it to an applied math journal, or to an ML journal with a long and detailed introduction to the math. This paper introduces the Kan extension to the classification and clustering problem.

Strengths and Weaknesses:

+ New math concept.

- Very poor explanation. The writing is so bad that most ML researchers will not be able to understand it. There are too many math concepts that are not in the usual ML curriculum.

- Lack of motivation. Why do we want to use Kan extensions? Do they provide an improvement in performance? Do they show connections among different methods? The motivation of this paper is totally unclear to me.

- Unclear how to use the proposed method. There is no clear explanation of how to construct the Kan extension from a given classifier or clustering algorithm.

- Limitation of the method. The idea relies heavily on the pre-order. However, a feature space often does not have a pre-order, especially when there is more than one variable.

Requested Changes:

1. I recommend that the authors resubmit this paper to an applied math journal. The math concepts are interesting, but it is unclear to me how they can be useful in ML problems.

2. Provide more explanations and improve the writing. The authors have to improve the writing and provide more explanations so that regular ML researchers can understand the contents. For instance, how will a classifier be represented in the diagram (I think a classifier will represent a map)? And how can we think of the map between the other two pre-order sets?

3. Provide a clear motivation. As mentioned in the weaknesses, there is no clear motivation for why we need the mathematics introduced in this paper. It seems to me that this is a very cool mathematical concept, but it is not clear to me whether it will be useful in practice.

4. Provide a clear explanation of how to use the method. There is no clear explanation of how to use this extension. Do we need to start with a classifier? How do we construct another map between pre-order sets? Does this relate to feature transformations derived from kernel methods?

Broader Impact Concerns: This paper does not include a statement on the broader impact. And since I cannot understand most of the paper, I cannot judge its broader impact.

==================================================

Review 3:

Summary: This paper explains how right and left Kan extensions, tools from category theory, can be used in a machine learning setting. Despite being rather unfamiliar with category theory, I found the paper to be quite an interesting read.
For the same reason, and to hand in my review on time (and before my own vacations), I could only perform a superficial check of the mathematics of the paper, but it is correct as far as I can tell (nevertheless, it should be checked by an expert in the field). The paper is very well-written and pleasant to read, with examples helping to grasp concepts expressed in a mathematically abstract way. Some experiments are conducted to show how those concepts can be applied, with quite satisfying results.

Strengths and Weaknesses:

+: an interesting take on learning problems, connecting them to category theory

+: a very well-written paper

+: examples and first experiments demonstrating the applicability of the ideas and their potential usefulness

No real perceived weaknesses, but my expertise in the mathematics used is limited.

Requested Changes: The paper is quite nice as it is, and I would not especially change anything. I have a couple of questions that the authors may wish to answer here (and maybe also in the paper, but I do not consider it necessary):

* Since Kan extensions heavily rely on the concept of ordered spaces and mappings between them, I was wondering whether they would not be an ideal tool to use in the context of monotonic classification, where one assumes monotonic relationships between the attributes and the resulting (ordered) class? See for example "Monotonic classification: An overview on algorithms, performance measures and data sets" for a recent review.

* Similarly, I am wondering how easy it would be to extend the proposal made for classification to the case where the set of classes is unordered? Should one impose an order/permutation, or proceed pairwise?

* Regarding the first experiment, what would the results be if we rejected all those samples for which the two extensions disagree? Said otherwise, is disagreement between Kan extensions a useful tool to implement a (partial) reject option?

* Another connection that seems possible to make is with the notion of rough sets, and the different regions they can induce in the sample/class space; the two notions seem related. A quick search also shows some papers trying to connect the two notions, but those appear to be rare (e.g., https://ieeexplore.ieee.org/document/4666018/).

* Finally, regarding the clustering case, it seems the approach can handle "must-link" constraints well when we learn that two objects should be together, but I was wondering whether it could handle "cannot-link" constraints, i.e., specifying that two objects cannot be together in the resulting clustering and guaranteeing that this will be the case in the right and left extensions.

Broader Impact Concerns: No concerns

==================================================

Metareview:

Recommendation: Reject

Comment: The paper proposes a potentially interesting point of view connecting the Kan extension to machine learning. However, the reviewers found several issues, including clarity, connections to related work, and comparison to appropriate baselines, that should be addressed. The authors ignored most of the feedback and did not explain how they would correct these issues. Therefore, the submission is rejected. The authors are encouraged to address the issues highlighted by the reviewers and resubmit.

==================================================
# Models Of Human Preference For Learning Reward Functions

W. Bradley Knox∗† bradknox@cs.utexas.edu
Bosch
The University of Texas at Austin
Google Research

Stephane Hatgis-Kessell∗ stephane@cs.utexas.edu
The University of Texas at Austin

Serena Booth sbooth@mit.edu
Bosch
MIT CSAIL

Scott Niekum sniekum@cs.utexas.edu
The University of Texas at Austin
University of Massachusetts Amherst

Peter Stone pstone@cs.utexas.edu
The University of Texas at Austin
Sony AI

Alessandro Allievi alessandro.allievi@us.bosch.com
Bosch

Reviewed on OpenReview: *https://openreview.net/forum?id=hpKJkVoThY*

![0_image_0.png](0_image_0.png)
![0_image_1.png](0_image_1.png)

Figure 1: Illustrations of segment pairs for which the common partial return preference model poorly explains intuitive human preference. The task has −1 reward each time step, penalizing time taken to reach the goal. In both pairs, both segments have the same partial return (−2), but the one on the right is nonetheless the intuitive choice. Additionally, the right segment in each pair consists only of optimal actions, whereas the left segment includes at least one suboptimal action. Regret, which our proposed preference model is based upon, is designed to measure a segment's deviation from optimal decision making. The right segment in each pair is therefore more likely to be preferred by a regret preference model. In the left pair (different end state), the preferred segment has a higher end state value. In the right pair (different start state), the preferred segment has a lower start state value, indicating a lower opportunity cost (i.e., it did not waste a more valuable start state).

∗The first two authors contributed equally. †Work partially done while at Bosch and Google Research

## Abstract

The utility of reinforcement learning is limited by the alignment of reward functions with the interests of human stakeholders. One promising method for alignment is to learn the reward function from human-generated preferences between pairs of trajectory segments, a type of reinforcement learning from human feedback (RLHF). These human preferences are typically assumed to be informed solely by partial return, the sum of rewards along each segment. We find this assumption to be flawed and propose modeling human preferences instead as informed by each segment's regret, a measure of a segment's deviation from optimal decision-making. Given infinitely many preferences generated according to regret, we prove that we can identify a reward function equivalent to the reward function that generated those preferences, and we prove that the previous partial return model lacks this identifiability property in multiple contexts. We empirically show that our proposed regret preference model outperforms the partial return preference model with finite training data in otherwise the same setting. Additionally, we find that our proposed regret preference model better predicts real *human* preferences and also learns reward functions from these preferences that lead to policies that are better human-aligned. Overall, this work establishes that the choice of preference model is impactful, and our proposed regret preference model provides an improvement upon a core assumption of recent research. We have open sourced our experimental code, the human preferences dataset we gathered, and our training and preference elicitation interfaces for gathering such a dataset.
## 1 Introduction

Improvements in reinforcement learning (RL) have led to notable recent achievements (Silver et al., 2016; Senior et al., 2020; Vinyals et al., 2019; Bellemare et al., 2020; Berner et al., 2019; Degrave et al., 2022; Wurman et al., 2022), increasing its applicability to real-world problems. Yet, like all optimization algorithms, even *perfect* RL optimization is limited by the objective it optimizes. For RL, this objective is created in large part by the reward function. Poor alignment between reward functions and the interests of human stakeholders limits the utility of RL and may even pose risks of financial cost and human injury or death (Amodei et al., 2016; Knox et al., 2021).

Influential recent research has focused on learning reward functions from preferences over pairs of trajectory segments, a common form of reinforcement learning from human feedback (RLHF). Nearly all of this recent work assumes that human preferences arise probabilistically from *only* the sum of rewards over a segment, i.e., the segment's **partial return** (Christiano et al., 2017; Sadigh et al., 2017; Ibarz et al., 2018; Bıyık et al., 2021; Lee et al., 2021a;b; Ziegler et al., 2019; Wang et al., 2022; Ouyang et al., 2022; Bai et al., 2022; Glaese et al., 2022; OpenAI, 2022). That is, these works assume that people tend to prefer trajectory segments that yield greater accumulated rewards *during the segment*. However, this preference model ignores seemingly important information about the segment's desirability, including the state values of the segment's start and end states. Separately, this partial return preference model can prefer suboptimal actions with lucky outcomes, like buying a lottery ticket.

This paper proposes an alternative preference model based on the **regret** of each segment, which is a measure of how much each segment deviates from optimal decision-making. More precisely, regret is the negated sum of an optimal policy's advantage of each transition in the segment (Section 2.2). Figures 1 and 2 show intuitive examples of when these two models disagree. Some examples of domains where the preference models will differ are those with constant reward until the end, including competitive games like chess, go, and soccer as well as tasks for which the objective is to minimize time until reaching a goal.

For these two preference models, we first focus theoretically on a normative analysis (Section 3)—i.e., what preference model would we *want* humans to use if we could choose one based on how informative its generated preferences are—proving that reward learning on infinite, exhaustive preferences with our proposed regret preference model identifies a reward function with the same set of optimal policies as the reward function with which the preferences are generated. We also prove that the partial return preference model is not guaranteed to identify such a reward function in three different contexts: without preference noise, when trajectories of different lengths are possible from a state, and when segments consist of only one transition. We follow up with a descriptive analysis of how well each of these proposed models aligns with *actual* human preferences by collecting a human-labeled dataset of preferences in a rich grid world domain (Section 4) and showing that the regret preference model better predicts these human preferences (Section 5).
Finally, we find that the policies ultimately created through the regret preference model tend to outperform those from learning with the partial return model—both when assessed with collected human preferences and when assessed with synthetic preferences (Section 6). Our code for learning and for re-running our main experiments can be found here, alongside our interface for training subjects and for preference elicitation. The human preferences dataset is available here (Knox et al., 2023).

In summary, our primary contributions are five-fold:

1. We propose a new model for human preferences that is based on regret instead of partial return.
2. We theoretically validate that this regret-based model has the desirable characteristic of reward identifiability, and that the partial return model does not.
3. We empirically validate that when each preference model learns from a preferences dataset it created, this regret-based model leads to better-aligned policies.
4. We empirically validate that, with a collected dataset of human preferences, this regret-based model both better describes the human preferences and leads to better-aligned policies.
5. Overall, we show that the choice of preference model impacts the alignment of learned reward functions.

## 2 Preference Models For Learning Reward Functions

We assume that the task environment is a Markov decision process (MDP) specified by the tuple $(S,A,T,\gamma,D_0,r)$. $S$ and $A$ are the sets of possible states and actions, respectively. $T$ is a transition function, $T:S\times A\to p(\cdot|s,a)$; $\gamma$ is the discount factor; and $D_0$ is the distribution of start states. Unless otherwise stated, we assume all tasks are undiscounted (i.e., $\gamma=1$) and have terminal states, after which only 0 reward can be received. Discounting is considered in depth in Appendix B.2. $r$ is a reward function, $r:S\times A\times S\to\mathbb{R}$, where the reward $r_t$ at time $t$ is a function of $s_t$, $a_t$, and $s_{t+1}$. An MDP\r is an MDP without a reward function. Throughout this paper, $r$ refers to the ground-truth reward function for some MDP; $\hat r$ refers to a learned approximation of $r$; and $\tilde r$ refers to any reward function (including $r$ or $\hat r$). A policy ($\pi:S\times A\to[0,1]$) specifies the probability of an action given a state. $Q^\pi_{\tilde r}$ and $V^\pi_{\tilde r}$ refer respectively to the state-action value function and state value function for a policy, $\pi$, under $\tilde r$, and are defined as follows.

$$V_{\tilde r}^{\pi}(s)=\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\tilde r(s_{t},a_{t},s_{t+1})\ \Big|\ s_{0}=s\Big]$$

$$Q_{\tilde r}^{\pi}(s,a)=\mathbb{E}\big[\tilde r(s,a,s')+V_{\tilde r}^{\pi}(s')\big]$$

An optimal policy $\pi^*$ is any policy where $V^{\pi^*}_{\tilde r}(s)\geq V^{\pi}_{\tilde r}(s)$ at every state $s$ for every policy $\pi$. We write shorthand for $Q^{\pi^*}_{\tilde r}$ and $V^{\pi^*}_{\tilde r}$ as $Q^*_{\tilde r}$ and $V^*_{\tilde r}$, respectively. The optimal advantage function is defined as $A^*_{\tilde r}(s,a)\triangleq Q^*_{\tilde r}(s,a)-V^*_{\tilde r}(s)$; this measures how much an action reduces expected return relative to following an optimal policy. Throughout this paper, the ground-truth reward function $r$ is used to algorithmically generate preferences when they are not human-generated, is hidden during reward learning, and is used to evaluate the performance of optimal policies under a learned $\hat r$.
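For concreteness, below is a minimal sketch of computing $V^\pi_{\tilde r}$ and $Q^\pi_{\tilde r}$ for a finite MDP by iterative policy evaluation. The array layout and names are our assumptions, not the paper's code, and convergence with $\gamma=1$ assumes an episodic task in which the policy eventually reaches a terminal state.

```python
import numpy as np

def evaluate_policy(T, R, pi, terminal, gamma=1.0, iters=1000):
    # T[s, a, s2]: transition probability; R[s, a, s2]: reward;
    # pi[s, a]: action probability; terminal: boolean array over states.
    n_s, n_a, _ = T.shape
    V = np.zeros(n_s)
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        # Q(s, a) = sum_s2 T(s2|s, a) * [R(s, a, s2) + gamma * V(s2)]
        Q = (T * (R + gamma * V[None, None, :])).sum(axis=2)
        # V(s) = sum_a pi(a|s) * Q(s, a), with terminal states pinned to 0.
        V = np.where(terminal, 0.0, (pi * Q).sum(axis=1))
    return V, Q
```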
## 2.1 Reward Learning From Pairwise Preferences

A reward function can be learned by minimizing the cross-entropy loss—i.e., maximizing the likelihood—of observed human preferences, a common approach in recent literature (Christiano et al., 2017; Ibarz et al., 2018; Wang et al., 2022; Bıyık et al., 2021; Sadigh et al., 2017; Lee et al., 2021a;b; Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022; Glaese et al., 2022; OpenAI, 2022).

Segments Let $\sigma$ denote a segment starting at state $s^\sigma_0$. Its length $|\sigma|$ is the number of transitions within the segment. A segment includes $|\sigma|+1$ states and $|\sigma|$ actions: $(s^\sigma_0,a^\sigma_0,s^\sigma_1,a^\sigma_1,\dots,s^\sigma_{|\sigma|})$. In this problem setting, segments lack any reward information. As shorthand, we define $\sigma_t\triangleq(s^\sigma_t,a^\sigma_t,s^\sigma_{t+1})$. A segment $\sigma$ is **optimal** with respect to $\tilde r$ if, for every $i\in\{1,\dots,|\sigma|-1\}$, $Q^*_{\tilde r}(s^\sigma_i,a^\sigma_i)=V^*_{\tilde r}(s^\sigma_i)$. A segment that is not optimal is **suboptimal**. Given some $\tilde r$ and a segment $\sigma$, $\tilde r^\sigma_t\triangleq \tilde r(s^\sigma_t,a^\sigma_t,s^\sigma_{t+1})$, and the undiscounted **partial return** of a segment $\sigma$ is $\sum_{t=0}^{|\sigma|-1}\tilde r^\sigma_t$, denoted in shorthand as $\Sigma_\sigma\tilde r$.

Preference datasets Each preference over a pair of segments creates a sample $(\sigma_1,\sigma_2,\mu)$ in a preference dataset $D_\succ$. Vector $\mu=\langle\mu_1,\mu_2\rangle$ represents the preference; specifically, if $\sigma_1$ is preferred over $\sigma_2$, denoted $\sigma_1\succ\sigma_2$, $\mu=\langle 1,0\rangle$. $\mu$ is $\langle 0,1\rangle$ if $\sigma_1\prec\sigma_2$ and is $\langle 0.5,0.5\rangle$ for $\sigma_1\sim\sigma_2$ (no preference). For a sample $(\sigma_1,\sigma_2,\mu)$, we assume that the two segments have equal lengths (i.e., $|\sigma_1|=|\sigma_2|$).

Loss function To learn a reward function from a preference dataset, $D_\succ$, a common assumption is that these preferences were generated by a preference model $P$ that arises from an unobservable *ground-truth* reward function $r$. We approximate $r$ by minimizing cross-entropy loss to learn $\hat r$:

$$loss(\hat{r},D_{\succ})=-\sum_{(\sigma_{1},\sigma_{2},\mu)\in D_{\succ}}\mu_{1}\log P(\sigma_{1}\succ\sigma_{2}|\hat{r})+\mu_{2}\log P(\sigma_{1}\prec\sigma_{2}|\hat{r})\qquad(1)$$

For a single sample where $\sigma_1\succ\sigma_2$, the sample's likelihood is $P(\sigma_1\succ\sigma_2|\hat r)$ and its loss is therefore $-\log P(\sigma_1\succ\sigma_2|\hat r)$. If $\sigma_1\prec\sigma_2$, its likelihood is $1-P(\sigma_1\succ\sigma_2|\hat r)$. This loss is under-specified until $P(\sigma_1\succ\sigma_2|\hat r)$ is defined, which is the focus of this paper. We show that the common partial return model of preference probabilities is flawed and introduce an improved regret-based preference model.

Preference models A preference model determines the probability of one trajectory segment being preferred over another, $P(\sigma_1\succ\sigma_2|\tilde r)$. $P(\sigma_1\succ\sigma_2|\tilde r)+P(\sigma_1\sim\sigma_2|\tilde r)+P(\sigma_1\prec\sigma_2|\tilde r)=1$, and $P(\sigma_1\sim\sigma_2|\tilde r)=0$ for the preference models considered herein. Preference models could be applied to model preferences provided by humans or other systems. Preference models can also directly generate preferences, and in such cases we refer to them as **preference generators**.

## 2.2 Choice Of Preference Model: Partial Return And Regret

Partial return All aforementioned recent work assumes human preferences are generated by a Boltzmann distribution over the two segments' partial returns, expressed here as a logistic function.¹

$$P_{\Sigma r}(\sigma_{1}\succ\sigma_{2}|\hat{r})=\mathrm{logistic}\Big(\Sigma_{\sigma_{1}}\hat{r}-\Sigma_{\sigma_{2}}\hat{r}\Big).\qquad(2)$$

¹See Appendix B.1 for a derivation of this logistic expression from a Boltzmann distribution with a temperature of 1. Unless otherwise stated, we ignore the temperature because scaling reward has the same effect when preference probabilities are not deterministic. The temperature is allowed to vary for our theory in Section 3. Another context in which the temperature parameter would be useful is when learning a single reward function with a loss function that includes one or more loss terms in addition to the formula in Equation 1; in such a case, scaling reward might undesirably affect the other loss term(s), whereas varying the Boltzmann temperature changes the preference entropy without affecting the other loss term(s).
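A minimal sketch of Equations 1 and 2 as a differentiable loss is shown below, assuming a learned reward model `r_hat` that maps each transition in a segment to a scalar reward. The tensor shapes and names are our assumptions, not the paper's released code.

```python
import torch

def preference_loss(r_hat, seg1, seg2, mu):
    # seg1, seg2: (batch, |sigma|, transition_dim) transition features;
    # mu: (batch, 2) preference labels; r_hat(seg) -> (batch, |sigma|).
    ret1 = r_hat(seg1).sum(dim=1)            # partial return of sigma_1
    ret2 = r_hat(seg2).sum(dim=1)            # partial return of sigma_2
    p1 = torch.sigmoid(ret1 - ret2)          # P(sigma_1 > sigma_2 | r_hat)
    p1 = p1.clamp(1e-7, 1 - 1e-7)            # numerical safety for the logs
    return -(mu[:, 0] * torch.log(p1) + mu[:, 1] * torch.log(1 - p1)).mean()
```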
Regret We introduce an alternative preference model based on the regret of each transition in a segment. We first focus on segments with deterministic transitions. For a transition $(s_t,a_t,s_{t+1})$ in a deterministic segment, $regret_d(\sigma_t|\tilde r)\triangleq V^*_{\tilde r}(s^\sigma_t)-[\tilde r_t+V^*_{\tilde r}(s^\sigma_{t+1})]$. The subscript $d$ in $regret_d$ signifies the assumption of deterministic transitions. For a full deterministic segment,

$$regret_{d}(\sigma|\tilde{r})\triangleq\sum_{t=0}^{|\sigma|-1}regret_{d}(\sigma_{t}|\tilde{r})=V_{\tilde{r}}^{*}(s_{0}^{\sigma})-\big(\Sigma_{\sigma}\tilde{r}+V_{\tilde{r}}^{*}(s_{|\sigma|}^{\sigma})\big),\qquad(3)$$

with the right-hand expression arising from cancelling out intermediate state values. Therefore, deterministic regret measures how much the segment reduces expected return from $V^*_{\tilde r}(s^\sigma_0)$. An optimal segment, $\sigma^*$, always has 0 regret, and a suboptimal segment, $\sigma^{\neg *}$, will always have positive regret, an intuitively appealing property that also plays a role in the identifiability proof of Theorem 3.1.

Stochastic state transitions, however, can result in $regret_d(\sigma^*|\tilde r)>regret_d(\sigma^{\neg *}|\tilde r)$, losing the property above. For instance, an optimal action can lead to worse return than a suboptimal action, based on stochasticity in state transitions. To retain this property that optimal segments have a regret of 0 and suboptimal segments have positive regret, we first note that the effect on expected return of transition stochasticity from a transition $(s_t,a_t,s_{t+1})$ is $[\tilde r_t+V^*_{\tilde r}(s_{t+1})]-Q^*_{\tilde r}(s_t,a_t)$ and add this expression once per transition to get $regret(\sigma)$, removing the subscript $d$ that refers to determinism. The regret for a single transition becomes $regret(\sigma_t|\tilde r)=\big[V^*_{\tilde r}(s^\sigma_t)-[\tilde r_t+V^*_{\tilde r}(s^\sigma_{t+1})]\big]+\big[[\tilde r_t+V^*_{\tilde r}(s^\sigma_{t+1})]-Q^*_{\tilde r}(s^\sigma_t,a^\sigma_t)\big]=V^*_{\tilde r}(s^\sigma_t)-Q^*_{\tilde r}(s^\sigma_t,a^\sigma_t)=-A^*_{\tilde r}(s^\sigma_t,a^\sigma_t)$. Regret for a full segment is
These normative results include the theoretical analysis in Section 3 and the empirical results with synthetic data in Section 6.2 and Appendix F.2, with stochastic tasks specifically examined empirically in Appendix F.2.4. We gather human preferences for a deterministic task, which allows us to investigate the results with the more intuitive expression of *regret*d that includes partial return as a component. Algorithms in this paper All algorithms in the body of this paper can be summarized as "minimize Equation 1". They differ only in how the preference probabilities are calculated. All reward function learning via partial return uses Equation 2, replicating the dominant algorithm in recent literature (Christiano et al., 2017; Ibarz et al., 2018; Wang et al., 2022; Bıyık et al., 2021; Sadigh et al., 2017; Lee et al., 2021a;b; Ouyang et al., 2022). We use two algorithms for reward function learning via regret. The theory in Section 3 assumes exact measurement of regret, using Equation 5. Section 6 introduces Equation 6 to approximate regretreplacing Equation 5 to create another algorithm—and uses the resulting algorithm for our experimental results later in that section. Appendix B introduces other algorithms that use Equation 1, as well as one in Appendix B.4 that generalizes Equation 1. Regret as a model for human preference P*regret* makes at least three assumptions worth noting. First, it keeps the assumption that human preferences follow a Boltzmann distribution over some statistic, which is a common model of choice behavior in economics and psychology, where it is called the Luce-Shepard choice rule (Luce, 1959; Shepard, 1957). Second, P*regret* implicitly assumes humans can identify optimal and suboptimal segments when they see them, which will be less true in domains where the human has less expertise. This assumption is similar to a common assumption of many algorithms for imitation learning, that humans can provide demonstrations that are optimal or noisily optimal (e.g., Abbeel & Ng (2004)). ![4_image_0.png](4_image_0.png) Figure 2: Two segments of a car moving at high speed near a brick wall. On the left, a car moves toward a brick wall; a bad crash is imminent, but has not yet occurred. On the right, a car escapes an imminent crash against a brick wall with only a scrape. Assume the right segment is optimal and the left segment is suboptimal (as defined in Sec. 2.1). The left segment has a higher sum of reward, so it is preferred under the partial return preference model. The right segment is preferred under the regret preference model since optimal segments have minimal regret. If we also assume deterministic transitions, then the regret model includes the difference in values between the start state and the end state (Equation 3), and the right segment would tend to be preferred because it greatly improves its state values from start to end, whereas the left segment's state values greatly worsen. We suspect our human readers will also tend to prefer the right segment. Lastly, P*regret* assumes that in stochastic settings where the best *outcome* may only result from suboptimal decisions (e.g., buying a lottery ticket), humans instead prefer optimal *decisions*. We suspect humans are capable of expressing either type of preference—based on decision quality or desirability of outcomes—and can be influenced by training or the preference elicitation interface. 
Curiously, for stochastic tasks in which preferences are based upon segments' observed outcomes, a preference model that uses deterministic *regret*d in Equation 5 appears fitting, since it does not subtract out the effects of fortunate and unfortunate transitions but does include segments' start and end state values. In practice we determine that the regret model produces improvements over the partial return model (Section 6), and its assumptions represent an opportunity for follow-up research. ## 3 Theoretical Comparison In this section, we consider how different ways of *generating preferences* affect reward inference, setting aside whether humans can be influenced to give preferences in accordance with a specific preference method. In economics, this analysis—and all of our later analyses with synthetic preferences—could be considered a normative analysis. In artificial intelligence, this analysis might be cast as a step towards defining criteria for a rational preference model. The theorems and proofs below focus on identifiability, a property which determines whether the parameters of a model can be recovered from infinite, exhaustive samples generated by the model. A model is unidentifiable if any two parameterizations of the model result in the same model behavior. In our setting, the model of concern is a preference model and the parameters constitute the ground-truth reward function, r. A preference model is identifiable if an infinite, exhaustive preferences dataset created by the preference model contains sufficient information to infer a behaviorally equivalent reward function to r. Note that identifiability focuses on the preference model alone as a preference generator, not on the learning algorithm that uses such a preference model. This section uses preference models that include discounting (see Appendix B.2). We allow for discounting to make the theory more general and also because discounting is integral to Section 3.2.3. Here the notation for Q∗ r˜ (s,a) and V ∗ r˜ (s) is expanded to Q∗ (˜r,γ˜) (s,a) and V ∗ (˜r,γ˜) (s) respectively include the discount factor. To make the other content in this section specific to undiscounted tasks, simply assume all instances of γ˜= 1, including the ground-truth γ and the γˆ used during reward function inference and policy improvement. Definition 3.1 (An identifiable preference model). For a preference model P*, assume an infinite dataset* D≻ of pairs of segments is constructed by repeatedly choosing (σ1,σ2) and sampling a label µ∼P(σ1 ≻σ2|r)*, using* P as a preference generator. Further assume that, in this dataset, all possible segment pairs appear infinitely many times. For some M that is an MDP\(r,γ)*—an MDP with neither a reward function nor a discount factor—let* M(˜r,γ˜) be M with the reward function r˜ and the discount factor γ˜*. Let* Π∗ (˜r,γ˜) be the set of optimal policies for M(˜r,γ˜). Let problem-equivalence class R *be the set of all pairs of a reward function and a discount factor such* that if (r1,γ1),(r2,γ2)∈ R *then* Π∗ (r1,γ1) = Π∗ (r2,γ2) . Preference model P is **identifiable** *if and only if, for any* choice of segment length n and ground-truth M(r,γ), there exists an operation on D≻ *that always outputs a* (r, ˆ γˆ) that is in the same problem equivalence class as (r,γ)*. I.e.,* Π∗ (r,γ) =Π∗ (ˆr,γˆ) . ## 3.1 The Regret Preference Model Is Identifiable. We first prove that our proposed regret preference model is identifiable. Theorem 3.1 (P*regret* is identifiable). 
Let $P_{regret}$ be any function such that if $regret(\sigma_1|\tilde r,\tilde\gamma)<regret(\sigma_2|\tilde r,\tilde\gamma)$, $P_{regret}(\sigma_1\succ\sigma_2|\tilde r,\tilde\gamma)>0.5$, and if $regret(\sigma_1|\tilde r,\tilde\gamma)=regret(\sigma_2|\tilde r,\tilde\gamma)$, $P_{regret}(\sigma_1\succ\sigma_2|\tilde r,\tilde\gamma)=0.5$. $P_{regret}$ is identifiable.

This class of regret preference models includes but is not limited to the Boltzmann distribution of Equation 5. Additionally, it includes a version of the regret preference model that noiselessly always prefers the segment with lower regret, as Theorem 3.2 considers for the partial return preference model.²

²Equations 2 and 5 can be extended to include such noiseless preference models by including the temperature parameter of the Boltzmann distributions (after converting from their logistic formulations, reversing the derivation in Appendix B.1), where we assume that setting the temperature to 0 results in a hard maximum. In other words, when the temperature is 0 the preference is given deterministically to the segment with the higher partial return in Equation 2 or regret in Equation 5.

Consider reviewing the definitions of optimal segments and suboptimal segments in Section 2.1 before proceeding. For the proof below, we will apply the following **sufficiency test for identifiability**. Preference model $P$ is identifiable if, for any ground-truth $M_{(r,\gamma)}$, any $(\hat r,\hat\gamma)=\mathrm{argmin}_{(\tilde r,\tilde\gamma)}[loss(\tilde r,\tilde\gamma,D_\succ)]$—for the cross-entropy loss (Eqn. 8, which is Eqn. 1 generalized to include discounting), with $P$ as the preference model—is in the same problem-equivalence class as $(r,\gamma)$. I.e., $\Pi^*_{(r,\gamma)}=\Pi^*_{(\hat r,\hat\gamma)}$.

Proof Make all assumptions in Definition 3.1. Let $(\hat r,\hat\gamma)=\mathrm{argmin}_{(\tilde r,\tilde\gamma)}[loss(\tilde r,\tilde\gamma,D_\succ)]$, where $loss$ is the cross-entropy loss from Eqn. 8 with $P_{regret}$ as the preference model. Since $(\hat r,\hat\gamma)$ minimizes cross-entropy loss and is chosen from the complete space of reward functions and discount factors, $P_{regret}(\cdot|r,\gamma)=P_{regret}(\cdot|\hat r,\hat\gamma)$ for all possible segment pairs. Also, by Equation 12 (which generalizes Equation 4 to include discounting), $regret(\sigma|\tilde r,\tilde\gamma)=0$ if and only if $\sigma$ is optimal with respect to $\tilde r$. And $regret(\sigma|\tilde r,\tilde\gamma)>0$ if and only if $\sigma$ is suboptimal with respect to $(\tilde r,\tilde\gamma)$.

With respect to some $(\tilde r,\tilde\gamma)$, let $\sigma^*$ be any optimal segment and $\sigma^{\neg *}$ be any suboptimal segment. $regret(\sigma^*|\tilde r,\tilde\gamma)<regret(\sigma^{\neg *}|\tilde r,\tilde\gamma)$, so $P_{regret}(\sigma^*\succ\sigma^{\neg *}|\tilde r,\tilde\gamma)>0.5$. $P_{regret}(\cdot|\tilde r,\tilde\gamma)$ induces a total ordering over segments, defined by $regret(\sigma_1|\tilde r,\tilde\gamma)<regret(\sigma_2|\tilde r,\tilde\gamma)\iff P_{regret}(\sigma_1\succ\sigma_2|\tilde r,\tilde\gamma)>0.5\iff \sigma_1>\sigma_2$ and $regret(\sigma_1|\tilde r,\tilde\gamma)=regret(\sigma_2|\tilde r,\tilde\gamma)\iff P_{regret}(\sigma_1\succ\sigma_2|\tilde r,\tilde\gamma)=0.5\iff \sigma_1=\sigma_2$. Because regret has a minimum (0), there must be a set of segments which are ranked highest under this ordering, denoted $\Sigma^*_{(\tilde r,\tilde\gamma)}$. These segments in $\Sigma^*_{(\tilde r,\tilde\gamma)}$ are exactly those that achieve the minimum regret (0) and so are optimal with respect to $(\tilde r,\tilde\gamma)$. Since the dataset ($D_\succ$) contains all segments by assumption, $\Sigma^*_{(\tilde r,\tilde\gamma)}$ contains all optimal segments with respect to $(\tilde r,\tilde\gamma)$.

If a state-action pair $(s,a)$ is in an optimal segment, then by the definition of an optimal segment $Q^*_{(\tilde r,\tilde\gamma)}(s,a)=V^*_{(\tilde r,\tilde\gamma)}(s)$. The set of optimal policies $\Pi^*_{\tilde r}$ for $\tilde r$ is all $\pi$ such that, for all $(s,a)$, if $\pi(s,a)>0$, then $Q^*_{(\tilde r,\tilde\gamma)}(s,a)=V^*_{(\tilde r,\tilde\gamma)}(s)$. In short, $\Sigma^*_{(\tilde r,\tilde\gamma)}$ determines the set of each state-action pair $(s,a)$ such that $Q^*_{(\tilde r,\tilde\gamma)}(s,a)=V^*_{(\tilde r,\tilde\gamma)}(s)$. This set determines $\Pi^*_{(\tilde r,\tilde\gamma)}$. Therefore $\Sigma^*_{(\tilde r,\tilde\gamma)}$ determines $\Pi^*_{(\tilde r,\tilde\gamma)}$, and we will refer to this determination as the function $g$.
We now focus on the reward function and discount factor used to generate preferences, $(r,\gamma)$, and on the inferred reward function and discount factor, $(\hat r,\hat\gamma)$. Since $P_{regret}(\cdot|r,\gamma)=P_{regret}(\cdot|\hat r,\hat\gamma)$, $(r,\gamma)$ and $(\hat r,\hat\gamma)$ induce the same total ordering over segments, and so $\Sigma^*_{(r,\gamma)}=\Sigma^*_{(\hat r,\hat\gamma)}$. Therefore $g(\Sigma^*_{(r,\gamma)})=g(\Sigma^*_{(\hat r,\hat\gamma)})$. Since $g(\Sigma^*_{(r,\gamma)})=\Pi^*_{(r,\gamma)}$ and $g(\Sigma^*_{(\hat r,\hat\gamma)})=\Pi^*_{(\hat r,\hat\gamma)}$, $\Pi^*_{(r,\gamma)}=\Pi^*_{(\hat r,\hat\gamma)}$.

The proof above establishes the identifiability of $P_{regret}$ regardless of whether preferences are generated noiselessly or stochastically.

## 3.2 The Partial Return Preference Model Is Not Generally Identifiable.

In this subsection, we critique the previous standard preference model, the partial return model $P_{\Sigma r}$, by proving that this model can be unidentifiable in three different contexts.

- Given *noiseless* **preference labeling** by $P_{\Sigma r}$ in some MDPs, preferences never provide sufficient information to recover the set of optimal policies.
- **In variable-horizon tasks when the lengths of both segments in a pair are always equivalent.** In variable-horizon tasks—which include common tasks that terminate upon reaching success or failure states—reward functions that differ by a constant can have different sets of optimal policies. Yet for two such reward functions, the preference probabilities according to partial return will be identical.
- **With segment lengths of 1** ($|\sigma|=1$), the discount factor $\gamma$ does not affect the partial return preference model and therefore will not be recoverable from the preferences it generates. Since different values of $\gamma$ can determine different sets of optimal policies, an inability to recover $\gamma$ is a third type of unidentifiability.

We now prove in each of these three contexts that the partial return preference model is not identifiable. For each, we will apply the following **sufficiency test for non-identifiability**. Preference model $P$ is not identifiable if there exist two ground-truth MDPs, $M_{(r_1,\gamma_1)}$ and $M_{(r_2,\gamma_2)}$, such that $\Pi^*_{(r_1,\gamma_1)}\neq\Pi^*_{(r_2,\gamma_2)}$ and the infinite preference datasets created as described in Definition 3.1 by $P$ for $M_{(r_1,\gamma_1)}$ and $M_{(r_2,\gamma_2)}$ are identical. Note that such identical preference datasets lack the information to differentiate which MDP they came from.

## 3.2.1 Partial Return Is Not Identifiable When Preferences Are Noiseless.

Theorem 3.2 (Noiseless $P_{\Sigma r}$ is not identifiable). Let $P_{\Sigma r}$ be any function such that if $\Sigma_{\sigma_1}\tilde r>\Sigma_{\sigma_2}\tilde r$, $P_{\Sigma r}(\sigma_1\succ\sigma_2|\tilde r)=1$ and if $\Sigma_{\sigma_1}\tilde r=\Sigma_{\sigma_2}\tilde r$, $P_{\Sigma r}(\sigma_1\succ\sigma_2|\tilde r)=0.5$. There exists an MDP in which $P_{\Sigma r}$ is not identifiable.

Below we present two proofs of Theorem 3.2. Each is a proof by counterexample. The first is a general proof and the second proof assumes a common characteristic, that all segments in the preference dataset are the same length. Though only one proof is needed, we present two because each counterexample demonstrates a qualitatively different category of how the partial return preference model can fail to identify the set of optimal policies.

Proof based on stochastic transitions: Assume the following class of MDPs, illustrated in Figure 3. The agent always begins at start state $s_0$. From $s_0$, action $a_{safe}$ always transitions to $s_{safe}$, getting a reward of 0. From $s_0$, action $a_{risk}$ transitions to $s_{win}$ with probability 0.5, getting a reward of $r_{win}$, and transitions to $s_{lose}$ with probability 0.5, getting a reward of −10. In all MDPs in this class, $r_{win}>0$. All 3 possible resulting states ($s_{safe}$, $s_{win}$, and $s_{lose}$) are absorbing states, from which all further reward is 0.
![7_image_0.png](7_image_0.png)

Figure 3: A class of MDPs in which, if $r_{win}>0$, the partial return preference model fails the test for identifiability.

If $r_{win}\geq 10$, $a_{risk}$ is optimal in $s_0$. If $r_{win}\leq 10$, $a_{safe}$ is optimal in $s_0$. Three single-transition segments exist: $(s_0,a_{safe},s_{safe})$, $(s_0,a_{risk},s_{win})$, and $(s_0,a_{risk},s_{lose})$. By noiseless $P_{\Sigma r}$, $(s_0,a_{risk},s_{win})\succ(s_0,a_{safe},s_{safe})\succ(s_0,a_{risk},s_{lose})$, regardless of the value of $r_{win}$. In other words, $P_{\Sigma r}$ is insensitive to what the optimal action is from $s_0$ in this class of MDPs. Now assume MDP $M$, where $r_{win}=11$. In linear form, the weight vector for the reward function $r_M$ can be expressed as $w_{r_M}=\langle -10,0,11\rangle$. Let $\hat r_M$ have $w_{\hat r_M}=\langle -10,0,9\rangle$. Both $r_M$ and $\hat r_M$ have the same preferences as above, meaning that $\hat r_M$ minimizes loss on an infinite preferences dataset $D_\succ$ created by $P_{\Sigma r}$, yet it has a different optimal policy. Therefore, noiseless $P_{\Sigma r}$ is not identifiable.

In contrast, note that by noiseless $P_{regret}$, the preferences are different than those above for $P_{\Sigma r}$. If $r_{win}>10$, then $(s_0,a_{risk},s_{win})\sim(s_0,a_{risk},s_{lose})\succ(s_0,a_{safe},s_{safe})$. If $r_{win}<10$, then $(s_0,a_{safe},s_{safe})\succ(s_0,a_{risk},s_{win})\sim(s_0,a_{risk},s_{lose})$. Intuitively, this difference comes from $P_{regret}$ always giving higher preference probability to optimal actions, even if they result in bad outcomes.

Another perspective can be found in the utility theory of Von Neumann & Morgenstern (1944). Specifically, $P_{\Sigma r}$ gives preferences over outcomes, which in the terms of utility theory can only be used to infer an ordinal utility function. Ordinal utility functions are merely consistent with the preference ordering over outcomes and do not generally capture preferences over actions when their outcomes are stochastically determined. The deterministic regret preference model, $P_{regret_d}$, also has this weakness in tasks with stochastic transitions. On the other hand, $P_{regret}$ forms preferences over so-called lotteries—the distribution over possible outcomes—and can therefore learn a cardinal utility function, which can explain preferences over risky actions. See Russell & Norvig (2020, Ch. 16) for more detail on these concepts from expected utility theory.
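The counterexample above can be verified numerically. The sketch below is our own, with hypothetical names: it shows that the noiseless partial-return ordering of the three segments is the same for $r_{win}=11$ and $r_{win}=9$, while the optimal action at $s_0$ flips.

```python
for r_win in (11, 9):
    # Partial returns of the three single-transition segments.
    seg_returns = {"risk_win": r_win, "safe": 0.0, "risk_lose": -10.0}
    # Noiseless P_{Sigma r}: prefer the segment with higher partial return.
    ordering = sorted(seg_returns, key=seg_returns.get, reverse=True)
    # Expected return of each action at s0 determines optimality.
    q_risk = 0.5 * r_win + 0.5 * (-10.0)
    best = "a_risk" if q_risk > 0.0 else "a_safe"
    print(r_win, ordering, best)
# Both r_win values print the same preference ordering
# ['risk_win', 'safe', 'risk_lose'], but the optimal action flips
# from a_risk (0.5 > 0) to a_safe (-0.5 < 0).
```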
Since the proof above focuses upon stochastic transitions, we now show that the lack of identifiability of noiseless $P_{\Sigma r}$ can arise for quite different reasons in a deterministic MDP when the preference dataset has a common characteristic: all segments are the same length.

**Proof based on segments of fixed length:** Consider the MDP $M_1$ in Figure 4 and assume preferences are given over segments of length 1 (i.e., containing one transition). The optimal policy for $M_1$ is to move rightward from $s_0$, whereas optimal behavior for $M_1'$ is to move downward from $s_0$. In *both* $M_1$ and $M_1'$, preferences by $P_{\Sigma r}$ are as follows, omitting the action for brevity: $(s_a,s_0)\sim(s_a,s_{term})\sim(s_0,s_a)\succ(s_0,s_{term})$. As in the previous proof, $P_{\Sigma r}$ is insensitive to certain changes in the reward function that alter the set of optimal policies. Whenever this characteristic is found, $\Pi^*_r=\Pi^*_{\hat{r}}$ is not guaranteed, failing the test for identifiability. Here specifically, the reward function for $M_1'$ would achieve the minimum possible cross-entropy loss on an exhaustive preference dataset created in $M_1$ with the noiseless preferences from the partial return preference model, despite the optimal policy in $M_1'$ conflicting with the ground-truth optimal policy. The logic of this proof can be applied for trajectories of length 2 in the MDP $M_2$ shown in Figure 5.

Together, $M_1$ and $M_2$ suggest a rule for constructing an MDP where $\Pi^*_r=\Pi^*_{\hat{r}}$ is not guaranteed for $P_{\Sigma r}$, failing the identifiability test for any fixed segment length $|\sigma|$: set the number of states to the right of $s_0$ to $|\sigma|$ (not counting $s_{term}$), set the reward $r_{fail}$ for $(s_0,s_{term})$ such that $r_{fail}<0$, and set the reward for each other transition to $c+r_{fail}/(|\sigma|+1)$, where $c>0$. Given an MDP constructed this way, an alternative reward function that results in the same preferences under $P_{\Sigma r}$ yet has a different optimal action from $s_0$ can then be constructed by changing all rewards other than $r_{fail}$ to $c+r_{fail}/(|\sigma|+1)$, where $c$ now is constrained to $c<0$ and $c\times|\sigma|<r_{fail}$. Note that the set of preferences for each of these MDPs is the same even when including segments that reach a terminal state before $|\sigma|$ transitions (which can still be considered to be of length $|\sigma|$ if the terminal state is an absorbing state from which reward is 0).

Figure 4: An MDP ($M_1$) where $\Pi^*_r=\Pi^*_{\hat{r}}$ is not guaranteed for the partial return preference model, failing the test for identifiability with segments of length 1. The ground-truth reward function is shown to the left, and an MDP $M_1'$ with an alternative reward function is shown to the right. Under partial return, both create the same set of preferences despite having different optimal actions from $s_0$.

**Discussion of preference noise and identifiability** Of the two proofs by example for Theorem 3.2, the first proof's example reveals issues when learning reward functions with stochastic transitions with either $P_{\Sigma r}$ or the deterministic $P_{regret_d}$. These issues directly correspond to the need for preferences over distributions over outcomes (i.e., lotteries) to construct a cardinal utility function (see Russell & Norvig (2020, Ch. 16)). Correspondingly, when Skalse et al. (2022) consider reward identifiability with the partial return preference model, they change the learning problem such that a training sample consists of preferences between *distributions* over trajectories. Intuitively, Theorem 3.2 says that $P_{\Sigma r}$ is not identifiable without the distribution over preferences providing information about the proportions of rewards with respect to each other. In contrast, to be identifiable, the regret preference model does not require this preference error (though it can presumably benefit from it in certain contexts).

## 3.2.2 Partial Return Is Not Identifiable In Variable-Horizon Tasks.

Many common tasks have the characteristic of having at least one state from which trajectories of *different* lengths are possible, which we refer to as being a **variable-horizon task**. Tasks that terminate upon completing a goal typically have this characteristic. We also assume that for any pair of segments in the preference dataset, the lengths of those two segments are equivalent. This assumption describes typical practice (e.g., Christiano et al. (2017); Sadigh et al. (2017); Ibarz et al. (2018); Bıyık et al. (2021); Lee et al. (2021a); Wang et al. (2022)).

Figure 5: An MDP ($M_2$) where $\Pi^*_r=\Pi^*_{\hat{r}}$ is not guaranteed for the partial return preference model, failing the test for identifiability with segments of length 2. The ground-truth reward function is shown in the top diagram, and an MDP $M_2'$ with an alternative reward function is shown in the bottom diagram. Under partial return, both create the same set of preferences despite having different optimal actions from $s_0$.
In this context, we show another way that the partial return preference model is not identifiable, a limitation that has arisen dramatically in our own experiments and is not limited to noiseless preferences: adding a constant to a reward function will often change the set of optimal policies, but it will not change the probability of preference for any two segments. Therefore, those preferences will not contain the information needed to recover the set of optimal policies.

We now explain why such a constant shift will not change the probability of preference based upon partial return. Consider a constant value $c$ and two reward functions, $r_1$ and $r_2$, where $r_1(s_t,a_t,s_{t+1})-r_2(s_t,a_t,s_{t+1})=c$ for all transitions $(s_t,a_t,s_{t+1})$. The partial return of any segment of length $|\sigma|$ will be $c\times|\sigma|$ higher for $r_1$ than for $r_2$ (assuming an undiscounted task, $\gamma=1$). In the partial return preference model (Equation 2), this addition of $c\times|\sigma|$ to each segment's partial return cancels out, having no effect on the difference in the segments' partial returns and therefore also having no effect on the preference probabilities. Consequently, adding $c$ to a reward function's output will also not affect the distribution over preference datasets that the partial return preference model would create via sampling from its preference probabilities.

If, for each state in an MDP, all possible trajectories from that state have the *same* length, then adding a constant $c$ to the output of the reward function does not affect the set of optimal policies. Specifically, the set of optimal policies is preserved because the return of any trajectory from a state is changed by $c\times|\tau|$, where $|\tau|$ is the unchanging trajectory length from that state, so the ranking of trajectories by their returns is unchanged, as is the ranking of policies by their expected returns from that state. Continuing tasks and fixed-horizon tasks have this property. However, if trajectories from a state can terminate after *different* numbers of transitions, then two reward functions that differ by a constant can have different sets of optimal policies. Episodic tasks are often vulnerable to this issue. To illustrate, consider the task in Figure 1, a simple grid world task that penalizes the agent with $-1$ reward for each step it takes to reach the goal. If this reward per step is shifted to $+1$ (or any positive value), then any optimal policy will *avoid the goal*, flipping the objective of the task from that of reaching the goal. So, for variable-horizon tasks, $P_{\Sigma r}$ is not generally identifiable.

Though identifiability focuses on what information is encoded in preferences, this issue has practical consequences during *learning* from preferences over segments of length 1: for a preference dataset, all reward functions that differ by a constant assign the same likelihood to the dataset, making the choice between such reward functions arbitrary and the learning problem underspecified. Some past authors have acknowledged this insensitivity to a shift (Christiano et al., 2017; Lee et al., 2021a; Ouyang et al., 2022; Hejna & Sadigh, 2023), and the common practice of forcing all tasks to have a fixed horizon (Christiano et al., 2017; Gleave et al., 2022) may be partially attributable to $P_{\Sigma r}$'s lack of identifiability in variable-horizon tasks, leading to its low performance in such tasks.
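The cancellation argument above is easy to verify numerically. The snippet below is our illustration; the reward sequences and the shift value are arbitrary hypothetical choices.

```python
import math

def p_partial_return(seg1_rewards, seg2_rewards):
    """Partial return preference model (Equation 2), undiscounted:
    P(sigma_1 > sigma_2) = logistic(sum(r over sigma_1) - sum(r over sigma_2))."""
    diff = sum(seg1_rewards) - sum(seg2_rewards)
    return 1.0 / (1.0 + math.exp(-diff))

# Two hypothetical equal-length segments, expressed as reward sequences.
seg1 = [-1, -1, 1]
seg2 = [-2, -1, -1]

c = 5.0  # a constant added to every reward
shifted1 = [r + c for r in seg1]
shifted2 = [r + c for r in seg2]

print(p_partial_return(seg1, seg2))          # 0.9526...
print(p_partial_return(shifted1, shifted2))  # identical: c * |sigma| cancels out
```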
In Appendix F.2.2, we propose a stopgap solution to this problem and also observe that, in episodic grid worlds, the partial return preference model performs catastrophically poorly without this solution, both with synthetic preferences and human preferences.

**The regret preference model is appropriately affected by constant reward shifts.** Here we give intuition for why adding a constant $c$ to the output of a reward function does not threaten the identifiability of the regret preference model, as established in Theorem 3.1. As stated above, adding $c$ to reward can change the set of optimal policies. Any such change in what actions are optimal would likewise change the ordering of segments by regret, so the likelihood of a preference dataset according to the regret preference model *would* be affected by such a constant shift in the learned reward function (as it should be).

## 3.2.3 Partial Return Is Not Identifiable For Segment Lengths Of 1.

Arguably the most impactful application to date of learning reward functions from human preferences is fine-tuning large language models. For the most notable of these applications, the segment length is $|\sigma|=1$ (Ziegler et al., 2019; OpenAI, 2022; Glaese et al., 2022; Ouyang et al., 2022; Bai et al., 2022). Changing $\gamma$ often changes the set of optimal policies, yet when $|\sigma|=1$, changing the discount factor does not change preference probabilities based upon the partial return preference model. We elaborate below.

Here we make an exception to this article's default assumption that all tasks are undiscounted. As we describe in Appendix B.2, the discounted partial return of a segment is $\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^t\tilde{r}^\sigma_t$. We follow the standard convention that $0^0=1$. When $|\sigma|=1$, the partial return simplifies to the immediate reward, $\tilde{r}^\sigma_0$, regardless of the value of $\gamma$. Consequently, the partial return preference model is unaffected by the discount factor when $|\sigma|=1$. We leave to the reader the task of a precise proof by counterexample that partial return is not identifiable when $|\sigma|=1$; any two MDPs that differ only by their discount factor and have different sets of optimal policies will suffice, since the distribution of preferences according to partial return will be identical in each of these MDPs, establishing the lack of identifiability.

To remove this source of unidentifiability, the preference dataset would need to be presented to the learning algorithm with a corresponding discount factor. Past work on identifiability in this setting (Skalse et al., 2022) has assumed that the discount factor is given and does not discuss the topic further. As with the other identifiability issues demonstrated in this subsection, this issue has practical consequences during learning from preferences. When $|\sigma|=1$, the choice of $\hat{\gamma}$ is arbitrary, making the learning problem underspecified.

**The regret preference model is identifiable even when the discount factor is unknown.** Note that Theorem 3.1 already includes this case. To add some intuition, the discounted regret of a segment—presented in Appendix B.2—does include the discount factor in its formulation, regardless of segment length. Therefore, the discount factor used during preference generation does impact what reward function is learned.
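The $\gamma$-insensitivity of partial return at $|\sigma|=1$ can be verified in a few lines. The snippet below is our illustration, with arbitrary reward values.

```python
def discounted_partial_return(rewards, gamma):
    # Sum of gamma^t * r_t over the segment (Appendix B.2), with 0^0 = 1.
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

seg = [2.0]            # a length-1 segment: a single reward
print(discounted_partial_return(seg, 0.5))   # 2.0
print(discounted_partial_return(seg, 0.99))  # 2.0 -- gamma leaves no trace

longer = [2.0, -3.0]   # with |sigma| = 2, gamma does affect partial return
print(discounted_partial_return(longer, 0.5))   # 0.5
print(discounted_partial_return(longer, 0.99))  # -0.97
```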
## 4 Creating A Human-Labeled Preference Dataset

To empirically investigate the consequences of each preference model when learning reward from *human* preferences, we collected a preference dataset labeled by human subjects via Amazon Mechanical Turk. This data collection was IRB-approved. Appendix D adds detail to the content below.

## 4.1 The General Delivery Domain

The delivery domain consists of a grid of cells, each of a specific road surface type. The delivery agent's state is its location. The agent's action space is moving one cell in one of the four cardinal directions. The episode can terminate either at the destination for +50 reward or in failure at a sheep for −50 reward. The reward for a non-terminal transition is the sum of any reward components. Cells with a white road surface have a −1 reward component, and cells with a brick surface have a −2 component. Additionally, each cell may contain a coin (+1) or a roadblock (−1). Coins do not disappear and at best cancel out the road surface cost. Actions that would move the agent into a house or beyond the grid's perimeter result in no motion and receive reward that includes the current cell's surface reward component but not any coin or roadblock components. In this work, the start state distribution, $D_0$, is always uniformly random over non-terminal states.

This domain was designed to permit subjects to easily identify bad behavior yet also to be difficult for them to determine *optimal* behavior from most states, which is representative of many common tasks. Note that this intended difficulty forces some violation of the regret preference model's assumption that humans always prefer optimal segments over suboptimal ones, therefore testing its performance in non-ideal conditions.
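To make the reward structure concrete, here is a minimal sketch of a reward function consistent with the description above. The cell encoding and function name are our own hypothetical scaffolding, and charging the *entered* cell's surface component on ordinary moves is our assumption; the paper specifies explicitly only the components and the blocked-move case.

```python
# Hypothetical cell encoding: a dict with "type" ("road", "destination", "sheep"),
# "surface" ("white" or "brick"), and optional "coin" / "roadblock" flags.
def delivery_reward(current_cell, next_cell, move_blocked):
    if next_cell["type"] == "destination":
        return 50.0   # successful delivery terminates the episode
    if next_cell["type"] == "sheep":
        return -50.0  # failure terminates the episode
    surface = {"white": -1.0, "brick": -2.0}
    if move_blocked:
        # Blocked moves (into a house or off the grid): the current cell's
        # surface component only, with no coin or roadblock components.
        return surface[current_cell["surface"]]
    r = surface[next_cell["surface"]]  # assumed: entered cell's surface cost
    if next_cell.get("coin"):
        r += 1.0
    if next_cell.get("roadblock"):
        r -= 1.0
    return r

# Example: stepping onto a brick cell holding a coin yields -2 + 1 = -1.
print(delivery_reward({"type": "road", "surface": "white"},
                      {"type": "road", "surface": "brick", "coin": True},
                      move_blocked=False))  # -1.0
```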
Figure 6: The delivery task used to gather human preferences. The yellow van is the agent and the red inverted teardrop is the destination.

## 4.1.1 The Delivery Task

We chose one instantiation of the delivery domain for gathering our dataset of human preferences. This specific MDP has a 10×10 grid. From every state, the highest return possible involves reaching the goal, rather than hitting a sheep or perpetually avoiding termination. Figure 6 shows this task.

## 4.2 The Subject Interface And Survey

This subsection describes the three main stages of each data collection session. A video showing the full experimental protocol can be seen at youtu.be/V3hAqlE0qXg.

**Teaching subjects about the task** Subjects first view instructions describing the general domain. To avoid the jargon of "return" and "reward," these terms are mapped to equivalent values in US dollars, and the instructions describe the goal of the task as maximizing the delivery vehicle's financial outcome, where the reward components are specific financial impacts. This information is shared amongst interspersed interactive episodes, in which the subject controls the agent in domain maps that are each designed to teach one or two concepts. Our intention during this stage is to inform the later preferences of the subject by teaching them about the domain's dynamics and its reward function, as well as to develop the subject's sense of how desirable various behaviors are. At the end of this stage, the subject controls the agent for two episodes in the specific delivery task shown in Figure 6.

**Preference elicitation** After each subject is trained to understand the task, they indicate their preferences between 40–50 randomly ordered pairs of segments, using the interface shown in Figure 7. The subjects select a preference, no preference ("same"), or "can't tell". In this work, we exclude responses labeled "can't tell", though one might alternatively try to extract information from these responses.

**Subjects' task comprehension** Subjects then answered questions testing their understanding of the task, and we removed their data if they scored poorly. We also removed a subject's data if they preferred colliding the vehicle with a sheep over not doing so, which we interpreted as poor task understanding or inattentiveness. This filtered dataset contains 1812 preferences from 50 subjects.

Figure 7: Interface shown to subjects during preference elicitation. The left trajectory shows the yellow van doubling back on itself before hitting a sheep. The right trajectory shows the van hitting a roadblock.

## 4.3 Selection Of Segment Pairs For Labeling

We collected human preferences in two stages, each with different methods for selecting which segment pairs to present for labeling. The sole purpose of collecting this second-stage data was to improve the reward-learning performance of the partial return model, $P_{\Sigma r}$. Without second-stage data, $P_{\Sigma r}$ compared even worse to $P_{regret}$ than in the results described in Section 6, performing worse than a uniformly random policy (see Appendix F.3.3).

Figure 8: Proportions at which subjects preferred each segment in a pair, plotted by the difference in the segments' changes in state values (x-axis) and partial returns (y-axis). The diagonal line shows points of preference indifference for $P_{regret}$. Points of indifference for $P_{\Sigma r}$ lie on the x-axis. The shaded gray area indicates where the partial return and regret models disagree, each giving a different segment a preference probability greater than 0.5. Each circle's area is proportional to the number of samples it describes. A visual test of which preference model better fits the data is as follows: if the human subjects followed the partial return preference model, the color gradient would be orthogonal to the x-axis. If they followed the regret preference model, the color gradient would be orthogonal to the diagonal line, since regret on this plot is $x+y$. Visual inspection reveals the latter gradient, suggesting that subjects more closely followed the regret preference model.

Both stages' data are combined and used as a single dataset. These methods and their justification are described in Appendix D.3.

## 5 Descriptive Results

This section considers how well different preference models explain our dataset of human preferences.

## 5.1 Correlations Between Preferences And Segment Statistics

Recall that with deterministic transitions, the regret of a segment has 3 components: $regret_d(\sigma|\tilde{r})=V^*_{\tilde{r}}(s^\sigma_0)-(\Sigma_\sigma\tilde{r}+V^*_{\tilde{r}}(s^\sigma_{|\sigma|}))$, one of which is partial return, $\Sigma_\sigma\tilde{r}$. We hypothesize that the other two terms—the values of segments' start and end states, which are included in $P_{regret}$ but not in $P_{\Sigma r}$—affect human preferences, independent of partial return. If this hypothesis is true, then we have more confidence that preference models that include start and end state values will be more effective during inference of reward functions.

The dataset of preferences is visualized in Figure 8. To simplify analysis, we combine the two parts of $regret_d(\sigma|r)$ that are additional to $\Sigma_\sigma\tilde{r}$ and introduce the following shorthand: $\Delta_\sigma V_{\tilde{r}}\triangleq V^*_{\tilde{r}}(s^\sigma_{|\sigma|})-V^*_{\tilde{r}}(s^\sigma_0)$. Note that with an algebraic manipulation (see Appendix E.1), $regret_d(\sigma_2|\tilde{r})-regret_d(\sigma_1|\tilde{r})=(\Delta_{\sigma_1}V_{\tilde{r}}-\Delta_{\sigma_2}V_{\tilde{r}})+(\Sigma_{\sigma_1}\tilde{r}-\Sigma_{\sigma_2}\tilde{r})$. Therefore, on the diagonal line in Figure 8, $regret_d(\sigma_2|r)=regret_d(\sigma_1|r)$, making the $P_{regret_d}$ preference model indifferent.
This plot shows how $\Delta_\sigma V_r$ has influence independent of partial return by focusing only on points at a chosen y-axis value; if the colors along the corresponding horizontal line redden as the x-axis value increases, then $\Delta_\sigma V_r$ appears to have independent influence.

To statistically test for independent influence of $\Delta_\sigma V_r$ on preferences, we consider subsets of data where $\Sigma_{\sigma_1}r-\Sigma_{\sigma_2}r$ is constant. For $\Sigma_{\sigma_1}r-\Sigma_{\sigma_2}r=-1$ and $\Sigma_{\sigma_1}r-\Sigma_{\sigma_2}r=-2$, the only values with more than 30 samples that also include informative samples with both negative and positive values of $regret(\sigma_1|r)-regret(\sigma_2|r)$, the Spearman's rank correlations between $\Delta_\sigma V_r$ and the preferences are significant ($r\geq 0.3$, $p<0.0001$). This result indicates that $\Delta_\sigma V_r$ *influences human preferences independent of partial return*, validating our hypothesis that humans form preferences based on information about segments' start states and end states, not only partial returns.

## 5.2 Likelihood Of Human Preferences Under Different Preference Models

To examine how well each preference model predicts human preferences, we calculate the cross-entropy loss (Equation 1)—i.e., the negative log likelihood—of each model on the preferences in our dataset. Scaling reward by a constant factor does not affect the set of optimal policies. Therefore, throughout this work we ensure that our analyses of preference models are insensitive to reward scaling. To do so for this specific analysis, we conduct 10-fold cross validation to learn a reward scaling factor for each of $P_{regret}$ and $P_{\Sigma r}$. Table 1 shows that the loss of $P_{regret}$ is lower than that of $P_{\Sigma r}$, indicating that $P_{regret}$ is more reflective of how people actually express preferences.

| Preference model | Loss |
|---|---|
| $P(\cdot)=0.5$ (uninformed) | 0.69 |
| $P_{\Sigma r}$ (partial return) | 0.62 |
| $P_{regret}$ | 0.57 |

Table 1: Mean cross-entropy test loss over 10-fold cross validation (n=1812) from predicting human preferences. Lower is better.

## 6 Results From Learning Reward Functions

**Evaluating a learned reward function** Analysis of a preference model's predictions of human preferences is informative, but such predictions are a means to the ends of learning human-aligned reward functions and policies. We now examine each preference model's performance in these terms. In all cases, we learn a reward function $\hat{r}$ according to Equation 1 and apply value iteration (Sutton & Barto, 2018) to find the approximately optimal $Q^*_{\hat{r}}$ function. For this $Q^*_{\hat{r}}$, we then evaluate the mean return of the maximum-entropy optimal policy—which chooses uniformly randomly among all *optimal* actions—with respect to the ground-truth reward function $r$, over $D_0$. This methodology is illustrated in Figure 9.

Figure 9: The general design pattern used for learning a reward function from preferences and evaluating that reward function. The generic gridworld shown is for illustrative purposes only.

To compare performance across different MDPs, the mean return of a policy $\pi$, $V^\pi_r$, is normalized to $(V^\pi_r-V^U_r)/(V^*_r-V^U_r)$, where $V^*_r$ is the optimal expected return and $V^U_r$ is the expected return of the uniformly random policy (both given $D_0$). Normalized mean return above 0 is better than $V^U_r$. Optimal policies have a normalized mean return of 1, and we consider above 0.9 to be *near optimal*.
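This evaluation loop can be made concrete with a self-contained sketch. Everything below, including the tiny three-state MDP, the reward values, and the function names, is a hypothetical illustration of the design pattern in Figure 9 rather than the paper's implementation.

```python
import numpy as np

# A tiny hypothetical 1-D MDP: states 0..2, with state 2 terminal.
# Action 0 moves left and action 1 moves right (clipped at the ends).
N_S, N_A, TERM, GAMMA = 3, 2, 2, 0.9
step = lambda s, a: max(0, min(N_S - 1, s + (1 if a == 1 else -1)))

def value_iteration(reward):
    # Tabular value iteration for Q*(s, a) under the given reward function.
    Q = np.zeros((N_S, N_A))
    for _ in range(200):
        for s in range(N_S):
            if s == TERM:
                continue
            for a in range(N_A):
                s2 = step(s, a)
                Q[s, a] = reward(s, a, s2) + GAMMA * (0.0 if s2 == TERM else Q[s2].max())
    return Q

def mean_return(Q, reward):
    # Maximum-entropy optimal policy: uniform over the argmax actions of Q.
    pi = np.isclose(Q, Q.max(axis=1, keepdims=True)).astype(float)
    pi /= pi.sum(axis=1, keepdims=True)
    V = np.zeros(N_S)
    for _ in range(200):  # iterative policy evaluation under the scoring reward
        for s in range(N_S):
            if s == TERM:
                continue
            V[s] = sum(pi[s, a] * (reward(s, a, step(s, a))
                                   + GAMMA * (0.0 if step(s, a) == TERM else V[step(s, a)]))
                       for a in range(N_A))
    return V[:TERM].mean()  # mean over a uniform D0 on non-terminal states

r_true = lambda s, a, s2: 10.0 if s2 == TERM else -1.0
r_hat = lambda s, a, s2: 5.0 if s2 == TERM else -0.5  # stand-in for a learned reward

v_pi = mean_return(value_iteration(r_hat), r_true)   # r_hat's policy, scored by r_true
v_star = mean_return(value_iteration(r_true), r_true)
v_unif = mean_return(np.zeros((N_S, N_A)), r_true)   # all-zero Q gives the uniform policy
print((v_pi - v_unif) / (v_star - v_unif))           # 1.0: r_hat's policy is optimal here
```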
## 6.1 An Algorithm To Learn Reward Functions With $regret(\sigma|\hat{r})$

Algorithm 1 is a general algorithm for learning a *linear* reward function according to $P_{regret}$. This regret-specific algorithm only changes the regret-based algorithm from Section 2.2 by replacing Equation 5 with a tractable approximation of regret, avoiding expensive repeated evaluation of $V^*_{\hat{r}}(\cdot)$ and $Q^*_{\hat{r}}(\cdot,\cdot)$ to compute $P_{regret}(\cdot|\hat{r})$ during reward learning. Specifically, successor features for a set of policies are used to approximate the optimal state values and state-action values for any reward function. This algorithm straightforwardly applies generalized policy improvement (GPI) with successor features to approximate optimal state and action values for arbitrary reward functions, as described by Barreto et al. (2016).

**Approximating $P_{regret}$ with successor features** Following the notation of Barreto et al., assume the ground-truth reward is linear with respect to a feature vector extracted by $\phi:S\times A\times S\to\mathbb{R}^d$ and a weight vector $w_r\in\mathbb{R}^d$: $r(s,a,s')=\phi(s,a,s')^\top w_r$. During learning, $w_{\hat{r}}$ similarly expresses $\hat{r}$ as $\hat{r}(s,a,s')=\phi(s,a,s')^\top w_{\hat{r}}$. Given a policy $\pi$, the successor features for $(s,a)$ are the expectation of discounted reward features from that state-action pair when following $\pi$: $\psi^\pi_Q(s,a)=\mathbb{E}_\pi[\sum_{t=0}^\infty\gamma^t\phi(s_t,a_t,s_{t+1})\,|\,s_0=s,a_0=a]$. Therefore, $Q^\pi_{\hat{r}}(s,a)=\psi^\pi_Q(s,a)^\top w_{\hat{r}}$. Additionally, state-based successor features can be calculated from the $\psi^\pi_Q$ above as $\psi^\pi_V(s)=\sum_{a\in A}\pi(a|s)\psi^\pi_Q(s,a)$, making $V^\pi_{\hat{r}}(s)=\psi^\pi_V(s)^\top w_{\hat{r}}$.

Given a set $\Psi_Q$ of state-action successor feature functions and a set $\Psi_V$ of state successor feature functions for various policies, and given a reward function via $w_{\hat{r}}$, $Q^*_{\hat{r}}(s,a)\geq\max_{\psi_Q\in\Psi_Q}[\psi^\pi_Q(s,a)^\top w_{\hat{r}}]$ and $V^*_{\hat{r}}(s)\geq\max_{\psi_V\in\Psi_V}[\psi^\pi_V(s)^\top w_{\hat{r}}]$ (Barreto et al., 2016), so we use these two maximizations as approximations of $Q^*_{\hat{r}}(s,a)$ and $V^*_{\hat{r}}(s)$, respectively.

**Algorithm 1** Linear reward learning with the regret preference model ($P_{regret}$), using successor features

1: **Input:** a set of policies
2: $\Psi_Q\leftarrow\emptyset$; $\Psi_V\leftarrow\emptyset$
3: **for** each policy $\pi_{SF}$ in the input set **do**
4: estimate $\psi^{\pi_{SF}}_Q$ and $\psi^{\pi_{SF}}_V$ (if not estimated already)
5: add $\psi^{\pi_{SF}}_Q$ to $\Psi_Q$
6: add $\psi^{\pi_{SF}}_V$ to $\Psi_V$
7: **end for**
8: **repeat**
9: optimize $w_{\hat{r}}$ by the loss of Eqn. 1, calculating $\tilde{P}_{regret}(\sigma_1\succ\sigma_2|\hat{r})$ via Eqn. 6, using $\Psi_Q$ and $\Psi_V$
10: **until** stopping criteria are met
11: **return** $w_{\hat{r}}$

In practice, to enable gradient-based optimization with current tools, the maximization in this expression is replaced with a softmax-weighted average, making the loss function differentiable. Focusing first on the approximation of $V^*_{\hat{r}}(s)$, for each $\psi_V\in\Psi_V$ a softmax weight is calculated for $\psi^\pi_V(s)$: $softmax_{\Psi_V}(\psi^\pi_V(s)^\top w_{\hat{r}})\triangleq\exp[(\psi^\pi_V(s)^\top w_{\hat{r}})/T]\,/\,\sum_{\psi'_V\in\Psi_V}\exp[(\psi'^\pi_V(s)^\top w_{\hat{r}})/T]$, where the temperature $T$ is a constant hyperparameter. The resulting approximation of $V^*_{\hat{r}}(s)$ is therefore defined as $\tilde{V}^*_{\hat{r}}(s)\triangleq\sum_{\psi_V\in\Psi_V}softmax_{\Psi_V}(\psi^\pi_V(s)^\top w_{\hat{r}})\,[\psi^\pi_V(s)^\top w_{\hat{r}}]$. Similarly, to approximate $Q^*_{\hat{r}}(s,a)$, $softmax_{\Psi_Q}(\psi^\pi_Q(s,a)^\top w_{\hat{r}})\triangleq\exp[(\psi^\pi_Q(s,a)^\top w_{\hat{r}})/T]\,/\,\sum_{\psi'_Q\in\Psi_Q}\exp[(\psi'^\pi_Q(s,a)^\top w_{\hat{r}})/T]$ and $\tilde{Q}^*_{\hat{r}}(s,a)\triangleq\sum_{\psi_Q\in\Psi_Q}softmax_{\Psi_Q}(\psi^\pi_Q(s,a)^\top w_{\hat{r}})\,[\psi^\pi_Q(s,a)^\top w_{\hat{r}}]$.
Consequently, from Equations 4 and 5, the corresponding approximation $\tilde{P}_{regret}$ of the regret preference model is:

$$\tilde{P}_{regret}(\sigma_{1}\succ\sigma_{2}|\hat{r})=logistic\left(\sum_{t=0}^{|\sigma_{2}|-1}\left[\tilde{V}^{*}_{\hat{r}}(s^{\sigma_{2}}_{t})-\tilde{Q}^{*}_{\hat{r}}(s^{\sigma_{2}}_{t},a^{\sigma_{2}}_{t})\right]-\sum_{t=0}^{|\sigma_{1}|-1}\left[\tilde{V}^{*}_{\hat{r}}(s^{\sigma_{1}}_{t})-\tilde{Q}^{*}_{\hat{r}}(s^{\sigma_{1}}_{t},a^{\sigma_{1}}_{t})\right]\right)\tag{6}$$

**The algorithm** In Algorithm 1, lines 8–11 describe the supervised-learning optimization using the approximation $\tilde{P}_{regret}$, and the prior lines create $\Psi_Q$ and $\Psi_V$. Specifically, given a set of input policies (line 1), for each such policy $\pi_{SF}$, the successor feature functions $\psi^{\pi_{SF}}_Q$ and $\psi^{\pi_{SF}}_V$ are estimated (line 4), which by default would be performed by a minor extension of a standard policy evaluation algorithm, as detailed by Barreto et al. (2016). Note that the reward function that is ultimately learned is not restricted to be in the input set of reward functions, which is used only to create an approximation of regret.

One potential source of the input set of policies is a set of reward functions, where each input policy is the result of policy improvement on one reward function. We follow this method in our experiments, randomly generating reward functions and then estimating an optimal policy for each reward function. Specifically, for each reward function, we seek the maximum-entropy optimal policy, which resolves ties among optimal actions in a state via a uniform distribution over those optimal actions. Further details of our instantiation of Algorithm 1 for the delivery domain can be found in Appendix F.1, along with preliminary guidance for choosing an input set of policies (Appendix F.1.1) and for extending the algorithm to reward functions that might be non-linear (Appendix F.1.2).
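The following sketch (ours, under the linear-reward assumption above; the array shapes and names are hypothetical) shows how $\tilde{V}^*_{\hat{r}}$, $\tilde{Q}^*_{\hat{r}}$, and $\tilde{P}_{regret}$ of Equation 6 could be computed from stacked successor features with NumPy.

```python
import numpy as np

def softmax_weighted_max(values, T=0.1):
    # Smooth stand-in for the max over candidate policies' values (axis 0).
    # Subtracting the per-entry maximum stabilizes the exponentials without
    # changing the softmax weights.
    w = np.exp(values / T - (values / T).max(axis=0))
    w /= w.sum(axis=0)
    return (w * values).sum(axis=0)

# psi_V: (n_policies, n_states, d); psi_Q: (n_policies, n_states, n_actions, d)
def approx_regret_preference(psi_V, psi_Q, w_hat, seg1, seg2, T=0.1):
    """P_regret(seg1 > seg2 | r_hat), per Eq. 6. Segments are lists of (s, a)."""
    V = softmax_weighted_max(psi_V @ w_hat, T)   # (n_states,)
    Q = softmax_weighted_max(psi_Q @ w_hat, T)   # (n_states, n_actions)
    regret = lambda seg: sum(V[s] - Q[s, a] for s, a in seg)
    return 1.0 / (1.0 + np.exp(-(regret(seg2) - regret(seg1))))

# Tiny demo with hypothetical shapes: 2 candidate policies, 3 states, 2 actions, d=2.
rng = np.random.default_rng(0)
psi_V, psi_Q = rng.normal(size=(2, 3, 2)), rng.normal(size=(2, 3, 2, 2))
w = rng.normal(size=2)
print(approx_regret_preference(psi_V, psi_Q, w, [(0, 1)], [(2, 0)]))
```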
## 6.2 Results From Synthetic Preferences

Before considering human preferences, we first ask how each preference model performs when it correctly describes how the preferences in its training set were generated. In other words, we investigate empirically how well each preference model could perform if humans perfectly adhered to it. Recall that the ground-truth reward function, $r$, is used to create these preferences but is inaccessible to the reward-learning algorithms.

For these evaluations, either a stochastic or noiseless preference model acts as a preference generator to create a preference dataset. Then the stochastic version of the same model is used for reward learning, which prevents the introduction of a hyperparameter. Note that the stochastic preference model can approach determinism through scaling of the reward function, so learning a reward function with the stochastic preference model from deterministically generated preferences does not remove our ability to fit a reward function to those preferences.

For the noiseless case, the deterministic preference generator compares a segment pair's $\Sigma_\sigma r$ values for $P_{\Sigma r}$ or their $regret(\sigma|r)$ values for $P_{regret}$. Note that through reward scaling the preference generators approach determinism in the limit, so this noiseless analysis examines minimal-entropy versions of the two preference-generating models. (The opposite extreme, uniformly random preferences, would remove all information from preferences and therefore is not examined.) In the stochastic case, for each preference model, each segment pair is labeled by sampling from that preference generator's output distribution (Eq. 2 or 5), using the unscaled ground-truth reward function.

We created 100 deterministic MDPs that instantiate variants of our delivery domain (see Section 4.1). To create each MDP, we sampled from sets of possible widths, heights, and reward component values, and the resultant grid cells were randomly populated with a destination, objects, and road surface types (see Appendix F.2 for details). Each segment in the preference datasets for each MDP was generated by choosing a start state and three actions, all uniformly randomly. For a set number of preferences, each method had the same set of segment pairs in its preference dataset.

Figure 10: Performance comparison over 100 randomly generated deterministic MDPs when each preference model creates its own training dataset and learns from it. Performance with the regret preference model is consistently better, regardless of training set size or whether preferences are generated stochastically. The bottom-left and bottom-right plots are created from the top plot. The bottom-left plot shows the ratio between the two preference models' success rates. The bottom-right plot shows the ratio between the two preference models' rates of failure to reach near-optimal performance. For easier visual comparison, the ratios of each plot are chosen such that higher values indicate better performance by the regret preference model.

Figure 10 shows the percentage of MDPs in which each preference model results in near-optimal performance. The regret preference model outperforms the partial return model at every dataset size, both with and without noise. By a Wilcoxon paired signed-rank test on normalized mean returns, $p<0.05$ for 86% of these comparisons and $p<0.01$ for 57% of them, as reported in Appendix F.2. Further analyses can be found in Appendix F.2: with stochastic transitions, with different segment lengths, without segments that terminate before their final transition, and with additional novel preference models.
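For concreteness, here is a minimal sketch (ours; the segment statistics are assumed to be supplied) of how a single segment pair is labeled under the noiseless and stochastic generators described above.

```python
import random, math

def label_pair(score1, score2, stochastic):
    """Return the preference weight mu_1 for sigma_1, given each segment's
    statistic under the ground-truth reward: partial return for P_{Sigma r},
    or negated regret for P_regret (higher is better in both cases)."""
    if not stochastic:  # noiseless generator: deterministic comparison
        if score1 > score2:
            return 1.0
        return 0.5 if score1 == score2 else 0.0
    # Stochastic generator: sample from the Boltzmann/logistic probability
    # (Eq. 2 for partial return, Eq. 5 for regret).
    p1 = 1.0 / (1.0 + math.exp(-(score1 - score2)))
    return 1.0 if random.random() < p1 else 0.0

print(label_pair(3.0, 1.0, stochastic=False))  # 1.0
random.seed(0)
print(label_pair(3.0, 1.0, stochastic=True))   # usually 1.0, sometimes 0.0
```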
## 6.3 Results From Human Preferences

We now consider the reward-learning performance of each preference model on preferences generated by humans for our specific delivery task. We randomly assign human preferences from our gathered dataset to different numbers of same-sized partitions, resulting in different training set sizes, and test each preference model on each partition. Figure 11 shows the results.

With smaller training sets (20–100 partitions), the regret preference model results in near-optimal performance more often. With larger training sets (1–10 partitions), both preference models always reach near-optimal return, but the mean return from the regret preference model is higher for all of these partitions except for 3 partitions in the 10-partition test. Applying a Wilcoxon paired signed-rank test on normalized mean return to each group with 5 or more partitions, $p<0.05$ for all numbers of partitions except 100 and $p<0.01$ for 20 and 50 partitions.

To summarize, we find that both the regret and the partial return preference models achieve near-optimal performance when the dataset is sufficiently large—although the performance of the regret preference model is nonetheless almost always higher—and we also find that regret achieves near-optimal performance more often with smaller datasets.

Figure 11: Performance comparison over various amounts of human preferences. Each partition has the number of preferences shown or one less.

Using the human preferences dataset, Appendix F.3 contains further analyses: learning without segments that terminate before their final transition, learning via additional novel preference models, and testing the learned reward functions on other MDPs with the same ground-truth reward function.

## 7 Conclusion

Over numerous evaluations with human preferences, our proposed regret preference model ($P_{regret}$) shows the improvements summarized below over the previous partial return preference model ($P_{\Sigma r}$).

When each preference model generates the preferences for its own infinite and exhaustive training set, we prove that $P_{regret}$ identifies the set of optimal policies, whereas $P_{\Sigma r}$ is not guaranteed to do so in multiple common contexts. With finite training data of synthetic preferences, $P_{regret}$ also empirically results in learned policies that tend to outperform those resulting from $P_{\Sigma r}$. This superior performance of $P_{regret}$ is also seen with human preferences. In summary, our analyses suggest that regret preference models are more effective both descriptively with respect to human preferences and also normatively, as the model we would want humans to follow if we had the choice. Independent of $P_{regret}$, this paper also reveals that segments' changes in state values provide information about human preferences that is not fully provided by partial return. More generally, we show that the choice of preference model impacts the performance of learned reward functions.

This study motivates several new directions for research. Future work could address any of the limitations detailed in Appendix A.1. Specifically, future work could further test the general superiority of $P_{regret}$ or apply it to deep learning settings. Additionally, *prescriptive* methods could be developed via the subject interface or elsewhere to nudge humans to conform more to $P_{regret}$ or to other normatively appealing preference models. Lastly, this work provided conclusive evidence that the choice of preference model is impactful. Subsequent efforts could seek preference models that are even more effective with preferences from actual humans.

## Acknowledgements

We thank Jordan Schneider, Garrett Warnell, Ishan Durugkar, and Sigurdur Orn Adalgeirsson, who each gave extensive and insightful feedback on earlier drafts. This work has taken place in part in the Safe, Correct, and Aligned Learning and Robotics Lab (SCALAR) at The University of Massachusetts Amherst. SCALAR research is supported in part by the NSF (IIS-1749204, IIS-1925082), AFOSR (FA9550-20-1-0077), and ARO (78372-CS, W911NF-19-2-0333). This work has also taken place in part in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (FAIN-2019844), ONR (N00014-18-2243), ARO (W911NF-19-2-0333), DARPA, Bosch, and UT Austin's Good Systems grand challenge. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.

## References

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In *Proceedings of the twenty-first international conference on Machine learning*, pp. 1, 2004.
Riad Akrour, Marc Schoenauer, and Michele Sebag. Preference-based policy learning. In *Joint European* Conference on Machine Learning and Knowledge Discovery in Databases, pp. 12–27. Springer, 2011. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. *arXiv preprint arXiv:1606.06565*, 2016. Saurabh Arora and Prashant Doshi. A survey of inverse reinforcement learning: Challenges, methods and progress. *Artificial Intelligence*, 297:103500, 2021. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *arXiv preprint arXiv:2204.05862*, 2022. André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado Van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. *arXiv preprint arXiv:1606.05312*, 2016. Marc G Bellemare, Salvatore Candido, Pablo Samuel Castro, Jun Gong, Marlos C Machado, Subhodeep Moitra, Sameera S Ponda, and Ziyu Wang. Autonomous navigation of stratospheric balloons using reinforcement learning. *Nature*, 588(7836):77–82, 2020. Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. *arXiv preprint arXiv:1912.06680*, 2019. Erdem Bıyık, Dylan P Losey, Malayandi Palan, Nicholas C Landolfi, Gleb Shevchuk, and Dorsa Sadigh. Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences. *The International Journal of Robotics Research*, pp. 02783649211041652, 2021. Daniel Brown, Russell Coleman, Ravi Srinivasan, and Scott Niekum. Safe imitation learning via fast bayesian reward inference from preferences. In *International Conference on Machine Learning*, pp. 1165–1177. PMLR, 2020. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In *Advances in Neural Information Processing Systems (NIPS)*, pp. 4299–4307, 2017. Yuchen Cui, Qiping Zhang, Brad Knox, Alessandro Allievi, Peter Stone, and Scott Niekum. The empathic framework for task learning from implicit human feedback. In *Conference on Robot Learning*, pp. 604–626. PMLR, 2021. Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. *Nature*, 602(7897):414–419, 2022. William Fedus, Carles Gelada, Yoshua Bengio, Marc G Bellemare, and Hugo Larochelle. Hyperbolic discounting and learning over multiple horizons. *arXiv preprint arXiv:1902.06865*, 2019. Shane Frederick, George Loewenstein, and Ted O'donoghue. Time discounting and time preference: A critical review. *Journal of economic literature*, 40(2):351–401, 2002. Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. Improving alignment of dialogue agents via targeted human judgements. *arXiv preprint arXiv:2209.14375*, 2022. Adam Gleave, Mohammad Taufeeque, Juan Rocamonde, Erik Jenner, Steven H. Wang, Sam Toyer, Maximilian Ernestus, Nora Belrose, Scott Emmons, and Stuart Russell. 
imitation: Clean imitation learning implementations. arXiv:2211.11972v1 [cs.LG], 2022. URL https://arxiv.org/abs/2211.11972. Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based RL without a reward function. arXiv preprint arXiv:2305.15363, 2023. Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari. *arXiv preprint arXiv:1811.06521*, 2018. Kuno Kim, Shivam Garg, Kirankumar Shiragur, and Stefano Ermon. Reward identification in inverse reinforcement learning. In *International Conference on Machine Learning*, pp. 5496–5505. PMLR, 2021. W Bradley Knox, Alessandro Allievi, Holger Banzhaf, Felix Schmitt, and Peter Stone. Reward (mis)design for autonomous driving. *arXiv preprint arXiv:2104.13906*, 2021. W Bradley Knox, Stephane Hatgis-Kessell, Serena Booth, Scott Niekum, Peter Stone, and Alessandro Allievi. Reproduction Data for: Models of Human Preference for Learning Reward Functions, 2023. URL https://doi.org/10.18738/T8/S4WTWR. Zeb Kurth-Nelson and A David Redish. Temporal-difference reinforcement learning with distributed representations. *PLoS One*, 4(10):e7362, 2009. Kimin Lee, Laura Smith, and Pieter Abbeel. Pebble: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. *arXiv preprint arXiv:2106.05091*, 2021a. Kimin Lee, Laura Smith, Anca Dragan, and Pieter Abbeel. B-pref: Benchmarking preference-based reinforcement learning. *arXiv preprint arXiv:2111.03026*, 2021b. R Duncan Luce. *Individual choice behavior: A theoretical analysis*. John Wiley, 1959. Andrew Y Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In *Seventeenth International Conference on Machine Learning (ICML)*, 2000. OpenAI. Chatgpt: Optimizing language models for dialogue. OpenAI Blog https://openai.com/blog/chatgpt/, 2022. Accessed: 2022-12-20. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *arXiv preprint arXiv:2203.02155*, 2022. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. A David Redish and Zeb Kurth-Nelson. Neural models of delay discounting. In *Impulsivity: The behavioral and neurological science of discounting*, pp. 123–158. American Psychological Association, 2010. Stuart Russell and Peter Norvig. *Artificial intelligence: a modern approach*. Pearson, 2020.
Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. *Robotics: Science and Systems*, 2017. Andrew W Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Žídek, Alexander WR Nelson, Alex Bridgland, et al. Improved protein structure prediction using potentials from deep learning. *Nature*, 577(7792):706–710, 2020. Roger N Shepard. Stimulus and response generalization: A stochastic model relating generalization to distance in psychological space. *Psychometrika*, 22(4):325–345, 1957. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016. Joar Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. *arXiv preprint arXiv:2203.07475*, 2022. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. John Von Neumann and Oskar Morgenstern. *Theory of games and economic behavior*. Princeton university press, 1944. Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, and Michael Laskin. Skill preferences: Learning to extract and execute robotic skills from human feedback. In *Conference on Robot Learning*, pp. 1259–1268. PMLR, 2022. Peter R. Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J. Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, Leilani Gilpin, Varun Kompella, Piyush Khandelwal, HaoChih Lin, Patrick MacAlpine, Declan Oller, Craig Sherstan, Takuma Seno, Michael D. Thomure, Houmehr Aghabozorgi, Leon Barrett, Rory Douglas, Dion Whitehead, Peter Duerr, Peter Stone, Michael Spranger, and Hiroaki Kitano. Outracing champion Gran Turismo drivers with deep reinforcement learning. *Nature*, 602:223–228, Feb. 2022. doi: 10.1038/s41586-021-04357-7. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. *arXiv preprint arXiv:1909.08593*, 2019.

After Appendix A, the appendix is organized according to the major sections and subsections of the main content.

## A Limitations And Ethics

## A.1 Limitations

Some limitations of the regret preference model are discussed in the paragraph "Regret as a model for human preference" in Section 2.2, including assumptions that a person giving preferences can distinguish between optimal and suboptimal segments, that they follow a Boltzmann distribution (i.e., a Luce-Shepard choice rule), and that they base their preferences on decision quality even when transition stochasticity results in segment pairs for which the worse decision has a better outcome.

Our proposed algorithm (Section 6.1) has a few additional limitations. Generating candidate successor features for the approximations $\tilde{Q}^*_{\hat{r}}$ and $\tilde{V}^*_{\hat{r}}$ may be difficult in complex domains.
Specifically, challenges include choosing the set of policies or reward functions for which to compute successor features (line 3 of Algorithm 1, discussed in Appendix F.1.1) and creating a reward feature vector $\phi$ for non-linear reward functions (discussed in Appendix F.1.2). Additionally, although learning with $P_{regret}$ is more sample efficient in our experiments, it is computationally slower than learning with $P_{\Sigma r}$ because of the additional need to compute successor features and the use of the softmax function to approximate $Q^*_{\hat{r}}$ and $V^*_{\hat{r}}$. Nonetheless, we may accept the tradeoff of an increase in computational time that reduces the number of human samples needed or that improves the reward function's alignment with human stakeholders' interests. Lastly, the loss during optimization with $P_{regret}$ was unstable, which we addressed by taking the minimum loss over all epochs during training. Therefore, for more complex reward feature vectors ($\phi$) than our 6-element vector for the delivery task, extra care might be needed to avoid overfitting $\hat{r}$, for example by withholding some preference data to serve as a test set.

We also generally assume that the RL algorithm and the reward learning algorithm use the same discount factor as in the MDP\r specification. One weakness of contemporary deep RL is that RL algorithms require artificially lower discount factors than the true discount factor of the task. The interaction of this discounting with preference models is considered in Appendix F.2. Our expectation, though, is that this weakness of deep RL algorithms is likely a temporary one, and so we focused our analysis on simple tasks in which we do not need to artificially lower the RL algorithm's discount factor. However, further investigation of the interaction between preference models and discount factors would aid near-term application of $P_{regret}$ to deep RL domains.

This work also does not consider which segment pairs should be presented for labeling with preferences used for reward learning. However, other research has addressed this problem through active learning (Lee et al., 2021a; Christiano et al., 2017; Akrour et al., 2011), and future work could consider the compatibility of these active learning methods with our Algorithm 1, combining the improved sample efficiency of $P_{regret}$ with that of these active learning methods.

Additionally, all preference models considered in this paper are evaluated under the assumption that for any pair of segments in the preference dataset, both segments have the same length. However, for segment pairs of different lengths, the regret preference model may act in ways that violate our intuition for how humans would give preferences. In particular, a short, highly suboptimal segment might be more preferable under the regret preference model than a much longer segment that is near-optimal yet nonetheless has higher regret, since the regret of a segment is the sum of the regret of its individual transitions. The partial return model—which already breaks our intuition with same-length segment pairs—suffers from a similar limitation, in that it further deviates from intuition as the segment lengths diverge. Allowing such different-length segment pairs appears uncommon in practice but may be difficult to avoid in some task domains.

Regarding the human side of the problem of reward learning from preferences, further research could provide several improvements.
First, we are confident that humans can be influenced by their training and by the preference elicitation interface, which is a particularly rich direction for follow-up study. We also do not consider how to handle learning reward functions from multiple human stakeholders who have different preferences, a topic we revisit in Appendix A.2. Lastly, we expect humans to deviate from any simple model, including P*regret*, and a fine-grained characterization of how humans generate preferences could produce preference models that further improve the alignment of the reward functions that are ultimately learned from human preferences. ## A.2 Ethical Statement This work is meant to address ethical issues that arise when autonomous systems are deployed without properly aligning their objectives with those of human stakeholders. It is merely a step in that direction, and overly trusting in our methods—even though they improve on previous methods for alignment—could result in harm caused by poorly aligned autonomous systems. When considering the objectives for such systems, a critical ethical question is *which* human stakeholders' interests the objectives should be aligned with and how multiple stakeholders' interests should be combined into a single objective for an autonomous system. We do not address these important questions, instead making the convenient-but-flawed assumption that many different humans' preferences can simply be combined. In particular, care should be taken that vulnerable and marginalized communities are adequately represented in any technique or deployment to learn a reward function from human preferences in high-impact settings. The stakes are high: for example, a reward function that is only aligned with a corporation's financial interests could lead to exploitation of such communities or more broadly to exploitation of or harm to users. In this specific work, our filter for which Mechanical Turk Workers could join our study is described in Appendix D. We did not gather demographic information and therefore we cannot assess how representative our subjects are of any specific population. ## A.3 On The Challenge Of Using Regret Preference Models In Practice We have provided evidence—theoretically and with experimentation—that the regret preference model is more effective when precisely measured or effectively approximated. The challenge of efficiently creating such approximations presents one clear path for future research. We believe this challenge does not justify staying within the local maximum of the partial return preference model. Like the regret preference model, inverse reinforcement learning (IRL) was founded on an algorithm that requires solving an MDP in an inner loop of learning a reward function. For example, see the seminal work on IRL by Ng & Russell (2000). IRL has been an impactful problem despite this challenge, and handling this inner-loop computational demand is the focus of much IRL research. Future work on the application of the regret preference model can face the challenge of scaling to more complex problems. Given that IRL has made tremendous progress in this direction and Brown et al. (2020) have scaled an algorithm with similar needs to those of Algorithm 1, we are optimistic that the methods to scale can be developed, likely with light adaptation from existing methods (e.g., in Brown et al. or in Appendix F.1.1 and F.1.2). 
## B Preference Models For Learning Reward Functions

Here we extend the content of Section 2, focusing on preference models and learning algorithms that use them. This corresponding section of the appendix provides a simple derivation of the logistic form of these preference models, discusses extensions of the regret preference model, sketches an alternative way to learn a policy with it, and discusses the relationship of inverse reinforcement learning to learning reward functions with a regret preference model.

## B.1 Derivation Of The Logistic Expression Of The Boltzmann Distribution

For the reader's convenience, below we derive the logistic expression of a function that is based on two subtracted values from the Boltzmann distribution (i.e., softmax) representation that is more common in past work. These values are specifically the same function $f$ applied to each segment, which is a general expression of both of the preference models considered here.

$$\begin{split}P(\sigma_1\succ\sigma_2)&=\frac{\exp\left[f(\sigma_1)\right]}{\exp\left[f(\sigma_1)\right]+\exp\left[f(\sigma_2)\right]}\\ &=\frac{1}{1+\frac{\exp\left[f(\sigma_2)\right]}{\exp\left[f(\sigma_1)\right]}}\\ &=\frac{1}{1+\exp\left[f(\sigma_2)-f(\sigma_1)\right]}\\ &=logistic(f(\sigma_1)-f(\sigma_2)).\end{split}\tag{7}$$
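A quick numeric check of Equation 7 (our snippet, with arbitrary values for $f$) confirms that the two-way softmax and the logistic of the difference coincide:

```python
import math

f1, f2 = 1.3, -0.4  # arbitrary segment statistics f(sigma_1), f(sigma_2)
softmax_form = math.exp(f1) / (math.exp(f1) + math.exp(f2))
logistic_form = 1.0 / (1.0 + math.exp(-(f1 - f2)))
assert abs(softmax_form - logistic_form) < 1e-12
print(softmax_form, logistic_form)  # both ~0.8455
```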
## B.2 Learning Reward Functions From Preferences, With Discounting

For the equations from the paper's body that assume no temporal discounting (i.e., $\gamma=1$), we share in this section versions that do not make this assumption. If $\gamma=1$, then the equations below simplify to those in the body of the paper. To allow for fully myopic discounting with $\gamma=0$, we define $0^0=1$.

Recall that $\tilde{r}$ indicates an arbitrary reward function, which may not be the ground-truth reward function $r$, and $\hat{r}$ refers to a learned reward function. Similarly, $\tilde{\gamma}$ refers to an arbitrary exponential discount factor, which may not be the ground-truth discount factor $\gamma$, and $\hat{\gamma}$ refers to the discount factor during learning, which could be inferred or hand-coded. Also, the notation of $V^*_{\tilde{r}}$ and $Q^*_{\tilde{r}}$ is expanded in this subsection to denote the discounting in their expected return: $V^*_{(\tilde{r},\tilde{\gamma})}$ and $Q^*_{(\tilde{r},\tilde{\gamma})}$, respectively. In most of this article, the discount factor used during reward function inference is hard-coded as $\hat{\gamma}=1$. However, in the theory of Section 3, we assume $\gamma$ is not known in order to reach more general conclusions. In this subsection, for generality we likewise assume that $\gamma$ is not known, using $\tilde{\gamma}$ generally and using $\hat{\gamma}$ in notation we consider specific to reward function inference.

The discounted versions of the preference models below rely on a cross-entropy loss function that is identical to Equation 1 except for the inclusion of discounting notation:

$$loss(\hat{r},\hat{\gamma},D_{\succ})=-\sum_{(\sigma_1,\sigma_2,\mu)\in D_{\succ}}\mu_1\log P(\sigma_1\succ\sigma_2|\hat{r},\hat{\gamma})+\mu_2\log P(\sigma_1\prec\sigma_2|\hat{r},\hat{\gamma})\tag{8}$$

**Partial return** With discounting, the **partial return** of a segment $\sigma$ is $\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^t\tilde{r}_{\sigma,t}$. This notation differs from that in Section 2.1 in that the subscript of the reward symbol $\tilde{r}_{\sigma,t}$ is now expanded to include which segment it comes from. The preference model based on partial return *with exponential discounting* is expressed below, generalizing Equation 2.

$$P_{\Sigma r}(\sigma_{1}\succ\sigma_{2}|\tilde{r},\tilde{\gamma})=logistic\bigg(\sum_{t=0}^{|\sigma_{1}|-1}\tilde{\gamma}^{\,t}\tilde{r}_{\sigma_{1},t}-\sum_{t=0}^{|\sigma_{2}|-1}\tilde{\gamma}^{\,t}\tilde{r}_{\sigma_{2},t}\bigg)\tag{9}$$

**Regret** With discounting, for a transition $(s_t,a_t,s_{t+1})$ in a segment containing only deterministic transitions, $regret_d(\sigma_t|\tilde{r},\tilde{\gamma})\triangleq V^*_{(\tilde{r},\tilde{\gamma})}(s^\sigma_t)-[\tilde{r}_t+\tilde{\gamma}V^*_{(\tilde{r},\tilde{\gamma})}(s^\sigma_{t+1})]$. For a full deterministic segment, $regret_d(\cdot|\tilde{r},\tilde{\gamma})$ with exponential discounting is defined as follows, generalizing Equation 3.

$$\begin{split}regret_{d}(\sigma|\tilde{r},\tilde{\gamma})&\triangleq\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^{t}\,regret_{d}(\sigma_{t}|\tilde{r},\tilde{\gamma})\\ &=V_{(\tilde{r},\tilde{\gamma})}^{*}(s_{0}^{\sigma})-\Big(\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^{t}\tilde{r}_{\sigma,t}+\tilde{\gamma}^{|\sigma|}V_{(\tilde{r},\tilde{\gamma})}^{*}(s_{|\sigma|}^{\sigma})\Big)\end{split}\tag{10}$$

Like Equation 3, this discounted form of deterministic regret also measures how much the segment reduces expected return from the start state value, $V^*_{(\tilde{r},\tilde{\gamma})}(s^\sigma_0)$.

To create the general expression of discounted regret that accounts for potential stochastic transitions, we note that, with discounting, the effect on expected return of transition stochasticity from a transition $(s_t,a_t,s_{t+1})$ is $[\tilde{r}_t+\tilde{\gamma}V^*_{(\tilde{r},\tilde{\gamma})}(s_{t+1})]-Q^*_{(\tilde{r},\tilde{\gamma})}(s_t,a_t)$, and we add this expression once per transition to get $regret(\sigma|\tilde{r},\tilde{\gamma})$, removing the subscript $d$ that refers to determinism. The discounting does not change the simplified expressions in Equation 4, the regret for a single transition:

$$\begin{split}regret(\sigma_t|\tilde{r},\tilde{\gamma})&=\Big[V^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t})-[\tilde{r}_{t}+\tilde{\gamma}V^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t+1})]\Big]+\Big[[\tilde{r}_{t}+\tilde{\gamma}V^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t+1})]-Q^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t},a^{\sigma}_{t})\Big]\\ &=V^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t})-Q^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t},a^{\sigma}_{t})\\ &=-A^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t},a^{\sigma}_{t})\end{split}\tag{11}$$

With both discounting and accounting for potential stochastic transitions, the regret for a full segment is

$$\begin{split}regret(\sigma|\tilde{r},\tilde{\gamma})&=\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^{t}\,regret(\sigma_{t}|\tilde{r},\tilde{\gamma})\\ &=\sum_{t=0}^{|\sigma|-1}\tilde{\gamma}^{t}\Big[V^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t})-Q^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t},a^{\sigma}_{t})\Big]\\ &=\sum_{t=0}^{|\sigma|-1}-\tilde{\gamma}^{t}A^{*}_{(\tilde{r},\tilde{\gamma})}(s^{\sigma}_{t},a^{\sigma}_{t})\end{split}\tag{12}$$

The expression of regret above is the most general in this paper and can be used in Equation 5 identically to the undiscounted version in Equation 4. Equation 6, the approximation $\tilde{P}_{regret}$ of the regret preference model derived in Section 6.1, is expressed with discounting below.
Equation 6, the approximation $\tilde P_{regret}$ of the regret preference model derived in Section 6.1, is expressed with discounting below.

$$\tilde{P}_{regret}(\sigma_{1}\succ\sigma_{2}|\hat{r},\hat{\gamma})=logistic\bigg(\sum_{t=0}^{|\sigma_{2}|-1}\hat{\gamma}^{t}\Big[\tilde{V}^{*}_{(\hat{r},\hat{\gamma})}(s^{\sigma_{2}}_{t})-\tilde{Q}^{*}_{(\hat{r},\hat{\gamma})}(s^{\sigma_{2}}_{t},a^{\sigma_{2}}_{t})\Big]-\sum_{t=0}^{|\sigma_{1}|-1}\hat{\gamma}^{t}\Big[\tilde{V}^{*}_{(\hat{r},\hat{\gamma})}(s^{\sigma_{1}}_{t})-\tilde{Q}^{*}_{(\hat{r},\hat{\gamma})}(s^{\sigma_{1}}_{t},a^{\sigma_{1}}_{t})\Big]\bigg)\tag{13}$$

Note that the successor features used in Section 6.1 to determine these approximations, $\tilde V^*_{(\hat r,\hat\gamma)}$ and $\tilde Q^*_{(\hat r,\hat\gamma)}$, already include discounting. As with the undiscounted versions of the above equations, if two segments have deterministic transitions, end in terminal states, and have the same starting state, this regret model reduces to the partial return model: $P_{regret}(\cdot|\tilde r,\tilde\gamma)=P_{\Sigma r}(\cdot|\tilde r,\tilde\gamma)$.

**If hard-coding $\hat\gamma$, when to set $\hat\gamma<1$ during reward function inference** In reinforcement learning, γ and r together determine the set of optimal policies. Changing either γ or r while holding the other constant will often change the set of optimal policies. For both preference models, we suspect that learning would benefit from using the same discounting during reward inference as the human used while evaluating segments to provide preferences (i.e., setting $\hat\gamma=\gamma$). This same $\hat\gamma$ would then be used for learning a policy from the learned reward function. On the other hand, when $\hat\gamma$ is hand-coded and $\hat\gamma\neq\gamma$, the reward inference algorithm will nonetheless attempt to find an $\hat r$ that explains those preferences; however, a set of optimal policies is determined by a reward function *together with the discount*, and the set of optimal policies created by the human's reward function and discounting may not be attainable under a different discounting.

Not only is a specific human rater's γ unobservable, but psychology and economics researchers have firmly established that humans do not typically follow exponential discounting (Frederick et al., 2002), which should evoke skepticism about hard-coding $\hat\gamma<1$ during reward function inference. One exception is humans who have been trained to apply exponential discounting, such as in certain financial settings. The best model for how humans discount future rewards and punishments is not settled, but one popular model is hyperbolic discounting. Some exploration of RL with hyperbolic discounting exists, including approximating a hyperbolically discounted value function with a mixture of exponentially discounted value functions (Kurth-Nelson & Redish, 2009; Redish & Kurth-Nelson, 2010). However, this approach has not found clear usage beyond serving as an auxiliary task to aid representation learning (Fedus et al., 2019). The interpretation of human preferences over segments appears to us to be a strong candidate for using these methods to approximate hyperbolic discounting. This research topic currently lacks a rigorous treatment of discounting when learning reward functions from human preferences; such an investigation is beyond our scope, so we leave our guidance above as speculative.

## B.3 Logistic-Linear Preference Model

In Appendices E.2, F.2.5, and F.3.2 we also consider preference models that arise by making the noiseless preference model a linear function over the 3 components of $P_{regret_d}$. Building upon Equation 7 above, we set $f(\sigma)=\vec w\cdot\langle V^*_{\tilde r}(s^\sigma_0),\,\Sigma_\sigma\tilde r,\,V^*_{\tilde r}(s^\sigma_{|\sigma|})\rangle$.
This preference model, $P_{log\text{-}lin}$, can be expressed after algebraic manipulation as

$$P_{log\text{-}lin}(\sigma_{1}\succ\sigma_{2}|\tilde{r})=logistic\bigg(\vec{w}\,\cdot\,\Big\langle V^{*}_{\tilde{r}}(s^{\sigma_{1}}_{0})-V^{*}_{\tilde{r}}(s^{\sigma_{2}}_{0}),\;\Sigma_{\sigma_{1}}\tilde{r}-\Sigma_{\sigma_{2}}\tilde{r},\;V^{*}_{\tilde{r}}(s^{\sigma_{1}}_{|\sigma_{1}|})-V^{*}_{\tilde{r}}(s^{\sigma_{2}}_{|\sigma_{2}|})\Big\rangle\bigg).\tag{14}$$

This logistic-linear preference model is a generalization of $P_{\Sigma r}$ and also of $P_{regret_d}$, the regret preference model for deterministic transitions. Specifically, if $\vec w=\langle0,1,0\rangle$, then $P_{log\text{-}lin}(\cdot|\tilde r)=P_{\Sigma r}(\cdot|\tilde r)$. And if $\vec w=\langle-1,1,1\rangle$, then $P_{log\text{-}lin}(\cdot|\tilde r)=P_{regret_d}(\cdot|\tilde r)$. More generally, for some constant c, $\vec w=\langle0,c,0\rangle$ and $\vec w=\langle-c,c,c\rangle$ recreate $P_{\Sigma r}$ and $P_{regret_d}$ respectively but with different reward function scaling, which is the same as allowing a different temperature in the Boltzmann distribution that determines preference probabilities. In Appendix E.2, we fit $\vec w$ to maximize the likelihood of the human preference dataset under $P_{log\text{-}lin}(\cdot|r)$, using the ground-truth r, and compare the learned weights to those of $P_{\Sigma r}$ and $P_{regret_d}$.

## B.4 Adding A Constant Probability Of Uniformly Distributed Preference

Appendix E.2 also considers adaptations of $P_{\Sigma r}$, $P_{regret_d}$, and $P_{log\text{-}lin}$ that add a constant probability of uniformly distributed preference, as was done by Christiano et al. (2017). The body of the paper does not consider these adaptations. We create this adaptation, which we will call P′ here, from another preference model P by $P'(\sigma_1\succ\sigma_2)=[(1-logistic(c))\ast P(\sigma_1\succ\sigma_2)]+[logistic(c)/2]$, where c is a constant that in practice we fit to data and logistic(c) is the constant probability of uniformly random preference. Passing c through the logistic function ensures that, for any constant c, the constant probability of uniformly distributed preference lies in (0,1). The term logistic(c)/2 gives half of this constant probability to σ1 and half to σ2. The term [1−logistic(c)] scales the P(σ1 ≻ σ2) probability—which could come from $P_{\Sigma r}$, $P_{regret_d}$, or $P_{log\text{-}lin}$—to a proportion of the remaining probability. The only difference between this adaptation and Christiano et al.'s 0.1 probability of uniformly distributed preference is that we learn the value of c from training data (in a k-fold cross-validation setting), as described in Appendix E, whereas Christiano et al. do not share how 0.1 was chosen.
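The sketch below shows this adaptation as a thin wrapper around any base preference model; it is an illustrative reimplementation with our own naming, not the code used in our experiments. In practice, c would be a learnable parameter fit by maximum likelihood.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def with_uniform_preference(base_prob, c):
    """Mix a base preference probability P(sigma_1 > sigma_2) with a constant
    probability logistic(c) of a uniformly random preference."""
    eps = logistic(c)               # constant probability of a random response
    return (1.0 - eps) * base_prob + eps / 2.0

# Example: a base model that assigns probability 0.9 to sigma_1, mixed with
# c = -2.2, for which logistic(c) is roughly 0.1 (Christiano et al.'s value).
p_adapted = with_uniform_preference(0.9, c=-2.2)
```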
## B.5 Expected Return Preference Model

In Appendix F.3, we test reward learning on a third preference model. This expected return preference model is derived by setting $f(\sigma)=\Sigma_\sigma\tilde r+V^*_{\tilde r}(s^\sigma_{|\sigma|})$ in Equation 7. This segment statistic f(σ) can be considered to be in between deterministic regret (Equation 3) and partial return, differing from each by one term. We include this preference model because judging by expected return is intuitively appealing in that it considers the partial return along the segment and the end-state value of the segment, and we found it plausible that human preference providers might tend to ignore start-state value, as this preference model does. However, reward learning with the regret model outperforms or matches that by this expected return preference model, as we show in Appendix F.3.

## B.6 Relationship To Inverse Reinforcement Learning

Like learning reward functions from pairwise preferences, inverse reinforcement learning (IRL) also involves learning a reward function. The inputs to the two problems differ, however: IRL requires demonstrations, not preferences over segment pairs. Nonetheless, because a regret-based preference model always prefers optimal segments over suboptimal segments, at least one further connection can be made. If one assumes that a demonstrated trajectory segment is noiselessly optimal—as in the foundational IRL paper on apprenticeship learning (Abbeel & Ng, 2004)—then such a demonstration is equivalent to expressing preference or indifference for the demonstrated segment over all other segments. In other words, no other segment is preferred over the demonstrated segment. However, IRL has its own identifiability issues in noiseless settings (e.g., see Kim et al. (2021)) that, viewed through the lens of preferences, come in part from the "indifference" part of the above statement: since there can be multiple optimal actions from a single state, it is not generally correct to assume that a demonstration of one such action shows a preference over all others, and therefore it remains unclear in IRL what other actions are optimal. Note that since partial-return-based preferences can prefer suboptimal segments over optimal segments, the common assumption in IRL that demonstrations are optimal does not map as cleanly to partial-return-based preferences.

The regret preference model also relates to IRL in that the most basic version of IRL requires solving an MDP in the inner loop (see Algorithm 1 in the survey of IRL by Arora & Doshi (2021)), as appears necessary for a perfect measure of regret while learning a reward function. The progress that IRL has made in addressing this challenge gives us optimism that it is similarly addressable for complex tasks for our proposed algorithm. We discuss potential solutions in Appendices F.1.1 and F.1.2.

## C Theoretical Comparisons

**The relevance of noiseless preference generators** Because we model preferences as stochastic in Section 2, one might reasonably wonder how the above theoretical analysis of noiseless preference generators is relevant. We offer four arguments below.

First, having structured noise provides information that can help both preference models, but these proofs show that there are cases where the signal behind the noise—either regret or partial return—is not sufficient in the partial return case to identify an equivalent reward function. So, in a rough sense, regret more effectively uses both the signal and the noise, which might explain its superior sample efficiency in our experiments across both human labels and synthetic labels. Relatedly, the noiseless setting can help us understand each preference model's sample efficiency in a low-noise setting.

Second, noiseless preferences are also feasible, even if they are rare. Therefore, understanding what can be learned from them is worthwhile. Theorem 3.2 shows that there are MDPs in which no class of preference models—stochastic or deterministic—can identify an equivalent reward function from partial-return-based preferences if the preference generator noiselessly prefers according to partial return. Specifically, we show that the mapping from two reward functions with different sets of optimal policies to partial-return-based preferences is a many-to-one mapping, and therefore the information simply does not exist to invert that mapping and identify a reward function with the same set of optimal policies.
In contrast, Theorem 3.1 shows that preferences generated noiselessly (and in certain stochastic settings) by regret do not have this issue.

Third, noise is often motivated as modeling human error. Having an algorithm rely on noise—structured in a very specific, Boltzmann-rational way—is an undesirable crutch. Skalse et al. (2022) justify including noiseless preferences in their examinations of identifiability with a similar argument: "these invariances rely heavily on the precise structure of the decision noise revealing cardinal information in the infinite-data limit". Beyond the work of Skalse et al. (2022), there is broader precedent for considering noiseless human input for theory or derivations. For instance, the foundational IRL research on apprenticeship learning (Abbeel & Ng, 2004) treats demonstrations as noiselessly optimal. Recent work by Kim et al. (2021) focuses on reward identifiability with noiseless, optimal demonstrations.

## D Additional Information For Creating A Human-Labeled Preference Dataset

## D.1 The Preference Elicitation Interface And Study Overview

Here we share miscellaneous details about the preference elicitation interface from which we collected human subjects' preferences. This description builds on Section 4.2.

In selecting preferences, subjects had four options. They could prefer either trajectory (left or right), or they could express that the trajectories were equally preferable or indistinguishable. To provide these preferences, subjects could either click on the buttons labeled "LEFT", "RIGHT", "SAME", or "CAN'T TELL" (shown in Figure 7) or use the arrow keys to select among these choices.

For the interface, all icons used to visualize the task were obtained from icons8.com under their Paid Universal Multimedia Licensing Agreement.

We paid all subjects $5 per experiment (i.e., per Mechanical Turk HIT). This amount was chosen by taking the median completion time from a pilot study and calculating the payment that would result in $15 USD per hour. This hourly rate of $15 was chosen because it is commonly recommended as an improved US federal minimum wage. The human subject experiments cost $2,145 USD in total.

An experimental error resulted in the IRB-approved consent form not being presented to human subjects after Mechanical Turk Workers accepted our study. We reported this error to our IRB and received their approval to use the data.

## D.2 Filtering Subject Data

To join our study via Amazon Mechanical Turk, potential subjects had to meet the following criteria. They had to be located in the United States, have an approval rating of at least 99%, and have completed at least 100 other MTurk HITs. We selected these criteria to improve the probability of collecting data from subjects who would attentively engage with our study and who would understand our training protocol.

We assessed each subject's understanding of the delivery domain and filtered out those who did not comprehend the task, as described below. Specifically, subjects completed a task-comprehension survey, through which we assigned them a task-comprehension score. The questions and answer choices are shown in Table 2. Each fully correct answer was worth 1 point and each partially correct answer was worth 0.5 points. Task-comprehension scores were bounded between 0 and 7. We removed the data from subjects who scored below a threshold of 4.5.
The threshold of 4.5 was chosen based on visual analysis of a histogram of scores, attempting to balance high standards for comprehension with retaining sufficient data for analysis. In addition to filtering based on the task-comprehension survey, we also removed a subject's data if they ever preferred colliding the vehicle with a sheep over not doing so. Since such collisions are highly undesirable in this task, we interpreted this preference as evidence of either poor task understanding or inattentiveness.

In total, we collected data from 143 subjects. Data from 58 of these subjects were removed based on subjects' responses to the survey. From what remained, data from another 35 subjects were removed for making the aforementioned sheep-collision preference errors. After this filtering, data from 50 subjects remained. This filtered data consists of 1812 preferences over 1245 unique segment pairs and is used in this article's experiments.

Regarding potential risks to subjects, this data collection had limited or no risk. No offensive content was shown to subjects while they completed the HIT. Mechanical Turk collected Worker IDs, which were used only to link preference data with the results from the task-comprehension survey for filtering data (see Appendix D.2) and were then deleted from our data. No other potentially personally identifiable information was collected.

## D.3 The Two Stages Of Data Collection

We collected the human preference dataset in two stages, as mentioned in Section 4.2. Here we provide more detail on each stage. These stages differed largely by their goals for data collection and, following those goals, by how we chose which segment pairs were presented to subjects for their preference.

Table 2: The task-comprehension survey, designed to test participants' comprehension of the domain for the purpose of filtering data. Each fully correct answer earned 1 point; each partially correct answer earned 0.5 points. We discarded the data of participants who scored less than 4.5 points overall.

| Question | Full credit answer | Partial credit answer | Other answer choices |
|---|---|---|---|
| What is the goal of this world? (Check all that apply.) | To maximize profit | To maximize profit, selected along with another answer (partial credit was given if both answers were selected) | To get to a specific location. / To drive as far as possible to explore the world. / To collect as many coins as possible. / To collect as many sheep as possible. / To drive sheep to a specific location. |
| What happens when you run into a house? (Check all that apply.) | "You pay a gas penalty." and "You can't run into a house; the world doesn't let you move into it." (full credit was given if both answers were selected) | Only one of the two correct answers selected | The episode ends. / You get stuck. |
| What happens when you run into a sheep? (Check all that apply.) | "You are penalized for running into a sheep." and "The episode ends." (full credit was given if both answers were selected) | Only one of the two correct answers selected | You are rewarded for collecting a sheep. / You pay a penalty. |
| What happens when you run into a roadblock? (Check all that apply.) | You pay extra for gas. | — | You get stuck. / You can't run into a roadblock; the world doesn't let you move into it. / The episode ends. |
| Is running into a roadblock ever a good choice in any town? | Yes, in certain circumstances. | — | No. |
| What happens when you go into the brick area? (Check all that apply.) | You pay extra for gas. | — | You get stuck in the brick area. / You can't go into the brick area; the world doesn't let you move into it. / The episode ends. |
| Is entering the brick area ever a good choice? | Yes, in certain circumstances. | — | No. |
**First stage** Figure 12 illustrates the coordinates from which segment pairs were sampled in the first stage of data collection, varying by differences in state values and by differences in partial returns over the segments. We sought a range of points that would allow a characterization of human preferences that is well distributed across different parts of the plot. To better differentiate the consequences of each preference model, we intentionally chose a large number of points in the gray area of Figure 8, where the regret and partial return preference models would disagree (i.e., each giving a different segment a preference probability greater than 0.5).

![28_image_0.png](28_image_0.png)

Figure 12: Coordinates from which segment pairs were sampled during the first stage of data collection. The x-axis is the difference in state values between the two segments and the y-axis is the difference in partial returns between the two segments. The areas of the circles are proportional to the number of samples at each point, and the proportionality is consistent across this plot and the 3 subplots of Figure 13.

We now describe our segment-pair sampling process more specifically. We first constructed all unique segments of length 3 and then exhaustively paired them, resulting in nearly 30 million segment pairs. Each segment pair's partial returns, start-state values, and end-state values place the segment pair on a coordinate in Figure 8, and segment pairs that do not lie on any of the dots in Figure 8 were discarded. We further divided the segment pairs at each coordinate into 5 bins: non-terminal segments with the same start state and different end states; non-terminal segments with different start states and different end states; terminal segments with the same start state and same end state; terminal segments with different start states and the same end state; and a bin of segment pairs that fit in none of the other bins. Segment pairs in the 5th bin were discarded. From each of the 4 remaining bins for each point in Figure 8, we randomly sampled 20 segment pairs. If a bin did not have at least 20 segment pairs, all segment pairs in the bin were "sampled". All sampled segment pairs from all bins for all points in Figure 8 made up the pool of segment pairs used with Mechanical Turk. For each subject, 50 segment pairs were randomly sampled from this pool. We gathered data until we had roughly 20 labeled segment pairs per bin.
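The sketch below illustrates the bin-then-sample step just described. It is a simplified rendering with hypothetical names—segments here are assumed to already carry flags for terminality and their start and end states—rather than the script we actually used.

```python
import random
from collections import defaultdict

def assign_bin(seg1, seg2):
    """Return the bin key for a segment pair, or None for the discarded 5th bin."""
    if not seg1.terminal and not seg2.terminal:
        if seg1.start == seg2.start and seg1.end != seg2.end:
            return "nonterminal_same_start"
        if seg1.start != seg2.start and seg1.end != seg2.end:
            return "nonterminal_diff_start"
    if seg1.terminal and seg2.terminal:
        if seg1.start == seg2.start and seg1.end == seg2.end:
            return "terminal_same_start"
        if seg1.start != seg2.start and seg1.end == seg2.end:
            return "terminal_diff_start"
    return None  # 5th bin: discarded

def sample_pool(pairs_by_coordinate, per_bin=20, seed=0):
    """Sample up to `per_bin` pairs from each bin at each coordinate."""
    rng = random.Random(seed)
    pool = []
    for pairs in pairs_by_coordinate.values():
        bins = defaultdict(list)
        for seg1, seg2 in pairs:
            key = assign_bin(seg1, seg2)
            if key is not None:
                bins[key].append((seg1, seg2))
        for members in bins.values():
            k = min(per_bin, len(members))   # take all if fewer than per_bin
            pool.extend(rng.sample(members, k))
    return pool
```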
After filtering subject data, this first stage contributed 1437 segment pairs out of the 1812 pairs used in our reward learning experiments in Section 6.3 and Appendix F.3.

**Second stage** When we conducted the reward-learning evaluation in Section 6 with only the data from the first stage, $P_{\Sigma r}$ performed very poorly, always performing worse than uniformly random. This performance difference is shown in Appendix F.3. In contrast, $P_{regret}$ performed well, always achieving near-optimal performance. To better assess $P_{\Sigma r}$, we investigated its results with synthetic preferences in detail and speculated that two types of additional segment pairs would aid its performance.

The first of these two types includes one segment that is terminal and one that is non-terminal, which we expected to help differentiate the reward for reaching terminal states from that of reaching non-terminal ones.

![28_image_1.png](28_image_1.png)

Figure 13: Coordinates from which segment pairs were sampled during the second stage of data collection. The points are in 3 distant clusters, so they are presented in 3 separate subplots for readability. The areas of the circles are proportional to the number of samples at each point, and the proportionality is consistent across these 3 subplots and Figure 12.

The second of these two types consists of two segments that each terminate at different t values. For example, one segment terminates on its end state, $s^\sigma_{|\sigma|}$, and another terminates after its first transition, at $s^\sigma_1$. These early-terminating segments can be viewed either as shorter segments or as segments of the same length as the other segments (|σ| = 3) that reach an absorbing state from which no future reward can be received. We speculated that this second type of segment pair would help learn the negative reward component for each move (i.e., the gas cost). Specifically, in the first stage's data, both segments in a pair always have the same number of non-terminal transitions, seemingly preventing preferences from providing information about whether an extra transition (from a non-absorbing state) generally resulted in positive or negative reward. These segment pairs were included in all results unless otherwise stated.

Note that this second type addresses the identifiability issue of the partial return model related to a constant shift in the reward function, discussed in Section 3.2.2 and Appendix F.2.2. The speculation described above was a conceptual predecessor of our understanding of this identifiability issue. In particular, any change to this gas cost—which is incurred at every time step—is equivalent to a constant shift in the reward function.

We now describe our segment-pair sampling process for the second stage more specifically. For the first additional type of segment pair, where one segment is terminal and one is not, we randomly paired terminal and non-terminal segments from the first-stage pool. In this pairing, each segment is used only once, and pairing stops when all terminal segments or all non-terminal segments have been paired. The corresponding coordinates for these pairs are shown in the two rightmost plots of Figure 13. For the second additional type of segment pair, we utilize all terminal segments from the pool of segment pairs shown to subjects in the first stage.
For each of these terminal segments, we construct two additional segments: one that shifts the segment one timestep earlier, removing the first state and action and adding a dummy transition within the absorbing state at the end, and another that shifts the segment two timesteps earlier and adds two such dummy transitions at the end. These two newly constructed segments are then each paired with the original segment, producing two new pairs for each terminal segment in the dataset. The corresponding coordinates for these segment pairs are shown in the leftmost plot of Figure 13.

Both types of additional segment pairs are characterized by the coordinates shown in Figure 13. Then, as with the first stage, we randomly sampled 20 segment pairs from each coordinate to make the experimental pool for the second round of Mechanical Turk data collection. If 20 segment pairs were not available at a coordinate, we used all segment pairs for that coordinate. As in the first stage, 50 segment pairs were randomly sampled from this pool to be presented to each subject during preference elicitation.

After filtering subject data, this second stage contributed 311 segment pairs out of the 1812 pairs used in our reward learning experiments in Section 6.3 and Appendix F.3.

## D.4 The Study Design Pattern

This work follows an experimental design pattern that is often used for studying methods that take human input for evaluating the desirability of behaviors or outcomes. This pattern is illustrated for the specific case of learning reward functions from preferences in Figure 9. In this pattern, human subjects are taught to understand a specific task metric and/or are incentivized to align their desires with this metric. The human subjects then provide input to some algorithm that has no knowledge of the performance metric, and this algorithm or learned model is evaluated on how well its output performs with respect to the hidden metric. For another example, see Cui et al. (2021).

## E Descriptive Results

**Derivation of $regret_d(\sigma_2|\tilde r)-regret_d(\sigma_1|\tilde r)=(\Delta_{\sigma_1}V_{\tilde r}-\Delta_{\sigma_2}V_{\tilde r})+(\Sigma_{\sigma_1}\tilde r-\Sigma_{\sigma_2}\tilde r)$** The derivation below supports our assertion in the first paragraph of Section 5.1.

$$\begin{split}regret_d(\sigma_2|\tilde r)-regret_d(\sigma_1|\tilde r)&=\Big[V^*_{\tilde r}(s^{\sigma_2}_0)-\big(\Sigma_{\sigma_2}\tilde r+V^*_{\tilde r}(s^{\sigma_2}_{|\sigma_2|})\big)\Big]-\Big[V^*_{\tilde r}(s^{\sigma_1}_0)-\big(\Sigma_{\sigma_1}\tilde r+V^*_{\tilde r}(s^{\sigma_1}_{|\sigma_1|})\big)\Big]\\&=\Big[V^*_{\tilde r}(s^{\sigma_2}_0)-V^*_{\tilde r}(s^{\sigma_2}_{|\sigma_2|})\Big]-\Big[V^*_{\tilde r}(s^{\sigma_1}_0)-V^*_{\tilde r}(s^{\sigma_1}_{|\sigma_1|})\Big]-\Sigma_{\sigma_2}\tilde r+\Sigma_{\sigma_1}\tilde r\\&=\Big[V^*_{\tilde r}(s^{\sigma_1}_{|\sigma_1|})-V^*_{\tilde r}(s^{\sigma_1}_0)\Big]-\Big[V^*_{\tilde r}(s^{\sigma_2}_{|\sigma_2|})-V^*_{\tilde r}(s^{\sigma_2}_0)\Big]+\Sigma_{\sigma_1}\tilde r-\Sigma_{\sigma_2}\tilde r\\&=(\Delta_{\sigma_1}V_{\tilde r}-\Delta_{\sigma_2}V_{\tilde r})+(\Sigma_{\sigma_1}\tilde r-\Sigma_{\sigma_2}\tilde r)\end{split}\tag{15}$$

## E.2 Losses Of An Expanded Set Of Preference Models On The Human Preferences Dataset

Table 3 shows an expansion of Table 1, including models introduced in Appendix B. The logistic-linear preference model, $P_{log\text{-}lin}$, provides a lower bound, given that it can express either $P_{regret}$ or $P_{\Sigma r}$ and that we do not observe any overfitting of its 3 parameters.
| Preference model | Loss (n=1,812) |
|---|---|
| P(·) = 0.5 (uninformed) | 0.693 |
| PΣr (partial return) | 0.620 |
| Pregret (regret) | 0.573 |
| Plog-lin (logistic linear) | 0.548 |
| PΣr with prob. of uniform response | 0.620 |
| Pregret with prob. of uniform response | 0.573 |
| Plog-lin with prob. of uniform response | 0.548 |

Table 3: Expanding on Table 1, mean cross-entropy test loss over 10-fold cross validation (n=1812) from predicting human preferences. Lower is better.

Including the constant probability of a uniformly random response, as in Christiano et al. (2017), also increases the expressivity of the model. The final three results in Table 3 show the best test loss achieved across training runs that differ by initializing logistic(c) of this model to 0.01, 0.1, or 0.5. Surprisingly, no benefit is observed from including a constant probability of a uniformly random response. Because this augmentation of our models appears to have no effect on the likelihood, we do not include it in further analysis.

Across the 10 folds, the mean weights learned for $P_{log\text{-}lin}(\cdot|\tilde r)$ were $\vec w=\langle-0.18,0.34,0.32\rangle$, where each weight applies respectively to segments' start-state values, partial returns, and end-state values. Scaled to have a maximum weight of 1 for easy comparison with $P_{\Sigma r}$ and $P_{regret_d}$, $\vec w_{scaled}=\langle-0.53,1.0,0.94\rangle$. First, we note that these weights are closer to those that make $P_{log\text{-}lin}=P_{regret_d}$ (i.e., $\vec w=\langle-1,1,1\rangle$) than to those that make $P_{log\text{-}lin}=P_{\Sigma r}$ (i.e., $\vec w=\langle0,1,0\rangle$). Also, the notable deviation from the weights of $P_{regret_d}$ is the weight for the start-state value, which has about half as much impact as the regret preference model gives it. In other words, this lower weight suggests that our subjects did tend to weigh the maximum possible expected return from each segment's start state, but they did so less than they weighed the reward accrued along each segment and the maximum expected return from each segment's end state.

## F Results From Learning Reward Functions

This section provides additional implementation details for Section 6, discussion of potential improvements, and additional analyses that thematically fit in Section 6.

## F.1 An Algorithm To Learn Reward Functions With $regret(\sigma|\hat r)$

We describe below additional details of our instantiation of Algorithm 1.

**Doubling the training set by reversing preference samples** Because the ordering of preference pairs is arbitrary—i.e., (σ1 ≺ σ2) ⇐⇒ (σ2 ≻ σ1)—for all preference datasets we double the amount of data by duplicating each preference sample with the opposite ordering and the reversed preference. This provides more training data and avoids learning any segment-ordering effects.

**Collecting the policies from which successor feature functions are calculated** For this article's instantiation of Algorithm 1, we collect successor feature functions by randomly sampling a large number of reward functions and then calculating the successor feature functions for their optimal policies. This procedure is described more precisely below; a sketch of the loop follows the list.

1. Create a reward function by sampling with replacement each element of its weight vector, $w_{\tilde r}$, from {−50,−10,−2,−1,0,1,5,10,50}.
2. For this reward function, use value iteration to approximate its maximum-entropy optimal policy and that policy's successor feature function.
3. If this successor feature function differs from all previously calculated successor feature functions, save it and go to step 1.
4. Otherwise, the policy is redundant. If fewer than 300 consecutive redundant policies have been found, go to step 1.

The policy collection process above terminates after 300 consecutive redundant policies are found.
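A minimal sketch of this collection loop follows. The value-iteration and successor-feature computations are passed in as callables—`solve_max_ent_policy` and `successor_features` here are hypothetical stand-ins for the steps described above—so the sketch only pins down the sampling and redundancy-checking control flow.

```python
import numpy as np

REWARD_VALUES = [-50, -10, -2, -1, 0, 1, 5, 10, 50]

def collect_successor_features(num_features, solve_max_ent_policy,
                               successor_features, max_redundant=300, seed=0):
    rng = np.random.default_rng(seed)
    saved = []                      # successor feature functions kept so far
    consecutive_redundant = 0
    while consecutive_redundant < max_redundant:
        # Step 1: sample a reward weight vector with replacement.
        w = rng.choice(REWARD_VALUES, size=num_features)
        # Step 2: approximate the max-entropy optimal policy for w and
        # compute that policy's successor feature function.
        policy = solve_max_ent_policy(w)
        psi = successor_features(policy)
        # Steps 3-4: keep psi only if it differs from all saved ones.
        if any(np.allclose(psi, prev) for prev in saved):
            consecutive_redundant += 1
        else:
            saved.append(psi)
            consecutive_redundant = 0
    return saved
```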
Finally, we calculate the maximum-entropy optimal policy for the ground-truth reward function, r, and *remove the successor feature function for any policy that matches the optimal policy for r*. In other words, we remove any policies for other reward functions that were also optimal for r, making the regret-based learning problem more difficult. We ensured that the ground-truth reward function was not represented in order to better approximate real-world reward learning applications, in which one would be unlikely to have the optimal policy available for learning a successor feature function. On the specific delivery task on which we gathered human preferences, the process above resulted in 70 reward functions.

**Early stopping without a validation set** During training, the loss for the $P_{regret}$ model tended to show cyclical fluctuations, reaching low loss and then spiking. To handle this volatility, we used the $\hat r$ that achieved the lowest loss over all epochs of training, not the final $\hat r$. For $P_{\Sigma r}$ and $P_{regret}$, we found no evidence of overfitting with our linear representation of the reward function, but with a more complex representation, such early stopping should likely be based on the loss of the model on a validation set. A better understanding of the cyclical loss fluctuations we observed during training could further improve learning with $P_{regret}$.

**Discounting during value iteration** Despite the delivery domain being an episodic task, a low-performing policy can endlessly avoid terminal states, resulting in negative-infinity values for both its return and the successor features based on the policy. To prevent such negative-infinity values, we apply a discount factor of γ = 0.999 during value iteration—which is also where successor feature functions are learned—and when assessing the mean returns of policies with respect to the ground-truth reward function, r. We chose this high discount factor to have negligible effect on the returns of high-performing policies (since relatively quick termination is required for high performance) while still allowing value iteration to converge within a reasonable time.

**Hyperparameters for learning** Below we describe the other specific hyperparameters used for learning a reward function with both preference models. These hyperparameters were used across all experiments. For all models, the learning rate, softmax temperature, and number of training iterations were tuned on the noiseless synthetic preference datasets such that each model achieved an accuracy of 100% on our specific delivery task, and then were tuned further on stochastic synthetic preferences on our specific delivery task.

*Reward learning with the partial return preference model* — learning rate: 2; number of training epochs: 30,000; optimizer: Adam (with β1 = 0.9, β2 = 0.999, and eps = 1e−08).

*Reward learning with the regret preference model* — learning rate: 0.5; number of training epochs: 5,000; optimizer: Adam (with β1 = 0.9, β2 = 0.999, and eps = 1e−08); softmax temperature: 0.001.

*Logistic regression with both preference models, for the likelihood analysis in Section 5.2 and Appendix E.2* — learning rate: 0.5; number of training iterations: 3,000; optimizer: stochastic gradient descent; evaluation: 10-fold cross validation.
**Computer specifications and software libraries used** The computer used to run the experiments shown in Figures 15, 16, 17, 18, 19, 21, 22, and 23 had the following specification. Processor: 2x AMD EPYC 7763 (64 cores, 2.45 GHz); memory: 284 GB. The computer used to run all other experiments had the following specification. Processor: 1x Intel Core i9-9980XE (18 cores, 3.00 GHz) on an ASUS WS X299 SAGE/10G motherboard; GPUs: 4x RTX 2080 Ti; memory: 128 GB. PyTorch 1.7.1 (Paszke et al., 2019) was used to implement all reward learning models, and statistical analyses were performed using Scikit-learn 0.23.2 (Pedregosa et al., 2011).

## F.1.1 For Algorithm 1, Choosing An Input Set Of Policies For Learning Successor Feature Functions

A set of policies is input to Algorithm 1 and used to create successor feature functions, which are in turn used for generalized policy improvement (GPI) to efficiently estimate optimal value functions for $\hat r$ during learning. Which policies to include is an important open question for successor-feature-based methods in general, but our intuition is that the performance of GPI under successor-feature-based methods is improved by a greater diversity of successor feature functions (via a diverse set of policies) and by having some policies that perform decently (but not necessarily perfectly) on the reward functions for which $V^*$ and $Q^*$ values are being estimated via GPI.

Recalling that an input policy can come from policy improvement with an arbitrary reward function, we offer the following ideas for how to source an input set of policies.

- Choose reward function parameters according to some random distribution (as we do).
- Create a set of reward functions that differ in a structured way, such as each reward function being a point in a grid formed in parameter space.
- Learn policies from a separate demonstration dataset, using an imitation learning algorithm.
- Bootstrap from $\hat r$. Specifically, during learning, augment the current set of successor feature functions ($\Psi^{\pi_{SF}}_Q$ and $\Psi^{\pi_{SF}}_V$) by learning one new successor feature function via policy improvement on the current $\hat r$; then continue reward learning with the augmented set of successor feature functions and repeat this process as desired.

The input set of policies could come from multiple different sources, including the ideas above.

## F.1.2 Instantiating Algorithm 1 For Reward Functions That May Be Non-Linear

Algorithm 1 operates under the assumption that the reward function can be represented as a linear combination of reward features. These reward features are obtained through a reward-features function ϕ, which is given as input to the algorithm. Here we address situations in which the linearity assumption does not hold.

If the state and action spaces are discrete, one could linearly model all possible (deterministic) reward functions by creating an indicator feature for each (s,a,s′) that is 1 for that (s,a,s′) and 0 otherwise. A downside of this approach is that the learned reward function cannot benefit from generalization, which has two negative consequences. First, in complex tasks, generalization would typically decrease the training set size required to learn a reward function $\hat r$ with optimal policies that perform well on the ground-truth reward function, r. Second, the reward function will not generalize to different tasks that share the same *symbolic* reward function, such as always having the same reward from interacting with a particular object type.
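As a concrete (if toy) illustration of this tabular construction—our own sketch, not part of Algorithm 1—the reward-features function below maps each (s,a,s′) tuple of a small discrete MDP to a one-hot vector, so any deterministic reward function over those tuples is a linear function of the features:

```python
import numpy as np

def make_onehot_phi(num_states, num_actions):
    """Build a reward-features function phi(s, a, s_next) that returns a
    one-hot vector with a unique index per (s, a, s_next) tuple."""
    dim = num_states * num_actions * num_states
    def phi(s, a, s_next):
        features = np.zeros(dim)
        index = (s * num_actions + a) * num_states + s_next
        features[index] = 1.0
        return features
    return phi

# Any deterministic reward function is then r(s, a, s') = w . phi(s, a, s'),
# where w stacks one reward value per (s, a, s') tuple.
phi = make_onehot_phi(num_states=4, num_actions=2)
w = np.arange(4 * 2 * 4, dtype=float)   # an arbitrary reward table, flattened
assert w @ phi(s=1, a=0, s_next=3) == w[(1 * 2 + 0) * 4 + 3]
```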
If the reward features are unknown or the reward is known to be non-linear, another approach is to create a reward-features function that permits a linear *approximation* of the reward function. Several methods to derive some or all of these reward features appear promising:

- Reward features can be learned by minimizing several auxiliary losses in a self-supervised fashion, as by Brown et al. (2020); see the sketch after this list. After optimizing for these various objectives using a single neural network, the activations of the penultimate layer of this network can be used as reward features. Such auxiliary tasks may include: reconstructing the current state from a lower-dimensional embedding (minimizing mean squared error); predicting how much time has passed between two states, i.e., a temporal difference model (minimizing mean squared error); predicting the action taken between two states, i.e., an inverse dynamics model (minimizing cross-entropy); predicting the next state given the current state and action, i.e., a forward dynamics model (minimizing mean squared error); and predicting which of two segments is preferred given a provided ranking (minimizing the T-REX loss).
- Reward features could also be learned by first learning a reward function represented as a neural network *using a partial return preference model*, and then using the activations of the penultimate layer of this neural network as reward features.
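As a sketch of the first idea above—our own minimal rendering of a subset of the listed tasks, not Brown et al.'s implementation—a single trunk network can be trained on several auxiliary heads, after which the trunk's output serves as the reward features ϕ(s):

```python
import torch
import torch.nn as nn

class AuxiliaryFeatureNet(nn.Module):
    """A shared trunk whose activations become reward features,
    trained via several self-supervised auxiliary heads."""
    def __init__(self, state_dim, action_dim, feature_dim=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        self.reconstruct = nn.Linear(feature_dim, state_dim)       # state reconstruction
        self.inverse_dyn = nn.Linear(2 * feature_dim, action_dim)  # action prediction
        self.forward_dyn = nn.Linear(feature_dim + action_dim, state_dim)

    def features(self, state):
        return self.trunk(state)  # used as phi(s) after training

    def losses(self, s, a_onehot, s_next):
        f, f_next = self.trunk(s), self.trunk(s_next)
        recon = nn.functional.mse_loss(self.reconstruct(f), s)
        inv = nn.functional.cross_entropy(
            self.inverse_dyn(torch.cat([f, f_next], dim=-1)),
            a_onehot.argmax(dim=-1))
        fwd = nn.functional.mse_loss(
            self.forward_dyn(torch.cat([f, a_onehot], dim=-1)), s_next)
        return recon + inv + fwd
```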
## F.2 Results From Synthetic Preferences

## F.2.1 Learning Reward Functions From 100 Randomly Generated MDPs

Here we describe how each MDP in the set of 100 MDPs discussed in Section 6.2 was generated. We also extend the analysis to illustrate how often each preference model performs better than uniformly random and give further details on our statistical tests.

**Design choices** The 100 MDPs are all instances of the delivery domain, but they have different reward functions. The height of each MDP is sampled from the set {5,6,10}, and the width is sampled from {3,6,10,15}. The proportion of cells that are terminal failure states is sampled from the set {0,0.1,0.3}. There is always exactly one terminal success state. The proportion of "mildly bad" cells is selected from the set {0,0.1,0.5,0.8}, and the proportion of "mildly good" cells is selected from {0,0.1,0.2}. Mildly good cells and mildly bad cells respectively correspond to cells with coins and roadblocks in our specific delivery task, but the semantic meaning of coins and roadblocks is irrelevant here. Each sampled proportion is translated to a number of cells (rounding down to an integer when needed), and cells are then randomly chosen to fill the grid with each of the above types of states until the proportions are satisfied. Then, the ground-truth reward component for each of the above cell types was sampled from the following sets:

- Terminal success states: {0,1,5,10,50}
- Terminal failure states: {−5,−10,−50}
- Mildly bad cells: {−2,−5,−10}

Mildly good cells always have a reward component of 1, and the component for white road-surface cells is always −1. There are no cells with a higher road-surface penalty (analogous to the bricks in the delivery domain).

**Better than random performance** Figure 14 complements the results in Figure 10, showing the percentage of MDPs in which each preference model outperforms a policy that chooses actions according to a uniformly random probability distribution. We can see that at this performance threshold, which is lower than that in Figure 10, the regret preference model outperforms the partial return preference model in most conditions. Even when their performance in this plot—based on outperforming uniformly random actions—is nearly identical, Figure 10 shows that the regret preference model achieves near-optimal performance in a higher proportion of MDPs.

![33_image_0.png](33_image_0.png)

Figure 14: Comparison of performance over 100 randomly generated deterministic MDPs, showing the percentage of MDPs in which each model performed better than an agent taking actions by a uniformly random policy. This plot complements Figure 10, which shows the percentage of MDPs in which the models perform near-optimally.

**Details for statistical tests** We performed a Wilcoxon paired signed-rank test on the normalized average returns achieved by each model over the set of 100 randomly generated MDPs. All normalized average returns below −1 were replaced with −1, so that all such returns were in the range [−1,1]. This clipping was done because any normalized average return below 0 is worse than uniformly random, so the difference between a normalized return of −1 and −1000 is relatively unimportant compared to the difference between 1 and 0. Results are shown in Table 4.

Table 4: Results of the Wilcoxon paired signed-rank test on normalized average returns for each preference model.

| Preference generator type | \|D≻\| = 3 | \|D≻\| = 10 | \|D≻\| = 30 | \|D≻\| = 100 | \|D≻\| = 300 | \|D≻\| = 1000 | \|D≻\| = 3000 |
|---|---|---|---|---|---|---|---|
| Noiseless (Pregret vs. PΣr) | w=1003, p=0.115 | w=917, p=0.007 | w=739, p=0.012 | w=487, p=0.007 | w=284, p<0.001 | w=301, p=0.002 | w=289, p=0.001 |
| Stochastic (Pregret vs. PΣr) | w=979, p=0.541 | w=1189.5, p=0.018 | w=891, p=0.027 | w=710, p=0.018 | w=285, p<0.001 | w=460, p=0.002 | w=199, p<0.001 |

Additionally, we investigate whether $P_{regret}$ and $P_{\Sigma r}$ learn near-optimal policies on the same MDPs within this set of 100 randomly generated MDPs. Results for this analysis are shown in Table 5.

| Model(s) | \|D≻\| = 3 | \|D≻\| = 10 | \|D≻\| = 30 | \|D≻\| = 100 | \|D≻\| = 300 | \|D≻\| = 1000 | \|D≻\| = 3000 |
|---|---|---|---|---|---|---|---|
| Both models | 31 | 40 | 66 | 72 | 83 | 87 | 88 |
| Only Pregret | 20 | 26 | 17 | 18 | 14 | 8 | 8 |
| Only PΣr | 10 | 12 | 7 | 8 | 3 | 3 | 3 |
| Neither | 39 | 22 | 10 | 2 | 0 | 2 | 1 |

Table 5: The count of MDPs (out of 100) in which both, only one, or neither of the models achieved near-optimal performance.

## F.2.2 The Effect Of Including Transitions From Absorbing State

In Section 3.2.2 we describe one context in which partial return is not identifiable: reward functions that differ by a constant can have different sets of optimal policies yet will have identical preference probabilities according to the partial return preference model (via Eqs. 2 and 9). This lack of identifiability arises specifically in tasks that have at least one state from which trajectories of *different* lengths are possible. Tasks that terminate upon completing a goal or reaching a failure state typically have this characteristic.
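The invariance itself is easy to verify numerically. In the toy check below (our own illustration), shifting every reward by a constant c leaves the partial return preference probability unchanged whenever the two segments have the same length, because the shift adds the same c·|σ| to both partial returns:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_partial_return(rewards1, rewards2):
    # Undiscounted Eq. 2: logistic of the difference in partial returns.
    return logistic(np.sum(rewards1) - np.sum(rewards2))

seg1 = np.array([-1.0, -1.0, 50.0])   # same length, |sigma| = 3
seg2 = np.array([-1.0, -2.0, -1.0])

for c in [0.0, -5.0, 100.0]:
    p = p_partial_return(seg1 + c, seg2 + c)
    print(c, p)   # identical probability for every constant shift c
```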
Below we describe an imperfect method to remove this lack of identifiability. Then we show that the partial return preference model does much worse in our main experiments with synthetic preferences and human preferences when this method is not applied.

**An imperfect method to prevent this source of unidentifiability for partial return** Put simply, the approach to prevent all constant shifts of reward from resulting in the same preference probabilities is to force $\hat r(s,a,s')=0$ for at least one tuple of state, action, and next state, (s,a,s′), and to include in the training dataset one or more segments containing that tuple. Technically, this solution addresses the source of this identifiability issue: that any constant shift in the output of the reward function will not change the likelihood of a preferences dataset (but can change the set of optimal policies). With this solution, a constant shift cannot be applied to all outputs, since at least one reward output is hard-coded to 0. And applying a constant shift to all other outputs would change the likelihood of the infinite, exhaustive preferences dataset that is assumed in identifiability theory.

![34_image_0.png](34_image_0.png)

![34_image_1.png](34_image_1.png)

Figure 15: Performance comparison over 100 randomly generated deterministic MDPs. The results in this plot expand upon the experiments in Figure 10, adding results for datasets that do not have any segments that terminate early.

More intuitively, setting $\hat r(s,a,s')=0$ for one (s,a,s′) tuple anchors the reward function. To explain, reward function inference effectively learns an ordering over (s,a,s′) tuples. Let us arbitrarily assume this ordering is in ascending order of preference. Then setting the reward for one such tuple to 0 forces all (s,a,s′) tuples that are later in the ordering to have positive reward and all (s,a,s′) tuples that are earlier in the ordering to have negative reward. (s,a,s′) tuples that are equal in the ordering to the anchored tuple are assigned a reward of 0.

However, assuming that reward is 0 in any state has at least two undesirable properties. First, it requires adding human task knowledge that is beyond what the preferences dataset contains, technically changing the learning problem we are solving. Second, while it resolves some ambiguity regarding which reward function has the highest likelihood, this resolution may not actually align with the ground-truth reward function. In an attempt to reduce the impact of these undesirable properties, we only set the reward for the *absorbing* state to 0. The absorbing state is the theoretical state that an agent enters upon terminating the task. All actions from the absorbing state merely transition back to the absorbing state. In practice, to include segments with transitions from the absorbing state, we add *early terminating* segments, meaning that termination in an n-length segment occurs before the final transition.

The method above is not a perfect fix, however. Assuming that transitions from the absorbing state have 0 reward also assumes that humans consider time after termination to have 0 return. We are uncertain that humans will always provide preferences that are aligned with this assumption. For instance, if termination frees the agent to pursue other tasks with positive reward, then we might be mistaken to assume that humans are giving preferences as if there is only 0 reward after termination.
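For concreteness, here is a small sketch (with our own symbolic naming) of how a terminal segment can be padded with zero-reward transitions from the absorbing state, mirroring the early-terminating segment construction used in the second stage of data collection (Appendix D.3):

```python
ABSORBING = "absorbing"          # symbolic absorbing state
NO_OP = "no-op"                  # placeholder action within the absorbing state

def pad_with_absorbing(states, actions, rewards, shift):
    """Shift a terminal segment `shift` timesteps earlier and pad it back to
    its original length with zero-reward transitions in the absorbing state."""
    n = len(rewards)
    states = states[shift:] + [ABSORBING] * shift
    actions = actions[shift:] + [NO_OP] * shift
    rewards = rewards[shift:] + [0.0] * shift
    assert len(rewards) == n
    return states, actions, rewards

# A length-3 segment that ends in a terminal state:
states = ["s0", "s1", "s2", "terminal"]
actions = ["right", "right", "up"]
rewards = [-1.0, -1.0, 50.0]

# The two extra segments paired with the original in the second stage:
one_early = pad_with_absorbing(states, actions, rewards, shift=1)
two_early = pad_with_absorbing(states, actions, rewards, shift=2)
```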
As mentioned in Section 3.2.2, past authors have acknowledged this issue with learning reward functions under the partial return preference model, apparently unaware of this solution of including transitions from absorbing states. Instead, they have used normalization (Christiano et al., 2017; Ouyang et al., 2022), tanh activations (Lee et al., 2021a), and L2 regularization (Hejna & Sadigh, 2023) to resolve their algorithms' insensitivity to constant shifts in reward. However, these papers do not discuss what assumptions regarding alignment are implicitly made by these ambiguity-resolving choices. Curiously, artificially forcing episodic tasks to be fixed horizon is common (e.g., as done by Christiano et al. (2017, p. 14) and Gleave et al. (2022)) but has not, to our knowledge, been justified as a way to address this identifiability issue. Our solution above involving early termination could be cast as a specific form of forcing a fixed horizon (infinite in this case). In one recent analysis of reward identifiability, Skalse et al. (2022) make assumptions that are identical to our solution above but do not discuss their assumptions' relationship to the partial return preference model otherwise being insensitive to constant shifts in reward when the lengths of segments in a pair are the same. We advise researchers, when forcing a fixed horizon via other methods, to explain what bias they are introducing regarding the set of optimal policies, either for generating synthetic preferences or during reward function inference.

**Empirical results** Figure 15 expands upon the synthetic-preferences results of Figure 10 in Section 6.2, adding results for datasets that lack segments that terminate early. For each combination of preference model and whether early-terminating segments are included (i.e., for each different color in Figure 15), we used a learning rate chosen from testing on a set of 30 MDPs. Specifically, we chose the learning rate from the set {0.005,0.05,0.5,1,2} that had the highest mean percentage of near-optimal performance on these 30 MDPs across training sets of 30, 300, and 3000 preferences. These 30 MDPs were taken from the 100 randomly generated MDPs used in Section 6.2. So, for testing with the chosen learning rates, we replaced them with 30 new MDPs generated by the same sampling procedure, determining the 100 MDPs used for these results. Also, a different random seed is used here than in Section 6.2.

In the results shown in Figure 15, performance with the partial return model is worse without early-terminating segments, whereas the regret model does not appear to be sensitive to their inclusion. The effect of removing such early-terminating segments from the *human* preferences dataset is tested in Appendix F.3.4. There we see similar results: only performance with the partial return model is harmed by removing segment pairs with early-terminating segments, and the decrease in performance is severe.

## F.2.3 Varying Segment Length

Here we consider empirically the effect of different fixed lengths of the segments in the preference dataset. All other experiments in this article assume the segment length |σ| = 3, and this analysis is a limited investigation into whether changing |σ| affects results.
In general, we suspect that increasing segment length has a trade-off: it makes credit assignment within the segment more difficult, yet it also gives preference information about more unique and complex combinations of events, which could reduce the probability that the preference provides only information already contained in the remainder of the training dataset. Put more simply, preferences over longer segments appear to give information that is harder to use but covers more transitions.

![36_image_0.png](36_image_0.png)

Figure 16: Using noiselessly generated preferences, comparison of performance over 100 randomly generated deterministic MDPs, showing the percentage of MDPs in which each model performed near-optimally after learning from preference datasets of different segment lengths.

![36_image_1.png](36_image_1.png)

Figure 17: Using stochastically generated preferences, comparison of performance over 100 randomly generated deterministic MDPs, showing the percentage of MDPs in which each model performed near-optimally after learning from preference datasets of different segment lengths.

![36_image_2.png](36_image_2.png)

Figure 18: The results from Figure 16 (noiselessly generated preferences) divided by preference model, allowing further perspective.

![36_image_3.png](36_image_3.png)

Figure 19: The results from Figure 17 (stochastically generated preferences) divided by preference model, allowing further perspective. Segment lengths 4 and 6 are added here.

To conduct this analysis, the following process is followed for each of 30 MDPs that were sampled as described in Appendix F.2.1. For each n ∈ {1,2,3,4,6,10,20}, we synthetically create preference datasets with segment length |σ| = n. Each segment was generated by choosing a non-terminal start state and n actions, all uniformly randomly. As in Appendix F.2.1, each preference model acts as a preference generator to label these segment pairs, resulting in datasets that differ only in their labels, and then each preference model is used for reward learning on the same dataset it labeled. We observe the following:

- With noiselessly generated preferences (Figure 16), the performance with each preference model is similar for segments with 20 transitions, though it is sometimes slightly better with the partial return preference model. At a segment length of 10, performance with the regret preference model is better or similar, depending on the number of preferences. For all other segment lengths, performance with the regret preference model is better.
- With stochastically generated preferences (Figure 17), performance is generally better with the regret preference model, although the partial return preference model sometimes has marginally better performance. Additionally, the variance of performance with the partial return preference model is higher.

Additionally, we conduct Wilcoxon paired signed-rank tests for a positive effect of segment length on performance. Four tests are conducted, one per combination of preference model and whether preferences are generated noiselessly or stochastically. The steps to conduct this test are below.
For each combination of preference model and each preference dataset size $|D_\succ|\in\{3,10,30,100,300,1000,3000\}$, we calculate Kendall's τ correlation between the following two orderings:

- segment lengths in ascending order, (1,2,3,10,20), and
- segment lengths in ascending order of the corresponding percentage of MDPs in which near-optimal performance was achieved (e.g., if performance is near-optimal in 90% of MDPs for |σ| = 20 and in 60% of MDPs for |σ| = 10, then |σ| = 20 would be later in the ordering than |σ| = 10).

For each of the resultant 7 τ values—one for each $|D_\succ|$—we create a pair for a Wilcoxon paired signed-rank test: (τ, 0). The 0 in the pair is the expected Kendall's τ value for uniformly random orderings of segment lengths. Therefore, each pair represents a comparison between the correlation of a training dataset's segment length with its performance and no correlation. Each Wilcoxon paired signed-rank test is conducted on these 7 pairs.
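A sketch of this test with SciPy is below. It assumes a mapping from each training-set size to the percentage of MDPs solved near-optimally at each segment length; the data shown are placeholders, not our results.

```python
from scipy.stats import kendalltau, wilcoxon

segment_lengths = [1, 2, 3, 10, 20]

# Placeholder data (not our results): for each dataset size, the percentage
# of MDPs solved near-optimally at each segment length above.
pct_near_optimal = {
    3:    [10, 20, 25, 30, 35],
    10:   [20, 30, 35, 45, 50],
    30:   [30, 45, 50, 60, 65],
    100:  [40, 55, 65, 70, 80],
    300:  [50, 65, 75, 80, 85],
    1000: [55, 70, 80, 85, 90],
    3000: [60, 75, 85, 90, 95],
}

# One Kendall's tau per dataset size: correlation between segment length
# and the near-optimal percentage.
taus = [kendalltau(segment_lengths, pcts)[0]
        for pcts in pct_near_optimal.values()]

# Wilcoxon paired signed-rank test of the 7 taus against the expected
# tau of 0 for uniformly random orderings (one-sided, for a positive effect).
stat, p = wilcoxon(taus, [0.0] * len(taus), alternative="greater")
print(taus, stat, p)
```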
## F.2.4 Reward Learning In Stochastic MDPs

Although we theoretically consider MDPs with stochastic transitions in Section 3, we have not yet *empirically* compared PΣr and Pregret in tasks with stochastic transitions, which we do below. We randomly generated 20 MDPs, each with a 5×5 grid. Instead of terminal cells that are associated with success or failure, these MDPs have terminal cells that are either risky or safe. A single terminal *safe* cell was randomly placed; the number of terminal *risk* cells was sampled from the set {1, 2, 7}, and these terminal risk cells were likewise randomly placed. No other special cells were used in this set of MDPs.

To add stochastic transitions, the delivery domain was modified such that when an agent moves into a terminal risk cell, there is a 50% chance of receiving a lower reward, rlose, and a 50% chance of receiving a higher reward, rwin. All other transitions are deterministic. As in the unmodified delivery domain, moving to any non-terminal state results in a reward of −1. Moving to the terminal safe state yields a reward of +50, like the terminal success state of the unmodified delivery domain. Therefore, depending on the values of rwin and rlose, it may be better to move into a terminal risk state than to avoid it.

All segments were generated by choosing a start state and three actions, all uniformly randomly. For each MDP, the preference dataset D≻ contains 3000 segment pairs. The 10 MDPs of each condition differed from those of the other conditions by their ground-truth reward function r, with different rwin and rlose values. As in Section 6.2, regardless of whether the stochastic or noiseless version of a preference model generates the preference labels, the stochastic version of the same preference model is used for learning the reward function.

The results are shown below in Table 6, indicating that for both noiseless and stochastic preference datasets, Pregret is always able to achieve near-optimal performance, whereas PΣr is not. These results expand upon and support the first proof of Theorem 3.2 in Section 3.

Table 6: Stochastic MDPs: proportion of 10 MDPs in which performance was near optimal, with varied reward functions. Entering a terminal risk cell results in rwin or rlose, each with 50% probability.

| Preference generator | rwin = 1, rlose = −50 | rwin = 1000, rlose = −50 | rwin = 100, rlose = −1 | rwin = 100, rlose = −1000 |
|----------------------|-----------------------|--------------------------|------------------------|---------------------------|
| Noiseless Pregret    | 1.0                   | 1.0                      | 1.0                    | 1.0                       |
| Stochastic Pregret   | 1.0                   | 1.0                      | 1.0                    | 1.0                       |
| Noiseless PΣr        | 1.0                   | 0.0                      | 1.0                    | 0.0                       |
| Stochastic PΣr       | 1.0                   | 0.0                      | 1.0                    | 1.0                       |
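To make the risk-cell dynamics concrete, here is a minimal sketch of the stochastic reward draw described above; the function name and interface are our own illustration, not the paper's implementation:

```python
import random

def risk_cell_reward(r_win, r_lose, rng=random):
    """Entering a terminal risk cell yields r_win or r_lose with equal probability."""
    return r_win if rng.random() < 0.5 else r_lose

# With r_win = 100 and r_lose = -1000, the expected reward of entering a risk
# cell is 0.5 * 100 + 0.5 * (-1000) = -450, far worse than the safe cell's +50,
# so a correctly learned reward function should steer the agent away from it.
```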
## F.2.5 Generating Preferences And Learning Reward Functions With Different Preference Models

Using synthetically generated preferences, here we investigate the effects of choosing the incorrect model. Specifically, either Pregret or PΣr generates preference labels, and then the *other* preference model is used to learn a reward function from these preference labels. Through this mixing of preference models, we add two new conditions to the analysis in the delivery domain in Section 6.2.

The results are shown in Figure 20. We observe that, as expected, each preference model performs best when learning on preference labels it generated. Of all four combinations of preference models during generation and reward function inference, the best-performing combination is doing reward inference with Pregret on preferences generated by Pregret. Between the two mixed-model conditions, we observe that learning with the partial return preference model on preferences created via regret outperforms the reverse. These results suggest that learning reward functions with the regret preference model may be more sensitive to how much the preference generator deviates from it. However, we note that humans *do not* appear to be giving preferences according to partial return, so these results, though informative, are not worrisome.

![38_image_0.png](38_image_0.png)

Figure 20: Performance comparison over 100 randomly generated deterministic MDPs, with synthetically generated preferences. This plot expands upon Figure 10 by including mismatches between the preference model generating preference data and the preference model used during learning of the reward function.

## F.3 Results From Human Preferences

In this section we expand upon the results described in Section 6.3, which involve learning reward functions from the dataset of human preferences we collected.

![39_image_0.png](39_image_0.png)

Figure 21: Performance comparison over various amounts of human preferences. Each partition has the number of preferences shown or one less. This plot is identical to Figure 11 except that results for the expected return preference model are included and a different random seed was used to partition the human preferences.

![39_image_1.png](39_image_1.png)

Figure 22: Performance comparison over various amounts of human preferences. Each partition has the number of preferences shown or one less. This plot tests the same learned reward functions as in Figure 21, but it thresholds on outperforming a uniformly random policy rather than on near-optimal performance.

## F.3.1 Wilcoxon Paired Signed-Rank Test

For the Wilcoxon paired signed-rank test described in Section 6.3, normalized mean returns were clipped to [−1, 1] as in Appendix F.2.1. The result from each test is shown in Table 7. These results are surprisingly significant, given that both models reach 100% near-optimal performance for 5 and 10 partitions in Figure 11. However, in this setting, learning a reward function with the regret preference model tends to result in a higher mean score than learning one with the partial return preference model, even when both are above the threshold for near-optimal performance.

Table 7: Results from Wilcoxon paired signed-rank tests comparing the Pregret and PΣr preference models.

|   | 5 partitions | 10 partitions | 20 partitions | 50 partitions | 100 partitions |
|---|--------------|---------------|---------------|---------------|----------------|
| w | 0            | 6             | 24            | 216           | 939            |
| p | 0.043        | 0.028         | 0.007         | 0.003         | 0.076          |

## F.3.2 Expanded Plots, With The Expected Return And Logistic Linear Preference Models Included

Figure 21 shows the same results as Figure 11, but with additional results from the expected return preference model introduced in Appendix B.5 and the logistic linear preference model (Plog−lin) introduced in Appendix B.3. Figure 22 shows the same results with a different performance threshold, that of performing better than uniformly random action selection, which receives a 0 on our normalized mean return metric. The regret preference model matches or outperforms *both* other preference models in all partitionings of the human data, at both thresholds (near-optimal and better than random).

## F.3.3 Performance On Only Human Preferences From The First Stage Of Data Collection

As previously mentioned in Appendix D, when learning reward functions only from the data from the first stage of human data collection, the partial return model does worse. This first stage of data contains 1437 preferences, which is 79% of the full dataset. The specific performance of the partial return preference model on the full set of first-stage data (i.e., 1 partition) is a normalized mean return of −12.7, worse than a uniformly random policy. The regret preference model achieves 0.999, close to the optimal performance of 1.0.
## F.3.4 Performance Without Early-Terminating Segments

Segment pairs with early-terminating segments are one of the two types of segment pairs that appear in the second stage of data collection but not in the first stage. The identifiability issue of the partial return model related to a constant shift in the reward function (discussed in Section 3.2.2 and Appendix F.2.2) provides sufficient justification for the low performance observed in Appendix F.3.3 without inclusion of early-terminating segments.

To more directly test the effect of early-terminating segments, we repeat the analysis in Appendix F.3.3 while removing only the segment pairs from the second stage that have early-terminating segments. We get the same normalized mean returns: −12.7 for the partial return preference model and 0.999 for the regret preference model. Therefore, removal of the early-terminating segments is sufficient to cause the partial return preference model to perform poorly.

## F.3.5 Generalization Of Learned Reward Functions To Other MDPs With The Same Ground-Truth Reward Function

We also test how well these learned reward functions generalize to other instantiations of the domain. To do so, we keep the same hidden ground-truth reward function but otherwise randomly sample MDP configurations identically to the analysis of Section 6.2, which is detailed in Appendix F.2.1. Procedurally, this analysis starts identically to that in Section 6.3, learning a reward function from randomly sampled partitions of the human preference data. Each learned reward function is then tested within the 100 MDPs. For example, for two partitions of the data, each algorithm learns two reward functions, and each such rˆ is tested on 100 MDPs. To test a reward function in an MDP, the same procedure of value iteration is used as in Sections 6.2 and 6.3 to find an approximately optimal policy, the performance of which is then tested on the MDP (with the ground-truth reward function); a minimal sketch of this evaluation step is given at the end of this subsection.

![40_image_0.png](40_image_0.png)

Figure 23: Performance comparison of learned reward functions' generalization to 100 MDPs with the same reward function, when learned from various amounts of human preferences. Each partition has the number of preferences shown or one less.

The results are shown in Figure 23. The two preference models perform similarly at 100, 10, 2, and 1 partition(s), and otherwise the regret preference model outperforms the partial return preference model. The most pronounced differences are at 20 and 5 partitions, where the partial return preference model fails to reach near-optimal performance approximately twice as often as the regret preference model.

For each training set size, we conduct the same Wilcoxon paired signed-rank test as in Section 6.3 and Appendix F.3.1, except that for each of the 100 MDPs, we calculate the normalized mean return, and the *mean* of normalized mean returns across all 100 MDPs represents a single sample for the statistical test. Across all 7 training set sizes, no statistical significance is found (p > 0.2). Unlike in most other analyses in this paper, we cannot here conclude superior performance from assuming the regret preference model.
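As referenced above, here is a minimal sketch of the value-iteration evaluation step, assuming tabular dynamics `P` (shape S×A×S) and a learned reward table `R` (shape S×A); the interface is our own illustration, not the paper's actual code:

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    """Greedy policy for reward table R under transition tensor P.

    P: (S, A, S) transition probabilities (terminal states self-loop);
    R: (S, A) rewards, e.g. a learned r-hat.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)        # (S, A) action values
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)    # approximately optimal policy
        V = V_new

# The returned policy is then rolled out under the ground-truth reward
# function to compute the normalized mean return reported above.
```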
Review 1:

Summary:
This paper studies the problem of how to model human preferences for learning reward functions, which can then be used in Reinforcement Learning from Human Feedback. The paper proposes a new model for human preferences, the Regret model, which models the probability of one sequence being preferred to another as a logistic function over the differences in the *regret* of the two sequences. They compare this model to the standard partial return model both theoretically and empirically. On the theoretical side, they show their model is identifiable (in the limit of infinite noiseless data, using the model one can recover the underlying reward function), whereas the partial return model is shown by counterexample to not be identifiable in 3 scenarios: noiseless preferences, variable-horizon tasks, and segments of length 1. On the empirical side, the authors introduce a toy delivery-driver gridworld MDP and gather human preference data on segments in this setting. They show that the regret model better models the human preference data, implying it is a better descriptive model. They present an algorithm to learn a reward function under the regret model in a linear setting using successor features, and show that this algorithm can learn reward functions from synthetic and human data better than the partial return model in terms of the size of the dataset.

Strengths and Weaknesses:
# Strengths
* The paper is clear and well-written, and easy to follow.
* The paper's claims are all carefully made and backed up by theoretical or empirical results in all places.
* The theoretical results are interesting and well-explained.
* The proposed regret model of human preferences is interesting for the community, and points to lots of future work in this space.

# Weaknesses
* While this does not make the paper unworthy of acceptance, the proposed model of human preferences seems much more difficult to learn a reward function under than the partial return model. The proposed algorithm relies on a linear setting and successor features, neither of which are generally the case in the standard RLHF setting of fine-tuning language models.
* Similarly, in this setting (in particular single-turn dialogue or instruction following) the partial return and regret models are the same, so this contribution is unlikely to improve results there. I don't think the authors need to change the paper to address these two points, but I thought it was worth mentioning.
* The empirical results are somewhat limited to a single environment which is proposed by the authors, which makes it difficult to ascertain how performant using this model and algorithm would be in other more realistic settings. I think the paper is still worthy of acceptance, but if the authors performed experiments on other settings, particularly pre-existing environments from the literature, then that would enable them to make stronger claims about the empirical promise of this modelling strategy.

Requested Changes:
As mentioned in the Strengths and Weaknesses, none of these changes are critical for securing my recommendation, and are rather just changes that would strengthen the claims the authors could make in this work.
* If the authors performed experiments on other settings, particularly pre-existing environments from the literature, then that would enable them to make stronger claims about the empirical promise of this modelling strategy.
* Related to this, if the authors could present an algorithm for learning under the regret preference model that will work in more general settings (for example, RLHF fine-tuning of language models), that would be beneficial.

Broader Impact Concerns: I don't have any concerns about the ethical implications of the work.

==================================================

Review 2:

Summary:
The paper studies how well existing ways of learning from human preferences based on reward models can recover optimal behavior, and under what assumptions. The idea proposed by the authors is that humans are more likely to rank (sub)trajectories based on regret than on rewards accumulated in that segment, which is what current reward-modelling paradigms assume. In the theoretical part, the authors show that regret-based models of human preferences allow recovering optimal behaviors within standard workflows (preferences -> value function -> policy), but in some non-trivial cases models involving rewards accumulated on the subtrajectory cannot. The authors also propose an algorithm based on successor features to learn the optimal value function for practical cases where it is not available. In the experiments, the authors study three claims: 1) whether humans do seem to follow a regret-based model more than a cumulative-rewards-based model, 2) whether learning from human preferences works better with a regret-based model, and 3) how much we lose if the preferences actually follow a cumulative reward model compared to a regret-based model because of non-identifiability. The experiments overall support the claims of the authors, with a little caveat for the last experiment where eventually "partial return" catches up with "regret".

Strengths and Weaknesses:
Strengths: I really like the paper. As far as I know, the way the authors address the problem of learning from preferences hasn't been addressed in either the "preference learning" literature or the "RL" literature (and still not in the most recent "RLHF"/LLM literature). I think the authors make an interesting point about rewards accumulated over small trajectories vs. regret (vs. rewards accumulated over entire trajectories). The claim comes in two parts (that's what humans do + that's the model we should use when learning from human rewards), which are validated independently.

The theoretical part is nice. I agree with the authors that in the current format, keeping the proofs in the body of the paper (instead of putting them in the appendix, which tends to be more standard nowadays) is useful, as the actual insights are in there.

The algorithm is a nice-to-have more than a game changer, but somehow makes the paper feel more complete.

The experiments are carried out thoroughly. The various steps are well-detailed, and my feeling is that they support the claims sufficiently well for the sake of this paper.

Weaknesses: To me the main weakness is that the paper may miss its audience. Considering the huge interest and investment in RLHF for LLMs and foundation models in general, it seems to me that the value of this paper would lie in how the insights translate to these particular applications. I think that the paper, as insightful as it can be, doesn't really address the specific LLM setting in enough depth to be directly applicable. On the other hand, I appreciate that the application of the regret vs. partial return idea to RLHF in LLMs is a whole different endeavour that would likely require an entirely new paper.
So I'm happy with the contribution as it stands, and I think that "leaving this for future work" is reasonable.

The second main weakness from my perspective is presentation -- but this again may only be a matter of perspective. It seems to me that the presentation is directed towards a theory-inclined audience that is familiar with RL and concepts such as regret, which IMHO makes the paper hard to read for more applied researchers working on RLHF. Also, from the write-up, it is hard to see whether the results are corner cases or actual phenomena that are likely to occur. I understand that only experience can answer the question, but I would have liked some discussion of what the authors would expect on RLHF-type tasks, or a list of the main unknowns that should be answered in that context.

Other questions/comments:
- the notion of regret (Eq. 3) is based on the optimal policy, but most of the value of this definition is when the trajectory doesn't end in a terminal state. At this stage of the paper I wonder what happens if we constrain the (sub)trajectories to end up in terminal states.
- I am not sure I understand why identifiability (Def. 3.1) should be limited to comparing segments of the same length. Also, I think it should be said more clearly (now it's hidden in the middle of the definition). I missed it in a first read, which made me struggle at the start of Section 3.2.2.
- the paragraph "regret as model for human preference" right before Section 3 discusses the assumptions underlying the regret model. This paragraph is definitely nice to have, but I would also have liked some discussion of the differences in assumptions between the reward and the regret models regarding human data collection.
- Figure 2: it feels strange to me to compare the two segments since they do not have the same starting state. As a human, it is unclear which one is "better" without doing an evaluation of each segment irrespective of the other (segment one is bad because it goes from safe to unsafe, apparently because of the driver, while segment two seems good because it goes from unsafe to safe), which kills the point of comparisons in the first place. As far as I can see, RLHF as used in LLMs always focuses on the same starting state, so I would encourage the authors to make sure their examples also follow this case. In particular, to me it is not clear whether the human assessment should assess the starting state -- in which case I would prefer to start from a safe place rather than an unsafe one, as it seems easier to improve from there. My understanding is that the regret cancels out the effect of the starting state within a trajectory, which seems a pretty big deal in practice if we compare trajectories with different starting states.

Requested Changes: Nothing requested per se, but I would encourage the authors to address my questions on the presentation.

Broader Impact Concerns: none

==================================================

Review 3:

Summary:
The paper studies the problem of modeling human preferences. A new preference model is proposed based on regret rather than partial return. The authors show that the partial-return-based model is unidentifiable, while the regret model, based on the negative advantage function value of the segment, is identifiable.

Strengths and Weaknesses:
Strengths: The paper makes a first step towards understanding how dense regret helps better align the policy with human preferences in RL.
The proposed method associates the regret of each segment with human preferences, using the negated sum of an optimal policy's advantages over the transitions in the segment.

Weaknesses:
- 1. My biggest concern is about how the regret can be approximated in practice. For RL problems, approximately knowing the optimal Q function up to $\epsilon$ error is equivalent to finding an $\epsilon$-optimal policy, assuming that maximizing over $Q^\star$ can be directly solved. Thus, it is unclear whether we can really utilize the regret notion for RLHF in practice.
- 2. Regarding the identifiability issue, I understand that the regret over segments provides more signal than partial return, making it identifiable under weak assumptions. However, as the authors mentioned, the traditional approach focuses on maximum likelihood -- meaning that the preference model follows a certain statistical model (precisely, the Bradley-Terry-Luce model for pairwise comparisons). If such a model is well-specified and there are enough samples, the MLE shall be identifiable. How shall we compare this with the regret-based scenario in terms of identifiability?
- 3. I don't get the point of Section 3.2.3. If $|\sigma|=1$, meaning that it is a contextual bandit problem, then it is expected that the discount factor does not play a role. Why would we say that this is an unidentifiability issue?
- 4. The algorithm focuses mostly on the linear reward case. Can the authors comment on how this can be generalized to neural networks?

Requested Changes: Please clarify all my comments above.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper was a bit cursed, through no fault of the authors, as we struggled to find a full set of reviewers. Once we did, the reviewer consensus was clear, and I am satisfied that the authors have taken onboard all feedback in making changes to the paper during the discussion period. I encourage the authors to look once more over the feedback and incorporate any outstanding recommendations into their camera-ready draft.

==================================================
# K-Mixup Regularization For Deep Learning Via Optimal Transport

Kristjan Greenewald *kristjan.h.greenewald@ibm.com*
MIT-IBM Watson AI Lab, IBM Research

Anming Gu *agu2002@bu.edu*
Boston University

Mikhail Yurochkin *mikhail.yurochkin@ibm.com*
MIT-IBM Watson AI Lab, IBM Research

Justin Solomon *jsolomon@mit.edu*
Massachusetts Institute of Technology

Edward Chien *edchien@bu.edu*
Boston University

Reviewed on OpenReview: *https://openreview.net/forum?id=lOegPKSu04*

## Abstract

Mixup is a popular regularization technique for training deep neural networks that improves generalization and increases robustness to certain distribution shifts. It perturbs input training data in the direction of other randomly-chosen instances in the training set. To better leverage the structure of the data, we extend mixup in a simple, broadly applicable way to *k-mixup*, which perturbs k-batches of training points in the direction of other k-batches. The perturbation is done with displacement interpolation, i.e., interpolation under the Wasserstein metric. We demonstrate theoretically and in simulations that k-mixup preserves cluster and manifold structures, and we extend theory studying the efficacy of standard mixup to the k-mixup case. Our empirical results show that training with k-mixup further improves generalization and robustness across several network architectures and benchmark datasets of differing modalities. For the wide variety of real datasets considered, the performance gains of k-mixup over standard mixup are similar to or larger than the gains of mixup itself over standard ERM after hyperparameter optimization. In several instances, in fact, k-mixup achieves gains in settings where standard mixup has negligible to zero improvement over ERM.

## 1 Introduction

Standard mixup (Zhang et al., 2018) is a data augmentation approach that trains models on weighted averages of random pairs of training points. Averaging weights are typically drawn from a beta distribution β(α, α), with parameter α such that the generated training set is *vicinal*, i.e., it does not stray too far from the original dataset. Perturbations generated by mixup may be in the direction of any other data point instead of being informed by local distributional structure. As shown in Figure 1, this property is a key weakness of mixup that can lead to poor regularization when distributions are clustered or supported on a manifold. With larger α, the procedure can result in averaged training points with incorrect labels in other clusters or in locations that stray far from the data manifold.

To address these issues, we present *k-mixup*, which averages random pairs of *sets* of k samples from the training dataset. This averaging is done using optimal transport, with *displacement interpolation*. The sets of k samples are viewed as discrete distributions and are averaged as distributions in a geometric sense. If k = 1, we recover standard mixup regularization.

![1_image_0.png](1_image_0.png)

Figure 1: Outputs of a fully-connected network trained on three synthetic datasets for binary classification, with no mixup (ERM), 1-mixup, and 32-mixup regularization (α = 1). Note that ERM under-regularizes (visible through the jagged boundaries), 1-mixup over-regularizes (visible through over-smoothing), and that 32-mixup better captures local structure (visible through less blur, increased contrast) while retaining reasonable smoothing between the classes.
Figures 1 and 2 illustrate how k-mixup produces perturbed training datasets that better match the global cluster or manifold support structure of the original training dataset. The constraints of optimal transport are crucial, as for instance a nearest-neighbors approach would avoid the cross-cluster matches necessary for smoothing.¹ Figure 3 illustrates the distribution of possible matchings for a sample point and shows non-zero likelihood for these cross-cluster matches. In Section 5, we provide empirical results that justify the above intuition. The resulting method is easy to implement, computationally cheap, and versatile. Our contributions are as follows.

- Empirical results:
  - We show improved generalization results on standard benchmark datasets showing k-mixup with k > 1 consistently improves on standard mixup, where α is optimized for both methods. The improvements are consistently similar in magnitude to or larger than those of 1-mixup over basic ERM.
  - On image datasets, a heuristic of k = 16 outperforms 1-mixup in nearly all cases.
  - We show that k-mixup significantly improves robustness to certain distribution shifts (additive noise and adversarial samples) over 1-mixup and ERM.
- Theoretical results:
  - We argue that as k increases, the interpolated samples are more and more likely to remain within the data manifold (Section 3.1).
  - In the clustered setting, we provide an argument that shows inter-cluster regularization interpolates nearest points and better smooths interpolation of labels (Section 3.2).
  - We extend the theoretical analysis of Zhang et al. (2020) and Carratino et al. (2020) to our k-mixup setting, showing that it leverages local data distribution structure ($\mathcal{D}_i$ of Eq. 1) to make more informed regularizations (Section 4).

**Related works.** We tackle issues noted in the papers on adaptive mixup (Guo et al., 2019) and manifold mixup (Verma et al., 2018). The first refers to the problem as "manifold intrusion" and seeks to address it by training data point-specific weights α and considering convex combinations of more than 2 points. Manifold mixup deals with the problem by relying on the network to parameterize the data manifold, interpolating in the hidden layers of the network. We show in Section 5 that k-mixup can be performed in hidden layers to boost the performance of manifold mixup. A related approach is that of GAN-mixup (Sohn et al., 2020), which trains a conditional GAN and uses it to generate data points between different data manifolds. The approaches above require training additional networks and are far more complex than our k-mixup method.

¹To see this, note that because nearest-neighbors can be a many-to-one matching, nearly all matches would be intra-cluster between points of the same class and thus provide few/no interpolated labels, particularly missing any interpolations in the voids between classes where interpolation is most important. As additional support, we ran 20 Monte Carlo trials of the CIFAR-10 experiment below with a k = 16-nearest-neighbors strategy. It failed to outperform even 1-mixup (0.09% worse). Further discussion is presented in supplement Section J, with an analogue of Figure 3 for a k-nearest-neighbors strategy.

![2_image_0.png](2_image_0.png)

Figure 2: Optimal transport couplings and vicinal datasets for k = 1 (left) and k = 32 (right) in 3 simple datasets. In the bottom row, α = 1 was used to generate vicinal datasets of size 512.

The recent local mixup method (Baena et al., 2022) uses a distance-based approach for modifying mixup.
This method retains the random matching strategy of mixup, but scales the contribution of a vicinal point to the objective function according to its distance from the original training point. As noted previously, such methods that employ random matching will fail to provide substantive data smoothing for the output function between clusters or high-density regions in the training data.

PuzzleMix (Kim et al., 2020) also combines optimal transport ideas with mixup, extending CutMix (Yun et al., 2019) to combine pairs of images. PuzzleMix uses transport to shift saliency regions of images, producing meaningful combinations of input training data. Their use of optimal transport is fundamentally different from ours and does not generalize to non-image data. There are several other image-domain-specific works in this vein, including CoMix (Kim et al., 2021), Automix (Liu et al., 2021), and Stackmix (Chen et al., 2021).

Performing optimal transport between empirical samples of distributions has been considered in studies of the *sample complexity* of Wasserstein distances (e.g., Weed & Bach (2019)). Unlike most settings, in our application the underlying source and target distributions are the same; the theoretical investigation of a generalization of variance called k-variance by Solomon et al. (2020) considers a similar setting. In other works, transport between empirical samples has been dubbed *minibatch optimal transport* and has been used in generative models (Genevay et al., 2018; Fatras et al., 2020) and domain adaptation (Damodaran et al., 2018; Fatras et al., 2021).

## 2 Generalizing Mixup

**Standard mixup.** Mixup uses a training dataset of feature-target pairs $\{(x_i, y_i)\}_{i=1}^{N}$, where the target $y_i$ is a one-hot vector for classification. Weighted averages of training points construct a vicinal dataset:

$$(\tilde{x}_{ij}^{\lambda},\tilde{y}_{ij}^{\lambda}):=(\lambda x_{i}+(1-\lambda)x_{j},\;\lambda y_{i}+(1-\lambda)y_{j}).$$

Here λ is sampled from a beta distribution, β(α, α), with parameter α > 0 usually small so that the averages are near an endpoint. Using this vicinal dataset, empirical risk minimization (ERM) becomes:

$$\mathcal{E}_{1}^{mix}(f):=\mathbb{E}_{i,j,\lambda}\left[\ell\left(f(\tilde{x}_{ij}^{\lambda}),\tilde{y}_{ij}^{\lambda}\right)\right],$$

where $i, j \sim U\{1, \dots, N\}$, $\lambda \sim \beta(\alpha, \alpha)$, f is a proposed feature-target map, and ℓ is a loss function. Effectively, one trains on datasets formed by averaging random pairs of training points. As the training points are randomly selected, this construction makes it likely that the vicinal data points may not reflect the local structure of the dataset, as in the clustered or manifold-support setting. A minimal code sketch of this construction follows.
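The sketch below constructs one vicinal point with standard (1-)mixup; the toy features and labels are our own placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
x_i, x_j = rng.standard_normal(4), rng.standard_normal(4)  # toy features
y_i, y_j = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # one-hot labels

alpha = 0.2
lam = rng.beta(alpha, alpha)  # mixing weight, near 0 or 1 for small alpha

# Vicinal training point: convex combination of features and labels.
x_tilde = lam * x_i + (1 - lam) * x_j
y_tilde = lam * y_i + (1 - lam) * y_j  # soft label of the vicinal point
```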
**k-mixup.** To generalize mixup, we sample two random subsets of k training points $\{(x_i^{\gamma}, y_i^{\gamma})\}_{i=1}^{k}$ and $\{(x_i^{\zeta}, y_i^{\zeta})\}_{i=1}^{k}$. For compactness, let $x^{\gamma} := \{x_i^{\gamma}\}_{i=1}^{k}$ and $y^{\gamma} := \{y_i^{\gamma}\}_{i=1}^{k}$ denote the feature and target sets (and likewise for ζ). A weighted average of these subsets is formed with *displacement interpolation* and used as a vicinal training set. This concept is from optimal transport (see, e.g., Santambrogio (2015)) and considers $(x^{\gamma}, y^{\gamma})$ and $(x^{\zeta}, y^{\zeta})$ as uniform discrete distributions $\hat{\mu}^{\gamma}, \hat{\mu}^{\zeta}$ over their supports.

![3_image_0.png](3_image_0.png)

Figure 3: Locally-informed matching distributions $\mathcal{D}_i$ for k = 32, for a randomly selected point (see Equation 1 for the explicit definition). These distributions reflect local manifold and cluster structure. Note that a nearest-neighbors approach would not provide cross-cluster matchings (see Figure 7 in Appendix J).

In this setting, the optimal transport problem becomes a linear assignment problem (Peyré & Cuturi, 2019). The optimal map is described by a permutation $\sigma \in S_k$ minimizing the cost:

$$W_{2}^{2}(\hat{\mu}^{\gamma},\hat{\mu}^{\zeta})=\frac{1}{k}\sum_{i=1}^{k}\|x_{i}^{\gamma}-x_{\sigma(i)}^{\zeta}\|_{2}^{2}.$$

Here, σ can be found efficiently using the Hungarian algorithm (Bertsimas & Tsitsiklis, 1997). Figure 2 gives intuition for this identification. When compared to the random matching used by standard mixup, our pairing is more likely to match nearby points and to make matchings that better respect local structure, especially by having nontrivial probability for cross-cluster matches between nearby points on the two clusters (see Figure 3).

Given a permutation σ and weight λ, the displacement interpolation between $(x^{\gamma}, y^{\gamma})$ and $(x^{\zeta}, y^{\zeta})$ is:

$$DI_{\lambda}((x^{\gamma},y^{\gamma}),(x^{\zeta},y^{\zeta})):=\Big\{\lambda(x_{i}^{\gamma},y_{i}^{\gamma})+(1-\lambda)(x_{\sigma(i)}^{\zeta},y_{\sigma(i)}^{\zeta})\Big\}_{i=1}^{k}.$$

A minimal code sketch of this interpolation step is given at the end of this section. As in standard mixup, we draw λ ∼ β(α, α). For the loss function, we consider sampling k-subsets of the N samples at random, which we can mathematically describe as choosing $\gamma, \zeta \sim U\{1, \dots, \binom{N}{k}\}$, for which $\{\{(x_i^{\gamma}, y_i^{\gamma})\}_{i=1}^{k}\}_{\gamma=1}^{\binom{N}{k}}$ are the possible subsets.² This yields an expected loss

$$\mathcal{E}_{k}^{mix}(f)=\mathbb{E}_{\gamma,\zeta,\lambda}\left[\ell(f(DI_{\lambda}(x^{\gamma},x^{\zeta})),DI_{\lambda}(y^{\gamma},y^{\zeta}))\right].$$

The localized nature of the matchings makes it more likely that the averaged labels will smoothly interpolate over the decision boundaries. A consequence is that k-mixup is robust to higher values of α, since it is no longer necessary to keep λ close to 0 or 1 to avoid erroneous labels. This can be seen in our empirical results in Section 5 and theoretical analysis in Section 4.

**k, α, and Root Mean Squared Perturbation Distance.** When α is kept fixed and k is increased, the perturbations become more local, and the distance of matchings (and thus perturbations) decreases. We found that more sensible comparisons across values of k can be obtained by *increasing* α in concert with k,³ so that the average perturbation's squared distance remains constant (bearing in mind that large squared-distance values may not be achievable for high values of k). In effect, this is viewing the root mean squared perturbation distance as the parameter, instead of α. Computation details are found in Section 5.1. Throughout training, this tuned α is kept fixed. We typically pick an α for k = 1, get the associated root mean squared distance (ξ), and increase α as k increases to maintain the fixed value of ξ.

²These possible subsets need not be enumerated since we optimize the loss with stochastic gradient descent.
³Note that this adjustment is of course not necessary for parameter tuning in practice.
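As referenced above, here is a minimal sketch of the displacement interpolation step for one pair of k-batches, using SciPy's Hungarian solver; the array names are our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import distance_matrix

def displacement_interpolation(x_g, y_g, x_z, y_z, lam):
    """Interpolate two k-point batches under the optimal W2 matching."""
    cost = distance_matrix(x_g, x_z) ** 2   # squared Euclidean costs
    _, sigma = linear_sum_assignment(cost)  # optimal permutation
    x_mix = lam * x_g + (1 - lam) * x_z[sigma]
    y_mix = lam * y_g + (1 - lam) * y_z[sigma]
    return x_mix, y_mix

# Example with k = 4 random points in the plane and one-hot labels:
rng = np.random.default_rng(0)
x_g, x_z = rng.standard_normal((4, 2)), rng.standard_normal((4, 2))
y_g = np.eye(2)[rng.integers(2, size=4)]
y_z = np.eye(2)[rng.integers(2, size=4)]
x_mix, y_mix = displacement_interpolation(x_g, y_g, x_z, y_z, rng.beta(1.0, 1.0))
```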
![4_image_0.png](4_image_0.png)

Figure 4: Injectivity radius example. Various ϵ-neighborhoods, Bϵ(S), have been illustrated for a curve S ⊂ R². Intuitively, the topology of Bϵ(S) no longer reflects that of S when ϵ > $R_S$, the injectivity radius. To be precise, Bϵ(S) is no longer homotopy equivalent to S (see Hatcher (2000) for the definition).

**Pseudocode and Computational Complexity.** We emphasize the speed and simplicity of our method. We have included brief PyTorch-style pseudocode in Section 5.1 below and note that with CIFAR-10 and k = 32, the use of k-mixup added about one second per epoch on GPU. Section 5.1 also contains a more extended analysis of computational cost, showing that there is little computational overhead to incorporating k-mixup in training compared to ERM alone.

## 3 Manifold & Cluster Structure Preservation

Using an optimal coupling for producing vicinal data points makes it likely that vicinal data points reflect local dataset structure. Below we argue that as k increases, the vicinal couplings will preserve manifold support, preserve cluster structure, and interpolate labels between clusters.

## 3.1 Manifold Support

Suppose our training data is drawn from a distribution µ on a d-dimensional embedded submanifold S in X × Y ⊂ R^M, where X and Y denote feature and target spaces. We define an injectivity radius:

Definition 3.1 (Injectivity radius). Let Bϵ(S) = {p ∈ X × Y | d(p, S) < ϵ} denote the ϵ-neighborhood of S, where d(p, S) is the Euclidean distance from p to S. Define the injectivity radius $R_S$ of S to be the infimum of the ϵ's for which Bϵ(S) is not homotopy equivalent to S.

As S is embedded, Bϵ(S) is homotopy equivalent to S for small enough ϵ, so $R_S > 0$. Essentially, Definition 3.1 grows an ϵ-neighborhood until the boundary intersects itself. See Figure 4 for a schematic example that illustrates this. We have the following:

Proposition 3.2. *For a data distribution µ supported on an embedded submanifold S with injectivity radius $R_S$, with high probability any constant fraction 1 − δ (for any fixed δ ∈ (0, 1]) of the interpolated samples induced by k-mixup will remain within Bϵ(S), for k large enough.*

The proof of this proposition is in supplement Section D. Hence, for large enough k, the interpolation induced by optimal transport will approximately preserve the manifold support structure. While the theory requires high k to achieve a tight bound, our empirical evaluations show good performance in the small-k regime. Some schematic examples are shown in Figures 1, 2, and 3.

## 3.2 Clustered Distributions

With a clustered data distribution, we preserve global structure by including many within-cluster and inter-cluster matches, where inter-cluster matches correspond (approximately) to the nearest points across clusters. This contrasts with 1-mixup, which, when the number of clusters is large, is biased towards primarily inter-cluster matches and does not seek to provide any structure to these random matches. These approximate characterizations of k-mixup are achieved exactly as k → ∞, as argued below.

To make precise the notion of a clustered distribution, we adopt the (m, ∆)-clusterable definition used by Weed & Bach (2019); Solomon et al. (2020). In particular, a distribution µ is (m, ∆)-clusterable if supp(µ) lies in the union of m balls of radius at most ∆. Now, if our training samples $(x_i, y_i)$ are sampled from such a distribution, where the clusters are sufficiently separated, then optimal transport will prioritize intra-cluster matchings over cross-cluster matchings.

Lemma 3.3. *Draw two batches of samples $\{p_i\}_{i=1}^{N}$ and $\{q_i\}_{i=1}^{N}$ from an (m, ∆)-clusterable distribution, where the distance between any pair of covering balls is at least 2∆. If $r_i$ and $s_i$ denote the number of samples in cluster i in batch 1 and 2, respectively, then the optimal transport matching will have $\frac{1}{2}\sum_i |r_i - s_i|$ cross-cluster matchings.*
The proof of the above (supplement Section E) involves the pigeonhole principle and basic geometry. We also argue that the fraction of cross-cluster identifications approaches zero as k → ∞ and characterize the rate of decrease. The proof (supplement Section F) follows via Jensen's inequality:

Theorem 3.4. *Given the setting of Lemma 3.3, with probability masses $p_1, \dots, p_m$, and two batches of size k matched with optimal transport, the expected fraction of cross-cluster identifications is* $O\left((2k)^{-1/2}\sum_{i=1}^{m}\sqrt{p_i(1-p_i)}\right)$.

Note that the absolute number of cross-cluster matches still increases as k increases, providing more information in the voids between clusters. We emphasize that the (m, ∆) assumption corresponds to a setting where clusters are well-separated such that cross-cluster identifications are as unlikely as possible. Hence, in real datasets, the fraction of cross-cluster identifications will be larger than indicated in Theorem 3.4.

Finally, we show that these cross-cluster matches are of length close to the distance between the clusters with high probability (proved in supplement Section G), i.e., the endpoints of the match lie in the parts of each cluster closest to the other cluster. This rests upon the fact that we are considering the W2 cost, for which small improvements to long-distance matchings yield large cost reductions in squared Euclidean distance.

Theorem 3.5. *Suppose density p has support on disjoint compact sets (clusters) A, B whose boundaries are smooth, where p > 0 throughout A and B. Let D be the Euclidean distance between A and B, and let $R_A$, $R_B$ be the radii of A, B respectively. Define $A_\epsilon$ to be the subset of set A that is less than (1 + ϵ)D distance from B, and define $B_\epsilon$ similarly. Consider two batches of size k drawn from p and matched with optimal transport. Then, for k large enough, with high probability all cross-cluster matches will have an endpoint each in $A_\epsilon$ and $B_\epsilon$, where* $\epsilon = \frac{\max(R_A, R_B)^2}{D^2}$.

Theorem 3.5 implies that for large k, the vicinal distribution created by sampling along the cross-cluster matches will almost entirely lie in the voids between the clusters. If the clusters correspond to different classes, this will directly encourage the learned model to smoothly interpolate between the class labels as one transitions across the void between clusters. This is in contrast to the random matches of 1-mixup, which create vicinal distributions that can span a line crossing any part of the space without regard for intervening clusters. The behaviors noted by Theorems 3.4 and 3.5 are visualized in Figures 1, 2, and 3, showing that k-mixup provides smooth interpolation between clusters and strengthens label preservation within clusters. These behaviors can also be checked numerically; see the simulation sketch below.
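As a sanity check on Theorem 3.4, the cross-cluster match fraction can be estimated by simulation. The following sketch (our own illustration, not from the paper's code) samples two k-batches from a well-separated two-cluster mixture and counts matches whose endpoints come from different clusters:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import distance_matrix

def cross_cluster_fraction(k, centers, rng, noise=0.1):
    """Fraction of optimal-transport matches that cross clusters."""
    c1 = rng.integers(len(centers), size=k)  # cluster labels, batch 1
    c2 = rng.integers(len(centers), size=k)  # cluster labels, batch 2
    p = centers[c1] + noise * rng.standard_normal((k, 2))
    q = centers[c2] + noise * rng.standard_normal((k, 2))
    _, sigma = linear_sum_assignment(distance_matrix(p, q) ** 2)
    return np.mean(c1 != c2[sigma])

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [10.0, 0.0]])  # two well-separated clusters
for k in (8, 32, 128):
    frac = np.mean([cross_cluster_fraction(k, centers, rng) for _ in range(200)])
    print(k, frac)  # the fraction should shrink roughly like k**-0.5
```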
## 4 Regularization Expansions

Two recent works analyze the efficacy of 1-mixup perturbatively (Zhang et al., 2020; Carratino et al., 2020). Both consider quadratic Taylor series expansions about the training set or a simple transformation of it, and they characterize the regularization terms that arise in terms of label and Lipschitz smoothing. We adapt these expansions to k-mixup and show that the resulting regularization is more locally informed via the optimal transport coupling. In both works, perturbations are sampled from a globally informed distribution, based upon all other samples in the data distribution. In k-mixup, these distributions are defined by the optimal transport couplings.

Given a training point $x_i$, we consider all k-samplings γ that might contain it, and all possible k-samplings ζ that it may couple to. A locally-informed distribution is the following:

$$\mathcal{D}_{i}:=\frac{1}{\binom{N-1}{k-1}\binom{N}{k}}\sum_{\gamma=1}^{\binom{N-1}{k-1}}\sum_{\zeta=1}^{\binom{N}{k}}\delta_{\sigma_{\gamma\zeta}(x_{i})},\qquad(1)$$

where $\sigma_{\gamma\zeta}$ denotes the optimal coupling between k-samplings γ and ζ. This distribution will be more heavily weighted on points that $x_i$ is often matched with. An illustration of this distribution for a randomly selected point in our synthetic examples is visible in Figure 3. We use "locally-informed" in the sense of upweighting points closer to the point of interest that are likely to be matched to it by k-mixup; a Monte Carlo sketch for sampling from $\mathcal{D}_i$ is given at the end of this section.

Zhang et al. (2020) expand about the features in the training dataset $D_X := \{x_1, \dots, x_n\}$, and the perturbations in the regularization terms are sampled from $\mathcal{D}$. We generalize their characterization to k-mixup, with $\mathcal{D}$ replaced by $\mathcal{D}_i$. Focusing on the binary classification problem for simplicity, we assume a loss of the form $\ell(f(x), y) = h(f(x)) - y \cdot f(x)$ for some twice-differentiable h and f. This broad class of losses includes the cross-entropy for neural networks and all losses arising from generalized linear models.

Theorem 4.1. *Assuming a loss ℓ as above, the k-mixup loss can be written as:*

$$\mathcal{E}_{k}^{mix}(f)=\mathcal{E}^{std}+\sum_{j=1}^{3}\mathcal{R}_{j}+\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}[(1-\lambda)^{2}\phi(1-\lambda)],$$

*where $\lim_{a\to 0}\phi(a) = 0$, $\mathcal{E}^{std}$ denotes the standard ERM loss, and the three $\mathcal{R}_j$ regularization terms are:*

$$\mathcal{R}_{1}=\frac{\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}[1-\lambda]}{n}\times\sum_{i=1}^{N}(h'(f(x_{i}))-y_{i})\nabla f(x_{i})^{T}\,\mathbb{E}_{r\sim\mathcal{D}_{i}}[r-x_{i}]$$

$$\mathcal{R}_{2}=\frac{\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}[(1-\lambda)^{2}]}{2n}\times\sum_{i=1}^{N}h''(f(x_{i}))\nabla f(x_{i})^{T}\,\mathbb{E}_{r\sim\mathcal{D}_{i}}[(r-x_{i})(r-x_{i})^{T}]\,\nabla f(x_{i})$$

$$\mathcal{R}_{3}=\frac{\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}[(1-\lambda)^{2}]}{2n}\times\sum_{i=1}^{N}(h'(f(x_{i}))-y_{i})\,\mathbb{E}_{r\sim\mathcal{D}_{i}}[(r-x_{i})\nabla^{2}f(x_{i})(r-x_{i})^{T}].$$

A proof is given in Section H of the supplement and follows from some algebraic rearrangement and a Taylor expansion in terms of 1 − λ. The higher-order terms are captured by $\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}[(1-\lambda)^{2}\phi(1-\lambda)]$. $\mathcal{E}^{std}$ represents the constant term in this expansion, while the regularization terms $\mathcal{R}_j$ represent the linear and quadratic terms. These effectively regularize $\nabla f(x_i)$ and $\nabla^2 f(x_i)$ with respect to local perturbations $r - x_i$ sampled from $\mathcal{D}_i$, ensuring that our regularizations are locally-informed. In other words, the regularization terms vary over the support of the dataset, at each point penalizing the characteristics of the locally-informed distribution rather than those of the global distribution. This allows the regularization to adapt better to local data (e.g., manifold) structure. For example, $\mathcal{R}_2$ and $\mathcal{R}_3$ penalize having large gradients and Hessians, respectively, along the directions of significant variance of the distribution $\mathcal{D}_i$ of points $x_i$ is likely to be matched to. When k = 1, this $\mathcal{D}_i$ will not be locally informed, and will instead effectively be a global variance measure. As k increases, the $\mathcal{D}_i$ will instead be dominated by matches to nearby clusters, better capturing the smoothing needs in the immediate vicinity of $x_i$.

Notably, the expansion is in the feature space alone, yielding theoretical results in the case of 1-mixup on generalization and adversarial robustness. An alternative approach by Carratino et al. (2020) characterizes mixup as a combination of a reversion to mean followed by random perturbation. In supplement Section I, we generalize their result to k-mixup via a locally-informed mean and covariance.
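As referenced above, because Eq. (1) sums over combinatorially many k-samplings, the moments $\mathbb{E}_{r\sim\mathcal{D}_i}[r - x_i]$ appearing in Theorem 4.1 are most easily estimated by Monte Carlo. A minimal sketch, with names and interface of our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import distance_matrix

def sample_D_i(X, i, k, n_trials, rng):
    """Monte Carlo draws from D_i: the points x_i is matched to by k-mixup."""
    matches = np.empty((n_trials, X.shape[1]))
    others = np.delete(np.arange(len(X)), i)
    for t in range(n_trials):
        # A k-batch containing x_i, and an independent k-batch to match against.
        gamma = np.concatenate(([i], rng.choice(others, size=k - 1, replace=False)))
        zeta = rng.choice(len(X), size=k, replace=False)
        cost = distance_matrix(X[gamma], X[zeta]) ** 2
        _, sigma = linear_sum_assignment(cost)
        matches[t] = X[zeta[sigma[0]]]  # partner of x_i (index 0 in gamma)
    return matches

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))    # toy dataset
M = sample_D_i(X, i=0, k=32, n_trials=200, rng=rng)
print(M.mean(axis=0) - X[0])         # estimates E_{r ~ D_i}[r - x_i]
```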
## 5 Implementation And Experiments

## 5.1 k-Mixup Implementation & Computational Cost

Figure 5 shows pseudocode for our implementation of one epoch of k-mixup training, in the style of Figure 1 of Zhang et al. (2018).⁴ Also provided in Algorithm 1 is the procedure used in the experiments to adjust α as k increases (denoted $\alpha_k$). While ξ cannot be evaluated in closed form as a function of α, since ξ monotonically increases with α it is sufficient to iteratively increase $\alpha_k$ until the desired ξ is reached. The while loop involves only computing the empirical expectation of a simple function of a scalar beta-distributed random variable and is therefore fast to compute; a Python sketch of this loop is given at the end of this subsection.

⁴Python code for applying k-mixup to CIFAR10 can be found at https://github.com/AnmingGu/kmixup-cifar10.

```python
import numpy
import scipy.optimize
import scipy.spatial
from torch.autograd import Variable

# Assumes loader1, loader2, k, alpha, net, loss, and optimizer are defined.
# y1, y2 should be one-hot vectors.
for (x1, y1), (x2, y2) in zip(loader1, loader2):
    idx = numpy.zeros(x1.shape[0], dtype=int)
    for i in range(x1.shape[0] // k):
        # Pairwise distance cost between the two k-batches.
        cost = scipy.spatial.distance_matrix(
            x1[i * k:(i + 1) * k], x2[i * k:(i + 1) * k])
        # Optimal matching via the Hungarian algorithm.
        _, ix = scipy.optimize.linear_sum_assignment(cost)
        idx[i * k:(i + 1) * k] = ix + i * k
    x2 = x2[idx]  # reorder the second batch by the optimal matching
    y2 = y2[idx]
    lam = numpy.random.beta(alpha, alpha)
    x = Variable(lam * x1 + (1 - lam) * x2)
    y = Variable(lam * y1 + (1 - lam) * y2)
    optimizer.zero_grad()
    loss(net(x), y).backward()
    optimizer.step()
```

Figure 5: k-mixup implementation.

Algorithm 1: Choosing $\alpha_k$ to maintain constant ξ
Require: chosen $\alpha_1$, desired k, trials N, constant c > 1, threshold γ, dataset of interest.
1: Using N trials, compute $\bar{\xi}_1$ as the empirical average of the squared distance between two random points in the dataset.
2: Using N/k trials, compute $\bar{\xi}_k$ as the empirical average of the squared Wasserstein-2 distance between two random k-samples drawn from the dataset.
3: Using N trials, compute the empirical average $\bar{\lambda}_1$ of $\min(\lambda_1^2, (1-\lambda_1)^2)$, where $\lambda_1 \sim \beta(\alpha_1, \alpha_1)$.
4: $\xi \leftarrow \bar{\xi}_1 \bar{\lambda}_1$.
5: Initialize $\alpha_k = \alpha_1$, $\bar{\lambda}_k = \bar{\lambda}_1$.
6: while $\bar{\lambda}_k \bar{\xi}_k < \xi$ and $\alpha_k < \gamma$ do
7:   $\alpha_k \leftarrow \alpha_k c$
8:   Using N trials, compute the empirical average $\bar{\lambda}_k$ of $\min(\lambda_k^2, (1-\lambda_k)^2)$, where $\lambda_k \sim \beta(\alpha_k, \alpha_k)$.
9: end while
10: Output $\alpha_k$.

**Computational cost.** While the cost of the Hungarian algorithm is O(k³), it provides k data points for regularization, yielding an amortized O(k²) complexity per data point. For the smaller values of k that we empirically consider, approximate Sinkhorn-based methods (Cuturi, 2013) are slower in practice, and the Hungarian cost remains small relative to that of gradient computation (e.g., for CIFAR-10 and k = 32, the Hungarian algorithm costs 0.69 seconds per epoch in total). Computing the distance matrix input to the OT matching costs O(k²d), where d is the dimension, yielding an amortized O(kd) complexity per data point. With the high dimensionality of CIFAR, a naive GPU implementation of this step of the computation adds about 0.5 seconds per epoch. Note that, on our hardware, the overall cost of an epoch is greater than 30 seconds. Moreover, training convergence speed is unaffected in terms of epoch count, unlike in manifold mixup (Verma et al., 2018), which in our experience converges slower and has larger computational overhead. Overall, therefore, there is little computational downside to generalizing 1-mixup to k-mixup, and, as we will see, the potential for performance gains.
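As referenced above, here is a minimal Python sketch of Algorithm 1's while loop, assuming the empirical averages $\bar{\xi}_1$ and $\bar{\xi}_k$ (steps 1–2) have already been estimated; the function names are our own:

```python
import numpy as np

def lam_bar(a, n_trials, rng):
    """Empirical average of min(lambda, 1 - lambda)^2 for lambda ~ beta(a, a)."""
    lam = rng.beta(a, a, size=n_trials)
    return np.mean(np.minimum(lam, 1.0 - lam) ** 2)

def choose_alpha_k(alpha1, xi_bar_1, xi_bar_k, c=1.1, gamma=100.0,
                   n_trials=100_000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    xi_sq = xi_bar_1 * lam_bar(alpha1, n_trials, rng)  # target xi^2 at k = 1
    alpha_k = alpha1
    # Grow alpha_k until the mean squared perturbation matches the k = 1 target.
    while lam_bar(alpha_k, n_trials, rng) * xi_bar_k < xi_sq and alpha_k < gamma:
        alpha_k *= c
    return alpha_k
```

Given dataset-specific estimates of $\bar{\xi}_1$ and $\bar{\xi}_k$, a call such as `choose_alpha_k(1.0, xi_bar_1, xi_bar_k)` returns an $\alpha_k$ matching the k = 1 perturbation scale.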
## 5.2 Empirical Results

The simplicity of our method allows us to test the efficacy of k-mixup over a wide range of datasets and domains: 5 standard benchmark image datasets, 5 UCI tabular datasets, and a speech dataset, employing a variety of fully connected and convolutional neural network architectures. Across these experiments we find that k-mixup for k > 1 consistently improves upon 1-mixup after hyperparameter optimization. The gains are generally on par with the improvements of 1-mixup over no mixup (ERM). Interestingly, when the mean perturbation distances (squared) are constrained to be at a fixed level ξ², k-mixup for k > 1 still usually improves upon 1-mixup, with the improvement especially stark for larger perturbation sizes. In the tables below, ξ is used to denote the root mean square of the perturbation distances, and α denotes the α used in the k = 1 case to achieve this ξ. We show here sweeps over k and ξ; choosing k and ξ in practice is discussed in Supplement Section A.

Unless otherwise stated, our training is done over 200 epochs via a standard SGD optimizer, with learning rate 0.1 decreased at epochs 100 and 150, momentum 0.9, and weight decay 10⁻⁴. Note that throughout, we focus on comparing ERM, 1-mixup, and k-mixup on different architectures, rather than attempting to achieve a new state of the art on these benchmarks.

**Image datasets.** Our most extensive testing was done on image datasets, given their availability and the efficacy of neural networks in this application domain. In Table 1, we show our summarized error rates across various benchmark datasets and network architectures (all results are averaged over 20 random trials).⁵ For each combination of dataset and architecture, we report the improvement of k-mixup over 1-mixup, allowing for optimization over relevant parameters. In this table, hyperparameter optimization refers to optimizing over a discrete set of the designated hyperparameter values listed in the detailed hyperparameter result tables shown below for each dataset and setting. As can be seen, the improvement is on par with that of 1-mixup over ERM. We also show the performance of k = 16 (a heuristic choice for k), which performed well across all experiments, outperforming 1-mixup in nearly all instances. This k = 16-mixup performance is notable as it only requires optimizing a single hyperparameter α / ξ, the same hyperparameter used for 1-mixup.

Table 1: Summary of image dataset error rate results (in percent, all results averaged over 20 trials). Note that for each setting, k-mixup with hyperparameter optimization outperforms both ERM and optimized 1-mixup. We also show 16-mixup as a heuristic that only involves optimizing α / ξ, i.e. the hyperparameter space is the same as 1-mixup. This heuristic outperforms ERM and 1-mixup in all but one instance, where it matches the performance of 1-mixup. Note that, on average, the improvement of k-mixup over 1-mixup tends to be similar in magnitude to the improvement of 1-mixup over ERM.
| Dataset (Confidence) | Architecture | ERM | 1-mixup (ξ opt.) | k-mixup (k, ξ opt.) | 16-mixup (heuristic, ξ opt.) |
|----------------------|-----------------|-------|------------------|---------------------|------------------------------|
| MNIST (±.02)         | LeNet           | 0.95  | 0.76             | 0.66                | 0.74                         |
| CIFAR-10 (±.03)      | ResNet18        | 5.6   | 4.18             | 4.02                | 4.02                         |
| CIFAR-10 (±.03)      | DenseNet-BC-190 | 3.70  | 3.29             | 2.85                | 2.85                         |
| CIFAR-10 (±.09)      | WideResNet-101  | 11.6  | 11.53            | 11.25               | 11.25                        |
| CIFAR-100 (±.05)     | DenseNet-BC-190 | 18.91 | 18.91⁶           | 18.31               | 18.31                        |
| SVHN (±.02)          | ResNet18        | 3.37  | 2.93             | 2.78                | 2.93                         |
| Tiny ImageNet (±.06) | ResNet18        | 38.50 | 37.58            | 35.67               | 35.67                        |

Full parameter sweeps for all settings shown in Table 1 are in Appendix C, with two presented below that are indicative of the general pattern that we see as we vary α / ξ and k. In Table 2, we show results for MNIST (LeCun & Cortes, 2010) with a slightly modified LeNet architecture to accommodate grayscale images, and in Table 3 for Tiny ImageNet with a PreAct ResNet-18 architecture as in Zhang et al. (2018). Each table entry is averaged over 20 Monte Carlo trials. Here and throughout the remainder of the paper, error bars are reported for the performance results.⁷

Table 2: Results for MNIST with a LeNet architecture (no mixup (ERM) error: 0.95%), averaged over 20 trials (±.02 confidence on test error). Note the best k-mixup beats the best 1-mixup by 0.1%, on the same order as the 0.19% improvement of 1-mixup over ERM.

| k  | α=.05, ξ=0.5 | α=.1, ξ=0.6 | α=.2, ξ=0.8 | α=.5, ξ=1.2 | α=1, ξ=1.4 | α=10, ξ=2.1 | α=100, ξ=2.2 |
|----|--------------|-------------|-------------|-------------|------------|-------------|--------------|
| 1  | 0.79         | 0.79        | 0.76        | 0.86        | 0.83       | 1.05        | 1.26         |
| 2  | 0.85         | 0.79        | 0.90        | 0.76        | 0.83       | 0.94        | 1.14         |
| 4  | 0.91         | 0.80        | 0.83        | 0.81        | 0.85       | 0.89        | 0.95         |
| 8  | 0.80         | 0.81        | 0.78        | 0.79        | 0.78       | 0.81        | 0.83         |
| 16 | 0.77         | 0.82        | 0.80        | 0.75        | 0.80       | 0.74        | 0.75         |
| 32 | 0.79         | 0.75        | 0.77        | 0.79        | 0.80       | 0.76        | 0.71         |
| 64 | 0.80         | 0.82        | 0.77        | 0.78        | 0.78       | **0.66**    | 0.71         |

⁵As a result, Table 1 summarizes results from training 3680 neural network models in total.
⁶Achieved with α = 0, i.e. equivalent to ERM.
⁷These error bars are the one standard deviation of the Monte Carlo average, where for brevity we report only the worst such variation over the elements of the corresponding results table.

![9_image_0.png](9_image_0.png)

Figure 6: Training convergence of k = 1 and k = 32-mixup on CIFAR-10, averaged over 20 random trials. Both train at roughly the same rate (k = 32 slightly faster); the train accuracy discrepancy is due to the more class-accurate vicinal training distribution created by higher-k mixup.

In the MNIST results, for each ξ the best generalization performance is for some k > 1, i.e., k-mixup outperforms 1-mixup for all perturbation sizes. The lowest error rate overall is achieved with k = 64 and ξ = 2.1: 0.66%, an improvement of 0.1% over the best 1-mixup and an improvement of 0.29% over ERM. For Tiny ImageNet, we again see that for each α / ξ, the best generalization performance is for some k > 1 (note that the ξ are larger for Tiny ImageNet than MNIST due to differences in normalization and size of the images). The lowest error rate overall is achieved with k = 16 and ξ = 32: 35.67%, an improvement of 1.91% over the best 1-mixup and an improvement of 2.83% over ERM. In both datasets, we see that for each perturbation size value ξ, the best generalization performance is for some k > 1, but that the positive effect of increasing k is seen most clearly for larger ξ. This strongly supports the notion that k-mixup is significantly more effective than 1-mixup at designing an effective vicinal distribution for larger perturbation budgets. The lowest overall error rates are achieved for relatively high ξ and high k.
Table 3: Results for Tiny ImageNet with a ResNet18 architecture (no mixup (ERM) error: 38.50%), averaged over 20 trials (±.06 confidence on test error). Note the best k-mixup beats the best 1-mixup by 1.91%, better than the 0.92% improvement of 1-mixup over ERM.

| k | α=.1, ξ=9.7 | α=.2, ξ=13 | α=.5, ξ=18 | α=1, ξ=22 | α=10, ξ=32 |
|---|---|---|---|---|---|
| 1 | 38.47 | 38.26 | 37.79 | 37.61 | 37.58 |
| 2 | 38.48 | 38.06 | 37.89 | 37.43 | 37.00 |
| 4 | 38.51 | 38.13 | 37.67 | 37.44 | 36.21 |
| 8 | 38.42 | 38.05 | 37.67 | 37.19 | 35.86 |
| 16 | 38.47 | 38.10 | 37.60 | 37.28 | **35.67** |
| 32 | 38.45 | 38.04 | 37.60 | 37.29 | 35.88 |

Additionally, we show training curves (performance as a function of epoch) demonstrating that the use of k-mixup does not affect training convergence. These are shown for ResNet-18 on CIFAR-10 in Figure 6, which is indicative of what is seen across all our experiments. The training speeds (test accuracy as a function of epoch) for 1-mixup and 32-mixup show no loss of convergence speed with k = 32-mixup, with (if anything) k = 32 showing a slight edge. The discontinuity at epoch 100 is due to our reduction of the learning rate at epochs 100 and 150 to aid convergence (used throughout our image experiments). The train accuracy shows a similar convergence profile between 1- and 32-mixup; the difference in absolute accuracy here (and the reason it is less than the test accuracy) is because the training distribution is the mixup-modified vicinal distribution. The curve for k = 32 is higher, especially for α = 10, because the induced vicinal distribution and labels are more consistent with the true distribution, due to the better matches from optimal transport. The large improvement in train accuracy is remarkable given the high dimension of the CIFAR-10 data space, since it indicates that k = 32-mixup is able to find significantly more consistent matches than k = 1-mixup.

UCI datasets. To demonstrate efficacy on non-image data, we tried k-mixup on UCI datasets (Dua & Graff, 2017) of varying size and dimension: Iris (150 instances, dim. 4), Breast Cancer Wisconsin-Diagnostic (569 instances, dim. 30), Abalone (4177 instances, dim. 8), Arrhythmia (452 instances, dim. 279), and Phishing (11055 instances, dim. 30). For Iris, we used a 3-layer network with 120 and 84 hidden units; for Breast Cancer, Abalone, and Phishing, we used a 4-layer network with 120, 120, and 84 hidden units; and lastly, for Arrhythmia, we used a 5-layer network with 120, 120, 36, and 84 hidden units. For these datasets we used a learning rate of 0.005 instead of 0.1. Each entry is averaged over 20 Monte Carlo trials. Test error rates are shown in Table 4. k-mixup improves over 1-mixup in each case, although for Arrhythmia, no-mixup (ERM) outperforms both. In these small datasets, the best performance seems to be achieved with relatively small α (0.05, 0.1) and larger k (4 or greater).
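As a concrete reading of the architecture descriptions above, here is a minimal, hypothetical PyTorch sketch of the fully connected networks used for the UCI experiments. The hidden-layer widths are taken from the text; everything else (activation choice, class counts, helper names) is our own assumption.

```python
import torch.nn as nn

def uci_mlp(in_dim, num_classes, hidden):
    """Fully connected net with ReLU activations; `hidden` lists the hidden widths."""
    sizes = [in_dim] + list(hidden)
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(sizes[-1], num_classes))
    return nn.Sequential(*layers)

# Per the text: Iris uses hidden sizes (120, 84); Breast Cancer, Abalone, and
# Phishing use (120, 120, 84); Arrhythmia uses (120, 120, 36, 84).
# Class counts below are properties of the UCI datasets, not stated in the text.
iris_net = uci_mlp(in_dim=4, num_classes=3, hidden=(120, 84))
arrhythmia_net = uci_mlp(in_dim=279, num_classes=16, hidden=(120, 120, 36, 84))
```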
Table 4: Test error on UCI datasets using fully connected networks, averaged over 20 random trials.

| Dataset (ERM) | α | ξ | k = 1 | k = 2 | k = 4 | k = 8 | k = 16 |
|---|---|---|---|---|---|---|---|
| Abalone (72.32) | 0.05 | .07 | 71.37 | 71.32 | 71.23 | 71.54 | 71.47 |
| | 0.1 | .09 | 71.32 | 71.41 | 71.82 | 71.05 | 71.07 |
| | 1.0 | .20 | 71.56 | 71.85 | 71.26 | 71.29 | 70.90 |
| Arrhythmia (32.06) | 0.05 | 13.97 | 35.15 | 34.84 | 33.88 | 35.16 | 34.97 |
| | 0.1 | 17.83 | 34.14 | 36.13 | 34.29 | 35.64 | 33.39 |
| | 1.0 | 42.28 | 34.25 | 35.51 | 33.78 | 33.55 | 33.64 |
| Cancer (11.25) | 0.05 | .19 | 10.32 | 9.08 | 9.52 | 8.63 | 9.35 |
| | 0.1 | .26 | 10.42 | 9.75 | 9.92 | 9.14 | 10.25 |
| | 1.0 | .59 | 12.31 | 10.83 | 10.23 | 9.85 | 9.15 |
| Iris (4.10) | 0.05 | .07 | 6.70 | 4.20 | 3.30 | 2.50 | 3.70 |
| | 0.1 | .10 | 6.00 | 2.90 | 2.70 | 3.50 | 3.60 |
| | 1.0 | .22 | 7.00 | 3.90 | 2.90 | 3.80 | 2.90 |
| Phishing (3.43) | 0.05 | .28 | 3.30 | 3.14 | 3.05 | 3.17 | 3.06 |
| | 0.1 | .38 | 3.30 | 3.36 | 3.35 | 3.34 | 3.26 |
| | 1.0 | .85 | 4.69 | 4.55 | 4.27 | 4.18 | 4.20 |

Speech dataset. Performance is also tested on a speech dataset: Google Speech Commands (Warden, 2018). Results with a simple LeNet architecture are in Table 5. Each table entry is averaged over 20 Monte Carlo trials (±.014 confidence on test performance). We augmented the data in the same way as Zhang et al. (2018), i.e. we sample the spectrograms from the data using a sampling rate of 16 kHz and equalize their sizes at 160 × 101. We also use similar training parameters: we train for 30 epochs using a learning rate of 3 × 10⁻³ that is divided by 10 every 10 epochs. The improvement of the best k-mixup over the best 1-mixup is 1.11%, with the best k-mixup performance being for k = 16 and large α / ξ. Note this improvement is greater than the 0.83% improvement of 1-mixup over ERM.

Table 5: Google Speech Commands test error using LeNet architecture (no mixup (ERM) error: 12.26%), averaged over 20 Monte Carlo trials. Note the best k-mixup beats the best 1-mixup by 1.11%, greater than the 0.83% improvement of 1-mixup over ERM.

| k | α=.1, ξ=14 | α=.2, ξ=19 | α=.5, ξ=26 | α=1, ξ=31 | α=10, ξ=45 |
|---|---|---|---|---|---|
| 1 | 11.89 | 11.55 | 11.43 | 11.55 | 12.16 |
| 2 | 11.76 | 11.48 | 11.32 | 11.38 | 11.62 |
| 4 | 11.68 | 11.36 | 11.18 | 11.19 | 11.20 |
| 8 | 11.83 | 11.36 | 11.14 | 11.14 | 10.84 |
| 16 | 11.72 | 11.36 | 11.11 | 11.04 | **10.32** |

Toy datasets. Note that for completeness, in Appendix B we provide quantitative results for the toy datasets of Figures 1, 2, and 3, confirming the qualitative analysis therein.

Distribution shift. As mentioned above, a key benefit of mixup is that it (approximately) smoothly linearly interpolates between classes and thereby should provide a degree of robustness to distribution shifts that involve small perturbations of the features (Zhang et al., 2018). Here we test two such forms of distribution shift: additive Gaussian noise and FGSM white-box adversarial attacks (a full exploration of the infinite set of possible distribution shifts is beyond the scope of this paper).
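Both shift evaluations are straightforward to reproduce. The following is a minimal sketch of our own (not the authors' code): Gaussian noise is added directly to test inputs, and the FGSM attack uses the torchattacks package of Kim (2020), with ϵ following the paper's ϵ/255 convention.

```python
import torch
import torchattacks

@torch.no_grad()
def error_under_gaussian_noise(model, loader, eps, device="cuda"):
    """Test error when i.i.d. N(0, (eps/255)^2) noise is added to each input."""
    model.eval()
    wrong, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_noisy = x + (eps / 255.0) * torch.randn_like(x)
        wrong += (model(x_noisy).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return 100.0 * wrong / total

def error_under_fgsm(model, loader, eps, device="cuda"):
    """Test error under a white-box FGSM attack with max perturbation eps/255.

    Not wrapped in no_grad: the attack needs input gradients internally.
    """
    model.eval()
    attack = torchattacks.FGSM(model, eps=eps / 255.0)
    wrong, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x, y)
        wrong += (model(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return 100.0 * wrong / total
```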
Additive Gaussian noise can arise from unexpected noise in the imaging sensor, and testing adversarial FGSM attacks on mixup was introduced in the original mixup paper (Zhang et al., 2018) as a much more difficult perturbation test than i.i.d. additive noise. As in Zhang et al. (2018), we limit the experiment to basic FGSM attacks, since iterative PGD attacks are too strong, making any performance improvements seem less relevant in practice and calling into question the realism of using PGD as a proxy for distribution shifts.

Additive Gaussian noise results for CIFAR-10 and various levels ϵ of noise and mixup parameter α are shown in Table 6.⁸ Note that k = 16 outperforms k = 1 for all noise levels by as much as 4.29%, and ERM by as much as 16%.

Table 6: Additive noise: Error for CIFAR-10 ResNet-18 with additive Gaussian noise of standard deviation ϵ/255 (results averaged over 30 trials, ±0.6 confidence). k = 16-mixup outperforms 1-mixup and ERM in each instance (best performance for each ϵ in bold).

| Noise | ERM | α=2, k=1 | α=2, k=4 | α=2, k=16 | α=5, k=1 | α=5, k=4 | α=5, k=16 | α=10, k=1 | α=10, k=4 | α=10, k=16 |
|---|---|---|---|---|---|---|---|---|---|---|
| ϵ = 8 | 15.72 | 11.10 | 11.65 | **11.00** | 11.39 | 11.19 | 11.28 | 11.88 | 11.28 | 11.53 |
| ϵ = 10 | 22.21 | 15.47 | 16.06 | 14.34 | 15.78 | 14.69 | **14.20** | 15.43 | 14.56 | 14.37 |
| ϵ = 12 | 29.55 | 21.69 | 21.75 | 18.75 | 21.19 | 19.06 | 17.71 | 20.56 | 18.45 | **17.69** |
| ϵ = 14 | 37.93 | 29.69 | 28.34 | 23.86 | 27.43 | 23.72 | 21.62 | 25.90 | 22.62 | **21.61** |

Following the experimental setup of Zhang et al. (2018), Table 7 shows results on white-box adversarial attacks generated by the FGSM method (implementation of Kim (2020)⁹) for various values of maximum adversarial perturbation. As in Zhang et al. (2018), the goal of this test is not to achieve results comparable to the much more computationally intensive methods of adversarial training; this would be impossible for a non-adversarial regularization approach. We show CIFAR-10 error on white-box FGSM adversarial data (10000 points), where the maximum adversarial perturbation is set to ϵ/255; performance is averaged over 30 Monte Carlo trials (±0.6 confidence). Note that the highest k outperforms k = 1 uniformly by as much as 6.12%, and all k > 1-mixups outperform or statistically match k = 1-mixup for all attack sizes. Similar results for MNIST are in Table 7(b), with the FGSM attacks being somewhat less effective. Here again, the highest k performs the best for all attack sizes. The improved robustness shown by k-mixup speaks to a key goal of mixup, that of smoothing the predictions in the parts of the data space where no/few labels are available. This smoothness should make adversarial attacks require greater magnitude to successfully "break" the model.

⁸Smaller values of α were tried, but this decreased performance for all k values.
⁹The software has an MIT License.
Table 7: Adversarial shifts: error on white-box FGSM attacks. Parameter α chosen to maximize k = 1 performance. Large k-mixup outperforms 1-mixup in each instance.

(a) CIFAR-10 (±0.6 confidence), α = 0.5:

| k | ϵ=.5 | ϵ=1 | ϵ=2 | ϵ=4 | ϵ=8 | ϵ=16 |
|---|---|---|---|---|---|---|
| 1 | 20.63 | 26.25 | 31.07 | 36.72 | 48.55 | 74.78 |
| 2 | 20.40 | 25.70 | 29.89 | 34.34 | 43.78 | 72.22 |
| 4 | 20.73 | 26.29 | 30.59 | 34.94 | 43.94 | 71.20 |
| 8 | 20.58 | 26.06 | 30.31 | 34.50 | 43.51 | 70.18 |
| 16 | 20.21 | 25.50 | 29.57 | 33.80 | 43.32 | 68.66 |

(b) MNIST (±0.1 confidence), α = 10:

| k | ϵ=.5 | ϵ=1 | ϵ=2 | ϵ=4 | ϵ=8 | ϵ=16 |
|---|---|---|---|---|---|---|
| 1 | 1.23 | 1.66 | 2.37 | 3.88 | 7.70 | 18.10 |
| 2 | 1.12 | 1.50 | 2.11 | 3.58 | 7.44 | 19.81 |
| 4 | 1.07 | 1.37 | 1.94 | 3.25 | 7.30 | 22.02 |
| 8 | 1.08 | 1.40 | 1.88 | 3.12 | 6.92 | 21.00 |
| 16 | 0.88 | 1.11 | 1.50 | 2.42 | 5.28 | 16.44 |
| 32 | 0.85 | 1.07 | 1.39 | 2.27 | 4.91 | 14.95 |

Manifold mixup. We have also compared to manifold mixup (Verma et al., 2018), which aims to interpolate data points in deeper layers of the network to get more meaningful interpolations. This leads to the natural idea of doing k-mixup in these deeper layers. We use the settings in Verma et al. (2018), i.e., for ResNet18, the mixup layer is randomized (coin flip) between the input space and the output of the first residual block.¹⁰ Results for CIFAR-10 with a ResNet18 architecture are in Table 8. Note that in this experiment, the α's shown are used for all k without adjustment, since the matches across k-samples happen at random layers and cannot be standardized as simply. Numbers in this experiment are averaged over 20 Monte Carlo trials (±.03 confidence on test performance). Manifold 1-mixup is matched by manifold k-mixup with k = 4, but both are outperformed by standard k-mixup (Table 1). **We therefore do not find a benefit to using "k-manifold-mixup" over our proposed k-mixup.** We also tried randomizing over (a) the outputs of all residual blocks and (b) the outputs of (lower-dimensional) deep residual blocks only, but found that the performance of both manifold 1-mixup and manifold k-mixup degrades in these cases. This latter observation underscores that mixup in hidden layer manifolds is not guaranteed to be effective and can require tuning.

Table 8: Comparison to the manifold mixup approach. "k-manifold mixup" test error on CIFAR-10 using a ResNet18 architecture, averaged over 20 Monte Carlo trials (±.03 confidence). Compare with Table 1 and observe that this "manifold k-mixup" matches standard manifold mixup, but both underperform our proposed k-mixup, which achieves a 4.02% error rate with k = 16.

| k | α=.1 | α=.2 | α=.5 | α=1 | α=10 |
|---|---|---|---|---|---|
| 1 | 5.00 | 4.73 | 4.41 | 4.26 | 4.81 |
| 2 | 5.09 | 4.78 | 4.44 | 4.31 | 4.71 |
| 4 | 5.01 | 4.78 | 4.45 | **4.25** | 5.02 |
| 8 | 5.08 | 4.78 | 4.42 | 4.28 | 4.71 |
| 16 | 5.14 | 4.82 | 4.42 | 4.31 | 4.53 |
| 32 | 5.14 | 4.91 | 4.58 | 4.47 | 4.54 |

¹⁰See https://github.com/vikasverma1077/manifold_mixup

## 6 Conclusions And Future Work

The experiments above demonstrate that k-mixup improves the generalization and robustness gains achieved by 1-mixup. This is seen across a diverse range of datasets and network architectures. It is simple to implement, adds little computational overhead to conventional 1-mixup training, and may also be combined with related mixup variants.
As seen in the theory presented in Sections 3 and 4, as k increases, the induced regularization more accurately reflects the local structure of the training data, especially in the manifold support and clustered settings. Empirical results show that performance is relatively robust to variations in k, especially when normalizing for similar perturbation distances (squared), ensuring that extensive tuning is not necessary. With the notable exception of the larger improvement on Tiny ImageNet, our experiments show the improvement on high-dimensional datasets is sometimes smaller than on lower dimensional datasets (recall that classic mixup also has somewhat small gains over ERM in these settings). This difference may be influenced by the diminishing value of Euclidean distance for characterizing dataset geometry in high dimensions (Aggarwal et al., 2001), but intriguingly this effect was not remedied by doing OT in the lowerdimensional manifolds created by the higher layers in our manifold mixup experiments. In future work we will consider alternative metric learning strategies, with the goal of identifying alternative high-dimensional metrics for displacement interpolation of data points. ## Acknowledgements The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, and from a Google Research Scholar award. ## References Charu C Aggarwal, Alexander Hinneburg, and Daniel A Keim. On the surprising behavior of distance metrics in high dimensional space. In *International conference on database theory*, pp. 420–434. Springer, 2001. Raphael Baena, Lucas Drumetz, and Vincent Gripon. Preventing manifold intrusion with locality: Local mixup, 2022. Dimitris Bertsimas and John Tsitsiklis. *Introduction to Linear Optimization*. Athena Scientific, 1st edition, 1997. ISBN 1886529191. Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, and Jean-Philippe Vert. On Mixup Regularization. arXiv:2006.06049 [cs, stat], June 2020. URL http://arxiv.org/abs/2006.06049. arXiv: 2006.06049. John Chen, Samarth Sinha, and Anastasios Kyrillidis. Stackmix: A complementary mix algorithm, 2021. Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. *Advances in neural* information processing systems, 26:2292–2300, 2013. Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty. DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (eds.), *Computer Vision - ECCV 2018*, pp. 467–483, Cham, 2018. Springer International Publishing. ISBN 978-3-030-01225-0. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. Kilian Fatras, Younes Zine, Rémi Flamary, Remi Gribonval, and Nicolas Courty. Learning with minibatch Wasserstein : asymptotic and gradient properties. In *Proceedings of the Twenty Third International* Conference on Artificial Intelligence and Statistics, pp. 2131–2141. PMLR, June 2020. URL https: //proceedings.mlr.press/v108/fatras20a.html. ISSN: 2640-3498. Kilian Fatras, Thibault Sejourne, Rémi Flamary, and Nicolas Courty. 
Unbalanced minibatch Optimal Transport; applications to Domain Adaptation. In *Proceedings of the 38th International Conference on* Machine Learning, pp. 3186–3197. PMLR, July 2021. URL https://proceedings.mlr.press/v139/ fatras21a.html. ISSN: 2640-3498. Aude Genevay, Gabriel Peyre, and Marco Cuturi. Learning Generative Models with Sinkhorn Divergences. In *Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics*, pp. 1608–1617. PMLR, March 2018. URL https://proceedings.mlr.press/v84/genevay18a.html. ISSN: 2640-3498. Hongyu Guo, Yongyi Mao, and Richong Zhang. MixUp as Locally Linear Out-of-Manifold Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):3714–3722, July 2019. ISSN 2374-3468. doi: 10.1609/aaai.v33i01.33013714. URL https://ojs.aaai.org/index.php/AAAI/article/view/4256. Number: 01. Allen Hatcher. *Algebraic topology*. Cambridge Univ. Press, Cambridge, 2000. URL https://cds.cern.ch/ record/478079. S. K. Katti. Moments of the Absolute Difference and Absolute Deviation of Discrete Distributions. The Annals of Mathematical Statistics, 31(1):78–85, 1960. ISSN 0003-4851. URL https://www.jstor.org/ stable/2237495. Publisher: Institute of Mathematical Statistics. Hoki Kim. Torchattacks: A pytorch repository for adversarial attacks. *arXiv preprint arXiv:2010.01950*, 2020. Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup. In *International Conference on Machine Learning*, pp. 5275–5285. PMLR, November 2020. URL http://proceedings.mlr.press/v119/kim20b.html. ISSN: 2640-3498. JangHyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=gvxJzw8kW4b. Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun. com/exdb/mnist/. Zicheng Liu, Siyuan Li, Di Wu, Zhiyuan Chen, Lirong Wu, Jianzhu Guo, and Stan Z. Li. Unveiling the power of mixup for stronger classifiers, 2021. Gabriel Peyré and Marco Cuturi. Computational Optimal Transport: With Applications to Data Science. Foundations and Trends® *in Machine Learning*, 11(5-6):355–607, February 2019. ISSN 1935-8237, 19358245. doi: 10.1561/2200000073. URL https://www.nowpublishers.com/article/Details/MAL-073. Publisher: Now Publishers, Inc. Filippo Santambrogio. *Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and* Modeling. Progress in Nonlinear Differential Equations and Their Applications. Birkhäuser Basel, 2015. ISBN 978-3-319-20827-5. doi: 10.1007/978-3-319-20828-2. URL https://www.springer.com/gp/book/ 9783319208275. Jy Yong Sohn, Jaekyun Moon, Kangwook Lee, and Dimitris Papailiopoulos. Gan-mixup: Augmenting across data manifolds for improved robustness. *ICML Workshop on Uncertainity and Robustness in Deep Learning*, 2020. Justin Solomon, Kristjan Greenewald, and Haikady N Nagaraja. k-variance: A clustered notion of variance. arXiv preprint arXiv:2012.06958, 2020. Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, and Yoshua Bengio. Manifold Mixup: Learning Better Representations by Interpolating Hidden States. September 2018. URL https://openreview.net/forum?id=rJlRKjActQ. 
Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.

Jonathan Weed and Francis Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in wasserstein distance. *Bernoulli*, 25(4A):2620–2648, 2019.

Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features. pp. 6023–6032, 2019. URL https://openaccess.thecvf.com/content_ICCV_2019/html/Yun_CutMix_Regularization_Strategy_to_Train_Strong_Classifiers_With_Localizable_Features_ICCV_2019_paper.html.

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond Empirical Risk Minimization. February 2018. URL https://openreview.net/forum?id=r1Ddp1-Rb.

Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, and James Zou. How Does Mixup Help With Robustness and Generalization? September 2020. URL https://openreview.net/forum?id=8yKEo06dKNo.

## A Hyperparameter Tuning

While, as noted in the main text, the large value of k = 16 with α optimized consistently performs well, increasing k does not always improve performance monotonically. This, however, is to be expected in any real data scenario. Hence, in practice, it is often appropriate to search over k. This is not too difficult, as in our experiments we found it sufficient to only try powers of 2, and performance generally varies smoothly over α. Several search approaches work well:

1. **Very simple**: set α = 1 and search over k (powers of 2). It can be seen from our experiments that, except for one UCI dataset, this approach outperforms or matches the 1-mixup performance at any α.
2. **More complex but still relatively simple**: search over ξ for 1-mixup, fix the best such ξ, and then search over k (in powers of 2 up to k = 16 or 32). This approach always outperforms 1-mixup and does not add much hyperparameter search overhead.
3. **Full hyperparameter grid search**: Not too expensive for many neural networks; for instance, a full grid search (with k powers of 2) for ResNet-18 on CIFAR-10 requires only a few hours of time on our medium-sized cluster. If the model will be deployed in high-stakes or high-volume applications, full search would be feasible even for larger networks.

## B Test Error On Toy Datasets

For completeness, we provide quantitative results for the toy datasets of Figures 1, 2, and 3 (denoted "One Ring," "Four Bars," and "Swiss Roll") in Table 9, continuing the intuition-building discussion in each of those figures. We used a fully-connected 3-layer neural network (130 and 120 hidden units). As the datasets are well-clustered and have no noise, smoothing is not needed for generalization, and performance without mixup is typically 100%. Rather than beating ERM, applying mixup to these datasets instead aims to build intuition, providing a view of the propensity of each variant to oversmooth, damaging performance. For each dataset and each perturbation size ξ, higher k-mixup outperforms 1-mixup, with k = 16 providing the best performance in all but one instance. These results quantitatively confirm the intuition built in Figures 1, 2, and 3 that k-mixup regularization more effectively preserves these structures in data, limiting losses from oversmoothing at a fixed mean perturbation size.

Table 9: Test error on toy datasets, averaged over 5 Monte Carlo trials.

(a) One Ring

| k | α=.25, ξ=.10 | α=1, ξ=.15 | α=4, ξ=.20 | α=16, ξ=.23 | α=64, ξ=.25 |
|---|---|---|---|---|---|
| 1 | 5.01 | 7.88 | 9.85 | 10.28 | 9.75 |
| 2 | 4.28 | 5.19 | 5.87 | 6.19 | 5.79 |
| 4 | 2.93 | 3.34 | 4.10 | 3.86 | 3.92 |
| 8 | 2.50 | 2.77 | 3.24 | 3.15 | 2.78 |
| 16 | 2.42 | 2.26 | 2.54 | 2.43 | 2.30 |

(b) Four Bars

| k | α=.25, ξ=.50 | α=1, ξ=.78 | α=4, ξ=1.0 | α=16, ξ=1.1 | α=64, ξ=1.2 |
|---|---|---|---|---|---|
| 1 | 0.00 | 0.00 | 6.79 | 49.60 | 49.80 |
| 2 | 0.01 | 0.00 | 1.53 | 47.82 | 49.74 |
| 4 | 0.00 | 0.00 | 1.41 | 8.25 | 9.58 |
| 8 | 0.00 | 0.00 | 0.38 | 0.62 | 0.63 |
| 16 | 0.00 | 0.00 | 0.08 | 0.07 | 0.22 |

(c) Swiss Roll

| k | α=.25, ξ=1.2 | α=1, ξ=1.9 | α=4, ξ=2.4 | α=16, ξ=2.8 | α=64, ξ=3.0 |
|---|---|---|---|---|---|
| 1 | 0.83 | 9.52 | 31.84 | 37.38 | 36.18 |
| 2 | 0.49 | 3.30 | 28.41 | 32.21 | 32.75 |
| 4 | 0.32 | 1.34 | 15.49 | 27.53 | 29.23 |
| 8 | 0.35 | 0.45 | 2.67 | 8.14 | 2.76 |
| 16 | 0.40 | 0.33 | 0.45 | 0.27 | 0.27 |

## C Additional Image Dataset Parameter Sweeps

In this section, we show the parameter sweep tables for the remaining rows of Table 1. Table 10 below shows results for CIFAR-10, using the PreAct ResNet-18 architecture as in Zhang et al. (2018). Again, increasing k for fixed ξ tends to improve generalization performance, especially in the high-ξ regime. The best performance overall is achieved at k = 16 with ξ = 16.7. While the best k-mixup performance exceeds that of the best 1-mixup by 0.16%, recall that in this setting, 1-mixup outperforms ERM by only 1.4% (Zhang et al., 2018), so when combined with the low overall error rate, small gains are not surprising. Results for DenseNet and WideResNet architectures can be found in Table 11, with the best k-mixup outperforming the best 1-mixup by 0.44% and 0.28% respectively. Note that in the case of DenseNet, k = 16 outperforms or (statistically) matches k = 1 for all values of ξ.

Table 10: Results for CIFAR-10 with ResNet18 architecture (no mixup (ERM) error: 5.6%), averaged over 20 Monte Carlo trials (±.03 confidence on test performance). Difference between best k-mixup and best 1-mixup is 0.16%; for fixed high α (α = 100), the improvement increases to 1.19%.

| k | α=.05, ξ=5.6 | α=.1, ξ=7.4 | α=.2, ξ=10.0 | α=.5, ξ=13.8 | α=1, ξ=16.7 | α=10, ξ=24.0 | α=100, ξ=27.2 |
|---|---|---|---|---|---|---|---|
| 1 | 5.01 | 4.68 | 4.41 | 4.24 | 4.18 | 4.95 | 5.67 |
| 2 | 4.92 | 4.69 | 4.46 | 4.13 | 4.03 | 4.58 | 5.46 |
| 4 | 4.88 | 4.68 | 4.52 | 4.13 | 4.03 | 4.48 | 5.19 |
| 8 | 4.91 | 4.77 | 4.51 | 4.21 | 4.08 | 4.42 | 4.92 |
| 16 | 4.92 | 4.77 | 4.48 | 4.23 | **4.02** | 4.40 | 4.75 |
| 32 | 4.98 | 4.78 | 4.58 | 4.24 | 4.16 | 4.36 | 4.48 |

Table 11: CIFAR-10 test error for DenseNet-BC-190 and WideResNet-101 architectures. For DenseNet, the difference between best k-mixup and best 1-mixup is 0.44%; for fixed high ξ (ξ = 20.2), the improvement increases to 0.65%.
For WideResNet, the difference between best k-mixup and best 1-mixup is 0.28%; for fixed high ξ (ξ = 20.2), the improvement increases to 1.25%.

(a) DenseNet-BC-190 architecture (±.03 confidence, no mixup (ERM) error: 3.7%)

| k | α=.5, ξ=13.8 | α=1, ξ=16.7 | α=2, ξ=19.0 | α=4, ξ=20.2 |
|---|---|---|---|---|
| 1 | 3.35 | 3.29 | 3.42 | 3.57 |
| 16 | 3.38 | 2.98 | **2.85** | 2.92 |

(b) WideResNet-101 architecture (±.09 confidence, no mixup (ERM) error: 11.6%)

| k | α=.05, ξ=5.6 | α=.2, ξ=10.0 | α=.5, ξ=13.8 | α=1, ξ=16.7 | α=2, ξ=19.0 | α=4, ξ=20.2 |
|---|---|---|---|---|---|---|
| 1 | 11.54 | 11.53 | 11.62 | 11.78 | 12.25 | 12.99 |
| 4 | 11.36 | 11.47 | 11.27 | 11.41 | 11.59 | 12.16 |
| 16 | 11.53 | 11.59 | **11.25** | 11.34 | 11.38 | 11.74 |

Table 12 shows results for CIFAR-100 and SVHN, with DenseNet-BC-190 and ResNet-18 architectures respectively. As before, for fixed ξ, the best performance is achieved for some k > 1. The improvement of the best k-mixup over the best 1-mixup is 2.14% for CIFAR-100 and 0.15% for SVHN. For fixed high α, the k-mixup improvement over 1-mixup rises to 2.85% for CIFAR-100 and 0.52% for SVHN, possibly indicating that the OT matches yield better interpolation between classes, aiding generalization.

Table 12: CIFAR-100 (DenseNet-BC-190) and SVHN (ResNet18) test error, averaged over 20 trials.

(a) CIFAR-100 error (±.05 confidence, no mixup (ERM) error: 18.91%)

| k | α=.5, ξ=3.4 | α=1, ξ=4.2 | α=2, ξ=4.8 | α=4, ξ=5.4 |
|---|---|---|---|---|
| 1 | 28.52 | 27.76 | 20.45 | 21.53 |
| 8 | 18.35 | 18.78 | 19.16 | 19.71 |
| 16 | 18.33 | 18.31 | 18.53 | 18.85 |

(b) SVHN error (±.02 confidence, no mixup (ERM) error: 3.37%)

| k | α=.1, ξ=3.8 | α=.2, ξ=5.1 | α=.5, ξ=6.9 | α=1, ξ=9.5 | α=10, ξ=11.6 |
|---|---|---|---|---|---|
| 1 | 3.29 | 3.22 | 2.99 | 2.93 | 3.89 |
| 2 | 3.32 | 3.19 | 2.96 | 2.79 | 3.65 |
| 4 | 3.30 | 3.25 | 2.94 | 2.78 | 3.55 |
| 8 | 3.28 | 3.20 | 3.01 | 2.86 | 3.44 |
| 16 | 3.26 | 3.24 | 3.04 | 2.93 | 3.37 |

## D Proof Of Proposition 3.2

Finite-sample convergence results for empirical measures (Theorem 9.1 of Solomon et al. (2020), Weed & Bach (2019)) imply that for an arbitrary sampling of k points $\hat{\mu}_k$, we have

$$W_{2}^{2}(\mu,\hat{\mu}_{k})\leq O(k^{-2/d})$$

with 1 − 1/k² probability. The triangle inequality then implies that the Wasserstein-2 distance between our batches of k samples will tend to 0 at the same asymptotic rate, specifically

$$W_{2}(\hat{\mu}_{k}^{\gamma},\hat{\mu}_{k}^{\zeta})\leq W_{2}(\mu,\hat{\mu}_{k}^{\zeta})+W_{2}(\hat{\mu}_{k}^{\gamma},\mu)\leq O(k^{-1/d}),$$

again with 1 − 1/k² probability. Now, recalling the definition of the optimal coupling permutation σ(i), we have

$$W_{2}^{2}(\hat{\mu}_{k}^{\gamma},\hat{\mu}_{k}^{\zeta})=\frac{1}{k}\sum_{i=1}^{k}\|x_{i}^{\gamma}-x_{\sigma(i)}^{\zeta}\|_{2}^{2}.$$

Hence,

$$\frac{1}{k}\sum_{i=1}^{k}\|x_{i}^{\gamma}-x_{\sigma(i)}^{\zeta}\|_{2}^{2}\leq O(k^{-2/d})$$

with 1 − 1/k² probability. Thus, for any $\mathcal{I}\subseteq[1,k]$ with $\|x_i^\gamma-x_{\sigma(i)}^\zeta\|_2^2>k^{-1/d}$ for all $i\in\mathcal{I}$,

$$O(k^{-2/d})\geq\frac{1}{k}\sum_{i=1}^{k}\|x_{i}^{\gamma}-x_{\sigma(i)}^{\zeta}\|_{2}^{2}>\frac{|{\mathcal{I}}|}{k}k^{-1/d},$$

implying $|\mathcal{I}| < O(k^{1-1/d}) = k \cdot O(k^{-1/d}) \leq k\delta$, where the last inequality holds for any δ ∈ (0, 1] given k large enough.
In essence, the fraction of matches that are long-distance (i.e. those in $\mathcal{I}$) is bounded by δ for large enough k. Under these conditions, the set of short-distance matches (i.e. those in $\bar{\mathcal{I}}$, the complement of $\mathcal{I}$) satisfies

$$|\bar{\mathcal{I}}|\geq(1-\delta)k,$$

where by definition, $\|x_i^\gamma - x_{\sigma(i)}^\zeta\|_2^2 \leq k^{-1/d}$ for all $i \in \bar{\mathcal{I}}$. Crucially, for any chosen ϵ, we have $k^{-1/d} < \epsilon$ for k large enough, so for k large enough

$$\|x_{i}^{\gamma}-x_{\sigma(i)}^{\zeta}\|_{2}^{2}<\epsilon,\quad\forall i\in\bar{\mathcal{I}}.$$

By definition of k-mixup, for all $i \in \bar{\mathcal{I}}$, the corresponding mixup interpolated point will be an interpolation between $x_i^\gamma$ and $x_{\sigma(i)}^\zeta$, i.e. $\lambda x_i^\gamma + (1-\lambda)x_{\sigma(i)}^\zeta$ for λ ∈ [0, 1]. Since all $x_i^\gamma$ lie in S and for all $i \in \bar{\mathcal{I}}$, $\|x_i^\gamma - x_{\sigma(i)}^\zeta\|_2 < \epsilon$, the mixup interpolated point will lie in $B_\epsilon(S)$. The proposition results.

## E Proof Of Lemma 3.3

Firstly, observe that the maximum number of within-cluster matches is

$$\sum_{i}\operatorname*{min}(r_{i},s_{i})$$

by definition, and the total number of matches overall must equal $\sum_i s_i = \sum_i r_i$. Hence the number of cross-cluster matchings must be larger than or equal to

$$\sum_{i}r_{i}-\sum_{i}\operatorname*{min}(r_{i},s_{i})=\sum_{i}\operatorname*{max}(0,r_{i}-s_{i}),$$

equivalently,

$$\sum_{i}s_{i}-\sum_{i}\operatorname*{min}(r_{i},s_{i})=\sum_{i}\operatorname*{max}(s_{i}-r_{i},0).$$

Averaging these implies the number of cross-cluster matchings cannot be smaller than

$${\frac{1}{2}}\sum_{i}\left(\operatorname*{max}(r_{i}-s_{i},0)+\operatorname*{max}(0,s_{i}-r_{i})\right)={\frac{1}{2}}\sum_{i}|r_{i}-s_{i}|.$$

It remains to show that the number of cross-cluster matchings cannot exceed $\frac{1}{2}\sum_i |r_i - s_i|$. We argue by contradiction and prove the result for m = 2 first. Suppose that the number of cross-cluster matchings exceeds $|r_1 - s_1|$. Then by the pigeonhole principle (i.e. via the fact that all points must have exactly one other matched point), there must be at least two such matchings. WLOG, let us say these cross-cluster matches are between $p_i$ and $q_i$, and $p_{i+1}$ and $q_{i+1}$, where $p_i$ and $q_{i+1}$ must be in the same cluster, as are $q_i$ and $p_{i+1}$, since there are only two clusters. By our assumption on the spacing of the clusters, the cost of matching $p_i$ to $q_i$ and $p_{i+1}$ to $q_{i+1}$ is greater than

$$2(2\Delta)^{2}.$$

However, consider the alternative matching of $p_i$ to $q_{i+1}$ and $p_{i+1}$ to $q_i$. These are both intra-cluster matchings, which by our assumption on the radius of the clusters must have total cost smaller than or equal to

$$2(2\Delta)^{2}.$$

This matching has smaller cost than the inter-cluster matching strategy, so this contradicts optimality of the inter-cluster pairing.

In the scenario with m clusters, an analogous argument works. As above, $|r_i - s_i|$ is the number of cluster i elements that must be matched in a cross-cluster fashion. If there are more than the minimum $\frac{1}{2}\sum_i |r_i - s_i|$, by the pigeonhole principle, there must be additional cross-cluster matches that form a cycle in the graph over clusters. As above, the cost would be reduced by matching points within the same clusters, so this contradicts optimality. Since the number of cross-cluster matchings cannot be less than or exceed $\frac{1}{2}\sum_i |r_i - s_i|$, it must equal this number and the lemma results.
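The counting identity of Lemma 3.3 is easy to check numerically. The following self-contained sketch (our own illustration, with hypothetical helper names) samples two batches from well-separated tight clusters, solves the optimal matching with `scipy.optimize.linear_sum_assignment`, and confirms that the number of cross-cluster matches equals ½ Σᵢ |rᵢ − sᵢ|:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
k = 64
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # well-separated clusters

def sample_batch(k):
    """k points from an equal-weight mixture of tight clusters; returns points, labels."""
    labels = rng.integers(len(centers), size=k)
    points = centers[labels] + 0.1 * rng.standard_normal((k, 2))
    return points, labels

(x1, c1), (x2, c2) = sample_batch(k), sample_batch(k)
cost = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)  # squared Euclidean costs
_, sigma = linear_sum_assignment(cost)                   # optimal permutation

cross = int((c1 != c2[sigma]).sum())                     # cross-cluster matches
r = np.bincount(c1, minlength=len(centers))
s = np.bincount(c2, minlength=len(centers))
# Lemma 3.3: under the radius/separation assumptions, these agree exactly.
assert cross == abs(r - s).sum() // 2
print(cross, abs(r - s).sum() / 2)
```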
## F Proof Of Theorem 3.4

By Lemma 3.3, if $r_i$ and $s_i$ denote the number of points in cluster i from batch 1 and batch 2, the resulting number of cross-cluster matches is $\frac{1}{2}\sum_i |r_i - s_i|$. As the samples for our batches are i.i.d., these random variables ($r_i$, $s_i$, where $r_i$ is independent of $s_i$) each (marginally) follow a simple binomial distribution $B(k, p_i)$. We can bound the expectation of this quantity with Jensen's inequality:

$$\left(\mathbb{E}[|r_{i}-s_{i}|]\right)^{2}\leq\mathbb{E}[|r_{i}-s_{i}|^{2}]=2\,\mathrm{Var}(r_{i})=2k p_{i}(1-p_{i}).$$

This implies that

$${\frac{{\frac{1}{2}}\sum_{i}\mathbb{E}|r_{i}-s_{i}|}{k}}\leq{\frac{\sum_{i}{\sqrt{2k p_{i}(1-p_{i})}}}{2k}}=(2k)^{-{\frac{1}{2}}}\sum_{i=1}^{m}{\sqrt{p_{i}(1-p_{i})}},$$

yielding the bound in the theorem. It is also possible to get an exact rate with some hypergeometric identities (Katti, 1960), but these simply differ by a constant factor, so we omit the exact expressions here.

## G Proof Of Theorem 3.5

By the smooth boundary and positive density assumptions, we know that $P(A_\delta) > 0$ and $P(B_\delta) > 0$ for any δ > 0. Hence, for fixed δ and k large enough, we know that with high probability the sets $A_\delta$ and $B_\delta$ each contain more points than the number of cross-cluster identifications. Now consider $A_\epsilon$ and $B_\epsilon$ for $\epsilon = 2\delta + \max(R_A, R_B)^2/(2D^2)$. All cross-cluster matches need to be assigned. The cost of assigning a cross-cluster match to a point in $A_\delta$ and a point in $B_\delta$ is at most $(1 + 2\delta)^2 D^2$ (since we are using $W_2$). Furthermore, the cost of assigning a cross-cluster match that contains a point in A outside $A_\epsilon$ and an arbitrary point in B is at least $(1 + \epsilon)^2 D^2$. Consider the difference between these two costs:

$$(1+\epsilon)^{2}D^{2}-(1+2\delta)^{2}D^{2}=(2(\epsilon-2\delta)+\epsilon^{2}-4\delta^{2})D^{2}>2D^{2}\frac{\operatorname*{max}(R_{A},R_{B})^{2}}{2D^{2}}\geq R_{A}^{2}.$$

Since this difference is positive and we have shown $A_\delta$ contains sufficient points for handling all assignments, an assignment outside of $A_\epsilon$ will only occur if there is a within-cluster pair which benefits from using the available point in $A_\epsilon$ more than is lost by not giving it to the cross-cluster pair ($> R_A^2$). The maximum possible benefit gained by the within-cluster pair is the squared radius of A, i.e. $R_A^2$. Since we have shown that the lost cost for the cross-cluster pair is bigger than $R_A^2$, we have arrived at a contradiction. The proof is similar for the B side.

We have thus shown that for k large enough (depending on δ), with high probability all cross-cluster matches have an endpoint each in $A_\epsilon$ and $B_\epsilon$, where $\epsilon = 2\delta + \max(R_A, R_B)^2/(2D^2)$. Setting $\delta = \max(R_A, R_B)^2/(4D^2)$ completes the proof.
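Before moving on to the regularization results, the bound proved in Appendix F is also easy to sanity-check by simulation. A short sketch of our own (using the Lemma 3.3 identity, so no optimal transport solve is needed) estimates the expected cross-cluster fraction by Monte Carlo and compares it to $(2k)^{-1/2}\sum_i \sqrt{p_i(1-p_i)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])  # hypothetical cluster weights

def expected_cross_fraction(k, trials=2000):
    """Monte Carlo estimate of E[(1/2) sum_i |r_i - s_i|] / k for i.i.d. batches."""
    r = rng.multinomial(k, p, size=trials)  # cluster counts of batch 1
    s = rng.multinomial(k, p, size=trials)  # cluster counts of batch 2
    return np.abs(r - s).sum(axis=1).mean() / (2 * k)

for k in [4, 16, 64, 256]:
    bound = (2 * k) ** -0.5 * np.sqrt(p * (1 - p)).sum()
    print(f"k={k:4d}  estimate={expected_cross_fraction(k):.4f}  bound={bound:.4f}")
```

The estimates sit below the bound and shrink at the predicted $O(k^{-1/2})$ rate as k grows.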
## H Proof Of Theorem 4.1

We mostly follow the notation and argument of Zhang et al. (2020) (c.f. Lemma 3.1), modifying it for our setting. There they consider sampling λ ∼ Beta(α, β) from an asymmetric Dirichlet distribution. Here, we assume a symmetric Dirichlet distribution, such that α = β, simplifying most of the expressions. The analogous results hold in the asymmetric case with simple modifications. Consider the probability distribution $\tilde{\mathcal{D}}_\lambda$ with density β(α + 1, α). Note that this distribution is more heavily weighted towards 1 across all α, and for α < 1, there is an asymptote as you approach 1. Let us adopt the shorthand notation $\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda) := \lambda x_i^\gamma + (1-\lambda)x_{\sigma_{\gamma\zeta}(i)}^\zeta$ for an interpolated feature point. The manipulations below are abbreviated, as they do not differ much for our generalization.

$$\begin{aligned}
\mathcal{E}_k^{mix}(f) &= \frac{1}{k\binom{N}{k}^2}\,\mathbb{E}_{\lambda\sim\beta(\alpha,\alpha)} \sum_{\gamma,\zeta=1}^{\binom{N}{k}} \sum_{i=1}^{k} \Big[ h(f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))) - \big(\lambda y_i^\gamma + (1-\lambda)y_{\sigma_{\gamma\zeta}(i)}^\zeta\big) f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda)) \Big] \\
&= \frac{1}{k\binom{N}{k}^2}\,\mathbb{E}_{\lambda\sim\beta(\alpha,\alpha)}\mathbb{E}_{B\sim\mathrm{Bern}(\lambda)} \sum_{\gamma,\zeta=1}^{\binom{N}{k}} \sum_{i=1}^{k} \Big( B\big[h(f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))) - y_i^\gamma f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))\big] \\
&\qquad\qquad + (1-B)\big[h(f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))) - y_{\sigma_{\gamma\zeta}(i)}^\zeta f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))\big] \Big) \\
&= \frac{1}{k\binom{N}{k}^2} \sum_{\gamma,\zeta=1}^{\binom{N}{k}} \sum_{i=1}^{k} \mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)} \Big[ h(f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda))) - y_i^\gamma f(\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(\lambda)) \Big]
\end{aligned}$$

For the third equality above, the ordering of sampling for λ and B has been swapped via conjugacy: λ ∼ β(α, α), B|λ ∼ Bern(λ) is equivalent to B ∼ U{0, 1}, λ|B ∼ β(α + B, α + 1 − B). This is combined with the fact that $\tilde{x}_{i,\sigma_{\gamma\zeta}(i)}(1-\lambda) = \tilde{x}_{\sigma_{\gamma\zeta}(i),i}(\lambda)$ to get the last line above. Now we can swap the sums, grouping over the initial point to express this as the following:

$${\mathcal{E}}_{k}^{mix}(f)={\frac{1}{N}}\sum_{i=1}^{N}\mathbb{E}_{\lambda\sim\beta(\alpha+1,\alpha)}\mathbb{E}_{r\sim{\mathcal{D}}_{i}}\Big[h(f(\lambda x_{i}+(1-\lambda)r))-y_{i}f(\lambda x_{i}+(1-\lambda)r)\Big],$$

where the probability distribution $\mathcal{D}_i$ is as described in the text. The remainder of the argument performs a Taylor expansion of the loss term $h(f(\lambda x_i + (1-\lambda)r)) - y_i f(\lambda x_i + (1-\lambda)r)$ in terms of 1 − λ, and is not specific to our setting, so we refer the reader to Appendix A.1 of Zhang et al. (2020) for the argument.

## I k-Mixup As Mean Reversion Followed By Regularization

**Theorem I.1.** Define $(\tilde{x}_i^\gamma, \tilde{y}_i^\gamma)$ as

$$\tilde{x}_{i}^{\gamma}=\bar{x}_{i}^{\gamma}+\bar{\theta}(x_{i}^{\gamma}-\bar{x}_{i}^{\gamma}),\qquad\tilde{y}_{i}^{\gamma}=\bar{y}_{i}^{\gamma}+\bar{\theta}(y_{i}^{\gamma}-\bar{y}_{i}^{\gamma}),$$

where $\bar{x}_{i}^{\gamma}=\frac{1}{\binom{N}{k}}\sum_{\zeta=1}^{\binom{N}{k}}x_{\sigma_{\gamma\zeta}(i)}^{\zeta}$ and $\bar{y}_{i}^{\gamma}=\frac{1}{\binom{N}{k}}\sum_{\zeta=1}^{\binom{N}{k}}y_{\sigma_{\gamma\zeta}(i)}^{\zeta}$ are expectations under the matchings, and $\theta \sim \beta_{[1/2,1]}(\alpha, \alpha)$. Further, denote the zero mean perturbations

$$\tilde{\delta}_{i}^{\gamma}=(\theta-\bar{\theta})x_{i}^{\gamma}+(1-\theta)x_{\sigma_{\gamma\zeta}(i)}^{\zeta}-(1-\bar{\theta})\bar{x}_{i}^{\gamma},$$
$$\tilde{\epsilon}_{i}^{\gamma}=(\theta-\bar{\theta})y_{i}^{\gamma}+(1-\theta)y_{\sigma_{\gamma\zeta}(i)}^{\zeta}-(1-\bar{\theta})\bar{y}_{i}^{\gamma}.$$

Then the k-mixup loss can be written as

$$\mathcal{E}_{k}^{OTmixup}(f)=\frac{1}{\binom{N}{k}}\sum_{\gamma=1}^{\binom{N}{k}}\mathbb{E}_{\theta,\zeta}\left[\frac{1}{k}\sum_{i=1}^{k}\ell\big(\tilde{y}_{i}^{\gamma}+\tilde{\epsilon}_{i}^{\gamma},f(\tilde{x}_{i}^{\gamma}+\tilde{\delta}_{i}^{\gamma})\big)\right].$$

The mean $\bar{x}_i^\gamma$ being shifted toward is exactly the mean of the locally-informed distribution $\mathcal{D}_i$. Moreover, the covariance structure of the perturbations is detailed in the proof (simplified in Section I.1) and is now also derived from the local structure of the distribution, inferred from the optimal transport matchings.

*Proof.* This argument is modelled on a proof of Carratino et al. (2020), so we adopt analogous notation, highlight the differences in our setting, and refer the reader to Appendix B.1 of that paper for any omitted details.
First, let us use shorthand notation for the interpolated loss function:

$$m_{i}^{\gamma\zeta}(\lambda)=\ell\big(f(\lambda x_{i}^{\gamma}+(1-\lambda)x_{\sigma_{\gamma\zeta}(i)}^{\zeta}),\,\lambda y_{i}^{\gamma}+(1-\lambda)y_{\sigma_{\gamma\zeta}(i)}^{\zeta}\big).$$

Then the mixup objective may be written as:

$${\mathcal{E}}_{k}^{mix}(f)={\frac{1}{k{\binom{N}{k}}^{2}}}\sum_{\gamma,\zeta=1}^{{\binom{N}{k}}}\sum_{i=1}^{k}\mathbb{E}_{\lambda}\,m_{i}^{\gamma\zeta}(\lambda).$$

As λ ∼ β(α, α), we may leverage the symmetry of the sampling function and use a parameter θ ∼ β₍₁/₂,₁₎(α, α) to write the objective as:

$${\mathcal{E}}_{k}^{mix}(f)={\frac{1}{{\binom{N}{k}}}}\sum_{\gamma=1}^{{\binom{N}{k}}}\ell_{\gamma},\qquad{\mathrm{where~}}\ell_{\gamma}={\frac{1}{k{\binom{N}{k}}}}\sum_{\zeta=1}^{{\binom{N}{k}}}\sum_{i=1}^{k}\mathbb{E}_{\theta}\,m_{i}^{\gamma\zeta}(\theta).$$

To obtain the form of the theorem in the text, we introduce auxiliary variables $\tilde{x}_i^\gamma, \tilde{y}_i^\gamma$ to represent the mean-reverted training points:

$$\tilde{x}_{i}^{\gamma}=\mathbb{E}_{\theta,\zeta}\left[\theta x_{i}^{\gamma}+(1-\theta)x_{\sigma_{\gamma\zeta}(i)}^{\zeta}\right],\qquad\tilde{y}_{i}^{\gamma}=\mathbb{E}_{\theta,\zeta}\left[\theta y_{i}^{\gamma}+(1-\theta)y_{\sigma_{\gamma\zeta}(i)}^{\zeta}\right],$$

and $\tilde{\delta}_i^\gamma, \tilde{\epsilon}_i^\gamma$ to denote the zero mean perturbations about these points:

$$\tilde{\delta}_{i}^{\gamma}=\theta x_{i}^{\gamma}+(1-\theta)x_{\sigma_{\gamma\zeta}(i)}^{\zeta}-\mathbb{E}_{\theta,\zeta}\left[\theta x_{i}^{\gamma}+(1-\theta)x_{\sigma_{\gamma\zeta}(i)}^{\zeta}\right],$$
$$\tilde{\epsilon}_{i}^{\gamma}=\theta y_{i}^{\gamma}+(1-\theta)y_{\sigma_{\gamma\zeta}(i)}^{\zeta}-\mathbb{E}_{\theta,\zeta}\left[\theta y_{i}^{\gamma}+(1-\theta)y_{\sigma_{\gamma\zeta}(i)}^{\zeta}\right].$$

These reduce to the simplified expressions given in the theorem if we recall that θ and ζ are independent random variables. Note that both the mean-reverted points and the perturbations are informed by the local distribution $\mathcal{D}_i$.

## I.1 Covariance Structure

As in Carratino et al. (2020), it is possible to come up with some simple expressions for the covariance structure of the local perturbations, hence we write out the analogous result below. As the argument is very similar to that in Carratino et al. (2020), we omit it.

**Lemma I.2.** Let $\sigma^2$ denote the variance of $\beta_{[1/2,1]}(\alpha, \alpha)$, and $\nu^2 := \sigma^2 + (1-\bar\theta)^2$. Then the following expressions hold for the covariance of the zero mean perturbations:

$$\mathbb{E}_{\theta,\zeta}\left[\tilde{\delta}_{i}^{\gamma}(\tilde{\delta}_{i}^{\gamma})^{\top}\right]=\frac{\sigma^{2}(\tilde{x}_{i}^{\gamma}-\bar{x}_{i}^{\gamma})(\tilde{x}_{i}^{\gamma}-\bar{x}_{i}^{\gamma})^{\top}+\nu^{2}\Sigma_{\tilde{x}_{i}^{\gamma}\tilde{x}_{i}^{\gamma}}}{\bar{\theta}^{2}},$$
$$\mathbb{E}_{\theta,\zeta}\left[\tilde{\epsilon}_{i}^{\gamma}(\tilde{\epsilon}_{i}^{\gamma})^{\top}\right]=\frac{\sigma^{2}(\tilde{y}_{i}^{\gamma}-\bar{y}_{i}^{\gamma})(\tilde{y}_{i}^{\gamma}-\bar{y}_{i}^{\gamma})^{\top}+\nu^{2}\Sigma_{\tilde{y}_{i}^{\gamma}\tilde{y}_{i}^{\gamma}}}{\bar{\theta}^{2}},$$
$$\mathbb{E}_{\theta,\zeta}\left[\tilde{\delta}_{i}^{\gamma}(\tilde{\epsilon}_{i}^{\gamma})^{\top}\right]=\frac{\sigma^{2}(\tilde{x}_{i}^{\gamma}-\bar{x}_{i}^{\gamma})(\tilde{y}_{i}^{\gamma}-\bar{y}_{i}^{\gamma})^{\top}+\nu^{2}\Sigma_{\tilde{x}_{i}^{\gamma}\tilde{y}_{i}^{\gamma}}}{\bar{\theta}^{2}},$$

where $\Sigma_{\tilde{x}_i^\gamma\tilde{x}_i^\gamma}$, $\Sigma_{\tilde{y}_i^\gamma\tilde{y}_i^\gamma}$, $\Sigma_{\tilde{x}_i^\gamma\tilde{y}_i^\gamma}$ denote empirical covariance matrices.

Note again that the covariances above are locally-informed, rather than globally determined. Lastly, there is also a quadratic expansion performed about the mean-reverted points $\tilde{x}_i^\gamma, \tilde{y}_i^\gamma$ with terms that regularize f, but we omit this result as the regularization of Theorem 4.1 is more intuitive (c.f. Theorem 2 of Carratino et al. (2020)).
## J Vicinal Distribution Of k Nearest-Neighbors

We provide this section for explicit illustration of why k nearest-neighbors is not a sensible alternative for k-mixup. In this setting, we consider drawing two sets of k samples $\{(x_i^\gamma, y_i^\gamma)\}_{i=1}^k$ and $\{(x_i^\zeta, y_i^\zeta)\}_{i=1}^k$, match each $x_i^\gamma$ with its nearest neighbor in $\{x_i^\zeta\}_{i=1}^k$, and then interpolate these matches to obtain the vicinal distribution. In Figure 7, we provide example images of the generated matching distributions. Note that in general, these matching distributions have very limited cross-cluster matchings. The "Swiss Roll" dataset is an exception due to the interwoven arms of the spirals. Additionally, note that the intra-cluster matchings are much more concentrated around the points in question, and do not perform as much smoothing within clusters.

![21_image_0.png](21_image_0.png)

Figure 7: Example of matching distributions generated by k = 32-nearest-neighbors, using the same point as in Figure 3.
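For concreteness, the two matching rules compared in this section differ by a single line. A minimal sketch of both (our own illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nearest_neighbor(x1, x2):
    """Match each point in x1 to its nearest neighbor in x2 (many-to-one allowed)."""
    cost = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return cost.argmin(axis=1)  # greedy, purely local choice

def match_optimal_transport(x1, x2):
    """Match x1 to x2 under the W2-optimal one-to-one assignment."""
    cost = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return linear_sum_assignment(cost)[1]
```

Unlike the optimal transport assignment, the nearest-neighbor rule is not a permutation (several points may grab the same neighbor), which is why its matches are more concentrated around each point and perform less smoothing within clusters, as noted above.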
Review 1:

Summary: The major contribution of this paper is that it proposes a new mixup method based on optimal transport, which can lead to better generalization performance than standard Mixup. Detailed contributions are summarized as follows:
* The proposed method can give better empirical generalization performance.
* This paper develops a theoretical analysis to justify that the proposed methods can generate interpolated samples that are closer to the data manifold.
* This paper also theoretically shows that the proposed k-mixup method can lead to more informed regularizations.

Strengths and Weaknesses:
Strength:
* The proposed method is theoretically sound and achieves better empirical performance than baselines.

Weakness:
* The empirical performance improvement is not that significant, and it requires an additional hyperparameter $k$.
* The authors may also need to provide a run-time analysis, as it is not clear whether calculating the optimal match is time-consuming or not.
* The motivation that the data points generated by Mixup should lie in the data manifold needs more justification. For instance, the authors should also consider performing Mixup for data pairs with the same labels (as the data generated by this type of mixup could be closer to the data manifold than the data generated from two data points with different labels).

Requested Changes: See the weakness section.

Broader Impact Concerns: No concern

==================================================

Review 2:

Summary: This paper proposes an extension of the popular Mixup regularization technique. In particular, instead of simply taking convex combinations of random data points, this work proposes interpolating between k-batches of data, each representing a discrete distribution in itself, where interpolation is performed under the Wasserstein metric. Theoretical results indicate that the proposed k-Mixup technique better preserves the original structure of the data compared to standard Mixup. Numerical experiments on various datasets across different modalities suggest that k-Mixup can further improve generalization and robustness to some types of perturbations.

Strengths and Weaknesses:
Strengths:
- In my opinion, the paper is well-written and easy to follow in most parts. More involved technical concepts are illustrated nicely through figures that help convey the intuition of the paper.
- The theoretical contributions on preservation of global structure and locally-informed regularization look solid.
- Empirical results consistently demonstrate that generalization using k-Mixup is not worse than standard Mixup after tuning a hyperparameter.
- Based on the experiments, the generalization improvement over standard Mixup is significant in the presence of additive Gaussian perturbations with large variance.

Weaknesses:
- Experiments are on datasets that can be considered small/low-dimensional by today's standards. It would be interesting to see how the method performs on ImageNet with a larger ResNet model, for instance. As the improvements over standard Mixup are fairly small in some cases, it would be important to demonstrate that the computational overhead introduced by k-Mixup on larger datasets is worthwhile. Otherwise, the practical contribution of the paper is somewhat limited to small datasets.
- I believe the compute cost analysis should be better highlighted in the paper and more rigorously compared to standard Mixup, as this is the main trade-off when deciding between Mixup and k-Mixup.
Requested Changes: I think the following additions would strengthen the work's practical contribution significantly:
- Additional experiments on large-scale vision datasets (see details under Weaknesses above).
- More rigorous comparison of the compute requirements of Mixup/k-Mixup, potentially moving the analysis to the main paper from the Appendix.

Furthermore, I think it would improve the readability of the paper if the Algorithm summary/implementation from the appendix was moved to the main paper.

Broader Impact Concerns: No concerns.

==================================================

Review 3:

Summary: This paper proposes k-Mixup, a data augmentation technique mixing k (k is generally bigger than 1) instances based on optimal transport. The authors theoretically and empirically demonstrate that k-mixup is more likely to generate instances within the data manifold compared with 1-mixup. For data of several clusters, k-mixup generates a higher proportion of within-cluster instances instead of inter-cluster ones as k increases. Extensive experiments in various applications demonstrate the effectiveness of k-mixup and its advantages over 1-mixup. In addition, the authors show models trained by k-mixup demonstrate robustness against random noise or simple FGSM perturbations.

Strengths and Weaknesses:
Strength:
1. The method is well motivated and justified theoretically and empirically.
2. The experiments are comprehensive and in different domains / applications.

Weakness:
1. [Proposition 3.2] This proposition indicates that the instances generated by k-mixup will be in the manifold with high probability. However, is this enough to justify the advantages of k-mixup? It is a sort of necessary condition instead of a sufficient one; the sufficient direction would be: can the data in the manifold be generated by k-mixup? Since even for 1-mixup, the generated data would be highly likely in the manifold when $\alpha$ is very small.
2. For the experiments in Table 1, since every setting is run 20 times, it is better to report the performance variance.
3. In Figure 5, what is the definition of "train accuracy"? Since the generated data does not have a one-hot label.
4. Based on the motivation and the theoretical analysis, the algorithm should work better with a bigger $k$, but this is not always true based on the results in Table 3. Can you explain the performance degradation when $k$ is too large?

Requested Changes: Generally, this work is well written, well motivated, and easy to follow. The revision should address the weaknesses pointed out in the previous section.

Broader Impact Concerns: This paper proposes a generic method, and there are no ethical concerns as far as I know.

==================================================

Metareview:

Recommendation: Accept as is

Comment: The reviewers all found the method well-motivated and the experiments correct. There were some concerns about the lack of large-scale experiments, but the experiments provided were comprehensive and technically correct. Therefore, I recommend acceptance.

==================================================
# A Survey On Transformers In Reinforcement Learning

¹*Tsinghua University* ²*Peking University* ³*BAAI* ⁴*Tencent Inc.* ⁵*Washington University in St. Louis*

Wenzhe Li¹* *lwz21@mails.tsinghua.edu.cn*
Hao Luo²,³* *lh2000@pku.edu.cn*
Zichuan Lin⁴* *zichuanlin@tencent.com*
Chongjie Zhang⁵† *chongjie@wustl.edu*
Zongqing Lu²,³† *zongqing.lu@pku.edu.cn*
Deheng Ye⁴† *dericye@tencent.com*

∗ Equal contribution; † Equal advising.

Reviewed on OpenReview: *https://openreview.net/forum?id=r30yuDPvf2*

## Abstract

Transformer has been considered the dominating neural architecture in NLP and CV, mostly under supervised settings. Recently, a similar surge of using Transformers has appeared in the domain of reinforcement learning (RL), but it is faced with unique design choices and challenges brought by the nature of RL. However, the evolution of Transformers in RL has not yet been well unraveled. In this paper, we seek to systematically review motivations and progress on using Transformers in RL, provide a taxonomy of existing works, discuss each sub-field, and summarize future prospects.

## 1 Introduction

Reinforcement learning (RL) provides a mathematical formalism for sequential decision-making. By utilizing RL, we can acquire intelligent behaviors automatically. While RL has provided a general framework for learning-based control, deep neural networks, as a way of function approximation with high capacity, have been enabling significant progress across a wide range of domains (Silver et al., 2016; Vinyals et al., 2019; Ye et al., 2020a;b). While the generality of deep reinforcement learning (DRL) has led to tremendous developments in recent years, the issue of sample efficiency prevents its widespread use in real-world applications. To address this issue, an effective mechanism is to introduce inductive biases into the DRL framework. One important inductive bias in DRL is the choice of function approximator architectures, such as the parameterization of neural networks for DRL agents. However, compared to efforts on architectural designs in supervised learning (SL), how to design architectures for DRL has remained less explored. Most existing works on architectures for RL are motivated by the success of the (semi-)supervised learning community. For instance, a common practice to deal with high-dimensional image-based input in DRL is to introduce convolutional neural networks (CNN) (LeCun et al., 1998; Mnih et al., 2015); another common practice to deal with partial observability is to introduce recurrent neural networks (RNN) (Hochreiter & Schmidhuber, 1997; Hausknecht & Stone, 2015). In recent years, the Transformer architecture (Vaswani et al., 2017) has revolutionized the learning paradigm across a wide range of SL tasks (Devlin et al., 2018; Dosovitskiy et al., 2020; Dong et al., 2018) and demonstrated superior performance over CNN and RNN. Among its notable benefits, the Transformer architecture enables modeling long-range dependencies and has excellent scalability (Khan et al., 2022). Inspired by the success of SL, there has been a surge of interest in applying Transformers in RL, with the hope of carrying the benefits of Transformers to the RL field.

The use of Transformers in RL dates back to Zambaldi et al. (2018), where the self-attention mechanism is used for relational reasoning over structured state representations.
Afterward, many researchers seek to apply self-attention for representation learning to extract relations between entities for better policy learning (Vinyals et al., 2019; Baker et al., 2019). Besides leveraging Transformers for state representation learning, prior works also use Transformers to capture multi-step temporal dependencies to deal with the issue of partial observability (Parisotto et al., 2020; Parisotto & Salakhutdinov, 2021). More recently, offline RL (Levine et al., 2020) has attracted attention due to its capability to leverage large-scale offline datasets. Motivated by offline RL, recent efforts have shown that the Transformer architecture can directly serve as a model for sequential decisions (Chen et al., 2021; Janner et al., 2021) and generalize to multiple tasks and domains (Lee et al., 2022; Carroll et al., 2022).

The purpose of this survey is to present the field of *Transformers in Reinforcement Learning*, denoted as "Transformer-based RL". Although Transformer has been considered one of the most popular models in SL research at present (Devlin et al., 2018; Dosovitskiy et al., 2020; Bommasani et al., 2021; Lu et al., 2021), it remains less explored in the RL community. In fact, compared with the SL domain, using Transformers in RL as function approximators faces unique challenges. First, the training data of RL is collected by an ever-changing policy during optimization, which induces non-stationarity for learning a Transformer. Second, existing RL algorithms are often highly sensitive to design choices in the training process, including network architectures and capacity (Henderson et al., 2018). Third, Transformer-based architectures often suffer from high computational and memory costs, making both training and inference expensive during the RL learning process. For example, in the case of AI for video game-playing, the training performance is closely related to the efficiency of sample generation, which is restricted by the computational cost of the RL policy network and value network (Ye et al., 2020a; Berner et al., 2019). Fourth, compared to models that rely on strong inductive biases, Transformer models typically need a much larger amount of training data to achieve decent performance, which usually exacerbates the sample efficiency problem of RL.

Despite all these challenges, Transformers are becoming essential tools in RL due to their high expressiveness and capability. However, they are utilized for various purposes stemming from orthogonal advances in RL, such as **a) RL that requires a strong representation or world model** (e.g., RL with high-dimensional spaces and long horizons); **b) RL as a sequence modeling problem**; and **c) pre-training large-scale foundation models for RL**. In this paper, we seek to provide a comprehensive overview of Transformer-based RL, including a taxonomy of current methods and their challenges. We also discuss future perspectives, as we believe the field of Transformer-based RL will play an important role in unleashing the potential impact of RL, and this survey could provide a starting point for those looking to leverage its potential.

We structure the paper as follows. Section 2 covers background on RL and Transformers, followed by a brief introduction on how these two are combined together. In Section 3, we describe the evolution of network architecture in RL and the challenges that prevent the Transformer architecture from being widely explored in RL for a long time. In Section 4, we provide a taxonomy of Transformers in RL and discuss representative existing methods. Finally, we summarize and point out potential future directions in Section 5.

## 2 Problem Scope

## 2.1 Reinforcement Learning

In general, Reinforcement Learning (RL) considers learning in a Markov Decision Process (MDP) M = ⟨S, A, P, r, γ, ρ₀⟩, where S and A denote the state space and action space respectively, P(s′|s, a) is the transition dynamics, r(s, a) is the reward function, γ ∈ (0, 1) is the discount factor, and ρ₀ is the distribution of initial states. Typically, RL aims to learn a policy π(a|s) to maximize the expected discounted return $J(\pi) = \mathbb{E}_{\pi,P,\rho_0}[\sum_t \gamma^t r(s_t, a_t)]$. To solve an RL problem, we need to tackle two different parts: learning to represent states and learning to act. The first part can benefit from inductive biases (e.g., CNN for image-based states, and RNN for non-Markovian tasks). The second part can be solved via behavior cloning (BC), model-free, or model-based RL. In the following part, we introduce several specific RL problems related to advances in Transformers in RL.

Offline RL. In offline RL (Levine et al., 2020), the agent cannot interact with the environment during training. Instead, it only has access to a static offline dataset D = {(s, a, s′, r)} collected by arbitrary policies. Without exploration, modern offline RL approaches (Fujimoto et al., 2019; Kumar et al., 2020; Yu et al., 2021b) constrain the learned policy to stay close to the data distribution, to avoid out-of-distribution actions that may lead to overestimation. Recently, in parallel with typical value-based methods, one popular trend in offline RL is RL via Supervised Learning (RvS) (Emmons et al., 2021), which learns an outcome-conditioned policy to yield desired behavior via SL; a minimal sketch of such a policy is given at the end of this subsection.

Goal-conditioned RL. Goal-conditioned RL (GCRL) extends the standard RL problem to the goal-augmented setting, where the agent aims to learn a goal-conditioned policy π(a|s, g) that can reach multiple goals. Prior works propose various techniques, such as hindsight relabeling (Andrychowicz et al., 2017), universal value functions (Schaul et al., 2015), and self-imitation learning (Ghosh et al., 2019), to improve the generalization and sample efficiency of GCRL. GCRL is quite flexible, as there are diverse choices of goals. We refer readers to Liu et al. (2022) for a detailed discussion of this topic.

Model-based RL. In contrast to model-free RL, which directly learns the policy and value functions, model-based RL learns an auxiliary dynamics model of the environment. Such a model can be directly used for planning (Schrittwieser et al., 2020), or for generating imaginary trajectories to enlarge the training data for any model-free algorithm (Hafner et al., 2019). Learning a model is non-trivial, especially in large or partially observed environments where we first need to construct the representation of the state. Some recent methods propose to use latent dynamics (Hafner et al., 2019) or value models (Schrittwieser et al., 2020) to address these challenges and improve the sample efficiency of RL.
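As a concrete reading of the RvS idea mentioned above, here is a minimal, hypothetical PyTorch sketch of an outcome-conditioned policy trained purely by supervised learning. The class, names, and the simple MLP are our own assumptions; real RvS implementations differ in details such as the choice of outcome and the relabeling scheme.

```python
import torch
import torch.nn as nn

class OutcomeConditionedPolicy(nn.Module):
    """pi(a | s, g): maps a state and a desired outcome (e.g., return-to-go or a
    goal) to action logits, trained with plain supervised learning on offline data."""
    def __init__(self, state_dim, outcome_dim, num_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + outcome_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state, outcome):
        return self.net(torch.cat([state, outcome], dim=-1))

def rvs_loss(policy, batch):
    """Behavior cloning conditioned on the outcome actually achieved in the data
    (e.g., obtained via hindsight relabeling)."""
    logits = policy(batch["state"], batch["outcome"])
    return nn.functional.cross_entropy(logits, batch["action"])
```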
In Section 4, we provide a taxonomy of Transformers in RL and discuss representative existing methods. Finally, we summarize and point out potential future directions in Section 5.

## 2 Problem Scope

## 2.1 Reinforcement Learning

In general, Reinforcement Learning (RL) considers learning in a Markov Decision Process (MDP) M = ⟨S, A, P, r, γ, ρ0⟩, where S and A denote the state space and action space respectively, P(s′|s, a) is the transition dynamics, r(s, a) is the reward function, γ ∈ (0, 1) is the discount factor, and ρ0 is the distribution of initial states. Typically, RL aims to learn a policy π(a|s) to maximize the expected discounted return

$$J(\pi)=\mathbb{E}_{\pi,P,\rho_{0}}\left[\sum_{t}\gamma^{t}r(s_{t},a_{t})\right].$$

To solve an RL problem, we need to tackle two different parts: learning to represent states and learning to act. The first part can benefit from inductive biases (e.g., CNN for image-based states, and RNN for non-Markovian tasks). The second part can be solved via behavior cloning (BC), model-free or model-based RL. In the following part, we introduce several specific RL problems related to advances in Transformers in RL.

Offline RL. In offline RL (Levine et al., 2020), the agent cannot interact with the environment during training. Instead, it only has access to a static offline dataset D = {(s, a, s′, r)} collected by arbitrary policies. Without exploration, modern offline RL approaches (Fujimoto et al., 2019; Kumar et al., 2020; Yu et al., 2021b) constrain the learned policy close to the data distribution, to avoid out-of-distribution actions that may lead to overestimation. Recently, in parallel with typical value-based methods, one popular trend in offline RL is RL via Supervised Learning (RvS) (Emmons et al., 2021), which learns an outcome-conditioned policy to yield desired behavior via SL.

Goal-conditioned RL. Goal-conditioned RL (GCRL) extends the standard RL problem to a goal-augmented setting, where the agent aims to learn a goal-conditioned policy π(a|s, g) that can reach multiple goals. Prior works propose to use various techniques, such as hindsight relabeling (Andrychowicz et al., 2017), universal value function (Schaul et al., 2015), and self-imitation learning (Ghosh et al., 2019), to improve the generalization and sample efficiency of GCRL. GCRL is quite flexible as there are diverse choices of goals. We refer readers to (Liu et al., 2022) for a detailed discussion around this topic.

Model-based RL. In contrast to model-free RL, which directly learns the policy and value functions, model-based RL learns an auxiliary dynamics model of the environment. Such a model can be directly used for planning (Schrittwieser et al., 2020), or for generating imaginary trajectories to enlarge the training data for any model-free algorithm (Hafner et al., 2019). Learning a model is non-trivial, especially in large or partially observed environments where we first need to construct the representation of the state. Some recent methods propose to use latent dynamics (Hafner et al., 2019) or value models (Schrittwieser et al., 2020) to address these challenges and improve the sample efficiency of RL.

## 2.2 Transformers

Transformer (Vaswani et al., 2017) is one of the most effective and scalable neural networks for modeling sequential data. The key idea of Transformer is to incorporate the *self-attention* mechanism, which can capture dependencies within long sequences in an efficient manner.
Formally, given a sequential input with $n$ tokens $\{x_i \in \mathbb{R}^d\}_{i=1}^{n}$, where $d$ is the embedding dimension, the self-attention layer maps each token $x_i$ to a query $q_i \in \mathbb{R}^{d_q}$, a key $k_i \in \mathbb{R}^{d_k}$, and a value $v_i \in \mathbb{R}^{d_v}$ via linear transformations, where $d_q = d_k$. Let the stacked sequences of inputs, queries, keys, and values be $X \in \mathbb{R}^{n \times d}$, $Q \in \mathbb{R}^{n \times d_q}$, $K \in \mathbb{R}^{n \times d_k}$, and $V \in \mathbb{R}^{n \times d_v}$, respectively. The output of the self-attention layer $Z \in \mathbb{R}^{n \times d_v}$ is a weighted sum of all values:

$$\mathbf{Z}=\mathrm{softmax}\left({\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{q}}}}\right)\mathbf{V}.$$

With the self-attention mechanism (Bahdanau et al., 2014) as well as other techniques, such as multi-head attention and residual connections (He et al., 2016), Transformers can learn expressive representations and model long-term interactions. Due to the strong representation capability and excellent scalability, the Transformer architecture has demonstrated superior performance over CNN and RNN across a wide range of supervised and unsupervised learning tasks. So a natural question is: can we use Transformers to tackle the problems (i.e., learning to represent states, and learning to act) in RL?

## 2.3 Combination Of Transformers And RL

We notice that a growing number of works are seeking to combine Transformers and RL in diverse ways. In general, Transformers can be used as one component of RL algorithms, e.g., a representation module or a dynamics model. Transformers can also serve as one whole sequential decision-maker. Figure 1 provides a sketch of Transformers' different roles in the context of RL.

## 3 Network Architecture In RL

Before presenting the taxonomy of current methods in Transformer-based RL, we start by reviewing the early progress of network architecture design in RL and summarize the challenges. We do this because the Transformer itself is an advanced neural network, and designing appropriate neural networks contributes to the success of DRL.

![3_image_0.png](3_image_0.png)

Figure 1: An illustrative example of Transformer-based RL. On the one hand, Transformers can be used as one component in RL. Particularly, Transformers can encode diverse sequences, such as entities, agents, and stacks of historical information; and they are also expressive predictors for dynamics models. On the other hand, Transformers can integrate all subroutines in RL and act as a sequential decision-maker. Overall, Transformers can improve RL's learning efficiency in single-task, multi-task, and cross-domain settings.

## 3.1 Architectures For Function Approximators

Since the seminal work Deep Q-Network (Mnih et al., 2015), many efforts have been made in developing network architectures for DRL agents. Improvements in network architectures in RL can be mainly categorized into three classes. The first class is to design a new structure that incorporates RL inductive biases to ease the difficulty of training policy or value functions. For example, Wang et al. (2016) propose the dueling network architecture, with one stream for the state value function and another for the state-dependent action advantage function. This choice of architecture incorporates the inductive bias that generalizes learning across actions. Other examples include the value decomposition network, which has been used to learn local Q-values for individual agents (Sunehag et al., 2017) or sub-rewards (Lin et al., 2019). The second class is to investigate whether general techniques of neural networks (e.g., regularization, skip connection, batch normalization) can be applied to RL.
To name a few, Ota et al. (2020) find that increasing input dimensionality while using an online feature extractor can boost state representation learning, and hence improve the performance and sample efficiency of DRL algorithms. Sinha et al. (2020) propose a deep dense architecture for DRL agents, using skip connections for efficient learning, with an inductive bias to mitigate the data-processing inequality. Ota et al. (2021) use DenseNet (Huang et al., 2017) with decoupled representation learning to improve the flows of information and gradients for large networks. The third class is to scale DRL agents for distributed learning. For instance, IMPALA (Espeholt et al., 2018) develops distributed actor-learner architectures that can scale to thousands of machines without sacrificing data efficiency. Recently, due to the superior performance of Transformers, some researchers have attempted to apply the Transformer architecture in policy optimization algorithms, but found that the vanilla Transformer design fails to achieve reasonable performance in RL tasks (Parisotto et al., 2020).

## 3.2 Challenges

While the usage of Transformers has made rapid progress in SL domains in past years, applying them in RL is not straightforward, with unique challenges in the following two respects1:

1Namely Transformers in RL versus Transformers in SL, and Transformers in RL versus other neural network architectures in RL.

![4_image_0.png](4_image_0.png)

Figure 2: The taxonomy of Transformer-based RL (TransformRL for short). The timeline is based on the first work related to each branch.

On the one hand, from the view of RL, many researchers point out that existing RL algorithms are incredibly sensitive to the architectures of deep neural networks (Henderson et al., 2018; Engstrom et al., 2019; Andrychowicz et al., 2020). First, the paradigm of alternating between data collection and policy optimization (i.e., data distribution shift) in RL induces non-stationarity during training. Second, RL algorithms are often highly sensitive to design choices in the training process. In particular, when coupled with bootstrapping and off-policy learning, learning with function approximators can diverge when the value estimates become unbounded (i.e., the "deadly triad") (Van Hasselt et al., 2018). More recently, Emmons et al. (2021) identify that carefully choosing model architectures and regularization is crucial for the performance of offline DRL agents.

On the other hand, from the view of Transformers, Transformer-based architectures suffer from large memory footprints and high latency, which hinder their efficient deployment and inference. Recently, many researchers have aimed to improve the computational and memory efficiency of the original Transformer (Tay et al., 2022), but most of these works focus on SL domains. In the context of RL, Parisotto & Salakhutdinov (2021) propose to distill the learning progress from a large Transformer-based learner model to a small actor model to bypass the high inference latency of Transformers. However, these methods are still expensive in terms of memory and computation. So far, the idea of efficient or lightweight Transformers has not yet been fully explored in the RL community. In addition to considerable memory usage and high latency, Transformer models often require a significantly larger amount of training data to achieve comparable performance to models that rely on strong inductive biases (e.g., CNN and RNN).
Given that DRL algorithms often struggle with sample efficiency, it can be challenging for DRL agents to gather sufficient data to train Transformer models. As we shall see later, such a challenge inspires the Transformer's prosperity in offline RL.

## 4 Transformers In RL

Although Transformer has become a popular model in most supervised learning research, it has not been widely used in the RL community for a long time due to the aforementioned challenges. In fact, most early attempts at Transformer-based RL apply Transformers for state representation learning or for providing memory information, but still use standard RL algorithms, such as temporal difference learning and policy optimization, for agent learning. Therefore, these methods still suffer from challenges of the conventional RL framework. Recently, however, offline RL has made it possible to learn optimal policies from large-scale offline data. Inspired by offline RL, recent works further treat the RL problem as a conditional sequence modeling problem on fixed experiences, which bypasses the challenge of bootstrapping error in traditional RL and allows the Transformer to unleash its powerful sequence modeling ability.

In this survey paper, we retrace the advances of Transformer-based RL and provide a taxonomy to present the current methods. We categorize existing methods into four classes: representation learning, model learning, sequential decision-making, and generalist agents. Figure 2 provides a taxonomy sketch with a subset of corresponding works.

## 4.1 Transformers For Representation Learning

One natural usage of Transformers is to use them as sequence encoders. In fact, various sequences in RL tasks require processing2, such as local per-timestep sequences (multi-entity sequences (Vinyals et al., 2019; Baker et al., 2019), multi-agent sequences (Wen et al., 2022), etc.), temporal sequences (trajectory sequences (Parisotto et al., 2020; Banino et al., 2021)), and so on.

## 4.1.1 Encoder For Local Per-Timestep Sequence

An early notable success of this approach is to process complex information from a variable number of entities scattered in the agent's observation with Transformers. Zambaldi et al. (2018) first propose to capture relational reasoning over structured observations with multi-head dot-product attention, which is subsequently used in AlphaStar (Vinyals et al., 2019) to process multi-entity observations in the challenging multi-agent game StarCraft II. The proposed entity Transformer encodes the observation as

$$\mathrm{Emb}=\mathrm{Transformer}(e_{1},\cdots,e_{i},\cdots),$$

where ei represents the agent's observation on entity i, either directly sliced from the whole observation or given by an entity tokenizer.

Several follow-up works have enriched entity Transformer mechanisms. Hu et al. (2020) propose a compatible decoupling policy to explicitly associate actions with various entities and exploit an attention mechanism for policy explanation. Wang et al. (2023b) learn an entity Transformer with general knowledge and feature-space-agnostic tokenization via transfer learning within different types of games. To solve the challenging one-shot visual imitation, Dasari & Gupta (2021) use Transformers to learn a representation focusing on task-specific elements. Similar to entities scattered in observations, some works exploit Transformers to process other local per-timestep sequences. Tang & Ha (2021) leverage the attention mechanism to process sensory sequences and construct a policy that is permutation-invariant w.r.t. its inputs.
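To make the per-timestep entity encoding above more tangible, the following is a minimal NumPy sketch of a single self-attention layer applied to a set of entity embeddings, instantiating the attention equation from Section 2.2; the random weights and the mean-pooling readout are illustrative assumptions rather than the design of any specific method.

```python
import numpy as np

def self_attention(E, Wq, Wk, Wv):
    """One self-attention layer over n entity embeddings E of shape (n, d),
    computing Z = softmax(Q K^T / sqrt(d_q)) V as in Section 2.2."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv            # linear maps to queries/keys/values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # pairwise attention logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ V                                # weighted sum of values

rng = np.random.default_rng(0)
n, d = 5, 16                                    # 5 entities, 16-dim embeddings
E = rng.normal(size=(n, d))                     # per-entity observation slices e_1..e_n
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Z = self_attention(E, Wq, Wk, Wv)               # relational features per entity
emb = Z.mean(axis=0)                            # illustrative pooled observation embedding
```

The pairwise attention weights are what let the encoder reason over relations among a variable number of entities, which is exactly the property that the methods below exploit.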
In the incompatible multi-task RL setting, Transformers have been proposed to extract morphological domain knowledge (Kurin et al., 2020). Regarding the presence of multimodal information (e.g., image and language) in local per-timestep observations, Team et al. (2021) utilize a Transformer-based structure to integrate such multimodal information and represent the state of the agent.

Moreover, recent RL algorithms are trying to incorporate vision inductive biases into policy learning. For instance, Vision Transformer (ViT) uses patch sequences to process images in the visual domain, which can be used for representation learning in RL. Tao et al. (2022) test the effectiveness of ViT and its variants combined with various self-supervised techniques (Data2vec, MAE, and Momentum Contrastive learning) for visual control tasks. However, no notable performance gain is shown in their experiments with less complex tasks. On the other hand, Hansen et al. (2021) find that ViT-based architectures are prone to overfitting and address the problem with data augmentation. Moreover, Kalantari et al. (2022) use the ViT architecture to learn Q-values with vision inputs, showing its potential to improve the sample efficiency of RL algorithms. In addition, Seo et al. (2022a) combine ViT with an improved feature-mask MAE to learn image features that are better suited for dynamics, which can benefit decision-making and control.

In summary, the Transformer, as a choice for local per-timestep sequence encoding, is being applied in various RL scenarios. When the RL task itself requires attention to the relationships among various parts of observations, such as entities and morphological domain sequences, the attention mechanism inherent in Transformers is suitable for their encoding process. When complex observation information needs to be processed, some works seek to transfer the expressive power demonstrated by Transformers in the vision or language domains to representation learning in RL. Additionally, some works indicate a trend towards using pre-trained encoders in RL, further establishing a deeper connection between the choice of representation learning structures in RL and the highly acclaimed architectures in the vision and language domains.

2In the context of Transformers, the term sequence refers to the manner in which data is processed. While the local per-timestep information may be represented as a set or graph structure in specific tasks, the information is conventionally aggregated in a specific order either as input or as a post-processing step in the output.

## 4.1.2 Encoder For Temporal Sequence

Meanwhile, it is also reasonable to process temporal sequences with Transformers. Such a temporal encoder works as a memory module:

$$\mathrm{Emb}_{0:t}=\mathrm{Transformer}(o_{0},\cdots,o_{t}),$$

where ot represents the agent's observation at timestep t and Emb0:t represents the embedding of historical observations from the initial observation to the current one.

In early works, Mishra et al. (2018) fail to process temporal sequences with vanilla Transformers and find them even worse than a random policy on certain tasks. Gated Transformer-XL (GTrXL) (Parisotto et al., 2020) is the first efficacious scheme to use the Transformer as a memory module. GTrXL provides a gated 'skip' path with Identity Map Reordering to stabilize the training procedure from the beginning. Such an architecture can also incorporate language instructions to accelerate meta RL (Bing et al., 2022) and multi-task RL (Guhur et al., 2023). Furthermore, Loynd et al.
(2020) propose a shortcut mechanism with memory vectors for long-term dependencies, and Irie et al. (2021) combine the linear Transformer with Fast Weight Programmers for better performance. In addition, Melo (2022) proposes to use the self-attention mechanism to mimic memory reinstatement for memory-based meta RL. Esslinger et al. (2022) combine the Transformer with a Bellman loss to process the observation history as the input of the Q-network.

The attention mechanism in Transformers avoids the need for recurrent context input, making it superior to recurrent models in encoding long dependencies. While the Transformer outperforms LSTM/RNN as the memory horizon and parameter scale grow, it suffers from poor data efficiency with RL signals. Follow-up works exploit auxiliary (self-) supervised tasks to benefit learning (Banino et al., 2021) or use a pre-trained Transformer as a temporal encoder (Li et al., 2022; Fan et al., 2022).

## 4.2 Transformers For Model Learning

In addition to using Transformers as encoders for sequence embedding, the Transformer architecture also serves as the backbone of the world model in model-based algorithms. Distinct from prediction conditioned on a single-step observation and action, the Transformer enables the world model to predict transitions conditioned on historical information. Practically, the success of Dreamer and subsequent algorithms (Hafner et al., 2020; 2021; 2023; Seo et al., 2022b) has demonstrated the benefits of world models conditioned on history in partially observable environments or in tasks that require a memory mechanism. A world model conditioned on history consists of an observation encoder to capture abstract information and a transition model to learn the transition in latent space, formally:

$$z_{t}\sim P_{\mathrm{enc}}(z_{t}|o_{t}),$$
$$\hat{z}_{t+1},\hat{r}_{t+1},\hat{\gamma}_{t+1}\sim P_{\mathrm{trans}}(\hat{z}_{t+1},\hat{r}_{t+1},\hat{\gamma}_{t+1}|z_{\leq t},a_{\leq t}),$$

where zt represents the latent embedding of observation ot, and Penc, Ptrans denote the observation encoder and the transition model, respectively.

There are several attempts to build a world model conditioned on history with the Transformer architecture instead of the RNN used in previous works. Concretely, Chen et al. (2022) replace the RNN-based Recurrent State-Space Model (RSSM) in Dreamer with a Transformer-based model (Transformer State-Space Model, TSSM). IRIS (Imagination with autoRegression over an Inner Speech) (Micheli et al., 2022) and the Transformer-based World Model (TWM) (Robine et al., 2023) learn Transformer-based world models simply via auto-regressive learning on rollout experience, without KL balancing as in Dreamer, and achieve considerable results on the Atari (Bellemare et al., 2013) 100k benchmark. Besides, some works also try to combine Transformer-based world models with planning. Ozair et al. (2021) verify the efficacy of planning with a Transformer-based world model to tackle stochastic tasks requiring long tactical lookahead. Sun et al. (2022) propose a goal-conditioned Transformer-based world model which is effective in visually grounded planning for procedural tasks.

It is true that both RNN and Transformer are compatible with world models conditioned on historical information. However, Micheli et al. (2022); Chen et al. (2022) find that the Transformer is a more data-efficient world model compared with Dreamer, and the experimental results of TSSM demonstrate that the Transformer architecture is advantageous in tasks that require long-term memory.
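To illustrate this family of models, below is a schematic PyTorch sketch of a Transformer-based latent world model in the spirit of TSSM, under strong simplifying assumptions (deterministic encoder and heads, no discount head, no KL terms); all module names and sizes are our own illustrative choices, not the architecture of any cited method.

```python
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Schematic world model: z_t = enc(o_t), then a causal Transformer
    predicts (z_{t+1}, r_{t+1}) conditioned on z_<=t and a_<=t."""

    def __init__(self, obs_dim, act_dim, d_model=64, n_layers=2):
        super().__init__()
        self.obs_enc = nn.Linear(obs_dim, d_model)   # deterministic stand-in for P_enc
        self.act_enc = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # stand-in for P_trans
        self.next_latent = nn.Linear(d_model, d_model)  # predicts the next latent
        self.reward = nn.Linear(d_model, 1)             # predicts the next reward

    def forward(self, obs, act):
        # obs: (B, T, obs_dim), act: (B, T, act_dim)
        tokens = self.obs_enc(obs) + self.act_enc(act)  # fuse z_t and a_t per timestep
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.backbone(tokens, mask=mask)            # condition on z_<=t, a_<=t only
        return self.next_latent(h), self.reward(h)
```

Training would regress the predicted latents and rewards onto encoded next observations and observed rewards; the stochastic heads and KL balancing of Dreamer-style models are deliberately omitted here.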
In fact, although model-based methods are data-efficient, they suffer from a compounding prediction error that increases with the model rollout length, which greatly affects the performance and limits the model rollout length (Janner et al., 2019). Transformer-based world models can help alleviate the prediction error on longer sequences (Chen et al., 2022; Janner et al., 2021).

| Method | Setting | Hindsight Info | Inference | Additional Structure/Usage |
|---------------------------------|-------------------------------|-------------------------|----------------|-----------------------------------------|
| DT (Chen et al., 2021) | Offline | return-to-go | conditioning | basic Transformer structure |
| TT (Janner et al., 2021) | IL/GCRL/Offline | return-to-go | beam search | basic Transformer structure |
| BeT (Shafiullah et al., 2022) | BC | none | conditioning | basic Transformer structure |
| BooT (Wang et al., 2022) | Offline | return-to-go | beam search | data augmentation |
| GDT (Furuta et al., 2021) | HIM | arbitrary | conditioning | anti-causal aggregator |
| ESPER (Paster et al., 2022) | Offline (stochastic) | expected return | conditioning | adversarial clustering |
| DoC (Yang et al., 2022a) | Offline (stochastic) | learned representation | conditioning | additional latent value func. |
| QDT (Yamagata et al., 2022) | Offline | relabelled return-to-go | conditioning | additional Q func. |
| StARformer (Shang et al., 2022) | IL/Offline | return-to-go/reward | conditioning | Step Transformer & Sequence Transformer |
| TIT (Mao et al., 2022) | Online/Offline | return-to-go/none | conditioning | Inner Transformer & Outer Transformer |
| ConDT (Konan et al., 2022) | Offline | learned representation | conditioning | return-dependent transformation |
| SPLT (Villaflor et al., 2022) | Offline | none | min-max search | separate models for world and policy |
| DeFog (Hu et al., 2023) | Offline | return-to-go | conditioning | drop-span embedding |
| ODT (Zheng et al., 2022) | Online finetune | return-to-go | conditioning | trajectory-based entropy |
| MADT (Meng et al., 2021) | Online finetune (multi-agent) | none | conditioning | separate models for actor and critic |

Table 1: A summary of Transformers for sequential decision-making.

## 4.3 Transformers For Sequential Decision-Making

In addition to being an expressive architecture to be plugged into components of traditional RL algorithms, the Transformer itself can serve as a model that conducts sequential decision-making directly. This is because RL can be viewed as a conditional sequence modeling problem: generating a sequence of actions that can yield high returns.

## 4.3.1 Transformers As A Milestone For Offline RL

One challenge for Transformers to be widely used in RL is that the non-stationarity during the training process may hinder their optimization. However, the recent prosperity of offline RL motivates a growing number of works focusing on training a Transformer model on offline data that can achieve state-of-the-art performance. Decision Transformer (DT) (Chen et al., 2021) first applies this idea by modeling RL as an autoregressive generation problem to produce the desired trajectory:

$$\tau=\left({\hat{R}}_{1},s_{1},a_{1},{\hat{R}}_{2},s_{2},a_{2},\ldots,{\hat{R}}_{T},s_{T},a_{T}\right),$$

where $\hat{R}_{t}=\sum_{t'=t}^{T}r(s_{t'},a_{t'})$ is the return-to-go. By conditioning on proper target return values at the first timestep, DT can generate desired actions without explicit TD learning or dynamic programming.
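As a small illustration of the trajectory representation above, the following Python snippet computes the return-to-go labels from raw rewards via a right-to-left suffix sum (the definition above uses no discounting); the variable names are ours.

```python
def returns_to_go(rewards):
    """Compute R_hat_t = sum over t' >= t of r_t' for every timestep t."""
    rtg, running = [], 0.0
    for r in reversed(rewards):   # suffix sums, accumulated right to left
        running += r
        rtg.append(running)
    return rtg[::-1]

rewards = [1.0, 0.0, 2.0, 1.0]
print(returns_to_go(rewards))     # [4.0, 3.0, 3.0, 1.0]
```

At evaluation time, DT replaces the first return-to-go token with a user-chosen target return and decrements it by each observed reward after every environment step.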
Concurrently, Trajectory Transformer (TT) (Janner et al., 2021) adopts a similar autoregressive sequence modeling:

$$\log P_{\theta}(\tau_{t}|\tau_{<t})=\sum_{i=1}^{N}\log P_{\theta}(s_{t}^{i}|s_{t}^{<i},\tau_{<t})+\sum_{j=1}^{M}\log P_{\theta}(a_{t}^{j}|a_{t}^{<j},s_{t},\tau_{<t})+\log P_{\theta}(r_{t}|a_{t},s_{t},\tau_{<t})+\log P_{\theta}(\hat{R}_{t}|r_{t},a_{t},s_{t},\tau_{<t}),$$

where N and M denote the dimensions of the state and action, respectively. In contrast to selecting target return values, TT proposes to use beam search for planning during execution. The empirical results demonstrate that TT performs well on long-horizon prediction. Moreover, TT shows that with mild adjustments to vanilla beam search, it can perform imitation learning, goal-conditioned RL, and offline RL under the same framework. Regarding the behavior cloning setting, Behavior Transformer (BeT) (Shafiullah et al., 2022) proposes a Transformer structure similar to TT to learn from multi-modal datasets.

In light of the Transformer's superior accuracy on sequence prediction, Bootstrapped Transformer (BooT) (Wang et al., 2022) proposes to bootstrap the Transformer to generate data while optimizing it for sequential decision-making. Formally, given a trajectory τ from the original dataset, BooT resamples the last T′ < T timesteps and concatenates the generated T′ steps with the original T − T′ steps as a new trajectory τ′:

$$\tau^{\prime}=\left(\tau_{\leq T-T^{\prime}},\tilde{\tau}_{>T-T^{\prime}}\right),\quad\tilde{\tau}_{>T-T^{\prime}}\sim P_{\theta}(\tau_{>T-T^{\prime}}|\tau_{\leq T-T^{\prime}}).$$

Bootstrapping the Transformer for data augmentation can expand the amount and coverage of offline datasets, and hence improve performance. More specifically, this work compares two different schemes to predict the next token $\tilde{y}_{n}$:

$$\tilde{y}_{n}\sim P_{\theta}(y_{n}|\tilde{y}_{<n},\tau_{\leq T-T^{\prime}}),\quad\text{(autoregressive generation, conditioned on previously generated tokens)}$$
$$\tilde{y}_{n}\sim P_{\theta}(y_{n}|y_{<n},\tau_{\leq T-T^{\prime}}),\quad\text{(teacher-forcing generation, conditioned on previously original tokens)}$$

and the results show that using the two schemes together can generate data consistent with the underlying MDP without additional explicit conservative constraints.

## 4.3.2 Different Choices Of Conditioning

While conditioning on return-to-go is a practical choice to incorporate future trajectory information, one natural question is whether other kinds of hindsight information can benefit sequential decision-making. To this end, Furuta et al. (2021) propose Hindsight Information Matching (HIM), a unified framework that can formulate variants of hindsight RL problems. More specifically, HIM converts hindsight RL into matching any predefined statistics of future trajectory information w.r.t. the distribution induced by the learned conditional policy:

$$\operatorname*{min}_{\pi}\mathbb{E}_{z\sim p(z),\tau\sim\rho_{z}^{\pi}(\tau)}[D(I^{\Phi}(\tau),z)],$$

where z is the parameter that the policy π(a|s, z) is conditioned on, D is a divergence measure such as the KL divergence or f-divergences, and the information statistics $I^{\Phi}(\tau)$ can be any function of the trajectory via the feature function Φ. Furthermore, this work proposes Generalized DT (GDT) for arbitrary choices of statistics $I^{\Phi}(\tau)$3 and demonstrates its applications in two HIM problems: offline multi-task state-marginal matching and imitation learning.
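To ground the HIM formulation, the snippet below sketches how two choices of the information statistics could be computed and used to relabel trajectories before conditioned training; the trajectory format and both statistics are illustrative assumptions, not GDT's exact implementation.

```python
def return_stat(traj):
    """I^Phi(tau) as the (undiscounted) return: recovers DT-style conditioning."""
    return sum(traj["rewards"])

def final_state_stat(traj):
    """I^Phi(tau) as the final state: a goal-reaching flavor of hindsight info."""
    return traj["states"][-1]

def hindsight_relabel(dataset, stat_fn):
    """Attach z = I^Phi(tau) to each trajectory so that a conditioned policy
    pi(a | s, z) can be trained via supervised learning to match the statistics."""
    return [dict(traj, z=stat_fn(traj)) for traj in dataset]

dataset = [{"states": [[0.0], [1.0]], "rewards": [1.0, 0.5]}]
print(hindsight_relabel(dataset, return_stat)[0]["z"])  # 1.5
```

Whether such statistics behave well in stochastic environments is exactly the issue taken up next.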
Specifically, one drawback of conditioning on return-to-go is that it leads to sub-optimal actions in stochastic environments, as the training data may contain sub-optimal actions that luckily result in high rewards due to the stochasticity of transitions. Paster et al. (2022) identify this limitation in general RvS methods. They further formulate RvS as an HIM problem and discover that RvS policies can achieve goals consistently if the information statistics are independent of the transitions' stochasticity. Based on this implication, they propose environment-stochasticity-independent representations (ESPER), an algorithm that first clusters trajectories and estimates average returns for each cluster, and then trains a policy conditioned on the expected returns. Alternatively, Dichotomy of Control (DoC) (Yang et al., 2022a) proposes to learn a representation that is agnostic to stochastic transitions and rewards in the environment via minimizing mutual information. During inference, DoC selects the representation with the highest value and feeds it into the conditioned policy. Yang et al. (2022b) propose a method that annotates task-specific procedure observation sequences in the training set and uses them to generate decision actions. This approach enables the agent to learn decision-making based on prior planning, which is beneficial for tasks that require multi-step foresight.

In addition to exploring different hindsight information, another approach to enhance return-to-go conditioning is to augment the dataset. Q-learning DT (QDT) (Yamagata et al., 2022) proposes to use a conservative value function to relabel return-to-go in the dataset, hence combining DT with dynamic programming and improving its stitching capability.

## 4.3.3 Improving The Structure Of Transformers

Apart from studying different conditioned information, some works improve the structure of DT or TT. These works fit into two categories: **a) Introduce additional modules or loss functions to improve the representation.** Shang et al. (2022) argue that the DT structure is inefficient for learning Markovian-like dependencies, as it takes all tokens as the input. To alleviate this issue, they propose learning an additional Step Transformer for local state-action-reward representations and using these representations for sequence modeling. Similarly, Mao et al. (2022) use an inner Transformer to process the single observation and an outer Transformer to process the history, and cascade them as a backbone for online and offline RL. Konan et al. (2022) believe that different sub-tasks correspond to different levels of return-to-go, which require different representations. Therefore, they propose the Contrastive Decision Transformer (ConDT) structure, where a return-dependent transformation is applied to state and action embeddings before putting them into a causal Transformer. The return-dependent transformation intuitively captures features specific to the current sub-task, and it is learned with an auxiliary contrastive loss to strengthen the correlation between transformation and return.

3As a special case, $I^{\Phi}(\tau)=\hat{R}_{t}$ in DT.

**b) Design the architecture to improve the robustness.** Villaflor et al. (2022) identify that the TT structure may produce overly optimistic behavior, which is dangerous in safety-critical scenarios. This is because TT implements the model prediction and the policy network in the same model.
Therefore, they propose the SeParated Latent Trajectory Transformer (SPLT Transformer), which consists of two independent Transformer-based CVAE structures for the world model and the policy model, with the trajectory as the condition. Formally, given the trajectory $\tau_{t}^{K}=(s_{t},a_{t},s_{t+1},a_{t+1},\ldots,s_{t+K},a_{t+K})$ and its sub-sequence $\tau_{t}^{\prime k}=(s_{t},a_{t},s_{t+1},a_{t+1},\ldots,s_{t+k})$, SPLT Transformer optimizes two variational lower bounds for the policy model and the world model:

$$\mathbb{E}_{z_{t}^{\pi}\sim q_{\phi_{\pi}}}\left[\sum_{k=1}^{K}\log p_{\theta_{\pi}}(a_{t+k}|\tau_{t}^{\prime k};z_{t}^{\pi})\right]-D_{\mathrm{KL}}(q_{\phi_{\pi}}(z_{t}^{\pi}|\tau_{t}^{K}),p(z_{t}^{\pi})),$$
$$\mathbb{E}_{z_{t}^{w}\sim q_{\phi_{w}}}\left[\sum_{k=1}^{K}\log p_{\theta_{w}}(s_{t+k+1},r_{t+k},\hat{R}_{t+k+1}|\tau_{t}^{\prime k};z_{t}^{w})\right]-D_{\mathrm{KL}}(q_{\phi_{w}}(z_{t}^{w}|\tau_{t}^{K}),p(z_{t}^{w})),$$

where qϕπ and pθπ are the encoder and decoder of the policy model, and qϕw and pθw are the encoder and decoder of the world model, respectively. Similarly to a min-max search procedure, SPLT Transformer searches the latent variable space during planning to minimize the return-to-go under the world model and to maximize the return-to-go under the policy model.

Hu et al. (2023) consider frame dropping in practical application scenarios, where states and rewards at some timesteps are unavailable and the information from previous timesteps has to be reused. They propose Decision Transformer under Random Frame Dropping (DeFog), a DT variant which extends the timestep embedding by introducing an additional drop-span embedding to predict the number of consecutive frame drops. In addition to frame dropping, DT may also suffer from forgetting, as it merely relies on parameters to "memorize" large-scale data in an implicit way. Therefore, Kang et al. (2023) introduce an internal working memory module into DT to extract knowledge from past experience explicitly. They also incorporate low-rank adaptation (LoRA) (Hu et al., 2022) parameters to adapt to unseen tasks.

## 4.3.4 Extending DT Beyond Offline RL

Although most of the works on Transformers for sequential decision-making focus on the offline setting, there are several attempts to adapt this paradigm to online and multi-agent settings. Online Decision Transformer (ODT) (Zheng et al., 2022) replaces the deterministic policy in DT with a stochastic counterpart and defines a trajectory-level policy entropy to help exploration during online fine-tuning. Pretrained Decision Transformer (PDT) (Xie et al., 2022) adopts the idea of Bayes' rule in online fine-tuning to control the conditional policy's behavior in DT. Besides, such a two-stage paradigm (offline pre-training with online fine-tuning) is also applied to Multi-Agent Decision Transformer (MADT) (Meng et al., 2021), where a decentralized DT is pre-trained with offline data from the perspective of individual agents and is used as the policy network in online fine-tuning with MAPPO (Yu et al., 2021a). Interestingly, to the best of our knowledge, we have not found related works on learning DT in the purely online setting without any pre-training. We suspect that this is because pure online learning would exacerbate the non-stationary nature of the training data, which would severely harm the performance of the current DT policy. Instead, offline pre-training can help to warm up a well-behaved policy that is stable for further online training.
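Before moving on to generalist agents, the sketch below condenses the recipe shared by the sequence-modeling methods in this subsection: a causal Transformer is trained with a supervised loss to predict actions from interleaved (return-to-go, state, action) tokens. This is a minimal PyTorch illustration with our own shapes and module names, not the reference implementation of DT.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniDecisionTransformer(nn.Module):
    """Predict a_t from the causal token stream (R_hat_1, s_1, a_1, ..., R_hat_t, s_t)."""

    def __init__(self, state_dim, act_dim, d_model=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        tokens = torch.stack([self.embed_rtg(rtg),
                              self.embed_state(states),
                              self.embed_action(actions)], dim=2)
        tokens = tokens.flatten(1, 2)  # interleave tokens as (B, 3T, d_model)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.backbone(tokens, mask=mask)
        return self.action_head(h[:, 1::3])  # read a_t off each state-token position

# One behavior-cloning step on an offline batch (continuous actions assumed):
model = MiniDecisionTransformer(state_dim=4, act_dim=2)
rtg, states, actions = torch.randn(8, 10, 1), torch.randn(8, 10, 4), torch.randn(8, 10, 2)
loss = F.mse_loss(model(rtg, states, actions), actions)
loss.backward()
```

The causal mask ensures the prediction for each action only attends to tokens up to and including the current state, which is the property that makes return-conditioned generation possible at inference time.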
## 4.4 Transformers For Generalist Agents

In view of the fact that Decision Transformer has already flexed its muscles in various tasks with offline data, several works turn to consider whether Transformers can enable a generalist agent to solve multiple tasks or problems, as in the CV and NLP fields.

## 4.4.1 Generalize To Multiple Tasks

Some works draw on the ideas of pre-training on large-scale datasets in CV and NLP and try to abstract a general policy from large-scale multi-task datasets. Multi-Game Decision Transformer (MGDT) (Lee et al., 2022), a variant of DT, learns DT on a diverse dataset consisting of both expert and non-expert data and achieves close-to-human performance on multiple Atari games with a single set of parameters. In order to obtain expert-level performance from a dataset containing non-expert experiences, MGDT includes an expert action inference mechanism, which derives an expert-level posterior distribution of return-to-go from the prior distribution of return-to-go and a preset expert-level likelihood, following the Bayesian formula. Likewise, Switch Trajectory Transformer (SwitchTT) (Lin et al., 2022), a multi-task extension of TT, exploits a sparsely activated model that replaces the FFN layer with a mixture-of-experts layer for efficient multi-task offline learning. More specifically, such a "switch layer" consists of n expert networks E1, . . . , En and a router to select a specific expert for each token:

$$y=p_{i}(x)E_{i}(x),\quad i=\arg\operatorname*{max}_{i}p_{i}(x)=\arg\operatorname*{max}_{i}{\frac{e^{h(x)_{i}}}{\sum_{j}^{n}e^{h(x)_{j}}}},$$

where h(x) denotes the logits produced by the router. Besides, a distributional trajectory value estimator is adopted to model the uncertainty of value estimates. With these two enhanced features, SwitchTT achieves improvements over TT across multiple tasks in terms of both performance and training speed. MGDT and SwitchTT exploit experiences collected from multiple tasks and various performance-level policies to learn a general policy. To solve the multi-objective RL problem, Zhu et al. (2023) build a multi-objective offline RL dataset and extend DT to preference- and return-conditioned learning. As for meta RL across diverse tasks, Team et al. (2023) demonstrate that Transformer model-based RL can improve adaptation and scalability with curriculum and distillation.

However, constructing a large-scale multi-task dataset is non-trivial. Unlike large-scale datasets in CV or NLP, which are usually constructed with massive public data from the Internet and simple manual labeling, action information is always absent from public sequential decision-making data and is not easy to label. Thus, Baker et al. (2022) propose a semi-supervised scheme to utilize large-scale online data without action information by learning a Transformer-based Inverse Dynamics Model (IDM) on a small portion of action-labeled data, which predicts the action information from past and future observations and is consequently capable of labeling massive amounts of unlabeled online video data. The IDM is learned on a small-scale dataset containing manually labeled actions and is accurate enough to provide action labels for videos for effective behavior cloning and fine-tuning. Further, Venuto et al. (2022) conduct experiments where they train the IDM with action-labeled data from other tasks, which reduces the need for action-labeled data specifically for the target task.
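Circling back to SwitchTT's sparsely activated design above, the following PyTorch sketch makes the argmax routing of the switch layer concrete; the two-layer experts and sizes are illustrative assumptions, and the load-balancing tricks used by practical mixture-of-experts layers are omitted.

```python
import torch
import torch.nn as nn

class SwitchLayer(nn.Module):
    """Sparsely activated FFN: each token is routed to its argmax expert and
    the expert output is scaled by the routing probability, y = p_i(x) E_i(x)."""

    def __init__(self, d_model, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # produces logits h(x)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                              # x: (n_tokens, d_model)
        p = torch.softmax(self.router(x), dim=-1)      # routing probabilities p(x)
        idx = p.argmax(dim=-1)                         # one expert per token
        y = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = idx == i
            if sel.any():
                y[sel] = p[sel, i:i + 1] * expert(x[sel])  # y = p_i(x) * E_i(x)
        return y

layer = SwitchLayer(d_model=16)
print(layer(torch.randn(5, 16)).shape)  # torch.Size([5, 16])
```

Because only one expert runs per token, the layer adds capacity without a proportional increase in per-token compute, which is what makes it attractive for multi-task learning.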
The efficacy of prompting (Brown et al., 2020) for adaptation to new tasks has been proven in many prior works in NLP. Following this idea, several works aim at leveraging prompting techniques for DT-based methods to enable fast adaptation. Prompt-based Decision Transformer (Prompt-DT) (Xu et al., 2022) samples a sequence of transitions from a few-shot demonstration dataset as the prompt and shows that it can achieve few-shot policy generalization on offline meta RL tasks. Reed et al. (2022) further exploit a prompt-based architecture to learn a generalist agent (Gato) via auto-regressive sequence modeling on an extremely large-scale dataset covering natural language, images, temporal decision-making, and multi-modal data. Gato is capable of a range of tasks from various domains, including text generation and decision-making. Specifically, Gato unifies multi-modal sequences in a shared tokenization space and adopts prompt-based inference at deployment to generate task-specific sequences. Similarly to Gato, RT-1 (Brohan et al., 2022) and PaLM-E (Driess et al., 2023) leverage large-scale multimodal datasets to train Transformers that can achieve high performance on downstream tasks. VIMA (Jiang et al., 2022) combines textual and visual tokens as multimodal prompts to build a scalable model that generalizes well among robot manipulation tasks.

Despite being effective, Laskin et al. (2022) point out one limitation of the prompt-based framework: the prompt consists of demonstrations from a well-behaved policy, and such contexts are not sufficient to capture policy improvement. Inspired by the in-context learning capabilities (Alayrac et al., 2022) of Transformers, they propose Algorithm Distillation (AD) (Laskin et al., 2022), which instead trains a Transformer on cross-episode sequences of the learning progress of single-task RL algorithms. Therefore, even in new tasks, the Transformer can learn to gradually improve its policy during auto-regressive generation.

## 4.4.2 Generalize To Multiple Domains

Beyond generalizing to multiple tasks, the Transformer is also a powerful "universal" model to unify a range of domains related to sequential decision-making. Motivated by advances in the masked language modeling (Devlin et al., 2018) technique in NLP, Carroll et al. (2022) propose Uni[MASK], which unifies various commonly studied domains, including behavioral cloning, offline RL, GCRL, past/future inference, and dynamics prediction, as one mask inference problem. Uni[MASK] compares different masking schemes, including task-specific masking, random masking, and fine-tuned variants. It is shown that a single Transformer trained with random masking can solve arbitrary inference tasks. More surprisingly, compared to the task-specific counterpart, random masking can still improve performance in the single-task setting.

In addition to unifying sequential inference problems in the RL domain, Reid et al. (2022) find it beneficial to finetune DT with a Transformer pre-trained on language datasets or on multi-modal datasets containing the language modality. Concretely, Reid et al. (2022) find that pre-training the Transformer with language data, while encouraging similarity between language-based and RL-based representations, can help improve the performance and convergence speed of DT. This finding implies that even knowledge from non-RL fields can benefit RL training via Transformers. Li et al.
(2022) further show that a policy initialized with pre-trained language models can be finetuned to accommodate different tasks and goals in interactive decision-making. Additionally, several works (Huang et al., 2022a;b; Raman et al., 2022; Yao et al., 2022; Ahn et al., 2022; Wang et al., 2023c; Du et al., 2023; Wang et al., 2023a) discover that pre-trained large-scale language models are capable of generating reasonable high-level plans to accomplish complex tasks without further finetuning. However, even with well-behaved low-level policies, directly applying large language models for execution is inefficient or even infeasible in particular scenarios or environments. Therefore, these works propose to generate valid action sequences with affordance guidance (Ahn et al., 2022), autoregressive correction (Huang et al., 2022a), corrective re-prompting (Raman et al., 2022), interleaved reasoning and action trace generation (Yao et al., 2022), and description feedback (Huang et al., 2022b; Wang et al., 2023c; Du et al., 2023; Wang et al., 2023a). Even without RL modules, Wu et al. (2023) find that GPT-4 (OpenAI, 2023), when prompted with related RL papers and chain-of-thought, can produce competitive performance on RL benchmark tasks.

## 5 Summary And Future Perspectives

This paper briefly reviews advances in Transformers for RL. We provide a taxonomy of these advances: a) Transformers can serve as a powerful module of RL, e.g., acting as a representation module or a world model; b) Transformers can serve as a sequential decision-maker; and c) Transformers can benefit generalization across tasks and domains. While we cover representative works on this topic, the usage of Transformers in RL is not limited to our discussions. Given the prosperity of Transformers in the broader AI community, we believe that combining Transformers and RL is a promising trend. To conclude this survey, in the following, we discuss future perspectives and open problems for this direction.

Bridging Online and Offline Learning via Transformers. Stepping into offline RL is a milestone in Transformer-based RL. In practice, utilizing Transformers to capture dependencies in decision sequences and to abstract policies is largely inseparable from the support of considerable offline demonstrations. However, it is infeasible for some decision-making tasks to get rid of the online framework in real applications. On the one hand, it is not that easy to obtain expert data for certain tasks, while a substantial amount of less related data holds potentially valuable information. On the other hand, some environments are open-ended (e.g., Minecraft), which means the policy has to continually adjust to deal with unseen tasks during online interaction. Therefore, we believe that bridging online learning and offline learning is necessary. However, most research progress following Decision Transformer focuses on the offline learning framework. We thereby expect special paradigm designs that address this issue by transferring the generalization capabilities demonstrated by Transformer structures in large vision or language models to decision tasks, potentially enhancing the utilization of less related offline data significantly. In addition, how to train a well-performing online Decision Transformer from scratch remains an interesting open problem.

Combining Reinforcement Learning and (Self-) Supervised Learning. Retracing the development of Transformer-based RL, the training methods involve both RL and (self-) supervised learning.
When trained under the conventional RL framework, the Transformer architecture is usually unstable to optimize (Parisotto et al., 2020). RL algorithms inherently require imposing various constraints to avoid issues like the deadly triad (Van Hasselt et al., 2018), which could potentially exacerbate the training difficulty of the Transformer architecture. Meanwhile, the (self-) supervised learning paradigm, while improving the optimization of the Transformer structure, constrains the performance of the policy by the quality of the experience/demonstration data. Therefore, a better policy may be learned when we combine RL and (self-) supervised learning in Transformer learning. Some works (Zheng et al., 2022; Meng et al., 2021) have tried out the scheme of supervised pre-training and RL-involved fine-tuning. However, exploration can be limited with a relatively fixed policy (Nair et al., 2020), which is one of the bottlenecks to be resolved. Also, along this line, the tasks used for performance evaluation are relatively simple. It is worthwhile to further explore whether Transformers can scale such (self-) supervised learning up to larger datasets, more complex environments, and real-world applications. Further, we expect future work to provide more theoretical and empirical insights to characterize under which conditions such (self-) supervised learning is expected to perform well (Brandfonbrener et al., 2022; Siebenborn et al., 2022; Takagi, 2022).

Transformer Structure Tailored for Decision-making Problems. The Transformer structures in current DT-based methods are mainly the vanilla Transformer, which was originally designed for text sequences and may not fit the nature of decision-making problems. For example, is it appropriate to adopt the vanilla self-attention mechanism for trajectory sequences? Do different elements in the decision sequence, or different parts of the same element, need to be distinguished in position embeddings? In addition, as there are many variants of representing the trajectory as a sequence in different DT-based algorithms, how to choose among them still lacks systematic research. For instance, how should we select robust hindsight information when deploying such algorithms in industry? Furthermore, the vanilla Transformer has a huge computational cost, which makes it expensive at both the training and inference stages, and a high memory footprint, which constrains the length of the dependencies it can capture. We notice that some works (Zhou et al., 2021; Fedus et al., 2022; Zhu et al., 2021) have improved the structure in these respects and have been validated in various domains. It is also worth exploring whether similar structures can be used in decision-making problems.

Towards More Generalist Agents with Transformers. Our review has shown the potential of Transformers as a general policy (Section 4.4). In fact, the design of Transformers allows multiple modalities (e.g., image, video, text, and speech) to be processed using similar processing blocks and demonstrates excellent scalability to large-capacity networks and huge datasets. Recent works have made substantial progress in training agents capable of performing multiple and cross-domain tasks. However, given that these agents are trained on huge amounts of data, it is still uncertain whether they merely memorize the dataset and whether they can generalize efficiently.
Therefore, how to learn an agent that can generalize to unseen tasks without strong assumptions is well worth studying (Boustati et al., 2021). Moreover, we are curious about whether the Transformer is strong enough to learn a general world model (Schubert et al., 2023) for different tasks and scenarios.

Connections to Other Research Trends. While we have discussed how RL can benefit from the usage of Transformers, the reverse direction, i.e., using RL to benefit Transformer training, is an intriguing yet less explored open problem. We see that some works model language/dialogue generation tasks in the offline RL setting and learn generation policies with relabeling (Snell et al., 2022b) or value functions (Verma et al., 2022; Snell et al., 2022a; Jang et al., 2022). Recently, Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) learns a reward model and uses RL algorithms to finetune Transformers for aligning language models with human intent (Nakano et al., 2021; OpenAI, 2023). In the future, we believe RL can be a useful tool to further polish Transformers' performance in other domains. In parallel with Transformers, diffusion models (Song et al., 2020; Ho et al., 2020) are also becoming a mainstream tool for generative tasks and have demonstrated applications in RL (Janner et al., 2022). While no significant performance gap is observed between Transformer-based RL and diffusion-based RL (Janner et al., 2019; Ajay et al., 2022), we speculate that each model may have advantages in the domain where it is most commonly used, i.e., the Transformer in language RL tasks and the diffusion model in vision RL tasks. Moreover, diffusion models may perform better in stochastic environments as they can model complex distributions. However, the relatively slow inference may hinder diffusion models' usage in real-time decision-making problems. We hope future work can conduct rigorous experiments and provide more empirical evidence to verify our conjectures.

## References

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022.

Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision-making? *arXiv preprint arXiv:2211.15657*, 2022.

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. *Advances in Neural Information Processing Systems*, 30, 2017.

Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, et al. What matters for on-policy deep actor-critic methods? a large-scale study. In *International Conference on Learning Representations*, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014.
Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In *International Conference on Learning Representations*, 2019.

Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (VPT): Learning to act by watching unlabeled online videos. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022.

Andrea Banino, Adrià Puigdomènech Badia, Jacob C Walker, Tim Scholtes, Jovana Mitrovic, and Charles Blundell. Coberl: Contrastive bert for reinforcement learning. In *International Conference on Learning Representations*, 2021.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, 2013.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. *arXiv preprint arXiv:1912.06680*, 2019.

Zhenshan Bing, Alexander Koch, Xiangtong Yao, Fabrice O Morin, Kai Huang, and Alois Knoll. Meta-reinforcement learning via language instructions. *arXiv preprint arXiv:2209.04924*, 2022.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Ayman Boustati, Hana Chockler, and Daniel C McNamee. Transfer learning with causal counterfactual reasoning in decision transformers. *arXiv preprint arXiv:2110.14355*, 2021.

David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. When does return-conditioned supervised learning work for offline reinforcement learning? *arXiv preprint arXiv:2206.01079*, 2022.

Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. *arXiv preprint arXiv:2212.06817*, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, et al. Unimask: Unified inference in sequential decision problems. *arXiv preprint arXiv:2211.10869*, 2022.

Chang Chen, Yi-Fu Wu, Jaesik Yoon, and Sungjin Ahn. Transdreamer: Reinforcement learning with transformer world models. *arXiv preprint arXiv:2202.09481*, 2022.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in Neural Information Processing Systems*, 34:15084–15097, 2021.

Sudeep Dasari and Abhinav Gupta. Transformers for one-shot visual imitation. In *Conference on Robot Learning*, pp. 2071–2084. PMLR, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition. In *2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 5884–5888. IEEE, 2018.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. *arXiv preprint arXiv:2303.03378*, 2023.

Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, and Jacob Andreas. Guiding pretraining in reinforcement learning with large language models. *arXiv preprint arXiv:2302.06692*, 2023.

Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, and Sergey Levine. Rvs: What is essential for offline rl via supervised learning? *arXiv preprint arXiv:2112.10751*, 2021.

Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Implementation matters in deep rl: A case study on ppo and trpo. In *International Conference on Learning Representations*, 2019.

Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In *International Conference on Machine Learning*, pp. 1407–1416. PMLR, 2018.

Kevin Esslinger, Robert Platt, and Christopher Amato. Deep transformer q-networks for partially observable reinforcement learning. *arXiv preprint arXiv:2206.01078*, 2022.

Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2022.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *The Journal of Machine Learning Research*, 23(1):5232–5270, 2022.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International Conference on Machine Learning*, pp. 2052–2062. PMLR, 2019.

Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. Generalized decision transformer for offline hindsight information matching. *arXiv preprint arXiv:2111.10364*, 2021.

Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, and Sergey Levine. Learning to reach goals via iterated supervised learning. *arXiv preprint arXiv:1912.06088*, 2019.

Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia Pinel, Makarand Tapaswi, Ivan Laptev, and Cordelia Schmid. Instruction-driven history-aware policies for robotic manipulations. In *Conference on Robot Learning*, pp. 175–187. PMLR, 2023.

Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. *arXiv preprint arXiv:1912.01603*, 2019.
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In *International Conference on Learning Representations*, 2020. Danijar Hafner, Timothy P Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. In *International Conference on Learning Representations*, 2021. Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. *arXiv preprint arXiv:2301.04104*, 2023. Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In *2015 AAAI Fall Symposium Series*, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in neural information processing systems*, 33:6840–6851, 2020. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022. Kaizhe Hu, Ray Chen Zheng, Yang Gao, and Huazhe Xu. Decision transformer under random frame dropping. *arXiv preprint arXiv:2303.03391*, 2023. Siyi Hu, Fengda Zhu, Xiaojun Chang, and Xiaodan Liang. Updet: Universal multi-agent rl via policy decoupling with transformers. In *International Conference on Learning Representations*, 2020. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. *arXiv preprint arXiv:2201.07207*, 2022a. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. *arXiv preprint arXiv:2207.05608*, 2022b. Kazuki Irie, Imanol Schlag, Róbert Csordás, and Jürgen Schmidhuber. Going beyond linear transformers with recurrent fast weight programmers. *Advances in Neural Information Processing Systems*, 34:7703–7717, 2021. Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. Gpt-critic: Offline reinforcement learning for end-to-end task-oriented dialogue systems. In *International Conference on Learning Representations*, 2022. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. *Advances in Neural Information Processing Systems*, 32, 2019. Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence modeling problem.
In ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021. Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. *arXiv preprint arXiv:2205.09991*, 2022. Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: General robot manipulation with multimodal prompts. *arXiv preprint arXiv:2210.03094*, 2022. Amir Ardalan Kalantari, Mohammad Amini, Sarath Chandar, and Doina Precup. Improving sample efficiency of value based models using attention and vision transformers. *arXiv preprint arXiv:2202.00710*, 2022. Jikun Kang, Romain Laroche, Xindi Yuan, Adam Trischler, Xue Liu, and Jie Fu. Think before you act: Decision transformers with internal working memory. *arXiv preprint arXiv:2305.16338*, 2023. Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. *ACM computing surveys (CSUR)*, 54(10s):1–41, 2022. Sachin G Konan, Esmaeil Seraj, and Matthew Gombolay. Contrastive decision transformers. In *6th Annual Conference on Robot Learning*, 2022. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020. Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, and Shimon Whiteson. My body is a cage: the role of morphology in graph-based incompatible control. *arXiv preprint arXiv:2010.01856*, 2020. Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Hansen, Angelos Filos, Ethan Brooks, et al. In-context reinforcement learning with algorithm distillation. *arXiv preprint arXiv:2210.14215*, 2022. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Kuang-Huei Lee, Ofir Nachum, Sherry Yang, Lisa Lee, C. Daniel Freeman, Sergio Guadarrama, Ian Fischer, Winnie Xu, Eric Jang, Henryk Michalewski, and Igor Mordatch. Multi-game decision transformers. In *Advances in Neural Information Processing Systems*, 2022. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, et al. Pre-trained language models for interactive decision-making. *Advances in Neural Information Processing Systems*, 35:31199–31212, 2022. Qinjie Lin, Han Liu, and Biswa Sengupta. Switch trajectory transformer with distributional value approximation for multi-task reinforcement learning. *arXiv preprint arXiv:2203.07413*, 2022. Zichuan Lin, Li Zhao, Derek Yang, Tao Qin, Tie-Yan Liu, and Guangwen Yang. Distributional reward decomposition for reinforcement learning. *Advances in neural information processing systems*, 32, 2019. Minghuan Liu, Menghui Zhu, and Weinan Zhang. Goal-conditioned reinforcement learning: Problems and solutions. *arXiv preprint arXiv:2201.08299*, 2022. Ricky Loynd, Roland Fernandez, Asli Celikyilmaz, Adith Swaminathan, and Matthew Hausknecht. Working memory graphs. In *International conference on machine learning*, pp. 6404–6414. PMLR, 2020. Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch.
Pretrained transformers as universal computation engines. *arXiv preprint arXiv:2103.05247*, 2021. Hangyu Mao, Rui Zhao, Hao Chen, Jianye Hao, Yiqun Chen, Dong Li, Junge Zhang, and Zhen Xiao. Transformer in transformer as backbone for deep reinforcement learning. *arXiv preprint arXiv:2212.14538*, 2022. Luckeciano C Melo. Transformers are meta-reinforcement learners. In *International Conference on Machine Learning*, pp. 15340–15359. PMLR, 2022. Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, and Bo Xu. Offline pre-trained multi-agent decision transformer: One big sequence model conquers all starcraftii tasks. *arXiv preprint arXiv:2112.02845*, 2021. Vincent Micheli, Eloi Alonso, and François Fleuret. Transformers are sample efficient world models. *arXiv preprint arXiv:2209.00588*, 2022. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In *International Conference on Learning Representations*, 2018. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. Awac: Accelerating online reinforcement learning with offline datasets. *arXiv preprint arXiv:2006.09359*, 2020. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. *arXiv preprint arXiv:2112.09332*, 2021. OpenAI. Gpt-4 technical report, 2023. Kei Ota, Tomoaki Oiki, Devesh Jha, Toshisada Mariyama, and Daniel Nikovski. Can increasing input dimensionality improve deep reinforcement learning? In *International Conference on Machine Learning*, pp. 7424–7433. PMLR, 2020. Kei Ota, Devesh K Jha, and Asako Kanezaki. Training larger networks for deep reinforcement learning. *arXiv preprint arXiv:2102.07920*, 2021. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *arXiv preprint arXiv:2203.02155*, 2022. Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aaron Van Den Oord, and Oriol Vinyals. Vector quantized models for planning. In *International Conference on Machine Learning*, pp. 8302–8313. PMLR, 2021. Emilio Parisotto and Ruslan Salakhutdinov. Efficient transformers in reinforcement learning using actor-learner distillation. *arXiv preprint arXiv:2104.01655*, 2021. Emilio Parisotto, Francis Song, Jack Rae, Razvan Pascanu, Caglar Gulcehre, Siddhant Jayakumar, Max Jaderberg, Raphael Lopez Kaufman, Aidan Clark, Seb Noury, et al. Stabilizing transformers for reinforcement learning. In *International conference on machine learning*, pp. 7487–7498. PMLR, 2020. Keiran Paster, Sheila McIlraith, and Jimmy Ba. You can't count on luck: Why decision transformers fail in stochastic environments. *arXiv preprint arXiv:2205.15967*, 2022. Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. Planning with large language models via corrective re-prompting. *arXiv preprint arXiv:2211.09935*, 2022.
Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. *arXiv preprint arXiv:2205.06175*, 2022. Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. Can wikipedia help offline reinforcement learning? *arXiv preprint arXiv:2201.12122*, 2022. Jan Robine, Marc Höftmann, Tobias Uelwer, and Stefan Harmeling. Transformer-based world models are happy with 100k interactions. *arXiv preprint arXiv:2303.07109*, 2023. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In *International conference on machine learning*, pp. 1312–1320. PMLR, 2015. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. *Nature*, 588(7839):604–609, 2020. Ingmar Schubert, Jingwei Zhang, Jake Bruce, Sarah Bechtle, Emilio Parisotto, Martin Riedmiller, Jost Tobias Springenberg, Arunkumar Byravan, Leonard Hasenclever, and Nicolas Heess. A generalist dynamics model for control. *arXiv preprint arXiv:2305.10912*, 2023. Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, and Pieter Abbeel. Masked world models for visual control. In *Conference on Robot Learning*, pp. 1332–1344. PMLR, 2022a. Younggyo Seo, Kimin Lee, Stephen L James, and Pieter Abbeel. Reinforcement learning with action-free pre-training from videos. In *International Conference on Machine Learning*, pp. 19561–19579. PMLR, 2022b. Nur Muhammad Mahi Shafiullah, Zichen Jeff Cui, Ariuntuya Altanzaya, and Lerrel Pinto. Behavior transformers: Cloning k modes with one stone. *arXiv preprint arXiv:2206.11251*, 2022. Jinghuan Shang, Kumara Kahatapitiya, Xiang Li, and Michael S Ryoo. Starformer: Transformer with state-action-reward representations for visual reinforcement learning. In *European Conference on Computer Vision*, pp. 462–479. Springer, 2022. Max Siebenborn, Boris Belousov, Junning Huang, and Jan Peters. How crucial is transformer in decision transformer? *arXiv preprint arXiv:2211.14655*, 2022. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. *nature*, 529(7587):484–489, 2016. Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, and Animesh Garg. D2rl: Deep dense architectures in reinforcement learning. *arXiv preprint arXiv:2010.09163*, 2020. Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning. *arXiv preprint arXiv:2206.11871*, 2022a. Charlie Snell, Sherry Yang, Justin Fu, Yi Su, and Sergey Levine. Context-aware language modeling for goal-oriented dialogue systems. *arXiv preprint arXiv:2204.10198*, 2022b. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. *arXiv preprint arXiv:2011.13456*, 2020. Jiankai Sun, De-An Huang, Bo Lu, Yun-Hui Liu, Bolei Zhou, and Animesh Garg. Plate: Visually-grounded planning with transformers in procedural tasks. *IEEE Robotics and Automation Letters*, 7(2):4924–4930, 2022.
Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. *arXiv preprint arXiv:1706.05296*, 2017. Shiro Takagi. On the effect of pre-training for transformer in different modality on offline reinforcement learning. *arXiv preprint arXiv:2211.09817*, 2022. Yujin Tang and David Ha. The sensory neuron as a transformer: Permutation-invariant neural networks for reinforcement learning. *Advances in Neural Information Processing Systems*, 34:22574–22587, 2021. Tianxin Tao, Daniele Reda, and Michiel van de Panne. Evaluating vision transformer methods for deep reinforcement learning from pixels. *arXiv preprint arXiv:2204.04905*, 2022. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey. *ACM Computing Surveys*, 55(6):1–28, 2022. Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, et al. Human-timescale adaptation in an open-ended task space. *arXiv preprint arXiv:2301.07608*, 2023. DeepMind Interactive Agents Team, Josh Abramson, Arun Ahuja, Arthur Brussee, Federico Carnevale, Mary Cassin, Felix Fischer, Petko Georgiev, Alex Goldin, Mansi Gupta, et al. Creating multimodal interactive agents with imitation and self-supervised learning. *arXiv preprint arXiv:2112.03763*, 2021. Hado Van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, and Joseph Modayil. Deep reinforcement learning and the deadly triad. *arXiv preprint arXiv:1812.02648*, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. David Venuto, Sherry Yang, Pieter Abbeel, Doina Precup, Igor Mordatch, and Ofir Nachum. Multi-environment pretraining enables transfer to action limited datasets. *arXiv preprint arXiv:2211.13337*, 2022. Siddharth Verma, Justin Fu, Mengjiao Yang, and Sergey Levine. Chai: A chatbot ai for task-oriented dialogue with offline reinforcement learning. *arXiv preprint arXiv:2204.08426*, 2022. Adam R Villaflor, Zhe Huang, Swapnil Pande, John M Dolan, and Jeff Schneider. Addressing optimism bias in sequence modeling for reinforcement learning. In *International Conference on Machine Learning*, pp. 22270–22283. PMLR, 2022. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. *arXiv preprint arXiv:2305.16291*, 2023a. Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, and Dongsheng Li. Bootstrapped transformer for offline reinforcement learning. *arXiv preprint arXiv:2206.08569*, 2022. Rundong Wang, Weixuan Wang, Xianhan Zeng, Liang Wang, Zhenjie Lian, Yiming Gao, Feiyu Liu, Siqin Li, Xianliang Wang, Qiang Fu, et al. Multi-agent multi-game entity transformer. 2023b. Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang.
Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. *arXiv preprint arXiv:2302.01560*, 2023c. Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In *International conference on machine learning*, pp. 1995–2003. PMLR, 2016. Muning Wen, Jakub Grudzien Kuba, Runji Lin, Weinan Zhang, Ying Wen, Jun Wang, and Yaodong Yang. Multi-agent reinforcement learning is a sequence modeling problem. *arXiv preprint arXiv:2205.14953*, 2022. Yue Wu, So Yeon Min, Shrimai Prabhumoye, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, and Yuanzhi Li. Spring: Gpt-4 out-performs rl algorithms by studying papers and reasoning. *arXiv preprint arXiv:2305.15486*, 2023. Zhihui Xie, Zichuan Lin, Junyou Li, Shuai Li, and Deheng Ye. Pretraining in deep reinforcement learning: A survey. *arXiv preprint arXiv:2211.03959*, 2022. Mengdi Xu, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua Tenenbaum, and Chuang Gan. Prompting decision transformer for few-shot policy generalization. In *International Conference on Machine Learning*, pp. 24631–24645. PMLR, 2022. Taku Yamagata, Ahmed Khalil, and Raul Santos-Rodriguez. Q-learning decision transformer: Leveraging dynamic programming for conditional sequence modelling in offline rl. *arXiv preprint arXiv:2209.03993*, 2022. Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. Dichotomy of control: Separating what you can control from what you cannot. *arXiv preprint arXiv:2210.13435*, 2022a. Mengjiao Sherry Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. Chain of thought imitation with procedure cloning. *Advances in Neural Information Processing Systems*, 35:36366–36381, 2022b. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. *arXiv preprint arXiv:2210.03629*, 2022. Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, et al. Towards playing full moba games with deep reinforcement learning. *Advances in Neural Information Processing Systems*, 33:621–632, 2020a. Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. Mastering complex control in moba games with deep reinforcement learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 6672–6679, 2020b. Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, and Yi Wu. The surprising effectiveness of ppo in cooperative, multi-agent games. *arXiv preprint arXiv:2103.01955*, 2021a. Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. Combo: Conservative offline model-based policy optimization. *Advances in neural information processing systems*, 34:28954–28967, 2021b. Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, et al. Deep reinforcement learning with relational inductive biases. In *International conference on learning representations*, 2018. Qinqing Zheng, Amy Zhang, and Aditya Grover. Online decision transformer. *arXiv preprint arXiv:2202.05607*, 2022. Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting.
In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11106–11115, 2021. Baiting Zhu, Meihua Dang, and Aditya Grover. Scaling pareto-efficient decision making via offline multi-objective rl. *arXiv preprint arXiv:2305.00567*, 2023. Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. Long-short transformer: Efficient transformers for language and vision. *Advances in neural information processing systems*, 34:17723–17736, 2021.
Review 1:

Summary:
Transformers have emerged as a general and powerful model for learning from high-dimensional, rich, and sequential data that has seen success across a variety of domains of ML, including NLP and Computer Vision. This paper surveys the use of Transformers in Reinforcement Learning, categorizing uses of transformers into the following:
1. as one component of a more traditional RL agent system, e.g. as an observation encoder, especially when processing rich visual or hierarchical/abstract data; as a memory module that processes past states (as an alternative to other memory mechanisms such as RNNs); or as a world model;
2. as an "end-to-end" sequential decision maker, by treating RL as a sequence modeling problem, in the line of decision transformer and other approaches to hindsight RL;
3. as very large, generalist models that can solve many tasks and domains, especially when combined with pretrained models such as large language models

Within these three taxonomies, the authors survey a fairly comprehensive set of recent work, including core ideas (e.g. RL as sequence modeling, generalist models), as well as additional work attempting to build on and generalize such ideas. They conclude with several promising directions for future research.

Strengths and Weaknesses:

# Strengths
- This area is clearly of interest to many in the TMLR audience, as transformers are increasingly the go-to modeling architecture of choice across a variety of ML domains, including RL. This paper should serve as a reasonably recent and comprehensive reference for practitioners
- I largely agree with the authors' taxonomy of uses of various Transformer architectures, though I disagree with the framing (see Weaknesses)
- Good survey of future directions and outstanding challenges in the field.

# Weaknesses
## Framing
I feel conflicted about the paper—on the one hand, Transformers have upended RL (and every other ML subfield) and so it's obviously important for people to understand how they're used in RL, and towards that end this paper does a good job at surveying the recent work and latest advancements (it may have missed a few papers, since it's hard to catch every paper that gets released in this field, but it feels reasonably comprehensive). In providing a reasonably comprehensive overview of the recent literature, this paper does a good job.

On the other hand, I find the framing of the paper a bit reductive, as it simplifies and combines several distinct research advances and thrusts under the generic umbrella of "use Transformers for RL". Specifically, the authors identify three "pillars" of recent advances in RL:
1. Using Transformers to model high-dimensional, abstract sequential and/or graph-structured data as components in more "traditional" RL systems.
2. The view of reinforcement learning as sequential decision making + the hindsight information matching framework. Note that this advance is *not* inherent to Transformers, though obviously the first work pushing this view indeed used a Transformer as its modeling architecture of choice.
3. The gradual trend towards large-scale, generalist, pretrained ("foundation") models that learn to transfer and share parameters across a variety of tasks for improved performance, and the use of already successful pretrained models (such as large language models) as backbones for RL systems.
Note that this advance is *also* not specific to Transformer models, though obviously the overwhelming majority of currently successful foundation models are Transformer-based.

Note that of these three thrusts, all 3 involve transformers to some degree, but arguably only one of these (the first pillar) is actually specific to the Transformer architecture. The others I do believe should be included in a Transformers for RL review paper, **however** I find the paper takes a bit of a reductive framing by describing everything as Transformer-first. For example, section 4.3 is described as "using Transformers for sequential decision making", and is framed primarily as "let's use Transformers to solve a SDM problem." OTOH I believe the framing should be something like: "A recent advance in RL has been viewing RL as sequence modeling...Transformers are a particularly useful architecture for this approach because they are particularly good sequence models." Similarly, I feel like Section 4.4 is less about "Transformers for generalist agents" and more about "Transformers as (one choice) of a model architecture, towards the general trend of large-scale pretrained models and transfer learning for RL."

This framing can be seen in the authors' term "TransformRL". I disagree with the authors' decision to define a new name, TransformRL, to refer to the use of Transformers in RL, given the discussion on reductiveness above, and especially given how many overloaded and confusing terms the fields of RL/ML/DL have already. In my opinion the term conflates the field (RL) with current tools that we find useful to use in the field (Transformers) and the recent central advances dominating the RL conversation, of which Transformers play varying degrees of importance (RL as sequence modeling, large-scale pretraining). It's possible Transformer hype will die down, yet these other ideas will persist, and I don't think that RL should currently be defined by the use of Transformers. Imagine, for example, if we had analogous terms like RNNNLP or CNNCV—I view them all as rather unnecessary.

I think to improve the framing under my view, the paper could use a more nuanced discussion at the beginning, by saying that transformers are now a workhorse in RL because they are such expressive and capable models, but they are used for very (different) reasons stemming from orthogonal advances in RL, including (1) RL in high-dimensional spaces, (2) RL as a sequence modeling problem, and (3) the trend towards large-scale, pretrained foundation models.

## Could better summarize advantages and disadvantages *specific to Transformers* in RL
I find the paper could be more detailed and nuanced when describing the reasons why one might specifically use Transformers over other models in RL, and what features of problems make Transformers a particularly good choice. Most of the sections of the paper basically show that Transformers have seen success when slotting them into basically every possible part of the RL problem: when processing observations (section 4.1.1), memory (section 4.1.2), world models (section 4.1.3). The authors catalog dozens upon dozens of apparent success stories where Transformers successfully replace existing deep architectures for RL. What should the takeaway be for readers of this paper? Is it that one should now use Transformers basically unilaterally, in every possible component of an RL model?
The paper could emphasize *particular problems* of an RL task that make Transformers a particularly suitable solution. For example, in Section 4.1, it seems like Transformers have seen success in processing "complex information", including multi-agent representations (zambaldi et al 2018), morphological domain knowledge (kurin 2020), "sensory sequences" (tang and ha), multimodal information (team 2021), and a variety of vision settings. What is the overall recommendation or understanding from the authors, after surveying this work? Is it that one should no longer use other, more domain-specific architectures such as CNNs and RNNs? Does it seem like Transformers should be the go-to architecture for representation learning in RL? Are there any commonalities among domains where Transformers consistently outperform or underperform, or is more empirical work needed?

I imagine that this work shows that Transformers have the most success in RL when applied to the same kinds of representation learning problems as the supervised learning problems that Transformers are so good at. Assuming this is the case, the authors could more clearly illustrate this by unifying the case studies in this section, and tying it back to the general advantages of transformers as mentioned e.g. in the 3rd paragraph of the intro (long dependencies, scalability).

Similarly, why should one use a Transformer instead of a more traditional memory mechanism such as an RNN/LSTM in RL agents (section 4.1.2)? Should it only be for environments with extremely long-distance dependencies? How should one think about the tradeoffs of (1) increased sequence modeling capacity and (2) larger model capacity? There is some description of the relevant tradeoffs in the final paragraph, but the writing could try to better aggregate and summarize the trends in each subarea here.

---

The paper also lacks nuance in describing the current outstanding challenges in using Transformers in RL. For example, end of page 1/beginning of page 2, and Section 3.2 describe "unique challenges" for Transformers in RL:
1. Sensitivity to architecture selection
2. Sensitivity to design choices (e.g. hyperparameters)
3. Large memory footprints and high latency

In my view, only (3) is a challenge that is arguably somewhat unique to Transformers, especially the large-scale transformers that have seen success across vision, NLP, and RL tasks. (1) and (2) seem like general issues across the entire field of RL.

## Minor
- I don't think "foundation model" is used correctly e.g. bottom of page 1, first sentence of section 4—I don't think a particular architecture can be categorized as a "foundation model" since you can still have small domain-specific transformers. Whether something is a "foundation model" is primarily determined by the model capabilities and usage intent rather than the particular architecture
- I don't think Transformers are strictly sequence models, since sequence implies linear order and one can apply transformers to various richly ordered or unordered representation learning problems. However section 4.1 seems to cast Transformers as such. For example it's not clear to me that all of the uses of Transformers in section 4.1.1 are **sequence encoding** problems—e.g. the multi-entity StarCraft modeling problem is not necessarily a sequence modeling problem but rather a set representation learning problem.
- End of section 4.2 "The Transformer-based world model can help alleviate the prediction error on longer sequences." Can authors elaborate?
Is this an empirical finding from the Janner paper or just a general statement about improved sequence modeling capacity from transformers?
- I think in every review paper it's difficult to avoid the "laundry list paragraph of several different papers" trope, but minimizing this to whatever extent possible is ideal. Section 4.3.3 sticks out as a particular "laundry list" paragraph that outlines a dozen or so advances in Transformer architectures without really unifying or describing the main shortcomings people are trying to work around, and the main trends in this research. There are also a lot of terms that are not fully explained and digestible to the average reader (e.g. CVAE, "drop-span embedding", etc)

## Overall
I don't think any of the weaknesses I described above are blockers to publication, and co-reviewers may not see as much issue with the framing as I do, and I am happy to be overridden. However in my opinion there is some additional work that would elevate this paper beyond the typical review paper in providing a more nuanced and unified view of the various new advances that have defined modern (2020+) RL research, and how precisely the Transformer architecture plays a role in all of this.

Requested Changes:
As I described above, I would be happy to be overridden if my co-reviewers believe the paper is fine as is, but I think to be an excellent review paper, the paper could (1) slightly change the wording to be more clear about the different roles that Transformers play across different recent advances in RL, and (2) more clearly explain the advantages and shortcomings of the Transformer architecture specifically (rather than listing challenges specific to RL as a whole), and how recent work (e.g. section 4.3.3) is trying to get around such limitations.

Broader Impact Concerns: None

==================================================

Review 2:

Summary:
This paper is a survey on the use of Transformers in Deep Reinforcement Learning. After discussing challenges of the use of such architectures (notably related to their need for massive training data and their latency), the authors present works on different uses of Transformers, namely for representation learning in RL, model learning, and as the actor of the learned policy. The latter is the most developed, mainly in the scope of offline learning. Finally the authors discuss advantages of Transformers for learning generalist agents and give some insights about their view on important future works in that scope.

Strengths and Weaknesses:

Strengths:
- many important recent works are listed
- the paper makes the reader question the challenges and potential of these architectures
- some interesting discussions

Weaknesses:
- No experiments to support discussions
- Not enough details in many places. It would have been very useful to choose one important approach in each section, to be detailed, or at least a bit formalized (e.g., GDT, SPLT, SwitchTT, BooT, etc). It would greatly help the reader to organize the given information, which is a bit dense in the current form, and focus on main aspects of the discussion (I was a bit lost in the textual flow; some illustrations and equations usually help to clarify).
- Sometimes a lack of structure. Giving bold titles to paragraphs could help gather the main idea of each. Section 4.4.1 is an example of a section that would greatly benefit from a more highlighted structure.
- Difficult to understand why challenges given in the paper are specific to transformers.
The need for massive training data and latency are well-known challenges related to any deep architecture (including e.g. RNNs)
- The survey devotes a very large part to offline learning, specifically following decision transformer (and GDT) principles. While it is true that Transformers are mainly used in that setting, it would have been very useful to discuss further the reasons that limit their applicability in online RL (for instance by giving return-to-go as input as DT does for offline data). Section 4.3.4 is really too short on this, please develop.

Requested Changes:
- I would really appreciate it if authors could give more details at least about the following approaches: GDT, SPLT, SwitchTT, BooT, giving some formalized components of their principles.
- A trending emerging architecture is decision diffusers for RL tasks. It would have been useful if authors could discuss pros and cons of transformers w.r.t. these new competitors.
- develop section 4.3.4
- I have difficulty understanding the core differences between the first two paragraphs given as perspectives in 5. For me they are very close...
- Authors discuss in two different places the "deadly triad problem" that hinders the use of Transformers for online learning of RL policies. In the first para of perspectives (section 5), it is argued that self-supervised learning of sequences is thus better. I would like to point out that the policy gradient paradigm is fully equivalent to such a log-likelihood maximization when all rewards are set to 1 in every trajectory. For me, apart from the on-policy exploration process of policy gradients, there is no real difference between both paradigms. So this does not look very "new". Could authors elaborate on this? By the way, policy gradient is fully ignored in the survey...
- The last perspective paragraph (RL for transformers) should be removed: while it suggests that RL could be used to learn better transformer architectures, the discussion is about the use of Transformers for sequence modelling, which has no difference with imitation learning (and RLHF is simply RL finetuning as it is done for any other decision-making problem; I cannot understand why authors treat those tasks differently here).

Broader Impact Concerns: .

==================================================

Review 3:

Summary:
Main contributions:
- This survey provides a taxonomy of different applications of the transformer architectures in RL, including transformer as state encoder, as world model, or as decision maker (in the decision-transformer style).
- The authors also reviewed works that apply transformer models to train generalist agents, either from scratch or by fine-tuning or prompting models.
- The authors discussed future perspectives and open problems.

Strengths and Weaknesses:

Strengths
- This is a timely survey for an emerging and promising research area. The works mentioned in the paper are comprehensive and representative, and cover all the main areas (as far as I know) in this field.
- The authors provide a brief historical perspective on the development of model architectures in RL, and discussed challenges in applying transformer in the RL domain, e.g. the problem of non-stationary training data. This sets up an adequate context for the main discussion.

Weaknesses (For each bullet, please see "requested changes" for details.)
- There are a few inaccurate descriptions of cited papers.
- A few claims are not well supported or need rephrasing.
- A few missing citations.
Requested Changes:
I only have a few non-critical requests:

Inaccurate descriptions of cited papers:
- In section 3.2, "Emmons et al. (2021) identify that carefully choosing ... are crucial for the performance of DRL agents." That paper only discusses offline RL.
- In section 5, under "Transformer Structure Tailored for Decision-making Problems": Zhou et al., 2021 is not an NLP paper. If the authors want to mention efficient transformer variants, Tay et al., 2022 (already cited) should have pointers to representative works.

Questionable claims
- In the introduction, "the training objective of RL agents is typically a function of the current policy, which induces nonstationarity during learning a Transformer". The non-stationarity is not caused by using the policy in the training objective. It's caused by the training data being collected using an ever-changing policy.
- Section 4.2, "The Transformer-based world model can help alleviate the prediction error on longer sequences." Please add citations to support this claim.

A few missing citations
- When discussing architectures in DRL, please consider citing IMPALA [1]
- Deep Transformer Q-Networks [2] uses a transformer with observation history as input, and the Bellman loss is trained for multiple time-steps together.
- [3] discusses applying ViT to RL

Typos:
- Section 4.4.2: "directly applying large language models for execution is inefficient or even feasible". feasible -> infeasible or unfeasible.

Additional suggestion:
- In the introduction and section 3.2, the authors discuss a few challenges of adopting transformer to RL problems. I think another possible reason is that transformer models typically need much more training data to be effective compared to models with strong inductive biases (CNN, RNN). In traditional DRL, sample efficiency is a key benchmark, which limits how much data the agent can collect to fit the model. It might be helpful to discuss this aspect as well.

[1] IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. Espeholt et al., 2018
[2] Deep transformer q-networks for partially observable reinforcement learning. Esslinger et al., 2022
[3] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation. Hansen et al., 2021

Broader Impact Concerns: No concerns

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The reviewers all left a number of requested changes to particular points, and the framing in general. I see the authors have clarified in the responses, and I would expect them to update the paper accordingly before the final version is submitted.

==================================================
# The Impact Of Syntactic And Semantic Proximity On Machine Translation With Back-Translation

Anonymous authors
Paper under double-blind review

## Abstract

Unsupervised on-the-fly back-translation, in conjunction with multilingual pretraining, is the dominant method for unsupervised neural machine translation. Theoretically, however, the method should not work in general. We therefore conduct controlled experiments with artificial languages to determine what properties of languages make back-translation an effective training method, covering lexical, syntactic, and semantic properties. We find, contrary to popular belief, that (i) parallel word frequency distributions, (ii) partially shared vocabulary, and (iii) similar syntactic structure across languages are not sufficient to explain the success of back-translation. We show however that even crude semantic signal (similar lexical fields across languages) does improve alignment of two languages through back-translation. We conjecture that rich semantic dependencies, parallel across languages, are at the root of the success of unsupervised methods based on back-translation. Overall, the success of unsupervised machine translation was far from being analytically guaranteed. Instead, it is another proof that languages of the world share deep similarities, and we hope to show how to identify which of these similarities can serve the development of unsupervised, cross-linguistic tools.

## 1 Machine Translation And The Role Of Back-Translation To Eliminate Supervision

Supervised training for neural machine translation (NMT) requires a vast amount of parallel data (Sutskever et al., 2014; Wu et al., 2016; Vaswani et al., 2017). Creating the necessary datasets aligned across languages is a complex, onerous and sometimes impossible task. In this context, unsupervised back-translation, and in particular on-the-fly back-translation, becomes highly valuable, as it allows unsupervised training from independent monolingual corpora (Sennrich et al., 2016; Lample et al., 2018a; Guzmán et al., 2019; Haddow et al., 2022).

Back-translation works by using a translation model from one language (L1) to another (L2) to synthetically generate data in the following way: a given text $x$ in L1 is passed in and a hypothetical translation $\tilde{y}$ in L2 is generated. This pair is then treated as if it were a 'gold' translation $(\tilde{y}, x)$ to train an L2-to-L1 system. For iterative and on-the-fly back-translation, the whole process is repeated in the other direction, and iterated, so that the models improve as they are generating the data. One can think of this process, in essence, as that of an auto-encoder of one of the languages, with the hidden space being the other language: the model is trained to translate a sentence from language 1 into language 2 and back into language 1, success being attained if the original and final sentences from language 1 match.

Back-translation was originally used as data augmentation, i.e. part of the training was also done on a parallel corpus (Sennrich et al., 2016). In the more recent literature some models are fully trained via back-translation in an unsupervised way on top of other unsupervised objectives such as denoising auto-encoding or language modeling, and obtain decent BLEU scores, with significant improvements for low-resource languages (Lample & Conneau, 2019; Liu et al., 2020, i.a.). Here, then, success is obtained without any training signal as to how the two languages are aligned.
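
To make the round trip concrete, here is a minimal sketch of one on-the-fly back-translation step in the L2→L1→L2 direction, assuming two hypothetical PyTorch seq2seq modules with a `generate` method and a teacher-forced forward pass; all names and interfaces here are illustrative, not the actual training code:

```python
import torch

def backtranslation_step(y_l2, model_l2_to_l1, model_l1_to_l2, loss_fn, optimizer):
    """One on-the-fly back-translation step in the L2 -> L1 -> L2 direction."""
    # 1) Generate a synthetic L1 "source" for the monolingual L2 sentence.
    #    Generation is done without gradients: the pair is treated as fixed data.
    with torch.no_grad():
        x_tilde = model_l2_to_l1.generate(y_l2)

    # 2) Train the reverse model on the synthetic pair (x_tilde, y):
    #    translating x_tilde back into L2 should reproduce the original y.
    logits = model_l1_to_l2(src=x_tilde, tgt=y_l2)  # teacher-forced decoding
    loss = loss_fn(logits.view(-1, logits.size(-1)), y_l2.view(-1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The symmetric L1→L2→L1 step is identical with the roles of the two models swapped; interleaving both directions with a denoising objective gives the full unsupervised objective formalized in Section 3.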
This empirical success, however, is puzzling: without explicit information about how to align the two languages, how does the method succeed? To our knowledge, there is no systematic understanding as to why this type of training succeeds. In this paper, we first discuss why the success of back-translation is puzzling in general (Section 3). We conjecture, as many do, that it works because, despite surface differences, natural languages have rich and similar structures that can promote alignment (Lample & Conneau, 2019; Wu & Dredze, 2019; Conneau et al., 2020b). We then describe our shared experimental setup (Section 4), which uses artificial languages to systematically manipulate various similarities and differences between languages. We then report a series of experiments which analyze which properties drive the observed empirical success of back-translation (Sections 5–10). Our systematic experiments suggest that shared syntactic and very simple semantic structure does not suffice for back-translation to yield quality unsupervised NMT (UNMT) systems. Hence, shared complex semantic dependencies appear to be crucial.

Our contributions are: (i) an analysis of why back-translation should not work in general, (ii) the use of artificial languages to conduct systematic experiments on factors driving its empirical success, and (iii) the elimination of reasons (often claimed to be behind its success) such as close syntactic structure, word frequency, anchor-points or even a semantic structure of lexical fields.

## 2 Related Work

Prior to unsupervised translation, several works focused on bilingual dictionary induction using little or no parallel data. Mikolov et al. (2013) does so by relying on a distributed word representation of large monolingual corpora as well as a small supervised corpus. Klementiev et al. (2012); Irvine & Callison-Burch (2016) develop a phrase-based statistical model for phrasal and lexical translation. More recently, Lample et al. (2018b) allows dictionary inference without any parallel data.

Bojar & Tamchyna (2011) used *reverse self-training* to train a model in a statistical machine translation (SMT) framework. The idea is to use a small parallel corpus to train a machine translation (MT) system from target to source and use it to translate a large monolingual target-side corpus to create synthetic pairs to train a source-to-target SMT model. This is the root of back-translation (BT) as a data augmentation method. This is applied to NMT by Sennrich et al. (2016), who train two models iteratively, source to target and target to source. The idea that translation from one language to another has a dual task, translation in the reverse direction, which can be learned jointly to improve efficiency, is used in several works (Xia et al., 2017; Cheng et al., 2016; He et al., 2016; Wang et al., 2019; Hoang et al., 2018; Niu et al., 2018). For example, Cotterell & Kreutzer (2018) use the formalism of a wake-sleep algorithm to propose an Iterative Back-Translation method, i.e. the synthetic data is regenerated after each model training. However, in all these works training is still partially supervised and back-translation acts as data augmentation.

Artetxe et al. (2018) introduce a fully Unsupervised Neural Machine Translation (UNMT) method. It uses on-the-fly BT (OTF-BT), meaning that synthetic sentences are generated as the model trains. They also add a denoising autoencoding objective.
This paper lays the foundations for the following BT-based architectures and models. For example, Lample et al. (2018a) use the same method and leverage the intuition that the same sentence, regardless of the language, will be mapped onto the same embedding vector by encoders, and that to translate the sentence it is then sufficient to decode it into the desired language. They enforce this behavior using an additional adversarial training objective. Other works remove this adversarial objective while maintaining performance (Lample et al., 2018c; Yang et al., 2018), suggesting that this embedding space overlap emerges naturally. Following on from this line of work, several papers formulate the idea that the quality of the cross-lingual embeddings used to initialize BT is key and therefore add a language modeling objective as pre-training (Lample et al., 2018a; Edunov et al., 2019). This method works very well and has become the standard approach in UNMT. For example, Liu et al. (2020) introduces mBART for UNMT while Zan et al. (2022) studies its benefits in depth.

Numerous papers propose modifications to BT to improve its performance, particularly in a low-resource setup (Kumari et al., 2021; Kim et al., 2019; Pourdamghani et al., 2019; Dou et al., 2020; Song et al., 2019). In particular, Filtered BT (Khatri & Bhattacharyya, 2020; Imankulova et al., 2017) filters synthetic sentences based on their quality, using a round-trip sentence similarity metric, while Tagged BT (Caswell et al., 2019) prepends a tag in front of synthetic sentences to indicate to the model which are from the parallel corpus and which are not.

Many studies examine the reasons for the success of BT for UNMT (Edunov et al., 2018; Poncelas et al., 2018; Kim et al., 2019; Edunov et al., 2020; Conneau et al., 2020b). In particular, some focus on the properties of languages, pursuing the thought that BT and multilingual models work due to the similar structures of natural languages. K et al. (2019) studies the impact of several factors on the cross-lingual abilities of mBERT, including the linguistic properties of languages, such as token overlap between vocabularies, or word order. They show that the former has very little effect, but that the latter is fundamental. Dufter & Schütze (2020) takes up the same kind of work. Finally, Kim et al. (2020) shows that UNMT performance drops drastically when languages or domains are very different. It also shows that BT cannot recover from a bad initialization.

Our work continues this line of research, but differs in that we use artificial languages to maintain perfect control over our experiments and to avoid confounding factors as much as possible. This allows us to investigate precise syntactic and semantic effects on OTF-BT. We also use these artificial languages to demonstrate the inaccuracy of the usual assumptions concerning the success of OTF-BT (e.g. word frequencies, anchor points, syntax, etc.). To the best of our knowledge, this has not been done in previous works. It is hoped that this work will lead to greater intuition about BT, enabling researchers to refine the method and produce better results in the future. Several other works also use artificial languages on related tasks (Arora et al., 2016; White & Cotterell, 2021; Chiang & Lee, 2021), but none apply their methodology to UNMT.
Finally, it should be noted that the model used and studied in this work (Lample et al., 2018c) is no longer state of the art, but uses OTF-BT almost exclusively, unlike more recent models which rely heavily on massive pre-training, making it extremely relevant to our work.

## 3 Back-Translation Is Necessary, But Not Sufficient In General

A fully unsupervised training with back-translation works as follows. The model consists of classical components of a translation system, as seen in Figure 1. It includes encoders $e_i$ from language $L_i$ into a hidden space, and decoders $d_i$ from this hidden space into $L_i$. These encoders and decoders could be separate models or the same multilingual model conditioned on a language ID token (Conneau et al., 2020a; Liu et al., 2020). The composition of an encoder $e_i$ and decoder $d_j$ from different languages provides a translation function $T_{ij}$.

![2_image_0.png](2_image_0.png)

Figure 1: The components of a classical translation pipeline. Translation functions $T_{12}$ and $T_{21}$ are obtained from the composition of encoders and decoders from different languages.

These components are first trained through a denoising auto-encoding, monolingual regime, whereby $d_i \circ e_i \circ g$, with $g$ a noise function, should lead to the identity for each language. Then suppose $y$ is an element of the language $L_2$. We can translate $y$ into $L_1$ as $\tilde{x} = d_1 \circ e_2(y) := T_{21}(y)$. This creates a synthetic pair $(\tilde{x}, y)$ in $L_1 \times L_2$. This pair, which was created by the model itself, is now used as a supervised datum for translation. That is, we check that the translation (now backwards) from $\tilde{x}$ in $L_1$ to $L_2$ is coherent: $\tilde{y} = d_2 \circ e_1(\tilde{x}) := T_{12}(\tilde{x})$ should lead back to $y$. With a loss function $\mathcal{L}$ (e.g., cross entropy), this process can be summarized as:

$$\begin{aligned}
\min_{\theta_{e_i,d_i}} \; & \sum_{y \in L_2} \mathcal{L}\big(y,\; d_2 \circ e_1 \circ \textcolor{gray}{d_1 \circ e_2}(y)\big) \\
+ \; & \sum_{x \in L_1} \mathcal{L}\big(x,\; d_1 \circ e_2 \circ \textcolor{gray}{d_2 \circ e_1}(x)\big) \\
+ \; & \sum_{j \in \{1,2\}} \sum_{z \in L_j} \mathcal{L}\big(z,\; d_j \circ e_j \circ g(z)\big)
\end{aligned} \tag{1}$$

where $\theta_{e_i,d_i}$ are the parameters of the encoders/decoders. The gray components in the first two lines reflect 'frozen' parts of the pipeline: in practice we sample from those $d_i$ and then pass those generations forward, as previously described. Hence the formula is a slight simplification. In practice, the denoising autoencoding objective (or similar; see third line in (1)) often happens *before* the back-translation objective is used; we have included them as one loss because the present arguments do not depend on this choice.

With this framework in place, we can now explain why back-translation is a necessary objective, but not a sufficient one. The back-translation objective (1) could be fully met, without translation being accurate at all. We illustrate this visually in Figure 2, with two extreme cases.

![3_image_0.png](3_image_0.png)

(a) Accurate translation (b) Outside of language translation (c) Shuffled translation

Figure 2: Schematic representation of three cases in which the back-translation objective (1) is fully met: (a) with an accurate translation, (b) with a translation missing the target language entirely, but being sent back on the original language appropriately, (c) with a translation that bijectively shuffles the target language, and a reverse translation that unshuffles it back in place. We ignore here the hidden encoding space, and write → for $T_{12}$ and ← for $T_{21}$.

First, consider Figure 2b.
It could be that the "translation" $T_{12}$ from $L_1$ to $L_2$ projects $L_1$ into a space $X$ outside of the actual $L_2$ (so translation will be completely off), but that the back-translation would project this space $X$ back into $L_1$ appropriately. Second, consider Figure 2c. It could be that the space $X$ overlaps well with $L_2$, but that it is a shuffled version of it, in such a way that translation will be completely off again. Yet, the back-translation objective will be fully met, as long as the translation un-shuffles it back in place onto $L_1$. More generally, any combination of $e_i$ and $d_i$ such that $T_{12} = T_{21}^{-1}$ verifies the back-translation component of (1); this essentially requires that the translation functions $T_{ij}$ be inverse bijections, but such bijections do not need to correspond to accurate translation functions. The auto-encoding component of (1) does not suffice *a priori* to overcome this problem.

These simple counter-examples show how back-translation could fail. In practice, however, back-translation has achieved significant empirical success (Lample et al., 2018a; Lample & Conneau, 2019; Liu et al., 2020; Song et al., 2019, i.a.). Back-translation must work because natural languages look like one another, i.e. structure in the data makes it so that the proper alignments between $L_1$ and $L_2$ are easier to discover than the external ones or shuffled ones. But what structure in the data? The intuition often put forward is that back-translation works because, e.g., frequent words are better mapped onto frequent words, or because systematic word orders will anchor the mappings. In the rest of this paper we investigate such hypotheses in more detail.

## 4 Experimental Setup

We investigate which language properties make back-translation work or fail. To do this, we tested back-translation between artificial languages for which we can freely manipulate lexical, syntactic and semantic properties. Code and data are available at [URL redacted for anonymous review].

## 4.1 The Context Free Languages

The grammars of our artificial languages vary somewhat similarly to what is assumed for natural languages. We used the simple Context Free Grammars (CFG) introduced by White & Cotterell (2021), which are parametrized by 'switches': the order in which constituents surface. Table 1 presents the rules that are switchable in these grammars. We will denote a grammar by a sequence of six binary values, corresponding to the switches.

| Switch | 0 | 1 |
|--------|---|---|
| S | S → NP VP | S → VP NP |
| VP | VP → NP VP | VP → VP NP |
| Comp | SComp → S Comp | SComp → Comp S |
| PP | NP → PP NP; PP → NP Prep | NP → NP PP; PP → Prep NP |
| NP | NP → Adj NP | NP → NP Adj |
| Rel | NP → VP Rel Noun | NP → Noun Rel VP |

Table 1: Rules that are switchable in the grammar; columns give the rules for each switch value. Table from White & Cotterell (2021).

At the lexical level, 1,374 words were created, with a plausible English morphology, and distributed in several Parts-of-Speech (POS), based on the work of Kharkwal (2014). Picking a set of switches (i.e. a grammar) and a set of such created words thus defines one artificial language. A full translation between two such simple languages can be reconstructed with a one-to-one mapping at the lexical level, and knowledge of the relevant switches. We provide some examples of sentences generated by such artificial languages in Table 2, illustrating the impact of the vocabulary changes and the grammatical switches.
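
To illustrate how a switch string defines a language, here is a minimal sketch of a switch-parametrized generator covering only the S, VP and NP switches of Table 1; the toy vocabulary, the recursion cutoff and all function names are ours, not the actual generation code or the full 1,374-word lexicon:

```python
import random

def make_grammar(switch_string):
    """Build a toy fragment of the switchable CFG (S, VP and NP switches only)."""
    s, vp, _, _, np, _ = (int(c) for c in switch_string)  # six switches, as in Table 1
    return {
        "S":  [["NP", "VP"] if s == 0 else ["VP", "NP"]],
        # Non-recursive expansion first, so deep derivations can fall back to it.
        "VP": [["Verb"], ["NP", "VP"] if vp == 0 else ["VP", "NP"]],
        "NP": [["Noun"], ["Adj", "NP"] if np == 0 else ["NP", "Adj"]],
        "Noun": [["burse"], ["autoners"]],
        "Verb": [["lurchifies"], ["rolveda"]],
        "Adj":  [["prask"], ["wourk"]],
    }

def generate(grammar, symbol="S", depth=0):
    if symbol not in grammar:                 # terminal word
        return [symbol]
    # Beyond a small depth, only use the first (non-recursive) expansion.
    options = grammar[symbol] if depth < 3 else grammar[symbol][:1]
    out = []
    for child in random.choice(options):
        out.extend(generate(grammar, child, depth + 1))
    return out

print(" ".join(generate(make_grammar("000000"))))  # e.g. "prask burse lurchifies"
print(" ".join(generate(make_grammar("100010"))))  # same structures, switched orders
```

A language in this sense is then a (grammar, lexicon) pair: swapping the word lists while keeping the switches gives a lexically distinct language with identical syntax, which is exactly the kind of manipulation the following experiments exploit.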
## 4.2 Training

**Model** We used the exact model introduced by Lample et al. (2018c) for NMT. The architecture is based on 4 Transformer encoder layers and 4 Transformer decoder layers. The last 3 encoder layers and the first 3 decoder layers are shared across languages. The embedding size is 512, and the vocabulary, of size 1,000, was shared and built using Byte Pair Encoding. We use this model because of its high performance (state of the art at the time of publication) and because it allows us to isolate the study of OTF-BT.

**Data** For training purposes, we generated two sets of 100,000 sentence structures. All unsupervised training sets are made of (i) one of these sets of sentence structures for the 100,000 sentences in one language (using the language-specific grammar and lexicon), and (ii) the other set to create 100,000 sentences in the other language. Hence, the data are neither labeled for supervision, nor are they even unlabelled translations of one another in principle. The test and validation sets are each composed of 10,000 parallel sentence pairs. These are generated in one language, and then transformed into the second language using the known grammar switches and lexical translations.

| Language | Examples |
|--------------------|----------------------------------|
| 000000 POS | NounS Subj VerbCompPresS. |
| 000000 (Lexicon 0) | burse sub lurchifies. |
| 100000 (Lexicon 0) | lurchifies burse sub. |
| 100010 (Lexicon 0) | lurchifies burse sub. |
| 000000 (Lexicon 1) | swopceer bus rheleates. |
| 000000 POS | IVerbPastP Adj NounP Subj. |
| 000000 (Lexicon 0) | rolveda prask autoners sub. |
| 100000 (Lexicon 0) | prask autoners sub rolveda. |
| 100010 (Lexicon 0) | autoners sub prask rolveda. |
| 000000 (Lexicon 1) | knyfeateda wourk krarfteers bus. |

Table 2: Examples of sentences generated by our artificial languages. The POS row gives the underlying structure of the sentence in each group (colors in the following rows trace these Parts-of-Speech). When relevant, constituents swapped compared to the grammar from the row above are underlined. The grammars and switches are described in Table 1.

**Procedure** At the start of training, a FastText algorithm is applied to initialize the embeddings (Bojanowski et al., 2017). We then train the translation model for 40 epochs, with a batch size of 16 and an Adam optimizer with a $10^{-4}$ learning rate. Those hyper-parameters were chosen based on Lample et al. (2018c) and on our computational capabilities. We trained on a Tesla M10 with 8GB memory for roughly one day.

**Objectives** The overall training objective is described in (1). First the model does a denoising auto-encoding (DAE) step and updates its parameters; then it does an on-the-fly back-translation step in both directions and updates its parameters again. The denoising function $g$ used is a combination of word substitution, local shuffling and masking. We note that DAE is here interspersed with BT, not used as a separate pretraining objective.
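For concreteness, here is a minimal sketch of such a corruption function $g$. The corruption rates and the substitution scheme are illustrative placeholders, not the exact settings of Lample et al. (2018c); the local shuffle follows the bounded-displacement trick of Lample et al. (2018a):

```python
import random

def g(tokens, p_sub=0.1, p_mask=0.1, k=3, mask="<mask>"):
    """Toy denoising corruption: word substitution, masking, and local
    shuffling. Rates are illustrative, not the paper's settings."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_sub:                          # crude stand-in for word
            out.append(random.choice(tokens))  # substitution
        elif r < p_sub + p_mask:
            out.append(mask)                   # masking
        else:
            out.append(tok)
    # Local shuffle: each token moves by fewer than k positions, in the
    # spirit of the bounded-displacement noise of Lample et al. (2018a).
    order = sorted(range(len(out)), key=lambda i: i + random.uniform(0, k))
    return [out[i] for i in order]

print(g("burse sub lurchifies prask autoners".split()))
```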
**Evaluation** In each of the following experiments, the reported results are those computed on the test set by the model that obtained the best BLEU score on the validation set. Note that the BLEU score calculated on our artificial languages is not to be compared with BLEU scores obtained on natural languages, which have much larger vocabularies and many turns of phrase that count as correct translations. However, as a point of reference, the Lample et al. (2018c) model we use obtained on WMT monolingual News Crawl a BLEU score of 24.65 on English–French pairs, and of 19.10 on English–German pairs, which was the UNMT state of the art.

## 4.3 Experiments

In the remainder of this paper, we present six experiments, each of which will answer several research questions, some of which will emerge from the results. The first experiment (§5) is very simple and acts as a sanity check: the idea is to ensure that the model is indeed capable of learning to translate these artificial languages into themselves. The second experiment (§6) measures the impact of grammar on translation performance, keeping the lexicon equal. In this way, the translation function is only concerned with the change in grammar from one language to another. The third experiment (§7) adds the difficulty of a new lexicon. We show that this difficulty completely undermines the success of the machine translation system. This, of course, is different from the situation with real-world languages. The following three experiments then seek to understand what additional signal is present in natural languages and is so crucial to the success of OTF-BT. Experiment four (§8) adds anchor words and identical word frequencies. Experiment five (§9) shows that a weak supervised signal, such as a bilingual dictionary or a small supervised dataset, restores the model's performance. Finally, experiment six (§10) adds semantic information by way of lexical fields.

## 4.4 Metrics

We use the BLEU score as a metric of accurate translation (Papineni et al., 2002), and in particular the Moses implementation, as in Lample et al. (2018c).¹ BLEU is the most popular metric and will reveal the most relevant effects. However, additional metrics will be introduced during the course of the experiments, to analyze certain effects in greater detail.

## 5 Experiment 1: Identical L1 And L2 Languages

Our first experiment demonstrates that our hyper-parameters and training pipeline work in principle, by using BT to translate between two identical languages. Note, however, that even for identical languages, back-translation is in principle not a sufficient objective. We randomly selected 8 grammars among the 64 possible ones, and picked one lexicon, to form 8 artificial languages. For each such language L1, we trained our model to learn translation between L1 and an identically defined language L2. Sampling of training sets was done independently for the two identical languages, hence the two monolingual corpora are not aligned. Table 3 reports BLEU scores on the test set in this paradigm. Training was successful: all BLEU scores are above 97.

| Grammars | BLEU |
|---------------|--------|
| 000000↔000000 | 98.76 |
| 011101↔011101 | 98.70 |
| 011111↔011111 | 99.08 |
| 000001↔000001 | 98.13 |
| 100000↔100000 | 98.83 |
| 000101↔000101 | 97.76 |
| 111111↔111111 | 97.77 |
| 111110↔111110 | 97.30 |

Table 3: BLEU scores obtained on the test set for within-grammar and within-vocabulary training. Each score is the average of the BLEU scores obtained when evaluating source-to-target and target-to-source translation.

White & Cotterell (2021) found that the choice of the grammar had an impact on language modeling success for the Transformer architecture (but not for LSTM language models).
Here as well, we found that the choice of grammar had an effect on the result, and in the same way: our within-language BLEU scores varied and were correlated with the perplexity obtained on the same languages in their language modeling task ($R^2 = 0.62$). In short, some word orders robustly lead to better performance both for language modeling and, now, for within-language translation.

## 6 Experiment 2: Effect Of Grammar

If similar structure is the key for back-translation to work, then translation between two languages should be harder if their grammars are more different (even if we stay within the variation observed between actual languages). We can operationalize this hypothesis in terms of the Hamming distance (number of differing switches) between the grammars generating two languages: we expect the BLEU score to decrease as the Hamming distance increases. In these tests, we keep the vocabulary constant across languages to focus on the effect of grammar.

¹https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl

To test this hypothesis, we used the eight random grammars of §5 and trained a translation system via back-translation for each pair, resulting in 64 BLEU scores. The complete results are in Table 4.

| | 000000 | 011101 | 011111 | 000001 | 100000 | 000101 | 111111 | 111110 |
|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 000000 | 98.8 | 64.3 | 67.8 | 73.5 | 46.8 | 68.3 | 35.5 | 36.5 |
| 011101 | 64.7 | 98.7 | 98.6 | 87.4 | 36.8 | 96.5 | 38.7 | 37.9 |
| 011111 | 67.8 | 98.7 | 99.1 | 86.2 | 36.1 | 93.9 | 39.4 | 40.4 |
| 000001 | 73.9 | 85.6 | 83.3 | 98.1 | 39.9 | 91.8 | 29.3 | 34.5 |
| 100000 | 43.9 | 33.8 | 33.7 | 33.4 | 98.8 | 31.3 | 63.1 | 76.0 |
| 000101 | 69.4 | 90.3 | 96.7 | 91.8 | 37.6 | 97.8 | 34.1 | 40.5 |
| 111111 | 41.6 | 40.8 | 41.9 | 36.1 | 67.1 | 40.5 | 97.8 | 86.6 |
| 111110 | 40.6 | 36.4 | 38.5 | 39.2 | 78.3 | 46.1 | 86.0 | 97.3 |

Table 4: BLEU scores for languages with different grammars but the same lexicon.

First, we found a correlation between the obtained BLEU scores and the Hamming distance ($R^2 = 0.35$ with a coefficient of $-8.35$, $p < 0.001$). In other words, back-translation does become less effective as the grammars differ more. Second, we used as predictors not the raw Hamming distance, but individual variables $S_i$ corresponding to each switch being different or not in the translation pair at stake, thus fitting a model of the form $\text{BLEU} \sim \sum_{i=1}^{6} \beta_i S_i$. We obtain a strong fit ($R^2 = 0.94$), and a significant effect at $p < 0.001$ for the coefficients corresponding to switch $S_1$ ($\beta_1 = -45.04$), $S_4$ ($\beta_4 = -6.36$) and $S_6$ ($\beta_6 = -11.04$). This suggests that not all switches are created equal: some have more dramatic effects than others on back-translation performance.
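This per-switch regression is easy to reproduce; below is a sketch of the fit, assuming the 8×8 BLEU matrix of Table 4 is filled into `bleu` (whether the paper's fit included an intercept is not stated, so the intercept here is an assumption):

```python
import numpy as np

grammars = ["000000", "011101", "011111", "000001",
            "100000", "000101", "111111", "111110"]

# bleu[i, j] = BLEU for the pair grammars[i] / grammars[j], as read off
# Table 4. Only the first row is filled here as an example; the remaining
# rows must be copied in before the fit is meaningful.
bleu = np.zeros((8, 8))
bleu[0] = [98.8, 64.3, 67.8, 73.5, 46.8, 68.3, 35.5, 36.5]

X, y = [], []
for i, src in enumerate(grammars):
    for j, tgt in enumerate(grammars):
        # S_k = 1 if switch k differs between the two grammars.
        X.append([int(a != b) for a, b in zip(src, tgt)])
        y.append(bleu[i, j])
X, y = np.array(X, dtype=float), np.array(y)

# Least-squares fit of BLEU ~ b0 + sum_k beta_k * S_k.
A = np.hstack([np.ones((64, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print("intercept:", coef[0], "switch coefficients:", coef[1:], "R^2:", r2)
```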
Because of this, we more systematically evaluated the effect of each individual switch. To do this, we use the 000000 grammar as the source language and vary the target language so that only one switch is activated each time: 100000, 010000, ..., 000001. Conversely, we used 111111 as the source language, deactivating each switch in turn for the target language: 011111, 101111, ..., 111110. The results are in Table 5.

| Grammars | BLEU | id baseline | Grammars | BLEU | id baseline |
|---------------|-------|----------------|---------------|-------|----------------|
| 000000↔100000 | 45.36 | 51.04 (−0.11) | 111111↔011111 | 41.25 | 48.23 (−0.17) |
| 000000↔010000 | 94.81 | 42.37 (1.24) | 111111↔101111 | 87.94 | 43.27 (1.03) |
| 000000↔001000 | 95.93 | 70.33 (0.36) | 111111↔110111 | 97.86 | 69.57 (0.41) |
| 000000↔000100 | 92.88 | 94.63 (−0.02) | 111111↔111011 | 92.59 | 93.95 (−0.01) |
| 000000↔000010 | 98.69 | 82.60 (0.20) | 111111↔111101 | 97.56 | 79.46 (0.23) |
| 000000↔000001 | 73.17 | 73.17 (0.00) | 111111↔111110 | 86.40 | 69.66 (0.24) |

Table 5: The BLEU scores show that different switches have different impacts on the translation performance, with source language **000000** (left) and **111111** (right). The **id baseline** column shows the baseline BLEU score that would be obtained by a translation system that simply copies the initial sentence and, in parentheses, the relative distance of the learned translation to this baseline: $\frac{\text{BLEU} - \text{baseline}}{\text{baseline}}$.

Different switches have different impacts on the translation performance, in a way mirroring the regression coefficients: switch 1 causes the largest drop in performance, followed by switch 6 and then switch 4. Other switches, like the fifth one, have little impact on BLEU. An intuition behind the different impacts of the switches could be as follows: the first switch governs a change that occurs near the root of the parse tree, while the fifth switch concerns a node closer to the leaves. Thus, more words move, and they move further away, in the first case than in the second. Because of this, a model that has learned the identity mapping (and therefore has not learned the translation) would get a higher BLEU score with the fifth switch than with the first.

To isolate this effect and determine whether the drops in BLEU are a result of a bias towards learning an identity translation, we computed the BLEU score of a system that would just copy the source sentence as is (and the relative distance between the BLEU score obtained by our model and this baseline). The results are shown in the **id baseline** columns of Table 5. Different (relative) effects of each switch on the translation are found again, suggesting a more subtle effect than the intuition mentioned above. In sum, grammar changes that make words move further away have more impact on the translation performance, and this is not because the identity is learned instead of a proper translation.

## 7 Experiment 3: Effect Of The Lexicon

Until now, the two languages had the same lexicon but different grammars. In this section we focus on the opposite scenario, i.e., translation between languages with an identical grammar, but different lexica. We thus built a second, parallel lexicon of 1,374 new fictitious words with an English-like morphology, using the same methods as before (see §4.1).
We followed the same training procedure on five out of the eight grammars from §5. In this context, we found that back-translation systematically failed to produce successful translation systems: the best BLEU score was below 7, as reported in Table 6. Note that the round-trip BLEU score was above 70 in all trainings. This shows that it is not the back-translation objective that has failed to be optimized, but rather that this objective is not sufficient to ensure correct translation.

| Grammars | BLEU | POS BLEU |
|-----------------|--------|------------|
| 000000 ↔ 000000 | 2.46 | 69.3 |
| 011101 ↔ 011101 | 2.81 | 62.4 |
| 000001 ↔ 000001 | 4.31 | 67.3 |
| 100000 ↔ 100000 | 5.68 | 76.2 |
| 111111 ↔ 111111 | 6.63 | 58.1 |

Table 6: BLEU scores obtained on the test set for within-grammar training but with different lexica. It is clear that the training did not work, as the highest BLEU score is below 7, demonstrating in practice that back-translation is not guaranteed to succeed. The **POS BLEU** column reports the average BLEU score computed on the Part-of-Speech sequences only. These high scores show that the failure lies in the mapping between vocabularies.

To identify the source of these errors, we computed BLEU scores for part-of-speech (POS) values instead of exact tokens. In this setting, all BLEU scores were above 60 (see Table 6). In more detail, around a third of the sentences in the test set are syntactically correctly translated. The average length of correct sentences is 6.2, compared with 11.0 for the whole test set. Correct sentences are therefore shorter on average, which is not surprising if shorter length simply means fewer opportunities for error. This relatively good syntactic performance can be partly explained by the DAE objective, which constrains the decoder to output sentences from the grammar. However, it is also clear that the model does not ignore its input when doing translation.

To further document the failure at the vocabulary level, we focused on an analysis of sentences whose POS sequence has been correctly translated; there it is possible to do word-by-word evaluation. For any word $w_s$ from the source language, we looked at the distribution of its translations by the model across its different occurrences in this restricted test set. The perfect model would always translate $w_s$ by its correct translation, and would have an entropy of 0. By comparison, a model choosing the translation at random (although within the correct POS) would have an entropy of $\log_2$ of the size of the relevant POS set. Taking singular nouns as an example, the average entropy of the model is 0.05, whereas a random model would have an entropy of 7.3. The results for the other POS are qualitatively identical (see Table 7), showing that the model did pick a single translation per word, albeit not the correct one. This points towards the shuffled translation scenario illustrated in Figure 2c for the vocabulary.

| POS | Entropy | (random) |
|---------------------|-----------|------------|
| Noun | 0.05 | (7.3) |
| Adjective | 0.05 | (5.4) |
| Transitive Verb | 0.06 | (6.4) |
| Intransitive Verb | 0.05 | (6.8) |
| Verb Complementizer | 0.03 | (4.5) |

Table 7: Entropy for each POS (in parentheses, entropy value for a random translation within the correct POS). For each POS, the results have been averaged over the different morphological variants. For example, the **Noun** row groups together Singular and Plural Nouns.
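The entropy reported in Table 7 can be computed as follows, given a list of (source word, model translation) occurrences collected from the POS-correct test sentences; the word pairs used in the example are toy placeholders:

```python
from collections import Counter
from math import log2

def translation_entropy(pairs):
    """Average over source words of the entropy of the distribution of
    their model translations; 0 means each source word is always mapped
    to a single (possibly wrong) target word."""
    by_source = {}
    for src, tgt in pairs:
        by_source.setdefault(src, []).append(tgt)
    entropies = []
    for tgts in by_source.values():
        n = len(tgts)
        entropies.append(-sum(c / n * log2(c / n)
                              for c in Counter(tgts).values()))
    return sum(entropies) / len(entropies)

# A word consistently (even if wrongly) translated has entropy 0:
print(translation_entropy([("burse", "swopceer")] * 5))                 # 0.0
print(translation_entropy([("burse", "swopceer"), ("burse", "bus")]))   # 1.0
```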
## 8 Experiment 4: Unsupervised Vocabulary Signals

Back-translation alone was unable to learn the mapping between fully distinct vocabularies. Here we model two phenomena that could help overcome this difficulty in natural settings: the presence of shared lexical items across languages, or "anchor points" (Conneau et al., 2020b), and similar word frequencies across languages (Piantadosi (2014), among many others).

| Manipulation | Exp. | BLEU score |
|-------------------|-----------|--------------|
| Lexical change | 3, §7 | 2.5 |
| (i) Anchor Points | 4a, §8.1 | 23.0 (+20.6) |
| (ii) Frequency | 4b, §8.2 | 10.1 (+ 7.6) |
| (iii) Nc = 2 | 6a, §10.1 | 9.2 (+ 6.8) |
| (iv) Nc = 2 | 6b, §10.2 | 10.7 (+ 8.3) |
| (v) Nc = 10 | 6c, §10.3 | 11.9 (+ 9.5) |

Table 8: BLEU scores obtained from back-translation, augmented with other unsupervised signals. We start from the previous experiment with a lexical change, then show results when we supplement the situation with (i) anchor points, (ii) (intra-POS) word frequencies (for the **000000** grammar and two different lexica), (iii) $N_c = 2$ equally balanced disjoint lexical fields, (iv) $N_c = 2$ unbalanced disjoint lexical fields, (v) $N_c = 10$ disjoint lexical fields. In parentheses: the gain in BLEU score compared to Experiment 3.

## 8.1 Experiment 4A: Anchor Points

In natural language corpora, a number of words are identical between languages, in particular in written form. This happens because the languages have a common root, because some words have been borrowed across languages, because of identical proper nouns, or because the languages may use similar numeral systems. These identical words between two languages may serve as anchor points that allow translation models to fit the whole mapping between two lexica. This could help prevent the rotation problem described in §3 and the disjoint lexicon problem of §7. To test this hypothesis, we ran back-translation on two languages with the **000000** grammar and lexica sharing 30% of their words. We obtained a BLEU score of 23.04. Using the word-by-word comparison method from §7 (for syntactically correct translations), we show that 92% of the words with non-zero precision and recall are among the shared words. This suggests, mirroring similar findings on multilingual language modeling due to Conneau et al. (2020b), that common words do not serve as anchors strong enough to drive the empirical success of back-translation on the *entire* vocabulary.

## 8.2 Experiment 4B: Frequency Alignment

In the experiments described thus far, all words within a given POS have equal frequency. In natural languages, however, words vary in frequency, and presumably in similar ways across languages, following a power law. This information could help a translation model learn the mapping between the two vocabularies. We hence manipulated our corpora so that the probability of appearance of words within each POS follows a power law: $P(w_n) \propto n^{-k}$ for $k = 1.1$, as they roughly do in natural languages, where $w_n$ is the $n$-th most frequent word.
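A sketch of this intra-POS Zipfian sampling is given below; the POS word list is a toy placeholder:

```python
import random

def make_zipf_sampler(words, k=1.1):
    """Return a sampler with P(w_n) proportional to n**(-k), where n is
    the rank of the word within its POS (1 = most frequent)."""
    weights = [n ** (-k) for n in range(1, len(words) + 1)]
    def sample():
        return random.choices(words, weights=weights, k=1)[0]
    return sample

singular_nouns = ["burse", "autoner", "swopceer", "krarfteer"]  # toy POS set
draw = make_zipf_sampler(singular_nouns)
print([draw() for _ in range(10)])
```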
With the **000000** grammar and two different lexica following these parallel Zipfian distributions, we obtained a BLEU score of 10.08, a slight increase compared to §7. Using again the word-by-word translation analysis, we observed no regularity in which words were appropriately translated: it is not the case that the most/least frequent words are better translated, and it is not the case that the model simply outputs the most frequent words. This is illustrated in Figure 3 with singular nouns: most words have a very low accuracy and recall, and those that do not are spread across the x-dimension (rank).

Figure 3: The red (resp. blue) line represents the accuracy (resp. recall) of the lexicon word translation. Words are ranked by their frequency in the training corpus. The dashed lines represent word counts in the test set (light blue) and in the model translation (grey). It can be seen that the model tracks actual frequencies well. Then, very few words have a non-zero score, which is consistent with a very low BLEU score. What is more surprising is that the most frequent words are not translated any better.

Thus, in the same way that anchor points did not help much (§8.1), rich and realistic word frequencies cannot be the primary driver of the empirical success of back-translation. This is disappointing, as a simple mapping of lexica based on word frequency in the corpus would yield a very good translation. Finally, the POS BLEU score for this experiment is 57, with only 22% of sentences being correctly translated syntactically (compared to 33% before). In the experiments where intra-POS frequencies were uniform, word frequencies were entirely fixed by the POS frequencies, themselves fixed by the grammar. This clustering of word frequencies based solely on syntax may explain the better syntactic scores of the previous experiments. In other words, rich word frequencies do not improve vocabulary translation, and they degrade translation at the level of syntax.

## 9 Experiment 5: Adding Supervision

One possible explanation for the failures above could be the rotation effect described in §3. Some supervised training could help avoid that risk, by anchoring at least some translations into ground truth. We test two types of supervised examples: some aligned sentence translations (as when back-translation is used not for fully unsupervised translation but for data augmentation), and the injection of a bilingual dictionary.

## 9.1 Small Aligned Dataset

Here, we use back-translation as data augmentation, as it was originally used, by training the model in a supervised way on 1,000 sentences, on top of the training with back-translation on the same 100,000 sentences as before. The training process is now as follows: first a step of denoising auto-encoding, then a supervised step with a batch from the aligned dataset, and finally a back-translation step with a batch from the unaligned dataset. At each of these steps, the model updates its parameters (a sketch of this regime is given after Table 9). Results presented in Table 9 show that performance greatly improves, with BLEU scores often above 80 and all significantly above the results without supervision in Table 6, even though the grammars are different here. As a baseline, training only with the supervised examples is not sufficient (for example, for 000000↔**000000** the BLEU score with only 1,000 parallel sentences is 31.49, not 96.03).

| Grammars | aligned sentences | bilingual dictionary |
|---------------|---------------------|------------------------|
| 000000↔000000 | 96.0 | 72.1 |
| 000000↔011101 | 67.4 | 59.8 |
| 000000↔100000 | 81.5 | 42.7 |
| 000000↔111111 | 84.8 | 28.3 |

Table 9: BLEU scores for different language pairs, when augmented with a supervised signal. The column **aligned sentences** corresponds to the addition of supervised training on a parallel dataset of 1,000 sentences (out of 100,000). The column **bilingual dictionary** corresponds to the addition of supervised training on all word pairs (passed in as one-word sentences).
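A minimal sketch of the three-step regime described above, assuming a hypothetical `model` object exposing the three losses as methods (this is not the actual interface of Lample et al. (2018c)):

```python
def train_epoch(model, mono_l1, mono_l2, parallel, optimizer):
    """One epoch of the mixed regime of §9.1. Each step performs its own
    parameter update: denoising auto-encoding on monolingual batches, a
    supervised step on the small aligned set, then on-the-fly BT."""
    for batch_l1, batch_l2, batch_par in zip(mono_l1, mono_l2, parallel):
        losses = (model.dae_loss(batch_l1, batch_l2),               # DAE
                  model.supervised_loss(batch_par),                 # 1,000 pairs
                  model.back_translation_loss(batch_l1, batch_l2))  # OTF-BT
        for loss in losses:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```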
Both supervisions lead to better scores than without any supervision (see Table 8, where the top BLEU score is 23).

## 9.2 Bilingual Dictionary

Previous results suggest that failure arises at the lexical level, so here we add supervised training on all pairs of aligned words, i.e., the complete bilingual dictionary. Results improve markedly: BLEU scores are now between 28 and 72 (see Table 9), when they were previously below 7 (see Table 6). Overall then, back-translation works drastically more efficiently when complemented with some supervised signal, either at the sentence level (previous subsection) or at the lexical level (this subsection).

## 10 Experiment 6: Towards Semantics Via Lexical Fields

One difference between our artificial languages and natural languages concerns semantics. Currently, it appears that our models roughly generate the correct POS sequence, and then each word is randomly sampled, with no regard for semantic dependencies. Some methods have been proposed to address this sampling problem (Arora et al., 2016; Hopkins, 2022). To stay as close as possible to our previous experiments, we propose here a first-order approximation of the semantic dependencies between different words in a sentence: each word in a 'content' POS (Noun, Adj, TVerb, IVerb, Verb Comp) is associated with one of $N_c$ lexical fields (respecting morphology, so *bird* and *birds* would be associated with the same lexical field), and each sentence in the corpus is made of words from a single lexical field, or context (we use the two terms interchangeably). Words within a given POS and lexical field are sampled from a power law. The idea of this experiment is to see whether the model is capable of picking up this simple semantic cue. To measure success at capturing the lexical field information, we calculated (i) the proportion of sentences containing only words from the same lexical field, and (ii) for accurately POS-translated sentences, the proportion of words that were translated into a word from the right lexical field.
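These two scores can be computed in a few lines; `field_of` below is a hypothetical dictionary from content words to field ids, standing in for the generated lexicon, and the example words are toy placeholders:

```python
def mono_context_rate(sentences, field_of):
    """Metric (i): proportion of sentences whose content words all belong
    to a single lexical field (non-content words are skipped)."""
    def is_mono(sentence):
        return len({field_of[w] for w in sentence if w in field_of}) <= 1
    return sum(is_mono(s) for s in sentences) / len(sentences)

def right_field_rate(word_pairs, field_of):
    """Metric (ii): among aligned (reference, hypothesis) content-word
    pairs from POS-correct sentences, proportion in the right field."""
    hits = [field_of[ref] == field_of[hyp] for ref, hyp in word_pairs
            if ref in field_of and hyp in field_of]
    return sum(hits) / len(hits)

fields = {"sock": 0, "shoe": 0, "interstellar": 1}  # toy lexicon
print(mono_context_rate([["sock", "shoe"], ["sock", "interstellar"]], fields))  # 0.5
print(right_field_rate([("sock", "shoe"), ("sock", "interstellar")], fields))   # 0.5
```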
## 10.1 Experiment 6A: Two Balanced Lexical Fields

In a first sub-experiment, we used $N_c = 2$ lexical fields, and sampled the context of the sentences uniformly across the two contexts. Across 4 training seeds, we systematically obtained a proportion of mono-context sentences of 1. This success may come from the DAE alone, which plays the role of language modeling, completing sentences within a given context. For word-by-word translations, the accuracy was around 35% for 2 training seeds, and around 65% for the other 2 training seeds. This corresponds to the fact that in unsupervised learning, there is no signal to map contexts correctly across languages, since they are permutable within each language. This is thus an example of the rotation situation from Figure 2c. Yet this precision is not at ceiling, which would be either 0% or 100% (depending on which of the two permutable mappings is chosen). This indicates that the model fails to learn this partition of our languages perfectly.

## 10.2 Experiment 6B: Two Unbalanced Lexical Fields

In a second sub-experiment, we used $N_c = 2$ lexical fields again, and sampled the context of sentences across the two contexts with an unbalanced proportion of 30/70. The idea is to give the model the material to distinguish between contexts and overcome the bi-modal behavior noted above. The BLEU score was slightly increased compared to the same experiment without lexical fields (see Table 8). The proportion of mono-context sentences remains at one. The proportion of words translated in the right context is, over six training sessions: 55%, 65%, 66%, 70%, 73%, 76%. A naive baseline that always chooses the most frequent context would reach 70%. But this is visibly not what the models are doing (the output sentences are from both contexts). Another naive baseline that chooses the context according to its proportion in the corpus would obtain 58%. The models are (almost always) above that level. In addition, unlike in the case of equally frequent lexical fields, the different trained models now tend to all choose the right mapping between lexical fields, therefore taking advantage of their relative frequencies to map them onto one another. This is both a good result and a risk: if lexical fields are present in different proportions across cultures/languages, this could create a wrong signal. In sum, the models are, to some extent, able to use semantic frequency cues to map lexical fields across languages.

## 10.3 Experiment 6C: Ten Unbalanced Lexical Fields

We replicated the lexical field experiment with $N_c = 10$ lexical fields, and a number of sentences in each lexical field varying according to a power law with parameter $k = 1.1$ as before, thereby providing finer-grained 'semantic' information. The mono-context sentence proportion remains perfect at 1. The BLEU score increases slightly to 11.92 (see comparative results in Table 8). The proportions of words translated in the right context are: 27%, 28%, 38% and 39%, which is above a random baseline at 20%. Overall, these experiments show that more and more fine-grained semantic information provides a key signal for translation alignment, even if it is not fully sufficient in this simple form to make unsupervised back-translation fully work.

## 11 Conclusion

The back-translation objective is not sufficient to align two sets without supervision, in general. This is true even if it is complemented with additional objectives such as filtering or denoising auto-encoding. The method is successful nonetheless with real languages. This success is then presumably due to similarities between natural languages, which the training method picks up on. But it is not clear which similarities help do that. Through controlled experimentation with artificial languages, we investigate the role of lexical, syntactic and semantic properties. We find that, when they share the exact same lexica, languages with more similar grammars are easier to translate into one another. Hence, grammatical similarity across the languages of the world could be a key to the success of back-translation. But when lexica also vary, syntactic similarities are not sufficient to make back-translation align two languages. Lexical alignments are thus hard to learn by back-translation. What language properties make them learnable? We find that neither anchor points (partially shared vocabulary), nor rich, parallel word frequencies are enough to make back-translation work. Thus, manipulating various lexical and syntactic properties only, we find that some supervision signal is critical to support back-translation: through a small set of aligned sentences, or a complete set of aligned words. Moving to semantics then, we explored how the distribution of word co-occurrences influences the efficiency of back-translation.
We used only a crude form of semantics, by implementing lexical fields: different classes of words that never occur within the same sentence (think of: 'clothes', 'sock', 'shoe', 'shirt' versus 'astrophysics', 'interstellar', 'electromagnetic'). We find that unsupervised back-translation models are able to pick up on this (coarse) semantic signal to find a better alignment. We conclude that the success of back-translation is probably due to an even richer semantic parallelism across languages, above and beyond their lexical and syntactic similarities.

Future work should therefore study more subtle semantic properties. Currently, the semantic information we implemented is both coarse-grained and not completely decisive. One would thus like to test whether more realistic semantic information improves the system even more, or makes it collapse again. More subtle properties could also be investigated: for instance, selectional restrictions on verbs may play a role in shaping text distributions, above and beyond syntactic constraints, in a way that may help induce alignment across languages in an unsupervised setting. Future work should thus explore realistic semantic distributions, while maintaining the experimental control that artificial languages provide (Hopkins, 2022). This will help understand what in natural languages makes unsupervised back-translation reach so much success, despite its *a priori* theoretical insufficiency.

## Broader Impact Statement

We hope that our work can have two impacts, navigating between the engineering and scientific communities. In one direction, the systematic investigations we present can help evaluate and improve applied translation systems, in particular for low-resource languages where supervision is not an option. Conversely, we take advantage of the engineering success of back-translation to unearth what natural languages are made of, and how they spontaneously align with one another.

## References

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, 4:385–399, 2016. doi: 10.1162/tacl_a_00106. URL https://aclanthology.org/Q16-1028.

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=Sy2ogebAW.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146, 2017. doi: 10.1162/tacl_a_00051. URL https://aclanthology.org/Q17-1010.

Ondřej Bojar and Aleš Tamchyna. Improving translation model by monolingual data. In *Proceedings of the Sixth Workshop on Statistical Machine Translation*, pp. 330–336, Edinburgh, Scotland, July 2011. Association for Computational Linguistics. URL https://aclanthology.org/W11-2138.

Isaac Caswell, Ciprian Chelba, and David Grangier. Tagged back-translation. In *Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)*, pp. 53–63, Florence, Italy, August 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-5206. URL https://aclanthology.org/W19-5206.

Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Semi-supervised learning for neural machine translation.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1965–1974, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1185. URL https://aclanthology. org/P16-1185. Cheng-Han Chiang and Hung yi Lee. On the transferability of pre-trained language models: A study from artificial datasets. In *AAAI Conference on Artificial Intelligence*, 2021. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 8440–8451, Online, July 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.747. URL https://aclanthology.org/2020.acl-main.747. Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. Emerging cross-lingual structure in pretrained language models. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, pp. 6022–6034, Online, July 2020b. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.536. URL https://aclanthology.org/2020.acl-main.536. Ryan Cotterell and Julia Kreutzer. Explaining and generalizing back-translation through wake-sleep. *ArXiv*, abs/1806.04402, 2018. Zi-Yi Dou, Antonios Anastasopoulos, and Graham Neubig. Dynamic data selection and weighting for iterative back-translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 5894–5904, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.475. URL https://aclanthology.org/2020.emnlp-main.475. Philipp Dufter and Hinrich Schütze. Identifying elements essential for BERT's multilinguality. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4423–4437, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main. 358. URL https://aclanthology.org/2020.emnlp-main.358. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 489–500, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/ v1/D18-1045. URL https://aclanthology.org/D18-1045. Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for* Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4052– 4059, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/ N19-1409. URL https://aclanthology.org/N19-1409. Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. On the evaluation of machine translation systems trained with back-translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2836–2846, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.253. URL https://aclanthology.org/2020.acl-main.253. 
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala–English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 6098–6111, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1632. URL https://aclanthology.org/D19-1632. Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, and Alexandra Birch. Survey of Low-Resource Machine Translation. Technical Report arXiv:2109.00486, arXiv, February 2022. URL http://arxiv.org/abs/2109.00486. arXiv:2109.00486 [cs] type: article. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. Dual learning for machine translation. NIPS16, pp. 820–828, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 18–24, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-2703. URL https://aclanthology.org/W18-2703. Mark Hopkins. Towards more natural artificial languages. In *Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)*, pp. 85–94, Abu Dhabi, United Arab Emirates (Hybrid), Dec 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.conll-1.7. Aizhan Imankulova, Takayuki Sato, and Mamoru Komachi. Improving low-resource neural machine translation with filtered pseudo-parallel corpus. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pp. 70–78, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing. URL https://aclanthology.org/W17-5704. Ann Irvine and Chris Callison-Burch. End-to-end statistical machine translation with zero or small parallel texts. *Natural Language Engineering*, 22(4):517–548, 2016. doi: 10.1017/S1351324916000127. Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. Cross-lingual ability of multilingual bert: An empirical study. *ArXiv*, abs/1912.07840, 2019. Gaurav Kharkwal. *Taming the jabberwocky: examining sentence processing with novel words*. PhD thesis, Rutgers University - Graduate School - New Brunswick, 2014. URL https://rucore.libraries. rutgers.edu/rutgers-lib/45309/\#citation-export. Jyotsana Khatri and Pushpak Bhattacharyya. Filtering back-translated data in unsupervised neural machine translation. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 4334– 4339, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.383. URL https://aclanthology.org/2020.coling-main.383. Yunsu Kim, Yingbo Gao, and Hermann Ney. Effective cross-lingual transfer of neural machine translation models without shared vocabularies. In *Proceedings of the 57th Annual Meeting of the Association* for Computational Linguistics, pp. 1246–1257, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1120. URL https://aclanthology.org/P19-1120. Yunsu Kim, Miguel Graça, and Hermann Ney. When and why is unsupervised neural machine translation useless? 
In *Proceedings of the 22nd Annual Conference of the European Association for Machine Translation*, pp. 35–44, Lisboa, Portugal, November 2020. European Association for Machine Translation. URL https://aclanthology.org/2020.eamt-1.5. Alexandre Klementiev, Ann Irvine, Chris Callison-Burch, and David Yarowsky. Toward statistical machine translation without parallel corpora. In *Proceedings of the 13th Conference of the European Chapter* of the Association for Computational Linguistics, EACL '12, pp. 130–140, USA, 2012. Association for Computational Linguistics. ISBN 9781937284190. Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, and Lovekesh Vig. Domain adaptation for NMT via filtered iterative back-translation. In Proceedings of the Second Workshop on Domain Adaptation for NLP, pp. 263–271, Kyiv, Ukraine, April 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.adaptnlp-1.26. Guillaume Lample and Alexis Conneau. Cross-lingual Language Model Pretraining. In *33rd Conference on* Neural Information Processing Systems (NeurIPS 2019), 2019. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. In International Conference of Learning Representations (ICLR), 2018a. Guillaume Lample, Alexis Conneau, Marc Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In *International Conference on Learning Representations*, 2018b. URL https://openreview.net/forum?id=H196sainb. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Phrase-based & neural unsupervised machine translation. In *Proceedings of the 2018 Conference on Empirical Methods in* Natural Language Processing, pp. 5039–5049, Brussels, Belgium, Oct 2018c. Association for Computational Linguistics. doi: 10.18653/v1/D18-1549. URL https://aclanthology.org/D18-1549. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. *Transactions of the* Association for Computational Linguistics, 8:726–742, 2020. doi: 10.1162/tacl_a_00343. URL https: //aclanthology.org/2020.tacl-1.47. Tomás Mikolov, Quoc V. Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. *CoRR*, abs/1309.4168, 2013. URL http://arxiv.org/abs/1309.4168. Xing Niu, Michael Denkowski, and Marine Carpuat. Bi-directional neural machine translation with synthetic parallel data. In *Proceedings of the 2nd Workshop on Neural Machine Translation and Generation*, pp. 84–91, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/ W18-2710. URL https://aclanthology.org/W18-2710. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://aclanthology.org/P02-1040. Steven T. Piantadosi. Zipf's word frequency law in natural language: A critical review and future directions. *Psychonomic Bulletin and Review*, 21(5):1112–1130, 2014. ISSN 15315320. doi: 10.3758/ s13423-014-0585-6. Alberto Poncelas, D. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 
Investigating backtranslation in neural machine translation. In *European Association for Machine Translation Conferences/Workshops*, 2018. Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight, and Jonathan May. Translating translationese: A two-step approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3057–3062, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1293. URL https: //aclanthology.org/P19-1293. Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 86–96, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1009. URL https://aclanthology.org/P16-1009. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. *36th International Conference on Machine Learning, ICML 2019*, 2019-June:10384–10394, 2019. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pp. 3104–3112, Cambridge, MA, USA, 2014. MIT Press. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Proceedings of the 31st International Conference on* Neural Information Processing Systems, NIPS'17, pp. 6000–6010, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Multi-agent dual learning. In *International Conference on Learning Representations*, 2019. URL https://openreview. net/forum?id=HyGhN2A5tm. Jennifer C. White and Ryan Cotterell. Examining the inductive bias of neural language models with artificial languages. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and* the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 454–463, Online, Aug 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.38. URL https://aclanthology.org/2021.acl-long.38. Shijie Wu and Mark Dredze. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 833–844, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1077. URL https://aclanthology.org/D19-1077. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Technical report, 2016. Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 
Dual supervised learning. In *Proceedings of the 34th International Conference on Machine Learning - Volume 70*, ICML'17, pp. 3789–3798. JMLR.org, 2017. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 46–55, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1005. URL https://aclanthology.org/P18-1005. Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, and Dacheng Tao. On the complementarity between pre-training and random-initialization for resource-rich machine translation. *ArXiv*, abs/2209.03316, 2022.
Review 1: Summary: This paper aims to better understand **why** on-the-fly back-translation works well in the case of unsupervised machine translation, even when there are no theoretical guarantees for their success. To that end, the paper used an artificial language setup where it is possible to disentangle the impact of several different factors on back-translation's success in the context of unsupervised machine translation: (i) the extent to which the word frequency distributions are similar across the two languages; (ii) the extent to which the vocabularies are shared / there are lexical overlaps; (iii) the extent to which the syntax between the two languages differs; and (iv) the extent to which crude semantic signals (in particular, lexical fields) are similar across the two languages. The paper began by demonstrating that back-translation is not theoretically guaranteed to succeed in the case of unsupervised machine translation. Furthermore, while prior work has hypothesized the importance of similar syntactic structures, word frequency, and anchor points (i.e. the fact that numbers tend to have similar surface strings across different languages, and can therefore serve as anchor points for other words different languages), this paper finds evidence to the contrary. While the paper finds that shared syntactic structures improve back-translation performance, syntactic similarity alone is insufficient to make back-translation work well in the presence of strong lexical differences. Furthermore, neither anchor words nor similar word frequencies are sufficient to make back-translation succeed, although some degree of shared semantic parallelism (here in the form of **lexical fields**) is helpful. The paper concludes that back-translation success can likely be attributed to the richer semantic parallelism across languages. Furthermore, *some* degree of supervision signal is critical for back-translation success for unsupervised machine translation, whether through a small number of aligned sentences (i.e. a small parallel corpus) or a complete set of aligned words (i.e. a good bilingual dictionary, whether provided from an existing dictionary or learned through bilingual lexical induction). Strengths and Weaknesses: # Strengths 1. The paper addresses an important question of **why exactly** back-translations work in the case of unsupervised machine translation, even though there are no theoretical guarantees for their success. This line of work is important for the community to better understand the factors that drive back-translation success, and understand for what kinds of language pairs back-translation would likely succeed (or not succeed, e.g. when there is a substantial amount of lexical differences). 2. The paper is rigorous and extensive in terms of its analysis and methodology, which is done through synthetic language pairs, where one can control for the different factors (e.g. degree of syntactic & lexical overlap, etc.). I appreciate that the paper covers multiple different factors that can affect back-translation performance, such as syntactic & lexical & semantic overlap, etc. 3. The paper is very clear and well-written, and includes a comprehensive overview of both unsupervised machine translation and back-translation. This would be useful for readers that have not worked directly on this line of work. 4. The findings would likely be of interest to the broader community. # Weaknesses 1. 
While the findings are interesting, it is not immediately clear how to apply the findings to make back-translation work better for unsupervised machine translation. I do think that the paper can make some recommendations based on its findings (e.g. that some degree of supervision is necessary, so having high-quality bilingual dictionary or a small amount of parallel corpus would be very helpful, or that having cross-lingual word embedding that puts similar words in different languages in the same vector space is essential etc.). It would be nice to see the implications of the paper's findings spelled out more clearly, so that the broader community can build on the findings to design better-performing approaches. 2. Not enough space is dedicated to the semantic similarity section through lexical fields (Section 10, Experiment 6, page 12). This is an important part of the paper, because this shared semantic information helps translation alignment to a good extent (more so than other factors like syntactic similarity, for example). A more extensive description of lexical fields and what is exactly being done, ideally along with some examples, would be very helpful for the reader. 3. I am not entirely certain about how realistic the lexical field assumptions are in real languages (in particular, the assumption that each sentence in the corpus is made of words from a single lexical field, bottom of page 12). Some discussion around this would be much appreciated. 4. A discussion around how pre-training and cross-lingual embeddings would affect the findings would also be very helpful. Are the problems with lexical overlap, etc. can be solved by multi-lingual pre-training on massively multi-lingual data? Or would these likely to persist? Requested Changes: 1. **Recommended**: A discussion around what the paper's findings mean for designing better unsupervised machine translation system through back-translation (e.g. that a good dictionary / some small amount of supervision is important, etc.). 2. **Recommended**: More discussion and space dedicated towards explaining the lexical field experiment, ideally with examples, given that the paper highlights the importance of semantic similarity across languages. 3. **Recommended**: A discussion around how realistic the lexical field assumption is in real languages. 4. **Recommended**: A discussion around whether or not pre-training and cross-lingual word embeddings would likely mitigate the issues highlighted here (e.g. helping to find alignment between different words in different languages, etc.). Broader Impact Concerns: No broader impact concerns from my side, and I am happy with the broader impact statement as it is written on the paper. ================================================== Review 2: Summary: This paper uses artificial languages based on context-free grammars to inspect the behavior of back-translation for unsupervised neural machine translation. With a shared vocabulary, using different grammars leads to various degrees of success, generally with lower performance for major ordering changes between the two languages. When using identical grammars, but fully disjoint vocabularies, translation performance is very poor even if round-trip translation is often successful, with the model learning the wrong mapping between tokens. Using shared words or non-uniform word frequencies is not sufficient, while adding some supervised signal is helpful. Using lexical fields, i.e. 
non-overlapping word subsets, hints that richer semantic dependencies may be necessary for unsupervised NMT. Strengths and Weaknesses: **Strengths** - The paper presents a series of controlled studies to understand what may enable the success of back-translation for unsupervised NMT. - The paper challenges assumptions about the success of unsupervised NMT. **Weaknesses** - 100,000 sentences seem low for unsupervised NMT. It is not fully clear if and how some of the findings would differ with larger datasets. - Semantic properties are only explored at a coarse level, and more complex semantic properties may be difficult to simulate with artificial languages. Requested Changes: **Questions** P1. Why is creating parallel datasets "sometimes an impossible task"? P5. Why 1,374 words? P5. Why share 3 layers out of 4? In Lample et al. (2018c), it seems that all parameters are shared. **Comments** In the related work section, Lample et al. (2018a) is presented as a follow-up to Artetxe et al. (2018), but I believe both papers were contemporaneous. **Typos** P2. exerpimental P13. unbalance Broader Impact Concerns: None ================================================== Review 3: Summary: This paper evaluates the performance of back-translation approaches for neural machine translation through controlled experiments with artificial languages, as an attempt to theoretically uncover what properties of languages allow back-translation to work at all as an algorithm. Specifically, they construct and look at languages that differ in word frequency distributions and syntactic structure, allowing them to evaluate how the similarity of lexical fields across languages can allow better back-translation. They discuss how this can help understand which languages, based on their syntactic and lexical properties, might help develop cross-linguistic tools for better translation methods. Strengths and Weaknesses: This paper is clearly written and explained, and the hypotheses tested are well formulated and valid tests of the back-translation method. The experimental methodology is sound and carried out in detail for the artificially generated languages the authors test. These languages and experimental results give us a deeper understanding of the effect of variation/similarities in languages and how this can affect models that attempt to back-translate between languages. The generation procedure of artificial languages, details of grammar, sizes of datasets produced, etc., are all well documented and easy to replicate. Weakness of evaluation: the main metric to score translations in this paper is BLEU scores of generations, and although this is the most widely and commonly used metric for machine translation evaluations, there are several papers that highlight the deficiencies of this metric (see here: https://aclanthology.org/E06-1032.pdf) and alternatives to it (see here: https://aclanthology.org/2021.triton-1.6.pdf). Especially since some of the experiments hinge on lexical similarity between languages, it would be worth discussing alternative metrics for some of this paper (e.g., why the limitations of this metric might result in higher scores for the lexically similar languages, and highlighting potential differences that might occur if other metrics less biased towards simple n-gram overlap were used).
Weakness of lexical/similarity hypotheses: the similarities between real languages are often due to more subtle, semantically aligned properties rather than just the pure lexical similarity that is evaluated here. Although the simple case of lexical overlap or anchor points is important to evaluate, it is worth also attempting to evaluate the more nuanced similarities that occur in real languages (or at least positing how these might influence back-translation methods). In order to understand how the results from these artificial languages can be mapped to real languages that we might want to apply these translation methods to, it would be useful to have a table or paragraph that outlines how pairs of real languages fall into the categories of hypotheses framed here (e.g., which pairs of languages have some high percentage of lexical overlap, anchor points, syntactic similarity, and so on). Future work that draws on this paper can then attempt to empirically evaluate whether real translation datasets/models on this task also exhibit effects similar to those seen with the languages here.

Requested Changes: A discussion section about a mapping between the grammars/artificial languages and real languages in the world would be helpful to understand the implications for real translation tasks. A table that gives an overview of this mapping with example languages would be a great help to the community and future work drawing on this paper. Having one more metric that is an alternative to BLEU (especially for the lexically similar languages) would be good to see, to verify that the trend of results holds up. A discussion section about the implications of the metrics used and how they favour some language pairs over others would be helpful.

Broader Impact Concerns: None

==================================================
# Exact And Approximate Conformal Inference For Multi-Task Learning

Anonymous authors Paper under double-blind review

## Abstract

It is common in machine learning to estimate a response y given covariate information x. However, these predictions alone do not quantify any uncertainty associated with said predictions. One way to overcome this deficiency is with conformal inference methods, which construct a set containing the unobserved response y with a prescribed probability. Unfortunately, even with a one-dimensional response, conformal inference is computationally expensive despite recent encouraging advances. In this paper, we explore multi-task learning within a regression setting, delivering exact derivations of conformal inference p-values when the predictive model can be described as a linear function of y. Additionally, we propose unionCP and a multivariate extension of rootCP as efficient ways of approximating the conformal prediction region for a wide array of multi-task predictors while preserving computational advantages. We also provide both theoretical and empirical evidence of the effectiveness of these methods.

## 1 Introduction

In regression, we aim to predict (or estimate) some response y given covariate information x. These predictions alone deliver no information related to the uncertainty associated with the unobserved response, and thus, would benefit from the inclusion of a set Γ^{(α)}(x) such that, for any significance level α ∈ (0, 1),

$$\mathbb{P}\big(y\in\Gamma^{(\alpha)}(x)\big)=1-\alpha. \tag{1}$$

One method to generate Γ^{(α)} is through conformal inference (used interchangeably with "conformal prediction" in this work) (Gammerman et al., 1998; Lei et al., 2018), which generates *conservative* prediction sets for some unobserved response y under only the assumption of exchangeability. Given a finite number of observations D_n = {(x_i, y_i)}_{i=1}^{n} and a new unlabelled example x_{n+1}, conformal prediction regions are generated through the repeated inversion of the test,

$$H_0:y_{n+1}=z\quad\text{vs.}\quad H_a:y_{n+1}\neq z, \tag{2}$$

where z is a potential candidate response value, *i.e.,* the null hypothesis (Lei et al., 2018). A p-value for this test is constructed by learning a predictive model ŷ(z) on the augmented dataset D_n ∪ (x_{n+1}, z) and comparing one's ability to predict the new candidate z using ŷ_{n+1}(z) to the already observed responses using, say, ŷ_i(z), the predicted value for the i-th response as a function of z. We note that while ŷ_i(z) depends on D_n, x_i, x_{n+1}, and z, we only explicitly highlight the dependence on z. The so-called conformal prediction set is the collection of candidates z for which the null hypothesis is not rejected, *i.e.,* when the error in predicting z is not too high compared to others. The inversion of the test in Equation (2) is traditionally called "full" conformal prediction since it uses the entire dataset to learn a predictive model. Unfortunately, full conformal prediction is computationally demanding in most cases, with each new candidate point z requiring a new model to be fit. To avoid this complexity, more efficient methods, *e.g.,* split conformal inference (Vovk et al., 2005; Lei et al., 2018) and trimmed conformal inference (Chen et al., 2016), have been introduced with trade-offs between computational efficiency and performance. Of interest to our work in this paper are *exact* and *approximate conformal inference* methods, which aim to reduce computational complexity without sacrificing performance. Nouretdinov et al.
(2001) showed that with certain models, ridge regressors in particular, conformity measures for every observation in a dataset can be constructed as an affine function of the candidate value z and only require training the model once. In our work, we extend the result of Nouretdinov et al. (2001) to predictors of the form

$$\hat{y}=Hy, \tag{3}$$

where y is an n×1 vector of responses, H is an n×n matrix, and ŷ is an n×1 vector of predictions. We note that H can also be a function of a set of covariates, *e.g.,* as with ridge regression where H = X(X^⊤X + λI)^{−1}X^⊤. In reality, the restriction shown in Equation (3) is more general than ridge regression; we only require that the predictions be linear functions of the input. In this paper, we refer to models that follow Equation (3) as linear models; this is in contrast to the traditional usage of the term to reflect models that are linear with respect to their parameters. With some models, exact conformal inference is difficult. However, Ndiaye (2022) showed that under certain regularity conditions on the model of interest, we can generate upper and lower bounds on the conformal prediction set, *i.e.,* interval, with only a single model fit, allowing for conservative approximations of the true conformal prediction set. In more complex settings it might be of interest to construct a model for multiple responses, *i.e.,* for some response y ∈ R^q, also known as multi-task (or multi-output) regression (Zhang & Yang, 2018; Borchani et al., 2015; Xu et al., 2019). Thus, we might wish to construct a prediction set such that some q-dimensional version of y, say y = (y^{(1)}, …, y^{(q)})^⊤, is contained with some specified probability.

**Contributions** With these potential scenarios in mind, we aim to extend exact and approximate conformal inference to the multi-task setting. Specifically, we contribute:

- extensions of exact conformal inference to multiple dimensions with various predictors and conformity measures
- unionCP to approximate conformal prediction sets without model retraining
- a multivariate extension of rootCP (Ndiaye & Takeuchi, 2021) which utilizes numerical root-based methods to find points on the boundary of a conformal prediction set

The introduction of unionCP and the extension of rootCP provide a trade-off between various conformal inference methods, balancing the computational efficiency of split conformal prediction (splitCP) with the performance of full conformal prediction (fullCP). Table 1 summarizes the overall computational costs for each of these methods in terms of the number of model retraining iterations required to generate the conformal prediction region. We also include the computational complexity of the CP approximation provided by a grid-based approach (gridCP). In contrast, fullCP comprises approaches where the exact conformal prediction set can be constructed in closed form.

Table 1: Computational complexity of methods where q is the response dimension, m is the cardinality of the candidate value set, d is the number of search directions, and ϵ is the tolerance.

| Method | Linear | Nonlinear |
|----------|----------|-------------|
| splitCP | O(q) | O(q) |
| gridCP | O(q) | O(mq) |
| fullCP | O(q) | - |
| unionCP | O(q) | O(ndq log₂(1/ϵ)) |
| rootCP | O(q) | O(dq log₂(1/ϵ)) |

From Table 1, we can see that in the linear case, each of the methods for prediction set generation requires the same number of model refits as splitCP.
We note that this does not account for the complexity of interval construction in each case. The rest of the paper is laid out as follows. Section 2 provides requisite background for the paper. Section 3 extends exact conformal inference to multiple dimensions, while Section 4 introduces various conformal prediction set approximation methods in multiple dimensions. Section 5 provides empirical evaluation of our proposed approaches. Section 6 concludes the paper.

**Notation** We denote the design matrix X = (x_1, …, x_n, x_{n+1})^⊤. Given j ∈ [n], the rank of an element u_j among a sequence {u_1, …, u_n} is defined as

$$\mathrm{Rank}(u_{j})=\sum_{i=1}^{n}\mathbb{1}_{u_{i}\leq u_{j}}.$$

## 2 Conformal Inference

In this section we provide background on relevant topics for this paper. Our applications within this paper are focused on regression, so we focus our background discussion on regression as well. Originally introduced in Gammerman et al. (1998) as "transductive inference", conformal inference (CI) was originally focused on providing inference with classification approaches. Vovk et al. (2005) provides a formalized introduction to conformal inference within regression. With the express purpose of inference, the goal of CI is to attach, in some fashion, a measure of uncertainty to a predictor, specifically through the construction of a conservative prediction set, *i.e.,* one such that

$$\mathbb{P}\big(y_{n+1}\in\Gamma^{(\alpha)}(x_{n+1})\big)\geq 1-\alpha. \tag{4}$$

We define D_n = {(x_i, y_i)}_{i=1}^{n} as a collection of n observations, where the i-th data tuple (x_i, y_i) is made up of a covariate vector x_i and a response y_i. We wish to construct a *valid* prediction set for a new observation (x_{n+1}, y_{n+1}), where x_{n+1} is some known covariate vector and y_{n+1} is some yet-to-be-observed response. Assuming each data pair (x_i, y_i) and (x_{n+1}, y_{n+1}) are drawn exchangeably from some distribution P, conformal inference generates conservative, finite-sample valid prediction sets in a distribution-free manner. The main approach to perform the test inversion associated with Equation (2) relies on Lemma 1.

Lemma 1. *Let* U_1, …, U_n, U_{n+1} *be an exchangeable sequence of random variables. Then, for any* α ∈ (0, 1),

$$\mathbb{P}\big(\mathrm{Rank}(U_{n+1})\leq\lceil(1-\alpha)(n+1)\rceil\big)\geq 1-\alpha.$$

In a prediction setting, test inversion for a particular candidate value z is achieved by training the model of interest on an augmented data set D_{n+1}(z) = D_n ∪ (x_{n+1}, z). At this point, we leave our model of interest general, denoting the prediction of the i-th observation based on a model trained with D_{n+1}(z) as ŷ_i(z). Following the refitting, each observation in the augmented data set receives a (non)conformity *measure*, which determines the level of (non)conformity between itself and other observations. One popular, and particularly effective, conformity measure is the absolute residual

$$S_{i}(z)=|y_{i}-\hat{y}_{i}(z)|. \tag{5}$$

We can construct the conformity *score* associated with a particular candidate point z with

$$\pi(z)=\frac{1}{n+1}+\frac{1}{n+1}\sum_{i=1}^{n}\mathbb{1}_{S_{i}(z)\leq S_{n+1}(z)}, \tag{6}$$

where S_i(z) is the conformity measure for the data pair (x_i, y_i) as a function of z and S_{n+1}(z) is the conformity measure associated with (x_{n+1}, z).
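To fix ideas, the sketch below implements Equations (5) and (6) naively for a ridge regressor: every candidate z triggers a full refit on the augmented dataset, which is exactly the cost that the later sections avoid. This is our own illustrative code under the stated assumptions, not the authors' implementation; the function names and toy data are ours.

```python
import numpy as np

def conformity_measures(X, y, x_new, z, lam=1.0):
    """S_i(z) of Equation (5) for all i in [n+1], obtained by refitting
    a ridge regressor on the augmented dataset D_n u {(x_new, z)}."""
    Xa = np.vstack([X, x_new])                  # (n+1) x p design matrix
    ya = np.append(y, z)                        # responses with candidate z appended
    H = Xa @ np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T)
    return np.abs(ya - H @ ya)                  # |y_i - yhat_i(z)|

def pi_score(X, y, x_new, z, lam=1.0):
    """pi(z) of Equation (6); the p-value is 1 - pi(z)."""
    S = conformity_measures(X, y, x_new, z, lam)
    n = len(y)
    return (1.0 + np.sum(S[:n] <= S[-1])) / (n + 1)

# toy usage: scan a grid of candidates (the naive, expensive approach)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=40)
x_new = rng.normal(size=(1, 3))
grid = np.linspace(-8.0, 8.0, 201)
# keep z whenever its p-value exceeds alpha = 0.1
kept = [z for z in grid if 1.0 - pi_score(X, y, x_new, z) > 0.1]
```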
Then, a valid p-value for the test shown in Equation (2) can be found with

$$p\text{-value}(z)=1-\pi(z).$$

A prediction set for an unknown response y_{n+1} associated with some covariate vector x_{n+1} is

$$\Gamma^{(\alpha)}(x_{n+1})=\{z:(n+1)\pi(z)\leq\lceil(1-\alpha)(n+1)\rceil\}. \tag{7}$$

Then, by Lemma 1, with

$$(n+1)\pi(y_{n+1})\equiv\mathrm{Rank}(S_{n+1}(y_{n+1})),$$

Equation (4) holds for Γ^{(α)}(x_{n+1}). By the previous results, CI can also be utilized in the multivariate response case, where one is interested in quantifying uncertainty with respect to the joint behavior of a collection of responses, given a set of covariates. Thus, we can construct a multidimensional prediction set Γ^{(α)}(x_{n+1}) ⊂ R^q such that Equation (4) holds when y_{n+1} is some q-dimensional random vector. The first result extending conformal inference to the multivariate setting comes from Lei et al. (2015), which applies conformal inference to functional data, providing bounds associated with prediction "bands". Diquigiovanni et al. (2022) extends and generalizes additional results for conformal inference on functional data. Joint conformal prediction sets outside the functional data setting are explored in Kuleshov et al. (2018) and Neeven & Smirnov (2018). Messoudi et al. (2020; 2021) extend these works through the use of Bonferroni- and copula-based conformal inference, respectively. Cella & Martin (2020), Kuchibhotla (2020) and Johnstone & Cox (2021) construct joint conformal sets through the use of depth measures, *e.g.,* half-space and Mahalanobis depth, as the overall conformity measure. Applications of conformal inference have been seen in healthcare (Olsson et al., 2022), drug discovery (Cortés-Ciriano & Bender, 2019; Eklund et al., 2015; Alvarsson et al., 2021), and decision support (Wasilefsky et al., 2023), to name a few. For a thorough treatment of conformal inference in general, we point the interested reader to Fontana et al. (2023) and Angelopoulos et al. (2023).

## 2.1 Computationally Efficient Conformal Inference

Due to the inherent model refitting required to generate prediction sets through full conformal inference, *i.e.,* the testing of an infinite number of candidate points, more computationally efficient methods have been explored. We describe a subset of these methods in the following sections. Specifically, we focus on resampling-based and exact conformal inference.

## 2.1.1 Resampling Methods

Split conformal inference (Vovk et al., 2005; Lei et al., 2018) generates conservative prediction intervals under the same assumptions of exchangeability as *full* conformal inference. However, instead of refitting a model for each new candidate value, split conformal inference utilizes a randomly selected partition of D_n, which includes a training set I_1 and a calibration set I_2. First, a prediction model is fit using I_1. Then, conformity measures are generated using out-of-sample predictions for observations in I_2. The split conformal prediction interval for an incoming (x_{n+1}, y_{n+1}), when using the absolute residual as our conformity measure, is

$$\Gamma_{\texttt{split}}^{(\alpha)}(x_{n+1})=[\hat{y}_{n+1}-s,\hat{y}_{n+1}+s], \tag{8}$$

where ŷ_{n+1} is the prediction for y_{n+1} generated using the observations in I_1, and s is the ⌈(|I_2| + 1)(1 − α)⌉-th smallest conformity measure for observations in I_2.
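In contrast to the full conformal sketch above, split conformal inference needs only a single model fit. Below is a minimal sketch of Equation (8), assuming a scikit-learn Ridge model and the ⌈(|I_2|+1)(1−α)⌉-th smallest calibration score as the quantile; the helper name and split sizes are ours.

```python
import numpy as np
from sklearn.linear_model import Ridge

def split_conformal_interval(X, y, x_new, alpha=0.1, seed=0):
    """Split conformal interval of Equation (8) with absolute residuals."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    I1, I2 = idx[: len(y) // 2], idx[len(y) // 2:]   # training / calibration split
    model = Ridge(alpha=1.0).fit(X[I1], y[I1])
    scores = np.sort(np.abs(y[I2] - model.predict(X[I2])))  # calibration scores
    k = int(np.ceil((len(I2) + 1) * (1 - alpha)))
    # if k exceeds |I2| the interval is formally infinite; we clamp for the sketch
    s = scores[min(k, len(I2)) - 1]
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - s, y_hat + s
```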
In order to combat the larger widths and high variance associated with split conformal intervals, cross-validation (CV) approaches to conformal inference have also been implemented. The first CV approach was introduced in Vovk (2015) as cross-conformal inference, with the goal of "smoothing" inductive conformity scores across multiple folds. Aggregated conformal predictors (Carlsson et al., 2014) generalize cross-conformal predictors, constructing prediction intervals through any exchangeable resampling method, *e.g.,* bootstrap resampling. Other resampling-based conformal predictors also include CV+ and jackknife+ (Barber et al., 2021). For a more detailed review and empirical comparison of resampling-based conformal inference methods, we point the interested reader to Contarino et al. (2022).

## 2.1.2 Exact Conformal Inference For Piecewise Linear Estimators

In order to test a particular set of candidate values for inclusion in Γ^{(α)}(x_{n+1}), we must compare the conformity measure associated with our candidate data point to the conformity measures of our training data. Naively, this requires the refitting of our model for each new candidate value. However, Nouretdinov et al. (2001) showed that S_i(z), constructed using Equation (5) in conjunction with a ridge regressor, varies piecewise-linearly as a function of the candidate value z, eliminating the need to test a dense set of candidate points through model refitting. Other exact conformal inference methods include conformal inference through homotopy (Lei, 2019; Ndiaye & Takeuchi, 2019), influence functions (Bhatt et al., 2021; Cherubin et al., 2021), and root-finding approaches (Ndiaye & Takeuchi, 2021). While not exact, Ndiaye (2022) provides approximations to the full conformal prediction region through stability-based approaches.

## 3 Exact Conformal Inference For Multi-Task Learning

In the following sections, we extend the results in Nouretdinov et al. (2001) to multiple dimensions. We also discuss closed-form solutions for more general predictors as well as higher-dimension prediction sets with other conformity measures. While CI can be applied to any prediction or classification task, in this section we restrict each of our predictors, given an incoming observation (x_{n+1}, z), to the form

$$\hat{y}^{(k)}(z_{k})=H_{k}(x_{n+1},x_{i})y^{(k)}(z_{k}), \tag{9}$$

where ŷ^{(k)}(z_k) is the vector of predictions for the k-th response as a function of the candidate value z_k, and the candidate value *vector* is defined as z = (z_1, …, z_q)^⊤. We note that the restriction shown in Equation (9) is analogous to the restriction identified in Equation (3). Additionally, we require that H_k be constructed independently of y^{(k)}, *i.e.,* not as a function of y^{(k)}. Even with this restriction, H_k is general enough so as to include many classes of predictors, with examples described below. We specifically discuss how to construct exact p-values for a given z without retraining our model. We also identify how we construct explicit p*-value change-point sets*, denoted as E_i for the i-th observation, where

$$\mathcal{E}_{i}\equiv\{z\in\mathbb{R}^{q}:S_{n+1}(z)\leq S_{i}(z)\}, \tag{10}$$

with the end goal of generating exact conformal prediction sets. Note that E_{n+1} ≡ R^q.
Then, the p-value associated with the hypothesis test shown in Equation (2) for any candidate point z is

$$p\text{-value}(z)=\frac{|\{i\in[n+1]:z\in\mathcal{E}_{i}\}|}{n+1}. \tag{11}$$

In the following sections, we describe several predictors with which we can perform exact conformal inference. We also describe exact conformal inference results for two conformity measure constructions, ℓ₁ and ||·||²_W, as well as results for finding points on the boundary of a conformal prediction set for any conformity measure.

## 3.1 Predictors For Exact Conformal Inference

Many regression methods generate predictions that follow Equation (9), including: ridge regression, kernel regression, and k-nearest neighbors, among others. In the sequel, we describe the predictions resulting from these approaches in the form laid out prior in Equation (3).

**Ridge Regression** While we have already described H with respect to a ridge regressor using regularization parameter λ in Section 1, we explicitly describe it for the k-th response dimension as

$$H_{k}(x_{n+1})=X(X^{\top}X+\lambda_{k}I)^{-1}X^{\top}. \tag{12}$$

**Local Constant (Nadaraya-Watson) Regression** Kernel regression (Nadaraya, 1964; Watson, 1964) is a nonparametric regression technique that utilizes kernel density estimators (Parzen, 1962). Traditionally, a "kernel" is of the form

$$K_{h}(u)=\frac{1}{h}K\Big(\frac{u}{h}\Big), \tag{13}$$

where K(·) is a (symmetric) probability density, and h is a bandwidth parameter. Often a Gaussian kernel is used, *i.e.,* K(u) ≡ Φ(u), but other kernels are also popular. Using a kernel, we can perform nonparametric regression. For some x_i in D_{n+1}(z), the Nadaraya-Watson regression estimator generates a prediction

$$\hat{y}_{i}(z)=\sum_{j=1}^{n+1}\frac{K_{h}(x_{i}-x_{j})}{\sum_{t=1}^{n+1}K_{h}(x_{i}-x_{t})}y_{j}(z)$$

where y_j(z) is the j-th element of y(z). Thus, we can perform "local-constant" regression by using H_k(x_{n+1}, x_i) ≡ H_k(x_{n+1}) = (w_1, …, w_{n+1})^⊤, where each w_i is a vector of the normalized kernel values K_h(·) for each observation x_j centered on x_i, *i.e.,*

$$w_{i}=\left(K_{h}(x_{i}-x_{1}),\ldots,K_{h}(x_{i}-x_{n+1})\right)^{\top}\left(I_{n+1}\sum_{j=1}^{n+1}K_{h}(x_{i}-x_{j})\right)^{-1}. \tag{14}$$

The current form for our kernel is general. However, we have not specified how it can be extended to a multivariate scenario; this is especially important for applications where we consider multiple covariates. While there do exist multivariate kernels that take vector arguments and utilize a bandwidth *matrix*, simpler approaches provide a more attractive and computationally efficient way of generating multivariate kernels. As an example, a *product* kernel (Scott, 1992) generates a multivariate kernel by multiplying marginal kernel functions for each covariate. Namely, for an argument u = (u_1, …, u_p)^⊤ and bandwidth vector h = (h_1, …, h_p)^⊤,

$$K_{h}(u)=\prod_{k=1}^{p}K_{h_{k}}(u_{k}).$$

Another popular method for extension to multiple covariates is radial basis functions (Broomhead & Lowe, 1988), which utilize a norm as an argument to a univariate kernel.

**Local Linear Regression** In contrast to the local-constant regression with the Nadaraya-Watson estimator, local-linear regression (Fan, 1992) utilizes a weighted version of the covariate matrix. Using the kernel introduced in Equation (13), we can construct a vector w_i for the i-th observation.
Local-linear regression then constructs an estimate for y_i, as a function of the candidate value pair (x_{n+1}, z), by using an adjusted covariate matrix,

$$\tilde{X}_{i}=\begin{bmatrix}1&(x_{1}-x_{i})^{\top}\\ \vdots&\vdots\\ 1&(x_{n+1}-x_{i})^{\top}\end{bmatrix},$$

and a diagonalized version of w_i, which we identify as G(x_i), resulting in predictions for the i-th observation of the k-th response of the form

$$\hat{y}_{i}^{(k)}(z_{k})=\left(H_{k}(x_{n+1},x_{i})y^{(k)}(z_{k})\right)_{i},$$

where

$$H_{k}(x_{n+1},x_{i})=\tilde{X}_{i}\big(\tilde{X}_{i}^{\top}G(x_{i})\tilde{X}_{i}\big)^{-1}\tilde{X}_{i}^{\top}G(x_{i}),$$

and (·)_i is the i-th element of the vector argument.

**k-Nearest Neighbors** k-nearest neighbors (Cover & Hart, 1967; Fix, 1985) is a nonparametric regression technique that generates predictions based on neighboring values of an incoming observation. Traditionally, k is used to describe the number of neighbors utilized to construct a prediction. In this work, we use m. Specifically, given a fixed m, with an observation for some x we can construct a set of m neighbors of x, made up of the training data observations. We define the set of neighbors for x as N(x). Then, a matrix H_k(x_{n+1}, x_i) ≡ H_k(x_{n+1}) can be constructed such that for each row-column position (i, j),

$$H_{k}(x_{n+1})_{ij}=\begin{cases}1/m&x_{j}\in N(x_{i})\\ 0&\text{otherwise}\end{cases}.$$

The result of Nouretdinov et al. (2001) was extended to include both lasso and elastic net regressors in Lei (2019). For this paper, we utilize a generalized version, shown in Proposition 1.

Proposition 1. *Assume the fitted model as in Equation* (3)*, where* H(x_{n+1}, x_i) ≡ H. *Then, if we define* y(z) = (y^⊤, z)^⊤, *we can describe the vector of residuals associated with the augmented dataset and some candidate value* z *as*

$$y(z)-\hat{y}(z)=A+Bz \tag{15}$$

*where* A *and* B *are of the form*

$$\begin{array}{l}A=\left(I-H\right)y(0)\\ B=\left(I-H\right)(0,\ldots,0,1)^{\top}.\end{array}$$

Proof. By assumption, we can describe our vector of predictions ŷ(z) = Hy(z). Thus,

$$\begin{aligned}y(z)-\hat{y}(z)&=y(z)-Hy(z)\\ &=(I_{n+1}-H)y(z)\\ &=(I_{n+1}-H)y(0)+(I_{n+1}-H)(0,\ldots,0,1)^{\top}z\\ &=A+Bz\end{aligned}$$

With Proposition 1, we can then describe the conformity measure for the i-th observation, when using Equation (5), as S_i(z) = |a_i + b_i z|.

## 3.2 Exact p-Values With ℓ₁

We formalize our extension of Nouretdinov et al. (2001) to multiple dimensions, specifically utilizing

$$S_{i}(z)=||y_{i}-\hat{y}_{i}(z)||_{1}, \tag{16}$$

as our conformity measure, in Proposition 2.

Proposition 2. *Assume the fitted model* ŷ^{(k)}(z_k) = H_k(x_{n+1}, x_i)y^{(k)}(z_k). *Then, using Equation* (16),

$$S_{i}(z)=||a_{i}+b_{i}z||_{1},$$

*where* a_i = (a_{1i}, …, a_{qi})^⊤, b_i = (b_{1i}, …, b_{qi})^⊤, *and* a_{ki} *and* b_{ki} *are the* i*-th elements of the vectors* A_k *and* B_k, *respectively, defined as*

$$A_{k}=\big(I-H_{k}(x_{n+1},x_{i})\big)y^{(k)}(0), \tag{17}$$

*and*

$$B_{k}=\left(I-H_{k}(x_{n+1},x_{i})\right)(0,\ldots,0,1)^{\top}. \tag{18}$$

Proof. The proof follows directly from applying Proposition 1 to each element of the vector.
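To make Propositions 1 and 2 concrete, the sketch below computes the affine coefficients once and then evaluates exact p-values for arbitrary candidate vectors with no refitting. For brevity we assume a single hat matrix H shared across the q tasks (e.g., one common ridge penalty), in which case B_k is the same vector for every k; the function names are ours, not the authors'.

```python
import numpy as np

def affine_coefficients(H, Y):
    """Equations (17)-(18) under a shared hat matrix: residual of task k is
    A_k + B_k * z_k with A_k = (I - H) y^(k)(0) and B_k = (I - H) e_{n+1}.
    Y is the n x q matrix of observed responses; H is (n+1) x (n+1)."""
    n, q = Y.shape
    I = np.eye(n + 1)
    Y0 = np.vstack([Y, np.zeros(q)])   # responses with z = 0 appended
    A = (I - H) @ Y0                   # (n+1) x q; column k holds A_k
    B = (I - H)[:, -1]                 # (n+1,) vector, identical across tasks here
    return A, B

def exact_p_value(A, B, z):
    """Exact p-value of Equation (11) with the l1 measure of Equation (16):
    S_i(z) = sum_k |a_ki + b_ki z_k|, evaluated without any model refit."""
    S = np.abs(A + np.outer(B, z)).sum(axis=1)   # S_i(z) for i = 1, ..., n+1
    return np.mean(S >= S[-1])                   # |{i : z in E_i}| / (n+1)
```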
Proposition 2 allows us to construct conformity measures associated with a multidimensional response without retraining the model for each new z. Additionally, using Proposition 2 for each observation (x_i, y_i), we can generate a region E_i, as defined in Equation (10). Additionally, we can construct a fixed-point solution for ŷ_{n+1}(z), *i.e.,* a point where ŷ_{n+1}(z) = z, as

$$\tilde{z}=\bigg(-\frac{a_{1n+1}}{b_{1n+1}},\ldots,-\frac{a_{qn+1}}{b_{qn+1}}\bigg). \tag{19}$$

Equation (19) can be derived by setting each component of S_{n+1}(z) equal to zero; the fixed point for a given observation is where the probability of a more extreme response, *i.e.,* p-value(z), is maximized. It is initially unclear how the construction of an individual region E_i occurs. As it stands, finding all z such that S_i(z) = S_{n+1}(z) is a multidimensional root-finding problem with infinite solutions, which has exponential complexity as q increases. However, utilizing the inherent structure of each E_i, we can simplify the problem. In order to provide clarity, we include Algorithm 1 to construct E_i in practice when using ℓ₁.

Algorithm 1 exact conformal prediction with ℓ₁
Input: data D_n = {(x_1, y_1), …, (x_n, y_n)}, and x_{n+1}
Coverage level α ∈ (0, 1)
Dimension q
\# *Initialization*
Construct H_k(x_{n+1}) for each k = 1, …, q.
Construct a_{ki}, b_{ki} for all i = 1, …, n + 1.
Construct z̃ as in Equation (19).
\# *Construct p-value change-point sets*
We define z̃_{(j)} as z̃ without the j-th component.
for i ∈ 1, …, n do
  Initialize set of corner points V = ∅.
  for j ∈ 1, …, q do
    Fix z_k = z̃_k for k ≠ j. Then, find the set z*_j = {z_j : c_i + |a_{ji} + b_{ji}z_j| = |a_{jn+1} + b_{jn+1}z_j|}
    Set V = V ∪ {(z̃_{(j)}, z*_j)}
  end for
  Set E_i = chull{V}, where chull{S} is the convex hull of the set S.
end for
Return: E = {E_i}_{i=1}^{n}

Algorithm 1 leverages the fact that when using ℓ₁, each E_i can be defined by the convex hull of a collection of points, specifically points axis-aligned with the fixed-point solution z̃. These points, referred to as "corners" within this paper, differ from z̃ in only the j-th element. The j-th element of a corner point, defined as z*_j, is such that

$$z_{j}^{*}=\{z_{j}:c_{i}+|a_{ji}+b_{ji}z_{j}|=|a_{jn+1}+b_{jn+1}z_{j}|\}, \tag{20}$$

where c_i = Σ_{k≠j} |a_{ki} + b_{ki}z̃_k|. We emphasize that all other components of the (n+1)-th conformity measure are zero by definition of z̃. Then, z*_j is either one of two solutions, z^l_j or z^u_j. With the decomposition shown in Equation (20), finding E_i is reduced to a series of q one-dimensional root-finding problems. We include a two-dimensional visual of the solutions generated using Algorithm 1 for an observation from the cement dataset in Figure 1. We also include further discussion on Algorithm 1 in the Supplementary Materials.

![8_image_0.png](8_image_0.png)

Figure 1: Example of Algorithm 1 for constructing E_i for an observation from the cement dataset. The "•" identifies z̃, while the black line represents the border of the p-value change-point set. The points (z̃_1, z^l_2), (z̃_1, z^u_2), (z^l_1, z̃_2) and (z^u_1, z̃_2) are identified with "•". The axes generated with z̃ are shown with the dotted black lines.
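To make the corner computation of Equation (20) concrete, the following sketch solves the scalar equation c + |a₁ + b₁t| = |a₂ + b₂t| by enumerating the four sign branches, on each of which the equation is linear. This is our own illustrative code under stated assumptions, not the authors' implementation.

```python
import numpy as np

def corner_solutions(c, a1, b1, a2, b2, tol=1e-12):
    """Solve c + |a1 + b1*t| = |a2 + b2*t| for t (cf. Equation (20)).
    On a branch with fixed signs s1, s2, the equation reads
    c + s1*(a1 + b1*t) = s2*(a2 + b2*t); we keep only roots whose
    assumed signs are actually attained."""
    roots = []
    for s1 in (-1.0, 1.0):            # assumed sign of a1 + b1*t
        for s2 in (-1.0, 1.0):        # assumed sign of a2 + b2*t
            slope = s2 * b2 - s1 * b1
            if abs(slope) < tol:
                continue              # parallel branch: no isolated root
            t = (c + s1 * a1 - s2 * a2) / slope
            if s1 * (a1 + b1 * t) >= -tol and s2 * (a2 + b2 * t) >= -tol:
                roots.append(float(np.round(t, 10)))
    return sorted(set(roots))         # typically the pair (z_j^l, z_j^u)
```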
## 3.3 Exact p-Values With ||·||²_W

In order to generalize our exact p-value construction beyond use solely with ℓ₁, we now consider conformity measures of the form

$$S_{i}(z)=r_{i}(z)^{\top}Wr_{i}(z)\equiv||r_{i}(z)||_{W}^{2}, \tag{21}$$

where r_i(z) = y_i − ŷ_i(z), and W is some q × q matrix. Proposition 3 provides a similar result to Proposition 2, but instead utilizes Equation (21). Namely, S_i becomes quadratic with respect to z, instead of piecewise-linear.

Proposition 3. *Assume the fitted model* ŷ^{(k)}(z_k) = H_k(x_{n+1}, x_i)y^{(k)}(z_k) *for each response dimension* k ∈ [q]. *Then, using Equation* (21),

$$S_{i}(z)=\begin{bmatrix}a_{1i}+b_{1i}z_{1}\\ \vdots\\ a_{qi}+b_{qi}z_{q}\end{bmatrix}^{\top}W\begin{bmatrix}a_{1i}+b_{1i}z_{1}\\ \vdots\\ a_{qi}+b_{qi}z_{q}\end{bmatrix}$$

*where* a_{ki} *and* b_{ki} *are the* i*-th elements of the vectors* A_k *and* B_k, *respectively, as defined in Equation* (17) *and Equation* (18).

Proof. Let S_i(z) be constructed as in Equation (21). Then, by Proposition 1, each element of the vector of residuals can be described in the form a_{ki} + b_{ki}z_k, which gives us the desired result.

With Proposition 3, the difference between S_{n+1}(z) and S_i(z) is the difference between two quadratic forms. Thus, we can describe the boundary of E_i for every i ∈ [n] as a *conic section*. Specifically, we can describe the difference between the candidate conformity measure and the conformity measure for observation i as,

$$S_{n+1}(z)-S_{i}(z)=\begin{bmatrix}a_{1n+1}+b_{1n+1}z_{1}\\ \vdots\\ a_{qn+1}+b_{qn+1}z_{q}\end{bmatrix}^{\top}W\begin{bmatrix}a_{1n+1}+b_{1n+1}z_{1}\\ \vdots\\ a_{qn+1}+b_{qn+1}z_{q}\end{bmatrix}-\begin{bmatrix}a_{1i}+b_{1i}z_{1}\\ \vdots\\ a_{qi}+b_{qi}z_{q}\end{bmatrix}^{\top}W\begin{bmatrix}a_{1i}+b_{1i}z_{1}\\ \vdots\\ a_{qi}+b_{qi}z_{q}\end{bmatrix}. \tag{22}$$

Knowing we aim to find the boundary of each E_i, *i.e.,* the roots of Equation (22), we can expand the statement into the form of a conic section such that

$$(1,z_{1},\ldots,z_{q})^{\top}[M_{n+1}-M_{i}](1,z_{1},\ldots,z_{q})=0, \tag{23}$$

where M_i is

$$M_{i}=\begin{bmatrix}\beta_{i0}&\beta_{i1}/2&\dots&\beta_{iq}/2\\ \beta_{i1}/2&\beta_{i11}&\dots&\beta_{i1q}/2\\ \vdots&\vdots&\ddots&\vdots\\ \beta_{iq}/2&\beta_{iq1}/2&\dots&\beta_{iqq}\end{bmatrix}, \tag{24}$$

with the construction of each element of M_i shown in Table 2.

Table 2: β parameter formulas.

| Parameter | Formula |
|-----------|---------|
| β_{i0} | Σ_{k=1}^{q} Σ_{j=1}^{q} a_{ik}a_{ij}w_{kj} |
| β_{ik} | 2 Σ_{j=1}^{q} a_{ij}b_{ik}w_{kj} |
| β_{ikj} | b_{ik}b_{ij}w_{kj} |

We can then translate a point s on the unit ball to the boundary of E_i with

$$z=\sqrt{K_{i}}\,L_{i}s+z_{i}^{c}$$

where L_i is the upper-triangular Cholesky matrix of M*_i ≡ M_{n+1} − M_i. We define (·)_{qq} as the lower q × q submatrix of the argument and (·)_{qqi} as the i-th row of the lower q × q submatrix. z^c_i is the center of E_i, *i.e.,*

$$z_{i}^{c}=(M_{i}^{*})_{qq}^{-1}(-M_{i}^{*})_{qq1} \tag{25}$$

and K_i is

$$K_{i}=\frac{-\det(M_{i}^{*})}{\det\big((M_{i}^{*})_{qq}\big)}.$$

In order to maintain the probabilistic guarantees inherent to conformal inference, we require W to be constructed exchangeably. Two constructions that satisfy exchangeability are: 1) W constructed independently of D_{n+1}(z), or 2) W constructed using all observations within D_{n+1}(z). However, we show in Section 5 that, in practice, setting W ≡ Σ̂^{−1}, the observed inverse-covariance matrix associated with the residuals from our q responses using a model constructed using only D_n, performs well. The p-value associated with some z using sets constructed using Equation (21) is the same as in Equation (11). While Proposition 3 does not restrict the structure of W, limiting W to be a symmetric, positive semi-definite matrix ensures that the set E_i is not only convex, but ellipsoidal.
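For intuition, the sketch below traces the boundary of such an ellipsoidal E_i numerically in two dimensions. We assume the set has already been reduced to the form {z : (z − z_c)^⊤ A (z − z_c) ≤ K} with A positive definite (playing the role of (M*_i)_{qq}, with z_c and K as in Equation (25)); this is our own parameterization for illustration, not the authors' code.

```python
import numpy as np

def ellipse_boundary(A, z_c, K, num=100):
    """Map points s on the unit circle to the boundary of
    {z : (z - z_c)^T A (z - z_c) <= K} via z = sqrt(K) * L s + z_c,
    where L L^T = A^{-1}, so (z - z_c)^T A (z - z_c) = K s^T s = K."""
    L = np.linalg.cholesky(np.linalg.inv(A))
    theta = np.linspace(0.0, 2.0 * np.pi, num)
    S = np.stack([np.cos(theta), np.sin(theta)])   # 2 x num unit-circle points
    return (np.sqrt(K) * (L @ S)).T + z_c          # num x 2 boundary points
```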
Without this additional restriction on the matrix W, the p-value change-point sets could be ill-formed, *i.e.,* non-convex. An example of an ill-formed p-value change-point set is shown in Figure 2. For clarity, we include Algorithm 2 to describe how each E_i can be constructed in practice when using ||·||²_W. We also compare the exact p-value change-point sets constructed using ||·||²_W with W = Σ̂^{−1} to the conformal prediction p-value contours constructed using gridCP in Figure 3. Exact p-value change-point sets constructed using ℓ₁ and ||·||²_W with W = I_q are shown in Figure 4.

![10_image_0.png](10_image_0.png)
(a) Plot of ψ₁ and ψ₂

![10_image_1.png](10_image_1.png)
(b) Plot of the difference ψ₁ − ψ₂

Figure 2: ψ₁ : z ↦ ∥y₀ − T₁z∥ and ψ₂ : z ↦ ∥T₂z∥ where y₀ = (1, 0)^⊤, T₁ = ( −1 −1; −1 0 ) and T₂ = ( 0 −1; 0 1 ).

![10_image_2.png](10_image_2.png)

Figure 3: Comparing gridCP sets (left) to closed-form p-value change-point sets (right) constructed using ||·||²_W with W = Σ̂^{−1}.

Algorithm 2 exact conformal prediction with ||·||²_W
Input: data D_n = {(x_1, y_1), …, (x_n, y_n)}, and x_{n+1}
Coverage level α ∈ (0, 1)
Dimension q
Matrix W of dimension q × q
\# *Initialization*
Construct H_k(x_{n+1}) for each k = 1, …, q.
Construct a_{ki}, b_{ki} for all i = 1, …, n + 1.
\# *Construct p-value change-point sets*
for i ∈ 1, …, n do
  Generate matrix M*_i ≡ M_{n+1} − M_i with M_{n+1} and M_i constructed as in Equation (24).
  Generate E_i = {z : (1, z)^⊤ M*_i (1, z) ≤ 0}
end for
Return: E = {E_i}_{i=1}^{n}

![11_image_0.png](11_image_0.png)

Figure 4: p-value change-point sets in two dimensions for observations from a single cement test data point with the ℓ₁ conformity measure constructed using Algorithm 1 (left), and the ||·||²_W conformity measure with W = I₂ (middle) constructed using Algorithm 2.

## 3.4 Sampling Points On The Boundary

To cope with higher-dimensional complexity, we now aim to sample points on the boundary of the conformal prediction set. We can describe our results more generally by finding roots of S_{n+1}(z) − S_i(z) where

$$z(t,d)=z_{0}+td \tag{26}$$

for some direction vector d ∈ R^q and some interior point z₀. Then, finding roots is limited to finding t* such that

$$t^{*}=\{t>0:S_{n+1}(z(t,d))-S_{i}(z(t,d))=0\}.$$

Restricting our models to linear predictors that follow Equation (3) in one direction of the space is equivalent to restricting the observed output. As such, we have

$$\begin{aligned}\hat{y}(z(t,d))&=Hy(z(t,d))=H(y_{1},\ldots,y_{n},z(t,d))^{\top}\\ &=Hy(0)+H(0,\ldots,0,z_{0}+td)^{\top}\\ &=Hy(z_{0})+tH(0,\ldots,0,d)^{\top}\end{aligned}$$

The conformity measures along the direction d are then given by

$$S_{i}(z(t,d))=\|y_{i}-\hat{y}_{i}(z(t,d))\|=\|a_{i}-tb_{i}\|$$
$$S_{n+1}(z(t,d))=\|z(t,d)-\hat{y}_{n+1}(z(t,d))\|=\|a_{n+1}-tb_{n+1}\|,$$

where we define

$$\begin{aligned}a_{i}&=y_{i}-(Hy(z_{0}))_{i},&b_{i}&=H_{n+1}d,\\ a_{n+1}&=z_{0}-(Hy(z_{0}))_{n+1},&b_{n+1}&=d-H_{n+1}d.\end{aligned}$$

The goal is now to solve the one-dimensional problem ψ(t) = S_{n+1}(z(t, d)) − S_i(z(t, d)) ≥ 0. Without this one-dimensional restriction, the computations are significantly more difficult and impossible to track without stronger data assumptions. This is illustrated in Figure 2, where we provide simple examples that lead to a non-convex set of solutions.
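The sketch below computes these directional residual profiles numerically. We again assume a single (n+1)×(n+1) hat matrix H shared across the q response dimensions, with H[i, n+1] denoting scalar entries; the sign convention (residual(t) = a_i + t g_i) and all names are ours, chosen for clarity rather than to mirror the paper's notation exactly.

```python
import numpy as np

def directional_profiles(H, Y, z0, d):
    """Residual of observation i along z(t, d) = z0 + t*d is a_i + t*g_i,
    so S_i(z(t, d)) = ||a_i + t*g_i||.  Y is n x q; z0, d are q-vectors;
    H is a single (n+1) x (n+1) hat matrix shared across tasks."""
    Y0 = np.vstack([Y, z0])            # y(z0): interior point appended as last row
    a = Y0 - H @ Y0                    # row i: a_i = y_i(z0) - yhat_i(z0)
    g = -np.outer(H[:, -1], d)         # for i <= n only the prediction moves with t
    g[-1] += d                         # row n+1: the target z(t, d) itself also moves
    return a, g

def conformity_along(a, g, t):
    """S_i(z(t, d)) for a scalar t, here with the Euclidean norm."""
    return np.linalg.norm(a + t * g, axis=1)
```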
For completeness, we describe below the solution for different norms to be used as score functions, and the explicit form of the conformal set for a given direction.

**Solving for the ℓ₁ norm** We have

$$\begin{aligned}\psi(t)&=\left\|a_{n+1}-tb_{n+1}\right\|_{1}-\left\|a_{i}-tb_{i}\right\|_{1}\\ &=\left\langle\text{sign}(a_{n+1}-tb_{n+1}),a_{n+1}-tb_{n+1}\right\rangle-\left\langle\text{sign}(a_{i}-tb_{i}),a_{i}-tb_{i}\right\rangle\\ &=c(t)+ts(t)\end{aligned}$$

where we can easily see that ψ is piecewise linear with slopes s(t) and biases c(t) defined as

$$\begin{aligned}s(t)&=\langle\text{sign}(a_{i}-tb_{i}),b_{i}\rangle-\langle\text{sign}(a_{n+1}-tb_{n+1}),b_{n+1}\rangle\\ c(t)&=\langle\text{sign}(a_{n+1}-tb_{n+1}),a_{n+1}\rangle-\langle\text{sign}(a_{i}-tb_{i}),a_{i}\rangle\end{aligned}$$

Every piece is characterized by the moment where the sign terms change, *i.e.,* when for a coordinate j ∈ [q] it holds that

$$a_{i,j}-tb_{i,j}=0\ \text{or}\ a_{n+1,j}-tb_{n+1,j}=0. \tag{27}$$

Without loss of generality, let us assume that b_{i,j} and b_{n+1,j} are nonzero; otherwise, the equation does not have a solution and we can skip them. Also let us assume that a_{i,j} and a_{n+1,j} are nonzero, otherwise the solution is trivially equal to zero. As such, we have 2q solutions of Equation (27) that we denote

$$t_{1}^{\star},\ldots,t_{2q}^{\star}.$$

By the Intermediate Value Theorem, the function t ↦ ψ(t) is equal to zero if and only if there exist consecutive breakpoints t*_k, t*_{k+1} for which ψ takes opposite signs, *i.e.,* ψ(t*_k)ψ(t*_{k+1}) ≤ 0 (note that the product is equal to zero only at the roots). Then, we deduce that E_i ≡ {z ∈ R^q : S_{n+1}(z) ≤ S_i(z)} restricted to the line z₀ + td is a union of intervals whose boundaries are delimited by the roots of ψ, easily obtained as

$$\hat{t}_{k}=-\frac{c(t_{k+1}^{\star})}{s(t_{k+1}^{\star})}\ \text{or}\ \hat{t}_{k}=-\frac{c(t_{k}^{\star})}{s(t_{k}^{\star})}.$$

**Solving for a general Mahalanobis score function** By monotonicity, squaring the norm in the definition of ψ does not change its level set. So, let us consider the function

$$\begin{aligned}\psi(t)&=\left\|a_{n+1}-tb_{n+1}\right\|_{M}^{2}-\left\|a_{i}-tb_{i}\right\|_{M}^{2}\\ &=t^{2}(\left\|b_{n+1}\right\|_{M}^{2}-\left\|b_{i}\right\|_{M}^{2})-2t(\left\langle a_{n+1},b_{n+1}\right\rangle_{M}-\left\langle a_{i},b_{i}\right\rangle_{M})+(\left\|a_{n+1}\right\|_{M}^{2}-\left\|a_{i}\right\|_{M}^{2}),\end{aligned}$$

where we denote ∥α∥²_M = ⟨α, α⟩_M and ⟨α, β⟩_M = ⟨α, Mβ⟩. Hence the roots of ψ are obtained by merely solving a quadratic equation.
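The quadratic case is short enough to write out directly; below is a minimal sketch of the root computation for the Mahalanobis score, with coefficients read off the expansion above. It handles the degenerate (affine) case as well; names are ours.

```python
import numpy as np

def mahalanobis_roots(a_np1, b_np1, a_i, b_i, M, tol=1e-12):
    """Roots of psi(t) = ||a_{n+1} - t b_{n+1}||_M^2 - ||a_i - t b_i||_M^2,
    i.e., of A t^2 + B t + C with the coefficients of the expansion above."""
    ip = lambda u, v: float(u @ M @ v)          # <u, v>_M = <u, M v>
    A = ip(b_np1, b_np1) - ip(b_i, b_i)
    B = -2.0 * (ip(a_np1, b_np1) - ip(a_i, b_i))
    C = ip(a_np1, a_np1) - ip(a_i, a_i)
    if abs(A) < tol:                            # degenerate: psi is affine in t
        return [] if abs(B) < tol else [-C / B]
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return []                               # psi never crosses zero
    r = np.sqrt(disc)
    return sorted([(-B - r) / (2.0 * A), (-B + r) / (2.0 * A)])
```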
**Explicit description of the conformal set** For any i in [n + 1], we denote the intersection points of the functions S_i(z_t) and S_{n+1}(z_t) by t_i's, and then E_i can be an interval (possibly a point), a union of intervals, or even empty. In all cases, it is characterized by the intersection points obtained explicitly. Since π(z(t, d)) is piecewise constant, it changes only at those points. We denote the set of solutions in increasing order as t₀ < t₁ < … < t_K. Whence for any t, there exists a unique index j = J(t) such that t ∈ (t_j, t_{j+1}) or t ∈ {t_j, t_{j+1}}, and for any t we have

$$(n+1)\pi(z_{t})=\sum_{i=1}^{n+1}\mathbb{1}_{t\in S_{i}}=N(\mathcal{J}(t))+M(\mathcal{J}(t))$$

where the functions

$$N(j)=\sum_{i=1}^{n+1}\mathbb{1}_{(t_{j},t_{j+1})\subset S_{i}}\ \text{and}\ M(j)=\sum_{i=1}^{n+1}\mathbb{1}_{t_{j}\in S_{i}}.$$

Note that J^{−1}(j) = (t_j, t_{j+1}) or J^{−1}(j) = {t_j, t_{j+1}}. Finally, we have that the restriction of the conformal set to the direction d is given by

$$\Gamma^{(\alpha)}(x_{n+1},d)=\bigcup_{\begin{subarray}{c}j\in[K]\\ N(j)>(n+1)\alpha\end{subarray}}(t_{j},t_{j+1})\ \cup\bigcup_{\begin{subarray}{c}j\in[K]\\ M(j)>(n+1)\alpha\end{subarray}}\{t_{j}\}. \tag{28}$$

## 4 Approximate Conformal Inference For Multi-Task Learning

While the results in Section 3 allow for the construction of exact p-values with no additional model refitting (for multiple responses), we still cannot describe the conformal prediction sets exactly in closed form. Thus, we aim to construct approximations of the conformal prediction set for a given x_{n+1}. In this section we specifically introduce a union-based approximation for a conformal prediction set generated using the results from Section 2.1.2. Additionally, we extend the root-based approximation procedures introduced in Ndiaye & Takeuchi (2021) to the multi-task setting.

## 4.1 unionCP Approximation Method

After constructing the set E for an incoming point x_{n+1}, it is initially unclear which regions E_i make up various conformal prediction sets, let alone how we need to combine these regions to get the exact conformal prediction sets. Thus, we aim to provide an approximation of conformal prediction sets using the regions generated with the approaches introduced in Section 3. We provide Proposition 4 to bound error probabilities associated with potential combinations of these regions.

Proposition 4. *Under uniqueness of conformity measures, for some* y ∈ R^q *such that* (x_1, y_1), …, (x_{n+1}, y_{n+1}) *are drawn exchangeably from* P, *for any* S ⊂ [n]

$$\mathbb{P}\Big(y\in\bigcup_{i\in\mathcal{S}}\mathcal{E}_{i}\Big)\geq\frac{|\mathcal{S}|}{n+1}.$$

Proof. Assume we have the data pair (x, y) drawn exchangeably with (x_1, y_1), …, (x_n, y_n). Also assume that we have constructed the set E. In Section 3, we show construction of E with ℓ₁ and ||·||²_W as conformity measures, but the following proof holds for any conformity measure. First, we select and fix any S ⊆ [n]. We then fix a z such that z ∉ ⋃_{i∈S} E_i.
Then,

$$z\not\in\bigcup_{i\in\mathcal{S}}\mathcal{E}_{i}\Longleftrightarrow S_{i}(z)\leq S_{n+1}(z)\ \forall\ i\in\mathcal{S}\Longrightarrow\sum_{i=1}^{n+1}\mathbb{1}\{S_{i}(z)\leq S_{n+1}(z)\}\geq|\mathcal{S}|+1.$$

Then, for y,

$$\mathbb{P}\Big(y\notin\bigcup_{i\in\mathcal{S}}\mathcal{E}_{i}\Big)\leq\mathbb{P}\Big(\sum_{i=1}^{n+1}\mathbb{1}\{S_{i}(y)\leq S_{n+1}(y)\}\geq|\mathcal{S}|+1\Big)$$
$$\Rightarrow\mathbb{P}\Big(y\in\bigcup_{i\in\mathcal{S}}\mathcal{E}_{i}\Big)\geq1-\mathbb{P}\Big(\sum_{i=1}^{n+1}\mathbb{1}\{S_{i}(y)\leq S_{n+1}(y)\}\geq|\mathcal{S}|+1\Big)\geq\mathbb{P}\Big(\sum_{i=1}^{n+1}\mathbb{1}\{S_{i}(y)\leq S_{n+1}(y)\}\leq|\mathcal{S}|\Big).$$

By Lemma 1, with the selection of α = 1 − |S|/(n + 1),

$$\mathbb{P}\Big(\sum_{i=1}^{n+1}\mathbb{1}\{S_{i}(y)\leq S_{n+1}(y)\}\leq|\mathcal{S}|\Big)\geq\frac{|\mathcal{S}|}{n+1}.$$

Proposition 4 states that with the selection of any subset of E, the probability of the response y_{n+1} being contained in the union of that subset is bounded below by a function of cardinality. For example, if we wish to construct, say, a conservative 50% prediction set, we could select (at random) a set S ⊆ E such that |S| ≥ |E|/2; the union of all sets within S would provide a conservative prediction set. We again note that while our work emphasizes ℓ₁ and ||·||²_W, Proposition 4 holds for any conformity measure. Now, the random set constructed might not provide tight coverage, as there exist some E_i such that

$$\bigcup_{i^{\prime}\in\mathcal{S}_{(i)}}\mathcal{E}_{i^{\prime}}\subseteq\mathcal{E}_{i},$$

where S_{(i)} is some subset of [n] that does not contain i; some p-value change-point sets are contained in others and, thus, choosing the larger set could result in extremely conservative coverage. We include results related to the theoretical coverage associated with the randomized approach in the Supplemental Materials. While the union of a random selection of regions forms a conservative prediction set with coverage at least |S|/(n + 1), we can provide more intelligently constructed sets that are empirically less conservative (but still valid). Suppose we provide an ordering of our regions, where E_{(k)} is defined as the k-th smallest region by volume.

Definition 1 (unionCP). *A more efficient* (1 − α) *prediction set approximation can then be constructed as*

$$\hat{\Gamma}^{(\alpha)}(x_{n+1})=\bigcup_{i\in\mathcal{S}_{1-\alpha}}\mathcal{E}_{(i)}, \tag{29}$$

*where* S_{1−α} = [⌈(1 − α)(n + 1)⌉]. *We dub the approximation shown in Equation* (29) *as* unionCP.

By Proposition 4, unionCP generates an approximation that is, at minimum, valid. We compare prediction sets constructed using unionCP to a random selection of regions for multiple predictors in Section 5. We find empirically that sets constructed using unionCP are less conservative than a random collection of p-value change-point sets. While Proposition 4 and the adjustment described in Equation (29) allow for conservative prediction sets, at times, the union of various E_i does not explicitly describe a conformal prediction set exactly. Thus, unionCP provides (at worst) a conservative approximation of the true conformal prediction set. Figure 5 provides an example where the region constructed with unionCP differs from the true conformal prediction set.
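Operationally, Equation (29) amounts to sorting the change-point sets by volume and keeping the ⌈(1 − α)(n + 1)⌉ smallest. The sketch below illustrates this selection step, treating each E_i abstractly as a membership function with a known volume; the interface is ours, not the authors'.

```python
import numpy as np

def union_cp_selection(volumes, alpha):
    """Indices of the regions retained by unionCP (Definition 1): the
    ceil((1 - alpha)(n + 1)) smallest p-value change-point sets by volume."""
    n = len(volumes)                           # one region per training point
    k = int(np.ceil((1 - alpha) * (n + 1)))
    order = np.argsort(volumes)                # E_(1), E_(2), ... by volume
    return order[: min(k, n)]

def in_union(z, regions, selected):
    """Membership in the unionCP approximation; regions[i](z) should
    return True iff z lies in E_i (e.g., a conic-section test from Algorithm 2)."""
    return any(regions[i](z) for i in selected)
```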
With full conformal prediction, the computational complexity depends heavily on the number of candidate values chosen, while the computational burden of unionCP depends on the number of observations n. To reduce the computation required to generate the approximation, we can utilize the result shown in Lemma 2.

![15_image_0.png](15_image_0.png)

Figure 5: Comparison of the full conformal prediction set for α = .25 (red line) and regions included in the unionCP approximation (black line(s)).

Algorithm 3 unionCP
Input: data D_n = {(x_1, y_1), …, (x_n, y_n)}, and x_{n+1}
Coverage level α ∈ (0, 1)
Subset size m ≤ n
\# *Initialization*
Generate a random subset M ⊆ [n] of size m.
Construct E^m = {E_i}_{i∈M} with Algorithm 1 or Algorithm 2.
\# *Construct conformal prediction region approximation*
Order each element of E^m by volume, where E^m_{(k)} is defined as the k-th smallest region by volume of E^m.
Generate S_{1−α} = {1, …, ⌈(1 − α)(m + 1)⌉}.
Set Γ̂^{(α)}(x_{n+1}) = ⋃_{i∈S_{1−α}} E^m_{(i)}.
Return: Γ̂^{(α)}(x_{n+1})

Lemma 2. *Let* U_1, …, U_n, U_{n+1} *be an exchangeable sequence of random variables. Then, any subsample* U_1, …, U_m *is also exchangeable.*

By Lemma 2, we can randomly select any m observations, where 1 < m ≤ n, and the conformity measures of this subset, along with S_{n+1}(z), will also be exchangeable. Thus, we can randomly select a subset of E of size m, defined as E^m, and then order this subset by volume, where E^m_{(k)} is defined as the k-th smallest region by volume of the set E^m. Then, by Proposition 4, unionCP constructed with this subset also provides valid prediction regions, at a potentially much lower computational cost. If we wish to avoid the unionCP approximation, we can generate exact p-values using Equation (11) in conjunction with a grid-based approach with much computational gain over that of full conformal prediction. We include Algorithm 3 to construct a generalized version of the unionCP approximation of the conformal prediction set for a given test observation.

## 4.2 Connection Between unionCP And splitCP

In this section, we provide further theoretical backing for unionCP by connecting it explicitly to splitCP. First, we can generalize the conformal prediction set generated when using splitCP beyond the use of the absolute residual. In general, for an incoming x_{n+1}, the split conformal prediction set is

$$\Gamma_{\texttt{split}}^{(\alpha)}(x_{n+1})=\{z:S_{n+1}(z)\leq s\}, \tag{30}$$

where S_{n+1}(z) is constructed as a function of z and ŷ_{n+1}, generated using observations in I_1, and s is the ⌈(|I_2| + 1)(1 − α)⌉-th smallest conformity measure for observations in I_2. In one dimension, it is easy to show that with the absolute residual |z − ŷ_{n+1}|, Equation (30) reduces to the region shown in Equation (8). For y ∈ R^q, when using ||·||²_W as the conformity measure,

$$S_{n+1}(z)\leq s\Rightarrow||z-\hat{y}_{n+1}||_{W}^{2}\leq s.$$

We also note that for splitCP, we can construct the p-value change-point set for observation i when using ||·||²_W as

$$\mathcal{E}_{i}\equiv\{z:S_{n+1}(z)\leq S_{i}(z)\}=\{z:||z-\hat{y}_{n+1}||_{W}^{2}\leq||y_{i}-\hat{y}_{i}||_{W}^{2}\}.$$

Now, if we select two observations i and j such that S_i(z) ≤ S_j(z), then the result in Proposition 5 holds.

Proposition 5. *For two observations* i *and* j *such that* S_i(z) ≤ S_j(z) ∀ z,

$$S_{i}(z)\leq S_{j}(z)\Rightarrow\mathcal{E}_{i}\subseteq\mathcal{E}_{j}.$$

Proof. We assume S_i(z) ≤ S_j(z) ∀ z.
We note that this assumption is valid for splitCP, as the conformity measure for each observation is constant with respect to z. Thus, we can provide an ordering of the conformity measures. Then,

$$\begin{aligned}\mathcal{E}_{j}&=\{z:S_{n+1}(z)\leq S_{j}(z)\}\\ &=\{z:S_{n+1}(z)\leq S_{j}(z)+S_{i}(z)-S_{i}(z)\}\\ &=\{z:S_{n+1}(z)\leq S_{i}(z)+\underbrace{S_{j}(z)-S_{i}(z)}_{\geq0\ \text{by assumption}}\}\\ &=\mathcal{E}_{i}\cup\{z:S_{i}(z)\leq S_{n+1}(z)\leq S_{j}(z)\}\end{aligned}$$

Thus, with the unionCP approach, we can match exactly the conformal prediction sets constructed using splitCP. We note that this result is related to the *nested* conformal prediction sets discussed in Gupta et al. (2022).

## 4.3 Root-Based Approximation Methods

As noted earlier, computation of the conformal prediction sets requires model readjustment for any candidate value to replace the true y_{n+1} value. Current efficient approaches to exact computation, limited to dimension one, are restricted to models that are piecewise-linear; this structure allows us to track changes in the conformity function. We have extended these approaches to higher dimensions in the previous section.

![17_image_0.png](17_image_0.png)

Figure 6: Illustration of the approximated conformal prediction set obtained by fitting an ellipse and a convex hull given boundary points obtained by rootCP. We use scikit-learn make_regression to generate a synthetic dataset with the parameters n_samples = 15, n_features = 5, and n_targets = 2, the dimension of the output y_{n+1}. We selected 80% of informative features and 60% for effective rank (described as the approximate number of singular vectors required to explain most of the input data by linear combinations), and the standard deviation of the random noise is set to 5.

To go beyond linear structures, we can use approximate homotopy approaches which, given an optimization tolerance, provide a discretization of all the values that y_{n+1} can take. However, these approaches are also limited to dimension one and have an exponential complexity in the dimension of y_{n+1}. Convexity assumptions are also required, which, unfortunately, are not verified for more complex prediction models. In this section, we extend the approximations of conformal prediction in multiple dimensions by computing conformal prediction set boundaries directly. Unlike the one-dimensional case where the boundary is often two points, in multiple dimensions the boundary is continuous and, thus, uncountable, which makes finite-time computation impossible. To get around this difficulty, the main idea here is very simple. We will first fix a finite set of search directions; we will estimate the intersection points between the boundary of the conformal prediction set and each chosen direction. Then, we use the points on the boundary as a basis to fit a convex approximation, *e.g.,* an ellipse or the convex hull, passing through these points.

![18_image_0.png](18_image_0.png)

Figure 7: Illustration of the approximated E_i with 30 search directions with conformity measures defined with ℓ_p norms. Note that the different level sets can actually overlap. Solid black lines denote ellipsoid approximations of E_i using calculated boundary points.

More formally, we want to estimate (efficiently) the set described in Equation (7).

**Assumptions.** We suppose that the conformal prediction set is *star-shaped*, *i.e.,* there exists a point z₀ such that any other point z within Γ^{(α)}(x_{n+1}) can be connected to z₀ with a line segment.
Note that a star-shaped set can be non-convex; we can still approximate complex, e.g., non-convex, conformal sets. We provide some illustration in Figure 6. We also note that ellipsoidal sets (or any convex set) are inherently star-shaped.

## Outline Of rootCP

Given any direction d ∈ R^q, the intersection points between the boundary of Γ^{(α)}(x_{n+1}), defined as ∂Γ^{(α)}(x_{n+1}), and the line passing through z₀ and directed by d are obtained by solving the one-dimensional equation

$$\pi(z(t,d))=1-\alpha, \tag{31}$$

where z(t, d) is as in Equation (26). We briefly describe the main steps and display the details in Algorithm 4.

1. Fit a model µ₀ on the observed training set D_n and predict a feasible point z₀ = µ₀(x_{n+1}).
2. For a collection of search directions {d_1, …, d_K}, perform a bisection search in [t_min, 0] and [0, t_max] to output solutions ℓ̂(d_k) and û(d_k) of Equation (31) at direction d_k, after at most log₂((t_max − t_min)/ϵ_r) iterations for an optimization tolerance ϵ_r > 0. Notice that the star-shape assumption implies that we will have only two roots on the selected directions.
3. Fit a convex set on the roots obtained at the previous step {ℓ̂(d_k), û(d_k)}_{k∈[K]}.

In practice, when one uses a least-squares ellipse as the convex approximation, a number of search directions K proportional to the dimension q of the target y_{n+1} is sufficient. This is not necessarily the case for the convex hull. We refer to Figure 6, where we observe that many more search directions are needed to cover the conformal set when using the convex hull approximation.

Algorithm 4 rootCP in multiple dimensions
Input: data D_n = {(x_1, y_1), …, (x_n, y_n)}, and x_{n+1}
Coverage level α ∈ (0, 1), accuracy ϵ_r > 0, list of search directions d_1, …, d_K
\# *Initialization*
Set z₀ = µ₀(x_{n+1}), where we fitted a model µ₀ on the initial training dataset D_n
\# *Approximation of boundary points*
for k ∈ {1, …, K} do
  We define the direction-wise conformity function as π_k(t) = π(z(t)) − (1 − α), where z(t) = z₀ + td_k
  1. t⁻ = bisection_search(π_k, t_min, 0, ϵ_r)
  2. t⁺ = bisection_search(π_k, 0, t_max, ϵ_r)
  Set ℓ̂_{d_k} = z₀ + t⁻d_k and û_{d_k} = z₀ + t⁺d_k
end for
\# *Convex approximation*
Γ̂(x_{n+1}) = c̃o{ℓ̂_{d_1}, û_{d_1}, …, ℓ̂_{d_K}, û_{d_K}}, where c̃o S is a convex set built on S, *e.g.,* the convex hull, or fit an ellipse using the points in S
Return: Γ̂(x_{n+1})

The root-finding approach can also be adapted to unionCP by approximating the level-line boundary of the E_i score difference introduced in Equation (10). In so doing, the previous restriction to quadratic functions that enabled an explicit construction is no longer necessary, at the cost of an approximation. We illustrate this generalization to different score functions in Figure 7.
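A minimal sketch of the bisection step at the heart of Algorithm 4 follows. It assumes the caller supplies any function pi computing the conformity score of Equation (6) at an arbitrary point (e.g., the naive ridge version sketched in Section 2), and that π(z₀) < 1 − α while π explodes far from z₀, so each bracket contains a sign change; all names are ours.

```python
import numpy as np

def bisection_search(f, lo, hi, eps=1e-6):
    """Locate a sign change of f on [lo, hi]; assumes f(lo) * f(hi) <= 0."""
    flo = f(lo)
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def root_cp_boundary(pi, z0, directions, alpha=0.1, t_max=10.0, eps=1e-6):
    """For each search direction d, solve pi(z0 + t d) = 1 - alpha on both
    sides of the interior point z0 (Equation (31)) and collect the hits."""
    points = []
    for d in directions:
        d = np.asarray(d, dtype=float)
        f = lambda t: pi(z0 + t * d) - (1.0 - alpha)
        for lo, hi in [(-t_max, 0.0), (0.0, t_max)]:
            t_star = bisection_search(f, lo, hi, eps)
            points.append(z0 + t_star * d)
    return np.array(points)   # then fit an ellipse or convex hull to these points
```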
## 5 Empirical Results And Application

To provide empirical support for our theoretical results, we consider a small data example using the multi-task cement data set (Yeh, 2007). For the sake of simplicity, we limit our exploration to two dimensions, focusing on the construction of prediction sets for slump and flow, given information on other elements in the mixture, e.g., amount of cement, fly ash and superplasticizer. We include empirical coverage results for the approximation approaches described in Section 4, specifically with regions constructed using ||·||²_W. We also include results for the root-based approximation described in Section 4.3 for a wide array of predictors, including those more general than the restrictions we outline in our paper.

![20_image_0.png](20_image_0.png)

Figure 8: Comparison of empirical coverage with random selection of k regions and unionCP for various predictors, including: linear regression (LR), Nadaraya-Watson (LC), and local-linear (LL) across 100 repetitions. The calibrated curve is constructed using *number of regions selected*/(m + 1), where m = n = 40.

As a reference benchmark, from Lemma 1, we have π(y_{n+1}) ≥ α with probability larger than 1 − α. Hence, we can define the oracleCP as π^{−1}([α, +∞)), where π is obtained with a model fit optimized on the oracle data D_{n+1}(y_{n+1}), on top of the root-based approach to find boundary points. We remind the reader that the target variable y_{n+1} is not available in practice.

## 5.1 unionCP Approximation Application

While our focus for the construction of H(x_{n+1}, x_i) has been general, for much of our discussion we often reference H(·) constructions associated with a ridge regressor. Thus, in this section, we wish to demonstrate the performance of unionCP with other methods that fall under the general model restriction; we specifically explore the prediction methods discussed in Section 3.1. We also note that there are additional methods other than these which also adhere to our model restrictions. Figure 3 includes a comparison of the p-value change-point regions constructed with Equation (21) to the conformal prediction sets constructed using the grid-based approach, also with linear regression for each predictor, as well as coverage results for various predictors on the cement data set. From Figure 8, we can see that the unionCP approximation, with each of the models, is empirically calibrated.

## 5.2 rootCP Approximation Application

We numerically examine the performance of rootCP on multi-task regression problems using both synthetic and real databases. The experiments were conducted with a coverage level of 0.9, *i.e.,* α = 0.1. For comparisons, we run the evaluations on 100 repetitions of examples, and display the average of the following performance statistics for the different methods: 1) the empirical coverage, *i.e.,* the percentage of times the prediction set contains the held-out target y_{n+1}, 2) the volume of the confidence intervals, and 3) the execution time. For each run, we randomly select a data tuple (x_i, y_i) to constitute the targeted variables for which we will compute the conformal prediction set. The rest is considered as observed data D_n. Similar experimental settings are considered in Lei (2019). We run experiments on a suite of complex regression models, including: multi-task elastic net, multi-layer perceptron, orthogonal matching pursuit, kernel ridge regression with both linear and Gaussian kernels, support vector regression, k-nearest neighbors and quantile regression. The results are shown in Figure 9.

![21_image_0.png](21_image_0.png)

Figure 9: Ellipse benchmarking of conformal sets for several regression models on the cement dataset. We display the volumes of the confidence sets over 100 random permutations of the data. We denote cov the average coverage, and T the average computational time normalized by the average time for computing oracleCP, which requires a single model fit on the whole data.

We include additional results in Supplementary Materials.
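To make the evaluation protocol above concrete, here is a schematic sketch of the repetition loop: hold out one pair, build a region from the rest, and record whether the held-out response is covered. The conformal_region argument is a hypothetical stand-in for any of the set constructions in this paper (unionCP, rootCP, gridCP); the interface and names are ours.

```python
import numpy as np

def empirical_coverage(X, Y, conformal_region, alpha=0.1, reps=100, seed=0):
    """Estimate coverage as in Section 5: conformal_region(X_tr, Y_tr, x_te, alpha)
    should return a membership function for the constructed prediction set."""
    rng = np.random.default_rng(seed)
    hits = []
    for _ in range(reps):
        i = rng.integers(len(Y))                  # randomly held-out tuple (x_i, y_i)
        mask = np.arange(len(Y)) != i
        region = conformal_region(X[mask], Y[mask], X[i], alpha)
        hits.append(bool(region(Y[i])))           # True iff y_i is in the set
    return float(np.mean(hits))                   # compare against 1 - alpha
```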
## 6 Conclusion

In this paper, we introduced exact p-values in multiple dimensions for predictors that are a linear function of the candidate value. Specifically, we discussed the exact construction of p-values using various conformity measures, including $\ell_1$ and $\|\cdot\|_W^2$. Additionally, we introduced methods for various approximations of multidimensional $1-\alpha$ conformal prediction sets through union-based and root-based prediction set construction, unionCP and a multi-task extension of rootCP, respectively. We also deliver probabilistic bounds and convergence results for these approximations. We then showed empirically with multiple predictors, including a subset of both linear and nonlinear predictors, that these approximations were comparable to gridCP sets, while drastically reducing the computational requirements.

One drawback of the methods described in this work is their lack of adaptability. We hope to include region adaptability in future work, e.g., with methods similar to those introduced in Messoudi et al. (2022). Other questions about the theoretical guarantees of our approach have yet to be answered. For example, we lack precise characterizations of the number of points to be sampled on the conformal set boundary, as well as of the implications of our convex approximations, *e.g.,* the ellipse and convex hull, for expected volume and potential coverage loss in the worst case. Besides the conformal sets presented in this paper, these questions are equally relevant to the construction of any high-dimensional confidence sets.

## References

Jonathan Alvarsson, Staffan Arvidsson McShane, Ulf Norinder, and Ola Spjuth. Predicting with confidence: using conformal prediction in drug discovery. *Journal of Pharmaceutical Sciences*, 110(1):42–49, 2021.

Anastasios N Angelopoulos, Stephen Bates, et al. Conformal prediction: A gentle introduction. *Foundations and Trends® in Machine Learning*, 16(4):494–591, 2023.

Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Predictive inference with the jackknife+. *The Annals of Statistics*, 49(1):486–507, 2021.

Umang Bhatt, Adrian Weller, and Giovanni Cherubin. Fast conformal classification using influence functions. In *Conformal and Probabilistic Prediction and Applications*, pp. 303–305. PMLR, 2021.

Hanen Borchani, Gherardo Varando, Concha Bielza, and Pedro Larranaga. A survey on multi-output regression. *Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery*, 5(5):216–233, 2015.

David S Broomhead and David Lowe. Multivariable functional interpolation and adaptive networks, 1988.

Lars Carlsson, Martin Eklund, and Ulf Norinder. Aggregated conformal prediction. In *IFIP International Conference on Artificial Intelligence Applications and Innovations*, pp. 231–240. Springer, 2014.

Leonardo Cella and Ryan Martin. Valid distribution-free inferential models for prediction. *arXiv preprint arXiv:2001.09225*, 2020.

Wenyu Chen, Zhaokai Wang, Wooseok Ha, and Rina Foygel Barber. Trimmed conformal prediction for high-dimensional models. *arXiv preprint arXiv:1611.09933*, 2016.

Giovanni Cherubin, Konstantinos Chatzikokolakis, and Martin Jaggi. Exact optimization of conformal predictors via incremental and decremental learning. In *International Conference on Machine Learning*, pp. 1836–1845. PMLR, 2021.

Alex Contarino, Christine Shubert Kabban, Chancellor Johnstone, and Fairul Mohd-Zaid. Constructing prediction intervals with neural networks: an empirical evaluation of bootstrapping and conformal inference methods.
*arXiv preprint arXiv:2210.05354*, 2022. Isidro Cortés-Ciriano and Andreas Bender. Concepts and applications of conformal prediction in computational drug discovery. *arXiv preprint arXiv:1908.03569*, 2019. Thomas Cover and Peter Hart. Nearest neighbor pattern classification. *IEEE transactions on information* theory, 13(1):21–27, 1967. Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. Conformal prediction bands for multivariate functional data. *Journal of Multivariate Analysis*, 189:104879, 2022. Martin Eklund, Ulf Norinder, Scott Boyer, and Lars Carlsson. The application of conformal prediction to the drug discovery process. *Annals of Mathematics and Artificial Intelligence*, 74:117–132, 2015. Jianqing Fan. Design-adaptive nonparametric regression. *Journal of the American statistical Association*, 87 (420):998–1004, 1992. Evelyn Fix. *Discriminatory analysis: nonparametric discrimination, consistency properties*, volume 1. USAF school of Aviation Medicine, 1985. Matteo Fontana, Gianluca Zeni, and Simone Vantini. Conformal prediction: a unified review of theory and new challenges. *Bernoulli*, 29(1):1–23, 2023. Alexander Gammerman, Vladimir Vovk, and Vladimir Vapnik. Learning by transduction. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, pp. 148–155, 1998. Chirag Gupta, Arun K Kuchibhotla, and Aaditya Ramdas. Nested conformal prediction and quantile out-of-bag ensemble methods. *Pattern Recognition*, 127:108496, 2022. Chancellor Johnstone and Bruce Cox. Conformal uncertainty sets for robust optimization. In Conformal and Probabilistic Prediction and Applications, pp. 72–90. PMLR, 2021. Arun Kumar Kuchibhotla. Exchangeability, conformal prediction, and rank tests. *arXiv preprint* arXiv:2005.06095, 2020. Alexander Kuleshov, Alexander Bernstein, and Evgeny Burnaev. Conformal prediction in manifold learning. In *Conformal and Probabilistic Prediction and Applications*, pp. 234–253, 2018. Jing Lei. Fast exact conformalization of the lasso using piecewise linear homotopy. *Biometrika*, 106(4): 749–764, 2019. Jing Lei, Alessandro Rinaldo, and Larry Wasserman. A conformal prediction approach to explore functional data. *Annals of Mathematics and Artificial Intelligence*, 74(1):29–43, 2015. Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. *Journal of the American Statistical Association*, 113(523):1094–1111, 2018. Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Conformal multi-target regression using neural networks. In *Conformal and Probabilistic Prediction and Applications*, pp. 65–83. PMLR, 2020. Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Copula-based conformal prediction for multi-target regression. *arXiv preprint arXiv:2101.12002*, 2021. Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Ellipsoidal conformal inference for multitarget regression. *Proceedings of Machine Learning Research*, 179:1–13, 2022. Elizbar A Nadaraya. On estimating regression. *Theory of Probability & Its Applications*, 9(1):141–142, 1964. Eugene Ndiaye. Stable conformal prediction sets. In *International Conference on Machine Learning*, pp. 16462–16479. PMLR, 2022. Eugene Ndiaye and Ichiro Takeuchi. Computing full conformal prediction set with approximate homotopy. arXiv preprint arXiv:1909.09365, 2019. Eugene Ndiaye and Ichiro Takeuchi. Root-finding approaches for computing conformal prediction set. 
*arXiv preprint arXiv:2104.06648*, 2021.

Jelmer Neeven and Evgueni Smirnov. Conformal stacked weather forecasting. In *Conformal and Probabilistic Prediction and Applications*, pp. 220–233, 2018.

Ilia Nouretdinov, Thomas Melluish, and Volodya Vovk. Ridge regression confidence machine. In *ICML*, pp. 385–392. Citeseer, 2001.

Henrik Olsson, Kimmo Kartasalo, Nita Mulliqi, Marco Capuccini, Pekka Ruusuvuori, Hemamali Samaratunga, Brett Delahunt, Cecilia Lindskog, Emiel AM Janssen, Anders Blilie, et al. Estimating diagnostic uncertainty in artificial intelligence assisted pathology using conformal prediction. *Nature Communications*, 13(1):7761, 2022.

Emanuel Parzen. On estimation of a probability density function and mode. *The Annals of Mathematical Statistics*, 33(3):1065–1076, 1962.

David W Scott. *Multivariate density estimation: theory, practice, and visualization*. John Wiley & Sons, 1992.

Vladimir Vovk. Cross-conformal predictors. *Annals of Mathematics and Artificial Intelligence*, 74(1):9–28, 2015.

Vladimir Vovk, Alex Gammerman, and Glenn Shafer. *Algorithmic learning in a random world*. Springer Science & Business Media, 2005.

Devin Wasilefsky, William Caballero, Chancellor Johnstone, Nathan Gaw, and Phillip Jenkins. Responsible machine learning for United States Air Force pilot candidate selection, 2023.

Geoffrey S Watson. Smooth regression analysis. *Sankhyā: The Indian Journal of Statistics, Series A*, pp. 359–372, 1964.

Donna Xu, Yaxin Shi, Ivor W Tsang, Yew-Soon Ong, Chen Gong, and Xiaobo Shen. Survey on multi-output learning. *IEEE Transactions on Neural Networks and Learning Systems*, 31(7):2409–2429, 2019.

I-Cheng Yeh. Modeling slump flow of concrete using second-order regressions and artificial neural networks. *Cement and Concrete Composites*, 29(6):474–480, 2007.

Yu Zhang and Qiang Yang. An overview of multi-task learning. *National Science Review*, 5(1):30–43, 2018.

## Supplementary Materials

In the following sections, we include additional content to further support the contributions of the paper.

## S.1 Additional Experiments

![25_image_0.png](25_image_0.png)

Figure S1: Benchmarking the convex-hull-based conformal sets for several regression models on the cement dataset. We display the lengths of the confidence sets over 100 random permutations of the data. We denote by cov the average coverage, and by T the average computational time, normalized by the average time for computing oracleCP, which requires a single model fit on the whole data.

![26_image_0.png](26_image_0.png)

Figure S2: Benchmarking the ellipse-based conformal sets for several regression models on the synthetic dataset. We display the lengths of the confidence sets over 100 random permutations of the data. We denote by cov the average coverage, and by T the average computational time, normalized by the average time for computing oracleCP, which requires a single model fit on the whole data.

![27_image_0.png](27_image_0.png)

Figure S3: Benchmarking the convex-hull-based conformal sets for several regression models on the synthetic dataset. We display the lengths of the confidence sets over 100 random permutations of the data. We denote by cov the average coverage, and by T the average computational time, normalized by the average time for computing oracleCP, which requires a single model fit on the whole data.

## S.2 Sketch-Proof Of Random Region Selection Coverage Probability

We now include a sketch-proof for the probability that a randomly selected subset of $\mathcal{E}$, of cardinality $k$, constructed with $\mathcal{D}_{n+1}(z)$, will contain $y_{n+1}$.

Proof.
First, we define $\Gamma_k$ as a random subset of $\mathcal{E}$ such that $|\Gamma_k| = k$, where each region $i$, constructed as a function of $\mathcal{D}_{n+1}(z)$, is selected with probability $1/n$. We assume, without loss of generality, that $\mathcal{E}_{(i)} \subset \mathcal{E}_{(i+1)}$ for all $i = 1, \ldots, n-1$. While this assumption does not always hold, it results in a larger upper bound. Then, for $y_{n+1}$,

$$\mathbb{P}\big(y_{n+1}\in\mathcal{E}_{(k)}\big)=\frac{k}{n+1}. \tag{32}$$

Furthermore, let the random variable $e_i = 1$ if region $\mathcal{E}_{(i)}$ is selected (with probability $1/n$). Then, for $k = 1$,

$$\begin{aligned}
\mathbb{P}(y_{n+1}\in\Gamma_{1})&=\sum_{i=1}^{n}\mathbb{P}(y_{n+1}\in\Gamma_{1},e_{i}=1)\\
&=\sum_{i=1}^{n}\mathbb{P}(y_{n+1}\in\Gamma_{1}\,|\,e_{i}=1)\,\mathbb{P}(e_{i}=1)\\
&=\frac{1}{n}\sum_{i=1}^{n}\mathbb{P}(y_{n+1}\in\Gamma_{1}\,|\,e_{i}=1)\\
&=\frac{1}{n}\sum_{i=1}^{n}\frac{i}{n+1} &&\text{(by Equation (32))}\\
&=\frac{1}{n(n+1)}\sum_{i=1}^{n}i\\
&=\frac{1}{n(n+1)}\cdot\frac{n(n+1)}{2}\\
&=\frac{1}{2}.
\end{aligned}$$

Now consider $\Gamma_k$ for $k > 1$. We define the random variable $\pi_k$ as a joint random variable for $(e_1, \ldots, e_n)$, where the support of $\pi_k$ is $\Pi_k$, the set of all $2^n$ combinations of $e_i \in \{0, 1\}$. Because we limit our regions to be nested, the $k$-th region is contained in the $(k+1)$-th region. Thus, we can describe coverage probabilities with various $\Gamma_k$ and $\pi_k$ by just examining the highest $k$ such that $e_k = 1$, defined as $\max_k(\pi_k)$. Then,

$$\begin{aligned}
\mathbb{P}(y_{n+1}\in\Gamma_{k})&=\sum_{\pi_{k}\in\Pi_{k}}\mathbb{P}(y_{n+1}\in\Gamma_{k},\pi_{k})\\
&=\sum_{\pi_{k}\in\Pi_{k}}\mathbb{P}(y_{n+1}\in\Gamma_{k}\,|\,\pi_{k})\,\mathbb{P}(\pi_{k})\\
&=\sum_{i=k}^{n}\mathbb{P}\big(y_{n+1}\in\Gamma_{k}\,\big|\,\max_{k}(\pi_{k})=i\big)\,\mathbb{P}\big(\max_{k}(\pi_{k})=i\big) &&\big(\mathbb{P}(\max_{k}(\pi_{k})=i)=0\ \forall\ i<k\big)\\
&\geq\sum_{i=k}^{n}\frac{i}{n+1}\,\mathbb{P}\big(\max_{k}(\pi_{k})=i\big) &&\text{(by Lemma 1)}\\
&=\frac{1}{n+1}\sum_{i=k}^{n}i\,\mathbb{P}\big(\max_{k}(\pi_{k})=i\big),
\end{aligned}$$

where

$$\mathbb{P}\big(\max_{k}(\pi_{k})=i\big)=\frac{\binom{i-1}{k-1}}{\binom{n}{k}}.$$

## S.3 Reproducibility

The source code utilized for our experimentation will be made available as open source at a later date.

## S.4 Discussion On Algorithm 1

There are several *potential* solutions to Equation (20); some of these solutions are invalid. In order to solve for valid solutions, we can find intervals where the argument of each absolute value component is less than zero. Without loss of generality, in two dimensions,

$$a_{2i}+b_{2i}z_{2}<0\ \Rightarrow\ z_{2}<-\frac{a_{2i}}{b_{2i}}\quad\text{and}\quad a_{2n+1}+b_{2n+1}z_{2}<0\ \Rightarrow\ z_{2}<-\frac{a_{2n+1}}{b_{2n+1}}.\tag{33}$$

Then, we can construct a set of intervals

$$(-\infty,l_{i}),\ (l_{i},u_{i}),\ (u_{i},\infty)$$

to check for solutions, where $l_i = \min\left(-\frac{a_{2i}}{b_{2i}}, -\frac{a_{2n+1}}{b_{2n+1}}\right)$ and $u_i = \max\left(-\frac{a_{2i}}{b_{2i}}, -\frac{a_{2n+1}}{b_{2n+1}}\right)$. The left-most interval corresponds to both the arguments within Equation (20) being negative; the right-most interval corresponds to both the arguments within Equation (20) being positive, resulting in

$$z_{2}(i)=\frac{c_{1}+(-a_{2i}+a_{2n+1})}{b_{2i}-b_{2n+1}}\quad\text{and}\quad z_{2}(i)=\frac{c_{1}+(a_{2i}-a_{2n+1})}{b_{2n+1}-b_{2i}},$$

respectively. We explicitly denote $z_2(i)$ as a function of the index $i$ because we must repeat this process for each observation. For simplicity, we drop the $i$ index.
The sign of the components when $z_2$ is contained within the inner interval depends on the value of $l_i$ and $u_i$, where

$$l_{i}=-\frac{a_{2i}}{b_{2i}}\ \Rightarrow\ \text{left component is positive, right component is negative;}$$
$$\text{otherwise}\ \Rightarrow\ \text{right component is positive, left component is negative.}$$

Regardless, we can find the two valid solutions by checking all four potential solutions to see if they fall within their respective intervals. We denote these two valid solutions $z_2^l$ and $z_2^u$, corresponding to the smallest and largest solution value, respectively. We can repeat the above process, instead fixing $z_2 = \tilde{z}_2$, to find equivalent solutions for $z_1$, denoted $z_1^l$ and $z_1^u$. The points $(\tilde{z}_1, z_2^l)$, $(\tilde{z}_1, z_2^u)$, $(z_1^l, \tilde{z}_2)$ and $(z_1^u, \tilde{z}_2)$, all generated as a function of $i$, make up the "corners" of each region $\mathcal{E}_i$.
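As a minimal sketch of this interval check, assuming the four candidate roots and the sign-pattern intervals on which they would be valid have already been computed (the helper name and data layout are ours, not from the paper's code):

```python
import math

def two_valid_roots(candidates):
    """candidates: list of (root, lo, hi) triples, one per sign pattern of the
    absolute-value components in Equation (20), where (lo, hi) is the interval
    on which that sign pattern holds (use -math.inf / math.inf for the two
    outer intervals). Keep only roots lying inside their own interval;
    generically exactly two survive."""
    valid = [root for root, lo, hi in candidates if lo <= root <= hi]
    return min(valid), max(valid)  # the lower and upper valid solutions
```

Running this once per observation $i$, first for $z_2$ and then for $z_1$, yields the four corner points described above.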
Review 1: Summary: The paper focuses on exact and approximate conformal inference for multi-task learning with different regression models. The driving idea is to deliver exact derivations of conformal inference p-values in the cases when the multi-variate predictive model can be described as a linear function of the response variable.

Strengths and Weaknesses: **Strengths:** The paper is (in general) well-written and easy to read. In that regard, I could follow the thread of the story the authors wanted to tell. To me, the presentation of methods looks correct and the knowledge of conformal inference also looks demonstrated. The desired contributions of the work seem very clear and there are no additional details that I could find incorrect.

**Weaknesses:** Even if the paper shows good writing quality with rigorous notation, the main arguments and decisions taken are not entirely clear to me in the manuscript. I summarise my concerns in the following points:
- I see the paper as somehow long, in the sense that it takes too much space to describe relatively simple concepts. I say this because one might think that much more space could be dedicated to the analysis, empirical results or discussion instead of the introductory points around conformal inference and regression methods.
- I do not entirely understand the decisions around the choice of the regression methods, and why extremely well-known methods in the literature like k-nearest neighbors (KNNs) are presented in the main paper when they could just be referred to using a citation. On the other hand, I feel that many details around the core CI topic are not properly explained and are just solved by pointing out conformal inference references out of the scope of the paper or the easy knowledge of the reader. In that sense I feel that the work is not entirely self-contained.
- The work is somehow imbalanced to me, as a lot of effort is placed on finding the methodology and notation to accommodate exact multi-task conformal inference, but later little is explored around the multi-variate nature of the data. For instance, in Section 5 it is said: *"For the sake of simplicity, we limit our exploration to two dimensions"* when showing the empirical results...
- There is not really any comparison with SOTA, when it is clear that other conformal inference methods around the multi-task regression topic have recently appeared in the literature. I particularly would like to point out Messoudi (JMLR 2022), who is also cited and commented on in the related work of the paper.
- The empirical results are perhaps too simple and limited to confirm the good performance of the proposed CI methodology. Also, I perceive that the authors did not spend a lot of space and time explaining to the reader what the results are indicating.

Requested Changes: Please see my concerns. I would like to remark my concerns about the limited empirical results, the lack of slowly-discussed conclusions, the comparison with Messoudi (JMLR 2022), and the flow of decisions around the regression models chosen.

**References** *S. Messoudi et al. Ellipsoidal conformal inference for Multi-Target Regression, JMLR, 2022*

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: The paper addresses the problem of computing full-conformal prediction regions for multi-dimensional models. Assuming that the prediction model is linear, the authors describe a series of approximation approaches for solving the computationally infeasible task.
The proposed algorithms apply to the case when the conformity score is the L1 and L2 norm of the residual.

Strengths and Weaknesses: Strengths.
- Whether approximating full conformal prediction is better than using computationally more efficient CP methods, e.g., split CP, is relevant.
- Multi-dimensional conformal prediction has been poorly investigated despite the recent successes of CP. The proposed strategies for extracting the multi-dimensional prediction regions are general and may be applied to other CP setups.

Weaknesses. The paper explores the same problem as [Ndiaye 2022] but under further assumptions on the prediction model. The goal is to extend the approximation scheme to the multi-dimensional case. The authors should explain better why it is worth approximating the full version of CP by making such assumptions, especially because splitCP seems to outperform rootCP in all but two numerical experiments. The presentation of the material could have been organised better. The authors should have made the differences between their method and existing approaches to multi-dimensional CP more explicit, e.g. by mentioning underlying assumptions and feasibility.

Requested Changes: Questions:
- What is rootCP mentioned in the abstract?
- Is $\Gamma$ assumed to be an interval or a convex set?
- In the table on Page 2, should $ndq \log(1/e)$ be $mdq \log(1/e)$?
- While reviewing existing methods (before Section 2.1 and before Section 3), it would be good to say if they apply to the full-conformal setup or/and specific resampling approximations, and to mention the assumptions on the prediction model and the shape of the prediction regions.
- Above Proposition 1, you write, "[Nouretdinov et al. (2001)] was extended to include both lasso and elastic net regressors...". Is the proposed method applicable to these extensions?
- Split conformal prediction is not as computationally expensive as full conformal prediction. Why is it better to approximate the latter instead of using the former?
- Regarding the extension to multiple-dimensional outputs: What is the difference between the proposed approach and the ellipsoid method of [Messoudi et al. 2022]?
- Is the change-point technique for constructing p-values new?
- GridCP is mentioned before being explicitly defined.
- Is unionCP related to work on multiple-output quantile regression, e.g. [1]?
- Can the proposed method produce non-convex or non-predictive regions? Does Assumption 8 apply only to rootCP?
- GridCP and UnionCP do not appear in the box plots on page 22. Why did you exclude them from the comparison?
- Have the methods been tested on data sets other than the Cement?

[1] Hallin, Marc, et al. "MULTIVARIATE QUANTILES AND MULTIPLE-OUTPUT REGRESSION QUANTILES: FROM L 1 OPTIMIZATION TO HALFSPACE DEPTH [with Discussion and Rejoinder]." The Annals of Statistics (2010): 635-703.

Broader Impact Concerns: Considering the recent successes of computationally cheap CP algorithms, e.g. split-CP, the authors should justify better why the proposed approximation is advantageous, from the theoretical and practical perspective.

==================================================

Review 3: Summary: Since conformal prediction is a rather abstract and still somewhat esoteric topic, I will start by trying to explain my understanding of the paper in my own words. The basic learning problem of interest here is estimating "prediction sets" for a response variable (denoted $y$) based on covariate vector $x$.
The basic property desired of such sets is "coverage," namely that for any probability level $\alpha$, given a fresh observation of $x$, the probability that $y$ is included in the set (allowed to depend on $x$) is *at least* $1-\alpha$. At a high level, the main approach considered here for estimating such sets given an iid (or exchangeable) sequence of $n$ covariate-response pairs $(x_{1},y_{1}),\ldots,(x_{n},y_{n})$ is conformal prediction, which utilizes a predictor for $y_{i}$ ($i = 1,\ldots,n,n+1$, where $y_{n+1}$ is understood as a test point), and looks at how well the predictions match the data. Put very roughly, the "full" conformal prediction approach discussed here uses the fact that, looking at non-conformity quantities incurred by the ancillary predictor trained on the first $n$ pairs plus a "candidate" pair $(x_{n+1},z)$, we can get a $1-\alpha$ bound on the order of the non-conformity incurred for the candidate point, which can enable us to check if $z$ is in the desired $1-\alpha$ prediction set or not. The problem in practice is that since we want to know *all* the $z$ which fall into this set, naive approaches require repeating the ancillary predictor sub-routine for all possible $z$, usually uncountably infinite. While this problem is not computationally feasible in general, this paper is interested in special settings in which the ancillary predictor depends *linearly* on the candidate response $z$; for non-conformity metrics based on the absolute error, linearity is inherited by the non-conformity quantities, and means that the "full" conformal prediction process can be carried out without having to re-run a $z$-dependent learning algorithm over $n+1$ points multiple times. If coverage is our only concern, then when the $y_{1},y_{2},\ldots$ are real-valued (i.e., single-output regression), taking small/large enough lower/upper bounds will give us a valid interval - we only need two real values. It may not be tight, but it will be coverage-valid. Things get more complicated if the responses are multi-dimensional, and this is the situation considered in the present paper.

Regarding the substantive contributions of this paper, the main points are as follows:
- The "nice" setting mentioned earlier (where the predictor varies linearly with $z$) is formulated for $q$-dimensional responses (first paragraphs of section 3, plus Props 2 and 3). Theoretically the main results follow essentially directly from assumptions, but the formulation is given clearly and with high generality. Furthermore, the authors give a detailed description of how to compute "change-point sets" using the structure afforded by the linear assumptions (see exposition in sections 3.2 and 3.3).
- The "hard" problem of actually making prediction sets is tackled in section 4; they link up the change-point sets of the previous section to prediction sets by lower-bounding the probability of inclusion of $y_{n+1}$ in unions of these sets. As a heuristic to get tighter sets, they consider volume-based ordering of these sets, shown in their $\texttt{unionCP}$ procedure (Defn 1, Eqn 29).

There also appear to be other concrete contributions, in particular sections 3.4 and 4.3, but I didn't really get how these fit into the overall story. I'll come back to these guys later.
Strengths and Weaknesses: The paper deals with a topic of machine learning research which I think is of interest to many, particularly due to the generality of CP and the recent interest in "uncertainty" quantification in the ML community. The extension of existing methods to multiple-output regression settings is non-trivial, and I think the authors present both new procedures and (nascent) experimental evidence of the utility of these procedures. The content of the proposed methods is for the most part quite clear, and I think the ideas/methods here will be of interest to a non-trivial fraction of the ML community.

On the negative side, I find the composition and organization of the paper to be a bit of a mess. In addition, there are numerous examples of sloppy notation peppered throughout the paper which make the overall story and key insights rather hard to parse. Perhaps a determined reader making many passes over the paper could get a clear picture of what is going on, but I found it rather hard going myself. I'll try my best to concisely summarize some points that tripped me up going through the paper. Order is roughly in terms of when things appear in the paper, not ordered in terms of importance.

- "Multi-task" is rather ambiguous; why not change the title from "Multi-task Learning" to "Multi-output Regression"? Same with abstract.
- Writing $\texttt{unionCP}$ and $\texttt{rootCP}$ in the abstract is rather meaningless; don't provide algorithm names, but rather just describe them. In particular, the abstract seems to assume the reader knows what $\texttt{rootCP}$ is, which is unwise.
- The ideal set we want to know is characterized in equation (1), and the usual "coverage" requirement is given in equation (4); this is all crystal clear. Why all the business related to hypothesis test inversions and p-values? I found all the exposition related to hypothesis tests extremely unclear, and I don't really see why it is necessary in formulating the problem of interest in the first place.
- After equation (4), the authors emphasize "*valid*" prediction sets; I take this as a definition, i.e., "valid" prediction sets are those that satisfy (4). If this is the case, it should be stated more explicitly.
- $\mathcal{D}_{n+1}$ definition missing curly braces, should be $\mathcal{D}_{n} \cup \{ (x_{n+1},z) \}$.
- The authors abruptly start using the definition/identity/equivalence symbol $\equiv$ after equation (7) with no explanation, even though there were several places prior to this where it would have been appropriate. This inconsistency should be avoided.
- The critical assumption used by the authors in the multi-output setting is equation (9), but I find the notation very hard to parse. What is $H_{k}(x_{n+1},x_{i})$? In particular, why is there emphasis on $x_{i}$? I cannot understand this. I found this all the more confusing when on the next page the authors write $H_{k}(x_{n+1},x_{i}) \equiv H_{k}(x_{n+1})$, since the quantity on the right-hand side has not yet been defined.
- In section 2, "valid" prediction sets are considered with no direct mention of "change-point sets," which appear suddenly in section 3 in the context of multi-output extensions. A link between sections 2 and 3 comes through the change-point set based p-value given in equation (11), but the link between p-values and getting a coverage-valid prediction set is not in my opinion described at all, really.
- p.6: the "Gaussian kernel" shown is $\\Phi(\\cdot)$, but this $\\Phi(\\cdot)$ is both undefined and to the best of my knowledge is typically used for the Gaussian CDF, not the density function. - Index in equation (14) product is wrong; should be $h\_{k}$ and $u\_{k}$ (replace $p$ with $k$). - I feel like the results of substance are not presented well in section 3.2 (and onward). What I mean by this is that while Proposition 2 is crystal clear, in my opinion the most important material is the structure utilized to motivate Algorithm 1. This is buried in the text, i.e., the last paragraph of page 8 ("Algorithm 1 leverages the fact that..."). I think this "fact" should be presented in a more clear way as a proposition with a proof. I know it is not a difficult thing, but organizing results that are used to directly derive algorithms makes the overall story flow much better. The same thing can be said for the next sub-section as well. - Algorithm 2: title line has $w$ where $W$ should be, and the brace sizing is wrong in for $\\mathcal{E}\_{i}$. - Regarding section 3.4, while I can roughly follow the details, I have no idea what the purpose of this section is. Sorry, the motivation isn't clear to me. - Proposition 4: saying "for some $y \\in \\mathbb{R}^{q}$" makes it sound like $y$ is fixed, and not a random vector. What happened to $y\_{n+1}$? Furthermore, the use of $\\mathcal{E}$ is quite troublesome. From Algorithm 2, $\\mathcal{E}$ is a list of $n$ change-point sets, but in the proof of Prop 4, the authors write $\\mathcal{S} \\subseteq \\mathcal{E}$, which can't be right. - Definition 1: here the authors introduce $\\texttt{unionCP}$, which is said to be "more efficient." I know it is just a heuristic, but this is really quite vague. Plus, on the computational side, we need to compute volumes in order to do this sorting, correct? Is this a non-issue? - Regarding section 4.3, I don't get the positioning of the root-based methods. The authors say in the second paragraph "To go beyond linear structures...", so I assume that $\\texttt{rootCP}$ is meant as a complement to Algorithm 3 when linearity doesn't hold... is this correct? If so, that is nice, but the paper is already quite long as is, and it feels like this section was just tacked on. I don't think the authors necessarily need to scrap it, but it is kind of hard to compare the two settings. One assumes linear structure, the other assumes a star-shaped prediction set; there is very little discussion regarding these points, so I just felt like this section really stood out. Requested Changes: Please see the comments above. Taking everything together, I think the authors have some results of value, but the paper needs a fair bit of polish before it can be considered ready for publication. I also wouldn't be surprised if other reviewers asked for more thorough empirical analysis. Broader Impact Concerns: Not applicable. ================================================== Metareview: Recommendation: Reject Comment: The reviewers generally felt compelled to recommend rejection of the manuscript due to a lack of a revised manuscript to look at. However, I do think the original review comments raised by the reviewers are possible to overcome in a careful major revision. I encourage the authors to resubmit once the revision is ready. ==================================================
# Affordance Extraction With An External Knowledge Database For Text-Based Simulated Environments

Anonymous authors
Paper under double-blind review

## Abstract

Text-based simulated environments have proven to be a valid testbed for machine learning approaches. The process of affordance extraction can be used to generate possible actions for interaction within such an environment. In this paper, the capabilities and challenges of utilizing external knowledge databases (in particular ConceptNet) in the process of affordance extraction are studied. An algorithm for automated affordance extraction is introduced and evaluated on the Interactive Fiction (IF) platforms TextWorld and Jericho. For this purpose, the collected affordances are translated into text commands for IF agents. To probe the quality of the automated evaluation process, an additional human baseline study is conducted. The paper illustrates that, despite some challenges, external databases can in principle be used for affordance extraction. The paper concludes with recommendations for further modification and improvement of the process.

## 1 Motivation

Simulated environments are an indispensable tool for modern studies of artificial intelligence and machine learning in particular. As such, they provide the possibility to develop new methods and even paradigms, as well as the opportunity for testing and evaluation. Simulated environments come in different modalities, degrees of complexity and fields of application. Recently, the domain of video games has emerged as a popular test field for algorithms, including many concepts, tools and paradigms of modern AI research like Reinforcement Learning (RL), Knowledge Graphs (KG) and Neural Networks (NN). The potential of these developments has been demonstrated in several landmark studies, for example on the classic board game Go (Silver et al., 2016) or through the successful application to Atari games (Mnih et al., 2015). In many of these cases, algorithms were able to achieve or even surpass human performance level.

Encouraged by these results, attempts were made to extend the developed methods to the field of text-based games. Among these, in particular Interactive Fiction (IF) (also known as *text adventures*) received some attention. Environments like *Textworld* (Côté et al., 2018) or *Jericho* (Hausknecht et al., 2019a) were designed specifically to aid studies of machine learning performance on IF games. Correspondingly, challenges have been established to compare the performance of different algorithms in solving selected text-based tasks. However, according to the evaluation metrics of these challenges, even the best-performing solutions still rarely achieve scores comparable to human performance without task-specific fine-tuning. For example, referring to the first installment of the popular IF franchise *Zork*, to our knowledge the best results in playing Zork1 (without employing game-specific methods) have been achieved by Ammanabrolu & Hausknecht (2020), scoring around 12% of the achievable reward. While certain solutions score impressive results in specified challenges (e.g. *First Textworld Problems* 1), these solutions are often specialized in particular tasks (e.g. cooking meals) and generalize poorly to other domains (see e.g. Adolphs & Hofmann (2019)).
1First TextWorld Problems: A Reinforcement and Language Learning Challenge, https://www.microsoft.com/en-us/research/project/textworld/

A major aspect for explaining this performance gap is the increased environmental complexity of IF games compared to other AI challenges, like arcade games or conventional board games. To provide the player with a feeling of immersion, IF simulates an environment which aims to resemble a life-like, detailed scenario for the player to explore and interact with. The emerging setup poses two additional challenges: On the one hand, all sensory input and communication between agent and environment is mediated by textual descriptions. To convert these descriptions into computationally processable information, methods of natural language processing (NLP) are necessary. On the other hand, the growing complexity of the environment may inflate both action- and state-space of the model descriptions involved, e.g. (partially observable) Markov Decision Processes ((PO)MDPs). For example, collecting all combinatorially possible commands from the parser vocabulary of *Zork* yields approx. 200 billion commands. Only a small subset of these commands can actually be executed by the parser, while most commands consist of arbitrary combinations of verbs, prepositions and objects which do not yield any reasonable semantics. The above-mentioned platforms mostly address this problem by introducing some kind of predefined command templates.

For a more generalized approach, the concept of *affordances* can be introduced to the field. In a general sense, *affordances* refer to the possible interactions available to a given agent in any given situation. A seminal work for affordances in the context of IF games is Fulda et al. (2017), and the following arguments build on the definitions of this work. The corresponding process of *affordance extraction* from external (that is: not game-related) knowledge bases is the central focus of this paper.

The following chapters establish and evaluate a way of including external knowledge bases (demonstrated with *ConceptNet*) for affordance extraction in text-based tasks. Three main aspects are addressed:

- The paper aims to demonstrate that IF, and to some extent text-based environments in general, can be used as testbeds for automated affordance extraction procedures.
- It is further demonstrated that the process of automated affordance extraction can benefit from external databases like ConceptNet.
- The paper discusses remaining open issues and possible extensions, as well as giving recommendations for future data collection strategies and algorithmic improvements.

## 2 Affordance Extraction

The term *affordance* was introduced by J.J. Gibson in the context of visual perception studies (Gibson, 1966). It was used to describe the situational nature of interactions between an animal and its environment, e.g.: while a monkey is able to climb a tree, an elephant instead can simply reach for its fruits. Both animals can "afford" different actions with the same object. In a more abstract sense, affordances determine interaction options for agents in their environment. Correspondingly, this term has found its way into various fields of research, e.g. robotics, visual computing and natural language processing, as in Grabner et al. (2011); Dubey et al. (2018); Zhu et al. (2014). For the purpose of this paper, the latter is of central importance.
Most algorithms used for solving IF challenges rely on RL and therefore on policy generation/optimization. This approach requires a set of possible actions (which can be seen as instances of *affordances* in this context) for every game state. The process of generating such affordances from any provided input (e.g. text excerpts) is called *affordance extraction*. While affordance extraction has recently drawn increasing attention from the scientific community (Fulda et al., 2017; Loureiro & Jorge, 2018; Khetarpal et al., 2020), in practical applications this process is sometimes shortened or even omitted, and a stronger focus is laid on the rating of available actions. This is possible because either the available actions are few and trivial (e.g. moving directions in a maze), or they are provided by the environment itself (e.g. *TextWorld* and *Jericho* both provide a set of *Admissible Commands* (AC)). However, for any more general approach using RL and/or (PO)MDP-like models in object-based environments, affordance extraction is a useful and sometimes even necessary technique.

It should be noted that the process of affordance extraction is sometimes not clearly distinguished from policy optimization in the literature. That means that by defining an action space, contextual information, e.g. about the task to be solved, is often already included and acts as a filter for affordances. Regarding pure affordance extraction, however, the question "Which action could be executed on/with this object?" should not be confused with the question "Which action on this object would make sense to solve the task?". In the scope of this work, a clear focus is put on the first question, while the goal of providing a reasonable or even "optimal" action space for solving the game itself is not addressed.

## 3 Interactive Fiction

In this paper, games of *Interactive Fiction* (IF) are used as a test case for affordance extraction. As such, the general mechanics and properties of IF are briefly described in this chapter. The term *Interactive Fiction* describes a certain genre of video games, in which the player is performing actions in a simulated environment. The actions, as well as the description of the environment, are communicated through a text-based interface. Therefore, this type of game is sometimes colloquially referred to as *text adventures*. The first verified text adventure, *Colossal Cave Adventure*, was created by Will Crowther in 1975 (Crowther, 1976), with Don Woods expanding and releasing it later in 1977 2. Soon after, text adventures grew more popular and a variety of prominent commercial and non-commercial games have been released. Among them was the popular *Zork* franchise, which up to this day provides an often-studied challenge for machine learning algorithms (Atkinson et al., 2018; Ammanabrolu et al., 2020; Tessler et al., 2019; Jain et al., 2019; Yin & May, 2019).

While in theory the whole range of natural language can be used to issue commands to an IF game, the parser of the game is limited to a certain vocabulary and grammar. Two common types of IF games are distinguished according to the form of command input:

- *Parser-based*: The commands are entered in free text by the player and are then interpreted by a parser.
- *Choice-based*: The player is choosing from a predefined set of commands in every game state. The representation of these commands and their amount may vary (e.g. in the form of hypertext or a plain list of text commands).
In many cases in the literature, this distinction is not explicitly made. As for example noted by Zelinka (2018), within the limits of most parsers, any parser-based problem can be converted into a choice-based one by spelling out explicitly all possible combinations of vocabulary accepted by the parser. This paper will focus exclusively on parser-based IF games.

For the purposes of this paper, IF provides an interesting opportunity to test affordance extraction using external databases. The simulated environment of the games is built around objects and interactions: Along with the movement of the agents (and sometimes dialogues), object interaction is often the only available action, and the text descriptions are usually designed with these interactions in mind. This marks an important difference from other text-based media, e.g. belletristic literature, which often only implicitly (or not at all) contain information about a physical environment and the interaction possibilities of characters. On the other hand, compared to agents interacting with physical reality, IF games offer a decreased complexity level due to the reduced amount of information about environment and objects, which is mediated by the text.

## 3.1 Environments For Studying Interactive Fiction

Several platforms have been developed to study NLP or machine learning related problems with IF. Among the most prominent are *TextWorld* and *Jericho*, which are also used for the evaluation in this paper.

2https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html

## 3.1.1 Textworld

TextWorld (Côté et al., 2018) "is an open-source, extensible engine that both generates and simulates text games" 3, developed by Microsoft. It was specifically designed as a testbed for reinforcement learning agents in the context of language understanding and (sequential) decision making. Therefore, it provides important features valuable for studies of IF:

- Customizable scenarios: By adjusting input parameters, the user is able to create scenarios with a varying degree of complexity, e.g. the number of involved objects or the minimal number of steps to solve the scenario.
- Arbitrarily large test/evaluation set: In principle, a very large number of scenarios can be generated by TextWorld, limited only by the number of items, objects and locations in the vocabulary of the engine.
- Different game modes: By offering three different goal definitions (cooking a meal, collecting coins and finding a treasure), TextWorld theoretically allows for the evaluation of different skillsets and a very rudimentary kind of generalization.
- Observation and evaluation options: The TextWorld engine contains a set of options to assist studies of agents interacting with the scenarios. For example, lists of available objects and commands can be obtained for any situation, as well as information about location, score and even an optimal "walkthrough" towards the game's goal.

Building on these features, TextWorld has been featured in a number of studies (e.g. Jain et al. (2019); Hausknecht et al. (2019b); Tao et al. (2018); Zelinka et al. (2020); Madotto et al. (2020)) and, most notably, it has also been used to facilitate the *First TextWorld Problems* competition (*FTWP*, see Chapter 1).

## 3.1.2 Jericho

Jericho (Hausknecht et al., 2019a) is another platform for language- and AI-based studies of IF, but it follows a rather different approach from TextWorld.
Instead of creating its own IF scenarios from basic pieces of vocabulary and scenario-related input parameters, the Jericho suite is compiled from already existing IF games, which are made available through the Jericho framework. The platform supports over 30 text adventures, which cover a wide range of commercial and non-commercial games (e.g. fan projects), spanning several decades. Similar to TextWorld, Jericho provides options to access information about the full game state (including inventory, score and ACs) as well as walkthroughs for the supported games.

While in principle both frameworks offer similar options for controlling an agent and evaluating the corresponding moves, there are important differences regarding the nature of the scenarios themselves. While TextWorld focuses on simplified and fully customizable sets of scenarios for the specific purpose of RL/ML studies, the IF games in Jericho were created for the purpose of human entertainment. As such, the latter tend to have a higher degree of complexity and creativity, as they are meant to challenge and entertain a human user. On the contrary, the solution to TextWorld scenarios is often perceived to be very easy or even obvious by human players. In this respect, the somewhat complementary nature of both platforms offers an opportunity to study the performance of algorithms at different levels of complexity.

3https://www.microsoft.com/en-us/research/project/textworld/

## 4 Simulation Setup

## 4.1 External Knowledge And Conceptnet

In the context of RL-related approaches, data is usually collected through interactions of an agent with its environment. To generate data, information on available interactions usually has to be supplemented and
WikiData can be queried through a publicly available API. - WordNet (Fellbaum, 1998) organizes terms from the english language in a semantic network. WordNet supports disambiguation of terms, by using *Synsets* (sets of synonyms, forming a common concept) and explicitly orders terms along other relations, such as antonymy, hyponomy, meronymy, troponymy and *entailment*. None of these relations refers to the typical usage of an object or a term. - ConceptNet (Speer et al., 2016) is a partially curated collection of commonsense knowledge from a number of sources. Among other sources like DbPedia, OpenCyc and others, the Wiktionary online dictionary is used to gather its contents. ConceptNet organizes its contents in a knowledge graph using a set of relations4to connect nodes within the graph. ConceptNet does not offer disambiguation of terms. Among the supported relations (see Speer et al. (2016) for more details) some explictly address the common usage of objects, such as: *CapableOf, UsedFor, ReceivesAction*. The knowledge graph of ConceptNet is accessible via a public available API. - NELL (Mitchell et al., 2018) is an acronym for "Never Ending Language Learner". It refers to a continually updated knowledge base of terms, which are ordered in a taxonomy. Updates of the knowledge base happen mostly automatically by means of web-crawlers parsing sites on the internet and storing the parsed information as logical statements within the taxonomy. During preliminary tests, no useful information for affordance extraction could be extracted from this knowledge base. Referring to the above mentioned criteria, *ConceptNet* (CN) has been selected for evaluation. 4https://github.com/commonsense/conceptnet5/wiki/Relations For the purpose of this study, CN has some major advantages: It provides information in a machine readable format, as well as an API for automated queries. At the time of this study CN provides about 34 million pieces of relational information (*edges*) and therefore offers a comparatively large amount of possible environmental information. The next Section discusses the CN properties relevant for this study. ## 4.2 Command Generation With Conceptnet The object information in ConceptNet is organized into labeled categories like synonyms, antonyms or typical locations, where the object can be found. For the extraction of possible object affordances, in principle three categories are offering corresponding information: - *used for*: Describes what the object is *used for*, usually resulting in a single verb phrase (e.g. "knife is used for slicing"). - *capable of* : Describes what the object is *capable of*. Semantically, this often yields results similar to the *used for*-category, but puts more emphasis on active capabilities, rather than the passive usage. - *receives action*: Describes which action can be performed on the object. In the description of ConceptNet, this category is somehow misleadingly labeled "Can be..." (see Section 6.2). An example for this category is "a book can be placed in a drawer". A prelimary analysis showed that most relevant information for affordance extraction is concentrated in the *used for-* and the *receives action-* categories. The *capable of-* category often contains information which is either irrelevant or already covered in the other categories. Furthermore, ConceptNet offers weights (reflecting the frequency of appearance) for every entry. 
While weights are not used in this proof-of-concept study, they are in principle a useful option, as the human input sometimes refers to subjective or (very) rare affordances (for example, one user provided the entry "knife is capable of free one's soul"). These special cases may provide useful for some studies which try to deal with "long tail events" or advanced reasoning, but for the purpose of affordance extraction on a basic level they are ineffective most of the time. The information stored in ConceptNet is organized in n-tuples of information, connecting concepts via the previously discussed categories above. These n-tuples (mostly triples, i.e. "(knife, UsedFor, cutting)") in their unaltered form cannot be interpreted by any IF parser as commands for the agent. Some means of translation have to be established. This task itself is related to Natural Language Generation (NLG) and many sophisticated solutions might be employed in future applications. For the purpose of this paper, some basic translation rules are defined. In the affordance extraction algorithm, two rules are used, referring to the number of items involved. Conveniently, these two rules correspond to the two categories selected for evaluation in this section: ## 4.2.1 Affordances With One Object Information from the *Receives Action* category usually refers to actions which can be performed on the queried object. The corresponding template for triples of this category is therefore designed as: verb (imperative form) + object, i.e. the triple "(door, ReceivesAction, opened)" transforms to "open door". ## 4.2.2 Affordances With Two Objects The information from the *Used for* category is phrased in a more modal way. For example the triple "(knife, UsedFor, slicing)" implicitly refers to another object, which can be sliced with a knife. By identifing the required second object, an affordance with two objects is formed, i.e. "(knife, UsedFor, slicing, tomato)". When querying ConceptNet, the returned text fragments are processed with spaCy (Honnibal & Montani, 2017) to determine if, in addition to the object in question and a verb, more nouns are present. If this is the case, these nouns are considered to be additional objects, that are required for the described action. If two objects of such an affordance are available at the same time, a corresponding command becomes possible. The translation rule for this case is defined as: verb (imperative mood) + target object + preposition + queried object, i.e. the 4-tuple "(knife, UsedFor, slicing, tomato)" transforms to "slice tomato with knife". The correct proposition is determined via statistical means by counting associated prepositions for every word in large text corpora. While this solution is sufficient for the study at hand, it is modularized and simply exchangeable with a more sophisticated routine. It should be mentioned, that the command generation is largely dominated by one-object-affordances, due to the comparativly sparse connections between most objects in ConceptNet. Therefore, two-object-commands are rarely relevant for later evaluation. Abstraction techniques might address this problem (See Section 6.2). ## 4.3 Algorithm Overview The algorithm is written in Python 3.6. Its general strucuture for the automated evaluation procedure is ![6_image_0.png](6_image_0.png) shown in Figure 1. Figure 1: Algorithmic setup for the automated evaluation of the generated commands. 
The algorithm is designed to receive a piece of text as input and is therefore able to be used with any text source. For the purpose of this study, an interface to TextWorld and Jericho (see Section 3.1) is assumed. The algorithm performs the following steps:

(1) **Textual Description Extraction**: Entering a new scenario, the algorithm receives textual information about the environment using the "look" command (or parsing the general description provided by the game), as well as textual information about the available inventory. Both inputs are concatenated into one string. Note that for this step, in principle every text source is valid. Textual description extraction then has to be adjusted accordingly.

(2) **Object Identification**: From the textual input of (1), objects for possible interaction are identified. In some cases, it is possible to use pre-defined lists of objects for this task (this is done when playing in TextWorld, denoted as (2b) in Figure 1). But as the algorithm is designed to work for every text description, identification of objects via PoS-tagging is also available as a more general method (this is done when playing in Jericho, denoted as (2a) in Figure 1). The algorithm uses spaCy as a *Parts of Speech*-Tagger (PoS-Tagger). It should be noted that this solution is not particularly robust against certain situational and linguistic problems (e.g. recognizing a boat on a painting as an actual item in the room, or the fact that "open window" can be interpreted as a window that is open, or as a command to open a closed window). For the purpose of this paper, these disadvantages are accepted to retain a broader field of application.

(3) **Object Affordance Query**: The list of objects is handed to the command generator module (named ConceptNetCommandGenerator), which queries corresponding information from ConceptNet. The criteria of ConceptNet regarding queried categories and weights are described in Section 4.2. While querying ConceptNet, a local cache is used to speed up future queries in the evaluations of this study. This cache is, however, entirely optional. Alternatively, the query cache can also be substituted with a full knowledge graph, using for example an RDF data format.

(4) **Command Generation**: The command generator module converts the queried affordances into text commands according to the translation rules stated in Section 4.2. As a result, a list of generated commands (GCs) is handed to the evaluation code, along with other relevant information from the corresponding state (e.g. ACs provided by the framework).

(5) **(Automated) Evaluation**: The GCs are evaluated automatically according to the procedure and metrics outlined in Sections 5.1 and 5.2. The results are stored in a logfile. For the human evaluation procedure (see Section 5.4), this step is slightly modified. The ACs are discarded, and the GCs from step (4) are instead presented to a human volunteer, paired with the corresponding textual description from (1).

(6) **Proceeding to the next Scenario**: The algorithm executes a command selected from a predefined list (usually a walkthrough) to enter the next scenario. Step (1) then commences again, until all pre-selected scenarios of the game have been visited. This step is not visualized in Figure 1, and its execution depends on the applied text source.

The algorithm is modularized and allows for improvement or replacement of single components, e.g. object identification (Step (2a)) or command generation (Step (4)).
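As an illustration of step (2a), the following minimal sketch performs the PoS-based object identification with spaCy; it assumes the small English model `en_core_web_sm` is installed and deliberately keeps the naive noun filter, with the caveats noted above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

def identify_objects(description):
    """Step (2a): collect candidate interaction objects from a scene description
    via PoS-tagging. Deliberately naive: every noun is kept, so non-interactable
    nouns (e.g. a boat depicted on a painting) are also returned."""
    doc = nlp(description)
    return sorted({token.lemma_.lower() for token in doc if token.pos_ == "NOUN"})

# identify_objects("There is a small key on the wooden table.") -> ["key", "table"]
```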
## 5 Evaluation

## 5.1 Evaluation Setup

To assess the performance and the robustness of the algorithm in an efficient way, the evaluation setup aims to present as many unique and clearly distinguishable scenarios as possible to the agent. Naturally, different games (usually) offer different scenarios, so for both frameworks, TextWorld and Jericho, a sufficient number of games is selected to maintain diversity (see Section 5.3). However, it should also be ensured that the agent does not evaluate too many similar scenarios within a single game, e.g. by getting stuck in the same room. The walkthrough option provided by both frameworks offers a simple way to automatically achieve progression in the current game setup. The walkthrough is provided as a list of commands, which guide the agent towards the corresponding goal when executed sequentially. While following the walkthrough path, the steps for affordance extraction and command generation outlined in Section 4.3 are executed. Additionally, the extraction and generation process is only applied if the location of the current state has not been visited before, so location-induced double counting (for example when travelling between locations and passing the same site multiple times) is eliminated.

## 5.2 Evaluation Metrics

The routine described in Sections 4.3 and 5.1 produces a list of affordances (phrased as parser-compatible text commands) for any textual input. To assess the quality of these results, two evaluation types are employed: the automated comparison to the ACs provided by the framework, and a human baseline. The automated comparison refers to the option available in both TextWorld and Jericho to provide a list of commands which can be executed in the current game state. The traditional metric of *precision* can be applied to this situation: it quantifies how many of the produced commands are also an AC. If the affordance extraction process reproduces exactly the AC list, a score of 1 is achieved. However, this kind of metric alone is not sufficient to judge the quality of the generated affordances. The reason for this lies in parser-related constraints (this refers to grammatical patterns as well as vocabulary) and game-related simplifications. The AC list is by no means complete. In every situation, there are additional commands that would appear reasonable by human standards, but will not be recognized by the parser and thus are not listed as an AC. This can refer to commands which are not part of the parser's vocabulary due to the use of synonyms (e.g. "shut door" instead of "close door"), as well as actions which have not been taken into account when the game was made, mostly because they are not relevant for the task-related progress within a game (e.g. "carve table with knife" or "jump on table"). For this reason, a human baseline is introduced in addition to the automatically computed metrics: a volunteer ([one of the authors, anonymized for review process]) is presented with the textual description of the scenario, as well as the list of commands generated from this description. The volunteer is then asked to sort each command into one of the following categories:

- Contextually suitable (A): The command is perceived as physically possible and useful in the current situation.
- Valid, but contextually infeasible (B): The command is generally perceived as valid, but does not appear useful or feasible in the current situation (e.g.
"bite table"; or "open window" if the window is not physically present, but only mentioned in a dialogue). - Invalid (C): The command is physically impossible or does not make sense grammatically (Examples: "put house in your backpack", "drink book", "open door from key"). For the interpretation of the results the subjective nature of the answers needs to be taken into account. Yet, the human baseline offers a broader perspective on the performance of the algorithm which is not limited by the NLP related capabilities of the parser and the game design. ## 5.3 Evaluation On Jericho And Textworld The algorithm is first evaluated on the Jericho platform. A total of 37 games have been selected for evaluation, containing 1226 evaluation scenarios overall. The first line of Table 1 shows the amount of generated commands and matches with an AC command, as well as the corresponding precision percentage. A more detailed compilation of results for every single game is given in Table 5. In 24 of 37 games, at least one AC command is reproduced with results varying in an intervall between 0 and a maximum of 9 commands (1.83% precision, achieved in the game "hollywood"). In most cases, only a small absolute amount of matching commands is produced, with a strong focus on recurring commands like "open/close door". Overall, with only 0.4% of the generated commands matching an AC, the unmodified algorithm is not yet able to produce a significant set of affordances to be used with the game parser. The reasons for this behavior and possible improvements are studied later in this section, with the current results serving as a baseline. Table 1: Evaluation results for the basic algorithm (first line) and the "take"-addition on the Jericho testset (second line). The columns denote the steps, the generated commands, the total amount of generated commands matching the ACs and the corresponding percentage of matching commands for every game. Evaluation mode Steps Gen. Com. Corr. Com. Corr. Com. (%) Jericho base 1226 12949 52 0.4 Jericho add take 1226 17743 149 0.84 The next evaluation step addresses the problem of trivial background knowledge. The affordances provided by ConceptNet tend to focus on rather complicated and creative activities, while mostly omitting "trivial" ones (see Section 6.2). The second line of Table 1 shows another evaluation on the same testset, with a simple "take" affordance added to every identified object (implying that most objects can be picked up). The complete results for every single game are given in Table 6. Now, only for 3 out of 37 games no AC is reproduced. Overall the amount of correctly reproduced ACs nearly increases by a factor of three, from previously 52 matching commands up to 149 (0.84%). The distribution among single games remains uneven, reflecting the heterogeneous character of the test games: While for few games still no matching command is produced at all, up to 17 matching commands for "hollywood" or 13 commands for "zork1" and "zork2" are generated. It should be noted that the overall precision does only improve by a factor of two, as by indiscriminately adding a "take" affordance to every object, additional noise is produced. The next evaluation step addresses the heterogeneity and the entertainment focus of games included in the Jericho platform. By applying the algorithm to a set of structurally simplier TextWorld games, the influence of this kind of complexity on the result can be assessed. 
For this evaluation run, 333 TextWorld games with a total of 984 evaluation scenarios have been created. The results for the evaluation with the standard algorithm (first line), as well as the "take"-addition (second line), are depicted in Table 2.

Table 2: Evaluation results for the basic algorithm (first line) and the "take"-addition (second line) on the TextWorld testset. The columns show the steps, the generated commands, the total amount of generated commands matching the ACs and the corresponding percentage of matching commands.

| Evaluation mode | Steps | Gen. Com. | Corr. Com. | Corr. Com. (%) |
|---|---|---|---|---|
| TextWorld base | 984 | 5143 | 330 | 6.42 |
| TextWorld add take | 984 | 7726 | 438 | 5.67 |

While providing a roughly similar amount of evaluation scenarios (984 against 1226 for Jericho), the algorithm produces only 5143 generated commands (vs. 12949 for Jericho), mirroring the decreased diversity and level of detail in the TextWorld descriptions. Of the generated commands, 330 (ca. 6.42% precision) matched an AC, fluctuating between 0 and 3 correct commands for every step. It should be noted that these results are vastly dominated by "eat" and "close/open" commands, and that correct commands are often repeated over several steps (e.g. by carrying a corresponding item in the inventory). Still, the overall percentage of correct commands improves by a factor of around 16 compared to the Jericho baseline. The addition of a "take" affordance to every item further increases the number of produced commands to 7726, with 438 being recognized as an AC (ca. 5.67% precision). In this specific case, the increase of correct commands is significantly smaller compared to the Jericho evaluation, and the corresponding precision even decreases. This is caused by a special characteristic of the chosen TextWorld testset: the agent starts with a large number of items already in its inventory, rendering them effectively immune to a "take" command.

## 5.4 Evaluation Of Human Baseline

The last evaluation step addresses platform-related limitations. To determine whether a generated command could be regarded as "correct", the platform-provided ACs have been used. However, these ACs only cover a small set of all affordances available from the game descriptions. Generally, the ACs are limited by vocabulary, templates and generation routines. To explore the magnitude of this effect, a selection of three Jericho games ("awaken", "ballyhoo" and "Zork: the Undiscovered Underground" (ztuu)) is used as the input for the affordance generator. The produced commands are then evaluated by a human volunteer according to the categories established in Section 5.2. The corresponding results are depicted in Table 3.

Table 3: Human baseline for three selected Jericho games. The generated commands are sorted into the categories "Contextually suitable" (A), "Valid, but contextually infeasible" (B) and "Invalid" (C) (see Section 5.2).

| Jericho game | A | B | C | gen. commands |
|---|---|---|---|---|
| awaken | 44 | 119 | 113 | 276 |
| ballyhoo | 48 | 136 | 73 | 257 |
| ztuu | 36 | 143 | 69 | 248 |

The results reveal that between 59% ("awaken") and 72% ("Zork: the Undiscovered Underground") of the commands are recognized as a functional affordance by human judgement (referring to categories A or B from Table 3). While being prone to small subjective fluctuations, this marks a significant increase of identified affordances/commands compared to the automated parser evaluation. This has two reasons: Firstly, some commands are simply too exotic for the parser and its vocabulary.
Secondly, some affordances were produced from text descriptions not related to actually available items in the scenario (e.g. pieces of dialogue or memories). The last point is quantitatively addressed by the distinction between categories A and B. For all three games, category B ("Valid, but contextually infeasible") dominates over category A by a factor between three and four, revealing a large influence of "noisy" items not physically relevant/present in the current situation. An illustrative example is given in Appendix B. In a next step, the evaluation is repeated for the TextWorld setup. To keep human evaluation feasible, only a small subset of 10 scenarios, each from a different game, has been chosen randomly for evaluation (see Table 4). As a result, 69% of the produced commands were deemed either category A or B, mirroring the results from the Jericho evaluation. Between the two functional categories, B still dominates over A, but only by a factor of about two. This again emphasizes the effect of "noisy" language, which in TextWorld is reduced by using the predefined list of interactable objects.

## 6 Summary And Scope

The evaluation in the previous section showed that external databases can be used to generate affordances for text-based input with comparatively little effort. The results, however, have to be interpreted carefully to account for all upsides and limitations of this simple approach. For the automated evaluation (see Tables 1 and 2), three major observations have been made:

- The total amount and percentage of correctly reproduced ACs for the basic approach is rather low, yielding 52 commands (0.4%) for Jericho and 330 commands (6.4%) for TextWorld. This illustrates the many potential challenges of automated affordance extraction.
- The amount of correctly reproduced ACs is increased by a factor of 2.9 (for Jericho) and 1.3 (for TextWorld), respectively, by manually adding trivial "take"-affordances to every evaluation step. This illustrates that information retrieved from external databases might often omit "trivial" information.
- The comparison between the results of Jericho and TextWorld showed a significant increase of the percentage of correctly reproduced ACs (by a factor of ca. 16) for the TextWorld evaluation. This emphasizes the major influence of text complexity and object identification on the result.

Table 4: Human baseline for ten selected TextWorld games. The evaluation only refers to the first scenario of each game. The generated commands are sorted into the categories "Contextually suitable" (A), "Valid, but contextually infeasible" (B) and "Invalid" (C) (see Section 5.2).

| TextWorld game | A | B | C | gen. commands |
|---|---|---|---|---|
| Game 329 | 2 | 6 | 3 | 11 |
| Game 108 | 0 | 2 | 3 | 5 |
| Game 140 | 2 | 3 | 3 | 8 |
| Game 32 | 2 | 5 | 0 | 7 |
| Game 62 | 0 | 3 | 0 | 3 |
| Game 157 | 1 | 2 | 0 | 3 |
| Game 155 | 4 | 4 | 4 | 12 |
| Game 55 | 1 | 1 | 3 | 5 |
| Game 160 | 2 | 2 | 0 | 4 |
| Game 171 | 1 | 0 | 3 | 4 |
| Overall | 15 | 28 | 19 | 62 |

Finally, a human evaluation illustrated the limitations introduced by the ACs themselves (see Tables 3 and 4). Compared to the only 0.4% (Jericho) and 6.4% (TextWorld) of affordances being marked as suitable by the automated evaluation, human interpretation deemed between 59% and 72% of all produced commands in Jericho as functional (69% for TextWorld), meaning they are either contextually suitable (category A) or at least valid in general (category B). Still, the comparatively low fraction of contextually suitable affordances hinders practical application at this point.
The next subsections address open issues and possible measures for improvement.

## 6.1 Algorithmic Aspects

Several components of the algorithm outlined in Section 4.3 feature NLP-related processes, which were kept very simple and therefore offer room for improvement. The following points could improve affordance extraction performance by *modifying the algorithm*:

- **Object identification**: The significant difference in performance between evaluations on TextWorld and Jericho testsets (see Section 5.3) highlights the expected importance of object identification. While TextWorld offered a predefined list of interactable objects, in the evaluation for the Jericho platform objects are extracted via simple PoS-tagging. The algorithm only performs rudimentary post-processing. Objects are not contextualized, which means that, for example, an object in a picture or mentioned in a dialogue will be treated as an interactable object. The conducted evaluation steps already showed the large improvement potential of this measure. Possible solutions might be the implementation of advanced semantic text contextualization techniques.
- **Object ambiguity**: Currently, the post-processing does not retain adjectives or other describing text, because this would complicate the query process (as ConceptNet only features objects without adjectives). This means that a red and a green paprika will be treated as the same item. Although both of these issues have rather little impact on the study at hand, they should be addressed for more general applications. A possible solution might be an extended object buffer, maintaining the object properties while dealing with external databases.
- **Forced command patterns**: To convert the affordances extracted from ConceptNet into parsable text commands, predefined translation rules are used. This puts constraints on the grammatical variations of commands. A solution for this problem could be the usage of more advanced Natural Language Generation (NLG) techniques, which can put verbs and objects into reasonable sentence/command structures.
- **Selection of ConceptNet parameters**: While preliminary testing showed that most usable information was contained in the categories *ReceivesAction* and *UsedFor*, information stored in other categories (such as *CapableOf*) is currently not used at all. By including these categories and finding coherent ways to convert this information into text commands, better results are achievable. This also holds for the usage of weights and other properties offered by ConceptNet (e.g. synonyms).
- **Refined data structures**: Currently, the algorithm directly converts ConceptNet queries to text commands. The algorithm also provides the option to store obtained information in a knowledge graph using the RDF format. Right now, the only types of edges in this graph are "affords", which connects objects and verbs, and "requires", which is used to denote that an additional object is required for a given affordance (e.g. if one wants to cut something, a tool for cutting and an object that is to be cut are required). Given the wealth of information available in ConceptNet and other knowledge bases, it might be useful to use additional means of structuring the local RDF-based knowledge graph for affordances. It would, for example, be possible to use type-/instance-relations and subclass-/superclass-relations between nodes of the graph to improve generalization of learned affordances (see the sketch after this list).
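To make the two edge types concrete, the following is a minimal rdflib sketch; the namespace IRI, the node identifiers, and the edge direction are our own illustrative choices, not the format actually used by the implementation:

```python
from rdflib import Graph, Namespace

# Illustrative namespace; the concrete IRI scheme is an implementation choice.
AFF = Namespace("http://example.org/affordances/")

g = Graph()
# "affords" connects an object to a verb; "requires" marks an additional
# object needed for the affordance (cutting requires both a tool and a target).
g.add((AFF.knife, AFF.affords, AFF.slice))
g.add((AFF.slice, AFF.requires, AFF.tomato))

# rdflib >= 6 returns the Turtle serialization as a string.
print(g.serialize(format="turtle"))
```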
This path was not followed in the evaluation, as it would have required large additional considerations beyond the scope of a first exploratory study. Yet, advanced data storing/processing structures hold large potential for performance improvement, as many knowledge-graph-related studies mentioned in the introduction show.

## 6.2 Data Quality

While ConceptNet is a semantic network designed to boost machine understanding, some aspects of it (and of most other similar knowledge bases) still offer potential for improvement with regard to the process of affordance extraction. The following points could enhance affordance extraction performance by improving data quality/quantity:

- **Data density**: Despite ConceptNet offering 34 million pieces of relational information (Speer et al., 2016), the frameworks of Jericho and TextWorld contain a multitude of usual and unusual objects. As a result, even in this study many queried objects did not have an entry in ConceptNet. The direct connection of two (or more) objects is even more sparse. Therefore, querying rarely offers a combined action of two objects which are found in the IF environment. Further collection of data would correspondingly enhance the performance.
- **Obligation for creativity**: Many of the entries in ConceptNet entered by human operators showed some attempt at creativity. While the most trivial performable actions were not mentioned (probably because they seemed too obvious to the operator), more complicated actions were offered. For example, actions like "divide pills" or even "free one's soul" were associated with the item "knife", but the simple facts that a knife can be "picked up" or "put down" were not mentioned. As a solution, such simple actions currently have to be provided to the algorithm by a software engineer. It is advised to add them to the semantic network or the local knowledge graph in the future, either by adapted instructions for human operators, by explicit addition or, to not inflate the amount of information, by employing categorization rulesets.
- **Unclear instructions**: At several points, faulty information is provided by the network because the instructions for the human operators are misleading. For example, the category *ReceivesAction* stores information about actions which can be performed on a specific item. But to acquire the corresponding data, some human operators were required to complete the sentence "[Item] can be....". It is clear that the semantic network expects a verbal phrase, but some human operators misread this instruction as a request for properties. For example, a data entry for "knife" in the *ReceivesAction* category lists "sharp" as an answer, while verbal phrases like "sharpened", "thrown", etc. were expected. To avoid this problem, clear and unambiguous instructions should be provided for human input.

To conclude, the algorithm is able to produce a fair amount of affordances from textual input by the standard of human judgement. The results are achieved by relatively simple means, using relational input from ConceptNet and some basic generation rules. They can be understood as a proof of concept, and the listed aspects and arguments can serve as a recommendation and encouragement for further data collection and algorithmic improvement.

## References

Leonard Adolphs and Thomas Hofmann. LeDeepChef: Deep reinforcement learning agent for families of text-based games. *CoRR*, abs/1909.01646, 2019. URL http://arxiv.org/abs/1909.01646.

Prithviraj Ammanabrolu and Matthew J.
Hausknecht. Graph constrained reinforcement learning for natural language action spaces. *CoRR*, abs/2001.08837, 2020. URL https://arxiv.org/abs/2001.08837.

Prithviraj Ammanabrolu, Ethan Tien, Zhaochen Luo, and Mark O. Riedl. How to avoid being eaten by a grue: Exploration strategies for text-adventure agents. *CoRR*, abs/2002.08795, 2020. URL https://arxiv.org/abs/2002.08795.

Timothy Atkinson, Hendrik Baier, Tara Copplestone, Sam Devlin, and Jerry Swan. The text-based adventure AI competition. *CoRR*, abs/1808.01262, 2018. URL http://arxiv.org/abs/1808.01262.

Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew J. Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. TextWorld: A learning environment for text-based games. *CoRR*, abs/1806.11532, 2018. URL http://arxiv.org/abs/1806.11532.

William Crowther. Colossal Cave Adventure, 1976.

Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, and Alexei A. Efros. Investigating human priors for playing video games. In *ICML*, 2018.

Christiane Fellbaum. *WordNet: An Electronic Lexical Database*. Bradford Books, 1998.

Nancy Fulda, Daniel Ricks, Ben Murdoch, and David Wingate. What can you do with a rock? Affordance extraction via word embeddings. *CoRR*, abs/1703.03429, 2017. URL http://arxiv.org/abs/1703.03429.

J. J. Gibson. *The Senses Considered as Perceptual Systems*. Houghton Mifflin, Boston, 1966.

Helmut Grabner, Juergen Gall, and Luc van Gool. What makes a chair a chair? In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011*, pp. 1529–1536, Piscataway, NJ, 2011. IEEE. ISBN 978-1-4577-0394-2. doi: 10.1109/CVPR.2011.5995327.

Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. Interactive fiction games: A colossal adventure, 2019a.

Matthew J. Hausknecht, Ricky Loynd, Greg Yang, Adith Swaminathan, and Jason D. Williams. NAIL: A general interactive fiction agent. *CoRR*, abs/1902.04259, 2019b. URL http://arxiv.org/abs/1902.04259.

Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017.

Vishal Jain, William Fedus, Hugo Larochelle, Doina Precup, and Marc G. Bellemare. Algorithmic improvements for deep reinforcement learning applied to interactive fiction. *CoRR*, abs/1911.12511, 2019. URL http://arxiv.org/abs/1911.12511.

Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, and Doina Precup. What can I do here? A theory of affordances in reinforcement learning. *CoRR*, abs/2006.15085, 2020. URL https://arxiv.org/abs/2006.15085.

Daniel Loureiro and Alípio Mário Jorge. Affordance extraction and inference based on semantic role labeling. *CoRR*, abs/1809.00589, 2018. URL http://arxiv.org/abs/1809.00589.

Andrea Madotto, Mahdi Namazifar, Joost Huizinga, Piero Molino, Adrien Ecoffet, Huaixiu Zheng, Alexandros Papangelis, Dian Yu, Chandra Khatri, and Gokhan Tur. Exploration based language learning for text-based games, 2020.

T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. Never-ending learning. *Commun. ACM*, 61(5):103–115, April 2018. ISSN 0001-0782. doi: 10.1145/3191513. URL https://doi.org/10.1145/3191513.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc Bellemare, Alex Graves, Martin Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518:529–533, February 2015. doi: 10.1038/nature14236.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, January 2016. ISSN 0028-0836. doi: 10.1038/nature16961.

Robyn Speer, Joshua Chin, and Catherine Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. *CoRR*, abs/1612.03975, 2016. URL http://arxiv.org/abs/1612.03975.

Ruo Yu Tao, Marc-Alexandre Côté, Xingdi Yuan, and Layla El Asri. Towards solving text-based games by producing adaptive action spaces. *CoRR*, abs/1812.00855, 2018. URL http://arxiv.org/abs/1812.00855.

Chen Tessler, Tom Zahavy, Deborah Cohen, Daniel J. Mankowitz, and Shie Mannor. Action assembly: Sparse imitation learning for text based games with combinatorial action spaces. *CoRR*, abs/1905.09700, 2019. URL http://arxiv.org/abs/1905.09700.

Denny Vrandečić and Markus Krötzsch. Wikidata: A free collaborative knowledgebase. *Commun. ACM*, 57(10):78–85, September 2014. ISSN 0001-0782. doi: 10.1145/2629489. URL https://doi.org/10.1145/2629489.

Xusen Yin and Jonathan May. Comprehensible context-driven text game playing. *CoRR*, abs/1905.02265, 2019. URL http://arxiv.org/abs/1905.02265.

Mikuláš Zelinka. Baselines for reinforcement learning in text games. *CoRR*, abs/1811.02872, 2018. URL http://arxiv.org/abs/1811.02872.

Mikuláš Zelinka, Xingdi Yuan, Marc-Alexandre Côté, Romain Laroche, and Adam Trischler. Building dynamic knowledge graphs from text-based games, 2020.

Yuke Zhu, Alireza Fathi, and Li Fei-Fei. Reasoning about object affordances in a knowledge base representation. In David Fleet (ed.), *Computer Vision - ECCV 2014*, Lecture Notes in Computer Science, pp. 408–424, Cham, 2014. Springer. ISBN 978-3-319-10605-2.

## A Detailed Result Tables

This section provides more detailed results for the automated evaluation.

Table 5: Evaluation results for the basic algorithm on the Jericho testset. The columns show the steps, the generated commands, the total amount of generated commands matching the ACs and the corresponding percentage of matching commands for every game.
| Game | Steps | Gen. Com. | Corr. Com. | Corr. Com. (%) |
|---|---|---|---|---|
| 905 | 6 | 126 | 2 | 1.59 |
| adventureland | 19 | 71 | 0 | 0 |
| awaken | 15 | 276 | 0 | 0 |
| balances | 12 | 210 | 0 | 0 |
| ballyhoo | 38 | 257 | 2 | 0.78 |
| cutthroat | 34 | 396 | 4 | 1.01 |
| detective | 20 | 442 | 0 | 0 |
| enchanter | 48 | 742 | 1 | 0.13 |
| enter | 18 | 320 | 1 | 0.31 |
| gold | 22 | 228 | 0 | 0 |
| hhgg | 29 | 438 | 2 | 0.46 |
| hollywood | 52 | 491 | 9 | 1.83 |
| huntdark | 7 | 80 | 0 | 0 |
| infidel | 46 | 476 | 0 | 0 |
| inhumane | 41 | 338 | 1 | 0.3 |
| jewel | 37 | 379 | 0 | 0 |
| karn | 28 | 259 | 1 | 0.39 |
| library | 7 | 124 | 0 | 0 |
| ludicorp | 79 | 312 | 2 | 0.64 |
| lurking | 48 | 1134 | 2 | 0.18 |
| moonlit | 6 | 126 | 0 | 0 |
| murdac | 74 | 232 | 0 | 0 |
| night | 11 | 58 | 1 | 1.72 |
| omniquest | 31 | 188 | 2 | 1.06 |
| pentari | 16 | 83 | 0 | 0 |
| planetfall | 69 | 531 | 2 | 0.38 |
| plundered | 42 | 471 | 0 | 0 |
| reverb | 17 | 68 | 1 | 1.47 |
| seestalker | 20 | 157 | 1 | 0.64 |
| sorcerer | 63 | 722 | 2 | 0.28 |
| temple | 19 | 431 | 1 | 0.23 |
| wishbringer | 43 | 549 | 3 | 0.55 |
| zenon | 13 | 103 | 1 | 0.97 |
| zork1 | 72 | 658 | 2 | 0.30 |
| zork2 | 62 | 802 | 6 | 0.75 |
| zork3 | 46 | 423 | 2 | 0.47 |
| ztuu | 16 | 248 | 1 | 0.4 |
| Overall | 1226 | 12949 | 52 | 0.4 |

Table 6: Evaluation results for the "take"-addition algorithm on the Jericho testset. The columns show the steps, the generated commands, the total amount of generated commands matching the ACs and the corresponding percentage of matching commands for every game.

| Game | Steps | Gen. Com. | Corr. Com. | Corr. Com. (%) |
|---|---|---|---|---|
| 905 | 6 | 167 | 2 | 1.2 |
| adventureland | 19 | 112 | 2 | 1.79 |
| awaken | 15 | 359 | 1 | 0.28 |
| balances | 12 | 256 | 2 | 0.78 |
| ballyhoo | 38 | 377 | 5 | 1.33 |
| cutthroat | 34 | 571 | 7 | 1.23 |
| detective | 20 | 512 | 3 | 0.59 |
| enchanter | 48 | 1001 | 6 | 0.6 |
| enter | 18 | 442 | 2 | 0.45 |
| gold | 22 | 327 | 5 | 1.53 |
| hhgg | 29 | 554 | 6 | 1.08 |
| hollywood | 52 | 932 | 17 | 1.82 |
| huntdark | 7 | 102 | 0 | 0 |
| infidel | 46 | 680 | 7 | 1.03 |
| inhumane | 41 | 459 | 3 | 0.65 |
| jewel | 37 | 528 | 0 | 0 |
| karn | 28 | 360 | 2 | 0.56 |
| library | 7 | 157 | 0 | 0 |
| ludicorp | 79 | 433 | 2 | 0.46 |
| lurking | 48 | 1464 | 3 | 0.2 |
| moonlit | 6 | 156 | 2 | 1.28 |
| murdac | 74 | 317 | 4 | 1.26 |
| night | 11 | 82 | 1 | 1.22 |
| omniquest | 31 | 286 | 5 | 1.75 |
| pentari | 16 | 20 | 1 | 5 |
| planetfall | 69 | 744 | 5 | 0.67 |
| plundered | 42 | 676 | 4 | 0.59 |
| reverb | 17 | 109 | 2 | 1.83 |
| seestalker | 20 | 236 | 2 | 0.85 |
| sorcerer | 63 | 919 | 5 | 0.54 |
| temple | 19 | 551 | 3 | 0.54 |
| wishbringer | 43 | 725 | 6 | 0.83 |
| zenon | 13 | 140 | 1 | 0.71 |
| zork1 | 72 | 958 | 13 | 1.36 |
| zork2 | 62 | 1104 | 13 | 1.18 |
| zork3 | 46 | 575 | 3 | 0.52 |
| ztuu | 16 | 352 | 4 | 1.14 |
| Overall | 1226 | 17743 | 149 | 0.84 |

## B Example For Human Baseline Affordance Rating

The following example is a game step from "Zork: the Undiscovered Underground".

## Situation Description By Game:

Backstage: Ah ah choo. Those curtains. If I weren't so busy helping you with this game, I'd suggest you go on without me and let me clean this place up enough so that when you returned, I could at least describe it decently. I'll do the best I can though. A thick maroon curtain separates the backstage area from the stage. This area was obviously the target of a small underground tornado, a Vorx, as scrims, scenery and costumes litter the floor. Even an old steamer trunk, virtually decaying from age, rests in a corner.

Your inventory: brass lantern, glasses, ZM$100000, Multi-Implementeers, Forever Gores, Baby Rune, razor-like gloves, cheaply-made sword

Candidate commands generated by the algorithm: 'cover floor', 'fill glasses', 'find floor', 'find gloves', 'find trunk', 'lie floor', 'live area', 'need glasses', 'play game', 'use lantern', 'wear glasses' (11)

Contextually suitable (A): 'use lantern', 'wear glasses' (2)

Valid, but contextually infeasible (B): 'cover floor', 'fill glasses', 'find floor', 'find gloves', 'find trunk', 'play game' (6)

Invalid (C): 'lie floor', 'live area', 'need glasses' (3)
Review 1:

Summary: The manuscript attempts to solve the problem of affordance extraction in text-based games by employing external, graph-based knowledge bases, presenting an algorithm for matching potential knowledge tuples to the state-action space of such games.

Strengths and Weaknesses: The manuscript attempts to solve an extremely interesting and relevant problem for the RL / decision making community. It is generally hard to deal with large action spaces, and language provides enough structure such that it might be possible to strongly reduce a language-based action space. I found the literature review and background sections to be decent and useful to get perspective on the problem being tackled. However, broadly speaking the manuscript needs significant work in multiple areas:

1. There's nothing that is significantly novel about utilising KBs to guide agents through structured (language) environments (see e.g. <https://arxiv.org/abs/2007.09185>). Framing the problem as finding affordances might be, but the results are not strong enough to significantly make a dent towards forming a useful solution.
2. The results seem to be largely negative (which is fine!), and whilst the authors have indeed put above-average effort in discussing tradeoffs and possible solutions, I don't believe the overall methodology goes past the community threshold. If this were a paper primarily focused on understanding why affordance extraction is hard in the proposed benchmarks, I would probably come to a different conclusion, but fundamentally it is a method paper (as it is presented).
3. The evaluation, which effectively is what might be the most useful and impactful contribution of this work, is extremely untidy, and has significant design issues.

Finally, as it is I don't believe TMLR is a great venue for this work, and the algorithmic / KB communities in CL conferences might be a better target. However, if the authors still wish to target ML folks, I would suggest attempting to use and evaluate these affordances in an env / agent evaluation setting, or to focus the narrative on improving [our understanding of] the presented benchmarks.

Requested Changes: Here's some broad feedback section by section that I hope helps setting the context for the overall review.

### Section 1

> Recently, the domain of…

(nit) Not so recently, video games have pretty much been an AI benchmarking tool since their inception. Also, strictly speaking Go is not a video game per se.

1. Typos
   1. indespensible
   2. up to our knowledge (not sure what this means? "To the best of our knowledge"?)
   3. a major aspect for explaining (?)

### Section 2

> In practical applications this process is sometimes shortened or even omitted…

I'm not sure I'm totally happy with the assumptions made in this paragraph. I agree that affordance extraction is in principle a useful problem to solve, but in principle there's no reason it needs to be a problem that is solved explicitly. Most real life tasks and environments are likely to produce affordances that follow well known distributions (often, Zipf's law – think e.g. the autoregressive task in language modeling) that allow agents to bootstrap to make a lot of trivial shortcuts. It is a relatively strong proposition to claim that in RL unless one does explicit affordance extraction one needs to rely on given prior information; it might be true, but I would try to back the argument with some references.
> It should be noted that…

This is also not a sound interpretation of much RL literature. Classically, an MDP defines the task, no more (and no less!) than that. There's plenty of literature that attempts to tackle cases where the action space is large (and thus where exploration / action selection is a big problem simply due to scale). See e.g. <https://arxiv.org/abs/1906.12266> and literature therein. I would suggest reworking this section to be a tad more fair to previous work, or alternatively a stronger critique.

1. Typos
   1. as inGrabner… (sic.)

### Section 3

> IF games offer a decreased complexity level due to a reduced amount of information…

I'm not sure I get what the manuscript is trying to say here. If it is trying to define and compare "complexity" of tasks wrt. environmental dynamics, a good option would be to compare (possibly abstracted) state spaces. Similarly, it would be helpful to qualitatively compare Jericho vs TextWorld in such terms in Section 3.1.2.

1. Typos
   1. byZelinka… (sic.)

### Section 4

> …as a possible resource for affordance extraction, several criteria have to be fulfilled: […]

It's perhaps more accurate to say that these are assumptions required by the presented algorithm, rather than being strict requirements for the broader task (one could e.g. imagine some degree of partial observability on data availability that would still provide enough data in the general case).

> ConceptNet (CN) has been selected…

Might be helpful to define CN where ConceptNet is first introduced, as it took me some time to scan for the introduction of the acronym.

> For the purpose of affordance extraction on a basic level [CN weights] are ineffective most of the time.

In the general case, you might have some scale / compute problem, so wouldn't they be extremely useful then? They'd certainly provide some kind of search prior, surely?

> The correct proposition is determined via statistical means by counting…

This seems quite important, as it'll directly drive downstream results for non-trivial n-tuples. It would be good to understand the algorithm used here, and possible / seen failure cases; this is important even if multi-object commands are rare, because in the limit these might make for (potentially) unfairly negative results. Furthermore, I think it might be important to clarify that most affordances tackled in this problem are fairly straightforward relationships earlier in the manuscript, and that it might be a potential downside of the approach / methodology.

> The algorithm is written in Python 3.6.

Is this relevant information? Is there anything about this algorithm that couldn't be written in other languages (or python versions!)?

> Figure 1

The manuscript might be better off using a textual / pseudocode / abstract python representation for this algorithm (especially since details such as logging seem to be relevant). As a general comment, it is perfectly reasonable, and often – also in this case – a good idea to focus on framework-level details rather than implementation ones when describing an algorithm. For instance, the reader probably doesn't really care *at this point of the manuscript* about the **exact** data structure used to pass information between functional steps.

> Textual description extraction then has to be adjusted accordingly.

This part is unclear and needs to be clarified.
What does it mean to say that "every text source is valid"? What problems does it create for the algorithmic step?

> This is done when playing in TextWorld…

It would be great to clarify whether the PoS-tagging approach does indeed work on TextWorld, or whether there are significant limitations to it for this benchmark (and thus why the word list method was employed, which is far less general).

> these disadvantages are accepted to retain a broader field of application.

What does this mean? An architectural / design problem of the algorithm doesn't make said algorithm "broadly applicable". If anything, it reduces its *straightforward* applicability…

> proceeding to the next scenario

This seems an important "exploration" step of the algorithm. How hard was it to implement for the presented benchmarks, and what is the general complexity of the problem? To me it seems to be essentially a similar meta-task to the actual problem being solved in the paper, and if so I wonder why one wouldn't be able to use the affordance extractor in a similar scope and manner.

1. Typos
   1. may provide useful [information?]
   2. It should be mentioned, that -> It should be mentioned that
   3. recieve
   4. is therefore able (not quite the right formulation here, semantics wise)

### Section 5

> Walkthrough option…

See critique in earlier section.

> parser related constraints

To understand the evaluation choices, it is essential to expand on these and define what these constraints are. The following paragraphs essentially only talk about the wordlist issue.

> as well as actions which have not been taken into account…

Aren't these exactly the definition of affordances?

> A volunteer (one of the authors)

Considering the human baseline / evaluation seems to be extremely important for the manuscript, it is important that the evaluation be unbiased and as representative of the underlying problem as possible (especially since the authors make a point about subjectivity being a potential issue). As such, I would strongly suggest to employ multiple unrelated people (through e.g. Mechanical Turk) rather than one of the authors.

> The next evaluation step addresses…

This entire section is really unclear. The manuscript would be significantly improved by clearly delineating and separating subsections by "tasks" and algorithms used. It is really hard as it is to figure out what worked, what didn't, and why. Section 5.4 is comparably easier to understand (but still needs work).

Broader Impact Concerns: Generally this work doesn't seem to strictly require a broader impact statement, although the authors might wish to consider addressing and/or discussing possible biases embedded in their work due to the usage of external knowledge bases.

==================================================

Review 2:

Summary: The paper explores the use of large existing sources of data in the form of knowledge graphs (specifically ConceptNet) to generate commands in text games --- a relatively well known framework for testing multi-step language reasoning in grounded scenarios.
Strengths and Weaknesses:

Strengths:
- The paper is well written overall and provides a good overview of the field for those less familiar with it
- The connection to cognitive science literature on affordances is made clear, providing the paper with a solid underlying motivation
- Evaluation on both Jericho and TextWorld (two of the primary text game benchmarks) makes it easier to see which domains this method works well in

Weaknesses:
- The paper is not situated with respect to former work on using knowledge bases for action generation in text games. The paper in its current form does not distinguish itself from these works and so it is unclear to a reader what exactly the novel contributions are here.
  - https://arxiv.org/abs/2005.00811 and https://www.aaai.org/AAAI21Papers/AAAI-7812.MurugesanK.pdf use knowledge from ConceptNet to explicitly generate commands and run them in TextWorld, following https://arxiv.org/abs/1908.06556 who use ConceptNet as a source of commonsense information in text games
  - https://arxiv.org/abs/2010.02903 and https://arxiv.org/abs/2012.02757 look at using LLMs as a source of affordance information when generating text commands
- The evaluation setup is based on measuring the percentage of correctly generated actions using the walkthrough, as opposed to executing actions within the environment as all online RL works in this area have done, or using an offline pre-crawled dataset (of many states, not just those on the walkthrough - e.g. for TextWorld https://arxiv.org/abs/2208.01174 and for Jericho, the JerichoWorld dataset https://openreview.net/forum?id=7FHnnENUG0).
- Using just the walkthroughs offers a very minimal setup for evaluating these games (a handful of a few hundred state/action pairs across the games, where all previous works effectively evaluate on tens of thousands or more such state/action pairs). The low performance on the walkthroughs (even lower than the current state-of-the-art performance for RL agents) suggests much room for improvement in this method.

Requested Changes: My requested changes will key off of the weaknesses mentioned above. Without such changes, the paper is not suitable for acceptance:
- Rewriting especially the related work and algorithm section, especially situating against prior work mentioned and how this work differs (if at all).
- Evaluating by using the action generation system introduced here in conjunction with a popular existing RL agent such as DRRN https://arxiv.org/abs/1511.04636 and comparing to existing RL agents (i.e. actually executing actions within the environment), or at the very least evaluating on a larger offline dataset such as TextWorld Express or JerichoWorld, would help a reader analyze the relative benefits of this method.

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This paper explores the use of external knowledge bases as a source for identifying plausible actions in text-based games. The paper proposes a system that (1) identifies mentioned objects in a scene, and (2) uses ConceptNet to extract possible commands in the scene. They evaluate extracted commands automatically by comparing with a set of "admissible commands" (provided by the game), and through human evaluation.

Strengths and Weaknesses:

Strengths:
- Some of the error analysis and discussion of mistakes is interesting.
- The problem of automatically identifying affordances, and the motivation of discovering affordances in real-world action settings, is compelling.
Weaknesses:
- The contribution is not compelling. The paper is describing a simple application of ConceptNet to extract items in a text description of a scene (discovered via what is essentially teacher forcing through the game path), then identify tuples in ConceptNet relevant to those items. It is unclear whether the paper has relevance to TMLR, or whether there are any impacts beyond text games.
- Even within the realm of text games as an application, it's unclear how the proposed method would have impact. There is no evaluation of how the proposed method helps in the long-term goal of training agents that complete text-based games. What is the end task performance of an agent in these games given the proposed method?
- The precision metric is not well-motivated. If the goal is to recover additional feasible commands not explicitly specified by the game, it would be much more informative to evaluate recall of the original ACs rather than precision of the generated commands.
- It would be interesting to see what proportion of generated commands are actually useful towards the game's goal. Another question is: why were the original ACs chosen? Is it because they are feasible and helpful examples, or are they truly a subset of the potential reasonable actions to take?
- The human evaluation only covers a small number of generated commands, especially for TextWorld.
- Why are there remaining issues of generating "items not physically relevant/present in the current situation"? Are most of these problems happening because of other examples in the paper (e.g., items mentioned in a dream or memory), or for other reasons?
- Some missing details; see below.

Missing details:
- What text corpus was used to extract prepositions, as mentioned in 4.2?

Smaller comments:
- Style violations; vertical space missing between paragraphs
- Citation for NELL is wrong. The paper described was published in 2015, not 2018, and there are older NELL papers as well (e.g. Carlson et al. 2010).
- Spelling mistakes (e.g., "proposition" instead of "preposition" in section 4.2).
- Use of "AC" as an acronym throughout is relatively confusing.

Requested Changes:
- Evaluation of end-task performance with the proposed method for training text-game RL agents.
- Use of recall as an automatic metric, instead of / in addition to precision, wrt. the ACs.
- Significantly more content and work in the realm of ML. For example, proposing a learning algorithm that refines and adapts the proposed affordance extraction method during the learning process for a text-game RL agent?

Broader Impact Concerns: I cannot imagine significant negative or positive broader impacts of this work.

==================================================
# Named Tensor Notation

David Chiang, University of Notre Dame
Alexander M. Rush, Cornell University
Boaz Barak, Harvard University

Reviewed on OpenReview: *https://openreview.net/forum?id=hVT7SHlilx*

## Abstract

We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of machine learning models from the burden of keeping track of the order of axes and the purpose of each. The notation makes it easy to lift operations on low-order tensors to higher order ones, for example, from images to minibatches of images, or from an attention mechanism to multiple attention heads. After a brief overview and formal definition of the notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. We then discuss differential calculus in our notation and compare with some alternative notations. Our proposals build on ideas from many previous papers and software libraries. We hope that our notation will encourage more authors to use named tensors, resulting in clearer papers and more precise implementations.

## 1 Introduction

Formal descriptions of neural networks primarily adopt the notation of vectors and matrices from applied linear algebra (Goodfellow et al., 2016). When used to describe vector spaces, this notation is both concise and unambiguous. However, when applied to neural networks, these properties are lost. Consider the equation for attention as notated in the Transformer paper (Vaswani et al., 2017):

$$\mathrm{Attention}(Q,K,V)=\left(\mathrm{softmax}\,{\frac{Q K^{\top}}{\sqrt{d_{k}}}}\right)V.$$

The equation relates Q, K, and V (for query, key, and value, respectively) as sequences of feature vectors, packed into possibly identically-sized matrices. While concise, this equation is ambiguous. Does the product QK⊤ sum over the sequence, or over the features? We know that it sums over columns, but there is not enough information to know what the columns represent. Is the softmax taken over the query sequence or the key sequence? The usual notation does not offer an answer. Perniciously, the implementation of an incorrect interpretation might still run without errors. With the addition of more axes, like multiple attention heads or multiple sentences in a minibatch, the notation becomes even more cumbersome.

We propose an alternative mathematical notation for tensors with *named axes*.¹ The notation has a formal underpinning, but is hopefully intuitive enough that machine learning researchers can understand it without much effort. In named tensor notation, the above equation becomes

$$\mathrm{Attention}\colon\mathbb{R}^{\mathrm{key}}\times\mathbb{R}^{\mathrm{seq}\times\mathrm{key}}\times\mathbb{R}^{\mathrm{seq}\times\mathrm{val}}\to\mathbb{R}^{\mathrm{val}}$$

$${\mathrm{Attention}}(Q,K,V)=\left(\operatorname*{softmax}_{\mathrm{seq}}{\frac{Q\odot_{\mathrm{key}}K}{\sqrt{|\mathrm{key}|}}}\right)\odot_{\mathrm{seq}}V.$$

¹We follow NumPy in using the term *axis*. Other possible terms would be index, dimension, way, or *mode* (Tucker, 1964), but we felt that *axis* had the least potential for confusion.

The type signature introduces three named axes: the key axis is for features of queries and keys, the val axis is for features of values, and the seq axis is for tokens in a sequence. (Please see Section 2.2 for an explanation of our naming convention.) This notation makes the types of each input tensor explicit. Tensor Q is a query vector that is compared with key vectors, so it has a key axis. Tensor K is a sequence of key vectors, so it has seq and key axes.
Tensor V is a sequence of value vectors, so it has seq and val axes. Unlike with matrix notation, the reader is not required to remember whether seq corresponds to rows or columns in either of these tensors. The function itself uses the named axes to precisely apply operations. The expression $Q \odot_{\mathrm{key}} K$ is a dot product over the key axis shared between K and Q; there is no ambiguity about rows or columns. Similarly, the softmax function is annotated with the axis along which it is applied, removing any ambiguity or reliance on convention. Furthermore, named tensor notation naturally extends to *lifting* (also known as vectorizing and/or broadcasting) a function to tensors with more axes. For example, if instead of being a tensor with the single axis key, Q has three axes key, seq and batch (corresponding to tokens of a sequence and examples in a minibatch, respectively), then the Attention function works as written, acting on each example in a minibatch in parallel. Similarly, we can also add a heads axis to the inputs to get multiple attention heads. These additional axes are often elided in neural network papers, possibly avoiding notational complexity, but possibly also hiding critical model details.

Our contributions. This work proposes a *mathematical notation* for named tensors and a fully specified semantic interpretation for the notation. Through examples, we demonstrate that this notation enables specifying machine learning models and operations in a succinct yet precise manner. The need for named tensors has been recognized by several software packages, including xarray (Hoyer & Hamman, 2017), Nexus (Chen, 2017), tsalib (Sinha, 2018), axisarrays (Bauman, 2018), NamedTensor (Rush, 2019), PyTorch (Torch Contributors, 2019), Dex (Paszke et al., 2021), JAX (JAX authors, 2021), einops (Rogozhnikov, 2022), and torchdim (DeVito, 2023). While our notation is inspired by these efforts, our focus is on mathematical notation to be used in papers, whereas previous efforts have focused on code. Our hope is that our notation will be adopted by authors, leading to clearer, more replicable papers, and that this, in turn, will encourage more implementers to adopt named tensor libraries, leading to clearer, more correct code.

## 2 Named Tensors

In standard notation, a vector, matrix, or tensor is indexed by an integer or sequence of integers; if it has dimensions $n_1, \ldots, n_r$, it can be thought of as a map that takes as input $(i_1, \ldots, i_r) \in [n_1] \times \cdots \times [n_r]$ and outputs a real number (or an element of a different field). For example, if $A \in \mathbb{R}^{3\times 3}$, then the order of the two axes matters: $A_{1,3}$ and $A_{3,1}$ are not the same element. It is up to the reader to remember what each axis of each tensor stands for. This problem is exacerbated in modern machine learning, where tensors have multiple axes with different meanings (batches, channels, etc.), and different operations act on different axes. In contrast, we propose *named tensors*, in which each axis has a *name* that describes it and ensures there is no confusion between axes. We write $\mathrm{ax}[n]$ for an axis with name ax and size n, and we write $\mathrm{ax}(i)$ to index the i-th element along axis ax. So if a tensor has axes $\mathrm{ax}_1[n_1], \ldots, \mathrm{ax}_r[n_r]$ (with $\mathrm{ax}_1, \ldots, \mathrm{ax}_r$ being distinct names), it can be thought of as a map that takes as input a *record* $\{\mathrm{ax}_1(i_1), \ldots, \mathrm{ax}_r(i_r)\}$, with $i_1 \in [n_1], \ldots, i_r \in [n_r]$, and outputs a field element.
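As a toy illustration of this record-based view (ours, not the paper's; real implementations use more efficient storage), a named tensor can be modeled in Python as a mapping from frozensets of named indices to numbers, so that axis order cannot matter by construction:

```python
# A named tensor as a map from records to field elements. A record is an
# unordered set of (axis name, index) pairs, modeled here as a frozenset.
T = {
    frozenset({("height", 1), ("width", 1)}): 3.0,
    frozenset({("height", 1), ("width", 2)}): 1.0,
    frozenset({("height", 2), ("width", 1)}): 1.0,
    frozenset({("height", 2), ("width", 2)}): 5.0,
}

# Both orders of writing the named indices retrieve the same element:
assert T[frozenset({("height", 1), ("width", 2)})] == 1.0
assert T[frozenset({("width", 2), ("height", 1)})] == 1.0
```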
In summary, the key difference is that, while a tensor in standard notation takes as input an ordered tuple of indices, a named tensor takes as input a record, which is an unordered set of named indices. We illustrate with some examples below, then give formal definitions.

## 2.1 By Example

For example, if A represents a 3 × 3 grayscale image, we can make it a named tensor like so (writing it two equivalent ways to show that the order of axes does not matter):

$$A\in\mathbb{R}^{\mathrm{height}[3]\times\mathrm{width}[3]}=\mathbb{R}^{\mathrm{width}[3]\times\mathrm{height}[3]}$$

$$A={\mathrm{height}}\begin{bmatrix}3&1&4\\ 1&5&9\\ 2&6&5\end{bmatrix}={\mathrm{width}}\begin{bmatrix}3&1&2\\ 1&5&6\\ 4&9&5\end{bmatrix}.\tag{1}$$

We access elements of A using named indices, whose order again does not matter:

$$A_{\mathrm{height}(1),\mathrm{width}(3)}=A_{\mathrm{width}(3),\mathrm{height}(1)}=4.$$

We also allow partial indexing:

$$A_{\mathrm{height}(1)}=\begin{bmatrix}3&1&4\end{bmatrix}\qquad\qquad A_{\mathrm{width}(3)}=\begin{bmatrix}4&9&5\end{bmatrix}.$$

It does not matter whether we write $A_{\mathrm{height}(1)}$ or $A_{\mathrm{width}(3)}$ as row or column vectors. In many contexts, an axis name is used with only one size. If so, we can simply write height for the unique axis with name height, as in $\mathbb{R}^{\mathrm{height}\times\mathrm{width}}$. We can leave the size of an axis unspecified at first, and specify its size later (e.g., deferring it to an appendix on experimental details). For example, we can specify |height| = |width| = 28 if we want to prescribe the precise size of an image, or just write |height| = |width| to specify that it's a square image.

## 2.2 What's In A Name?

Although users of this notation are free to choose any names for axes, we offer the following recommendations. First, we recommend *words* instead of single letters, to communicate better the meaning of each axis. More subtly, we recommend words that describe a *whole* rather than its parts. For example, to represent a minibatch of examples, we would name the axis batch; to represent a sequence of tokens, we would name the axis seq. One reason for this choice is that there are cases, like height and width, where there is a name for the whole, but no unambiguous name for the part. By contrast, in cases where there is a name for the part but not the whole, it's always possible to use the plural form of the name of the part. For example, if we wanted A to have red, green, and blue channels, we would name the axis chans. Section 4 contains many more examples of axis names.

## 2.3 Formal Definition

We now define formally the notation we use.

Definition 1 (Names, indices, and axes). An *axis* is a pair, written $\mathrm{ax}[I]$, where

- ax is the *name* of the axis, which is simply a string of letters. We write both names and variables ranging over names using sans-serif font.
- I is a set of *indices*. In this paper, I is always of the form $\{1, \ldots, n\}$ for some n, so we abbreviate $\mathrm{ax}[\{1, \ldots, n\}]$ as $\mathrm{ax}[n]$.

In many contexts, there is only one axis with name ax, and so we refer to the axis simply as ax. The context always makes it clear whether ax is a name or an axis. If ax is an axis, we write $\operatorname{ind}(\mathrm{ax})$ for its index set, and we write $|\mathrm{ax}|$ as shorthand for $|\operatorname{ind}(\mathrm{ax})|$.

Definition 2 (Named indices and records). If $\mathrm{ax}[I]$ is an axis and $i \in I$, then a *named index* is a pair, written $\mathrm{ax}(i)$. A *record* is a set of named indices $\{\mathrm{ax}_1(i_1), \ldots, \mathrm{ax}_r(i_r)\}$, where $\mathrm{ax}_1, \ldots, \mathrm{ax}_r$ are pairwise distinct names.

Definition 3 (Shapes). A *shape* is a set of axes, written $\mathrm{ax}_1[I_1] \times \cdots \times \mathrm{ax}_r[I_r]$, where
$\mathsf{ax}_r$ are pairwise distinct names. We write ∅ for the empty shape. A shape defines a set of records:

$$\mathrm{rec}(\mathsf{ax}_1[I_1] \times \cdots \times \mathsf{ax}_r[I_r]) = \bigl\{\{\mathsf{ax}_1(i_1), \ldots, \mathsf{ax}_r(i_r)\} \mid i_1 \in I_1, \ldots, i_r \in I_r\bigr\}.$$

We say two shapes $\mathcal{S}$ and $\mathcal{T}$ are *compatible* if whenever $\mathsf{ax}[I] \in \mathcal{S}$ and $\mathsf{ax}[J] \in \mathcal{T}$, then $I = J$. We say that $\mathcal{S}$ and $\mathcal{T}$ are *orthogonal* if there is no $\mathsf{ax}$ such that $\mathsf{ax}[I] \in \mathcal{S}$ and $\mathsf{ax}[J] \in \mathcal{T}$ for any $I, J$. If $t \in \mathrm{rec}\,\mathcal{T}$ and $\mathcal{S} \subseteq \mathcal{T}$, then we write $t|_{\mathcal{S}}$ for the unique record in $\mathrm{rec}\,\mathcal{S}$ such that $t|_{\mathcal{S}} \subseteq t$.

Definition 4 (Named tensors). Let F be a field and let $\mathcal{S}$ be a shape. Then a *named tensor over F with shape* $\mathcal{S}$ is a mapping from $\mathrm{rec}\,\mathcal{S}$ to F. If X has shape $\mathcal{S}$ then we write $\mathrm{shp}\,X = \mathcal{S}$. We write the set of all named tensors with shape $\mathcal{S}$ as $F^{\mathcal{S}}$. We don't make any distinction between a scalar (an element of F) and a named tensor with empty shape (an element of $F^{\emptyset}$).

If $X \in F^{\mathcal{S}}$, then we access an element of X by applying it to a record $s \in \mathrm{rec}\,\mathcal{S}$; but we write this using the usual subscript notation: $X_s$ rather than $X(s)$. To avoid clutter, in place of $X_{\{\mathsf{ax}_1(i_1),\ldots,\mathsf{ax}_r(i_r)\}}$, we usually write $X_{\mathsf{ax}_1(i_1),\ldots,\mathsf{ax}_r(i_r)}$. When a named tensor is an expression like (X + Y), we index it by surrounding it with square brackets like this: $[X+Y]_{\mathsf{ax}_1(i_1),\ldots,\mathsf{ax}_r(i_r)}$.

We also allow partial indexing. If X is a tensor with shape $\mathcal{T}$ and $s \in \mathrm{rec}\,\mathcal{S}$ where $\mathcal{S} \subseteq \mathcal{T}$, then we define $X_s$ to be the named tensor with shape $\mathcal{T} \setminus \mathcal{S}$ such that, for any $t \in \mathrm{rec}(\mathcal{T} \setminus \mathcal{S})$,

$$[X_s]_t = X_{s \cup t}.$$

(For the edge case $\mathcal{S} = \mathcal{T}$, our definitions for indexing and partial indexing coincide: one gives a scalar and the other gives a tensor with empty shape, but we don't distinguish between the two.)

## 3 Operations

A significant benefit of named tensor notation is that it allows one to unambiguously specify *operations* that map tensors to tensors, and defines precisely how operations can be *lifted* when an operation is applied to tensors with more axes than are present in its signature and how *broadcasting* happens when different arguments add different axes. We start with the formal definition of named tensor operations and lifting, then show how this definition leads to many common operations.

## 3.1 Formal Definition

By *(named) tensor function* or *(named) tensor operation*, we mean not only functions from tensors to tensors, but also operators like negation (−), addition (+), and so on. We will extend the standard function/operator notation by allowing tensor operations to be *lifted* to higher-order tensors.

Definition 5 (lifting, unary). Let $f \colon F^{\mathcal{S}} \to G^{\mathcal{T}}$ be a function from tensors to tensors. For any shape $\mathcal{S}'$ orthogonal to both $\mathcal{S}$ and $\mathcal{T}$, we can define the *lift* $f^{\mathcal{S}'}$ of f with the shape $\mathcal{S}'$ to be the map

$$f^{\mathcal{S}'} \colon F^{\mathcal{S}\cup\mathcal{S}'} \to G^{\mathcal{T}\cup\mathcal{S}'}$$
$$\bigl[f^{\mathcal{S}'}(X)\bigr]_{s'} = f(X_{s'}) \qquad \text{for all } X \in F^{\mathcal{S}\cup\mathcal{S}'} \text{ and } s' \in \mathrm{rec}\,\mathcal{S}'.$$

Usually, we simply write f instead of $f^{\mathcal{S}'}$. That is, for every tensor X with shape $\mathcal{R} \supseteq \mathcal{S}$, we let $f(X) = f^{\mathcal{R}\setminus\mathcal{S}}(X)$.

If f is a multary function, we can lift each of its arguments to larger shapes, and we don't have to add the same axes to all the arguments; an axis present in one argument but not another is *broadcast* from the former to the latter.
We consider just the case of two arguments; three or more arguments are analogous.

Definition 6 (lifting, binary). Let $f \colon F^{\mathcal{S}} \times G^{\mathcal{T}} \to H^{\mathcal{U}}$ be a binary function from tensors to tensors. For any shapes $\mathcal{S}'$ and $\mathcal{T}'$ that are compatible with each other and orthogonal to $\mathcal{S}$ and $\mathcal{T}$, respectively, and such that $\mathcal{S}' \cup \mathcal{T}'$ is orthogonal to $\mathcal{U}$, we can lift f to:

$$f^{\mathcal{S}'\cup\mathcal{T}'} \colon F^{\mathcal{S}\cup\mathcal{S}'} \times G^{\mathcal{T}\cup\mathcal{T}'} \to H^{\mathcal{U}\cup\mathcal{S}'\cup\mathcal{T}'}$$
$$\bigl[f^{\mathcal{S}'\cup\mathcal{T}'}(X,Y)\bigr]_{s'} = f\bigl(X_{s'|_{\mathcal{S}'}},\, Y_{s'|_{\mathcal{T}'}}\bigr) \qquad \text{for all } X \in F^{\mathcal{S}\cup\mathcal{S}'},\ Y \in G^{\mathcal{T}\cup\mathcal{T}'},\ s' \in \mathrm{rec}(\mathcal{S}'\cup\mathcal{T}').$$

Again, we usually write f instead of $f^{\mathcal{S}'\cup\mathcal{T}'}$.

In the following sections, we present some consequences of the above lifting rules. In particular, we show how they allow one to lift some common operations from operating on scalars, vectors, or matrices to operating on tensors with more axes, and how they correspond to vectorizing and broadcasting (as implemented by NumPy and related packages).

## 3.2 Elementwise Operations and Broadcasting

Any function from a scalar to a scalar corresponds to a tensor function with signature $F^{\emptyset} \to F^{\emptyset}$. Hence lifting it to any tensor shape, by Definition 5, corresponds to elementwise application. For example, given the logistic sigmoid function,

$$\sigma \colon \mathbb{R} \to \mathbb{R} \qquad \sigma(x) = \frac{1}{1+\exp(-x)}$$

we can lift it to tensors. If $A \in \mathbb{R}^{\mathsf{height}[3]\times\mathsf{width}[3]}$ is the example tensor (1), then

$$\sigma(A) = \mathsf{height}\begin{bmatrix}\frac{1}{1+\exp(-3)}&\frac{1}{1+\exp(-1)}&\frac{1}{1+\exp(-4)}\\[2pt] \frac{1}{1+\exp(-1)}&\frac{1}{1+\exp(-5)}&\frac{1}{1+\exp(-9)}\\[2pt] \frac{1}{1+\exp(-2)}&\frac{1}{1+\exp(-6)}&\frac{1}{1+\exp(-5)}\end{bmatrix}.$$

Similarly for rectified linear units ($\mathrm{relu}(x) = \max(0, x)$), negation, and so on.

Any function with signature $\mathbb{R} \times \mathbb{R} \to \mathbb{R}$, including binary operators like addition (+), can be applied to two named tensors with the same shape. But if we apply a binary function or operator to tensors with different shapes, then, by Definition 6, broadcasting applies. For example, let

$$x \in \mathbb{R}^{\mathsf{height}[3]} \qquad y \in \mathbb{R}^{\mathsf{width}[3]}$$
$$x = \mathsf{height}\begin{bmatrix}2\\7\\1\end{bmatrix} \qquad y = \begin{bmatrix}1&4&1\end{bmatrix}.$$

(We write x as a column just to make the broadcasting easier to visualize.) Then, to evaluate A + x, we effectively replace x with a new tensor with a copy of x for every index of axis width. Likewise for A + y:

$$A+x = \mathsf{height}\begin{bmatrix}3+2&1+2&4+2\\1+7&5+7&9+7\\2+1&6+1&5+1\end{bmatrix} \qquad A+y = \mathsf{height}\begin{bmatrix}3+1&1+4&4+1\\1+1&5+4&9+1\\2+1&6+4&5+1\end{bmatrix}.$$

Similarly for other operations. We write elementwise multiplication (Hadamard product) as ⊙.
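To connect this behavior to code: the xarray package (Hoyer & Hamman, 2017) also broadcasts by axis name rather than by position. The following is a minimal sketch (not part of the notation itself) using the values of A, x, and y above.

```python
# A minimal sketch of named-axis broadcasting using xarray, with the values
# of A, x, and y from this section.
import xarray as xr

A = xr.DataArray([[3, 1, 4], [1, 5, 9], [2, 6, 5]], dims=("height", "width"))
x = xr.DataArray([2, 7, 1], dims=("height",))
y = xr.DataArray([1, 4, 1], dims=("width",))

# Addition broadcasts by axis *name*, not by position:
print((A + x).dims)                          # ('height', 'width')
print(int((A + x).isel(height=0, width=0)))  # 3 + 2 = 5
print(int((A + y).isel(height=1, width=1)))  # 5 + 4 = 9
```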
## 3.3 Reductions

The same rules apply to functions from vectors to scalars, called *reductions*. We specify which axis a reduction applies to using a subscript (equivalent to the axis argument in NumPy and dim in PyTorch), so that even after lifting, we know which axis to reduce. For example, we can define summation:

$$\sum_{\mathsf{ax}} \colon \mathbb{R}^{\mathsf{ax}[I]} \to \mathbb{R} \qquad \sum_{\mathsf{ax}} X = \sum_{i\in I} X_{\mathsf{ax}(i)}$$

and use it on A from Eq. (1):

$$\sum_{\mathsf{height}} A = \sum_i A_{\mathsf{height}(i)} = \begin{bmatrix}3+1+2&1+5+6&4+9+5\end{bmatrix}$$
$$\sum_{\mathsf{width}} A = \sum_j A_{\mathsf{width}(j)} = \begin{bmatrix}3+1+4&1+5+9&2+6+5\end{bmatrix}.$$

We can also write multiple names to sum over multiple axes:

$$\sum_{\substack{\mathsf{height}\\\mathsf{width}}} A = \sum_i \sum_j A_{\mathsf{height}(i),\mathsf{width}(j)} = 3+1+4+1+5+9+2+6+5.$$

But a summation with an index variable (like i or j above) is a standard summation over values of that variable, and a summation with no subscript is a standard summation over a set. Other examples of reductions include:

$$\operatorname*{norm}_{\mathsf{ax}} X = \sqrt{\sum_{\mathsf{ax}} X^2} \qquad \operatorname*{norm}_{\mathsf{ax}}^p X = \Bigl(\sum_{\mathsf{ax}} X^p\Bigr)^{\frac{1}{p}}$$
$$\min_{\mathsf{ax}} X = \min\{X_{\mathsf{ax}(i)} \mid i \in I\} \qquad \max_{\mathsf{ax}} X = \max\{X_{\mathsf{ax}(i)} \mid i \in I\}$$
$$\operatorname*{mean}_{\mathsf{ax}} X = \frac{1}{|\mathsf{ax}|}\sum_{\mathsf{ax}} X \qquad \operatorname*{var}_{\mathsf{ax}} X = \frac{1}{|\mathsf{ax}|}\sum_{\mathsf{ax}} \bigl(X - \operatorname*{mean}_{\mathsf{ax}} X\bigr)^2$$

## 3.4 Contraction

The vector dot-product is a function from two vectors to a scalar. We write it as follows:

$$- \odot_{\mathsf{ax}} - \colon \mathbb{R}^{\mathsf{ax}[I]} \times \mathbb{R}^{\mathsf{ax}[I]} \to \mathbb{R} \qquad X \odot_{\mathsf{ax}} Y = \sum_{i\in I} X_{\mathsf{ax}(i)} Y_{\mathsf{ax}(i)}$$

When lifted to higher-order tensors, the dot-product generalizes to the ubiquitous *contraction* operator, which can also be thought of as elementwise multiplication followed by summation over one axis, that is,

$$X \odot_{\mathsf{ax}} Y = \sum_{\mathsf{ax}} X \odot Y. \tag{2}$$

For example,

$$A \odot_{\mathsf{height}} x = \sum_{\mathsf{height}} A \odot x = \begin{bmatrix}3\cdot2+1\cdot7+2\cdot1&1\cdot2+5\cdot7+6\cdot1&4\cdot2+9\cdot7+5\cdot1\end{bmatrix}$$
$$A \odot_{\mathsf{width}} y = \sum_{\mathsf{width}} A \odot y = \mathsf{height}\begin{bmatrix}3\cdot1+1\cdot4+4\cdot1\\1\cdot1+5\cdot4+9\cdot1\\2\cdot1+6\cdot4+5\cdot1\end{bmatrix}.$$

Again, we can write multiple names to contract multiple axes at once:

$$A \odot_{\substack{\mathsf{height}\\\mathsf{width}}} A = \sum_{\substack{\mathsf{height}\\\mathsf{width}}} A \odot A = 3\cdot3+1\cdot1+4\cdot4+1\cdot1+5\cdot5+9\cdot9+2\cdot2+6\cdot6+5\cdot5$$

A ⊙ with no axis name under it contracts zero axes and is equivalent to elementwise multiplication, which is the reason we use the same symbol ⊙ for elementwise multiplication and contraction. The contraction operator can be used for many multiplication-like operations:

$$x \odot_{\mathsf{height}} x = \sum_{\mathsf{height}} x \odot x \qquad \text{inner product}$$
$$x \odot y = \mathsf{height}\begin{bmatrix}2\cdot1&2\cdot4&2\cdot1\\7\cdot1&7\cdot4&7\cdot1\\1\cdot1&1\cdot4&1\cdot1\end{bmatrix} \qquad \text{outer product}$$
$$A \odot_{\mathsf{width}} y = \sum_{\mathsf{width}} A \odot y \qquad \text{matrix-vector product}$$
$$x \odot_{\mathsf{height}} A = \sum_{\mathsf{height}} x \odot A \qquad \text{vector-matrix product}$$
$$A \odot_{\mathsf{width}} B = \sum_{\mathsf{width}} A \odot B \qquad \text{matrix-matrix product } (B \in \mathbb{R}^{\mathsf{width}\times\mathsf{width}'})$$

A contraction of three or more tensors can be written as a sum. For example, the three-way inner product of vectors $x, y, z \in \mathbb{R}^{\mathsf{width}}$ can be written as $\sum_{\mathsf{width}} x \odot y \odot z$.
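In ordered-axis code, contraction corresponds to einsum once each named axis is assigned a letter. The following NumPy sketch reproduces the examples above; the letter assignments (h for height, w for width) are our own.

```python
# A minimal sketch of contraction as einsum; h and w stand for height and width.
import numpy as np

A = np.array([[3, 1, 4], [1, 5, 9], [2, 6, 5]])  # axes: height, width
x = np.array([2, 7, 1])                          # axis: height
y = np.array([1, 4, 1])                          # axis: width

print(np.einsum("hw,h->w", A, x))   # A contracted with x over height: [15 43 76]
print(np.einsum("hw,w->h", A, y))   # A contracted with y over width:  [11 30 31]
print(np.einsum("hw,hw->", A, A))   # contraction over both axes:      198
```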
Like the dot-product from which it is lifted, but unlike matrix multiplication, the contraction operator is commutative, but not associative. However, contraction does obey the following associative-like law:

$$X \odot_{\mathcal{S}\cup\mathcal{T}} \bigl(Y \odot_{\mathcal{U}} Z\bigr) = \bigl(X \odot_{\mathcal{S}} Y\bigr) \odot_{\mathcal{T}\cup\mathcal{U}} Z \quad \text{if } \mathcal{S}\cap\mathrm{shp}\,Z = \mathcal{U}\cap\mathrm{shp}\,X = \emptyset. \tag{3}$$

The special case

$$X \odot_{\mathcal{S}} \bigl(Y \odot_{\mathcal{U}} Z\bigr) = \bigl(X \odot_{\mathcal{S}} Y\bigr) \odot_{\mathcal{U}} Z \quad \text{if } \mathcal{S}\cap\mathrm{shp}\,Z = \mathcal{U}\cap\mathrm{shp}\,X = \emptyset \tag{4}$$

will be useful in Section 5 for moving Z from inside one or more sets of parentheses to the outside.

## 3.5 Vectors to Vectors and Beyond

Functions from vectors to vectors ($\mathbb{R}^{\mathsf{ax}[I]} \to \mathbb{R}^{\mathsf{ax}[I]}$) lift to functions on tensors that operate along one axis, but leave the tensor shape unchanged. Such functions are particularly problematic in standard notation, which does not provide any way (to our knowledge) of specifying which axis the operation should be performed over. Such functions include:

$$\operatorname*{softmax}_{\mathsf{ax}} X = \frac{\exp X}{\sum_{\mathsf{ax}} \exp X} \tag{5a}$$
$$\operatorname*{argmax}_{\mathsf{ax}} X = \lim_{\alpha\to\infty} \operatorname*{softmax}_{\mathsf{ax}} \alpha X \tag{5b}$$
$$\operatorname*{argmin}_{\mathsf{ax}} X = \lim_{\alpha\to-\infty} \operatorname*{softmax}_{\mathsf{ax}} \alpha X \tag{5c}$$

For example, we can clearly distinguish between two ways of performing a softmax on A:

$$\operatorname*{softmax}_{\mathsf{height}} A = \mathsf{height}\begin{bmatrix}\frac{\exp 3}{\exp 3+\exp 1+\exp 2}&\frac{\exp 1}{\exp 1+\exp 5+\exp 6}&\frac{\exp 4}{\exp 4+\exp 9+\exp 5}\\[2pt] \frac{\exp 1}{\exp 3+\exp 1+\exp 2}&\frac{\exp 5}{\exp 1+\exp 5+\exp 6}&\frac{\exp 9}{\exp 4+\exp 9+\exp 5}\\[2pt] \frac{\exp 2}{\exp 3+\exp 1+\exp 2}&\frac{\exp 6}{\exp 1+\exp 5+\exp 6}&\frac{\exp 5}{\exp 4+\exp 9+\exp 5}\end{bmatrix}$$

$$\operatorname*{softmax}_{\mathsf{width}} A = \mathsf{height}\begin{bmatrix}\frac{\exp 3}{\exp 3+\exp 1+\exp 4}&\frac{\exp 1}{\exp 3+\exp 1+\exp 4}&\frac{\exp 4}{\exp 3+\exp 1+\exp 4}\\[2pt] \frac{\exp 1}{\exp 1+\exp 5+\exp 9}&\frac{\exp 5}{\exp 1+\exp 5+\exp 9}&\frac{\exp 9}{\exp 1+\exp 5+\exp 9}\\[2pt] \frac{\exp 2}{\exp 2+\exp 6+\exp 5}&\frac{\exp 6}{\exp 2+\exp 6+\exp 5}&\frac{\exp 5}{\exp 2+\exp 6+\exp 5}\end{bmatrix}$$

## 3.6 Renaming and Reshaping

It's often useful to rename an axis (analogous to a transpose operation in standard notation). We can think of this as the lift of a function from vectors to vectors, but with different input and output axes:

$$[-]_{\mathsf{ax}\to\mathsf{ax}'} \colon \mathbb{R}^{\mathsf{ax}[I]} \to \mathbb{R}^{\mathsf{ax}'[I]} \qquad [X_{\mathsf{ax}\to\mathsf{ax}'}]_{\mathsf{ax}'(i)} = X_{\mathsf{ax}(i)}$$

For example,

$$A_{\mathsf{height}\to\mathsf{height}'} = \mathsf{height}'\begin{bmatrix}3&1&4\\1&5&9\\2&6&5\end{bmatrix}.$$

We can also define notation for reshaping two or more axes into one axis:

$$A_{(\mathsf{height},\mathsf{width})\to\mathsf{layer}} = \begin{bmatrix}3&1&4&1&5&9&2&6&5\end{bmatrix}$$

The order of elements in the new axis is undefined. Authors who need a particular ordering may write a more specific definition.
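In code, the einops package (Rogozhnikov, 2022) expresses renaming and reshaping with axis names in its patterns; here is a minimal sketch. Note one place where code must be more specific than the notation: rearrange commits to a particular (row-major) order for the flattened axis, whereas the notation above leaves the order undefined.

```python
# A minimal sketch of renaming and reshaping with einops.
import numpy as np
from einops import rearrange

A = np.array([[3, 1, 4], [1, 5, 9], [2, 6, 5]])  # axes: height, width

# Named patterns make transposition explicit and self-documenting:
A_T = rearrange(A, "height width -> width height")

# (height, width) -> layer: flatten two named axes into one.
A_layer = rearrange(A, "height width -> (height width)")
print(A_layer)  # [3 1 4 1 5 9 2 6 5]
```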
## 3.7 Indexing

*(We are grateful to Tongfei Chen and Chu-Cheng Lin for contributing the original idea behind this section, as well as the example.)*

NumPy and its derivatives provide various ways to recombine elements of a tensor to form a new tensor: integer array indexing, and functions like numpy.take, numpy.take_along_axis, torch.index_select, and torch.gather. Using named tensors, we can write nearly all of these operations as lifts of a single function:

$$\operatorname*{index}_{\mathsf{ax}} \colon \mathbb{R}^{\mathsf{ax}[n]} \times [n] \to \mathbb{R} \qquad \operatorname*{index}_{\mathsf{ax}}(X, i) = X_{\mathsf{ax}(i)}.$$

For example, suppose we have

$$E \in \mathbb{R}^{\mathsf{vocab}[n]\times\mathsf{emb}} \qquad i \in [n] \qquad I \in [n]^{\mathsf{seq}} \qquad P \in \mathbb{R}^{\mathsf{seq}\times\mathsf{vocab}[n]}.$$

Tensor E contains word embeddings for all the words in the vocabulary. Integer i is the numeric identifier of a word, while tensor I is a sequence of numeric identifiers of words. Tensor P contains a sequence of probability distributions over the vocabulary (e.g., the predictions of a language model). Then:

- $\operatorname*{index}_{\mathsf{vocab}}(E, i)$ broadcasts the emb axis from E to i, giving the word embedding of word i. This is the same as partial indexing ($E_{\mathsf{vocab}(i)}$).
- $\operatorname*{index}_{\mathsf{vocab}}(E, I)$ also broadcasts the seq axis from I to E, giving a sequence of word embeddings. This is the same as integer array indexing (E[I]), numpy.take(E, I, 0), or torch.index_select(E, 0, I).
- $\operatorname*{index}_{\mathsf{vocab}}(P, I)$ aligns P's and I's seq axes, giving a sequence of probabilities. This is the same as numpy.take_along_axis(P, I, 0) or torch.gather(P, 0, I).
- If P and I additionally had a batch axis (before the other axes), then $\operatorname*{index}_{\mathsf{vocab}}(P, I)$ would be the same as tensorflow.gather(P, I, axis=1, batch_dims=1).

In NumPy, indexing using two or more integer arrays requires a special definition with some surprising special cases. With named tensors, we simply apply the indexing function twice. For example, if we wanted to get probabilities of words J at a subset I of positions, we could let:

$$|\mathsf{seq}| = m \qquad I \in [m]^{\mathsf{subseq}} \ \text{(positions)} \qquad J \in [n]^{\mathsf{subseq}} \ \text{(numeric identifiers)}$$
$$S = \operatorname*{index}_{\mathsf{vocab}}\bigl(\operatorname*{index}_{\mathsf{seq}}(P, I), J\bigr) \in \mathbb{R}^{\mathsf{subseq}}$$

so that $S_{\mathsf{subseq}(k)} = P_{\mathsf{seq}(I_{\mathsf{subseq}(k)}),\,\mathsf{vocab}(J_{\mathsf{subseq}(k)})}$.

## 4 Worked Examples: Neural Networks

In this section we give a series of worked examples illustrating how standard neural network model components can be written using named tensors. Appendix A builds some of these components into complete specifications of the Transformer and LeNet.

## 4.1 Feedforward Neural Networks

A multi-layer, feedforward neural network with different-sized layers can be written as:

$$X^0 \in \mathbb{R}^{\mathsf{input}}$$
$$X^1 = \sigma\bigl(W^1 \odot_{\mathsf{input}} X^0 + b^1\bigr) \qquad W^1 \in \mathbb{R}^{\mathsf{hidden1}\times\mathsf{input}},\ b^1 \in \mathbb{R}^{\mathsf{hidden1}}$$
$$X^2 = \sigma\bigl(W^2 \odot_{\mathsf{hidden1}} X^1 + b^2\bigr) \qquad W^2 \in \mathbb{R}^{\mathsf{hidden2}\times\mathsf{hidden1}},\ b^2 \in \mathbb{R}^{\mathsf{hidden2}}$$
$$X^3 = \sigma\bigl(W^3 \odot_{\mathsf{hidden2}} X^2 + b^3\bigr) \qquad W^3 \in \mathbb{R}^{\mathsf{out}\times\mathsf{hidden2}},\ b^3 \in \mathbb{R}^{\mathsf{out}}$$

The layer sizes can be specified by writing $|\mathsf{hidden1}| = n_1$, etc. As noted above, σ is applied elementwise and does not require additional annotation.
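As a minimal sketch, the three layer equations above can be transcribed into NumPy as follows; the layer sizes and the random initialization are illustrative assumptions, and the einsum letters are our own.

```python
# A minimal sketch of the three-layer feedforward network above; sizes and
# initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden1, n_hidden2, n_out = 8, 16, 16, 4

def sigma(x):  # logistic sigmoid, applied elementwise
    return 1 / (1 + np.exp(-x))

W1 = rng.normal(size=(n_hidden1, n_input));   b1 = np.zeros(n_hidden1)
W2 = rng.normal(size=(n_hidden2, n_hidden1)); b2 = np.zeros(n_hidden2)
W3 = rng.normal(size=(n_out, n_hidden2));     b3 = np.zeros(n_out)

X0 = rng.normal(size=(n_input,))
X1 = sigma(np.einsum("hi,i->h", W1, X0) + b1)  # contract over input
X2 = sigma(np.einsum("gh,h->g", W2, X1) + b2)  # contract over hidden1
X3 = sigma(np.einsum("og,g->o", W3, X2) + b3)  # contract over hidden2
print(X3.shape)  # (4,)
```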
Alternatively, the layer equation can be abstracted by defining:

$$\mathrm{FullConn}^l(x) = \sigma\bigl(W^l \odot_{\mathsf{layer}} x + b^l\bigr)_{\mathsf{layer}'\to\mathsf{layer}}$$

where

$$W^l \in \mathbb{R}^{\mathsf{layer}'[n_l]\times\mathsf{layer}[n_{l-1}]} \qquad b^l \in \mathbb{R}^{\mathsf{layer}'[n_l]}.$$

The function $\mathrm{FullConn}^l$ encapsulates both the equation for layer l as well as its parameters $W^l, b^l$ (analogous to what TensorFlow and PyTorch call *modules*). Since we chose to use the same axis name layer for all the layers (with different sizes $n_l$), $\mathrm{FullConn}^l$ temporarily computes its output over axis layer′, then renames it back to layer. The network can be defined like this:

$$X^0 \in \mathbb{R}^{\mathsf{layer}[n_0]}$$
$$X^1 = \mathrm{FullConn}^1(X^0)$$
$$X^2 = \mathrm{FullConn}^2(X^1)$$
$$X^3 = \mathrm{FullConn}^3(X^2).$$

## 4.2 Recurrent Neural Networks

As a second example, we consider a simple (Elman) RNN. This model is similar to the feedforward network, except that the number of timesteps is variable and parameters are shared over time. At each time step, it produces a tensor with a new axis hidden′, which is then renamed hidden for the next step in the recursion.

$$W^{\mathsf{h}} \in \mathbb{R}^{\mathsf{hidden}\times\mathsf{hidden}'} \qquad W^{\mathsf{i}} \in \mathbb{R}^{\mathsf{input}\times\mathsf{hidden}'} \qquad b \in \mathbb{R}^{\mathsf{hidden}'} \qquad |\mathsf{hidden}| = |\mathsf{hidden}'|$$
$$h^0 \in \mathbb{R}^{\mathsf{hidden}} \qquad x^t \in \mathbb{R}^{\mathsf{input}} \qquad t = 1, \ldots, n$$
$$h^t = \sigma\bigl(W^{\mathsf{h}} \odot_{\mathsf{hidden}} h^{t-1} + W^{\mathsf{i}} \odot_{\mathsf{input}} x^t + b\bigr)_{\mathsf{hidden}'\to\mathsf{hidden}} \qquad t = 1, \ldots, n$$

## 4.3 Attention

In the introduction (§1), we described difficulties in interpreting the equation for attention as used with Transformers (Vaswani et al., 2017). In our notation, it looks like this:

$$\mathrm{Attention} \colon \mathbb{R}^{\mathsf{key}} \times \mathbb{R}^{\mathsf{seq}\times\mathsf{key}} \times \mathbb{R}^{\mathsf{seq}\times\mathsf{val}} \to \mathbb{R}^{\mathsf{val}} \tag{6}$$
$$\mathrm{Attention}(Q, K, V) = \left(\operatorname*{softmax}_{\mathsf{seq}} \frac{Q \odot_{\mathsf{key}} K}{\sqrt{|\mathsf{key}|}}\right) \odot_{\mathsf{seq}} V. \tag{7}$$

This definition takes a single query vector Q and returns a single result vector (and could even be reduced further to return a scalar, since the val axis is not strictly necessary). To apply it to a sequence, we can give Q a seq′ axis, and the function will compute an output sequence. Providing Q, K, and V with a heads axis lifts the function to compute multiple attention heads.
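For concreteness, here is a minimal NumPy sketch of Eqs. (6) and (7) for a single query; the einsum letters (s, k, v for seq, key, val) and the sizes are our own assumptions. Lifting to a seq′ or heads axis corresponds to adding letters to the einsum strings.

```python
# A minimal sketch of Eq. (7); s, k, v stand for the seq, key, and val axes.
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_seq, d_key, d_val = 5, 4, 3
Q = rng.normal(size=(d_key,))        # axes: key
K = rng.normal(size=(n_seq, d_key))  # axes: seq, key
V = rng.normal(size=(n_seq, d_val))  # axes: seq, val

scores = np.einsum("k,sk->s", Q, K) / np.sqrt(d_key)     # Q contracted with K over key
out = np.einsum("s,sv->v", softmax(scores, axis=-1), V)  # contract over seq
print(out.shape)  # (3,)
```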
For Transformers we often need to apply a mask to ensure attention is only applied to valid keys (e.g., for causal language models). We can modify the equation to include this mask:

$$\mathrm{Attention} \colon \mathbb{R}^{\mathsf{key}} \times \mathbb{R}^{\mathsf{seq}\times\mathsf{key}} \times \mathbb{R}^{\mathsf{seq}\times\mathsf{val}} \times \mathbb{R}^{\mathsf{seq}} \to \mathbb{R}^{\mathsf{val}}$$
$$\mathrm{Attention}(Q, K, V, M) = \left(\operatorname*{softmax}_{\mathsf{seq}}\left(\frac{Q \odot_{\mathsf{key}} K}{\sqrt{|\mathsf{key}|}} + M\right)\right) \odot_{\mathsf{seq}} V.$$

Appendix A.1 includes a full specification of the complete Transformer model using the named tensor notation.

## 4.4 Convolution

Standard neural network convolutions can be specified by "unrolling" a vector and then applying a standard dot product. We define an axis-parameterized unrolling function that converts a one-axis tensor to a sequence of kernel-sized vectors:

$$\operatorname*{unroll}_{\mathsf{seq},\mathsf{kernel}} \colon \mathbb{R}^{\mathsf{seq}[n]} \to \mathbb{R}^{\mathsf{seq}[n-|\mathsf{kernel}|+1]\times\mathsf{kernel}}$$
$$\operatorname*{unroll}_{\mathsf{seq},\mathsf{kernel}} X = Y, \text{ where } Y_{\mathsf{seq}(i),\mathsf{kernel}(j)} = X_{\mathsf{seq}(i+j-1)}.$$

A 1d convolution with input channels chans and output channels chans′ consists of unrolling along the seq axis and then taking a dot product:

$$\mathrm{Conv1d} \colon \mathbb{R}^{\mathsf{chans}\times\mathsf{seq}[n]} \to \mathbb{R}^{\mathsf{chans}'\times\mathsf{seq}[n']} \qquad (n' = n - |\mathsf{kernel}| + 1)$$
$$\mathrm{Conv1d}(X; W, b) = W \odot_{\mathsf{chans},\mathsf{kernel}} \operatorname*{unroll}_{\mathsf{seq},\mathsf{kernel}} X + b$$

where

$$W \in \mathbb{R}^{\mathsf{chans}'\times\mathsf{chans}\times\mathsf{kernel}} \qquad b \in \mathbb{R}^{\mathsf{chans}'}.$$

Unrolling easily generalizes to higher-dimensional convolutions:

$$\mathrm{Conv2d} \colon \mathbb{R}^{\mathsf{chans}\times\mathsf{height}[h]\times\mathsf{width}[w]} \to \mathbb{R}^{\mathsf{chans}'\times\mathsf{height}[h']\times\mathsf{width}[w']}$$
$$\mathrm{Conv2d}(X; W, b) = W \odot_{\mathsf{chans},\mathsf{kh},\mathsf{kw}} \operatorname*{unroll}_{\mathsf{height},\mathsf{kh}} \operatorname*{unroll}_{\mathsf{width},\mathsf{kw}} X + b$$

where

$$W \in \mathbb{R}^{\mathsf{chans}'\times\mathsf{chans}\times\mathsf{kh}\times\mathsf{kw}} \qquad b \in \mathbb{R}^{\mathsf{chans}'}.$$

## 4.5 Pooling

Pooling is similar to convolutions. We first define a function to partition a tensor into windows:

$$\operatorname*{pool}_{\mathsf{seq},\mathsf{kernel}} \colon \mathbb{R}^{\mathsf{seq}[n]} \to \mathbb{R}^{\mathsf{seq}[n/|\mathsf{kernel}|]\times\mathsf{kernel}}$$
$$\operatorname*{pool}_{\mathsf{seq},\mathsf{kernel}} X = Y, \text{ where } Y_{\mathsf{seq}(i),\mathsf{kernel}(j)} = X_{\mathsf{seq}((i-1)\cdot|\mathsf{kernel}|+j)}.$$

Then we can define aggregations over kernel. We define max-pooling as:

$$\mathrm{MaxPool1d}_k \colon \mathbb{R}^{\mathsf{seq}[n]} \to \mathbb{R}^{\mathsf{seq}[n/k]}$$
$$\mathrm{MaxPool1d}_k(X) = \operatorname*{max}_{\mathsf{kernel}} \operatorname*{pool}_{\mathsf{seq},\mathsf{kernel}} X \qquad |\mathsf{kernel}| = k$$

$$\mathrm{MaxPool2d}_{kh,kw} \colon \mathbb{R}^{\mathsf{height}[h]\times\mathsf{width}[w]} \to \mathbb{R}^{\mathsf{height}[h/kh]\times\mathsf{width}[w/kw]}$$
$$\mathrm{MaxPool2d}_{kh,kw}(X) = \operatorname*{max}_{\mathsf{kh},\mathsf{kw}} \operatorname*{pool}_{\mathsf{height},\mathsf{kh}} \operatorname*{pool}_{\mathsf{width},\mathsf{kw}} X \qquad |\mathsf{kh}| = kh,\ |\mathsf{kw}| = kw.$$
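As a sketch of the unroll-then-contract view of convolution, NumPy's sliding_window_view plays the role of unroll over the seq axis, after which Conv1d is a single contraction; the sizes below are illustrative assumptions.

```python
# A minimal sketch of Conv1d as unroll followed by contraction.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
chans, chans_out, n, kern = 2, 3, 8, 3
X = rng.normal(size=(chans, n))                # axes: chans, seq
W = rng.normal(size=(chans_out, chans, kern))  # axes: chans', chans, kernel
b = rng.normal(size=(chans_out,))

U = sliding_window_view(X, kern, axis=-1)        # axes: chans, seq, kernel
Y = np.einsum("ock,csk->os", W, U) + b[:, None]  # contract chans and kernel
print(Y.shape)  # (3, 6), i.e. (chans', n - |kernel| + 1)
```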
## 4.6 Normalization Layers

Normalization layers are used in all large-scale deep learning models, with different architectures requiring different types of normalization. However, despite their importance, the differences between them are often not clearly communicated. For example, the PyTorch documentation (PyTorch Contributors, 2022) describes all of them using the same equation (where ϵ > 0 is a small constant for numerical stability):

$$Y = \frac{X - \mathrm{mean}(X)}{\sqrt{\mathrm{var}(X) + \epsilon}} \odot \gamma + \beta$$

Wu & He (2018) give essentially the same equation and explain the differences using a combination of equations, words, and pictures. But they do not capture differences in γ and β among different normalization layers. Critically, the layers do differ by which axes are *standardized* as well as by their parameters. We define a single named standardization function as:

$$\operatorname*{standardize}_{\mathsf{ax}} \colon \mathbb{R}^{\mathsf{ax}} \to \mathbb{R}^{\mathsf{ax}}$$
$$\operatorname*{standardize}_{\mathsf{ax}}(X) = \frac{X - \operatorname*{mean}_{\mathsf{ax}}(X)}{\sqrt{\operatorname*{var}_{\mathsf{ax}}(X) + \epsilon}}$$

Then, we can define the three kinds of normalization layers, all with type $\mathbb{R}^{\mathsf{batch}\times\mathsf{chans}\times\mathsf{layer}} \to \mathbb{R}^{\mathsf{batch}\times\mathsf{chans}\times\mathsf{layer}}$. While superficially similar, these functions differ in their standardized axes and their parameter shapes:

$$\mathrm{BatchNorm}(X; \gamma, \beta) = \operatorname*{standardize}_{\mathsf{batch},\mathsf{layer}}(X) \odot \gamma + \beta \qquad \gamma, \beta \in \mathbb{R}^{\mathsf{chans}}$$
$$\mathrm{InstanceNorm}(X; \gamma, \beta) = \operatorname*{standardize}_{\mathsf{layer}}(X) \odot \gamma + \beta \qquad \gamma, \beta \in \mathbb{R}^{\mathsf{chans}}$$
$$\mathrm{LayerNorm}(X; \gamma, \beta) = \operatorname*{standardize}_{\mathsf{layer},\mathsf{chans}}(X) \odot \gamma + \beta \qquad \gamma, \beta \in \mathbb{R}^{\mathsf{chans}\times\mathsf{layer}}$$
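To make the contrast concrete, here is a minimal NumPy sketch in which the three layers differ only in the axes passed to a standardize function; the sizes are assumptions, and the elementwise γ and β are omitted for brevity.

```python
# A minimal sketch contrasting the normalization layers by standardized axes.
import numpy as np

def standardize(X, axes, eps=1e-5):
    m = X.mean(axis=axes, keepdims=True)
    v = X.var(axis=axes, keepdims=True)
    return (X - m) / np.sqrt(v + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3, 5))  # axes: batch, chans, layer

batch_norm    = standardize(X, axes=(0, 2))  # standardize over batch, layer
instance_norm = standardize(X, axes=(2,))    # standardize over layer
layer_norm    = standardize(X, axes=(1, 2))  # standardize over chans, layer
print(batch_norm.shape, instance_norm.shape, layer_norm.shape)
```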
## 5 Differential Calculus

In many machine learning applications, we need to compute derivatives of functions from tensors to tensors. In standard vector/matrix notation, this can become complicated. For example, if f maps from vectors to vectors, then the partial derivatives of f form a matrix (the Jacobian). It has an "input" axis for the directions in which X could change, and an "output" axis for the directions in which f(X) could change. But there are conflicting conventions about whether the first axis is the input axis ("denominator layout") or the output axis ("numerator layout"). The derivative of a function from vectors to matrices or matrices to vectors cannot be represented as a matrix at all, so one must resort to flattening the matrices into vectors. With non-named tensor index notation, taking derivatives is not difficult (Laue et al., 2018), but again a convention must be adopted that the input axes come after the output axes, separated by a comma.

With named tensors, axes are not ordered, so we don't need to remember whether the input or output axes come first. But we do need to ensure that the input and output axes have different names.

## 5.1 Definition

Definition 7. Let $\mathcal{S} = \mathsf{ax}_1 \times \cdots \times \mathsf{ax}_r$ be a shape. Then we write $\mathcal{S}^* = \mathsf{ax}_1^* \times \cdots \times \mathsf{ax}_r^*$, and if $s = \{\mathsf{ax}_1(i_1), \ldots, \mathsf{ax}_r(i_r)\} \in \mathrm{rec}\,\mathcal{S}$, then we write $s^* = \{\mathsf{ax}_1^*(i_1), \ldots, \mathsf{ax}_r^*(i_r)\}$. Furthermore, if $X \in \mathbb{R}^{\mathcal{S}}$ then we write $X^* = X_{\mathcal{S}\to\mathcal{S}^*}$.

Definition 8. Let $f \colon \mathbb{R}^{\mathcal{S}} \to \mathbb{R}^{\mathcal{T}}$. The *derivative* of f(X) with respect to X* is the tensor such that

$$\frac{\partial f(X)}{\partial X^*} \in \mathbb{R}^{\mathcal{S}^*\cup\mathcal{T}} \qquad \left[\frac{\partial f(X)}{\partial X^*}\right]_{s^*,t} = \frac{\partial [f(X)]_t}{\partial X_s}$$

for all $s \in \mathrm{rec}\,\mathcal{S}$ and $t \in \mathrm{rec}\,\mathcal{T}$.

The above definition assumes that $\mathcal{S}^*$ and $\mathcal{T}$ are orthogonal; if not, the axes in $\mathcal{S}$ should be renamed to something else. For example, the second derivative (the Hessian) could be

$$\frac{\partial^2 f(X)}{\partial X^*\,\partial X^\dagger} \in \mathbb{R}^{\mathcal{S}^*\cup\mathcal{S}^\dagger\cup\mathcal{T}} \qquad \left[\frac{\partial^2 f(X)}{\partial X^*\,\partial X^\dagger}\right]_{r^*,s^\dagger,t} = \frac{\partial^2 [f(X)]_t}{\partial X_r\,\partial X_s}$$

for all $r, s \in \mathrm{rec}\,\mathcal{S}$ and $t \in \mathrm{rec}\,\mathcal{T}$.

## 5.2 Differentials

We could derive rules like the chain rule and the sum and product rules, and use them to compute derivatives; however, ensuring that input and output shapes are orthogonal is inconvenient. Instead, we recommend the method of differentials (Magnus & Neudecker, 1985), which reduces renaming to a minimum.

The first-order Taylor approximation of f around $X \in \mathbb{R}^{\mathcal{S}}$ is

$$f(X+H) \approx f(X) + \frac{\partial f(X)}{\partial X^*} \odot_{\mathcal{S}^*} H^* \qquad H \in \mathbb{R}^{\mathcal{S}}.$$

The *differential* of f(X) with increment H, written ∂f(X; H), is the second term of this approximation; we defer a formal definition to Appendix B. For example:

- If id is the identity function, then
$$\mathrm{id}(X+H) = X+H \qquad \partial\,\mathrm{id}(X; H) = H. \tag{8a}$$
- If $f(X) = X \odot_{\mathsf{ax}} X$ where $X \in \mathbb{R}^{\mathsf{ax}}$, then
$$f(X+H) = (X+H) \odot_{\mathsf{ax}} (X+H) = X \odot_{\mathsf{ax}} X + 2X \odot_{\mathsf{ax}} H + H \odot_{\mathsf{ax}} H$$
$$\partial f(X; H) = 2X \odot_{\mathsf{ax}} H. \tag{8b}$$

It's often more convenient to work directly with the expression $X \odot_{\mathsf{ax}} X$ instead of f(X), and to write $\partial(X \odot_{\mathsf{ax}} X)$ for ∂f(X; H). Then, since $\partial X = \partial\,\mathrm{id}(X; H) = H$, we can write Eq. (8b) simply as

$$\partial\bigl(X \odot_{\mathsf{ax}} X\bigr) = 2X \odot_{\mathsf{ax}} \partial X$$

so that the H has been "hidden" inside ∂X. More generally, we can derive rules like the following:

$$\partial(U+V) = \partial U + \partial V \tag{9a}$$
$$\partial(U \odot V) = U \odot \partial V + V \odot \partial U \tag{9b}$$
$$\partial\left(\frac{U}{V}\right) = \frac{V \odot \partial U - U \odot \partial V}{V^2} \tag{9c}$$
$$\partial\sum_{\mathsf{ax}} U = \sum_{\mathsf{ax}} \partial U \tag{9d}$$
$$\partial\bigl(U \odot_{\mathsf{ax}} V\bigr) = U \odot_{\mathsf{ax}} \partial V + V \odot_{\mathsf{ax}} \partial U \tag{9e}$$
$$\partial U_s = [\partial U]_s \tag{9f}$$
$$\partial U_{\mathsf{ax}\to\mathsf{ax}'} = [\partial U]_{\mathsf{ax}\to\mathsf{ax}'} \tag{9g}$$

The chain rule for differentials is

$$\partial f(U) = \left.\frac{\partial f(X)}{\partial X^*}\right|_{X=U} \odot_{\mathcal{S}^*} \partial U_{\mathcal{S}\to\mathcal{S}^*} \qquad f \colon \mathbb{R}^{\mathcal{S}} \to \mathbb{R}^{\mathcal{T}}. \tag{9h}$$

Recall that f can be lifted to shapes larger than $\mathcal{S}$. In that case, the rule above still applies, but note that the contraction will still be over $\mathcal{S}^*$. A special case of this is when $\mathcal{S} = \mathcal{T} = \emptyset$:

$$\partial f(U) = \left.\frac{\mathrm{d}f(x)}{\mathrm{d}x}\right|_{x=U} \odot \partial U \qquad f \colon \mathbb{R} \to \mathbb{R}. \tag{9i}$$

For example, letting $f(x) = \exp x$ gives the rule

$$\partial(\exp U) = \exp U \odot \partial U. \tag{9j}$$

Using these rules we can compute the differential of a wide variety of expressions.
For example, the softmax operator:

$$\begin{aligned}
\partial\bigl(\operatorname*{softmax}_{\mathsf{ax}} X\bigr) &\overset{(5\mathrm{a})}{=} \partial\left(\frac{\exp X}{\sum_{\mathsf{ax}} \exp X}\right)\\
&\overset{(9\mathrm{c})}{=} \frac{\bigl(\sum_{\mathsf{ax}} \exp X\bigr) \odot \partial(\exp X) - \exp X \odot \partial\sum_{\mathsf{ax}} \exp X}{\bigl(\sum_{\mathsf{ax}} \exp X\bigr)^2}\\
&\overset{(9\mathrm{d})}{=} \frac{\bigl(\sum_{\mathsf{ax}} \exp X\bigr) \odot \partial(\exp X) - \exp X \odot \sum_{\mathsf{ax}} \partial(\exp X)}{\bigl(\sum_{\mathsf{ax}} \exp X\bigr)^2}\\
&\overset{(9\mathrm{j})}{=} \frac{\bigl(\sum_{\mathsf{ax}} \exp X\bigr) \odot \exp X \odot \partial X - \exp X \odot \sum_{\mathsf{ax}} (\exp X \odot \partial X)}{\bigl(\sum_{\mathsf{ax}} \exp X\bigr)^2}\\
&\overset{(2)}{=} \frac{\bigl(\sum_{\mathsf{ax}} \exp X\bigr) \odot \exp X \odot \partial X - \exp X \odot \bigl(\exp X \odot_{\mathsf{ax}} \partial X\bigr)}{\bigl(\sum_{\mathsf{ax}} \exp X\bigr)^2}\\
&= \frac{\exp X}{\sum_{\mathsf{ax}} \exp X} \odot \partial X - \frac{\exp X}{\sum_{\mathsf{ax}} \exp X} \odot \left(\frac{\exp X}{\sum_{\mathsf{ax}} \exp X} \odot_{\mathsf{ax}} \partial X\right)\\
&\overset{(5\mathrm{a})}{=} \operatorname*{softmax}_{\mathsf{ax}} X \odot \partial X - \operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(\operatorname*{softmax}_{\mathsf{ax}} X \odot_{\mathsf{ax}} \partial X\bigr)\\
&= \operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(\partial X - \operatorname*{softmax}_{\mathsf{ax}} X \odot_{\mathsf{ax}} \partial X\bigr).
\end{aligned} \tag{10}$$

We stop when the only differentials left are ∂X.

## 5.3 Derivatives via Differentials

If we can get ∂f(X) into so-called *canonical form*,

$$\partial f(X) = D \odot_{\mathcal{S}^*} \partial X^* + \text{const.} \tag{11}$$

where "const." stands for terms not depending on ∂X, then by Magnus & Neudecker's first identification theorem (Theorem 1 in Appendix B), we can conclude that

$$\frac{\partial f(X)}{\partial X^*} = D.$$

When trying to get expressions into canonical form, one helpful fact is that renaming can be thought of as contraction with an identity matrix. First we define the identity matrix with shape $\mathsf{ax} \times \mathsf{ax}'$:

$$[I_{\mathsf{ax},\mathsf{ax}'}]_{\mathsf{ax}(i),\mathsf{ax}'(j)} = \begin{cases}1 & i=j\\0 & i\neq j.\end{cases}$$

Then for any tensor A with an axis $\mathsf{ax}$,

$$A_{\mathsf{ax}\to\mathsf{ax}'} = I_{\mathsf{ax},\mathsf{ax}'} \odot_{\mathsf{ax}} A. \tag{12}$$

More specifically, if $\partial X \in \mathbb{R}^{\mathcal{S}}$, then

$$\partial X = I_{\mathcal{S},\mathcal{S}^*} \odot_{\mathcal{S}^*} \partial X^* \tag{13}$$

and then Eq. (4) can usually be used to move the $\odot_{\mathcal{S}^*} \partial X^*$ to the outermost level of the expression.

Above, we found the differential of the softmax function; now let us find its derivative.

$$\begin{aligned}
\partial\bigl(\operatorname*{softmax}_{\mathsf{ax}} X\bigr) &\overset{(10)}{=} \operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(\partial X - \operatorname*{softmax}_{\mathsf{ax}} X \odot_{\mathsf{ax}} \partial X\bigr)\\
&\overset{(13)}{=} \operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(I_{\mathsf{ax},\mathsf{ax}^*} \odot_{\mathsf{ax}^*} \partial X^* - \operatorname*{softmax}_{\mathsf{ax}} X \odot_{\mathsf{ax}} (I_{\mathsf{ax},\mathsf{ax}^*} \odot_{\mathsf{ax}^*} \partial X^*)\bigr)\\
&\overset{(4)}{=} \Bigl(\operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(I_{\mathsf{ax},\mathsf{ax}^*} - \operatorname*{softmax}_{\mathsf{ax}} X \odot_{\mathsf{ax}} I_{\mathsf{ax},\mathsf{ax}^*}\bigr)\Bigr) \odot_{\mathsf{ax}^*} \partial X^*\\
&\overset{(12)}{=} \Bigl(\operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(I_{\mathsf{ax},\mathsf{ax}^*} - (\operatorname*{softmax}_{\mathsf{ax}} X)^*\bigr)\Bigr) \odot_{\mathsf{ax}^*} \partial X^*.
\end{aligned}$$

This is in canonical form, so we have

$$\frac{\partial}{\partial X^*}\bigl(\operatorname*{softmax}_{\mathsf{ax}} X\bigr) = \operatorname*{softmax}_{\mathsf{ax}} X \odot \bigl(I_{\mathsf{ax},\mathsf{ax}^*} - (\operatorname*{softmax}_{\mathsf{ax}} X)^*\bigr). \tag{14}$$
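As a sanity check, Eq. (14) can be verified numerically. The following minimal sketch compares the derivative it gives (which, written with ordered indices, is $s_i(\delta_{ij} - s_j)$ for $s = \mathrm{softmax}\,x$) against central finite differences; the test values and tolerance are assumptions.

```python
# A minimal numerical check of Eq. (14), the derivative of softmax.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([3.0, 1.0, 4.0])
s = softmax(x)
analytic = s[:, None] * (np.eye(3) - s[None, :])  # rows: ax, columns: ax*

eps = 1e-6
numeric = np.stack(
    [(softmax(x + eps * np.eye(3)[j]) - softmax(x - eps * np.eye(3)[j])) / (2 * eps)
     for j in range(3)],
    axis=1,
)
print(np.allclose(analytic, numeric, atol=1e-8))  # True
```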
## 5.4 Lifting

Recall that $f^{\mathcal{S}'}$ is the lift of $f \colon \mathbb{R}^{\mathcal{S}} \to \mathbb{R}^{\mathcal{T}}$ with $\mathcal{S}'$, and in most contexts we can simply write f instead of $f^{\mathcal{S}'}$. However, derivatives are one place where some extra caution is in order. To lighten notation, let's write g for the derivative of f:

$$g(X) = \frac{\partial f(X)}{\partial X^*}.$$

Recall that the chain rule (9h) works under lifting, so

$$\partial f^{\mathcal{S}'}(X) = g^{\mathcal{S}'}(X) \odot_{\mathcal{S}^*} \partial X_{\mathcal{S}\to\mathcal{S}^*}.$$

But the contraction is only over $\mathcal{S}^*$, so it would be incorrect to conclude that $\frac{\partial f^{\mathcal{S}'}(X)}{\partial X^*} = g^{\mathcal{S}'}(X)$. The derivative of a lift is not the lift of a derivative. We must rename and contract $\mathcal{S}'$ as well:

$$\begin{aligned}
\partial f^{\mathcal{S}'}(X) &\overset{(9\mathrm{h})}{=} g^{\mathcal{S}'}(X) \odot_{\mathcal{S}^*} \partial X_{\mathcal{S}\to\mathcal{S}^*}\\
&\overset{(13)}{=} g^{\mathcal{S}'}(X) \odot_{\mathcal{S}^*} \bigl(I_{\mathcal{S}',\mathcal{S}'^*} \odot_{\mathcal{S}'^*} \partial X_{\mathcal{S}\cup\mathcal{S}'\to(\mathcal{S}\cup\mathcal{S}')^*}\bigr)\\
&\overset{(3)}{=} \bigl(g^{\mathcal{S}'}(X) \odot I_{\mathcal{S}',\mathcal{S}'^*}\bigr) \odot_{(\mathcal{S}\cup\mathcal{S}')^*} \partial X_{\mathcal{S}\cup\mathcal{S}'\to(\mathcal{S}\cup\mathcal{S}')^*}
\end{aligned}$$
$$\frac{\partial f^{\mathcal{S}'}(X)}{\partial X^*} = g^{\mathcal{S}'}(X) \odot I_{\mathcal{S}',\mathcal{S}'^*}.$$

In general, then, the derivative of a lift is the lift of the derivative, multiplied by the identity matrix for the new axes. Intuitively, this is because the derivative is a linear transformation (before lifting, a transformation from $\mathcal{S}^*$ to $\mathcal{T}$). When lifting to $\mathcal{S} \cup \mathcal{S}'$, this transformation must also be lifted, which is what multiplication by $I_{\mathcal{S}',\mathcal{S}'^*}$ accomplishes.

## 5.5 Extended Example

As a more elaborate example, we find the derivative of self-attention. For brevity, we omit the factor $\frac{1}{\sqrt{|\mathsf{key}|}}$, and we write $\alpha = \operatorname*{softmax}_{\mathsf{seq}}\bigl(Q \odot_{\mathsf{key}} K\bigr)$.

$$\begin{aligned}
\partial\,\mathrm{Att}(Q, K, V) &\overset{(7)}{=} \partial\bigl(\alpha \odot_{\mathsf{seq}} V\bigr)\\
&\overset{(9\mathrm{e})}{=} \alpha \odot_{\mathsf{seq}} \partial V + V \odot_{\mathsf{seq}} \partial\alpha.
\end{aligned} \tag{15}$$

Focus first on the first term, which is the only term depending on ∂V:

$$\begin{aligned}
\alpha \odot_{\mathsf{seq}} \partial V &\overset{(13)}{=} \alpha \odot_{\mathsf{seq}} \bigl((I_{\mathsf{seq},\mathsf{seq}^*} \odot I_{\mathsf{val},\mathsf{val}^*}) \odot_{\mathsf{seq}^*,\mathsf{val}^*} \partial V^*\bigr)\\
&\overset{(4)}{=} \bigl(\alpha \odot_{\mathsf{seq}} (I_{\mathsf{seq},\mathsf{seq}^*} \odot I_{\mathsf{val},\mathsf{val}^*})\bigr) \odot_{\mathsf{seq}^*,\mathsf{val}^*} \partial V^*\\
&\overset{(12)}{=} \bigl(\alpha_{\mathsf{seq}\to\mathsf{seq}^*} \odot I_{\mathsf{val},\mathsf{val}^*}\bigr) \odot_{\mathsf{seq}^*,\mathsf{val}^*} \partial V^*
\end{aligned}$$
$$\frac{\partial}{\partial V^*}\,\mathrm{Att}(Q, K, V) = \alpha_{\mathsf{seq}\to\mathsf{seq}^*} \odot I_{\mathsf{val},\mathsf{val}^*}.$$

Next, focus on the second term of Eq. (15):

$$\begin{aligned}
V \odot_{\mathsf{seq}} \partial\alpha &\overset{(10)}{=} V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(\partial(Q \odot_{\mathsf{key}} K) - \alpha \odot_{\mathsf{seq}} \partial(Q \odot_{\mathsf{key}} K)\bigr)\Bigr)\\
&\overset{(9\mathrm{e})}{=} V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(Q \odot_{\mathsf{key}} \partial K + K \odot_{\mathsf{key}} \partial Q - \alpha \odot_{\mathsf{seq}} (Q \odot_{\mathsf{key}} \partial K + K \odot_{\mathsf{key}} \partial Q)\bigr)\Bigr).
\end{aligned} \tag{16}$$

Keeping only terms depending on ∂Q, we get

$$\begin{aligned}
&V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(K \odot_{\mathsf{key}} \partial Q - \alpha \odot_{\mathsf{seq}} (K \odot_{\mathsf{key}} \partial Q)\bigr)\Bigr)\\
&\overset{(13)}{=} V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(K \odot_{\mathsf{key}} (I_{\mathsf{key},\mathsf{key}^*} \odot_{\mathsf{key}^*} \partial Q^*) - \alpha \odot_{\mathsf{seq}} (K \odot_{\mathsf{key}} (I_{\mathsf{key},\mathsf{key}^*} \odot_{\mathsf{key}^*} \partial Q^*))\bigr)\Bigr)\\
&\overset{(4)}{=} \Bigl(V \odot_{\mathsf{seq}} \bigl(\alpha \odot (K \odot_{\mathsf{key}} I_{\mathsf{key},\mathsf{key}^*} - \alpha \odot_{\mathsf{seq}} (K \odot_{\mathsf{key}} I_{\mathsf{key},\mathsf{key}^*}))\bigr)\Bigr) \odot_{\mathsf{key}^*} \partial Q^*\\
&\overset{(12)}{=} \Bigl(V \odot_{\mathsf{seq}} \bigl(\alpha \odot (K_{\mathsf{key}\to\mathsf{key}^*} - \alpha \odot_{\mathsf{seq}} K_{\mathsf{key}\to\mathsf{key}^*})\bigr)\Bigr) \odot_{\mathsf{key}^*} \partial Q^*
\end{aligned}$$
$$\frac{\partial}{\partial Q^*}\,\mathrm{Att}(Q, K, V) = V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(K_{\mathsf{key}\to\mathsf{key}^*} - \alpha \odot_{\mathsf{seq}} K_{\mathsf{key}\to\mathsf{key}^*}\bigr)\Bigr).$$

Similarly, keeping only terms from Eq. (16) depending on ∂K, we get

$$V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(Q \odot_{\mathsf{key}} \partial K - \alpha \odot_{\mathsf{seq}} (Q \odot_{\mathsf{key}} \partial K)\bigr)\Bigr)$$

and, since ∂K has both a seq and a key axis, substituting $\partial K = (I_{\mathsf{seq},\mathsf{seq}^*} \odot I_{\mathsf{key},\mathsf{key}^*}) \odot_{\mathsf{seq}^*,\mathsf{key}^*} \partial K^*$ and moving the contraction outward as before gives

$$\frac{\partial}{\partial K^*}\,\mathrm{Att}(Q, K, V) = V \odot_{\mathsf{seq}} \Bigl(\alpha \odot \bigl(I_{\mathsf{seq},\mathsf{seq}^*} - \alpha_{\mathsf{seq}\to\mathsf{seq}^*}\bigr) \odot Q_{\mathsf{key}\to\mathsf{key}^*}\Bigr).$$

## 6 Alternatives and Related Work

## 6.1 Index Notations

Among alternatives to standard vector and matrix notation, the most common one is index notation as used in physics (Ricci & Levi-Civita, 1900). Related notations are used in other fields as well (Harshman, 2001). In this notation, axes are ordered, and every equation is written in terms of tensor components. If an index appears on both sides of an equation, then the equation must hold for each value of the index, and the Einstein summation convention (Einstein, 1916) is that if an index appears twice on one side and not on the other, there is an implicit summation over that index.
For example, in index notation, attention looks like this:

$$\mathrm{Attention} \colon \mathbb{R}^{d_k} \times \mathbb{R}^{n\times d_k} \times \mathbb{R}^{n\times d_v} \to \mathbb{R}^{d_v}$$
$$[\mathrm{Attention}(Q, K, V)]_k = \operatorname*{softmax}_i \left(\frac{Q_j K_{ij}}{\sqrt{d_k}}\right) V_{ik}.$$

Because k appears on both sides, the equation must hold over all values of this index. But because i and j occur twice on only the right-hand side, they are both summed over. We would have to define exactly what the i under the softmax means (i is bound inside the softmax and free outside it), and since softmax doesn't distribute over addition, we would need to modify the summation convention so that the summation over j occurs inside the softmax. Aside from these correctable issues, this notation scales very well to more than two axes and is concise and unambiguous. But it does not solve the main problem we set out to solve, which is that ordered axes force the author and reader to remember the purpose of each axis. The indices do act as symbolic names for axes (indeed, in *abstract* index notation (Penrose & Rindler, 1984), they really are symbols, not variables), but they are temporary names; they could be totally different in the next equation. It would be up to the author to choose to use consistent names, and to do so correctly.

A second issue is that, because it depends on repetition of indices to work, index notation can be more verbose than our notation, particularly for reductions and contractions:

$$C = \max_i A_i \quad \text{versus} \quad C = \max_{\mathsf{ax}} A \qquad\qquad C = A_i B_i \quad \text{versus} \quad C = A \odot_{\mathsf{ax}} B.$$

Finally, index notation requires us to write out all indices explicitly. So if we wanted to lift attention to minibatches ($b \in [B]$), multiple heads ($h \in [H]$) and multiple query tokens ($i' \in [n']$), we would write:

$$\mathrm{Attention} \colon \mathbb{R}^{B\times H\times n'\times d_k} \times \mathbb{R}^{B\times H\times n\times d_k} \times \mathbb{R}^{B\times H\times n\times d_v} \to \mathbb{R}^{B\times H\times n'\times d_v}$$
$$[\mathrm{Attention}(Q, K, V)]_{bhi'k} = \operatorname*{softmax}_i \left(\frac{Q_{bhi'j} K_{bhij}}{\sqrt{d_k}}\right) V_{bhik}.$$

We could adopt a convention that lifts a function on tensors to tensors that have extra axes to the *left*, but such conventions tend to lead to messy reordering and squeezing/unsqueezing of axes. Named axes make such conventions unnecessary.

## 6.2 Graphical Notations

In the graphical notation of Penrose (1971), a node in the graph stands for a tensor, and its incident edges stand for its indices. The edges are ordered from left to right. An edge connecting two nodes denotes contraction. The notation of Alsberg (1997) is similar, except that edges are named, not ordered.

Graphs are commonly used in machine learning for representing probability models (Koller & Friedman, 2009). A node in the graph stands for a random variable, and an edge or hyperedge stands for a dependency between variables. If random variables have finite domains, then a (hyper)edge with r endpoints can be thought of as an r-th order tensor. A graph can then be thought of as a product and contraction. Extensions that allow for a choice between two subgraphs (e.g., Minka & Winn, 2008) can be thought of as addition.

Our assessment of graphical notations like these is that, on the positive side, they have obvious value for visualization, and they at least have the potential to represent indices in a purely unordered way.
On the negative side, these notations seem best suited for representing linear functions; and even for this purpose, there are other practical considerations: drawing pictures requires more effort from the author, and pictures have a less transparent relationship with their implementation in most programming languages.

## 6.3 Relational Algebra

In relational algebra (Codd, 1970), the basic objects are sets of r-tuples, which could be thought of as tensors of order r with Boolean-valued entries. In the original formulation, the members of the tuples, which correspond to axes, were both named and ordered, although later definitions (e.g., Pirotte, 1982) made them unordered. Probabilistic variants of relational algebra also exist (e.g., Dey & Sarkar, 1996; Fuhr & Rölleke, 1997), whose relations would correspond to tensors of probabilities. While relational algebra and tensor notations are designed for totally different purposes, the notation of relational algebra generally has a similar flavor to ours (for example, our contraction operator is similar to the ⋈ operator, and our renaming operator is the same as the ρ operator).

## 6.4 Programming Languages

One response to the notation presented here, as well as the alternative notations mentioned in this section, is that research papers in machine learning should simply present models as code rather than equations. But we argue that a model's mathematical specification should abstract away from details of its implementation. Conceptually, it is important to have a distinct specification to define what makes an implementation (both the original implementation and any reimplementations) correct or incorrect. If the implementation is its own specification, it cannot be correct or incorrect; it will be "not even wrong." Practically, abstracting away from implementation is important because we do not want the interpretation of research papers to be subject to differences across programming languages and libraries, or versions thereof. For example, PyTorch's Dropout2d on order-3 tensors has one behavior in versions 1.11 and 1.13, but another behavior in 1.10, 1.12, and future versions. It would be problematic for correct understanding of a paper to depend on such differences.

## 7 Conclusions

Named tensor notation is a system of formal notation for representing operations between tensors in an unambiguous way while remaining intuitive for practitioners. The system is motivated by challenges that arise from taking notation designed for applied linear algebra and using it to represent neural networks, as demonstrated through examples of canonical deep-learning components such as attention and layer normalization. However, named tensors are not limited to specifying neural networks. We have also explained how to integrate our notation with Magnus & Neudecker (1985)'s method of differentials for matrix calculus. While other conventions, such as index notation, have some usage in the machine learning community, they either lack the conciseness of named tensors or are not well-suited to non-linear operations. For these reasons, we encourage members of the machine learning community to try out named tensor notation for teaching, research, and software documentation.

## Acknowledgements

We would like to thank Ekin Akyürek, Justin Bayer, Tongfei Chen, Chu-Cheng Lin, Colin McDonald, Adam Poliak, Matt Post, Chung-chieh Shan, Nishant Sinha, and Yee Whye Teh for their input to this document (or the ideas in it).
We also thank the anonymous TMLR reviewers for their feedback, which substantially improved the quality of the paper, especially Section 5. This material is based upon work supported by the National Science Foundation under Grants No. CCF-2019291 and DMS-2134157, as well as a Sloan Fellowship, Simons Investigator Fellowship, DARPA grant W911NF2010021, and DOE grant DE-SC0022199. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.

## References

Bjørn K. Alsberg. A diagram notation for n-mode array equations. *Journal of Chemometrics*, 11:251–266, 1997. doi: 10.1002/(SICI)1099-128X(199705)11:3<251::AID-CEM472>3.0.CO;2-Q.

Matt Bauman. AxisArrays, 2018. URL https://github.com/JuliaArrays/AxisArrays.jl. Open-source software.

Tongfei Chen. Typesafe abstractions for tensor operations. In *Proc. 8th ACM SIGPLAN International Symposium on Scala*, pp. 45–50, 2017. doi: 10.1145/3136000.3136001.

E. F. Codd. A relational model of data for large shared data banks. *Communications of the ACM*, 13(6):377–387, June 1970. doi: 10.1145/362384.362685.

Zachary DeVito. Named tensors using first-class dimensions in PyTorch, 2023. URL https://github.com/facebookresearch/torchdim. Open-source software.

Debabrata Dey and Sumit Sarkar. A probabilistic relational model and algebra. *ACM Transactions on Database Systems*, 21(3):339–369, September 1996. doi: 10.1145/232753.232796.

A. Einstein. Die Grundlage der allgemeinen Relativitätstheorie. *Annalen der Physik*, 354(7):769–880, 1916. doi: 10.1002/andp.19163540702.

Norbert Fuhr and Thomas Rölleke. A probabilistic relational algebra for the integration of information retrieval and database systems. *ACM Transactions on Information Systems*, 15(1):32–66, January 1997. doi: 10.1145/239041.239045.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep Learning*. MIT Press, 2016.

Richard A. Harshman. An index formalism that generalizes the capabilities of matrix notation and algebra to n-way arrays. *Journal of Chemometrics*, 15:689–714, 2001. doi: 10.1002/cem.665.

Stephan Hoyer and Joe Hamman. xarray: N-D labeled arrays and datasets in Python. *Journal of Open Research Software*, 5(1):10, 2017. doi: 10.5334/jors.148.

JAX authors. Named axes and easy-to-revise parallelism, 2021. URL https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html.

Daphne Koller and Nir Friedman. *Probabilistic Graphical Models: Principles and Techniques*. MIT Press, 2009.

Soeren Laue, Matthias Mitterreiter, and Joachim Giesen. Computing higher order derivatives of matrix and tensor expressions. In *Proc. NeurIPS*, pp. 2750–2759, 2018. URL https://proceedings.neurips.cc/paper/2018/file/0a1bf96b7165e962e90cb14648c9462d-Paper.pdf.

Jan R. Magnus and H. Neudecker. Matrix differential calculus with applications to simple, Hadamard, and Kronecker products. *Journal of Mathematical Psychology*, 29(4):474–492, 1985. doi: 10.1016/0022-2496(85)90006-9.

Tom Minka and John Winn. Gates. In *Proc. NeurIPS*, pp. 1073–1080, 2008. URL https://papers.nips.cc/paper/3379-gates.

Adam Paszke, Daniel D. Johnson, David Duvenaud, Dimitrios Vytiniotis, Alexey Radul, Matthew J. Johnson, Jonathan Ragan-Kelley, and Dougal Maclaurin. Getting to the point: Index sets and parallelism-preserving autodiff for pointful array programming. *Proc. ACM on Programming Languages*, 5(ICFP), August 2021. doi: 10.1145/3473593.

R. Penrose and W. Rindler. *Spinors and space-time*, volume 1.
Cambridge University Press, 1984.

Roger Penrose. Applications of negative dimensional tensors. In D. J. A. Welsh (ed.), *Combinatorial Mathematics and its Applications*, pp. 221–244. Academic Press, 1971.

Alain Pirotte. A precise definition of basic relational notations and of the relational algebra. *ACM SIGMOD Record*, 13(1):30–45, 1982. doi: 10.1145/984514.984516.

PyTorch Contributors. PyTorch documentation, 2022. URL https://pytorch.org/docs/1.12/. Version 1.12.

M. M. G. Ricci and T. Levi-Civita. Méthodes de calcul différentiel absolu et leurs applications. *Mathematische Annalen*, 54:125–201, 1900. doi: 10.1007/BF01454201.

Alex Rogozhnikov. Einops: Clear and reliable tensor manipulations with Einstein-like notation. In *Proc. ICLR*, 2022. URL https://openreview.net/pdf?id=oapKSVM2bcj.

Alexander Rush. Named tensors, 2019. URL https://github.com/harvardnlp/NamedTensor. Open-source software.

Nishant Sinha. Tensor shape (annotation) library, 2018. URL https://github.com/ofnote/tsalib. Open-source software.

Torch Contributors. Named tensors, 2019. URL https://pytorch.org/docs/stable/named_tensor.html. PyTorch documentation.

L. R. Tucker. The extension of factor analysis to three-dimensional matrices. In H. Gulliksen and N. Frederiksen (eds.), *Contributions to Mathematical Psychology*, pp. 110–127. Holt, Rinehart and Winston, New York, 1964.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Proc. NeurIPS*, pp. 5998–6008, 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

Yuxin Wu and Kaiming He. Group normalization. In *Proc. ECCV*, 2018. URL https://openaccess.thecvf.com/content_ECCV_2018/papers/Yuxin_Wu_Group_Normalization_ECCV_2018_paper.pdf.

## A Extended Examples

## A.1 Transformer

We define a Transformer used autoregressively as a language model. The input is a sequence of one-hot vectors, from which we compute word embeddings and positional encodings:

$$I \in \{0,1\}^{\mathsf{seq}\times\mathsf{vocab}} \qquad \sum_{\mathsf{vocab}} I = 1$$
$$W = \bigl(E \odot_{\mathsf{vocab}} I\bigr)\sqrt{|\mathsf{layer}|} \qquad E \in \mathbb{R}^{\mathsf{vocab}\times\mathsf{layer}}$$
$$P \in \mathbb{R}^{\mathsf{seq}\times\mathsf{layer}} \qquad P_{\mathsf{seq}(p),\mathsf{layer}(i)} = \begin{cases}\sin\bigl((p-1)/10000^{(i-1)/|\mathsf{layer}|}\bigr) & i \text{ odd}\\ \cos\bigl((p-1)/10000^{(i-2)/|\mathsf{layer}|}\bigr) & i \text{ even.}\end{cases}$$

Then we use L layers of self-attention and feedforward neural networks:

$$X^0 = W + P$$
$$T^1 = \mathrm{LayerNorm}^1\bigl(\mathrm{SelfAtt}^1(X^0) + X^0\bigr)$$
$$X^1 = \mathrm{LayerNorm}^{1'}\bigl(\mathrm{FFN}^1(T^1) + T^1\bigr)$$
$$\vdots$$
$$T^L = \mathrm{LayerNorm}^L\bigl(\mathrm{SelfAtt}^L(X^{L-1}) + X^{L-1}\bigr)$$
$$X^L = \mathrm{LayerNorm}^{L'}\bigl(\mathrm{FFN}^L(T^L) + T^L\bigr)$$
$$O = \operatorname*{softmax}_{\mathsf{vocab}}\bigl(E \odot_{\mathsf{layer}} X^L\bigr)$$

where LayerNorm, SelfAtt and FFN are defined below.

Layer normalization ($l = 1, 1', \ldots, L, L'$):

$$\mathrm{LayerNorm}^l \colon \mathbb{R}^{\mathsf{layer}} \to \mathbb{R}^{\mathsf{layer}}$$
$$\mathrm{LayerNorm}^l(X) = \operatorname*{standardize}_{\mathsf{layer}}(X) \odot \gamma^l + \beta^l \qquad \beta^l, \gamma^l \in \mathbb{R}^{\mathsf{layer}}$$

We defined attention in §4.3; the Transformer uses multi-head self-attention, in which queries, keys, and values are all computed from the same sequence.
Self-attention ($l = 1, \ldots, L$):

$$\mathrm{SelfAtt}^l \colon \mathbb{R}^{\mathsf{seq}\times\mathsf{layer}} \to \mathbb{R}^{\mathsf{seq}\times\mathsf{layer}}$$
$$\mathrm{SelfAtt}^l(X) = Y$$

where

$$|\mathsf{seq}| = |\mathsf{seq}'| \qquad |\mathsf{key}| = |\mathsf{val}| = |\mathsf{layer}|/|\mathsf{heads}|$$
$$Q = W^{l,Q} \odot_{\mathsf{layer}} X_{\mathsf{seq}\to\mathsf{seq}'} \qquad W^{l,Q} \in \mathbb{R}^{\mathsf{heads}\times\mathsf{layer}\times\mathsf{key}}$$
$$K = W^{l,K} \odot_{\mathsf{layer}} X \qquad W^{l,K} \in \mathbb{R}^{\mathsf{heads}\times\mathsf{layer}\times\mathsf{key}}$$
$$V = W^{l,V} \odot_{\mathsf{layer}} X \qquad W^{l,V} \in \mathbb{R}^{\mathsf{heads}\times\mathsf{layer}\times\mathsf{val}}$$
$$M \in \mathbb{R}^{\mathsf{seq}\times\mathsf{seq}'} \qquad M_{\mathsf{seq}(i),\mathsf{seq}'(j)} = \begin{cases}0 & i \leq j\\ -\infty & \text{otherwise}\end{cases}$$
$$Y = W^{l,O} \odot_{\mathsf{heads},\mathsf{val}} \mathrm{Attention}(Q, K, V, M)_{\mathsf{seq}'\to\mathsf{seq}} \qquad W^{l,O} \in \mathbb{R}^{\mathsf{heads}\times\mathsf{val}\times\mathsf{layer}}$$

Feedforward neural networks ($l = 1, \ldots, L$):

$$\mathrm{FFN}^l \colon \mathbb{R}^{\mathsf{layer}} \to \mathbb{R}^{\mathsf{layer}}$$
$$\mathrm{FFN}^l(X) = X^2$$

where

$$X^1 = \mathrm{relu}\bigl(W^{l,1} \odot_{\mathsf{layer}} X + b^{l,1}\bigr) \qquad W^{l,1} \in \mathbb{R}^{\mathsf{hidden}\times\mathsf{layer}},\ b^{l,1} \in \mathbb{R}^{\mathsf{hidden}}$$
$$X^2 = W^{l,2} \odot_{\mathsf{hidden}} X^1 + b^{l,2} \qquad W^{l,2} \in \mathbb{R}^{\mathsf{layer}\times\mathsf{hidden}},\ b^{l,2} \in \mathbb{R}^{\mathsf{layer}}.$$

## A.2 LeNet

$$X^0 \in \mathbb{R}^{\mathsf{batch}\times\mathsf{chans}[c_0]\times\mathsf{height}\times\mathsf{width}}$$
$$T^1 = \mathrm{relu}\bigl(\mathrm{Conv}^1(X^0)\bigr)$$
$$X^1 = \mathrm{MaxPool}^1(T^1)$$
$$T^2 = \mathrm{relu}\bigl(\mathrm{Conv}^2(X^1)\bigr)$$
$$X^2 = \mathrm{MaxPool}^2(T^2)_{(\mathsf{height},\mathsf{width},\mathsf{chans})\to\mathsf{layer}}$$
$$X^3 = \mathrm{relu}\bigl(W^3 \odot_{\mathsf{layer}} X^2 + b^3\bigr) \qquad W^3 \in \mathbb{R}^{\mathsf{hidden}\times\mathsf{layer}},\ b^3 \in \mathbb{R}^{\mathsf{hidden}}$$
$$O = \operatorname*{softmax}_{\mathsf{classes}}\bigl(W^4 \odot_{\mathsf{hidden}} X^3 + b^4\bigr) \qquad W^4 \in \mathbb{R}^{\mathsf{classes}\times\mathsf{hidden}},\ b^4 \in \mathbb{R}^{\mathsf{classes}}$$

As an alternative to the flattening operation in the equation for $X^2$, we could have written

$$X^2 = \mathrm{MaxPool}^2(T^2)$$
$$X^3 = \mathrm{relu}\bigl(W^3 \odot_{\mathsf{height},\mathsf{width},\mathsf{chans}} X^2 + b^3\bigr) \qquad W^3 \in \mathbb{R}^{\mathsf{hidden}\times\mathsf{height}\times\mathsf{width}\times\mathsf{chans}}.$$

The convolution and pooling operations are defined as follows:

$$\mathrm{Conv}^l(X) = \mathrm{Conv2d}(X; W^l, b^l)_{\mathsf{chans}'\to\mathsf{chans}}$$

where

$$W^l \in \mathbb{R}^{\mathsf{chans}'[c_l]\times\mathsf{chans}[c_{l-1}]\times\mathsf{kh}[kh_l]\times\mathsf{kw}[kw_l]} \qquad b^l \in \mathbb{R}^{\mathsf{chans}'[c_l]}$$

and $\mathrm{MaxPool}^l(X) = \mathrm{MaxPool2d}_{ph_l,pw_l}(X)$.

## B Differentiation: Formal Definitions

The following definition and theorem come directly from the paper by Magnus & Neudecker (1985), but generalized to named tensors. For any $X \in \mathbb{R}^{\mathcal{S}}$, we write $\|X\| = \operatorname*{norm}_{\mathcal{S}} X$.

Definition 9. Let $f \colon \mathcal{D} \to \mathbb{R}^{\mathcal{T}}$, where $\mathcal{D} \subseteq \mathbb{R}^{\mathcal{S}}$.
Let A be an interior point of $\mathcal{D}$; that is, for some $r > 0$, $B(A; r) = \{X \mid \|X - A\| < r\} \subseteq \mathcal{D}$. If there is a tensor $D(A) \in \mathbb{R}^{\mathcal{S}^*\cup\mathcal{T}}$ and $R(A, H) \in \mathbb{R}^{\mathcal{T}}$ such that

$$f(A+H) = f(A) + D(A) \odot_{\mathcal{S}^*} H_{\mathcal{S}\to\mathcal{S}^*} + R(A, H)$$

for all $H \in \mathbb{R}^{\mathcal{S}}$ with $\|H\| < r$, and

$$\lim_{H\to 0} \frac{R(A, H)}{\|H\|} = 0,$$

then f is said to be *differentiable* at A; the tensor

$$\partial f(A; H) = D(A) \odot_{\mathcal{S}^*} H_{\mathcal{S}\to\mathcal{S}^*}$$

is then called the *(first) differential* of f at A *with increment* H.

Magnus & Neudecker give their (first) identification theorem twice, once for vector-to-vector functions and once for matrix-to-matrix functions (but omitting vector-to-matrix and matrix-to-vector functions). Here, we only need one version, which works for functions from tensors to tensors of any shape.

Theorem 1. *Let $f \colon \mathcal{D} \to \mathbb{R}^{\mathcal{T}}$, where $\mathcal{D} \subseteq \mathbb{R}^{\mathcal{S}}$, be differentiable at $A \in \mathcal{D}$. Let $D(X) \in \mathbb{R}^{\mathcal{S}^*\cup\mathcal{T}}$. Then, for all H,*

$$\partial f(A; H) = D(X) \odot_{\mathcal{S}^*} H_{\mathcal{S}\to\mathcal{S}^*} \quad \text{iff} \quad \left.\frac{\partial f(X)}{\partial X^*}\right|_{X=A} = D(X).$$

## C LaTeX Macros

Many of the LaTeX macros used in this document are available in the style file namedtensor.sty, available on CTAN at https://ctan.org/pkg/namedtensor. To use it, put \usepackage{namedtensor} in the preamble of your LaTeX source file (after \documentclass{article} but before \begin{document}). We write axis names in sans-serif font. To make this easier, \name{ax} prints an axis name (like this: $\mathsf{ax}$), and \ndef{\ax}{ax} defines a macro \ax that does the same.

- Binary operators
  - Use A \ndot{\ax} B for contraction: $A \odot_{\mathsf{ax}} B$. You can use \\ to stack up several names.
  - In general, you can use \nbin to make a new binary operator with a name under it: A \nbin{\ax}{\star} B gives you $A \star_{\mathsf{ax}} B$.
- Functions
  - Use \nsum{\ax} A for summation: $\sum_{\mathsf{ax}} A$.
  - In general, you can use \nfun to make a function with a name under it: \nfun{\ax}{qux} A gives you $\operatorname*{qux}_{\mathsf{ax}} A$.
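For example, a minimal document using only the macros documented above might look like the following sketch (the axis names here are arbitrary).

```latex
% A minimal sketch of a document using namedtensor.sty, assuming only the
% macros documented above.
\documentclass{article}
\usepackage{namedtensor}
\ndef{\seq}{seq}     % \seq now prints the axis name "seq"
\ndef{\batch}{batch}
\begin{document}
A contraction: $X \ndot{\seq} Y$. \quad
A reduction: $\nsum{\batch} X$. \quad
A named function: $\nfun{\seq}{softmax} X$.
\end{document}
```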
Review 1:
Summary: The paper formally introduces the notion of naming tensor axes/dimensions/... and shows how to adapt common tensor operations to this convention. A number of worked examples on common building blocks (MLPs, RNNs, attention, convolutions, ..., up to differentiation) show that standard machine learning tools can be concisely expressed using this notation.
Strengths and Weaknesses:
* (+) the proposed notation is indeed suitable to resolve ambiguities in defining common mechanisms.
* (+) the breadth of worked examples shows that the proposed notation is suitable to handle a wide array of use cases.
* (-) there is little novelty here, given that it summarises a trend of (referenced) similar proposals.
* (-) the paper is focused only on notation. However, one reason for choosing to represent operations as (somewhat ambiguous and hence error-prone) matrix multiplication is that this closely corresponds to factors influencing hardware performance. Keeping these factors in mind is crucial to enable scale-out and hence important in communicating results.
Requested Changes:
* page 1: I found $\mathbb{R}^{\textsf{key}}$ (etc.) surprising, as it is not referring to the dimension of individual keys etc. This aspect becomes clearer in the following, but it would be good to call this out earlier.
* page 2: https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html is another example of an implementation of the proposed notation (which actually discusses HW implementation issues in great detail)
* page 4, def 6/7: I'd recommend renaming $s$ to $s'$ here, as it is an element of $\textnormal{rec}\mathcal{S}'$ (and not the also existent $\textnormal{rec}\mathcal{S}$)
* page 5, Sect. 3.2: please recall for the reader that $A$ was defined in Sect. 2.1. Given that $A$ is also used as a variable in some definitions, it may be easier for the reader if you picked another name for this concrete matrix.
* page 7, Sect. 4.1: "(analogous to what TensorFlow and PyTorch modules)" misses its verb
Broader Impact Concerns: I have no concerns regarding the broader impact of this paper.

==================================================

Review 2:
Summary: This paper presents a notation for representing functions between multidimensional arrays (tensors), for use in academic papers and presentations. The specific contributions are:
- The authors define a notation for representing tensors in terms of semantically named axes, and operations that act on those tensors, motivated by drawbacks of existing notation used in papers.
- They specify the interpretation of that notation, in particular by interpreting a tensor as a mapping from a set of named indices (a "record") to a field element, and describing the corresponding mappings for each of their operations.
- They show how the notation can be used to write a variety of operations common in neural network architectures.
- They explain how to use this notation to compute Jacobian tensors, and present a set of rules analogous to the scalar differentiation rules.
- They give examples of how to compute Jacobian tensors for a few operations of interest.
Strengths and Weaknesses:
Strengths:
- The proposed notation is quite clean and minimal while also being unambiguous, and I think adopting this notation would lead to more readable ML papers.
- The notation also appears to be very general and expressive, and easy to extend if needed.
- The examples provided by the authors both demonstrate the value of the notation and serve as a reference for future uses of the notation. - Overall I think this notation will be of interest to members of the TMLR community. Weaknesses: - I found some parts of the paper confusing, as discussed in the requested changes section below. - The definition of differential and corresponding differentiation rules seemed imprecise and not fully specified, which might lead to difficulty using this notation to express derivative computations. - It's not currently obvious from the paper how to actually typeset this notation (e.g. using LaTeX) Requested Changes: ## Important requested changes There were some things I found confusing about the current version of the paper, specifically regarding the computation of derivatives. I would like to see these addressed before recommending acceptance. ### Confusing/incomplete definition of differentials (Section 5.2) I found Section 5.2 to be very unclear, specifically regarding the definition of a differential. Upon first read, I was very confused regarding where $\Delta X$ is defined, what type of object $\partial f(X; \Delta X)$ is, and why it is OK to omit $\Delta X$ in the calculations; I initially thought the differential must be a function taking $\Delta X$ as an argument, but realized this didn't make any sense once I got to the list of rules. After consulting Magnus & Neudecker (1985), I think that my confusion comes from two things: - a missing part of the definition of differential. Magnus & Neudecker define "the differential of f(X) at X **with increment $\Delta X$**", where $\Delta X$ is also explicitly named as part of the term, and different increments produce different differentials. - the current wording that "the differential ... is a tensor ... that linearly approximates f", which makes it sound like the tensor itself is a mapping somehow (since a fixed tensor can't linearly approximate a function). If my understanding is correct, I would recommend rewording this section to be more explicit about the role of $\Delta X$ in the differential, and explicitly naming it as part of the definition like Magnus & Neudecker do. Perhaps something like > Intuitively, the differential of $f(X)$ at $X$ with increment $\Delta X$, written $\partial f(X; \Delta X)$, is a tensor with the same shape as $f(X)$, obtained by evaluating a linear approximation of $f$ around $X$ at the position $X + \Delta X$. or even > Intuitively, the differential of $f(X)$ at $X$ with increment $\Delta X$, written $\partial f(X; \Delta X)$, is the first-order term of the Taylor series for $f(X)$ around $X$, evaluated at $X + \Delta X$. It might be even simpler to just write out a formal definition of the differential instead of omitting it; it looks like the definition would only be a few lines long? Relatedly, I think the sentence "In practice, we can write $\partial f(X; \Delta X)$ simply as $\partial f$" is not quite correct, and should instead be "as $\partial f(X)$", since subsequent uses of $\partial$ always apply to terms, not functions. (I also think "never appears in our calculations" isn't the best justification, and it would be better to say that it's omitted because it can be inferred from context.) 
One more minor point on the same topic: it seems that this section is a recommendation / example of how to use the notation to compute derivatives, so I'd suggest wording it as "we can use the method of differentials" or "we propose/recommend using the method of differentials" instead of just saying "we use the method of differentials". At first I thought this section might be some sort of implementation detail of some autodiff system used by the authors, which was confusing.

### Definition of chain rule (Section 5.2)

The current definition of the chain rule is only well defined when $\mathcal{U}$ and $\mathcal{V}$ are orthogonal, but this is not explicitly stated. Moreover, I think it's important to discuss how to do the chain rule when shapes are NOT orthogonal, since this will occur often in practice. My guess is that this will involve a rename of any conflicting axes, something like

$$ \partial f(U) = \left[ \frac{\partial f(U)_{\mathcal{U} \to \mathcal{U}'}}{\partial U} \odot_\mathcal{U} \partial U \right]_{\mathcal{U}' \to \mathcal{U}} $$

perhaps. (Alternatively the rename could be applied to the denominator, I'm not sure which is simpler.)

### Additional details in example derivations (Section 5.3)

I found the examples in 5.3 hard to verify. While each of the steps seemed plausible, it was difficult to determine which of the rules were being applied, and whether they were being applied correctly. As a derivation of the equations themselves, this seems OK, but as an example of how to use the notation and rules, I think it is important to actually show why the steps follow from the stated rules.

One thing I noticed was the combination of the chain rule with the quotient rule in a single step; I think it would be better to split those steps up. I also noticed that the transformation from $\partial \exp X$ to $\exp X \odot \partial X$ (hidden in the above step) involves an elementwise variant of the chain rule that is never explicitly stated. The chain rule written in section 5.2 would suggest that the differential would be $\partial \exp X = \exp X \odot_{ax} \partial X$, but this doesn't work, because it reduces away `ax` incorrectly (since, as a vector-to-vector function, $\exp$ doesn't satisfy the unstated orthogonality requirement). It seems important to either explicitly discuss the chain rule for elementwise functions, or show how it follows from the ordinary chain rule after applying all of the necessary renaming.

Additionally, at multiple places throughout the examples, the canonical form of the differential is not actually shown, despite how much 5.2 stresses the importance of the canonical form. I suggest explicitly writing out the canonical form (with a contraction instead of a sum) at all of the places where Theorem 1 is used in sections 5.3 and 5.4.

---

## Other suggestions

I think the paper could be improved by adding some more discussion of these areas.

### Introduction of named tensors (Section 2.0)

I found the introduction of named tensors a bit disorienting. I think part of my confusion comes from the section being written from a perspective of named axes already existing, despite the notation not being defined yet. For instance, it currently reads "If it has ax1[n1], . . . , axr[nr] (with ax1, . . . , axr being distinct names), it can be thought of as a map that takes as input a record {ax1[i1],...,axr[ir]}, with i1 ∈ [n1],...,ir ∈ [nr], and outputs a field element."
However, as a reader, I don't yet know what `ax1[n1]` means, or what it means for a named tensor to have it, or what a record is, and only in the following sections do I learn that `ax1` is a variable taking the place of a name, and `n1` is supposed to denote the size of that axis. I think it would be clearer to briefly define each of the concepts (informally) before using them, and only introduce one at a time. For instance, perhaps something like

> In contrast, we propose named tensor notation, in which each axis has a name that describes it and ensures there is no confusion between axes. We use `ax[n]` to refer to an axis with name `ax` and size `n`, and `ax(i)` to the `i`th element along that axis (called a named index). We can then define a named tensor as a map that takes a *record* and maps it to a field element, where a record is ...

Another confusing aspect was that the record notation is inconsistent with the later definition, it should be $\{ax_1(i_1),...,ax_r(i_r)\}$. (I also mention this in the "Nits" section below.)

### Elementwise operations as a special case of ordinary broadcasting (Section 3.2)

It seems to me that the rules for broadcasting elementwise operations follow directly from the rules for arbitrary functions, if you view elementwise operations as operations on tensors with the empty shape (as mentioned at the end of section 2.2). This seems pretty cool, and might be worth an explicit mention.

### Discussion of tensor-to-tensor operations with multiple (distinct) axes

All of the examples in section 3 appear to treat input axes identically, e.g. summing over two axes at once is the same as flattening a sum over all elements of a reshaped axis. However, later examples use functions (such as `unroll`) which take multiple axes as arguments and apply different operations along each. It might be worthwhile to add a brief mention of this (perhaps in section 3.4), since it seems like a strength of the notation that such functions are easy to express.

### Discussion of gather/scatter-type operations

Does this notation support "scatter" / "gather" operations, where one tensor is used to index into another? How would such operations be represented? It would be interesting to add a discussion of those, since those are fairly common when implementing ML algorithms.

### Motivation for differentiation section

Section 5 ("Differentiation") seems a bit out-of-place currently, as it jumps directly into definitions and rules without explaining why. Is this section supposed to be another example of why the notation is useful, by showing that it aids computations? Or are the derivative rules intended to be part of the notation? It would be useful to start this section with some sort of statement summarizing what this section does and how it relates to the remainder of the paper.

### Example of applying chain rule with non-orthogonal axes

Building on my earlier point about the chain rule with non-orthogonal names, I think it would be useful to see an example of a derivation of a differential that involves a non-trivial chain rule application for a function that preserves some axes in its output. For instance, a computation of something like $\partial(\mathrm{softmax}_{seq}(Q \odot_{key} V))$. Right now the rules don't clearly explain what to do in this situation, as far as I can tell.
### Discussion of other alternative notations

Two other notation options which might be worth discussing:
- A version of index notation that doesn't include the Einstein summation convention, and requires all summation to be written out (a bit less ambiguous than including Einstein summation, but more verbose)
- Just writing out everything in code, e.g. in numpy/pytorch/jax

Not that these are particularly compelling alternatives, but I've seen them used in practice.

### LaTeX definitions for notation

Since this paper is recommending a new type of notation for paper-writing specifically, it would be convenient to include a reference for how to typeset this notation in LaTeX. This might help standardize some of the details (e.g. the centering of subscripts below $\odot$ and the sans-serif notation for names).

---

## Nits / typos / minor suggestions

I noticed a few minor things while reading through the paper:
- Named tensor software: There have been some newer software implementations of named tensors that aren't mentioned here, including [einops](https://openreview.net/pdf?id=oapKSVM2bcj), [torchdim](https://github.com/facebookresearch/torchdim) and [jax.xmap](https://jax.readthedocs.io/en/latest/notebooks/xmap_tutorial.html). Also, the citation for Dex could be updated to refer to the [conference version](https://dl.acm.org/doi/abs/10.1145/3473593) instead of the workshop paper.
- `Section 2.0`: Inconsistent notation of introductory example. "a map that takes as input a record $\{ax_1[i_1],...,ax_r[i_r]\}$" is inconsistent with the notation defined in the next section; it should be $\{ax_1(i_1),...,ax_r(i_r)\}$.
- I found this pretty confusing at first, since it seemed like the $ax_1[i_1]$ notation was being overloaded.
- `Definition 5 (Named tensor operation)`: It seems that your definition of operation is equivalent to a function, and throughout the paper you use "function" and "operation" interchangeably. I wonder if you even need to formally define "operation", or whether "function" would be sufficient?
- `Definition 7 (lifting, binary)`: Slightly odd grammar; perhaps "and **for which** S' ∪ T' is orthogonal to U" would be better
- `Section 3.2`: It would be helpful to restate the definition of $A$ before applying it, or at least refer back to where A was defined (on page 2).
- `Section 3.3`:
  - If "norm" is intended as a convention other authors should use, it might be worth allowing other norms than the L2 norm. (Or, if this is just an example for how to define a reduction, it might be worth stating that explicitly.)
  - Could you include an example of contracting multiple axes at once?
- `Section 3.4`: I was surprised that `argmax` was defined as a one-hot embedding, that seems unusual. Is there not a way to encode the ordinary notion of `argmax` as an integer-valued tensor?
- `Section 4.1`: It's not clear what it means for `FullConn` to "encapsulate" the parameters if the parameters are arguments to `FullConn`. It seems clearer not to have W and b be inputs to `FullConn` if the intent is to have `FullConn` implicitly carry W and b.
- `Section 4.4`: I expected the bias term for convolutions to have a `chans'` axis, is it intentional that it does not?
- `Section 5.4`: Missing word(s): "But although f' can be to X using the usual lifting rules". f' can be what to X?
- `Appendix A.1`: LayerNorm is defined in terms of XNorm, but XNorm is not defined. Perhaps this was supposed to refer to "standardize" from the main paper?
Broader Impact Concerns: I do not have any broader impact concerns.

==================================================

Review 3:
Summary: This work offers a named tensor notation in which each dimension/axis of a particular tensor is given a name and the names are used to create unambiguous and easily understood tensor operations. This syntax can be used to dramatically improve the readability of machine learning models, as demonstrated by multiple instances. Detailed comparisons between this notation and other existing tensor notations, including Einstein summation notation and graphical tensor diagrams, are also given.
Strengths and Weaknesses: Overall, this is a well-motivated and clearly presented study with useful results for researchers in machine learning. Einstein summation notation is the prevalent tensor notation (also called index notation in the paper). This notation has the advantage of clearly expressing all multilinear operations, but it is insufficient for machine learning algorithms because it cannot handle other operations (convolution, pooling, etc.). The named tensor notation, in my opinion, generalizes the index notation and would help scientific writing in the machine learning field.
Requested Changes: Below are my suggestions:
1. The tensor contraction examples shown in the paper all involve 2 tensors/matrices/vectors. I think it would be helpful to give an example showing how to express general tensor contractions with more than 2 tensors with the named notation. For example, the contraction in the einsum notation $d_{i,k,l} = \sum_{j}a_{i,j,k}b_{i,j}c_{i,j,l}$ can be expressed as $d = \sum_{j}(a \odot b \odot c)$ in the named notation.
2. The definition of derivatives in the nonorthogonal case is a bit confusing, since it's not clear to me how to use it in the chain rule. For the example in 5.3, the authors show how to get $\frac{\partial L}{\partial X}$ based on the differential. Are there easy ways one can derive it using the chain rule $\frac{\partial L}{\partial X}=\frac{\partial L}{\partial Y}\frac{\partial Y}{\partial X}$? It's not clear to me how to use the chain rule directly here since index renaming is needed for $\frac{\partial Y}{\partial X}$. I believe more discussion here would be helpful.
3. Minor: at the beginning of page 13: $\frac{\partial L}{\partial X}$ instead of $\frac{\partial f(Y)}{\partial X}$.
Broader Impact Concerns: n/a

==================================================

Metareview:
Recommendation: Accept as is
Comment: Reviewers agree that the proposed notation is well-designed and expressive. The reviewers appreciated the author responsiveness in the review process and feel that after recent updates, the paper is correct and clearly relevant to the TMLR community. Improved practices around communicating tensor notation would be valuable for the community, so we hope to see widespread adoption.

==================================================
# Scalable Deep Compressive Sensing

Zhonghao Zhang *zhonghaozhang@yeah.net*
School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

Yipeng Liu∗ *yipengliu@uestc.edu.cn*
School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

Xingyu Cao *shutong.cxy@alibaba-inc.com*
Alibaba DAMO Academy

Fei Wen *wenfei@sjtu.edu.cn*
Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China

Ce Zhu *eczhu@uestc.edu.cn*
School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China

∗Corresponding author

Reviewed on OpenReview: *https://openreview.net/forum?id=10JdgrzNOk*

## Abstract

Deep learning has been used in image compressive sensing (CS) for enhanced reconstruction performance. However, most existing deep learning methods train different models for different subsampling ratios, which brings an additional hardware burden. In this paper, we develop a general framework named scalable deep compressive sensing (SDCS) for the scalable sampling and reconstruction (SSR) of all existing end-to-end-trained models. In the proposed framework, images are measured and initialized linearly. Two sampling matrix masks are introduced to flexibly control the subsampling ratios used in sampling and reconstruction, respectively. To obtain a reconstruction model that handles flexible subsampling ratios, a training strategy dubbed scalable training is developed, in which the model is trained jointly with the sampling matrix and the initialization matrix at various subsampling ratios by applying different sampling matrix masks. Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and that SDCS outperforms other SSR methods.

## 1 Introduction

Compressive sensing (CS) is a technique that simultaneously samples and compresses signals, allowing a signal to be sampled and reconstructed at a rate that can be much lower than the Nyquist rate. The sampling process of CS can be expressed as $\mathbf{y} = \mathbf{A}\mathbf{x}$, where $\mathbf{x} \in \mathbb{R}^N$ is the original signal, $\mathbf{y} \in \mathbb{R}^M$ denotes the measurement, and $\mathbf{A} \in \mathbb{R}^{M \times N}$ is the sampling matrix with $M < N$; $M/N$ is the CS ratio. The recovery of the signal from $\mathbf{y}$ is under-determined, and it is usually carried out by solving an optimization problem of the form:

$$\min_{\mathbf{x}} R(\mathbf{x}), \quad \text{s.t.} \quad \mathbf{y} = \mathbf{A}\mathbf{x}, \qquad (1)$$

where $R(\mathbf{x})$ is the regularization term. In this paper, we mainly focus on visual image CS (Lohit et al., 2018a), which has been applied in single-pixel imaging (SPI) (Lohit et al., 2018a; Duarte et al., 2008) and wireless broadcast (Yin et al., 2016; Li et al., 2013; Guo et al., 2020). Since block-by-block sampling and reconstruction (Dong et al., 2014; Dinh & Jeon, 2017; Lohit et al., 2018a; Zhang & Ghanem, 2018; Zhang et al., 2021) brings less burden to the hardware, we mainly focus on the block-based visual image CS problem.

![1_image_0.png](1_image_0.png)

Figure 1: Scalable sampling and scalable initialization of an image block.

To solve problem (1), model-based methods (Dong et al., 2014; Li et al., 2020) introduce various handcrafted regularizers (Elad, 2010; Liu et al., 2019) and apply non-linear iterative algorithms (Beck & Teboulle, 2009; Donoho et al., 2009) to recover images.
These methods usually have theoretical guarantees and work well with sampling matrices at different CS ratios. However, their performance needs to be further improved.

In recent years, deep learning has achieved great success in computer vision (Rick Chang et al., 2017; Dong et al., 2018; Yan et al., 2020a;c;b; Tu et al., 2022; Hu et al., 2021). Among them, models for visual image CS can be cast into two categories: traditional deep learning models and deep unfolding models. Traditional deep learning models (Mousavi et al., 2015; Lohit et al., 2018a; Shi et al., 2019a; 2020) are usually stacked non-linear computational layers. These models map the measurement to the output without considering the prior information of images. Although they can reconstruct high-quality images at a high speed, they offer no good interpretability or theoretical guarantee (Huang et al., 2018). Deep unfolding models denote a series of models constructed by mapping iterative algorithms with unfixed numbers of steps onto deep neural networks with fixed numbers of steps (Gregor & LeCun, 2010; Zhang & Ghanem, 2018; Metzler et al., 2017; Zhang et al., 2021; Dong et al., 2018; Sun et al., 2016; Ma et al., 2019). By combining the interpretability of model-based methods with the trainable characteristics of traditional deep learning models, they strike a good balance between reconstruction performance and interpretability.

Usually, the above two kinds of deep-learning-based models are trained end-to-end using well-known backpropagation algorithms (Kingma & Ba, 2015). However, a common shortcoming of most existing end-to-end-trained models is that different models have to be trained for different CS ratios. In some applications, sampling and reconstructing images at different CS ratios may be required (Yin et al., 2016; Li et al., 2013), yet storing more than one model with the same structure brings additional burdens to the hardware. Thus, sampling and reconstructing images at different CS ratios with only one model is needed. At present, there exist a few methods (Xu et al., 2020; Su & Lian, 2020; Shi et al., 2019b; Zhang et al., 2020; Lohit et al., 2018b; Li et al., 2020) which reconstruct images at different CS ratios using only one model, and they can be roughly cast into two categories. The first kind (Xu et al., 2020; Su & Lian, 2020) trains a single model to adapt to a set of sampling matrices with different CS ratios. The second kind (Shi et al., 2019b; Lohit et al., 2018b) applies only one sampling matrix and combines its rows to achieve sampling and reconstruction at different CS ratios; we call such a strategy scalable sampling and reconstruction (SSR) in this paper. This paper focuses on SSR methods, because they are more practical in existing applications such as SPI (Lohit et al., 2018a), wireless broadcasting (Yin et al., 2016), and MRI (Sun et al., 2016). In detail, in image CS for wireless broadcast (Yin et al., 2016; Li et al., 2013), images are reconstructed with different quality according to different channel conditions using one sampling matrix at the receiving end. And in SPI (Lohit et al., 2018a), different CS ratios can be applied for different image quality and sampling time by combining rows of the sampling matrix. However, existing SSR methods either cannot be applied universally (Shi et al., 2019b) or require a more appropriate sampling matrix (Zhang et al., 2020; Lohit et al., 2018b). Therefore, a general and more effective SSR method is expected.
![2_image_0.png](2_image_0.png)

Figure 2: Forward-propagation of the scalable training.

In this paper, we propose a general framework dubbed scalable deep compressive sensing (SDCS) to achieve sampling and reconstruction of images at all CS ratios in a certain range. In detail, two binary sampling matrix masks are developed to activate rows of the sampling matrix and the initialization matrix to control the CS ratios for SSR. And we develop a novel training strategy named scalable training, which integrates the multi-ratio information into the training stage by randomly generating sampling matrix masks for different CS ratios. We emphasize that SDCS brings to the model the ability of SSR while maintaining the characteristics of its own structure. Furthermore, experimental results show that a model with SDCS can obtain a more effective combination of sampling matrix and model than existing SSR methods. Our paper has three contributions:

- We propose a framework named SDCS that jointly trains the sampling matrix and the model to achieve sampling and reconstruction at all CS ratios in a certain range.
- With SDCS, a deep learning model can achieve SSR without changing its original structure, while maintaining good performance.
- Technically, SDCS can be used for all end-to-end-trained deep learning models.

## 2 SDCS

In this section, we introduce the proposed framework SDCS, which is simple but powerful. SDCS is composed of four parts: scalable sampling, scalable initialization, scalable reconstruction, and scalable training.

## 2.1 Scalable Sampling

Assume that the largest CS ratio is $R_M$; then the sampling matrix can be expressed as $\mathbf{A} \in \mathbb{R}^{\lceil R_M N \rceil \times N}$. It can be noticed that the CS ratio is determined by the number of rows of $\mathbf{A}$. Therefore, to achieve scalable sampling, we design a sampling matrix mask $\mathbf{M}_S \in \mathbb{R}^{\lceil R_M N \rceil \times N}$ to control the activities of the rows of $\mathbf{A}$. $\mathbf{M}_S$ is a zero-one matrix which satisfies $\mathbf{M}_S(1 : \lceil R_S N \rceil, :) = 1$ and $\mathbf{M}_S(\lceil R_S N \rceil + 1 : \lceil R_M N \rceil, :) = 0$, where $R_S$ denotes the CS ratio for sampling. In such a case, we can generate a new sampling matrix as $\mathbf{A}_S = \mathbf{M}_S \odot \mathbf{A}$, where $\odot$ denotes the element-wise product. Since the $(\lceil R_S N \rceil + 1)$-th to the $\lceil R_M N \rceil$-th rows of $\mathbf{A}_S$ are all filled with 0, we say that the first to the $\lceil R_S N \rceil$-th rows of $\mathbf{A}$ are activated. In detail, if the original image block is $\bar{\mathbf{X}} \in \mathbb{R}^{H \times L}$ with $N = HL$, then the scalable sampling at the CS ratio of $R_S$ can be expressed as:

$$\mathbf{y} = \mathbf{A}_S \operatorname{vec}(\bar{\mathbf{X}}), \qquad (2)$$

where $\operatorname{vec}(\cdot)$ is an operator which transforms a matrix into a vector and $\mathbf{y} \in \mathbb{R}^{\lceil R_M N \rceil}$ is the measurement. It can be noticed that $\mathbf{y}(\lceil R_S N \rceil + 1 : \lceil R_M N \rceil) = 0$ and $\mathbf{y}(1 : \lceil R_S N \rceil)$ is the valid measurement for reconstruction. In this way, we can achieve a unified learning mode with different compression rates.

## 2.2 Scalable Initialization

For deep learning methods, the initialized image is important for the subsequent reconstruction. In SDCS, we use a linear operation to initialize the image block. Following some deep-learning-based models (Shi et al., 2019a; Zhang & Ghanem, 2018; Zhang et al., 2021), an initialization matrix $\mathbf{B} \in \mathbb{R}^{N \times \lceil R_M N \rceil}$ is developed. Similar to (2), a sampling matrix mask $\mathbf{M}_R \in \mathbb{R}^{\lceil R_M N \rceil \times N}$ is proposed to control the activities of the columns of $\mathbf{B}$, where $\mathbf{M}_R(1 : \lceil R_R N \rceil, :) = 1$ and $\mathbf{M}_R(\lceil R_R N \rceil + 1 : \lceil R_M N \rceil, :) = 0$. $R_R$ denotes the CS ratio for initialization and reconstruction, which satisfies $R_R \leq R_S$. In such a case, we can activate the first to the $\lceil R_R N \rceil$-th columns of $\mathbf{B}$ to generate a new initialization matrix as $\mathbf{B}_R = \mathbf{M}_R^T \odot \mathbf{B}$. The detailed scalable initialization at the CS ratio of $R_R$ can be expressed as:

$$\mathbf{X}^0 = \operatorname{vec}^{-1}(\mathbf{B}_R \mathbf{y}), \qquad (3)$$

where $\mathbf{X}^0 \in \mathbb{R}^{H \times L}$ denotes the initialized image block and $\operatorname{vec}^{-1}(\cdot)$ is the operator which transforms a vector into a matrix. In some cases, $R_R$ can be lower than $R_S$. For example, in wireless broadcasting (Yin et al., 2016), images are transmitted at a high CS ratio and are received at a low CS ratio due to poor channel conditions. Fig. 1 illustrates the scalable sampling and scalable initialization of an image.
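Equations (2) and (3) are straightforward to prototype. The following NumPy sketch illustrates the masked sampling and initialization; the helper names (`mask`, `sample`, `initialize`) are ours for illustration and not from a released implementation:

```python
import numpy as np

H = L = 33
N = H * L            # 1089, as in the block-based setting
R_M = 0.5            # largest CS ratio
M_rows = int(np.ceil(R_M * N))

A = np.random.randn(M_rows, N)   # sampling matrix, Gaussian init
B = np.random.randn(N, M_rows)   # initialization matrix

def mask(R):
    """Zero-one mask activating the first ceil(R*N) rows (M_S or M_R)."""
    m = np.zeros((M_rows, N))
    m[: int(np.ceil(R * N)), :] = 1.0
    return m

def sample(X_bar, R_S):
    """Scalable sampling at CS ratio R_S, eq. (2); trailing entries of y are 0."""
    A_S = mask(R_S) * A                  # element-wise product
    return A_S @ X_bar.reshape(-1)       # y = A_S vec(X)

def initialize(y, R_R):
    """Scalable initialization at CS ratio R_R <= R_S, eq. (3)."""
    B_R = mask(R_R).T * B                # activate the first ceil(R_R*N) columns
    return (B_R @ y).reshape(H, L)       # X^0 = vec^{-1}(B_R y)

X_bar = np.random.rand(H, L)             # an image block
y = sample(X_bar, R_S=0.25)
X0 = initialize(y, R_R=0.10)             # R_R lower than R_S, as in broadcasting
```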
## 2.3 Scalable Reconstruction

In this subsection, we describe the scalable reconstruction of two different kinds of deep learning models: traditional deep learning models and deep unfolding models. The generalized reconstruction process of traditional deep learning models can be expressed as:

$$\hat{\mathbf{X}} = \mathfrak{F}_{\text{tra}}(\mathbf{X}^0; \Theta), \qquad (4)$$

where $\hat{\mathbf{X}} \in \mathbb{R}^{H \times L}$ is the reconstructed image block and $\Theta$ contains the trainable parameters of the model. In SDCS, $\Theta$ is trained with $\mathbf{A}$ and $\mathbf{B}$ to make sure that $\mathfrak{F}_{\text{tra}}(\cdot; \Theta)$ performs well at all CS ratios.

The reconstruction model of a deep unfolding model is usually composed of $K$ reconstruction modules with the same structure. In each module, the sampling matrix also participates in the image reconstruction. In detail, the generalized reconstruction process of a deep unfolding model can be expressed as:

$$\hat{\mathbf{X}} = \mathfrak{F}_{\text{unf}}(\mathbf{X}^0, \mathbf{y}; \mathbf{A}, \Theta) = \mathfrak{F}_{\text{unf}}^K(\mathbf{X}^{K-1}, \mathbf{y}; \mathbf{A}, \Theta^K), \qquad (5)$$

$$\mathbf{X}^k = \mathfrak{F}_{\text{unf}}^k(\mathbf{X}^{k-1}, \mathbf{y}; \mathbf{A}, \Theta^k), \qquad (6)$$

where $\mathfrak{F}_{\text{unf}}(\cdot, \cdot; \mathbf{A}, \Theta)$ is the entire deep unfolding model, of which $\Theta$ contains the trainable parameters, and $\mathfrak{F}_{\text{unf}}^k(\cdot, \cdot; \mathbf{A}, \Theta^k)$ is the $k$-th reconstruction module with trainable parameters $\Theta^k$. The inputs of $\mathfrak{F}_{\text{unf}}(\cdot, \cdot; \mathbf{A}, \Theta)$ and $\mathfrak{F}_{\text{unf}}^k(\cdot, \cdot; \mathbf{A}, \Theta^k)$ usually contain the image block $\mathbf{X}^0$ and the measurement $\mathbf{y}$. Since $\mathbf{A}$ plays an important role in each reconstruction module, the scalable reconstruction of the deep unfolding model is achieved by applying an activated sampling matrix. In detail, the scalable reconstruction of the $k$-th reconstruction module can be expressed as:

$$\mathbf{X}^k = \mathfrak{F}_{\text{unf}}^k(\mathbf{X}^{k-1}, \mathbf{y}; \mathbf{M}_R \odot \mathbf{A}, \Theta^k). \qquad (7)$$

Similar to the traditional deep learning models, $\Theta = \{\Theta^1, \Theta^2, \cdots, \Theta^K\}$ is trained with $\mathbf{A}$ and $\mathbf{B}$. Since the sampling matrix $\mathbf{A}$ usually appears in both the image sampling and the reconstruction of deep unfolding models, deep unfolding models have great potential to achieve SSR.
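To make equation (7) concrete, the PyTorch sketch below shows a generic $K$-module unfolding reconstruction in which every module receives the activated sampling matrix. The `GradStep` stage is only a minimal stand-in (a gradient step on the data term plus a small learned refinement), not the module structure of any specific model discussed here:

```python
import torch
import torch.nn as nn

class GradStep(nn.Module):
    """Minimal stand-in for F^k_unf: a data-consistency gradient step
    followed by a small learned refinement."""
    def __init__(self, H=33, L=33):
        super().__init__()
        self.eta = nn.Parameter(torch.tensor(0.1))      # step size
        self.refine = nn.Conv2d(1, 1, 3, padding=1)     # tiny denoiser
        self.H, self.L = H, L

    def forward(self, X, y, A_R):
        v = X.reshape(-1)
        v = v - self.eta * (A_R.t() @ (A_R @ v - y))    # gradient on ||y - A_R vec(X)||^2
        X = v.reshape(1, 1, self.H, self.L)
        return (X + self.refine(X)).squeeze()

class UnfoldingRecon(nn.Module):
    """K reconstruction modules; each sees the masked matrix M_R * A (eq. (7))."""
    def __init__(self, K=6):
        super().__init__()
        self.stages = nn.ModuleList([GradStep() for _ in range(K)])

    def forward(self, X0, y, A, M_R):
        A_R = M_R * A                    # activated sampling matrix
        X = X0
        for stage in self.stages:        # X^k = F^k(X^{k-1}, y; A_R, Theta^k)
            X = stage(X, y, A_R)
        return X
```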
Algorithm 1 Scalable training of one epoch.

Input: training set T, batch size B, loss function L, max CS ratio RM, sampling matrix A, initialization matrix B, reconstruction model Ftra(·; Θ) or Funf(·, ·; A, Θ).
Output: trained parameters.

1: T′ ← ∅
2: repeat
3: Select S = {X1, X2, · · · , XB} ∈ T \ T′.
4: T′ ← T′ ∪ S.
5: Generate {R1, R2, · · · , RB} randomly, where Ri ∈ [1%, RM].
6: Generate {M1, M2, · · · , MB}, where Mi(1 : ⌈RiN⌉, :) = 1 and Mi(⌈RiN⌉ + 1 : ⌈RMN⌉, :) = 0.
7: Generate AS = {AS1, AS2, · · · , ASB}, AR = {AR1, AR2, · · · , ARB} and BR = {BR1, BR2, · · · , BRB}, where ASi = Mi ⊙ A, ARi = Mi ⊙ A and BRi = Mi^T ⊙ B.
8: for i = 1 : B do
9: yi = ASi vec(Xi)
10: X0_i = vec−1(BRi yi)
11: X̂i = Ftra(X0_i; Θ) or X̂i = Funf(X0_i, yi; ARi, Θ)
12: Compute loss L using {X̂1, X̂2, · · · , X̂B} and S.
13: Update A, B and Θ.
14: until T \ T′ = ∅
15: return A, B, Θ.

## 2.4 Scalable Training

As shown in (2), (3) and (7), A and B are important for effective SSR. How to obtain an appropriate combination of A, B and the reconstruction model is the main issue. To this end, we develop a novel training strategy dubbed scalable training, which trains A and B jointly with the parameters of the reconstruction model. In scalable training, it is assumed that all parameters are trained using stochastic-gradient-descent-related algorithms like Adam (Kingma & Ba, 2015). If the batch size for training is $B$ and the loss function is $L$, the training process of A, B and Θ for one epoch can be expressed as Algorithm 1, and Fig. 2 illustrates the forward-propagation of the scalable training. The gradients of A and B can be computed as follows:

$$\nabla_{\mathbf{A}} L = \frac{1}{B} \sum_{i=1}^{B} \mathbf{M}_i \odot \nabla_{\mathbf{M}_i \odot \mathbf{A}} L, \qquad (8)$$

$$\nabla_{\mathbf{B}} L = \frac{1}{B} \sum_{i=1}^{B} \mathbf{M}_i^T \odot \nabla_{\mathbf{M}_i^T \odot \mathbf{B}} L. \qquad (9)$$

It can be noticed that the closer a row is to the top of A, or a column to the left of B, the more gradient information it receives for updating, which makes using $\mathbf{M}_S$ and $\mathbf{M}_R$ for effective SSR possible. It is emphasized that the form of $L$ is not limited by SDCS, but is related to the combined reconstruction model. Furthermore, to validate the trained model, a CS ratio validation group (RVG) is applied. Each RVG contains $G$ validation CS ratios $\{R_1, R_2, \cdots, R_G\}$. At the end of each epoch, for each ratio $R_i$, the average PSNR on the validation set is obtained, and the model with the best average PSNR on the RVG is kept as the model for testing. We emphasize that SDCS imposes no restriction on the structure of deep learning models, which means it can be combined with any end-to-end-trained model for SSR. However, the final performance is determined by the structure of the reconstruction model.
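A condensed PyTorch sketch of one epoch of Algorithm 1 follows. We assume A and B are `nn.Parameter`s registered with the optimizer, `recon` is any reconstruction model with the interface of equation (7) (such as the `UnfoldingRecon` sketch above), and the loss is plain MSE for concreteness, since the form of L is model-dependent:

```python
import math
import torch

def scalable_train_epoch(loader, recon, A, B, optimizer, R_M=0.5, N=1089):
    """One epoch of scalable training (Algorithm 1): A, B and the model
    parameters are updated jointly under randomly drawn CS ratios."""
    M_rows = A.shape[0]
    for blocks in loader:                                # blocks: (batch, H, L)
        optimizer.zero_grad()
        loss = 0.0
        for X in blocks:
            R = torch.empty(1).uniform_(0.01, R_M).item()   # random ratio in [1%, R_M]
            k = math.ceil(R * N)
            M = torch.zeros(M_rows, N)
            M[:k, :] = 1.0                                  # sampling matrix mask M_i
            A_i = M * A                                     # masked sampling matrix
            B_i = M.t() * B                                 # masked init matrix
            y = A_i @ X.reshape(-1)                         # scalable sampling, eq. (2)
            X0 = (B_i @ y).reshape_as(X)                    # scalable init, eq. (3)
            X_hat = recon(X0, y, A, M)                      # scalable reconstruction
            loss = loss + torch.mean((X_hat - X) ** 2)      # MSE as an example loss
        (loss / len(blocks)).backward()                     # gradients follow (8), (9)
        optimizer.step()
```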
## 3 Related Works

In this section, we first introduce some deep-learning-based methods for image CS, and then some SSR methods are compared with SDCS.

## 3.1 Deep Learning Models For Image CS

For traditional deep learning models, Mousavi et al. (Mousavi et al., 2015) first designed a fully-connected-layer-based stacked denoising autoencoder (SDA) for visual image CS. Lohit et al. (Lohit et al., 2018a) first proposed a six-layer CNN-based model named ReconNet to reconstruct image blocks from measurements. Shi et al. (Shi et al., 2019a) proposed a deeper CNN model named CSNet which has trainable deblocking operations and integrates residual connections (He et al., 2016) for better performance. Furthermore, there are some other models (Du et al., 2019; Yao et al., 2019; Bora et al., 2017; Sun et al., 2020) for visual image CS, and all these models have one thing in common: the models for reconstruction are trained end-to-end.

Deep unfolding models were first developed for the sparse coding problem (Gregor & LeCun, 2010; Chen et al., 2018; Borgerding et al., 2017). Inspired by these models, Zhang et al. (Zhang & Ghanem, 2018) developed a deep unfolding model named ISTA-Net for the image CS problem by unfolding the iterative shrinkage-thresholding algorithm (ISTA) and learning sparse transformation functions. Metzler et al. (Metzler et al., 2017) and Zhang et al. (Zhang et al., 2021) established deep unfolding models named LDAMP and AMP-Net, respectively, based on the approximate message passing (AMP) algorithm, where LDAMP samples and reconstructs the entire image, and AMP-Net measures and recovers an image block-by-block with general trainable deblocking modules. Dong et al. (Dong et al., 2018) designed a model named DPDNN, inspired by the half-quadratic splitting (HQS) algorithm for image inverse problems, which can be applied to visual image CS. These deep unfolding models apply the sampling matrix for reconstruction, and they can also be trained end-to-end.

Some of the above methods discuss sampling matrix training strategies, both for traditional deep learning models (Mousavi et al., 2015; Shi et al., 2019a) and for the deep unfolding model (Zhang et al., 2021), and they all train their initialization matrices. Although the trained sampling matrices can improve reconstruction performance, they are designed for a single CS ratio, and the performance decreases seriously when the CS ratio changes for SSR. With SDCS, in contrast, the model and the trained sampling matrix can perform well at all CS ratios in a certain range.

## 3.2 SSR Methods

As far as we know, there exist several SSR methods (Shi et al., 2019b; Lohit et al., 2018b; Zhang et al., 2020). We introduce and compare them with SDCS in the following paragraphs.

Shi et al. (Shi et al., 2019b) proposed a model dubbed SCSNet. SCSNet trains the sampling matrix with a reconstruction model composed of seven independent sub-models with the same structure. Each sub-model adapts to a sub-range of CS ratios so that the whole model can achieve SSR at CS ratios from 1% to 50%, and a greedy algorithm is applied to rearrange the rows of the sampling matrix for better reconstruction. However, SCSNet has two weaknesses: 1) the number of parameters is very large due to the existence of multiple sub-models; 2) based on SCSNet, existing deep learning models have to change their structure to achieve scalable reconstruction, which brings more burden to the hardware. In contrast, SDCS needs only one model to achieve SSR, and it can be applied to all end-to-end-trained models without changing their structures.

Zhang et al. (Zhang et al., 2020) proposed a framework named CRA which applies two reconstruction models, of which the first is for initializing and completing the measurement, and the second for further reconstruction. Compared with SDCS, CRA does not train the sampling matrix, and the two reconstruction models introduce more parameters. Furthermore, we emphasize that CRA is essentially a pluggable method, which can be combined with other SSR methods by applying a non-linear model for initialization and measurement completion. Therefore, the experimental comparison between SDCS and CRA is not the focus of our paper.

Lohit et al. (Lohit et al., 2018b) designed a general framework, like SDCS, named Rate-Adaptive CS (RACS) which does not need to change the structure of the model and has three training stages. In stage 1, the model is trained with the sampling matrix at a single CS ratio of RM, and all parameters of the model are frozen after stage 1.
In stage 2, the first ⌈RK N⌉ rows of the sampling matrix are optimized, where RK < RM. In stage 3, the following rows of the sampling matrix are trained one by one. It can be noticed that RACS has an obvious weakness: the model is learned for a specific sampling matrix with CS ratio RM in stage 1, which means the performance of the model at lower CS ratios can be further improved. Different from RACS, with SDCS the learned model adapts to a sampling matrix that can change its CS ratio from 1% to RM using a sampling matrix mask. Our strategy gives the model the potential to perform better for SSR.

## 4 Experimental Results

## 4.1 Experimental Settings

In this paper, a model combined with SDCS is named *model*-SDCS. To evaluate the performance of SDCS, six models are combined with SDCS, namely SDA (Mousavi et al., 2015), ReconNet (Lohit et al., 2018a), CSNet+ (Shi et al., 2019a), ISTA-Net+ (Zhang & Ghanem, 2018), DPDNN (Dong et al., 2018) and AMP-Net (Zhang et al., 2021), which sample and reconstruct images block-by-block with a block size of 33 × 33, so that N = 1089. SDA, ReconNet and CSNet+ are traditional deep learning models. ISTA-Net+, DPDNN and AMP-Net are deep unfolding models with 9, 6 and 6 reconstruction modules, respectively. In this paper, the activation functions of SDA are changed to the Rectified Linear Unit (ReLU) (Lohit et al., 2018a) for better performance. It is worth noting that CSNet+ and AMP-Net apply trainable deblocking operations. SDA, ReconNet, ISTA-Net+ and DPDNN do not train the sampling matrix in their original versions, while CSNet+ and AMP-Net have trainable sampling matrices. Furthermore, the above six models use the same initialization matrix as in equation (3) of this paper. Since SCSNet (Shi et al., 2019b) and RACS (Lohit et al., 2018b) can achieve SSR like *model*-SDCS, they are compared with SDCS to show the effectiveness of our framework.

All of our experiments are performed on two datasets: BSDS500 (Arbelaez et al., 2010) and Set11 (Lohit et al., 2018a). BSDS500 contains 500 colorful visual images and is composed of a training set (200 images), a validation set (100 images) and a test set (200 images). Set11 (Lohit et al., 2018a) contains 11 grey-scale images. In this paper, BSDS500 is used for training, validation and testing, and Set11 is used for testing. We generate two training sets for models with and without trainable deblocking operations: (a) training set 1 contains 89600 sub-images of size 99 × 99, randomly extracted from the luminance components of images in the training set of BSDS500 (Shi et al., 2019a); (b) training set 2 contains 195200 sub-images of size 33 × 33, randomly extracted from the luminance components of images in the training set of BSDS500 (Zhang & Ghanem, 2018). CSNet+, AMP-Net, CSNet+-SDCS, AMP-Net-SDCS and SCSNet are trained on training set 1 due to the existence of trainable deblocking operations. SDA, ReconNet, ISTA-Net+, DPDNN, SDA-SDCS, ReconNet-SDCS, ISTA-Net+-SDCS and DPDNN-SDCS are trained on training set 2. The way to combine these models with SDCS is described in Algorithm 1, and they are trained under the conditions given in their original papers. Moreover, we use the validation set of BSDS500 for model selection and the test set of BSDS500 for testing. All sampling matrices are initialized randomly from a Gaussian distribution. RM is 50% and the RVG is {1%, 4%, 10%, 25%, 30%, 40%, 50%}.
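For reference, validation over the RVG amounts to averaging PSNR per ratio. A minimal sketch is given below, where `model_fn(img, R)` is a hypothetical callable that samples and reconstructs an image at CS ratio R; the PSNR definition is the standard one for images scaled to [0, 1]:

```python
import math
import numpy as np

def psnr(x_hat, x, peak=1.0):
    """Standard peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * math.log10(peak ** 2 / mse)

RVG = [0.01, 0.04, 0.10, 0.25, 0.30, 0.40, 0.50]

def validate(model_fn, val_images):
    """Average PSNR at each RVG ratio; the checkpoint with the best
    average over the RVG is kept for testing."""
    return {R: np.mean([psnr(model_fn(img, R), img) for img in val_images])
            for R in RVG}
```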
All experiments are performed on a computer with an AMD Ryzen7 2700X CPU and an RTX2080Ti GPU.

## 4.2 Comparison With Original Deep Learning Methods

In this subsection, we compare SDA, ReconNet, CSNet+, ISTA-Net+, DPDNN and AMP-Net with SDA-SDCS, ReconNet-SDCS, CSNet+-SDCS, ISTA-Net+-SDCS, DPDNN-SDCS and AMP-Net-SDCS. AMP-Net-SDCS* denotes the results obtained when the sampling matrix A and the initialization matrix B are not involved in training. Table 1 and Table 2 show the average PSNR and SSIM of the twelve models tested on Set11 and on the test set of BSDS500 at different CS ratios, respectively. We emphasize that, for each method without SDCS, seven different models are trained for the seven test CS ratios, whereas a single *model*-SDCS is tested at all CS ratios.

Table 1: The results of twelve models tested on Set11 at different CS ratios (each entry is PSNR (dB)/SSIM), where the best is marked in bold.

| Method | 50% | 40% | 30% | 25% | 10% | 4% | 1% |
|---|---|---|---|---|---|---|---|
| SDA | 26.43/0.8007 | 25.14/0.7371 | 24.77/0.7191 | 24.77/0.7234 | 23.66/0.6794 | 21.05/0.5720 | 17.69/0.4376 |
| SDA-SDCS | 30.80/0.9038 | 30.63/0.9009 | 29.43/0.8793 | 28.76/0.8636 | 25.58/0.7660 | 22.77/0.6458 | 19.87/0.4829 |
| ReconNet | 32.12/0.9137 | 30.59/0.8928 | 28.72/0.8517 | 28.04/0.8303 | 24.07/0.6958 | 21.00/0.5817 | 17.54/0.4426 |
| ReconNet-SDCS | 34.29/0.9532 | 33.81/0.9242 | 32.42/0.9313 | 31.42/0.9173 | 26.90/0.8225 | 23.57/0.6931 | 20.02/0.5071 |
| CSNet+ | 38.19/0.9739 | 36.15/0.9625 | 33.90/0.9449 | 32.76/0.9322 | 27.76/0.8513 | 24.24/0.7412 | 20.09/0.5334 |
| CSNet+-SDCS | 36.65/0.9645 | 35.48/0.9568 | 33.58/0.9414 | 32.44/0.9295 | 27.85/0.8493 | 23.92/0.7303 | 20.32/0.5394 |
| ISTA-Net+ | 38.08/0.9680 | 35.93/0.9537 | 33.66/0.9330 | 32.27/0.9167 | 25.93/0.7840 | 21.14/0.5947 | 17.48/0.4403 |
| ISTA-Net+-SDCS | 36.51/0.9693 | 34.92/0.9587 | 32.85/0.9400 | 31.65/0.9256 | 26.99/0.8334 | 23.57/0.7073 | 20.13/0.5146 |
| DPDNN | 35.85/0.9532 | 34.30/0.9411 | 32.06/0.9145 | 30.63/0.8924 | 24.53/0.7392 | 21.11/0.6029 | 17.59/0.4459 |
| DPDNN-SDCS | 39.50/0.9775 | 37.61/0.9686 | 35.38/0.9543 | 34.12/0.9434 | 29.07/0.8708 | 25.08/0.7622 | 20.55/0.5423 |
| AMP-Net | 40.27/0.9804 | 38.23/0.9713 | 35.90/0.9574 | 34.59/0.9477 | 29.45/0.8787 | 25.16/0.7692 | 20.57/0.5639 |
| AMP-Net-SDCS* | 34.57/0.9427 | 32.89/0.9249 | 30.12/0.8922 | 29.32/0.8688 | 24.99/0.7201 | 21.21/0.5649 | 18.97/0.4561 |
| AMP-Net-SDCS | 39.67/0.9781 | 37.96/0.9703 | 35.89/0.9576 | 34.67/0.9477 | 29.59/0.8792 | 25.43/0.7750 | 20.47/0.5629 |

Table 2: The results of twelve models tested on the test set of BSDS500 at different CS ratios (each entry is PSNR (dB)/SSIM), where the best is marked in bold.

| Method | 50% | 40% | 30% | 25% | 10% | 4% | 1% |
|---|---|---|---|---|---|---|---|
| SDA | 26.16/0.8048 | 24.97/0.7392 | 24.58/0.7127 | 24.58/0.7107 | 23.77/0.6489 | 21.75/0.5534 | 19.05/0.4522 |
| SDA-SDCS | 30.17/0.9026 | 29.90/0.8973 | 28.77/0.8704 | 28.13/0.8510 | 25.43/0.7338 | 23.38/0.6145 | 21.08/0.4865 |
| ReconNet | 30.85/0.8949 | 29.47/0.8647 | 27.95/0.8190 | 27.20/0.7914 | 23.98/0.6472 | 21.69/0.5557 | 18.96/0.4531 |
| ReconNet-SDCS | 33.27/0.9448 | 32.52/0.9355 | 31.04/0.9107 | 30.13/0.8921 | 26.46/0.7753 | 23.99/0.6502 | 21.20/0.5063 |
| CSNet+ | 35.89/0.9677 | 33.96/0.9513 | 31.94/0.9251 | 30.91/0.9067 | 27.01/0.7949 | 24.41/0.6747 | 21.42/0.5261 |
| CSNet+-SDCS | 34.91/0.9588 | 33.59/0.9462 | 31.80/0.9221 | 30.82/0.9043 | 26.97/0.7906 | 24.21/0.6692 | 21.48/0.5288 |
| ISTA-Net+ | 34.92/0.9510 | 32.87/0.9264 | 30.77/0.8901 | 29.64/0.8638 | 25.11/0.7124 | 21.82/0.5661 | 18.92/0.4529 |
| ISTA-Net+-SDCS | 34.85/0.9622 | 33.26/0.9465 | 31.38/0.9199 | 30.36/0.9003 | 26.56/0.7811 | 24.00/0.6555 | 21.24/0.5096 |
| DPDNN | 33.56/0.9373 | 32.05/0.9164 | 29.98/0.8759 | 28.87/0.8491 | 24.37/0.6863 | 21.80/0.5716 | 18.97/0.4544 |
| DPDNN-SDCS | 36.84/0.9708 | 34.91/0.9560 | 32.85/0.9323 | 31.74/0.9150 | 27.58/0.8069 | 24.78/0.6858 | 21.72/0.5319 |
| AMP-Net | 37.48/0.9744 | 35.34/0.9594 | 33.17/0.9358 | 32.01/0.9188 | 27.82/0.8133 | 24.95/0.6949 | 21.90/0.5501 |
| AMP-Net-SDCS | 37.04/0.9720 | 35.18/0.9580 | 33.14/0.9354 | 32.04/0.9187 | 27.84/0.8136 | 25.03/0.6967 | 21.87/0.5493 |

From Table 1 and Table 2, it can be found that, compared with models without trained sampling matrices, *model*-SDCS obtains better performance in terms of PSNR and SSIM at most test CS ratios, although it uses only one model for reconstruction. And compared with models that also apply trained sampling matrices (CSNet+ and AMP-Net), *model*-SDCS still obtains competitive performance at all test CS ratios with only a single model. Such a result implies the great potential of deep learning techniques and the sampling matrix training strategy. Therefore, we conclude that *model*-SDCS can effectively achieve SSR without changing the structure of the model. Furthermore, it can be noticed from Table 1 and Table 2 that all the above models combined with SDCS achieve good SSR performance, which verifies the universality of SDCS. It is worth emphasizing that the purpose of this universality is not to combine SDCS with all existing models, but to bring effective SSR performance to reconstruction models: researchers can design reconstruction models for a single CS ratio and, to achieve SSR, only need to combine these models with SDCS. In addition, Table 1 and Table 2 also verify the benefits of the sampling matrix after scalable training, which can be summarized in two points: 1) the trained sampling matrix improves the performance of the reconstruction model, as seen by comparing models with and without SDCS; 2) the trained sampling matrix adapts the reconstruction model to different CS ratios.

## 4.3 Comparison With SSR Methods

In this subsection, we compare SDCS with two SSR methods: SCSNet (Shi et al., 2019b) and RACS (Lohit et al., 2018b).

Table 3: Parameter number of the reconstruction model of seven models.

| Parameter | SDA-SDCS | ReconNet-SDCS | CSNet+-SDCS | ISTA-Net+-SDCS | DPDNN-SDCS | AMP-Net-SDCS | SCSNet |
|---|---|---|---|---|---|---|---|
| Number | 6534 | 22914 | 370560 | 336978 | 1363712 | 229254 | 1110823 |

First, SCSNet is compared with SDA-SDCS, ReconNet-SDCS, CSNet+-SDCS, ISTA-Net+-SDCS, DPDNN-SDCS and AMP-Net-SDCS. Table 3 shows the parameter number of the seven models, and Fig. 3 plots the average PSNR and SSIM of the seven models tested on the test set of BSDS500 at CS ratios from 1% to 50%. It can be noticed that, except for DPDNN-SDCS, all the models have fewer parameters than SCSNet while achieving SSR, and DPDNN-SDCS and AMP-Net-SDCS even outperform SCSNet, which shows the great potential of SDCS. Furthermore, deep unfolding models have better SSR performance than traditional deep learning models.
For example, AMP-Net-SDCS and DPDNN-SDCS outperform SDA-SDCS, ReconNet-SDCS and CSNet+-SDCS, and ISTA-Net+-SDCS outperforms SDA-SDCS and ReconNet-SDCS. We conclude that deep unfolding models are, to a certain degree, more suitable for SSR due to the important role of the sampling matrix in the image reconstruction process.

![8_image_0.png](8_image_0.png)

Figure 3: Comparison between SCSNet and six models with SDCS at different CS ratios.

![8_image_1.png](8_image_1.png)

Figure 4: Comparison between SDCS and RACS at different CS ratios.

Second, SDCS is compared with RACS. Since, with SDCS, AMP-Net outperforms the other deep unfolding models and CSNet+ outperforms the other traditional deep learning models, we use AMP-Net and CSNet+ as examples to compare SDCS and RACS. In this subsection, the values of RK of RACS mentioned in subsection 3.2 are 1%, 10% and 25%. Fig. 4 plots the average PSNR and SSIM of AMP-Net-SDCS, CSNet+-SDCS, AMP-Net-RACS-RK and CSNet+-RACS-RK on the test set of BSDS500 at CS ratios from 1% to 50%, where *model*-RACS-RK denotes the model combined with RACS with the hyperparameter RK. It can be noticed that when the CS ratio is lower than RK, *model*-RACS-RK performs badly. For AMP-Net, AMP-Net-SDCS outperforms all compared AMP-Net-RACS-RKs when the CS ratio is lower than 30%. And for CSNet+, CSNet+-SDCS has better performance than all compared CSNet+-RACS-RKs at all CS ratios. Such a result implies that SDCS can generate a more appropriate combination of sampling matrix and model than RACS.

Fig. 5 shows the Parrots image in Set11 reconstructed by different SSR models at different CS ratios. Fig. 5 is quite revealing in several ways. 1) AMP-Net-SDCS generates better results than SCSNet while maintaining fewer parameters, which shows the great potential of SDCS. 2) *model*-RACS-RK cannot inherit the characteristics of the original models well. For example, AMP-Net and CSNet+ both have trainable deblocking operations, but AMP-Net-RACS-1% and CSNet+-RACS-1% generate images with obvious blocking artifacts at CS ratios of 1%, 4% and 10%. However, AMP-Net-SDCS and CSNet+-SDCS generate smooth images without blocking artifacts. Therefore, we conclude that models with SDCS achieve good SSR performance; in particular, they inherit the characteristics of the original models.

![9_image_0.png](9_image_0.png)

Figure 5: The Parrots images in Set11 reconstructed by different SSR methods at different CS ratios.

To further prove the effectiveness of SDCS, we compare AMP-Net-SDCS and CSNet+-SDCS with AMP-Net and CSNet+ which train their sampling matrices for one single CS ratio and apply the greedy algorithm in SCSNet (Shi et al., 2019b). In this subsection, the sampling matrices of AMP-Net and CSNet+ are trained for the CS ratio of 50%, and their rows are rearranged using the greedy algorithm in SCSNet (Shi et al., 2019b) for better SSR. Fig. 6 plots the average PSNR and SSIM of the four models tested on the test set of BSDS500 at CS ratios from 1% to 50%. It can be noticed that at the CS ratio of 50%, the specially trained models obtain better results than the models with SDCS, but such models perform badly at other CS ratios. However, models combined with SDCS perform well at all CS ratios. Therefore, we conclude that *model*-SDCS outperforms the model with a trained sampling matrix for a single CS ratio, and SDCS provides a more obvious improvement than the greedy algorithm of SCSNet under the condition of only one reconstruction model.

![10_image_0.png](10_image_0.png)

Figure 6: Comparison between two models with and without SDCS at different CS ratios. *model*-G is the model combined with the greedy algorithm in (Shi et al., 2019b).

## 4.4 Simulating The Actual Imaging Conditions

In some practical conditions, noise may be introduced into the measurement y. To this end, we validate the anti-noise performance of SDCS in this subsection to simulate actual CS imaging conditions. In detail, additive white Gaussian noise (Lepskii, 1991) is added to y of all datasets to train and test the models in this subsection, with signal-to-noise ratios (SNRs) of 40dB, 30dB, 25dB and 15dB. All results are obtained by testing 5 times on the test set and averaging. Since, with SDCS, AMP-Net outperforms the other deep unfolding models and CSNet+ outperforms the other traditional deep learning models, we use AMP-Net and CSNet+ as examples to validate the anti-noise performance of SDCS.

Table 4 shows the average PSNR and SSIM of the different models on the test set of BSDS500 with different SNRs at CS ratios of 30%, 10%, and 4%.

Table 4: The results of different models on the test set of BSDS500 with different SNRs (each entry is PSNR (dB)/SSIM).

| SNR | Method | 30% | 10% | 4% |
|---|---|---|---|---|
| 40dB | CSNet+ | 31.89/0.9212 | 27.00/0.7942 | 24.51/0.6767 |
| | CSNet+-SDCS | 31.76/0.9219 | 27.03/0.7953 | 24.41/0.6768 |
| | AMP-Net | 32.93/0.9314 | 27.73/0.8088 | 24.94/0.6930 |
| | AMP-Net-SDCS | 32.83/0.9301 | 27.71/0.8083 | 24.95/0.6916 |
| 30dB | CSNet+ | 31.58/0.9145 | 26.93/0.7863 | 24.43/0.6714 |
| | CSNet+-SDCS | 31.62/0.9178 | 26.95/0.7902 | 24.30/0.6720 |
| | AMP-Net | 31.40/0.9002 | 26.97/0.7761 | 24.46/0.6689 |
| | AMP-Net-SDCS | 31.61/0.9054 | 27.12/0.7818 | 24.57/0.6701 |
| 25dB | CSNet+ | 30.87/0.8963 | 26.46/0.7665 | 24.12/0.6550 |
| | CSNet+-SDCS | 30.92/0.9023 | 26.66/0.7775 | 24.03/0.6624 |
| | AMP-Net | 29.83/0.8588 | 26.07/0.7328 | 23.82/0.6350 |
| | AMP-Net-SDCS | 29.90/0.8599 | 26.15/0.7349 | 23.78/0.6254 |
| 15dB | CSNet+ | 26.84/0.7618 | 23.85/0.6337 | 21.69/0.5280 |
| | CSNet+-SDCS | 26.49/0.7492 | 24.15/0.6434 | 22.04/0.5479 |
| | AMP-Net | 26.43/0.8007 | 23.21/0.6003 | 21.66/0.5324 |
| | AMP-Net-SDCS | 25.29/0.6936 | 22.64/0.5486 | 20.52/0.4433 |

It can be noticed that in most cases, the original model and the model with SDCS have similar performance, which demonstrates that SDCS does not, to a certain extent, weaken the anti-noise ability of the original model. The only exception is that AMP-Net outperforms AMP-Net-SDCS at the low SNR of 15dB, which means that the anti-noise performance of a model may decline at a low SNR when it is combined with SDCS. Furthermore, combined with SDCS, a single model can be used to achieve sampling and reconstruction at multiple CS ratios, which further illustrates the advantage of SDCS.
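The noise setup used in this subsection can be reproduced with a few lines. The sketch below adds white Gaussian noise to a measurement at a prescribed SNR, following the usual SNR definition; the variable names are ours, and in the scalable setting the noise would be applied to the valid part of y, i.e., y(1 : ⌈RS N⌉):

```python
import numpy as np

def add_awgn(y, snr_db):
    """Add white Gaussian noise to measurement y at the given SNR (in dB)."""
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return y + np.sqrt(noise_power) * np.random.randn(*y.shape)

# e.g., the four test conditions used in this subsection:
# for snr in (40, 30, 25, 15): y_noisy = add_awgn(y, snr)
```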
Since model-RACS-1% has the most similar performance to model-SDCS in subsection 4.3, we compare model-RACS-1% with model-SDCS to further validate the anti-noise performance of SDCS. Fig. 7 shows the average PSNR and SSIM of CSNet+-SDCS, CSNet+-RACS-1%, AMP-Net-SDCS and AMP-Net-RACS-1% with different SNRs at CS ratios from 1% to 50%. In Fig. 4, when there is no noise, the maximum differences in the PSNR and the SSIM between model-SDCS and model-RACS-1% are 1dB and 0.04, respectively. It can be noticed from Fig. 7 that as the SNR decreases, the performance of each model decreases, and the performance difference between model-SDCS and model-RACS-1% also increases. In particular, the performance of CSNet+-SDCS with an SNR of 15dB is even better than that of model-RACS-1% with an SNR of 30dB, and for CSNet+-RACS-1%, the PSNR is even lower than 10dB and the SSIM is lower than 0.1, which may be because the traditional deep-learning model combined with RACS cannot be well adapted to the condition of low SNR. Therefore, it can be concluded that SDCS has better anti-noise performance than RACS and is more suitable for imaging under actual conditions.

## 5 Conclusion

In this paper, for the visual image CS problem, we propose a general framework named SDCS to achieve SSR of deep-learning-based models. Apart from the initialization matrix and the two sampling matrix masks, SDCS does not change the structure of the model. The proposed scalable training can generate an appropriate combination of the sampling matrix and the reconstruction model for efficient SSR. Experimental results show that SDCS outperforms other SSR methods. Specifically, models with SDCS can inherit the characteristics of the original models, e.g. the deblocking ability. In addition, it is shown that SDCS can work well with additive noise. However, SDCS has one shortcoming: since Ri is sampled uniformly during training, different rows of the sampling matrix are trained for different amounts of time, which may affect the performance of SDCS. In the future, we will try to find a better way to generate Ri and try some bigger datasets like ImageNet (He et al., 2016) to improve the power of SDCS. Furthermore, we will extend SDCS to high-dimensional CS problems that demand SSR. For example, in snapshot compressive imaging (SCI) (Liu et al., 2018; Ma et al., 2019), it is promising to use a single model to reconstruct hyperspectral images in different frequency bands, and some applications like transient imaging (Sun et al., 2018) and magnetic resonance imaging (MRI) (Liu et al., 2017; 2020) can obtain images at different ratios using one model with a binary sampling matrix. As different CS applications have different sampling and reconstruction strategies, the current SDCS will have to be updated to adapt to them.

## 6 Acknowledgements

This research is supported by National Natural Science Foundation of China (NSFC, No. 62171088, U19A2052, 62020106011), Medico-Engineering Cooperation Funds from University of Electronic Science and Technology of China (No. ZYGX2021YGLH215, ZYGX2022YGRH005).

## References

Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 33(5):898–916, 2010.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. *SIAM Journal on Imaging Sciences*, 2(1):183–202, 2009.

Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models.
In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 537–546, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.

Mark Borgerding, Philip Schniter, and Sundeep Rangan. AMP-inspired deep networks for sparse linear inverse problems. *IEEE Transactions on Signal Processing*, 65(16):4293–4308, 2017.

Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. In *Advances in Neural Information Processing Systems*, pp. 9061–9071, 2018.

Khanh Quoc Dinh and Byeungwoo Jeon. Iterative weighted recovery for block-based compressive sensing of image/video at a low subrate. *IEEE Transactions on Circuits and Systems for Video Technology*, 27(11):2294–2308, 2017. doi: 10.1109/TCSVT.2016.2587398.

Weisheng Dong, Guangming Shi, Xin Li, Yi Ma, and Feng Huang. Compressive sensing via nonlocal low-rank regularization. *IEEE Transactions on Image Processing*, 23(8):3618–3632, 2014.

Weisheng Dong, Peiyao Wang, Wotao Yin, Guangming Shi, Fangfang Wu, and Xiaotong Lu. Denoising prior driven deep neural network for image restoration. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 41(10):2305–2318, 2018.

David L Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. *Proceedings of the National Academy of Sciences*, 106(45):18914–18919, 2009.

Jiang Du, Xuemei Xie, Chenye Wang, Guangming Shi, Xun Xu, and Yuxiang Wang. Fully convolutional measurement network for compressive sensing image reconstruction. *Neurocomputing*, 328:105–112, 2019.

Marco F Duarte, Mark A Davenport, Dharmpal Takhar, Jason N Laska, Ting Sun, Kevin F Kelly, and Richard G Baraniuk. Single-pixel imaging via compressive sampling. *IEEE Signal Processing Magazine*, 25(2):83–91, 2008.

Michael Elad. *Sparse and redundant representations: from theory to applications in signal and image processing*. Springer Science & Business Media, 2010.

Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In *The 27th International Conference on International Conference on Machine Learning*, pp. 399–406. Omnipress, 2010.

Jiajia Guo, Chao-Kai Wen, Shi Jin, and Geoffrey Ye Li. Convolutional neural network-based multiple-rate compressive sensing for massive MIMO CSI feedback: Design, simulation, and analysis. *IEEE Transactions on Wireless Communications*, 19(4):2827–2840, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *The IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. *CoRR*, abs/2106.09685, 2021. URL https://arxiv.org/abs/2106.09685.

Yixing Huang, Tobias Würfl, Katharina Breininger, Ling Liu, Günter Lauritsch, and Andreas Maier. Some investigations on robustness of deep learning in limited angle tomography. In *International Conference on Medical Image Computing and Computer-Assisted Intervention*, pp. 145–153. Springer, 2018.

Diederik P Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. *International Conference on Learning Representations*, 2015.

OV Lepskii. On a problem of adaptive estimation in Gaussian white noise. *Theory of Probability & Its Applications*, 35(3):454–466, 1991.

Chengbo Li, Hong Jiang, Paul Wilford, Yin Zhang, and Mike Scheutzow. A new compressive video sensing framework for mobile broadcast. *IEEE Transactions on Broadcasting*, 59(1):197–205, 2013.

Yong Li, Wenrui Dai, Junni Zhou, Hongkai Xiong, and Yuan F. Zheng. Scalable structured compressive video sampling with hierarchical subspace learning. *IEEE Transactions on Circuits and Systems for Video Technology*, 30(10):3528–3543, 2020. doi: 10.1109/TCSVT.2019.2939370.

Yang Liu, Xin Yuan, Jinli Suo, David J Brady, and Qionghai Dai. Rank minimization for snapshot compressive imaging. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 41(12):2990–3006, 2018.

Yipeng Liu, Shan Wu, Xiaolin Huang, Bing Chen, and Ce Zhu. Hybrid CS-DMRI: Periodic time-variant subsampling and omnidirectional total variation based reconstruction. *IEEE Transactions on Medical Imaging*, 36(10):2148–2159, 2017.

Yipeng Liu, Zhen Long, Huyan Huang, and Ce Zhu. Low CP rank and Tucker rank tensor completion for estimating missing components in image data. *IEEE Transactions on Circuits and Systems for Video Technology*, 2019.

Yipeng Liu, Tengteng Liu, Jiani Liu, and Ce Zhu. Smooth robust tensor principal component analysis for compressed sensing of dynamic MRI. *Pattern Recognition*, 102:107252, 2020.

Suhas Lohit, Kuldeep Kulkarni, Ronan Kerviche, Pavan Turaga, and Amit Ashok. Convolutional neural networks for noniterative reconstruction of compressively sensed images. *IEEE Transactions on Computational Imaging*, 4(3):326–340, 2018a.

Suhas Lohit, Rajhans Singh, Kuldeep Kulkarni, and Pavan Turaga. Rate-adaptive neural networks for spatial multiplexers. *arXiv preprint arXiv:1809.02850*, 2018b.

Jiawei Ma, Xiao-Yang Liu, Zheng Shou, and Xin Yuan. Deep tensor ADMM-Net for snapshot compressive imaging. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 10223–10232, 2019.

Chris Metzler, Ali Mousavi, and Richard Baraniuk. Learned D-AMP: Principled neural network based compressive image recovery. In *Advances in Neural Information Processing Systems*, pp. 1772–1783, 2017.

Ali Mousavi, Ankit B Patel, and Richard G Baraniuk. A deep learning approach to structured signal recovery. In *The 53rd Annual Allerton Conference on Communication, Control, and Computing*, pp. 1336–1343. IEEE, 2015.

JH Rick Chang, Chun-Liang Li, Barnabas Poczos, BVK Vijaya Kumar, and Aswin C Sankaranarayanan. One network to solve them all–solving linear inverse problems using deep projection models. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 5888–5897, 2017.

Wuzhen Shi, Feng Jiang, Shaohui Liu, and Debin Zhao. Image compressed sensing using convolutional neural network. *IEEE Transactions on Image Processing*, 29:375–388, 2019a.

Wuzhen Shi, Feng Jiang, Shaohui Liu, and Debin Zhao. Scalable convolutional neural network for image compressed sensing. In *The IEEE Conference on Computer Vision and Pattern Recognition*, pp. 12290–12299, 2019b.

Wuzhen Shi, Shaohui Liu, Feng Jiang, and Debin Zhao. Video compressed sensing using a convolutional neural network. *IEEE Transactions on Circuits and Systems for Video Technology*, pp. 1–1, 2020. doi: 10.1109/TCSVT.2020.2978703.

Yueming Su and Qiusheng Lian. iPiano-Net: Nonconvex optimization inspired multi-scale reconstruction network for compressed sensing. *Signal Processing: Image Communication*, 89:115989, 2020.

Jian Sun, Huibin Li, Zongben Xu, et al. Deep ADMM-Net for compressive sensing MRI. In *Advances in Neural Information Processing Systems*, pp. 10–18, 2016.

Qilin Sun, Xiong Dun, Yifan Peng, and Wolfgang Heidrich. Depth and transient imaging with compressive SPAD array cameras. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 273–282, 2018.

Yubao Sun, Jiwei Chen, Qingshan Liu, and Guangcan Liu. Learning image compressed sensing with sub-pixel convolutional generative adversarial network. *Pattern Recognition*, 98:107051, 2020.

Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. MaxViT: Multi-axis vision transformer. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIV*, pp. 459–479. Springer, 2022.

Yibo Xu, Weidi Liu, and Kevin F Kelly. Compressed domain image classification using a dynamic-rate neural network. *IEEE Access*, 8:217711–217722, 2020.

Chenggang Yan, Biao Gong, Yuxuan Wei, and Yue Gao. Deep multi-view enhancement hashing for image retrieval. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2020a.

Chenggang Yan, Zhisheng Li, Yongbing Zhang, Yutao Liu, Xiangyang Ji, and Yongdong Zhang. Depth image denoising using nuclear norm and learning graph model. *ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)*, 16(4):1–17, 2020b.

Chenggang Yan, Biyao Shao, Hao Zhao, Ruixin Ning, Yongdong Zhang, and Feng Xu. 3D room layout estimation from a single RGB image. *IEEE Transactions on Multimedia*, 22(11):3014–3024, 2020c.

Hantao Yao, Feng Dai, Shiliang Zhang, Yongdong Zhang, Qi Tian, and Changsheng Xu. DR2-Net: Deep residual reconstruction network for image compressive sensing. *Neurocomputing*, 359:483–493, 2019.

Wenbin Yin, Xiaopeng Fan, Yunhui Shi, Ruiqin Xiong, and Debin Zhao. Compressive sensing based soft video broadcast using spatial and temporal sparsity. *Mobile Networks and Applications*, 21(6):1002–1012, 2016.

Jian Zhang and Bernard Ghanem. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In *The IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1828–1837, 2018.

Zhikang Zhang, Kai Xu, and Fengbo Ren. CRA: A generic compression ratio adapter for end-to-end data-driven image compressive sensing reconstruction frameworks. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 1439–1443. IEEE, 2020.

Zhonghao Zhang, Yipeng Liu, Jiani Liu, Fei Wen, and Ce Zhu. AMP-Net: Denoising-based deep unfolding for compressive image sensing. *IEEE Transactions on Image Processing*, 30:1487–1500, 2021. doi: 10.1109/TIP.2020.3044472.
Review 1: Summary: The paper proposes a framework for solving (with a single model) Compressed Sensing problems in which different subsampling ratios might be encountered. The paper considers the supervised setting for solving inverse problems, i.e. a model is trained to reconstruct an image from a corrupted observation. The empirical finding of prior work is that the best performance is obtained when different models are trained for (significantly) different subsampling ratios. The current paper considers this a prior work limitation mainly because of hardware (storage?) and training constraints. The paper proposes to solve this problem by using a trainable sampling (forward) matrix, A. During training, the matrix is masked at different ratios to reconstruct images, with solution initializations arriving from masking (at different ratios) of an initialization matrix B.

Strengths and Weaknesses:

Strengths:
* The problem that the authors study is relevant.
* The paper builds upon a large line of prior work.
* The proposed solution is intuitive.
* The results show that the method outperforms both single model and previously proposed SSR solutions.

Weaknesses:
* The paper ignores a body of work based on generative networks for solving inverse problems. A different approach to the supervised learning route is to use a generative model as a prior for the domain and then find a solution from the network that explains the measurements. This approach has found a lot of empirical success (especially recently with the rise of powerful generative networks) and doesn't have the problem of different ratios, which is the topic of investigation of the current paper. This line of work should be at least cited (if not compared to). For example, the Bora et al. (2017) paper is incorrectly cited by the authors as a supervised learning approach for solving inverse problems.
* The paper's presentation could be improved (see Requested Changes for a list of typos and suggested improvements).
* Experiments are limited. The biggest training set contains only 200 images. Also, isn't the testing on Set11 considered out-of-distribution testing, since the training images were colorful? What is the point of the experiments on this test set? Is this a claim of better out-of-distribution performance?
* From the paper, I gathered that the forward matrix is free to be chosen at will (which seems to be one of the main innovations of the paper). It would be cool to see what the learned forward matrix is, in the sense of how the images look visually after the degradation (sampling) at different ratios.
* Also, in many problems, the forward matrix is fixed, i.e. in physical systems. It would be cool to evaluate performance in this setting, e.g. maybe in the setting where A is a random inpainting matrix (with different probabilities of masking based on the corresponding subsampling ratio).
* One of the main arguments of the paper against having separate models based on the subsampling ratio is the storage of the trained models. But these models will not be doing something entirely different, so it is possible that some method for efficient parameter tuning would be effective. For example, [LoRA](https://arxiv.org/abs/2106.09685), which was proposed for efficient tuning of LLMs, has lately been applied to Computer Vision and Stable Diffusion. It would be nice to hear the authors' perspective on this.
Requested Changes:

Critical for acceptance:
* I would recommend citing, discussing, and potentially comparing to the unsupervised approach for solving inverse problems, i.e. the approach of using pre-trained generative networks as priors.
* The experimental section should be strengthened. I think it would be best to include some results on classic datasets, such as CIFAR-10 or ImageNet. I understand that the computational requirements might be an issue, but I think at least CIFAR-10 should be doable on the described hardware, and it will be a convincing experiment that this framework scales.
* I think it would be valuable to the paper to also evaluate performance when the forward (sampling) matrix is given. This will also help in getting a better sense of how this method performs on standard benchmarks and compared to recent state-of-the-art approaches for solving specific inverse problems. For example, a valuable comparison would be with MaxVIT (MAXIM).

Not so critical for acceptance:
* Fix several typos and improve the delivery of the paper. Examples:
  * "bring the model the ability" -> "bring to the model the ability".
  * in the related work section, there are repetitions in the citations, such as Lohit et. al Lohit et. al (2018b)
* The first paragraph of 3.1 feels a bit disconnected from the paper, since it just lists architectural innovations, which is orthogonal to the proposed framework.
* I think some parts could be better explained for the unfamiliar reader. E.g. why is it better to have one sampling matrix (that gets masked every time)? Also, it would help to define what the block-based visual image CS problem is.
* It would be nice to explain why the initial value for the problem to be solved is chosen to be a linear matrix times the measurements. I could imagine "rough" non-linear reconstruction approximations that would later be improved.

Broader Impact Concerns: I do not have any ethical concerns.

==================================================

Review 2: Summary: This paper proposes a technique that adapts deep networks for compressive sensing to multiple subsampling ratios. The proposed method involves learning common sampling and reconstruction matrices and using the corresponding rows/columns of the matrices for multiple subsampling ratios. This approach is modular and can be combined with any deep network architecture that performs compressive sensing and reconstruction of images/image patches. The authors perform experiments using different deep network architectures to demonstrate the efficacy at individual subsampling ratios. The authors also compare their methods to other proposals for robust and scalable compressive sensing and show their techniques are more parameter/compute efficient as well as achieving superior PSNR.

Strengths and Weaknesses:

Strengths:
1. SDCS seems to show results across multiple subsampling ratios that are superior to/as effective as the original techniques tuned to each subsampling ratio.
2. SDCS is modular and can be combined with any network architecture.

Weaknesses:
1. While the experiments demonstrate the efficacy of SDCS, the authors do not indicate whether this improvement can be attributed to the fact that SDCS-trained networks simply see a lot more data ($B$ times the data that a single network sees).
Especially at lower subsampling ratios, the number of samples that are used to train the models and sensing matrices could be a lot more than just $B$ times the data seen by a single network, since higher subsampling ratios also influence the first few rows/columns of the sensing/reconstruction matrices. A clarification of this point, or an experiment that allows baselines to see more data, will be helpful in pointing out the efficacy of the proposed method.
2. The authors do not perform any ablation studies that demonstrate the efficacy of different components. It would be interesting to know if the improvements are primarily due to learning the sampling/reconstruction matrices or due to the fact that the single network is trained on all the data. Implementing a version of SDCS that does not update the $\mathbf{A,B}$ matrices and only updates the parameter matrices $\Theta$ will answer this question.
3. It is unclear whether this reconstruction technique can generalize to sampling matrices that are not learned by the model. There may be scenarios where the measurements are generated by a randomized sampling technique and one may want to reconstruct the image from these measurements without access to the exact measurement matrix. How does SDCS perform in this scenario?

Presentation Improvements:
1. The paper could be written in a more accessible manner so that it can reach a wider audience than just the research community in deep learning for compressive sensing. A short primer explaining the scaling problem with deep networks for compressive sensing would help motivate the problem for the reader who is unfamiliar with the problems in this sub-field. In fact, moving a version of Table 3 to the first few pages of the paper could give us a sense of the contributions of the paper before we dive into the algorithm.
2. Citations could use citep to put them in parentheses rather than just cite. There are a few places where the author names are repeated (Shi et al. Shi et al.; Lohit et al. Lohit et al.)

Requested Changes:
1. Ablation experiment on learning $\mathbf{A,B}$ vs just training at different subsampling ratios without learning $\mathbf{A,B}$
2. Normalizing the amount of training data seen by SDCS vs baselines. If this is already the case, it should be mentioned/emphasized in the paper.

Broader Impact Concerns: None beyond the usual caveats and concerns that accompany models that are learned from data. The paper could perhaps emphasize that it does not address bias and fairness issues that could arise from using datasets that are not representative of certain populations.

==================================================

Review 3: Summary: The paper introduces a new training scheme to support compressed sensing with different compression ratios. The method learns a sampling matrix, a reconstruction initialization matrix, and the network parameters together. The authors propose a masking scheme to ensure that the same sampling matrix and reconstruction initialization matrix easily work across different compression ratios. The authors perform extensive experimentation with 6 different network architectures for reconstruction, and show that when combined with the proposed method, these architectures outperform their counterparts trained with a vanilla scheme, achieving state-of-the-art performance across multiple compression ratios. Furthermore, they show that training with the proposed scheme provides some robustness to additive Gaussian noise.
Strengths and Weaknesses: The paper is well written, easy to read, and most claims are supported well with empirical evidence. The idea is simple and straightforward, and I appreciate the authors providing extensive empirical evidence that the idea works across different models and datasets. However, I have a few questions/comments about the set-up:
1. The idea of training A and B together with the network probably exists in the literature already. I might have missed this while reading the paper, but can the authors provide more context around related work and confirm which elements (a) A matrix, (b) B matrix, (c) and the masking strategy are introduced in this paper vs what exists in the literature?
2. The results in Table 1 and 2 are very impressive, but it seems peculiar that a model optimized for a single compression ratio does not even come close to the performance of a model trained for multiple compression ratios. Can the authors provide some insight on why this may be the case? Are all experimental parameters the same (was the same amount of effort spent on hyperparameter tuning)?
3. From algorithm 1, it seems like R_i s are sampled uniformly. Does this sampling strategy affect performance? For example, I might imagine that training with smaller R_i s in the initial epochs and expanding R_i s as the epoch increases will help get better performance for the top rows of A_i.
4. In Section 4.4, is Gaussian a realistic noise model? If yes, can the authors kindly provide references and place it in context? Further, I assume that the models are not trained in the presence of Gaussian noise - but only evaluated - can the authors confirm this/make this clear in the paper?

Requested Changes: See weaknesses above. Minor nitpick: it might make sense to combine table 3 with tables 1 and 2 so that it is easier to read the results.

Broader Impact Concerns: None.

==================================================

Metareview: Recommendation: Accept as is

Comment: The main contribution of this paper is the proposal of a flexible deep compressive sensing approach that can accommodate multiple subsampling ratios. Extensive experiments support the performance of the proposed approach. The paper received mixed recommendations, but the majority is leaning toward acceptance. Together with the fact that better performance is offered by the proposed approach in many scenarios and the claims are supported by accurate and clear evidence, I recommend acceptance.

==================================================
# Distributed Training Of Large Graph Neural Networks With Variable Communication Rates Anonymous authors Paper under double-blind review ## Abstract Training Graph Neural Networks (GNNs) on large graphs presents unique challenges due to the large memory and computing requirements. Distributed GNN training, where the graph is partitioned across multiple machines, is a common approach to training GNNs on large graphs. However, as the graph cannot generally be decomposed into small non-interacting components, data communication between the training machines quickly limits training speeds. Compressing the communicated node activations by a fixed amount improves the training speeds, but lowers the accuracy of the trained GNN. In this paper, we introduce a variable compression scheme for reducing the communication volume in distributed GNN training without compromising the accuracy of the learned model. Based on our theoretical analysis, we derive a variable compression method that converges to a solution that is equivalent to the full communication case. Our empirical results show that our method attains a comparable performance to the one obtained with full communication and that for any communication budget, we outperform full communication at any fixed compression ratio. ## 1 Introduction Graph Neural Networks (GNNs) are a neural network architecture tailored for graph-structured data (Zhou et al., 2020; Wu et al., 2020; Bronstein et al., 2017). GNNs are multi-layered networks, where each layer is composed of a (graph) convolution and a point-wise non-linearity (Gama et al., 2018). GNNs have shown state-of-the-art performance in robotics (Gama et al., 2020a; Tzes et al., 2023), weather prediction (Lam et al., 2022), protein interactions (Jumper et al., 2021) and physical system interactions (Fortunato et al., 2022), to name a few. The success of GNNs can be attributed to some of their theoretical properties such as being permutation-invariant (Keriven and Peyré, 2019; Satorras et al., 2021), stable to perturbations of the graph (Gama et al., 2020b), transferable across graphs of different sizes (Ruiz et al., 2020), and their expressive power (Xu et al., 2018; Bouritsas et al., 2022; Kanatsoulis and Ribeiro, 2022; Chen et al., 2019). In a GNN, the data is propagated through the graph via graph convolutions, which aggregate information across neighborhoods. In large-scale graphs, the data diffusion over the graph is costly in terms of computing and memory requirements. To overcome this limitation, several solutions were proposed. Some works have focused on the transferability properties of GNNs, i.e. training a GNN on a small graph and deploying it on a large-scale graph (Ruiz et al., 2020; Maskey et al., 2023; 2022). Other works have focused on training on a sequence of growing graphs (Cerviño et al., 2023; Cerviño et al., 2023). Though useful, these solutions either assume that an accuracy degradation is admissible (i.e. transferability bounds), or that all the graph data is readily accessible within the same training machine. These assumptions may not hold in practice, as we might need to recover the full centralized performance without having the data in a centralized manner. Real-world large-scale graph data typically cannot fit within the memory of a single machine or accelerator, which forces GNNs to be learned in a distributed manner (Cai et al., 2021; Wang et al., 2021; Zheng et al., 2020; Wang et al., 2022). To do this efficiently, several solutions have been proposed. 
There are *data-parallel* approaches that distribute the data across different machines, where model parameters are updated with local data and then aggregated via a parameter server. Another solution is *federated learning* (FL), where the situation is even more complex, as data is naturally distributed across nodes and cannot be shared to a central location due to privacy or communication constraints (Bonawitz et al., 2019; Li et al., 2020a;b). Compared to data-parallel approaches, FL suffers from data heterogeneity challenges, as we cannot control the distribution of data across nodes to be identically distributed (Shen et al., 2022). GNN-FL adds additional complexity, as the graph itself (the input part of the model) is split across different machines. The GNN-FL counterpart has proven to be successful when the graph can be split across different machines (He et al., 2021; Mei et al., 2019). However, training locally while assuming no interaction between datasets is not always a reasonable assumption for graph data.

This work is driven by two observations about GNNs. First, large graph datasets cannot be split into non-interacting pieces across a set of machines. Therefore, training GNNs distributively requires interaction between agents in the computation of the gradient updates. Second, the amount of communicated data affects the performance of the trained model; the more we communicate, the more accurate the learned model will be. In this paper, we posit that the compression rate in the communication between agents should vary between the different stages of the GNN training. Intuitively, at the early stages of training, the communication can be less reliable, but as training progresses, and we are required to estimate the gradient more precisely, the quality of the communicated data should improve.

![1_image_0.png](1_image_0.png)

Figure 1: Example of partitioning a graph with 9 nodes into 3 machines. Each machine only stores the features of the nodes in its corresponding partition.

In this paper, we consider the problem of efficiently learning a GNN across a set of machines, each of which has access to part of the graph data (see Figure 1). Drawn from the observation that in a GNN, the model parameters are significantly smaller than the graph's input and intermediate node features, we propose to compress the intermediate GNN node features that are communicated between different machines. Given that this compression affects the accuracy of the GNN, in this paper we introduce a variable compression scheme that trades off the communication overhead needed to train a GNN distributively and the accuracy of the GNN. The contributions of this paper are:

1. We present a novel algorithm to learn graph representations while compressing the data communicated between the training agents. We propose to vary the compression rate progressively, to achieve a comparable performance to the no-compression case at a fraction of the communication cost.
2. We theoretically show that our method converges to a first-order stationary point of the full graph training problem while taking distributed steps and compressing the inter-server communications.
3. We empirically show that our method attains a comparable performance to the full communication training scenario while incurring fewer communication costs. In particular, by plotting accuracy as a function of the communication costs, our method outperforms full communication and fixed compression rates in terms of accuracy achieved per communicated byte.
## Related Work

Mini-Batch Training. In the context of distributed GNN training, Zheng et al. (2020) propose to distribute mini-batches between a set of machines, each of which computes a local gradient, updates the GNN, and communicates it back to the server. Zheng et al. (2020) use METIS (Karypis and Kumar, 1998) to partition the graph, which reduces communication overhead and balances the computations between machines. Although Zheng et al. (2020) provide good results in practice, the GNN does not process data on the full graph, only a partition of it, which can yield sub-optimal results compared to processing the full graph. In Kaler et al. (2023), they employ a policy to cache data associated with frequently accessed vertices in remote partitions. This method reduces the communications across partitions by creating local copies of the data.

![2_image_0.png](2_image_0.png)

Figure 2: To compute a gradient step, we need to gather the data. To do so, each machine starts by computing the activations of the local nodes. Then, these activations are **compressed** and **communicated** to adjacent machines. Once all the activations are communicated, each machine **decompresses** the data from the compressed nodes.

Memory Optimization. In this work we do full-batch training, which has also been considered in the literature. Similar to us is *sequential aggregation and rematerialization* (Mostafa, 2022), which sequentially re-constructs and frees pieces of the large GNN during the backward-pass computation. Even in densely connected graphs, this deals with the memory limitations of the GNN, showing that the memory requirements per worker decrease linearly with the number of workers. Others have studied similar approaches in the context of distributed training (Mostafa et al., 2023). In Md et al. (2021), they propose to use a balanced partitioning of the graph, as well as a shared-memory implementation. They utilize a delayed partial aggregation of remote partitions by either ignoring or delaying feature vector aggregation during training.

Quantization. An approach related to compressing the communication between servers is quantization within the architecture. The most salient examples of training GNNs more efficiently using quantization are feature quantization (Ma et al., 2022), Binary Graph Neural Networks (Bahri et al., 2021), vector quantization (Ding et al., 2021), last-bit quantization (Feng et al., 2020), and degree quantization (Tailor et al., 2020). There are also works on adaptive quantization of the messages between machines. In Wan et al. (2023), they adapt the quantization level using stochastic integer quantization. Although similar, that work adapts the quantization level locally, at the message level, which differs from our global view of compression. In all, although related in spirit, quantizing a GNN is a local operation that differs from compressing the communication between servers.

Sampling-Based Methods. Our method can be applied in conjunction with sampling-based methods. In sampling-based methods, each node only aggregates from a random subset of its neighbors (Zeng et al., 2021; Bai et al., 2021; Serafini and Guan, 2021; Liu et al., 2021). In the context of distributed learning, this random subset could include remote nodes. Therefore, communication between machines remains a bottleneck, and our method would still be relevant to reduce the communication volume.
If we bias the sampling to only consider local nodes, then this would hurt performance, as it is equivalent to splitting the graph into multiple disconnected components, which does not work well in practice.

## 2 Learning Graph Neural Networks

In this paper, we consider the problem of training GNNs on large-scale graphs that are stored in a set of machines. Formally, a graph is described by a set of vertices and edges G = (V, E), where V is the set of vertices, whose cardinality |V| = n equals the number of nodes, and E ⊆ V × V is the set of edges. The graph G can be represented by its adjacency matrix S ∈ R^{n×n}. Oftentimes, the graph includes vertex features, F_V ∈ R^{|V|×D_V}, and edge features, F_E ∈ R^{|E|×D_E}, where D_V and D_E are the vertex and edge feature dimensions, respectively. Graph problems fall into three main categories: node-level prediction, where the goal is to predict properties of individual nodes; edge-level prediction, where the goal is to predict edge features or the locations of missing edges; and graph-level prediction, where the goal is to predict properties of entire graphs. In this paper, we focus on distributed training of GNNs for node classification problems. Our distributed training approach is still relevant to the other two types of graph problems, as they also involve a series of GNN layers, followed by readout modules for edge-level or graph-level tasks.

A GNN is a multi-layer network, where at each layer, messages are aggregated between neighboring nodes via graph convolutions. Formally, given a graph signal x ∈ R^n, where [x]_i represents the value of the signal at node i, the graph convolution can be expressed as

$$\mathbf{z}=\sum_{k=0}^{K-1}h_{k}\mathbf{S}^{k}\mathbf{x},\tag{1}$$

where h = [h_0, . . . , h_{K−1}] ∈ R^K are the graph convolutional coefficients. In the case that K = 2 and S is binary, the graph convolution (1) translates into the AGGREGATE function of so-called message-passing neural networks. A GNN is composed of L layers, each of which consists of a graph convolution (1) followed by a point-wise non-linear activation function ρ such as ReLU or tanh. The l-th layer of a GNN can be expressed as

$${\bf X}_{l}=\rho\bigg(\sum_{k=0}^{K-1}{\bf S}^{k}{\bf X}_{l-1}{\bf H}_{l,k}\bigg),\tag{2}$$

where X_{l−1} ∈ R^{n×F_{l−1}} is the output of the previous layer (with X_0 equal to the input data X = [x_1, . . . , x_n] ∈ R^{n×F_0}), and H_{l,k} ∈ R^{F_{l−1}×F_l} are the convolutional coefficients of layer l and hop k. We group all learnable coefficients H = {H_{l,k}}_{l,k}, and define the GNN as Φ(x, S, H) = X_L.

## 2.1 Empirical Risk Minimization

We consider the problem of learning a GNN that, given an input graph signal x ∈ X ⊆ R^n, can predict an output graph signal y ∈ Y ⊆ R^n of an unknown distribution p(X, Y),

$$\operatorname*{minimize}_{\mathcal{H}}\ \mathbb{E}_{p}[\ell(\mathbf{y},\mathbf{\Phi}(\mathbf{x},\mathbf{S},\mathcal{H}))],\tag{SRM}$$

where ℓ is a non-negative loss function ℓ : R^d × R^d → R^+, such that ℓ(y, ŷ) = 0 if and only if y = ŷ. Common choices for the loss function are the cross-entropy loss and the mean squared error. Problem (SRM) is called statistical risk minimization (Vapnik, 2013), and the choice of a GNN as parameterization is justified by the structure of the data and the invariances of the graph (Bronstein et al., 2017). However, problem (SRM) cannot be solved in practice, given that we do not have access to the underlying probability distribution p.
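Before turning to the empirical counterpart of (SRM), the layer in equations (1)–(2) is easy to make concrete. The following is a minimal PyTorch sketch of one GNN layer, for illustration only (the function and variable names are ours, not the authors' implementation):

```python
import torch

def gnn_layer(S, X, H, rho=torch.relu):
    """One GNN layer as in (2): X_l = rho(sum_k S^k X_{l-1} H_{l,k}).

    S: (n, n) graph shift operator (e.g., the adjacency matrix).
    X: (n, F_in) node features from the previous layer.
    H: list of K weight tensors of shape (F_in, F_out), one per hop k.
    """
    Z = torch.zeros(X.shape[0], H[0].shape[1])
    SkX = X  # S^0 X = X
    for k, Hk in enumerate(H):
        if k > 0:
            SkX = S @ SkX  # one more hop of diffusion: S^k X
        Z = Z + SkX @ Hk
    return rho(Z)

# toy usage: n = 9 nodes, F_0 = 4 input features, F_1 = 8 outputs, K = 2 hops
S = (torch.rand(9, 9) < 0.3).float()
X0 = torch.randn(9, 4)
H1 = [torch.randn(4, 8) for _ in range(2)]
X1 = gnn_layer(S, X0, H1)  # shape (9, 8)
```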
In practice, we assume to be equipped with a dataset D = {x_i, y_i}_i drawn i.i.d. from the unknown probability p(X, Y) with |D| = m samples. We can therefore obtain the empirical estimator of (SRM) as

$$\underset{\mathcal{H}}{\text{minimize}}\ \mathbb{E}_{\mathcal{D}}[\ell(\mathbf{y},\mathbf{\Phi}(\mathbf{x},\mathbf{S},\mathcal{H}))]:=\frac{1}{m}\sum_{i=1}^{m}\ell(\mathbf{y}_{i},\mathbf{\Phi}(\mathbf{x}_{i},\mathbf{S},\mathcal{H})).\tag{ERM}$$

The empirical risk minimization problem (ERM) can be solved in practice, given that it only requires access to a set of samples D. The solution to problem (ERM) will be close to that of (SRM), given that the samples are i.i.d. and that the number of samples m is large (Vapnik, 2013). To solve problem (ERM), we resort to gradient descent and update the coefficients H according to

$$\mathcal{H}_{t+1}=\mathcal{H}_{t}-\eta_{t}\mathbb{E}_{\mathcal{D}}[\nabla_{\mathcal{H}}\ell(\mathbf{y},\mathbf{\Phi}(\mathbf{x},\mathbf{S},\mathcal{H}_{t}))],\tag{SGD}$$

where t represents the iteration, and η_t the step-size. In a centralized setting, computing iteration (SGD) presents no major difficulty, and it is the usual choice of algorithm for training a GNN. When the size of the graph becomes large, and the graph data is partitioned across a set of agents, iteration (SGD) requires communication. In this paper, we consider the problem of solving the empirical risk minimization problem (ERM) through gradient descent (SGD) in a decentralized and efficient way.

## 3 Distributed GNN Training

Consider a set Q of workers, with |Q| = Q, that jointly learn a single GNN Φ. Each machine q ∈ Q is equipped with a subset of the graph S and node data, as shown in Figure 2. Each machine is responsible for computing the features of the nodes in its local partition for all layers in the GNN. The GNN model H is replicated across all machines. To learn a GNN, we update the weights according to gradient descent (SGD), and average them across machines. This procedure is similar to the standard FedAverage algorithm used in federated learning (Li et al., 2020b).

The gradient descent iteration (SGD) cannot be computed without communication, given that the data (y, x)_i is distributed across the Q machines. To compute the gradient step (SGD), we need the machines to communicate graph data. What we need to communicate is the data of each node in the adjacent machines; that is, to transmit the input feature x_j for each neighboring node j ∈ N_i^k with j ∈ q′. For each node that we would like to classify, we would require all the graph data belonging to the k-hop neighborhood graph. This procedure is costly and grows with the size of the graph, which renders it unimplementable in practice. As opposed to communicating the node features and the graph structure for each node in the adjacent machine, we propose to communicate the features and activations of the nodes at the boundary. Note that with this procedure the bits communicated between machines do not depend on the number of nodes in the graph, and the number of features compressed can be controlled by the width of the architecture used (see Appendix A). The only computational overhead that this method requires is computing the value of the GNN at every layer for a given subset of nodes using local information only.
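As an illustration of which nodes trigger communication, the following sketch (ours; it assumes a dense boolean adjacency matrix for simplicity) computes the boundary, or "halo", nodes whose activations a machine must fetch from adjacent machines:

```python
import numpy as np

def halo_nodes(adj, part, q):
    """Return the remote nodes whose activations machine q must fetch.

    adj  : (n, n) boolean adjacency matrix.
    part : (n,) array, part[i] = id of the machine that owns node i.
    q    : machine id.
    """
    local = np.where(part == q)[0]
    # all one-hop neighbors of any node owned by machine q...
    nbrs = np.where(adj[local].any(axis=0))[0]
    # ...keeping only the ones that live on another machine
    return nbrs[part[nbrs] != q]

# toy usage: 9 nodes split across 3 machines, as in Figure 1
rng = np.random.default_rng(0)
adj = rng.random((9, 9)) < 0.3
part = np.repeat([0, 1, 2], 3)
print(halo_nodes(adj, part, q=0))
```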
## 3.1 Computing The Gradient Using Remote Compressed Data

Following the framework of communicating the intermediate activations, computing a gradient step (SGD) requires 3 communication rounds: (i) the forward pass, in which each machine fetches the feature vectors of remote neighbors and propagates their messages to the nodes in the local partition; (ii) the backward pass, in which the gradients flow in the opposite direction and are accumulated in the GNN model weights; and (iii) the aggregation step, in which the weight gradients are summed across all machines and used to update the GNN model weights. The communication steps for a single GNN layer are illustrated in Figure 2.

To compute a gradient step, we first compute the forward pass, for which we need the output of the GNN at a node i. To compute the output of a GNN at node i, we need to evaluate the GNN according to the graph convolution (1). Note that to evaluate (1), we require access to the value of the input features x_j for each j ∈ N_i^k, which might not be on the same client as i. In this paper, we propose that the clients with nodes in N_i^k compute the forward passes *locally* (i.e., using only the nodes in their client), and communicate the compressed activations for each layer l. Once the values of the activations are received, they are decompressed and processed by the local machine, and the output of the GNN is obtained. To obtain the gradient of the loss ℓ, the output of the GNN is compared to the true label, and the gradient with respect to the parameters of the GNN is computed. The errors are once again compressed and communicated back to the clients, which update the values of the parameters of the GNN. After every client updates the values of the parameters H, there is a final round of communication where the values are averaged. Note that this procedure allows the GNN to be updated using the whole graph. The communication costs are reduced, given that the number of bits communicated is controlled by the compressing-decompressing mechanism, which can be modeled as follows.

Definition 1 *The compression and decompression mechanism* g_{ϵ,r}, g^{−1}_{ϵ,r} *with compression error* ϵ *and compression rate* r *satisfies that, given a set of parameters* x ∈ R^n*, when compressed and decompressed, the following relations hold, i.e.,*

$$\mathbf{z}=g_{\epsilon,r}(\mathbf{x}),\quad\hat{\mathbf{x}}=g_{\epsilon,r}^{-1}(g_{\epsilon,r}(\mathbf{x})),\quad\mathbb{E}[\|\hat{\mathbf{x}}-\mathbf{x}\|]\leq\delta,\quad\text{and}\quad\mathbb{E}[\|\hat{\mathbf{x}}-\mathbf{x}\|^{2}]\leq\epsilon^{2},\tag{3}$$

where z ∈ R^{n/r} with n/r ∈ Z *is the compressed signal. If* δ = 0*, the compression is lossless.*

Algorithm 1 VARCO: Distributed Graph Training with VARiable COmmunication Rates

Split graph G into Q partitions and assign them to each worker q_i
Initialize the GNN with weights H_0 and distribute it to all workers q_i
Fix the initial compression rate c_0 and the scheduler r
repeat
    Each Worker q_i: Compute the activations for each node in the local graph (cf. equation (2)) using the local nodes.
    Each Worker q_i: Compress the activations of the nodes that are in the boundary using the compression mechanism (cf. equation (3)), and communicate them to the adjacent machines.
    Each Worker q_i: Collect data from all adjacent machines, and compute the forward pass by using non-compressed activations for local nodes and compressed activations for non-local nodes that are fetched from other machines.
    Each Worker q_i: Compute parameter gradients by back-propagating errors across machines and through the differentiable compression routine. Apply the gradient step to the parameters in each worker (cf. equation (SGD)).
    Server: Average parameters, send them back to the workers, and update the compression rate c_{t+1}
until Convergence

Intuitively, the error ϵ and the compression rate r are related; a larger compression rate r will render a larger compression error ϵ. Also, compressed signals require less bandwidth to be communicated. Relying on the compression mechanism (3), a GNN trained using a *fixed* compression ratio r will converge to a neighborhood of the optimal solution. The size of the neighborhood will be given by the variance of the compression error ϵ². The first-order compression error δ is related to the mismatch between the compressed and decompressed signals. Our analysis works for any value of δ, which encompasses both lossy (δ > 0) as well as lossless (δ = 0) compression.

AS1 The loss function ℓ has L Lipschitz continuous gradients, ||∇ℓ(y_1, x) − ∇ℓ(y_2, x)|| ≤ L||y_1 − y_2||.

AS2 The non-linearity ρ is normalized Lipschitz.

AS3 The GNN and its gradients are M-Lipschitz with respect to the parameters H.

AS4 The graph convolutional filters in every layer of the graph neural network are bounded, i.e.,

$$\|\mathbf{h}\ast_{\mathbf{S}}\mathbf{x}\|\leq\|\mathbf{x}\|\,\lambda_{\max}\!\left(\sum_{t=0}^{T}h_{t}\mathbf{S}^{t}\right),\quad\text{with}\quad\lambda_{\max}\!\left(\sum_{t=0}^{T}h_{t}\mathbf{S}^{t}\right)<\infty.\tag{4}$$

Assumption 1 holds for most losses used in practice over a compact set. Assumption 2 holds for most non-linearities used in practice over normalized signals. Assumption 3 is a standard assumption, and the exact characterization is an active area of research (Fazlyab et al., 2019). Assumption 4 holds in practice when the weights are normalized.

Proposition 1 (Convergence of GNNs Trained on Fixed Compression) *Under assumptions 1 through 4, consider the iterates generated by equation (SGD), where the normalized signals* x *are compressed using compression rate* c *with corresponding error* ϵ *(cf. Definition 1). Consider an* L*-layer GNN with* F *features and* K *coefficients per layer. Let the step-size be* η ≤ 1/L_∇*, with* L_∇ = 4Mλ_max^L √(KFL)*. If the compression error is such that at every step* k*, and for any* β > 0*,*

$$\mathbb{E}_{\mathcal{D}}[||\nabla_{\mathcal{H}}\ell(y,\Phi(x,\mathbf{S};\mathcal{H}_{t}))||^{2}]\geq L_{\nabla}^{2}\epsilon^{2}+\beta,\tag{5}$$

*then the fixed compression mechanism converges to a neighborhood of the first-order stationary point of (SRM) in* K ≤ O(1/β) *iterations, i.e.,*

$$\mathbb{E}_{\mathcal{D}}[||\nabla_{\mathcal{H}}\ell(y,\Phi(x,\mathbf{S};\mathcal{H}_{t}))||^{2}]\leq L_{\nabla}^{2}\epsilon^{2}+\beta.\tag{6}$$

![6_image_0.png](6_image_0.png)

Figure 3: Accuracy as a function of the number of servers. (a) Random partitioning on the Arxiv dataset. (b) METIS partitioning on the Arxiv dataset. (c) Random partitioning on the Products dataset.

![6_image_1.png](6_image_1.png)

Figure 4: Accuracy as a function of epoch for the Arxiv dataset with 16 servers.
Proof: The proof can be found in Appendix E. ■

Proposition 1 is important because it allows us to show that the fixed compression mechanism converges to a neighborhood of the first-order stationary point of (SRM). The size of the neighborhood can be controlled by the compression error ϵ. Although useful, Proposition 1 presents a limitation on the quality of the solution that can be obtained through compression. In what follows, we introduce a variable compression scheme that can trade off efficient communication and sub-optimality guarantees.

## 4 VARCO - Variable Compression For Distributed GNN Learning

In this paper, we propose variable compression rates as a way to close the gap between training in a centralized manner and efficient training in a distributed manner. We use Proposition 1 as a stepping stone towards a training mechanism that reduces the compression ratio r_t as the iterates progress. We begin by defining a scheduler r(t) : Z → R as a strictly decreasing function that, given a training step t ∈ Z, returns a compression ratio r(t), such that r(t′) < r(t) if t′ > t. The scheduler r will be in charge of reducing the compression ratio as the iterates increase. An example is the linear scheduler r_lin(t) = (c_min − c_max) t / T + c_max (see Appendix 2).

In this paper, we propose to train a GNN by following a compression scheme given by a scheduler r, a GNN architecture, a number of clients Q, and a dataset D. During the forward pass, we compute the output of the GNN at a node n_i using the local data and the compressed data from adjacent machines. The compressed data encompasses both the features at the adjacent nodes, as well as the compressed activations of the intermediate layers for that node. The compression mechanism compresses the values of the GNN using scheduler r(t), and communicates them to the machine that owns node n. The backward pass receives the gradient from the machine that owns node n and updates the values of the GNN. After the GNN values are updated, the coefficients of the GNN are communicated to a centralized agent that averages them and sends them back to the machines. A more succinct description of this procedure can be found in Algorithm 1.

![7_image_0.png](7_image_0.png)

Figure 5: Accuracy per epoch for the Products Dataset with 16 servers.

## 4.1 Convergence Of The VARCO Algorithm

We characterize the convergence of the VARCO algorithm in the following proposition.

Proposition 2 (Scheduler Convergence) *Under assumptions 1 through 4, consider the iterates generated by equation (SGD), where the normalized signals* x *are compressed using compression rate* r *with corresponding error* ϵ *(cf. Definition 1). Consider an* L*-layer GNN with* F *features and* K *coefficients per layer. Let the step-size be* η ≤ 1/L_∇*, with* L_∇ = 4Mλ_max^L √(KFL)*. Consider a scheduler such that the compression error decreases at every step, i.e.,* ϵ_{t+1} < ϵ_t*. Then, for any* σ > 0*,*

$$\mathbb{E}_{\mathcal{D}}[||\nabla_{\mathcal{H}}\ell(y,\Phi(x,\mathbf{S};\mathcal{H}_{t}))||^{2}]\leq\sigma\tag{7}$$

*happens infinitely often.*

Proof: The proof can be found in Appendix A.1. ■

According to Proposition 2, for any scheduler, we can obtain an iterate t whose gradient has a norm smaller than σ. This is an improvement over the fixed compression result of Proposition 1, given that we removed the term that depends on ϵ², converging to a smaller neighborhood.
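Definition 1 does not commit to a particular codec, only to the interface and the error bounds. As one minimal, purely illustrative instance with z ∈ R^{n/r} (the paper's actual compressor may differ), the sketch below average-pools blocks of r entries and decompresses by repetition; measuring the reconstruction error empirically shows how a larger rate r induces a larger error ϵ, which is exactly the quantity the scheduler drives down over training:

```python
import torch

def compress(x, r):
    """g_{eps,r}: average-pool blocks of r entries, so z lives in R^{n/r}."""
    return x.reshape(-1, r).mean(dim=1)

def decompress(z, r):
    """g_{eps,r}^{-1}: repeat each entry r times to map back to R^n."""
    return z.repeat_interleave(r)

# empirical compression error for a random normalized signal (n divisible by r)
x = torch.randn(1024)
x = x / x.norm()
for r in (2, 4, 8, 16):
    err = (decompress(compress(x, r), r) - x).norm().item()
    print(f"rate {r}: ||x_hat - x|| = {err:.3f}")
```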
The condition on the scheduler is simple to satisfy; the compression error ϵ(t) needs to decrease on every step (see more information about schedulers in Appendix A.1). This means that the scheduler does not require information about the gradient of the GNN.

Compressing the activations in a GNN reduces the communication required to train it, given that the overall volume of communicated data is reduced. However, keeping a fixed compression ratio might not be enough to obtain a GNN of comparable accuracy to the one trained using no compression. By using a variable compression for the communications, we obtain the best of both worlds: we reduce the communication overhead needed to train a GNN, without compromising the overall accuracy of the learned solution. The key observation is that in the early stages of training, a gradient estimator with a larger variance is acceptable, while in the later stages, a more precise estimator needs to be used. This behavior can be translated into efficiency: use more information from other servers only when needed.

## 5 Experiments

We benchmarked our method on two real-world datasets: OGBN-Arxiv (Wang et al., 2020) and OGBN-Products (Bhatia et al., 2016). In the case of OGBN-Arxiv, the graph has 169,343 nodes and 1,166,243 edges, and it represents the citation network between computer science arXiv papers. The node features are 128-dimensional embeddings of the title and abstract of each paper (Mikolov et al., 2013). The objective is to predict which of the 40 categories the paper belongs to. In the case of OGBN-Products, the graph represents products that were bought together on an online marketplace. There are 2,449,029 nodes, each of which is a product, and 61,859,140 edges, which represent that the products were bought together. Feature vectors are 100-dimensional, and the objective is to classify each node into 47 categories.

For each dataset, we partition the graph at random and using METIS partitioning (Karypis and Kumar, 1997), and distribute it over {2, 4, 8, 16} machines. In all cases, we trained for 300 epochs. We benchmarked VARCO against full communication, no intermediate communication, and fixed compression for {2, 4, 8, 16, 32, 64} fixed compression ratios. For the GNNs, we used a 3-layer GNN with 256 hidden units per layer, ReLU non-linearity, and SAGE convolution (Hamilton et al., 2017). For VARCO, we used a linear compression schedule with slopes {2, 3, 4, 5, 6, 7}, and maximum and minimum compression ratios of 128 and 1, respectively (see Appendix A.1). We empirically validate the claims that we put forward: that our method (i) attains the same accuracy as the one trained with full communication, (ii) is more efficient in terms of communication, and (iii) is robust to the choice of the scheduler.

## 5.1 Accuracy

We report the accuracy over the unseen data, frequently denoted test accuracy. In terms of accuracy, we compare the performance of our variable compression algorithm against no communication, full communication, and fixed compression ratios. In Figure 3 we show the accuracy as a function of the number of servers for the different baselines considered. We show results for both random (Figure 3a) and METIS (Figure 3b) partitioning for the Arxiv dataset, and random partitioning for the Products dataset (Figure 3c). As can be seen in all three plots, the variable compression attains a comparable performance to the one with full communication.
Also, the fixed compression scheme is not able to recover the full communication accuracy when the number of servers increases. Consistent with Proposition 1, as the fixed compression increases, the accuracy attained by the GNN decreases. We study the accuracy as a function of the number of epochs with 16 servers. This setup is the most challenging, given that the size of the graph in each server is the smallest, and it is therefore the one in which communication is needed the most. In Figure 4, we show the accuracy per epoch for the Arxiv dataset. As can be seen in both random (4a) and METIS (4b) partitioning, the accuracy of variable compression is comparable to the one with full communication. Also, the different fixed compression mechanisms have a worse performance (by 10% and 3% in random and METIS partitioning, respectively), and their performance degrades as their fixed compression increases. In Figure 5, we plot the results for the Products dataset with 16 servers. Again, in both partitioning schemes, our variable compression algorithm attains a comparable performance to the one trained on full communication. In this case, compared to the Arxiv dataset (Figure 4), the spread of the results is smaller, and the effect of our method is less significant. This is related to the fact that the graph is larger, and therefore, the partitions are also larger. In all, for both partitioning methods and both datasets, we can validate that the variable compression mechanism attains a comparable performance to the one trained on full communication, which is not the case for fixed compression.

## 5.2 Efficiency

In terms of efficiency, we can plot the accuracy as a function of the number of floating point numbers communicated between servers. The fixed compression and full communication schemes communicate a constant number of bytes in each round of communication. This number is proportional to the cross-edges between machines, multiplied by the compression coefficient, which in the case of full communication is one. Our method is particularly useful given that at the early stages of training, fewer bytes of communication are needed, and the GNN can be trained with local data only. Intuitively, all learning curves show a similar slope at the beginning of training, and they converge to different values in the later stages. In Figure 6 we corroborate that our method attains the best accuracy as a function of bytes communicated throughout the full trajectory. Given that the VARCO curve in Figure 6 is above all other curves, for any communication budget (i.e., number of bits), VARCO obtains the best accuracy of all methods considered. This indeed validates our claim that using VARCO is an efficient way of training a GNN.

![9_image_0.png](9_image_0.png)

Figure 6: Accuracy per floating point numbers communicated for the Products dataset with 16 servers.

![9_image_1.png](9_image_1.png)

Figure 7: Accuracy per epoch with 16 servers for different variable compression schemes.

## 5.3 Robustness

In Figure 7 we validate the robustness of our method to the choice of compression rate. Using a linear compression rate at epoch e, with $c_{min}=1$, $c_{max}=128$, $E=300$, and compression rate $c(e)=\max\left(c_{max}-a\,\frac{c_{max}-c_{min}}{E}\,e,\;c_{min}\right)$, we vary the slope a of the scheduler and verify that the algorithm attains a comparable performance for all runs. This is consistent with Proposition 2, as we only require that the scheduler decreases in every iteration. In Appendix A, the equations that govern the compression rate are described.
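As a concrete illustration of the compression primitive itself, whose exact description is deferred to Appendix A, the following is a speculative sketch assuming one plausible instantiation: the sender communicates roughly a 1/r fraction of the activation entries, and the receiver fills the entries that were not communicated with zeros. The function names and the random entry selection are illustrative assumptions, not the paper's algorithm.

```python
# Speculative sketch of a compression step: keep a ~1/r fraction of activation
# entries and zero-fill the rest on the receiving machine. Assumed for
# illustration; the exact mechanism is described in Appendix A.
import numpy as np

def compress(h: np.ndarray, r: float, rng: np.random.Generator):
    """Select the entries to communicate (each kept with probability 1/r)."""
    mask = rng.random(h.shape) < 1.0 / r
    idx = np.nonzero(mask)
    return idx, h[idx]

def decompress(idx, vals, shape):
    """Reconstruct the activations, filling uncommunicated entries with zeros."""
    h_hat = np.zeros(shape, dtype=vals.dtype)
    h_hat[idx] = vals
    return h_hat

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))              # activations of 4 boundary nodes
idx, vals = compress(h, r=4.0, rng=rng)  # roughly a quarter of entries sent
h_hat = decompress(idx, vals, h.shape)
```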
## 6 Conclusion

In this paper, we presented a distributed method to train GNNs by compressing the activations and reducing the overall communication. We showed that our method converges to a neighborhood of the optimal solution, while computing gradient estimators that communicate fewer bytes. Theoretically, we showed that by increasing the number of bits communicated (i.e., decreasing the compression ratio) as epochs evolve, we can decrease the loss throughout the whole training trajectory. We also showed that our method only requires the compression ratio to decrease in every epoch, without the need for any other information about the training process. Empirically, we benchmarked our algorithm on two real-world datasets and showed that it obtains a GNN that attains a comparable performance to the one trained on full communication, at a fraction of the communication cost.

## References

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. *AI Open*, 1:57–81, 2020.

Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 32(1):4–24, 2020.

Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. *IEEE Signal Processing Magazine*, 34(4):18–42, 2017.

Fernando Gama, Antonio G Marques, Geert Leus, and Alejandro Ribeiro. Convolutional neural network architectures for signals supported on graphs. *IEEE Transactions on Signal Processing*, 67(4):1034–1049, 2018.

Fernando Gama, Qingbiao Li, Ekaterina Tolstaya, Amanda Prorok, and Alejandro Ribeiro. Decentralized control with graph neural networks. *arXiv preprint arXiv:2012.14906*, 2020a.

Mariliza Tzes, Nikolaos Bousias, Evangelos Chatzipantazis, and George J Pappas. Graph neural networks for multi-robot active information acquisition. In *2023 IEEE International Conference on Robotics and Automation (ICRA)*, pages 3497–3503. IEEE, 2023.

Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Alexander Pritzel, Suman Ravuri, Timo Ewalds, Ferran Alet, Zach Eaton-Rosen, et al. Graphcast: Learning skillful medium-range global weather forecasting. *arXiv preprint arXiv:2212.12794*, 2022.

John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589, 2021.

Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, and Peter Battaglia. Multiscale meshgraphnets. In *ICML 2022 2nd AI for Science Workshop*, 2022.

Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. *Advances in Neural Information Processing Systems*, 32, 2019.

Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In *International Conference on Machine Learning*, pages 9323–9332. PMLR, 2021.

Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. *IEEE Transactions on Signal Processing*, 68:5680–5695, 2020b.

Luana Ruiz, Luiz Chamon, and Alejandro Ribeiro. Graphon neural networks and the transferability of graph neural networks. *Advances in Neural Information Processing Systems*, 33:1702–1712, 2020.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In *International Conference on Learning Representations*, 2018.

Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(1):657–668, 2022.

Charilaos I Kanatsoulis and Alejandro Ribeiro. Graph neural networks are more powerful than we think. *arXiv preprint arXiv:2205.09801*, 2022.

Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with gnns. *arXiv preprint arXiv:1905.12560*, 2019.

Sohir Maskey, Ron Levie, and Gitta Kutyniok. Transferability of graph neural networks: an extended graphon approach. *Applied and Computational Harmonic Analysis*, 63:48–83, 2023.

Sohir Maskey, Ron Levie, Yunseok Lee, and Gitta Kutyniok. Generalization analysis of message passing neural networks on large random graphs. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, *Advances in Neural Information Processing Systems*, volume 35, pages 4805–4817. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/1eeaae7c89d9484926db6974b6ece564-Paper-Conference.pdf.

Juan Cerviño, Luana Ruiz, and Alejandro Ribeiro. Learning by transference: Training graph neural networks on growing graphs. *IEEE Transactions on Signal Processing*, 71:233–247, 2023.

Juan Cerviño, Luana Ruiz, and Alejandro Ribeiro. Training graph neural networks on growing stochastic graphs. In *ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 1–5, 2023. doi: 10.1109/ICASSP49357.2023.10094894.

Zhenkun Cai, Xiao Yan, Yidi Wu, Kaihao Ma, James Cheng, and Fan Yu. Dgcl: an efficient communication library for distributed gnn training. In *Proceedings of the Sixteenth European Conference on Computer Systems*, pages 130–144, 2021.

Lei Wang, Qiang Yin, Chao Tian, Jianbang Yang, Rong Chen, Wenyuan Yu, Zihang Yao, and Jingren Zhou. Flexgraph: a flexible and efficient distributed framework for gnn training. In *Proceedings of the Sixteenth European Conference on Computer Systems*, pages 67–82, 2021.

Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, and George Karypis. Distdgl: distributed graph neural network training for billion-scale graphs. In *2020 IEEE/ACM 10th Workshop on Irregular Applications: Architectures and Algorithms (IA3)*, pages 36–44. IEEE, 2020.

Qiange Wang, Yanfeng Zhang, Hao Wang, Chaoyi Chen, Xiaodong Zhang, and Ge Yu. Neutronstar: distributed gnn training with hybrid dependency management. In *Proceedings of the 2022 International Conference on Management of Data*, pages 1301–1315, 2022.

Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konečný, Stefano Mazzocchi, Brendan McMahan, et al. Towards federated learning at scale: System design. *Proceedings of Machine Learning and Systems*, 1:374–388, 2019.

Li Li, Yuxi Fan, Mike Tse, and Kuo-Yi Lin. A review of applications in federated learning. *Computers & Industrial Engineering*, 149:106854, 2020a.

Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020b.
Zebang Shen, Juan Cervino, Hamed Hassani, and Alejandro Ribeiro. An agnostic approach to federated learning with class imbalance. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=Xo0lbDt975. Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S Yu, Yu Rong, et al. Fedgraphnn: A federated learning system and benchmark for graph neural networks. *arXiv preprint arXiv:2104.07145*, 2021. Guangxu Mei, Ziyu Guo, Shijun Liu, and Li Pan. Sgnn: A graph neural network based federated learning approach by hiding structure. In *2019 IEEE International Conference on Big Data (Big Data)*, pages 2560–2568. IEEE, 2019. George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on scientific Computing, 20(1):359–392, 1998. Tim Kaler, Alexandros Iliopoulos, Philip Murzynowski, Tao Schardl, Charles E Leiserson, and Jie Chen. Communication-efficient graph neural networks with probabilistic neighborhood expansion analysis and caching. *Proceedings of Machine Learning and Systems*, 5, 2023. Hesham Mostafa. Sequential aggregation and rematerialization: Distributed full-batch training of graph neural networks on large graphs. *Proceedings of Machine Learning and Systems*, 4:265–275, 2022. Hesham Mostafa, Adam Grabowski, Md Asadullah Turja, Juan Cervino, Alejandro Ribeiro, and Nageen Himayat. Fastsample: Accelerating distributed graph neural network training for billion-scale graphs. arXiv preprint arXiv:2311.17847, 2023. Vasimuddin Md, Sanchit Misra, Guixiang Ma, Ramanarayan Mohanty, Evangelos Georganas, Alexander Heinecke, Dhiraj Kalamkar, Nesreen K Ahmed, and Sasikanth Avancha. Distgnn: Scalable distributed training for large-scale graph neural networks. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–14, 2021. Yuxin Ma, Ping Gong, Jun Yi, Zhewei Yao, Cheng Li, Yuxiong He, and Feng Yan. Bifeat: Supercharge gnn training via graph feature quantization. *arXiv preprint arXiv:2207.14696*, 2022. Mehdi Bahri, Gaétan Bahl, and Stefanos Zafeiriou. Binary graph neural networks. In *Proceedings of the* IEEE/CVF conference on computer vision and pattern recognition, pages 9492–9501, 2021. Mucong Ding, Kezhi Kong, Jingling Li, Chen Zhu, John Dickerson, Furong Huang, and Tom Goldstein. Vq-gnn: A universal framework to scale up graph neural networks using vector quantization. Advances in Neural Information Processing Systems, 34:6733–6746, 2021. Boyuan Feng, Yuke Wang, Xu Li, Shu Yang, Xueqiao Peng, and Yufei Ding. Sgquant: Squeezing the last bit on graph neural networks with specialized quantization. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), pages 1044–1052. IEEE, 2020. Shyam Anil Tailor, Javier Fernandez-Marques, and Nicholas Donald Lane. Degree-quant: Quantization-aware training for graph neural networks. In *International Conference on Learning Representations*, 2020. Borui Wan, Juntao Zhao, and Chuan Wu. Adaptive message quantization and parallelization for distributed full-graph gnn training. *Proceedings of Machine Learning and Systems*, 5, 2023. Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, and Ren Chen. Decoupling the depth and scope of graph neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. 
Wortman Vaughan, editors, *Advances in Neural Information Processing Systems*, volume 34, pages 19665–19679. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/ a378383b89e6719e15cd1aa45478627c-Paper.pdf. Youhui Bai, Cheng Li, Zhiqi Lin, Yufei Wu, Youshan Miao, Yunxin Liu, and Yinlong Xu. Efficient data loader for fast sampling-based gnn training on large graphs. IEEE Transactions on Parallel and Distributed Systems, 32(10):2541–2556, 2021. Marco Serafini and Hui Guan. Scalable graph neural network training: The case for sampling. *ACM SIGOPS* Operating Systems Review, 55(1):68–76, 2021. Yang Liu, Xiang Ao, Zidi Qin, Jianfeng Chi, Jinghua Feng, Hao Yang, and Qing He. Pick and choose: a gnn-based imbalanced learning approach for fraud detection. In *Proceedings of the web conference 2021*, pages 3168–3177, 2021. Vladimir Vapnik. *The nature of statistical learning theory*. Springer science & business media, 2013. Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of lipschitz constants for deep neural networks. *Advances in Neural Information* Processing Systems, 32:11427–11438, 2019. Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. Microsoft academic graph: When experts are not enough. *Quantitative Science Studies*, 1(1):396–413, 2020. K. Bhatia, K. Dahiya, H. Jain, P. Kar, A. Mittal, Y. Prabhu, and M. Varma. The extreme classification repository: Multi-label datasets and code, 2016. URL http://manikvarma.org/downloads/XC/XMLRepository. html. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, *Advances in Neural Information Processing Systems*, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper_files/paper/2013/file/ 9aa42b31882ec039965f3c4923ce901b-Paper.pdf. George Karypis and Vipin Kumar. Metis: A software package for partitioning unstructured graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices. 1997. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. *Advances* in neural information processing systems, 30, 2017. Dimitri P Bertsekas and John N Tsitsiklis. Gradient convergence in gradient methods with errors. *SIAM* Journal on Optimization, 10(3):627–642, 2000. Rick Durrett. *Probability: Theory and Examples*. Cambridge University Press, 2019.
Review 1:

Summary: This work aims to bring the performance of distributed-trained GNNs close to that of full-communication GNNs by proposing varying communication rates (or compression errors) between servers. It provides experimental results for the proposed method, together with theoretical arguments about the gap between convergence under fixed and under varying communication rates.

Strengths and Weaknesses:

Pros:
1. The motivation and methodology look good.
1. In Figures 4 and 5, the experimental benefit of the proposed method is clear against all baselines. (See Concern 2 below for more details.)

Concerns:
1. As pointed out in Section 5.1, the benefit of the proposed method is less clear on larger graphs (OGBN-Products) than on smaller graphs (OGBN-Arxiv). Does this phenomenon question the motivation of this work? Or does this imply more servers are necessary for larger graphs?
1. For Figures 4 and 5, some curves are not converging to stable values at epoch 300. Although it might be expensive, it would be good to train all variants to stationary points for a fair comparison.
1. Definition 1 includes a first-order compression error $\delta$, but it seems not to appear in any following discussion outside of the definition. It would be good to clarify this term.
1. Is there any insight about how to ensure that the key assumption in Prop 1 holds? If I understand correctly, the first two terms on the RHS seem to be constant, which means the guarantee of getting close to a stationary point is of order $O(1)$.
1. The proof of Prop 1 in Appendix D may need further improvement:
   1. In Lemma 1 and Lemma 2, the inputs are lower-case $x_1,x_2$. Do these stand for the input of a single node or of multiple nodes? From Eq(2), the GNN's input is defined as the features of all nodes in the computation tree, as an upper-case $X$. So lower-case $x$'s are not correct as the input of a GNN, as the $L$-layer GNN computes on multiple nodes.
   1. In Eq(15), is the gradient of $\rho(x)$ assumed to be 1 for any $x$?
   1. From Eq(18,19) to (20), a term of $\\|x_1\\|,\\|x_2\\|$ is missing.
   1. In Eq(27,28), which order of norms of these tensors is used?
   1. From Eq(45) to (46), it is not guaranteed that $\\|a+b\\|^2 \ge $ or $\le \\|a\\|^2 + \\|b\\|^2$, which depends on the angle between $a$ and $b$.
   1. In Eq(47), the last term is not zero, because $\mathbb{E}[a] = \mathbb{E}[b]$ does not imply $\mathbb{E}[a^2] = \mathbb{E}[b^2]$.
1. In Prop 2, is the upper bound still a constant? It would be good to clarify how this can be made small.
1. (very minor) The figures should use colors and line styles that are friendlier to readers.

Typos:
1. Under Eq(2), it should be $X=[x_0,x_1,\dots,x_n]^{\top}\in\mathbb{R}^{n \times F_0}$.
1. In Eq(15), $\nabla_{k_v}$ should be $\nabla_{h_v}$.
1. In Eq(24,25), the subscripts of $H_1, H_2$ are missing.
1. In Prop 1 and 2, $\epsilon$ should refer to Definition 1 instead of 3.

Requested Changes: Please see the above concerns, for both experiments and theory.

Broader Impact Concerns: No additional concern is needed.

==================================================

Review 2:

Summary: This paper presents a distributed GNN training algorithm based on graph partitioning. The proposed method optimizes the training cost by compressing the neighbor information (intermediate activations) to be exchanged among servers in each GNN layer. Theoretical analysis bounds the GNN convergence quality by the compression error.
Motivated by the intuition that accurate neighbor aggregation is more desirable as the model converges, a variable compression scheme is proposed that gradually decreases the compression ratio as training progresses. Experiments on two standard benchmarks are conducted to validate the accuracy and efficiency of the proposed approach.

Strengths and Weaknesses:

## Strengths
+ The paper is in general well written, with clear motivation & sufficient background.
+ The experiments are evaluated under a number of different settings & metrics (e.g., different numbers of machines & training epochs, etc).
+ Theoretical analysis has been performed to deepen the technical strength.

## Weaknesses
- The compression & decompression algorithms used in this paper are not clearly defined. From my understanding, compression might mean aggregating local neighbors without visiting remote ones. If this is the case, how would decompression work? In addition, for an MPNN, each layer only aggregates 1-hop neighbors. Then the node should send its own activation to remote neighbors. Why is an additional local aggregation needed?
- The connection between the theoretical results and the model design is weak. The 1st theoretical result shows that the model converges to a better solution when the compression error is smaller. This conclusion is intuitive, but it does not help us design the compression algorithm to achieve such better convergence quality. The 2nd theoretical result shows that a schedule with a monotonically decreasing compression rate is better. It seems to hold for any monotonically decreasing scheme, and does not justify the specific one used in the experiments.
- The experiments are only conducted on relatively small graphs. A single GPU can easily hold the full graph of ogbn-arxiv and ogbn-products. This experimental setting differs from the realistic scenario that motivates the paper.
- The experimental metric is unclear. E.g., what does "accuracy" mean for all the convergence curves? Training accuracy? Test accuracy?
- In the motivation, it should be noted that aggregating from all neighbors may not necessarily produce the best result. Some works (e.g., [1]) deliberately aggregate from a subset of the most important neighbors to achieve higher accuracy than full neighbor aggregation.

-----

[1] Decoupling the depth and scope of graph neural networks. In NeurIPS 2021.

Requested Changes: Please clarify the points in the weaknesses. In addition, the background could be made a bit more concise (e.g., empirical risk minimization is somewhat off-topic).

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This paper's objective lies in the area of large-scale training of Graph Neural Networks (GNNs), specifically in distributed learning through the utilization of multiple compute nodes (i.e., machines). The authors propose a new approach with the goal of optimizing communication efficiency. The key contribution is the development of a variable compression scheme, called VARCO, which dynamically adjusts compression rates during training to balance communication overhead and model accuracy. Along with the methodological contribution, the authors present theoretical results on how their method relates to a full-graph training instance. Specifically, they show that the variable compression scheme achieves tighter convergence bounds (Equations 7 and 8) than the fixed compression one, bounds that are also independent of the compression error $\epsilon(t)$.
Finally, the VARCO method is benchmarked on two datasets (a small-size one, ogbn-arxiv: #nodes < 200k, #edges $\approx$ 1m; and a mid-size one, ogbn-products: #nodes $\approx$ 2.5m, #edges $\approx$ 60m) against full communication between the compute nodes, no communication, intermediate communication, and fixed compression rates. The authors show that their method performs similarly to full-communication training, while being more efficient with respect to the communication rate.

Strengths and Weaknesses:

***Strengths***
1. Interesting approach on variable communication rates that iteratively updates the compression rates of the workers.
2. Solid theoretical results, indicating the contribution of a variable communication schema. Specifically, the key contribution makes the convergence bound independent of the compression error.
3. Empirical success in maintaining accuracy compared to the full communication of learning workers. Also, the method is able to reduce communication costs by attaining strong accuracies with fewer floating point numbers communicated.
4. The paper is clear and well-written.

***Weaknesses***
See the ***Requested Changes*** review section for an elaborate list of my questions and suggestions. Below, I present a brief list of the same points:
1. It is unclear how the graph characteristics affect the variable compression efficiency. The paper benchmarks two datasets with varied density, on which the presented method and the fixed compression one perform differently. It would be very insightful to have a better view of how VARCO converges with respect to the density and the size of the graph.
2. Only one variant of GNNs is studied. It would be interesting to showcase any generalization capabilities of VARCO for diverse GNN types.
3. Related strong baselines are missing. See the next section for more information.

Requested Changes:

Questions:
1. To my knowledge, the default versions of ogbn-arxiv and ogbn-products are directed and undirected graphs, respectively. Can the authors clarify whether they used the default versions or not? In the case of a **directed** graph, the states of a pair of adjacent nodes (connected by a single directed edge) would not be updated in the same way, because one edge direction is missing. Would that affect the effectiveness of the third and fourth steps of Algorithm 1?
2. Figures 4, 5, and 6 show a couple of patterns between the sparser (products) and the denser (arxiv) graph. Specifically, the no-communication case reasonably performs better than the higher fixed compression rates (Figure 5), mainly due to the small number of adjacent communication messages required. But it is also interesting how the partitioning affects the performance of the methods. It would be very useful to draw an analysis of the convergence of the methods **with respect to the density** of the graphs. Probably a case with degenerate, extremely sparse and extremely dense graphs (even of small size) would be very insightful.
3. The authors utilize a specific line of GNNs for their methodology, specifically the convolution-based one. Can the authors mention, if possible, how Algorithm 1 would change in the case of more sophisticated architectures, e.g., attention-based GNNs? If this is not possible, it should at least be mentioned in the paper that the methodology would require modifications for training more sophisticated architectures.
4.
Although I am not well educated in the area of distributed GNN training, I was able to find a couple of relevant works in the area of communication-efficient distributed GNN training. The authors should at least mention them and indicate their differences:
a. SALIENT++ paper: https://proceedings.mlsys.org/paper_files/paper/2023/file/74e22712c9b50a9b43b2ae54e225888e-Paper-mlsys2023.pdf.
b. ADAQP paper: https://i.cs.hku.hk/~cwu/papers/brwan-mlsys23.pdf
c. DistGNN paper: https://aiichironakano.github.io/cs653/Vasimuddin-DistGNN-SC21.pdf

Broader Impact Concerns: There is no clear concern on the ethical implications of the work that would require a Broader Impact Statement.

==================================================

Metareview:

Recommendation: Reject

Comment: I'm recommending a rejection to encourage the authors to keep improving their method and really demonstrate that the gains in efficiency are worth the cost, e.g., additional complexity in the system, unintuitive training curves, additional things to tune, etc. The description of the compression algorithm in the first paragraph of Appendix A should really be moved to the main paper; without this, it is hard for a reader to have a concrete understanding of what the compression algorithm is doing. The compression algorithm might also be improved: instead of filling in 0s for the entries not communicated, maybe some kind of moving average can be kept on each machine and used instead of 0s for those missing features. This might improve your training stability a bit.

==================================================
# Fair Feature Importance Scores For Interpreting Decision Trees

Camille Olivia Little∗ *col1@rice.edu*
Department of Electrical and Computer Engineering
Rice University

Debolina Halder Lina∗ *dl73@rice.edu*
Department of Computer Science
Rice University

Genevera Allen *genevera.allen@columbia.edu*
Department of Statistics
Columbia University

Reviewed on OpenReview: *https://openreview.net/forum?id=72mDxlzRZ1&noteId=Zu6WJGo8AJ*

## Abstract

Across various sectors such as healthcare, criminal justice, national security, finance, and technology, large-scale machine learning (ML) systems are being deployed to make critical data-driven decisions. Many have asked whether we can, and should, trust these ML systems to be making these decisions. Two critical components are prerequisites for trust in ML systems: interpretability, or the ability to understand why the ML system makes the decisions it does, and fairness, which ensures that ML systems do not exhibit bias against certain individuals or groups. While both interpretability and fairness have garnered substantial attention in the ML literature, methods directly interpreting models in terms of fairness remain limited. This paper considers a popular interpretation for a widely used class of ML models: feature importance scores for decision trees and tree-based models. We introduce a novel *Fair Tree Feature Importance Score* to assess each feature's impact on fairness or bias in decision trees. Analogous to the mean decrease in impurity for trees, our score quantifies the mean increase (or decrease) in group bias, and extends to interpret tree-based ensembles or surrogates of complex ML systems. Through simulations and real examples on benchmark fairness datasets, we show the validity of our Fair Tree Feature Importance Score, offering meaningful interpretations for both tree-based ensembles and tree-based surrogates of other ML systems.

## 1 Introduction

The adoption of machine learning models in high-stakes decision-making has witnessed a remarkable surge in recent years. Employing these models to assist in human decision processes offers significant advantages, such as managing vast datasets and uncovering subtle trends and patterns. However, it has become increasingly evident that the utilization of these models can lead to biased outcomes. Even when users can discern bias within the model's results, they frequently encounter substantial hurdles when attempting to rectify this bias, primarily due to their inability to comprehend the inner workings of the model and the factors contributing to its bias. When machine learning models impact high-stakes decisions, trust is paramount (Toreini et al., 2020; Rasheed et al., 2022; Broderick et al., 2023). Users, stakeholders, and the general public need to have confidence in the fairness and interpretability of these models. Without comprehensible explanations and the ability to audit model decisions, trust can degrade rapidly. An incident with the Apple Credit Card in 2019 is a prime example of this. The wife of a long-time married couple applied for an increased credit limit for her card (Vigdor, 2019). Despite having a better credit score and other positive factors in her favor, her application for an increased line of credit was denied. The husband, who had filed taxes together with his wife for years, wondered why he deserved a credit limit 20 times that of his wife.
When the couple inquired as to why the credit limits were so different, no one was able to explain the decision to them, which created consternation amongst these and other clients on social media who also demanded explanations (Knight, 2019). This led to an investigation by the New York State Department of Financial Services. While the investigation showed that the card did not discriminate based on gender (Campbell, 2021), the inability to provide an interpretation or explanation of the fairness of the algorithm used to determine credit limits created significant mistrust. Moving forward, we must have ways of interpreting ML systems based not only on the accuracy of their predictions but also on the fairness of their predictions. As a particular example, we have many ways to interpret how features affect a model's predictions through feature importance scores (Du et al., 2019; Murdoch et al., 2019). Yet, we have no current way of understanding how a feature affects the fairness of the model's predictions. The goal of this paper is to fill in this critical gap by developing a simple and interpretable fair feature importance score. Countless works have proposed methods to improve fairness in existing models (Zemel et al., 2013; Calmon et al., 2017; Agarwal et al., 2018; Zhang et al., 2018; Lohia et al., 2019; Caton & Haas, 2020), but few have focused on how to interpret models with regard to fairness. We adopt a simple approach and consider interpreting features in the context of fairness in decision trees. Why trees? For one, decision trees are the base learners for one of the most widely used ensemble algorithms, random forests (RFs). Renowned for their robust predictive capabilities, RFs provide an easily computable intrinsic feature importance score known as the mean decrease in impurity (MDI) (Breiman, 1973; Caruana et al., 2008). Beyond RFs, tree-based ensembles such as AdaBoost and XGBoost are widely used machine learning models, especially for tabular data (Fernández-Delgado et al., 2014). Moreover, trees have also more recently been proposed as interpretable surrogates for deep learning systems (Guidotti et al., 2018; Schaaf et al., 2019). In this work, we develop a straightforward and intuitive metric for calculating fair feature importance scores in decision trees. Our Fair Tree Feature Importance Score (*FairTreeFIS*) reveals which features lead to improvements in the fairness of a decision tree's predictions and which degrade fairness or contribute to the tree's bias. Additionally, we show how *FairTreeFIS* can be used to explain the fairness of predictions in tree-based ensembles and through tree-based surrogates of other complex ML systems.

## 1.1 Related Works

To promote trust, transparency, and accountability, there has been a surge in recent research on interpretable ML; see reviews of this literature by Ishwaran (2007); Kazemitabar et al. (2017); Lipton (2018) for more details. Interpretable ML (or explainable AI) seeks to provide human-understandable insights into the data, the model, or a model's output and decisions (Allen et al., 2023; Murdoch et al., 2019). One of the most popular interpretations is feature importance, which measures how each feature contributes to a model's predictions. There are a wide variety of model-specific feature importance measures, like the popular mean decrease in impurity (MDI) for decision trees (Louppe et al., 2013) or layer-wise relevance propagation (LRP) for deep learning (Samek et al., 2021), among many others.
Several model-agnostic measures of feature importance have also been proposed, including Shapley values, feature permutations, and feature occlusions (Mase et al., 2021; Chen et al., 2018; Lundberg et al., 2018). This work is inspired by the MDI metric for decision trees, which is widely used and has also been the subject of much recent research (Strobl et al., 2007; Li et al., 2019; Zhou & Hooker, 2021; Scornet, 2023). Another notable category of interpretability-enhancing techniques involves surrogate models. A surrogate model is a simplified and more interpretable representation of a complex, often black-box model (Samek & Müller, 2019). Surrogate models are designed to approximate the behavior of the original model while being easier to understand, faster to compute, or more suitable for specific tasks such as optimization, sensitivity analysis, or interpretability; examples include linear models, decision trees, and Gaussian processes. One of the most well-known surrogates for interpretability is LIME (Local Interpretable Model-Agnostic Explanations) (Ribeiro et al., 2016); this approach builds a simple and interpretable (usually linear) model to interpret a local sub-region of the input space. Global surrogates, on the other hand, build a second surrogate model to approximate the global behavior and all the predictions of the original model. Decision trees have been proposed as potential global surrogates as they are fast, simple, and interpretable, and as a fully grown decision tree can exactly reproduce the predictions of the original model on the training data (Blanco-Justicia & Domingo-Ferrer, 2019). On a related note, decision trees have played a crucial role in an associated field known as knowledge distillation, where simplified surrogates of complex models are crafted to mimic the complex model's predictions (Hinton et al., 2015; Gou et al., 2021). Although knowledge distillation focuses on prediction, it is worth noting that if predictions from surrogate decision trees prove to be accurate, they can also be harnessed for interpretation (Yang et al., 2018; Sagi & Rokach, 2021; Wan et al., 2020). Separate from interpretability, fairness is another critical component to promote trust in ML systems. There has been a surge of recent literature on fairness (Chouldechova & Roth, 2018; Friedler et al., 2019). And while many methods have been developed to mitigate bias in ML systems (Zhang et al., 2018; Grari et al., 2019; Agarwal et al., 2018), very few of these papers have additionally focused on interpretability. Yet, many have called for improving interpretability in the context of fairness (Jain et al., 2020; Agarwal, 2021; Dai et al., 2021; Wang et al., 2023). Notably, there are a few recent examples that seek to address this. Begley et al. (2020) introduce a new value function that measures fairness for use within Shapley values; although this is an interesting and relevant approach, no code is publicly available and computing these Shapley values requires significant computational time. Another relevant example is LimeOut (Bhargava et al., 2020), which uses LIME explanations to determine which features to drop to make a classifier fairer. This is a local and not a global method, however, and the focus is on selecting features, not directly interpreting them via a feature importance score. In this paper, we are motivated to address these issues by proposing a very simple, intuitive, fast, and easy-to-compute fair feature importance score.
## 1.2 Contributions

We make three major contributions that allow us to interpret a tree or tree-based model in terms of the fairness of its features. First, we propose and develop the first fair feature importance score (*FairTreeFIS*) for interpreting decision trees. Second, we outline how to use *FairTreeFIS* to interpret tree-based ensembles and tree-based global surrogates of complex ML systems. Finally, we empirically validate *FairTreeFIS* for interpreting trees, tree-based ensembles, and tree-based surrogates of deep learning models on both synthetic and benchmark datasets.

## 2 FairTreeFIS: Fair Feature Importance Score For Trees

## 2.1 Review: Feature Importance Score (TreeFIS) For Trees

One of the many benefits of decision trees is that they have a straightforward mechanism for interpretation. Perhaps the most popular feature importance score for trees is based on the Mean Decrease in Impurity (MDI), measured by a decrease in variance in regression or in the Gini Index or other metrics for classification (Breiman, 1973); we refer to the MDI as *TreeFIS* in this paper. Let us first introduce some notation to formally define and review *TreeFIS*; this definition will help us in defining our novel fairness version of *TreeFIS* in the next section. Suppose we have a response y and the decision tree is built from data X based on n samples. Additionally, let t = 0 be the root node of the tree and T be the total number of nodes in the tree; let $n_t$ be the number of samples falling in node t. Next, let $c_\ell(t)$ be the left child and $c_r(t)$ be the right child of node t. Let $S_t = \{i \in t\}$ be the set of samples belonging to node t, and let $y_{S_t}$ be the response associated with those samples in node t, which we denote as $y_t$ for ease of notation. Let $\hat{y}_t$ denote the predictions for samples in node t. As an example, for binary classification with $y \in \{0, 1\}$ or for regression with $y \in \mathbb{R}$, recall that $\hat{y}_t = \frac{1}{|S_t|}\sum_{i\in S_t} y_i$; that is, $\hat{y}_t$ is the proportion of successes in node t in the classification setting and the mean of node t in the regression setting. Additionally, let $w_t = \frac{n_t}{n}$ represent the weighted number of samples at node t and $\mathbb{1}_{\{(t,j)\}}$ denote the indicator that feature j was split upon in node t. Let $\mathcal{L}(y, \hat{y})$ be the loss function employed to build the decision tree (e.g., MSE loss for regression or the Gini Index or Cross Entropy for classification).

![3_image_0.png](3_image_0.png)

Figure 1: Schematic trees to illustrate *TreeFIS* and *FairTreeFIS*. Panel A illustrates the level of node t and the child level of t that are used to calculate *FairTreeFIS*. Panels B-D illustrate classification trees with pluses and minuses denoting positive and negative labels respectively and red and blue denoting the majority and minority groups respectively. Panels B and C show the *Bias* and weighted impurity (Gini Index) at node or level t and that of the children of node t. In Panel B, notice the *Bias* decreases between the parent and child level, resulting in a positive *FairTreeFIS* for that split. Differently, in Panel C, the *Bias* increases, resulting in a negative *FairTreeFIS* for that split. Panel D illustrates why we must use soft predictions versus hard labels when computing *FairTreeFIS*.

Now, we can formally define *TreeFIS*:

Definition 1.
*For a decision tree, the TreeFIS (MDI) for feature j is defined as:*

$$FIS_{j}=\sum_{t=0}^{T-1}\mathbb{1}_{\{(t,j)\}}\left(w_{t}\mathcal{L}(y_{t},\hat{y}_{t})-\big(w_{c_{\ell}(t)}\mathcal{L}(y_{c_{\ell}(t)},\hat{y}_{c_{\ell}(t)})+w_{c_{r}(t)}\mathcal{L}(y_{c_{r}(t)},\hat{y}_{c_{r}(t)})\big)\right)\tag{1}$$

If feature j is used to split node t, then *TreeFIS* calculates the change in the loss function before and after the split, or more precisely, the change in the loss between the predictions at node t and the predictions of node t's children. Hence, *TreeFIS* uses the accuracy of the predictions to determine feature importance.

## 2.2 FairTreeFIS

Inspired by *TreeFIS*, we seek to define a tree-based feature importance score for group fairness that is based upon the bias of the predictions instead of the accuracy of the predictions. To do this, we first need to define group bias measures. Let $z_i \in \{0, 1\}$ for i = 1, . . . , n be an indicator of the protected attribute (e.g., gender, race, etc.) for each observation. We propose to work with two popular metrics to measure group bias, Demographic Parity (DP) and Equality of Opportunity (EQOP), although we note that our framework is conducive to other group metrics as well. In brief, DP measures whether the predictions differ conditional on the protected attribute, whereas EQOP is typically only defined for classification tasks and measures whether the predictions differ conditioned on a positive outcome and the protected attribute (Hardt et al., 2016; Beutel et al., 2017). One might consider simply replacing the loss function in equation 1 with these bias metrics, but constructing our fair metric is not that simple. Consider that for *TreeFIS*, we can calculate the loss between $y_t$ and $\hat{y}_t$ for a particular node t; hence we can calculate the difference in loss after a split. We cannot use this same process, however, for bias, as the predictions in each node of the decision tree are the same by construction. Thus, for a given node t, there are never any differences between the predictions based on protected group status. Hence, the bias calculated at node t must always be zero. To remedy this and keep the same spirit as *TreeFIS*, we propose to consider the difference in bias between the split that produced node t and the split at node t that produces node t's children. Thus, we propose to calculate the bias that results from each split of the tree. To formalize this, notice that the result of each split in a tree is a right and left node. We call this set the level of the tree for node t and denote this as lev(t); this level includes the left and right nodes, denoted as $lev_\ell(t)$ and $lev_r(t)$ respectively. We also let c(t) denote all the children of t, or in other words, the child level of node t. Now, we can define our bias metrics for the split that produced node t, or in other words, the level of node t. The *Bias* of lev(t) in terms of DP and EQOP is defined as follows:

$$Bias^{DP}(lev(t))=\big|E(\hat{y}_{i}\mid z_{i}=1,i\in lev(t))-E(\hat{y}_{i}\mid z_{i}=0,i\in lev(t))\big|,\tag{2}$$

$$Bias^{EQOP}(lev(t))=\big|E(\hat{y}_{i}=1\mid y_{i}=1,z_{i}=1,i\in lev(t))-E(\hat{y}_{i}=1\mid y_{i}=1,z_{i}=0,i\in lev(t))\big|.\tag{3}$$

These group bias metrics range between zero and one, with higher values indicating larger amounts of bias in the predictions.
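To ground equations 2 and 3, the following is a minimal sketch (our illustrative names; binary y and z stored as NumPy arrays are assumed) of empirical versions of the two level-wise bias metrics, given predictions ŷ for the samples in a level:

```python
# Minimal sketch of empirical versions of the level-wise bias metrics in
# equations 2 and 3. y_hat: predictions for the samples in lev(t);
# z: protected attribute; y: true labels (needed only for EQOP).
import numpy as np

def bias_dp(y_hat, z):
    """|E[y_hat | z = 1] - E[y_hat | z = 0]| over the samples in the level."""
    if (z == 1).sum() == 0 or (z == 0).sum() == 0:
        return 0.0  # our convention: a level missing one group contributes no bias
    return abs(y_hat[z == 1].mean() - y_hat[z == 0].mean())

def bias_eqop(y_hat, y, z):
    """The same gap, restricted to samples with a positive outcome y = 1."""
    pos = y == 1
    return bias_dp(y_hat[pos], z[pos])
```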
Armed with these definitions, we now seek to replace the loss function L in *TreeFIS* with this *Bias* metric to obtain our *FairTreeFIS*. To do so, we calculate the difference in bias between the level of node t and node t's children:

Definition 2. *The FairTreeFIS for feature j is defined as:*

$$FairTreeFIS_{j}=\sum_{t=0}^{T-1}\mathbb{1}_{\{(t,j)\}}w_{t}\left(Bias(lev(t))-Bias(c(t))\right)\tag{4}$$

Note that at the root node, t = 0, the level of the tree consists of only the root; then, Bias(lev(0)) = 0 for this constant model at the root in our definition. Finally, as the scale of *TreeFIS* is not always interpretable, it is common to normalize *TreeFIS* so that it sums to one across all features. We analogously do so for *FairTreeFIS* by rescaling so that the sum of the absolute values across all features is one; in this manner, *FairTreeFIS* and *TreeFIS* are on the same scale and can be directly interpreted. Our *FairTreeFIS* formulation is an analogous extension of *TreeFIS*, as it calculates the *Bias* of the parent minus the *Bias* of the children, summed over all splits that split upon feature j. But unlike *TreeFIS*, which is always positive, *FairTreeFIS* can be both positive and negative. As decision trees are constructed with each split minimizing the loss, the difference in loss between parent and children is always positive. The splits do not consider *Bias*, however, so the *Bias* of the parent level could be higher or lower than that of the child level. Thus, *FairTreeFIS* will be positive when the split at node t improved the bias and negative when the split at node t made the bias worse. *FairTreeFIS* is then positive for features that improve the fairness (or decrease the bias) and negative for features that are less fair (or increase the bias). This is a particularly advantageous aspect of *FairTreeFIS* that improves the interpretability of each feature with respect to fairness. Figure 1 illustrates our *FairTreeFIS* definition for a binary classification example. Panel A highlights our notation and calculation of *Bias* for levels of the tree. In Panel B, the bias improves from the parent level to the child level and hence *FairTreeFIS* is positive, indicating the split improved the fairness of the predictions. The opposite happens in Panel C, where the bias is worse in the child level and hence *FairTreeFIS* is negative, indicating worsening fairness as a result of the split. In regression settings, *FairTreeFIS* can be easily applied with the demographic parity metric (equation 2), which is most commonly used for regression tasks. The bias can be calculated directly as the empirical mean of the predictions in each sensitive group. For classification settings, however, more care needs to be taken in computing the *Bias* and our *FairTreeFIS* metric, as discussed in the next section.

![5_image_0.png](5_image_0.png)

Figure 2: Classification results for *TreeFIS* (MDI using the Gini Index) and *FairTreeFIS* (DP) on the three major simulation types and for a decision tree, gradient boosting, and random forest classifier. We consider four feature groups: features in G1 (red) and G2 (blue) are correlated with the protected attribute, features in G1 and G3 (green) are signal, and features in G4 (purple) are noise. The magnitudes and directions of the *FairTreeFIS* scores for each group align with what we would expect from the simulation construction, thus validating our metric.
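As an illustration of Definition 2, below is a minimal sketch for a fitted scikit-learn regression tree with the DP bias, reusing `bias_dp` from the sketch above; classification would additionally require the probabilistic-tree correction of Section 2.3. The function `fair_tree_fis` and its internals are our illustrative names, not the authors' released implementation.

```python
# Illustrative sketch of Definition 2 (DP bias, regression tree); reuses
# bias_dp from the previous sketch. Not the authors' implementation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fair_tree_fis(model, X, z):
    t = model.tree_
    reach = model.decision_path(X).toarray().astype(bool)  # samples x nodes
    pred = t.value.squeeze()                               # per-node mean prediction
    parent, sibling = {0: None}, {}
    for v in range(t.node_count):
        l, r = t.children_left[v], t.children_right[v]
        if l != -1:                                        # internal node
            parent[l] = parent[r] = v
            sibling[l], sibling[r] = r, l
    scores = np.zeros(X.shape[1])
    for v in range(t.node_count):
        j = t.feature[v]
        if j < 0:                                          # leaf: no split here
            continue
        l, r = t.children_left[v], t.children_right[v]
        if parent[v] is None:                              # Bias(lev(0)) = 0 at the root
            bias_lev = 0.0
        else:                                              # lev(v): v and its sibling
            s = sibling[v]
            in_lev = reach[:, v] | reach[:, s]
            y_hat = np.where(reach[:, v], pred[v], pred[s])[in_lev]
            bias_lev = bias_dp(y_hat, z[in_lev])
        in_ch = reach[:, l] | reach[:, r]                  # c(v): v's two children
        y_hat_ch = np.where(reach[:, l], pred[l], pred[r])[in_ch]
        bias_ch = bias_dp(y_hat_ch, z[in_ch])
        w = t.n_node_samples[v] / X.shape[0]
        scores[j] += w * (bias_lev - bias_ch)
    return scores / np.abs(scores).sum()                   # |scores| sum to one

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); z = rng.binomial(1, 0.2, 200)
y = X[:, 0] + 2 * z + rng.normal(size=200)
model = DecisionTreeRegressor(max_depth=4).fit(X, y)
print(fair_tree_fis(model, X, z))
```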
## 2.3 FairTreeFIS: Classification Setting

Typically for classification tasks, people use hard label predictions to compute the DP and EQOP *Bias* metrics. However, for decision trees, this presents a problem, as both the left and right nodes of level t could predict the same hard label; the parent and child levels could also predict the same hard label. In these settings, using hard labels with equation 2 and equation 3 would result in a zero or misleading *Bias* measure even when the split might be unfair. This phenomenon is illustrated in Figure 1 Panel D. To remedy this issue, we are left with two options: employ measures of *Bias* that take soft predictions or employ probabilistic decision trees that return stochastic hard label predictions based on the soft label probabilities. So that our *Bias* metrics are interpretable and comparable with others that typically employ hard label predictions, we choose the latter option. Let $lev_\ell(t)$ and $lev_r(t)$ denote the left and right nodes of the level of node t and let $\pi_{lev_\ell(t)}$ and $\pi_{lev_r(t)}$ denote the proportion of positive samples in these nodes, respectively. Then for probabilistic trees, $\hat{y}_i$ for $i \in lev_\ell(t)$ is a Bernoulli random variable with probability of success $\pi_{lev_\ell(t)}$, and that of the right node is defined analogously. Given this, we can directly apply equation 2 and equation 3 to compute the expectation necessary for our *Bias* metrics:

Proposition 1. *Consider binary classification with probabilistic trees; then our Bias measures are given by the following:*

$$Bias^{DP}(lev(t))=\left|\pi_{lev_{\ell}(t)}\left(\frac{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,i\in lev_{\ell}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,i\in lev(t)\}}}-\frac{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,i\in lev_{\ell}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,i\in lev(t)\}}}\right)+\pi_{lev_{r}(t)}\left(\frac{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,i\in lev_{r}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,i\in lev(t)\}}}-\frac{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,i\in lev_{r}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,i\in lev(t)\}}}\right)\right|,\tag{5}$$

$$Bias^{EQOP}(lev(t))=\left|\pi_{lev_{\ell}(t)}\left(\frac{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,y_{i}=1,\,i\in lev_{\ell}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,y_{i}=1,\,i\in lev(t)\}}}-\frac{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,y_{i}=1,\,i\in lev_{\ell}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,y_{i}=1,\,i\in lev(t)\}}}\right)+\pi_{lev_{r}(t)}\left(\frac{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,y_{i}=1,\,i\in lev_{r}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=1,\,y_{i}=1,\,i\in lev(t)\}}}-\frac{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,y_{i}=1,\,i\in lev_{r}(t)\}}}{\sum_{i}\mathbb{1}_{\{z_{i}=0,\,y_{i}=1,\,i\in lev(t)\}}}\right)\right|.\tag{6}$$

Thus, even when employing probabilistic trees, our *Bias* measures and hence *FairTreeFIS* are easy to compute. The proof / calculation for Proposition 1 is in the Supplemental materials. Note also that these results for the *Bias* and *FairTreeFIS* can easily be extended to multi-class classification settings, which we present in the Supplemental materials.

## 2.4 FairTreeFIS For Tree-Based Ensembles And Decision Tree Global Surrogates

Decision trees are widely used due to their ability to break down complex problems into simpler solutions, thus making them more interpretable (Loh, 2011). Further, they are commonly employed in various popular ensemble-based classifiers such as random forest, gradient boosting, XGBoost, and others. For these tree-based ensembles, *TreeFIS* is averaged (or averaged with weights) over all the trees in the ensemble (Breiman, 1996). We propose to extend *FairTreeFIS* in the exact same manner to interpret all tree-based ensembles. Decision trees have also gained attention for their role in knowledge distillation, transferring knowledge from large, complex models to smaller models that are easier to deploy (Hinton et al., 2015; Buciluǎ et al., 2006). Here, decision trees are not fit to the original labels or outcomes, but instead to the complex model's predicted labels or outcomes.
Recently, others have proposed to use decision trees in a similar manner as global interpretation surrogates (Blanco-Justicia & Domingo-Ferrer, 2019; Yang et al., 2018; Sagi & Rokach, 2021; Wan et al., 2020). Decision trees are often an ideal surrogate in this scenario, as a fully grown tree can exactly reproduce the predictions of the complex, black-box model. Hence, if the predictions match precisely, we can be more confident in the feature interpretations that the decision tree surrogate produces. Here, we propose to employ *FairTreeFIS* to interpret features in a decision tree surrogate in the exact same manner as that of *TreeFIS*. In this way, *FairTreeFIS* provides a simple, intuitive, and computationally efficient way to interpret any large, complex, black-box ML system.

## 3 Empirical Studies

## 3.1 Simulation Setup And Results

We design simulation studies to validate our proposed *FairTreeFIS* metric; these simulations are an important test since there are no other comparable fair feature interpretation methods to which we can compare our approach. We work with four groups of features: features in G1 and G2 are correlated with the protected attribute z and are hence biased, features in G1 and G3 are signal features associated with the outcome y, and features in G4 are purely noise. We simulate the protected attribute $z_i \stackrel{i.i.d.}{\sim} Bernoulli(\pi)$ and take π = 0.2. Then, the data is generated as $x_{i,j} \sim N(\alpha_j z_i, \Sigma)$ with $\alpha_j = 2$ if j ∈ G1 or G2 and $\alpha_j = 0$ if j ∈ G3 or G4 in the classification scenarios, and $\alpha_j = 0.4$ for j ∈ G1 or G2 and $\alpha_j = 0$ for j ∈ G3 or G4 in the regression simulations. Hence, all features in G1 and G2 are strongly associated with z and should be identified as biased features with a negative *FairTreeFIS*.

![7_image_0.png](7_image_0.png)

Figure 3: Regression results for *TreeFIS* (MDI using the Gini Index) and *FairTreeFIS* (DP) on the three major simulation types and for a decision tree regressor, gradient boosting regressor, and random forest regressor. We consider four feature groups: features in G1 (red) and G2 (blue) are correlated with the protected attribute, features in G1 and G3 (green) are signal, and features in G4 (purple) are noise. The magnitudes and directions of the *FairTreeFIS* scores for each group align with what we would expect from the simulation construction, thus validating our metric.

Then, we consider three major simulation scenarios for both classification and regression settings: a linear model where $f(x_i) = \beta_0 + \sum_{j=1}^{p}\beta_j x_{ij}$; a non-linear additive scenario where $f(x_i) = \beta_0 + \sum_{j=1}^{p}\beta_j \sin(x_{ij})$; and finally a non-linear scenario with pairwise interactions where $f(x_i) = \beta_0 + \sum_{j=1}^{p}\beta_j x_{ij} + \sum_{l=1,k=1}^{p}\gamma_{lk}\sin(x_{il}x_{ik})$, with $\gamma_{lk} = 1$ for the first two features in each group and zero otherwise. We also let $\beta_j = 1$ for j ∈ G1 or G3 and $\beta_j = 0$ for j ∈ G2 or G4 in the classification scenario, and $\beta_j = 3$ for j ∈ G1 or G3 and $\beta_j = 0$ for j ∈ G2 or G4 in the regression simulations. For regression scenarios, we let $y_i = f(x_i) + \epsilon_i$ where $\epsilon_i \stackrel{i.i.d.}{\sim} N(0, 1)$, and for classification scenarios, we employ a logistic model with $y_i \stackrel{i.i.d.}{\sim} Bernoulli(\sigma(f(x_i)))$, where σ is the sigmoid function. We present our binary classification and regression results for the DP metric with N = 1000, p = 12 features, and Σ = I in Figures 2 and 3. Additional simulation results for both classification and regression tasks with N = 500 or 1000, larger p, correlated features with Σ ≠ I, and for the EQOP metric are presented in the Supplemental Materials.
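For concreteness, the following is a minimal sketch of the linear classification scenario above; it is our own illustrative script and assumes three features per group and β0 = 0, details the text leaves unspecified.

```python
# Illustrative generator for the linear classification simulation (assumes
# three features per group and beta_0 = 0; the paper does not fix these).
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 12
G1, G2, G3, G4 = [0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]

z = rng.binomial(1, 0.2, size=n)                   # protected attribute, pi = 0.2
alpha = np.zeros(p); alpha[G1 + G2] = 2.0          # G1, G2 shift with z (biased)
beta = np.zeros(p);  beta[G1 + G3] = 1.0           # G1, G3 carry signal for y

X = rng.normal(loc=np.outer(z, alpha), scale=1.0)  # x_ij ~ N(alpha_j * z_i, 1)
f = X @ beta                                       # linear scenario, beta_0 = 0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-f)))      # y_i ~ Bernoulli(sigmoid(f))
```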
Figure 2 presents the *TreeFIS* and *FairTreeFIS* metrics for each of the twelve features, colored according to their group status and averaged over ten replicates. We present all three classification simulation scenarios for decision tree, gradient boosting, and random forest classifiers. First, notice that the sign of *FairTreeFIS* is correct in all scenarios; that is, features in G1 (red) and G2 (blue) are biased and *FairTreeFIS* accurately reflects this bias with a negative score, while the features in G3 (green) and G4 (purple) exhibit no bias and *FairTreeFIS* is positive. *FairTreeFIS* also accurately captures the magnitude of each feature's contributions, as the magnitudes of *FairTreeFIS* and *TreeFIS* are comparable in all scenarios. Note here that *FairTreeFIS* values are low for non-signal features in trees and gradient boosting, as non-signal features are likely not split upon and hence do not contribute to bias or fairness. Because random forests use random splits, however, non-signal features are split upon more often, and we see that *FairTreeFIS* accurately determines that features in G2 are biased. Additionally, in Figure 3, we present all three regression scenarios for decision tree, gradient boosting, and random forest regressors. We see similar behavior to the classification results. Overall, these results (and the many additional simulations in the Supplement) strongly validate the use of *FairTreeFIS* for interpreting features in trees and tree-based ensembles in terms of the bias or fairness that each feature induces in the predictions.

![8_image_0.png](8_image_0.png)

Figure 4: Random Forest interpretation using *FairTreeFIS* for the Adult, Law, COMPAS, and Communities and Crime datasets. The importance scores' magnitudes and directions align with other studies done on these datasets.

## 3.2 Case Studies

To align our work with the existing fairness literature, we evaluate our metric on five popular benchmark datasets. We examine: (i) the Adult Income dataset (Dua & Graff, 2017), containing 14 features and approximately 48,000 individuals, with class labels stating whether their income is greater than $50,000 and Gender as the protected attribute; (ii) the COMPAS dataset (Larson et al., 2022), which contains 13 attributes of roughly 7,000 convicted criminals, with class labels that state whether the individual will recidivate within two years of their most recent crime, and we use Race as the protected attribute; (iii) the Law School dataset (Whiteman, 1998), which has 8 features and 22,121 law school applicants, with class labels stating whether an individual will pass the Bar exam when finished with law school and Race as the protected attribute; (iv) the Communities and Crimes (C & C) dataset (Dua & Graff, 2017), which contains 96 features of 2,000 cities, with a regression task of predicting the number of violent crimes per capita and Race encoded as the protected attribute; and (v) the German Credit dataset, which classifies people as good or bad credit risks based on 20 features and 1,000 observations, and we use Gender as the protected attribute. We begin by evaluating the quality of *FairTreeFIS* interpretations on four benchmark datasets in Figure 4; additional interpretations of all benchmarks are provided in the Supplemental Material.
Figure 4 shows scores for a random forest classifier on the Adult dataset with Gender as the protected attribute, the Law dataset with Race as the protected attribute, the C & C dataset with Race as the protected attribute, and the COMPAS dataset with Race as the protected attribute. In the Adult dataset, the "Married" feature contributes most to both accuracy and bias. This indicates that while the "Married" feature is highly predictive, its use in the model could lead to more unfair predictions. This aligns with studies showing that, for men, higher wages were associated with a higher proportion of being married, whereas for women, higher wages were associated with a lower proportion of being married, as well as with the fact that many women are not employed outside the home (Mincy et al., 2009). Figure 4 shows that "Yr 3 GPA" contributes most to both the accuracy and the bias of the model in the Law dataset. In the C & C dataset, the percentage of kids who grew up with two parents in the household, denoted "% Kids 2 Par", has the highest magnitude for both *TreeFIS* and *FairTreeFIS*, although *FairTreeFIS* shows that this feature contributes to the bias seen in the predictions. Studies have shown that black young adults are disproportionately impacted by family structure (Wilcox, 2021). Specifically, black young adults are less likely to go to college and more likely to be imprisoned if they grow up in a single-parent household. In contrast, white young adults are significantly less affected by family structure. Thus, our *FairTreeFIS* interpretations are consistent with these studies. Looking at the results for the COMPAS dataset, the number of priors greater than 3, denoted "Num Pri > 3", has the highest magnitude for both *TreeFIS* and *FairTreeFIS*, and again *FairTreeFIS* reveals that this feature strongly influences the biased outcomes. These interpretations are consistent with other studies on the COMPAS dataset (Rudin et al., 2020), again validating our results.

Next, we further validate our *FairTreeFIS* interpretations of Random Forest models by removing the features that contribute most to the bias of the model according to their *FairTreeFIS* values. Figure 5 shows fairness and accuracy metrics over 10 runs on the Adult, Law, COMPAS, and Communities and Crime datasets. The gray bar represents the model with all features included, the red bar represents the model with the feature with the most negative *FairTreeFIS* value removed, and the blue bar represents the model with the two features with the most negative *FairTreeFIS* values removed. We expect that models containing all features would yield the most biased results and that the bias would decrease as the features that contribute most to the bias are removed. We see that this is indeed the trend for all of the datasets. Furthermore, we see in the C & C dataset that the accuracy also slightly increases as features that contribute to bias are removed. In the Adult dataset, we notice little movement in either fairness or accuracy no matter which features are included. We suspect that this is due in part to the fact that the Adult and C & C datasets have a large number of features, many of which are highly correlated. As a result, it is unsurprising to see only small changes when removing one or two features at a time. We move on to validating the use of *FairTreeFIS* for interpreting tree-based global surrogates.
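Before turning to surrogates, the feature-removal experiment behind Figure 5 can be outlined as follows. This is a hedged sketch: `fairtree_fis` is a hypothetical helper standing in for the paper's per-feature *FairTreeFIS* computation, and `dp_gap` is the helper sketched above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def drop_most_biased(X_tr, y_tr, X_te, y_te, z_tr, z_te, k):
    """Retrain a random forest with the k most negative FairTreeFIS features
    removed, then report test accuracy and DP gap (cf. the bars of Figure 5)."""
    rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    fis = fairtree_fis(rf, X_tr, z_tr)     # hypothetical helper (see lead-in)
    keep = np.argsort(fis)[k:]             # ascending sort: drop the k most negative
    rf_k = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
    y_hat = rf_k.predict(X_te[:, keep])
    return rf_k.score(X_te[:, keep], y_te), dp_gap(y_hat, z_te)
```

Running this with k = 0, 1, 2 mirrors the gray, red, and blue bars, with the expectation that the DP gap shrinks as k grows.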
To do this, in Figure 6, we compare *TreeFIS* and *FairTreeFIS* results on a gradient boosting classifier (where these scores were calculated by averaging over all tree ensemble members) to *TreeFIS* and *FairTreeFIS* results for a tree-based surrogate of the same gradient boosting classifier (where a fully grown decision tree was fit to the model's predictions). Generally, we see that the *TreeFIS* and *FairTreeFIS* scores between the top row (boosting) and bottom row (surrogate) are similar in magnitude and direction. Looking specifically at the Adult dataset, we see that "Married" is an important feature according to *TreeFIS*, but *FairTreeFIS* indicates that the model is using "Married" in a way that contributes to bias; these results are reflected in both the boosting model and the tree surrogate. While the scores for some of the less important features may vary slightly between the original model and the surrogate, notice that the most important features are always consistent between the two approaches. This indicates that our *FairTreeFIS* scores are effective when used to interpret tree-based global surrogates. Additional case studies on tree-based surrogates, including validating *TreeFIS* against model-specific deep learning feature importance scores, are provided in the Supplemental Material.

Lastly, in Figure 7, we examine *TreeFIS* and *FairTreeFIS* scores for a tree-based surrogate of a deep learning model (a multi-layer perceptron with two hidden layers, each with $p$ units and ReLU activations), as well as for a tree-based surrogate of a bias mitigation method, the Adversarial Debiasing approach (Zhang et al., 2018), on the Adult dataset with Gender as the protected attribute. The Adversarial Debiasing method (Zhang et al., 2018) applies adversarial learning to improve fairness by learning how to prevent an adversary from predicting the protected attribute. Looking at the Adult dataset scores of the tree-based surrogate of the deep learning model, we see that the "Cap. Gain", "Edu Num", and "Married" features are most important in terms of accuracy, while US Native Country ("US NC"), "Married", and "Age" are most influential in terms of bias. Specifically, "US NC" and "Married" hurt the overall fairness of the model. In the debiasing method, the magnitude of both *FairTreeFIS* and *TreeFIS* for the feature "Married" decreases substantially, showing that using this feature would likely result in more biased predictions. Additionally, the "Cap. Gain" feature becomes more important in terms of accuracy in the debiased model, as this feature contributes less to bias. The accuracy and fairness go from 0.84 and 0.83 in the deep learning model to 0.80 and 0.92 in the Adversarial Debiasing model, indicating that the approach is successful at mitigating bias.
The fact that features that strongly hurt fairness lose influence as the model becomes fairer indicates that our fair feature importance scores accurately capture when features are helping or hurting the overall fairness of the model. Note also that strongly predictive features often hurt fairness, and as fairness increases, accuracy decreases. This trend is a sign of the well-known and well-studied tradeoff between fairness and accuracy (Zliobaite, 2015; Little et al., 2022). Further results on all five benchmark datasets are included in the Supplemental Material.

Figure 5: Random Forest model interpretations using *FairTreeFIS* when the features contributing most to bias are removed (see Figure 4). Each panel shows accuracy and fairness (DP) metrics in three scenarios for the Adult, Law, COMPAS, and C & C datasets. The gray bar represents a random forest model trained with all the features, the red bar represents a random forest model trained without the feature with the most negative *FairTreeFIS* value, and the blue bar represents a model trained without the two features with the most negative *FairTreeFIS* values. Note that removing the features with the most negative *FairTreeFIS* values decreases bias, thereby validating our *FairTreeFIS* metric.

Figure 6: Global surrogate validation. The top row shows *TreeFIS* and *FairTreeFIS* results on a gradient boosting classifier for the Adult, COMPAS and Law datasets. The bottom row shows *TreeFIS* and *FairTreeFIS* results for a tree-based surrogate of the boosting classifier. The scores between the top and bottom rows are similar in magnitude and direction, indicating that our scores are effective when used to interpret tree-based global surrogates.

Figure 7: Interpretation of features for models with bias mitigation. The *FairTreeFIS* values show the difference in importance scores between a tree-based surrogate of a deep learning (two-hidden-layer MLP) model and a tree-based surrogate of a bias mitigation approach, the deep learning adversarial debiasing method. Note that the "Married" feature has the most negative *FairTreeFIS* in the tree-based surrogate model, but is significantly less negative in the debiased model.

## 4 Discussion

In this work, we proposed a fair tree feature importance score, *FairTreeFIS*, for interpreting trees, tree-based ensembles, and tree-based surrogates of complex ML systems. We extend the traditional accuracy-based *TreeFIS* (MDI), which calculates the change in loss between parent and child nodes, to consider fairness, where we calculate the difference in group bias between the parent and child levels. We empirically demonstrated that *FairTreeFIS* accurately captures the importance of features with respect to fairness in various simulation and benchmark studies. Crucially, we showed that we can employ this method to interpret complex deep-learning models when trees are used as surrogates.

Our *FairTreeFIS* metric is inspired by *TreeFIS* (MDI), and recently there have been many papers studying theoretical properties of MDI and some of its limitations (Strobl et al., 2007; Li et al., 2019; Zhou & Hooker, 2021). For example, MDI is known to favor features with high entropy and low correlation (Strobl et al., 2007); it is also known to be consistent, using out-of-sample data, for additive models (Scornet, 2023; Li et al., 2019; Zhou & Hooker, 2021), but its properties for other model classes are unknown. Given this recent research on MDI, one may ask whether our *FairTreeFIS* metric inherits these properties and limitations. Note that our *FairTreeFIS* metric performed well in simulations, including when there are correlated features (see Appendix), suggesting that perhaps these limitations do not apply. Further, note that many of the limitations of MDI arise from the fact that it utilizes the same loss function to compute feature importance as was used to greedily build the tree. In contrast, *FairTreeFIS* is based upon the differences in proportions across groups after each split and is hence less subject to such loss-function-specific biases.
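To make this last point concrete, the sketch below computes a *FairTreeFIS*-style score for a fitted scikit-learn decision tree: each split is charged the size-weighted demographic parity gap that its child-level predictions induce among the samples reaching it, so bias-inducing features accumulate negative totals. This is our own simplification of the parent-versus-child definition, intended only to illustrate the idea; it is not the authors' reference implementation.

```python
import numpy as np

def fair_fis_sketch(tree, X, z):
    """Simplified FairTreeFIS-style scores for a fitted DecisionTreeClassifier.
    Negative totals flag features whose splits push predictions apart across
    the two protected groups."""
    t = tree.tree_
    reach = tree.decision_path(X).toarray().astype(bool)  # (n_samples, n_nodes)
    node_pred = t.value[:, 0, :].argmax(axis=1)           # majority class at each node
    scores = np.zeros(X.shape[1])
    for n in range(t.node_count):
        left, right = t.children_left[n], t.children_right[n]
        if left == -1:                                    # leaf: nothing is split here
            continue
        at_n = reach[:, n]
        z_n = z[at_n]
        if (z_n == 0).sum() == 0 or (z_n == 1).sum() == 0:
            continue                                      # a group is absent at this node
        # child-level predictions for every sample reaching this node
        y_hat = np.where(reach[:, left], node_pred[left], node_pred[right])[at_n]
        gap = abs(y_hat[z_n == 0].mean() - y_hat[z_n == 1].mean())
        scores[t.feature[n]] -= at_n.mean() * gap         # size-weighted; negative = bias
    return scores
```

Here `z` is the binary protected attribute as a NumPy array; averaging these per-split penalties over the members of a forest or boosting ensemble gives the ensemble version.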
Nonetheless, just as MDI has been studied extensively and theoretically, the properties of our *FairTreeFIS* metric are important to investigate in future work. Additionally, there are several other possible directions to investigate in future work. *FairTreeFIS* is flexible and can be easily extended to work with other group fairness metrics, including those that consider several groups in the protected attribute or even multiple protected attributes. Further, just as feature interactions have been studied using trees (Basu et al., 2018), *FairTreeFIS* could possibly be extended to interpret bias that might arise from feature interactions. Finally, one could also investigate extending other popular feature importance scores to help interpret features in the context of fairness.

Our *FairTreeFIS* metric offers valuable insights for transparency and auditing. While our score does not indicate whether features are fair or unfair, it does help users understand how the model is using features in a manner that may help or hurt fairness. Combined with traditional *TreeFIS*, it enables users to understand the relative importance of features for both accuracy and fairness. Imagine if, in 2019, Apple and Goldman Sachs had employed *FairTreeFIS* to explain the biases exhibited by their models toward women. This could have significantly reduced public distrust and facilitated mitigation efforts. Furthermore, the increasing emphasis on fairness in AI models, exemplified by the October 2023 executive order on Safe, Secure, and Trustworthy AI, underscores the critical need for tools like *FairTreeFIS*. By enabling fair model interpretation, *FairTreeFIS* facilitates transparency and builds trust among users and consumers.

## Acknowledgments

COL acknowledges support from the NSF Graduate Research Fellowship Program under grant number 1842494. GIA, COL and DL acknowledge support from the JP Morgan Faculty Research Awards and NSF DMS-2210837.

## References

Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. A reductions approach to fair classification. In *ICML 2018: Proceedings of the 35th International Conference on Machine Learning*, volume 80, pp. 60–69, 2018. URL https://proceedings.mlr.press/v80/agarwal18a.html.

Sushant Agarwal. Trade-offs between fairness and interpretability in machine learning. In *IJCAI 2021 Workshop on AI for Social Good*, 2021.

Genevera I Allen, Luqin Gan, and Lili Zheng. Interpretable machine learning for discovery: Statistical challenges & opportunities. *arXiv preprint arXiv:2308.01475*, 2023.

Sumanta Basu, Karl Kumbier, James B Brown, and Bin Yu. Iterative random forests to discover predictive and stable high-order interactions. *Proceedings of the National Academy of Sciences*, 115(8):1943–1948, 2018.

Tom Begley, Tobias Schwedes, Christopher Frye, and Ilya Feige. Explainability for fair machine learning. *arXiv preprint arXiv:2010.07389*, 2020.

Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. Data decisions and theoretical implications when adversarially learning fair representations. *ArXiv Pre-Print 1707.00075*, 2017. doi: 10.48550/arXiv.1707.00075.

Vaishnavi Bhargava, Miguel Couceiro, and Amedeo Napoli. Limeout: an ensemble approach to improve process fairness. In *Joint European conference on machine learning and knowledge discovery in databases*, pp. 475–491. Springer, 2020.

Alberto Blanco-Justicia and Josep Domingo-Ferrer. Machine learning explainability through comprehensible decision trees.
In Machine Learning and Knowledge Extraction: Third IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2019, Canterbury, UK, August 26–29, 2019, Proceedings 3, pp. 15–26. Springer, 2019. Leo Breiman. *Classification and regression trees*. Routledge, 1973. Leo Breiman. Bagging predictors. *Machine Learning*, 24(2):123–140, 1996. doi: 10.1023/A:1018054314350. Tamara Broderick, Andrew Gelman, Rachael Meager, Anna L Smith, and Tian Zheng. Toward a taxonomy of trust for probabilistic machine learning. *Science Advances*, 9(7):eabn3999, 2023. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541, 2006. Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In *NeurIPS 2017: Advances in Neural* Information Processing Systems, volume 30, pp. 3995–4004, 2017. doi: https://proceedings.neurips.cc/ paper/2017/hash/9a49a25d845a483fae4be7e341368e36-Abstract.html. Ian Carlos Campbell. The apple card doesn't actually discriminate against women, investigators say. *The Verge*, 2021. URL https://www.theverge.com/2021/3/23/22347127/ goldman-sachs-apple-card-no-gender-discrimination. Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervised learning in high dimensions. In *Proceedings of the 25th international conference on Machine learning*, pp. 96–103, 2008. Simon Caton and Christian Haas. Fairness in machine learning: A survey. *ArXiv Pre-Print 2010.04053*, 2020. doi: 10.48550/arXiv.2010.04053. Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. L-shapley and c-shapley: Efficient model interpretation for structured data. *arXiv preprint arXiv:1808.02610*, 2018. Alexandra Chouldechova and Aaron Roth. The frontiers of fairness in machine learning. ArXiv Pre-Print 1810.08810, 2018. doi: 10.48550/arXiv.1810.08810. Jessica Dai, Sohini Upadhyay, Stephen H Bach, and Himabindu Lakkaraju. What will it take to generate fairness-preserving explanations? *arXiv preprint arXiv:2106.13346*, 2021. Mengnan Du, Ninghao Liu, and Xia Hu. Techniques for interpretable machine learning. *Communications of* the ACM, 63(1):68–77, 2019. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ ml. Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? *The journal of machine learning research*, 15(1): 3133–3181, 2014. Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, and Derek Roth. A comparative study of fairness-enhancing interventions in machine learning. In FAccT 2019: Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, pp. 329–338, 2019. doi: 10.1145/3287560.3287589. Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129:1789–1819, 2021. Vincent Grari, Boris Ruf, Sylvain Lamprier, and Marcin Detyniecki. Fair adversarial gradient tree boosting. In *ICDM 2019: Proceedings of the 2019 IEEE International Conference on Data Mining*, pp. 1060–1065, 2019. doi: 10.1109/ICDM.2019.00124. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 
A survey of methods for explaining black box models. *ACM Computing Surveys*, 51(5), aug 2018. ISSN 0360-0300. doi: 10.1145/3236009. URL https://doi.org/10.1145/3236009.

Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In *NeurIPS 2016: Advances in Neural Information Processing Systems 29*, volume 29, 2016. URL https://papers.nips.cc/paper/2016/hash/9d2682367c3935defcb1f9e247a97c0d-Abstract.html.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.

Hemant Ishwaran. Variable importance in binary regression trees and forests. 2007.

Aditya Jain, Manish Ravula, and Joydeep Ghosh. Biased models have biased explanations. *arXiv preprint arXiv:2012.10986*, 2020.

Jalil Kazemitabar, Arash Amini, Adam Bloniarz, and Ameet S Talwalkar. Variable importance using decision trees. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/5737c6ec2e0716f3d8a7a5c4e0de0d9a-Paper.pdf.

Will Knight. The apple card didn't see gender—and that's the problem. *WIRED*, 2019. URL https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/.

Jeff Larson, Marjorie Roswell, and Vaggelis Atlidakis. Compas recidivism risk score data and analysis, 2022. URL https://github.com/propublica/compas-analysis/.

Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, and Bin Yu. A debiased MDI feature importance measure for random forests. *Advances in Neural Information Processing Systems*, 32, 2019.

Zachary C Lipton. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. *Queue*, 16(3):31–57, 2018.

Camille Olivia Little, Michael Weylandt, and Genevera I. Allen. To the fairness frontier and beyond: Identifying, quantifying, and optimizing the fairness-accuracy Pareto frontier. *arXiv preprint arXiv:2206.00074*, 2022.

Wei-Yin Loh. Classification and regression trees. *Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery*, pp. 14–23, 2011.

Pranay K. Lohia, Karthikeyan Natesan Ramamurthy, Manish Bhide, Diptikalyan Saha, Kush R. Varshney, and Ruchir Puri. Bias mitigation post-processing for individual and group fairness. In *ICASSP 2019: Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing*, pp. 2847–2851, 2019. doi: 10.1109/ICASSP.2019.8682620.

Gilles Louppe, Louis Wehenkel, Antonio Sutera, and Pierre Geurts. Understanding variable importances in forests of randomized trees. *Advances in Neural Information Processing Systems*, 26, 2013.

Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. Consistent individualized feature attribution for tree ensembles. *ArXiv Pre-Print 1802.03888*, 2018. doi: 10.48550/arXiv.1802.03888.

Masayoshi Mase, Art B Owen, and Benjamin B Seiler. Cohort shapley value for algorithmic fairness. *arXiv preprint arXiv:2105.07168*, 2021.

Ronald Mincy, Jennifer Hill, and Marilyn Sinkewicz. Marriage: Cause or mere indicator of future earnings growth? *Journal of Policy Analysis and Management*, 28(3):417–439, 2009.

W James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. Definitions, methods, and applications in interpretable machine learning. *Proceedings of the National Academy of Sciences*, 116(44):22071–22080, 2019.
Khansa Rasheed, Adnan Qayyum, Mohammed Ghaly, Ala Al-Fuqaha, Adeel Razi, and Junaid Qadir. Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Computers in Biology and Medicine, pp. 106043, 2022. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?": Explaining the predictions of any classifier. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1135–1144. Association for Computing Machinery, 2016. ISBN 9781450342322. doi: 10.1145/2939672.2939778. URL https://doi.org/10.1145/2939672.2939778. Cynthia Rudin, Caroline Wang, and Beau Coker. The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review, 2(1):1, 2020. Omer Sagi and Lior Rokach. Approximating xgboost with an interpretable decision tree. *Information* Sciences, 572:522–542, 2021. Wojciech Samek and Klaus-Robert Müller. Towards explainable artificial intelligence. *Explainable AI:* interpreting, explaining and visualizing deep learning, pp. 5–22, 2019. Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. *Proceedings* of the IEEE, 109(3):247–278, 2021. Nina Schaaf, Marco Huber, and Johannes Maucher. Enhancing decision tree based interpretation of deep neural networks through l1-orthogonal regularization. In *2019 18th IEEE International Conference On* Machine Learning And Applications (ICMLA), pp. 42–49. IEEE, 2019. Erwan Scornet. Trees, forests, and impurity-based variable importance in regression. In Annales de l'Institut Henri Poincare (B) Probabilites et statistiques, volume 59, pp. 21–52. Institut Henri Poincaré, 2023. Carolin Strobl, Anne-Laure Boulesteix, Achim Zeileis, and Torsten Hothorn. Bias in random forest variable importance measures: Illustrations, sources and a solution. *BMC bioinformatics*, 8:1–21, 2007. Ehsan Toreini, Mhairi Aitken, Kovila Coopamootoo, Karen Elliott, Carlos Gonzalez Zelaya, and Aad Van Moorsel. The relationship between trust in ai and trustworthy machine learning technologies. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pp. 272–283, 2020. Neil Vigdor. Apple card investigated after gender discrimination complaints. *New York Times*, 2019. URL https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html. Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, and Joseph E Gonzalez. Nbdt: neural-backed decision trees. *arXiv preprint arXiv:2004.00221*, 2020. Caroline Wang, Bin Han, Bhrij Patel, and Cynthia Rudin. In pursuit of interpretable, fair and accurate machine learning for criminal recidivism prediction. *Journal of Quantitative Criminology*, 39(2):519–581, 2023. Linda Whiteman. The scale and effects of admissions preferences in higher education (SEAPHE), 1998. URL http://www.seaphe.org/databases.php. W. Bradford Wilcox. Less poverty, less prison, more college: What two parents mean for black and white children, 2021. URL https://ifstudies.org/blog/ less-poverty-less-prison-more-college-what-two-parents-mean-for-black-and-white-children. Yongxin Yang, Irene Garcia Morillo, and Timothy M Hospedales. Deep neural decision trees. *arXiv preprint* arXiv:1806.06988, 2018. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In *International conference on machine learning*, pp. 
325–333, 2013. URL https://proceedings.mlr.press/ v28/zemel13.html. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In *AIES 2018: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 335–340, 2018. doi: 10.1145/3278721.3278779. Zhengze Zhou and Giles Hooker. Unbiased measurement of feature importance in tree-based methods. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(2):1–21, 2021. Indre Zliobaite. On the relation between accuracy and fairness in binary classification. ArXiv Pre-Print 1505.05723, 2015. doi: 10.48550/arXiv.1505.05723. Presented at the 2nd Workshop on Fairness, Accountability, and Transparency in Machine Learning.
Review 1: Summary: The paper develops a new feature importance for trees and forests that measures fairness of the features with respect to some protected attribute. To this end, the paper adapts the MDI measure of Random Forests to instead measure the additional bias when splitting on a certain covariate $j$. After introducing the measure, an extensive empirical analysis studies the behavior of the measure in simulated and real data scenarios. Strengths and Weaknesses: Strengths: ----------------- + Generally important idea + Extensive empirical evaluation + Through surrogate models the score is very widely applicable Problems: --------------- - There could be more focus on the limitations of the method, at least in the Discussion Section. In particular, the score is somewhat ad hoc. Nowadays, new importance measures are often developed in a principled way, by defining a population quantity to which the estimate should converge in large samples. Not doing that can have the disadvantage of very unexpected behavior of the measure, as what we are actually measuring might not be understood (see my comments below). While I think this is ok for this paper, I also think it should be highlighted more. Requested Changes: - I think the limitations of the method should be better addressed: The MDI is known to have several problems, both in theory and practice, see e.g., the references in Section 2.1 in Bénard et al (2022): "... in the last two decades, many empirical analyses have highlighted the flaws of the MDI (Strobl et al., 2007). Although Li et al. (2019), Zhou and Hooker (2021), and Loecher (2020) recently improved the MDI to partially remove its bias, Scornet (2020) demonstrated that the MDI is consistent under a strong and restrictive assumption: the regression function is additive and the covariates are independent." These problems arise especially when features are correlated (i.e. almost always in practice). Having a look at this literature, does the FairTreeFIS have the same problems? (I am especially thinking of the empirical flaws of MDI). If yes, they should be at the very least discussed as shortcomings in the discussion. If not, what is the difference? - Related to the above, what happens if a feature that contributes to bias is highly correlated with another one? Will FairTreeFIS (falsely) flag both as having a negative fairness value? I tried to understand this from the supplementary material, but even just a simpler simulation with two correlated covariates might be helpful here. - Is it possible to have several protected features in z? - Detail: I find the colors in the plots a bit confusing: It could help to refer again that the colors correspond to different groups G_1 to G_4 in the caption or on the side. Bénard, C., Da Veiga, S., and Scornet, E. (2022). Mean decrease accuracy for random forests: inconsis- tency, and a practical solution via the Sobol-MDA. Biometrika, 109(4):881–900. Broader Impact Concerns: There are no concerns from my side. ================================================== Review 2: Summary: The article introduces a variable importance measure for decision trees and tree ensembles to quantify the impact of each input feature on the algorithm fairness. This new importance score identifies features that are unfair with respect to protected features (typically gender or race in the experiments). The proposed algorithm is called FairTreeFIS, and is directly based on the Mean Decrease Impurity (MDI), introduced by Breiman (2003). 
The MDI is defined as the sum of weighted decreases of impurity over all tree nodes that split on a given input variable. FairTreeFIS uses the bias decrease at each node instead of the impurity in the case of the MDI, to quantify feature fairness. Both regression and classification problems are handled by FairTreeFIS. Finally, Section 3 presents the results of several experiments using both simulated and real data, for decision trees, random forests, and boosted tree ensembles. Experiments show that FairTreeFIS identifies features that are influential and unfair with respect to the protected features. Breiman, L. (2003). Setting up, using, and understanding random forests v3.1. Technical report, UC Berkeley, Department of Statistics. Strengths and Weaknesses: The article deals with fairness in machine learning, a topic of high importance in the community. The structure of the paper is sound, the FairTreeFIS methodology is clearly introduced after the introduction, and the third section presents various experiments to assess the introduced algorithm. FairTreeFIS is strongly based on the MDI, an existing importance measure for tree-based models (Breiman, 2003). There is a vast literature about the MDI, which is missing in the article. This point is critical because many papers have highlighted flaws of the MDI, and some have introduced improved MDI algorithms. More precisely, practical problems have long been identified by Strobl et al (2007), with strong biases for categorical inputs, or masking effects. Li et al (2019) and Zhou & Hooker (2021) have suggested to use out of sample data to recompute the MDI, and thus improve the accuracy of the MDI. These approaches should be considered and discussed in the article for the MDI adaptation to fairness. More importantly, the theoretical groundings of the MDI have been strongly criticized by Scornet (2023), which shows that the MDI is intrinsically ill-defined: only in the very specific case of independent inputs and additive regression functions, the MDI provides a variance decomposition of the output. Otherwise, we do not even know what the MDI really estimates. Consequently, the MDI cannot provide any precise quantification of input importance, but only a very rough idea whether a given feature has a strong or low influence. FairTreeFIS inherits the same problem, which is even worse in the case of fairness, which involves more complex conditional quantities. In our opinion, this point is critical and should be discussed in the paper. In particular, the baseline procedure were the MDI is run, and the correlations (or non-linear dependance metrics) of features with the protected variables are computed, may provide similar information as FairTreeFIS. There is a growing consensus in the community that algorithmic definition of variable importance are unclear, and that theoretical quantities should be first defined instead. Then, algorithms should be built to estimate these quantities. In particular, this approach is strongly supported by Williamson et al (2021) and Candes et al. (2018), where targeted quantities are clearly stated. # References B.D. Williamson, P.B. Gilbert, N.R. Simon, and M. Carone. A general framework for inference on algorithm-agnostic variable importance. Journal of the American Statistical Association, pages 1–38, 2021. E. Candes, Y. Fan, L. Janson, and J. Lv. Panning for gold:‘model-X’knockoffs for high dimensional controlled variable selection. 
Journal of the Royal Statistical Society Series B: Statistical Methodology, 80:551–577, 2018. Scornet, E. Trees, forests, and impurity-based variable importance in regression."Annales de l'Institut Henri Poincare (B) Probabilites et statistiques. Vol. 59. No. 1. Institut Henri Poincaré, 2023. Breiman, L. (2003). Setting up, using, and understanding random forests v3.1. Technical report, UC Berkeley, Department of Statistics. C. Strobl, A.-L. Boulesteix, A. Zeileis, and T. Hothorn. Bias in random forest variable importance measures: illustrations, sources and a solution. BMC Bioinformatics, 8:25, 2007. X. Li, Y. Wang, S. Basu, K. Kumbier, and B. Yu. A debiased MDI feature importance measure for random forests. In Advances in Neural Information Processing Systems, volume 32, pages 8049–8059, New York, 2019. Curran Associates, Inc. Z. Zhou and G. Hooker. Unbiased measurement of feature importance in tree-based methods. ACM Transactions on Knowledge Discovery from Data, 15:1–21, 2021. Requested Changes: See Strengths And Weaknesses. Broader Impact Concerns: Not applicable. ================================================== Review 3: Summary: In applications of critical consequences, how to explain why a machine learning model makes fair or unfair decisions across different populations has been an important aspect in responsible machine learning. The authors introduce Fair Feature Importance Score (FairFIS) that measures the contribution of a feature toward or against improving fairness metrics such as demographic parity or equalized odds, especially for decision trees and models originated from them (e.g., boosting algorithms or random forests). The authors measure the Fair Feature Importance Score on synthetic and several real-world datasets to pinpoint features that cause potential discriminations. Strengths and Weaknesses: **Strengths** 1. This is (probably) the first metric that is used to measure the contribution of a feature on improving or exacerbating discriminations against different populations. 2. The introduction is well-written and motivation. The Apple Credit Card example is a good example for the need of this research. **Weaknesses** 1. The proposed Fair Feature Importance Score is a direct application of DP or EQOD on the mean decrease in purity. Despite that very intuitive and easy to understand, the novelty of the proposed FairFIS seems not high here. 2. There is a lack of discussion and study that combine the proposed FairFIS with explanability metrics in existing XAI literatures. For example, how to compare FairFIS with some famous metrics such as LIME (Local Interpretable Model-Agnostic Explanations) or the Shapley values? 3. In the illustration, the authors plot the FairFIS for each feature. Is FairFIS scalable for large datasets (e.g., with several million rows and hundreds of columns)? The authors claim that FairFIS could also be applied on XGBoost; however, no empirical study is provided. 4. Figure 1 could be improved for clarify the difference between TreeFIS and FairTreeFIS. Requested Changes: I will use this space to list some questions I have. I would be great if the authors could address the weaknesses above and my questions here. **Questions** 1. In the empirical study, the authors seem to completely overlook the fairness-accuracy trade-off. In practice, it is often undesirable to trade in too much accuracy for fairness. In Figure 5, COMPAS dataset as an example, removing two features cause a 5% accuracy drop. 
Therefore, directly removing features with the most negative FairTreeFIS values may not be a practical treatment. Moreover in Figure 5, why for most of the datasets, removing one will not decrease the bias as claimed? Also the acc is not affected? 2. Could the authors clarify what are the importance scores in other studies in Figure 4? 3. The proposed FairTreeFIS is a single variate analysis; however, in many cases, unfairness or discrimination could be originated from a (non-linear) combination/transformation of several features. How could FairTreeFIS adopt to those scenarios? Broader Impact Concerns: There is no ethical concerns. ================================================== Metareview: Recommendation: Accept as is Comment: One of the main criticism from reviewers is that the proposed metric is derived from MDI, which has been criticized in recent work for its flaws in reliably assessing variable importance in the standard setting (without fairness in mind). I believe the authors have properly addressed this issue (in the sense of TMLR's acceptance criteria) by adding a discussion on the limitations of MDI and the possibility to design other, perhaps more reliable metrics for fair variable importance. As a first attempt at proposing such a metric, the paper may elicit more work on this topic. ==================================================
# Costs And Benefits Of Fair Regression

Han Zhao hanzhao@illinois.edu
Department of Computer Science
University of Illinois Urbana-Champaign

Reviewed on OpenReview: *https://openreview.net/forum?id=v6anjyEDVW*

## Abstract

Real-world applications of machine learning tools in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact tradeoff between fairness and accuracy with a real-valued target is not entirely clear. In this paper, we characterize the inherent tradeoff between statistical parity and accuracy in the regression setting by providing a lower bound on the error of any attribute-blind fair regressor. Our lower bound is sharp, algorithm-independent, and admits a simple interpretation: when the moments of the target differ between groups, any fair algorithm has to make an error on at least one of the groups. We further extend this result to give a lower bound on the joint error of any (approximately) fair algorithm, using the Wasserstein distance to measure the quality of the approximation. With our novel lower bound, we also show that the price paid by a fair regressor that does not take the protected attribute as input is less than that of a fair regressor with explicit access to the protected attribute. On the upside, we establish the first connection between individual fairness, accuracy parity, and the Wasserstein distance by showing that if a regressor is individually fair, it also approximately verifies the accuracy parity, where the gap is again given by the Wasserstein distance between the two groups. Inspired by our theoretical results, we develop a practical algorithm for fair regression through the lens of representation learning, and conduct experiments on a real-world dataset to corroborate our findings.

## 1 Introduction

High-stakes domains, e.g., loan approvals and credit scoring, have been using machine learning tools to help make decisions. A central question in these applications is whether the algorithm makes fair decisions, in the sense that certain sensitive data does not influence the outcomes or accuracy of the learning algorithms. For example, as regulated by the General Data Protection Regulation (GDPR, Article 22 Paragraph 4), "decisions which produce legal effects concerning him or her or of similar importance shall not be based on certain personal data", including race, religious belief, etc. As a result, using sensitive data directly in algorithms is often prohibited. However, due to redundant encoding, redlining, and other problems, this "fairness through blindness" is often not sufficient to ensure algorithmic fairness in automated decision-making processes. Many works have produced methods aiming at reducing unfairness (Calmon et al., 2017; Chi et al., 2021; Hardt et al., 2016; Agarwal et al., 2019; Feldman et al., 2015; Beutel et al., 2017; Lum & Johndrow, 2016) under various contexts. However, the question of the price that we need to pay for enforcing various fairness definitions in terms of the accuracy of these tools is less explored. In this paper, we attempt to answer this question by characterizing a tradeoff between statistical parity and accuracy in the regression setting, where the regressor is prohibited from using the sensitive attribute directly, dubbed an attribute-blind regressor (predictor).
Among the many definitions of fairness (Verma & Rubin, 2018) in the literature, statistical parity asks the predictor to be statistically independent of a predefined protected attribute, e.g., race, gender, etc. While it has long been observed empirically that there is an underlying tension between accuracy and statistical parity (Calders et al., 2013; Zliobaite, 2015; Berk et al., 2017; Agarwal et al., 2019) in both classification and regression settings, theoretical understanding of this tradeoff in regression is limited. In the case of classification, Menon & Williamson (2018) explored such a tradeoff in terms of the fairness frontier function in the context of cost-sensitive binary classification, and Zhao & Gordon (2019) provided a characterization of such a tradeoff in binary classification. Recently, Chzhen et al. (2020a) and Le Gouic et al. (2020), concurrently with each other, derived an analytic bound characterizing the price of statistical parity in regression using Wasserstein barycentres when the learner can take the sensitive attribute explicitly as an input. In this paper, we derive the first lower bound characterizing the inherent tradeoff between fairness and accuracy in the regression setting under a general $\ell_p$ loss when the regressor is prohibited from using the sensitive attribute directly during the inference stage. Our main theorem can be informally summarized as follows: *Any fair algorithm satisfying statistical parity has to incur a large error on at least one of the demographic subgroups when the moments of the target variable differ across groups. Furthermore, if the population of the two demographic subgroups is imbalanced, the minorities could still suffer from the reduction in accuracy even if the global accuracy does not seem to reduce.* We emphasize that the above result holds in the noiseless setting as well, where there exist (unfair) algorithms that are perfect on both demographic subgroups. Hence it highlights the inherent tradeoff due to the coupling between statistical parity and accuracy in general, not due to the non-informativeness of the input. We also extend this result to the general noisy setting when only approximate fairness is required. Our bounds are algorithm-independent and do not make any distributional assumptions. To illustrate the tightness of the lower bound, we also construct a problem instance where the lower bound is attained. In particular, it is easy to see that in an extreme case where the group membership coincides with the target task, a call for exact statistical parity will inevitably remove the perfect predictor. At the core of our proof technique is the use of the Wasserstein metric and its contraction property under certain Lipschitz assumptions on the regression predictors. On the positive side, we establish the first connection between individual fairness (Dwork et al., 2012), a more fine-grained notion of fairness, and accuracy parity (Buolamwini & Gebru, 2018; Bagdasaryan et al., 2019; Chi et al., 2021). Roughly speaking, an algorithm is said to be individually fair if it treats similar individuals similarly. We show that if a regressor is individually fair, then it also approximately verifies accuracy parity. Interestingly, the gap in this approximation is exactly given by the Wasserstein distance between the distributions across groups.
Our proof techniques are very simple but general, and we expect them to have broader applications in other learning scenarios with real-valued targets, e.g., domain adaptation for regression problems (Ganin et al., 2016; Courty et al., 2017; Zhao et al., 2019b) and counterfactual inference (Johansson et al., 2016; Shalit et al., 2017; Johansson et al., 2020). Although our main focus is to understand the costs and benefits of fair regression, our analysis also naturally suggests a practical algorithm to achieve statistical parity and accuracy parity simultaneously in regression by learning fair representations. The idea is relatively simple and intuitive: it suffices to ensure that the representations upon which the regressor acts are approximately fair (as measured by the Wasserstein distance). Finally, we also conduct experiments on a real-world dataset to corroborate our theoretical findings. Our results highlight the role of the Wasserstein distance in both the theoretical analysis and algorithm design of fair regression, which complements the existing results for fair classification using the TV-distance (Zhao & Gordon, 2022).

## 2 Preliminaries

Notation We consider a general regression setting where there is a joint distribution $\mu$ over the triplet $T = (X, A, Y)$, where $X \in \mathcal{X} \subseteq \mathbb{R}^d$ is the input vector, $A \in \{0, 1\}$ is the protected attribute, e.g., race, gender, etc., and $Y \in \mathcal{Y} \subseteq [-1, 1]$ is the target output. (Our main results could be extended to the case where $A$ can take finitely many values.) Hence, the joint distribution $\mu$ is defined over the product space $\mathcal{X} \times \{0, 1\} \times \mathcal{Y}$. Lower-case letters $x$, $a$ and $y$ are used to denote instantiations of $X$, $A$ and $Y$, respectively. Let $\mathcal{H}$ be a hypothesis class of predictors from the input to the output space. Throughout the paper, we focus on the setting where the regressor cannot directly use the sensitive attribute $A$ to form its prediction. However, note that even if the regressor does not explicitly take the protected attribute $A$ as input, this *fairness through blindness* mechanism can still be biased due to the redundant encoding issue (Barocas et al., 2017). To keep the notation uncluttered, for $a \in \{0, 1\}$, we use $\mu_a$ to denote the conditional distribution of $\mu$ given $A = a$. The zero-one entropy of $A$ (Grünwald et al., 2004, Section 3.5.3) is denoted by $H_{0\text{-}1}(A) := 1 - \max_{a\in\{0,1\}} \Pr(A = a)$. Furthermore, we use $F_\nu$ to represent the cumulative distribution function of a distribution $\nu$ over $\mathbb{R}$, i.e., for $t \in \mathbb{R}$, $F_\nu(t) := \Pr_\nu((-\infty, t])$. In this paper, we assume that the density of $\mu_i$ and its corresponding pushforward under proper transformation (w.r.t. the Lebesgue measure $\lambda$) is universally bounded above, i.e., $\|d\mu_i/d\lambda\|_\infty \le C$, $\forall i \in \{0, 1\}$. Given a feature transformation function $g : \mathcal{X} \to \mathcal{Z}$ that maps instances from the input space $\mathcal{X}$ to the feature space $\mathcal{Z}$, we define $g_\sharp\mu := \mu \circ g^{-1}$ to be the induced distribution (pushforward) of $\mu$ under $g$, i.e., for any measurable event $E' \subseteq \mathcal{Z}$, $\Pr_{g_\sharp\mu}(E') := \Pr_\mu(g^{-1}(E')) = \Pr_\mu(\{x \in \mathcal{X} \mid g(x) \in E'\})$. We also use $Y_\sharp\mu$ to denote the marginal distribution of $Y$ from the joint distribution $\mu$, i.e., the projection of $\mu$ onto the $Y$ coordinate. Throughout the paper, we make the following assumption:

Assumption 2.1. There exists a constant $C$ such that the density of every $\mu'$ (w.r.t. the Lebesgue measure $\lambda$) is universally bounded above, i.e., $\|d\mu'/d\lambda\|_\infty \le C$.

Fairness Definition We mainly focus on group fairness, where the group membership is given by the protected attribute $A$.
In particular, *statistical parity* asks that the predictor be statistically independent of the protected attribute. In binary classification, this requirement corresponds to the notion of equality of outcome (Holzer & Neumark, 2006), and it says that the outcome rate should be equal across groups.

Definition 2.1 (Statistical Parity). Given a joint distribution $\mu$, a predictor $\widehat{Y} = h(X)$ satisfies *statistical parity* if $\widehat{Y}$ is independent of $A$.

Since $\widehat{Y}$ is continuous, the above definition implies that $\Pr_{\mu_0}(\widehat{Y} \in E) = \Pr_{\mu_1}(\widehat{Y} \in E)$ for any measurable event $E \subseteq \mathbb{R}$. Statistical parity has been adopted as a definition of fairness in a series of work (Calders et al., 2009; Edwards & Storkey, 2015; Johndrow et al., 2019; Kamiran & Calders, 2009; Kamishima et al., 2011; Louizos et al., 2015; Zemel et al., 2013; Madras et al., 2018), under both the classification and regression settings.

Fair Regression Given a joint distribution $\mu$, the weighted $\ell_p$ error of a predictor $\widehat{Y} = h(X)$ under $\mu$ for $p \ge 1$ is defined as

$$\varepsilon_{p,\mu}(\widehat{Y}):=\sum_{a}\Pr_{\mu}(A=a)\cdot\varepsilon_{p,\mu_{a}}(\widehat{Y})=\sum_{a}\Pr_{\mu}(A=a)\cdot\left(\mathbb{E}_{\mu_{a}}\left[|\widehat{Y}-Y|^{p}\right]\right)^{1/p}.\tag{1}$$

Note that the weighted $\ell_p$ error defined above is not the same as the $\ell_p$ error over the joint distribution $\mu$. Instead, it is the weighted sum of the $\ell_p$ errors on each subgroup, where the weights are given by the ratios of the subgroups in the overall population. In fact, these two losses can be related by the following observation. Let $p_a := \Pr_\mu(A = a)$; we have

$$\begin{aligned}\|\widehat{Y}-Y\|_{\ell_{p}(\mu)}:=\left(\mathbb{E}_{\mu}\left[|\widehat{Y}-Y|^{p}\right]\right)^{1/p}&=\left(\sum_{a}p_{a}\,\mathbb{E}_{\mu_{a}}\left[|\widehat{Y}-Y|^{p}\right]\right)^{1/p}\\&\geq\sum_{a}p_{a}\left(\mathbb{E}_{\mu_{a}}\left[|\widehat{Y}-Y|^{p}\right]\right)^{1/p}=\varepsilon_{p,\mu}(\widehat{Y}),\end{aligned}$$

where the inequality is due to Jensen's inequality, since $g(t) = t^{1/p}$ for $p \ge 1$ is concave over $t \ge 0$. The above inequality means that the $\ell_p$ error over the joint distribution $\mu$ is an upper bound on our weighted $\ell_p$ error over subgroups, which is the main focus of this paper. Conceptually, as we shall shortly see in Section 3.1, since we are more interested in understanding the tradeoff between the so-called *balanced error rate* and statistical parity, where each subgroup has the same weight in the overall population, it is natural to treat each subgroup as a separate distribution and define the corresponding error over it separately. As two notable special cases, when $p = 2$, the above definition reduces to the weighted sum of the square roots of the usual mean-squared errors (MSE) over the subgroups; when $p = 1$, (1) becomes the weighted sum of the mean absolute errors (MAE) of the predictor over the subgroups. To make the notation more compact, we may drop the subscript $\mu$ when it is clear from the context. The main departure from prior works on classification is that both $Y$ and $\widehat{Y} = h(X)$ are allowed to be real-valued rather than just categorical. Under statistical parity, the problem of fair regression can be understood as the following constrained optimization problem:

$$\begin{array}{rl}\underset{h\in\mathcal{H}}{\mathrm{minimize}} & \varepsilon_{p,\mu}(\widehat{Y})\\ \mathrm{subject\ to} & \left|\Pr_{\mu_0}(h(X)\leq t)-\Pr_{\mu_1}(h(X)\leq t)\right|\leq\epsilon,\ \forall t\in\mathbb{R}.\end{array}\tag{2}$$
Under statistical parity, the problem of fair regression can be understood as the following constrained optimization problem: $$\begin{array}{r l}{{\underset{h\in{\mathcal{H}}}{\mathrm{minimize}}}}&{{\varepsilon_{p,h}({\widehat{Y}})}}\\ {{\mathrm{subject~to}}}&{{\left|\Pr(h(X)\leq t)-\Pr(h(X)\leq t)\right|\leq\epsilon,\;\forall t\in\mathbb{R}.}}\end{array}$$ $$(2)$$ 3 Note that since Yb = h(X) ∈ R is a real-valued random variable and A is binary, the constraint in the above optimization formulation asks that the conditional cumulative distributions of Yb are approximately equal across groups, which is an additive approximation to the original definition of statistical parity. Formally, the constraint in (2) is known as the Kolmogorov-Smirnov distance: Definition 2.2 (Kolmogorov-Smirnov distance). For two probability distributions µ and µ 0 over R, the *KolmogorovSmirnov distance* K(µ, µ 0) is K(µ, µ 0) := supz∈R |Fµ(z) − Fµ0(z)|. With the Kolmogorov-Smirnov distance, we can define the e-statistical parity for a regressor h: Definition 2.3 (e-Statistical Parity). Given a joint distribution µ and 0 ≤ e ≤ 1, a regressor Yb = h(X), satisfies e-*statistical parity* if K(h]µ0, h]µ1) ≤ e. Clearly, the slack variable e controls the quality of approximation and when e = 0 it reduces to asking exact statistical parity as defined in Definition 2.1. Wasserstein Distance Given two random variables T and T 0 with the corresponding distributions µ and µ 0, let Γ(µ, µ 0) denote the set of all couplings γ of µ and µ 0, i.e., γT = µ and γT0 = µ 0. The *Wasserstein distance* between the pair of distributions µ and µ 0is defined as follows: $$W_{p}(\mu,\mu^{\prime}):=\left(\operatorname*{inf}_{\gamma\in\Gamma(\mu,\mu^{\prime})}\int\|T-T^{\prime}\|^{p}\,d\gamma\right)^{1/p},$$ $$(3)$$ where p ≥ 1 and throughout this paper we fix *k · k* to be the `2 norm. For the special case where both µ and µ 0are distributions over R, the Wasserstein distance Wp(µ, µ 0) admits the following equivalent characterization (Kolouri et al., 2017): $$W_{p}(\mu,\mu^{\prime})=\left(\int_{0}^{1}|F_{\mu}^{-1}(t)-F_{\mu^{\prime}}^{-1}(t)|^{p}\,dt\right)^{1/p},\tag{4}$$ where F −1 µ(t) denotes the generalized inverse of the cumulative distribution function, i.e., F −1 µ(t) = infz∈R{z : F(z) ≥ t}. The above closed-form formulation will be particularly useful in our later analysis. When p = 1, the Wasserstein distance is also called the *Earth Mover distance*, and it admits a dual representation in a variational form using sup rather than inf: W1(µ, µ 0) = supf :k f kL≤1 Rf dµ −Rf dµ 0 , where k f kL := supx6=x 0 | f(x) − f(x 0)|/|x − x 0| is the Lipschitz seminorm of f . It is well-known that convergences of measures under the Wasserstein distance imply weak convergence, i.e., convergence in distribution (Gibbs & Su, 2002). Furthermore, compared with other distance metrics including total variation (TV), Jensen-Shannon distance, etc. that ignore the geometric structure of the underlying space, Wasserstein distance often allows for more robust applications, e.g., the Wasserstein GAN (Arjovsky et al., 2017), domain adaptation (Courty et al., 2017), etc., due to its Lipschitz continuous constraint in the dual representation. Moreover, unlike the KL divergence, the Wasserstein distance between two measures is generally finite even when neither measure is absolutely continuous with respect to the other, a situation that often arises when considering empirical distributions arising in practice. 
## 3 Main Results

Recently, Agarwal et al. (2019) proposed a reduction-based approach to tackle (2) by solving a sequence of cost-sensitive problems. By varying the slack variable $\epsilon$, the authors also empirically verified the unavoidable tradeoff between statistical parity and accuracy in practice. However, to the best of our knowledge, a quantitative characterization of the exact tradeoff between fairness and accuracy is still missing. In this section, we seek to answer the following intriguing and important question:

*In the setting of regression, what is the minimum error that any attribute-blind fair algorithm has to incur, and how does this error depend on the coupling between the target and the protected attribute?*

In what follows we shall first provide a simple example to illustrate this tradeoff. This example will give readers a flavor of the kind of impossibility result we are interested in proving. We then proceed to formally present our first theorem, which exactly answers the above question, even if only approximate fairness is satisfied. We conclude this section with some discussions on the implications of our results.

A Simple Example As a warm-up, let us consider an example to showcase the potential tradeoff between statistical parity and accuracy. But before our construction, it should be noted that the error $\varepsilon_{p,\mu}(\widehat{Y})$ bears an intrinsic lower bound for any deterministic predictor $\widehat{Y} = h(X)$, i.e., the noise in the underlying data distribution $\mu$. Hence, to simplify our discussion, in this example we shall construct distributions such that there is no noise in the data, i.e., for $a \in \{0, 1\}$, there exists a ground-truth labeling function $h^*_a$ such that $Y = h^*_a(X)$ on $\mu_a$. Realize that such a simplification will only make it harder for us to prove the lower bound on $\varepsilon_{p,\mu_a}$, since there exist predictors that are perfect.

Example 3.1 (Target coincides with the protected attribute). For $a \in \{0, 1\}$, let the marginal distribution $X_\sharp\mu_a$ be a uniform distribution over $\{0, 1\}$, and let $Y = a$ be a constant. Hence, by construction, $Y = A$ holds on the joint distribution. Now, for any fair predictor $\widehat{Y} = h(X)$, statistical parity asks $\widehat{Y}$ to be independent of $A$. However, no matter what value $h(x)$ takes, we always have $|h(x)| + |h(x) - 1| \ge 1$. Hence, for any predictor $h : \mathcal{X} \to \mathbb{R}$:

$$\varepsilon_{1,\mu_{0}}(h)+\varepsilon_{1,\mu_{1}}(h)=\frac{1}{2}|h(0)-0|+\frac{1}{2}|h(1)-0|+\frac{1}{2}|h(0)-1|+\frac{1}{2}|h(1)-1|\geq\frac{1}{2}+\frac{1}{2}=1.$$

This shows that for any fair predictor $h$, the sum of the $\ell_1$ errors of $h$ on both groups has to be at least 1. On the other hand, there exists a trivial unfair algorithm that makes no error on either group by also taking the protected attribute into consideration: $\forall x \in \{0, 1\}$, $h^*(x) = 0$ if $A = 0$, else $h^*(x) = 1$.

## 3.1 The Cost Of Statistical Parity Under Noiseless Setting

The example in the previous section corresponds to a worst case where $Y = A$. On the other hand, it is also clear that when the target variable $Y$ is indeed independent of the protected attribute $A$, there will be no tension between statistical parity and accuracy. The following theorem exactly characterizes the tradeoff between fairness and accuracy by taking advantage of the relationship between $Y$ and $A$:

Theorem 3.1. Let $\widehat{Y} = h(X)$ be a predictor.
## 3.1 The Cost Of Statistical Parity Under Noiseless Setting

The example in the previous section corresponds to a worst case where $Y=A$. On the other hand, it is also clear that when the target variable $Y$ is indeed independent of the protected attribute $A$, there is no tension between statistical parity and accuracy. The following theorem exactly characterizes the tradeoff between fairness and accuracy by taking advantage of the relationship between $Y$ and $A$:

Theorem 3.1. Let $\widehat{Y}=h(X)$ be a predictor. If $\widehat{Y}$ satisfies statistical parity, then $\forall p\geq1$,

$$\varepsilon_{p,\mu_0}(\widehat{Y})+\varepsilon_{p,\mu_1}(\widehat{Y})\geq W_p(Y_\sharp\mu_0,Y_\sharp\mu_1).\tag{5}$$

We provide a proof by picture that illustrates the high-level idea in Fig. 1. For the special cases $p=1$ and $p=2$, Theorem 3.1 gives the following lower bounds on the sum of MAE and MSE over the two groups, respectively:

Corollary 3.1. If $\widehat{Y}$ satisfies statistical parity, then $\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|$ and $\varepsilon_{2,\mu_0}^2(\widehat{Y})+\varepsilon_{2,\mu_1}^2(\widehat{Y})\geq\frac{1}{2}|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|^2$.

Remark First of all, the lower bound $W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)$ is a measure of the distance between the marginal distributions of $Y$ conditioned on $A=0$ and $A=1$, respectively. Hence, when $A$ is independent of $Y$, we have $Y_\sharp\mu_0=Y_\sharp\mu_1$, so the lower bound gracefully reduces to 0, i.e., there is no essential tradeoff between fairness and accuracy. On the other extreme, consider $Y=cA$ with $c>0$. In this case, $A$ fully describes $Y$, and it is easy to verify that $W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)=c$, which means the lower bound also takes into account the magnitude of the target variable $Y$. For a protected attribute $A$ that takes more than two values, Theorem 3.1 can be extended by considering all pairwise lower bounds and averaging over them. Furthermore, the lower bound is sharp, in the sense that for every $p\geq1$ there exist problem instances that achieve it, e.g., Example 3.1. To see this, consider the $\ell_p$ error of any predictor $h$ in that example:

$$\begin{aligned}\varepsilon_{p,\mu_0}(h)+\varepsilon_{p,\mu_1}(h)&=\left(\frac{1}{2}|h(0)-0|^p+\frac{1}{2}|h(1)-0|^p\right)^{1/p}+\left(\frac{1}{2}|h(0)-1|^p+\frac{1}{2}|h(1)-1|^p\right)^{1/p}\\&\geq\frac{1}{2}|h(0)-0|+\frac{1}{2}|h(1)-0|+\frac{1}{2}|h(0)-1|+\frac{1}{2}|h(1)-1|\\&\geq\frac{1}{2}+\frac{1}{2}=1=W_p(Y_\sharp\mu_0,Y_\sharp\mu_1),\end{aligned}$$

where the first inequality follows from Jensen's inequality and the fact that $g(t)=t^{1/p}$ for $p\geq1$ is concave over $t\geq0$. On the other hand, it can be readily verified that the fair predictor $h(X)=1/2$ attains the lower bound. As another example, consider the following Gaussian case for $p=2$:

Example 3.2 (Gaussian case). For $a\in\{0,1\}$, let the marginal distribution $X_\sharp\mu_a$ be a standard Gaussian distribution $\mathcal{N}(0,I_d)$ and assume $A\perp X$. Fix $w\in\mathbb{R}^d$ with $\|w\|=1$, and construct $Y_0=w^\top X-1$ and $Y_1=w^\top X+1$. Now, for any regressor $\widehat{Y}=h(X)$, the data-processing inequality gives $\widehat{Y}\perp A$, so $\widehat{Y}$ is fair. However, consider the $\ell_2$ error of $h$ on the two groups:

$$\begin{aligned}\varepsilon_{2,\mu_0}(h)+\varepsilon_{2,\mu_1}(h)&=\mathbb{E}_X^{1/2}[(h(X)-Y_0)^2]+\mathbb{E}_X^{1/2}[(h(X)-Y_1)^2]\\&\geq\mathbb{E}_X[|h(X)-Y_0|]+\mathbb{E}_X[|h(X)-Y_1|]\geq\mathbb{E}_X[|Y_0-Y_1|]=2,\end{aligned}$$

where the first inequality is due to Jensen's inequality. On the other hand, note that the distributions of $Y_0$ and $Y_1$ are $\mathcal{N}(-1,1)$ and $\mathcal{N}(1,1)$, respectively. The analytic formula (Givens et al., 1984, Proposition 7) for the $W_2$ distance between two Gaussians $\mathcal{N}(m_0,\Sigma_0)$ and $\mathcal{N}(m_1,\Sigma_1)$ is

$$W_2^2(\mathcal{N}(m_0,\Sigma_0),\mathcal{N}(m_1,\Sigma_1))=\|m_0-m_1\|^2+\mathrm{Tr}\left(\Sigma_0+\Sigma_1-2\left(\Sigma_0^{1/2}\Sigma_1\Sigma_0^{1/2}\right)^{1/2}\right),$$

which shows that $W_2(Y_0,Y_1)=|-1-1|=2$.
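Example 3.2 is likewise easy to verify by simulation. The following sketch (synthetic data of our own choosing) estimates $W_2$ by coupling sorted samples, which for one-dimensional distributions implements the quantile formula (4) with $p=2$, and recovers the analytic value $W_2=2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000
w = np.ones(d) / np.sqrt(d)              # any unit-norm w works

y0 = rng.normal(size=(n, d)) @ w - 1.0   # targets on group A = 0
y1 = rng.normal(size=(n, d)) @ w + 1.0   # targets on group A = 1

# Empirical W2 over R via the quantile formula (4): couple order statistics.
w2 = np.sqrt(np.mean((np.sort(y0) - np.sort(y1)) ** 2))
print(f"empirical W2 ~= {w2:.3f}  (analytic value: 2)")
```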
Further, consider $\widehat{Y}^*=h^*(X)=w^\top X$; then

$$\varepsilon_{2,\mu_0}(h^*)+\varepsilon_{2,\mu_1}(h^*)=\mathbb{E}_X^{1/2}[(h^*(X)-Y_0)^2]+\mathbb{E}_X^{1/2}[(h^*(X)-Y_1)^2]=1+1=2.$$

Hence $h^*$ achieves the lower bound and the lower bound is verified.

It is worth pointing out that the lower bound in Theorem 3.1 is algorithm-independent and holds on the population distribution. That being said, by using recent tail bounds (Lei et al., 2020; Weed et al., 2019) on the expected Wasserstein distance between empirical distributions and their population counterparts, it is not hard to extend Theorem 3.1 to a finite-sample high-probability bound:

Theorem 3.2. Let $\widehat{Y}=h(X)$ be the predictor and $\hat{\mu}$ be an empirical distribution induced from a sample of size $n$ drawn from $\mu$. If $\widehat{Y}$ satisfies statistical parity, then there exists an absolute constant $c_1>0$ such that for $0<\delta<1$, with probability at least $1-\delta$ over the draw of the sample,

$$\varepsilon_{2,\mu_0}(\widehat{Y})+\varepsilon_{2,\mu_1}(\widehat{Y})\geq\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)-\left(2c_1+\sqrt{2\log(2/\delta)}\right)\sqrt{\frac{1}{n}}.\tag{6}$$

Remark It is possible to obtain better lower bounds for the $\ell_2$ error in Theorem 3.2, but that requires more assumptions on the underlying distribution $\mu$, e.g., a strongly log-concave density. The first term in the lower bound, $W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)$, can be efficiently computed from the data by solving a linear program (Cuturi & Doucet, 2014, Problem (3)). Furthermore, it is worth pointing out that the lower bound in Theorem 3.2 applies to all predictors $\widehat{Y}$ and is insensitive to the marginal distribution of $A$. As a comparison, let $p_a:=\Pr_\mu(A=a)$; then $\varepsilon_{p,\mu}(\widehat{Y})=p_0\varepsilon_{p,\mu_0}(\widehat{Y})+p_1\varepsilon_{p,\mu_1}(\widehat{Y})$. In this case, if the group ratio is imbalanced, the overall error $\varepsilon_{p,\mu}(\widehat{Y})$ can still be small even if the minority group suffers a large error.

![6_image_0.png](6_image_0.png)

Figure 1: Proof by picture. Under the constraint of statistical parity, the predictor $h_{\text{fair}}(\cdot)$ on $\mu_0$ and $\mu_1$ induces the same predictive distribution over $\widehat{Y}$. Applying a triangle inequality (with $W_p(\cdot,\cdot)$) to the triangle in the right circle and using the fact that the Wasserstein distance is a lower bound on the regression error completes the proof.

Using Theorem 3.1, we can also bound the joint error over the whole population:

Corollary 3.2. Let $\widehat{Y}=h(X)$ be a predictor. If $\widehat{Y}$ satisfies statistical parity, then $\forall p\geq1$, the joint error has the following lower bound:

$$\varepsilon_{p,\mu}(\widehat{Y})\geq H_{0\text{-}1}(A)\cdot W_p(Y_\sharp\mu_0,Y_\sharp\mu_1).\tag{7}$$

Compared with the one in Theorem 3.1, the lower bound on the joint error in Corollary 3.2 additionally depends on the zero-one entropy of $A$. In particular, if the marginal distribution of $A$ is skewed, then $H_{0\text{-}1}(A)$ will be small, which means that fairness will not reduce the joint accuracy too much. In this case, even if $W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)$ is large, the joint error $\varepsilon_{p,\mu}(\widehat{Y})$ need not be large. However, this is because the price, in terms of the drop in accuracy, is paid by the minority group.
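Before moving on, note that the data-dependent lower bound (6) is straightforward to evaluate on a sample: compute the empirical $W_1$ between the group-wise target distributions and subtract the $O(1/\sqrt{n})$ correction. In the sketch below (with illustrative synthetic targets), the value of the absolute constant $c_1$ is not pinned down by the theorem, so the default of 1.0 is purely a placeholder:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sp_error_lower_bound(y_group0, y_group1, delta=0.05, c1=1.0):
    """Empirical lower bound (6) on the sum of group-wise l1 errors of any
    predictor satisfying statistical parity. `c1` is the (unspecified)
    absolute constant from Theorem 3.2; 1.0 is only a placeholder."""
    n = min(len(y_group0), len(y_group1))
    w1_hat = wasserstein_distance(y_group0, y_group1)
    correction = (2 * c1 + np.sqrt(2 * np.log(2 / delta))) / np.sqrt(n)
    return max(w1_hat - correction, 0.0)

rng = np.random.default_rng(0)
y0 = rng.normal(-1.0, 1.0, size=10_000)   # targets on group A = 0
y1 = rng.normal(+1.0, 1.0, size=10_000)   # targets on group A = 1
print(f"lower bound ~= {sp_error_lower_bound(y0, y1):.3f}")  # close to W1 = 2
```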
Our observation here suggests that the joint error $\varepsilon_{p,\mu}(\widehat{Y})$ is not necessarily the right objective to look at in high-stakes applications, since it implicitly takes the imbalance between different subgroups into account. Instead, a more appealing alternative to consider is the *balanced error rate*:

$$\text{Balanced Error Rate of }\widehat{Y}:=\frac{1}{2}\left(\varepsilon_{p,\mu_0}(\widehat{Y})+\varepsilon_{p,\mu_1}(\widehat{Y})\right),\tag{8}$$

which applies balanced weights to both groups in the objective function. Clearly, (8) can be reduced to the so-called cost-sensitive loss, where data from group $a\in\{0,1\}$ is multiplied by a positive weight that is the reciprocal of the group's population level, i.e., $1/\Pr(A=a)$.

Comparisons with Related Lower Bounds It is instructive to compare the above lower bound on the population error with that of (Chzhen et al., 2020a, Theorem 2.3), where the authors use a Wasserstein-barycenter characterization to give a lower bound for the special case of the squared $\ell_2$ error ($p=2$) when the regressor can explicitly take the protected attribute as input. In contrast, our results apply to the general $\ell_p$ loss. To provide a more formal and detailed comparison, we first state the theorem for the mean-squared error in the setting where the regressor has explicit access to the protected attribute $A$, from Chzhen et al. (2020a) (with notation adapted for consistency):

Theorem 3.3 (Chzhen et al. (2020a), Theorem 2.3). Assume, for each $a\in\{0,1\}$, that the univariate measure $Y_\sharp\mu_a$ has a density, and let $p_a:=\Pr(A=a)$. Then

$$\min_{g\text{ satisfies statistical parity}}\mathbb{E}[(f^*(X,A)-g(X,A))^2]=\min_{\nu}\sum_{a\in\{0,1\}}p_a\cdot W_2^2(Y_\sharp\mu_a,\nu),$$

where $f^*(X,A)$ is the Bayes optimal regressor and $\nu$ is a distribution over $\mathbb{R}$.

Remark First, the quantity of interest in Theorem 3.3 is the discrepancy between a fair predictor $g(\cdot,\cdot)$ and the Bayes optimal predictor $f^*(\cdot,\cdot)$. On the other hand, the cost we are interested in in this work is the excess risk (c.f. Definition 3.1) of a fair predictor. These two quantities are not the same in general. However, in the noiseless setting, we know that $Y=f^*(X,A)$ and the excess risk reduces to the error $\varepsilon_{2,\mu}$. In this case, the costs of fairness in Theorem 3.1 and Theorem 3.3 are related as follows:

$$\begin{aligned}\mathbb{E}[(Y-g(X,A))^2]&=\sum_{a\in\{0,1\}}p_a\cdot\varepsilon_{2,\mu_a}^2(\widehat{Y})\\&=\left(\sum_{a\in\{0,1\}}p_a\cdot\varepsilon_{2,\mu_a}^2(\widehat{Y})\right)\cdot\left(\sum_{a\in\{0,1\}}p_a\right)\\&\geq\left(\sum_{a\in\{0,1\}}\sqrt{p_a}\cdot\sqrt{p_a}\,\varepsilon_{2,\mu_a}(\widehat{Y})\right)^2=\varepsilon_{2,\mu}^2(\widehat{Y}),\end{aligned}$$

where the inequality is an application of the Cauchy-Schwarz inequality. Furthermore, although in both Theorem 3.1 and Theorem 3.3 we require the predictor to be fair in the sense of statistical parity, the class of feasible predictors in Theorem 3.3 is still larger than that of Theorem 3.1. To see this, note that in Theorem 3.1, beyond asking for $h(\cdot)$ to be fair, the same attribute-blind predictor $h$ has to be used for both groups. On the other hand, although the attribute-aware predictor $g(\cdot,\cdot)$ is constrained to be fair, a different predictor $g(\cdot,a)$ can be applied to each group indexed by $A=a$. Last but not least, in the noiseless case with $p=2$, one can use Theorem 3.3 to obtain Theorem 3.1 as follows. Let $W:=W_2(Y_\sharp\mu_0,Y_\sharp\mu_1)$.
$$\begin{aligned}\sum_{a\in\{0,1\}}p_a\cdot\varepsilon_{2,\mu_a}^2(\widehat{Y})&\geq\min_{h\text{ satisfies statistical parity}}\mathbb{E}[(h(X)-Y)^2]\\&\geq\min_{g\text{ satisfies statistical parity}}\mathbb{E}[(g(X,A)-Y)^2]&&(g(\cdot,\cdot)\text{ has additional access to }A)\\&=\min_{g\text{ satisfies statistical parity}}\mathbb{E}[(f^*(X,A)-g(X,A))^2]&&(\text{noiseless setting, so }Y=f^*(X,A))\\&=\min_{\nu}\sum_{a\in\{0,1\}}p_a\cdot W_2^2(Y_\sharp\mu_a,\nu)&&(\text{Theorem 3.3})\\&=\min_{t\in[0,W]}p_0t^2+p_1(W-t)^2&&(\text{barycenter property of }W_2)\\&=p_0p_1W^2.\end{aligned}$$

Now, since the above inequality holds for every $p_0,p_1$ such that $p_0+p_1=1$, we can choose $p_0$ and $p_1$ as follows:

$$p_0=\frac{\varepsilon_{2,\mu_0}^{-1}(\widehat{Y})}{\varepsilon_{2,\mu_0}^{-1}(\widehat{Y})+\varepsilon_{2,\mu_1}^{-1}(\widehat{Y})},\quad p_1=\frac{\varepsilon_{2,\mu_1}^{-1}(\widehat{Y})}{\varepsilon_{2,\mu_0}^{-1}(\widehat{Y})+\varepsilon_{2,\mu_1}^{-1}(\widehat{Y})}.$$

Under this choice, from $p_0\varepsilon_{2,\mu_0}^2(\widehat{Y})+p_1\varepsilon_{2,\mu_1}^2(\widehat{Y})\geq p_0p_1W^2$, we immediately obtain $\varepsilon_{2,\mu_0}(\widehat{Y})+\varepsilon_{2,\mu_1}(\widehat{Y})\geq W=W_2(Y_\sharp\mu_0,Y_\sharp\mu_1)$, as desired. Note that the above relationship between Theorem 3.1 and Theorem 3.3 only holds for $p=2$ under the noiseless assumption; it is not clear whether such a relationship continues to hold in the noisy setting and for general $p\geq1$. Another related result in the literature is Theorem 3.1 of Chi et al. (2021), which we restate as follows:

Theorem 3.4 (Chi et al. (2021), Theorem 3.1). Let $\widehat{Y}=h(X)$ be a predictor; then

$$\varepsilon_{2,\mu_0}^2(\widehat{Y})+\varepsilon_{2,\mu_1}^2(\widehat{Y})\geq\frac{1}{2}\left(\left[W_1(Y_\sharp\mu_0,Y_\sharp\mu_1)-W_1(h_\sharp\mu_0,h_\sharp\mu_1)\right]_+\right)^2,$$

where $[t]_+:=\max\{0,t\}$.

Remark When $h(\cdot)$ satisfies statistical parity exactly, the above lower bound reduces to $\frac{1}{2}W_1^2(Y_\sharp\mu_0,Y_\sharp\mu_1)$. Again, Theorem 3.4 is not directly comparable to Theorem 3.1, for the same reason that Theorem 3.1 and Theorem 3.3 are not directly comparable. However, we can compare Theorem 3.4 and Theorem 3.3 directly, since both costs are measured by the mean-squared error. To do so, note that for the special case of $W_2$ with $\|\cdot\|_2$ as the underlying metric, the Wasserstein barycenter lies on the Wasserstein geodesic between $Y_\sharp\mu_0$ and $Y_\sharp\mu_1$ (Villani, 2009). Let $\nu^*=\arg\min_\nu\sum_{a\in\{0,1\}}p_a\cdot W_2^2(Y_\sharp\mu_a,\nu)$, i.e., $\nu^*$ is the Wasserstein barycenter. Now, since $W_2(\cdot,\cdot)$ is a metric and $\nu^*$ lies on the geodesic, we know

$$W_2(Y_\sharp\mu_0,\nu^*)+W_2(Y_\sharp\mu_1,\nu^*)=W_2(Y_\sharp\mu_0,Y_\sharp\mu_1).$$

Using this fact, it is straightforward to verify the following chain of inequalities:

$$W_2^2(Y_\sharp\mu_0,\nu^*)+W_2^2(Y_\sharp\mu_1,\nu^*)\geq\frac{1}{2}\left(W_2(Y_\sharp\mu_0,\nu^*)+W_2(Y_\sharp\mu_1,\nu^*)\right)^2=\frac{1}{2}W_2^2(Y_\sharp\mu_0,Y_\sharp\mu_1)\geq\frac{1}{2}W_1^2(Y_\sharp\mu_0,Y_\sharp\mu_1),$$

where the first inequality is due to the AM-GM inequality and the last is due to the monotonicity of $W_p(\cdot,\cdot)$ in $p$. Hence, in the special case of $p=2$ with mean-squared error, the lower bound in Theorem 3.3 is tighter than the one in Theorem 3.4.
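One step in the derivation above that is easy to miss is the closed form of the barycenter objective, $\min_{t\in[0,W]}p_0t^2+p_1(W-t)^2=p_0p_1W^2$, attained at $t=p_1W$. A short numeric check with illustrative values:

```python
import numpy as np

p0, p1, W = 0.3, 0.7, 2.0
t = np.linspace(0.0, W, 100_001)
objective = p0 * t**2 + p1 * (W - t)**2

# The minimum is attained at t = p1 * W with value p0 * p1 * W^2.
print(objective.min(), p0 * p1 * W**2)   # both ~0.84
print(t[objective.argmin()], p1 * W)     # both ~1.4
```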
## 3.2 Extension To Approximate Fairness Under Noisy Setting

In the previous section, we showed that there is an inherent tradeoff between statistical parity and accuracy when a predictor *exactly* satisfies statistical parity; in particular, this holds even if there is a perfect (unfair) regressor on both groups, i.e., even when there is no noise in the underlying population distribution. However, as formulated in (2), in practice we often only ask for approximate fairness, where the quality of the approximation is controlled by the slack variable $\epsilon$. Furthermore, even without the fairness constraint, in most interesting problems we cannot hope to find perfect predictors for the regression problem of interest. Hence, it is natural to ask: what is the tradeoff between fairness and accuracy when the predictor only approximately satisfies fairness ($\epsilon$-SP, Definition 2.3) over a general distribution $\mu$? In this section, we answer this question by generalizing our previous results to lower bounds on both the sum of conditional errors and the joint target error that also take the quality of the approximation into account.

Due to the potential noise in the underlying distribution, we first define the excess risk $r_{p,\mu}(\widehat{Y})$ of a predictor $\widehat{Y}$, which corresponds to the reducible error:

Definition 3.1 (Excess Risk). Let $\widehat{Y}=h(X)\in\mathbb{R}$ be a predictor. The $\ell_p$ excess risk of $\widehat{Y}$ is defined as $r_{p,\mu}(\widehat{Y}):=\varepsilon_{p,\mu}(\widehat{Y})-\varepsilon_{p,\mu}^*$, where $\varepsilon_{p,\mu}^*:=\inf_f\varepsilon_{p,\mu}(f(X))$ is the optimal error over all measurable functions.

Assuming the infimum is achievable, we use $f_i^*$ to denote an optimal regressor without fairness constraints over $\mu_i$, $i\in\{0,1\}$, i.e., $f_i^*\in\arg\min_f\varepsilon_{p,\mu_i}(f(X))$. Then the following holds:

Proposition 3.1. Let $\widehat{Y}=h(X)$ be a predictor. Under Assumption 2.1, for $p\geq1$, if there exists $\epsilon>0$ such that $W_p(h_\sharp\mu_0,h_\sharp\mu_1)\leq\epsilon$, then

$$r_{p,\mu_0}(\widehat{Y})+r_{p,\mu_1}(\widehat{Y})\geq\underbrace{W_p(f_{0\sharp}^*\mu_0,f_{1\sharp}^*\mu_1)}_{\text{distance between optimal unfair predictors across groups}}-2(\varepsilon_{p,\mu_0}^*+\varepsilon_{p,\mu_1}^*)-\epsilon,\tag{9}$$

and $\widehat{Y}$ satisfies $2\sqrt{C\epsilon}$-SP.

Remark It is easy to verify that Proposition 3.1 is a generalization of the lower bound presented in Theorem 3.1: when the $f_i^*$ are perfect predictors, we have $Y_\sharp\mu_i=f_{i\sharp}^*\mu_i$ and $\varepsilon_{p,\mu_i}^*=0$ for $i\in\{0,1\}$. Hence, in this case the excess risk $r_{p,\mu_i}(\widehat{Y})$ reduces to the error $\varepsilon_{p,\mu_i}(\widehat{Y})$. Furthermore, if $\epsilon=0$, i.e., $\widehat{Y}$ satisfies the exact statistical parity condition, then the lower bound (9) recovers the lower bound (5). As a separate note, Proposition 3.1 also implies that one can use the Wasserstein distance between the predicted distributions across groups as a proxy for ensuring approximate statistical parity. This observation has also been made in Dwork et al. (2012, Theorem 3.3) for classification. Finally, similar to Corollary 3.2, the lower bound on the excess risks in Proposition 3.1 can also be extended to a lower bound on the weighted joint risk when approximate statistical parity ($\epsilon$-SP) is enforced as a constraint. The proof is exactly the same as the one for Corollary 3.2, so we omit it here.

## 3.3 Individual Fairness, Accuracy Parity And The Wasserstein Distance

In the previous section, we showed that the Wasserstein distance between the output distributions across groups can be used as a proxy for ensuring approximate statistical parity.
Nevertheless, Theorem 3.1 and Proposition 3.1 show that statistical parity is often at odds with the accuracy of the predictor, and in many real-world scenarios SP is insufficient as a notion of fairness (Dwork et al., 2012, Section 3.1). Alternatively, a separate notion of fairness, known as *individual fairness*, has been proposed in Dwork et al. (2012). Roughly speaking, under the framework of individual fairness, for the classification task $T$ of interest, the learner has access to a (hypothetical) task-specific metric $d_T(\cdot,\cdot)$ for determining the degree to which individuals are similar w.r.t. the task $T$. We emphasize that the metric $d_T(\cdot,\cdot)$ should be task-specific, and in practice it is often hard (or infeasible) to determine. Nevertheless, in this section we are mainly interested in understanding the relationship between individual fairness and accuracy parity, using the Wasserstein distance as a bridge between the two. In particular, we say that a predictor $h$ is individually fair if it treats similar individuals (as measured by $d_T(\cdot,\cdot)$) similarly:

Definition 3.2 (Individual Fairness, (Dwork et al., 2012)). For the task $T$ with task-specific metric $d_T(\cdot,\cdot)$, a regressor $h$ satisfies $\rho$-individual fairness if $\forall x,x'\in\mathcal{X}$, $|h(x)-h(x')|\leq\rho\,d_T(x,x')$.

Essentially, individual fairness places a Lipschitz-continuity constraint on the predictor with respect to the task-specific metric $d_T(\cdot,\cdot)$. Note that in the original definition (Dwork et al., 2012, Definition 2.1) the authors use a general metric $d_T(\cdot,\cdot)$ as a similarity measure between individuals, and the choice of this similarity measure is at the center of related applications. In this section we use $\|\cdot\|$ in Definition 3.2 mainly for the purpose of illustration, but the following results extend straightforwardly to any metric $d_T(\cdot,\cdot)$ that is upper bounded by the norm $\|\cdot\|$, i.e., such that there exists $c>0$ with $d_T(x,x')\leq c\|x-x'\|$ for all $x,x'\in\mathcal{X}$.

Another notion of group fairness that has gained increasing attention (Buolamwini & Gebru, 2018; Bagdasaryan et al., 2019; Chi et al., 2021) is *accuracy parity*:

Definition 3.3 ($\epsilon$-Accuracy Parity). Given a joint distribution $\mu$ and $0\leq\epsilon\leq1$, a regressor $\widehat{Y}=h(X)$ satisfies $\epsilon$-*accuracy parity* if $|\varepsilon_{1,\mu_0}(\widehat{Y})-\varepsilon_{1,\mu_1}(\widehat{Y})|\leq\epsilon$.

Accuracy parity calls for approximately equal performance of the predictor across the different groups. The following proposition states the relationship between individual fairness, accuracy parity, and the $W_1$ distance between the distributions $\mu_0$ and $\mu_1$ of the different groups:

Proposition 3.2. If $h(\cdot)$ is $\rho$-individually fair, then $h$ satisfies $\sqrt{\rho^2+1}\cdot W_1(\mu_0,\mu_1)$-accuracy parity.

Together with Lemma A.1 in the appendix, Proposition 3.2 suggests that in order to achieve approximate accuracy parity, one can constrain the predictor to be Lipschitz continuous while simultaneously decreasing the $W_1$ distance between the distributions across groups via representation learning. In the case where the groups are similar and the Wasserstein distance is small, individual fairness provides some guidance toward approximate accuracy parity. However, in cases where the groups are different (or even disjoint), representation learning becomes more important.
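Proposition 3.2 can be probed empirically by estimating $W_1(\mu_0,\mu_1)$ between the two groups' joint samples $(x,y)$ and comparing the accuracy gap of a Lipschitz predictor against $\sqrt{\rho^2+1}\cdot W_1$. A sketch using the POT optimal-transport library (the data and the linear predictor are synthetic stand-ins of our own choosing):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n, d, rho = 500, 3, 0.5

# Synthetic joint samples (x, y) for the two groups.
x0, x1 = rng.normal(0, 1, (n, d)), rng.normal(0.3, 1, (n, d))
y0, y1 = x0.sum(1), x1.sum(1) + 0.2

# A rho-Lipschitz predictor: a linear map whose weight vector has norm rho.
w = rho * np.ones(d) / np.sqrt(d)
gap = abs(np.mean(np.abs(x0 @ w - y0)) - np.mean(np.abs(x1 @ w - y1)))

# W1 between the empirical joint distributions over (x, y).
z0 = np.hstack([x0, y0[:, None]])
z1 = np.hstack([x1, y1[:, None]])
M = ot.dist(z0, z1, metric="euclidean")
w1 = ot.emd2(np.full(n, 1 / n), np.full(n, 1 / n), M)

# Up to sampling error, the gap should not exceed the bound.
print(gap, np.sqrt(rho**2 + 1) * w1)
```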
## 3.4 Fair Representations With Wasserstein Distance

From the previous discussion, we know that the Wasserstein distance between the predicted distributions and between the input distributions can be used to control statistical parity and accuracy parity, respectively. *Is there a way to simultaneously achieve both goals?* In this section we provide an affirmative answer to this question via learning fair representations. The high-level idea is simple and intuitive: given the input variable $X$, we seek to learn a representation $Z=g(X)$ such that $W_1(g_\sharp\mu_0,g_\sharp\mu_1)$ is small. If, furthermore, the predictor $h$ acting on the representation $Z$ is individually fair, we can then hope to achieve small statistical and accuracy disparity simultaneously.

One potential benefit of learning fair representations for statistical parity over methods based on constrained optimization (Agarwal et al., 2019) is that the learned fair representations can be released, so that different downstream tasks can build on top of them without having to worry about the fairness constraint. Due to the data-processing inequality, no matter what predictors for the downstream tasks are placed on top of the released fair representations, the resulting predictors are guaranteed to satisfy the statistical parity constraint as well. In contrast, constrained optimization methods typically require a different objective function for each downstream task.

Concretely, the following proposition says that if the Wasserstein distance between the feature distributions of the two groups, $W_1(g_\sharp\mu_0,g_\sharp\mu_1)$, is small, then as long as the predictor is individually fair, it also satisfies approximate statistical parity:

Proposition 3.3. Let $Z=g(X)$ be the features from input $X$. If $W_1(g_\sharp\mu_0,g_\sharp\mu_1)\leq\epsilon$ and $\widehat{Y}=h(Z)$ is $\rho$-Lipschitz, then $\widehat{Y}=(h\circ g)(X)$ satisfies $2\sqrt{C\rho\epsilon}$-SP.

In practice, since we only have finite samples from the corresponding distributions, we replace all distributions with their empirical versions. Furthermore, instead of using the joint error as our objective, as discussed in the previous section, we propose to use the balanced error rate:

$$\min_{g,\,\|h\|_L\leq\rho}\;\max_{\|f\|_L\leq1}\;\frac{1}{2}\left(\varepsilon_{2,\mu_0}(h\circ g)+\varepsilon_{2,\mu_1}(h\circ g)\right)+\tau\cdot\left|\mathbb{E}_{g_\sharp\mu_0}[f(Z)]-\mathbb{E}_{g_\sharp\mu_1}[f(Z)]\right|,\tag{10}$$

where $\tau>0$ is a hyperparameter that trades off the $\ell_p$ error against the Wasserstein distance, and $\rho$ is the Lipschitz constant of the predictor. The above problem can be optimized with a gradient descent-ascent algorithm (Edwards & Storkey, 2015; Zhang et al., 2018). To implement the Lipschitz constraints, we apply weight clipping to the parameters of both the adversary and the target predictor. More specifically, we use projected gradient descent to ensure that the $\ell_2$ norms of the parameters of the discriminator and of the target predictor are bounded by preset values.
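A minimal PyTorch sketch of the gradient descent-ascent procedure for (10) is given below. The input dimension and batch handling are illustrative placeholders rather than the paper's exact configuration (the hidden widths 50, 20, and 10 and the learning rate of 1.0 follow Appendix B; the per-parameter clamp is a simplification of the $\ell_2$-norm projection described above):

```python
import torch
import torch.nn as nn

def clip_weights(module, c):
    # Projection step: clamp every parameter into [-c, c]. The paper
    # projects the l2 norm instead; a clamp is the simplest stand-in.
    with torch.no_grad():
        for p in module.parameters():
            p.clamp_(-c, c)

IN_DIM = 10  # placeholder input dimension
encoder = nn.Sequential(nn.Linear(IN_DIM, 50), nn.ReLU(),
                        nn.Linear(50, 20), nn.ReLU())        # g
predictor = nn.Linear(20, 1)                                  # h
adversary = nn.Sequential(nn.Linear(20, 10), nn.ReLU(),
                          nn.Linear(10, 1))                   # f

opt_main = torch.optim.SGD(list(encoder.parameters())
                           + list(predictor.parameters()), lr=1.0)
opt_adv = torch.optim.SGD(adversary.parameters(), lr=1.0)
tau, rho_clip, adv_clip = 1.0, 0.1, 0.1

def step(x0, y0, x1, y1):
    # Ascent on f: maximize the mean gap |E f(Z0) - E f(Z1)|.
    z0, z1 = encoder(x0).detach(), encoder(x1).detach()
    gap = adversary(z0).mean() - adversary(z1).mean()
    opt_adv.zero_grad(); (-gap.abs()).backward(); opt_adv.step()
    clip_weights(adversary, adv_clip)     # keep f (roughly) 1-Lipschitz

    # Descent on g, h: balanced squared error plus tau times the gap.
    z0, z1 = encoder(x0), encoder(x1)
    err = 0.5 * ((predictor(z0).squeeze(-1) - y0) ** 2).mean() \
        + 0.5 * ((predictor(z1).squeeze(-1) - y1) ** 2).mean()
    gap = (adversary(z0).mean() - adversary(z1).mean()).abs()
    opt_main.zero_grad(); (err + tau * gap).backward(); opt_main.step()
    clip_weights(predictor, rho_clip)     # keep h rho-Lipschitz
```

Note that the clipping radius applied to the predictor plays the role of the Lipschitz constant $\rho$ that is varied in the experiments below.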
Comparisons with Related Fair Representation Learning Methods One closely related approach that uses the Wasserstein distance as a penalty term in representation learning is that of Chi et al. (2021), where the authors also formulate a minimax approach, in their case to reduce the accuracy disparity between different groups. Compared with Eq. (10), from a model perspective, the main difference between our formulation and the one in Chi et al. (2021) is that the discriminator in Eq. (10) only takes the features $Z$ from the different groups as input, whereas the corresponding discriminator in Chi et al. (2021) takes both the features $Z$ and the label $Y$ as input. Because of this difference, the motivations of the two approaches are quite different. In our case, we use the Wasserstein distance to penalize the discrepancy between the marginal feature distributions in order to (approximately) achieve statistical parity, whereas the Wasserstein metric in Chi et al. (2021) is used to align the conditional feature distributions (conditioned on the label $Y$) between the groups, in order to minimize the accuracy disparity. To encourage accuracy parity, in Eq. (10) we also constrain the predictor to be Lipschitz continuous, following Proposition 3.2.

## 4 Experiments

Our theoretical results imply that even if there is no significant drop in the overall population error when a model is built to satisfy statistical parity, the minority group can still suffer greatly from the reduction in accuracy. On the other hand, by using the balanced error rate as the objective function, we can mitigate the disparate drops in accuracy between the two groups. Furthermore, by minimizing the Wasserstein distance between the feature distributions across groups, we can hope to achieve both approximate statistical parity and approximate accuracy parity. To verify these implications, we conduct experiments on a real-world benchmark dataset, the Law School dataset (Wightman, 1998), and present empirical results under various metrics. We refer readers to Appendix B for further details about the dataset, our pre-processing pipeline, and the models used in the experiments.

Experimental Setup To demonstrate the effect of using the Wasserstein distance to regularize the representations with adversarial training, we perform a controlled experiment in which the baseline model is fixed to be a three hidden-layer feed-forward network with ReLU activations, denoted MLP. To verify the effect of the balanced error rate on reducing accuracy disparity, we further study two variants of it: one uses the weighted joint error rate (first row in Table 1) as the objective function, whereas the other (second row in Table 1) uses the balanced error rate. We use W-MLP to denote the model with the Wasserstein constraint for representation learning. In the experiment, all other factors are fixed to be the same across the two methods, including the learning rate, optimization algorithm, number of training epochs, and batch size. To see how the Wasserstein regularization affects the joint error, the conditional errors, as well as statistical parity and accuracy parity, we vary the coefficient $\tau$ for the adversarial loss over 0.1, 1.0, 5.0, and 10.0. For each setup, we repeat the experiment 5 times and report both the mean and the standard deviation.

To further study the role of the Lipschitz constant $\rho$ in achieving statistical parity and reducing accuracy disparity, in a second set of experiments we fix the coefficient $\tau=0.01$ and vary $\rho$ by changing the weight-clipping value of the model parameters of both the adversary and the target predictor. More specifically, we vary the clipping value over 0.01, 0.1, 1.0, and 10.0. Again, for each setup, we repeat the experiment 5 times and report both the mean and the standard deviation.

Results and Analysis The experimental results are listed in Table 1.
In the tables, we use $K(\widehat{Y}_0,\widehat{Y}_1)$ to denote the Kolmogorov-Smirnov distance between the predicted distributions of the two groups, which is also the value of the approximate statistical parity. From Table 1 it is clear that as $\tau$ increases, both the statistical disparity and the accuracy disparity decrease. Interestingly, the overall error $\varepsilon_\mu$ (sensitive to the distribution of $A$) and the sum of group errors $\varepsilon_{\mu_0}+\varepsilon_{\mu_1}$ (insensitive to the imbalance of $A$) only marginally increase. In fact, for $\tau=1.0$ we actually observed better accuracy; we conjecture that the improved performance stems from the implicit regularization induced by weight clipping of the target predictor. For $\tau=10.0$, the last row shows that this method can effectively reduce both the statistical and the accuracy disparity to values very close to 0, although at the cost of increased errors. Comparing the two MLP variants with the weighted joint error and the balanced error rate, we can see that the balanced error rate helps reduce both the Kolmogorov-Smirnov distance between the predicted distributions and the accuracy disparity, at the price of a higher joint error rate and a higher sum of group errors.

The impact of the Lipschitz constant $\rho$ is shown in Table 2. As can be observed, as $\rho$ decreases, both the Kolmogorov-Smirnov distance between the predicted distributions and the accuracy disparity decrease. In particular, when $\rho$ is set to 0.01, $K(\widehat{Y}_0,\widehat{Y}_1)$ drops to 0.003, a value very close to 0. This means that adjusting the Lipschitz constant $\rho$ can potentially have a larger impact than $\tau$ on controlling (approximate) statistical parity. Note, however, that in this case both the joint weighted error and the balanced error rate are also larger, again showing evidence of the tradeoff between accuracy and statistical parity. To conclude, all the empirical results are consistent with our theoretical findings.

| Model | τ | ε_µ | ε_{µ0} + ε_{µ1} | K(Ŷ0, Ŷ1) | \|ε_{µ0}(Ŷ) − ε_{µ1}(Ŷ)\| |
|---|---|---|---|---|---|
| MLP (weighted joint error) | N/A | 0.034±0.005 | 0.069±0.011 | 0.296±0.044 | 0.011±0.004 |
| MLP (balanced error rate) | N/A | 0.037±0.005 | 0.072±0.010 | 0.283±0.045 | 0.010±0.003 |
| W-MLP | 0.1 | 0.034±0.004 | 0.067±0.008 | 0.222±0.037 | 0.011±0.003 |
| W-MLP | 1.0 | 0.030±0.000 | 0.059±0.001 | 0.116±0.032 | 0.011±0.002 |
| W-MLP | 5.0 | 0.034±0.002 | 0.067±0.004 | 0.084±0.073 | 0.008±0.002 |
| W-MLP | 10.0 | 0.035±0.001 | 0.069±0.003 | 0.048±0.059 | 0.006±0.000 |

Table 1: Fair representations with Wasserstein regularization on the Law School dataset with different values of the regularization coefficient τ. We report the overall error, group-wise errors, statistical disparity, and accuracy disparity.

| Model | ρ | ε_µ | ε_{µ0} + ε_{µ1} | K(Ŷ0, Ŷ1) | \|ε_{µ0}(Ŷ) − ε_{µ1}(Ŷ)\| |
|---|---|---|---|---|---|
| W-MLP | 10.0 | 0.036±0.003 | 0.070±0.006 | 0.159±0.032 | 0.011±0.005 |
| W-MLP | 1.0 | 0.036±0.003 | 0.071±0.006 | 0.155±0.034 | 0.011±0.005 |
| W-MLP | 0.1 | 0.035±0.002 | 0.069±0.005 | 0.092±0.044 | 0.008±0.003 |
| W-MLP | 0.01 | 0.037±0.002 | 0.073±0.003 | 0.003±0.005 | 0.006±0.003 |

Table 2: Fair representations with Wasserstein regularization on the Law School dataset with different values of the Lipschitz constant ρ. We report the overall error, group-wise errors, statistical disparity, and accuracy disparity.
## 5 Related Work

Fair Regression Two central notions of fairness have been extensively studied: individual fairness and group fairness. Dwork et al. (2012) defined individual fairness as a Lipschitz constraint on the underlying (randomized) algorithm. However, this definition requires a distance metric a priori to compute the similarity between pairs of individuals, which is often hard to construct or design in practice. Group fairness is a statistical notion, comprising a family of definitions that essentially require certain statistics to be equalized across subgroups. Typical examples include statistical parity, equalized odds (Hardt et al., 2016), and accuracy parity (Buolamwini & Gebru, 2018). In this work we focus on an extension of statistical parity to regression problems and study its theoretical tradeoff with accuracy when the regressor cannot directly take the protected attribute as input during either the training or the inference stage. We also investigate the relationship between individual fairness and accuracy parity and provide a bound through the Wasserstein distance. The line of work on fair regression through regularization techniques dates at least back to Calders et al. (2013), where the authors enforce a first-order moment requirement between the predicted distributions. More recent works include Komiyama et al. (2018), who use the coefficient of determination as a notion of fairness when there are multiple sensitive attributes. When the sensitive attribute is continuous, a generalized definition using the Rényi correlation coefficient exists (Mary et al., 2019). In a recent work, Agarwal et al. (2019) proposed a reduction approach from fair regression to a sequence of cost-sensitive minimization problems. Other approaches include a two-stage recalibration procedure (Chzhen et al., 2020b) and robust optimization techniques (Narasimhan et al., 2020). Our definition of statistical parity in the regression setting is stronger than that of Calders et al. (2013), who proposed to use the difference of means as the metric. When the output dimension is one, our definition also coincides with the one proposed by Agarwal et al. (2019), which amounts to the Kolmogorov-Smirnov distance.

Tradeoff between Fairness and Accuracy Although it has long been empirically observed that there is an inherent tradeoff between accuracy and statistical parity in both classification and regression problems (Calders et al., 2009; Zafar et al., 2015; Zliobaite, 2015; Berk et al., 2017; Corbett-Davies et al., 2017; Zhao et al., 2019a), precise characterizations of such tradeoffs are less explored. Berk et al. (2017) defined the price of fairness (PoF), under their convex framework for linear and logistic regression problems, as the ratio between the loss of the optimal (approximately) fair regressor and the Bayes optimal loss. Under this PoF definition, the authors explored the accuracy-fairness frontier by varying the approximation coefficient of the fairness constraint. Menon & Williamson (2018, Proposition 8) explored this tradeoff in terms of the fairness frontier function in the context of cost-sensitive binary classification. Zhao & Gordon (2022) proved a lower bound on the joint error that has to be incurred by any fair algorithm satisfying statistical parity. Our negative result is similar in nature to that of Zhao & Gordon (2022) and can be understood as a generalization of their results from classification to regression.
Chzhen et al. (2020a) and Le Gouic et al. (2020) concurrently derived an analytic bound characterizing the price of statistical parity in regression under the $\ell_2$ loss when the learner can take the sensitive attribute explicitly as input. In that case, the lower bound is given by the optimal transport distance from the two group distributions to a common distribution, characterized by the $W_2$ barycenter. Our results differ in that, in our setting, the learner cannot use the sensitive attribute directly as input, and our results hold for the general $\ell_p$ loss. Note that this is significant, because it is not clear how to extend their results to spaces with $W_p$ as the metric: the proof depends on a Pythagorean decomposition, which only holds under the $\ell_2$ distance. On the upside, under certain generative assumptions on the sampling bias, a line of recent work shows that fairness constraints can instead improve the accuracy of the predictor (Dutta et al., 2020; Blum & Stangl, 2020). In particular, Blum & Stangl (2020) prove that if the observable data are subject to labeling bias, then the Equality of Opportunity constraint can help recover the Bayes optimal classifier. Note that this does not contradict our results, since in this work we do not make any assumptions about the underlying training distributions.

## 6 Conclusion

In this paper, we show that when the target distribution differs across demographic subgroups, any attribute-blind fair algorithm in the statistical parity sense has to incur a large error on at least one of the groups. In particular, we characterize this tradeoff using the Wasserstein distance. We also establish a connection between individual fairness and accuracy parity, where, again, the accuracy disparity gap is characterized by the Wasserstein distance. Besides these theoretical contributions, our analysis also suggests a practical algorithm for fair regression through learning representations of the different demographic subgroups that are close in the sense of the Wasserstein distance. Empirical results on a real-world dataset confirm our findings.

## Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments in improving the technical presentation of the paper. The work of H.Z. was supported in part by a Facebook Research Award and Amazon AWS Cloud Credit.

## References

General data protection regulation. URL https://gdpr-info.eu/art-22-gdpr/. [Online; accessed 13-May-2021].

Alekh Agarwal, Miroslav Dudik, and Zhiwei Steven Wu. Fair regression: Quantitative definitions and reduction-based algorithms. In *International Conference on Machine Learning*, pp. 120–129, 2019.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. *arXiv preprint arXiv:1701.07875*, 2017.

Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. *Advances in Neural Information Processing Systems*, 32:15479–15488, 2019.

Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. *NIPS Tutorial*, 2017.

Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. A convex framework for fair regression. *arXiv preprint arXiv:1706.02409*, 2017.

Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. *arXiv preprint arXiv:1707.00075*, 2017.
Avrim Blum and Kevin Stangl. Recovering from biased data: Can fairness constraints improve accuracy? In *Symposium* on Foundations of Responsible Computing (FORC), volume 1, 2020. Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on fairness, accountability and transparency*, pp. 77–91. PMLR, 2018. Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops, pp. 13–18. IEEE, 2009. Toon Calders, Asim Karim, Faisal Kamiran, Wasif Ali, and Xiangliang Zhang. Controlling attribute effect in linear regression. In *2013 IEEE 13th International Conference on Data Mining*, pp. 71–80. IEEE, 2013. Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In *Proceedings of the 31st International Conference on* Neural Information Processing Systems, pp. 3995–4004, 2017. Sourav Chatterjee. Lecture notes in stein's method and applications, August 2007. Jianfeng Chi, Yuan Tian, Geoffrey J Gordon, and Han Zhao. Understanding and mitigating accuracy disparity in regression. In *International Conference on Machine Learning*, 2021. Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, and Massimiliano Pontil. Fair regression with wasserstein barycenters. *arXiv preprint arXiv:2006.07286*, 2020a. Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, and Massimiliano Pontil. Fair regression via plug-in estimator and recalibration with statistical guarantees. *Advances in Neural Information Processing Systems*, 33: 19137–19148, 2020b. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In *Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data* Mining, pp. 797–806. ACM, 2017. Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. In *Advances in Neural Information Processing Systems*, pp. 3730–3739, 2017. Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In International conference on machine learning, pp. 685–693. PMLR, 2014. Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, and Kush Varshney. Is there a trade-off between fairness and accuracy? a perspective using mismatched hypothesis testing. In *International Conference on* Machine Learning, pp. 2803–2813. PMLR, 2020. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214–226. ACM, 2012. Harrison Edwards and Amos Storkey. Censoring representations with an adversary. *arXiv preprint arXiv:1511.05897*, 2015. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In *proceedings of the 21th ACM SIGKDD international conference on knowledge* discovery and data mining, pp. 259–268, 2015. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016. Alison L Gibbs and Francis Edward Su. 
On choosing and bounding probability metrics. *International statistical review*, 70(3):419–435, 2002. Clark R Givens, Rae Michael Shortt, et al. A class of wasserstein metrics for probability distributions. *The Michigan* Mathematical Journal, 31(2):231–240, 1984. Peter D Grünwald, A Philip Dawid, et al. Game theory, maximum entropy, minimum discrepancy and robust bayesian decision theory. *Annals of statistics*, 32(4):1367–1433, 2004. Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In *Advances in neural* information processing systems, pp. 3315–3323, 2016. Harry J Holzer and David Neumark. Affirmative action: What do we know? *Journal of Policy Analysis and Management*, 25(2):463–490, 2006. Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International conference on machine learning, pp. 3020–3029, 2016. Fredrik D Johansson, Uri Shalit, Nathan Kallus, and David Sontag. Generalization bounds and representation learning for estimation of potential outcomes and causal effects. *arXiv preprint arXiv:2001.07426*, 2020. James E Johndrow, Kristian Lum, et al. An algorithm for removing sensitive information: application to raceindependent recidivism prediction. *The Annals of Applied Statistics*, 13(1):189–220, 2019. Faisal Kamiran and Toon Calders. Classifying without discriminating. In 2009 2nd International Conference on Computer, Control and Communication, pp. 1–6. IEEE, 2009. Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In 2011 IEEE 11th International Conference on Data Mining Workshops, pp. 643–650. IEEE, 2011. Soheil Kolouri, Se Rim Park, Matthew Thorpe, Dejan Slepcev, and Gustavo K Rohde. Optimal mass transport: Signal processing and machine-learning applications. *IEEE signal processing magazine*, 34(4):43–59, 2017. Junpei Komiyama, Akiko Takeda, Junya Honda, and Hajime Shimao. Nonconvex optimization for regression with fairness constraints. In *International conference on machine learning*, pp. 2737–2746. PMLR, 2018. Thibaut Le Gouic, Jean-Michel Loubes, and Philippe Rigollet. Projection to fairness in statistical learning. *arXiv* e-prints, pp. arXiv–2005, 2020. Jing Lei et al. Convergence and concentration of empirical measures under wasserstein distance in unbounded functional spaces. *Bernoulli*, 26(1):767–798, 2020. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. arXiv preprint arXiv:1511.00830, 2015. Kristian Lum and James Johndrow. A statistical framework for fair predictive algorithms. arXiv preprint arXiv:1610.08077, 2016. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In *International Conference on Machine Learning*, pp. 3381–3390, 2018. Jérémie Mary, Clément Calauzenes, and Noureddine El Karoui. Fairness-aware learning for continuous attributes and treatments. In *International Conference on Machine Learning*, pp. 4382–4391. PMLR, 2019. Aditya Krishna Menon and Robert C Williamson. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency, pp. 107–118, 2018. Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, and Serena Wang. Pairwise fairness for ranking and regression. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 5248–5255, 2020. Uri Shalit, Fredrik D Johansson, and David Sontag. 
Estimating individual treatment effect: generalization bounds and algorithms. In *Proceedings of the 34th International Conference on Machine Learning-Volume 70*, pp. 3076–3085. JMLR.org, 2017.

Sahil Verma and Julia Rubin. Fairness definitions explained. In *2018 IEEE/ACM International Workshop on Software Fairness (FairWare)*, pp. 1–7. IEEE, 2018.

Cédric Villani. *Optimal transport: old and new*, volume 338. Springer, 2009.

Jonathan Weed, Francis Bach, et al. Sharp asymptotic and finite-sample rates of convergence of empirical measures in wasserstein distance. *Bernoulli*, 25(4A):2620–2648, 2019.

Linda F Wightman. LSAC national longitudinal bar passage study. LSAC Research Report Series, 1998.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. *arXiv preprint arXiv:1507.05259*, 2015.

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In *International Conference on Machine Learning*, pp. 325–333, 2013.

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335–340. ACM, 2018.

Han Zhao and Geoffrey J Gordon. Inherent tradeoffs in learning fair representations. In Advances in neural information processing systems, 2019.

Han Zhao and Geoffrey J. Gordon. Inherent tradeoffs in learning fair representations. *Journal of Machine Learning Research*, 23(57):1–26, 2022. URL http://jmlr.org/papers/v23/21-1427.html.

Han Zhao, Amanda Coston, Tameem Adel, and Geoffrey J. Gordon. Conditional learning of fair representations. arXiv preprint arXiv:1910.07162, 2019a.

Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In *International Conference on Machine Learning*, pp. 7523–7532, 2019b.

Indre Zliobaite. On the relation between accuracy and fairness in binary classification. *arXiv preprint arXiv:1505.05723*, 2015.

## A Missing Proofs

In this section we provide all the missing proofs from the main text. For ease of reading, we first restate each result and then give its proof.

## A.1 Proofs Of Theorem 3.1 And Corollary 3.1

Theorem 3.1. Let $\widehat{Y}=h(X)$ be a predictor. If $\widehat{Y}$ satisfies statistical parity, then $\forall p\geq1$,

$$\varepsilon_{p,\mu_0}(\widehat{Y})+\varepsilon_{p,\mu_1}(\widehat{Y})\geq W_p(Y_\sharp\mu_0,Y_\sharp\mu_1).\tag{5}$$

Proof. First, since $W_p(\cdot,\cdot)$ is a metric on probability distributions, the following chain of triangle inequalities holds:

$$W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)\leq W_p(Y_\sharp\mu_0,h_\sharp\mu_0)+W_p(h_\sharp\mu_0,h_\sharp\mu_1)+W_p(h_\sharp\mu_1,Y_\sharp\mu_1).$$

Now, by the assumption that $\widehat{Y}=h(X)$ is independent of $A$, the second term, $W_p(h_\sharp\mu_0,h_\sharp\mu_1)$, is 0, leading to:

$$W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)\leq W_p(Y_\sharp\mu_0,h_\sharp\mu_0)+W_p(h_\sharp\mu_1,Y_\sharp\mu_1).\tag{11}$$
Next, for $a\in\{0,1\}$, by definition of the Wasserstein distance,

$$W_p(Y_\sharp\mu_a,h_\sharp\mu_a)=\left(\inf_\gamma\mathbb{E}_\gamma[|Y-\widehat{Y}|^p]\right)^{1/p}\leq\left(\mathbb{E}_{\mu_a}[|Y-\widehat{Y}|^p]\right)^{1/p}=\varepsilon_{p,\mu_a}(\widehat{Y}),\tag{12}$$

where we use the fact that the joint distribution of $(Y,\widehat{Y})$ under $\mu_a$ is a particular coupling of $Y_\sharp\mu_a$ and $h_\sharp\mu_a$ to establish the inequality. Applying (12) for both $a=0$ and $a=1$ and combining with (11) completes the proof.

Corollary 3.1. If $\widehat{Y}$ satisfies statistical parity, then $\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|$ and $\varepsilon_{2,\mu_0}^2(\widehat{Y})+\varepsilon_{2,\mu_1}^2(\widehat{Y})\geq\frac{1}{2}|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|^2$.

Proof. We first prove the first inequality in Corollary 3.1. Apply Theorem 3.1 with $p=1$. Let $\mathrm{Id}:\mathcal{Y}\to\mathcal{Y}$ be the identity map, i.e., $\mathrm{Id}(y)=y$, $\forall y\in\mathbb{R}$. Clearly $\mathrm{Id}(\cdot)$ is 1-Lipschitz. Using the sup characterization of the Wasserstein distance, we have:

$$\begin{aligned}\mathbb{E}_{\mu_0}[|\widehat{Y}-Y|]+\mathbb{E}_{\mu_1}[|\widehat{Y}-Y|]&\geq W_1(Y_\sharp\mu_0,Y_\sharp\mu_1)&&(\text{Theorem 3.1})\\&=\sup_{\|f\|_L\leq1}\left|\int f\,d(Y_\sharp\mu_0)-\int f\,d(Y_\sharp\mu_1)\right|\\&\geq\left|\int\mathrm{Id}\,d(Y_\sharp\mu_0)-\int\mathrm{Id}\,d(Y_\sharp\mu_1)\right|&&(\mathrm{Id}\text{ is 1-Lipschitz})\\&=\left|\int Y\,d\mu_0-\int Y\,d\mu_1\right|&&(\text{change of variables})\\&=|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|,\end{aligned}$$

where the second-to-last equality follows from the definition of the pushforward distribution.

To prove the second inequality in Corollary 3.1, set $p=2$ in Theorem 3.1 and note that for $a,b\in\mathbb{R}$ we have $a^2+b^2\geq(a+b)^2/2$ by the AM-GM inequality. Hence, when $p=2$,

$$\begin{aligned}\mathbb{E}_{\mu_0}[|\widehat{Y}-Y|^2]+\mathbb{E}_{\mu_1}[|\widehat{Y}-Y|^2]&\geq\mathbb{E}_{\mu_0}^2[|\widehat{Y}-Y|]+\mathbb{E}_{\mu_1}^2[|\widehat{Y}-Y|]&&(\text{Jensen's inequality})\\&\geq2\left(\frac{\mathbb{E}_{\mu_0}[|\widehat{Y}-Y|]+\mathbb{E}_{\mu_1}[|\widehat{Y}-Y|]}{2}\right)^2&&(\text{AM-GM inequality})\\&=\frac{1}{2}\left(\mathbb{E}_{\mu_0}[|\widehat{Y}-Y|]+\mathbb{E}_{\mu_1}[|\widehat{Y}-Y|]\right)^2\\&\geq\frac{1}{2}|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|^2,\end{aligned}$$

where for the last inequality we apply the bound $\mathbb{E}_{\mu_0}[|\widehat{Y}-Y|]+\mathbb{E}_{\mu_1}[|\widehat{Y}-Y|]\geq|\mathbb{E}_{\mu_0}[Y]-\mathbb{E}_{\mu_1}[Y]|$ proved for the first inequality.

## A.2 Proof Of Theorem 3.2

Before proving Theorem 3.2, we first recall some useful results about the Wasserstein distance (Weed et al., 2019; Lei et al., 2020).

Proposition A.1 (Proposition 20, (Weed et al., 2019)). For all $n\geq0$ and $p\geq1$, let $\hat{\mu}$ be an empirical distribution induced from $\mu$ with sample size $n$. Then

$$\Pr\left(W_p^p(\mu,\hat{\mu})\geq\mathbb{E}W_p^p(\mu,\hat{\mu})+t\right)\leq\exp(-2nt^2).\tag{14}$$

Proposition A.1 gives a concentration inequality for $W_p^p(\cdot,\cdot)$ around its mean. Note that the expectation in (14) is over the draw of the sample of size $n$. This inequality is particularly useful when $p=1$, since then $W_p^p$ reduces to $W_1$ and gives a convergence rate of $O(1/\sqrt{n})$. The following theorem is a special case of (Lei et al., 2020, Theorem 3.1), which bounds the rate of $\mathbb{E}W_p(\mu,\hat{\mu})$:

Theorem A.1 (Theorem 3.1, (Lei et al., 2020)).
Let $\hat{\mu}$ be an empirical distribution induced from $\mu$ with sample size $n$, let $p\geq1$, and recall that $\mathcal{Y}=[-1,1]$. Then

$$\mathbb{E}W_p(Y_\sharp\mu,Y_\sharp\hat{\mu})\leq c_p\cdot n^{-\frac{1}{2p}},\tag{15}$$

where $c_p$ is a positive constant that depends only on $p$. Again, the interesting case is $p=1$, which gives the same $O(1/\sqrt{n})$ rate as Proposition A.1.

Theorem 3.2. Let $\widehat{Y}=h(X)$ be the predictor and $\hat{\mu}$ be an empirical distribution induced from a sample of size $n$ drawn from $\mu$. If $\widehat{Y}$ satisfies statistical parity, then there exists an absolute constant $c_1>0$ such that for $0<\delta<1$, with probability at least $1-\delta$ over the draw of the sample,

$$\varepsilon_{2,\mu_0}(\widehat{Y})+\varepsilon_{2,\mu_1}(\widehat{Y})\geq\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)-\left(2c_1+\sqrt{2\log(2/\delta)}\right)\sqrt{\frac{1}{n}}.\tag{6}$$

Proof. We first prove the finite-sample lower bound w.r.t. the $\ell_1$ error. Since $W_1(\cdot,\cdot)$ is a metric, the triangle inequality gives

$$W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)\leq W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\mu_0)+W_1(Y_\sharp\mu_0,Y_\sharp\mu_1)+W_1(Y_\sharp\mu_1,Y_\sharp\hat{\mu}_1).$$

Combined with Theorem 3.1, the above inequality leads to

$$\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)-\left(W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\mu_0)+W_1(Y_\sharp\mu_1,Y_\sharp\hat{\mu}_1)\right).$$

Hence it suffices to upper bound $W_1(Y_\sharp\mu_i,Y_\sharp\hat{\mu}_i)$ for $i\in\{0,1\}$ with high probability. To this end, we first apply Proposition A.1 with $p=1$: setting $\exp(-2nt^2)=\delta/2$ and solving for $t$ gives $t=\sqrt{\log(2/\delta)/2n}$, which means that with probability at least $1-\delta/2$,

$$\begin{aligned}W_1(Y_\sharp\hat{\mu}_i,Y_\sharp\mu_i)&\leq\mathbb{E}W_1(Y_\sharp\hat{\mu}_i,Y_\sharp\mu_i)+\sqrt{\frac{\log(2/\delta)}{2n}}&&(\text{Proposition A.1})\\&\leq c_1\sqrt{\frac{1}{n}}+\sqrt{\frac{\log(2/\delta)}{2n}}.&&(\text{Theorem A.1})\end{aligned}$$

Now apply the above inequality twice, once for each $i\in\{0,1\}$. With a union bound, we have shown that with probability at least $1-\delta$,

$$\varepsilon_{1,\mu_0}(\widehat{Y})+\varepsilon_{1,\mu_1}(\widehat{Y})\geq W_1(Y_\sharp\hat{\mu}_0,Y_\sharp\hat{\mu}_1)-\left(2c_1+\sqrt{2\log(2/\delta)}\right)\sqrt{\frac{1}{n}}.$$

To prove the lower bound w.r.t. the $\ell_2$ error, simply note that $\varepsilon_{2,\mu_i}(\widehat{Y})\geq\varepsilon_{1,\mu_i}(\widehat{Y})$ for $i\in\{0,1\}$, which completes the proof.

## A.3 Proof Of Corollary 3.2

Corollary 3.2. Let $\widehat{Y}=h(X)$ be a predictor. If $\widehat{Y}$ satisfies statistical parity, then $\forall p\geq1$, the joint error has the following lower bound:

$$\varepsilon_{p,\mu}(\widehat{Y})\geq H_{0\text{-}1}(A)\cdot W_p(Y_\sharp\mu_0,Y_\sharp\mu_1).\tag{7}$$

Proof. To simplify notation, define $\varepsilon:=\varepsilon_{p,\mu}(\widehat{Y})$, $\varepsilon_0:=\varepsilon_{p,\mu_0}(\widehat{Y})$, and $\varepsilon_1:=\varepsilon_{p,\mu_1}(\widehat{Y})$, and let $p_a:=\Pr_\mu(A=a)$. By Theorem 3.1, we know that $\varepsilon_0+\varepsilon_1\geq W_p(Y_\sharp\mu_0,Y_\sharp\mu_1)$.
By definition of the joint error:

$$\varepsilon=p_0\varepsilon_0+p_1\varepsilon_1\geq\min\{p_0,p_1\}(\varepsilon_0+\varepsilon_1)\geq H_{0\text{-}1}(A)\cdot W_p(Y_\sharp\mu_0,Y_\sharp\mu_1).$$

## A.4 Proof Of Proposition 3.1

As a comparison to the Kolmogorov-Smirnov distance, the $W_1$ distance between distributions over $\mathbb{R}$ can be equivalently represented as:

Proposition A.2 (Gibbs & Su (2002)). For two distributions $\mu,\mu'$ over $\mathbb{R}$, $W_1(\mu,\mu')=\int_{\mathbb{R}}|F_\mu(z)-F_{\mu'}(z)|\,dz$.

Proposition A.2 was stated without proof in (Gibbs & Su, 2002), but it is not hard to see that it can be proved from the equivalent characterization of $W_1$ in (4) by a change of the integration variable. Furthermore, if both $\mu$ and $\mu'$ are continuous distributions, then the following well-known result (Chatterjee, 2007, Lemma 2) serves as a bridge between the Wasserstein distance $W_1(\cdot,\cdot)$ and the Kolmogorov-Smirnov distance $K(\cdot,\cdot)$:

Lemma A.1. If there exists a constant $C$ such that the density of $\mu'$ (w.r.t. the Lebesgue measure $\lambda$) is universally bounded above, i.e., $\|d\mu'/d\lambda\|_\infty\leq C$, then $K(\mu,\mu')\leq2\sqrt{C\cdot W_1(\mu,\mu')}$.

Using the Kolmogorov-Smirnov distance, the constraint in the optimization problem (2) can be equivalently expressed as $K(h_\sharp\mu_0,h_\sharp\mu_1)\leq\epsilon$. With Lemma A.1 in hand, we are ready to prove Proposition 3.1:

Proposition 3.1. Let $\widehat{Y}=h(X)$ be a predictor. Under Assumption 2.1, for $p\geq1$, if there exists $\epsilon>0$ such that $W_p(h_\sharp\mu_0,h_\sharp\mu_1)\leq\epsilon$, then

$$r_{p,\mu_0}(\widehat{Y})+r_{p,\mu_1}(\widehat{Y})\geq\underbrace{W_p(f_{0\sharp}^*\mu_0,f_{1\sharp}^*\mu_1)}_{\text{distance between optimal unfair predictors across groups}}-2(\varepsilon_{p,\mu_0}^*+\varepsilon_{p,\mu_1}^*)-\epsilon,\tag{9}$$

and $\widehat{Y}$ satisfies $2\sqrt{C\epsilon}$-SP.

Proof. First, for $a\in\{0,1\}$, by definition of the Wasserstein distance, for any predictor $\widehat{Y}=h(X)$:

$$W_p(Y_\sharp\mu_a,h_\sharp\mu_a)=\left(\inf_\gamma\mathbb{E}_\gamma[|Y-\widehat{Y}|^p]\right)^{1/p}\leq\left(\mathbb{E}_{\mu_a}[|Y-\widehat{Y}|^p]\right)^{1/p}=\varepsilon_{p,\mu_a}(\widehat{Y}).\tag{16}$$

Applying the above inequality to both $h$ and $f_a^*$, we have

$$W_p(h_\sharp\mu_a,Y_\sharp\mu_a)+W_p(Y_\sharp\mu_a,f_{a\sharp}^*\mu_a)\leq\varepsilon_{p,\mu_a}(\widehat{Y})+\varepsilon_{p,\mu_a}(f_a^*(X))=\varepsilon_{p,\mu_a}(\widehat{Y})+\varepsilon_{p,\mu_a}^*,\quad\forall a\in\{0,1\}.\tag{17}$$

On the other hand, by the triangle inequality,

$$W_p(h_\sharp\mu_0,h_\sharp\mu_1)+\sum_{a\in\{0,1\}}\left(W_p(h_\sharp\mu_a,Y_\sharp\mu_a)+W_p(Y_\sharp\mu_a,f_{a\sharp}^*\mu_a)\right)\geq W_p(f_{0\sharp}^*\mu_0,f_{1\sharp}^*\mu_1).$$

Now, by the assumption $W_p(h_\sharp\mu_0,h_\sharp\mu_1)\leq\epsilon$ and Eq. (17), we have

$$\epsilon+\sum_{a\in\{0,1\}}\left(\varepsilon_{p,\mu_a}(\widehat{Y})+\varepsilon_{p,\mu_a}^*\right)\geq W_p(f_{0\sharp}^*\mu_0,f_{1\sharp}^*\mu_1).$$

By the definition of the excess risk, rearranging and subtracting $2\sum_{a\in\{0,1\}}\varepsilon_{p,\mu_a}^*$ from both sides of the inequality completes the proof of the first part.

To show that $\widehat{Y}$ is $2\sqrt{C\epsilon}$-SP, first note that $\widehat{Y}=h(X)$ is $t$-SP iff $K(h_\sharp\mu_0,h_\sharp\mu_1)\leq t$.
Now apply Lemma A.1, under the assumption that $W_p(h_\sharp\mu_0, h_\sharp\mu_1) \leq \epsilon$:

$$K(h_\sharp\mu_0, h_\sharp\mu_1) \leq 2\sqrt{C\cdot W_1(h_\sharp\mu_0, h_\sharp\mu_1)} \qquad \text{(Lemma A.1)}$$
$$\leq 2\sqrt{C\cdot W_p(h_\sharp\mu_0, h_\sharp\mu_1)} \qquad \text{(monotonicity of } W_p(\cdot,\cdot))$$
$$\leq 2\sqrt{C\epsilon},$$

completing the proof.

## A.5 Proof Of Proposition 3.2

Proposition 3.2. If $h(\cdot)$ is $\rho$-individually fair, then $h$ satisfies $\sqrt{\rho^2+1}\cdot W_1(\mu_0,\mu_1)$-accuracy parity.

Proof. Define $g(X,Y) := |\widehat{Y} - Y| = |h(X) - Y|$. We first show that if $h(X)$ is $\rho$-Lipschitz, then $g(X,Y)$ is $\sqrt{\rho^2+1}$-Lipschitz: for all $x, y, x', y'$,

$$|g(x,y) - g(x',y')| = \big||h(x) - y| - |h(x') - y'|\big|$$
$$\leq |h(x) - h(x') - y + y'| \qquad \text{(triangle inequality)}$$
$$\leq |h(x) - h(x')| + |y - y'|$$
$$\leq \rho\|x - x'\| + |y - y'| \qquad (h \text{ is } \rho\text{-Lipschitz})$$
$$\leq \sqrt{\rho^2+1}\cdot\sqrt{\|x-x'\|^2 + |y-y'|^2} \qquad \text{(Cauchy-Schwarz)}$$
$$= \sqrt{\rho^2+1}\cdot\|(x,y)-(x',y')\|.$$

Let $\rho' := \sqrt{\rho^2+1}$. Now consider the error difference:

$$|\varepsilon_{1,\mu_0}(\widehat{Y}) - \varepsilon_{1,\mu_1}(\widehat{Y})| = \big|\mathbb{E}_{\mu_0}[|h(X)-Y|] - \mathbb{E}_{\mu_1}[|h(X)-Y|]\big|$$
$$= \big|\mathbb{E}_{\mu_0}[g(X,Y)] - \mathbb{E}_{\mu_1}[g(X,Y)]\big|$$
$$\leq \sup_{\|g'\|_L \leq \rho'} \big|\mathbb{E}_{\mu_0}[g'(X,Y)] - \mathbb{E}_{\mu_1}[g'(X,Y)]\big|$$
$$= \rho' \sup_{\|g'\|_L \leq 1} \big|\mathbb{E}_{\mu_0}[g'(X,Y)] - \mathbb{E}_{\mu_1}[g'(X,Y)]\big|$$
$$= \rho'\cdot W_1(\mu_0,\mu_1), \qquad \text{(Kantorovich duality)}$$

which completes the proof.

## A.6 Proof Of Proposition 3.3

Proposition 3.3. Let $Z = g(X)$ be the features from input $X$. If $W_1(g_\sharp\mu_0, g_\sharp\mu_1) \leq \epsilon$ and $\widehat{Y} = h(Z)$ is $\rho$-Lipschitz, then $\widehat{Y} = (h\circ g)(X)$ verifies $2\sqrt{C\rho\epsilon}$-SP.

Proof. We first show that $W_1((h\circ g)_\sharp\mu_0, (h\circ g)_\sharp\mu_1)$ is small if $h$ is $\rho$-Lipschitz. To simplify the notation, we define $\mu'_0 := g_\sharp\mu_0$ and $\mu'_1 := g_\sharp\mu_1$. Consider the dual representation of the Wasserstein distance:

$$W_1(h_\sharp\mu'_0, h_\sharp\mu'_1) = \sup_{\|f'\|_L\leq 1}\left|\int f'\, d(h_\sharp\mu'_0) - \int f'\, d(h_\sharp\mu'_1)\right| \qquad \text{(Kantorovich duality)}$$
$$= \sup_{\|f'\|_L\leq 1}\left|\int f'\circ h\, d\mu'_0 - \int f'\circ h\, d\mu'_1\right| \qquad \text{(change of variables)}$$
$$\leq \sup_{\|f\|_L\leq\rho}\left|\int f\, d\mu'_0 - \int f\, d\mu'_1\right| \qquad (h \text{ is } \rho\text{-Lipschitz})$$
$$= \rho\cdot W_1(\mu'_0, \mu'_1)$$
$$= \rho\cdot W_1(g_\sharp\mu_0, g_\sharp\mu_1)$$
$$\leq \rho\epsilon,$$

where the inequality is due to the fact that for $\|f'\|_L \leq 1$, $\|f'\circ h\|_L \leq \|f'\|_L \cdot \|h\|_L \leq \rho$. Applying Lemma A.1 to $W_1(h_\sharp\mu'_0, h_\sharp\mu'_1)$ then completes the proof.

## B Further Details About The Experiments

## B.1 Dataset

The Law School dataset contains 1,823 records for law students who took the bar passage study for Law School Admission.2 The features in the dataset include variables such as undergraduate GPA, LSAT score, full-time status, family income, gender, etc. In the experiment, we use gender as the protected attribute and undergraduate GPA as the target variable. We use 80 percent of the data as our training set and the remaining 20 percent as the test set. A short sketch of the group-wise $W_1$ computation underlying our fairness evaluation is given below.
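To make Proposition A.2 concrete, the following is a minimal Python sketch that computes $W_1$ between two group-wise target distributions both via the CDF-difference integral of Proposition A.2 and via `scipy.stats.wasserstein_distance`; the Gaussian arrays are hypothetical stand-ins for the two Law School subgroups, and the two values should agree up to discretization error.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical group-wise target values (e.g., standardized GPA) standing in
# for the two subgroups A = 0 and A = 1 of the Law School dataset.
y0 = rng.normal(loc=0.0, scale=1.0, size=1500)
y1 = rng.normal(loc=0.3, scale=1.1, size=1200)

def w1_via_cdfs(u, v, grid_size=20000):
    """W1(u, v) = int_R |F_u(z) - F_v(z)| dz  (Proposition A.2)."""
    lo, hi = min(u.min(), v.min()), max(u.max(), v.max())
    grid = np.linspace(lo, hi, grid_size)
    # Empirical CDFs evaluated on a common grid.
    F_u = np.searchsorted(np.sort(u), grid, side="right") / len(u)
    F_v = np.searchsorted(np.sort(v), grid, side="right") / len(v)
    return np.sum(np.abs(F_u - F_v)) * (grid[1] - grid[0])  # Riemann sum

print(w1_via_cdfs(y0, y1))           # CDF-difference integral
print(wasserstein_distance(y0, y1))  # reference implementation
```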
The data distribution for the different subgroups in the Law School dataset can be found in Figure 2. In the Law School dataset, Pr(A = 1) = 0.452, so the dataset is quite balanced. All the experiments are performed on a Titan 1080 GPU.

## B.2 Network Architectures

We fix the baseline model to be a feed-forward network with ReLU activations, whose hidden layers contain 50 and 20 units, respectively. The output layer corresponds to a linear regression model. This baseline is denoted as MLP. To verify the effect of the balanced error rate on reducing the accuracy disparity, we further study two variants of it: one uses the weighted joint error rate (first row in Table 1) as the objective function, whereas the other (second row in Table 1) uses the balanced error rate as the objective function. For learning with Wasserstein regularization, the adversarial discriminator network takes the features from the last hidden layer as input and connects them to a hidden layer with 10 units, followed by an auditor whose goal is to output a score function for distinguishing the features of the two groups. This model is denoted as W-MLP. Compared with MLP, the only difference of W-MLP in terms of the objective function is that, besides the $\ell_2$ loss for target prediction, W-MLP also contains a loss from the auditor for distinguishing the sensitive attribute A.

## B.3 Hyperparameters Used In Experiments

In this section, we report the detailed hyperparameters used in our experiments to obtain the results in Table 1. Throughout the experiments, we fix the learning rate to be 1.0 and use the same networks as well as random seeds. One important aspect in the implementation of the Wasserstein adversary is the choice of the clipping parameter for the weights in the adversary network. The values used in our experiments are shown in Table 3.

![21_image_0.png](21_image_0.png)

Figure 2: The data distributions of different groups in the Law School dataset.

| Model | τ    | Clipping Value |
|-------|------|----------------|
| W-MLP | 0.1  | 0.1            |
| W-MLP | 1.0  | 1.0            |
| W-MLP | 5.0  | 5.0            |
| W-MLP | 10.0 | 10.0           |

Table 3: Clipping parameters used in training the Wasserstein adversary.

A minimal sketch of the W-MLP training loop combining these pieces is given below.

2We use the edited public version of the dataset, which can be downloaded here: https://github.com/algowatchpenn/GerryFair/blob/master/dataset/lawschool.csv
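The following PyTorch sketch puts together the pieces described in B.2 and B.3 under stated assumptions: hidden layers of 50 and 20 units, a 10-unit Wasserstein auditor, the balanced $\ell_2$ objective, learning rate 1.0, and the weight clipping of Table 3. The tensors `x`, `y`, and `a` are hypothetical stand-ins for the Law School features, targets, and protected attribute; the alternating update scheme is one reasonable way to optimize the minimax objective, not a prescription of the exact implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: features x, targets y in [-1, 1], group labels a.
n, d = 1458, 10
x, y = torch.randn(n, d), torch.rand(n, 1) * 2 - 1
a = torch.randint(0, 2, (n,)).bool()

feat = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, 20), nn.ReLU())
head = nn.Linear(20, 1)  # linear regression output layer
critic = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 1))

tau, clip = 1.0, 1.0  # trade-off parameter and clipping value (Table 3)
opt = torch.optim.SGD(list(feat.parameters()) + list(head.parameters()), lr=1.0)
opt_c = torch.optim.SGD(critic.parameters(), lr=1.0)

for step in range(200):
    # Auditor ascends the Kantorovich objective E_{mu_0}[f'] - E_{mu_1}[f'].
    z = feat(x).detach()
    w_gap = critic(z[~a]).mean() - critic(z[a]).mean()
    opt_c.zero_grad(); (-w_gap).backward(); opt_c.step()
    for p in critic.parameters():  # keep the auditor (roughly) 1-Lipschitz
        p.data.clamp_(-clip, clip)

    # Predictor minimizes the balanced L2 error plus the Wasserstein penalty.
    z = feat(x)
    pred = head(z)
    err0 = ((pred[~a] - y[~a]) ** 2).mean()
    err1 = ((pred[a] - y[a]) ** 2).mean()
    penalty = critic(z[~a]).mean() - critic(z[a]).mean()
    loss = 0.5 * (err0 + err1) + tau * penalty
    opt.zero_grad(); loss.backward(); opt.step()
```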
Review 1:
Summary:
This paper tries to understand the fairness and accuracy trade-offs in a learning problem. The bounds are based on the Wasserstein distance.
Strengths and Weaknesses:
Weaknesses
1. This paper basically finds a characteristic of random variables without taking into account the learning problem. Basically, given that $\hat{Y}$ is independent of $A$, it finds a lower bound for $E\{|Y-\hat{Y}|^p|A=0\}^{\frac{1}{p}} + E\{|Y-\hat{Y}|^p|A=1\}^{\frac{1}{p}}$. For the bounds found in this paper, it really does not matter what objective function we are minimizing or what model (linear, quadratic, DNN) we are considering. The bounds basically follow from the Wasserstein distance definition (it has an infimum operator in the definition, which leads to a lower bound). If I am using a linear model or quadratic model, or if I am minimizing MSE or MAE, the bound remains the same.
2. Several theorems find a lower bound for $\epsilon_{p,\mu_0} + \epsilon_{p,\mu_1}$ (e.g., Theorem 3.1). I really do not understand why we need a lower bound for $\epsilon_{p,\mu_0} + \epsilon_{p,\mu_1}$. A lower bound for $\epsilon_{p,\mu_0} + \epsilon_{p,\mu_1}$ does not give us any lower bound for the overall loss or the loss of each group. So my question is: if $\epsilon_{p,\mu_0} + \epsilon_{p,\mu_1}\geq L$, can we find a lower bound for $\epsilon_{p,\mu_0}$, $\epsilon_{p,\mu_1}$, $\epsilon_{p,\mu}$, and the $l_p$ error?
3. The paper claims that it finds the fairness-accuracy tradeoff. However, there is no theorem that finds a lower bound for the weighted lp error or the lp error when $\hat{Y}$ satisfies $\epsilon$-SP. Proposition 3.1 tries to find an upper bound for the fairness level and a lower bound for $r_{p,\mu_0} + r_{p,\mu_1}$. However, having a lower bound for $r_{p,\mu_0} + r_{p,\mu_1}$ does not tell us anything about the lower bound on the overall loss. My question: if $\hat{Y}$ satisfies $\epsilon$-SP, what would be the lower bound for $\epsilon_{p,\mu}$ and the $l_p$ error?
4. The results do not give any insight into any practical learning problem. Some bounds do not seem useful.
Strengths: Most of the results are mathematically correct.
Requested Changes:
The authors should add the answers to the following questions to the paper:
1. How does the model complexity (e.g., linear or quadratic) or objective function (e.g., MSE, MAE, MSE + l_2 norm, MAE + l_2 norm, etc.) affect the lower bounds found in the paper? In the current form, the lower bounds remain the same no matter what the optimization problem is.
2. How can the lower bounds give us insight into practical problems? Since the lower bounds do not depend on the nature of the learning problem, they seem loose in general. For example, when there is an imbalanced population, the lower bound in equation 7 is close to zero.
3. If $\epsilon_{p,\mu_0} + \epsilon_{p,\mu_1}\geq L$, can we find a lower bound for $\epsilon_{p,\mu_0}$, $\epsilon_{p,\mu_1}$, $\epsilon_{p,\mu}$, and the $l_p$ error?
4. If $\hat{Y}$ satisfies $\epsilon$-SP, what would be the lower bound for $\epsilon_{p,\mu}$ and the $l_p$ error?
5. The last equation on page 7 and the first equation on page 8 are not contradicting each other. Could you please double-check them?
Broader Impact Concerns: None
==================================================
Review 2:
Summary:
This work provides several theoretical analyses of the relation between the Wasserstein distance (of two groups specified by a binary protected attribute) and various fairness definitions, in the regression setting.
Specifically, Sec. 3.1 & 3.2 provide several lower bounds on the errors (Thm. 3.1 & 3.2) or risks (Prop. 3.1) when the regressor satisfies statistical parity (Def. 2.1). Comparisons to other similar bounds or algorithms are also discussed throughout the paper. The proposed optimization (10) minimizes the balanced error rate and the Wasserstein distance. Finally, empirical studies on the Law School dataset show that a smaller Wasserstein distance can lead to better statistical parity and accuracy parity.
Strengths and Weaknesses:
Strengths
- Studies an important topic in fair ML, and the writing is mostly clear
- Provides several interesting lower bounds when the predictor satisfies statistical parity, and the lower bounds are tight
- The comparison to existing bounds is clear and informative
Weaknesses
- There is a gap between the theory and the proposed algorithm
- The experimental results are limited
Detailed comments:
1. The proof in Fig. 1 is confusing. First of all, due to statistical parity (Def. 2.1), the image of $\mu_1$ and $\mu_2$ should be the same. Then a single triangle inequality suffices. Second, making the distinction between $h_1$ and $h_0$ doesn't seem to provide any more useful discussion for the problem. Finally, it would be clearer to define $\epsilon_{p,\mu_0}$ and $\epsilon_{p,\mu_1}$ explicitly. Equation (1) is for the joint $\mu$.
2. (8) is not very well-motivated. The theorems in Sec. 3.1 discuss what happens when the predictor $\hat{Y}$ satisfies statistical parity (SP). However, there is not much discussion about why objective (8) can lead to SP. In other words, it remains unclear whether a small value of (8) is a sufficient condition for SP. We have a similar situation for (10). Prop. 3.3 suggests that a small Wasserstein distance + a Lipschitz predictor are sufficient for SP, so why do we need the balanced error objective (first term in (10))?
3. Experiment
- What if we only use the balanced objective for the MLP? If it is already using the balanced objective (8), then what's the performance using the joint (unbalanced) error instead? This would be useful to see the effects of the two terms in (10).
- It would be important to see the effect of $\rho$, since Prop. 3.2 and 3.3 suggest that small $\rho$ leads to better accuracy parity and SP.
- In Table 1, the trend in K is nice but does not really verify the theory. Again, the discussion in Sec. 3.1 does not say anything about whether a small Wasserstein distance can result in better SP.
Minor comments:
- Page 5, last paragraph: the first inequality holds not because the $l_p$ norm is convex, but because $x^{1/p}$ is concave. This is the same as the middle derivation on Page 3.
- Page 2, Sec. 2: why does $F_\mu$ take a scalar as input? $\mu$ is the joint distribution over the product space $X\times A\times Y$, which is multivariate.
- Page 6: "reduce the joint accuracy too much", but the current paper focuses on regression. The same happens for many other occurrences of "accuracy" in the paper (e.g., the first few sentences in Sec. 4).
Requested Changes:
1. Discuss why (8) can lead to SP, as mentioned above.
2. Additional experiments, as mentioned above.
Broader Impact Concerns: This work studies fair ML algorithms and thus could intrinsically have a great impact on practical applications. It would be helpful for the authors to discuss this (at least briefly) in the paper.
==================================================
Review 3:
Summary:
The paper studies the tradeoff between accuracy and the statistical parity constraint, which asks that the prediction be independent of the protected attribute, in the regression setting. The main results include (1) lower bounds: the sum of the squared losses (and risks) on the two groups can be bounded below by the Wasserstein distance between the groups' label distributions (Theorem 3.1, Proposition 3.1), and (2) a relationship between individual fairness and error parity (Proposition 3.2). They perform experiments on a real-world dataset to corroborate their theoretical results.
Strengths and Weaknesses:
Strengths:
-The lower bound results are quite crisp and easy to understand intuitively.
-The lower bound results provide insight into what happens to the error disparity when enforcing fairness constraints. With this insight, the authors encourage using a balanced error rate and hence more of a cost-sensitive approach.
-Instead of relying on the structure of the function to describe the lower bounds (e.g., a linear regressor), it's nice that the lower bounds are described in terms of the characteristics of the underlying distribution, meaning these results are robust to what kind of regressor is being used.
Weaknesses/Questions/Concerns:
-Although it's stated that individual fairness with respect to other metrics can be easily taken care of, it's not clear that this is actually the case. It seems like the result depends on the Kantorovich duality and its special case when the underlying metric is some Lp norm.
-It's stated that the results here are algorithm-independent, but I think the terms algorithm and model are conflated here: I think the authors mean that the results hold with respect to any arbitrary function f, not the algorithm that was used to come up with such f's, right?
-I think it's possible that readers may not be convinced of optimizing equation (10) in order to get statistical/error parity, as this algorithm encourages statistical/error parity in a slightly roundabout way. So I think it would be good to include other algorithms for comparison.
-Throughout the paper, the results seem to be talking about any arbitrary function f as opposed to a family of functions (e.g., linear regressors). Is my understanding correct?
-It seems like the example showing the tightness of the lower bound is only for the noiseless setting, but is there anything that can be said in the approximate & noisy setting?
Requested Changes:
-An explanation as to how individual fairness with respect to the Lp norm can be generalized to an arbitrary metric.
-The authors state the minimax objective can be optimized via stochastic gradient descent, and I think it would be helpful to go into a little more detail about how such an objective can be optimized via gradient descent: i.e., is it just alternating between the min player doing stochastic gradient descent and the max player doing stochastic gradient ascent?
Broader Impact Concerns: I don't have any concerns about the ethical implications of the work.
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment: In this paper, the authors investigated the essential tradeoff between accuracy and fairness. Specifically, the authors provided an algorithm-independent lower bound with an intuitive explanation.
The authors also extend the results to the noisy setting in terms of the Wasserstein distance, and propose a fair representation learning method inspired by the theoretical understanding. A preliminary experimental study has been conducted to verify the proposed algorithm. All the reviewers agree that the paper is theoretically solid and provides new insights; Reviewer HaYk in particular acknowledges the importance of the algorithm-independent lower bound. The major discussion is about the practical implications of the theoretical results (Reviewer o86N) and the insufficiency of the empirical study (Reviewer rDyX). Therefore, I recommend acceptance with minor revision. The authors should take the suggestions from the reviewers, especially Reviewer rDyX, and add a more comprehensive empirical comparison, as the authors promised.
==================================================
# Modeling Causal Mechanisms With Diffusion Models For Interventional And Counterfactual Queries

Anonymous authors
Paper under double-blind review

## Abstract

We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting where only observational data and the causal graph are available. Utilizing the recent developments in diffusion models, we introduce diffusion-based causal models (DCM) to learn causal mechanisms that generate unique latent encodings. These encodings enable us to directly sample under interventions and perform abduction for counterfactuals. Diffusion models are a natural fit here, since they can encode each node to a latent representation that acts as a proxy for exogenous noise. Our empirical evaluations demonstrate significant improvements over existing state-of-the-art methods for answering causal queries. Furthermore, we provide theoretical results that offer a methodology for analyzing counterfactual estimation in general encoder-decoder models, which could be useful in settings beyond our proposed approach.

## 1 Introduction

Understanding the causal relationships in complex problems is crucial for making analyses, conclusions, and generalized predictions. To achieve this, we require causal models and queries. Structural Causal Models (SCMs) are generative models describing the causal relationships between variables, allowing for observational, interventional, and counterfactual queries (Pearl, 2009a). An SCM specifies how a set of endogenous (observed) random variables is generated from a set of exogenous (unobserved) random variables with a prior distribution via a set of structural equations.

In this work, we focus on approximating an SCM given observational data and the underlying causal DAG. Our aim is to provide a mechanism for answering all three types of causal (observational, interventional, and counterfactual) queries. For this, we assume *causal sufficiency*, i.e., the absence of hidden confounders. Note that causal sufficiency is the minimal necessary set of assumptions for causal reasoning from observational data alone. In the SCM framework, causal queries can be answered by learning a proxy for the unobserved exogenous noise and the structural equations. This suggests that learning (conditional) *latent variable models* could be an attractive choice for the modeling procedure, as the latent serves as the proxy for the exogenous noise. In these models, the encoding process extracts the latent from an observation, and the decoding process generates the sample from the latent, approximating the structural equations.

Our Contributions. In this work, we propose and analyze the effectiveness of using a diffusion model for modeling SCMs. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021) have gained popularity recently due to their high expressivity and exceptional performance in generative tasks (Saharia et al., 2022; Ramesh et al., 2022; Kong et al., 2021). The primary contribution of our work is in determining how to apply diffusion models to the causal setting. This is non-trivial because diffusion models are typically used to learn a stochastic process between a Gaussian and a data density. Our main idea is to model each node in the causal graph as a diffusion model and cascade generated samples in topological order to answer causal queries.
For each node, the corresponding diffusion model takes as input the node and parent values to encode and decode a latent representation. To implement the diffusion model, we utilize the recently proposed *Denoising Diffusion Implicit Models* (DDIMs) (Song et al., 2021), which may be interpreted as deterministic autoencoder models with latent variables. We leverage the deterministic forward and backward diffusion processes of DDIMs to use diffusion models as an encoder-decoder model. We refer to the resulting model as *diffusion-based causal model* (DCM) and show that this model mimics the necessary properties of an SCM. Our key contributions include:

(1) [Section 3] We propose the diffusion-based causal model (DCM), a new model class for modeling structural causal models that provides a flexible and practical framework for approximating both interventions (do-operator) and counterfactuals (abduction-action-prediction steps). We present a procedure for training a DCM given just the causal graph and observational data, and show that the resulting trained model enables sampling from the observational and interventional distributions, and facilitates answering counterfactual queries.

(2) [Section 4] Our theoretical analysis examines the accuracy of counterfactual estimates generated by the DCM, and we demonstrate that they can be bounded given some reasonable assumptions. Importantly, our analysis is not limited to diffusion models, but also applies to other encoder-decoder settings. To the best of our knowledge, these are the first error bounds that explain the observed performance improvements in using latent variable models, like diffusion models, to address counterfactual queries. Another feature of this result is that it also extends, under an additional assumption, to the more challenging multivariate case.

(3) [Section 5] We evaluate the performance of DCM on a range of synthetic datasets generated with various structural equation types for all three forms of causal queries. We find that DCM consistently outperforms existing state-of-the-art methods (Sánchez-Martin et al., 2022; Khemakhem et al., 2021). In fact, for certain interventional and counterfactual queries, such as those arising with *nonadditive noise* models, DCM is better by an order of magnitude or more than these existing approaches. Additionally, we demonstrate the favorable performance of DCM on an interventional query experiment conducted on fMRI data.

Related Work. Over the years, a variety of methods have been developed in the causal inference literature for answering interventional and/or counterfactual queries, including non-parametric methods (Shalit et al., 2017; Alaa & Van Der Schaar, 2017; Muandet et al., 2021) and probabilistic modeling methods (Zečević et al., 2021a). More relevant to our approach is a recent series of work, including (Moraffah et al., 2020; Pawlowski et al., 2020; Kocaoglu et al., 2018; Parafita & Vitrià, 2020; Zečević et al., 2021b; Garrido et al., 2021; Karimi et al., 2020; Sánchez-Martin et al., 2022; Khemakhem et al., 2021; Sanchez & Tsaftaris, 2022), that has demonstrated the success of using deep (conditional) generative models for this task. Karimi et al. (2020) propose an approach for answering interventional queries by fitting a conditional variational autoencoder to each conditional in the Markov factorization implied by the causal graph. Also using the ideas of variational inference and normalizing flows, Pawlowski et al. (2020) propose schemes for counterfactual inference.
In Khemakhem et al. (2021), the authors propose an autoregressive normalizing flow for causal discovery and queries, referred to as *CAREFL*. While the authors focus on bivariate graphs between observed and unobserved variables, their approach can be extended to more general DAGs. CAREFL is also applicable even with only knowledge of the causal ordering rather than the full causal graph, as we require. However, as also noted by Sánchez-Martin et al. (2022), when the causal graph is present, CAREFL is unable to fully exploit the absence of edges, as it reduces a causal graph to its causal ordering (which may not be unique). Using normalizing flows for answering causal queries with a causal DAG was also explored by Balgi et al. (2022); however, their approach does not have any theoretical guarantees on its counterfactual estimates, and their experimental evaluation is quite limited.

Sánchez-Martin et al. (2022) propose *VACA*, which uses graph neural networks (GNNs) in the form of a *variational* graph autoencoder to sample from the observational, interventional, and counterfactual distributions. VACA can utilize the inherent graph structure through the GNN; however, it suffers in empirical performance (see Section 5). Furthermore, the usage of the GNN leads to undesirable design constraints, e.g., the encoder GNN cannot have hidden layers (Sánchez-Martin et al., 2022). A very recent work by Javaloy et al. (2023) combines the idea of using autoregressive normalizing flows (as in CAREFL) and modeling the entire causal graph as one model (as in VACA), to reduce possible error propagation in the graph. The authors further generalize the theoretical results from Khemakhem et al. (2021) beyond affine autoregressive normalizing flows and establish a clearer connection to SCMs by providing a more direct way of applying the do-operator. In contrast to Javaloy et al. (2023), who focus on modeling the whole causal graph as one model, we model on a per-node basis, which has several advantages over a single model in terms of flexibility and computational efficiency because it permits individual node models to be trained in parallel. For example, our DCM approach trains approximately seven times faster than CAREFL and nine times faster than VACA (see Appendix D.1). In practice, it also makes it easier to utilize training data even if some values are missing.

Sanchez & Tsaftaris (2022) use diffusion models for counterfactual estimation, focusing on the bivariate graph case with an image class causing an image. The authors train a diffusion model to generate images and use the abduction-action-prediction procedure from Pearl et al. (2016) as well as classifier guidance (Dhariwal & Nichol, 2021) to generate counterfactual images. However, this is solely for bivariate models, requires training a separate classifier for intermediate diffusion images, and exhibits poor performance on more complex images, e.g., ImageNet (Deng et al., 2009). Our approach distinguishes itself from Sanchez & Tsaftaris (2022) as it can handle general causal graphs (beyond the simpler two-node setting) and operates on continuous variables (beyond the simpler case of a discrete label and an image). Our experimental evaluations are also more general, as they cover interventional queries not addressed by Sanchez & Tsaftaris (2022). Finally, in terms of theoretical contribution, we provide rigorous conditions on the latent model and structural equations under which we can estimate counterfactuals.
Even for the two-variable setting considered by Sanchez & Tsaftaris (2022), such a theoretical understanding was previously missing. Finally, a diffusion-model-based approach has also been recently proposed for the different task of causal discovery under additive noise models, where the diffusion model serves as an approximation to the Hessian functions (Sanchez et al., 2022). This raises the interesting question of whether diffusion models can be adapted for solving the end-to-end problem from discovery to inference.

## 2 Preliminaries

Notation. To distinguish random variables from their instantiations, we represent the former with capital letters and the latter with the corresponding lowercase letters. To distinguish between the nodes in the causal graph and the diffusion random variables, we use subscripts to denote graph nodes. Let $[n] := \{1, \ldots, n\}$.

Structural Causal Models. Consider a directed acyclic graph (DAG) $\mathcal{G}$ with nodes $\{1, \ldots, K\}$ in a topologically sorted order, where a node $i$ is represented by a (random) variable $X_i$ in some generic space $\mathcal{X}_i \subset \mathbb{R}^{d_i}$. Let $\text{pa}_i$ be the parents of node $i$ in $\mathcal{G}$ and let $X_{\text{pa}_i} := \{X_j\}_{j\in\text{pa}_i}$ be the variables of the parents of node $i$. A structural causal model $\mathcal{M}$ describes the relationship between an observed/endogenous node $i$ and its causal parents. Formally, an SCM $\mathcal{M} := (F, p(U))$ determines how a set of $K$ endogenous random variables $X := \{X_1, \ldots, X_K\}$ is generated from a set of exogenous random variables $U := \{U_1, \ldots, U_K\}$ with prior distribution $p(U)$ via a set of structural equations $F := (f_1, \ldots, f_K)$, where $X_i := f_i(X_{\text{pa}_i}, U_i)$ for $i \in [K]$. Throughout this paper, we assume that the unobserved random variables are jointly independent (Markovian SCM), and that the DAG $\mathcal{G}$ is the graph induced by $\mathcal{M}$. Every SCM $\mathcal{M}$ entails a unique joint observational distribution satisfying the causal Markov assumption: $p(X) = \prod_{i=1}^K p(X_i \mid X_{\text{pa}_i})$.

Structural causal models address Pearl's causal hierarchy (or "ladder of causation"), which consists of three "layers" of causal queries of increasing complexity (Pearl, 2009a): observational (or associational), interventional, and counterfactual. As an example, an interventional query can be formulated as "What will be the effect on the population $X$ if a variable $X_i$ is assigned a fixed value $\gamma_i$?" The do-operator $do(X_i := \gamma_i)$ represents the effect of setting variable $X_i$ to $\gamma_i$. Note that our proposed framework allows for more general sets of interventions as well, such as interventions on multiple variables, denoted as $do(X_I := \gamma)$ (where $I \subseteq [K]$, $X_I := (X_i)_{i\in I}$, $\gamma \in \mathbb{R}^{|I|}$). An intervention operation, $do(X_I := \gamma)$, transforms the original joint distribution into an interventional distribution denoted by $p(X \mid do(X_I := \gamma))$. On the other hand, a counterfactual query can be framed as "What would have been the outcome of a particular factual sample $x^F := (x^F_1, \ldots, x^F_K)$, if the value of $X_I$ had been set to $\gamma$?" Counterfactual estimation may be performed through the three-step procedure of 1) abduction: estimation of the exogenous noise $U$, 2) action: intervene $do(X_I := \gamma)$, and 3) prediction: estimating $x^{CF}$ using the abducted noise and the intervention values. A minimal numeric illustration of these three steps on a toy SCM is sketched below.
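The following numpy sketch makes the three query types concrete on an illustrative two-node additive-noise SCM; the structural equation `f2` and all values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative additive-noise SCM:  X1 := U1,  X2 := f2(X1) + U2.
def f2(x1):
    return np.sin(2.0 * x1)

def sample(n, do_x1=None):
    """Observational sampling; interventional when do(X1 := do_x1)."""
    u1, u2 = rng.normal(size=n), rng.normal(size=n)
    x1 = np.full(n, do_x1, dtype=float) if do_x1 is not None else u1
    return x1, f2(x1) + u2

x1_obs, x2_obs = sample(5000)             # observational (layer 1)
x1_int, x2_int = sample(5000, do_x1=1.5)  # interventional (layer 2)

# Counterfactual (layer 3) for one factual pair, via the three steps.
x1_f, x2_f = 0.7, 1.9
u2_hat = x2_f - f2(x1_f)    # 1) abduction: recover the exogenous noise
x1_cf = 1.5                 # 2) action: do(X1 := 1.5)
x2_cf = f2(x1_cf) + u2_hat  # 3) prediction
print(x2_cf)
```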
Diffusion Models. Given data from a distribution $X^0 \sim Q$, the objective of diffusion models is to construct an efficiently sampleable distribution approximating $Q$. Denoising diffusion probabilistic models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) accomplish this by introducing a forward noising process that adds isotropic Gaussian noise at each time step and a learned reverse denoising process. A common representation of diffusion models is a fixed Markov chain that adds Gaussian noise with variances $\beta_1, \ldots, \beta_T \in (0, 1)$, generating latent variables $X^1, \ldots, X^T$:

$$q(X^t \mid x^{t-1}) = \mathcal{N}(X^t; \sqrt{1-\beta_t}\, x^{t-1}, \beta_t I) \quad \text{and} \quad q(X^t \mid x^0) = \mathcal{N}(X^t; \sqrt{\alpha_t}\, x^0, (1-\alpha_t) I),$$

where $\alpha_t := \prod_{i=1}^t (1-\beta_i)$. Here, $T \in \mathbb{Z}^+$, and $t \in \{0, \ldots, T\}$ denotes the time index. By choosing a sufficiently large $T$ and $\alpha_t$ that converge to 0, $X^T$ is distributed as an isotropic Gaussian. The learned reverse diffusion process attempts to approximate the intractable $q(X^{t-1} \mid x^t)$ using a neural network and is defined as a Markov chain with Gaussian transitions, $p_\theta(X^{t-1} \mid x^t) = \mathcal{N}(X^{t-1}; \mu_\theta(x^t, t), \Sigma_\theta(x^t, t))$. Rather than predicting $\mu_\theta$ directly, the network can instead predict the Gaussian noise $\varepsilon$ from $x^t = \sqrt{\alpha_t}\, x^0 + \sqrt{1-\alpha_t}\, \varepsilon$. Ho et al. (2020) found that modeling $\varepsilon$ instead of $\mu_\theta$, fixing $\Sigma_\theta$, and using the following reweighted loss function

$$\mathbb{E}_{\substack{t \sim \mathrm{Unif}\{[T]\},\; X^0 \sim Q,\\ \varepsilon \sim \mathcal{N}(0, I)}}\left[\|\varepsilon - \varepsilon_\theta(\sqrt{\alpha_t}\, X^0 + \sqrt{1-\alpha_t}\,\varepsilon, t)\|^2\right] \tag{1}$$

works well empirically. We also utilize this loss function in our training. Song et al. (2021) demonstrate that it is possible to take a pretrained standard denoising diffusion probabilistic model (DDPM) and generalize the generation to non-Markovian processes. In particular, it is possible to use a pretrained DDPM model to obtain a deterministic sample given noise $X^T$, known as the denoising diffusion implicit model (DDIM), with the *reverse implicit diffusion process*

$$X^{t-1} := \sqrt{\frac{\alpha_{t-1}}{\alpha_t}}\, X^t - \varepsilon_\theta(X^t, t)\left(\sqrt{\alpha_{t-1}(1-\alpha_t)/\alpha_t} - \sqrt{1-\alpha_{t-1}}\right). \tag{2}$$

Note that $X^t$ here is deterministic. We also use a *forward implicit diffusion process* introduced by Song et al. (2021), derived by rewriting the DDIM process in Eq. 2 as an ordinary differential equation (ODE) and considering the Euler-method approximation in the forward direction, to obtain

$$X^{t+1} := \sqrt{\frac{\alpha_{t+1}}{\alpha_t}}\, X^t + \varepsilon_\theta(X^t, t)\left(\sqrt{1-\alpha_{t+1}} - \sqrt{\alpha_{t+1}(1-\alpha_t)/\alpha_t}\right). \tag{3}$$

We utilize the DDIM framework in this work; in particular, Eqs. 3 and 2 will define the encoding (forward) and decoding (reverse) processes. Note that this ensures deterministic encoding and decoding. This construction produces a unique latent variable per observation, as well as a unique decoding, and also ensures we obtain the same output for repeated counterfactual queries. The sketch below illustrates both implicit processes.
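The following is a minimal numpy sketch of the two implicit processes (Eqs. 3 and 2) for a single variable; `eps_theta` is a placeholder for a trained noise-prediction network, and the linear $\beta$ schedule is an illustrative choice, not one prescribed by the text.

```python
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha = np.concatenate([[1.0], np.cumprod(1.0 - betas)])  # alpha[0] = 1

def eps_theta(x, t):
    # Placeholder for a trained network predicting the noise at step t.
    return np.zeros_like(x)

def ddim_encode(x0):
    """Forward implicit process (Eq. 3): data -> latent, deterministic."""
    x = x0
    for t in range(T):
        a_t, a_next = alpha[t], alpha[t + 1]
        x = (np.sqrt(a_next / a_t) * x
             + eps_theta(x, t) * (np.sqrt(1 - a_next)
                                  - np.sqrt(a_next * (1 - a_t) / a_t)))
    return x

def ddim_decode(xT):
    """Reverse implicit process (Eq. 2): latent -> data, deterministic."""
    x = xT
    for t in range(T, 0, -1):
        a_t, a_prev = alpha[t], alpha[t - 1]
        x = (np.sqrt(a_prev / a_t) * x
             - eps_theta(x, t) * (np.sqrt(a_prev * (1 - a_t) / a_t)
                                  - np.sqrt(1 - a_prev)))
    return x

z = ddim_encode(np.array([0.7]))
print(ddim_decode(z))  # inverts the encoding for this toy eps_theta
```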
## 3 DCMs: Diffusion-Based Causal Models

In this section, we present our DCM approach for modeling SCMs and answering causal queries. The DCM approach falls in a general class of techniques that model a structural causal model using an encoder-decoder pair. Consider a data-generating process $X = f(X_{\text{pa}}, U)$. The goal is to construct an encoding function $g$ and a decoding function $h$. The encoding function $g$ attempts to represent the information in $U$: for a pair $(X, X_{\text{pa}})$, $Z := g(X, X_{\text{pa}})$ is the latent variable. The decoder takes $Z$ and $X_{\text{pa}}$ as input to attempt to reconstruct $X$: $\hat{X} = h(Z, X_{\text{pa}})$, where under perfect reconstruction, $\hat{X} = X$. The decoding function $h$ mimics the true structural equation $f$, although it does not need to be exactly equal. For example, there are infinitely many encodings that satisfy $Z = r(U)$ for all $U$, for an invertible function $r$.

We first explain the construction and the training process of a DCM, and then explain how the model can be used to answer various causal queries. We start with some notation.

- Define $Z^t_i$ to be the $i$th endogenous node value at diffusion step $t$ of the *forward* implicit diffusion process (Eq. 3), and let $Z_i := Z^T_i$.
- Define $\hat{X}^t_i$ to be the $i$th endogenous node value at diffusion step $t$ of the *reverse* implicit diffusion process (Eq. 2), and let $\hat{X}_i := \hat{X}^0_i$.

Training a DCM. We train a diffusion model for *each node*, taking denoised parent values as input. The parent values can be interpreted as additional covariates to the model, where one may choose to use classifier-free guidance to incorporate the covariates (Ho & Salimans, 2021). Empirically, we find that simply concatenating the covariates results in better performance than classifier-free guidance. We use the $\varepsilon_\theta$ parametrization for the diffusion model from Ho et al. (2020), representing the diffusion model for node $i$ as $\varepsilon^i_\theta(X, X_{\text{pa}_i}, t)$. The complete training procedure, presented in Algorithm 1, is only slightly modified from the usual training procedure, with the additions of the parents as covariates and a diffusion model per node. Since the generative models learned for the generation of different endogenous nodes do not affect each other's training, these models may be trained in parallel: each diffusion model only requires the current node and parent values. Our final DCM model is just the combination of these $K$ trained diffusion models $\varepsilon^1_\theta, \ldots, \varepsilon^K_\theta$.

Algorithm 1 DCM Training
Input: Distribution Q, scale factors {αt}Tt=1, causal DAG G with node i represented by Xi
1: while not converged do
2:   Sample X0 ∼ Q
3:   for i = 1, . . . , K do
4:     t ∼ Unif[{1, . . . , T}]
5:     ε ∼ N(0, Idi) {di is the dimension of Xi}
6:     Update the parameters of node i's diffusion model εiθ by minimizing the loss ∥ε − εiθ(√αt X0i + √(1 − αt) ε, X0pai, t)∥²₂ (based on Eq. 1)
7:   end for
8: end while

We use all the variables $(X_1, \ldots, X_K)$ for the training procedure because, a priori, we do not assume anything about the possible causal queries, i.e., we allow for all possible target variables, intervened variables, etc. However, if we are only interested in some pre-defined set of queries, then the graph can be reduced accordingly. For example, if we are only interested in the counterfactual estimate of a particular node with respect to an intervention on a predecessor, one can simply reduce the graph to a subgraph containing the target node, the intervened node, and a backdoor adjustment set (e.g., the ancestors of the intervened node). This then reduces to learning a single diffusion model.

One major advantage of our proposed DCM approach is the ability to generalize to larger graphs. Since each diffusion model only uses the parents as input, modeling each node depends only on the incoming degree of the node (the number of causal parents). While the number of diffusion models scales with the number of non-root nodes, each model is generally small in terms of its parameter size and can be trained in parallel. Additionally, we may apply the proposed DCM approach to any setting where diffusion models are applicable: continuous variables, high-dimensional settings, categorical data, images, etc. A minimal sketch of the per-node training step in Algorithm 1 is given below.
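The following PyTorch sketch shows one gradient update of Algorithm 1 for a single node; the tiny MLP and the way the time index is (not) embedded are placeholder choices for illustration, not the architecture used in Section 5.

```python
import torch

def dcm_training_step(eps_model, x_i, x_pa, alpha, opt):
    """One Algorithm-1 update for node i's model (loss of Eq. 1, with the
    parent values concatenated as covariates)."""
    b, T = x_i.shape[0], alpha.shape[0] - 1
    t = torch.randint(1, T + 1, (b,))    # t ~ Unif{1, ..., T}
    eps = torch.randn_like(x_i)          # eps ~ N(0, I_{d_i})
    a_t = alpha[t].unsqueeze(-1)
    x_t = a_t.sqrt() * x_i + (1 - a_t).sqrt() * eps   # noised node value
    pred = eps_model(torch.cat([x_t, x_pa], dim=-1), t)
    loss = ((eps - pred) ** 2).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example wiring (hypothetical shapes): a 3-dim node with 6-dim parents.
net = torch.nn.Sequential(torch.nn.Linear(9, 64), torch.nn.SiLU(),
                          torch.nn.Linear(64, 3))
eps_model = lambda inp, t: net(inp)  # a real model would also embed t
alpha = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 101), dim=0)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
dcm_training_step(eps_model, torch.randn(64, 3), torch.randn(64, 6), alpha, opt)
```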
Encoding and Decoding Steps with DCM. With a DCM, the encoding (resp. decoding) process is identical to the DDIM encoding (resp. decoding) process, except that we include the parent values as additional covariates. Note that, given the model $\varepsilon_\theta$, DDIM is a deterministic process as laid out in Eqs. 2 and 3. Let us focus on a node $i \in [K]$ (the same process is repeated for each node $i$). The encoding process takes $X_i$ and its parent values $X_{\text{pa}_i}$ as input and maps them to a latent variable $Z_i$. The decoding process takes $Z_i$ and $X_{\text{pa}_i}$ as input to construct $\hat{X}_i$ (an approximation of $X_i$). Formally, using the forward implicit diffusion process in Eq. 3, given a sample $X_i$, we encode a unique latent variable $Z_i := Z^T_i$ using the recursive formula

$$Z^{t+1}_i := \sqrt{\frac{\alpha_{t+1}}{\alpha_t}}\, Z^t_i + \varepsilon^i_\theta(Z^t_i, X_{\text{pa}_i}, t)\left(\sqrt{1-\alpha_{t+1}} - \sqrt{\frac{\alpha_{t+1}(1-\alpha_t)}{\alpha_t}}\right), \quad \forall t = 0, \ldots, T-1, \tag{4}$$

where $Z^0_i := X_i$. The latent variable $Z_i$ acts as a proxy for the exogenous noise $U_i$. Using the reverse implicit diffusion process from DDIM in Eq. 2, given a latent vector $Z_i$, we obtain a deterministic decoding $\hat{X}_i := \hat{X}^0_i$ using the recursive formula

$$\hat{X}^{t-1}_i := \sqrt{\frac{\alpha_{t-1}}{\alpha_t}}\, \hat{X}^t_i - \varepsilon^i_\theta(\hat{X}^t_i, X_{\text{pa}_i}, t)\left(\sqrt{\frac{\alpha_{t-1}(1-\alpha_t)}{\alpha_t}} - \sqrt{1-\alpha_{t-1}}\right), \quad \forall t = T, \ldots, 1, \tag{5}$$

where $\hat{X}^T_i := Z_i$. In the following, we use $\text{Enc}_i(X_i, X_{\text{pa}_i})$ and $\text{Dec}_i(Z_i, X_{\text{pa}_i})$ to denote the encoding and decoding functions for node $i$ defined in Eqs. 4 and 5, respectively. See Algorithms 2 and 3 for detailed pseudocode.

Algorithm 2 Enci(Xi, Xpai)
Input: Xi, Xpai
1: Z0i ← Xi
2: for t = 0, . . . , T − 1 do
3:   Zt+1i ← √(αt+1/αt) Zti + εiθ(Zti, Xpai, t) (√(1 − αt+1) − √(αt+1(1 − αt)/αt))
4: end for
5: Return Zi := ZTi

Algorithm 3 Deci(Zi, Xpai)
Input: Zi, Xpai
1: X̂Ti ← Zi
2: for t = T, . . . , 1 do
3:   X̂t−1i ← √(αt−1/αt) X̂ti − εiθ(X̂ti, Xpai, t) (√(αt−1(1 − αt)/αt) − √(1 − αt−1))
4: end for
5: Return X̂i := X̂0i

Answering Causal Queries with a Trained DCM. We now describe how a trained DCM model can be used to (approximately) answer causal queries. Answering observational and interventional queries requires sampling from the observational and the interventional distribution, respectively. With counterfactuals, a query is at the unit level: the structural assignments are changed, but the exogenous noise is identical to that of the observed datum.
(a) Generating Samples for Observational/Interventional Queries. Samples from a DCM model that approximate the interventional distribution $p(X \mid do(X_I := \gamma))$ can be generated as follows. For an intervened node $i$ with intervention $\gamma_i$, the sampled value is always the intervention value; therefore we generate $\hat{X}_i := \gamma_i$. For a non-intervened node $i$, assume by induction that we have the generated parent values $\hat{X}_{\text{pa}_i}$. To generate $\hat{X}_i$, we first sample the latent vector $Z_i \sim \mathcal{N}(0, I_{d_i})$, where $d_i$ is the dimension of $X_i$. Then, taking $Z_i$ as the noise for node $i$, we compute $\hat{X}_i := \text{Dec}_i(Z_i, \hat{X}_{\text{pa}_i})$ as the generated sample value for node $i$. This value $\hat{X}_i$ is then used as the parent value for the children of node $i$. Samples from a DCM model that approximate the observational distribution $p(X)$ can be generated by setting $I = \emptyset$. See Algorithm 4 for the pseudocode.

Algorithm 4 Observational/Interventional Sampling
Input: Intervention set I with values γ (I = ∅ for observational sampling)
1: for i = 1, . . . , K do {in topological order}
2:   Zi ∼ N(0, Idi)
3:   if i ∈ I then
4:     X̂i ← γi
5:   else
6:     X̂i ← Deci(Zi, X̂pai)
7:   end if
8: end for
9: Return X̂ := (X̂1, . . . , X̂K)

(b) Counterfactual Queries. Consider a factual observation $x^F := (x^F_1, \ldots, x^F_K)$ and interventions on a set of nodes $I$ with values $\gamma$. We use a DCM model to construct a counterfactual estimate $\hat{x}^{CF}$ as follows. The counterfactual estimate only differs from the factual value on intervened nodes or descendants of an intervened node. As with interventional queries, for each intervened node $i \in I$, $\hat{x}^{CF}_i := \gamma_i$. For each non-intervened node $i$ that is a descendant of any intervened node, assume by induction that we have the generated counterfactual estimates $\hat{x}^{CF}_{\text{pa}_i}$. To obtain $\hat{x}^{CF}_i$, we first define the estimated factual noise as $\hat{z}^F_i := \text{Enc}_i(x^F_i, x^F_{\text{pa}_i})$. We then generate our counterfactual estimate by using $\hat{z}^F_i$ as the noise for node $i$ and decoding: $\hat{x}^{CF}_i := \text{Dec}_i(\hat{z}^F_i, \hat{x}^{CF}_{\text{pa}_i})$. See Algorithm 5 for the pseudocode.

Algorithm 5 Counterfactual Estimation
Input: Intervention set I with values γ, factual sample xF := (xF1, . . . , xFK)
1: for i = 1, . . . , K do {in topological order}
2:   if i ∈ I then
3:     x̂CFi ← γi
4:   else if i is not a descendant of any intervened node in I then
5:     x̂CFi ← xFi
6:   else
7:     zFi ← Enci(xFi, xFpai) {abduction step}
8:     x̂CFi ← Deci(zFi, x̂CFpai) {action and prediction steps}
9:   end if
10: end for
11: Return x̂CF := (x̂CF1, . . . , x̂CFK)

Note that with $x^F$ we assumed full observability,1 since Algorithm 5 produces a counterfactual estimate for each node. However, when intervening on $X_I$, if the only quantity of interest is the counterfactual of some $X_\star$, then you only need factual samples from $\{X_i : X_i$ is on a path from $X_I$ to $X_\star\}$ (Saha & Garain, 2022). In practice, this could be further relaxed by imputing missing data, which is beyond the scope of this work. A minimal sketch of Algorithms 4 and 5 is given below.

1This is a common assumption in the literature, also made in all the related work, e.g., (Sánchez-Martin et al., 2022; Khemakhem et al., 2021; Pawlowski et al., 2020).
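The following Python sketch mirrors the structure of Algorithms 4 and 5, assuming per-node `enc[i]`/`dec[i]` callables (e.g., wrapping Algorithms 2 and 3) and node dimensions `dims`; all names are placeholders.

```python
import numpy as np

def interventional_sample(nodes, parents, dims, dec, interventions, rng):
    """Algorithm 4: cascade decoded samples in topological order
    (interventions = {} yields an observational sample)."""
    x_hat = {}
    for i in nodes:  # nodes assumed to be in topological order
        if i in interventions:
            x_hat[i] = interventions[i]
        else:
            z = rng.normal(size=dims[i])  # latent Z_i ~ N(0, I_{d_i})
            x_hat[i] = dec[i](z, [x_hat[j] for j in parents[i]])
    return x_hat

def counterfactual(nodes, parents, enc, dec, x_f, interventions, descendants):
    """Algorithm 5: abduction (Enc), action (do), prediction (Dec)."""
    x_cf = {}
    for i in nodes:
        if i in interventions:
            x_cf[i] = interventions[i]
        elif i not in descendants:
            x_cf[i] = x_f[i]  # unaffected by the intervention
        else:
            z_f = enc[i](x_f[i], [x_f[j] for j in parents[i]])    # abduction
            x_cf[i] = dec[i](z_f, [x_cf[j] for j in parents[i]])  # prediction
    return x_cf
```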
## 4 Bounding Counterfactual Error

We now establish *sufficient* conditions under which the counterfactual estimation error can be bounded. In fact, the results in this section hold not only for diffusion models, but for the more general setting of conditional latent variable models satisfying certain properties. Another feature of this result is that it also extends, under an additional assumption, to the more challenging higher-dimensional case. All proofs from this section are collected in Appendix A.

We focus on learning a single conditional latent variable model for an endogenous node $X_i$, given its parents $X_{\text{pa}_i}$, as the models learned for different endogenous nodes do not affect each other. Since the choice of node $i$ plays no role, we drop the subscript $i$ in the following and refer to the node of interest as $X$, its causal parents as $X_{\text{pa}}$, its corresponding exogenous variable as $U$, and its structural equation as $X := f(X_{\text{pa}}, U)$. Let the encoding function be $g: \mathcal{X} \times \mathcal{X}_{\text{pa}} \to \mathcal{Z}$ and the decoding function be $h: \mathcal{Z} \times \mathcal{X}_{\text{pa}} \to \mathcal{X}$, where $\mathcal{Z}$ is the latent space. In the DCM context, the functions $g$ and $h$ correspond to the Enc and Dec functions, respectively.

It is well known that certain counterfactual queries are not identifiable from observational data without making assumptions on the functional relationships, even under causal sufficiency (Pearl, 2009b). Consequently, recent research has been directed towards understanding the conditions under which identifiability results can be obtained (Lu et al., 2020; Nasr-Esfahany & Kiciman, 2023; Nasr-Esfahany et al., 2023). Assumption 2 of our Theorem 1 ensures that the true counterfactual outcome is identifiable; see, e.g., (Lu et al., 2020, Theorem 1) or (Nasr-Esfahany & Kiciman, 2023, Theorem 5). In the context of learned structural causal models, determining whether a given counterfactual query can be answered with sufficient accuracy also requires assumptions on the learned SCM, e.g., the encoder and decoder in this case.

Our first result presents sufficient conditions on the latent variable encoding function and the structural equation under which we can recover the latent exogenous variable up to a (possibly nonlinear) invertible function. We start with a one-dimensional exogenous noise $U$ and variable $X \in \mathcal{X} \subset \mathbb{R}$. In Section 4.1, we provide a similar theorem for the higher-dimensional case where $X \in \mathbb{R}^m$ for $m \geq 3$ (Theorem 2), with a stronger assumption on the Jacobians of $f$ and $g$.

Theorem 1. Assume for $X \in \mathcal{X} \subset \mathbb{R}$ and exogenous noise $U \sim \mathrm{Unif}[0,1]$, $X$ satisfies the structural equation $X := f(X_{\text{pa}}, U)$, where $X_{\text{pa}} \in \mathcal{X}_{\text{pa}} \subset \mathbb{R}^d$ are the parents of node $X$ and $U \perp\!\!\!\perp X_{\text{pa}}$. Consider an encoder-decoder model with encoding function $g: \mathcal{X} \times \mathcal{X}_{\text{pa}} \to \mathcal{Z}$ and decoding function $h: \mathcal{Z} \times \mathcal{X}_{\text{pa}} \to \mathcal{X}$, $Z := g(X, X_{\text{pa}})$, $\hat{X} := h(Z, X_{\text{pa}})$. Assume the following conditions:

1. The encoding is independent of the parent values, $g(X, X_{\text{pa}}) \perp\!\!\!\perp X_{\text{pa}}$.
2. The structural equation $f$ is differentiable and strictly increasing with respect to $U$.
3. The encoding $g$ is invertible and differentiable with respect to $X$.

Then, $g(X, X_{\text{pa}}) = \tilde{q}(U)$ for an invertible function $\tilde{q}$.

Discussion on Assumptions Underlying Theorem 1. (1) Assumption 1, the independence between the encoding and the parent values, may appear strong but is in fact often valid. For example, in the additive noise setting with $X := f(X_{\text{pa}}) + U$ where $X_{\text{pa}}$ and $U$ are independent, if the fitted model $\hat{f} \equiv f$, then the encoder $g(X, X_{\text{pa}}) = X - \hat{f}(X_{\text{pa}}) = U$, and by definition $U$ is independent of $X_{\text{pa}}$.2 The same assumption also appears in other related results on counterfactual identifiability in bijective SCMs; see, e.g., (Nasr-Esfahany et al., 2023, Theorem 5.3) and the proof of Theorem 5 in (Nasr-Esfahany & Kiciman, 2023). We conduct empirical tests to further confirm this assumption by examining the dependence between the parents and the encoding values. Our experimental results show that DCMs consistently fail to reject the null hypothesis of independence. This implies that independent encodings can be found in practice.3 We provide the details of these experiments in Appendix B.3. (2) Assumption 2 is always satisfied under the additive noise model (i.e., $X := f(X_{\text{pa}}) + U$) and post-nonlinear models (Zhang & Hyvarinen, 2012).4
Again, the recent results about counterfactual identifiability, e.g., (Nasr-Esfahany & Kiciman, 2023, Theorem 5) and (Lu et al., 2020, Theorem 1), also utilize the same assumption. The strictly increasing assumption is somewhat necessary for any identifiability result, as it obviates trivial cases, e.g., distinguishing between $X := X_{\text{pa}} + U$ and $X := X_{\text{pa}} - U$. (3) We may consider transformations of the uniform noise $U$ to obtain other settings, for example additive Gaussian noise. For a continuous random variable $U'$ with invertible CDF $F$ and the structural equation $f(\cdot, F(\cdot))$, we have $U' \stackrel{d}{=} F^{-1}(U)$, and the results similarly hold.

We now discuss some consequences of Theorem 1 for estimating counterfactual outcomes.

1. Perfect Estimation. Using Theorem 1, we now look at a condition under which the counterfactual estimate produced by the encoder-decoder model matches the true counterfactual outcome.5 The idea is that if no information is lost in the encoding and decoding steps, i.e., $h(g(X, X_{\text{pa}}), X_{\text{pa}}) = X$, and assuming Theorem 1 ($g(X, X_{\text{pa}}) = \tilde{q}(U)$), we have $h(\tilde{q}(U), X_{\text{pa}}) = X = f(X_{\text{pa}}, U)$, that is, $h(z, X_{\text{pa}}) = f(X_{\text{pa}}, \tilde{q}^{-1}(z))$. This means that in the abduction step, the encoder-decoder model can recover $\tilde{q}(U)$, and in the prediction step, it first applies the inverse of $\tilde{q}$ to the recovered exogenous variable, and then $f$. Thus, the counterfactual estimate equals the true counterfactual outcome. We formalize this in Corollary 1.

Corollary 1. Assume the conditions of Theorem 1. Furthermore, assume the encoder-decoder model pair $(g, h)$ satisfies $h(g(X, X_{\text{pa}}), X_{\text{pa}}) = X$. Consider a factual sample pair $x^F := (x, x_{\text{pa}})$ where $x := f(x_{\text{pa}}, u)$, and an intervention $do(X_{\text{pa}} := \gamma)$. Then, the counterfactual estimate, given by $h(g(x, x_{\text{pa}}), \gamma)$, matches the true counterfactual outcome $x^{CF} := f(\gamma, u)$.

Comparison with Recent Related Work. Recent studies by Nasr-Esfahany & Kiciman (2023) and Nasr-Esfahany et al. (2023) have explored the problem of estimating counterfactual outcomes with learned SCMs. In particular, Nasr-Esfahany & Kiciman (2023, Theorem 5) consider a setting where the SCM $X := f(X_{\text{pa}}, U)$ is learned with a *bijective* (deep conditional generative) model $\hat{f}(X_{\text{pa}}, \hat{U})$. Nasr-Esfahany et al. (2023, Theorem 5.3) considered the closely related problem of learning a ground-truth bijective SCM. The conditions underlying ours and these results are not directly comparable because, unlike our setup, they do not explicitly consider an encoder-decoder model. Our results provide precise conditions on the encoder $g$ and decoder $h$ for recovering the correct counterfactual outcome, and we can extend these results to obtain counterfactual estimation error bounds under relaxed assumptions, a problem that has not been addressed previously.

Counterfactual identifiability in the different context of reinforcement learning was also established by Lu et al. (2020). Their result relies on incomparable assumptions on state-action pairs. Furthermore, our proof techniques are quite different from those of Lu et al. (2020), who rely on a technique based on analyzing conditional quantiles, unlike the algebraic technique employed here.

2In general, if we have a good approximation of $f$ by some $\hat{f}$, then the encoding $g(X, X_{\text{pa}}) = X - \hat{f}(X_{\text{pa}})$ would be close to $U$, as also noted by Hoyer et al. (2008).
3To encourage independence, one could also modify the original diffusion model training objective to add a Hilbert-Schmidt independence criterion (HSIC) (Gretton et al., 2007) regularization term. Our experiments did not show a clear benefit of using this modified objective, and we leave further investigation for future work.
4It is also satisfied by heteroscedastic noise models (HNMs), which are generally defined as $X = f(X_{\text{pa}}) + g(X_{\text{pa}})\cdot U$ where the function $g$ is assumed to be strictly positive (see Definition 1 in (Strobl & Lasko, 2023)), making them compatible with Assumption 2 of Theorem 1.
5Note that identifiability of the counterfactual outcomes does not require identifiability of the SCM.
2. Estimation Error. Another consequence of Theorem 1 is that it can bound the counterfactual error in terms of the reconstruction error of the encoder-decoder model. Informally, the following corollary shows that if the reconstruction $h(g(X, X_{\text{pa}}), X_{\text{pa}})$ is "close" to $X$ (measured under some metric $d(\cdot,\cdot)$), then such encoder-decoder models can provide "good" counterfactual estimates. To the best of our knowledge, this is the first result that establishes a bound on the counterfactual error in relation to the reconstruction error of these encoder-decoder models.

Corollary 2. Let $\tau \geq 0$. Assume the conditions of Theorem 1. Furthermore, assume the encoder-decoder model pair $(g, h)$, under some metric $d$ (e.g., $\|\cdot\|_2$), has reconstruction error at most $\tau$: $d(h(g(X, X_{\text{pa}}), X_{\text{pa}}), X) \leq \tau$. Consider a factual sample pair $x^F := (x, x_{\text{pa}})$ where $x := f(x_{\text{pa}}, u)$, and an intervention $do(X_{\text{pa}} := \gamma)$. Then, the error between the true counterfactual $x^{CF} := f(\gamma, u)$ and the counterfactual estimate given by $h(g(x, x_{\text{pa}}), \gamma)$ is at most $\tau$. In other words, $d(h(g(x, x_{\text{pa}}), \gamma), x^{CF}) \leq \tau$.

The above result suggests that the reconstruction error can serve as an estimate of the counterfactual error. While the true value of $\tau$ is unknown, we may compute a reasonable bound by computing the reconstruction error over the dataset, as in the sketch below.
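As one illustration, the following sketch computes such an empirical proxy for $\tau$, given encoder/decoder callables (placeholders standing in for Enc and Dec) and held-out data; the quantile-based summary is our choice, not prescribed by the text.

```python
import numpy as np

def reconstruction_bound(enc, dec, x, x_pa, q=0.95):
    """Empirical proxy for tau in Corollary 2: per-sample reconstruction
    error d(h(g(x, x_pa), x_pa), x), summarized by a high quantile."""
    x_rec = dec(enc(x, x_pa), x_pa)
    errs = np.linalg.norm(x_rec - x, axis=-1)
    return float(np.quantile(errs, q))

# Toy check with a lossless encoder-decoder: the bound is (numerically) zero.
enc = lambda x, x_pa: x - x_pa  # e.g., an additive-noise residual encoding
dec = lambda z, x_pa: z + x_pa
x_pa = np.random.randn(1000, 1)
x = x_pa + np.random.randn(1000, 1)
print(reconstruction_bound(enc, dec, x, x_pa))
```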
## 4.1 Extension Of Theorem 1 To Higher-Dimensional Setting

In this section, we present an extension of Theorem 1 to a higher-dimensional setting and use it to provide counterfactual identifiability and estimation error results.

Theorem 2. Assume for $X \in \mathcal{X} \subset \mathbb{R}^m$ and continuous exogenous noise $U \sim \mathrm{Unif}[0,1]^m$ for $m \geq 3$, $X$ satisfies the structural equation

$$X = f(X_{\text{pa}}, U), \tag{6}$$

where $X_{\text{pa}} \in \mathcal{X}_{\text{pa}} \subset \mathbb{R}^d$ are the parents of node $X$ and $U \perp\!\!\!\perp X_{\text{pa}}$. Consider an encoder-decoder model with encoding function $g: \mathcal{X} \times \mathcal{X}_{\text{pa}} \to \mathcal{Z}$ and decoding function $h: \mathcal{Z} \times \mathcal{X}_{\text{pa}} \to \mathcal{X}$,

$$Z := g(X, X_{\text{pa}}), \quad \hat{X} := h(Z, X_{\text{pa}}). \tag{7}$$

Assume the following conditions:

1. The encoding is independent of the parent values, $g(X, X_{\text{pa}}) \perp\!\!\!\perp X_{\text{pa}}$.
2. The structural equation $f$ is invertible and differentiable with respect to $U$, and $J_{f_{x_{\text{pa}}}}$ is p.d. for all $x_{\text{pa}} \in \mathcal{X}_{\text{pa}}$.
3. The encoding $g$ is invertible and differentiable with respect to $X$, and $J_{g_{x_{\text{pa}}}}$ is p.d. for all $x_{\text{pa}} \in \mathcal{X}_{\text{pa}}$.
4. The encoding $q_{x_{\text{pa}}}(U) := g(f(x_{\text{pa}}, U), x_{\text{pa}})$ satisfies $J_{q_{x_{\text{pa}}}}\big|_{q^{-1}_{x_{\text{pa}}}(z)} = c(x_{\text{pa}})A$ for all $z \in \mathcal{Z}$ and $x_{\text{pa}} \in \mathcal{X}_{\text{pa}}$, where $c$ is a scalar function and $A$ is an orthogonal matrix.

Then, $g(f(X_{\text{pa}}, U), X_{\text{pa}}) = \tilde{q}(U)$ for an invertible function $\tilde{q}$.

In Corollaries 3 and 4 (Appendix A), we restate Corollaries 1 and 2 in this higher-dimensional setting. The proofs are identical to those of Corollaries 1 and 2, with the only change being that the role of Theorem 1 in those proofs is now played by Theorem 2.

On the Negative Result of Nasr-Esfahany & Kiciman (2023). Nasr-Esfahany & Kiciman (2023) presented a general counterfactual identification impossibility result under multidimensional exogenous noise. The construction in Nasr-Esfahany & Kiciman (2023) considers two structural equations $f, f'$ that are indistinguishable in distribution. Formally, let $R \in \mathbb{R}^{m\times m}$ be a rotation matrix, and $U \in \mathbb{R}^m$ be a standard (isotropic) Gaussian random vector. Define

$$f'(X_{\text{pa}}, U) = \begin{cases} f(X_{\text{pa}}, U) & \text{for } X_{\text{pa}} \in A \\ f(X_{\text{pa}}, R\cdot U) & \text{for } X_{\text{pa}} \in B \end{cases},$$

where the domain $\mathcal{X}_{\text{pa}}$ is split into disjoint sets $A$ and $B$. Now, $f$ and $f'$ generate different counterfactual outcomes for counterfactual queries with evidence in $A$ and intervention in $B$ (or the other way around).

In Theorem 2, we avoid this impossibility result by assuming that we can construct an encoding of a "special" kind, captured through our Assumption 4. In particular, consider the encoding $q_{x_{\text{pa}}}$ at a specific parent value $x_{\text{pa}}$ as a function of the exogenous noise $U$. The assumption states that the Jacobian of the encoding equals $c(x_{\text{pa}})A$ for a scalar function $c$ and an orthogonal matrix $A$. However, it is important to acknowledge that this assumption is highly restrictive and difficult to verify, not to mention challenging to realize in practice with just observational data. Our intention is for these initial ideas to serve as a starting point for addressing the impossibility result, with the expectation that subsequent results will further refine and expand upon them.

## 5 Experimental Evaluation

In this section, we evaluate the empirical performance of DCM for answering causal queries on both synthetic and real-world data. Additional semi-synthetic experimental evaluations are presented in Appendix D.2. For the semi-synthetic experiments, we leverage the *Sachs* dataset (Sachs et al., 2005).

Diffusion Model Implementation and Training. For our implementation of the $\varepsilon_\theta$ model in DCM, we use a simple fully connected neural network with three hidden layers of size [128, 256, 256] and SiLU activation (Elfwing et al., 2018). We fit the model using Adam with a learning rate of 1e-4 and a batch size of 64, and train for 500 epochs. Since root nodes lack parents, the only form of counterfactual reasoning for them involves directly intervening on the root node, which may be done trivially. Therefore, for root nodes, we do not train diffusion models and instead sample from the empirical distribution of the training data. Additional details about the diffusion model parameters are in Appendix C.1.

Compared Approaches. For a fair comparison, our evaluation centers on methodologies that allow for both interventional and counterfactual estimation. With this criterion, we primarily compare DCM to two recently proposed state-of-the-art schemes, VACA (Sánchez-Martin et al., 2022) and CAREFL (Khemakhem et al., 2021), and a general regression model that assumes an additive noise model, which we refer to as ANM.6 For VACA and CAREFL, we use the code provided by their respective authors. The ANM approach performs model selection over a variety of models, including linear and gradient-boosted regressors, and we use the implementation from the popular *DoWhy* causal inference package (Sharma et al., 2019; Blöbaum et al., 2022). Additional details on how ANM answers causal queries are provided in Appendix C.2, and implementation details for VACA, CAREFL, and ANM are in Appendix C.1. A sketch of a plausible $\varepsilon^i_\theta$ architecture following the description above is given below.
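As a concrete reading of this description, the following PyTorch sketch instantiates one plausible $\varepsilon^i_\theta$: an MLP with hidden sizes [128, 256, 256] and SiLU, with the node value, parent values, and a time embedding concatenated as input. The sinusoidal time embedding and its dimension are our assumptions, not specified by the text.

```python
import torch
import torch.nn as nn

class EpsNet(nn.Module):
    """Noise predictor eps_theta^i(x_t, x_pa, t): an MLP with hidden sizes
    [128, 256, 256] and SiLU; the sinusoidal time embedding is one plausible
    choice for injecting t, not prescribed by the paper."""
    def __init__(self, d_node, d_pa, d_time=16):
        super().__init__()
        self.d_time = d_time
        self.net = nn.Sequential(
            nn.Linear(d_node + d_pa + d_time, 128), nn.SiLU(),
            nn.Linear(128, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, d_node))

    def forward(self, x_t, x_pa, t):
        half = self.d_time // 2
        freqs = torch.exp(-torch.arange(half) * (4.0 / half))
        ang = t.float().unsqueeze(-1) * freqs
        temb = torch.cat([ang.sin(), ang.cos()], dim=-1)
        return self.net(torch.cat([x_t, x_pa, temb], dim=-1))

# Hypothetical shapes: a 3-dim node with 6-dim parents, batch of 8.
model = EpsNet(d_node=3, d_pa=6)
out = model(torch.randn(8, 3), torch.randn(8, 6), torch.randint(1, 101, (8,)))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr 1e-4 (Sec. 5)
```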
## 5.1 Synthetic Data Experiments

For generating quantitative results, we use synthetic experiments, since we know the exact structural equations and hence have access to the ground-truth observational, interventional, and counterfactual distributions. We provide two sets of synthetic experiments: a set of two larger graphs provided here, and a set of four smaller graphs provided in Appendix D.3. The two larger graphs include a *ladder* graph structure (see Figure 9) and a randomly generated graph structure. Both graphs are composed of 10 nodes of three dimensions each, and the random graph is a randomly sampled directed acyclic graph (see Appendix C.3 for more details). Since each diffusion model only uses the parents as input, modeling each node depends only on the incoming degree of the node (the number of causal parents).

Following Sánchez-Martin et al. (2022), for the observational and interventional distributions, we report the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) between the true and estimated distributions. For counterfactual estimation, we report the mean squared error (MSE) between the true and estimated counterfactual values. Again following Sánchez-Martin et al. (2022), we consider two broad classes of structural equations:

1. Additive Noise Model (NLIN): $f_i(X_{\mathrm{pa}_i}, U_i) = f'(X_{\mathrm{pa}_i}) + U_i$. In particular, we will be interested in the case where the $f'$ are non-linear.
2. Nonadditive Noise Model (NADD): $f_i(X_{\mathrm{pa}_i}, U_i)$ is an arbitrary function of $X_{\mathrm{pa}_i}$ and $U_i$.

To prevent overfitting of hyperparameters, we randomly generate these structural equations for each initialization (see the sketch following Table 1). Each structural equation is a neural network with a single hidden layer of 16 units and SiLU activation (Elfwing et al., 2018), with random weights sampled from [−1, 1].

⁶In spite of our best efforts, we were unable to run a proper comparison against the very recent approach proposed by Javaloy et al. (2023) due to challenges in adapting their code to our setting.

| SCM    | Setting | Metric   | DCM (×10⁻²)  | ANM (×10⁻²)  | VACA (×10⁻²)   | CAREFL (×10⁻²) |
|--------|---------|----------|--------------|--------------|----------------|----------------|
| Ladder | NLIN    | Obs. MMD | 0.44±0.14    | 0.63±0.21    | 2.82±0.83      | 13.41±1.14     |
|        |         | Int. MMD | 1.63±0.20    | 1.80±0.17    | 4.48±0.78      | 15.01±1.23     |
|        |         | CF. MSE  | 3.42±1.67    | 10.65±2.48   | 41.03±19.00    | 17.46±6.04     |
|        | NADD    | Obs. MMD | 0.32±0.11    | 0.40±0.16    | 3.22±1.05      | 14.60±1.34     |
|        |         | Int. MMD | 1.54±0.17    | 1.57±0.15    | 5.13±1.16      | 16.87±1.85     |
|        |         | CF. MSE  | 4.28±2.39    | 10.71±5.47   | 27.42±12.34    | 22.26±13.75    |
| Random | NLIN    | Obs. MMD | 0.28±0.12    | 0.47±0.15    | 1.82±0.73      | 12.11±1.21     |
|        |         | Int. MMD | 1.45±0.07    | 1.88±0.22    | 3.52±1.03      | 14.15±2.34     |
|        |         | CF. MSE  | 9.51±13.12   | 23.68±28.49  | 82.10±78.49    | 52.57±82.03    |
|        | NADD    | Obs. MMD | 0.19±0.05    | 0.31±0.14    | 2.09±0.60      | 12.63±1.10     |
|        |         | Int. MMD | 1.42±0.25    | 1.73±0.44    | 4.24±1.40      | 14.65±1.76     |
|        |         | CF. MSE  | 20.13±57.52  | 44.76±86.02  | 124.82±275.09  | 54.29±83.68    |

Table 1: Mean±standard deviation of observational, interventional, and counterfactual queries on the ladder and random SCMs in the nonlinear (NLIN) and nonadditive (NADD) settings over 20 random initializations of the model and training data. The values are multiplied by 100 for clarity.

Each simulation generates n = 5000 samples as training data. Let $\hat{M}$ be a fitted causal model and $M^\star$ be the true causal model, both capable of generating observational and interventional samples and of answering counterfactual queries.
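For illustration, here is a sketch of the random structural-equation generator described before Table 1, assuming a NumPy implementation; the function names are ours.

```python
import numpy as np

def make_random_equation(parent_dim: int, hidden: int = 16, rng=None):
    """Sketch of the randomly generated structural equations described above:
    a single-hidden-layer network (16 units, SiLU) with weights drawn
    uniformly from [-1, 1]; an NLIN node then computes f'(x_pa) + u."""
    rng = rng or np.random.default_rng()
    W1 = rng.uniform(-1, 1, size=(parent_dim, hidden))
    W2 = rng.uniform(-1, 1, size=(hidden, 1))
    silu = lambda z: z / (1.0 + np.exp(-z))  # SiLU(z) = z * sigmoid(z)

    def f_prime(x_pa):
        return silu(x_pa @ W1) @ W2

    return f_prime

# NLIN node with 3-dimensional parents: X = f'(X_pa) + U
rng = np.random.default_rng(0)
f_prime = make_random_equation(parent_dim=3, rng=rng)
x_pa = rng.normal(size=(5000, 3))
u = rng.normal(size=(5000, 1))
x = f_prime(x_pa) + u
```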
Each pair of graph and structural equation type is evaluated over 20 different initializations, and we report the mean value. We provide additional details about our observational, interventional, and counterfactual evaluation frameworks in Appendix C.4.

**Synthetic Experiments Results.** In Table 1, we provide the performance of all evaluated models for observational, interventional, and counterfactual queries, averaged over 20 separate initializations of models and training data, with the lowest value in each row bolded. The values are multiplied by 100 for clarity. We also provide box plots of the performances in Figure 1 (Appendix D). We see that DCM and ANM are the most competitive approaches, with similar performance on observational and interventional queries. If the ANM is the correctly specified model, then the ANM encoding should be close to the true encoding, assuming the regression model fits the data well. We see this in the nonlinear setting: ANM performs well but struggles to outperform DCM, perhaps due to the difficulty of fitting a neural network with classical regression models. Note that since observational and interventional queries are inherently easier than counterfactual queries, it is natural that we observe smaller improvements over the other baselines there.

Our proposed DCM method exhibits superior performance compared to VACA and CAREFL, often by as much as an order of magnitude. The better performance of DCM over CAREFL may be attributed to the fact that DCM uses the causal graph, while CAREFL only relies on a non-unique causal ordering. In the case of VACA, the limited expressiveness of the GNN encoder-decoder might be the reason behind its inferior performance, especially when dealing with multivariable, multidimensional complex structural equations as considered here, a shortcoming that has also been acknowledged by the authors of VACA (Sánchez-Martin et al., 2022). Furthermore, VACA performs approximate inference: e.g., when performing a counterfactual query with $do(X_1 = 2)$, the predicted counterfactual value for $X_1$ is not exactly 2, due to imperfections in the reconstruction. This design choice may result in downstream compounding errors, possibly explaining discrepancies in performance. To avoid penalizing this feature, all metrics are computed using only nodes downstream of the intervened nodes. In terms of speed, DCM is much more compute-efficient than VACA and CAREFL, typically training around 7 times faster on our benchmarks; see Appendix D.1 for more details. Lastly, the standard deviation of DCM is small relative to the other models, demonstrating relative consistency, which points to the robustness of our proposed approach.

| Algorithm  | Median Abs. Error | Mean Abs. Error |
|------------|-------------------|-----------------|
| DCM        | 0.5981 ± 9.5e-3   | 0.5779 ± 1.9e-3 |
| CAREFL     | 0.5983 ± 3.0e-2   | 0.6004 ± 2.2e-2 |
| ANM        | 0.6746 ± 4.8e-8   | 0.6498 ± 1.4e-6 |
| Linear SCM | 0.6045 ± 0        | 0.6042 ± 0      |

Table 2: Performance for interventional predictions on the fMRI dataset, reported as median/mean absolute error ± standard deviation, using the mean over 10 random seeds. We do not include VACA due to implementation difficulties. The results for CAREFL, ANM, and Linear SCM are consistent with those observed by Khemakhem et al. (2021). We note that the Linear SCM has zero standard deviation, as the ridge regression model does not vary with the random seed.
## 5.2 Real Data Experiments

We evaluate DCM on interventional real-world data, namely the electrical-stimulation interventional fMRI data from Thompson et al. (2020), using the experimental setup of Khemakhem et al. (2021). The fMRI data comprises samples from 14 patients with medically refractory epilepsy, with time series of the Cingulate Gyrus (CG) and Heschl's Gyrus (HG). The assumed underlying causal structure is the bivariate graph CG → HG. Our interventional ground-truth data comprises an intervened value of CG and an observed sample of HG. We refer the reader to (Thompson et al., 2020; Khemakhem et al., 2021) for a more thorough discussion of the dataset.

In Table 2, we note that the differences in performance are smaller than in our synthetic results. We believe this is due to two reasons. Firstly, the data appears to be close to linear, as exhibited by the relatively similar performance of the standard ridge regression model (Linear SCM). Secondly, we only have a single ground-truth interventional value instead of multiple samples from the interventional distribution. As a result, we can only compute the absolute error based on this single value, rather than evaluating the maximum mean discrepancy (MMD) between the true and predicted interventional distributions. Specifically, in Table 2, we compute the absolute error between the model prediction and the interventional sample for each of the 14 patients and report the mean/median. The availability of only a single interventional sample introduces a possibly large amount of irreducible error, artificially inflating the error values. For more details on this error inflation, see Appendix C.5.

## 6 Concluding Remarks

We demonstrate that diffusion models, in particular the DDIM formulation (which allows for unique encoding and decoding), provide a flexible and practical framework for approximating interventions (the do-operator) and the counterfactual (abduction-action-prediction) steps. Our approach, DCM, is applicable independent of the DAG structure. We find that, empirically, DCM outperforms competing methods in all three causal settings (observational, interventional, and counterfactual queries), across various classes of structural equations and graphs. While not in the scope of this paper, the proposed DCM approach can also be naturally extended to any setting where diffusion models are applicable, such as categorical data, images, etc. For higher-dimensional spaces, we believe DCMs should scale well, as diffusion models are typically deployed in high-dimensional image settings and exhibit state-of-the-art performance there. Furthermore, we may leverage many existing optimization and implementation tricks, as diffusion models are a very active field of research.

The proposed method does come with certain limitations. For example, as with all the previously mentioned related approaches, DCM precludes unobserved confounding. The theoretical analyses require assumptions, not all of which are easy to test. However, our practical results suggest that DCM provides competitive empirical performance, even when some assumptions needed for our theoretical guarantees are violated.

## References

Ahmed M Alaa and Mihaela Van Der Schaar. Bayesian inference of individualized treatment effects using multi-task gaussian processes. *Advances in Neural Information Processing Systems*, 30, 2017.

Sourabh Balgi, Jose M Pena, and Adel Daoud. Personalized public policy analysis in social sciences using causal-graphical normalizing flows.
In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 11810–11818, 2022.

David E Blair. *Inversion theory and conformal mapping*, volume 9. American Mathematical Soc., 2000.

Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, and Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. *arXiv preprint arXiv:2206.06821*, 2022.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/7ac71d433f282034e088473244df8c02-Paper.pdf.

Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. *Neural Networks*, 107:3–11, 2018. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2017.12.012. URL https://www.sciencedirect.com/science/article/pii/S0893608017302976. Special issue on deep reinforcement learning.

Sergio Garrido, Stanislav Borysov, Jeppe Rich, and Francisco Pereira. Estimating causal effects with the neural autoregressive density estimator. *Journal of Causal Inference*, 9(1):211–218, 2021.

Arthur Gretton, Kenji Fukumizu, Choon Teo, Le Song, Bernhard Schölkopf, and Alex Smola. A kernel statistical test of independence. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), *Advances in Neural Information Processing Systems*, volume 20. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/file/d5cfead94f5350c12c322b5b664544c1-Paper.pdf.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(25):723–773, 2012. URL http://jmlr.org/papers/v13/gretton12a.html.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In *NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications*, 2021. URL https://openreview.net/forum?id=qw8AKxfYbI.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf.

Patrik Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. *Advances in Neural Information Processing Systems*, 21, 2008.

Adrián Javaloy, Pablo Sánchez-Martín, and Isabel Valera. Causal normalizing flows: from theory to practice, 2023.

Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera.
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. *Advances in Neural Information Processing Systems*, 33:265–277, 2020.

Ilyes Khemakhem, Ricardo Monti, Robert Leech, and Aapo Hyvarinen. Causal autoregressive flows. In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, volume 130 of *Proceedings of Machine Learning Research*, pp. 3520–3528. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr.press/v130/khemakhem21a.html.

Murat Kocaoglu, Christopher Snyder, Alexandros G Dimakis, and Sriram Vishwanath. CausalGAN: Learning causal implicit generative models with adversarial training. In *International Conference on Learning Representations*, 2018.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A versatile diffusion model for audio synthesis. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=a-xFK8Ymz5J.

Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, and Bernhard Schölkopf. Sample-efficient reinforcement learning via counterfactual-based data augmentation. *arXiv preprint arXiv:2012.09092*, 2020.

Raha Moraffah, Bahman Moraffah, Mansooreh Karami, Adrienne Raglin, and Huan Liu. CAN: A causal adversarial network for learning observational and interventional distributions. *arXiv preprint arXiv:2008.11376*, 2020.

Krikamol Muandet, Motonobu Kanagawa, Sorawit Saengkyongam, and Sanparith Marukatat. Counterfactual mean embeddings. *Journal of Machine Learning Research*, 22:162–1, 2021.

Arash Nasr-Esfahany and Emre Kiciman. Counterfactual (non-)identifiability of learned structural causal models. *arXiv preprint arXiv:2301.09031*, 2023.

Arash Nasr-Esfahany, Mohammad Alizadeh, and Devavrat Shah. Counterfactual identifiability of bijective causal models. *arXiv preprint arXiv:2302.02228*, 2023.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 8162–8171. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/nichol21a.html.

Álvaro Parafita and Jordi Vitrià. Causal inference with deep causal graphs. *arXiv preprint arXiv:2006.08380*, 2020.

Nick Pawlowski, Daniel Coelho de Castro, and Ben Glocker. Deep structural causal models for tractable counterfactual inference. *Advances in Neural Information Processing Systems*, 33:857–869, 2020.

J. Pearl, M. Glymour, and N.P. Jewell. *Causal Inference in Statistics: A Primer*. Wiley, 2016. ISBN 9781119186847. URL https://books.google.com/books?id=L3G-CgAAQBAJ.

Judea Pearl. Causal inference in statistics: An overview. *Statistics Surveys*, 3(none):96–146, 2009a. doi: 10.1214/09-SS057. URL https://doi.org/10.1214/09-SS057.

Judea Pearl. Causal inference in statistics: An overview, 2009b.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents, 2022. URL https://arxiv.org/abs/2204.06125.

Karen Sachs, Omar Perez, Dana Pe'er, Douglas A. Lauffenburger, and Garry P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. *Science*, 308(5721):523–529, 2005. doi: 10.1126/science.1105809. URL https://www.science.org/doi/abs/10.1126/science.1105809.

Saptarshi Saha and Utpal Garain.
On noise abduction for answering counterfactual queries: A practical outlook. *Transactions on Machine Learning Research*, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022. URL https://arxiv.org/abs/2205.11487.

Pedro Sanchez and Sotirios A. Tsaftaris. Diffusion causal models for counterfactual estimation. In *CLeaR*, 2022.

Pedro Sanchez, Xiao Liu, Alison Q O'Neil, and Sotirios A Tsaftaris. Diffusion models for causal discovery via topological ordering. In *The Eleventh International Conference on Learning Representations*, 2022.

Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In *International Conference on Machine Learning*, pp. 3076–3085. PMLR, 2017.

Amit Sharma, Emre Kiciman, et al. DoWhy: A Python package for causal inference. https://github.com/microsoft/dowhy, 2019.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 2256–2265, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/sohl-dickstein15.html.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=St1giarCHLP.

Eric V Strobl and Thomas A Lasko. Identifying patient-specific root causes with the heteroscedastic noise model. *Journal of Computational Science*, 72:102099, 2023.

Pablo Sánchez-Martin, Miriam Rateike, and Isabel Valera. VACA: Designing variational graph autoencoders for causal queries. *Proceedings of the AAAI Conference on Artificial Intelligence*, 36(7):8159–8168, Jun. 2022. doi: 10.1609/aaai.v36i7.20789. URL https://ojs.aaai.org/index.php/AAAI/article/view/20789.

W. H. Thompson, R. Nair, H. Oya, O. Esteban, J. M. Shine, C. I. Petkov, R. A. Poldrack, M. Howard, and R. Adolphs. A data resource from concurrent intracranial stimulation and functional MRI of the human brain. *Scientific Data*, 7(1):258, August 2020. ISSN 2052-4463. doi: 10.1038/s41597-020-00595-y. URL https://doi.org/10.1038/s41597-020-00595-y.

Matej Zečević, Devendra Dhami, Athresh Karanam, Sriraam Natarajan, and Kristian Kersting. Interventional sum-product networks: Causal inference with tractable probabilistic models. *Advances in Neural Information Processing Systems*, 34:15019–15031, 2021a.

Matej Zečević, Devendra Singh Dhami, Petar Veličković, and Kristian Kersting. Relating graph neural networks to structural causal models. *arXiv preprint arXiv:2109.04173*, 2021b.

Kun Zhang and Aapo Hyvarinen. On the identifiability of the post-nonlinear causal model. *arXiv preprint arXiv:1205.2599*, 2012.

## A Missing Details From Section 4

**Notation.** For two sets $\mathcal{X}, \mathcal{Y}$, a map $f : \mathcal{X} \to \mathcal{Y}$, and a set $S \subset \mathcal{X}$, we define $f(S) = \{f(x) : x \in S\}$. For $x \in \mathcal{X}$, we define $x + S = \{x + x' : x' \in S\}$. For a random variable $X$, we define $p_X(x)$ as the probability density function (PDF) at $x$. We use p.d. to denote positive definite matrices and $Jf|_x$ to denote the Jacobian of $f$ evaluated at $x$.
For a function with two inputs $f(\cdot, \cdot)$, we define $f_x(Y) := f(x, Y)$ and $f_y(X) := f(X, y)$.

**Lemma 1.** *For $\mathcal{U}, \mathcal{Z} \subset \mathbb{R}$, consider a family of invertible functions $q_{x_{\mathrm{pa}}} : \mathcal{U} \to \mathcal{Z}$ for $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}} \subset \mathbb{R}^d$. Then $\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z)) = c(z)$ for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$ if and only if $q_{x_{\mathrm{pa}}}$ can be expressed as*
$$q_{x_{\mathrm{pa}}}(u) = q(u + r(x_{\mathrm{pa}}))$$
*for some function $r$ and invertible $q$.*

*Proof.* First, for the reverse direction, we may assume $q_{x_{\mathrm{pa}}}(u) = q(u + r(x_{\mathrm{pa}}))$. Then
$$\frac{dq_{x_{\mathrm{pa}}}}{du}(u) = \frac{dq}{du}(u + r(x_{\mathrm{pa}})).$$
Now plugging in $u = q_{x_{\mathrm{pa}}}^{-1}(z) = q^{-1}(z) - r(x_{\mathrm{pa}})$,
$$\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z)) = \frac{dq}{du}\big(q^{-1}(z) - r(x_{\mathrm{pa}}) + r(x_{\mathrm{pa}})\big) = \frac{dq}{du}(q^{-1}(z)) = c(z).$$
Therefore $\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z))$ is a function of $z$ alone.

For the forward direction, assume $\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z)) = c(z)$. Define $s_{x_{\mathrm{pa}}} : \mathcal{Z} \to \mathcal{U}$ to be the inverse of $q_{x_{\mathrm{pa}}}$. By the inverse function theorem and by assumption,
$$\frac{ds_{x_{\mathrm{pa}}}}{dz}(z) = \frac{dq_{x_{\mathrm{pa}}}^{-1}}{dz}(z) = \frac{1}{\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z))} = \frac{1}{c(z)}$$
for all $x_{\mathrm{pa}}$. Since the derivatives of $s_{x_{\mathrm{pa}}}$ are equal for all $x_{\mathrm{pa}}$, by the mean value theorem, all $s_{x_{\mathrm{pa}}}$ are additive shifts of each other. Without loss of generality, we may consider an arbitrary fixed $x_{\mathrm{pa}_0} \in \mathcal{X}_{\mathrm{pa}}$ and reparametrize $s_{x_{\mathrm{pa}}}$ as
$$s_{x_{\mathrm{pa}}}(z) = s_{x_{\mathrm{pa}_0}}(z) - r(x_{\mathrm{pa}}).$$
Let $u = s_{x_{\mathrm{pa}}}(z)$. Then we have
$$s_{x_{\mathrm{pa}_0}}(z) = u + r(x_{\mathrm{pa}}), \qquad q_{x_{\mathrm{pa}}}(u) = z = q_{x_{\mathrm{pa}_0}}(u + r(x_{\mathrm{pa}})),$$
and we obtain the desired representation by choosing $q = q_{x_{\mathrm{pa}_0}}$. $\square$

**Theorem 1.** *Assume for $X \in \mathcal{X} \subset \mathbb{R}$ and exogenous noise $U \sim \mathrm{Unif}[0, 1]$, $X$ satisfies the structural equation $X := f(X_{\mathrm{pa}}, U)$, where $X_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}} \subset \mathbb{R}^d$ are the parents of node $X$ and $U \perp\!\!\!\perp X_{\mathrm{pa}}$. Consider an encoder-decoder model with encoding function $g : \mathcal{X} \times \mathcal{X}_{\mathrm{pa}} \to \mathcal{Z}$ and decoding function $h : \mathcal{Z} \times \mathcal{X}_{\mathrm{pa}} \to \mathcal{X}$, $Z := g(X, X_{\mathrm{pa}})$, $\hat{X} := h(Z, X_{\mathrm{pa}})$. Assume the following conditions:*

1. *The encoding is independent of the parent values, $g(X, X_{\mathrm{pa}}) \perp\!\!\!\perp X_{\mathrm{pa}}$.*
2. *The structural equation $f$ is differentiable and strictly increasing with respect to $U$.*
3. *The encoding $g$ is invertible and differentiable with respect to $X$.*

*Then, $g(X, X_{\mathrm{pa}}) = \tilde{q}(U)$ for an invertible function $\tilde{q}$.*

*Proof.* First, we show that $g(X, X_{\mathrm{pa}}) = g(f(X_{\mathrm{pa}}, U), X_{\mathrm{pa}})$ is solely a function of $U$. Since continuity and invertibility imply strict monotonicity, without loss of generality assume $g$ is a strictly increasing function (if not, we may replace $g$ with $-g$ and use $h(-Z, X_{\mathrm{pa}})$). By properties of the composition of functions, $q_{x_{\mathrm{pa}}}(U) := g(f(x_{\mathrm{pa}}, U), x_{\mathrm{pa}})$ is also differentiable and strictly increasing with respect to $U$; by strict monotonicity, it is also invertible. By the assumption that the encoding $Z$ is independent of $X_{\mathrm{pa}}$,
$$Z = q_{x_{\mathrm{pa}}}(U) \perp\!\!\!\perp X_{\mathrm{pa}}. \tag{9}$$
Therefore the conditional distribution of $Z$ does not depend on $X_{\mathrm{pa}}$. Using the assumption that $U \perp\!\!\!\perp X_{\mathrm{pa}}$, for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$ and $z$ in the support of $Z$, by the change of density formula,
$$p_Z(z) = \frac{p_U(q_{x_{\mathrm{pa}}}^{-1}(z))}{\left|\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z))\right|} = \frac{\mathbb{1}\{q_{x_{\mathrm{pa}}}^{-1}(z) \in [0,1]\}}{\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z))} = c_1(z). \tag{10}$$
The numerator follows from the fact that the noise is uniformly distributed.
The term $\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z))$ is nonnegative since $q_{x_{\mathrm{pa}}}$ is increasing. Furthermore, since $p_Z(z) > 0$, the numerator in Eq. 10 is always equal to 1 and the denominator must not depend on $X_{\mathrm{pa}}$,
$$\frac{dq_{x_{\mathrm{pa}}}}{du}(q_{x_{\mathrm{pa}}}^{-1}(z)) = c_2(z)$$
for some function $c_2$. From Lemma 1, we may express
$$q_{x_{\mathrm{pa}}}(u) = q(u + r(x_{\mathrm{pa}})) \tag{11}$$
for an invertible function $q$. Next, since $Z \perp\!\!\!\perp X_{\mathrm{pa}}$, the support of $Z$ does not depend on $X_{\mathrm{pa}}$; equivalently, the ranges of $q_{x_1}$ and $q_{x_2}$ are equal for all $x_1, x_2 \in \mathcal{X}_{\mathrm{pa}}$,
$$q_{x_1}([0,1]) = q_{x_2}([0,1]). \tag{12}$$
Applying Eq. 11 and the invertibility of $q$,
$$q([0,1] + r(x_1)) = q([0,1] + r(x_2))$$
$$[0,1] + r(x_1) = [0,1] + r(x_2)$$
$$[r(x_1), r(x_1) + 1] = [r(x_2), r(x_2) + 1].$$
Since this holds for all $x_1, x_2 \in \mathcal{X}_{\mathrm{pa}}$, the function $r(x_{\mathrm{pa}})$ is constant, i.e., $r(x_{\mathrm{pa}}) \equiv r$. Thus let $\tilde{q}$ be $\tilde{q}(u) = q(u + r) = q_{x_{\mathrm{pa}}}(u)$, which is solely a function of $U$ for all $x_{\mathrm{pa}}$. For all $x_{\mathrm{pa}}$,
$$g(f(x_{\mathrm{pa}}, U), x_{\mathrm{pa}}) = q_{x_{\mathrm{pa}}}(U) = \tilde{q}(U) \implies g(f(X_{\mathrm{pa}}, U), X_{\mathrm{pa}}) = \tilde{q}(U), \tag{13}$$
for an invertible function $\tilde{q}$. This completes the proof. $\square$

**Corollary 1.** *Assume the conditions of Theorem 1. Furthermore, assume the encoder-decoder model pair $(g, h)$ satisfies $h(g(X, X_{\mathrm{pa}}), X_{\mathrm{pa}}) = X$. Consider a factual sample pair $x^{\mathrm{F}} := (x, x_{\mathrm{pa}})$ where $x := f(x_{\mathrm{pa}}, u)$ and an intervention $do(X_{\mathrm{pa}} := \gamma)$. Then, the counterfactual estimate given by $h(g(x, x_{\mathrm{pa}}), \gamma)$ matches the true counterfactual outcome $x^{\mathrm{CF}} := f(\gamma, u)$.*

*Proof.* For the intervention $do(X_{\mathrm{pa}} := \gamma)$, the true counterfactual outcome is $x^{\mathrm{CF}} := f(\gamma, u)$. By the reconstruction assumption applied to the pair $(x^{\mathrm{CF}}, \gamma)$, $h(g(x^{\mathrm{CF}}, \gamma), \gamma) = x^{\mathrm{CF}}$. Now, since Eq. 13 holds for all $X_{\mathrm{pa}}$ and $U$, it also holds for the factual and counterfactual samples. We have
$$g(x, x_{\mathrm{pa}}) = g(f(x_{\mathrm{pa}}, u), x_{\mathrm{pa}}) = \tilde{q}(u) = g(f(\gamma, u), \gamma) = g(x^{\mathrm{CF}}, \gamma).$$
Therefore, the counterfactual estimate produced by the encoder-decoder model is
$$h(g(x, x_{\mathrm{pa}}), \gamma) = h(g(x^{\mathrm{CF}}, \gamma), \gamma) = x^{\mathrm{CF}}.$$
This completes the proof. $\square$

**Corollary 2.** *Let $\gamma \geq 0$. Assume the conditions of Theorem 1. Furthermore, assume the encoder-decoder model pair $(g, h)$, under some metric $d$ (e.g., $\|\cdot\|_2$), has reconstruction error less than $\tau$: $d(h(g(X, X_{\mathrm{pa}}), X_{\mathrm{pa}}), X) \leq \tau$. Consider a factual sample pair $x^{\mathrm{F}} := (x, x_{\mathrm{pa}})$ where $x := f(x_{\mathrm{pa}}, u)$ and an intervention $do(X_{\mathrm{pa}} := \gamma)$. Then, the error between the true counterfactual $x^{\mathrm{CF}} := f(\gamma, u)$ and the counterfactual estimate given by $h(g(x, x_{\mathrm{pa}}), \gamma)$ is at most $\tau$. In other words, $d(h(g(x, x_{\mathrm{pa}}), \gamma), x^{\mathrm{CF}}) \leq \tau$.*

*Proof.* For the intervention $do(X_{\mathrm{pa}} := \gamma)$, the counterfactual outcome is $x^{\mathrm{CF}} := f(\gamma, u)$. Since Eq. 13 holds for all $X_{\mathrm{pa}}$ and $U$, it also holds for the factual and counterfactual samples. We have
$$g(x, x_{\mathrm{pa}}) = g(f(x_{\mathrm{pa}}, u), x_{\mathrm{pa}}) = \tilde{q}(u) = g(f(\gamma, u), \gamma) = g(x^{\mathrm{CF}}, \gamma). \tag{14}$$
By the assumption on the reconstruction error of the encoder-decoder and Eq. 14,
$$d(h(g(x^{\mathrm{CF}}, \gamma), \gamma), x^{\mathrm{CF}}) \leq \tau \tag{15}$$
$$d(h(g(x, x_{\mathrm{pa}}), \gamma), x^{\mathrm{CF}}) \leq \tau. \tag{16}$$
$\square$

We now discuss a lemma that extends Lemma 1 to higher dimensions.
**Lemma 2.** *For $\mathcal{U}, \mathcal{Z} \subset \mathbb{R}^m$, consider a family of invertible functions $q_{x_{\mathrm{pa}}} : \mathcal{U} \to \mathcal{Z}$ for $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}} \subset \mathbb{R}^d$. If $Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = cA$ for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$, then $q_{x_{\mathrm{pa}}}$ can be expressed as*
$$q_{x_{\mathrm{pa}}}(u) = q(u + r(x_{\mathrm{pa}}))$$
*for some function $r$ and invertible $q$.*

*Proof.* Assume $Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = cA$. By the inverse function theorem,
$$Jq_{x_{\mathrm{pa}}}^{-1}|_z = \left(Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)}\right)^{-1}. \tag{17}$$
Define $s_{x_{\mathrm{pa}}} : \mathcal{Z} \to \mathcal{U}$ to be the inverse of $q_{x_{\mathrm{pa}}}$. By assumption and Eq. 17,
$$Js_{x_{\mathrm{pa}}}|_z = Jq_{x_{\mathrm{pa}}}^{-1}|_z = (cA)^{-1} \tag{18}$$
for all $x_{\mathrm{pa}}$, a constant $c$, and an orthogonal matrix $A$; since $(cA)^{-1} = c^{-1}A^{\top}$ is again a scaled orthogonal matrix, we relabel and write $Js_{x_{\mathrm{pa}}}|_z = cA$. Since the Jacobian of $s_{x_{\mathrm{pa}}}$ is a scaled orthogonal matrix, $s_{x_{\mathrm{pa}}}$ is a conformal function. Therefore, by Liouville's theorem, $s_{x_{\mathrm{pa}}}$ is a Möbius function (Blair, 2000), which implies that
$$s_{x_{\mathrm{pa}}}(z) = b_{x_{\mathrm{pa}}} + \alpha_{x_{\mathrm{pa}}} A_{x_{\mathrm{pa}}}(z - a_{x_{\mathrm{pa}}})/\|z - a_{x_{\mathrm{pa}}}\|^{\varepsilon},$$
where $A_{x_{\mathrm{pa}}}$ is an orthogonal matrix, $\varepsilon \in \{0, 2\}$, $a_{x_{\mathrm{pa}}} \in \mathbb{R}^m$, and $\alpha_{x_{\mathrm{pa}}} \in \mathbb{R}$. The Jacobian of $s_{x_{\mathrm{pa}}}$ is equal to $cA$ by assumption,
$$Js_{x_{\mathrm{pa}}}|_z = \frac{\alpha_{x_{\mathrm{pa}}} A_{x_{\mathrm{pa}}}}{\|z - a_{x_{\mathrm{pa}}}\|^{\varepsilon}}\left(I - \varepsilon\frac{(z - a_{x_{\mathrm{pa}}})(z - a_{x_{\mathrm{pa}}})^{T}}{\|z - a_{x_{\mathrm{pa}}}\|^{2}}\right) = cA.$$
This imposes constraints on the variables $\alpha$, $a$, and $\varepsilon$. Choose $z$ such that $z - a_{x_{\mathrm{pa}}} = kv$ for a unit vector $v$ and multiply by $A_{x_{\mathrm{pa}}}^{-1}$,
$$cAA_{x_{\mathrm{pa}}}^{-1} = \frac{\alpha_{x_{\mathrm{pa}}}}{\|kv\|^{\varepsilon}}\left(I - \varepsilon\frac{k^2 vv^{T}}{k^2\|v\|^2}\right)$$
$$I = \varepsilon vv^{T} + \left(\frac{ck^{\varepsilon}}{\alpha_{x_{\mathrm{pa}}}}\right)AA_{x_{\mathrm{pa}}}^{-1}.$$
If $\varepsilon = 2$, choosing different values of $k$ (implying different values of $z$) results in varying values on the right-hand side, which should equal the constant identity matrix. Therefore we must have $\varepsilon = 0$. This also implies that $\alpha_{x_{\mathrm{pa}}} = c$ and $A = A_{x_{\mathrm{pa}}}$. This gives the further parametrization
$$s_{x_{\mathrm{pa}}}(z) = b_{x_{\mathrm{pa}}} + cA(z - a_{x_{\mathrm{pa}}}) = b'(x_{\mathrm{pa}}) + cAz,$$
where $b'(x_{\mathrm{pa}}) = b_{x_{\mathrm{pa}}} - cAa_{x_{\mathrm{pa}}}$. Without loss of generality, we may consider an arbitrary fixed $x_{\mathrm{pa}_0} \in \mathcal{X}_{\mathrm{pa}}$,
$$s_{x_{\mathrm{pa}_0}}(z) - s_{x_{\mathrm{pa}}}(z) = r(x_{\mathrm{pa}}) := b'(x_{\mathrm{pa}_0}) - b'(x_{\mathrm{pa}}).$$
Let $u = s_{x_{\mathrm{pa}}}(z)$. Then we have
$$s_{x_{\mathrm{pa}_0}}(z) = u + r(x_{\mathrm{pa}}), \qquad q_{x_{\mathrm{pa}}}(u) = z = q_{x_{\mathrm{pa}_0}}(u + r(x_{\mathrm{pa}})),$$
and we obtain the desired representation by choosing $q = q_{x_{\mathrm{pa}_0}}$. $\square$

**Theorem 2.** *Assume $X \in \mathcal{X} \subset \mathbb{R}^m$ with continuous exogenous noise $U \sim \mathrm{Unif}[0, 1]^m$ for $m \geq 3$, and that $X$ satisfies the structural equation*
$$X = f(X_{\mathrm{pa}}, U) \tag{6}$$
*where $X_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}} \subset \mathbb{R}^d$ are the parents of node $X$ and $U \perp\!\!\!\perp X_{\mathrm{pa}}$. Consider an encoder-decoder model with encoding function $g : \mathcal{X} \times \mathcal{X}_{\mathrm{pa}} \to \mathcal{Z}$ and decoding function $h : \mathcal{Z} \times \mathcal{X}_{\mathrm{pa}} \to \mathcal{X}$,*
$$Z := g(X, X_{\mathrm{pa}}), \quad \hat{X} := h(Z, X_{\mathrm{pa}}). \tag{7}$$
*Assume the following conditions:*

1. *The encoding is independent of the parent values, $g(X, X_{\mathrm{pa}}) \perp\!\!\!\perp X_{\mathrm{pa}}$.*
2. *The structural equation $f$ is invertible and differentiable with respect to $U$, and $Jf_{x_{\mathrm{pa}}}$ is p.d. for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$.*
3. *The encoding $g$ is invertible and differentiable with respect to $X$, and $Jg_{x_{\mathrm{pa}}}$ is p.d. for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$.*
4. *The encoding $q_{x_{\mathrm{pa}}}(U) := g(f(x_{\mathrm{pa}}, U), x_{\mathrm{pa}})$ satisfies $Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = c(x_{\mathrm{pa}})A$ for all $z \in \mathcal{Z}$ and $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$, where $c$ is a scalar function and $A$ is an orthogonal matrix.*

*Then, $g(f(X_{\mathrm{pa}}, U), X_{\mathrm{pa}}) = \tilde{q}(U)$ for an invertible function $\tilde{q}$.*

*Proof.* We show that $g(X, X_{\mathrm{pa}}) = g(f(X_{\mathrm{pa}}, U), X_{\mathrm{pa}})$ is solely a function of $U$. By properties of the composition of functions, $q_{x_{\mathrm{pa}}}(U) := g(f(x_{\mathrm{pa}}, U), x_{\mathrm{pa}})$ is also invertible and differentiable. Since $Jf_{x_{\mathrm{pa}}}$ and $Jg_{x_{\mathrm{pa}}}$ are p.d. and $Jq_{x_{\mathrm{pa}}} = Jf_{x_{\mathrm{pa}}} Jg_{x_{\mathrm{pa}}}$, $Jq_{x_{\mathrm{pa}}}$ is p.d. for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$ as well. By the assumption that the encoding $Z$ is independent of $X_{\mathrm{pa}}$,
$$Z = q_{x_{\mathrm{pa}}}(U) \perp\!\!\!\perp X_{\mathrm{pa}}. \tag{19}$$
Therefore the conditional distribution of $Z$ does not depend on $X_{\mathrm{pa}}$. Using the assumption that $U \perp\!\!\!\perp X_{\mathrm{pa}}$, for all $x_{\mathrm{pa}} \in \mathcal{X}_{\mathrm{pa}}$ and $z$ in the support of $Z$, by the change of density formula,
$$p_Z(z) = \frac{p_U(q_{x_{\mathrm{pa}}}^{-1}(z))}{\left|\det Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)}\right|} = \frac{\mathbb{1}\{q_{x_{\mathrm{pa}}}^{-1}(z) \in [0,1]^m\}}{\det Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)}} = c_1(z). \tag{20}$$
The numerator follows from the fact that the noise is uniformly distributed. The determinant of the Jacobian term is nonnegative since $Jq_{x_{\mathrm{pa}}}$ is p.d. Furthermore, since $p_Z(z) > 0$, the numerator in Eq. 20 is always equal to 1 and the denominator must not depend on $X_{\mathrm{pa}}$,
$$\det Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = c_2(z) \tag{21}$$
for some function $c_2$. From our assumption, $Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = c(x_{\mathrm{pa}})A$ for an orthogonal matrix $A$ for all $z$. Applying this to Eq. 21,
$$\det Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = \det\big(c(x_{\mathrm{pa}})A\big) = c(x_{\mathrm{pa}})^m \det A = c_2(z),$$
which implies that $c(x_{\mathrm{pa}}) \equiv c$ is a constant function, i.e., $Jq_{x_{\mathrm{pa}}}|_{q_{x_{\mathrm{pa}}}^{-1}(z)} = cA$. By Lemma 2, we may express $q_{x_{\mathrm{pa}}}(u)$ as
$$q_{x_{\mathrm{pa}}}(u) = q(u + r(x_{\mathrm{pa}})) \tag{22}$$
for an invertible function $q$. Next, since $Z \perp\!\!\!\perp X_{\mathrm{pa}}$, the support of $Z$ does not depend on $X_{\mathrm{pa}}$; equivalently, the ranges of $q_{x_1}$ and $q_{x_2}$ are equal for all $x_1, x_2 \in \mathcal{X}_{\mathrm{pa}}$,
$$q_{x_1}([0,1]^m) = q_{x_2}([0,1]^m). \tag{23}$$
Applying Eq. 22 and the invertibility of $q$,
$$q([0,1]^m + r(x_1)) = q([0,1]^m + r(x_2))$$
$$[0,1]^m + r(x_1) = [0,1]^m + r(x_2)$$
$$[r(x_1), r(x_1) + 1]^m = [r(x_2), r(x_2) + 1]^m.$$
Since this holds for all $x_1, x_2 \in \mathcal{X}_{\mathrm{pa}}$, $r(x)$ is constant, i.e., $r(x) \equiv r$. Thus let $\tilde{q}$ be $\tilde{q}(u) = q(u + r) = q_{x_{\mathrm{pa}}}(u)$, which is solely a function of $U$ for all $x_{\mathrm{pa}}$. For all $x_{\mathrm{pa}}$,
$$g(f(x_{\mathrm{pa}}, U), x_{\mathrm{pa}}) = q_{x_{\mathrm{pa}}}(U) = \tilde{q}(U) \implies g(f(X_{\mathrm{pa}}, U), X_{\mathrm{pa}}) = \tilde{q}(U). \tag{24}$$
This completes the proof. $\square$

**Corollary 3.** *Assume the conditions of Theorem 2. Furthermore, assume the encoder-decoder model pair $(g, h)$ satisfies*
$$h(g(X, X_{\mathrm{pa}}), X_{\mathrm{pa}}) = X. \tag{25}$$
*Consider a factual sample pair $(x, x_{\mathrm{pa}})$ where $x := f(x_{\mathrm{pa}}, u)$ and an intervention $do(X_{\mathrm{pa}} := \gamma)$. Then, the counterfactual estimate given by $h(g(x, x_{\mathrm{pa}}), \gamma)$ matches the true counterfactual outcome $x^{\mathrm{CF}} := f(\gamma, u)$.*

**Corollary 4.** *Let $\gamma \geq 0$. Assume the conditions of Theorem 2.*
*Furthermore, assume the encoder-decoder model pair $(g, h)$, under some metric $d$ (e.g., $\|\cdot\|_2$), has reconstruction error less than $\tau$,*
$$d(h(g(X, X_{\mathrm{pa}}), X_{\mathrm{pa}}), X) \leq \tau. \tag{26}$$
*Consider a factual sample pair $(x, x_{\mathrm{pa}})$ where $x := f(x_{\mathrm{pa}}, u)$ and an intervention $do(X_{\mathrm{pa}} := \gamma)$. Then, the error between the true counterfactual $x^{\mathrm{CF}} := f(\gamma, u)$ and the counterfactual estimate $h(g(x, x_{\mathrm{pa}}), \gamma)$ is at most $\tau$, i.e., $d(h(g(x, x_{\mathrm{pa}}), \gamma), x^{\mathrm{CF}}) \leq \tau$.*

## B Testing Independence Between Parents And Encodings

We empirically evaluate the dependence between the encoding and the parent values. We consider a bivariate nonlinear SCM $X_1 \to X_2$ where $X_2 = f(X_1, U_2) = X_1^2 + U_2$, and $X_1$ and $U_2$ are independently sampled from a standard normal distribution. We evaluate the HSIC between $X_1$ and the encoding of $X_2$. We fit our model on n = 5000 samples and evaluate the HSIC score on 1000 test samples from the same distribution. We compute a p-value using a kernel-based independence test (Gretton et al., 2007) and compare our performance to ANM, a correctly specified model in this setting. We perform this experiment 100 times. Given true independence, we expect the p-values to follow a uniform distribution. In Table 3, we show summary statistics of the p-values from the 100 trials, with the last row representing the expected values under truly uniform p-values (which occurs under the null hypothesis). The p-values from the correctly specified ANM approach are close to a uniform distribution, demonstrating that it is possible to obtain encodings that are close to independent. Although the p-values produced by our DCM approach are not completely uniform, the encodings do not consistently reject the null hypothesis of independence.

|                     | Mean  | Std. Dev | 10% Quantile | 90% Quantile | Min  | Max   |
|---------------------|-------|----------|--------------|--------------|------|-------|
| DCM                 | 0.196 | 0.207    | 0.004        | 0.515        | 6e-6 | 0.947 |
| ANM                 | 0.419 | 0.255    | 0.092        | 0.774        | 3e-5 | 0.894 |
| True Uniform (null) | 0.500 | 0.288    | 0.100        | 0.900        | 1e-2 | 0.990 |

Table 3: Summary statistics of the p-values. These results demonstrate that it is empirically possible to obtain encodings independent of the parent variables. We further note that the ANM is correctly specified in this setting, and DCM is relatively competitive despite being far more general.

## C Missing Experimental Details

In this section, we provide missing details from Section 5.

## C.1 Model Hyperparameters

For all experiments in our evaluation, we hold the model hyperparameters constant. For DCM, we use T = 100 total time steps with a linear $\beta_t$ schedule interpolating between 1e-4 and 0.1, i.e., $\beta_t = (0.1 - 10^{-4})\frac{t-1}{T-1} + 10^{-4}$ for $t \in [T]$. To incorporate the parents' values and the time step $t$, we simply concatenate the parent values and $t/T$ as input to the $\varepsilon_\theta$ model; a sketch is given below.
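For concreteness, here is a minimal NumPy sketch of this schedule and the conditioning input; the variable names are ours, and the derived $\alpha_t$, $\bar{\alpha}_t$ quantities follow the standard DDPM definitions.

```python
import numpy as np

T = 100  # total diffusion time steps

# Linear beta schedule from C.1: interpolates between 1e-4 and 0.1.
t = np.arange(1, T + 1)
betas = (0.1 - 1e-4) * (t - 1) / (T - 1) + 1e-4
assert np.isclose(betas[0], 1e-4) and np.isclose(betas[-1], 0.1)

# Standard DDPM/DDIM quantities derived from the schedule.
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Conditioning input at step `step` for a noisy node value x_t with parents x_pa:
# the model input is the concatenation [x_t, x_pa, step/T].
def model_input(x_t, x_pa, step):
    return np.concatenate([x_t, x_pa, [step / T]])
```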
We found that using the popular cosine schedule (Nichol & Dhariwal, 2021) resulted in worse empirical performance, as did using a positional encoding for the time $t$. We believe the drop in performance from the positional encoding is due to the low dimensionality of the problem, since the dimension of the positional encoding would dominate the dimension of the other inputs. We also evaluated using classifier-free guidance (CFG) (Ho & Salimans, 2021) to improve the reliance on the parent values; however, we found this also decreased performance. We provide a plausible explanation through Theorem 1: we would like our encoding $g(Y, X)$ to be independent of $X$, but a CFG encoding $(1 + w)g(Y, X) - wg(Y, 0)$ would only serve to increase the dependence of $g(Y, X)$ on $X$, which is counterproductive to our objective.

For VACA, we use the default implementation⁷, training for 500 epochs with a learning rate of 0.005; the encoder and decoder have hidden dimensions of size [8, 8] and [16], respectively, a latent vector dimension of 4, and a parent dropout rate of 0.2. For CAREFL, we also use the default implementation⁸ with neural spline autoregressive flows (Durkan et al., 2019), training for 500 epochs with a learning rate of 0.005, four flows, and ten hidden units. For ANM, we also use the default implementation to select a regression model. Given a set of fitted regression models, the ANM chooses the model with the lowest root mean squared error averaged over splits of the data. The ANM considers the following regression models: linear, ridge, LASSO, elastic net, random forest, histogram gradient boosting, support vector, extra trees, k-NN, and AdaBoost.

⁷https://github.com/psanch21/VACA
⁸https://github.com/piomonti/carefl/

## C.2 Details About The Additive Noise Model (ANM)

For a given node $X_i$ with parents $X_{\mathrm{pa}_i}$, consider fitting a regression model $\hat{f}_i$ where $\hat{f}_i(X_{\mathrm{pa}_i}) \approx X_i$. This regression model together with the training dataset is sufficient for generating samples from the observational and interventional distributions, as well as for computing counterfactuals.

**Observational/Interventional Samples.** Samples are constructed in topological order. For intervened nodes, the sampled value is always the intervened value. Non-intervened root nodes in the SCM are sampled from the empirical distribution of the training set. A new sample for $X_i$ is generated by inductively sampling the parent values $X_{\mathrm{pa}_i}$, sampling $\hat{U}$ from the empirical residual distribution, and outputting $\hat{X}_i := \hat{f}_i(X_{\mathrm{pa}_i}) + \hat{U}$.

**Counterfactual Estimation.** For a factual observation $x^{\mathrm{F}}$ and interventions on nodes $I$ with values $\gamma$, the counterfactual estimate differs from the factual estimate only on nodes that are intervened on or downstream of an intervened node. We proceed in topological order. For each intervened node $i$, $\hat{x}^{\mathrm{CF}}_i := \gamma_i$. For each non-intervened node $i$ downstream of an intervened node, define $\hat{u}^{\mathrm{F}}_i := x^{\mathrm{F}}_i - \hat{f}_i(x^{\mathrm{F}}_{\mathrm{pa}_i})$, the residual and estimated noise for the factual sample. Let $\hat{x}^{\mathrm{CF}}_{\mathrm{pa}_i}$ be the counterfactual estimates of the parents of $X_i$. Then $\hat{x}^{\mathrm{CF}}_i := \hat{f}_i(\hat{x}^{\mathrm{CF}}_{\mathrm{pa}_i}) + \hat{u}^{\mathrm{F}}_i$. Therefore, for counterfactual queries, if the true functional equation $f_i$ is an additive noise model and $\hat{f}_i \approx f_i$, the regression model will have low counterfactual error. In fact, if $\hat{f}_i \equiv f_i$, the regression model will have perfect counterfactual performance. A minimal sketch of this procedure for one node is given below.
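The sketch uses ridge regression as a stand-in for the selected regression model; the function names and toy data are ours.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy additive ground truth for a single node with one parent.
rng = np.random.default_rng(0)
x_pa = rng.normal(size=(5000, 1))
x = 2.0 * x_pa[:, 0] + rng.normal(size=5000)

f_hat = Ridge().fit(x_pa, x)  # stand-in for the model-selected regressor

def counterfactual(x_f, x_pa_f, gamma):
    """Abduction: residual as the noise estimate;
    action + prediction: re-apply f_hat at the intervened parent value."""
    u_hat = x_f - f_hat.predict(np.atleast_2d(x_pa_f))[0]
    return f_hat.predict(np.atleast_2d(gamma))[0] + u_hat

print(counterfactual(x_f=x[0], x_pa_f=x_pa[0], gamma=[1.5]))
```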
## C.3 Details About Random Graph Generation

The random graph is composed of ten nodes. We sample this graph by generating a random upper-triangular adjacency matrix, where each entry in the upper-triangular half is equal to 1 with probability 30%. We then check that the resulting graph consists of a single connected component (if not, we resample the graph). For a graphical representation, we provide an example in Figure 4.

## C.4 Query Evaluation Frameworks For Synthetic Data Experiments

**Observational Evaluation.** We generate 1000 samples from both the fitted and the true observational distribution and report the MMD between the two. Since DCM and the ANM use the empirical distribution for root nodes, we only take the MMD between non-root nodes.

**Interventional Evaluation.** We consider interventions on individual nodes. For an intervention node $i$, we choose 20 intervention values $\gamma_1, \ldots, \gamma_{20}$, linearly interpolating between the 10% and 90% quantiles of the marginal distribution of node $i$, to represent realistic interventions. Then for each intervention $do(X_i := \gamma_j)$, we generate 100 values from the fitted model and the true causal model, $\hat{X}$ and $X^\star$ respectively. Since the intervention only affects the descendants of node $i$, we subset $\hat{X}$ and $X^\star$ to include only the descendants of node $i$, and compute the MMD on $\hat{X}$ and $X^\star$ to obtain a distance $\delta_{i,j}$ for the specific node and interventional value. Lastly, we report the mean MMD over all 20 intervention values and all intervened nodes. For the ladder graph, we choose to intervene on $X_2$ and $X_3$, as these are the farthest upstream nodes and capture the maximum difficulty of the intervention. For the random graph, we randomly select three non-sink nodes to intervene on. A formal description of our interventional evaluation framework is given in Algorithm 6.

**Algorithm 6** Evaluation of Interventional Queries
1: **for** each intervention node $i$ **do**
2:   $\gamma_1, \ldots, \gamma_{20}$ linearly interpolate the 10% to 90% quantiles of node $i$
3:   **for** each intervention $\gamma_j$ generated above **do**
4:     Intervene $do(X_i := \gamma_j)$
5:     Generate 100 samples $\hat{X}$ from $\hat{M}$ and $X^\star$ from $M^\star$ of the descendants of $i$
6:     $\delta_{i,j} \leftarrow \widehat{\mathrm{MMD}}(\hat{X}, X^\star)$
7:   **end for**
8: **end for**
9: Output the mean of all $\delta_{i,j}$

**Counterfactual Evaluation.** Similarly to the interventional evaluation, we consider interventions on individual nodes, and for node $i$ we choose 20 intervention values $\gamma_1, \ldots, \gamma_{20}$, linearly interpolating between the 10% and 90% quantiles of the marginal distribution of node $i$. Then for each intervention $do(X_i := \gamma_j)$, we generate 100 non-intervened factual samples $x^{\mathrm{F}}$ and query for the estimated and true counterfactual values $\hat{x}^{\mathrm{CF}}$ and $x^{\mathrm{CF}}$, respectively. As before, $\hat{x}^{\mathrm{CF}}$ and $x^{\mathrm{CF}}$ only differ on the descendants of node $i$, so we only consider that subset. We compute the MSE $\delta_{i,j}$, since the counterfactual estimate and the ground truth are point values, giving us an error for a specific node and interventional value. Lastly, we report the mean MSE over all 20 intervention values and all intervened nodes. We use the same intervention nodes as in the interventional evaluation above. A formal description of our counterfactual evaluation framework is given in Algorithm 7.

**Algorithm 7** Evaluation of Counterfactual Queries
1: **for** each intervention node $i$ **do**
2:   $\gamma_1, \ldots, \gamma_{20}$ linearly interpolate the 10% to 90% quantiles of node $i$
3:   **for** each intervention $\gamma_j$ generated above **do**
4:     Generate 100 factual samples $x^{\mathrm{F}}_1, \ldots, x^{\mathrm{F}}_{100}$
5:     Intervene $do(X_i := \gamma_j)$
6:     Using $\{x^{\mathrm{F}}_k\}_{k=1}^{100}$, compute counterfactual estimates $\{\hat{x}^{\mathrm{CF}}_k\}_{k=1}^{100}$ and true counterfactuals $\{x^{\mathrm{CF}}_k\}_{k=1}^{100}$ for all descendants of $i$
7:     $\delta_{i,j} \leftarrow \mathrm{MSE}(\{\hat{x}^{\mathrm{CF}}_k\}_{k=1}^{100}, \{x^{\mathrm{CF}}_k\}_{k=1}^{100})$
8:   **end for**
9: **end for**
10: Output the mean of all $\delta_{i,j}$
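The MMD used in Algorithm 6 can be computed with a standard kernel estimator; the following minimal sketch uses an RBF kernel and a biased (V-statistic) estimate, where the bandwidth choice is ours.

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased (V-statistic) MMD^2 estimate between sample sets X and Y
    with an RBF kernel; a sketch of the metric used in Algorithm 6."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # e.g., samples from the fitted model
Y = rng.normal(size=(100, 3))   # e.g., samples from the true model
print(mmd_rbf(X, Y))            # near 0 when the distributions match
```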
## C.5 Explanation Of Error Inflation In fMRI Experiments

In our fMRI experiments, we compute the absolute error on a single interventional sample. As an intuition for why we cannot hope to achieve errors close to zero, and for why the errors are relatively much closer together, consider the following toy problem. Assume $X_1, \ldots, X_n, Y_1, \ldots, Y_n \overset{\mathrm{iid}}{\sim} \mathcal{N}(\theta, 1)$ and we observe $X_1, \ldots, X_n$ as data. Consider the two following statistical problems:

1. Learn a distribution $\hat{D}$ such that samples from $\hat{D}$ and the samples $Y_1, \ldots, Y_n$ achieve a low MMD.
2. Learn an estimator that estimates $Y_1$ well under squared error.

In the first statistical problem, if $\hat{D}$ is a reasonable estimator, we should expect that more data leads to a lower MMD; for example, the error may decay at a 1/n rate. We should expect MMD values close to zero, and the magnitude of the performance is directly interpretable. In the second statistical problem, under squared error, the problem is equivalent to
$$\min_c \mathbb{E}_{Y_1 \sim \mathcal{N}(\theta, 1)}(Y_1 - c)^2 = \min_c \theta^2 + 1 - 2c\theta + c^2.$$
The optimal estimator is the mean $\theta$ and achieves a squared error of 1. Of course, in practice we do not know $\theta$; if we were to use the sample mean of $X_1, \ldots, X_n$, we would have an error of order $1 + 1/n$. While we may still compare various estimators (e.g., sample mean, sample median, a deep neural network), all the losses are inflated by 1, making the differences in performance seem much more minute.

Our synthetic experiments directly estimate the interventional distribution and compute the MMD between samples from the true and the model's distributions, so they are instances of the first statistical problem. The fMRI real-data experiments aim to estimate a single interventional value from a distribution, so they are instances of the second statistical problem. We should not expect these results to be very small, and the metric values are all shifted by an intrinsic irreducible error.

## D Additional Experiments

## D.1 Running Time Experiments

We present the training times in minutes for one seed on the ladder graph using the default implementations and parameters. For a fair comparison, these are all evaluated on a CPU. Note that ANM is the fastest, as it uses standard regression models, and our proposed DCM approach is about 7-9x faster than CAREFL and VACA. The generation (inference) times for all the methods are on the order of 1 second. For VACA and CAREFL, we use the implementations provided by the respective authors.

Figure 1: Left: Nonlinear setting (NLIN), Right: Nonadditive setting (NADD). Box plots of observational, interventional, and counterfactual queries of the ladder and random SCMs over 20 random initializations of the model and training data.

|                            | DCM  | ANM | VACA  | CAREFL |
|----------------------------|------|-----|-------|--------|
| Training Time (in minutes) | 15.3 | 4.3 | 142.8 | 110.5  |

Table 4: Training times for a single run.

## D.2 Semi-Synthetic Experiments

To further evaluate the effectiveness of DCM, we explore a semi-synthetic experiment based on the *Sachs* dataset (Sachs et al., 2005). We use the real-world graph comprising 11 nodes⁹. The graph represents an intricate network of signaling pathways within human T cells; each of the 11 nodes corresponds to one of the phosphorylated proteins or phospholipids examined in their study. For our experiment, we sample data in a semi-synthetic manner. For the root nodes, we sample from the empirical marginal distribution.
For non-root nodes, since the ground-truth structural equations are unknown, we use a random neural network as the structural equation, as was done in Section 5. We report the performance in Table 5. The performance improvements with our DCM approach corroborate our prior findings.

⁹https://www.bnlearn.com/bnrepository/discrete-small.html#sachs

| SCM   | Setting | Metric   | DCM (×10⁻²) | ANM (×10⁻²) | VACA (×10⁻²) | CAREFL (×10⁻²) |
|-------|---------|----------|-------------|-------------|--------------|----------------|
| Sachs | NLIN    | Obs. MMD | 0.31±0.15   | 0.21±0.04   | 0.53±0.24    | 7.30±0.95      |
|       |         | Int. MMD | 1.25±0.11   | 1.37±0.12   | 2.03±0.36    | 5.77±0.85      |
|       |         | CF. MSE  | 0.72±0.19   | 4.72±1.71   | 17.71±9.23   | 9.59±2.52      |
|       | NADD    | Obs. MMD | 0.18±0.07   | 0.18±0.06   | 0.39±0.24    | 6.10±1.14      |
|       |         | Int. MMD | 1.42±0.26   | 1.86±0.51   | 2.21±0.98    | 5.29±2.07      |
|       |         | CF. MSE  | 1.99±2.49   | 4.68±5.77   | 7.30±11.57   | 8.30±8.34      |

Table 5: Mean±standard deviation of observational, interventional, and counterfactual queries on the Sachs SCM in the nonlinear (NLIN) and nonadditive (NADD) settings over 10 random initializations of the model and training data. The values are multiplied by 100 for clarity.

## D.3 Additional Synthetic Experiments

In this section, we present additional experimental evidence showcasing the superior performance of DCM in addressing various types of causal queries. We consider four additional smaller graph structures, which we call the *chain*, *triangle*, *diamond*, and *Y* graphs (see Figure 9). The exact equations are presented in Table 6. These functional equations were chosen to balance the signal-to-noise ratio of the covariates and noise, to represent realistic settings. Furthermore, these structural equations were chosen after hyperparameter selection, meaning we did not tune DCM's parameters nor tune the structural equations after observing the performance of the models.

Consider a node with value $X$, parents $X_{\mathrm{pa}}$, and exogenous noise $U$, where $X_{\mathrm{pa}} \perp\!\!\!\perp U$, and a corresponding functional equation $f$ such that $X = f(X_{\mathrm{pa}}, U)$. Further assuming the additive noise model, $X := f_1(X_{\mathrm{pa}}) + f_2(U)$. In this additive setting, since $X_{\mathrm{pa}} \perp\!\!\!\perp U$, we have $\mathrm{Var}[X] = \mathrm{Var}[f_1(X_{\mathrm{pa}})] + \mathrm{Var}[f_2(U)]$. We choose $f_1$ and $f_2$ such that
$$0.05 \leq \frac{\mathrm{Var}[f_2(U)]}{\mathrm{Var}[f_1(X_{\mathrm{pa}})]} \leq 0.5,$$
reflecting the fact that the effect of the noise is roughly comparable to, or up to an order of magnitude smaller than, that of the parents. For the nonadditive case, we decompose the variance using the law of total variance,
$$\mathrm{Var}[X] = \mathbb{E}\,\mathrm{Var}[f(X_{\mathrm{pa}}, U) \mid U] + \mathrm{Var}\,\mathbb{E}[f(X_{\mathrm{pa}}, U) \mid U].$$
Similarly, we choose the functional equation $f$ such that $f$ satisfies
$$0.05 \leq \frac{\mathrm{Var}\,\mathbb{E}[f(X_{\mathrm{pa}}, U) \mid U]}{\mathbb{E}\,\mathrm{Var}[f(X_{\mathrm{pa}}, U) \mid U]} \leq 0.5.$$
For all graphs, $U_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, 1)$, and we choose $f$ such that $X_i = U_i$ if $X_i$ is a root node, i.e., $f$ is the identity function. Lastly, we normalize every node $X_i$ such that $\mathrm{Var}(X_i) \approx 1$. For the sake of clarity, we omit all normalizing terms in the formulas and omit the functional equations for root nodes below. The additive-case constraint can be checked by Monte Carlo estimation, as sketched below.
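The following sketch estimates the variance ratio for an illustrative candidate equation of ours (not one from Table 6):

```python
import numpy as np

# Monte Carlo check of the additive signal-to-noise constraint
# 0.05 <= Var[f2(U)] / Var[f1(X_pa)] <= 0.5 for a candidate equation.
rng = np.random.default_rng(0)
x_pa = rng.normal(size=1_000_000)
u = rng.normal(size=1_000_000)

f1 = lambda x: x ** 2      # parent term (Var ~ 2 for standard normal input)
f2 = lambda u: u / 2.0     # noise term  (Var = 0.25)

ratio = np.var(f2(u)) / np.var(f1(x_pa))
print(ratio, 0.05 <= ratio <= 0.5)   # ~0.125, within the allowed range
```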
**Results.** For these four graph structures (*chain*, *triangle*, *diamond*, and *Y*), in Table 7 we provide the performance of all evaluated models for observational, interventional, and counterfactual queries, averaged over 10 separate initializations of models and training data, with the lowest value in each row bolded. The values are multiplied by 100 for clarity. In Figure 2, we show box plots for the same set of experiments. The results here are similar to those observed with the larger graph structures in Table 1. DCM has the lowest error in 7 of the 12 nonlinear settings, with the correctly specified ANM having the lowest error in the remaining 5. Furthermore, DCM and ANM both typically have a lower standard deviation than the other competing methods. In the nonadditive settings, DCM attains the lowest values for all 12 causal queries.

| SCM      | Equation             | Nonlinear Case                                   | Nonadditive Case                                                          |
|----------|----------------------|--------------------------------------------------|---------------------------------------------------------------------------|
| Chain    | $f_2(U_2, X_1)$      | $\exp(X_1/2) + U_2/4$                            | $1/((U_2 + X_1)^2 + 0.5)$                                                  |
|          | $f_3(U_3, X_2)$      | $(X_2 - 5)^3/15 + U_3$                           | $\sqrt{X_2} + \lvert U_3\rvert/(0.1 + X_2)$                                |
| Triangle | $f_2(U_2, X_1)$      | $2X_1^2 + U_2$                                   | $X_1/((U_2 + X_1)^2 + 1) + U_2/4$                                          |
|          | $f_3(U_3, X_1, X_2)$ | $20/(1 + \exp(-X_2^2 + X_1)) + U_3$              | $(\lvert U_3\rvert + 0.3)(-X_1 + X_2/2 + \lvert U_3\rvert/5)^2$            |
| Diamond  | $f_2(U_2, X_1)$      | $X_1^2 + U_2/2$                                  | $\sqrt{\lvert X_1\rvert}(\lvert U_2\rvert + 0.1)/2 + \lvert X_1\rvert + U_2/5$ |
|          | $f_3(U_3, X_1, X_2)$ | $X_2^2 - 2/(1 + \exp(-X_1)) + U_3/2$             | $1/(1 + (\lvert U_3\rvert + 0.5)\exp(-X_2 + X_1))$                         |
|          | $f_4(U_4, X_2, X_3)$ | $X_3/(\lvert X_2 + 2\rvert + X_3 + 0.5) + U_4/10$ | $(X_3 + X_2 + U_4/4 - 7)^2 - 20$                                          |
| Y        | $f_3(U_3, X_1, X_2)$ | $4/(1 + \exp(-X_1 - X_2)) - X_2^2 + U_3/2$       | $(X_1 - 2X_2 - 2)(\lvert U_3\rvert + 0.2)$                                 |
|          | $f_4(U_4, X_3)$      | $20/(1 + \exp(X_3^2/2 - X_3)) + U_4$             | $(\cos(X_3) + U_4/2)^2$                                                    |

Table 6: The equations defining the data-generating process in the NLIN and NADD cases.

| SCM      | Setting | Metric   | DCM (×10⁻²) | ANM (×10⁻²)  | VACA (×10⁻²)  | CAREFL (×10⁻²) |
|----------|---------|----------|-------------|--------------|---------------|-----------------|
| Chain    | NLIN    | Obs. MMD | 0.27±0.11   | 0.19±0.27    | 1.63±0.42     | 4.25±1.12       |
|          |         | Int. MMD | 1.71±0.27   | 1.70±0.47    | 10.10±1.81    | 8.63±0.52       |
|          |         | CF. MSE  | 0.33±0.16   | 2.43±2.49    | 25.99±6.47    | 19.62±4.01      |
|          | NADD    | Obs. MMD | 0.22±0.16   | 1.51±0.43    | 1.53±0.41     | 3.48±1.06       |
|          |         | Int. MMD | 2.85±0.45   | 7.34±0.62    | 10.42±2.77    | 11.02±1.32      |
|          |         | CF. MSE  | 75.84±1.65  | 88.56±2.63   | 98.82±4.16    | 105.80±11.04    |
| Triangle | NLIN    | Obs. MMD | 0.16±0.11   | 0.12±0.07    | 3.12±0.93     | 4.64±1.03       |
|          |         | Int. MMD | 1.50±0.30   | 3.28±0.79    | 18.43±1.72    | 7.08±0.82       |
|          |         | CF. MSE  | 1.12±0.26   | 9.80±1.70    | 178.69±16.45  | 41.85±19.58     |
|          | NADD    | Obs. MMD | 0.25±0.12   | 0.51±0.08    | 2.42±0.48     | 5.12±1.10       |
|          |         | Int. MMD | 2.81±0.21   | 5.54±0.60    | 11.09±0.85    | 4.17±0.44       |
|          |         | CF. MSE  | 26.28±6.68  | 97.25±16.45  | 173.67±16.28  | 121.99±31.34    |
| Y        | NLIN    | Obs. MMD | 0.11±0.05   | 0.14±0.08    | 2.29±0.69     | 6.82±0.85       |
|          |         | Int. MMD | 1.23±0.08   | 1.40±0.14    | 9.50±0.96     | 14.97±1.29      |
|          |         | CF. MSE  | 0.28±0.25   | 1.22±0.27    | 28.79±4.02    | 27.85±5.33      |
|          | NADD    | Obs. MMD | 0.21±0.15   | 1.00±0.19    | 1.51±0.44     | 3.39±0.58       |
|          |         | Int. MMD | 1.54±0.23   | 5.62±0.31    | 5.37±0.72     | 6.51±0.40       |
|          |         | CF. MSE  | 33.45±2.06  | 47.55±2.56   | 60.67±3.24    | 52.41±6.69      |
| Diamond  | NLIN    | Obs. MMD | 0.22±0.10   | 0.13±0.07    | 2.77±0.45     | 8.28±1.29       |
|          |         | Int. MMD | 3.21±0.62   | 2.56±0.31    | 25.30±1.39    | 18.23±3.01      |
|          |         | CF. MSE  | 14.74±6.09  | 32.02±37.74  | 138.65±12.33  | 607.62±241.21   |
|          | NADD    | Obs. MMD | 0.25±0.18   | 0.28±0.09    | 2.36±0.44     | 5.50±0.81       |
|          |         | Int. MMD | 1.88±0.23   | 4.40±0.52    | 12.54±0.89    | 16.34±1.72      |
|          |         | CF. MSE  | 1.36±0.14   | 8.58±0.77    | 57.40±4.23    | 24.61±7.05      |

Table 7: Mean±standard deviation of observational, interventional, and counterfactual queries of four different SCMs in nonlinear and nonadditive settings over 10 random initializations of the model and training data. The values are multiplied by 100 for clarity.

Figure 2: Top: Nonlinear setting (NLIN), Bottom: Nonadditive setting (NADD). Box plots of observational, interventional, and counterfactual queries of four different SCMs over 10 random initializations of the model and training data.

Figure 4: Example of a random graph used in Section 5, with exogenous noise nodes omitted for clarity.

Figure 5: Chain graph. Figure 6: Triangle graph. Figure 7: Diamond graph. Figure 8: Y graph.

Figure 9: Causal graphs used in our experiments in Appendix D.3.
Review 1:

Summary: This paper proposes a method for learning and inference of structural causal models based on diffusion models. The proposed algorithm for training works given the causal graph and observational data. Their trained models are able to answer causal queries, including observational, interventional, and counterfactual queries. Empirical evaluations demonstrate that their proposed methods are superior to state-of-the-art methods for causal queries. Their contributions also include a framework for the theoretical analysis of encoder-decoder-based structural causal models.

Strengths and Weaknesses:

Strengths:
* The positioning of the study and the contributions are clearly explained, along with many references.
* The effectiveness of the proposed method has been verified by both artificial and real-data experiments, and is sufficiently convincing.
* The proposed method is natural and generic and should be of interest to the machine learning community.

Weaknesses:
* It is not clear whether the theoretical analysis in Section 4 is applicable to the proposed diffusion-based approach in Section 3. For example, there seems to be no discussion of whether Assumptions 2 and 3 of Theorem 1 are valid in the diffusion model. Assumption 1 has also only been experimentally verified, and there is no theoretical evidence that the diffusion model satisfies this assumption.
* I could not discern the logical flow that led to the proposal in this paper. In particular, in dealing with structural causal models, I would like to know why the diffusion model was chosen among the various applicable encoder-decoder models, and what challenges it addresses that other models cannot. Since the theoretical analysis in Section 4 can be applied to arbitrary encoder-decoder models, I do not believe it supports the usefulness of the diffusion model. The fact that each node can be trained in parallel is also not limited to diffusion models.

Requested Changes: If the authors can address any of the points listed as weaknesses above, I would appreciate it. However, I am not strongly opposed to accepting this paper as it is.

Broader Impact Concerns: NA

==================================================

Review 2:

Summary: The paper proposes diffusion-based causal models (DCMs), which use diffusion models to approximate the functions in a structural causal model, given a DAG and observational data. The intuition is similar to the classic approach of using regression methods to approximate these functions, but instead uses more expressive and flexible diffusion models based on neural networks. DCMs allow answering interventional and counterfactual queries by constructing a Markov chain of samples from post-intervention distributions. The paper also offers bounds on the error of counterfactual estimates in more general encoder-decoder approaches.

Strengths and Weaknesses:

**Strengths**:
- good clarity of writing, especially after Section 1
- ample theoretical support along with concise proof-of-concept experiments
- fairly intuitive approach but with a more expressive class of models than previous papers

**Weaknesses**:
- Section 1 is less consistently well-written than the rest of the paper

Requested Changes: (2. is critical, the others are suggested)

1. Go carefully through the intro again---it seemed to have more mistakes and less clear/polished writing than other parts of the paper (though it was more clear to me the second time through, after understanding more about diffusion models). For example:
- "... between a Gaussian and a density." seems to be missing some words
- "... these are first error bounds ..." seems to be missing "the"
- "In contrast to our work, while Javaloy et al. (2023) focuses on modeling the whole causal graph as one model." Is the word "while" unnecessary, or is some other part of the sentence missing?

2. "Note that causal sufficiency is the minimal necessary set of assumptions for causal reasoning from observational data alone." Isn't this just one assumption and not a set? More importantly, this claim needs more explanation, context, or justification. Why doesn't something like latent causal discovery count as causal reasoning?

3. "For example, our DCM approach trains approximately seven times faster than CAREFL and nine times faster than VACA." This reads like a very general claim, but it is actually only supported by a single seed on a single ground truth. Weaken the claim or add more support.

Broader Impact Concerns: n/a

==================================================

Review 3:

Summary: This paper provides the first attempt to use diffusion models in answering causal queries. The proposed method takes a causal graph as input, models each node in the causal graph as a diffusion model, and learns how to sample from the data-generating distribution by using diffusion models.

Strengths and Weaknesses:

Strengths: The proposed method might have good potential in settings where sampling from the dataset is difficult, e.g., when a node of the causal graph is an image or text. The authors attempted to provide theoretical guarantees for the proposed method under some relatively strong assumptions.

Weaknesses: The paper could benefit from improved writing in the introduction to clarify the contribution and objective of the paper. The proposed method relies on strong assumptions about the underlying data-generating distributions, which might be unrealistic when the data type is complex (e.g., text/images). For simpler setups, e.g., when the nodes of the causal graph take numerical values, it is unclear whether the proposed method will outperform semi-parametric estimation methods. When the data-generating process is complex, it is unclear how the causal DAG could be provided/constructed in the first place. A better example is needed. The numerical results on the real-world data seem a bit marginal. Overall, I see the real-world potential use case of the proposed methods. However, this is hindered by the strong assumptions made by the methods. A strong real-world case study will be needed for the paper to be publishable. I am happy to update the score if the authors can either 1) update their numerical results with better benchmarks and a stronger case study to support the contribution claim in the introduction, or 2) revise the scope of the contribution of the paper to acknowledge the limitations of the proposed methods at the beginning of the paper. The idea of using diffusion models to sample from a DAG might be of interest to a broader audience.

Requested Changes:

Writing:

1. Please formally introduce the problem of causal queries in the introduction. In the preliminaries, instead of explaining counterfactual queries in words, please provide a mathematical definition: do you mean the expected outcome here?

2. The first sentence of the second paragraph of the introduction is very confusing: "we focus on approximating SCM given observational data and the underlying causal DAG". This sentence sounds like you are learning the underlying DAG from data, but you are not.
It is better to say something along the lines of: "given a causal DAG, we focus on approximating the data-generating distribution with diffusion models".

3. Fourth-to-last row of page 1, "which may be interpreted as a deterministic autoencoder model with latent variables": this is a causal paper; what do you mean by latent variables here? Do you mean unmeasured confounding? How does this relate to the assumption of causal sufficiency that you subsequently make in the paper?

4. Khemakhem et al. (2021), "their approach can be extended to more general DAGs": the proposed method scales exponentially in the number of variables in the DAG. This approach is in general not scalable, and this is an overstatement.

5. In the discussion of assumption 2, "assumption 2 is always satisfied under additive noise models": is this true? f here is independent of U, so it seems that it would not satisfy the second half of the assumption, where you assume that f is strictly increasing in U. "e.g., distinguishing between X := X_pa + U and X := X_pa - U": does this example contradict the claim that assumption 2 is always satisfied under additive noise models? The two U's here can be thought of as having flipped distributions.

Assumptions:

1. In the preliminaries, end of the second paragraph, "every SCM M entails a unique joint observational distribution satisfying the causal Markov assumption...": this seems to imply that the causal DAG is acyclic. Can you verify this and make it more explicit?

2. Assumption 2 in Theorem 1 is too strong: I understand that for diffusion models to work on this problem, you probably require some form of assumption 2. However, I do not see how this assumption is easily satisfied in practice. Can the authors comment on the comparison to semi-parametric estimation methods, which would not require this assumption? As long as I have a target parameter defined, I can estimate the data-generating distribution nonparametrically (here we can also leverage the structure of the causal graph), perform a plug-in estimation, and subsequently perform a debiasing step to get the asymptotically desired behavior.

3. In Theorem 2, assumptions are listed without interpretation. What does it mean in practice for Jf_x_pa to be positive definite?

Experiments:

1. ANM does not require the monotonicity assumption; can you verify whether your real-world data satisfies the monotonicity assumption listed in Assumption 2?

2. Can you provide some robustness check of your method when assumption 2 does not hold?

3. From the description of the real-world experiments, it is unclear to me what the format of the input data is and what the causal graph is. Is the graph simply CG->HG? The real-world experiment is not very convincing that this method can be useful in practice when compared with simpler benchmarks. A better case study is needed to illustrate the real-world use case of the proposed method.

4. Can the authors compare the proposed methods to statistical estimation methods? ANM is more commonly used as a discovery method than an estimation method. The other two benchmarks are not specialized in inference.

Broader Impact Concerns: no concerns

==================================================
# ADIR: Adaptive Diffusion for Image Reconstruction

![0_image_0.png](0_image_0.png)

Anonymous authors Paper under double-blind review

Figure 1: Super-resolution with scale factors 4 and 8, using Stable Diffusion (Rombach et al., 2022), Guided Diffusion (Dhariwal & Nichol, 2021), and our method ADIR. The adaptability of ADIR allows reconstructing finer details.

## Abstract

In recent years, denoising diffusion models have demonstrated outstanding image generation performance. The information on natural images captured by these models is useful for many image reconstruction applications, where the task is to restore a clean image from its degraded observations. In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the measurements. We then combine it with a novel approach to adapting pre-trained diffusion denoising networks to their input. We examine two adaptation strategies: the first uses only the degraded image, while the second, which we advocate, is performed using images that are "nearest neighbors" of the degraded image, retrieved from a diverse dataset with an off-the-shelf visual-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed 'adaptive diffusion for image reconstruction' (ADIR) approach achieves a significant improvement in image reconstruction tasks. Our code will be available online upon publication.

## 1 Introduction

Image reconstruction problems appear in a wide range of applications, where one would like to reconstruct an unknown clean image x ∈ R^n from its degraded version y ∈ R^m, which can be noisy, blurry, low-resolution, etc. The acquisition (forward) model of y in many important degradation settings can be formulated using the following linear model

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e}, \tag{1}$$

where A ∈ R^{m×n} is the measurement operator (blurring, masking, sub-sampling, etc.) and e ∈ R^m, e ∼ N(0, σ²I_m), is the measurement noise. Typically, just fitting the observation model is not sufficient for recovering x successfully. Thus, prior knowledge of the characteristics of x is needed. Over the past decade, many works suggested solving the inverse problem in equation 1 using a single execution of a deep neural network, which has been trained on pairs of clean images {x_i} and their degraded versions {y_i}, obtained by applying the forward model in equation 1 to {x_i} (Dong et al., 2015; Sun et al., 2015; Lim et al., 2017; Zhang et al., 2017a; Lugmayr et al., 2020; Liang et al., 2021). Yet, these approaches tend to overfit the observation model and perform poorly on setups that have not been considered in training, and several methods have been proposed to overcome this (Shocher et al., 2018; Tirer & Giryes, 2019; Hussein et al., 2020b; Ji et al., 2020; Wei et al., 2020; Wang et al., 2021; Zhang et al., 2021b; 2022). Tackling this limitation with dedicated training for each application is not only computationally inefficient but also often impractical, because the exact observation model may not be known before inference time. Several approaches, such as Deep Image Prior (Ulyanov et al., 2018), zero-shot super-resolution (Shocher et al., 2018), or GSURE-based test-time optimization (Abu-Hussein et al., 2022), rely solely on the observation image y.
They utilize the implicit bias of deep neural networks and gradient-based optimizers, as well as the self-recurrence of patterns in natural images, by training a neural model directly on the observation, and in this way reconstruct the original image. Although these methods are not limited to a family of observation models, they usually perform worse than data-driven methods, since they do not exploit the robust prior information that the unknown image x shares with external data that may contain images of the same kind. The alternative popular approach, which exploits external data while remaining flexible to the observation model, uses deep models for imposing only the prior. It typically uses pretrained deep denoisers (Zhang et al., 2017b; Arjomand Bigdeli et al., 2017; Tirer & Giryes, 2018; Zhang et al., 2021a) or generative models (Bora et al., 2017; Dhar et al., 2018; Hussein et al., 2020a) within the optimization scheme, where consistency of the reconstruction with the observation y is maintained by minimizing a data-fidelity term.

Recently, diffusion models (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021; Sohl-Dickstein et al., 2015; Ho et al., 2020) have shown remarkable capabilities in generating high-fidelity images and videos (Ho et al., 2022). These models are based on a Markov chain diffusion process performed on each training sample. They learn the reverse process, namely, the denoising operation between each two points in the chain. Sampling images via pretrained diffusion models is performed by starting with a pure white Gaussian noise image, followed by progressively sampling a less noisy image, given the previous one, until reaching a clean image after T iterations. Since diffusion models capture prior knowledge of the data, one may utilize them as deep priors/regularizers for inverse problems of the form of equation 1 (Song et al., 2021; Lugmayr et al., 2022; Avrahami et al., 2022b; Kawar et al., 2022a; Choi et al., 2021; Rombach et al., 2022).

In this work, we propose an Adaptive Diffusion framework for Image Reconstruction (ADIR). First, we devise a diffusion guidance sampling scheme that solves equation 1 while restricting the reconstruction of x to the range of a pretrained diffusion model. Our scheme is based on novel modifications to the guidance used in (Dhariwal & Nichol, 2021) (see Figure 2 and Section 3.2 for details). Then, we propose two techniques that use the observations y to adapt the diffusion network to patterns beneficial for recovering the unknown x. Adapting the model's parameters is based either directly on y or on K external images similar to y in some neural embedding space that is not sensitive to the degradation of y. These images may be retrieved from a diverse dataset, and the embedding can be calculated using an off-the-shelf encoder model for images such as CLIP (Radford et al., 2021). In this work, ADIR is mainly developed for image reconstruction tasks. Yet, we also showcase that the ADIR adaptation strategy can be employed for text-guided image editing. Note that for the latter, we just show the potential of our strategy and that it can be combined with existing editing techniques. We leave further exploration of the use of ADIR for editing to future work. The contribution of the ADIR framework is the proposal of an *adaptive* diffusion approach to inverse problems.
We evaluate it with two state-of-the-art diffusion models: Stable Diffusion (Rombach et al., 2022) and Guided Diffusion (Dhariwal & Nichol, 2021), and show that it outperforms existing methods in the super-resolution and deblurring tasks.

## 2 Related Work

**Diffusion models.** In recent years, many works have utilized diffusion models for image manipulation and reconstruction tasks (Rombach et al., 2022; Kawar et al., 2022b;a; Whang et al., 2022; Saharia et al., 2022b), where a denoising network is trained to learn the prior distribution of the data. At test time, some conditioning mechanism is combined with the learned prior to solve very challenging imaging tasks (Avrahami et al., 2022b;a; Chung et al., 2022a). Note that our novel adaptive diffusion ingredient can be incorporated with any conditional sampling scheme that is based on diffusion models.

![2_image_0.png](2_image_0.png)

Figure 2: Diagram of our proposed method ADIR (Adaptive Diffusion for Image Reconstruction) applied to the super-resolution task. Given a pretrained diffusion model (ϵθ(·), Σθ(·)) and a Low-Resolution (LR) image, we look for the K nearest-neighbor images of the LR image; then, using ADIR, we adapt the diffusion model and use it for reconstruction.

In (Whang et al., 2022; Saharia et al., 2022b), the problems of deblurring and super-resolution were considered, and a diffusion model was trained to perform each task, where instead of adding noise at each of its steps, a blur or downsampling is performed. In this way, the model learns to carry out the deblurring or super-resolution task directly. Notice that these models are trained for one specific task and cannot be used for the other as is. The closest works to ours are (Giannone et al., 2022; Sheynin et al., 2022; Kawar et al., 2022b). These very recent concurrent works consider the task of image editing and perform an adaptation of the used diffusion model using the provided input and external data. Yet, notice that none of these works considers the task of image reconstruction as we do here, or applies our proposed sampling scheme for this task.

**Image-Adaptive Reconstruction.** Adaptation of pretrained deep models, which serve as priors in inverse problems, to the unknown true x through its observations at hand was proposed in (Hussein et al., 2020a; Tirer & Giryes, 2019). These works improve the reconstruction performance by fine-tuning the parameters of pretrained deep denoisers (Tirer & Giryes, 2019) and GANs (Hussein et al., 2020a) via the observed image y, instead of keeping them fixed during inference time. The image-adaptive GAN (IAGAN) approach (Hussein et al., 2020a) has led to many follow-up works with different applications, e.g., (Bhadra et al., 2020; Pan et al., 2021; Roich et al., 2022; Nitzan et al., 2022). Recently, it has been shown that one may even fine-tune a masked autoencoder on the input data at test time for improving the adaptivity of classification neural networks to new domains (Gandelsman et al., 2022). In this paper we consider test-time adaptation of diffusion models for inverse problems. As far as we know, adaptation of diffusion models has not been proposed before. Furthermore, while existing works fine-tune the deep priors directly using y, we propose an improved strategy where the tuning is based on K external images similar to y that are automatically retrieved from an external dataset.

## 3 Method

We now turn to present our proposed approach. We start with a brief introduction to regular denoising diffusion models.
After that, we describe our proposed strategy for modifying the sampling scheme of diffusion models for the task of image reconstruction. Finally, we present our suggested adaptation scheme.

## 3.1 Denoising Diffusion Models

Denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are latent variable generative models, with latent variables x_1, x_2, ..., x_T ∈ R^n (the same dimensionality as the data x ∼ q_x). Given a training sample x_0 ∼ q_x, these models are based on constructing a diffusion process (forward process) of the variables x_{1:T} := x_1, x_2, ..., x_T as a Markov chain from x_0 to x_T of the form

$$q(\mathbf{x}_{1:T}|\mathbf{x}_{0}):=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}),\tag{2}$$

where q(x_t|x_{t−1}) := N(√(1−β_t) x_{t−1}, β_t I_n), and 0 < β_1 < ... < β_T = 1 is the diffusion variance schedule (hyperparameters of the model). Note that sampling x_t|x_0 can be done in a simplified way using the parametrization (Ho et al., 2020):

$$\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\boldsymbol{\epsilon},\;\;\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{n}),\tag{3}$$

where α_t := 1−β_t and ᾱ_t := ∏_{s=1}^{t} α_s. The goal of these models is to learn the distribution of the reverse chain from x_T to x_0, which is parameterized as the Markov chain

$$p_{\theta}(\mathbf{x}_{0:T}):=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\tag{4}$$

where p_θ(x_{t−1}|x_t) := N(µ_θ(x_t, t), Σ_θ(x_t, t)),

$$\mu_{\theta}(\mathbf{x}_{t},t):=\frac{1}{\sqrt{\alpha_{t}}}\Big(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t)\Big),\tag{5}$$

and θ denotes all the learnable parameters. Essentially, ϵ_θ(x_t, t) is an estimator for the noise in x_t (up to scaling). The parameters θ of the diffusion model (ϵ_θ(x_t, t), Σ_θ(x_t, t)) are optimized by minimizing the evidence lower bound (Sohl-Dickstein et al., 2015), a simplified score-matching loss (Ho et al., 2020; Song & Ermon, 2019), or a combination of both (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021). For example, the simplified loss involves the minimization of

$$\ell_{\mathrm{simple}}(\mathbf{x}_{0},\boldsymbol{\epsilon}_{\theta},t)=\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\boldsymbol{\epsilon},t)\|_{2}^{2}\tag{6}$$

w.r.t. θ in each training iteration, where x_0 is drawn from the training data, t is uniformly drawn from {1, ..., T}, and the noise ϵ ∼ N(0, I_n). Given a trained diffusion model (ϵ_θ(x_t, t), Σ_θ(x_t, t)), one may generate a sample x_0 from the learned data distribution p_θ by initializing x_T ∼ N(0, I_n) and running the reverse diffusion process by sampling

$$\mathbf{x}_{t-1}\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)),\tag{7}$$

where 0 < t ≤ T and µ_θ(x_t, t) is defined in equation 5. The class-guided sampling method that has been proposed in (Dhariwal & Nichol, 2021) modifies the sampling procedure in equation 7 by adding to the mean of the Gaussian a term that depends on the gradient of an offline-trained classifier, which has been trained using noisy images {x_t} for each t, and approximates the likelihood p_{c|x_t}, where c is the desired class. This procedure has been shown to improve the quality of the samples generated for the learned classes.
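For concreteness, the forward noising of equation 3 and one reverse step of equations 5 and 7 can be sketched in PyTorch as follows. This is a minimal sketch, not the models used in the paper: `eps_model` is a placeholder for a pretrained noise predictor, the linear β-schedule is an illustrative assumption, and Σ_θ(x_t, t) is simplified to β_t I rather than being learned.

```python
import torch

# Illustrative linear variance schedule beta_1 < ... < beta_T (a common choice;
# the actual schedule is a hyperparameter of the pretrained model).
T = 1000
beta = torch.linspace(1e-4, 0.02, T)
alpha = 1.0 - beta                          # alpha_t := 1 - beta_t
alpha_bar = torch.cumprod(alpha, dim=0)     # alpha_bar_t := prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Equation 3: sample x_t | x_0 in closed form (t is a 0-based index)."""
    a_bar = alpha_bar[t]
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

@torch.no_grad()
def p_sample(eps_model, x_t, t):
    """Equations 5 and 7: one ancestral sampling step x_t -> x_{t-1},
    with Sigma_theta fixed to beta_t * I as a simplification."""
    eps_hat = eps_model(x_t, t)
    mu = (x_t - (1.0 - alpha[t]) / (1.0 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    if t == 0:                              # last step: return the mean
        return mu
    return mu + beta[t].sqrt() * torch.randn_like(x_t)
```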
![4_image_0.png](4_image_0.png)

Figure 3: Text-based image editing comparison between GLIDE (full) (Nichol et al., 2021), Stable Diffusion (Rombach et al., 2022), and ADIR applied to the Stable Diffusion model. The images are taken from (Nichol et al., 2021), since their official high-resolution model was not publicly released. As can be seen, our method produces more realistic images in cases where Stable Diffusion was either inaccurate (brown hair instead of red) or produced artifacts.

## 3.2 Diffusion Based Image Reconstruction

We turn to extend the guidance method of (Dhariwal & Nichol, 2021) to image reconstruction. First, we generalize their framework to inverse problems of the form of equation 1. Namely, given the observed image y, we modify the guided reverse diffusion process to generate possible reconstructions of x that are associated with y rather than arbitrary samples of a certain class. Similar to (Dhariwal & Nichol, 2021), ideally, the guiding direction at iteration t should follow (the gradient of) the likelihood function p_{y|x_t}. The key difference between our framework and (Dhariwal & Nichol, 2021) is that we need to base our method on the specific degraded image y rather than on a classifier that has been trained for each noise level of {x_t}. However, only the likelihood function p_{y|x_0} is known, i.e., of the clean image x_0 that is available only at the end of the procedure, and not for every 1 ≤ t ≤ T. To overcome this issue, we propose a surrogate for the intermediate likelihood functions p_{y|x_t}. Our relaxation resembles the one in a recent concurrent work (Chung et al., 2022b). Yet, their sampling scheme is significantly different and has no adaptation ingredient. Similar to (Dhariwal & Nichol, 2021), we guide the diffusion progression using the log-likelihood gradient. Formally, we are interested in sampling from the posterior

$$p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{y})\propto p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1})\,p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t}),\tag{8}$$

where p_{y|x_t}(·|x_t) is the distribution of y conditioned on x_t, and p_θ(x_t|x_{t+1}) = N(µ_θ(x_{t+1}, t+1), Σ_θ(x_{t+1}, t+1)) is the learned diffusion prior. For brevity, we omit the arguments of µ_θ and Σ_θ in the rest of this subsection. Under the assumption that the likelihood log p_{y|x_t}(y|·) has low curvature compared to Σ_θ^{-1} (Dhariwal & Nichol, 2021), the following Taylor expansion around x_t = µ_θ is valid

$$\log p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t})\approx\log p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t})\big|_{\mathbf{x}_{t}=\mu_{\theta}}+(\mathbf{x}_{t}-\mu_{\theta})^{\top}\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t})\big|_{\mathbf{x}_{t}=\mu_{\theta}}=(\mathbf{x}_{t}-\mu_{\theta})^{\top}\mathbf{g}+C_{1},\tag{9}$$

where g = ∇_{x_t} log p_{y|x_t}(y|x_t)|_{x_t=µ_θ}, and C_1 is a constant that does not depend on x_t. Then, similar to the computation in (Dhariwal & Nichol, 2021), we can use equation 9 to express the posterior in equation 8, i.e.,

$$\log\big(p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1})\,p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t})\big)\approx C_{2}+\log p(\mathbf{z}),\tag{10}$$

where z ∼ N(µ_θ + Σ_θ g, Σ_θ), and C_2 is some constant that does not depend on x_t. Therefore, for conditioning the diffusion reverse process on y, one needs to evaluate the derivative g of a (different) log-likelihood function log p_{y|x_t}(y|·) at each iteration t. Observe that we know the exact log-likelihood function for t = 0.
Since the noise e in equation 1 is white Gaussian with variance σ², we have the following distribution

$$p_{\mathbf{y}|\mathbf{x}}(\mathbf{y}|\mathbf{x})=\mathcal{N}(\mathbf{A}\mathbf{x},\sigma^{2}\mathbf{I}_{m})\propto e^{-\frac{1}{2\sigma^{2}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}}.\tag{11}$$

In the denoising diffusion setup, y is related to x_0 through the observation model in equation 1. Therefore,

$$\log p_{\mathbf{y}|\mathbf{x}_{0}}(\mathbf{y}|\mathbf{x}_{0})\propto-\|\mathbf{A}\mathbf{x}_{0}-\mathbf{y}\|_{2}^{2}.\tag{12}$$

However, we do not have tractable expressions for the likelihood functions {p_{y|x_t}(y|·)}_{t=1}^{T}. Therefore, motivated by the expression above, we propose the following approximation

$$\log p_{\mathbf{y}|\mathbf{x}_{t}}(\mathbf{y}|\mathbf{x}_{t})\approx\log p_{\mathbf{y}|\mathbf{x}_{0}}(\mathbf{y}|\hat{\mathbf{x}}_{0}(\mathbf{x}_{t})),\tag{13}$$

where

$$\hat{\mathbf{x}}_{0}(\mathbf{x}_{t}):=\big(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t)\big)/\sqrt{\bar{\alpha}_{t}}\tag{14}$$

is an estimation of x_0 from x_t, which is based on the (stochastic) relation of x_t and x_0 in equation 3, with the random noise ϵ replaced by its estimation ϵ_θ(x_t, t). From equation 11 and equation 13 it follows that g in equation 9 can be approximated at each iteration t by evaluating (e.g., via automatic differentiation)

$$\mathbf{g}\approx-\nabla_{\mathbf{x}_{t}}\|\mathbf{A}\hat{\mathbf{x}}_{0}(\mathbf{x}_{t})-\mathbf{y}\|_{2}^{2}\big|_{\mathbf{x}_{t}=\mu_{\theta}}.\tag{15}$$

Note that existing methods (Chung et al., 2022b; Kawar et al., 2022a; Song et al., 2021) either use a term that resembles equation 15 with the naive approximation x̂_0(x_t) = x_t (Kawar et al., 2022a; Song et al., 2021), or significantly modify equation 15 before computing it via the automatic differentiation framework (Chung et al., 2022b) (we observed that trying to compute the exact equation 15 is unstable due to numerical issues). For example, in the official implementation of (Chung et al., 2022b), which uses automatic differentiation, the squaring of the norm in equation 15 is dropped even though this is not stated in their paper (otherwise, the reconstruction suffers from significant artifacts). In our case, we use the following relaxation to overcome the stability issue of using equation 15 directly.
For a pretrained denoiser predicting ϵ_θ from x_t and 0 < t ≤ T we have

$$\begin{aligned}\|\mathbf{A}\hat{\mathbf{x}}_{0}(\mathbf{x}_{t})-\mathbf{y}\|_{2}^{2}&=\|\mathbf{A}(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\boldsymbol{\epsilon}_{\theta})/\sqrt{\bar{\alpha}_{t}}-\mathbf{y}\|_{2}^{2}\\&\propto\|\mathbf{A}\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{A}\boldsymbol{\epsilon}_{\theta}-\sqrt{\bar{\alpha}_{t}}\,\mathbf{y}\|_{2}^{2}\\&=\|\mathbf{A}\mathbf{x}_{t}-\sqrt{\bar{\alpha}_{t}}\,\mathbf{y}-\sqrt{1-\bar{\alpha}_{t}}\,\mathbf{A}\boldsymbol{\epsilon}_{\theta}\|_{2}^{2}\\&=\|\mathbf{A}\mathbf{x}_{t}-\mathbf{y}_{t}\|_{2}^{2},\end{aligned}\tag{16}$$

where y_t := √ᾱ_t y + √(1−ᾱ_t) Aϵ_θ. We further assume that ϵ_θ is independent of x_t, which we found to be sufficient in our use-cases. Consequently, we propose to replace the expression for g (the guiding likelihood direction at each iteration t) that is given in equation 15 with a surrogate obtained by evaluating the derivative of equation 16 w.r.t. x_t, which is given by

$$\mathbf{g}\approx-2\mathbf{A}^{\top}(\mathbf{A}\mathbf{x}_{t}-\mathbf{y}_{t})\big|_{\mathbf{x}_{t}=\mu_{\theta}}\tag{17}$$

and can be used for sampling the posterior distribution as detailed in Algorithm 1.

**Algorithm 1** Proposed GD sampling for image reconstruction, given a diffusion model (ϵ_θ(·), Σ_θ(·)) and a guidance scale s

Require: (ϵ_θ(·), Σ_θ(·)), y, s
1: x_T ← sample from N(0, I_n)
2: for t from T to 1 do
3:   ϵ̂, Σ̂ ← ϵ_θ(x_t, t), Σ_θ(x_t, t)
4:   µ̂ ← (1/√α_t)(x_t − ((1−α_t)/√(1−ᾱ_t)) ϵ̂)
5:   y_t ← √ᾱ_t y + √(1−ᾱ_t) Aϵ̂
6:   g ← −2Aᵀ(Aµ̂ − y_t)
7:   x_{t−1} ← sample from N(µ̂ + sΣ̂g, Σ̂)
8: end for
9: return x_0

## 3.3 Adaptive Diffusion

Having defined the guided inverse diffusion flow for image reconstruction, we turn to discuss how one may adapt a given diffusion model to a given degraded image y as defined in equation 1. Assume we have a pretrained diffusion model (ϵ_θ(·), Σ_θ(·)); then the adaptation scheme is defined by the following minimization problem

$$\hat{\theta}=\arg\min_{\theta}\sum_{t=1}^{T}\ell_{\rm simple}(\mathbf{y},\boldsymbol{\epsilon}_{\theta},t)\tag{18}$$

with ℓ_simple defined in equation 6, which can be solved efficiently using stochastic gradient descent, where at each iteration the gradient step is performed on a single term of the sum above, for 0 < t ≤ T chosen randomly. Although the original work (Dhariwal & Nichol, 2021) trains the network to predict the posterior variance Σ_θ, in our case we did not see any benefit of including it in the adaptation loss. Adapting the denoising network to the measurement image y allows it to learn cross-scale features recurring in the image, which is a well-studied property of natural images (Ulyanov et al., 2018; Mataev et al., 2019; Shaham et al., 2019; Michaeli & Irani, 2014). Such an approach has been proven to be very helpful in reconstruction-based algorithms (Hussein et al., 2020a; Tirer & Giryes, 2019). However, in some cases, where the image does not satisfy the assumption of recurring patterns across scales, this approach can lose some of the sharpness captured in training. Therefore, in this work we extend the approach to few-shot fine-tuning adaptation, where instead of solving equation 18 w.r.t. y, we propose an algorithm for retrieving K images similar to x from a large dataset of diverse images, using an off-the-shelf embedding distance.
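To illustrate the adaptation objective, here is a minimal sketch of stochastically minimizing equation 18 (or its K-NN variant in equation 20 below): at each step we draw an image, a random timestep, and noise, and take a gradient step on ℓ_simple. The names `eps_model`, the image-selection logic, and the optimizer settings are placeholder assumptions; the LoRA parameterization and LPIPS regularization we actually use are described in Section 4.

```python
import random
import torch

def adapt(eps_model, images, T, alpha_bar, steps=400, lr=1e-4):
    """Stochastic minimization of equations 18/20: fine-tune the denoiser
    on the observation y (or on the retrieved K-NN images)."""
    opt = torch.optim.Adam(eps_model.parameters(), lr=lr)
    for _ in range(steps):
        x0 = random.choice(images)         # a crop of y, or one of the z_k
        t = random.randrange(1, T)         # random timestep 0 < t <= T
        eps = torch.randn_like(x0)
        # Forward noising of equation 3
        x_t = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
        # ell_simple of equation 6
        loss = ((eps - eps_model(x_t, t)) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return eps_model
```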
Let (ξ_v(·), ξ_ℓ(·)) be some off-the-shelf multi-modal encoder trained on visual-language modalities, e.g., CLIP (Radford et al., 2021), BLIP (Li et al., 2022b), or CyCLIP (Goel et al., 2022). Let ξ_v(·) and ξ_ℓ(·) be the visual and language encoders, respectively. Then, given a large diverse dataset of natural images, we propose to retrieve K images, denoted by {z_k}_{k=1}^{K}, with minimal embedding distance from y. Formally, let D_IA be an arbitrary external dataset; then

$$\{\mathbf{z}_{k}\}_{k=1}^{K}=\big\{\mathbf{z}_{1},...,\mathbf{z}_{K}\,\big|\,\phi_{\xi}(\mathbf{z}_{1},\mathbf{y})\leq...\leq\phi_{\xi}(\mathbf{z}_{K},\mathbf{y})\leq\phi_{\xi}(\mathbf{z},\mathbf{y}),\ \forall\mathbf{z}\in D_{IA}\setminus\{\mathbf{z}_{1},...,\mathbf{z}_{K}\}\big\},\tag{19}$$

where ϕ_ξ(a, b) = 2 arcsin(0.5∥ξ(a) − ξ(b)∥_2) is the spherical distance and ξ can be either the visual or the language encoder, depending on the provided conditioning of the application. After retrieving the K-NN images {z_k}_{k=1}^{K} from D_IA, we fine-tune the diffusion model on them, which adapts the denoising network to the context of y. Specifically, we modify the denoiser parameters θ by minimizing a loss similar to equation 18, but with {z_k}_{k=1}^{K} rather than y. Formally, we stochastically solve the following minimization problem

$$\hat{\theta}=\arg\min_{\theta}\sum_{k=1}^{K}\sum_{t=1}^{T}\ell_{\text{simple}}(\mathbf{z}_{k},\boldsymbol{\epsilon}_{\theta},t).\tag{20}$$

We refer to this K-NN based adaptation technique as ADIR (Adaptive Diffusion for Image Reconstruction), which is described schematically in Figure 2.

| Method  | IA Iter. | LR   | NN imag. | s  | diff. steps |
|---------|----------|------|----------|----|-------------|
| ADIR-GD | 400      | 10⁻⁴ | 20       | 10 | 1000        |
| ADIR-SD | 400      | 10⁻⁴ | 50       | -  | 500         |

Table 1: Configurations used for ADIR.

## 4 Experiments

We evaluate our method on two state-of-the-art diffusion models, Guided Diffusion (GD) (Dhariwal & Nichol, 2021) and Stable Diffusion (SD) (Rombach et al., 2022), showing results for super-resolution and deblurring. In addition, we show how adaptive diffusion can be used for the task of text-based editing using Stable Diffusion. Guided Diffusion (Dhariwal & Nichol, 2021) provides several models with a conditioning mechanism built into the denoiser. However, in our case, we perform the conditioning using the log-likelihood term. Therefore, we used the unconditional model that was trained on ImageNet (Russakovsky et al., 2015) and produces images of size 256 × 256. In the original work, the conditioning for generating an image from an arbitrary class was performed using a classifier trained to classify the noisy sample x_t directly, where the log-likelihood derivative can be obtained by deriving the corresponding logits w.r.t. x_t directly. In our setup, the conditioning is performed using g in equation 17, where A is defined by the reconstruction task, which we specify in the sequel. In addition to GD, we demonstrate the improvement that can be achieved using Stable Diffusion (Rombach et al., 2022), where we use its publicly available super-resolution and text-based editing models. Instead of training the denoiser on the natural image domain directly, they suggest using a Variational Auto-Encoder (VAE) and train the denoiser on a latent representation of the data. Note that the lower dimensionality of the latent enables the network to be trained at higher resolutions.
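Before turning to the individual tasks, note that the retrieval step of equation 19 amounts to a nearest-neighbor search under the spherical distance. A minimal sketch follows, where `embed` is a placeholder for the chosen CLIP-style encoder (ξ_v or ξ_ℓ) and the embeddings are assumed to be L2-normalized:

```python
import torch
import torch.nn.functional as F

def spherical_distance(u, v):
    """phi_xi of equation 19, for unit-norm embeddings u and v."""
    return 2.0 * torch.asin(0.5 * torch.linalg.norm(u - v, dim=-1))

def knn_retrieve(embed, y, dataset, k):
    """Equation 19: return the K dataset images closest to y in embedding space."""
    e_y = F.normalize(embed(y), dim=-1)
    dists = []
    for z in dataset:
        e_z = F.normalize(embed(z), dim=-1)
        dists.append(spherical_distance(e_y, e_z))
    order = torch.argsort(torch.stack(dists).squeeze())
    return [dataset[i] for i in order[:k].tolist()]
```

In practice, one would precompute and index the dataset embeddings rather than loop over images, but the sketch conveys the criterion.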
In all cases, we adapt the diffusion models in the image-adaptive scheme presented in Section 3.3, using the Google Open Images dataset (Kuznetsova et al., 2020) as the external dataset D_IA, from which we retrieve K images, where K = 20 for GD and K = 50 for SD (several examples of retrieved images are shown in the sup. mat.). In practice, we found that regularizing the objective loss with an LPIPS (Zhang et al., 2018) term yields better results; therefore, we add it with weight 0.1. For optimizing the network parameters we use LoRA (Hu et al., 2021) with rank r = 16 and scaling α = 8 for all the convolution layers, which is then optimized using Adam (Kingma & Ba, 2014). The specific implementation configurations are detailed in Table 1. We run all of our experiments on an NVIDIA RTX A6000 48GB card, which allows us to fine-tune the models by randomly sampling a batch of 6 images from {z_k}_{k=1}^{K}, where in each iteration we use the same 0 < t ≤ T for all images in the batch.

## 4.1 Super Resolution

In the Super-Resolution (SR) task, one would like to reconstruct a high-resolution image x from its low-resolution version y, where in this case A represents an anti-aliasing filter followed by sub-sampling with stride γ, which we refer to as the scaling factor. In our use-case we employ a bicubic anti-aliasing filter and assume e = 0, similarly to most SR works. Here we apply our approach to two different diffusion-based SR methods: Stable Diffusion (Rombach et al., 2022), and the approach of Section 3.2 combined with the unconditional diffusion model from (Dhariwal & Nichol, 2021). In Stable Diffusion, the low-resolution image y is upscaled from 256 × 256 to 1024 × 1024, while in Guided Diffusion we use the unconditional model trained on 256 × 256 images to upscale y from 128 × 128 to 512 × 512 resolution.

| Method      | SRx4             | SRx8             |
|-------------|------------------|------------------|
| IPT         | 0.237/5.02/64.19 | -                |
| USRNet      | 0.249/4.38/45.76 | -                |
| SwinIR      | 0.232/4.98/64.19 | 0.424/4.54/48.04 |
| SRDiff      | 0.135/4.76/60.87 | -                |
| Real-ESRGAN | 0.317/5.02/69.42 | -                |
| DeepRED     | 0.475/3.20/22.77 | 0.591/2.99/17.43 |
| DDRM        | 0.297/3.42/28.96 | 0.572/3.13/20.68 |
| GD          | 0.325/4.88/64.63 | 0.365/4.36/53.99 |
| ADIR        | 0.335/5.06/66.33 | 0.347/4.41/55.89 |

Table 2: x4 super-resolution results (128² → 512²) and x8 (64² → 512²) for the unconditional guided diffusion model (Dhariwal & Nichol, 2021). The results are averaged over the first 50 images of the DIV2K validation set (Agustsson & Timofte, 2017). We compare ADIR to IPT (Chen et al., 2021), USRNet (Zhang et al., 2020), SwinIR (Liang et al., 2021), SRDiff (Li et al., 2022a), Real-ESRGAN (Wang et al., 2021), DeepRED (Mataev et al., 2019), the baseline approach presented in Section 3.2 (without adaptation), and DDRM (Kawar et al., 2022a). We use the traditional LPIPS (Zhang et al., 2018) as well as the state-of-the-art no-reference perceptual measures AVA-MUSIQ and KonIQ-MUSIQ (Ke et al., 2021) for evaluation (LPIPS/MUSIQ-AVA/MUSIQ-KONIQ). The best results are in bold black, and the second best is highlighted in blue.

| Method           | SRx4             |
|------------------|------------------|
| IPT              | 0.221/4.90/65.38 |
| USRNet           | 0.234/4.51/59.10 |
| SwinIR           | 0.218/4.88/65.08 |
| SRDiff           | 0.237/4.76/62.64 |
| DeepRED          | 0.405/3.25/25.26 |
| Real-ESRGAN      | 0.305/4.93/69.11 |
| Stable Diffusion | 0.331/5.07/69.18 |
| ADIR (SD)        | 0.213/5.51/72.56 |
Table 3: x4 super-resolution (256² → 1024²) using Stable Diffusion SR (Rombach et al., 2022). Similar to Table 2, the results are averaged over the first 50 images of the DIV2K validation set (Agustsson & Timofte, 2017). We compare ADIR to IPT (Chen et al., 2021), USRNet (Zhang et al., 2020), SwinIR (Liang et al., 2021), SRDiff (Li et al., 2022a), DeepRED (Mataev et al., 2019), Stable Diffusion (without adaptation), and Real-ESRGAN (Wang et al., 2021). We use LPIPS (Zhang et al., 2018) as well as AVA-MUSIQ and KonIQ-MUSIQ (Ke et al., 2021) for evaluation (LPIPS/MUSIQ-AVA/MUSIQ-KONIQ). The best results are in bold black, and the second best is highlighted in blue.

| Method           | Box (256)        | Box (512)        | Gauss (256)      |
|------------------|------------------|------------------|------------------|
| M3SNet           | 0.477/3.13/26.16 | 0.468/2.93/47.42 | 0.481/2.75/31.13 |
| DeepRED          | 0.561/3.61/22.12 | 0.557/3.57/27.59 | 0.572/3.59/19.13 |
| Restormer        | 0.341/3.74/40.11 | 0.377/4.67/55.03 | 0.518/3.61/36.70 |
| MPRNet           | 0.395/3.08/26.90 | 0.429/3.63/37.87 | 0.491/3.01/20.96 |
| Guided Diffusion | 0.423/4.20/49.19 | 0.411/4.81/58.66 | 0.424/4.01/48.11 |
| ADIR (GD)        | 0.394/4.31/55.78 | 0.312/4.77/60.13 | 0.415/4.19/51.80 |

Table 4: Deblurring results with noise level 10 for the unconditional guided diffusion model (Dhariwal & Nichol, 2021). Similar to SR in Table 2, the results are averaged over the first 50 images of the DIV2K validation set (Agustsson & Timofte, 2017). We compare our method to M3SNet (Gao et al., 2023), DeepRED (Mataev et al., 2019), Restormer (Zamir et al., 2022), MPRNet (Mehri et al., 2021), and the baseline presented in Section 3.2 (without adaptation). We use LPIPS (Zhang et al., 2018) as well as AVA-MUSIQ and KonIQ-MUSIQ (Ke et al., 2021) for evaluation (LPIPS/MUSIQ-AVA/MUSIQ-KONIQ).

When adapting Stable Diffusion, we downsample random crops of the K-NN images using A, which we encode using the VAE and plug into the network's conditioning mechanism. We fine-tune both models using random crops of the K-NN images, to which we then add noise using the scheduler provided by each model. The perceptual preference of generative-model-based image reconstruction has been observed in many works (Hussein et al., 2020a; Bora et al., 2017; Blau & Michaeli, 2018). Therefore, we chose perception-based measures to evaluate the performance of our method. Specifically, we use the state-of-the-art AVA-MUSIQ and KonIQ-MUSIQ perceptual quality assessment measures (Ke et al., 2021).

| DDRM         | Guided Diffusion (GD) | ADIR (GD)    |
|--------------|-----------------------|--------------|
| 4.012/53.458 | 4.195/56.044          | 4.214/58.679 |

Table 5: Image colorization for the unconditional guided diffusion model (Dhariwal & Nichol, 2021). The results are averaged over the first 50 images of the DIV2K validation set (Agustsson & Timofte, 2017). We compare ADIR to the baseline presented in Section 3.2 (without adaptation) and DDRM (Kawar et al., 2022a). We use AVA-MUSIQ and KonIQ-MUSIQ (Ke et al., 2021) for evaluation (MUSIQ-AVA/MUSIQ-KONIQ). The best results are in bold black, and the second best is highlighted in blue.

We report our results using the two measures averaged over the first 50 validation images of the DIV2K (Agustsson & Timofte, 2017) dataset.
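For reference, the reference-based part of this evaluation protocol could be computed with the `lpips` package, as in the following sketch; the pairing of reconstructions with ground-truth images is a placeholder, and the MUSIQ scores come from separate pretrained quality models that are omitted here:

```python
import lpips
import torch

# LPIPS with the AlexNet backbone, as in (Zhang et al., 2018).
loss_fn = lpips.LPIPS(net='alex')

def mean_lpips(pairs):
    """Average LPIPS over (reconstruction, ground-truth) pairs, each a
    tensor scaled to [-1, 1] with shape (1, 3, H, W)."""
    with torch.no_grad():
        scores = [loss_fn(x_hat, x_gt).item() for x_hat, x_gt in pairs]
    return sum(scores) / len(scores)
```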
As can be seen in Tables 2 and 3, our method significantly outperforms both the Stable Diffusion and the GD-based reconstruction approaches. We compare our super-resolution (SR) results to Stable Diffusion SR and Guided Diffusion without the adaptation component. Additionally, we benchmark our method against several other state-of-the-art techniques: IPT (Chen et al., 2021), USRNet (Zhang et al., 2020), SwinIR (Liang et al., 2021), SRDiff (Li et al., 2022a), DeepRED (Mataev et al., 2019), Stable Diffusion (without adaptation), and Real-ESRGAN (Wang et al., 2021). As can be seen from the results, our method outperforms or shows competitive results compared to the other approaches. It is worth noting that because ADIR is a generative-prior-based method, it targets the perceptual quality aspect more than the distortion aspect; a fact observed by comparing ADIR to task-specific approaches using a reference-based measure (LPIPS). Figures 1 and 7 present qualitative results. Note that our method achieves superior restoration quality. In some cases it restores even fine details that were blurred in the acquisition of the GT image.

## 4.2 Deblurring

In deblurring, y is obtained by applying a blur filter (uniform blur of size 5 × 5 in our case) on x, followed by adding measurement noise e ∼ N(0, σ²I_n), where in our setting σ = 10. We apply the approach proposed in Section 3.2 with the Guided Diffusion unconditional model (Dhariwal & Nichol, 2021) to solve the task. In this case, A can be implemented by applying the blur kernel on a given image. As a baseline, we use the unconditional diffusion model provided by GD (Dhariwal & Nichol, 2021), which was trained on images of size 256 × 256. Yet, in our tests, we solve the deblurring task on images of size 256 × 256 as well as 512 × 512, which emphasizes the remarkable benefit of the adaptation, as it allows the model to generalize to resolutions not seen during training. Similar to SR, in Table 4 we report the KonIQ-MUSIQ and AVA-MUSIQ (Ke et al., 2021) measures, averaged over the first 50 DIV2K validation images (Agustsson & Timofte, 2017), where we compare our approach to the guided diffusion reconstruction without image adaptation. Visual comparisons are also available in Figure 4, where a very significant improvement can be seen in both robustness to noise and reconstruction of details.

## 4.3 Colorization

In colorization, y is obtained by averaging the colors of x using an RGB-to-gray transform. Similar to deblurring, we apply the approach proposed in Section 3.2 to solve the task. In this case, A can be implemented by averaging the color dimension of x, while Aᵀ can simply be viewed as a replication along the color dimension. We use the unconditional diffusion model provided by GD (Dhariwal & Nichol, 2021) as a baseline for coloring 256 × 256 images. A visual comparison of the results can be seen in Figure 6. We report the average MUSIQ (Ke et al., 2021) perceptual measures for this case, as shown in Table 5. Note that we do not report LPIPS since there are many valid colorization solutions, and therefore the reconstructed image may differ a lot from the ground truth. Thus, we focus on the no-reference perceptual measures for the colorization task.

![10_image_0.png](10_image_0.png)

Figure 4: Image deblurring using the Guided Diffusion approach from Section 3.2 and ADIR, using the unconditional model from (Dhariwal & Nichol, 2021). The degradation is performed using a 5 × 5 uniform blur filter with additive Gaussian noise of level 10. Note the better quality of our method.
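To make the guidance concrete for such tasks, below is a hedged sketch of one step of Algorithm 1 instantiated with the colorization operators just described (A averages the RGB channels and Aᵀ replicates the gray channel). As in the earlier sketch, `eps_model` is a placeholder and Σ_θ is simplified to β_t I; the same structure applies to deblurring and SR by swapping A and Aᵀ.

```python
import torch

def A_gray(x):
    """A: average the RGB channels (colorization forward model)."""
    return x.mean(dim=1, keepdim=True)

def At_gray(y):
    """A^T: replicate the gray channel back to RGB."""
    return y.repeat(1, 3, 1, 1)

@torch.no_grad()
def guided_step(eps_model, x_t, y, t, s, alpha, alpha_bar, beta):
    """One step of Algorithm 1 with Sigma_theta fixed to beta_t * I."""
    eps_hat = eps_model(x_t, t)
    # Unconditional mean of equation 5
    mu = (x_t - (1.0 - alpha[t]) / (1.0 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    # y_t as in equation 16, built from the degraded observation y
    y_t = alpha_bar[t].sqrt() * y + (1.0 - alpha_bar[t]).sqrt() * A_gray(eps_hat)
    # Guidance direction of equation 17, evaluated at mu
    g = -2.0 * At_gray(A_gray(mu) - y_t)
    if t == 0:
        return mu + s * beta[t] * g
    return mu + s * beta[t] * g + beta[t].sqrt() * torch.randn_like(x_t)
```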
## 4.4 Text-Guided Editing

Text-guided image editing is the task of completing a masked region of x according to a prompt provided by the user. In this case, the diffusion model needs to predict objects and textures corresponding to the provided prompt; therefore, we chose to adapt the network on {z_k}_{k=1}^{K} retrieved using the text encoder, i.e., by solving equation 19 using ξ_ℓ. For evaluating our method for this application, we use the inpainting model of Stable Diffusion (Rombach et al., 2022), where y is encoded and concatenated with the mask resized to the latent dimension, and both are then fed to the denoising network. When adapting the network, we follow the training scheme of Stable Diffusion: we use random masks and the classifier-free conditioning approach (Ho & Salimans, 2022), where the text embedding is randomly chosen to be either the encoded prompt or the embedding of an empty prompt. Notice that we cannot compare to (Giannone et al., 2022; Sheynin et al., 2022; Kawar et al., 2022b), as there is no code available for them. For some of them, we do not even have access to the diffusion model that they adapt (Saharia et al., 2022a). Note though that our goal is not to show state-of-the-art editing results but rather to show the potential contribution of ADIR to text-guided editing. As it is a general framework, it may also be used with other existing editing techniques in order to improve them.

Figure 3 presents the editing results and compares them to both Stable Diffusion and GLIDE. GLIDE is the basis of the popular DALL-E 2 model. The images of GLIDE are taken from its paper. We use ADIR with Stable Diffusion and optimize both using the same seed. Since Stable Diffusion was trained using a lossy latent representation with smaller dimensionality than the data, it is clear that GLIDE can achieve better results. However, because our method adapts the network to a specific scenario, it enables the model to produce cleaner and more accurate generations, as can be seen in Figure 3. In the first image we see that Stable Diffusion adds an object that does not blend well and has artifacts, while when combined with our approach the quality improves significantly. Similarly, in the second image we see that Stable Diffusion produces an inaccurate edit, where it adds brown hair instead of red hair. This is again improved by our adaptation method.

**Limitation.** One limitation of our approach is that, as is the case with all diffusion models, there is randomness in the generation process. Therefore, the quality of the output may depend on the random seed being used. For a fair comparison, we used the same seed both for ADIR and the baseline. In the appendix, we provide more examples with different random seeds. We still find that when we compare our approach and the baseline with the same seed, we consistently get an improvement. Another limitation of ADIR is that it works in a sequential fashion, i.e., we first look for the K NN images and then fine-tune the denoiser network on these images; therefore, an additional run-time is added to the standard diffusion flow.

![11_image_0.png](11_image_0.png)

Figure 5: Text-based editing comparison between Stable Diffusion and ADIR, using the prompt "Africa" for two different seeds. Note that Stable Diffusion adds partial animals while ADIR completes the scene more naturally.
Also, in this work we assume that the observation operator A is known (non-blind setting), while in many real-world applications it is usually inaccessible. As a result, one needs to run the guidance scheme (Section 3.2) with an estimated version of A, which is suboptimal.

## 5 Conclusion

We have presented the Adaptive Diffusion for Image Reconstruction (ADIR) method, which improves the reconstruction results in several imaging tasks using off-the-shelf diffusion models. We have demonstrated how our adaptation can significantly improve existing state-of-the-art methods, e.g., Stable Diffusion for super-resolution, where the exploitation of external data with the same context as y, combined with our adaptation scheme, leads to a significant improvement. Specifically, the produced images are sharper and, in some cases, show fine details that were blurred in the acquisition of the ground-truth image. Importantly, note that our novel adaptive diffusion ingredient can be incorporated into any conditional sampling scheme that is based on diffusion models, beyond those examined in this paper. One such possible direction is integrating our method with advanced diffusion-model-based editing techniques (Meng et al., 2022; Kim et al., 2022; Mokady et al., 2023; Bar-Tal et al., 2023; Molad et al., 2023; Wei et al., 2023; Huang et al., 2023; Qi et al., 2023; Liu et al., 2023). We believe that our proposed novel concept can be a useful tool for improving diffusion-based reconstruction and editing.

![12_image_0.png](12_image_0.png)

Figure 6: Image colorization results comparison between DDRM (Kawar et al., 2022a), the Guided Diffusion approach proposed in Section 3.2, and our adaptive approach ADIR. As can be seen, adapting the denoiser network to the given image can improve the results significantly.

## References

Shady Abu-Hussein, Tom Tirer, Se Young Chun, Yonina C Eldar, and Raja Giryes. Image restoration by deep projected gsure. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 3602–3611, 2022.

Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 126–135, 2017.

Siavash Arjomand Bigdeli, Matthias Zwicker, Paolo Favaro, and Meiguang Jin. Deep mean-shift priors for image restoration. *Advances in Neural Information Processing Systems*, 30, 2017.

Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. *arXiv preprint arXiv:2206.02779*, 2022a.

Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18208–18218, 2022b.

Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. Multidiffusion: Fusing diffusion paths for controlled image generation. In *ICML*, 2023.

Sayantan Bhadra, Weimin Zhou, and Mark A Anastasio. Medical image reconstruction with image-adaptive priors learned by use of generative adversarial networks. In *Medical Imaging 2020: Physics of Medical Imaging*, volume 11312, pp. 206–213. SPIE, 2020.

Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6228–6237, 2018.

Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generative models. In Proceedings of the International Conference on Machine Learning (ICML), pp. 537–546. JMLR. org, 2017.
Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In *Proceedings of the IEEE/CVF conference on computer* vision and pattern recognition, pp. 12299–12310, 2021. Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. In *2021 IEEE/CVF International Conference on Computer Vision* (ICCV), pp. 14347–14356. IEEE, 2021. Hyungjin Chung, Eun Sun Lee, and Jong Chul Ye. Mr image denoising and super-resolution using regularized reverse diffusion. *arXiv preprint arXiv:2203.12621*, 2022a. Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. *arXiv preprint arXiv:2206.00941*, 2022b. Manik Dhar, Aditya Grover, and Stefano Ermon. Modeling sparse deviations for compressed sensing using generative models. In *International Conference on Machine Learning*, pp. 1214–1223. PMLR, 2018. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. *Advances in Neural* Information Processing Systems, 34:8780–8794, 2021. Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. *IEEE transactions on pattern analysis and machine intelligence*, 38(2):295–307, 2015. Yossi Gandelsman, Yu Sun, Xinlei Chen, and Alexei A Efros. Test-time training with masked autoencoders. arXiv preprint arXiv:2209.07522, 2022. Hu Gao, Jing Yang, Ying Zhang, Ning Wang, Jingfan Yang, and Depeng Dang. A mountain-shaped single-stage network for accurate image restoration. *arXiv preprint arXiv:2305.05146*, 2023. Giorgio Giannone, Didrik Nielsen, and Ole Winther. Few-shot diffusion models. *10.48550/ARXIV.2205.15463*, 2022. Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A Rossi, Vishwa Vinay, and Aditya Grover. Cyclip: Cyclic contrastive language-image pretraining. *arXiv preprint arXiv:2205.14459*, 2022. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information* Processing Systems, 33:6840–6851, 2020. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. *arXiv:2204.03458*, 2022. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Lianghua Huang, Di Chen, Yu Liu, Shen Yujun, Deli Zhao, and Zhou Jingren. Composer: Creative and controllable image synthesis with composable conditions. 2023. Shady Abu Hussein, Tom Tirer, and Raja Giryes. Image-adaptive gan based reconstruction. In *Proceedings of the* AAAI Conference on Artificial Intelligence, volume 34, pp. 3121–3129, 2020a. Shady Abu Hussein, Tom Tirer, and Raja Giryes. Correction filter for single image super-resolution: Robustifying off-the-shelf deep super-resolvers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1428–1437, 2020b. Xiaozhong Ji, Yun Cao, Ying Tai, Chengjie Wang, Jilin Li, and Feiyue Huang. Real-world super-resolution via kernel estimation and noise injection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1914–1923, 2020. 
Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. arXiv preprint arXiv:2201.11793, 2022a. Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. *arXiv preprint arXiv:2210.09276*, 2022b. Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 5148–5157, 2021. Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffusionclip: Text-guided diffusion models for robust image manipulation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 2426–2435, June 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. *IJCV*, 2020. Haoying Li, Yifan Yang, Meng Chang, Shiqi Chen, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Srdiff: Single image super-resolution with diffusion probabilistic models. *Neurocomputing*, 479:47–59, 2022a. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*, 2022b. Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pp. 1833–1844, October 2021. Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In *Proceedings of the IEEE conference on computer vision and pattern recognition* workshops, pp. 136–144, 2017. Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya Jia. Video-p2p: Video editing with cross-attention control. *arXiv:2303.04761*, 2023. Andreas Lugmayr, Martin Danelljan, Luc Van Gool, and Radu Timofte. Srflow: Learning the super-resolution space with normalizing flow. In *ECCV*, 2020. Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11461–11471, 2022. Gary Mataev, Peyman Milanfar, and Michael Elad. Deepred: Deep image prior powered by red. In *Proceedings of the* IEEE/CVF International Conference on Computer Vision Workshops, pp. 0–0, 2019. Armin Mehri, Parichehr B Ardakani, and Angel D Sappa. Mprnet: Multi-path residual network for lightweight image super resolution. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2704–2713, 2021. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In *International Conference on Learning Representations*, 2022. Tomer Michaeli and Michal Irani. Blind deblurring using internal patch recurrence. 
In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III 13, pp. 783–798. Springer, 2014. Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In *CVPR*, 2023. Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors. *arXiv preprint arXiv:2302.01329*, 2023. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021. Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In *International* Conference on Machine Learning, pp. 8162–8171. PMLR, 2021. Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, and Daniel Cohen-Or. Mystyle: A personalized generative prior. *arXiv preprint arXiv:2203.17272*, 2022. Xingang Pan, Xiaohang Zhan, Bo Dai, Dahua Lin, Chen Change Loy, and Ping Luo. Exploiting deep generative prior for versatile image restoration and manipulation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021. Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fusing attentions for zero-shot text-based video editing. *arXiv:2303.09535*, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. *ACM Transactions on Graphics (TOG)*, 42(1):1–13, 2022. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. arXiv:2205.11487, 2022a. Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image superresolution via iterative refinement. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022b. Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. Singan: Learning a generative model from a single natural image. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 4570–4580, 2019. Shelly Sheynin, Oron Ashual, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, and Yaniv Taigman. Knndiffusion: Image generation via large-scale retrieval. 
*arXiv preprint arXiv:2204.02849*, 2022.

Assaf Shocher, Nadav Cohen, and Michal Irani. "zero-shot" super-resolution using deep internal learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 3118–3126, 2018.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015.

Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. *Advances in Neural Information Processing Systems*, 32, 2019.

Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. *arXiv preprint arXiv:2111.08005*, 2021.

Jian Sun, Wenfei Cao, Zongben Xu, and Jean Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 769–777, 2015.

Tom Tirer and Raja Giryes. Image restoration by iterative denoising and backward projections. *IEEE Transactions on Image Processing*, 28(3):1220–1234, 2018.

Tom Tirer and Raja Giryes. Super-resolution via image-adapted denoising cnns: Incorporating external and internal learning. *IEEE Signal Processing Letters*, 26(7):1080–1084, 2019.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 9446–9454, 2018.

Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1905–1914, 2021.

Pengxu Wei, Ziwei Xie, Hannan Lu, ZongYuan Zhan, Qixiang Ye, Wangmeng Zuo, and Liang Lin. Component divide-and-conquer for real-world image super-resolution. In *Proceedings of the European Conference on Computer Vision*, 2020.

Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. *arXiv preprint arXiv:2302.13848*, 2023.

Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G Dimakis, and Peyman Milanfar. Deblurring via stochastic refinement. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 16293–16303, 2022.

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 5728–5739, 2022.

Figure 7: Comparison of super resolution (256² → 1024²) results of the Stable Diffusion model (Rombach et al., 2022) and our method (ADIR). As can be seen from the images, our method outperforms Stable Diffusion in both sharpness and reconstruction details.

Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian denoiser: Residual learning of deep cnn for image denoising. *IEEE Transactions on Image Processing*, 26(7):3142–3155, 2017a.

Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3929–3938, 2017b.

Kai Zhang, Luc Van Gool, and Radu Timofte. Deep unfolding network for image super-resolution.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3217–3226, 2020. Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021a. Kai Zhang, Jingyun Liang, Luc Van Gool, and Radu Timofte. Designing a practical degradation model for deep blind image super-resolution. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 4791–4800, October 2021b. Kai Zhang, Yawei Li, Jingyun Liang, Jiezhang Cao, Yulun Zhang, Hao Tang, Radu Timofte, and Luc Van Gool. Practical blind denoising via swin-conv-unet and data synthesis. *arXiv preprint*, 2022. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 586–595, 2018. ## A Additional Results In the following we - Show results for super resolution with scaling factor of 8. - Show more results of deblurring task. - Show more results of colorization use-case. - Compare our method to Stable Diffusion for editing task in multiple scenarios. - Examples of retrieved nearest neighbours images ![19_image_0.png](19_image_0.png) Figure 8: Comparison of super resolution (642 → 5122) results of Guided Diffusion from section 3.2 and our method (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![20_image_0.png](20_image_0.png) Figure 9: Comparison of super resolution (2562 → 10242) results of Stable Diffusion (Rombach et al., 2022) and our method (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![21_image_0.png](21_image_0.png) Figure 10: Comparison of super resolution (2562 → 10242) results of Stable Diffusion (Rombach et al., 2022) and our method (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![22_image_0.png](22_image_0.png) Figure 11: Comparison of super resolution (2562 → 10242) results of Stable Diffusion (Rombach et al., 2022) and our method (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![23_image_0.png](23_image_0.png) (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![24_image_0.png](24_image_0.png) using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. ![25_image_0.png](25_image_0.png) Figure 14: Gaussian deblurring (σblur = 2 and σnoise = 10) results of Guided Diffusion from section 3.2 and our method (ADIR), using the unconditional model from (Rombach et al., 2022). As can be seen from the images, our method outperforms guided diffusion in both sharpness and reconstruction details. 
Figure 15: … the prompt "A beautiful frozen lake between mountains in the snow" for two different seeds.

Figure 16: … the prompt "An elephant walking" for two different seeds.

Figure 17: … to the Stable Diffusion model, for the prompt "A fox sitting in the middle of the desert".

Figure 18: … to the Stable Diffusion model, for the prompt "Taj Mahal".

Figure 19: Image colorization results comparison between DDRM (Kawar et al., 2022a), Guided diffusion proposed in section 3.2, and our adaptive approach ADIR. As can be seen, adapting the denoiser network to the given image can improve the results significantly.

Figure 20: Examples of images retrieved from Google Open Dataset (Kuznetsova et al., 2020) using CLIP (Radford et al., 2021) for super resolution with a scale factor of 8 (64² → 512²).

Figure 21: The effect of A on the K-NN retrieval: each row represents a different blur operator A, and each column shows the 5 least similar images from the 20 retrieved nearest images from Google Open Dataset (Kuznetsova et al., 2020) using CLIP (Radford et al., 2021). In all cases we used a box filter with support 3 × 3, 5 × 5, 7 × 7, 9 × 9, and 11 × 11, respectively, by row number.
Review 1:
Summary: The authors propose an Adaptive Diffusion framework for Image Reconstruction (ADIR), an extension of the guided diffusion approach aimed at various image reconstruction/restoration tasks. The framework employs a CLIP model to adapt the diffusion network to patterns essential for restoring the target image by selecting analogous image samples from an external database.
Strengths and Weaknesses: The proposed method is technically sound to me. It extends guided diffusion to tackle image reconstruction tasks. I am okay with this application paper, but I am not convinced by the current results and evaluation.
Technical Innovation: The technical innovation of ADIR appears marginal, as the core technique is rooted in guided diffusion. The rationale behind the CLIP-based image selection remains ambiguous.
Evaluation: The evaluation presents concerns since the authors did not compare their approach to state-of-the-art super-resolution and deblurring methods under commonly used experimental settings.
Specific Comments:
The ablation study on the visual-language encoder-based KNN selection is missing, which is one of the key contributions in ADIR. For image classification or other recognition and understanding tasks, I can understand why we want to select images that are similar to the test image for few-shot adaptation. But for image restoration tasks, such as image super-resolution, we may only need to randomly select pictures with rich structures and fine textures. Back 10-15 years ago, we used very few pictures (fewer than 10 in NcSR [1]) to train super-resolution, or only flower pictures in ScSR [2], to tackle images from many different domains. The key reason is that small image patches are highly redundant across natural images. It raises the question: would a random selection of K images suffice? If that is the case, the proposed method is not useful.
Single-image super-resolution (SISR) has been extensively researched, yet the authors overlooked comparisons with state-of-the-art methods, such as [3], [4], and [5]. Notably, [3], a pioneer in applying the diffusion model to SISR, holds direct relevance. The absence of benchmark comparisons on established datasets like Set5, Set14, BSD100, Urban100, and Manga109, along with the omission of PSNR and SSIM metrics, diminishes the paper's credibility. Given the paper's foundation on an advanced, large-scale diffusion model, one would anticipate it outperforming preceding SISR methods.
The deblurring section is narrowly focused on uniform blur, whereas contemporary research predominantly addresses motion blurs. Standard datasets like GoPro [6] and HIDE [7] are overlooked. Moreover, the lack of comparison with recent deblurring approaches, such as Restormer [8], MPRNet [9], and IPT [10], combined with the exclusion of benchmarking on universally acknowledged datasets, renders the presented results unconvincing.
[1] Dong W, Zhang L, Shi G, Li X. Nonlocally centralized sparse representation for image restoration. IEEE transactions on Image Processing. 2012 Dec 21;22(4):1620-30.
[2] Yang J, Wright J, Huang TS, Ma Y. Image super-resolution via sparse representation. IEEE transactions on image processing. 2010 May 18;19(11):2861-73.
[3] Haoying Li, Yifan Yang, Meng Chang, Shiqi Chen, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Srdiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 2022.
[4] Kai Zhang, Luc Van Gool, and Radu Timofte. Deep unfolding network for image super-resolution.
In CVPR, 2020.
[5] Liang J, Cao J, Sun G, Zhang K, Van Gool L, Timofte R. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision 2021 (pp. 1833-1844).
[6] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017.
[7] Ziyi Shen, Wenguan Wang, Xiankai Lu, Jianbing Shen, Haibin Ling, Tingfa Xu, and Ling Shao. Human-aware motion deblurring. In ICCV, 2019.
[8] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In CVPR, 2022.
[9] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In CVPR, 2021.
[10] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In CVPR, 2021.
Requested Changes:
- Adding an ablation study on the visual-language encoder-based KNN selection by comparing it with random selection.
- Comparing with SOTA SISR approaches on the commonly used benchmark datasets and providing PSNR and SSIM results.
- Comparing with SOTA image deblurring approaches on the GoPro and HIDE datasets.
Broader Impact Concerns: N/A

==================================================

Review 2:
Summary: The paper proposes a method for image super-resolution by modifying the diffusion model.
Strengths and Weaknesses: The results look encouraging. However, the description is obscure, and the whole paper is difficult to understand. Key points and the innovative parts need to be clearly clarified.
Requested Changes: Fig. 2 is too simple. Key information is missing from this figure. It is redundant to present 3.1 again since this is not the method of this paper. The method description is too obscure to understand. For example, what do you mean by "The key difference between our framework and Dhariwal & Nichol (2021) is that we need to base our method on the specific degraded image y rather than on a classifier that has been trained for each level of noise of {xt}."? Some format errors are present in Tables 1 and 2. It seems that the results of the proposed method are not significantly better than those of existing works. In the experiments, the proposed method is compared with only diffusion-related methods. However, many other SR methods should be compared in detail. How about the computational complexity of the proposed method? Since numerous iterations are required in the proposed method, the runtime is longer than that of conventional methods. Thus, the runtime, model size, model parameters, etc., should be compared in the paper.
Broader Impact Concerns: N/A

==================================================

Review 3:
Summary: The work presents a method for using pretrained diffusion models for image restoration tasks. The main principle is to describe the forward corruption model and guide the denoising process accordingly. The primary novelty lies in an additional stage proposed by the authors, which involves adapting the denoising network parameters by fitting a denoising diffusion process to a small set of images similar to the input sample. These samples are found through K-NN in CLIP space. The method is demonstrated on several tasks, including super-resolution, deblurring, colorization, and text editing.
Strengths and Weaknesses:
### Strengths
(+) The work proposes a versatile tool for several image restoration and editing tasks.
(+) The method is described clearly and appears technically sound.
(+) The k-NN retrieval and adaptation seem like pragmatic solutions in many cases when working with natural images.
(+) Multiple applications are demonstrated, and significant improvements are shown compared to baselines.
### Weaknesses
(-) As also mentioned by the authors, the adaptation technique using x_0 prediction for guidance isn't particular to this work. In addition to the mentioned works, [1] also discusses the "x0" prediction in section 3.1 (Reconstruction-guided sampling). The following points are more specific to the guidance part of the paper, but in general I believe the contribution of the suggested method does not lie in the guidance. If the authors believe this is not the case, I would ask for a more detailed comparison/discussion with other techniques.
(-) It seems that equation 16 (and Alg. 1 accordingly) treats epsilon as independent of x_t? If so, this should be stated and motivated clearly.
(-) The adaptation explanation is not clear to me. It seems eq 18 is using the standard "loss_simple" but uses y instead of x_0. Does this mean adaptation is done with or without the proposed guidance?
(-) My confusion carries over to the NN retrieval where, according to Figure 2 and eq 19, it seems the authors assume CLIP is invariant to the corruption, so that the NNs are clean images with embeddings z_i that are similar enough to the corrupted input embedding. This assumption needs to be stated and discussed, as corrupted images might be OOD for CLIP and result in wrong retrievals. I also believe y should be changed here to avoid confusion, as here it is the CLIP embedding of the corrupted image y.
(-) Another point that lacks discussion is the usage of latent diffusion models like stable diffusion, whereas the corruption occurs in pixel space. This means the corruption operator A doesn't translate from pixel space to latent space -- how is this dealt with?
(-) Adapting a denoising network on a small number of inputs might overfit. Is there any special mechanism used here to prevent that? I would appreciate an analysis of how K influences the result.
(-) As the adaptation necessitates fine-tuning, what is the add-on in terms of computation and runtime to perform a reconstruction on a single input sample?
(-) In the SR task, the 50 validation images are not a standard evaluation benchmark; in that case it'd be good to add some recent, non-diffusion based methods.
(-) The work of Chung et al. is mentioned and the differences in stabilizing the gradient are described, yet no experiments show a comparison to it.
(-) I'm missing non-diffusion baselines in comparison for the deblurring task.
(-) I'm missing an ablation showing the importance of each ingredient: how much of the improvement is carried by adaptation alone and how much by the guidance alone?
(-) Particular limitations of the suggested method should be discussed, beyond randomness. kNN retrieval can also impose limitations, adaptation adds run-time, the kernel may be unknown, etc.
#### Minor:
(-) Equation (6) doesn't seem to influence \Sigma, only \mu.
(-) This sentence needs more explanation: "Adapting the denoising network to the measurement image y, allows it to learn cross-scale features". Is this due to a particular choice of network architecture?
(-) Should the spherical distance be squared?
(-) Note the formatting of Tables 1 and 2.
(-) In Table 3, LPIPS for the first model should be blue.
#### Typos:
(-) In equation 4, the sigma should be inside the normal distribution, I believe.
#### Missing references
[1] Video diffusion models (Jonathan Ho et al.)
Requested Changes: See weaknesses for my request of clarifications and additional comparisons with baselines.
Broader Impact Concerns: No concerns

==================================================

Metareview:
Recommendation: Reject
Comment: The primary technical innovation of this paper lies in employing a diffusion-model approach for image reconstruction, using selected K-NN images for tuning. However, the paper lacks a well-demonstrated selection scheme for K-NN images. If random selection is deemed adequate to support diffusion tuning, the significance of this work would be substantially diminished. Therefore, the AE recommends rejection and invites the authors to resubmit the paper after substantial revision.

==================================================
# Neural Clamping: Joint Input Perturbation And Temperature Scaling For Neural Network Calibration

Yung-Chen Tang *yctang@cse.cuhk.edu.hk* The Chinese University of Hong Kong, National Tsing Hua University

Pin-Yu Chen *pin-yu.chen@ibm.com* IBM Research

Tsung-Yi Ho *tyho@cse.cuhk.edu.hk* The Chinese University of Hong Kong

Reviewed on OpenReview: *https://openreview.net/forum?id=qSFToMqLcq*

## Abstract

Neural network calibration is an essential task in deep learning to ensure consistency between the confidence of model prediction and the true correctness likelihood. In this paper, we propose a new post-processing calibration method called **Neural Clamping**, which employs a simple joint input-output transformation on a pre-trained classifier via a learnable universal input perturbation and an output temperature scaling parameter. Moreover, we provide theoretical explanations of why Neural Clamping is provably better than temperature scaling. Evaluated on BloodMNIST, CIFAR-100, and ImageNet image recognition datasets and a variety of deep neural network models, our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods. The code is available at github.com/yungchentang/NCToolkit, and the demo is available at huggingface.co/spaces/TrustSafeAI/NCTV.

## 1 Introduction

Deep neural networks have been widely deployed in real-world machine learning empowered applications such as computer vision, natural language processing, and robotics. However, without further calibration, model prediction confidence usually deviates from the true correctness likelihood (Guo et al., 2017). The issue of poor calibration in neural networks is further amplified in high-stakes or safety-critical decision making scenarios requiring accurate uncertainty quantification and estimation, such as disease diagnosis (Jiang et al., 2012; Esteva et al., 2017) and traffic sign recognition systems in autonomous vehicles (Shafaei et al., 2018). Therefore, calibration plays an important role in trustworthy machine learning (Guo et al., 2017; Kumar et al., 2019; Minderer et al., 2021).

Recent studies on neural network calibration can be mainly divided into two categories: *in-processing* and *post-processing*. In-processing involves training or fine-tuning neural networks to mitigate their calibration errors, such as in Müller et al. (2019); Liang et al. (2020); Tian et al. (2021); Qin et al. (2021); Tao et al. (2023). Post-processing involves post-hoc intervention on a pre-trained neural network model without changing the given model parameters, such as adjusting the data representations of the penultimate layer (i.e., the logits) to calibrate the final softmax layer's output of prediction probability estimates. As in-processing calibration tends to be time-consuming and computationally expensive, in this paper we opt to focus on post-processing calibration. Current post-processing calibration methods predominantly focus on processing or remapping the output logits of neural networks, *e.g.,* Guo et al. (2017); Kull et al. (2019); Gupta et al. (2020); Tian et al. (2021); Xiong et al. (2023). However, we aim to provide a new perspective and show that joint input-output model calibration can further improve neural network calibration. The rationale is that active adjustment of data inputs will affect their representations in every subsequent layer, rather than passive modification of the output logits.
In this paper, we propose a new post-processing calibration framework for neural networks. We name this framework **Neural Clamping** because its methodology is based on learning a simple joint input-output transformation for calibrating a pre-trained (frozen) neural network classifier. Figure 1 illustrates the entire procedure of Neural Clamping. We consider a K-way neural network classifier f_θ(·) ∈ ℝ^K with fixed model parameters θ. The classifier outputs the logits for K classes and uses softmax on the logits to obtain the final confidence on class predictions (i.e., probability scores). To realize joint input-output calibration, Neural Clamping adds a trainable universal perturbation δ to every data input and a trainable temperature scaling parameter T at the output logits. The parameters δ and T are jointly learned by minimizing the focal loss (Lin et al., 2017) with a weight-decay regularization term on a calibration set (i.e., validation set) {x_i, y_i}_{i=1}^n. The focal loss assigns non-uniform importance to {x_i}_{i=1}^n during training and includes the standard cross entropy loss as a special case. Finally, in the evaluation (testing) phase, Neural Clamping appends the optimized calibration parameters δ* and T* to the input and output of the fixed classifier f_θ(·), respectively.

Figure 1: Overview of Neural Clamping: a joint input-output post-processing calibration framework.

Our main contributions are summarized as follows:

- We propose Neural Clamping as a novel joint input-output post-processing calibration framework for neural networks. Neural Clamping learns a universal input perturbation and a temperature scaling parameter at the model output for calibration. It includes temperature scaling, a strong baseline for post-processing calibration, as a special case.
- We develop theoretical results to prove that Neural Clamping is better than temperature scaling in terms of constrained entropy maximization for uncertainty quantification. In addition, we use first-order approximation to optimize the data-driven initialization term for the input perturbation, improving the stability of Neural Clamping in our ablation study. Furthermore, we leverage this theoretical result to design a computationally efficient algorithm for Neural Clamping.
- Evaluated on different deep neural network classifiers (including ResNet (He et al., 2016), Vision Transformers (Dosovitskiy et al., 2020), and MLP-Mixer (Tolstikhin et al., 2021)) trained on BloodMNIST (Yang et al., 2023), CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-1K (Deng et al., 2009) datasets and three calibration metrics, Neural Clamping outperforms state-of-the-art post-processing calibration methods. For instance, when calibrating the ResNet-110 model on CIFAR-100, the expected calibration error is improved by 34% when compared to the best baseline.

## 2 Background And Related Work

In this section, we begin by introducing the problem formulation for calibration and describing the notations used in this paper. Furthermore, we define different metrics used to measure calibration error and conclude this section with an overview of post-processing calibration methods.

## 2.1 Probabilistic Characterization Of Neural Network Calibration

Assume a pair of data sample and label (x, y) is drawn from a joint distribution D ⊆ X × Y, where x ∈ X is a data sample, and y ∈ Y = {1, ..., K} is the ground-truth class label.
Let f_θ : X → ℝ^K denote a K-way neural network classifier parametrized by θ, where z = f_θ(x) = [z_1, ..., z_K] is the model's output *logits* for a given data input x. Following the convention of neural network implementations, the prediction probability score of x is obtained by applying the *softmax* function σ on z, denoted as σ(z) ∈ [0, 1]^K. The k-th component of σ(z) is defined as σ(z)_k = exp(z_k) / ∑_{k'=1}^K exp(z_{k'}), which satisfies ∑_{k=1}^K σ(z)_k = 1 and σ(z)_k ≥ 0. Suppose the model predicts a most likely class ŷ = arg max_k σ(z)_k with confidence p̂ = σ(z)_ŷ. Formally, the model f_θ is called *calibrated* if

$$\mathbb{P}(\mathbf{y}=\hat{y}\,|\,\mathbf{p}=\hat{p})=\hat{p},\tag{1}$$

where ℙ denotes probability and p denotes the true likelihood. Equation (1) only considers the prediction confidence of the most likely (top-1) class. We can extend it to consider the prediction confidence of every class. Let the class-wise prediction confidence be p̂_i = σ(z)_i for i ∈ {1, ..., K}; the network is called classwise-calibrated if

$$\mathbb{P}(\mathbf{y}=i\,|\,\mathbf{p}_{i}=\hat{p}_{i})=\hat{p}_{i},\quad\forall\ i\in\mathbb{Y},\tag{2}$$

where p_i is the true likelihood for class i.

## 2.2 Calibration Metrics

Expected Calibration Error (ECE). Calibration error aims to compute the difference between confidence and accuracy as

$$\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\,\left|\mathbb{P}(\mathbf{y}=\hat{y}\,|\,\mathbf{p}=\hat{p})-\hat{p}\right|\,\right].\tag{3}$$

Unfortunately, this quantity cannot be exactly computed from equation (3) if the underlying data distribution D is unknown. The most popular metric to measure calibration is the *Expected Calibration Error* (ECE) (Guo et al., 2017; Naeini et al., 2015). ECE approximates the calibration error by partitioning predictions into M intervals (bins) {B_i}_{i=1}^M. The calibration error is calculated by first taking the difference between the confidence and accuracy in each bin and then computing the weighted average across all bins, i.e.,

$${\rm ECE}=\sum_{i=1}^{M}\frac{|B_{i}|}{n}\left|{\rm acc}(B_{i})-{\rm conf}(B_{i})\right|,\tag{4}$$

where |B_i| is the number of samples in bin B_i, n is the total number of data, and acc(B_i) and conf(B_i) are the accuracy and confidence in B_i, respectively.

Adaptive Expected Calibration Error (AECE) (Mukhoti et al., 2020). Since most data for a trained model fall into the highest confidence bins, these bins mostly determine the value of the ECE. Instead of pre-defined intervals for bin partitioning, in AECE, adaptive interval ranges ensure each bin has the same number of samples. AECE is defined as

$${\rm AECE}=\sum_{i=1}^{M}\frac{|B_{i}|}{n}\left|{\rm acc}(B_{i})-{\rm conf}(B_{i})\right|\tag{5}$$
$${\rm subject\ to}\quad|B_{i}|=|B_{j}|\quad\forall\,i,j,\tag{6}$$

where |B_i| is the number of samples in bin B_i, n is the total number of data, and acc(B_i) and conf(B_i) are the accuracy and confidence in B_i, respectively.

Static Calibration Error (SCE) (Nixon et al., 2019). ECE does not take into account the calibration error for all classes. It only calculates the calibration error of the top-1 class prediction.
SCE extends ECE and considers multi-class predictions based on Equation (2):

$$\mathrm{SCE}=\frac{1}{K}\sum_{k=1}^{K}\sum_{i=1}^{M}\frac{|B_{i}^{k}|}{n}\,\left|\mathrm{acc}(i,k)-\mathrm{conf}(i,k)\right|$$

where K is the number of classes, |B_i^k| is the number of samples in bin i of class k, n is the total number of data, and acc(i, k) and conf(i, k) are the accuracy and confidence in B_i^k, respectively.

## 2.3 Post-Processing Calibration Methods

Temperature Scaling (Guo et al., 2017). Temperature scaling is the simplest variant of Platt scaling (Platt et al., 1999), which is a method of converting a classifier's output into a probability distribution over all classes. Specifically, all classes share the same scalar parameter (i.e., temperature) T > 0 in the softmax output such that q̂ = σ(z/T), where q̂ ∈ [0, 1]^K denotes the calibrated probability scores. It is worth noting that by definition temperature scaling only changes the confidence but not the class prediction. Moreover, the entropy of q̂ increases with T when T ≥ 1. The temperature T is optimized via the Negative Log Likelihood (NLL) over a calibration training set {x_i}_{i=1}^n, where NLL is defined as −∑_{i=1}^n log(q̂_{i,y_i}) and q̂_{i,y_i} is the prediction on the correct class y_i for the i-th sample.

Vector Scaling and Matrix Scaling (Guo et al., 2017). These are two extensions of Platt scaling (Platt et al., 1999). Let z be the output logits for an input x. Vector Scaling and Matrix Scaling adopt linear transformations on z such that q̂ = σ(Wz + b), where W ∈ ℝ^{K×K} and b ∈ ℝ^K for both settings. Vector scaling is a variation of matrix scaling where W is restricted to be a diagonal matrix. The parameters W and b are optimized based on NLL.

MS-ODIR and Dir-ODIR (Kull et al., 2019). The authors in Kull et al. (2019) proposed Dirichlet calibration and the ODIR (Off-Diagonal and Intercept Regularization) term. The difference between matrix scaling and Dirichlet calibration is that the former affects logits while the latter modifies pseudo-logits through q̂ = σ(W ln(σ(z)) + b), where ln(·) is a component-wise natural logarithm function. The results in Guo et al. (2017) indicated poor matrix scaling performance, because calibration methods with a large number of parameters will over-fit to a small calibration set. Therefore, Kull et al. (2019) proposed a new regularization method called ODIR to address the overfitting problem, i.e., ODIR = (1/(K(K−1))) ∑_{j≠k} w_{j,k} + (1/K) ∑_j b_j, where w_{j,k} and b_j are elements of W and b, respectively. MS-ODIR applies Matrix Scaling (Guo et al., 2017) with ODIR, while Dir-ODIR uses Dirichlet calibration (Kull et al., 2019).

Spline-Fitting (Gupta et al., 2020) and Density-Ratio Calibration (Xiong et al., 2023). Spline-Fitting approximates the empirical cumulative distribution with splines to re-calibrate network outputs into calibrated probabilities. However, this method is limited to calibration of the top-k class prediction and cannot be extended to all-class predictions. The Density-Ratio Calibration approach focuses on estimating continuous density functions, which aligns well with calibration methods that produce continuous confidence scores, such as scaling-based calibration techniques. However, this method is limited in that it can only calibrate the predicted probabilities, and cannot calibrate the full probability distribution output by the model. Therefore, these methods are beyond our studied problem of all-class post-processing calibration.
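To make the binned estimator in equation (4) concrete, the following is a minimal NumPy sketch of the equal-width-bin ECE; the function name and the 15-bin default are illustrative choices (15 bins matches the evaluation protocol reported later in Section 4).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Equal-width-bin ECE estimator following equation (4).

    confidences: (n,) array of top-1 softmax scores p-hat.
    correct:     (n,) array with 1 if the top-1 prediction is right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Right-inclusive bins so that a confidence of exactly 1.0 is counted.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()         # acc(B_i)
        conf = confidences[in_bin].mean()    # conf(B_i)
        ece += in_bin.sum() / n * abs(acc - conf)
    return ece
```

Replacing the fixed edges with quantile-based edges, e.g. np.quantile(confidences, np.linspace(0, 1, n_bins + 1)), yields the equal-mass binning used by AECE in equations (5)-(6).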
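Similarly, a minimal PyTorch sketch of the temperature-scaling baseline is given below. It fits a single T > 0 by minimizing NLL on cached calibration logits; the exponential parameterization and the Adam optimizer are choices of this sketch rather than details prescribed by Guo et al. (2017).

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, steps=500):
    """Fit a single temperature T > 0 by minimizing NLL on a calibration set.

    logits: (n, K) tensor of pre-softmax outputs from the frozen classifier.
    labels: (n,)   tensor of ground-truth class indices.
    """
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Cross entropy on scaled logits equals the NLL of sigma(z / T).
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()
```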
## 3 Neural Clamping

Based on the proposed framework of Neural Clamping as illustrated in Figure 1, in this section we provide detailed descriptions of the joint input-output calibration methodology, the training objective function, the influence of hyperparameter selection, the theoretical justification of the improvement over temperature scaling, and the data-driven initialization.

## 3.1 Joint Input-Output Calibration

To realize joint input-output calibration, Neural Clamping appends a learnable universal perturbation δ at the model input and a learnable temperature scaling parameter T for all classes at the model output. In contrast to the convention of output calibration, Neural Clamping introduces the notion of input calibration by applying simple trainable transformations on the data inputs prior to feeding them to the model. In our implementation, the input calibration is simply a universal additive perturbation δ. Therefore, Neural Clamping includes temperature scaling as a special case when setting δ = 0.

Modern neural networks often suffer from overconfidence, resulting in poor calibration and low output entropy (Guo et al., 2017; Mukhoti et al., 2020). Calibrating neural networks, a common approach to address the overconfidence issue, typically results in increased entropy as a byproduct. In the seminal work on neural network calibration by Guo et al. (2017), it is noted that adjusting the temperature parameter to improve calibration aligns with the objective of maximizing the entropy of the output probability distribution under additional constraints. Building upon this notion, we extend the problem formulation presented in Guo et al. (2017), which utilizes entropy to study output calibration. We evaluate this problem on a calibration set {x_i, y_i}_{i=1}^n with an input perturbation δ. The objective is to find the best-calibrated output q* that maximizes the entropy of q* while satisfying the specified calibration constraints:

$$\begin{array}{ll}\operatorname*{Maximize}_{q}&-\sum_{i=1}^{n}q(\mathbf{z}_{i})^{\top}\log(q(\mathbf{z}_{i}))\\ \mathrm{subject\ to}&q(\mathbf{z}_{i})^{(k)}\geq0\quad\forall\,i\in\{1,\ldots,n\}\ \mathrm{and}\ k\in\{1,\ldots,K\}\\ &\mathbf{1}^{\top}q(\mathbf{z}_{i})=1\quad\forall\,i\in\{1,\ldots,n\}\\ &\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}\mathbf{e}^{(y_{i})}=\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}q(\mathbf{z}_{i})\end{array}\tag{7}$$

where ·^⊤ denotes vector transpose, z_i = f_θ(x_i + δ) is the logit of x_i + δ, **1** is an all-one vector, e^(y_i) is a one-hot vector corresponding to the class label y_i, and log(·) is an element-wise log operator. The first two constraints guarantee that q is a probability distribution, whereas the third constraint restricts the range of possible distributions by stipulating that the average true-class logit equals the average weighted logit. To motivate the utility of joint input-output calibration, the following lemma formally states that the proposed form of joint input perturbation and temperature scaling in Neural Clamping is the unique solution q* to the above constrained entropy maximization problem.

Lemma 3.1 (optimality of joint input-output calibration). For any input perturbation δ, let f_θ(·) = [f_θ^(1), ..., f_θ^(K)] be a fixed K-way neural network classifier and let z be the output logits of a perturbed data input x + δ.
Then the proposed form of joint input-output calibration in Neural Clamping is the unique solution

$$q^{*}(\mathbf{z})^{(k)}=\frac{\exp[f_{\theta}^{(k)}(\mathbf{x}+\delta)/T]}{\sum_{j=1}^{K}\exp[f_{\theta}^{(j)}(\mathbf{x}+\delta)/T]},\quad\forall\,k\in\{1,\ldots,K\},$$

to the constrained entropy maximization problem in equation (7).

Proof. The proof is given in Appendix B.

## 3.2 Training Objective Function In Neural Clamping

Neural Clamping uses the focal loss (Lin et al., 2017) and a weight-decay regularization term as the overall objective function for calibration. It has been shown that the focal loss is an upper bound of the regularized KL-divergence (Charoenphakdee et al., 2021; Mukhoti et al., 2020). Therefore, minimizing the focal loss aims to reduce the KL divergence between the ground-truth distribution and the predicted distribution while increasing the entropy of the predicted distribution. Focal loss is an adjusted cross entropy loss with a modulating factor (1 − p̂_{i,y_i})^γ and γ ≥ 0, where p̂_{i,y_i} is the prediction probability given by a neural network on the correct class y_i for the i-th sample. When γ = 0, focal loss reduces to the standard cross entropy loss. Formally, it is defined as

$$\mathcal{L}_{FL}^{\gamma}(f_{\theta}(\mathbf{x}_{i}),y_{i})=-(1-\hat{p}_{i,y_{i}})^{\gamma}\log(\hat{p}_{i,y_{i}})\tag{8}$$

Figure 2: Neural Clamping on ResNet-50/ResNet-110 and Wide-ResNet-40-10 with different γ values and the resulting expected calibration error (ECE), training loss, and entropy on BloodMNIST and CIFAR-100. When γ = 0, focal loss reduces to cross entropy loss. The experiment setup is the same as in Section 4.

Given a calibration training set {x_i, y_i}_{i=1}^n, the optimal calibration parameters δ and T in Neural Clamping are obtained by solving

$$\delta^{\star},T^{\star}=\arg\min_{\delta,T}\sum_{i=1}^{n}\mathcal{L}_{FL}^{\gamma}(f_{\theta}(\mathbf{x}_{i}+\delta)/T,y_{i})+\lambda\left\|\delta\right\|_{2}^{2}\tag{9}$$

Like other post-processing calibration methods, Neural Clamping only appends a perturbation at the model input and a temperature scaling parameter at the model output. It does not require any alterations to the given neural network for calibration.

## 3.3 How To Choose A Proper γ Value In Focal Loss For Neural Clamping?

The focal loss has a hyperparameter γ governing the importance assigned to each data sample in the aggregated loss. To understand its influence on calibration, in Figure 2 we performed a grid search of γ values between 0 and 1 with an interval of 0.05 to calibrate Wide-ResNet-40-10 (Zagoruyko & Komodakis, 2016) and DenseNet-121 (Huang et al., 2017) models trained on the CIFAR-100 (Krizhevsky et al., 2009) dataset. While the entropy continues to increase as the gamma value increases, ECE attains its minimum at some intermediate γ value and is better than the ECE of using cross entropy loss (i.e., γ = 0). This observation verifies the importance of using focal loss for calibration. In our implementation, we select the best γ value that minimizes ECE on the calibration dataset from a candidate pool of γ values with separate runs.

## 3.4 Theoretical Justification On The Advantage Of Neural Clamping

Here we use the entropy after calibration as a quantifiable metric to prove that Neural Clamping can further increase this quantity over temperature scaling. Note that temperature scaling is a special case of Neural Clamping when there is no input calibration (i.e., setting δ = 0).
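For concreteness, the sketch below illustrates the joint optimization in equation (9) in PyTorch; freezing δ at zero recovers exactly this temperature-scaling special case. The classifier, data loader, and hyperparameter values are assumed given and illustrative here; the defaults used in the experiments are discussed in Section 4.1.

```python
import torch
import torch.nn.functional as F

def neural_clamping_fit(model, loader, input_shape, gamma=1.0, lam=0.1,
                        lr=1e-3, epochs=100):
    """Sketch of solving equation (9): jointly learn delta and T.

    model:  frozen classifier f_theta (parameters assumed requires_grad=False).
    loader: iterable of (x, y) batches from the calibration set.
    """
    delta = torch.zeros(input_shape, requires_grad=True)  # universal perturbation
    temp = torch.ones(1, requires_grad=True)              # temperature T, initialized to 1
    optimizer = torch.optim.SGD([delta, temp], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            logits = model(x + delta) / temp                     # joint input-output transform
            log_p = F.log_softmax(logits, dim=1)
            log_py = log_p.gather(1, y.unsqueeze(1)).squeeze(1)  # log p-hat on the true class
            focal = -((1.0 - log_py.exp()) ** gamma) * log_py    # focal loss, equation (8)
            loss = focal.mean() + lam * delta.pow(2).sum()       # + lambda * ||delta||_2^2
            loss.backward()
            optimizer.step()
    return delta.detach(), temp.detach()
```

At test time, predictions are then produced as σ(f_θ(x + δ*)/T*), matching the evaluation phase in Figure 1.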
For ease of understanding, we define g_i as the gradient of the output entropy H(σ(f_θ(·)/T)) with respect to the input data x_i = [x_i^(1), ..., x_i^(m)] ∈ [α, β], where [α, β] ⊆ ℝ^m × ℝ^m denotes the bounded range of all feasible data inputs (e.g., every image pixel value is within [0, 255]). We further define ℓ ∈ ℝ^m and µ ∈ ℝ^m as the lower bound and the upper bound over all calibration data {x_i}_{i=1}^n on each input dimension. That is, their j-th entries are defined as ℓ_j = min_{i∈{1,...,n}} x_i^(j) and µ_j = max_{i∈{1,...,n}} x_i^(j), respectively. With the use of first-order approximation, the following theorem shows that given the same temperature value T, Neural Clamping increases the entropy of temperature scaling by δ^⊤g, demonstrating the advantage of involving input calibration. Furthermore, based on our derivation and the data-driven bounds ℓ and µ, we can obtain a closed-form first-order optimal solution δ̄ for maximizing the entropy increment δ^⊤g. We call δ̄ the *data-driven initialization* for the input perturbation δ. We will perform an ablation study to compare the performance and stability of data-driven versus random initialization in Section 4.3. In the following theorem, the notation |·|, sign, and ⊙ denote element-wise absolute value, sign operation (i.e., ±1), and product (i.e., Hadamard product), respectively.

Theorem 3.2. **(provable entropy increment and data-driven initialization)** Let [α, β] be the feasible range of data inputs and g = ∑_{i=1}^n g_i = [g^(1), ..., g^(m)] be the sum of local input gradients. Define η ∈ ℝ^m element-wise such that η_j = ℓ_j − α_j if g^(j) < 0, η_j = β_j − µ_j if g^(j) > 0, and η_j = 0 otherwise, for every j ∈ {1, ..., m}. Approaching by first-order approximation and given the same temperature value T, Neural Clamping increases the entropy of temperature scaling by δ^⊤g. Furthermore, the optimal value δ̄ for maximizing δ^⊤g is δ̄ = sign(g) ⊙ η.

Proof. The proof is given in Appendix C.

Table 1: Comparison with various calibration methods on BloodMNIST with ResNet-50. The reported results are mean and standard deviation over 5 runs. The best/second-best method is highlighted by blue/green color. On ECE/AECE, the relative improvement of Neural Clamping over the best baseline is 31% and 28%, respectively.

| ResNet-50 | | | | | |
|----------------------|--------------|----------------|------------|------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻²) ↓ |
| Uncalibrated | 85.79 | 0.2256 | 5.77 | 5.76 | 1.7003 |
| Temperature Scaling | 85.79 ±0 | 0.3726 ±0 | 1.77 ±0 | 1.66 ±0 | 1.1067 ±0 |
| TS by Grid Search | 85.79 ±0 | 0.3684 ±0 | 2.13 ±0 | 1.68 ±0 | 1.1041 ±0 |
| Vector Scaling | 85.79 ±0.05 | 0.3653 ±0.0023 | 1.97 ±0.11 | 1.94 ±0.06 | 0.9264 ±0.0574 |
| Matrix Scaling | 85.79 ±0.38 | 0.2984 ±0.0161 | 4.96 ±0.65 | 4.86 ±0.71 | 1.4665 ±0.1314 |
| MS-ODIR | 85.79 ±0.04 | 0.3726 ±0.0001 | 1.94 ±0.01 | 1.70 ±0.03 | 0.9099 ±0.0101 |
| Dir-ODIR | 85.79 ±0.02 | 0.3748 ±0.0002 | 1.55 ±0.04 | 1.71 ±0.09 | 0.8366 ±0.0034 |
| Neural Clamping (CE) | 85.79 ±0.02 | 0.3820 ±0.0005 | 1.54 ±0.02 | 1.57 ±0.05 | 1.1100 ±0.0103 |
| Neural Clamping (FL) | 85.82 ±0.03 | 0.4204 ±0.0004 | 1.05 ±0.03 | 1.19 ±0.06 | 1.0797 ±0.0042 |

## 4 Performance Evaluation

In this section, we conducted extensive experiments to evaluate the performance of our proposed Neural Clamping calibration method using the calibration metrics introduced in Section 2.2. We compared our method to several baseline and state-of-the-art calibration methods.
All experiments are evaluated on three popular image recognition datasets (BloodMNIST, CIFAR-100, ImageNet-1K) and six trained deep neural network models (e.g., ResNet, Vision Transformer (ViT), and MLP-Mixer). An ablation study on Neural Clamping is presented at the end of this section.

## 4.1 Evaluation And Implementation Details

Experiment setup. We used ResNet-50 (He et al., 2016) on BloodMNIST (Yang et al., 2023); ResNet-110 (He et al., 2016) and Wide-ResNet-40-10 (Zagoruyko & Komodakis, 2016) models on CIFAR-100 (Krizhevsky et al., 2009); and ResNet-101 (He et al., 2016), ViT-S/16 (Dosovitskiy et al., 2020), and MLP-Mixer B/16 (Tolstikhin et al., 2021) models on ImageNet-1K (Deng et al., 2009). BloodMNIST, a recognized medical machine learning benchmark, features 11,959/1,712/3,421 samples for training/validation/evaluation. For CIFAR-100 and ImageNet, lacking default validation data, we divided CIFAR-100's training set into 45,000 training images and 5,000 calibration images. For ImageNet, 25,000 test images were reserved for calibration, and the remaining 25,000 were used for evaluation. The same calibration dataset and test set were shared across all methods. Our experiments ran on an Nvidia Tesla V100 with 32GB RAM and an Intel Xeon Gold CPU.

Table 2: Comparison with various calibration methods on CIFAR-100 with different models. The reported results are mean and standard deviation over 5 runs. The best/second-best method is highlighted by blue/green color. On ECE, the relative improvement of Neural Clamping over the best baseline is 34%/5% on ResNet-110/Wide-ResNet-40-10, respectively.

| ResNet-110 | | | | | |
|----------------------|--------------|----------------|-------------|-------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻²) ↓ |
| Uncalibrated | 74.15 | 0.4742 | 10.74 | 10.71 | 0.2763 |
| Temperature Scaling | 74.15 ±0 | 0.8991 ±0 | 1.71 ±0 | 1.63 ±0 | 0.1711 ±0 |
| TS by Grid Search | 74.15 ±0 | 0.9239 ±0 | 1.35 ±0 | 1.38 ±0 | 0.1717 ±0 |
| Vector Scaling | 73.81 ±0.05 | 0.8698 ±0.0008 | 2.29 ±0.07 | 2.15 ±0.15 | 0.1949 ±0.0046 |
| Matrix Scaling | 62.03 ±0.31 | 0.1552 ±0.0026 | 31.85 ±0.29 | 31.85 ±0.29 | 0.6842 ±0.0057 |
| MS-ODIR | 74.07 ±0.03 | 0.9035 ±0.0001 | 1.79 ±0.04 | 1.75 ±0.03 | 0.1797 ±0.0006 |
| Dir-ODIR | 74.10 ±0.04 | 0.9160 ±0.0002 | 1.36 ±0.05 | 1.31 ±0.03 | 0.1780 ±0.0014 |
| Neural Clamping (CE) | 74.17 ±0.07 | 0.8928 ±0.0061 | 1.67 ±0.16 | 1.63 ±0.19 | 0.1709 ±0.0020 |
| Neural Clamping (FL) | 74.16 ±0.09 | 0.9707 ±0.0049 | 0.89 ±0.06 | 1.01 ±0.11 | 0.1754 ±0.0015 |

| Wide-ResNet-40-10 | | | | | |
|----------------------|--------------|----------------|-------------|-------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻²) ↓ |
| Uncalibrated | 79.51 | 0.4210 | 7.63 | 7.63 | 0.2188 |
| Temperature Scaling | 79.51 ±0 | 0.7420 ±0 | 2.30 ±0 | 2.17 ±0 | 0.1627 ±0 |
| TS by Grid Search | 79.51 ±0 | 0.8359 ±0 | 1.75 ±0 | 1.54 ±0 | 0.1659 ±0 |
| Vector Scaling | 79.08 ±0.09 | 0.7079 ±0.0012 | 2.52 ±0.07 | 2.35 ±0.05 | 0.1818 ±0.0032 |
| Matrix Scaling | 68.48 ±0.16 | 0.1371 ±0.0023 | 26.13 ±0.15 | 26.12 ±0.15 | 0.5657 ±0.0024 |
| MS-ODIR | 79.15 ±0.03 | 0.7529 ±0.0002 | 1.90 ±0.07 | 1.95 ±0.03 | 0.1705 ±0.0008 |
| Dir-ODIR | 79.51 ±0.01 | 0.7707 ±0.0001 | 1.81 ±0.03 | 1.98 ±0.01 | 0.1625 ±0.0004 |
| Neural Clamping (CE) | 79.53 ±0.01 | 0.7461 ±0.0030 | 2.27 ±0.03 | 2.20 ±0.03 | 0.1624 ±0.0004 |
| Neural Clamping (FL) | 79.53 ±0.04 | 0.8626 ±0.0033 | 1.67 ±0.14 | 1.66 ±0.12 | 0.1683 ±0.0014 |

Comparative methods.
We compared our method to all the post-processing calibration methods introduced in Section 2.3, including Temperature Scaling (Guo et al., 2017), Vector Scaling, Matrix Scaling, Matrix Scaling with ODIR (MS-ODIR) (Kull et al., 2019), and Dirichlet Calibration (Dir-ODIR) (Kull et al., 2019). For temperature scaling, we considered two implementations: (a) learning the temperature by minimizing NLL loss via gradient descent on the calibration dataset, and (b) taking a grid search on temperature over 0 to 5 with a resolution of 0.001 and then reporting the lowest ECE and its corresponding temperature, which we call TS (Grid Searched). For MS-ODIR and Dir-ODIR, we trained their regularization coefficient with 7 values from 10⁻² to 10⁴ and chose the best result on the calibration dataset (see Section 4.1 in Kull et al. (2019)). All methods were trained for 1000 epochs with full-batch gradient descent with learning rate 0.001. In addition to Neural Clamping with the focal loss (FL), we also compared Neural Clamping with the cross entropy (CE) loss.

Neural Clamping implementation. The hyperparameters λ and γ in equation (9) are determined by the best parameters minimizing the ECE on the calibration dataset. The choice of γ was already discussed in Section 3.3. The default value of γ is set to 1 because we find it to be stable across models and datasets. Regarding the choice of λ, its purpose is to aid in regularization. Our default approach is to set this term to be 1/10 of the initial loss. The input calibration parameter δ and the output calibration parameter T are optimized using the stochastic gradient descent (SGD) optimizer with learning rate 0.001, batch size 512, and 100 epochs. For initialization, δ uses random initialization and T is set to 1. The detailed algorithmic procedure of Neural Clamping is presented in Appendix A.

Evaluation metrics. We reported 5 evaluation measures on the test sets: Accuracy, Entropy, ECE, AECE, and SCE. All three calibration metrics are defined in Section 2.2 and 15 bins were used. In all experiments, we report the average value and standard deviation over 5 independent runs.

Table 3: Comparison with various calibration methods on ImageNet with different models. The reported results are mean and standard deviation over 5 runs. The best/second-best method is highlighted by blue/green color. On ECE, the relative improvement of Neural Clamping over the best baseline is 11%/6%/13% on ResNet-101/ViT-S/16/MLP-Mixer B/16, respectively.
| ResNet-101 | | | | | |
|----------------------|--------------|----------------|-------------|-------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻³) ↓ |
| Uncalibrated | 75.73 | 0.6608 | 5.88 | 5.88 | 0.3180 |
| Temperature Scaling | 75.73 ±0 | 0.9376 ±0 | 1.88 ±0 | 1.91 ±0 | 0.3117 ±0 |
| TS by Grid Search | 75.73 ±0 | 0.9244 ±0 | 2.02 ±0 | 1.97 ±0 | 0.3108 ±0 |
| Vector Scaling | 75.67 ±0.07 | 1.0463 ±0.0017 | 2.04 ±0.12 | 1.92 ±0.07 | 0.3192 ±0.0009 |
| Matrix Scaling | 51.97 ±0.30 | 0.0593 ±0.0008 | 45.61 ±0.28 | 45.60 ±0.28 | 0.9037 ±0.0052 |
| MS-ODIR | 70.71 ±0.10 | 0.9904 ±0.0016 | 3.29 ±0.06 | 3.28 ±0.06 | 0.3448 ±0.0011 |
| Dir-ODIR | 70.72 ±0.03 | 0.9841 ±0.0007 | 3.47 ±0.05 | 3.47 ±0.05 | 0.3480 ±0.0013 |
| Neural Clamping (CE) | 75.73 ±0.01 | 0.9429 ±0.0240 | 1.89 ±0.13 | 1.88 ±0.11 | 0.3114 ±0.0007 |
| Neural Clamping (FL) | 75.73 ±0.01 | 1.0103 ±0.0245 | 1.68 ±0.04 | 1.71 ±0.03 | 0.3128 ±0.0001 |

| ViT-S/16 | | | | | |
|----------------------|--------------|----------------|-------------|-------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻³) ↓ |
| Uncalibrated | 79.90 | 0.7161 | 1.28 | 1.30 | 0.2808 |
| Temperature Scaling | 79.90 ±0 | 0.7314 ±0 | 1.08 ±0 | 1.09 ±0 | 0.2817 ±0 |
| TS by Grid Search | 79.90 ±0 | 0.7791 ±0 | 0.82 ±0 | 0.80 ±0 | 0.2852 ±0 |
| Vector Scaling | 80.02 ±0.03 | 0.9410 ±0.0014 | 2.62 ±0.02 | 2.69 ±0.03 | 0.2985 ±0.0015 |
| Matrix Scaling | 53.99 ±0.29 | 0.0646 ±0.0010 | 43.36 ±0.30 | 43.36 ±0.29 | 0.8811 ±0.0054 |
| MS-ODIR | 75.94 ±0.09 | 0.9810 ±0.0018 | 0.87 ±0.10 | 0.92 ±0.10 | 0.3163 ±0.0023 |
| Dir-ODIR | 75.93 ±0.09 | 0.9788 ±0.0007 | 0.93 ±0.06 | 0.86 ±0.09 | 0.3149 ±0.0018 |
| Neural Clamping (CE) | 79.98 ±0.01 | 0.7898 ±0.0028 | 0.81 ±0.03 | 0.77 ±0.04 | 0.2801 ±0.0005 |
| Neural Clamping (FL) | 79.97 ±0.01 | 0.7934 ±0.0038 | 0.77 ±0.01 | 0.72 ±0.03 | 0.2804 ±0.0004 |

| MLP-Mixer B/16 | | | | | |
|----------------------|--------------|----------------|-------------|-------------|----------------|
| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻³) ↓ |
| Uncalibrated | 73.94 | 0.6812 | 11.55 | 11.55 | 0.3589 |
| Temperature Scaling | 73.94 ±0 | 1.2735 ±0 | 4.94 ±0 | 4.98 ±0 | 0.3188 ±0 |
| TS by Grid Search | 73.94 ±0 | 1.6243 ±0 | 2.60 ±0 | 2.60 ±0 | 0.3258 ±0 |
| Vector Scaling | 73.24 ±0.06 | 1.1474 ±0.0089 | 6.91 ±0.17 | 6.88 ±0.20 | 0.3321 ±0.0027 |
| Matrix Scaling | 40.96 ±0.31 | 0.1137 ±0.0010 | 54.50 ±0.28 | 54.50 ±0.28 | 1.0979 ±0.0041 |
| MS-ODIR | 73.16 ±0.02 | 1.8049 ±0.0016 | 4.65 ±0.08 | 4.73 ±0.05 | 0.3477 ±0.0018 |
| Dir-ODIR | 73.13 ±0.05 | 1.8083 ±0.0013 | 4.68 ±0.09 | 4.76 ±0.09 | 0.3480 ±0.0018 |
| Neural Clamping (CE) | 74.14 ±0.01 | 1.7952 ±0.0302 | 2.43 ±0.16 | 2.51 ±0.18 | 0.3054 ±0.0020 |
| Neural Clamping (FL) | 74.12 ±0.00 | 1.7673 ±0.0269 | 2.27 ±0.13 | 2.34 ±0.14 | 0.3029 ±0.0018 |

Table 4: Ablation study with ResNet-110 on CIFAR-100. The best result is highlighted by blue color.

| Method | Accuracy (%) | Entropy ↑ | ECE (%) ↓ | AECE (%) ↓ | SCE (×10⁻²) ↓ |
|---------------------------|--------------|--------------|-------------|--------------|-----------------|
| Uncalibrated | 74.15 | 0.4742 | 10.74 | 10.71 | 0.2763 |
| Temperature Scaling | 74.15 | 0.8991 | 1.71 | 1.63 | 0.1711 |
| Temperature Scaling (FL) | 74.15 | 1.0542 | 2.01 | 2.02 | 0.1812 |
| Input Calibration w/ δ* | 74.16 ±0.09 | 0.4775 ±0.03 | 10.62 ±0.10 | 10.61 ±0.10 | 0.2776 ±0.0019 |
| Output Calibration w/ T* | 74.15 ±0 | 0.9648 ±0.46 | 1.18 ±0.05 | 1.35 ±0.01 | 0.1731 ±0.0003 |
| Neural Clamping | 74.16 ±0.09 | 0.9707 ±0.49 | 0.89 ±0.06 | 1.01 ±0.11 | 0.1754 ±0.0015 |
## 4.2 BloodMNIST, CIFAR-100, and ImageNet Results

BloodMNIST. BloodMNIST is an 8-class microscopic peripheral blood cell image recognition task. The calibration results with ResNet-50 are shown in Table 1. Compared to the best existing method, Neural Clamping shows an additional 31%/28% reduction in ECE/AECE.

CIFAR-100. The experimental results on CIFAR-100 are presented in Table 2, which is divided into two sections corresponding to different models: ResNet-110 and Wide-ResNet-40-10. Our method consistently achieves the lowest ECE and either the lowest or second-lowest AECE and SCE when compared to other existing methods. Notably, in the ResNet-110 experiment, Neural Clamping reduced ECE and AECE by 34% and 23%, respectively, compared to the best existing method. It is important to highlight that our method not only reduces the calibration error but also improves accuracy, which sets it apart from existing approaches.

ImageNet-1K. Table 3 presents the experimental results on ImageNet, where the table is divided into three sections containing ResNet-101, ViT-S/16, and MLP-Mixer B/16. Neural Clamping consistently outperforms the compared methods by achieving the lowest ECE, AECE, and either the lowest or second-lowest SCE in all cases, similar to the CIFAR-100 experiments. In particular, in the ResNet-101 experiment, Neural Clamping reduces ECE and AECE by 11% compared to the best existing method. Moreover, our method concurrently improves accuracy while reducing the calibration error for all three models, demonstrating its effectiveness in calibration for various model architectures. Additionally, we provide experimental results with different numbers of bins in Appendix D, which clearly demonstrate the same conclusion about the outstanding calibration performance of Neural Clamping over the baselines. To further compare how our method differs from the baselines, we also visualize the ECE results by plotting reliability diagrams in Appendix E.

## 4.3 Additional Analysis of Neural Clamping

Data-driven vs. random initialization for the input perturbation δ. There are two initialization methods for the input calibration δ in Neural Clamping: data-driven initialization as derived from Theorem 3.2, and random initialization. In scrutinizing these two initialization methods, we found that the data-driven initialization consistently delivered stable calibration results. Figure 3 shows that across all metrics, both initialization methods have similar mean values over 5 runs, while the data-driven initialization has smaller variation and standard deviation. From this experiment, it can be concluded that the data-driven value not only offers a more reliable solution but also slightly improved outcomes. We also used this data-driven initialization to devise a computationally efficient Neural Clamping variant. Under similar runtime constraints, our method achieves superior calibration performance compared to temperature scaling. This implies that theoretically derived input perturbations can attain performance comparable to that of training results. Please see Appendix F for details.

![9_image_0.png](9_image_0.png)

Figure 3: Comparison of random (blue) and data-driven (green) initializations for the input calibration δ in Neural Clamping. The reported results are (a) Entropy, (b) ECE, (c) AECE, and (d) SCE of ResNet-110 on CIFAR-100 over 5 runs. The boxplots graphically show the spread of the numerical data through their quartiles. The data-driven initialization shows better stability (smaller variation) than random initialization.
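To make the ablation that follows concrete, the sketch below shows the three inference modes compared in Table 4, assuming a frozen classifier `f`, a learned universal perturbation `delta_star`, and a learned temperature `T_star` (all variable names are ours and are merely illustrative).

```python
import torch

@torch.no_grad()
def calibrated_probs(f, x, delta=None, T=1.0):
    """Joint input-output calibration at inference time: optionally add the
    learned universal perturbation to the input, then temperature-scale the logits."""
    if delta is not None:        # input calibration (w/ delta*)
        x = x + delta            # one shared additive perturbation for all inputs
    return torch.softmax(f(x) / T, dim=-1)   # output calibration (w/ T*)

# The three calibrated rows of Table 4 then correspond to:
#   input only  : calibrated_probs(f, x, delta_star, T=1.0)
#   output only : calibrated_probs(f, x, None, T_star)
#   joint (Neural Clamping): calibrated_probs(f, x, delta_star, T_star)
```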
**Ablation study with input calibration δ\* and output calibration T\*.** After calibration, Neural Clamping learns δ\* for input calibration and T\* for output calibration. In Table 4 we perform an ablation study to examine the effects of input calibration and output calibration separately, using their jointly trained parameters δ\* and T\*, alongside a "Temperature Scaling (FL)" approach in which T is optimized with the focal loss. For input calibration, we infer on the testing data with only the learned input perturbation δ\*; for output calibration, we test the result with only the learned temperature scaling parameter T\*. One noteworthy finding from this exercise is that while output calibration alone already trims the ECE and AECE materially, a further 25% reduction in ECE and AECE can be achieved when it is paired with input calibration (i.e., Neural Clamping). Input calibration alone is less effective because it does not directly modify the prediction output. In addition, Temperature Scaling (FL) actually performed worse than both the original Temperature Scaling method and "Output Calibration w/ T\*" across the various calibration metrics. This finding suggests that the improvements achieved by our method, Neural Clamping (joint input-output calibration), are not simply due to the use of temperature scaling with the focal loss. This ablation study corroborates the necessity and advantage of joint input-output calibration and the additional benefits gained only from the joint calibration framework.

## 5 Conclusion

In this paper, we present a new post-processing calibration method called Neural Clamping, which offers novel insights into joint input-output calibration and significantly improves calibration performance. We also develop a theoretical analysis to justify the advantage of Neural Clamping. Our empirical results on several datasets and models show that Neural Clamping outperforms state-of-the-art post-processing calibration methods. We believe our method delivers a practical tool that can contribute to neural-network-based technology and applications requiring accurate calibration.

## Impact Statements

We see no ethical or immediate negative societal consequence of our work, and it holds the potential for positive social impacts. By improving the accuracy of machine learning models' prediction probabilities, our research can benefit various domains.

## Acknowledgments

I would like to express my gratitude to Yu-Chieh Cheng for his invaluable assistance in proofreading and editing.

## References

Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, and Masashi Sugiyama. On focal loss for class-posterior probability estimation: A theoretical perspective. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5202–5211, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun.
Dermatologist-level classification of skin cancer with deep neural networks. *Nature*, 542(7639):115–118, 2017.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning*, pp. 1321–1330. PMLR, 2017.

Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. *arXiv preprint arXiv:2006.12800*, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 4700–4708, 2017.

Xiaoqian Jiang, Melanie Osl, Jihoon Kim, and Lucila Ohno-Machado. Calibrating predictive model estimates to support personalized medicine. *Journal of the American Medical Informatics Association*, 19(2):263–274, 2012.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration. *Advances in Neural Information Processing Systems*, 32, 2019.

Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. *Advances in Neural Information Processing Systems*, 32, 2019.

Gongbo Liang, Yu Zhang, Xiaoqin Wang, and Nathan Jacobs. Improved trainable calibration method for neural networks on medical imaging classification. *arXiv preprint arXiv:2009.04057*, 2020.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2980–2988, 2017.

Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. *Advances in Neural Information Processing Systems*, 34, 2021.

Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, and Puneet Dokania. Calibrating deep neural networks using focal loss. *Advances in Neural Information Processing Systems*, 33:15288–15299, 2020.

Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? *Advances in Neural Information Processing Systems*, 32, 2019.

Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In *Twenty-Ninth AAAI Conference on Artificial Intelligence*, 2015.

Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In *CVPR Workshops*, volume 2, 2019.

John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in Large Margin Classifiers*, 10(3):61–74, 1999.

Yao Qin, Xuezhi Wang, Alex Beutel, and Ed Chi. Improving calibration through the relationship with adversarial robustness. *Advances in Neural Information Processing Systems*, 34:14358–14369, 2021.

Sina Shafaei, Stefan Kugele, Mohd Hafeez Osman, and Alois Knoll. Uncertainty in machine learning: A safety perspective on autonomous driving.
In *International Conference on Computer Safety, Reliability, and Security*, pp. 458–464. Springer, 2018.

Linwei Tao, Minjing Dong, and Chang Xu. Dual focal loss for calibration. In *International Conference on Machine Learning*, pp. 33833–33849. PMLR, 2023.

Junjiao Tian, Dylan Yung, Yen-Chang Hsu, and Zsolt Kira. A geometric perspective towards neural calibration via sensitivity decomposition. *Advances in Neural Information Processing Systems*, 34, 2021.

Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. *Advances in Neural Information Processing Systems*, 34:24261–24272, 2021.

Miao Xiong, Ailin Deng, Pang Wei W Koh, Jiaying Wu, Shen Li, Jianqing Xu, and Bryan Hooi. Proximity-informed calibration for deep neural networks. *Advances in Neural Information Processing Systems*, 36:68511–68538, 2023.

Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, and Bingbing Ni. Medmnist v2 - a large-scale lightweight benchmark for 2d and 3d biomedical image classification. *Scientific Data*, 10(1):41, 2023.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016.
Review 1:
Summary:
This paper presents a new calibration method that combines input perturbation with temperature scaling. The prior method only considers using temperature scaling to modify the output logits, while this work proposes using a learnable universal input perturbation to improve calibration without changing the model parameters. It has theoretical results about the optimal perturbation value and the improved entropy (a calibration-related metric). It has empirical results on improving calibration metrics such as ECE for ResNets and Transformers on the CIFAR and ImageNet datasets.

Strengths and Weaknesses:
### Strengths
1. The empirical evaluation is comprehensive: it studies three datasets and four architectures, and compares with about 7 baselines. The effectiveness of the proposed method as a calibration method seems to be pronounced.
2. The paper is well-motivated; it focuses on post-processing, which improves calibration without changing the model parameters, similar to prompt engineering in LLMs.
3. The paper is overall well-written, with preliminary definitions carefully stated.

### Weakness
1. The novelty of the proposed method is a concern. The improvement in calibration through input perturbation has been studied by [Qin et al](https://arxiv.org/abs/2006.16375), and temperature scaling is a traditional calibration approach. The technical novelty of the paper lies in making both components trainable at the post-training stage without changing the model parameters.
2. The results in Table 4 suggest that input calibration may not be as necessary as output calibration, which raises questions about the authors' claim that "This ablation study corroborates the necessity and advantage of joint input-output calibration." The ablation study of Section 4.3, Table 4 shows that the ECE is reduced by 9.54 with output calibration (temperature scaling), but is reduced only by 0.1 - 0.3 with input calibration (perturbation).

Requested Changes:
I would like to ask for some clarification.
1. If I understand correctly, the $\widetilde{\boldsymbol{\delta}}$ is derived from Theorem 3.2 and its closed form is $\operatorname{sign}(\boldsymbol{g}) \odot \boldsymbol{\eta}$; the $\widetilde{\boldsymbol{\delta}}$ is used as the "data-driven initialization", and $\boldsymbol{\delta}$ is then trained. Could the authors provide additional details, such as an algorithm diagram or text description, to explain how the "data-driven initialization" is implemented?
2. What is the rationale for using the closed-form optimal value $\widetilde{\boldsymbol{\delta}}$ that maximizes entropy as an initialization? If the theoretical result of Theorem 3.2 is correct, then one might suppose that the derived $\widetilde{\boldsymbol{\delta}}$ can be directly used at test time (since it maximizes entropy). How does that compare to $\boldsymbol{\delta}$ after training, or to randomly selected values in the evaluations of Figure 3?
3. Some of the theoretical results are confusing. Were Lemma 3.1 and its proof a replicate of Appendix S.2 of [Guo et al](https://arxiv.org/pdf/1706.04599) with $\mathbf{x}$ changed to $\mathbf{x} + \boldsymbol{\delta}$? I am confused about what the new insights/differences are between the presented result and the one in prior work.

Broader Impact Concerns:
The broader impact has been discussed in the impact section, and there are no concerns regarding that.
==================================================
Review 2:
Summary:
The paper presents neural clamping, a generalization of temperature scaling which adds a fixed, trainable, additive perturbation to the input of the model at hand, with both the perturbation and temperature fitted with the focal loss instead of basic CE. The method is evaluated on various datasets and model architectures and appears to improve ECE, AECE and SCE, sometimes also slightly improving accuracy. A theoretical analysis in terms of entropy maximisation and a data-derived init for the perturbation are also presented.

Strengths and Weaknesses:
\+ interesting idea to modify both input and output, likewise to use the focal loss to tune
\+ method appears to perform well across benchmarks
\- experiments appear to be done on single pretrained models? would require multiple seeds *on models* to be meaningful
\- having the same architectures across datasets would also make analysis better
\- also, statistical significance tests should be done since the methods are sometimes quite close
\- I have the suspicion that the benefit mainly derives from the focal loss and not from the input perturbation => should be done in an ablation
\- slight test leakage required for the accuracy to improve (effectively, we are learning a slightly adjusted initial centering operation)
\- nitpick: English is weird in some places ("Uncalibration" instead of "Uncalibrated" in tables, "methods predominently shed", "confidence made by model prediction")

Requested Changes:
# Critical
1. run an ablation where you fit the temperature only with the focal loss - currently either the temperature alone is fitted with CE or temperature+input are fitted with the focal loss, but then only one is used
2. pick at least one dataset (I'd propose cifar100) and train each architecture studied on it (multiple seeds for training), then perform the above ablation and your other tests and then perform a statistical significance test (e.g. Welch test) or another rigorous assessment across independently trained models (I suggest also using multiple independent calibration datasets on each model)

without these, I can't call the method rigorously supported

## Nice to have
1. above on all datasets => but I understand computational expense might be limiting
2. if you can keep snapshots across training, you can try the calibration on various stages of undertraining to assess the importance of model capacity and fittedness

Broader Impact Concerns:
No need for broader impact assessment.
==================================================
Review 3:
Summary:
The paper introduces Neural Clamping, a novel post-processing calibration method for deep neural networks. This method enhances model calibration by jointly applying a learnable universal input perturbation and an output temperature scaling parameter to a pre-trained classifier. Theoretical proofs are provided to demonstrate the superiority of Neural Clamping over traditional temperature scaling. Empirical evaluations on datasets like BloodMNIST, CIFAR-100, and ImageNet reveal that Neural Clamping significantly improves calibration performance across various deep neural network architectures, achieving up to 34% better expected calibration error compared to state-of-the-art methods.

Strengths and Weaknesses:
**Strengths**
The introduction of noise perturbation is a good try. Very surprised to see that it works together with temperature scaling on the calibration task.
---
**Weaknesses**
1. The motivation is not clear.
Why would adding noise together with temperature scaling benefit calibration? It would be better to add more details or toy examples illustrating the motivation in the introduction.
2. What is the calibration set mentioned in the introduction? Is it a validation set? This is not clear in the introduction.
3. Does it have to be the focal loss that trains the noise and temperature parameters? This is not clear in the introduction.
4. The related works do not mention recent in-processing calibration methods such as [1].
5. Lack of analysis and comparison with recent post-processing methods such as [2].
6. Lack of a constraint on the perturbation; this should be mentioned earlier than Eq. 7. Is it like adversarial samples?
7. It is not clear how you choose gamma in 3.3. Do you split off another set for choosing gamma, or use the same "calibration set"?
8. The introduction of noise perturbation is a good try, and it is surprising that it works together with temperature scaling on the calibration task. Can the authors give more analysis, in the ablation study, of why noise alone does not work well?

[1] Dual Focal Loss for Calibration
[2] Proximity-Informed Calibration for Deep Neural Networks

Requested Changes:
1. Add more recent related works such as [1] and [2]
2. Compare to recently proposed post-processing works
3. Answer the questions in the section above

[1] Dual Focal Loss for Calibration
[2] Proximity-Informed Calibration for Deep Neural Networks

Broader Impact Concerns:
The paper introduces a new calibration framework, which is well motivated.
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment:
While there were some mixed impressions concerning the novelty of the method and the evaluation methods, these concerns were clarified during the discussion phase and changes were made in the manuscript. After the rebuttal, all reviewers recommend the acceptance of the paper at TMLR upon the following minor revisions:
- Reviewer *7Ruf* noted that "*the t-statistic and p value are possibly slightly misleading on the current stage, as they assume that the 25 samples are IID which one could argue is not likely (as they are derived from the same models). I think all that is required to fix this is to tweak the language in the camera ready and/or make an argument for why the authors chose this particular setup*"
- Reviewer *pRL2* noted that: "*based on the rebuttal, it is encouraging to know that the theoretically derived input perturbation can achieve performance similar to that of training results. I suggest highlighting this in the paper. Lastly, I still don't think that Lemma 3.1 and its proof made a significant contribution.*"
Therefore, I encourage the authors to implement these changes and I am happy to recommend the paper for publication at TMLR.
==================================================
# FedDAG: Federated DAG Structure Learning

Erdun Gao∗ *erdun.gao@student.unimelb.edu.au* School of Mathematics and Statistics, The University of Melbourne

Junjia Chen *cjj19970505@stu.xjtu.edu.cn* Faculty of Electronic and Information Engineering, Xi'an Jiaotong University

Li Shen *shenli100@jd.com* JD Explore Academy

Tongliang Liu *tongliang.liu@sydney.edu.au* TML Lab, Sydney AI Centre, The University of Sydney; Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence

Mingming Gong mingming.gong@unimelb.edu.au School of Mathematics and Statistics, The University of Melbourne

Howard Bondell howard.bondell@unimelb.edu.au School of Mathematics and Statistics, The University of Melbourne

Reviewed on OpenReview: *https://openreview.net/forum?id=MzWgBjZ6Le*

## Abstract

To date, most directed acyclic graph (DAG) structure learning approaches require data to be stored in a central server. However, out of consideration for privacy protection, data owners gradually refuse to share their personalized raw data to avoid private information leakage, making this task more troublesome by cutting off the first step. Thus, a puzzle arises: *how do we discover the underlying DAG structure from decentralized data?* In this paper, focusing on the additive noise models (ANMs) assumption of data generation, we take the first step in developing a gradient-based learning framework named FedDAG, which can learn the DAG structure without directly touching the local data and can also naturally handle data heterogeneity. Our method benefits from a two-level structure of each local model. The first level learns the edges and directions of the graph and communicates with the server to obtain model information from other clients during the learning procedure, while the second level approximates the mechanisms among variables and is updated locally on each client's own data to accommodate the data heterogeneity. Moreover, FedDAG formulates the overall learning task as a continuous optimization problem by taking advantage of an equality acyclicity constraint, which can be solved by gradient descent methods to boost searching efficiency. Extensive experiments on both synthetic and real-world datasets verify the efficacy of the proposed method.

## 1 Introduction

Bayesian Networks (BNs) have become prevalent over the last few decades by leveraging graph theory and probability theory to model the probabilistic relationships among a set of random variables, which can potentially provide a mechanism for evidential reasoning (Pearl, 1985). The success of BNs has contributed to a flurry of downstream real-world applications in econometrics (Heckman, 2008), epidemiology (Greenland et al., 1999), biological sciences (Imbens & Rubin, 2015), and social sciences (Marini & Singer, 1988). However, learning the graph structure of a BN from purely observational data remains a significant challenge due to its NP-hardness, and it is therefore a cutting-edge research topic that has drawn considerable attention in both academic and industrial fields (Koller & Friedman, 2009; Jensen & Nielsen, 2007; Glymour et al., 2019; Zheng et al., 2018; Zheng, 2020).

∗Work was done during an internship at JD Explore Academy.

![1_image_0.png](1_image_0.png)

Figure 1: (a) Visualization of heterogeneous data. Different colors represent data from different sources, while each sub-figure includes the distribution of one fixed dimension of data from all clients.
(b) Normalized structural Hamming distances (SHDs) (↓) of three methods, where MCSL (Sep) (Ng et al., 2022b) separately trains one model on each client's local data, while MCSL (All) trains one model on all data, which is forbidden in FL.

Each BN is defined by a directed acyclic graph (DAG) and a set of parameters that depict the direction, strength, and shape of the mechanisms between variables (Kitson et al., 2021). Over the recent decades, a number of methods (Spirtes et al., 2001; Chickering, 2002; Zheng, 2020) have been proposed for discovering the DAG structure encoded among the concerned events from observational data. In practice, however, the finite-sample problem is a primary cause of the performance degradation of DAG structure learning methods. A direct and standard pipeline for alleviating this issue is to (1) collect data from various sources and then (2) run the structure learning algorithm on all of the collected data. However, owing to the issue of data privacy, data owners increasingly prefer not to share their personalized data¹ with others (Kairouz et al., 2021). Naturally, a new predicament has arisen: *how do we discover the underlying DAG structure from decentralized data?*

In statistical learning problems such as regression and classification, federated learning (FL) has been proposed to learn from locally stored data (McMahan et al., 2017). Inspired by the developments in FL, we aim to develop a federated DAG structure learning framework that enables learning the graph structure from decentralized data. Compared to traditional FL methods in statistical learning, federated DAG learning, a **structural learning** task, has the following two main **differences**:

- **Learning objective difference.** Most previous FL research focuses on learning *an estimator* of the conditional distribution P(Y|X) in supervised learning tasks, e.g., image classification (McMahan et al., 2017; Li et al., 2020), sequence tagging (Lin et al., 2022), and feature prediction (Kairouz et al., 2021). However, DAG structure learning, an unsupervised learning task (Glymour et al., 2019), tries to *find the underlying graph structure* among the concerned variables and to estimate the relationship mechanisms that fit the joint distribution of the observations.
- **Data heterogeneity difference.** Data heterogeneity in FL is mainly assumed to be caused by specific distribution shifts such as label shift (a shift of P(Y)) (Lipton et al., 2018) or covariate shift (a shift of P(X)) (Reisizadeh et al., 2020), while federated DAG learning handles a generative model, which allows data heterogeneity resulting from a joint distribution shift of all variables (Figure 1(a)). This brings more challenges to model design than the standard FL paradigm.

¹Notice that in this paper, we restrict our scope by defining privacy leakage as the sharing of users' raw data.

In this paper, we present FedDAG, a gradient-based framework for learning the underlying graph structure from decentralized data, including the case of heterogeneous data caused by mechanism changes. (1) To alleviate the data leakage problem, FedDAG inherits the merits of FL: it deploys a local model to each client separately and collaboratively learns a joint model at the server end. Instead of sharing raw data, FedDAG exchanges model information among clients and the server to achieve collaboration.
(2) Taking into consideration the first main difference between FedDAG and FL, a two-level structure, consisting of a graph structure learning (GSL) part and a mechanisms approximating (MA) part, is adopted as the local model. (3) Benefiting from this separated structure, the second difference between FL and FedDAG can naturally be handled by sharing only the GSL parts of the clients during the learning procedure and locally updating the MA parts to cope with data heterogeneity. Moreover, we provide the structure identifiability conditions for learning the DAG structure from decentralized data in Appendix A. Our contributions are summarized as follows:

- We introduce the federated DAG structure learning task under the assumption that the underlying graph structure among different datasets remains invariant while the mechanisms vary. We also show the structure identifiability conditions for DAG learning from decentralized data.
- We propose FedDAG, which separately learns the mechanisms on local data and jointly learns the DAG structure, handling data heterogeneity elegantly. Meanwhile, since no raw data are shared, only the parameters of the GSL parts of the models, the requirement of privacy protection is guaranteed and the communication pressure is relatively low.
- We evaluate our proposed method on data following structural equation models with additive noise under a variety of experimental settings, including simulations and real datasets, against recent state-of-the-art algorithms, showing its superior performance and the ability to use one model in all settings.

## 1.1 Potential Applications of FedDAG

Compared to traditional DAG learning methods, the homogeneous data (no distribution shift) setup of our method brings one more assumption: local data cannot be directly collected owing to the consideration of *privacy leakage*. We then further extend our model to heterogeneous data, where the mechanisms among variables may also vary across local datasets. Therefore, our method can be directly applied to those applications of DAG structure learning where privacy is also very important.

The first example comes from medical science. Exploring the underlying relations among concerned events from healthcare data can help to understand disease mechanisms and causes (Yang et al., 2013). However, in real medical scenarios, the clinical data of patients are extremely sensitive, related to personal privacy, and subject to stringent data protection regulations, such as the HIPAA regulations (Annas, 2003). Each hospital may own only finite clinical data for some rare diseases, which is not enough for DAG learning. How can hospitals cooperate to analyze the pathology while avoiding sharing private information (raw diagnostic data)? Naturally, this challenge can be addressed by our method. Depending on how each hospital collects data, e.g., medical devices and survey design, the data in each hospital may not share the same distribution.

The second example comes from recommendation systems (RS) (Wang et al., 2020b; Yang et al., 2020). Leveraging BN models in RS is becoming prevalent for performing robust recommendations by de-confounding spurious relations. As users pay more attention to privacy and governments enact strict regulations like the General Data Protection Regulation (GDPR) (Voigt & Von dem Bussche, 2017), it becomes increasingly difficult to collect raw personal data on a server.
Accordingly, we think that RS can also benefit from FedDAG learning.

## 2 Preliminaries

Additive Noise Models (ANMs). We consider a specific structural equation model (SEM), which is defined as a tuple M = ⟨X, F⟩, where X = {X1, X2, · · · , Xd} is a set of concerned variables and F = {f1, f2, · · · , fd} is a set of functions. Each fi, called the relationship mechanism of Xi, maps ϵi ∪ PAi to Xi, i.e., Xi = fi(PAi, ϵi), where PAi is the set of all direct parents of Xi and ϵi is the random noise. M can be leveraged to describe how nature assigns values to variables of interest (Pearl et al., 2016). In this paper, we focus on a commonly used model named ANMs, which assume that

$$X_{i}=f_{i}(\mathbf{PA}_{i})+\epsilon_{i},\quad i=1,2,\cdots,d,\tag{1}$$

where ϵi is independent of the variables in PAi and mutually independent of any ϵj for i ̸= j.

Bayesian Networks (BNs). Let X = (X1, X2, · · · , Xd) be a vector that includes all variables in X with the index set V := {1, 2, · · · , d}, and let P(X) with probability density function p(X) be a marginal distribution induced from M. A DAG G = (V, E) consists of a node set V and an edge set E ⊆ V². Every SEM M can be associated with a DAG GM, in which each node i corresponds to the variable Xi and directed edges point from PAi to Xi² for i ∈ [d]³. A BN is defined as a pair ⟨P(X), GM⟩. Then GM is called the graph structure associated with M, and P(X) is Markovian with respect to GM. Throughout the main text, we assume that there is no latent variable⁴ (Spirtes et al., 2001), and then p(X) can be factorized as

$$p(X)=\prod_{i=1}^{d}p(X_{i}|X_{pa_{i}})\tag{2}$$

according to GM (Lauritzen, 1996), where X_{pa_i} is the parental vector that includes all variables in PAi.

Characterizations of Acyclicity. A DAG G with d nodes can be represented by a binary adjacency matrix B = [B:,1 | B:,2 | · · · | B:,d] with B:,i ∈ {0, 1}^d for ∀i ∈ [d]. NOTEARS (Zheng et al., 2018) formulates a sufficient and necessary condition for B to represent a DAG as an equation:

$$\mathrm{Tr}[e^{\mathbf{B}}]-d=0,\tag{3}$$

where Tr[·] denotes the trace of a given matrix and e^(·) is the matrix exponential. However, NOTEARS is only designed to solve linear Gaussian models, which assume that all relationship mechanisms are linear; hence the DAG structure and relationship mechanisms can be modeled together by a single weighted matrix. To extend NOTEARS to nonlinear cases, MCSL (Ng et al., 2022b) proposes to use a mask M, parameterized by a continuous proxy matrix U, to approximate the adjacency matrix B. To enforce the entries of M to approximate the binary form, i.e., 0 or 1, a two-dimensional version of the Gumbel-Softmax (Jang et al., 2017) approach named Gumbel-Sigmoid is designed to reparameterize U and to ensure the differentiability of the model. Then, M can be obtained element-wise by

$$M_{ij}=\frac{1}{1+\exp(-\log(\mathbf{U}_{ij}+\mathrm{Gumb}_{ij})/\tau)},\tag{4}$$

where τ is the temperature, Gumb_{ij} = g¹_{ij} − g⁰_{ij}, and g¹_{ij} and g⁰_{ij} are two independent samples from Gumbel(0, 1). For simplicity but equivalently, g¹_{ij} and g⁰_{ij} can also be sampled as −log(−log(a)) with a ∼ Uniform(0, 1); see Appendix D in (Ng et al., 2022b). MCSL names Eq. (4) the Gumbel-Sigmoid w.r.t. U and temperature τ, written as gτ(U).
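For illustration, a minimal NumPy sketch of a Gumbel-Sigmoid-style relaxation is given below. We implement the standard two-class Gumbel-Softmax in its logistic form, which pushes the mask entries toward {0, 1} as the temperature decreases; the exact parameterization of U in MCSL may differ from this sketch, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_sigmoid(U, tau=0.2):
    """Relaxed binary mask from the continuous proxy U (d x d).

    Each entry is a 'soft' Bernoulli sample that approaches {0, 1}
    as the temperature tau goes to zero."""
    a0, a1 = rng.uniform(size=U.shape), rng.uniform(size=U.shape)
    gumb = -np.log(-np.log(a1)) + np.log(-np.log(a0))   # Gumb = g1 - g0
    return 1.0 / (1.0 + np.exp(-(U + gumb) / tau))

M = gumbel_sigmoid(rng.normal(size=(5, 5)))
print(M.round(2))  # most entries are already close to 0 or 1
```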
Then, the acyclicity constraint can be reformulated as

$$\mathrm{Tr}[e^{(g_{\tau}(\mathbf{U}))}]-d=0.\tag{5}$$

## 3 Problem Definition

Here, we first describe the property of decentralized data and the data distribution shift among different clients when data heterogeneity exists (Huang et al., 2020b; Mooij et al., 2020; Zhang et al., 2020). Then, we define the problem considered in this paper, federated DAG structure learning.

Decentralized data and probability distribution set. Let C = {c1, c2, · · · , cm} be the client set, which includes m different clients, and let s be the only server. The data D^{ck} ∈ R^{n_{ck}×d}, in which each observation D^{ck}_i for ∀i ∈ [n_{ck}] is independently sampled from its corresponding probability distribution P^{ck}(X), represent the personalized data owned by client ck, where n_{ck} is the number of observations in D^{ck}. The dataset D = {D^{c1}, D^{c2}, · · · , D^{cm}} is called a decentralized dataset, and P^C(X) = {P^{c1}(X), P^{c2}(X), · · · , P^{cm}(X)} is defined as the decentralized probability distribution set. If P^{ck1}(X) = P^{ck2}(X) for ∀k1, k2 ∈ [m], then D is defined as a homogeneous decentralized dataset throughout this paper. A heterogeneous decentralized dataset is defined by assuming that there exist at least two clients, e.g., ck1 and ck2, whose local data are sampled from different distributions, i.e., P^{ck1}(X) ̸= P^{ck2}(X).

²In the intact graph structure of ANMs, we just fix directed edges from ϵi to Xi and assume the distribution of ϵi. Therefore, in this paper, G is only defined over the endogenous variables. ³For simplicity, we use [d] = {1, 2, · · · , d} to represent the set of all integers from 1 to d. ⁴This assumption can be relaxed to some restricted cases with latent variables. See Appendix C.3 for details.

Assumption 3.1. **(Invariant DAG)** For ∀ck, P^{ck}(X) ∈ P^C(X) admits the product factorization of Eq. (2) relative to the same DAG G.

Remark 3.2. If P^C(X) satisfies Assumption 3.1, then each P^{ck}(X) ∈ P^C(X) is Markovian relative to G.

According to the general definition of *mechanisms change* in (Tian & Pearl, 2001), interventions can be seen as a special case of distribution shifts, where the external influence involves fixing certain variables to some predetermined values. In general, the external influence may be milder and merely change the conditional probability of certain variables given their causes. In this paper, we restrict our scope by assuming that the distribution shifts across P^{ck}(X) come from changes of the mechanisms in F or distribution shifts of the exogenous variables in E (see Appendix F.1 for a detailed discussion). More justifications for Assumption 3.1 are given in Appendix F.2.

Assumption 3.3. For ∀ck1, ck2, if P^{ck1}(X) ̸= P^{ck2}(X), the distribution shifts are caused by (1) ∃i ∈ [d], P^{ck1}(Xi|X_{pai}) ̸= P^{ck2}(Xi|X_{pai}), i.e., f^{ck1}_i ̸= f^{ck2}_i; or (2) ∃i ∈ [d], P^{ck1}(ϵi) ̸= P^{ck2}(ϵi).

Federated DAG Structure Learning. Given the decentralized dataset D consisting of data from m clients whose corresponding P^C(X) satisfies Assumptions 3.1 and 3.3, federated DAG structure learning aims to identify the underlying DAG G from D.
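The acyclicity characterization used in Eq. (5) is straightforward to evaluate numerically; a minimal sketch using SciPy's matrix exponential is given below (function and variable names are our own, and in practice the expectation over Gumbel samples of the mask would be estimated by Monte Carlo).

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(M):
    """h(M) = Tr[exp(M)] - d, as in Eq. (5): zero iff the (soft) adjacency
    matrix M contains no directed cycle, strictly positive otherwise."""
    return float(np.trace(expm(M)) - M.shape[0])

B_dag = np.array([[0., 1.], [0., 0.]])   # only the edge 0 -> 1
B_cyc = np.array([[0., 1.], [1., 0.]])   # 0 -> 1 -> 0 is a cycle
print(acyclicity(B_dag), acyclicity(B_cyc) > 0)   # ~0.0 True
```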
## 4 Methodology

To solve the federated DAG structure learning problem, we formulate a continuous score-based method named FedDAG to learn the DAG structure from decentralized data. Firstly, we define an objective function that guides all models from different clients to federally learn the underlying DAG structure G (or adjacency matrix B) and, at the same time, to learn personalized mechanisms for each client. As shown in Figure 2, for each client ck, the local model consists of a graph structure learning part and a mechanisms approximation part. The GSL part is parameterized by a matrix U^{ck} ∈ R^{d×d}, which will finally be the same for all clients⁵. To make every entry of U^{ck} approximate the binary entries of the adjacency matrix, the Gumbel-Sigmoid method (Jang et al., 2017; Ng et al., 2022b), denoted gτ(U^{ck}), is further leveraged to transform U^{ck} into a differentiable approximation of the adjacency matrix. The mechanisms approximation parts f^{ck}_1, f^{ck}_2, · · · , f^{ck}_d are parameterized by d sub-networks, each of which has d inputs and one output. During the learning procedure, the GSL parts (specifically U^{ck}) of participating clients are shared with the server s. Then, the processed information is broadcast to each client for self-updating its matrix. The details of our method are presented in the following subsections.

⁵Please notice that the GSL parts of different clients may not be the same during the training procedure, so we index them.

![5_image_0.png](5_image_0.png)

Figure 2: An overview of FedDAG. Each solid-line box includes the local model for each client. For client ck, the GSL part includes a continuous proxy U^{ck} and gτ(·), the Gumbel-Sigmoid function, which maps U^{ck} to approximate the binary adjacency matrix. To approximate the mechanisms, the MA part uses Φ^{ck}, including d neural networks. X^{ck} represents the observations on ck and X̂^{ck} is the predicted data. X^{ck} first goes through the GSL part to select the parental variables and then through the MA part to get X̂^{ck}. The server coordinates the FL procedure by leveraging U among clients.

## 4.1 The Overall Learning Objective

Now we present the overall learning objective of FedDAG as the following optimization problem:

$$\begin{array}{rl}\underset{\Phi,\mathbf{U}}{\operatorname{arg\,max}}&\sum_{k=1}^{m}\mathcal{S}^{c_{k}}(\mathcal{D}^{c_{k}},\Phi^{c_{k}},\mathbf{U})\\ \text{subject to}&g_{\tau}(\mathbf{U})\in\mathbf{DAGs}\;\Leftrightarrow\;h(\mathbf{U})=\mathrm{Tr}[e^{(g_{\tau}(\mathbf{U}))}]-d=0,\end{array}\tag{6}$$

where Φ^{ck} := {Φ^{ck}_1, Φ^{ck}_2, · · · , Φ^{ck}_d} represents the MA part of the model on ck, and S^{ck}(·) is the scoring function evaluating the fitness between the local model of client ck and the observations D^{ck}. For score-based DAG structure learning, selecting a proper score function, such as the BIC score (Schwarz, 1978) or a generalized score function (Huang et al., 2018), or equivalently taking the likelihood of P(X) with a penalty function on the model parameters (Zheng et al., 2018; Ng et al., 2022b; Zheng et al., 2020; Lachapelle et al., 2020), can guarantee identification of the underlying ground-truth graph structure G, because G is supposed to attain the maximal score in Eq. (6). Throughout all experiments in this paper, we assume the noise types are Gaussian with equal variance for each local distribution.
The overall score function utilized in this paper is as follows:

$$\mathcal{S}^{c_{k}}(\mathcal{D}^{c_{k}},\Phi^{c_{k}},\mathbf{U}^{c_{k}})=-\frac{1}{2n_{c_{k}}}\sum_{i=1}^{n_{c_{k}}}\sum_{j=1}^{d}\|\mathcal{D}_{ij}^{c_{k}}-\Phi_{j}^{c_{k}}(g_{\tau}(\mathbf{U}_{:,j}^{c_{k}})\circ\mathcal{D}_{i,:}^{c_{k}})\|_{2}^{2}-\lambda\|g_{\tau}(\mathbf{U}^{c_{k}})\|_{1}.\tag{7}$$

In our score function, we take the negative least-squares loss and a sparsity term, which corresponds to the model complexity penalty in the BIC score (Schwarz, 1978).⁶ However, the global minimum is hard to reach with gradient descent methods due to the non-convexity of h(U). More discussion of the optimization results can be found in Appendix C. In this paper, instead of directly taking the likelihood of P(X), we leverage the well-known results on density transformation to model the distribution of P(E), i.e., maximizing the likelihood P^{ck}(E|F^{ck}, G) for ∀ck ∈ C (Mooij et al., 2011). According to Eq. (1), we have ϵi = Xi − fi(PAi); that is to say, modeling P(E) can be achieved by an auto-regressive model. To get ϵi, the first step is to select the parental set PAi for Xi, which can be realized by B[:, i] ◦ X, where ◦ denotes the element-wise product. In our paper, for client ck, we predict the noise by ϵi = Xi − Φi(gτ(U)[:, i] ◦ X), where gτ(U) approximates B and Φi(·) is parameterized by a neural network to approximate fi. The specific formulation of S^{ck} depends on the assumption on the noise distributions.

⁶The consistency of the BIC score for learning graphs on ANMs is discussed in Appendix C.5.

## Algorithm 1 FedDAG

1: **Input:** D, C, Parameter-list = {αinit, ρinit, htol, itmax, ρmax, β, γ, r}
2: **Output:** E[gτ(Ut)], Φt
3: \#Parameter Initializing
4: t ← 1, αt ← αinit, ρt ← ρinit
5: **while** t ≤ itmax and h(Ut) ≥ htol and ρ ≤ ρmax **do**
6: \#Sub-problem Solving
7: Ut+1, Φt+1 ← SPS(D, C, αt, ρt, itin, itfl, r)
8: \#Coefficients Updating
9: αt+1 ← αt + ρt E[h(Ut+1)], t ← t + 1
10: **if** E[h(Ut+1)] > γ E[h(Ut)] **then**
11: ρt+1 = βρt
12: **else**
13: ρt+1 = ρt
14: **end if**
15: **end while**

## 4.2 Federated DAG Structure Learning

As suggested in NOTEARS (Zheng et al., 2018), the hard-constrained optimization problem in Eq. (6) can be addressed by an Augmented Lagrangian Method (ALM) to obtain an approximate solution. Similar to penalty methods, ALM transforms a constrained optimization problem into a series of unconstrained sub-problems and adds a penalty term to the objective function. ALM also introduces a Lagrangian multiplier term to avoid ill-conditioning by preventing the coefficient of the penalty term from growing too large. To solve Eq. (6), the sub-problem can be written as

$$\operatorname*{arg\,max}_{\Phi,\mathbf{U}}\sum_{k=1}^{m}\mathcal{S}^{c_{k}}\left(\mathcal{D}^{c_{k}},\Phi^{c_{k}},g_{\tau}(\mathbf{U})\right)-\alpha_{t}h(\mathbf{U})-\frac{\rho_{t}}{2}h(\mathbf{U})^{2},\tag{8}$$

where αt and ρt are the Lagrangian multiplier and penalty parameter of the t-th sub-problem, respectively. These parameters are updated after each sub-problem is solved. Since neural networks are adopted to fit the mechanisms in our work, there is no closed-form solution for Eq. (8); therefore, we solve it approximately via ADAM (Kingma & Ba, 2015). The method is described in Algorithms 1 and 2, and in Algorithm 1 we share the same coefficient-updating strategy as (Ng et al., 2022b). Each sub-problem of the form of Eq. (8) is solved mainly by distributing the computation across all local clients.
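Schematically, the outer loop of Algorithm 1 can be summarized by the following sketch, where `solve_subproblem` stands in for the federated inner solver of Algorithm 2 and `h_of` for the (expected) acyclicity violation; all names are placeholders rather than part of any released code.

```python
def alm_outer_loop(solve_subproblem, h_of, alpha=0.0, rho=1.0, beta=10.0,
                   gamma=0.25, rho_max=1e16, h_tol=1e-8, it_max=25):
    """ALM outer loop (schematic): alternate between solving the soft-constrained
    sub-problem of Eq. (8) and tightening the coefficients (alpha, rho)."""
    U, h_prev = None, float("inf")
    for _ in range(it_max):
        U = solve_subproblem(alpha, rho)     # Algorithm 2: federated inner solver
        h_new = h_of(U)                      # acyclicity violation E[h(U)]
        alpha += rho * h_new                 # Lagrangian multiplier update (line 9)
        if h_new > gamma * h_prev:           # violation shrinking too slowly
            rho *= beta                      # strengthen the penalty (lines 10-14)
        h_prev = h_new
        if h_new < h_tol or rho > rho_max:   # stopping rule (line 5)
            break
    return U
```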
Since data is prevented from being shared among clients and the server, each client owns its personalized model, which is trained only on its personalized data. The server communicates with clients by exchanging model parameter information and coordinates the joint learning task. To achieve this, our method alternately updates the server and clients in each communication round.

Client Update. For the model of each client ck, there are two main parts: the GSL part parameterized by U^{ck} and the MA part parameterized by Φ^{ck}. Essentially, the joint objective in Eq. (8) guides the learning process. In the self-updating procedure described in Algorithm 2, the clients first receive the updated penalty coefficients αt and ρt and the averaged parameter Unew. Then, the renewed personalized learning score of client ck is defined as SP^{ck} = S^{ck} − αt h(U^{ck}) − (ρt/2) h(U^{ck})², and itfl local gradient-based parameter updates are performed to maximize this personalized score.

Server Update. After itfl local updates, the server randomly chooses r clients and collects their U's into the set U. Then, the U's in U are averaged to obtain Unew. The other operation on the server is updating αt, ρt to αt+1, ρt+1; the detailed update rules are described in lines 8-14 of Algorithm 1. Then, the new penalty coefficients and parameters are broadcast to all clients. Notice that, assuming the data distribution across clients is homogeneous (no distribution shift), the Φ^{ck} of the chosen r clients can also be collected and averaged to update the local models of the clients in the same way, which is named All-Shared FedDAG (AS-FedDAG) in this paper.

## Algorithm 2 Sub-Problem Solver (SPS) for FedDAG

1: **Input:** D, C, Parameter-list = {αt, ρt, itin, itfl, r}
2: **Output:** Unew, Φitin
3: Define SP^{ck} = S^{ck} − αt h(U^{ck}) − (ρt/2) h(U^{ck})²
4: **for** i in (1, 2, · · · , itin) **do**
5: **for each** client ck **do**
6: \#Self-updating
7: U^{i,ck}, Φ^{i,ck} ← arg max_{Φ^{ck},U^{ck}} SP^{ck}
8: **end for**
9: **if** i (mod itfl) = 0 or i = itin **then**
10: \#Aggregating: randomly select r clients and collect their U's into U; then, send U to the server
11: U ← Agg(r, C)
12: \#Server Updating: average the U ∈ U
13: Unew ← Avg(U)
14: \#Broadcasting: distribute Unew to all clients
15: C ← BD(Unew)
16: **for each** client ck **do**
17: \#Clients Updating
18: U^{i,ck} ← Unew
19: **end for**
20: **end if**
21: **end for**

For clarity, we name our general method Graph-Shared FedDAG (GS-FedDAG) to distinguish it from AS-FedDAG. It is worth noting that AS-FedDAG can further enhance performance in the homogeneous case but introduces some additional communication costs.
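The Agg/Avg/BD steps of Algorithm 2 amount to a FedAvg-style average of the clients' proxy matrices. A minimal sketch is given below; the client objects and their attribute names are hypothetical and serve only to illustrate one communication round of GS-FedDAG.

```python
import numpy as np

def server_round(clients, r, rng=np.random.default_rng(0)):
    """One communication round: sample r clients, average their GSL proxies U,
    and broadcast the average back to every client (GS-FedDAG shares only U)."""
    idx = rng.choice(len(clients), size=r, replace=False)
    U_new = np.mean([clients[i].U for i in idx], axis=0)   # Avg(U)
    for c in clients:                                       # BD(U_new)
        c.U = U_new.copy()
    return U_new
```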
## 4.3 Thresholding

For continuous optimization, as illustrated in previous work (Ng et al., 2022b), we leverage the Gumbel-Sigmoid to approximate the binary mask; that is to say, exact values of 0 or 1 are hard to obtain. Another issue is raised by ALM, since the solution of ALM only satisfies the constraint up to numerical precision. This is because we formulate the last sub-problem with maximal, but not infinite, htol, itmax, and penalty coefficients. Therefore, some entries of the output M = E[gτ(U)] will be near, but not exactly, 0 or 1. To alleviate this issue, ℓ1 sparsity is added to the objective function. In our method, since all mask values lie in [0, 1], we take the median value 0.5 as the threshold to prune edges, following the same procedure as our baseline method MCSL (Ng et al., 2022b). An iterative thresholding method is also adopted to deal with the case where the learned graph is cyclic. This may happen when the number of variables is large (40 variables in our paper), because in numerical optimization the constraint penalty decreases exponentially with the number of variables. To deal with a cyclic graph, we cut the edges with the minimum mask values one by one until the graph is acyclic. Until now, all continuous search methods for DAG learning have suffered from these two problems; this is an interesting future direction to be investigated.

## 4.4 Convergence Analysis

Let us quickly review our method. For each client ck, the model parameters include Φ^{ck} and U^{ck}, and each client optimizes its parameters on its own data D^{ck}. Like NOTEARS and its follow-up works, our method can reach a stationary point instead of the global maximum (the ground-truth DAG). We then separate our discussion into homogeneous and heterogeneous data.

### 4.4.1 Homogeneous Data

For the case of no distribution shift, we have Φ^{c1} = Φ^{c2} = · · · = Φ^{cm} and U^{c1} = U^{c2} = · · · = U^{cm}. Our method named AS-FedDAG (All-Shared FedDAG) sets up a central server, which regularly (1) receives all parameters (or only U^{ck} for GS-FedDAG), (2) averages these parameters to get Φnew and Unew, and (3) broadcasts Φnew and Unew to all clients during the learning procedure. AS-FedDAG benefits from an advanced technique named FedAvg (McMahan et al., 2017) for solving the FL problem in the homogeneous data case; FedAvg solves a similar problem by averaging all parameters learned from each client during the learning process.

### 4.4.2 Heterogeneous Data

To solve the overall constrained problem, we use ALM to convert the hard constraint into a soft constraint with a series of increasing penalty coefficients. The convergence of ALM for non-convex problems has been well studied (Nemirovski, 1999) and presented in NOTEARS (Zheng et al., 2018). Thus, we only consider the convergence analysis of our method directly for the inner optimization, i.e., the t-th sub-problem:

$$\operatorname*{arg\,max}_{\Phi,\mathbf{U}}\sum_{k=1}^{m}\mathcal{S}^{c_{k}}\left(\mathcal{D}^{c_{k}},\Phi^{c_{k}},g_{\tau}(\mathbf{U})\right)-\alpha_{t}h(\mathbf{U})-\frac{\rho_{t}}{2}h(\mathbf{U})^{2}.\tag{9}$$

Here, for simplicity, we define Ŝ^{ck}(Φ^{ck}, U) = −S^{ck}(D^{ck}, Φ^{ck}, gτ(U)) + αt h(U) + (ρt/2) h(U)². Then, the overall optimization problem can be reformulated as

$$\operatorname*{arg\,min}_{\Phi,\mathbf{U}}\hat{\mathcal{S}}(\mathbf{U},\Phi):=\sum_{k=1}^{m}\hat{\mathcal{S}}^{c_{k}}(\mathbf{U},\Phi^{c_{k}}).\tag{10}$$

Throughout the following part, we use ∇U and ∇Φ to represent the gradients of Ŝ(U, Φ) w.r.t. U and Φ^{ck}, respectively, and we use ∇̃U and ∇̃Φ to represent the stochastic gradients calculated on a mini-batch of observations w.r.t. U and Φ^{ck}, respectively.

Definition 4.1. (Partial Gradient Diversity) The gradient diversity among all local learning objectives is bounded as:

$$\sum_{k=1}^{m}\left\|\nabla_{\mathbf{U}}\hat{\mathcal{S}}^{c_{k}}(\mathbf{U},\Phi^{c_{k}})-\nabla_{\mathbf{U}}\hat{\mathcal{S}}(\mathbf{U},\Phi)\right\|^{2}\leq\delta^{2}.\tag{11}$$

Note that the notion of gradient diversity was introduced (Yin et al., 2018; Haddadpour & Mahdavi, 2019) as a measurement of the similarity among the gradients updated on different clients.

Assumption 4.2. (Smoothness and Lower Bound) The local objective function Ŝ^{ck}(·) of the k-th client is differentiable for ∀k ∈ [m]. Also, ∇U Ŝ^{ck}(U, Φ^{ck}) is LU-Lipschitz w.r.t. U and LUΦ-Lipschitz w.r.t. Φ^{ck}, and ∇Φ Ŝ^{ck}(U, Φ^{ck}) is LΦ-Lipschitz w.r.t. Φ^{ck} and LΦU-Lipschitz w.r.t. U.
We also assume the overall objective function is bounded below by a constant Ŝ∗ and denote ΔŜ0 = Ŝ(U0, Φ0) − Ŝ∗. The relative cross-sensitivity of ∇U Ŝ^{ck} w.r.t. Φ^{ck} and of ∇Φ Ŝ^{ck} w.r.t. U is measured by the scalar

$$\chi:=\max\left\{L_{U\Phi},L_{\Phi U}\right\}/\sqrt{L_{U}L_{\Phi}}.\tag{12}$$

Assumption 4.3. (Bounded Local Variance) For each local dataset D^{ck}, k ∈ [m], we can independently sample a batch of data denoted as ξ ⊂ D^{ck}. Then, there exist constants σU and σΦ such that

$$\mathbb{E}\left[\left\|\tilde{\nabla}_{\mathbf{U}}\hat{\mathcal{S}}^{c_{k}}(\Phi^{c_{k}},\mathbf{U})-\nabla_{\mathbf{U}}\hat{\mathcal{S}}^{c_{k}}(\Phi^{c_{k}},\mathbf{U})\right\|^{2}\right]\leq\sigma_{U}^{2},\qquad\mathbb{E}\left[\left\|\tilde{\nabla}_{\Phi}\hat{\mathcal{S}}^{c_{k}}(\Phi^{c_{k}},\mathbf{U})-\nabla_{\Phi}\hat{\mathcal{S}}^{c_{k}}(\Phi^{c_{k}},\mathbf{U})\right\|^{2}\right]\leq\sigma_{\Phi}^{2}.$$

The bounded variance assumption is a standard assumption on the stochastic gradients (Haddadpour & Mahdavi, 2019; Pillutla et al., 2022).

Theorem 4.4. (Convergence of GS-FedDAG) For GS-FedDAG with all clients involved in the aggregation, for ∀0 ≤ it ≤ T − 1, under Definition 4.1 and Assumptions 4.2 and 4.3, suppose the learning rate for the U part is set as η/(LU itin) and the learning rate for the Φ part is set as η/(LΦ itin). Then, for an η depending on the problem parameters, we have

$$\frac{1}{T}\sum_{it=0}^{T-1}\left(\frac{1}{L_{U}}\mathbb{E}\left[\left\|\nabla_{\mathbf{U}}\hat{\mathcal{S}}(\mathbf{U}_{it},\Phi_{it})\right\|^{2}\right]+\frac{1}{L_{\Phi}}\mathbb{E}\left[\frac{1}{m}\sum_{k=1}^{m}\left\|\nabla_{\Phi}\hat{\mathcal{S}}^{c_{k}}(\mathbf{U}_{it},\Phi_{it}^{c_{k}})\right\|^{2}\right]\right)\leq\frac{(\Delta\hat{\mathcal{S}}_{0}\,\sigma_{\mathrm{FedDAG},1}^{2})^{1/2}}{\sqrt{T}}+\frac{(\Delta\hat{\mathcal{S}}_{0}^{2}\,\sigma_{\mathrm{FedDAG},2}^{2})^{1/3}}{T^{2/3}}+\mathcal{O}\left(\frac{1}{T}\right),\tag{13}$$

where we define the effective variance terms

$$\sigma_{\mathrm{FedDAG},1}^{2}=\left(1+\chi^{2}\right)\left(\frac{\sigma_{U}^{2}}{L_{U}}+\frac{\sigma_{\Phi}^{2}}{L_{\Phi}}\right),\qquad\sigma_{\mathrm{FedDAG},2}^{2}=\left(1+\chi^{2}\right)\left(\frac{\delta^{2}}{L_{U}}+\frac{\sigma_{U}^{2}}{L_{U}}+\frac{\sigma_{\Phi}^{2}}{L_{\Phi}}\right)\left(1-\frac{1}{it_{in}}\right),\tag{14}$$

and itin is the total number of steps of one inner loop, used in lines 4-21 of Algorithm 2. From Theorem 4.4, we can see that the gradients ∇U Ŝ^{ck}(Φ^{ck}_{it}, U_{it}) w.r.t. U and ∇Φ Ŝ^{ck}(Φ^{ck}_{it}, U_{it}) w.r.t. Φ at the it-th step can be bounded if we choose a proper η, which affects the learning rates of the model. The proof of Theorem 4.4 can be borrowed from the proof of Theorem 2 in (Pillutla et al., 2022). Notice that, in our theorem, we have assumed that all clients participate in the aggregation for simplification; the conclusion can easily be extended to the general partial-participation case.

## 4.5 Privacy and Costs Discussion

Privacy issues of FedDAG. The strongest motivation of FL is to avoid *personalized raw data leakage*. To achieve this, FedDAG proposes to exchange the parameters for modeling the graph. Here, we argue that the information leakage of local data is rather limited. The server, receiving parameters with client indices, may infer some data properties.
However, according to the data generation model (1), the distribution of local data is decided by (1) the DAG structure, (2) the noise types/strengths, and (3) the mechanisms. The gradient information of the shared matrix is decided by (1) the learning objective and (2) the model architecture, which are agnostic to the server. Especially for the network part, clients may choose different networks to make the inference more complex. Moreover, suppose the graph structure is also private information for the clients. In that case, this problem can easily be solved by selecting a client to serve as the proxy server⁷. The proxy server needs to play two roles: training its own model and taking on the server's duties. Then, the other clients communicate with the proxy server instead of a real server in each communication round. Moreover, the aim of our work, and of federated learning in general, is not to provide a full solution to privacy protection; instead, it is the first step towards this goal, i.e., no data sharing between clients. To further protect privacy, more constraints need to be added to the federated learning framework, such as the prevention of information leakage from gradient sharing, which is studied under the privacy umbrella. To further enhance privacy protection, our method can also incorporate more advanced privacy protection techniques (Wei et al., 2020b), which would be an interesting direction to investigate.

Communication cost. Since FedDAG requires exchanging parameters between the server and clients, additional communication costs arise. We argue, however, that GS-FedDAG brings only a rather small additional communication burden. For the case of d variables, a single communication only exchanges a d×d matrix twice (sending and receiving). For homogeneous data, where local data are assumed to be sampled from the same distribution, one can also transmit the neural network parameters to further improve performance, since the mechanisms are also shared among clients. The trade-off between performance and communication cost can also be controlled by r in Algorithm 2, i.e., by enlarging or reducing r. Surprisingly, we find that reducing r does not severely harm performance (see Table 17 in Appendix D for detailed results). Moreover, the partial communication method, which chooses only some clients to exchange training information, is also leveraged to address the issue that not all clients are always online at the same time.

⁷Notice that the DAG structure encoded in the data is not a secret for the data owners (clients).

## 5 Experimental Results

In this section, we study the empirical performance of FedDAG on both synthetic and real-world data. More detailed ablation experiments can be found in Appendix D.

Baselines. We compare our method with various baselines, including the continuous search methods NOTEARS (Zheng et al., 2018), NOTEARS-MLP (N-S-MLP for short) (Zheng et al., 2020), DAG-GNN (Yu et al., 2019), and MCSL (Ng et al., 2022b), as well as two traditional combinatorial search methods, PC (Spirtes et al., 2001) and GES (Chickering, 2002). The comparison results with another method named causal additive models (CAM) (Bühlmann et al., 2014) are given in Appendix D.5. Furthermore, we also include a concurrent work named NOTEARS-ADMM (Ng & Zhang, 2022), which also considers learning the Bayesian network in the federated setup.
## 5 Experimental Results

In this section, we study the empirical performance of FedDAG on both synthetic and real-world data. More detailed ablation experiments can be found in Appendix D.

Baselines. We compare our method with various baselines, including the continuous search methods NOTEARS (Zheng et al., 2018), NOTEARS-MLP (N-S-MLP for short) (Zheng et al., 2020), DAG-GNN (Yu et al., 2019), and MCSL (Ng et al., 2022b), as well as two traditional combinatorial search methods, PC (Spirtes et al., 2001) and GES (Chickering, 2002). The comparison results with causal additive models (CAM) (Bühlmann et al., 2014) are given in Appendix D.5. Furthermore, we also include a concurrent work named NOTEARS-ADMM (Ng & Zhang, 2022), which also considers learning the Bayesian network in the federated setup. Since NOTEARS-ADMM focuses on the homogeneous case and linear settings and pays less attention to nonlinear cases, we only include its results on the linear cases in the main paper for fair comparisons; more detailed comparisons are shown in Appendix D.6. Moreover, we also compare FedDAG with a voting method (Na & Yang, 2010) in Appendix D.4, which also tries to learn a DAG from decentralized data. We provide two training modes for the compared methods. The first, named "All data", uses all data to train a single model; this is not permitted for FedDAG because of the ban on data sharing in our setting. For the homogeneous data case, the results in this mode can serve as an *approximate upper bound* for our method, albeit an unobtainable one. The second, named "Separated data", trains each siloed model separately over its personalized data; the reported performances are the averages over all clients.

Metrics. We report two metrics, Structural Hamming Distance (SHD) and True Positive Rate (TPR), averaged over 10 random repetitions, to evaluate the discrepancies between the estimated DAG and the ground-truth graph G. See more details about SHD and TPR in Appendix B.3. Notice that PC and GES can at most reach the completed partially directed acyclic graph (CPDAG, or MEC), which shares the same skeleton with the ground-truth DAG G. When we evaluate SHD, we simply ignore the direction of undirected edges learned by PC and GES. That is to say, these two methods can achieve an SHD of 0 if they identify the CPDAG. The implementation details of all methods are given in Appendix B.

## 5.1 Synthetic Data

The synthetic data considered here are generated from Gaussian ANMs (Model (1)). Two random graph models, Erdős-Rényi (ER) and Scale-Free (SF) (detailed definitions are given in Appendix B.1), are adopted to generate the graph structure G. Then, for each node Vi corresponding to Xi in G, we sample a function from the given function sets to simulate fi. Finally, data are generated according to a specific sampling method. In the following experiments, we take 10 clients, each with 600 observations (unless otherwise specified in some ablation studies). According to Assumption 3.1, data across all clients share the same DAG structure for both the homogeneous and heterogeneous data settings. Due to the space limit, more ablation experiments, such as *unevenly distributed observations*, *varying clients*, *dense graphs*, *different non-linear functions*, and *different numbers of observations*, are put in Appendix D. All detailed discussions on the experimental results are in Appendix E.

## 5.1.1 Homogeneous Data Setting

Results on linear models. For a fair comparison, we also provide a linear version of our method. Since linear data are parameterized by an adjacency matrix, we can directly take the adjacency matrix as our model instead of a GSL part and an MA part. During training, the matrix is communicated and averaged by the server to coordinate the joint learning procedure (a minimal sketch of this averaging step is given below). NOTEARS-ADMM is also a DAG structure learning method for decentralized data; different from our averaging strategy for exchanging training information among clients, its optimization problem is solved by the alternating direction method of multipliers (ADMM). From Table 1, we find that our method consistently shows its advantage in the linear case. In the *ER2 with 10 nodes* setting, our AS-FedDAG is even better than NOTEARS trained with all data. This is indeed possible; a detailed explanation can be found in Appendix E.
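For concreteness, the following minimal sketch (ours; the function and variable names are illustrative, not the actual implementation) shows the server-side averaging step used in the linear version:

```python
import numpy as np

# Minimal sketch of one communication round for the linear version of FedDAG.
# Assumption: each client holds a d x d weighted adjacency matrix; the server
# averages them and broadcasts the result back (FedAvg-style aggregation).

def server_round(client_matrices: list) -> np.ndarray:
    """Aggregate the clients' adjacency matrices by simple averaging."""
    return np.mean(client_matrices, axis=0)

# Each client would then overwrite its local matrix with the broadcast average
# and continue its local gradient steps on its own data.
d, m = 10, 3                                    # toy sizes: 10 variables, 3 clients
local_matrices = [np.random.randn(d, d) for _ in range(m)]
averaged = server_round(local_matrices)
assert averaged.shape == (d, d)
```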
Table 1: Results on the linear model (homogeneous data). "ER2-10" abbreviates "ER2 with 10 nodes", etc.

| Method | ER2-10: SHD ↓ | ER2-10: TPR ↑ | SF2-10: SHD ↓ | SF2-10: TPR ↑ | ER2-20: SHD ↓ | ER2-20: TPR ↑ | SF2-20: SHD ↓ | SF2-20: TPR ↑ |
|---|---|---|---|---|---|---|---|---|
| PC-All | 9.0 ± 3.9 | 0.58 ± 0.14 | 4.4 ± 1.3 | 0.76 ± 0.07 | 18.2 ± 5.9 | 0.59 ± 0.12 | 22.3 ± 4.8 | 0.48 ± 0.08 |
| GES-All | 7.5 ± 10.1 | 0.82 ± 0.25 | 4.1 ± 5.6 | 0.89 ± 0.14 | 25.2 ± 22.1 | 0.81 ± 0.16 | 22.1 ± 11.8 | 0.74 ± 0.15 |
| NOTEARS-All | 1.6 ± 1.6 | 0.93 ± 0.06 | 1.4 ± 1.1 | 0.92 ± 0.05 | 3.0 ± 2.7 | 0.94 ± 0.06 | 6.9 ± 7.0 | 0.86 ± 0.12 |
| NOTEARS-Sep | 3.0 ± 2.2 | 0.85 ± 0.08 | 3.6 ± 2.1 | 0.83 ± 0.10 | 4.1 ± 2.4 | 0.91 ± 0.05 | 10.2 ± 5.9 | 0.82 ± 0.10 |
| NOTEARS-ADMM | 4.7 ± 3.9 | 0.89 ± 0.12 | 4.4 ± 3.0 | 0.86 ± 0.09 | 7.9 ± 5.9 | 0.89 ± 0.07 | 10.7 ± 5.3 | 0.82 ± 0.08 |
| AS-FedDAG | 1.3 ± 1.5 | 0.94 ± 0.07 | 1.6 ± 1.0 | 0.91 ± 0.06 | 3.9 ± 3.1 | 0.91 ± 0.06 | 9.4 ± 6.7 | 0.82 ± 0.12 |

Table 2: Results on the nonlinear ANM with GP (homogeneous data).

| Setting | Method | ER2-10: SHD ↓ | ER2-10: TPR ↑ | SF2-10: SHD ↓ | SF2-10: TPR ↑ | ER2-40: SHD ↓ | ER2-40: TPR ↑ | SF2-40: SHD ↓ | SF2-40: TPR ↑ |
|---|---|---|---|---|---|---|---|---|---|
| All data | PC | 15.3 ± 2.6 | 0.37 ± 0.10 | 14.1 ± 4.3 | 0.44 ± 0.20 | 84.9 ± 13.4 | 0.40 ± 0.08 | 95.0 ± 10.4 | 0.36 ± 0.07 |
| All data | GES | 13.0 ± 3.9 | 0.50 ± 0.18 | 9.6 ± 4.4 | 0.71 ± 0.17 | 59.0 ± 9.8 | 0.53 ± 0.08 | 73.8 ± 11.9 | 0.47 ± 0.10 |
| All data | NOTEARS | 16.5 ± 2.0 | 0.05 ± 0.04 | 14.5 ± 1.1 | 0.09 ± 0.07 | 71.2 ± 7.2 | 0.08 ± 0.03 | 70.8 ± 2.3 | 0.07 ± 0.03 |
| All data | N-S-MLP | 8.1 ± 3.8 | 0.56 ± 0.17 | 8.3 ± 2.8 | 0.51 ± 0.16 | 45.3 ± 6.8 | 0.43 ± 0.08 | 49.2 ± 7.7 | 0.39 ± 0.09 |
| All data | DAG-GNN | 16.2 ± 2.1 | 0.07 ± 0.06 | 15.2 ± 0.8 | 0.05 ± 0.05 | 73.0 ± 7.7 | 0.06 ± 0.03 | 72.4 ± 1.6 | 0.05 ± 0.02 |
| All data | MCSL | 1.9 ± 1.5 | 0.90 ± 0.08 | 1.6 ± 1.2 | 0.91 ± 0.07 | 25.4 ± 13.1 | 0.68 ± 0.14 | 31.6 ± 10.0 | 0.59 ± 0.13 |
| Sep data | PC | 14.1 ± 2.4 | 0.31 ± 0.06 | 13.6 ± 2.7 | 0.30 ± 0.10 | 83.8 ± 7.4 | 0.24 ± 0.03 | 86.1 ± 4.6 | 0.23 ± 0.04 |
| Sep data | GES | 12.7 ± 2.7 | 0.37 ± 0.09 | 12.7 ± 2.4 | 0.33 ± 0.11 | 71.0 ± 6.7 | 0.29 ± 0.03 | 73.2 ± 4.4 | 0.29 ± 0.05 |
| Sep data | NOTEARS | 16.5 ± 2.0 | 0.06 ± 0.04 | 14.6 ± 1.0 | 0.09 ± 0.06 | 71.1 ± 7.3 | 0.08 ± 0.03 | 70.7 ± 2.0 | 0.07 ± 0.03 |
| Sep data | N-S-MLP | 8.5 ± 2.9 | 0.56 ± 0.13 | 8.7 ± 2.9 | 0.53 ± 0.16 | 51.0 ± 6.9 | 0.41 ± 0.06 | 53.6 ± 5.5 | 0.39 ± 0.08 |
| Sep data | DAG-GNN | 15.7 ± 2.3 | 0.11 ± 0.05 | 14.5 ± 1.0 | 0.10 ± 0.06 | 71.5 ± 7.5 | 0.08 ± 0.02 | 70.8 ± 1.8 | 0.07 ± 0.02 |
| Sep data | MCSL | 7.1 ± 3.2 | 0.83 ± 0.08 | 6.9 ± 2.8 | 0.84 ± 0.08 | 77.3 ± 19.8 | 0.64 ± 0.11 | 72.9 ± 16.4 | 0.58 ± 0.13 |
| Federated | GS-FedDAG | 2.4 ± 2.0 | 0.86 ± 0.13 | 2.7 ± 2.2 | 0.86 ± 0.13 | 36.5 ± 12.1 | 0.65 ± 0.15 | 46.4 ± 10.4 | 0.57 ± 0.13 |
| Federated | AS-FedDAG | 1.8 ± 2.0 | 0.89 ± 0.12 | 2.5 ± 2.7 | 0.85 ± 0.15 | 30.0 ± 12.3 | 0.74 ± 0.15 | 31.5 ± 10.0 | 0.59 ± 0.13 |

Results on the nonlinear model. For the nonlinear setting, all data are generated by an ANM and divided into 10 pieces. Each fi is sampled from a Gaussian process (GP) with an RBF kernel of bandwidth one (see Tables 14 and 15 in Appendix D for the results of other functions), and the noises are sampled from a zero-mean Gaussian distribution with fixed variance.
We consider graphs with d nodes and 2d expected edges. Experimental results with 10 and 40 nodes are reported in Table 2. Since all local data are homogeneous, we also provide another effective training variant named AS-FedDAG, in which the MA parts are also shared among clients. In all settings, AS-FedDAG shows better performance than GS-FedDAG because more model information is shared during training, while GS-FedDAG also shows a consistent advantage over the other methods. When local models are trained separately, all models suffer from data scarcity; accordingly, both GS-FedDAG and AS-FedDAG perform better than the other methods trained in the separate fashion. NOTEARS and DAG-GNN, as continuous search methods, obtain unsatisfactory results due to their weak model capacity and improper model assumptions. The BIC score used by GES assumes a linear-Gaussian likelihood, which is incapable of dealing with non-linear data8. As the number of nodes increases, GS-FedDAG still shows better results than the closely-related baseline method MCSL, although NOTEARS-MLP achieves results comparable to GS-FedDAG owing to its advantage over MCSL on this function class.

Here, we give a more detailed explanation of why our FedDAG method performs better than the baseline methods. PC and GES can at most reach the CPDAG (or MEC), which shares the same skeleton with the ground-truth DAG. When we evaluate SHD, we simply ignore the direction of undirected edges learned by PC and GES; that is, these two methods can achieve an SHD of 0 if they identify the true CPDAG, so the final results are not caused by an unfair comparison. PC leverages independence tests to infer the (conditional) independences from the data distribution, so its accuracy is affected by (1) the number of observations and (2) the effectiveness of the non-parametric kernel independence test. GES leverages greedy search with the BIC score; however, the likelihood part of BIC in GES is linear-Gaussian, which is unsuitable for data generated by a non-linear model. NOTEARS is a linear model, but the mechanisms here are non-linear, so the poor results stem from the mismatch between the data and the model; the comparisons with GES and NOTEARS on linear homogeneous data are therefore reported in Table 1. DAG-GNN is also a non-linear model; however, its non-linearity assumption differs from the ANM data generation model assumed in our paper, and, in addition, its *mechanism approximation* modules are forced to share some parameters. Both NOTEARS-MLP and MCSL have their own advantages: as Tables 14 and 15 show, NOTEARS-MLP performs better when the non-linear functions are MIM and MLP, while MCSL works better on GP and GP-add models.

8Please find the ablation experiment with linear data and more discussions of the experimental results in Appendix D.
Visualization of the learned DAG of FedDAG. As an example, we take the AS-FedDAG optimization process on a linear Gaussian model with NOTEARS as the baseline method and plot the change of the estimated parameters in Fig. 3 and Fig. 4. In this example, the number of nodes is set to 10 and the number of edges to 10. The data are simulated from an ER graph, and 200 observations are evenly assigned to each of two clients. In Fig. 3, we can see that the learned graph asymptotically approximates the ground-truth DAG BG, including the existence of edges and their weights. From Fig. 4, we can find that with the increase of the penalty coefficients, h_loss decreases quickly. For the learned graphs on the different clients, we can see that the SHD distance shrinks during the optimization procedure.

![12_image_0.png](12_image_0.png)

Figure 3: Visualization of the learned graph during the optimization process. B-n denotes the learned graph at step n. Best is the final estimated DAG. BG is the ground-truth DAG.

![12_image_1.png](12_image_1.png)

Figure 4: Parameter changes during the optimization process. The first three sub-figures show the changes of the penalty coefficients ρ and α and the DAG constraint loss h_loss. The fourth sub-figure records the ℓ1 distance between the two learned graphs on the different clients. The fifth sub-figure records the SHD distance between the two learned graphs on the different clients.

## 5.1.2 Heterogeneous Data Setting

As defined in Section 3, the heterogeneity of data across clients comes from changes in the mechanisms or shifts of the noise distributions. To simulate heterogeneous data, we first generate a graph structure shared by all clients and then decide the types of mechanisms f^{c_k}_i and noises ϵi for i ∈ [d] for each client ck. In our experiments, f^{c_k} can be linear or non-linear for each client. If linear, f^{c_k} is a weighted adjacency matrix with coefficients sampled from Uniform([−2.0, −0.5] ∪ [0.5, 2.0]), with equal probability for each interval. If non-linear, each f^{c_k}_i is independently and randomly sampled from the GP, GP-add, MLP, or MIM function classes (Yuan, 2011). Then, a fixed zero-mean Gaussian noise is assigned to each client, with its variance randomly sampled from {0.8, 1}. A minimal sketch of this per-client simulation is given below.
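The following sketch (ours; helper names such as `simulate_client` are illustrative, and only the linear mechanism case is shown) mirrors this per-client generation procedure under the assumptions just stated:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_client(B: np.ndarray, n: int) -> np.ndarray:
    """Generate n observations for one client that shares the DAG B
    (d x d binary adjacency, nodes in topological order) but has its own
    linear coefficients and noise scale. Illustrative linear case only."""
    d = B.shape[0]
    # Client-specific coefficients from Uniform([-2, -0.5] U [0.5, 2]).
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = B * signs * rng.uniform(0.5, 2.0, size=(d, d))
    # Client-specific noise variance sampled from {0.8, 1}.
    sigma2 = rng.choice([0.8, 1.0])
    X = np.zeros((n, d))
    for i in range(d):                      # topological order assumed
        parent_part = X @ W[:, i]           # contribution of X_i's parents
        X[:, i] = parent_part + rng.normal(0.0, np.sqrt(sigma2), size=n)
    return X

# Ten heterogeneous clients sharing one ground-truth DAG, 600 samples each:
B = np.triu(rng.random((10, 10)) < 0.2, k=1).astype(float)
datasets = [simulate_client(B, n=600) for _ in range(10)]
```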
The conclusions on the heterogeneous data setting are rather similar to those on the homogeneous data. As can be read from Table 3, GS-FedDAG always shows the best performance across all settings.

Table 3: Results on ANMs with heterogeneous data.

| Setting | Method | ER2-10: SHD ↓ | ER2-10: TPR ↑ | SF2-10: SHD ↓ | SF2-10: TPR ↑ | ER2-40: SHD ↓ | ER2-40: TPR ↑ | SF2-40: SHD ↓ | SF2-40: TPR ↑ |
|---|---|---|---|---|---|---|---|---|---|
| All data | PC | 22.3 ± 4.2 | 0.41 ± 0.11 | 21.0 ± 3.6 | 0.41 ± 0.12 | 151.9 ± 14.2 | 0.27 ± 0.08 | 152.5 ± 5.4 | 0.26 ± 0.04 |
| All data | GES | 26.4 ± 6.2 | 0.53 ± 0.14 | 25.4 ± 4.6 | 0.54 ± 0.13 | NaN | NaN | NaN | NaN |
| All data | NOTEARS | 20.4 ± 4.1 | 0.49 ± 0.14 | 18.7 ± 3.3 | 0.45 ± 0.11 | 164.8 ± 47.4 | 0.39 ± 0.07 | 178.1 ± 33.0 | 0.40 ± 0.10 |
| All data | N-S-MLP | 22.8 ± 5.0 | 0.87 ± 0.07 | 24.7 ± 3.3 | 0.88 ± 0.07 | 344.4 ± 71.9 | 0.92 ± 0.08 | 325.0 ± 50.2 | 0.85 ± 0.08 |
| All data | DAG-GNN | 21.2 ± 6.0 | 0.39 ± 0.11 | 16.6 ± 3.0 | 0.48 ± 0.18 | 146.6 ± 41.6 | 0.29 ± 0.08 | 168.2 ± 34.2 | 0.31 ± 0.09 |
| All data | MCSL | 19.4 ± 4.4 | 0.75 ± 0.19 | 19.0 ± 4.0 | 0.81 ± 0.14 | 118.6 ± 18.1 | 0.68 ± 0.11 | 126.9 ± 16.5 | 0.59 ± 0.12 |
| Sep data | PC | 12.5 ± 2.7 | 0.45 ± 0.07 | 11.0 ± 2.1 | 0.49 ± 0.07 | 65.7 ± 11.0 | 0.43 ± 0.06 | 73.7 ± 5.5 | 0.36 ± 0.05 |
| Sep data | GES | 12.9 ± 2.6 | 0.58 ± 0.07 | 10.3 ± 2.8 | 0.60 ± 0.09 | 68.2 ± 20.8 | 0.65 ± 0.09 | 77.2 ± 13.8 | 0.60 ± 0.07 |
| Sep data | NOTEARS | 7.6 ± 2.6 | 0.60 ± 0.11 | 7.6 ± 1.8 | 0.58 ± 0.09 | 34.9 ± 12.7 | 0.63 ± 0.11 | 43.4 ± 8.4 | 0.53 ± 0.10 |
| Sep data | N-S-MLP | 5.2 ± 1.4 | 0.80 ± 0.05 | 6.1 ± 1.6 | 0.76 ± 0.05 | 46.0 ± 10.2 | 0.73 ± 0.08 | 56.0 ± 9.5 | 0.66 ± 0.09 |
| Sep data | DAG-GNN | 8.2 ± 2.9 | 0.67 ± 0.12 | 8.4 ± 2.1 | 0.67 ± 0.09 | 45.7 ± 13.5 | 0.64 ± 0.11 | 52.7 ± 8.4 | 0.60 ± 0.11 |
| Sep data | MCSL | 9.2 ± 1.8 | 0.72 ± 0.06 | 8.9 ± 2.0 | 0.71 ± 0.08 | 76.1 ± 13.7 | 0.53 ± 0.09 | 78.1 ± 6.3 | 0.47 ± 0.07 |
| Federated | AS-FedDAG | 3.4 ± 1.7 | 0.97 ± 0.04 | 2.7 ± 1.6 | 0.90 ± 0.07 | 35.9 ± 17.0 | 0.84 ± 0.09 | 41.8 ± 12.6 | 0.73 ± 0.07 |
| Federated | GS-FedDAG | 1.9 ± 1.6 | 0.99 ± 0.02 | 2.6 ± 1.3 | 0.93 ± 0.07 | 24.3 ± 10.2 | 0.86 ± 0.09 | 33.9 ± 10.9 | 0.73 ± 0.09 |

If all data are pooled to train one model with the other methods, data heterogeneity causes great trouble for all compared methods, while GS-FedDAG performs well. Here, we also provide the experimental results of AS-FedDAG in this setting; the model misspecification problem leads to unsatisfactory results, which motivates our design of GS-FedDAG. Moreover, GS-FedDAG shows consistently good results with different numbers of observations on each client (see Table 16). NOTEARS takes second place in the 40-node setting because some clients hold linear data, which is also the reason that GS-FedDAG shows lower SHDs on heterogeneous data in Table 3 than in Table 2: compared with non-linear models, NOTEARS fits linear data well even with few observations.
## 5.2 Real Data

We consider a real public dataset named **fMRI Hippocampus** (Poldrack et al., 2015) to discover the underlying relationships among six brain regions. This dataset records signals from six separate brain regions in the resting state of one person over 84 successive days, and the anatomical structure provides 7 edges as the ground-truth graph (see Figure 10 in Appendix D). Herein, we separately select 500 records from each of 10 days, which can be regarded as different local datasets. It is worth noting that, although these data do not pose a real privacy problem, we can use this dataset to evaluate the learning accuracy of our method. In Table 4 we show part of the experimental results, while the rest are shown in Table 18. AS-FedDAG shows the best performance on all criteria, and GS-FedDAG also performs better than most of the other methods.

Table 4: Empirical results on the **fMRI Hippocampus** dataset (Part 1).

| Metric | PC (All) | NOTEARS (All) | MCSL (All) | PC (Sep) | NOTEARS (Sep) | MCSL (Sep) | GS-FedDAG | AS-FedDAG |
|---|---|---|---|---|---|---|---|---|
| SHD ↓ | 9.0 ± 0.0 | 5.0 ± 0.0 | 9.0 ± 0.6 | 8.7 ± 1.3 | 8.0 ± 1.9 | 8.3 ± 1.7 | 6.4 ± 0.9 | 5.0 ± 0.0 |
| NNZ | 11.0 ± 0.0 | 4.0 ± 0.0 | 12.0 ± 0.6 | 7.6 ± 1.3 | 5.4 ± 1.5 | 9.0 ± 1.7 | 6.8 ± 0.6 | 5.0 ± 0.0 |
| TPR ↑ | 0.43 ± 0.00 | 0.29 ± 0.00 | 0.44 ± 0.04 | 0.26 ± 0.11 | 0.19 ± 0.18 | 0.35 ± 0.15 | 0.27 ± 0.12 | 0.29 ± 0.00 |
| FDR ↓ | 0.73 ± 0.00 | 0.50 ± 0.00 | 0.74 ± 0.03 | 0.76 ± 0.10 | 0.78 ± 0.19 | 0.73 ± 0.11 | 0.72 ± 0.11 | 0.60 ± 0.00 |

## 6 Related Work

Two main streams of methods, constraint-based and score-based, have pushed the development of DAG structure learning. Constraint-based methods, including SGS and PC (Spirtes et al., 2001), use conditional independence constraints induced from the observed distribution to decide the graph skeleton and part of the edge directions. The other branch of methods (Chickering, 2002) defines a score function, which evaluates the fitness between the distribution and a graph, and identifies the graph G with the highest score after searching the DAG space. To avoid solving the combinatorial optimization problem, NOTEARS (Zheng et al., 2018) introduces an equivalent acyclicity constraint and formulates a fully continuous optimization problem for searching the graph. Following this work, many works extend this constraint to the non-linear case (Ng et al., 2019; Zheng et al., 2020; Lachapelle et al., 2020; Zhu et al., 2020; Wang et al., 2021; Gao et al., 2021; Ng et al., 2022b), low-rank graphs (Fang et al., 2020), interventional data (Brouillard et al., 2020; Ke et al., 2019; Scherrer et al., 2021), time-series data (Pamfil et al., 2020), incomplete data (Gao et al., 2022; Geffner et al., 2022), and unmeasured confounding (Bhattacharya et al., 2021). GOLEM (Ng et al., 2020) leverages the full likelihood and a soft constraint to solve the optimization problem. Ng et al. (2022a), DAG-NoCurl (Yu et al., 2021), and NOFEARS (Wei et al., 2020a) focus on the optimization aspect.

The second line of related work concerns the Overlapping Datasets (OD) problem (Danks et al., 2009; Tillman & Spirtes, 2011; Triantafillou & Tsamardinos, 2015; Huang et al., 2020a) in DAG structure learning. However, OD assumes that each dataset contains observations of only a subset of the variables and targets learning the integrated DAG from multiple datasets. In these works, data from different sites need to be collected on a central server.
The last line of related work is federated learning (Yang et al., 2019; Kairouz et al., 2021), which provides a joint training paradigm for learning from decentralized data while avoiding sharing raw data during the learning process. FedAvg (McMahan et al., 2017) first formulates and names federated learning. FedProx (Li et al., 2020) studies the heterogeneous case and provides convergence analysis results. SCAFFOLD leverages variance reduction, correcting client drift to enhance training efficiency. Besides these fundamental problems in FL itself, this learning paradigm has been widely combined with or applied to many real-world tasks such as healthcare (Sheller et al., 2020), recommendation systems (Yang et al., 2020), and smart transport (Samarakoon et al., 2019).

## 6.1 Concurrent Work (NOTEARS-ADMM)

In NOTEARS-ADMM (Ng & Zhang, 2022), the authors consider the same setting of discovering the underlying relations from distributed data owing to privacy and security concerns. The main advantage of our FedDAG over NOTEARS-ADMM is its ability to handle heterogeneous data, which is very common in real applications. NOTEARS-ADMM mainly considers the linear case and shares the same learning objective with our method; instead of averaging to share training information, it uses ADMM to keep the clients' adjacency matrices close. More detailed experimental comparisons can be found in Appendix D.6, from which we can see that our FedDAG performs better in most settings.

## 7 Conclusion and Discussions

Learning the underlying DAG structure from decentralized data brings considerable challenges to traditional DAG learning methods. In this context, we have introduced one of the first federated DAG structure learning methods, called FedDAG, which uses a two-level structure for each local model. During the learning procedure, each client tries to learn an adjacency matrix to approximate the graph structure and neural networks to approximate the mechanisms. The matrix parts of the participating clients are aggregated and processed by the server and then broadcast to each client for updating its personalized matrix. The overall problem is formulated as a continuous optimization problem and solved by gradient descent. Structural identifiability conditions are provided, and extensive experiments on various datasets show the effectiveness of our FedDAG.

The first limitation of our framework is the *no latent variable* assumption, which seldom holds in real scenarios. Nevertheless, since ours is a general framework, advanced methods (Bhattacharya et al., 2021) that can handle unobserved confounders can be incorporated into our method to deal with the federated setup (more details can be seen in Appendix C.3). Another limitation concerns privacy protection. As stated, we focus on the statistical and optimization perspectives of federated DAG structure learning and leave the problem of combining advanced privacy protection methods (Wei et al., 2020b) with our framework as future work. The last limitation is to relax the invariant DAG assumption and allow the causal graph to change among different clients, which is more common in the real world.

## Acknowledgement

LS is supported by the Major Science and Technology Innovation 2030 "Brain Science and Brain-like Research" key project (No. 2021ZD0201405). EG is supported by an Australian Government Research Training Program (RTP) Scholarship.
This research was supported by The University of Melbourne's Research Computing Services and the Petascale Campus Initiative. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. TL was partially supported by Australian Research Council Projects DP180103424, DE-190101473, IC-190100031, DP-220102121, and FT-220100318. MG was supported by ARC DE210101624. HB was supported by ARC FT190100374. ## References George J Annas. Hipaa regulations: a new era of medical-record privacy? *New England Journal of Medicine*, 348:1486, 2003. Bryon Aragam, Arash Amini, and Qing Zhou. Globally optimal score-based learning of directed acyclic graphs in high-dimensions. In *Advances in Neural Information Processing Systems*, volume 32, 2019. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv* preprint arXiv:1907.02893, 2019. Rohit Bhattacharya, Tushar Nagarajan, Daniel Malinsky, and Ilya Shpitser. Differentiable causal discovery under unmeasured confounding. In *International Conference on Artificial Intelligence and Statistics*, 2021. Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 2020. Peter Bühlmann, Jonas Peters, and Jan Ernest. Cam: Causal additive models, high-dimensional order search and penalized regression. *The Annals of Statistics*, 42(6):2526–2556, 2014. David Maxwell Chickering. Optimal structure identification with greedy search. *Journal of Machine Learning* Research, 3(Nov):507–554, 2002. Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. *International Conference on Machine Learning*, 2021. David Danks, Clark Glymour, and Robert Tillman. Integrating locally learned causal structures with overlapping variables. *Advances in Neural Information Processing Systems*, 2009. Zhuangyan Fang, Shengyu Zhu, Jiji Zhang, Yue Liu, Zhitang Chen, and Yangbo He. Low rank directed acyclic graphs and causal structure learning. *arXiv preprint arXiv:2006.05691*, 2020. Erdun Gao, Ignavier Ng, Mingming Gong, Li Shen, Wei Huang, Tongliang Liu, Kun Zhang, and Howard Bondell. MissDAG: Causal discovery in the presence of missing data with continuous additive noise models. Advances in Neural Information Processing Systems, 2022. Yinghua Gao, Li Shen, and Shu-Tao Xia. Dag-gan: Causal structure learning with generative adversarial nets. In *International Conference on Acoustics, Speech and Signal Processing*, pp. 3320–3324, 2021. Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, et al. Deep end-to-end causal inference. *arXiv preprint* arXiv:2202.02195, 2022. Clark Glymour, Kun Zhang, and Peter Spirtes. Review of causal discovery methods based on graphical models. *Frontiers in Genetics*, 10:524, 2019. Sander Greenland, Judea Pearl, and James M Robins. Causal diagrams for epidemiologic research. *Epidemiology*, pp. 37–48, 1999. Farzin Haddadpour and Mehrdad Mahdavi. On the convergence of local descent methods in federated learning. arXiv preprint arXiv:1910.14425, 2019. James J Heckman. Econometric causality. *International Statistical Review*, 76(1):1–27, 2008. 
Patrik Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. *Advances in Neural Information Processing Systems*, 2008. Biwei Huang, Kun Zhang, Yizhu Lin, Bernhard Schölkopf, and Clark Glymour. Generalized score functions for causal discovery. In *International Conference on Knowledge Discovery & Data Mining*, 2018. Biwei Huang, Kun Zhang, Mingming Gong, and Clark Glymour. Causal discovery from multiple data sets with non-identical variable sets. *Association for the Advancement of Artificial Intelligence*, 34(06):10153–10161, 2020a. Biwei Huang, Kun Zhang, Jiji Zhang, Joseph Ramsey, Ruben Sanchez-Romero, Clark Glymour, and Bernhard Schölkopf. Causal discovery from heterogeneous/nonstationary data. *Journal of Machine Learning Research*, 21:1–53, 2020b. Guido W Imbens and Donald B Rubin. *Causal inference in statistics, social, and biomedical sciences*. Cambridge University Press, 2015. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. International Conference on Learning Representations, 2017. Finn V Jensen and Thomas Dyhre Nielsen. *Bayesian networks and decision graphs*, volume 2. Springer, 2007. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawit, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Now Foundations and Trends, 2021. Marcus Kaiser and Maksim Sipos. Unsuitability of notears for causal graph discovery. arXiv preprint arXiv:2104.05441, 2021. Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Bernhard Schölkopf, Michael C Mozer, Chris Pal, and Yoshua Bengio. Learning neural causal models from unknown interventions. arXiv preprint arXiv:1910.01075, 2019. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *International Conference* on Learning Representations, 2015. Neville K Kitson, Anthony C Constantinou, Zhigao Guo, Yang Liu, and Kiattikun Chobtham. A survey of bayesian network structure learning. *arXiv preprint arXiv:2109.11415*, 2021. Daphne Koller and Nir Friedman. *Probabilistic graphical models: principles and techniques*. MIT press, 2009. Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, and Simon Lacoste-Julien. Gradient-based neural dag learning. In *International Conference on Learning Representations*, 2020. Steffen Lauritzen. *Graphical models*, volume 17. Clarendon Press, 1996. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Conference on Machine Learning and Systems*, 2020. 
Bill Yuchen Lin, Chaoyang He, Zihang Ze, Hulin Wang, Yufen Hua, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, and Salman Avestimehr. Fednlp: Benchmarking federated learning methods for natural language processing tasks. In *Findings of the Association for Computational Linguistics*, pp. 157–175, 2022.

Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. Detecting and correcting for label shift with black box predictors. In *International Conference on Machine Learning*, pp. 3122–3130. PMLR, 2018.

Rui Liu and Han Yu. Federated graph neural networks: Overview, techniques and challenges. *arXiv preprint arXiv:2202.07256*, 2022.

Margaret Mooney Marini and Burton Singer. Causality in the social sciences. *Sociological methodology*, 18:347–409, 1988.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In *International Conference on Artificial Intelligence and Statistics*. PMLR, 2017.

Joris M Mooij, Dominik Janzing, Tom Heskes, and Bernhard Schölkopf. On causal discovery with cyclic additive noise models. *Advances in Neural Information Processing Systems*, 24, 2011.

Joris M Mooij, Sara Magliacane, and Tom Claassen. Joint causal inference from multiple contexts. *Journal of Machine Learning Research*, 21:1–108, 2020.

Yongchan Na and Jihoon Yang. Distributed bayesian network structure learning. *IEEE International Symposium on Industrial Electronics*, pp. 1607–1611, 2010.

Arkadi Nemirovski. Optimization ii: Standard numerical methods for nonlinear continuous optimization. Lecture Note, 1999.

Ignavier Ng and Kun Zhang. Towards federated bayesian network structure learning with continuous optimization. In *International Conference on Artificial Intelligence and Statistics*, pp. 8095–8111. PMLR, 2022.

Ignavier Ng, Shengyu Zhu, Zhitang Chen, and Zhuangyan Fang. A graph autoencoder approach to causal structure learning. *arXiv preprint arXiv:1911.07420*, 2019.

Ignavier Ng, AmirEmad Ghassami, and Kun Zhang. On the role of sparsity and dag constraints for learning linear dags. *Advances in Neural Information Processing Systems*, 33:17943–17954, 2020.

Ignavier Ng, Sébastien Lachapelle, Nan Rosemary Ke, Simon Lacoste-Julien, and Kun Zhang. On the convergence of continuous constrained optimization for structure learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 8176–8198. PMLR, 2022a.

Ignavier Ng, Shengyu Zhu, Zhuangyan Fang, Haoyang Li, Zhitang Chen, and Jun Wang. Masked gradient-based causal structure learning. In *SIAM International Conference on Data Mining*, pp. 424–432. SIAM, 2022b.

Nooshin Omranian, Jeanne MO Eloundou-Mbebi, Bernd Mueller-Roeber, and Zoran Nikoloski. Gene regulatory network inference using fused lasso on multiple data sets. *Scientific Reports*, 6(1):1–14, 2016.

Roxana Pamfil, Nisara Sriwattanaworachai, Shaan Desai, Philip Pilgerstorfer, Konstantinos Georgatzis, Paul Beaumont, and Bryon Aragam. Dynotears: Structure learning from time-series data. In *International Conference on Artificial Intelligence and Statistics*, 2020.

Judea Pearl. Bayesian networks: A model of self-activated memory for evidential reasoning. In Proceedings of the 7th conference of the Cognitive Science Society, University of California, Irvine, CA, USA, pp. 15–17, 1985.

Judea Pearl. *Causality*. Cambridge university press, 2009.

Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. *Causal inference in statistics: A primer*. John Wiley & Sons, 2016.
Jonas Peters and Peter Bühlmann. Identifiability of gaussian structural equation models with equal error variances. *Biometrika*, 101(1):219–228, 2014. Jonas Peters, Joris M Mooij, Dominik Janzing, and Bernhard Schölkopf. Identifiability of causal graphs using functional models. *Conference on Uncertainty in Artificial Intelligence*, 2011. Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society. Series B (Statistical Methodology), pp. 947–1012, 2016. Krishna Pillutla, Kshitiz Malik, Abdel-Rahman Mohamed, Mike Rabbat, Maziar Sanjabi, and Lin Xiao. Federated learning with partial model personalization. In *International Conference on Machine Learning*, pp. 17716–17758. PMLR, 2022. Russell A Poldrack, Timothy O Laumann, Oluwasanmi Koyejo, Brenda Gregory, Ashleigh Hover, Mei-Yen Chen, Krzysztof J Gorgolewski, Jeffrey Luci, Sung Jun Joo, Ryan L Boyd, et al. Long-term neural and physiological phenotyping of a single human. *Nature Communications*, 6(1):1–15, 2015. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, and Ali Jadbabaie. Robust federated learning: The case of affine distribution shifts. *arXiv preprint arXiv:2006.08907*, 2020. Sumudu Samarakoon, Mehdi Bennis, Walid Saad, and Mérouane Debbah. Distributed federated learning for ultra-reliable low-latency vehicular communications. *IEEE Transactions on Communications*, 68(2): 1146–1159, 2019. Nino Scherrer, Olexa Bilaniuk, Yashas Annadani, Anirudh Goyal, Patrick Schwab, Bernhard Schölkopf, Michael C Mozer, Yoshua Bengio, Stefan Bauer, and Nan Rosemary Ke. Learning neural causal models with active interventions. *arXiv preprint arXiv:2109.02429*, 2021. Gideon Schwarz. Estimating the dimension of a model. *The Annals of Statistics*, pp. 461–464, 1978. Micah J Sheller, Brandon Edwards, G Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R Colen, et al. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. *Scientific Reports*, 10(1):1–12, 2020. Shohei Shimizu, Patrik O Hoyer, Aapo Hyvärinen, Antti Kerminen, and Michael Jordan. A linear non-gaussian acyclic model for causal discovery. *Journal of Machine Learning Research*, 7(10), 2006. Peter Spirtes, Clark Glymour, Richard Scheines, et al. *Causation, Prediction, and Search*, volume 1. The MIT Press, 2001. Jin Tian and Judea Pearl. Causal discovery from changes. In *Uncertainty in Artificial Intelligence*, pp. 512–521, 2001. Robert Tillman and Peter Spirtes. Learning equivalence classes of acyclic models with latent and selection variables from multiple datasets with overlapping variables. *International Conference on Artificial* Intelligence and Statistics, 2011. Sofia Triantafillou and Ioannis Tsamardinos. Constraint-based causal discovery from multiple interventions over overlapping variable sets. *Journal of Machine Learning Research*, 16(1):2147–2205, 2015. Paul Voigt and Axel Von dem Bussche. The eu general data protection regulation (gdpr). *A Practical Guide,* 1st Ed., Cham: Springer International Publishing, 10(3152676):10–5555, 2017. Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. Federated learning with matched averaging. In *International Conference on Learning Representations*, 2020a. Xiaoqiang Wang, Yali Du, Shengyu Zhu, Liangjun Ke, Zhitang Chen, Jianye Hao, and Jun Wang. 
Orderingbased causal discovery with reinforcement learning. *International Joint Conference on Artificial Intelligence*, 2021. Yixin Wang, Dawen Liang, Laurent Charlin, and David M Blei. Causal inference for recommender systems. In *ACM Conference on Recommender Systems*, pp. 426–431, 2020b. Dennis Wei, Tian Gao, and Yue Yu. Dags with no fears: A closer look at continuous optimization for learning bayesian networks. *Advances in Neural Information Processing Systems*, 2020a. Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, and H. Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. *IEEE* Transactions on Information Forensics and Security, 15:3454–3469, 2020b. Han Xie, Jing Ma, Li Xiong, and Carl Yang. Federated graph classification over non-iid graphs. Advances in Neural Information Processing Systems, 34:18839–18852, 2021. Jing Yang, Ning An, Gil Alterovitz, Lian Li, and Aiguo Wang. Causal discovery based on healthcare information. In *International Conference on Bioinformatics and Biomedicine*, pp. 71–73. IEEE, 2013. Liu Yang, Ben Tan, Vincent W Zheng, Kai Chen, and Qiang Yang. Federated recommendation systems. Federated Learning, pp. 225–239, 2020. Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. *ACM Transactions on Intelligent Systems and Technology*, 10(2):1–19, 2019. Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, and Peter Bartlett. Gradient diversity: a key ingredient for scalable distributed learning. In *International Conference on* Artificial Intelligence and Statistics, pp. 1998–2007. PMLR, 2018. Yue Yu, Jie Chen, Tian Gao, and Mo Yu. DAG-GNN: DAG structure learning with graph neural networks. International Conference on Machine Learning, 2019. Yue Yu, Tian Gao, Naiyu Yin, and Qiang Ji. Dags with no curl: An efficient DAG structure learning approach. International Conference on Machine Learning, 2021. Ming Yuan. On the identifiability of additive index models. *Statistica Sinica*, pp. 1901–1911, 2011. Keli Zhang, Shengyu Zhu, Marcus Kalander, Ignavier Ng, Junjian Ye, Zhitang Chen, and Lujia Pan. gcastle: A python toolbox for causal discovery. *arXiv preprint arXiv:2111.15155*, 2021. Kun Zhang and Aapo Hyvarinen. On the identifiability of the post-nonlinear causal model. *Conference on* Uncertainty in Artificial Intelligence, 2009. Kun Zhang, Mingming Gong, Petar Stojanov, Biwei Huang, Qingsong Liu, and Clark Glymour. Domain adaptation as a problem of inference on graphical models. *Advances in Neural Information Processing* Systems, 2020. Xun Zheng. *Learning DAGs with Continuous Optimization*. PhD thesis, Carnegie Mellon University, 2020. Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. DAGs with NO TEARS: Continuous Optimization for Structure Learning. *Advances in Neural Information Processing Systems*, 31, 2018. Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, and Eric P. Xing. Learning sparse nonparametric DAGs. In *International Conference on Artificial Intelligence and Statistics*, 2020. Shengyu Zhu, Ignavier Ng, and Zhitang Chen. Causal discovery with reinforcement learning. In *International* Conference on Learning Representations, 2020. 
# Appendix

## Table of Contents

- A Structure identifiability
- B Implementations
  - B.1 Graph generation
  - B.2 SEM simulation
  - B.3 Detailed metrics
  - B.4 Hyper-parameters setting
  - B.5 Hyper-parameters in real-data setting
  - B.6 Model parameters
  - B.7 Training parameters
  - B.8 Sensitivity analysis of hyper-parameters
- C Discussions on our method
  - C.1 Novelty and contributions
  - C.2 Difference with graph neural network (GNN) learning
  - C.3 FedDAG as a framework
  - C.4 Broader impact statement
  - C.5 The consistency results by BIC score
  - C.6 Does the global maximum of Eq. (6) correspond to the ground-truth DAG?
  - C.7 Can Algorithms 1 and 2 solve Eq. (6)?
  - C.8 Can U finally satisfy the acyclicity constraint in Eq. (3)?
- D Supplementary experimental details
  - D.1 Uneven distributions
  - D.2 Varying clients
  - D.3 Dense graphs
  - D.4 Comparisons with voting method
  - D.5 Comparisons with CAM
  - D.6 Comparisons with NOTEARS-ADMM
- E More discussions on the experimental results
- F Discussions on Assumptions
  - F.1 Data heterogeneity
  - F.2 Invariant DAG assumption

## A Structure Identifiability

Besides exploring effective DAG structure learning methods, identifiability conditions for the graph structure (Spirtes et al., 2001) are also important. In general, uniquely identifying the ground-truth DAG from purely observational data is impossible without specific assumptions. However, under specific data generation assumptions, the graph can be identified (Peters et al., 2011; Peters & Bühlmann, 2014; Zhang & Hyvarinen, 2009; Shimizu et al., 2006; Hoyer et al., 2008). We first give the definition of identifiability in the decentralized setting.

Definition A.1. Consider a decentralized distribution set P^C(X) satisfying Assumption 3.1. Then, G is said to be identifiable if P^C(X) cannot be induced from any other DAG.

Condition A.2. (Minimality condition) Given the joint distribution P(X), P(X) is Markovian to a DAG G but not Markovian to any sub-graph of G.

Condition A.3. (Cond. 19 in (Peters & Bühlmann, 2014)) The triple (fj, P(Xi), P(ϵj)) does not solve the following differential equation for all xi, xj with ν′′(xj − f(xi))f′(xi) ≠ 0:

$$\xi'''=\xi''\left(-\frac{\nu'''f'}{\nu''}+\frac{f''}{f'}\right)-2\nu''f''f'+\nu'f'''+\frac{\nu'\nu'''f''f'}{\nu''}-\frac{\nu'(f'')^{2}}{f'}.$$

Here, f := fj, ξ := log P(Xi), and ν := log P(ϵj) are the logarithms of the strictly positive densities.

Definition A.4. (Restricted ANM; Def. 27 in (Peters & Bühlmann, 2014)) Consider an ANM with d variables. This SEM is called a restricted ANM if ∀j ∈ V, i ∈ PA_j, and all sets S ⊆ V with PA_j\{i} ⊆ S ⊆ PA_j\{i, j}, there is an x_S with P(x_S) > 0 such that the triple

$$\left(f_{j}(x_{\mathrm{PA}_{j}\setminus\{i\}},\underbrace{\,\cdot\,}_{X_{i}}),\;P\left(X_{i}\mid X_{\mathrm{S}}=x_{\mathrm{S}}\right),\;P\left(\epsilon_{j}\right)\right)$$

satisfies Condition A.3. Here, the under-brace indicates the input component of fj for variable Xi. In particular, we require the noise variables to have non-vanishing densities and the functions fj to be continuous and three times continuously differentiable.
Assumption A.5. (Faithfulness) Let P^C(X) satisfy Assumption 3.1. At least one distribution P^{c_k}(X) ∈ P^C(X) meets Assumption A.6, and the other distributions are faithful to G.

Assumption A.6. Let a distribution P(X) with X = (X1, X2, · · · , Xd) be induced from a restricted ANM (Definition A.4) with graph G, and let P(X) satisfy the Minimality condition w.r.t. G.

Proposition A.7. *Given* P^C(X) satisfying Assumption A.5, G *can be identified from* P^C(X).

Proof. From Remark 3.2, P^{c_k}(X) ∈ P^C(X) is Markov w.r.t. G for every c_k. For each c_k ∈ C whose P^{c_k}(X) does not satisfy Assumption A.6, the completed partially directed acyclic graph (CPDAG) Gˆ induced by G can be identified (Spirtes et al., 2001). (1) This says that these distributions can be induced from any DAG in M(G), which certainly includes G. Notice that skeleton(Gˆ) = skeleton(G) and any edge Xi ← Xj in Gˆ also exists in G. Then, for those c_k whose P^{c_k}(X) satisfies Assumption A.6, G itself can be identified. (2) That is to say, distributions satisfying Assumption A.6 can only be induced from G. Thus, two kinds of graphs, Gˆ and G, are obtained, and G can be easily identified. Combining (1) and (2), P^{c_k}(X) ∈ P^C(X) for every c_k can only be induced by G. Hence, G is identifiable. ■

Future work is to relax our invariant DAG assumption to an invariant CPDAG assumption, which means that the DAGs across different clients share the same conditional independences. For this case, the generalized score functions (Huang et al., 2018) can be adopted to search for the Markov equivalence class. However, it is not straightforward to incorporate this method into our FedDAG framework, since its score is motivated by a kernel-based (conditional) independence test rather than a penalized likelihood. Moreover, this method does not support a continuous search strategy. It would be interesting to explore a penalized likelihood-based method for this case and incorporate it into our framework.

## B Implementations

The compared DAG structure learning methods used in this paper all have available implementations, listed below:

- MCSL: Codes are available in gCastle at https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle. The first author of MCSL added the implementation to this package.
- NOTEARS and NOTEARS-MLP: Codes are available at the first author's GitHub repository https://github.com/xunzheng/notears
- NOTEARS-ADMM: Codes are available at the first author's GitHub repository https://github.com/ignavierng/notears-admm
- DAG-GNN: Codes are available at the author's GitHub repository https://github.com/fishmoon1234/DAG-GNN
- PC and GES: the implementations of PC and GES are available in the causal-learn package repository https://causal-learn.readthedocs.io/en/latest/index.html
- CAM: the codes are available in the CRAN R package archive https://cran.r-project.org/src/contrib/Archive/CAM/

Our implementation is heavily based on the existing tool-chain gCastle (Zhang et al., 2021), which includes many gradient-based DAG structure learning methods.

## B.1 Graph Generation

To simulate DAGs for generating observations, we introduce two kinds of graph generation methods, Erdős-Rényi (ER) and Scale-Free (SF) graphs. To generate an ER graph, we first randomly sample a topological order and then add each allowed directed edge independently with probability p = 2s/(d² − d), where s is the expected number of edges in the resulting DAG. To generate SF graphs, we take the Barabási-Albert model and add the nodes one by one. From the above descriptions, we can see that the degree distribution of ER graphs follows a Poisson distribution, while the degrees of SF graphs follow a power law: a few nodes, often called hubs, have a high degree (Lachapelle et al., 2020).

## B.2 SEM Simulation

In our experimental setup, for each client in the heterogeneous data setting, we randomly choose a nonlinear type from the given four functions with equal probability. The nonlinear function choices are exactly the same as those used in NOTEARS-MLP (Zheng et al., 2020). The details are as follows. We simulate the SEM Xj = fj(X_{pa_j}) + Zj for all j ∈ [d] in the topological order induced by G.

GP: fj is drawn from a Gaussian process with an RBF kernel of length-scale one.

GP-add: $f_j(X_{pa_j}) = \sum_{k \in pa_j} f_{jk}(X_k)$, where each fjk is drawn from a GP.

MLP: fj is a randomly initialized MLP with one hidden layer of size 100 and Sigmoid activation.

MIM: also called the index model. $f_j(X_{pa_j}) = \sum_{m=1}^{3} h_m\left(\sum_{k \in pa_j} \theta_{jmk} X_k\right)$, where h1 = tanh, h2 = cos, h3 = sin, and each θjmk is uniformly drawn from [−2, −0.5] ∪ [0.5, 2].

## B.3 Detailed Metrics

SHD is defined as the Hamming distance between two partially directed acyclic graphs (PDAGs): it counts the number of node pairs for which the edge type differs between the two PDAGs. In a PDAG, there exist four possible edge types between two nodes: i → j, i ← j, i − j, and no edge. SHD simply counts the differing edges between the two graphs. Since SHD is defined over the space of PDAGs, we can of course use it to calculate distances in the DAG and CPDAG spaces. True Positive Rate (TPR) and False Discovery Rate (FDR) are two common metrics in the machine learning community. TPR, also referred to as sensitivity or recall, measures the percentage of actual positives that are correctly identified. FDR is defined as the expected proportion of errors committed by falsely rejecting the null hypothesis. Let TP be the true positives (samples correctly classified as positive), FN the false negatives (samples incorrectly classified as negative), FP the false positives (samples incorrectly classified as positive), and TN the true negatives (samples correctly classified as negative). Then,

$$TPR=\frac{TP}{TP+FN}\quad\text{and}\quad FDR=\frac{FP}{FP+TP}.$$
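A minimal sketch of these metrics for the DAG-vs-DAG case (ours; it omits the special handling of undirected edges used for PC and GES, and counts a reversed edge as a single difference) could look as follows:

```python
import numpy as np

def shd(B_est: np.ndarray, B_true: np.ndarray) -> int:
    """Structural Hamming Distance between two binary DAG adjacency matrices
    (B[i, j] = 1 for edge i -> j): counts node pairs whose edge type
    (i -> j, i <- j, or no edge) differs between the two graphs."""
    pair_differs = (B_est != B_true) | (B_est.T != B_true.T)
    return int(np.triu(pair_differs, k=1).sum())

def tpr_fdr(B_est: np.ndarray, B_true: np.ndarray) -> tuple:
    """TPR = TP / (TP + FN) and FDR = FP / (FP + TP) over directed edges."""
    tp = int(((B_est == 1) & (B_true == 1)).sum())
    fn = int(((B_est == 0) & (B_true == 1)).sum())
    fp = int(((B_est == 1) & (B_true == 0)).sum())
    return tp / max(tp + fn, 1), fp / max(fp + tp, 1)
```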
## B.4 Hyper-Parameters Setting

In all experiments, there are no extra hyper-parameters to adjust for PC (with the Fisher-z test and p-value 0.01) and GES (BIC score). For NOTEARS, NOTEARS-MLP, and DAG-GNN, we use the default hyper-parameters provided in their papers/codes. For MCSL, the hyper-parameters that need to be modified are ρ_init and β. Specifically, if the experimental settings (10 and 20 variables) are the same as those in its paper, we simply take all the recommended hyper-parameters. For settings not implemented in its paper (40 variables), we tried two kinds of implementations: the first uses linear interpolation to choose the hyper-parameters, and the second takes the same parameters as ours. We find that the second choice always works better, and we report the experimental results obtained in the second way. Notice that CAM pruning has also been introduced to improve the performance of MCSL, which, however, cannot guarantee a better result in our settings; for simplicity and fairness, we just take the direct outputs of MCSL. Similar to MCSL (Ng et al., 2022b) and GraN-DAG (Lachapelle et al., 2020), we run several experiments on simulated data with known graph structure to search for the hyper-parameters and then use these hyper-parameters for all the simulated experiments. Specifically, we use seeds 1 to 10 to generate the simulated data for searching for the best combination of hyper-parameters, while all the experimental results reported in this paper are produced using seeds 2021 to 2030.

## B.5 Hyper-Parameters in the Real-Data Setting

Most DAG learning methods have hyper-parameters, more or less, which need to be decided prior to learning. Moreover, NN-based methods are especially sensitive to the selection of hyper-parameters. For instance, GraN-DAG (Lachapelle et al., 2020) defines a really large hyper-parameter space for searching for the optimal combination, and even uses different learning rates for the first sub-problem and the remaining sub-problems. MCSL and GS-FedDAG are sensitive to the selection of ρ_init and β when constructing and solving the sub-problems. As pointed out in (Kaiser & Sipos, 2021), NOTEARS focuses more on optimizing the scoring term in the early stage and pays more attention to approximating a DAG in the late stage. If NOTEARS cannot find a graph near G in the early stage, it will end with a worse result. To alleviate this problem, one may choose to (1) enlarge the learning rate or take more steps when solving the first few sub-problems, as in GraN-DAG, or (2) reduce the coefficient ρ_init to let the optimizer pay more attention to the scoring term in the early stages, as in MCSL. The other trick we found useful when dealing with real data is increasing the ℓ1 coefficient. This mostly results from the fact that real data may not fit well with the data generation assumptions made in most papers. Therefore, we choose to conduct a grid search to find the best combination of ρ_init, β, and λ_ℓ1 for DAG structure learning on real data.

In the practice of DAG structure learning, it is impossible to use G to select the hyper-parameters. One common approach is to try multiple hyper-parameter combinations and keep the one yielding the best score evaluated on a validation set (Koller & Friedman, 2009; Ng et al., 2022b; Lachapelle et al., 2020). However, the direct use of this method may not work for some algorithms, such as MCSL, NOTEARS-MLP, and GS-FedDAG.
The reason is similar to the optimization property discussed above: in the late stage of optimization, the optimizer focuses heavily on finding *a DAG* by enlarging the penalty coefficient ρ, so the learning of the relationship mechanisms is nearly ignored. To address this problem, we first record the DAG directly learned with a given combination of hyper-parameters. Then, we replace the parameter part describing the graph with this learned DAG. Afterward, we optimize the relationship-mechanism approximating part (which may go by a different name in other algorithms) using the score without the DAG constraint. Finally, the validation set is used to evaluate the learned model. The final hyper-parameters used on the real dataset in our paper are as follows:

Table 5: The hyper-parameters used on real data.

| Parameters | ρ_init | β | λ_ℓ1 |
|------------|--------|---|------|
| Values | 0.008 | 2 | 0.3 |

## B.6 Model Parameters

The GSL part in each local model is parameterized by a d × d matrix named U, and the Gumbel-Sigmoid approach is leveraged to approximate its binary form; a minimal sketch of this relaxation is given below. Each entry of U is initialized to 0, and the temperature τ is set to 0.2 for all settings. For the relationship-mechanism approximating part, we use 4 dense layers with 16 units in each hidden layer. All weights in the network are initialized using Xavier uniform initialization. The number of parameters used in each method is shown in Table 6.

Table 6: Model parameters for each client with different numbers of nodes.

| Method | 10 nodes | 20 nodes | 40 nodes |
|--------------|----------|----------|----------|
| NOTEARS | 100 | 400 | 1600 |
| Mask of MCSL | 100 | 400 | 1600 |
| NN of MCSL | 9930 | 19860 | 39720 |
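The Gumbel-Sigmoid relaxation can be sketched as follows (ours; a generic formulation following Jang et al. (2017), not the exact gCastle implementation, and self-loop masking is omitted for brevity):

```python
import numpy as np

def gumbel_sigmoid(U, tau=0.2, seed=None):
    """Differentiable approximation g_tau(U) of binary edge indicators.
    Logistic noise is added to the logits U, and a temperature-scaled
    sigmoid squashes the result; as tau -> 0 the output approaches {0, 1}."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(low=1e-8, high=1.0 - 1e-8, size=U.shape)
    logistic_noise = np.log(u) - np.log1p(-u)   # Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(U + logistic_noise) / tau))

# With U initialized at 0 and tau = 0.2 (the values given in B.6), the soft
# adjacency starts out as roughly fair coin flips pushed towards 0 or 1.
soft_adj = gumbel_sigmoid(np.zeros((10, 10)), tau=0.2, seed=0)
```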
## B.7 Training Parameters

Our GS-FedDAG and AS-FedDAG are implemented with the following hyper-parameters. We use ADAM (Kingma & Ba, 2015) with learning rate 3 × 10−2, and all the observational data D^{c_k} on each client are used for computing the gradient. The detailed parameters used in Algorithms 1 and 2 are listed in Table 7.

Table 7: The hyper-parameters used on simulated data in this paper.

| Parameters | α_init | h_tol | it_max | it_inner | it_fl | γ | ρ_max | λ_ℓ1 |
|------------|--------|-----------|--------|----------|-------|------|-----------|------|
| Values | 0 | 1 × 10−10 | 25 | 1000 | 200 | 0.25 | 1 × 10^14 | 0.01 |

Notice that, as illustrated in MCSL (Ng et al., 2022b), the performance of the algorithm is affected by the initial value ρ_init and the choice of β, and a small ρ_init and β result in a rather long training time. As noted in (Kaiser & Sipos, 2021), the MLE term plays an important role in the early stage of training and strongly affects the final results; therefore, carefully picking a proper combination of ρ_init and β leads to better results. In our method, we tune these two parameters via experiments of the same scale with seeds 1 ∼ 10. For each variable scale and training type, the parameters are adjusted once and applied to all other experiments with the same variable scale. We find that the combinations of parameters in Table 8 work well for our method. Our method also adopts an ℓ1 sparsity term on gτ(U), where the sparsity coefficient λ_ℓ1 is chosen as 0.01 for all settings.

Table 8: The combinations of ρ_init and β on simulated data in our method.

| Method | ρ_init (10 nodes) | β (10 nodes) | ρ_init (20 nodes) | β (20 nodes) | ρ_init (40 nodes) | β (40 nodes) |
|-----------|-------------------|--------------|-------------------|--------------|-------------------|--------------|
| AS-FedDAG | 6 × 10−3 | 10 | 1 × 10−5 | 20 | 1 × 10−11 | 120 |
| GS-FedDAG | 6 × 10−3 | 10 | 6 × 10−5 | 20 | 1 × 10−11 | 120 |

![26_image_0.png](26_image_0.png)

Figure 5: The sensitivity analysis of hyper-parameters.

## B.8 Sensitivity Analysis of Hyper-Parameters

Here, we show the sensitivity analysis of it_fl, α_init, and λ_ℓ1. From the experimental results in Figure 5, we find that our method is relatively robust to it_fl; that is, it_fl can be reduced to alleviate the communication cost while the performance is well preserved. λ_ℓ1 is the coefficient of the ℓ1 sparsity term, which affects the final results. Because we have no sparsity information about the underlying graph, we set λ_ℓ1 = 0.01 in all settings; when dealing with real data, we recommend that readers adjust this parameter using our parameter-tuning method provided in Section B.5. The results for α_init are exactly as expected. As discussed before, our method tries to maximize the likelihood term of the total loss in the early stages, which is important for finding the ground-truth DAG; setting a relatively large α_init would disturb the early learning stages. Therefore, we recommend directly setting α_init to 0 in all settings.

## C Discussions on Our Method

## C.1 Novelty and Contributions

Firstly, we acknowledge the contribution of our baseline method MCSL (Ng et al., 2022b), which performs well in many settings and helps to guarantee the performance of our proposed method. We also appreciate the excellent baseline method FedAvg (McMahan et al., 2017), which provides an efficient federated learning recipe. Our FedDAG is highly inspired by, and benefits from, these two works. The main contributions of our proposed method are (1) being **one of the first works** that investigate the practical problem of DAG structure learning in a federated setup and (2) providing the FedDAG approach, which guarantees **privacy protection** by avoiding raw data leakage and allows **data heterogeneity** across clients. The concurrent work NOTEARS-ADMM (Ng & Zhang, 2022) also considers the same problem, while our GS-FedDAG can (1) achieve better performance in most of the settings, (2) handle the nonlinear cases well, (3) allow heterogeneous data, and (4) provide a quite flexible federated DAG structure learning framework.

Discussions on the simple averaging. Even though averaging is the simplest way of aggregating and exchanging information, we find it **quite an effective** way of solving the federated DAG structure learning problem, which is an advantage of our method. Our simple averaging for the homogeneous case can nearly reach the same performance as using all data, and for the heterogeneous case, GS-FedDAG still obtains satisfactory results. As future work, more advanced information aggregation methods (Wang et al., 2020a) can be incorporated into our framework to boost the performance further.

## C.2 Difference from Graph Neural Network (GNN) Learning

Four main reasons make DAG structure learning and GNN learning two different research lines. (1) Nodes in a DAG represent variables, and directed edges describe the single-direction relations between different variables.
In GNN research, by contrast, graphs typically refer to graph-structured data, such as social networks, protein networks, and traffic networks. (2) Networks in DAG structure learning are leveraged to learn the underlying mechanisms, while networks in GNN are used for node embedding and feature extraction. (3) The learned DAG can be used for interventional and counterfactual reasoning (Kitson et al., 2021). (4) DAG structure learning cares more about identifiability: it is essential to precisely identify the true underlying relations from the observations. In the federated setup, most existing federated GNN methods (Xie et al., 2021; Liu & Yu, 2022) assume that the underlying graphs are known and localized. What is being learned in the federation is the weight aggregation of the GNN but not its graph. This is the main difference between our federated DAG learning and federated GNN learning.

## C.3 FedDAG As A Framework

In this paper, we restrict our attention to the case where all concerned variables are fully observed. We also only take MCSL (Ng et al., 2022b) as the baseline method. However, all gradient-based methods can be incorporated into our AS-FedDAG framework to deal with homogeneous data. To deal with heterogeneous data, we prefer baseline methods that learn the DAG structure and the mechanisms separately. Other baseline methods that can easily be combined into our framework are NOTEARS-MLP (Zheng et al., 2020) and DAG-GNN (Yu et al., 2019). Unfortunately, many works are not of this form, such as GraN-DAG (Lachapelle et al., 2020), CD-RL (Zhu et al., 2020), and their follow-up works.

Latent variables. In this paper, we adopt the assumption of no unobserved common confounders. Handling latent confounders is a fundamentally important but challenging problem in traditional DAG structure learning, not to mention the federated setup. Until now, the theoretical results on the structure identifiability of DAG learning with latent confounders have been too weak to be used in practice, since overly strict assumptions are required. In recent progress on latent-variable research, Bhattacharya et al. (2021) take acyclic directed mixed graphs (ADMGs) to describe graphs with latent confounders. With different types of restrictions, three graph classes, named ancestral, arid, and bow-free graphs, are given, and a corresponding graph constraint is derived for each class. For example, trace(e^D) − d + sum(D ◦ B) = 0 is set for the bow-free graph9, where D is the adjacency matrix recording the directed edges and B records the bidirected edges. We can directly replace our constraint to incorporate this method into our framework. However, this method can only deal with the linear Gaussian case, which is somewhat limited.

9See more details in Section 4 of (Bhattacharya et al., 2021)
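For illustration, a minimal NumPy sketch of the bow-free constraint quoted above; the function name is ours, and we assume D and B are entrywise nonnegative weighted adjacency matrices (e.g., relaxed binary masks).

```python
import numpy as np
from scipy.linalg import expm

def bowfree_constraint(D, B):
    # trace(e^D) - d penalizes directed cycles in D, while sum(D ∘ B)
    # penalizes "bows", i.e., pairs joined by both a directed and a
    # bidirected edge. Zero iff both conditions hold (for D, B >= 0).
    d = D.shape[0]
    return np.trace(expm(D)) - d + np.sum(D * B)
```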
## C.4 Broader Impact Statement

In federated learning, the server and some clients participate in the process, and, as discussed above, the DAG is shared among all clients. FedDAG is motivated by the fact that the data on each client are not enough for identifying the ground-truth DAG. Hence, the graph information is not private for the clients; for the server, it depends. In our original motivation, we only cared about the "raw data leakage" problem and did not consider the privacy of the graph. Some relations can be public in real-world scenarios, such as disease research, and for these cases our method can still work. However, the graph structure may sometimes also be private information. This problem can be addressed by picking one client as the proxy server.

## C.5 The Consistency Results By BIC Score

Actually, for linear additive noise models with Gaussian noises, the consistency results for maximizing the BIC score to identify the DAG (Markov equivalence class or DAG) have been well established (Tian & Pearl, 2001; Huang et al., 2020a). In this case, with the DAG space constraint, the unique maximum of the score function $S^{c_k}(\mathcal{D}^{c_k}, \Phi^{c_k}, U^{c_k})$ with the BIC score corresponds to the ground-truth DAG; high-dimensional consistency has even been established for the linear Gaussian SEM when the model is identifiable (Aragam et al., 2019). Since the ground-truth G corresponds to each $S^{c_k}$, the global maximum $\arg\max_{\Phi,U} \sum_{k=1}^{m} S^{c_k}(\mathcal{D}^{c_k}, \Phi^{c_k}, U)$ under the DAG constraint leads to the ground-truth DAG. For nonlinear ANMs, however, even though many practical methods, e.g., MCSL (Ng et al., 2022b), NOTEARS-MLP (Zheng et al., 2020), and CD-RL (Zhu et al., 2020), have been proposed to solve this problem by maximizing the BIC score, the theoretical consistency results are still lacking and would be an interesting direction for future work. Therefore, our framework based on these methods inherits this theoretical limit for the nonlinear case. As shown in our paper, however, empirical results can still demonstrate the method's effectiveness.

## C.6 Does The Global Maximum Of Eq. (6) Correspond To The Ground-Truth DAG?

Firstly, for observations of identifiable ANMs on each client, the unique maximum of the score function $S^{c_k}(\mathcal{D}^{c_k}, \Phi^{c_k}, U^{c_k})$ with the BIC score corresponds to the ground-truth DAG (Zheng et al., 2018; Ng et al., 2022b); high-dimensional consistency holds even for the linear Gaussian SEM when the model is identifiable. Since the ground-truth G corresponds to each $S^{c_k}$, the global maximum $\arg\max_{\Phi,U} \sum_{k=1}^{m} S^{c_k}(\mathcal{D}^{c_k}, \Phi^{c_k}, U^{c_k})$ under the DAG constraint leads to the ground-truth DAG.

## C.7 Can Algorithms 1 And 2 Solve Eq. (6)?

Unfortunately, the global maximum of Eq. (6) cannot be reliably reached by gradient-based optimization methods, which is mainly caused by the non-convexity of the acyclicity constraint. Firstly, discovering the ground-truth DAG is an NP-hard problem. Traditional methods like PC and GES search the discrete DAG space to solve this problem, which is relatively time-consuming. NOTEARS then introduces an equality constraint (3) to formulate the DAG search as a continuous optimization problem, which can be solved by gradient descent methods. However, the trade-off is that this equality constraint is non-convex, which may keep us from finding the ground-truth DAG (the global optimum of (6)). That is to say, using gradient descent to solve (6) can only reach a local optimum of (6). A similar conclusion holds for recent continuous optimization-based CD methods such as GraN-DAG (Lachapelle et al., 2020), DAG-GNN (Yu et al., 2019), and NOTEARS-MLP (Zheng et al., 2020).
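To make the role of the acyclicity constraint concrete, here is a small NumPy check of the NOTEARS-style function h(W) = trace(e^{W◦W}) − d, which vanishes exactly on acyclic weighted graphs; the example matrices are ours.

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    # h(W) = trace(e^{W ◦ W}) - d, zero iff the weighted graph W is acyclic:
    # trace(expm(W*W)) accumulates weighted closed walks of every length.
    return np.trace(expm(W * W)) - W.shape[0]

rng = np.random.default_rng(0)
acyclic = np.triu(rng.random((4, 4)), k=1)   # strictly upper triangular => DAG
cyclic = acyclic.copy()
cyclic[3, 0] = 0.5                           # back edge closes a cycle
print(notears_h(acyclic))  # ~0 up to floating point error
print(notears_h(cyclic))   # strictly positive
```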
## C.8 Can U Finally Satisfy The Acyclicity Constraint In Eq. (3)?

For simplicity, allow us to explain our method with some parameters set to specific values. Firstly, following NOTEARS (Zheng et al., 2018) and MCSL (Ng et al., 2022b), we take the Augmented Lagrangian Method (ALM) to convert the constrained optimization problem into a series of sub-problems without the hard constraint but with two penalty terms. For the t-th sub-problem, the specific formulation of Eq. (8) is determined by αt and ρt, which are updated to αt+1 and ρt+1 after solving the t-th sub-problem for 1000 gradient descent steps. When dealing with each sub-problem, each client locally updates its personalized model *with acyclicity penalty terms*, which is precisely what enforces the acyclicity constraint. During these 1000 steps, the matrices U are averaged every 200 steps (indeed, the simple average itself has nothing to do with acyclicity). After the 1000 steps are finished (i.e., right after the fifth block of 200 steps), a new U is obtained. Then, αt and ρt are updated to αt+1 and ρt+1 to formulate the next sub-problem of ALM, as described in steps 5 ∼ 9 of Algorithm 1, and a new cycle begins. Therefore, we argue that (1) the acyclicity constraint is enforced by the acyclicity penalty when solving each sub-problem, and (2) *the convergence of* U is supported by the convergence analysis of personalized FedAvg on heterogeneous data.
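To make the loop structure above concrete, here is a schematic NumPy sketch; the callable interface `local_grads[i](U, alpha, rho)` is hypothetical, the penalty schedule is simplified, and the default constants only loosely mirror Tables 5 and 7 (plain gradient ascent stands in for ADAM).

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_h(U):
    # NOTEARS-style acyclicity measure: zero iff U encodes a DAG.
    return float(np.trace(expm(U * U)) - U.shape[0])

def feddag_alm(local_grads, U, rho=8e-3, beta=2.0, rho_max=1e14,
               outer=25, inner=1000, avg_every=200, lr=3e-2):
    # local_grads[i](U, alpha, rho) is a user-supplied callable returning
    # client i's ascent direction on its penalized local score (hypothetical
    # interface; in FedDAG this is computed from the client's private data).
    alpha = 0.0
    for _ in range(outer):                     # one ALM sub-problem per pass
        Us = [U.copy() for _ in local_grads]
        for step in range(1, inner + 1):
            for i, grad in enumerate(local_grads):
                Us[i] = Us[i] + lr * grad(Us[i], alpha, rho)
            if step % avg_every == 0:          # server averages the U matrices
                U = np.mean(Us, axis=0)
                Us = [U.copy() for _ in local_grads]
        alpha = alpha + rho * acyclicity_h(U)  # multiplier update
        rho = min(beta * rho, rho_max)         # penalty growth (simplified)
    return U
```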
## D Supplementary Experimental Details

## D.1 Uneven Distributions

For federated learning problems in the real world, different clients may own different numbers of observations. To verify the stability of our method, we simulate uneven distributions across clients: for each client, the number of observations is randomly chosen from the list [20%, 40%, 60%, 80%] × n, where n is the maximal number of observations. The experimental results are shown in Fig. 6, from which we can find that our method shows relatively stable performance in this setting.

![29_image_0.png](29_image_0.png)

Figure 6: Results of uneven distributions on different clients.

## D.2 Varying Clients

In this setting, we consider a fixed number of samples distributed across a varying number of clients. We conduct experiments for (2, 4, 6, 8) clients and show the results in Fig. 7. As the number of clients increases, our method shows better performance.

![29_image_1.png](29_image_1.png)

Figure 7: Results of performances with varying clients.

## D.3 Dense Graphs

Our method is also evaluated on denser graphs, with experimental results shown in Table 9 and Table 10. From these results, we can see that our method shows consistently better performance than other methods in the denser graph setting. For the homogeneous case, both AS-FedDAG and GS-FedDAG obtain nearly as low an SHD as MCSL trained on all data and do far better than all methods trained on separated data. For the heterogeneous case, our GS-FedDAG still shows the best performance. Compared to NOTEARS in the 20-variable case, GS-FedDAG shows similar SHD results but a much better TPR; therefore, reducing the false discovery rate of GS-FedDAG would be an interesting direction.

| | ER4 with 10 nodes | | SF4 with 10 nodes | | ER4 with 20 nodes | | SF4 with 20 nodes | |
|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| All data PC | 27.3 ± 3.2 | 0.29 ± 0.07 | 18.9 ± 4.9 | 0.37 ± 0.16 | 68.2 ± 9.5 | 0.23 ± 0.06 | 60.2 ± 9.3 | 0.30 ± 0.08 |
| NOTEARS | 34.3 ± 1.7 | 0.03 ± 0.02 | 22.7 ± 1.3 | 0.05 ± 0.05 | 71.8 ± 7.2 | 0.03 ± 0.01 | 62.8 ± 0.9 | 0.02 ± 0.01 |
| MCSL | 15.5 ± 5.9 | 0.57 ± 0.15 | 4.5 ± 3.1 | 0.83 ± 0.11 | 33.8 ± 10.4 | 0.55 ± 0.11 | 19.8 ± 7.5 | 0.69 ± 0.11 |
| Sep data PC | 31.5 ± 2.1 | 0.14 ± 0.03 | 20.4 ± 0.58 | 0.21 ± 0.03 | 68.7 ± 8.1 | 0.13 ± 0.03 | 60.9 ± 2.8 | 0.15 ± 0.02 |
| NOTEARS | 34.3 ± 1.8 | 0.03 ± 0.01 | 22.7 ± 1.0 | 0.06 ± 0.04 | 70.1 ± 6.9 | 0.03 ± 0.01 | 62.3 ± 0.56 | 0.03 ± 0.01 |
| MCSL | 15.8 ± 3.3 | 0.61 ± 0.09 | 8.3 ± 4.3 | 0.78 ± 0.11 | 49.3 ± 11.8 | 0.63 ± 0.10 | 39.7 ± 5.6 | 0.73 ± 0.07 |
| GS-FedDAG | 16.9 ± 4.9 | 0.53 ± 0.12 | 5.4 ± 3.0 | 0.78 ± 0.12 | 35.4 ± 10.9 | 0.53 ± 0.11 | 20.7 ± 5.1 | 0.69 ± 0.08 |
| AS-FedDAG | 17.4 ± 4.8 | 0.53 ± 0.12 | 5.5 ± 2.8 | 0.79 ± 0.11 | 40.7 ± 4.8 | 0.57 ± 0.10 | 24.1 ± 5.8 | 0.71 ± 0.09 |

Table 9: Results on nonlinear ANM with dense graphs (Homogeneous data).

| | ER4 with 10 nodes | | SF4 with 10 nodes | | ER4 with 20 nodes | | SF4 with 20 nodes | |
|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|---------------------|-------------|
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| Sep data PC | 29.3 ± 1.3 | 0.23 ± 0.03 | 20.3 ± 2.1 | 0.31 ± 0.06 | 71.9 ± 8.1 | 0.19 ± 0.03 | 62.7 ± 2.8 | 0.22 ± 0.03 |
| NOTEARS | 20.5 ± 2.6 | 0.45 ± 0.08 | 12.2 ± 2.9 | 0.54 ± 0.11 | 43.2 ± 7.0 | 0.49 ± 0.08 | 39.4 ± 6.8 | 0.47 ± 0.10 |
| MCSL | 20.0 ± 3.2 | 0.52 ± 0.07 | 13.7 ± 2.2 | 0.65 ± 0.07 | 65.1 ± 7.7 | 0.33 ± 0.05 | 59.4 ± 5.3 | 0.31 ± 0.05 |
| GS-FedDAG | 8.5 ± 3.7 | 0.84 ± 0.09 | 4.5 ± 2.0 | 0.93 ± 0.07 | 40.7 ± 14.5 | 0.74 ± 0.07 | 39.9 ± 10.8 | 0.68 ± 0.07 |

Table 10: Results on nonlinear ANM with dense graphs (Heterogeneous data).

## D.4 Comparisons With Voting Method

There is another interesting research line (Na & Yang, 2010) that also tries to learn a DAG from decentralized data. We add the DAG combination method proposed in (Na & Yang, 2010), which votes on each entry of the adjacency matrix to get the final DAG. From the experimental results in Table 11, we can find that for PC and NOTEARS, the combining method contributes little improvement. This is because the DAGs reported by the local clients are too poor to yield a good combined result. For MCSL, the combining method works well for improving performance, and the reason can easily be inferred from the results: the DAGs reported by local clients have bad SHDs but good TPRs, which means that the False Discovery Rates (FDRs) are high. The combining method can reduce the FDRs while keeping the TPRs good, so the SHD is further reduced. Still, our GS-FedDAG shows the best performance in all settings.

## D.5 Comparisons With CAM

Here, we add one more identifiable baseline named the causal additive model (CAM) (Bühlmann et al., 2014), which also serves as a baseline in MCSL (Ng et al., 2022b), GraN-DAG (Lachapelle et al., 2020), and DAG-GNN (Yu et al., 2019).
From the results in Tables 12 and 13, we can see that our methods consistently show an advantage over CAM. CAM also assumes a nonlinear ANM for data generation; however, it limits the nonlinear function to be additive. In a general ANM, $X_i = f_i(X_{pa_i}) + \epsilon_i$, while CAM assumes $X_i = \sum_{j \in pa_i} f_{i \leftarrow j}(X_j) + \epsilon_i$, which limits the capacity of its model.

| | Homogeneous data (GP) | | | | Heterogeneous data | | | |
|-------------------|-------------------------|-------------------|-------------------|----------------------|-------------------|-------------|-------------|-------------|
| | ER2 with 10 nodes | | ER2 with 20 nodes | | ER2 with 10 nodes | | ER2 with 20 nodes | |
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| Sep data PC | 14.1 ± 2.4 | 0.31 ± 0.06 | 32.7 ± 6.5 | 0.28 ± 0.07 | 12.5 ± 2.7 | 0.45 ± 0.07 | 28.5 ± 6.3 | 0.44 ± 0.07 |
| NOTEARS | 16.5 ± 2.0 | 0.06 ± 0.04 | 31.7 ± 6.0 | 0.11 ± 0.04 | 7.6 ± 2.6 | 0.60 ± 0.11 | 15.0 ± 3.1 | 0.62 ± 0.09 |
| MCSL | 7.1 ± 3.2 | 0.83 ± 0.08 | 24.8 ± 5.5 | 0.88 ± 0.07 | 9.2 ± 1.8 | 0.72 ± 0.06 | 23.3 ± 5.8 | 0.56 ± 0.08 |
| Voting PC | 13.3 ± 3.0 | 0.27 ± 0.11 | 29.7 ± 5.9 | 0.22 ± 0.05 | 11.4 ± 3.4 | 0.36 ± 0.13 | 25.5 ± 6.8 | 0.29 ± 0.13 |
| NOTEARS | 15.6 ± 2.2 | 0.11 ± 0.06 | 32.6 ± 6.2 | 0.09 ± 0.05 | 7.8 ± 4.0 | 0.56 ± 0.20 | 18.4 ± 11.6 | 0.49 ± 0.30 |
| MCSL | 8.0 ± 3.1 | 0.85 ± 0.16 | 18.1 ± 7.8 | 0.88 ± 0.06 | 6.9 ± 2.2 | 0.71 ± 0.13 | 10.1 ± 4.6 | 0.79 ± 0.09 |
| GS-FedDAG | 2.4 ± 2.0 | 0.86 ± 0.12 | 6.2 ± 4.0 | 0.85 ± 0.10 | 1.9 ± 1.6 | 0.99 ± 0.02 | 6.2 ± 4.7 | 0.89 ± 0.09 |
| AS-FedDAG | 1.8 ± 2.0 | 0.89 ± 0.12 | 5.0 ± 4.2 | 0.88 ± 0.11 | NaN | NaN | NaN | NaN |

Table 11: Comparison with the voting method.

| | | ER2 with 10 nodes | | SF2 with 10 nodes | | ER2 with 20 nodes | | SF2 with 20 nodes | |
|----------|-----|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| | | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| All data | CAM | 9.5 ± 2.9 | 0.87 ± 0.09 | 9.1 ± 3.1 | 0.84 ± 0.10 | 21.4 ± 4.7 | 0.77 ± 0.08 | 26.6 ± 6.1 | 0.75 ± 0.07 |
| Sep data | CAM | 11.8 ± 2.6 | 0.40 ± 0.10 | 11.1 ± 1.5 | 0.38 ± 0.11 | 24.3 ± 5.8 | 0.40 ± 0.07 | 26.8 ± 2.0 | 0.36 ± 0.06 |
| | GS-FedDAG | 2.4 ± 2.0 | 0.86 ± 0.12 | 2.7 ± 2.2 | 0.86 ± 0.13 | 6.2 ± 4.0 | 0.85 ± 0.10 | 14.7 ± 7.0 | 0.80 ± 0.11 |
| | AS-FedDAG | 1.8 ± 2.0 | 0.89 ± 0.12 | 2.5 ± 2.7 | 0.85 ± 0.15 | 5.0 ± 4.2 | 0.88 ± 0.11 | 7.8 ± 5.5 | 0.80 ± 0.14 |

Table 12: Comparisons with CAM on nonlinear ANM (Homogeneous data-GP).

## D.6 Comparisons With NOTEARS-ADMM

In this subsection, we give detailed experimental comparisons with NOTEARS-ADMM to verify that our averaging strategy is simple but effective. Firstly, we report results on linear models, which are the main focus of Ng & Zhang (2022). As shown in Fig. 8, even on linear models, our AS-FedDAG consistently shows an advantage over NOTEARS-ADMM. Then, for the nonlinear models, we consider two different function classes, MLP and Gaussian process (GP). The results are presented in Fig. 9, from which we can see that FedDAG shows better performance across all settings. Since NOTEARS-ADMM cannot handle heterogeneous data, we do not report results on heterogeneous data, for fairness of comparison.

## E More Discussions On The Experimental Results

**Why does our method outperform other methods, even some baseline methods using all data for training?** Let us first discuss AS-FedDAG (All-Shared FedDAG), which shares all model parameters (both Φ and U) among all clients. If we set itfl to 1 in AS-FedDAG, then AS-FedDAG is exactly MCSL trained on all data. For simplicity, we mark all parameters (namely Φ and U) of client $c_k$ together as $\theta^{c_k}$. Consider the t-th iteration, in which all clients receive the averaged parameters $\theta_t$ from the server and set their parameters to $\theta_t$. For AS-FedDAG, we first mark the gradient obtained using the local data of client $c_k$, for $k \in [m]$, as $g_t^{c_k}$. Each client $c_k$ then updates its parameters for one step by $\theta_t^{c_k} = \theta_t - lr \times g_t^{c_k}$, where lr is the learning rate. Afterward, the server collects all parameters and averages them to get

$$\theta_{t+1} = \frac{\sum_{k=1}^{m}\theta_t^{c_k}}{m} = \frac{\sum_{k=1}^{m}\left(\theta_t - lr \times g_t^{c_k}\right)}{m} = \theta_t - lr \times \frac{\sum_{k=1}^{m} g_t^{c_k}}{m}.$$

For MCSL, there is only one θ. If MCSL uses full gradient information, then $\theta_{t+1} = \theta_t - lr \times \frac{\sum_{k=1}^{m} g_t^{c_k}}{m}$ (the full gradient is just the average of the gradients from all samples). We can see that the updated parameters are exactly the same. If itfl > 1, we average all parameters every itfl iterations; even though the exact updating procedures are no longer identical, the expectations of the updated parameters are the same. This is why we say that MCSL trained on all data can serve as an approximate upper bound for our method, although it is unobtainable in our setting.
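This one-step equivalence is easy to check numerically; the following minimal NumPy snippet (with synthetic quadratic losses of our own choosing) verifies that averaging one-step local updates equals one centralized step with the averaged gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
m, dim, lr = 5, 8, 0.1
theta = rng.normal(size=dim)
# each client's gradient at theta: here, gradient of 0.5*||theta - a_k||^2
targets = rng.normal(size=(m, dim))
grads = theta - targets                       # shape (m, dim), one row per client

# AS-FedDAG with it_fl = 1: local step on each client, then server average.
theta_fed = np.mean([theta - lr * g for g in grads], axis=0)

# Centralized MCSL: one step with the full (averaged) gradient.
theta_central = theta - lr * grads.mean(axis=0)

assert np.allclose(theta_fed, theta_central)  # identical updates
```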
Table 13: Comparisons with CAM on nonlinear ANM (Heterogeneous data).

| | | ER2 with 10 nodes | | SF2 with 10 nodes | | ER2 with 20 nodes | | SF2 with 20 nodes | |
|----------|-----|-------------|-------------|-------------|-------------|--------------|-------------|--------------|-------------|
| | | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| All data | CAM | 31.9 ± 4.8 | 0.39 ± 0.15 | 31.8 ± 4.4 | 0.31 ± 0.17 | 104.6 ± 15.4 | 0.46 ± 0.15 | 116.9 ± 13.8 | 0.35 ± 0.07 |
| Sep data | CAM | 18.0 ± 1.7 | 0.52 ± 0.04 | 17.8 ± 2.1 | 0.51 ± 0.3 | 47.5 ± 9.2 | 0.52 ± 0.04 | 53.0 ± 6.1 | 0.50 ± 0.03 |
| | GS-FedDAG | 1.9 ± 1.6 | 0.99 ± 0.02 | 2.6 ± 1.3 | 0.93 ± 0.07 | 6.2 ± 4.7 | 0.89 ± 0.09 | 11.5 ± 6.7 | 0.81 ± 0.14 |

![32_image_0.png](32_image_0.png)

Figure 8: Comparisons with NOTEARS-ADMM on the linear model (IID).

In the GS-FedDAG (Graph-Shared FedDAG) method, only the graphs are averaged. However, this partial information-sharing mechanism still helps each client benefit from the information of other clients to find a better solution (Collins et al., 2021).

## F Discussions On Assumptions

## F.1 Data Heterogeneity

The general heterogeneous data setup should also include distribution shifts caused by interventions, since interventions on certain variables would likewise lead to heterogeneous distributions. Previous work (Huang et al., 2020b) has investigated this case and proposes the CD-NOD algorithm, which enhances the PC method to learn from heterogeneous data. However, CD-NOD needs to identify some edge directions by capturing the changing information among distributions; that is to say, it needs to gather all the data, which of course causes raw data leakage. In our paper, we restrict our attention to ANMs, and we focus on the mechanism and noise shifts among different clients.
Moreover, finding the identifiability conditions for learning graphs from general heterogeneous data (both mechanism shifts and interventional data) in the federated setup is a challenging but important problem, which is left for future work.

## F.2 Invariant DAG Assumption

Firstly, let us set aside the homogeneous data setting of FedDAG, which simply assumes that all SEMs are exactly the same but the data are generated at different local clients. We mainly discuss the heterogeneous setting, in which the SEMs vary but the DAG is shared among different clients.

![33_image_0.png](33_image_0.png)

Figure 9: Comparisons with NOTEARS-ADMM on nonlinear models (Homogeneous data).

| | GP | | MIM | | MLP | | GP-add | |
|------------------|------------|-------------|------------|-------------|------------|-------------|------------|-------------|
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| PC | 15.3 ± 2.6 | 0.37 ± 0.10 | 11.0 ± 4.9 | 0.60 ± 0.16 | 11.8 ± 4.3 | 0.61 ± 0.14 | 14.0 ± 4.7 | 0.49 ± 0.16 |
| GES | 13.0 ± 3.9 | 0.50 ± 0.18 | 9.6 ± 4.4 | 0.71 ± 0.17 | 15.8 ± 6.0 | 0.63 ± 0.14 | 14.4 ± 4.9 | 0.57 ± 0.17 |
| All data DAG-GNN | 16.2 ± 2.1 | 0.07 ± 0.06 | 13.7 ± 2.4 | 0.26 ± 0.10 | 18.2 ± 3.3 | 0.36 ± 0.12 | 13.3 ± 2.3 | 0.24 ± 0.10 |
| NOTEARS | 16.5 ± 2.0 | 0.05 ± 0.04 | 12.1 ± 3.2 | 0.34 ± 0.13 | 13.3 ± 3.4 | 0.35 ± 0.15 | 13.4 ± 2.2 | 0.23 ± 0.09 |
| N-S-MLP | 8.1 ± 3.8 | 0.56 ± 0.17 | 1.6 ± 1.3 | 0.95 ± 0.06 | 5.6 ± 1.3 | 0.81 ± 0.11 | 6.8 ± 4.0 | 0.65 ± 0.16 |
| MCSL | 1.9 ± 1.5 | 0.90 ± 0.08 | 0.7 ± 1.2 | 0.97 ± 0.06 | 12.7 ± 3.6 | 0.58 ± 0.24 | 1.9 ± 1.7 | 0.91 ± 0.07 |
| PC | 14.1 ± 2.4 | 0.31 ± 0.06 | 11.1 ± 3.6 | 0.48 ± 0.14 | 13.2 ± 3.6 | 0.42 ± 0.09 | 13.5 ± 3.2 | 0.37 ± 0.12 |
| GES | 12.7 ± 2.7 | 0.37 ± 0.09 | 10.6 ± 3.3 | 0.54 ± 0.12 | 14.6 ± 4.6 | 0.50 ± 0.13 | 12.0 ± 2.6 | 0.48 ± 0.08 |
| DAG-GNN | 15.7 ± 2.3 | 0.11 ± 0.05 | 11.7 ± 3.3 | 0.37 ± 0.12 | 17.7 ± 3.6 | 0.39 ± 0.11 | 13.0 ± 2.0 | 0.26 ± 0.10 |
| NOTEARS | 16.5 ± 2.0 | 0.06 ± 0.04 | 12.3 ± 3.0 | 0.33 ± 0.12 | 13.4 ± 3.4 | 0.35 ± 0.14 | 13.3 ± 2.3 | 0.24 ± 0.09 |
| N-S-MLP | 8.5 ± 2.9 | 0.56 ± 0.13 | 2.8 ± 1.5 | 0.93 ± 0.06 | 6.4 ± 1.3 | 0.81 ± 0.11 | 7.4 ± 2.9 | 0.67 ± 0.13 |
| MCSL | 7.1 ± 3.2 | 0.83 ± 0.08 | 4.4 ± 2.1 | 0.91 ± 0.06 | 13.4 ± 3.9 | 0.57 ± 0.21 | 6.5 ± 3.5 | 0.84 ± 0.07 |
| GS-FedDAG | 2.4 ± 2.0 | 0.86 ± 0.12 | 2.1 ± 1.4 | 0.91 ± 0.07 | 11.1 ± 3.1 | 0.57 ± 0.20 | 2.6 ± 1.6 | 0.87 ± 0.09 |
| AS-FedDAG | 1.8 ± 2.0 | 0.89 ± 0.12 | 1.7 ± 1.6 | 0.91 ± 0.08 | 10.5 ± 3.5 | 0.59 ± 0.22 | 2.4 ± 1.6 | 0.87 ± 0.08 |

Table 14: Results on nonlinear ANM with different functions (Homogeneous data, 10 nodes, ER2).

Essentially, an SEM models the physical processes of a system and the generation process behind observations. Intuitively, different SEMs usually describe different systems, and then, naturally, the DAGs may be different. In the case that the deployed systems on different clients are not the same, our method will break down because of model misspecification. Unfortunately, it is not straightforward to extend our current framework to deal with this case, and we leave it for future work. In this paper, we set the varying causal graphs case aside and focus on the invariant graph case. This can be justified by the fact that a single system can have various SEMs at different states (Huang et al., 2020b).
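To make this setting concrete, here is a toy generator for such data (a hypothetical sketch; the DAG, mechanisms, and noise model below are ours): every client shares one DAG, while mechanisms and noise scales are redrawn per client.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 200
parents = {0: [], 1: [0], 2: [0], 3: [1, 2]}   # shared DAG: 0->1, 0->2, 1->3, 2->3

def sample_client(rng, n):
    # Same DAG for every client; mechanisms f_j and noise scales are
    # redrawn per client, giving heterogeneous SEMs with an invariant graph.
    coef = {j: rng.uniform(0.5, 2.0, size=len(ps)) for j, ps in parents.items()}
    noise_scale = rng.uniform(0.5, 1.5)
    X = np.zeros((n, d))
    for j in range(d):                          # variables in topological order
        ps = parents[j]
        signal = np.tanh(X[:, ps] @ coef[j]) if ps else 0.0
        X[:, j] = signal + rng.normal(scale=noise_scale, size=n)  # nonlinear ANM
    return X

clients = [sample_client(rng, n) for _ in range(5)]
```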
In the real world, some cases are supported by our assumptions. The first example is fMRI recordings. As pointed out in (Huang et al., 2020b), fMRI recordings are usually non-stationary because information flows in the brain may change with the stimuli, tasks, and attention of the subject. Our federated setting only adds one more assumption: fMRI recordings from different clients cannot be shared. The second example is causal gene regulatory network inference (Omranian et al., 2016). The causal direction among genes, i.e., which gene regulates which gene, is believed to be the same, while the SEM mechanism could vary in each individual due to individual properties, such as age, gender, etc. Also, the assumption that domain shifts come from distribution shifts of the exogenous variables (the noise terms in our paper) has been widely accepted in the machine learning field, e.g., in invariant causal prediction (Peters et al., 2016) and IRM (Arjovsky et al., 2019).

| | GP | | MIM | | MLP | | GP-add | |
|------------------|------------|-------------|------------|-------------|-------------|-------------|-------------|-------------|
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| PC | 32.7 ± 9.4 | 0.48 ± 0.13 | 22.8 ± 5.8 | 0.60 ± 0.15 | 33.7 ± 12.3 | 0.50 ± 0.13 | 35.2 ± 8.0 | 0.50 ± 0.09 |
| GES | 27.1 ± 8.5 | 0.56 ± 0.11 | 21.5 ± 6.1 | 0.78 ± 0.09 | 44.9 ± 12.5 | 0.65 ± 0.11 | 41.7 ± 11.6 | 0.66 ± 0.08 |
| DAG-GNN | 32.5 ± 6.8 | 0.10 ± 0.08 | 26.7 ± 7.4 | 0.26 ± 0.13 | 32.1 ± 10.4 | 0.38 ± 0.08 | 27.2 ± 2.4 | 0.24 ± 0.08 |
| All data NOTEARS | 31.8 ± 6.0 | 0.11 ± 0.04 | 25.6 ± 6.1 | 0.29 ± 0.08 | 25.3 ± 8.0 | 0.40 ± 0.09 | 25.6 ± 3.9 | 0.28 ± 0.06 |
| N-S-MLP | 18.2 ± 4.5 | 0.52 ± 0.10 | 4.1 ± 2.0 | 0.95 ± 0.04 | 8.0 ± 3.9 | 0.86 ± 0.07 | 12.6 ± 2.2 | 0.70 ± 0.06 |
| MCSL | 4.6 ± 4.6 | 0.90 ± 0.13 | 1.7 ± 1.6 | 0.97 ± 0.04 | 18.1 ± 6.6 | 0.72 ± 0.14 | 3.1 ± 1.9 | 0.92 ± 0.05 |
| PC | 32.7 ± 6.5 | 0.28 ± 0.07 | 24.4 ± 5.6 | 0.46 ± 0.11 | 30.6 ± 8.0 | 0.41 ± 0.09 | 29.5 ± 5.6 | 0.42 ± 0.10 |
| GES | 28.6 ± 5.5 | 0.34 ± 0.06 | 20.5 ± 3.7 | 0.61 ± 0.06 | 34.4 ± 11.3 | 0.52 ± 0.09 | 29.3 ± 5.5 | 0.51 ± 0.07 |
| DAG-GNN | 31.7 ± 6.1 | 0.12 ± 0.04 | 26.8 ± 5.8 | 0.26 ± 0.06 | 34.1 ± 9.7 | 0.46 ± 0.07 | 26.5 ± 4.0 | 0.27 ± 0.05 |
| Sep data NOTEARS | 31.7 ± 6.0 | 0.11 ± 0.04 | 25.7 ± 5.9 | 0.29 ± 0.07 | 25.4 ± 7.4 | 0.42 ± 0.07 | 25.6 ± 3.8 | 0.29 ± 0.06 |
| N-S-MLP | 19.5 ± 4.7 | 0.52 ± 0.07 | 6.5 ± 1.9 | 0.92 ± 0.03 | 16.1 ± 8.6 | 0.86 ± 0.07 | 16.2 ± 3.3 | 0.70 ± 0.07 |
| MCSL | 24.8 ± 5.5 | 0.88 ± 0.07 | 20.4 ± 3.8 | 0.91 ± 0.05 | 30.2 ± 5.1 | 0.67 ± 0.12 | 16.2 ± 5.3 | 0.87 ± 0.05 |
| GS-FedDAG | 6.2 ± 4.0 | 0.85 ± 0.10 | 8.5 ± 2.8 | 0.93 ± 0.05 | 21.4 ± 7.9 | 0.71 ± 0.14 | 8.1 ± 3.2 | 0.85 ± 0.05 |
| AS-FedDAG | 5.0 ± 4.2 | 0.88 ± 0.11 | 3.3 ± 2.5 | 0.92 ± 0.07 | 20.1 ± 8.3 | 0.72 ± 0.14 | 5.6 ± 2.8 | 0.86 ± 0.06 |

Table 15: Results on nonlinear ANM with different functions (Homogeneous data, 20 nodes, ER2).

| | n = 100 | | n = 300 | | n = 600 | | n = 900 | |
|-----------|--------------|-------------|--------------|-------------|--------------|-------------|--------------|-------------|
| | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| PC | 55.5 ± 8.5 | 0.21 ± 0.06 | 57.3 ± 5.7 | 0.29 ± 0.07 | 60.4 ± 9.8 | 0.32 ± 0.11 | 62.4 ± 6.6 | 0.29 ± 0.10 |
| GES | 82.8 ± 13.7 | 0.38 ± 0.12 | 96.4 ± 14.9 | 0.48 ± 0.08 | 102.9 ± 13.6 | 0.51 ± 0.08 | 106.3 ± 14.3 | 0.50 ± 0.11 |
| DAG-GNN | 61.8 ± 14.7 | 0.39 ± 0.07 | 56.8 ± 9.7 | 0.37 ± 0.08 | 57.7 ± 12.0 | 0.38 ± 0.08 | 57.9 ± 12.1 | 0.32 ± 0.08 |
| NOTEARS | 58.7 ± 12.8 | 0.41 ± 0.12 | 57.6 ± 10.2 | 0.44 ± 0.06 | 57.3 ± 12.9 | 0.43 ± 0.08 | 59.4 ± 10.3 | 0.39 ± 0.10 |
| N-S-MLP | 111.2 ± 14.4 | 0.92 ± 0.10 | 101.0 ± 16.8 | 0.92 ± 0.05 | 100.8 ± 14.7 | 0.90 ± 0.10 | 97.6 ± 14.8 | 0.90 ± 0.07 |
| MCSL | 49.0 ± 8.1 | 0.62 ± 0.06 | 54.0 ± 10.0 | 0.70 ± 0.10 | 53.8 ± 9.6 | 0.73 ± 0.10 | 57.6 ± 11.6 | 0.73 ± 0.08 |
| PC | 31.2 ± 5.7 | 0.30 ± 0.05 | 29.0 ± 5.9 | 0.39 ± 0.06 | 28.5 ± 6.3 | 0.44 ± 0.07 | 27.9 ± 6.6 | 0.47 ± 0.08 |
| GES | 35.1 ± 8.3 | 0.48 ± 0.10 | 31.6 ± 9.8 | 0.57 ± 0.08 | 30.0 ± 8.0 | 0.62 ± 0.06 | 30.5 ± 10.7 | 0.64 ± 0.07 |
| DAG-GNN | 29.9 ± 7.2 | 0.66 ± 0.09 | 20.3 ± 5.0 | 0.67 ± 0.09 | 18.5 ± 4.9 | 0.67 ± 0.09 | 18.0 ± 5.2 | 0.66 ± 0.11 |
| NOTEARS | 16.3 ± 3.4 | 0.61 ± 0.08 | 15.5 ± 3.2 | 0.60 ± 0.08 | 15.0 ± 3.1 | 0.62 ± 0.09 | 15.2 ± 2.9 | 0.61 ± 0.09 |
| N-S-MLP | 68.0 ± 5.4 | 0.80 ± 0.04 | 22.6 ± 3.3 | 0.79 ± 0.06 | 12.7 ± 2.6 | 0.80 ± 0.05 | 11.8 ± 2.8 | 0.80 ± 0.05 |
| MCSL | 32.8 ± 5.4 | 0.49 ± 0.08 | 26.4 ± 5.5 | 0.53 ± 0.09 | 23.3 ± 5.8 | 0.56 ± 0.08 | 23.1 ± 6.5 | 0.56 ± 0.07 |
| GS-FedDAG | 11.6 ± 5.6 | 0.83 ± 0.11 | 7.1 ± 6.1 | 0.90 ± 0.12 | 6.2 ± 4.7 | 0.89 ± 0.09 | 6.0 ± 5.5 | 0.91 ± 0.11 |

Table 16: Results on the heterogeneous setting with different numbers of observations (20 nodes, ER2).

Table 17: Results on randomly selecting models-info of partial clients (heterogeneous data, 20 nodes, ER2).

| | Homogeneous data | | | | Heterogeneous data | | | |
|------|-------------------|-------------|-------------------|-------------|-------------------|-------------|-------------------|-------------|
| | ER2 with 10 nodes | | ER2 with 20 nodes | | ER2 with 10 nodes | | ER2 with 20 nodes | |
| r/m | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ | SHD ↓ | TPR ↑ |
| 10% | 3.8 ± 2.4 | 0.78 ± 0.14 | 8.6 ± 4.8 | 0.77 ± 0.13 | 3.8 ± 1.4 | 0.93 ± 0.05 | 8.5 ± 5.4 | 0.89 ± 0.07 |
| 20% | 3.2 ± 2.0 | 0.81 ± 0.12 | 6.7 ± 4.8 | 0.82 ± 0.13 | 2.5 ± 2.1 | 0.97 ± 0.04 | 8.2 ± 5.4 | 0.87 ± 0.09 |
| 50% | 2.9 ± 1.8 | 0.83 ± 0.11 | 5.8 ± 4.4 | 0.85 ± 0.12 | 1.8 ± 1.4 | 0.99 ± 0.02 | 6.3 ± 5.1 | 0.89 ± 0.10 |
| 80% | 2.7 ± 1.9 | 0.84 ± 0.12 | 6.0 ± 3.9 | 0.86 ± 0.10 | 1.8 ± 1.3 | 0.99 ± 0.02 | 5.9 ± 4.1 | 0.90 ± 0.08 |
| 100% | 2.4 ± 2.0 | 0.86 ± 0.12 | 6.2 ± 4.0 | 0.85 ± 0.10 | 1.9 ± 1.6 | 0.99 ± 0.02 | 6.2 ± 4.7 | 0.89 ± 0.09 |

| | All data | | | Separate data | | | GS-FedDAG | AS-FedDAG |
|-------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| | GES | N-S-MLP | DAG-GNN | GES | N-S-MLP | DAG-GNN | | |
| SHD ↓ | 8.0 ± 0.0 | 9.0 ± 0.0 | 5.4 ± 0.5 | 8.3 ± 1.2 | 11.3 ± 1.0 | 8.2 ± 1.9 | 6.4 ± 0.9 | 5.0 ± 0.0 |
| NNZ | 11.0 ± 0.0 | 12.0 ± 0.0 | 3.3 ± 0.8 | 8.5 ± 1.1 | 14.4 ± 0.8 | 5.7 ± 1.4 | 6.8 ± 0.6 | 5.0 ± 0.0 |
| TPR ↑ | 0.43 ± 0.00 | 0.43 ± 0.00 | 0.23 ± 0.07 | 0.31 ± 0.17 | 0.44 ± 0.10 | 0.17 ± 0.18 | 0.27 ± 0.12 | 0.29 ± 0.00 |
| FDR ↓ | 0.73 ± 0.00 | 0.75 ± 0.00 | 0.52 ± 0.09 | 0.75 ± 0.12 | 0.78 ± 0.05 | 0.80 ± 0.18 | 0.72 ± 0.11 | 0.60 ± 0.00 |

Table 18: Empirical results on **fMRI Hippocampus** dataset (Part 2).

![35_image_0.png](35_image_0.png)

Figure 10: Anatomical causal-effect relationships of **fMRI Hippocampus** dataset
Review 1: Summary: This work proposes federated DAG learning to enable decentralized learning which protects user privacy in the problem of DAG learning. Strengths and Weaknesses: S: 1. It proposes a novel DAG learning method to handle the decentralized setting. 2. Under mild assumptions, the proposed method achieves good empirical performance compared to various SOTA DAG learning methods including the most recent ones with gradient-based/ADMM optimization. W: 1. It seems that assumption 3.1 is a sufficient condition for the method to work. I wonder if there is a possibility to further relax the assumption. Or in other words, can we allow the underlying DAG to be different for different subpopulations? For example, a group of DAGs sharing the same conditional independence? 2. The work discussed communication time. It is better to explicitly write down the time complexity of communication w.r.t. r and d. Requested Changes: See weakness. In addition to that 1. In 5.1.2, what does it mean by randomly sampling f_i^{ck} from GP, GP-add, MLP or MIM functions? How do you determine the parameters of those models? What is the probability to sample from each of those functions? Broader Impact Concerns: NA ================================================== Review 2: Summary: I have reviewed a previous version of this paper with a different positioning. This appears to be a re-positioning of the original paper following my suggestions, which is well done. I appreciate the effort the authors have spent to reposition the motivation of their work, which now highlights much better the key contribution. For clarity, I quote my previous summary of this work below, which has been modified accordingly to the new position (the core content remains mostly the same). --- This paper expands on the previous work of NOTEARS (Zheng et al. 2018) and MCSL (Ng et al., 2022b) to develop a new federated structure learning framework. The structure is represented in terms of a directed acyclic probabilistic graphical model, which comprises (1) a DAG; and (2) a set of (learnable) conditional probabilities along its edges. The DAG is assumed to be the same (though unknown) across data silos but the conditional probabilities are heterogeneous. The conditional probabilities are therefore learned using local data (and are private to each silos) but the graph is learned collaboratively via a federated graph learning formulation with constraints. The constraint is adopted from NOTEARS (Zheng et al. 2018) & MCSL (Ng et al. 2022b) to enforced that the learned graph is a DAG, which defines a common differentiable search space on which direct application of FedAvg is possible. Going by formulation, this work is very similar to a concurrent work of NOTEARS-ADMM (Ng & Zhang, 2022). But, the personalized setup for the conditional probabilities are different. Strengths and Weaknesses: Strengths: 1. The paper addresses a new problem, which has not been only explored before. Even in comparison to the most similar setting of a concurrent work (Ng & Zhang, 2022), there is a clear improvement in settings with heterogeneous data. 2. The paper is well-organized and well-written. This is a highly technical paper but it is not hard to understand the core idea thanks to the high-quality narrative. 3. The experiments are well-organized and extensive. Weaknesses: 1. The assumption of an invariant DAG across data silos is quite strong. 
I noticed that the authors had discussed this in F2 but the narrative there is somewhat incoherent -- there are two points: (1) "Intuitively, different SEMs usually describe different systems. Then, naturally, the DAGs may be different. Following this logic, we would say yes that the invariant DAG assumptions among different SEMs are too strong" ==> does this mean the assumption is not reasonable? (2) "The assumption that domain shifts can come from the distribution shifts of the exogenous variables (noise terms in our paper) has been widely accepted, .... So there is no need to argue this one" ==> I do not understand how this is related to the question "Is our Invariant DAG assumption reasonable?" I am not quite sure I follow the stance of the authors on this matter from those narratives. But regardless, the assumption does appear a but unrealistic in my view. This is also one of my concerns in the previous review. Nonetheless, seeing that this is a relatively new problem and there is no existing work addressing anything beyond this setup, I would not consider this a serious dampener of the key contribution here. The authors should revise the discussion on this point though -- the current form of discussion is not very insightful. 2. The current discussion of a related body of work in federated GNN (in C2) somehow misses a key difference between the two lines of work, which would strengthen the position of this paper: Most existing federated GNN assumes graphs are known and localized, what is being learned in federation is the weight aggregation of the GNN but not its graph. Citing some recent examples along this line would be good (not super important but for the sake of thoroughness) 3. It is somewhat unclear what is the configuration for the reported performance of other continuous graph search method such as NOTEARS & DAG-GNN under the Separated Data partition of the tables. Are these averaged performance of siloed models or performance of their FedAvg variant (i.e., direct application of FedAvg on the entire parameterization of those methods) 4. What is the performance of AS-FedDAG in heterogeneous setting in Table 3? If AS-FedDAG always outperforms GS-FedDAG, what is the motivation to personalize the conditional probabilities? Furthermore, what is the difference between AS-FedDAG and a coupling between FedAvg and variants of NOTEARS? I was under the impression that the position of this paper is unique because it allows personalization of the conditional probabilities and there is practical benefit to do so but seeing that AS-FedDAG (no personalization) outperforming GS-FedDAG (with personalization), I am a bit concerned about the practical merit of this position. But perhaps this doubt will go away if the authors can elaborate more on the key differences between AS-FedDAG & FedAvg + variants of NOTEARS, as well as how the improved performance is tied to those differences. Or explain to me if I have misunderstood something. Requested Changes: 1. Revise & expand the discussion in C2 and F2 as I suggested above. In fact, F2 would really benefit from a rephrasing. The narrative in F2 is somewhat informal and partly disconnected from the main question it aims to answer (or perhaps this is the effect of the informal phrasing) -- e.g. "The assumption that domain shifts can come from the distribution shifts of the exogenous variables (noise terms in our paper) has been widely accepted, .... 
So there is no need to argue this one" -- this seems to be a response to another concern which got mixed up in the narrative. 2. Addressing my questions no. 3. and 4. in the Weaknesses section, which seek clarification regarding the experiment setup of the other continuous graph learning baselines, as well as the practical merit of enabling personalization of the conditional probabilities. Broader Impact Concerns: There is still a concern about revealing local graph would leak information about clients. I raised this point before and the authors have responded to that with a detailed paragraph, which is fine except for a sentence that I do not get: "However, graph structure sometimes may also be private information. This problem can also be easily solved by picking one client as the proxy server." ==> private information would still then be leaked to the proxy server so how does this solve the problem? Nonetheless, this is orthogonal to the main contribution (which focuses more on privacy compliance rather than privacy preservation against sophisticated attack) so it is fine to simply acknowledge the limitation. ================================================== Review 3: Summary: The authors propose a decentralized method to learn causal graphs under the ANM assumption. The clients learn their own model and do not share the data but only the model parameters, which are then combined by a central entity and re-distributed back to the clients to continue their computation. Although the proposed method does not come with guarantees, there are several experiments that could be insightful. Strengths and Weaknesses: I do not have many technical comments. It makes sense to decentralize the causal graph discovery problem in the parametric ANM setting the way the authors are suggesting. However, there are several issues that should be addressed before acceptance. The writing needs to be improved significantly. The use of the word IID throughout the paper is confusing. The paper also suffers many grammar mistakes. Experiments are slightly opaque in the sense that the proposed method outperforms even the centralized approaches and it is unclear why/how. The theoretical result is not related to the proposed method and does not add to the paper. Requested Changes: "IID decentralized dataset" throughout this paper. This naming decision is a bit confusing. Independence is a property of different realizations in a dataset. How does a property among distributions imply the independence of samples? I believe the use of the word IID is very misleading throughout the paper. If I am understanding correctly, authors use IID when the distributions across different clients are the same. I recommend using "no distribution shift" or similar to describe this setting instead of IID. Section 3.1 seems redundant to me. It does not convey anything about the proposed algorithm. Under the parametric assumptions made in the paper, causal graph is already identifiable from one dataset. Why is it insightful to say it is also identifiable from several datasets? I recommend authors remove this section as it does not add anything to the paper. (see comments below for bringing in some results from appendix instead) Section 4: "To solve the aforementioned problem" I am not sure what problem is being referred to here (I know but this lacks clarity). "relationship mechanisms " please drop the word relationship throughout the paper. the word mechanism already captures the causal relationship. 
Section E.1 should be in the main paper, not appendix. "Why does our method outperforms other methods even some baseline methods using all data for training?" This section in the Appendix is very intriguing. But the explanation is not convincing. I am not sure how it's possible that the proposed method can **THAT SIGNIFICANTLY** outperform methods that use the full dataset - see Table 2 and Table 3. Also, can you add results in Table 1, i.e., in the linear setting, for methods that use full dataset as a sanity check? Section C.6 seems very important. I am not sure why it's deep in the appendix. In my opinion, it should be in the main paper. It discusses that the used objective function is sound, but the iterative process can only recover a stationary point of the loss. This is insightful and should be in the main text. There are many grammatical mistakes and the paper should go through a spell check. "for \forall" please drop the word for, throughout the paper. Appendix A.1:"is also existed " grammar. page 9: "ADMM is used for address this issue" page 11: "see Figure ?? " Broader Impact Concerns: I do not foresee adverse ethical implications of the proposed work. ================================================== Metareview: Recommendation: Accept as is Comment: Three reviewers with expertise in either federated learning or causal discovery who reviewed the original version of this manuscript agreed that the key contributions have been highlighted much better in this revised version. ==================================================
# Straggler-Resilient Personalized Federated Learning

Isidoros Tziotis isidoros_13@utexas.edu The University of Texas at Austin

Zebang Shen zebang.shen@inf.ethz.ch ETH Zurich

Ramtin Pedarsani *ramtin@ece.ucsb.edu* The University of California, Santa Barbara

Hamed Hassani *hassani@seas.upenn.edu* The University of Pennsylvania

Aryan Mokhtari mokhtari@austin.utexas.edu The University of Texas at Austin

Reviewed on OpenReview: *https://openreview.net/forum?id=gxEpUFxIgz&referrer=%5BAuthor%20*

## Abstract

Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions. Despite its success, federated learning faces several challenges related to its decentralized nature. In this work, we develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles, namely (i) data heterogeneity, i.e., data distributions can vary substantially across clients, and (ii) system heterogeneity, i.e., the computational power of the clients could differ significantly. By leveraging previous works in the realm of representation learning (Collins et al., 2021; Liang et al., 2020), our method constructs a global common representation utilizing the data from all clients. Additionally, it learns a user-specific set of parameters, resulting in a personalized solution for each individual client. Furthermore, it mitigates the effects of stragglers by adaptively selecting clients based on their computational characteristics, thus achieving, for the first time, near-optimal sample complexity and provable logarithmic speedup. Experimental results support our theoretical findings, showing the superiority of our method over alternative personalized federated schemes in system and data heterogeneous environments.

## 1 Introduction

Due to growing concerns about data privacy and communication costs, Federated Learning (FL) has become an emerging learning paradigm, as it allows for training machine learning models without collecting local data from the clients. Because of its decentralized nature, a major challenge in designing efficient solvers for FL is the heterogeneity of local devices, which can be categorized into two different types: (i) *data heterogeneity*, where the underlying data distributions of clients vary substantially, and (ii) *system heterogeneity*, where the computational and storage capabilities of devices differ significantly. In fact, it has been observed that the seminal Federated Averaging (FedAvg) method suffers from slow convergence to a high-quality solution when facing highly heterogeneous datasets (McMahan et al., 2017) as well as heterogeneous systems (Li et al., 2020; Kairouz et al., 2021). In this paper, we aim to address these two challenges simultaneously by introducing a generic framework that includes algorithms with robust performance in the presence of both forms of client heterogeneity. Inspired by prior works in the FL literature (Collins et al., 2021; Liang et al., 2020) that utilized representation learning theory to tackle data heterogeneity, we propose a meta-algorithm that produces personalized solutions and handles data heterogeneity by leveraging a global representation shared among all clients. Further, our method circumvents the delays introduced by the presence of stragglers by carefully selecting participating nodes based on their computational speeds.
In the early stages, only a few of the fastest nodes are chosen to participate, and slower devices are sequentially included in the training process until the target accuracy is achieved. Although the disproportional selection of nodes raises fairness and accuracy concerns, we highlight that our method achieves speedup without compromising the resulting solution. The most significant contribution of our work is achieving near-optimal sample complexity in regimes with data and system heterogeneity, alongside a provable logarithmic speedup guarantee in terms of running time. Next, we summarize our contributions:

1. SRPFL **Algorithm**. We propose Straggler-Resilient Personalized Federated Learning (SRPFL), an adaptive node participation meta-algorithm that builds upon subroutines that fall into the representation learning framework (Collins et al., 2021; Liang et al., 2020), enhancing their straggler resilience and performance.

2. **Logarithmic Speedup**. Assuming that clients' speeds are drawn from the exponential distribution, we prove that SRPFL guarantees logarithmic speedup in the linear representation setting, outperforming established, straggler-prone benchmarks while maintaining the state-of-the-art sample complexity per client m = O(d/N + log(N)), where d and N denote the feature vector size and number of active nodes. Our results hold for non-convex loss functions, heterogeneous data, and dynamically changing clients' speeds.

3. **Numerical Results**. Experiments on various datasets (CIFAR10, CIFAR100, EMNIST, FEMNIST, Sent140) support our theoretical results, showing that: (i) SRPFL significantly boosts the performance of different subroutines designed for personalized FL in both full and partial participation settings, and (ii) SRPFL exhibits superior performance in system and data heterogeneous settings compared to state-of-the-art baselines.

## 1.1 Related Work

Data heterogeneity. In data-heterogeneous settings, if one aims at minimizing the aggregate loss in the network using the classic FedAvg method or more advanced algorithms that utilize control-variate techniques, such as SCAFFOLD (Karimireddy et al., 2019), FEDGATE (Haddadpour et al., 2021), FedDyn (Acar et al., 2021), or FEDSHUFFLE (Horváth et al., 2022), the resulting solution could perform poorly for some of the clients. This is an unavoidable hurdle due to the fact that no single model works well for all clients when their underlying data distributions are diverse. A common technique that addresses this issue is fine-tuning the derived global model to each local task by following a few steps of SGD updates (Wang et al., 2019; Yu et al., 2020). Based on this observation, Fallah et al. (2020b) showed that one might need to train models that work well after fine-tuning and showed their connection to Model-Agnostic Meta-Learning (MAML). In (Cho et al., 2022; Balakrishnan et al., 2021), novel client-sampling schemes were explored, achieving increased efficiency in regimes with data heterogeneity. Another line of work for personalized FL is learning additive mixtures of local and global models (Deng et al., 2020; Mansour et al., 2020; Hanzely and Richtárik, 2020). These methods learn local models for clients that are close to each other in some norm, an idea closely related to multi-task FL (Smith et al., 2017; Hu et al., 2021). The works in (Chen et al., 2022; Lee et al., 2022) studied the interplay of local and global models utilizing Bayesian hierarchical models and partial participation, respectively.
An alternative approach was presented by Collins et al. (2021), where instead of enforcing local models to be close, the authors assumed that models across clients share a common representation. Using this perspective, they presented FedRep, a method that provably learns this underlying structure in the linear representation setting. Building upon the idea of a common representation, Zhu et al. (2021) and Jiang and Lin (2022) proposed federated methods that can handle data heterogeneity while exhibiting robustness to distribution shifts. Recently, a novel framework was proposed allowing the comparison of personalized FL methods under various metrics (Wu et al., 2022). In all of the aforementioned methods, however, a subset of clients participates regardless of their computational capabilities. Thus, in the presence of stragglers, the speed of the training process significantly goes down, as the server waits, at every communication round, for the slowest participating node to complete its local updates.

System heterogeneity. Several works have attempted to address the issue of system heterogeneity. Specifically, asynchronous methods, which rely on bounded staleness of slow clients, have demonstrated significant gains in distributed data centers (Xie et al., 2019; Stich, 2019; So et al., 2021). In FL frameworks, however, stragglers could be arbitrarily slow, rendering these methods inefficient. In an attempt to manually control staleness, deadline-based computation has been proposed (Reisizadeh et al., 2019), as well as aggregation of a fixed number of models per round (Nguyen et al., 2022). However, in the worst-case scenario, the performance of these methods is still determined by the slowest client in the network. Active sampling is another approach, where the server aggregates as many local updates as possible within a predefined time span (Nishio and Yonetani, 2019). In a different line of work, the effects of stragglers are mitigated by utilizing computation/gradient coding schemes (Tandon et al., 2017; Wang et al., 2018; 2020b). In (Cho et al., 2021), clients use heterogeneous model architectures to transfer knowledge to nodes with similar data distributions, while Yang et al. (2022) proposed a new framework where clients are free to choose their participation scheme. More recently, normalized averaging methods were proposed in (Wang et al., 2020a; Horváth et al., 2022) that rectify the objective inconsistency created by the mismatch in clients' updates. FedLin (Mitra et al., 2021) instead utilizes gradient correction and error-feedback mechanisms to circumvent the speed-accuracy conflict. A novel approach to mitigate the effects of stragglers was proposed by Reisizadeh et al. (2022), where clients are selected to take part in different stages of the training according to their computational characteristics. Alas, all of the above methods yield improvement only in data-homogeneous settings, and they are not applicable in regimes with data heterogeneity.

## 2 Problem Formulation

In this section, we introduce our setting and define the data and system heterogeneity models that we study. Consider the FL framework where M clients interact with a central server. We focus on a supervised, data-heterogeneous setting where client i draws samples from distribution $\mathcal{D}_i$, potentially with $\mathcal{D}_i \neq \mathcal{D}_j$. Further, consider the learning model of the i-th client as $q_i: \mathbb{R}^d \to \mathcal{Y}$, which maps inputs $x_i \in \mathbb{R}^d$ to predicted labels $q_i(x_i) \in \mathcal{Y}$.
The objective function of client i is defined as $f_i(q_i) := \mathbb{E}_{(x_i, y_i)\sim \mathcal{D}_i}[\ell(q_i(x_i), y_i)]$, where the loss $\ell: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ penalizes the gap between the predicted label $q_i(x_i)$ and the true label $y_i$. In the most general setting, clients aim to solve

$$\operatorname*{min}_{(q_{1},...,q_{M})\in\mathcal{Q}}\ \ \frac{1}{M}\sum_{i=1}^{M}f_{i}(q_{i}),$$
$$(1)$$

with Q the space of feasible tuples of mappings $(q_1, ..., q_M)$. Traditionally in FL, methods focus on learning a single model $q = q_1 = ... = q_M$ that performs well on average across clients (Li et al., 2020; McMahan et al., 2017). Although such a solution may be satisfactory in data-homogeneous settings, it leads to undesirable local models when the data distributions are diverse. Indeed, in the presence of data heterogeneity the loss functions $f_i$ have different forms and their minimizers could be far from each other. This justifies the formulation in (1) and necessitates the search for personalized solutions that can be learned in a federated fashion.

Low Dimensional Common Representation. There have been numerous examples in image classification and word prediction where tasks with heterogeneous data share a common, low dimensional representation, despite having different labels (Bengio et al., 2013; LeCun et al., 2015; Pillutla et al., 2022). Based on that, a reasonable choice for Q is a set in which all $q_i$ share a common map, coupled with a personalized map that fits their local data. To formalize this, suppose the ground-truth map can be written for each client i as $q_i = h_i \circ \phi$, where $\phi: \mathbb{R}^d \to \mathbb{R}^k$ is a shared global representation which maps d-dimensional data points to a lower dimensional space of size k, and $h_i: \mathbb{R}^k \to \mathcal{Y}$ maps from the lower dimensional subspace to the space of labels. Typically $k \ll d$, and thus given any fixed representation φ, the client-specific heads $h_i: \mathbb{R}^k \to \mathcal{Y}$ are easy to optimize locally. With this common structure in consideration, (1) can be reformulated as $\min_{\phi\in\Phi} \frac{1}{M}\sum_{i=1}^{M} \min_{h_i \in \mathcal{H}} f_i(h_i \circ \phi)$, where Φ is the class of feasible representations and H is the class of feasible heads. This formulation leads to good local solutions if the underlying data generation models for the clients share a low dimensional common representation, i.e., $y_i = h_i^* \circ \phi^*(x_i) + z_i$, where $z_i$ is some additive noise.

![3_image_0.png](3_image_0.png)

Figure 1: Classic FL schemes for solving (2)

Figure 2: SRPFL for solving (2)

The server and clients collaborate to learn the common representation φ, while locally each client learns their unique head $h_i$. Since clients often do not have access to their true data distributions, instead of minimizing their expected loss, they settle for minimizing the empirical loss associated with their local samples. Specifically, we assume client i has access to $S_i$ samples $\{x_i^1, x_i^2, ..., x_i^{S_i}\}$, and its local empirical loss is $\hat{f}_i(h_i \circ \phi) = \frac{1}{S_i}\sum_{s=1}^{S_i} \ell(h_i \circ \phi(x_i^s), y_i^s)$. Hence, the global problem becomes

$$\operatorname*{min}_{\phi\in\Phi}{\frac{1}{M}}\sum_{i=1}^{M}\operatorname*{min}_{h_{i}\in\mathcal{H}}\left\{{\hat{f}}_{i}(h_{i}\circ\phi):={\frac{1}{S_{i}}}\sum_{s=1}^{S_{i}}\ell(h_{i}\circ\phi(\mathbf{x}_{i}^{s}),y_{i}^{s})\right\}$$
$$(2)$$
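For intuition, in the linear representation setting referenced in our contributions, φ is a matrix and each head is a k-dimensional vector. Below is a toy instance of the data model $y_i = h_i^* \circ \phi^*(x_i) + z_i$ and the local empirical loss; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, M, S = 20, 3, 10, 50

B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]    # ground-truth phi* (d x k)
clients = []
for _ in range(M):
    w_star = rng.normal(size=k)                      # client-specific head h_i*
    X = rng.normal(size=(S, d))
    y = X @ B_star @ w_star + 0.01 * rng.normal(size=S)  # y = h_i* ∘ phi*(x) + z
    clients.append((X, y))

def empirical_loss(B, w, X, y):
    # local empirical loss f_i-hat, here instantiated with the squared loss
    return 0.5 * np.mean((X @ B @ w - y) ** 2)
```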
System Heterogeneity Model. In most FL settings, thousands of clients participate in the training process, each with different computational capabilities. Thus, fixed computational tasks such as gradient computations or local model updates require different processing times for different clients. Formally, for each client $i \in [M]$, we denote by $T_i$ the time required to compute a local model update. When a subset of clients participates in the learning process, the computational time of each round is determined by the slowest participating node. Naturally, as the number of nodes in the network grows, we expect the number of stragglers to increase. This phenomenon calls for the development of straggler-resilient methods in system-heterogeneous settings.

## 3 Straggler-Resilient Personalized FL

In the shared representation setting, we face the challenge of finding an algorithm that coordinates server and clients in order to learn a common representation and a set of personalized parameters in a federated and straggler-resilient fashion. To this end, we propose a method that tackles problem (2) with limited sample access and provably superior performance over naive, straggler-prone methods. Specifically, we propose the Straggler-Resilient Personalized Federated Learning (SRPFL) meta-algorithm, designed to mitigate the effects of system heterogeneity in environments with non-convex loss functions and heterogeneous data while accommodating a variety of methods as subroutines. In a nutshell, SRPFL iteratively solves problem (2), while adaptively increasing the set of participating nodes based on their computational capabilities. As a result, the process of learning the common representation across clients' tasks is accelerated, without compromising the resulting accuracy.

To simplify the exposition, henceforth we denote by A some federated algorithm of choice, designed to solve (2). As depicted in Figure 1, in standard FL frameworks, out of all M clients in the network, the server often selects uniformly at random a subset of size N. Subsequently, a few iterations of algorithm A are performed to approximately solve a version of (2) which corresponds to those N selected clients, i.e., $\min_{\phi\in\Phi}\frac{1}{N}\sum_{i=1}^{N}\min_{h_i\in\mathcal{H}}\hat{f}_i(h_i \circ \phi)$. In every following stage, a new subset of N nodes is sampled and the same process is repeated. Although such a procedure eventually learns the global representation across all tasks, it is susceptible to delays caused by stragglers, as the server has to wait for the slowest client (among the N selected ones) to complete its updates. Hence, when N is large the training procedure could become prohibitively slow.

SRPFL takes a different approach in order to mitigate the effects of straggling clients. An overview of the selection scheme is provided in Figure 2. At each stage, N clients are randomly selected, but only a small, fast subset of them is used to solve their corresponding learning problem. More precisely, suppose that at stage r, each client i in the sampled subset of size N is associated with a computational time $T_i^r$. For simplicity, we assume that the nodes are re-indexed at every stage so that their times are in increasing order, i.e., $T_1^r \leq T_2^r \leq \cdots \leq T_N^r$. Initially, only the n0 fastest clients, {1, 2, ..., n0}, are included in the learning procedure, with n0 much smaller than N. At every communication round, these n0 nodes perform the model updates indicated by algorithm A, to solve their version of (2), i.e., $\min_{\phi\in\Phi}\frac{1}{n_0}\sum_{i=1}^{n_0}\min_{h_i\in\mathcal{H}}\hat{f}_i(h_i \circ \phi)$. We note that during this stage of the algorithm, the server waits only for the slowest client among the participating ones, i.e., client n0, which is potentially much faster than node N.
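The benefit of waiting only for the n0 fastest of the N sampled clients can be seen in a quick simulation, assuming, as in our speedup analysis, per-round compute times drawn from an exponential distribution (constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n0, trials = 1024, 8, 5000
T = rng.exponential(scale=1.0, size=(trials, N))   # per-round compute times

wait_all = T.max(axis=1).mean()                    # wait for the slowest of N
wait_n0 = np.sort(T, axis=1)[:, n0 - 1].mean()     # wait for the n0-th fastest

# For Exp(1) times: E[max of N] = H_N ≈ ln(N) + 0.577, while the n0-th
# order statistic has mean ≈ n0/N, i.e., orders of magnitude smaller.
print(f"wait for all N clients:  {wait_all:.3f}")   # ≈ 7.5 for N = 1024
print(f"wait for fastest n0 = 8: {wait_n0:.4f}")    # ≈ 0.008
```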
In practice, the knowledge of clients' computational power is not required to figure out the $n_0$ fastest nodes. Instead, the server sends the global representation model to all N sampled clients and updates the common representation once the first $n_0$ new models are received. Indeed, these representations belong to the fastest $n_0$ nodes.

Once the current stage terminates, a new batch of N clients is sampled and the $2n_0$ fastest nodes are chosen to participate in the learning process. Since speeds vary across stages, consecutive sets of participating nodes could have small or no overlap. However, the representations learned in previous stages still operate as good starting points for subsequent stages, which is possible since nodes are homogeneous w.r.t. their representations (see Section 2). Thus, utilizing the representation model learned from the previous stage, nodes $\{1, 2, ..., 2n_0\}$ continue the learning process, deriving a model of improved accuracy. The procedure of geometrically increasing the number of participating nodes continues until the target accuracy is achieved. Hence, SRPFL uses the data of stragglers only at the latest stages of the algorithm, when an accurate approximation is required.

Remark 3.2. For simplicity of exposition we assume that the set of N sampled nodes maintains connectivity to the server throughout each stage. However, our analysis remains unaffected even if a new set of nodes is sampled at every round.

We proceed to characterize the class of federated algorithms able to employ SRPFL to enhance their performance in system-heterogeneous environments. Any iterative method that solves an instance of (2) can be combined with our adaptive node participation scheme; however, in this paper we focus on a broad class of alternating gradient-based update methods, presented in (Collins et al., 2021). As the name suggests, in each round, clients update their heads and representation models in an alternating fashion. After a certain number of gradient updates is completed, all clients send their derived representations to the server, where the models are averaged and broadcasted back to the clients. Next, we rigorously illustrate this procedure.

Alternating Update Scheme. At round t, the server communicates a common representation $\phi^t$ to the clients and a subset of them, $\mathcal{I}^t$, is selected to participate by performing the following updates.

Client Head Update. Each client $i \in \mathcal{I}^t$ performs $\tau_h$ local gradient-based updates optimizing their head parameter, given the received representation $\phi^t$. Concretely, for $s = 1, ..., \tau_h$, client i updates their head model as follows

$$h_{i}^{t,s}=GRD\left(f_{i}\left(h_{i}^{t,s-1},\phi^{t}\right),h_{i}^{t,s-1},\eta\right).\tag{3}$$

Client Representation Update. Once the updated local heads $h_i^{t,\tau_h}$ are obtained, each client i executes $\tau_\phi$ local updates on their representation parameters. That is, for $s = 1, ..., \tau_\phi$,

$$\phi_{i}^{t,s}=GRD\left(f_{i}\left(h_{i}^{t,\tau_{h}},\phi_{i}^{t,s-1}\right),\phi_{i}^{t,s-1},\eta\right).\tag{4}$$

In the above expressions, $GRD(f, h, \eta)$ captures the generic notion of an update of variable h using the gradient of function f with respect to h and step size $\eta$. This notation allows the inclusion of a large class of algorithms such as Gradient Descent with momentum, SGD, etc.

Server Update. Each client i sends their derived representation model $\phi_i^{t,\tau_\phi}$ to the server, where they are averaged, producing the next representation model $\phi^{t+1}$.
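To make the round structure concrete, the following is a minimal Python sketch of one communication round of this alternating scheme, assuming plain gradient descent as the GRD oracle. The client interface (`grad_head`, `grad_repr`) is hypothetical and only serves to illustrate equations (3) and (4) and the server averaging step.

```python
import numpy as np

def grd(grad_fn, var, eta):
    # GRD(f, h, eta): generic gradient-based update of `var`; here,
    # a single plain gradient descent step for concreteness.
    return var - eta * grad_fn(var)

def alternating_round(phi, heads, clients, eta, tau_h, tau_phi):
    """One communication round of the alternating update scheme.

    Each element of `clients` is assumed to expose grad_head(h, phi)
    and grad_repr(h, phi), gradient oracles of the local loss f_i
    (a hypothetical interface used only for illustration).
    """
    new_reprs = []
    for i, client in enumerate(clients):
        h = heads[i]
        for _ in range(tau_h):                      # head updates, eq. (3)
            h = grd(lambda v: client.grad_head(v, phi), h, eta)
        phi_i = phi
        for _ in range(tau_phi):                    # representation updates, eq. (4)
            phi_i = grd(lambda v: client.grad_repr(h, v), phi_i, eta)
        heads[i] = h
        new_reprs.append(phi_i)
    # server update: average the received representations into phi^{t+1}
    return np.mean(new_reprs, axis=0), heads
```

Swapping `grd` for a momentum or stochastic variant leaves the round structure unchanged, which is what allows SRPFL to accommodate any method in this class as a subroutine.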
Coupling SRPFL with a generic subroutine that falls into the Alternating Update Scheme gives rise to Algorithm 1. Every stage r is characterized by a participating set of size $2^r \cdot n_0$, denoted by $\mathcal{I}^r$.

## Algorithm 1 Srpfl

1: **Input:** Initial number of nodes $n_0$; step size $\eta$; number of local updates for head $\tau_h$; number of local updates for representation $\tau_\phi$.
2: **Initialization:** $n \leftarrow n_0$, $\phi^0$, $h_1^{0,\tau_h}, ..., h_N^{0,\tau_h}$
3: for $r = 0, 1, 2, \ldots, \log(N/n_0)$ do
4: &nbsp;&nbsp; $\phi^0 \leftarrow \phi_r$
5: &nbsp;&nbsp; for $t = 1, 2, \ldots, \tau_r$ do
6: &nbsp;&nbsp;&nbsp;&nbsp; Server sends representation $\phi^{t-1}$ to N clients sampled from [M].
7: &nbsp;&nbsp;&nbsp;&nbsp; for $i \in \mathcal{I}^r$ do
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i initializes $h_i^{t,0} \leftarrow h_i^{t-1,\tau_h}$ and runs $\tau_h$ updates $h_i^t = h_i^t - \eta\nabla_{h_i^t} f_i(h_i^t, \phi^{t-1})$.
9: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i initializes $\phi_i^{t,0} \leftarrow \phi^{t-1}$ and runs $\tau_\phi$ updates $\phi_i^t = \phi_i^t - \eta\nabla_{\phi_i^t} f_i(h_i^t, \phi_i^t)$.
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i sends $\phi_i^{t,\tau_\phi}$ to the server.
11: &nbsp;&nbsp;&nbsp; **end for**
12: &nbsp;&nbsp;&nbsp; for each client $i \notin \mathcal{I}^r$ do
13: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $h_i^{t,\tau_h} \leftarrow h_i^{t-1,\tau_h}$
14: &nbsp;&nbsp;&nbsp; **end for**
15: &nbsp;&nbsp;&nbsp; Server computes $\phi^t \leftarrow \frac{1}{n}\sum_{i=1}^{n} \phi_i^{t,\tau_\phi}$.
16: &nbsp;&nbsp; **end for**
17: &nbsp;&nbsp; Server sets $n \leftarrow \min\{N, 2n\}$ and $\phi_{r+1} \leftarrow \phi^{\tau_r}$.
18: **end for**

At the beginning of each round the server provides a representation model to the participating clients (line 6). The clients update their models (lines 8 and 9) and send their representations back to the server, where they are averaged, producing a new global model. The numbers of local updates $\tau_h$, $\tau_\phi$ depend on the subroutine method of choice, and the number of rounds per stage is denoted by $\tau_r$. At the end of every stage the set of participating nodes doubles in size, until a set of size N is reached (line 17). Remark 3.3 summarizes the technical novelties of our work and highlights crucial benefits enjoyed by SRPFL.

Remark 3.3. Reisizadeh et al. (2022) proposed a similar participation scheme; however, their approach differs from ours and their results apply to significantly more restrictive regimes. Specifically, in (Reisizadeh et al., 2022) the analysis heavily relies on deriving a connection between the ERM solutions of consecutive stages. In order to control the statistical accuracy of the corresponding ERM problems, (i) data homogeneity across all clients is necessary, and further (ii) clients who participate in early stages are required to remain active and connected to the server in all subsequent stages, maintaining fixed computational speeds throughout the whole training process. (iii) The results of their analysis hold only for strongly convex loss functions, and (iv) their stage termination criterion requires the knowledge of the strong convexity parameter. The above restrictions are detrimental in the FL regime and severely undermine the applicability of the resulting algorithm. In our work we follow a different approach, controlling (in terms of principal angle distance) a quantity analogous to statistical accuracy, therefore directly connecting the common representation (and overall solution) at every stage to the ground truth representation. This novel approach allows our algorithm to accommodate (i) data heterogeneity, (ii) clients with dynamically changing speeds or, equivalently, clients that are replaced by new ones at every round, and (iii) non-convex loss functions. Additionally, a major part of our technical contribution focuses on (iv) analytically deriving the optimal number of rounds per stage, thus producing a simple and efficient doubling scheme.
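For illustration, the stage structure of Algorithm 1 can be condensed into the following sketch. Here `run_round` is a hypothetical callback standing in for lines 6-15 (broadcast, local updates, and aggregation of the fastest n responses); only the doubling logic of lines 3 and 17 is spelled out.

```python
import math

def srpfl(run_round, N, n0, tau_r):
    """Stage/doubling skeleton of SRPFL (Algorithm 1), a minimal sketch.

    run_round(n): hypothetical callback for one communication round; the
    server broadcasts the representation to N sampled clients and
    aggregates the first n models it receives, i.e., the n fastest.
    """
    n = n0
    for r in range(int(math.log2(N / n0)) + 1):   # stages, line 3
        for t in range(tau_r):                    # rounds per stage, line 5
            run_round(n)
        n = min(N, 2 * n)                         # doubling step, line 17
```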
## 4 Srpfl In The Linear Representation Case

Our theoretical analysis focuses on a specific instance of (1), where clients strive to solve a linear representation learning problem. Concretely, we assume that $f_i$ is the quadratic loss, $\phi$ is a projection onto a k-dimensional subspace of $\mathbb{R}^d$, given by a matrix $\mathbf{B} \in \mathbb{R}^{d\times k}$, and the i-th client's head $h_i$ is a vector $\mathbf{w}_i \in \mathbb{R}^k$. We model the local data of client i such that $y_i = \mathbf{w}_i^{*\top}\mathbf{B}^{*\top}\mathbf{x}_i + z_i$, for some ground truth representation $\mathbf{B}^* \in \mathbb{R}^{d\times k}$, local heads $\mathbf{w}_i^* \in \mathbb{R}^k$ and $z_i \sim \mathcal{N}(0, \sigma^2)$ capturing the noise in the measurements. Hence, all clients' optimal solutions lie in the same k-dimensional subspace.

## Algorithm 2 FedRep-SRPFL (Linear Representation)

1: **Input:** Step size $\eta$; batch size m; initial nodes $n_0$.
2: **Initialization:** Client $i \in [N]$ sends to server: $\mathbf{P}_i := \frac{1}{m}\sum_{j=1}^{m}(y_i^{0,j})^2\mathbf{x}_i^{0,j}(\mathbf{x}_i^{0,j})^\top$, $n \leftarrow n_0$.
3: Server finds $\mathbf{U}\mathbf{D}\mathbf{U}^\top \leftarrow$ rank-k SVD$(\frac{1}{N}\sum_{i=1}^{N}\mathbf{P}_i)$.
4: for $r = 0, 1, 2, \ldots, \log(N/n_0)$ do
5: &nbsp;&nbsp; Server initializes representation $\mathbf{B}^{r,0} \leftarrow \mathbf{U}$.
6: &nbsp;&nbsp; for $t = 1, 2, \ldots, \tau_r$ do
7: &nbsp;&nbsp;&nbsp;&nbsp; Server sends $\mathbf{B}^{r,t}$ to N clients sampled from [M].
8: &nbsp;&nbsp;&nbsp;&nbsp; for $i \in \{1, ..., n\}$ do
9: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i samples a fresh batch of m samples.
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i computes $\mathbf{w}_i^{r,t+1} \leftarrow \arg\min_{\mathbf{w}} \hat{f}_i^t(\mathbf{w}, \mathbf{B}^{r,t})$.
11: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Client i computes $\mathbf{B}_i^{r,t+1} \leftarrow \mathbf{B}^{r,t} - \eta\nabla_\mathbf{B} \hat{f}_i^t(\mathbf{w}_i^{t+1}, \mathbf{B}^{r,t})$ and sends it back to the server.
12: &nbsp;&nbsp;&nbsp; **end for**
13: &nbsp;&nbsp;&nbsp; Server computes $\bar{\mathbf{B}}^{r,t+1} \leftarrow \frac{1}{n}\sum_{i\in\mathcal{I}^t} \mathbf{B}_i^{r,t+1}$.
14: &nbsp;&nbsp;&nbsp; Server computes $\mathbf{B}^{r,t+1}, \mathbf{R}^{r,t+1} \leftarrow QR(\bar{\mathbf{B}}^{r,t+1})$.
15: &nbsp;&nbsp; **end for**
16: &nbsp;&nbsp; Server sets $\mathbf{U} \leftarrow \bar{\mathbf{B}}^{r,t+1}$ and $n \leftarrow \min\{N, 2n\}$.
17: **end for**

Under these assumptions the global expected risk is

$$\min_{\mathbf{B},\mathbf{W}}\frac{1}{2M}\sum_{i=1}^{M}\mathbb{E}_{(\mathbf{x}_{i},y_{i})\sim\mathcal{D}_{i}}\left[\left(y_{i}-\mathbf{w}_{i}^{\top}\mathbf{B}^{\top}\mathbf{x}_{i}\right)^{2}\right],\tag{5}$$

where $\mathbf{W} = [\mathbf{w}_1^\top, ..., \mathbf{w}_N^\top]^\top \in \mathbb{R}^{N\times k}$ is the concatenation of the client-specific heads. Since the true distributions $\mathcal{D}_i$ are unknown, algorithms strive to minimize the empirical risk instead. The global empirical risk over all clients is

$$\frac{1}{M}\sum_{i=1}^{M}\hat{f}_{i}(\mathbf{w}_{i},\mathbf{B})=\frac{1}{2Mm}\sum_{i=1}^{M}\sum_{j=1}^{m}\left(y_{i}^{j}-\mathbf{w}_{i}^{\top}\mathbf{B}^{\top}\mathbf{x}_{i}^{j}\right)^{2},\tag{6}$$

where m is the number of samples at each client. The global loss in (6) is nonconvex and has many global minima, including all pairs $\mathbf{W}^*\mathbf{Q}^{-1}, \mathbf{B}^*\mathbf{Q}^\top$, where $\mathbf{Q} \in \mathbb{R}^{k\times k}$ is some invertible matrix. Thus, the server aims to retrieve the column space of $\mathbf{B}^*$, instead of the ground truth factors $(\mathbf{W}^*, \mathbf{B}^*)$. To measure closeness between column spaces, we adopt the metric of principal angle distance (Jain et al., 2013).

Definition 4.1. Let $\mathbf{B}_1, \mathbf{B}_2 \in \mathbb{R}^{d\times k}$ be matrices and $\hat{\mathbf{B}}_{1,\perp}, \hat{\mathbf{B}}_2$ orthonormal matrices s.t. $span(\hat{\mathbf{B}}_{1,\perp}) = span(\mathbf{B}_1)^\perp$ and $span(\hat{\mathbf{B}}_2) = span(\mathbf{B}_2)$. The principal angle distance between the column spaces of $\mathbf{B}_1$ and $\mathbf{B}_2$ is defined to be $dist(\mathbf{B}_1, \mathbf{B}_2) := \|\hat{\mathbf{B}}_{1,\perp}^\top\hat{\mathbf{B}}_2\|_2$.

Federated Representation Learning (FedRep) is an alternating minimization-descent algorithm, recently proposed in (Collins et al., 2021) for the linear shared representation framework. SRPFL coupled with FedRep gives rise to Algorithm 2. Below we highlight the main points of interest. In the initialization phase (lines 2 and 3) a model of bounded distance from the optimal representation is obtained via the Method of Moments (Tripuraneni et al., 2021).
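Before continuing with the algorithm's main loop, we give a small NumPy sketch of Definition 4.1 (our own illustration, not the paper's code). It uses the identity $\|\hat{\mathbf{B}}_{1,\perp}^\top\hat{\mathbf{B}}_2\|_2 = \|(\mathbf{I}_d - \hat{\mathbf{B}}_1\hat{\mathbf{B}}_1^\top)\hat{\mathbf{B}}_2\|_2$, which holds because $\hat{\mathbf{B}}_{1,\perp}$ has orthonormal columns and left-multiplication by it preserves the spectral norm.

```python
import numpy as np

def principal_angle_dist(B1, B2):
    """Principal angle distance dist(B1, B2) of Definition 4.1,
    computed as || (I - Q1 Q1^T) Q2 ||_2, where Q1, Q2 are
    orthonormal bases of span(B1) and span(B2)."""
    d = B1.shape[0]
    Q1, _ = np.linalg.qr(B1)            # orthonormal basis of span(B1)
    Q2, _ = np.linalg.qr(B2)            # orthonormal basis of span(B2)
    proj_perp = np.eye(d) - Q1 @ Q1.T   # projector onto span(B1)^perp
    return np.linalg.norm(proj_perp @ Q2, 2)   # spectral norm

# quick check: a subspace has distance 0 from any basis of itself
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
assert principal_angle_dist(B, 2.0 * B) < 1e-10
```

Note that the distance depends only on the column spaces, matching the invariance of (6) to the factors $\mathbf{Q}^{-1}, \mathbf{Q}^\top$.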
Subsequently, at every round t, client i samples a fresh batch of samples $\{\mathbf{x}_i^{t,j}, y_i^{t,j}\}_{j=1}^{m}$ from its local distribution (line 9), and thus the corresponding loss function becomes $\hat{f}_i^t(\mathbf{w}_i, \mathbf{B}) := \frac{1}{2m}\sum_{j=1}^{m}(y_i^{t,j} - \mathbf{w}_i^\top\mathbf{B}^\top\mathbf{x}_i^{t,j})^2$. Utilizing the global representation provided by the server, client i computes the optimal head $\mathbf{w}_i$ (line 10). Fixing the newly computed head, client i proceeds to update its global representation model with one step of gradient descent (line 11) and transmits it back to the server. As depicted in lines 13 and 14, the parameter server averages the models received and orthogonalizes the resulting matrix before broadcasting it to the clients, a component of crucial importance required in our analysis. Mapping this method back to Algorithm 1, we note that the number of representation model updates $\tau_\phi$ is set to 1, whereas the number of head updates $\tau_h$ is sufficiently large to derive (approximately) the optimal solutions. This imbalance is designed to take advantage of the inherent structure of our problem, where the size of the $\mathbf{w}_i$'s is significantly smaller than d. Finally, we point out that the number of communication rounds per stage $\tau_r$ is a small constant, known a priori to the algorithm and specified by our analysis.

Remark 4.2. Although our proposed method utilizes FedRep as a backbone, our analysis and framework differ substantially from the ones in (Collins et al., 2021). Specifically, Collins et al. (2021) assume access to (i) infinite samples and (ii) noiseless measurements. Additionally, (iii) the number of participating nodes remains fixed throughout the training, and (iv) the focus lies solely on handling heterogeneous data, with system heterogeneity being an orthogonal direction to their work. In contrast, our analysis requires only (i) finite and (ii) noisy samples, and our theoretical results (Theorems 4.7 and 4.9) revolve around (iii) participating subsets of different sizes and (iv) regimes where both data and system heterogeneity are prevalent.

## 4.1 Theoretical Results

In this subsection, we provide a rigorous analysis of FedRep-SRPFL in the linear representation setting. First, we present the notion of Wall Clock Time (WCT), which is the measure of performance for our algorithm. Subsequently, we illustrate the contraction inequality that determines the rate at which the distance to the optimal representation diminishes. We conclude by showing that FedRep-SRPFL outperforms its straggler-prone variant by a factor of $\mathcal{O}(\log N)$, under the standard assumption that clients' computational times follow the exponential distribution. Before we proceed, we introduce the necessary notation and the assumptions.

$$E_{0}:=1-\mathrm{dist}^{2}\left(\mathbf{B}^{0},\mathbf{B}^{*}\right),\tag{7}$$
$$\bar{\sigma}_{\max,*}:=\max_{\mathcal{I}\subseteq[N],\,|\mathcal{I}|=n,\,n_{0}\leq n\leq N}\sigma_{\max}\left(\frac{1}{\sqrt{n}}\mathbf{W}_{\mathcal{I}}^{*}\right),\tag{8}$$
$$\bar{\sigma}_{\min,*}:=\min_{\mathcal{I}\subseteq[N],\,|\mathcal{I}|=n,\,n_{0}\leq n\leq N}\sigma_{\min}\left(\frac{1}{\sqrt{n}}\mathbf{W}_{\mathcal{I}}^{*}\right)\tag{9}$$

Assumption 4.3. (Sub-gaussian design). The local samples $\mathbf{x}_i \in \mathbb{R}^d$ are i.i.d. with mean 0, covariance $\mathbf{I}_d$ and are $\mathbf{I}_d$-sub-gaussian, i.e., $\mathbb{E}[e^{\mathbf{v}^\top\mathbf{x}_i}] \leq e^{\|\mathbf{v}\|_2^2/2}$ for all $\mathbf{v} \in \mathbb{R}^d$.

Assumption 4.4. (Client diversity).
Let $\bar{\sigma}_{\min,*}$, defined in (9), be the minimum singular value of any matrix that can be obtained by taking n rows of $\frac{1}{\sqrt{n}}\mathbf{W}^*$. Then $\bar{\sigma}_{\min,*} > 0$.

Specifically, our theoretical analysis requires Assumption 4.4 to be satisfied for every $n \geq n_0$. Assumption 4.4 implies that the optimal heads of the participating clients span $\mathbb{R}^k$. This is true in many FL regimes, as the number of clients is usually much larger than the dimension of the shared representation.

Remark 4.5. In this work we consider client speeds to be independent of the local data available to them. This is a natural assumption, since the computational power of each client crucially depends on their device characteristics (battery, CPU, etc.), whereas any connection to their local data is unclear. However, in the presence of strong correlation between data and system heterogeneity, Assumption 4.4 may not hold, which can be seen as a potential limitation of our work and an interesting future direction to explore.

Assumption 4.6. (Client normalization). The ground-truth client-specific parameters satisfy $\|\mathbf{w}_i^*\|_2 = \sqrt{k}$ for all $i \in [n]$, and $\mathbf{B}^*$ has orthonormal columns.

Assumption 4.6 ensures the ground-truth matrix $\mathbf{W}^*\mathbf{B}^{*\top}$ is row-wise *incoherent*, i.e., its row norms have similar magnitudes. This is of vital importance, since our measurement matrices are row-wise sparse and incoherence is a common requirement in sensing problems with sparse measurements.

Wall Clock Time. To measure the speedup that our meta-algorithm enjoys, we use the concept of real time or WCT, as described below. FedRep-SRPFL runs in communication rounds grouped into stages. Consider such a round t at stage r, with nodes $\{1, 2, ..., n_r\}$ participating in the learning process. Here $n_r$ denotes the slowest participating node. The expected amount of time that the server has to wait for the updates to take place is $\mathbb{E}[T_{n_r}^r]$. Put simply, the expected computational time of the slowest node acts as the bottleneck for the round. Further, at the beginning and at the end of every round, models are exchanged between the server and the clients. This incurs an additional, fixed communication cost C. If $\tau_r$ communication rounds take place at every stage r, then the overall expected WCT for FedRep-SRPFL is $\mathbb{E}[T_{SRPFL}] = \sum_{r=0}^{\log(N/n_0)} \tau_r\left(\mathbb{E}[T_{n_r}^r] + C\right)$. Similarly, the total expected runtime for FedRep can be expressed in terms of the total number of rounds, $T_{FR}$, as $\mathbb{E}[T_{FedRep}] = T_{FR}\left(\mathbb{E}[T_N^r] + C\right)$. Taking the ratio of these quantities derives the desired speedup guarantee.

![8_image_0.png](8_image_0.png)

Figure 3: Numerical results on CIFAR10, CIFAR100, EMNIST, FEMNIST with full participation (M = N) in the fixed computation speeds setting. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

Contraction Inequality. Theorem 4.7 captures the progress made between two consecutive rounds of FedRep-SRPFL. It follows that the rate of convergence to the optimal representation is exponentially fast, provided that the number of participating nodes and the batch size are sufficiently large.

Theorem 4.7. *Let Assumptions 4.3-4.6 hold. Further, let the following inequalities hold for the number of participating nodes and the batch size respectively: $n \geq n_0$ and $m \geq c_0\frac{(1+\sigma^2)k^3\kappa^4}{E_0^2}\max\{\log(N), d/n_0\}$, for some absolute constant $c_0$.*
*Then FedRep-SRPFL with stepsize $\eta \leq \frac{1}{8\bar{\sigma}_{\max,*}^2}$ satisfies the following contraction inequality:*

$$\mathrm{dist}\left(\mathbf{B}^{t+1},\mathbf{B}^{*}\right)\leq\mathrm{dist}\left(\mathbf{B}^{t},\mathbf{B}^{*}\right)\sqrt{1-a}+\frac{a}{\sqrt{\frac{n}{n_{0}}\left(1-a\right)}},\tag{10}$$

*w.p. at least $1 - T\cdot\exp\left(-90\min\left\{d, k^2\log(N)\right\}\right)$, where $a = \frac{1}{2}\eta E_0\bar{\sigma}_{\min,*}^2 \leq \frac{1}{4}$.*

Here T denotes the total number of communication rounds, which is logarithmic w.r.t. the target error $\epsilon$. The initial representation computed by the Method of Moments satisfies $\mathrm{dist}\left(\mathbf{B}^0, \mathbf{B}^*\right) \leq 1 - C_M$, for some constant $C_M$. Since $E_0$ is strictly greater than zero, inequality (10) ensures contraction.

Remark 4.8. Theorem 4.7 suggests that the server can learn the ground-truth representation before some of the clients update their local heads or participate in the learning process. This might raise concerns about fairness or accuracy; however, it is important to highlight that such concerns are unfounded. This is because, after obtaining the ground-truth representation, the server shares it with all the clients in the system. Thus, even if a client i was not selected in the representation learning process, it can still optimize its low-dimensional head $\mathbf{w}_i \in \mathbb{R}^k$ using its local samples through a few local updates (given that k is a small constant). Consequently, the derivation of the ground-truth model benefits both the clients that already participated in the learning procedure as well as new clients who opt to join the federated system at a later time.

Logarithmic Speedup. Algorithm 2 sets off with $n_0$ participating clients and follows a doubling scheme, so that at stage r only the fastest $2^r n_0$ nodes contribute to the learning process. Thus, at stage r, inequality (10) can be written as:

$$\mathrm{dist}^{+}\leq\mathrm{dist}\cdot\sqrt{1-\alpha}+\frac{\alpha}{\sqrt{2^{r}\left(1-\alpha\right)}}.\tag{11}$$

![9_image_0.png](9_image_0.png)

Figure 4: Numerical results on CIFAR10, CIFAR100, EMNIST, FEMNIST with partial participation (N = M/5) in the fixed computational speeds setting. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

We note that the second term on the r.h.s. is an artifact of the noisy measurements. Utilizing geometric series properties, we can deduce that in the limit, contraction (11) converges to its fixed point $\mathrm{dist}_\infty$, which satisfies $\mathrm{dist}_\infty = \mathrm{dist}_\infty\sqrt{1-\alpha} + \alpha/\sqrt{2^r(1-\alpha)}$ and thus equals $\alpha/\left(\sqrt{2^r(1-\alpha)}\left(1-\sqrt{1-\alpha}\right)\right)$. This implies that the achievable error of our algorithm is lower bounded by $\alpha/\left(\sqrt{\frac{N}{n_0}(1-\alpha)}\left(1-\sqrt{1-\alpha}\right)\right)$, since the total number of stages is at most $r = \log(N/n_0)$.

To illustrate the theoretical benefits of SRPFL, we compare Algorithm 2 to FedRep. One can distill FedRep from Algorithm 2 by disregarding the doubling scheme and instead, at each round, randomly sampling N nodes to participate. For a fair comparison between the two methods, we set the target error small enough so that the contribution of all N nodes is necessary. Specifically, we express the error as

$$\epsilon=\hat{c}\,\frac{\alpha}{\sqrt{\frac{N}{n_{0}}\left(1-\alpha\right)}\left(1-\sqrt{1-\alpha}\right)},\quad\text{with}\quad\sqrt{2}>\hat{c}>1.\tag{12}$$

Intuitively, one should expect FedRep-SRPFL to vastly outperform straggler-prone FedRep as $\hat{c}$ approaches $\sqrt{2}$ (large error), since in this case the biggest chunk of the workload is completed before FedRep-SRPFL utilizes the slower half of the clients.
In contrast, FedRep experiences heavy delays throughout the whole training process due to the inclusion of stragglers at every round. Conversely, as $\hat{c}$ approaches 1 (small error), the number of rounds spent by FedRep-SRPFL utilizing N clients increases. In this case one should expect the speedup achieved by FedRep-SRPFL in early stages to eventually become obsolete. Theorem 4.9 provides a rigorous exposition of these insights.

Theorem 4.9. *Suppose that at each stage the clients' computational times are i.i.d. random variables drawn from the exponential distribution with parameter $\lambda$. Further, suppose that the expected communication cost per round is $C = \frac{c}{\lambda}$, for some constant c. Finally, consider the target error $\epsilon$ given in (12). Then, we have*

$$\frac{\mathbb{E}[T_{SRPFL}]}{\mathbb{E}[T_{FedRep}]}=\mathcal{O}\left(\frac{\log\left(\frac{1}{\hat{c}-1}\right)}{\log(N)+\log\left(\frac{1}{\hat{c}-1}\right)}\right).$$

![10_image_0.png](10_image_0.png)

Figure 5: Numerical results on CIFAR10, CIFAR100, EMNIST, FEMNIST with full participation (M = N) in the random (dynamic) computation speeds setting. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

Theorem 4.9 establishes an $\mathcal{O}(\log N)$ speedup for our method compared to its straggler-prone benchmark. This result holds when speeds are drawn once at the beginning of the process, as well as when new speeds are drawn at every round, thus rendering our method versatile in a broad class of settings.

Remark 4.10. Our analysis is crucially intertwined with the representation learning framework, where the presence of a shared, global representation serves as common ground across data-heterogeneous clients, allowing us to show that intermediate solutions constitute good starting points and that substantial progress is achieved between stages. Despite FedAvg being a general-purpose algorithm not designed for representation learning, it was recently shown to recover the ground-truth representation in the case of multi-task linear regression (Collins et al., 2022), thus casting FedAvg as a potential candidate subroutine for our method.

Remark 4.11. The initialization phase of Algorithm 2 requires a one-time exchange of information between the clients and the server. Although this process reveals only the sum of outer products of local samples, it can be further fortified using differential privacy techniques, such as the ones in (Jain et al., 2021; Shen et al., 2022).

## 5 Experiments

In our empirical study we consider classification tasks on the CIFAR10, CIFAR100, EMNIST, FEMNIST and Sent140 datasets. We conduct experiments under the full and partial participation schemes with different computation speed distributions, comparing the performance of our proposed method against other state-of-the-art benchmarks. Due to space limitations, in this section we present the results for the image classification tasks in the full and partial participation regimes with fixed and dynamic computation speeds. Additional results and extensive discussion can be found in Appendix C.

Baselines. The first benchmarks under consideration are FedRep (Collins et al., 2021) and Local-Global FedAvg (LG-FedAvg) (Liang et al., 2020). These federated methods utilize a mixture of global and local models to derive personalized solutions with small global loss.
Coupling these algorithms with our proposed doubling scheme gives rise to FedRep-SRPFL and LG-SRPFL, respectively, which are our proposed algorithms for the numerical experiments.

![11_image_0.png](11_image_0.png)

Figure 6: Numerical results on CIFAR10, CIFAR100, EMNIST, FEMNIST with partial participation (N = M/5) in the random (dynamic) computation speeds setting. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

We further compare our methods with FLANP (Reisizadeh et al., 2022) and FedAvg (McMahan et al., 2017), with and without fine-tuning. Note that the fine-tuning variants are also considered, as they often lead to better performance in data-heterogeneous settings (Wang et al., 2019; Yu et al., 2020). Our final baseline is the HF-MAML algorithm from (Fallah et al., 2020a), which is a Model Agnostic Meta Learning-based method producing high quality personalized solutions in data-heterogeneous settings.

Data allocation. To ensure that our data allocation is heterogeneous, we randomly split the data points among the clients in a way that each client can only observe a specific subset of classes. For instance, in the CIFAR10 dataset, where there are in total 10 different classes of data points, each client is only assigned data from 5 different classes, which we refer to as Shards; see the first column of Figure 3. We also make sure that the test set for each client is consistent with the samples they have access to at training time; e.g., if client i only observes samples with labels 1, 4, 5 during training, then at test time they are only asked to classify samples from the same classes. Further details and different allocation schemes are presented in Appendix C.

Simulation of System Heterogeneity. We consider two different types of client speed configurations:

Fixed computation speeds. In this configuration we sample a value for each client from the exponential distribution with parameter λ, once at the beginning of the training process. These personalized values capture the computational time of every client and remain fixed throughout the training procedure. In addition to the computational time, our methods suffer a fixed communication cost at every round. In Figures 3 to 6, each row depicts the effects of different values of communication cost on the convergence of the algorithms under consideration (C.T. = 0, 10, 100)¹.

¹For reference, the computational times are sampled with λ = 1.

Dynamic computation speeds. In this configuration every client samples at every round their processing times from a personalized exponential distribution that remains fixed throughout the process. Specifically, at the beginning of the training process we sample for each client i a parameter λi from the uniform distribution over [1/M, 1]. Subsequently, at every round we sample a new computational time for each client i from the exponential distribution with parameter λi. Similarly to the former setting, an additional, fixed communication cost is incurred at every round, contributing to the overall running time. As we illustrate in Figure 5 (full participation) and Figure 6 (partial participation - 20%), the experimental results under this configuration are qualitatively similar to the ones presented in Figure 3 (full participation) and Figure 4 (partial participation - 20%) for fixed computation speeds, with SRPFL providing substantial speedup over straggler-prone variants.
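As a sanity check on this setup, the following Monte Carlo sketch (our own illustration, with hypothetical parameter values) contrasts the per-round wall-clock time of a straggler-prone method, which waits for the slowest of the N sampled clients, with the n-th order statistic that SRPFL waits for at a stage with n participants.

```python
import numpy as np

rng = np.random.default_rng(0)

def round_times(N=100, n0=4, lam=1.0, comm_cost=10.0, trials=2000):
    """Monte Carlo estimate of per-round wall-clock time. Exponential(lam)
    computational times are redrawn every trial, mimicking the dynamic
    configuration; comm_cost is the fixed communication cost per round."""
    t = rng.exponential(1.0 / lam, size=(trials, N))
    t.sort(axis=1)                                     # ascending times
    full = t[:, -1].mean() + comm_cost                 # slowest of N clients
    per_stage = {}
    n = n0
    while n <= N:
        per_stage[n] = t[:, n - 1].mean() + comm_cost  # n-th fastest client
        n = min(2 * n, N) if n < N else N + 1          # doubling, then stop
    return full, per_stage
```

Running this sketch shows the expected pattern: early stages wait only a fraction of the full-participation round time, and the gap shrinks as `comm_cost` grows to dominate, consistent with takeaway 3) below.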
Results and Discussions. From the numerical results in Figures 3 to 6 we distill the following takeaways: 1) Coupling SRPFL with FedRep exhibits consistently superior performance across different datasets and communication time regimes compared to all proposed baselines. 2) Applying SRPFL to personalized FL solvers (FedRep and LG-FedAvg) significantly enhances their efficiency. 3) The speedup achieved by our meta-algorithm is more significant for smaller values of communication time. Concretely, we observe that the gap between FedRep-SRPFL and FedRep, as well as LG-SRPFL and LG-FedAvg, diminishes as the communication cost increases (plots in the same column, C.T. = 0, 10, 100). This is unsurprising, since our method improves the computational cost of the training, and thus when the communication cost dominates the overall running time, the benefits of SRPFL are less apparent. 4) FedRep-SRPFL vastly outperforms the fine-tuning variant of the previously proposed FLANP, especially in regimes with high data heterogeneity (FEMNIST).

## 6 Conclusion

In this paper, we proposed SRPFL, a straggler-resilient FL meta-algorithm with near-optimal sample complexity and provable logarithmic speedup guarantees in regimes with data and system heterogeneity. Our method leverages ideas from representation learning theory to compute a global representation model along with local client heads, thus deriving personalized solutions for all clients. In SRPFL the participating clients are selected in an adaptive manner. In early stages fast nodes are prioritized, and progressively slower nodes are included in the training process, thereby mitigating the effects of stragglers without compromising the quality of the solutions. Our numerical results illustrated the benefits of SRPFL when coupled with different personalized FL methods such as FedRep and LG-FedAvg. Furthermore, our experiments support our theoretical findings, showcasing the superior performance of FedRep-SRPFL compared to state-of-the-art FL methods.

## Acknowledgments

The research of I. Tziotis and A. Mokhtari is supported in part by NSF Grants 2019844 and 2112471, ARO Grant W911NF2110226, the Machine Learning Lab (MLL) at UT Austin, and the Wireless Networking and Communications Group (WNCG) Industrial Affiliates Program. The research of Z. Shen and H. Hassani is supported by the NSF Institute for CORE Emerging Methods in Data Science (EnCORE) as well as The Institute for Learning-enabled Optimization at Scale (TILOS). The research of R. Pedarsani is supported by NSF awards 2003035 and 2236483.

## References

Acar, D. A. E., Zhao, Y., Matas, R., Mattina, M., Whatmough, P., and Saligrama, V. (2021). Federated learning based on dynamic regularization. In *International Conference on Learning Representations*.

Balakrishnan, R., Li, T., Zhou, T., Himayat, N., Smith, V., and Bilmes, J. (2021). Diverse client selection for federated learning via submodular maximization. In *International Conference on Learning Representations*.

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation learning: A review and new perspectives. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 35(8):1798–1828.

Chen, H., Ding, J., Tramel, E. W., Wu, S., Sahu, A. K., Avestimehr, S., and Zhang, T. (2022). Actperfl: Active personalized federated learning. In *ACL 2022 Workshop on Federated Learning for Natural Language Processing*.

Cho, Y. J., Wang, J., Chiruvolu, T., and Joshi, G. (2021).
Personalized federated learning for heterogeneous clients with clustered knowledge transfer. *arXiv preprint arXiv:2109.08119*.

Cho, Y. J., Wang, J., and Joshi, G. (2022). Towards understanding biased client selection in federated learning. In *International Conference on Artificial Intelligence and Statistics*, pages 10351–10375. PMLR.

Collins, L., Hassani, H., Mokhtari, A., and Shakkottai, S. (2021). Exploiting shared representations for personalized federated learning. In *Proceedings of the 38th International Conference on Machine Learning*, Proceedings of Machine Learning Research. PMLR.

Collins, L., Hassani, H., Mokhtari, A., and Shakkottai, S. (2022). Fedavg with fine tuning: Local updates lead to representation learning. *Advances in Neural Information Processing Systems*, 35:10572–10586.

Deng, Y., Kamani, M. M., and Mahdavi, M. (2020). Adaptive personalized federated learning. *arXiv preprint arXiv:2003.13461*.

Fallah, A., Mokhtari, A., and Ozdaglar, A. (2020a). On the convergence theory of gradient-based model-agnostic meta-learning algorithms. In *International Conference on Artificial Intelligence and Statistics*. PMLR.

Fallah, A., Mokhtari, A., and Ozdaglar, A. (2020b). Personalized federated learning: A meta-learning approach. *arXiv preprint arXiv:2002.07948*.

Haddadpour, F., Kamani, M. M., Mokhtari, A., and Mahdavi, M. (2021). Federated learning with compression: Unified analysis and sharp guarantees. In *International Conference on Artificial Intelligence and Statistics*. PMLR.

Hanzely, F. and Richtárik, P. (2020). Federated learning of a mixture of global and local models. *arXiv preprint arXiv:2002.05516*.

Horváth, S., Sanjabi, M., Xiao, L., Richtárik, P., and Rabbat, M. (2022). Fedshuffle: Recipes for better use of local work in federated learning. *arXiv preprint arXiv:2204.13169*.

Hu, S., Wu, Z. S., and Smith, V. (2021). Private multi-task learning: Formulation and applications to federated learning. *arXiv preprint arXiv:2108.12978*.

Jain, P., Netrapalli, P., and Sanghavi, S. (2013). Low-rank matrix completion using alternating minimization. In *Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing*, STOC '13, page 665–674, New York, NY, USA. Association for Computing Machinery.

Jain, P., Rush, J., Smith, A., Song, S., and Guha Thakurta, A. (2021). Differentially private model personalization. *Advances in Neural Information Processing Systems*, 34:29723–29735.

Jiang, L. and Lin, T. (2022). Test-time robust personalization for federated learning. *arXiv preprint arXiv:2205.10920*.

Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. (2021). Advances and open problems in federated learning. *Foundations and Trends® in Machine Learning*.

Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S. J., Stich, S. U., and Suresh, A. T. (2019). Scaffold: Stochastic controlled averaging for on-device federated learning. *arXiv preprint arXiv:1910.06378*.

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. *Nature*, 521:436–444.

Lee, S., Sahu, A. K., He, C., and Avestimehr, S. (2022). Partial model averaging in federated learning: Performance guarantees and benefits. *arXiv preprint arXiv:2201.03789*.

Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. (2020). Federated optimization in heterogeneous networks. *Proceedings of Machine Learning and Systems*, 2:429–450.

Liang, P. P., Liu, T., Ziyin, L., Allen, N. B., Auerbach, R.
P., Brent, D., Salakhutdinov, R., and Morency, L.-P. (2020). Think locally, act globally: Federated learning with local and global representations. *arXiv preprint arXiv:2001.01523*.

Mansour, Y., Mohri, M., Ro, J., and Suresh, A. T. (2020). Three approaches for personalization with applications to federated learning. *arXiv preprint arXiv:2002.10619*.

McMahan, B., Moore, E., Ramage, D., Hampson, S., and Arcas, B. A. y. (2017). Communication-efficient learning of deep networks from decentralized data. In *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics*, Proceedings of Machine Learning Research. PMLR.

Mitra, A., Jaafar, R., Pappas, G. J., and Hassani, H. (2021). Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, *Advances in Neural Information Processing Systems*, volume 34, pages 14606–14619. Curran Associates, Inc.

Nguyen, J., Malik, K., Zhan, H., Yousefpour, A., Rabbat, M., Malek, M., and Huba, D. (2022). Federated learning with buffered asynchronous aggregation. In *International Conference on Artificial Intelligence and Statistics*, pages 3581–3607. PMLR.

Nishio, T. and Yonetani, R. (2019). Client selection for federated learning with heterogeneous resources in mobile edge. In *ICC 2019 - 2019 IEEE International Conference on Communications (ICC)*, pages 1–7. IEEE.

Pillutla, K., Malik, K., Mohamed, A.-R., Rabbat, M., Sanjabi, M., and Xiao, L. (2022). Federated learning with partial model personalization. In *International Conference on Machine Learning*, pages 17716–17758. PMLR.

Reisizadeh, A., Taheri, H., Mokhtari, A., Hassani, H., and Pedarsani, R. (2019). Robust and communication-efficient collaborative learning. In *Proceedings of the 33rd International Conference on Neural Information Processing Systems*, pages 8388–8399.

Reisizadeh, A., Tziotis, I., Hassani, H., Mokhtari, A., and Pedarsani, R. (2022). Straggler-resilient federated learning: Leveraging the interplay between statistical accuracy and system heterogeneity. *IEEE Journal on Selected Areas in Information Theory*.

Shen, Z., Ye, J., Kang, A., Hassani, H., and Shokri, R. (2022). Share your representation only: Guaranteed improvement of the privacy-utility tradeoff in federated learning. In *The Eleventh International Conference on Learning Representations*.

Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A. S. (2017). Federated multi-task learning. *Advances in Neural Information Processing Systems*.

So, J., Ali, R. E., Güler, B., and Avestimehr, A. S. (2021). Secure aggregation for buffered asynchronous federated learning. *arXiv preprint arXiv:2110.02177*.

Stich, S. U. (2019). Local SGD converges fast and communicates little. In *ICLR 2019 International Conference on Learning Representations*.

Tandon, R., Lei, Q., Dimakis, A. G., and Karampatziakis, N. (2017). Gradient coding: Avoiding stragglers in distributed learning. In Precup, D. and Teh, Y. W., editors, *Proceedings of the 34th International Conference on Machine Learning*, Proceedings of Machine Learning Research. PMLR.

Tripuraneni, N., Jin, C., and Jordan, M. (2021). Provable meta-learning of linear representations. In *International Conference on Machine Learning*, pages 10434–10443. PMLR.

Vershynin, R. (2018). *High-Dimensional Probability: An Introduction with Applications in Data Science*. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
Wang, J., Liu, Q., Liang, H., Joshi, G., and Poor, H. V. (2020a). Tackling the objective inconsistency problem in heterogeneous federated optimization. *Advances in Neural Information Processing Systems*.

Wang, K., Mathews, R., Kiddon, C., Eichner, H., Beaufays, F., and Ramage, D. (2019). Federated evaluation of on-device personalization. *arXiv preprint arXiv:1910.10252*.

Wang, S., Liu, J., and Shroff, N. (2018). Coded sparse matrix multiplication. In Dy, J. and Krause, A., editors, *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of Proceedings of Machine Learning Research, pages 5152–5160. PMLR.

Wang, S., Liu, J., and Shroff, N. B. (2020b). Fundamental limits of approximate gradient coding. *ACM SIGMETRICS Performance Evaluation Review*, 48:21–22.

Wu, S., Li, T., Charles, Z., Xiao, Y., Liu, Z., Xu, Z., and Smith, V. (2022). Motley: Benchmarking heterogeneity and personalization in federated learning.

Xie, C., Koyejo, S., and Gupta, I. (2019). Asynchronous federated optimization. *arXiv preprint arXiv:1903.03934*.

Yang, H., Zhang, X., Khanduri, P., and Liu, J. (2022). Anarchic federated learning. In *International Conference on Machine Learning*, pages 25331–25363. PMLR.

Yu, T., Bagdasaryan, E., and Shmatikov, V. (2020). Salvaging federated learning by local adaptation. *arXiv preprint arXiv:2002.04758*.

Zhu, C., Xu, Z., Chen, M., Konečný, J., Hard, A., and Goldstein, T. (2021). Diurnal or nocturnal? federated learning of multi-branch networks from periodically shifting distributions. In *International Conference on Learning Representations*.

## A Appendix

Before we dive into the analysis we provide the following useful definition.

Definition A.1. For a random vector $\mathbf{x} \in \mathbb{R}^{d_1}$ and a fixed matrix $\mathbf{A} \in \mathbb{R}^{d_1\times d_2}$, the vector $\mathbf{A}^\top\mathbf{x}$ is called $\|\mathbf{A}\|_2$-subgaussian if $\mathbf{y}^\top\mathbf{A}^\top\mathbf{x}$ is subgaussian with subgaussian norm $O(\|\mathbf{A}\|_2\|\mathbf{y}\|_2)$ for all $\mathbf{y} \in \mathbb{R}^{d_2}$, i.e., $\mathbb{E}\left[\exp\left(\mathbf{y}^\top\mathbf{A}^\top\mathbf{x}\right)\right] \leq \exp\left(\|\mathbf{y}\|_2^2\|\mathbf{A}\|_2^2/2\right)$.

We study the performance of SRPFL with FedRep as the subroutine of choice. The first part of our analysis focuses on a single round t and extends the analysis in Collins et al. (2021). We assume that there are N clients in the network and at round t a subset $\mathcal{I}^t$ of them participates in the learning procedure, with cardinality $n \geq n_0 := 2d\log(N)\cdot\bar{\sigma}_{\max,*}$. Without loss of generality we assume that the clients are indexed from fastest to slowest; thus the clients that participate in the learning process are all $i \in [n]$. Each client i draws a batch of $m \geq c_0\frac{(1+\sigma^2)k^3\kappa^4\log(N)}{E_0^2}$ fresh, i.i.d. samples at every round. We denote by $\mathbf{X}_i^t \in \mathbb{R}^{m\times d}$ and $\mathbf{Y}_i^t \in \mathbb{R}^m$ the matrix of samples and the labels for client i, such that the rows of $\mathbf{X}_i^t$ are the samples $\{\mathbf{x}_i^1, ..., \mathbf{x}_i^m\}$. By $\mathbf{Z}_i^t \in \mathbb{R}^m$ we denote the noise in the measurements of client i, with $z_{i,j} \sim \mathcal{N}(0, \sigma^2)$.

Let $\hat{\mathbf{B}}^* \in \mathbb{R}^{d\times k}$ and $\mathbf{W}^* \in \mathbb{R}^{N\times k}$ stand for the optimal representation and the concatenation of optimal heads, respectively. The hat denotes that a matrix is orthonormal, i.e., its columns form an orthonormal set. Similarly, $\hat{\mathbf{B}}^t \in \mathbb{R}^{d\times k}$ and $\mathbf{W}^t \in \mathbb{R}^{N\times k}$ denote the global representation and the concatenation of the heads at round t. The $\mathbf{w}_i^*$'s and $\mathbf{w}_i^t$'s denote the optimal heads and the heads at round t, which constitute the rows of $\mathbf{W}^*$ and $\mathbf{W}^t$, respectively. Furthermore, we define $\bar{\sigma}_{\min,*} := \min_{\mathcal{I}\subseteq[N],\,|\mathcal{I}|=n_0,\,n_0\leq N} \sigma_{\min}\left(\frac{1}{\sqrt{n_0}}\mathbf{W}_\mathcal{I}^*\right)$ and $\bar{\sigma}_{\max,*} := \max_{\mathcal{I}\subseteq[N],\,|\mathcal{I}|=n_0,\,n_0\leq N} \sigma_{\max}\left(\frac{1}{\sqrt{n_0}}\mathbf{W}_\mathcal{I}^*\right)$, where $\mathbf{W}_\mathcal{I}$ is formed by taking the rows of $\mathbf{W}$ indexed by $\mathcal{I}$.
That is, $\bar{\sigma}_{\max,*}$ and $\bar{\sigma}_{\min,*}$ are the maximum and minimum singular values of any submatrix $\mathbf{W}_\mathcal{I}^*$ that can be obtained throughout the course of our algorithm. Notice that by Assumption 4.6 each row of $\mathbf{W}^*$ has norm $\sqrt{k}$, so $\frac{1}{\sqrt{n_0}}$ acts as a normalization factor such that $\left\|\frac{1}{\sqrt{n_0}}\mathbf{W}_\mathcal{I}^*\right\|_F = \sqrt{k}$. Finally, we define $\kappa = \bar{\sigma}_{\max,*}/\bar{\sigma}_{\min,*}$.

Since we focus on a single round, the time index can be dropped for simplicity. Further, henceforth we drop the subscript $\mathcal{I}^t$ on $\mathbf{W}^t$. First we derive the update scheme our algorithm follows. Notice that the empirical objective function given in (6) can be expressed via the matrices $\mathbf{X}_i$ and $\mathbf{Y}_i$,

$$L_{N}(\mathbf{B},\mathbf{W})=\frac{1}{2mn}\sum_{i=1}^{n}\left\|\mathbf{Y}_{i}-\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}\right\|_2^{2}.\tag{13}$$

Further, computing the gradients we derive

$$\frac{1}{2mn}\sum_{i=1}^{n}\nabla_{\hat{\mathbf{B}}}\left\|\mathbf{Y}_{i}-\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}\right\|_2^{2}=\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\left(\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}-\mathbf{Y}_{i}\right)\mathbf{w}_{i}^{\top},\tag{14}$$

$$\frac{1}{2mn}\nabla_{\mathbf{w}_{i}}\sum_{j=1}^{n}\left\|\mathbf{Y}_{j}-\mathbf{X}_{j}\hat{\mathbf{B}}\mathbf{w}_{j}\right\|_2^{2}=\frac{1}{mn}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\left(\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}-\mathbf{Y}_{i}\right),\tag{15}$$

and since $\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}$ is invertible with high probability by Lemma A.4, solving for the minimizer gives us

$$\mathbf{w}_{i}^{+}=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Y}_{i}.\tag{16}$$

Thus our update scheme with stepsize $\eta$ is

$$\forall i\in[n]\quad\mathbf{w}_{i}^{+}=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Y}_{i},\tag{17}$$

$$\mathbf{B}^{+}=\hat{\mathbf{B}}-\frac{\eta}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\left(\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\mathbf{Y}_{i}\right)\mathbf{w}_{i}^{+\top},\tag{18}$$

$$\hat{\mathbf{B}}^{+},\mathbf{R}^{+}=QR(\mathbf{B}^{+}),\tag{19}$$

where QR denotes the QR decomposition and $\mathbf{Y}_i = \mathbf{X}_i\hat{\mathbf{B}}^*\mathbf{w}_i^* + \mathbf{Z}_i$.

Lemma A.2. For every client i the update for $\mathbf{w}_i$ can be expressed as follows:

$$\mathbf{w}_{i}^{+}=\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\mathbf{F}_{i}+\mathbf{G}_{i},\tag{20}$$

where $\mathbf{F}_i$ and $\mathbf{G}_i$ are defined in equations (24) and (25), respectively.

Proof. Further expanding (17) we can write

$$\mathbf{w}_{i}^{+}=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\tag{21}$$

$$=\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}^{*}-\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\right)\mathbf{w}_{i}^{*}+\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\tag{22}$$

$$=\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\mathbf{F}_{i}+\mathbf{G}_{i},\tag{23}$$

where we define

$$\mathbf{F}_{i}:=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}^{*}-\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\right)\mathbf{w}_{i}^{*},\tag{24}$$

$$\mathbf{G}_{i}:=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}.\tag{25}$$

We further have the following immediate corollary.

Corollary A.3. Let $\mathbf{W}^+$, $\mathbf{F}$ and $\mathbf{G}$ be the matrices whose rows are the concatenation of the $\mathbf{w}_i^+$, $\mathbf{F}_i$ and $\mathbf{G}_i$, respectively. Then

$$\mathbf{W}^{+}=\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}+\mathbf{F}+\mathbf{G}.\tag{26}$$

Our first goal is to control the norm of $\mathbf{w}_i^+$. In order to achieve that, we provide lemmas that bound the norms of $\mathbf{F}_i$ and $\mathbf{G}_i$, extending the analysis in Collins et al. (2021) and Jain et al. (2013).

Lemma A.4.
Let $\delta = c\frac{k^{3/2}\sqrt{\log(N)}}{\sqrt{m}}$ for some absolute constant c. Then with probability at least $1 - \exp\left(-115k^3\log(N)\right)$,

$$\forall i\in[n],\quad\sigma_{\min}\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)\geq1-\delta.\tag{27}$$

It follows that with the same probability

$$\forall i\in[n],\quad\sigma_{\max}\left(\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\right)\leq\frac{1}{1-\delta}.\tag{28}$$

Proof. First notice that we can rewrite

$$\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}=\sum_{j=1}^{m}\frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\left(\frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\right)^{\top}.\tag{29}$$

For all $i \in [n]$, $j \in [m]$ we define $\mathbf{v}_i^j := \frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{x}_i^j$, such that each $\mathbf{v}_i^j$ is an i.i.d. $\|\frac{1}{\sqrt{m}}\hat{\mathbf{B}}\|_2$-subgaussian random variable (see Definition A.1), and thus by equation (4.22) (Theorem 4.6.1) in Vershynin (2018) we obtain the following bound for any $m \geq k$, $l \geq 0$:

$$\sigma_{\min}\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)\geq1-c_{1}\left(\sqrt{\frac{k}{m}}+\frac{l}{\sqrt{m}}\right),\tag{30}$$

with probability at least $1 - \exp(-l^2)$ and $c_1$ some absolute constant. We set $l = \sqrt{k}\left(12k\sqrt{\log(N)} - 1\right)$ and $\delta_1 = 12c_1\frac{k^{3/2}\sqrt{\log(N)}}{\sqrt{m}}$, and the above bound becomes

$$\sigma_{\min}\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)\geq1-\delta_{1},\tag{31}$$

with probability at least

$$1-\exp\left(-k\left(12k\sqrt{\log(N)}-1\right)^{2}\right).\tag{32}$$

Further notice that

$$\exp\left(-k\left(12k\sqrt{\log(N)}-1\right)^{2}\right)=\exp\left(k\left(-144k^{2}\log(N)+24k\sqrt{\log(N)}-1\right)\right)\tag{33}$$
$$\leq\exp\left(-120k^{3}\log(N)\right).\tag{34}$$

Thus, taking a Union Bound over $i \in [n]$, we have that (31) holds for all $i \in [n]$ with probability at least

$$1-n\exp\left(-120k^{3}\log(N)\right)\geq1-\exp\left(-115k^{3}\log(N)\right).\tag{35}$$

Choosing c sufficiently large derives the statement of the lemma.

Lemma A.5. Let $\mathbf{H}_i := \frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{X}_i^{\top}\frac{1}{\sqrt{m}}\mathbf{X}_i\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d\right)\hat{\mathbf{B}}^*$ and $\delta := c\frac{k^{3/2}\sqrt{\log(N)}}{\sqrt{m}}$, for an absolute constant c. Then with probability at least $1 - \exp\left(-115k^2\log(N)\right)$ we have

$$\sum_{i=1}^{n}\|\mathbf{H}_{i}\mathbf{w}_{i}^{*}\|_{2}^{2}\leq\delta^{2}\,\|\mathbf{W}^{*}\|_{2}^{2}\,\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right).\tag{36}$$

Proof. In order to argue about the quantity $\mathbf{H}_i = \frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{X}_i^{\top}\frac{1}{\sqrt{m}}\mathbf{X}_i(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d)\hat{\mathbf{B}}^*$, we define the matrix $\mathbf{U} := \frac{1}{\sqrt{m}}\mathbf{X}_i(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d)\hat{\mathbf{B}}^*$, such that its j-th row, $\mathbf{u}_j = \frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{*\top}(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d)\mathbf{x}_i^j$, is subgaussian with norm at most $\|\frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{*\top}(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d)\|_2$. Similarly, we define $\mathbf{V} = \frac{1}{\sqrt{m}}\mathbf{X}_i\hat{\mathbf{B}}$, such that its j-th row $\mathbf{v}_j = \frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{\top}\mathbf{x}_i^j$ has norm at most $\|\frac{1}{\sqrt{m}}\hat{\mathbf{B}}\|_2$. We are now ready to use a concentration argument similar to Proposition 4.4.5 in Vershynin (2018). Let $\mathcal{S}^{k-1}$ denote the unit sphere in k dimensions and $\mathcal{N}_k$ the 1/4-net of cardinality $9^k$.
From equation (4.13) in Vershynin (2018) we have

$$\left\|\left(\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\right)\mathbf{X}_{i}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_{d}\right)\hat{\mathbf{B}}^{*}\right\|_{2}=\left\|\mathbf{U}^{\top}\mathbf{V}\right\|_{2}\leq2\max_{\mathbf{p},\mathbf{y}\in\mathcal{N}_{k}}\mathbf{p}^{\top}\left(\sum_{j=1}^{m}\mathbf{u}_{j}\mathbf{v}_{j}^{\top}\right)\mathbf{y}\tag{37}$$
$$=2\max_{\mathbf{p},\mathbf{y}\in\mathcal{N}_{k}}\sum_{j=1}^{m}\left\langle\mathbf{p},\mathbf{u}_{j}\right\rangle\left\langle\mathbf{v}_{j},\mathbf{y}\right\rangle.\tag{38}$$

By definition, $\langle\mathbf{p},\mathbf{u}_j\rangle$ and $\langle\mathbf{v}_j,\mathbf{y}\rangle$ are subgaussians with norms $\|\frac{1}{\sqrt{m}}\hat{\mathbf{B}}^{*\top}(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_d)\|_2 = \frac{1}{\sqrt{m}}\mathrm{dist}(\hat{\mathbf{B}},\hat{\mathbf{B}}^*)$ and $\frac{1}{\sqrt{m}}\|\hat{\mathbf{B}}\|_2 = \frac{1}{\sqrt{m}}$, respectively, and thus for all $j \in [m]$ the product $\langle\mathbf{p},\mathbf{u}_j\rangle\langle\mathbf{v}_j,\mathbf{y}\rangle$ is subexponential with norm at most $\frac{C'}{m}\mathrm{dist}(\hat{\mathbf{B}},\hat{\mathbf{B}}^*)$, for some constant $C'$. Note that

$$\mathbb{E}\left[\left\langle\mathbf{p},\mathbf{u}_{j}\right\rangle\left\langle\mathbf{v}_{j},\mathbf{y}\right\rangle\right]=\mathbf{p}^{\top}\left(\hat{\mathbf{B}}^{*\top}\left(\mathbf{I}_{d}-\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\right)\hat{\mathbf{B}}\right)\mathbf{y}=0,\tag{39}$$

and thus we can use Bernstein's inequality to bound the sum of m zero-mean subexponential random variables, for any fixed pair $\mathbf{p}, \mathbf{y} \in \mathcal{N}_k$:

$$\mathbb{P}\left(\sum_{j=1}^{m}\left\langle\mathbf{p},\mathbf{u}_{j}\right\rangle\left\langle\mathbf{v}_{j},\mathbf{y}\right\rangle\geq s\right)\leq\exp\left(-c_{2}\min\left\{\frac{s^{2}m^{2}}{\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)},\frac{sm}{\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)}\right\}\right)\tag{40}$$
$$\leq\exp\left(-c_{2}m\min\left\{\frac{s^{2}}{\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)},\frac{s}{\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)}\right\}\right),\tag{41}$$

for a constant $c_2$. Thus, taking a Union Bound over all the points in the net we derive

$$\mathbb{P}\left(2\max_{\mathbf{p},\mathbf{y}\in\mathcal{N}_{k}}\sum_{j=1}^{m}\left\langle\mathbf{p},\mathbf{u}_{j}\right\rangle\left\langle\mathbf{v}_{j},\mathbf{y}\right\rangle\geq2s\right)\leq9^{2k}\exp\left(-c_{2}m\min\left\{\frac{s^{2}}{\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)},\frac{s}{\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)}\right\}\right).\tag{42}$$

Since $m > Ck^2\log(N)$, by setting $s = \mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\sqrt{\frac{Ck^{2}\log(N)}{4m}}$ and using (38) we obtain

$$\mathbb{P}\left(\frac{1}{m}\left\|\left(\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\right)\mathbf{X}_{i}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_{d}\right)\hat{\mathbf{B}}^{*}\right\|_{2}\geq\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\sqrt{\frac{Ck^{2}\log(N)}{m}}\right)\tag{43}$$
$$\leq9^{2k}\exp\left(-c_{2}m\frac{s^{2}}{\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)}\right)\tag{44}$$
$$\leq9^{2k}\exp\left(-\frac{C\cdot c_{2}}{4}k^{2}\log(N)\right)\tag{45}$$
$$\leq\exp\left(-120k^{2}\log(N)\right),\tag{46}$$

for sufficiently large C.
Using a Union Bound again over all participating clients we get

$$\mathbb{P}\left(\forall i\in[n]\quad\|\mathbf{H}_{i}\|_{2}\leq\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\sqrt{\frac{Ck^{2}\log(N)}{m}}\right)\geq1-n\exp\left(-120k^{2}\log(N)\right)\tag{47}$$
$$\geq1-\exp\left(-115k^{2}\log(N)\right).$$

The above also implies

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{H}_{i}\|_{2}^{2}\leq C\,\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\frac{k^{2}\log(N)}{m}\right)\geq1-\exp\left(-115k^{2}\log(N)\right),\tag{48}$$

that is,

$$\mathbb{P}\left(\frac{k}{n}\left\|\mathbf{W}^{*}\right\|_{2}^{2}\sum_{i=1}^{n}\|\mathbf{H}_{i}\|_{2}^{2}\leq C\left\|\mathbf{W}^{*}\right\|_{2}^{2}\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\frac{k^{3}\log(N)}{m}\right)\geq1-\exp\left(-115k^{2}\log(N)\right).\tag{49}$$

Finally, notice that

$$\sum_{i=1}^{n}\|\mathbf{H}_{i}\mathbf{w}_{i}^{*}\|_{2}^{2}\leq\sum_{i=1}^{n}\|\mathbf{H}_{i}\|_{2}^{2}\,k\leq\frac{\|\mathbf{W}^{*}\|_{F}^{2}}{n}\sum_{i=1}^{n}\|\mathbf{H}_{i}\|_{2}^{2}\leq\frac{k}{n}\,\|\mathbf{W}^{*}\|_{2}^{2}\sum_{i=1}^{n}\|\mathbf{H}_{i}\|_{2}^{2},\tag{50}$$

where we used Assumption 4.6 and the fact that the rank of $\mathbf{W}^*$ is k. Combining this with (49) and choosing sufficiently large c we derive the result.

Building on the previous lemmas we can now bound the norm of $\mathbf{F}_i$.

Lemma A.6. Let $\delta := c\frac{k^{3/2}\sqrt{\log(N)}}{\sqrt{m}}$ for some absolute constant c and for all $i \in [n]$ let $\mathbf{F}_i$ be given by (24). Further, let the matrix $\mathbf{F} \in \mathbb{R}^{n\times k}$ be such that its rows are the concatenation of the $\mathbf{F}_i$'s. Then with probability at least $1 - \exp\left(-110k^2\log(N)\right)$ we have

$$\forall i\in[n]\quad\|\mathbf{F}_{i}\|_{2}\leq\frac{\delta}{1-\delta}\,\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\|\mathbf{w}_{i}^{*}\|_{2},\tag{51}$$
$$\|\mathbf{F}\|_{F}\leq\frac{\delta}{1-\delta}\,\mathrm{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\|\mathbf{W}^{*}\|_{2}.\tag{52}$$

Proof.

$$\|\mathbf{F}_{i}\|_{2}^{2}\leq\left\|\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\right\|_{2}^{2}\|\mathbf{H}_{i}\|_{2}^{2}\,\|\mathbf{w}_{i}^{*}\|_{2}^{2}\tag{53}$$
$$\leq\frac{\delta^{2}}{\left(1-\delta\right)^{2}}\cdot\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\|\mathbf{w}_{i}^{*}\|_{2}^{2},\tag{54}$$

which holds for all $i \in [n]$ with probability at least $1 - \exp\left(-110k^2\log(N)\right)$, by using a Union Bound on the failure probabilities of (28) and (47). Similarly, we have

$$\|\mathbf{F}\|_{F}^{2}=\sum_{i=1}^{n}\|\mathbf{F}_{i}\|_{2}^{2}\leq\sum_{i=1}^{n}\left\|\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\right\|_{2}^{2}\|\mathbf{H}_{i}\mathbf{w}_{i}^{*}\|_{2}^{2}\tag{55}$$
$$\leq\frac{1}{\left(1-\delta\right)^{2}}\sum_{i=1}^{n}\|\mathbf{H}_{i}\mathbf{w}_{i}^{*}\|_{2}^{2}\tag{56}$$
$$\leq\frac{\delta^{2}}{\left(1-\delta\right)^{2}}\cdot\mathrm{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\|\mathbf{W}^{*}\|_{2}^{2},\tag{57}$$

which holds with probability at least $1 - \exp\left(-110k^2\log(N)\right)$, taking a Union Bound on the failure probabilities of (28) and (36).

We now turn our attention to deriving a bound for $\|\mathbf{G}_i\|_2$.

Lemma A.7. Let $\delta := c\frac{k^{3/2}\sqrt{\log(N)}}{\sqrt{m}}$ for some absolute constant c and for all $i \in [n]$ let $\mathbf{G}_i$ be given by (25). Further, let the matrix $\mathbf{G} \in \mathbb{R}^{n\times k}$ be such that its rows are the concatenation of the $\mathbf{G}_i$'s. Then with probability at least $1 - \exp\left(-110k^2\log(N)\right)$ we have

$$\forall i\in[n]\quad\|\mathbf{G}_{i}\|_{2}\leq\frac{\delta}{1-\delta}\sigma^{2},\tag{58}$$
$$\|\mathbf{G}\|_{F}\leq\frac{\delta}{1-\delta}\sqrt{n}\,\sigma^{2}.\tag{59}$$

Proof.
Notice that we can write

$$\mathbf{G}_{i}=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}=\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\frac{1}{m}\sum_{j=1}^{m}z_{i}^{j}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j},\tag{60}$$

and since $z_i^j \sim \mathcal{N}(0, \sigma^2)$ we can conclude that for all $i, j$, $z_i^j\hat{\mathbf{B}}^{\top}\mathbf{x}_i^j$ is an i.i.d. zero-mean subexponential with norm at most $C_2'\sigma^2\|\hat{\mathbf{B}}\|_2 = C_2'\sigma^2$, for some constant $C_2'$. Once again we denote by $\mathcal{S}^{k-1}$ the unit sphere in k dimensions and by $\mathcal{N}_k$ the 1/4-net with cardinality $9^k$. Using Bernstein's inequality and a Union Bound over all the points on the net, we follow the derivations from Lemma A.5 to get

$$\mathbb{P}\left(\left\|\frac{1}{m}\sum_{j=1}^{m}z_{i}^{j}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\right\|_{2}\geq2s\right)\leq9^{k+1}\exp\left(-c_{3}m\min\left\{\frac{s^{2}}{\sigma^{4}},\frac{s}{\sigma^{2}}\right\}\right).\tag{61}$$

Since $m > C_2k^2\log(N)$, by setting $s = \sigma^2\sqrt{\frac{C_2k^2\log(N)}{4m}}$ we derive

$$\mathbb{P}\left(\left\|\frac{1}{m}\sum_{j=1}^{m}z_{i}^{j}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\right\|_{2}\geq\sigma^{2}\sqrt{\frac{C_{2}k^{2}\log(N)}{m}}\right)\leq9^{k+1}\exp\left(-c_{3}m\frac{s^{2}}{\sigma^{4}}\right)\tag{62}$$
$$\leq9^{k+1}\exp\left(-C_{2}\cdot c_{3}\,k^{2}\log(N)\right)\tag{63}$$
$$\leq\exp\left(-115k^{2}\log(N)\right),\tag{64}$$

for sufficiently large $C_2$. Choosing c large enough and taking a Union Bound over all $i \in [n]$ we can obtain

$$\mathbb{P}\left(\forall i\in[n]\quad\left\|\frac{1}{m}\sum_{j=1}^{m}z_{i}^{j}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\right\|_{2}\leq\sigma^{2}\delta\right)\geq1-n\exp\left(-115k^{2}\log(N)\right)\tag{66}$$
$$\geq1-\exp\left(-113k^{2}\log(N)\right).\tag{67}$$

Finally, taking a Union Bound over the failure probabilities of (28) and (67) we get

$$\forall i\in[n]\quad\|\mathbf{G}_{i}\|_{2}\leq\left\|\left(\frac{1}{m}\hat{\mathbf{B}}^{\top}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\right)^{-1}\right\|_{2}\left\|\frac{1}{m}\sum_{j=1}^{m}z_{i}^{j}\hat{\mathbf{B}}^{\top}\mathbf{x}_{i}^{j}\right\|_{2}\leq\frac{\delta}{1-\delta}\sigma^{2},\tag{68}$$

with probability at least $1 - \exp\left(-110k^2\log(N)\right)$.
It follows that with the same probability

$$\|\mathbf{G}\|_{F}^{2}=\sum_{i=1}^{n}\|\mathbf{G}_{i}\|_{2}^{2}\leq n\left(\frac{\delta}{1-\delta}\right)^{2}\sigma^{4}.\tag{69}$$

□

For all i ∈ [n] we define q_i := B̂w_i⁺ − B̂*w_i*. The following lemma provides upper bounds on the norms of w_i⁺ and q_i.

Lemma A.8. Let δ := ck^{3/2}√(log(N))/√m for some absolute constant c and δ̂ = δ/(1 − δ). Then with probability at least 1 − exp(−105k² log(N)) we have

$$\forall i\in[n]\quad\left\|\mathbf{w}_{i}^{+}\right\|_{2}\leq 2\sqrt{k}+\sigma^{2}\hat{\delta}.\tag{70}$$

Further, with probability at least 1 − exp(−105k² log(N)) we have

$$\forall i\in[n]\quad\|\mathbf{q}_{i}\|_{2}\leq 2\sqrt{k}\cdot\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)+\sigma^{2}\hat{\delta}.\tag{71}$$

Proof.

$$\left\|\mathbf{w}_{i}^{+}\right\|_{2}\leq\left\|\hat{\mathbf{B}}^{\top}\right\|_{2}\left\|\hat{\mathbf{B}}^{*}\right\|_{2}\left\|\mathbf{w}_{i}^{*}\right\|_{2}+\left\|\mathbf{F}_{i}\right\|_{2}+\left\|\mathbf{G}_{i}\right\|_{2}\tag{72}$$
$$\leq\left\|\mathbf{w}_{i}^{*}\right\|_{2}+\hat{\delta}\left\|\mathbf{w}_{i}^{*}\right\|_{2}\cdot\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)+\hat{\delta}\sigma^{2}\tag{73}$$
$$\leq 2\sqrt{k}+\hat{\delta}\sigma^{2},\tag{74}$$

where the first inequality comes from (20) and the third from Assumption 4.6. For the second inequality we take the Union Bound over the failure probabilities of (51) and (58), and thus the above result holds with probability at least 1 − exp(−107k² log(N)). Taking the Union Bound over all i ∈ [n] we get that with probability at least 1 − exp(−105k² log(N))

$$\forall i\in[n]\quad\left\|\mathbf{w}_{i}^{+}\right\|_{2}\leq 2\sqrt{k}+\sigma^{2}\hat{\delta},\tag{75}$$

and the first result of the lemma follows. For the second part we have

$$\|\mathbf{q}_{i}\|_{2}=\left\|\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right\|_{2}\leq\left\|\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\hat{\mathbf{B}}\mathbf{F}_{i}+\hat{\mathbf{B}}\mathbf{G}_{i}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right\|_{2}\tag{76}$$
$$\leq\left\|\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_{d}\right)\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right\|_{2}+\left\|\hat{\mathbf{B}}\mathbf{F}_{i}\right\|_{2}+\left\|\hat{\mathbf{B}}\mathbf{G}_{i}\right\|_{2}\tag{77}$$
$$\leq\left\|\hat{\mathbf{B}}_{\perp}^{\top}\hat{\mathbf{B}}^{*}\right\|_{2}\|\mathbf{w}_{i}^{*}\|_{2}+\|\mathbf{F}_{i}\|_{2}+\|\mathbf{G}_{i}\|_{2}\tag{78}$$
$$\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\|\mathbf{w}_{i}^{*}\|_{2}+\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\hat{\delta}\,\|\mathbf{w}_{i}^{*}\|_{2}+\sigma^{2}\hat{\delta}\tag{79}$$
$$\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)2\sqrt{k}+\sigma^{2}\hat{\delta},\tag{80}$$

where the first inequality comes from (20). For the fourth inequality we take the Union Bound over the failure probabilities of (51) and (58), and thus the above result holds with probability at least 1 − exp(−107k² log(N)). Taking the Union Bound over all i ∈ [n] we get that with probability at least 1 − exp(−105k² log(N))

$$\forall i\in[n]\quad\|\mathbf{q}_{i}\|_{2}\leq 2\sqrt{k}\cdot\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)+\sigma^{2}\hat{\delta}.\tag{81}$$

□

Lemma A.9. Let δ := ck^{3/2}√(log(N))/√m for some absolute constant c and δ̂ = δ/(1 − δ). Then with probability at least 1 − exp(−105d) − exp(−105k² log(N)) we have

$$\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\leq c\cdot\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}.\tag{82}$$

Proof. Let S^{d−1}, S^{k−1} denote the unit spheres in d and k dimensions and N_d, N_k the 1/4-nets of cardinality 9^d and 9^k, respectively. By equation 4.13 in Vershynin (2018) we have
$$\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\leq 2\max_{\mathbf{p}\in N_{d},\mathbf{y}\in N_{k}}\mathbf{p}^{\top}\left(\sum_{i=1}^{n}\frac{1}{mn}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right)\mathbf{y}\tag{83}$$
$$=2\max_{\mathbf{p}\in N_{d},\mathbf{y}\in N_{k}}\mathbf{p}^{\top}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{z_{i}^{j}}{mn}\mathbf{x}_{i}^{j}\mathbf{w}_{i}^{+\top}\right)\mathbf{y}\tag{84}$$
$$=2\max_{\mathbf{p}\in N_{d},\mathbf{y}\in N_{k}}\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{z_{i}^{j}}{mn}\left\langle\mathbf{x}_{i}^{j},\mathbf{p}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle.\tag{85}$$

Notice that for any fixed p, y and for all i ∈ [n], j ∈ [m] the random variables (z_i^j/(mn))⟨x_i^j, p⟩⟨w_i⁺, y⟩ are i.i.d. zero-mean subexponentials with norm at most C′₃σ²‖w_i⁺‖₂/(mn), for some absolute constant C′₃. Conditioning on the event

$$\mathcal{E}_{1}:=\bigcap_{i=1}^{n}\left\{\left\|\mathbf{w}_{i}^{+}\right\|_{2}\leq 2\sqrt{k}+\hat{\delta}\sigma^{2}\right\},\tag{86}$$

which holds with probability at least 1 − exp(−105k² log(N)) by Lemma A.8, we can invoke Bernstein's inequality to get

$$\mathbb{P}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{z_{i}^{j}}{mn}\left\langle\mathbf{x}_{i}^{j},\mathbf{p}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle\geq s\,\Big|\,\mathcal{E}_{1}\right)\leq\exp\left(-c_{4}mn\min\left\{\frac{s^{2}}{\sigma^{4}\left(2\sqrt{k}+\sigma^{2}\hat{\delta}\right)^{2}},\frac{s}{\sigma^{2}\left(2\sqrt{k}+\sigma^{2}\hat{\delta}\right)}\right\}\right).\tag{87}$$

Since m > dC₃/n₀ ≥ dC₃/n, by setting s = σ²(2√k + δ̂σ²)√(d·C₃)/√(mn) the above quantity simplifies as follows:

$$\mathbb{P}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{z_{i}^{j}}{mn}\left\langle\mathbf{x}_{i}^{j},\mathbf{p}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle\geq\frac{\sigma^{2}\left(2\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d\cdot C_{3}}}{\sqrt{mn}}\,\Big|\,\mathcal{E}_{1}\right)\leq\exp\left(-c_{4}mn\frac{s^{2}}{\sigma^{4}\left(2\sqrt{k}+\sigma^{2}\hat{\delta}\right)^{2}}\right)\tag{88}$$
$$\leq\exp\left(-C_{3}\cdot c_{4}\cdot d\right)\tag{89}$$
$$\leq\exp\left(-110d\right)\tag{90}$$

for C₃ large enough. Taking the Union Bound over all points p, y of N_d, N_k and using (85) we derive

$$\mathbb{P}\left(\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\geq\sqrt{C_{3}}\,\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\,\Big|\,\mathcal{E}_{1}\right)\leq 9^{d+k}\exp\left(-110d\right)\tag{91}$$
$$\leq\exp\left(-105d\right),\tag{92}$$

and removing the conditioning on E₁ we get

$$\mathbb{P}\left(\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\geq\sqrt{C_{3}}\,\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\right)\leq\exp\left(-105d\right)+\mathbb{P}\left(\mathcal{E}_{1}^{C}\right)\tag{93}$$
$$\leq\exp\left(-105d\right)+\exp\left(-105k^{2}\log(N)\right).\tag{94}$$

Choosing c large enough and taking the complementary event derives the result. □
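The √d/√(mn) decay in Lemma A.9 can likewise be eyeballed numerically. The sketch below is our own illustration: the lemma's σ-scaling follows subexponential-norm conventions, so we only check the rate in m, not the constants.

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_cross_term(n: int, m: int, d: int, k: int, sigma: float) -> float:
    """Spectral norm of (1/(mn)) sum_i X_i^T Z_i w_i^T for Gaussian data
    and noise, with ||w_i||_2 of order sqrt(k) as in Lemma A.8."""
    S = np.zeros((d, k))
    for _ in range(n):
        X = rng.standard_normal((m, d))
        z = sigma * rng.standard_normal(m)
        w = rng.standard_normal(k)                 # ||w||_2 ~ sqrt(k)
        S += X.T @ z[:, None] @ w[None, :]
    return np.linalg.norm(S / (m * n), 2)

n, d, k, sigma = 20, 40, 5, 0.5
for m in (100, 400, 1600):
    vals = [noise_cross_term(n, m, d, k, sigma) for _ in range(10)]
    pred = sigma * np.sqrt(k) * np.sqrt(d / (m * n))
    print(f"m={m:5d}  empirical={np.mean(vals):.4f}  ~sigma*sqrt(kd/(mn))={pred:.4f}")
```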
Lemma A.10. Let δ := ck^{3/2}√(log(N))/√m for some absolute constant c and δ̂ = δ/(1 − δ). Then with probability at least 1 − exp(−100d) − exp(−100k² log(N)) we have

$$\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\leq c\cdot\frac{\sqrt{d}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}\,\hat{\delta}\sigma^{2}+\left(\hat{\delta}\sigma^{2}\right)^{2}\right)}{\sqrt{mn}}.\tag{95}$$

Proof. Let us define the event

$$\mathcal{E}_{2}:=\bigcap_{i=1}^{n}\left\{\|\mathbf{w}_{i}^{+}\|_{2}\leq 2\sqrt{k}+\hat{\delta}\sigma^{2}\quad\bigcap\quad\|\mathbf{q}_{i}\|_{2}\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)2\sqrt{k}+\hat{\delta}\sigma^{2}\right\},\tag{96}$$

which happens with probability at least 1 − exp(−100k² log(N)) by the Union Bound and Lemma A.8. For the rest of this proof we work conditioning on the event E₂. Recall that q_i := B̂w_i⁺ − B̂*w_i*, and thus we can write

$$\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}=\frac{1}{n}\left(\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\mathbf{q}_{i}\mathbf{w}_{i}^{+\top}-\sum_{i=1}^{n}\mathbf{q}_{i}\mathbf{w}_{i}^{+\top}\right)\tag{97}$$
$$=\frac{1}{n}\left(\frac{1}{m}\sum_{i=1}^{n}\sum_{j=1}^{m}\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\mathbf{x}_{i}^{j}\mathbf{w}_{i}^{+\top}-\sum_{i=1}^{n}\mathbf{q}_{i}\mathbf{w}_{i}^{+\top}\right).\tag{98}$$

Let S^{d−1}, S^{k−1} denote the unit spheres in d and k dimensions and N_d, N_k the 1/4-nets of cardinality 9^d and 9^k, respectively. By equation 4.13 in Vershynin (2018) we have

$$\left\|\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{1}{m}\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\mathbf{x}_{i}^{j}\mathbf{w}_{i}^{+\top}-\frac{1}{n}\sum_{i=1}^{n}\mathbf{q}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\leq 2\max_{\mathbf{p}\in N_{d},\mathbf{y}\in N_{k}}\frac{1}{n}\,\mathbf{p}^{\top}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{1}{m}\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\mathbf{x}_{i}^{j}\mathbf{w}_{i}^{+\top}-\sum_{i=1}^{n}\mathbf{q}_{i}\mathbf{w}_{i}^{+\top}\right)\mathbf{y}\tag{99}$$
$$=2\max_{\mathbf{p}\in N_{d},\mathbf{y}\in N_{k}}\left(\frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{p},\mathbf{x}_{i}^{j}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle-\frac{1}{n}\sum_{i=1}^{n}\left\langle\mathbf{p},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle\right).\tag{100}$$

Notice that for any fixed p, y the products ⟨x_i^j, q_i⟩ are i.i.d. subgaussians with norm at most c̃₁‖q_i‖₂ and ⟨p, x_i^j⟩ are i.i.d. subgaussians with norm at most c̃₂‖p‖₂ = c̃₂. Hence under the event E₂ the products (1/(mn))⟨x_i^j, q_i⟩⟨p, x_i^j⟩⟨w_i⁺, y⟩ are subexponentials with norm at most (C′₄/(mn))·K, where we write K := dist(B̂, B̂*)k + √k δ̂σ² + (δ̂σ²)², for some constant C′₄. Also note that

$$\mathbb{E}\left[\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{p},\mathbf{x}_{i}^{j}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle-\left\langle\mathbf{p},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle\right]=0,\tag{101}$$

and thus applying Bernstein's inequality we get

$$\mathbb{P}\left(\frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}\left\langle\mathbf{x}_{i}^{j},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{p},\mathbf{x}_{i}^{j}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle-\frac{1}{n}\sum_{i=1}^{n}\left\langle\mathbf{p},\mathbf{q}_{i}\right\rangle\left\langle\mathbf{w}_{i}^{+},\mathbf{y}\right\rangle\geq s\,\Big|\,\mathcal{E}_{2}\right)\leq\exp\left(-c_{5}\,mn\min\left\{\frac{s^{2}}{K^{2}},\frac{s}{K}\right\}\right).\tag{102}$$

Since m > dC₄/n₀ ≥ dC₄/n, by setting s = √(C₄·d)·K/(2√(mn)) and taking the Union Bound over all p ∈ N_d, y ∈ N_k we derive

$$\mathbb{P}\left(\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\geq\frac{\sqrt{C_{4}\cdot d}\,K}{\sqrt{mn}}\,\Big|\,\mathcal{E}_{2}\right)\leq 9^{d+k}\exp\left(-C_{4}\cdot c_{5}\cdot d\right)\tag{103}$$
$$\leq 9^{d+k}\exp\left(-120d\right)\tag{104}$$
$$\leq\exp\left(-100d\right),\tag{105}$$

choosing a large enough constant C₄. Recall that P(E₂^C) ≤ exp(−100k² log(N)).
Hence, by removing the conditioning on E₂, we get that with probability at least 1 − exp(−100d) − exp(−100k² log(N))

$$\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\leq c\cdot\frac{\sqrt{d}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}\,\hat{\delta}\sigma^{2}+\left(\hat{\delta}\sigma^{2}\right)^{2}\right)}{\sqrt{mn}}\tag{106}$$

for sufficiently large c. □

Having set all the building blocks, we now proceed to the proof of Theorem 4.7.

Theorem 4.7. Let Assumptions 4.3–4.6 hold. Further, let the following inequalities hold for the number of participating nodes and the batch size respectively: n ≥ n₀ and m ≥ c₀ (1+σ²)k³κ⁴/E₀² · max{log(N), d/n₀}, for some absolute constant c₀. Then FedRep-SRPFL with stepsize η ≤ 1/(8σ̄²_{max,∗}) satisfies the following contraction inequality:

$$\operatorname{dist}\left(\mathbf{B}^{t+1},\mathbf{B}^{*}\right)\leq\operatorname{dist}\left(\mathbf{B}^{t},\mathbf{B}^{*}\right)\sqrt{1-a}+\frac{a}{\sqrt{\frac{n}{n_{0}}\left(1-a\right)}},\tag{107}$$

w.p. at least 1 − T·exp(−90 min{d, k² log(N)}), where a = (1/2)ηE₀σ̄²_{min,∗} ≤ 1/4.

Proof. First let us recall the definition of δ := ck^{3/2}√(log(N))/√m for some absolute constant c and δ̂ = δ/(1 − δ). Further notice that for our choice of m and sufficiently large c₀ we have the following useful inequality:

$$\hat{\delta}=\frac{\delta}{1-\delta}\leq 2\delta\leq\frac{E_{0}}{20\cdot\kappa^{2}}\cdot\frac{1}{1+\sigma^{2}}\leq\frac{1}{20}.\tag{108}$$

From the update scheme of our algorithm (18) we have

$$\mathbf{B}^{+}=\hat{\mathbf{B}}-\frac{\eta}{mn}\left(\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}\mathbf{w}_{i}^{+}\mathbf{w}_{i}^{+\top}-\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\mathbf{w}_{i}^{+\top}-\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right)$$
$$=\hat{\mathbf{B}}-\frac{\eta}{n}\left(\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right)-\frac{\eta}{n}\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}+\frac{\eta}{n}\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top},\tag{109}$$

where we added and subtracted terms. Multiplying both sides by B̂*⊥ᵀ we get

$$\hat{\mathbf{B}}_{\perp}^{*\top}\mathbf{B}^{+}=\hat{\mathbf{B}}_{\perp}^{*\top}\hat{\mathbf{B}}-\frac{\eta}{n}\hat{\mathbf{B}}_{\perp}^{*\top}\left(\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right)-\frac{\eta}{n}\sum_{i=1}^{n}\left(\hat{\mathbf{B}}_{\perp}^{*\top}\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}_{\perp}^{*\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}+\frac{\eta}{n}\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\tag{110}$$
$$=\hat{\mathbf{B}}_{\perp}^{*\top}\hat{\mathbf{B}}\left(\mathbf{I}_{k}-\frac{\eta}{n}\sum_{i=1}^{n}\mathbf{w}_{i}^{+}\mathbf{w}_{i}^{+\top}\right)+\frac{\eta}{n}\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}-\frac{\eta}{n}\hat{\mathbf{B}}_{\perp}^{*\top}\left(\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right),\tag{111}$$

where the second equality holds since B̂*⊥ᵀB̂* = 0. Recall that from the QR decomposition of B⁺ we have B⁺ = B̂⁺R⁺.
Hence, multiplying by (R⁺)⁻¹ and taking norms on both sides, we derive

$$\operatorname{dist}\left(\hat{\mathbf{B}}^{+},\hat{\mathbf{B}}^{*}\right)\leq\left(\left\|\hat{\mathbf{B}}_{\perp}^{*\top}\hat{\mathbf{B}}\left(\mathbf{I}_{k}-\frac{\eta}{n}\sum_{i=1}^{n}\mathbf{w}_{i}^{+}\mathbf{w}_{i}^{+\top}\right)\right\|_{2}+\frac{\eta}{n}\left\|\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}+\frac{\eta}{n}\left\|\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\right)\left\|\left(\mathbf{R}^{+}\right)^{-1}\right\|_{2}.\tag{112}$$

Let us define

$$A_{1}:=\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left\|\mathbf{I}_{k}-\frac{\eta}{n}\sum_{i=1}^{n}\mathbf{w}_{i}^{+}\mathbf{w}_{i}^{+\top}\right\|_{2}\tag{113}$$
$$A_{2}:=\frac{\eta}{n}\left\|\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\tag{114}$$
$$A_{3}:=\frac{\eta}{n}\left\|\hat{\mathbf{B}}_{\perp}^{*\top}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2},\tag{115}$$

so that the following inequality holds:

$$\operatorname{dist}\left(\hat{\mathbf{B}}^{+},\hat{\mathbf{B}}^{*}\right)\leq\left(A_{1}+A_{2}+A_{3}\right)\left\|\left(\mathbf{R}^{+}\right)^{-1}\right\|_{2}.\tag{116}$$

For the rest of the proof we will work conditioning on the intersection of the events

$$\mathcal{E}_{2}:=\bigcap_{i=1}^{n}\left\{\left\|\mathbf{w}_{i}^{+}\right\|_{2}\leq 2\sqrt{k}+\hat{\delta}\sigma^{2}\;\bigcap\;\|\mathbf{q}_{i}\|_{2}\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)2\sqrt{k}+\hat{\delta}\sigma^{2}\right\}\tag{117}$$
$$\mathcal{E}_{3}:=\left\{\|\mathbf{F}\|_{F}\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\hat{\delta}\,\|\mathbf{W}^{*}\|_{2}\right\}\bigcap\left\{\|\mathbf{G}\|_{F}\leq\hat{\delta}\sqrt{n}\,\sigma^{2}\right\}\tag{118}$$
$$\mathcal{E}_{4}:=\left\{\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}\leq c\cdot\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\right\}\tag{119}$$
$$\mathcal{E}_{5}:=\left\{\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\leq c\,\frac{\sqrt{d}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}\,\hat{\delta}\sigma^{2}+\left(\hat{\delta}\sigma^{2}\right)^{2}\right)}{\sqrt{mn}}\right\},\tag{120}$$

which happens with probability at least 1 − exp(−90d) − exp(−90k² log(N)) by the Union Bound on the failure probabilities of (52), (59), (82), (95) and (96). We will now provide bounds for each of the terms of interest in (116), starting from A₁. Notice that by (26) we have

$$\lambda_{\max}\left(\mathbf{W}^{+\top}\mathbf{W}^{+}\right)=\left\|\mathbf{W}^{+}\right\|_{2}^{2}=\left\|\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}+\mathbf{F}+\mathbf{G}\right\|_{2}^{2}\tag{121}$$
$$\leq 2\left\|\mathbf{W}^{*}\right\|_{2}^{2}+4\left\|\mathbf{F}\right\|_{2}^{2}+4\left\|\mathbf{G}\right\|_{2}^{2}\tag{122}$$
$$\leq 2\left\|\mathbf{W}^{*}\right\|_{2}^{2}+\operatorname{dist}^{2}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)4\hat{\delta}^{2}\left\|\mathbf{W}^{*}\right\|_{2}^{2}+4\hat{\delta}^{2}\sigma^{4}n\tag{123}$$
$$\leq 4\left(\left\|\mathbf{W}^{*}\right\|_{2}^{2}+n\right)\tag{124}$$
$$\leq 4n\left(\bar{\sigma}_{\max,*}^{2}+1\right),\tag{125}$$

where in the last inequality we use the fact that ‖W*‖₂ = √n·σ̄_{max,∗}. Since η < (σ̄²_{max,∗} + 1)⁻¹, the matrix I_k − (η/n)W⁺ᵀW⁺ is positive definite. Thus we have

$$\left\|\mathbf{I}_{k}-\frac{\eta}{n}\mathbf{W}^{+\top}\mathbf{W}^{+}\right\|_{2}\leq 1-\frac{\eta}{n}\lambda_{\min}\left(\mathbf{W}^{+\top}\mathbf{W}^{+}\right)\tag{126}$$
$$=1-\frac{\eta}{n}\lambda_{\min}\left(\left(\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}+\mathbf{F}+\mathbf{G}\right)^{\top}\left(\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}+\mathbf{F}+\mathbf{G}\right)\right)\tag{127}$$
$$\leq 1-\frac{\eta}{n}\left(\sigma_{\min}^{2}\left(\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)-\sigma_{\min}^{2}(\mathbf{F})-\sigma_{\min}^{2}(\mathbf{G})\right)+\frac{2\eta}{n}\left(\sigma_{\max}\left(\mathbf{F}^{\top}\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+\sigma_{\max}\left(\mathbf{F}^{\top}\mathbf{G}\right)+\sigma_{\max}\left(\mathbf{G}^{\top}\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)\right)\tag{128}$$
$$\leq 1-\frac{\eta}{n}\sigma_{\min}^{2}\left(\mathbf{W}^{*}\right)\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+\frac{2\eta}{n}\left\|\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right\|_{2}\left(\sigma_{\max}\left(\mathbf{F}^{\top}\mathbf{W}^{*}\right)+\sigma_{\max}\left(\mathbf{G}^{\top}\mathbf{W}^{*}\right)\right)+\frac{2\eta}{n}\sigma_{\max}(\mathbf{F})\,\sigma_{\max}(\mathbf{G})\tag{129}$$
$$\leq 1-\eta\,\bar{\sigma}_{\min,*}^{2}\,\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+\frac{2\eta}{n}\left(\|\mathbf{F}\|_{2}+\|\mathbf{G}\|_{2}\right)\|\mathbf{W}^{*}\|_{2}+\frac{2\eta}{n}\|\mathbf{F}\|_{2}\,\|\mathbf{G}\|_{2},\tag{130}$$

where we used that the norms of B̂* and B̂ are 1 since the matrices are orthonormal, and that √n·σ̄_{min,∗} ≤ σ_min(W*). Recall that we operate under E₃, and thus we can further write

$$\left\|\mathbf{I}_{k}-\frac{\eta}{n}\mathbf{W}^{+\top}\mathbf{W}^{+}\right\|_{2}\leq 1-\eta\,\bar{\sigma}_{\min,*}^{2}\,\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+\frac{2\eta}{n}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\hat{\delta}\,\|\mathbf{W}^{*}\|_{2}+\sqrt{n}\,\hat{\delta}\sigma^{2}\right)\|\mathbf{W}^{*}\|_{2}+\frac{2\eta}{n}\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\hat{\delta}^{2}\sigma^{2}\sqrt{n}\,\|\mathbf{W}^{*}\|_{2}\tag{131}$$
$$\leq 1-\eta\,\bar{\sigma}_{\min,*}^{2}\,\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+2\eta\hat{\delta}\left(\frac{\|\mathbf{W}^{*}\|_{2}^{2}}{n}+\frac{\sigma^{2}\|\mathbf{W}^{*}\|_{2}}{\sqrt{n}}+\frac{\hat{\delta}\sigma^{2}\|\mathbf{W}^{*}\|_{2}}{\sqrt{n}}\right)\tag{132}$$
$$\leq 1-\eta\,\bar{\sigma}_{\min,*}^{2}\,\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+3\eta\,\frac{E_{0}\bar{\sigma}_{\min,*}^{2}}{20\,\bar{\sigma}_{\max,*}^{2}}\cdot\bar{\sigma}_{\max,*}^{2}\tag{133}$$
$$\leq 1-\eta\,\bar{\sigma}_{\min,*}^{2}\,\sigma_{\min}^{2}\left(\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\right)+\frac{1}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2},\tag{134}$$

where we upper bound dist(B̂, B̂*) by 1, use δ̂ ≤ (E₀/(20κ²))·1/(1+σ²), and apply Assumption 4.4 in the third inequality.
Further, by the definition of E₀ := 1 − dist²(B̂⁰, B̂*) ≤ σ²_min(B̂*ᵀB̂), we have

$$\left\|\mathbf{I}_{k}-\frac{\eta}{n}\mathbf{W}^{+\top}\mathbf{W}^{+}\right\|_{2}\leq 1-\eta E_{0}\bar{\sigma}_{\min,*}^{2}+\frac{1}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\tag{135}$$

and it follows immediately that

$$A_{1}\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left(1-\eta E_{0}\bar{\sigma}_{\min,*}^{2}+\frac{1}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right).\tag{136}$$

Further, since we operate under E₄ (119) and ‖B̂*⊥‖₂ = 1, we have

$$A_{2}\leq\eta c\sigma^{2}\left(\sqrt{k}+1\right)\frac{\sqrt{d}}{\sqrt{mn}},\tag{137}$$

and since we operate under E₅ (120) we obtain

$$A_{3}\leq\eta c\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}+1\right)\frac{\sqrt{d}}{\sqrt{mn}}.\tag{138}$$

Combining (116), (136), (137) and (138) we get

$$\operatorname{dist}\left(\hat{\mathbf{B}}^{+},\hat{\mathbf{B}}^{*}\right)\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left(1-\frac{5}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2}+\eta ck\frac{\sqrt{d}}{\sqrt{mn}}\right)\cdot\left\|\left(\mathbf{R}^{+}\right)^{-1}\right\|_{2}+\eta c\left(\sqrt{k}+1\right)\left(\sigma^{2}+1\right)\frac{\sqrt{d}}{\sqrt{mn}}\cdot\left\|\left(\mathbf{R}^{+}\right)^{-1}\right\|_{2}.\tag{139}$$

The last part of the proof focuses on bounding ‖(R⁺)⁻¹‖₂. Let us define

$$\mathbf{S}:=\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\tag{140}$$
$$\mathbf{E}:=\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top},\tag{141}$$

and hence (109) takes the form

$$\mathbf{B}^{+}=\hat{\mathbf{B}}-\frac{\eta}{n}\mathbf{S}+\frac{\eta}{n}\mathbf{E}\tag{142}$$

and also

$$\mathbf{B}^{+\top}\mathbf{B}^{+}=\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}-\frac{\eta}{n}\left(\hat{\mathbf{B}}^{\top}\mathbf{S}+\mathbf{S}^{\top}\hat{\mathbf{B}}\right)+\frac{\eta}{n}\left(\hat{\mathbf{B}}^{\top}\mathbf{E}+\mathbf{E}^{\top}\hat{\mathbf{B}}\right)+\frac{\eta^{2}}{n^{2}}\mathbf{S}^{\top}\mathbf{S}-\frac{\eta^{2}}{n^{2}}\left(\mathbf{E}^{\top}\mathbf{S}+\mathbf{S}^{\top}\mathbf{E}\right)+\frac{\eta^{2}}{n^{2}}\mathbf{E}^{\top}\mathbf{E}\tag{143}$$
$$=\mathbf{I}_{k}-\frac{\eta}{n}\left(\hat{\mathbf{B}}^{\top}\mathbf{S}+\mathbf{S}^{\top}\hat{\mathbf{B}}\right)+\frac{\eta}{n}\left(\hat{\mathbf{B}}^{\top}\mathbf{E}+\mathbf{E}^{\top}\hat{\mathbf{B}}\right)-\frac{\eta^{2}}{n^{2}}\left(\mathbf{E}^{\top}\mathbf{S}+\mathbf{S}^{\top}\mathbf{E}\right)+\frac{\eta^{2}}{n^{2}}\mathbf{E}^{\top}\mathbf{E}+\frac{\eta^{2}}{n^{2}}\mathbf{S}^{\top}\mathbf{S}.\tag{144}$$

By Weyl's inequality, and since R⁺ᵀR⁺ = B⁺ᵀB⁺, we derive

$$\sigma_{\min}^{2}\left(\mathbf{R}^{+}\right)\geq 1-\frac{\eta}{n}\lambda_{\max}\left(\hat{\mathbf{B}}^{\top}\mathbf{S}+\mathbf{S}^{\top}\hat{\mathbf{B}}\right)-\frac{\eta}{n}\lambda_{\max}\left(\hat{\mathbf{B}}^{\top}\mathbf{E}+\mathbf{E}^{\top}\hat{\mathbf{B}}\right)-\frac{\eta^{2}}{n^{2}}\lambda_{\max}\left(\mathbf{E}^{\top}\mathbf{S}+\mathbf{S}^{\top}\mathbf{E}\right).\tag{145}$$

Let us further define

$$R_{1}:=\frac{\eta}{n}\lambda_{\max}\left(\hat{\mathbf{B}}^{\top}\mathbf{S}+\mathbf{S}^{\top}\hat{\mathbf{B}}\right)\tag{146}$$
$$R_{2}:=\frac{\eta^{2}}{n^{2}}\lambda_{\max}\left(\mathbf{E}^{\top}\mathbf{S}+\mathbf{S}^{\top}\mathbf{E}\right)\tag{147}$$
$$R_{3}:=\frac{\eta}{n}\lambda_{\max}\left(\hat{\mathbf{B}}^{\top}\mathbf{E}+\mathbf{E}^{\top}\hat{\mathbf{B}}\right),\tag{148}$$

so that we can succinctly rewrite the above inequality as follows:

$$\sigma_{\min}^{2}\left(\mathbf{R}^{+}\right)\geq 1-R_{1}-R_{2}-R_{3}.\tag{149}$$

We work to bound each of the three terms separately.
$$R_{1}=\frac{2\eta}{n}\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\mathbf{S}\mathbf{p}\tag{150}$$
$$\leq\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p}+\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p},\tag{151}$$

and since we operate under E₅ (120) the above simplifies to

$$R_{1}\leq 2\eta\left\|\hat{\mathbf{B}}\right\|_{2}c\,\frac{\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}+1\right)\sqrt{d}}{\sqrt{mn}}+\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p}\tag{152}$$
$$\leq 3\eta c\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}\right)\frac{\sqrt{d}}{\sqrt{mn}}+\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p}.\tag{153}$$

We focus on the second term, and using (20) we get

$$\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p}=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{154}$$
$$=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\left(\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\mathbf{F}_{i}+\mathbf{G}_{i}\right)^{\top}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right].\tag{155}$$

We bound each term separately, and to this end we define

$$T_{1}:=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{*\top}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{156}$$
$$T_{2}:=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{F}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{157}$$
$$T_{3}:=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{G}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right],\tag{158}$$

such that (155) can be expressed as

$$\frac{2\eta}{n}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]\mathbf{p}=T_{1}+T_{2}+T_{3}.\tag{159}$$

Further expanding T₁ we have

$$T_{1}=\frac{2\eta}{n}\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\mathbf{w}_{i}^{*\top}+\hat{\mathbf{B}}\mathbf{F}_{i}\mathbf{w}_{i}^{*\top}+\hat{\mathbf{B}}\mathbf{G}_{i}\mathbf{w}_{i}^{*\top}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\mathbf{w}_{i}^{*\top}\right)\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{160}$$
$$=\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_{d}\right)\hat{\mathbf{B}}^{*}\sum_{i=1}^{n}\mathbf{w}_{i}^{*}\mathbf{w}_{i}^{*\top}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\right]+\frac{2\eta}{n}\operatorname{tr}\left[\sum_{i=1}^{n}\hat{\mathbf{B}}\mathbf{F}_{i}\mathbf{w}_{i}^{*\top}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]+\frac{2\eta}{n}\operatorname{tr}\left[\sum_{i=1}^{n}\hat{\mathbf{B}}\mathbf{G}_{i}\mathbf{w}_{i}^{*\top}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{161}$$
$$=\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}\left(\mathbf{F}^{\top}+\mathbf{G}^{\top}\right)\mathbf{W}^{*}\hat{\mathbf{B}}^{*\top}\hat{\mathbf{B}}\mathbf{p}\mathbf{p}^{\top}\right]\tag{162}$$
$$\leq\frac{2\eta}{n}\left(\|\mathbf{F}\|_{F}+\|\mathbf{G}\|_{F}\right)\|\mathbf{W}^{*}\|_{2},\tag{163}$$

where in the first equality we expand w_i⁺ via (20), and in the third equality we use that B̂ᵀ(B̂B̂ᵀ − I_d) = 0 and Fᵀ = Σᵢ F_i w_i*ᵀ, Gᵀ = Σᵢ G_i w_i*ᵀ. The inequality is obtained by noticing that the norms of the orthonormal B̂, B̂* are one, and also ‖ppᵀ‖₂ ≤ 1. Conditioning on E₃ (118) we can further simplify as follows:

$$T_{1}\leq\frac{2\eta}{n}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\hat{\delta}\left\|\mathbf{W}^{*}\right\|_{2}^{2}+\hat{\delta}\sigma^{2}\sqrt{n}\left\|\mathbf{W}^{*}\right\|_{2}\right)\tag{164}$$
$$\leq\frac{1}{10}\eta E_{0}\bar{\sigma}_{\min,*}^{2}.\tag{165}$$

We now turn our attention to T₂:

$$T_{2}=\frac{2\eta}{n}\cdot\operatorname{tr}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}+\hat{\mathbf{B}}\mathbf{F}_{i}+\hat{\mathbf{B}}\mathbf{G}_{i}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{F}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\right]\tag{166}$$
$$=\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\left(\hat{\mathbf{B}}\hat{\mathbf{B}}^{\top}-\mathbf{I}_{d}\right)\hat{\mathbf{B}}^{*}\sum_{i=1}^{n}\mathbf{w}_{i}^{*}\mathbf{F}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\right]+\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}\sum_{i=1}^{n}\mathbf{F}_{i}\mathbf{F}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\right]+\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}\sum_{i=1}^{n}\mathbf{G}_{i}\mathbf{F}_{i}^{\top}\mathbf{p}\mathbf{p}^{\top}\right]\tag{167}$$
$$=\frac{2\eta}{n}\operatorname{tr}\left[\hat{\mathbf{B}}^{\top}\hat{\mathbf{B}}\left(\mathbf{F}^{\top}\mathbf{F}+\mathbf{G}^{\top}\mathbf{F}\right)\mathbf{p}\mathbf{p}^{\top}\right]\tag{168}$$
$$\leq\frac{2\eta}{n}\left(\|\mathbf{F}\|_{F}^{2}+\|\mathbf{G}\|_{F}\,\|\mathbf{F}\|_{F}\right),\tag{169}$$

where in the second equality we used that B̂ᵀ(B̂B̂ᵀ − I_d) = 0, and in the third that the norms of the orthonormal matrices are 1, as is the norm of ppᵀ. Following the same calculations for T₃ we get

$$T_{3}\leq\frac{2\eta}{n}\left(\|\mathbf{G}\|_{F}^{2}+\|\mathbf{G}\|_{F}\,\|\mathbf{F}\|_{F}\right),\tag{170}$$

and thus summing the two terms we get the following:

$$T_{2}+T_{3}\leq\frac{2\eta}{n}\left(\|\mathbf{F}\|_{F}+\|\mathbf{G}\|_{F}\right)^{2}.\tag{171}$$
Again conditioning on E₃ (118) we derive

$$T_{2}+T_{3}\leq\frac{2\eta}{n}\left(\hat{\delta}\left(\|\mathbf{W}^{*}\|_{2}+\sqrt{n}\,\sigma^{2}\right)\right)^{2}\tag{172}$$
$$\leq 2\eta\hat{\delta}^{2}\left(\bar{\sigma}_{\max,*}^{2}+\sigma^{4}\right)\tag{173}$$
$$\leq\frac{1}{10}\eta E_{0}\bar{\sigma}_{\min,*}^{2}.\tag{174}$$

Hence, combining (153), (165) and (174), we get a bound for R₁:

$$R_{1}\leq 3\eta c\left(k+\sqrt{k}\right)\frac{\sqrt{d}}{\sqrt{mn}}+\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\tag{175}$$
$$\leq 6\eta ck\frac{\sqrt{d}}{\sqrt{mn}}+\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}.\tag{176}$$

We work in a similar fashion to derive the bound on R₂:

$$R_{2}=\frac{\eta^{2}}{n^{2}}\lambda_{\max}\left(\mathbf{E}^{\top}\mathbf{S}+\mathbf{S}^{\top}\mathbf{E}\right)\tag{177}$$
$$=\frac{2\eta^{2}}{n^{2}}\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\mathbf{p}^{\top}\mathbf{S}^{\top}\mathbf{E}\mathbf{p}\tag{178}$$
$$\leq\frac{2\eta^{2}}{n^{2}}\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\mathbf{p}^{\top}\left[\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right]^{\top}\left(\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right)\mathbf{p}+\frac{2\eta^{2}}{n^{2}}\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\mathbf{p}^{\top}\left[\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right]^{\top}\left(\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right)\mathbf{p}\tag{179}$$
$$\leq 2\eta^{2}\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{X}_{i}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)-\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\cdot\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}+\frac{2\eta^{2}}{n}\left\|\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\cdot\left\|\frac{1}{mn}\sum_{i=1}^{n}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\mathbf{w}_{i}^{+\top}\right\|_{2}.\tag{180}$$

Since we work conditioning on the event E₄ ∩ E₅, we further derive

$$R_{2}\leq 2\eta^{2}\left(c\,\frac{\sqrt{d}\left(\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)k+\sqrt{k}\,\hat{\delta}\sigma^{2}+\left(\hat{\delta}\sigma^{2}\right)^{2}\right)}{\sqrt{mn}}\right)\left(c\,\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\right)+\frac{2\eta^{2}}{n}\left\|\sum_{i=1}^{n}\left(\hat{\mathbf{B}}\mathbf{w}_{i}^{+}-\hat{\mathbf{B}}^{*}\mathbf{w}_{i}^{*}\right)\mathbf{w}_{i}^{+\top}\right\|_{2}\left(c\,\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\right)\tag{182}$$
$$\leq 3\eta^{2}\left(ck\frac{\sqrt{d}}{\sqrt{mn}}\right)\left(c\sqrt{k}\,\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\right)+\frac{2\eta^{2}}{n}\sum_{i=1}^{n}\|\mathbf{q}_{i}\|_{2}\left\|\mathbf{w}_{i}^{+}\right\|_{2}\left(c\sqrt{k}\,\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\right).\tag{183}$$

And since we also condition on E₂ (117), we finally get

$$R_{2}\leq 3\eta^{2}c^{2}k^{\frac{3}{2}}\sigma^{2}\frac{d}{mn}+\frac{2\eta^{2}}{n}\sum_{i=1}^{n}\left(2\sqrt{k}+\hat{\delta}\sigma^{2}\right)^{2}\left(c\sqrt{k}\,\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\right)\tag{184}$$
$$\leq 3\eta^{2}c^{2}k^{\frac{3}{2}}\sigma^{2}\frac{d}{mn}+9\eta^{2}\left(ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\right).\tag{185}$$

The last term we need to bound is R₃:

$$R_{3}=\frac{\eta}{n}\lambda_{\max}\left(\mathbf{E}^{\top}\hat{\mathbf{B}}+\hat{\mathbf{B}}^{\top}\mathbf{E}\right)\tag{186}$$
$$=\frac{2\eta}{n}\max_{\mathbf{p}:\|\mathbf{p}\|_{2}=1}\mathbf{p}^{\top}\hat{\mathbf{B}}^{\top}\mathbf{E}\mathbf{p}\tag{187}$$
$$\leq\frac{2\eta}{n}\left\|\hat{\mathbf{B}}\right\|_{2}\left\|\sum_{i=1}^{n}\frac{1}{m}\mathbf{X}_{i}^{\top}\mathbf{Z}_{i}\,\mathbf{w}_{i}^{+\top}\right\|_{2}\tag{188}$$
$$\leq 2\eta c\cdot\frac{\sigma^{2}\left(\sqrt{k}+\hat{\delta}\sigma^{2}\right)\sqrt{d}}{\sqrt{mn}}\tag{189}$$
$$\leq 3\eta c\sqrt{k}\,\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}.\tag{190}$$

Combining (149) with (176), (185) and (190) we derive

$$\sigma_{\min}^{2}\left(\mathbf{R}^{+}\right)\geq 1-6\eta ck\frac{\sqrt{d}}{\sqrt{mn}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}-3\eta^{2}c^{2}k^{\frac{3}{2}}\sigma^{2}\frac{d}{mn}-9\eta^{2}\left(ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\right)-3\eta c\sqrt{k}\,\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}\tag{191}$$
$$\geq 1-14\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}-3\eta^{2}c^{2}k^{\frac{3}{2}}\sigma^{2}\frac{d}{mn}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\tag{192}$$
$$\geq 1-15\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2},\tag{193}$$

where the last inequality holds since √(mn) ≥ c√d.
We can now combine (139) and (193) to obtain the contraction inequality

$$\operatorname{dist}\left(\hat{\mathbf{B}}^{+},\hat{\mathbf{B}}^{*}\right)\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left(1-\frac{5}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2}+\eta ck\frac{\sqrt{d}}{\sqrt{mn}}\right)\cdot\left(1-15\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}+\eta c\left(\sqrt{k}+1\right)\left(\sigma^{2}+1\right)\frac{\sqrt{d}}{\sqrt{mn}}\cdot\left(1-15\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}.\tag{194}$$

We divide and multiply by n₀, and using our bounds on m and n the previous inequality further simplifies:

$$\operatorname{dist}\left(\hat{\mathbf{B}}^{+},\hat{\mathbf{B}}^{*}\right)\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left(1-\frac{5}{6}\eta E_{0}\bar{\sigma}_{\min,*}^{2}+\eta ck\frac{\sqrt{d}}{\sqrt{mn_{0}\cdot\frac{n}{n_{0}}}}\right)\left(1-15\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn_{0}\cdot\frac{n}{n_{0}}}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}+\eta c\left(\sqrt{k}+1\right)\left(\sigma^{2}+1\right)\frac{\sqrt{d}}{\sqrt{mn_{0}\cdot\frac{n}{n_{0}}}}\left(1-15\eta ck^{\frac{3}{2}}\sigma^{2}\frac{\sqrt{d}}{\sqrt{mn_{0}\cdot\frac{n}{n_{0}}}}-\frac{1}{5}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}\tag{195}$$
$$\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\left(1-\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)\left(1-\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}+\sqrt{\frac{n_{0}}{4n}}\,\eta E_{0}\bar{\sigma}_{\min,*}^{2}\left(1-\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)^{-1/2}\tag{196}$$
$$\leq\operatorname{dist}\left(\hat{\mathbf{B}},\hat{\mathbf{B}}^{*}\right)\sqrt{1-\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}}+\frac{\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}}{\sqrt{\frac{n}{n_{0}}\left(1-\frac{1}{2}\eta E_{0}\bar{\sigma}_{\min,*}^{2}\right)}},\tag{197}$$

where in the second inequality we used that for our choices of m and n the following inequality holds: 15ηck^{3/2}(1+σ²)√d/√(mn₀) ≤ (1/10)ηE₀σ̄²_{min,∗}. Taking the Union Bound over the total number of iterations T we derive the result. □

Corollary A.11. Recall that our algorithm starts at stage 0 with n₀ participating clients and doubles the number of participating clients at every subsequent stage. Thus, by slightly abusing notation, we can reformulate the contraction inequality of Theorem 4.7 at stage r as follows:

$$\operatorname{dist}^{+}\leq\operatorname{dist}\sqrt{1-a}+\frac{a}{\sqrt{2^{r}(1-a)}}\quad\text{with}\quad a\leq\frac{1}{4}.\tag{198}$$
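Iterating (198) for a fixed stage r makes the role of the doubling schedule concrete: the recursion contracts geometrically down to the fixed point a/(√(2^r(1−a))·(1−√(1−a))), so each participation level has a floor on the achievable distance, and progress beyond it requires increasing r — which is exactly what the doubling thresholds X_i in Appendix B below formalize. A minimal sketch (our own illustration, taking a = 1/4, the boundary value allowed by Theorem 4.7; note (198) is only an upper-bound recursion, so early iterates may exceed 1):

```python
from math import sqrt

def run(r: int, a: float = 0.25, rounds: int = 200, dist: float = 1.0) -> float:
    """Iterate dist+ = dist*sqrt(1-a) + a/sqrt(2^r (1-a)), i.e. eq. (198)."""
    for _ in range(rounds):
        dist = dist * sqrt(1 - a) + a / sqrt(2 ** r * (1 - a))
    return dist

a = 0.25
for r in range(6):
    # fixed point of x = x*sqrt(1-a) + a/sqrt(2^r (1-a))
    floor = a / (sqrt(2 ** r * (1 - a)) * (1 - sqrt(1 - a)))
    print(f"r={r}: dist after many rounds = {run(r):.4f}, predicted floor = {floor:.4f}")
```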
## B Appendix

In the second part of our analysis we compute the expected 'Wall Clock Time' of our proposed method and compare it to the corresponding 'Wall Clock Time' of straggler-prone FedRep. We prove that when the computational speeds are drawn from the exponential distribution with parameter λ and the communication cost is given by C = c·(1/λ) (for some constant c), then SRPFL guarantees a logarithmic speedup. Recall that in Corollary A.11 we get the following simplified version of the contraction inequality:

$$\operatorname{dist}^{+}\leq\operatorname{dist}\sqrt{1-a}+\frac{a}{\sqrt{2^{r}(1-a)}}\quad\text{with}\quad a\leq\frac{1}{4}.\tag{199}$$

For the rest of this section, w.l.o.g., we assume that the clients are re-indexed at every stage so that the expected computation times maintain a non-decreasing ordering, i.e., for all r, E[T₁^r] ≤ E[T₂^r] ≤ ... ≤ E[T_N^r]. For simplicity we henceforth drop the stage index r. Notice that this ordering of the computation times, in combination with (199), implies that SRPFL initially benefits from including only a few fast nodes in the training procedure. However, as the distance diminishes, the improvement guaranteed by the contraction inequality becomes negligible, and thus our method benefits from including slower nodes, thereby decreasing the second term on the r.h.s. of (199).

Let us denote by X_i the maximum distance for which the optimal number of participating nodes (for SRPFL) is n₀·2^i. This definition immediately implies that X₀ = +∞. To compute each X_i we turn our attention to measuring the progress per unit of time achieved by SRPFL when 2^r·n₀ nodes are utilized. This ratio at stage r can be expressed as

$$\frac{\operatorname{dist}-\operatorname{dist}\sqrt{1-a}-\frac{a}{\sqrt{2^{r}(1-a)}}}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]+\mathcal{C}}.\tag{200}$$

Notice that by (199) the numerator captures the progress per round, while the algorithm incurs E[T_{n₀2^r}] computation and C communication cost. Similarly, the ratio when 2^{r+1}·n₀ nodes are used is given by

$$\frac{\operatorname{dist}-\operatorname{dist}\sqrt{1-a}-\frac{a}{\sqrt{2^{r+1}(1-a)}}}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]+\mathcal{C}}.\tag{201}$$

Based on the above we can now compute the optimal doubling points (in terms of distance), and thus the values of the X_i's. Subsequently, we compute the number of iterations SRPFL spends in every stage.

Lemma B.1. For all i let X_i denote the maximum distance for which the optimal number of nodes for SRPFL is n₀·2^i. Then X₀ = +∞ and the following holds:

$$\forall r\geq 0\quad X_{r+1}=\frac{a}{\sqrt{2^{r}(1-a)}\left(1-\sqrt{1-a}\right)}\left(1+\frac{\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]+\mathcal{C}\right)\left(1-\frac{1}{\sqrt{2}}\right)}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]}\right).\tag{202}$$

Further, SRPFL spends at each stage r at most t^r communication rounds, where t^r is the smallest integer such that

$$t^{r}\geq\frac{2\log\left(\frac{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]\right)}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r-1}}\right]}\right)}{\log\left(\frac{1}{1-a}\right)}.\tag{204}$$

Proof. For each stage r let us compute the point where the transition between 2^r·n₀ and 2^{r+1}·n₀ occurs, that is, the distance at which SRPFL benefits from doubling the number of participating nodes to 2^{r+1}·n₀. Equating the two ratios in (200) and (201), we get

$$\frac{X_{r+1}-X_{r+1}\sqrt{1-a}-\frac{a}{\sqrt{2^{r}(1-a)}}}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]+\mathcal{C}}=\frac{X_{r+1}-X_{r+1}\sqrt{1-a}-\frac{a}{\sqrt{2^{r+1}(1-a)}}}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]+\mathcal{C}}\tag{205}$$
$$X_{r+1}\left(1-\sqrt{1-a}\right)\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]\right)=\frac{a}{\sqrt{2^{r}(1-a)}}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\frac{1}{\sqrt{2}}\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]+\mathcal{C}\left(1-\frac{1}{\sqrt{2}}\right)\right)\tag{206}$$
$$X_{r+1}=\frac{a}{\sqrt{2^{r}(1-a)}\left(1-\sqrt{1-a}\right)}\left(1+\frac{\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]+\mathcal{C}\right)\left(1-\frac{1}{\sqrt{2}}\right)}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]}\right).\tag{207}$$

Let us now compute the number of rounds t^r (henceforth denoted by t) required in stage r, that is, the minimum number of iterations that SRPFL needs in order to decrease the distance from X_r to X_{r+1} using only the n₀2^r fastest participating nodes. Starting off at X_r and following (199) for t rounds, we have

$$X_{r}^{t}\leq X_{r}\left(\sqrt{1-a}\right)^{t}+\sum_{i=0}^{t-1}\frac{a}{\sqrt{2^{r}(1-a)}}\left(\sqrt{1-a}\right)^{i}.\tag{208}$$

As stated above, we want to find the minimum number of rounds such that we reach the next doubling point, i.e., we want t large enough such that

$$X_{r+1}\geq X_{r}\left(\sqrt{1-a}\right)^{t}+\sum_{i=0}^{t-1}\frac{a}{\sqrt{2^{r}(1-a)}}\left(\sqrt{1-a}\right)^{i}=X_{r}\left(\sqrt{1-a}\right)^{t}+\frac{a}{\sqrt{2^{r}(1-a)}}\cdot\frac{1-\sqrt{1-a}^{\,t}}{1-\sqrt{1-a}},\tag{209}$$

where in the last equality we use geometric series properties.
We proceed to solve for t. Rearranging, and using (202) together with the fact that X_r > a/(√(2^r(1−a))(1−√(1−a))), we obtain

$$\left(\sqrt{1-a}\right)^{t}\leq\frac{\sqrt{2^{r}(1-a)}\left(1-\sqrt{1-a}\right)X_{r+1}-a}{\sqrt{2^{r}(1-a)}\left(1-\sqrt{1-a}\right)X_{r}-a},\tag{210}$$

and substituting the expressions for X_r and X_{r+1} from (202) and simplifying, the right-hand side becomes

$$\left(\sqrt{1-a}\right)^{t}\leq\frac{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r-1}}\right]}{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]\right)}.\tag{212}$$

Taking the logarithm on both sides, we derive the required number of rounds:

$$t\geq\frac{2\log\left(\frac{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]\right)}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{r-1}}\right]}\right)}{\log\left(\frac{1}{1-a}\right)}.\tag{214}$$

□

The following lemmas compute the 'Wall Clock Time' that SRPFL and FedRep require in order to achieve a target accuracy ε. As discussed in Section 4.1, for a fair comparison we consider accuracy of the form

$$\epsilon=\hat{c}\,\frac{a}{\sqrt{\frac{N}{n_{0}}\left(1-a\right)\left(1-\sqrt{1-a}\right)}},\qquad\text{with}\qquad\sqrt{2}>\hat{c}>1.\tag{216}$$

When ĉ takes values close to √2 we expect SRPFL to vastly outperform FedRep, and as ĉ takes values close to 1 the performance gap diminishes.

Lemma B.2. Suppose that at each stage the clients' computational times are i.i.d. random variables drawn from the exponential distribution with parameter λ. Further, suppose that the expected communication cost per round is C = c·(1/λ), for some constant c. Finally, consider the target accuracy given in (216). Then the expected 'Wall Clock Time' of SRPFL is upper bounded as follows:

$$\mathbb{E}\left[T_{SRPFL}\right]\leq\log N\left(\frac{6(c+1)+4\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}\right)\frac{1}{\lambda}.\tag{217}$$

Proof. First we upper bound the expected cost suffered by our method until the distance between the current representation and the optimal representation becomes smaller than X_{log(N/n₀)}, i.e., the cost corresponding to the first log(N/(2n₀)) stages of SRPFL, denoted by E[T¹_SRPFL]:

$$\mathbb{E}\left[T_{SRPFL}^{1}\right]=\sum_{i=1}^{\log\left(\frac{N}{2n_{0}}\right)}t^{i}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]+\mathcal{C}\right)\tag{218}$$
$$\leq\sum_{i=1}^{\log\left(\frac{N}{2n_{0}}\right)}2\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]+\mathcal{C}\right)\cdot\frac{\log\left(\frac{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i+1}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]\right)}{\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]-\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i-1}}\right]}\right)}{\log\left(\frac{1}{1-a}\right)}\tag{219}$$
$$\leq\sum_{i=1}^{\log\left(\frac{N}{4n_{0}}\right)}2\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]+\mathcal{C}\right)\frac{\log\left(\frac{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]-\mathbb{E}\left[\mathcal{T}_{N/4}\right]\right)}{\mathbb{E}\left[\mathcal{T}_{N/4}\right]-\mathbb{E}\left[\mathcal{T}_{N/8}\right]}\right)}{\log\left(\frac{1}{1-a}\right)}+2\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]+\mathcal{C}\right)\frac{\log\left(\frac{\sqrt{2}\left(\mathbb{E}\left[\mathcal{T}_{N}\right]-\mathbb{E}\left[\mathcal{T}_{N/2}\right]\right)}{\mathbb{E}\left[\mathcal{T}_{N/2}\right]-\mathbb{E}\left[\mathcal{T}_{N/4}\right]}\right)}{\log\left(\frac{1}{1-a}\right)},\tag{220}$$

where we used (214). Since the computational times of the clients come from the exponential distribution, it is straightforward to derive the following bounds:

$$\mathbb{E}\left[\mathcal{T}_{N}\right]-\mathbb{E}\left[\mathcal{T}_{N/2}\right]=\frac{1}{\lambda}\sum_{i=1}^{N/2}\frac{1}{i}\leq\frac{1}{\lambda}\left(\ln(N/2)+1\right)\leq\frac{1}{\lambda}\log(N)\tag{221}$$
$$\mathbb{E}\left[\mathcal{T}_{N/2}\right]-\mathbb{E}\left[\mathcal{T}_{N/4}\right]=\frac{1}{\lambda}\left(\frac{1}{N/2+1}+\frac{1}{N/2+2}+\ldots+\frac{1}{3N/4}\right)\geq\frac{1}{\lambda}\cdot\frac{N}{4}\cdot\frac{4}{3N}=\frac{1}{3\lambda}\tag{222}$$
$$\mathbb{E}\left[\mathcal{T}_{N/2}\right]-\mathbb{E}\left[\mathcal{T}_{N/4}\right]\leq\frac{1}{\lambda}\cdot\frac{N}{4}\cdot\frac{2}{N}=\frac{1}{2\lambda}\tag{223}$$
$$\mathbb{E}\left[\mathcal{T}_{N/4}\right]-\mathbb{E}\left[\mathcal{T}_{N/8}\right]=\frac{1}{\lambda}\left(\frac{1}{3N/4+1}+\frac{1}{3N/4+2}+\ldots+\frac{1}{7N/8}\right)\geq\frac{1}{\lambda}\cdot\frac{N}{8}\cdot\frac{4}{3N}=\frac{1}{6\lambda}.\tag{224}$$

Making use of the above bounds, the expression in (220) further simplifies to

$$\mathbb{E}\left[T_{SRPFL}^{1}\right]\leq 2\sum_{i=1}^{\log\left(\frac{N}{4n_{0}}\right)}\left(\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]+\mathcal{C}\right)\frac{\log\left(3\sqrt{2}\right)}{\log\left(\frac{1}{1-a}\right)}+2\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]+\mathcal{C}\right)\frac{\log\left(3\sqrt{2}\log(N)\right)}{\log\left(\frac{1}{1-a}\right)}\tag{225}$$
$$\leq\frac{5}{\log\left(\frac{1}{1-a}\right)}\left(\sum_{i=1}^{\log\left(\frac{N}{4n_{0}}\right)}\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]+\log\left(\frac{N}{2n_{0}}\right)\cdot\mathcal{C}\right)+\frac{2\log\left(3\sqrt{2}\log(N)\right)}{\log\left(\frac{1}{1-a}\right)}\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]+\mathcal{C}\right).\tag{226}$$

Further, notice that

$$\mathbb{E}\left[\mathcal{T}_{N/2}\right]=\frac{1}{\lambda}\sum_{i=1}^{N/2}\frac{1}{N/2+i}\leq\frac{1}{\lambda},\tag{227}$$

and similarly E[T_{N/4}] ≤ (1/λ)·(1/3), E[T_{N/8}] ≤ (1/λ)·(1/7), E[T_{N/16}] ≤ (1/λ)·(1/15), and so on. Thus,

$$\sum_{i=1}^{\log\left(\frac{N}{4n_{0}}\right)}\mathbb{E}\left[\mathcal{T}_{n_{0}2^{i}}\right]=\mathbb{E}\left[\mathcal{T}_{2n_{0}}\right]+\mathbb{E}\left[\mathcal{T}_{4n_{0}}\right]+\ldots+\mathbb{E}\left[\mathcal{T}_{N/4}\right]\leq\frac{1}{\lambda}\sum_{i=1}^{\infty}\frac{1}{2^{i}}\leq\frac{1}{\lambda}.\tag{228}$$

Combining the bounds from (227) and (228) and substituting C = c·(1/λ) in expression (226), we derive the following bound:

$$\mathbb{E}\left[T_{SRPFL}^{1}\right]\leq\frac{5}{\log\left(\frac{1}{1-a}\right)}\left(c\log(N/2n_{0})+1\right)\frac{1}{\lambda}+\frac{2\log\left(3\sqrt{2}\log(N)\right)}{\log\left(\frac{1}{1-a}\right)}\frac{1}{\lambda}\tag{229}$$
$$\leq\log(N/n_{0})\,\frac{6(c+1)}{\log\left(\frac{1}{1-a}\right)}\cdot\frac{1}{\lambda}.\tag{230}$$
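The harmonic-sum manipulations in (221)–(228) all follow from the standard order-statistics identity E[T_n] = (1/λ)(H_N − H_{N−n}) for the time until the n fastest of N i.i.d. Exp(λ) clients finish. A short script (our own check, with an arbitrary N) confirms the increments used above:

```python
from math import log

def expected_wait(n: int, N: int, lam: float) -> float:
    """E[T_n]: expected time until the n fastest of N i.i.d. Exp(lam)
    clients have finished, via E[T_n] = (1/lam) * (H_N - H_{N-n})."""
    return sum(1.0 / i for i in range(N - n + 1, N + 1)) / lam

N, lam = 1 << 12, 1.0
d1 = expected_wait(N, N, lam) - expected_wait(N // 2, N, lam)
d2 = expected_wait(N // 2, N, lam) - expected_wait(N // 4, N, lam)
d3 = expected_wait(N // 4, N, lam) - expected_wait(N // 8, N, lam)
print(f"E[T_N]-E[T_N/2]   = {d1:.3f}  (<= log(N)/lam = {log(N)/lam:.3f})")
print(f"E[T_N/2]-E[T_N/4] = {d2:.3f}  (between 1/3 and 1/2, times 1/lam)")
print(f"E[T_N/4]-E[T_N/8] = {d3:.3f}  (close to ln(7/6) ~ 0.154, times 1/lam)")
```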
Having derived an upper bound on the cost suffered by SRPFL during the first log(N/(2n₀)) stages, we now turn our attention to bounding the cost incurred from X_{log(N/n₀)} until the target accuracy is achieved (denoted by E[T²_SRPFL]). Recall that

$$X_{\log(N/n_{0})}=\frac{a}{\sqrt{\frac{N}{2n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}\left(1+\frac{\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]+\mathcal{C}\right)\left(1-\frac{1}{\sqrt{2}}\right)}{\mathbb{E}\left[\mathcal{T}_{N}\right]-\mathbb{E}\left[\mathcal{T}_{N/2}\right]}\right),\tag{231}$$

and further that during the last stage of SRPFL all N clients are utilized, so the contraction inequality (199) takes the form

$$\operatorname{dist}^{+}\leq\operatorname{dist}\sqrt{1-a}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\quad\text{with}\quad a\leq\frac{1}{4}.\tag{232}$$

We first compute the number of rounds required in this second phase of the algorithm. Starting with distance X_{log(N/n₀)} and following the contraction in (232) for t rounds, we derive a current distance of at most

$$X_{\log\left(\frac{N}{n_{0}}\right)}\cdot\left(\sqrt{1-a}\right)^{t}+\sum_{i=0}^{t-1}\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\left(\sqrt{1-a}\right)^{i}\tag{233}$$
$$=X_{\log\left(\frac{N}{n_{0}}\right)}\cdot\left(\sqrt{1-a}\right)^{t}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\cdot\frac{1-\sqrt{1-a}^{\,t}}{1-\sqrt{1-a}}\tag{234}$$
$$=\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}\left(\sqrt{2}\left(1+\frac{\left(\mathbb{E}\left[\mathcal{T}_{N/2}\right]+\mathcal{C}\right)\left(1-\frac{1}{\sqrt{2}}\right)}{\mathbb{E}\left[\mathcal{T}_{N}\right]-\mathbb{E}\left[\mathcal{T}_{N/2}\right]}\right)\left(\sqrt{1-a}^{\,t}\right)+\left(1-\sqrt{1-a}^{\,t}\right)\right),\tag{235}$$

where in the first equality we use geometric series properties and in the second we substitute according to (231). Using the facts that √2(E[T_{N/2}] + C)(1 − 1/√2) ≤ (1/λ)(c+1)(√2 − 1) and E[T_N] − E[T_{N/2}] ≤ (1/λ)log N, expression (235) is upper bounded by

$$\leq\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}\left(\left(\sqrt{2}+\frac{(c+1)(\sqrt{2}-1)}{\log N}\right)\left(\sqrt{1-a}^{\,t}\right)+\left(1-\sqrt{1-a}^{\,t}\right)\right)\tag{236}$$
$$\leq\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}\cdot\sqrt{1-a}^{\,t}.\tag{237}$$

The above implies that the number of rounds in the second phase is the smallest t such that the target accuracy is achieved, i.e.,

$$\epsilon\geq\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}\left(1-\sqrt{1-a}\right)}\cdot\sqrt{1-a}^{\,t}.\tag{238}$$

Further recall that from (216) the accuracy can be expressed as

$$\epsilon=\hat{c}\,\frac{a}{\sqrt{\frac{N}{n_{0}}\left(1-a\right)\left(1-\sqrt{1-a}\right)}},\qquad\text{with}\qquad\sqrt{2}>\hat{c}>1.\tag{239}$$

Combining the above and solving for t, we derive the required number of rounds for the second phase:

$$t\geq\frac{2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}.\tag{240}$$

The expected cost during phase 2 can be computed as follows:

$$\mathbb{E}\left[T_{SRPFL}^{2}\right]\leq\left(\mathbb{E}\left[\mathcal{T}_{N}\right]+\mathcal{C}\right)\left(\frac{2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}+1\right)\tag{241}$$
$$\leq\left(\ln(N)+1+c\right)\frac{1}{\lambda}\left(\frac{2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}+1\right)\tag{242}$$
$$\leq\log(N)\cdot\frac{4}{\log\left(\frac{1}{1-a}\right)}\log\left(\frac{1}{\hat{c}-1}\right)\frac{1}{\lambda}.\tag{244}$$

Summing the two quantities of interest, we derive the promised upper bound on the 'Wall Clock Time' of SRPFL:

$$\mathbb{E}\left[T_{SRPFL}\right]=\mathbb{E}\left[T_{SRPFL}^{1}\right]+\mathbb{E}\left[T_{SRPFL}^{2}\right]\tag{245}$$
$$\leq\log(N/n_{0})\frac{6(c+1)}{\log\left(\frac{1}{1-a}\right)}\cdot\frac{1}{\lambda}+\log(N)\cdot\frac{4}{\log\left(\frac{1}{1-a}\right)}\log\left(\frac{1}{\hat{c}-1}\right)\frac{1}{\lambda}\tag{246}$$
$$\leq\log N\left(\frac{6(c+1)+4\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}\right)\frac{1}{\lambda}.\tag{247}$$

□

Having computed an upper bound on the expected 'Wall Clock Time' of SRPFL, we proceed to compute a lower bound on the expected 'Wall Clock Time' of FedRep.

Lemma B.3. Suppose that at each stage the clients' computational times are i.i.d. random variables drawn from the exponential distribution with parameter λ. Further, suppose that the expected communication cost per round is C = c·(1/λ), for some constant c. Finally, consider the target accuracy given in (216).
Then the expected 'Wall Clock Time' of FedRep is lower bounded as follows:

$$\mathbb{E}\left[T_{FedRep}\right]\geq\log N\left(\frac{\log N+2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}\right)\frac{1}{\lambda}.\tag{248}$$

Proof. First we compute the number of rounds required by FedRep to achieve the target accuracy. Recall that FedRep utilizes N clients at each round, deriving the following form of the contraction inequality from (199):

$$\operatorname{dist}^{+}\leq\operatorname{dist}\sqrt{1-a}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\quad\text{with}\quad a\leq\frac{1}{4}.\tag{249}$$

Starting with distance equal to 1 and following the contraction in (249) for t rounds, we derive a current distance of at most

$$\left(\sqrt{1-a}\right)^{t}+\sum_{i=0}^{t-1}\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\left(\sqrt{1-a}\right)^{i}=\left(\sqrt{1-a}\right)^{t}+\frac{a}{\sqrt{\frac{N}{n_{0}}(1-a)}}\,\frac{1-\sqrt{1-a}^{\,t}}{1-\sqrt{1-a}},\tag{250}$$

using the properties of geometric series. Further recall that from (216) the accuracy can be expressed as

$$\epsilon=\hat{c}\,\frac{a}{\sqrt{\frac{N}{n_{0}}\left(1-a\right)\left(1-\sqrt{1-a}\right)}},\qquad\text{with}\qquad\sqrt{2}>\hat{c}>1.\tag{251}$$

The above imply that the number of rounds is going to be the smallest t that guarantees that the target accuracy has been achieved, that is,

$$\hat{c}\,\frac{a}{\sqrt{\frac{N}{n_{0}}\left(1-a\right)\left(1-\sqrt{1-a}\right)}}\geq\left(\sqrt{1-a}\right)^{t}+\frac{a}{\sqrt{\frac{N}{n_{0}}\left(1-a\right)}}\,\frac{1-\sqrt{1-a}^{\,t}}{1-\sqrt{1-a}}.\tag{252}$$

We use the fact that √((N/n₀)(1−a))(1−√(1−a)) − a > 0 for a ≤ 1/4 and all reasonable values of N to rearrange and solve for t. Thus, we derive

$$t\geq\frac{2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}+\frac{\log N}{\log\left(\frac{1}{1-a}\right)}.\tag{253}$$

Multiplying the number of rounds with a lower bound on the expected cost incurred per round results in the desired lower bound on the expected 'Wall Clock Time' suffered by FedRep:

$$\mathbb{E}\left[T_{FedRep}\right]\geq\left(\mathbb{E}\left[\mathcal{T}_{N}\right]+\mathcal{C}\right)\left(\frac{2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}+\frac{\log N}{\log\left(\frac{1}{1-a}\right)}\right)\tag{254}$$
$$\geq\log N\left(\frac{\log N+2\log\left(\frac{1}{\hat{c}-1}\right)}{\log\left(\frac{1}{1-a}\right)}\right)\frac{1}{\lambda}.\tag{255}$$

□

Combining the results of Lemma B.2 and Lemma B.3, we obtain Theorem 4.9.

Theorem 4.9. Suppose that at each stage the clients' computational times are i.i.d. random variables drawn from the exponential distribution with parameter λ. Further, suppose that the expected communication cost per round is C = c/λ, for some constant c. Finally, consider the target error given in (216). Then we have

$$\frac{\mathbb{E}\left[T_{SRPFL}\right]}{\mathbb{E}\left[T_{FedRep}\right]}=\mathcal{O}\left(\frac{\log\left(\frac{1}{\hat{c}-1}\right)}{\log(N)+\log\left(\frac{1}{\hat{c}-1}\right)}\right).$$

Proof.

$$\frac{\mathbb{E}\left[T_{SRPFL}\right]}{\mathbb{E}\left[T_{FedRep}\right]}\leq\frac{6(c+1)+4\log\left(\frac{1}{\hat{c}-1}\right)}{\log N+2\log\left(\frac{1}{\hat{c}-1}\right)}=\mathcal{O}\left(\frac{\log\left(\frac{1}{\hat{c}-1}\right)}{\log(N)+\log\left(\frac{1}{\hat{c}-1}\right)}\right).\tag{256}$$

□

Remark B.4. The initialization scheme in Algorithm 2 guarantees that dist(B⁰, B*) ≤ 1 − c, with probability at least 1 − O((mn)⁻¹⁰⁰), effectively without increasing the overall sample complexity. The formal statement and proof are identical to Theorem 3 in (Collins et al., 2021) and are omitted.
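The two wall-clock bounds are easy to compare numerically. The following sketch — our own illustration, with c, ĉ and a set to arbitrary admissible values — evaluates the upper bound (247) against the lower bound (255), and shows the ratio shrinking roughly like 1/log N, in line with Theorem 4.9.

```python
from math import log

def srpfl_upper(N: int, c: float, c_hat: float, a: float, lam: float = 1.0) -> float:
    """Upper bound (247) on E[T_SRPFL]."""
    return log(N) * (6 * (c + 1) + 4 * log(1 / (c_hat - 1))) / log(1 / (1 - a)) / lam

def fedrep_lower(N: int, c: float, c_hat: float, a: float, lam: float = 1.0) -> float:
    """Lower bound (255) on E[T_FedRep]."""
    return log(N) * (log(N) + 2 * log(1 / (c_hat - 1))) / log(1 / (1 - a)) / lam

for N in (10**3, 10**6, 10**9):
    r = srpfl_upper(N, c=1.0, c_hat=1.3, a=0.25) / fedrep_lower(N, c=1.0, c_hat=1.3, a=0.25)
    print(f"N={N:>10}: bound on speedup ratio = {r:.3f}")  # decays ~ 1/log N
```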
## C More On Experiments

Hyperparameters and choice of models. We set the hyperparameters following the work of (Collins et al., 2021). Specifically, for the implementation of FedRep and FedRep-SRPFL we use SGD with momentum, where the momentum parameter is set to 0.5 and the local learning rate to 0.1. Further, similarly to (Collins et al., 2021), we set the local learning rate to 0.1 for all other methods under consideration, which obtains optimal performance. We fix the batch size to 10 for all our implementations. The number of local epochs is set to 1 for CIFAR10 with N = 100 and to 5 for the rest of the datasets. In terms of the choice of the neural network model, for CIFAR10 we use LeNet-5, including two convolution layers with (64, 64) channels and three fully connected layers where the numbers of hidden neurons are (120, 64). The same structure is used for CIFAR100, but the numbers of channels in the convolution layers are increased to (64, 128) and the numbers of hidden neurons are increased to (256, 128). Additionally, a dropout layer with parameter 0.6 is added after the first two fully connected layers, which improves the testing accuracy. For EMNIST and FEMNIST, we use an MLP with three hidden layers with (512, 256, 64) hidden neurons. For Sent140 we use a two-layer bidirectional LSTM with dimension 256 and dropout rate 0.5, followed by a fully connected layer of dimension 5 and a classification head. Further, we use the standard GloVe embedding of dimension 100 and a vocabulary of size 10,000.

Figure 7: Numerical results on CIFAR10, EMNIST, Sent140 with full participation (M = N) in the fixed computation speeds setting. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

For the needs of SRPFL, we split the neural network model into two parts: the customized head h_i and the common representation φ. In our experiments, we simply take the customized head to be the last hidden layer, and the rest of the parameters are treated as the common representation. Note that LG-FedAvg and LG-FLANP have a different head/representation split scheme: the head is globally shared across all clients, while a local version of the representation is maintained on every client. For all included datasets, i.e., CIFAR10, CIFAR100, EMNIST, FEMNIST and Sent140, the common head includes the last two fully connected layers and the rest of the layers are treated as the representation part.

Datasets. We include five datasets in our empirical study: CIFAR10, which consists of 10 classes and a total of 50,000 training data points; CIFAR100, which consists of 100 classes and the same number of data points as CIFAR10; and EMNIST (balanced), which consists of 47 classes and 131,600 training data points. Note that in Figures 3-7 we use the first 10 classes from EMNIST, following (Collins et al., 2021). For FEMNIST, we use the same setting as (Collins et al., 2021), with the exception that in Figures 3-6 we allocate to each client 150 data points, and in Figure 8 we allocate to client i, (100 + u_i) samples, with u_i a uniformly distributed random variable in [0, 50]. The Sentiment140 dataset contains 1,600,000 tweets annotated as negative or positive, which can be used for sentiment detection. For this dataset we perform pre-processing, splitting the samples across clients in both a homogeneous and a heterogeneous manner. The numbers of samples allocated to clients follow the log-normal distribution.
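For concreteness, a PyTorch sketch of the CIFAR10 architecture and the FedRep-style head/representation split described above is given below. The (64, 64) channels, (120, 64) hidden widths and last-layer head follow the text; the kernel sizes, pooling and activation choices are our assumptions, since the section does not specify them.

```python
import torch
import torch.nn as nn

class LeNetCIFAR10(nn.Module):
    """Sketch of the CIFAR10 model: two conv layers with (64, 64) channels
    and fully connected layers with (120, 64) hidden neurons, followed by
    a client-specific classification head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.representation = nn.Sequential(           # shared part (phi)
            nn.Conv2d(3, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, num_classes)         # client-specific h_i

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.representation(x))

model = LeNetCIFAR10()
head_params = list(model.head.parameters())            # updated locally only
rep_params = list(model.representation.parameters())   # averaged at the server
```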
During our training procedure, we perform the data augmentation operations of standard random crop and horizontal flip on the first two datasets and perform no pre-processing on the last one.

Homogeneous Setting. Apart from providing straggler-resilience benefits, our method takes advantage of the shared representation model to simultaneously address the hurdle of data heterogeneity. In the less challenging homogeneous setting, one would expect FedRep-SRPFL to lose some of its competitive advantage. However, our numerical results in Figure 7 indicate that our method exhibits stable performance while maintaining an edge over the other baselines. We note that LG-SRPFL outperforms FedRep-SRPFL on the Sent140 dataset, where LG-FedAvg appears to be better suited than FedRep. Importantly, our doubling scheme continues to enhance both subroutines, providing clear straggler-resilience benefits even in data-homogeneous environments.

Figure 8: Numerical results on FEMNIST, Sent140, CIFAR10 in the fixed computation speeds setting. From left to right: Imbalanced/Imbalanced/Stragglers%/Correlated Heterogeneity. 'Shard' denotes the number of classes per client. 'C.T.' denotes the communication cost per round.

Additional Regimes. In Figure 8 we observe the extent to which the benefits of our meta-algorithm persist in various regimes. *Imbalanced Datasets.* In the two left columns we explore how the performance of our doubling scheme degrades for different levels of imbalance in the local datasets. In the FEMNIST dataset we allocate to each client i, (100 + u_i) local samples, where u_i is a random variable uniformly distributed in [0, 50]. In this setting the numbers of local samples are sufficiently close, and as a result the benefits of our scheme are prevalent for all different values of the communication cost. However, as we allow the clients to have more diverse numbers of local samples, the speedup provided by our doubling scheme diminishes. Indeed, this is the case in the Sent140 dataset, where we allocate data to clients in a more disproportionate manner following the log-normal distribution. We point out that in settings with high diversity, simply doubling the number of clients is not sufficient. Instead, to achieve optimal performance one needs to select consecutive participating sets of clients whose cumulative numbers of samples double from one stage to the next.

*Different levels of stragglers.* We further consider the setting where clients are split into two categories. The first category consists of typically fast clients whose computational values come from the exponential distribution with λ = 0.1. The second category consists of clients who are typically straggling, i.e., sampling their computational cost from the exponential distribution with λ = 10. In the third column of Figure 8 we compare FedRep, LG-FedAvg and their straggler-resilient variants for different percentages of clients coming from the latter category. Unsurprisingly, the performance of the methods under consideration degrades as the percentage of stragglers increases; however, our doubling scheme mitigates this effect to some extent.

*Correlated heterogeneity.* In this setting we assign data to clients depending on their speeds. More specifically, we split clients into 3 groups based on their computational times, and respectively we group samples into 3 distinct groups based on their labels. Subsequently, we allocate mutually exclusive (with respect to their labels) groups of data to specific groups of clients.
The rightmost column of Figure 8 indicates that the benefits of our doubling scheme persist, although to a lesser degree as the correlation between data and system heterogeneity grows.
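A minimal sketch of the correlated-heterogeneity allocation just described — our own illustration, with arbitrary client, label and group counts — groups clients by sampled speed and assigns each speed group a disjoint set of labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_allocation(num_clients: int = 30, num_labels: int = 9, groups: int = 3):
    """Group clients by speed; each speed group only ever sees a
    disjoint subset of the labels, coupling data and system heterogeneity."""
    speeds = rng.exponential(scale=1.0, size=num_clients)
    order = np.argsort(speeds)                       # fastest clients first
    client_group = np.empty(num_clients, dtype=int)
    for g, chunk in enumerate(np.array_split(order, groups)):
        client_group[chunk] = g
    label_groups = np.array_split(np.arange(num_labels), groups)
    return {c: label_groups[client_group[c]].tolist() for c in range(num_clients)}

allocation = correlated_allocation()
print(allocation[0])  # labels visible to client 0, tied to its speed group
```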
Review 1:
Summary:
This paper studies the problem of straggler nodes in the context of federated learning with partial model personalization. More specifically, it proposes a sampling mechanism based on doubling in which:
* At each round, the server sends a model to $N$ clients.
* The server only waits for the fastest $n \leq N$ clients to return an update.
* The parameter $n$ is doubled at a specified cadence.

The authors then study the application of this "doubling mechanism" to the FedRep algorithm (Collins et al., 2021), which does partial model personalization. The authors study this algorithm theoretically in a "linear representation" setting, and show that under various assumptions (including a kind of incoherence across clients, a "full-rank" assumption, and an exponential distribution runtime assumption), this incurs a logarithmic speed-up in runtime.

Finally, the paper does some comparisons on image datasets (i.e. the (F)EMNIST and CIFAR-10(0) datasets) and the SENT140 dataset. These experiments show that in many settings (varying amounts of heterogeneity, and varying communication costs), this FedRep + doubling algorithm converges faster than other methods, especially FedRep without the mechanism.

Strengths and Weaknesses:
My overall impression of the paper is that it has a good core (simple, in a positive way) idea coupled with strong theoretical analysis. However, the paper has the following weaknesses.
1. Claims in the abstract & introduction that overstate what the paper does.
1. Theoretical assumptions that are arguably too strong (as best as I can tell, though I would love to discuss this with the authors).
1. An algorithmic choice that potentially flies in the face of the data minimization principle underlying federated learning.
1. The empirical results have some potential issues, especially related to how datasets are partitioned across clients, and a lack of reproducibility.

I discuss these in greater detail below.

## Strengths

The fundamental idea, the doubling mechanism, is interesting theoretically and empirically. The algorithm has a lot of intuitive appeal (i.e. wait for the fastest $n$ clients to finish, and proceed), and its core simplicity is a virtue. In particular, the doubling mechanism can ostensibly be applied to any federated learning setting in which there are (1) sufficiently many clients available and (2) sufficiently many that can participate in a given round. In particular, it is not actually tied to the FedRep method (though some of the wording in the first 2 sections obscures that fact, more on that below). The method can be applied to other methods, including FedAvg and LG-FedAvg (the latter of which is actually implemented in the empirical results). Of course, simple mechanisms often still require nuanced theoretical analysis, and this is no different.

I think that the focus on the "linear representation" setting is actually a strength of the paper. It allows for much more fine-grained control over the optimization behavior, and therefore over the runtime speedups that the method can incur. Moreover, the theory involves a ton of great high-dimensional probability results that I enjoyed reading. I will caveat that I have not verified the theory 100%, though I believe that the theory holds **with the assumptions made in the main body** (more on that below).

For the empirical results, the authors do a good job of benchmarking a wide swath of methods.
I do not believe that this is strictly necessary (as the paper's primary contributions are, in my opinion, theoretical), but this was nice to see nonetheless.

## Weaknesses

### Accomplishments versus Claims

One of the core themes of the abstract and introduction is that this work addresses both data heterogeneity and system heterogeneity simultaneously. From the introduction:
```
We propose Straggler-Resilient Personalized Federated Learning (SRPFL), an adaptive node-participation method which accommodates gradient-based subroutines and leverages ideas from representation learning theory to find personalized models in a federated and straggler resilient fashion.
```
This statement struck me as incongruous with what the paper is doing. The representation learning in Algorithms 1 and 2 is all from the FedRep algorithm from (Collins et al., 2021). This is freely discussed in later sections of the work, but section 1 completely removes this connection from the discussion. More generally, I don't know that I agree that this paper really addresses data heterogeneity, as prior work has shown that FedRep does that. Rather, the contribution of this work is the doubling mechanism, which could be applied to any FL algorithm (whether or not it uses representation learning).

This brings me to a fundamental question:

**Question:** Why does this paper couple the study of the doubling mechanism to the FedRep algorithm? Are there theoretical obstacles to studying algorithms such as FedAvg when using this mechanism? Are there aspects of FedRep that specifically motivated the doubling mechanism?

As it stands, I would greatly prefer that the authors focus on what they do provide and study. The question of data heterogeneity is also related to the following section, as I believe that the theoretical assumptions help mitigate issues at the intersection of data & system heterogeneity.

### Theoretical Assumptions

The authors make a number of assumptions, 4.1 - 4.4, that I think are worth considering. Many of these appear in (Collins et al., 2021). However, they take on new meaning under the doubling mechanism proposed by this paper.

Primarily, I am concerned with Assumption 4.3 (aka "client diversity"). This assumption states that if you take $n$ rows of a matrix $W^*$ (each row of which corresponds to a client), then you get a full rank matrix. As best as I can tell, this is the same $n$ from the doubling mechanism. This is the core issue: in work like (Collins et al., 2021), it is reasonable to assume that the $n$ rows are drawn randomly, and therefore this statement need only hold with high probability. In this work, $n$ is not random, it is simply the $n$ clients who finish fastest. The work seems to make no assumptions on this distribution. Thus, it is reasonable to imagine settings where the data heterogeneity is tied to the system heterogeneity, in which case this assumption is **much** stronger than it seems at first glance. For example, if we consider a simple setting with 3 clients, and $n = 2$, and one of the clients is always the slowest, then this assumption seems strange in practice. In fact, you can construct counter-examples in this vein to Theorem 4.5, but these counter-examples fail Assumption 4.3 if you assume that it must hold for all possible subsets of size $n$.

As discussed above, this relates back to data heterogeneity. As best as I can tell, this "full rank" assumption limits the amount of data heterogeneity possible.
This is not necessarily bad, but it's certainly a limitation of the paper and probably deserves more discussion. I would love feedback from the authors about this.

### Data Minimization and SVD Initialization

One of the core tenets of federated learning (and indeed, one of the reasons it exists as a computational paradigm) is data minimization - the server should have access to no more data than is completely necessary. However, the initialization phase of Algorithm 2 involves sending sums of outer products of data. This seems potentially (not definitely) problematic from a data minimization perspective. I am no expert on privacy mechanisms for distributed SVD computations, so it is possible that there is some way that this could be integrated with something like differential privacy. I do not think that this integration is necessary, but I do believe that the authors need to discuss why this initialization phase is not tantamount to sending data in the clear to the server. In short, why is this initialization operation acceptable in the context of federated learning? There is no single answer to this, but I think it warrants discussion.

### Empirical Investigation and Datasets

I will preface this with saying that the empirical investigations are not the core of this paper, at least in my mind. That being said, they exhibit some characteristics that unfortunately reduce how convincing they are. First and foremost, two of the datasets that the paper mentions, FEMNIST and SENT140, have a natural federated structure, as discussed in the Leaf benchmark (Caldas et al., 2017). However, a close read of this paper suggests that this was not used, and instead the data was partitioned by some random process. Can the authors comment on why this natural federated structure was not used? Second, I will mention briefly that the choice of optimizer hyperparameters seems somewhat arbitrary to me. In particular, all methods use the same learning rate (whether or not they do partial model participation or global optimization), which I believe needs some form of justification. On a more minor note, this work says that it sets hyperparameters following the work of (Collins et al., 2021), but this doesn't seem to be the case. The latter uses SGD with momentum and a learning rate of 0.5, while the former uses SGD with a learning rate of 0.01. These issues are not huge, but they are things that stuck out to me upon reading closely.

Requested Changes:

## Critical Recommendations

1. Please clarify the contributions of this paper, especially disambiguating the doubling mechanism proposed and the representation learning and personalization approach of FedRep. Additionally, if there is any facet of the doubling mechanism that is intertwined with or reliant upon representation learning mechanisms (and is therefore not amenable to FedAvg), please discuss. Finally, please discuss obstacles (theoretically or otherwise) to studying FedAvg with the doubling mechanism (note that this might be a duplicate request with the previous sentence).
1. Please clarify the significance and limitations of Assumption 4.3. In particular, is this assumed to hold over all possible size $n$ sets simultaneously? If so, how does this impact the possible data heterogeneity, especially statistical correlation between data and system heterogeneity?
1. Please discuss why the initialization phase of Algorithm 2 is compatible with the general principles of federated learning, especially data minimization.

## Other Recommendations
1. Please clarify why the experiments do not use the "natural" client partitions of the FEMNIST and SENT140 datasets.
1. Please discuss how the learning rates for the optimizers were chosen, and why it is reasonable to use the same learning rate across all methods.

Broader Impact Concerns: N/A

==================================================

Review 2:
Summary:
The paper proposes an algorithm for personalized federated learning that tackles data/system heterogeneity and is resilient to stragglers. Specifically, personalization is attained by learning a global representation followed by user-specific parameters. Straggler-resilience is achieved by adaptively and disproportionally selecting participating clients over time. Theoretical results show that the proposed algorithm can attain near-optimal sample complexity and logarithmic speedup in certain settings. Experiments on several datasets show that the algorithm can attain better performance than baselines.

Strengths and Weaknesses:
Strengths:
1. The paper studies important issues in federated learning: it considers data/system heterogeneity and aims to address the issue of stragglers. It is easy to follow and well-organized.
2. The paper conducts a theoretical analysis of the proposed algorithm to show the speedup guarantee.

Weaknesses:
1. The idea of using shared representation and client-specific head parameters for personalized FL is not new, e.g., see (Jiang & Lin, 2023, URL: https://openreview.net/forum?id=3aBuJEza5sq), (Zhu et al., 2022, URL: https://openreview.net/pdf?id=E4EE_ohFGz). As far as I can tell, the main contribution seems to be the adaptive participating scheme, which I think also has some issues as detailed next.
2. To tackle the issue of stragglers, the proposed algorithm only selects a subset of the fastest clients to participate in each round. This may raise several issues: (a) In practical federated learning systems, the participation of clients is usually out of the central server's control. This significantly limits the adoption of the proposed algorithm in practice. (b) While the proposed algorithm geometrically increases the number of participating nodes, certain nodes such as stragglers may never participate. This can be problematic especially when the data distribution of stragglers differs from that of the fast nodes (as these clients never got the chance to update their local head). As also indicated by the authors, such a disproportional selection of nodes raises fairness and accuracy concerns.
3. In the experiments, the computational time of all clients is determined uniformly (i.e., time follows an exponential distribution with parameter uniformly sampled from [1/M,1]). This setup is not comprehensive enough as it intentionally avoids the issue of 2(b) mentioned above. Moreover, only overall accuracy on all clients is reported. It is necessary to compare local accuracy for personalized FL. More experiments should be conducted (see next section).

Requested Changes:
1. More experiments need to be conducted to validate the proposed algorithm, including: (a) settings where the computational times of clients are not uniformly distributed, e.g., when stragglers are always the same set of clients, when certain classes are not accessible to the fastest clients, etc. (b) in addition to the overall accuracy, the performance of local clients also needs to be reported. (c) comparison with more baselines, especially those tackling personalization and heterogeneity.
2. Because the idea of using a shared representation mapping and client-specific head parameters has been leveraged in many prior works, it is essential to highlight the novelty of the proposed algorithm.
3. The theoretical results are only for a special case with linear representation; I wonder whether it can provide some insights for non-linear cases.

Broader Impact Concerns: The proposed algorithm may raise unfairness issues for devices with large computational time.

==================================================

Review 3:
Summary:
The paper proposes a new framework to handle the data heterogeneity problem as well as system heterogeneity at the same time. Specifically, to deal with the data heterogeneity problem, it adopts a framework that includes a personalized head for every client and a shared embedding among all the clients. To deal with the system heterogeneity problem, instead of randomly sampling several candidates uniformly, it selects the top n_0 clients in every round based on their computation power. The theoretical analysis shows that in the linear representation case, the proposed SRPFL method could achieve an O(log N) speed-up compared with FedRep. The experiments done on several datasets with different system heterogeneity show the proposed method could achieve a faster convergence than other baselines.

Strengths and Weaknesses:
Pros:
1. The paper is well-written and easy to follow. The background introduction is detailed and the commonalities and differences of the proposed framework have been clearly illustrated.
2. The theoretical analysis shows the proposed method could achieve an O(log N) improvement over FedRep, although it is based on a linear representation case.
3. The experiments on different levels of system heterogeneity show the potential of the proposed method to achieve a faster convergence over other baselines.

Cons:
1. The proposed setting is not that practical, which affects its novelty. Although the paper mentions it is possible to get the computation power of every client, I still think it is a very big assumption. The network communication could affect the overall aggregation as well, so that the slowest client might not always be the slowest. Also, I think it is not trivial to obtain such a ranking easily.
2. The theoretical analysis is done in the linear setting, which is not practical as well.
3. Experimental results should be included for the data-homogeneous settings to show it could still achieve a stable performance.

Requested Changes:
Typos: dist(B^0, B*) in eq 7.
1. Move the experiments of Figure 5 or 6 in the appendix to the main paper.
2. Add experiments on the data-homogeneous settings.

Broader Impact Concerns: N/A

==================================================

Metareview:
Recommendation: Accept as is
Comment: All reviewers were ultimately in favor of this work appearing in TMLR---they agreed that this was an interesting approach, and appreciated the simplicity of the procedure and corresponding theoretical analysis. The reviewers also suggested several useful improvements to the paper, including more clearly explaining the main contributions, discussing potential limitations of the assumptions made (particularly Assumption 4.3), and including experiments in additional regimes including in homogeneous settings as well as scenarios when there may be correlations between data/systems heterogeneity.
The authors have already thoughtfully addressed many of these suggestions in their revision, though we would suggest that they take a final pass to ensure that the feedback has been thoroughly incorporated---particularly in terms of clearly explaining key assumptions and potential limitations of the approach in addition to its benefits (e.g., in terms of data minimization/privacy, correlated heterogeneity, etc). ==================================================
# Flipped Classroom: Effective Teaching For Chaotic Time Series Forecasting

Anonymous authors
Paper under double-blind review

## Abstract

Gated RNNs like LSTM and GRU are the most common choice for forecasting time series data, reaching state-of-the-art performance. Training such sequence-to-sequence RNN models can be delicate though. While gated RNNs effectively tackle exploding and vanishing gradients, there remains the exposure bias problem provoked by training sequence-to-sequence models with teacher forcing. Exposure bias is a concern in natural language processing (NLP) as well, and there are already plenty of studies that propose solutions, the most prominent probably being scheduled sampling. For time series forecasting, though, the most frequent suggestion is training the model in free running mode to stabilize its prediction capabilities over longer horizons. In this paper, we demonstrate that exposure bias is a serious problem even or especially outside of NLP and that training such models free running is only sometimes successful. To fill the gap, we formalize curriculum learning (CL) strategies along the training scale as well as the training iteration scale, propose several completely new curricula, and systematically evaluate their performance in two experimental sets. We utilize six prominent chaotic dynamical systems for these experiments. We found that the newly proposed increasing training scale curricula with a probabilistic iteration scale curriculum consistently outperform previous training strategies, yielding an NRMSE improvement of up to 81% over free running or teacher forced training. For some datasets we additionally observe a reduced number of training iterations, and all models trained with the new curricula yield higher prediction stability, allowing for longer prediction horizons.

## 1 Introduction

Advanced Recurrent Neural Networks (RNNs) such as Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) achieved significant results in predicting sequential data (Chung et al., 2014; Nowak et al., 2017; Yin et al., 2017). Such sequential data can for example be textual data as processed for Natural Language Processing (NLP) tasks, where RNN models were the method of choice for a long time before feed-forward architectures like transformers showed superior results in processing natural language data (Devlin et al., 2018; Yang et al., 2019; Radford et al., 2019; Brown et al., 2020). Shifting the view to the field of modeling dynamical or even chaotic systems, encoder-decoder RNNs are still the method of choice for forecasting such continuous time series data (Wang et al., 2019; Thavarajah et al., 2021; Vlachas et al., 2018; Sehovac et al., 2019; Sehovac & Grolinger, 2020; Shen et al., 2020). Nevertheless, encoder-decoder RNNs do have a flaw that is well known from the time when they were the model of choice for NLP applications. This shortcoming is termed *exposure bias*, which can appear when teacher forcing is used for training the model in question. Teacher forcing is the strategy that is typically applied when training RNNs for time series sequence-to-sequence tasks (Williams & Zipser, 1989) regardless of the type of data. To understand the problem we first give a quick side note about the motivation behind teacher forcing.
Its main advantage is that it can significantly reduce the number of steps a model needs to converge during training and improve its stability (Miao et al., 2020).¹ However, teacher forcing may result in worse model generalization due to a discrepancy between training and testing data distribution (*exposure bias*). It is less resilient against self induced perturbations caused by prediction errors in the inference phase (Sangiorgio & Dercole, 2020). Several authors propose methods to mitigate the *exposure bias* and reduce its negative effect with the help of training strategies (Bengio et al., 2015; Nicolai & Silfverberg, 2020; Lamb et al., 2016; Liu et al., 2020; Dou et al., 2019). However, all these methods address NLP tasks, such as text translation or image captioning. We want to focus on *exposure bias* mitigation while forecasting chaotic systems. Those systems directly depend on their past but also tend to be easily irritated by perturbations. A model needs to be especially resilient against small perturbations when auto-regressively predicting future values, otherwise those small errors will quickly accumulate to larger errors (Sangiorgio & Dercole, 2020). We argue that sequence-to-sequence models predicting chaotic time series are even more vulnerable to *exposure bias* and will thus more likely fail to forecast larger horizons.

¹ To the reviewers: Upon deanonymization our code for the experiments will be made available as a replication package on Github. The datasets used in this paper were already published on Dataverse.

Besides *exposure bias* mitigation, the field of forecasting and analyzing (chaotic) dynamical systems has intensively been studied, with many RNN-based approaches proposed to stabilize the training process and prevent exploding gradients. Most of these studies propose architectural tweaks or even new RNN architectures considering the specifics of dynamical systems and their theory (Lusch et al., 2018; Vlachas et al., 2018; Schmidt et al., 2019; Champion et al., 2019; Chang et al., 2019; Rusch & Mishra, 2020; Erichson et al., 2020; Rusch et al., 2021; Li et al., 2021). Monfared et al. (2021) performed a theoretical analysis relating RNN dynamics to loss gradients and argue that this analysis is especially insightful for chaotic systems. With this in mind they suggest a kind of sparse teacher forcing (STF), inspired by the work of Williams & Zipser (1989), that uses information about the degree of chaos of the treated dynamical system. As a result, they form a training strategy that is applicable without any architectural adaptations and without further hyperparameters. Their results using a vanilla RNN, a piecewise linear recurrent neural network (PLRNN) and an LSTM for the Lorenz (Lorenz, 1963) and the Rössler (Rössler, 1976) systems show clear superiority of applying chaos dependent STF. Reservoir computing RNNs were successfully applied to chaotic system forecasting and analysis tasks. For example, Pathak et al. (2017) propose a reservoir computing approach that fits the attractor of chaotic systems and predicts their Lyapunov exponents. In this paper, we focus on such training strategies that require no architectural changes of the model and thus can be applied easily for different and existing sequence-to-sequence (seq2seq) models. All presented strategies will be evaluated across different benchmark datasets. Our main contributions are the following.
First, assembling a set of training strategies for encoder-decoder RNNs that can be applied to existing seq2seq models without adapting their architecture. Second, presenting a collection of strategies' hyperparameter configurations that optimize the performance of the trained model. Third, proposing a "flipped classroom" like strategy that outperforms all existing comparable approaches on several datasets sampled from different chaotic (and hyper-chaotic) systems. Fourth, proposing a method that yields substantially better prediction stability and therefore allows for forecasting longer horizons.

The course of the paper continues with Section 2 where we provide the background of sequence-to-sequence RNNs and the conventional ways to train them. We also give a short introduction to chaotic behavior here. In Section 3 we examine existing approaches dealing with or studying the *exposure bias* in the context of different applications. Section 4 describes how we designed our training strategies and how they are applied. Further information about the experimental setup and our results is presented in Section 5. In Section 6 we discuss the results and the strengths and limitations of the different strategies. Finally, in Section 7 we conclude our findings.

## 2 Background

Within this paper, we study multi step forecasting of multivariate chaotic time series using RNNs. We use an encoder-decoder architecture (Chung et al., 2014) as sequence-to-sequence model to forecast time series data. The rough structure of the encoder-decoder architecture is shown in Figure 1a. It consists of two separate RNNs, an encoder and a decoder. The encoder is trained to build up a hidden state representing the recent history of the processed time series. It takes an input sequence $(x_1, x_2, \ldots, x_n)$ of $n$ input values, where each value $x_j \in \mathbb{R}^d$ is a $d$-dimensional vector. For each processed step the encoder updates its hidden state to provide context information for the following steps. The last encoder hidden state (after processing $x_n$) is then used as the initial hidden state of the decoder. Triggered by a sequence's preceding value, the decoder's task is predicting its next future value while taking into account the sequence's history via the accumulated hidden state. For a trained network in the inference phase, that means that the decoder's preceding prediction is auto-regressively fed back as input into the decoder (aka auto-regressive prediction) (cp. Fig. 1a). All $m$ outputs $y_j \in \mathbb{R}^d$ together form the output sequence $(y_1, y_2, \ldots, y_m)$.

![2_image_0.png](2_image_0.png)

Figure 1: RNN encoder-decoder architecture in inference phase (a) and in training phase using teacher forcing (b)

## 2.1 Training Sequence Prediction Tasks

While training, the decoder's inputs may either be previous predicted outputs (free running) or known previous values stemming from the given dataset (teacher forcing). Training in free running mode might be an intuitive approach, but it slows down the training due to the accumulated error throughout multiple prediction steps without the support of teacher forced inputs, especially in the early training epochs (Nicolai & Silfverberg, 2020). In contrast, **teacher forcing** aims to avoid this problem by utilizing the corresponding previous ground truth value as decoder input rather than the previously predicted value. This way the model learns from the beginning of the training on to adapt its weights to perfect input values and converges noticeably faster (Miao et al., 2020) (cp. Fig. 1b).
However, teacher forcing also bears a significant drawback, since the model during training is never exposed to the noisy predicted values it will later face in the inference phase, and it therefore does often not generalize very well. A model trained with teacher forcing solely learned to predict on the basis of perfect values and is thus vulnerable to small perturbations on its input; this effect is called **exposure bias** (Ranzato et al., 2015).

## 2.2 Chaotic Systems

We especially focus on the forecasting of time series data generated from chaotic systems. Whether or not a dynamical system is chaotic can be confirmed by considering its Lyapunov exponents $\lambda_k$ for $k \in [1, d]$. Given an initial perturbation $\varepsilon_0$, the exponential rate with which the perturbation will increase (or decrease) in the direction of dimension $k$ is the Lyapunov exponent

$$\lambda_{k}=\lim_{t\to\infty}\frac{1}{t}\ln\left(\frac{\|\varepsilon_{t}\|}{\|\varepsilon_{0}\|}\right)$$

(Dingwell, 2006). That is, the Lyapunov exponents denote how sensitive the system is to the initial conditions (initial state). A deterministic dynamical system with at least one positive Lyapunov exponent while being aperiodic in its asymptotic limit is called **chaotic** (Dingwell, 2006). Analogously, dynamical systems with at least two positive, one negative, and one zero Lyapunov exponent are called **hyper-chaotic** systems. Dingwell points out that the Largest Lyapunov Exponent (LLE) can be used as a measure to compare the chaotic behavior of different systems.

## 3 Related Work

Schmidt (2019) defines *exposure bias* as describing "a lack of generalization with respect to an - usually implicit and potentially task and domain dependent - measure other than maximum-likelihood", meaning that when the exclusive objective is to maximize the likelihood between the output and the target sequence, one can use teacher forcing during training (Goodfellow et al., 2016). However, Goodfellow et al. argue that the kind of input the model sees while testing will typically diverge from the training data and the trained model may lack the ability of correcting its own mistakes. Thus, in practice teacher forcing can be a proper training strategy but may hinder the model from learning to compensate for its inaccuracies. He et al. study exposure bias for natural language generation tasks (He et al., 2019). They use sequence and word level quantification metrics to observe the influence of diverging prefix distributions on the distribution of the generated sequences. Two distributions are generated. One with and one without induced *exposure bias*. Those two distributions are then compared on the basis of the corresponding corpus-bleu scores (Papineni et al., 2002). The study concludes that for language generation the effect of the *exposure bias* is less serious than widely believed. As a result several studies propose approaches to overcome the *exposure bias* induced by teacher forcing. The earliest of these studies proposes **scheduled sampling** (Bengio et al., 2015). Scheduled sampling tries to take the advantages of training with teacher forcing while also acclimating the trained model to its own generated data distribution. It does that by using ground truth values as input for a subset of the training steps and predicted values for the remaining. Here, $\epsilon_i$ denotes the teacher forcing probability at training iteration $i$. Accordingly, the probability of using the predicted value is $1-\epsilon_i$. During training, $\epsilon_i$ decreases from $\epsilon_{start}$ to $\epsilon_{end}$.
This procedure makes it a **curriculum learning approach**, as which scheduled sampling was proposed, and it works without major architectural adaptions. Originally proposed for image captioning tasks, scheduled sampling was also applied, e.g., for sound event detection (Drossos et al., 2019). Nicolai and Silfverberg consider and study the teacher forcing probability as a hyperparameter. Rather than using a decay function that determines the decrease of $\epsilon_i$ over the course of training epochs, they use a fixed teacher forcing probability throughout the training (Nicolai & Silfverberg, 2020). They observed a moderate improvement compared to strict teacher forcing training. Scheduled sampling is not restricted to RNN-based sequence-to-sequence models though; it has also been studied for transformer architectures (Mihaylova & Martins, 2019). Mihaylova and Martins tested their modified transformer on two translation tasks but could only observe improved test results for one of them.

Apart from scheduled sampling (Bengio et al., 2015), a number of approaches have been proposed, typically aiming to mitigate the *exposure bias* problem by adapting model architectures beyond an encoder-decoder design. **Professor forcing** (Lamb et al., 2016) is one of these more interfering approaches that aims to guide the teacher forced model in training by embedding it into a Generative Adversarial Network (GAN) framework (Goodfellow et al., 2014). This framework consists of two encoder-decoder RNNs that form the generator and the discriminator respectively. The generating RNNs have shared weights that are trained with the same target sequence using their respective inputs, while at the same time they try to fool the discriminator by keeping their hidden states and outputs as similar as possible. The authors conclude that their method, compared to teacher forcing, provides better generalization for single and multi step prediction. In the field of Text-To-Speech (TTS), the concept of professor forcing has also been applied in the GAN based training algorithm proposed by Guo et al. (2019). They adapted professor forcing and found that replacing the teacher forcing generator with one that uses scheduled sampling improved the results of their TTS model in terms of intelligibility. As another approach for TTS, Liu et al. (2020) proposed **teacher-student training**, using a training scheme to keep the hidden states of the model in free running mode close to those of a model that was trained with teacher forcing. It applies a compound objective function to align the states of the teacher and the student model. The authors observe improved naturalness and robustness of the synthesized speech compared to their baseline. Dou et al. (2019) proposed **attention forcing** as yet another training strategy for sequence-to-sequence models, relying on an attention mechanism that forces a reference attention alignment while training the model without teacher forcing. They studied TTS tasks and observed a significant gain in quality of the generated speech. The authors conclude that attention forcing is especially robust in cases where the order of predicted output is irrelevant. The discussed approaches for mitigating *exposure bias* were proposed in the context of NLP and mainly target speech or text generation.
Additionally, these studies, except for scheduled sampling and subsequent approaches, mainly propose solutions that alter the model architecture to overcome *exposure bias*, with potential side effects for the actual task to train. Specifically for chaotic time series data, studies suggest to neglect teacher forcing completely and solely train the model in free running mode (Sangiorgio & Dercole, 2020), thereby sacrificing the faster convergence of teacher forcing and potentially not reaching convergence at all. We argue that forecasting dynamical systems is a field that is not thoroughly dealt with regarding the mitigation of *exposure bias* and deserves more attention.

In the context of RNNs for forecasting and analyzing dynamical systems, the majority of existing work deals with exploding and vanishing gradients as well as capturing long-term dependencies while preserving the expressiveness of the network. Various studies rely on methods from dynamical systems theory applied to RNNs or propose new network architectures. Lusch et al. (2018) and Champion et al. (2019) use a modified autoencoder to learn appropriate eigenfunctions that the Koopman operator needs to linearize the nonlinear dynamics of the system. In another study, Vlachas et al. (2018) extend an LSTM model with a mean stochastic model to keep its state in the statistical steady state and prevent it from escaping the system's attractor. Schmidt et al. (2019) propose a more generalized version of a PLRNN (Koppe et al., 2019) by utilizing a subset of regularized memory units that hold information much longer and can thus keep track of long-term dependencies while the remaining parts of the architecture are designated to approximate the fast scale dynamics of the underlying dynamical system. The Antisymmetric Recurrent Neural Network (AntisymmetricRNN) introduced by Chang et al. (2019) represents an RNN designed to inherit the stability properties of the underlying ordinary differential equation (ODE), ensuring trainability of the network together with its capability of keeping track of long-term dependencies. A similar approach has been proposed as Coupled Oscillatory Recurrent Neural Networks (coRNNs) (Rusch & Mishra, 2020) that are based on second-order ODEs modeling a coupled network of controlled forced and damped nonlinear oscillators. The authors prove precise bounds of the RNN's state gradients and thus show that the coRNN is a possible solution for exploding or vanishing gradients. Erichson et al. (2020) propose the Lipschitz RNN, having additional hidden-to-hidden matrices enabling the RNN to remain Lipschitz continuous. This stabilizes the network and alleviates the exploding and vanishing gradient problem. In Li et al. (2020; 2021), the authors propose the Fourier and the Markov neural operator, respectively, which are built from multiple concatenated Fourier layers that directly work on the Fourier modes of the dynamical system. This way they retain a major portion of the dynamics and forecast the future behavior of the system. Both the incremental Recurrent Neural Network (IRNN) (Kag et al., 2019) and the time adaptive RNN (Kag & Saligrama, 2021) use additional recurrent iterations on each input to enable the model to cope with different input time scales, where the latter provides a time-varying function that adapts the model's behavior to the time scale of the provided input.
All of this shows the increasing interest in the application of machine learning (ML) models for forecasting and analyzing (chaotic) dynamical systems. To meet this trend, Gilpin (2021) recently published a fully featured collection of benchmark datasets related to chaotic systems including their mathematical properties. A more general guide of training RNNs for chaotic systems is given by Monfared et al. (2021). They discuss under which conditions the chaotic behavior of the input destabilizes the RNN and thus leads to exploding gradients during training. As a solution they propose STF, where every $\tau$-th time step a true input value is provided (teacher forced) as input instead of the previous prediction.

## 4 Teaching Strategies

Within this section, we systematically discuss existing teaching strategies for sequence-to-sequence prediction models and propose new strategies. All of these will then be evaluated in an experimental study with different chaotic time series reported in the following section.

![5_image_0.png](5_image_0.png)

Figure 2: Overview of the proposed and evaluated training strategies where teacher forcing (TF) and free running (FR) refer to the two extreme cases that are combined to different Curriculum Learning (CL) strategies.

## 4.1 Free Running (FR) vs. Teacher Forcing (TF)

A widely used training strategy for RNN sequence-to-sequence models is to use teacher forcing (TF) throughout the whole training. Thereby, data is processed as shown in Fig. 2 (top left), i.e., the model is never exposed to its own predictions during training. A single forward step of the decoder during TF training is denoted as

$$\hat{y}^{t}=f(y^{t-1},\theta,c^{t-1}),\tag{1}$$

where $y^t$ is the ground truth value for time step $t$, $\hat{y}^t$ is the predicted value for time step $t$, $\theta$ denotes the trainable parameters of the model, and $c^t$ is the decoder's hidden state at time step $t$.

The other extreme form of training is free running (FR), i.e., only the model's first output value is predicted on the basis of ground truth input and all subsequent output values of the sequence are predicted on the basis of previous predictions throughout the training (cp. Fig. 2 (top right)). A single forward step of the decoder during FR training is denoted as

$$\hat{y}^{t}=f(\hat{y}^{t-1},\theta,c^{t-1}).\tag{2}$$

![6_image_0.png](6_image_0.png)

Figure 3: 30 000 time steps sampled with a time delta of dt = 0.1 of Thomas' cyclically symmetric attractor in (a) a chaotic parametrization and (b) a periodic parametrization

![6_image_1.png](6_image_1.png)

Figure 4: Test NRMSE over 100 predicted time steps of the chaotically and periodically parametrized Thomas attractor (cp. Fig. 3a, 3b), predicted by GRU models trained with teacher forcing (TF) and free running (FR).

A claimed major benefit of TF training is faster model convergence and thus reduced training time (Miao et al., 2020), while a major benefit of FR training is avoiding the *exposure bias* arising from solely training with ground truth data, which yields a model that performs less robustly on unseen validation data (Ranzato et al., 2015). To illustrate these benefits and drawbacks, we utilize the Thomas attractor with two parametrizations, the first resulting in a periodic (cp. Fig. 3b) and the second resulting in a chaotic attractor (cp. Fig. 3a). By sampling from the attractors, we build two corresponding datasets of 10 000 samples each.
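Before comparing the two strategies on these datasets, we make the difference between Eqs. 1 and 2 tangible in code. The following PyTorch sketch rolls out a minimal encoder-decoder GRU under both modes; this is our own illustration rather than the replication package mentioned above, and the class name, layer sizes, and the use of a `GRUCell` decoder are assumptions.

```python
import torch
import torch.nn as nn

class Seq2SeqGRU(nn.Module):
    """Minimal encoder-decoder GRU; names and sizes are illustrative."""

    def __init__(self, d, hidden_size=256):
        super().__init__()
        self.encoder = nn.GRU(d, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(d, hidden_size)
        self.head = nn.Linear(hidden_size, d)

    def forward(self, x, m, y_true=None):
        """x: (batch, n, d) input window; y_true: (batch, m, d) targets.

        With y_true given, the decoder is teacher forced (Eq. 1);
        without it, the decoder runs free on its own outputs (Eq. 2).
        """
        _, h = self.encoder(x)   # accumulate the recent history
        c = h[-1]                # last encoder state initializes the decoder
        inp = x[:, -1]           # the last observed value triggers step one
        outputs = []
        for t in range(m):
            c = self.decoder(inp, c)
            y_hat = self.head(c)
            outputs.append(y_hat)
            # Eqs. 1 and 2 differ only in this feedback line
            inp = y_true[:, t] if y_true is not None else y_hat
        return torch.stack(outputs, dim=1)   # (batch, m, d)
```

In the inference phase, the model is always called without `y_true`, i.e., in free running mode.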
For both datasets, we train a single layer encoder-decoder GRU following the free running (FR) and the teacher forcing (TF) strategy. Figure 4 shows the test Normalized Root Mean Squared Error (NRMSE) per trained model over 100 predicted time steps. All models have been initialized with 150 ground truth values to build up the hidden state before predicting these 100 time steps. We observe that the chaotic attractor is harder to predict for the trained models (cp. blue and green line in the figure) and that models trained with teacher forcing tend to predict with a smaller error at the first steps, which then grows relatively fast. In contrast, the prediction error of the FR trained models starts on a higher level but stays more stable over the prediction horizon. Arguing that chaotic time series forecasting represents an especially challenging task for sequence-to-sequence models, our work focuses on this type of time series data. The more precise forecasting capabilities of a TF-trained network at the early prediction steps vs. the overall more stable long-term prediction performance of a FR-trained network observed in the Thomas example (cp. Fig. 3a, 3b) motivate the idea of combining both strategies into a curriculum learning approach.

Schmidt (2019) describes the *exposure bias* in natural language generation as a lack of generalization. Following this argumentation motivates an analysis of training with FR and TF strategies when applied to forecasting dynamical systems with different amounts of available training data. Figure 5 shows the NRMSE when forecasting the Thomas attractor using different dataset sizes and reveals that increasing the dataset size yields generally improved model performance for TF as well as FR, while their relative difference is maintained.

![7_image_0.png](7_image_0.png)

Figure 5: The NRMSE for different dataset sizes when using TF and FR during training for forecasting the Thomas attractor

## 4.2 Curriculum Learning (CL)

Within the context of our work, we denote the curriculum learning concept as combining teacher forcing and free running training, i.e., starting from the second decoder step the curriculum prescribes per decoder step whether to use the ground truth value or the predicted value of the previous time step as input. We formalize a single training step of a CL approach as follows:

$$\hat{y}^{t}=\begin{cases}f(y^{t-1},\theta,c^{t-1}),&\text{if }\Phi=1\\f(\hat{y}^{t-1},\theta,c^{t-1}),&\text{otherwise}\end{cases}\tag{3}$$

where the teacher forcing decision $\Phi$ governs whether the decoder input is teacher forced or not. Figure 2 illustrates the data flow of a sequence-to-sequence model training with CL in between the conventional strategies. In our naming scheme, CL-DTF-P resembles the scheduled sampling approach proposed by Bengio et al. (2015). Below, we discuss the different types of curricula on training and iteration scale resulting in different ways for determining $\Phi$.
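In code, Eq. 3 only generalizes the feedback line of the decoder rollout sketched in Section 4.1. A minimal sketch, reusing the hypothetical `Seq2SeqGRU` module from above and taking the per-step teacher forcing decisions as input:

```python
import torch

def curriculum_rollout(model, x, y_true, phi):
    """Forward pass under Eq. 3; phi[t] == True means the decoder input
    following step t is teacher forced, otherwise the prediction is fed back."""
    _, h = model.encoder(x)
    c = h[-1]
    inp = x[:, -1]
    outputs = []
    for t, forced in enumerate(phi):
        c = model.decoder(inp, c)
        y_hat = model.head(c)
        outputs.append(y_hat)
        inp = y_true[:, t] if forced else y_hat   # the teacher forcing decision Phi
    return torch.stack(outputs, dim=1)
```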
## 4.3 Curriculum On Training Scale

The teacher forcing ratio $\epsilon_i$ per training iteration $i$ is determined by a curriculum function $C : \mathbb{N} \to [0, 1]$ denoted as

$$\epsilon_{i}=C(i).\tag{4}$$

We distinguish three fundamental types of curriculum on training scale. First, constant curricula where a constant amount of teacher forcing is maintained throughout the training, denoted as

$$\epsilon_{i}=\epsilon.\tag{5}$$

Second, decreasing curricula where the training starts with a high amount of teacher forcing that continuously declines throughout the training. Third, increasing curricula where the training starts at a low amount of teacher forcing that continuously increases throughout the training. Both follow a transition function $C : \mathbb{N} \to [\epsilon_{start}, \epsilon_{end}]$ denoted as

$$\epsilon_{i}=C(i),\tag{6}$$

where $\epsilon_{start} \leq \epsilon_i \leq \epsilon_{i+1} \leq \epsilon_{end}$ for increasing curricula, $\epsilon_{start} \geq \epsilon_i \geq \epsilon_{i+1} \geq \epsilon_{end}$ for decreasing curricula and $\epsilon_{start} \neq \epsilon_{end}$ for both. The following equations exemplarily specify decreasing curricula (cp. Eqs. 7–9) following differing transition functions inspired by those used to study the scheduled sampling approach (Bengio et al., 2015):

$$C_{lin}(i)=\max\left(\epsilon_{end},\,\epsilon_{end}+(\epsilon_{start}-\epsilon_{end})\cdot\left(1-\frac{i}{\text{Ł}}\right)\right),\quad\text{with }\epsilon_{end}<\frac{\text{Ł}-1}{\text{Ł}},\;1<\text{Ł},\;i\in\mathbb{N},\tag{7}$$

$$C_{invSig}(i)=\epsilon_{end}+(\epsilon_{start}-\epsilon_{end})\cdot\frac{k}{k+e^{\frac{i}{k}}},\quad\text{with }\epsilon_{end}<\epsilon_{start},\;1\leq k,\;i\in\mathbb{N},\tag{8}$$

$$C_{exp}(i)=\epsilon_{end}+(\epsilon_{start}-\epsilon_{end})\cdot k^{i},\quad\text{with }\epsilon_{end}<\epsilon_{start},\;0<k<1,\;i\in\mathbb{N},\tag{9}$$

where the curriculum length parameter Ł determines the pace as the number of iterations which the curriculum $C_{lin}$ needs to transition from $\epsilon_{start}$ to $\epsilon_{end}$. The curricula $C_{invSig}$ and $C_{exp}$ have no such parameter since the functions never completely reach $\epsilon_{end}$ in theory. In practice though, we adapt the curriculum specific parameter $k$ to stretch or compress these curricula along the iteration axis to achieve the same effect. Figure 6a exemplarily visualizes three decreasing and three increasing curricula following differing transition functions $C$ and being parametrized with $\epsilon_{start} = 1$ and $\epsilon_{end} = 0$, and $\epsilon_{start} = 0$ and $\epsilon_{end} = 1$ respectively. Furthermore, each is parametrized to have a curriculum length of Ł = 1 000. Figure 6b shows examples of decreasing and increasing $C_{lin}$ with different Ł.

## 4.4 Curriculum On Iteration Scale

$\epsilon_i$ prescribes a ratio of TF vs. FR steps for a given training iteration $i$. Based on $\epsilon_i$, which solely prescribes the amount of teacher forcing for an iteration, we can now develop micro curricula for distributing the TF and FR steps, eventually providing a teacher forcing decision $\Phi$ per training step. We propose two ways to distribute TF and FR steps within one training iteration: (1) probabilistic, where $\epsilon_i$ is interpreted as the probability of being a TF step, and (2) deterministic, where $\epsilon_i$ is interpreted as a rate that determines the number of TF steps trained before moving to FR for the rest of the training sequence.

![9_image_0.png](9_image_0.png)

Figure 6: Examples of different decreasing curricula and their corresponding increasing versions (a) and multiple linear curricula with different pace Ł (b).

For a probabilistic CL, we denote the teacher forcing decision $\Phi_\epsilon$, which is a discrete random variable that is drawn from a Bernoulli distribution:

$$\Phi_{\epsilon}\sim\mathrm{Bernoulli}(\epsilon).\tag{10}$$

For a deterministic CL, $\Phi$ depends not only on $\epsilon$ but also on the current position $j$ within the predicted sequence of length $m$.
Therefore, in this case we denote the teacher forcing decision $\Phi_{\epsilon,j}$ as:

$$\Phi_{\epsilon,j}=\begin{cases}1,&\text{if }\epsilon\geq\frac{j}{m}\\0,&\text{otherwise.}\end{cases}\tag{11}$$

## 5 Evaluation

To compare the training strategies described in Section 4, we evaluate each with varying parametrization on six different chaotic time series datasets. Our experiments aim to answer the following six research questions:

RQ1 **Baseline teaching strategies.** How well and how consistently do the current baseline strategies FR and TF train a model for forecasting dynamical systems?

RQ2 **Curriculum learning strategies.** How do the different curriculum learning strategies perform in comparison to the baseline strategies?

RQ3 **Training length.** How is training length influenced by the different teaching strategies?

RQ4 **Prediction stability.** How stable is a model's prediction performance over longer prediction horizons when trained with the different strategies?

RQ5 **Curriculum parametrization.** How much does the curriculum's parametrization influence model performance?

RQ6 **Iteration scale curriculum.** How do iteration scale curricula differ in resulting model performance?

## 5.1 Evaluated Curricula

In total, we define eight strategies that we evaluate in this study (cp. Fig. 2). For comparison, we train the two baseline methods teacher forcing (TF) and free running (FR) that "teach" throughout the entire training or do not "teach" at all respectively. All other methods prescribe a teaching curriculum CL throughout the training and we distinguish these strategies along two dimensions: (1) the overall increasing (ITF), constant (CTF), or decreasing (DTF) trend in teacher forcing throughout the training curriculum and (2) the probabilistic (P) or deterministic (D) teacher forcing distribution within training steps.

Table I: Curriculum strategy parameters used during the *baseline*, *exploratory*, and the *essential* experiments

| Experiments | Strategy | C | $\epsilon$ ($\epsilon_{start}$ → $\epsilon_{end}$) | Ł |
|---|---|---|---|---|
| baseline | FR | - | 0.0 | - |
| baseline | TF | - | 1.0 | - |
| exploratory | CL-CTF-P | - | {0.25, 0.5, 0.75} | - |
| exploratory | CL-DTF-P, CL-DTF-D | {linear, inverse sigmoid, exponential} | {0.25, 0.5, 0.75, 1.0} → 0.0 | 1 000 |
| exploratory | CL-ITF-P, CL-ITF-D | {linear, inverse sigmoid, exponential} | {0.0, 0.25, 0.5, 0.75} → 1.0 | 1 000 |
| essential | CL-DTF-P, CL-DTF-D, CL-ITF-P, CL-ITF-D | linear | {0.0 → 1.0, 1.0 → 0.0} | {62, 125, 250, 500, 1 000, 2 000, 4 000, 8 000, 16 000, 32 000} |

## 5.2 Parametrization Of Training Curricula

Table I shows all training strategy-specific parameters and their values for the evaluated strategies. We subdivide our experiments into three sets: *baseline*, *exploratory*, and *essential* experiments. The baseline strategies FR and TF do not have any additional parameters. The CL-CTF-P strategy has the parameter $\epsilon$ configuring the strategy's teacher forcing ratio. The increasing and decreasing strategies CL-DTF-x and CL-ITF-x are configured by $\epsilon_{start}$ and $\epsilon_{end}$ referring to the initial and eventual amount of teacher forcing and the function $C$ transitioning between both. Additionally, Ł determines the number of training epochs in between $\epsilon_{start}$ and $\epsilon_{end}$.
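The curricula listed in Table I can be generated directly from Eqs. 5–11. The sketch below implements the three transition functions on training scale and the two teacher forcing decisions on iteration scale; the function names are ours, and the default pace parameters are assumptions (the paper tunes $k$ so that $C_{invSig}$ and $C_{exp}$ span roughly Ł iterations).

```python
import numpy as np

def c_linear(i, eps_start, eps_end, length):
    """Eq. 7: linear transition from eps_start to eps_end over `length` iterations."""
    return eps_end + (eps_start - eps_end) * max(0.0, 1.0 - i / length)

def c_inv_sigmoid(i, eps_start, eps_end, k=200.0):
    """Eq. 8: inverse sigmoid transition; larger k stretches the curve."""
    return eps_end + (eps_start - eps_end) * k / (k + np.exp(i / k))

def c_exponential(i, eps_start, eps_end, k=0.995):
    """Eq. 9: exponential transition with 0 < k < 1."""
    return eps_end + (eps_start - eps_end) * k ** i

def phi_probabilistic(eps, m, rng=None):
    """Eq. 10: each of the m decoder steps is teacher forced with probability eps."""
    rng = rng or np.random.default_rng()
    return rng.random(m) < eps

def phi_deterministic(eps, m):
    """Eq. 11: roughly the first eps * m steps are teacher forced, the rest run free."""
    return np.array([eps >= j / m for j in range(1, m + 1)])
```

A CL-ITF-P run with Ł = 1 000, for instance, would evaluate `eps = c_linear(i, 0.0, 1.0, 1000)` once per training iteration `i` and then draw `phi_probabilistic(eps, m)` to obtain the decisions fed to the rollout of Section 4.2.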
For the *exploratory* experiments, we utilize a fixed Ł = 1 000, while for the *essential* experiments, we evaluate all strategies solely using a linear transition $C_{lin}$ in the curriculum (cp. Eq. 7) with either $\epsilon_{start} = 0$ and $\epsilon_{end} = 1$ (increasing) or $\epsilon_{start} = 1$ and $\epsilon_{end} = 0$ (decreasing).

## 5.3 Performance Metrics

We use the NRMSE and the $R^2$ metrics as well as two metrics derived from those to evaluate model performance. NRMSE is a normalized version of the Root Mean Squared Error (RMSE) where smaller values indicate better prediction performance. For a single value of a sequence, NRMSE is calculated as:

$$\mathrm{NRMSE}(y,\hat{y})=\frac{\sqrt{\frac{1}{d}\cdot\sum_{j=1}^{d}(y_{j}-\hat{y}_{j})^{2}}}{\sigma},\tag{12}$$

where $y$ is a ground truth vector, $\hat{y}$ is the corresponding prediction, $\sigma$ is the standard deviation across the whole dataset, and $d$ is the size of the vectors $y$ and $\hat{y}$. For model evaluation, we calculate the mean NRMSE over all $m$ forecasted steps of a sequence. Additionally, we compute and report the NRMSE only for the last $m/10$ forecasted steps of a sequence to specifically evaluate model performance at long prediction horizons. The $R^2$ score lies in the range $(-\infty, 1]$ with higher values referring to better prediction performance. A score of 0 means that the prediction is as good as predicting the ground truth sequence's mean vector $\bar{y}$. The $R^2$ score is computed as:

$$R^{2}=1-\frac{\sum_{j=1}^{d}(y_{j}-\hat{y}_{j})^{2}}{\sum_{j=1}^{d}(y_{j}-\bar{y}_{j})^{2}}.\tag{13}$$

We use the $R^2$ score to compute another metric $LT_{R^2>0.9}$ measuring the number of Lyapunov Times (LTs) that a model can predict without the $R^2$ score dropping below a threshold of 0.9. Sangiorgio and Dercole (Sangiorgio & Dercole, 2020) proposed this metric while applying a less strict threshold of 0.7.
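Assuming NumPy arrays of shape (m, d) for a forecasted sequence and its ground truth, Eqs. 12 and 13 and the LT-based stability metric translate into the following sketch; the incremental windowing used for the LT metric is our interpretation of the text.

```python
import numpy as np

def nrmse(y, y_hat, sigma):
    """Eq. 12 for a single d-dimensional step; sigma is the dataset-wide std."""
    return np.sqrt(np.mean((y - y_hat) ** 2)) / sigma

def mean_nrmse(Y, Y_hat, sigma):
    """Mean NRMSE over all m forecasted steps of a sequence (shape (m, d))."""
    return np.mean([nrmse(y, yh, sigma) for y, yh in zip(Y, Y_hat)])

def r2_score(Y, Y_hat):
    """Eq. 13; the baseline predictor is the ground truth sequence's mean vector."""
    ss_res = np.sum((Y - Y_hat) ** 2)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

def lts_above_threshold(Y, Y_hat, steps_per_lt, thresh=0.9):
    """Number of Lyapunov times predicted before R^2 drops below thresh."""
    n_lts = len(Y) // steps_per_lt
    for k in range(1, n_lts + 1):
        if r2_score(Y[: k * steps_per_lt], Y_hat[: k * steps_per_lt]) < thresh:
            return k - 1
    return n_lts
```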
Table II: Details of the chaotic systems that were approximated to generate the data used for our experiments

| System | ODE/DDE | Parameters | d | LLE |
|---|---|---|---|---|
| Mackey-Glass | $\frac{dx}{dt}=\beta\frac{x_{\tau}}{1+x_{\tau}^{n}}-\gamma x$, with $\gamma,\beta,n>0$ | $\tau=17$, $n=10$, $\gamma=0.1$, $\beta=0.2$, $dt=1.0$ | 1 | 0.006 |
| Thomas | $\frac{dx}{dt}=\sin(y)-bx$, $\frac{dy}{dt}=\sin(z)-by$, $\frac{dz}{dt}=\sin(x)-bz$ | $b=0.1$, $dt=0.1$ | 3 | 0.055 |
| Rössler | $\frac{dx}{dt}=-(y+z)$, $\frac{dy}{dt}=x+ay$, $\frac{dz}{dt}=b+z(x-c)$ | $a=0.2$, $b=0.2$, $c=5.7$, $dt=0.12$ | 3 | 0.069 |
| Hyper Rössler | $\frac{dx}{dt}=-y-z$, $\frac{dy}{dt}=x+ay+w$, $\frac{dz}{dt}=b+xz$, $\frac{dw}{dt}=-cz+dw$ | $a=0.25$, $b=3$, $c=0.5$, $d=0.05$, $dt=0.1$ | 4 | 0.14 |
| Lorenz | $\frac{dx}{dt}=-\sigma x+\sigma y$, $\frac{dy}{dt}=-xz+\rho x-y$, $\frac{dz}{dt}=xy-\beta z$ | $\sigma=10$, $\beta=8/3$, $\rho=28$, $dt=0.01$ | 3 | 0.905 |
| Lorenz'96 | $\frac{dx_{k}}{dt}=-x_{k-2}x_{k-1}+x_{k-1}x_{k+1}-x_{k}+F$ for $k=1\ldots d$ and $x_{-1}=x_{d}$ | $F=8$, $dt=0.05$ | 40 | 1.67 |

## 5.4 Evaluated Datasets

We focus on forecasting chaotic time series data and sample datasets by approximating six commonly studied chaotic systems (cp. Tab. II), i.e., Mackey-Glass (Mackey & Glass, 1977), Rössler (Rössler, 1976), Thomas' cyclically symmetric attractor (Thomas, 1999), Hyper Rössler (Rossler, 1979), Lorenz (Lorenz, 1963) and Lorenz'96 (Lorenz, 1996). Table II shows, per system, the differential equations and how we parametrized them. These systems differ, among others, in the number of dimensions $d$ and the degree of chaos as indicated by the largest Lyapunov exponent in the LLE column of Tab. II. The LLEs are approximated values that were published independently in the past (Brown et al., 1991; Sprott & Chlouverakis, 2007; Sano & Sawada, 1985; Sandri, 1996; Hartl, 2003; Brajard et al., 2020). We generate datasets by choosing an initial state vector of size $d$ and approximate 10 000 samples using the respective differential equations. We use the SciPy package's implementation of the Livermore solver for ordinary differential equations (LSODE) (Radhakrishnan & Hindmarsh, 1993), except for Mackey-Glass, which we approximate through the Python module JiTCDDE implementing the delayed differential equation (DDE) integration method proposed by (Shampine & Thompson, 2001). Thereby, $dt$ defines the time difference between two sampled states per dataset and is shown in Table II. Where available, we chose $dt$ similar to previous studies aiming for comparability of results. We split each dataset into 80% training samples and 10% validation and testing samples respectively. All data is normalized following a z-transform.
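As an illustration of this sampling procedure, the sketch below integrates the Thomas attractor with the parametrization of Table II, applies the z-transform, splits the data, and derives the prediction length $m$ of Eq. 14 (introduced in the next section). Note that `scipy.integrate.odeint` wraps the LSODA routine and stands in here for the LSODE solver named in the text; the initial state is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import odeint

def thomas(state, t, b=0.1):
    """Thomas' cyclically symmetric attractor (cp. Table II)."""
    x, y, z = state
    return [np.sin(y) - b * x, np.sin(z) - b * y, np.sin(x) - b * z]

dt, n_samples, lle = 0.1, 10_000, 0.055        # parametrization from Table II
t = np.arange(n_samples) * dt
data = odeint(thomas, [1.0, 1.0, 1.0], t)      # (10 000, 3) trajectory

data = (data - data.mean(axis=0)) / data.std(axis=0)   # z-transform

n_train, n_val = int(0.8 * n_samples), int(0.9 * n_samples)
train, val, test = data[:n_train], data[n_train:n_val], data[n_val:]   # 80/10/10

m = round(1 / (dt * lle))   # Eq. 14: one Lyapunov time, ~182 steps for Thomas
```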
## 5.5 Training Procedure

All evaluated models follow an encoder-decoder GRU architecture with an additional fully connected layer after the decoder (cp. Fig. 7). We performed a full grid search for the hyper-parameters learning rate, batch size, learning rate reduction factor, loss plateau, input length $n$ and hidden state size to determine suitable configurations for the experiments. Based on this optimization, we used the Adam (Kingma et al., 2015) optimizer with a batch size of 128 and apply Reduce Learning Rate on Plateau (RLROP) with an initial learning rate of $1e^{-3}$ and a reduction factor of 0.6, i.e., 40% learning rate reduction, given a loss plateau of 10 epochs for all datasets except Lorenz'96, where we use a reduction factor of 0.9 and a 20 epoch plateau respectively. Furthermore, we found an input length of $n = 150$ steps and a *hidden state size* of 256 to be most suitable.

![13_image_0.png](13_image_0.png)

Figure 7: Structure of a simple encoder-decoder GRU used for training with teacher forcing

We use early stopping with a *patience* of 100 epochs and a *minimum improvement threshold* of 1% to ensure the convergence of the model while preventing overfitting. We train all models with a dataset-specific prediction length $m$ defined as:

$$m=\left[\frac{\mathrm{LT}}{dt}\right]=\left[\frac{1}{dt\cdot\mathrm{LLE}}\right].\tag{14}$$

The reason being that we aim to train for the same forecasting horizon that we mainly evaluate a trained model with. We adapt this horizon to the dataset's LT, thereby aiming for performance measures that are comparable across datasets.
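The stated hyperparameters map directly onto PyTorch primitives, as the following sketch shows; `train_one_epoch` and `validate` are assumed helpers, and `Seq2SeqGRU` is the hypothetical module sketched in Section 4.1.

```python
import torch

model = Seq2SeqGRU(d=3)                     # e.g., for the Thomas dataset
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.6, patience=10)     # 40% reduction after a 10-epoch plateau

best_loss, stale_epochs = float("inf"), 0
for epoch in range(100_000):                # upper bound; early stopping breaks out
    train_one_epoch(model, optimizer, batch_size=128)   # assumed helper
    val_loss = validate(model)                          # assumed helper
    scheduler.step(val_loss)
    if val_loss < 0.99 * best_loss:         # minimum improvement threshold of 1%
        best_loss, stale_epochs = val_loss, 0
    else:
        stale_epochs += 1
        if stale_epochs >= 100:             # patience of 100 epochs
            break
```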
We provide plots of the training and validation loss curves of the final parametrization per strategy and dataset in Appendix A.1. Based on these curves, we observe for ITF in contrast to DTF strategies that the training loss tends to move away from the validation loss faster. This is explainable by the fact that with increasing training time ITF strategies deliver an increasing amount of TF inputs, counteracting the accumulation of error along the forecasted sequence and therefore further reducing the training loss. For DTF strategies we observe an opposing behavior. Regarding training iterations, we observe that ITF strategies typically train for a larger number of epochs. Since the termination of the training is determined by the early stopping criterion, this shows that ITF facilitates a longer and typically more successful training process compared to DTF and baseline strategies.

Table III: Results of the *exploratory* tests with the best hyperparameter configuration per strategy and system. The arrow besides each metric's column title indicates whether smaller (↓) or larger (↑) values are favored. The best result values per dataset are printed in bold and the best baseline NRMSEs are underlined. Together with each dataset we put the corresponding LLE in parenthesis.

| Dataset (LLE) | Strategy | C | $\epsilon$ | Trained epochs | NRMSE over 1 LT absolute ↓ | rel. impr. ↑ | @BL epoch ↓ | last 10% ↓ |
|---|---|---|---|---|---|---|---|---|
| Thomas (0.055) | FR | constant | 0.00 | 427 | 0.03416 | - | - | 0.047222 |
| | TF | constant | 1.00 | 163 | 0.34545 | - | - | 0.607954 |
| | CL-CTF-P | constant | 0.25 | 450 | 0.05535 | −62.03% | 0.05675 | 0.082443 |
| | CL-DTF-P | inverse sigmoid | 0.75 ↘ 0.00 | 598 | 0.01858 | 45.61% | 0.02120 | 0.034325 |
| | CL-DTF-D | exponential | 0.25 ↘ 0.00 | 557 | 0.03229 | 5.47% | 0.03792 | 0.039749 |
| | CL-ITF-P | exponential | 0.00 ↗ 1.00 | 620 | 0.01403 | 58.93% | 0.02026 | 0.026014 |
| | CL-ITF-D | exponential | 0.25 ↗ 1.00 | 944 | **0.01126** | 67.04% | 0.02179 | 0.018571 |
| Rössler (0.069) | FR | constant | 0.00 | 3 863 | 0.00098 | - | - | 0.000930 |
| | TF | constant | 1.00 | 500 | 0.00743 | - | - | 0.016119 |
| | CL-CTF-P | constant | 0.25 | 2 081 | 0.00084 | 14.29% | 0.00084 | 0.001333 |
| | CL-DTF-P | linear | 1.00 ↘ 0.00 | 2 751 | 0.00083 | 15.31% | 0.00083 | 0.000931 |
| | CL-DTF-D | inverse sigmoid | 0.25 ↘ 0.00 | 4 113 | 0.00064 | 34.69% | 0.00066 | 0.000578 |
| | CL-ITF-P | inverse sigmoid | 0.00 ↗ 1.00 | 7 194 | 0.00025 | 74.49% | 0.00034 | 0.000358 |
| | CL-ITF-D | linear | 0.75 ↗ 1.00 | 5 132 | **0.00024** | 75.51% | 0.00031 | 0.000390 |
| Lorenz (0.905) | FR | constant | 0.00 | 918 | 0.01209 | - | - | 0.013166 |
| | TF | constant | 1.00 | 467 | 0.00152 | - | - | 0.002244 |
| | CL-CTF-P | constant | 0.75 | 297 | 0.00167 | −9.87% | 0.00167 | 0.002599 |
| | CL-DTF-P | inverse sigmoid | 0.75 ↘ 0.00 | 522 | 0.00168 | −10.53% | 0.00162 | 0.002425 |
| | CL-DTF-D | inverse sigmoid | 1.00 ↘ 0.00 | 204 | 0.00187 | −23.03% | 0.00187 | 0.002823 |
| | CL-ITF-P | linear | 0.00 ↗ 1.00 | 750 | 0.00149 | 1.97% | 0.00217 | 0.002235 |
| | CL-ITF-D | inverse sigmoid | 0.75 ↗ 1.00 | 803 | **0.00124** | 18.42% | 0.00132 | 0.002084 |
| Lorenz'96 (1.67) | FR | constant | 0.00 | 8 125 | 0.07273 | - | - | 0.126511 |
| | TF | constant | 1.00 | 4 175 | 0.03805 | - | - | 0.075583 |
| | CL-CTF-P | constant | 0.50 | 2 615 | 0.07995 | −110.12% | 0.07995 | 0.140700 |
| | CL-DTF-P | linear | 0.75 ↘ 0.00 | 939 | 0.04654 | −22.31% | 0.04654 | 0.087228 |
| | CL-DTF-D | linear | 0.75 ↘ 0.00 | 1 875 | 0.04381 | −15.14% | 0.04381 | 0.081025 |
| | CL-ITF-P | inverse sigmoid | 0.25 ↗ 1.00 | 4 787 | **0.01854** | 51.27% | 0.02016 | 0.036651 |
| | CL-ITF-D | inverse sigmoid | 0.00 ↗ 1.00 | 3 263 | 0.02093 | 44.99% | 0.02196 | 0.040356 |

## 5.6 Results

Table III shows results for the *exploratory* experiments. Per evaluated strategy and dataset, the table reports resulting model performance in terms of NRMSE. We only report the curriculum configuration in terms of transition function $C$ and $\epsilon$ schedule that yields the best NRMSE per strategy and dataset. Each model was used to predict a dataset-specific horizon of 1 LT. The best result per dataset and performance metric is highlighted in bold. First, we study the baseline strategies FR and TF and observe that for two datasets, i.e., Thomas and Rössler, the FR strategy outperforms the TF baseline, while TF outperforms FR for the other two. We select the one that performs best per dataset to measure relative improvement or deterioration gained by training with the respective curriculum learning strategy (cp. column "NRMSE rel. impr."). We observe that across all datasets and performance metrics the CL-ITF-P and CL-ITF-D strategies yield the best and second best performing model with a relative improvement of 1.97–80.61% over the best performing baseline strategy. The other curriculum learning strategies perform less consistently across the datasets. The CL-DTF-x strategies yield an improved NRMSE for half of the datasets while the constant CL-CTF-P only yields an improvement for the Thomas attractor. We separately report the NRMSE of the last 10% predicted values of the 1 LT test horizon per dataset to assess how robust a prediction is over time (cp. column "NRMSE last 10%"). We observe that the CL-ITF-P and CL-ITF-D strategies also reach the best performance in terms of this metric, meaning that they yield the most robust models. We further observe a diverse set of curriculum configurations yielding the best performing model per strategy and dataset. That means that all available transition functions, i.e., linear, inverse sigmoid, and exponential, have been discovered as best choice for at least one of the trained models. Further, we observe all evaluated $\epsilon$ values as best choice for the CL-CTF-P strategy and one of the datasets respectively. Similarly, the best performing initial $\epsilon$ for the increasing and decreasing transitions per dataset spans all evaluated values except for 0.5. The table also reports the number of training iterations till reaching the early stopping criterion (cp. column "Trained epochs").
column "training | corresponding LLE in parenthesis. Best performing curriculum | Trained | NRMSE over 1LT | #LT with | | | | | | |----------------------------------------------------------------|-------------|------------------|------------|----------|-------------|------------|-----------|------------| | Strategy | | Ł | epochs | absolut↓ | rel. impr.↑ | @BL epoch↓ | last 10%↓ | R2 > 0.9 ↑ | | FR | 0.00 | - | 4 713 | 0.00391 | - | 0.00391 | 0.004101 | 4.50 | | TF | 1.00 | - | 44 | 0.09535 | - | 0.09535 | 0.171945 | 1.64 | | CL-CTF-P | 0.25 | - | 2 918 | 0.00632 | −61.64% | 0.00632 | 0.006544 | 4.51 | | CL-DTF-P | 1.00 & 0.00 | 2 000 | 3 733 | 0.00215 | 45.01% | 0.00215 | 0.003010 | 4.95 | | CL-DTF-D | 1.00 & 0.00 | 1 000 | 431 | 0.00585 | −49.62% | 0.00585 | 0.011022 | 3.91 | | CL-ITF-P | 0.00 % 1.00 | 500 | 1 566 | 0.00104 | 73.40% | 0.00104 | 0.001793 | 5.18 | | CL-ITF-D | 0.00 % 1.00 | 500 | 1 808 | 0.00211 | 46.03% | 0.00211 | 0.003032 | 4.99 | | FR | 0.00 | - | 427 | 0.03416 | - | 0.03416 | 0.047222 | 2.04 | | TF | 1.00 | - | 163 | 0.34545 | - | 0.34545 | 0.607954 | 1.73 | | CL-CTF-P | 0.25 | - | 450 | 0.05535 | −62.03% | 0.05675 | 0.082443 | 1.53 | | CL-DTF-P | 1.00 & 0.00 | 1 000 | 356 | 0.05084 | −48.83% | 0.05084 | 0.105585 | 2.13 | | CL-DTF-D | 1.00 & 0.00 | 1 000 | 326 | 0.10712 | −213.58% | 0.10712 | 0.206923 | 1.53 | | CL-ITF-P | 0.00 % 1.00 | 500 | 677 | 0.00930 | 72.78% | 0.01645 | 0.016729 | 3.99 | | CL-ITF-D | 0.00 % 1.00 | 500 | 649 | 0.01819 | 46.75% | 0.03934 | 0.030589 | 2.05 | | FR | 0.00 | - | 3 863 | 0.00098 | - | 0.00098 | 0.000930 | 9.46 | | TF | 1.00 | - | 500 | 0.00743 | - | 0.00743 | 0.016119 | 4.75 | | CL-CTF-P | 0.25 | - | 2 081 | 0.00084 | 14.29% | 0.00084 | 0.001333 | 7.51 | | CL-DTF-P | 1.00 & 0.00 | 1 000 | 2 751 | 0.00083 | 15.31% | 0.00083 | 0.000931 | 8.46 | | CL-DTF-D | 1.00 & 0.00 | 125 | 4 879 | 0.00100 | −2.04% | 0.00116 | 0.000947 | 9.28 | | CL-ITF-P | 0.00 % 1.00 | 500 | 4 523 | 0.00019 | 80.61% | 0.00022 | 0.000303 | 10.23 | | CL-ITF-D | 0.00 % 1.00 | 4 000 | 7 267 | 0.00027 | 72.24% | 0.00051 | 0.000368 | 9.41 | | FR | 1.00 | - | 6 461 | 0.00599 | - | 0.00599 | 0.007011 | 6.57 | | TF | 0.00 | - | 2 788 | 0.00435 | - | 0.00762 | 0.011194 | 5.24 | | CL-CTF-P | 0.25 | - | 2 909 | 0.01450 | −233.33% | 0.01450 | 0.015944 | 5.21 | | CL-DTF-P | 1.00 & 0.00 | 2 000 | 3 773 | 0.00560 | 28.74% | 0.00560 | 0.007052 | 6.32 | | CL-DTF-D | 1.00 & 0.00 | 16 000 | 1 793 | 0.00490 | 12.64% | 0.00490 | 0.007471 | 6.30 | | CL-ITF-P | 0.00 % 1.00 | 125 | 2 802 | 0.00366 | 15.86% | 0.00366 | 0.005802 | 6.50 | | CL-ITF-D | 0.00 % 1.00 | 250 | 3 317 | 0.00326 | 25.06% | 0.00326 | 0.004639 | 6.72 | | FR | 0.00 | - | 918 | 0.01209 | - | 0.01319 | 0.013166 | 3.31 | | TF | 1.00 | - | 467 | 0.00152 | - | 0.00152 | 0.002244 | 6.72 | | CL-CTF-P | 0.75 | - | 297 | 0.00167 | −9.87% | 0.00167 | 0.002599 | 6.43 | | CL-DTF-P | 1.00 & 0.00 | 4 000 | 450 | 0.00124 | 18.42% | 0.00124 | 0.001925 | 6.64 | | CL-DTF-D | 1.00 & 0.00 | 16 000 | 587 | 0.00111 | 26.97% | 0.00127 | 0.001650 | 6.53 | | CL-ITF-P | 0.00 % 1.00 | 250 | 1 137 | 0.00060 | 60.53% | 0.00124 | 0.000883 | 7.19 | | CL-ITF-D | 0.00 % 1.00 | 250 | 578 | 0.00135 | 11.18% | 0.00189 | 0.001725 | 4.33 | | FR | 0.00 | - | 8 125 | 0.07273 | - | 0.08362 | 0.126511 | 2.34 | | TF | 1.00 | - | 4 175 | 0.03805 | - | 0.03805 | 0.075583 | 3.01 | | CL-CTF-P | 0.50 | - | 2 615 | 0.07995 | −110.12% | 0.07995 | 0.140700 | 2.25 | | CL-DTF-P | 1.00 & 0.00 | 1 000 | 983 | 0.05278 | −38.71% | 0.05278 | 0.098130 | 2.67 | | CL-DTF-D | 1.00 
& 0.00 | 1 000 | 4 083 | 0.07119 | −87.10% | 0.07119 | 0.126636 | 2.34 | | CL-ITF-P | 0.00 % 1.00 | 250 | 3 886 | 0.01680 | 55.85% | 0.01680 | 0.032439 | 4.01 | | CL-ITF-D | 0.00 % 1.00 | 250 | 3 379 | 0.01628 | 57.21% | 0.01628 | 0.031464 | 4.18 | epochs"). We observe that the two baseline strategies utilize strongly differing numbers of iterations across all datasets. For the Thomas and the Rössler attractor, the teacher forcing strategy TF does not allow for proper model convergence being characterized by a low number of iterations and a high NRMSE compared to the other strategies. Among the curriculum teaching strategies across all datasets, the strategies with increasing teacher forcing ratio CL-ITF-x utilize the most training iterations. These CL-ITF-x strategies also utilize more training iterations than the better performing baseline strategy across all datasets. To better understand whether the longer training is the sole reason for the higher performance of the CL-ITF-x trained models, we additionally report the performance in terms of NRMSE of all curriculum trained models after the same number of training iterations as the better performing baseline model, i.e., after 427 epochs for Thomas, after 3 863 epochs for Rössler, after 467 epochs for Lorenz, and after 4 175 epochs for Lorenz'96 (cp. column "Performance @BL epochs"). We observe across all datasets that the best performing teaching strategy still remains CL-ITF-P or CL-ITF-D. In conclusion, the *exploratory* experiments demonstrated that a well-parametrized CL-ITF-x strategy yields a 18.42 - 75.51% performance increase across the evaluated datasets. However, this improvement comes at the cost of an intensive parameter optimization of the respective curriculum. Therefore, we run a second series of *essential* experiments in which we simplify the parametrization of the curriculum by utilizing a linear transition from either 0.0 → 1.0 (CL-ITF-x) or 1.0 → 0.0 (CL-DTF-x) that is solely parametrized by the length of this transition in terms of training epochs Ł. Table IV reports results in terms of the previously introduced performance metrics again measured over a prediction horizon of 1 LT and across the same teaching strategies for six datasets including those studied for the *exploratory* experiments. Since the changes of the *essential* over the *exploratory* experiments solely effect teaching strategies with a training iteration-dependent curriculum, they have no effect on the baseline strategies FR and TF as well as the constant curriculum CL-CTF-P, which we still report in Table IV for direct comparison. Overall, we observe that CL-ITF-P outperforms all other strategies for four out of six datasets while it performs second best for the remaining two datasets where the deterministic version CL-ITF-D performs best. These strategies yield relative improvements ranging from 25.06 - 80.61% and are, thus, even higher than those observed for the *exploratory* experiments. Beyond that we observe that for all datasets treated in both experimental sets, the training curricula used in the *essential* experiments yield better performing models. For three out of four datasets the training even requires substantially less training iterations than in the explorative experiments. Additionally, we report in column "\#LT with R2 > 0.9" the prediction horizon in terms of LT that a trained model can predict while constantly maintaining an R2 > 0.9. 
## 5.7 Additional Experiments

Judging from the *essential* experiments, the CL-ITF-x strategies are our winning strategies on the chaotic systems we tested. However, as mentioned in Section 3, there are many other approaches targeting (chaotic) dynamical system forecasting with adapted RNN architectures that take theoretical insights of dynamical systems into account. STF (Monfared et al., 2021) does not require any architectural modifications but instead provides an adapted training strategy. It determines a time interval τ = ln 2/LLE that denotes how many FR steps are processed before the next TF value is used within one sequence. It is the strategy that we found most comparable to the CL approaches we study. Therefore, we executed another set of experiments where we used STF during the training of our encoder-decoder GRU for all chaotic systems in Table II. Since our data is sampled with different dt, we have to redefine the time interval as τ = ln 2/(LLE · dt).

The results (cp. Tab. V) show that STF provides improved performance compared to the best baseline for three of six datasets, ranging from 26.21–46.75% relative improvement. For this, it requires no additional hyperparameters if the system's LLE is known. It also beats the best performing CL strategy on the Hyper-Rössler dataset by a margin of 1.15%. For the rest of the datasets, the results stay behind those of the CL-ITF-x strategies, showing a worse, i.e., increased, NRMSE by 7.00–236.18%. We assume that, where STF systematically induces TF using knowledge about the processed data to catch chaos and prevent exploding gradients before they appear, CL in general helps the model to find more consistent minima, disregarding the degree of chaos. We hypothesize that the GRU is in many cases able to keep the risk of exploding gradients low due to its gating mechanism and thus prevents STF from really showing its full strength here.

![16_image_0.png](16_image_0.png)

| System (LLE) | Strategy | Epochs | NRMSE ↓ | Rel. impr. ↑ |
|---|---|---|---|---|
| Mackey-Glass (0.006) | FR | 4 713 | 0.00391 | - |
| | TF | 44 | 0.09535 | - |
| | CL-ITF-P | 1 566 | 0.00104 | 73.40% |
| | CL-ITF-D | 1 808 | 0.00211 | 46.03% |
| | STF | 5 517 | 0.00254 | 35.04% |
| Thomas (0.055) | FR | 427 | 0.03416 | - |
| | TF | 163 | 0.34545 | - |
| | CL-ITF-P | 677 | 0.00930 | 72.78% |
| | CL-ITF-D | 649 | 0.01819 | 46.75% |
| | STF | 432 | 0.03655 | −7.00% |
| Rössler (0.069) | FR | 3 863 | 0.00098 | - |
| | TF | 500 | 0.00743 | - |
| | CL-ITF-P | 4 523 | 0.00019 | 80.61% |
| | CL-ITF-D | 7 267 | 0.00027 | 72.24% |
| | STF | 4 796 | 0.00065 | 33.67% |
| Hyper-Rössler (0.14) | FR | 6 461 | 0.00599 | - |
| | TF | 2 788 | 0.00435 | - |
| | CL-ITF-P | 2 802 | 0.00366 | 15.86% |
| | CL-ITF-D | 3 317 | 0.00326 | 25.06% |
| | STF | 2 645 | 0.00321 | 26.21% |
| Lorenz (0.905) | FR | 918 | 0.01209 | - |
| | TF | 467 | 0.00152 | - |
| | CL-ITF-P | 1 137 | 0.00060 | 60.53% |
| | CL-ITF-D | 578 | 0.00135 | 11.18% |
| | STF | 1 853 | 0.00511 | −236.18% |
| Lorenz'96 (1.67) | FR | 8 125 | 0.07273 | - |
| | TF | 4 175 | 0.03805 | - |
| | CL-ITF-P | 3 886 | 0.01680 | 55.85% |
| | CL-ITF-D | 3 379 | 0.01628 | 57.21% |
| | STF | 1 478 | 0.09030 | −137.32% |

Table V: Comparing the results of STF with those of the baseline and the CL-ITF-x strategies
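For reference, a minimal sketch of the STF teacher forcing pattern as we read it from the description above: τ = ln 2/(LLE · dt) free-running steps are processed before the next teacher-forced value within one sequence. The rounding of τ to an integer step count and the exact placement of the teacher-forced steps are our assumptions, not necessarily the implementation of Monfared et al. (2021).

```python
import math

def stf_mask(seq_len: int, lle: float, dt: float) -> list:
    """Boolean mask over a training sequence where True marks a
    teacher-forced step: tau = ln(2) / (lle * dt) free-running steps
    are processed between consecutive teacher-forced inputs."""
    tau = max(1, round(math.log(2) / (lle * dt)))
    return [(t % (tau + 1)) == tau for t in range(seq_len)]

# Example with the Lorenz system (LLE = 0.905) and a hypothetical dt = 0.05.
print(stf_mask(20, lle=0.905, dt=0.05))
```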
For further investigation of CL on non-chaotic systems, and to enrich our experiments, we conduct additional experiments that include the application of the baseline strategies TF and FR together with CL-ITF-P on a periodic system and a measured real-world dataset. We use CL-ITF-P since it provides the most consistent relative improvements in the *essential* experiments. As a periodic system, we study the Thomas attractor (Thomas, 1999) with parameter b = 0.32899, which ensures a periodic behavior. Extending our evaluation to empirical data, we selected a time series used in the Santa Fe Institute competition (Weigend & Gershenfeld, 1993)².

Table VI: Comparing baseline strategies and CL-ITF-P on the periodic Thomas and the measured Santa Fe laser dataset

| System | Strategy | Ł | Epochs | NRMSE ↓ | Rel. impr. ↑ |
|---|---|---|---|---|---|
| Per. Thomas | FR | - | 542 | 0.00057 | - |
| | TF | - | 542 | 0.00107 | - |
| | CL-ITF-P | 8 000 | 794 | 0.00033 | 42.11% |
| | CL-ITF-D | 125 | 326 | 0.00045 | 21.05% |
| Santa Fe | FR | - | 500 | 0.02170 | - |
| | TF | - | 22 | 0.04793 | - |
| | CL-ITF-P | 32 000 | 469 | 0.02042 | 5.90% |
| | CL-ITF-D | 4 000 | 536 | 0.02232 | −2.86% |

The results in Table VI support our assumption that CL-ITF-x strategies are not only applicable to chaotic data originating from known dynamical systems, but also to dynamical systems with periodic behaviour, achieving relative improvements of 21.05–42.11%. Regarding the Santa Fe dataset, we observe less impact by our strategies, with only an improvement of 5.90% for CL-ITF-P and a deterioration of 2.86% for CL-ITF-D on the empirical real-world data.

We also conducted experiments for other RNN architectures, i.e., a vanilla RNN and an LSTM, in the same encoder-decoder setup as applied in the previous experiments with the GRU architecture. In Tables VII and VIII, we compare the NRMSE and relative improvement of these architectures on the four chaotic datasets from the *exploratory* experiments (cp. Tab. III). We compare TF, FR, and the previously best performing training strategies CL-ITF-P and CL-ITF-D (cp. Tab. IV). Except for one case, we observe that both CL strategies outperform the respective best performing baseline strategy on the vanilla RNN as well as the LSTM architecture. The only exception is a vanilla RNN trained to forecast the Lorenz'96 system using CL-ITF-P. This setup suffers a performance decrease of 83.45%, while the NRMSE in all other experiments improves by 37.83–75.28% for the vanilla RNN and 26.04–69.75% for the LSTM, respectively, when using the CL strategies.

## 6 Discussion

Baseline teaching strategies (RQ1). Considering the baseline teaching strategies FR and TF, we observe that per dataset one of the strategies performs substantially better than the other. We also observe that for the upper, based on their LLE less chaotic, datasets in Table IV, FR performs better, while for the lower, more chaotic, datasets, TF yields the better performing model. However, a larger study with more datasets would be required to justify this claim. Our takeaway is that neither of the methods can be universally recommended, again motivating curriculum learning strategies.

Curriculum learning strategies (RQ2).
Among the curriculum learning strategies, we observe that blending FR with a constant ratio of TF, i.e., CL-CTF-P, almost consistently yields worse results than the best performing baseline strategy, and we therefore consider this strategy not relevant. The decreasing curricula CL-DTF-x, which start the training with a high degree of teacher forcing and then incrementally reduce it to pure FR training, partly perform better than the CL-CTF-P strategy and for a few datasets even substantially better than the baseline. However, it was not foreseeable when this would be the case, making their application unsuitable for new datasets without a lot of experimentation and tuning. This finding is especially interesting since these strategies are conceptually similar to the scheduled sampling approach proposed for NLP tasks, thereby underlining the difference between NLP and dynamical system forecasting. We also proposed and studied increasing curricula CL-ITF-x that start the training with no or a low degree of TF, which is then incrementally increased over the course of the training. We observe that these strategies consistently outperform not only the baseline strategies but all other curriculum learning strategies as well.

²https://github.com/tailhq/DynaML/blob/master/data/santafelaser.csv

| System | Strategy | Ł | Epochs | NRMSE ↓ | Rel. impr. ↑ |
|---|---|---|---|---|---|
| Thomas | FR | - | 51 | 0.48117 | - |
| | TF | - | 249 | 0.41274 | - |
| | CL-ITF-P | 1 000 | 266 | 0.17955 | 56.50% |
| | CL-ITF-D | 250 | 183 | 0.25659 | 37.83% |
| Rössler | FR | - | 2 292 | 0.00747 | - |
| | TF | - | 1 | 0.50174 | - |
| | CL-ITF-P | 500 | 2 063 | 0.00283 | 62.12% |
| | CL-ITF-D | 1 000∗ | 3 075∗ | 0.00319 | 57.30% |
| Lorenz | FR | - | 506 | 0.11389 | - |
| | TF | - | 572 | 0.00913 | - |
| | CL-ITF-P | 1 000 | 746 | 0.00603 | 33.95% |
| | CL-ITF-D | 125 | 782 | 0.00378 | 58.60% |
| Lorenz'96 | FR | - | 1 710 | 0.31002 | - |
| | TF | - | 637 | 0.57870 | - |
| | CL-ITF-P | 1 000 | 505 | 0.56872 | −83.45% |
| | CL-ITF-D | 1 000 | 6 573 | 0.07663 | 75.28% |

Table VII: Forecasting performance of the vanilla RNN on the different chaotic datasets

| System | Strategy | Ł | Epochs | NRMSE ↓ | Rel. impr. ↑ |
|---|---|---|---|---|---|
| Thomas | FR | - | 45 | 0.43698 | - |
| | TF | - | 818 | 0.05265 | - |
| | CL-ITF-P | 250 | 758 | 0.01892 | 64.06% |
| | CL-ITF-D | 125 | 859 | 0.01181 | 77.57% |
| Rössler | FR | - | 2 417 | 0.00210 | - |
| | TF | - | 1 650 | 0.00139 | - |
| | CL-ITF-P | 1 000 | 3 426 | 0.00063 | 54.68% |
| | CL-ITF-D | 250 | 2 367 | 0.00085 | 38.85% |
| Lorenz | FR | - | 1 154 | 0.06526 | - |
| | TF | - | 398 | 0.00169 | - |
| | CL-ITF-P | 250 | 806 | 0.00075 | 55.62% |
| | CL-ITF-D | 31 | 419 | 0.00125 | 26.04% |
| Lorenz'96 | FR | - | 4 019 | 0.13010 | - |
| | TF | - | 3 721 | 0.22757 | - |
| | CL-ITF-P | 500 | 9 164 | 0.03935 | 69.75% |
| | CL-ITF-D | 62 | 4 509 | 0.06855 | 47.31% |

Table VIII: Forecasting performance of the LSTM on the different chaotic datasets.

Training length (RQ3). Choosing an improper teaching strategy can result in an early convergence at a high level of generalization error, e.g., the TF strategy for Mackey-Glass, Thomas, and Rössler. Models that yield better performance typically train for more iterations (cp. Tab. III and IV). However, a longer training may not necessarily yield a better performance, e.g., FR vs. TF for Lorenz.
When considering the best performing CL-ITF-x strategies compared to the best performing baseline strategy, we observe moderately increased training iterations for some datasets, i.e., Thomas, Rössler, Hyper-Rössler, and Lorenz, but also decreased training iterations for other datasets, i.e., Mackey-Glass and Lorenz'96. To better understand whether the longer training is the true reason for the better performing CL-ITF-x models, we compared their performance when only trained for as many iterations as the baseline model and still observe superior performance over the baseline model. In conclusion, we observe that the CL-ITF-x strategies facilitate a robust training, reaching a better generalizing model in a comparable training time.

Prediction stability (RQ4). We evaluated the generalization as NRMSE for all models trained with the different training strategies per dataset while forecasting a dataset-specific horizon of 1 LT. However, this metric reflects only an average. When we strive for higher model performance on a multi-step prediction task, we often aim for a longer prediction horizon at an acceptable error. To compare prediction stability, we report the NRMSE metric separately, computed solely on the last 10% of the 1 LT horizon, and additionally, we computed how many LTs per dataset can be predicted before falling below an R² of 0.9. We found that the CL-ITF-x strategies yielded the lowest NRMSE of the last 10% of predicted values across all datasets and, even more promising, that these strategies facilitated the longest prediction horizon before falling below R² = 0.9. We conclude that the CL-ITF-x strategies train models that are substantially more stable in their long-term forecasting.

Curriculum parametrization (RQ5). Initially, we evaluated curriculum learning strategies with a variety of different transition functions and individual start and end teacher forcing ratios (cp. *exploratory* experiments). In these experiments, we observed high prediction performance of the CL-ITF-x strategies but with a diverse, dataset-specific best performing curriculum, meaning that the application of these strategies to new datasets would have necessitated an extensive hyper-parameter search. In a second set of *essential* experiments, we therefore explored whether we could identify curricula with less parametrization and a similar performance. We found those by using a linear transition that is solely configured by a single parameter Ł that determines the pace with which the teacher forcing increases or decreases over the course of the training. We found that these curricula were not only comparable to the previous transition functions and their parametrization but performed better for all four datasets that we evaluated in both experimental sets, and they yielded the best performing model across all six datasets in the second experiment.

Iteration scale curriculum (RQ6). Having the CL-ITF-x strategies outperform the CL-DTF-x strategies leads to rethinking the hitherto common intuition of supporting the early phases of training by TF and moving towards FR in the later stages of training. Rather, we hypothesize that this lures the model into regions of only seemingly stable minima, resulting in a premature termination of the training. The difference between the two CL-ITF-x strategies is how the prescribed amount of teacher forcing is distributed across the prediction steps of one training iteration (aka epoch). While the CL-ITF-D strategy distributes them as one cohesive sequence, the CL-ITF-P strategy distributes them randomly across the training sequence.
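The contrast between the two variants can be illustrated with a short sketch. Drawing the teacher-forced steps i.i.d. per step (probabilistic) versus placing them as one cohesive block (deterministic) reflects our reading of the two strategies; in particular, placing the deterministic block at the start of the sequence is an assumption.

```python
import random

def tf_steps_probabilistic(m: int, ratio: float) -> list:
    """CL-ITF-P: each of the m prediction steps is teacher forced
    independently with probability `ratio` (random positions)."""
    return [random.random() < ratio for _ in range(m)]

def tf_steps_deterministic(m: int, ratio: float) -> list:
    """CL-ITF-D: the prescribed share of teacher-forced steps is
    placed as one cohesive sequence (here: at the start)."""
    n_tf = round(ratio * m)
    return [t < n_tf for t in range(m)]

# With ratio 0.25 and m = 8 steps, the deterministic variant forces
# exactly the first two steps, while the probabilistic variant forces
# two steps only in expectation, at random positions.
print(tf_steps_deterministic(8, 0.25))
print(tf_steps_probabilistic(8, 0.25))
```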
We found that in the *essential* experiments with the linear transition, the CL-ITF-P strategy performed overall best for four of the six datasets and would also have been a good choice, with a substantial gain over the best performing baseline training strategy, for the other two datasets. In conclusion, we observe that the CL-ITF-P strategy trains models that yield 16–81% higher performance than a conventional training with FR or TF. Apart from that, the *essential* results do not lead to a clear conclusion on whether to use CL-ITF-P or CL-ITF-D in a given case. The above-mentioned most obvious difference in the distribution of TF steps may, firstly, lead to a more coherent backpropagation in the deterministic variant, but it also results in a different behavior regarding the maximum number of consecutive FR steps (TF-gap) for a given teacher forcing ratio ϵ. Applying the same curriculum function for CL-ITF-P and CL-ITF-D therefore makes the TF-gap decrease much faster in the early training stage for the probabilistic variant. Further, it changes the TF-gap in a logarithmic rather than a linear fashion as for CL-ITF-D. This difference cannot be compensated by parametrizing the curriculum length, demonstrating the need for the two strategies. In addition, this only affects the mean TF-gap produced by CL-ITF-P, which has a variance of (1 − ϵ)/ϵ² due to its geometric distribution. Therefore, the TF-gap also varies a lot in the early training stage.

Limitations of this work. Our observations allow us to draw conclusions regarding appropriate curricula for the training of seq-to-seq RNNs on continuous time series data, where, in our study, this data originates from a possibly unknown dynamical system that may exhibit chaotic behavior. However, we acknowledge that more research is necessary to clarify the currently uncertain points. First, regarding the question of why an increasing curriculum improves the results throughout all studied datasets, leading to the question of what determines a proper curriculum and its parametrization. To answer these questions, a closer look at the weights and the behavior of the model gradients during training, the statistics of the gradient of the processed time series, and the used sampling rate will be required at least. We hypothesize that this will enable us to guide the determination of the curricula's hyper-parameters and potentially allow determining them from the characteristics of a dataset. This also includes a more thorough investigation on empirical real-world data, improving on the early and inconclusive results on the Santa Fe dataset (cp. Tab. VI).

## 7 Conclusions

While training encoder-decoder RNNs to forecast time series data, strategies like teacher forcing or scheduled sampling, the latter originating from NLP tasks, are used to reduce the discrepancy between training and inference mode, i.e., the *exposure bias*. We ran an extensive series of experiments using eight chaotic dynamical systems as benchmarks and observed that neither of those is well suited to consistently yield well-performing models not impacted by the *exposure bias* problem. Further, we proposed two novel curriculum learning strategies and found that those yield models that consistently outperform the best performing baseline model by a wide margin of 15–80% NRMSE in a multi-step prediction scenario over one Lyapunov time.
We found that these models are more robust in their predictions, allowing them to forecast longer horizons with a higher performance. We found it sufficient to parametrize the strategy with a single additional parameter adapting the pace of the curriculum.

## References

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In *Advances in Neural Information Processing Systems*, pp. 1171–1179, 2015.

Julien Brajard, Alberto Carrassi, Marc Bocquet, and Laurent Bertino. Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the lorenz 96 model. *Journal of Computational Science*, 44:101171, 2020.

Reggie Brown, Paul Bryant, and Henry DI Abarbanel. Computing the lyapunov spectrum of a dynamical system from an observed time series. *Physical Review A*, 43(6):2787, 1991.

Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *arXiv preprint arXiv:2005.14165*, 2020.

Kathleen Champion, Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Data-driven discovery of coordinates and governing equations. *Proceedings of the National Academy of Sciences*, 116(45):22445–22451, 2019.

Bo Chang, Minmin Chen, Eldad Haber, and Ed H Chi. Antisymmetricrnn: A dynamical system view on recurrent neural networks. *arXiv preprint arXiv:1902.09689*, 2019.

Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. *arXiv preprint arXiv:1409.1259*, 2014.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*, 2014.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Jonathan B Dingwell. Lyapunov exponents. *Wiley encyclopedia of biomedical engineering*, 2006.

Qingyun Dou, Yiting Lu, Joshua Efiong, and Mark JF Gales. Attention forcing for sequence-to-sequence model training. *arXiv preprint arXiv:1909.12289*, 2019.

Konstantinos Drossos, Shayan Gharib, Paul Magron, and Tuomas Virtanen. Language modelling for sound event detection with teacher forcing and scheduled sampling. *arXiv preprint arXiv:1907.08506*, 2019.

N Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, and Michael W Mahoney. Lipschitz recurrent neural networks. *arXiv preprint arXiv:2006.12070*, 2020.

William Gilpin. Chaos as an interpretable benchmark for forecasting and data-driven modelling. *arXiv preprint arXiv:2110.05266*, 2021.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in neural information processing systems*, 27:2672–2680, 2014.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016.

Haohan Guo, Frank K Soong, Lei He, and Lei Xie. A new gan-based end-to-end tts training algorithm. *arXiv preprint arXiv:1904.04775*, 2019.

Michael D Hartl. Lyapunov exponents in constrained and unconstrained ordinary differential equations. *arXiv preprint physics/0303077*, 2003.

Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James Glass.
Quantifying exposure bias for neural language generation. *arXiv preprint arXiv:1905.10617*, 2019.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997.

Anil Kag and Venkatesh Saligrama. Time adaptive recurrent neural network. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15149–15158, 2021.

Anil Kag, Ziming Zhang, and Venkatesh Saligrama. Rnns incrementally evolving on an equilibrium manifold: A panacea for vanishing and exploding gradients? In *International Conference on Learning Representations*, 2019.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations*, 2015.

Georgia Koppe, Hazem Toutounji, Peter Kirsch, Stefanie Lis, and Daniel Durstewitz. Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fmri. *PLoS computational biology*, 15(8):e1007263, 2019.

Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks. In *Advances in neural information processing systems*, pp. 4601–4609, 2016.

Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. *arXiv preprint arXiv:2010.08895*, 2020.

Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Markov neural operators for learning chaotic systems. *arXiv preprint arXiv:2106.06898*, 2021.

Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, and Haizhou Li. Teacher-student training for robust tacotron-based tts. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 6274–6278. IEEE, 2020.

Edward N Lorenz. Deterministic nonperiodic flow. *Journal of the atmospheric sciences*, 20(2):130–141, 1963.

Edward N Lorenz. Predictability: A problem partly solved. In *Proc. Seminar on predictability*, volume 1, 1996.

Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. *Nature communications*, 9(1):1–10, 2018.

Michael C Mackey and Leon Glass. Oscillation and chaos in physiological control systems. *Science*, 197(4300):287–289, 1977.

Chenfeng Miao, Shuang Liang, Minchuan Chen, Jun Ma, Shaojun Wang, and Jing Xiao. Flow-tts: A non-autoregressive network for text to speech based on flow. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 7209–7213. IEEE, 2020.

Tsvetomila Mihaylova and André FT Martins. Scheduled sampling for transformers. *arXiv preprint arXiv:1906.07651*, 2019.

Zahra Monfared, Jonas M Mikhaeil, and Daniel Durstewitz. How to train rnns on chaotic data? *arXiv preprint arXiv:2110.07238*, 2021.

Garrett Nicolai and Miikka Silfverberg. Noise isn't always negative: Countering exposure bias in sequence-to-sequence inflection models. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 2837–2846, 2020.

Jakub Nowak, Ahmet Taspinar, and Rafał Scherer. Lstm recurrent neural networks for short text and sentiment classification. In *International Conference on Artificial Intelligence and Soft Computing*, pp. 553–562. Springer, 2017.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation.
In *Proceedings of the 40th annual meeting of the Association for Computational Linguistics*, pp. 311–318, 2002.

Jaideep Pathak, Zhixin Lu, Brian R Hunt, Michelle Girvan, and Edward Ott. Using machine learning to replicate chaotic attractors and calculate lyapunov exponents from data. *Chaos: An Interdisciplinary Journal of Nonlinear Science*, 27(12):121102, 2017.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019.

Krishnan Radhakrishnan and Alan C Hindmarsh. Description and use of lsode, the livermore solver for ordinary differential equations. 1993.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. *arXiv preprint arXiv:1511.06732*, 2015.

Otto E Rössler. An equation for hyperchaos. *Physics Letters A*, 71(2-3):155–157, 1979.

Otto E Rössler. An equation for continuous chaos. *Physics Letters A*, 57(5):397–398, 1976.

T Konstantin Rusch and Siddhartha Mishra. Coupled oscillatory recurrent neural network (cornn): An accurate and (gradient) stable architecture for learning long time dependencies. *arXiv preprint arXiv:2010.00951*, 2020.

T Konstantin Rusch, Siddhartha Mishra, N Benjamin Erichson, and Michael W Mahoney. Long expressive memory for sequence modeling. *arXiv preprint arXiv:2110.04744*, 2021.

Marco Sandri. Numerical calculation of lyapunov exponents. *Mathematica Journal*, 6(3):78–84, 1996.

Matteo Sangiorgio and Fabio Dercole. Robustness of lstm neural networks for multi-step forecasting of chaotic time series. *Chaos, Solitons & Fractals*, 139:110045, 2020.

Masaki Sano and Yasuji Sawada. Measurement of the lyapunov spectrum from a chaotic time series. *Physical review letters*, 55(10):1082, 1985.

Dominik Schmidt, Georgia Koppe, Zahra Monfared, Max Beutelspacher, and Daniel Durstewitz. Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies. *arXiv preprint arXiv:1910.03471*, 2019.

Florian Schmidt. Generalization in generation: A closer look at exposure bias. *arXiv preprint arXiv:1910.00292*, 2019.

Ljubisa Sehovac and Katarina Grolinger. Deep learning for load forecasting: Sequence to sequence recurrent neural networks with attention. *IEEE Access*, 8:36411–36426, 2020.

Ljubisa Sehovac, Cornelius Nesen, and Katarina Grolinger. Forecasting building energy consumption with deep learning: A sequence to sequence approach. In *2019 IEEE International Congress on Internet of Things (ICIOT)*, pp. 108–116. IEEE, 2019.

Lawrence F Shampine and Skip Thompson. Solving ddes in matlab. *Applied Numerical Mathematics*, 37(4):441–458, 2001.

Guorui Shen, Jürgen Kurths, and Ye Yuan. Sequence-to-sequence prediction of spatiotemporal systems. *Chaos: An Interdisciplinary Journal of Nonlinear Science*, 30(2):023102, 2020.

Julien Clinton Sprott and Konstantinos E Chlouverakis. Labyrinth chaos. *International Journal of Bifurcation and Chaos*, 17(06):2097–2108, 2007.

Rohan Thavarajah, Xiang Zhai, Zheren Ma, and David Castineira. Fast modeling and understanding fluid dynamics systems with encoder–decoder networks. *Machine Learning: Science and Technology*, 2(2):025022, 2021.

René Thomas. Deterministic chaos seen in terms of feedback circuits: Analysis, synthesis, "labyrinth chaos". *International Journal of Bifurcation and Chaos*, 9(10):1889–1905, 1999.

Pantelis R Vlachas, Wonmin Byeon, Zhong Y Wan, Themistoklis P Sapsis, and Petros Koumoutsakos.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. *Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences*, 474(2213):20170844, 2018.

Rui Wang, Eugenia Kalnay, and Balakumar Balachandran. Neural machine-based forecasting of chaotic dynamics. *Nonlinear Dynamics*, 98(4):2903–2917, 2019.

Andreas S Weigend and Neil A Gershenfeld. Results of the time series prediction competition at the santa fe institute. In *IEEE international conference on neural networks*, pp. 1786–1793. IEEE, 1993.

Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. *Neural computation*, 1(2):270–280, 1989.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32, 2019.

Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Schütze. Comparative study of cnn and rnn for natural language processing. *arXiv preprint arXiv:1702.01923*, 2017.

## A Appendix

## A.1 Training and Validation Loss Curves

![25_image_0.png](25_image_0.png)

Figure 9: Training and validation loss for Mackey-Glass

![26_image_0.png](26_image_0.png)

Figure 10: Training and validation loss for Thomas

![27_image_0.png](27_image_0.png)

Figure 11: Training and validation loss for Rössler

![28_image_0.png](28_image_0.png)

Figure 12: Training and validation loss for Hyper-Rössler

![29_image_0.png](29_image_0.png)

Figure 13: Training and validation loss for Lorenz

![30_image_0.png](30_image_0.png)

Figure 14: Training and validation loss for Lorenz'96
Review 1: Summary: This paper studies the risk of exposure bias in training sequential models with teacher-forcing vs free-running, i.e., (a) Teacher-forcing (TF): Each timestep uses the correct input from the teacher (b) Free-running (FR): Each timestep uses the model prediction from the previous step as the input. Teacher-forcing leads to faster training convergence but risks exposure bias: since the teacher always provides the correct inputs during the training stage, the model is not exposed to its own predictions as inputs from previous steps. Since there is no teacher available during the inference stage, the model risks exposure bias. Free-running removes the exposure bias during the inference stage as the model has been trained with its predictions from the past (and this is in line with the test time setup, as the teacher is not available during the inference stage). This paper's proposal is to provide a set of curricula that decide what fraction of the input timesteps uses the teacher inputs and what fraction uses the model predictions from previous stages. In contrast to previous works, this paper proposes curricula that switch between TF and FR on two different scales: (i) depending on the training iteration (e.g., start with TF in the initial iterations and then decay to use FR) and (ii) within a training iteration (e.g., initial timesteps use TF while later timesteps use FR). The paper evaluates the proposed curriculum learning strategies on various chaotic dynamical systems. Strengths and Weaknesses: Strengths: ----------- 1) Simplicity of the proposed curriculum, especially the probabilistic strategies that allow switching between teacher forcing and free running. 2) Extensive empirical evaluation on different chaotic dynamical systems. Weaknesses: ----------- 1) Lack of real-world datasets for experimental evaluation. Simulations based on chaotic dynamical systems indeed provided insights into the exposure bias problem. 2) Missing evaluations of the existing baselines mentioned in related works, such as teacher-student training, attention forcing, etc. It's unclear whether these learning methods fail to overcome the exposure bias in chaotic dynamical systems, since the proposed curriculum does not rely on any properties of the dynamical system. 3) Even the closely related "scheduled sampling" (by Bengio et al. 2015) is missing from the evaluation. Although the paper discusses such related work, it does not distinguish itself from these strategies. Without any empirical evaluation it's unclear how to judge the novelty of the proposed curriculum, since scheduled sampling also starts off with TF and eventually settles on FR during the RNN training. Requested Changes: Questions for Authors: ------------ 1) Since the proposed curriculum strategies are general and do not rely on any properties of the underlying dynamical system, did the authors try these strategies on problems other than chaotic dynamical systems? 2) In some cases, the deterministic strategies seem to outperform the probabilistic ones. Is this an artifact of the way the training/test data was sampled from the dynamical system, or is there any other fundamental difference between these systems and the ones where the probabilistic ones come out on top? 3) For fair evaluation as well as distinguishing the proposed work from related works, include baselines such as (i) "scheduled sampling", (ii) teacher-student training, (iii) attention forcing.
4) It would greatly increase the impact of the proposal if the authors included experiments on real-world datasets, which may or may not come from chaotic dynamics, since the proposed curriculum is generic and should be broadly applicable. Missing Related Works: --------- Many RNNs that are based on dynamical systems are missing, and maybe these do not have such an exposure bias issue. For example: - Antisymmetric RNN : https://openreview.net/forum?id=ryxepo0cFX - Incremental RNN : https://openreview.net/forum?id=HylpqA4FwS - CoRNN : https://arxiv.org/pdf/2010.00951.pdf - Time Adaptive RNNs: https://openaccess.thecvf.com/content/CVPR2021/papers/Kag_Time_Adaptive_Recurrent_Neural_Network_CVPR_2021_paper.pdf Nit-Picks: -------- Page-7 top para: "(cp. blue and orange line in the figure)" There is no orange line in the figure. In the main results tables (II, III, IV), it would help the reader if you indicated which system is more chaotic, as mentioned in the Discussion section (Baseline teaching strategies). Broader Impact Concerns: NA ================================================== Review 2: Summary: The authors compare different teacher forcing training schemes on time series from chaotic systems, which exhibit different degrees of chaos as assessed by the Lyapunov spectrum. The experiments are based on a GRU encoder-decoder architecture. Prediction errors for different ahead-prediction time steps, either immediate forecasts or relative to the Lyapunov time, are used for evaluation. Different training protocols seem optimal in different situations, but generally 1) adaptive curriculum schemes appear to work better than fixed 'free running' or teacher forcing schemes, 2) curriculum schemes with forcing increasing over time come out best most often. Strengths and Weaknesses: Strengths: The study provides a systematic comparison of different training schedules, suggesting that curriculum schemes with increasing forcing may be beneficial most commonly. The study also puts a new emphasis on finding optimal training schemes (as has been advocated recently by some authors). Weaknesses: First, the study does not really provide any novel methodological or conceptual developments. The types of training protocols explored have, in essence, more or less been around before. This in itself might not be a major weakness, but in my mind it raises the bar for other contributions of the paper. Knowing that certain training schemes could strongly improve performance on challenging tasks may still be very helpful for the community, but in order to draw any stronger conclusions here 1) in my mind the results remain too heterogeneous and limited in scope. For instance, solely GRUs were tested, not any of the more recent and state-of-the-art RNNs like Lipschitz RNN, coRNN, regularized PLRNN, antisymmetric RNN, LEM, Neural ODE etc. If one wants to take home anything of broader relevance and applicability from this study, I think a variety of RNNs needs to be tested, including state-of-the-art ones, not just older designs, to see if the results are more generic. 2) at least some theoretical analysis or insight is required. As it stands, the study remains purely exploratory and offers no theoretical guidance on how the examined curriculum schemes may work or why and when one or another scheme may be beneficial. Loss curves and gradients could be analyzed in more detail; some derivations on how the training schemes will affect the learning process seem feasible.
Intuitively, one would expect that strong forcing is needed initially, when the RNN is still far from a good solution, and should be relaxed later on as the RNN is trained to capture longer and longer horizons. Why is, apparently, the opposite the case? This begs for more theoretical insight. 3) a better fit between the network architecture used and the specific problem addressed is necessary in my mind. The authors basically copied a design developed for sequence-to-sequence tasks in NLP and used it for forecasting chaotic dynamical systems. But there is a huge (not at all covered) literature on machine learning for dynamical systems identification and prediction which deals at length with these problems (to give just a few pointers: https://arxiv.org/abs/1904.02107, https://arxiv.org/abs/1802.07486, https://arxiv.org/abs/1710.07313, https://arxiv.org/abs/2110.07238, https://arxiv.org/abs/1712.09707, https://arxiv.org/abs/1910.03471, https://arxiv.org/abs/2010.08895, https://arxiv.org/abs/2106.06898, https://arxiv.org/abs/2110.05266). In my mind this would be the proper set of references and benchmark models for testing improvements on predicting chaotic systems (of note, some of these also explicitly discuss teacher forcing). Minor technical note: A positive Lyapunov exponent is not a sufficient condition for chaos (see common textbooks on this topic; e.g., the behavior must also be aperiodic in the asymptotic limit). Requested Changes: Addressing points 1-3 above I think is critical. Specifically: - A wider range of architectures (1), including SOTA architectures and those that have been specifically designed with chaotic dynamical systems in mind (3), should be tested. - More theoretical analysis of why and when which curriculum scheme works best should be provided. Broader Impact Concerns: None. ================================================== Review 3: Summary: Remark: I'm not familiar with the time series domain. All my comments are based on my expertise in natural language processing and deep learning. It is possible that my comments are biased. This article studies the exposure bias in chaotic time series forecasting. Exposure bias, which commonly arises in sequence prediction for many NLP tasks, has been widely explored in the literature. Scheduled sampling (Bengio et al., 2015), which is one of the most effective solutions to mitigate this issue, has been adopted by the authors. Their main contribution is twofold: 1) demonstrate that exposure bias is also a big issue in chaotic time series; 2) provide extensive empirical experiments with these training strategies on chaotic time series. Strengths and Weaknesses: Strengths: 1. demonstrate exposure bias is the issue in chaotic time series. 2. provide extensive experiments to study different training strategies, showing that the probabilistic iteration scale curriculum works better. Weaknesses: 1. Limited novelty. These training strategies are highly similar to ones from scheduled sampling (Bengio et al., 2015). 2. Introduces an additional free hyper-parameter that needs to be tuned. Requested Changes: The proposed training strategies require more training steps or take longer to converge. It may be more informative to provide training curves and more analysis. In machine translation, if we increase the size of the training set, the gap between scheduled sampling and teacher forcing becomes smaller. It would be interesting to see how the proposed curricula behave with different training set sizes. Broader Impact Concerns: No.
================================================== Metareview: Recommendation: Reject Comment: This paper examines various teacher forcing training schemes on time series from chaotic systems, finding that different curricula are optimal on different tasks, but that increasing the forcing over time often performs best. The reviewers offered split opinions, with some appreciating the extensive empirical evaluations and systematic analysis, and others questioning the novelty of the proposed forcing schedules and the robustness and strength of the conclusions. Overall and in light of these somewhat conflicting perspectives, this is a borderline paper. One concern that surfaced in the review process was a lack of clear and convincing evidence supporting the main claims. In particular, the paper does not convey clear conclusions about when particular protocols are supposed to work better than others, and the robustness of the results may not be sufficiently well established. In order to more fully meet the standards of TMLR, the paper should reduce the scope of its claims to more accurately reflect the scope and results of the empirical evaluations. These changes would include but not be limited to highlighting nuances/caveats and the restricted scope of the evaluations. Additionally, further effort should be devoted to framing the high-level conclusions to enhance the clarity of argument and presentation. As these modifications are not necessarily minor, the manuscript cannot be accepted in its current form. The authors are encouraged to resubmit an improved version for review. ==================================================
# Towards Provable Log Density Policy Gradient

Anonymous authors Paper under double-blind review

## Abstract

Policy gradient methods are a vital ingredient behind the success of modern reinforcement learning. Modern policy gradient methods, although successful, introduce a residual error in gradient estimation. In this work, we argue that this residual term is significant and that correcting for it could potentially improve the sample-complexity of reinforcement learning methods. To that end, we propose the log density gradient to estimate the policy gradient, which corrects for this residual error term. The log density gradient method computes the policy gradient by utilising the state-action discounted distributional formulation. We first present the equations needed to exactly find the log density gradient for tabular Markov Decision Processes (MDPs). For more complex environments, we propose a temporal difference (TD) method that approximates the log density gradient by utilizing backward on-policy samples. Since backward sampling from a Markov chain is highly restrictive, we also propose a min-max optimization that can approximate the log density gradient using just on-policy samples. We also prove uniqueness, and convergence under linear function approximation, for this min-max optimization. Finally, we show the sample complexity of our min-max optimization to be of the order of $m^{-1/2}$, where m is the number of on-policy samples. We also demonstrate a proof-of-concept for our log density gradient method on a gridworld environment, and observe that our method is able to improve upon the classical policy gradient method by a clear margin, thus indicating a promising novel direction for developing reinforcement learning algorithms that require fewer samples.

## 1 Introduction

Policy gradient (PG) methods are a vital ingredient behind the success of modern reinforcement learning (Silver et al., 2017; John Schulman et al., 2023; Haarnoja et al.; Kakade, 2001). The success of PG methods stems from their simplicity and compatibility with neural network-based function approximations (Sutton et al., 1999; Baxter & Bartlett, 2001). Although modern policy gradient methods like PPO and TRPO have achieved excellent results in various on-policy tasks (Schulman et al., 2017; 2015), they require extensive hyper-parameter tuning. Additionally, it has been shown by Ilyas et al. (2020) that the estimation error between the policy gradient estimated by methods like PPO and the true policy gradient increases significantly during the training process. Classical policy gradient methods typically approximate the gradient of the policy using a Q-function estimated with a discount factor strictly less than 1, which leads to an error in gradient estimation (Morimura et al., 2010). In this paper, we empirically demonstrate that this error is indeed significant (see Figure 1). We further propose a novel algorithm to estimate the policy gradient that corrects for this residual error, which could potentially lead to more sample-efficient reinforcement learning, thus enabling deployment over a wide variety of complex scenarios. We call our method the log density gradient. We show that the log density gradient method can be used to estimate the policy gradient for all values of the discounting factor – including the *average reward scenario*.
The log density gradient method is based on the average state-action stationary distribution formulation of reinforcement learning, which allows for the estimation of the policy gradient as a multiplication of the gradient of the log density and the reward function (Nachum et al., 2019; Uehara et al.). This separation results in an improved correlation with the true policy gradient and requires fewer hyperparameters. We show that our method is consistent with the classical policy gradient theorem (Sutton, 1988) and also prove convergence properties and sample complexity.

![1_image_0.png](1_image_0.png)

Figure 1: For the average reward scenario, performance of the classical policy gradient (blue) algorithm as compared to the log density gradient (green) algorithm over an n × n gridworld environment, for n = 5, 10. We observe that the log density gradient algorithm consistently converges to a better policy performance. Theoretically calculated solutions are used for implementation.

Our main contributions are as follows.

1. A novel method to *provably* calculate the policy gradient by using the average state-action discounted formulation for all values of the discounting factor. We show that the policy gradient estimated in this manner for the average reward scenario corrects for the residual error in policy gradient estimation, which is widely ignored in empirical implementations of policy gradients (as shown in Figure 1).

2. A model-free Temporal Difference (TD) method for approximating the policy gradient. We provide proof of contraction as well as convergence. However, there is a major drawback in that it requires samples from the backward Markov chain (described in detail in the paper), which motivates the next contribution.

3. A min-max optimization which yields the gradient of the log density for all values of the discounting factor, including the average reward scenario, and a model-free TD method to implement it, with proof of convergence. We also show that this min-max optimization has a closed-form solution under linear function class assumptions, thus enabling its practical use with linear MDP problems (Zhang et al., 2022). We additionally show a sample complexity of the order $O(m^{-1/2})$ for the projected version of the proposed TD method, where m is the number of on-policy samples. Our method is competitive with the sample complexity of classical vanilla policy gradient methods (Yuan et al., 2022).

Section 2 starts with the problem formulation and motivation behind this paper. Section 3 discusses prior work in policy gradient methods, temporal difference methods, and min-max problems in off-policy evaluation, and compares our work with existing works to situate our paper in the literature. Our main contributions are discussed in detail starting from Section 4, which begins by rigorously defining the log density gradient. Additionally, we propose a TD approach to estimate the log density gradient under strict reversibility assumptions, and we describe the issue caused by this assumption. In Section 5, to overcome this issue, we propose a min-max variant that allows us to estimate the log density gradient using empirical samples. We finally demonstrate a proof-of-concept of our algorithm in Section 6, which shows that the log density gradient can potentially be more sample efficient than classical policy gradient methods.

## 2 Background And Motivation

Notation: we let (·)^T denote matrix transpose, and let e represent the vector of ones, the size of which will be clear from context.
We define a Markov Decision Process (MDP) as a 6-tuple (S, A, P, r, γ, d0). Here, S is a finite state space, A is a finite action space, P is the transition probability matrix, r is the reward function, γ is the discounting factor, and d0 is the initial distribution. The reinforcement learning problem is to optimize for a policy π : S → ∆(A) that maximizes Jγ(π), defined as

$$J_{\gamma}(\pi):=(1-\gamma)\mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\,\Big|\,s_{0}\sim d_{0},a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\Big],\quad\text{for}\ \gamma\in[0,1),$$

$$J_{1}(\pi):=\lim_{\mathcal{T}\to\infty}\mathbb{E}\Big[\frac{1}{\mathcal{T}}\sum_{t=0}^{\mathcal{T}}r(s_{t},a_{t})\,\Big|\,s_{0}\sim d_{0},a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\Big],\quad\text{for}\ \gamma=1,$$

where γ ∈ [0, 1] is the discounting factor, which accounts for the impact of future rewards on present decision making. When γ = 1, i.e., for J1(π), the scenario is called the average reward formulation. Most practical problems in reinforcement learning typically aim to solve for an optimal policy π∗ = arg maxπ J1(π) (see Figure 1 in Haarnoja et al.). Modern reinforcement learning methods parameterise the policy with a set of parameters θ ∈ Rⁿ, where n is the dimension of the parameter space. We refer to such a parameterisation as πθ. This kind of parameterisation enables us to search for an optimal set of parameters θ∗ instead of searching over S × A, which in practice could be very large. We define

$$\theta^{*}:=\arg\max_{\theta\in\mathbb{R}^{n}}J_{1}(\pi_{\theta}).$$

The Q-function $Q_{\gamma}^{\pi_{\theta}}$ is a commonly used function to describe the performance of an RL agent. The Q-function calculates the long-term (discounted) reward accumulated by an agent following a fixed policy πθ while starting from a state s ∈ S and taking an action a ∈ A:

$$Q_{\gamma}^{\pi_{\theta}}(s,a):=\mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\,\Big|\,s_{0}=s,a_{0}=a,a_{t}\sim\pi_{\theta}(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\Big]\tag{1a}$$

$$=r(s,a)+\gamma\,\mathbb{E}_{s^{\prime}\sim\mathcal{P}(\cdot|s,a),a^{\prime}\sim\pi_{\theta}(\cdot|s^{\prime})}[Q_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})],\tag{1b}$$

where equation (1b) is called the Bellman equation. The Bellman equation is popularly used to estimate the Q-function using just empirical data collected on the MDP. Q-function approximation methods typically use γ < 1 for stable estimation of the Q-function. We similarly define the value function $V_{\gamma}^{\pi_{\theta}}(s)=\mathbb{E}_{a\sim\pi_{\theta}(\cdot|s)}[Q_{\gamma}^{\pi_{\theta}}(s,a)]$. Modern RL algorithms generally solve for θ∗ by estimating the gradient of the policy performance Jγ(πθ) with respect to the policy parameters θ. This is commonly referred to as the policy gradient theorem (Sutton et al., 1999), which states

$$\nabla_{\theta}J_{\gamma}(\pi_{\theta})=\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[Q_{\gamma}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)],\quad\gamma\in[0,1].\tag{2}$$
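For a tabular MDP, the Bellman equation (1b) is a linear system in $Q_{\gamma}^{\pi_{\theta}}$, so it can be solved in closed form. The following NumPy sketch is our illustration of this fact and not part of the proposed method; the matrix conventions are stated in the comments.

```python
import numpy as np

def q_function(P, r, pi, gamma):
    """Solve Q = r + gamma * P * Pi * Q (equation (1b)) exactly.
    P:  (S*A, S) transition matrix, row (s, a) -> next-state distribution.
    r:  (S*A,)   reward vector over state-action pairs.
    pi: (S, A)   policy, each row sums to one.
    Returns Q as a vector over state-action pairs; requires gamma < 1."""
    S, A = pi.shape
    # Pi lifts functions of states to state-action pairs, so that
    # (P @ Pi)[(s, a), (s', a')] = P(s' | s, a) * pi(a' | s').
    Pi = np.zeros((S, S * A))
    for s in range(S):
        Pi[s, s * A:(s + 1) * A] = pi[s]
    return np.linalg.solve(np.eye(S * A) - gamma * P @ Pi, r)
```

Note that for γ = 1 the matrix I − γPΠ becomes singular, since PΠ is a stochastic matrix with an eigenvalue of 1; this mirrors the point below that the Bellman equation cannot be used to estimate the Q-function in the average reward scenario.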
$$d_{\gamma}^{\pi_{\theta}}(s,a):=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}(s_{t}=s,a_{t}=a\,|\,s_{0}\sim d_{0},a_{t}\sim\pi_{\theta}(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})),\ \ \text{for}\ \gamma<1,\tag{3a}$$
$$d_{1}^{\pi_{\theta}}(s,a):=\lim_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T}\mathbb{P}(s_{t}=s,a_{t}=a\,|\,s_{0}\sim d_{0},a_{t}\sim\pi_{\theta}(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})),\ \ \text{for}\ \gamma=1.\tag{3b}$$

In this paper we make the standard assumption that the Markov chain induced by the policy πθ is ergodic. In particular, this implies that $d^{\pi_\theta}_\gamma(s,a) > 0$ for all state-action pairs (s, a) (Puterman, 2014).

In scenarios where we are trying to optimize for J1(π), estimating the policy gradient becomes difficult, because the Bellman equation cannot be used to estimate the Q-function for γ = 1. As a compromise, the policy gradient for the average reward scenario is instead approximated by calculating the Q-function for a discounting factor γ < 1, but close to 1, and using that estimate in the policy gradient equation 2:

$$\hat{\nabla}_{\theta}J_{1}(\pi_{\theta})=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[Q_{1}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)]\tag{4a}$$
$$\approx\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[Q_{\gamma}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)],\quad\gamma<1.\tag{4b}$$

In this paper we argue that the policy gradient calculated in this manner induces a significant residual error, which keeps compounding as reinforcement learning training proceeds, even leading to a sub-optimal solution. The following equation, derived in Proposition 2, characterizes that error:

$$\nabla_{\theta}J_{1}(\pi_{\theta})=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[Q_{\gamma}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)]+\underbrace{(1-\gamma)\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[\nabla_{\theta}\log d_{1}^{\pi_{\theta}}(s)\cdot V_{\gamma}^{\pi_{\theta}}(s)]}_{\text{Residual Error}}\tag{5}$$

In this paper, we prove that this residual error is significant. What is more, we also propose another method to exactly obtain the policy gradient for all values of the discounting factor, including γ = 1, which we call the log density gradient. Our estimation of the log density gradient utilises the average state-action discounted distributional formulation of a reinforcement learning problem, which re-states Jγ(π) as an expectation under $d^{\pi_\theta}_\gamma$ (Nachum et al., 2019; Uehara et al.):

$$J_{\gamma}(\pi)=\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[r(s,a)].$$

Under this formulation, the policy gradient can similarly be obtained using the log derivative trick as follows:

$$\nabla_{\theta}J_{\gamma}(\pi)=\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s,a)\cdot r(s,a)].\tag{6}$$

We refer to $\nabla_\theta \log d^{\pi_\theta}_\gamma$ as the log density gradient. A key advantage of the log density gradient is that it allows us to approximate the policy gradient for average reward scenarios in a provable manner. In this work, we show that the log density gradient can be approximated even under average reward scenarios (γ = 1). To that end, we first propose a model-based approach to approximate the log density gradient for tabular scenarios. Under reversibility assumptions we then show that this log density gradient can also be approximated using a TD method.
Since the reversibility assumption is highly restrictive, we finally propose a min-max version of the log density gradient which allows us to approximate $\nabla_\theta \log d^{\pi_\theta}_\gamma$ using empirical samples. Our experiments further show that this manner of policy gradient estimation can potentially make reinforcement learning sample efficient, thus helping it scale.

## 3 Survey Of Related Work And Comparison

In this section we discuss existing studies in policy gradient methods, including the framework of the log density gradient first introduced by Morimura et al. (2010). We also briefly discuss density ratio learning methods, which have been very popular in off-policy evaluation. A short discussion of Temporal Difference (TD) learning methods may be found in Appendix 8.1.

## 3.1 Policy Gradient Methods

Literature survey: Policy gradient methods are a widely studied topic in reinforcement learning. One of the earliest works in this area proposed a closed-form solution for evaluating policy gradients called the policy gradient theorem (Sutton et al., 1999). Early implementations of policy gradient methods used episodic estimates to update policy parameters, such as REINFORCE (Williams, 1992) and GPOMDP (Baxter & Bartlett, 2001). Unfortunately, this way of implementing the policy gradient suffers from high variance, thus inhibiting scalability to large problem spaces (Schulman et al., 2016). To address this problem, actor-critic methods approximate the Q-function or advantage function using an additional neural network, which is then used to update the policy (Mnih et al., 2016; Schulman et al., 2016). Furthermore, policy gradient methods have also been designed to be compatible with deterministic policies (Lillicrap et al., 2016; Silver et al., 2014). More recently, trust region methods, such as Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017), have been introduced, which update policies while ensuring monotonic performance improvement. To the best of our knowledge, the log density gradient has only been discussed in Morimura et al. (2010), which proposes a TD method to estimate the log density gradient for average reward scenarios using a reversible backward Markov chain.

Comparison: In our paper, we re-introduce the idea of the log density gradient introduced by Morimura et al. (2010) for estimating the gradient in the average reward scenario. Morimura et al. (2010) was also the first work to identify the residual error in policy gradient approximation (Proposition 2). That work estimates the log density gradient specifically for average reward scenarios (γ = 1) using a TD update, with additional extensions for linear function approximation. Our work not only fixes many technical gaps evident in the theory of the log density gradient as proposed by Morimura et al. (2010), but also builds on it to make the log density gradient practical. We first define the log density gradient (equation 10) over the full range of discounting factors γ ∈ [0, 1], which includes the average reward scenario. Using Lemmas 2 and 3 we then prove the mathematical conditions under which the log density gradient is unique and can be exactly calculated. We further use this relation to propose a TD form of updates (equation 13) for log density gradient estimation for all values of the discounting factor γ ∈ [0, 1], as opposed to only the average reward scenario considered by Morimura et al. (2010).
In Lemma 4 we further prove that these TD updates converge to a unique solution for all values of the discounting factor γ except 1, thus effectively demonstrating that the TD updates proposed by Morimura et al. (2010), which operate at γ = 1, are not guaranteed to converge to the true log density gradient, further limiting their use for large-scale problems. Additionally, to make log density gradient estimation viable for practical problems, we propose a min-max optimization approach (equation 16) that allows us to estimate the log density gradient using empirical samples. We also demonstrate that under linear function approximation settings, this min-max optimization not only has a closed form but also converges to a unique solution (Theorem 1). Under weighted updates, as proposed in Algorithm 1, we also show a bound on the sample complexity of log density gradient estimation of the order $O(1/\sqrt{m})$.

## 3.2 Density Ratio Learning

Literature survey: Off-policy evaluation estimates the performance of a target policy π using an offline dataset generated by a behavior policy µ (Voloshin et al., 2019; Katdare et al.). This is done by estimating the average state-action density ratio $d^\pi_\gamma / d^\mu$, which allows approximation of the target policy's performance. In this work, we are primarily interested in the DICE class of off-policy evaluation algorithms (Zhang et al., 2020b; Nachum et al., 2019; Zhang et al., 2020a). These algorithms typically approximate the divergence (for some f-divergence of their choice) between the two distributions $d^\pi$ and $d^\mu$ in its convex dual form, eliminating the need to obtain samples from $d^\pi$, which results in a min-max optimization problem.

Comparison: Inspired by the DICE class of algorithms, we too propose a min-max form of estimating the log density gradient. We show that such a method of estimating the log density gradient converges to a unique solution under linear function approximation assumptions. We also show that the sample complexity of such an estimator is of the order $O(n^{-1/2})$, with n being the number of on-policy samples.

## 4 Log Density Gradient

In this section, we introduce the log density gradient and further show that the classical policy gradient typically ignores a residual error which may be significant. We then propose a TD version of our log density gradient method that estimates the policy gradient without a need to account for this error.

## 4.1 Model Based Log Density Gradient

The log density gradient method attempts to estimate the gradient of the log of the average state-action discounted stationary distribution $d^{\pi_\theta}_\gamma$. We start by observing that $d^{\pi_\theta}_\gamma$ satisfies an identity called the Bellman flow equation (Liu et al., 2018).

Lemma 1. *The average state-action density distribution satisfies the following identity for all* γ ∈ [0, 1]:

$$d_{\gamma}^{\pi_{\theta}}(s^{\prime})=(1-\gamma)d_{0}(s^{\prime})+\gamma\sum_{s,a}d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\tag{7}$$

We prove this result in Appendix 8.3. Note that this equation is similar to the Bellman equation, but in a backward manner, which means we need samples from the backward conditional distribution (defined rigorously in Section 4.2). It can also be understood as a form of flow conservation, wherein the flow out of state s′ (LHS) equals the flow into s′ (RHS).
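For γ < 1, equation 7 is a linear system in $d^{\pi_\theta}_\gamma$ and can be solved directly once the model is known. Below is a minimal NumPy sketch of this solve on a randomly generated MDP; the setup and all variable names are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.95

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)    # transition model
pi = rng.random((S, A));   pi /= pi.sum(-1, keepdims=True)  # fixed policy
d0 = np.full(S, 1.0 / S)                                    # initial distribution

# With d(s,a) = d(s) pi(a|s), equation 7 reads d = (1-gamma) d0 + gamma (P^pi)^T d,
# which for gamma < 1 has a unique solution given by one linear solve.
Ppi = np.einsum('sa,sap->sp', pi, P)                        # state kernel P^pi
d_state = np.linalg.solve(np.eye(S) - gamma * Ppi.T, (1 - gamma) * d0)
d_sa = d_state[:, None] * pi                                # d(s,a)

# Sanity check: both sides of equation 7 agree, and d sums to one.
rhs = (1 - gamma) * d0 + gamma * np.einsum('sa,sap->p', d_sa, P)
print(np.allclose(d_state, rhs), d_state.sum())             # True 1.0
```

For γ = 1 the system matrix becomes singular, which is exactly why the regularizer in Lemma 2 below is needed to single out a unique solution.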
Lemma 2. *For all* γ ∈ [0, 1] *the solution to the following optimisation is unique and equal to* $d^{\pi_\theta}_\gamma$:

$$\operatorname*{arg\,min}_{w:S\to\mathbb{R}}\sum_{s^{\prime}}\Big(w(s^{\prime})-(1-\gamma)d_{0}(s^{\prime})-\gamma\sum_{s,a}w(s)\pi_{\theta}(a|s)\mathcal{P}(s^{\prime}|s,a)\Big)^{2}+\frac{\lambda}{2}\Big(\sum_{s}w(s)-1\Big)^{2}\tag{8}$$

A detailed proof can be found in Appendix 8.4. Intuitively speaking, the term $\frac{\lambda}{2}(\sum_s w(s)-1)^2$ is redundant for γ < 1; it becomes useful in the average reward scenario, where it ensures uniqueness for γ = 1. Although it is hard to obtain a closed-form solution for $\nabla_\theta \log d^{\pi_\theta}_\gamma$, it is still possible to estimate it numerically. Using equation 7 we can derive an equivalent identity for $\nabla_\theta \log d^{\pi_\theta}_\gamma$:

$$d_{\gamma}^{\pi_{\theta}}(s^{\prime})\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s^{\prime})=\gamma\sum_{s,a}d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s,a).\tag{9}$$

Multiplying both sides by πθ(a′|s′) and recalling that $d^{\pi_\theta}_\gamma(s,a) = d^{\pi_\theta}_\gamma(s)\pi_\theta(a|s)$, we obtain

$$d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})\big(\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})-\nabla_{\theta}\log\pi_{\theta}(a^{\prime}|s^{\prime})\big)=\gamma\sum_{s,a}d_{\gamma}^{\pi_{\theta}}(s,a)\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\pi_{\theta}(a^{\prime}|s^{\prime})\tag{10}$$

Similar to Lemma 2, we can estimate $\nabla_\theta \log d^{\pi_\theta}_\gamma$ by solving equation 10 above. To that end, we propose the following optimization (where λ > 0 is a fixed regularizer):

$$\min_{w:S\times A\to\mathbb{R}^{n}}\ \mathbb{E}_{(s^{\prime},a^{\prime})\sim d_{\gamma}^{\pi_{\theta}}}\|\nu(s^{\prime},a^{\prime})\|^{2}+\frac{\lambda}{2}\Big\|\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[w(s,a)]\Big\|^{2}\tag{11}$$
$$\nu(s^{\prime},a^{\prime}):=d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})\big(w(s^{\prime},a^{\prime})-\nabla_{\theta}\log\pi_{\theta}(a^{\prime}|s^{\prime})\big)-\gamma\sum_{s,a}d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\pi_{\theta}(a^{\prime}|s^{\prime})w(s,a)$$

Lemma 3. *The solution to equation 11 is unique and equal to* $\nabla_\theta \log d^{\pi_\theta}_\gamma$ *for all* γ ∈ [0, 1].

We describe the proof of this lemma in Appendix 8.5. Similar to Lemma 2, the constraint $\frac{\lambda}{2}\|\mathbb{E}_{(s,a)\sim d^{\pi_\theta}_\gamma}[w(s,a)]\|^2$ is only useful for the average reward scenario γ = 1. The proof follows from the fact that the solution to equation 11 can be written in a linear form A · w = b and showing that A is invertible. It is worth reiterating that, once we have an estimate of $\nabla_\theta \log d^{\pi_\theta}_\gamma$, we can use it to approximate the policy gradient using equation 6. We now state two important properties of the log density gradient.

Proposition 1. *The policy gradient expression in equation 6 is exactly equal to the classical policy gradient (Sutton et al., 1999) in equation 2.*

A detailed proof of this proposition can be found in Appendix 8.2. In essence, this means that the log density gradient calculates the same policy gradient, but using a different formulation. We show next that this formulation allows us to correct for the residual error in policy gradient approximation, which is typically ignored in many actor-critic implementations of policy gradient methods.

Proposition 2.
*The following identity, stated in equation 5, is true:*

$$\nabla_{\theta}J_{1}(\pi_{\theta})=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[Q_{\gamma}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)]+\underbrace{(1-\gamma)\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[\nabla_{\theta}\log d_{1}^{\pi_{\theta}}(s)\cdot V_{\gamma}^{\pi_{\theta}}(s)]}_{\text{Residual Error}}$$

Proof. From the definition of the log density gradient in equation 6 we have $\nabla_\theta J_1(\pi_\theta) = \mathbb{E}_{(s,a)\sim d^{\pi_\theta}_1}[\nabla_\theta \log d^{\pi_\theta}_1(s,a)\cdot r(s,a)]$. Let γ < 1; we use the Bellman equation 1b to obtain

$$\nabla_{\theta}J_{1}(\pi_{\theta})=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}\big[\nabla_{\theta}\log d_{1}^{\pi_{\theta}}(s,a)\cdot\big(Q_{\gamma}^{\pi_{\theta}}(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim\mathcal{P}(\cdot|s,a)}[V_{\gamma}^{\pi_{\theta}}(s^{\prime})]\big)\big]\tag{12a}$$
$$=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}\big[\big(\nabla_{\theta}\log d_{1}^{\pi_{\theta}}(s)+\nabla_{\theta}\log\pi_{\theta}(a|s)\big)\cdot\big(Q_{\gamma}^{\pi_{\theta}}(s,a)-\gamma\mathbb{E}_{s^{\prime}\sim\mathcal{P}(\cdot|s,a)}[V_{\gamma}^{\pi_{\theta}}(s^{\prime})]\big)\big]\tag{12b}$$
$$=\mathbb{E}_{(s,a)\sim d_{1}^{\pi_{\theta}}}[Q_{\gamma}^{\pi_{\theta}}(s,a)\cdot\nabla_{\theta}\log\pi_{\theta}(a|s)]+(1-\gamma)\mathbb{E}_{s\sim d_{1}^{\pi_{\theta}}}[\nabla_{\theta}\log d_{1}^{\pi_{\theta}}(s)\cdot V_{\gamma}^{\pi_{\theta}}(s)]\tag{12c}$$

Here, we go from equation 12a to 12b by utilizing $\nabla_\theta \log d^{\pi_\theta}_1(s,a) = \nabla_\theta \log d^{\pi_\theta}_1(s) + \nabla_\theta \log \pi_\theta(a|s)$. We go from 12b to 12c by using the key identity for the log density gradient shown in equation 9. Note that the first term in equation 12c is the practical instantiation of the classical policy gradient theorem, while the second term, $(1-\gamma)\mathbb{E}_{s\sim d^{\pi_\theta}_1}[\nabla_\theta \log d^{\pi_\theta}_1(s)\cdot V^{\pi_\theta}_\gamma(s)]$, is the residual error. This completes the proof.

Actor-critic implementations of policy gradient methods first approximate the Q-function using a discounting factor γ strictly less than 1, and then approximate the policy gradient ∇θJ1(πθ) using equation 4b. This leads to a residual error in the policy gradient approximation, as shown in Proposition 2. We believe that correcting for this error can potentially make policy gradient algorithms sample efficient and scalable to complex problems. Although the solution to equation 11 is unique, solving it requires access to the transition matrix P, which is impractical for complex environments. Additionally, log density gradient estimation also becomes computationally infeasible with the exponential scaling of the state-action space. We thus propose a temporal difference approach to estimate the log density gradient from empirical data next.
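The size of this residual error can be checked numerically. The following self-contained NumPy sketch is a minimal illustration under our own assumptions (a small randomly generated ergodic MDP with a tabular softmax policy; none of the names come from the paper's code): it compares the exact average-reward gradient, obtained by finite differences, with the approximation of equation 4b, and their gap is exactly the residual term of Proposition 2.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9

# Strictly positive transitions make the induced chain ergodic.
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
r = rng.random((S, A))

def pi(theta):                       # tabular softmax policy pi(a|s)
    e = np.exp(theta - theta.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

def d1(theta):                       # stationary state distribution of P^pi
    Ppi = np.einsum('sa,sap->sp', pi(theta), P)
    vals, vecs = np.linalg.eig(Ppi.T)
    d = np.real(vecs[:, np.argmax(np.real(vals))])
    return d / d.sum()

def J1(theta):                       # average reward objective
    return (d1(theta)[:, None] * pi(theta) * r).sum()

def Q(theta):                        # discounted Q-function via Bellman solve
    Psa = np.einsum('sap,pb->sapb', P, pi(theta)).reshape(S * A, S * A)
    q = np.linalg.solve(np.eye(S * A) - gamma * Psa, r.reshape(-1))
    return q.reshape(S, A)

theta = rng.normal(size=(S, A))

# Exact average-reward policy gradient by central finite differences.
g_true = np.zeros_like(theta)
for idx in np.ndindex(*theta.shape):
    e = np.zeros_like(theta); e[idx] = 1e-5
    g_true[idx] = (J1(theta + e) - J1(theta - e)) / 2e-5

# Practical estimate of equation 4b: Q_gamma under d_1, residual term ignored.
# For a softmax policy, grad log pi(a|s) gives d(s) pi(b|s) (Q(s,b) - V(s)).
p, d, q = pi(theta), d1(theta), Q(theta)
v = (p * q).sum(1)                   # V_gamma(s)
g_hat = d[:, None] * p * (q - v[:, None])

print('residual error norm (eq. 5):', np.linalg.norm(g_true - g_hat))
```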
## 4.2 Temporal Difference Log Density Gradient

To derive an update equation for a temporal difference (TD) method, we begin by rearranging a few terms in equation 10 for the log density gradient:

$$\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})=\nabla_{\theta}\log\pi_{\theta}(a^{\prime}|s^{\prime})+\gamma\sum_{s,a}\frac{d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\pi_{\theta}(a^{\prime}|s^{\prime})}{d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})}\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}(s,a)$$

We define the backward distribution of (s, a) given (s′, a′) as

$$\mathcal{P}_{b}(s,a|s^{\prime},a^{\prime}):=\frac{d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\pi_{\theta}(a^{\prime}|s^{\prime})}{d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})}=\frac{d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}^{\pi_{\theta}}(s^{\prime},a^{\prime}|s,a)}{d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})}\,,$$

which is a consequence of Bayes' rule. The summation above therefore becomes an expectation under Pb. The log density gradient is therefore said to follow a backward recursion, and it requires samples from the backward conditional probability Pb to estimate the log density gradient¹. We first generalize the algorithm of Morimura et al. (2010), which handles only γ = 1, to estimate the log density gradient w for all discounting factors γ ∈ [0, 1] in the form of a temporal difference (TD(0)) method, with update equation

$$w(s^{\prime},a^{\prime})\gets w(s^{\prime},a^{\prime})+\alpha[\gamma w(s,a)+g(s^{\prime},a^{\prime})-w(s^{\prime},a^{\prime})]\tag{13}$$

with $(s',a') \sim d^{\pi_\theta}_\gamma$, $(s,a) \sim \mathcal{P}_b(\cdot|s',a')$ and $g(s',a') := \nabla_\theta \log \pi_\theta(a'|s')$. Define the operator Yγ to capture the behaviour of the update rule in equation 13 after taking expectations:

$$(Y_{\gamma}\cdot w)(s^{\prime},a^{\prime}):=\gamma\mathbb{E}_{(s,a)\sim\mathcal{P}_{b}(\cdot|s^{\prime},a^{\prime})}[w(s,a)]+g(s^{\prime},a^{\prime}).$$

We can write this in matrix form as follows:

$$Y_{\gamma}\cdot W=\gamma D_{\pi_{\theta}}^{-1}\mathcal{P}_{\pi_{\theta}}^{\top}D_{\pi_{\theta}}W+G\tag{14}$$

where $W \in \mathbb{R}^{|S|\cdot|A|\times n}$ is the matrix whose rows are w(s, a) for each state-action pair (s, a). Similarly, $G \in \mathbb{R}^{|S|\cdot|A|\times n}$ has rows ∇θ log πθ(a|s) for each state-action pair. Let $\mathcal{P}_{\pi_\theta}, D_{\pi_\theta} \in \mathbb{R}^{|S|\cdot|A|\times|S|\cdot|A|}$, where $(\mathcal{P}_{\pi_\theta})_{((s,a),(s',a'))} = \mathcal{P}^{\pi_\theta}(s',a'|s,a)$ and $D_{\pi_\theta}$ is a diagonal matrix whose diagonal elements are $d^{\pi_\theta}_\gamma(s,a)$ for each state-action pair. We use this matrix form of the operator Yγ in the proof of the following lemma.

Lemma 4. Let w0 be an arbitrary initial guess, and let $w_k = Y_\gamma \cdot w_{k-1}$ *for all natural numbers* k ≥ 1. For γ ∈ [0, 1), the operator Yγ is a contraction, and $\{w_k\}_{k\geq 0}$ *converges to the unique fixed point* $\nabla_\theta \log d^{\pi_\theta}_\gamma$.

¹Although for γ = 1 we can use samples from P as well (Morimura et al., 2010).

A detailed proof of Lemma 4 can be found in Appendix 8.6. The extension of Lemma 4 to linear function approximation, with a proof of convergence, can be found in Appendix 8.7. Although these TD methods are known to converge, they still suffer from two problems: one, they require access to samples from the backward conditional probability; two, they scale poorly to large problem spaces. We attempt to solve both of these problems in the next section, where we propose a min-max optimization procedure for estimating the log density gradient.
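As a concrete illustration of Lemma 4, the following minimal sketch (again a randomly generated tabular MDP under our own assumed setup) iterates the operator Yγ of equation 14 and checks that the iterates reach the fixed point obtained by solving equation 10 directly.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma = 3, 2, 0.9
n = S * A                                  # tabular softmax parameters

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
pi = rng.random((S, A));   pi /= pi.sum(-1, keepdims=True)
d0 = np.full(S, 1.0 / S)

# Tabular ingredients: forward kernel M = P^pi, distribution d, score matrix G.
M = np.einsum('sap,pb->sapb', P, pi).reshape(n, n)
Ppi = np.einsum('sa,sap->sp', pi, P)
d = np.linalg.solve(np.eye(S) - gamma * Ppi.T, (1 - gamma) * d0)
D = np.diag((d[:, None] * pi).reshape(-1))
G = np.zeros((n, n))                       # rows: grad log pi(a|s), softmax score
for s in range(S):
    for a in range(A):
        G[s * A + a, s * A:(s + 1) * A] = (np.arange(A) == a) - pi[s]

# Iterate Y_gamma of equation 14; a gamma-contraction for gamma < 1 (Lemma 4).
K = np.linalg.solve(D, M.T @ D)            # the matrix D^{-1} M^T D
W = np.zeros((n, n))
for _ in range(300):
    W = gamma * K @ W + G

# The fixed point also solves equation 10: (I - gamma M^T) D W = D G.
W_exact = np.linalg.solve((np.eye(n) - gamma * M.T) @ D, D @ G)
print(np.max(np.abs(W - W_exact)))         # ~0
```

The iteration converges because the spectral radius of $\gamma D^{-1}M^{\top}D$ equals γ; at γ = 1 this argument breaks down, which is the content of the caveat in Lemma 4.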
## 5 Min-Max Log Density Gradient

In this section, we propose another approach for evaluating the log density gradient which uses a min-max form of optimization and does not need samples from the backward distribution. Min-max optimization also allows us to use a large variety of function classes, such as neural networks, to estimate the log density gradient. Let us return to the loss function that we initially proposed in equation 11. Classical machine learning algorithms typically require a loss function in the form of an expectation under a distribution; such algorithms are often referred to as Empirical Risk Minimization, which allows the loss to be approximated using samples from that distribution.

Consider a modified form of the optimization proposed in equation 11 (the modification is that ν(s′, a′) is divided by $d^{\pi_\theta}_\gamma(s',a')$, where the ergodicity assumption ensures this operation is well defined):

$$\arg\min_{w:S\times A\to\mathbb{R}^{n}}\ \mathbb{E}_{(s^{\prime},a^{\prime})\sim d_{\gamma}^{\pi_{\theta}}}\left[\left\|\frac{\nu(s^{\prime},a^{\prime})}{d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})}\right\|^{2}\right]+\frac{\lambda}{2}\Big\|\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[w(s,a)]\Big\|^{2}\tag{15}$$
$$\nu(s^{\prime},a^{\prime}):=d_{\gamma}^{\pi_{\theta}}(s^{\prime},a^{\prime})\big(w(s^{\prime},a^{\prime})-\nabla_{\theta}\log\pi_{\theta}(a^{\prime}|s^{\prime})\big)-\gamma\sum_{s,a}d_{\gamma}^{\pi_{\theta}}(s,a)\mathcal{P}(s^{\prime}|s,a)\pi_{\theta}(a^{\prime}|s^{\prime})w(s,a)$$

The denominator term $d^{\pi_\theta}_\gamma$ is added to simplify the final optimization, as we shall see shortly. We also add the term $\frac{\lambda}{2}\|\mathbb{E}_{(s,a)\sim d^{\pi_\theta}_\gamma}[w(s,a)]\|^2$ to enforce one of the properties of the gradient of the log density, namely $\mathbb{E}_{(s,a)\sim d^{\pi_\theta}_\gamma}[\nabla_\theta \log d^{\pi_\theta}_\gamma(s,a)] = 0$. It is worth noting that equation 15 is just a re-weighting of equation 11 by $1/d^{\pi_\theta}_\gamma(s,a)$. This implies that the optimal solutions of the two problems are the same, because the minimum of either objective is attained only when $w(s,a) = \nabla_\theta \log d^{\pi_\theta}_\gamma(s,a)$. By exploiting Fenchel duality, we can re-write this optimization in min-max form (Rockafellar, 2015; Zhang et al., 2020b):

$$\arg\min_{w:S\times A\to\mathbb{R}^{n}}\ \max_{f:S\times A\to\mathbb{R}^{n},\,\tau\in\mathbb{R}^{n}}\ L_{\gamma}(w,f,\tau):=\mathbb{E}_{(s^{\prime},a^{\prime})\sim d_{\gamma}^{\pi_{\theta}}}[f(s^{\prime},a^{\prime})\cdot w(s^{\prime},a^{\prime})]-\mathbb{E}_{(s^{\prime},a^{\prime})\sim d_{\gamma}^{\pi_{\theta}}}[f(s^{\prime},a^{\prime})\cdot\nabla_{\theta}\log\pi_{\theta}(a^{\prime}|s^{\prime})]$$
$$-\gamma\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}\big[\mathbb{E}_{s^{\prime}\sim\mathcal{P}(\cdot|s,a),a^{\prime}\sim\pi_{\theta}(\cdot|s^{\prime})}[f(s^{\prime},a^{\prime})]\cdot w(s,a)\big]-\frac{1}{2}\mathbb{E}_{(s^{\prime},a^{\prime})\sim d_{\gamma}^{\pi_{\theta}}}[\|f(s^{\prime},a^{\prime})\|^{2}]+\lambda\Big(\tau\cdot\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[w(s,a)]-\frac{1}{2}\|\tau\|^{2}\Big)\tag{16}$$

In many cases searching over all functions is not possible, hence we search over tractable function classes W, F, and the aim is to approximate

$$\nabla_{\theta}\log d_{\gamma}^{\pi_{\theta}}\approx\arg\operatorname*{min}_{w\in{\mathcal{W}}}\operatorname*{max}_{f\in{\mathcal{F}},\tau\in\mathbb{R}^{n}}L_{\gamma}(w,f,\tau).$$

Such a practical consideration allows us to use different types of function approximators, such as linear function approximation, neural networks, and reproducing kernel Hilbert spaces (RKHS). We now provide an update rule to solve equation 16 under linear function approximation. For that, we choose a feature map $\Phi : S\times A \to \mathbb{R}^d$ and parameters $\alpha, \beta \in \mathbb{R}^{d\times n}$ that need to be learnt, so that we approximate the optima of equation 16, w*(s, a) and f*(s, a), with $\alpha^T\Phi(s,a)$ and $\beta^T\Phi(s,a)$ respectively, for each state-action pair (s, a). The update rule is

$$\delta_{t}=\Phi_{t}\Phi_{t}^{T}-\gamma\Phi_{t}(\Phi_{t}^{\prime})^{T}\tag{17a}$$
$$\alpha_{t+1}^{T}=\alpha_{t}^{T}-\varepsilon_{t}\big(\beta_{t}^{T}\delta_{t}+\lambda(\tau_{t}\Phi_{t}^{T})\big)\tag{17b}$$
$$\beta_{t+1}^{T}=\beta_{t}^{T}+\varepsilon_{t}\big(\alpha_{t}^{T}\delta_{t}-g_{t}\Phi_{t}^{T}-\beta_{t}^{T}\Phi_{t}\Phi_{t}^{T}\big)\tag{17c}$$
$$\tau_{t+1}=\tau_{t}+\varepsilon_{t}\lambda\big(\alpha_{t}^{T}\Phi_{t}-\tau_{t}\big)\tag{17d}$$

where $\Phi_t := \Phi(s_t, a_t)$ is the feature encountered at time t, $g_t := \nabla_\theta \log \pi_\theta(a_t|s_t)$, and $\Phi'_t := \Phi(s'_t, a'_t)$ for $(s_t, a_t) \sim d^{\pi_\theta}_\gamma$, $s'_t \sim \mathcal{P}(\cdot|s_t, a_t)$, $a'_t \sim \pi_\theta(\cdot|s'_t)$.
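A minimal sketch of these updates on a tabular problem with one-hot features follows. It performs stochastic gradient descent-ascent on Lγ, which coincides with equation 17 up to the placement of the transpose on δt in the α-step (we follow our own gradient derivation of Lγ here), and compares the learned α against the exact solution of equation 10. The random MDP, step-size schedule, and all names are our own illustrative assumptions; Algorithm 1 further below additionally projects each iterate onto bounded convex sets.

```python
import numpy as np

rng = np.random.default_rng(4)
S, A, gamma, lam = 3, 2, 0.9, 0.1
n = S * A                                   # policy-parameter dimension
d_feat = S * A                              # one-hot features: Phi = identity

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
pi = rng.random((S, A));   pi /= pi.sum(-1, keepdims=True)
d0 = np.full(S, 1.0 / S)

Ppi = np.einsum('sa,sap->sp', pi, P)
dst = np.linalg.solve(np.eye(S) - gamma * Ppi.T, (1 - gamma) * d0)
dsa = (dst[:, None] * pi).reshape(-1); dsa /= dsa.sum()

def score(s, a):                            # grad log pi(a|s) for softmax
    g = np.zeros((S, A)); g[s] = (np.arange(A) == a) - pi[s]
    return g.reshape(-1)

alpha = np.zeros((d_feat, n))               # w(s,a) ~ alpha^T Phi(s,a)
beta = np.zeros((d_feat, n))                # f(s,a) ~ beta^T Phi(s,a)
tau = np.zeros(n)
I = np.eye(d_feat)

for t in range(1, 100001):
    eps = 0.5 / t ** 0.75                   # Robbins-Monro step sizes
    sa = rng.choice(n, p=dsa); s, a = divmod(sa, A)
    s2 = rng.choice(S, p=P[s, a]); a2 = rng.choice(A, p=pi[s2])
    phi, phi2, g = I[sa], I[s2 * A + a2], score(s, a)
    delta = np.outer(phi, phi) - gamma * np.outer(phi, phi2)   # eq. 17a
    a_new = alpha - eps * (delta @ beta + lam * np.outer(phi, tau))
    b_new = beta + eps * (delta.T @ alpha - np.outer(phi, g)
                          - np.outer(phi, phi) @ beta)
    t_new = tau + eps * lam * (alpha.T @ phi - tau)
    alpha, beta, tau = a_new, b_new, t_new

# Exact solution of equation 10 for comparison (see Lemma 3).
M = np.einsum('sap,pb->sapb', P, pi).reshape(n, n)
D = np.diag(dsa)
G = np.vstack([score(s, a) for s in range(S) for a in range(A)])
W_exact = np.linalg.solve((np.eye(n) - gamma * M.T) @ D, D @ G)
print(np.linalg.norm(alpha - W_exact))      # should shrink with more samples
```

Note that λ > 0 does not bias the solution here, since the true log density gradient satisfies $\mathbb{E}_{d^{\pi_\theta}_\gamma}[\nabla_\theta \log d^{\pi_\theta}_\gamma] = 0$, at which point the inner maximizer sets τ = 0.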
We first re-write the updates in equation 17 in the stacked form $d_t = [\alpha_t, \beta_t, \tau_t^T]$, so that the updates can be written in matrix form as $d_{t+1} = d_t + \varepsilon_t(G_{t+1}d_t + h_{t+1})$, where $G_{t+1}$ and $h_{t+1}$ are as follows:

$$G_{t+1}:={\left[\begin{array}{l l l}{0}&{-A_{t}}&{-\lambda\Phi_{t}}\\ {A_{t}}&{-C_{t}}&{0}\\ {\lambda\Phi_{t}^{T}}&{0}&{-\lambda}\end{array}\right]}\,,\quad h_{t+1}:={\left[\begin{array}{l}{0}\\ {-B_{t}}\\ {0}\end{array}\right]}$$

with $A_t := \Phi_t\Phi_t^T - \gamma\Phi_t(\Phi'_t)^T$, $B_t := \Phi_t g_t^T$, and $C_t := \Phi_t\Phi_t^T$. We can calculate the expectation of each of these matrices as follows:

$$G:=\mathbb{E}_{p}[G_{t+1}]=\begin{bmatrix}0&-A&-\lambda\Psi D_{\pi_{\theta}}e\\ A&-C&0\\ \lambda e^{T}D_{\pi_{\theta}}\Psi^{T}&0&-\lambda\end{bmatrix},\quad h:=\mathbb{E}_{(s,a)\sim d_{\gamma}^{\pi_{\theta}}}[h_{t+1}]=\begin{bmatrix}0\\ -B\\ 0\end{bmatrix}.$$

Here, each column of Ψ is the feature vector Φ(s, a) for each (s, a) ∈ S × A, and e is the vector of ones. We can similarly write $A = \Psi D_{\pi_\theta}(I - \gamma\mathcal{P}_{\pi_\theta})\Psi^T$, $B = \Psi D_{\pi_\theta}G^T$, $C = \Psi D_{\pi_\theta}\Psi^T$ and $\mathbb{E}_p[\cdot] := \mathbb{E}_{(s,a)\sim d^{\pi_\theta}_\gamma, s'\sim\mathcal{P}(\cdot|s,a), a'\sim\pi_\theta(\cdot|s')}[\cdot]$. We can now prove convergence under linear function approximation under the following key assumptions.

Assumption 1.
1. The matrix Ψ has linearly independent columns.
2. The matrix A is non-singular, or the regularizer λ > 0.
3. The features Φt have uniformly bounded second moments.

Theorem 1. Under Assumption 1, the update equation 17 converges in probability to a unique solution. That is, $\lim_{t\to\infty} d_t = G^{-1}h$ *in probability.*

The detailed proof is provided in Appendix 8.8. The proof is similar to (Zhang et al., 2020b, Theorem 2) and invokes Theorem 2.2 of Borkar & Meyn (2000). We also provide a sample complexity analysis for a projected version of the update rule in equation 17. To that end, we propose Algorithm 1, called Projected Log Density Gradient. We choose closed, bounded and convex sets $X \subset \mathbb{R}^{d\times n}$, $Y \subset \mathbb{R}^{d\times n}$, $Z \subset \mathbb{R}^{1\times n}$ and define projection operators ΠX, ΠY, ΠZ that project the variables αt, βt, τt onto X, Y, Z respectively. Moreover, we choose a learning rate schedule $\{\varepsilon_t\}^m_{t=1}$, where m is the number of steps the algorithm is run for. Details of the choice of learning rate are found in Appendix 8.9.

Theorem 2. Under Assumption 1, for $(\bar{\alpha}, \bar{\beta}, \bar{\tau})$ obtained from Algorithm 1 after m *steps, the optimality gap* $\epsilon_g(\bar{\alpha}, \bar{\beta}, \bar{\tau})$ *(defined below) is bounded with probability* 1 − δ *as follows:*

$$\epsilon_{g}(\bar{\alpha},\bar{\beta},\bar{\tau}):=\operatorname*{max}_{(\beta,\tau)\in Y\times Z}L(\bar{\alpha},\beta,\tau)-\operatorname*{min}_{\alpha\in X}L(\alpha,\bar{\beta},\bar{\tau})\leq C_{0}\sqrt{\frac{5}{m}}\Big(8+2\log\frac{2}{\delta}\Big)\quad w.p.\ 1-\delta,$$

where C0 is a constant depending on the sets X, Y, Z and the second moment of Φ. We present the proof of this result in Appendix 8.9. This result essentially shows that the optimality gap of the log density gradient estimate decays as $O(1/\sqrt{m})$, where m is the number of steps the algorithm runs for.
Algorithm 1 Projected Log Density Gradient
1: **for** t = 1, 2, ..., m **do**
2:   $\delta_t = \Phi_t\Phi_t^T - \gamma\Phi_t(\Phi'_t)^T$
3:   $\alpha^T_{t+1} = \Pi_X(\alpha^T_t - \varepsilon_t(\beta^T_t\delta_t + \lambda(\tau_t\Phi^T_t)))$
4:   $\beta^T_{t+1} = \Pi_Y(\beta^T_t + \varepsilon_t(\alpha^T_t\delta_t - g_t\Phi^T_t - \beta^T_t\Phi_t\Phi^T_t))$
5:   $\tau_{t+1} = \Pi_Z(\tau_t + \varepsilon_t(\lambda(\alpha^T_t\Phi_t - \tau_t)))$
6: **Return** $\bar{\alpha} = \frac{\sum_{i=1}^{m}\varepsilon_i\alpha_i}{\sum_{i=1}^{m}\varepsilon_i}$, $\bar{\beta} = \frac{\sum_{i=1}^{m}\varepsilon_i\beta_i}{\sum_{i=1}^{m}\varepsilon_i}$, $\bar{\tau} = \frac{\sum_{i=1}^{m}\varepsilon_i\tau_i}{\sum_{i=1}^{m}\varepsilon_i}$

## 6 Experiments

In this section, we present a proof of concept of our log density gradient estimation on two gridworld environments of size 5 × 5 and 3 × 3 (Towers et al., 2023). For the gridworld experiments, we approximate the log density gradient using linear function approximation. Here, the features are $\phi : S\times A \to \mathbb{R}^{|S|\cdot|A|}$, mapping every state-action pair to the corresponding standard basis vector. Our results for the 5 × 5 gridworld are in Figure 2 and for the 3 × 3 gridworld in Figure 3. We compare our algorithm against three different baselines. The first is the theoretical log density gradient, as described in Lemma 3. The second implements the REINFORCE algorithm, which is the practical rendition of the policy gradient theorem (Williams, 1992). The third is the theoretical policy gradient method, which exactly computes the classical policy gradient approximation in equation 4b (Sutton et al., 1999). We observe that both log density gradient approaches are more sample efficient than both policy gradient approaches. This is because the policy gradient methods approximate the gradient for the average reward scenario (γ = 1) by estimating a Q-function for a discounting factor less than 1. Moreover, we observe that our method tends to outperform REINFORCE with much reduced variance. Our approach is always very close in performance to the theoretical log density gradient, which serves to validate the correctness of our algorithm. In the 5 × 5 gridworld we also observe that our algorithm outperforms the theoretical log density gradient; this is because the theoretical log density gradient suffers from numerical issues arising in the average reward scenario.

![9_image_0.png](9_image_0.png)

Figure 2: For the 5 × 5 gridworld, comparison of the log density gradient algorithm (light green) with REINFORCE (light red), the theoretical policy gradient (gray) and the theoretical log density gradient (blue). We observe that our empirical algorithm comfortably outperforms the other baselines.

![10_image_0.png](10_image_0.png)

Figure 3: For the 3 × 3 gridworld, comparison of the log density gradient algorithm (light green) with REINFORCE (light red), the theoretical policy gradient (gray) and the theoretical log density gradient (blue). We observe that our empirical algorithm comfortably outperforms the other baselines.

## 7 Conclusion And Future Work

We present the log density gradient algorithm, which estimates the policy gradient using the state-action discounted formulation of a reinforcement learning problem. We observe that the policy gradient estimated in this manner corrects for a residual error common to many reinforcement learning implementations. We show that, with a known model, we can exactly calculate the gradient of the log density by solving two sets of linear equations. We further propose a TD(0) algorithm to implement the same, but it needs samples from the backward Markov chain, which is too restrictive. Therefore, we propose a min-max optimization that estimates the log density gradient using just on-policy samples.
We not only prove theoretical properties such as convergence and uniqueness, but also experimentally demonstrate that our method is sample efficient compared to classical policy gradient methods like REINFORCE. This approach looks promising, and further studies of the log density gradient will focus on scaling its performance to complex tasks.

## References

Baxter, J. and Bartlett, P. L. Infinite-horizon policy-gradient estimation. *J. Artif. Int. Res.*, 2001.

Borkar, V. S. and Meyn, S. P. The O.D.E. method for convergence of stochastic approximation and reinforcement learning. *SIAM J. Control Optim.*, 2000.

Boyan, J. A. Least-squares temporal difference learning. In *ICML*, pp. 49–56, 1999.

Bradtke, S. J. and Barto, A. G. Linear least-squares algorithms for temporal difference learning. *Mach. Learn.*, 1996.

Gelada, C. and Bellemare, M. G. Off-policy deep reinforcement learning by bootstrapping the covariate shift. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI*, 2019.

Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *Proceedings of the 35th International Conference on Machine Learning, ICML*, 2018.

Hallak, A. and Mannor, S. Consistent on-line off-policy evaluation. In *Proceedings of the 34th International Conference on Machine Learning, ICML*, 2017.

Ilyas, A., Engstrom, L., Santurkar, S., Tsipras, D., Janoos, F., Rudolph, L., and Madry, A. A closer look at deep policy gradients. In *8th International Conference on Learning Representations, ICLR*, 2020.

Schulman, J., Zoph, B., Kim, C., et al. ChatGPT: Optimizing language models for dialogue, 2023.

Kakade, S. M. A natural policy gradient. In Dietterich, T. G., Becker, S., and Ghahramani, Z. (eds.), *Advances in Neural Information Processing Systems 14*, 2001.

Katdare, P., Liu, S., and Campbell, K. D. Off environment evaluation using convex risk minimization. In *2022 International Conference on Robotics and Automation, ICRA*, 2022.

Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. In *4th International Conference on Learning Representations, ICLR*, 2016.

Liu, B., Liu, J., Ghavamzadeh, M., Mahadevan, S., and Petrik, M. Finite-sample analysis of proximal gradient TD algorithms. In Meila, M. and Heskes, T. (eds.), *Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI*, 2015.

Liu, Q., Li, L., Tang, Z., and Zhou, D. Breaking the curse of horizon: Infinite-horizon off-policy estimation. In *Advances in Neural Information Processing Systems 31*, 2018.

Maei, H. R. Gradient temporal-difference learning algorithms. *Ph.D. thesis*, 2011.

Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In *Proceedings of the 33rd International Conference on Machine Learning, ICML*, 2016.

Morimura, T., Uchibe, E., Yoshimoto, J., Peters, J., and Doya, K. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning. *Neural Comput.*, 2010.

Nachum, O., Chow, Y., Dai, B., and Li, L. DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections. In *Advances in Neural Information Processing Systems 32*, 2019.

Nemirovski, A., Juditsky, A. B., Lan, G., and Shapiro, A. Robust stochastic approximation approach to stochastic programming. *SIAM J. Optim.*, 2009.
Puterman, M. L. *Markov Decision Processes: Discrete Stochastic Dynamic Programming*. John Wiley and Sons, 2014.

Robbins, H. and Monro, S. A stochastic approximation method. *The Annals of Mathematical Statistics*, 22(3):400–407, 1951. doi: 10.1214/aoms/1177729586. URL https://doi.org/10.1214/aoms/1177729586.

Rockafellar, R. T. *Convex Analysis*. Princeton University Press, 2015.

Schulman, J., Levine, S., Moritz, P., Jordan, M. I., and Abbeel, P. Trust region policy optimization. *CoRR*, abs/1502.05477, 2015. URL http://arxiv.org/abs/1502.05477.

Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. In *4th International Conference on Learning Representations, ICLR*, 2016.

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. *CoRR*, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. A. Deterministic policy gradient algorithms. In *Proceedings of the 31st International Conference on Machine Learning, ICML*, 2014.

Silver, D., Schrittwieser, J., Simonyan, K., and others. Mastering the game of Go without human knowledge. *Nature*, 2017.

Sutton, R. S. Learning to predict by the methods of temporal differences. *Mach. Learn.*, 1988.

Sutton, R. S., McAllester, D. A., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In *Advances in Neural Information Processing Systems*, 1999.

Tesauro, G. Practical issues in temporal difference learning. *Mach. Learn.*, 1992.

Towers, M., Terry, J. K., Kwiatkowski, A., Balis, J. U., Cola, G. d., Deleu, T., Goulão, M., Kallinteris, A., KG, A., Krimmel, M., Perez-Vicente, R., Pierré, A., Schulhoff, S., Tai, J. J., Shen, A. T. J., and Younis, O. G. Gymnasium, March 2023. URL https://zenodo.org/record/8127025.

Uehara, M., Huang, J., and Jiang, N. Minimax weight and Q-function learning for off-policy evaluation. In *Proceedings of the 37th International Conference on Machine Learning, ICML*, 2020.

Voloshin, C., Le, H. M., Jiang, N., and Yue, Y. Empirical study of off-policy policy evaluation for reinforcement learning. 2019.

Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Mach. Learn.*, 1992.

Yuan, R., Gower, R. M., and Lazaric, A. A general sample complexity analysis of vanilla policy gradient. In *International Conference on Artificial Intelligence and Statistics, AISTATS, Proceedings of Machine Learning Research*, 2022.

Zhang, R., Dai, B., Li, L., and Schuurmans, D. GenDICE: Generalized offline estimation of stationary values. *CoRR*, abs/2002.09072, 2020a. URL https://arxiv.org/abs/2002.09072.

Zhang, S., Liu, B., and Whiteson, S. GradientDICE: Rethinking generalized offline estimation of stationary values. *CoRR*, abs/2001.11113, 2020b. URL https://arxiv.org/abs/2001.11113.

Zhang, T., Ren, T., Yang, M., Gonzalez, J., Schuurmans, D., and Dai, B. Making linear MDPs practical via contrastive representation learning. In *International Conference on Machine Learning, ICML, Proceedings of Machine Learning Research*, 2022.
Review 1:
Summary:
The paper re-introduces the method called "log density gradient" from Morimura et al. (2010) for estimating policy gradients in reinforcement learning, based on the gradient of the log density. The authors show that Morimura's method corrects for a residual error term that is often ignored in classical policy gradient implementations. The authors derive the log density gradient in the discounted setting, and a new optimization method to estimate the gradient of the log density without sampling from the backward Markov chain. Theoretical analysis is provided, proving convergence and uniqueness properties of the min-max optimization, as well as sample complexity bounds. These are novel contributions and are not included in the Morimura paper.

Strengths and Weaknesses:
Strengths: This paper presents a theoretically grounded approach to policy gradient estimation in reinforcement learning. It further consolidates the theoretical foundation of Morimura's method (the log density gradient method). The log density gradient is sufficiently different from the classical policy gradient, and free of its residual error, that it is worth more attention from the community. The min-max optimization formulation is particularly noteworthy as it overcomes the limitation of requiring backward Markov chain samples, making the method more practical for real-world applications.

Weakness:
1. The writing of mathematical equations and theorem statements should be improved for rigor. For example: In the definition of the operator $Y_\gamma$ between Eq. 13 and 14, $(s, a)$ should be sampled from the backward MC. The textual parts of Proposition 1 and Proposition 2 should be written in more formal language, and they should be self-contained: i.e. no "mentioned in Eq. x".
2. This paper should have a complete policy-updating algorithm as pseudo-code or a description. In the current version, after Section 3, the whole problem seems to become how to estimate the log density gradient. After solving that, the paper needs to go back and discuss the policy gradient and update rule, and the effect of using an approximation of the log density gradient in the policy gradient.
3. The paper should discuss more limitations of the log density gradient method in comparison with the classical PG. For example, the TD method for the log density gradient updates a d-dimensional vector function, where d is the number of policy parameters. Standard TD (to estimate the Q(s,a) in the PG theorem) only needs to update a real-valued function. For modern NNs, d is huge. This is a significant concern for the scalability of the method.
4. The experimental domain is a 5x5 grid world.

Requested Changes:
See the three requests in weaknesses 1-3.

Broader Impact Concerns: N/A

==================================================

Review 2:
Summary:
This paper uses the log density gradient idea to correct for the residual error in policy gradient estimation. A model-free TD-based method is then used to approximate the policy gradient with some theoretical guarantees. To overcome the practical inefficiency issue, this work further implements the min-max optimization idea that works with on-policy samples and presents some theoretical results under the linear function class assumption, which enables the method to work for linear MDP problems. Empirical results are presented to validate the high efficiency of the proposed method.

Strengths and Weaknesses:
Strengths:
1. This work has a comprehensive summary of the existing literature.
2.
I am not an expert in this area and hence did not check the proofs in the appendix in detail, but all the arguments look sound to me.

Weaknesses:
1. I feel the presentation of this work could be improved.
1.1 There are some typos, e.g.: on page one, "we empirically demonstrate that this error 'in' indeed significant"; on page 5, there is no verb in the sentence "Thus, effectively demonstrating that TD-updates..."
1.2 Some parts are confusing and hard to understand: e.g. the page-5 sentence "Thus, effectively demonstrating that TD-updates...". I don't understand why your proof (converging to a unique solution) could showcase the drawback of the method from Morimura. Will this result indicate the drawback of your methods as well?
2. I feel the baselines in your experimental results are a little too weak. It would be better to compare your method with some modern policy-gradient-based algorithms, not just REINFORCE, which is about 30 years old. Real-world datasets should also be used in your experiments.

Requested Changes:
1. I feel that, given the existing literature in Morimura et al. and Zhang et al., it may not be fair to state you "propose" the log density gradient and "propose" the min-max optimization, since your methods are adapted from existing works.
2. It would be better to summarize the min-max optimization methods in the existing literature, e.g. Zhang et al., since your method also adapts this idea.
3. Including a recent baseline or adding some real experiments would make the experimental results stronger.
4. How difficult is it to extend from the average reward scenario to the general case over a range of discounting factors $\gamma \in [0,1]$? What are the major challenges in this extension? Please briefly explain this issue in your work.
5. A brief explanation of Assumption 1 is necessary.
6. TD($\lambda$) is not explained in the main paper (only in the appendix) but TD(0) is used in the conclusion. Please briefly explain this notation before using it.

Broader Impact Concerns: NA

==================================================

Review 3:
Summary:
The paper presents a novel technique for estimating the policy gradient in the case of an average reward scenario. Existing techniques rely on $\gamma < 1$ to obtain an algorithm, but as previous work has shown, this results in a biased gradient when using a $Q_\gamma$ function to estimate the average reward policy gradient. The paper presents both a full estimate based on the backward state transition probability and a more application-friendly sample-based version. Finally, the efficacy of the approach is verified in a small-scale grid world experiment.

Strengths and Weaknesses:
Strengths: The authors present a thorough derivation of both the problems with existing approaches and their solution. By providing a population version, a sample-based estimation approach, and a sample complexity bound, they cover a good set of interesting questions relating to a novel approach. As far as I can tell, all proofs are correct (although see the following).

Weaknesses: The exposition and presentation of the paper make following the core derivations and results hard. The proofs in the appendix do not all restate the statements which are being proven, and equations across the paper are referenced in a way that requires a reader to jump back and forth a lot. Furthermore, derivation steps could be shown in more detail; reconstructing some of the steps has taken me a non-trivial amount of time.
The authors also at several points refer to proofs in other papers. I would prefer if the authors either restated the results from other papers they are using explicitly, or rewrote the proofs in full here. The paper should be readable by itself as much as possible. One concrete example of unclear exposition is the introduction of Lemma 2: the role of $w(s)$ is not explicitly stated in the lead-up to the lemma, nor after. This means the reader has to intuit by themselves what the purpose of its introduction is. Overall, I would encourage the authors to thoroughly rework the expository text for the mathematical results. I first assumed the role is simply to approximate $d^\pi$, but later the same variable is used to approximate the gradient of $d^\pi$, so it is simply used as the "variable to learn" in each lemma?

In the text after Lemma 2, the exposition jumps back to the term $\nabla d$ without any transition or even a paragraph break. This is related to the previous problem. From context I reconstructed that the authors intended to present two successive lemmas about the learnability of $d$ and $\nabla d$ respectively, but this was not stated clearly.

The experiments are very small scale, with tiny grid worlds. It would be interesting to see if the sample complexity shown by the authors is sufficient to scale up to slightly more complex problems, like larger gridworlds, standard testing benchmarks such as cliffwalk, chainwalk, garnets, or maybe mountain car.

In the same spirit: as the authors motivated their sample-based approach by highlighting how it enables function approximation, I think it would be a good idea to show small results with function approximation, such as tiling features over gridworlds. While they claim that they use function approximation, they are using one-hot features per state-action pair, which is basically equivalent to a tabular method.

Requested Changes:
Nice to have: In their introduction, the authors imply that most RL work is interested in the average reward formulation. I believe that this claim is slightly too strong and can lead to confusion: the policy gradient is (as far as I know) not biased if the goal is indeed to optimize for a discounted return (although almost no algorithm in practical use uses the discounted state visitation distribution, so I agree with the authors here that there are some non-trivial oversights). The authors are correct that previous work has not been careful in differentiating between the goal stated in the respective works and those optimized by the presented algorithms, but I would argue that both optimizing for a discounted or an average reward scenario is a valid goal. I would encourage the authors here to clarify the introduction to highlight under what conditions the policy gradients are biased.

Nice to have: a (slightly) larger experiment with actual function approximation.

Very good to have: The paper has quite a few typos and grammatical errors; another editorial pass would be good.

Critical: Rework and expand the explanation of the method, the purpose of introduction for each step in the derivations, etc. I am currently rating the paper as having no "accurate, convincing and clear evidence" not because I have serious doubts about the correctness of the derivations, but because I believe that the writing can and should be seriously improved before acceptance.
I want to stress that I believe that the contribution is both valuable and correct (as far as I can tell), and I commend the authors for their work; it is simply not easily accessible in its current form.

Broader Impact Concerns: n/a

==================================================
# A Survey On Transferability Of Adversarial Examples Across Deep Neural Networks

Jindong Gu1, Xiaojun Jia2, Pau de Jorge1, Wenqain Yu3, Xinwei Liu4, Avery Ma5, Yuan Xun4, Anjun Hu1, Ashkan Khakzar1, Zhijiang Li3, Xiaochun Cao6, **Philip Torr**1

1 *Torr Vision Group, University of Oxford, Oxford, United Kingdom*
2 *Nanyang Technological University, Singapore*
3 *Wuhan University, Wuhan, China*
4 *University of Chinese Academy of Sciences, Beijing, China*
5 *University of Toronto, Toronto, Canada*
6 *Sun Yat-sen University, Shenzhen, China*

Reviewed on OpenReview: *https://openreview.net/forum?id=AYJ3m7BocI*

## Abstract

The emergence of Deep Neural Networks (DNNs) has revolutionized various domains by enabling the resolution of complex tasks spanning image recognition, natural language processing, and scientific problem-solving. However, this progress has also brought to light a concerning vulnerability: adversarial examples. These crafted inputs, imperceptible to humans, can manipulate machine learning models into making erroneous predictions, raising concerns for safety-critical applications. An intriguing property of this phenomenon is the transferability of adversarial examples, where perturbations crafted for one model can deceive another, often one with a different architecture. This intriguing property enables "black-box" attacks which circumvent the need for detailed knowledge of the target model. This survey explores the landscape of the adversarial transferability of adversarial examples. We categorize existing methodologies to enhance adversarial transferability and discuss the fundamental principles guiding each approach. While the predominant body of research primarily concentrates on image classification, we also extend our discussion to encompass other vision tasks and beyond. Challenges and opportunities are discussed, highlighting the importance of fortifying DNNs against adversarial vulnerabilities in an evolving landscape.

## 1 Introduction

In recent years, Deep Neural Networks (DNNs) have evolved into a powerful tool for solving complex tasks, ranging from image recognition (He et al., 2016; Dosovitskiy et al., 2020) and natural language processing (Kenton & Toutanova, 2019; Brown et al., 2020a) to natural science problems (Wang et al., 2023). Since the advent of neural networks, an intriguing and disconcerting phenomenon known as adversarial examples has come into focus (Szegedy et al., 2013; Goodfellow et al., 2014). Adversarial examples are specially crafted inputs that lead machine learning models to make incorrect predictions. These inputs are imperceptibly different from correctly predicted inputs. The existence of adversarial examples poses potential threats to real-world safety-critical DNN-based applications, e.g., medical image analysis (Bortsova et al., 2021) and autonomous driving systems (Kim & Canny, 2017; Kim et al., 2018). While the existence of adversarial examples has raised concerns about the robustness and reliability of machine learning systems, researchers have uncovered an even more intriguing phenomenon: the *transferability* of adversarial examples (Goodfellow et al., 2014; Papernot et al., 2016). Transferability refers to the ability of an adversarial example designed for one model to successfully deceive a different model, often one with a distinct architecture. With such a property, a successful attack can be implemented without accessing any detail of the target model, such as model architecture, model parameters, and training data.
Table 1: Categorization of transferability-enhancing methods.

| Category | Subcategory | References |
|---|---|---|
| Optimization-Based | Data Augmentation | Xie et al. (2019); Dong et al. (2019); Lin et al. (2019); Zou et al. (2020); Wu et al. (2021); Li et al. (2020b); Byun et al. (2022); Wang et al. (2021a); Huang & Kong (2022) |
| | Optimization Technique | Goodfellow et al. (2014); Dong et al. (2018); Nesterov (1983); Zou et al. (2022); Wang & He (2021); Xiong et al. (2022); Zhu et al. (2023b); Li et al. (2020a); Qin et al. (2022); Ma et al. (2023); Gubri et al. (2022b) |
| | Loss Objective | Zhang et al. (2022a); Xiao et al. (2021); Li et al. (2020a); Zhao et al. (2021); Fang et al. (2022); Xu et al. (2022b); Li et al. (2023); Qian et al. (2023); Chen et al. (2023a) |
| | Model Component | Zhou et al. (2018); Naseer et al. (2018); Hashemi et al. (2022); Salzmann et al. (2021); Ganeshan et al. (2019); Wang et al. (2021c); Zhang et al. (2022c); Wu et al. (2020b); Inkawhich et al. (2019; 2020b;a); Waseda et al. (2023); Wu et al. (2020a); Guo et al. (2020); Naseer et al. (2022) |
| Generation-Based | Unconditional Generation | Poursaeed et al. (2018); Xiao et al. (2018); Naseer et al. (2019; 2021); Kim et al. (2022); Feng et al. (2022); Zhao et al. (2023) |
| | Class-conditional Generation | Yang et al. (2022); Han et al. (2019); Mao et al. (2020); Phan et al. (2020); Chen et al. (2023b;d; 2024) |

The recent surge in interest surrounding the transferability of adversarial examples is attributed to its potential application in executing black-box attacks (Papernot et al., 2016; Liu et al., 2017). Delving into the reasons behind the capacity of adversarial examples tailored for one model to deceive others provides researchers with an opportunity to acquire a profound comprehension of the fundamental mechanisms underpinning the susceptibility of DNNs (Wu & Zhu, 2020). Moreover, a comprehensive understanding of transferability offers the possibility of fostering the creation of adversarially robust models, capable of effectively countering adversarial attacks (Jia et al., 2022b; Waseda et al., 2023; Ilyas et al., 2019). Given the growing volume of publications on adversarial examples, several survey papers (Sun et al., 2018; Zhang & Li, 2019; Wiyatno et al., 2019; Serban et al., 2020; Han et al., 2023b) have emerged, seeking to encapsulate these phenomena from diverse viewpoints. Yet, a comprehensive survey specifically addressing the transferability of adversarial examples remains absent. This study endeavors to bridge that gap, embarking on an extensive investigation into the realm of adversarial transferability.
To this end, we thoroughly review the latest research on transferability assessment, methods to enhance transferability, and the associated challenges and prospects. Specifically, as shown in Table 1, we categorize the current transferability-enhancing methods into two major categories: (1) **optimization-based** methods, where one directly optimizes for the adversarial perturbations based on one or more surrogate models at inference time, without introducing additional generative models, and (2) **generation-based** methods, which introduce generative models dedicated to adversary synthesis. Moreover, we also examine adversarial transferability beyond the commonly studied misclassification attacks and provide a summary of this phenomenon in other tasks (e.g. image retrieval, object detection, segmentation, etc.). Upon assessing the current advancements in adversarial transferability research, we then outline a few challenges and potential avenues for future investigations.

The organization of this paper is as follows: Section 2 first provides the terminology and mathematical notations used across the paper, and then introduces the formulation and evaluation metrics of adversarial transferability. Sections 3-5 present various techniques to improve the transferability of adversarial examples. Section 3 examines transferability-enhancing methods that are applicable to optimization-based transferable attacks. These techniques are categorized into four perspectives: data processing, optimization, loss objective, and model architecture. In Section 4, various generation-based transferable attacks are presented. Section 5 describes the research on adversarial transferability beyond image classification. Concretely, transferability-enhancing techniques in various vision tasks and natural language processing tasks, as well as those across tasks, are discussed. The current challenges and future opportunities in adversarial transferability are discussed in Section 6. The last section concludes our work. In order to facilitate the literature search, we also built and released a project page where the related papers are organized and listed1. The page will be maintained and updated regularly.

As the landscape of DNNs continues to evolve, understanding the intricacies of adversarial examples and their transferability is of paramount importance. By shedding light on these vulnerabilities, we hope to contribute to the development of more secure and robust DNN models that can withstand the challenges posed by adversarial attacks.

## 2 Preliminary

In this section, we first introduce the terminology and mathematical notations used across the paper. Then, we introduce the formulation of adversarial transferability. In the last part, the evaluation of adversarial transferability is presented.

## 2.1 Terminology And Notations

The terminologies relevant to the topic of adversarial transferability and the mathematical notations are listed in Tab. 2 and Tab. 3, respectively.
Table 2: The terminologies used across the paper.

| Terminology | Description |
|---|---|
| adversarial perturbation | A small artificial perturbation that can cause misclassification of a neural network when added to the input image. |
| target model | The target model (e.g. a deep neural network) to attack. |
| surrogate model | The model built to approximate the target model for generating adversarial examples. |
| white-box / black-box attacks | White-box attacks can access the target model (e.g. its architecture and parameters), while black-box attacks cannot. |
| untargeted / targeted attack | The goal of untargeted attacks is to cause misclassifications of a target model, while targeted attacks aim to mislead the target model into classifying an image as a specific target class. |
| clean accuracy | The model performance on the original clean test dataset. |
| fooling rate | The percentage of images that are misclassified by the target model. |

Table 3: The mathematical notations used across the paper.

| Notation | Description | Notation | Description |
|---|---|---|---|
| $x$ | A clean input image | $y$ | The ground-truth label of an image |
| $\delta$ | Adversarial perturbation | $\epsilon$ | Range of the adversarial perturbation |
| $\chi$ | Input distribution | $x^{adv}$ | Adversarial example $x+\delta$ of the input $x$ |
| $y^{t}$ | Target class of an adversarial attack | $x^{adv(t)}$ | Adversarial input at the end of the $t$-th iteration |
| $f_{t}$ | Target model under attack | $f_{s}$ | Surrogate (source) model for AE creation |
| $f^{i}(x)$ | The $i$-th model output probability for the input $x$ | $H^{l}_{k}$ | The $k$-th activation in the $l$-th layer of the target model |
| $H^{l}$ | The $l$-th layer of the target network | $z^{i}$ | Model output logits |
| AE | Adversarial Example | AT | Adversarial Transferability of AE |
| Acc | Clean accuracy on the clean dataset | FR | Fooling rate on the target model |

1https://github.com/JindongGu/awesome_adversarial_transferability

## 2.2 Formulation Of Adversarial Transferability

Given an adversarial example $x^{adv}$ of the input image $x$ with the label $y$ and two models $f_s(\cdot)$ and $f_t(\cdot)$, adversarial transferability describes the phenomenon that an adversarial example that is able to fool the model $f_s(\cdot)$ can also fool another model $f_t(\cdot)$. Formally speaking, the adversarial transferability of untargeted attacks can be formulated as follows:

$$\operatorname{argmax}_{i}f_{t}^{i}(x^{adv})\neq y,\ \text{given}\ \operatorname{argmax}_{i}f_{s}^{i}(x^{adv})\neq y\tag{1}$$

Similarly, the targeted transferable attacks can be described as:

$$\operatorname{argmax}_{i}f_{t}^{i}(x^{adv})=y^{t},\ \text{given}\ \operatorname{argmax}_{i}f_{s}^{i}(x^{adv})=y^{t}\ \text{and}\ \operatorname{argmax}_{i}f_{s}^{i}(x)=y\tag{2}$$

The process to create $x^{adv}$ for a given example $x$ is detailed in Sec. 3 and Sec. 4.

## 2.3 Evaluation Of Adversarial Transferability

Fooling Rate (FR). The most popular metric used to evaluate adversarial transferability is FR. We denote $P$ as the number of adversarial examples that successfully fool a source model. Among them, the number of examples that are also able to fool a target model is $Q$. The Fooling Rate is then defined as $FR = \frac{Q}{P}$. The higher the FR is, the more transferable the adversarial examples are.
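To make the metric concrete, the following is a minimal sketch of the FR computation in the untargeted setting, assuming `f_s` and `f_t` are PyTorch classifiers that return logits and `y` holds the ground-truth labels; the function name and interface are illustrative, not taken from any particular library.

```python
import torch

@torch.no_grad()
def fooling_rate(f_s, f_t, x_adv, y):
    """Fooling Rate (Sec. 2.3): among the P adversarial examples that
    fool the surrogate f_s, count the Q that also fool the target f_t
    and return FR = Q / P.  Untargeted setting."""
    pred_s = f_s(x_adv).argmax(dim=1)      # surrogate predictions
    pred_t = f_t(x_adv).argmax(dim=1)      # target predictions
    fools_s = pred_s != y                  # examples that fool the surrogate
    P = int(fools_s.sum())
    if P == 0:
        return 0.0
    Q = int((pred_t[fools_s] != y[fools_s]).sum())
    return Q / P
```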
Interest Class Rank (ICR). According to Zhang et al. (2022a), an interest class is the ground-truth class in untargeted attacks or the target class in targeted attacks. FR only indicates whether the interest class ranks top-1, which may not be an in-depth indication of transferability. To gain deeper insights into transferability, it is valuable to consider the ICR, which represents the rank of the interest class after the attack. In untargeted attacks, a higher ICR indicates higher transferability, while in targeted attacks, a higher ICR suggests lower transferability.

Knowledge Transfer-based Metrics. Liang et al. (2021) considered all potential adversarial perturbation vectors and proposed two practical metrics for transferability evaluation. The transferability of a dataset $x \sim D$ from a source model $f_s$ to a target model $f_t$ is defined as follows:

$$\alpha_{1}{}^{f_{s}\to f_{t}}(x)=\frac{\ell(f_{t}(x),f_{t}(x+\delta_{f_{s},\epsilon}(x)))}{\ell(f_{t}(x),f_{t}(x+\delta_{f_{t},\epsilon}(x)))}\tag{3}$$

where $\delta_{f_s,\epsilon}(x)$ is the adversarial perturbation generated on the surrogate model $f_s$, while $\delta_{f_t,\epsilon}(x)$ is that on the target model $f_t$. The first metric $\alpha_1$ measures the ratio between two loss values, which indicates the relative performance of two attacks: the black-box attack from $f_s$ to $f_t$ and the white-box attack on $f_t$. Transferability from $f_s$ to $f_t$ is high when $\alpha_1$ is high.

$$\alpha_{2}{}^{f_{s}\to f_{t}}=\left\|\mathbb{E}_{x\sim D}\Big[\widehat{\Delta f_{s\to f_{s}}}(x)\,\cdot\,\widehat{\Delta f_{s\to f_{t}}}(x)^{\top}\Big]\right\|_{F}\tag{4}$$

$$\text{where}\quad\Delta f_{s\to f_{s}}(x)=f_{s}(x+\delta_{f_{s},\epsilon}(x))-f_{s}(x),\quad\Delta f_{s\to f_{t}}(x)=f_{t}(x+\delta_{f_{s},\epsilon}(x))-f_{t}(x)\tag{5}$$

The $\widehat{\cdot}$ operation denotes the corresponding unit-length vector and $\|\cdot\|_F$ denotes the Frobenius norm. The second metric $\alpha_2$ measures the relationship between two deviation directions, corresponding to white-box attacks on $f_s$ and black-box attacks from $f_s$ to $f_t$, respectively. Liang et al. (2021) argue that these two metrics represent complementary perspectives of transferability: $\alpha_1$ represents how often the adversarial attack transfers, while $\alpha_2$ encodes directional information of the output deviation.

## 3 Optimization-Based Transferable Attacks

In this section, we introduce optimization-based transferability-enhancing methods: a class of methods that seeks adversarial perturbations at test time based on one or more surrogate models, without introducing or training additional models (e.g. a generative model). Based on our formulation in Section 2.2, the process to obtain transferable adversarial perturbations with this class of methods can be expressed as:

$$\delta^{*}=\arg\max_{\delta}\mathbb{E}_{T}\ \ell(f_{s}(T(x+\delta)),y),\ \ s.t.\ \ ||\delta||_{\infty}\leq\epsilon\tag{6}$$

where $\delta$ is an adversarial perturbation of the same size as the input, $T(\cdot)$ denotes data augmentation operations, $\ell(\cdot)$ is a designed loss, and $f_s(\cdot)$ is the model used in the optimization process, which could be a slightly modified version of a surrogate model. Examples of such modifications include linearizing the surrogate model by removing the non-linear activation functions and highlighting skip connections in backpropagation passes. The problem in Equation 6 is approximately solved with Projected Gradient Descent (Madry et al., 2017). Multi-step attacks (e.g. (Madry et al., 2017)) update the perturbation iteratively as follows:
$$\delta^{(t+1)}=\mathrm{Clip}^{\epsilon}\big(\delta^{(t)}+\alpha\cdot\mathrm{sign}(\nabla_{x}\ell)\big),\tag{7}$$

where $\alpha$ is the step size used to update the perturbation, $x^{adv(t+1)} = x + \delta^{(t+1)}$, and $\mathrm{Clip}^{\epsilon}(\cdot)$ is a clipping function that keeps the constraint $||\delta||_{\infty} \leq \epsilon$ satisfied. In contrast, single-step attacks (e.g. (Szegedy et al., 2013; Goodfellow et al., 2014)) update the adversarial perturbation with only one attack iteration. Given the expression in Equation 6, we categorize the transferability-enhancing methods into four categories: data augmentation-based, optimization-based, model-based, and loss-based.

## 3.1 Data Augmentation-Based Transferability Enhancing Methods

The family of methods discussed in this section, referred to as data augmentation-based methods, is based on the rationale of applying data augmentation strategies. In these strategies, when computing the adversaries on the surrogate model $f_s$, an input transformation $T(\cdot)$ with a stochastic component is applied to increase adversarial transferability by preventing overfitting to the surrogate model. In the following, we discuss the specific instance of $T(\cdot)$ for each method.

Diverse Inputs Method (DIM). Xie et al. (2019) are the first to suggest applying differentiable input transformations to the clean image. In particular, their DIM applies a different transformation at each iteration of a multi-step adversarial attack. They perform random resizing and padding with a certain probability $p$ (see the code sketch below), that is:

$$T(x)=\begin{cases}x&\text{with probability}\ 1-p\\ \mathrm{pad}(\mathrm{resize}(x))&\text{with probability}\ p\end{cases}\tag{8}$$

As $p$ increases, the transferability of iterative attacks improves most significantly. They also notice that although this is tied to a drop in white-box performance, that drop is much milder.

Translation Invariance Method (TIM). Dong et al. (2019) study the *discriminative image regions* of different models, i.e. the image regions most important to a classifier's output. They observe that different models leverage different regions of the image, especially if adversarially trained. This motivates them to propose the *Translation Invariance Method* (TIM): they compute an adversarial perturbation that works for the original image and any of its translated versions (where the position of all pixels is shifted by a fixed amount), therefore:

$$T(x)[i,j]=x[i+t_{x},j+t_{y}]\tag{9}$$

where $t_x$ and $t_y$ define the shift. Moreover, Dong et al. (2019) show that such perturbations can be found with little extra cost by applying a kernel matrix on the gradients. This method can be combined with other attacks or augmentation techniques (e.g. DIM) to further improve transferability.
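Returning to DIM's transformation (Equation 8), a minimal sketch in PyTorch is given below; the size range (resizing up to 330 pixels) follows common practice for 299x299 Inception-style inputs and is an assumption here, as are the function and argument names.

```python
import random
import torch
import torch.nn.functional as F

def dim_transform(x, p=0.5, max_size=330):
    """DIM input transformation (Eq. 8): with probability p, randomly
    resize a (square) NCHW image batch and zero-pad it back to
    max_size x max_size; otherwise return the input unchanged."""
    if random.random() > p:
        return x
    h = x.shape[-1]                                        # assumes square inputs
    rnd = random.randint(h, max_size - 1)                  # random new size
    x = F.interpolate(x, size=(rnd, rnd), mode="nearest")  # resize
    pad = max_size - rnd
    top, left = random.randint(0, pad), random.randint(0, pad)
    return F.pad(x, (left, pad - left, top, pad - top))    # random zero padding
```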
Scale Invariance Method (SIM). When attacking in a black-box setting, Dong et al. (2018) showed that computing an attack for an ensemble of models improves transferability, albeit at an increased computational cost. Motivated by this fact, Lin et al. (2019) introduce the concept of *loss-preserving transformation* (any input transformation $T$ that satisfies $\ell(f(T(x)), y) \approx \ell(f(x), y)\ \forall x$) and of *model augmentation* (given a loss-preserving transformation $T$, $f' = f(T(\cdot))$ is an augmented model of $f$). One can then use different model augmentations and treat them as an ensemble. In particular, Lin et al. (2019) find that downscaling the input image tends to preserve the loss,

$$T(x,i)=S_{i}(x)=x/2^{i},\tag{10}$$

where $S_i(x)$ scales the pixel values of $x$ and $i$ is the number of predefined scales. In a similar fashion to TIM, the authors propose the Scale Invariance Method (SIM) to find perturbations that work for any downscaled version of the input. Note that this is different from DIM, since SIM optimizes the perturbation for different scales at once, while DIM applies a different single resizing and padding at each gradient ascent step.

Resized Diverse Inputs (RDI). Zou et al. (2020) introduce a variant of DIM where the inputs are resized back to the original size after applying the DIM augmentations, i.e. *Resized Diverse Inputs* (RDI):

$$T(x)=\mathrm{resize}(\mathrm{resize}(x,H/s,W/s),H,W)\tag{11}$$

Resizing the inputs back allows them to test much more aggressive resizing and padding augmentations, which boosts performance. Similarly to SIM, they also propose to ensemble the gradients from different inputs; however, instead of multiple-scale images, they use RDI augmentations with varying strength. They also observe that keeping the attack optimizer step size $\alpha = \epsilon$ constant further improves the success rate of the attacks.

Adversarial Transformation Transfer Attack (ATTA). A common theme in the previous methods is that combining multiple image transformations (DIM + TIM + SIM) usually leads to better results than using just one set of augmentations. However, all the previous methods rely on a fixed set of augmentations, which limits the diversity of augmentations even when combined. Wu et al. (2021) introduce the *Adversarial Transformation Transfer Attack*, where the input transformations are parametrized by a network that is optimized alongside the adversarial perturbations to minimize the impact of adversarial attacks, thus improving the resilience of the final attacks to input perturbations:

$$T(x)=\psi_{\theta}(x),\ \ \theta=\arg\min_{\theta}\ell(f_{s}(\psi_{\theta}(x+\delta)),y)\tag{12}$$

Regionally Homogeneous Perturbations (RHP). Li et al. (2020b) observe that adversarial perturbations tend to have high regional homogeneity (i.e. gradients corresponding to neighboring pixels tend to be similar), especially when models are adversarially robust. Based on this observation, they propose to apply a parametrized normalization layer to the computed gradients that encourages *Regionally Homogeneous Perturbations* (RHP). Their objective can be written as:

$$x^{adv}=x+\epsilon\cdot\mathrm{sign}(T(\nabla_{x}\ell(f_{s}(x),y))),\tag{13}$$

where $T = \phi_{\theta}(\cdot)$ is a parametrized transformation on the gradients rather than on the input. This transformation is optimized to maximize the loss of the perturbed sample $x^{adv}$. Interestingly, they observe that when optimizing the normalization parameters on a large number of images, the generated perturbations converge to an input-independent perturbation. Hence, although not explicitly enforced, they find RHP leads to universal perturbations (i.e. perturbations that can fool the model on many different inputs).
Object-based Diverse Input (ODI). Motivated by the ability of humans to robustly recognize images even when projected onto 3D objects (e.g. an image printed on a mug or a t-shirt), Byun et al. (2022) present a method named *Object-based Diverse Input* (ODI), a variant of DIM that renders images on a set of 3D objects when computing the perturbations, as a more powerful data augmentation technique:

$$T(x)=\Pi(x,O),\tag{14}$$

where $\Pi$ is a projection operation onto the surface of a 3D mesh and $O$ represents the 3D object.

Admix. Inspired by the success of Mixup (training models with convex combinations of pairs of examples and their labels) in the context of data augmentation for classification model training (Zhang et al., 2017), Wang et al. (2021a) study this technique in the context of fostering the transferability of adversarial examples. However, they find that mixing the labels of two images significantly degrades the white-box performance of adversarial attacks and brings little improvement in terms of transferability. Hence, they present a variation of Mixup (Admix) where a small portion of a different, randomly sampled image is added to the original one, but the label remains unmodified. This increases the diversity of inputs, and thus improves transferability, but does not harm white-box performance. In this case,

$$T(x)=\eta x+\tau x^{\prime},\ \text{where}\ \eta<1\ \text{and}\ \tau<\eta.\tag{15}$$

The blending parameters $\tau, \eta$ are randomly sampled, and the restriction $\tau < \eta$ ensures the resulting image does not differ too much from the original one. The additional images $x'$ are randomly sampled from other categories.

Transferable Attack based on Integrated Gradients (TAIG). Huang & Kong (2022) leverage the concept of integrated gradients (i.e. a line integral of the gradients between two images) introduced by Sundararajan et al. (2017a). Instead of optimizing the attack based on the sign of the gradients (e.g. as in FGSM), they use the sign of the integrated gradients from the origin to the target image. Moreover, they show that following a piece-wise linear random path improves results further. We can formalize their objective as follows:

$$\tilde{x}=x+\alpha\cdot\mathrm{sign}(\mathrm{IG}(f_{s},x,x^{\prime})),\tag{16}$$

where $\mathrm{IG}(f_s, x, x')$ is the integrated gradient between $x$ and another image $x'$.

## 3.2 Optimization Technique-Based Transferability Enhancing Methods

The generation of transferable adversarial examples can be formulated as the optimization problem of Equation 6. In the last section, we presented how input data augmentations influence the transferability of the created adversarial perturbations. In this section, we describe how current work improves adversarial transferability from the perspective of the optimization technique itself. Throughout this section, we focus on perturbations constrained by an $\ell_\infty$ ball with radius $\epsilon$, that is, $||x^{adv} - x||_{\infty} \leq \epsilon$. To understand the rest of this section, we begin by formalizing the iterative variant of the fast gradient sign method (I-FGSM) (Goodfellow et al., 2014), which serves as the basis for the development of other methods. The I-FGSM has the following update rule:

$$g^{(t+1)}=\nabla\ell(x^{adv(t)},y),\qquad x^{adv(t+1)}=\mathrm{Clip}_{x}^{\epsilon}\{x^{adv(t)}+\alpha\cdot\mathrm{sign}(g^{(t+1)})\},\tag{17}$$

where $g^{(t)}$ is the gradient of the loss function with respect to the input, $\alpha$ denotes the step size at each iteration, and $\mathrm{Clip}_x^{\epsilon}$ ensures that the perturbation satisfies the $\ell_\infty$-norm constraint.
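To ground the notation, a minimal I-FGSM sketch (Equation 17) is given below; it assumes `f_s` returns logits and inputs lie in [0, 1], and the budget, step size, and iteration count are illustrative values.

```python
import torch

def i_fgsm(f_s, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal I-FGSM sketch (Eq. 17): iteratively step in the sign of
    the input gradient and clip the perturbation to the eps-ball."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(f_s(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # Clip_x^eps
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()
```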
Momentum (MI-FGSM). One of the simplest and most widely used techniques to improve the generalizability of neural networks is to incorporate momentum in the gradient direction (Polyak, 1964; Duch & Korczak, 1998). Motivated by this, Dong et al. (2018) propose the momentum iterative fast gradient sign method (MI-FGSM), which improves the vanilla iterative FGSM by integrating a momentum term into the input gradient. At each iteration, MI-FGSM updates $g^{(t+1)}$ by using

$$g^{(t+1)}=\mu\cdot g^{(t)}+\frac{\nabla\ell(x^{adv(t)},y)}{||\nabla\ell(x^{adv(t)},y)||_{1}},\tag{18}$$

where $g^{(t)}$ now represents the accumulated gradients at iteration $t$ and $\mu$ denotes the decay factor of $g^{(t)}$; the formulation for $x^{adv(t+1)}$ remains the same as in (17). By integrating the momentum term into the iterative attack process, MI-FGSM can help escape from poor local maxima, leading to higher transferability of adversarial examples (a compact code sketch of this update is given at the end of this discussion). More variants of I-FGSM have been proposed to make the created adversarial perturbations more transferable. Mirroring the development of DNN optimization techniques, the first-order moment, the second-order moment, and further components have been integrated successively to improve adversarial transferability. These variants include Nesterov (NI-FGSM) (Lin et al., 2019), Adam (AI-FGTM) (Zou et al., 2022), and Variance Tuning (VNI/VMI-FGSM) (Wang & He, 2021), the details of which can be found in Appendix A.

Stochastic Variance Reduced Ensemble (SVRE). Xiong et al. (2022) propose a variance-tuning strategy to generate transferable adversarial attacks. Conventional ensemble-based approaches leverage gradients from multiple models to increase the transferability of the perturbation. Xiong et al. (2022) view this process as a stochastic gradient descent optimization process, in which the variance of the gradients on different models may lead to poor local optima. As such, they propose to reduce the variance of the gradient by using an additional iterative procedure to compute an unbiased estimate of the gradient of the ensemble. SVRE can be generalized to any iterative gradient-based attack.

Gradient Relevance Attack (GRA). Zhu et al. (2023b) introduce GRA, an attack strategy built upon MI-FGSM and VMI-FGSM. This approach incorporates two key techniques to further increase transferability. First, the authors notice that during the update of Equation 18 the sign of the perturbation fluctuates frequently. Given the constant step size in the iterative process, this could suggest that the optimization gets trapped in local optima. To address this, they introduce a decay indicator that adjusts the step size in response to these fluctuations, thus refining the optimization procedure. Additionally, they argue that in VMI-FGSM, the tuning strategy used during the last iteration might not accurately represent the loss variance at the current iteration. Therefore, they propose to use dot-product attention (Vaswani et al., 2017) to compute the gradient relevance between $x^{adv(t)}$ and its neighbors, providing a more precise representation of the regional information surrounding $x^{adv(t)}$. This relevance framework is then used to fine-tune the update direction.
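Since the momentum accumulation of Equation 18 underlies several of the variants above, a compact sketch of one MI-FGSM iteration follows; the batch-level $\ell_1$ normalization is a simplification (the per-example form would normalize each image's gradient separately), and the interface is illustrative.

```python
import torch

def mi_fgsm_step(f_s, x_adv, y, g, mu=1.0, alpha=2/255):
    """One MI-FGSM iteration (Eq. 18): accumulate the L1-normalized input
    gradient into the momentum buffer g, then step in the sign of g."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(f_s(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    g = mu * g + grad / grad.abs().sum()    # decay factor mu + normalized gradient
    x_next = (x_adv + alpha * g.sign()).detach()
    return x_next, g                        # clipping to the eps-ball is omitted
```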
Adaptive Gradient Method (AGM). While the previous methods focus on improving the optimization algorithm, another angle of attack is through the optimization objective. Li et al. (2020a) demonstrate that the cross-entropy loss, commonly utilized in iterative gradient-based methods, is not suitable for generating transferable perturbations in the targeted scenario. Analytically, they demonstrate the problem of vanishing gradients when computing the input gradient with respect to the cross-entropy loss. Although this problem can be circumvented by projecting the gradient onto a unit $\ell_1$ ball, they empirically show that, in the targeted setting, this normalized gradient is overpowered by the momentum accumulated from early iterations. Because the historical momentum dominates the update, the effect of the gradient at each iteration diminishes, and thus the update is not optimal for finding the direction toward the targeted class. To circumvent the vanishing gradient problem, they propose to incorporate the Poincaré metric into the loss function. They empirically demonstrate that the input gradient with respect to the Poincaré metric better captures the relative magnitude between the gradient magnitude and the distance from the perturbed data point to the target class, leading to more effective iterative updates during the attack process.

Reverse Adversarial Perturbation (RAP). Improving the adversarial robustness of neural networks by training with perturbed examples has been studied under the robust optimization framework, also known as a min-max optimization problem. Similarly, Qin et al. (2022) propose to improve the transferability of adversarial examples under such a min-max bi-level optimization framework. They introduce RAP, which encourages points in the vicinity of the perturbed data point to have similarly high loss values. Unlike the conventional I-FGSM formulation, the inner maximization of RAP first finds perturbations that minimize the loss, whereas the outer minimization updates the perturbed data point to find a new point that, combined with the provided reverse perturbation, leads to a higher loss.

Momentum Integrated Gradients (MIG). The integrated gradient (IG) is a model-agnostic interpretability method that attributes the prediction of a neural network to its inputs (Sundararajan et al., 2017b). The gradient derived from this method can be understood as saliency scores assigned to the input pixels. Notably, a distinct feature of IG is its implementation invariance, meaning that the gradients only depend on the input and output of the model and are not affected by the model structure. Such a characteristic can be especially beneficial for improving the transferability of perturbations across different model architectures. Inspired by this, Ma et al. (2023) propose MIG, which incorporates IG into the MI process.

## 3.3 Loss Objective-Based Transferability Enhancing Methods

The loss objective used to create adversarial examples on the surrogate models is the cross-entropy loss in Equation 6. Various alternative designs have been explored to improve the transferability of the created adversarial examples.

Normalized CE Loss. To increase the attack strength, Zhang et al. (2022a) identify that the weakness of the commonly used losses lies in prioritizing the speed of fooling the network over maximizing the attack's strength. With an intuitive interpretation of the logit gradient from the geometric perspective, they propose a new normalized CE loss that guides the logit to be updated in the direction that implicitly maximizes its rank distance from the ground-truth class. For boosting the top-k strength, the loss function consists of two parts: the commonly used CE, and a normalization part, averaging the CE calculated for each class.
The loss function, which they term Relative CE loss or RCE loss for short, is formulated as follows:

$$\operatorname{RCE}\left(x^{adv(t)},y\right)=\operatorname{CE}\left(x^{adv(t)},y\right)-\frac{1}{K}\sum_{k=1}^{K}\operatorname{CE}\left(x^{adv(t)},y_{k}\right)\tag{19}$$

Their proposed RCE loss achieves a strong top-k attack in both the white-box and transferable black-box settings (a short code sketch of this loss is given at the end of this part).

Generative Model as Regularization. Focusing on black-box attacks in restricted-access scenarios, Xiao et al. (2021) propose to generate transferable adversarial patches (TAPs) to evaluate the robustness of face recognition models. Transferability-based adversarial attacks show that certain adversarial examples crafted on a white-box substitute model $g$ can remain adversarial to black-box target models $f$. Suppose $g$ is a white-box face recognition model that is accessible to the attacker; the optimization problem to generate the adversarial patch on the substitute model can be described as follows:

$$\max_{\mathbf{x}}\mathcal{L}_{g}\left(\mathbf{x},\mathbf{x}_{t}\right),\ \text{s.t.}\ \mathbf{x}\odot(1-\mathbf{M})=\mathbf{x}_{s}\odot(1-\mathbf{M}),\tag{20}$$

where $\mathcal{L}_g$ is a differentiable adversarial objective, $\odot$ is the element-wise product, and $\mathbf{M} \in \{0,1\}^n$ is a binary mask. However, when solving this optimization problem, even state-of-the-art optimization algorithms struggle to escape local optima with unsatisfactory transferability. To address this optimization challenge, Xiao et al. (2021) propose to optimize the adversarial patch on a low-dimensional manifold as a regularization. Since the manifold imposes a specific structure on the optimization dynamics, they argue that a good manifold should have two properties: sufficient capacity and good regularization. To balance the requirements of capacity and regularity, they learn the manifold using a generative model, which is pretrained on natural human face data and can combine different face features by manipulating latent vectors to generate diverse, unseen faces (e.g., different eye colors, eyebrow thicknesses, etc.). They propose to use the generative model to generate adversarial patches and to optimize the patches through the latent vectors. The above optimization problem is thus revised as:

$$\max_{\mathbf{s}\in\mathcal{S}}\mathcal{L}_{g}\left(\mathbf{x},\mathbf{x}_{t}\right),\ \text{s.t.}\ \mathbf{x}\odot(1-\mathbf{M})=\mathbf{x}_{s}\odot(1-\mathbf{M}),\ \ \mathbf{x}\odot\mathbf{M}=h(\mathbf{s})\odot\mathbf{M},\tag{21}$$

where the second constraint restricts the adversarial patch to the low-dimensional manifold represented by the generative model, $h(\mathbf{s}): \mathcal{S} \to \mathbb{R}^n$ denotes the pre-trained generative model, and $\mathcal{S}$ is its latent space. When constrained to this manifold, the adversarial perturbations resemble face-like features. For different face recognition models, they expect the responses to face-like features to be effectively correlated, which improves the transferability of the adversarial patches.
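Going back to Equation 19, the RCE loss is straightforward to express in code; a minimal sketch follows, assuming `logits` is a batch of class scores (the helper name is ours).

```python
import torch
import torch.nn.functional as F

def rce_loss(logits, y):
    """Relative CE loss (Eq. 19): the usual CE term minus the CE
    averaged over all K classes, which implicitly pushes the
    ground-truth class down in the logit ranking."""
    ce_true = F.cross_entropy(logits, y)
    # CE against every class k is -log p_k; average over the K classes
    log_p = F.log_softmax(logits, dim=1)
    ce_all = (-log_p).mean(dim=1).mean()
    return ce_true - ce_all
```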
Metric Learning as Regularization. Most of the previous research on transferability has been conducted in non-targeted contexts. However, targeted adversarial examples are more difficult to transfer than non-targeted ones. Li et al. (2020a) find two problems that make it difficult to produce transferable targeted examples: (1) during an iterative attack, the magnitude of the gradient gradually decreases, leading to excessive consistency between two consecutive noises during momentum accumulation, which is referred to as noise curing; (2) targeted adversarial examples must not only approach the target class but also stray from the ground-truth class. To overcome these two problems, they discard the standard Euclidean space and, for the first time, introduce the Poincaré ball as the metric space, which alleviates noise curing by making the magnitude of the gradient adaptive and the noise direction more flexible during the iterative attack process. Instead of the traditional cross-entropy loss, they propose the Poincaré Distance Metric loss $\ell_{Po}$, which makes the gradient grow only as the input approaches the target class. Since all the points of the Poincaré ball lie inside an $n$-dimensional unit $\ell_2$ ball, the distance between two points can be defined as:

$$d\left(u,v\right)=\operatorname{arccosh}\left(1+\delta\left(u,v\right)\right),\tag{22}$$

where $u$ and $v$ are two points in the $n$-dimensional Euclidean space $\mathbb{R}^n$ with $\ell_2$ norm less than one, and $\delta(u,v)$ is an isometric invariant defined as follows:

$$\delta\left(u,v\right)=2\frac{\left\|u-v\right\|^{2}}{(1-\left\|u\right\|^{2})(1-\left\|v\right\|^{2})}.\tag{23}$$

However, the fused logits do not satisfy $\|l(x)\|_2 < 1$, so they normalize the logits by their $\ell_1$ norm. Moreover, they subtract a small constant $\xi = 0.0001$ from the one-hot target label $y$, because the distance from any point to $y$ itself would be $+\infty$. Thus, the final Poincaré distance metric loss can be described as follows:

$$\ell_{Po}\left(x,y\right)=d\left(u,v\right)=\operatorname{arccosh}\left(1+\delta\left(u,v\right)\right),\tag{24}$$

where $u = l_k(x)/\|l_k(x)\|_1$, $v = \max\{y-\xi,0\}$, and $l(x)$ denotes the fused logits.

In targeted attacks, the loss function usually only considers the desired target label. But sometimes the generated adversarial examples remain too similar to the original class, causing some of them to still be classified correctly by the target model. Therefore, they also utilize the true label information during the iterative attack through a triplet loss, which helps the adversarial examples move away from the true label and thereby attain better transferability. They use the logits of the clean image $l(x_{clean})$, the one-hot target label $y_{tar}$, and the true label $y_{true}$ as the triplet input:

$$\ell_{trip}\left(y_{tar},l(x_{i}),y_{true}\right)=\left[D\left(l(x_{i}),y_{tar}\right)-D\left(l(x_{i}),y_{true}\right)+\gamma\right]_{+}.\tag{25}$$

Since $l(x^{adv})$ is not normalized, they use the angular distance $D(\cdot)$ as the distance metric, which eliminates the influence of the norm on the loss value:

$$D\left(l(x^{adv}),y_{tar}\right)=1-\frac{\left|l(x^{adv})\cdot y_{tar}\right|}{\left\|l(x^{adv})\right\|_{2}\left\|y_{tar}\right\|_{2}}.\tag{26}$$

Therefore, after adding the triplet loss term, their overall loss function is:

$$\ell_{all}=\ell_{Po}\left(l(x),y_{tar}\right)+\lambda\cdot\ell_{trip}\left(y_{tar},l(x_{i}),y_{true}\right).\tag{27}$$
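A minimal sketch of the Poincaré distance metric loss of Equations 22-24 is given below; `logits` stands in for the fused logits $l(x)$, and the helper name and interface are ours.

```python
import torch
import torch.nn.functional as F

def poincare_loss(logits, y_target, xi=1e-4):
    """Poincare distance metric loss (Eqs. 22-24): arccosh distance
    between the L1-normalized logits u and the shrunk one-hot target
    v = max{y - xi, 0}."""
    u = logits / logits.abs().sum(dim=1, keepdim=True)   # ||u||_2 <= ||u||_1 = 1
    one_hot = F.one_hot(y_target, num_classes=logits.shape[1]).float()
    v = torch.clamp(one_hot - xi, min=0)                 # ||v||_2 = 1 - xi < 1
    num = (u - v).pow(2).sum(dim=1)                      # ||u - v||^2
    den = (1 - u.pow(2).sum(dim=1)) * (1 - v.pow(2).sum(dim=1))
    return torch.acosh(1 + 2 * num / den).mean()         # Eq. 22 with Eq. 23
```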
Simple Logit Loss. Zhao et al. (2021) review transferable targeted attacks and find that their difficulty is overestimated due to blind spots in traditional evaluation procedures, since previous works have unreasonably restricted attack optimization to only a few iterations. Their study shows that, given enough iterations, even conventional I-FGSM integrated with simple transfer methods can achieve high targeted transferability. They also demonstrate that attacks utilizing a simple logit loss can further improve targeted transferability by a very large margin, leading to results that are competitive with state-of-the-art techniques. The simple logit loss can be expressed as:

$$\ell=\max_{x^{\prime}}Z_{t}(x^{\prime}),\tag{28}$$

where $Z_t(\cdot)$ denotes the logit output before the softmax layer with respect to the target class.

Meta Loss. Instead of simply generating adversarial perturbations directly, Fang et al. (2022) optimize them using meta-learning concepts so that the perturbations can better adapt to various conditions. The meta-learning method is implemented in each iteration of the perturbation update: the tasks are divided into a support set $\mathbf{S}$ and a query set $\mathcal{Q}$, meta-training (training on the support set) and meta-testing (fine-tuning on the query set) are performed multiple times, and finally the adversarial perturbation is updated. In each meta-learning iteration, they sample a subset $S_i \in \mathbf{S}$ and calculate the average gradient with respect to the input as:

$$\mathbf{g}_{spt}=\frac{1}{|\mathbf{S}_{i}|}\sum_{(\mathbf{x}_{s},\gamma_{s})\in S_{i}}G\left(\mathbf{x}_{s},\gamma_{s}\right),\tag{29}$$

where $G$ denotes the gradient update of adversarial perturbations as in I-FGSM: $G(\mathbf{x}_{adv}, f) = \nabla_{\mathbf{x}_{adv}}\mathcal{L}(f(\mathbf{x}_{adv}), y)$. Here $\gamma = [\gamma_1, \gamma_2, ..., \gamma_L]^{\top} \in [0,1]^L$ denotes the set of decay factors during model augmentation, where $\gamma_i$ is the decay factor for the $i$-th residual layer. Since the optimization of $\gamma$ represents the augmentation of the model $f$, for ease of writing they replace $\gamma$ with $f$, i.e., $G(\mathbf{x}_{adv}, \gamma_s) = G(\mathbf{x}_{adv}, f) = \nabla_{\mathbf{x}_{adv}}\mathcal{L}(f(\mathbf{x}_{adv}), y)$. Then, similar to FGSM, they obtain a temporary perturbation with a single-step update:

$$\delta^{\prime}=\epsilon\cdot\mathrm{sign}(\mathbf{g}_{spt}).\tag{30}$$

Next, they fine-tune on the query set $\mathcal{Q}$ and compute the query gradient $\mathbf{g}_{qry}$ after adding the temporary perturbation, so that the perturbation can adapt to more tasks:

$$\mathbf{g}_{qry}=\frac{1}{|\mathcal{Q}|}\sum_{(\mathbf{x}_{q},\gamma_{q})\in\mathcal{Q}}G(\mathbf{x}_{q}+\delta^{\prime},\gamma_{q}).\tag{31}$$

Finally, they update the actual adversarial perturbation with the gradients from both the support set and the query set for maximum utilization:

$$\mathbf{x}_{adv}^{t+1}=\Pi_{\epsilon}^{\mathbf{x}}\left(\mathbf{x}_{adv}^{t}+\alpha\cdot\mathrm{sign}\left(\overline{\mathbf{g}}_{spt}+\overline{\mathbf{g}}_{qry}\right)\right)\tag{32}$$

where $\overline{\mathbf{g}}_{spt}$ and $\overline{\mathbf{g}}_{qry}$ denote the average gradients over the meta-learning iterations, respectively.
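The support/query structure of Equations 29-32 can be sketched as follows; for simplicity, the "tasks" here are plain surrogate models rather than the decay-factor model augmentations $\gamma$ used in the paper, so this is only a structural illustration under that assumption.

```python
import torch

def meta_gradient_step(x_adv, y, support_models, query_models, eps, alpha):
    """One meta-learning perturbation update in the spirit of Eqs. 29-32:
    average support gradients, form a temporary perturbation, re-evaluate
    on the query tasks, then combine both gradients (projection omitted)."""
    def input_grad(f, x):
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(f(x), y)
        return torch.autograd.grad(loss, x)[0]

    g_spt = sum(input_grad(f, x_adv) for f in support_models) / len(support_models)
    delta_tmp = eps * g_spt.sign()                  # Eq. 30: temporary perturbation
    g_qry = sum(input_grad(f, x_adv + delta_tmp) for f in query_models) / len(query_models)
    return x_adv + alpha * (g_spt + g_qry).sign()   # Eq. 32, single meta-iteration
```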
Domain Transferability through Regularization. Xu et al. (2022b) propose a theoretical framework to analyze the sufficient conditions for domain transferability from the view of function class regularization. They prove that shrinking the function class of feature extractors during training monotonically decreases a tight upper bound on the relative domain transferability loss. Therefore, it is reasonable to expect that imposing regularization on the feature extractor during training can lead to better relative domain transferability.

More Bayesian Attack. Many existing works enhance attack transferability by increasing the diversity of the inputs to some substitute models. Li et al. (2023) propose to attack a Bayesian model to achieve desirable transferability. By introducing probability measures for the weights and biases of the substitute models, all these parameters can be represented under a to-be-learned distribution. In this way, an infinite number of ensembles of DNNs (which appear to be jointly trained) can be obtained in a single training session. Adversarial examples are then produced by maximizing the average predictive loss over this model distribution, whose posterior is learned in a Bayesian manner. The attack performed on the ensemble of a set of $M$ models can be formulated as:

$$\arg\min_{\|\Delta\mathbf{x}\|_{p}\leq\epsilon}\frac{1}{M}\sum_{i=1}^{M}p\left(y\mid\mathbf{x}+\Delta\mathbf{x},\mathbf{w}_{i}\right)=\arg\max_{\|\Delta\mathbf{x}\|_{p}\leq\epsilon}\frac{1}{M}\sum_{i=1}^{M}L\left(\mathbf{x}+\Delta\mathbf{x},y,\mathbf{w}_{i}\right),\ \text{s.t.}\ \mathbf{w}_{i}\sim p(\mathbf{w}\mid\mathcal{D}),\tag{33}$$

where $L(\cdot,\cdot,\mathbf{w}_i)$ is a function evaluating the prediction loss of a DNN model parameterized by $\mathbf{w}_i$. Using iterative optimization methods, different sets of models can be sampled at different iteration steps, as if an infinite number of substitute models existed.

Lightweight Ensemble Attack (LEA). Qian et al. (2023) notice that three models with non-overlapping vulnerable frequency regions can cover a sufficiently large vulnerable subspace. Based on this finding, they propose LEA2, a lightweight ensemble adversarial attack consisting of standard, weakly robust, and robust models. Furthermore, they analyze Gaussian noise from a frequency perspective and find that Gaussian noise resides in the vulnerable frequency regions of standard models. As a result, they replace the standard models with Gaussian noise to ensure that the high-frequency vulnerable regions are still exploited while lowering the attack's time consumption. They first define the vulnerable frequency regions, and the adversarial example $x'$ is generated by the following optimization:

$$\arg\max_{\delta}-\log\left(\left(\sum_{i=1}^{M_{1}}w_{i}S_{\mathrm{robust}}^{i}\left(x+r+\delta\right)+\sum_{j=1}^{M_{2}}w_{j}S_{\mathrm{weak}}^{j}\left(x+r+\delta\right)\right)\cdot\mathbf{1}_{y}\right),\tag{34}$$

where $r \sim N(0,\sigma^2)$ is the Gaussian noise, $M_1$ and $M_2$ are the numbers of robust models and weakly robust models respectively, $S_{\mathrm{robust}}$ and $S_{\mathrm{weak}}$ represent the corresponding softmax outputs, $\mathbf{1}_y$ is the one-hot encoding of $y$, and $\sum_{i=1}^{M_{1}}w_{i}+\sum_{j=1}^{M_{2}}w_{j}=1$.
Together with the constraint $\|x'-x\|_{\infty}\leq\epsilon$, their adversarial example updating process can be described as follows:

$$x_{t+1}^{\prime}=\mathrm{Clip}_{x,\epsilon}\left\{x_{t}^{\prime}+\alpha\cdot\mathrm{sign}\left(\nabla_{x}\mathcal{L}\left(x_{t}^{\prime},y\right)\right)\right\},\tag{35}$$

$$\mathcal{L}\left(x_{t}^{\prime},y\right)=\sum_{i=1}^{M_{1}}w_{i}\mathcal{L}\left(h_{robust}^{i}\left(x_{t}^{\prime}\right),y\right)+\sum_{j=1}^{M_{2}}w_{j}\mathcal{L}\left(h_{weak}^{j}\left(x_{t}^{\prime}\right),y\right),\tag{36}$$

where $h^{i}_{robust}$ and $h^{j}_{weak}$ represent the robust models and weakly robust models respectively, and $w$ denotes the corresponding ensemble weights.

Adaptive Model Ensemble Attack (AdaEA). Existing ensemble attacks simply fuse the outputs of the surrogate models uniformly, and thus do not effectively capture and amplify the intrinsic transfer information of the adversarial examples. Chen et al. (2023a) propose AdaEA, which adaptively controls the fusion of each model's output by monitoring the discrepancy ratio of their contributions towards the adversarial objective. An extra disparity-reduced filter is then introduced to further synchronize the update direction. The basic idea of an ensemble attack is to utilize the outputs of multiple white-box models to obtain an average model loss and then apply a gradient-based attack to generate adversarial examples. In their work, AdaEA is equipped with adaptive gradient modulation (AGM) and a disparity-reduced filter (DRF) to amend the gradient optimization process and boost the transferable information in the generated adversarial examples. In detail, the AGM strategy adaptively combines the outputs of each model through an adversarial ratio, thus increasing the strength of the transferable information in the generated adversarial examples, while the DRF decreases the differences between the surrogate models by computing a discrepancy map and synchronizing the update direction. The process of AdaEA can be summarized by the following equations:

$$x_{t+1}^{adv}=\mathrm{Clip}_{x}^{\epsilon}\left\{x_{t}^{adv}+\alpha\cdot\mathrm{sign}\left(g_{t+1}^{ens}\right)\right\},\tag{37}$$

$$g_{t+1}^{ens}=\nabla_{x_{t}^{adv}}\mathcal{L}\left(\sum_{k=1}^{K}w_{k}^{*}f_{k}\left(x_{t}^{adv}\right),y\right)\otimes\mathcal{B},\tag{38}$$

where $g^{ens}$ represents the ensemble gradient of the $K$ models, $\mathcal{B}$ represents a filter that cleans the disparity part of the ensemble gradient, and $\otimes$ denotes element-wise multiplication.
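Stripped of the adaptive weighting (AGM) and the disparity filter (DRF), the core of such ensemble attacks reduces to a weighted multi-model gradient; below is a minimal sketch, using the loss-level fusion of Equation 36 (rather than the logit-level fusion of Equation 38) as one common simplification, with illustrative names.

```python
import torch

def ensemble_grad(models, weights, x_adv, y):
    """Ensemble-attack gradient in the spirit of Eqs. 35-36: sum the
    per-model losses with fixed weights and differentiate once with
    respect to the input."""
    x = x_adv.clone().detach().requires_grad_(True)
    loss = sum(w * torch.nn.functional.cross_entropy(f(x), y)
               for f, w in zip(models, weights))
    return torch.autograd.grad(loss, x)[0]   # feed into a sign-step update
```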
## 3.4 Model Component-Based Transferability Enhancing Methods

In this subsection, we introduce transferability-enhancing approaches based on various model components. Typically, the components are from the surrogate model in Equation 6.

Features. Several methods aim to improve transfer attacks by considering the feature space of the source model, in order to generate noise that overfits less to the specific architecture. Zhou et al. (2018) first demonstrated that maximizing the distance of intermediate feature maps between clean images and adversarial examples can enhance transfer attacks across models. They introduced two additional penalty terms into the loss function to efficiently guide the search directions of traditional untargeted attacks. The optimization problem can be described as follows:

$$x^{adv}=\operatorname*{arg\,max}_{\|x-x^{adv}\|^{\gamma}\leq\epsilon}\ell(x^{adv},t)+\lambda\sum_{d\in D}\left\|T(L(x,d))-T(L(x^{adv},d))\right\|^{2}+\eta\sum_{i}\operatorname{abs}R_{i}(x^{adv}-x,w_{s}),\tag{39}$$

where the first term represents the traditional untargeted attack loss, and $L(x,d)$ denotes the intermediate feature map at layer $d \in D$. Here, $T(L(x,d))$ signifies the power normalization (Perronnin et al., 2010) of $L(x,d)$. The regularization serves as a form of low-pass filter, enforcing the continuity of neighboring pixels and reducing the variation of the adversarial perturbations. Naseer et al. (2018) and Hashemi et al. (2022) followed this idea and generated adversarial examples that exhibit transferability across different network architectures and various vision tasks, including image segmentation, classification, and object detection.

Huang et al. (2019a) proposed the Intermediate Level Attack (ILA), which fine-tunes an existing adversarial example by magnifying the impact of the perturbation on a pre-specified layer of the source model. Given an adversarial example $x'$ generated by a baseline attack, which serves as a hint, ILA aims to find an $x''$ such that the optimized direction matches the direction of $\Delta y'_l = F_l(x') - F_l(x)$ while maximizing the norm of the disturbance in this direction, $\Delta y''_l = F_l(x'') - F_l(x)$. Within this framework, they propose two variants, ILAP and ILAF. ILAP simply adopts the dot product for the maximization problem, while ILAF augments the loss by separating the maintenance of the adversarial direction from its magnitude and controls the trade-off with an additional parameter $\alpha$:

$$\mathcal{L}_{ILAP}\left(y_{l}^{\prime},y_{l}^{\prime\prime}\right)=-\Delta y_{l}^{\prime}\cdot\Delta y_{l}^{\prime\prime},\tag{40}$$

$$\mathcal{L}_{ILAF}\left(y_{l}^{\prime},y_{l}^{\prime\prime}\right)=-\alpha\cdot\frac{\|\Delta y_{l}^{\prime\prime}\|_{2}}{\|\Delta y_{l}^{\prime}\|_{2}}-\frac{\Delta y_{l}^{\prime\prime}}{\|\Delta y_{l}^{\prime\prime}\|_{2}}\cdot\frac{\Delta y_{l}^{\prime}}{\|\Delta y_{l}^{\prime}\|_{2}}.\tag{41}$$

Salzmann et al. (2021) introduced a transferable adversarial perturbation generator that employs a feature separation loss, with the objective of maximizing the $L_2$ distance between the normal feature map $f_l(x_i)$ and the adversarial feature map $f_l(x_i^{adv})$ at layer $l$. This is defined as:

$$\mathcal{L}_{feat}(x_{i},x_{i}^{adv})=\left\|f_{l}(x_{i})-f_{l}(x_{i}^{adv})\right\|_{F}^{2}.\tag{42}$$
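Feature-space objectives such as Equation 42 are typically implemented with forward hooks; the following is a minimal sketch, where `layer` is any `torch.nn.Module` inside the surrogate and the helper name is ours.

```python
import torch

def feature_sep_loss(f_s, layer, x, x_adv):
    """Feature separation loss (Eq. 42): squared distance between the
    clean and adversarial feature maps at a chosen layer, captured
    with a temporary forward hook on the surrogate."""
    feats = []
    handle = layer.register_forward_hook(lambda mod, inp, out: feats.append(out))
    f_s(x)        # first pass records the clean feature map
    f_s(x_adv)    # second pass records the adversarial feature map
    handle.remove()
    return (feats[0] - feats[1]).pow(2).sum()
```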
The above methods often get trapped in a local optimum and tend to overfit to the source model by indiscriminately distorting features, without considering the intrinsic characteristics of the images. To overcome this limitation, Ganeshan et al. (2019) proposed the Feature Disruptive Attack (FDA), which disrupts features at every layer of the network and causes the deep features to be highly corrupted. For a given $i$-th layer $l_i$, they increase the layer objective $\mathcal{L}$:

$$\mathcal{L}\left(l_{i}\right)=D\left(\left\{l_{i}(\tilde{x})_{N_{j}}\mid N_{j}\notin S_{i}\right\}\right)-D\left(\left\{l_{i}(\tilde{x})_{N_{j}}\mid N_{j}\in S_{i}\right\}\right),\tag{43}$$

where $l_i(\tilde{x})_{N_j}$ denotes the $N_j$-th value of $l_i(\tilde{x})$, and $S_i$ denotes the set of activations that contribute to the current prediction. While this set is not straightforward to find, it can be approximated using a measure of central tendency, such as the median or the inter-quartile mean. $D$ is a monotonically increasing function of the activations $l_i(\tilde{x})$. They apply this objective at each non-linearity in the network and combine the per-layer objectives into the final goal. FDA treats all neurons as important, differentiating the polarity of neuron importance by mean activation values. In contrast, Wang et al. (2021c) proposed the Feature Importance-aware Attack (FIA), which improves the transferability of adversarial examples by disrupting the critical object-aware features that dominate the decisions of different models. FIA leverages feature importance, obtained by averaging the gradients with respect to the feature maps of the source model, to guide the search for adversarial examples. Zhang et al. (2022c) introduced the Neuron Attribution-based Attack (NAA), a feature-level attack based on more accurate neuron importance estimations. NAA attributes the model's output to each neuron and devises an approximation scheme for neuron attribution, significantly reducing the computation cost. Subsequently, NAA minimizes a weighted combination of positive and negative neuron attribution values to generate transferable adversarial examples.

Wu et al. (2020b) proposed to alleviate overfitting through model attention. They consider an entire feature map as a fundamental feature detector and approximate the importance of the feature map $A_k^c$ (the $c$-th feature map in layer $k$) to class $t$ with spatially pooled gradients:

$$\alpha_{k}^{c}[t]=\frac{1}{Z}\sum_{m}\sum_{n}\frac{\partial f(\mathbf{x})[t]}{\partial A_{k}^{c}[m,n]}.\tag{44}$$

They scale the different feature maps with the corresponding model attention weights $\alpha_k^c[t]$ and perform a channel-wise summation over all feature maps in the same layer, deriving the attention map for the label prediction $t$ as follows:

$$H_{k}^{t}=\mathrm{ReLU}\left(\sum_{c}\alpha_{k}^{c}[t]\cdot A_{k}^{c}\right).\tag{45}$$

Finally, they combine the original goal, which aims to mislead the final decision of the target model, with the attention goal, which aims to destroy the vital intermediate features:

$$\operatorname*{arg\,max}_{\delta}\quad\mathcal{L}\left(f\left(\mathbf{x}^{adv}\right),t\right)+\lambda\sum_{k}\left\|H_{k}^{t}\left(\mathbf{x}^{adv}\right)-H_{k}^{t}(\mathbf{x})\right\|^{2}.\tag{46}$$

The above transfer attack methods only apply to untargeted attacks. Rozsa et al. (2017) and Inkawhich et al. (2019) first describe a transfer-based targeted adversarial attack that manipulates feature-space representations to resemble those of a target image. The Activation Attack (AA) loss is defined to push the source image $I_s$ closer to an image of the target class $I_t$ in feature space:

$$J_{AA}\left(I_{t},I_{s}\right)=\left\|f_{L}\left(I_{t}\right)-f_{L}\left(I_{s}\right)\right\|_{2}=\left\|A_{t}^{L}-A_{s}^{L}\right\|_{2},\tag{47}$$

where $J_{AA}$ is the Euclidean distance between the vectorized source image activations and the vectorized target image activations at layer $L$, and $f_L$ is a truncated version of a white-box model $f_w$. However, this method is challenging to scale to larger models and datasets due to its lack of modeling of the target class and its sensitivity to the chosen target sample.
Inkawhich et al. (2020b;a) propose to model the class-wise feature distributions at multiple layers of the white-box model, aiming for a more comprehensive representation of the target class in targeted attacks. Initially, they model the feature distribution of a DNN using an auxiliary neural network $g_{l,c}$ that learns $p(y=c\mid f_l(x))$, i.e. the probability that the features at layer $l$ of the white-box model, extracted from input image $x$, belong to class $c$. Subsequently, the attack employs these learned distributions to generate targeted adversarial examples by maximizing the probability that the adversarial example originates from the feature distribution of the specified class. Additionally, they develop a flexible framework that extends from a single layer to multiple layers, emphasizing the explainability and interpretability of the attack process.

Some other works have also explored the properties of adversarial examples in the feature space. Wang et al. (2021b) discovered a negative correlation between transferability and perturbation interaction units, providing a new perspective for understanding the transferability of adversarial perturbations. They demonstrated that multi-step attacks tend to generate adversarial perturbations with significant interactions, while classical transferability-enhancing methods essentially decrease the interactions between perturbation units. Therefore, they proposed a new loss that directly penalizes interactions between perturbation units during an attack, significantly improving the transferability of previous methods. Waseda et al. (2023) demonstrated that adversarial examples tend to cause the same mistakes for non-robust features, and that different mistakes can also occur between similar models regardless of the perturbation size. Both the different and the same mistakes can be explained by non-robust features, providing novel insights for developing transferable adversarial examples based on non-robust features.

Batch Normalization (BN). Benz et al. (2021) investigate the effect of BN on deep neural networks from a non-robust feature perspective. Their empirical findings suggest that BN causes the model to rely more heavily on non-robust features and increases its susceptibility to adversarial attacks. Furthermore, they demonstrate that a substitute model trained without BN outperforms its BN-equipped counterpart, and that early stopping the training of the substitute model can also boost transferable attacks.

Skip Connections. Wu et al. (2020a) find that skip connections facilitate the generation of highly transferable adversarial examples. They introduce the Skip Gradient Method (SGM), which uses more of the gradient flow from the skip connections rather than the residual branches by applying a decay factor on the gradients. Combined with existing techniques, SGM can drastically boost state-of-the-art transferability.

ReLU Activation. Guo et al. (2020) propose to boost transferability by enhancing the linearity of deep neural networks in an appropriate manner. To achieve this goal, they propose a simple yet very effective technique dubbed linear backpropagation (LinBP), which performs backpropagation in a more linear fashion and works with off-the-shelf attacks that exploit gradients. Specifically, LinBP computes the forward pass as normal but backpropagates the loss linearly, as if no ReLU activation were encountered.
Patch Representation. Naseer et al. (2022) propose the Self-Ensemble (SE) method, which finds multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. They also introduce a Token Refinement (TR) module to refine the tokens and enhance the discriminative capacity at each block of the ViT. While this method shows promising performance, it has limited applicability, since many ViT models lack enough class tokens for building an ensemble, and TR is time-consuming. Wei et al. (2018) find that ignoring the gradients of attention units and perturbing only a subset of the patches at each iteration can prevent overfitting and create diverse input patterns, thereby increasing transferability. They propose a dual attack framework consisting of a Pay No Attention attack and a PatchOut attack to improve the transferability of adversarial examples across different ViTs.

Ensemble of Models. Liu et al. (2017) first propose generating transferable adversarial examples by utilizing an ensemble of multiple models with varying architectures. Gubri et al. (2022b) present a geometric approach to enhancing the transferability of black-box adversarial attacks by introducing Large Geometric Vicinity (LGV). LGV constructs a surrogate model by collecting weights along the SGD trajectory with high constant learning rates, starting from a conventionally trained deep neural network. Gubri et al. (2022a) develop a highly efficient method for constructing a surrogate based on state-of-the-art Bayesian deep learning techniques. Their approach approximates the posterior distribution of the neural network weights, which represents the belief about the value of each parameter. Similarly, Li et al. (2023) adopt a Bayesian formulation to develop a principled strategy for fine-tuning, which can be combined with various off-the-shelf Gaussian posterior approximations over the parameters of deep neural networks. Huang et al. (2023) focus on the single-model transfer-based black-box attack in object detection. They propose an enhanced attack framework that makes slight adjustments to the training strategies and draws an analogy between patch optimization and regular model optimization. In addition, they propose a series of self-ensemble approaches over the input data, the attacked model, and the adversarial patch to efficiently utilize the limited information and prevent the patch from overfitting.

## 4 Generation-Based Transferable Attacks

The optimization-based adversarial attacks discussed in the previous section use gradients from surrogate models to iteratively optimize bounded perturbations for each clean image at test time. In this section, we introduce generation-based transferability-enhancing methods. This class of methods takes an alternative approach by directly synthesizing the adversarial example (or the adversarial perturbation) with a generative model. Generation-based attacks comprise two stages: a training stage and an attack stage. During the training stage, the attacker trains a generative model $\mathcal{G}_{\theta}(\cdot)$, a function parameterized by $\theta$ that outputs either the adversarial example $x^{adv}$ or an adversarial perturbation $\delta$. The optimization of the generator parameters can be formulated as follows:

$$\max_{\theta}\mathbb{E}_{(x,y)}\,l(f_{s}(\mathcal{G}_{\theta}(\cdot)),y)\tag{48}$$

where $f_s(\cdot)$ is a surrogate model and $\mathcal{G}_{\theta}(\cdot)$ directly generates the adversarial example. If the generator predicts the perturbation $\delta$ instead, the loss becomes $l(f_s(\mathcal{G}_{\theta}(\cdot)+x),y)$.
In the case of targeted attacks, the optimization is described as:

$$\min_{\theta}\mathbb{E}_{(x,y_{t})}\,l(f_{s}(\mathcal{G}_{\theta}(\cdot)),y_{t})\tag{49}$$

Note that a surrogate model $f_s(\cdot)$ is involved in the first stage of the generative-model-based attack. During the attack stage, adversarial examples are obtained directly with a single forward pass through the learned generator $\mathcal{G}_{\theta}(\cdot)$. The input to the generator varies depending on the problem formulation. For input-dependent (Poursaeed et al., 2018; Naseer et al., 2019) adversarial perturbation generation, where the goal is to generate a perturbation specific to the given input $x$, we have:

$$x^{adv}=\mathcal{G}_{\theta}(x)\tag{50}$$

where any smoothing, additive, or clipping operations can be absorbed into the mapping $\mathcal{G}$ (alternatively, we can generate the perturbation instead of $x^{adv}$, that is, $\delta = \mathcal{G}_{\theta}(x)$). For universal adversarial perturbations (Poursaeed et al., 2018), where the perturbation can be added to any $x$, we input a fixed noise $z$ to the generator function:

$$\delta=\mathcal{G}_{\theta}(z)\tag{51}$$

Generative models are believed to possess several properties that can help achieve improved imperceptibility and transferability. Firstly, only a single model forward pass is required at test time once the generator is trained. This avoids the costly iterative perturbation process and thus allows highly efficient adversarial attacks to be performed in an online fashion. Secondly, generators are less reliant on class-boundary information from the surrogate classifier, since they can model latent distributions (Naseer et al., 2021). Finally, generative models provide latent spaces from which perturbations can be injected. This enables the search for adversaries in the lower-dimensional latent space rather than directly within the input data space, resulting in smoother perturbations with improved photorealism, diversity, and imperceptibility.

## 4.1 Methods Based On Unconditional Generation

Generative Adversarial Perturbations (GAP). Poursaeed et al. (2018) introduce generative models to the task of adversarial sample synthesis. In this work, the generator produces the perturbation ($\delta = \mathcal{G}_{\theta}(\cdot)$) as opposed to generating $x^{adv}$ directly. They investigate two variations: generating input-dependent adversarial perturbations and universal adversarial perturbations. For the former, they use

$$\max_{\theta}\mathbb{E}_{(x,y)}\,l(f_{s}(\mathcal{G}_{\theta}(x)+x),y)\tag{52}$$

as the generator training objective, where $l$ is defined as the cross-entropy loss. They also investigate generating universal perturbations, where the perturbation $\delta$ can be added to any input image. In this case, the generator is given a fixed noise $z$ as input (Equation 51), and the optimization term becomes:

$$\max_{\theta}\mathbb{E}_{(x,y)}\,l(f_{s}(\mathcal{G}_{\theta}(z)+x),y)\tag{53}$$

Their work demonstrates the high efficiency of learned generators for creating both targeted and untargeted adversarial examples.
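A minimal GAP-style training loop for the input-dependent objective of Equation 52 might look as follows; `G` is any image-to-image network, the tanh squashing used to bound the perturbation is one common choice rather than the paper's exact scheme, and all hyperparameters and names are illustrative. The surrogate `f_s` is assumed frozen.

```python
import torch

def train_perturbation_generator(G, f_s, loader, eps=10/255, epochs=1, lr=2e-4):
    """Train a generator so that its bounded perturbation maximizes the
    surrogate's loss (Eq. 52, untargeted).  Sketch only."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            delta = eps * torch.tanh(G(x))       # bound perturbation to [-eps, eps]
            x_adv = (x + delta).clamp(0, 1)
            # maximize CE on the surrogate by descending its negation
            loss = -torch.nn.functional.cross_entropy(f_s(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return G
```

At attack time, a single forward pass `x + eps * torch.tanh(G(x))` yields the adversarial example, which is the efficiency advantage discussed above.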
$$\min_{\phi}\max_{\theta}\mathbb{E}_{(x)}\left(l(f_{s}(\mathcal{G}_{\theta}(x)),y)+\log\left(1-\mathcal{D}_{\phi}(x)\right)+\log\left(\mathcal{D}_{\phi}(\mathcal{G}_{\theta}(x))\right)-\max(0,||\mathcal{G}(x)||_{2}-c)\right)\tag{54}$$

Cross Domain Adversarial Perturbation. Naseer et al. (2019) investigate the usage of generative models in generating adversarial attacks that transfer across different input domains. They propose to use a relativistic loss in the generator training objective to enhance the transferability of cross-domain targeted attacks. The relativistic cross-entropy objective (equation 55) is believed to provide a "contrastive" supervision signal that is agnostic to the underlying data distribution and hence achieves superior cross-domain transferability.

$$\mathcal{L}:=\mathrm{CE}(f_{s}(x)-f_{s}(\mathcal{G}_{\theta}(x)))\tag{55}$$

Distribution and Neighbourhood Similarity Matching. To achieve good transferability for cross-domain targeted attacks, Naseer et al. (2021) propose a novel objective that considers both global distribution matching as well as sample local neighborhood structures. In addition to solving the optimization problem in equation 48, they propose to add two loss terms: (1) one that minimizes the (scaled) Jensen-Shannon Divergence between the distribution of perturbed adversarial samples from the source domain p^s(Gθ(x)) and the distribution of real samples from the target class in the target domain p^t(x|y_t); (2) a second term that aligns the source and target similarity matrices S^s and S^t, defined as $S^{s}_{i,j}:=\frac{f(x^{i}_{s})\cdot f(x^{j}_{s})}{\|f(x^{i}_{s})\|\|f(x^{j}_{s})\|}$ and $S^{t}_{i,j}:=\frac{f(x^{i}_{t})\cdot f(x^{j}_{t})}{\|f(x^{i}_{t})\|\|f(x^{j}_{t})\|}$, which serves the purpose of matching the local structures based on neighborhood connectivity.

$$\mathcal{L}_{aug}=D_{KL}(p^{s}(\mathcal{G}_{\theta}(x))\|p^{t}(x|y_{t}))+D_{KL}(p^{t}(x|y_{t})\|p^{s}(\mathcal{G}_{\theta}(x)))\tag{56}$$
$$\mathcal{L}_{sim}=D_{KL}(\mathbf{S}^{t}\|\mathbf{S}^{s})+D_{KL}(\mathbf{S}^{s}\|\mathbf{S}^{t})\tag{57}$$

Attentive-Diversity Attack (ADA). Kim et al. (2022) propose a method that stochastically perturbs various salient features to enhance adversarial sample transferability. By manipulating image attention, their method is able to disrupt common features shared by different models and hence achieve better transferability. Their generator takes an image along with a random latent code z ∼ N as input, Gθ(x, z). They propose two losses in addition to the classification loss in equation 48: (1) L_attn, which maximizes the distance between the attention maps of the original and the adversarial images for class-specific feature disruption, and (2) L_div, which promotes sample diversity by encouraging the generator to exploit the information in the latent code (see the sketch after the equations). They also argue that the stochasticity in z can help circumvent poor local optima and extend the search space for adversarial samples.

$$\mathcal{L}_{attn}=\|A(\mathcal{G}_{\theta}(x,z))-A(x)\|_{2}\tag{58}$$
$$\mathcal{L}_{div}=\frac{\|A(\mathcal{G}_{\theta}(x_{1},z_{1}))-A(\mathcal{G}_{\theta}(x_{2},z_{2}))\|_{2}}{\|z_{1}-z_{2}\|}\tag{59}$$
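The two ADA losses translate directly into code. The sketch below assumes hypothetical callables `attn_fn` (the attention extractor A(·)) and `gen` (the generator Gθ(x, z)); the names and batch handling are illustrative assumptions, not taken from the original paper's code.

```python
import torch

def ada_losses(attn_fn, gen, x1, x2, z1, z2):
    """Attention (eq. 58) and diversity (eq. 59) losses of ADA, as a sketch."""
    a_clean = attn_fn(x1)            # attention map of the clean image
    a_adv1 = attn_fn(gen(x1, z1))    # attention map of one adversarial sample
    a_adv2 = attn_fn(gen(x2, z2))    # ... and of a second one
    # Eq. 58: push the adversarial attention map away from the clean one.
    l_attn = torch.norm(a_adv1 - a_clean, p=2)
    # Eq. 59: distinct latent codes should yield distinct adversarial samples.
    l_div = torch.norm(a_adv1 - a_adv2, p=2) / torch.norm(z1 - z2)
    return l_attn, l_div
```

During training, both terms are maximized alongside the classification loss of equation 48.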
Conditional Adversarial Distribution (CAD). Feng et al. (2022) propose a transferability-enhancing approach that emphasizes robustness against surrogate biases. They propose to transfer a subset of parameters based on CAD (i.e., the distribution of adversarial perturbations conditioned on benign examples) of surrogate models and learn the remainder of parameters based on queries to the target model while dynamically adjusting the CAD of the target model on new benign samples.

Model Discrepancy Minimisation. Zhao et al. (2023) propose an objective based on the hypothesis discrepancy principle that can be used to synthesize robust and transferable targeted adversarial examples with multiple surrogate models. In an adversarial training fashion, they jointly optimize the generator and the surrogate models (used as discriminators) to minimize the maximum model discrepancy (M3D) between surrogate models (equation 60) and transform the image into a target class (equation 61), while maintaining the quality of the surrogate models to provide accurate classification results on the original images (equation 62).

$$\max_{f_{1},f_{2}}\min_{\theta}\mathcal{L}_{d}=\mathbb{E}_{x\sim\chi}d[f_{1}\circ\mathcal{G}_{\theta}(x),f_{2}\circ\mathcal{G}_{\theta}(x)]\tag{60}$$
$$\min_{\theta}\mathcal{L}_{a}=\mathbb{E}_{x\sim\chi}\mathrm{CE}[f_{1}\circ\mathcal{G}_{\theta}(x),y_{t}]+\mathrm{CE}[f_{2}\circ\mathcal{G}_{\theta}(x),y_{t}]\tag{61}$$
$$\max_{f_{1},f_{2}}\mathcal{L}_{c}=\mathbb{E}_{x,y\sim(\chi,\mathcal{Y})}\mathrm{CE}[f_{1}(x),y]+\mathrm{CE}[f_{2}(x),y]\tag{62}$$

## 4.2 Methods Based On Class-Conditional Generation

Early generative targeted attack methods (Poursaeed et al., 2018; Naseer et al., 2021) suffer from low parameter efficiency as they require training a separate generator for each class. To address this issue, various approaches have been proposed to construct conditional generative models that handle targeted attacks of different classes with a single unified model. While many different actualizations exist, these methods share the same mathematical formulation:

$$\min_{\theta}\mathbb{E}_{(x,y)}l(f_{s}(\mathcal{G}_{\theta}(x,y_{t})),y_{t})\tag{63}$$

Conditional Generators. Yang et al. (2022) propose a Conditional Generative model for a targeted attack, which can craft a strong Semantic Pattern (C-GSP). Concretely, the target class information is processed through a network before being taken as the condition of the generator (Mirza & Osindero, 2014). Claiming that it is difficult for a single generator to learn the distributions of all target classes, C-GSP divides all classes into a feasible number of subsets, so that one generator serves a subset of classes rather than a single class. Various ways to inject the condition into the synthesis process have been explored; a minimal sketch is given below. For example, some authors propose to add trainable embeddings (Han et al., 2019) that can add target class information to the input tensor. In a similar spirit, GAP++ (Mao et al., 2020) extends GAP by taking target class encodings as model input and thereby only requires one model for both targeted and untargeted attacks. Multi-target Adversarial Network (MAN) (Han et al., 2019) enables multi-target adversarial attacks with a single model by incorporating category information into the intermediate features. To further improve the adversarial transferability, Phan et al. (2020) propose a Content-Aware adversarial attack Generator (CAG) that integrates class activation map (CAM) (Zhou et al., 2016) information into the input, making the adversarial noise more focused on objects.
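As an illustration of the shared formulation in equation 63, the sketch below conditions a small generator on the target class via a learned embedding broadcast over the spatial grid, one of the conditioning schemes mentioned above. The architecture, layer sizes, and the tanh/clamp output head are illustrative assumptions, not the design of any particular cited method.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal class-conditional generator sketch for equation 63."""
    def __init__(self, num_classes, embed_dim=8, eps=16/255):
        super().__init__()
        self.embed = nn.Embedding(num_classes, embed_dim)  # target-class condition
        self.body = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        self.eps = eps

    def forward(self, x, y_t):
        b, _, h, w = x.shape
        # Broadcast the class embedding over the spatial grid and concatenate.
        cond = self.embed(y_t)[:, :, None, None].expand(b, -1, h, w)
        delta = self.eps * torch.tanh(self.body(torch.cat([x, cond], dim=1)))
        return torch.clamp(x + delta, 0.0, 1.0)
```

Training then minimizes the surrogate's loss on G(x, y_t) with respect to the target label y_t, exactly as in equation 63.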
Diffusion Models. Recent works have started to investigate the usage of diffusion models for enhancing adversarial transferability. DiffAttack (Chen et al., 2023b) is the first adversarial attack based on diffusion models (Ho et al., 2020), whose properties can help achieve imperceptibility. Concretely, the perturbations are optimized in the latent space obtained after the encoder and DDIM inversion (Ho et al., 2020). Cross-attention maps are utilized in the loss function to distract attention from the labelled objects and disrupt the semantic relationship. Besides, self-attention maps are used for imperceptibility, keeping the original structure of images. In a similar spirit, AdvDiffuser (Chen et al., 2023d) and Adversarial Content Attack (ACA) (Chen et al., 2024) also leverage pre-trained diffusion models to craft highly transferable unrestricted adversarial examples.

## 5 Adversarial Transferability Beyond Image Classification

In this section, we present transfer attacks beyond image classification, covering various other vision tasks and NLP tasks. Furthermore, transferability across tasks is also summarized.

## 5.1 Transfer Attacks In Vision Tasks

Image Retrieval. Previous works (Yang et al., 2018; Tolias et al., 2019) have shown that image retrieval is also vulnerable to adversarial examples. Xiao & Wang (2021) explore the transferability of adversarial examples in image retrieval and point out the relationship between the adversarial subspace and black-box transferability. In detail, they establish this relationship by using additive Gaussian noise as a proxy to estimate the generated adversarial region, and propose an attack method that generates highly transferable adversarial examples by making them both adversarial and robust to additive noise corruption.

Object Detection. Wei et al. (2018) find that existing image object detection attack methods suffer from weak transferability, *i.e.,* the generated adversarial examples usually have a low attack success rate when attacking other detection methods. They then propose a generative attack method to enhance the transferability of adversarial examples by using the feature maps extracted by a feature network. Specifically, they adopt the Generative Adversarial Network (GAN) framework, which is trained with a high-level class loss and a low-level feature loss. Cai et al. (2022b) propose an object detection attack approach to generate context-aware attacks for object detectors. Specifically, they adopt the co-occurrence of objects, their relative locations, and sizes as context information to generate highly transferable adversarial examples. Moreover, Staff et al. (2021) explore the impact of transfer attacks on object detection. Specifically, they conduct objectness gradient attacks on an advanced object detector, *i.e.,* YOLO V3. They find that increasing attack strength can significantly enhance the transferability of adversarial examples. They also study the transferability when the datasets for the attacking and target models intersect, and find that the size of the intersection has a direct relationship with the transfer attack performance. Additionally, Cai et al. (2022a) have indicated that existing adversarial attacks cannot effectively attack context-aware object detectors.
To address that, they propose a zero-query context-aware attack method that can generate highly transferable adversarial scenes to fool context-aware object detectors effectively.

Segmentation. Gu et al. (2021b) explore the transferability of adversarial examples on image segmentation models. In detail, they investigate the overfitting phenomenon of adversarial examples on both classification and segmentation models and propose a simple yet effective attack method with input diversity to generate highly transferable adversarial examples for segmentation models. Hendrik Metzen et al. (2017) explore the transferability of adversarial examples to attack semantic image segmentation models by generating universal adversarial perturbations. Specifically, they propose a method to generate universal adversarial examples that can change the semantic segmentation of images in arbitrary ways. The proposed adversarial perturbations are optimized on the whole training set.

3D Tasks. Previous works (Xiang et al., 2019; Zhou et al., 2019; Tsai et al., 2020) have developed several adversarial attack methods for 3D point clouds. Hamdi et al. (2020) discover that existing adversarial attacks on 3D point clouds lack transferability across different networks. They then propose an effective 3D point cloud adversarial attack method that takes advantage of the input data distribution by adding an adversarial loss on the Auto-Encoder reconstruction to the objective. Pestana et al. (2022) study the transferability of 3D adversarial examples generated by 3D adversarial textures and propose to use end-to-end optimization for the generation of adversarial textures for 3D models. Specifically, they adopt neural rendering to generate the adversarial texture and ensemble non-robust and robust models to improve the transferability of adversarial examples.

Person Re-Identification. Previous works (Gou et al., 2016; Xue et al., 2018; Huang et al., 2019b) have indicated that person re-identification (ReID), which inherits the vulnerability of deep neural networks (DNNs), can be fooled by adversarial examples. Wang et al. (2020) explore the transferability of adversarial examples on ReID systems. Specifically, they propose a learning-to-mis-rank method to generate adversarial examples. They also adopt a multi-stage network to improve the transferability of adversarial examples by extracting transferable features.

Face Recognition. Jia et al. (2022a) have indicated that previous face recognition adversarial attack methods rely on generating adversarial examples in pixel space, which limits the transferability of adversarial examples. They then propose a unified, flexible adversarial attack method that generates adversarial perturbations for different attributes based on target-specific face recognition features to boost the attack transferability.

Video Classification. Wei et al. (2022a) have found that existing video attack methods have only limited transferability. They then propose a transferable adversarial attack method based on the temporal translation of the video, which generates adversarial perturbations over temporally translated video clips to enhance the attack transferability.

## 5.2 Transfer Attacks In NLP Tasks

Yuan et al. (2020) introduce a comprehensive investigation into the transferability of adversarial examples for text classification models. In detail, they thoroughly study the impact of different factors, such as network architecture, on the transferability of adversarial examples.
Moreover, they propose to adopt a genetic algorithm to discover an ensemble of models capable of generating adversarial examples with high transferability. Furthermore, He et al. (2021) demonstrate the ability of an adversary to compromise a BERT-based API service. With the extracted model, they can generate highly transferable adversarial examples. Wang et al. (2022) show that adversarial examples are also transferable across topic models, which are important statistical models. To further improve the transferability, they propose to use a generator to generate effective adversarial examples and an ensemble method which finds the optimal model ensemble. With the rise of large language models (LLMs) like BERT (Kenton & Toutanova, 2019), GPT (Brown et al., 2020b), and their variants (Li et al., 2019), adversarial examples on them have also received attention. Recently, Zou et al. (2023) have demonstrated that it is possible to induce aligned language models to generate inappropriate content, dubbed a jailbreak attack. They also propose a simple way to make the jailbreak attack more transferable to other LLMs. Specifically, a jailbreak attack aims to maximize the likelihood of the language model generating an affirmative response instead of declining to answer. They implement the attack by identifying a suffix that, when appended to various queries given to a language model, encourages generating undesirable content. They improve the transferability by attacking multiple surrogate LLMs with a single suffix. Zou et al. (2023) show that the adversarial suffix is even transferable to several mainstream closed-source language models, e.g. ChatGPT (Achiam et al., 2023).

## 5.3 Cross-Task Transfer Attacks

Naseer et al. (2018) propose a novel adversarial attack method, which adopts neural representation distortion to generate adversarial examples. They demonstrate the remarkable transferability of adversarial examples across different neural network architectures, datasets, and tasks. Naseer et al. (2019) propose a novel concept of domain-invariant adversaries, which demonstrates the existence of a shared adversarial space among different datasets and models. They introduce a new generative framework that creates strong adversarial examples with a relativistic discriminator, outperforming traditional instance-specific attacks with a universal adversarial function. Moreover, they propose to exploit adversarial patterns capable of deceiving networks trained on completely different domains to improve attack transferability. Lu et al. (2020) investigate the transferability of adversarial examples across diverse computer vision tasks, including object detection, image classification, and semantic segmentation. They propose a Dispersion Reduction (DR) adversarial attack method which minimizes the standard deviation of intermediate feature maps to disturb features that are used by models intended for various tasks. Wei et al. (2022b) study the transferability of adversarial perturbations across different modalities. In detail, they apply adversarial examples generated on white-box image-based models to attack black-box video-based models by exploiting the similarity of the low-level feature spaces between images and video frames. Naseer et al. (2023) propose to adopt task-specific prompts to incorporate spatial (image) and temporal (video) cues into the same source model, which can enhance the transferability of attacks from image models to both video and image models.
They propose a method to add dynamic cues to pre-trained image models through a simple video-based transformation. Lu et al. (2023) study the adversarial transferability of vision-language pre-training models. They propose a set-level guidance adversarial attack to improve the transferability of adversarial examples on vision-language pre-training models, which makes full use of cross-modal guidance. Han et al. (2023a) propose adopting optimal transport optimization to enhance the adversarial transferability of vision-language models, using optimal transport theory to find the most effective mapping between image and text features. Hu et al. (2024) propose to attack intermediate features of an encoder pre-trained on vision-language data for cross-task transferability. They adopt a patch-wise approach that independently diverts the representation of each adversarial image patch from the corresponding clean one by minimizing the cosine similarity between the two (see the sketch below), thereby producing highly transferable adversaries that fool various vision-language understanding tasks.
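A sketch of such a patch-wise feature-divergence objective is given below. It assumes a hypothetical `encoder` that returns per-patch token features of shape (batch, patches, dim), as ViT-style vision-language encoders typically do; the function name and shapes are illustrative rather than taken from the cited implementation.

```python
import torch
import torch.nn.functional as F

def patch_divergence_loss(encoder, x_clean, x_adv):
    """Mean patch-wise cosine similarity between clean and adversarial features.

    Minimizing this quantity independently diverts each adversarial patch
    representation from its clean counterpart.
    """
    with torch.no_grad():
        feats_clean = encoder(x_clean)   # (batch, patches, dim), frozen reference
    feats_adv = encoder(x_adv)           # gradients flow back into x_adv
    cos = F.cosine_similarity(feats_adv, feats_clean, dim=-1)
    return cos.mean()
```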
## 6 Challenges, Opportunities And Connections To Broader Topics

This section discusses the open challenges and illustrates promising avenues for better transferability-based attacks, as well as for their evaluation and understanding.

## 6.1 Challenges And Opportunities

Adversarial Transferability is Far from Perfect. Adversarial examples are far from achieving perfect transferability to other models due to several inherent challenges. First, the performance of adversarial transferability tends to degrade when evaluated on a variety of neural network architectures, highlighting the inconsistency in transferability across different models (Yu et al., 2023). Secondly, transferring the perturbations created by targeted adversarial attacks remains challenging: misleading a model to a specific class is much more difficult than simply causing a misclassification (Yu et al., 2023). Finally, current transferability-enhancing methods are mainly developed to target visual classification models with predefined classes. Current vision-language models (Lu et al., 2023; Radford et al., 2021b; Alayrac et al., 2022; Chen et al., 2023c), which extract visual information from a distinct perspective, pose unique challenges for transferability. These issues indicate that more transferability-enhancing methods should be explored for better defense evaluation.

Natural, Unrestricted and Non-Additive Attacks. Albeit out of the scope of this survey, we note that alternative, relaxed definitions of adversarial attacks do exist. An adversarial perturbation does not need to be constrained by a norm-ball (Hendrycks et al., 2021; Zhao et al., 2017) and can be constructed through means other than additive noise (e.g. through transformations) (Brown et al., 2018). Several studies have explored the transferability of natural adversarial attacks (Chen et al., 2023d), unconstrained (unrestricted) adversarial attacks (Liu et al., 2023; Chen et al., 2023b; Gao et al., 2022) and non-additive attacks (Zhang et al., 2020). Nonetheless, the community has not yet reached a consensus on how to effectively evaluate such attacks. For example, perceptual metrics other than Lp distances may be required to evaluate stealthiness.

Source Model for Better Transferability. Current transferability methods are typically post hoc approaches that enhance the ability of adversarial examples generated on pre-trained models to deceive other models. When considering the source model, the question arises: how can we train a model to improve the transferability of adversarial examples created on it? For instance, one promising avenue is to learn the model from the perspective of knowledge transferability. A model with high knowledge transferability inherently becomes a superior source model, as adversarial examples generated on it exhibit a greater capacity to successfully deceive other models (Liang et al., 2021; Xu et al., 2022b). A follow-up question is which model architectures transfer better to others: CNNs, Vision Transformers (Naseer et al., 2022; Wu et al., 2021; Ma et al., 2023; Wu et al., 2022), Capsule Networks (Gu et al., 2021a), or Spiking Neural Networks (Xu et al., 2022a).

Relation to Transferability Across Image Samples. In this work, we focus on adversarial transferability across models, namely, the ability of an adversarial perturbation crafted for one model or set of data to successfully fool another model. The community has also found that an adversarial perturbation that effectively fools one image can also be applied to a different image to achieve a similar adversarial effect, which is referred to as adversarial transferability across image samples (i.e. the Universal Adversarial Image) (Moosavi-Dezfooli et al., 2017). Understanding the interplay between transferability across images and transferability across models is essential for comprehending the broader landscape of adversarial attacks and defences. These two dimensions together define the versatility and robustness of adversarial perturbations.

Theoretical Perspectives on Adversarial Transferability. The root causes behind transferability receive continued research interest. Demontis et al. (2019) examine the effect of two factors on attack transferability: the intrinsic adversarial vulnerability of the target model and the complexity of the surrogate model. Ilyas et al. (2019) attribute adversarial transferability to the presence of non-robust features and point out the potential misalignment between robustness and inherent data geometry. Waseda et al. (2023) extend the theory of non-robust features by examining "class-aware transferability". In particular, they differentiate between the cases in which a target model predicts the same wrong class as the source model or a different wrong class, drawing connections between adversarial vulnerabilities and models' tendency of learning *superficial cues* (Jo & Bengio, 2017) and *shortcuts* (Geirhos et al., 2020). Charles et al. (2019) examine adversarial transferability from a geometric point of view and prove the existence of *transferable* adversarial directions for simple network architectures. Based on the observation that adversarial examples tend to occur in contiguous regions within which all points can similarly fool the model (referred to as *adversarial subspaces*) (Tanay & Griffin, 2016), Tramèr et al. (2017) explain transferability as a result of the intersection of models' adversarial subspaces: a higher number of orthogonal adversarial directions within these subspaces often implies higher transferability. Building on these works, Gubri et al.
(2022b) highlight the role of weight space geometry in adversarial transferability, showing that adding random Gaussian noise to the weight space of DNNs increases their potential as surrogates for crafting more transferable adversaries. A contemporary work by Zhu et al. (2021) makes an analogous observation regarding the effect on adversarial transferability of adding random Gaussian noise in the output space. They propose the Intrinsic Adversarial Attack (IAA) to diminish the impact of the deeper model layers. By doing so, they effectively exploit low-density regions of the data distribution where many highly transferable adversarial examples can be found.

Evaluation Metrics. The assessment of adversarial transferability is a complex undertaking that demands a thorough and extensive set of metrics. The Fooling Rate, a popular choice, is often used to quantify the transferability of adversarial examples. It gauges the effectiveness of adversarial perturbations by measuring the percentage of these perturbations that successfully deceive a specified target model (a minimal sketch is given below). However, it is important to emphasize that the Fooling Rate metric is highly contingent on the choice of target models, which introduces a considerable source of variability into the evaluation process. Recent research, as highlighted in (Yu et al., 2023), has illuminated the profound impact that the selection of target models can have on the relative rankings of different transferability-enhancing methods. Consequently, there is a pressing need for more comprehensive evaluation metrics and benchmarks that encompass a wider range of model architectures and configurations. In addition to empirical evaluations, there is also a growing recognition of the necessity for theoretical characterizations of transferability. Such theoretical analyses can provide valuable insights into the underlying principles governing the transferability of adversarial attacks.
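For concreteness, the snippet below computes one common variant of the Fooling Rate: the fraction of inputs whose predicted label changes once the perturbation is applied. Other variants restrict the average to inputs the target model originally classified correctly; the function is an illustrative sketch rather than a standardized definition.

```python
import torch

@torch.no_grad()
def fooling_rate(target_model, x_clean, x_adv):
    """Fraction of samples whose prediction flips under the perturbation."""
    pred_clean = target_model(x_clean).argmax(dim=-1)
    pred_adv = target_model(x_adv).argmax(dim=-1)
    return (pred_clean != pred_adv).float().mean().item()
```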
Benchmarking Adversarial Transferability. Various benchmarks have been developed to evaluate adversarial transferability. Initial robustness benchmarks include transfer-based black-box attacks to evaluate the adversarial robustness of models (Croce et al., 2020; Dong et al., 2020). Zhao et al. (2022) evaluate the transferability of adversarial examples from the perspective of the stealthiness of adversarial perturbations and build a more reliable evaluation benchmark by including various architectures. Mao et al. (2022) evaluate transferability-based attacks in real-world environments.

Hybrid Approaches Combining Optimization-based and Generation-based Methods. Both optimization-based and generation-based approaches have been intensively studied. While each of them has both advantages and limitations, the pursuit of a well-designed hybrid method that combines both approaches to achieve better transferability is a promising direction for future endeavours. For example, certain methods leverage generative models while also going through optimization iterations during inference (Chen et al., 2023d). Such hybrid methods have the potential to leverage the strengths of each approach to enhance the robustness and generalization of adversarial examples across various models and scenarios.

Adversarial Transferability Across Large Multimodal Models. The transferability of adversarial examples across language models has been studied by various works introduced in section 5.2. Given the prevalence of large language models (LLMs) and multimodal foundation models, adversarial transferability across such foundation models also becomes increasingly relevant to the community. The pioneering work of Zhao et al. (2024) shows that adversarial examples crafted against pre-trained models such as CLIP (Radford et al., 2021a) and BLIP (Li et al., 2022) can be transferred to other multimodal foundation models such as MiniGPT-4 (Zhu et al., 2023a) and LLaVA (Liu et al., 2024). Similarly, Dong et al. (2023) demonstrate the feasibility of attacking Google's Bard with the vision encoders of open-sourced models. Meanwhile, various works demonstrate that adversarial examples created on CLIP can be transferred to various CLIP-based systems (Lu et al., 2023; Zhang et al., 2022b; Han et al., 2023a; Hu et al., 2024). As reported in these works (Zhao et al., 2024; Dong et al., 2023; Luo et al., 2024), the transferability across large multimodal models is still very limited. Exploring the root causes behind such limited transferability and identifying strategies for enhancing it could be an interesting direction for future research.

## 6.2 Connections To Broader Topics

Relation to Adversarial Transferability Prior to the Deep Learning Era. The concept of adversarial examples holds relevance both in deep learning contexts and in earlier eras of machine learning. Prior research shows that traditional machine learning algorithms also suffer from adversarial examples, e.g., support vector machines (Papernot et al., 2016) and decision trees (Papernot et al., 2016; Biggio et al., 2013). Furthermore, Papernot et al. (2016) show that adversarial examples can be transferred across different classes of machine learning classifiers. Specifically, adversarial examples created on traditional machine learning models can be transferred to others, even deep neural network-based classifiers. Similarly, the ones created on deep neural networks can also be transferred to traditional machine learning classifiers. However, this transferability has only been shown on toy datasets; experiments on large-scale datasets (e.g. ImageNet-1k (Russakovsky et al., 2015)) are infeasible since the traditional algorithms are not scalable to large datasets.

Relation to Trustworthy AI. Adversarial transferability is closely linked to Trustworthy AI, which aims to make AI systems reliable, fair, transparent, and accountable (Kaur et al., 2022). When adversarial examples fool one AI model and then trick other models too, it highlights how vulnerable AI systems can be. Studying transferability can help us understand the causes of adversarial examples, namely, gain a better understanding of the failure cases of AI systems (Ilyas et al., 2019; Waseda et al., 2023). In addition, understanding this link helps improve AI's reliability by supporting the development of better defenses (Madry et al., 2017).

Relation to Adversarial ML. Adversarial ML focuses on understanding and mitigating vulnerabilities in machine learning models (Oprea & Vassilev, 2023). In adversarial ML, the goal is to investigate how malicious actors can manipulate or deceive ML systems by modifying inputs (i.e. adversarial examples) to cause incorrect predictions or behaviors. Adversarial transferability is a crucial idea in adversarial ML. It shows how adversarial examples, which are inputs crafted to fool one ML model, can also fool other models, even if they are different (Goodfellow et al., 2014; Papernot et al., 2016).
This reveals how vulnerabilities in ML systems can spread widely. Understanding adversarial transferability helps researchers grasp the complexities of attacks on ML models and develop stronger defenses against them.

## 7 Conclusion

In this comprehensive survey, we embarked on a journey through the intricate world of adversarial transferability. Transferability allows adversarial examples designed for one model to successfully deceive a different model, often with a distinct architecture, opening the door to black-box attacks. The adversarial transferability across DNNs raises concerns about the reliability of DNN-based systems in safety-critical applications like medical image analysis and autonomous driving. Throughout this survey, we navigated through the terminology, mathematical notations, and evaluation metrics crucial to understanding adversarial transferability. We explored a plethora of techniques designed to enhance transferability, categorizing them into two main groups: surrogate model-based and generative model-based methods. Moreover, we extended our investigation beyond image classification tasks, delving into transferability-enhancing techniques in various vision and natural language processing tasks, as well as those that transcend task boundaries. As the DNN landscape continues to advance, the comprehension of adversarial examples and their transferability remains crucial. By illuminating the vulnerabilities inherent in these systems, we aim to contribute to the development of more resilient, secure, and trustworthy DNN models, ultimately paving the way for their safe deployment in real-world applications. In this ever-evolving journey towards adversarial resilience, we hope that this survey will serve as a valuable resource for researchers, practitioners, and enthusiasts alike.

Broader Impact Statement. At the intersection of machine learning and security, the study of adversarial examples and their transferability not only illuminates the vulnerabilities of modern learning systems but also opens avenues for strengthening their robustness and reliability. By comprehensively surveying the landscape of adversarial transferability, this paper contributes to a deeper understanding of the challenges posed by adversarial attacks across diverse domains, from image classification to a variety of other vision and language tasks. As researchers and practitioners strive to fortify machine learning models against adversarial manipulation, the insights gleaned from this survey serve as a compass, guiding the development of more resilient algorithms and informing strategies for defending against emerging threats. Overall, this paper aims to better understand and safeguard against adversarial exploitation.

## Acknowledgments

This work is partially supported by the UKRI grant: Turing AI Fellowship EP/W002981/1 and EPSRC/MURI grant: EP/N019474/1. We would also like to thank the Royal Academy of Engineering.

## References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. *arXiv preprint arXiv:2303.08774*, 2023.

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

Philipp Benz, Chaoning Zhang, and In So Kweon.
Batch normalization increases adversarial vulnerability and decreases adversarial transferability: A non-robust feature perspective. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7818–7827, 2021. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In *Machine Learning and Knowledge* Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13, pp. 387–402. Springer, 2013. Gerda Bortsova, Cristina González-Gonzalo, Suzanne C Wetstein, Florian Dubost, Ioannis Katramados, Laurens Hogeweg, Bart Liefers, Bram van Ginneken, Josien PW Pluim, Mitko Veta, et al. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors. *Medical Image Analysis*, 73: 102141, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020a. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020b. Tom B Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, and Ian Goodfellow. Unrestricted adversarial examples. *arXiv preprint arXiv:1809.08352*, 2018. Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, and Changick Kim. Improving the transferability of targeted adversarial examples through object-based diverse input. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15244–15253, 2022. Zikui Cai, Shantanu Rane, Alejandro E Brito, Chengyu Song, Srikanth V Krishnamurthy, Amit K RoyChowdhury, and M Salman Asif. Zero-query transfer attacks on context-aware object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15024–15034, 2022a. Zikui Cai, Xinxin Xie, Shasha Li, Mingjun Yin, Chengyu Song, Srikanth V Krishnamurthy, Amit K RoyChowdhury, and M Salman Asif. Context-aware transfer attacks for object detection. In *Proceedings of* the AAAI Conference on Artificial Intelligence, pp. 149–157, 2022b. Zachary Charles, Harrison Rosenberg, and Dimitris Papailiopoulos. A geometric perspective on the transferability of adversarial directions. In *The 22nd International Conference on Artificial Intelligence and* Statistics, pp. 1960–1968. PMLR, 2019. Bin Chen, Jiali Yin, Shukai Chen, Bohao Chen, and Ximeng Liu. An adaptive model ensemble adversarial attack for boosting adversarial transferability. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pp. 4489–4498, 2023a. Jianqi Chen, Hao Chen, Keyan Chen, Yilan Zhang, Zhengxia Zou, and Zhenwei Shi. Diffusion models for imperceptible and transferable adversarial attack. *arXiv preprint arXiv:2305.08192*, 2023b. Shuo Chen, Jindong Gu, Zhen Han, Yunpu Ma, Philip Torr, and Volker Tresp. Benchmarking robustness of adaptation methods on pre-trained vision-language models. *arXiv preprint arXiv:2306.02080*, 2023c. Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. Advdiffuser: Natural adversarial example synthesis with diffusion models. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4562–4572, October 2023d. Zhaoyu Chen, Bo Li, Shuang Wu, Kaixun Jiang, Shouhong Ding, and Wenqiang Zhang. Content-based unrestricted adversarial attack. *Advances in Neural Information Processing Systems*, 36, 2024. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. *arXiv preprint arXiv:2010.09670*, 2020. Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks. In *28th USENIX Security Symposium (USENIX Security 19)*, pp. 321–338, Santa Clara, CA, August 2019. USENIX Association. ISBN 978-1-939133-06-9. URL https://www.usenix. org/conference/usenixsecurity19/presentation/demontis. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9185–9193, 2018. Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4312–4321, 2019. Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. Benchmarking adversarial robustness on image classification. In *proceedings of the IEEE/CVF conference on computer* vision and pattern recognition, pp. 321–331, 2020. Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, and Jun Zhu. How robust is google's bard to adversarial image attacks? *arXiv preprint arXiv:2309.11751*, 2023. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *arXiv:2010.11929*, 2020. Włodzisław Duch and Jerzy Korczak. Optimization and global minimization methods suitable for neural networks. *Neural computing surveys*, 2:163–212, 1998. Shuman Fang, Jie Li, Xianming Lin, and Rongrong Ji. Learning to learn transferable attack. In *Proceedings* of the AAAI Conference on Artificial Intelligence, pp. 571–579, 2022. Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, and Shu-Tao Xia. Boosting black-box attack with partially transferred conditional adversarial distribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15095–15104, 2022. Aditya Ganeshan, Vivek BS, and R Venkatesh Babu. Fda: Feature disruptive attack. In *Proceedings of the* IEEE/CVF International Conference on Computer Vision, pp. 8069–8079, 2019. Xiangbo Gao, Cheng Luo, Qinliang Lin, Weicheng Xie, Minmin Liu, Linlin Shen, Keerthy Kusumam, and Siyang Song. Scale-free and task-agnostic attack: Generating photo-realistic adversarial patterns with patch quilting generator. *arXiv preprint*, 2208, 2022. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 
Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Mengran Gou, Xikang Zhang, Angels Rates-Borras, Sadjad Asghari-Esfeden, Mario Sznaier, and Octavia Camps. Person re-identification in appearance impaired scenarios. *arXiv preprint arXiv:1604.00367*, 2016. Jindong Gu, Baoyuan Wu, and Volker Tresp. Effective and efficient vote attack on capsule networks. In International Conference on Learning Representations (ICLR), 2021a. Jindong Gu, Hengshuang Zhao, Volker Tresp, and Philip Torr. Adversarial examples on segmentation models can be easy to transfer. *arXiv preprint arXiv:2111.11368*, 2021b. Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, and Koushik Sen. Efficient and transferable adversarial examples from bayesian neural networks. In *Uncertainty in Artificial Intelligence*, pp. 738–748. PMLR, 2022a. Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, and Koushik Sen. Lgv: Boosting adversarial example transferability from large geometric vicinity. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV, pp. 603–618. Springer, 2022b. Yiwen Guo, Qizhang Li, and Hao Chen. Backpropagating linearly improves transferability of adversarial examples. *Advances in neural information processing systems*, 33:85–95, 2020. Abdullah Hamdi, Sara Rojas, Ali Thabet, and Bernard Ghanem. Advpc: Transferable adversarial perturbations on 3d point clouds. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK,* August 23–28, 2020, Proceedings, Part XII 16, pp. 241–257. Springer, 2020. Dongchen Han, Xiaojun Jia, Yang Bai, Jindong Gu, Yang Liu, and Xiaochun Cao. Ot-attack: Enhancing adversarial transferability of vision-language models via optimal transport optimization. *arXiv preprint* arXiv:2312.04403, 2023a. Jiangfan Han, Xiaoyi Dong, Ruimao Zhang, Dongdong Chen, Weiming Zhang, Nenghai Yu, Ping Luo, and Xiaogang Wang. Once a man: Towards multi-target attack via learning multi-target adversarial network once. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 5158–5167, 2019. Sicong Han, Chenhao Lin, Chao Shen, Qian Wang, and Xiaohong Guan. Interpreting adversarial examples in deep learning: A review. *ACM Computing Surveys*, 2023b. Atiye Sadat Hashemi, Andreas Bär, Saeed Mozaffari, and Tim Fingscheidt. Improving transferability of generated universal adversarial perturbations for image classification and segmentation. In Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety, pp. 171–196. Springer International Publishing Cham, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 770–778, 2016. Xuanli He, Lingjuan Lyu, Qiongkai Xu, and Lichao Sun. Model extraction and adversarial transferability, your bert is vulnerable! *arXiv preprint arXiv:2103.10013*, 2021. Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, and Volker Fischer. Universal adversarial perturbations against semantic image segmentation. In *Proceedings of the IEEE international conference* on computer vision, pp. 2755–2764, 2017. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15262–15271, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 
Denoising diffusion probabilistic models. *Advances in Neural* Information Processing Systems, 33:6840–6851, 2020. Anjun Hu, Jindong Gu, Francesco Pinto, Konstantinos Kamnitsas, and Philip Torr. As firm as their foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? arXiv preprint arXiv:2403.12693, 2024. Hao Huang, Ziyan Chen, Huanran Chen, Yongtao Wang, and Kevin Zhang. T-sea: Transfer-based selfensemble attack on object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20514–20523, 2023. Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, and Ser-Nam Lim. Enhancing adversarial example transferability with an intermediate level attack. In *Proceedings of the IEEE/CVF international* conference on computer vision, pp. 4733–4742, 2019a. Yan Huang, Qiang Wu, Jingsong Xu, and Yi Zhong. Celebrities-reid: A benchmark for clothes variation in long-term person re-identification. In *2019 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8. IEEE, 2019b. Yi Huang and Adams Wai-Kin Kong. Transferable adversarial attack based on integrated gradients. *arXiv* preprint arXiv:2205.13152, 2022. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *Advances in neural information processing systems*, 32, 2019. Nathan Inkawhich, Wei Wen, Hai Helen Li, and Yiran Chen. Feature space perturbations yield more transferable adversarial examples. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 7066–7074, 2019. Nathan Inkawhich, Kevin Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, and Yiran Chen. Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability. Advances in Neural Information Processing Systems, 33:20791–20801, 2020a. Nathan Inkawhich, Kevin J Liang, Lawrence Carin, and Yiran Chen. Transferable perturbations of deep feature distributions. *arXiv preprint arXiv:2004.12519*, 2020b. Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, and Chao Ma. Adv-attribute: Inconspicuous and transferable adversarial attack on face recognition. arXiv preprint arXiv:2210.06871, 2022a. Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, and Xiaochun Cao. Las-at: adversarial training with learnable attack strategy. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 13398–13408, 2022b. Jason Jo and Yoshua Bengio. Measuring the tendency of cnns to learn surface statistical regularities. *arXiv* preprint arXiv:1711.11561, 2017. Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems, 26, 2013. Davinder Kaur, Suleyman Uslu, Kaley J Rittichier, and Arjan Durresi. Trustworthy artificial intelligence: a review. *ACM computing surveys (CSUR)*, 55(2):1–38, 2022. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of naacL-HLT*, volume 1, pp. 2, 2019. Jinkyu Kim and John F Canny. Interpretable learning for self-driving cars by visualizing causal attention. In *International Conference on Computer Vision (ICCV)*, pp. 2961–2969, 2017. Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, Zeynep Akata, et al. 
Textual explanations for self-driving vehicles. In *ECCV*, pp. 577–593. Springer, 2018. Woo Jae Kim, Seunghoon Hong, and Sung-Eui Yoon. Diverse generative perturbations on attention space for transferable adversarial attacks. In *2022 IEEE International Conference on Image Processing (ICIP)*, pp. 281–285. IEEE, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In *ICML*, 2022. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. *arXiv preprint arXiv:1908.03557*, 2019. Maosen Li, Cheng Deng, Tengjiao Li, Junchi Yan, Xinbo Gao, and Heng Huang. Towards transferable targeted attack. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 641–649, 2020a. Qizhang Li, Yiwen Guo, Wangmeng Zuo, and Hao Chen. Making substitute models more bayesian can enhance transferability of adversarial examples. *arXiv preprint arXiv:2302.05086*, 2023. Yingwei Li, Song Bai, Cihang Xie, Zhenyu Liao, Xiaohui Shen, and Alan Yuille. Regional homogeneity: Towards learning transferable universal adversarial perturbations against defenses. In Computer Vision– ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16, pp. 795–813. Springer, 2020b. Kaizhao Liang, Jacky Y Zhang, Boxin Wang, Zhuolin Yang, Sanmi Koyejo, and Bo Li. Uncovering the connections between adversarial transferability and knowledge transferability. In International Conference on Machine Learning, pp. 6577–6587. PMLR, 2021. Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E Hopcroft. Nesterov accelerated gradient and scale invariance for adversarial attacks. *arXiv preprint arXiv:1908.06281*, 2019. Fangcheng Liu, Chao Zhang, and Hongyang Zhang. Towards transferable unrestricted adversarial examples with minimum changes. In *2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)*, pp. 327–338. IEEE, 2023. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In *5th International Conference on Learning Representations, ICLR 2017, Toulon,* France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. Dong Lu, Zhiqiang Wang, Teng Wang, Weili Guan, Hongchang Gao, and Feng Zheng. Set-level guidance attack: Boosting adversarial transferability of vision-language pre-training models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 102–111, 2023. Yantao Lu, Yunhan Jia, Jianyu Wang, Bai Li, Weiheng Chai, Lawrence Carin, and Senem Velipasalar. Enhancing cross-task black-box transferability of adversarial examples with dispersion reduction. In *Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition*, pp. 940–949, 2020. Haochen Luo, Jindong Gu, Fengyuan Liu, and Philip Torr. An image is worth 1000 lies: Transferability of adversarial images across prompts on vision-language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=nc5GgFAvtk. 
Wenshuo Ma, Yidong Li, Xiaofeng Jia, and Wei Xu. Transferable adversarial attack for both vision transformers and convolutional networks via momentum integrated gradients. In *International Conference on Computer Vision (ICCV)*, pp. 4630–4639, 2023. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Xiaofeng Mao, Yuefeng Chen, Yuhong Li, Yuan He, and Hui Xue. Gap++: Learning to generate target-conditioned adversarial examples. *arXiv preprint arXiv:2006.05097*, 2020. Yuhao Mao, Chong Fu, Saizhuo Wang, Shouling Ji, Xuhong Zhang, Zhenguang Liu, Jun Zhou, Alex X Liu, Raheem Beyah, and Ting Wang. Transfer attacks revisited: A large-scale empirical study in real computer vision settings. In *2022 IEEE Symposium on Security and Privacy (SP)*, pp. 1423–1439. IEEE, 2022. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. *arXiv preprint arXiv:1411.1784*, 2014. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1765–1773, 2017. Muhammad Muzammal Naseer, Salman H Khan, Muhammad Haris Khan, Fahad Shahbaz Khan, and Fatih Porikli. Cross-domain transferability of adversarial perturbations. *Advances in Neural Information Processing Systems*, 32, 2019. Muzammal Naseer, Salman H Khan, Shafin Rahman, and Fatih Porikli. Task-generalizable adversarial attack based on perceptual metric. *arXiv preprint arXiv:1811.09020*, 2018. Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli. On generating transferable targeted perturbations. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 7708–7717, 2021. Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, and Fatih Porikli. On improving adversarial transferability of vision transformers. In *The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022*. OpenReview.net, 2022. Muzammal Naseer, Ahmad Mahmood, Salman Khan, and Fahad Khan. Boosting adversarial transferability using dynamic cues. In *International Conference on Learning Representations (ICLR)*, 2023. Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). In *Doklady AN USSR*, volume 269, pp. 543–547, 1983. Alina Oprea and Apostol Vassilev. Adversarial machine learning: A taxonomy and terminology of attacks and mitigations. Technical report, National Institute of Standards and Technology, 2023. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. *arXiv preprint arXiv:1605.07277*, 2016. Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the fisher kernel for large-scale image classification. In *Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11*, pp. 143–156. Springer, 2010. Camilo Pestana, Naveed Akhtar, Nazanin Rahnavard, Mubarak Shah, and Ajmal Mian. Transferable 3d adversarial textures using end-to-end optimization. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 88–97, 2022. Huy Phan, Yi Xie, Siyu Liao, Jie Chen, and Bo Yuan.
Cag: a real-time low-cost enhanced-robustness high-transferability content-aware adversarial attack generator. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 5412–5419, 2020. Boris T Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4(5):1–17, 1964. Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative adversarial perturbations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4422–4431, 2018. Yaguan Qian, Shuke He, Chenyu Zhao, Jiaqiang Sha, Wei Wang, and Bin Wang. Lea2: A lightweight ensemble adversarial attack via non-overlapping vulnerable frequency regions. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 4510–4521, 2023. Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, and Baoyuan Wu. Boosting the transferability of adversarial attacks with reverse adversarial perturbation. *arXiv preprint arXiv:2210.05968*, 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, A. Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *ICML*, 2021a. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021b. Nicolas Roux, Mark Schmidt, and Francis Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. *Advances in neural information processing systems*, 25, 2012. Andras Rozsa, Manuel Günther, and Terrance E Boult. Lots about attacking deep features. In *2017 IEEE International Joint Conference on Biometrics (IJCB)*, pp. 168–176. IEEE, 2017. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. *International journal of computer vision*, 115:211–252, 2015. Mathieu Salzmann et al. Learning transferable adversarial perturbations. *Advances in Neural Information Processing Systems*, 34:13950–13962, 2021. Alex Serban, Erik Poll, and Joost Visser. Adversarial examples on object recognition: A comprehensive survey. *ACM Computing Surveys (CSUR)*, 53(3):1–38, 2020. Alexander Michael Staff, Jin Zhang, Jingyue Li, Jing Xie, Elizabeth Ann Traiger, Jon Arne Glomsrud, and Kristian Bertheussen Karolius. An empirical study on cross-data transferability of adversarial attacks on object detectors. In *AI-Cybersec@SGAI*, pp. 38–52, 2021. Lu Sun, Mingtian Tan, and Zhe Zhou. A survey of practical adversarial example attacks. *Cybersecurity*, 1:1–9, 2018. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *International conference on machine learning*, pp. 3319–3328. PMLR, 2017a. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *International conference on machine learning*, pp. 3319–3328. PMLR, 2017b. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Thomas Tanay and Lewis Griffin.
A boundary tilting persepective on the phenomenon of adversarial examples. *arXiv preprint arXiv:1608.07690*, 2016. Giorgos Tolias, Filip Radenovic, and Ondrej Chum. Targeted mismatch adversarial attack: Query with a flower to retrieve the tower. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision, pp. 5037–5046, 2019. Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. *arXiv preprint arXiv:1704.03453*, 2017. Tzungyu Tsai, Kaichen Yang, Tsung-Yi Ho, and Yier Jin. Robust adversarial objects against deep learning models. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 954–962, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, Payal Chandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific discovery in the age of artificial intelligence. *Nature*, 620(7972):47–60, 2023. Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, and Liang Lin. Transferable, controllable, and inconspicuous adversarial attacks on person re-identification with deep mis-ranking. In *Proceedings of the* IEEE/CVF conference on computer vision and pattern recognition, pp. 342–351, 2020. Xiaosen Wang and Kun He. Enhancing the transferability of adversarial attacks through variance tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1924–1933, 2021. Xiaosen Wang, Xuanran He, Jingdong Wang, and Kun He. Admix: Enhancing the transferability of adversarial attacks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 16158–16167, 2021a. Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021b. Zhen Wang, Yitao Zheng, Hai Zhu, Chang Yang, and Tianyi Chen. Transferable adversarial examples can efficiently fool topic models. *Computers & Security*, 118:102749, 2022. Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, and Kui Ren. Feature importance-aware transferable adversarial attacks. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 7639–7648, 2021c. Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H Nguyen, and Isao Echizen. Closer look at the transferability of adversarial examples: How they fool different models differently. In *Proceedings of the* IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1360–1368, 2023. Xingxing Wei, Siyuan Liang, Ning Chen, and Xiaochun Cao. Transferable adversarial attacks for image and video object detection. *arXiv preprint arXiv:1811.12641*, 2018. Zhipeng Wei, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. Boosting the transferability of video adversarial examples via temporal translation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 2659–2667, 2022a. Zhipeng Wei, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. Cross-modal transferable adversarial attacks from images to videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15064–15073, 2022b. 
Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, and Archy De Berker. Adversarial examples in modern machine learning: A review. *arXiv preprint arXiv:1911.05268*, 2019. Boxi Wu, Jindong Gu, Zhifeng Li, Deng Cai, Xiaofei He, and Wei Liu. Towards efficient adversarial training on vision transformers. In *European Conference on Computer Vision*, pp. 307–325. Springer, 2022. Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, and Xingjun Ma. Skip connections matter: On the transferability of adversarial examples generated with resnets. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020a. Lei Wu and Zhanxing Zhu. Towards understanding and improving the transferability of adversarial examples in deep neural networks. In *Asian Conference on Machine Learning*, pp. 837–850. PMLR, 2020. Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R Lyu, and Yu-Wing Tai. Boosting the transferability of adversarial samples via attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1161–1170, 2020b. Weibin Wu, Yuxin Su, Michael R Lyu, and Irwin King. Improving the transferability of adversarial samples with adversarial transformations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9024–9033, 2021. Chong Xiang, Charles R Qi, and Bo Li. Generating 3d adversarial point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9136–9144, 2019. Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 3905–3911, 2018. Yanru Xiao and Cong Wang. You see what i want you to see: Exploring targeted black-box transferability attack for hash-based image retrieval systems. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 1934–1943, 2021. Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, and Jun Zhu. Improving transferability of adversarial patches on face recognition with generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11845–11854, 2021. Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L Yuille. Improving transferability of adversarial examples with input diversity. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 2730–2739, 2019. Yifeng Xiong, Jiadong Lin, Min Zhang, John E Hopcroft, and Kun He. Stochastic variance reduced ensemble adversarial attack for boosting the adversarial transferability. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 14983–14992, 2022. Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, and Wujie Wen. Securing the spike: On the transferabilty and security of spiking neural networks to adversarial examples. arXiv preprint arXiv:2209.03358, 2022a. Xiaojun Xu, Jacky Y Zhang, Evelyn Ma, Hyun Ho Son, Sanmi Koyejo, and Bo Li. Adversarially robust models may not transfer better: Sufficient conditions for domain transferability from the view of regularization. In *International Conference on Machine Learning*, pp. 24770–24802. PMLR, 2022b. Jia Xue, Zibo Meng, Karthik Katipally, Haibo Wang, and Kees Van Zon. Clothing change aware person identification. 
In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops*, pp. 2112–2120, 2018. Erkun Yang, Tongliang Liu, Cheng Deng, and Dacheng Tao. Adversarial examples for hamming space search. *IEEE transactions on cybernetics*, 50(4):1473–1484, 2018. Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Boosting transferability of targeted adversarial examples via hierarchical generative networks. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV*, pp. 725–742. Springer, 2022. Wenqian Yu, Jindong Gu, Zhijiang Li, and Philip Torr. Reliable evaluation of adversarial transferability. *arXiv preprint arXiv:2306.08565*, 2023. Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, and Kai-Wei Chang. On the transferability of adversarial attacks against neural text classifier. *arXiv preprint arXiv:2011.08558*, 2020. Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, and In So Kweon. Investigating top-k white-box and transferable black-box attack. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15085–15094, 2022a. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017. Jiaming Zhang, Qi Yi, and Jitao Sang. Towards adversarial attack on vision-language pre-training models. In *Proceedings of the 30th ACM International Conference on Multimedia*, pp. 5005–5013, 2022b. Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, and Michael R Lyu. Improving adversarial transferability via neuron attribution-based attacks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14993–15002, 2022c. Jiliang Zhang and Chen Li. Adversarial examples: Opportunities and challenges. *IEEE transactions on neural networks and learning systems*, 31(7):2578–2593, 2019. Yanghao Zhang, Wenjie Ruan, Fu Wang, and Xiaowei Huang. Generalizing universal adversarial attacks beyond additive perturbations. In *2020 IEEE International Conference on Data Mining (ICDM)*, pp. 1412–1417. IEEE, 2020. Anqi Zhao, Tong Chu, Yahao Liu, Wen Li, Jingjing Li, and Lixin Duan. Minimizing maximum model discrepancy for transferable black-box targeted attacks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8153–8162, 2023. Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, and Min Lin. On evaluating adversarial robustness of large vision-language models. *Advances in Neural Information Processing Systems*, 36, 2024. Zhengli Zhao, Dheeru Dua, and Sameer Singh. Generating natural adversarial examples. *arXiv preprint arXiv:1710.11342*, 2017. Zhengyu Zhao, Zhuoran Liu, and Martha Larson. On success and simplicity: A second look at transferable targeted attacks. *Advances in Neural Information Processing Systems*, 34:6115–6128, 2021. Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, and Michael Backes. Towards good practices in evaluating transfer adversarial attacks. *arXiv preprint arXiv:2211.09565*, 2022. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2921–2929, 2016. Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, and Nenghai Yu.
Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1961–1970, 2019. Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 452–467, 2018. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. *arXiv preprint arXiv:2304.10592*, 2023a. Hegui Zhu, Yuchen Ren, Xiaoyan Sui, Lianping Yang, and Wuming Jiang. Boosting adversarial transferability via gradient relevance attack. In *International Conference on Computer Vision (ICCV)*, pp. 4741–4750, 2023b. Yao Zhu, Jiacheng Sun, and Zhenguo Li. Rethinking adversarial transferability from a data distribution perspective. In *International Conference on Learning Representations*, 2021. Andy Zou, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models, 2023. Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, and Wei Li. Improving the transferability of adversarial examples with resized-diverse-inputs, diversity-ensemble and region fitting. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII*, pp. 563–579. Springer, 2020. Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, and Zhisong Pan. Making adversarial examples more transferable and indistinguishable. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 3662–3670, 2022.

## A More Variants Of I-FGSM

In this appendix, we first recall some background information on I-FGSM and then present more variants of I-FGSM (Dong et al.). We focus on perturbations constrained by an $\ell_\infty$ ball with radius $\epsilon$, that is, $\|x^{adv} - x\|_\infty \le \epsilon$. To understand the rest of this section, we begin by formalizing the iterative variant of the fast gradient sign method (I-FGSM) (Goodfellow et al., 2014), which serves as the basis for the development of the other methods. I-FGSM has the following update rule:

$$g^{(t+1)}=\nabla\ell(x^{adv(t)},y),$$
$$x^{adv(t+1)}=\mathrm{Clip}_{x}^{\epsilon}\{x^{adv(t)}+\alpha\cdot\mathrm{sign}(g^{(t+1)})\},\tag{64}$$

where $g^{(t)}$ is the gradient of the loss function with respect to the input, $\alpha$ denotes the step size at each iteration, and $\mathrm{Clip}_x^\epsilon$ ensures that the perturbation satisfies the $\ell_\infty$-norm constraint. More variants of I-FGSM are as follows:

Nesterov (NI-FGSM). Nesterov Accelerated Gradient (NAG) is another popular extension of the vanilla gradient descent algorithm that incorporates momentum to accelerate convergence and improve the generalization of neural networks (Nesterov, 1983). On top of the momentum mechanism, the most distinct feature of NAG is that the gradient is evaluated at a lookahead position based on the momentum term. Lin et al. propose NI-FGSM (Nesterov Iterative Fast Gradient Sign Method), which integrates NAG into the iterative gradient-based attack to leverage its looking-ahead property to help escape from poor local optima. At each iteration, NI-FGSM first moves the data point based on the accumulated update $g^{(t)}$

$$x^{nes(t)}=x^{adv(t)}+\alpha\cdot\mu\cdot g^{(t)},$$

then we have

$$g^{(t+1)}=\mu\cdot g^{(t)}+\frac{\nabla\ell(x^{nes(t)},y)}{\|\nabla\ell(x^{nes(t)},y)\|_{1}},$$

and the formulation for $x^{adv(t+1)}$ remains the same as (64).
The authors argue that the anticipatory update of NAG helps the attack escape poor local optima more easily and quickly, thereby improving the transferability of the perturbation.

Adam (AI-FGTM). Adam is a popular adaptive gradient method that combines the first- and second-order momentum of the gradients (Kingma & Ba, 2015). Zou et al. introduce the Adam Iterative Fast Gradient Tanh Method (AI-FGTM), which adapts Adam to the process of generating adversarial examples. In addition to using Adam in place of the momentum formulation, a key feature of AI-FGTM is the replacement of the sign function with the tanh function, which has the advantage of a smaller perturbation size.

$$m^{(t+1)}=m^{(t)}+\mu_{1}\cdot\nabla\ell(x^{adv(t)},y),$$
$$v^{(t+1)}=v^{(t)}+\mu_{2}\cdot\left(\nabla\ell(x^{adv(t)},y)\right)^{2},$$

where $m^{(t)}$ denotes the first moment vector, $v^{(t)}$ represents the second moment vector, and $\mu_1$ and $\mu_2$ are the first- and second-order momentum factors, respectively. Instead of using a fixed step size $\alpha$ in each iteration, AI-FGTM computes an adaptive step size based on

$$\alpha^{(t)}=\frac{\varepsilon}{\sum_{s=0}^{T-1}\frac{1-\beta_{1}^{s+1}}{\sqrt{1-\beta_{2}^{s+1}}}}\cdot\frac{1-\beta_{1}^{t+1}}{\sqrt{1-\beta_{2}^{t+1}}},$$

where $\beta_1$ and $\beta_2$ are exponential decay rates, $T$ denotes the total number of iterations, $\lambda$ denotes a scale factor, and we have $\sum_{s=0}^{T-1}\alpha^{(s)}=\epsilon$. Finally, the update rule of AI-FGTM is

$$x^{adv(t+1)}=\mathrm{Clip}_{x}^{\epsilon}\Big\{x^{adv(t)}+\alpha^{(t)}\cdot\tanh\Big(\lambda\frac{m^{(t+1)}}{\sqrt{v^{(t+1)}}+\delta}\Big)\Big\},$$

where $\delta$ prevents division by zero.

Variance Tuning (VNI/VMI-FGSM). Previous work shows that the stochastic nature of the minibatch gradient introduces a large variance in the gradient estimation, resulting in slow convergence and poor generalization; this gave rise to various variance reduction methods (Roux et al., 2012; Johnson & Zhang, 2013). In the context of generating adversarial examples, Wang & He present a variance-tuning technique that adopts the gradient information in the neighborhood of the previous data point to tune the gradient of the current data point at each iteration. Given an input $x \in \mathbb{R}^d$, they propose to approximate the variance of its gradient using

$$V(x)=\frac{1}{N}\sum_{i=1}^{N}\nabla_{x^{i}}\ell(x^{i},y)-\nabla\ell(x,y),\tag{65}$$

where $x^i = x + r^i$, each dimension of $r^i$ is independently sampled from a uniform distribution between $-\beta$ and $\beta$, and $\beta$ is a hyperparameter. They introduce Variance-tuning MI-FGSM (VMI-FGSM) and Variance-tuning NI-FGSM (VNI-FGSM) as improvements over the original formulations. To integrate the gradient variance into the iterative process, the update rule for $g^{(t+1)}$ is modified as follows:

$$\hat{g}^{(t+1)}=\nabla\ell(x^{(t)},y),\qquad g^{(t+1)}=\mu\cdot g^{(t)}+\frac{\hat{g}^{(t+1)}+v^{(t)}}{\|\hat{g}^{(t+1)}+v^{(t)}\|_{1}},\qquad v^{(t+1)}=V(x^{(t)}),\tag{66}$$

where $x^{(t)} = x^{adv(t)}$ in VMI-FGSM and $x^{(t)} = x^{nes(t)}$ in VNI-FGSM.
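To make the update rules above concrete, the following PyTorch sketch implements this family of attacks under stated assumptions: it is an illustration rather than the reference implementation of any cited paper, the `model` is assumed to be a differentiable classifier over image batches in $[0, 1]$ with NCHW layout, and all function names and hyperparameter values are ours. Setting `mu=0` and `variance_tuning=False` recovers plain I-FGSM (64), `mu>0` gives MI-FGSM, `nesterov=True` gives NI-FGSM, and `variance_tuning=True` adds the variance term of (65)-(66).

```python
import torch
import torch.nn.functional as F

def loss_grad(model, x, y):
    """Gradient of the cross-entropy loss with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, x)[0]

def ifgsm_family(model, x, y, eps=8/255, alpha=2/255, steps=10,
                 mu=1.0, nesterov=False, variance_tuning=False,
                 beta=1.5, n_samples=5):
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)   # accumulated update g^(t)
    v = torch.zeros_like(x)   # gradient variance v^(t), used by VMI/VNI-FGSM
    for _ in range(steps):
        # NI-FGSM evaluates the gradient at the lookahead point x^nes(t)
        x_eval = x_adv + alpha * mu * g if nesterov else x_adv
        g_hat = loss_grad(model, x_eval, y)
        # momentum accumulation with an L1-normalized (variance-tuned) gradient
        num = g_hat + v if variance_tuning else g_hat
        g = mu * g + num / num.abs().flatten(1).sum(dim=1).view(-1, 1, 1, 1)
        if variance_tuning:
            # V(x) of (65): average neighborhood gradient minus the local gradient
            radius = beta * eps
            nbr = sum(loss_grad(model,
                                x_eval + torch.empty_like(x).uniform_(-radius, radius),
                                y) for _ in range(n_samples)) / n_samples
            v = nbr - g_hat
        x_adv = x_adv + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # the Clip_x^eps operator
        x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()
```

Note how all variants share the sign-step-and-clip skeleton of (64) and differ only in how the direction $g^{(t+1)}$ is formed, which is why a single routine with flags suffices for this sketch.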
Review 1:

Summary:
This submission is a survey paper. It provides a comprehensive review of research progress on achieving and improving the transferability of adversarial examples. The paper mainly focuses on transferability-based attacks in the image classification task, covers a broad range of these attacks, and classifies these attacks into two categories: via surrogate model, and via generative model. Then the paper further discusses the transferability-based attacks in other domains. The paper is concluded with a discussion of future works.

Strengths and Weaknesses:
Strengths:
- A comprehensive survey covering the most impactful and recent works in developing transferability-based attacks.
- A systematic and clear taxonomy. For example, for attacks via surrogate models, the survey classifies them into three classes: data augmentation, loss, and optimization.

Weaknesses:
- The submission may be improved in terms of summarization. Several parts of the submission read more like pure enumeration (xxx et al present xxx; xxx et al present xxx; ...) instead of categorizing and summarizing the literature. It would benefit the readers more if the core methodology is revealed and used to narrate the literature top-down. Furthermore, some taxonomy figures and comparison tables would improve the accessibility.
- The granularity assignment may be improved. For attacks via surrogate models, many works are introduced in detail, with core equations listed but without a complete explanation. For example, $m$ and $v$ are not defined in AI-FGTM. Eqns. (27) and (28) are presented without detailed explanation. On the other hand, in Section 5.3, only the goal of the attacks is revealed (present an attack achieving xxx) rather than the core methodology. It would be better if the core methodology of the representative methods were introduced and other details omitted.
- The discussion can be further enhanced. For example, I would expect the authors to contextualize the research of adversarial transferability in a larger scope - their impact and connections with general adversarial ML, the theoretical points of view on adversarial transferability, their applications in more general trustworthy AI, the transferability study prior to the DL era, the transferability study in LLMs, etc. However, the discussion of the submission seems to mainly focus on how to leverage transferability to derive stronger adversarial attacks itself.

Requested Changes:
See "weaknesses" above for requested changes. Minor:
- Table 2: define "AE" - I guess it is "adversarial example"?
- Page 3 last line: "pertrubations" typo
- Page 4 above Eqn. (7): "in Equation 6 is often solved" -> "is often approximately solved"
- Page 6: "Lin et al highlights": "Lin et al" is in plural form, so it should be "Lin et al highlight". The same applies to several other occurrences.
- Page 10: "It is not enough for targeted adversarial examples to be close to the target class but not far from the true class." seems not very clear.
- Eqn. (41): abs R_i seems to be a typo
- Below Eqn. (45), $S_i$ denotes the set of activations. What is "the set of activations"?
- Last line on Page 21: $L - p$ -> $L_p$

Broader Impact Concerns:
The work may need to add a Broader Impact Statement to summarize the potential malicious use of adversarial transferability-based attacks and potential mitigations.

==================================================

Review 2:

Summary:
This paper summarizes the existing literature on adversarial example transferability.
Specifically, the authors cover a massive amount of research on transferability, organized by various topics: how to assess the transferability, two categories (surrogate model-based and generative model-based) of transferability-enhancing methods, and transferability-enhancing methods in diverse domains. The authors also discuss the current challenges and research opportunities in adversarial transferability.

The paper introduces preliminary materials to cover the basic knowledge of adversarial attack transferability. Additionally, the authors summarize the three existing evaluation metrics that measure adversarial transferability: Fooling rate, interest class rank, and knowledge transfer-based metrics. The next two sections (Section 3 and Section 4) are the main parts that cover transferability-enhancing methods, mainly on image classification tasks. Specifically, the authors divide the existing transferability-enhancing techniques into two categories: One is based on the surrogate model, and the other is based on the generative model. The surrogate model-based method uses a surrogate model that approximates the target model, and the attacks described in Section 3 aim to improve the transferability of adversarial examples when attacking the surrogate model. The authors introduce an optimization form of the surrogate model-based attack. Then, the authors further divide this category into four subcategories according to the part on which the methods concentrate: data augmentation, optimization, model, and loss. The generative model-based method uses a generative model to generate adversarial examples (or adversarial perturbations). The authors introduce generative model-based methods in Section 4 from two perspectives: The methods' effectiveness and efficiency. Section 5 covers transferability-enhancing methods in machine learning domains other than image classification. Specifically, the authors briefly go through transfer attacks in vision tasks (e.g., object detection, segmentation, 3D point clouds, video, etc.), transfer attacks in NLP tasks, and even domain-invariant adversarial attacks. Finally, the authors list some existing challenges in adversarial transferability research, e.g., better choices of surrogate model, how to evaluate or benchmark transferability, etc.

Strengths and Weaknesses:

# Strengths

1. To the best of my knowledge, this is the first survey on adversarial example transferability.
2. The authors introduce many research works about enhancing adversarial transferability, which will benefit other researchers interested in the transferability of adversarial examples.

# Weaknesses

1. The authors mainly focus on introducing transferability-enhancing techniques. While I'm unaware of all the literature on adversarial transferability, it would be better to include more discussions on theoretical findings or explanations about what causes adversarial transferability.
2. The authors should spend more time proofreading the writing and correcting mistakes in citation, spelling, typos, etc. In particular, typos in mathematical notations must be corrected; a survey paper should convey information as accurately as possible. Please take a look at **Requested Changes** for more detailed comments.

Requested Changes:

1. Consider renaming the two categories of transferability-enhancing methods, i.e., surrogate model-based and generative model-based.
* The current category names might look like generative model-based methods have nothing to do with the surrogate model, but they also use a surrogate model in the formula.
* The generative model is a method that generates adversarial examples. However, the surrogate model is not an attack method, but the optimization generates adversarial examples for the methods in Section 3. I'd name this category optimization-based methods. (This may require renaming the subcategory in Section 3.2, though.)

2. I can see a few typos in mathematical notations/formulae. I'll list some of those mistakes, but I cannot guarantee that I covered all the formulae. Please review all the formulae and ensure there are no typos.

* In Table 2, $H_k^i$ should be $H_k^l$ because it is about the $l$-th layer.
* Equation 4 should be corrected to $\alpha_2^{f_s\rightarrow f_t}=\|\mathbb{E}_{x\sim D}[\widehat{\Delta _{f_s\rightarrow f_t}(x)} \cdot \widehat{\Delta _{f_s\rightarrow f_t}(x)}^\top]\|$ (i.e., move $\top$ inside the brackets).
* In Equation 8, exchange the probability $p$ and $1-p$. I checked the Xie et al. paper, and $p$ is the probability of applying the transformation.
* In **Natural, Unrestricted and Non-Additive Attacks** (Page 21), $L-p$ in the last sentence looks like a typo of $L_p$

3. Sometimes, the authors use terms or notations without explaining what they mean. Please describe the missing parts.

* In the **ADMIX** description (Page 6), the authors used the term "Mixup" without explaining it. Please explain Mixup briefly with a citation.
* In the **AI-FGTM** description (Page 7), the adaptive step size uses an unseen notation $T$. I believe that the author wanted to write $\alpha^{(t)} = \frac{\varepsilon}{ \sum_{s=0}^t \frac{1-\beta_1^{s+1}}{\sqrt{1-\beta_2^{s+1}}}} \frac{1-\beta_1^{t+1}}{\sqrt{1-\beta_2^{t+1}}}$ (I changed the running index to be $s$, running from 0 to $t$)?
* In Equation (31), the authors used an unseen notation $\gamma$ without explanation. I had to read Fang et al. to understand what it means. It looks like this $\gamma$ is some decay factor that determines the augmented model $f$, so Fang et al. replaced $f$ by $\gamma$. I think the authors tried to clarify this in the next line but made another mistake in the formula. Correct the formula to $G(\mathbf{x} _{adv}, \gamma)=G(\mathbf{x} _{adv}, f) = \nabla _{\mathbf{x} _{adv}} \mathcal{L}(f _{\mathbf{x} _{adv}}, y)$ and explain the context and clarify the notation.
* In the **Conditional Adversarial Distribution (CAD)** description (Page 17), the authors mention that Feng et al. proposed to transfer a subset of parameters based on CAD, but the authors do not explain anything about CAD. I can only see that it is an abbreviation of Conditional Adversarial Distribution, but what does it even mean? Please describe the concept to make it more understandable.

4. In Table 1, the untargeted attack success is determined by the misclassification of a model. However, Equation 1 uses "change of prediction" as an attack success. Either change the description of the untargeted attack in Table 1 or change Equation 1 to $\arg\max_i f_t^i(x^{adv})\ne y,\text{ given }\arg\max_i f_s^i(x^{adv})\ne y$.

5. The first paragraph in Section 3.2 contains an interesting perspective. Can we interpret other methods (out of Section 3.2) from this viewpoint, i.e., data augmentation as another approach to enhance the generalization?

6. In Section 3.2, separate the iterative variants of FGSM from the methods for adversarial transferability.
You can add a `\subsubsection` summarizing the variants or move them to other sections, e.g., the Appendix.

7. Consider addressing the following minor issues regarding the paper writing.

* According to the [resource about how to format the paper](https://github.com/JmlrOrg/tmlr-style-file/archive/refs/heads/main.zip) in the [Author guidelines](https://jmlr.org/tmlr/author-guide.html), we distinguish in-text citations and in-parenthesis citations. In particular, > When the authors or the publication are included in the sentence, the citation should not be in parenthesis. You should use `citet{}` in this case, and you should use `citep{}` otherwise. I see too many exceptions to this rule. Please fix all the citations properly.
* Run a spell checker or proofread to catch minor typos, e.g., "generaator".
* In the **RDI** description (Page 5), "Similarly to SI" should be changed to "Similarly to SIM".
* In **Relation to Transferability across Images** (Page 21), the paragraph's first sentence is redundant.

Broader Impact Concerns:
I don't see a particular broader impact concern regarding this paper.

==================================================

Review 3:

Summary:
The paper offers a comprehensive summary of prior research in the field of adversarial transferability. The author skillfully categorizes the existing approaches into two distinct methodologies: first, the surrogate model-based adversarial transferability, and second, the adversarial transferability utilizing generative models. Moreover, the author broadens the scope of discussion by introducing a variety of research contributions in natural language processing (NLP) and various vision tasks beyond the conventional realm of image classification. This inclusive approach enriches our understanding of the topic and its diverse applications.

Strengths and Weaknesses:

**Strengths:**
* The survey paper is well-organized, effectively structuring the existing body of research on adversarial transferability.
* It includes an extensive survey of adversarial transferability, commendably encompassing both NLP tasks and a variety of vision tasks.
* The paper does an excellent job in clearly outlining the current challenges faced in the field of adversarial transferability.

**Weaknesses:**
* While the survey provides a comprehensive overview, it would be greatly enhanced by more general conclusions and common findings. Offering insights or perspectives on the potential future directions of adversarial transferability research would add significant value to the paper.
* The paper, although informative, is somewhat marred by numerous grammatical errors. A thorough review and editing for language accuracy would enhance its clarity and professional impact.

Requested Changes:

Major comments
* Offering insights or perspectives on the potential future directions of adversarial transferability research would add significant value to the paper.

Minor comments
* There appears to be an inconsistency in the use of `\citet` and `\citep`. For improved clarity, it's advisable to use `\citet` when the citation is a subject of the sentence and `\citep` in other cases. Consistent application of these citation formats would enhance the paper's readability and scholarly accuracy.
* In Section 3.1, there's a mismatch between the paragraph titles and their subjects. For example, DIM is mentioned as the subject in the first sentence, but SIM is used as the paragraph title.
A similar issue is noticed with the usage of RDI, which might be intended as a subject but seems to have a grammatical error. Aligning the paragraph titles with the subjects discussed within each would improve the structure and coherence of this section.
* A typo error is present in the section titled "Challenges and Opportunities."
* The paper contains numerous grammatical errors. A comprehensive proofreading and editing process is recommended to enhance readability and professional presentation.

Broader Impact Concerns:
NA

==================================================

Metareview:

Recommendation: Accept as is

Comment: While the paper does not offer any new or novel approaches to adversarial transferability, the consensus is that the paper provides an extensive literature review and will benefit future research. Concerns were raised regarding the comprehensiveness, depth of analysis and comparison, and presentation quality. These concerns were addressed by the authors during the discussion phase, significantly enhancing the paper's quality.

==================================================
# Linear Bandits With Memory

Giulia Clerici *giulia.clerici@unimi.it*
Department of Computer Science, University of Milan, Italy

Pierre Laforgue *pierre.laforgue1@gmail.com*
Department of Computer Science, University of Milan, Italy

Nicolò Cesa-Bianchi *nicolo.cesa-bianchi@unimi.it*
Department of Computer Science, University of Milan, Italy
DEIB, Politecnico di Milano, Italy

Reviewed on OpenReview: *https://openreview.net/forum?id=CrpDwMFgxr*

## Abstract

Nonstationary phenomena, such as satiation effects in recommendations, have mostly been modeled using bandits with finitely many arms. However, the richer action space provided by linear bandits is often preferred in practice. In this work, we introduce a novel nonstationary linear bandit model, where current rewards are influenced by the learner's past actions in a fixed-size window. Our model, which recovers stationary linear bandits as a special case, leverages two parameters: the window size $m \ge 0$, and an exponent $\gamma$ that captures the rotting ($\gamma < 0$) or rising ($\gamma > 0$) nature of the phenomenon. When both $m$ and $\gamma$ are known, we propose and analyze a variant of OFUL which minimizes regret against cyclic policies. By choosing the cycle length so as to trade off approximation and estimation errors, we then prove a bound of order $\sqrt{d}\,(m+1)^{\frac{1}{2}+\max\{\gamma,0\}}\,T^{3/4}$ (ignoring log factors) on the regret against the optimal sequence of actions, where $T$ is the horizon and $d$ is the dimension of the linear action space. Through a bandit model selection approach, our results are then extended to the case where both $m$ and $\gamma$ are unknown. Finally, we complement our theoretical results with experiments comparing our approach to natural baselines.

## 1 Introduction

Many real-world problems are naturally modeled by stochastic linear bandits, where actions belong to a linear space, and the learner obtains rewards whose expectations are linear functions of the chosen action (see e.g., Lattimore & Szepesvári (2020)). Formally, at each time step $t$ the expected reward is $r_t = \langle a_t, \theta^* \rangle$, where $a_t \in \mathbb{R}^d$ is the chosen action and $\theta^* \in \mathbb{R}^d$ is a fixed and unknown parameter to be estimated. In a song recommendation problem, for instance, the possible actions are the songs from the catalogue, usually represented by their feature vectors (Deshpande & Montanari, 2012; Korkut & Li, 2021; Ghoorchian & Maghsudi, 2022). The linear reward $r_t$ (i.e., the user satisfaction) measures how well the song $a_t$ picked by the learner matches the (unknown) preferences of the user, represented by $\theta^*$. However, this model fails to capture a key aspect, i.e., the nonstationarity of the users' preferences. For example, user satiation with respect to the recommended items is a typical phenomenon in this context (Kapoor et al., 2015; Kunaver & Požrl, 2017), as studied in rotting bandits (Bouneffouf & Féraud, 2016). Indeed, identifying the favorite song of a user (i.e., the vector $a$ in the action set that maximizes $\langle a, \theta^* \rangle$) only partly solves the recommendation problem, as suggesting this song repeatedly is not meaningful in the long run (Kovacs et al., 2018; Schedl et al., 2018). But satiation is far from being the only nonstationary phenomenon observed in practice. In algorithmic selection, for instance, one must choose among a pool of algorithms the one that is going to get the next chunk of resources (e.g., CPU time or samples). In this case, we expect the quality of the solution found by each algorithm to increase as the algorithm gets selected.
This model, known as rising bandits, has been studied in deterministic (Heidari et al., 2016; Li et al., 2020) and stochastic (Metelli et al., 2022) settings. Nonstationarity in bandits, which has been mostly studied in the case of finitely many arms, appears to be significantly more intricate to analyze in a linear bandit framework due to the structure of the action space. For instance, rotting bandits (Bouneffouf & Féraud, 2016) or rested rising bandits (Metelli et al., 2022) assume that the expected reward of an arm is fully determined by the number of times this arm has been pulled in the past. In the linear case, on the contrary, one would expect nontrivial cross-arm effects. Listening to rock songs should affect the future interest in rock songs, but also, to a minor extent, that in folk music, as the two genres are related. On the other side, it also seems reasonable that a folk rock song does not increase rock satiation as much as a pure rock song. Hence, a principled way to model nonstationarity in linear environments is needed.

In this work, we introduce a novel linear bandit framework that allows us to model complex nonstationary behaviors in an infinite and structured space of actions. More specifically, the nonstationarity is captured by a matrix, determined by the past actions of the learner and affecting the expected reward of future actions. Formally, the expected reward at time step $t$ becomes $r_t = \langle a_t, A_{t-1}\theta^* \rangle$, where $A_{t-1} = A(a_{t-1}, \ldots, a_{t-m}) = \left(A_0 + \sum_{s=1}^{m} a_{t-s}a_{t-s}^\top\right)^\gamma \in \mathbb{R}^{d \times d}$. Here, $A_0$ is some initial symmetric and positive semidefinite matrix. Typically, $A_0$ is chosen to be the identity $I_d$, which we refer to as the isotropic initialization. The memory size $m \ge 0$ controls the range of past actions having an influence, while the exponent $\gamma \in \mathbb{R}$ quantifies their impact. A positive $\gamma$ corresponds to a rising behavior, and a negative $\gamma$ to a rotting one - two established scenarios in the bandit literature. In the rotting setting, playing action $a$ at time $t$ decreases the expected reward of $a$ at time $t + 1$. Hence, solving this problem requires long-term planning, and repeatedly playing $\theta^*$ may not be optimal. Instead, in a rising scenario with isotropic initialization, an optimal action played (and thus boosted) at time $t$ remains optimal at time $t + 1$. Although optimal policies are stationary in this case, note that such problems are intrinsically difficult, as the learner is penalized twice: for not choosing a good action at the present time, but also at future time steps, for not having boosted the right action. We highlight that our approach is able to cope simultaneously with these two different scenarios. Finally, note that our model recovers stationary linear bandits as a special case when $\gamma = 0$ (or $m = 0$ and $A_0 = I_d$).

We start by focusing on cyclic policies, and show that they provide a reasonable approximation to the optimal policy (which may not be cyclic) while being easier to learn. When $m$ and $\gamma$ are known, estimating the best block of fixed length reduces to a stationary problem, which we solve using a block variant of OFUL (Abbasi-Yadkori et al., 2011). When $m = 0$, our variant recovers the regret bound $\mathcal{O}(d\sqrt{T})$ of OFUL up to log factors. We then optimize the block length in order to balance the approximation and estimation errors, and obtain a bound on the regret against the optimal sequence of actions in hindsight of order $\sqrt{d}\,(m+1)^{\frac{1}{2}+\max\{\gamma,0\}}\,T^{3/4}$ (ignoring log factors) for all $T \ge (md)^2$. Finally, we extend our analysis to the case when $m$ and $\gamma$ are both unknown.
For this case, we prove regret bounds via an extension of the bandit model selection approach of Cutkosky et al. (2020). Empirically, our approach is shown to outperform natural baselines, such as the oracle greedy strategy (playing the action with the best instantaneous expected reward) and a naive block learning approach. Our experimental results also include misspecified settings, where we learn $\theta^*$ and simultaneously either $m$ or $\gamma$.

## Contributions.

- We introduce a new bandit framework to model nonstationary effects in linear action spaces. Our model generalizes stationary linear bandits, whose bound we recover as a special case.
- We propose an OFUL-based algorithm achieving sublinear regret against the best sequence of actions by learning cyclic policies and balancing estimation and approximation errors.
- We use a bandit model selection approach to learn the system's parameters $m$ and $\gamma$.
- Empirically, our algorithm outperforms natural baselines in both rotting and rising settings.

Related works. Stochastic linear bandits, which were introduced two decades ago (Abe & Long, 1999; Auer, 2002), are typically addressed using algorithms based on ellipsoidal confidence sets (Dani et al., 2008; Rusmevichientong & Tsitsiklis, 2010; Abbasi-Yadkori et al., 2011). Nonstationary bandits have been mainly studied in the case of finitely many arms. Among the most studied models, there are rested (Gittins, 1979; Gittins et al., 2011) and restless (Whittle, 1988; Ortner et al., 2012; Tekin & Liu, 2012) bandits, rotting bandits (Bouneffouf & Féraud, 2016; Heidari et al., 2016; Cortes et al., 2017; Levine et al., 2017; Seznec et al., 2019), bandits with rewards depending on arm delays (Kleinberg & Immorlica, 2018; Cella & Cesa-Bianchi, 2020; Simchi-Levi et al., 2021; Laforgue et al., 2022), blocking and rebounding bandits (Basu et al., 2019; Leqi et al., 2021), and rising bandits (Li et al., 2020; Metelli et al., 2022). The $d$-step lookahead regret of Pike-Burke & Grunewalder (2019) is similar to our regret against the best cyclic policy. However, while the lookahead oracle selects the best block based on the learner's current state, our oracle is defined independently of the learner's action. In this respect, our work investigates a policy regret version of the lookahead regret. Some works have also considered nonstationary bandit frameworks, where the unknown parameter $\theta^*$ is then replaced by a sequence of vectors $\theta^*_t$ that evolves over time. Standard assumptions then stipulate that $\theta^*_t$ is piecewise stationary, with a fixed number of change points (Bouneffouf et al., 2017; Wu et al., 2018; Auer et al., 2019; Chen et al., 2019; Di Benedetto et al., 2020; Xu et al., 2020; Li et al., 2021), or that the variation budget $\sum_{t \le T} \|\theta^*_t - \theta^*_{t-1}\|$ is bounded (Besbes et al., 2014; Karnin & Anava, 2016; Luo et al., 2018; Cheung et al., 2019; Russac et al., 2019; 2020; Kim & Tewari, 2020; Zhao et al., 2020). See also Mueller et al. (2019) for an application of linear bandits to nonstationary dynamic pricing. In addition to these assumptions, we highlight that the above works are fundamentally different from ours, as the evolution of $\theta^*_t$ is oblivious to the actions taken by the learner. This removes any need for long-term planning and puts the focus on the dynamic regret, where the algorithm's performance is compared to the rewards which one could obtain by picking $a_t$ according to $\theta^*_t$.
Finally, note that nonstationarity in linear bandit environments may also be tackled using Gaussian Processes (Faury et al., 2021; Deng et al., 2022). We note that the idea of combining linear and rotting bandits was already discussed in Seznec (2020, Section 4.7), where the author provides some evidence of the intrinsic difficulty of doing so. There, the author proposes an extension of rotting bandits to linear spaces of actions by summing along the different dimensions the projections of the past actions. It is however proved that such a model cannot be learned. Indeed, it is possible to exhibit an instance of this linear rotting problem for which any policy suffers linear regret. On the contrary, our analysis in Section 3 shows that our model (based instead on the covariance matrix of the past actions) is learnable. The price we pay for ensuring learnability is that our model does not capture the $K$-armed rotting bandit setting in its full generality, see Example 2 for more details.

Notation. $\mathcal{B}_d$ denotes the Euclidean unit ball, $0_d$ and $(e_k)_{k \le d}$ the zero vector and the standard basis of $\mathbb{R}^d$, $I_d \in \mathbb{R}^{d \times d}$ the identity matrix, $\|M\|_*$ the operator norm of $M$, and $\gamma^+ = \max(\gamma, 0)$ for any $\gamma \in \mathbb{R}$. Bold characters refer to block objects, and $\widetilde{\mathcal{O}}$ is used when neglecting logarithmic factors.

## 2 Model

In this section, we introduce our model of *linear bandits with memory* (LBM in short). LBMs strictly generalize stationary linear bandits, and also recover some nonstationary bandit models with finitely many arms as special cases. The learning setup is as follows. At each time step $t = 1, 2, \ldots$ the learner picks an action $a_t$ from a (possibly infinite) set of actions $\mathcal{A} \subset \mathcal{B}_d$, and receives a stochastic reward $y_t$. Similarly to linear models, we assume that the expected reward is a linear function of some unknown vector $\theta^* \in \mathcal{B}_d$. In contrast to stationary models, however, the expected reward at time $t$ is also influenced by the choice of previous actions of the learner. Mathematically, this is captured by the correlation matrix $\sum_{s=1}^{m} a_{t-s}a_{t-s}^\top$, where $m$ measures how far in the past actions can influence the current reward.¹ Finally, in order to model the type (rising or rotting) of behavior and its strength, we use a positive or negative exponent $\gamma$. This results in the following formula for the reward at time $t$,

$$y_{t}=\left\langle a_{t},A(a_{t-m},\ldots,a_{t-1})\,\theta^{*}\right\rangle+\eta_{t}\,,\tag{1}$$

¹Using a different analysis, one could replace our fixed-size window with exponentially decaying discount factors. However, while these factors are typically treated as fixed model parameters, our analysis shows how to learn the best $m$ (see Section 3.3).
Note that parameters m and γ have the twofold advantage of making the model general enough to account for both rotting (γ < 0) and rising (γ > 0) scenarios while being simple enough to be learned simultaneously with θ ∗, see Section 3.3. Note also that at any time step t the expected reward rt = E[yt] satisfies |rt*| ≤ ∥*At−1∥∗. Given a horizon T ∈ N, the learner aims at maximizing the expected sum of rewards obtained over the T interaction rounds. The performance is measured against the best sequence of actions over the T rounds, i.e., through the regret $$\sum\nolimits_{t=1}^{T}r_{t}^{*}-\mathbb{E}\left[\sum\nolimits_{t=1}^{T}y_{t}\right]\,,$$ where r ∗ t =a ∗ t , A(a ∗ t−m*, . . . , a*∗ t−1 ) θ ∗and (a ∗ t )t≥1 is the optimal sequence of actions, i.e., the sequence maximizing the expected sum of rewards obtained over the horizon T $$a_{1}^{*},\ldots,a_{T}^{*}=\mathop{\rm arg\,max}_{a_{1},\ldots,a_{T}\in A}\sum_{t=1}^{T}\left\langle a_{t},A(a_{t-m},\ldots,a_{t-1})\,\theta^{*}\right\rangle.\tag{3}$$ Throughout the paper, we use OPT to denote Pt r ∗ t whenever the horizon T is understood from the context. Note that a LBM is fully characterized by: the action set A, the parameter θ ∗, the memory size m, and the exponent γ. As shown in the following examples, LBMs fully generalize (stationary) linear bandits, and allow to partially recover rotting/rising rested bandits in the limit m → ∞. Example 1 (Stationary linear bandits) Consider a linear bandit model, defined by an action set *A ⊂ B*d and θ ∗ ∈ Bd. This is equivalent to a LBM with the same A and θ ∗, and memory matrix A *such that* A(a1*, . . . , a*m) = Id for any a1, . . . , am ∈ Am*, i.e., when* m = 0 or γ = 0. Example 2 (Rotting and rising rested bandits) In rotting (Levine et al., 2017; Seznec et al., *2019) or* rising (Metelli et al., 2022) rested bandits, the expected reward of an arm k at time step t is fully determined by the number nk(t) of times arm k has been played before time t*. Formally, each arm is equipped with a* function µk such that the expected reward at time t is given by µk(nk(t))*. In particular, requiring all the* µk to be nonincreasing corresponds to the rotting bandits model, and requiring all the µk *to be nondecreasing* corresponds to the rested rising bandits model. Now, let d = K, A = (ek)1≤k≤K, θ ∗ = (1/ √*K, . . . ,* 1/ √K), and m → ∞2. By the definition of A*, see* (2)*, and the orthogonality of the actions, it is easy to check that* the expected reward of playing action ek at time step t *is given by* (1 + nk(t))γ/ √K. When γ ≤ 0*, this is a* nonincreasing function of nk(t), and we recover rotting rested bandits. Conversely, when γ ≥ 0*, we recover* rising rested bandits. We note however that the class of decreasing (respectively increasing) functions we can consider is restricted to the set of monomials of the form n 7→ (1 + n) γ/ √K, for γ ≤ 0 *(respectively* γ ≥ 0). Extending it to generic polynomials is clearly possible, although it requires more computations in the model selection phase, see Remark 4 and Section 3.3. Although rotting and rising bandits require infinite memory, we argue on both practical and theoretical grounds that in our setting a finite value of m is preferable. First, in many applications it is reasonable to assume that the effect of past actions will vanish at some point. For example, listening to a song now does not affect how much we will enjoy the same song in a distant enough future. 
²In the next paragraph, however, we explain why a bounded memory $m$ is preferable within our model.

![4_image_0.png](4_image_0.png)

Figure 1: In the top pane, we plot the effect of the memory matrix (2) on the action space for $d = 2$, $m = 1$, and $\gamma \in \{-6, 0, 2\}$. The red arrow is $\theta^*$ and the black arrow is action $a_{t-1}$. The color level indicates the value of the instantaneous expected reward of any action $a_t$ (point on the disk). When $\gamma = -6$, the rotting effect is so powerful that the optimal action $a_t$ is orthogonal to $a_{t-1}$. When $\gamma = 0$, the optimal action remains $\theta^*$, independently of $a_{t-1}$. For $\gamma = 2$, the optimal action is shifted between $\theta^*$ and $a_{t-1}$. However, the top plot does not show that constantly playing $\theta^*$ is not the optimal policy. In the bottom pane, we consider horizon $T = 2$, with the same choices of parameters. For a given action $a_1$, since $T = 2$, it is possible to determine the best possible next action $a_2$. The color now indicates the sum of expected rewards as a function of the initial action $a_1$ (point on the disk). For $\gamma = -6$, we clearly see that playing $\theta^*$ is not optimal anymore. On the other side, it shows that not playing $\theta^*$ is more harmful when $\gamma = 2$ than when $\gamma = 0$.

Second, permanent effects may trivialize the problem on the theoretical side: consider $m \to \infty$ and $\gamma \le -1/2$; then for any sequence of actions $(a_t)_{t \ge 1}$ we have

$$\sum_{t=1}^{T}\langle a_{t},A_{t-1}\theta^{*}\rangle\leq\sum_{t=1}^{T}\left\|A_{t-1}a_{t}\right\|_{2}\leq\sqrt{T\sum_{t=1}^{T}\left\|A_{t-1}a_{t}\right\|_{2}^{2}}\leq\sqrt{2dT\log(1+T/d)}\coloneqq B_{T}\,,$$

where we have used the elliptical potential lemma (Lattimore & Szepesvári, 2020, Lemma 19.4). Hence, as soon as $\gamma \le -1/2$, we have $\mathrm{OPT} \le B_T$, and the trivial strategy consistently playing $0_d$ enjoys a small regret $B_T$. Conversely, consider $\gamma \ge 0$. The strategy consistently playing $\theta^*$ achieves, after $t$ rounds, an instantaneous reward of $(1 + t)^\gamma$, which is diverging for $\gamma \ge 1$. This is not realistic in most applications and, incidentally, violates the concave payoffs assumption (Metelli et al., 2022, Assumption 3.2). Therefore, although considering $m = +\infty$ may look attractive at first sight, it actually fails to adequately model song satiation, and restricts the range of relevant $\gamma$ from $\mathbb{R}$ to $(-1/2, 1)$. Instead, focusing on finite memory $m$ yields more interesting problems, although it prevents a full generalization of rotting bandits with finitely many arms. We note however that when $m < \infty$, the spirit of rotting (resp., rising) bandits is still preserved, as playing an action does decrease (resp., increase) its efficiency for the next pulls (within the time window), see also Figure 1.

A naive approach to learning an LBM is to neglect nonstationarity. Assuming that $\theta^*$ is known, one may then play at time $t$ the action $a^{greedy}_t = \operatorname{arg\,max}_{a \in \mathcal{A}} \langle a, A_{t-1}\theta^* \rangle$. Although this strategy, which we refer to as oracle greedy, may be optimal in some cases (e.g., in rising isotropic settings, see Heidari et al. (2016, Section 3.1) and Metelli et al. (2022, Theorem 4.1) for discussions in the $K$-armed case), we highlight that it may also be arbitrarily bad, as stated in the next proposition.

Proposition 1 *The oracle greedy strategy, which plays $a^{greedy}_t = \operatorname{arg\,max}_{a \in \mathcal{A}} \langle a, A_{t-1}\theta^* \rangle$ at time step $t$, can suffer linear regret, in both rotting and rising scenarios.*

Hence, one must resort to more sophisticated strategies, which may include long-term planning.
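To make the dynamics concrete, the following NumPy sketch simulates a small LBM and the oracle greedy baseline of Proposition 1; the two-action rotting instance, the helper names, and all parameter values are ours and purely illustrative. With $\gamma = -6$ and $m = 1$, greedy keeps replaying the direction of $\theta^*$ for a heavily discounted reward, while the simple cyclic policy that alternates between the two orthogonal actions earns a reward close to 1 on every other round.

```python
import numpy as np

def memory_matrix(window, gamma, d):
    """A(a_{t-m},...,a_{t-1}) = (I_d + sum of outer products over the window)^gamma, as in (2)."""
    A = np.eye(d) + sum(np.outer(a, a) for a in window)
    w, V = np.linalg.eigh(A)        # A is symmetric positive definite
    return (V * w**gamma) @ V.T     # V diag(w^gamma) V^T

def run(policy, theta, m, gamma, T):
    """Play policy(t, window) for T rounds and return the total expected reward."""
    d, window, total = len(theta), [], 0.0
    for t in range(T):
        a = policy(t, window)
        total += a @ memory_matrix(window, gamma, d) @ theta
        window = (window + [a])[-m:]  # keep only the last m actions
    return total

d, m, gamma, T = 2, 1, -6.0, 100
theta = np.array([1.0, 0.0])
actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def greedy(t, window):              # oracle greedy of Proposition 1
    A = memory_matrix(window, gamma, d)
    return max(actions, key=lambda a: a @ A @ theta)

def alternate(t, window):           # a cyclic policy of length 2
    return actions[t % 2]

print(run(greedy, theta, m, gamma, T))     # ~ 1 + (T-1)/64, about 2.5
print(run(alternate, theta, m, gamma, T))  # ~ T/2 = 50
```

The gap between the two printed totals grows linearly with $T$ on this instance, which is exactly the failure mode Proposition 1 formalizes.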
Before describing our approach in the next section, we conclude the model exposition by highlighting that LBMs may also be generalized to contextual bandits (Lattimore & Szepesvári, 2020).

Remark 1 (Contextual bandits) *In contextual bandits, at each time step $t$ the learner is provided a context $c_t$ (e.g., data about a user). The learner then picks an action $a_t \in \mathcal{A}$ (based on $c_t$), and receives a reward whose expectation depends linearly on the vector $\psi(c_t, a_t) \in \mathbb{R}^d$, where $\psi$ is a known feature map. Note that it is equivalent to have the learner playing actions $a_t \in \mathbb{R}^d$ that belong to a subset $\mathcal{A}_t = \{\psi(c_t, a) \in \mathbb{R}^d : a \in \mathcal{A}\}$. The analysis developed in Section 3 still holds true when $\mathcal{A}_t$ depends on $t$, and can thus be generalized to contextual bandits with memory.*

## 3 Regret Analysis

In this section, we introduce and analyze OFUL-memory (Algorithm 1) for learning LBMs. We first observe that for every block length there exists a cyclic policy providing a reasonable approximation to the optimal policy (Proposition 2) that cannot be improved in general, see Proposition 3. Learning the optimal block in the cyclic policy then reduces to a stationary linear bandit problem that can be solved by running the OFUL algorithm (Proposition 4). This approach is however wasteful, as it estimates a concatenated model whose dimension scales with the block length. We thus propose a refined algorithm leveraging the structure of the concatenated model, and show that it enjoys a better regret bound. We then tune the block length to trade off estimation and approximation errors (Theorem 1). Since the optimal block length depends on the memory size $m$, which may be unknown in practice, we finally wrap our algorithm with a bandit model selection algorithm that is shown to preserve regret guarantees (Corollary 1). Throughout the analysis, we assume for simplicity that the horizon $T$ is always divisible by the block length considered. Finally, note that all technical proofs are relegated to the Appendix (Proposition 4 and Theorem 1 being proved with high probability while stated in expectation in the main body for simplicity of exposition).

## 3.1 Approximation

In LBMs, finding a block of actions maximizing the sum of expected rewards is not a well-defined problem. Indeed, the rewards also depend on the initial conditions, determined by the $m$ actions preceding the current block. To bypass this issue, we introduce the following proxy reward function. For any $m, L \ge 1$ and any block $\mathbf{a} = a_1 \ldots a_{m+L}$ of $m + L$ actions, let

$$\widetilde{r}(\mathbf{a})=\sum_{t=m+1}^{m+L}\left\langle a_{t},A_{t-1}\theta^{*}\right\rangle=\sum_{t=m+1}^{m+L}\left\langle A_{t-1}a_{t},\theta^{*}\right\rangle.\tag{4}$$

In words, we only consider the expected rewards obtained from the index $m + 1$ onward. Note that actions $a_1 \ldots a_m$ still do play a role in $\widetilde{r}$, as they influence $A_m, \ldots, A_{2m-1}$. The key is that $\widetilde{r}$ is now independent of the initial state, so that

$$\widetilde{\mathbf{a}}=\operatorname*{arg\,max}_{\mathbf{a}\in\mathcal{B}_{d}^{m+L}}\ \widetilde{r}(\mathbf{a})\tag{5}$$

is well-defined. The next proposition quantifies the approximation error incurred when playing cyclically $\widetilde{\mathbf{a}}$ instead of the optimal sequence of actions $(a^*_t)_{t \le T}$ defined in (3). A critical quantity to establish this result is the maximal (and minimal) instantaneous reward one can obtain. To this end, we introduce the notation $R = \sup_{a_1, \ldots, a_{m+1} \in \mathcal{A}} \langle a_{m+1}, A(a_1, \ldots, a_m)\theta^* \rangle$. Note that in (8) we provide a bound on $R$ in terms of $m$ and $\gamma$.
We now state our approximation result, and show that it is tight up to constants.

Proposition 2 *For any $m, L \ge 1$, let $\widetilde{\mathbf{a}}$ be the block of $m + L$ actions defined in (5) and $(\widetilde{r}_t)_{t=1}^{T}$ be the expected rewards collected when playing cyclically $\widetilde{\mathbf{a}}$. We have*

$$\mathrm{OPT}-\sum_{t=1}^{T}\widetilde{r}_{t}\leq\frac{2mR}{m+L}\,T\,.\tag{6}$$

The dependence on the cycle length $L$ of the right-hand side of (6) is as expected: by increasing $L$, the expected reward of the cyclic policy gets closer to OPT. In addition, note that for $m = 0$ we recover the stationary behaviour. In this case, there are no long-term effects and the performance is oblivious to the block length, so that we recover $\sum_t \widetilde{r}_t = \mathrm{OPT}$ independently of $L$. Next, we show that Proposition 2 is tight up to constants.

Proposition 3 (Tight approximation) *For any $m, L \ge 1$ and $\gamma \le 0$, let $\widetilde{\mathbf{a}}$ be the block of $m + L$ actions defined in (5) and $(\widetilde{r}_t)_{t=1}^{T}$ be the expected rewards collected when playing cyclically $\widetilde{\mathbf{a}}$. Then, there exists a choice of $\mathcal{A}$ and $\theta^*$ such that*

$$\mathrm{OPT}-\sum_{t=1}^{T}\widetilde{r}_{t}\geq\frac{mR}{m+L}\,T.\tag{7}$$

Upper bounds on $R$ are easy to obtain. Let $a_1, \ldots, a_{m+1} \in \mathcal{A}$, and $A_m = A(a_1, \ldots, a_m)$; we have

$$|r_{m}|=\left|\langle a_{m+1},A_{m}\theta^{*}\rangle\right|\leq\|a_{m+1}\|_{2}\,\|A_{m}\theta^{*}\|_{2}\leq\|A_{m}\|_{*}\,\|\theta^{*}\|_{2}\leq(m+1)^{\gamma^{+}}\,,\tag{8}$$

such that one can take $R = (m + 1)^{\gamma^+}$. Note that any other choice of dual norms could have been used to upper bound $\langle a_{m+1}, A_m\theta^* \rangle$, as done in Proposition 3. For simplicity, we restrict ourselves to the Euclidean norm from now on, and use $R = (m + 1)^{\gamma^+}$.

Remark 2 (On the necessity of optimizing over the first actions.) *We highlight that optimizing over the first $m$ actions in Equation (5) is necessary, as there exists no such "pre-sequence" which is universally optimal. Indeed, let $A_t$ and $A'_t$ be the memory matrices generated by $a_1 \ldots a_{m+L}$ and $a'_1 \ldots a'_m a_{m+1} \ldots a_{m+L}$ respectively. It is immediate to check that if the pre-sequence $a_1 \ldots a_m$ is better than $a'_1 \ldots a'_m$ with respect to some model $\theta \in \mathbb{R}^d$, i.e., if we have $\sum_{t=m+1}^{m+L}\langle a_t, A_{t-1}\theta\rangle \ge \sum_{t=m+1}^{m+L}\langle a_t, A'_{t-1}\theta\rangle$, then the opposite holds true for $-\theta$. Hence, one cannot determine a priori a good pre-sequence and has to optimize for it.*

## 3.2 Estimation

The next step now consists in building a sequence of blocks with small regret against $\widetilde{\mathbf{a}}$. As detailed below, this reduces to a stationary linear bandit problem, with a specific action set. After showing an initial naive solution, we provide a refined approach which exploits the structure of the latent parameter and enjoys improved regret guarantees.

A naive approach. We introduce some notation first. Let $\boldsymbol{\theta}^* = (0_d, \ldots, 0_d, \theta^*, \ldots, \theta^*) \in \mathbb{R}^{d(m+L)}$ be the vector concatenating $m$ times $0_d$ and $L$ times $\theta^*$. Inspired by the right-hand side in (4), we introduce the subset of $\mathbb{R}^{d(m+L)}$ composed of the blocks $\mathbf{b} = b_1 \ldots b_{m+L}$ whose actions are of the form $b_i = A_{i-1}a_i$ for some block $\mathbf{a} \in \mathcal{A}^{m+L}$. Formally, let

$$\mathbf{B}=\left\{\mathbf{b}\in\mathbb{R}^{d(m+L)}\colon\exists\,\mathbf{a}\in\mathcal{A}^{m+L}\ \text{such that}\ \begin{cases}b_{i}=a_{i}&1\leq i\leq m\\ b_{i}=A_{i-1}a_{i}&m+1\leq i\leq m+L\end{cases}\right\},$$

where the $(A_i)_{i=m}^{m+L-1}$ are the memory matrices generated from $\mathbf{a}$. Equipped with this notation, it is easy to see that for any $\mathbf{a} \in \mathcal{A}^{m+L}$ and the corresponding $\mathbf{b} \in \mathbf{B}$ we have $\widetilde{r}(\mathbf{a}) = \langle \mathbf{b}, \boldsymbol{\theta}^* \rangle$.
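As a quick numerical sanity check of this identity, the self-contained NumPy sketch below (with illustrative dimensions, the isotropic initialization $A_0 = I_d$, and a hypothetical `memory_matrix` helper implementing (2)) builds the lifted block $\mathbf{b}$ from a random block $\mathbf{a}$ and verifies that $\widetilde{r}(\mathbf{a}) = \langle \mathbf{b}, \boldsymbol{\theta}^* \rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, L, gamma = 3, 2, 4, -1.0
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

def memory_matrix(window, gamma, d):
    A = np.eye(d) + sum(np.outer(x, x) for x in window)
    w, V = np.linalg.eigh(A)
    return (V * w**gamma) @ V.T

# a random block a_1 ... a_{m+L} of unit-norm actions
a = rng.normal(size=(m + L, d))
a /= np.linalg.norm(a, axis=1, keepdims=True)

# proxy reward (4): only the rewards of the last L actions count,
# each computed with the memory matrix of the m preceding actions
r_tilde = sum(a[i] @ memory_matrix(a[i - m:i], gamma, d) @ theta
              for i in range(m, m + L))

# lifted block b: b_i = a_i for i <= m, and b_i = A_{i-1} a_i afterwards
b = np.concatenate([a[:m].ravel()]
                   + [memory_matrix(a[i - m:i], gamma, d) @ a[i]
                      for i in range(m, m + L)])
theta_bold = np.concatenate([np.zeros(m * d), np.tile(theta, L)])

assert np.isclose(r_tilde, b @ theta_bold)  # r~(a) = <b, theta*>
```

The first $m$ coordinates of $\boldsymbol{\theta}^*$ are zero, so the raw pre-sequence actions stored in $\mathbf{b}$ contribute to the inner product only through the memory matrices they generate, mirroring the role of $a_1 \ldots a_m$ in (4).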
Therefore, estimating $\widetilde{\mathbf{b}}$ (the block in $\mathbf{B}$ associated to $\widetilde{\mathbf{a}}$) reduces to a standard stationary linear bandit problem in $\mathbb{R}^{d(m+L)}$, with parameter $\boldsymbol{\theta}^*$ and feasible set $\mathbf{B}$. In other words, we have transformed the nonstationarity of the rewards into a constraint on the action set. Running OFUL (Abbasi-Yadkori et al., 2011) then amounts to playing, at time step $t = \tau(m+L)$, the block $\mathbf{a}_\tau \in \mathcal{A}^{m+L}$ whose associated block $\mathbf{b}_\tau$ in $\mathbf{B}$ satisfies

$$\mathbf{b}_{\tau}=\operatorname*{arg\,max}_{\mathbf{b}\in\mathbf{B}}\ \operatorname*{sup}_{\boldsymbol{\theta}\in\mathcal{C}_{\tau-1}}\langle\mathbf{b},\boldsymbol{\theta}\rangle\,,\tag{9}$$

where $\mathcal{C}_\tau=\big\{\boldsymbol{\theta}\in\mathbb{R}^{d(m+L)}\colon\|\widehat{\boldsymbol{\theta}}_\tau-\boldsymbol{\theta}\|_{\boldsymbol{V}_\tau}\le\beta_\tau(\delta)\big\}$, with $\beta_\tau(\delta)$ defined in Equation (17), $\boldsymbol{V}_\tau=\sum_{\tau'=1}^{\tau}\mathbf{b}_{\tau'}\mathbf{b}_{\tau'}^\top+\lambda I_{d(m+L)}$, $y_\tau=\sum_{i=m+1}^{m+L}y_{\tau,i}$, using $y_{\tau,i}$ to denote the reward obtained by the $i$th action of block $\tau$, and $\widehat{\boldsymbol{\theta}}_\tau=\boldsymbol{V}_\tau^{-1}\sum_{\tau'=1}^{\tau}y_{\tau'}\mathbf{b}_{\tau'}$. Noticing that $\|\boldsymbol{\theta}^*\|_2^2\le L$, that for any block $\mathbf{b}\in\mathbf{B}$ we have $\|\mathbf{b}\|_2^2\le m+L(m+1)^{2\gamma^+}$ and $\langle\boldsymbol{\theta}^*,\mathbf{b}\rangle\le L(m+1)^{\gamma^+}$, and adapting OFUL's analysis, we get the following regret bound.

Proposition 4 *Let* $\lambda\in[1,d]$, $L\ge m$, *and* $\mathbf{a}_\tau$ *be the blocks of actions in* $\mathbb{R}^{d(m+L)}$ *associated to the* $\mathbf{b}_\tau$ *defined in* (9)*. Then we have*

$$\mathbb{E}\left[\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\widetilde{\mathbf{a}})-\widetilde{r}(\mathbf{a}_{\tau})\right]=\widetilde{\mathcal{O}}\Big(d L^{3/2}(m+1)^{\gamma^{+}}\sqrt{T}\Big)\ .$$

In the stationary case, i.e., when $m=0$ and $L=1$, the block approach coincides with OFUL and we do recover (up to log factors) the $O(d\sqrt{T})$ bound for standard linear bandits. Note that in Proposition 5 in the Supplementary Material we prove a more general high-probability bound, which also specializes to known results for linear bandits in the stationary case.

A refined approach. Note however that the approach presented above is wasteful. Indeed, while the relevant model to estimate is $\theta^*\in\mathbb{R}^d$, the $\widehat{\boldsymbol{\theta}}_\tau$ are estimators of the concatenated vector $\boldsymbol{\theta}^*\in\mathbb{R}^{d(m+L)}$, with degraded accuracy due to the increased dimension. Similarly, this method only uses the sum of rewards obtained by a block, while finer-grained information is available, namely the rewards obtained by each individual action in the block. Driven by these considerations, let $\mathbf{a}_\tau=a_{\tau,1}\dots a_{\tau,m+L}$ be the block of actions played at block time step $\tau$, $A_{\tau,i-1}=A(a_{\tau,i-m},\dots,a_{\tau,i-1})$, and $b_{\tau,i}=A_{\tau,i-1}a_{\tau,i}$ for $i>m$. We propose to compute instead

$$\widehat{\theta}_{\tau}=V_{\tau}^{-1}\left(\sum_{\tau^{\prime}=1}^{\tau}\sum_{i=m+1}^{m+L}y_{\tau^{\prime},i}\,b_{\tau^{\prime},i}\right)\,,\tag{10}$$

where $V_\tau=\sum_{\tau'=1}^{\tau}\sum_{i=m+1}^{m+L}b_{\tau',i}b_{\tau',i}^\top+\lambda I_d$. In words, $\widehat{\theta}_\tau$ is the standard regularized least squares estimator of $\theta^*$ when only the last $L$ rewards of each block of size $m+L$ are considered. Note however that the $\widehat{\theta}_\tau$ are only computed every $m+L$ rounds. Indeed, recall that regret is computed here at the block level, such that at each block time step $\tau$ the learner chooses upfront an entire block to play, preventing from updating the estimates between the individual actions of the block. Following the principle of optimism in the face of uncertainty, a natural strategy then consists in playing

$$\mathbf{a}_{\tau}=\operatorname*{arg\,max}_{a_{\tau,i}\in\mathcal{A}}\;\operatorname*{sup}_{\theta\in\mathcal{C}_{\tau-1}}\;\sum_{i=1}^{L}\langle a_{\tau,i},A_{\tau,i-1}\theta\rangle\,,\tag{11}$$

where $\mathcal{C}_\tau=\big\{\theta\in\mathbb{R}^d\colon\|\widehat{\theta}_\tau-\theta\|_{V_\tau}\le\beta_\tau(\delta)\big\}$, for some $\beta_\tau(\delta)$ defined in (18).
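Before relating (11) back to the block formulation, here is a minimal sketch of the estimator update (10): a standard regularized least-squares update over the per-action features $b_{\tau,i}=A_{\tau,i-1}a_{\tau,i}$, refreshed once per block. The class name is illustrative.

```python
import numpy as np

class RefinedRidge:
    """Least-squares estimate of theta* in R^d from the last L rewards of each
    block, cf. (10): theta_hat = V^{-1} sum_{tau', i} y_{tau',i} b_{tau',i}."""

    def __init__(self, d, lam=1.0):
        self.V = lam * np.eye(d)   # V_tau = sum b b^T + lam I_d
        self.u = np.zeros(d)       # running sum of y * b

    def update_block(self, feats, rewards):
        """feats: (L, d) array of b_{tau,i}; rewards: (L,) array of y_{tau,i}, i > m."""
        for b, y in zip(feats, rewards):
            self.V += np.outer(b, b)
            self.u += y * b
        return np.linalg.solve(self.V, self.u)   # theta_hat_tau
```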
Expressed in terms of $\mathbf{b}_\tau$, the estimate (11) corresponds to

$$\mathbf{b}_{\tau}=\operatorname*{arg\,max}_{\mathbf{b}\in\mathbf{B}}\ \operatorname*{sup}_{\boldsymbol{\theta}\in\mathcal{D}_{\tau-1}}\langle\mathbf{b},\boldsymbol{\theta}\rangle\,,\tag{12}$$

where $\mathcal{D}_\tau=\big\{\boldsymbol{\theta}\in\mathbb{R}^{d(m+L)}\colon\exists\,\theta\in\mathcal{C}_\tau\ \text{such that}\ \boldsymbol{\theta}=(0_d,\dots,0_d,\theta,\dots,\theta)\big\}$. In words, this estimate is similar to (9), except that we use the improved confidence set $\mathcal{D}_\tau$ that leverages the structure of $\boldsymbol{\theta}^*$. A dedicated analysis to deal with the fact that the estimates $\widehat{\theta}_\tau$ are not "up to date" for actions inside the block then allows to bound the regret of the sequence $\mathbf{a}_\tau$ against the optimal $\widetilde{\mathbf{a}}$. Setting the block size $L$ in order to balance this bound with the approximation error of Proposition 2 yields the final regret bound.

Theorem 1 *Let* $\lambda\in[1,d]$, *and* $\mathbf{a}_\tau$ *be the blocks of actions in* $\mathbb{R}^{d(m+L)}$ *defined in* (11)*. Then we have*

$$\mathbb{E}\left[\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\widetilde{\mathbf{a}})-\widetilde{r}(\mathbf{a}_{\tau})\right]=\widetilde{\mathcal{O}}\Big(d L(m+1)^{\gamma^{+}}\sqrt{T}\Big)\,.$$

*Suppose that* $m\ge1$, $T\ge d^2m^2+1$, *and set* $L=\sqrt{m/d}\,T^{1/4}-m$. *Let* $y_t$ *be the rewards collected when playing* $\mathbf{a}_\tau$ *as defined in* (11)*. Then we have*

$$\mathrm{OPT}-\mathbb{E}\left[\sum_{t=1}^{T}y_{t}\right]=\widetilde{O}\left(\sqrt{d}\ (m+1)^{\frac{1}{2}+\gamma^{+}}\,T^{3/4}\right)\,.$$

*When* $m=0$ *(i.e., in the stationary case), setting* $L=1$ *recovers the OFUL bound.*

When comparing the first claim of Theorem 1 to Proposition 4, we note that the dependence in $L$ has been reduced from $L^{3/2}$ to $L$, thanks to the improved confidence sets. Solving the approximation-estimation tradeoff using Proposition 4 would have yielded an overall regret bound of order $d^{2/5}(m+1)^{\frac{3}{5}+\gamma^+}T^{4/5}$, worse than the bound provided by the second claim of Theorem 1. In the stationary case (i.e., for $m=0$) Theorem 1 recovers the OFUL regret bound and matches the lower bound for stationary linear bandits (Lattimore & Szepesvári, 2020, Theorems 24.1 and 24.2, e.g.), such that our analysis is tight in general (recall that Proposition 3 shows that the control of the approximation error provided by Proposition 2 is optimal up to constants). Finding a lower bound matching Theorem 1 for arbitrary values of $m$ and $\gamma$ remains however an open problem. We highlight that lower bounds for nonstationary bandits are particularly hard to obtain and that most papers on this topic do not prove any, see e.g., Levine et al. (2017); Kleinberg & Immorlica (2018); Pike-Burke & Grunewalder (2019); Cella & Cesa-Bianchi (2020); Metelli et al. (2022).

As we can see from the optimal choice of $L$ in Theorem 1, OFUL-memory requires the knowledge of the horizon $T$, the memory size $m$, and the exponent $\gamma$, which might all be unknown in practice. If adaptation to $T$ can be achieved by using the doubling trick, adaptation to $m$ and $\gamma$ is more involved. In the next section, we show that OFUL-memory can be wrapped by a model selection algorithm to learn $m$ and $\gamma$. Before turning to this problem, we state a few remarks.

Remark 3 (An over-optimistic variant) *Note that* $\mathcal{D}_\tau=\big\{\boldsymbol{\theta}\in\mathbb{R}^{d(m+L)}\colon\exists\,\theta\in\mathcal{C}_\tau\ \text{such that}\ \boldsymbol{\theta}=(0_d,\dots,0_d,\theta,\dots,\theta)\big\}$ *is not the only improved confidence set that one can build from* $\mathcal{C}_\tau$*. Indeed, it is immediate to check that our proof remains unchanged if one uses instead* $\mathcal{D}^{\mathrm{opt}}_\tau=\big\{\boldsymbol{\theta}\in\mathbb{R}^{d(m+L)}\colon\exists\,\theta_1,\dots,\theta_L\in\mathcal{C}_\tau\ \text{such that}\ \boldsymbol{\theta}=(0_d,\dots,0_d,\theta_1,\dots,\theta_L)\big\}$*.*
*Optimizing* (12) *over* $\mathcal{D}^{\mathrm{opt}}_{\tau-1}$ *and not* $\mathcal{D}_{\tau-1}$ *creates an over-optimistic block version of the UCB, composed of the sum of the UCBs of the single actions in the block, although the latter might be attained at different models* $\theta_i$*, while we know that* $\boldsymbol{\theta}^*$ *is the same model* $\theta^*$ *repeated* $L$ *times. Still, since each* $\theta_i$ *is estimated in the confidence set* $\mathcal{C}_{\tau-1}$ *of reduced dimension, the guarantees are unchanged. In the rest of the paper, we refer to this variant as the over-optimistic version of OFUL-memory, denoted by* O3M*. Empirically,* O3M *outperforms the vanilla approach. We attribute this better performance to the fact that the confidence set it is built upon is more optimistic.*

Remark 4 (Generic matrix mapping A) *Note that our analysis naturally extends to any matrix mapping* $A$*, as long as it is known. The term* $(m+1)^{\gamma^+}$ *in Theorem 1 is then replaced with* $\sup_{a_1\dots a_m}\|A(a_1,\dots,a_m)\|_*$*. We highlight however that having access to such knowledge is unlikely in practice. This is why we focus on the simpler parametric family* (2)*, which encompasses many rotting and rising scenarios while allowing us to learn simultaneously* $m$ *and* $\gamma$*, as shown in the next section. It is of course possible to extend the family of monomials* (2) *to a family of polynomials, but this requires tracking more parameters (namely, the different coefficients of the polynomial), thus degrading the final regret bound.*

Remark 5 (Solving LBM with a general Reinforcement Learning (RL) approach) *Our setting may be seen as an MDP with a* $d$*-dimensional continuous space of actions, an* $(md)$*-dimensional continuous state space (for the past* $m$ *actions), a deterministic transition function parameterized by an unknown scalar* $\gamma$*, and a stochastic reward function with a linear dependence on an additional* $d$*-dimensional latent parameter* $\theta^*$*. The optimal policy in this MDP is generally nonstationary, and we are not aware of RL algorithms whose regret can be bounded without relying on more specific assumptions on the MDP. By exploiting the structure of the MDP, and restricting to cyclic policies, we show instead that the original problem can be solved using stationary bandit techniques.*

## 3.3 Model Selection

In the absence of prior knowledge on the nature of the nonstationary mechanism at work, a natural idea consists in instantiating several LBMs with different values of $\gamma$ and running a model selection algorithm for bandits (Foster et al., 2019; Cutkosky et al., 2020; Pacchiano et al., 2020). In bandit model selection, where a master algorithm runs the different LBMs, the adaptation to the memory size $m$ becomes more complex. Indeed, the different putative values for $m$ induce different block sizes (see Theorem 1) which perturb the time and reward scales of the master algorithm.

Algorithm 1 OFUL-memory (OM, O3M)

input: action space $\mathcal{A}\subset\mathbb{R}^d$, memory size $m$, exponent $\gamma$, regularization parameter $\lambda$, horizon $T$.
init: set $L=\sqrt{m/d}\,T^{1/4}-m$, $\widehat{\theta}_0=0_d$, $V_0=\lambda I_d$, $\beta_0=0$.
for $\tau=1,\dots,T/(m+L)$ do
&nbsp;&nbsp;// OM
&nbsp;&nbsp;$\mathbf{a}_\tau=\operatorname*{arg\,max}_{a_{\tau,i}\in\mathcal{A}}\ \sup_{\theta\in\mathcal{C}_{\tau-1}}\ \sum_{i=1}^{L}\langle a_{\tau,i},A_{\tau,i-1}\theta\rangle$
&nbsp;&nbsp;// or O3M
&nbsp;&nbsp;$\mathbf{a}_\tau=\operatorname*{arg\,max}_{a_{\tau,i}\in\mathcal{A}}\ \sup_{\theta_i\in\mathcal{C}_{\tau-1}}\ \sum_{i=1}^{L}\langle a_{\tau,i},A_{\tau,i-1}\theta_i\rangle$
&nbsp;&nbsp;// Play and update confidence set
&nbsp;&nbsp;Play $\mathbf{a}_\tau$, collect $y_{\tau,1},\dots,y_{\tau,m+L}$
&nbsp;&nbsp;Compute $\mathcal{C}_\tau$, i.e., $\widehat{\theta}_\tau$, $V_\tau$, and $\beta_\tau$ via (10) and (18).
For instance, bandits with larger block lengths will collect more rewards per block, although they might not be more efficient on average. Our solution consists in feeding the master algorithm with averaged rewards. One may then control the true regret (i.e., not averaged) of the output sequence, against a scaled version of the optimal sequence, through Lemma 1, which links the normalized regret of a block meta-algorithm to the true regret of the corresponding sequence of blocks.

Lemma 1 *Suppose that a block-based bandit algorithm (in our case the bandit combiner) produces a sequence of* $T_{\mathrm{bc}}$ *blocks* $\mathbf{a}_\tau$*, with possibly different cardinalities* $|\mathbf{a}_\tau|$*, such that*

$$\sum_{\tau=1}^{T_{\mathrm{bc}}}{\frac{\widetilde r(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}}-\sum_{\tau=1}^{T_{\mathrm{bc}}}{\frac{\widetilde r(\mathbf{a}_{\tau})}{|\mathbf{a}_{\tau}|}}\leq F(T_{\mathrm{bc}})\,,$$

*for some sublinear function* $F$*. Then, we have*

$$\frac{\operatorname*{min}_{\tau}|\mathbf{a}_{\tau}|}{\operatorname*{max}_{\tau}|\mathbf{a}_{\tau}|}\left(\widetilde{r}(\widetilde{\mathbf{a}})\ \frac{\sum_{\tau}|\mathbf{a}_{\tau}|}{|\widetilde{\mathbf{a}}|}\right)-\sum_{\tau=1}^{T_{\mathrm{bc}}}\widetilde{r}(\mathbf{a}_{\tau})\ \leq\ \operatorname*{min}_{\tau}|\mathbf{a}_{\tau}|\,F(T_{\mathrm{bc}})\ .$$

In particular, if all blocks have the same cardinality the last bound is just the block regret bound scaled by $|\mathbf{a}_\tau|$. Combining this result with Theorem 1 and (Cutkosky et al., 2020, Corollary 2) yields the following result.

Corollary 1 *Consider an instance of LBM with unknown parameters* $(m_\star,\gamma_\star)$*. Assume a bandit combiner is run on* $N\le d\sqrt{m_\star}$ *instances of OFUL-memory (Algorithm 2), each using a different pair of parameters* $(m_i,\gamma_i)$ *from a set* $S=\big\{(m_1,\gamma_1),\dots,(m_N,\gamma_N)\big\}$ *such that* $(m_\star,\gamma_\star)\in S$*. Let* $M=(\max_j m_j)/(\min_j m_j)$*. Then, for all* $T\ge(m_\star+1)^{2\gamma^+_\star}/(m_\star d^4)$*, the expected rewards* $(r^{\mathrm{bc}}_t)_{t=1}^T$ *of the bandit combiner satisfy*

$$\frac{\mathrm{OPT}}{\sqrt{M}}-\mathbb{E}\left[\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\right]\ =\ \widetilde{\mathcal{O}}\Big(M\,d\,(m_{\star}+1)^{1+\frac{3}{2}\gamma_{\star}^{+}}\,T^{3/4}\Big)\,.$$

## 4 Algorithms

In this section, we discuss the practical implementation of our approach. This includes OFUL-memory (OM) and its over-optimistic variant (O3M, see Remark 3), both summarized in Algorithm 1. We also instantiate the Bandit Combiner from Cutkosky et al. (2020) to our specific setting with average rewards and O3M as base algorithm, see Algorithm 2.

Maximizing the UCBs. We start by making explicit the UCBs used in OM and O3M, see (12), optimized over $\mathcal{D}_\tau$ or $\mathcal{D}^{\mathrm{opt}}_\tau$. Using the formula for $\mathcal{C}_\tau$ one can check that they are given by $\mathrm{UCB}_\tau(\mathbf{a})=\sum_{j=m+1}^{m+L}\big\langle a_j,A_{j-1}\widehat{\theta}_{\tau-1}\big\rangle+B(\mathbf{a})$, where $B(\mathbf{a})=\beta_{\tau-1}\big\|\sum_{j=m+1}^{m+L}A_{j-1}^\top a_j\big\|_{V_{\tau-1}^{-1}}$ for OM and $B(\mathbf{a})=\beta_{\tau-1}\sum_{j=m+1}^{m+L}\big\|A_{j-1}^\top a_j\big\|_{V_{\tau-1}^{-1}}$ for O3M. The two UCBs only differ in their exploration bonuses. Note that by the triangle inequality, we have $\mathrm{UCB}^{\mathrm{OM}}_\tau(\mathbf{a})\le\mathrm{UCB}^{\mathrm{O3M}}_\tau(\mathbf{a})$ for any $\mathbf{a}$. Thanks to this closed form in terms of $\mathbf{a}$, it is possible to approximate $\arg\max_{\mathbf{a}}\mathrm{UCB}_\tau(\mathbf{a})$ using gradient ascent. Note however that when the action space is infinite, maximizing the UCBs is a hard problem, as the objective might be non-convex in general. In that respect, the theoretical guarantees we provide in Theorem 1 hold whenever the learner has access to some oracle that returns the exact UCB maximizer, as traditionally assumed in the literature, see e.g., Kveton et al. (2015).
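As a rough illustration of this gradient-based maximization (a sketch, not the authors' exact implementation), one can run projected gradient ascent over a block of unit-ball actions, with the UCB objective supplied as a callable; `ucb_value` below is a hypothetical stand-in for either of the two closed-form UCBs, and gradients are taken by finite differences for simplicity.

```python
import numpy as np

def project_ball(a):
    n = np.linalg.norm(a)
    return a if n <= 1.0 else a / n

def maximize_ucb(ucb_value, m, L, d, steps=200, lr=0.1, eps=1e-5, seed=0):
    """Projected gradient ascent on a block of m+L unit-ball actions.

    `ucb_value` maps an (m+L, d) block to a scalar UCB. This only sketches
    the (approximate) oracle assumed in Theorem 1.
    """
    rng = np.random.default_rng(seed)
    block = np.array([project_ball(a) for a in rng.standard_normal((m + L, d))])
    for _ in range(steps):
        grad = np.zeros_like(block)
        for idx in np.ndindex(block.shape):          # forward-difference gradient
            pert = block.copy()
            pert[idx] += eps
            grad[idx] = (ucb_value(pert) - ucb_value(block)) / eps
        block = np.array([project_ball(a) for a in block + lr * grad])
    return block
```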
Conversely, note that the practical implementation of O3M still satisfies Theorem 1, but for a slightly weaker version of the regret where the "best block" is understood as the one returned by the approximated oracle used in O3M (i.e., our gradient ascent solver). See (Kveton et al., 2015, Section 9) for a similar discussion.

Computational complexity. As described in Algorithm 1, our approach consists of two steps: updating the confidence region $\mathcal{C}_\tau$, i.e., $\widehat{\theta}_\tau$ and $\beta_\tau$ according to (10) and (18), and computing the block $\mathbf{a}_\tau$ that maximizes the UCB index. The first step is performed by online Ridge regression, and has a computational cost of $O(Ld^2)$. We note here the advantage of our refined algorithm over the naive concatenated approach, whose Ridge regression update has cost $O(L^2d^2)$. The maximization of the UCB indices, performed through gradient ascent, has a per-iteration time complexity of $O\big((m+L)d^2\big)$. Hence, the overall complexity of an epoch of Algorithm 1 is $O\big((m+L)d^2\cdot n_{\mathrm{it}}\big)$, where $n_{\mathrm{it}}$ is the number of iterations performed by gradient ascent. Recall that the epochs of Algorithm 1 correspond to blocks of $m+L$ actions, such that the actual per-round complexity is $O(d^2\cdot n_{\mathrm{it}})$.

Bandit combiner. Our bandit combiner, see Algorithm 2, builds upon the approach developed by Cutkosky et al. (2020) and works as follows. The meta-algorithm is fed with different bandit algorithms (in our case, instances of O3M with different choices of parameters $m_j$ and $\gamma_j$) and at each round plays a block according to one of the algorithms. Each O3M instance comes with a *putative* regret bound $C_jT^{\alpha_j}$, which is the regret bound satisfied by the algorithm *should it be well-specified*, i.e., if the rewards are indeed generated through a memory matrix with memory $m_j$ and exponent $\gamma_j$. Note that in order to be comparable across the different instances, the putative regrets apply to the average rewards. The values of $C_j$ and $\alpha_j$ can be computed using Theorem 1, see the proof of Corollary 1 for details. The putative regrets are then used to successively discard the instances that are not well-specified, and eventually identify the instance using parameters $(m_\star,\gamma_\star)$. Knowing $C_j$ and $\alpha_j$, we can compute for any $j$ the target regret

$$R_{j}=C_{j}\,T_{\mathrm{bc}}^{2/3}+\frac{5\sqrt{30}}{18}\,C_{j}^{3/2}\,T_{\mathrm{bc}}^{2/3}+1152\,(m_{j}+1)^{2\gamma_j^{+}}\,T_{\mathrm{bc}}^{1/3}\log(T_{\mathrm{bc}}^{3}N/\delta)+(N-1)\,T_{\mathrm{bc}}^{2/3}\,,\tag{13}$$

where $T_{\mathrm{bc}}$ is the number of blocks the Bandit Combiner is called on, see Appendix B for details. Here, we note how the presence of $(m_j+1)^{2\gamma^+_j}$ impacts the rising and rotting scenarios differently. Using (Cutkosky et al., 2020, Corollary 2), the regret of Algorithm 2 is finally given by $3R_{j_\star}$, where $j_\star$ is the index such that $(m_{j_\star},\gamma_{j_\star})=(m_\star,\gamma_\star)$.

## 5 Experiments

We perform experiments to validate the theoretical performance of OM and O3M (Algorithm 1). Similarly to (Warlop et al., 2018), we work with synthetic data because of the counterfactual nature of the learning problem in bandits. Unless stated otherwise, we set $d=3$, while $\theta^*\in\mathbb{R}^d$ is generated uniformly at random with unit norm. The rewards are generated according to (1) and (2), and perturbed by Gaussian noise with standard deviation $\sigma=1/10$.

Rotting with Bandit Combiner. We start by analyzing the rotting scenario with $m=2$ and $\gamma=-3$. We measure the performance in terms of the cumulative reward averaged over 5 runs (this is enough because the variance is small).
In Figure 2 (left pane) we compare the performance of O3M against oracle greedy, vanilla OFUL, and two instances of Bandit Combiner (Algorithm 2). The first instance, Combiner $\gamma$, works in the setting where the misspecified parameter is $\gamma$, and the algorithm is run over the set $\{-4,-3,-2,-1,0\}$ of possible values for $\gamma$, with the true value being $-3$. The second instance, Combiner $m$, tests the setting where the misspecified parameter is $m$. In this case the algorithm is run over the set $\{0,2,3\}$ of possible values for $m$, with the true value being $2$.

Algorithm 2 Bandit Combiner on O3M

input: instances O3M$(m_1,\gamma_1),\dots,$ O3M$(m_N,\gamma_N)$, horizon $T_{\mathrm{bc}}$, numbers $C_1,\dots,C_N>0$, target regrets $R_1,\dots,R_N$.
Set $T(i)=0$, $S_i=0$, $\Delta_i=0$ for $i=1,\dots,N$, and set $I_0=\{1,\dots,N\}$
for $t=1,\dots,T_{\mathrm{bc}}$ do
&nbsp;&nbsp;if there is some $i\in I_t$ with $T(i)=0$ then $i_t=i$
&nbsp;&nbsp;else for each $i\in I_t$, compute the UCB index:
$$\mathrm{UCB}(i)=\min\left\{(m_{i}+1)^{2\gamma_{i}^{+}},\ \frac{C_{i}}{\sqrt{T(i)}}+4(m_{i}+1)^{2\gamma_{i}^{+}}\sqrt{\frac{2\log(T^{3}N/\delta)}{T(i)}}\right\}-\frac{R_{i}}{T_{\mathrm{bc}}}$$
&nbsp;&nbsp;and set $i_t=\operatorname*{arg\,max}_{i\in I_t}\ \frac{S_i}{T(i)}+\mathrm{UCB}(i)$
&nbsp;&nbsp;Obtain from instance O3M$(m_{i_t},\gamma_{i_t})$ a block of size $m_{i_t}+L_{i_t}$ and play it
&nbsp;&nbsp;Return the total reward $r_{i_t}$ collected in the last $L_{i_t}$ time steps of the block to O3M$(m_{i_t},\gamma_{i_t})$
&nbsp;&nbsp;Compute the average reward $\widehat{r}_{i_t}=r_{i_t}/L_{i_t}$
&nbsp;&nbsp;Update $\Delta_{i_t}\leftarrow\Delta_{i_t}+S_{i_t}/T(i_t)-\widehat{r}_{i_t}$ (where we set $0/0=0$) and $S_{i_t}\leftarrow S_{i_t}+\widehat{r}_{i_t}$
&nbsp;&nbsp;Update the number of plays $T(i_t)\leftarrow T(i_t)+1$
&nbsp;&nbsp;if $\Delta_{i_t}\ge C_{i_t}T(i_t)^{\alpha_{i_t}}+12(m_{i_t}+1)^{2\gamma^{+}_{i_t}}\sqrt{2\log(T^{3}N/\delta)\,T(i_t)}$ then $I_t=I_{t-1}\setminus\{i_t\}$ else $I_t=I_{t-1}$

![11_image_0.png](11_image_0.png)

Figure 2: Cumulative rewards in rotting (left) and rising with non-isotropic initialization (right) cases.

The results (see Figure 2, left pane) show that O3M is able to plan the actions in the block, ensuring that a good arm is not played right away if a higher reward can be obtained later on in the block. This means that O3M waits to play certain actions until the corresponding entries of $A$ have been offloaded, preventing $A$ from negatively impacting the reward of these actions. Although learning $m$ proves to be more difficult, which is consistent with the impact of $M=(\max_j m_j)/(\min_j m_j)$ in Corollary 1, Combiner $m$ run on instances of O3M is competitive with O3M run with the true parameters. Note that with isotropic initialization there is no point in running Combiner $\gamma$ with values of $\gamma$ larger than zero. Indeed, in the isotropic case oracle greedy is optimal, stationary, and with the same optimal action for any $\gamma\ge0$. The empirical performance of our algorithms in a non-isotropic rising setting is investigated in the next example.

Rising with non-isotropic initialization. When $\gamma>0$ (rising setting) and $A_0\ne I_d$ (non-isotropic initialization), there are instances for which oracle greedy is suboptimal, as we show next. Let $d=2$, $m=2$, $\gamma=1$, $A_0=e_1e_1^\top$, and $\theta^*=(\sqrt{\epsilon},\sqrt{1-\epsilon})$. With these choices, oracle greedy starts by pulling action $e_1=(1,0)$ and will always play it, obtaining a cumulative reward of $T(1+m)\sqrt{\epsilon}$. Instead, a better strategy would be to play $e_2=(0,1)$ all the time, collecting a cumulative reward of $Tm\sqrt{1-\epsilon}$. We call this strategy $\pi_2$ and in Figure 2 (right pane) we compare the performance of O3M with oracle greedy, $\pi_2$, and OFUL. Here OFUL performs well because the optimal action is stationary and, unlike oracle greedy, OFUL can use exploration to discover that $e_2$ is better than $e_1$.
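For reference, here is a minimal sketch of the synthetic environment used in these experiments. It assumes memory dynamics of the form $A_{t}=\big(A_0+\sum_{s}a_sa_s^\top\big)^{\gamma}$ over the last $m$ actions, consistent with the instances in Appendix A; the exact normalization of (1)–(2) is given in the main text, so treat the class below as illustrative.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

class LBMEnv:
    """Synthetic LBM: y_t = <a_t, A_{t-1} theta*> + Gaussian noise, with A_{t-1}
    built from the sliding window of the last m actions (assumed family:
    (A0 + sum a a^T)^gamma); d and sigma follow Section 5."""

    def __init__(self, theta_star, m, gamma, A0=None, sigma=0.1, seed=0):
        self.d = theta_star.shape[0]
        self.theta, self.m, self.gamma, self.sigma = theta_star, m, gamma, sigma
        self.A0 = np.eye(self.d) if A0 is None else A0
        self.past = []                                # sliding window of actions
        self.rng = np.random.default_rng(seed)

    def step(self, a):
        S = self.A0 + sum(np.outer(x, x) for x in self.past)
        A_prev = fractional_matrix_power(S, self.gamma).real
        y = a @ (A_prev @ self.theta) + self.sigma * self.rng.standard_normal()
        self.past = (self.past + [a])[-self.m:]       # keep only the last m actions
        return y

# rotting setup from this section: d = 3, m = 2, gamma = -3
rng = np.random.default_rng(0)
theta = rng.standard_normal(3); theta /= np.linalg.norm(theta)
env = LBMEnv(theta, m=2, gamma=-3)
print(env.step(np.array([1.0, 0.0, 0.0])))
```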
## 6 Conclusions And Open Problems

We introduced and analyzed a nonstationary generalization of linear bandits that uses a fixed-size memory. Interesting future research directions include deriving a matching lower bound, and quantifying the UCB optimization error in order to better trade off the block length $L$.

## Acknowledgments

The authors acknowledge the financial support from the MUR PRIN grant 2022EKNE5K (Learning in Markets and Society), the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme, and the EU Horizon CL4-2022-HUMAN-02 research and innovation action under grant agreement 101120237, project ELIAS (European Lighthouse of AI for Sustainability).

## References

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. *Advances in Neural Information Processing Systems*, 24, 2011.

Naoki Abe and Philip M Long. Associative reinforcement learning using linear probabilistic concepts. In *ICML*, pp. 3–11, 1999.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. *Journal of Machine Learning Research*, 3(Nov):397–422, 2002.

Peter Auer, Pratik Gajane, and Ronald Ortner. Adaptively tracking the best bandit arm with an unknown number of distribution changes. In *Conference on Learning Theory*, pp. 138–158. PMLR, 2019.

Soumya Basu, Rajat Sen, Sujay Sanghavi, and Sanjay Shakkottai. Blocking bandits. *Advances in Neural Information Processing Systems*, 32, 2019.

Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. *Advances in Neural Information Processing Systems*, 27, 2014.

Djallel Bouneffouf and Raphael Féraud. Multi-armed bandit problem with known trend. *Neurocomputing*, 205:16–21, 2016.

Djallel Bouneffouf, Irina Rish, Guillermo A Cecchi, and Raphaël Féraud. Context attentive bandits: contextual bandit with restricted context. In *Proceedings of the 26th International Joint Conference on Artificial Intelligence*, pp. 1468–1475, 2017.

Leonardo Cella and Nicolò Cesa-Bianchi. Stochastic bandits with delay-dependent payoffs. In *International Conference on Artificial Intelligence and Statistics*, pp. 1168–1177. PMLR, 2020.

Yifang Chen, Chung-Wei Lee, Haipeng Luo, and Chen-Yu Wei. A new algorithm for non-stationary contextual bandits: Efficient, optimal and parameter-free. In *Conference on Learning Theory*, pp. 696–726. PMLR, 2019.

Wang Chi Cheung, David Simchi-Levi, and Ruihao Zhu. Learning to optimize under non-stationarity. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 1079–1087. PMLR, 2019.

Corinna Cortes, Giulia DeSalvo, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. Discrepancy-based algorithms for non-stationary rested bandits. *arXiv preprint arXiv:1710.10657*, 2017.

Ashok Cutkosky, Abhimanyu Das, and Manish Purohit. Upper confidence bounds for combining stochastic bandits. *arXiv preprint arXiv:2012.13115*, 2020.

Varsha Dani, Thomas P Hayes, and Sham M Kakade. Stochastic linear optimization under bandit feedback. In *Conference on Learning Theory*. PMLR, 2008.

Yuntian Deng, Xingyu Zhou, Baekjin Kim, Ambuj Tewari, Abhishek Gupta, and Ness Shroff. Weighted Gaussian process bandits for non-stationary environments. In *International Conference on Artificial Intelligence and Statistics*, pp. 6909–6932. PMLR, 2022.

Yash Deshpande and Andrea Montanari. Linear bandits in high dimension and recommendation systems.
In *2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)*, pp. 1750–1754. IEEE, 2012.

Giuseppe Di Benedetto, Vito Bellini, and Giovanni Zappella. A linear bandit for seasonal environments. *arXiv preprint arXiv:2004.13576*, 2020.

Louis Faury, Yoan Russac, Marc Abeille, and Clément Calauzènes. A technical note on non-stationary parametric bandits: Existing mistakes and preliminary solutions. In *Algorithmic Learning Theory*, pp. 619–626. PMLR, 2021.

Dylan J Foster, Akshay Krishnamurthy, and Haipeng Luo. Model selection for contextual bandits. *Advances in Neural Information Processing Systems*, 32, 2019.

Saeed Ghoorchian and Setareh Maghsudi. Bayesian linear bandits for large-scale recommender systems. *arXiv preprint arXiv:2202.03167*, 2022.

John Gittins, Kevin Glazebrook, and Richard Weber. *Multi-armed bandit allocation indices*. John Wiley & Sons, 2011.

John C Gittins. Bandit processes and dynamic allocation indices. *Journal of the Royal Statistical Society: Series B (Methodological)*, 41(2):148–164, 1979.

Hoda Heidari, Michael J Kearns, and Aaron Roth. Tight policy regret bounds for improving and decaying bandits. In *IJCAI*, pp. 1562–1570, 2016.

Komal Kapoor, Karthik Subbian, Jaideep Srivastava, and Paul Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In *Proceedings of the Eighth ACM International Conference on Web Search and Data Mining*, pp. 233–242, 2015.

Zohar S Karnin and Oren Anava. Multi-armed bandits: Competing with optimal sequences. *Advances in Neural Information Processing Systems*, 29, 2016.

Baekjin Kim and Ambuj Tewari. Randomized exploration for non-stationary stochastic linear bandits. In *Conference on Uncertainty in Artificial Intelligence*, pp. 71–80. PMLR, 2020.

Robert Kleinberg and Nicole Immorlica. Recharging bandits. In *2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)*, pp. 309–319. IEEE, 2018.

Melda Korkut and Andrew Li. Disposable linear bandits for online recommendations. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 4172–4180, 2021.

Geza Kovacs, Zhengxuan Wu, and Michael S Bernstein. Rotating online behavior change interventions increases effectiveness but also increases attrition. *Proceedings of the ACM on Human-Computer Interaction*, 2(CSCW):1–25, 2018.

Matevž Kunaver and Tomaž Požrl. Diversity in recommender systems – a survey. *Knowledge-Based Systems*, 123:154–162, 2017.

Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In *Artificial Intelligence and Statistics*, pp. 535–543. PMLR, 2015.

Pierre Laforgue, Giulia Clerici, Nicolò Cesa-Bianchi, and Ran Gilad-Bachrach. A last switch dependent analysis of satiation and seasonality in bandits. In *International Conference on Artificial Intelligence and Statistics*, pp. 971–990. PMLR, 2022.

Tor Lattimore and Csaba Szepesvári. *Bandit algorithms*. Cambridge University Press, 2020.

Liu Leqi, Fatma Kilinc Karzan, Zachary Lipton, and Alan Montgomery. Rebounding bandits for modeling satiation effects. *Advances in Neural Information Processing Systems*, 34:4003–4014, 2021.

Nir Levine, Koby Crammer, and Shie Mannor. Rotting bandits. *Advances in Neural Information Processing Systems*, 30, 2017.

Chuanhao Li, Qingyun Wu, and Hongning Wang. Unifying clustered and non-stationary bandits. In *International Conference on Artificial Intelligence and Statistics*, pp. 1063–1071. PMLR, 2021.
Yang Li, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, and Bin Cui. Efficient automatic CASH via rising bandits. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 4763–4771, 2020.

Haipeng Luo, Chen-Yu Wei, Alekh Agarwal, and John Langford. Efficient contextual bandits in non-stationary worlds. In *Conference on Learning Theory*, pp. 1739–1776. PMLR, 2018.

Alberto Maria Metelli, Francesco Trovo, Matteo Pirola, and Marcello Restelli. Stochastic rising bandits. In *International Conference on Machine Learning*, pp. 15421–15457. PMLR, 2022.

Jonas W Mueller, Vasilis Syrgkanis, and Matt Taddy. Low-rank bandit methods for high-dimensional dynamic pricing. *Advances in Neural Information Processing Systems*, 32, 2019.

Ronald Ortner, Daniil Ryabko, Peter Auer, and Rémi Munos. Regret bounds for restless Markov bandits. In *International Conference on Algorithmic Learning Theory*, pp. 214–228. Springer, 2012.

Aldo Pacchiano, My Phan, Yasin Abbasi Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, and Csaba Szepesvari. Model selection in contextual stochastic bandit problems. *Advances in Neural Information Processing Systems*, 33:10328–10337, 2020.

Ciara Pike-Burke and Steffen Grunewalder. Recovering bandits. *Advances in Neural Information Processing Systems*, 32:14122–14131, 2019.

Paat Rusmevichientong and John N Tsitsiklis. Linearly parameterized bandits. *Mathematics of Operations Research*, 35(2):395–411, 2010.

Yoan Russac, Claire Vernade, and Olivier Cappé. Weighted linear bandits for non-stationary environments. *Advances in Neural Information Processing Systems*, 32, 2019.

Yoan Russac, Olivier Cappé, and Aurélien Garivier. Algorithms for non-stationary generalized linear bandits. *arXiv preprint arXiv:2003.10113*, 2020.

Markus Schedl, Hamed Zamani, Ching-Wei Chen, Yashar Deldjoo, and Mehdi Elahi. Current challenges and visions in music recommender systems research. *International Journal of Multimedia Information Retrieval*, 7(2):95–116, 2018.

Julien Seznec. Apprentissage automatique séquentiel pour les systèmes éducatifs intelligents, 2020.

Julien Seznec, Andrea Locatelli, Alexandra Carpentier, Alessandro Lazaric, and Michal Valko. Rotting bandits are no harder than stochastic ones. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 2564–2572. PMLR, 2019.

David Simchi-Levi, Zeyu Zheng, and Feng Zhu. Dynamic planning and learning under recovering rewards. In *International Conference on Machine Learning*, pp. 9702–9711. PMLR, 2021.

Cem Tekin and Mingyan Liu. Online learning of rested and restless bandits. *IEEE Transactions on Information Theory*, 58(8):5588–5611, 2012.

Romain Warlop, Alessandro Lazaric, and Jérémie Mary. Fighting boredom in recommender systems with linear reinforcement learning. *Advances in Neural Information Processing Systems*, 31, 2018.

Peter Whittle. Restless bandits: Activity allocation in a changing world. *Journal of Applied Probability*, 25(A):287–298, 1988.

Qingyun Wu, Naveen Iyer, and Hongning Wang. Learning contextual bandits in a non-stationary environment. In *The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval*, pp. 495–504, 2018.

Xiao Xu, Fang Dong, Yanghua Li, Shaojian He, and Xin Li. Contextual-bandit based personalized recommendation with time-varying user interests. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 6518–6525, 2020.

Peng Zhao, Lijun Zhang, Yuan Jiang, and Zhi-Hua Zhou.
A simple approach for non-stationary linear bandits. In *International Conference on Artificial Intelligence and Statistics*, pp. 746–755. PMLR, 2020.

## A Technical Proofs

We gather in this section the proofs omitted in the core text.

Proposition 1 *The oracle greedy strategy, which plays* $a^{\mathrm{greedy}}_t=\arg\max_{a\in\mathcal{A}}\langle a,A_{t-1}\theta^*\rangle$ *at time step* $t$*, can suffer linear regret, both in rotting and rising scenarios.*

Proof We build two instances of LBM, one rotting, one rising, in which the oracle greedy strategy suffers linear regret. We highlight that the other strategy exhibited, which performs better than oracle greedy, may not be optimal.

Rotting instance. Let $\mathcal{A}=B_d$, $\theta^*=e_1$, $m=d-1$, and $A$ such that

$$A(a_{1},\ldots,a_{m})=\left(I_{d}+\sum_{s=1}^{m}a_{s}a_{s}^{\top}\right)^{-\gamma}\,,$$

for some $\gamma>0$ to be specified later. Oracle greedy, which plays at each time step $a^{\mathrm{greedy}}_t=\arg\max_{a\in\mathcal{A}}\langle a,A_{t-1}\theta^*\rangle$, constantly plays $e_1$. After the first $m$ pulls, it collects a reward of $1/d^\gamma$ at every time step. On the other side, the strategy that plays cyclically the block $e_1\dots e_d$ collects a reward of $1$ every $d=m+1$ time steps, i.e., an average reward of $1/d$ per step. Hence, up to the transient first $m$ pulls, the cumulative reward of oracle greedy after $T$ rounds is $T/d^\gamma$, and that of the cyclic policy is $T/d$. The regret of oracle greedy is thus at least

$$T\left(\frac{1}{d}-\frac{1}{d^{\gamma}}\right)\,,$$

which is linear for $\gamma>1$.

Rising instance. Let $m\ge1$, $d=2$, $\mathcal{A}=B_2$, $\theta^*=(\varepsilon,1)$ where $\varepsilon>0$ is to be specified later, and $A$ such that

$$A(a_{1},\ldots,a_{m})=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}+\sum_{s=1}^{m}a_{s}a_{s}^{\top}\,.$$

Oracle greedy constantly plays $e_1$, collecting a reward of $(m+1)\theta^*_1$ from round $m+1$ onward. On the other side, the strategy that constantly plays $e_2$ collects a reward of $m\theta^*_2$ from round $m+1$ onward. Hence, the regret of oracle greedy from round $m+1$ onward is at least $(T-m)\,[m-(m+1)\varepsilon]$, which is linear for $\varepsilon<m/(m+1)$. □

Proposition 2 *For any* $m, L\ge1$, *let* $\widetilde{\mathbf{a}}$ *be the block of* $m+L$ *actions defined in* (5) *and* $(\widetilde{r}_t)_{t=1}^T$ *be the expected rewards collected when playing cyclically* $\widetilde{\mathbf{a}}$*. We have*

$$\mathrm{OPT}-\sum_{t=1}^{T}\widetilde{r}_{t}\leq\frac{2mR}{m+L}\;T\;.\tag{6}$$

Proof Recall that the optimal sequence is denoted $(a^*_t)_{t=1}^T$ and collects rewards $(r^*_t)_{t=1}^T$. Let $L>0$; by definition, there exists a block of actions of length $L$ in $(a^*_t)_{t=1}^T$ with average expected reward higher than $\mathrm{OPT}/T$. Let $t^*$ be the first index of this block; we thus have $(1/L)\sum_{t=t^*}^{t^*+L-1}r^*_t\ge\mathrm{OPT}/T$. However, this average expected reward is realized only using the initial matrix $A_{t^*-1}$, generated from $a^*_{t^*-1},\dots,a^*_{t^*-m}$. Let $\mathbf{a}^*=a^*_{t^*-m},\dots,a^*_{t^*+L-1}$, of length $m+L$. Note that, by definition, we have $\widetilde{r}(\widetilde{\mathbf{a}})\ge\widetilde{r}(\mathbf{a}^*)=\sum_{t=t^*}^{t^*+L-1}r^*_t\ge L\,\mathrm{OPT}/T$. Furthermore, by (8), when playing cyclically $\widetilde{\mathbf{a}}$ one obtains at least a reward of $-R$ in each one of the first $m$ pulls of the block. Collecting all the pieces, we obtain

$$\begin{aligned}
\sum_{t=1}^{T}\widetilde{r}_{t}&\geq\frac{T}{m+L}\Big(-mR+\widetilde{r}(\widetilde{\mathbf{a}})\Big)\\
&\geq\frac{T}{m+L}\Big(-mR+\widetilde{r}(\mathbf{a}^{*})\Big)\\
&\geq\frac{T}{m+L}\Big(-mR+L\,\frac{\mathrm{OPT}}{T}\Big)\\
&=\frac{L}{m+L}\,\mathrm{OPT}-\frac{mR}{m+L}\,T\\
&\geq\frac{L}{m+L}\,\mathrm{OPT}+\frac{m}{m+L}\,\mathrm{OPT}-\frac{mR}{m+L}\,T-\frac{mR}{m+L}\,T\qquad(14)\\
&=\mathrm{OPT}-\frac{2mR}{m+L}\,T\,,
\end{aligned}$$

where (14) derives from $\mathrm{OPT}\le RT$. □
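The rotting instance used in the proof of Proposition 1 is easy to check numerically. The sketch below (illustrative, with that instance's $A$ plugged in directly) compares oracle greedy, which keeps playing $e_1$, against the cyclic policy $e_1\dots e_d$; the two per-round averages approach $1/d^\gamma$ and $1/d$ respectively.

```python
import numpy as np

def rotting_reward(a, past, gamma):
    """Reward <a, A(past) e_1> with A = (I + sum aa^T)^(-gamma).

    For basis-vector actions the Gram matrix is diagonal, so the matrix
    power reduces to an element-wise power."""
    d = a.shape[0]
    S = np.eye(d) + sum(np.outer(x, x) for x in past)
    A = np.diag(np.diag(S) ** (-gamma))
    return a @ (A @ np.eye(d)[0])

d, gamma, T = 4, 2.0, 400       # m = d - 1 = 3; gamma > 1 makes greedy's regret linear
m = d - 1
eye = np.eye(d)
greedy, cyclic = 0.0, 0.0
past_g, past_c = [], []
for t in range(T):
    greedy += rotting_reward(eye[0], past_g, gamma)   # greedy keeps playing e_1 (per the proof)
    past_g = (past_g + [eye[0]])[-m:]
    a = eye[t % d]                                    # cyclic policy e_1, ..., e_d
    cyclic += rotting_reward(a, past_c, gamma)
    past_c = (past_c + [a])[-m:]
print(greedy / T, cyclic / T)   # approx 1/d**gamma vs 1/d
```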
We prove the (stronger) high-probability version of Proposition 4.

Proposition 5 *Let* $\lambda\ge1$, $\delta\in(0,1)$, *and* $\mathbf{a}_\tau$ *be the blocks of actions in* $\mathbb{R}^{d(m+L)}$ *associated to the* $\mathbf{b}_\tau$ *defined in* (9)*. Then, with probability at least* $1-\delta$ *we have*

$$\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\widetilde{\mathbf{a}})-\widetilde{r}(\mathbf{a}_{\tau})\leq4L(m+1)^{\gamma^{+}}\,\sqrt{Td\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\cdot\left(\sqrt{\lambda L}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d(m+L)\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\right)\,.$$

Proof The proof essentially follows that of (Abbasi-Yadkori et al., 2011, Theorem 3). The main difference is that our version of OFUL operates at the block level. This implies a smaller time horizon, but also an increased dimension and an instantaneous regret $\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle-\langle\mathbf{b}_\tau,\boldsymbol{\theta}^*\rangle$ upper bounded by $2L(m+1)^{\gamma^+}$ instead of $1$. We detail the main steps of the proof for completeness. Recall that running OFUL in our case amounts to computing at every block time step $\tau$

$$\widehat{\boldsymbol{\theta}}_{\tau}={\boldsymbol{V}}_{\tau}^{-1}\left(\sum_{\tau^{\prime}=1}^{\tau}{y}_{\tau^{\prime}}{\mathbf{b}}_{\tau^{\prime}}\right),\qquad\text{where}\qquad{\boldsymbol{V}}_{\tau}=\sum_{\tau^{\prime}=1}^{\tau}{\mathbf{b}}_{\tau^{\prime}}{\mathbf{b}}_{\tau^{\prime}}^{\top}+\lambda I_{d(m+L)}\,,\qquad\text{and}\qquad{y}_{\tau}=\sum_{i=m+1}^{m+L}y_{\tau,i}\,,$$

since we associate with a block of actions the sum of rewards obtained after time step $m$. Note that by the determinant-trace inequality, see e.g., (Abbasi-Yadkori et al., 2011, Lemma 10), with actions $\mathbf{b}_\tau$ that satisfy $\|\mathbf{b}_\tau\|_2^2\le m+L(m+1)^{2\gamma^+}$ we have

$$\frac{|\boldsymbol{V}_{\tau}|}{|\lambda I_{d(m+L)}|}\leq\left(1+\frac{\tau\big(m+L(m+1)^{2\gamma^{+}}\big)}{d(m+L)\lambda}\right)^{d(m+L)}\leq\left(1+\frac{\tau(m+1)^{2\gamma^{+}}}{d\lambda}\right)^{d(m+L)}\,.\tag{15}$$

The action played at block time step $\tau$ is the block $\mathbf{a}_\tau\in B_d^{m+L}$ associated with

$$\mathbf{b}_{\tau}=\operatorname*{arg\,max}_{\mathbf{b}\in\mathbf{B}}\;\operatorname*{sup}_{\boldsymbol{\theta}\in{\mathcal{C}}_{\tau-1}}\;\langle\mathbf{b},\boldsymbol{\theta}\rangle\;,\tag{16}$$

where

$${\mathcal{C}}_{\tau}=\left\{\boldsymbol{\theta}\in\mathbb{R}^{d(m+L)}:\left\|{\widehat{\boldsymbol{\theta}}}_{\tau}-\boldsymbol{\theta}\right\|_{\boldsymbol{V}_{\tau}}\leq\beta_{\tau}(\delta)\right\}\,,\qquad\text{with}\qquad\beta_{\tau}(\delta)=\sqrt{2\ln\left(\frac{1}{\delta}\right)+d(m+L)\,\ln\left(1+\frac{\tau(m+1)^{2\gamma^{+}}}{d\lambda}\right)}+\sqrt{\lambda L}\,.\tag{17}$$

Applying (Abbasi-Yadkori et al., 2011, Theorem 2) to $\boldsymbol{\theta}^*\in\mathbb{R}^{d(m+L)}$, which satisfies $\|\boldsymbol{\theta}^*\|_2\le\sqrt{L}$, we have that $\boldsymbol{\theta}^*\in\mathcal{C}_\tau$ for every $\tau$ with probability at least $1-\delta$. Denoting by $\widetilde{\boldsymbol{\theta}}_\tau$ the model that maximizes (16), we thus have that with probability at least $1-\delta$ the inequality $\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle\le\langle\mathbf{b}_\tau,\widetilde{\boldsymbol{\theta}}_\tau\rangle$ holds for every $\tau$, and consequently

$$\begin{aligned}
\sum_{\tau=1}^{T/(m+L)}\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle-\langle\mathbf{b}_\tau,\boldsymbol{\theta}^*\rangle
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ \langle\mathbf{b}_\tau,\widetilde{\boldsymbol{\theta}}_\tau-\boldsymbol{\theta}^*\rangle\Big\}\\
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ \big\|\widetilde{\boldsymbol{\theta}}_\tau-\boldsymbol{\theta}^*\big\|_{\boldsymbol{V}_{\tau-1}}\|\mathbf{b}_\tau\|_{\boldsymbol{V}_{\tau-1}^{-1}}\Big\}\\
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ 2\beta_\tau(\delta)\,\|\mathbf{b}_\tau\|_{\boldsymbol{V}_{\tau-1}^{-1}}\Big\}\\
&\le2L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sum_{\tau=1}^{T/(m+L)}\min\Big\{1\,,\ \|\mathbf{b}_\tau\|_{\boldsymbol{V}_{\tau-1}^{-1}}\Big\}\\
&\le2L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sqrt{\frac{T}{m+L}\sum_{\tau=1}^{T/(m+L)}\min\Big\{1\,,\ \|\mathbf{b}_\tau\|^2_{\boldsymbol{V}_{\tau-1}^{-1}}\Big\}}\\
&\le2\sqrt{2}\,L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sqrt{\frac{T}{m+L}\,\ln\frac{|\boldsymbol{V}_{T/(m+L)}|}{|\lambda I_{d(m+L)}|}}\\
&\le4L(m+1)^{\gamma^{+}}\sqrt{Td\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\cdot\left(\sqrt{\lambda L}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d(m+L)\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\right),
\end{aligned}$$

where we have used (Abbasi-Yadkori et al., 2011, Lemma 11), as well as (15) and (17).
Note that in the stationary case, i.e., when $m=0$ and $L=1$, we exactly recover (Abbasi-Yadkori et al., 2011, Theorem 3). Proposition 4 is obtained by setting $\lambda\in[1,d]$, $L\ge m$, and $\delta=1/T$. □

We now prove Proposition 3.

Proof Let $d=m+1$, $\mathcal{A}=\{0_d\}\cup(e_k)_{k\le d}$, $\theta^*=(1/\sqrt{d},\dots,1/\sqrt{d})$, and $\gamma\le0$. For simplicity, we index the basis modulo $d$, i.e., $e_{k+d}=e_k$ for any $k\in\mathbb{N}$. Note that for any $a_1,\dots,a_{m+1}\in\mathcal{A}$ we have $\langle a_{m+1},A_m\theta^*\rangle\le\|a_{m+1}\|_1\,\|A_m\theta^*\|_\infty\le1/\sqrt{d}$, such that one can take $R=1/\sqrt{d}$. Observe now that the strategy which plays cyclically $e_1,\dots,e_d$ collects a reward of $1/\sqrt{d}$ at each time step, which is optimal, such that $\mathrm{OPT}=T/\sqrt{d}$. Further, it is easy to check that the block $\widetilde{\mathbf{a}}$, composed of $m$ pulls of $0_d$ followed by $e_1,\dots,e_L$, satisfies $\widetilde{r}(\widetilde{\mathbf{a}})=L/\sqrt{d}$, which is optimal for similar reasons. Playing cyclically $\widetilde{\mathbf{a}}$, one gets a reward of $L/\sqrt{d}$ every $m+L$ pulls. In other terms, we have

$$\mathrm{OPT}-\sum_{t=1}^{T}\widetilde{r}_{t}={\frac{T}{\sqrt{d}}}-{\frac{L}{m+L}}\,{\frac{T}{\sqrt{d}}}={\frac{m}{m+L}}\,{\frac{T}{\sqrt{d}}}={\frac{m R}{m+L}}\;T\;.$$

□

## A.5 Proof Of Theorem 1

We prove the following high-probability version of Theorem 1; Theorem 1 is then obtained by setting $\lambda\in[1,d]$ and $\delta=1/T$.

Theorem 2 *Let* $\lambda\ge1$, $\delta\in(0,1)$, *and* $\mathbf{a}_\tau$ *be the blocks of actions in* $\mathbb{R}^{d(m+L)}$ *defined in* (11)*. Then, with probability at least* $1-\delta$ *we have*

$$\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\widetilde{\mathbf{a}})-\widetilde{r}(\mathbf{a}_{\tau})\leq4L(m+1)^{\gamma^{+}}\ \sqrt{Td\ \ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\ \right)\,.$$

*Let* $m\ge1$, $T\ge m^2d^2+1$, *and set* $L=\sqrt{m/d}\,T^{1/4}-m$. *Let* $r_t$ *be the rewards collected when playing* $\mathbf{a}_\tau$ *as defined in* (11)*. Then, with probability at least* $1-\delta$ *we have*

$$\mathrm{OPT}-\sum_{t=1}^{T}r_{t}\leq4\sqrt{d}\,(m+1)^{\frac{1}{2}+\gamma^{+}}\,T^{3/4}\Bigg[1+2\sqrt{\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\frac{\lambda}{d}}+\sqrt{\frac{\ln(1/\delta)}{d}+\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\,\right)\Bigg]\,.$$

Proof The proof is along the lines of OFUL's analysis. The main difficulty is that we cannot use the elliptical potential lemma, see e.g., (Lattimore & Szepesvári, 2020, Lemma 19.4), due to the delay accumulated by $V_\tau$, which is computed every $m+L$ rounds only. Let

$$\beta_{\tau}(\delta)=\sqrt{2\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{\tau(m+1)^{2\gamma^{+}}}{d\lambda}\right)}+\sqrt{\lambda}\,.\tag{18}$$

By (Abbasi-Yadkori et al., 2011, Theorem 2), we have with probability at least $1-\delta$ that $\theta^*\in\mathcal{C}_\tau$ for every $\tau$. It follows directly that $\boldsymbol{\theta}^*\in\mathcal{D}_\tau$ for any $\tau$, such that $\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle\le\langle\mathbf{b}_\tau,\widetilde{\boldsymbol{\theta}}_\tau\rangle$, where $\widetilde{\boldsymbol{\theta}}_\tau=(0_d,\dots,0_d,\widetilde{\theta}_\tau,\dots,\widetilde{\theta}_\tau)$ with $\widetilde{\theta}_\tau\in\mathbb{R}^d$ the model that maximizes (11) over $\mathcal{C}_{\tau-1}$. It can be shown that the regret is upper bounded by $\sum_\tau\sum_{i=m+1}^{m+L}\langle b_{\tau,i},\widetilde{\theta}_\tau-\theta^*\rangle$. Following the standard analysis, one could then use

$$\left\langle b_{\tau,i},\widetilde{\theta}_{\tau}-\theta^{*}\right\rangle\leq\|b_{\tau,i}\|_{V_{\tau-1}^{-1}}\left\|\widetilde{\theta}_{\tau}-\theta^{*}\right\|_{V_{\tau-1}}.$$

While the confidence set gives $\big\|\widetilde{\theta}_\tau-\theta^*\big\|_{V_{\tau-1}}\le2\beta_{\tau-1}(\delta)$, the quantity $\sum_{i=m+1}^{m+L}\|b_{\tau,i}\|_{V^{-1}_{\tau-1}}$ is much more complex to bound. Indeed, the elliptical potential lemma allows to bound $\sum_t\|a_t\|^2_{V^{-1}_{t-1}}$ when $V_t=\sum_{s\le t}a_sa_s^\top+\lambda I_d$.
However, recall that in our case we have $V_\tau=\sum_{\tau'=1}^{\tau}\sum_{i=m+1}^{m+L}b_{\tau',i}b_{\tau',i}^\top+\lambda I_d$, which is only computed every $m+L$ rounds. As a consequence, there exists a "delay" between $V_{\tau-1}$ and the action $b_{\tau,i}$ for $i\ge m+2$, preventing from using the lemma. Therefore, we propose to use instead

$$\left\langle b_{\tau,i},\widetilde{\theta}_{\tau}-\theta^{*}\right\rangle\leq\left\|b_{\tau,i}\right\|_{V_{\tau,i-1}^{-1}}\left\|\widetilde{\theta}_{\tau}-\theta^{*}\right\|_{V_{\tau,i-1}},\quad\text{where}\quad V_{\tau,i}=V_{\tau-1}+\sum_{j=m+1}^{i}b_{\tau,j}b_{\tau,j}^{\top}\;.\tag{19}$$

By doing so, the elliptical potential lemma applies. On the other hand, one has to control $\big\|\widetilde{\theta}_\tau-\theta^*\big\|_{V_{\tau,i-1}}$, which is no longer bounded by $2\beta_{\tau-1}(\delta)$ since the subscript matrix is $V_{\tau,i-1}$ instead of $V_{\tau-1}$. Still, one can show that for any $i\le m+L$ we have

$$\begin{aligned}
\big\|\widetilde{\theta}_{\tau}-\theta^{*}\big\|^2_{V_{\tau,i-1}}
&=\operatorname{Tr}\left(V_{\tau,i-1}\,\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)^{\top}\right)\\
&=\operatorname{Tr}\left(\Big(V_{\tau-1}+\sum_{j=m+1}^{i-1}b_{\tau,j}b_{\tau,j}^{\top}\Big)\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)^{\top}\right)\\
&=\operatorname{Tr}\left(\Big(I_d+\sum_{j=m+1}^{i-1}V_{\tau-1}^{-1/2}b_{\tau,j}\big(V_{\tau-1}^{-1/2}b_{\tau,j}\big)^{\top}\Big)\,V_{\tau-1}^{1/2}\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)^{\top}V_{\tau-1}^{1/2}\right)\\
&\le\Big\|I_d+\sum_{j=m+1}^{i-1}V_{\tau-1}^{-1/2}b_{\tau,j}\big(V_{\tau-1}^{-1/2}b_{\tau,j}\big)^{\top}\Big\|_{*}\,\operatorname{Tr}\left(V_{\tau-1}^{1/2}\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)\big(\widetilde{\theta}_{\tau}-\theta^{*}\big)^{\top}V_{\tau-1}^{1/2}\right)\\
&\le\Big(1+\sum_{j=m+1}^{i-1}\big\|V_{\tau-1}^{-1/2}b_{\tau,j}\big\|_{2}^{2}\Big)\,\big\|\widetilde{\theta}_{\tau}-\theta^{*}\big\|^2_{V_{\tau-1}}\\
&\le\Big(1+(L-1)(m+1)^{2\gamma^{+}}\Big)\,\big\|\widetilde{\theta}_{\tau}-\theta^{*}\big\|^2_{V_{\tau-1}}\\
&\le L(m+1)^{2\gamma^{+}}\,\big\|\widetilde{\theta}_{\tau}-\theta^{*}\big\|^2_{V_{\tau-1}}\,.
\end{aligned}\tag{20}$$

Recalling also that $\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle-\langle\mathbf{b}_\tau,\boldsymbol{\theta}^*\rangle\le2L(m+1)^{\gamma^+}$, we have with probability at least $1-\delta$

$$\begin{aligned}
\sum_{\tau=1}^{T/(m+L)}\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle-\langle\mathbf{b}_\tau,\boldsymbol{\theta}^*\rangle
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ \langle\mathbf{b}_\tau,\widetilde{\boldsymbol{\theta}}_\tau-\boldsymbol{\theta}^*\rangle\Big\}\\
&=\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ \sum_{i=m+1}^{m+L}\langle b_{\tau,i},\widetilde{\theta}_\tau-\theta^*\rangle\Big\}\\
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ \sum_{i=m+1}^{m+L}\|b_{\tau,i}\|_{V^{-1}_{\tau,i-1}}\big\|\widetilde{\theta}_\tau-\theta^*\big\|_{V_{\tau,i-1}}\Big\}\\
&\le\sum_{\tau=1}^{T/(m+L)}\min\Big\{2L(m+1)^{\gamma^+},\ 2\sqrt{L}\,(m+1)^{\gamma^+}\beta_{\tau-1}(\delta)\sum_{i=m+1}^{m+L}\|b_{\tau,i}\|_{V^{-1}_{\tau,i-1}}\Big\}\\
&\le2L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sum_{\tau=1}^{T/(m+L)}\sum_{i=m+1}^{m+L}\min\Big\{1\,,\ \|b_{\tau,i}\|_{V^{-1}_{\tau,i-1}}\Big\}\\
&\le2L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sqrt{\frac{TL}{m+L}\sum_{\tau=1}^{T/(m+L)}\sum_{i=m+1}^{m+L}\min\Big\{1\,,\ \|b_{\tau,i}\|^2_{V^{-1}_{\tau,i-1}}\Big\}}\\
&\le2\sqrt{2}\,L(m+1)^{\gamma^+}\beta_{T/(m+L)}(\delta)\sqrt{T\,\ln\frac{|V_{T/(m+L)}|}{|\lambda I_d|}}\\
&\le4L(m+1)^{\gamma^{+}}\sqrt{Td\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\frac{1}{\delta}+d\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\right),
\end{aligned}\tag{21}$$

where we have used (18), (19), and (20). Similarly to Proposition 5, note that in the stationary case, i.e., when $m=0$ and $L=1$, we exactly recover (Abbasi-Yadkori et al., 2011, Theorem 3). The first claim of Theorem 1 is obtained by setting $\lambda\in[1,d]$ and $\delta=1/T$.

Let $R_T$ denote the right-hand side of (21). Combining this bound with the arguments of Proposition 2, we have with probability $1-\delta$

$$\begin{aligned}
\sum_{t=1}^{T}r_t
&\ge\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\mathbf{a}_\tau)-\frac{m(m+1)^{\gamma^+}}{m+L}\,T&(22)\\
&=\sum_{\tau=1}^{T/(m+L)}\langle\mathbf{b}_\tau,\boldsymbol{\theta}^*\rangle-\frac{m(m+1)^{\gamma^+}}{m+L}\,T\\
&\ge\sum_{\tau=1}^{T/(m+L)}\langle\widetilde{\mathbf{b}},\boldsymbol{\theta}^*\rangle-R_T-\frac{m(m+1)^{\gamma^+}}{m+L}\,T&(23)\\
&=\sum_{\tau=1}^{T/(m+L)}\widetilde{r}(\widetilde{\mathbf{a}})-R_T-\frac{m(m+1)^{\gamma^+}}{m+L}\,T\\
&\ge\sum_{t=1}^{T}\widetilde{r}_t-R_T-\frac{2m(m+1)^{\gamma^+}}{m+L}\,T&(24)\\
&\ge\mathrm{OPT}-R_T-\frac{4m(m+1)^{\gamma^+}}{m+L}\,T&(25)\\
&\ge\mathrm{OPT}-4(m+1)^{\gamma^{+}}\left[\frac{mT}{m+L}+(m+L)\sqrt{Td\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\frac{1}{\delta}+d\,\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d(m+L)\lambda}\right)}\right)\right],
\end{aligned}$$

where (22) and (24) come from the fact that any instantaneous reward is bounded by $(m+1)^{\gamma^+}$, see (8), (23) from (21), and (25) from Proposition 2. Now, assume that $m\ge1$, $T\ge d^2m^2+1$, and let $L=\sqrt{m/d}\,T^{1/4}-m$. By the condition on $T$, we have $\sqrt{m/d}\,T^{1/4}>m\ge1$, such that $L\ge1$ and (rounding $L$ up to an integer)

$$\sqrt{\frac{m}{d}}\,T^{1/4}\ \le\ L+m\ \le\ \sqrt{\frac{m}{d}}\,T^{1/4}+1\ \le\ 2\,\sqrt{\frac{m}{d}}\,T^{1/4}\,.$$
Substituting in the above bound, we have with probability $1-\delta$

$$\mathrm{OPT}-\sum_{t=1}^{T}r_t\leq4\sqrt{d}\,(m+1)^{\frac{1}{2}+\gamma^{+}}\,T^{3/4}\Bigg[1+2\sqrt{\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\frac{\lambda}{d}}+\sqrt{\frac{\ln(1/\delta)}{d}+\ln\left(1+\frac{T(m+1)^{2\gamma^{+}}}{d\lambda}\right)}\,\right)\Bigg]\,.$$

The second claim of Theorem 1 is obtained by setting $\lambda\in[1,d]$ and $\delta=1/T$. □

## A.6 Proof Of Corollary 1

Lemma 1 *Suppose that a block-based bandit algorithm (in our case the bandit combiner) produces a sequence of* $T_{\mathrm{bc}}$ *blocks* $\mathbf{a}_\tau$*, with possibly different cardinalities* $|\mathbf{a}_\tau|$*, such that*

$$\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}-\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\mathbf{a}_{\tau})}{|\mathbf{a}_{\tau}|}\leq F(T_{\mathrm{bc}})\,,$$

*for some sublinear function* $F$*. Then, we have*

$$\frac{\operatorname*{min}_{\tau}|\,\mathbf{a}_{\tau}|}{\operatorname*{max}_{\tau}|\,\mathbf{a}_{\tau}|}\left(\widetilde{r}(\widetilde{\mathbf{a}})\,\,\frac{\sum_{\tau}|\,\mathbf{a}_{\tau}|}{|\widetilde{\mathbf{a}}|}\right)-\sum_{\tau=1}^{T_{\mathrm{bc}}}\widetilde{r}(\mathbf{a}_{\tau})\ \leq\ \operatorname*{min}_{\tau}|\,\mathbf{a}_{\tau}|\,F(T_{\mathrm{bc}})\ .$$

In particular, if all blocks have the same cardinality the last bound is just the block regret bound scaled by $|\mathbf{a}_\tau|$.

Proof We have

$$\begin{aligned}
\sum_{\tau=1}^{T_{\mathrm{bc}}}\widetilde{r}(\mathbf{a}_\tau)
&\ge\min_\tau|\mathbf{a}_\tau|\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\mathbf{a}_\tau)}{|\mathbf{a}_\tau|}
\ \ge\ \min_\tau|\mathbf{a}_\tau|\left(\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}-F(T_{\mathrm{bc}})\right)\\
&=\frac{\min_\tau|\mathbf{a}_\tau|}{\max_\tau|\mathbf{a}_\tau|}\,\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}\,\max_\tau|\mathbf{a}_\tau|\,T_{\mathrm{bc}}-\min_\tau|\mathbf{a}_\tau|\,F(T_{\mathrm{bc}})\\
&\ge\frac{\min_\tau|\mathbf{a}_\tau|}{\max_\tau|\mathbf{a}_\tau|}\,\widetilde{r}(\widetilde{\mathbf{a}})\,\frac{\sum_\tau|\mathbf{a}_\tau|}{|\widetilde{\mathbf{a}}|}-\min_\tau|\mathbf{a}_\tau|\,F(T_{\mathrm{bc}})\,. \qquad\square
\end{aligned}$$

Corollary 1 *Consider an instance of LBM with unknown parameters* $(m_\star,\gamma_\star)$*. Assume a bandit combiner is run on* $N\le d\sqrt{m_\star}$ *instances of OFUL-memory (Algorithm 2), each using a different pair of parameters* $(m_i,\gamma_i)$ *from a set* $S=\big\{(m_1,\gamma_1),\dots,(m_N,\gamma_N)\big\}$ *such that* $(m_\star,\gamma_\star)\in S$*. Let* $M=(\max_j m_j)/(\min_j m_j)$*. Then, for all* $T\ge(m_\star+1)^{2\gamma^+_\star}/(m_\star d^4)$*, the expected rewards* $(r^{\mathrm{bc}}_t)_{t=1}^T$ *of the bandit combiner satisfy*

$$\frac{\mathrm{OPT}}{\sqrt{M}}-\mathbb{E}\left[\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\right]\ =\ \widetilde{\mathcal{O}}\Big(M\,d\,(m_{\star}+1)^{1+\frac{3}{2}\gamma_{\star}^{+}}\,T^{3/4}\Big)\,.$$

Proof Let $m_\star$ be the true memory size, and $L_\star=L(m_\star)$ the corresponding (partial) block length. Throughout the proof, $\widetilde{\mathbf{a}}$ denotes the block defined in (5) with length $m_\star+L_\star$. First observe that only one of the OFUL-memory instances we test is well-specified, i.e., has the true parameters $(m_\star,\gamma_\star)$. We can thus rewrite the regret bound for the Bandit Combiner (Cutkosky et al., 2020, Corollary 2), generalized to rewards bounded in $[-R,R]$, as follows:

$$\mathrm{Regret}_{\mathrm{bc}}=\widetilde{\mathcal{O}}\left(C_{\star}T_{\mathrm{bc}}^{\alpha_{\star}}+C_{\star}^{\frac{1}{\alpha_\star}}\,T_{\mathrm{bc}}\,\eta_{\star}^{\frac{1-\alpha_\star}{\alpha_\star}}+R^{2}\,T_{\mathrm{bc}}\,\eta_{\star}+\sum_{j\neq\star}\frac{1}{\eta_{j}}\right)\,,\tag{26}$$

where $T_{\mathrm{bc}}=T/(m_\star+L_\star)$ is the bandit combiner horizon, $C_\star$ and $\alpha_\star$ are the constants in the regret bound of the well-specified instance (see below how we determine them), and the $\eta_j$ are free parameters to be tuned. We now derive $C_\star$ and $\alpha_\star$. To that end, we must establish the regret bound of the well-specified instance, and identify $C_\star$ and $\alpha_\star$ such that this bound is equal to $C_\star T_{\mathrm{bc}}^{\alpha_\star}$, where $C_\star$ may contain logarithmic factors.
For the well-specified instance, the first claim of Theorem 2 gives that, with probability at least $1-\delta$, we have

$$\sum_{\tau=1}^{T/(m_{\star}+L_{\star})}\widetilde{r}(\widetilde{\mathbf{a}})-\widetilde{r}(\mathbf{a}_{\tau})\leq4(m_{\star}+L_{\star})(m_{\star}+1)^{\gamma_{\star}^{+}}\sqrt{Td\,\ln\left(1+\frac{T(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{T(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d(m_{\star}+L_{\star})\lambda}\right)}\,\right)\,,$$

and dividing by $|\mathbf{a}_\tau|=m_\star+L_\star$,

$$\sum_{\tau=1}^{T/(m_{\star}+L_{\star})}\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}-\frac{\widetilde{r}(\mathbf{a}_{\tau})}{|\mathbf{a}_{\tau}|}\leq T^{1/2}\,4(m_{\star}+1)^{\gamma_{\star}^{+}}\,\sqrt{d\ln\left(1+\frac{T(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{T(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d(m_{\star}+L_{\star})\lambda}\right)}\,\right)\,,\tag{27}$$

where we have used that $|\mathbf{a}_\tau|=|\widetilde{\mathbf{a}}|=m_\star+L_\star$ for every $\tau$. Note that the right-hand side of (27) is expressed in terms of $T$, which is not the correct horizon, $T/(m_\star+L_\star)$. However, recall that we have

$$m_{\star}+L_{\star}\leq2\sqrt{\frac{m_{\star}}{d}}\,T^{1/4}\ \Longrightarrow\ (m_{\star}+L_{\star})^{4}\leq\left(\frac{4m_{\star}}{d}\right)^{2}T\ \Longrightarrow\ T^{3}\leq\left(\frac{4m_{\star}}{d}\right)^{2}\left(\frac{T}{m_{\star}+L_{\star}}\right)^{4}\ \Longrightarrow\ T^{1/2}\leq\left(\frac{4m_{\star}}{d}\right)^{1/3}\left(\frac{T}{m_{\star}+L_{\star}}\right)^{2/3}\,,$$

such that by substituting in (27) and identifying, we have $\alpha_\star=2/3$ and

$$C_{\star}=4\left(\frac{4m_{\star}}{d}\right)^{1/3}(m_{\star}+1)^{\gamma_{\star}^{+}}\ \sqrt{d\ln\left(1+\frac{T_{\mathrm{bc}}(m_{\star}+L_{\star})(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{T_{\mathrm{bc}}(m_{\star}+1)^{2\gamma_{\star}^{+}}}{d\lambda}\right)}\ \right)\ .$$

Setting $\eta_j=T_{\mathrm{bc}}^{-2/3}$, and substituting in (26) with $R=(m_\star+1)^{\gamma^+_\star}$, we have that with high probability

$$\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}-\frac{\widetilde{r}(\mathbf{a}_{\tau}^{\mathrm{bc}})}{|\mathbf{a}_{\tau}^{\mathrm{bc}}|}=\widetilde{\mathcal{O}}\Big((C_{\star}^{3/2}+N)\,T_{\mathrm{bc}}^{2/3}+(m_{\star}+1)^{2\gamma_{\star}^{+}}\,T_{\mathrm{bc}}^{1/3}\Big)\,.$$

Now, recall that $T_{\mathrm{bc}}=O\big(\sqrt{d/m_\star}\,T^{3/4}\big)$, and that $C_\star=\widetilde{O}\big((m_\star+1)^{\frac{1}{3}+\gamma^+_\star}\,d^{2/3}\big)$. Hence, $N\le d\sqrt{m_\star}$ implies $N=O\big(C_\star^{3/2}\big)$, and $(m_\star+1)^{\gamma^+_\star}\le d^2\sqrt{m_\star T}$ implies $(m_\star+1)^{2\gamma_\star^{+}}T_{\mathrm{bc}}^{1/3}=O\big(C_\star^{3/2}T_{\mathrm{bc}}^{2/3}\big)$. Setting $\lambda\in[1,d]$ and $\delta=1/T$, we obtain

$$\mathbb{E}\left[\sum_{\tau=1}^{T_{\mathrm{bc}}}\frac{\widetilde{r}(\widetilde{\mathbf{a}})}{|\widetilde{\mathbf{a}}|}-\frac{\widetilde{r}(\mathbf{a}_{\tau}^{\mathrm{bc}})}{|\mathbf{a}_{\tau}^{\mathrm{bc}}|}\right]=\widetilde{\mathcal{O}}\Big(d\sqrt{m_{\star}}\,(m_{\star}+1)^{\frac{3}{2}\gamma_{\star}^{+}}\,T_{\mathrm{bc}}^{2/3}\Big)\,.\tag{28}$$

Let $m_\tau$ be the memory size associated to the bandit played at block time step $\tau$ by Algorithm 2. Let $m_{\min}=\min_j m_j$ and $m_{\max}=\max_j m_j$. Finally, let $L_{\min}$ and $L_{\max}$ be the (partial) block lengths associated with $m_{\min}$ and $m_{\max}$.
We have

$$\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\geq\sum_{\tau=1}^{T_{\mathrm{bc}}}\left(\widetilde{r}(\mathbf{a}_{\tau}^{\mathrm{bc}})-m_{\tau}\,(m_{\star}+1)^{\gamma_{\star}^{+}}\right)\geq\sum_{\tau=1}^{T_{\mathrm{bc}}}\widetilde{r}(\mathbf{a}_{\tau}^{\mathrm{bc}})-m_{\mathrm{max}}\,(m_{\star}+1)^{\gamma_{\star}^{+}}\,T_{\mathrm{bc}}\,,$$

such that by Lemma 1 and (28) we obtain

$$\mathbb{E}\left[\frac{\min_{\tau}|\mathbf{a}_{\tau}|}{\max_{\tau}|\mathbf{a}_{\tau}|}\left(\widetilde{r}(\widetilde{\mathbf{a}})\,\frac{\sum_{\tau}|\mathbf{a}_{\tau}|}{|\widetilde{\mathbf{a}}|}\right)-\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\right]\leq m_{\mathrm{max}}\,(m_{\star}+1)^{\gamma_{\star}^{+}}\,T_{\mathrm{bc}}+\min_{\tau}|\mathbf{a}_{\tau}|\;\widetilde{\mathcal{O}}\Big(d\sqrt{m_{\star}}\,(m_{\star}+1)^{\frac{3}{2}\gamma_{\star}^{+}}\,T_{\mathrm{bc}}^{2/3}\Big)\,,$$

hence

$$\mathbb{E}\left[\frac{m_{\mathrm{min}}+L_{\mathrm{min}}}{m_{\mathrm{max}}+L_{\mathrm{max}}}\left(\frac{L_{\star}\,\mathrm{OPT}}{T}\,\frac{T}{m_{\star}+L_{\star}}\right)-\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\right]\leq\frac{m_{\mathrm{max}}\,(m_{\star}+1)^{\gamma_{\star}^{+}}\,T}{m_{\mathrm{min}}+L_{\mathrm{min}}}+(m_{\mathrm{min}}+L_{\mathrm{min}})^{1/3}\,\widetilde{\mathcal{O}}\Big(d\sqrt{m_{\star}}\,(m_{\star}+1)^{\frac{3}{2}\gamma_{\star}^{+}}\,T^{2/3}\Big)\,,$$

and finally

$$\mathbb{E}\left[\sqrt{\frac{m_{\mathrm{min}}}{m_{\mathrm{max}}}}\,\mathrm{OPT}-\sum_{t=1}^{T}r_{t}^{\mathrm{bc}}\right]\leq\frac{m_{\mathrm{max}}}{m_{\mathrm{min}}}\sqrt{d\,m_{\star}}\,(m_{\star}+1)^{\gamma_{\star}^{+}}\,T^{3/4}+\widetilde{\mathcal{O}}\Big(d\,m_{\star}\,(m_{\star}+1)^{\frac{3}{2}\gamma_{\star}^{+}}\,T^{3/4}\Big)=\frac{m_{\mathrm{max}}}{m_{\mathrm{min}}}\,\widetilde{\mathcal{O}}\Big(d\,m_{\star}\,(m_{\star}+1)^{\frac{3}{2}\gamma_{\star}^{+}}\,T^{3/4}\Big)\,,$$

where we have used the fact that $m_{\min}+L_{\min}=\sqrt{m_{\min}/d}\,T^{1/4}$ and $m_{\max}+L_{\max}=\sqrt{m_{\max}/d}\,T^{1/4}$. Corollary 1 is obtained by setting $M=m_{\max}/m_{\min}$. □

## B Bandit Combiner

In this section we detail our adaptation of the numbers $C_j$ and target regrets $R_j$ for the Bandit Combiner (Algorithm 2), which builds on Cutkosky et al. (2020). For O3M$(m_j,\gamma_j)$, $j=1,\dots,N$, the numbers $C_j$ and target regrets $R_j$ are defined as

$$C_{j}=4\left(\frac{4m_{j}}{d}\right)^{1/3}(m_{j}+1)^{\gamma_{j}^{+}}\sqrt{d\ln\left(1+\frac{T_{\mathrm{bc}}(m_{j}+L_{j})(m_{j}+1)^{2\gamma_{j}^{+}}}{d\lambda}\right)}\cdot\left(\sqrt{\lambda}+\sqrt{\ln\left(\frac{1}{\delta}\right)+d\,\ln\left(1+\frac{T_{\mathrm{bc}}(m_{j}+1)^{2\gamma_{j}^{+}}}{d\lambda}\right)}\,\right)\,,\tag{29}$$

$$R_{j}=C_{j}T_{\mathrm{bc}}^{\alpha_{j}}+\frac{(1-\alpha_{j})^{\frac{1-\alpha_{j}}{\alpha_{j}}}(1+\alpha_{j})^{\frac{1}{\alpha_{j}}}}{\frac{1-\alpha_{j}}{\alpha_{j}}}\,C_{j}^{\frac{1}{\alpha_{j}}}\,T_{\mathrm{bc}}\,\eta_{j}^{\frac{1-\alpha_{j}}{\alpha_{j}}}+1152\,(m_{j}+1)^{2\gamma_{j}^{+}}\log(T_{\mathrm{bc}}^{3}N/\delta)\,T_{\mathrm{bc}}\,\eta_{j}+\sum_{k\neq j}\frac{1}{\eta_{k}}\,.$$

Note that the form of the target regret $R_j$ slightly differs from the one presented in (Cutkosky et al., 2020, Corollary 2) due to the different range of the rewards. The algorithm, which is an adaptation of the Bandit Combiner in Cutkosky et al. (2020), is summarized in Algorithm 2.

## C Additional Experiments

We provide an additional experiment comparing the regrets of O3M and OM-Block. In order to be able to plot the regret, we must know OPT, which is hard to compute in general. Since in the rising scenario with an isotropic initialization the optimal policy is oracle greedy, which is easy to compute, we present this experiment in a rising setting with $m=1$ and $\gamma=2$. We plot the regret of O3M and OM-Block against the number of time steps, measuring the performance at different time horizons and for different sizes of $L$ (where $L$ depends on $T$, see the end of Section 3.2). Specifically, we instantiated O3M and OM-Block for increasing values of $L$, setting the horizon of each instance based on the equations in Theorem 1 and Proposition 4.
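For concreteness, a small helper (illustrative only) reproducing the block-length choices used in this comparison: Theorem 1's formula for O3M, and, for OM-Block, the scaling obtained by balancing Proposition 4's estimation term against the approximation error of Proposition 2. The latter scaling is our reading of the discussion after Theorem 1, not a formula stated explicitly in the paper.

```python
import math

def block_length_o3m(T, m, d):
    """L from Theorem 1: L = sqrt(m/d) * T^{1/4} - m (rounded, at least 1)."""
    return max(1, round(math.sqrt(m / d) * T ** 0.25) - m)

def block_length_om_block(T, m, d):
    """Assumed scaling L ~ (m/d)^{2/5} * T^{1/5} from balancing Proposition 4
    with Proposition 2 (yields the d^{2/5}(m+1)^{3/5+gamma^+} T^{4/5} rate)."""
    return max(1, round((m / d) ** 0.4 * T ** 0.2))

print(block_length_o3m(10_000, m=1, d=3), block_length_om_block(10_000, m=1, d=3))
```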
Figure 3 shows how the dimension of $\widehat{\theta}$, which is $d$ for O3M and $d\times L$ for OM-Block, has an actual impact on performance, as O3M outperforms OM-Block. The code is written in Python and is publicly available at the following GitHub repository: Linear Bandits with Memory.

![25_image_0.png](25_image_0.png)

Figure 3: The regret of O3M and OM-Block. Each dot is a separate run where the value of $L$ is tuned to the corresponding horizon.
Review 1:
Summary: This paper studies a bandit problem where the observed (scalar) outcome of a given round depends on the latest $m+1$ actions. The authors propose an OFUL-based algorithm for this setting and provide an upper bound on the regret thereof against the optimal sequence of actions.

Strengths and Weaknesses: Why is the model in eq (1) and (2) a reasonable or justified way of doing things? Why model memory via a covariance matrix? Why not simply use a linear model that depends on the latest $m$ actions? E.g., $y_t = \sum_{i=0}^{m-1}\theta_{i}^\ast a_{t-i} + \eta_t$, which can be recast as a standard linear model ($m=1$) in a higher-dimensional space. One can then simply invoke the technical machinery already developed for linear bandits to essentially "solve" this richer setting. Even if there are computational/statistical benefits to modeling memory via a covariance matrix, I do not see how it is a natural or intuitive way to do it compared to the simple alternative described above. The paper currently does not provide any background/motivation on this, and I think the authors need to better explain why their model is reasonable/justified.

In addition to the above, the "rotting/rising" settings need better explanation and motivation. Currently, they are simply passed off as settings with a positive or negative $\gamma$. Further, this model is, in fact, not linear, as the outcome depends non-linearly on past and present actions unless $\gamma=0$. The current version of the paper lacks a coherent example/motivation to ground the model. The mathematics is fine, but the reader is likely to walk away with the impression that the paper analyzes and provides results for an esoteric model that is a strict generalization of linear bandits and a partial generalization of some other models. The current positioning and motivation leave much to be desired.

Requested Changes: Please see the "Strengths And Weaknesses" section.
Broader Impact Concerns: Not applicable.
==================================================
Review 2:
Summary: The authors propose a new linear bandit framework, LBM, where the reward at time $t$ is affected by all the actions within the window $[t-m, t]$. This framework generalizes stationary linear bandits, rising and rotting rested bandits, and cyclic bandits. Under this framework, the authors first propose a variant of the OFUL algorithm with known parameter $m$. Then they propose a parameter-free algorithm by using a meta-algorithm to select the right model, at the cost of an additional $M\sqrt{d(m+1)^{1+\gamma}}$ multiplicative term in the final bound.

Strengths and Weaknesses: Strengths:
1. The authors propose a novel bandit framework that generalizes several structured nonstationary bandit problems, including stationary linear bandits, rising and rotting rested bandits, and cyclic bandits; it can also capture a special case of MDPs.
2. I also like the way they explain this problem step by step. It is quite clear. For example, in Section 3.1 they discuss the influence of the initial $m$ conditions, and in Sections 3.1 and 3.2 they give a clear explanation of the trade-off on $L$ between long-term planning and infrequent updating.
3. Their results, although possibly not optimal, are quite complete for the first paper proposing this setting.

Weaknesses:
1. It is unclear to me whether this solution is optimal; it seems a rather direct solution.
2.
It might be interesting to discuss instance-dependent bounds under this setting. But overall, I think the results are sufficient for the first paper under this framework. Requested Changes: The over-optimistic part (O3M) is a little unclear to me. I can understand that it has the same proofs as OM. Do you mention it because it is easier to implement, or because it has better empirical results? Broader Impact Concerns: NA ================================================== Review 3: Summary: In short, this paper is a linear bandit version of the rotting/rising bandit. The authors assume that the reward is non-stationary and depends on the previous decisions, especially on the memory matrix. The authors prove that in this sliding-window setting, the optimal policy can be tightly approximated by the optimal cyclic policy, and using this idea they propose a variant of OFUL that minimizes the regret against cyclic policies. This result holds for both rotting and rising bandits, which are determined by the sign of the exponent $\gamma$. Moreover, using a model selection approach, they extend their method to all the positive and negative cases. Strengths and Weaknesses: Strengths - Novelty: It is a natural step to extend the analysis of K-armed bandits to linear bandits. However, I believe this paper properly defines and interprets the concept of rising/rotting in the linear bandit setting. I think this is a topic someone should deal with someday, and the authors made a proper analysis. - Quality: I want to highlight three main techniques. 1) $\langle a_t, A_{t-1} \theta^* \rangle = \langle A_{t-1} a_t, \theta^* \rangle$, so the researchers don't actually need to worry about the 'changing hidden parameter' $A_{t-1} \theta^*$; one can take this rising/rotting environment as a 'changing arm set' environment where the environment changes based on the learner's choices and the learner fully understands their effect. I think it is a simple idea that researchers who have deeply studied this field may find easily, but at least for me, it is a result that gives a fresh intuition. 2) Instead of directly dealing with the 'true optimal sequence', they proved in Propositions 2 and 3 that one can use the 'cyclic best action' as the alternative to the true optimal sequence, since 'true optimal' and 'cyclic best' are tight. This is an interesting way to detour something difficult to compute. Though I haven't fully read the details of the proof, it is an interesting result for me if it is true. 3) It was surprising that even if the learner knew $\theta^*$ beforehand, the 'oracle greedy policy' could have linear regret. It was very counter-intuitive. I haven't read the proof in detail, but if it is true then it definitely adds value to this paper. - Clarity: The authors tried to deliver the main ideas as straightforwardly as possible. Though it is not crystal clear to me yet, at least I could understand the mathematical structure they constructed for their analysis. Plus, I personally like Figure 1, which they added to help readers understand better. Weaknesses - Novelty: (super minor) Using OFUL, the traditional approach for the linear bandit, is a somewhat predictable approach. - Clarity: I believe they need to explain some parts more clearly. 1) I hope they explain more about the ease of the 'infinite memory case.' I know they mentioned it on Page 5, but I believe they also need to explain it for the rising bandit case.
It is not straightforward to me how the rising case will also be easy. 2) It is not clear how easy it is to find the best batch of actions $a_\tau$. Though they mentioned that they found the UCB indices by Gradient Ascent (Page 10, computational complexity), I think it would be great if the authors also provided more explanation of why computing the blocks $a_{\tau}$ is easy (is it a convex problem or something similar? I am not sure it is that easy a problem). Requested Changes: - I believe the authors should also justify the scale of the effect of the history. I mean, the authors assume $A_t = I + \sum a_s a_s^\top$, but it is also natural to ask about the case $A_t = I + \alpha \sum a_s a_s^\top$ for some constant $\alpha$ and the effect of this constant $\alpha$. - I need more explanation to support why 'finite memory' is natural. I read their argument on Page 5 that using the well-known Elliptic Potential Lemma, it becomes a trivial problem. However, I believe the tendencies of rotting ($\gamma <0$) and rising ($\gamma>0$) are quite different, as they demonstrated; I need some valid explanation. - It is not clear how the authors 'optimized' the sequence of batch actions. Though they mentioned that they found the UCB indices by Gradient Ascent (Page 10, computational complexity), I think it would be great if the authors also provided more explanation of why computing the block $a_{\tau}$ is easy (is it a convex problem or something similar? I am not sure it is actually that easy a problem). - (Minor) What happens when the memory is not a sliding window, but a diminishing one? There are two representative ways of expressing the limitation of memory. One is the sliding window, which the authors used in this paper, and the other is using discount factors to express the exponential decay of the memory. I think it would be great if the authors could also explain what happens with discount factors instead of this sliding-window structure. Broader Impact Concerns: There are no ethical concerns we need to worry about in this paper. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper introduces a new nonstationary linear bandit model to capture situations where rewards depend on the learner's past actions in a fixed-length sliding window. This model (approximately) recovers a few previously studied bandit problems. Weaknesses include the lack of a matching lower bound and a somewhat limited class of polynomial nonstationary reward functions. Overall, reviewers found the work to be technically sound and to contribute meaningfully to the bandit literature. Some of the technical results are also interesting, such as using the cyclic policy as a reference as part of the analysis. Please use the reviews to revise the paper, including strengthening the motivation, among other things. In addition, can you elaborate further on why a finite m avoids trivializing the theory? The current argument relies on gamma being sufficiently small. If gamma is large, does an infinite m still trivialize the theory? ==================================================
# On The Adversarial Robustness Of Camera-Based 3D Object Detection

Shaoyuan Xie shaoyux@uci.edu Department of Computer Science University of California, Irvine Zichao Li *zli489@ucsc.edu* Department of Computer Science and Engineering University of California, Santa Cruz Zeyu Wang zwang615@ucsc.edu Department of Computer Science and Engineering University of California, Santa Cruz Cihang Xie *cixie@ucsc.edu* Department of Computer Science and Engineering University of California, Santa Cruz Reviewed on OpenReview: *https://openreview.net/forum?id=6SofFlwhEv*

## Abstract

In recent years, camera-based 3D object detection has gained widespread attention for its ability to achieve high performance with low computational cost. However, the robustness of these methods to adversarial attacks has not been thoroughly examined, especially when considering their deployment in safety-critical domains like autonomous driving. In this study, we conduct the first comprehensive investigation of the robustness of leading camera-based 3D object detection approaches under various adversarial conditions. We systematically analyze the resilience of these models under two attack settings: white-box and black-box, focusing on two primary objectives: classification and localization. Additionally, we delve into two types of adversarial attack techniques: pixel-based and patch-based. Our experiments yield four interesting findings: (a) bird's-eye-view-based representations exhibit stronger robustness against localization attacks; (b) depth-estimation-free approaches have the potential to show stronger robustness; (c) accurate depth estimation effectively improves robustness for depth-estimation-based methods; (d) incorporating multi-frame benign inputs can effectively mitigate adversarial attacks. We hope our findings can steer the development of future camera-based object detection models with enhanced adversarial robustness. The code is available at: https://github.com/Daniel-xsy/BEV-Attack.

## 1 Introduction

Deep neural network-based 3D object detectors (Li et al., 2022b; Wang et al., 2022b; Huang et al., 2021; Liu et al., 2022a; Wang et al., 2022a; 2021; Lang et al., 2019; Vora et al., 2020; Zhou & Tuzel, 2018; Yan et al., 2018) have demonstrated promising performance across multiple challenging real-world benchmarks, including the KITTI (Geiger et al., 2012), nuScenes (Caesar et al., 2020) and Waymo Open Dataset (Sun et al., 2020). These popular approaches utilize either point clouds (*i.e.*, LiDAR-based methods) (Lang et al., 2019; Vora et al., 2020; Zhou & Tuzel, 2018; Yan et al., 2018) or images (*i.e.*, camera-based methods) (Wang et al., 2021; 2022a;b; Li et al., 2022b;a; Huang et al., 2021; Liu et al., 2022a) as their inputs for object detection. Compared to LiDAR-based methods, camera-based approaches have garnered significant attention due to their low deployment cost, high computational efficiency, and dense semantic information. Additionally, camera-based detection exhibits inherent advantages in detecting long-range objects and identifying vision-based traffic signs.

![1_image_0.png](1_image_0.png) Figure 1: Adversarial nuScenes Detection Score (NDS) *v.s.* clean nuScenes Detection Score. Models that exhibit better performance on standard datasets do not necessarily exhibit better adversarial robustness.

Monocular 3D object detection expands the capabilities of 2D object detection to 3D scenarios using carefully designed custom adaptations (Wang et al., 2021; 2022a).
However, accurately estimating depth from a single image is challenging, often hindering the efficacy of monocular 3D object detection (Wang et al., 2022a). In contrast, using the bird's eye view (BEV) representation for 3D detection offers several advantages. First, it allows for joint learning from multi-view images. Second, the BEV perspective provides a physics-interpretable method for fusing information from different sensors and time stamps (Ma et al., 2022). Third, the output space of a BEV-based approach can be easily applied to downstream tasks such as prediction and planning. Consequently, BEV-based models have demonstrated significant improvements (Li et al., 2022a;b; Huang et al., 2021; Huang & Huang, 2022). Despite the advancements achieved in 3D object detection algorithms, recent literature (Rossolini et al., 2022; Cao et al., 2021; Tu et al., 2020) has begun to highlight their susceptibility to adversarial attacks. Such vulnerabilities pose significant safety risks, particularly when these algorithms are deployed in safety-critical applications. Nevertheless, existing studies primarily concentrate on generating adversarial examples in limited scenarios, thereby failing to provide a comprehensive evaluation across a broader spectrum of adversarial settings and models. Motivated by this gap in the literature, we aim to conduct a thorough and systematic analysis of the robustness of various state-of-the-art 3D object detection methods against adversarial attacks, while also investigating avenues to bolster their resilience. Our investigation includes a spectrum of attack settings: pixel-based (introducing subtle perturbations to inputs) and patch-based (overlaying discernible adversarial designs onto inputs) adversarial examples, in white-box and black-box (whether information about the model is available to the attacker) setups. Our focus is on two main attack goals: misleading classification predictions and misleading localization predictions. Regarding pixel attacks, we apply the widely-used projected gradient descent (PGD) algorithm (Madry et al., 2017). To differentiate this attack algorithm from the 3D detection method known as Probabilistic and Geometric Depth (Wang et al., 2022a), we refer to the former as *PGD-Adv* and the latter as *PGD-Det* in the rest of the paper. To further enhance the comprehensiveness of our work, we additionally use FGSM (Goodfellow et al., 2014), C&W Attack (Carlini & Wagner, 2017) and AutoPGD Attack (Croce & Hein, 2020). Regarding patch attacks, we incorporate a gradient-descent-optimized patch (Brown et al., 2017) centrally onto the target objects, adjusting its size according to the object size. Additionally, we probe the efficacy of universal patches, known for their strong transferability across varied scenes, scales, and model architectures. Overall, our experiments interestingly show that models that perform better on standard datasets do not necessarily yield stronger adversarial robustness, as shown in Fig. 1. We distill our key findings as follows:

- BEV-based models do not exhibit stronger robustness under classification attacks. However, they tend to be more robust toward localization attacks.
- Precise depth estimation is crucial for models that rely on depth information to transform the perspective view to the bird's eye view (PV2BEV). The incorporation of explicit depth supervision during training, as well as prior knowledge of depth constraints, can lead to improved performance and stronger robustness.
- While depth-estimation-free methods have achieved state-of-the-art performance with clean inputs (Wang et al., 2022b; Li et al., 2022b; Liu et al., 2022a), we further find that they have the potential to yield stronger robustness compared to depth-estimation-based ones.
- Adversarial effects can be mitigated through clean temporal information. Models utilizing multi-frame benign inputs are less likely to fail under single-frame attacks. However, it is important to note that errors can accumulate under continuous adversarial input over multiple frames.

## 2 Related Work

Camera-based 3D object detection. Existing camera-based 3D object detection methods can be broadly categorized into two groups: monocular-based approaches (Wang et al., 2021; 2022a) and multi-view image input bird's eye view (BEV) representation-based approaches (Li et al., 2022b; Huang et al., 2021; Huang & Huang, 2022; Li et al., 2022a; Wang et al., 2022b; Liu et al., 2022a). Monocular-based approaches, such as FCOS3D and PGD-Det (Wang et al., 2021; 2022a), extend FCOS (Tian et al., 2019) to the 3D domain through specific adaptations. BEV-based detectors perform PV2BEV and build BEV representations to conduct perception tasks. Inspired by LSS (Philion & Fidler, 2020), BEVDet (Huang et al., 2021) uses an additional depth estimation branch for the PV2BEV transformation. BEVDet4D (Huang & Huang, 2022) further improves performance by leveraging temporal information. BEVDepth (Li et al., 2022a) improves depth estimation accuracy through explicit depth supervision from point clouds. Given that inaccurate depth estimation is the main bottleneck of the above approaches, recent works explore pipelines without an explicit depth estimation branch. DETR3D (Wang et al., 2022b) represents 3D objects as object queries and performs cross-attention using a Transformer decoder (Vaswani et al., 2017). PETR (Liu et al., 2022a;b) further improves performance by proposing 3D position-aware representations. BEVFormer (Li et al., 2022b) introduces temporal cross-attention to extract BEV representations from multi-timestamp images. While these models show consistent improvement on the standard dataset, their behaviors under adversarial attacks have not been thoroughly examined, which could raise profound concerns, especially considering their potential deployment in safety-critical applications, *e.g.*, autonomous driving.

Adversarial attacks on classification. Modern neural networks are susceptible to adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017), where the addition of a carefully crafted perturbation to the input can cause the network to make an incorrect prediction. (Goodfellow et al., 2014) proposes a simple and efficient method for generating adversarial examples using one-step gradient descent. (Madry et al., 2017) proposes more powerful attacks (*i.e.*, PGD-Adv) by taking multiple steps along the gradients and projecting the perturbation back onto an Lp norm ball. (Moosavi-Dezfooli et al., 2017) demonstrates the existence of universal adversarial perturbations. (Brown et al., 2017) generates physical adversarial patches. (Wu et al., 2021) addresses adversarial robustness in the context of long-tailed distribution recognition tasks. In addition to developing more powerful attacks, some works focus on understanding the robustness of different neural architecture designs to attacks.
(Shao et al., 2021; Bai et al., 2021) conduct extensive comparisons between CNNs and Transformers and gain insights into their adversarial robustness. Different from these works, which focus on classification problems, our research pivots to detection tasks.

Adversarial attacks on object detection. Adversarial attacks for object detection can target both localization and classification. In the context of 2D object detection, (Xie et al., 2017) generates adversarial examples with strong transferability by considering all targets densely. (Liu et al., 2018) proposes black-box patch attacks that can compromise the performance of popular frameworks such as Faster R-CNN (Ren et al., 2015). Given the importance of safety in autonomous driving, it is vital to study the adversarial robustness of 3D object detection. (Tu et al., 2020) crafts an adversarial mesh placed on top of a vehicle to bypass a LiDAR detector. (Rossolini et al., 2022) studies digital, simulated, and physical patches to mislead real-time semantic segmentation models. (Cao et al., 2021) reveals the possibility of crashing Multi-Sensor Fusion (MSF) based models by attacking all fusion sources simultaneously. Despite the above works toward designing more powerful attacks, a comprehensive understanding of the adversarial robustness of camera-based 3D object detection is still lacking. Though concurrent work (Zhu et al., 2023) also explores the adversarial robustness of 3D detectors, it studies far fewer models. Our research represents a pioneering effort to systematically bridge this knowledge gap.

## 3 Camera-Based 3D Object Detection

In this section, we provide an overview of the current leading approaches in camera-based 3D object detection, which can be broadly classified into three categories: monocular-based detectors, BEV detectors with depth estimation, and BEV detectors without depth estimation.

## 3.1 Monocular Approach

This line of research aims to directly predict 3D targets from an image input. We select FCOS3D (Wang et al., 2021) and PGD-Det (Wang et al., 2022a) as representative works to study their adversarial robustness. FCOS3D extends FCOS (Tian et al., 2019) to the 3D domain by transforming 3D targets to the image domain. PGD-Det further improves the performance of FCOS3D by incorporating uncertainty modeling and constructing a depth propagation graph that leverages the interdependence between instances.

## 3.2 BEV Detector With Depth Estimation

This line of work first predicts a per-pixel depth map, mapping image features to corresponding 3D locations, and subsequently predicts 3D targets in the BEV representations. Building on the success of the BEV paradigm in semantic segmentation, BEVDet (Huang et al., 2021) develops the first high-performance BEV detector based on the Lift-Splat-Shoot (LSS) view transformer (Philion & Fidler, 2020). Subsequently, BEVDet4D (Huang & Huang, 2022) introduces multi-frame fusion to improve the effectiveness of temporal cue learning. BEVDepth (Li et al., 2022a) proposes to use point cloud projection onto the image plane as direct supervision for depth estimation. Note that this approach can also incorporate temporal fusion, which we refer to as BEVDepth4D. We hereby aim to evaluate the robustness of these BEV models, ranging from the most basic detector (*i.e.*, BEVDet) to spatial (Depth) and temporal (4D) extensions, against attacks.
## 3.3 BEV Detector Without Depth Estimation

In this set of works, trainable sparse object queries are utilized to aggregate image features without the need for depth estimation. Representative exemplars include DETR3D (Wang et al., 2022b), which connects 2D feature extraction and 3D bounding box prediction through backward geometric projection; PETR (Liu et al., 2022a;b), which enhances 2D features with 3D position-aware representations; and BEVFormer (Li et al., 2022b), which refines BEV queries using spatial and temporal cross-attention mechanisms. These approaches claim not to suffer from inaccurate intermediate depth estimation and thus achieve superior performance. Our research takes a step further, probing the robustness of these approaches when subjected to adversarial attacks.

## 4 Generating Adversarial Examples

In this section, we present our adversarial example generation algorithms. It is essential to note that this paper primarily focuses on understanding model robustness to attacks, rather than on the development of new attack algorithms. As such, our approach adapts established 2D adversarial attacks (Xie et al., 2017; Moosavi-Dezfooli et al., 2017; Madry et al., 2017; Goodfellow et al., 2014; Carlini & Wagner, 2017; Croce & Hein, 2020) for the 3D context, incorporating essential modifications to ensure compatibility. Specifically, we consider three attack settings: pixel-based white-box attacks (Madry et al., 2017; Xie et al., 2017), patch-based white-box attacks (Liu et al., 2018), and universal patch black-box attacks (Moosavi-Dezfooli et al., 2017). In the context of pixel-based and patch-based white-box attacks, we utilize two adversarial targets, namely untargeted classification attacks and localization attacks. For patch-based black-box attacks, we focus solely on targeted classification. The summary of these attacks is presented in Tab. 1.

Table 1: A summary of the five different attack settings implemented to examine the model robustness.

| White-box / Black-box | Pixel / Patch | Objective |
|-------------------------|-----------------|---------------------------|
| White-box | Pixel | Untargeted Classification |
| White-box | Pixel | Localization |
| White-box | Patch | Untargeted Classification |
| White-box | Patch | Localization |
| Black-box | Patch | Targeted Classification |

## 4.1 Pixel-Based Attack

Inspired by the approach in (Xie et al., 2017), we optimize the generation of adversarial examples over a set of targets. Let $\mathbf{I} \in \mathbb{R}^{C\times H\times W}$ be an input image, comprising N targets given by $T = \{t_1, t_2, t_3, ..., t_N\}$. By feeding the image $\mathbf{I}$ into 3D object detectors, we obtain n perception results, capturing class, 3D bounding boxes, and other attributes, represented as $f(\mathbf{I}) = \{y_1, y_2, y_3, ..., y_n\}$. Here, each $y_i$ symbolizes a discrete detection attribute such as localization, class, velocity, etc. We then compare these predictions with the ground truth bounding boxes $T$, establishing a match when the 2D center distances on the ground plane are under a predefined threshold, as employed in (Caesar et al., 2020).
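To make this matching step concrete, below is a minimal sketch, assuming predictions and ground truths are given as arrays of 2D ground-plane centers; the greedy strategy and the helper name `match_predictions` are our own illustration, and the default radius is only an illustrative choice in the spirit of the nuScenes center-distance thresholds (Caesar et al., 2020).

```python
import numpy as np

def match_predictions(pred_centers, gt_centers, threshold=2.0):
    """Greedy matching of predictions to ground truths by 2D center distance.

    pred_centers: (n, 2) array of predicted (x, y) ground-plane centers.
    gt_centers:   (N, 2) array of ground-truth (x, y) ground-plane centers.
    threshold:    match radius in meters (illustrative default).
    Returns (pred_idx, gt_idx) pairs; each ground truth is used at most once.
    """
    matches, used = [], set()
    for i, p in enumerate(pred_centers):
        dists = np.linalg.norm(gt_centers - p, axis=1)
        for j in np.argsort(dists):          # try the closest ground truth first
            if dists[j] >= threshold:
                break                        # no ground truth within the radius
            if int(j) not in used:
                matches.append((i, int(j)))
                used.add(int(j))
                break
    return matches
```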
The goal of adversarial examples is to intentionally produce erroneous predictions. For instance, in classification attacks, the objective is to manipulate the model into predicting an incorrect class, denoted as $f_{cls}(\mathbf{I}+\mathbf{r}, t_i) \neq l_i$, where $f_{cls}(\mathbf{I}+\mathbf{r}, t_i)$ signifies the classification result on the $i$-th target, $l_i$ represents its ground-truth classification label, and $\mathbf{r}$ denotes the adversarial perturbation. To accomplish this, we employ untargeted attacks, aiming to maximize the cross-entropy loss:

$$\mathcal{L}_{untargeted}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C}f_{cls}^{j}(\mathbf{I}+\mathbf{r},t_{i})\log p_{ij},\tag{1}$$

where $C$ denotes the number of classes, and $f_{cls}^{j}$ denotes the confidence score on the $j$-th class. The adversarial perturbation $\mathbf{r}$ is optimized iteratively using PGD-Adv (Madry et al., 2017) as:

$$\mathbf{r}_{i+1}=\mathrm{Proj}_{\epsilon}\big(\mathbf{r}_{i}+\alpha\,\mathrm{sgn}(\nabla_{\mathbf{I}+\mathbf{r}_{i}}\mathcal{L})\big).\tag{2}$$

To facilitate an equitable comparison, the confidence scores undergo normalization within the range [0, 1] by using the sigmoid function, which mitigates sensitivity to unbounded logit ranges (Wu et al., 2021). Maximizing $\mathcal{L}_{untargeted}$ can be achieved by making every target incorrectly predicted. For targeted attacks, we instead specify an adversarial label $l'_i \neq l_i$ for each target and minimize the following objective:

$$\mathcal{L}_{targeted}=\frac{1}{N}\sum_{i=1}^{N}\big[f_{cls}^{l'_{i}}(\mathbf{I}+\mathbf{r},t_{i})-f_{cls}^{l_{i}}(\mathbf{I}+\mathbf{r},t_{i})\big].\tag{3}$$

![5_image_0.png](5_image_0.png) Figure 2: Illustration of adversarial patch size adaptations, wherein the patch size is adjusted proportionally to the target's 2D bounding box dimensions. The left panel depicts a fixed-size patch, while the right panel presents a dynamically scaled patch.

To attack the localization and other attributes, we adopt the straightforward L1 loss as the objective function, finding this method adequately effective:

$$\mathcal{L}_{localization}=\frac{1}{N}\sum_{i=1}^{N}\big(||f_{loc}(\mathbf{I}+\mathbf{r},t_{i})-loc_{i}||_{1}+||f_{orie}(\mathbf{I}+\mathbf{r},t_{i})-orie_{i}||_{1}+||f_{vel}(\mathbf{I}+\mathbf{r},t_{i})-vel_{i}||_{1}\big).\tag{4}$$

We further enhance our analysis by incorporating FGSM (Goodfellow et al., 2014), C&W Attack (Carlini & Wagner, 2017), and a stronger attacking method, AutoAttack (Croce & Hein, 2020). AutoAttack was originally designed for image classification tasks and cannot be applied to object detection tasks directly. Therefore, we employ AutoPGD, a component of AutoAttack, as a more potent attack strategy. Our implementation rigorously adheres to the configurations specified in the foundational AutoAttack paper. The enhancements AutoPGD introduces compared with the original PGD-Adv include the following modifications: (a) integration of momentum during the update process; (b) introduction of a dynamic step size that adjusts in accordance with the optimization progress; (c) implementation of a restart mechanism from the most effective attack points; (d) balancing exploration and exploitation through the utilization of checkpoints. Note that implementing pixel-based attacks in real-world scenarios is challenging because it requires altering camera-captured images in real time. However, considering this type of attack is still crucial, particularly when attackers possess full knowledge of the model and can engage with real-time systems. Moreover, we can glean insights into the robustness of 3D object detectors in this adversarial setting.
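To make the update in Eq. (2) concrete, here is a minimal PyTorch sketch of the PGD-Adv inner loop; `adv_objective` is a placeholder for the matched-target loss of Eq. (1) or Eq. (4), not a function from the released code, and the paper's ϵ = 5 and α = 0.1 (on the 0–255 pixel scale) are rescaled here for inputs in [0, 1].

```python
import torch

def pgd_adv(image, adv_objective, eps=5 / 255, alpha=0.1 / 255, steps=50):
    """L-infinity PGD-Adv (Madry et al., 2017) on one input image.

    image:         tensor of shape (C, H, W) with values in [0, 1].
    adv_objective: callable mapping a perturbed image to the scalar loss
                   to be maximized (e.g., Eq. (1) over matched targets).
    """
    r = torch.empty_like(image).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        r.requires_grad_(True)
        loss = adv_objective(image + r)
        (grad,) = torch.autograd.grad(loss, r)
        with torch.no_grad():
            r = r + alpha * grad.sign()              # gradient ascent step
            r = r.clamp(-eps, eps)                   # project onto the eps-ball
            r = (image + r).clamp(0, 1) - image      # keep pixel values valid
    return (image + r).detach()
```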
## 4.2 Patch-Based Attack

Following (Liu et al., 2018; Rossolini et al., 2022), we next turn our attention to patch-based adversarial attacks. Considering a target within a 3D bounding box, it can be characterized by its eight vertices and a central point, collectively denoted as $\{c_0, c_1, ..., c_8\}$ with $c_i \in \mathbb{R}^3$. Leveraging the camera parameters, we project these 3D points to 2D points on the image plane, yielding the transformed set $\{c'_0, c'_1, ..., c'_8\}$. We set the size of the adversarial patch to be proportional to the size of the rectangle formed by these 2D points, and strategically position it to be centered at point $c'_0$, as illustrated in Fig. 2. Note that the adversarial loss objectives for the patch-based attack remain consistent with those detailed in Sec. 4.1.
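As an illustration of this placement rule, the following sketch projects the nine characteristic points with a pinhole camera model and derives the patch region; the exact proportionality rule (here, the larger side of the 2D rectangle times a scale factor) is our assumption rather than a detail stated in the paper.

```python
import numpy as np

def patch_region(points_3d, intrinsic, scale=0.3):
    """Project a box's 3D center and corners to the image and size the patch.

    points_3d: (9, 3) array in camera coordinates, the center point first,
               followed by the eight bounding-box vertices.
    intrinsic: (3, 3) camera intrinsic matrix.
    scale:     patch side length relative to the projected 2D rectangle.
    Returns the 2D patch center (x, y) and the patch side length in pixels.
    """
    proj = (intrinsic @ points_3d.T).T                 # (9, 3) homogeneous coords
    pts_2d = proj[:, :2] / proj[:, 2:3]                # perspective division
    center_2d = pts_2d[0]                              # projected box center c'_0
    width = pts_2d[:, 0].max() - pts_2d[:, 0].min()    # 2D rectangle extent
    height = pts_2d[:, 1].max() - pts_2d[:, 1].min()
    side = scale * max(width, height)                  # patch grows with the object
    return center_2d, side
```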
## 4.3 Black-Box Attack

Building on the concept of universal adversarial perturbations in classification tasks, as extensively explored by (Moosavi-Dezfooli et al., 2017), we delve into the potential existence of universal adversarial patches specific to 3D object detection tasks. We start by defining and randomly initializing a fixed-size patch, which is then superimposed at the center of the object, as described in Sec. 4.2. We employ bilinear interpolation to resize this patch. During the training phase, we optimize the universal patch over a wide range of images using the Adam optimizer (Kingma & Ba, 2014), following the recommendations of (Moosavi-Dezfooli et al., 2017). In the testing phase, we apply the generated patch to unseen images and evaluate its performance across various network architectures. The overall training pipeline for this approach is presented in Fig. 4(b), which simulates scenarios where attackers operate without detailed model knowledge or system access.

Table 2: Overall results: Clean results are evaluated on the nuScenes validation set, and adversarial results are evaluated on the mini subset. The adversarial NDS is averaged over all attack types and severities. †: trained with CBGS (Zhu et al., 2019), §: re-trained models with minimal modification since there is no publicly available checkpoint. **BEV**: BEV-based representations. **Depth**: Explicit depth estimation.

| Models | Image Size | #param | BEV | Depth | Temporal | clean NDS | Adv NDS | clean mAP | Adv mAP |
|---|---|---|---|---|---|---|---|---|---|
| BEVFormer-Small | 1280 × 720 | 59.6M | ✓ | | | 0.2623 | 0.1315 | 0.1324 | 0.0567 |
| BEVFormer-Base | 1600 × 900 | 69.1M | ✓ | | | 0.4128 | 0.1585 | 0.3461 | 0.0833 |
| DETR3D | 1600 × 900 | 53.8M | ✓ | | | 0.4223 | 0.1758 | 0.3469 | 0.1081 |
| DETR3D† | 1600 × 900 | 53.8M | ✓ | | | 0.4342 | 0.1953 | 0.3494 | 0.1126 |
| PETR-R50 | 1408 × 512 | 38.1M | ✓ | | | 0.3667 | 0.1193 | 0.3174 | 0.0641 |
| PETR-VovNet | 1600 × 640 | 83.1M | ✓ | | | 0.4550 | 0.1529 | 0.4035 | 0.0838 |
| BEVDepth-R50† | 704 × 257 | 53.1M | ✓ | ✓ | | 0.4057 | 0.1493 | 0.3327 | 0.0923 |
| BEVDepth-R101†§ | 704 × 257 | 72.1M | ✓ | ✓ | | 0.4167 | 0.1533 | 0.3376 | 0.1007 |
| BEVDet-R50† | 704 × 257 | 48.2M | ✓ | ✓ | | 0.3770 | 0.1069 | 0.2987 | 0.0634 |
| BEVDet-R101†§ | 704 × 257 | 67.2M | ✓ | ✓ | | 0.3864 | 0.1267 | 0.3021 | 0.0754 |
| BEVDet-Swin-Tiny† | 704 × 257 | 55.9M | ✓ | ✓ | | 0.4037 | 0.1074 | 0.3080 | 0.0635 |
| FCOS3D | 1600 × 900 | 55.1M | - | | | 0.3949 | 0.1339 | 0.3214 | 0.0714 |
| PGD-Det | 1600 × 900 | 56.2M | - | | | 0.4089 | 0.1441 | 0.3360 | 0.0843 |
| BEVFormer-Small | 1280 × 720 | 59.6M | ✓ | | ✓ | 0.4786 | 0.1593 | 0.3699 | 0.1007 |
| BEVFormer-Base | 1600 × 900 | 69.1M | ✓ | | ✓ | 0.5176 | 0.1445 | 0.4167 | 0.0846 |
| BEVDepth4D-R50† | 704 × 257 | 53.4M | ✓ | ✓ | ✓ | 0.4844 | 0.2144 | 0.3609 | 0.1211 |
| BEVDet4D-R50† | 704 × 257 | 48.2M | ✓ | ✓ | ✓ | 0.4570 | 0.1586 | 0.3215 | 0.0770 |

## 5 Experiments

## 5.1 Experimental Setup

To thoroughly assess the model performance, we evaluate both the clean performance and adversarial robustness using the nuScenes dataset. Given the substantial computational resources required for a full dataset evaluation, we opt for the nuScenes-mini dataset when probing adversarial robustness. We report two metrics, Mean Average Precision (mAP) and nuScenes Detection Score (NDS) (Caesar et al., 2020), in our experiments and discussions. For candidate models, wherever feasible, we use the official model configurations and publicly available checkpoints provided by open-sourced repositories; furthermore, we also train additional models with minimal modifications to facilitate experiments in controlled environments. A holistic robustness evaluation necessitates examining models across varying degrees of attack intensity. Consequently, we introduce multiple severity levels for each attack, by increasing the iteration count for pixel-based attacks or by adjusting the size of the adversarial patch in patch-based attacks. For the detailed performance across these attack severities, interested readers are directed to the Appendix. Detailed parameter configurations for each type of attack are provided below:

Pixel-based Attacks. We evaluate pixel-based adversarial attacks using perturbations under the L∞ norm. Our experiment setup fixes the maximum perturbation value at ϵ = 5 and the step size at α = 0.1. The process begins with the introduction of Gaussian noise to randomly perturb input images. Subsequently, we progressively increase the number of attack iterations, ranging from 1 to 50, for both untargeted and localization attacks. The iteration halts if no prediction results align with the ground truth. For localization attacks, we adjust the adversarial objective to the L1 loss of the localization, orientation, and velocity predictions while keeping all other settings unchanged. Given that the nuScenes dataset contains six images for every scene with minimal overlap, our attacks target individual cameras. For the AutoPGD attack, we use 10 iterations, a momentum of 0.75, and an initial step size of 0.2ϵ.

Patch-based Attacks. The initial patch pattern is generated using a Gaussian distribution whose mean and variance match those of the dataset. The attack step size is set to α = 5 and the iteration number is kept at 50. The patch scale is incrementally increased from 0.1 to 0.4.

![7_image_0.png](7_image_0.png) Figure 3: Mean Average Precision (mAP) value *v.s.* attack iterations. Models behave similarly under untargeted classification attacks while varying largely under localization attacks: all models are similarly vulnerable to untargeted attacks, while BEV-based models exhibit better robustness toward localization attacks.

Table 3: Overall results: The adversarial NDS is averaged over all attack severities. †: trained with CBGS (Zhu et al., 2019), §: re-trained models with minimal modification since there is no publicly available checkpoint. **BEV**: BEV-based representations. **Depth**: Explicit depth estimation. #: Using temporal modeling.
| Models | pix-cls NDS | pix-cls mAP | pix-loc NDS | pix-loc mAP | patch-cls NDS | patch-cls mAP | patch-loc NDS | patch-loc mAP | black-box NDS | black-box mAP |
|---|---|---|---|---|---|---|---|---|---|---|
| BEVFormer-Small | 0.1170 | 0.0284 | 0.1310 | 0.0836 | 0.1428 | 0.0425 | 0.1720 | 0.1096 | - | - |
| BEVFormer-Base | 0.1562 | 0.0621 | 0.1390 | 0.0892 | 0.1775 | 0.0713 | 0.1910 | 0.1560 | - | - |
| DETR3D | 0.1700 | 0.0796 | 0.1709 | 0.1454 | 0.1797 | 0.0664 | 0.2030 | 0.1663 | - | - |
| DETR3D† | 0.1921 | 0.0766 | 0.1873 | 0.1543 | 0.2021 | 0.0753 | 0.2183 | 0.1824 | 0.3523 | 0.2708 |
| PETR-R50 | 0.1256 | 0.0559 | 0.1170 | 0.0786 | 0.0887 | 0.0338 | 0.1330 | 0.0911 | 0.2350 | 0.1947 |
| PETR-VovNet† | 0.1708 | 0.0883 | 0.1321 | 0.0844 | 0.1352 | 0.0511 | 0.1548 | 0.0989 | - | - |
| BEVDepth-R50† | 0.1339 | 0.0646 | 0.1626 | 0.1301 | 0.1339 | 0.0603 | 0.1891 | 0.1366 | - | - |
| BEVDepth-R101†§ | 0.1436 | 0.0726 | 0.1691 | 0.1455 | 0.1301 | 0.0577 | 0.1751 | 0.1414 | 0.2801 | 0.1932 |
| BEVDet-R50† | 0.0806 | 0.0377 | 0.1244 | 0.0932 | 0.1102 | 0.0397 | 0.1562 | 0.1107 | - | - |
| BEVDet-R101†§ | 0.1121 | 0.0559 | 0.1406 | 0.1063 | 0.1176 | 0.0401 | 0.1558 | 0.1094 | 0.2113 | 0.1535 |
| BEVDet-Swin-Tiny† | 0.0856 | 0.0406 | 0.1058 | 0.0746 | 0.1365 | 0.0632 | 0.1580 | 0.1191 | - | - |
| FCOS3D | 0.1536 | 0.0861 | 0.1103 | 0.0524 | 0.1225 | 0.0543 | 0.1296 | 0.0799 | 0.2279 | 0.1830 |
| PGD-Det | 0.1696 | 0.0947 | 0.1105 | 0.0694 | 0.1126 | 0.0612 | 0.1621 | 0.1049 | 0.2437 | 0.1965 |
| BEVFormer-Small# | 0.1727 | 0.0964 | 0.1221 | 0.0949 | 0.1746 | 0.0907 | 0.1810 | 0.1388 | - | - |
| BEVFormer-Base# | 0.1328 | 0.0555 | 0.1312 | 0.1017 | 0.1689 | 0.0766 | 0.1910 | 0.1559 | 0.3018 | 0.2603 |
| BEVDepth4D-R50†# | 0.2143 | 0.0969 | 0.1914 | 0.1488 | 0.2388 | 0.0960 | 0.2425 | 0.1687 | - | - |
| BEVDet4D-R50†# | 0.1394 | 0.0473 | 0.1499 | 0.1031 | 0.1887 | 0.0633 | 0.2157 | 0.1358 | - | - |

Black-box Attacks. In this black-box setting, we optimize the patch with the nuScenes mini training set (Caesar et al., 2020). We set the learning rate to 10, the patch size to 100 × 100, and the patch scale to s = 0.3. Our adversarial objective uses targeted attacks, as in Eq. 3. The goal is to misclassify all categories as "Car" and to mislabel "Car" as "Pedestrian". In the inference stage, we apply the trained patch to unseen scenes and different models. To compensate for the occlusion effect induced by the patch, we set a baseline using a random pattern patch, and then assess the relative performance drop.

Attacks for Temporal Models. It is worth noting that BEVFormer incorporates historical features for temporal cross-attention. This suggests that attacks on prior frames can affect current predictions. Accordingly, we simulate scenarios where attackers continuously attack multiple frames by modifying every single frame. On the other hand, for BEVDepth4D and BEVDet4D, we attack the current frame while leaving the history frames untouched. Such a setting lets us probe whether benign temporal information can help mitigate adversarial effects. More discussions can be found in Sec. 5.3.3.

![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) Figure 4: Left panel: The horizontal axis corresponds to the targeted model while the vertical axis denotes the source model; transferability is quantified by the proportional reduction in performance (specifically, mAP) in comparison to a randomized patch pattern of identical size. Right panel: The pipeline of the optimization process for the universal patch.
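A minimal sketch of this universal-patch optimization loop (Fig. 4, right panel) is given below, using the hyperparameters above (Adam, learning rate 10, a 100 × 100 patch, scale s = 0.3); `paste_patch` and `targeted_loss` are placeholders for the placement rule of Sec. 4.2 and the objective of Eq. (3), not functions from the released code.

```python
import torch

def train_universal_patch(model, loader, paste_patch, targeted_loss,
                          size=100, scale=0.3, lr=10.0, epochs=1):
    """Optimize a single patch shared across all training images (Sec. 4.3).

    paste_patch(images, patch, scale): resizes the patch with bilinear
        interpolation and pastes it at each target's projected 2D center.
    targeted_loss(outputs, targets): the targeted objective of Eq. (3),
        to be minimized.
    """
    patch = torch.rand(3, size, size, requires_grad=True)  # random initialization
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for images, targets in loader:
            adv_images = paste_patch(images, patch.clamp(0, 1), scale)
            loss = targeted_loss(model(adv_images), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return patch.detach().clamp(0, 1)
```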
## 5.2 Main Results

Our main results for PGD-Adv attacks are presented in Tab. 2 and Tab. 3. The Adv NDS and mAP are averaged across all attack types and severities, excluding only the black-box attacks. The results of the AutoPGD attack are presented in Tab. 4. The results of FGSM and C&W attacks can be found in Appendix A.

Table 4: Results of the AutoPGD (Croce & Hein, 2020) attack.

| Model | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE |
|---------|--------|--------|--------|--------|--------|--------|--------|
| DETR3D | 0.1373 | 0.0522 | 0.9419 | 0.5150 | 1.0066 | 1.2340 | 0.4311 |
| PETR | 0.0880 | 0.0094 | 1.0032 | 0.6438 | 0.8749 | 1.2613 | 0.6482 |
| FCOS3D | 0.1562 | 0.0507 | 0.8930 | 0.4965 | 0.9639 | 1.0070 | 0.3380 |
| PGD-Det | 0.1700 | 0.0544 | 0.8494 | 0.5297 | 0.8157 | 1.2942 | 0.3775 |
| BEVDet | 0.0766 | 0.0203 | 1.0156 | 0.6403 | 1.0308 | 1.1250 | 0.6847 |

Overall, we interestingly note that all the existing camera-based detectors are vulnerable to adversarial attacks, *e.g.*, the adversarial NDS of all models (except BEVDepth4D) is lower than 0.2 under PGD-Adv attacks. Furthermore, we find that AutoPGD can severely compromise the performance of the detection models with only 10 iterations, as shown in Tab. 4. In terms of the attack categories, pixel-based attacks tend to be more malicious than patch-based ones, indicating that superimposing patch patterns onto target objects generally leads to less adversarial effects than pixel alterations. This finding concurs with the understanding that adversarial patches, being modifications of only specific image segments, naturally cause restricted adversarial perturbations. Furthermore, we find that attacks that aim to confuse classification have a greater adversarial effect than those meant to mislead localization. This observation holds for both pixel-based and patch-based attacks. Nevertheless, as illustrated in Fig. 3, the discrepancy in model robustness is more pronounced under localization attacks, indicating a variable degree of vulnerability in accurately identifying object locations.

For black-box attacks, adversarial examples from monocular detectors exhibit enhanced transferability, even to BEV-based models, as illustrated in Fig. 4(a). Universal patches trained using FCOS3D or PGD-Det demonstrate strong transferability among various models. For instance, transferring attacks from FCOS3D to PETR led to a relative performance decline exceeding 70%. On the other hand, BEVDet and BEVDepth produce adversarial examples with limited transferability and show reduced susceptibility to universal patch attacks. Notably, the patch maintains its adversarial nature even after resizing to different shapes and scales. Such a universal patch, spanning various images, models, and scales, presents a pronounced potential threat for camera-based detectors, especially given the zero-risk tolerance in autonomous driving contexts.

![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Figure 5: Comparisons between BEV-based models and non-BEV-based models.

![9_image_2.png](9_image_2.png) Figure 6: Comparisons between depth-based and depth-free models.

## 5.3 Discussions

We next provide an in-depth discussion about these results.
For better analysis, we primarily focus on three model components: BEV representation, Depth (*i.e.*, the incorporation of an explicit depth estimation branch for BEV transformation), and Temporal Modeling (*i.e.*, the ability to learn from multi-frame inputs).

## 5.3.1 BEV-Based Representations

BEV-based models are generally vulnerable to classification attacks but show notable robustness to localization attacks. To probe whether BEV detectors retain their superiority over monocular detectors under adversarial attacks, we select four models for comparison: BEVDet (Huang et al., 2021), BEVDepth (Li et al., 2022a), FCOS3D (Wang et al., 2021), and PGD-Det (Wang et al., 2022a), as they exhibit similar performance under standard conditions and all employ the ResNet101 backbone. As shown in Fig. 5, we can observe that BEV-based methods generally fail to showcase a clear advantage in terms of robustness against untargeted attacks. However, we interestingly note that BEV-based models demonstrate superior performance under localization attacks: under pixel-based localization attacks, the adversarial NDS of BEVDepth exceeds that of PGD-Det by about 53%. This conclusion is further corroborated in Fig. 6(a) and Fig. 6(b).

## 5.3.2 Explicit Depth Estimation

Precise depth estimation is crucial for depth-based models. Additionally, depth-free methods have the potential to yield stronger robustness. Our analysis suggests that models with more precise depth estimation capabilities typically exhibit enhanced robustness against adversarial attacks. As seen in Fig. 5, PGD-Det outperforms FCOS3D by leveraging superior depth estimation. This enhanced depth estimation results in consistent robustness improvement across all attack types. Additionally, the comparison between BEVDet and BEVDepth, which differ only in their depth estimation module, shows that the accurate depth estimation in BEVDepth can lead to a 39.6% increase in robustness. Furthermore, we found that depth-estimation-free approaches (Li et al., 2022b; Wang et al., 2022b; Liu et al., 2022a) generally show advantages over depth-based detectors under classification attacks (Fig. 6(c)). Interestingly, for localization attacks, depth-based models can outperform some depth-free models if the depth estimation is sufficiently accurate, as shown in Fig. 6(d). Nonetheless, DETR3D (Wang et al., 2022b) still shows the best robustness, suggesting that carefully designed depth-free methods have the potential for superior robustness.

## 5.3.3 Temporal Fusion

The effects of adversarial attacks can be mitigated using clean temporal information, but they might be exacerbated when multi-frame adversarial inputs are used. To investigate the impact of temporal information on adversarial robustness, we introduce two distinct attack scenarios. For the BEVFormer model, which updates its history of BEV queries on the fly, we attack each input frame. This results in all sequential inputs used for temporal information modeling being adversarial examples, causing an accumulation of errors within the model through retained temporal data. Our experiments reveal that the BEVFormer-Base model, when using temporal information, underperforms compared to its single-frame variant (*i.e.*, 0.1585 *v.s.* 0.1445). To further demonstrate the influence of temporal fusion, we simulate three cases: (a) Benign case: The model processes clean input across multiple frames. (b) Continuous adversarial attack: The model processes adversarial input persistently across multiple frames.
(c) Single adversarial attack: The model processes clean input followed by adversarial input at a single frame. We use the benign case as the ground truth and calculate the error on BEV temporal features relative to it. The results presented in Tab. 5 further support this observation. In the second scenario, we attack the current-timestamp input while leaving historical information untouched. Under this condition, we test BEVDepth4D and BEVDet4D, which integrate features from the current frame and a recent historical frame in their predictions. As shown in Tab. 2, we observe that clean temporal information significantly reduces the adversarial effect in this scenario, with 0.1493 *v.s.* 0.2144 for BEVDepth, and 0.1069 *v.s.* 0.1586 for BEVDet.

| Scenario | frame1 | frame3 | frame5 | frame7 | frame9 | frame11 |
|-------------------------------|----------|----------|----------|----------|----------|-----------|
| Continuous adversarial attack | 0.11 | 0.27 | 0.23 | 0.28 | 0.25 | 0.22 |
| Single adversarial attack | 0 | 0 | 0 | 0 | 0 | 0.16 |

Table 5: Error between adversarial BEV features and benign BEV features under different frames.

## 5.3.4 Others

Strategies specifically designed to tackle long-tail problems can concurrently improve model robustness. Object detection often faces long-tail challenges, where certain categories like "Car" and "Pedestrian" are significantly more prevalent than others, such as "Motorcycle". Our findings suggest that strategies designed to address long-tail problems, such as class-balanced group sampling (CBGS) training (Zhu et al., 2019), can also improve robustness. Our results show that DETR3D trained with CBGS improves adversarial NDS by about 11%. This observation contrasts with (Wu et al., 2021), which suggested that resampling training strategies minimally impact robustness. Nonetheless, it is important to note that our study considers detection tasks, which are different from the classification tasks in (Wu et al., 2021).

![11_image_0.png](11_image_0.png) Figure 7: Comparison between different model sizes: For the same model, an increase in parameter size typically leads to enhanced robustness against adversarial attacks.

Increasing the backbone size consistently leads to improved robustness. We investigate the impact of different backbone architectures, including ResNet (He et al., 2016), VoVNet (Lee et al., 2019), and Swin Transformer (Liu et al., 2021), on the robustness of models. We first compare Swin-Tiny and ResNet50, as their parameter sizes are similar (23.6M *v.s.* 27.5M). Although they perform slightly differently under various attacks, the overall robustness of ResNet50 and Swin-Tiny is similar (*i.e.*, 0.1069 *v.s.* 0.1074). On the other hand, VoVNet outperforms in both standard performance and adversarial robustness. Additionally, we find that increasing the backbone size consistently leads to improved robustness. This trend is particularly noticeable for models with weaker robustness, as illustrated in Fig. 7.

## 6 Ethical And Societal Considerations

This study addresses the threats of adversarial attacks in camera-based 3D detection models, focusing primarily on digital attacks. We emphasize the need to extend this research to include physical attack scenarios in future work, due to their significant potential impact on autonomous driving systems.
Ethically, our research highlights the crucial responsibility of developers to ensure the safety and reliability of these systems, especially in public spaces where they are more vulnerable to adversarial attacks. The broader impacts of our study underline the importance of integrating strong security measures in both the development and deployment of such technologies. This is vital to reduce the risk of harm to society and promote a safer, more ethical approach to using technology in critical safety applications. ## 7 Conclusions We conduct an exhaustive analysis of adversarial robustness in camera-based 3D object detection models. Our findings reveal that a model's robustness does not necessarily align with its performance under normal conditions. Through our investigation, we successfully pinpoint several strategies that can enhance robustness. We hope our findings will contribute valuable insights to the development of more robust camera-based object detectors in the future. ## Acknowledgement This work is partially supported by a gift from Open Philanthropy and UCSC Office of Research Seed Funding for Early Stage Initiatives. This work is based upon the work supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR) (a U.S. Department of Transportation National University Transportation Center) headquartered at Clemson University, Clemson, South Carolina, USA. Any opinions, findings, conclusions, and recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof. ## References Yutong Bai, Jieru Mei, Alan L Yuille, and Cihang Xie. Are transformers more robust than cnns? Advances in Neural Information Processing Systems, 34:26831–26843, 2021. Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 11621–11631, 2020. Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, and Bo Li. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 176–194. IEEE, 2021. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. Ieee, 2017. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International conference on machine learning*, pp. 2206–2216. PMLR, 2020. Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In *2012 IEEE conference on computer vision and pattern recognition*, pp. 3354–3361. IEEE, 2012. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. 
Junjie Huang and Guan Huang. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054, 2022. Junjie Huang, Guan Huang, Zheng Zhu, and Dalong Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. *arXiv preprint arXiv:2112.11790*, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12697–12705, 2019. Youngwan Lee, Joong-won Hwang, Sangrok Lee, Yuseok Bae, and Jongyoul Park. An energy and gpucomputation efficient backbone network for real-time object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 0–0, 2019. Yinhao Li, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, and Zeming Li. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092, 2022a. Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. *arXiv preprint arXiv:2203.17270*, 2022b. Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, and Yiran Chen. Dpatch: An adversarial patch attack on object detectors. *arXiv preprint arXiv:1806.02299*, 2018. Yingfei Liu, Tiancai Wang, Xiangyu Zhang, and Jian Sun. Petr: Position embedding transformation for multi-view 3d object detection. *arXiv preprint arXiv:2203.05625*, 2022a. Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Qi Gao, Tiancai Wang, Xiangyu Zhang, and Jian Sun. Petrv2: A unified framework for 3d perception from multi-camera images. *arXiv preprint arXiv:2206.01256*, 2022b. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022, 2021. Yuexin Ma, Tai Wang, Xuyang Bai, Huitong Yang, Yuenan Hou, Yaming Wang, Yu Qiao, Ruigang Yang, Dinesh Manocha, and Xinge Zhu. Vision-centric bev perception: A survey. *arXiv preprint arXiv:2208.02797*, 2022. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1765–1773, 2017. Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In *European Conference on Computer Vision*, pp. 194–210. Springer, 2020. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. *Advances in neural information processing systems*, 28, 2015. Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, and Giorgio Buttazzo. On the real-world adversarial robustness of real-time semantic segmentation models for autonomous driving. *arXiv preprint arXiv:2201.01850*, 2022. 
Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of vision transformers. *arXiv preprint arXiv:2103.15670*, 2021. Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2446–2454, 2020. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9627–9636, 2019. James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun. Physically realizable adversarial examples for lidar object detection. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13716–13725, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Sourabh Vora, Alex H Lang, Bassam Helou, and Oscar Beijbom. Pointpainting: Sequential fusion for 3d object detection. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 4604–4612, 2020. Tai Wang, Xinge Zhu, Jiangmiao Pang, and Dahua Lin. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 913–922, 2021. Tai Wang, ZHU Xinge, Jiangmiao Pang, and Dahua Lin. Probabilistic and geometric depth: Detecting objects in perspective. In *Conference on Robot Learning*, pp. 1475–1485. PMLR, 2022a. Yue Wang, Vitor Campagnolo Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, and Justin Solomon. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In *Conference on Robot Learning*, pp. 180–191. PMLR, 2022b. Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, and Dahua Lin. Adversarial robustness under long-tailed distribution. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 8659–8668, 2021. Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, and Alan Yuille. Adversarial examples for semantic segmentation and object detection. In *Proceedings of the IEEE international conference on* computer vision, pp. 1369–1378, 2017. Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. *Sensors*, 18(10): 3337, 2018. Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4490–4499, 2018. Benjin Zhu, Zhengkai Jiang, Xiangxin Zhou, Zeming Li, and Gang Yu. Class-balanced grouping and sampling for point cloud 3d object detection. *arXiv preprint arXiv:1908.09492*, 2019. Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, and Shibao Zheng. Understanding the robustness of 3d object detection with bird's-eye-view representations in autonomous driving. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 21600–21610, 2023. 
## A More Adversarial Attack Results

We provide the results of FGSM (Goodfellow et al., 2014) and the C&W attack (Carlini & Wagner, 2017) in this section. The results can be found from Tab. 6 to Tab. 9. We observe that in the context of 3D object detection, even single-step FGSM can compromise the performance of models to a large extent, which further reveals the vulnerability of these models. The attack effectiveness of the C&W attack is close to that of the FGSM attack. However, the C&W attack needs more steps to optimize, which might limit its real-world applicability.

| Model | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE |
|---------|--------|--------|--------|--------|--------|--------|--------|
| DETR3D | 0.2259 | 0.1406 | 0.9066 | 0.4804 | 0.7946 | 0.9122 | 0.3498 |
| PETR | 0.1501 | 0.0710 | 0.9419 | 0.5616 | 0.9664 | 0.9048 | 0.4789 |
| BEVDet | 0.1427 | 0.0603 | 1.0925 | 0.4855 | 1.1231 | 1.1421 | 0.3884 |
| FCOS3D | 0.1712 | 0.1104 | 0.9319 | 0.5034 | 1.0274 | 1.3144 | 0.4053 |
| PGD-Det | 0.1848 | 0.0865 | 0.9440 | 0.5292 | 0.8091 | 0.9069 | 0.3949 |

Table 6: Results of FGSM Classification attack.

| Model | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE |
|---------|--------|--------|--------|--------|--------|--------|--------|
| DETR3D | 0.2441 | 0.2095 | 0.8805 | 0.4892 | 0.7957 | 1.1887 | 0.4413 |
| PETR | 0.1778 | 0.1273 | 0.9900 | 0.4739 | 1.0308 | 1.1836 | 0.3951 |
| BEVDet | 0.1675 | 0.1249 | 0.9852 | 0.4984 | 1.0121 | 1.2842 | 0.4655 |
| FCOS3D | 0.1583 | 0.0942 | 0.9647 | 0.4982 | 0.9403 | 1.5566 | 0.4852 |
| PGD-Det | 0.1606 | 0.0960 | 1.0325 | 0.5231 | 0.9541 | 1.0494 | 0.3963 |

Table 7: Results of FGSM Localization attack.

| Model | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE |
|---------|--------|--------|--------|--------|--------|--------|--------|
| DETR3D | 0.2793 | 0.1461 | 0.8809 | 0.4635 | 0.7118 | 0.5647 | 0.3165 |
| PETR | 0.1681 | 0.0870 | 0.8943 | 0.4938 | 0.9151 | 1.3880 | 0.4513 |
| FCOS3D | 0.1217 | 0.0632 | 1.0194 | 0.5774 | 0.8748 | 1.4137 | 0.6470 |
| PGD-Det | 0.1485 | 0.0761 | 0.9929 | 0.5214 | 0.9067 | 1.3131 | 0.4745 |

Table 8: Results of C&W Classification attack.

| Model | NDS | mAP | mATE | mASE | mAOE | mAVE | mAAE |
|---------|--------|--------|--------|--------|--------|--------|--------|
| DETR3D | 0.2765 | 0.2030 | 0.8686 | 0.4752 | 0.7202 | 0.8435 | 0.3419 |
| PETR | 0.1656 | 0.1167 | 0.9507 | 0.4833 | 1.0228 | 1.5613 | 0.4938 |
| FCOS3D | 0.1119 | 0.0730 | 1.0815 | 0.5873 | 1.0157 | 1.0151 | 0.6586 |
| PGD-Det | 0.1585 | 0.0924 | 1.0400 | 0.5003 | 0.8569 | 1.3132 | 0.5195 |

Table 9: Results of C&W Localization attack.

## B Dynamical Patch V.S. Fixed-Size Patch

We compare the results of the dynamical patch and the fixed-size patch. In our paper, we choose to use dynamically sized patches because they are more physically reasonable: the size of a real-world patch changes according to its distance from the sensors. Attackers only need a smaller patch to fool the detectors on pedestrians, while they might need a larger one for larger objects (*i.e.*, Car and Bus). As a result, it is neither reasonable nor fair to apply a fixed patch size to every object. To illustrate the difference, we calculate the detection results of each class; the comparison can be seen in Fig. 8. The results are evaluated using BEVFormer-Base (Li et al., 2022b) with temporal information on the nuScenes (Caesar et al., 2020) mini validation set. We calculate the relative AP drop compared to clean input.

![16_image_0.png](16_image_0.png)

Figure 8: Comparison between different patch size settings.

Table 10: For the PGD-Det model, the adversarial universal patch trained on nuScenes can transfer to KITTI.
| Type | Easy | Moderate | Hard |
|--------------|--------|------------|--------|
| Clean | 64.4 | 54.5 | 49.2 |
| Random Patch | 53.6 | 44.4 | 39.4 |
| Adv Patch | 44.2 | 37.2 | 32.8 |

## C Black-Box Transfer Attacks

Here we provide the full results of universal patch-based black-box transfer attacks, as shown in Tab. 11 and Tab. 12. We conduct universal patch attacks using BEVFormer-base (Li et al., 2022b) without temporal information, DETR3D (Wang et al., 2022b), PETR-R50 (Liu et al., 2022a), BEVDepth-R101 (Li et al., 2022a), and BEVDet-R50 (Huang et al., 2021). Among the above models, DETR3D (Wang et al., 2022b), BEVDet (Huang et al., 2021), and BEVDepth (Li et al., 2022a) are trained with the CBGS strategy. Considering the occlusion induced by patches, we randomly initialize a patch of the same size to serve as the baseline.

To further verify the transferability of the generated universal patch, we apply the universal patch trained with PGD-Det to the KITTI (Geiger et al., 2012) dataset. We show that the universal patch generated using nuScenes (Caesar et al., 2020) can effectively transfer to KITTI (Geiger et al., 2012), as shown in Tab. 10. This further validates the existence of universal adversarial patterns in 3D detection tasks.

## D Visualization Of Depth Estimation

We visualize the depth estimation results of BEVDet (Huang et al., 2021) and BEVDepth (Li et al., 2022a) in Fig. 9 to further show the importance of precise depth estimation for depth-based approaches. Due to the guidance from sparse LiDAR points, BEVDepth consistently yields superior depth estimation results, contributing to its heightened robustness.

![17_image_0.png](17_image_0.png)
![17_image_1.png](17_image_1.png)
![17_image_2.png](17_image_2.png)

Figure 9: Depth estimation results. From left to right: original FRONT camera images, BEVDepth (Li et al., 2022a) depth prediction results, BEVDet (Huang et al., 2021) depth prediction results. Accurate depth estimation provides the model with strong robustness.
| Adversarial Source | BEVFormer | DETR3D | PETR | FCOS3D | PGD-Det | BEVDet | BEVDepth |
|----------------------|-------------|----------|--------|----------|-----------|----------|------------|
| Random Noise | 0.2816 | 0.3014 | 0.2515 | 0.2379 | 0.2546 | 0.1924 | 0.2380 |
| BEVFormer | 0.2663 | 0.2827 | 0.2301 | 0.2098 | 0.2316 | 0.1676 | 0.2132 |
| DETR3D | 0.2767 | 0.2799 | 0.2195 | 0.1936 | 0.2221 | 0.1634 | 0.2017 |
| PETR | 0.2561 | 0.2613 | 0.1236 | 0.1825 | 0.1904 | 0.1256 | 0.1895 |
| FCOS3D | 0.2205 | 0.2266 | 0.0683 | 0.0724 | 0.0893 | 0.1018 | 0.1410 |
| PGD-Det | 0.2357 | 0.2463 | 0.1805 | 0.1069 | 0.0879 | 0.1186 | 0.1128 |
| BEVDet | 0.2760 | 0.2807 | 0.2416 | 0.2334 | 0.2513 | 0.1758 | 0.2283 |
| BEVDepth | 0.2693 | 0.2874 | 0.2422 | 0.2275 | 0.2445 | 0.1829 | 0.2213 |

Table 11: Universal patch black-box attacks: full results of mAP. Columns correspond to the target models and rows to the source white-box models.

| Adversarial Source | BEVFormer | DETR3D | PETR | FCOS3D | PGD-Det | BEVDet | BEVDepth |
|----------------------|-------------|----------|--------|----------|-----------|----------|------------|
| Random Noise | 0.3179 | 0.3746 | 0.2857 | 0.2776 | 0.2919 | 0.2670 | 0.3206 |
| BEVFormer | 0.3141 | 0.3589 | 0.2648 | 0.2475 | 0.2696 | 0.2254 | 0.2965 |
| DETR3D | 0.3235 | 0.3609 | 0.2566 | 0.2264 | 0.2592 | 0.2177 | 0.2876 |
| PETR | 0.2954 | 0.3412 | 0.1890 | 0.2127 | 0.2396 | 0.1769 | 0.2804 |
| FCOS3D | 0.2604 | 0.3149 | 0.1125 | 0.1473 | 0.1669 | 0.1461 | 0.2336 |
| PGD-Det | 0.2811 | 0.3350 | 0.2315 | 0.1733 | 0.1670 | 0.1552 | 0.2080 |
| BEVDet | 0.3079 | 0.3672 | 0.2715 | 0.2734 | 0.2807 | 0.2551 | 0.3131 |
| BEVDepth | 0.3140 | 0.3655 | 0.2687 | 0.2648 | 0.2750 | 0.2467 | 0.3011 |

Table 12: Universal patch black-box attacks: full results of NDS. Columns correspond to the target models and rows to the source white-box models.

## E Full Results

In this part, we list the full experiment results. We present the mAP and NDS metrics of all the models under the 4 types of attacks, sweeping the attack severity (*i.e.*, the number of attack iterations or the patch scale). For each attack, we use multiple attack severities to minimize randomness. We set the random seed to 0 for all experiments. The curves are plotted from Fig. 10 to Fig. 13.

Figure 10: BEVFormer full results. (a) Pixel-based untargeted attacks: mAP *v.s.* attack iterations. (b) Pixel-based untargeted attacks: NDS *v.s.* attack iterations. (c) Pixel-based localization attacks: mAP *v.s.* attack iterations. (d) Pixel-based localization attacks: NDS *v.s.* attack iterations. (e) Patch-based untargeted attacks: mAP *v.s.* patch scale. (f) Patch-based untargeted attacks: NDS *v.s.* patch scale. (g) Patch-based localization attacks: mAP *v.s.* patch scale. (h) Patch-based localization attacks: NDS *v.s.* patch scale.

Figure 11: DETR3D and PETR full results. Panels (a)-(h) follow the same layout as Figure 10.

Figure 12: BEVDet and BEVDepth full results. Panels (a)-(h) follow the same layout as Figure 10.

Figure 13: FCOS3D and PGD-Det full results. Panels (a)-(h) follow the same layout as Figure 10.
Review 1:
Summary:
The work comprehensively studies the adversarial robustness of different camera-based 3D object detection approaches. The 3D object detection approaches covered by the study include monocular approaches and approaches based on bird's-eye-view (BEV) representations with or without depth estimation. The study considers pixel- and patch-based attack approaches using projected gradient descent (PGD) as well as a black-box attack method. The work studies clean performance, adversarial robustness, and the transferability of adversarial perturbations across models on the standard nuScenes dataset and draws critical and useful insights from the experiment results from the following perspectives: the use of BEV representations, explicit depth estimation, using temporal information, long-tail distribution of object classes, and model size.

Strengths and Weaknesses:
**Strengths**
1. The problem studied by the work is interesting to the community of both 3D vision and adversarial machine learning and closely related to the latest research with safety-critical real-world applications like autonomous driving based on camera vision.
2. The camera-based 3D object detection approaches covered by the work are new and comprehensive.
3. The experiment study settings are comprehensive, considering different adversarial attack methods and different targets. The comprehensiveness of the experiment settings also makes the conclusions of the work solid.
4. The study results are well-presented with clear descriptions of background knowledge and experiment settings, making the work very readable.

**Weaknesses**
1. The white-box adversarial attack approaches only include PGD. However, there are other standard adversarial attack approaches, including the fast gradient sign method [1] and Carlini & Wagner [2], with strengths and shortcomings different from PGD. It would make the study even more comprehensive and the conclusions more solid if the study could show how effective these methods are against the 3D object detection approaches.
2. Since the study is closely related to safety-critical real-world problems, the work should include a dedicated part discussing the real-world threats of the studied adversarial settings, ethical concerns, and broader impacts of the study's results.

[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
[2] Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.

Requested Changes:
**Requested Changes** The work should include a Broader Impact Statement discussing the real-world threats of the studied adversarial settings, ethical concerns, and broader impacts of the study's results.
**Suggested Changes** I would encourage the authors to include study results based on other standard adversarial attack methods like FGSM and Carlini & Wagner in their study.

Broader Impact Concerns: The work should not be exempt from having a Broader Impact Statement. Please see **Weaknesses** and **Requested Changes**.

==================================================

Review 2:
Summary:
The paper conducts a comprehensive evaluation of 3D object detection methods against adversarial attacks. The authors categorize the existing 3D object detection methods into a few classes and evaluate their robustness against pixel-level and patch-level adversarial attacks, respectively. The results point to four findings.
Strengths and Weaknesses:
Strengths:
+ The paper clearly categorizes the adversarial attacks against 3D object detection.
+ The experiments are of a large scale and the corresponding findings are meaningful.

Weaknesses:
- The clean performance of the evaluated models is low.
- The differences in results are not that significant.

Requested Changes:
I am not familiar with the 3D object detection domain, so please correct me if I am wrong. The clean performance of the selected models seems relatively low. Most of the models have only 0.3-0.4 precision and detection scores. I am not sure if these are really state-of-the-art performances. I would suggest the authors explicitly justify this or use models with better performance for evaluation. IMHO, it is more valuable to attack models with high performance rather than mediocre models.

The differences in attack performance are not that significant. To better support the arguments and findings of the paper, it would be more rigorous if the authors conducted the experiments multiple times, reporting the mean and standard deviation together with the paired t-test results.

Broader Impact Concerns: Not applicable.

==================================================

Review 3:
Summary:
This paper conducts an investigation of the robustness of leading camera-based 3D object detection approaches under various adversarial conditions. It evaluates the robustness against pixel-based and patch-based adversarial attacks in classification and localization tasks. The paper finds that four techniques can potentially help improve robustness: bird's-eye-view-based representations, depth-estimation-free approaches, accurate depth estimation, and multi-frame benign inputs. The authors conduct comprehensive experiments and show interesting results on designs that can benefit adversarial robustness.

Strengths and Weaknesses:
**Strengths**
1. The experimental setting and analysis are exhaustive, providing insightful indications for robust 3D model design.
2. The figures and the tables are self-explanatory. The paper is well-written and is easy to follow.

**Weaknesses**
1. The paper shows that several techniques (i.e., BEV-based representation, explicit depth estimation, temporal fusion, etc.) can help improve adversarial robustness by comparing the empirical adversarial robustness of models using different combinations of these techniques. It would be better if the authors could provide a deeper understanding of this and explain why these techniques help, using theoretical analysis or a more fine-grained ablation study.
2. I am curious what the strongest attack for 3D models is. The authors considered PGD-based methods and patch-based methods, which is great. But I would like to learn how the model performs against even stronger attacks like AutoAttack (Croce et al., 2020).

Requested Changes: See weaknesses above.

Broader Impact Concerns: No concern.

==================================================

Metareview:
Recommendation: Accept as is
Comment: The adversarial robustness of 3D point models is an interesting research problem studied by the authors. The paper's comprehensive and robust experimental study lends strong empirical support to its claims. Furthermore, the authors demonstrate a constructive response to critiques, effectively addressing concerns regarding the simplicity of attacks and the depth of analysis. All reviewers have recommended the acceptance of this paper.

==================================================
# Geometric Random Walk Graph Neural Networks Via Implicit Layers

Anonymous authors
Paper under double-blind review

## Abstract

Graph neural networks have recently attracted a lot of attention and have been applied with great success to several important graph problems. The Random Walk Graph Neural Network model was recently proposed as a more intuitive alternative to the well-studied family of message passing neural networks. This model compares each input graph against a set of latent "hidden graphs" using a kernel that counts common random walks up to some length. In this paper, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the above model such that it can count common walks of infinite length in two graphs. The proposed model retains the transparency of Random Walk Graph Neural Networks since its first layer also consists of a number of trainable "hidden graphs" which are compared against the input graphs using the geometric random walk kernel. To compute the kernel, we employ a fixed-point iteration approach involving implicitly defined operations. Then, we capitalize on implicit differentiation to derive an efficient training scheme which requires only constant memory, regardless of the number of fixed-point iterations. The employed random walk kernel is differentiable, and therefore, the proposed model is end-to-end trainable. Experiments on standard graph classification datasets demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art methods.

## 1 Introduction

Recent years have witnessed an enormous growth in the amount of data represented as graphs. Indeed, graphs emerge naturally in several domains, including social networks, bioinformatics, and neuroscience, just to name a few. Besides the increase in the amount of graph-structured data, there is also a growing interest in applying machine learning techniques to data modeled as graphs. Among others, the graph classification and graph regression tasks have attracted a great deal of attention in the past years. These tasks have served as the fundamental building block within applications that deal with problems ranging from drug design (Kearnes et al., 2016) to session-based recommendation (Wu et al., 2019).

Graph Neural Networks (GNNs) provide a powerful tool for machine learning on graphs. So far, the field of GNNs has been largely dominated by message passing architectures. Indeed, most of them share the same basic idea, and can be reformulated into a single common framework, so-called message passing neural networks (MPNNs) (Gilmer et al., 2017). These models employ a message passing procedure to aggregate local information of vertices. For graph-related tasks, MPNNs usually apply some permutation invariant readout function to the vertex representations to produce a representation for the entire graph. The family of MPNNs has been heavily studied in the past few years, and there are now available very expressive models which have achieved state-of-the-art results in several tasks (Xu et al., 2019; Morris et al., 2019).

Although the family of MPNNs is perhaps the most successful story in the field of graph representation learning, there exist models that follow different design paradigms and do not fall into this family. An example of such a model is the recently proposed Random Walk Graph Neural Network (RWNN) (Nikolentzos & Vazirgiannis, 2020).
This model contains a number of trainable "hidden graphs", and it compares the input graphs against these graphs using a random walk kernel which counts the number of common walks in two graphs. The emerging kernel values are fed into a fully-connected neural network which acts as the classifier or regressor. The employed random walk kernel is differentiable, and thus RWNN is end-to-end trainable. However, this kernel considers only random walks of a small length. Such local patterns may fail to capture the overall large-scale shape of the graphs, while several interesting properties of graphs depend on the graph's global structure. Furthermore, increasing the length of the walks has a direct impact on the model's computational complexity.

In this paper, we propose a novel approach to tackle these challenges. Specifically, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the RWNN model such that it can count common walks of infinite length in two graphs. The model contains a number of trainable "hidden graphs", and it compares the input graphs against these graphs using the geometric random walk kernel. Thus, instead of walks of small length, the proposed model considers walks of infinite length. To compute the kernel, GRWNN uses a fixed-point iteration approach. The kernel values are then passed on to a fully-connected neural network which produces the output. The proposed neural network is end-to-end trainable since we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to a very efficient implementation in terms of memory requirements. Hence, we can still update the "hidden graphs" during training with backpropagation. We compare the performance of the proposed model to state-of-the-art graph kernels and recently-proposed neural architectures on several graph classification datasets. Results show that in most cases, the GRWNN model matches or outperforms competing methods. Our main contributions are summarized as follows:

- We propose a novel neural network model, Geometric Random Walk Graph Neural Network, which employs the geometric random walk kernel to produce graph representations. The model counts common walks of infinite length in the input graph and a set of randomly initialized "hidden graphs".

- We employ an efficient scheme to compute the random walk graph kernel using fixed-point iterations. We show that we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to an efficient implementation.

- We evaluate the model's performance on several standard graph classification datasets and show that it achieves results similar and in some cases superior to those obtained by recent GNNs and graph kernels.

The rest of this paper is organized as follows. Section 2 provides an overview of the related work. Section 3 introduces some preliminary concepts. Section 4 provides a detailed description of the proposed model. Section 5 evaluates the proposed model in graph classification tasks. Finally, Section 6 concludes.

## 2 Related Work

Graph kernels have a long history in the field of graph representation learning (Kriege et al., 2020). A graph kernel is a kernel function between graphs, i. e., a symmetric positive semidefinite function defined on the space of graphs. These methods implicitly (or explicitly) generate graph representations and enable the application of kernel methods such as the SVM classifier to graphs.
Most graph kernels are instances of the R-convolution framework (Haussler, 1999), and they compare substructures extracted from the graphs to each other. Such substructures include shortest paths (Borgwardt & Kriegel, 2005), random walks (Gärtner et al., 2003; Kashima et al., 2003), small subgraphs (Shervashidze et al., 2009), and others. Our work is related to random walk kernels, i. e., kernels that compare random walks to each other. The first such kernels were proposed by Gärtner et al. (2003) and by Kashima et al. (2003). The work of Kashima et al. was later refined by Mahé et al. (2004). Vishwanathan et al. (2010) and Kang et al. (2012) proposed new algorithms for efficiently computing random walk kernels. These algorithms improve the time complexity of kernel computation. Sugiyama & Borgwardt (2015) studied the problem of halting (i. e., longer walks are downweighted so much that the kernel value is completely dominated by the comparison of walks of length 1) that occurs in random walk kernels, and showed that its extent depends on properties of the graphs being compared. Zhang et al. (2018b) defined a different kernel which does not compare random walks to each other, but instead, compares the return probabilities of random walks. Finally, Kalofolias et al. (2021) proposed a variant of the random walk kernel where structurally dissimilar vertices are not just down-weighted, but are not allowed to be visited during the simultaneous walk.

Although the first GNNs were proposed several years ago (Sperduti & Starita, 1997; Scarselli et al., 2009; Micheli, 2009), until recently, these models had attracted limited attention. In recent years, with the rise of deep learning, a lot of models started to emerge (Bruna et al., 2014; Li et al., 2015; Duvenaud et al., 2015; Atwood & Towsley, 2016; Defferrard et al., 2016; Lei et al., 2017). Most models update the representation of each vertex by aggregating the feature vectors of its neighbors. This update procedure can be viewed as a form of message passing algorithm and thus, these models are known as message passing neural networks (MPNNs) (Gilmer et al., 2017). To compute a feature vector for the entire graph, MPNNs apply some permutation invariant readout function to all the vertices of the graph. The family of MPNNs has been heavily studied in the past few years, and there are now available several sophisticated models which can produce expressive graph representations (Xu et al., 2019; Morris et al., 2019; Dehmamy et al., 2019; Morris et al., 2020). Despite the general recent focus on MPNNs, some works have proposed architectures that are not variants of this family of models (Niepert et al., 2016; Maron et al., 2019b;a; Nikolentzos & Vazirgiannis, 2020). The work closest to ours is the one reported in Nikolentzos & Vazirgiannis (2020) which presents the Random Walk Graph Neural Network (RWNN) model. In fact, in this paper, we generalize the RWNN model to compare random walks of infinite length in two graphs. Recently, another method that uses random walks to extract features which are then processed by a standard convolutional neural network was proposed (Toenshoff et al., 2021). However, that approach decouples data representation from learning since random walks are sampled in a preprocessing stage.

Our work is also related to implicit models which have been applied successfully to many problems (de Avila Belbute-Peres et al., 2018; Chen et al., 2018; Amos et al., 2018; Bai et al., 2019).
The outputs of these models are determined implicitly by the solution of some underlying sub-problem. Implicit models have also been defined in the context of graph representation learning. For instance, Gu et al. (2020) proposed IGNN, a model that seeks the fixed point of some equation which is equivalent to running an infinite number of message passing iterations. Thus, the final representation potentially contains information from all neighbors in the graph, capturing long-range dependencies. Gallicchio & Micheli (2020) proposed a similar model which generates graph representations based on the fixed point of a recursive/dynamical system, but is actually only partially trained. In contrast to these approaches, whose objective is to apply a large (or infinite) number of message passing layers implicitly, in our setting we employ a fixed-point iteration approach to compute the random walk kernel, and we then directly differentiate through the fixed-point equations via implicit differentiation.

## 3 Preliminaries

In this section, we begin by introducing our notation, and we then review the definition of the geometric random walk kernel.

## 3.1 Notation

Let $[n] = \{1, \ldots, n\} \subset \mathbb{N}$ for $n \geq 1$. Let $G = (V, E)$ be an undirected graph, where $V$ is the vertex set and $E$ is the edge set. We will denote by $n$ the number of vertices and by $m$ the number of edges. The neighbourhood $\mathcal{N}(v)$ of a vertex $v$ is the set of all vertices adjacent to $v$. Hence, $\mathcal{N}(v) = \{u \,|\, (v,u) \in E\}$ where $(v,u)$ is an edge between vertices $v$ and $u$ of $V$. The adjacency matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ of a graph $G$ is a symmetric (typically sparse) matrix used to encode edge information in the graph. The element of the $i$-th row and $j$-th column is equal to the weight of the edge between vertices $v_i$ and $v_j$ if such an edge exists, and 0 otherwise. The degree $d(v)$ of a vertex $v$ is equal to the sum of the weights of the edges that are adjacent to the vertex. For vertex-attributed graphs, every vertex in the graph is associated with a feature vector. We use $\mathbf{X} \in \mathbb{R}^{n \times d}$ to denote the vertex features, where $d$ is the feature dimensionality. The feature of a given vertex $v_i$ corresponds to the $i$-th row of $\mathbf{X}$.

The direct (tensor) product $G_\times = (V_\times, E_\times)$ of two graphs $G = (V, E)$ and $G' = (V', E')$ is defined as follows:

$$V_{\times}=\{(v,v')\in V\times V'\}$$
$$E_{\times}=\big\{\big((v,v'),(u,u')\big)\in V_{\times}\times V_{\times}\,|\,(v,u)\in E\text{ and }(v',u')\in E'\big\}$$

We denote by $\mathbf{A}_\times$ the adjacency matrix of $G_\times$, and denote by $\Delta_\times$ and $\bar{d}_\times$ the maximum and average of the vertex degrees of $G_\times$, respectively. Thus, $\bar{d}_\times = \frac{1}{|V_\times|}\sum_{v\in V_\times} d(v)$. A walk in a graph is a sequence of vertices such that consecutive vertices are linked by an edge. Performing a random walk on the direct product $G_\times$ of two graphs $G$ and $G'$ is equivalent to performing a simultaneous random walk on the two graphs $G$ and $G'$. We use $\otimes$ to represent the Kronecker product, and use $\odot$ to represent elementwise multiplication between two matrices or vectors of the same dimension. For a $p \times q$ matrix $\mathbf{V}$, $\mathrm{vec}(\mathbf{V}) \in \mathbb{R}^{pq}$ represents the vectorized form of $\mathbf{V}$, obtained by stacking its columns. Let also $\mathrm{vec}^{-1}$ denote the inverse vectorization operator which transforms a vector into a matrix, i. e., for a $pq$-dimensional vector $\mathbf{v}$, $\mathbf{V} = \mathrm{vec}^{-1}(\mathbf{v})$ where $\mathbf{V} \in \mathbb{R}^{p \times q}$ (see the appendix for the exact definition of the vec and $\mathrm{vec}^{-1}$ operators).

## 3.2 Random Walk Kernel

Given two graphs $G$ and $G'$, the random walk kernel counts all pairs of matching walks on $G$ and $G'$ (Gärtner et al., 2003); the sketch below illustrates the direct-product construction that underlies this computation.
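To make the notation concrete, the following minimal NumPy sketch builds the adjacency matrix of the direct product of two toy graphs and counts the pairs of matching walks of a given length. The two adjacency matrices are illustrative assumptions, not graphs from any dataset.

```python
import numpy as np

# Toy adjacency matrices of two small undirected graphs G and G'.
A1 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)   # a triangle
A2 = np.array([[0, 1],
               [1, 0]], dtype=float)      # a single edge

# Adjacency matrix of the direct (tensor) product graph G_x:
# vertex (v, v') of G_x maps to index v * 2 + v' under np.kron's ordering.
A_prod = np.kron(A1, A2)

# A walk of length l on G_x is a pair of simultaneous walks on G and G';
# the total number of such pairs is 1^T A_prod^l 1.
l = 3
ones = np.ones(A_prod.shape[0])
common = ones @ np.linalg.matrix_power(A_prod, l) @ ones
print(common)   # equals (1^T A1^l 1) * (1^T A2^l 1), since A_prod^l = A1^l kron A2^l
```

The equality noted in the last comment holds for unlabeled graphs because $\mathbf{A}_\times^l = \mathbf{A}^l \otimes \mathbf{A}'^l$, so the sum of the entries of $\mathbf{A}_\times^l$ factorizes over the two graphs.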
There are different variants of the kernel. For instance, the $p$-step random walk kernel (where $p \in \mathbb{N}$) counts all pairs of matching walks up to length $p$ on two graphs. The number of matching walks can be obtained through the adjacency matrix $\mathbf{A}_\times$ of the product graph $G_\times$ (Vishwanathan et al., 2010) since a random walk on $G_\times$ is equivalent to a simultaneous random walk on the two graphs. Assuming a uniform distribution for the starting and stopping probabilities over the vertices of the two graphs, the $p$-step random walk kernel is defined as:

$$\kappa^{p}(G,G')=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\sum_{l=0}^{p}\lambda_{l}\mathbf{A}_{\times}^{l}\right]_{ij}$$

where $\lambda_0, \lambda_1, \lambda_2, \ldots, \lambda_p$ are positive, real-valued weights, and $\mathbf{A}_\times^{0}$ is the identity matrix, i. e., $\mathbf{A}_\times^{0} = \mathbf{I}$. For $p \to \infty$, we obtain $\kappa^{\infty}(G, G')$ which is known as the random walk kernel. It turns out that if the sequence of weights $\lambda_0, \lambda_1, \lambda_2, \ldots$ corresponds to the geometric sequence defined as $\lambda_l = \lambda^{l}$, then the limit $\kappa^{\infty}(G, G')$ can be computed analytically as follows:

$$k^{\infty}(G,G')=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\sum_{l=0}^{\infty}\lambda^{l}\mathbf{A}_{\times}^{l}\right]_{ij}=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\left(\mathbf{I}-\lambda\mathbf{A}_{\times}\right)^{-1}\right]_{ij}=\mathbf{1}^{\top}\left(\mathbf{I}-\lambda\mathbf{A}_{\times}\right)^{-1}\mathbf{1}\tag{1}$$

It is well-known that the geometric series of matrices $\mathbf{I}+\lambda\mathbf{A}_{\times}+(\lambda\mathbf{A}_{\times})^{2}+\ldots$ converges only if the largest-magnitude eigenvalue of $\mathbf{A}_\times$ (which is also the maximum eigenvalue if $G_\times$ is a graph with non-negative edge weights), denoted by $\mu_{\times}^{\max}$, is strictly smaller than $1/\lambda$. Therefore, the geometric random walk kernel $k^{\infty}$ is well-defined only if $\lambda < 1/\mu_{\times}^{\max}$. Interestingly, the maximum eigenvalue of $\mathbf{A}_\times$ is sandwiched between the average and the maximum of the vertex degrees of $G_\times$ (Brouwer & Haemers, 2011). We thus have that $\bar{d}_{\times} \leq \mu_{\times}^{\max} \leq \Delta_{\times}$, and by setting $\lambda < 1/\Delta_{\times}$, the geometric series of matrices is guaranteed to converge.

By defining initial and stopping probability distributions over the vertices of $G$ and $G'$, we can obtain a probabilistic variant of the geometric random walk kernel. Let $\mathbf{p}$ and $\mathbf{p}'$ be two vectors that represent the initial probability distributions over the vertices of $G$ and $G'$. Likewise, let $\mathbf{q}$ and $\mathbf{q}'$ denote stopping probability distributions over the vertices of $G$ and $G'$. For uniform distributions for the initial and stopping probabilities over the vertices of the two graphs, we have $p_i = q_i = 1/|V|$ and $p'_i = q'_i = 1/|V'|$. Then, $\mathbf{p}_\times = \mathbf{p}\,\mathbf{p}'^{\top}$ and $\mathbf{q}_\times = \mathbf{q}\,\mathbf{q}'^{\top}$, and the variant of the geometric random walk kernel can be computed as $k^{\infty}(G, G') = \mathrm{vec}(\mathbf{q}_\times)^{\top}\left(\mathbf{I}-\lambda\mathbf{A}_{\times}\right)^{-1}\mathrm{vec}(\mathbf{p}_\times)$.

## 4 Geometric Random Walk Graph Neural Networks

The proposed GRWNN model maps input graphs to vectors by comparing them against a number of "hidden graphs", i. e., graphs whose adjacency and attribute matrices are trainable. The function that we employ to compare the input graphs against the "hidden graphs" is the geometric random walk graph kernel, one of the most well-studied kernels between graphs (Gärtner et al., 2003; Mahé et al., 2004; Vishwanathan et al., 2010). The proposed GRWNN model contains $N$ "hidden graphs" in total. The graphs may differ from each other in terms of size (i. e., number of vertices). Furthermore, the vertices and/or edges of those graphs can be annotated with continuous multi-dimensional features. Before turning to how these graphs are learned, a small numerical sketch of the kernel in Equation 1 is given below.
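The following sketch evaluates the closed form of Equation 1; the graphs and the value of $\lambda$ are toy assumptions, and the maximum-degree bound $\Delta_\times$ is used as a conservative convergence check.

```python
import numpy as np

def geometric_rw_kernel(A1, A2, lam):
    """k_inf(G, G') = 1^T (I - lam * A_x)^{-1} 1 (Equation 1), computed by a
    linear solve rather than an explicit matrix inverse."""
    A_prod = np.kron(A1, A2)                    # adjacency of the product graph
    max_deg = A_prod.sum(axis=1).max()          # Delta_x, an upper bound on mu_max
    assert lam < 1.0 / max_deg, "lam too large: the geometric series may diverge"
    n = A_prod.shape[0]
    # Solve (I - lam * A_x) z = 1, then the kernel value is 1^T z.
    z = np.linalg.solve(np.eye(n) - lam * A_prod, np.ones(n))
    return z.sum()

A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A2 = np.array([[0, 1], [1, 0]], dtype=float)
print(geometric_rw_kernel(A1, A2, lam=0.1))
```

The assertion implements the sufficient condition $\lambda < 1/\Delta_\times$ discussed above; values between $1/\Delta_\times$ and $1/\mu_\times^{\max}$ would also converge, but the degree bound is cheap to check.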
As mentioned above, both the structure and the vertex attributes (if any) of these "hidden graphs" are trainable. Thus, the adjacency matrix of a "hidden graph" $G_i$ of size $n$ is described by a trainable matrix $\mathbf{W}_i \in \mathbb{R}^{n \times n}$, while the vertex attributes are contained in the rows of another trainable matrix $\mathbf{Q}_i \in \mathbb{R}^{n \times d}$. Note that the "hidden graphs" correspond to weighted graphs, which can be directed or undirected graphs with or without self-loops. In our implementation, we constrain them to be undirected graphs without self-loops ($n(n-1)/2$ trainable parameters in total). To compare an input graph $G$ against a "hidden graph" $G_i$, the model uses the geometric random walk kernel that was introduced in the previous section:

$$k^{\infty}(G,G_{i})=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\sum_{l=0}^{\infty}\lambda^{l}\mathbf{A}_{\times}^{l}\right]_{ij}=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\left(\mathbf{I}-\lambda\mathbf{A}_{\times}\right)^{-1}\right]_{ij}=\mathbf{1}^{\top}\left(\mathbf{I}-\lambda\mathbf{A}_{\times}\right)^{-1}\mathbf{1}\tag{2}$$

where $\mathbf{A}_\times = \mathbf{A} \otimes \mathbf{A}_i$ and $\mathbf{A}_i$ is the adjacency matrix of "hidden graph" $G_i$ obtained as $\mathbf{A}_i = f(\mathbf{W}_i)$. Here, $f(\cdot)$ is a function whose output is non-negative and potentially bounded, e. g., $f(\mathbf{W}_i) = \mathrm{ReLU}(\mathbf{W}_i)$ or $f(\mathbf{W}_i) = \sigma(\mathbf{W}_i)$ where $\sigma(\cdot)$ denotes the sigmoid activation function. Then, given the set $\mathcal{G}_h = \{G_1, G_2, \ldots, G_N\}$ where $G_1, G_2, \ldots, G_N$ denote the $N$ "hidden graphs", we can compute $N$ kernel values in total. These kernel values can be thought of as features of the input graph, and can be concatenated to form a vector representation of the input graph. This vector can then be fed into a fully-connected neural network to produce the output.

Following Vishwanathan et al. (2010), to compute the geometric random walk graph kernel shown in Equation 2 above, we employ a two-step approach. We first need to solve the following linear system for $\mathbf{z}$:

$$(\mathbf{I}-\lambda\mathbf{A}_{\times})\,\mathbf{z}=\mathbf{1}$$

Then, given $\mathbf{z}$, we can compute the kernel value as $k^{\infty}(G, G_i) = \mathbf{1}^{\top}\mathbf{z}$. To solve the above linear system, we capitalize on fixed-point methods. We first rewrite the above system as:

$$\mathbf{z}=\mathbf{1}+\lambda\mathbf{A}_{\times}\,\mathbf{z}\tag{3}$$

Now, solving for $\mathbf{z}$ is equivalent to finding a fixed point of Equation 3 (Nocedal & Wright, 2006). Such a fixed point can be obtained by simply iterating this update in the forward pass. Letting $\mathbf{z}^{(t)}$ denote the value of $\mathbf{z}$ at iteration $t$, we set $\mathbf{z}^{(0)} = \mathbf{1}$, and then compute the following:

$$\mathbf{z}^{(t+1)}=\mathbf{1}+\lambda\mathbf{A}_{\times}\,\mathbf{z}^{(t)}$$

repeatedly until $\|\mathbf{z}^{(t+1)}-\mathbf{z}^{(t)}\| < \epsilon$, where $\|\cdot\|$ denotes the Euclidean norm and $\epsilon$ is some predefined tolerance, or until a specific number of iterations has been reached. As mentioned in the previous section, the above problem is guaranteed to converge if the maximum eigenvalue of $\mathbf{A}_\times$ is strictly smaller than $1/\lambda$, thus if all the eigenvalues of $\lambda\mathbf{A}_\times$ lie inside the unit disk. If the values of the elements of $\mathbf{A}_i$ are bounded, we can compute an upper bound on the maximum degree of $G_\times$ and set the parameter $\lambda$ to some value smaller than the inverse of the upper bound.
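The forward iteration above can be sketched in a few lines of NumPy; the tolerance and iteration cap are illustrative choices, and $\mathbf{A}_\times$ is formed explicitly here only for clarity (the paragraph that follows shows how to avoid this).

```python
import numpy as np

def fixed_point_kernel(A_prod, lam, tol=1e-6, max_iter=100):
    """Solve z = 1 + lam * A_x z (Equation 3) by forward iteration,
    then return the kernel value 1^T z."""
    z = np.ones(A_prod.shape[0])                  # z^(0) = 1
    for _ in range(max_iter):
        z_next = 1.0 + lam * (A_prod @ z)         # one fixed-point update
        if np.linalg.norm(z_next - z) < tol:      # ||z^(t+1) - z^(t)|| < tol
            return z_next.sum()
        z = z_next
    return z.sum()                                # iteration cap reached
```

For $\lambda$ below the degree bound, the value returned by this iteration matches the direct linear solve of the previous sketch up to the tolerance.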
Efficient implementation. If the input graph $G$ consists of $n$ vertices and a "hidden graph" $G_i$ consists of $c$ vertices, then $\mathbf{A}_\times$ is an $nc \times nc$ matrix. Thus, multiplying $\mathbf{A}_\times$ by some vector inside the fixed-point algorithm requires $O(n^2c^2)$ operations in total. Fortunately, to compute the kernel, it is not necessary to explicitly compute matrix $\mathbf{A}_\times$. Specifically, the Kronecker product and vec operator are linked by the well-known property (Bernstein, 2009):

$$\mathrm{vec}(\mathbf{A}\,\mathbf{B}\,\mathbf{C})=(\mathbf{C}^{\top}\otimes\mathbf{A})\,\mathrm{vec}(\mathbf{B})\tag{4}$$

Then, let $\mathbf{Z} \in \mathbb{R}^{c \times n}$ be a matrix such that $\mathbf{Z} = \mathrm{vec}^{-1}(\mathbf{z})$. Recall also that $\mathbf{A}_\times = \mathbf{A} \otimes \mathbf{A}_i$. Based on the above and on Equation 4, we can write:

$$\mathbf{A}_{\times}\,\mathbf{z}=(\mathbf{A}\otimes\mathbf{A}_{i})\,\mathrm{vec}(\mathbf{Z})=\mathrm{vec}(\mathbf{A}_{i}\,\mathbf{Z}\,\mathbf{A}^{\top})=\mathrm{vec}\big(\mathbf{A}_{i}\,\mathrm{vec}^{-1}(\mathbf{z})\,\mathbf{A}^{\top}\big)\tag{5}$$

The above matrix-vector product can be computed in $O(n^2c)$ time in case $n > c$. If $\mathbf{A}$ is sparse, then it can be computed yet more efficiently. Furthermore, we do not need to compute and store matrix $\mathbf{A}_\times$, which might not be feasible due to high memory requirements. Then, instead of solving the system of Equation 3, we solve the following equivalent system:

$$\mathbf{z}=\mathbf{1}+\lambda\,\mathrm{vec}\big(\mathbf{A}_{i}\,\mathrm{vec}^{-1}(\mathbf{z})\,\mathbf{A}^{\top}\big)\tag{6}$$

Node attributes. In many real-world problems, vertices of the input graphs are annotated with real-valued multi-dimensional vertex attributes. We next generalize the proposed model to graphs that contain such vertex attributes. Let $\mathbf{X} \in \mathbb{R}^{n \times d}$ denote the matrix that contains the vertex attributes of the input graph $G$. As already mentioned, we also associate a trainable matrix $\mathbf{Q}_i \in \mathbb{R}^{c \times d}$ with each "hidden graph" $G_i$, where $c$ is the number of vertices of $G_i$. Then, let $\mathbf{S} = \sigma(\mathbf{Q}_i\,\mathbf{X}^{\top}) \in \mathbb{R}^{c \times n}$, where $\sigma(\cdot)$ denotes the sigmoid function. The $(j,k)$-th element of matrix $\mathbf{S}$ is equal to the inner product (followed by a sigmoid) between the attributes of the $j$-th vertex of the "hidden graph" $G_i$ and the $k$-th vertex of the input graph $G$. Roughly speaking, this matrix encodes the similarity between the attributes of the vertices of the two graphs. Note that instead of directly using matrix $\mathbf{X}$, we can first transform it into a matrix $\tilde{\mathbf{X}}$ using a single- or a multi-layer perceptron.

Let $\mathbf{s} = \mathrm{vec}(\mathbf{S})$ where $\mathbf{s} \in \mathbb{R}^{nc}$. Each element of $\mathbf{s}$ corresponds to a vertex of $G_\times$ and quantifies the similarity between the attributes of the pair of vertices (i. e., one from $G$ and one from $G_i$) it represents. Then, we can compute the geometric random walk kernel as follows:

$$k^{\infty}(G,G')=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\sum_{l=0}^{\infty}\lambda^{l}\big((\mathbf{s}\,\mathbf{s}^{\top})\odot\mathbf{A}_{\times}\big)^{l}\right]_{ij}=\sum_{i=1}^{|V_{\times}|}\sum_{j=1}^{|V_{\times}|}\left[\left(\mathbf{I}-\lambda(\mathbf{s}\,\mathbf{s}^{\top})\odot\mathbf{A}_{\times}\right)^{-1}\right]_{ij}=\mathbf{1}^{\top}\big(\mathbf{I}-\lambda(\mathbf{s}\,\mathbf{s}^{\top})\odot\mathbf{A}_{\times}\big)^{-1}\mathbf{1}\tag{7}$$

Note that since the elements of $\mathbf{s}$ take values between 0 and 1, the same applies to the elements of the outer product $\mathbf{s}\,\mathbf{s}^{\top}$. Therefore, the maximum degree of the vertices of the graph derived from the matrix $(\mathbf{s}\,\mathbf{s}^{\top})\odot\mathbf{A}_\times$ is not greater than that of the graph derived from matrix $\mathbf{A}_\times$, and we thus do not need to set $\lambda$ to a new value.
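Returning to the efficient implementation, the identity in Equations 4 and 5 is easy to sanity-check numerically. In the sketch below all matrices are random toy inputs, and `order="F"` implements the column-stacking convention of the vec operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 3
A = rng.random((n, n))     # stands in for the input graph adjacency
Ai = rng.random((c, c))    # stands in for a "hidden graph" adjacency
z = rng.random(n * c)

# Naive O(n^2 c^2) product with the explicit Kronecker matrix.
naive = np.kron(A, Ai) @ z

# O(n^2 c) product via Equation 5: A_x z = vec(Ai vec^{-1}(z) A^T).
# vec stacks columns, so vec^{-1} is a column-major ("F" order) reshape to c x n.
Z = z.reshape((c, n), order="F")
efficient = (Ai @ Z @ A.T).reshape(-1, order="F")

print(np.allclose(naive, efficient))   # True
```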
To compute the kernel, we then need to solve the following system:

$$\mathbf{z}=\mathbf{1}+\lambda\big((\mathbf{s}\,\mathbf{s}^{\top})\odot\mathbf{A}_{\times}\big)\,\mathbf{z}\tag{8}$$

Again, naively computing the right part of the above equation is expensive and requires $O(n^2c^2)$ operations in total. The following result shows that in fact we can compute the above in a more time- and memory-efficient manner.

Proposition 1. Let $\mathbf{A}_1 \in \mathbb{R}^{n \times n}$ and $\mathbf{A}_2 \in \mathbb{R}^{m \times m}$ *be two real matrices. Let also* $\mathbf{s}, \mathbf{y} \in \mathbb{R}^{nm}$ *be two real vectors. Then, we have that:*

$$\big(\mathbf{s}\,\mathbf{s}^{\top}\odot(\mathbf{A}_{1}\otimes\mathbf{A}_{2})\big)\,\mathbf{y}=\mathbf{s}\odot\mathrm{vec}\big(\mathbf{A}_{2}\;\mathrm{vec}^{-1}(\mathbf{y}\odot\mathbf{s})\,\mathbf{A}_{1}^{\top}\big)$$

Based on the above result (the proof is left to the appendix), the system that needs to be solved is:

$$\mathbf{z}=\mathbf{1}+\lambda\Big(\mathbf{s}\odot\mathrm{vec}\big(\mathbf{A}_{i}\,\mathrm{vec}^{-1}(\mathbf{z}\odot\mathbf{s})\,\mathbf{A}^{\top}\big)\Big)$$

Since we store matrix $\mathbf{A}$ as a sparse matrix, if there are $O(m)$ non-zero entries in $\mathbf{A}$, then computing one iteration of the above equation for all $N$ "hidden graphs" takes $O\big(Nc(n(d+c)+m)\big)$ computational time, where $d$ is the size of the vertex attributes.
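Proposition 1 can be verified numerically in the same fashion; all inputs below are random toy data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A1 = rng.random((n, n))
A2 = rng.random((m, m))
s = rng.random(n * m)
y = rng.random(n * m)

# Left-hand side: (s s^T elementwise-multiplied with A1 kron A2), applied to y.
lhs = (np.outer(s, s) * np.kron(A1, A2)) @ y

# Right-hand side: s * vec(A2 vec^{-1}(y * s) A1^T), never forming the nm x nm matrix.
Y = (y * s).reshape((m, n), order="F")          # vec^{-1}(y * s), column-major
rhs = s * (A2 @ Y @ A1.T).reshape(-1, order="F")

print(np.allclose(lhs, rhs))                    # True
```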
Implicit differentiation. Clearly, iteratively computing Equation 3 or Equation 8 to find the fixed point corresponds to a differentiable module. However, to train the model, we need to backpropagate the error through the fixed-point solver in the backward pass. That would require storing all the intermediate terms, which could be prohibitive in practice. Fortunately, thanks to recent advances in implicit layers and equilibrium models (Bai et al., 2019), this can be performed in a simple and efficient manner which requires constant memory, and assumes no knowledge of the fixed-point solver. Specifically, based on ideas from Bai et al. (2019), we derive the form of implicit backpropagation specific to the employed fixed-point iteration layer.

Theorem 1. Let $f_\theta$ *be the system of Equation 3 or Equation 8, and* $\mathbf{z}^{\star} \in \mathbb{R}^{nc}$ *be a solution to that linear system. Let also* $g_\theta(\mathbf{z}^{\star};\mathbf{A},\mathbf{X}) = f_\theta(\mathbf{z}^{\star};\mathbf{A},\mathbf{X}) - \mathbf{z}^{\star}$. *Since* $\mathbf{z}^{\star}$ *is a fixed point, we have that* $g_\theta(\mathbf{z}^{\star};\mathbf{A},\mathbf{X}) = \mathbf{0}$ *and* $\mathbf{z}^{\star}$ *is thus a root of* $g_\theta$. *Let* $y \in \mathbb{R}$ *denote the ground-truth target of the input sample,* $h:\mathbb{R}\to\mathbb{R}$ *be any differentiable function and let* $\mathcal{L}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ *be a loss function that computes:*

$$\ell=\mathcal{L}\big(h(\mathbf{1}^{\top}\mathbf{z}^{\star}),y\big)=\mathcal{L}\Big(h\big(\mathbf{1}^{\top}\mathrm{FindRoot}(g_{\theta};\mathbf{A},\mathbf{X})\big),y\Big)\tag{9}$$

*Then, the gradient of the loss w.r.t.* $(\cdot)$ *(e. g.,* $\theta$, $\mathbf{A}$ *or* $\mathbf{X}$*) is:*

$$\frac{\partial\ell}{\partial(\cdot)}=-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\Big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\Big)\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}=-\frac{\partial\ell}{\partial h}\frac{\partial h}{\partial\mathbf{z}^{\star}}\Big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\Big)\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}\tag{10}$$

*where* $J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}$ *is the inverse Jacobian of* $g_\theta$ *evaluated at* $\mathbf{z}^{\star}$.

The above formula gives a form for the necessary Jacobian without needing to backpropagate through the method used to obtain the fixed point. Thus, as mentioned above, we only need to find the fixed point, and we can compute the necessary Jacobians at this specific point using the above analytical form. No intermediate terms of the iterative method used to compute the fixed point need to be stored in memory, while there is also no need to unroll the forward computations within an automatic differentiation layer. Still, to compute the analytical backward gradient at the solution of the fixed-point equation, it is necessary to first compute the exact inverse Jacobian $J_{g_{\theta}}^{-1}$, which has a cubic cost. As shown in Bai et al. (2019), we can instead compute the $-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\big)$ term by solving the following linear system:

$$\mathbf{x}=\left(\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)^{\top}\mathbf{x}+\left(\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\right)^{\top}$$

which in fact is also a fixed-point equation and can be solved via some iterative procedure. Note that the first term of the above equation is a vector-Jacobian product which can be efficiently computed via autograd packages (e. g., PyTorch (Paszke et al., 2017)) for any $\mathbf{x}$, without explicitly writing out the Jacobian matrix. Finally, we can compute $\frac{\partial\ell}{\partial(\cdot)}$ as follows:

$$\frac{\partial\ell}{\partial(\cdot)}=\left(\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}\right)^{\top}\mathbf{x}$$

where again this product is itself a vector-Jacobian product, computable via normal automatic differentiation packages.
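A compact PyTorch sketch of how such an implicit layer could be wired is shown below, for a single input graph and a single "hidden graph" without attribute weighting. The class name, iteration caps, and tolerances are our own illustrative choices, not the paper's exact implementation: the forward pass runs the fixed-point iteration of Equation 6 outside the autograd tape, and the backward pass solves the adjoint fixed-point system above via vector-Jacobian products.

```python
import torch

class GeometricRWKernel(torch.autograd.Function):
    """Sketch of an implicit fixed-point layer for the geometric random walk
    kernel (no attribute weighting). Iteration caps and tolerances are
    illustrative choices."""

    @staticmethod
    def forward(ctx, A, Ai, lam):
        c, n = Ai.shape[0], A.shape[0]
        with torch.no_grad():                        # solver runs off the autograd tape
            z = torch.ones(c, n, dtype=A.dtype)
            for _ in range(100):
                z_next = 1.0 + lam * (Ai @ z @ A.T)  # f_theta(z): Equation 6 in matrix form
                if torch.norm(z_next - z) < 1e-6:
                    z = z_next
                    break
                z = z_next
        ctx.save_for_backward(A, Ai, z)
        ctx.lam = lam
        return z.sum()                               # kernel value 1^T z*

    @staticmethod
    def backward(ctx, grad_out):
        A, Ai, z_star = ctx.saved_tensors
        lam = ctx.lam
        v = grad_out * torch.ones_like(z_star)       # (dl/dz*)^T for l = grad_out * 1^T z*
        # Solve x = (df/dz*)^T x + (dl/dz*)^T by fixed-point iteration; since f is
        # linear in z, the vector-Jacobian product is simply lam * Ai^T x A.
        x = v.clone()
        for _ in range(100):
            x_next = lam * (Ai.T @ x @ A) + v
            if torch.norm(x_next - x) < 1e-6:
                x = x_next
                break
            x = x_next
        # dl/dA and dl/dAi equal (df/d.)^T x: one differentiable evaluation of f at z*.
        A_req = A.detach().requires_grad_(True)
        Ai_req = Ai.detach().requires_grad_(True)
        with torch.enable_grad():
            f = 1.0 + lam * (Ai_req @ z_star @ A_req.T)
            grad_A, grad_Ai = torch.autograd.grad(f, (A_req, Ai_req), grad_outputs=x)
        return grad_A, grad_Ai, None
```

Calling `GeometricRWKernel.apply(A, f(W_i), lam)` for each hidden graph would then let gradients flow to the trainable matrices $\mathbf{W}_i$ through this constant-memory backward pass.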
## 5 Experimental Evaluation

We next evaluate the proposed GRWNN model on standard graph classification datasets.

## 5.1 Real-World Datasets

Datasets. We evaluate the proposed model on 10 publicly available graph classification datasets, including 5 bio/chemo-informatics datasets: MUTAG, D&D, NCI1, PROTEINS, ENZYMES, and 5 social interaction datasets: IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI-5K, COLLAB (Kersting et al., 2016). To show that the proposed model also scales to larger datasets, we additionally use two Open Graph Benchmark (OGB) datasets (Hu et al., 2020). Specifically, we use a molecular property prediction dataset: ogbg-molhiv, and a code summarization dataset: ogbg-code2. More details about the datasets are given in the appendix.

Experimental Setup. In the case of the 10 standard benchmark datasets, we compare the proposed model against the following three graph kernels: (1) graphlet kernel (GR) (Shervashidze et al., 2009), (2) shortest path kernel (SP) (Borgwardt & Kriegel, 2005), and (3) Weisfeiler-Lehman subtree kernel (WL) (Shervashidze et al., 2011), and against the following six neural network models: (1) DGCNN (Zhang et al., 2018a), (2) DiffPool (Ying et al., 2018), (3) ECC (Simonovsky & Komodakis, 2017), (4) GIN (Xu et al., 2019), (5) GraphSAGE (Hamilton et al., 2017), and (6) RWNN (Nikolentzos & Vazirgiannis, 2020). We also compare the proposed model against GRWNN-fixed, a variant of the model whose "hidden graphs" are randomly initialized and kept fixed during training. To evaluate the proposed model, we employ the experimental protocol proposed in (Errica et al., 2020). Therefore, we perform 10-fold cross-validation to obtain an estimate of the generalization performance of each method, while within each fold a model is selected based on a 90%/10% split of the training set. We use exactly the same splits as in (Errica et al., 2020) and in (Nikolentzos & Vazirgiannis, 2020); hence, for the different datasets, we use the results reported in these two papers. For all datasets, we set the batch size to 64 and the number of epochs to 300. We use the Adam optimizer with an initial learning rate of 0.001 and apply an adaptive learning rate decay based on validation results. We use a 1-layer perceptron to transform the vertex attributes. We apply layer normalization (Ba et al., 2016) on the generated graph representations (i. e., the vector consisting of kernel values). The hyper-parameters we tune for each dataset are: (1) the number of "hidden graphs" ∈ {32, 64}, (2) the number of vertices of the "hidden graphs" ∈ {5, 10}, (3) the hidden-dimension size of the vertex features ∈ {32, 64} for the bio/chemo-informatics datasets and ∈ {8, 16} for the social interaction datasets, and (4) the dropout ratio ∈ {0, 0.1}.

For both OGB datasets, we used the available predefined splits. We compare the proposed model against the following neural network models: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019), GCN-FLAG (Kong et al., 2020), GIN-FLAG (Kong et al., 2020), PNA (Corso et al., 2020), GSN (Bouritsas et al., 2020), HIMP (Fey et al., 2020), and DGN (Beaini et al., 2020). For all models, we use the results that are reported in the respective papers. For ogbg-code2, we did not add the inverse edges to the graphs. All reported results are averaged over 10 runs. For both OGB datasets, we set the batch size to 128. For the ogbg-molhiv dataset, we set the number of epochs to 300, the number of "hidden graphs" to 200, the number of vertices of the "hidden graphs" to 5, the hidden-dimension size of the vertex features to 128, and the dropout ratio to 0.1. Furthermore, we employ the probabilistic variant of the geometric random walk kernel and use uniform distributions for the initial and stopping probabilities over the vertices of the two compared graphs. For the ogbg-code2 dataset, we set the number of epochs to 100, the number of "hidden graphs" to 200, the number of vertices of the "hidden graphs" to 5, the hidden-dimension size of the vertex features to 128, and the dropout ratio to 0.2. For both datasets, we apply layer normalization (Ba et al., 2016) on the generated graph representations.

Implementation Details. To set the value of parameter $\lambda$, we assume a transductive setting, where we are given a collection of graphs beforehand. Therefore, we can find the vertex of highest degree across all graphs and set the value of $\lambda$ accordingly. In the inductive learning setting, since we do not know a priori the target graphs that the model may receive in the future, $\lambda$ should be small enough so that $\lambda < 1/\mu_{\times}^{\max}$ for any pair of an unseen graph and a "hidden graph". This is a limitation of the proposed model, since if the model receives at test time a graph whose largest eigenvalue is higher than expected, we need to set $\lambda$ to a smaller value and retrain the model. To compute the fixed point of Equation 3 or Equation 8, we followed the naive approach where we simply performed the forward iteration multiple times. In practice, there are more efficient fixed-point iteration methods, such as Anderson Acceleration (Walker & Ni, 2011), that converge faster than the naive forward iteration at the cost of some additional memory complexity.
However, as shown next, we found that in our setting the naive forward iteration converges in a small number of steps. Moreover, the additional cost that more efficient methods introduce through the generation and manipulation of new tensors made them overall slower than the naive forward iteration, even though they required fewer iterations to converge. The model was implemented with PyTorch (Paszke et al., 2019), and all experiments were run on a single machine equipped with an NVidia Titan Xp GPU card.

Table 1: Classification accuracy (± standard deviation) of the proposed model and the baselines on the 5 chemo/bio-informatics and on the 5 social interaction benchmark datasets. OOR means Out of Resources, either time (> 72 hours for a single training) or GPU memory. Best performance per dataset in **bold**, among the neural network architectures underlined.

| | MUTAG | D&D | NCI1 | PROTEINS | ENZYMES |
|-------------|--------------|--------------|--------------|--------------|--------------|
| SP | 80.2 (± 6.5) | 78.1 (± 4.1) | 72.7 (± 1.4) | 75.3 (± 3.8) | 38.3 (± 8.0) |
| GR | 80.8 (± 6.4) | 75.4 (± 3.4) | 61.8 (± 1.7) | 71.6 (± 3.1) | 25.1 (± 4.4) |
| WL | 84.6 (± 8.3) | 78.1 (± 2.4) | 84.8 (± 2.5) | 73.8 (± 4.4) | 50.3 (± 5.7) |
| DGCNN | 84.0 (± 6.7) | 76.6 (± 4.3) | 76.4 (± 1.7) | 72.9 (± 3.5) | 38.9 (± 5.7) |
| DiffPool | 79.8 (± 7.1) | 75.0 (± 3.5) | 76.9 (± 1.9) | 73.7 (± 3.5) | 59.5 (± 5.6) |
| ECC | 75.4 (± 6.2) | 72.6 (± 4.1) | 76.2 (± 1.4) | 72.3 (± 3.4) | 29.5 (± 8.2) |
| GIN | 84.7 (± 6.7) | 75.3 (± 2.9) | 80.0 (± 1.4) | 73.3 (± 4.0) | 59.6 (± 4.5) |
| GraphSAGE | 83.6 (± 9.6) | 72.9 (± 2.0) | 76.0 (± 1.8) | 73.0 (± 4.5) | 58.2 (± 6.0) |
| 1-step RWNN | 89.2 (± 4.3) | 77.6 (± 4.7) | 71.4 (± 1.8) | 74.7 (± 3.3) | 56.7 (± 5.2) |
| 2-step RWNN | 88.1 (± 4.8) | 76.9 (± 4.6) | 73.0 (± 2.0) | 74.1 (± 2.8) | 57.4 (± 4.9) |
| GRWNN-fixed | 81.9 (± 6.4) | 73.2 (± 3.5) | 66.9 (± 2.4) | 74.6 (± 4.0) | 56.8 (± 3.7) |
| GRWNN | 83.4 (± 5.6) | 75.6 (± 4.6) | 67.7 (± 2.2) | 74.9 (± 3.5) | 62.7 (± 5.2) |

| | IMDB-BINARY | IMDB-MULTI | REDDIT-BINARY | REDDIT-MULTI-5K | COLLAB |
|-------------|--------------|--------------|--------------|--------------|--------------|
| SP | 57.7 (± 4.1) | 39.8 (± 3.7) | 89.0 (± 1.0) | 51.1 (± 2.2) | 79.9 (± 2.7) |
| GR | 63.3 (± 2.7) | 39.6 (± 3.0) | 76.6 (± 3.3) | 38.1 (± 2.3) | 71.1 (± 1.4) |
| WL | 72.8 (± 4.5) | 51.2 (± 6.5) | 74.9 (± 1.8) | 49.6 (± 2.0) | 78.0 (± 2.0) |
| DGCNN | 69.2 (± 3.0) | 45.6 (± 3.4) | 87.8 (± 2.5) | 49.2 (± 1.2) | 71.2 (± 1.9) |
| DiffPool | 68.4 (± 3.3) | 45.6 (± 3.4) | 89.1 (± 1.6) | 53.8 (± 1.4) | 68.9 (± 2.0) |
| ECC | 67.7 (± 2.8) | 43.5 (± 3.1) | OOR | OOR | OOR |
| GIN | 71.2 (± 3.9) | 48.5 (± 3.3) | 89.9 (± 1.9) | 56.1 (± 1.7) | 75.6 (± 2.3) |
| GraphSAGE | 68.8 (± 4.5) | 47.6 (± 3.5) | 84.3 (± 1.9) | 50.0 (± 1.3) | 73.9 (± 1.7) |
| 1-step RWNN | 70.8 (± 4.8) | 47.8 (± 3.8) | 90.4 (± 1.9) | 51.7 (± 1.5) | 71.7 (± 2.1) |
| 2-step RWNN | 70.6 (± 4.4) | 48.8 (± 2.9) | 90.3 (± 1.8) | 51.7 (± 1.4) | 71.3 (± 2.1) |
| GRWNN-fixed | 72.1 (± 4.1) | 48.1 (± 3.6) | 82.2 (± 2.4) | 53.1 (± 1.8) | 71.3 (± 1.9) |
| GRWNN | 72.8 (± 4.2) | 49.0 (± 2.9) | 90.0 (± 1.8) | 54.4 (± 1.7) | 72.1 (± 1.9) |

Results. Table 1 illustrates average prediction accuracies and standard deviations for the 10 standard graph classification datasets. We observe that the proposed GRWNN model is the best-performing method on 2 out of the 10 datasets, while it provides the second-best and third-best accuracy on 3 and 1 of the remaining 8 datasets, respectively.
The most successful method is the WL kernel, which performs best on 4 of the 10 datasets, while it outperforms the other approaches by quite wide margins in most cases. Among the neural network models, the proposed GRWNN model outperforms the baseline models on 4 out of the 10 datasets. On the remaining 6 datasets, GIN is the best-performing model on half of them, and RWNN on the other half. On the ENZYMES and IMDB-BINARY datasets, our model offers respective absolute improvements of 3.1% and 1.6% in accuracy over GIN. Overall, the model exhibits highly competitive performance on the graph classification datasets, while the achieved accuracies follow different patterns from all the baseline methods. Furthermore, the proposed model outperforms GRWNN-fixed on all datasets, demonstrating that the set of trainable "hidden graphs" is an indispensable component of the model.

The table shown in Figure 1a illustrates the performance on the two OGB datasets. Note that the proposed model does not utilize the edge features that are provided for the different datasets. Still, we can see that it managed to outperform several of the baselines on the ogbg-molhiv dataset, where it achieved the fourth-best ROC-AUC. On the ogbg-code2 dataset, GRWNN outperformed GIN, while it achieved an F1-score similar to that of GCN. However, all three of these models achieved a much smaller F1-score than the one achieved by PNA, which is the best-performing model.

| Method | ogbg-molhiv | ogbg-code2 |
|-----------|--------------|--------------|
| GCN | 76.06 ± 0.97 | 15.07 ± 0.18 |
| GIN | 75.58 ± 1.40 | 14.95 ± 0.23 |
| GCN+FLAG | 76.83 ± 1.02 | - |
| GIN+FLAG | 76.54 ± 1.14 | - |
| GSN | 77.99 ± 1.00 | - |
| HIMP | 78.80 ± 0.82 | - |
| PNA | 79.05 ± 1.32 | 15.70 ± 0.32 |
| DGN | 79.70 ± 0.97 | - |
| GRWNN | 78.38 ± 0.99 | 15.03 ± 0.21 |

(a) Performance of the proposed model and the baselines on the OGB datasets. Reported values correspond to ROC-AUC scores for ogbg-molhiv and F1-scores for ogbg-code2.

![9_image_0.png](9_image_0.png)

(b) Average number of iterations of the forward and backward pass (top), and training and validation accuracy (bottom) on the ENZYMES dataset for different values of λ.

Figure 1: Performance on the OGB datasets and impact of the value of parameter λ on the running time and performance of the model.

Table 2: Average running time per epoch (in seconds) of the proposed model and 3 baselines on the 10 graph classification datasets.

| | MUTAG | D&D | NCI1 | PROTEINS | ENZYMES | IMDB-BINARY | IMDB-MULTI | REDDIT-BINARY | REDDIT-MULTI-5K | COLLAB |
|--------|-------|------|------|----------|---------|-------------|------------|---------------|-----------------|--------|
| GIN | 0.03 | 0.34 | 0.50 | 0.14 | 0.07 | 0.13 | 0.19 | 0.81 | 2.43 | 0.98 |
| 2-RWNN | 0.03 | 0.19 | 0.57 | 0.16 | 0.08 | 0.14 | 0.20 | 0.43 | 1.13 | 0.89 |
| 3-RWNN | 0.04 | 0.23 | 0.76 | 0.21 | 0.11 | 0.18 | 0.28 | 0.55 | 1.42 | 1.04 |
| GRWNN | 0.07 | 0.77 | 0.94 | 0.32 | 0.17 | 0.24 | 0.34 | 2.93 | 6.19 | 2.69 |

As already discussed, the running time of the model depends on the number of fixed-point iterations that need to be performed until convergence. Figure 1b (top) illustrates the average number of iterations (across all batches) for the forward and backward pass for different values of λ and for each epoch. The model was trained on a single split of the ENZYMES dataset. The maximum eigenvalue of all graphs of the dataset is equal to 5.47, while the highest degree is equal to 9. The number of nodes of the "hidden graphs" was set to 5.
If the elements of the adjacency matrices of the "hidden graphs" take values no greater than one, then no vertex of G× can have a degree greater than 9 × 4 = 36. Thus, setting λ < 1/36 guarantees convergence. In practice, as shown in Figure 1b (top), we found that even if λ takes larger values, we only need a small number of iterations. For λ = 1/5, we can see that the fixed point equation fails to converge since the average number of iterations is close to 100 (which is the upper limit we have set). For λ = 1/10 and for smaller values of λ, the system converges in a small number of iterations. In terms of performance, as shown in Figure 1b (bottom), the model achieves the highest levels of validation accuracy for λ = 1/20 and λ = 1/30, while for λ = 1/5, the model yields much worse performance compared to the other values of λ. Similar behavior was observed on the other datasets.

## 5.2 Runtime Analysis

The proposed model is indeed computationally more expensive than the RWNN model due to the fixed point iteration, which is not parallelizable. However, as already discussed, we empirically observed that the forward iteration converges in a small number of steps, thus incurring a relatively small overhead in the model's running time. We have computed the average running time per epoch of the proposed model and 3 of the baselines (2-RWNN, 3-RWNN and GIN) on the 10 graph classification datasets. We use the same values for the common hyperparameters (e.g., number and size of hidden graphs for GRWNN and RWNN, and hidden dimension size, batch size, etc. for all 3 models). The results are shown in Table 2 (in seconds). As we can see, the proposed model is not much more expensive than the baselines. In fact, in most cases, its average running time per epoch is 1–3 times higher than that of the baselines, which is by no means prohibitive for real-world scenarios.

![10_image_0.png](10_image_0.png)

Figure 2: Performance on the ENZYMES dataset as a function of the number of "hidden graphs" (left) and the number of vertices of the "hidden graphs" (right).

## 5.3 Sensitivity Analysis

The proposed GRWNN model involves two main parameters: (1) the number of "hidden graphs", and (2) the number of vertices of "hidden graphs". We next investigate how these two parameters influence the performance of the GRWNN model. Specifically, in Figure 2, we examine how different values of these parameters affect the performance of GRWNN on the ENZYMES dataset. We observe that the accuracy on the test set increases as the number of "hidden graphs" increases. The number of "hidden graphs" seems to have a significant impact on the performance of the model. When the number of graphs is set equal to 5, the model achieves an accuracy smaller than 50%, while when the number of graphs is set equal to 100, it yields an accuracy greater than 65%. On the other hand, the number of vertices of the "hidden graphs" does not affect the performance of the model as much.

## 6 Conclusion

In this paper, we introduced the GRWNN model, a new architecture which generates graph representations by comparing the input graphs against "hidden graphs" using the geometric random walk kernel. To compute the kernel, the proposed model uses a fixed point iteration algorithm, and to update the "hidden graphs" during backpropagation, the model capitalizes on implicit differentiation and employs an efficient training scheme which requires only constant memory, regardless of the number of fixed-point iterations.
The model was evaluated on several graph classification datasets where it achieved competitive performance. The main contribution of this work is methodological and therefore, there are no negative societal impacts directly related to it. Although we are not aware of any malicious uses of GNNs so far, these models could potentially serve as the key component behind harmful applications. For instance, in a social network where vertices represent humans, one could use GNNs to discriminate people in terms of some desired characteristic which can potentially affect people and their rights. Still, we believe that the benefits that arise from the use of these models (e.g., drug discovery) outweigh the potential negative societal impacts. To mitigate the risks associated with the use of GNNs, the community could develop tools that can identify potential harms. For instance, the negative impact of the aforementioned harmful application could be mitigated by developing tools that can detect whether individuals or groups of people are subject to discrimination.

## References

Brandon Amos, Ivan Dario Jimenez Rodriguez, Jacob Sacks, Byron Boots, and J Zico Kolter. Differentiable MPC for End-to-end Planning and Control. In *Advances in Neural Information Processing Systems*, volume 31, pp. 8299–8310, 2018.

James Atwood and Don Towsley. Diffusion-Convolutional Neural Networks. In *Advances in Neural Information Processing Systems*, volume 29, pp. 1993–2001, 2016.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep Equilibrium Models. In *Advances in Neural Information Processing Systems*, volume 32, pp. 690–701, 2019.

Dominique Beaini, Saro Passaro, Vincent Létourneau, William L Hamilton, Gabriele Corso, and Pietro Liò. Directional Graph Networks. *arXiv preprint arXiv:2010.02863*, 2020.

Dennis S Bernstein. *Matrix mathematics: theory, facts, and formulas*. Princeton University Press, 2009.

K. Borgwardt, C. Ong, S. Schönauer, S. Vishwanathan, A. Smola, and H. Kriegel. Protein function prediction via graph kernels. *Bioinformatics*, 21(Suppl. 1):i47–i56, 2005.

K. M. Borgwardt and H. Kriegel. Shortest-path kernels on graphs. In *Proceedings of the 5th International Conference on Data Mining*, pp. 74–81, 2005.

Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. *arXiv preprint arXiv:2006.09252*, 2020.

Andries E Brouwer and Willem H Haemers. *Spectra of graphs*. Springer Science & Business Media, 2011.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral Networks and Locally connected networks on Graphs. In *2nd International Conference on Learning Representations*, 2014.

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural Ordinary Differential Equations. In *Advances in Neural Information Processing Systems*, volume 31, pp. 6572–6583, 2018.

Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal Neighbourhood Aggregation for Graph Nets. In *Advances in Neural Information Processing Systems*, volume 33, 2020.

Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, and J Zico Kolter. End-to-End Differentiable Physics for Learning and Control. *Advances in Neural Information Processing Systems*, 31:7178–7189, 2018.

A. Debnath, R. Lopez de Compadre, G. Debnath, A. Shusterman, and C. Hansch. Structure-Activity Relationship of Mutagenic Aromatic and Heteroaromatic Nitro Compounds. Correlation with Molecular Orbital Energies and Hydrophobicity. *Journal of Medicinal Chemistry*, 34(2):786–797, 1991.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In *Advances in Neural Information Processing Systems*, volume 29, pp. 3837–3845, 2016.

Nima Dehmamy, Albert-Laszlo Barabasi, and Rose Yu. Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

P. Dobson and A. Doig. Distinguishing Enzyme Structures from Non-enzymes Without Alignments. *Journal of Molecular Biology*, 330(4):771–783, 2003.

David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In *Advances in Neural Information Processing Systems*, volume 28, 2015.

Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A Fair Comparison of Graph Neural Networks for Graph Classification. In *Proceedings of the International Conference on Learning Representations*, 2020.

Matthias Fey, Jan-Gin Yuen, and Frank Weichert. Hierarchical Inter-Message Passing for Learning on Molecular Graphs. *arXiv preprint arXiv:2006.12179*, 2020.

Claudio Gallicchio and Alessio Micheli. Fast and Deep Graph Neural Networks. In *Proceedings of the 34th AAAI Conference on Artificial Intelligence*, pp. 3898–3905, 2020.

Thomas Gärtner, Peter Flach, and Stefan Wrobel. On Graph Kernels: Hardness Results and Efficient Alternatives. In *Learning Theory and Kernel Machines*, pp. 129–143, 2003.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural Message Passing for Quantum Chemistry. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 1263–1272, 2017.

Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, and Laurent El Ghaoui. Implicit Graph Neural Networks. In *Advances in Neural Information Processing Systems*, volume 33, pp. 11984–11995, 2020.

Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. In *Advances in Neural Information Processing Systems*, pp. 1024–1034, 2017.

David Haussler. Convolution kernels on discrete structures. Technical report, Department of Computer Science, University of California at Santa Cruz, 1999.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. *arXiv preprint arXiv:2005.00687*, 2020.

Janis Kalofolias, Pascal Welke, and Jilles Vreeken. SUSAN: The Structural Similarity Random Walk Kernel. In *Proceedings of the 2021 SIAM International Conference on Data Mining*, 2021.

U Kang, Hanghang Tong, and Jimeng Sun. Fast Random Walk Graph Kernel. In *Proceedings of the 2012 SIAM International Conference on Data Mining*, pp. 828–838, 2012.

Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized Kernels Between Labeled Graphs. In *Proceedings of the 20th International Conference on Machine Learning*, pp. 321–328, 2003.

Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. *Journal of Computer-Aided Molecular Design*, 30(8):595–608, 2016.

Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016. http://graphkernels.cs.tu-dortmund.de.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *5th International Conference on Learning Representations*, 2017.

Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, and Tom Goldstein. Flag: Adversarial Data Augmentation for Graph Neural Networks. *arXiv preprint arXiv:2010.09891*, 2020.

Nils M Kriege, Fredrik D Johansson, and Christopher Morris. A survey on graph kernels. *Applied Network Science*, 5(1):1–42, 2020.

Tao Lei, Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Deriving Neural Architectures from Sequence and Graph Kernels. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 2024–2033, 2017.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated Graph Sequence Neural Networks. In *3rd International Conference on Learning Representations*, 2015.

Pierre Mahé, Nobuhisa Ueda, Tatsuya Akutsu, Jean-Luc Perret, and Jean-Philippe Vert. Extensions of Marginalized Graph Kernels. In *Proceedings of the 21st International Conference on Machine Learning*, 2004.

Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably Powerful Graph Networks. In *Advances in Neural Information Processing Systems*, 2019a.

Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and Equivariant Graph Networks. In *7th International Conference on Learning Representations*, 2019b.

Alessio Micheli. Neural Network for Graphs: A Contextual Constructive Approach. *IEEE Transactions on Neural Networks*, 20(3):498–511, 2009.

Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks. In *Proceedings of the 33rd AAAI Conference on Artificial Intelligence*, pp. 4602–4609, 2019.

Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. In *Advances in Neural Information Processing Systems*, volume 33, pp. 21824–21840, 2020.

Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning Convolutional Neural Networks for Graphs. In *Proceedings of the 33rd International Conference on Machine Learning*, pp. 2014–2023, 2016.

Giannis Nikolentzos and Michalis Vazirgiannis. Random Walk Graph Neural Networks. *Advances in Neural Information Processing Systems*, 33:16211–16222, 2020.

Jorge Nocedal and Stephen Wright. *Numerical Optimization*. Springer Science & Business Media, 2006.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In *Advances in Neural Information Processing Systems*, volume 32, pp. 8026–8037, 2019.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The Graph Neural Network Model. *IEEE Transactions on Neural Networks*, 20(1):61–80, 2009.

N. Shervashidze, P. Schweitzer, E. J. Van Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman Graph Kernels. *The Journal of Machine Learning Research*, 12:2539–2561, 2011.
Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten M Borgwardt. Efficient graphlet kernels for large graph comparison. In *The 12th International Conference on Artificial Intelligence and Statistics*, pp. 488–495, 2009.

Martin Simonovsky and Nikos Komodakis. Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3693–3702, 2017.

Alessandro Sperduti and Antonina Starita. Supervised Neural Networks for the Classification of Structures. *IEEE Transactions on Neural Networks*, 8(3):714–735, 1997.

Mahito Sugiyama and Karsten Borgwardt. Halting in Random Walk Kernels. *Advances in Neural Information Processing Systems*, 28:1639–1647, 2015.

Jan Toenshoff, Martin Ritzert, Hinrikus Wolf, and Martin Grohe. Graph learning with 1d convolutions on random walks. *arXiv preprint arXiv:2102.08786*, 2021.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph Attention Networks. In *6th International Conference on Learning Representations*, 2018.

S Vichy N Vishwanathan, Nicol N Schraudolph, Risi Kondor, and Karsten M Borgwardt. Graph Kernels. *Journal of Machine Learning Research*, 11:1201–1242, 2010.

N. Wale, I. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. *Knowledge and Information Systems*, 14(3):347–375, 2008.

Homer F Walker and Peng Ni. Anderson acceleration for fixed-point iterations. *SIAM Journal on Numerical Analysis*, 49(4):1715–1735, 2011.

Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. Session-Based Recommendation with Graph Neural Networks. In *Proceedings of the 33rd AAAI Conference on Artificial Intelligence*, pp. 346–353, 2019.

Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning. *Chemical Science*, 9(2):513–530, 2018.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How Powerful are Graph Neural Networks? In *7th International Conference on Learning Representations*, 2019.

P. Yanardag and S. Vishwanathan. Deep Graph Kernels. In *Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1365–1374, 2015.

Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical Graph Representation Learning with Differentiable Pooling. In *Advances in Neural Information Processing Systems*, pp. 4801–4811, 2018.

Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An End-to-End Deep Learning Architecture for Graph Classification. In *Proceedings of the 32nd AAAI Conference on Artificial Intelligence*, pp. 4438–4445, 2018a.

Zhen Zhang, Mianzhi Wang, Yijian Xiang, Yan Huang, and Arye Nehorai. RetGK: Graph Kernels based on Return Probabilities of Random Walks. *Advances in Neural Information Processing Systems*, 31:3964–3974, 2018b.

## A Appendix

The appendix is organized as follows. In Section B, we define some basic concepts from linear algebra. In Section C, we prove Proposition 1. In Section D, we give more details about the direct differentiation through the fixed point, while Section E provides a detailed description of the graph classification datasets. Finally, Section F gives details about the parameter λ.
## B Linear Algebra Concepts

In this section, we provide definitions for concepts of linear algebra, namely the vectorization operator, the inverse vectorization operator and the Kronecker product, which we use heavily in the main paper.

Definition 1. *Given a real matrix* $\mathbf{A} \in \mathbb{R}^{m \times n}$, *the vectorization operator* $vec: \mathbb{R}^{m \times n} \to \mathbb{R}^{mn}$ *is defined as:*

$$vec(\mathbf{A})=\begin{bmatrix}\mathbf{A}_{:1}\\ \mathbf{A}_{:2}\\ \vdots\\ \mathbf{A}_{:n}\end{bmatrix}$$

*where* $\mathbf{A}_{:i}$ *is the* $i$*-th column of* $\mathbf{A}$.

Definition 2. *Given a real vector* $\mathbf{b} \in \mathbb{R}^{mn}$, *the inverse vectorization operator* $vec^{-1}: \mathbb{R}^{nm} \to \mathbb{R}^{n \times m}$ *is defined as:*

$$vec^{-1}(\mathbf{b})={\begin{bmatrix}\mathbf{b}_{1}&\mathbf{b}_{n+1}&\dots&\mathbf{b}_{n(m-1)+1}\\ \mathbf{b}_{2}&\mathbf{b}_{n+2}&\dots&\mathbf{b}_{n(m-1)+2}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{b}_{n}&\mathbf{b}_{2n}&\dots&\mathbf{b}_{nm}\end{bmatrix}}$$

Definition 3. *Given real matrices* $\mathbf{A} \in \mathbb{R}^{n \times m}$ *and* $\mathbf{B} \in \mathbb{R}^{p \times q}$, *the Kronecker product* $\mathbf{A} \otimes \mathbf{B} \in \mathbb{R}^{np \times mq}$ *is defined as:*

$$\mathbf{A}\otimes\mathbf{B}=\begin{bmatrix}\mathbf{A}_{11}\mathbf{B}&\mathbf{A}_{12}\mathbf{B}&\dots&\mathbf{A}_{1m}\mathbf{B}\\ \mathbf{A}_{21}\mathbf{B}&\mathbf{A}_{22}\mathbf{B}&\dots&\mathbf{A}_{2m}\mathbf{B}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{A}_{n1}\mathbf{B}&\mathbf{A}_{n2}\mathbf{B}&\dots&\mathbf{A}_{nm}\mathbf{B}\end{bmatrix}$$

## C Proof Of Proposition 1

For convenience, we restate the Proposition below.

Proposition 2. *Let* $\mathbf{A}_1 \in \mathbb{R}^{n \times n}$ *and* $\mathbf{A}_2 \in \mathbb{R}^{m \times m}$ *be two real matrices. Let also* $\mathbf{s}, \mathbf{y} \in \mathbb{R}^{nm}$ *be two real vectors. Then, we have that:*

$$\big(\mathbf{s}\,\mathbf{s}^{\top}\odot(\mathbf{A}_{1}\otimes\mathbf{A}_{2})\big)\mathbf{y}=\mathbf{s}\odot vec\big(\mathbf{A}_{2}\ vec^{-1}(\mathbf{s}\odot\mathbf{y})\,\mathbf{A}_{1}^{\top}\big)$$

Proof. Let $\mathbf{D_s}$ denote a diagonal matrix with the vector $\mathbf{s}$ as its main diagonal. Then, we have:

$$\left(\mathbf{s}\,\mathbf{s}^{\top}\odot(\mathbf{A}_{1}\otimes\mathbf{A}_{2})\right)\mathbf{y}=\left(\mathbf{D}_{\mathbf{s}}\left(\mathbf{A}_{1}\otimes\mathbf{A}_{2}\right)\mathbf{D}_{\mathbf{s}}\right)\mathbf{y}$$

The Hadamard product of two vectors $\mathbf{s}$ and $\mathbf{y}$ is the same as the matrix multiplication of the vector $\mathbf{y}$ by the corresponding diagonal matrix $\mathbf{D_s}$ of the vector $\mathbf{s}$, i.e., $\mathbf{D_s}\,\mathbf{y} = \mathbf{s} \odot \mathbf{y}$. Thus, it follows that:

$$\left(\mathbf{D_{s}}\left(\mathbf{A}_{1}\otimes\mathbf{A}_{2}\right)\mathbf{D_{s}}\right)\mathbf{y}=\mathbf{D_{s}}\left(\mathbf{A}_{1}\otimes\mathbf{A}_{2}\right)\left(\mathbf{s}\odot\mathbf{y}\right)$$

Note that the Kronecker product and the vec operator are linked by the well-known property (Bernstein, 2009, Proposition 7.1.9):

$$vec(\mathbf{A}\,\mathbf{B}\,\mathbf{C})=(\mathbf{C}^{\top}\otimes\mathbf{A})\,vec(\mathbf{B})$$

Therefore, we have that:

$$\mathbf{D_{s}}\left(\mathbf{A}_{1}\otimes\mathbf{A}_{2}\right)\left(\mathbf{s}\odot\mathbf{y}\right)=\mathbf{D_{s}}\,vec\big(\mathbf{A}_{2}\,vec^{-1}(\mathbf{s}\odot\mathbf{y})\,\mathbf{A}_{1}^{\top}\big)=\mathbf{s}\odot vec\big(\mathbf{A}_{2}\,vec^{-1}(\mathbf{s}\odot\mathbf{y})\,\mathbf{A}_{1}^{\top}\big)$$

which concludes the proof.
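To make the identity concrete, the following snippet numerically checks Proposition 2 on random inputs. It is a minimal sketch in PyTorch (our own illustrative code, not part of the paper's implementation); the helpers `vec` and `vec_inv` implement Definitions 1 and 2 in column-major order.

```python
import torch

def vec(A):
    # Column-stacking vectorization from Definition 1 (column-major order).
    return A.t().reshape(-1)

def vec_inv(b, rows, cols):
    # Inverse vectorization from Definition 2: fill a (rows x cols) matrix
    # column by column.
    return b.reshape(cols, rows).t()

n, m = 4, 3
A1, A2 = torch.randn(n, n), torch.randn(m, m)
s, y = torch.randn(n * m), torch.randn(n * m)

# Left-hand side: materializes the (nm x nm) Kronecker product explicitly.
lhs = (torch.outer(s, s) * torch.kron(A1, A2)) @ y

# Right-hand side of Proposition 2: never forms the Kronecker product;
# vec_inv(s * y) is an (m x n) matrix, so A2 @ X @ A1^T is also (m x n).
rhs = s * vec(A2 @ vec_inv(s * y, m, n) @ A1.t())

assert torch.allclose(lhs, rhs, atol=1e-5)
```

The computational point of the identity is visible in the two sides: the left-hand side materializes an $nm \times nm$ matrix, whereas the right-hand side only multiplies matrices of sizes $m \times m$, $m \times n$ and $n \times n$.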
## D Implicit Differentiation

Clearly, iteratively computing Equation (3) or Equation (8) (main paper) to find the fixed point corresponds to a differentiable module. However, to train the model, we need to backpropagate the error through the fixed point solver in the backward pass. That would require storing all the intermediate terms, which could be prohibitive in practice. Fortunately, thanks to recent advances in implicit layers and equilibrium models (Bai et al., 2019), this can be performed in a simple and efficient manner which requires constant memory, and assumes no knowledge of the fixed point solver. Specifically, based on ideas from (Bai et al., 2019), we derive the form of implicit backpropagation specific to the employed fixed point iteration layer.

Theorem 2. *Let* $f_\theta$ *be the system of Equation (3) or Equation (8) (main paper), and* $\mathbf{z}^{\star} \in \mathbb{R}^{nc}$ *be a solution to that linear system. Let also* $g_\theta(\mathbf{z}^{\star}; \mathbf{A}, \mathbf{X}) = f_\theta(\mathbf{z}^{\star}; \mathbf{A}, \mathbf{X}) - \mathbf{z}^{\star}$. *Since* $\mathbf{z}^{\star}$ *is a fixed point, we have that* $g_\theta(\mathbf{z}^{\star}; \mathbf{A}, \mathbf{X}) = \mathbf{0}$ *and* $\mathbf{z}^{\star}$ *is thus a root of* $g_\theta$. *Let* $y \in \mathbb{R}$ *denote the ground-truth target of the input sample,* $h: \mathbb{R} \to \mathbb{R}$ *be any differentiable function and let* $\mathcal{L}: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ *be a loss function that computes:*

$$\ell={\mathcal{L}}{\big(}h(\mathbf{1}^{\top}\mathbf{z}^{\star}),y{\big)}={\mathcal{L}}{\big(}h{\big(}\mathbf{1}^{\top}\,\mathrm{FindRoot}(g_{\theta};\mathbf{A},\mathbf{X}){\big)},y{\big)}\qquad(11)$$

*Then, the gradient of the loss w.r.t.* $(\cdot)$ *(e.g.,* $\theta$, $\mathbf{A}$ *or* $\mathbf{X}$*) is:*

$$\frac{\partial\ell}{\partial(\cdot)}=-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\Big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\Big)\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}\qquad(12)$$

*where* $J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}$ *is the inverse Jacobian of* $g_{\theta}$ *evaluated at* $\mathbf{z}^{\star}$.

The above Theorem gives a form for the necessary Jacobian without needing to backpropagate through the method used to obtain the fixed point. We can thus treat the fixed point algorithm as a black box, and we do not need to store intermediate terms associated with the fixed point algorithm into memory. We only need to apply some algorithm that will produce a solution to the system (i.e., it will compute the fixed point). Following (Bai et al., 2019), we differentiate the two sides of the fixed point equation $\mathbf{z}^{\star} = f_{\theta}(\mathbf{z}^{\star}; \mathbf{A}, \mathbf{X})$ with respect to $(\cdot)$:

$$\frac{d\mathbf{z}^{\star}}{d(\cdot)}=\frac{d f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{d(\cdot)}=\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}+\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\frac{d\mathbf{z}^{\star}}{d(\cdot)}\implies\left(\mathbf{I}-\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)\frac{d\mathbf{z}^{\star}}{d(\cdot)}=\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}$$

Since $g_{\theta}(\mathbf{z}^{\star}) = f_{\theta}(\mathbf{z}^{\star}; \mathbf{A}, \mathbf{X}) - \mathbf{z}^{\star}$, we have:

$$J_{g_{\theta}}\big|_{\mathbf{z}^{\star}}=-\left(\mathbf{I}-\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)$$

which implies the following:

$$\frac{\partial\ell}{\partial(\cdot)}=\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\frac{d\mathbf{z}^{\star}}{d(\cdot)}=-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\Big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\Big)\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}$$

Unfortunately, computing the exact inverse Jacobian $J_{g_{\theta}}^{-1}$ has a cubic cost. As shown in (Bai et al., 2019), we can instead compute the $-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\big)$ term of the gradient (which contains the Jacobian) by solving the following linear system:

$$\mathbf{x}^{\top}=-\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\Big(J_{g_{\theta}}^{-1}\big|_{\mathbf{z}^{\star}}\Big)=\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\left(\mathbf{I}-\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)^{-1}\implies\left(\mathbf{I}-\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)^{\top}\mathbf{x}=\left(\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\right)^{\top}\implies\mathbf{x}=\left(\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial\mathbf{z}^{\star}}\right)^{\top}\mathbf{x}+\left(\frac{\partial\ell}{\partial\mathbf{z}^{\star}}\right)^{\top}$$

which in fact is also a fixed point equation and can be solved via some iterative procedure. Note that the first term of the above equation is a vector-Jacobian product, which can be efficiently computed via autograd packages (e.g., PyTorch (Paszke et al., 2017)) for any $\mathbf{x}$, without explicitly writing out the Jacobian matrix.
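As a concrete illustration, this backward fixed point equation can be solved with the same naive forward iteration used in the forward pass. The sketch below is our own illustrative PyTorch code (not the paper's implementation), assuming `f_z` is the output of $f_\theta$ re-evaluated once at the fixed point with the autograd graph retained:

```python
import torch

def solve_backward_fixed_point(f_z, z_star, grad_loss, max_iter=100, tol=1e-6):
    # Solves x = (df/dz*)^T x + (dl/dz*)^T by naive forward iteration.
    # f_z:       f_theta(z*; A, X), evaluated at z* with the graph attached
    # z_star:    the fixed point, a tensor with requires_grad=True
    # grad_loss: dl/dz*, the gradient of the loss w.r.t. the fixed point
    x = torch.zeros_like(grad_loss)
    for _ in range(max_iter):
        # Vector-Jacobian product (df/dz*)^T x, computed by autograd
        # without ever materializing the Jacobian.
        vjp = torch.autograd.grad(f_z, z_star, grad_outputs=x,
                                  retain_graph=True)[0]
        x_next = vjp + grad_loss
        if torch.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Each step costs one vector-Jacobian product, so the memory footprint stays constant regardless of how many iterations were needed to find $\mathbf{z}^{\star}$ in the forward pass.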
Finally, we can compute $\frac{\partial\ell}{\partial(\cdot)}$ as follows:

$$\frac{\partial\ell}{\partial(\cdot)}=\left(\frac{\partial f_{\theta}(\mathbf{z}^{\star};\mathbf{A},\mathbf{X})}{\partial(\cdot)}\right)^{\top}\mathbf{x}$$

where again this product is itself a vector-Jacobian product, computable via normal automatic differentiation packages.

Table 3: Summary of the 10 datasets that were used in our experiments.

| Dataset | MUTAG | D&D | NCI1 | PROTEINS | ENZYMES | IMDB-BINARY | IMDB-MULTI | REDDIT-BINARY | REDDIT-MULTI-5K | COLLAB |
|--------------------|---------|--------|--------|------------|-----------|--------|--------|----------|----------|----------|
| Max # vertices | 28 | 5,748 | 111 | 620 | 126 | 136 | 89 | 3,782 | 3,648 | 492 |
| Min # vertices | 10 | 30 | 3 | 4 | 2 | 12 | 7 | 6 | 22 | 32 |
| Average # vertices | 17.93 | 284.32 | 29.87 | 39.05 | 32.63 | 19.77 | 13.00 | 429.61 | 508.50 | 74.49 |
| Max # edges | 33 | 14,267 | 119 | 1,049 | 149 | 1,249 | 1,467 | 4,071 | 4,783 | 40,119 |
| Min # edges | 10 | 63 | 2 | 5 | 1 | 26 | 12 | 4 | 21 | 60 |
| Average # edges | 19.79 | 715.66 | 32.30 | 72.81 | 62.14 | 96.53 | 65.93 | 497.75 | 594.87 | 2,457.34 |
| # labels | 7 | 82 | 37 | 3 | - | - | - | - | - | - |
| # attributes | - | - | - | - | 18 | - | - | - | - | - |
| # graphs | 188 | 1,178 | 4,110 | 1,113 | 600 | 1,000 | 1,500 | 2,000 | 4,999 | 5,000 |
| # classes | 2 | 2 | 2 | 2 | 6 | 2 | 3 | 2 | 5 | 3 |

## E Datasets

We evaluated the proposed model on 10 publicly available graph classification datasets including 5 bio/chemo-informatics datasets: MUTAG, D&D, NCI1, PROTEINS and ENZYMES, as well as 5 social interaction datasets: IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI-5K and COLLAB (Kersting et al., 2016). A summary of the 10 datasets is given in Table 3.

MUTAG consists of 188 mutagenic aromatic and heteroaromatic nitro compounds. The task is to predict whether or not each chemical compound has a mutagenic effect on the Gram-negative bacterium *Salmonella typhimurium* (Debnath et al., 1991).

ENZYMES contains 600 protein tertiary structures represented as graphs obtained from the BRENDA enzyme database. Each enzyme is a member of one of the Enzyme Commission top level enzyme classes (EC classes) and the task is to correctly assign the enzymes to their classes (Borgwardt et al., 2005).

NCI1 contains more than four thousand chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines (Wale et al., 2008).

PROTEINS contains proteins represented as graphs where vertices are secondary structure elements and there is an edge between two vertices if they are neighbors in the amino acid sequence or in 3D space. The task is to classify proteins into enzymes and non-enzymes (Borgwardt et al., 2005).

D&D contains over a thousand protein structures. Each protein is a graph whose nodes correspond to amino acids and a pair of amino acids are connected by an edge if they are less than 6 Ångstroms apart. The task is to predict if a protein is an enzyme or not (Dobson & Doig, 2003).

IMDB-BINARY and IMDB-MULTI were created from IMDb, an online database of information related to movies and television programs. The graphs contained in the two datasets correspond to movie collaborations. The vertices of each graph represent actors/actresses and two vertices are connected by an edge if the corresponding actors/actresses appear in the same movie.
Each graph is the ego-network of an actor/actress, and the task is to predict which genre an ego-network belongs to (Yanardag & Vishwanathan, 2015).

REDDIT-BINARY and REDDIT-MULTI-5K contain graphs that model the social interactions between users of Reddit. Each graph represents an online discussion thread. Specifically, each vertex corresponds to a user, and two users are connected by an edge if one of them responded to at least one of the other's comments. The task is to classify graphs into either communities or subreddits (Yanardag & Vishwanathan, 2015).

COLLAB is a scientific collaboration dataset that consists of the ego-networks of several researchers from three subfields of Physics (High Energy Physics, Condensed Matter Physics and Astro Physics). The task is to determine the subfield of Physics to which the ego-network of each researcher belongs (Yanardag & Vishwanathan, 2015).

We also evaluated the proposed model on two datasets from the Open Graph Benchmark (OGB) (Hu et al., 2020), a collection of large-scale and diverse benchmark datasets for machine learning on graphs. A summary of the two datasets is given in Table 4. The ogbg-molhiv dataset is a molecular property prediction dataset adopted from MoleculeNet (Wu et al., 2018). The dataset consists of 41,127 molecules and corresponds to a binary classification dataset where the task is to predict whether a molecule inhibits HIV virus replication or not. The molecules in the training, validation and test sets are divided using a scaffold splitting procedure that splits the molecules based on their two-dimensional structural frameworks. The scaffold splitting attempts to separate structurally different molecules into different subsets. The ogbg-code2 dataset contains a large number of Abstract Syntax Trees (ASTs) that are extracted from approximately 450,000 Python method definitions. For each method, the AST edges, the AST nodes, and the tokenized method name are retrieved. Given the body of a method represented by the AST and its node features, the task (which is known as code summarization) is to predict the sub-tokens forming the name of the method. The ASTs for the training set are obtained from GitHub projects that do not appear in the validation and test sets. We refer the reader to (Hu et al., 2020) for more details about the OGB datasets.

Table 4: Statistics of the 2 OGB datasets that we used in our experiments.

| Dataset | ogbg-molhiv | ogbg-code2 |
|--------------------|---------------|----------------------|
| Average # vertices | 25.5 | 125.2 |
| Average # edges | 27.5 | 124.2 |
| Node features | ✓ | ✓ |
| Edge features | ✓ | ✓ |
| Directed | - | ✓ |
| # graphs | 41,127 | 452,741 |
| # tasks | 1 | 1 |
| Split scheme | Scaffold | Project |
| Split ratio | 80/10/10 | 90/5/5 |
| Task type | Binary class. | Sub-token prediction |
| Metric | ROC-AUC | F1-score |

Table 5: Values of λ that we used in our experiments.

| Dataset | MUTAG | D&D | NCI1 | PROTEINS | ENZYMES | IMDB-BINARY | IMDB-MULTI | REDDIT-BINARY | REDDIT-MULTI-5K | COLLAB |
|---------|-------|--------|------------|-----------|----------|--------|----------|----------|----------|--------|
| λ | 1/5 | 1/20 | 1/20 | 1/30 | 1/20 | 1/200 | 1/300 | 1/500 | 1/400 | 1/2000 |

## F Parameter λ

Given an input graph $G$ and a "hidden graph" $G_i$, since the "hidden graph" is trainable, the maximum vertex degree of the product graph $G_\times$ is not known beforehand.
However, in case the weights of the edges of the "hidden graph" are bounded, we can compute an upper bound on it. Let $\Delta$ denote the maximum vertex degree of $G$, $c$ denote the number of vertices of the "hidden graph" $G_i$, and $b$ the maximum edge weight of the "hidden graph". Then, we have that $\Delta_\times \le \Delta c b$, and therefore, to guarantee convergence, we need to set $\lambda \le 1/(\Delta c b)$. In practice, we empirically found that even if $\lambda$ takes higher values, the geometric series converges within a small number of iterations. Table 5 shows the value of $\lambda$ that we used for each dataset.
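For reference, the bound can be turned into a one-line helper (a sketch under the stated assumptions; the function name is our own). Note that this $\Delta c b$ bound is slightly more conservative than the $9 \times 4 = 36$ bound quoted in the main text, which uses the maximum degree $c - 1$ of a "hidden graph" without self-loops instead of $c$:

```python
def lambda_upper_bound(max_degree, num_hidden_vertices, max_edge_weight=1.0):
    # Delta_x <= Delta * c * b for the product graph, so any lambda no
    # larger than 1 / (Delta * c * b) guarantees convergence.
    return 1.0 / (max_degree * num_hidden_vertices * max_edge_weight)

# ENZYMES: maximum degree 9, "hidden graphs" with 5 vertices, weights in [0, 1]
print(lambda_upper_bound(9, 5))  # 0.0222..., i.e., lambda <= 1/45
```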
Review 1:

Summary: The paper deals with supervised machine learning for graphs, specifically graph classification or regression. The authors leverage known results from the graph kernel literature, namely random walk kernels (RWK), to design end-to-end trainable graph neural networks (GNNs) leveraging walk information. Specifically, they extend the work of Nikolentzos & Vazirgiannis, 2020, which compares input graphs to learnable "hidden" graphs using a finite-length RWK. Here, the authors propose to use infinite-length random walks, by leveraging known results from the graph kernel literature. Moreover, they use known techniques from implicit differentiation to efficiently backpropagate through the resulting architecture. Finally, the authors conduct experiments on known benchmark datasets from the kernel and GNN literature showing good results, on par with baselines.

Strengths and Weaknesses:

*Strengths*
- Well-written and easy to follow
- Discussion of literature seems reasonable
- Reasonable experimental setup/protocol

*Weaknesses*
- The paper is incremental; it is a combination of known ideas, i.e., *neither* of the following ideas is /new/:
  * Comparing to learnable "hidden" graphs (Nikolentzos & Vazirgiannis, 2020)
  * Infinite random walks (Vishwanathan et al., 2010)
  * Implicit backpropagation for fixed-point equations (Bai et al. (2019))
- Infinite walks do not give much improvement over the finite random-walk architecture in the experiments

Requested Changes:

*Critical problems*
- The work of Nikolentzos & Vazirgiannis, 2020 should be described in more detail as it seems to be the basis of the present work
- The paper does not distinguish clearly enough between known results and newly proposed ideas. For example, from the description in Section 4 it is not clear that all the ideas are already known and have been published (Vishwanathan et al., 2010). (I am not saying there is bad intent here.)
- Further, it is not clear if Theorem 2 easily follows from the work of (Bai et al. (2019)) or if it is a truly novel contribution
- The presentation is "clumsy" at times, e.g., Equations 1 and 2 are the same. Also, Equation 3 and the one below are the same (modulo the index)
- The empirical results, Table 1, comparing the 1/2-step RWNN and the infinite one (the main contribution) are not convincing. Further, the 1/2-step RWNNs are not used on the OGB datasets, Figure 1 (a). That is, more convincing empirical evidence is needed, showing that the infinite RWs indeed give an edge on some datasets (over the finite ones)
- Moreover, it should be made more clear what the benefits of the RW-based GNNs are compared to standard GNNs. (Note that they are not more expressive.)
- Make clear why infinite random walks are needed in the case of *finite graphs*. This seems counterintuitive. Also, discuss the work of Sugiyama & Borgwardt (2015) in more detail in that context.

*Minor problems*
- Paper needs proofreading; I stumbled on various typos and grammatical issues when working through the paper.
- Bib. entries do not follow a unified format. Full surname vs. abbreviated, et cetera

Broader Impact Concerns: Not relevant.

==================================================

Review 2:

Summary: This work extends the Random Walk GNN [Nikolentzos & Vazirgiannis (2020)] to take into account random walks of infinite length. The core technical contribution is to use implicit differentiation to achieve efficient training.

Strengths and Weaknesses:

Strengths:
- The use of the implicit layer is appropriate.
- The paper is well-written.

Weaknesses:
- Motivation is weak. In the original Random Walk GNN paper, increasing the random walk step size did not give empirical improvement, but the authors try to extend the model to use an infinite step size.
- Experiments are weak. The results on the small TU graph classification datasets are rather mixed. The results on OGB datasets are also far from SoTA. On the OGB datasets, the original random walk GNN is not compared.
- Technical contribution is weak. It is a minor extension of random walk GNN without strong empirical motivation or empirical evidence.
- Writing is confusing. A lot of content in the method section (Section 4) has already been presented in Nikolentzos & Vazirgiannis (2020). The authors did not do a good job separating their contribution from the original work that they build on.

Requested Changes:
- More clear motivation on why infinite-step random walks are needed.
- Stronger empirical results compared to the original random walk GNN.
- Random walk GNN results on the OGB datasets.
- Clearer writing separating the actual contribution from the original work of Nikolentzos & Vazirgiannis (2020).

All the above needs to be addressed for the paper to be accepted, which I think is difficult.

Broader Impact Concerns: No concern.

==================================================

Review 3:

Summary: The authors expand on the Random Walk Graph Neural Networks work. The original RWGN looks at random walks of fixed length, and here it is extended to infinite lengths using a geometric series. They then show how this can be computed efficiently. Finally, they compare results on a variety of datasets.

Strengths and Weaknesses:

Strengths:
- The Geometric extension is interesting and novel.
- Very solid experiments. Even if it wasn't always SOTA, it gave good results on some. In general the experiments were properly done.

Weaknesses:
- Writing. Some important parts were missing to understand the model (without reading the RWGNN paper). The paper is well written in the sense that what it explains it explains well, but a few important things are not explained at all.

Requested Changes:

Main issue:
- The paper explains the GRW kernel well but doesn't really explain how it is integrated into a classification model. The paper talks about "hidden graphs" many times but doesn't really explain what they are. It also doesn't go into how the graph is classified once you get k(G,G_i). I had to go into the RWNN paper to get the exact formulation, which should not happen in my mind.
- When you describe the kernel you talk about initial and stopping distributions over nodes. While the initial is uniform by definition, if you run a random walk on a general graph the stopping distribution will not be uniform. Either rename the distribution or show that the stopping distribution is indeed uniform.

Other remarks:
- As the random walk kernel is a major part of this work, it would be best if you gave some intuition as to why it is useful. Why does a high/low score represent similar/dissimilar graphs?
- A kernel has a very explicit mathematical definition. While the original RWK is indeed a kernel, it is not obvious, at least to me, that after the modifications it is still a kernel. While this doesn't affect the work here, as you don't really depend on it being a legitimate kernel, it is important to know if the object you claim is a kernel is indeed one. This point should be addressed, if not by showing that it is indeed a kernel then by claiming that it might not be a valid kernel.
- What about graphs with edge features? Can you generalize your method to these types of graphs?

Minor remark:
- "Among others, the graph classification and graph regression..." I would mention node-based tasks as well as basic graph learning problems.

Broader Impact Concerns: NA

==================================================
# Offline Equilibrium Finding

Anonymous authors

Paper under double-blind review

## Abstract

Offline reinforcement learning (offline RL) is an emerging field that has recently attracted significant interest across a wide range of application domains, owing to its ability to learn policies from previously collected datasets. The success of offline RL has paved the way for tackling previously intractable real-world problems, but so far, only in single-agent scenarios. Given its potential, our goal is to generalize this paradigm to the multiplayer-game setting. To this end, we introduce a novel problem, called *offline equilibrium finding* (OEF), and construct various types of datasets spanning a wide range of games using several established methods. To solve the OEF problem, we design a model-based framework capable of directly adapting any online equilibrium finding algorithm to the OEF setting while making minimal changes. We adapt the three most prominent contemporary online equilibrium finding algorithms to the context of OEF, resulting in three model-based variants: OEF-PSRO and OEF-CFR, which generalize the widely-used algorithms PSRO and Deep CFR for computing Nash equilibria, and OEF-JPSRO, which generalizes JPSRO for calculating (coarse) correlated equilibria. Additionally, we combine the behavior cloning policy with the model-based policy to enhance performance and provide a theoretical guarantee regarding the quality of the solution obtained. Extensive experimental results demonstrate the superiority of our approach over traditional offline RL algorithms and highlight the importance of using model-based methods for OEF problems. We hope that our work will contribute to the advancement of research in large-scale equilibrium finding.

## 1 Introduction

Game theory provides a universal framework for modeling interactions among cooperative and competitive players (Shoham & Leyton-Brown, 2008). The canonical solution concept is Nash equilibrium (NE), describing a situation in which no player can increase their utility by unilaterally deviating. However, computing an NE in two-player or multi-player general-sum games is PPAD-complete (Daskalakis et al., 2006; Chen & Deng, 2006), which makes solving games, whether exactly or approximately, exceedingly difficult. The complexity persists even in two-player zero-sum games, regardless of whether the players can perceive the game state perfectly (e.g., in Go (Silver et al., 2016)) or imperfectly (e.g., in poker (Brown & Sandholm, 2018) or StarCraft II (Vinyals et al., 2019)). In recent years, learning algorithms have demonstrated their superiority over traditional optimization methods, such as linear or nonlinear programs, in solving large-scale imperfect-information extensive-form games. In particular, the most successful learning algorithms belong either to the line of research on counterfactual regret minimization (CFR) (Brown & Sandholm, 2018) or to policy space response oracles (PSRO) (Lanctot et al., 2017). CFR is an iterative algorithm that approximates NEs through repeated self-play. Several sampling-based CFR variants (Lanctot et al., 2009; Gibson et al., 2012) have been proposed to efficiently solve large games. To further scale up to even larger games, CFR can be embedded with neural network function approximation (Brown et al., 2019; Steinberger, 2019; Li et al., 2019; Agarwal et al., 2020).
The other algorithm, PSRO, generalizes the double oracle method (McMahan et al., 2003; Bošanský et al., 2014) by incorporating (deep) reinforcement learning (RL) methods as a best-response oracle (Lanctot et al., 2017; Muller et al., 2019). In particular, neural fictitious self-play (NFSP) can be considered a special case of PSRO (Heinrich et al., 2015). Both CFR and PSRO have achieved impressive performance, particularly in the more challenging realm of large-scale imperfect-information extensive-form games such as poker (Brown & Sandholm, 2018; McAleer et al., 2020).

A critical component contributing to the success of these learning algorithms is the availability of efficient and accurate simulators. Simulators can be constructed using rules, such as in various poker games (Lanctot et al., 2019), or using a video-game suite like StarCraft II (Vinyals et al., 2017). These simulators serve as environments, enabling agents to collect millions or even billions of trajectories to facilitate the training process. We can consider this mode of learning, which relies on a simulator, as an online mode since it can access data from the simulator at any time. In many real-world games, such as football (Kurach et al., 2020; Tuyls et al., 2021) or table tennis (Ji et al., 2021), learning in online mode is not practical, as constructing a sufficiently accurate simulator may be infeasible due to numerous complex factors affecting the game-play. These factors include the relevant laws of physics, environmental conditions (e.g., wind speed), or the physiological limits of (human) bodies rendering certain actions unattainable. Consequently, football teams or table tennis players may resort to watching previous matches to improve their strategies, which semantically corresponds to the offline learning mode, i.e., learning from previously collected data.

In recent years, there have been several (often domain-specific) attempts to formalize offline learning in the context of games. For instance, StarCraft II Unplugged (Mathieu et al., 2021) provides a dataset of human game-plays in this two-player zero-sum symmetric game. Concurrently, some works (Cui & Du, 2022; Zhong et al., 2022) explore the necessary properties of offline datasets of two-player zero-sum Markov games to successfully infer their NEs. However, these prior works mainly focus on solving Markov games, while our goal is to solve extensive-form games in the offline setting. Specifically, Cui & Du (2022) focus on computing the Nash equilibrium strategy for tabular two-player zero-sum Markov games, while our work focuses not only on computing the NE strategy for two-player zero-sum extensive-form games but also on computing the (C)CE for multi-player extensive-form games. Furthermore, their theoretical results are focused on the two-player zero-sum Markov game, while our work extends their results to the extensive-form game setting. To the best of our knowledge, there has been no comprehensive study focusing on multi-player games in an offline setting. Moreover, there is a notable absence of systematic definitions and research efforts aimed at formalizing offline learning within the context of games. To address this gap, we put forward a more general problem called *offline equilibrium finding* (OEF), which aims to identify the equilibrium strategy of the underlying game based on a fixed offline dataset collected by an unknown behavior strategy.
The lack of an accurate simulator in offline settings complicates the process of identifying equilibrium strategies, as it makes evaluating and validating learned strategies more difficult. Consequently, the OEF problem poses a significant challenge, as it necessitates forging a connection between an equilibrium strategy and an offline dataset. To tackle this problem, we introduce an environment model that serves as an intermediary between the equilibrium strategy and the offline dataset. Our contributions can be summarized as follows:

- We introduce a new paradigm, Offline Equilibrium Finding (OEF), highlighting the challenges associated with learning equilibrium strategies only from offline datasets without an accurate simulator.
- We create OEF datasets from widely recognized game domains to better define the OEF problem. To achieve this, we employ various methods to produce different behavior strategies for generating offline data, enabling our OEF datasets to encompass a diverse range of potential gameplay scenarios.
- We propose a novel OEF algorithm, BCMB, that combines a simple model-free algorithm (the behavior cloning technique) with an innovative model-based framework. This model-based framework has the capability to generalize any online equilibrium finding algorithm to the OEF setting by introducing an environment model as an intermediary. Furthermore, we investigate the relationship between the data coverage of the offline dataset and the performance of the offline algorithm, and we provide a guarantee of the solution quality for our OEF algorithm.
- We conduct comprehensive experiments to evaluate the effectiveness of our proposed OEF algorithm. The experimental results substantiate the superiority of our algorithm over model-based and model-free offline RL algorithms and the efficiency of our algorithm for solving the OEF problem.

We hope our work can provide a broader understanding of offline learning in multi-player games and establish a foundation for future research in this emerging area.

## 2 Preliminaries

In this section, we first introduce the imperfect-information extensive-form game model that this paper focuses on, and then we introduce two types of widely-used equilibrium-finding algorithms, namely Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO).

## 2.1 Imperfect-Information Extensive-Form Games

An imperfect-information extensive-form game (Shoham & Leyton-Brown, 2008) can be represented as a tuple $(N, H, A, P, \mathcal{I}, u)$, where $N = \{1, ..., n\}$ is the set of players, $H$ is the set of histories (i.e., the possible action sequences), and $A$ is the set of actions available to each player. The empty sequence $\emptyset$ corresponds to the unique root node of the game tree and is included in $H$. Additionally, every prefix of a sequence in $H$ is also in $H$. A special subset of the set of histories is $Z \subset H$, which corresponds to the set of terminal histories. $A(h) = \{a : (h, a) \in H\}$ is the set of actions available at any non-terminal history $h \in H \setminus Z$. $P$ is the player function, which maps each non-terminal history to a player, i.e., $P: H \setminus Z \to N \cup \{c\}$, where $c$ denotes the "chance player", which represents stochastic events outside of the players' control. In other words, $P(h)$ is the player who takes an action at the history $h$. If $P(h) = c$, then chance determines the action taken at history $h$. $\mathcal{I}$ denotes the set of information sets.
The information set $I_i$ forms a partition over the set of histories where player $i \in N$ takes action, such that player $i$ cannot distinguish the histories within the same information set $I_i$. Therefore, every information set $I_i \in \mathcal{I}_i$ corresponds to one decision point of player $i$, which means that $P(h_1) = P(h_2)$ and $A(h_1) = A(h_2)$ for any $h_1, h_2 \in I_i$. For convenience, we use $A(I_i)$ to represent the action set $A(h)$ and $P(I_i)$ to represent the player $P(h)$ for any $h \in I_i$. For $i \in N$, a utility function $u_i: Z \to \mathbb{R}$ specifies the payoff of player $i$ for every terminal history. The behavior strategy of player $i$, $\sigma_i$, is a function mapping every information set of player $i$ to a probability distribution over $A(I_i)$, and $\Sigma_i$ is the set of strategies for player $i$. A strategy profile $\sigma$ is a tuple of strategies, one for each player, $(\sigma_1, \sigma_2, ..., \sigma_n)$, with $\sigma_{-i}$ referring to all the strategies in $\sigma$ except $\sigma_i$. Let $\pi^{\sigma}(h) = \prod_{i \in N \cup \{c\}} \pi_i^{\sigma}(h)$ be the reaching probability of history $h$ when all players choose actions according to $\sigma$, where $\pi_i^{\sigma}(h)$ is the contribution of $i$ to this probability. Given a strategy profile $\sigma$, the expected value to player $i$ is the sum of expected payoffs over the resulting terminal nodes, $u_i(\sigma) = \sum_{z \in Z} \pi^{\sigma}(z) u_i(z)$.

The canonical solution concept for imperfect-information extensive-form games is Nash equilibrium (NE), where no player can increase their expected utility by unilaterally switching to a different strategy. Formally, the strategy profile $\sigma^*$ forms an NE if it satisfies $u_i(\sigma^*) = \max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i', \sigma_{-i}^*) \ge u_i(\sigma_i, \sigma_{-i}^*)$ for all $\sigma_i \in \Sigma_i$ and $i \in N$. To quantify the distance between a strategy profile $\sigma$ and the NE strategy, we introduce the metric $\mathrm{NashConv}(\sigma) = \sum_{i \in N} \mathrm{NashConv}_i(\sigma)$, where $\mathrm{NashConv}_i(\sigma) = \max_{\sigma_i'} u_i(\sigma_i', \sigma_{-i}) - u_i(\sigma)$ for each player $i$. In particular, for two-player zero-sum games, the metric simplifies to $\mathrm{NashConv}(\sigma) = \sum_{i \in N} \max_{\sigma_i'} u_i(\sigma_i', \sigma_{-i})$. When $\mathrm{NashConv}(\sigma) = 0$, $\sigma$ is the NE strategy. However, for $n$-player general-sum games, apart from NE, (Coarse) Correlated Equilibrium ((C)CE) is more commonly employed as the solution concept. Similar to the NE strategy, a Correlated Equilibrium (CE) strategy is a joint mixed strategy in which no player has the incentive to deviate (Moulin & Vial, 1978). Formally, let $S_i$ represent the strategy space for player $i$ and $S$ represent the joint strategy space. The strategy profile $\sigma^*$ forms a CCE if it satisfies $u_i(\sigma^*) \ge u_i(s_i, \sigma_{-i}^*)$ for all $i \in N$ and $s_i \in S_i$, where $\sigma_{-i}^*$ is the marginal distribution of $\sigma^*$ on the strategy space $S_{-i}$. Analogous to NE, the (C)CE Gap Sum is adopted to measure the gap between a joint strategy and the (C)CE (Marris et al., 2021).
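To make the NE metric concrete, NashConv can be expressed in a few lines, given oracles for expected values and best-response values. The sketch below is illustrative Python; `game.value` and `game.best_response_value` are hypothetical interfaces standing in for the formulas above, not part of any specific library:

```python
def nash_conv(game, sigma):
    # NashConv(sigma) = sum_i [ max_{sigma_i'} u_i(sigma_i', sigma_{-i}) - u_i(sigma) ]
    # A strategy profile sigma is a Nash equilibrium iff this sum is 0.
    return sum(
        game.best_response_value(sigma, i) - game.value(sigma, i)
        for i in range(game.num_players)
    )
```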
## 2.2 Equilibrium Finding Algorithms

PSRO. The Policy Space Response Oracles (PSRO) algorithm (Lanctot et al., 2017) begins with an initial set of randomly-generated policies, $\hat{\Sigma}_i$ for each player $i$. During each iteration of PSRO, a meta-game $M$ is built using all existing policies of the players by simulation. A meta-solver then computes a meta-strategy, which is a distribution over the policies of each player (e.g., Nash, α-rank, or uniform distributions). The joint meta-strategy for all players is represented as $\alpha$, where $\alpha_i(\sigma)$ denotes the probability that player $i$ selects $\sigma$ as their strategy. Subsequently, an oracle computes at least one best response policy for each player, which is then added to $\hat{\Sigma}_i$. It is important to note that when computing a new policy for a player, the policies of all other players and the meta-strategy remain fixed. This leads to a single-player optimization problem that can be solved by DQN (Mnih et al., 2015) or policy gradient reinforcement learning algorithms. Neural Fictitious Self-Play (NFSP) can be considered as a special case of PSRO that employs the uniform distribution as the meta-strategy (Heinrich et al., 2015). Joint Policy Space Response Oracles (JPSRO) is an innovative extension of PSRO that incorporates fully mixed joint policies to facilitate coordination among policies (Marris et al., 2021). JPSRO is proven to converge to a (C)CE over joint policies in extensive-form games. The details of the PSRO process can be found in Lanctot et al. (2017), and we also provide the pseudocode for MB-PSRO, which is similar to PSRO, in Appendix E.

CFR. Counterfactual Regret Minimization (CFR) (Zinkevich et al., 2007) is an iterative algorithm for approximately solving large imperfect-information games. In each iteration, the entire game tree is traversed, and the counterfactual regret for every action $a$ in every information set $I$ is computed. The computation of the counterfactual regret value for a player's information set is associated with the counterfactual value of the information set, which is the expected value of the information set given that the player tries to reach it. After traversing the game tree, players employ *Regret Matching* to select a distribution over actions in every information set, proportional to the positive cumulative regret values of those actions. In the next iteration, players use the new strategy to traverse the entire game tree, and this process is repeated until convergence. In two-player zero-sum games, if the average regret for both players is less than $\epsilon$, their average strategies over all iterations, $(\bar{\sigma}_1^T, \bar{\sigma}_2^T)$, form a $2\epsilon$-equilibrium (Waugh et al., 2009). The details of the CFR algorithm can be found in Zinkevich et al. (2007), and we also provide the detailed implementation of the MB-CFR algorithm, which is similar to CFR, in Appendix E. More recent studies have adopted deep neural networks to approximate counterfactual values, resulting in superior performance compared to their tabular counterparts (Brown et al., 2019; Steinberger, 2019; Li et al., 2019; 2021).

## 3 Problem Statement

To emphasize the importance of introducing the Offline Equilibrium Finding (OEF) problem, we will first present a motivating scenario that demonstrates the need for addressing the OEF problem. Following this, we will explain the limitations of current algorithms in solving the OEF problem. Lastly, we will introduce the OEF problem itself, along with the challenges it poses.

![3_image_0.png](3_image_0.png)

(a) The game of table tennis.

![3_image_1.png](3_image_1.png)

Figure 1: The example and illustration of the OEF problem.

Motivating Scenario. Assume that a table tennis player A is preparing to compete against player B, whom they have never faced before (Figure 1(a)). In this situation, what could player A do to prepare for the match? Although player A understands the rules of table tennis, they lack specific knowledge about playing against player B, such as their preferred moves or actions and their subjective payoff function. Without this detailed game information, player A cannot build an accurate simulator to simulate the game they will play, rendering self-play or other online equilibrium-finding algorithms ineffective.
Moreover, if player A simply adopts the best response strategy against player B's previous strategy, this approach may be exploited if player B changes their strategy. Consequently, player A must watch the matches that player B played against other players to learn their style and compute the equilibrium strategy, which minimizes exploitation, of the underlying game they will play. This process aligns with the proposed OEF methodology. Next, we will present the definition of the OEF problem and discuss the challenges associated with it.

Offline Equilibrium Finding. Based on the motivating scenario, we observe that in games with complex dynamics, such as table tennis or football (Kurach et al., 2020), it is difficult to build a realistic simulator or to learn the policy while playing the game. An alternative solution is to learn the policy from historical game data. To characterize this situation, we propose the *offline equilibrium finding* (OEF) problem: *Given a fixed dataset D collected by an unknown behavior strategy σ, find an equilibrium strategy profile σ\* of the underlying game.*

In order to gain a deeper understanding of the OEF problem, we will illustrate the OEF problem using Figure 1(b) and provide a formal definition of the offline equilibrium finding problem as follows.

Definition 3.1 (OEF). Given a game's offline dataset $\mathcal{D} = \{(s_t, a, s_{t+1}, r_{t+1})\}$, where $s_t$ and $s_{t+1}$ refer to game states, $a$ refers to the action played at $s_t$, and $r_{t+1}$ refers to the reward after performing action $a$ at $s_t$. The strategy used to collect the dataset $\mathcal{D}$ is unknown. The OEF problem is to find an approximate equilibrium strategy profile $\sigma^*$ that achieves a small gap between $\sigma^*$ and the equilibrium, i.e., the NashConv for NE and the (C)CE Gap Sum for (C)CE, based only on $\mathcal{D}$.

The OEF problem is similar to the offline RL problem but poses several unique challenges: i) the canonical solution concept in the OEF problem is the equilibrium strategy, which necessitates an iterative procedure of computing best responses; ii) the game in the OEF problem involves at least two players competing against each other, which amplifies sensitivity to distribution shifts and other uncertainties compared to the offline RL problem; and iii) the distribution shifts of opponents' actions and the dynamics of the game are coupled, complicating the process of distinguishing and addressing these issues. We list a comparison of OEF with these related works in Table 1. Further discussion about these related works can be found in Appendix B.

Table 1: Comparison of OEF with other related methods.

| Methods | Work w/o env | Converge to equilibrium |
|------------------------------------------------------|----------------|---------------------------|
| Offline RL (Lange et al., 2012; Levine et al., 2020) | ✓ | ✗ |
| Opponent Modelling (He et al., 2016) | ✗ | ✗ |
| Empirical Game-Theoretic Analysis (Wellman, 2006) | ✗ | ✓ |
| OEF | ✓ | ✓ |

## 4 Collection Of Offline Datasets

As delineated in the OEF problem, a crucial element is an offline dataset of the game. Typically, this offline dataset is collected with unspecified strategies within real-world scenarios. Nevertheless, to effectively evaluate and analyze the performance of the offline algorithm in solving the OEF problem, a solitary dataset is insufficient. The reason is that the strategy employed to generate an offline dataset is unknown in the OEF problem, and relying on a single dataset for evaluation introduces bias.
Consequently, an appropriate dataset benchmark for the OEF problem should consist of a diverse set of datasets that closely resemble real-world situations. Nonetheless, generating such a collection of offline datasets with high diversity presents a substantial challenge: the offline datasets should be meaningful rather than merely randomly generated, even though randomly generated datasets may indeed display significant diversity. To mitigate this issue, we propose various methods for collecting a range of datasets that serve as the foundation for OEF research. These methods are all inspired by different real-world cases, ensuring that the resulting datasets are diverse and capable of mimicking real-world situations. We now proceed to describe these collection methods.

## 4.1 Data Collection Methods

Figure 2: Dataset visualization.

Random Method. When playing an unfamiliar game, the most common approach is initially exploring the game by making random actions. Our random method is inspired by this natural tendency for exploration and simulates the experience of a beginner learning and familiarizing themselves with the game for the first time. The random method consists of three steps. In the first step, we assign a uniform strategy to each player in the game. During the second step, players repeatedly participate in the game. Finally, we collect the game data generated throughout the game-play. By following these steps, we obtain a dataset generated through the random strategy, which we refer to as the **Random Dataset**.

Learning Method. Once players become familiar with the game, they tend to develop a non-exploitable strategy, such as the Nash equilibrium strategy, to improve their game-play. This observation inspires our learning method, which simulates the process of a player acquiring a non-exploitable strategy. To simulate this learning process, we run an existing equilibrium-finding algorithm, such as CFR or PSRO. As the algorithm iterates through the equilibrium-finding process, we gather the intermediate interaction data generated in each iteration and store them as the **Learning Dataset**.

Expert Method. The inspiration for this method comes from the notion that, when learning a game, we often benefit from observing more experienced players in action. We assume that the expert always employs a Nash equilibrium strategy, which is a non-exploitable strategy. We can then adopt a similar methodology as in the random method. First, we assign the NE strategy, which can be computed using any existing equilibrium-finding algorithm, to each player in the game. As a second step, these Nashian players repeatedly interact in the game. Finally, we gather the game data generated during game-play and store it as the **Expert Dataset**. However, in multi-player or general-sum games, computing the NE strategy is challenging. In this case, we can still employ an existing equilibrium-finding algorithm to derive a suitable strategy. For instance, the PSRO algorithm with α-rank as the meta-solver (Muller et al., 2019) can yield a fairly effective strategy (with low exploitability) in general-sum, multi-player games. A minimal sketch of the collection loop underlying the random and expert methods is given below.
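The following sketch illustrates the three-step collection procedure on an OpenSpiel game. The episode count, the encoding of states as action histories, and the `strategy` callable are illustrative assumptions rather than our exact implementation; plugging in the uniform strategy yields a random dataset, and plugging in a pre-computed NE strategy yields an expert dataset.

```python
import random
import pyspiel  # OpenSpiel, the platform our datasets are built on

def collect_dataset(game_name, strategy, num_episodes):
    """Play `num_episodes` episodes with `strategy` and record
    (s_t, a, s_{t+1}, r_{t+1}) tuples. `strategy(state)` is assumed to
    return one probability per legal action of `state`."""
    game = pyspiel.load_game(game_name)
    dataset = []
    for _ in range(num_episodes):
        state = game.new_initial_state()
        while not state.is_terminal():
            s_t = tuple(state.history())  # assumed state encoding: the action history
            if state.is_chance_node():
                actions, probs = zip(*state.chance_outcomes())
            else:
                actions, probs = state.legal_actions(), strategy(state)
            a = random.choices(actions, weights=probs)[0]
            state.apply_action(a)
            r = state.returns() if state.is_terminal() else [0.0] * game.num_players()
            dataset.append((s_t, a, tuple(state.history()), tuple(r)))
    return dataset

def uniform_strategy(state):
    """The uniform strategy used by the random method."""
    n = len(state.legal_actions())
    return [1.0 / n] * n

# Random dataset: every player follows the uniform strategy.
random_dataset = collect_dataset("kuhn_poker", uniform_strategy, num_episodes=10000)
```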
Hybrid Method. To simulate more realistic scenarios and generate a diverse range of datasets, we propose a hybrid method that combines the random dataset and the expert dataset in varying proportions. This approach enables the creation of a more comprehensive and diverse collection of datasets that better represent real-world situations. We refer to these combined datasets as **hybrid datasets**.

In this paper, we construct a dataset benchmark for the OEF problem by collecting data from player interactions in the most frequently used benchmark imperfect-information extensive-form games, which are prevalent in contemporary research on equilibrium finding. These games encompass poker games (two-player and multi-player Kuhn poker, two-player and multi-player Leduc poker), Phantom Tic-Tac-Toe, and Liar's Dice. The diverse datasets of these game data serve as the foundation for our OEF problem.

## 4.2 Visualizations Of Collected Datasets

In accordance with the aforementioned collection methods, the collected datasets closely resemble real-world situations. To validate the diversity of these collected offline datasets and gain deeper insights into them, we introduce a visualization method for comparing them. Firstly, we generate the game tree for the corresponding game. Subsequently, we traverse the game tree using depth-first search (DFS) and assign an index to each leaf node based on the DFS order. Lastly, we count the frequency of each leaf node within the dataset; a minimal sketch of this counting procedure is given at the end of this subsection. We focus solely on the frequency of leaf nodes because each leaf node represents a unique sampled trajectory originating from the root node of the game tree. As a result, the frequency of leaf nodes can effectively capture the distribution of the dataset.

To visualize and compare these offline datasets, a range of statistical methods can be employed on the collected frequency data of the leaf nodes. The simplest methods for visualization involve plotting the frequency and cumulative frequency of leaf nodes. Figure 2 displays these datasets for two-player and three-player Kuhn games. From these figures, we can observe that in the random dataset, the frequency of leaf nodes is nearly uniform, whereas in the expert dataset, the frequency distribution of leaf nodes is uneven. The distributions of the learning dataset and the hybrid dataset fall between those of the expert dataset and the random dataset. These observations confirm that the distributions of these datasets differ, thus validating the diversity of our proposed datasets. To provide more insight into our OEF datasets, we also apply other statistical methods, such as the Fourier transform. Additional visualization results can be found in Appendix C.
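A minimal sketch of the leaf-node indexing and counting procedure described above, reusing the history-based state encoding assumed in the collection sketch of Section 4.1:

```python
import pyspiel

def index_leaves(game):
    """Depth-first traversal of the game tree that assigns each leaf
    (terminal history) an index in DFS order."""
    indices, stack = {}, [game.new_initial_state()]
    while stack:
        state = stack.pop()
        if state.is_terminal():
            indices[tuple(state.history())] = len(indices)
        else:
            # Push children in reverse so the leftmost child is expanded first.
            for a in reversed(state.legal_actions()):
                stack.append(state.child(a))
    return indices

def leaf_frequencies(dataset, indices):
    """Count how often each leaf node occurs in an offline dataset of
    (s_t, a, s_{t+1}, r_{t+1}) tuples; only terminal next-states are leaves."""
    counts = [0] * len(indices)
    for _, _, s_next, _ in dataset:
        if s_next in indices:
            counts[indices[s_next]] += 1
    return counts

game = pyspiel.load_game("kuhn_poker")
frequencies = leaf_frequencies(random_dataset, index_leaves(game))  # plot these
```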
## 5 Algorithms For Offline Equilibrium Finding

Drawing inspiration from offline RL (Chen et al., 2020; Yu et al., 2020), there are two possible approaches for solving the OEF problem: model-free and model-based approaches. The model-free approach aims to learn a policy *directly* from the offline dataset, necessitating the establishment of a *direct* relationship between the equilibrium strategy and the offline dataset. The most straightforward method to achieve this is the behavior cloning technique. However, behavior cloning performs well only in certain cases. Specifically, if the offline dataset is generated using an equilibrium strategy, behavior cloning can directly learn the equilibrium strategy from the offline dataset; it fails to produce satisfactory results when the strategy underlying the offline dataset is not an equilibrium strategy. Our experimental results also support this assertion. Moreover, we cannot use the data of any two action tuples to determine which action tuple is closer to an equilibrium strategy, since equilibrium identification requires other action tuples to serve as references. Other model-free algorithms are therefore also insufficient for solving the OEF problem: the model-free approach cannot measure the distance to the equilibrium strategy needed to guide the training process.

The model-based approach typically introduces a model to assist in learning an optimal strategy when addressing offline RL problems. Likewise, we can propose a model-based approach for tackling the OEF problem by incorporating an environment model as an intermediary between the equilibrium strategy and the offline dataset. However, our proposed model-based algorithm also cannot perform well in all cases, particularly when the offline dataset does not cover the majority of the game states. Our experimental results support this claim. As neither a single model-free nor a single model-based approach performs well in all scenarios, we ultimately propose a novel algorithm, **BCMB**, which combines the model-free approach and the model-based approach to effectively solve the OEF problem. In the subsequent sections, we first explain how the behavior cloning technique and the model-based algorithm work to solve the OEF problem. Following that, we introduce our OEF algorithm, the combination method BCMB.

## 5.1 Behavior Cloning Technique

Behavior cloning (BC) is a method that imitates the behavior policy present in the dataset and is frequently used in solving offline RL (Fujimoto & Gu, 2021). In the OEF setting, we can also employ the BC technique to learn a behavior cloning strategy for every player from the offline dataset. More specifically, we can utilize imitation learning to train a policy network $\sigma_i$, parameterized by $\theta$, for each player $i$ to predict the strategy for any given information set $I_i$. Only the information-set and action data are required for training the behavior cloning strategy. We use the cross-entropy loss as the training loss, defined as $\mathcal{L}_{bc} = \mathbb{E}_{(I_i,a)\sim D}[l(a, \sigma_i(I_i;\theta))] = -\mathbb{E}_{(I_i,a)\sim D}[a \cdot \log(\sigma_i(I_i;\theta))]$, where $a$ represents the one-hot encoding of the action. Figure 3(a) illustrates the structure of the behavior cloning policy network. Since equilibrium strategies in most information sets are non-trivial probability distributions, we apply a softmax layer after the output layer to obtain the final mixed strategy. A minimal sketch of this training setup is given below.

Figure 3: Structure of neural networks ((a) behavior cloning policy network; (b) environment model).
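The following sketch of the policy network and the cross-entropy loss $\mathcal{L}_{bc}$ is written in PyTorch; the architecture and sizes are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Behavior cloning policy network: an MLP over an information-set
    encoding with a softmax output layer producing a mixed strategy."""
    def __init__(self, infoset_dim, num_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(infoset_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions))

    def forward(self, infoset):
        return torch.softmax(self.net(infoset), dim=-1)

def bc_loss(policy, infosets, actions):
    """L_bc = -E[a . log sigma_i(I_i; theta)], with `actions` given as
    integer indices (equivalent to the one-hot form in the text)."""
    probs = policy(infosets)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)) + 1e-12)
    return -log_probs.mean()
```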
## 5.2 Model-Based Framework

Many model-based algorithms exist for offline single-agent RL; however, they cannot be directly applied to solve the OEF problem. The primary reason is their inherent reliance on the absence of strategic opponents in the environment. If we used these algorithms in the OEF setting, we would need to train a model for each player to compute the best response strategy given any opponent strategy, a process that would be extremely time-consuming and computationally demanding. To address this issue, we train a single environment model for all players instead of applying a single-agent model-based algorithm for each player. The trained environment model can capture the game information necessary for evaluating any action tuple. In this manner, only one environment model needs to be trained, and all players can share this environment model to compute the equilibrium strategy.

## 5.2.1 Environment Model

In this section, we describe the methodology for training an environment model based on an OEF dataset. The trained environment model aims to provide all the information required for computing the equilibrium strategy for all players. As a result, the environment model can act as the game's environment, with the primary task of learning the game's dynamics. Considering that the dynamics of the games employed in this paper are relatively stable, we can apply supervised learning techniques to train the environment model. Figure 3(b) illustrates the structure of the environment model. Note that the offline dataset comprises data tuples $(s_t, a, s_{t+1}, r_{t+1})$, which enables seamless training of the environment model using supervised learning based on the offline dataset. The environment model $E$, parameterized by $\theta_e$, takes the game state $s_t$ and the action $a$ performed by the player at state $s_t$ as input, and produces the next game state $s_{t+1}$ and the rewards $r_{t+1}$ for all players. Depending on the specific scenario, additional game information can be predicted to facilitate the computation of equilibrium strategies, such as the legal action set $A(s_{t+1})$ of the subsequent state or whether the game has terminated. As delineated in Section 2.1, the chance player embodies stochastic events beyond the control of the players, so the game's dynamics are primarily driven by the chance player (player $c$). To handle this stochasticity, the environment model also outputs whether the next state is played by the chance player; if so, an action is simply sampled according to the predicted legal action set. For training the environment model, stochastic gradient descent (SGD) is employed as the optimizer for parameter updates. Any loss function satisfying the Bregman divergence conditions (Banerjee et al., 2005) can be utilized. In this paper, the mean squared error loss is employed, defined as

$$\mathcal{L}_{env} = \mathbb{E}_{(s_t, a, s_{t+1}, r_{t+1}) \sim D}\left[MSE\left((s_{t+1}, r_{t+1}), E(s_t, a; \theta_e)\right)\right].$$

Lastly, the environment model is trained by performing mini-batch SGD iterations; a minimal sketch of this training loop is given below.
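A minimal sketch, in PyTorch, of the environment model and one mini-batch SGD step on $\mathcal{L}_{env}$; the encodings, sizes, and omitted auxiliary heads (e.g., terminal or chance-player indicators) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    """Environment model E(s_t, a; theta_e): predicts the next state and
    the rewards of all players from the current state and action."""
    def __init__(self, state_dim, action_dim, num_players, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.next_state = nn.Linear(hidden, state_dim)  # predicts s_{t+1}
        self.rewards = nn.Linear(hidden, num_players)   # predicts r_{t+1}

    def forward(self, s_t, a):
        h = self.body(torch.cat([s_t, a], dim=-1))
        return self.next_state(h), self.rewards(h)

def train_step(model, optimizer, batch):
    """One mini-batch SGD step on L_env (mean squared error)."""
    s_t, a, s_next, r_next = batch
    pred_s, pred_r = model(s_t, a)
    loss = ((pred_s - s_next) ** 2).mean() + ((pred_r - r_next) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```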
## 5.2.2 Model-Based Algorithms

Once the environment model is adequately trained, it can provide sufficient game information for equilibrium computation. Utilizing the trained environment model, we propose a general model-based framework capable of generalizing any online equilibrium-finding algorithm to the OEF setting by substituting the actual environment with the trained environment model. To demonstrate this generalization, we instantiate three model-based algorithms: Offline Equilibrium Finding-Policy Space Response Oracles (OEF-PSRO), Offline Equilibrium Finding-Deep CFR (OEF-CFR), and Offline Equilibrium Finding-Joint Policy Space Response Oracles (OEF-JPSRO). OEF-PSRO and OEF-CFR generalize PSRO and Deep CFR, respectively, to compute Nash equilibria (NEs), while OEF-JPSRO generalizes JPSRO to compute coarse correlated equilibria (CCEs).

In PSRO or JPSRO, a meta-game is represented as an empirical game that begins with a single policy (uniform random) and is iteratively expanded by adding new policies (oracles) approximating the best responses to the meta-strategies of the other players. Computing the best response policy oracle necessitates interaction with the environment to obtain game information. In the OEF setting, only an offline dataset is provided, rendering the direct application of PSRO or JPSRO infeasible. In OEF-PSRO and OEF-JPSRO, the trained environment model substitutes the actual environment to supply the game information. When computing the best response policy using DQN or other RL algorithms, the next state and reward given the current state and action are required; the trained environment model can provide such information, as well as the details needed to approximate the missing entries in the meta-game matrix in the same manner. Deep CFR is a variant of CFR that employs neural networks to approximate counterfactual regret values and average strategies. This algorithm necessitates a partial traversal of the game tree to compute the counterfactual regret values, which in turn requires an environment to provide the necessary game information. Analogous to OEF-PSRO, OEF-CFR utilizes the trained environment model to replace the actual environment: during the traversal, the environment must identify the next game state and the utility of terminal game states, for which the trained environment model is employed. These algorithms are elaborated in detail in Appendix E.

## 5.3 Combination Method: BCMB

Although the above two algorithms can be used to solve the OEF problem, they only perform well in certain cases, as shown in the experiment section. To this end, we combine the behavior cloning technique and the model-based framework, creating a more robust approach for tackling the OEF problem. We now introduce the combination method **BCMB**, i.e., how to combine the two trained policies. Let $\alpha$ be the weight of the BC policy, making the weight of the MB policy $1-\alpha$. The simplest way to select the parameter $\alpha$ is to randomly choose a number from 0 to 1. This can be done in an offline way since it does not need any interaction with the actual environment; however, it cannot guarantee finding the best parameter. If we can first interact with the actual environment, we can instead use an online search method to select a better parameter as follows. We first preset 11 weight assignment plans, i.e., $\alpha \in \{0, 0.1, 0.2, \ldots, 0.9, 1\}$. Next, we use these 11 weight assignment plans to combine the two policies, generating a set of final policies. Finally, we test these combined policies in the actual game to determine the best final policy based on the measure used to assess the gap from the equilibrium strategy. This method finds a good parameter but requires online interactions. To reduce online interactions, a third method that lies between the above two is to train a parameter predictor based on the difference between the BC policy and the MB policy. We first collect training data (the policy difference and the corresponding good parameter value) using the above online method for one game; the parameter predictor can then be trained on these data. Since the parameter predictor only depends on the difference between the two policies, it can also be used in different games (more details and experimental results can be found in Appendix E). Although this method only provides an approximately best parameter, it needs little online interaction and can be reused across games.

Figure 4: The flow of OEF algorithms.

A minimal sketch of the online weight selection is given below.
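This sketch assumes each policy maps an information set to an action distribution, and uses a hypothetical `gap_fn` (e.g., NashConv measured in the actual game) to score candidate profiles.

```python
import numpy as np

def combine(bc_policy, mb_policy, alpha):
    """Per-information-set mixture pi = alpha * pi_bc + (1 - alpha) * pi_mb."""
    return lambda infoset: (alpha * np.asarray(bc_policy(infoset))
                            + (1 - alpha) * np.asarray(mb_policy(infoset)))

def select_alpha(bc_policy, mb_policy, gap_fn):
    """Test the 11 preset weights {0, 0.1, ..., 1} in the actual game and
    keep the combined policy with the smallest equilibrium gap."""
    candidates = [i / 10 for i in range(11)]
    gaps = [gap_fn(combine(bc_policy, mb_policy, a)) for a in candidates]
    best = candidates[int(np.argmin(gaps))]
    return best, combine(bc_policy, mb_policy, best)
```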
We present the general procedure of our OEF algorithm, BCMB, in Algorithm 1. Given the offline dataset $D$, we first train an environment model $E$ according to the method introduced in Section 5.2.1. Then, based on the trained environment model, we obtain the MB policy using a model-based algorithm, i.e., an online equilibrium-finding algorithm generalized under the model-based framework. To get the BC policy, we directly apply the behavior cloning technique to the offline dataset. Finally, we combine these two policies, i.e., the BC policy and the MB policy, by assigning appropriate weights to them to derive the final policy. Figure 4 illustrates the whole structure of our OEF algorithm.

Algorithm 1 General Framework of the Offline Equilibrium Finding Algorithm
1: **Input:** an offline dataset $D$
2: Train an environment model $E$ based on the offline dataset $D$;
3: Learn a policy $\pi^{mb}$ on the environment model $E$ using any model-based algorithm;
4: Learn a policy $\pi^{bc}$ based on the offline dataset $D$ using the behavior cloning technique;
5: Combine $\pi^{bc}$ and $\pi^{mb}$ to get the policy $\pi$ by selecting the best $\alpha$ based on the test results in the actual game;
6: **Output:** Policy $\pi$

The dashed lines in the figure represent potential avenues for future research: i) whether we can learn an MB policy with the regularization of the BC policy, as well as interact with the dataset, and ii) whether we can use the learned model to obtain the proper weights when combining these two policies.

## 5.4 Theoretical Analysis

To better understand our OEF problem and algorithm, we offer some theoretical analysis of these algorithms' performance under different offline datasets. To facilitate the analysis, we first provide two assumptions regarding the data coverage of the random dataset and the expert dataset, respectively. These assumptions are derived from the generation process of these datasets and align with intuitive understanding. Since the random dataset is generated using the uniform strategy, every state-action pair will be covered as long as we generate enough data. Because the expert dataset is generated using the Nash equilibrium strategy, the strategy estimated from the expert dataset in a statistical way will be the Nash equilibrium strategy. Therefore, we can state the following assumptions for these two datasets.

Assumption 5.1. *The **random dataset** satisfies the uniform dataset coverage assumption, i.e., for all $s_t$ and all $a \in A(s_t)$, $(s_t, a, s_{t+1})$ is covered by the random dataset.*

Assumption 5.2. *The **expert dataset** only covers the state-action pairs induced by the Nash equilibrium (NE) strategy, and the frequency of these state-action pairs corresponds to the NE strategy.*

Since the behavior cloning policy and the environment model are both neural network models trained in a supervised learning manner, we provide a general generalization bound for training such neural network models in Appendix D. Based on these sample analysis results, we then analyze the relationship between the above algorithms and the OEF datasets. Here, we only present several key theoretical results; further results and proofs can be found in Appendix D. The behavior cloning technique possesses the capability to mimic the behavior policy of the dataset, with its performance primarily relying on the dataset's quality. Consequently, we present the following theorem to summarize the performance of BC under various datasets.
Theorem 5.1. *Assuming that the behavior cloning policy is trained on the offline dataset with an extremely small training error $\epsilon$, the behavior cloning technique (BC) can obtain the equilibrium strategy under the expert dataset and cannot obtain the equilibrium strategy under the random dataset.*

Since the trained environment model substitutes the actual environment in these algorithms, the performance primarily depends on the quality of the trained environment model. Consequently, we provide the following theorem to summarize the performance of the MB approach under varying datasets.

Theorem 5.2. *Assuming that the environment model is trained on the offline dataset with an extremely small training error $\epsilon$, the model-based framework (MB) can converge to an equilibrium strategy under the random dataset and cannot guarantee convergence under the expert dataset.*

The BCMB algorithm combines the BC policy and the MB policy. As a result, drawing upon the insights from the two theorems above, we can readily derive the following theorem regarding its performance.

Theorem 5.3. *Under the assumptions in Theorems 5.1 and 5.2, BCMB can compute the equilibrium strategy under either the random dataset or the expert dataset.*

To better understand our OEF algorithm, we also provide a guarantee of the solution quality of our OEF algorithm in a more general case, as represented by the following theorem.

Figure 5: Comparison results with offline RL ((a) two-player Kuhn poker; (b) two-player Leduc poker).

Figure 6: Experimental results on multi-player games ((a) three-player Kuhn poker; (b) three-player Leduc poker).

Theorem 5.4. *Assuming that the environment model and the behavior cloning policy are trained with an extremely small training error $\epsilon$ on the offline dataset $D_\sigma$ generated using $\sigma$, BCMB can obtain an equal or better strategy than $\sigma$.*

## 6 Experiments

To assess the performance of our OEF algorithms, we conduct the following experiments: i) we run two offline RL algorithms in the OEF setting to evaluate their performance; ii) we perform experiments on various offline datasets to evaluate the effectiveness of our algorithm in computing NEs under the OEF setting; and iii) we conduct experiments on two three-player games to assess the performance of our model-based algorithm in computing CCEs under the OEF setting.

## 6.1 Experimental Setting

OpenSpiel1 is an extensive collection of environments and algorithms for research in games (Lanctot et al., 2019). We use it as our experimental platform, as it is widely accepted and implements many different games. In this paper, we select several poker games (Kuhn poker, Leduc poker), Liar's Dice, and Phantom Tic-Tac-Toe, which are all widely used in previous works (Lisý et al., 2015; Brown & Sandholm, 2019), as experimental domains. We first generate the OEF datasets for every game using the methods introduced in Section 4 and then conduct our experiments on these OEF datasets. NashConv is used to measure how close a strategy is to an NE, and the (C)CE Gap Sum is employed as the measure of closeness to (C)CEs; a minimal sketch of the NashConv evaluation is given below. All results are averaged over three seeds, and error bars are also reported. Only selected results are presented here; the remaining experimental results, the ablation study, and the parameter settings can be found in Appendix F.
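As a sketch of the NashConv measurement, OpenSpiel's exploitability module computes it directly for a tabular policy profile; the uniform policy below is only a stand-in for a learned profile.

```python
import pyspiel
from open_spiel.python import policy as policy_lib
from open_spiel.python.algorithms import exploitability

# NashConv sums, over players, the gain each player obtains by deviating
# to a best response; it is zero exactly at a Nash equilibrium.
game = pyspiel.load_game("kuhn_poker")
profile = policy_lib.UniformRandomPolicy(game)  # stand-in for a learned profile
print(exploitability.nash_conv(game, profile))
```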
## 6.2 Comparison With Offline RL

In this section, we provide empirical evidence demonstrating that naive offline RL algorithms are insufficient for solving the OEF problem. To support this claim, we choose one model-based offline RL algorithm, Model-based Offline Policy Optimization (MOPO) (Yu et al., 2020), and one model-free offline RL algorithm, Best-Action Imitation Learning (BAIL) (Chen et al., 2020), as representatives of offline RL algorithms. Figure 5 shows the comparison results between the offline RL algorithms and our OEF algorithm on two-player Kuhn poker and two-player Leduc poker. The x-axis represents the proportion of random data in the hybrid dataset: when the ratio is zero, the dataset is equivalent to the expert dataset; conversely, when the ratio is one, the hybrid dataset consists entirely of the random dataset. As shown in the figure, our algorithm outperforms the two offline RL algorithms. Additionally, we notice that the performance of the MOPO algorithm varies significantly across different datasets, whereas the performance of the BAIL algorithm appears to be more closely related to the quality of the dataset. However, neither of these offline RL algorithms can produce a strategy profile close enough to the equilibrium strategy, which might be attributed to the players' policies being optimized independently.

1https://github.com/deepmind/open_spiel

Figure 7: Experimental results on Kuhn poker.

Figure 8: Experimental results on Leduc poker.

## 6.3 Computation Of Nash Equilibrium

We now evaluate the performance of our OEF algorithm in computing the NE strategy. We first assess the individual performance of the behavior cloning technique and the model-based algorithm by applying them separately to several games. This assessment helps us understand the strengths and weaknesses of each algorithm. Figures 7(a) and 8(a) show the results of BC on two-player Kuhn poker and Leduc poker. As the proportion of the random dataset increases, the performance of the BC policy decreases in these two games. Additionally, as the size of the offline data increases, the performance becomes more stable, although the improvement is not significant. This observation suggests that the performance of BC primarily depends on the quality of the dataset, i.e., the quality of the behavior policy used to generate it. Figures 7(b) and 8(b) depict the results of the MB framework. As shown in Figure 7(b), different model-based algorithms achieve nearly identical results, indicating that the performance of the MB framework primarily relies on the quality of the trained environment model and is independent of the algorithm used to compute the equilibrium strategy. Another observation is that as the size of the offline dataset increases, the performance improves, indicating that if the dataset includes sufficient data, the trained environment model is closer to the actual environment.

Based on the above results, we can conclude that BC performs poorly on the random dataset but well on the expert dataset. On the other hand, the MB framework exhibits slightly poorer performance on the expert dataset while performing well on the random dataset. This finding aligns with our theoretical analysis results, i.e., Theorem 5.1 and Theorem 5.2. Finally, we proceed to evaluate the performance of our OEF algorithm, BCMB. Figures 7(c)-7(d) and 8(c)-8(d) present the results of our OEF algorithm on two-player Kuhn and Leduc poker. For comparison, we also include the results of the BC and MB methods in these figures.
We observe that our OEF algorithm outperforms both the BC and MB methods in all cases, demonstrating the effectiveness of the combination. The optimal weights of the BC policy ($\alpha$) for these combined policies, illustrated in Figures 7(e) and 8(e), show that as the proportion of the random dataset decreases, the weight of the BC policy in the combined policy increases. This also confirms that the BC policy performs better under the expert dataset and the MB policy performs better under the random dataset. We further evaluate our OEF algorithm on various poker games with different numbers of players using learning datasets, which can be considered datasets generated by unknown strategies. Figures 7(f) and 8(f) demonstrate that our OEF algorithm outperforms the other methods in all games. This indicates that, given an OEF dataset generated by an unknown strategy, our OEF algorithm can consistently obtain a satisfactory approximate NE strategy.

## 6.4 Computation Of Coarse Correlated Equilibrium

To evaluate the performance of our model-based framework in computing the CCE strategy, we apply the OEF-JPSRO algorithm to two three-player poker games using hybrid datasets. We do not apply the behavior cloning technique here since the offline dataset is collected using an independent strategy for each player, rather than a joint strategy. As described in Section 4, in multi-player games, although there is no guarantee of convergence to an NE, we can still use PSRO with α-rank as the meta-solver to obtain a fairly effective strategy for generating the expert dataset. Figure 6 presents the results for three-player Kuhn and Leduc poker. We observe that as the size of the offline data increases, the performance of OEF-JPSRO improves. This further supports the notion that the performance of the model-based framework primarily depends on the trained environment model and highlights its significance in solving the OEF problem.

## 7 Conclusion

We initiated an investigation into offline equilibrium finding (OEF), which focuses on finding equilibria from offline datasets. We first constructed OEF datasets from widely used games using several data-collecting methods. To tackle the OEF problem, we proposed a model-based framework capable of generalizing any online equilibrium-finding algorithm with minor changes by introducing an environment model. Specifically, we adapted several existing online equilibrium-finding algorithms to the OEF setting to compute different equilibrium solutions. To further improve the performance, we combined the behavior cloning technique with the model-based framework. Experimental results demonstrated that our algorithm outperforms existing offline RL algorithms and that the model-based method is essential in the OEF setting. We hope our efforts will open new directions in equilibrium finding and accelerate research in game theory.

Future works. There are several limitations of this work that we intend to tackle in the future. First, the games we considered are rather small, and large-scale games like Texas Hold'em poker (Brown & Sandholm, 2018) were postponed to future work. Second, the types of generated offline datasets are limited. For future work, we plan to collect datasets using large-scale games and connect our library to StarCraft II Unplugged (Mathieu et al., 2021). We will also include more data-collecting strategies (e.g., bounded rational agents) as well as additional human expert data2 to diversify the provided datasets.
## References Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. *arXiv preprint arXiv:1806.06920*, 2018. Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In *ICML*, pp. 104–114, 2020. Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments. arXiv preprint arXiv:1710.03641, 2017. Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, Joydeep Ghosh, and John Lafferty. Clustering with bregman divergences. *Journal of Machine Learning Research*, 6(10), 2005. 2http://poker.cs.ualberta.ca/irc_poker_database.html Branislav Bošanský, Christopher Kiekintveld, Viliam Lisý, and Michal Pěchouček. An exact double-oracle algorithm for zero-sum extensive-form games with imperfect information. Journal of Artificial Intelligence Research, 51:829–866, 2014. Noam Brown and Tuomas Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. *Science*, 359(6374):418–424, 2018. Noam Brown and Tuomas Sandholm. Solving imperfect-information games via discounted regret minimization. In *AAAI*, pp. 1829–1836, 2019. Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. Deep counterfactual regret minimization. In *ICML*, pp. 793–802, 2019. Xi Chen and Xiaotie Deng. Settling the complexity of two-player Nash equilibrium. In *FOCS*, pp. 261–272, 2006. Xinyue Chen, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, and Keith Ross. Bail: Best-action imitation learning for batch deep reinforcement learning. *Advances in Neural Information Processing Systems*, 33: 18353–18363, 2020. Qiwen Cui and Simon S. Du. When is offline two-player zero-sum markov game solvable? *arXiv preprint* arXiv:2201.03522, 2022. Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a Nash equilibrium. In *STOC*, pp. 71–78, 2006. Jakob N Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. *arXiv preprint arXiv:1709.04326*, 2017. Scott Fujimoto and Shixiang Gu. A minimalist approach to offline reinforcement learning. In *NeurIPS*, 2021. Richard Gibson, Marc Lanctot, Neil Burch, Duane Szafron, and Michael Bowling. Generalized sampling and variance in counterfactual regret minimization. In *AAAI*, pp. 1355–1361, 2012. Aditya Grover, Maruan Al-Shedivat, Jayesh Gupta, Yuri Burda, and Harrison Edwards. Learning policy representations in multiagent systems. In *International Conference on Machine Learning*, pp. 1802–1811, 2018. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International Conference on Machine* Learning, pp. 1861–1870, 2018. He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep reinforcement learning. In *ICML*, pp. 1804–1813, 2016. Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In *ICML*, pp. 805–813, 2015. Zhang-Wei Hong, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, and Chun-Yi Lee. A deep policy inference q-network for multi-agent systems. *arXiv preprint arXiv:1712.07893*, 2017. Yunfeng Ji, Xiaoyi Hu, Yutao Chen, Yue Mao, Gang Wang, Qingdu Li, and Jianwei Zhang. 
Model-based trajectory prediction and hitting velocity control for a new table tennis robot. In *IROS*, pp. 2728–2734, 2021. Patrick R Jordan, L Julian Schvartzman, and Michael P Wellman. Strategy exploration in empirical games. In *Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems*, pp. 1131–1138, 2010. Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:21810–21823, 2020. Dong Ki Kim, Miao Liu, Matthew D Riemer, Chuangchuang Sun, Marwa Abdulhai, Golnaz Habibi, Sebastian Lopez-Cot, Gerald Tesauro, and Jonathan How. A policy gradient algorithm for learning to learn in multiagent reinforcement learning. In *International Conference on Machine Learning*, pp. 5541–5550, 2021. B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A Al Sallab, Senthil Yogamani, and Patrick Pérez. Deep reinforcement learning for autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, pp. 4909–4926, 2022. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020. Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zaj¸ac, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, et al. Google research football: A novel reinforcement learning environment. In *AAAI*, pp. 4501–4510, 2020. Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo sampling for regret minimization in extensive games. In *NeurIPS*, pp. 1078–1086, 2009. Marc Lanctot, Vinicius Zambaldi, Audr¯unas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. In NeurIPS, pp. 4193–4206, 2017. Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinicius Zambaldi, Satyaki Upadhyay, Julien Pérolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, et al. OpenSpiel: A framework for reinforcement learning in games. *arXiv preprint arXiv:1908.09453*, 2019. Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. Reinforcement learning: State-of-the-art, pp. 45–73, 2012. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Hui Li, Kailiang Hu, Shaohua Zhang, Yuan Qi, and Le Song. Double neural counterfactual regret minimization. In *ICLR*, 2019. Shuxin Li, Youzhi Zhang, Xinrun Wang, Wanqi Xue, and Bo An. CFR-MIX: Solving imperfect information extensive-form games with combinatorial action space. In *IJCAI*, pp. 3663–3669, 2021. Viliam Lisý, Marc Lanctot, and Michael Bowling. Online Monte Carlo counterfactual regret minimization for search in imperfect information games. In *AAMAS*, pp. 27–36, 2015. Siqi Liu, Kay Choong See, Kee Yuan Ngiam, Leo Anthony Celi, Xingzhi Sun, Mengling Feng, et al. Reinforcement learning for clinical decision support in critical care: comprehensive review. Journal of Medical Internet Research, 22(7):e18477, 2020. Luke Marris, Paul Muller, Marc Lanctot, Karl Tuyls, and Thore Grapael. Multi-agent training beyond zero-sum with correlated equilibrium meta-solvers. *arXiv preprint arXiv:2106.09435*, 2021. 
Michael Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Konrad Zolna, Richard Powell, Julian Schrittwieser, et al. StarCraft II Unplugged: Large scale offline reinforcement learning. In *Deep RL Workshop NeurIPS 2021*, 2021. Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. Deployment-efficient reinforcement learning via model-based offline optimization. *arXiv preprint arXiv:2006.03647*, 2020. Stephen McAleer, John Lanier, Roy Fox, and Pierre Baldi. Pipeline psro: a scalable approach for finding approximate nash equilibria in large games. In *NeurIPS*, pp. 20238–20248, 2020. Stephen McAleer, Kevin Wang, Marc Lanctot, John Lanier, Pierre Baldi, and Roy Fox. Anytime optimal psro for two-player zero-sum games. *arXiv preprint arXiv:2201.07700*, 2022. Stephen Marcus McAleer, John Banister Lanier, Kevin Wang, Pierre Baldi, and Roy Fox. Xdo: A double oracle algorithm for extensive-form games. In *Advances in Neural Information Processing Systems*, 2021. H Brendan McMahan, Geoffrey J Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In *ICML*, pp. 536–543, 2003. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015. Hervé Moulin and J-P Vial. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. *International Journal of Game Theory*, 7(3):201–221, 1978. Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, et al. A generalized training approach for multiagent learning. In ICLR, 2019. Rafael Figueiredo Prudencio, Marcos ROA Maximo, and Esther Luna Colombini. A survey on offline reinforcement learning: Taxonomy, review, and open problems. *arXiv preprint arXiv:2203.01387*, 2022. Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. *Advances in Neural Information Processing Systems*, 34:11702–11716, 2021. Martin Schmid, Neil Burch, Marc Lanctot, Matej Moravcik, Rudolf Kadlec, and Michael Bowling. Variance reduction in Monte Carlo counterfactual regret minimization (VR-MCCFR) for extensive form games using baselines. In *AAAI*, pp. 2157–2164, 2019. L Julian Schvartzman and Michael P Wellman. Exploring large strategy spaces in empirical game modeling. Agent Mediated Electronic Commerce (AMEC 2009), pp. 139, 2009a. L Julian Schvartzman and Michael P Wellman. Stronger cda strategies through empirical game-theoretic analysis and reinforcement learning. In *Proceedings of The 8th International Conference on Autonomous* Agents and Multiagent Systems, pp. 249–256, 2009b. Shai Shalev-Shwartz and Shai Ben-David. *Understanding machine learning: From theory to algorithms*. Cambridge university press, 2014. Yoav Shoham and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2008. Noah Y Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. 
*arXiv preprint arXiv:2002.08396*, 2020. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016. Bharat Singh, Rajesh Kumar, and Vinay Pratap Singh. Reinforcement learning in robotic applications: a comprehensive survey. *Artificial Intelligence Review*, pp. 1–46, 2021. Adish Singla, Anna N Rafferty, Goran Radanovic, and Neil T Heffernan. Reinforcement learning for education: Opportunities and challenges. *arXiv preprint arXiv:2107.08828*, 2021. Eric Steinberger. Single deep counterfactual regret minimization. *arXiv preprint arXiv:1901.07621*, 2019. Karl Tuyls, Shayegan Omidshafiei, Paul Muller, Zhe Wang, Jerome Connor, Daniel Hennes, Ian Graham, William Spearman, Tim Waskett, Dafydd Steel, et al. Game plan: What AI can do for football, and what football can do for AI. *Journal of Artificial Intelligence Research*, 71:41–88, 2021. Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, et al. StarCraft II: A new challenge for reinforcement learning. *arXiv preprint arXiv:1708.04782*, 2017. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Kevin Waugh, David Schnizlein, Michael H Bowling, and Duane Szafron. Abstraction pathologies in extensive games. In *AAMAS*, pp. 781–788, 2009. Michael P Wellman. Methods for empirical game-theoretic analysis. In *AAAI*, pp. 1552–1556, 2006. Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. *Advances in neural information processing systems*, 34:6683–6694, 2021. Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. Mopo: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33:14129–14142, 2020. Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. Combo: Conservative offline model-based policy optimization. *Advances in Neural Information Processing Systems*, 34, 2021a. Xiaopeng Yu, Jiechuan Jiang, Haobin Jiang, and Zongqing Lu. Model-based opponent modeling. arXiv preprint arXiv:2108.01843, 2021b. Yan Zheng, Zhaopeng Meng, Jianye Hao, Zongzhang Zhang, Tianpei Yang, and Changjie Fan. A deep bayesian policy reuse approach against non-stationary agents. *Advances in neural information processing* systems, 31, 2018. Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, and Zhuoran Yang. Pessimistic minimax value iteration: Provably efficient equilibrium learning from offline datasets. In ICLR 2022 Workshop on Gamification and Multiagent Solutions, 2022. Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In *NeurIPS*, pp. 1729–1736, 2007. ## A Frequently Asked Questions Q1: What Is The Impact Of This Work? Offline RL aims to bridge the gap between reinforcement learning and real-world applications. 
We anticipate that our offline equilibrium finding setting could inspire new research directions in equilibrium finding and pave a path to solving real-world problems using these game theory-based methods. Notably, offline RL algorithms cannot be directly applied to the OEF setting. Offline RL seeks to compute the optimal strategy from a single-agent perspective, but this optimal strategy might be exploitable in a game setting. In such situations, the Nash equilibrium (NE) strategy may be a more suitable solution, as it comprises non-exploitable strategies. Consequently, OEF plays a crucial role in obtaining more robust strategies for tackling these competitive real-world problems.

## Q2: How To Connect The Example Scenario With Offline Equilibrium Finding?

In the example scenario, Player A aims to obtain a larger reward by employing the best strategy (i.e., the best response against Player B's previous policy). However, this best strategy may be exploited by Player B if he adapts his strategy accordingly. As a result, Player A must learn more about the game by observing replays (e.g., actions and preferences of Player B). To minimize the risk of being exploited, the optimal solution for Player A is to choose the Nash equilibrium strategy of the underlying game.

## Q3: Why Is OEF Important And More Difficult Than Offline Cooperative Multi-Agent RL?

Utilizing OEF algorithms specifically designed for adversarial environments is crucial in strictly competitive games, such as security games. This setting fundamentally differs from offline multi-agent RL, which generally focuses on cooperation between agents rather than strict competition. For instance, consider the class of pursuit-evasion games, where the pursuer (defender) chases the evader (attacker). In this scenario, we cannot make any assumptions about the attacker's strategy beforehand, as the attacker is strategic and capable of learning. Employing a vanilla offline RL algorithm to learn the defender's optimal strategy based solely on historical data might lead to a significant utility loss, as the defender's optimal strategy could be exploitable. In other words, the attacker may switch to the best response against the computed strategy of the defender instead of adhering to their past behavior estimated from the data. Therefore, achieving a Nash equilibrium (NE) may be a more suitable solution, as NE strategies are non-exploitable.

To be more specific, traditional offline RL focuses on learning the optimal strategy, i.e., obtaining the highest utility, for an agent acting in a dynamic environment modeled as a single MDP, which does not depend on the actions of other agents. In contrast, in two-player games, the dynamics for one player depend not only on the environment but also on the strategy of the opponent. In other words, the MDP a player acts in is determined by both the game and the fixed strategy of the opponent, and hence a change in the opponent's strategy instigates a corresponding change in the MDP. This makes computing the best strategy for the defender against a strategic opponent using offline RL significantly more difficult. The OEF framework we introduce provides methods for computing a player's NE strategy, which is their optimal strategy against a strategic opponent (i.e., in the worst case for the player).

## Q4: What Are The Differences Between OEF And EGTA?
1) As described in Wellman (2006), EGTA takes the game simulator as the fundamental input and performs strategic reasoning through interleaved simulation and game-theoretic analysis. Therefore, **the game simulator is required in EGTA**. In contrast, under the OEF setting, only the offline dataset is available and the game simulator is not required. 2) The estimated game model (empirical game) in EGTA is built based on simulation results, which are obtained by performing **known strategies** on the simulator. In contrast, in the OEF setting, the offline dataset is generated with an **unknown strategy**. In our work, although we use different behavior strategies to generate several offline datasets, we do not utilize these behavior strategies when running our OEF algorithm. Therefore, our proposed approach is different from EGTA, and it is more challenging to find the equilibrium strategy in our OEF setting.

## Q5: What Are The Novelties Of The Proposed OEF Algorithm, BCMB?

We are the first to propose an empirical algorithm for solving the OEF problem. We introduce an environment model to build a model-based framework that can generalize any existing online equilibrium-finding algorithm to the OEF setting. Due to the performance limitations of the model-based framework on certain offline datasets, we combine a model-free algorithm, the behavior cloning technique, with the model-based framework to improve performance. Unlike traditional offline RL algorithms, which belong to either the model-based or the model-free category, our algorithm combines the advantages of both model-based and model-free approaches to efficiently solve the OEF problem.

## B Related Work Overview

Offline Reinforcement Learning (Offline RL). Offline RL is a *data-driven* paradigm that learns exclusively from static datasets of previously collected interactions, making it feasible to extract policies from large and diverse training datasets (Levine et al., 2020). This paradigm can be extremely valuable in settings where online interaction is impractical, either because data collection is expensive or dangerous (e.g., in robotics (Singh et al., 2021), education (Singla et al., 2021), healthcare (Liu et al., 2020), and autonomous driving (Kiran et al., 2022)). Therefore, efficient offline RL algorithms have a much broader range of applications than online RL and are particularly appealing for real-world applications (Prudencio et al., 2022). Due to these attractive characteristics, there have been many recent studies, which we divide into two categories: model-based and model-free algorithms.

Model-free algorithms mainly use the offline dataset directly to learn a good policy, via two types of methods: actor-critic and imitation learning. Actor-critic algorithms focus on adding policy regularization and value regularization to existing reinforcement learning algorithms. Haarnoja et al. (2018) propose soft actor-critic (SAC) by adding an entropy regularization term to the policy gradient objective; this work mainly concerns policy regularization. On the value regularization side, Conservative Q-Learning (CQL) (Kumar et al., 2020) learns a lower bound of the true Q-function by adding value regularization terms to its objective.
Another line of research on learning a policy is imitation learning, which mimics the behavior policy based on the offline dataset. Chen et al. (2020) propose Best-Action Imitation Learning (BAIL), which fits a value function and then uses it to select the best actions. Meanwhile, Siegel et al. (2020) propose a method that learns an Advantage-weighted Behavior Model (ABM) and uses it as a prior in performing Maximum a-posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018). It consists of multiple iterations of policy evaluation and prior learning, followed by a policy improvement step that uses the learned prior to extract the best possible policy.

Model-based algorithms rely on the offline dataset to learn a dynamics model or a trajectory distribution used for planning. The trajectory distribution induced by the models is used to determine the best set of actions to take at each time step. Kidambi et al. (2020) propose Model-based Offline Reinforcement Learning (MOReL), which measures the model's epistemic uncertainty through an ensemble of dynamics models. Meanwhile, Yu et al. (2020) propose Model-based Offline Policy Optimization (MOPO), which uses the maximum prediction uncertainty from an ensemble of models. Concurrently, Matsushima et al. (2020) propose the BehaviorREgularized Model-ENsemble (BREMEN) method, which learns an ensemble of models of the behavior MDP, as opposed to a pessimistic MDP; in addition, it implicitly constrains the policy to be close to the behavior policy through trust-region policy updates. More recently, Yu et al. (2021a) proposed Conservative Offline Model-Based policy Optimization (COMBO), a model-based version of CQL. The main advantage of COMBO over MOReL and MOPO is that it removes the need for uncertainty quantification in model-based offline RL approaches, which is challenging and often unreliable. However, the above offline RL algorithms cannot be directly applied to the OEF problem, as described in Section 3, and our experimental results empirically verify this claim.

Empirical Game Theoretic Analysis (EGTA). Empirical game-theoretic analysis is an empirical methodology that bridges the gap between game theory and simulation for practical strategic reasoning (Wellman, 2006). In EGTA, game models are iteratively extended through a process of generating new strategies based on learning from experience with prior strategies. The strategy exploration problem (Jordan et al., 2010), i.e., how to efficiently assemble an effective portfolio of policies for EGTA, is the most challenging problem in EGTA. Schvartzman & Wellman (2009b) deploy tabular RL as a best-response oracle in EGTA for strategy generation. They also formulate the general problem of strategy exploration in EGTA and investigate whether better options exist beyond best-responding to an equilibrium (Schvartzman & Wellman, 2009a). The investigation of strategy exploration was advanced significantly by the introduction of the Policy Space Response Oracles (PSRO) framework (Lanctot et al., 2017), a flexible framework for iterative EGTA in which, at each iteration, new strategies are generated through reinforcement learning. Note that when employing NE as the meta-strategy solver, PSRO reduces to the double oracle (DO) algorithm (McMahan et al., 2003). In the OEF setting, only an offline dataset is provided, and there is no accurate simulator.
In EGTA, a space of strategies is examined through simulation, which means that a simulator is needed and the policies are known in advance. Therefore, techniques from EGTA cannot be directly applied to OEF.

Opponent Modeling (OM) in Multi-Agent Learning. Opponent modeling is necessary in multi-agent settings where secondary agents with competing goals also adapt their strategies, yet it remains challenging because policies interact with each other and change (He et al., 2016). One simple idea of opponent modeling is to build a model each time a new opponent or group of opponents is encountered (Zheng et al., 2018). However, it is infeasible to learn a model every time. A better approach is to represent an opponent's policy with an embedding vector. Grover et al. (2018) use a neural network as an encoder, taking the trajectory of one agent as input; imitation learning and contrastive learning are used to train the encoder. The learned encoder can then be combined with RL by feeding the generated representation into the policy or/and value network. DRON (He et al., 2016) and DPIQN (Hong et al., 2017) are two algorithms based on DQN, which use a secondary network that takes observations as input and predicts opponents' actions. However, if the opponents can also learn, these methods become unstable, so it is necessary to take the learning process of the opponents into account. Foerster et al. (2017) propose Learning with Opponent-Learning Awareness (LOLA), in which each agent shapes the anticipated learning of the other agents in the environment. Furthermore, the opponents may still be learning continuously during execution. Therefore, Al-Shedivat et al. (2017) propose a meta-policy-gradient-based method named Meta-MPG, which uses trajectories from current opponents to perform multiple meta-gradient steps and constructs a policy that favors updating the opponents. Meta-MAPG (Kim et al., 2021) extends this method by including an additional term that accounts for the impact of the agent's current policy on the future policies of opponents, similar to LOLA. Yu et al. (2021b) propose model-based opponent modeling (MBOM), which employs the environment model to adapt to various opponents. In the OEF setting, our goal is to compute the equilibrium strategy based on the offline dataset. Applying opponent modeling alone is not enough for calculating the equilibrium strategy in the OEF setting, since the opponent will always best respond to the agent.

Equilibrium Finding Algorithms. The contemporary state-of-the-art algorithms for solving imperfect-information extensive-form games may be roughly divided into two groups: no-regret methods derived from CFR, and incremental strategy-space generation methods of the PSRO framework.

For the first group, CFR is a family of iterative algorithms for approximately solving large imperfect-information games. Let $\sigma_i^t$ be the strategy used by player $i$ in round $t$. We define $u_i(\sigma, h)$ as the expected utility of player $i$ given that the history $h$ is reached and then all players act according to strategy $\sigma$ from that point on. Let us define $u_i(\sigma, h \cdot a)$ as the expected utility of player $i$ given that the history $h$ is reached and then all players play according to strategy $\sigma$ except player $i$, who selects action $a$ at history $h$. Formally, $u_i(\sigma, h) = \sum_{z \in Z} \pi^{\sigma}(h, z) u_i(z)$ and $u_i(\sigma, h \cdot a) = \sum_{z \in Z} \pi^{\sigma}(h \cdot a, z) u_i(z)$. The *counterfactual value* $v_i^{\sigma}(I)$ is the expected value of information set $I$ given that player $i$ attempts to reach it.
This value is the weighted average of the value of each history in the information set, where the weight of a history is proportional to the contribution of all players other than $i$ to reaching it. Thus, $v_i^{\sigma}(I) = \sum_{h \in I} \pi_{-i}^{\sigma}(h) \sum_{z \in Z} \pi^{\sigma}(h, z) u_i(z)$. For any action $a \in A(I)$, the counterfactual value of action $a$ is $v_i^{\sigma}(I, a) = \sum_{h \in I} \pi_{-i}^{\sigma}(h) \sum_{z \in Z} \pi^{\sigma}(h \cdot a, z) u_i(z)$. The instantaneous regret for action $a$ in information set $I$ at iteration $t$ is $r^t(I, a) = v_{P(I)}^{\sigma^t}(I, a) - v_{P(I)}^{\sigma^t}(I)$, and the counterfactual regret for action $a$ in $I$ after $T$ iterations is $R^T(I, a) = \sum_{t=1}^{T} r^t(I, a)$. In vanilla CFR, players use *Regret Matching* to pick a distribution over actions in an information set proportional to the positive cumulative regret of those actions. Formally, at iteration $T + 1$, player $i$ selects action $a \in A(I)$ according to probabilities

$$\sigma^{T+1}(I,a)=\left\{\begin{array}{ll}\frac{R_{+}^{T}(I,a)}{\sum_{b\in A(I)}R_{+}^{T}(I,b)}&\mathrm{if}\quad\sum_{b\in A(I)}R_{+}^{T}(I,b)>0,\\ \frac{1}{|A(I)|}&\mathrm{otherwise},\end{array}\right.$$

where $R_{+}^{T}(I, a) = \max\{R^T(I, a), 0\}$, because we are concerned with the cumulative regret only when it is positive (a minimal code sketch of this update is given at the end of this discussion). If a player acts according to regret matching in $I$ on every iteration, then at iteration $T$, $R^T(I) \leq \Delta_i \sqrt{|A_i|} \sqrt{T}$, where $\Delta_i = \max_z u_i(z) - \min_z u_i(z)$ is the range of utilities of player $i$. Moreover, $R_i^T \leq \sum_{I \in \mathcal{I}_i} R^T(I) \leq |\mathcal{I}_i| \Delta_i \sqrt{|A_i|} \sqrt{T}$, and therefore $\lim_{T \to \infty} \frac{R_i^T}{T} = 0$. In two-player zero-sum games, if both players' average regret satisfies $\frac{R_i^T}{T} \leq \epsilon$, their average strategies $(\bar{\sigma}_1^T, \bar{\sigma}_2^T)$ form a $2\epsilon$-equilibrium (Waugh et al., 2009). Several variants have been proposed to solve large-scale imperfect-information extensive-form games. Sampling-based CFR variants (Lanctot et al., 2009; Gibson et al., 2012; Schmid et al., 2019) solve large-scale games effectively by traversing a subset of the game tree instead of the whole tree. With the development of deep learning techniques, neural network function approximation has also been applied to CFR: Deep CFR (Brown et al., 2019), Single Deep CFR (Steinberger, 2019), and Double Neural CFR (Li et al., 2019) use deep neural networks to replace the tabular representation in CFR.

For the second group, PSRO (Lanctot et al., 2017) is a general framework that scales Double Oracle (DO) (McMahan et al., 2003) to large extensive-form games by using reinforcement learning to approximately compute best response strategies. To make PSRO more effective in solving large-scale games, Pipeline PSRO (P2SRO) (McAleer et al., 2020) parallelizes PSRO with convergence guarantees. Extensive-Form Double Oracle (XDO) (McAleer et al., 2021) is a version of PSRO whose restricted game allows mixing population strategies not only at the root of the game but at every information set. It is guaranteed to converge to an approximate NE in a number of iterations that is linear in the number of information sets, whereas PSRO may require a number of iterations exponential in the number of information sets. Neural XDO (NXDO), a neural version of XDO, learns approximate best response strategies through any deep reinforcement learning algorithm. Recently, Anytime Double Oracle (ADO) (McAleer et al., 2022), a tabular double oracle algorithm for two-player zero-sum games, was proposed to converge to a Nash equilibrium while decreasing exploitability from one iteration to the next.
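Returning to the regret-matching rule above, the following minimal sketch (in Python with NumPy; the function name and array layout are our own illustrative choices, not part of any CFR library) maps the cumulative regrets $R^T(I, a)$ of one information set to the next-iteration strategy $\sigma^{T+1}(I, \cdot)$.

```python
import numpy as np

def regret_matching(cumulative_regret: np.ndarray) -> np.ndarray:
    """Map cumulative regrets R^T(I, a) over the actions of one information
    set to the next-iteration strategy sigma^{T+1}(I, .).

    Only the positive part of the regret matters; if no action has positive
    cumulative regret, the uniform strategy is returned."""
    positive = np.maximum(cumulative_regret, 0.0)  # R^T_+(I, a)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full(cumulative_regret.shape, 1.0 / len(cumulative_regret))

# Example: regrets [2, -1, 1] over three actions give the strategy [2/3, 0, 1/3].
print(regret_matching(np.array([2.0, -1.0, 1.0])))
```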
Anytime PSRO (APSRO), a version of ADO, computes best responses via reinforcement learning algorithms. Beyond NEs, other equilibrium solution concepts, such as (coarse) correlated equilibria ((C)CEs), are also considered: Joint Policy Space Response Oracles (JPSRO) (Marris et al., 2021) is proposed for training agents in n-player, general-sum extensive-form games and provably converges to (C)CEs. The excellent performance of these equilibrium finding algorithms depends on the existence of efficient and accurate simulators. However, constructing a sufficiently accurate simulator may not be feasible, or may be very expensive. In this case, we may resort to offline equilibrium finding (OEF), where the equilibrium strategy is computed based on previously collected game data.

## C Visualization Of Datasets

The additional figures showcase more visualization results for different datasets across various games. These results are consistent with those presented in the main paper: there are more high-frequency data in the expert dataset, and the distributions of these datasets are very different.

![22_image_0.png](22_image_0.png)

Figure 9: Frequency of leaf node for different offline datasets

![22_image_1.png](22_image_1.png)

Figure 10: Cumulative frequency of leaf node for different offline datasets

![22_image_4.png](22_image_4.png)

Figure 11: Amplitude-Frequency curve for different offline datasets

## D Theoretical Analysis

The concurrent works Cui & Du (2022) and Zhong et al. (2022) investigate the properties of offline datasets of two-player zero-sum Markov games that are necessary to successfully infer their NEs. To do so, they propose several dataset coverage assumptions. Following the assumptions of Cui & Du (2022), we also define some hypotheses on dataset coverage under our OEF setting and provide an extensive analysis of how dataset coverage influences equilibrium computation in the OEF setting. Our results mainly concern computing Nash equilibria in extensive-form games. We do not provide a sample complexity analysis, since the influence of dataset coverage on the algorithm is more important for our problem; the analysis of dataset coverage provides more intuitive insight into our algorithm.

## D.1 Minimal Dataset Assumption For The OEF Problem

We first introduce the difference between OEF and offline RL from the standpoint of a theoretical analysis of dataset coverage. As demonstrated in offline RL papers (Rashidinejad et al., 2021; Xie et al., 2021), a coverage condition over the optimal policy is sufficient for the offline learning of MDPs. Therefore, it is straightforward to extend this coverage condition to our OEF setting. The following assumption states this extended coverage condition.

Assumption D.1. *(Single Strategy Coverage) The Nash equilibrium strategy $\sigma^*$ is covered by the dataset.*

Subsequently, a question arises: is the single strategy coverage assumption over the offline dataset also sufficient for computing an NE strategy under the OEF setting? The answer is no, and we employ the following theorem to elucidate the rationale behind this.

Theorem D.1. *The single strategy coverage assumption over the offline dataset is not sufficient for computing an NE strategy.*

Proof. We provide a counter-example to prove this theorem. Here, we consider two two-player extensive-form games $M_1$ and $M_2$, which are represented in Figure 12.
We can easily find that the NE of the game $M_1$ is the strategy profile $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$, i.e., player 1 plays $a_1$ at information set $S_1$ and player 2 plays $b_1$ at information set $S_2$. The NE of the game $M_2$ is the strategy profile $\sigma^2 = (\sigma_1^2, \sigma_2^2) = (\{S_1: a_2\}, \{S_2: b_2\})$. Now consider an offline dataset $D$ generated using a strategy profile $\sigma_D$, where $\sigma_D$ is set to be the uniform distribution over the strategy profiles $\sigma^1$ and $\sigma^2$. The dataset $D$ covers the strategy profiles $\sigma^1$ and $\sigma^2$; therefore, it satisfies the single strategy coverage assumption for both games $M_1$ and $M_2$. However, it is impossible for any algorithm to distinguish these two extensive-form games based only on the dataset $D$, since both games are consistent with $D$. Therefore, the single strategy coverage assumption over the offline dataset is not sufficient for computing an NE strategy.

![23_image_0.png](23_image_0.png)

Figure 12: Example of two-player extensive-form game

From the above proof, we know that the single strategy coverage assumption over the dataset is sufficient for computing the optimal strategy under the offline RL setting, while it is not sufficient for computing an NE strategy under the OEF setting. The intuition behind this theorem is that in an offline RL setting, we can easily use the data of two actions to decide which action is better, whereas in an OEF setting, we cannot use data from only two action pairs to know which action pair is closer to an NE, because identifying an NE requires other action pairs as references. Based on this analysis, Cui & Du (2022) provide a minimal coverage assumption over the dataset which is sufficient for computing an NE strategy in two-player zero-sum Markov games, defined as follows.

Assumption D.2. *(Unilateral Coverage) For every strategy $\sigma_i$ and every player $i$, $(\sigma_i, \sigma_{-i}^*)$ is covered by the dataset, where $\sigma^* = (\sigma_1^*, \ldots, \sigma_n^*)$ is the NE strategy.*

Assumption D.3. *(Deterministic Unilateral Coverage) For every deterministic strategy $\sigma_i$ and every player $i$, $(\sigma_i, \sigma_{-i}^*)$ is covered by the dataset, where $\sigma^* = (\sigma_1^*, \ldots, \sigma_n^*)$ is the NE strategy.*

The deterministic unilateral coverage assumption is equivalent to the unilateral coverage assumption. The intuition behind this is that any mixed strategy can be represented by a combination of several deterministic strategies; therefore, if all deterministic strategies are covered by the dataset, then all mixed strategies are also covered. Based on this, in the following proofs we only consider deterministic strategies. Cui & Du (2022) have proved that the unilateral coverage assumption is the minimal assumption that is sufficient for computing an NE strategy in two-player zero-sum Markov games. However, this conclusion does not hold for our model-based framework when computing the equilibrium strategy under the OEF setting. In other words, under the OEF setting, our model-based algorithm cannot guarantee convergence to the equilibrium strategy of the underlying game based on a dataset satisfying the unilateral coverage assumption.

Theorem D.2. *The unilateral coverage assumption over the offline dataset is not sufficient for our model-based algorithm to converge to the equilibrium strategy of the underlying game in the OEF setting.*

Proof. We prove it by providing a counter-example. Here, we consider an imperfect-information extensive-form game $M_3$, which is represented in Figure 13.
We can easily find that the NE strategy of game $M_3$ is the strategy profile $\sigma^* = (\sigma_1, \sigma_2) = (\{S_1: a_1\}, \{S_2: b_1\})$.

![24_image_0.png](24_image_0.png)

Figure 13: Example of two-player extensive-form game

To build an offline dataset satisfying the unilateral coverage assumption, the dataset needs to cover $(\sigma_1^*, \sigma_2)$ for all $\sigma_2$ and $(\sigma_1, \sigma_2^*)$ for all $\sigma_1$. We show the state-action pairs covered by these strategy profiles in Figure 13; the red lines mark the covered state-action pairs. A dataset satisfying the unilateral coverage assumption would cover exactly these state-action pairs. When applying our model-based framework, the first step is to train an environment model based on the offline dataset. Assume that the environment model is trained well, meaning that it precisely represents all state-action pairs in the dataset. The game represented by the trained environment model would then be $M_3^*$ in Figure 13. Note that there are missing data in the game. Although our trained environment model can give approximate results for these missing data, it may result in a different equilibrium strategy. For example, if the missing value in $M_3^*$ is $(0, 0)$ or $(-1, 1)$, then the strategy profile $\sigma = (\sigma_1, \sigma_2') = (\{S_1: a_1\}, \{S_2: b_2\})$ would be the NE strategy of game $M_3^*$. However, the strategy profile $\sigma$ is not the NE strategy of the original game $M_3$. Therefore, the unilateral coverage assumption over the offline dataset is not sufficient for our model-based framework to converge to the NE strategy of the underlying game.

Since the unilateral coverage assumption is not sufficient for our model-based framework to converge to the equilibrium strategy, we provide a minimal dataset coverage assumption under which our model-based algorithm is guaranteed to converge to the equilibrium strategy of the underlying game in the OEF setting.

Assumption D.4. *(Uniform Coverage) For every state $s_t$ and every action $a_t \in A(s_t)$, the state-action pair $(s_t, a_t, s_{t+1})$ is covered by the dataset.*

Theorem D.3. *The uniform coverage assumption over the offline dataset is the minimal dataset coverage assumption which is sufficient for our model-based algorithm to converge to the equilibrium strategy in the OEF setting.*

Proof. From the example in the proof of Theorem D.2, we find that a slight violation of the uniform coverage assumption will impede the computation of the NE strategy using our model-based algorithm. In other words, any state-action pair that is not covered by the dataset would impede the reconstruction of the game by our environment model. Once the dataset satisfies uniform coverage, it covers all the state-action pairs in the game, which is enough for training the environment model: the trained environment model would then be the same as the underlying game of the dataset. Applying our model-based equilibrium finding algorithm to the trained environment model therefore converges to the equilibrium strategy of the underlying game in the OEF setting. This proves that the uniform dataset coverage assumption is sufficient for our model-based framework to converge to the equilibrium strategy.

From the proof of Theorem D.2, we find that the game represented by a dataset satisfying the unilateral coverage assumption may be only a part of the original game (here, we call the game in the dataset a subgame).
However, the non-uniqueness of the equilibrium in the subgame would cause our model-based framework to fail to find the equilibrium strategy of the underlying game. The following theorem provides a more general analysis of the unilateral coverage assumption in the OEF setting.

Theorem D.4. *Under the assumption that the equilibrium strategy profile of the game represented by the dataset is unique, the unilateral coverage assumption is the minimal assumption over the offline dataset which is sufficient for computing an NE strategy in the OEF setting.*

Proof. Firstly, we prove that a slight violation of the unilateral coverage assumption will impede the computation of the Nash equilibrium strategy. We reuse the example game $M_1$ from the proof of Theorem D.1 and consider another dataset $D$ generated using a strategy profile $\sigma_D$, where $\sigma_D$ is set to be the uniform distribution over the three deterministic strategy profiles $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$, $\sigma^2 = (\sigma_1^2, \sigma_2^2) = (\{S_1: a_2\}, \{S_2: b_2\})$, and $\sigma^3 = (\sigma_1^1, \sigma_2^2) = (\{S_1: a_1\}, \{S_2: b_2\})$. Since the NE strategy of game $M_1$ is the strategy profile $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$, only the deterministic strategy profile $\sigma^4 = (\sigma_1^2, \sigma_2^1) = (\{S_1: a_2\}, \{S_2: b_1\})$ is not covered by the dataset $D$, compared with a dataset satisfying the unilateral coverage assumption. The game generated from this dataset is represented in Figure 14, and it has the unique equilibrium strategy $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$.

![25_image_0.png](25_image_0.png)

Figure 14: Example game

Therefore, the dataset $D$ satisfies the assumption that the game generated from the dataset has a unique equilibrium, while slightly violating the unilateral coverage assumption. However, different values of the missing data in the game generated from the dataset would result in different equilibrium strategies. For example, if the missing value is $(0, 0)$, then the equilibrium strategy profile of the game would be $\sigma^* = (\sigma_1, \sigma_2) = (\{S_1: \{a_1: 0.75, a_2: 0.25\}\}, \{S_2: \{b_1: 0.75, b_2: 0.25\}\})$, which is not the equilibrium strategy of the original game. Therefore, a slight violation of the unilateral coverage assumption will impede the computation of the equilibrium strategy.

Next, we prove that the unilateral coverage assumption is sufficient for computing an NE strategy in the OEF setting under the unique equilibrium assumption. Recall the definition of an NE strategy: the strategy profile $\sigma^*$ forms an NE if $u_i(\sigma^*) \geq u_i(\sigma_i', \sigma_{-i}^*)$ for all $i \in N$ and all $\sigma_i' \in \Sigma_i$, which means that $\sigma_i^*$ is the best response strategy against $\sigma_{-i}^*$ for every $i \in N$. According to the unilateral coverage assumption, the dataset covers all strategy profiles $(\sigma_i, \sigma_{-i}^*)$ for all $i$ and all $\sigma_i$. It is then easy to verify from the dataset which strategy of player $i$ is the best response against $\sigma_{-i}^*$. In other words, we have enough information about $(\sigma_i, \sigma_{-i}^*)$ for all $\sigma_i$ to verify that $\sigma_i^*$ is the best response to $\sigma_{-i}^*$; in this way, we can verify the best response strategy for every player. Due to the uniqueness of the equilibrium strategy, the strategy $\sigma^*$ would also be the equilibrium strategy of the original game. We can give an example to further explain it.
Consider another dataset $D'$ for the game $M_1$, generated using the strategy profile $\sigma_D'$, where $\sigma_D'$ is set to be the uniform distribution over the three deterministic strategy profiles $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$, $\sigma^2 = (\sigma_1^2, \sigma_2^1) = (\{S_1: a_2\}, \{S_2: b_1\})$, and $\sigma^3 = (\sigma_1^1, \sigma_2^2) = (\{S_1: a_1\}, \{S_2: b_2\})$. We can easily verify that this dataset satisfies the unilateral coverage assumption for the game $M_1$ and that the game generated from the dataset $D'$ (Figure 14) has a unique equilibrium strategy, $\sigma^1 = (\sigma_1^1, \sigma_2^1) = (\{S_1: a_1\}, \{S_2: b_1\})$. Whatever the missing value in the game is, the equilibrium of the game does not change and is the same as the equilibrium strategy of the original game. Therefore, based on the above analysis, under the strong assumption of equilibrium uniqueness, the unilateral coverage assumption is the minimal dataset coverage assumption.

The above theorem proves that under the strong assumption of equilibrium uniqueness, a dataset satisfying the unilateral coverage assumption is sufficient for the computation of the equilibrium strategy under the OEF setting. However, in the general OEF setting, guaranteeing convergence under a dataset satisfying only the unilateral coverage assumption may require a more powerful algorithm that can handle the non-uniqueness of the equilibrium. We leave this as future work.

## D.2 Generalization Bound For Training Neural Network Model

In this paper, we need to train the behavior cloning policy model and the environment model, which are both neural network models. Both models are trained in a supervised learning manner with different loss functions. Here, we provide a general generalization bound for training such neural network models. The supervised learning framework includes a data-generation distribution $\mathcal{D}$, a hypothesis class $\mathcal{H}$ of the neural network approximator, a training dataset $S$, and evaluation metrics to evaluate the performance of any approximator. Here, we use the loss function $l$ to evaluate the performance of any approximator. The learning framework aims to minimize the true risk function $L_{\mathcal{D}}(h)$, i.e., the expected loss of $h$ under the distribution $\mathcal{D}$:

$$L_{\mathcal{D}}(h)=\mathbb{E}_{d\sim{\mathcal{D}}}[l(h(d),d)].$$

Accordingly, the empirical risk function $L_S(h)$ on the training dataset $S$ is defined as:

$$L_{S}(h)=\frac{1}{|S|}\sum_{d\in S}l(h(d),d).\tag{1}$$

To obtain a generalization bound, we measure the capacity of the composition function class $l \circ \mathcal{H}$ using the empirical Rademacher complexity on the training set $S$ of size $m$, which is defined as:

$${\mathcal{R}}_{S}(l\circ{\mathcal{H}})={\frac{1}{m}}\mathbb{E}_{\mathbf{x}\sim\{+1,-1\}^{m}}\Big[\sup_{h\in\mathcal{H}}\sum_{i=1}^{m}x_{i}\cdot l(h(d_{i}),d_{i})\Big],$$

where $\mathbf{x}$ is distributed i.i.d. according to the uniform distribution on $\{+1, -1\}$. We then use an auxiliary lemma from Shalev-Shwartz & Ben-David (2014), stated as Theorem D.5 below. Before providing the generalization bound, we first define a distance between two different approximators to facilitate the proof.

Definition D.1. *(r-cover). We say a function class $\mathcal{H}_r$ $r$-covers $\mathcal{H}$ under the $\ell_{\infty,1}$-distance if for every function $h \in \mathcal{H}$, there exists $h_r \in \mathcal{H}_r$ such that $||h - h_r||_{\infty,1} = \max_{x \in \mathcal{D}} ||h(x) - h_r(x)||_1 \leq r$.*

Definition D.2. (r-covering number).
*The $r$-covering number of $\mathcal{H}$, denoted by $\mathcal{N}_{\infty,1}(\mathcal{H}, r)$, is the cardinality of the smallest function class $\mathcal{H}_r$ that $r$-covers $\mathcal{H}$ under the $\ell_{\infty,1}$-distance.*

Theorem D.5. *(Shalev-Shwartz & Ben-David (2014)) Let $S$ be a training set of size $m$ drawn i.i.d. from distribution $\mathcal{D}$. Then with probability at least $1 - \delta$ over the draw of $S$ from $\mathcal{D}$, for all $h \in \mathcal{H}$,*

$$L_{\mathcal{D}}(h)-L_{S}(h)\leq2\mathcal{R}_{S}(l\circ\mathcal{H})+4\sqrt{\frac{2\ln{(4/\delta)}}{m}}.$$

Here, we provide the generalization bound that measures the generalizability of the trained approximator under a training dataset of size $m$.

Theorem D.6 (Generalization bound). *Assume that the loss function $l$ is $T$-Lipschitz continuous. Then for the hypothesis class $\mathcal{H}$ of the approximator and the distribution $\mathcal{D}$, with probability at least $1 - \delta$ over the draw of the training set $S$ of size $m$ from $\mathcal{D}$, for all $h \in \mathcal{H}$ we have*

$$L_{\mathcal{D}}(h)-L_{S}(h)\leq2\cdot\inf_{r>0}\Big[\sqrt{\frac{2\log{\mathcal{N}_{\infty,1}({\mathcal{H}},r)}}{m}}+Tr\Big]+4\sqrt{\frac{2\ln{(4/\delta)}}{m}}.$$

Proof. According to Theorem D.5, we have

$$L_{\mathcal{D}}(h)-L_{S}(h)\leq2\mathcal{R}_{S}(l\circ\mathcal{H})+4\sqrt{\frac{2\ln{(4/\delta)}}{m}}.$$

We assume the loss function $l(x, y)$ is $T$-Lipschitz continuous under the $\ell_k$-distance; therefore,

$$|l(x,y)-l(x^{\prime},y)|\leq T||x-x^{\prime}||_{k},$$

where $||\cdot||_k$ is the $k$-norm. Let $\mathcal{H}_r$ be a function class that $r$-covers $\mathcal{H}$ for some $r > 0$, and let $|\mathcal{H}_r| = \mathcal{N}_{\infty,1}(\mathcal{H}, r)$ be the cardinality of the smallest such class. For every $h \in \mathcal{H}$, denote by $h_r \in \mathcal{H}_r$ the function approximator that $r$-covers $h$. Based on the above inequality, we have

$$|l(h(x),y)-l(h_{r}(x),y)|\leq T||h(x)-h_{r}(x)||_{k}\leq Tr.$$

Then we have

$$\begin{aligned}\mathcal{R}_{S}(l\circ\mathcal{H})&=\frac{1}{m}\mathbb{E}_{\mathbf{x}\sim\{+1,-1\}^{m}}\Big[\sup_{h\in\mathcal{H}}\sum_{i=1}^{m}x_{i}\cdot l(h(d_{i}),d_{i})\Big]\\&=\frac{1}{m}\mathbb{E}_{\mathbf{x}\sim\{+1,-1\}^{m}}\Big[\sup_{h\in\mathcal{H}}\sum_{i=1}^{m}x_{i}\cdot\big(l(h_{r}(d_{i}),d_{i})+l(h(d_{i}),d_{i})-l(h_{r}(d_{i}),d_{i})\big)\Big]\\&\leq\frac{1}{m}\mathbb{E}_{\mathbf{x}\sim\{+1,-1\}^{m}}\Big[\sup_{h_{r}\in\mathcal{H}_{r}}\sum_{i=1}^{m}x_{i}\cdot l(h_{r}(d_{i}),d_{i})\Big]+\frac{1}{m}\mathbb{E}_{\mathbf{x}\sim\{+1,-1\}^{m}}\Big[\sup_{h\in\mathcal{H}}\sum_{i=1}^{m}|x_{i}\cdot Tr|\Big]\\&\leq\sup_{h_{r}\in\mathcal{H}_{r}}\sqrt{\sum_{i=1}^{m}\big(l(h_{r}(d_{i}),d_{i})\big)^{2}}\cdot\frac{\sqrt{2\log{\mathcal{N}_{\infty,1}(\mathcal{H},r)}}}{m}+\frac{Tr}{m}\,\mathbb{E}_{\mathbf{x}}||\mathbf{x}||_{1}\\&\leq\sqrt{\frac{2\log{\mathcal{N}_{\infty,1}(\mathcal{H},r)}}{m}}+Tr,\end{aligned}$$

where the bound on the first term follows from Massart's lemma (Shalev-Shwartz & Ben-David, 2014). From the above, we get

$$L_{\mathcal{D}}(h)-L_{S}(h)\leq2\mathcal{R}_{S}(l\circ\mathcal{H})+4\sqrt{\frac{2\ln{(4/\delta)}}{m}}\leq2\cdot\inf_{r>0}\Big[\sqrt{\frac{2\log{\mathcal{N}_{\infty,1}({\mathcal{H}},r)}}{m}}+Tr\Big]+4\sqrt{\frac{2\ln{(4/\delta)}}{m}}.$$

From the above theorem, we can see that given a training dataset of size $m$, we obtain a generalization bound on the error that depends on the characteristics of the loss function. In this paper, we follow the above supervised learning framework to train the behavior cloning policy and the environment model. Therefore, we can state the following assumptions for the trained policy and environment models based on the above theorem:

Assumption D.5. *If the error for training the behavior cloning policy is less than an extremely small ϵ on a dataset with enough data (the required size can be computed according to the above theorem), then we can consider the trained behavior cloning policy model to be the same as the underlying strategy of the dataset.*

Assumption D.6. *If the error for training the environment model is less than an extremely small ϵ on a dataset with enough data, then we can consider the trained environment model to be the same as the underlying game of the dataset.*
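As a concrete illustration of the supervised learning framework above, the sketch below shows how the environment model could be trained to minimize the empirical risk $L_S(h)$ on an offline dataset of $(s_t, a_t, s_{t+1})$ transitions. This is a minimal sketch under our own illustrative assumptions (states one-hot encoded over a finite state set, a cross-entropy loss over successor states, and PyTorch as the framework); the reward and terminal heads used in practice are omitted, and the class and function names are hypothetical.

```python
import torch
import torch.nn as nn

class EnvironmentModel(nn.Module):
    """Minimal next-state predictor for a small, finite game: given a
    one-hot state and a one-hot action, output logits over successor states."""
    def __init__(self, num_states: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_states + num_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_states),
        )

    def forward(self, state_onehot, action_onehot):
        return self.net(torch.cat([state_onehot, action_onehot], dim=-1))

def train_step(model, optimizer, states, actions, next_state_ids):
    # Empirical risk L_S(h): average cross-entropy between the predicted
    # successor distribution and the successors observed in the dataset.
    logits = model(states, actions)
    loss = nn.functional.cross_entropy(logits, next_state_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```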
## D.3 Theoretical Guarantee For Our OEF Algorithm

We now analyze our proposed datasets and their influence on our OEF algorithm. We first state two assumptions on the proposed datasets based on their generation processes. Since we only use the NE strategy to generate the expert dataset, we have the following assumption.

Assumption D.7 (Assumption 5.2). *The expert dataset only covers the state-action pairs induced by the Nash equilibrium (NE) strategy, and the frequency of these state-action pairs corresponds to the NE strategy.*

Note that according to the above assumption, although the expert dataset satisfies the single strategy coverage assumption, it is more strict than single strategy coverage since the expert dataset covers only the NE strategy. From the empirical results on the expert dataset, we found that the model-based algorithm indeed cannot converge to the NE strategy. However, the behavior cloning algorithm can obtain a good strategy on the expert dataset, since it can mimic the strategy used to generate the expert dataset, i.e., the NE strategy. The random dataset is sampled by the uniform strategy; therefore, it involves all state transitions, and we have the following assumption for the random dataset.

Assumption D.8 (Assumption 5.1). *The random dataset satisfies the uniform dataset coverage assumption, i.e., for every $s_t$ and every $a \in A(s_t)$, $(s_t, a, s_{t+1})$ is covered by the random dataset.*

Since the random dataset satisfies the uniform dataset coverage assumption, according to Theorem D.3, the random dataset is sufficient for our model-based algorithm to compute the NE strategy. From the empirical results, we find that the model-based algorithm performs best under the random dataset, which verifies that the random dataset is sufficient for computing the NE strategy. Next, we provide more analysis of the relationship between the algorithm and the dataset. From the empirical analysis, we find that the performance of the model-based algorithm mainly depends on the gap between the trained environment model and the actual game environment: if the trained environment model can recover all the dynamics of the actual game, the performance is good; otherwise, the performance is worse. Since our model-based framework generalizes existing equilibrium finding algorithms to the OEF setting, and the performance of the underlying equilibrium finding algorithm also determines the convergence to the equilibrium strategy, we assume in the following proofs that for any game there always exists an equilibrium finding algorithm that converges to the equilibrium strategy. Then we have the following theorem.

Theorem D.7 (Theorem 5.2). *Assuming that the environment model is trained on the offline dataset with an extremely small training error ϵ, the model-based framework (MB) can converge to an equilibrium strategy under the random dataset and cannot guarantee convergence under the expert dataset.*

Proof. Since the training error of the environment model is less than ϵ, the trained environment model can fully represent the information of the offline dataset according to Assumption D.6. If the offline dataset is the random dataset, the game defined by the trained environment model is the same as the actual game, because every state transition is covered by the random dataset according to Assumption D.8.
Then the strategy learned by our model-based equilibrium finding algorithm is an approximate equilibrium strategy of the actual game, due to the convergence property of the original equilibrium finding algorithm. Therefore, the model-based framework converges to an equilibrium strategy under the random dataset. If the offline dataset is the expert dataset, then the dataset only covers the state transitions related to the NE strategy according to Assumption D.7; some state transitions of the actual game may thus not be covered by the expert dataset. The environment model trained on the expert dataset would produce transition information different from the actual game on states not shown in the dataset, which causes a gap between the trained environment model and the actual game. Although the model-based framework can learn an approximate equilibrium strategy of the game defined by the environment model, there is no guarantee that the learned strategy is an equilibrium strategy of the actual game.

Theorem D.7 is consistent with our previous conclusions: single strategy coverage is insufficient for NE identification, and dataset coverage satisfying Assumption D.4 is sufficient for NE identification according to Theorem D.3. Our empirical results also verify these conclusions: the model-based framework performs best under the random dataset and worst under the expert dataset. Although the expert dataset satisfies single strategy coverage, the expert dataset assumption is more strict than single strategy coverage. We find that the behavior cloning algorithm can perform well on the expert dataset. Therefore, to offset the drawback of the model-based algorithm under the expert dataset, we propose to combine it with the behavior cloning (BC) technique. From the introduction of the BC technique, we know that BC can mimic the behavior policy in the dataset. Therefore, we have the following theorem describing the power of the BC technique.

Theorem D.8 (Theorem 5.1). *Assuming that the behavior cloning policy is trained on the offline dataset with an extremely small training error ϵ, the behavior cloning technique (BC) can obtain the equilibrium strategy under the expert dataset and cannot obtain the equilibrium strategy under the random dataset.*

Proof. The assumption that the behavior cloning policy is trained on the offline dataset with an extremely small training error ϵ means that the behavior cloning policy can precisely mimic the behavior strategy used to generate the offline dataset, according to Assumption D.5. If the offline dataset is the expert dataset, then according to Assumption D.7 the behavior strategy used to generate the expert dataset is the NE strategy; therefore, applying the behavior cloning algorithm to the expert dataset yields an NE strategy. If the offline dataset is the random dataset, then according to the generation process of the random dataset and Assumption D.8, the behavior strategy used to generate the random dataset is a uniform strategy; therefore, the behavior cloning algorithm can only obtain a uniform strategy instead of the equilibrium strategy under the random dataset.

Our experimental results also show the same outcomes as Theorem D.8. The performance of the behavior cloning technique mainly depends on the quality of the behavior strategy used to generate the offline dataset; therefore, the behavior cloning technique performs well under the expert dataset.
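For completeness, a minimal behavior cloning objective is sketched below (our own illustrative formulation, not the exact implementation): the cloned policy is a classifier over actions, trained with cross-entropy on the information-state/action pairs recorded in the offline dataset, so that a small training error corresponds to closely mimicking the behavior strategy (cf. Assumption D.5).

```python
import torch
import torch.nn as nn

def behavior_cloning_loss(policy_net: nn.Module,
                          infostates: torch.Tensor,
                          actions: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the policy's action logits and the actions
    recorded in the offline dataset; minimizing it makes the policy mimic
    the (unknown) behavior strategy that generated the data."""
    logits = policy_net(infostates)  # shape: [batch, num_actions]
    return nn.functional.cross_entropy(logits, actions)
```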
Based on the above two theorems, we propose our OEF algorithm, BCMB, which combines the two techniques with different weights to improve performance on datasets with unknown behavior strategies.

Theorem D.9 (Theorem 5.3). *Under the assumptions in Theorems D.7 and D.8, BCMB can compute the equilibrium strategy under either the random dataset or the expert dataset.*

Proof. In the BCMB algorithm, the weight of the BC policy is represented by α and the weight of the MB policy is 1 − α, where α ranges from 0 to 1. Under the random dataset, let α equal 0; then the BCMB policy equals the MB policy, i.e., the policy trained using the model-based algorithm. According to Theorem D.7, the model-based framework converges to an equilibrium strategy under the random dataset, so BCMB also converges to an equilibrium strategy under the random dataset. Under the expert dataset, let α equal 1; then the BCMB policy equals the BC policy, i.e., the policy trained by the behavior cloning algorithm. Similarly, according to Theorem D.8, BCMB obtains an equilibrium strategy under the expert dataset.

Let us move to the more general case in which the offline dataset is generated by a behavior strategy σ. We have the following theorems for this general case.

Theorem D.10. *Assuming that the offline dataset $D_\sigma$ generated by the behavior strategy σ covers $(s_t, a, s_{t+1})$ for every $s_t$ and every $a \in A(s_t)$, and that the environment model is trained on $D_\sigma$ with an extremely small training error ϵ, the model-based framework converges to an equilibrium strategy that performs as well as or better than σ.*

Proof. According to the proof of Theorem D.7, since every state transition of the actual game is covered by $D_\sigma$, the trained environment model is the same as the actual game under the assumption that the environment model is trained on the offline dataset with an extremely small training error ϵ. Then, according to Theorem D.7, the model-based framework converges to an equilibrium strategy. If the σ used to generate the dataset is not an equilibrium strategy, then the model-based framework obtains a better strategy (an equilibrium strategy) than σ; if σ is an equilibrium strategy, then the strategy trained using the model-based framework performs equal to σ.

Theorem D.11. *Assuming that the behavior cloning policy is trained on the offline dataset $D_\sigma$ generated by the behavior strategy σ with an extremely small training error ϵ, the performance of the behavior cloning policy $\sigma^{bc}$ is as good as the performance of σ.*

Proof. According to Assumption D.5, behavior cloning can precisely mimic the behavior strategy in the offline dataset. Therefore, $\sigma^{bc}$ would be the same as σ, and consequently $\sigma^{bc}$ has the same performance as σ.

Theorem D.12 (Theorem 5.4). *Assuming that the environment model and the behavior cloning policy are trained with an extremely small training error ϵ on the offline dataset $D_\sigma$ generated using σ, BCMB can obtain a strategy equal to or better than σ.*

Proof. Following the proof of Theorem D.9, let α equal 1; then BCMB reduces to BC. According to Theorem D.11, the performance of the BC policy is at least as good as σ; therefore, BCMB can obtain a strategy that is at least as good as the behavior strategy σ. In the other extreme case, in which $D_\sigma$ covers $(s_t, a, s_{t+1})$ for every $s_t$ and every $a \in A(s_t)$, let α equal 0; then BCMB reduces to MB, and according to Theorem D.10, the MB policy performs equal to or better than σ.
Therefore, in this case, BCMB can obtain a strategy equal to or better than σ. In conclusion, under the above assumptions, BCMB performs at least as well as the behavior strategy used to generate the offline dataset. The improvement over the behavior strategy mainly depends on the performance of the model-based algorithm under the offline dataset.

## E Implementation Details

Model-based Framework. We now introduce our instantiated offline model-based algorithms: OEF-PSRO and OEF-CFR, which are adaptations of the two widely used online equilibrium finding algorithms PSRO and Deep CFR, and OEF-JPSRO, which is an adaptation of JPSRO. These three algorithms operate on the accurately trained environment model $E$. We first introduce the OEF-PSRO algorithm, whose whole flow is shown in Algorithm 2. Firstly, we need the accurately trained environment model $E$ as input, and we initialize policy sets $\Pi$ for all players using random strategies. Then, we estimate a meta-game matrix by computing expected utilities for each joint strategy profile $\pi \in \Pi$. In vanilla PSRO, obtaining the expected utility of $\pi$ requires executing the strategy $\pi$ in the actual game simulator. However, the simulator is missing in the OEF setting; therefore, we use the accurately trained environment model $E$ to replace the game simulator and provide the information needed by the algorithm. We then initialize the meta-strategies with a uniform strategy. Next, we compute the best response policy oracle for every player and add it to the player's policy set. When training the best response policy oracle using DQN or other reinforcement learning algorithms, we sample the training data based on the environment model $E$. After that, we compute the missing entries in the meta-game matrix and calculate the meta-strategies of the meta-game. To calculate the meta-strategy σ of the meta-game matrix, we can use a Nash solver or the α-rank algorithm; here, we use the α-rank algorithm as the meta-solver because our algorithm needs to solve multi-player games. Finally, we repeat the above process until the convergence conditions are satisfied. Since the process of JPSRO is similar to PSRO except for the best response computation and the meta-distribution solver, OEF-JPSRO is also similar to OEF-PSRO, and we do not cover it in detail here.

Algorithm 2 Offline Equilibrium Finding - Policy-Space Response Oracles
1: **Input:** trained environment model $E$
2: Initialize policy sets $\Pi$ for all players
3: Compute expected utilities $U^{\Pi}$ for each joint $\pi \in \Pi$ **based on the environment model** $E$
4: Initialize meta-strategies $\sigma_i = \mathrm{Uniform}(\Pi_i)$
5: **repeat**
6: for player $i \in [1, \ldots, n]$ do
7: for best response episodes $p \in [0, \ldots, t]$ do
8: Sample $\pi_{-i} \sim \sigma_{-i}$
9: Train best response $\pi_i'$ over $\rho \sim (\pi_i', \pi_{-i})$, which **samples on the environment model** $E$
10: **end for**
11: Compute missing entries in $U^{\Pi}$ from $\Pi$ **based on the environment model** $E$
12: Compute a meta-strategy $\sigma$ from $U^{\Pi}$ using the α-rank algorithm
13: **end for**
14: **until** the convergence condition is met
15: **Output:** current solution strategy $\sigma_i$ for player $i$

Algorithm 3 shows the process of OEF-CFR. It also needs the trained environment model $E$ as input. We first initialize regret and strategy networks for every player, and then initialize regret and strategy memories for every player. Then we need to update the regret network for every player; to do this, we perform the traverse function to collect the corresponding training data.
The traverse function can be any sampling-based CFR algorithm; here, we use the external sampling algorithm. Note that the traverse function is normally performed on the game tree. In OEF-CFR, the trained environment model replaces the game tree; therefore, the trained environment model is an input of the traverse function. Algorithm 4 shows the process of the traverse function. In this traverse function, we collect the regret training data of the traverser, and the strategy training data of the other players are also gathered. After performing the traverse function several times, the regret network is updated using the regret memory. We repeat the above process for n iterations. Then the average strategy network for every player is trained based on its corresponding strategy memory. Finally, the trained average strategy networks are output as the approximate NE strategy.

Combination Method. As introduced in the main paper, we propose three methods to select the combination parameter. Here, we introduce in detail the method that uses a parameter predictor, since the other two methods are straightforward.

Algorithm 3 Offline Equilibrium Finding - Deep Counterfactual Regret Minimization
1: **Input:** trained environment model $E$
2: Initialize regret network $R(I, a|\theta_{r,p})$ for every player $p$
3: Initialize average strategy network $S(I, a|\theta_{\pi,p})$ for every player $p$
4: Initialize regret memory $\mathcal{M}_{r,p}$ and strategy memory $\mathcal{M}_{\pi,p}$ for every player $p$
5: for CFR iteration $t = 1$ to $T$ do
6: for player $p \in [1, \ldots, n]$ do
7: for traverse episodes $k \in [1, \ldots, K]$ do
8: TRAVERSE$(\phi, p, \theta_{r,p}, \theta_{\pi,-p}, \mathcal{M}_{r,p}, \mathcal{M}_{\pi,-p}, E)$
9: **end for**
10: Train $\theta_{r,p}$ from scratch based on regret memory $\mathcal{M}_{r,p}$
11: **end for**
12: **end for**
13: Train $\theta_{\pi,p}$ based on strategy memory $\mathcal{M}_{\pi,p}$ for every player $p$
14: **Output:** $\theta_{\pi,p}$ for every player $p$

Firstly, we collect the training data by obtaining the best parameter through online interaction for different offline datasets. We use CKA to measure the difference between the BC policy and the MB policy, since these two policies are both neural networks. The differences between each layer of the two policies obtained from different offline datasets are taken as the input, and the best parameter for each offline dataset is taken as the output. We can then easily use these data to train a parameter predictor. Finally, when encountering a new OEF problem, we can directly obtain the parameter from the parameter predictor based on the difference between the trained BC policy and the trained MB policy. This parameter predictor can also be used for other games. Here, we conduct some experiments to show the performance of the proposed combination method. Figures 15 and 16 show the experimental results. We find that BC+MB can provide a good parameter for an unseen game, while the performance depends on the difference between the unseen game and the game used to train the parameter predictor.
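The weighted combination itself is straightforward; the sketch below (with illustrative names, assuming both component policies expose action probabilities at each information set) shows how the BC policy and the MB policy are mixed with weight $\alpha$, where $\alpha = 1$ recovers pure BC and $\alpha = 0$ recovers pure MB.

```python
import numpy as np

def bcmb_policy(bc_probs: np.ndarray, mb_probs: np.ndarray, alpha: float) -> np.ndarray:
    """Mix the behavior-cloning and model-based action distributions at one
    information set with weight alpha, renormalizing for numerical safety."""
    mixed = alpha * bc_probs + (1.0 - alpha) * mb_probs
    return mixed / mixed.sum()

# Example: alpha = 0.5 averages the two action distributions.
bc = np.array([0.9, 0.1])   # e.g., from the trained BC policy
mb = np.array([0.4, 0.6])   # e.g., from the trained MB policy
print(bcmb_policy(bc, mb, 0.5))  # -> [0.65, 0.35]
```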
![33_image_0.png](33_image_0.png)

Figure 15: Experimental results for different combination methods (parameter predictor trained on two-player Kuhn poker)

Algorithm 4 TRAVERSE$(s, p, \theta_{r,p}, \theta_{\pi,-p}, \mathcal{M}_{r,p}, \mathcal{M}_{\pi,-p}, E)$ - External Sampling Algorithm
1: if $s$ is a terminal state **then**
2: Get the utility $u_i(s)$ from environment model $E$
3: **Output:** $u_i(s)$
4: **else**
5: if $s$ is a chance state **then**
6: Sample an action $a$ based on the probability $\sigma_c(s)$, which is obtained from model $E$
7: $s' = E(s, a)$
8: **Output:** TRAVERSE$(s', p, \theta_{r,p}, \theta_{\pi,-p}, \mathcal{M}_{r,p}, \mathcal{M}_{\pi,-p}, E)$
9: **else**
10: if $P(s) = p$ **then**
11: $I \leftarrow s[p]$ \# the game state is formed by the information sets of every player
12: $\sigma(I) \leftarrow$ strategy of $I$ computed from regret values $R(I, a|\theta_{r,p})$ using regret matching
13: for $a \in A(s)$ do
14: $s' = E(s, a)$
15: $u(a) \leftarrow$ TRAVERSE$(s', p, \theta_{r,p}, \theta_{\pi,-p}, \mathcal{M}_{r,p}, \mathcal{M}_{\pi,-p}, E)$
16: **end for**
17: $u_\sigma \leftarrow \sum_{a \in A(s)} \sigma(I, a)\, u(a)$
18: for $a \in A(s)$ do
19: $r(I, a) \leftarrow u(a) - u_\sigma$
20: **end for**
21: Insert the infoset and its action regret values $(I, r(I))$ into regret memory $\mathcal{M}_{r,p}$
22: **Output:** $u_\sigma$
23: **else**
24: $I \leftarrow s[p]$
25: $\sigma(s) \leftarrow$ strategy of $I$ computed from regret values $R(I, a|\theta_{r,-p})$ using regret matching
26: Insert the infoset and its strategy $(I, \sigma(s))$ into strategy memory $\mathcal{M}_{\pi,-p}$
27: Sample an action $a$ from the probability distribution $\sigma(s)$
28: $s' = E(s, a)$
29: **Output:** TRAVERSE$(s', p, \theta_{r,p}, \theta_{\pi,-p}, \mathcal{M}_{r,p}, \mathcal{M}_{\pi,-p}, E)$
30: **end if**
31: **end if**
32: **end if**

![34_image_0.png](34_image_0.png)

Figure 16: Experimental results for different combination methods (parameter predictor trained on two-player Leduc poker)

## F Additional Experimental Results

In this section, we provide experimental results on other games. First, we present the experimental results of the behavior cloning method and the model-based framework (OEF-CFR) using hybrid datasets, followed by the results of our OEF algorithm (BC+MB). We also test our OEF algorithm on a two-player Phantom Tic-Tac-Toe game using the learning dataset. Finally, we provide an ablation study and the setting of the hyper-parameters used in our experiments. Figure 17 displays the results of the behavior cloning technique on several multi-player poker games and a two-player Liar's Dice game. The results show that as the proportion of random data increases, performance decreases. This observation is consistent with the findings of our previous experiments.

![35_image_0.png](35_image_0.png)

Figure 17: Experimental results for the BC method

![35_image_1.png](35_image_1.png)

Figure 18: Experimental results for the MB method

The experimental results of the model-based framework (OEF-CFR) on these games are shown in Figure 18. Since the strategy learned by OEF-CFR is not a joint strategy, we only use NashConv to measure its closeness to NEs in these multi-player games. From these results, we find that the performance of the model-based framework is not stable in these games but still shows a slightly decreasing tendency as the proportion of the random dataset increases. Note that CFR-based algorithms have no theoretical guarantee of convergence in multi-player games; therefore, OEF-CFR also cannot guarantee convergence to the NE strategy. The performance of the model-based framework also depends on the trained environment model.
As a result, poor performance may be caused by an inadequately trained environment model or by the poor performance of the CFR-based algorithm in multi-player games. Hence, learning a sufficiently good strategy in these multi-player games under the OEF setting remains a significant challenge. Figures 19(a)-19(j) show the experimental results of BCMB on various games. We also test our OEF algorithm BCMB on the Phantom Tic-Tac-Toe game based on the learning dataset (Figure 19(k)). The NashConv values in Phantom Tic-Tac-Toe are approximate results, since the best response policy is trained using DQN and the utilities are obtained by simulation. The results show that BCMB performs better than BC and MB, which implies that our combination method can perform well in any game under any unknown dataset. The appropriate weights in the BCMB algorithm under different datasets are shown in Figure 20, which displays a tendency similar to that in previous experiments.

![36_image_0.png](36_image_0.png)

Figure 19: Experimental results for the benchmark algorithm BCMB

![36_image_1.png](36_image_1.png)

Figure 20: Experimental results for the proper weight

Ablation Study. To investigate the influence of the hyper-parameters, we conduct several ablation experiments on the two-player Kuhn poker and Leduc poker games. We consider model structures with various hidden layer sizes. Specifically, for the two-player Kuhn poker game, we use environment models with hidden layer sizes of 8, 16, 32, and 64; for the two-player Leduc poker game, which is a more complicated game, the hidden layer sizes of the different models are 32, 64, and 128. In addition, we train the environment models for different numbers of epochs to evaluate the robustness of our approach. Figures 21-22 show these ablation results. We find that the hidden layer size and the number of training epochs have little effect on the performance of the BC algorithm. These results further verify that the performance of the BC algorithm primarily depends on the quality of the dataset. As we know, the performance of the model-based framework mainly depends on the trained environment model. Since the hidden layer size and the number of training epochs influence the training phase of the environment model, they have a slight impact on the performance of the model-based framework. As long as the hidden layer size and the number of training epochs guarantee that the environment model is trained accurately, the performance of the model-based framework is not affected.

![37_image_0.png](37_image_0.png)

Figure 21: Ablation results for different hidden layer sizes

![37_image_1.png](37_image_1.png)

Figure 22: Ablation results for different training epochs

Parameter Setting. We list the parameters used to train the behavior cloning policy and environment model for all games used in our experiments in Table 2 and Table 3.
| Games | Data size | Hidden layer size | Batch size | Train epochs |
|----------------------|-------------|-------------------|--------------|---------------|
| 2-player Kuhn poker | 500 | 32 | 32 | 1000 |
| 2-player Kuhn poker | 1000 | 32 | 32 | 2000 |
| 2-player Kuhn poker | 5000 | 32 | 32 | 2000 |
| 3-player Kuhn poker | 1000 | 32 | 32 | 5000 |
| 3-player Kuhn poker | 5000 | 32 | 32 | 5000 |
| 3-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 4-player Kuhn poker | 5000 | 64 | 64 | 5000 |
| 4-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 4-player Kuhn poker | 20000 | 64 | 128 | 5000 |
| 5-player Kuhn poker | 5000 | 64 | 64 | 5000 |
| 5-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 5-player Kuhn poker | 20000 | 64 | 128 | 5000 |
| 2-player Leduc poker | 10000 | 128 | 128 | 10000 |
| 2-player Leduc poker | 20000 | 128 | 128 | 10000 |
| 2-player Leduc poker | 50000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 10000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 20000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 50000 | 128 | 128 | 10000 |
| Liar's Dice | 10000 | 64 | 64 | 5000 |
| Liar's Dice | 20000 | 64 | 128 | 5000 |
| Liar's Dice | 50000 | 64 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 5000 | 128 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 10000 | 128 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 20000 | 128 | 128 | 5000 |

Table 2: Parameters for the Behavior Cloning algorithm

| Games | Data size | Hidden layer size | Batch size | Train epochs |
|----------------------|-------------|-------------------|--------------|---------------|
| 2-player Kuhn poker | 500 | 32 | 32 | 1000 |
| 2-player Kuhn poker | 1000 | 32 | 32 | 2000 |
| 2-player Kuhn poker | 5000 | 32 | 32 | 2000 |
| 3-player Kuhn poker | 1000 | 32 | 32 | 2000 |
| 3-player Kuhn poker | 5000 | 32 | 32 | 5000 |
| 3-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 4-player Kuhn poker | 5000 | 64 | 64 | 5000 |
| 4-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 4-player Kuhn poker | 20000 | 64 | 128 | 5000 |
| 5-player Kuhn poker | 5000 | 64 | 64 | 5000 |
| 5-player Kuhn poker | 10000 | 64 | 128 | 5000 |
| 5-player Kuhn poker | 20000 | 64 | 128 | 5000 |
| 2-player Leduc poker | 5000 | 64 | 64 | 5000 |
| 2-player Leduc poker | 10000 | 64 | 64 | 5000 |
| 2-player Leduc poker | 20000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 10000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 20000 | 128 | 128 | 10000 |
| 3-player Leduc poker | 50000 | 128 | 128 | 10000 |
| Liar's Dice | 10000 | 64 | 64 | 5000 |
| Liar's Dice | 20000 | 64 | 128 | 5000 |
| Liar's Dice | 50000 | 64 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 5000 | 128 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 10000 | 128 | 128 | 5000 |
| Phantom Tic-Tac-Toe | 20000 | 128 | 128 | 5000 |

Table 3: Parameters for training the Environment Model
Review 1: Summary: In short, the paper studies the problem of offline decision-making in games. The goal is to use historical data of gameplay to figure out equilibrium strategies for game playing. To do so, the paper proposes a set of datasets (based on the convention of random, hybrid and expert datasets common in offline RL and also logically motivated by game playing specifically) and then proposes a new model-based offline learning algorithm and a model-based + model-free learning algorithm to solve this problem. Strengths and Weaknesses: Strengths: - The paper studies an interesting problem setting that has not been studied before -- though I should say that the theoretical RL literature does study offline RL in two-player games. See for example: https://arxiv.org/abs/2302.02571 and other related lines of work. Weaknesses: - The main weakness of this paper comes from the fact that it does not actually compare to all the adequate baselines. While I do understand why model-free algorithms are not ideal when it comes to learning in games, it is not quite clear to me if they would necessarily be worse for the Hybrid dataset (my rationale is that if the dataset is near an equilibrium, the model of the environment and optimizing for the other opponent may not be needed as much, which is also why BC can work intuitively with the expert dataset). So not comparing model-free algorithms at all is a big weakness in my opinion. In fact, some more detailed studies should be done about it. - Assumptions 5.1 and 5.2 are actually very strong -- in some ways, they do assume coverage of every state and action, with a non-stochastic reward function, which is not great. Even when typical offline RL works assume the setting of full coverage, they would actually try to analyze the setting where the reward function is stochastic and would account for error due to stochasticity despite full coverage. But I don't think the analysis in this paper accounts for that. All the theoretical results are very informal -- everything is just explained in words, and the meanings of "accurate", "error is low", etc. are not explained. So I think the theory needs a full revision to improve rigor and clarity. Right now it is just hand wavy. - How does the error in learning the model of the environment from offline data affect equilibrium finding? Does it need pessimism like most of offline RL? How does BCMB compare to standard model-based learning + CFR / JPSRO + pessimism? If pessimism is already imposed by the model-based algorithm, why do we need BC policy mixing? These questions need to be rigorously studied empirically and theoretically. Requested Changes: I would request the authors to amplify the number of experiments and baselines in the experimental section (as discussed above), discuss all relevant related work, especially work that studies the question of offline RL in Markov games, and finally fix the theoretical results. Without these, I don't think the paper is ready to be accepted to TMLR. Broader Impact Concerns: No broader impact statement in the paper. But I don't think this paper particularly has issues of broader impact. ================================================== Review 2: Summary: This paper tackles the Offline Equilibrium Finding problem: given a dataset of transitions in a multiplayer game, find the Nash equilibrium (or the (coarse) correlated equilibrium) of the game.
To do that, they first build interaction datasets on standard benchmark games, in a similar manner to how datasets for offline RL have been constructed, i.e., recording interactions in the game with different policies: random, expert, or a mix of both. Then, they design an algorithm to solve OEF, which they call BCMB. It is a mixture between a model-based method and behavior cloning on the dataset. The model-based method is adapted from an online equilibrium-finding algorithm: it learns a model from the dataset, and then runs an online algorithm on the dataset. They finally test their method, compared with offline RL baselines, on the OpenSpiel benchmark. Strengths and Weaknesses: # Strengths The paper is generally well written, well organised and easy to follow. Its major strengths are the clarity of its contributions, detailed below. ## New setting OEF is an interesting and novel setting. Offline RL has seen growing interest in recent years; I think it is an important direction to also tackle this setting from a game theory perspective. ## Simple ideas The MB method is a straightforward adaptation of online methods to the offline setting. It is quite interesting to know whether this works or not in practice. ## Data collection The authors have built datasets for OEF. If they are published, this is a useful contribution to the community. # Weaknesses I have a few concerns about the paper, not all of the same importance. "Major weaknesses" are what I think should be taken into account for the acceptance; "minor" ones are easily fixed. ## Major weakness 1: online parameter selection The main weakness of this work is that BCMB is actually not an offline algorithm. Precisely, the $\alpha$ parameter that controls the mixture between BC and MB is selected by a grid search *online, on the environment*. This causes a few issues: First, it breaks the assumption that the algorithms can compute a policy without access to the environment. Without this finetuning, this paper does not provide a way to select $\alpha$ offline. This is a known issue in imitation and offline learning (for example see [1, 2]). Having environment-specific parameters is understandable in some cases, but here it depends on the dataset of the same env (whether it is random collection or expert). This is not just a question of over-tuning HPs. The mixture coefficient is a key component of the algorithm, and needs to depend on the quality of the policy that collected the data; but knowing this quality from offline interaction is a problem by itself. The comparison to the offline RL algorithms is not really fair. IIUC, they use the same set of HPs for each game, which are not finetuned online on each dataset. Overall, this is a strong limitation of the method that is not mentioned or discussed. BCMB can provide insights on what is necessary for OEF, but right now it is a bit misleading to present it as outperforming purely offline methods. [1]: Hyperparameter Selection for Imitation Learning, Hussenot et al, 2021 [2]: Hyperparameter Selection for Offline Reinforcement Learning, Le Paine et al, 2020 ## Major weakness 2: relevance of the theory One thing that lacks some clarity to me is how the theory is presented in the work, especially Theorems 5.2 and 5.3. First, Theorem 5.2 is not really discussed when it is introduced, so in Section 5.2, we do not really know how relevant it is, what it says precisely about the method, whether the assumptions can realistically be respected, etc.
For example: the results depend on the coverage assumption (assumption 5.1), but it is not clear if this assumption can realistically be realized. Usually, finding a covering dataset can be a hard exploration problem, and standard "random" datasets in offline RL benchmarks (e.g., [3]) are generated by a uniform policy, and do not have this property.

Second, they seem to be direct consequences of assumptions (especially 5.3, which essentially says "we can put $\alpha=0$ or $\alpha=1$"), so I am not sure they deserve to be 'theorems'. This is not just a nomenclature issue: Th. 5.3 seems like it is backing up the algorithm with a convergence result, but it just says that from the mixture you can recover one of the elements, which I think is a bit misleading.

Third, and this is related to the first weakness: I actually have an issue with the proof of theorem 3. The proof can be summarized as: "if the dataset is random, choose $\alpha=0$; if expert, choose $\alpha=1$". This exposes the limitation of the algorithm in weakness 1. Sure, these values of alpha are indeed optimal for this setting, but the algorithm has no way of finding them without access to the simulator. So, in the fully offline setting, BCMB cannot guarantee to be optimal, even under these assumptions, because it cannot select the mixture value.

[3] D4RL: Datasets for Deep Data-Driven Reinforcement Learning, Fu et al, 2020

## Minor weaknesses

### Page limit

Some useful and interesting content is relegated to the appendix: the related work (which is quite complete and pedagogical), and the discussion of the theoretical analysis, in particular wrt the different coverage assumptions. It is not that much, but it feels a bit like playing on the appendix separation to get under the page limit, knowing that TMLR accepts longer articles.

### Details on figures

- bars on Figure 6 are overlapping and hard to read.
- colors are not consistent in Figures 7 and 8

Requested Changes: To explain the requested changes and the conclusion of my review: my background is mainly in RL, and I am not an expert on game theory, thus I will stick to the RL and experimental part for the key parts of the decision.

**Selection of $\alpha$**

I think the main issue of the paper is the online selection of the mixture (Weakness 1). As of now, I think the claim that "BCMB is better than offline RL to find a NE" is not really supported. What could change:
- maybe all the other baselines are also finetuned online on the game; in that case the issue is not with this work in particular
- at least the wording should be different, and this limitation should be clearly stated and discussed.
- the best thing would be to have a way to select alpha offline (or online, but the same for every dataset), but this may be a hard problem in itself

**Theory and assumptions**

Then, I think revisiting the presentation of the theory would improve the paper (see weakness 2):
- I would suggest discussing more the relation of theorems 5.2 and 5.3 to the assumptions, and the realistic aspect of the assumptions, in the main text of the paper.

Broader Impact Concerns: No broader impact statement is needed.

==================================================

Review 3:

Summary: This paper addresses the problem of learning equilibrium strategies in $n$-player imperfect-information extensive-form games relying on an offline dataset.
First, the paper introduces a corresponding learning framework, called Offline Equilibrium Finding (OEF), which instantiates the problem of extracting either a Nash or correlated equilibrium from an offline dataset. Then, it presents a methodology (BCMB) that combines behavioral cloning with a model-based procedure employing popular game-solving algorithms (e.g., PSRO and CFR) on the approximated model. Finally, the paper provides some theoretical guarantees, which tie the quality of the data to the quality of the solution, as well as an empirical validation in simple domains.

Strengths and Weaknesses:

Strengths
- The paper tackles the very relevant problem of solving games with offline datasets;
- The paper is well-organized and easy to read.

Weaknesses
- Claims, definitions, and assumptions are not formal enough (e.g., "trained accurately" or "covers the NE strategy" are not translated into mathematical terms);
- I am not familiar with the literature on learning in games, but the novelty of this work is not sufficiently supported in the paper, and previous works seem to introduce similar formulations (e.g., Cui & Du, 2022);
- The paper reports some asymptotic "possibility" results that are not particularly surprising (if I am understanding them correctly), and does not include a finite-sample analysis.

To my understanding, the main contribution of this work is a heuristic that performs well with "hybrid" datasets, as it combines BC, which generally works better with experts' data, together with a model-based procedure, which works better with random data, from which a model of the game can be estimated. While all of this sounds reasonable, I think the paper lacks either strong theoretical support for the introduced method, or a convincing experimental analysis that compares BCMB against competitive baselines in complex domains. Without those, I am wondering whether the contribution is sufficient to warrant publication.

Requested Changes:
1) (Major, Novelty) The paper shall explicitly comment on the novelty of the contribution w.r.t. prior work (e.g., Cui & Du, 2022).
2) (Minor, Presentation) Report the definition of (coarse) correlated equilibria in mathematical terms.
3) (Minor, Presentation) It is hard to understand how CFR and PSRO work from the provided description. I would rather extend the presentation of the algorithms or avoid the description entirely (just pointing to the references instead).
4) (Major, Finite-sample analysis) Most of the theoretical statements start with "Assume that the environment model and/or the behavioral cloning policy are trained accurately". On the one hand, this assumption is ambiguous, as it is unclear whether the authors mean the estimates are "accurate" given the dataset (aka training accuracy) or in general (aka generalization accuracy). In the latter case, the assumption is reasonable only assuming infinite datasets. Typically, the crucial challenge of (theoretical) offline learning is to provide a guarantee on the solution given the quality (more on this below) **and** the size of the dataset.
5) (Major, Coverage assumptions) The coverage assumptions are not stated formally. While Ass. 5.1 is somewhat clear, 5.2 is not.
6) (Major, Model-free vs model-based) The discussion in the initial paragraphs of Sec. 5 seems to imply that model-free offline learning cannot work in OEF, and that model-based methods have to be considered (at least in combination with model-free ones).
This claim does not seem to be supported, especially if we do not restrict model-free methods to BC. In offline RL there exist several model-free methods with guarantees, and I am wondering whether it cannot be the case in OEF as well.

Broader Impact Concerns: This paper can be categorized as fundamental research. I believe explicitly commenting on the potential negative impacts is not necessary at this stage of development.

==================================================

Metareview:

Recommendation: Reject

Comment:

fZuh: "not convinced with the theoretical analysis -- I still find it too informal and it does not discuss a number of steps required for a formal proof. I also think the empirical claims need to be strengthened with more baseline approaches, including more recent approaches"

Patg: "As the authors have explained, I agree it is still a valuable contribution to have a perhaps incomplete, but at least existing, method, that can serve as a baseline for future improvements. I still have a reserve on the clarity of the theoretical statement, which I think still require some work to be more accessible / more rigorous, thus I stick with "leaning for accept" and not "accept"."

FpMY: "this seems to be the first effort in offline learning for extensive-form games, which is interesting.... I believe the quality of the theoretical statements is insufficient, an opinion that appear to be shared by all the reviewers"

==================================================
# Deep Double Descent Via Smooth Interpolation

| Matteo Gamba | mgamba@kth.se |
|----------------------------------------------------|-----------------|
| KTH Royal Institute of Technology | |
| Erik Englesson | engless@kth.se |
| KTH Royal Institute of Technology | |
| Mårten Björkman | celle@kth.se |
| KTH Royal Institute of Technology | |
| Hossein Azizpour | azizpour@kth.se |
| KTH Royal Institute of Technology | |

Reviewed on OpenReview: *https://openreview.net/forum?id=fempQstMbV*

## Abstract

The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance, has been recently characterized in terms of the double descent curve for the test error. Common intuition from polynomial regression suggests that overparameterized networks are able to sharply interpolate noisy data, without considerably deviating from the ground-truth signal, thus preserving generalization ability. At present, a precise characterization of the relationship between interpolation and generalization for deep networks is missing. In this work, we quantify sharpness of fit of the training data interpolated by neural network functions, by studying the loss landscape w.r.t. the input variable locally to each training point, over volumes around cleanly- and noisily-labelled training samples, as we systematically increase the number of model parameters and training epochs. Our findings show that loss sharpness in the input space follows both model- and epoch-wise double descent, with worse peaks observed around noisy labels. While small interpolating models sharply fit both clean and noisy data, large interpolating models express a smooth loss landscape, where noisy targets are predicted over large volumes around training data points, in contrast to existing intuition.1

## 1 Introduction

The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance (Belkin et al., 2018; Zhang et al., 2018), has been recently characterized in terms of the double descent curve of the test error (Belkin et al., 2019; Geiger et al., 2019). Within this framework, as model size increases, the test error follows the classical bias-variance trade-off curve (Geman et al., 1992), peaking as models become large enough to perfectly interpolate the training data, and decreasing as model size grows further (Belkin et al., 2019). This phenomenon, largely studied in the context of regression (Bartlett et al., 2020; Muthukumar et al., 2020) and random features (Belkin et al., 2020), at present lacks a precise characterization relating interpolation to generalization for deep networks.

Current intuition from linear and polynomial regression suggests that, under some hypotheses on the training sample, large overparameterized models are able to perfectly interpolate both cleanly- and noisily-labeled samples, without considerably deviating from the ground-truth signal, thus showing good performance despite overfitting the training data (Muthukumar et al., 2020; Bartlett et al., 2020; Nakkiran et al., 2019a).

1Source code to reproduce our results available at https://github.com/magamba/double_descent

![1_image_0.png](1_image_0.png)

Figure 1: **Intuition from overparameterized regression.** a) Polynomial of large degree, trained with gradient descent to fit noisy scalar data, reproducing the polynomial regression experiment of Nakkiran et al.
(2019a), and reflecting common intuition on double descent, suggesting that the generalization ability of large interpolating models is tied to sharply fitting noisy data, thus resulting in models that do not deviate considerably from the ground truth signal. b) In this work we show that, contrary to intuition, deep networks *smoothly* interpolate both clean and noisy data, and that improved generalization in the interpolating regime is tied to smoothness of the loss w.r.t. the input variable. **Geodesic MC integration.** For each base training point, we generate P geodesic paths by connecting a sequence of augmentations of increasing strength, which we use to cover volumes of increasing size in the loss landscape around each training point. We compare points that are c) sharply interpolated with those that are d) smoothly interpolated.

Figure 1a illustrates this phenomenon, showing a polynomial of large degree that perfectly fits the training data, with predictive function sharply interpolating noisy samples (intuitively corresponding to a spike at each training point), while overall remaining close to the data-generating function.

In this work, we study the emergence of double descent for the test error of deep networks (Nakkiran et al., 2019b) through the lens of smoothness of interpolation of the training data, as model size as well as the number of training epochs vary, for models trained in practice. To quantify smoothness of interpolation, we conduct an empirical exploration of the loss landscape w.r.t. the input variable, by providing explicit measures of sharpness of the loss, focusing on image classification. Due to the inherently noisy nature of Euclidean estimators in pixel space, and following the *manifold hypothesis* (Pope et al., 2020; Bengio, 2013; Narayanan & Mitter, 2010), postulating that natural data lies on a combination of manifolds of lower dimension than the input data's ambient dimension, we constrain our measures to the support of the data distribution, locally to each training point.

Our empirical study shows that the polynomial intuition in Figure 1a does not hold in practice for deep networks, which instead smoothly interpolate both clean and noisy data (Figure 1b). Specifically, smooth interpolation - emerging both for large overparameterized networks and prolonged training - results in large models confidently predicting the (noisy) training targets over large volumes around each training point.

## Contributions

- We present the first systematic empirical study of smoothness of the loss landscape of deep networks in relation to overparameterization and interpolation for natural image datasets.
- Starting from infinitesimal smoothness measures from prior work, we introduce volumetric measures that capture loss smoothness when moving away from training points.
- We develop a geodesic Monte Carlo integration method for constraining our measures to a local approximation of the data manifold, in proximity of each training point.
- We present an empirical study of model-wise and epoch-wise double descent for neural networks trained without confounders (explicit regularization, data augmentation, batch normalization), as well as for commonly-found training settings. By decoupling smoothness from generalization, we empirically show that overparameterization promotes input-space smoothness of the loss landscape.
Particularly, we produce practical examples in which smoothness of the learned function of deep networks does not result in improved generalization, highlighting that the implicit regularization effect of overparameterization should be studied in terms of reduced variation of the learned function.

## 2 Related Work

Recent years have seen increased interest in the study of smoothness of deep networks in relation to generalization. For studies of *learned representations*, interpreting networks as functions of their parameters, loss landscape smoothness has been related to improved generalization (Ma & Ying, 2021; Foret et al., 2020; Rosca et al., 2020), increased stability to perturbations (Keskar et al., 2017), reduced minimum description length (Hochreiter & Schmidhuber, 1997), as well as better model compression (Chang et al., 2021). Additionally, for networks interpreted as functions of their input, for a *fixed parameterization*, sensitivity of the networks' learned function has been connected to generalization performance (LeJeune et al., 2019; Novak et al., 2018). Indeed, mounting evidence, both empirical (Gamba et al., 2022; 2020; Novak et al., 2018) as well as theoretical (Bubeck & Sellke, 2021; Neyshabur et al., 2018), suggests that large overparameterized models achieve robust generalization (Ma & Ying, 2021) via smoothness of the learned function. While overparameterization alone is not enough to guarantee strong robustness (Chen et al., 2021; Rice et al., 2020), the large number of parameters of modern networks is thought to promote implicit regularization of the network's function (Gamba et al., 2022; Bubeck & Sellke, 2021; Neyshabur et al., 2018; 2015). In this context, a direct study of interpolation via the parameter-space interpretation is limited by confounders, such as symmetries of linear layers (Singh & Jaggi, 2020; Li et al., 2015), for which different parameterizations may yield the same equivalent interpolating function (Simsek et al., 2021). Thus, our work adopts the input-space view of the loss landscape, to directly study sharpness of interpolation around each training point. Our methodology builds upon input-space sensitivity analyses for neural networks, presenting a first systematic study of the role of overparameterization in promoting smoothness of the network's learned function. The smoothness measures presented in section 3 are inspired by the vast body of work on the loss landscape of neural networks in parameter space. Due to the extensive theoretical literature on double descent in simplified controlled settings such as linear regression (Muthukumar et al., 2020; Bartlett et al., 2020; Belkin et al., 2018), in the following we mainly draw connections to prior work targeting deep networks.

Deep Double Descent Double descent (Belkin et al., 2019; Geiger et al., 2019) was first observed for several machine learning algorithms for increasing model size (model-wise). Later, Nakkiran et al. (2019b) showed a similar trend during training of deep networks (epoch-wise), as well as w.r.t. dataset size (sample-wise). The phenomenon has been studied from various perspectives: bias-variance decomposition (Yang et al., 2020; Neal et al., 2018), samples-to-parameters ratio (Belkin et al., 2020), parameter norms (Belkin et al., 2019), and decision boundaries (Somepalli et al., 2022).
In this work, we study model-wise and epoch-wise double descent in terms of smoothness of the loss landscape with respect to the input, and separate the analysis in terms of clean and noisily-labeled data points. Importantly, in contrast to existing studies of double descent (Belkin et al., 2020), we focus on input space and on the training loss - a quantity that does not follow double descent - and study sharpness metrics based on training data, showing that they strongly correlate with the test error. The most related work to ours is the concurrent one of Somepalli et al. (2022), studying decision boundaries in terms of reproducibility and double descent. Our works differ in that we study double descent in the loss landscape. Furthermore, our study takes a closer look at the impact of clean and noisily-labeled samples, and presents settings in which the emergence of regularity of the loss landscape does not result in improved generalization. Finally, we focus on implicit regularization (by disabling batch norm, data augmentation) and a simpler optimization procedure (constant learning rate SGD) to reduce confounding factors.

Loss Landscape of Neural Networks To understand the remarkable generalization ability of deep networks (Xie et al., 2020; Geiger et al., 2019; Keskar et al., 2017), as well as to design better training criteria (Foret et al., 2020), several works study the loss landscape of deep networks in *parameter space*, focusing on solutions obtained by SGD (Kuditipudi et al., 2019), as well as the optimization process (Arora et al., 2022; Li et al., 2021). Inspired by such literature, we quantify smoothness of the loss landscape by estimating the *sharpness* of the loss, as proposed by Foret et al. (2020) and Keskar et al. (2017) for parameter space, but we perform our analysis in *input space*. Importantly, in this work we focus on image classification tasks, and study smoothness of interpolation of training data points.

Input Space Sensitivity and Smoothness Novak et al. (2018) present an empirical sensitivity study of fully-connected networks with piece-wise linear activation functions through the input-output Jacobian norm, which is shown to strongly correlate with the generalization ability of the networks considered. Their study proposes an infinitesimal analysis of the Jacobian norm at training and validation points, as well as the use of input-space trajectories in proximity of the data manifold to probe trained networks. LeJeune et al. (2019) analyse second-order information (the tangent Hessian of a neural network) by using weak data augmentation to constrain their measure to the proximity of the data manifold. Lastly, Gamba et al. (2022) introduce a nonlinearity measure for piece-wise linear networks that strongly correlates with the test error in the second descent for large overparameterized models. Similar to the first two works, we study smoothness of neural networks, using the Jacobian and Hessian norms of neural networks trained in practice, and similar to the latter work, we provide a systematic study of double descent, which we further extend to epoch-wise trends. Finally, Rosca et al. (2020) postulate a connection between model-wise double descent and smoothness: during the first ascent models fit the training data at the expense of smoothness, while the second descent happens as the model size becomes large enough for smoothness to increase.
Later, Bubeck & Sellke (2021) theoretically prove a universal law of robustness, highlighting a trade-off between the model size and the Lipschitz constant of a learning algorithm w.r.t. its input variable. Our work provides empirical evidence supporting the postulate of Rosca et al. (2020) and the law of robustness of Bubeck & Sellke (2021).

## 3 Methodology

Our leading research question is to quantify smoothness of interpolation of training data for deep networks trained on classification tasks, as the number of model parameters is increased. We interpret a network as a function with input variable $\mathbf{x} \in \mathbb{R}^d$ and learnable parameter $\theta$, incorporating all weights and biases. Our study focuses on the landscape of the loss $\mathcal{L}_{\theta}(\mathbf{x}, y) := \mathcal{L}(\theta, \mathbf{x}, y)$, treated as a function of the input $\mathbf{x}$, with target $y$. Inspired by the literature on the loss landscape of neural networks in parameter space (Foret et al., 2020; Dinh et al., 2017; Keskar et al., 2017), we quantify (the lack of) smoothness by devising explicit measures of loss sharpness in a neighbourhood of training points $(\mathbf{x}_n, y_n)$, for $n = 1, \ldots, N$. Crucially, for any given network, we focus on sharpness w.r.t. the input variable $\mathbf{x}$, keeping the parameter $\theta$ fixed. We begin by describing infinitesimal sharpness in section 3.1, which we compute in proximity of the data manifold local to each training point in section 3.2. Finally, we introduce a method for estimating sharpness over data-driven volumes by exploiting data augmentation in section 3.3, and in section 3.4 we detail the chosen data augmentation strategies. The proposed methodology enables us to measure sharpness of interpolation of the training data, by restricting our study near the support of the data distribution.

## 3.1 Sharpness At Data Points

To estimate how sharply the loss changes w.r.t. infinitesimal perturbations of the input variable $\mathbf{x}$, we study the Jacobian of the loss,

$$\mathbf{J}(\mathbf{x},y):=\frac{\partial}{\partial\mathbf{x}}\mathcal{L}_{\theta}(\mathbf{x},y)\tag{1}$$

To measure sharpness at a point $(\mathbf{x}_n, y_n)$, we follow Novak et al. (2018) and compute the $\ell_2$ norm of $\mathbf{J}(\mathbf{x}_n, y_n)$, which we take in expectation over the training set $\mathcal{D} = \{(\mathbf{x}_n, y_n)\}_{n=1}^{N}$,

$$J=\mathbb{E}_{\mathcal{D}}\|\mathbf{J}(\mathbf{x},y)\|_{2}\tag{2}$$

assuming that the loss is differentiable once at the points considered. Intuitively, sharpness is measured by how fast the loss $\mathcal{L}_{\theta}(\mathbf{x}, y)$ changes in infinitesimal neighbourhoods of the training data, and a network is said to smoothly interpolate a data point $\mathbf{x}_n$ if the loss is approximately flat locally around the point and the point is classified correctly according to the corresponding target $y_n$. Throughout our experiments, the Jacobian $\mathbf{J}$ is computed using a backward pass w.r.t. the input variable $\mathbf{x}$.
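For concreteness, the measure of Equation 2 can be estimated in a few lines of PyTorch. The following is a minimal sketch, assuming a standard classifier `model` trained with crossentropy; it is meant as an illustration rather than our released implementation (see footnote 1), and iteration over the full training set is elided:

```python
import torch
import torch.nn.functional as F

def jacobian_norm(model, x, y):
    """Batch estimate of Equation 2: E_D || dL/dx ||_2."""
    x = x.clone().requires_grad_(True)            # track gradients w.r.t. the input
    loss = F.cross_entropy(model(x), y, reduction="sum")
    (grad,) = torch.autograd.grad(loss, x)        # per-sample J(x_n, y_n), since the loss is a sum
    return grad.flatten(1).norm(dim=1).mean()     # l2 norm per sample, averaged over the batch
```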
Equation 2 provides first-order information about the loss landscape. To gain knowledge about curvature, we also study the Hessian of the loss w.r.t. the input variable,

$$\mathbf{H}(\mathbf{x},y):=\frac{\partial^{2}}{\partial\mathbf{x}\,\partial\mathbf{x}^{T}}\mathcal{L}_{\theta}(\mathbf{x},y)\tag{3}$$

whose Frobenius norm we again take in expectation over the training set,

$$H=\mathbb{E}_{\mathcal{D}}\|\mathbf{H}(\mathbf{x},y)\|_{2}\tag{4}$$

The Hessian tensor in Equation 3 depends quadratically on the input space dimensionality $d$, providing a noisy Euclidean estimator of loss curvature in proximity of the input data. Following the *manifold hypothesis* (Bengio, 2013; Narayanan & Mitter, 2010), stating that natural data lies on subspaces of dimensionality lower than the ambient dimension $d$, we restrict Hessian computation to the tangent subspace of each training point $\mathbf{x}_n$. Starting from Equation 1, throughout our experiments, Equation 3 is estimated by computing the tangent Hessian, as outlined in the next section.

## 3.2 Tangent Hessian Estimation

To constrain Equation 3 to the support of the data distribution, we adapt the method by LeJeune et al. (2019) and estimate the loss Hessian norm projected onto the data manifold local to each training point. For any input data point $(\mathbf{x}_n, y_n)$ and corresponding Jacobian $\mathbf{J}(\mathbf{x}_n, y_n)$, we generate $M$ augmented data points $\mathbf{x}_n + \mathbf{u}_m$ by randomly sampling a displacement vector $\mathbf{u}_m$ using weak data augmentation. For each sampled $\mathbf{u}_m$, we then estimate the Hessian $\mathbf{H}(\mathbf{x}_n, y_n)$ projected along the direction $\mathbf{x}_n + \mathbf{u}_m$, by computing the finite difference $\frac{1}{\delta}\big(\mathbf{J}(\mathbf{x}_n, y_n) - \mathbf{J}(\mathbf{x}_n + \delta\mathbf{u}_m, y_n)\big)$. Then, following Donoho & Grimes (2003), we estimate the Hessian norm directly by computing

$$H=\frac{1}{M^{2}\delta^{2}}\,\mathbb{E}_{\mathcal{D}}\Big(\sum_{m=1}^{M}\|\mathbf{J}(\mathbf{x}_{n},y_{n})-\mathbf{J}(\mathbf{x}_{n}+\delta\mathbf{u}_{m},y_{n})\|_{2}^{2}\Big)^{\frac{1}{2}}\tag{5}$$

which is equivalent to a rescaled version of the rugosity measure of LeJeune et al. (2019). Importantly, different from rugosity, we generate augmentations $\mathbf{x}_n + \mathbf{u}_m$ by using weak colour transformations in place of affine transformations (1-pixel shifts), since weak photometric transformations are guaranteed to be fully on-manifold. Details about the specific colour transformations are presented in appendix C.
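The estimator of Equation 5 then only requires repeated Jacobian evaluations. Below is a sketch under the same assumptions as before, where `weak_augment` is a placeholder standing in for the colour transformations of appendix C:

```python
import torch
import torch.nn.functional as F

def per_sample_input_grad(model, x, y):
    """J(x_n, y_n): gradient of the loss w.r.t. each input in the batch."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction="sum")
    (grad,) = torch.autograd.grad(loss, x)
    return grad.detach()

def tangent_hessian_norm(model, x, y, weak_augment, M=8, delta=1e-2):
    """Finite-difference estimate of Equation 5 over one batch."""
    j0 = per_sample_input_grad(model, x, y)
    acc = torch.zeros(x.size(0), device=x.device)
    for _ in range(M):
        u = weak_augment(x) - x                         # on-manifold displacement u_m
        jm = per_sample_input_grad(model, x + delta * u, y)
        acc += (j0 - jm).flatten(1).norm(dim=1) ** 2    # squared finite difference
    return acc.sqrt().mean() / (M ** 2 * delta ** 2)    # prefactor of Equation 5
```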
## 3.3 Sharpness Over Data-Driven Volumes

The measures introduced in Equations 2 and 5 capture local sharpness over infinitesimal neighbourhoods of input data points. To study how different networks fit the training data, we devise a method for estimating loss sharpness over volumes centered at each training point $\mathbf{x}_n$, as one moves away from the point. Essentially, we exploit a variant of Monte Carlo (MC) integration to capture sharpness over data-driven volumes, by applying two steps. First, we integrate the Jacobian and Hessian norms along geodesic paths $\pi_p \subset \mathbb{R}^d$ based at $\mathbf{x}_n$, on the data manifold local to each training point, for $p = 1, \ldots, P$. Second, we estimate sharpness over the volume covered by the loss along the $P$ paths via MC integration. The following details each step.

Sharpness along geodesic paths For each training point $(\mathbf{x}_n, y_n) \in \mathcal{D}$, we aim to estimate loss sharpness as we move away from $\mathbf{x}_n$, while traveling on the support of the data distribution. To do so, we exploit a sequence of weak data augmentations of increasing strength to generate $P$ paths $\pi_p \subset \mathbb{R}^d$ in the input space, each formed by connecting augmentations of $\mathbf{x}_n$ in order of increasing strength. Formally, let $T_s : \mathbb{R}^d \to \mathbb{R}^d$ represent a family of smooth transformations (data augmentation) acting on the input space and governed by parameter $s$, controlling the strength $S = \|s\|_2$ as well as the direction of the augmentation in $\mathbb{R}^d$. In general, the parameter $s$, interpreted as a suitably distributed random variable, models the randomness of the transformation. Randomly sampling $s$ yields a value $s^{p,k}$ corresponding to a fixed transformation $T_{s^{p,k}}$ of strength $S^k$. For instance, for affine translations, $s^{p,k}$ models a random radial direction sampled from a hypersphere centered at $\mathbf{x}_n$, with strength $S^k$ denoting the magnitude of the translation (e.g. 4-pixel shift). For photometric transformations, $s^{p,k}$ may model the change in brightness, contrast, hue, and saturation, with total strength $S^k$.

To generate on-manifold paths $\pi_p$ starting from $\mathbf{x}_n$, we proceed as follows. First, we fix a sequence of $K + 1$ strengths $S^0 < S^1 < \ldots < S^K$, with $S^0 = 0$ denoting the identity transformation $\forall p$. Then, for each strength $S^k$, with $k \geq 1$, $P$ random directions $s^{p,k}$ are sampled, each with respective fixed magnitude $\|s^{p,k}\|_2 = S^k$. This yields $P$ sequences of transformations $\{T_{s^{p,k}}\}_{k=0}^{K}$, each producing augmented versions $\mathbf{x}_n^{p,k}$ of $\mathbf{x}_n$, ordered by strength, $\mathbf{x}_n^{p,1} \prec \ldots \prec \mathbf{x}_n^{p,K}$, and forming a path $\pi_p \subset \mathbb{R}^d$. Specifically, each path $\pi_p$ approximates an on-manifold trajectory by a sequence of Euclidean segments $\overline{\mathbf{x}_n^{p,k+1}\,\mathbf{x}_n^{p,k}}$, for $k = 0, \ldots, K-1$. The maximum augmentation strength $S^K$ controls the distance traveled from $\mathbf{x}_n$, while the number $K$ of strengths used controls how fine-grained the Euclidean approximation is. Pseudocode for generating geodesic paths is presented in section D.

Volume integration Once a sequence of paths $\{\pi_p\}_{p=1}^{P}$ is generated for $\mathbf{x}_n$, volume-based sharpness is computed by integrating over each path $\pi_p$, and normalizing the measure by the length $\operatorname{len}(\pi_p)$ of each path:

$$\frac{1}{P}\sum_{p=1}^{P}\frac{1}{\operatorname{len}(\pi_{p})}\int_{\pi_{p}}\sigma(\mathbf{x},y_{n})\,d\mathbf{x}\tag{6}$$

where $\sigma$ represents an infinitesimal sharpness measure, namely the Jacobian and tangent Hessian norms at $(\mathbf{x}_n, y_n)$. The same method can also be applied to accuracy and crossentropy loss, to evaluate consistency and confidence of the models' predictions over volumes. Figures 1c and 1d illustrate geodesic MC integration. For each training point, $P$ geodesic paths are generated, each anchored to the data manifold by $K$ augmentations. Integrating infinitesimal measures over each path returns a MC sample of sharpness along $\pi_p$. Then, volumetric sharpness is estimated by MC integration over $P$ samples. Importantly, the number $P$ of paths is fixed throughout all experiments, representing the number of MC samples for volume-based integration. Finally, we take a mean-field view by averaging over the training set $\mathcal{D}$:

$$\frac{1}{P}\mathbb{E}_{\mathcal{D}}\sum_{p=1}^{P}\frac{1}{\operatorname{len}(\pi_{p})}\int_{\pi_{p}}\sigma(\mathbf{x},y_{n})\,d\mathbf{x}=\frac{1}{NP}\sum_{n=1}^{N}\sum_{p=1}^{P}\frac{1}{\operatorname{len}(\pi_{p})}\int_{\pi_{p}}\sigma(\mathbf{x},y_{n})\,d\mathbf{x}\tag{7}$$

Importantly, extending LeJeune et al. (2019), we replace Euclidean integration by geodesic integration over a local approximation of the data manifold, by generating augmentations of increasing strength. Crucially, the proposed MC integration captures average-case sharpness in proximity of the training data and is directly related to the generalization ability of the studied networks, as opposed to worst-case sensitivity, as typically considered in adversarial settings (Moosavi-Dezfooli et al., 2019). In fact, the random sampling performed in Equation 7 is unlikely to hit adversarial directions, which are commonly identified by searching the input space through an optimization process (Goodfellow et al., 2014; Szegedy et al., 2013). To conclude our methodology, in section 3.4 we present the family of transformations $T_s$ used for generating trajectories $\pi_p$ throughout our experiments.
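The following sketch illustrates Equations 6 and 7 for a single training point, assuming the augmented path vertices have been precomputed; the trapezoidal discretization of the path integral is our choice for this illustration and is not prescribed by the text:

```python
import torch

def volume_sharpness(paths, sigma):
    """Normalized path integrals of a sharpness measure, MC-averaged over
    P geodesic paths (Equation 6). `paths` is a list of P tensors of shape
    (K+1, ...) holding augmentations ordered by strength (strength 0 first);
    `sigma(x)` evaluates an infinitesimal measure at one point."""
    estimates = []
    for path in paths:                                       # one geodesic path pi_p
        seg = (path[1:] - path[:-1]).flatten(1).norm(dim=1)  # Euclidean segment lengths
        vals = torch.stack([sigma(x) for x in path])         # sigma at each path vertex
        integral = (0.5 * (vals[1:] + vals[:-1]) * seg).sum()  # trapezoidal rule along the polyline
        estimates.append(integral / seg.sum())               # normalize by len(pi_p)
    return torch.stack(estimates).mean()                     # MC average over the P paths
```

Averaging the returned estimate over all training points then gives the mean-field quantity of Equation 7.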
![6_image_0.png](6_image_0.png)

Figure 2: a) Double descent curve for the test error for ConvNets trained on CIFAR-10 with 20% noisy labels. b) Average metrics integrated over volumes of increasing size. Volumes are denoted by the number $K$ of weak augmentations used to generate each geodesic path. From left to right: average training accuracy, training loss, Jacobian norm and Hessian norm, each plotted against model size. The dotted vertical line marks the model width that achieves zero train error (i.e. the *interpolation threshold*). All models are trained for 4k epochs. We observe that accuracy over volumes increases monotonically with model size, while crossentropy follows double descent. Combined, the two observations suggest that large networks *confidently* predict the training targets over increasingly large volumes around the training data (for increasing model size). Importantly, interpolation is *sharp* at the interpolation threshold, even infinitesimally at each training point (blue curves), while increasing overparameterization produces *smooth* interpolation, contrary to existing intuition. Shaded areas mark standard deviations over 3 seeds.

## 3.4 Weak Data Augmentation Strategies

Computing sharpness of interpolation via Equation 6 for each data point $\mathbf{x}_n$ requires generating $P$ trajectories $\pi_p$ composed of augmentations of $\mathbf{x}_n$ of controlled increasing strength. Furthermore, the augmented data points $\{\mathbf{x}_n^{p,k}\}_{k=0}^{K}$ should lie in proximity of the base point $\mathbf{x}_n$ in order for the Euclidean approximation to be meaningful. Finally, to correctly estimate correlation between smoothness and the generalization ability of the networks considered, volume-based sharpness should not rely on validation data points, i.e. the augmentations $\mathbf{x}_n^{p,k}$ should be strongly correlated to $\mathbf{x}_n$, for each $p, k$.

To satisfy the above, we modify a weak data augmentation algorithm introduced by Yu et al. (2018), which allows us to efficiently generate augmentations that lie in close proximity to the base training point $\mathbf{x}_n$, for image data. Specifically, each base image $\mathbf{x}_n$, consisting of $C$ input channels (e.g. $C = 3$ for RGB images) and $h \times w$ spatial dimensions, is interpreted as $C$ independent matrices $\mathbf{x}_n[c,:,:] \in \mathbb{R}^{h \times w}$, each factorized using Singular Value Decomposition (SVD), yielding a decomposition $\mathbf{x}_n[c,:,:] = U^c \Sigma^c V^{cT}$, where $\Sigma^c$ is a diagonal matrix whose entries are the singular values of $\mathbf{x}_n[c,:,:]$ sorted by decreasing magnitude. In the original method, Yu et al. (2018) produce weak augmentations by randomly erasing one singular value from the smallest ones, thereby obtaining a modified matrix $\tilde{\Sigma}^c$, and then reconstructing each channel of the base sample via $U^c \tilde{\Sigma}^c V^{cT}$. In this work, in order to generate $P$ random augmentations of strength $k$, $\tilde{\Sigma}^c$ is obtained by erasing $k$ singular values $\Sigma^c_{i,i}$, for $i = w-k-p+1, \ldots, w-p$, and $p = 0, \ldots, P-1$ (assuming square spatial dimensions $h = w$). Essentially, the augmentation strength is given by the number $k$ of singular values erased, and $P$ augmentations of similar strength are generated by erasing $P$ subsets of size $k$ from the smallest singular values, for each channel $c$.
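A minimal sketch of this augmentation, assuming a single image stored as a (C, h, w) tensor with square spatial dimensions:

```python
import torch

def svd_augment(img, k, p):
    """One weak augmentation of strength k (offset p) for an image of shape
    (C, h, w) with h == w, by erasing k of the smallest singular values."""
    out = torch.empty_like(img)
    for c in range(img.size(0)):                  # treat each channel independently
        U, S, Vh = torch.linalg.svd(img[c])       # singular values sorted by decreasing magnitude
        S = S.clone()
        w = S.numel()
        S[w - k - p : w - p] = 0.0                # zero out k small singular values, offset by p
        out[c] = U @ torch.diag(S) @ Vh           # reconstruct the channel
    return out
```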
We note that this method produces augmented images that are highly correlated with the corresponding base training sample, and as such they do not directly amount to producing validation data points. We refer the reader to appendix E for further details. In the next section, we present our empirical study of sharpness of interpolation for neural networks in relationship to double descent.

## 4 Experiments

In this section, we present our empirical exploration of input-space smoothness of the loss landscape of deep networks as model size and number of training epochs vary. Focusing on *implicit regularization* (Neyshabur et al., 2015) promoted by optimization and model architecture, we evaluate our sharpness measures on networks with an increasing number of parameters, trained without any form of explicit regularization (e.g. weight decay, batch normalization, dropout). We extend our analysis to common training settings in section 4.4.

Experimental setup We reproduce deep double descent by following the experimental setup of Nakkiran et al. (2019b). Specifically, we train a family of ConvNets formed by 4 convolutional stages of controlled base width $[w, 2w, 4w, 8w]$, for $w = 1, \ldots, 64$, on the CIFAR-10 dataset with 20% noisy training labels and on CIFAR-100. All models are trained for 4k epochs using SGD with momentum 0.9 and fixed learning rate. Following Arpit et al. (2019), to stabilize prolonged training, we use a learning rate warmup schedule. Furthermore, we extend our empirical results to training settings more commonly found in practice, and validate our main findings on a series of ResNet18s (He et al., 2015) of increasing base width $w = 1, \ldots, 64$, with batch normalization, trained with the Adam optimizer for 4k epochs using data augmentation. We refer the reader to section B for a full description of our experimental setting. In section G.1, we extend our main results to Transformer networks trained on machine translation tasks.
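For reference, the width-scaled ConvNet family can be sketched as follows; kernel sizes, pooling, and the classifier head are illustrative assumptions following the setup of Nakkiran et al. (2019b), with the exact architecture given in appendix B:

```python
import torch.nn as nn

def convnet(w, num_classes=10):
    """Four-stage ConvNet with base widths [w, 2w, 4w, 8w]; layer details
    beyond the stage widths are assumptions of this sketch."""
    widths = [w, 2 * w, 4 * w, 8 * w]
    layers, in_ch = [], 3
    for out_ch in widths:
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]   # halve spatial resolution per stage
        in_ch = out_ch
    # 32x32 CIFAR inputs are reduced to 2x2 after four pooling stages
    layers += [nn.Flatten(), nn.Linear(widths[-1] * 2 * 2, num_classes)]
    return nn.Sequential(*layers)
```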
We begin our experiments by reproducing double descent for the test error for the ConvNets (Figure 2a). Starting with small models and increasing model size, a U-shaped curve is observed, whereupon small models underfit the training data, as indicated by high train and test error. As model size increases, the optimal bias/variance trade-off is reached (Geman et al., 1992). Mid-sized models increasingly overfit training data - as shown by increasing test error for decreasing train error and loss - until zero training error is achieved, and the training data is interpolated. The smallest interpolating model size is typically referred to as the *interpolation threshold* (Belkin et al., 2019). Near said threshold, the test error peaks. Finally, large overparameterized models achieve improved generalization, as marked by decreasing test error, while still interpolating the training set.

## 4.1 Loss Landscape Smoothness Follows Double Descent

In this section, we establish a strong correlation between double descent of the test error and smooth interpolation of noisy training data. Figure 2b studies fitting of training data for models at convergence (training for 4k epochs) as model size increases. Starting with (infinitesimal) sharpness at training points (blue curve), we observe that training accuracy at convergence monotonically increases with model size, with 100% accuracy reached at the interpolation threshold and maintained therefrom. At the same time, crossentropy loss over volumes follows double descent, peaking near the interpolation threshold and then decreasing as model size grows. Similarly, the Jacobian and Hessian norms peak at the interpolation threshold and then rapidly decrease, showing that all training points become stationary for the loss, and that the landscape becomes flatter as model size grows past the interpolation threshold. When all measures are integrated over volumes of increasing size (number $K$ of augmentations per path), we observe how large overparameterized models are able to smoothly fit the training data over large volumes. This finding suggests that - in contrast to the polynomial intuition of Figure 1a - overparameterized networks interpolate training data *smoothly* (as intuitively depicted in Figure 1b). Our finding extends the observations of Novak et al. (2018) and LeJeune et al. (2019) from fixed-size networks to a spectrum of model sizes, and establishes a clear correlation with the test error peak in double descent. Finally, the results substantiate the universal law of robustness (Bubeck & Sellke, 2021), showing that the highest sensitivity to input perturbations is observed at the interpolation threshold, while overparameterization beyond the threshold promotes smoothness. Intriguingly, our findings represent mean sharpness as opposed to the worst case studied by Bubeck & Sellke (2021), showing that the observed regularity is much stronger in practice. In the following section, we study this behaviour in proximity of cleanly- and noisily-labeled training samples. We refer the reader to section 4.4 for analogous results on ResNets trained with Adam.

## 4.2 Smooth Interpolation Of Noisy Labels

In this section, we break down the noisily labeled training set into two subsets: cleanly-labeled points, and training points with corrupted labels, and explore how fitting is affected by the training labels.

![8_image_0.png](8_image_0.png)

Figure 3: Average accuracy, crossentropy, Jacobian and Hessian norms integrated over volumes of increasing size (augmentations per path) around clean (top) and noisy (bottom) subsets of the CIFAR-10 training set with 20% noisy labels. For models near the interpolation threshold, we observe a large increase in the loss for increasing neighborhood size. At the interpolation threshold, sharp interpolation is observed for both clean and noisy samples, with crossentropy, sensitivity (Jacobian norm) and curvature peaking over all volumes considered. Larger models present a smoother loss landscape around training points, with the largest models expressing a locally flat landscape around each point. This finding shows that large networks are confidently and smoothly *predicting the noisy labels* around data points whose label was corrupted, suggesting that smoothness emerging from overparameterization in fact hinders generalization locally to those points.

![8_image_1.png](8_image_1.png)

Figure 4: a) Double descent for the test error for ConvNets trained on CIFAR-100. b) Average metrics integrated over volumes of increasing size (number $K$ of augmentations per path). From left to right: average training accuracy, crossentropy, Jacobian, and Hessian norm, each plotted against model size. All models are trained for 4k epochs. For relatively complex datasets (i.e. with few samples per class), our findings hold even without artificially corrupted labels, suggesting that the trends reported in this work are not caused by synthetic noise.
Shaded areas depict standard deviations over 3 seeds.

Figure 3 reports accuracy, crossentropy, as well as sharpness measures computed on the clean subset of CIFAR-10 (top), as well as the corrupted subset (bottom), for volumes of increasing size. We begin by noting that small models fit mostly the cleanly labeled data points, and show close to zero accuracy on the noisily labeled data points, showing a bias towards learning simple patterns. We hypothesize that most cleanly labeled samples act as "simple examples", while noisily labeled ones provide "hard examples", akin to support vectors, for small-size models. This behaviour is aligned with prior observations, reporting that deep networks admit support vectors (Toneva et al., 2018) and that deep networks share the order in which samples are fitted (Hacohen et al., 2020). We refer the reader to Hacohen et al. (2020) for details.

As model size grows toward the interpolation threshold, networks fit both clean and noisy samples (as marked by increasing accuracy on both subsets), with large models consistently predicting the clean and noisy labels over large volumes. At the same time, crossentropy local to each training point (blue curve) approaches zero past the interpolation threshold, while volume-based crossentropy undergoes double descent. Interestingly, this trend is observed both around cleanly- and noisily-labeled training samples, with peaks at the interpolation threshold which are considerably more marked for noisy labels. Our sharpness measures follow double descent for all volumes considered, even when no Monte Carlo integration is performed (blue curve). Importantly, curvature as measured by the Hessian norm rapidly decreases as model size grows, showing that large networks smoothly interpolate both clean and noisy samples. Importantly, we observe how the second descent in test error corresponds to improved fitting of cleanly-labeled samples, while the networks lose their generalization ability locally to noisily labeled points.

In Figure 4a we extend the observations to CIFAR-100, where model-wise double descent is observed even on the standard dataset without artificially corrupted labels. Similarly to what we observed on CIFAR-10, Figure 4b shows the loss landscape peaking in sharpness at the interpolation threshold, and then rapidly decreasing as model size grows, with large networks smoothly fitting the training set over increasingly large volumes. This finding suggests that double descent is tied to dataset complexity, and that the trends reported in this work are not caused by artificially corrupted labels.

![9_image_0.png](9_image_0.png)

Figure 5: (Left) Test error (Middle) Train crossentropy (Right) Jacobian norm for ConvNets trained on CIFAR-10 with 20% noisy labels. The heatmaps show each metric for increasing training epochs (y-axis) and base width (x-axis). Models past the interpolation threshold (base width w = 15) undergo epoch-wise double descent for each metric. Similar trends are observed for curvature, as measured by the Hessian norm. All metrics are computed on the training set only, without geodesic Monte Carlo integration.

## 4.3 Epoch-Wise Double Descent

We now turn our attention to epoch-wise double descent, first reported for the test error of deep networks by Nakkiran et al. (2019b). Figure 5 shows the test error (left), train crossentropy (middle), as well as Jacobian norm (right) for ConvNets trained on CIFAR-10 with 20% noisy labels.
We consolidate our observations for each metric with heatmaps, in which the y-axis represents training epochs, and the x-axis denotes the models' base width. We observe that models past the interpolation threshold (base width w = 15) undergo epoch-wise double descent for each metric. At the same time, models with base width w < 15 are unable to reduce their test error within 4k training epochs, and this is associated with non-decreasing training loss as well as Jacobian norm. We hypothesize that the model size affects a model's ability to interpolate the training data, and therefore affects the training dynamics and the occurrence of epoch-wise double descent.

![10_image_0.png](10_image_0.png)

Figure 6: (a) Double descent for the test error on CIFAR-10 with 20% noisy labels for a family of ResNet18s of increasing base width w, trained with data augmentation. (b) Accuracy, crossentropy, Jacobian and Hessian norms over volumes. All models are trained for 4k epochs. Analogous trends as observed for the ConvNets hold in this case. However, the largest integration volume considered (K = 7 augmentations per path) now shows considerably increased sharpness and loss curvature, while still undergoing double descent.

## 4.4 Practical Training Settings

So far, the training setting included the least amount of confounders (e.g. adaptive learning rates, explicit regularization, skip connections, normalization layers) and focused on implicit regularization. In Figure 6, we extend our findings to ResNets trained on CIFAR-10 with 20% noisy labels, with the Adam optimizer, data augmentation (4-pixel shifts and random horizontal flips), as well as batch normalization layers (see appendix B for details). Both model-wise and epoch-wise trends reported for the ConvNets also hold for this setup, with the interpolation threshold occurring at base width w = 18. We consolidate our model-wise and epoch-wise findings with heatmaps in Figures 12 and 14. Interestingly, data augmentation causes the peak in test error to occur earlier than the interpolation threshold. We hypothesize that the mismatch - which can also be observed in related works (Nakkiran et al., 2019b) - is due to a lack of fine-grained control over model size as base width w varies. Importantly, for large volumes around training points (K = 7 augmentations per path), training accuracy degrades and loss sharpness increases. However, all sharpness metrics undergo double descent as model size grows, confirming the trends reported in simpler training settings.

## 4.5 Towards Decoupling Smoothness From Generalization

Our experimental results suggest that double descent in the test error is closely related to input-space smoothness. One possible interpretation is that models at the interpolation threshold learn small and irregular decision regions, marked by high loss sharpness, while large models learn more regular decision regions with wider margins, supporting the observations of Jiang et al. (2019). In fact, as consistently observed in our experiments, on the one hand, models near the interpolation threshold fail to smoothly interpolate all clean samples, while on the other hand large models can smoothly interpolate the entire training set. This effectively enforces a trade-off for which large models lose generalization ability around noisy samples, but can correctly classify all clean samples. Assuming the train and test distributions are similar (i.e.
excluding covariate shifts), this would in turn result in improved average test error past the interpolation threshold, as indeed observed in practice.

To assess the validity of our interpretation, we decouple smoothness from generalization by studying training settings in which smooth training set interpolation hurts generalization. In this setting, we expect smooth interpolation to consistently emerge with overparameterization, but this time without producing double descent in the test error. To corroborate our interpretation, in principle, one would need to construct a nearest neighbour classifier (either in input or in feature space), and test whether predictions for each test sample are affected by proximity to corrupted samples. In the following, we propose a simple experiment to decouple smoothness from generalization, without requiring knowledge of proximity of test samples to train samples. First, we corrupt 20% of the CIFAR-100 training set with asymmetric label noise, such that 80% of the samples of 20 randomly selected classes are perturbed. At test time, this enables us to split the test set into (1) samples whose classes have been corrupted, and (2) samples belonging to unperturbed classes.

![11_image_0.png](11_image_0.png)

Figure 7: **Decoupling smoothness from generalization**. We present an experiment in which 20 randomly selected classes of the CIFAR-100 training split are corrupted with asymmetric label noise, perturbing 80% training labels within each class, for a total of 20% corrupted training samples. At test time, this enables splitting the test set into classes that have been corrupted at train time, and unperturbed classes. (a) Overparameterization promotes a smooth and flat loss landscape around both cleanly-labeled as well as noisy training samples under asymmetric label noise. (b, top) Confirming our hypothesis, double descent for the test error can still be observed for the unperturbed classes, while the trend disappears for the corrupted classes. This finding shows that overparameterization promotes smoothness in the input variable, which is aligned with generalization only around cleanly labeled points. (b, bottom) For networks trained on 100% noisy labels, smooth interpolation still follows double descent over volumes of increasing size around training points, but such property in this case is not aligned with generalization.

Figure 7a shows that, even under strong asymmetric noise, overparameterization promotes input-space smoothness over increasingly large volumes around both clean and noisy training samples. Perhaps surprisingly, in Figure 7b (top), at test time double descent is still observed for test samples belonging to unperturbed classes, while the trend disappears for the corrupted classes. This confirms our interpretation and shows that double descent should be understood in terms of input-space smoothness, and its relation to generalization. Second, we train ResNets on CIFAR-10 with *all training labels corrupted* (Figure 7b, bottom). Also in this setting, loss sharpness over volumes follows double descent, peaking near the interpolation threshold, and decreasing with increasing model size. Trivially, all networks in this setting lose their generalization ability, with performance close to random chance. This finding shows that overparameterization promotes smooth interpolation of the training data, and that such property is not necessarily aligned with generalization.
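For reference, the asymmetric corruption can be sketched as follows, assuming integer class labels; the distribution from which flipped labels are drawn is not specified above, so the uniform remapping here is a placeholder:

```python
import numpy as np

def corrupt_asymmetric(labels, num_classes=100, n_classes=20, ratio=0.8, seed=0):
    """Corrupt `ratio` of the labels within `n_classes` randomly chosen classes."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    chosen = rng.choice(num_classes, size=n_classes, replace=False)
    for c in chosen:
        idx = np.flatnonzero(labels == c)
        flip = rng.choice(idx, size=int(ratio * len(idx)), replace=False)
        # remap to a uniformly random *different* class (placeholder choice)
        labels[flip] = (c + rng.integers(1, num_classes, size=flip.size)) % num_classes
    return labels, chosen
```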
## 5 Conclusions

In this work, we present geodesic Monte Carlo integration tools to study the input space of neural networks, providing intuition - built on extensive experiments - on how neural networks fit training data. We present a strong correlation between epoch-wise as well as model-wise double descent for the test error and smoothness of the loss landscape in input space. Our experiments show that overparameterization promotes input space regularity via smooth interpolation of clean and noisy training data, which is aligned with improved generalization for datasets with a relatively low ratio of label noise. Crucially, contrary to intuitions in polynomial regression, deep networks uniformly predict noisy training targets over volumes around noisily-labeled training samples - a behaviour which may have severe negative impact in practical applications with imbalanced training sets or with covariate shifts of the population distribution. Consistently in our experiments, we observe a peak in test error and loss sharpness near the interpolation threshold, which decreases for better generalizing models. Finally, for increasing volumes around each training point, we observe that overparameterization promotes flatter minima of the loss *in input space*, providing initial clues as to why large overparameterized models generalize better, and corroborating the findings of Somepalli et al. (2022) on regularity of decision boundaries of overparameterized classifiers, as well as Gamba et al. (2022) on input-space regularity. Our analysis substantiates the law of robustness of Bubeck & Sellke (2021), and extends the findings of Novak et al. (2018) to experimental settings with controlled model size.

We hypothesize that overparameterization affects the dynamics of optimization and interpolation, promoting a smooth loss landscape. An interesting open problem is characterizing the impact of individual layers on interpolation, as model size grows. Finally, our analysis opens the question of whether increased interpolation smoothness is to be attributed to the model architecture, the optimizer, or a combination of both. First, increased network width, as controlled in our experiments, has been recently connected to the existence of paths connecting critical points for the optimizer (Simsek et al., 2021), suggesting that model width plays an important role in affecting the dynamics of the optimizer. Particularly, one or more connected manifolds of minima may allow wider networks to retain interpolation while at the same time optimizing for input-space smoothness (Li et al., 2021). Second, understanding the existence of implicit regularization promoted by the optimizer is at present an active area of research. On the one hand, several studies argue that stochastic optimization, and the potential implicit regularization effect of mini-batch noise, are not required for generalization (Chiang et al., 2023; Paquette et al., 2022; Geiping et al., 2020). On the other hand, current models of double descent hypothesize that stochastic noise is an important component in explaining implicit regularization and double descent in deep learning (Li et al., 2021; Blanc et al., 2020).

## Acknowledgments

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Scientific computation was enabled by the supercomputing resource Berzelius provided by National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg foundation. The work was partially funded by Swedish Research Council project 2017-04609.

## References

Sanjeev Arora, Zhiyuan Li, and Abhishek Panigrahi. Understanding gradient descent on edge of stability in deep learning. *arXiv preprint arXiv:2205.09745*, 2022.

Devansh Arpit, Víctor Campos, and Yoshua Bengio. How to initialize your network? robust initialization for weightnorm & resnets. *Advances in Neural Information Processing Systems*, 32, 2019.

Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. *Proceedings of the National Academy of Sciences*, 117(48):30063–30070, 2020. doi: 10.1073/pnas.1907378117.

Mikhail Belkin, Daniel J Hsu, and Partha Mitra. Overfitting or perfect fitting? risk bounds for classification and regression rules that interpolate. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849–15854, 2019.

Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. *SIAM Journal on Mathematics of Data Science*, 2(4):1167–1180, 2020.

Yoshua Bengio. Deep learning of representations: Looking forward. In *International conference on statistical language and speech processing*, pp. 1–37. Springer, 2013.

Guy Blanc, Neha Gupta, Gregory Valiant, and Paul Valiant. Implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. In *Conference on learning theory*, pp. 483–513. PMLR, 2020.

Sébastien Bubeck and Mark Sellke. A universal law of robustness via isoperimetry. *Advances in Neural Information Processing Systems*, 34, 2021.

Mauro Cettolo, Christian Girardi, and Marcello Federico. Wit3: Web inventory of transcribed and translated talks. In *Proceedings of the Conference of European Association for Machine Translation (EAMT)*, pp. 261–268, 2012.

Xiangyu Chang, Yingcong Li, Samet Oymak, and Christos Thrampoulidis. Provable benefits of overparameterization in model compression: From double descent to pruning neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 6974–6983, 2021.

Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Robust overfitting may be mitigated by properly learned smoothening. In *International Conference on Learning Representations*, 2021.

P. Chiang, R. Ni, D.Y. Miller, A. Bansal, J. Geiping, M. Goldblum, and T. Goldstein. Gradient-based optimization is not necessary for generalization in neural networks. In *International Conference on Learning Representations*, 2023.

Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In *International Conference on Machine Learning*, pp. 1019–1028. PMLR, 2017.

David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. *Proceedings of the National Academy of Sciences*, 100(10):5591–5596, 2003. doi: 10.1073/pnas.1031596100.

Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur.
Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2020.

Matteo Gamba, Stefan Carlsson, Hossein Azizpour, and Mårten Björkman. Hyperplane arrangements of trained convnets are biased. *arXiv preprint arXiv:2003.07797*, 2020.

Matteo Gamba, Adrian Chmielewski-Anders, Josephine Sullivan, Hossein Azizpour, and Mårten Björkman. Are all linear regions created equal? In *International Conference on Artificial Intelligence and Statistics*, pp. 6573–6590. PMLR, 2022.

Mario Geiger, Stefano Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, and Matthieu Wyart. Jamming transition as a paradigm to understand the loss landscape of deep neural networks. *Physical Review E*, 100(1):012115, 2019.

Jonas Geiping, Micah Goldblum, Phil Pope, Michael Moeller, and Tom Goldstein. Stochastic training is not necessary for generalization. In *International Conference on Learning Representations*, 2020.

Stuart Geman, Elie Bienenstock, and René Doursat. Neural Networks and the Bias/Variance Dilemma. *Neural Computation*, 4(1):1–58, 01 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.1.1.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Guy Hacohen, Leshem Choshen, and Daphna Weinshall. Let's agree to agree: Neural networks share classification order on real datasets. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 3950–3960. PMLR, 13–18 Jul 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 1026–1034, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. *Neural computation*, 9(1):1–42, 1997.

Yiding Jiang, Dilip Krishnan, Hossein Mobahi, and Samy Bengio. Predicting the generalization gap in deep networks with margin distributions. In *International Conference on Learning Representations*, 2019.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In *International Conference on Learning Representations*, 2017.

Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, and Sanjeev Arora. Explaining landscape connectivity of low-cost solutions for multilayer nets. *Advances in neural information processing systems*, 32, 2019.

Daniel LeJeune, Randall Balestriero, Hamid Javadi, and Richard G Baraniuk. Implicit rugosity regularization via data augmentation. *arXiv preprint arXiv:1905.11639*, 2019.

Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? *arXiv preprint arXiv:1511.07543*, 2015.

Zhiyuan Li, Tianhao Wang, and Sanjeev Arora. What happens after SGD reaches zero loss? A mathematical framework. In *International Conference on Learning Representations*, 2021.

Chao Ma and Lexing Ying. On linear stability of sgd and input-smoothness of neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 16805–16817. Curran Associates, Inc., 2021.

Matouš Macháček and Ondřej Bojar.
Results of the WMT14 metrics shared task. In *Proceedings of the Ninth Workshop on Statistical Machine Translation*, pp. 293–301, 2014.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2019.

Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, and Anant Sahai. Harmless interpolation of noisy data in regression. *IEEE Journal on Selected Areas in Information Theory*, 1(1):67–83, 2020.

Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent, 2019a. URL https://windowsontheory.org/2019/12/05/deep-double-descent/.

Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. In *International Conference on Learning Representations*, 2019b.

Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. *Advances in neural information processing systems*, 23, 2010.

Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, and Ioannis Mitliagkas. A modern take on the bias-variance tradeoff in neural networks. *arXiv preprint arXiv:1810.08591*, 2018.

Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In *International Conference on Learning Representations Workshop Track*, 2015.

Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In *International Conference on Learning Representations*, 2018.

Roman Novak, Yasaman Bahri, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Sensitivity and generalization in neural networks: an empirical study. In *International Conference on Learning Representations*, 2018.

Courtney Paquette, Elliot Paquette, Ben Adlam, and Jeffrey Pennington. Implicit regularization or implicit conditioning? Exact risk trajectories of sgd in high dimensions. In *Advances in Neural Information Processing Systems*, 2022.

Phil Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein. The intrinsic dimension of images and its impact on learning. In *International Conference on Learning Representations*, 2020.

Leslie Rice, Eric Wong, and Zico Kolter. Overfitting in adversarially robust deep learning. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 8093–8104. PMLR, 13–18 Jul 2020.

Mihaela Rosca, Theophane Weber, Arthur Gretton, and Shakir Mohamed. A case for new neural network smoothness constraints. *NeurIPS Workshops*, 2020.

Berfin Simsek, François Ged, Arthur Jacot, Francesco Spadaro, Clement Hongler, Wulfram Gerstner, and Johanni Brea. Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 9722–9732. PMLR, 18–24 Jul 2021.

Sidak Pal Singh and Martin Jaggi. Model fusion via optimal transport. *Advances in Neural Information Processing Systems*, 33:22045–22055, 2020.
Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, and Tom Goldstein. Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13699–13708, 2022.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013.

Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. In *International Conference on Learning Representations*, 2018.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017.

Zeke Xie, Issei Sato, and Masashi Sugiyama. A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima. In *International Conference on Learning Representations*, 2020.

Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi Ma. Rethinking bias-variance trade-off for generalization of neural networks. In *International Conference on Machine Learning*, pp. 10767–10777. PMLR, 2020.

Tao Yu, Huan Long, and John E Hopcroft. Curvature-based comparison of two neural networks. In *2018 24th International Conference on Pattern Recognition (ICPR)*, pp. 441–447. IEEE, 2018.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. *International Conference on Learning Representations*, 2018.

## A Appendix

Section B summarizes our experimental setup, while Sections C, D, and E respectively detail the tangent Hessian computation method, the geodesic path generation algorithm, and the weak data augmentation strategy used for geodesic Monte Carlo integration. Finally, in section F we extend our discussion of related works. Additional experiments are reported in section G.

## B Network Architectures And Training Setup

Network Architectures The ConvNets and ResNets used follow the experimental settings of Nakkiran et al. (2019b), with the only difference that we disable batch normalization in order to focus our study on implicit regularization. In summary, the ConvNets are composed of 4 convolutional stages (each with a single conv + ReLU block) with kernel size 3 × 3, stride 1, padding 1, each followed by maxpooling of stride 2 and kernel size 2 × 2. Finally, a max pooling layer of stride 2 and kernel size 2 × 2 is applied, followed by a linear layer. The Residual networks used in this study are ResNet18s (He et al., 2015) without batch normalization. Both ConvNets and ResNets are formed by 4 convolutional stages at which the number of learned feature maps doubles, i.e., the base width of each stage follows the progression [w, 2w, 4w, 8w], with w = 64 denoting a standard ResNet18. To control the number of parameters in each network, the base width w varies from 1 to 64. Throughout our experiments, augmentations $\tilde{x}_n$ of a sample $(x_n, y_n)$ are labelled with their respective (potentially noisy) training target $y_n$.
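For concreteness, the following is a minimal PyTorch sketch of this batch-norm-free ConvNet, assuming CIFAR-sized 32 × 32 inputs (so the final feature map is 1 × 1); `make_convnet` is an illustrative helper name, not code from our repository.

```python
# A minimal sketch of the batch-norm-free ConvNet described above:
# 4 conv stages (single conv + ReLU, 3x3, stride 1, padding 1), each followed
# by 2x2 max pooling, widths [w, 2w, 4w, 8w], one extra pooling, then a linear layer.
import torch
import torch.nn as nn

def make_convnet(w: int = 64, in_ch: int = 3, num_classes: int = 10) -> nn.Sequential:
    layers, ch = [], in_ch
    for width in (w, 2 * w, 4 * w, 8 * w):
        layers += [
            nn.Conv2d(ch, width, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
        ]
        ch = width
    layers += [nn.MaxPool2d(kernel_size=2, stride=2),  # extra pooling: 2x2 -> 1x1
               nn.Flatten(),
               nn.Linear(8 * w, num_classes)]
    return nn.Sequential(*layers)

model = make_convnet(w=8)  # the base width w varies from 1 to 64 in the study
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```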
Dataset Splits To tune the training hyperparameters of all networks, a validation split of 1000 samples was drawn uniformly at random from the training split of CIFAR-10 and CIFAR-100.

ConvNet Training Setup The training settings are the same for CIFAR-10 and CIFAR-100. All ConvNets are trained for 4k epochs with SGD with momentum 0.9, fixed learning rate 1e-3, batch size 128, and no weight decay. All learned layers are initialized with PyTorch's default weight initialization (version 1.11.0). To stabilize prolonged training in the absence of batch normalization, we use learning rate warmup: starting from a base value of 1e-4, the learning rate is linearly increased to 1e-3 during the first 5 epochs of training, after which it remains constant at 1e-3.

ResNet Training Setup All ResNets are trained for 4k epochs using Adam with base learning rate 1e-4, batch size 128, and no weight decay. All learned layers are initialized with PyTorch's default initialization (version 1.11.0). All residual networks are trained with data augmentation, consisting of 4-pixel random shifts and random horizontal flips.

Computational Resources Our experiments are conducted on a local cluster equipped with NVIDIA Tesla A100s with 40GB onboard memory. For each dataset and architecture, we train 64 different networks for 4000 epochs with 3 different seeds. The total time for computing our experiments, excluding training networks and hyperparameter finetuning, amounts to approximately 6 GPU years. Furthermore, computing our statistics requires evaluating per-sample Jacobians for each training point and corresponding augmentations, for increasing volumes around each point. For each training setting, this was performed for 72 model checkpoints collected during training, to produce the heatmaps in Figures 5, 11, 12, 13 and 14.

## C Tangent Hessian Computation

To estimate the tangent Hessian norm at a point $x_n$ through Equation 5, we approximate the tangent space to the data manifold local to $x_n$ by using a set of random weak augmentations of $x_n$. To guarantee that all augmentations $x_n + u_m$, as well as the displacements $x_n + \delta u_m$, lie on the data manifold, we use weak colour augmentations as follows.

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

Figure 8: (Left) Visualization of random colour augmentations used to estimate the tangent Hessian norm. Each row represents a set of random augmentations, with the first image per row showing the corresponding base sample. (Right) Each row represents SVD augmentations of increasing strength. Also in this case, the first column represents the base sample used to generate the corresponding augmentations in each row.

For each sample $x_n$, we apply in random order the following photometric transformations:

- random brightness transformation in the range [0.9, 1.1], with 1. denoting the identity transformation.
- random contrast transformation in [0.9, 1.1], with 1. denoting the identity transformation.
- random saturation transformation in [0.9, 1.1], with 1. denoting the identity transformation.
- random hue transformation in [−0.05, 0.05], with 0. denoting the identity transformation.

Furthermore, a step size $\delta = 0.1$ is used for computing the finite differences in Equation 5. 4 augmentations $x_n + u_m$ are sampled for each point. All randomness is controlled to ensure reproducibility. Figure 8 (left) shows a visualization of the colour augmentations used.
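A minimal sketch of these weak colour augmentations is given below; it assumes torchvision's `ColorJitter` (which applies the four photometric transformations in a random order, matching the description above), and the variable names are illustrative rather than taken from our codebase.

```python
# A sketch of the weak colour augmentations above, assuming torchvision's
# ColorJitter. Ranges follow the bullet list; 1. (or 0. for hue) is the identity.
import torch
from torchvision import transforms

weak_aug = transforms.ColorJitter(
    brightness=(0.9, 1.1),
    contrast=(0.9, 1.1),
    saturation=(0.9, 1.1),
    hue=(-0.05, 0.05),
)

x = torch.rand(3, 32, 32)                               # base sample x_n in [0, 1]
augmented = [weak_aug(x) for _ in range(4)]             # 4 augmentations x_n + u_m
delta = 0.1                                             # finite-difference step size
displaced = [x + delta * (xa - x) for xa in augmented]  # displacements x_n + delta*u_m
```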
## D Geodesic Paths Generation

In this section, we provide pseudocode for the algorithm used for generating geodesic paths, used for Monte Carlo integration. Let $x_0 \in \mathbb{R}^d$ denote a training point, which we use as the starting point of geodesic paths $\pi_p$ emanating from $x_0$. Let $T_s : \mathbb{R}^d \to \mathbb{R}^d$ denote a family of smooth transformations (data augmentation), dependent on a parameter $s$ controlling the magnitude and direction of the transformation (e.g. radial direction and displacement for pixel shifts). Let $S := \{s^1, \dots, s^K\}$ denote a sequence of parameters for the family $T_s$, each with strength $S^k = \|s^k\|_2$ for $k = 1, \dots, K$, such that $S^1 < \dots < S^K$. Then, Algorithm 1 returns a geodesic path $\pi : [0, 1] \to \mathbb{R}^d$, based at $x_0$, i.e. $\pi(0) = x_0$, which is anchored to the data manifold local to $x_0$ by a sequence of augmentations of increasing strength, for $k = 1, \dots, K$. Particularly, Algorithm 1 can be applied $P$ times to generate paths $\pi_p$ emanating from $x_0$.

**Algorithm 1** Generate a geodesic path $\pi$ emanating from a training point $x_0$.

1: **function** Geodesic Path($x_0$, $T_s$, $S := \{s^1, \dots, s^K\}$)
2: $\mathcal{P} \leftarrow \{x_0\}$ ▷ Set of on-manifold points.
3: **for** $s^k \in S$ **do**
4: sample $s \sim s^k$ ▷ Sample augmentation of strength $S^k = \|s\|_2$.
5: $x^k = T_s(x_0)$ ▷ Generate weak data augmentation.
6: $\mathcal{P} \leftarrow \mathcal{P} \cup \{x^k\}$
7: **end for**
8: **return** $\mathcal{P}$ ▷ Set of data augmentations forming a path $\pi$, with points sorted by distance from $x_0$.
9: **end function**

Finally, by integrating metrics of interest (e.g. Jacobian and tangent Hessian norms) along each path $\pi_p$, we obtain estimates of sharpness of the loss along each path, which we use as Monte Carlo samples in Equation 7 for estimating volume-sharpness. We recall that the size of the volume considered is controlled by the maximum augmentation strength $S^K$ used for generating weak augmentations, which is proportional to the distance travelled away from $x_0$ in input space.

## E SVD Augmentation

The SVD augmentation method presented in section 3.4 allows for generating images that lie in close proximity to the base sample $x_n$. Figure 8 shows an illustration of the original image (first column) and several augmented images, as the augmentation strength (number of erased singular values) increases. Figure 9 shows the average (over the CIFAR-10 training set) Euclidean distance of augmented samples from their respective base sample, as well as the length of the polygonal path formed by connecting augmentations of increasing strength. We note that for $k < 30$, in expectation, augmentations lie in close proximity to the original base sample in Euclidean space.

![18_image_0.png](18_image_0.png)

Figure 9: Average L2 distance from the base samples, for augmentations of increasing strength.

## F Extended Related Works

In this section, we extend the related work discussion of section 2 to contextualize our findings in relation to linear models.
In linear regression, the model-wise double descent phenomenon has been studied in terms of *harmless interpolation* (Muthukumar et al., 2020) or *benign overfitting* of noisy data (Bartlett et al., 2020), by controlling the number of input features considered. Particularly, for the least squares solution to a noisy linear regression problem with random input and features, the impact of noise on generalization is mitigated by the abundance of weak features (Belkin et al., 2020). In this context, interpolation is studied for data whose population is described by noisy observations from a linear ground truth model. In the following, we delineate the main differences between linear regression and the experimental setting considered in our study.

We begin by noting that, since the model function of linear models has zero curvature (both w.r.t. model input $x$ and parameters $\theta$), the only source of nonlinearity and curvature in linear regression is the error function (MSE). To see this, let $f(x, \theta) = \theta^T x$ denote a linear regression model, estimated by minimizing the mean squared error $L(\theta, x, y) = \frac{1}{2N} \sum_{n=1}^{N} (f(x_n, \theta) - y_n)^2$, where $y_n$ is a noisy target $\forall n$. Then, the error function $L$ has constant curvature $H = \left\| \frac{\partial^2 L}{\partial x \partial x^T} \right\|_2 = \|\theta \theta^T\|_2$, independent of $x$.

In contrast, we study the case of nonlinear classification problems and nonlinear models, which have notable differences from the linear case. First, there is no a priori closed form solution of the learning problem, thus providing relevance to empirical studies. Second, curvature of the model function is non-constant, and the function may oscillate arbitrarily outside of the training data (this is known as the Runge phenomenon). Third, studies that rely exclusively on the test error suggest that interpolation is harmless also in overparameterized nonlinear models. Finally, the model function of convolutional architectures is independent of input-data dimensionality, and the relationship between complexity of the model function and its underlying parameterization is therefore implicit.

In this setting, we experimentally show that, in the interpolating regime, (1) curvature at training points depends non-monotonically on model size; (2) oscillations occur especially for small interpolating models, which are worst affected by noise; (3) large models achieve low-curvature interpolation of both clean and noisy samples (in contrast with the polynomial intuition), and such property is observed over large volumes (nonzero measure) around each training point (in contrast with the Runge phenomenon, thus providing evidence of implicit regularization); (4) interpolation of noise impacts generalization even for large models (contrary to the overparameterized linear regression case); (5) double descent observed for input space curvature occurs even when fitting 100% noisy data, more clearly pinpointing properties that are consistently promoted by overparameterization in deep nonlinear networks. Our methodology enables the study of sharpness of fit of training data for nonlinear models, providing a comparative study of the regularity with which different parameterizations achieve interpolation and (in some cases) generalization.

## G Additional Experiments

## G.1 Transformers

![19_image_0.png](19_image_0.png)

Figure 10: a) **Double descent** of the test error for transformers trained on translation tasks, as the embedding dimension and model width vary. b) **Average Jacobian norm**.
We consider multi-head attention-based Transformers (Vaswani et al., 2017) for neural machine translation tasks. We vary model size by controlling the embedding dimension $d_e$, as well as the width $h$ of all fully connected layers, which we set to $h = 4d_e$ following the architecture described in Vaswani et al. (2017). We train the transformer networks on the WMT'14 En-Fr task (Macháček & Bojar, 2014), as well as IWSLT'14 De-En (Cettolo et al., 2012). The training set of WMT'14 is reduced by randomly sampling 200k sentences, fixed for all models. The networks are trained for 80k gradient steps, to optimize per-token perplexity, with 10% label smoothing, and no dropout, gradient clipping or weight decay. Figure 10a shows the double descent curve of the test error for both datasets considered. Figure 10b extends our main result beyond vision models, showing that loss sharpness at each training point, as measured by the Jacobian norm, follows double descent for the test error.

## G.2 ConvNets

Figures 11 and 13 summarize our main findings with heatmaps showing modelwise and epochwise trends for the test error, train loss, as well as our sharpness metrics, individually computed over the clean and noisy subsets of CIFAR-10.

![20_image_0.png](20_image_0.png)

Figure 11: Test error (left), crossentropy loss over cleanly-labelled training samples (middle) and corrupted training samples (right) over epochs (y-axis) for different model sizes (x-axis), for ConvNets on CIFAR-10.

## G.3 ResNets

Figures 12 and 14 present heatmaps showing modelwise and epochwise trends for the test error, train loss, as well as our sharpness metrics, individually computed over the clean and noisy subsets of CIFAR-10.

![20_image_1.png](20_image_1.png)

Figure 12: Test error (left), crossentropy loss over cleanly-labelled training samples (middle) and corrupted training samples (right) over epochs (y-axis) for different model sizes (x-axis), for ResNets on CIFAR-10.

![21_image_0.png](21_image_0.png)

Figure 13: (Left column) Metrics evaluated on the training set without Monte Carlo integration on ConvNets. (Right column) Monte Carlo integration over a neighborhood with paths consisting of 7 augmentations.

![22_image_1.png](22_image_1.png)

![22_image_0.png](22_image_0.png)

Figure 14: (Left column) Metrics evaluated on the training set without Monte Carlo integration on ResNets. (Right column) Monte Carlo integration over a neighborhood with paths consisting of 7 augmentations.
Review 1:

Summary: The paper provides an empirical study of the landscape smoothness in the overparameterized regime of neural networks. Through computing the Jacobian and tangent Hessian at training data points, the paper finds that after reaching the interpolation threshold, neural networks tend to smoothly interpolate data, rather than forming spikes at training data points. This discovery enhances the understanding of neural networks' behavior in the practical setting (overparameterized). The reviewer believes this is a timely study and an interesting discovery regarding overparameterized neural networks.

Strengths and Weaknesses: The paper is clearly motivated and relatively easy to follow. Empirical results are rich and are performed under relevant and interesting settings. The following questions might be relevant. 1). Is 20% label corruption needed to study the smoothness of interpolation? Or, by changing the level of label corruption, can we find different conclusions? As a sanity check, if the labels are totally corrupted, do overparameterized networks still produce smooth interpolations? 2). How accurate is the approximation to the tangent Hessian estimation? As we know, data are high dimensional, and in order to approximate a high-dimensional integral, exponentially many samples are needed (with the exponent depending on data dimension, also known as the curse of dimensionality). 3). The finding of smooth interpolation contradicts previous intuition. Can the authors comment on the cause of smooth interpolation? Is it a consequence of algorithmic regularization in SGD or is it a unique feature of large neural networks? The paper is expository yet a bit weak in explaining the thinking behind the smooth interpolation phenomenon. The paper would be stronger with more discussion.

Requested Changes: I would suggest the authors read through the manuscript once more to clear a few typos and grammatical issues.

Broader Impact Concerns: No ethical concerns.

==================================================

Review 2:

Summary: This paper performs an empirical study of the smoothness of the loss landscape of DNNs in the input space. The key findings include: (1) overparameterization improves the input-space smoothness; (2) the double descent phenomenon can be observed with respect to both training epochs and model sizes. Overall, this paper highlights the importance of studying the smoothness of the input space to gain a better understanding of the implicit regularization effect of overparameterization.

Strengths and Weaknesses:

Strength:
1. The novelty is good: studying the effect of overparameterization and its relationship to generalization via characterizing the input-space smoothness is a nice direction.
2. The finding is also interesting: overparameterization helps reduce the sharpness of the input space.
3. The idea of characterizing the smoothness via estimating the geodesic paths is also interesting and potentially useful.

Weakness:
1. Lack of comparison. It seems that the authors try to argue that the input-space smoothness could be better than the parameter-space smoothness. To better explain this, it would be better to draw plots similar to Figures 2 & 3 in terms of Jacobian/tangent Hessian metrics for model parameters. Then we would be able to see whether overparameterization can also encourage parameter-space smoothness.
2. The experiment setup is not clearly presented. For instance, the authors may need to explain what the number of augmentations in Figure 2 refers to.
I can understand that this will affect the estimates of the Jacobian and tangent Hessian, but why would the cross-entropy also be different? Do you use augmented data to calculate the loss? Or do you use the augmented data to train the model?
3. In order to show the double descent, it would be good to also include the test error/loss.
4. It seems that the sharpness will be largely different if using different numbers of augmentations. However, this is confusing to me, as using different numbers of augmentations should only affect the estimation of the sharpness, which implies the results should be at least of the same order, since the true sharpness of the loss does not depend on the estimation approach.
5. Regarding Figure 7, the authors claim that the double descent for the corrupted samples cannot be observed. However, I actually do not see the difference between the trends for the clean and corrupted classes. Could you clarify this in more detail?

Requested Changes: Please refer to the weakness section.

Broader Impact Concerns: No broader impact concerns.

==================================================

Review 3:

Summary: This paper proposes a way to calculate the smoothness of neural networks, and conducts empirical experiments to study the relationship among over-parameterization, smoothness, and testing performance.

Strengths and Weaknesses:

Strength:
[1] This paper clearly states the setups and observations of the experiments.
[2] This paper reveals a difference between clean data and noisy data (Section 4.5).

Weakness:
[1] The writing of this paper needs improvement. (1.1) The title "Deep Double Descent via Smooth Interpolation", from my understanding, means that the paper developed a method of smooth interpolation and it caused the double descent phenomenon. This is different from the aim of this paper. (1.2) From the sentences in the abstract, "Common intuition..." and "While small interpolating ... in contrast to existing intuition", the authors study the smoothness in clean and corrupted samples and have some observations different from the existing literature. However, the authors put a lot of effort into introducing the double descent phenomenon, e.g., "Our work presents an empirical study of deep double descent..." in Section 2. It is not clear why double descent is helpful to achieve the goal in the abstract. I would suggest the authors clarify what the aim of this paper is and how double descent helps the understanding in this paper. I started to get confused when reading the introduction section.
[2] From the paper Belkin, Mikhail, Daniel Hsu, and Ji Xu. "Two models of double descent for weak features." SIAM Journal on Mathematics of Data Science 2.4 (2020): 1167-1180, my understanding is that the model they considered already implies that over-parameterized models sometimes smoothly interpolate all training data (including both noisy and clean). Although Belkin's work is on simple models rather than neural networks, it is not surprising that neural networks also share similar properties. I would suggest the authors compare with this work.
[3] The numerical experiments use only CIFAR-10 and CIFAR-100. Since this paper only conducts empirical studies, using only these two datasets is not sufficient. In addition, the neural networks used in this paper are only ConvNet and ResNet, which is not sufficient.
[4] Some descriptions in the paper are not rigorous.
For example, on Page 9, it is mentioned that "deep networks are biased towards learning training samples in approximately the same order". Please provide more concrete definitions of "bias" and "order".
[5] Some descriptions are not accurate. For example, in the abstract, it mentions that "Our findings show that loss sharpness in the input space follows both model- and epoch-wise double descent, with worse peaks observed around noisy labels". However, from Figure 7, noisy data does not have double descent.

Other issues:
[1] Is $J(x, y)$ a vector or a matrix? If $L_\theta$ outputs a scalar loss, then how can we have the $||\cdot||_F$ norm of a vector? Please also double check if the words "Jacobian" and "Hessian" are used properly.
[2] Please adjust the layout of the paper. For example, Figure 2 appears on Page 6, but it is only mentioned at the end of Page 7.
[3] Please report the computation time for the proposed method in the experiments.

Requested Changes: Please polish the paper to make it clearer, compare with Belkin's work mentioned in the weakness part, and add more experiments using other datasets.

Broader Impact Concerns: NA

==================================================

Metareview:

Recommendation: Accept as is

Comment: In this paper, a method for computing the smoothness of neural networks is proposed and empirical experiments are conducted to investigate the relationship between overparameterization, smoothness, and test performance. The paper finds that the investigated neural networks tend to interpolate data smoothly after reaching the interpolation threshold instead of forming spikes at the training data points. This discovery improves the understanding of overparameterized neural networks and can be seen as a way to extend the existing theoretical analysis of the double descent phenomenon to neural networks. The reviewers cautiously note that the numerical study may not be comprehensive enough to make generalizable statements for other networks/datasets. In addition, the metrics measurement methods chosen may have an impact on the transferability of the results to other applications. Despite these reservations, the reviewers believe that this is a timely study that meets the acceptance criteria of the TMLR.

==================================================
# On Noise Abduction For Answering Counterfactual Queries: A Practical Outlook

Saptarshi Saha∗ *saptarshi.saha_r@isical.ac.in* Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata

Utpal Garain *utpal@isical.ac.in* Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata

∗First author

Reviewed on OpenReview: *https://openreview.net/forum?id=4FU8Jz1Oyj&referrer=%5BTMLR%5D*

## Abstract

A crucial step in counterfactual inference is abduction - inference of the exogenous noise variables. Deep learning approaches model an exogenous noise variable as a latent variable. Our ability to infer a latent variable comes at a computational cost as well as a statistical cost. In this paper, we show that it may not be necessary to abduct all the noise variables in a structural causal model (SCM) to answer a counterfactual query. In a fully specified causal model with no unobserved confounding, we also identify the exogenous noises that must be abducted for a counterfactual query. We introduce a graphical condition for noise identification from an action consisting of an arbitrary combination of hard and soft interventions. We report experimental results on both a synthetic dataset and the real-world German Credit Dataset, showcasing the promise and usefulness of the proposed exogenous noise identification.

## 1 Introduction

"What if?" questions are frequent in decision-making systems in almost all realms of knowledge. These questions evoke hypothetical conditions, usually contradicting factual evidence. For example, when a patient dies in the hospital, a natural question is: what would have happened if the clinicians had acted differently? Another example is: had the candidate been male instead of female, would the decision from the admissions committee have been more favorable? By and large, counterfactuals are key ingredients that go into explaining why things happened as they did. It is not possible to answer those questions using statistical tools only, but the method of counterfactual inference of hypothetical scenarios can prove helpful in those cases (Pearl, 2016).

Counterfactual techniques have been introduced into deep learning only in recent times (Schölkopf, 2019). For instance, there have been inquiries into fairness (Kusner et al., 2017), recourse (Karimi et al., 2021), harm (Richens et al., 2022), mitigating bias in image classifiers (Dash et al., 2022), mitigating language bias in VQA (Niu et al., 2021), Zero-Shot Learning and Open-Set Recognition (Yue et al., 2021), and mental health care (Marchezini et al., 2022).

The structural causal model (SCM) is the standard framework for computing the answers to counterfactual queries. An SCM takes two sets of variables - exogenous and endogenous - and a set of structural assignments into account that assigns each endogenous variable a value according to the values of some other variables in the model. The exogenous variables are external to the model; we choose not to elucidate how they are caused. Each endogenous variable is a descendant of an exogenous variable. One can use the structural assignments to accurately compute the value of the endogenous variables from the values of the exogenous variables. The SCM paradigm provides a three-step procedure for answering counterfactual questions: Abduction, Action, and Prediction. Abduction is the tractable inference of the exogenous noise variables. Action is to perform interventions. Prediction is to compute the quantities of interest.
Deep Learning approaches founded on these three steps have been recently introduced for generating counterfactuals. For instance, Pawlowski et al. (2020) employ normalizing flows and variational inference for enabling tractable counterfactual inference, Sanchez & Tsaftaris (2022) use diffusion models for counterfactual estimation, Axel Sauer (2021) proposes counterfactual generative networks, and Dash et al. (2022) incorporate a structural causal model (SCM) in a variant of Adversarially Learned Inference for generating counterfactual images. Normalizing flow-based methods for answering counterfactual queries have quickly received a lot of attention. For example, Pawlowski et al. (2020)'s work on healthy magnetic resonance images of the brain has been extended to account for the clinical and radiological phenotype of multiple sclerosis (MS) by Reinhold et al. (2021). Wang et al. (2021) perform counterfactual inference to achieve harmonization of brain imaging data with different protocols and from different sites in a clinical study.

From a deep learning perspective, an exogenous variable might be considered as an inferred latent variable. To infer the state of the latent noise attached to an endogenous variable, we typically model a normalizing flow, perform amortized variational inference (in the case of very high dimensional variables) (Pawlowski et al., 2020), or use deterministic forward diffusion (Sanchez & Tsaftaris, 2022). Our ability to infer a latent variable comes at a computational cost as well as a statistical cost. To illustrate, the framework for counterfactual estimation by inferring exogenous noises via normalising flows parameterizes each structural assignment of an SCM as an invertible mechanism. Each mechanism explicitly calculates its inverse to enable efficient abduction of exogenous noises. These invertible architectures are typically computationally heavy. For a description of normalizing flows, see Appendix A and Papamakarios et al. (2019). However, given an SCM, in practice we are interested in counterfactual queries involving a few variables (not all)! For example, Reinhold et al. (2021) studied what the brain image of the subject would look like if the subject did not have lesions, given the observation that they have a 60 mL lesion load, while the proposed SCM consists of age, lesion volume of the subject, duration of MS symptoms, slice number, brain volume, biological sex, image, ventricle volume, and the expanded disability severity score. Hence, it is quite natural to ask for noise variables that we can get rid of from abducting. While Pawlowski et al. (2020) have mentioned (in a footnote) in the case of the brain imaging example that abduction of the noise attached to 'sex' is not necessary as 'sex' has no causal parents in the SCM (this need not always be the case; see, for instance, example 1(d)) (Figure 5, Pawlowski et al. (2020)), we are unaware of any dedicated effort to identify the noises that must be abducted to answer a counterfactual query. In this context, our work shows that it may not be necessary to infer all the noise variables in the SCM and identifies exogenous noise variables that we must infer to answer a counterfactual query in a fully specified causal model with no unobserved confounding. We also introduce a graphical condition for noise identification from an action consisting of an arbitrary combination of hard, soft, and semi-soft (semi-hard) interventions. We report experimental results on both a synthetic dataset and the real-world German Credit Dataset, showcasing the promise and usefulness of the proposed exogenous noise identification. The code for reproducing the results is available at https://github.com/Saptarshi-Saha-1996/Noise-Abduction-for-Counterfactuals.

## 2 Preliminaries

## 2.1 Background On Structural Causal Models

A structural causal model (SCM) is defined as a tuple $\mathfrak{C} := (S, \mathbb{P}(\epsilon))$, where $S = (f_1, f_2, \dots, f_p)$ is a collection of $p$ deterministic structural assignments,

$$X_{j}:=f_{j}(\mathbf{Pa}_{j},\epsilon_{j}),\quad j=1,2,\dots,p,\tag{1}$$

where $\mathbf{Pa}_j \subseteq \{X_1, \dots, X_p\} \setminus \{X_j\}$ is the set of parents of $X_j$ (its direct causes) and $\mathbb{P}(\epsilon) = \prod_{i=1}^{p} \mathbb{P}(\epsilon_i)$ is the joint distribution over mutually independent exogenous noise variables. The graph of a structural causal model, $G$, is obtained simply by drawing directed edges pointing from causes to effects. As assignments
We report experimental results on both synthetic and real-world German Credit Dataset, showcasing the promise and usefulness of the proposed exogenous noise identification. The code for reproducing the results is available at https://github.com/Saptarshi-Saha-1996/Noise-Abduction-for-Counterfactuals. ## 2 Preliminaries 2.1 Background On Structural Causal Models A structural causal model(SCM) is defined as a tuple C := (S, P(ϵ)), where S = (f1, f2*, ..., f*p) is a collection of p deterministic structural assignments, $$X_{j}:=f_{j}(\mathbf{P}\mathbf{a}_{j},\epsilon_{j}),\quad j=1,2,..,p,$$ Xj := fj (Paj , ϵj ), j = 1, 2*, .., p,* (1) where Paj ⊆ {X1, ..., Xp*} \ {*Xj} is the set of parents of Xj (its direct causes) and P(ϵ) = Qp i=1 P(ϵi) is the joint distribution over mutually independent exogenous noise variables. The graph of a structural causal model G is obtained simply by drawing directed edges pointing from causes to effects. As assignments 1This need not be the case always. For instance, example 1(d). $$(1)$$ are assumed acyclic, the directed graph G induced by the SCM C is also acyclic. Every SCM C entails a unique joint distribution P C X over the variables X = (X1*, ..., X*p). The graph structure, along with the joint independence of the exogenous noises factorizes the entailed distribution P C X canonically into causal conditionals, $$P_{\mathbf{X}}^{\mathfrak{E}}(\mathbf{X}=\mathbf{x}):=\mathbb{P}_{\mathfrak{G}}(\mathbf{x})=\prod_{j=1}^{p}\mathbb{P}(x_{j}|\mathbf{p}\mathbf{a}_{j}^{\mathfrak{G}}).$$ $$\left(2\right)$$ ). (2) It is referred as causal (or disentangled or Markov) factorization. This allows to use G for predicting the effects of interventions, defined as substituting one or multiple of its structural assignments, written as 'do(···)'. An intervention on a set of variables {Xt : t ∈ I} is defined as substituting the respective structural assignments by $$X_{t}:={\tilde{f}}_{t}({\widehat{\mathbf{P}\mathbf{a}_{t}}},{\tilde{\epsilon}}_{t}),\quad t\in I.$$ The entailed distribution in the new SCM C˜ is called as intervention distribution, denoted by P C˜ X. The set of exogenous variables {ϵt : t /∈ I*} ∪ {*ϵ˜t : t ∈ I} in C˜ are required to be mutually independent. An intervention, where the structural assignment for a variable is modified by changing the function or the noise term, resulting in a change in the conditional distribution given its parents, is called soft/imperfect intervention. It is written as do(Xt := ˜ft(Paf t, ϵ˜t)) (Peters et al., 2017). As the new SCM C˜ should have an acyclic graph, the set of allowed interventions thus depends on the graph G, induced by C. In this paper, we mainly focus on interventions with Pagt equals Pat or empty (that will be clear from the context). We use Pagt for a different purpose described in section 3. Independent Causal Mechanisms (ICM) Principle (Peters et al. (2017)) says that performing an intervention upon one mechanism P(Xi|Pai) does not change any of the other mechanisms P(Xj |Paj )(i ̸= j). As a consequence, we get $$P_{\mathbf{X}}^{\tilde{\mathbf{\phi}}}(\mathbf{X}=\mathbf{x}):=\mathbb{P}_{\tilde{\mathbf{\phi}}}(\mathbf{x})=\prod_{j\not\in I}\mathbb{P}_{\mathbf{\phi}}(x_{j}|\mathbf{pa}_{j}^{\mathbf{\phi}})\prod_{j\in I}\mathbb{P}_{\tilde{\mathbf{\phi}}}(x_{j}|\bar{\mathbf{pa}}_{j}^{\tilde{\mathbf{\phi}}}).\tag{1}$$ $$\left(3\right)$$ When ˜f(Pat, ϵ˜t) puts a point mass on a real value a, i.e., PG˜ (xt|pat ) = 1xt=a, we simply written it as do(Xt = a) and call this an atomic/hard/perfect intervention. 
In particular, such constant reassignment disconnects $X_t$ from all its parents and imparts a direct manipulation disregarding its natural causes.

## 2.2 Counterfactuals

Given an observed outcome, counterfactuals are hypothetical retrospective interventions (cf. potential outcome): 'Given that we observed $(X_i, X_j) = (x_i^{obs}, x_j^{obs})$, what would $X_i$ have been if $X_j$ were $x'_j$?' By assumption, the state of any observable variable is fully determined by the exogenous noises and structural assignments/equations. The unit-level counterfactual is defined as the solution for $X_i$ for a given situation $\epsilon = \epsilon$, where the equation for $X_j$ is replaced with $X_j = x'_j$. We denote it by $X_{i\,X_j \leftarrow x'_j}(\epsilon)$ (read: "the value of $X_i$ in situation $\epsilon$, had $X_j$ been $x'_j$"). We might be able to answer unit-level (or individual-level) counterfactual queries if we know the specific functional form of these structural equations. Mathematically, counterfactual inference can be formulated as a three-step algorithm (Pearl (2009)):

1. **Abduction:** Predict the exogenous noise $\epsilon$ from the observations $\mathbf{x}^{obs}$, i.e., infer $\mathbb{P}(\epsilon \,|\, \mathbf{X} = \mathbf{x}^{obs})$.
2. **Action:** Perform interventions (e.g. $do(X_j = x'_j)$) corresponding to the desired manipulations, resulting in a modified SCM $\tilde{\mathfrak{C}} := \mathfrak{C}|_{\mathbf{X}=\mathbf{x}^{obs};\, do(X_j = x'_j)} = (\tilde{S}, \mathbb{P}(\epsilon \,|\, \mathbf{X} = \mathbf{x}^{obs}))$, where $\tilde{S}$ is the collection of structural assignments modified by the interventions.
3. **Prediction:** Compute the quantities of interest (e.g. $X_{i\,X_j \leftarrow x'_j}(\epsilon)$) based on the distribution entailed by the counterfactual SCM $\tilde{\mathfrak{C}}$, denoted by $P^{\tilde{\mathfrak{C}}}_{\mathbf{X}} = P^{\mathfrak{C}|_{\mathbf{X}=\mathbf{x}^{obs};\, do(X_j = x'_j)}}_{\mathbf{X}}$.

The updated noise distribution of the exogenous variables $\mathbb{P}(\epsilon \,|\, \mathbf{X} = \mathbf{x}^{obs})$ need not be mutually independent anymore. It is not always possible to determine the counterfactuals with probability 1. When we cannot solve for $\epsilon_i$ (e.g., when the function $f_i$ that maps $\epsilon_i$ to $X_i$ for a fixed value of $\mathbf{x}$ is not invertible in the noise term), we assume some prior distribution for $\epsilon_i$ and update $\mathbb{P}(\epsilon_i)$ by the observations $\mathbf{x}^{obs}$ to obtain $\mathbb{P}(\epsilon_i \,|\, \mathbf{x}^{obs})$ (**Abduction**). In general, using Bayes' theorem,

$$\mathbb{P}(\epsilon=\epsilon\,|\,\mathbf{X}(\epsilon)=\mathbf{x}^{obs})=\frac{\mathbf{1}_{\mathbf{X}(\epsilon)=\mathbf{x}^{obs}}\,\mathbb{P}(\epsilon=\epsilon)}{\sum_{\{\epsilon^{\prime}\,|\,\mathbf{X}(\epsilon^{\prime})=\mathbf{x}^{obs}\}}\mathbb{P}(\epsilon=\epsilon^{\prime})}.\tag{4}$$

$\mathbf{X}(\epsilon)$ emphasizes that every endogenous variable $X_i$ is a function of $\epsilon$. In the case of non-invertible structural assignments, we do not get all the probability concentrated on one particular value of the counterfactual $X_{i\,X_j \leftarrow x'_j}(\epsilon)$; instead, we get a distribution. Averaging over the space of $\epsilon$, a potential outcome $X_{i\,X_j \leftarrow x'_j}(\epsilon)$ induces a random variable that is simply denoted as $X_{i\,X_j \leftarrow x'_j}$. The counterfactual distribution $\mathbb{P}(X_{i\,X_j \leftarrow x'_j} = X_{i\,X_j \leftarrow x'_j}(\epsilon) \,|\, X_j = x_j^{obs}, X_i = x_i^{obs})$ denotes the probability that $X_{i\,X_j \leftarrow x'_j}$ is equal to the value $X_{i\,X_j \leftarrow x'_j}(\epsilon)$ if $X_j$ is changed to a different value $x'_j$, given a specific observation $X_i = x_i^{obs}$ and $X_j = x_j^{obs}$. Let $\epsilon = \epsilon$ be one of the situations that leads to the observation $\mathbf{X} = \mathbf{x}^{obs}$ (more specifically, $X_i = x_i^{obs}$, $X_j = x_j^{obs}$).
Then, in particular,

$$\mathbb{P}(X_{i\,X_{j}\leftarrow x_{j}^{\prime}}=X_{i\,X_{j}\leftarrow x_{j}^{\prime}}(\epsilon)\,|\,X_{j}=x_{j}^{obs},X_{i}=x_{i}^{obs})=\mathbb{P}(\epsilon=\epsilon\,|\,\mathbf{X}=\mathbf{x}^{obs}).$$

This advances us from unit-level counterfactuals to population-level counterfactuals that are not specific to a particular situation $\epsilon$ (rather, all the situations, i.e., the population, are considered), e.g., $\mathbb{E}(X_{i\,X_j \leftarrow x'_j} \,|\, \mathbf{X} = \mathbf{x}^{obs})$. The expectation is taken over the whole population. $\mathbb{P}(\epsilon)$ defines a probability distribution over the endogenous variables $\mathbf{X}$,

$$\mathbb{P}(X_{i}=x_{i})=\sum_{\{\epsilon\,|\,X_{i}(\epsilon)=x_{i}\}}\mathbb{P}(\epsilon=\epsilon).$$

The probability of counterfactual statements is defined in the same manner, e.g.,

$$\mathbb{P}(X_{i\,X_{j}\leftarrow x^{\prime}_{j}}=x^{\prime}_{i}\,|\,X_{j}=x^{obs}_{j},X_{i}=x^{obs}_{i})=\sum_{\{\epsilon\,|\,X_{i\,X_{j}\leftarrow x^{\prime}_{j}}(\epsilon)=x^{\prime}_{i}\}}\mathbb{P}(\epsilon=\epsilon\,|\,\mathbf{X}=\mathbf{x}^{obs})\tag{5}$$
$$=\sum_{\epsilon}\mathbb{P}(X_{i\,X_{j}\leftarrow x^{\prime}_{j}}(\epsilon)=x^{\prime}_{i})\,\mathbb{P}(\epsilon=\epsilon\,|\,\mathbf{X}=\mathbf{x}^{obs}).$$

With the help of such a formulation, we are allowed to compute joint probabilities of every combination of counterfactual and observable events. Natural direct and indirect effects in mediation analysis, probability of necessity, probability of sufficiency (Pearl, 2016), harm (Richens et al., 2022), etc. are a few examples of counterfactual quantities.

## 2.2.1 Identifiability Of Counterfactuals

One of the fundamental questions in counterfactual analysis is the question of identification: can the counterfactual quantities be estimated from observational data, experimental data, or both? In a fully specified causal model, i.e., if all parameters of the causal model are known (including $\mathbb{P}(\epsilon)$), every counterfactual is identifiable and can be computed using the three steps - abduction, action, and prediction. Counterfactual quantities may not be generally identifiable even if we have both interventional and observational distributions (Peters et al., 2017). For computing unit-level counterfactuals, one needs parametric forms of the structural assignments. Often, in reality, we do not know the structural assignments and the distributions of the exogenous noises. Flow-based SCMs use normalizing flows to parameterize each structural assignment of an SCM as an invertible mechanism (we assume invertibility in the noise argument) and also make assumptions on the distributions of noises. One may not require parametric forms of structural equations to answer population-level counterfactuals. See Malinsky et al. (2019) for general identification of counterfactual quantities.

## 2.2.2 Scope Of Interventions For Counterfactual Analysis

Standard tools of the SCM framework do not inherently restrict intervention. One could, at least in theory, intervene unconditionally on any subset of variables to perform counterfactual analysis. Thus the choices of form and feasibility in the scope of interventions are delegated to the individual and/or the institution and made based on a semantic understanding of the modelled variables. For example, Z cannot be intervened on in the causal graphs in Figure 2 in Zhang et al. (2020). Throughout this paper, we do not restrict ourselves from intervening on the variables of interest in the counterfactual query.
## 3 Counterfactual With Different Interventions

In section 2.1, we mentioned soft interventions, where the original conditional distributions of the intervened variables are replaced with new ones without fully eliminating the causal effect of the parents. This operation is also known as a mechanism change (Tian & Pearl (2013)). In many settings it presents a more realistic model than hard or perfect interventions, where variables are forced to a fixed value. Karimi et al. (2021) and Crupi et al. (2021) perform soft interventions (particularly an additive intervention) to generate counterfactual explanations and recommendations in the context of algorithmic recourse.

Example 1 (adapted from Example 6.18 in Peters et al. (2017)). *Consider the following SCM:*

$$\begin{array}{l}{{X:=\epsilon_{X}+1}}\\ {{Y:=X^{2}+\epsilon_{Y}}}\\ {{Z:=2Y+X+\epsilon_{Z}}}\end{array}$$

with $\epsilon_X, \epsilon_Y, \epsilon_Z \sim \text{Uniform}(\{-5, -4, \dots, 4, 5\})$ independently. Now, assume that we observe $(X, Y, Z) = (1, 2, 4)$ and we are interested in the counterfactual query (a): what would have been $Z$, had $Y$ been 5? Now we pose the question as follows: to answer the counterfactual query, do we need to know the state of $\epsilon_X$?

![4_image_0.png](4_image_0.png)

Figure 1: The left-most directed acyclic graph (DAG) $G$ is the causal graph induced by the SCM in example 1. The remaining four are the causal graphs induced by the counterfactual SCMs for the queries in (a), (b), (c) & (d). Noises that must be abducted are filled in pink.

*Note that, given observation $(X, Y, Z) = (1, 2, 4)$, inferring $\epsilon_Z = -1$ is sufficient to answer $Z_{Y \leftarrow 5}(\epsilon) = 10$. Furthermore, we do not even need to know the structural equations of $X$ and $Y$. However, the scenario would be a bit different if we change the counterfactual question (b): what would have been $Z$, had $Y$ followed $Y := X + \epsilon_Y$? In this case, given observation $(X, Y, Z) = (1, 2, 4)$, we need to infer $\epsilon_Z = -1$ and $\epsilon_Y = 1$ to answer that $Z$ would have been 4, had $Y$ followed the structural equation $Y := X + \epsilon_Y$. Further, for computing $Z_{Y \leftarrow Y+2}(\epsilon) = 8$ (c), we do not even need to infer $\epsilon_Y$; only $\epsilon_Z$ suffices. Here, an interesting observation to make is that the DAG $\tilde{G}$ of the manipulated SCM $\tilde{\mathfrak{C}}$ remains the same as $G$ for the counterfactual queries in (b) and (c) (Figure 1). To illustrate more, consider the counterfactual question (d): what would have been $Z$, had $Y$ been 5 and $X$ followed $X := \epsilon_X^2 + 2$? It is sufficient to infer $(\epsilon_X, \epsilon_Z) = (0, -1)$ to answer (d). Had $Y$ been 5 and $X$ followed $X := \epsilon_X^2 + 2$, $Z$ would have been 11.*

The above example motivates us to define the semi-hard/semi-soft intervention, an intermediate scenario where we technically do not force a constant value but disregard the interventions on the ancestor variables (of the intervened variable) when we intervene. A semi-hard/semi-soft intervention is defined as taking a unique functional form; as a result, it is not required to know the intervened variable's parents and the corresponding noise variable for computing the value of the intervention if we are given the observed value.

Definition 1 (semi-soft/semi-hard intervention). *An intervention on $X_t$ of the form* $X_t \leftarrow \tilde{f}(\overline{\mathbf{Pa}}_t, \epsilon_t) = h(f(\overline{\mathbf{Pa}}_t, \epsilon_t))$, *where $h$ is any arbitrary function, is called a semi-soft/semi-hard intervention.*

$\overline{\mathbf{Pa}}_t$ emphasizes the fact that we disregard any intervention on the ancestors of $X_t$. If we considered the intervention on the ancestors, we would have written it with $\widehat{\mathbf{Pa}}_t$, which coincides with a soft intervention. A concrete example is given in Appendix B.1.
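Returning to query (a) of Example 1, the following minimal sketch makes the three-step procedure concrete: since the assignment for $Z$ is invertible in its noise term, abduction reduces to solving for $\epsilon_Z$, and no other noise needs to be inferred.

```python
# A minimal sketch of abduction-action-prediction for query (a) in Example 1:
# given (X, Y, Z) = (1, 2, 4), what would Z have been had Y been 5?
x_obs, y_obs, z_obs = 1, 2, 4

# Abduction: invert Z := 2Y + X + eps_Z at the observation.
eps_z = z_obs - 2 * y_obs - x_obs   # -1

# Action: hard intervention do(Y = 5).
y_cf = 5

# Prediction: re-evaluate the (unmodified) assignment for Z.
z_cf = 2 * y_cf + x_obs + eps_z     # Z_{Y <- 5}(eps) = 10
```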
A typical additive intervention is an example of a semi-soft intervention ($h(f) = f + c$, where $c$ is a constant). Hard interventions are also a special case of semi-soft interventions ($h(f) = c$), but in this article we strictly differentiate between a hard intervention, a soft intervention, and a semi-soft intervention. One may argue that since we are denying interventional changes on the ancestors when we intervene, we could disconnect a semi-hard intervened variable from its parents in the graph induced by the interventions. We resort to this argument for the rest of the paper.

## 4 Notations And Problem Setup

A path in $G$ is a sequence of (at least two) distinct vertices $X_{i_1}, \dots, X_{i_m}$, such that there is an edge between $X_{i_k}$ and $X_{i_{k+1}}$ for all $k = 1, \dots, (m-1)$. If $X_{i_k} \rightarrow X_{i_{k+1}}$ for all $k$, we speak of a directed path from $X_{i_1}$ to $X_{i_m}$, denoted as $P_{i_1 \rightarrow i_m}$. We will use the following standard kinship relations for sets of vertices in a directed acyclic graph $G$:

$$De^G_i = \{X_j : \exists \text{ a directed path from } X_i \text{ to } X_j \text{ in } G\}$$
$$De^G_A = \{X_j : \exists \text{ a directed path to } X_j \text{ from } X_i \text{ in } G, \text{ for some } i \in A\}$$
$$An^G_i = \{X_j : \exists \text{ a directed path from } X_j \text{ to } X_i \text{ in } G\}$$
$$An^G_A = \{X_j : \exists \text{ a directed path from } X_j \text{ to } X_i \text{ in } G, \text{ for some } i \in A\}$$

Given an index set $C \subseteq \{1, 2, \dots, p\}$, $\mathbf{X}_C$ denotes the random vector $(X_i)_{i \in C}$ and $\mathbf{X}_{-C} = (X_i)_{i \notin C}$.

Let us formally state the problem we want to address. Assume $\mathfrak{C} := (S, \mathbb{P}(\epsilon))$ is a structural causal model. The graph of $\mathfrak{C}$ is $G$. For ensuring identifiability, we assume that $\mathfrak{C}$ satisfies four standard assumptions: the Markov property, causal sufficiency (i.e., no hidden confounders), causal minimality, and causal faithfulness (Peters et al. (2017)). Let $A_H$ be the index set of random variables on which we perform hard interventions in the action stage. Similarly, let $\{X_i : i \in A_S\}$ and $\{X_i : i \in A_T\}$ be the sets of random variables on which we act with soft interventions and semi-hard/semi-soft interventions, respectively. $A = A_S \cup A_H \cup A_T$ is the index set of intervened variables. Let the counterfactual query we want to answer be Q: **What would $\mathbf{X}_C$ have been if $\mathbf{X}_{A_H}$ were $\mathbf{x}_{A_H}$ and, for each $i \in A_S \cup A_T$, the mechanism $f_i$ was changed to $\tilde{f}_i$, given that we observe $\mathbf{X} = \mathbf{x}^{obs}$?** For the sake of simplicity, we denote the intervention

$$do\Big(X_{j}=x_{j}{\text{ for }}j\in A_{H};\;X_{j}={\tilde{f}}_{j}(\mathbf{Pa}_{j},\epsilon_{j}){\text{ for }}j\in A_{S}\cup A_{T}\Big)$$

as $do(A \leftarrow a)$. Let $\tilde{G}$ be the graph of the counterfactual SCM $\tilde{\mathfrak{C}}$, modified by the intervention $do(A \leftarrow a)$. For $i \in C$, let $X_{i\,A \leftarrow a}$ denote an answer to the counterfactual query Q. The set of all directed paths from $\mathbf{X}_A$ to $\mathbf{X}_C$ in $G$ is defined as $\mathcal{P}_G(\mathbf{X}_A \rightarrow \mathbf{X}_C) = \{P_{i \rightarrow j} \text{ in } G : i \in A, j \in C\}$.
Then, following a topological order in G, $X_{i}(\epsilon)=x_{i}^{obs}=f_{i}(\mathbf{pa}_{i},\epsilon_{i})=X_{i,\mathcal{A}\gets\mathbf{a}}(\epsilon),\quad\forall i\in\{k:X_{k}\in\mathcal{G}\}.$ Hence, $$\{\epsilon:X_{i}(\epsilon)=x_{i}^{o b s}\}=\{\epsilon^{\prime}:X_{i A\gets a}(\epsilon^{\prime})=x_{i}^{o b s}\},\quad\forall i\in\{k:X_{k}\in\mathcal{G}\}.$$ Using (4) and (5), we get $\mathbb{P}(X_{i,\mathcal{A}\gets a}=x_{i}^{obs}|\mathbf{X}=\mathbf{x}^{obs})=1,\quad\forall i\in\{k:X_{k}\in\mathcal{G}\}.$ $\square$ We get the desired result as *C \ {*k : Xk ∈ DeG A ∪ XA} ⊆ {k : Xk *∈ G}* . As an immediate consequence, we do not need to infer noises attached to the variables outside the action set XA and its descendants DeG A, as we are about to modify the SCM C by acting on variables in A. For example, in the causal graph of figure 2a, if we intervene (hypothetically, in theory) on 'gender', a counterfactual answer about 'age' will not be a diversion from what we observe, which is pretty much intuitive from the causal graph and indeed 'causal' in nature. On the other hand, we are interested in counterfactual queries about XC. We do not need to oversee all the variables in DeG A ∪ XA. ![6_image_0.png](6_image_0.png) Figure 2: (a) Causal graph for the German credit dataset. (b) Causal graph of the synthetic dataset. Theorem 2. Assume that Xj has not been intervened. Then counterfactual prediction on Xj *may differ* from its observed value x obs jiff at least an ancestor of Xj *has been intervened.* 3By 'get affected', we mean a possibility of distributional change. Proof. If we intervene on an ancestor of Xj , from observation 1, counterfactual prediction on Xj may differ from its observed value x obs j. For the only if part, assume none of the ancestors of Xj has been intervened. Let I be the index set of intervened variables, then Xj ∈/ DeG I . Moreover, as Xj hasn't been intervened on, by theorem 1, the counterfactual value of Xj remains the same as its observed value, contradicting the hypothesis. $\square$ Theorem 2 says we need to worry about noises attached to variables in AnG C ∪ XC only, as we are interested in a counterfactual query about XC. For example, if we are concerned about only 'repayment duration' in the causal graph of figure 2a, we need to take care of its ancestors' exogenous noise. Furthermore, theorem 1 and theorem 2 allow us to constrain the search space to all the exogenous noises corresponding to the variables lying on a directed path from XA to XC in G. Continuing with the example of figure 2a, if we are interested in 'repayment duration' and we are intervening on 'gender' in the action step, we only need to infer ϵ3 and ϵ4 as they are attached to the variables that lie on the directed path (coloured in pink) from 'gender' to 'repayment duration'. Then why do we exclude ϵ1 from abduction? Theorem 3. XjXj←x ′ j = x ′ j . Proof. Immediate from property 2 (Effectiveness) in Pearl (2009). Effectiveness property releases us from inferring ϵAH . By definition of semi-soft intervention, we do not need to infer ϵAT . As the hard interventions and the semi-soft\semi-hard disconnect parents from the intervened variables, we further filter out exogenous disturbances by looking at G˜ instead of G. Theorem 4. XCA←a(ϵ) = XCA←a;do∗A (ϵ)*, where* do∗A = doXi = x obs ifor Xi ∈ AnG˜ C \ {DeG˜A ∪ XA} $\left\{\begin{array}{l}\cup\mathbf{X}_A\end{array}\right\}$). Proof. Immediate from theorem 1 and property 1 (Composition) in Pearl (2009) . 
Theorem 4 allows us to intervene on the variables outside $De^{\tilde{G}}_A \cup X_A$ with their observed values. This intervention $do^*_A$ depends on the intervention $do(A \leftarrow a)$. Theorem 4 also guarantees that $do^*_A$ does not change unit-level counterfactuals. We discuss this idea of intervening with the observation in more detail in Appendix B.2.

The set of noises that lie on a path from $X_A$ to $X_C$ in $\tilde{G}$, i.e., $\{\epsilon_i\}_{i \in p}$, where $p$ is the index set $p = \{i : X_i \text{ lies on a path } P \text{ such that } P \in \mathcal{P}_{\tilde{G}}(X_A \to X_C) \text{ and } \epsilon_i \text{ is an exogenous parent of } X_i \text{ in } \tilde{G}\}$, is sufficient to answer $Q$. We next define the sufficient and the essential set of exogenous noises to answer $Q$, and then we prove that $\{\epsilon_i\}_{i \in p}$ is essential.

Definition 2 (sufficient and essential set of exogenous noises). *$\bar{\epsilon} \subseteq \{\epsilon_i\}_{i=1}^n$ is said to be sufficient for a counterfactual query $Q$ if $Q$ can be answered (or computed) by inferring $\bar{\epsilon}$ only, using the three-step (abduction, action and prediction) algorithm (as described in subsection 2.2). If the sufficient set $\bar{\epsilon}$ is minimal, i.e., any proper subset of $\bar{\epsilon}$ is not sufficient, then $\bar{\epsilon}$ is called essential.*

Theorem 5. *$\{\epsilon_i\}_{i \in p}$ is essential to $Q$.*

Proof. Assume that we do not infer $\epsilon_j$, $j \in p$. If $j \in p \cap A_S$, i.e., $X_j$ has been soft intervened on, then the prediction step on $X_j$ based on $\tilde{\mathcal{C}}$ is not possible, as it requires computing $\tilde{f}_j(\mathbf{pa}^{\tilde{G}}_{j\,A \leftarrow a}, \epsilon_j)$ and $\epsilon_j$ is unknown. Similarly, if $j \in p \setminus A_S$, i.e., $X_j$ has not been intervened on (but at least one of its ancestors has been intervened on), then the prediction step on $X_j$ is again not possible, as the unknown $\epsilon_j$ creates a bottleneck in computing $f_j(\mathbf{pa}^{\tilde{G}}_{j\,A \leftarrow a}, \epsilon_j)$. Note that $X_j \in An^G_C \cup X_C$. If $j \in C$, since counterfactual prediction about $X_j$ is not possible, we are done. If $X_j \in An^G_C$, i.e., $X_j$ is an ancestor of at least one variable $X_i$ for $i \in C$, then, following the recursiveness of the SCM, counterfactual prediction about $X_i$, $i \in C$, is not possible. $\square$

We devise the following four-step procedure (adding one more step to Pearl (2009)) for computing a counterfactual query $Q$ in the SCM framework:

1. **Pre-abduction:** Identify the acting interventions, $do(A \leftarrow a)$. Identify the essential set of exogenous noises $\{\epsilon_i\}_{i \in p}$ for $Q$.
2. **Abduction:** Predict the essential exogenous noises $\epsilon_i$ from the observations $x^{obs}$, i.e., infer $\mathbb{P}(\pi_A(\epsilon)\,|\,X = x^{obs})$, where $\pi_A$ is a projection operator, depending on $do(A \leftarrow a)$, that maps $\epsilon$ to $\epsilon_p$.
3. **Action:** Perform the desired interventions $do(A \leftarrow a)$, $do^*_A$.
4. **Prediction:** Compute the quantities of interest in $Q$.

What pre-abduction says is that we know the interventions we will perform, and hence we know, a priori, the causal graph modified by those interventions. It therefore suggests exploiting this a priori knowledge for noise abduction, since we ultimately perform prediction following these interventions and the modified causal graph. This exploitation reduces the number of noises that need to be abducted from the number of nodes in $G$ to the total number of nodes on all directed paths from $X_A$ to $X_C$ in $\tilde{G}$. This is quite effective when the causal graph $G$ consists of a moderate or large number of nodes (variables).
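In a DAG, the index set $p$ can be computed with a few reachability queries: a node lies on a directed path from $X_A$ to $X_C$ in $\tilde{G}$ iff it is a descendant-or-self of some node in $A$ and an ancestor-or-self of some node in $C$. The following is a minimal sketch of the pre-abduction step (assuming networkx; the function name and graph encoding are ours for illustration, not part of our implementation):

```python
import networkx as nx

def essential_noise_indices(G_tilde, A, C, hard_or_semisoft):
    """Index set p of the exogenous noises essential to Q (Theorem 5).

    G_tilde:          nx.DiGraph of the intervened SCM, with hard/semi-soft
                      intervened nodes already disconnected from their parents
    A, C:             index sets of intervened / queried variables
    hard_or_semisoft: indices in A_H and A_T, whose noises are never needed
    """
    # X_i lies on a directed path from X_A to X_C in a DAG iff it is a
    # descendant-or-self of some a in A and an ancestor-or-self of some c in C.
    reach_from_A = set(A)
    for a in A:
        reach_from_A |= nx.descendants(G_tilde, a)
    reach_to_C = set(C)
    for c in C:
        reach_to_C |= nx.ancestors(G_tilde, c)
    on_path = reach_from_A & reach_to_C
    # Effectiveness (Theorem 3) and the definition of a semi-soft intervention
    # release us from inferring eps_{A_H} and eps_{A_T}.
    return on_path - set(hard_or_semisoft)

# Figure 2b example: hard intervention on X5, query variable Y.
G = nx.DiGraph([(6, 5), (6, 1), (5, 1), (5, 2), (4, 3),
                (1, "Y"), (2, "Y"), (3, "Y")])
G_tilde = G.copy()
G_tilde.remove_edges_from(list(G.in_edges(5)))  # disconnect X5 from its parents
print(essential_noise_indices(G_tilde, A={5}, C={"Y"}, hard_or_semisoft={5}))
# -> {1, 2, 'Y'}, matching the partial model's abduction set {eps_1, eps_2, eps_Y}
```

For the synthetic SCM of figure 2b below, this returns exactly the noise set inferred by the partial model in section 6.1.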
## 6 Experiments

## 6.1 Case Study 1: Synthetic Dataset

For the synthetic setting, we generate data following the model in figure 2b, where we assume

$$\begin{aligned}
X_6 &= \epsilon_6 - 1, & X_5 &= 3X_6 + \epsilon_5 - 1, \\
X_4 &= 2\epsilon_4 + 1, & X_3 &= -3X_4 + \epsilon_3 - 3, \\
X_2 &= X_5 - \epsilon_2, & X_1 &= X_6 - X_5 + 3\epsilon_1, \\
Y &= X_1 + 2X_2 - 3X_3 + \epsilon_Y,
\end{aligned}$$

and $\epsilon_Y, \epsilon_i \sim \mathcal{N}(0, 1)$ independently, for $i = 1, 2, \dots, 6$. We generate 20000 data points from the SCM. This simple dataset allows for a comparison of generated counterfactuals in a controlled and measurable environment. We consider two models to answer the query: "What would have happened to $Y$ if $X_5$ had been different from what we observed: $X = x^{obs}$, $Y = y^{obs}$?" The full model infers all the exogenous noises, whereas the partial model only infers $\epsilon_1$, $\epsilon_2$, and $\epsilon_Y$ (following pre-abduction). We use this setting to study the importance of noise identification for abduction.

We use affine coupling flows (Dinh et al., 2017) for $X_4$ and $X_6$ and conditional affine coupling transforms for the other dependent variables. In the full model, seven flows are implemented - two linear flows and five conditional flows. In the partial model, only three conditional flows are used, for $X_1$, $X_2$, and $Y$, respectively. We model the base densities with standard Gaussians. We use the Pyro (Bingham et al., 2019) probabilistic programming language (PPL) framework for the implementation of the flow-based SCM. A PPL is a programming language in which probabilistic models are specified and inference for these models is performed automatically, with terms corresponding to sampling and conditioning. Pyro is a PPL based on PyTorch (Paszke et al., 2019). For a detailed overview of PPLs, see van de Meent et al. (2018). Adam (Kingma & Ba, 2015) with batch size 128 and an initial learning rate of $10^{-3}$ is used for optimization purposes. Both models are trained for 1000 epochs using a 12th Gen Intel(R) Core(TM) i9-12900KF CPU.

Figure 3a shows the counterfactual distribution $\mathbb{P}(Y_{X_5 \leftarrow 5}\,|\,X = x^{obs}, Y = y^{obs})$ estimated by the full and partial models (for one particular seed) along with the true counterfactual distribution. We quantitatively compare the associative capabilities of both models by log-likelihoods (validation) as shown in table 1. Figures depicting the goodness of noise estimation and the sampling capabilities of both models are provided in Appendix C.

![9_image_0.png](9_image_0.png)

Figure 3: (a) The red curve is the kernel density estimate (KDE) plot of the true counterfactual distribution. The solid green and black dashed lines are the KDE plots of the distributions estimated (for one particular seed) by the full and partial models. (b) Average (over ten different seeds) mean squared errors in estimating counterfactual values of $Y$. The x-axis represents the values we intervene on $X_5$ with. Black circles are average errors in the partial model. Green dots are average errors in the full model.

We run the same experiment for ten different seeds. We intervene on $X_5$ with 200 different values4 uniformly sampled from -30 to 30. Average (over ten different seeds) mean squared errors in counterfactual estimation (on seen datapoints5) for each model for the 200 different intervention values are depicted in figure 3b. We report the average time to train 1000 epochs for both models in table 1.
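For concreteness, the following is a minimal sketch of this data-generating process and of the ground-truth counterfactual it implies (a NumPy rendering we add for illustration; the experiments themselves implement the SCM with normalizing flows in Pyro):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
eps = {k: rng.standard_normal(n) for k in ["1", "2", "3", "4", "5", "6", "Y"]}

# Structural assignments of the synthetic SCM (figure 2b).
X6 = eps["6"] - 1
X5 = 3 * X6 + eps["5"] - 1
X4 = 2 * eps["4"] + 1
X3 = -3 * X4 + eps["3"] - 3
X2 = X5 - eps["2"]
X1 = X6 - X5 + 3 * eps["1"]
Y = X1 + 2 * X2 - 3 * X3 + eps["Y"]

def y_counterfactual(j, x5_new):
    """Ground-truth Y_{X5 <- x5'} for unit j: hold the abducted noises fixed
    and propagate the intervention through the descendants of X5. Note that
    only eps_1, eps_2 and eps_Y are touched - exactly the noises that the
    partial model infers via pre-abduction."""
    x2_cf = x5_new - eps["2"][j]
    x1_cf = X6[j] - x5_new + 3 * eps["1"][j]
    return x1_cf + 2 * x2_cf - 3 * X3[j] + eps["Y"][j]

print(y_counterfactual(0, 5.0))  # counterfactual under do(X5 = 5) for the first unit
```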
Table 1: Best validation log-likelihood and average training time for the full and partial models. The middle columns report per-variable log-likelihoods.

| Model   | X6      | X5      | X4      | X3      | X2      | X1      | Y       | Training time (min) |
|---------|---------|---------|---------|---------|---------|---------|---------|---------------------|
| Partial | -       | -       | -       | -       | -1.4160 | -2.5050 | -1.4415 | 6.46 ± 0.033        |
| Full    | -1.4198 | -1.4166 | -2.1126 | -1.4229 | -1.4163 | -2.5050 | -1.4418 | 11.22 ± 0.081       |

Next, we experiment with the progression of training time (for 100 epochs) with different batch and sample sizes for both the partial and full models. Samples are generated from the same SCM. We run both models for ten different seeds. Figure 4 depicts the average (over ten different seeds) training time ratios (partial model/full model) with different sample sizes and four different batch sizes.

![9_image_1.png](9_image_1.png)

Figure 4: Training time ratio (partial model/full model) vs. sample size for four different batch sizes.

4These 200 values remain the same across ten different seeds.

5By 'seen datapoints', we mean the datapoints used in training and validation. The MSE in the estimation of counterfactuals on unseen data points (test MSE) is given in Appendix C.

![10_image_0.png](10_image_0.png)

Figure 5: Combinations of flows used in the experiment. Flows' combinations are identified by the phrases inside the ellipse. Flows inside the light grey rectangle are used in the partial model, i.e., we don't model any flow for age in the case of the partial model. We have the same order (i.e., either linear or quadratic) for the flows in red.

## 6.2 Case Study 2: German Credit Dataset

As a real-world setting, we consider a subset of the features in the German credit dataset. This subset includes gender ($X_1$), age ($X_2$), credit amount ($X_3$), and repayment duration ($X_4$). In figure 2a, we see an example of a DAG representing the causal relationships (Karimi et al. (2021)) in the German credit dataset (Dua & Graff (2017)). We do not consider the risk variable in our experiment. We are interested in studying the counterfactual query: Had the person been male instead of female (or female instead of male), would the person have been offered a larger (or smaller) credit amount for a longer (or shorter) duration?

First, the flow-based SCM is trained using the observed data. Next, the states of the exogenous noises are inferred with the estimated structural assignments, which are invertible (abduction step). Then we intervene upon the sex by replacing the sex variable with a specific value, 'male' or 'female'; this is denoted by do(sex = male) or do(sex = female). We use the modified flow-based SCM to compute counterfactual quantities.

Similar to the synthetic data experiment, we consider two models. The full model infers all the exogenous noise variables except $\epsilon_1$, since we model the mechanism of gender\sex ($X_1$) as $x_1 = f_1(\epsilon_1) = \epsilon_1$. Age $X_2$, credit amount $X_3$ and repayment duration $X_4$ are modelled as

$$\begin{aligned}
x_2 &= f_2(\epsilon_2) = (\text{Spline}_{\theta} \circ \text{AffineNormalisation} \circ \exp)(\epsilon_2), \\
x_3 &= f_3(\epsilon_3; x_1, x_2) = (\text{ConditionalTransform}_{\theta}([x_1, x_2]) \circ \text{AffineNormalisation} \circ \exp)(\epsilon_3), \\
x_4 &= f_4(\epsilon_4; x_3) = (\text{ConditionalTransform}_{\theta}([x_3]) \circ \text{AffineNormalisation} \circ \exp)(\epsilon_4).
\end{aligned}$$

The modules highlighted by $\theta$ are parameterized using neural networks. We use a categorical distribution for sex ($X_1$) and directly learn the binary probability of sex ($X_1$). The densities of the exogenous noises (except $\epsilon_1$) are standard Gaussians. For the other structural assignments, we use real-valued normalizing flows.
A linear flow and two conditional flows (conditioned on the activations of a fully-connected network; one takes age and sex as input for the credit amount, and the other takes the credit amount as input for the duration) are used as structural assignments for the age, credit amount, and duration features, respectively. We constrain the age ($X_2$), credit amount ($X_3$), and repayment duration ($X_4$) variables with a lower bound (exponential transform) and rescale them using a fixed affine transform for normalization.

The partial model infers only $\epsilon_3$ and $\epsilon_4$, as suggested by pre-abduction (described in section 5). We model the flows for credit amount ($X_3$) and repayment duration ($X_4$) similarly to the full model. However, we do not model a flow for the age variable. Combinations of flows used in the experiment are depicted in figure 5. The $\text{Spline}_{\theta}$ transformation stands for the linear neural spline flows (Dolatabadi et al. (2020)). $\text{ConditionalTransform}_{\theta}(\cdot)$ can be a conditional affine or a conditional spline transform. We use linear (Dolatabadi et al. (2020)) and quadratic (Durkan et al. (2019)) order, autoregressive and linear neural spline flows for the conditional spline transform. These are more expressive than the affine flows. Taking $\cdot$ as input, a context neural network estimates the transformation parameters of $\text{ConditionalTransform}_{\theta}(\cdot)$. We implement the context networks as fully-connected networks for the spline and affine flows. Adam (Kingma & Ba, 2015) with a batch size of 64, an initial learning rate of $3 \times 10^{-4}$, and a weight decay of $10^{-4}$ is used in training. We use a staircase learning rate schedule with decay milestones at 50% and 75% of the training duration. All instances of both models are trained for 500 epochs using an NVIDIA RTX A5000 GPU. Training times are reported in Appendix D.

![11_image_0.png](11_image_0.png)

Figure 6: On the left, KDE plots of the observed distributions P(Credit amount | Sex = male) and P(Repayment duration | Sex = male) are given in red. Counterfactual distributions P(Credit amount_{do(Sex=female)} | Sex = male) and P(Repayment duration_{do(Sex=female)} | Sex = male) estimated by the full and partial models are presented in gray and black, respectively. On the right, KDE plots of the observed distributions P(Credit amount | Sex = female) and P(Repayment duration | Sex = female) are given in red. Counterfactual distributions P(Credit amount_{do(Sex=male)} | Sex = female) and P(Repayment duration_{do(Sex=male)} | Sex = female), estimated by the full and partial models, are presented in gray and black, respectively. The upper panel is for distributions related to credit amounts. The lower panel is for distributions related to repayment duration. Box plots at the right-hand corner of each subplot are self-explanatory.

Figure 6 depicts how the observed distributions of credit amount and repayment duration would have changed to the corresponding counterfactual distributions if we hypothetically set the gender of the loanees differently from what is reported. While we present the result of counterfactual estimation via the 'affine' flow combination of linear order in figure 6, the results of the other flow combinations are in Appendix D. We also quantitatively compare the associative capabilities of all instances of both models by log-likelihoods (validation), as given in Appendix D.

## 7 Discussion

This paper tackles the problem of identifying the exogenous noises that must be abducted for counterfactual inference.
We demonstrate that explicitly identifying noises is an important task for counterfactual inference: we empirically show that identifying the noise variables can reduce the computational load of counterfactual inference without compromising performance. Identifying the exogenous noise variables needed to answer a counterfactual query also reduces the burden of modelling too many normalizing flows.

Our work makes Pawlowski et al. (2020)'s framework applicable to partially specified causal graphs, in the setting where we observe all variables that lie on a directed path from $X_A$ to $X_C$, along with their parents. The causal relations among these variables need to be fully specified. For example, consider the causal graph in figure 2b. If we are interested in $Y_{X_5 \leftarrow x'_5}(\epsilon)$, it does not matter whether $X_4$ is observed or not; the sub-graph inside the pink region will suffice. Note that we have not really used $X_4$ in the partial model of the synthetic data experiment, as we conditioned on $X_3$ for answering $Y_{X_5 \leftarrow x'_5}(\epsilon)$. Though our work is heavily inspired by Pawlowski et al. (2020)'s framework, it is general enough to apply to other frameworks for generating counterfactuals.

Our work does come with limitations to be investigated further. For example, we do not study the scenario in which a hidden (unobserved) variable lies on a path from the intervened variable to the variable we are interested in. We do not fundamentally restrict ourselves from intervening on the variables. In scenarios where we cannot intervene on a variable at all, i.e., when we try to answer counterfactual queries from observational data, or from a combination of observational and experimental data only, the identification of the counterfactual question itself is the first priority. It would be interesting to investigate the roles of the noises in such settings. Another limitation is that reducing the noise abduction set might restrict the generative power of the model6.

A noted limitation of counterfactual inference is that counterfactuals are usually unverifiable for real datasets. Evaluation is not possible except in a few constrained settings, as true counterfactuals are almost never observed. Counterfactual speculation asks what some variable would have been in a parallel universe where all but the intervened variables and their descendants were the same. However, the machinery of counterfactual inference provides scientists with better schemes for controlling the known confounders. As a result, the SCM framework is widely applicable for enhancing the trustworthiness and performance of ML\AI systems.

## Acknowledgments

This research is partially supported by the Science and Engineering Research Board (SERB), Dept. of Science and Technology (DST), Govt. of India, through Grant File No. SPR/2020/000495.

## References

Axel Sauer and Andreas Geiger. Counterfactual generative networks. In *International Conference on Learning Representations (ICLR)*, 2021.

Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deep universal probabilistic programming. *J. Mach. Learn. Res.*, 20(1):973–978, jan 2019. ISSN 1532-4435.

Riccardo Crupi, Alessandro Castelnovo, Daniele Regoli, and Beatriz San Miguel Gonzalez. Counterfactual explanations as interventions in latent space. *arXiv preprint arXiv:2106.07754*, 2021.

Saloni Dash, Vineeth N. Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals.
In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pp. 3879–3888. IEEE, 2022. doi: 10.1109/WACV51458.2022.00393. URL https://doi.org/10.1109/WACV51458.2022.00393.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=HkpbnH9lx.

Hadi Mohaghegh Dolatabadi, Sarah Erfani, and Christopher Leckie. Invertible generative modeling using linear rational splines. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 4236–4246, 2020.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

6Discussed in Appendix C.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/7ac71d433f282034e088473244df8c02-Paper.pdf.

Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse: from counterfactual explanations to interventions. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.

Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf.

Daniel Malinsky, Ilya Shpitser, and Thomas Richardson. A potential outcomes calculus for identifying conditional path-specific effects, 2019. URL https://arxiv.org/abs/1903.03662.

Guilherme F. Marchezini, Anisio M. Lacerda, Gisele L. Pappa, Wagner Meira, Debora Miranda, Marco A. Romano-Silva, Danielle S. Costa, and Leandro Malloy Diniz. Counterfactual inference with latent variable and its application in mental health care. *Data Min. Knowl. Discov.*, 36(2):811–840, mar 2022. ISSN 1384-5810. doi: 10.1007/s10618-021-00818-9. URL https://doi.org/10.1007/s10618-021-00818-9.

Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12695–12705, 2021.

George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. 2019. doi: 10.48550/ARXIV.1912.02762. URL https://arxiv.org/abs/1912.02762.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.
*PyTorch: An Imperative Style, High-Performance Deep Learning Library*. Curran Associates Inc., Red Hook, NY, USA, 2019.

Nick Pawlowski, Daniel Coelho de Castro, and Ben Glocker. Deep structural causal models for tractable counterfactual inference. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 857–869. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf.

Judea Pearl. *Causality*. Cambridge University Press, 2 edition, 2009. doi: 10.1017/CBO9780511803161.

Judea Pearl. *Causal inference in statistics: a primer*. John Wiley & Sons Ltd, Chichester, West Sussex, UK, 2016. ISBN 1119186854.

J. Peters, D. Janzing, and B. Schölkopf. *Elements of Causal Inference - Foundations and Learning Algorithms*. Adaptive Computation and Machine Learning Series. The MIT Press, Cambridge, MA, USA, 2017.

Jacob C. Reinhold, Aaron Carass, and Jerry L. Prince. A structural causal model for mr images of multiple sclerosis. In *Medical Image Computing and Computer Assisted Intervention - MICCAI 2021*, pp. 782–792, Cham, 2021. Springer International Publishing.

Jonathan G. Richens, Rory Beard, and Daniel H. Thompson. Counterfactual harm, 2022. URL https://arxiv.org/abs/2204.12993.

Pedro Sanchez and Sotirios A. Tsaftaris. Diffusion causal models for counterfactual estimation. *CoRR*, abs/2202.10166, 2022. URL https://arxiv.org/abs/2202.10166.

Bernhard Schölkopf. Causality for machine learning, 2019. URL https://arxiv.org/abs/1911.10500.

Jin Tian and Judea Pearl. Causal discovery from changes, 2013. URL https://arxiv.org/abs/1301.2312.

Brian Loeber Trippe and Richard E. Turner. Conditional density estimation with bayesian normalising flows. *arXiv: Machine Learning*, 2018.

Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, and Frank Wood. An introduction to probabilistic programming, 2018. URL https://arxiv.org/abs/1809.10756.

Rongguang Wang, Pratik Chaudhari, and Christos Davatzikos. Harmonization with flow-based causal inference. In *Medical Image Computing and Computer Assisted Intervention - MICCAI 2021*, pp. 181–190, Cham, 2021. Springer International Publishing. ISBN 978-3-030-87199-4.

Zhongqi Yue, Tan Wang, Hanwang Zhang, Qianru Sun, and Xiansheng Hua. Counterfactual zero-shot and open-set visual recognition. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15399–15409, 2021.

Cheng Zhang, Kun Zhang, and Yingzhen Li. A causal view on robustness of neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 289–301. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Paper.pdf.

## A Normalizing Flows

Normalizing flows learn complex probability distributions of real data using a sequence of diffeomorphic transformations from simpler base distributions with the same dimensionality. For an observed variable $X_i$, diffeomorphic transformations $g^i_1, g^i_2, \dots, g^i_{i_K}$ and a base variable $\epsilon_i \sim \mathbb{P}(\epsilon_i)$ such that $X_i = (g^i_{i_K} \circ \dots \circ g^i_2 \circ g^i_1)(\epsilon_i) = f_i(\epsilon_i)$, the target density $\mathbb{P}(x_i)$ can be calculated as

$$\mathbb{P}(x_{i})=\mathbb{P}(\epsilon_{i})\,\big|\det\nabla f_{i}(\epsilon_{i})\big|^{-1},$$

evaluated at $\epsilon_i = f_i^{-1}(x_i)$. Assuming $g^i_0$ to be the identity function,

$$\begin{aligned}
\log\mathbb{P}(x_{i}) &= \log\mathbb{P}(\epsilon_{i})+\log\big|\det\nabla f_{i}(\epsilon_{i})\big|^{-1} \\
&= \log\mathbb{P}(\epsilon_{i})+\log\Big|\det\prod_{j=1}^{i_{K}}\nabla g_{j}^{i}\,\big|_{(g_{j-1}^{i}\circ\dots\circ g_{2}^{i}\circ g_{1}^{i})(\epsilon_{i})}\Big|^{-1} \\
&= \log\mathbb{P}(\epsilon_{i})-\sum_{j=1}^{i_{K}}\log\Big|\det\nabla g_{j}^{i}\,\big|_{(g_{j-1}^{i}\circ\dots\circ g_{2}^{i}\circ g_{1}^{i})(\epsilon_{i})}\Big|.
\end{aligned}$$

As the exact log-likelihood of the input data becomes tractable, the loss function is simply the negative log-likelihood, and the model explicitly learns the data distribution. Moreover, it is possible to make flows as expressive as needed. In particular, for any pair of well-behaved distributions $\mathbb{P}(x_i)$ and $\mathbb{P}(\epsilon_i)$, there exists a diffeomorphism $f_i$ that can turn $\mathbb{P}(\epsilon_i)$ into $\mathbb{P}(x_i)$ (Papamakarios et al., 2019). Trippe & Turner (2018) have extended normalizing flows to conditional densities by parametrising the transformation as $x_i = f_i(\epsilon_i; \mathbf{pa}_i)$. Note that invertibility is assumed in the first argument.
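As a concrete illustration of the change-of-variables computation above, the following is a minimal one-dimensional sketch in plain PyTorch (the experiments use Pyro's transform classes instead; the two hand-coded flows here, an exponential map followed by an affine map, are ours and purely illustrative):

```python
import torch

def flow_logp(x, transforms, base_logp):
    """log P(x) = log P(eps) - sum_j log|det grad g_j|, with eps = f^{-1}(x).
    Each transform is a pair (inverse, log_det_grad), where log_det_grad is
    evaluated at the transform's input, i.e., at (g_{j-1} o ... o g_1)(eps)."""
    log_det_sum = torch.zeros_like(x)
    e = x
    for inv, log_det in reversed(transforms):
        e = inv(e)                               # pull back through g_j
        log_det_sum = log_det_sum + log_det(e)   # log|det grad g_j| at its input
    return base_logp(e) - log_det_sum

# Example: f = affine(a=2, b=1) o exp, with a standard normal base density.
a, b = 2.0, 1.0
transforms = [
    (lambda x: torch.log(x), lambda e: e),  # exp flow: x = exp(e), log|det| = e
    (lambda x: (x - b) / a,
     lambda e: torch.log(torch.tensor(a)) * torch.ones_like(e)),  # affine flow
]
normal = torch.distributions.Normal(0.0, 1.0)
x = torch.tensor([3.0])
print(flow_logp(x, transforms, normal.log_prob))
# For x = 3: eps = log((3-1)/2) = 0, so log P = log N(0) - log 2 ~ -1.612
```

Negating this quantity and averaging over a batch gives exactly the negative log-likelihood loss described above.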
## B A Deeper Look

## B.1 Semi-hard intervention

Consider the following SCM:

$$\begin{aligned}
X &= f_{X}(\epsilon_{X}) = \epsilon_{X}, & \epsilon_{X} &\sim \mathcal{N}(0,1), \\
Y &= f_{Y}(X;\epsilon_{Y}) = X^{2}+\epsilon_{Y}, & \epsilon_{Y} &\sim \mathcal{N}(0,1), \\
Z &= f_{Z}(Y,X;\epsilon_{Z}) = X+Y^{2}+\epsilon_{Z}, & \epsilon_{Z} &\sim \mathcal{N}(0,1).
\end{aligned}$$

Consider the mechanism change $do\big(X = \tilde{f}_X(\epsilon_X) = \epsilon_X + 1,\ Y = \tilde{f}_Y(X; \epsilon_Y) = X + \epsilon_Y + 1\big)$. Now the fundamental question is: in the intervention $Y \leftarrow \tilde{f}_Y(X; \epsilon_Y) = X + \epsilon_Y + 1$, which structural equation of $X$ should we consider, $f_X$ or $\tilde{f}_X$? If $\tilde{f}_X$ is taken into account, then it can be interpreted as a standard soft intervention. This mechanism change may be seen as a combination of sequential interventions. If we consider $f_X$, then we are disregarding the intervention on its parent, i.e., $do\big(X = \tilde{f}_X(\epsilon_X) = \epsilon_X + 1\big)$. In general, we disregard interventions on ancestors. We can think of this mechanism change as a combination of simultaneous interventions. To emphasize which structural assignment of $X$ has been taken into account when we intervene on $Y$, we write it as $Y \leftarrow \tilde{f}_Y(\widetilde{\mathbf{Pa}}_Y, \epsilon_Y)$ and $Y \leftarrow \tilde{f}_Y(\mathbf{Pa}_Y, \epsilon_Y)$ for $\tilde{f}_X$ and $f_X$, respectively. A semi-hard\semi-soft intervention on a variable $X_t$ is just a soft intervention with $\tilde{f} = h(f(\mathbf{Pa}_t, \epsilon_t))$, disregarding interventions on $X_t$'s ancestors. For example, $Y \leftarrow Y^2 + 2$ in the given SCM.

## B.2 What Pre-Abduction Is Doing? Intervene With Observation?

In an SCM $\mathcal{C}$ with the graph $G$,

$$X_{i}(\epsilon)=f_{i}(\mathbf{pa}_{i}^{G},\epsilon_{i}),$$

where $X_i(\epsilon)$ expresses $X_i$'s dependency on the exogenous noises $\epsilon$ only. The functional form of $X_i(\epsilon)$ can be obtained by substituting variables with their structural assignments, following a reverse topological order in $G$, starting from $f_i(\mathbf{pa}^G_i, \epsilon_i)$. From the computational perspective, the difference between $X_i(\epsilon)$ and $f_i(\mathbf{pa}^G_i, \epsilon_i)$ is what you need to know for computing $X_i$. Further incorporating a projection map $\epsilon_i = \pi_i(\epsilon)$, we can write it as $X_i(\epsilon) = f_i(\mathbf{pa}^G_i, \epsilon_i) = f_i(\mathbf{pa}^G_i, \pi_i(\epsilon))$.

Let $\epsilon$ be a situation that leads to $x^{obs}$. Note that, $\forall \epsilon' \in \pi_i^{-1}(\epsilon_i)$, the following statement may not hold:

$$X_i(\epsilon') = f_i(\mathbf{pa}^G_i, \epsilon_i) = f_i(\mathbf{pa}^{G;obs}_i, \epsilon_i) = f_i(\mathbf{pa}^{G;obs}_i, \pi_i(\epsilon')). \qquad (6)$$

Why do we even want such a thing to hold true?
Because then it does not matter whether we know $\epsilon_j$, $j \neq i$, or not. This can be best understood with the following example.

Example 2. *Consider the following SCM,*

$$\begin{aligned}
X &= \epsilon_{X}^{2} \\
Y &= (X-1)^{2}+\epsilon_{Y}^{2}
\end{aligned}$$

*and the observation $(x^{obs}, y^{obs}) = (1, 0)$. This observation could have arrived in the situations $\epsilon = (\epsilon_X, \epsilon_Y) \in \{(1, 0), (-1, 0)\}$. $\pi_Y^{-1}(\{0\}) = \mathbb{R} \times \{0\}$. Let $(r, 0) \in \pi_Y^{-1}(\{0\})$ and $r^2 \neq 1$. Then $Y((r, 0)) = f_Y(x(r), 0) = (r^2 - 1)^2 \neq 0 = f_Y(x = x^{obs}, \pi_Y(r, 0))$.*

The expression in (6) holds $\forall \epsilon' \in \pi_i^{-1}(\epsilon_i)$ if we intervene on $\mathbf{Pa}^G_i$ with $\mathbf{pa}^{G;obs}_i$. In particular, this intervention induces the graph $\tilde{G}$, and $\forall \epsilon' \in \pi_i^{-1}(\epsilon_i)$,

$$X_i(\epsilon') = f_i(\mathbf{pa}^{\tilde{G}}_i, \epsilon_i) = f_i(\mathbf{pa}^{\tilde{G};obs}_i, \epsilon_i) = f_i(\mathbf{pa}^{\tilde{G};obs}_i, \pi_i(\epsilon')).$$

For any given intervention set $I \subseteq X \setminus X_i$, $X_i$ can be expressed as

$$X_{i}(\epsilon)=g_{i}\Big(I\cap\mathbf{An}_{i}^{\tilde{G}},\ \mathbf{B},\ \pi_{I}(\epsilon)\Big),$$

where $\mathbf{B} \subseteq \mathbf{An}^{\tilde{G}}_i \setminus (De_I \cup I)$. If $I = \emptyset$, then $\tilde{G} = G$, $g_i = f_i$, $\mathbf{B} = \mathbf{Pa}^G_i$, $\pi_I = \pi_i$. Such an expression can be obtained by traversing the directed paths from $I$ to $X_i$ in $\tilde{G}$ in reverse and replacing the visited nodes with their structural assignments. Consider the intervention $I = A$ with index set $A$. Counterfactual inference with the pre-abduction step comes with an inherent intervention: $do\big(An_i \setminus (De_A \cup X_A) = (an_i \setminus (de_A \cup x_A))^{obs}\big)$. For the sake of simplicity, we refer to this as $do^*_A$.

## C Synthetic Data Experiment

## C.1 Goodness of abduction (specific to one particular seed)

![17_image_1.png](17_image_1.png) ![17_image_0.png](17_image_0.png) ![17_image_2.png](17_image_2.png)

Figure 7: KDE plots of the true exogenous noise data ($\epsilon_Y$, $\epsilon_1$, $\epsilon_2$) are in red. KDE plots of the exogenous noises estimated by the partial and the full model are in black and green, respectively.

![17_image_3.png](17_image_3.png) ![17_image_4.png](17_image_4.png) ![17_image_5.png](17_image_5.png) ![17_image_6.png](17_image_6.png)

Figure 8: KDE plots of the true exogenous noise data ($\epsilon_3$, $\epsilon_4$, $\epsilon_5$, $\epsilon_6$) are in red. KDE plots of the exogenous noises estimated by the full model are in green.

The full model abducts all the noise variables. On the other hand, the partial model abducts only $\epsilon_Y$, $\epsilon_1$, and $\epsilon_2$. Here, we compare both models in terms of their ability to infer $\epsilon_Y$, $\epsilon_1$, and $\epsilon_2$ in figure 7. For the sake of completeness, the full model's ability to infer $\epsilon_3$, $\epsilon_4$, $\epsilon_5$, and $\epsilon_6$ is given in figure 8.

## C.2 Sampling abilities of the partial and the full model (specific to one particular seed)

![17_image_7.png](17_image_7.png) ![17_image_8.png](17_image_8.png)

Figure 9: KDE plots of the observed data ($Y$, $X_1$, and $X_2$) are in red. KDE plots of the generated samples (1000 points) from the full and the partial model are in black and green, respectively.

As we do not estimate flows for $X_3$, $X_4$, $X_5$, and $X_6$ in the partial model, we cannot sample every variable of the SCM using the partial model. Given $X_3$, $X_4$, $X_5$, and $X_6$, we can sample only $X_1$, $X_2$, and $Y$ using the partial model. The full model does not have such limitations. The sampling abilities of both models for $X_1$, $X_2$ and $Y$ are depicted in figure 9. The sampling ability of the full model for $X_3$, $X_4$, $X_5$, and $X_6$ is given in figure 10. Note that, if we want to sample a particular variable of the SCM, we can take care of that variable in the pre-abduction step.
For example, if we do not want to lose the ability to sample $X_4$ from the SCM in figure 2b, we will estimate flows for $X_3$ and $X_4$ in addition to $X_1$, $X_2$, and $Y$.

![18_image_1.png](18_image_1.png) ![18_image_0.png](18_image_0.png) ![18_image_2.png](18_image_2.png)

Figure 10: KDE plots of the observed data ($X_3$, $X_4$, $X_5$ and $X_6$) are in red. KDE plots of the generated samples (1000 points) from the full model are in green. The partial model is not able to generate samples for these variables.

## C.3 Average Mean Squared Error In The Estimation Of Counterfactuals On Unseen Data

We generate 20000 datapoints from the same SCM for ten different seeds. These datapoints have not been used in training and validation. We estimate counterfactuals on these points using the trained models. We intervene on $X_5$ with 200 different values7 uniformly sampled from -30 to 30. Average (over ten different seeds) mean squared errors in counterfactual estimation for each model for the 200 different intervention values are depicted in figure 11.

![18_image_3.png](18_image_3.png)

Figure 11: Average (over ten different seeds) mean squared errors in estimating counterfactual values of $Y$. The x-axis represents the values we intervene on $X_5$ with. Black circles are average errors in the partial model. Green dots are average errors in the full model.

7These 200 values remain the same across ten different seeds.

## D German Credit Dataset

Table 2: Best validation negative log-likelihood and training time for each model. The four middle columns report the per-variable negative log-likelihoods.

| Flow combination | Flow order | Model   | Age    | Sex    | Amount | Duration | Training time (min) |
|------------------|------------|---------|--------|--------|--------|----------|---------------------|
| Affine           | linear     | partial | -      | 0.6519 | 9.0625 | 3.5161   | 1.63                |
| Affine           | linear     | full    | 3.8448 | 0.6519 | 8.9190 | 3.3912   | 1.98                |
| Affine           | quadratic  | partial | -      | 0.6520 | 9.0989 | 3.3936   | 1.58                |
| Affine           | quadratic  | full    | 3.8492 | 0.6519 | 8.9196 | 3.4403   | 1.93                |
| Spline           | linear     | partial | -      | 0.6531 | 8.8849 | 3.4650   | 1.92                |
| Spline           | linear     | full    | 3.8461 | 0.6520 | 8.9240 | 3.5144   | 2.29                |
| Spline           | quadratic  | partial | -      | 0.6545 | 8.8973 | 3.3828   | 1.78                |
| Spline           | quadratic  | full    | 3.8453 | 0.6519 | 8.8776 | 3.4565   | 2.15                |
| Autoregressive   | linear     | partial | -      | 0.6519 | 8.9177 | 3.3728   | 1.95                |
| Autoregressive   | linear     | full    | 3.8481 | 0.6519 | 8.9161 | 3.4557   | 2.31                |
| Autoregressive   | quadratic  | partial | -      | 0.6518 | 8.9095 | 3.4662   | 1.81                |
| Autoregressive   | quadratic  | full    | 3.8454 | 0.6519 | 8.8820 | 3.4468   | 2.19                |

![19_image_0.png](19_image_0.png)

Figure 12: Flows' combination: spline, flow-order: linear

![20_image_0.png](20_image_0.png)

Figure 13: Flows' combination: affine, flow-order: quadratic

![20_image_1.png](20_image_1.png)

Figure 14: Flows' combination: autoregressive, flow-order: linear

![21_image_0.png](21_image_0.png)

Figure 15: Flows' combination: autoregressive, flow-order: quadratic

![21_image_1.png](21_image_1.png)

Figure 16: Flows' combination: spline, flow-order: quadratic
Review 1:
Summary: The main observation of this paper is that for many counterfactual queries, we don't need to perform the abduction step (inference over the exogenous noise variables), so given a query, we can just do inference over the necessary variables rather than the full set of noise variables.
Strengths and Weaknesses: Strengths:
* The observation that not all noise variables are relevant to the counterfactual query makes sense.
* I liked example 1 which is used to support this observation. Counterfactuals are not well-understood by a lot of the machine learning community, so a series of examples of queries and the corresponding inference steps serves a useful pedagogical role.
Weaknesses:
* While I know I'm not meant to be reviewing for impact - I had a really hard time thinking of settings where this idea would be used. If you're using a flow based model to estimate your SCM, the abduction step is just a forward pass through the inverse model, so this saves you having to query a couple of the outputs, but surely this is a negligible computational saving?
* It's not clear what semi-soft / semi-hard interventions add - they don't seem to be treated differently in the theory?
* Given that the main claim is computational efficiency, you should be reporting the computational saving - e.g. what is the difference in wall-clock time for inference for the two approaches across a large number of seeds?
* The differences between the full model and the partial model in the experiments were mostly as expected (they're mathematically equivalent, so we should not expect big differences beyond finite sample error), with the exception of figure 3b. I didn't understand why the full model had much larger errors. Is this just an unlucky seed?
* The synthetic experiments should have been run multiple times with different seeds to report standard errors.
Requested Changes:
* Can you give an example of a task that is not feasible unless you restrict the set of variables for the abduction step? Or at least show a significant computational saving empirically?
* Better experiments mentioned above - error bars, report computational costs, etc.
* This paper needs a serious proof read for grammatical errors and typos, etc.. Normally I don't make a big deal out of typos, grammar, etc., but in this paper some of the sections are relatively well written, while others are full of typos and grammar errors. This suggests that the authors just didn't bother to do a proper proof read before submitting - which is very sloppy: the reviewing process is not meant as a substitute for proof reading. Here's the errors that I wrote down (a subset of all errors):
- Page 2 - "mayn't" -> "may not"
- Page 4 - "one could at least" -> "One could at least"
- Page 2 - "and semi-soft(semi-hard)" -> "and semi-soft (semi-hard)" [missing space]
- Page 6 - "In-fact, for the sake of keeping things simple, we take resort of this for rest" -> "In fact" and reword "we take resort of this for rest..."
- Page 7 - reword Theorem 2
Broader Impact Concerns: No concerns.

==================================================

Review 2:
Summary: Counterfactual inference is a fundamental problem in causality. Recently, some related studies have been proposed towards it, following the three-step framework (abduction, action, prediction) with some novel deep learning techniques. In the abduction step, the previous methods infer all noises, which comes with high computational costs. In this paper, the authors propose that it is not necessary to infer all noises.
According to the causal graph, which is known in advance, they present a method to infer the part of the noises that is sufficient for achieving the counterfactual inference.
Strengths and Weaknesses: Advantage: The deep learning techniques introduced to counterfactual inference are favored.
Disadvantage: 1. The contribution is limited. 2. The writing can be improved largely. Although the idea is simple and there are not many new technical contributions, the paper is very hard to read because there are so many unclear and distracting expressions. I show them in detail in the next part. Due to the unsatisfactory writing, it is hard for me to evaluate the proposed method with high confidence or present some technical suggestions.
Requested Changes: I cannot give detailed technical suggestions right now, because it is hard for me to evaluate the method due to the unsatisfactory writing. For example, the main proposed process including the four steps is shown on Page 8, but I am not sure how the authors define **essential exogenous noises** and how to find the essential exogenous noises. And the **projection operator depends on** $do(A\leftarrow a)$ is also ambiguous. I can guess their implications, but I cannot be very confident. I suggest the authors revise the writing carefully at first. I first present the questions which I think the authors should address in the paper. Then I give some suggestions.
Questions:
1. What are the strict definitions of essential exogenous noises and of the projection operator that depends on $do(A\leftarrow a)$?
2. "Theorem 4 allows us to intervene on the variables outside..." What is the benefit of intervening on the variables outside...? I guess that the authors want to say that when we intervene on some variables, the noise of these variables can be ignored. Thus we hope to have an expression with more variables intervened, in which way we do not need to infer many noises. Is that right?
3. Could the authors explain the difference between Thm.1 and Thm.2? I think they can be merged.
4. Section 2.2.1: Consider the generating assignment $x=f(\epsilon)$; if $f$ is an invertible function, does it necessarily mean that the distribution $P(\epsilon|X=x^{obs})$ is identifiable?
5. What does it mean by $2$ in "In a causal model with joint distribution having joint density satisfying 2"?
6. What does it mean by "Standard tools of the SCM framework don't inherently restrict intervention. one could at least in theory intervene unconditionally on any subset of variables to perform counterfactual analysis."? Even if we have the interventional distribution, it is not necessarily the case that the counterfactual is identifiable, as shown by Peters et. al 2017.
7. What is the connection between Section 3 and Section 5? Or what role does the semi-hard intervention play in this paper?
Suggestions:
1. It is not proper to say "This paper tackles problem of identifying exogenous noises that must be abducted for counterfactual inference.". Are there any proofs to show that these exogenous noises **must be abducted** for counterfactual inference? As far as I see, if we infer these noises, we can achieve the counterfactual inference. But I am not sure whether the counterfactual inference is impossible if we do not infer these noises.
2. It is not quite proper to have no introduction to normalizing flows but just say a sentence "see Papamakarios et al. (2019)."
3. The expression of Thm.2 is not proper in an academic paper. An expression such as "$X_j$ is affected in 'counterfactual'" is too sloppy.
4.
On Page 3, it seems that there is a typo. ' Read: "The value of $X_i$ in situation $\epsilon$, had $X_j$ been $x_j$" ' or ' Read: "The value of $X_i$ in situation $\epsilon$, had $X_j$ been $x_j'$" '.
5. The sentence "In general, it is not immediately clear how to design effective experimental procedures for evaluating counterfactuals, or how to compute them from observational data" is not quite exact. It should not be "not immediately clear". The counterfactual is not necessarily identifiable given both the observational distribution and the interventional distribution in general, thus there should not exist a method to design effective experimental procedures for evaluating counterfactuals or to compute them from observational data for general cases. Some examples indicating the unidentifiability are given in Peters et. al 2017.
6. The definitions of unit-level and population-level counterfactuals seem to be missing.
7. Caption of Fig.1: causal graph -> causal graphs, and a missing full stop.
8. I suggest the authors change the title of Section 3. In this part, the authors only introduce the definition of semi-soft intervention. The title could be more clear.
9. The definition of **path** in Section 4 seems wrong. It seems that any of $X_{i_1},\cdots,X_{i_{m-1}}$ is adjacent to $X_{i_m}$.
10. "Every SCM entails a unique joint distribution over the variables $X = (X1, ...,Xp)$ such that relationships in (1) hold true.": "such that relationships in (1) hold true" seems to be redundant and not exact here. There is an SCM at first, then there is observational data based on the distribution of noise.
11. In the last paragraph of the introduction: "our work shows that it mayn't be necessary to infer all the noise variables in the SCM and identifies exogenous noise variables that we must infer...": what do the authors want to express by "that we must infer"? It seems to have the same implication as "it mayn't be necessary to infer all...".
Broader Impact Concerns: No.

==================================================

Review 3:
Summary: This paper studies the evaluation of counterfactual probabilities (i.e., counterfactual queries) in a fixed structural causal model (SCM), provided with its complete model parametrization. Specifically, the underlying SCM is a Markovian model that consists of observed variables (i.e., the query of interest) and unobserved variables (i.e., exogenous noises); no unobserved noise could affect multiple (more than one) observed variables simultaneously. That is, there is no unobserved confounder in the system. The authors further assume that the structural function determining the values of every observed variable is invertible, parameterized as a normalizing flow. Existing counterfactual evaluation methods in Markovian models with invertible structural functions generally follow Pearl's three-step algorithm involving abduction, action, and prediction. All unobserved variables (exogenous) have to be abducted to update the posterior distribution given the observed evidence. This step is typically computationally heavy. To address this issue, this paper shows that it may not be necessary to infer all unobserved variables in the SCM. They introduce a graphical criterion to determine a sufficient subset of unobserved variables for the abduction so that the counterfactual query could be properly evaluated in a fully specified Markovian model.
In principle, this novel pre-abduction procedure could improve the computational efficiency of existing counterfactual evaluation methods using normalizing flows.
Strengths and Weaknesses: #### **Strength.** The proposed method is intuitive, and the proofs seem sound. Generally, one should abduct the exogenous noise associated with all the ancestor nodes of the potential outcome, which is the target of the query. It is verifiable that such a collection of exogenous noises is sufficient for computing the target counterfactual probability. It is only a subset of all possible unobserved variables in the system. #### **Weakness.** Overall, I think this paper proposes a practical technique to improve the computational efficiency of counterfactual evaluation methods based on normalizing flows. However, I do have some concerns, summarized below. 1. I agree with the proposed method's general idea and with most of the technical details. However, the proposed method's implementation is unclear and deserves further explanation. For instance, in the pseudo-code on Page 8, Step 1 asks one to identify all exogenous noises that are "essential" to answer a query $Q$. However, the concept of "essential exogenous noises" is not well defined. I can see that Theorems 1-4 are closely related to this definition. It would still be appreciated if the authors could provide a precise and formal definition. 2. In their initial work, Pawlowski et al. (2020) focused on Markovian models where UCs do not exist. It is somewhat surprising that the literature has not moved beyond the assumption of no UCs to consider semi-Markovian models where UCs generally exist. Note that the presence of UCs is arguably one of the most crucial challenges in modern causal inference. It could be great to see if one could apply the computational framework of normalizing flows to this more general class of causal models. Could the authors comment on this? What are the main challenges in applying counterfactual evaluation with invertible functions in semi-Markovian models? 3. The primary motivation of this paper is to reduce the computational cost when evaluating counterfactual queries from a fully specified SCM. However, in the experiments, there is no baseline algorithm to compare the proposed method with. It would be interesting to know whether the proposed method reduces the computational cost of counterfactual evaluation methods like (Pawlowski et al. 2020); if so, by how much? 4. This might be a minor one. The organization of the paper could be improved. The idea of semi-soft\semi-hard intervention is interesting. It is similar to the counterfactual randomization introduced in (Forney, Bareinboim & Pearl, 2015). However, it seems orthogonal to the central challenge studied in this paper: the selection of exogenous variables for the abduction. For clarity, I would recommend first considering the counterfactual evaluation for a standard atomic query (e.g., $P(Y_{x=0}, Y_{x=1})$), and then introducing semi-soft\semi-hard interventions as a generalization.
Requested Changes: 1. Provide a precise definition of "essential exogenous noise" in the pre-abduction step. 2. Provide additional experiments comparing against baseline algorithms that do not apply the proposed pre-abduction step. Report performance metrics for all algorithms, e.g., the training time. 3.
(Optional) Reorganize the paper: start with the evaluation of atomic counterfactual queries; then introduce semi-soft\semi-hard interventions; and finally, the general counterfactual evaluation with soft and semi-soft interventions. However, I understand that the authors might have challenges with the page limit. This request is optional.
Broader Impact Concerns: The paper is mostly theoretical. Its long-term societal impact is not immediate to see.

==================================================

Metareview:
Recommendation: Accept with minor revision
Comment: This paper studies a nice improvement on Pearl's three-step algorithm for evaluating counterfactual queries in a fully specified causal model. This improvement reduces the total number of exogenous variables to be updated during the abduction phase, thus reducing the total training time. Simulation results show that the proposed method is able to reduce the training time by half compared to the standard approach, which updates the full model. Therefore, this paper provides a compact solution that improves the computational efficiency of a popular algorithm in causal inference. In general, the proposed solution is theoretically sound and demonstrates good properties. However, there are some important aspects that can be improved. In Definition 2, there is not a strict definition for "Q can be answered by inferring \bar{\epsilon} only". The authors should emphasize in Def. 2 that "Q can be answered by inferring \bar{\epsilon} only by the three-step algorithm". Without the limitation "by the three-step algorithm", the proof of Theorem 5 seems incorrect. One important question here is: is it possible to identify the counterfactual by other methods instead of the three-step algorithm? I cannot see the impossibility from the current proofs. It seems possible that, although we cannot infer the noises, we can identify the counterfactual queries by other methods, in which case the set may fail to be essential to Q. Hence, I suggest the authors write Definition 2 more carefully, as this can influence the correctness of their result. Thus, I would like to recommend accepting the submission with minor revision.

==================================================
# Chaos Theory And Adversarial Robustness

Anonymous authors Paper under double-blind review

## Abstract

Neural networks, being susceptible to adversarial attacks, should face a strict level of scrutiny before being deployed in critical or adversarial applications. This paper uses ideas from Chaos Theory to explain, analyze, and quantify the degree to which neural networks are susceptible to or robust against adversarial attacks. To this end, we present a new metric, the "susceptibility ratio," given by $\hat{\Psi}(h, \theta)$, which captures how greatly a model's output will be changed by perturbations to a given input. Our results show that susceptibility to attack grows significantly with the depth of the model, which has safety implications for the design of neural networks for production environments. We provide experimental evidence of the relationship between $\hat{\Psi}$ and the post-attack accuracy of classification models, as well as a discussion of its application to tasks lacking hard decision boundaries. We also demonstrate how to quickly and easily approximate the certified robustness radii for extremely large models, which until now has been computationally infeasible to calculate directly.

## 1 Introduction

The current state of Machine Learning research presents neural networks as black boxes due to the high dimensionality of their parameter space, which means that understanding what is happening inside of a model in terms of domain expertise is highly nontrivial, when it is even possible. However, the actual mechanics by which neural networks operate - the composition of multiple nonlinear transforms, with parameters optimized by a gradient method - were human-designed, and as such are well understood. In this paper, we will apply this understanding, via analogy to Chaos Theory, to the problem of explaining and measuring the susceptibility of neural networks to adversarial methods.

It is well-known that neural networks can be adversarially attacked, producing obviously incorrect outputs as a result of making extremely small perturbations to the input (Goodfellow et al., 2014; Szegedy et al., 2013). Prior work, like Shao et al. (2021); Wang et al. (2018) and Carmon et al. (2019), discusses "adversarial robustness" in terms of metrics like accuracy after being attacked or the success rates of attacks, which can limit the discussion entirely to models with hard decision boundaries like classifiers, ignoring tasks like segmentation or generative modeling (He et al., 2018). Other work, like Li et al. (2020) and Weber et al. (2020), develops "certification radii," which can be used to guarantee that a given input cannot be misclassified by a model without an adversarial perturbation with a size exceeding that radius. However, calculating these radii is computationally onerous when it is even possible, and is again limited only to models with hard decision boundaries. Gowal et al. (2021) provides a brief study of the effects of changes in model scale, but admits that there has been a dearth of experiments that vary the depth and width of models in the context of adversarial robustness, which this paper provides. Huang et al. (2022a) also studies the effects of architectural design decisions on robustness, and provides theoretical justification on the basis of deeper and wider models having a greater upper bound on the Lipschitz constant of the function represented by those models. Our own work's connection to the Lipschitz constant is discussed in Appendix C. Wu et al.
(2021a) studies the effects of model width on robustness, and specifically discusses how robust accuracy is closely related to the perturbation stability of the underlying model, with an additional connection to the local Lipschitzness of the represented function. Our experimental results contradict those found in these papers in a few places, namely as to the relationship between depth and robustness. Additionally, previous work is limited to studying advanced State-of-the-Art CNN architectures, which introduces a number of effects that are never accounted for during their ablations.

![1_image_0.png](1_image_0.png)

Figure 1: In a dynamical system, two trajectories with similar starting points may, over time, drift farther and farther away from one another, typically modeled as exponential growth in the distance between them. This growth characterizes a system as exhibiting "sensitive dependence," known colloquially as the "butterfly effect," where small changes in initial conditions eventually grow into very large changes in the eventual results.

Regarding the existence of adversarial attacks *ab origine*, Pedraza et al. (2020) and Prabhu et al. (2018) have explained this behaviour of neural networks on the basis that they are dynamical systems, and then use results from that analysis to try to classify adversarial inputs based on their Lyapunov exponents. However, this classification methodology rests on loose theoretical ground, as the Lyapunov exponents of a single input must be relative to those of similar inputs, and it is entirely possible to construct a scenario wherein an input does not become a more potent basis for further attack solely because it is itself adversarial.

In this work, we re-do these Chaos Theoretic analyses in order to understand, not particular inputs, but the neural networks themselves. We show that neural networks are dynamical systems, and then, continuing that analogy past where Pedraza et al. (2020) and Prabhu et al. (2018) leave off, investigate what neural-networks-as-dynamical-systems means for their susceptibility to attack, through a combination of analysis and experimentation. We develop this into a theory of adversarial susceptibility, introducing the "susceptibility ratio" as a measure of how effective attacks will be against a neural network, and show how to numerically approximate this value. Returning to the work in Li et al. (2020) and Weber et al. (2020), we use the susceptibility ratio to quickly and accurately estimate the certification radii of very large neural networks, aligning this paper with prior work.

## 2 Neural Networks As Dynamical Systems

We will now re-write the conventional feed-forward neural network formulation in the language of dynamical systems, in order to facilitate the transfer of the analysis of dynamical systems back to neural networks. To begin with, we first introduce the definition of a dynamical system, per standard literature (Alligood et al., 1998).

## 2.1 Dynamical Systems

In Chaos Theory, a dynamical system is defined as a tuple of three basic components, written in standard notation as $(T, X, \Phi)$. The first, $T$, referred to as "time," takes the form of a domain obeying time-like algebraic properties, namely associative addition. The second, $X$, is the state space. Depending on the system, elements of $X$ might describe the positions of a pendulum, the states of memory in a computer program, or the arrangements of particles in an enclosed volume, with $X$ being the space of all possibilities thereof.
The final component, Φ : T × X → X, is the "evolution function" of the system. When Φ is given a state xi,t ∈ X and a change in time ∆t, it returns xi,t+∆t, which is the new state of the system after ∆t time has elapsed. The xi,t notation will be explained in greater detail later. We will write this as

$$x_{i,t+\Delta t}=\Phi(\Delta t,x_{i,t})$$

In order for the system to stay well defined, Φ has to possess certain properties, namely a self-consistency of the evolution function over the domain T. A state that is progressed forward ∆ta in T by Φ and then progressed again ∆tb should yield the same state as one that is progressed ∆ta + ∆tb in a single operation:

$$\Phi\big(\Delta t_{b},\Phi(\Delta t_{a},x_{i,t})\big)=\Phi(\Delta t_{a}+\Delta t_{b},x_{i,t})$$

Relying partially on this self-consistency, we can take a "trajectory" of the initial state xi,0 over time, a set containing the elements (t, Φ(t, xi,0)) ∀t ∈ T. To clarify: because each element within X can be progressed through time by the injective and self-consistent function Φ, and therefore belongs to a given trajectory,1 it becomes both explanatory and efficient to denote every element in the same trajectory with the same subscript index i, and to differentiate between the elements in the same trajectory at different times with t. In order to simplify the notation, and following on from the notion that the evolution of state within a dynamical system over time is equivalent to the composition of multiple instances of the evolution function, we will write the elements of this trajectory as

$$\Phi(t,x_{i,0})=\Phi^{t}(x_{i})=x_{i,t}$$

with an additional simplification of notation using xi = xi,0, omitting the subscript t when t = 0.

From these trajectories we may derive our notion of chaos, which concerns the relationship between trajectories with similar initial conditions. Consider xi and xi + δx, where δx is of limited magnitude, and may be contextualized as a subtle reorientation of the arms of a double pendulum prior to setting it into motion. We also require some notion of the distance between two elements of the state space, but we will assume that the space is a vector space equipped with a length or distance metric written as | · |, and proceed from there. For the initial condition, we may immediately take

$$|\Phi^{0}(x_{i})-\Phi^{0}(x_{i}+\delta x)|=|\delta x|$$

However, meaningful analysis only arises when we model the progression of this difference over time. In some systems, minor differences in the initial condition result in negligible effects, such as with the state of a damped oscillator; regardless of its initial position or velocity, it approaches the resting state as time progresses, and no further activity of significance occurs. However, in some systems, minor differences in the initial condition end up compounding on themselves, like the flaps of a butterfly's wings eventually resulting in a hurricane. Both of these can be approximately or heuristically modeled by an exponential function,

$$|\Phi^{t}(x_{i})-\Phi^{t}(x_{i}+\delta x)|\approx|\delta x|e^{\lambda t}$$

In each of these cases, the growing or shrinking differences between the trajectories are described by λ, also called the Lyapunov exponent. If λ < 0, these differences disappear over time, and the trajectories of two similar initial conditions will eventually align with one another.
However, if λ > 0, these differences increase over time, and the trajectories of two similar initial conditions will grow farther and farther apart, with their relationship becoming indistinguishable from that of two trajectories with wholly different initial conditions. This is called "sensitive dependence," and is the mark of a chaotic system.2 It must be noted, however, that the exponential nature of this growth is a shorthand model, with obvious limits, and is not fully descriptive of the underlying behavior.

1Multiple trajectories may contain the same elements. For example, two trajectories such that the state at t = 1 of the first is taken as the initial condition of the second. Similarly, in a system for which Φ is not bijective, two trajectories with different initial conditions may eventually reduce to the same state at the same time. This neither impedes our analysis nor invalidates our notation, with the caveat that neither i ̸= j nor ta ̸= tb guarantees that xi,ta ̸= xj,tb.

2This is closely related to the concept of entropy, as it appears in Statistical Mechanics, but further discussion of the topic is beyond the scope of this paper.

## 2.2 Neural Networks

Conventionally, a neural network is given a formulation along the following lines (Schmidhuber, 2015). It is denoted by a function h : Θ × X → Y , where Θ is the space of possible learned parameters, subdivided into the entries of multiplicative weight matrices Wl and additive bias vectors bl. X is the vector space of possible inputs, and Y is the vector space of possible outputs. Each of the L layers in the neural network is given by a matrix multiplication, a bias addition, and the application of a nonlinear activation function σ, with hidden states zi,l representing the intermediate values taken during the inference operation:

$$z_{i,0}:=x_{i}$$
$$z_{i,l+1}=\sigma(W_{l}z_{i,l}+b_{l})\;\;|\;\;W_{l},b_{l}\subset\theta$$
$$h(\theta;x_{i})=\hat{y}_{i}:=z_{i,L}$$

Without loss of generality, we may transcribe this formulation as a dynamical system by taking its components as analogues. The first is [L] = {0, 1, 2, . . . , L}, which here will be used to represent the current depth of the hidden state, from 0 for the initial condition up to L for the eventual output. Because it progresses forward during the inference operation, and is associative insofar as increases in depth are additive, [L] functions as an analogue for T. The second is Z, which is the vector space of all possible hidden states, and thus replaces X. The final component is g : [L] × Z → Z, which here we will write as

$$z_{i,l+1}=g(1,z_{i,l})=\sigma(W_{l}z_{i,l}+b_{l})$$

A further discussion of the function g is given in Appendix A. The generalization to g(∆l, zi,l) then follows from the same rule of composition applied to the dynamical systems, at least for integer values of ∆l, under the condition that it never leaves [L]. This allows us to replace Φ with g. We can also then re-write the notation along the lines of that for the dynamical systems

$$g(l,z_{i,0})=g^{l}(x_{i})$$

noting of course that we have defined zi,0 as xi. Thus, the neural network inference operation can be rewritten as the triplet ([L], Z, g), and mapped to the dynamical system formulation of (T, X, Φ). We can now start to discuss the trajectories of the hidden states of the neural network, and what happens when their inputs are changed slightly.
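To make this correspondence concrete, the following is a minimal sketch in PyTorch of a feed-forward network whose inference loop is written explicitly as repeated application of the evolution function g, recording the full trajectory of hidden states zi,l along the way. The class name, layer sizes, and choice of ReLU for σ are illustrative assumptions, not details fixed by the formulation above.

```python
import torch
import torch.nn as nn

class DynamicalMLP(nn.Module):
    """A feed-forward network viewed as the dynamical system ([L], Z, g):
    depth plays the role of time, hidden states play the state space, and
    one layer application g(1, z) = sigma(W_l z + b_l) plays Phi."""

    def __init__(self, dims=(784, 256, 256, 10)):  # illustrative sizes
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(dims[l], dims[l + 1]) for l in range(len(dims) - 1)
        )

    def g(self, l, z):
        # One step of the evolution function: depth l -> depth l + 1.
        return torch.relu(self.layers[l](z))

    def trajectory(self, x):
        # Returns [z_0, z_1, ..., z_L], the trajectory of the input x.
        states = [x]
        for l in range(len(self.layers)):
            states.append(self.g(l, states[-1]))
        return states

model = DynamicalMLP()
trajectory = model.trajectory(torch.randn(1, 784))
print([tuple(z.shape) for z in trajectory])
```

Differencing two such trajectories element-wise, one for xi and one for xi + δx, yields exactly the hidden state drift analyzed next.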
For the first hidden state, defined as the input, we can immediately say that

$$|g^{0}(x_{i})-g^{0}(x_{i}+\delta x)|=|\delta x|$$

and then by once again mapping to the dynamical systems perspective, we model the difference between the two trajectories at depth l with

$$|g^{l}(x_{i})-g^{l}(x_{i}+\delta x)|\approx|\delta x|e^{\lambda l}$$

While, as per the dynamical system, using an exponential model is typically the most illustrative despite the growth not necessarily being exponential, a basic theoretical justification for an exponential model is provided in Appendix B. Continuing, when the value of λ is greater than 0, we may call the neural network sensitive to its input, in precisely the same manner as a dynamical system is sensitive to its initial conditions. We may also say that, when the value of e^{λL} is very large, it being the ratio of the magnitude of the change of the output to the magnitude of the change in the input, δx becomes an adversarial perturbation. If this analogy holds, we should expect that when we adversarially attack a neural network, the difference between the two corresponding hidden states should grow as they progress through the model. This is our first experimental result.

As an aside, there is a tangential connection to be made between the Chaos Theoretic formulation of neural networks and Algorithmic Stability, like that discussed in Kearns & Ron (1997); Bousquet & Elisseeff (2000; 2002) and Hardt et al. (2016). However, while Algorithmic Stability also treats a notion of the effects of small changes in Machine Learning models, it does so from the perspective of changes being made to the learning problem itself, such as to the training dataset, and the resulting effects on the learned model, rather than the effects of small changes being made to individual inference inputs and their respective outputs once the model has already been produced.

## 3 Experimental Design

For our experiments, we used two different model architectures: ResNets (He et al., 2015), as per the default Torchvision implementation (Marcel & Rodriguez, 2010), and a basic custom CNN architecture in order to have finer-grained control over the depth and number of channels in the model. The ResNets were modified, and the custom models built, so as to allow for recording all of the hidden states during the inference operation. These models, unless specified that they were left untrained, were trained on the Food-101 dataset from Bossard et al. (2014) for 50 epochs with a batch size of 64 and a learning rate of 0.0001 with the Adam optimizer against cross entropy loss. The ResNet models used were ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152.

In the Torchvision ResNet class, models consist of operations named *conv1, bn1, relu, maxpool, layer1, layer2, layer3, layer4, avgpool*, and *fc*, with the first four representing a downsampling intake, then four more blocks of ResNet layers, and then a final operation that converts the rank 3 encoding tensor into a rank 1 class weight tensor. Hidden states are recorded at the input, after *conv1, layer1, layer2, layer3, layer4*, and at the output.

The custom models, specified with C and D, consist of D tuples of convolutional layers, batch normalization operations, and ReLU nonlinearities, with the first tuple having a downsampling convolution and a maxpool operation after the ReLU. Each of these convolutions, besides the first which takes in three channels, has C channels.
Finally, there is a 1 × 1 convolution, a channel-wise averaging, and then a single fully connected layer with 101 outputs, one for each class in the Food-101 dataset. Hidden states are recorded after every tuple, and also include the input and the output of the model. The first tuple approximates the downsampling intake of the ResNet models.

In order to better handle the high dimensionality and changes in scale of the inputs, outputs, and hidden states, rather than using the Euclidean L2 norm as the distance metric, we used a modified Euclidean distance

$$|{\vec{v}}|:={\sqrt{\frac{1}{\dim({\vec{v}})}\sum_{i}v_{i}^{2}}}$$

which will be applied for every instance of length and distance of and between hidden states, including attack radii.

Adversarial perturbations δxadv against a neural network h(θ; ·) of a given radius r for a given input xi were generated by using five steps of gradient ascent with a learning rate of 0.01, maximizing |h(θ; xi) − h(θ; xi + δxadv)| and projecting back to the hypersphere of radius r after every update. These attacks closely resemble those in Zhang et al. (2021); Wu et al. (2021b; 2022); Xie et al. (2021) and Shao et al. (2021), and their use of attacks with lp-norm decay metrics or boundaries. For comparison, random perturbations were also generated, by projecting randomly sampled Gaussian noise to the same hypersphere. In order to perform these experiments under optimal conditions, the inputs that were adversarially perturbed were selected only from the subset of the Food-101 testing set for which every single trained model was correct in its estimate of the Top-1 output class. A Jupyter Notebook implementing these training regimes and attacks will be made available alongside this manuscript, pending review.

## 4 Hidden State Drift

An example of the approximately exponential growth in the distance between the hidden state trajectories associated with normal and adversarially perturbed inputs hypothesized in section 2.2 for 32 inputs is shown in Figure 2. Between the initial perturbations, generated with a radius of 0.0001, and the outputs, the differences grew by a factor of ∼ 747×. Given that ResNet18 has 18 layers, using 747 ≈ e^{18λ}, we can calculate λ ≈ 0.368, an average measure of this drift per layer. However, the Lyapunov exponent for each layer is of less interest to an adversarial attacker or defender, with the actual value of interest being given by this new metric, ψ, the adversarial susceptibility for a particular input and attack, given by

$$\psi(h,\theta,x_{i},\delta x_{adv}):=e^{\lambda L}={\frac{|h(\theta;x_{i})-h(\theta;x_{i}+\delta x_{adv})|}{|\delta x_{adv}|}}\tag{1}$$

![5_image_0.png](5_image_0.png)

Figure 2: Example of hidden state drift while performing inference with the ResNet18 model. Note the logarithmic scaling on the y-axis.

![5_image_1.png](5_image_1.png)

Figure 3: Despite a change in the radius of the adversarial perturbation by three orders of magnitude, the value of ψ associated with those attacks remains relatively stable.

This is the "susceptibility ratio," the ratio of the change inflicted by a given adversarial perturbation to its original magnitude. If this is a meaningful metric by which to judge a neural network architecture, it should remain relatively stable despite changes in the radius of the adversarial attack. This is our second experimental result, demonstrated in Figure 3.
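The attack and the per-input susceptibility ratio translate directly into code. Below is a minimal PyTorch sketch of the procedure described above (five projected gradient-ascent steps, learning rate 0.01, projection to a hypersphere of radius r under the modified Euclidean norm); the helper names are our own, and h is assumed to be any callable model, as this is not the released notebook implementation.

```python
import torch

def mod_norm(v):
    # Modified Euclidean distance: RMS over all entries of v.
    return torch.sqrt((v ** 2).mean())

def attack(h, x, r, steps=5, lr=0.01):
    # Maximize |h(x) - h(x + dx)| over perturbations of radius r.
    dx = torch.randn_like(x)
    dx = dx * (r / mod_norm(dx))            # start on the hypersphere
    for _ in range(steps):
        dx = dx.detach().requires_grad_(True)
        drift = mod_norm(h(x) - h(x + dx))  # quantity to be maximized
        drift.backward()
        with torch.no_grad():
            dx = dx + lr * dx.grad          # gradient ascent step
            dx = dx * (r / mod_norm(dx))    # project back to radius r
    return dx.detach()

def psi(h, x, dx):
    # Equation (1): output drift divided by perturbation magnitude.
    with torch.no_grad():
        return (mod_norm(h(x) - h(x + dx)) / mod_norm(dx)).item()
```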
By sampling ψ over a number of inputs xi from a dataset D and a variety of attack radii rmin ≤ |δxadv| ≤ rmax and taking the geometric mean3, we can come to a single value, written as

$$\Psi(h,\theta)=e^{\mathbb{E}_{x_{i}\sim D,\;|\delta x_{adv}|\sim[r_{min},r_{max}]}\left[\ln\left(\psi(h,\theta,x_{i},\delta x_{adv})\right)\right]}\tag{2}$$

and approximated with Ψ̂(h, θ), giving the susceptibility ratio for the model as a whole. These values have been calculated for the trained ResNet models, and are given in Table 1. These experimental results are more in line with those of Cazenavette et al. (2020) and Huang et al. (2022b) than with the predictions that we will make in the next section, at which point we will begin using our custom model architectures to tease out the relationships between a neural network's architecture and its susceptibility ratio. A discussion of this metric's relationship to the Lipschitz constant is provided in Appendix C.

3In order to increase the numerical stability of the geometric mean calculation, we use $\sqrt[n]{\prod_{i=0}^{n} a_i} = e^{\frac{1}{n}\sum_{i=0}^{n}\ln a_i}$.

| | ResNet18 | ResNet34 | ResNet50 | ResNet101 | ResNet152 |
|-----------|----------|----------|----------|-----------|-----------|
| Ψ̂(h, θ) | 781.2 | 790.7 | 854.4 | 893.2 | 846.5 |

Table 1: Overall susceptibility ratio of trained ResNet models.

| Ψ̂(h, θ) | C = 32 | C = 64 | C = 128 | C = 256 |
|----------|---------|---------|---------|---------|
| D = 2 | 0.749 | 0.523 | 0.651 | 0.560 |
| D = 4 | 1.021 | 0.695 | 0.775 | 0.610 |
| D = 8 | 2.788 | 2.134 | 1.505 | 1.276 |
| D = 16 | 15.491 | 12.935 | 8.423 | 7.123 |
| D = 32 | 109.340 | 135.472 | 98.834 | 92.404 |
| D = 64 | 96.037 | 63.785 | 60.443 | 48.721 |

Table 2: Susceptibility ratio of randomly initialized convolutional models with custom architectures on inputs consisting of random noise.

## 5 Architectural Effects On Adversarial Susceptibility

Returning to the definition of ψ given in equation 1, we might model it as growing exponentially with L, the depth of the neural network. Yet, despite ResNet152 having more than eight times as many layers as ResNet18, its susceptibility is only marginally higher. This effect was explored to a greater experimental degree in Cazenavette et al. (2020) and Huang et al. (2022b), demonstrating a remarkable tendency towards robustness in residual model architectures. Interestingly, Huang et al. (2022b) found that deeper models were more robust than wider models, which runs counter to both the experimental and theoretical evidence provided here. Proceeding, this makes the use of an exponential model, at least to explain these experimental results, limited.

In order to explore this reasoning further in a more numerically ideal setting, we present our third experimental result, in Table 2, and replicated in Figure 4. Here, using randomly initialized, untrained models with custom architectures as described in the experimental methods section (3), having them perform inference on random inputs, and then adversarially attacking the same models on the same inputs, we can tease apart the relationship between model architecture and the resulting susceptibility ratio, for the case where both parameters and input dimensions are given as Gaussian noise.
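Concretely, the aggregation in equation 2 amounts to a log-mean-exp over sampled values of ψ, as in footnote 3. The following hypothetical routine, building on the attack and psi sketches above, mirrors the random-model, random-input setting just described; the log-uniform radius schedule and all model and input sizes are our own illustrative assumptions.

```python
import math
import torch

def estimate_susceptibility(h, inputs, r_min=1e-4, r_max=1e-1, n_radii=4):
    # Approximates equation (2): the geometric mean of psi over inputs and
    # radii, computed as exp(mean(log psi)) for numerical stability.
    log_psis = []
    for x in inputs:
        for k in range(n_radii):
            # log-uniform spacing of radii between r_min and r_max
            r = r_min * (r_max / r_min) ** (k / max(n_radii - 1, 1))
            dx = attack(h, x, r)
            log_psis.append(math.log(psi(h, x, dx)))
    return math.exp(sum(log_psis) / len(log_psis))

# A randomly initialized model attacked on Gaussian-noise inputs,
# mirroring the setting of Table 2.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(), torch.nn.LazyLinear(101),
)
inputs = [torch.randn(1, 3, 64, 64) for _ in range(8)]
print(estimate_susceptibility(model, inputs))
```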
We immediately find the approximately exponential relationship between the susceptibility ratio and the depth of the model that was expected based on equation 1. However, the slight dip upon moving from 32 to 64 layers is unexpected; while exploring its potential causes and implications is outside the scope of this paper, it may warrant further experimentation and analysis. Also of interest is the effect, or lack thereof, of increasing the number of channels in the neural network. While a quadratic increase in the number of parameters in the model might be expected to increase its susceptibility ratio, especially per the theoretical analysis in Huang et al. (2022a), no experiment that we performed yielded such a result. Some theoretical analysis and discussion is provided in Appendix B.

We repeated the susceptibility ratio measurement on the same model architectures, this time with trained parameters, again sampling the inputs from the Food-101 testing subset for which all models produced correct Top-1 class estimates. These results are in Table 3, and replicated in Figure 5. The largest resulting difference is the increase of susceptibility for every model by multiple orders of magnitude. Training the models and switching to an information-rich input domain has resulted in trained models being far more sensitive to attack. Yet, following earlier experiments, we can again see that the number of channels has minimal and unclear effects on the susceptibility ratio of the model, while the number of layers increases it significantly. However, for these experimental results, the relationship between the number of layers and the susceptibility has changed, more closely resembling logarithmic than exponential growth, and somewhat replicates the relationship found between depth and susceptibility among the trained ResNet models.

![7_image_0.png](7_image_0.png)

Figure 4: Graphical replication of Table 2; susceptibility ratios of models with random weights.

| Ψ̂(h, θ) | C = 32 | C = 64 | C = 128 | C = 256 |
|----------|----------|----------|----------|----------|
| D = 2 | 578.602 | 610.207 | 586.503 | 576.759 |
| D = 4 | 1470.399 | 1658.209 | 1631.561 | 1695.993 |
| D = 8 | 2144.418 | 2224.467 | 2536.745 | 2370.648 |
| D = 16 | 2401.381 | 2485.030 | 2846.418 | 2361.251 |
| D = 32 | 3162.568 | 3018.758 | 2987.640 | 3256.967 |
| D = 64 | 2045.765 | 2213.575 | 3103.335 | 2471.823 |

Table 3: Susceptibility ratios of trained custom convolutional models on inputs sampled from Food-101.

![7_image_1.png](7_image_1.png)

Figure 5: Graphical replication of Table 3; susceptibility ratios of trained models.

Interestingly, this is reasonably analogous to the testing accuracy of the models, for which increases in depth yield diminishing returns, and it may be theorized that both of these effects are due to changes in the encoding of information based on model architecture. However, it must be noted that the increase in susceptibility is greater than the increase in accuracy. Making models deeper makes them more vulnerable faster than it makes them more accurate, with additional costs in memory, runtime, and energy consumption.

## 6 Relationships To Other Metrics

## 6.1 Approximation Of Certified Robustness Radii

Weber et al. (2020) and Li et al. (2020) attempt to calculate what they refer to as "certified robustness radii." For a model with hard decision boundaries, e.g.
a top-1 classification model, its certified robustness radius is the largest value ϵh such that, for any input xi, the ultimate classification given by the model satisfies argmax_c h(θ; xi) = argmax_c h(θ; xi + δxadv) for all adversarial perturbations δxadv with radius smaller than ϵh. In their work, however, they state explicitly that these values are highly demanding to calculate for small models, and computationally infeasible for larger models.

However, using the susceptibility ratio for a model, one can quickly approximate this certified robustness radius for even very large models. It is simply the distance to the nearest decision boundary, divided by Ψ̂(h, θ). We demonstrate with an example: a five-class model outputs the following weights for a given input, ŷ = {2.1, 0.6, 0.1, −0.5, −1.1}. Thus, the nearest decision boundary occurs where the first and second classes become equal, at ŷ′ = {1.35, 1.35, 0.1, −0.5, −1.1}. The modified Euclidean distance between these two vectors is 0.4743. Suppose that this model has a susceptibility ratio of Ψ̂ = 25.0. Its certified robustness radius would then be estimated as ϵ̂h = 0.4743/25.0 ≈ 0.01897. One could then take the mean or minimum over these values for every input in a dataset, and a number could be produced for the model as a whole.

It must be noted that this will produce a substantial overestimate of the actual certified robustness radius, as the susceptibility ratio is a geometric mean rather than a supremum, and is produced via experimental approximation rather than a numerical solution. However, this "approximated robustness radius" is also useful in practice, as it provides a much larger radius wherein the associated model is highly probably immune from attack, rather than an extremely small radius wherein the associated model is provably immune from attack.

Finally, an overall criticism has to be made regarding the use of these certified robustness radii in general. Consider two models used for a binary classification problem, inferring on the same input, which has been perturbed by adversarial attacks of equal radii. The first model, moving from the vanilla to the adversarial input, changes its output from {0.9, 0.1} to {0.6, 0.4}. The second model, under the same conditions, changes its output from {0.55, 0.45} to {0.45, 0.55}. Using a certified robustness radius, it would be concluded that the first model is the more robust, while a more direct reading of the change in probabilities would declare the second model to be more robust. These certified robustness radii represent a dense and inscrutable encoding of information about both the model and the input distribution, such that it can be difficult to use them as a meaningful metric. Consider as a hypothetical if, in the previous example, the first model produced a highly confident output solely because it was significantly overfit, and the underlying domain of the dataset is non-separable near the input. Although this improves its robustness radius, it makes it more susceptible to attack in the field.
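The worked example above translates into a few lines of code. The following is a minimal sketch (the function name and the five-class example are illustrative) of the approximated robustness radius: the modified Euclidean distance from the output to the nearest decision boundary, divided by the model-wide susceptibility ratio.

```python
import torch

def approx_robustness_radius(y_hat, susceptibility):
    # Nearest decision boundary: the point where the top two class
    # weights become equal, with all other entries unchanged.
    top2, _ = torch.topk(y_hat, 2)
    y_boundary = y_hat.clone()
    midpoint = top2.mean()
    y_boundary[y_hat == top2[0]] = midpoint
    y_boundary[y_hat == top2[1]] = midpoint
    # Modified Euclidean distance to that boundary, divided by Psi-hat.
    dist = torch.sqrt(((y_hat - y_boundary) ** 2).mean())
    return (dist / susceptibility).item()

# The five-class example from the text, with a susceptibility of 25.0:
y_hat = torch.tensor([2.1, 0.6, 0.1, -0.5, -1.1])
print(approx_robustness_radius(y_hat, 25.0))  # ~0.01897
```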
## 6.2 Post-Adversarial Accuracy

One of the existing standard measures of Adversarial Robustness is to measure the accuracy of models on adversarially perturbed inputs. If our analyses and experimental results thus far are correct, we should see an inverse relationship between measured susceptibility ratios and the post-adversarial accuracy for any given attack radius. This is our fourth experimental result, shown in Figure 6. In it, we observe that among ResNets, which possess similar values of Ψ̂(h, θ), post-attack accuracies are close between models, with an approximate but minor correspondence between higher susceptibilities and lower post-attack accuracy. We also observe, among the custom architectures, represented in Figure 6 by the subset of models with 32 channels and in their entirety in Figure 4, a very close inverse relationship between higher susceptibility and lower post-attack accuracy, particularly at the 0.01 attack radius.

![9_image_0.png](9_image_0.png)

Figure 6: Post-Attack Accuracies

![9_image_1.png](9_image_1.png)

Figure 7: Relationship between Adversarial Susceptibility and Post-Attack Accuracy, with a radius of 0.01. Linear best fit shown, with a correlation coefficient of -0.911.

We also observe that the custom architecture with D = 2, which experimentally had Ψ̂ = 578.602, has a post-attack accuracy curve that closely resembles those of the ResNet models, each of which had a similar susceptibility.

## 7 Conclusions And Future Work

Our experiments have shown, with some variation due to the inscrutable black-box nature of Deep Learning, that there is an extremely strong, analytically valuable, and experimentally valid connection between neural networks and dynamical systems as they exist in Chaos Theory. This connection can be used to make accurate and meaningful predictions about different neural network architectures, as well as to efficiently measure how susceptible they are to adversarial attacks. We have shown a correspondence, both experimentally and analytically, between these new measurements and those developed in prior works. Thus, a new tool has been added to the toolbox of practitioners looking to make decisions about neural networks.

Future work will include further exploration in this area, and the utilization of more advanced techniques and analysis from Chaos Theory, as well as the development of new, more precise metrics that may tell us more about how models are affected by adversarial attacks. Additionally, the relationship between the susceptibility ratio and Adversarial Robustness training regimes deserves study, as well as the relationship with different attack methodologies.

## References

Kathleen T Alligood, Tim D Sauer, James A Yorke, and David Chillingworth. Chaos: an introduction to dynamical systems. *SIAM Review*, 40(3):732–732, 1998.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision*, 2014.

Olivier Bousquet and André Elisseeff. Algorithmic stability and generalization performance. *Advances in Neural Information Processing Systems*, 13, 2000.

Olivier Bousquet and André Elisseeff. Stability and generalization. *The Journal of Machine Learning Research*, 2:499–526, 2002.

Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. *Advances in Neural Information Processing Systems*, 32, 2019.

George Cazenavette, Calvin Murdock, and Simon Lucey. Architectural adversarial robustness: The case for deep pursuit, 11 2020.

European Mathematical Society. Lipschitz constant. In *Encyclopedia of Mathematics*. EMS Press, May 2023.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013.

Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli.
Uncovering the limits of adversarial training against norm-bounded adversarial examples, 2021. Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In *International conference on machine learning*, pp. 1225–1234. PMLR, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. URL https://arxiv.org/abs/1512.03385. Warren He, Bo Li, and Dawn Song. Decision boundary analysis of adversarial examples. In International Conference on Learning Representations, 2018. Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, and Xingjun Ma. Exploring architectural ingredients of adversarially robust deep neural networks, 2022a. Shihua Huang, Zhichao Lu, Kalyanmoy Deb, and Vishnu Naresh Boddeti. Revisiting residual networks for adversarial robustness: An architectural perspective, 2022b. Michael Kearns and Dana Ron. Algorithmic stability and sanity-check bounds for leave-one-out crossvalidation. In *Proceedings of the tenth annual conference on Computational learning theory*, pp. 152–162, 1997. R. B. Levien and S. M. Tan. Double pendulum: An experiment in chaos. *American Journal of Physics*, 61(11): 1038–1044, 11 1993. ISSN 0002-9505. doi: 10.1119/1.17335. URL https://doi.org/10.1119/1.17335. Linyi Li, Xiangyu Qi, Tao Xie, and Bo Li. Sok: Certified robustness for deep neural networks. *arXiv preprint* arXiv:2009.04131, 2020. Sébastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. In *Proceedings of* the 18th ACM International Conference on Multimedia, MM '10, pp. 1485–1488, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605589336. doi: 10.1145/1873951.1874254. URL https://doi.org/10.1145/1873951.1874254. Anibal Pedraza, Oscar Deniz, and Gloria Bueno. Approaching adversarial example classification with chaos theory. *Entropy*, 22(11), 2020. ISSN 1099-4300. doi: 10.3390/e22111201. URL https://www.mdpi.com/ 1099-4300/22/11/1201. Vinay Uday Prabhu, Nishant Desai, and John Whaley. On lyapunov exponents and adversarial perturbation, 2018. URL https://arxiv.org/abs/1802.06927. Jürgen Schmidhuber. Deep learning in neural networks: An overview. *Neural networks*, 61:85–117, 2015. Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of vision transformers. *arXiv preprint arXiv:2103.15670*, 2021. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Yizhen Wang, Somesh Jha, and Kamalika Chaudhuri. Analyzing the robustness of nearest neighbors to adversarial examples. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 5133–5142. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/wang18c.html. Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, and Bo Li. Rab: Provable robustness against backdoor attacks. *arXiv preprint arXiv:2003.08904*, 2020. Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, and Quanquan Gu. Do wider neural networks really help adversarial robustness?, 2021a. Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, and Bo Li. Crop: Certifying robust policies for reinforcement learning through functional smoothing. *arXiv preprint arXiv:2106.09292*, 2021b. 
Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, and Bo Li. Copa: Certifying robust policies for offline reinforcement learning against poisoning attacks. *arXiv preprint arXiv:2203.08398*, 2022.

Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li. Crfl: Certifiably robust federated learning against backdoor attacks. In *International Conference on Machine Learning*, pp. 11372–11382. PMLR, 2021.

Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, and In So Kweon. A survey on universal adversarial attack. In *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence*. International Joint Conferences on Artificial Intelligence Organization, aug 2021. doi: 10.24963/ijcai.2021/635. URL https://doi.org/10.24963%2Fijcai.2021%2F635.

## A The Function G

Because the function g only takes ∆l and zi,l as inputs, and not l itself, a modification must be made in order to define g such that it correctly performs the neural network layer operation, while still preserving the same formulation as the evolution function Φ(∆t, xi,t). This can be achieved by replacing Z with Z′, such that

$$Z^{\prime}:=Z\times[L]=\{(z_{i,l},l)\mid l\in[L]\,\wedge\,z_{i,l}\in Z\}$$

Then, g is replaced with g′ : [L] × Z′ → Z′, with

$$g^{\prime}\big(1,(z_{i,l},l)\big):=\big(\sigma(W_{l}z_{i,l}+b_{l}),\;l+1\big)$$

drawing the index of the parameters Wl and bl to use from the second element of the (zi,l, l) tuple, which it iterates, and with g′^l then following from recursion. This has no bearing in practice, but helps to align theoretical analysis.

## B Theoretical Basis For Exponential Growth

The use of exponential growth to describe sensitive dependence in Chaos Theory is primarily a model rather than a theoretical result, owing to the typically bounded nature of state spaces. A classic Physical example of a chaotic system is the double pendulum (Levien & Tan, 1993), with a state space defined by the set of possible arm angles X := [0, 2π) × [0, 2π), and therefore a maximum L1 distance of 2π, bounding exponential growth. For a purely Mathematical example of a chaotic system, consider the state space [0, 1) with the evolution function Φ(1, x) := 2x mod 1. The trajectory with an initial condition at x = 0.37 goes 0.74, 0.48, 0.96, 0.92, 0.84, *et cetera*. Starting with x = 0.38, it goes 0.76, 0.52, 0.04, 0.08, 0.16, *et cetera*. The distance between the two trajectories is bounded at 0.5, but starting with a distance of 0.01, it grew to 0.32 in only 5 time steps, for this period having a Lyapunov exponent of ln(2) = 0.693; positive, and therefore chaotic. With this caveat in mind, a neural network can, to a finite degree and for a temporary period, be expected to produce exponential growth in the hidden state drift of two similar inputs.

## B.1 Random Matrices

Consider a first-order approximation of a neural network which removes the nonlinear activation functions and biases, rendering it a product of matrices. Let us define these to be d × d real-valued square matrices Wl ∈ R^{d×d}, and let us make the simplifying assumption that these are random matrices with i.i.d. univariate Gaussian entries wlij ∼ N(µ = 0, σ² = 1). Next we define a product accumulator matrix

$$H_{L}:=\prod_{l=0}^{L}W_{l}$$

with elements ηLij. We will also rely on the following: summing independent distributions sums their means and variances, and the product of two independent zero-mean distributions has a variance equal to the product of the variances of its constituents.
For the first two weight matrices W1 and W0 with elements w1ij and w0ij, and defining

$$\eta_{1_{ij}}=\sum_{k=1}^{d}w_{1_{ik}}w_{0_{kj}}$$

we get that

$$\sigma_{\eta_{1_{ij}}}^{2}=d,\qquad\mu_{\eta_{1_{ij}}}=0$$

Taking the recursion HL+1 = WL+1HL, we get

$$\eta_{L+1_{ij}}=\sum_{k=1}^{d}w_{L+1_{ik}}\eta_{L_{kj}}$$

with

$$\sigma_{\eta_{L+1_{ij}}}^{2}=d\,\sigma_{\eta_{L_{ij}}}^{2},\qquad\mu_{\eta_{L+1_{ij}}}=0$$

Trivially, this resolves to

$$\sigma_{\eta_{L_{ij}}}^{2}=d^{L}$$

additionally giving us a standard deviation

$$\sigma_{\eta_{L_{ij}}}={\sqrt{d^{L}}}=d^{L/2}$$

This reduces the accumulator matrix parameter distribution to ηLij ∼ N(µ = 0, σ² = d^L), and the multiplication of a vector xs by HL becomes multiplication by a univariate Gaussian random matrix, and then by d^{L/2}, given by d^{L/2} W0 xs. Substituting out xs for xs + δx, this gives us d^{L/2} W0 (xs + δx), and finally HL xs − HL (xs + δx) = −HL δx, thus

$${\frac{|H_{L}x_{s}-H_{L}(x_{s}+\delta x)|}{|\delta x|}}=d^{L/2}=e^{\ln(d)L/2}$$

which is an exponential increase in the distance between the trajectories of xs and xs + δx, returning to the earlier dynamical systems formulation.

## B.2 Activation Function

If we may assume that we know the vector xs beforehand, inserting ReLU activations between each matrix multiplication becomes equivalent to substituting a 0 for each entry in a vector that is being sequentially multiplied by each matrix, itself being equivalent to preserving the vector and instead substituting a 0 for each entry in the associated columns. Because all of the Gaussian distributions discussed have been zero mean, and the probability of a Gaussian being greater than or less than its mean is always 0.5, this gives us, relying on positive/negative symmetries, that each entry in the resulting vector may equivalently be sampled as

$$x_{s,l_{i}}=\sum_{k=1}^{d}w_{l_{ik}}\,x_{s,l_{k}}\cdot\mathrm{Bern}(p=0.5)$$

which has the effect of halving the number of dimensions that contributed to xs,li, in essence lowering d to d/2, and further decreasing the growth of the trajectory distance from e^{ln(d)L/2} to e^{ln(d/2)L/2}.

## B.3 Batch Normalization

Consider a set of vectors that all have some existing magnitude and per-dimension standard deviation from one another. After a normalization step which subtracts out their mean and divides per-dimension by the standard deviation, the resulting vectors will have a mean of 0 and a standard deviation of 1. This resets any growth that they may previously have had away from one another. If a set of vectors including xi,0 has a standard deviation of 1, and each element thereof is multiplied by W0 with a ReLU activation applied, such that the set of vectors that includes xi,1 has a standard deviation of √(d/2), the normalization step will result in division by that factor. The effect of this is to curtail the spatially infinite growth of each vector as each neural network layer is applied. Trajectories will still diverge from one another, and the dynamics of the underlying system is such that small changes will still compound, e.g. one entry with a value of 0.01 will have propagating and compounding effects compared to an entry with a value of -0.01, which will simply be erased by ReLU. But it places restrictions on the otherwise infinite possible growth, much like the domain boundaries of the double pendulum or the Φ(1, x) := 2x mod 1 system discussed earlier.
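As a quick numerical sanity check of the d^{L/2} prediction in B.1 (a hypothetical script, not one of the paper's experiments), one can multiply i.i.d. Gaussian matrices and compare the measured growth of the difference between two nearby vectors against the predicted factor:

```python
import torch

torch.manual_seed(0)
d, L = 64, 10
x = torch.randn(d)
dx = 1e-6 * torch.randn(d)

a, b = x.clone(), x + dx
for _ in range(L):
    W = torch.randn(d, d)    # i.i.d. N(0, 1) entries, as in B.1
    a, b = W @ a, W @ b      # no activations, no biases

print(f"measured growth:   {((a - b).norm() / dx.norm()).item():.3e}")
print(f"predicted d^(L/2): {d ** (L / 2):.3e}")   # 64^5 ~ 1.074e9
```

The measured ratio fluctuates around the prediction from run to run, since d^{L/2} describes the standard deviation of the accumulated entries rather than any single realization.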
## C Adversarial Susceptibility And The Lipschitz Constant

The Lipschitz constant M (European Mathematical Society, 2023) is defined for a function f : X → Y as

$$M(f):=\operatorname*{sup}_{(x_{i},x_{j})\in X\times X,\;x_{i}\neq x_{j}}{\frac{|f(x_{i})-f(x_{j})|}{|x_{i}-x_{j}|}}$$

This possesses a close ontological relationship in formulation to the susceptibility ratio, as defined in equations 1 and 2. They both describe a rate of change of a function's output with respect to its input. However, while this relationship is obvious, there are several points of differentiation. The susceptibility ratio is a numerically estimated geometric mean, whereas the Lipschitz constant is a global supremum, which cannot be produced analytically for Deep neural networks. Additionally, whereas the Lipschitz constant is a tool of Analysis, the susceptibility ratio is a construct combining the Chaos Theoretic Lyapunov exponent with a constant time horizon.
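To make the contrast concrete, the following hypothetical sketch places the two quantities side by side: a crude empirical lower bound on M(f), taken as a maximum of difference quotients over sampled pairs, against the geometric-mean aggregation used for the susceptibility ratio. The sampling scheme is purely illustrative.

```python
import math
import torch

def mod_norm(v):
    return torch.sqrt((v ** 2).mean())

@torch.no_grad()
def lipschitz_lower_bound(f, xs):
    # Empirical lower bound on M(f): the maximum difference quotient
    # over all sampled pairs (a supremum over X x X is intractable).
    quotients = [mod_norm(f(xi) - f(xj)) / mod_norm(xi - xj)
                 for i, xi in enumerate(xs) for xj in xs[i + 1:]]
    return max(q.item() for q in quotients)

@torch.no_grad()
def susceptibility_style_mean(f, pairs):
    # Susceptibility-style aggregation: geometric mean of the same kind
    # of ratio, over (input, perturbation) pairs rather than a supremum.
    logs = [math.log((mod_norm(f(x) - f(x + dx)) / mod_norm(dx)).item())
            for x, dx in pairs]
    return math.exp(sum(logs) / len(logs))
```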
Review 1: Summary: This paper attempts to connect Chaos system with neural networks. The intuition is explained and numerical experiments are conducted. Strengths and Weaknesses: (1) My major concern is that the statements in this paper need to be supported with formal mathematical proof. While the authors try to map the neural network system with the Chaos system, it is essential to provide proof in order to state "g^l(x_i)-g^l(x_i+dx)\approx |dx|e^{lambda l}", which is the core theory of this paper. If the authors do not intend to prove the theory, the current experiments are not sufficient to support their claims, and the presentation of this paper should be significantly changed. (2) Relationship with algorithmic stability. Algorithmic stability (Kearns and Ron, 1999; Bousquet and Elisseeff, 2001, 2002)) is defined as the difference in the output of the algorithm given different inputs. It is used as a metric to evaluate the algorithm (i.e., we want a stable algorithm), and it is closely related to the generalization performance of the algorithm (Hardt et al., 2016). While the authors of this paper try to connect the Chaos system with neural networks, they may want to discuss further [1] the difference between the Chaos system and algorithmic stability and [2] whether it is possible to connect algorithmic stability with neural networks. References: Kearns, M. and Ron, D. (1999), “Algorithmic stability and sanity-check bounds for leave-one-out cross-validation,” Neural computation, 11, 1427–1453. Bousquet, O. and Elisseeff, A. (2001), “Algorithmic stability and generalization performance,” in Advances in Neural Information Processing Systems, pp. 196–202. Bousquet, O. and Elisseeff, A. (2002), “Stability and generalization,” Journal of machine learning research, 2, 499–526. Hardt, M., Recht, B., and Singer, Y. (2016), “Train faster, generalize better: Stability of stochastic gradient descent,” in International Conference on Machine Learning, pp. 1225–1234. (3) Writing of this paper: The writing of this paper can be significantly improved in many aspects. The primary issue is that there is some missing definition/logic that prevents people from understanding the math correctly: [1] In Section 2.1, what does the subscript i in x_{i,t} mean? [2] The notation z is sometimes written as z_l and z_{i,l} in other cases. [3] In Section 2.2, please use the mathematical expression to explain "We will handwave the method by which g... the ordinary operations of the feed-forward layer". In addition, the writing can be more formal. The current writing style is not formal enough for an academic paper. While the writing is easy to understand, I would suggest the authors polish it again. Below are some examples: [1] The words "this" and "that" are used too many times in the paper. These expressions make the writing informal when appearing frequently. Please try to remove them. [2] The introduction at the beginning of Section 2 "We’ll start by ... But we should begin by explaining..." is informal. It can be changed to "This paper aims to ... To begin with, we first introduce the definition of ...". [3] In Section 2.1, paragraph 1, the expression "or something like it" should be removed. We need a precise definition of ingredient one and avoid vague expressions. [4] In Section 2.1, paragraph 2, the expression "this has to have some properties like" and "like so" are not formal. The expression "like so" also appear in the later paragraphs as well. 
[5] In Section 2.1, paragraph 3, what does it refer by "From this"? [6] Page 3, paragraph 1, the expression "The final piece of the puzzle" is informal. [7] Page 3, paragraph 1, the sentence "We then need some notion of the distance between two elements of the state space... and proceed from here" can be changed into "To further quantify the distance between ... we assume a vector space equipped with a metric |.| ..." [8] Page 3, paragraph 1, the expression "know off the bat" is informal. [9] Page 3, paragraph 2, the sentence "no matter what, you reach the resting state, and that’s the end of the story" is informal. [10] Page 3, paragraph 2, the expression "until they might as well have started" is informal. [11] Section 2.2, paragraph 2, "which is written here as" can be changed to "which is written as". [12] Section 2.2, paragraph 3, "perhaps using something along" is informal. Other writing issues: [1] In the abstract, please simplify the sentence "We also demonstrate ... post-attack accuracy". Its current presentation is too long, and it is hard to capture the logic among all the parts in this sentence. [2] In Section 2.1, paragraph 1, please clarify the logic between the definition T and the sentence "A change in time can be added to an initial time to get an end time, in an associative fashion." When reading this paragraph, I can understand T, but I'm not sure the purpose of the sentence as mentioned. [3] It seems like there is one extra line between the first paragraph of Section 2.1 and the formula after it. Similar issues appear in many of the formulas in this paper. Please check through the paper and fix them. [4] Incorrect use of "e.g.". Please check whether "i.e." or "e.g." is needed. Requested Changes: Please (1) Provide the proof of "g^l(x_i)-g^l(x_i+dx)\approx |dx|e^{lambda l}". (2) Improve the writing to make the paper more clear and formal. (3) Please provide some discussions on algorithmic stability. Broader Impact Concerns: NA ================================================== Review 2: Summary: This paper investigates the problem of adversarial robustness from the standpoint of dynamical systems. Using their exponential model of hidden state drift, they propose a metric called susceptibility ratio and perform experiments to analyze how this value is influenced by the number of channels, model depth, and radius of adversarial perturbation. Strengths and Weaknesses: Strengths: - writing is clear - good experimental scope - interesting problem Weaknesses: - novelty is unclear - adversarial susceptibility definition seems similar to the definition of Lipschitz constant, some discussion/comparison is necessary. Also missing discussions/comparisons to related works in advML literature that analyze the impact of architecture on robustness such as: Gowal, Sven, et al. "Uncovering the limits of adversarial training against norm-bounded adversarial examples." arXiv preprint arXiv:2010.03593 (2020). Huang, Shihua, et al. "Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective." arXiv preprint arXiv:2212.11005 (2022). Huang, Hanxun, et al. "Exploring architectural ingredients of adversarially robust deep neural networks." Advances in Neural Information Processing Systems 34 (2021): 5545-5559. Wu, Boxi, et al. "Do wider neural networks really help adversarial robustness?." Advances in Neural Information Processing Systems 34 (2021): 7054-7067. 
Requested Changes:
- [critical] Please clarify the novelty of the work by comparing to and discussing related works studying the impact of architecture on adversarial robustness
- [critical] Please discuss how adversarial susceptibility differs from the Lipschitz constant

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary:
This paper studies adversarial attacks on deep neural networks through Chaos Theory. The "time" factor in Chaos Theory can be seen as the depth of the neural network, and the "state space" is the hidden representation. Based on the analogy, a small adversarial perturbation could result in a larger shift in the output following exponential growth with the depth. The experimental results confirm this analogy under the condition that input samples have been correctly classified by a set of models. Based on this finding, this paper proposed a metric called Susceptibility Ratio to estimate how susceptible a neural network is to attack.

Strengths and Weaknesses:
Strengths
- The Chaos Theory is well explained, which allows readers to easily follow. The analogy to neural networks is also very clear.

---

Weaknesses
- The paper claims that the susceptibility ratio approximates the certified robustness radii but does not provide any empirical evidence. The proposed metric is an empirical metric. It would be great to provide evaluations that compare this metric with certified robustness radii on smaller models for which certified radii can be computed.
- The formulation of the susceptibility ratio is very similar to the empirical Lipschitz constant used in [1,2]. The difference seems to be the arithmetic mean vs. the geometric mean. It would be great to provide further clarification of the difference and justifications of why the susceptibility ratio is preferable.
- All experiments are conducted with the ResNet family of architectures. It would be better to include other types of models, such as VGG, DenseNet, and ViT. All experiments are evaluated on Food-101, and it would be better to conduct them on other datasets to show the proposed method is generic.
- The analogy of Chaos Theory, deep neural networks and adversarial attacks is a hypothesis. The empirical justifications are limited under the condition that the depth only follows the exponential growth with input samples correctly classified with a set of models. It is unclear in Table 1 why ResNet does not follow this rule.
- Continuing with the previous point, it would be better to make it clear in the captions of Figures 4 and 5 that one is with random weights and the other with trained weights.
- The analysis is based on randomly initialised models or standard training. What about adversarial training?
- The insights from Chaos Theory suggest that depth makes deep models vulnerable to adversarial attacks. It is unclear how Chaos Theory could help to design more robust models.

[1] A closer look at accuracy vs. robustness. NeurIPS 2020
[2] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. NeurIPS 2021

Requested Changes:
See the weaknesses part.

Broader Impact Concerns: No ethical implications.

==================================================

Metareview:

Recommendation: Reject

Comment:
This paper connects adversarial robustness to chaos theory. One contribution is to propose a new metric to capture how greatly a model's output will be changed by perturbations to a given input.
The presentation of the paper is not very professional, as pointed out by one reviewer, with whom I agree. The reviewers also raise some other concerns, for example, the connection with algorithmic stability, the relation between adversarial susceptibility and the Lipschitz constant, the need for more empirical evidence, and the insufficient demonstration of the paper's contribution. Unfortunately, the authors did not provide a good rebuttal to address these issues. Much of the rebuttal consists of general acknowledgements without evidence. To sum up, although the connection between adversarial robustness and chaos theory looks interesting, I believe the connection is not sufficiently well established, and the implications are not very clear. Thus, I recommend rejection.

==================================================
# Kernel Normalized Convolutional Networks

Reza Nasirigerdeh *reza.nasirigerdeh@tum.com*
Technical University of Munich
Helmholtz Munich

Reihaneh Torkzadehmahani *reihaneh.torkzadehmahani@tum.de*
Technical University of Munich

Daniel Rueckert *daniel.rueckert@tum.de*
Technical University of Munich
Imperial College London

Georgios Kaissis *g.kaissis@tum.de*
Technical University of Munich
Helmholtz Munich

Reviewed on OpenReview: *https://openreview.net/forum?id=Uv3XVAEgG6*

## Abstract

Existing convolutional neural network architectures frequently rely upon batch normalization (BatchNorm) to effectively train the model. BatchNorm, however, performs poorly with small batch sizes, and is inapplicable to differential privacy. To address these limitations, we propose the kernel normalization (**KernelNorm**) and **kernel normalized** convolutional layers, and incorporate them into kernel normalized convolutional networks (**KNConvNets**) as the main building blocks. We implement KNConvNets corresponding to the state-of-the-art ResNets while forgoing the BatchNorm layers. Through extensive experiments, we illustrate that KNConvNets achieve higher or competitive performance compared to the BatchNorm counterparts in image classification and semantic segmentation. They also significantly outperform their batch-independent competitors including those based on layer and group normalization in non-private and differentially private training. Given that, KernelNorm combines the batch-independence property of layer and group normalization with the performance advantage of BatchNorm.1

## 1 Introduction

Convolutional neural networks (CNNs) (LeCun et al., 1989) are standard architectures in computer vision tasks such as image classification (Krizhevsky et al., 2012; Sermanet et al., 2014) and semantic segmentation (Long et al., 2015b). Deep CNNs including ResNets (He et al., 2016a) achieved outstanding performance in classification of challenging datasets such as ImageNet (Deng et al., 2009). One of the main building blocks of these CNNs is *batch normalization* (BatchNorm) (Ioffe & Szegedy, 2015). The BatchNorm layer considerably enhances the performance of deep CNNs by smoothing the optimization landscape (Santurkar et al., 2018), and addressing the problem of vanishing gradients (Bengio et al., 1994; Glorot & Bengio, 2010).

BatchNorm, however, has the disadvantage of breaking the independence among the samples in the batch (Brock et al., 2021b). This is because BatchNorm carries out normalization along the batch dimension (Figure 1a), and as a result, the normalized value associated with a given sample depends on the statistics of the other samples in the batch. Consequently, the effectiveness of BatchNorm is highly dependent on batch size. With large batch sizes, the batch normalized models are trained effectively due to more accurate estimation of the batch statistics. Using small batch sizes, on the other hand, BatchNorm causes reduction in model accuracy (Wu & He, 2018) because of dramatic fluctuations in the batch statistics. BatchNorm, moreover, is inapplicable to *differential privacy* (DP) (Dwork & Roth, 2014).

1The code is available at: https://github.com/reza-nasirigerdeh/norm-torch
For the theoretical guarantees of DP to hold for the training of neural networks (Abadi et al., 2016), it is required to compute the gradients individually for each sample in a batch, clip the per-sample gradients, and then average and inject random noise to limit the information learnt about any particular sample. Because per-sample (individual) gradients are required, the gradients of a given sample are not allowed to be influenced by other samples in the batch. This is not the case for BatchNorm, where samples are normalized using the statistics computed over the other samples in the batch. Consequently, BatchNorm is inherently incompatible with DP.

To overcome the limitations of BatchNorm, the community has introduced *batch-independent* normalization layers including layer normalization (LayerNorm) (Ba et al., 2016), instance normalization (InstanceNorm) (Ulyanov et al., 2016), group normalization (GroupNorm) (Wu & He, 2018), positional normalization (PositionalNorm) (Li et al., 2019), and local context normalization (LocalContextNorm) (Ortiz et al., 2020), which perform normalization independently for each sample in the batch. These layers do not suffer from the drawbacks of BatchNorm, and might outperform BatchNorm in particular domains such as generative tasks (e.g. LayerNorm in Transformer models (Vaswani et al., 2017)). For image classification and semantic segmentation, however, they typically do not achieve performance comparable with BatchNorm's in non-private (without DP) training. In DP, moreover, these batch-independent layers might not provide the accuracy gain we expect compared to non-private learning. This motivates us to develop alternative layers, which are batch-independent but more efficient in both non-private and differentially private learning.

Our main contribution is to propose two novel *batch-independent* layers, kernel normalization (**KernelNorm**) and the kernel normalized convolutional (**KNConv**) layer, to further enhance the performance of deep CNNs. The distinguishing characteristic of the proposed layers is that they *extensively* take into account the *spatial correlation* among the elements during normalization. KernelNorm is similar to a pooling layer, except that it normalizes the elements specified by the kernel window instead of computing the average/maximum of the elements, and it operates over all input channels instead of a single channel (Figure 1g). KNConv is the combination of KernelNorm with a convolutional layer, where it applies KernelNorm to the input, and feeds KernelNorm's output to the convolutional layer (Figure 2). From another perspective, KNConv is the same as the convolutional layer except that KNConv first normalizes the input elements specified by the kernel window, and then computes the convolution between the normalized elements and the kernel weights.

In both aforementioned naive forms, however, KNConv is computationally inefficient because it leads to an extremely large number of normalization units, and therefore, considerable computational overhead to normalize the corresponding elements. To tackle this issue, we present **computationally-efficient** KNConv (Algorithm 1), where the output of the convolution is adjusted using the mean and variance of the normalization units. This way, it is not required to normalize the elements, improving the computation time by orders of magnitude.

As an application of the proposed layers, we introduce **kernel normalized convolutional networks**
As an application of the proposed layers, we introduce **kernel normalized convolutional networks** (**KNConvNets**) corresponding to residual networks (He et al., 2016a), referred to as **KNResNets**, which employ KernelNorm and computationally-efficient KNConv as the main building blocks while forgoing the BatchNorm layers (Section 3). Our last contribution is to draw performance comparisons among KNResNets and the competitors using several benchmark datasets including CIFAR-100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and Cityscapes (Cordts et al., 2016). According to the experimental results (Section 4), KNResNets deliver significantly higher accuracy than the BatchNorm counterparts in image classification on CIFAR-100 using a small batch size. KNResNets, moreover, achieve higher or competitive performance compared to the batch normalized ResNets in classification on ImageNet and semantic segmentation on CityScapes. Furthermore, KNResNets considerably outperform GroupNorm and LayerNorm based models for almost all considered case studies in non-private and differentially private learning. Considering that, KernelNorm combines the performance advantage of BatchNorm with the batch-independence benefit of LayerNorm and GroupNorm. ![2_image_0.png](2_image_0.png) Figure 1: **Normalization layers** differ from one another in their normalization unit (highlighted in blue and green). The normalization layers in (a)-(f) establish a *one-to-one correspondence* between the input and normalized elements (i.e. no overlap between the normalization units, and no ignorance of an element). The proposed **KernelNorm** layer does not impose such one-to-one correspondence: Some elements (dashhatched area) are common among the normalization units, contributing more than once to the output, while some elements (uncolored ones) are ignored during normalization. Due to this unique property of overlapping normalization units, KernelNorm *extensively* incorporates the spatial correlation among the elements during normalization (akin to the convolutional layer), which is not the case for the other normalization layers. ## 2 Normalization Layers Normalization methods can be categorized into *input normalization* and *weight normalization* (Salimans & Kingma, 2016; Bansal et al., 2018; Wang et al., 2020; Qi et al., 2020). The former techniques perform normalization on the input tensor, while the latter ones normalize the model weights. The aforementioned layers including BatchNorm, and the proposed KernelNorm layer as well as *divisive normalization* (Heeger, 1992; Bonds, 1989), (Ren et al., 2017) and *local response normalization* (LocalResponseNorm) (Krizhevsky et al., 2012) belong to the category of input normalization. Weight standardization (Huang et al., 2017b; Qiao et al., 2019) and normalizer-free networks (Brock et al., 2021a) fall into the category of weight normalization. In the following, we provide an overview on the existing normalization layers closely related to KernelNorm, i.e. the layers which are based on input normalization, and employ standard normalization (zero-mean and unit-variance) to normalize the input tensor. For the sake of simplicity, we focus on 2D images, but the concepts are also applicable to 3D images. For a 2D image, the input of a layer is a 4D tensor of shape (n, c, h, w), where n is batch size, c is the number of input channels, h is height, and w is width of the tensor. 
Normalization layers differ from one another in their *normalization unit*, which is a group of input elements that are normalized together with the mean and variance of the unit. The normalization unit of **BatchNorm** (Figure 1a) is a 3D tensor of shape (n, h, w), implying that BatchNorm incorporates all elements in the batch, height, and width dimensions during normalization. LayerNorm's normalization unit (Figure 1b) is a 3D tensor of shape (c, h, w), i.e. LayerNorm considers all elements in the channel, height, and width dimensions for normalization. The normalization unit of InstanceNorm (Figure 1c) is a 2D tensor of shape (h, w), i.e. all elements of the height and width dimensions are taken into account during normalization. GroupNorm's normalization unit (Figure 1d) is a 3D tensor of shape (cg, h, w), where cg indicates the channel group size. Thus, GroupNorm incorporates all elements in the height and width dimensions and a subset of elements specified by the group size in the channel dimension during normalization. **PositionalNorm**'s normalization unit (Figure 1e) is a 1D tensor of shape c, i.e. PositionalNorm performs channel-wise normalization. The normalization unit of **LocalContextNorm** (Figure 1f) is a 3D tensor of shape (cg, r, s), where cg is the group size, and (r, s) is the window size. Therefore, LocalContextNorm considers a subset of elements in the height, width, and channel dimensions during normalization. BatchNorm, LayerNorm, InstanceNorm, and GroupNorm consider *all elements* in the height and width dimensions for normalization, and thus, they are referred to as *global normalization* layers. PositionalNorm and LocalContextNorm, on the other hand, are called *local normalization* layers (Ortiz et al., 2020) because they incorporate a *subset of elements* from the aforementioned dimensions during normalization. In spite of their differences, the aforementioned normalization layers including BatchNorm have at least one thing in common: There is a *one-to-one correspondence* between the original elements in the input and the normalized elements in the output. That is, there is exactly one normalized element associated with each input element. Therefore, these layers do not modify the shape of the input during normalization. ## 3 Kernel Normalized Convolutional Networks The KernelNorm and KNConv layers are the main building blocks of KNConvNets. **KernelNorm** takes the kernel size (kh, kw), stride (sh, sw), padding (ph, pw), and dropout probability p as hyper-parameters. It pads the input with zeros if padding is specified. The normalization unit of KernelNorm (Figure 1g) is a tensor of shape (c, kh, kw), i.e. KernelNorm incorporates *all elements* in the channel dimension but a subset of elements specified by the kernel size from the height and width dimensions during normalization. 
The KernelNorm layer (1) applies random dropout (Srivastava et al., 2014) to the normalization unit to obtain the *dropped-out* unit, (2) computes the mean and variance of the dropped-out unit, and (3) employs the calculated mean and variance to normalize the *original* normalization unit:

$$U' = D_{p}(U), \tag{1}$$

$$\mu_{u'}=\frac{1}{c\cdot k_{h}\cdot k_{w}}\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}U'(i_{c},i_{h},i_{w}),\qquad\sigma_{u'}^{2}=\frac{1}{c\cdot k_{h}\cdot k_{w}}\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}\big(U'(i_{c},i_{h},i_{w})-\mu_{u'}\big)^{2}, \tag{2}$$

$$\hat{U}=\frac{U-\mu_{u'}}{\sqrt{\sigma_{u'}^{2}+\epsilon}}, \tag{3}$$

where p is the dropout probability, D_p is the dropout operation, U is the normalization unit, U' is the dropped-out unit, µ_{u'} and σ²_{u'} are the mean and variance of the dropped-out unit, respectively, ϵ is a small number (e.g. 10⁻⁵) for numerical stability, and Û is the normalized unit.

Partially inspired by BatchNorm, KernelNorm introduces a *regularizing effect* during training by intentionally normalizing the elements of the original unit U using the statistics computed over the dropped-out unit U'. In BatchNorm, the normalization statistics are computed over the batch rather than the whole dataset, so the mean and variance of the batch are randomized approximations of those of the whole dataset. This "stochasticity from the batch statistics" creates a regularizing effect in BatchNorm according to Ba et al. (2016). KernelNorm employs dropout to generate similar stochasticity in the mean and variance of the normalization unit. Notice that the naive option of injecting random noise directly into the mean and variance might generate too much randomness and hinder model convergence. Using dropout in the aforementioned fashion, KernelNorm can control the regularization effect with more flexibility.

The first normalization unit of KernelNorm is bounded to a window specified by the diagonal points (1, 1) and (k_h, k_w) in the height and width dimensions. The coordinates of the next normalization unit are (1, 1 + s_w) and (k_h, k_w + s_w), which are obtained by sliding the window s_w elements along the width dimension. If there are not enough elements left for the kernel in the width dimension, the window is slid by s_h elements in the height dimension, and the above procedure is repeated. Notice that KernelNorm works on the padded input of shape (n, c, h + 2·p_h, w + 2·p_w), where (p_h, p_w) is the padding size. The output X̂ of KernelNorm is the concatenation of the *normalized units* Û from Equation 3 along the height and width dimensions. KernelNorm's output is of shape (n, c, h_out, w_out), and it has a total of n · (h_out/k_h) · (w_out/k_w) normalization units, where h_out and w_out are computed as follows:

$$h_{out}=k_{h}\cdot\Big\lfloor\frac{h+2\cdot p_{h}-k_{h}}{s_{h}}+1\Big\rfloor,\qquad w_{out}=k_{w}\cdot\Big\lfloor\frac{w+2\cdot p_{w}-k_{w}}{s_{w}}+1\Big\rfloor$$

In simple terms, KernelNorm behaves similarly to the pooling layers with two major differences: (1) KernelNorm normalizes the elements specified by the kernel size instead of computing the maximum/average over the elements, and (2) KernelNorm operates over all channels rather than a single channel.
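To make the sliding-window arithmetic concrete, the following is a minimal PyTorch sketch of KernelNorm as defined in Equations 1-3, built on the unfold operation mentioned later in this section. The class name and hyper-parameter defaults are ours, not the authors' reference implementation; also note that PyTorch's dropout rescales retained elements by 1/(1−p), a detail Equation 1 leaves unspecified.

```python
import torch
import torch.nn.functional as F

class NaiveKernelNorm(torch.nn.Module):
    """Sketch of KernelNorm (Equations 1-3); hypothetical name and defaults."""

    def __init__(self, kernel_size=(3, 3), stride=(2, 2), padding=(0, 0), p=0.25, eps=1e-5):
        super().__init__()
        self.k, self.s, self.pad = kernel_size, stride, padding
        self.p, self.eps = p, eps

    def forward(self, x):                                    # x: (n, c, h, w)
        n, c, h, w = x.shape
        (kh, kw), (sh, sw), (ph, pw) = self.k, self.s, self.pad
        lh = (h + 2 * ph - kh) // sh + 1                     # window positions along height
        lw = (w + 2 * pw - kw) // sw + 1                     # window positions along width
        # Each column of `units` is one flattened (c, kh, kw) normalization unit.
        units = F.unfold(x, self.k, stride=self.s, padding=self.pad)  # (n, c*kh*kw, lh*lw)
        # Statistics come from the dropped-out copy (Eq. 1-2) but are applied
        # to the original units (Eq. 3).
        dropped = F.dropout(units, p=self.p, training=self.training)
        mean = dropped.mean(dim=1, keepdim=True)
        var = ((dropped - mean) ** 2).mean(dim=1, keepdim=True)
        normed = (units - mean) / torch.sqrt(var + self.eps)
        # Concatenate the normalized units along height and width: (n, c, lh*kh, lw*kw).
        out = normed.view(n, c, kh, kw, lh, lw).permute(0, 1, 4, 2, 5, 3)
        return out.reshape(n, c, lh * kh, lw * kw)
```

Note how stride values smaller than the kernel size make the unfolded units overlap, which is exactly the property discussed next.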
KernelNorm is a *batch-independent* and *local normalization* layer, but it differs from the existing normalization layers in two aspects: (I) There is not necessarily a one-to-one correspondence between the original elements in the input and the normalized elements in the output of KernelNorm. Stride values less than the kernel size lead to overlapping normalization units, where some input elements contribute more than once to the output (akin to the convolutional layer). If the stride value is greater than the kernel size, some input elements are completely ignored during normalization. Therefore, the output shape of KernelNorm can differ from the input shape. (II) KernelNorm can *extensively* take into account the *spatial correlation* among the elements during normalization because of the overlapping normalization units.

KNConv is the combination of KernelNorm and the traditional convolutional layer (Figure 2). It takes the number of input channels ch_in, the number of output channels (filters) ch_out, kernel size (k_h, k_w), stride (s_h, s_w), and padding (p_h, p_w), exactly the same as the convolutional layer, as well as the dropout probability p as hyper-parameters. KNConv first applies KernelNorm with kernel size (k_h, k_w), stride (s_h, s_w), padding (p_h, p_w), and dropout probability p to the input tensor. Next, it applies the convolutional layer with ch_in channels, ch_out filters, kernel size (k_h, k_w), stride (k_h, k_w), and zero padding to the output of KernelNorm. That is, both the kernel size and stride of the convolutional layer are identical to the kernel size of KernelNorm. From another perspective, KNConv is the same as the convolutional layer except that it normalizes the input elements specified by the kernel window before computing the convolution. Assuming that U contains the input elements specified by the kernel window, Û is the normalized version of U from KernelNorm (Equation 3), Z is the kernel weights of a given filter, ⋆ is the convolution (or dot product) operation, and b is the bias value, KNConv computes the output as follows:

$$\mathrm{KNConv}(U,Z,b)=\hat{U}\star Z+b \tag{4}$$

KNConv (or in fact KernelNorm) leads to an extremely high number of normalization units, and consequently, remarkable computational overhead. Thus, KNConv in the simple form outlined in Equation 4 (or as a combination of the KernelNorm and convolutional layers) is computationally inefficient. Compared to the convolutional layer, the additional computational overhead of KNConv originates from (I) calculating the mean and variance of the units using Equation 2, and (II) normalizing the elements by the mean and variance using Equation 3.

![4_image_0.png](4_image_0.png)

Figure 2: **KNConv** as the combination of the KernelNorm and convolutional layers. KNConv first applies KernelNorm with kernel size (3, 3) and stride (2, 2) to the input tensor, and then gives KernelNorm's output to a convolutional layer with kernel size and stride (3, 3). That is, the kernel size and stride of the convolutional layer and the kernel size of KernelNorm are identical.
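In code, the naive KNConv of Equation 4 amounts to chaining the NaiveKernelNorm sketch above with a convolution whose stride equals its kernel size, so that each convolution window covers exactly one normalized unit. Again a hedged sketch with hypothetical names:

```python
import torch.nn as nn

class NaiveKNConv(nn.Module):
    """Sketch of Equation 4: KernelNorm, then a convolution whose kernel size AND
    stride equal KernelNorm's kernel size (reuses NaiveKernelNorm from above)."""

    def __init__(self, ch_in, ch_out, kernel_size=(3, 3), stride=(2, 2),
                 padding=(1, 1), p=0.05, bias=True):
        super().__init__()
        self.norm = NaiveKernelNorm(kernel_size, stride, padding, p)
        self.conv = nn.Conv2d(ch_in, ch_out, kernel_size=kernel_size,
                              stride=kernel_size, padding=0, bias=bias)

    def forward(self, x):
        return self.conv(self.norm(x))
```

Setting the convolutional stride to the kernel size is what makes the output resolution of KNConv match that of a plain convolution with stride (s_h, s_w) and padding (p_h, p_w) on the original input.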
Computationally-efficient KNConv reformulates Equation 4 in a way that completely eliminates the overhead of normalizing the elements:

$$\begin{aligned}
\mathrm{KNConv}(U,Z,b)&=\hat{U}\star Z+b\\
&=\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}\frac{U(i_{c},i_{h},i_{w})-\mu_{u'}}{\sqrt{\sigma_{u'}^{2}+\epsilon}}\cdot Z(i_{c},i_{h},i_{w})+b\\
&=\Big(\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}U(i_{c},i_{h},i_{w})\cdot Z(i_{c},i_{h},i_{w})-\mu_{u'}\cdot\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}Z(i_{c},i_{h},i_{w})\Big)\cdot\frac{1}{\sqrt{\sigma_{u'}^{2}+\epsilon}}+b\\
&=\Big(U\star Z-\mu_{u'}\cdot\sum_{i_{c}=1}^{c}\sum_{i_{h}=1}^{k_{h}}\sum_{i_{w}=1}^{k_{w}}Z(i_{c},i_{h},i_{w})\Big)\cdot\frac{1}{\sqrt{\sigma_{u'}^{2}+\epsilon}}+b
\end{aligned} \tag{5}$$

According to Equation 5 and Algorithm 1, KNConv applies the convolutional layer to the original unit, computes the mean and standard deviation of the dropped-out unit as well as the sum of the kernel weights, and finally adjusts the convolution output using the computed statistics. This way, it is not required to normalize the elements, improving the computation time of KNConv by orders of magnitude. In terms of implementation, KernelNorm employs the *unfolding* operation in PyTorch (2023b) to implement the sliding window mechanism in the kn_mean_var function in Algorithm 1. Moreover, it uses the *var_mean* function in PyTorch (2023c) to compute the mean and variance over the unfolded tensor along the channel, width, and height dimensions.

The defining characteristic of KernelNorm and KNConv is that they take into consideration the *spatial correlation* among the elements during normalization on condition that the kernel size is greater than 1×1. Existing architectures (initially designed for global normalization), however, do not satisfy this condition. For instance, all ResNets use 1×1 convolutions for downsampling and increasing the number of filters. ResNet-50/101/152, in particular, contains bottleneck blocks with a single 3×3 and two 1×1 convolutional layers. Consequently, the current architectures are unable to fully utilize the potential of kernel normalization. KNConvNets are bespoke architectures for kernel normalization, consisting of computationally-efficient KNConv and KernelNorm as the main building blocks. KNConvNets are *batch-independent* (free of BatchNorm), and primarily employ kernel sizes of 2×2 or 3×3 to benefit from the spatial correlation of elements during normalization. In this study, we propose KNConvNets corresponding to ResNets, called *KNResNets*, for image classification and semantic segmentation.

Algorithm 1: Computationally-efficient KNConv layer

Input: input tensor X, number of input channels ch_in, number of output channels ch_out, kernel size (k_h, k_w), stride (s_h, s_w), padding (p_h, p_w), *bias* flag, dropout probability p, and epsilon ϵ

// 2-dimensional convolutional layer
conv_layer = Conv2d(in_channels=ch_in, out_channels=ch_out, kernel_size=(k_h, k_w), stride=(s_h, s_w), padding=(p_h, p_w), bias=false)
// convolutional layer output
conv_out = conv_layer(input=X)
// mean and variance from KernelNorm
µ, σ² = kn_mean_var(input=X, kernel_size=(k_h, k_w), stride=(s_h, s_w), padding=(p_h, p_w), dropout_p=p)
// KNConv output
kn_conv_out = (conv_out − µ · Σ conv_layer.weights) / sqrt(σ² + ϵ)
// apply bias
if bias **then** kn_conv_out += conv_layer.bias

Output: kn_conv_out

![6_image_0.png](6_image_0.png)

Figure 3: **KNResNet blocks**: Basic blocks are employed in KNResNet-18/34, while KNResNet-50 is based on bottleneck blocks. Transitional blocks are used in all KNResNets for increasing the number of filters and downsampling. The architectures of KNResNet-18/34/50 are available in Figures 5-6 in Appendix A.
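For illustration, here is one way Equation 5 and Algorithm 1 above might be realized in PyTorch. This is our sketch of the idea, not the authors' code; the class and method names are hypothetical, and the mean/variance are computed manually so that the biased (population) statistics of Equation 2 are used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficientKNConv(nn.Module):
    """Sketch of Equation 5 / Algorithm 1: convolve the raw input, then adjust the
    output with per-unit statistics instead of normalizing elements explicitly."""

    def __init__(self, ch_in, ch_out, kernel_size=(3, 3), stride=(2, 2),
                 padding=(1, 1), p=0.05, eps=1e-5, bias=True):
        super().__init__()
        self.conv = nn.Conv2d(ch_in, ch_out, kernel_size, stride, padding, bias=bias)
        self.k, self.s, self.pad, self.p, self.eps = kernel_size, stride, padding, p, eps

    def kn_mean_var(self, x):
        # Mean/variance of each dropped-out unit, reshaped to the conv output grid.
        n, _, h, w = x.shape
        units = F.unfold(x, self.k, stride=self.s, padding=self.pad)   # (n, c*kh*kw, L)
        dropped = F.dropout(units, p=self.p, training=self.training)
        mean = dropped.mean(dim=1)                                     # (n, L)
        var = ((dropped - mean.unsqueeze(1)) ** 2).mean(dim=1)
        h_out = (h + 2 * self.pad[0] - self.k[0]) // self.s[0] + 1
        w_out = (w + 2 * self.pad[1] - self.k[1]) // self.s[1] + 1
        return mean.view(n, 1, h_out, w_out), var.view(n, 1, h_out, w_out)

    def forward(self, x):
        out = F.conv2d(x, self.conv.weight, None, self.s, self.pad)    # U * Z, bias added last
        mean, var = self.kn_mean_var(x)
        w_sum = self.conv.weight.sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # sum of kernel weights per filter
        out = (out - mean * w_sum) / torch.sqrt(var + self.eps)        # Equation 5 adjustment
        if self.conv.bias is not None:
            out = out + self.conv.bias.view(1, -1, 1, 1)
        return out
```

Because the convolution and the unfold use identical kernel size, stride, and padding, every spatial location of the convolution output lines up with exactly one normalization unit, which is what makes the closed-form adjustment valid.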
KNResNets comprise three types of blocks: the residual *basic* block, the residual *bottleneck* block, and the *transitional* block (Figure 3). Basic blocks contain two KNConv layers with a kernel size of 2×2, whereas bottleneck blocks consist of three KNConv layers with kernel sizes of 2×2, 3×3, and 2×2, respectively. The stride value in both basic and bottleneck blocks is 1×1. The padding values of the first and last KNConv layers, however, are 1×1 and zero so that the width and height of the output remain identical to the input's (a necessary condition for residual blocks with identity shortcuts). The middle KNConv layer in bottleneck blocks uses 1×1 padding. Transitional blocks include a KNConv layer with kernel size of 2×2 and stride of 1×1 to increase the number of filters, and a max-pooling layer with kernel size and stride of 2×2 to downsample the input. We propose the KNResNet-18, KNResNet-34, and KNResNet-50 architectures based on the aforementioned block types (Figure 5 in Appendix A). KNResNet-18/34 uses basic and transitional blocks, while KNResNet-50 mainly employs bottleneck and transitional blocks. For semantic segmentation, we utilize KNResNet-18/34/50 as the backbone (Figure 6 in Appendix A), but the kernel size of the KNConv and max-pooling layers in basic and transitional blocks is 3×3 instead of 2×2.

## 4 Evaluation

We compare the performance of KNResNets to the BatchNorm, GroupNorm, LayerNorm, and LocalContextNorm counterparts. For image classification, we do not include LocalContextNorm in our evaluation because its performance is similar to GroupNorm's (Ortiz et al., 2020). The experimental evaluation is divided into four categories: (I) batch size-dependent performance analysis, (II) image classification on ImageNet, (III) semantic segmentation on Cityscapes, and (IV) differentially private image classification on ImageNet32×32. We adopt the original implementation of ResNet-18/34/50 from PyTorch (Paszke et al., 2019), and the PreactResNet-18/34/50 (He et al., 2016b) implementation from Kuang (2021). These architectures are based on BatchNorm. For the GroupNorm/LocalContextNorm related models, BatchNorm is replaced by GroupNorm/LocalContextNorm. Regarding LayerNorm based architectures, GroupNorm with a number of groups of 1 (equivalent to LayerNorm) is substituted for BatchNorm. The number of groups for GroupNorm is 32 (Wu & He, 2018). The number of groups and window size for LocalContextNorm are 2 and 227×227, respectively (Ortiz et al., 2020). For low-resolution datasets (CIFAR-100 and ImageNet32×32), we replace the first 7×7 convolutional layer with a 3×3 convolutional layer and remove the following max-pooling layer. Moreover, we insert a normalization layer followed by an activation function before the last average-pooling layer in the PreactResNet architectures, akin to KNResNets (Figure 5 in Appendix A). The aforementioned modifications considerably enhance the accuracy of the competitors. For semantic segmentation, we employ the fully convolutional network architecture (Long et al., 2015a) with BatchNorm, GroupNorm, LayerNorm, and LocalContextNorm based ResNet-18/34/50 as the backbone. For KNResNets, we use fully convolutional versions of KNResNet-18/34/50 (Figure 6 in Appendix A).
| Model | Normalization | Parameters | B=2 | B=32 | B=256 |
|-------|---------------|------------|-----|------|-------|
| ResNet-18-LN | LayerNorm | 11.220 M | 72.68±0.22 | 73.17±0.16 | 71.99±0.45 |
| PreactResNet-18-LN | LayerNorm | 11.220 M | 73.51±0.10 | 73.36±0.15 | 72.91±0.07 |
| ResNet-18-GN | GroupNorm | 11.220 M | 74.62±0.12 | 74.46±0.05 | 74.46±0.08 |
| PreactResNet-18-GN | GroupNorm | 11.220 M | 74.82±0.24 | 74.74±0.44 | 74.62±0.36 |
| ResNet-18-BN | BatchNorm | 11.220 M | 72.11±0.25 | 78.52±0.20 | 77.72±0.04 |
| PreactResNet-18-BN | BatchNorm | 11.220 M | 72.57±0.19 | 78.32±0.09 | 77.83±0.16 |
| KNResNet-18 (ours) | KernelNorm | 11.216 M | 79.10±0.10 | 79.29±0.02 | 78.84±0.10 |
| ResNet-34-LN | LayerNorm | 21.328 M | 73.74±0.26 | 73.88±0.37 | 72.48±0.57 |
| PreactResNet-34-LN | LayerNorm | 21.328 M | 74.79±0.13 | 74.34±0.42 | 73.10±0.42 |
| ResNet-34-GN | GroupNorm | 21.328 M | 75.76±0.14 | 75.72±0.06 | 75.44±0.27 |
| PreactResNet-34-GN | GroupNorm | 21.328 M | 75.82±0.05 | 75.85±0.28 | 75.76±0.25 |
| ResNet-34-BN | BatchNorm | 21.328 M | 73.06±0.23 | 79.21±0.09 | 78.27±0.19 |
| PreactResNet-34-BN | BatchNorm | 21.328 M | 72.20±0.19 | 79.09±0.03 | 78.59±0.24 |
| KNResNet-34 (ours) | KernelNorm | 21.323 M | 79.28±0.09 | 79.53±0.15 | 79.16±0.21 |
| ResNet-50-LN | LayerNorm | 23.705 M | 75.83±0.25 | 75.74±0.14 | 74.37±0.58 |
| PreactResNet-50-LN | LayerNorm | 23.705 M | 74.28±0.31 | 74.57±0.32 | 73.41±0.15 |
| ResNet-50-GN | GroupNorm | 23.705 M | 77.03±0.62 | 77.02±0.08 | 74.79±0.14 |
| PreactResNet-50-GN | GroupNorm | 23.705 M | 75.67±0.27 | 76.08±0.18 | 75.52±0.13 |
| ResNet-50-BN | BatchNorm | 23.705 M | 71.02±0.15 | 80.39±0.06 | 77.89±0.06 |
| PreactResNet-50-BN | BatchNorm | 23.705 M | 70.83±0.41 | 80.28±0.15 | 78.88±0.21 |
| KNResNet-50 (ours) | KernelNorm | 23.682 M | 80.24±0.18 | 80.18±0.10 | 80.09±0.26 |

Table 1: Test accuracy versus batch size on **CIFAR-100**.

## 4.1 Batch Size-Dependent Performance Analysis

Dataset. The CIFAR-100 dataset consists of 50000 train and 10000 test samples of shape 32×32 from 100 classes. We adopt the data preprocessing and augmentation scheme widely used for the dataset (Huang et al., 2017a; He et al., 2016b;a): horizontally flipping and randomly cropping the samples after padding them. The cropping and padding sizes are 32×32 and 4×4, respectively. Additionally, the feature values are divided by 255 for KNResNets, whereas they are normalized using the mean and standard deviation (SD) of the dataset for the competitors.

Training. The models are trained for 150 epochs using the cosine annealing scheduler (Loshchilov & Hutter, 2017) with a learning rate decay of 0.01. The optimizer is SGD with momentum of 0.9 and weight decay of 0.0005. For learning rate tuning, we run a given experiment with an initial learning rate of 0.2, divide it by 2, and re-run the experiment. We continue this procedure until finding the best learning rate (Table 5 in Appendix B). Then, we repeat the experiment three times, and report the mean and SD over the runs.

Results. Table 1 lists the test accuracy values achieved by the models for different batch sizes.
According to the table, (I) KNResNets dramatically outperform their BatchNorm counterparts for a batch size of 2, (II) KNResNets deliver highly competitive accuracy values compared to BatchNorm-based models with batch sizes of 32 and 256, and (III) KNResNets achieve significantly higher accuracy than the batch-independent competitors (LayerNorm and GroupNorm) for all considered batch sizes.

## 4.2 Image Classification On Imagenet

Dataset. The ImageNet dataset contains around 1.28 million training and 50000 validation images. Following the data preprocessing and augmentation scheme from TorchVision (2023a), the train images are horizontally flipped and randomly cropped to 224×224. The test images are first resized to 256×256, and then center cropped to 224×224. The feature values are normalized using the mean and SD of ImageNet.

Training. We follow the experimental setting from Wu & He (2018) and use the multi-GPU training script from TorchVision (2023a) to train KNResNets and the competitors. We train all models for 100 epochs with a total batch size of 256 (8 GPUs with a batch size of 32 per GPU) using a learning rate of 0.1, which is divided by 10 at epochs 30, 60, and 90. The optimizer is SGD with momentum of 0.9 and weight decay of 0.0001.

| Model | Normalization | Parameters | Top-1 accuracy |
|-------|---------------|------------|----------------|
| ResNet-18-LN | LayerNorm | 11.690 M | 68.34 |
| ResNet-18-GN | GroupNorm | 11.690 M | 68.93 |
| ResNet-18-BN | BatchNorm | 11.690 M | 70.28 |
| KNResNet-18 (ours) | KernelNorm | 11.685 M | 71.17 |
| ResNet-34-LN | LayerNorm | 21.798 M | 71.64 |
| ResNet-34-GN | GroupNorm | 21.798 M | 72.63 |
| ResNet-34-BN | BatchNorm | 21.798 M | 73.99 |
| KNResNet-34 (ours) | KernelNorm | 21.793 M | 74.60 |
| ResNet-50-LN | LayerNorm | 25.557 M | 73.80 |
| ResNet-50-GN | GroupNorm | 25.557 M | 75.92 |
| ResNet-50-BN | BatchNorm | 25.557 M | 76.41 |
| KNResNet-50 (ours) | KernelNorm | 25.556 M | 76.54 |

Table 2: Image classification on **ImageNet**.

Results. Table 2 reports the Top-1 accuracy values on ImageNet for the different architectures. As shown in the table, (I) KNResNet-18 and KNResNet-34 outperform their BatchNorm counterparts by around 0.9% and 0.6%, respectively, (II) KNResNet-18/34/50 achieve higher accuracy (by about 0.6%-3.0%) than the LayerNorm and GroupNorm based competitors, and (III) KNResNet-50 delivers almost the same accuracy as the batch normalized ResNet-50.

## 4.3 Semantic Segmentation On Cityscapes

Dataset. The Cityscapes dataset contains 2975 train and 500 validation images from 30 classes, 19 of which are employed for evaluation. Following Sun et al. (2019); Ortiz et al. (2020), the train samples are randomly cropped from 2048×1024 to 1024×512, horizontally flipped, and randomly scaled in the range of [0.5, 2.0]. The models are tested on the validation images, which are of shape 2048×1024.
| Model | Normalization | Parameters | mIoU | Pixel accuracy | Mean accuracy |
|-------|---------------|------------|------|----------------|---------------|
| ResNet-18-LN | LayerNorm | 13.547 M | 59.10±0.46 | 92.42±0.17 | 69.43±0.58 |
| ResNet-18-GN | GroupNorm | 13.547 M | 62.33±0.52 | 93.23±0.01 | 71.58±0.55 |
| ResNet-18-LCN | LocalContextNorm | 13.547 M | 62.25±0.67 | 92.99±0.06 | 71.59±0.68 |
| ResNet-18-BN | BatchNorm | 13.547 M | 63.90±0.06 | 93.77±0.02 | 73.15±0.14 |
| KNResNet-18 (ours) | KernelNorm | 13.525 M | 64.37±0.14 | 93.73±0.01 | 73.46±0.12 |
| ResNet-34-LN | LayerNorm | 23.655 M | 60.19±0.32 | 92.73±0.17 | 70.12±0.33 |
| ResNet-34-GN | GroupNorm | 23.655 M | 64.21±0.58 | 93.59±0.07 | 74.32±0.49 |
| ResNet-34-LCN | LocalContextNorm | 23.655 M | 64.75±0.38 | 93.31±0.09 | 74.25±0.37 |
| ResNet-34-BN | BatchNorm | 23.655 M | 66.94±0.34 | 94.27±0.03 | 76.50±0.41 |
| KNResNet-34 (ours) | KernelNorm | 23.399 M | 67.61±0.17 | 94.13±0.05 | 76.58±0.19 |
| ResNet-50-LN | LayerNorm | 32.955 M | 57.88±0.84 | 92.31±0.21 | 68.25±0.75 |
| ResNet-50-GN | GroupNorm | 32.955 M | 62.14±0.68 | 93.34±0.04 | 71.66±0.64 |
| ResNet-50-LCN | LocalContextNorm | 32.955 M | 64.03±0.02 | 93.07±0.14 | 73.40±0.03 |
| ResNet-50-BN | BatchNorm | 32.955 M | 65.19±0.50 | 93.98±0.03 | 74.65±0.62 |
| KNResNet-50 (ours) | KernelNorm | 32.874 M | 68.02±0.13 | 94.22±0.04 | 77.03±0.05 |

Table 3: Semantic segmentation on **Cityscapes**.

Training. Following Sun et al. (2019); Ortiz et al. (2020), we train the models with a learning rate of 0.01, which is gradually decayed by a power of 0.9. The models are trained for 500 epochs using 2 GPUs with a batch size of 8 per GPU. The optimizer is SGD with momentum of 0.9 and weight decay of 0.0005. Notice that we use SyncBatchNorm instead of BatchNorm in the batch normalized models.

Results. Table 3 lists the mean of class-wise intersection over union (mIoU), pixel accuracy, and mean of class-wise pixel accuracy for the different architectures. According to the table, (I) KNResNet-18/34 and the BatchNorm-based counterparts achieve highly competitive mIoU, pixel accuracy, and mean accuracy, whereas KNResNet-50 delivers considerably higher mIoU and mean accuracy than the batch normalized ResNet-50, and (II) KNResNets significantly outperform the batch-independent competitors (the LayerNorm, GroupNorm, and LocalContextNorm based models) in terms of all considered performance metrics. Surprisingly, the ResNet-50 based models perform worse than their ResNet-34 counterparts for the competitors, possibly because of the smaller kernel size employed in ResNet-50 compared to ResNet-34 (1×1 instead of 3×3).

## 4.4 Differentially Private Image Classification On Imagenet32×32

Dataset. ImageNet32×32 is the down-sampled version of ImageNet, where all images are resized to 32×32. For preprocessing, the feature values are divided by 255 for KNResNet-18, while they are normalized by the mean and SD of ImageNet for the layer and group normalized ResNet-18.

Training. We train KNResNet-18 as well as the GroupNorm and LayerNorm counterparts for 100 epochs using the SGD optimizer with zero momentum and zero weight decay, where the learning rate is decayed by a factor of 2 at epochs 70 and 90. Note that BatchNorm is inapplicable to differential privacy. All models use the Mish activation (Misra, 2019). For parameter tuning, we consider learning rate values of {2.0, 3.0, 4.0}, clipping values of {1.0, 2.0}, and batch sizes of {2048, 4096, 8192}.
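In code, this DP-SGD setup might be wired up roughly as follows. This is a sketch assuming the Opacus PrivacyEngine interface; build_knresnet18 and train_loader are hypothetical placeholders, and the hyper-parameter values shown are the best-performing ones reported next.

```python
import torch
from opacus import PrivacyEngine

model = build_knresnet18(num_classes=1000)        # hypothetical constructor for our KNResNet-18 variant
optimizer = torch.optim.SGD(model.parameters(), lr=4.0, momentum=0.0, weight_decay=0.0)

privacy_engine = PrivacyEngine(accountant="rdp")  # RDP accountant (Mironov, 2017)
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,                     # assumed ImageNet32x32 loader, batch size 8192
    epochs=100,
    target_epsilon=8.0,
    target_delta=8e-7,
    max_grad_norm=2.0,                            # per-sample gradient clipping bound
)
# Training then proceeds as usual; Opacus computes per-sample gradients, clips them,
# and injects calibrated noise -- valid only because the model is batch-independent.
```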
We observe that a learning rate of 4.0, clipping value of 2.0, and batch size of 8192 achieve the best performance for all models. Our differentially private training is based on DP-SGD (Abadi et al., 2016) from the Opacus library (Yousefpour et al., 2021) with ε=8.0 and δ=8×10⁻⁷. The privacy accountant is RDP (Mironov, 2017).

| Model | Normalization | Parameters | Top-1 accuracy |
|-------|---------------|------------|----------------|
| ResNet-18-BN | BatchNorm | 11.682 M | NA |
| ResNet-18-LN | LayerNorm | 11.682 M | 20.81 |
| ResNet-18-GN | GroupNorm | 11.682 M | 20.99 |
| KNResNet-18 (ours) | KernelNorm | 11.678 M | 22.01 |

Table 4: Differentially private image classification on **ImageNet32×32**.

Results. Table 4 lists the Top-1 accuracy values on ImageNet32×32 for the different models trained in the aforementioned differentially private learning setting. As can be seen in the table, KNResNet-18 achieves significantly higher accuracy than the layer and group normalized ResNet-18.

## 5 Discussion

KNResNets incorporate only batch-independent layers such as the proposed KernelNorm and KNConv layers into their architectures. Thus, they perform well with very small batch sizes (Table 1) and are applicable to differentially private learning (Table 4), neither of which is the case for the batch normalized models. Unlike the batch-independent competitors such as the LayerNorm, GroupNorm, and LocalContextNorm based ResNets, KNResNets provide higher or very competitive performance compared to the batch normalized counterparts in image classification and semantic segmentation (Tables 1-3). Moreover, KNResNets converge faster than the batch, layer, and group normalized ResNets in non-private and differentially private image classification, as shown in Figure 4. These results verify our key claim: the kernel normalized models combine the performance benefit of the batch normalized counterparts with the batch-independence advantage of the layer, group, and local-context normalized competitors.

The key property of kernel normalization is the overlapping normalization units, which allow kernel normalized models to *extensively* take advantage of the spatial correlation among the elements during normalization. Additionally, this property enables KernelNorm to be combined with the convolutional layer effectively as a single KNConv layer (Equation 5 and Algorithm 1). The other normalization layers lack this property. BatchNorm, LayerNorm, and GroupNorm are global normalization layers, which completely ignore the spatial correlation of the elements. LocalContextNorm *partially* considers the spatial correlation during normalization because it has no overlapping normalization units, and it must use very large window sizes to achieve practical computational efficiency. Our evaluations illustrate that this characteristic of kernel normalization leads to significant improvements in the convergence rate and accuracy achieved by KNResNets.

![10_image_0.png](10_image_0.png)

Figure 4: KNResNets converge faster than the competitors. Notice that BatchNorm is inapplicable to differential privacy; B: batch size.

Normalizing the feature values of the input images using the mean and SD of the whole dataset is a popular data preprocessing technique, which enhances the performance of the existing CNNs due to feeding the normalized values into the first convolutional layer.
This is unnecessary for KNConvNets because all KNConv layers, including the first one, are self-normalizing (they normalize the input first, and then compute the convolution). This makes the data preprocessing simpler during training of KNConvNets.

Compared to the corresponding non-normalized networks, the accuracy gain of KNResNets originates from normalization using KernelNorm and the regularization effect of dropout. To investigate the contribution of each factor to the accuracy gain, we train KNResNet-50 on CIFAR-100 with a batch size of 32 in three cases: (I) without KernelNorm, (II) with KernelNorm and without dropout, and (III) with KernelNorm and dropout. The models achieve accuracy values of 71.48%, 78.32%, and 80.18% in (I), (II), and (III), respectively. Given that, normalization using KernelNorm provides an accuracy gain of around 7.0% compared to the non-normalized model. The regularization effect of dropout delivers an additional accuracy gain of about 2.0%.

Prior studies show that normalization layers can reduce the sharpness of the loss landscape, improving the generalization of the model (Lyu et al., 2022; Keskar et al., 2016). Motivated by this, we train LayerNorm, GroupNorm, and BatchNorm based ResNet-18 as well as KNResNet-18 on CIFAR-10 to compare the generalization ability and loss landscape of the different normalization methods (experimental details in Appendix C). The layer, group, batch, and kernel normalized models achieve test accuracies of 90.32%, 90.58%, 92.11%, and 93.27%, respectively. Figure 7 (Appendix C) visualizes the loss landscape for the different normalization layers. According to the figure, KNResNet-18 provides a flatter loss landscape compared to the batch normalized ResNet-18, which, in turn, has a smoother loss landscape than the group and layer normalized counterparts. These results indicate that KNResNet-18 and the BatchNorm-based ResNet-18, with their flatter loss landscapes, provide higher generalizability (test accuracy) than the LayerNorm/GroupNorm based ResNet-18.

There is a prior work known as convolutional normalization (ConvNorm) (Liu et al., 2021), which, similar to this study, takes the convolutional structure into account during normalization. ConvNorm performs normalization on the kernel weights of the convolutional layer (weight normalization). Our proposed layers, on the other hand, normalize the input tensor (input normalization). In terms of performance on ImageNet, the accuracy of KNResNet-18 is higher than that of the ConvNorm+BatchNorm based ResNet-18 reported in Liu et al. (2021) (71.17% vs. 70.34%).

We explore the effectiveness of KernelNorm on the *ConvNext* architecture (Liu et al., 2022) in addition to ResNets. ConvNext is a convolutional architecture, but it is heavily inspired by vision transformers (Dosovitskiy et al., 2020): it uses linear (fully-connected) layers extensively and employs LayerNorm instead of BatchNorm as the normalization layer. To draw the comparison, we train the original *ConvNextTiny* model from PyTorch and the corresponding kernel normalized version (both with around 28.5M parameters) on ImageNet using the training recipe and code from TorchVision (2023b) (more experimental details in Appendix B). The original model, which is based on LayerNorm, provides an accuracy of 80.87%. The kernel normalized counterpart, on the other hand, achieves an accuracy of 81.25%, which is 0.38% higher than the baseline.
Given that, KernelNorm-based models are effective not only with ResNets, but also with more recent architectures such as ConvNext, which incorporates several architectural elements from vision transformers into convolutional networks.

We also compare KNResNets with their BatchNorm-based counterparts from the computational efficiency and memory usage perspectives (Tables 6 and 7 in Appendix D). For the batch normalized models, we employ two different implementations of the BatchNorm layer: the CUDA implementation (PyTorch, 2023a) and a custom implementation (D2L, 2023) using primitives provided by PyTorch. Because the underlying layers of KNResNets (i.e. KernelNorm and KNConv) are implemented using primitives from PyTorch, we directly compare KNResNets with ResNets based on the latter implementation of BatchNorm to have a fair comparison. According to Table 6, KNResNet-50 (our largest model) is slower than the batch normalized ResNet-50 by only a factor of 1.66. This slowdown is acceptable given that KernelNorm is a local normalization layer with far more normalization units than BatchNorm, a global normalization layer (Figure 1). The CUDA-based implementation of BatchNorm, moreover, is faster than the one based on primitives from PyTorch by a factor of 1.8. We can expect a similar speedup for KNResNets if the underlying layers are implemented in CUDA. Additionally, the memory usage of KNResNets is higher than that of the BatchNorm counterparts, as expected, which relates to the current implementation of the KNConv layer (more details in Appendix D). Notice that the most efficient implementation of KNResNets is not the focus of this study, and is left as a future line of improvement. Our current implementation, however, is efficient enough to allow training KNResNet-18/34/50 on large datasets such as ImageNet.

## 6 Conclusion And Future Work

BatchNorm considerably enhances the model convergence rate and accuracy, but it delivers poor performance with small batch sizes. Moreover, it is unsuitable for differentially private learning due to its dependence on the batch statistics. To address these challenges, we propose two novel batch-independent layers called KernelNorm and KNConv, and employ them as the main building blocks of KNConvNets, and the corresponding residual networks referred to as KNResNets. Through extensive experimentation, we show that KNResNets deliver higher or very competitive accuracy compared to their BatchNorm counterparts in image classification and semantic segmentation. Furthermore, they consistently outperform the batch-independent counterparts such as LayerNorm, GroupNorm, and LocalContextNorm in non-private and differentially private learning settings. To our knowledge, our work is the first to combine the batch-independence of LayerNorm/GroupNorm/LocalContextNorm with the performance advantage of BatchNorm in the context of convolutional networks. The performance investigation of KNResNets for object detection, the design of KNConvNets corresponding to other popular architectures such as DenseNets (Huang et al., 2017a), and optimized implementations of KernelNorm and KNResNets in CUDA are promising directions for future studies.

## Acknowledgement

We would like to thank *Javad TorkzadehMahani* for assisting with the implementations and for helpful discussions on the computationally-efficient version of the kernel normalized convolutional layer.
We would also like to thank *Sameer Ambekar* for his helpful suggestion regarding fairer comparison among the normalization layers from the computational efficiency perspective. This project was funded by the German Ministry of Education and Research as part of the PrivateAIM Project, by the Bavarian State Ministry for Science and the Arts, and by the Medical Informatics Initiative. The authors of this work take full responsibility for its content. ## References Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer* and communications security, pp. 308–318, 2016. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint* arXiv:1607.06450, 2016. Nitin Bansal, Xiaohan Chen, and Zhangyang Wang. Can we gain more from orthogonality regularizations in training deep networks? *Advances in Neural Information Processing Systems*, 31, 2018. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. *IEEE transactions on neural networks*, 5(2):157–166, 1994. AB Bonds. Role of inhibition in the specification of orientation selectivity of cells in the cat striate cortex. Visual neuroscience, 2(1):41–55, 1989. Andrew Brock, Soham De, and Samuel L Smith. Characterizing signal propagation to close the performance gap in unnormalized resnets. *arXiv preprint arXiv:2101.08692*, 2021a. Andy Brock, Soham De, Samuel L Smith, and Karen Simonyan. High-performance large-scale image recognition without normalization. In *International Conference on Machine Learning*, pp. 1059–1071. PMLR, 2021b. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3213–3223, 2016. D2L. Batch normalization. https://d2l.ai/chapter_convolutional-modern/batch-norm.html, 2023. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9:211–407, 2014. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 249– 256. JMLR Workshop and Conference Proceedings, 2010. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016b. David J Heeger. Normalization of cell responses in cat striate cortex. 
*Visual neuroscience*, 9(2):181–197, 1992.

Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017a.

Lei Huang, Xianglong Liu, Yang Liu, Bo Lang, and Dacheng Tao. Centered weight normalization in accelerating training of deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2803–2811, 2017b.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. PMLR, 2015.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In *International Conference on Learning Representations*, 2016.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25, 2012.

Liu Kuang. PyTorch models for CIFAR-10/100. https://github.com/kuangliu/pytorch-cifar/, 2021.

Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. *Neural computation*, 1(4):541–551, 1989.

Boyi Li, Felix Wu, Kilian Q Weinberger, and Serge Belongie. Positional normalization. Advances in Neural Information Processing Systems, 32, 2019.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. *Advances in neural information processing systems*, 31, 2018a.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. https://github.com/tomgoldstein/loss-landscape, 2018b.

Sheng Liu, Xiao Li, Yuexiang Zhai, Chong You, Zhihui Zhu, Carlos Fernandez-Granda, and Qing Qu. Convolutional normalization: Improving deep convolutional network robustness and training. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021.

Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11976–11986, 2022.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3431–3440, 2015a.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3431–3440, 2015b.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In *International Conference on Learning Representations*, 2017.

Kaifeng Lyu, Zhiyuan Li, and Sanjeev Arora. Understanding the generalization benefit of normalization layers: Sharpness reduction. *Advances in Neural Information Processing Systems*, 35:34689–34708, 2022.

Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th computer security foundations symposium (CSF), pp. 263–275. IEEE, 2017.

Diganta Misra.
Mish: A self regularized non-monotonic activation function. *arXiv preprint arXiv:1908.08681*, 2019.

Anthony Ortiz, Caleb Robinson, Dan Morris, Olac Fuentes, Christopher Kiekintveld, Md Mahmudulla Hassan, and Nebojsa Jojic. Local context normalization: Revisiting local normalization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11276–11285, 2020.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc., 2019.

PyTorch. Batch normalization. https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html, 2023a.

PyTorch. Unfold operation in pytorch. https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html, 2023b.

PyTorch. var_mean function in pytorch. https://pytorch.org/docs/stable/generated/torch.var_mean.html, 2023c.

Haozhi Qi, Chong You, Xiaolong Wang, Yi Ma, and Jitendra Malik. Deep isometric learning for visual recognition. In *International conference on machine learning*, pp. 7824–7835. PMLR, 2020.

Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Micro-batch training with batch-channel normalization and weight standardization. *arXiv preprint arXiv:1903.10520*, 2019.

Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, and Richard S. Zemel. Normalizing the normalizers: Comparing and extending network normalization schemes. In *International Conference on Learning Representations*, 2017.

Tim Salimans and Diederik P Kingma. Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, pp. 901–909, 2016.

Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? *Advances in neural information processing systems*, 31, 2018.

Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In *2nd International Conference on Learning Representations, ICLR 2014*, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1):1929–1958, 2014.

Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, and Jingdong Wang. High-resolution representations for labeling pixels and regions. arXiv preprint arXiv:1904.04514, 2019.

TorchVision. Classification training script in pytorch. https://github.com/pytorch/vision/tree/main/references/classification#resnet, 2023a.

TorchVision. Classification training script in pytorch. https://github.com/pytorch/vision/tree/main/references/classification#convnext, 2023b.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. *arXiv preprint arXiv:1607.08022*, 2016.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.
Attention is all you need. *Advances in neural information processing systems*, 30, 2017.

Jiayun Wang, Yubei Chen, Rudrasis Chakraborty, and Stella X Yu. Orthogonal convolutional neural networks. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 11505–11515, 2020.

Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pp. 3–19, 2018.

Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, and Ilya Mironov. Opacus: User-friendly differential privacy library in PyTorch. *arXiv preprint arXiv:2109.12298*, 2021.

## A KNResNet Architectures

![16_image_0.png](16_image_0.png)

![16_image_1.png](16_image_1.png)

![16_image_2.png](16_image_2.png)

Figure 5: **KNResNets** for **image classification**: The dropout probabilities of the KNConv and KernelNorm layers are 0.05 and 0.25, respectively. For low-resolution images (e.g. CIFAR-100 with image shape of 32×32), the first KNConv layer is replaced by a KNConv layer with kernel size 3×3, stride 1×1, and padding 1×1, and the following max-pooling layer is removed. The kX (k=2/3/4/5) notation above the blocks means k blocks of that type. The numbers above the arrows indicate the number of input/output channels of the first/last KNConv layer in the block. For KNResNet-18, the number of output channels of the first KNConv layer (or the number of input channels of the second KNConv layer) is 256, 256, 512, and 724 for the first, second, third, and fourth set of basic blocks, respectively. For KNResNet-34, it is 256, 320, 640, and 843. For KNResNet-50, the numbers of output channels of the first and second KNConv layers are 64, 128, 201, and 512 in the first, second, third, and fourth set of bottleneck blocks, respectively. In KNResNet-50, the last transitional block and the last set of residual blocks use KNConv-1×1 instead of KNConv-2×2 to keep the number of parameters comparable to the original ResNet-50.

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

![17_image_2.png](17_image_2.png)

Figure 6: **KNResNets** for **semantic segmentation**: The dropout probabilities of the KNConv and KernelNorm layers are 0.1 and 0.5, respectively. For KNResNet-18, the number of output channels of the first KNConv layer (or the number of input channels of the second KNConv layer) is 128, 256, 512, and 625 for the first, second, third, and fourth set of basic blocks. For KNResNet-34, they are 128, 256, 256, and 512, respectively. For KNResNet-50, the numbers of input/output channels of the middle KNConv layer are 128, 256, 458, and 512 for the first, second, third, and fourth set of bottleneck blocks. Unlike their counterparts for image classification, the KNConv and max-pooling layers in basic and transitional blocks employ a kernel size of 3×3 instead of 2×2.

## B Reproducibility
| Model | Normalization | B=2 | B=32 | B=256 |
|-------|---------------|-----|------|-------|
| ResNet-18-LN | LayerNorm | 0.0015625 | 0.0125 | 0.05 |
| PreactResNet-18-LN | LayerNorm | 0.0015625 | 0.0125 | 0.05 |
| ResNet-18-GN | GroupNorm | 0.0015625 | 0.025 | 0.1 |
| PreactResNet-18-GN | GroupNorm | 0.0015625 | 0.025 | 0.1 |
| ResNet-18-BN | BatchNorm | 0.00078125 | 0.025 | 0.2 |
| PreactResNet-18-BN | BatchNorm | 0.00078125 | 0.025 | 0.2 |
| KNResNet-18 | KernelNorm | 0.0015625 | 0.05 | 0.2 |
| ResNet-34-LN | LayerNorm | 0.0015625 | 0.0125 | 0.05 |
| PreactResNet-34-LN | LayerNorm | 0.0015625 | 0.0125 | 0.05 |
| ResNet-34-GN | GroupNorm | 0.0015625 | 0.025 | 0.1 |
| PreactResNet-34-GN | GroupNorm | 0.0015625 | 0.025 | 0.1 |
| ResNet-34-BN | BatchNorm | 0.00078125 | 0.025 | 0.1 |
| PreactResNet-34-BN | BatchNorm | 0.000390625 | 0.025 | 0.2 |
| KNResNet-34 | KernelNorm | 0.0015625 | 0.05 | 0.2 |
| ResNet-50-LN | LayerNorm | 0.00078125 | 0.0125 | 0.05 |
| PreactResNet-50-LN | LayerNorm | 0.0015625 | 0.0125 | 0.05 |
| ResNet-50-GN | GroupNorm | 0.00078125 | 0.0125 | 0.05 |
| PreactResNet-50-GN | GroupNorm | 0.0015625 | 0.025 | 0.1 |
| ResNet-50-BN | BatchNorm | 0.000390625 | 0.0125 | 0.1 |
| PreactResNet-50-BN | BatchNorm | 0.000195313 | 0.0125 | 0.2 |
| KNResNet-50 | KernelNorm | 0.0015625 | 0.025 | 0.2 |

Table 5: Learning rate values achieving the highest accuracy on **CIFAR-100**.

ConvNext on ImageNet: To train the LayerNorm and KernelNorm based ConvNextTiny models on ImageNet, we employ the code and recipe from TorchVision (2023b), where the models are trained with a total batch size of 1024 using the AdamW optimizer, a learning rate of 0.001, and the cosine learning rate scheduler for 600 epochs. Note that we use 4 GPUs with a batch size of 256 per GPU rather than 8 GPUs with a batch size of 128 per GPU in the original recipe due to resource limitations.

## C Loss Landscape

![19_image_0.png](19_image_0.png)

Figure 7: **Loss landscape** of different normalization layers: Kernel normalized ResNet-18 has a flatter loss landscape compared to the batch, group, and layer normalized counterparts on CIFAR-10.

ResNet-18 on CIFAR-10: To compare the generalization ability and loss landscape of different normalization layers, we train BatchNorm, GroupNorm, LayerNorm, and KernelNorm based ResNet-18 on CIFAR-10. All models are trained for 70 epochs using a batch size of 128 and tuned over learning rate values of {0.05, 0.1}. The weight decay is zero. The optimal learning rate is 0.05/0.05/0.1/0.1 for the layer/group/batch/kernel normalized ResNet-18. The preprocessing and augmentation scheme and the other training settings are the same as in the CIFAR-100 experiments in Section 4. We employ the source code from Li et al. (2018a;b) to visualize the loss landscape in Figure 7.
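The visualization follows the filter-normalized random-direction technique of Li et al. (2018a). A compact 1-D variant of the idea (our sketch, not the cited repository's code) evaluates the loss along a single filter-normalized direction:

```python
import torch

def loss_along_direction(model, loss_fn, loader, alphas, device="cuda"):
    """Evaluate L(theta + alpha * d) along one random, filter-normalized direction d."""
    theta = [p.detach().clone() for p in model.parameters()]
    direction = []
    for p in theta:
        d = torch.randn_like(p)
        if p.dim() > 1:                       # filter-wise rescaling (Li et al., 2018a)
            pn = p.flatten(1).norm(dim=1)
            dn = d.flatten(1).norm(dim=1)
            d = d * (pn / (dn + 1e-10)).view(-1, *([1] * (p.dim() - 1)))
        direction.append(d)
    losses = []
    model.eval()
    with torch.no_grad():
        for alpha in alphas:
            for p, t, d in zip(model.parameters(), theta, direction):
                p.copy_(t + alpha * d)        # move the weights along the direction
            total, count = 0.0, 0
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                total += loss_fn(model(x), y).item() * x.size(0)
                count += x.size(0)
            losses.append(total / count)
        for p, t in zip(model.parameters(), theta):
            p.copy_(t)                        # restore the trained weights
    return losses
```

A flatter curve of losses over alphas corresponds to the smoother landscapes seen for the kernel and batch normalized models in Figure 7; the full 2-D surfaces in the figure use two such directions.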
## D Running Time And Memory Usage

| Model | Normalization | Implementation | Training time | Inference time |
|-------|---------------|----------------|---------------|----------------|
| ResNet-50-BN | BatchNorm | CUDA | 13m 23s | 6s |
| ResNet-50-BN | BatchNorm | Primitives from PyTorch | 23m 49s | 10s |
| KNResNet-50 (ours) | KernelNorm | Primitives from PyTorch | 39m 33s | 19s |
| ResNet-34-BN | BatchNorm | CUDA | 9m 12s | 5s |
| ResNet-34-BN | BatchNorm | Primitives from PyTorch | 12m 46s | 5s |
| KNResNet-34 (ours) | KernelNorm | Primitives from PyTorch | 27m 15s | 12s |
| ResNet-18-BN | BatchNorm | CUDA | 5m 28s | 4s |
| ResNet-18-BN | BatchNorm | Primitives from PyTorch | 7m 46s | 4s |
| KNResNet-18 (ours) | KernelNorm | Primitives from PyTorch | 13m 58s | 7s |

Table 6: **Training and inference** time per epoch for ImageNet: The experiments are conducted with 8 NVIDIA A40 GPUs with a batch size of 32 per GPU; m: minutes, s: seconds.

| Model | Normalization | Implementation | Memory usage (GB) |
|-------|---------------|----------------|-------------------|
| ResNet-50-BN | BatchNorm | CUDA | 5.7 |
| ResNet-50-BN | BatchNorm | Primitives from PyTorch | 8.2 |
| KNResNet-50 (ours) | KernelNorm | Primitives from PyTorch | 13.6 |
| ResNet-34-BN | BatchNorm | CUDA | 3.6 |
| ResNet-34-BN | BatchNorm | Primitives from PyTorch | 4.4 |
| KNResNet-34 (ours) | KernelNorm | Primitives from PyTorch | 9.4 |
| ResNet-18-BN | BatchNorm | CUDA | 3.2 |
| ResNet-18-BN | BatchNorm | Primitives from PyTorch | 3.7 |
| KNResNet-18 (ours) | KernelNorm | Primitives from PyTorch | 7.2 |

Table 7: **Memory usage** on ImageNet: The experiments are conducted with a single NVIDIA RTX A6000 GPU with a batch size of 32; GB: Gigabytes.

The memory usage of KNResNets is higher than that of the BatchNorm counterparts. This observation is related to the current implementation of the KNConv layer, where the unfolding operation is performed in the kn_mean_var function (Algorithm 1) to compute the mean and variance of the units. We implemented KNConv in this fashion to avoid changing the CUDA implementation of the convolutional layer, which would require a huge engineering and implementation effort, and is outside the scope of our expertise. In a hypothetical implementation of KNConv in CUDA, it would be possible to compute the mean/variance of the units directly inside the convolutional layer and to completely remove the kn_mean_var function, substantially reducing the memory usage. This is because the units used to compute the convolution and the mean/variance are the same, and those units are already available in the convolutional layer implementation.

| | W/H=32×32 | W/H=64×64 | W/H=128×128 |
|---|-----------|-----------|-------------|
| Stride=1×1 | 2.44s / 2.80GB | 8.43s / 4.94GB | 33.45s / 13.44GB |
| Stride=2×2 | 0.64s / 2.24GB | 1.07s / 2.72GB | 2.91s / 4.59GB |
| Stride=3×3 | 0.58s / 2.16GB | 0.79s / 2.40GB | 1.71s / 3.28GB |

Table 8: **Inference time / memory usage** for different stride, width (W), and height (H) values. The experiments are carried out with a single NVIDIA RTX A6000 GPU using a batch size of 256 on the test set of CIFAR-100. The model contains four KNConv layers with kernel size of 3×3 and 256 channels; s: seconds, GB: Gigabytes.
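Measurements in the spirit of Tables 6-8 can be approximated with a small probe such as the one below. This is a sketch; the function name and iteration count are ours, and absolute numbers depend heavily on the hardware and PyTorch version.

```python
import time
import torch

def probe(model, input_shape=(256, 3, 32, 32), device="cuda", iters=10):
    """Rough inference-time / peak-memory probe for a model on random input."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    torch.cuda.reset_peak_memory_stats(device)
    with torch.no_grad():
        model(x)                               # warm-up: CUDA context, kernel selection
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize(device)
    seconds = (time.perf_counter() - start) / iters
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    return seconds, peak_gb
```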
Review 1: Summary: This paper proposes a new normalization method for Convolutional Neural Networks (ConvNets) termed 'kernel normalization.' This method calculates the mean and variance across a spatial region determined by the convolutional kernel's width, height, and input channels. The authors present empirical verifications in image classification, semantic segmentation, and differential privacy. They demonstrate that classification performance remains unaffected by a reduced training batch size, while performance in the other two tasks shows improvement. Strengths and Weaknesses: (+) This is definitely an interesting operator that can potentially replace layernorm or groupnorm in ConvNets. It aligns more closely with the design of convolutional layers and leverages the spatial correlation inherent in the network. (+) The writing is clear, straightforward, and easy to understand. However, the main weakness lies in the design of the experiments. From my understanding, this manuscript only presents the performance of the proposed method alongside previous ones. While this effectively showcases the capabilities of the proposed method, the information provided is not particularly insightful. More specifically: (-) The manuscript lacks discussion on the use of dropout. Unlike other input normalizations, this method employs a dropout layer before computing the means. While this approach is logical, there are no experiments demonstrating its impact on performance. In the experiments, it appears that the method performs better with smaller datasets like CIFAR and in the early stages of training (as seen in Figure 4). This outcome is often observed with larger regularization factors (such as dropout, higher weight decay). Hence, it's challenging to determine if dropout plays a more significant role than expected in these experiments. (-) Practically speaking, the training time is excessively long. It is standard practice to implement the proposed operator efficiently; failing to do so could substantially diminish the paper's contribution. (-) There is no discussion on the additional memory usage. The Kernel norm layer effectively duplicates overlapping regions of an input tensor, which should increase GPU memory consumption. The authors should provide a comparison of GPU memory usage for different kernel shapes. When the stride is small and the kernel size is large, the additional memory requirement could be even more significant. This aspect is crucial as it also influences how researchers might design the kernel shape when utilizing the proposed method. (-) Similarly, a discussion on the running time (at test time) is necessary. By the same logic that affects memory usage, the impact on running time should also be evaluated and discussed. Requested Changes: Requested Changes: 1. Ablation study on the usage of dropout layer. 2. The extra memory / runtime for the proposed method. 3. Extra memory / inference speed as a function of kernel stride / width / height. And the inline citation format is wrong. The authors might mix the usage of \citet and \citep. Missing related work on weight normalization: [1] Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?, NIPS 2018. [2] Orthogonal Convolutional Neural Networks, CVPR 2020. [3] Deep Isometric Learning for Visual Recognition, ICML 2020. Broader Impact Concerns: Not Applicable. 
==================================================

Review 2:

Summary: The paper tackles the batch-dependent limitation of Batch Normalization (a model with BN layers can fail when trained with a small batch size, and BN is not suitable under differentially private training) by introducing a new local, batch-independent normalization (KernelNorm). The authors further design kernel-normalized convolutional networks (KNConvNets), which are based on KernelNorm and can achieve performance competitive with or higher than BN and other normalization techniques (LayerNorm, GroupNorm) under different settings.

Strengths and Weaknesses: This paper is easy to follow: the authors first point out the limitation of BatchNorm, and then propose a new normalization method that can overcome the drawbacks of BN in the settings where BN fails. The empirical experiments verify that the proposed method can tackle the problems of BN with small batch sizes and under differentially private training. Besides, the computational cost of the suggested method is well estimated, with acceptable running time (which could be further improved, as the authors discuss). Here are some of my concerns that relate to the paper:

1. The authors indicate that overlap normalization can "extensively take into account the spatial correlation among the elements". In a traditional CNN, the convolutional layers (Convs) already have this trait, so what is the advantage of overlap normalization in the proposed method over Convs in this case?
2. LayerNorm achieves better performance in the ViT architecture; therefore, comparing the proposed method with LayerNorm under the ResNet architecture may not be fair.
3. Generalization ability: The normalization layer, besides helping the model train more stably and achieve better performance, is also known to improve the generalization of the learned model [1]. So it would be better if the authors could provide some comparative results on the generalization performance of their method versus other normalization techniques (for example, generalization could be assessed by visualizing the loss landscape of the learned model).

**Reference**

[1] Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. NeurIPS 2022

Requested Changes: Please refer to my concerns.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The paper introduces Kernel Normalized Convolutional Networks (KNConvNets), which aim to overcome the limitations of BatchNorm in convolutional neural networks (CNNs). It proposes two batch-independent layers, KernelNorm and KNConv, which take into account spatial correlations during normalization. These layers are integrated into KNConvNets and corresponding residual networks (KNResNets) to achieve competitive or superior performance compared to BatchNorm in image classification and semantic segmentation, even with small batch sizes.

Strengths and Weaknesses:

## Strengths
- The introduced concept of Kernel Normalized ConvNets is novel to the best of my knowledge
- The presented results are convincing, where the proposed method outperforms the compared methods
- The paper is well-written and easy to follow

## Weaknesses
- The evaluation is somewhat limited. The authors mainly study ResNet architectures. Hence it is not clear how generally applicable the kernel normalization is, or whether the authors found a technique that boosts only ResNet architectures.
I suggest that the authors demonstrate the superiority of kernel normalization on architectures like DenseNet, NasNet, MobileNet, EfficientNet, etc.
- The authors claim that the proposed kernel normalization improves DP; however, the DP evaluation is very limited. The authors should elaborate on why the DP performance is relatively low and how DP was evaluated. I also recommend aligning with a common DP evaluation strategy from the literature instead of simply stating the Top-1 accuracy.
- The training times of the proposed method are a possible bottleneck for practitioners.

Requested Changes: Please address the points in the weaknesses section.
- Evaluation of kernel normalization on models beyond ResNet
- More rigorous elaboration on DP, the benefit of KNResNet over other DP methods, and an extended evaluation of DP.

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Accept as is

Comment: All in all, this is a well-written piece of work that shows a novel and interesting way to do normalization in conv-nets. The authors show that it is competitive on a reasonable range of tasks, and the disadvantages pointed out by the reviewers do not seem insurmountable given the potential benefit this method could bring (in cases where other normalization techniques like batch norm are not working too well).

==================================================
# Remember To Correct The Bias When Using Deep Learning For Regression!

Anonymous authors Paper under double-blind review

## Abstract

When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points. We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severity of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.

## 1 Problem Statement

We consider regression models f : X → R^d of the form

$$f_{\boldsymbol{\theta}}(\boldsymbol{x}) = \boldsymbol{a}^{\mathsf{T}} h_{\boldsymbol{w}}(\boldsymbol{x}) + b \tag{1}$$

with parameters θ = (w, a, b) and x ∈ X. Here X is some arbitrary input space and w.l.o.g. we assume d = 1. The function h_w : X → R^F is parameterized by w and maps the input to an F-dimensional real-valued feature representation, a ∈ R^F, and b is a scalar. If X is a Euclidean space and h the identity, this reduces to standard linear regression. However, we are interested in the case where h_w is more complex. In particular,

- f_θ can be a deep neural network, where a and b are the parameters of the final output layer and h_w represents all other layers (e.g., a convolutional or point cloud architecture);
- h_w : X → R can be any regression model (e.g., a random forest or deep neural network) and f_θ denotes h_w with an additional wrapper, where a = 1 and initially b = 0.

In the following, we call b the distinct bias parameter of our model (although w may comprise many parameters typically referred to as bias parameters if h_w is a neural network). Given some training data D = {(x_1, y_1), ..., (x_N, y_N)} drawn from a distribution p_data over X × R, we assume that the model parameters θ are determined by minimizing the mean-squared-error (MSE)

$$\mathrm{MSE}_{\mathcal{D}}(f_{\boldsymbol{\theta}}) = \frac{1}{|\mathcal{D}|} \sum_{(\boldsymbol{x},y)\in\mathcal{D}} \left(y - f_{\boldsymbol{\theta}}(\boldsymbol{x})\right)^2, \tag{2}$$

potentially combined with some form of regularization. Typically, the goal is to achieve a low expected error MSE(f_θ) = E_{(x,y)∼p_data}[(y − f_θ(x))²] = E[MSE_{D_test}(f_θ)], where the second expectation is over all test data sets drawn i.i.d. based on p_data. However, here we are mainly concerned with applications where the (expected) *absolute total error*, defined as the absolute value of the sum of residuals

$$\Delta_{\mathcal{D}_{\mathrm{test}}}(f_{\boldsymbol{\theta}}) = \left|\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} (y - f_{\boldsymbol{\theta}}(\boldsymbol{x}))\right|, \tag{3}$$

is of high importance. That is, we are interested in the total aggregated performance over many data points.
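To make the distinction between equation 2 and equation 3 concrete, here is a minimal NumPy sketch (our own illustration, not from the paper; the array names are placeholders):

```python
import numpy as np

def mse(y_true, y_pred):
    # Equation 2: mean of the squared residuals.
    return np.mean((y_true - y_pred) ** 2)

def absolute_total_error(y_true, y_pred):
    # Equation 3: residuals are summed *before* taking the absolute value,
    # so individual errors can cancel -- unless the model has a systematic bias.
    return np.abs(np.sum(y_true - y_pred))
```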
A related error measure is the *relative total error* given by

$$\delta_{\mathcal{D}_{\mathrm{test}}}(f_{\boldsymbol{\theta}}) = \frac{\Delta_{\mathcal{D}_{\mathrm{test}}}(f_{\boldsymbol{\theta}})}{\left|\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} y\right|}, \tag{4}$$

which is similar to the *relative systematic error* $\frac{100}{|\mathcal{D}_{\mathrm{test}}|} \sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} \frac{y - f_{\boldsymbol{\theta}}(\boldsymbol{x})}{y}$ (in %, e.g., Jucker et al., 2017) and the mean error

$$\mathrm{ME}_{\mathcal{D}_{\mathrm{test}}}(f_{\boldsymbol{\theta}}) = \frac{\Delta_{\mathcal{D}_{\mathrm{test}}}(f_{\boldsymbol{\theta}})}{|\mathcal{D}_{\mathrm{test}}|} = \left|\frac{1}{|\mathcal{D}_{\mathrm{test}}|} \sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} (y - f_{\boldsymbol{\theta}}(\boldsymbol{x}))\right|. \tag{5}$$

The measures defined by equation 3 to equation 5 are used to quantify the *prediction bias* of the model, that is, how well $\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} f_{\boldsymbol{\theta}}(\boldsymbol{x})$ approximates $\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} y$ for a test set D_test. For |D_test| → ∞ a constant model predicting ŷ = E_{(x,y)∼p_data}[y] would minimize Δ_{D_test}(f_θ)/|D_test|. However, in practice $\frac{1}{|\mathcal{D}|}\sum_{(\boldsymbol{x},y)\in\mathcal{D}} y$ and $\frac{1}{|\mathcal{D}_{\mathrm{test}}|}\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} y$ can be considerably different from each other and from ŷ because of finite sample effects and violations of the i.i.d. assumption (e.g., due to covariate shift or sample selection bias), so optimization of individual predictions (e.g., minimizing equation 2) is preferred.

Our study is motivated by applications in large-scale ecosystem monitoring, such as convolutional neural network-based systems estimating tree canopy area from satellite imagery (Brandt et al., 2020) applied for assessing the total tree canopy cover of a country, and learning systems trained on small patches of 3D point clouds to predict the biomass (and thus stored carbon) of large forests (Jucker et al., 2017; Oehmcke et al., 2021). For recent reviews of regression methods, including deep neural networks, for biomass prediction we refer to Zhang et al. (2020), where the root of the MSE as well as the mean error are also considered as evaluation criteria, and Morais et al. (2021). However, there are many other application areas in which accumulated predictions matter, such as estimating the overall performance of a portfolio based on estimates of the performance of the individual assets, or overall demand forecasting based on forecasts for individual consumers.

At first glance, it seems that a low E[MSE_{D_test}(f_θ)] guarantees a low E[Δ_{D_test}(f_θ)], where the expectations are again with respect to data sets drawn i.i.d. based on p_data. Obviously, MSE_D(f_θ) = 0 implies Δ_D(f_θ) = 0 for any data set D. More generally, optimal parameters θ* minimizing MSE_D(f_θ) result in Δ_D(f_θ*) = 0. Actually, ∂MSE_D(f_θ)/∂b = 0 is necessary and sufficient for the error residuals to sum to zero and thus Δ_D(f_θ) = 0. This well-known fact can be seen directly from equation 7 below. However, if the partial derivative of the error with respect to b is not zero, a low MSE_D(f_θ) may not imply a low Δ_D(f_θ). In fact, if we are ultimately interested in the total aggregated performance over many data points, a wrongly adjusted parameter b may lead to significant systematic errors. Assume that f* is the Bayes optimal model for a given task and that f_{δ_b} is the model where the optimal bias parameter b* is replaced by b* − δ_b.
Then for a test set D_test of cardinality N_test we have

$$\sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} (y - f_{\delta_b}(\boldsymbol{x})) = N_{\mathrm{test}} \cdot \delta_b + \sum_{(\boldsymbol{x},y)\in\mathcal{D}_{\mathrm{test}}} (y - f^*(\boldsymbol{x})). \tag{6}$$

That is, the errors δ_b accumulate and, thus, even a very small δ_b can have a drastic effect on aggregated quantities. While one typically hopes that errors partly cancel out when applying a model to a lot of data points, the aggregated error due to a badly chosen bias parameter increases. This can be a severe problem when using deep learning for regression, because in the canonical training process of a neural network for regression minimizing the (regularized) MSE, the partial derivative of the error w.r.t. the parameter b of the final model cannot be expected to be zero:

- Large deep learning systems are typically not trained until the partial derivatives of the error w.r.t. the model parameters are close to zero, because this is not necessary to achieve the desired performance in terms of MSE and/or training would take too long.
- The final weight configuration is often picked based on the performance on a validation data set (e.g., Prechelt, 2012), not depending on how close the parameters are to a local optimum as measured, for example, by the maximum norm of the gradient.
- Mini-batch learning introduces a random effect in the parameter updates, and therefore in the bias parameter value of the finally chosen network.

Thus, despite a low MSE, the performance of a (deep) learning system in terms of the total error as defined in equation 3 can get arbitrarily bad. For example, in the tree canopy estimation task described above, you may get a decently accurate biomass estimate for individual trees, but the prediction over a large area (i.e., the quantity you are actually interested in) could be very wrong. Therefore, we propose to adjust the bias parameter after training a machine learning model for least-squares regression as a default post-processing step. This post-processing can be regarded as playing a role similar to model calibration in classification (e.g., Guo et al., 2017). In the next section, we show how to simply compute this correction, which exactly removes the prediction bias on the training data (or a subset thereof), and discuss the consequences. Section 3 presents experiments demonstrating the problem and the effectiveness of the proposed solution.

## 2 Solution: Adjusting The Bias

If the sum of residuals on the training data set D does not vanish, Δ_D(f_θ) > 0, we can also not expect that the residuals will cancel each other on some test set D_test, resulting in a systematic error and a large Δ_{D_test}(f_θ). Thus, we suggest applying the minimal change to the model that leads to Δ_D(f_θ) = 0, namely minimizing the MSE on D = {(x_1, y_1), ..., (x_N, y_N)} w.r.t. b while fixing all other model parameters w and a. For the resulting bias parameter b* the first derivative w.r.t. b vanishes,

$$\left.\frac{\partial\,\mathrm{MSE}_{\mathcal{D}}(f_{\boldsymbol{\theta}})}{\partial b}\right|_{b=b^*} = -\frac{2}{N} \sum_{i=1}^{N} \left(y_i - \boldsymbol{a}^{\mathsf{T}} h_{\boldsymbol{w}}(\boldsymbol{x}_i) - b^*\right) = 0, \tag{7}$$

implying Δ_D(f_{(w,a,b*)}) = 0.
Thus, for fixed w and a we can simply solve for the new bias parameter:

$$b^* = \frac{\sum_{i=1}^{N} (y_i - \boldsymbol{a}^{\mathsf{T}} h_{\boldsymbol{w}}(\boldsymbol{x}_i))}{N} = \frac{\sum_{i=1}^{N} y_i - \sum_{i=1}^{N} \boldsymbol{a}^{\mathsf{T}} h_{\boldsymbol{w}}(\boldsymbol{x}_i)}{N} = \frac{\sum_{i=1}^{N} y_i - \sum_{i=1}^{N} f_{\boldsymbol{\theta}}(\boldsymbol{x}_i)}{N} + b \tag{8}$$

In practice, we can either replace b in our trained model by b* or add δ_b = b* − b to all model predictions. The costs of computing b* and δ_b are the same as computing the error on the data set used for adjusting the bias. The proposed post-processing step can be related to an algorithm for updating models using additional labelled data (e.g., in a transfer learning setting) described by Rodgers et al. (2007); see the discussion in the appendix.

The trivial consequences of this adjustment are that the MSE on the training data set is reduced and the residuals on the training set cancel each other. But what happens on unseen data? The model with Δ_D(f_{(w,a,b*)}) = 0 can be expected to have a lower Δ_{D_test}(f_{(w,a,b*)}) on a test set D_test than a model with Δ_D(f_θ) > 0. The effect on MSE_{D_test} is expected to be small. Adjusting the single scalar parameter b based on a lot of data is very unlikely to lead to overfitting. On the contrary, in practice we typically observe a reduced MSE on external test data after adjusting the bias. However, this effect is typically minor. The weights of the neural network, and in particular the bias parameter in the final linear layer, are learned sufficiently well that the MSE is not significantly degraded by the single bias parameter not being adjusted optimally; that is why one typically does not worry about it, although the effect on the absolute total error may be drastic.

Which data should be used to adjust the bias? While one could use an additional hold-out set for the final optimization of b, this is not necessary. Data already used in the model design process can be used because, assuming a sufficient amount of data, selecting a single parameter is unlikely to lead to overfitting. If there is a validation data set (e.g., for early-stopping), then these data could be used. If data augmentation is used, augmented data sets could be considered. We recommend simply using all data available for model building (e.g., the union of training and validation set). This minimizes the prediction bias of the model in the same way as standard linear regression. Using a large amount of data for the (typically very small) adjustment of a single model parameter that has no non-linear influence on the model predictions is extremely unlikely to lead to overfitting (as empirically shown in the experiments below), and the more data are used to compute the correction the more accurate it can be expected to be.

How to deal with regularization? So far, we have just considered empirical risk minimization. However, the bias parameter can be adjusted regardless of how the model was obtained. This includes the use of early-stopping (Prechelt, 2012) or regularized risk minimization with an objective of the form $\frac{1}{N}\sum_{i=1}^{N}(y_i - f_{\boldsymbol{\theta}}(\boldsymbol{x}_i))^2 + \Omega(\boldsymbol{\theta})$. Here, Ω denotes some regularization depending on the parameters. This includes weight decay; however, this type of regularization would typically not consider the bias parameter b of a regression model anyway (e.g., Bishop, 1995, p. 342).
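As an illustration, a minimal PyTorch sketch of the correction in equation 8 could look as follows. This is our own sketch rather than the paper's code; `model`, `X`, and `y` are placeholder names, and we assume the model ends in a scalar-output linear layer named `output_layer`.

```python
import torch

@torch.no_grad()
def correct_bias(model, X, y):
    """Adjust the output bias so that the residuals on (X, y) sum to zero (eq. 8)."""
    model.eval()
    delta_b = (y - model(X).squeeze(-1)).mean()  # mean residual = b* - b
    model.output_layer.bias.add_(delta_b)        # assumes a final nn.Linear named output_layer
    return delta_b
```

Equivalently, one can leave the weights untouched and add `delta_b` to every prediction at inference time.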
Why not adjust more parameters? The proposed post-processing serves a very well-defined purpose. If the error residuals do not sum to zero on the training data set, the residuals on test data can also not be expected to do so, which leads to a systematic prediction error. The proposed adjustment of b is the minimal change to the model that solves this problem. We assume that the model before the post-processing shows good generalization performance in terms of MSE, so we want to change it as little as possible. As argued above and shown in the experiments, just adjusting b, which has no non-linear effect on the predictions, based on sufficient data is unlikely to lead to overfitting. On the contrary, in practice an improvement of the generalization performance (e.g., in terms of MSE) is often observed (see also the experiments below). Of course, there are scenarios where adjusting more parameters can be helpful. For example, it is straightforward to also adjust the factor a in the wrapper such that the partial derivative of the MSE with respect to a vanishes. This has the effect that afterwards the residuals and training inputs are uncorrelated. However, minimizing the unregularized empirical risk w.r.t. many parameters (in particular if we have non-linear effects) bears the risk of overfitting.

## 3 Examples

In this section, we present experiments that illustrate the problem of a large total error despite a low MSE and show that adjusting the bias as proposed above is a viable solution. We start with a simple regression task based on a UCI benchmark data set (Dua & Graff, 2017), which is easy to reproduce (see supplementary material). Then we move closer to real-world applications and consider convolutional neural networks for ecosystem monitoring.

## 3.1 Gas Turbine Emission Prediction

First, we look at an artificial example based on real-world data from the UCI benchmark repository (Dua & Graff, 2017), which is easy to reproduce. We consider the *Gas Turbine CO and NOx Emission Data Set* (Kaya et al., 2019), where each data point corresponds to CO and NOx (NO and NO2) emissions and 9 aggregated sensor measurements from a gas turbine, summarized over one hour. The typical tasks are to predict the hourly emissions given the sensor measurements. Here we consider the fictitious task of predicting the total amount of CO emissions for a set of measurements.

Experimental setup. There are 36 733 data points in total. We assumed that we know the emissions for N_train = 21 733 randomly selected data points, which we used to build our models. We trained a neural network with two hidden layers with sigmoid activation functions having 16 and 8 neurons, respectively, feeding into a linear output layer. There were shortcut connections from the inputs to the output layer. We randomly split the training data into 16 733 examples for gradient computation and 5000 examples for validation. The network was trained for 1000 epochs using Adam (Kingma & Ba, 2015) with a learning rate of 1 · 10⁻² and mini-batches of size 64.

Table 1: Results for the total CO emissions prediction task for the different models, where "linear" refers to linear regression, "not corrected" to a neural network without bias correction, and "corrected" to the same neural network with corrected bias parameter. The results are based on 10 trials. The mean and standard error (SE) are given; values are rounded to two decimals; R², Δ, δ, and ME denote the coefficient of determination, the absolute total error, the relative error, and the mean error; δ is given in percent; D and D_test refer to data available for model development and testing, respectively.
| MODEL | R²_D | R²_Dtest | Δ_D | Δ_Dtest | δ_Dtest | ME_Dtest |
|---------------|-----------|------------|----------|----------|------------|-----------|
| linear | 0.56 ±0.0 | 0.57 ±0.0 | 0 ± 0 | 173 ±14 | 0.49 ±0.04 | 0.02 ±0.0 |
| not corrected | 0.78 ±0.0 | 0.72 ±0.0 | 1018 ±70 | 785 ±53 | 2.21 ±0.15 | 0.05 ±0.0 |
| corrected | 0.78 ±0.0 | 0.72 ±0.0 | 0 ± 0 | 122 ± 6 | 0.34 ±0.02 | 0.01 ±0.0 |

The network with the lowest error on the validation data was selected. For adjusting the bias parameter, we computed δ_b using equation 8 and all N_train data points available for model development. As a baseline, we fitted a linear regression model using all N_train data points. We used Scikit-learn (Pedregosa et al., 2011) and PyTorch (Paszke et al., 2019) in our experiments. The input data were converted to 32-bit floating point precision. We repeated the experiments 10 times with 10 random data splits, network initializations, and mini-batch shufflings.

Results. The results are shown in Table 1 and Figure 1. The neural networks without bias correction achieved a higher R² (coefficient of determination) than the linear regression on the training and test data, see Table 1. On the training data, the R² averaged over the ten trials increased from 0.56 to 0.78 when using the neural network. However, Δ_D and Δ_Dtest were much lower for linear regression. This shows that a better MSE does not directly translate to a better total error (sum of residuals). Correcting the bias of the neural network did not change the networks' R²; however, the total errors went down to the same level as for linear regression and even below. Thus, correcting the bias gave the best of both worlds: a low MSE for individual data points and a low accumulated error.

Figure 1 demonstrates how the total error developed as a function of the test data set size. As predicted, with a badly adjusted bias parameter the total error increased with the number of test data points, while for the linear models and the neural network with adjusted bias this negative effect was less pronounced. The linear models performed worse than the neural networks with adjusted bias parameters, which can be explained by the worse accuracy of the individual predictions.

## 3.2 Forest Coverage

Deep learning holds great promise for large-scale ecosystem monitoring (Persello et al., 2022; Yuan et al., 2020), for example for estimating tree canopy cover and forest biomass from remote sensing data (Brandt et al., 2020; Oehmcke et al., 2021). Here we consider a simplified task where the goal is to predict the number of pixels in an image that belong to forests, given a satellite image. We generated the input data from Sentinel-2 measurements (RGB values) and used the accumulated pixels from a landcover map¹ as targets; see Figure 2 for examples. Both input and target are at the same 10 m spatial resolution, collected/estimated in 2017, and cover the country of Denmark. Each sample is a 100 × 100 large image with no overlap between images.

Experimental setup. From the 127 643 data points in total, 70% (89 350) were used for training, 10% (12 764) for validation, and 20% (25 529) for testing. For each of the 10 trials a different random split of the data was considered.
¹https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-2/Land-cover_maps_of_Europe_from_the_Cloud

![5_image_0.png](5_image_0.png)

Figure 1: Absolute errors (absolute value of the summed residuals) are shown on the left and the mean errors on the right for the CO emission prediction task. Both are presented in relation to the test set size. Here "linear" refers to linear regression, "not corrected" to a neural network without bias correction, and "corrected" to the same neural network with corrected bias parameter. The results are averaged over 10 trials; the error bars show the standard error (SE).

![5_image_1.png](5_image_1.png)

Figure 2: Exemplary inputs and targets (y) for the forest coverage dataset. (a) shows a scene with 75.7%, (b) with 2.5%, and (c) with 0.1% forest.

Table 2: Results of forest coverage prediction. R², Δ, δ, and ME denote the coefficient of determination, absolute total error, relative error, and mean error; D and D_test are all data available for model development and testing, respectively. The relative total error δ is given in percent. The average and standard error (SE) of these metrics over 10 trials are given for the different models, where "mean" refers to predicting the constant mean of the training set, "not corrected" to EfficientNet-B0 without bias correction, and "corrected" to the same neural network with corrected bias parameter. Values are rounded to three decimals.

| MODEL | R²_D | R²_Dtest | Δ_D | Δ_Dtest | δ_Dtest | ME_Dtest |
|---------------|--------------|----------------|-----------------|-----------------|--------------|----------|
| mean | 0.000 ±0.000 | −3 · 10⁻⁵ ±0.0 | 6 ± 2 | 169 666 ±48 944 | 0.955 ±0.272 | 665 ±192 |
| not corrected | 0.992 ±0.027 | 0.977 ±0.0 | 389 747 ±77 987 | 152 666 ±22 164 | 0.864 ±0.124 | 598 ± 87 |
| corrected | 0.992 ±0.027 | 0.977 ±0.0 | 3 ± 1 | 59 819 ±10 501 | 0.338 ±0.059 | 234 ± 41 |

![6_image_0.png](6_image_0.png)

Figure 3: The absolute errors (absolute value of the summed residuals) are shown on the left and the relative errors on the right for the forest coverage prediction task. Both are presented in relation to the test set size. The error bars show the standard error (SE). Results were averaged over 10 trials and are shown for the different models, where "mean" refers to predicting the constant mean of the training set, "not corrected" to EfficientNet-B0 without bias correction, and "corrected" to the same neural network with corrected bias parameter.

We employed EfficientNet-B0 (Tan & Le, 2019), a deep convolutional network that uses mobile inverted bottleneck MBConv (Tan et al., 2019) and squeeze-and-excitation (Hu et al., 2018) blocks. It was trained for 300 epochs with Adam and a batch size of 256. For 100 epochs the learning rate was set to 3 · 10⁻⁴ and thereafter reduced to 1 · 10⁻⁵. The validation set was used to select the best model w.r.t. R². When correcting the bias, the training and validation sets were combined. We considered the constant model predicting the mean of the training targets as a baseline.

Results. The results are summarized in Figure 3 and Table 2. The bias correction did not yield a better R² result, with 0.992 on the training set and 0.977 on the test set. However, Δ_Dtest on the test set decreased by a factor of 2.6, from 152 666 to 59 819.
The R² for the mean prediction is by definition 0 on the training set and was close to 0 on the test set, yet Δ_Dtest is 169 666, meaning that a shift in the distribution center occurred, rendering the mean prediction unreliable. In Figure 3, we show Δ_Dtest and δ_Dtest while increasing the test set size. As expected, the total absolute error of the uncorrected neural networks increases with an increasing number of test data points. Simply predicting the mean gave similar results in terms of the accumulated errors compared to the uncorrected model, which shows how misleading the R² can be as an indicator of how well regression models perform in terms of the accumulated total error. When the bias was corrected, this effect drastically decreased and the performance was better compared to the mean prediction.

## 4 Conclusions

Adjusting the bias such that the residuals sum to zero should be the default post-processing step when doing least-squares regression using deep learning. It comes at the cost of at most a single forward propagation of the training and/or validation data set, but removes a systematic error that accumulates if individual predictions are summed.

## References

Christopher M. Bishop. *Neural Networks for Pattern Recognition*. Oxford University Press, 1995.

Martin Brandt, Compton J. Tucker, Ankit Kariryaa, Kjeld Rasmussen, Christin Abel, Jennifer Small, Jerome Chave, Laura Vang Rasmussen, Pierre Hiernaux, Abdoul Aziz Diouf, Laurent Kergoat, Ole Mertz, Christian Igel, Fabian Gieseke, Johannes Schöning, Sizhuo Li, Katherine Melocik, Jesse Meyer, Scott Sinno, Eric Romero, Erin Glennie, Amandine Montagu, Morgane Dendoncker, and Rasmus Fensholt. An unexpectedly large count of trees in the western Sahara and Sahel. *Nature*, 587:78–82, 2020.

Pierre Bruneau and Nathan R McElroy. LogD 7.4 modeling using Bayesian regularized neural networks: assessment and correction of the errors of prediction. *Journal of Chemical Information and Modeling*, 46(3):1379–1387, 2006.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning (ICML)*, pp. 1321–1330, 2017.

Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 7132–7141, 2018.

Tommaso Jucker, John Caspersen, Jérôme Chave, Cécile Antin, Nicolas Barbier, Frans Bongers, Michele Dalponte, Karin Y van Ewijk, David I Forrester, Matthias Haeni, Steven I. Higgins, Robert J. Holdaway, Zoshiko Iida, Craig Lorime, Peter L. Marshall, Stéphane Momo, Glenn R. Moncrieff, Pierre Ploton, Lourens Poorter, Kassim Abd Rahman, Michael Schlund, Bonaventure Sonké, Frank J. Sterck, Anna T. Trugman, Vladimir A. Usoltsev, Mark C. Vanderwel, Peter Waldner, Beatrice M. M. Wedeux, Christian Wirth, Hannsjörg Wöll, Murray Woods, Wenhua Xiang, Niklaus E. Zimmermann, and David A. Coomes. Allometric equations for integrating remote sensing imagery into forest monitoring programmes. *Global Change Biology*, 23(1):177–190, 2017.

Heysem Kaya, Pinar Tüfekci, and Erdinç Uzun. Predicting CO and NOx emissions from gas turbines: novel data and a benchmark PEMS. *Turkish Journal of Electrical Engineering & Computer Sciences*, 27(6):4783–4796, 2019.

Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations (ICLR)*, 2015.

Tiago G. Morais, Ricardo F.M.
Teixeira, Mario Figueiredo, and Tiago Domingos. The use of machine learning methods to estimate aboveground biomass of grasslands: A review. *Ecological Indicators*, 130: 108081, 2021. Stefan Oehmcke, Lei Li, Jaime Revenga, Thomas Nord-Larsen, Katerina Trepekli, Fabian Gieseke, and Christian Igel. Deep learning based 3D point cloud regression for estimating forest biomass. *CoRR*, abs/2112.11335, 2021. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Hanna Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems (NeurIPS)*, pp. 8024–8035. 2019. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12(85):2825–2830, 2011. Claudio Persello, Jan Dirk Wegner, Ronny Hansch, Devis Tuia, Pedram Ghamisi, Mila Koeva, and Gustau Camps-Valls. Deep learning and earth observation to support the sustainable development goals: Current approaches, open challenges, and future opportunities. *IEEE Geoscience and Remote Sensing Magazine*, pp. 2–30, 2022. Lutz Prechelt. Early stopping - but when? In Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller (eds.), *Neural Networks: Tricks of the Trade: Second Edition*, pp. 53–67. Springer, 2012. Sarah L Rodgers, Andrew M Davis, Nick P Tomkinson, and Han van de Waterbeemd. QSAR modeling using automatically updating correction libraries: application to a human plasma protein binding model. Journal of Chemical Information and Modeling, 47(6):2401–2407, 2007. Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), pp. 6105–6114. PMLR, 2019. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. MnasNet: Platform-aware neural architecture search for mobile. In *Proceedings of the IEEE Conference* on Computer Vision and Pattern Recognition (CVPR), pp. 2820–2828, 2019. Qiangqiang Yuan, Huanfeng Shen, Tongwen Li, Zhiwei Li, Shuwen Li, Yun Jiang, Hongzhang Xu, Weiwei Tan, Qianqian Yang, Jiwen Wang, Jianhao Gao, and Liangpei Zhang. Deep learning in environmental remote sensing: Achievements and challenges. *Remote Sensing of Environment*, 241:111716, 2020. Yuzhen Zhang, Jun Ma, Shunlin Liang, Xisheng Li, and Manyao Li. An evaluation of eight machine learning regression algorithms for forest aboveground biomass estimation from multiple satellite data products. Remote Sensing, 12(24), 2020. ## A **Local Bias Correction** Our bias correction can be related to other post-processing methods. There is a line of research that studies how the output of a model h can be adjusted using an additional labelled data set D′ not used for building the model, and the approach by Rodgers et al. (2007) resembles the recommended bias correction. 
The idea is to correct the prediction h(x) of the model by the (weighted) mean error of h when applied to the K nearest neighbors of x in D′. Let (x′_{x:k}, y′_{x:k}) denote the k-th nearest neighbor of x in a data set D′. The distance is measured using a metric d, where ties can be broken at random or deterministically. Following Bruneau & McElroy (2006) and Rodgers et al. (2007), we consider the Mahalanobis distance $d(\boldsymbol{x}, \boldsymbol{z}) = \sqrt{(\boldsymbol{x} - \boldsymbol{z})^{\mathsf{T}} \boldsymbol{C}^{-1} (\boldsymbol{x} - \boldsymbol{z})}$ for real vectors x and z, where C is the empirical covariance matrix of the features based on the sample in D′. The output h(x) is then corrected to f(x) using

$$f(\boldsymbol{x}) = h(\boldsymbol{x}) + \frac{\sum_{k=1}^{K} \omega_{\boldsymbol{x}:k} \left(y'_{\boldsymbol{x}:k} - h(\boldsymbol{x}'_{\boldsymbol{x}:k})\right)}{\sum_{k=1}^{K} \omega_{\boldsymbol{x}:k}}. \tag{9}$$

Here ω_{x:k} is a weight depending on d(x, x′_{x:k}). The number K of neighbors is a hyperparameter. The term $\sum_{k=1}^{K} \omega_{\boldsymbol{x}:k}\, h(\boldsymbol{x}'_{\boldsymbol{x}:k}) \big/ \sum_{k=1}^{K} \omega_{\boldsymbol{x}:k}$ is the weighted K-nearest neighbor prediction for x using D′, and $\sum_{k=1}^{K} \omega_{\boldsymbol{x}:k}\, y'_{\boldsymbol{x}:k} \big/ \sum_{k=1}^{K} \omega_{\boldsymbol{x}:k}$ can be viewed as the corresponding weighted target. For constant ω_{x:k} = 1, we get $f(\boldsymbol{x}) = h(\boldsymbol{x}) + \frac{1}{K}\sum_{k=1}^{K} y'_{\boldsymbol{x}:k} - \frac{1}{K}\sum_{k=1}^{K} h(\boldsymbol{x}'_{\boldsymbol{x}:k})$. If we further set D′ = D and K = |D|, this correction is identical to the suggested bias correction. For smaller K, we can think of this method as a *local bias correction*, which adjusts the bias individually for each input based on neighboring training data points.

Our proposed post-processing step efficiently solves the well-defined problem that the error residuals on the training data may not sum to zero. The method suggested, for a different purpose, by Rodgers et al. (2007) is a heuristic with several crucial hyperparameters, obviously K but also the choice of the weighting function for computing the ω_{x:k}. Instead of a one-time correction of a single model parameter, which can be done in linear time, the approach by Rodgers et al. (2007) requires a K-nearest-neighbor search in each application of the model, which drastically increases storage and time complexity for large training data sets. The performance of their approach depends on the quality of the nearest neighbor regression. Nearest neighbor regression with Mahalanobis distance or standard Euclidean distance is unsuited for image analysis tasks such as the one in Section 3.2. The input dimensionality is too high for the amount of training data, and neither Mahalanobis distance nor standard Euclidean distance between raw pixels is appropriate to measure image similarity. In contrast, on the artificial problem in Section 3.1 with 9 inputs each representing a meaningful feature, nearest neighbor regression can be expected to work. We applied the local bias correction to the problem in Section 3.1 with K = 3 as suggested by Rodgers et al. (2007). This resulted in R²_Dtest = 0.73 ± 0.0, Δ_D = 0.0 ± 0.0, Δ_Dtest = 51 ± 5.0, δ_Dtest = 0.38% ± 0.02%, and ME_Dtest = 0.02 ± 0.0. In this toy example, the nearest-neighbor auxiliary model performs very well. Still, while the local bias correction reduced the systematic errors compared to the uncorrected neural network, it performed worse than the proposed rigorous post-processing (see Table 1).
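For concreteness, a minimal NumPy sketch of this local correction with constant weights might look as follows. This is our own illustration of equation 9 rather than the implementation of Rodgers et al. (2007); `X_ref`/`y_ref` are placeholder names for D′, `h` is assumed to accept batches of inputs, and the inverse covariance `C_inv` is assumed to be precomputed by the caller.

```python
import numpy as np

def local_bias_correction(x, h, X_ref, y_ref, C_inv, K=3):
    """Equation 9 with constant weights: shift h(x) by the mean residual
    y' - h(x') over the K nearest neighbours of x (Mahalanobis distance)."""
    diff = X_ref - x                                   # (N, d) differences to all reference points
    d2 = np.einsum("nd,de,ne->n", diff, C_inv, diff)   # squared Mahalanobis distances
    idx = np.argsort(d2)[:K]                           # indices of the K nearest neighbours
    return h(x) + np.mean(y_ref[idx] - h(X_ref[idx]))
```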
Review 1:

Summary: In the paper, the authors point out bias issues with deep learning models for regression that may not affect the mean squared error but accumulate in summed estimations such as tree canopy estimation. The authors explain how to correct the bias with a simple mean error correction on the output of the model's predictions. They then present results corroborating the usefulness of a bias correction on a gas turbine emission prediction task, modified to estimate total emissions in order to demonstrate the bias issue with summed estimations, and on a forest coverage task.

Strengths and Weaknesses:

Strengths
+ The paper is well written and clear.
+ The method is simple.

Weaknesses
- There is no related work presented. I find it hard to believe that this simple correction method has not been mentioned before, and I easily found similar solutions; see for instance [1]

[1] Rodgers, Sarah L., Andrew M. Davis, Nick P. Tomkinson, and Han van de Waterbeemd. "QSAR modeling using automatically updating correction libraries: application to a human plasma protein binding model." Journal of chemical information and modeling 47, no. 6 (2007): 2401-2407.

Requested Changes: I would request one major change, which would require rewriting most of the paper and a second round of reviews in my opinion. The bias correction method should be compared to other existing methods, explaining how it differs, in which contexts it is preferable, and in which contexts it should be avoided. Looking quickly for bias correction methods, I found [1] (eq. 5), which is equivalent but more general. It uses the mean error weighted by the distance between a given test point and the training points. It seems clear to me that the proposed method does not provide any new insight compared to the one in [1]. The main interest I could see for this paper is bringing the attention of the ML community to the issue of bias correction, but this would be better done with a thorough review of bias correction for machine learning regression models.

[1] Rodgers, Sarah L., Andrew M. Davis, Nick P. Tomkinson, and Han van de Waterbeemd. "QSAR modeling using automatically updating correction libraries: application to a human plasma protein binding model." Journal of chemical information and modeling 47, no. 6 (2007): 2401-2407.

Broader Impact Concerns: I see no broader impact concerns.

==================================================

Review 2:

Summary: This paper proposes to correct the "bias" term in a regression (e.g. neural network) model as a post-processing step. This post-processing step fixes all other parameters and updates the bias term to the closed-form minimizer of the MSE loss on the training set. The paper argues that this has the advantage that the absolute total error, defined as the absolute value of the sum of residuals, can be reduced on the training set, and that this step also tends to have no or a small positive impact on the model's generalization. Therefore, this should almost always be performed, as it is also computationally cheap (equivalent to computing the average loss on the training set).

Strengths and Weaknesses:

Strength: this paper proposes a simple post-processing step that effectively solves the bias issue. The method is easy to understand and to implement.

Weakness: even though the paper mentions some examples where the absolute total error is important, I'm a little bit confused about whether this refers to the training set error or the test set error.
The proposed method targets the error on the training set, but it seems that empirically the error on the test set is also reduced. It would be great if the paper could provide a concrete motivating example with a detailed analysis of the consequences when the (training) absolute total error is not minimized, and also make it a bit more clear when we are talking about training errors and when about test errors.

Requested Changes: Refinements to the writing: see "weakness" above.

Minor: The MSE in Eq. (2) uses the symbol R, which is very easily confused with the R^2 symbol in the results table indicating the coefficient of determination. Maybe consider using a different symbol for the MSE loss?

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The basic formal setting of this paper is regression using the quadratic loss with a model that is a linear combination of potentially non-linear features (with weights $\mathbf{a}$ and intercept $b$), where the nature of the non-linear features is also to be determined as part of the learning process as a whole (determined by $\mathbf{w}$). While the formal setting is regression under the quadratic loss, the authors' main interest as I understand it is in the "bias" that tends to occur when solving jointly in $\mathbf{\theta}$ and stopping before reaching a stationary point, where bias refers to (signed) residuals with a non-zero empirical mean. In practice, this means the learned candidate may tend to over-shoot or under-shoot the target variable. The main methodological contribution here is the suggestion to shift the learned candidate by the average residual to correct for this bias on the training data. The authors provide evidence for the utility of this approach in the form of numerical experiments using real data, in which they find that their correction does not hurt the test performance under the quadratic loss, while maintaining smaller residual sums than are obtained without the correction.

Strengths and Weaknesses:

*Strengths:* The paper is well-structured, written in a concise but inviting tone, and is easy to follow at a high level, with the basic problem setup and proposed procedure described clearly. The experimental setup and results are also communicated effectively.

*Weaknesses:* Overall, I feel the paper is quite vague in terms of what exactly the *problem* faced is, from both the perspectives of numerical optimization and statistical learning. Let me raise a few concrete points that I tripped up on when reading this paper that I think relate to this weakness.

- As long as we are using the quadratic loss and we arrive at a stationary point (i.e., any point $\mathbf{\theta}$ where the gradient is $\nabla \mathcal{R}_{\mathcal{D}}(\mathbf{\theta}) = \mathbf{0}$), then we are guaranteed to have residuals with empirical mean zero, i.e., their (3) and (4) are both zero. The authors seem to be aware of this based on their statements on p.2 ("More general, the optimal parameters minimizing [...] result in [zero sum of residuals]"), so their motivation must be that many real-world settings do not achieve nearly-zero gradients? Again on p.2, they seem to state their basic concerns in three bullets after "...the parameter $b$ of the final model cannot be expected to be optimal," but again I find this rather troublesome.
We do not need anything to be *optimal*; we just need the gradient of the MSE objective to be close to zero. Since we do not have convexity in general, we can have (globally) sub-optimal parameters (potentially with huge MSE) which still have zero residual sums. Is this okay in the mindset of the authors? The authors have not touched on this possibility, it seems.
- Continuing on this same point, one often trains non-linear models (including neural networks) by running an optimization algorithm iteratively until we see a certain degree of numerical convergence. Gradient-based methods are very common, and they typically have the property that numerical convergence basically means the gradient of the MSE is close to zero (although the MSE itself may still be large). As such, if the three bullet points near the end of p.2 are meant to suggest that the partial derivative taken with respect to $b$ tends to be large in practice, I find that claim somewhat hard to digest without harder evidence.
- On the point of evidence, the tables in section 3 are comparing the MSE (a sum divided by the number of samples) with plain sums; dividing $\Delta_{\mathcal{D}}$ by the number of samples would likely make these results a lot less remarkable. I think most readers will not be convinced by these results that severe "bias" is a problem inherent in deep learning.

The above points highlight why I found the *research problem* very unclear. Now on the other side, related to the proposed method, while I understand perfectly well why the proposed method of the authors was selected, the perspective here seems way too narrow. Why just ensure $b$ satisfies the first-order necessary conditions for optimality? Why not consider $\mathbf{a}$ as well? We could do a two-stage learning process, fixing $\mathbf{a}$ and $b$ to start with, running a learning algorithm to determine some $\widehat{\mathbf{w}}$, and then solving the ordinary least squares problem in $(\mathbf{a},b)$ conditioned on $\widehat{\mathbf{w}}$. This possibility is not discussed, and I think many readers might expect that it would be beneficial to set aside some data to be used in each of these steps, but the authors dismiss this notion as being "not necessary" (p.3). I think we can easily come up with settings where this is and is not necessary, so it seems strange to brush this under the carpet without further evidence.

Requested Changes: I ask that the authors consider my feedback under strengths and weaknesses, and reconsider exactly what the research problem is that they are trying to solve. Once a real, concrete problem is made clear, a broader look at potential solution methods, and a search for convincing evidence that the problem has been solved to some degree, will be the next step. I personally think it is somewhat difficult to accept this paper in its present form, since without a convincing problem, it is difficult to evaluate potential solutions.

Broader Impact Concerns: No issues of broader impact.

==================================================

Metareview:

Recommendation: Reject

Comment: The claims made in the submission are supported and the experiments confirm them. The post-processing method proposed in this submission is relatively straightforward, but of general interest to the community. The paper also reads well aside from a few remaining typos. However, the paper is missing a proper related work section.
The authors added a paragraph in the main body and discussed the relation to one reference mentioned by one of the reviewers in the Appendix (and reported new experimental results, which is great), but stopped there. I would encourage the authors to be more thorough in positioning their work with respect to prior work, and to discuss related work on bias correction in ML in more general terms.

==================================================
# Conditional Sampling Of Variational Autoencoders Via Iterated Approximate Ancestral Sampling

Vaidotas Simkus *vaidotas.simkus@ed.ac.uk*
Michael U. Gutmann *michael.gutmann@ed.ac.uk*
School of Informatics, University of Edinburgh

Reviewed on OpenReview: *https://openreview.net/forum?id=I5sJ6PU6JN*

## Abstract

Conditional sampling of variational autoencoders (VAEs) is needed in various applications, such as missing data imputation, but is computationally intractable. A principled choice for asymptotically exact conditional sampling is Metropolis-within-Gibbs (MWG). However, we observe that the tendency of VAEs to learn a structured latent space, a commonly desired property, can cause the MWG sampler to get "stuck" far from the target distribution. This paper mitigates the limitations of MWG: we systematically outline the pitfalls in the context of VAEs, propose two original methods that address these pitfalls, and demonstrate an improved performance of the proposed methods on a set of sampling tasks.

## 1 Introduction

Conditional sampling of modern deep probabilistic models is an important but generally intractable problem. Variational autoencoders (VAEs, Kingma & Welling, 2013; Rezende et al., 2014) are a family of deep probabilistic models that *capture the complexity of real-world data distributions via a structured latent space*. The impressive modelling capability and the usefulness of the structured latent space make VAEs a model of choice in a broad range of domains, from healthcare (Han et al., 2019) and chemistry (Gómez-Bombarelli et al., 2018) to images (Child, 2021) and audio (van den Oord et al., 2017).

Ancestral sampling can be used for efficient unconditional sampling of VAEs, but many downstream tasks, for example, prediction or missing data imputation (e.g. Goodfellow et al., 2016, Chapter 5.1.1), instead require *conditional sampling*. However, *for VAEs, this is intractable*, and hence approximate methods are needed. A canonical approximate method is Markov chain Monte Carlo (MCMC, e.g. Barber, 2017, Chapter 27.4), but the general lack of knowledge about the learnt VAE may make tuning (for example, picking a good proposal distribution) and hence successfully using MCMC samplers challenging. To make sampling easier, an approach called Metropolis-within-Gibbs (MWG, Mattei & Frellsen, 2018) re-uses the encoder, an auxiliary component from the training of the VAE, to construct a suitable proposal distribution in a Metropolis–Hastings-type algorithm (Metropolis et al., 1953; Hastings, 1970). The simplicity of MWG and its asymptotic convergence guarantees make it a compelling choice for conditional sampling of VAEs.

While a structured latent space is often a desirable property of VAEs, enabling the modelling of complex distributions, we *notice that this latent structure can cause the Markov chains of MWG to get "stuck"*, thereby impeding conditional sampling. In this paper we

- Detail the potential pitfalls of Metropolis-within-Gibbs in the context of VAEs (section 3).
- Propose a modification of MWG, called adaptive collapsed-Metropolis-within-Gibbs (AC-MWG, section 4.1), that mitigates the outlined pitfalls, and prove its convergence.
- Introduce an alternative sampling method, called latent-adaptive importance resampling (LAIR, section 4.2), which demonstrates an improved sampling performance in our experiments.
- Evaluate the samplers on a set of conditional sampling tasks: (semi-)synthetic tasks, where sampling from the ground truth conditional distributions is computationally tractable, and real-world missing data imputation tasks, where the ground truth distribution is not available.

With the proposed methods we address the conditional sampling problem of VAEs, a key challenge for the downstream application of this flexible family of models. Our methods build upon MWG and mitigate its limitations, enabling more accurate use of VAEs in important tasks like missing data imputation.

## 2 Background: Conditional Sampling Of VAEs

We here describe the conditional sampling problem and the existing Gibbs-like methods that have been used to draw conditional samples.

## 2.1 Problem And Assumptions

Given a pre-trained variational autoencoder, whose generative model we denote as p(x, z) = p(x | z)p(z), where x = (x_obs, x_mis) are the visible and z are the latent variables, we would like to sample:

$$p(\mathbf{x}_{\rm mis}\mid\mathbf{x}_{\rm obs})=\frac{\int p(\mathbf{x}_{\rm obs},\mathbf{x}_{\rm mis},\mathbf{z})\,{\rm d}\mathbf{z}}{p(\mathbf{x}_{\rm obs})}=\int p(\mathbf{x}_{\rm mis}\mid\mathbf{x}_{\rm obs},\mathbf{z})\,p(\mathbf{z}\mid\mathbf{x}_{\rm obs})\,{\rm d}\mathbf{z}.\tag{1}$$

The variables x_mis and x_obs are respectively the target/missing and conditioning/observed variables. This choice of notation is motivated by the correspondence between conditional sampling and probabilistic imputation of missing data (Rubin, 1987; 1996).¹ Unlike unconditional generation, *ancestral sampling of* p(x_mis | x_obs) *is generally intractable* since the posterior distribution p(z | x_obs) *is not accessible*, and hence approximations are required.

In the rest of the paper we assume that the generative model is such that the computation of p(x_obs | z) and the sampling of p(x_mis | x_obs, z) are tractable. This is typically the case for most VAE architectures due to conditional independence assumptions (i.e. x_j ⊥ x_∖j | z for all j) or the use of a Gaussian family for the decoder distribution p(x | z). Moreover, we assume that the encoder distribution, or the amortised variational posterior, q(z | x) (Gershman & Goodman, 2014), which approximates the model posterior p(z | x), is available.²

## 2.2 Pseudo-Gibbs (Rezende et al., 2014)

Rezende et al. (2014, Appendix F) have proposed a procedure related to Gibbs sampling (Geman & Geman, 1984), also called pseudo-Gibbs (Heckerman et al., 2000; Mattei & Frellsen, 2018), that due to its generality and simplicity has been regularly used for missing data imputation with VAEs (e.g. Rezende et al., 2014; Li et al., 2016; 2017; Rezende et al., 2018; Boquet et al., 2019). Starting with some random imputations x⁰_mis, the procedure iteratively samples latents z^t ∼ q(z | x_obs, x^{t−1}_mis) and imputations x^t_mis ∼ p(x_mis | x_obs, z^t).³ This iterative procedure generates a Markov chain that, subject to some conditions on the closeness of the variational posterior q(z | x_obs, x_mis) and the intractable model posterior p(z | x_obs, x_mis), converges asymptotically in t to a distribution that approximately follows p(x_mis, z | x_obs) (Rezende et al., 2014, Proposition F.1). The sampler corresponds to an exact Gibbs sampler if q(z | x_obs, x_mis) = p(z | x_obs, x_mis).
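To make the iteration explicit, here is a minimal PyTorch-style sketch of the pseudo-Gibbs loop. This is our own illustration, not the paper's code: `encoder` and `decoder` are placeholder callables that return distribution objects with a `.sample()` method, and `x`/`mask` jointly encode x_obs and the missingness pattern.

```python
import torch

def pseudo_gibbs(x, mask, encoder, decoder, num_iters=1000):
    """Pseudo-Gibbs chain: alternately sample z ~ q(z | x_obs, x_mis)
    and x_mis ~ p(x_mis | x_obs, z). `mask` is 1 for observed entries."""
    x = x.clone()  # current state; missing entries hold the running imputation
    for _ in range(num_iters):
        z = encoder(x).sample()            # z^t ~ q(z | x_obs, x_mis^{t-1})
        x_new = decoder(z).sample()        # draw from p(x | z)
        x = mask * x + (1 - mask) * x_new  # keep observed values fixed
    return x
```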
However, the equality q(z | x_obs, x_mis) = p(z | x_obs, x_mis) generally does not hold due to at least one of the following issues: insufficient flexibility of the variational distribution family, the amortisation gap, or the inference generalisation gap (Cremer et al., 2018; Zhang et al., 2021). Hence, pseudo-Gibbs sampling may produce sub-optimal samples even in the asymptotic limit, or completely fail to converge due to an incompatibility of q(z | x_obs, x_mis) and p(x_mis | x_obs, z).

¹Equation (1) corresponds directly to missing data imputation with a missing-at-random (MAR) missingness pattern.
²The variational posterior is typically available after fitting the VAE on complete data using standard variational Bayes (Rezende et al., 2014; Kingma et al., 2014), or can be fitted afterwards using a real or generated complete data set.
³Superscript t represents the sampler iteration.

## 2.3 Metropolis-Within-Gibbs (Mattei & Frellsen, 2018)

Mattei & Frellsen (2018, Section 3.2) have proposed a simple modification of the pseudo-Gibbs sampler that can, asymptotically in t, generate exact samples from p(x_mis | x_obs). The method incorporates a Metropolis–Hastings accept-reject step (Metropolis et al., 1953; Hastings, 1970) to correct for the mismatch between q(z | x_obs, x_mis) and p(z | x_obs, x_mis), followed by sampling from p(x_mis | x_obs, z), hence yielding a sampler in the Metropolis-within-Gibbs (MWG) family (Gelman & Rubin, 1992, Section 4.4). Specifically, at each iteration t it generates the proposal sample z̃ ∼ q(z | x_obs, x^{t−1}_mis) and accepts it as z^t = z̃ with probability

$$\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\mathbf{x}_{\rm mis}^{t-1})=\min\left\{1,\frac{p(\mathbf{x}_{\rm obs},\mathbf{x}_{\rm mis}^{t-1}\mid\tilde{\mathbf{z}})\,p(\tilde{\mathbf{z}})}{p(\mathbf{x}_{\rm obs},\mathbf{x}_{\rm mis}^{t-1}\mid\mathbf{z}^{t-1})\,p(\mathbf{z}^{t-1})}\,\frac{q(\mathbf{z}^{t-1}\mid\mathbf{x}_{\rm obs},\mathbf{x}_{\rm mis}^{t-1})}{q(\tilde{\mathbf{z}}\mid\mathbf{x}_{\rm obs},\mathbf{x}_{\rm mis}^{t-1})}\right\}.\tag{2}$$

If the proposal z̃ is rejected, the latent sample from the previous iteration is used, so that z^t = z^{t−1}. Given z^t, a new imputation x^t_mis is then sampled as in standard Gibbs sampling: x^t_mis ∼ p(x_mis | x_obs, z^t). By incorporating the Metropolis–Hastings acceptance step, the pseudo-Gibbs sampler is transformed into an asymptotically exact MCMC sampler with p(x_mis, z | x_obs) as its stationary distribution, even if q(z | x_obs, x_mis) ≠ p(z | x_obs, x_mis).

Importantly, as noted by the authors, the asymptotic exactness of MWG comes, compared to the pseudo-Gibbs sampler, at little additional computational cost in each iteration: the quantities required for computing ρ^t are also computed in the pseudo-Gibbs sampler, except for the often cheap prior evaluations p(z).

In summary, MWG has several desirable properties which make it an attractive choice for conditional sampling of VAEs: (i) it provides theoretical guarantees of convergence to the correct conditional distribution, (ii) it is simple to implement, and (iii) its per-iteration computational cost is relatively small, i.e. one standard evaluation of a VAE, and is comparable to the cost of pseudo-Gibbs. However, as we will see next, MWG is not free of important pitfalls.
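Continuing the sketch from section 2.2, the Metropolis-within-Gibbs variant only adds the acceptance test of equation 2. Again, this is an illustration under the same assumed interface, with distribution objects additionally exposing a `.log_prob()` that returns the joint log-density, and `prior` denoting p(z).

```python
def mwg_step(x, mask, z_prev, encoder, decoder, prior):
    """One Metropolis-within-Gibbs iteration (eq. 2) for the current state (x, z_prev)."""
    q = encoder(x)
    z_prop = q.sample()
    # Log of the acceptance ratio in eq. 2: joint model term times proposal correction.
    log_ratio = (decoder(z_prop).log_prob(x) + prior.log_prob(z_prop) + q.log_prob(z_prev)
                 - decoder(z_prev).log_prob(x) - prior.log_prob(z_prev) - q.log_prob(z_prop))
    z = z_prop if torch.rand(()) < log_ratio.exp() else z_prev  # Metropolis-Hastings test
    x_new = decoder(z).sample()
    return mask * x + (1 - mask) * x_new, z
```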
## 3 Pitfalls Of Gibbs-Like Samplers For VAEs

Although the Gibbs-like samplers from sections 2.2 and 2.3 are often used to conditionally sample from a VAE model, the structure of the latent space can cause poor non-asymptotic sampling behaviour. We here detail, in the form of three pitfalls, how this structure can affect the aforementioned samplers. While the reported pitfalls are related to the known limitations of the classical Gibbs (Geman & Geman, 1984) and Metropolis-within-Gibbs samplers (Gelman & Rubin, 1992), we here work out their significance in the context of VAEs. In fig. 1 we exemplify these pitfalls in an archetypical scenario using a synthetic 2-dimensional VAE model (for details about the model see appendix C.1).4 The proposed methods in the following section, AC-MWG (section 4.1) and LAIR (section 4.2), provide remedies for the reported pitfalls.

Figure 1: *Pitfalls of Gibbs-like samplers for VAE models.* (The figure is best viewed in colour.) Each panel corresponds to a distinct sampling problem, where the observed variable xobs ∈ {x0, x1} is, from left to right, x1 = 0, x0 = 0, and x1 = 1. The line plots show the ground-truth density p(xmis | xobs) (blue) and the density of the samples obtained from the two Gibbs-like methods, pseudo-Gibbs (orange) and MWG (pink). The contour plot shows the conditional joint density p(xmis, z | xobs) of the VAE model over the missing variable xmis (bottom axis) and the latent z (right axis), and the dashed green curve shows the expected value of z given x0 and x1. Both samplers were initialised with the same state and run for 50k iterations. Left: MWG fails to mix between nearby modes (in the space of z; right axis) due to a high rejection probability in eq. (2). Center: both pseudo-Gibbs and MWG fail to find modes that are far apart (in the space of z; right axis) due to the narrow proposal distribution. (We note that the MWG and pseudo-Gibbs lines overlap in this plot.) Right: poor initialisation may leave MWG "stuck" far from the target distribution.

4We note that the variational distribution q(z | x) in this section is constructed to be slightly wider than the model conditional p(z | x) to differentiate the different modes of failure. Appendix D.1 contains an additional view of the pitfalls.

Pitfall I. Strong relationship between the latents and the visibles can cause poor mixing. We often train VAEs to learn a structured latent space that captures the complexity of the data. This is typically achieved by using a decoder with a simple, often conditionally-independent, distribution. For example, to fit a binarised MNIST data set well with a Bernoulli decoder distribution p(x | z) = ∏_d Bernoulli(x_d | z), the digits in the image space must be well-represented in the latent space and the variance of the decoder must be nearly 0; otherwise the model would produce noisy samples due to random "flips" of the pixels. Hence, in VAEs with simple decoders the complexity of modelling the visibles x is often converted to learning a complex structure in the latent space along with a near-deterministic mapping between the latents z and the visibles x, as given by the decoder p(x | z).

But this strong, near-deterministic, relationship can substantially inhibit the convergence and mixing properties of a sampler like Metropolis-within-Gibbs. This is because the proposed samples z˜ ∼ q(z) will be rejected with a high probability if the conditional distribution p(xmis | xobs, z˜) ∝ p(xobs, xmis | z˜) places little density/mass on the *previous* value of xmis = x^{t−1}_mis, as a small value of p(xobs, x^{t−1}_mis | z˜) will make the Metropolis–Hastings acceptance probability in eq. (2) small. This small acceptance probability leads to Markov chains that get "stuck" in a mode and prevents the sampler from moving to nearby modes that are close in the latent space.
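A small numeric illustration of this effect, under assumed toy numbers rather than the paper's model: as the decoder standard deviation shrinks, the likelihood ratio in eq. (2) for a proposal that decodes to a different mode collapses towards zero.

```python
import numpy as np
from scipy.stats import norm

# Toy illustration of pitfall I: the current imputation was decoded from
# z_prev, but the proposal z_prop decodes to a different mode; as the decoder
# std shrinks, the likelihood term p(x_obs, x_mis^{t-1} | z~) in eq. (2)
# crushes the acceptance probability.
x_prev = 1.0                        # current x_mis, consistent with z_prev
mean_prev, mean_prop = 1.0, -1.0    # decoder means under z_prev and z_prop
for dec_std in [1.0, 0.3, 0.1, 0.03]:
    log_ratio = (norm.logpdf(x_prev, mean_prop, dec_std)
                 - norm.logpdf(x_prev, mean_prev, dec_std))
    rho = min(1.0, np.exp(log_ratio))   # likelihood part of the ratio in eq. (2)
    print(f"decoder std={dec_std:5.2f}  acceptance <= {rho:.2e}")
```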
We illustrate this pitfall in fig. 1 (left). In this example, MWG (pink) fails to mix between the modes that are close in the space of latents. This failure occurs despite the proposal distribution generating samples from the neighbouring modes, because such proposed samples are rejected by the Metropolis–Hastings step. On the other hand, pseudo-Gibbs (orange) can mix between the modes since it does not use the Metropolis–Hastings step.

Pitfall II. The encoder distribution generates proposals that are insufficiently exploratory. A further complication of the structured latent space is illustrated in fig. 1 (center). Here, the modes of the target distribution are sparsely dispersed in the latent space. In this example, we see that both MWG (pink) and pseudo-Gibbs (orange) fail to find distant modes. This is because the proposal distribution, as given by the encoder that approximates the model posterior p(z | x), is too "narrow" to propose values from the alternative modes. For example, given the upper half of an MNIST image of the number "8", it may not be possible to tell if the completed image should be an "8" or a "9", representing two modes of imputations. If the latent-space representations of "8" and "9" are sufficiently far apart, then an encoder conditioned on a current imputation state, for example, xobs ∪ x^{t−1}_mis ≡ "9", is unlikely to propose a z˜ that would decode into an x˜_mis in the alternative mode, that is, xobs ∪ x˜_mis ≡ "8". On the other hand, even if the proposal distribution were wide enough to propose jumps to distant modes, MWG would still reject such proposals with high probability due to pitfall I and thus prevent effective exploration.

Pitfall III. Poor initialisation can cause sampling of the wrong mode. As noted by Mattei & Frellsen (2018), MWG for VAEs is extremely sensitive to initialisation, and to alleviate this they suggest initialising by first sampling using pseudo-Gibbs before switching to MWG. But deciding when to stop the "warm-up" is not easy, and poor initialisation can make MWG get stuck. Moreover, initialisation via an (approximate) MAP estimate using stochastic gradient ascent may also suffer from the multimodality issues described above. In fig. 1 (right) we demonstrate a case where MWG (pink) fails due to a poor initialisation.

The limitations of Gibbs-like samplers described in pitfalls I-III motivate our development of improved samplers. Interestingly, despite pseudo-Gibbs being theoretically inferior to MWG, we have seen in this section that pseudo-Gibbs can under some conditions perform better than MWG (fig. 1). In the following sections we propose two different methods that, like pseudo-Gibbs and MWG, utilise the encoder of the VAE to propose transitions in the latent space, whilst mitigating pitfalls I-III and having stronger theoretical guarantees than the simple pseudo-Gibbs method.

## 4 Remedies

The Metropolis-within-Gibbs (MWG) sampler for conditional sampling of VAEs has several desirable properties (see section 2.3). However, as discussed in the previous section, the Gibbs-like sampler can have poor non-asymptotic performance. In this section we propose two methods for conditional sampling of VAEs inspired by MWG that also mitigate its potential pitfalls (section 3).
The key idea of the proposed methods is akin to ancestral sampling of eq. (1): first, the methods approximately sample the intractable posterior over the latents, p(z | xobs), improving this approximation iteratively, and then they sample from the decoder distribution p(xmis | xobs, z) conditional on the produced latent samples. In section 4.1 we propose a few simple modifications to the MWG sampler and demonstrate on a synthetic example how this mitigates the pitfalls of MWG. In section 4.2 we propose an alternative method based on adaptive importance sampling and likewise demonstrate on a synthetic example how it mitigates the pitfalls of MWG. A detailed evaluation of the proposed methods is provided in section 5, and the code to reproduce the experiments is available at https://github.com/vsimkus/vae-conditional-sampling.

## 4.1 Adaptive Collapsed-Metropolis-Within-Gibbs

We propose several modifications to the MWG sampler from section 2.3 to mitigate the pitfalls outlined in section 3. The proposed sampler is summarised in algorithm 1. First, to improve exploration and reduce the effects of poor initialisation (see pitfalls II and III) we introduce a prior–variational mixture proposal5

$${\tilde{q}}_{\epsilon}(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{mis}})=(1-\epsilon)q(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{mis}})+\epsilon p(\mathbf{z}),\tag{3}$$

where q(z | xobs, xmis) is the variational encoder distribution, p(z) is the prior distribution of the VAE, and ϵ ∈ (0, 1) is the probability to sample from the prior.

Clearly this modification alone would not resolve the pitfalls of MWG, since proposals z˜ sampled from the prior p(z) would be rejected with high probability at the Metropolis–Hastings step due to disagreement with the current imputation x^{t−1}_mis in eq. (2). Hence, we next propose changing the target distribution of the Metropolis–Hastings step from p(z | xobs, xmis) to p(z | xobs), such that a good proposal z˜ would not be rejected due to a disagreement with an imputation x^{t−1}_mis (see pitfall I). The modified Metropolis–Hastings acceptance probability is defined as

$$\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})=\min\left\{1,\frac{p(\mathbf{x}_{\mathrm{obs}}\mid\tilde{\mathbf{z}})p(\tilde{\mathbf{z}})}{p(\mathbf{x}_{\mathrm{obs}}\mid\mathbf{z}^{t-1})p(\mathbf{z}^{t-1})}\,\frac{\tilde{q}_{\epsilon}(\mathbf{z}^{t-1}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})}{\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})}\right\}.\tag{4}$$

Marginalising the missing variables xmis out of the likelihood p(xobs, xmis | z˜) corresponds to reducing the conditioning (or collapsing) in Gibbs samplers, which is a common approach to improve mixing and convergence (van Dyk & Park, 2008; van Dyk & Jiao, 2015). In our case, if the optimal proposal distribution p(z | xobs) were known, the sampler would become a standard ancestral sampler and would be maximally efficient, i.e. it would draw an independent sample at each iteration.

Moreover, rather than using the imputation x^{t−1}_mis from the previous iteration to condition the proposal distribution, as in MWG, we here re-sample a random imputation x˜_mis from an available set of historical imputations H^{t−1}_mis that is updated adaptively with the iterations t.
We now combine the proposed changes in eqs. (3) and (4) to introduce the algorithm called adaptive collapsed-Metropolis-within-Gibbs (AC-MWG), which can be seen as an instance of the class of adaptive independent Metropolis–Hastings algorithms (Holden et al., 2009).

5Our mixture proposal is related to the small-world proposal of Guan et al. (2006), which has been shown to improve performance in complicated heterogeneous and multimodal distributions.

Algorithm 1 Adaptive collapsed-Metropolis-within-Gibbs
Input: VAE model p(x, z), variational posterior q(z | xmis, xobs), mixture prob. ϵ, and data-point xobs
1: H^0_mis = ∅ ▷ Initialise imputation history
2: (z^0, x^0_mis) ∼ p(z)p(xmis | xobs, z) ▷ Sample the initial values
3: for t = 1 to T do
4:   x˜_mis ∼ Uniform(H^{t−1}_mis) ▷ Choose a random x_mis from the history
5:   z˜ ∼ q˜_ϵ(z | xobs, x˜_mis) ▷ Sample a proposal value z˜
6:   ρ^t = ρ^t(z˜, z^{t−1}; x˜_mis) ▷ Calculate the acceptance probability using eq. (4)
7:   if u < ρ^t, with u ∼ Uniform(0, 1), then ▷ Accept z˜ with probability ρ^t
8:     z^t = z˜
9:     H^t_mis = {x^τ_mis}_{τ=0}^{t−1}
10:  else ▷ Reject z˜ with probability ρ^t
11:    z^t = z^{t−1}
12:    H^t_mis = H^{t−1}_mis
13:  end if
14:  x^t_mis ∼ p(xmis | xobs, z^t) ▷ Sample x_mis
15: end for
return {(x^0_mis, z^0), ..., (x^T_mis, z^T)} ▷ Return all samples

Assume we start with an initial latent state z^0 and an imputation history H^0_mis = {x̂^0_mis}, such that z^0 and x̂^0_mis are mutually independent (for example, z^0 and x̂^0_mis are generated via independent short runs of pseudo-Gibbs, see section 2.2, or LAIR, see section 4.2). Then a single iteration t of the sampler is as follows:

1. **Proposal sampling.** First, a historical sample x˜_mis is re-sampled uniformly at random from the available imputation history H^{t−1}_mis.6 We then use the proposal distribution from eq. (3) to sample a single proposal z˜.
2. **Metropolis–Hastings acceptance.** The proposed sample z˜ is then either accepted as z^t = z˜ with probability ρ^t(z˜, z^{t−1}; x˜_mis) in eq. (4), or rejected, leaving z^t = z^{t−1}.
3. **Imputation sampling.** The imputation x^t_mis is updated by sampling the conditional x^t_mis ∼ p(xmis | xobs, z^t).
4. **Adaptation (history update).** The available history H^t_mis is updated as follows: if a new z˜ has been accepted, then all imputations {x^τ_mis}_{τ=0}^{t−1} up to step t−1 are made available at the next iteration, i.e. H^t_mis = {x^τ_mis}_{τ=0}^{t−1}; otherwise it is left unchanged, H^t_mis = H^{t−1}_mis.

Step 4 of the sampler constructs the history available at the next iteration such that it does not contain imputations that depend on the current state z^{t−1}, which ensures that the proposed values z˜ are independent of z^{t−1} and thus guarantees that the stationary distribution of the independent Metropolis–Hastings remains correct as the history H^{t−1}_mis changes (Roberts & Rosenthal, 2007; Holden et al., 2009). However, the dependence on the sample history H^{t−1}_mis makes AC-MWG non-Markovian, and hence convergence needs to be verified. Adapting proofs by Holden et al. (2009), we prove in appendix A that the Markov chain of AC-MWG correctly converges to the stationary distribution p(z, xmis | xobs) with probability arbitrarily close to 1 as the number of iterations T grows.
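To connect algorithm 1 to an implementation, below is a minimal sketch of the sampler on a toy one-dimensional model. All densities are illustrative stand-ins, the history update is a simplified rendering of step 4, and the helper names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
EPS = 0.01           # prior mixture probability epsilon in eq. (3)
Q_STD = 0.5

# Toy one-dimensional stand-ins for the VAE densities (illustrative only).
log_p_z = lambda z: norm.logpdf(z, 0.0, 1.0)                    # prior p(z)
log_p_xo_z = lambda x_obs, z: norm.logpdf(x_obs, 2.0 * z, 0.3)  # p(x_obs | z)
q_mean = lambda x_obs, x_mis: 0.3 * x_obs + 0.3 * x_mis         # encoder mean

def log_q_mix(z, x_obs, x_mis):  # log of the mixture proposal q~_eps, eq. (3)
    return np.logaddexp(np.log(1 - EPS) + norm.logpdf(z, q_mean(x_obs, x_mis), Q_STD),
                        np.log(EPS) + log_p_z(z))

def ac_mwg_step(z, x_obs, history):
    """One AC-MWG iteration (algorithm 1, lines 4-14) with scalar z and x_mis."""
    x_tilde = history[rng.integers(len(history))]          # line 4: re-sample history
    if rng.uniform() < EPS:                                # line 5: z~ ~ q~_eps
        z_prop = rng.normal()
    else:
        z_prop = q_mean(x_obs, x_tilde) + Q_STD * rng.normal()
    log_rho = (log_p_xo_z(x_obs, z_prop) + log_p_z(z_prop)       # eq. (4): x_mis is
               - log_p_xo_z(x_obs, z) - log_p_z(z)               # marginalised out
               + log_q_mix(z, x_obs, x_tilde)
               - log_q_mix(z_prop, x_obs, x_tilde))
    accepted = np.log(rng.uniform()) < log_rho             # lines 7-13
    z_new = z_prop if accepted else z
    x_mis = 2.0 * z_new + 0.3 * rng.normal()               # line 14: x_mis ~ p(. | z)
    return z_new, x_mis, accepted

# The caller maintains the history per step 4: on acceptance all imputations up
# to t-1 become available; on rejection the available history is unchanged.
z, history, pending = 0.0, [0.0], []
for t in range(1000):
    z, x_mis, accepted = ac_mwg_step(z, x_obs=1.0, history=history)
    if accepted:
        history = history + pending      # H^t = {x_mis^tau}_{tau < t}
        pending = []
    pending.append(x_mis)
```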
Finally, we note that the per-iteration computational costs of AC-MWG and MWG (section 2.3) are nearly the same. The differences are: re-sampling x˜_mis from the history H^{t−1}_mis, which should be negligible compared to the cost of evaluating the model, and marginalising the missing variables from the likelihood, p(xobs | z) = ∫ p(xobs, xmis | z) dxmis, which is often free if the standard conditional independence assumption holds.

6In this paper we re-sample x˜_mis from all the past samples in the available history H^{t−1}_mis; however, other strategies might be devised to improve the computational and convergence properties of the algorithm (see e.g. Holden et al., 2009; Martino et al., 2018), for example, using a shorter window of past samples instead of the full length of the history.

Figure 2: *The proposed AC-MWG sampler (yellow) with ϵ = 0.01 on 2D VAE sampling problems, same as in fig. 1.* (The figure is best viewed in colour.) AC-MWG (yellow) samples the target distribution (blue) more accurately than MWG (pink) and pseudo-Gibbs (orange). All three samplers were initialised with the same state and run for 50k iterations.

## 4.1.1 Verification Of AC-MWG On Synthetic VAE

We verify the proposed AC-MWG method on the synthetic VAE example in section 3 (see additional details in appendix C.1). The results are shown in fig. 2 (see also additional figures in appendix D.1). With the proposed modifications, AC-MWG samples the target distribution more accurately by exploring modes that are close in the latent space (left), due to the modified acceptance probability in eq. (4), as well as distant modes (center), due to the modified proposal distribution in eq. (3). The modified method is also less sensitive to poor initialisation (right). Moreover, we perform ablation studies in appendices D.2 and D.4 to further validate that both modifications, the mixture proposal in eq. (3) and the collapsed-Gibbs target in eq. (4), are key to the performance of the method.

## 4.2 Latent-Adaptive Importance Resampling

Instead of MCMC, we can sample from eq. (1) via importance resampling (IR; see appendix B for details on standard importance resampling, and Chopin & Papaspiliopoulos, 2020, for a comprehensive introduction). However, like MCMC, the efficiency of IR significantly depends on the choice of the proposal distribution. Our goal in this section is to design an *adaptive* importance resampling method that efficiently samples p(xmis | xobs) of a joint VAE model p(x), and we achieve this by constructing an adaptive proposal distribution q^t(z | xobs) using the encoder distribution q(z | xobs, xmis). The proposed method is summarised in algorithm 2.

As for AC-MWG, we aim to promote exploration and reduce the effects of poor initialisation (see pitfalls II and III). We thus start with the prior–variational mixture proposal q˜_ϵ(z | xobs, xmis) from eq. (3) and use it to construct the following *adaptive* mixture proposal distribution q^t(z | xobs):

$$q^{t}(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}})=\mathbb{E}_{f^{t}(\mathbf{x}_{\mathrm{mis}}\mid\mathbf{x}_{\mathrm{obs}})}\left[\tilde{q}_{\epsilon}(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{mis}})\right]\quad\mathrm{with}\quad f^{t}(\mathbf{x}_{\mathrm{mis}}\mid\mathbf{x}_{\mathrm{obs}})=\frac{1}{K}\sum_{k=1}^{K}\delta_{\mathbf{x}_{\mathrm{mis}}^{(t-1,k)}}(\mathbf{x}_{\mathrm{mis}}),$$

where f^t(xmis | xobs) is an imputation distribution represented as a mixture of Dirac masses at K particles {x^{(t−1,k)}_mis}_{k=1}^K, which we will use to adapt the proposal distribution at each iteration t.
We further rewrite the proposal by inserting the definitions of f^t(xmis | xobs) and q˜_ϵ(z | xobs, xmis), and re-parametrise it by setting ϵ = R/(K + R), where R is a non-negative integer, to obtain

$$q^{t}(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}})={\frac{1}{K+R}}\left(\sum_{k=1}^{K}q(\mathbf{z}\mid\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{mis}}^{(t-1,k)})+\sum_{r=1}^{R}p(\mathbf{z})\right).\tag{5}$$

Algorithm 2 Latent-adaptive importance resampling
Input: VAE model p(x, z), variational posterior q(z | x), data-point xobs, number of imputation particles K, number of iterations T
1: x^{(0,1)}_mis, ..., x^{(0,K)}_mis ∼ E_{p(z)}[p(xmis | xobs, z)] ▷ Sample the initial imputation particle values
2: for t = 1 to T do
3:   z˜^{(t,k)} ∼ q(z | xobs, x^{(t−1,k)}_mis) for all k ∈ {1, ..., K} ▷ Draw a sample for each particle
4:   z˜^{(t,K+r)} ∼ p(z) for all r ∈ {1, ..., R} ▷ Draw R prior proposals
5:   w(z˜^{(t,k)}) = p(xobs, z˜^{(t,k)}) / q^t(z˜^{(t,k)} | xobs) for all k ∈ {1, ..., K+R} ▷ Unnormalised importance weights
6:   w˜(z˜^{(t,k)}) = w(z˜^{(t,k)}) / Σ_{j=1}^{K+R} w(z˜^{(t,j)}) for all k ∈ {1, ..., K+R} ▷ Normalise importance weights
7:   z^{(t,1)}, ..., z^{(t,K)} ∼ Multinomial{z˜^{(t,k)}, w˜(z˜^{(t,k)})}_{k=1}^{K+R} ▷ Resample z^{(t,k)} from the proposed set
8:   x^{(t,k)}_mis ∼ p(xmis | xobs, z^{(t,k)}) for all k ∈ {1, ..., K} ▷ Update imputation particles
9: end for
10: w¯(z˜^{(t,k)}) = w(z˜^{(t,k)}) / Σ_{τ=1}^{T} Σ_{j=1}^{K+R} w(z˜^{(τ,j)}) for all k ∈ {1, ..., K+R} and all t ∈ {1, ..., T} ▷ Re-normalise all proposals
11: z^i ∼ Multinomial{z˜^{(t,k)}, w¯(z˜^{(t,k)})}_{t=1,k=1}^{(T,K+R)} for all i ∈ {1, ..., T·K} ▷ Resample proposals from all iterations
12: x^i_mis ∼ p(xmis | xobs, z^i) for all i ∈ {1, ..., T·K} ▷ Sample imputations
return {x^i_mis}_{i=1}^{T·K}

The above proposal can be interpreted to have a total of K + R components, of which K components depend on the imputation particles {x^{(t−1,k)}_mis}_{k=1}^K, which encourage exploitation, and R are "replenishing" prior components p(z), which encourage exploration and mitigate particle collapse. Moreover, we sample the proposal distribution using stratified sampling (Robert & Casella, 2004; Owen, 2013, Section 9.12; Elvira et al., 2019, Appendix A), a well-known variance-reduction technique that draws one sample from each of the K + R components.

Using the mixture proposal distribution in eq. (5) we now introduce the new algorithm that we call latent-adaptive importance resampling (LAIR), which belongs to the class of adaptive importance sampling (AIS) algorithms of Elvira & Martino (2022, Section 4). The algorithm starts with K imputation particles {x^{(0,k)}_mis}_{k=1}^K that may come from a simple distribution such as the empirical marginals, another multiple imputation method, or simply the unconditional marginal of the VAE p(xmis). An iteration t of the algorithm then performs the following three steps:

1. **Proposal sampling.** Sample the proposal distribution q^t(z | xobs) in eq. (5) using stratified sampling. That is, for each particle x^{(t−1,k)}_mis draw a sample z˜^{(t,k)} from the proposal q(z | xobs, x^{(t−1,k)}_mis), and draw R proposals z˜^{(t,K+r)} from the prior p(z), for a total of K + R proposals.

2. **Weighting.** Compute the unnormalised importance weights w(z˜^{(t,k)}):7,8

$$w(\tilde{\mathbf{z}}^{(t,k)})={\frac{p(\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{z}}^{(t,k)})}{q^{t}(\tilde{\mathbf{z}}^{(t,k)}\mid\mathbf{x}_{\mathrm{obs}})}}.\tag{6}$$
3. **Adaptation.**
3.I. Resample a set {z^{(t,k)}}_{k=1}^K with replacement from the proposal set {z˜^{(t,k)}}_{k=1}^{K+R} proportionally to the weights w(z˜^{(t,k)}).9
3.II. Update the imputation particles {x^{(t,k)}_mis}_{k=1}^K by sampling p(xmis | xobs, z) conditional on each z ∈ {z^{(t,k)}}_{k=1}^K from step 3.I.

7We here use deterministic-mixture MIS (DM-MIS) weights, but alternative weighting schemes can also be used that enable a more fine-grained control of the cost–variance trade-off; see Elvira et al. (2019, Section 7.2).
8By marginalising the variables xmis in the numerator of the weights we address pitfall I, similar to eq. (4) of AC-MWG.
9Alternative resampling schemes may also be used; see Chopin & Papaspiliopoulos (2020, Section 9.4).

Each iteration t at step 3.II (accordingly, line 8 of algorithm 2) produces (approximate) samples {x^{(t,k)}_mis}_{k=1}^K from the target distribution p(xmis | xobs), since any iteration t of the algorithm corresponds to standard importance resampling and hence inherits its properties (Cappé et al., 2004); see appendix B. In particular, the sampler monotonically approaches the target distribution as the number of proposed samples K + R tends to infinity. Hence, the algorithm may be used in settings where the target distribution changes across iterations t, for instance, when fitting a model from incomplete data via Monte Carlo EM (Wei & Tanner, 1990; Simkus et al., 2023). However, unlike MCMC methods, a finite set of samples at any iteration t is not generally guaranteed to converge to the target distribution as t grows large (Cappé et al., 2004; Douc et al., 2007). In particular, for finite sample sizes, K + R ≪ ∞, the sampler bias is of the order O(1/(K+R)) (Owen, 2013; Paananen et al., 2021) at any iteration t and depends on the disparity between the proposal and the target distributions. To improve the approximation, after the algorithm completes all T iterations, we can use samples from all iterations t ∈ {1, ..., T} to construct a more accurate estimator (Cappé et al., 2004).

4. **Draw final samples after completing all T iterations.**
4.I. Re-normalise the weights of z˜^{(t,k)} over all iterations t ∈ {1, ..., T} and all k ∈ {1, ..., K+R} to obtain

$$\bar{w}(\tilde{\mathbf{z}}^{(t,k)})=\frac{w(\tilde{\mathbf{z}}^{(t,k)})}{\sum_{\tau=1}^{T}\sum_{j=1}^{K+R}w(\tilde{\mathbf{z}}^{(\tau,j)})}.$$

4.II. Resample T·K samples z^i with replacement from the set {z˜^{(t,k)}}_{t=1,k=1}^{(T,K+R)} using the weights w¯(z˜^{(t,k)}) from the previous step.
4.III. Sample imputations {x^i_mis}_{i=1}^{T·K} via ancestral sampling by sampling p(xmis | xobs, z) conditional on each z ∈ {z^i}_{i=1}^{T·K} from step 4.II.

The advantage of resampling from the re-weighted full sequence of samples is that the bias of the self-normalised importance sampler goes down with T (in addition to K + R), and hence more accurate samples can be obtained. In particular, the sampler now monotonically approaches the target distribution as the *total* number of proposed samples approaches infinity, T(K + R) → ∞, and the bias is of the order O(1/(T(K+R))).

We note that the per-iteration computational cost of LAIR is comparable to running K + R parallel chains of MWG, with the exception of: marginalising the missing variables from the likelihood, as in AC-MWG, which may often be cheap, and evaluating the denominator of the importance weights w(z˜) in eq. (6), which requires that each of the K + R proposed samples z˜ be evaluated under the densities of all K + R components of the mixture proposal in eq. (5), hence needing (K + R)^2 evaluations.
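As a concrete illustration of the weighting step, below is a minimal sketch of one LAIR iteration (lines 3 to 8 of algorithm 2) on a toy one-dimensional model; the nested density evaluations in the mixture denominator make the (K + R)^2 cost explicit. All densities and helper names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)
K, R = 8, 2          # imputation particles and replenishing prior components

# Toy one-dimensional stand-ins for the VAE densities (illustrative only).
Q_STD = 0.5
log_p_z = lambda z: norm.logpdf(z, 0.0, 1.0)                    # prior p(z)
log_p_xo_z = lambda x_obs, z: norm.logpdf(x_obs, 2.0 * z, 0.3)  # p(x_obs | z)
q_mean = lambda x_obs, x_mis: 0.3 * x_obs + 0.3 * x_mis         # encoder mean

def lair_step(x_obs, x_mis):
    """One LAIR iteration (algorithm 2, lines 3-8); x_mis has shape (K,)."""
    mu = q_mean(x_obs, x_mis)                                   # K encoder means
    z = np.concatenate([mu + Q_STD * rng.normal(size=K),        # stratified sampling:
                        rng.normal(size=R)])                    # K encoder + R prior
    # DM-MIS mixture density, eq. (5): every proposal is evaluated under all
    # K + R components, i.e. (K + R)^2 density evaluations in total.
    comp = norm.logpdf(z[None, :], mu[:, None], Q_STD)          # shape (K, K+R)
    comp = np.vstack([comp, np.tile(log_p_z(z), (R, 1))])       # + R prior rows
    log_q_mix = logsumexp(comp, axis=0) - np.log(K + R)
    log_w = log_p_xo_z(x_obs, z) + log_p_z(z) - log_q_mix       # eq. (6), in logs
    w = np.exp(log_w - logsumexp(log_w))                        # self-normalise
    z_res = rng.choice(z, size=K, p=w)                          # resample K latents
    x_mis_new = 2.0 * z_res + 0.3 * rng.normal(size=K)          # update particles
    return x_mis_new, z, log_w                  # keep (z, log_w) for final step 4

x_mis = rng.normal(size=K)                      # crude initial particles
for _ in range(100):
    x_mis, z_prop, log_w = lair_step(x_obs=1.0, x_mis=x_mis)
```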
However, since the components of the proposal distribution in eq. (5) are typically all simple distributions, such as diagonal Gaussians, this computational cost is often negligible for a moderate number of proposals K + R. Moreover, the cost may be reduced by trading off for a higher variance of the estimator; see footnote 7. Finally, the computational cost of the final resampling in step 4 (accordingly, lines 10 to 12 in algorithm 2) is negligible, since all the required quantities have already been computed in the past iterations.

## 4.2.1 Verification Of LAIR On Synthetic VAE

We now verify the proposed method, LAIR, on the synthetic VAE example in section 3 (see additional details in appendix C.1). The results are demonstrated in fig. 3 (see also additional figures in appendix D.1), where we have used K = 19 particles and R = 1 replenishing component (corresponding to ϵ = 0.05). We can see that the method mitigates the three main pitfalls: it mixes between nearby modes (left), explores distant modes (center), and is less sensitive to poor initialisation (right). Moreover, in ablation studies performed in appendices D.2 and D.4 we further investigate the sensitivity of the method to the choice of ϵ = R/(K + R) and find that the method performs well as long as 0 < ϵ < 1.

Figure 3: *The proposed LAIR sampler (yellow) with K = 19 particles and R = 1 replenishing components on 2D VAE sampling problems, same as in fig. 1.* (The figure is best viewed in colour.) LAIR (yellow) samples the target distribution (blue) more accurately than MWG (pink) and pseudo-Gibbs (orange). MWG and pseudo-Gibbs were run for 50k iterations, and LAIR was run for 2.5k iterations to match the number of generative-model evaluations.

## 5 Evaluation

In sections 4.1 and 4.2 we have introduced our methods, AC-MWG and LAIR, for conditional sampling of VAEs, which mitigate the potential pitfalls of Gibbs-like samplers (section 3), as verified in sections 4.1.1 and 4.2.1. As motivated in section 2.1, conditional sampling is a fundamental tool for multiple imputation of missing data (Rubin, 1987; 1996), where the goal is to generate plausible values of the missing variables with a correct uncertainty representation. We here evaluate the newly proposed methods for missing data imputation. We assume that we have a pre-trained VAE model, trained on complete data, and aim to generate imputations of the missing variables at test time.

## 5.1 Mixture-Of-Gaussians MNIST

Evaluating the quality of imputations from data alone is a difficult task, since the imputations represent guesses of unobserved values from an unknown conditional distribution (Abayomi et al., 2008; van Buuren, 2018, Section 2.5). Hence, to accurately evaluate the proposed methods, in this section we first fit a mixture-of-Gaussians (MoG) model to the MNIST data set, which we then use as the ground truth to simulate a semi-synthetic data set that is subsequently fitted by a VAE model (see appendix C.2 for more details). Using an intermediate MoG model enables us to tractably sample the reference conditional distribution (which would otherwise be unknown) when evaluating the accuracy of the conditional VAE samples obtained using the proposed and existing methods. In fig. 4 we demonstrate the performance of the methods on 10 sampling problems (see appendix D.3 for additional figures and metrics).
We measure the performance using the Fréchet inception distance (FID, Heusel et al., 2017), where for the inception features we use the final-layer outputs of the encoder network. The figures show that the proposed methods, AC-MWG (pink) and LAIR (yellow), significantly outperform the Gibbs-like samplers from sections 2.2 and 2.3 (blue and green). In appendix D.4 we further perform an ablation study for the proposed methods, where: we validate that both the mixture proposal in eq. (3) and the collapsed-Gibbs target in eq. (4) are key to the good performance of AC-MWG; and we find that LAIR can perform well for a number of values of ϵ = R/(K + R), as long as 0 < ϵ < 1.

The results for MWG (green) use pseudo-Gibbs warm-up, as suggested by the authors Mattei & Frellsen (2018), to mitigate the effects of poor initialisation. We further investigated two different warm-up methods for MWG: an approximate MAP initialisation using stochastic gradient ascent on the log-likelihood, and LAIR. Both schemes improved over the base MWG, but we found that the initialisation using LAIR generally performed better (see fig. 12 in the appendix). MWG with LAIR initialisation is denoted in fig. 4 as MWG′ (orange). We observe that with better initialisation the performance of MWG can be significantly improved, hence confirming the sensitivity of MWG to poor initialisation as discussed in section 3. However, with few exceptions, MWG′ (orange) still generally performs worse than the proposed methods (pink and yellow), hence suggesting that the poor performance of MWG can be in part explained by the poor mixing of the sampler as discussed in section 3, which is addressed by the proposed methods.

Figure 4: *Fréchet inception distance (FID) between samples from the ground truth conditional p(xmis | xobs), and samples from the imputation methods.* Each panel in the figure corresponds to a different conditional sampling problem p(xmis | xobs). Each evaluation is repeated 20 times; the box-plot represents the inter-quartile range, including the median, and the whiskers show the overall range of the results.

Figure 5: *Sampling performance on four real-world UCI data sets.* Top: Sinkhorn distance of the imputed data sets evaluated on a 50k data-point subset of test data (except for Miniboone, where the full test data set was used). Bottom: average RMSE of the imputations on the whole test data set. In both rows imputations from the final iteration of each algorithm are used, and uncertainty is shown over different runs.

## 5.2 Real-World UCI Data Sets

We now evaluate the proposed methods on real-world data sets from the UCI repository (Dua & Graff, 2017; Papamakarios et al., 2017). We train a VAE model with a ResNet architecture on complete training data and evaluate the sampling accuracy of the existing and proposed methods on incomplete test data with 50% missingness (see appendix C.3 for more details). We also include a simple baseline where imputations are sampled from the marginal distribution p(xmis) of the VAE.

Figure 6: *Imputation accuracy on the binarised Omniglot test set with 1-3 randomly missing quadrants.* The top and bottom rows show F1 and average SSIM scores (higher is better for both metrics), respectively, between the imputed and the ground truth values. In both rows imputations from the final iteration of each algorithm are used, and uncertainty is shown over different runs.
Moreover, in line with the observations from section 5.1, for MWG and AC-MWG we use LAIR initialisation, as we have found it to considerably improve the performance of both methods. We here assess the performance using two metrics: the Sinkhorn distance (Cuturi, 2013) between the imputed and ground truth data sets (computed using the geomloss package by Feydy et al., 2019), and the average RMSE of the imputations (for additional metrics, see appendix D.5).

The results are shown in fig. 5. First, the figure shows that all methods outperform marginal imputations (blue), with the one exception of pseudo-Gibbs (green) on the Hepmass data, where the Sinkhorn distance is slightly higher than the baseline. Second, as before, pseudo-Gibbs (green) is typically improved upon by MWG (orange). The only exception is the Gas data with the Sinkhorn distance as metric (first row, first column), where the performance shows high variability; other metrics (second row, first column, and appendix D.5) do not display this behaviour. Third, we see that the proposed methods, AC-MWG (pink) and LAIR (yellow), show better or comparable performance to the existing methods in terms of the Sinkhorn distance (top row), and always improve on the existing methods in terms of the point-wise RMSE (bottom row). In summary, the results in this section match our findings from section 5.1, and hence further highlight the importance of mitigating the pitfalls in section 3 when dealing with real-world tasks.

## 5.3 Omniglot Data Set

In this section we evaluate the methods for conditional sampling of a VAE model trained on fully-observed binarised Omniglot data of handwritten characters (Lake et al., 2015). For the VAE model we use convolutional ResNet encoder and decoder networks with 50 latent dimensions (see appendix C.4 for more details). We then evaluate the existing and proposed methods for conditional imputation of test-set images that miss 1, 2, and 3 random quadrants. Similar to the previous section, we include a simple baseline where imputations are sampled from the marginal distribution p(xmis) of the VAE.

The accuracy of the imputations on the binarised Omniglot is assessed using the F1 score (Mattei & Frellsen, 2018) and the structural similarity index measure (SSIM, Wang et al., 2004) between the ground truth and imputed values. The results are shown in fig. 6. We first note that all conditional sampling methods perform better than marginal imputations (deep blue). Furthermore, we see that the metrics for the existing methods imply the ranking pseudo-Gibbs (green) < MWG (orange) < MWG′ (pink), as before. Finally, we observe that the proposed methods, AC-MWG (yellow) and LAIR (light blue), further improve the accuracy of the imputations over the existing methods.

## 6 Discussion

Conditional sampling is a key challenge for downstream applications of VAEs, and imprecise or inefficient samplers can cause unreliable results. We have examined the potential pitfalls of using Gibbs-like samplers, such as MWG, to conditionally sample from unconditional VAE models. While the outlined pitfalls are related to the well-known limitations of the standard Gibbs sampler, we work out their significance in the context of VAEs. Pitfalls I and II outline two reasons for the poor mixing of MWG: a strong relationship between the latents z and the visibles x, and a lack of exploration when the variational encoder distribution is used as the proposal. Pitfall III highlights the importance of good initialisation for the performance of the sampler.
We introduced two samplers for conditional sampling of VAEs that address the pitfalls and show improved performance when compared to MWG and other baselines. The proposed methods, adaptive collapsed-Metropolis-within-Gibbs (AC-MWG) and latent-adaptive importance resampling (LAIR), mitigate pitfall I by marginalising the missing variables xmis when (approximately) sampling the latents z, and then sample the missing values xmis ∼ p(xmis | xobs, z). Therefore, in contrast to Gibbs sampling, the two methods can be seen as approximate ancestral sampling methods with asymptotic exactness guarantees. To mitigate pitfall II we have constructed proposal distributions from a mixture composed of the variational encoder distribution and the prior, which balances exploitation and exploration. Finally, we have found that poor initialisation (pitfall III) affects LAIR much less than the MCMC methods, due to its ability to use information from multiple points in the latent space, and hence using LAIR to initialise MWG and AC-MWG can further improve their respective performances.

Depending on the task, computational budget, and accuracy requirements, one may choose to use either AC-MWG or LAIR for conditional sampling of VAEs. For example, in tasks where the target distribution is changing between iterations, such as learning a VAE model from incomplete data (Simkus et al., 2023), LAIR could be more efficient than AC-MWG; this is because LAIR produces valid (although potentially biased) samples from the target distribution at any iteration, while AC-MWG requires a "burn-in" period until the sampler converges to the target distribution. On the other hand, on a strict computational budget AC-MWG might be preferred over LAIR: while the cost of AC-MWG is comparable to MWG (and hence also pseudo-Gibbs), each iteration of LAIR involves equivalent computations on K + R particles, and hence the computational cost and memory requirements are about K + R times those of MWG. Finally, the convergence properties of the two methods are distinct: AC-MWG converges asymptotically in the number of iterations, whereas the convergence of LAIR additionally scales in the number of particles K + R, and therefore parallelisation may be used to improve the speed of convergence at the cost of additional memory usage.

We have focused on conditional sampling of VAE models with moderate-dimensional latent spaces. To this end, we have addressed the "exploration–exploitation" dilemma by constructing the proposal distribution from the prior and variational encoder distributions. But what works well in moderate dimensions might not work well in high dimensions, a direct consequence of the infamous "curse of dimensionality". This means that exploring the posterior by sampling the prior distribution might become impractical in higher dimensions. To scale the methods, alternative exploration strategies could be constructed by replacing the mixture proposal in eq. (3) with, for example, a mixture composed of annealed versions of the variational encoder distribution. Moreover, since the proposed methods belong to the large and general families of adaptive MCMC (Haario et al., 2001; Warnes, 2001; Roberts & Rosenthal, 2007; Holden et al., 2009; Liang et al., 2010) and adaptive importance sampling (AIS, Cappé et al., 2004; Bugallo et al., 2017), our work opens up additional opportunities to further improve the conditional sampling of VAEs.

## References

Kobi Abayomi, Andrew Gelman, and Marc Levy. Diagnostics for multivariate imputations.
Journal of the Royal Statistical Society: Series C (Applied Statistics), 57(3):273–291, 2008. ISSN 1467-9876. doi: 10.1111/j.1467-9876.2007.00613.x. (Cited on pg. 10) David Barber. *Bayesian Reasoning and Machine Learning*. Cambridge University Press, 2017. ISBN 978-0511-80477-9. doi: 10.1017/CBO9780511804779. (Cited on pg. 1, 19) Guillem Boquet, Jose Lopez Vicario, Antoni Morell, and Javier Serrano. Missing Data in Traffic Estimation: A Variational Autoencoder Imputation Method. In *IEEE International Conference on Acoustics, Speech* and Signal Processing (ICASSP), pp. 2882–2886, May 2019. doi: 10.1109/ICASSP.2019.8683011. (Cited on pg. 2) Monica F. Bugallo, Victor Elvira, Luca Martino, David Luengo, Joaquin Miguez, and Petar M. Djuric. Adaptive Importance Sampling: The past, the present, and the future. *IEEE Signal Processing Magazine*, 34(4):60–79, July 2017. ISSN 1558-0792. doi: 10.1109/MSP.2017.2699226. (Cited on pg. 13) Oliver Cappé, Arnaud Guillin, Jean-Michel Marin, and Christian P. Robert. Population Monte Carlo. Journal of Computational and Graphical Statistics, 13(4):907–929, December 2004. ISSN 1061-8600. doi: 10.1198/106186004X12803. (Cited on pg. 9, 13) Rewon Child. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images. In *International Conference on Learning Representations (ICLR)*, March 2021. (Cited on pg. 1) Nicolas Chopin and Omiros Papaspiliopoulos. *An Introduction to Sequential Monte Carlo*. Springer Series in Statistics. Springer, 2020. (Cited on pg. 7, 8, 22) Chris Cremer, Xuechen Li, and David Duvenaud. Inference Suboptimality in Variational Autoencoders. In International Conference on Machine Learning (ICML), May 2018. (Cited on pg. 2) Marco Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. In Advances in Neural Information Processing Systems (NeurIPS), 2013. (Cited on pg. 12) Randal Douc, Arnaud Guillin, Jean-Michel Marin, and Christian P. Robert. Convergence of Adaptive Mixtures of Importance Sampling Schemes. *The Annals of Statistics*, 35(1):420–448, 2007. ISSN 00905364. (Cited on pg. 9) Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2017. (Cited on pg. 11, 24) Víctor Elvira and Luca Martino. Advances in Importance Sampling, March 2022. (Cited on pg. 8) Víctor Elvira, Luca Martino, David Luengo, and Mónica F. Bugallo. Generalized Multiple Importance Sampling. *Statistical Science*, 34(1), February 2019. ISSN 0883-4237. doi: 10.1214/18-STS668. (Cited on pg. 8) Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, and Gabriel Peyré. Interpolating between Optimal Transport and MMD using Sinkhorn Divergences. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2019. doi: 10.48550/arXiv.1810.08278. (Cited on pg. 12) Andrew Gelman and Donald B. Rubin. Inference from Iterative Simulation Using Multiple Sequences. *Statistical Science*, 7(4):457–472, November 1992. ISSN 0883-4237, 2168-8745. doi: 10.1214/ss/1177011136. (Cited on pg. 3) Stuart Geman and Donald Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PAMI-6(6):721–741, November 1984. doi: 10.1109/TPAMI.1984.4767596. (Cited on pg. 2, 3) Samuel J. Gershman and Noah D. Goodman. Amortized Inference in Probabilistic Reasoning. In *Annual* Meeting of the Cognitive Science Society, volume 36, 2014. (Cited on pg. 2) Rafael Gómez-Bombarelli, Jennifer N. 
Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules. *ACS Central Science*, 4(2):268–276, February 2018. ISSN 2374-7943. doi: 10.1021/acscentsci. 7b00572. (Cited on pg. 1) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep Learning*. MIT Press, Cambridge, MA, USA, 2016. (Cited on pg. 1) Yongtao Guan, Roland Fleißner, Paul Joyce, and Stephen M. Krone. Markov Chain Monte Carlo in small worlds. *Statistics and Computing*, 16(2):193–202, June 2006. ISSN 1573-1375. doi: 10.1007/ s11222-006-6966-6. (Cited on pg. 5) Heikki Haario, Eero Saksman, and Johanna Tamminen. An Adaptive Metropolis Algorithm. *Bernoulli*, 7 (2):223–242, 2001. ISSN 1350-7265. doi: 10.2307/3318737. (Cited on pg. 13) Kuan Han, Haiguang Wen, Junxing Shi, Kun-Han Lu, Yizhen Zhang, Di Fu, and Zhongming Liu. Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex. *NeuroImage*, 198:125–136, September 2019. ISSN 1095-9572. doi: 10.1016/j.neuroimage.2019.05.039. (Cited on pg. 1) Wilfred Keith Hastings. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57(1):97–109, 1970. ISSN 0006-3444. doi: 10.2307/2334940. (Cited on pg. 1, 3) David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Kadie. Dependency Networks for Inference, Collaborative Filtering, and Data Visualization. *Journal of Machine* Learning Research, 1(Oct):49–75, 2000. (Cited on pg. 2) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems (NeurIPS), 2017. (Cited on pg. 10) Lars Holden. Convergence of Markov Chains in the Relative Supremum Norm. *Journal of Applied Probability*, 37(4):1074–1083, 2000. ISSN 0021-9002. (Cited on pg. 22) Lars Holden, Ragnar Hauge, and Marit Holden. Adaptive independent Metropolis–Hastings. *The Annals of* Applied Probability, 19(1):395–413, February 2009. ISSN 1050-5164, 2168-8737. doi: 10.1214/08-AAP545. (Cited on pg. 6, 13, 18, 20, 22) Diederik P. Kingma and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. In *International* Conference on Learning Representations (ICLR), December 2014. (Cited on pg. 24, 25) Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In International Conference on Learning Representations (ICLR), December 2013. (Cited on pg. 1) Diederik P. Kingma, Danilo Jimenez Rezende, Shakir Mohamed, and Max Welling. Semi-Supervised Learning with Deep Generative Models. *Advances in Neural Information Processing Systems (NeurIPS)*, June 2014. (Cited on pg. 2) Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. *Science*, 2015. doi: 10.1126/science.aab3050. (Cited on pg. 12, 25) Chongxuan Li, Jun Zhu, and Bo Zhang. Learning to Generate with Memory. In International Conference on Machine Learning (ICML), June 2016. (Cited on pg. 2) Yingzhen Li, Richard E. Turner, and Qiang Liu. Approximate Inference with Amortised MCMC, May 2017. (Cited on pg. 2) Faming Liang, Chuanhai Liu, and Raymond Carroll. *Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples*. John Wiley & Sons, Incorporated, New York, 2010. 
ISBN 978-0-470-66973-0. (Cited on pg. 13, 20) Luca Martino, Roberto Casarin, Fabrizio Leisen, and David Luengo. Adaptive independent sticky MCMC algorithms. *EURASIP Journal on Advances in Signal Processing*, 2018(1):5, January 2018. ISSN 16876180. doi: 10.1186/s13634-017-0524-6. (Cited on pg. 6) Pierre-Alexandre Mattei and Jes Frellsen. Leveraging the Exact Likelihood of Deep Latent Variable Models. In *Advances in Neural Information Processing Systems (NeurIPS)*, February 2018. (Cited on pg. 1, 2, 3, 4, 10, 12, 35) Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of State Calculations by Fast Computing Machines. *The Journal of Chemical Physics*, 21(6):1087–1092, June 1953. ISSN 0021-9606. doi: 10.1063/1.1699114. (Cited on pg. 1, 3) Art B. Owen. *Monte Carlo Theory, Methods and Examples*. https://artowen.su.domains/mc/, 2013. (Cited on pg. 8, 9, 23) Topi Paananen, Juho Piironen, Paul-Christian Bürkner, and Aki Vehtari. Implicitly adaptive importance sampling. *Statistics and Computing*, 31(2):16, March 2021. ISSN 0960-3174, 1573-1375. doi: 10.1007/ s11222-020-09982-2. (Cited on pg. 9, 23) George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. Advances in Neural Information Processing Systems (NeurIPS), 30, 2017. (Cited on pg. 11, 24) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference. In *International Conference on Machine Learning (ICML)*, Beijing, China, 2014. (Cited on pg. 1, 2) Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised Learning of 3D Structure from Images. In *Advances in Neural Information Processing* Systems (NeurIPS), June 2018. doi: 10.48550/arXiv.1607.00662. (Cited on pg. 2) Christian P. Robert and George Casella. *Monte Carlo Statistical Methods*. Springer, 2004. ISBN 0-38721239-6. (Cited on pg. 8) Gareth O. Roberts and Jeffrey S. Rosenthal. Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms. *Journal of Applied Probability*, 44(2):458–475, 2007. ISSN 0021-9002. (Cited on pg. 6, 13) Geoffrey Roeder, Yuhuai Wu, and David K. Duvenaud. Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference. *Advances in Neural Information Processing Systems*, 30, 2017. (Cited on pg. 24, 25) Donald B. Rubin. *Multiple Imputation for Nonresponse in Surveys*. John Wiley & Sons, New York, 1987. ISBN 0-471-08705-X. doi: 10.2307/3172772. (Cited on pg. 2, 10) Donald B. Rubin. Multiple Imputation After 18+ Years. *Journal of the American Statistical Association*, 91(434):473–489, 1996. ISSN 0162-1459. doi: 10.2307/2291635. (Cited on pg. 2, 10) Vaidotas Simkus, Benjamin Rhodes, and Michael U. Gutmann. Variational Gibbs Inference for Statistical Model Estimation from Incomplete Data. *Journal of Machine Learning Research*, 24(196):1–72, 2023. ISSN 1533-7928. (Cited on pg. 9, 13) Stef van Buuren. *Flexible Imputation of Missing Data*. CRC Press LLC, 2 edition, 2018. ISBN 978-1-13858831-8. (Cited on pg. 10) Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2017. (Cited on pg. 1) David A. van Dyk and Xiyun Jiao. Metropolis-Hastings Within Partially Collapsed Gibbs Samplers. Journal of Computational and Graphical Statistics, 24(2):301–327, 2015. ISSN 1061-8600. (Cited on pg. 5) David A. 
van Dyk and Taeyoung Park. Partially Collapsed Gibbs Samplers. *Journal of the American Statistical Association*, 103(482):790–796, June 2008. ISSN 0162-1459. doi: 10.1198/016214508000000409. (Cited on pg. 5)

Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. *IEEE Transactions on Image Processing*, 13(4):600–612, April 2004. ISSN 1941-0042. doi: 10.1109/TIP.2003.819861. (Cited on pg. 12)

Gregory R. Warnes. The Normal Kernel Coupler: An Adaptive Markov Chain Monte Carlo Method for Efficiently Sampling From Multi-Modal Distributions. Technical Report 39, University of Washington, March 2001. (Cited on pg. 13)

Greg C. G. Wei and Martin A. Tanner. A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms. *Journal of the American Statistical Association*, 85(411):699–704, September 1990. doi: 10.1080/01621459.1990.10474930. (Cited on pg. 9)

Mingtian Zhang, Peter Hayes, and David Barber. Generalization Gap in Amortized Inference. In *Workshop on Bayesian Deep Learning at Neural Information Processing Systems (NeurIPS)*, pp. 6, 2021. (Cited on pg. 2)

## A AC-MWG Proofs

Informally, showing convergence of MCMC samplers generally boils down to answering two questions: (i) does the Markov chain (asymptotically) reach the unique stationary distribution, and (ii) does the sampler remain in the stationary distribution after reaching it.10

First, we focus on the latter question: does the AC-MWG sampler remain in the stationary distribution once it has been reached? Let p^t denote the distribution after t iterations, and π(z, xmis) = p(z, xmis | xobs) denote the target distribution. The following theorem formalises the answer to the question.11

Theorem A.1. *The limiting distribution of the AC-MWG sampler conditioned on the history H^{t−1}_mis is invariant; that is, p^{t−1}(z^{t−1}, x^{t−1}_mis | H^{t−1}_mis) = π(z^{t−1}, x^{t−1}_mis) implies p^t(z^t, x^t_mis | H^t_mis) = π(z^t, x^t_mis).*

Proof. Let us denote by w^t = H^t_mis ∖ H^{t−1}_mis the new variables made available in the history H^t_mis after iteration t of the algorithm. Note that w^t is a random variable, since it depends on the accept/reject decision in lines 9 and 12 of algorithm 1. In the proof we will show that, by construction of the algorithm, w^t and the new state (z^t, x^t_mis) are independent, and that the statement in the theorem then follows.

Following algorithm 1, we now work out what the new historical values w^t are at each iteration t. If a proposal z˜ is *rejected*, then line 12 of algorithm 1 corresponds to setting w^t = ∅. More generally, we allow adding to the history variables that depend on the rejected state z˜ but not on the current state z^{t−1}. If a proposal z˜ is *accepted*, then line 9 of algorithm 1 corresponds to setting w^t to be the set of imputations that were generated using the previous value of z = z^{t−1}. For instance, if new proposals were rejected for the last r iterations, then z^{t−1−r} = z^{t−1−r+1} = ... = z^{t−1}, and hence x^{t−1−r}_mis, x^{t−1−r+1}_mis, ..., x^{t−1}_mis would all depend on z^{t−1}, i.e. x^{t−1−r}_mis, x^{t−1−r+1}_mis, ..., x^{t−1}_mis ∼ π(xmis | z^{t−1}). Thus, in the case of proposal acceptance, the variable w^t will contain the set of imputations {x^τ_mis}_{τ=t−1−r}^{t−1} that were drawn from π(xmis | z^{t−1}) in the past iterations.
We define the conditional distribution of w^t as π(w^t | ẑ) := ∏_{τ=t−1−r}^{t−1} π(x^τ_mis | ẑ), where ẑ is z^{t−1} if a new proposal was accepted, or z˜ if a proposal was rejected. This construction of the history ensures that the proposal distribution q˜_ϵ(z˜ | xobs, x˜_mis) in lines 4 and 5 of algorithm 1 is *independent of the current z^{t−1}*, and hence is a key ingredient of the proof.

We denote the transition kernel of AC-MWG as k(z^t, x^t_mis, w^t | z^{t−1}; H^{t−1}_mis).12 The kernel, which depends on the history H^{t−1}_mis, takes the current state z^{t−1} and produces the new state (z^t, x^t_mis) and the new historical variable w^t. We further use f^{t−1}_H(x˜_mis) to denote the probability of sampling a historical imputation x˜_mis from the available history H^{t−1}_mis in line 4 of algorithm 1. The kernel of AC-MWG is then defined as follows:

$$\begin{aligned}
k(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathbf{z}^{t-1};\mathcal{H}_{\mathrm{mis}}^{t-1})
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\int\Big(\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\delta(\mathbf{z}^{t},\tilde{\mathbf{z}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\\
&\qquad\qquad+\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\big[1-\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\big]\,\delta(\mathbf{z}^{t},\mathbf{z}^{t-1})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\Big)\,\mathrm{d}\tilde{\mathbf{z}}\\
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\Big(\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\\
&\qquad\qquad+\delta(\mathbf{z}^{t},\mathbf{z}^{t-1})\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\big[1-\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\big]\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}\Big)
\end{aligned}$$

10The proofs in this section consider a single observed data-point xobs, and hence nearly all quantities depend on it. To ease the notation we therefore suppress the conditioning on xobs in all quantities, except for the proposal distribution q˜_ϵ in eq. (3), to keep it consistent with algorithm 1.
11The theorem is analogous to Theorem 1 by Holden et al. (2009), but we extend their proof to the component-wise setting of AC-MWG, which involves an additional sampling step x^t_mis ∼ p(xmis | xobs, z^t) and where the history is maintained on xmis.
12Note that the kernel does not depend on the current x^{t−1}_mis, since the new state only depends on the new z^t, i.e. x^t_mis ∼ π(xmis | z^t).

The term in the parentheses corresponds to the standard Metropolis–Hastings kernel (see e.g. Barber, 2017, Section 27.4.2), with the addition of w^t to denote the new variables to be appended to the history at iteration t. Assuming that at iteration t − 1 the sampler is already at the stationary distribution π(z^{t−1}, x^{t−1}_mis), we now integrate the kernel with respect to the distribution of the current state (z^{t−1}, x^{t−1}_mis) to obtain the marginal over z^t, x^t_mis, and w^t:

$$p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})=\int k(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathbf{z}^{t-1};\mathcal{H}_{\mathrm{mis}}^{t-1})\,\pi(\mathbf{z}^{t-1},\mathbf{x}_{\mathrm{mis}}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\,\mathrm{d}\mathbf{x}_{\mathrm{mis}}^{t-1}.$$

Marginalising the x^{t−1}_mis,

$$p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})=\int k(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathbf{z}^{t-1};\mathcal{H}_{\mathrm{mis}}^{t-1})\,\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}.$$

Inserting the definition of the kernel k and pushing the integral w.r.t.
z^{t−1} inside the sum over x˜_mis, we get

$$\begin{aligned}
p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\Big(\int\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\,\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\\
&\qquad+\int\delta(\mathbf{z}^{t},\mathbf{z}^{t-1})\Big[\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\big[1-\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\big]\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}\Big]\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\Big)
\end{aligned}$$

Marginalising the z^{t−1} in the second integral,

$$\begin{aligned}
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\Big(\int\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\,\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\\
&\qquad+\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\big[1-\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t};\tilde{\mathbf{x}}_{\mathrm{mis}})\big]\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\pi(\mathbf{z}^{t})\,\mathrm{d}\tilde{\mathbf{z}}\Big)
\end{aligned}$$

Expanding the second summand,

$$\begin{aligned}
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\Big(\int\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\,\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\\
&\qquad-\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\tilde{\mathbf{z}},\mathbf{z}^{t};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\pi(\mathbf{z}^{t})\,\mathrm{d}\tilde{\mathbf{z}}+\pi(\mathbf{z}^{t})\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}\Big)
\end{aligned}$$

Using detailed balance, q˜_ϵ(z˜ | xobs, x˜_mis) ρ^t(z˜, z^t; x˜_mis) π(z^t) = q˜_ϵ(z^t | xobs, x˜_mis) ρ^t(z^t, z˜; x˜_mis) π(z˜), on the second summand above, we obtain two identical integrals that cancel:

$$\begin{aligned}
&=\pi(\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\Big(\int\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\mathbf{z}^{t-1};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\mathbf{z}^{t-1})\,\pi(\mathbf{z}^{t-1})\,\mathrm{d}\mathbf{z}^{t-1}\\
&\qquad-\int\tilde{q}_{\epsilon}(\mathbf{z}^{t}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\rho^{t}(\mathbf{z}^{t},\tilde{\mathbf{z}};\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\pi(\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}+\pi(\mathbf{z}^{t})\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}\Big)
\end{aligned}$$

Cancelling the integral terms and rearranging, we obtain the marginal distribution

$$p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})=\pi(\mathbf{x}_{\mathrm{mis}}^{t},\mathbf{z}^{t})\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}.$$

Importantly, the factorisation shows that (x^t_mis, z^t) and w^t are independent, and hence

$$p^{t}(w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})=\int p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})\,\mathrm{d}\mathbf{z}^{t}\,\mathrm{d}\mathbf{x}_{\mathrm{mis}}^{t}=\sum_{\tilde{\mathbf{x}}_{\mathrm{mis}}\in\mathcal{H}_{\mathrm{mis}}^{t-1}}f_{\mathcal{H}}^{t-1}(\tilde{\mathbf{x}}_{\mathrm{mis}})\int\tilde{q}_{\epsilon}(\tilde{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\tilde{\mathbf{x}}_{\mathrm{mis}})\,\pi(w^{t}\mid\tilde{\mathbf{z}})\,\mathrm{d}\tilde{\mathbf{z}}.$$

Therefore it immediately follows that

$$p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t}=\mathcal{H}_{\mathrm{mis}}^{t-1}\cup\{w^{t}\})={\frac{p^{t}(\mathbf{z}^{t},\mathbf{x}_{\mathrm{mis}}^{t},w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})}{p^{t}(w^{t}\mid\mathcal{H}_{\mathrm{mis}}^{t-1})}}=\pi(\mathbf{x}_{\mathrm{mis}}^{t},\mathbf{z}^{t}),$$

which validates that the algorithm remains in the stationary distribution once it has reached it.

Given that the sampler remains in the stationary distribution, as shown in the above proof, we now show that the sampler can reach it. As discussed in section 4.1, the AC-MWG sampler corresponds to an ancestral sampler, which draws samples from p(z | xobs) using non-Markovian adaptive Metropolis–Hastings, and then draws from p(xmis | xobs, z) to obtain joint samples (z, xmis) ∼ p(z, xmis | xobs). Therefore, to prove that the sampler reaches the stationary distribution (question (i) from the start of the section), we only need to show that the Metropolis–Hastings sampler reaches p(z | xobs).
,* x˜ t mis) which denotes all those x˜mis drawn up to iteration t, whose distribution we denote with p t H(X˜ tmis). We formalise the answer to question (i) in the following theorem.13 Theorem A.2. *If the likelihood of the model is bounded and the prior–variational mixture proposal in eq.* (3) uses an ϵ > 0*, then there is a function* a τ(x˜ τ mis) ∈ (0, 1] *that satisfies the strong Doeblin condition* q˜ϵ(z˜ | xobs, x˜ τ mis) ≥ a τ(x˜ τ mis)π(z˜), for ∀z˜ and ∀x˜ t mis, (7) and the total variation distance is bounded from above $$\left(7\right)$$ $$\|p^{t}(\mathbf{z}^{t})-\pi(\mathbf{z}^{t})\|_{\rm TV}\leq\mathbb{E}_{p^{t}_{\mathbf{k}}(\mathbf{\hat{x}}^{t}_{\rm min})}\left[\prod_{\tau=1}^{t}(1-a^{\tau}(\mathbf{\hat{x}}^{\tau}_{\rm min}))\right].\tag{8}$$ Hence the algorithm samples the target distribution within a finite number of iterations with a probability arbitrarily close to 1. Proof. The key observation for this proof is to note that, conditionally on the history H t−1 mis , each iteration t of the sampler corresponds to one iteration of a (generalised) rejection sampler (see e.g. Liang et al., 2010, Section 3.1.1). Let us denote α t ∈ {0, 1} a Bernoulli random variable with probability distribution p(α t| z˜, x˜ t mis) = B(α t; s t(z˜, x˜ t mis)) that signifies acceptance or rejection of a proposal z˜ with a success probability s t(z˜, x˜ t mis) of a rejection sampler. We obtain s t by lower-bounding the MH acceptance probability ρ tin eq. (4). We first rewrite the MH acceptance probability ρ t(z˜, z t−1; x˜ t mis) = min 1,π(z˜) π(z t−1) q˜ϵ(z t−1| xobs, x˜ t mis) q˜ϵ(z˜ | xobs, x˜ tmis) =π(z˜) q˜ϵ(z˜ | xobs, x˜ tmis) min q˜ϵ(z˜ | xobs, x˜ t mis) π(z˜), q˜ϵ(z t−1| xobs, x˜ t mis) π(z t−1) , 13Our theorem here is analogous to Theorem 2 by Holden et al. (2009) but we extend the proof to the case where the proposal distribution is sampled stochastically using the history. Lower-bounding the second term above to get a t(x˜ t mis) $$a^{t}(\hat{\mathbf{x}}_{\mathrm{mis}}^{t})=\operatorname*{min}_{\hat{\mathbf{z}}}\operatorname*{min}_{\hat{\mathbf{z}}}\left\{\frac{\hat{q}_{\varepsilon}(\hat{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\hat{\mathbf{x}}_{\mathrm{mis}}^{t})}{\pi(\hat{\mathbf{z}})},\frac{\hat{q}_{\varepsilon}(\mathbf{z}^{t-1}\mid\mathbf{x}_{\mathrm{obs}},\hat{\mathbf{x}}_{\mathrm{mis}}^{t})}{\pi(\mathbf{z}^{t-1})}\right\}=\operatorname*{min}_{\hat{\mathbf{z}}}\frac{\hat{q}_{\varepsilon}(\hat{\mathbf{z}}\mid\mathbf{x}_{\mathrm{obs}},\hat{\mathbf{x}}_{\mathrm{mis}}^{t})}{\pi(\hat{\mathbf{z}})},$$ We finally obtain a lower-bounded acceptance probability s tto the MH acceptance probability ρ t $$\rho^{t}(\tilde{x},\mathbf{z}^{t-1};\tilde{x}_{\mathrm{mis}}^{t})\geq a^{t}(\tilde{x}_{\mathrm{mis}}^{t})\frac{\pi(\tilde{x})}{\tilde{q}_{\epsilon}(\tilde{x}\mid\mathbf{x}_{\mathrm{obs}},\tilde{x}_{\mathrm{mis}}^{t})}=s^{t}(\tilde{x},\tilde{x}_{\mathrm{mis}}^{t}).^{15}$$ We will now show that with a probability of at least a t(x˜ t mis) the sampler can jump to the stationary distribution π(z) at any iteration t. 
The conditional distribution of accepted samples of a rejection sampler is $p(\hat{\mathbf{z}}\mid\hat{\mathbf{x}}^{t}_{\text{mis}},\alpha^{t}=1)=\frac{\hat{q}_{\epsilon}(\hat{\mathbf{z}}\mid\mathbf{x}_{\text{obs}},\hat{\mathbf{x}}^{t}_{\text{mis}})p(\alpha^{t}=1\mid\hat{\mathbf{z}},\hat{\mathbf{x}}^{t}_{\text{mis}})}{p(\alpha^{t}=1\mid\hat{\mathbf{x}}^{t}_{\text{mis}})}\propto\hat{q}_{\epsilon}(\hat{\mathbf{z}}\mid\mathbf{x}_{\text{obs}},\hat{\mathbf{x}}^{t}_{\text{mis}})p(\alpha^{t}=1\mid\hat{\mathbf{z}},\hat{\mathbf{x}}^{t}_{\text{mis}})$. Inserting p(α t = 1 | z˜, x˜ t mis) = s t(z˜, x˜ t mis) we obtain p(z˜ | x˜ t mis, αt = 1) ∝ q˜ϵ(z˜ | xobs, x˜ t mis)a t(x˜ t mis) π(z˜) q˜ϵ(z˜ | xobs, x˜ tmis) = a t(x˜ t mis)π(z˜) p(α t = 1 | x˜ t mis) = Zq˜ϵ(z˜ | xobs, x˜ t mis)p(α t = 1 | z˜, x˜ t mis) dz˜ = Za t(x˜ t mis)π(z˜) dz˜ = a t(x˜ t mis) Hence it follows that the accepted samples follow the target distribution $$p(\hat{\mathbf{z}}\mid\hat{\mathbf{x}}_{\mathrm{mis}}^{t},\alpha^{t}=1)={\frac{a^{t}(\hat{\mathbf{x}}_{\mathrm{mis}}^{t})\pi(\hat{\mathbf{z}})}{\int a^{t}(\hat{\mathbf{x}}_{\mathrm{mis}}^{t})\pi(\hat{\mathbf{z}})\,\mathrm{d}\hat{\mathbf{z}}}}=\pi(\hat{\mathbf{z}}).$$ The analogy between AC-MWG and rejection sampling allows us to conclude that the conditional probability to jump to the stationary distribution at any iteration t is (at least) a t(x˜ t mis). This conditional probability depends on the historical sample x˜mis ∼ p t−1 H (x˜mis) but is independent of the current distribution of z t−1. We can now show that the probability to be in the stationary distribution within a finite number of iterations t can be made arbitrarily close to 1. Let b t be the probability that the sampler *does not* jump to the stationary distribution in t iterations $$b^{t}(\tilde{X}_{\mathrm{mis}}^{t})=\prod_{\tau=1}^{t}\left(1-a^{\tau}(\tilde{x}_{\mathrm{mis}}^{\tau})\right),$$ where X˜ tmis = (x˜ 1 mis*, . . . ,* x˜ t mis), and let p t(z t| X˜ tmis) denote the conditional distribution of z t after t iterations $$p^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})=\pi(\mathbf{z}^{t})(1-b^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}))+\nu^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})b^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}),$$ which can be seen as a mixture of the stationary distribution π(·) with probability (1 − b t(X˜ tmis)) and nonstationary distribution ν t(·) with probability b t(X˜ tmis). The marginal distribution at the t-th iteration is then $$p^{t}(\mathbf{z}^{t})=\int p^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})p_{\mathcal{H}}^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})\,\mathrm{d}\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}$$ We now derive a bound on the total variation distance ∥p t(z t) − π(z t)∥TV 14Note that taking the min over z˜ makes a t(x˜ t mis) independent of the current state z t−1! 15Note that due to ρ t ∈ [0, 1] we also have that the Bernoulli success probability s t ∈ [0, 1]. 
$$=\int\left|\int p^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})p_{\mathcal{H}}^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})\,\mathrm{d}\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}-\pi(\mathbf{z}^{t})\right|\mathrm{d}\mathbf{z}^{t}$$ Inserting the definition of p t(z t| X˜ tmis) and using linearity of expectation to take π(z t) into the expectation over X˜ tmis $$=\int\left|\int\left(\pi(\mathbf{z}^{t})(1-b^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}))+\nu^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})b^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})-\pi(\mathbf{z}^{t})\right)p_{\mathcal{H}}^{t}(\tilde{\mathbf{X}}_{\mathrm{mis}}^{t})\,\mathrm{d}\tilde{\mathbf{X}}_{\mathrm{mis}}^{t}\right|\mathrm{d}\mathbf{z}^{t}\,.$$ Expanding $\pi(\mathbf{z}^{t})(1-b^{t}(\tilde{\mathbf{X}}^{t}_{\rm mis}))$ and cancelling terms $$=\int\left|\int\left(-\pi(\mathbf{z}^{t})+\nu^{t}(\mathbf{z}^{t}\mid\tilde{\mathbf{X}}^{t}_{\rm mis})\right)b^{t}(\tilde{\mathbf{X}}^{t}_{\rm mis})p^{t}_{\mathcal{H}}(\tilde{\mathbf{X}}^{t}_{\rm mis})\,{\rm d}\tilde{\mathbf{X}}^{t}_{\rm mis}\right|{\rm d}\mathbf{z}^{t}$$ Applying Jensen's inequality to the (convex) norm function ≤ Z Z −π(z t) + ν t(z t| X˜ tmis) dz tb t(X˜ tmis)p t H(X˜ tmis) dX˜ tmis Applying triangle inequality R|ν(z t) − π(z t)| dz t ≤R|ν(z t)| dz t +R|−π(z t)| dz t = 2 ≤ 2 Zb t(X˜ tmis)p t H(X˜ tmis) dX˜ tmis = 2Ep t H(X˜ tmis) "Yt τ=1 (1 − a τ(x˜ τ mis))# Hence, the algorithm converges almost everywhere if the product goes to zero with t → ∞. Therefore, if a τ(x˜ τ mis)) > 0 infinitely often then the sampler samples the target distribution π(z) with probability arbitrarily close to 1. To complete the proof we now show that the strong Doeblin condition (Holden, 2000; Holden et al., 2009) in eq. (7) holds, which requires that there exists a t(x˜ t mis) > 0 for all z˜ and x˜ t mis. Informally, the condition requires that the proposal distribution has heavier tails than the target distribution. We rewrite the condition in eq. (7) in its equivalent form as follows $$\frac{\pi(\tilde{z})}{\tilde{q}_{\epsilon}(\tilde{z}\mid x_{\mathrm{obs}},\tilde{x}_{\mathrm{mis}}^{t})}\leq\frac{1}{a^{t}(\tilde{x}_{\mathrm{mis}}^{t})}.$$ $$\mathbf{\Sigma}(9)$$ . (9) Inserting the definition of π(z˜) = p(z˜ | xobs) and q˜ϵ(z˜ | xobs, x˜ t mis) from eq. (3) to the left side we obtain π(z˜) q˜ϵ(z˜ | xobs, x˜ tmis) =p(z˜ | xobs) (1 − ϵ)q(z˜ | xobs, x˜ tmis) + ϵp(z˜) =p(z˜, xobs) p(xobs) ((1 − ϵ)q(z˜ | xobs, x˜ tmis) + ϵp(z˜)) =p(xobs | z˜) p(xobs) (1 − ϵ) q(z˜|xobs,x˜ t mis) p(z˜) + ϵ = p(xobs | z˜) p(xobs) (1 − ϵ) q(z˜ | xobs, x˜ t mis) p(z˜)+ ϵ −1. Hence the ratio is bounded if ϵ > 0 and if the likelihood is bounded, which we can safely assume since this is already a necessary condition to well learn the model. Since the left hand side of eq. (9) is bounded it follows that a t(x˜ t mis) > 0, which completes the proof. ## B Background: Importance Resampling We can generate samples following eq. (1) by using importance resampling (IR, e.g., Chopin & Papaspiliopoulos, 2020) to (approximately) sample p(z | xobs) and then sampling p(xmis | xobs, z) as in standard ancestral sampling. 
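To make the mixture proposal concrete, below is a minimal sketch of drawing from and evaluating a prior–variational mixture of the form $(1-\epsilon)q(\tilde{\mathbf{z}} \mid \mathbf{x}_{\text{obs}}, \tilde{\mathbf{x}}_{\text{mis}}) + \epsilon p(\tilde{\mathbf{z}})$ used in the proof above. The `encoder_dist` and `prior_dist` objects and their `sample`/`log_prob` interface are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def sample_mixture_proposal(encoder_dist, prior_dist, eps, rng):
    """Draw one z from (1 - eps) * q(z | x_obs, x_mis~) + eps * p(z)."""
    if rng.uniform() < eps:
        return prior_dist.sample(rng)    # exploratory prior component
    return encoder_dist.sample(rng)      # amortised variational component

def log_mixture_density(encoder_dist, prior_dist, eps, z):
    """Evaluate log q_eps(z | x_obs, x_mis~) via a log-sum-exp over both components.

    Requires eps > 0, matching the assumption of Theorem A.2.
    """
    log_terms = np.array([
        np.log1p(-eps) + encoder_dist.log_prob(z),
        np.log(eps) + prior_dist.log_prob(z),
    ])
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())
```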
## B Background: Importance Resampling

We can generate samples following eq. (1) by using importance resampling (IR, e.g., Chopin & Papaspiliopoulos, 2020) to (approximately) sample $p(\mathbf{z} \mid \mathbf{x}_{\text{obs}})$ and then sampling $p(\mathbf{x}_{\text{mis}} \mid \mathbf{x}_{\text{obs}}, \mathbf{z})$ as in standard ancestral sampling. We start with the standard importance sampling formulation for approximating the marginal $p(\mathbf{x}_{\text{obs}})$:

$$p(\mathbf{x}_{\text{obs}}) = \int p(\mathbf{x}_{\text{obs}}, \mathbf{z}) \, \mathrm{d}\mathbf{z} = \int q(\mathbf{z}) \frac{p(\mathbf{x}_{\text{obs}}, \mathbf{z})}{q(\mathbf{z})} \, \mathrm{d}\mathbf{z} = \mathbb{E}_{q(\mathbf{z})}\left[w(\mathbf{z})\right], \tag{10}$$

where $q(\mathbf{z})$ is a proposal distribution that is assumed easy to sample and evaluate, and $w(\mathbf{z}) = p(\mathbf{x}_{\text{obs}}, \mathbf{z})/q(\mathbf{z})$ are the (unnormalised) importance weights, which are also computationally tractable. The importance weight function $w(\cdot)$ can then be used to re-weigh the samples from the proposal distribution $q(\mathbf{z})$ to follow the model posterior $p(\mathbf{z} \mid \mathbf{x}_{\text{obs}})$. We denote $\bar{w}(\tilde{\mathbf{z}}) = w(\tilde{\mathbf{z}})/\mathbb{E}_{q(\bar{\mathbf{z}})}[w(\bar{\mathbf{z}})]$ to be the (self-)normalised importance weights, and show that samples from the proposal can be re-weighted to follow the target distribution

$$\pi(\mathbf{z}) = \mathbb{E}_{q(\tilde{\mathbf{z}})}\left[\bar{w}(\tilde{\mathbf{z}}) \, \delta_{\tilde{\mathbf{z}}}(\mathbf{z})\right] = \mathbb{E}_{q(\tilde{\mathbf{z}})}\left[\frac{w(\tilde{\mathbf{z}})}{\mathbb{E}_{q(\bar{\mathbf{z}})}\left[w(\bar{\mathbf{z}})\right]} \, \delta_{\tilde{\mathbf{z}}}(\mathbf{z})\right] = \mathbb{E}_{q(\tilde{\mathbf{z}})}\left[\frac{p(\mathbf{x}_{\text{obs}}, \tilde{\mathbf{z}})/q(\tilde{\mathbf{z}})}{p(\mathbf{x}_{\text{obs}})} \, \delta_{\tilde{\mathbf{z}}}(\mathbf{z})\right] = p(\mathbf{z} \mid \mathbf{x}_{\text{obs}}), \tag{11}$$

where $\delta_{\tilde{\mathbf{z}}}(\cdot)$ is the Dirac delta distribution centred at point $\tilde{\mathbf{z}}$. In practice, self-normalised importance resampling is generally implemented in four steps (a code sketch of these steps is given at the end of this section):

1. Draw $M$ samples from a proposal: $\tilde{\mathbf{z}}^1, \ldots, \tilde{\mathbf{z}}^M \sim q(\mathbf{z})$.
2. Compute the (unnormalised) importance weights $w(\tilde{\mathbf{z}}^m) = \frac{p(\mathbf{x}_{\text{obs}}, \tilde{\mathbf{z}}^m)}{q(\tilde{\mathbf{z}}^m)}$ for all $m \in [1, M]$.
3. Self-normalise the weights $\bar{w}(\tilde{\mathbf{z}}^m) = \frac{w(\tilde{\mathbf{z}}^m)}{\sum_{i=1}^{M} w(\tilde{\mathbf{z}}^i)}$ for all $m \in [1, M]$.
4. Resample $\mathbf{z}^m$ with replacement from the set $\{\tilde{\mathbf{z}}^m\}_{m=1}^{M}$ using the normalised probabilities $\bar{w}(\tilde{\mathbf{z}}^m)$.

Self-normalised importance sampling is consistent in the number $M$ of proposed samples, and hence samples $p(\mathbf{z} \mid \mathbf{x}_{\text{obs}})$ exactly as $M \to \infty$, but has a bias of the order of $\mathcal{O}(1/M)$ (Owen, 2013; Paananen et al., 2021). Samples $\mathbf{x}_{\text{mis}} \sim p(\mathbf{x}_{\text{mis}} \mid \mathbf{x}_{\text{obs}})$ can then be obtained by sampling $p(\mathbf{x}_{\text{mis}} \mid \mathbf{x}_{\text{obs}}, \mathbf{z})$.

In standard importance sampling applications the proposal distribution $q(\mathbf{z})$ is traditionally chosen heuristically using domain knowledge of the target distribution. However, in the context of VAEs specifying a good proposal can be difficult due to poor prior knowledge about the latent space of the model. Moreover, the efficiency of the sampler depends on the quality of the proposal distribution $q(\mathbf{z})$, and a poor proposal distribution can cause weight degeneracy in the non-asymptotic regime ($M \ll \infty$), where only a few of the proposed samples have non-zero weights, and hence poorly approximate the target distribution.
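To illustrate the four steps above, here is a minimal NumPy sketch of self-normalised importance resampling targeting $p(\mathbf{z} \mid \mathbf{x}_{\text{obs}})$; the `log_joint`, `sample_proposal`, and `log_proposal` callables are hypothetical interfaces assumed to operate on arrays of latents.

```python
import numpy as np

def importance_resample(log_joint, sample_proposal, log_proposal, M, rng):
    """Steps 1-4 of self-normalised importance resampling.

    log_joint(z)       -> log p(x_obs, z) for M latents (assumed tractable)
    sample_proposal(M) -> array of M proposal draws from q(z)
    log_proposal(z)    -> log q(z) for M latents
    """
    z = sample_proposal(M)                               # 1. propose
    log_w = log_joint(z) - log_proposal(z)               # 2. unnormalised log-weights
    w = np.exp(log_w - log_w.max())                      # stabilised exponentiation
    w_bar = w / w.sum()                                  # 3. self-normalise
    idx = rng.choice(M, size=M, replace=True, p=w_bar)   # 4. resample with replacement
    return z[idx], w_bar
```

Working in log-space before exponentiating (subtracting the maximum log-weight) avoids numerical overflow, which is the standard trick when the joint and proposal densities differ by many orders of magnitude.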
## C Experiment Details

In this appendix we provide additional details on the experiments.

## C.1 Synthetic 2D VAE

To investigate and illustrate the pitfalls of MWG we constructed a simple synthetic VAE model that approximates mixture-of-Gaussians data, see fig. 7. The visibles $\mathbf{x}$ are 2-dimensional and parametrised with a diagonal Gaussian decoder $p(\mathbf{x} \mid z)$, the latents $z$ are 1-dimensional with a uniform prior $p(z) = \text{Uniform}(0, 1)$, and the variational proposal $q(z \mid \mathbf{x})$ is a Beta distribution amortised with a neural network. The low-dimensional example lets us compute, via numerical integration, and visualise the conditional distributions $p(x_{\text{mis}} \mid x_{\text{obs}})$, $p(x_{\text{mis}}, z \mid x_{\text{obs}})$, and $p(z \mid x_{\text{obs}})$. As demonstrated in the two right-most panels of fig. 7, mixing in the joint space of the missing variable and the latent $(x_0, z)$ may be poor due to low-probability valleys between the modes (third panel), but could be easier in the marginal space of $z$ (last panel).

Figure 7: Left-to-right: the marginal distribution of the visibles $p(\mathbf{x})$ of the VAE; the posterior expected value of the latents $z$, i.e. $\mathbb{E}_{p(z \mid \mathbf{x})}[z]$; the joint conditional distribution of $x_0$ and $z$ for an observed $x_1$; the conditional distribution of $z$ for an observed $x_1$.

For pseudo-Gibbs and MWG in figs. 1 to 3 we perform a single run of each algorithm for 50k iterations, with both methods initialised at the same location using a sample drawn from the marginal distribution $p(x_{\text{mis}})$ of the VAE. Similarly, in fig. 2 the proposed method AC-MWG performs a single run of the algorithm for 50k iterations with mixture coefficient ϵ = 0.01, initialised at the same location as pseudo-Gibbs and MWG. Finally, in fig. 3 the proposed method LAIR performs a single run of the algorithm for 2.5k iterations using 19 imputation particles (K = 19) and 1 replenishing mixture component (R = 1); the algorithm is initialised with K = 19 samples from the marginal distribution $p(x_{\text{mis}})$ of the VAE.

## C.2 Mixture-Of-Gaussians MNIST

We construct a mixture-of-Gaussians (MoG) ground-truth model with 10 multivariate Gaussian components and uniform component probability π(c) = 1/10. Each Gaussian component is fitted on all samples from the MNIST data set (downsampled to 14x14 and transformed with a logit transformation) with a particular label c ∈ [1, 10]. We then generated a semi-synthetic data set of 18k samples and fit a VAE model with a latent space dimensionality of 25. For the VAE, we used a diagonal Gaussian decoder with a ConvResNet architecture with 4 convolutional residual blocks of feature-map depths 128, 64, 32, and 32, and a dropout of 0.2. The prior distribution over the latents is a standard normal distribution. The variational distribution is parametrised with a diagonal Gaussian encoder using a ConvResNet architecture with 4 convolutional residual blocks of feature-map depths 32, 64, 128, and 256, and dropout of 0.2. To optimise the VAE model we used sticking-the-landing gradients (Roeder et al., 2017) and fit the model using a batch size of 200 for 6000 epochs using the Adam optimiser (Kingma & Ba, 2014) with a learning rate of $10^{-4}$.
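As an illustration of the ground-truth construction just described, the following sketch fits one Gaussian per digit class with uniform mixture weights; the array names and the `rng` argument are illustrative, and details such as covariance regularisation are omitted.

```python
import numpy as np

def fit_mog_ground_truth(x, labels, num_classes=10):
    """One multivariate Gaussian per digit class with uniform weights pi(c) = 1/10.

    `x` holds the logit-transformed, flattened 14x14 images; `labels` the digit labels.
    """
    means = np.stack([x[labels == c].mean(axis=0) for c in range(num_classes)])
    covs = np.stack([np.cov(x[labels == c], rowvar=False) for c in range(num_classes)])
    weights = np.full(num_classes, 1.0 / num_classes)
    return weights, means, covs

def sample_mog(weights, means, covs, n, rng):
    """Ancestral sampling: pick a component, then draw from its Gaussian."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comps])
```

Drawing 18k samples with `sample_mog` would then produce the semi-synthetic training set used to fit the VAE.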
For pseudo-Gibbs we ran 5 independent chains for 10k iterations each, and to stabilise the sampler, the imputations were clipped to the minimum and maximum values of the data set for each dimension multiplied by 2. For MWG we initialised 5 independent chains by running pseudo-Gibbs for 120 iterations with clipping, and then ran the MWG sampler for 9880 iterations on each chain. For MWG′ we initialised 5 independent chains by running LAIR with K=4 and R=1 for 120 iterations for each chain, and then ran the MWG sampler for 9880 iterations on each chain. For AC-MWG we initialised 5 independent chains from the marginal distribution of the VAE and then ran AC-MWG with ϵ = 0.05 for 10k iterations. For AC-MWG′ we initialised 5 independent chains by running LAIR with K=4 and R=1 for 120 iterations for each chain, and then ran the AC-MWG sampler for 9880 iterations on each chain with ϵ = 0.05. For LAIR we initialised K = 4 particles from the marginal distribution of the VAE and then ran the sampler with K = 4 and R = 1 for 10k iterations.

## C.3 UCI Data Sets

We fit VAEs on four data sets from the UCI repository (Dua & Graff, 2017) with the preprocessing of Papamakarios et al. (2017). For all models, the variational and the generator (decoder) distributions were fitted in the diagonal Gaussian family. For the encoder and decoder networks of the VAEs we fit MLP neural networks with a residual-block architecture using the Adam optimiser (Kingma & Ba, 2014) with a learning rate of $10^{-3}$ for a total of 200k stochastic gradient ascent steps (except for Miniboone where 22k steps were used) using a batch size of 512 (except for Miniboone where a batch size of 1024 was used), while using 8 Monte Carlo samples in each iteration to approximate the variational ELBO and sticking-the-landing gradients to reduce variance (Roeder et al., 2017). For Gas, Power, and Hepmass data the encoder and decoder networks used 2 residual blocks each with a hidden dimensionality of 256, ReLU activation functions, and a latent space of 16. In addition, for Power data we add small Gaussian noise to each batch with a standard deviation of 0.001. For Miniboone data the encoder used 5 residual blocks with a hidden dimensionality of 256 and the decoder network used 2 residual blocks with a hidden dimensionality of 256, ReLU activation functions, a latent space of 32, and dropout of 0.5.

For pseudo-Gibbs we ran 5 independent chains for 3k iterations each, and to stabilise the sampler on the Gas and Hepmass data sets imputations were clipped to the minimum and maximum values of the data set for each dimension multiplied by 2 (a code sketch of this guard is given at the end of this subsection). For MWG we initialised 5 independent chains by running LAIR with K=4 and R=1 for 100 iterations for each chain, and then ran the MWG sampler for 2900 iterations on each chain. For AC-MWG we initialised 5 independent chains by running LAIR with K=4 and R=1 for 100 iterations for each chain, and then ran the AC-MWG sampler for 2900 iterations on each chain with ϵ = 0.3. For LAIR we initialised K = 4 particles from the marginal distribution of the VAE and then ran the sampler with K = 4 and R = 1 for 3k iterations. Each method's evaluations were repeated with 5 different seeds, and the uncertainty reported in the figures reflects the uncertainty over different runs.
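The clipping used above to stabilise pseudo-Gibbs (here and in appendix C.2) can be written as a one-line guard; the factor-of-two convention follows the textual description, and the paper's actual implementation may differ.

```python
import numpy as np

def clip_imputations(x_imp, data_min, data_max):
    """Clip each dimension of the imputations to twice the per-dimension data range,
    preventing the pseudo-Gibbs chain from diverging to extreme values."""
    return np.clip(x_imp, 2.0 * data_min, 2.0 * data_max)
```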
## C.4 Handwritten Character Omniglot Data Set

We fit a VAE on a statically binarised Omniglot data set (Lake et al., 2015) downsampled to 28 × 28 pixels. We used a fixed standard Gaussian prior distribution over the latents $p(\mathbf{z})$ with a dimensionality of 50, an encoder distribution $q(\mathbf{z} \mid \mathbf{x})$ in the diagonal Gaussian family, and a decoder distribution $p(\mathbf{x} \mid \mathbf{z})$ in a Bernoulli family. For the encoder and decoder networks we used convolutional neural networks with ReLU activations, a dropout probability of 0.2, and a residual-block architecture with 4 residual blocks in each network. For the encoder the residual-block hidden dimensionalities were 32, 64, 128, and 256, and for the decoder they were 128, 64, 32, and 32. We used the Adam optimiser (Kingma & Ba, 2014) with a learning rate of $10^{-4}$ and a cosine annealing schedule, for a total of 3k stochastic gradient ascent steps using a batch size of 200. Moreover, sticking-the-landing gradients were used to reduce encoder-network gradient variance (Roeder et al., 2017).

For pseudo-Gibbs we ran 5 independent chains for 5k iterations each. For MWG we initialised 5 independent chains by running pseudo-Gibbs for 120 iterations, and then ran the MWG sampler for 4880 iterations on each chain. For MWG′ we initialised 5 independent chains by running LAIR with K=4 and R=1 for 120 iterations for each chain, and then ran the MWG sampler for 4880 iterations on each chain. For AC-MWG′ we initialised 5 independent chains by running LAIR with K=4 and R=1 for 120 iterations for each chain, and then ran AC-MWG for 4880 iterations on each chain with ϵ = 0.05. For LAIR we initialised K = 4 particles from the marginal distribution of the VAE and then ran the sampler with K = 4 and R = 1 for 5k iterations. The above evaluations were repeated with 5 different seeds, and the uncertainty reported in the figures reflects the uncertainty over different runs.

## D Additional Figures

In this appendix we provide additional figures for the experiments in this paper.

## D.1 Synthetic 2D VAE

To aid with the understanding of the pitfalls in section 3 and our remedies in section 4, we here include additional figures on the synthetic VAE model (see details in appendix C.1). Specifically, in the top row of fig. 8 we plot the marginal distributions of the latents $p(z \mid x_{\text{obs}})$, which provide an additional perspective on the failure modes described in section 3: a method that is able to sample the joint distribution $p(x_{\text{mis}}, z \mid x_{\text{obs}})$ must also be able to effectively sample the marginal $p(z \mid x_{\text{obs}})$, and if it is able to do so, then the joint $p(x_{\text{mis}}, z \mid x_{\text{obs}})$ and the marginal of the missing variables $p(x_{\text{mis}} \mid x_{\text{obs}})$ are recovered via ancestral sampling of eq. (1).

In the left-most column (pitfall I) we can see that MWG fails to explore the unimodal posterior $p(z \mid x_{\text{obs}})$. As described in section 3 this is because the decoder distribution $p(x_{\text{obs}}, x_{\text{mis}} \mid \tilde{z})$ places little density/mass on the *previous* value of $x_{\text{mis}} = x^{t-1}_{\text{mis}}$, which in turn gets such latent proposals $\tilde{z}$ rejected. As a result, the MWG sampler remains "stuck" in a small part of the (marginal) posterior.

The middle column provides an additional view of pitfall II. In particular, we see that the posterior distribution $p(z \mid x_{\text{obs}})$ in this case is multi-modal. However, an encoder $q(z \mid x_{\text{obs}}, x_{\text{mis}})$ conditioned on a specific completed data-point $x_{\text{obs}} \cup x_{\text{mis}}$ is unlikely to propose a latent value $\tilde{z}$ that would reach one of the alternative modes. As a result, the pseudo-Gibbs and MWG samplers never reach the alternative modes of the posterior $p(z \mid x_{\text{obs}})$, and remain stuck in a single mode.

Finally, the right-most column reinforces the understanding of pitfall III. Specifically, we see that if MWG is initialised in a low-probability location of $p(z \mid x_{\text{obs}})$, it may fail to reach the high-probability mode.

The second and third rows of fig. 8 show the posterior approximations obtained using AC-MWG (section 4.1) and LAIR (section 4.2). As we can see, similar to the results in sections 4.1.1 and 4.2.1, the proposed methods are able to avoid the pitfalls of pseudo-Gibbs and MWG. The proposed methods remedy pitfall I by targeting the marginal $p(z \mid x_{\text{obs}})$ instead of the joint $p(x_{\text{mis}}, z \mid x_{\text{obs}})$.
Once approximate samples from the marginal $p(z \mid x_{\text{obs}})$ are obtained, the methods use this approximation to perform ancestral sampling of the joint $p(x_{\text{mis}}, z \mid x_{\text{obs}})$. Moreover, the methods address pitfall II by using the prior–variational mixture proposals in eqs. (3) and (5), which enable exploration of the latent space. The remedy to pitfall III is related to the remedies for pitfalls I and II: the prior–variational mixture proposal enables a search of the latent space, and targeting the marginal distribution $p(z \mid x_{\text{obs}})$ allows the sampler to move from the poor initial location to a better one by not conditioning on the previous imputation value $x_{\text{mis}} = x^{t-1}_{\text{mis}}$.

Figure 8: *Additional figures on the synthetic VAE, showing the marginals $p(z \mid x_{\text{obs}})$.* (The figure is best viewed in colour.) *Top:* showing only the true marginal $p(z \mid x_{\text{obs}})$ in blue colour and the marginals of pseudo-Gibbs (orange) and MWG (pink). *Middle:* showing the marginal of AC-MWG (yellow). *Bottom:* showing the marginal of LAIR (yellow).

## D.2 Ablation Study: Synthetic 2D VAE

In this section we perform an ablation study of AC-MWG and LAIR that supplements the results in sections 4.1.1 and 4.2.1.

Figure 9: *Ablation studies on the 2D VAE sampling problems, same as in figs. 1 to 3.* (The figure is best viewed in colour.) *Top:* the same AC-MWG as before with ϵ = 0.01 (yellow), AC-MWG with ϵ = 0.0 (light blue) that does not "explore" the latent space via samples from the prior, and AC-MWG with ϵ = 0.01 but the Metropolis–Hastings target of $p(z \mid x_{\text{obs}}, x_{\text{mis}})$ (red), same as MWG. *Bottom:* the same LAIR as before with K = 19 and R = 1 (i.e. ϵ = 0.05, in yellow), LAIR with K = 20 and R = 0 (i.e. ϵ = 0.0, in light blue) that does not "explore" the latent space using samples from the prior, LAIR with K = 10 and R = 10 (i.e. ϵ = 0.5, in red), and LAIR with K = 0 and R = 20 (i.e. ϵ = 1.0, in purple), which corresponds to standard (non-adaptive) importance resampling using the prior distribution as the proposal.

In the top row of fig. 9 we show two ablation cases of AC-MWG. In the first case (light blue) we set ϵ = 0.0 in the prior–variational mixture proposal in eq. (3). Without the prior component the sampler fails to "explore" the latent space (see the middle panel in the figure, where the light blue and red curves overlap) due to an insufficiently exploratory proposal distribution (i.e. pitfall II). In the second case (red) we change the target distribution of the Metropolis–Hastings step from $p(z \mid x_{\text{obs}})$ in eq. (4) to $p(z \mid x_{\text{obs}}, x_{\text{mis}})$ used in standard MWG, that is, using the acceptance probability in eq. (2). We observe that with the MH target changed to $p(z \mid x_{\text{obs}}, x_{\text{mis}})$ the sampler also fails to mix between nearby modes in the latent space in the left-most panel (i.e. pitfall I), similar to MWG, which also affects the other two cases (middle and right panels). We therefore validate that the two modifications (the mixture proposal and the collapsed-Gibbs MH target $p(z \mid x_{\text{obs}})$) introduced in section 4.1 are key components of the AC-MWG sampler.

In the bottom row of fig. 9 we show three ablation cases of LAIR by varying the prior probability ϵ = R/(K+R) in the mixture proposal in eq. (5). The first case (light blue) corresponds to LAIR with ϵ = 0.0 (or R = 0) and performs sub-optimally (see the middle panel) due to lack of exploration in the latent space (see pitfall II).
The second case (red) corresponds to LAIR with ϵ = 0.5 (or R = K = 10) and performs similarly to our base LAIR case (yellow). The third case (purple) is LAIR with ϵ = 1.0 and corresponds to standard non-adaptive importance resampling with the prior distribution as the proposal. The standard importance sampling (purple) performs equally well because of the simplicity and low dimensionality of the latent space; however, as the latent space gets more complex and higher-dimensional, the adaptive LAIR sampler will perform better (see results in appendix D.4).

## D.3 Mixture-Of-Gaussians MNIST

Figure 10 shows the conditional mean and standard deviation at each "missing" pixel of the image, conditional on the "observed" pixels surrounded by a red border. Top-left shows the ground-truth values, and the rest show values estimated from samples produced using the VAE and the (approximate) samplers. Furthermore, fig. 11 shows the absolute error in the conditional means (black is better) and the signed error on the standard deviations (blue is underestimated, red is overestimated, white is perfect). The figures show a complementary view of the results in section 5.1. Interestingly, we can see that pseudo-Gibbs and MWG can overestimate the variance at some pixels while at the same time underestimating it at other pixels. The proposed methods, AC-MWG and LAIR, are less affected by this issue. Figure 12 corresponds to fig. 4 in the main text, but we additionally show MWG with MAP initialisation using stochastic gradient ascent with 5 random restarts (red). Furthermore, fig. 13 shows the experiment results using additional metrics. The additional metrics mirror the results in the main text.

Figure 10: *Conditional mean $\mu_{x_{\text{mis}} \mid x_{\text{obs}}}$ and standard deviation $\sigma_{x_{\text{mis}} \mid x_{\text{obs}}}$ on the mixture-of-Gaussians MNIST.* The top-left panel shows the ground-truth values, and the other panels show estimates from imputations generated by the evaluated samplers. The pixels surrounded by a red border are the observed values $x_{\text{obs}}$.

Figure 11: *The absolute error of the conditional mean $\mu_{x_{\text{mis}} \mid x_{\text{obs}}}$ and the signed error of the conditional standard deviation $\sigma_{x_{\text{mis}} \mid x_{\text{obs}}}$ on the mixture-of-Gaussians MNIST.* We can clearly see that the proposed methods (bottom row) outperform the existing samplers.

Figure 12: *Same as fig. 4, but with an additional method included, MWG+SGDMAP (red), which initialises MWG using stochastic gradient ascent on the log-likelihood (with 5 restarts).*

Figure 13: *Additional metrics on the mixture-of-Gaussians MNIST.* (a) Jensen–Shannon divergence (JSD) between the ground-truth conditional $p(z \mid x_{\text{obs}})$ and an estimator $\hat{p}(z \mid x_{\text{obs}})$ computed from the generated samples. (b) Fréchet inception distance (FID) between samples from the ground-truth conditional $p(z \mid x_{\text{obs}})$ and samples obtained from the imputation methods; the inception-model features used in the FID computation are the final-layer outputs of a classifier neural network. Each panel in the subfigures corresponds to a different conditional sampling problem $p(x_{\text{mis}} \mid x_{\text{obs}})$. Each evaluation is repeated 20 times, the box-plot represents the inter-quartile range, including the median, and the whiskers show the overall range of the results.

## D.4 Ablation Study: Mixture-Of-Gaussians MNIST

This section shows an ablation study of AC-MWG and LAIR on the mixture-of-Gaussians MNIST data set that supplements the results in section 5.1.
Figure 14: *Ablation studies on the MoG MNIST sampling problems, same as in fig. 4.* The FID score is computed using the final-layer outputs of the encoder network as the inception features. *Left part of each panel:* AC-MWG (deep blue) is the same AC-MWG as in fig. 4 with ϵ = 0.05; AC-MWG with ϵ = 0.0 (green) corresponds to no prior component in the proposal distribution in eq. (3); and AC-MWG with ϵ = 0.05 but the Metropolis–Hastings target of $p(z \mid x_{\text{obs}}, x_{\text{mis}})$ (orange), same as MWG. *Right part of each panel:* LAIR (pink) is the same LAIR as in fig. 4 with K = 4 and R = 1 (i.e. ϵ = 0.2); LAIR with K = 5 and R = 0 (i.e. ϵ = 0.0, yellow); LAIR with K = 2 and R = 3 (i.e. ϵ = 0.6, light blue); and LAIR with K = 0 and R = 5 (i.e. ϵ = 1.0, red), corresponding to standard (non-adaptive) importance resampling with the prior distribution as the proposal.

The left part of each panel in fig. 14 shows two ablation cases of AC-MWG. In the first case (green) we set ϵ = 0.0 in the prior–variational mixture proposal in eq. (3). As explained in pitfall II, the sampler fails to explore the latent space and hence exhibits degraded performance compared to AC-MWG with ϵ > 0 (deep blue). In the second case (orange) we change the target distribution of the Metropolis–Hastings step from $p(z \mid x_{\text{obs}})$ in eq. (4) to $p(z \mid x_{\text{obs}}, x_{\text{mis}})$ used in standard MWG, that is, using the acceptance probability in eq. (2). Similarly, we see that this ablation significantly reduces the performance of the sampler (orange versus deep blue). With this evaluation, similar to the results in appendix D.2, we validate that the proposed components (mixture proposal and collapsed-Gibbs MH target) are key to the performance of the AC-MWG method.

The right-hand part of each panel in fig. 14 shows three ablation cases of LAIR with varying prior probabilities ϵ = R/(K+R) ∈ {0.0, 0.6, 1.0} (or equivalently, varying K and R) in the mixture proposal in eq. (5). The first case (yellow) is LAIR with ϵ = 0.0 (i.e. R = 0), which corresponds to not using the prior distribution in the mixture proposal in eq. (5), and exhibits significantly degraded performance compared to LAIR with ϵ = 0.2 (or K = 4 and R = 1, pink). The second case (light blue) is LAIR with ϵ = 0.6 and performs similarly to LAIR with ϵ = 0.2 (pink), hence showing that the method is not highly sensitive to the choice of ϵ as long as the edge cases (ϵ = 0 and ϵ = 1) are avoided. The third case (red) is LAIR with ϵ = 1.0 and corresponds to standard non-adaptive importance resampling with the prior distribution as the proposal. As we see here, the non-adaptive importance resampling (red) performs sub-optimally and hence validates that the adaptation in LAIR is important for good performance of the method.

## D.5 UCI Data Sets

In fig. 15 we show additional metrics for the experiments in section 5.2. We also include MWG with pseudo-Gibbs initialisation (red), as originally proposed in Mattei & Frellsen (2018). The first two rows show energy-distance MMD and Laplacian MMD between the imputed data sets and the ground-truth data. We observe behaviour similar to the results in the main text. The main exception is the Hepmass data, where MWG′ (orange) seems to be preferred. However, we note that part of the good performance of MWG′ (orange) on Hepmass data is due to the use of LAIR initialisation, while using pseudo-Gibbs initialisation (red) performs similarly to LAIR (yellow).
Moreover, the final row shows the average mean absolute error, and the proposed methods, AC-MWG (pink) and LAIR (yellow), are preferred over the existing methods on all data sets.

Figure 15: *Additional metrics on sampling performance on four real-world UCI data sets.* *Top:* energy MMD. *Middle:* Laplacian MMD. *Bottom:* average MAE of the imputations. The divergences are evaluated on a 50k data-point subset of the test data (except for Miniboone, where the full test data set was used), and the MAE is averaged over the full test data set. In all rows imputations from the final iteration of the corresponding algorithms are used, and uncertainty is shown over different runs.
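For reference, the discrepancies reported in fig. 15 can be estimated with standard plug-in formulas, sketched below; the kernel bandwidth and any normalisation conventions used in the paper are not specified here, so this is only an assumed reference implementation of the usual estimators.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd2(x, y, kernel):
    """Biased plug-in estimate of the squared MMD between sample sets x and y."""
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

def laplacian_kernel(x, y, gamma=1.0):
    # Laplacian kernel k(a, b) = exp(-gamma * ||a - b||_1); gamma is a free choice.
    return np.exp(-gamma * cdist(x, y, metric="cityblock"))

def energy_kernel(x, y):
    # The negative-distance kernel -||a - b||_2 turns mmd2 into the energy distance.
    return -cdist(x, y, metric="euclidean")

# Usage: mmd2(imputed, ground_truth, laplacian_kernel) for the Laplacian MMD,
#        mmd2(imputed, ground_truth, energy_kernel) for the energy distance.
```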
Review 1:

Summary:
The authors propose two methods for conditional sampling from a pre-trained VAE, where the conditional distribution implies that only a part of a test point is observed. The main drawback of previous approaches is that the sampling process can potentially be trapped in a mode and thus fail to explore the true distribution well. The proposed methods alleviate this issue by providing better proposal distributions and a more suitable acceptance probability in an MCMC framework, and also in an adaptive importance-sampling scheme. Theoretical results support the proposed samplers, while their efficiency is demonstrated in the experimental section.

Strengths and Weaknesses:
**Strengths**:
- The paper is fairly well-written.
- The analysis of the previous methods is reasonable.
- The proposed methods seem to be sensible and to work well in practice.
- The theoretical analysis and the experimental results support the claims well.
**Weaknesses/Questions**:
- I think that the "pitfalls" and the "solution" can be clarified a bit more.
  As regards the pitfalls:
  1) If the sampled $\tilde{z}^t$ is "far" from the $\tilde{z}^{t-1}$ then the likelihood $p(x_{obs}, x_{mis}^{t-1} | \tilde{z}^t)$ is going to be low and the sample will be rejected. The problem here is that decoders imply distributions with very low entropy? (Pitfall I)
  2) If the sampled $\tilde{z}^t$ is "close" to the $\tilde{z}^{t-1}$ then the likelihood $p(x_{obs}, x_{mis}^{t-1} | \tilde{z}^t)$ is high and the sample is accepted. (Pitfall II)
  So both pitfalls essentially limit the exploration of the sampler in the non-asymptotic regime?
  As regards the solutions:
  1) The proposed proposal in both methods depends on some parameters. How sensitive are the methods with respect to the hyperparameters $\varepsilon, R, K, T$?
  2) The likelihood $p(x_{obs}|\tilde{z}^t)$ is computed instead of the full joint, but if the $\tilde{z}^t$ is far from the actual code of the $x_{obs}$ the associated likelihood will again be low? Does the LAIR approach fix this by using multiple latent proposals in every step?
- The 1-dimensional examples are interesting and informative, but a bit hard to understand. Could you provide examples for 2-dimensional settings? In addition, an example to show the proposal distribution for both methods (in 2-dim) would have been very interesting. There is already Fig. 6, which I think can be improved.
- Perhaps an ablation study as regards the hyperparameters would have been beneficial to see how much they affect the performance.

Requested Changes:
Please check weaknesses. I have no additional requests for changes.

Broader Impact Concerns:
I think that a Broader Impact statement is not necessary as this is mainly a technical paper with no direct ethical implications.

==================================================

Review 2:

Summary:
The paper proposes two sampling algorithms for the task of conditional sampling in variational autoencoders. The paper theoretically proves that the algorithms are perfectly accurate in the infinite-time limit. In finite time, the paper numerically demonstrates that the proposed techniques alleviate pitfalls of existing ones and give meaningful inferences on various types of data (synthetic, UCI).

Strengths and Weaknesses:
# Strengths
The quality of writing is very high:
- The pitfalls (Section 3) are clearly explained, and the accompanying figures are helpful illustrations
- Key methodological developments (Algorithm 1, Algorithm 2) are nicely paired with motivating problems, and the paper adequately connects to previous works
- Figures (Figure 1, Figure 2, etc.) are legible, and accompanying captions clearly spell out the takeaways.

The paper's numerical experiments are likely easy to replicate independently (pending release of source code after successful submission):
- Experimental setups are easy to understand
- The authors promise to release code

The paper's claims are supported by theoretical/empirical evidence:
- Figure 1 and Figure 2 provide empirical evidence that the proposed algorithms can overcome the pitfalls encountered by existing methods
- Appendices contain correct proofs of the infinite-time accuracy of Algorithm 1 and Algorithm 2

# Weaknesses
None of note.

Requested Changes:
The following are discussion questions I think the paper would benefit from (although they are not necessary):
- how to tune the number of particles K and the number of draws from the samples R in Equation 5?
  - A possible answer to this is plotting the relationship between the Frechet inception distance and K and/or R.
- are there conditions in which LAIR (Algorithm 2) is preferable to AC-MWG (Algorithm 1)?
  - in section 3, the paper nicely outlines situations in which LAIR is better than baselines (Pseudo-Gibbs or Metropolis-within-Gibbs) and AC-MWG is better than baselines.
  - what are situations in which AC-MWG is better than LAIR, in either a statistical or computational sense?

Broader Impact Concerns: None

==================================================

Review 3:

Summary:
The paper proposes two novel methods for conditional sampling of VAEs: Adaptive collapsed Metropolis-within-Gibbs and Latent-adaptive importance resampling. These methods are motivated by three pitfalls of conditional samplers for VAEs (degenerate likelihoods causing poor mixing; encoder distributions not being wide enough; poor initialisation preventing mixing). Empirically, these methods significantly improve upon prior methods: pseudo-Gibbs and Metropolis-within-Gibbs.

Strengths and Weaknesses:
Strengths:
1. The paper focuses on a practically important problem of drawing conditional samples in VAEs.
2. The paper is overall easy to read. I really like the structure: Section 3 introduces pitfalls of existing methods, and Section 4 fixes them with the proposed methods.
3. The proposed methods are well-motivated and work well empirically.

Weaknesses:
1. Both methods seem quite complex, while at the same time being 'modular': one could easily imagine the same methods without, say, Collapsed Gibbs. This raises the question of how practically important each piece of the methods is. Another 'module' is the adaptive mixture proposal distributions. I would be very keen to see ablations for the methods.
2. All experiments in the paper have been run with a latent space of 16..32 units. That's lower than the 50 latent units in (Mattei & Frellsen, 2018), which makes me wonder about the scaling of the methods with respect to the latent space dimensionality. In particular, LAIR relies on importance resampling, which means the bias would increase with higher dimensionality.

Requested Changes:
1. (Important) Ablation studies for the methods, see the weaknesses part above.
2. (Less important, but would strengthen the paper) Exploration of how the methods behave with increasing latent space dimensionality.
3. (Minor) Ideally Figures 1&2 should be made readable in black and white print.

Broader Impact Concerns: None.

==================================================

Metareview:

Recommendation: Accept as is

Comment: Reviewers unanimously agreed that the paper's contribution is solid and should be published under TMLR. The main concerns raised by the reviewers include (1) some unclear points in the discussed pitfalls, and (2) ablation studies for the components of the proposed approach. In reply, the authors added more illustrations and experiments in the updated manuscript, and the reviewers are satisfied with the revision. Therefore I suggest this paper can be accepted in its current revised form.

==================================================
# Progressive-Hint Prompting Improves Reasoning In Large Language Models

Anonymous authors

Paper under double-blind review

## Abstract

The performance of Large Language Models (LLMs) in reasoning tasks depends heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency being critical methods that enhance this ability. However, these methods do not fully exploit the answers generated by the LLM to guide subsequent responses. This paper proposes a new prompting method, named Progressive-Hint Prompting (PHP), that enables automatic multiple interactions between users and LLMs by using previously generated answers as hints to progressively guide toward the correct answers. PHP is orthogonal to CoT and self-consistency, making it easy to combine with state-of-the-art techniques to further improve performance. We conducted extensive and comprehensive experiments on seven benchmarks. The results show that PHP significantly improves accuracy while remaining highly efficient. For instance, with text-davinci-003, we observed a 4.2% improvement on GSM8K with greedy decoding compared to Complex CoT, and a 46.17% reduction in sample paths with self-consistency. With GPT-4 and PHP, we achieve state-of-the-art performances on SVAMP (89.1% → 91.9%), GSM8K (92% → 95.5%), AQuA (76.4% → 79.9%) and MATH (50.3% → 53.9%).

## 1 Introduction

While Large Language Models (LLMs) have demonstrated remarkable performance across various NLP tasks (Otter et al., 2020; Qiu et al., 2020; Chowdhary & Chowdhary, 2020), their ability to reason is often perceived as a limitation that cannot be overcome merely by increasing the scale of the model (Rae et al., 2021; Srivastava et al., 2022). Prompt engineering in large-scale models has shown comparable or superior performance to full training set fine-tuning in enhancing reasoning ability, while also being significantly more sample-efficient (Kojima et al., 2022; Lewkowycz et al., 2022). One area of research that aims to address this limitation is the use of Chain-of-Thought (CoT) approaches to promote intermediate reasoning steps (Wei et al., 2022; Zhou et al., 2023; Fu et al., 2023). Other works in this area, such as Least-to-Most (Zhou et al., 2023) and Complex CoT (Fu et al., 2023), have also explored this direction. Another area of research is self-consistency-related approaches. In comparison to CoT-related work that focuses on designing better prompts, self-consistency proposes to sample multiple answers from the LLMs and arrive at the correct answer through a majority vote (Fu et al., 2023). This approach is further improved upon by complexity-based selection (Fu et al., 2023). CoT-related and self-consistency-related works can be seamlessly combined without any conflict.

Prior research has not explored the potential of leveraging the outputs of LLMs to refine reasoning paths iteratively. It stands to reason that, similar to human cognition, an LLM could benefit from reevaluating and adjusting its generated reasoning paths in order to correct errors and enhance overall performance. In this paper, we propose a new method named Progressive-Hint Prompting (PHP) that involves sequentially interacting with the LLM to approach the correct answer gradually. The method operates as follows: (1) given a question, we ask the LLM to provide a Base Answer; (2) we combine the question and answer to re-ask the LLM and obtain the Subsequent Answer; (3) we repeat the operation in (2) until the answer is stable and does not change over the last two answers.
PHP follows a human-like thought process where previous answers are leveraged as hints to arrive at the correct answer after re-evaluating the question.

Base Prompting
Q: There are 96 fourth-graders at Small Tree School. 43 of them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth-grade boys were at Small Tree School on Friday?
A: There were 96 fourth-graders at Small Tree School. 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls at school on Friday, and 96 - 38 = 58 boys at school on Friday. The answer is 58.
A: We know the Answer Hints: 58. With the Answer Hints: 58, we will answer the question. There were 96 fourth-graders at Small Tree School, and 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls and 96 - 43 = 53 boys at Small Tree School on Friday. After subtracting the absent students, there were 38 - 4 = 34 girls and 53 - 5 = 48 boys. The answer is 48.
A: We know the Answer Hints: 58, 48. With the Answer Hints: 58, 48, we will answer the question. There were 96 fourth-graders, and 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls and 96 - 9 = 87 students in total. Of those 87 students, 38 were girls, so there must have been 87 - 38 = 49 boys. The answer is 49.
A: We know the Answer Hints: 58, 48, 49. With the Answer Hints: 58, 48, 49, we will answer the question. There were 96 fourth-graders at Small Tree School, and 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls and 96 - 43 = 53 boys originally. After the absences, there were 38 - 5 = 33 girls and 53 - 4 = 49 boys. The answer is 49.

Figure 1: Our proposed Progressive-Hint Prompting method combines the generated answers and questions for double-checking purposes, which is divided into two stages. In the first stage, we generate a **base answer** by passing to the LLM a concatenation of the current question and a **base prompt**, such as CoT or Complex CoT. In the second stage, we generate the **subsequent answers** via the corresponding **progressive-hint prompt**, such as Progressive-Hint Prompting CoT (PHP-CoT) or Progressive-Hint Prompting Complex CoT (PHP-Complex CoT), for the subsequent interaction. The interaction stops when two consecutive answers are the same. Purple Box: the input of the LLM. Orange Box: the output of the LLM.

Figure 1 illustrates the proposed PHP framework. We use the base prompt to obtain the initial base answer, and then employ the PHP prompt for subsequent questions. If the current answer matches the previous answer, it is more likely to be correct, and we terminate the LLM inquiry. With Complex CoT and GPT-4, after adding PHP, the performance achieves state of the art with 91.9% on SVAMP (Patel et al., 2021), 95.5% on GSM8K (Cobbe et al., 2021), 79.9% on AQuA (Ling et al., 2017), and 53.9% on MATH (Hendrycks et al., 2021).

In summary, our contributions are as follows:

- We propose a new method, Progressive-Hint Prompting (PHP), alongside CoT and self-consistency, for improving LLM reasoning abilities.
- We demonstrate the effectiveness of PHP through extensive experimentation, including baseline comparisons and ablation studies, using four LLMs: text-davinci-002, text-davinci-003, GPT-3.5-Turbo, and GPT-4 (Brown et al., 2020; Ouyang et al., 2022; OpenAI, 2023).
- The experimental results show that our method can also improve performance with self-consistency.
- We believe that progressive-hint prompting represents an important step towards automatic sequential interaction with LLMs and hope that it inspires future research in this field.

## 2 Related Work

Emergent Abilities and Multi-Step Reasoning. LLMs are particularly skilled at in-context learning, which involves adhering to the structure of prompts (typically few-shot) and completing corresponding tasks (Brown et al., 2020; Chowdhery et al., 2022; Shin et al., 2020; Liu et al., 2023). Among the diverse range of language comprehension tasks, we are particularly interested in multi-step reasoning because it exhibits two unique features. Firstly, LLMs significantly outperform smaller models on multi-step reasoning tasks (Wei et al., 2022), whereas their performance gains on tasks like sentiment classification can be limited (Shin et al., 2020). Secondly, few-shot prompting outperforms full training set fine-tuning in multi-step reasoning tasks, even when conducted on LLMs (Lewkowycz et al., 2022).

Chain-of-Thought Reasoning. Chain-of-thought (CoT) prompting (Wei et al., 2022) is a prominent work that demonstrates the multi-step reasoning capacities of LLMs. This approach suggests that reasoning ability can be elicited through a chain of intermediate thoughts, rather than having an answer directly follow a question without intermediate reasoning steps. Least-to-Most prompting (Zhou et al., 2023), which follows the same research direction, divides reasoning into problem-breakdown parts and problem-answer parts and describes the reasoning steps in more detail. Similarly, Complex CoT (Fu et al., 2023) highlights the importance of prompt complexity and selects the most complex questions and their answers as prompts. To reduce the human workload, Auto-CoT has been proposed (Zhang et al., 2022). Other works have found that using specific phrases like "Let's think step by step" (Kojima et al., 2022) can improve performance.

Reasoning Path Extraction. Previous research has investigated various task-specific methods for identifying reasoning paths, including constructing semantic graphs (Xu et al., 2021), developing Recurrent Neural Network (RNN) models to retrieve reasoning paths from a Wikipedia graph (Asai et al., 2020), using human-annotated reasoning paths on math problems for fine-tuning (Cobbe et al., 2021), or training an extractor with heuristic-based pseudo reasoning paths (Chen et al., 2019). A novel line of work, named Self-Consistency (Wang et al., 2023), couples the generation of reasoning paths and a final answer by sampling from the decoder and using aggregation to retrieve the most consistent answer without extra modules. This approach has shown great promise, and it has the potential to outperform existing methods in terms of accuracy. Furthermore, complexity-based voting (Fu et al., 2023), which ranks and selects samples via complexity, has been proposed to improve self-consistency performance. This approach is particularly useful when dealing with complex reasoning problems.

## 3 Progressive-Hint Prompting

Table 1: Illustration of Progressive-Hint Prompting. Blue Color: the difference between Base CoT and PHP-CoT. Red Color: the handcrafted Hint in the designed prompt.

Base Prompting (*e.g.* CoT)
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?
A: There are 15 trees originally.
Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.

Progressive-Hint Prompting 1: Hint is the correct answer
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.

Progressive-Hint Prompting 2: Hint is the incorrect answer
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question. There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.

One salient aspect of humanity is our ability to not only think once, but to also double-check our answers. In this paper, we propose that this process can be simulated in language models by sequentially employing previous answers. In other words, a model can generate an answer and then combine it with the question for the next round of thinking. If the current answer is the same as the previous one, we can have confidence that the current answer is correct.

We have shown the proposed interaction in Figure 1 and the prompt design in Table 1. We demonstrate the process of generating PHP-CoT prompts for a given CoT prompt in Table 1 and provide the complete prompt in the Appendix. Our pipeline is divided into two stages: (i) **base answer & base prompt**: the generation of the base answer via base prompts such as CoT or Complex CoT, and (ii) **subsequent answer & PHP**: the subsequent interaction with the LLMs through corresponding progressive-hint prompts like Progressive-Hint Prompting CoT (PHP-CoT) or Progressive-Hint Prompting Complex CoT (PHP-Complex CoT). We propose a two-sentence structure for the PHP: a phrase indicating the proximity of the answer in the question part, followed by a sentence rehearsing the hints in the answer part. For instance, to create a PHP prompt from a CoT prompt, we first add the phrase "The answer is near to A1*, ..., A*p" after the initial question, where A1*, ..., A*p represent possible answers. Next, we introduce the hints in the opening sentence of the potential answers: "We know the Answer Hints: A1*, ..., A*p. With the Answer Hints: A1*, ..., A*p, we will answer the question."

PHP Design Principle: we should consider various situations of hints. When we ask the LLM a question, we do not know in advance what the answer will be, so the hints are unknown. In this prompt design, we consider the following two potential situations: 1) the hints are the same as the correct answer: to be sure that the model can still get the correct answer when the hint is correct; 2) the hints are not the same as the correct answer: to be sure that the model can jump out of the incorrect answer.
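Following the two-sentence structure and the design principle above, a PHP exemplar can be generated mechanically from a base CoT exemplar. The sketch below illustrates this transformation; the function name and string layout are our own illustration rather than the authors' released code.

```python
def make_php_exemplar(question, reasoning, hints):
    """Turn a base CoT exemplar (question + reasoning ending in the answer) into a
    PHP exemplar: append the hint phrase to the question and rehearse the hints
    at the start of the answer, as in Table 1."""
    hint_str = ", ".join(str(h) for h in hints)
    php_question = f"{question} (Hint: The answer is near to {hint_str})."
    php_answer = (f"We know the Answer Hints: {hint_str}. "
                  f"With the Answer Hints: {hint_str}, we will answer the question. "
                  f"{reasoning}")
    return f"Q: {php_question}\nA: {php_answer}"
```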
Adhering to the above guidelines, we utilize the Standard prompt, CoT prompt, and Complex CoT prompt to generate initial base answers, from which we can then develop the subsequent answer generation prompts, namely the **PHP-Standard** prompt, **PHP-CoT** prompt, and **PHP-Complex CoT** prompt, respectively. The stopping criterion in PHP is reached when two consecutive responses are identical, signaling the end of the interactive exchange. Overall, this method represents a pipeline for improving the quality of responses and enhancing communication during question-answer scenarios.
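The two-stage pipeline and its stopping criterion can be summarised in a short driver loop. In the sketch below, `llm` is any text-completion callable and `extract_answer` is a naive answer parser; both are hypothetical stand-ins for the actual implementation.

```python
import re

def extract_answer(completion):
    # Naive parser: take the last number in the completion (illustrative only).
    nums = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return nums[-1] if nums else completion.strip()

def progressive_hint_prompting(llm, base_prompt, php_prompt, question, max_rounds=10):
    """Stage 1: get a base answer with the base prompt. Stage 2: re-ask with the
    accumulated answer hints until two consecutive answers agree (the PHP
    stopping criterion)."""
    answer = extract_answer(llm(f"{base_prompt}\nQ: {question}\nA:"))
    hints = [answer]
    for _ in range(max_rounds):
        hint_str = ", ".join(hints)
        hinted_q = f"{question} (Hint: The answer is near to {hint_str})."
        new_answer = extract_answer(llm(f"{php_prompt}\nQ: {hinted_q}\nA:"))
        if new_answer == answer:   # two identical consecutive answers -> stop
            return new_answer
        answer = new_answer
        hints.append(answer)
    return answer
```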
Specifically, when examining CoT and Complex CoT prompts, we found that while text-davinci-002 generally yielded a performance improvement after adding hints, there were occasions when performance would decline. However, when we replaced text-davinci-002 with text-davinci-003, the improvement became more consistent and significant. For example, on the GSM8K dataset, PHP-Complex CoT improved performance by 3.6% with text-davinci-002, and the gain increased further to 4.6% with text-davinci-003. Similarly, on the AQuA dataset, PHP-Complex CoT resulted in a performance drop of 0.4% with text-davinci-002 but a 1.2% improvement with text-davinci-003. text-davinci-002 is finetuned with supervised instruction tuning, while text-davinci-003 is finetuned with reinforcement learning. The improved performance with text-davinci-003 can be attributed to its enhanced power, which makes it better at understanding and employing the given hints.

| Model | Prompt | PHP | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 text-davinci-002 | Standard (Wei et al., 2022) | ✗ | 79.4 | 34.0 | 80.7 | 64.8 | 15.1 | 25.5 | 49.91 |
| | | ✓ | 80.5 | 31.8 | 79.9 | 64.2 | 14.7 | 25.5 | 49.43 |
| | | | (+1.1) | (-2.2) | (-0.8) | (-0.6) | (-0.4) | (0.0) | (-0.48) |
| | CoT (Wei et al., 2022) | ✗ | 85.8 | 89.1 | 89.7 | 72.9 | 49.5 | 44.4 | 71.89 |
| | | ✓ | 86.8 | 89.0 | 90.1 | 72.3 | 51.1 | 45.6 | 72.48 |
| | | | (+1.0) | (-0.1) | (+0.4) | (-0.6) | (+1.6) | (+1.2) | (+0.59) |
| | Complex CoT (Fu et al., 2023) | ✗ | 82.5 | 89.8 | 87.7 | 70.4 | 57.6 | 37.4 | 70.89 |
| | | ✓ | 83.7 | 90.1 | 89.9 | 74.6 | 61.2 | 37.0 | 72.75 |
| | | | (+1.2) | (+0.3) | (+2.2) | (+4.2) | (+3.6) | (-0.4) | (+1.86) |
| GPT-3.5 text-davinci-003 | Standard (Wei et al., 2022) | ✗ | 89.1 | 36.3 | 83.8 | 68.7 | 15.9 | 28.3 | 53.68 |
| | | ✓ | 89.1 | 36.0 | 83.6 | 68.7 | 16.0 | 28.3 | 53.61 |
| | | | (0.0) | (-0.3) | (-0.2) | (0.0) | (+0.1) | (0.0) | (-0.07) |
| | CoT (Wei et al., 2022) | ✗ | 90.6 | 93.6 | 92.7 | 81.0 | 56.1 | 44.0 | 76.33 |
| | | ✓ | 91.1 | 94.0 | 93.5 | 81.3 | 57.5 | 44.4 | 76.96 |
| | | | (+0.5) | (+0.4) | (+0.8) | (+0.3) | (+1.4) | (+0.4) | (+0.63) |
| | Complex CoT (Fu et al., 2023) | ✗ | 86.3 | 94.8 | 91.5 | 77.4 | 67.0 | 48.8 | 77.63 |
| | | ✓ | 88.1 | 95.0 | 94.0 | 80.0 | 71.6 | 50.0 | 79.78 |
| | | | (+1.8) | (+0.2) | (+2.5) | (+2.6) | (+4.6) | (+1.2) | (+2.15) |

Table 2: PHP, when applied to different LLMs and prompting methods, helps to improve performance. Meanwhile, PHP works better when the model and prompt are more powerful. The results are with greedy decoding.

![4_image_0.png](4_image_0.png)

Figure 2: The interaction number refers to how many times we need to consult the LLM until we receive a conclusive response. Analyzing various models and prompts, we observe that: 1) a stronger model leads to a decreased interaction number; 2) an improved prompt results in an increased interaction number.

PHP works better when the prompt is more powerful. Our analysis determined that the prompt's power has a significant impact on the performance of the system.
Our experimental results revealed that while the inclusion of PHP produced only marginal changes with Standard prompts, CoT and Complex CoT prompts demonstrated substantial gains in performance. Particularly noteworthy is the fact that the most potent prompt, Complex CoT, exhibited the most substantial performance improvement in comparison to the Standard prompt and the CoT prompt. In-context learning imparts a pattern to the model, and the quality of the prompt directly influences the model's ability to learn this pattern. As indicated by the experiments in Table 2, the Complex CoT prompt outperforms the CoT prompt, and the CoT prompt surpasses the Standard prompt. Consequently, the Complex CoT prompt is better suited to teaching the LLM the intended pattern. Within the proposed PHP framework, the established pattern is as follows: 1) if the initial answer is correct, maintain the same correct response in the subsequent round; 2) if the initial answer is incorrect, strive to provide the correct answer in the next round. The Standard prompt falls short of effectively instilling such a pattern in the model, resulting in minimal variation in the LLM's responses and a reduced number of interactions. In contrast, the Complex CoT prompt excels in instructing the LLM to rectify its responses, facilitating a more dynamic and responsive learning process. This finding provides compelling evidence that a superior prompt leads to greater effectiveness of the system.

The interaction number decreases when the model is more powerful and the prompt is less powerful. The number of interactions refers to how many times the agent engages with the LLM: the interaction number is one when the agent receives the first answer, and increases to two for the second answer. In Figure 2, we illustrate the interaction number of various models and prompts. Our findings indicate that: 1) the interaction number for text-davinci-003 is typically lower than that of text-davinci-002 when given the same prompt, primarily due to the higher accuracy of text-davinci-003, which raises the probability of the base answer and subsequent answers being correct and thus requires fewer interactions to obtain the final correct answer; 2) when using the same model, the interaction number generally increases as the prompt becomes more powerful, because the LLM achieves better reasoning ability with a more potent prompt, allowing it to leverage the hints to jump out of incorrect answers and ultimately leading to a higher number of interactions before reaching the final answer.
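As a small illustration of how the interaction number above could be tallied over a dataset, the hedged snippet below reuses the `progressive_hint_prompting` function from the earlier sketch; `dataset`, `query_llm`, and `extract_answer` remain hypothetical placeholders rather than components of our released code.

```python
from statistics import mean

def average_interaction_number(dataset, base_prompt, php_prompt,
                               query_llm, extract_answer):
    """Average number of LLM calls per question until PHP converges."""
    counts = []
    for question in dataset:
        _, interactions = progressive_hint_prompting(
            question, base_prompt, php_prompt, query_llm, extract_answer)
        counts.append(interactions)
    # A value near 2 means most questions stop after the base answer
    # plus a single confirming subsequent answer.
    return mean(counts)
```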
## 4.2 Impact Of The Hint Quality

| PHP Prompt | Base Prompt | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|
| PHP-Standard | Standard (Wei et al., 2022) | 89.1 | 36.0 | 83.6 | 68.7 | 16.0 | 28.3 | 53.61 |
| | CoT (Wei et al., 2022) | 92.4 | 80.5 | 92.1 | 78.5 | 50.2 | 42.5 | 72.70 |
| | Complex CoT (Fu et al., 2023) | 90.6 | 80.6 | 92.9 | 77.2 | 60.3 | 45.6 | 74.53 |
| PHP-CoT | Standard (Wei et al., 2022) | 90.8 | 92.5 | 90.7 | 80.2 | 52.3 | 40.9 | 74.56 |
| | CoT (Wei et al., 2022) | 91.1 | 94.0 | 93.5 | 81.3 | 57.5 | 44.4 | 76.96 |
| | Complex CoT (Fu et al., 2023) | 90.6 | 96.8 | 93.7 | 81.2 | 62.6 | 50.0 | 79.14 |
| PHP-Complex CoT | Standard (Wei et al., 2022) | 88.3 | 80.1 | 93.3 | 80.4 | 65.5 | 35.4 | 73.83 |
| | CoT (Wei et al., 2022) | 88.8 | 95.6 | 94.8 | 81.4 | 70.6 | 45.6 | 79.46 |
| | Complex CoT (Fu et al., 2023) | 88.1 | 95.0 | 94.0 | 80.0 | 71.6 | 50.0 | 79.78 |

Table 3: Performance with different base answers. Initially, the base prompt provides base answers to the model, and PHP generates the subsequent answers. The results are from text-davinci-003 with greedy decoding.

The quality of the hint significantly affects the performance. As shown in Table 3, replacing the base prompt Standard with CoT or Complex CoT leads to a significant improvement in the final performance of PHP-Standard: we observe that GSM8K performance amplifies from 16.0% with base prompt Standard to 50.2% with base prompt CoT, and to 60.3% with base prompt Complex CoT. Conversely, replacing the base prompt Complex CoT with Standard reduces the final performance. For example, after replacing base prompt Complex CoT with Standard, the performance of PHP-Complex CoT drops from 71.6% to 65.5% on the GSM8K dataset.

Performance may further improve if the PHP prompt is not designed from the corresponding base prompt. The results indicate that base prompt Complex CoT with PHP-CoT achieved a high accuracy rate of 96.8% on the MultiArith dataset, surpassing CoT with PHP-CoT (94.0%). Similarly, base prompt CoT with PHP-Complex CoT demonstrated a notable accuracy rate of 95.6% on the same dataset, outperforming Complex CoT with PHP-Complex CoT (95.0%). The rationale behind these findings is twofold: 1) the performance of CoT and Complex CoT is similar on all six datasets, and 2) since the base answer is provided by CoT (or Complex CoT) and the subsequent answer is based on PHP-Complex CoT (or PHP-CoT), it is comparable to having two individuals collaborating to solve a problem. Therefore, in such circumstances, the system's performance may be further enhanced.

Table 4: Ablation study. CoT-Merge: for the CoT base prompt and the PHP-CoT prompt, we employ a single prompt that contains both the base prompt and the PHP prompt. P1: "We know the Answer Hints $A_1, \ldots, A_p$." P2: "With the Answer Hints $A_1, \ldots, A_p$, we will answer the question." According to the experimental results, both the proposed P1 and P2 are necessary. Meanwhile, the non-merge based method is better than the merge based method when prompts are more powerful. The results are from text-davinci-003 with greedy decoding.
| Base Prompt | PHP Prompt | P1 | P2 | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| CoT | N/A | N/A | N/A | 90.6 | 93.6 | 92.7 | 81.0 | 56.1 | 44.0 | 76.33 |
| CoT-Merge | CoT-Merge | ✓ | ✓ | 91.3 | 94.6 | 93.1 | 79.5 | 58.6 | 50.0 | 77.85 |
| CoT | PHP-CoT | ✗ | ✗ | 91.1 | 93.5 | 93.3 | 80.0 | 58.1 | 44.8 | 76.80 |
| | | ✓ | ✗ | 90.8 | 93.1 | 92.9 | 80.7 | 58.8 | 43.7 | 76.66 |
| | | ✗ | ✓ | 91.3 | 93.8 | 93.5 | 80.5 | 58.2 | 46.4 | 77.28 |
| | | ✓ | ✓ | 91.1 | 94.0 | 93.5 | 81.3 | 57.5 | 44.4 | 76.96 |
| Complex CoT | N/A | N/A | N/A | 86.3 | 94.8 | 91.5 | 77.4 | 67.0 | 48.8 | 77.63 |
| Complex CoT-Merge | Complex CoT-Merge | ✓ | ✓ | 88.8 | 94.3 | 94.6 | 78.1 | 70.2 | 46.8 | 78.80 |
| Complex CoT | PHP-Complex CoT | ✗ | ✗ | 87.8 | 93.3 | 93.7 | 78.0 | 68.3 | 50.3 | 78.56 |
| | | ✓ | ✗ | 87.8 | 95.1 | 94.2 | 78.5 | 70.5 | 48.4 | 79.08 |
| | | ✗ | ✓ | 88.3 | 94.3 | 94.6 | 79.1 | 69.3 | 46.8 | 78.73 |
| | | ✓ | ✓ | 88.1 | 95.0 | 94.0 | 80.0 | 71.6 | 50.0 | 79.78 |

| Method | Correct | Incorrect | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|---|
| CoT (Wei et al., 2022) | ✗ | ✗ | 90.6 | 93.6 | 92.7 | 81.0 | 56.1 | 44.0 | 76.33 |
| | ✓ | ✗ | 91.6 | 94.3 | 93.3 | 81.9 | 57.0 | 43.7 | 76.96 |
| | ✗ | ✓ | 91.1 | 93.5 | 93.1 | 79.7 | 57.9 | 45.2 | 76.74 |
| | ✓ | ✓ | 91.1 | 94.0 | 93.5 | 81.3 | 57.5 | 44.4 | 76.96 |
| Complex CoT (Fu et al., 2023) | ✗ | ✗ | 86.3 | 94.8 | 91.5 | 77.4 | 67.0 | 48.8 | 77.63 |
| | ✓ | ✗ | 88.3 | 94.0 | 93.8 | 77.8 | 68.6 | 46.4 | 78.14 |
| | ✗ | ✓ | 88.1 | 94.6 | 94.0 | 79.2 | 70.2 | 48.4 | 79.08 |
| | ✓ | ✓ | 88.1 | 95.0 | 94.0 | 80.0 | 71.6 | 50.0 | 79.78 |

Table 5: Analysis of hint design (shown in Figure 1). Correct: the hints of the designed prompt are the same as the correct answers. Incorrect: the hints of the designed prompt are incorrect answers. Green: the performance is better than without progressive hints. Red: the performance is worse than without progressive hints. The results are from text-davinci-003 with greedy decoding.

## 4.3 Ablation Study

Furthermore, we conducted an ablation study to verify the criticality of the two sentences in the answers: 1) P1: "We know the Answer Hints $A_1, \ldots, A_p$."; 2) P2: "With the Answer Hints $A_1, \ldots, A_p$, we will answer the question." Moreover, we introduced new types of prompt called CoT-Merge and Complex CoT-Merge: we first combine the original prompt and the PHP prompt into a single file, and then utilize the same merged prompt for both the base answer and the subsequent answers. We also show that both correct and incorrect hints are necessary for the prompt design. We employ the stopping criterion (adaptive sampling) to determine termination for all experiments.

The proposed P1 and P2 are necessary. Incorporating the sentences P1 and P2 resulted in better performance for CoT with PHP across three of the six datasets.
However, the significance of these two sentences became particularly apparent when we employed Complex CoT: with this method, better performance was achieved on five of the six datasets after adding P1 and P2. For instance, Complex CoT improved its performance from 78.0% to 80.0% on the SVAMP dataset, and from 68.3% to 71.6% on the GSM8K dataset. This highlights that the sentences P1 and P2 can exhibit more potent abilities, particularly when the model's logical capacity is superior. Consequently, we can conclude that P1 and P2 are likely to enhance model performance to a greater extent with more powerful prompts and models.

| Prompt | SC | PHP | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|---|
| CoT (Wei et al., 2022) | 5 | ✗ | 90.6 | 95.3 | 94.4 | 81.6 | 63.3 | 49.2 | 79.06 |
| | 5 | ✓ | 90.8 | 96.6 | 94.8 | 83.5 | 66.3 | 49.6 | 80.26 |
| | 5 | Number | 2.0075 | 2.0433 | 2.0098 | 2.1090 | 2.5458 | 2.0157 | 2.1218 |
| | 10 | ✗ | 90.6 | 96.5 | 93.8 | 83.0 | 65.5 | 49.2 | 79.76 |
| | 10 | ✓ | 90.8 | 97.1 | 93.8 | 83.5 | 67.5 | 50.0 | 80.45 |
| | 10 | Number | 2.0075 | 2.0283 | 2.0059 | 2.0510 | 2.2145 | 2.0118 | 2.0531 |
| | 20 | ✗ | 91.1 | 96.5 | 94.2 | 83.3 | 68.0 | 55.1 | 81.36 |
| | 20 | ✓ | 91.6 | 96.5 | 94.4 | 83.7 | 68.6 | 55.1 | 81.64 |
| | 20 | Number | 2.0050 | 2.0366 | 2.0098 | 2.0250 | 2.1144 | 2.0078 | 2.0330 |
| | 40 | ✗ | 91.6 | 96.5 | 94.8 | 82.9 | 67.3 | 53.1 | 81.03 |
| | 40 | ✓ | 91.6 | 96.6 | 95.0 | 83.7 | 68.4 | 53.1 | 81.39 |
| | 40 | Number | 2.0050 | 2.0300 | 2.0050 | 2.0320 | 2.0530 | 2.0000 | 2.0208 |
| Complex CoT (Fu et al., 2023) | 5 | ✗ | 88.1 | 97.0 | 93.1 | 80.4 | 73.5 | 51.5 | 80.60 |
| | 5 | ✓ | 89.6 | 97.3 | 95.2 | 82.5 | 76.9 | 51.9 | 82.23 |
| | 5 | Number | 2.0378 | 2.0166 | 2.0334 | 2.2370 | 2.5390 | 2.0118 | 2.1459 |
| | 10 | ✗ | 88.6 | 98.3 | 93.3 | 82.4 | 76.4 | 54.3 | 82.21 |
| | 10 | ✓ | 89.1 | 98.5 | 95.2 | 83.4 | 78.2 | 54.7 | 83.18 |
| | 10 | Number | 2.0177 | 2.0016 | 2.0295 | 2.059 | 2.1531 | 2.0078 | 2.0447 |
| | 20 | ✗ | 88.6 | 98.0 | 93.8 | 82.5 | 77.7 | 56.2 | 82.80 |
| | 20 | ✓ | 89.8 | 98.0 | 95.8 | 83.6 | 78.6 | 56.2 | 83.66 |
| | 20 | Number | 2.0253 | 2.0000 | 2.0196 | 2.0330 | 2.0401 | 2.0000 | 2.0196 |
| | 40 | ✗ | 88.3 | 98.5 | 94.8 | 83.9 | 78.1 | 58.6 | 83.70 |
| | 40 | ✓ | 88.6 | 98.5 | 95.8 | 84.7 | 79.0 | 58.6 | 84.20 |
| | 40 | Number | 2.0101 | 2.0000 | 2.0137 | 2.0210 | 2.0348 | 2.0039 | 2.0137 |

Table 6: The results after adding self-consistency (SC). Number: the interaction number between the agent and the LLM. The best results with PHP are highlighted in red, and the best results without PHP are highlighted in green. We find that PHP further improves performance, even when adding self-consistency. Meanwhile, PHP may reduce the cost of self-consistency.

Non-merge based PHP is better than merge based PHP when prompts are more powerful. Regarding CoT with PHP-CoT, the initial answer is derived from the CoT prompt, and the subsequent answers are obtained from PHP-CoT. Notably, compared to other CoT-based methods, CoT-Merge achieves the best performance. However, among the Complex CoT-based methods, non-merge PHP-Complex CoT with both P1 and P2 achieves the best performance. Hence, when prompts are better, the non-merge based method outperforms the merge-based method.
Both correct and incorrect hints are needed in the prompt design. Table 5 demonstrates that using PHP was superior to not using it when the designed prompt included both correct and incorrect hints. Specifically, providing a correct hint in the prompt promotes the generation of answers that match the given hint, while providing incorrect answers in the prompt encourages the generation of alternative answers, with the aid of the given hint.

## 4.4 Performance With Self-Consistency

As discussed before, our proposed method can be combined with CoT and self-consistency to further improve model performance. The results are shown in Table 6. Following the self-consistency paper, we sample 5, 10, 20, and 40 paths with a model temperature of 0.7.

![8_image_0.png](8_image_0.png)

Figure 3: We show the results of: 1) CoT on MultiArith; 2) CoT on SVAMP; 3) Complex CoT on GSM8K. From 1) and 2), we see that PHP can further improve performance; from 3), we find that PHP can even reduce the cost of self-consistency.

PHP further improves performance. Using similar prompts and sample path numbers, we discovered that our proposed PHP-CoT and PHP-Complex CoT always achieve superior performance compared to CoT and Complex CoT, as shown in Table 6 and Figure 3. For instance, CoT with self-consistency attained 96.5% accuracy on the MultiArith dataset with 10, 20, and 40 sampled paths, so the best performance of CoT with self-consistency and text-davinci-003 is 96.5%. After implementing PHP, however, the performance rose to 97.1%. Similarly, CoT with self-consistency on SVAMP achieved its best accuracy of 83.3% with 20 sampled paths, which further improved to 83.7% upon implementing PHP. This illustrates that PHP can break the performance bottleneck and further improve performance.

PHP could reduce the cost of self-consistency. Incorporating PHP can also lead to cost reduction. It is widely acknowledged that self-consistency involves an increased number of reasoning paths, resulting in a higher cost. Table 6 illustrates that PHP can be an effective approach for reducing this cost while preserving the performance gains. As shown in Figure 3, using Complex CoT with self-consistency, a 78.1% accuracy can be reached with 40 sampled paths, while incorporating PHP reduces the required number of samples to 10 × 2.1531 = 21.531 paths on average, and results in an even better accuracy of 78.2%.
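To illustrate how PHP composes with self-consistency, the following is a hedged Python sketch: at every round we sample k reasoning paths at temperature 0.7, majority-vote the answers, and feed the voted answer back as a hint until two consecutive voted answers agree. `sample_llm` and `extract_answer` are hypothetical helpers, and `build_php_question` is the hint-appending function from the earlier sketch; none of this is our released code.

```python
from collections import Counter

def self_consistent_answer(prompt, question, sample_llm, extract_answer, k):
    # Sample k reasoning paths (temperature 0.7, per the SC setup above)
    # and majority-vote the extracted answers.
    answers = [extract_answer(sample_llm(prompt, question, temperature=0.7))
               for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

def php_with_self_consistency(question, base_prompt, php_prompt,
                              sample_llm, extract_answer, k=10, max_rounds=10):
    hints = [self_consistent_answer(base_prompt, question,
                                    sample_llm, extract_answer, k)]
    rounds = 1
    for _ in range(max_rounds):
        hinted_question = build_php_question(question, hints)
        voted = self_consistent_answer(php_prompt, hinted_question,
                                       sample_llm, extract_answer, k)
        rounds += 1
        if voted == hints[-1]:        # same stopping criterion as plain PHP
            return voted, k * rounds  # total sampled paths
        hints.append(voted)
    return hints[-1], k * rounds
```

Under this composition, the expected number of sampled paths is k times the average interaction number, which is exactly how the 10 × 2.1531 = 21.531 path figure above arises.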
## 4.5 Performance With Chat Model

| Model | PHP | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|
| Previous SOTA | ✗ | 94.9 (Roy & Roth, 2015) | 100 (Wang et al., 2023) | 95.5 (Diao et al., 2023) | 89.1 (Chen et al., 2022) | 92.0 (OpenAI, 2023) | 76.4 (Pitis et al., 2023) | 91.31 |
| GPT-3.5-Turbo | ✗ | 85.5 | 97.5 | 92.5 | 81.0 | 82.8 | 57.4 | 82.78 |
| | ✓ | 85.3 | 98.0 | 92.9 | 83.1 | 85.1 | 60.6 | 84.16 |
| | | (-0.2) | (+0.5) | (+0.4) | (+2.1) | (+2.3) | (+3.2) | (+1.38) |
| | Number | 2.1037 | 2.0133 | 2.0610 | 2.3570 | 2.3426 | 2.3228 | 2.2000 |
| GPT-4 | ✗ | 89.3 | 97.8 | 93.1 | 90.5 | 94.9 | 77.5 | 90.51 |
| | ✓ | 89.6 | 98.1 | 93.1 | 91.9 | 95.5 | 79.9 | 91.34 |
| | | (+0.3) | (+0.3) | (0.0) | (+1.4) | (+0.6) | (+2.4) | (+0.83) |
| | Number | 2.0126 | 2.0033 | 2.0019 | 2.0700 | 2.0507 | 2.2913 | 2.0716 |

Table 7: Performance of Complex CoT with GPT-3.5-Turbo and GPT-4, employing greedy decoding. Number: the average interaction number with the LLM.

In the previous sections, we followed the settings of prior work and employed text generation models for our experiments. With the release of the APIs for GPT-3.5-Turbo and GPT-4, we validate the performance of Complex CoT with PHP on the same six datasets. We use greedy decoding (i.e., temperature = 0) and Complex CoT as the prompt for both models.

Analyze GPT-3.5-Turbo. Let's delve into an in-depth analysis of GPT-3.5-Turbo, as detailed in Table 7. Our proposed PHP exhibits remarkable performance enhancements, resulting in a substantial 2.3% improvement on GSM8K and an impressive 3.2% boost on AQuA. However, GPT-3.5-Turbo appears to have a diminished capability to adhere to prompts compared to its counterpart, text-davinci-003. Two concrete examples illustrate this disparity: a) In scenarios where the provided hints are conspicuously absent, GPT-3.5-Turbo encounters difficulties in providing an answer, often responding with a statement such as, "We cannot answer this question as the answer hint is missing. Please provide the answer hint to proceed." In contrast, text-davinci-003 autonomously generates and fills in a missing answer hint before addressing the question, a phenomenon demonstrated in Table 8. b) When confronted with more than ten hints, GPT-3.5-Turbo may respond with the message, "We cannot determine the correct answer as multiple answer hints are given. Please provide only one answer hint for the question." Such behavior is not observed in text-davinci-003. Consequently, OpenAI may employ alternative implementation techniques to grant GPT-3.5-Turbo more response flexibility; as a result, the model's responses occasionally deviate from the given prompt, compared to those generated by text-davinci-003.

Table 8: Response when the hint is missing, for GPT-3.5 text-davinci-003 with Complex CoT.

Question: Jerry has two pools, both with leaks emptying them out at 4 gallons/minute. 4 minutes ago, the big pool had twice as much water as the small pool. Now the big pool has four times as much water as the small pool. How much water does the small pool have now? (Hint: The answer is near to ).

A: We know the Answer Hints: 16. With the Answer Hints: 16, we will answer the question. Let's think step by step
4 minutes ago, the big pool had twice as much water as the small pool. This means that the small pool had 4 gallons of water. In 4 minutes, the big pool has lost 4 gallons x 4 minutes = 16 gallons of water. The small pool has also lost 4 gallons x 4 minutes = 16 gallons of water. Therefore, the small pool now has 4 gallons - 16 gallons = -12 gallons of water. Since it is impossible to have negative gallons of water, the small pool must have 0 gallons of water. The answer is 0.
| Model | PHP | InterAlgebra | Precalculus | Geometry | NumTheory | Probability | PreAlgebra | Algebra | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Previous SOTA (Lewkowycz et al., 2022) | ✗ | - | - | - | - | - | - | - | 50.30 |
| GPT-4 CoT (OpenAI, 2023) | ✗ | - | - | - | - | - | - | - | 42.50 |
| GPT-3.5-Turbo Complex CoT (Ours) | ✗ | 14.6 | 16.8 | 22.3 | 33.4 | 29.7 | 53.8 | 49.1 | 34.12 |
| | ✓ | 17.1 | 16.1 | 25.4 | 35.1 | 33.7 | 57.7 | 51.1 | 36.50 |
| | | (+2.5) | (-0.7) | (+3.1) | (+1.7) | (+4.0) | (+3.9) | (+2.0) | (+2.38) |
| | Number | 4.2746 | 3.9625 | 4.3361 | 3.8166 | 3.7594 | 3.1526 | 3.0716 | 3.6673 |
| GPT-4 Complex CoT (Ours) | ✗ | 23.4 | 26.7 | 36.5 | 49.6 | 53.1 | 71.6 | 70.8 | 50.36 |
| | ✓ | 26.3 | 29.8 | 41.9 | 55.7 | 56.3 | 73.8 | 74.3 | 53.90 |
| | | (+2.9) | (+3.1) | (+5.4) | (+6.1) | (+3.2) | (+2.2) | (+3.5) | (+3.54) |
| | Number | 3.2414 | 3.2435 | 3.2233 | 3.1740 | 2.8122 | 2.3226 | 2.4726 | 2.8494 |

Table 9: Performance of Complex CoT with GPT-3.5-Turbo and GPT-4 on the MATH dataset, employing greedy decoding. Number: the average interaction number with the LLM. Overall: the results over all MATH subtopics (Hendrycks et al., 2021).

Analyze GPT-4. The GPT-4 model significantly improves performance, showcasing its effectiveness across various benchmarks. It achieves high accuracy rates: 91.9% on SVAMP, 95.5% on GSM8K, 79.9% on AQuA, and 53.90% on the challenging MATH dataset. These results are a testament to the effectiveness of our PHP methodology, which has been instrumental in enhancing GPT-4's capabilities. Particularly notable is the improvement on the MATH dataset, where PHP increased accuracy from 50.36% to 53.90%. This improvement is evident across all subtopics, marking a distinct advancement over the GPT-3.5-Turbo model, which showed mixed results, such as a slight decrease in performance on the Precalculus subtopic after applying PHP. Overall, PHP has proven highly effective with advanced models like GPT-4. This observation firmly suggests that as the model gains more computational prowess, it can more effectively comprehend and utilize contextual hints. Moreover, when we compare GPT-4 to GPT-3.5-Turbo, another fascinating insight emerges: there is a noticeable reduction in the number of interactions required by GPT-4. This aligns perfectly with the insightful finding that "the interaction number decreases when the model is more powerful." In essence, this not only underscores the efficiency and improved performance of GPT-4 but also provides strong evidence that enhanced model capabilities lead to reduced interaction requirements, making it even more user-friendly and intuitive in its applications.
Analyze the interaction number distribution. We conducted a comprehensive examination of the interaction number distributions across various datasets, as illustrated in Table 10. Notably, more challenging datasets like AQuA exhibit a broader range of interaction numbers, which implies that the LLM exhibits uncertainty when confronted with difficult problems. Conversely, on simpler datasets like AddSub, the LLM predominantly resolves problems with just 2 interactions. This suggests that the interaction number can serve as a reliable indicator of dataset difficulty: when using the same prompt and LLM, a higher interaction number signifies greater uncertainty on the part of the LLM and, consequently, a more challenging dataset.

| Interaction Number | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA |
|---|---|---|---|---|---|---|
| 2 | 98.98% | 99.66% | 99.80% | 95.80% | 97.42% | 84.64% |
| 3 | 0.75% | 0.33% | 0.19% | 2.80% | 1.44% | 7.08% |
| 4 | 0.25% | 0.0% | 0.0% | 0.80% | 0.53% | 4.33% |
| 5 | 0.0% | 0.0% | 0.0% | 0.20% | 0.37% | 2.75% |
| 6 | 0.0% | 0.0% | 0.0% | 0.20% | 0.07% | 0.78% |
| 7 | 0.0% | 0.0% | 0.0% | 0.0% | 0.20% | 0.39% |
| 8 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 9 | 0.0% | 0.0% | 0.0% | 0.0% | 0.07% | 0.0% |
| 10 | 0.0% | 0.0% | 0.0% | 0.0% | 0.07% | 0.0% |

Table 10: The interaction number distribution on different datasets, with GPT-4 and Complex CoT.

## 5 Conclusion

This paper introduces a novel approach named Progressive-Hint Prompting (PHP) for interacting with LLMs, which offers multiple advantages: 1) PHP achieves substantial performance improvements on math reasoning tasks, leading to state-of-the-art results on several reasoning benchmarks; 2) with more powerful models and prompts, PHP benefits LLMs more consistently; 3) PHP can be easily combined with CoT and self-consistency to further improve performance. To further enhance the progressive-hint prompting approach, future research can focus on improving the design of the handcrafted hints in the question part and the prompt sentences in the answer part. Additionally, novel hints beyond the answer that help the LLM reconsider the question can be identified and extracted.

## Further Work

In this section, we discuss the limitations of our proposed progressive-hint prompting technique and possible avenues for further improvement.

The progressive-hint prompt can be non-handcrafted. Our proposed progressive-hint prompts are handcrafted by humans, similar to other related techniques such as Chain-of-Thought and Complex Chain-of-Thought. We therefore aim to design an automatic progressive hint in the future to improve efficiency; for instance, we could continuously build and update the progressive hints during testing.

The hint can be defined beyond the answer. In this paper, we defined the hint as the previous answer. However, the concept of a hint encompasses other signals generated by models, such as model confidence, the reasoning path, and even the interaction number.

## Broader Impacts

Progressive-Hint Prompting aims to enhance the reasoning ability of Large Language Models (LLMs) by utilizing previous answers.
We believe that the integration of PHP with LLMs can be applied in a variety of areas, including: 1) assisting students, particularly those from low-income areas, in learning more effectively and obtaining accurate answers with the help of LLMs and PHP; 2) aiding mathematicians in solving complex mathematical problems; and 3) other reasoning-related applications. By leveraging PHP with LLMs, we hope to improve the performance of these models and enable their use in various practical scenarios.

## References

David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. *Cognitive Science*, 9(1):147–169, 1985.

Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*, 2020.

Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. Learning to retrieve reasoning paths over wikipedia graph for question answering. In *International Conference on Learning Representations*, 2020.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. *arXiv preprint arXiv:2204.05862*, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Jifan Chen, Shih-ting Lin, and Greg Durrett. Multi-hop question answering via reasoning chains. *arXiv preprint arXiv:1910.02610*, 2019.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *arXiv preprint arXiv:2211.12588*, 2022.

Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. Learning an executable neural semantic parser. *Computational Linguistics*, 45(1):59–94, 2019.

KR1442 Chowdhary and KR Chowdhary. Natural language processing. *Fundamentals of Artificial Intelligence*, pp. 603–649, 2020.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*, 2021.

Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong Zhang. Active prompting with chain-of-thought for large language models. *arXiv preprint arXiv:2302.12246*, 2023.
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. Measuring and improving consistency in pretrained language models. *Transactions of the Association for Computational Linguistics*, 9:1012–1031, 2021.

Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of chatgpt. *arXiv preprint arXiv:2301.13867*, 2023.

Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting for multi-step reasoning. In *The Eleventh International Conference on Learning Representations*, 2023.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the Association for Computational Linguistics*, 9:346–361, 2021. doi: 10.1162/tacl_a_00370. URL https://aclanthology.org/2021.tacl-1.21.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. *arXiv preprint arXiv:2103.03874*, 2021.

Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, et al. Selective annotation makes language models better few-shot learners. In *The Eleventh International Conference on Learning Representations*, 2023.

Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In *EMNLP*, pp. 523–533, 2014.

Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. *arXiv preprint arXiv:2210.11610*, 2022.

Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. *arXiv preprint arXiv:2210.12283*, 2022a.

Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models and automated theorem provers. In *Advances in Neural Information Processing Systems*, 2022b.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint arXiv:2001.08361*, 2020.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In *Advances in Neural Information Processing Systems*, 2022.

Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics*, 3:585–597, 2015.

Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang, and Dongyan Zhao. Why machine reading comprehension models learn shortcuts? *arXiv preprint arXiv:2106.01024*, 2021.

Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theorem proving. *arXiv preprint arXiv:2205.11491*, 2022.
Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering. *arXiv preprint arXiv:2203.05115*, 2022.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. *arXiv preprint arXiv:2206.14858*, 2022.

Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 4582–4597, 2021.

Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. *arXiv preprint arXiv:2206.02336*, 2022.

Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 158–167, 2017.

Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? In *Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures*, pp. 100–114, 2022.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. *ACM Computing Surveys*, 55(9):1–35, 2023.

Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 8086–8098, 2022.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.

OpenAI. Gpt-4 technical report. *arXiv preprint arXiv:2303.08774*, 2023.

Daniel W Otter, Julian R Medina, and Jugal K Kalita. A survey of the usages of deep learning for natural language processing. *IEEE Transactions on Neural Networks and Learning Systems*, 32(2):604–624, 2020.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in Neural Information Processing Systems*, 35:27730–27744, 2022.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems? In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2080–2094, 2021.

Silviu Pitis, Michael R Zhang, Andrew Wang, and Jimmy Ba. Boosted prompt ensembles for large language models. *arXiv preprint arXiv:2304.05970*, 2023.
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. Formal mathematics statement curriculum learning. *arXiv preprint arXiv:2202.01344*, 2022.

Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained models for natural language processing: A survey. *Science China Technological Sciences*, 63(10):1872–1897, 2020.

Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. *arXiv preprint arXiv:2112.11446*, 2021.

Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pp. 3982–3992, 2019.

Kyle Richardson and Ashish Sabharwal. Pushing the limits of rule reasoning in transformers through natural language satisfiability. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 11209–11219, 2022.

Subhro Roy and Dan Roth. Solving general arithmetic word problems. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pp. 1743–1752, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1202. URL https://aclanthology.org/D15-1202.

Ohad Rubin, Jonathan Herzig, and Jonathan Berant. Learning to retrieve prompts for in-context learning. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2655–2671, 2022.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. *arXiv preprint arXiv:2210.03057*, 2022.

Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. *arXiv preprint arXiv:2010.15980*, 2020.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint arXiv:2206.04615*, 2022.

Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. What makes reading comprehension questions easier? In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 4208–4219, 2018.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.

Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications.
*arXiv preprint arXiv:2201.08239*, 2022.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Rationale-augmented ensembles in language models. *arXiv preprint arXiv:2207.00747*, 2022.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In *The Eleventh International Conference on Learning Representations*, 2023.

Colin Wei, Sang Michael Xie, and Tengyu Ma. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. *Advances in Neural Information Processing Systems*, 34:16158–16170, 2021.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*, 2022.

Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. Generating sequences by learning to self-correct. *arXiv preprint arXiv:2211.00053*, 2022.

Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. In *Advances in Neural Information Processing Systems*, 2022.

Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit bayesian inference. *arXiv preprint arXiv:2111.02080*, 2021.

Weiwen Xu, Yang Deng, Huihui Zhang, Deng Cai, and Wai Lam. Exploiting reasoning chains for multi-hop science question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pp. 1143–1156, 2021.

Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao, and Meng Jiang. Diversifying content generation for commonsense reasoning with mixture of knowledge graph experts. In *Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022)*, pp. 1–11, 2022.

Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. *arXiv preprint arXiv:2210.03493*, 2022.

Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pp. 12697–12706. PMLR, 2021.

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. In *The Eleventh International Conference on Learning Representations*, 2023.
## A Appendix

## A.1 Experiment Results On Commonsense Reasoning Datasets

| Model | PHP | CommonsenseQA | StrategyQA |
|---|---|---|---|
| GPT-3.5 text-davinci-002 | ✗ | 74.8 | 55.5 |
| | ✓ | 75.5 | 58.7 |
| | Improvement | (+0.7) | (+3.2) |
| GPT-3.5 text-davinci-003 | ✗ | 79.3 | 71.1 |
| | ✓ | 79.6 | 73.2 |
| | Improvement | (+0.3) | (+2.1) |
| GPT-3.5-Turbo | ✗ | 77.8 | 71.9 |
| | ✓ | 78.7 | 73.4 |
| | Improvement | (+0.9) | (+1.5) |
| GPT-4 | ✗ | 83.6 | 81.3 |
| | ✓ | 86.4 | 81.9 |
| | Improvement | (+2.8) | (+0.6) |

Table 11: PHP results on the commonsense reasoning datasets CommonsenseQA and StrategyQA with different models.

Experimental results show a steady improvement in PHP performance on commonsense datasets, including CommonsenseQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021). Using text-davinci-002, there is a boost of 0.7% on CommonsenseQA and 3.2% on StrategyQA. Switching to text-davinci-003, the increments are 0.3% for CommonsenseQA and 2.1% for StrategyQA. With GPT-3.5-Turbo, CommonsenseQA sees a 0.9% increase and StrategyQA a 1.5% rise. Moreover, with GPT-4, CommonsenseQA's performance jumps by 2.8%, and StrategyQA's by 0.6%.

## A.2 Compare PHP With Self-Refine

| Model | Prompt | Self-Refine | PHP |
|---|---|---|---|
| GPT-3.5 text-davinci-003 | Base | 64.1 | 67.0 |
| | Proposed | 64.1 | 71.6 |
| | | (+0.0) | (+4.6) |
| GPT-3.5-Turbo | Base | 74.8 | 82.8 |
| | Proposed | 75.0 | 85.1 |
| | | (+0.2) | (+2.3) |
| GPT-4 | Base | 92.9 | 94.6 |
| | Proposed | 93.1 | 95.5 |
| | | (+0.2) | (+0.9) |

Table 12: The performance comparison between PHP and Self-Refine on the GSM8K dataset. Base: the baseline performance of PHP and Self-Refine, respectively. Proposed: the performance of PHP and Self-Refine with the proposed methods, respectively.

We choose the well-known prompting strategy Self-Refine (Madaan et al., 2023) for comparison. With the same model, the base performance of PHP and Self-Refine differs because different chain-of-thought prompts are used.

PHP achieves a larger improvement. With text-davinci-003, Self-Refine keeps performance at 64.1, an increment of 0.0, whereas our PHP increases performance from 67.0 to 71.6, a gain of 4.6. Likewise, our PHP always obtains a better improvement than Self-Refine, whether the model is text-davinci-003, GPT-3.5-Turbo, or GPT-4: Self-Refine improves by only 0.2 with GPT-3.5-Turbo, while our PHP improves by 2.3 with GPT-3.5-Turbo and by 0.9 with GPT-4.

Explanation of why the base prompt performance differs: Self-Refine employs a Python-style CoT prompt, while our CoT comes from the original chain-of-thought paper. These results demonstrate the advantage of using hints and hint rehearsal over LLM-generated internal feedback for reasoning.

## A.3 The Effect Of Adaptive Sampling

Table 13: The effect of adaptive sampling and hints. Hint: whether hints are used in the subsequent rounds. Base: the prompt used to generate the base answer. Subsequent prompt: the prompt used to generate subsequent answers. The results are from text-davinci-002.
| Hint | Base | Subsequent prompt | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | Average |
|---|---|---|---|---|---|---|---|---|---|
| ✗ | CoT | N/A | 85.8 | 89.1 | 89.7 | 72.9 | 49.5 | 44.4 | 71.89 |
| ✗ | CoT | CoT | 85.5 | 89.6 | 89.9 | 73.0 | 49.5 | 45.6 | 72.18 |
| ✓ | CoT | PHP-CoT | 86.8 | 89.0 | 90.1 | 72.3 | 51.1 | 45.6 | 72.48 |

The experiment setup is as follows: 1) CoT + N/A: chain-of-thought without adaptive sampling, i.e., only one round of interaction. 2) CoT + CoT: chain-of-thought with adaptive sampling. We employ CoT to obtain answers and stop when two consecutive answers are the same. This setting measures the performance gains from adaptive sampling alone. 3) CoT + PHP-CoT: the implementation of our Progressive-Hint Prompting. In the first round, we employ CoT to obtain an answer; in each subsequent round, we employ PHP-CoT with the previous answers as hints, and stop when two consecutive answers are the same. This setting measures the performance gains from using hints. We utilized text-davinci-002 for these experiments.

The findings indicate that the most substantial improvements come from adding hints to the questions and adopting a PHP-style prompt. Adaptive sampling alone increased the performance of CoT from 71.89 to 72.18; for Complex CoT, the performance slightly increased from 70.89 to 70.96. A more pronounced improvement was observed with Progressive-Hint Prompting, which combines hints with a PHP-style prompt: CoT's performance rose from 72.18 to 72.48, and Complex CoT experienced a significant jump from 70.96 to 72.75.

## A.4 Is The Performance Due To The Length Of The Prompt?

The performance of PHP is not due to the increased length of the prompt, as Table 2 shows. If the performance increase were caused by a longer prompt, then Standard+PHP should be better than the Standard prompt, and Complex CoT+PHP should be better than Complex CoT. Moreover, since the Standard+PHP prompt is almost double the length of the Standard prompt while the Complex CoT+PHP prompt is less than double the length of Complex CoT, the performance increase of Standard+PHP should then be larger than that of Complex CoT+PHP. However, whether the model is text-davinci-002 or text-davinci-003, the average performance of Standard+PHP is always lower than that of Standard, and the average performance of Complex CoT+PHP is always better than that of Complex CoT. This suggests that the improvement is not caused by the increased prompt length.

## A.5 Alternative Similar Sentence Performance

We consider another potential sentence: "I'm not sure if I got that right. Let me try again." This sentence is used to replace P1 and P2. In terms of average performance, the suggested sentence "I'm not sure if I got that right. Let me try again." can improve performance. However, it performs worse than without PHP on three datasets, including AddSub (84.4 < 85.5), MultiArith (97.3 < 97.5), and SingleEQ (92.1 < 92.5). The original sentences, for their part, always perform no worse than without PHP. This suggests that the hint rehearsal in the original sentences is important for boosting performance.

Table 14: The performance comparison between the proposed P1+P2 and an alternative sentence. Original: the original sentences "We know the Answer Hints: $A_1, \ldots, A_p$. With the Answer Hints: $A_1, \ldots, A_p$, we will answer the question."
Alternative: "I'm not sure if I got that right. Let me try again.". | PHP | Sentence | Dataset | Average | | | | | | |--------|-------------|-----------|-----------|-------|------|------|------|-------| | AddSub | MultiArith | SingleEQ | SVAMP | GSM8K | AQuA | | | | | ✗ | N/A | 85.5 | 97.5 | 92.5 | 81.0 | 82.8 | 57.4 | 82.78 | | ✗ | Alternative | 84.8 | 97.3 | 92.1 | 82.5 | 85.1 | 59.0 | 83.96 | | ✗ | Original | 85.3 | 98.0 | 92.9 | 83.1 | 85.1 | 60.6 | 84.16 | ## A.6 Interaction Cases In this section, we provide more interaction cases. ## A.6.1 Success Case Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? A: Let's think step by step James runs 3 sprints 3 times a week, so he runs 3*3 = 9 sprints a week. Each sprint is 60 meters, so he runs 9*60 = 540 meters a week. The answer is 540. Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? (Hint: The answer is near to 540). A: We know the Answer Hints: 540. With the Answer Hints: 540, we will answer the question. Let's think step by step James runs 3 sprints 3 times a week. Each sprint is 60 meters, so he runs 3 x 60 = 180 meters each time. He runs 3 times a week, so he runs 180 x 3 = 540 meters each week. The answer is 540. Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner. Using this information, how many seconds will the faster team win by? A: Let's think step by step The first team will take 4 x 55 seconds = 220 seconds to complete the race. The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race. The first team will win by 220 - 222 = -2 seconds. The answer is -2. Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2). A: We know the Answer Hints: -2. With the Answer Hints: -2, we will answer the question. Let's think step by step The first team will finish in 55 seconds x 4 runners = 220 seconds. The second team will finish in 60 seconds + (3 seconds x 3 runners) = 75 seconds. The difference between the two teams is 220 - 75 = 145 seconds. The faster team will win by 145 seconds. The answer is 145 seconds. Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded and each of their members will run their 400 meter leg in precisely 55 seconds. 
The other team is less well-rounded; their first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145).

A: We know the Answer Hints: -2, 145. With the Answer Hints: -2, 145, we will answer the question. Let's think step by step
The first team will take 4 x 55 = 220 seconds to complete the race. The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race. The first team will finish the race 2 seconds faster than the second team. The answer is -2.

Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145, -2).

A: We know the Answer Hints: -2, 145, -2. With the Answer Hints: -2, 145, -2, we will answer the question. Let's think step by step
The first team will run their 4 legs in 55 seconds each, so their total time will be 4 x 55 = 220 seconds. The second team's first runner will run their 400 meter leg in 60 seconds, then each subsequent runner will be 3 seconds faster than the previous runner. So the second runner will run in 57 seconds, the third runner in 54 seconds, and the fourth runner in 51 seconds. The total time for the second team will be 60 + 57 + 54 + 51 = 222 seconds. The difference in time between the two teams is 222 - 220 = 2 seconds. The faster team will win by 2 seconds. The answer is 2.

Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145, -2, 2).

A: We know the Answer Hints: -2, 145, -2, 2. With the Answer Hints: -2, 145, -2, 2, we will answer the question. Let's think step by step
The first team will run their 4 legs in 55 seconds each, for a total of 4 x 55 = 220 seconds. The second team will run their 4 legs in 60, 57, 54, and 51 seconds, for a total of 60 + 57 + 54 + 51 = 222 seconds. The difference between the two teams is 222 - 220 = 2 seconds. The faster team will win by 2 seconds. The answer is 2.

Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18%

A: Let's think step by step
The shopkeeper gave a discount of 20% on all the items. Therefore, the new price of the items = 80% of the original price.
To get back to the original price, the shopkeeper must increase the price by (100% - 80%) = 20%. The answer is (B), 20% Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint: The answer is near to (B)). A: We know the Answer Hints: (B). With the Answer Hints: (B), we will answer the question. Let's think step by step Let the original price of the item be x. Then, the price after discount = x - (20/100)x = 0.8x To get the original price, the shopkeeper must increase the price by (100/80)% = 25%. The answer is (C), 25% Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint: The answer is near to (B), (C)). A: We know the Answer Hints: (B), (C). With the Answer Hints: (B), (C), we will answer the question. Let's think step by step Let the original price of the item be x. Then, the discounted price of the item = x - (20/100)x = 0.8x To get the original price, the shopkeeper must increase the price by (100/80)% = 25%. The answer is (C), 25% ## A.6.2 Failure Case Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens? A: Let's think step by step In the morning, Wendi gives her flock of chickens 15 cups of feed. Each chicken gets 3 cups of feed, so 15 cups of feed is for 5 chickens. In the afternoon, Wendi gives her chickens another 25 cups of feed. Each chicken gets 3 cups of feed, so 25 cups of feed is for 8 chickens. So, in total, Wendi has fed 5+8=13 chickens. Since Wendi's flock is 20 chickens, she needs to feed 20-13=7 more chickens. Each chicken needs 3 cups of feed, so Wendi needs to give her chickens 7*3=21 cups of feed in the final meal of the day. The answer is 21. Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens? (Hint: The answer is near to 21). A: We know the Answer Hints: 21. With the Answer Hints: 21, we will answer the question. Let's think step by step Wendi feeds each of her chickens three cups of feed per day. In the morning, she gives her flock of chickens 15 cups of feed. This means that 15 cups of feed is enough for 5 chickens. In the afternoon, she gives her chickens another 25 cups of feed. This means that 25 cups of feed is enough for 8 chickens. So, she has already fed 13 chickens in the morning and afternoon. 
She has 20 chickens in total, so she needs to feed 20-13 = 7 more chickens. Each chicken needs 3 cups of feed, so she needs to give 7 chickens x 3 cups of feed = 21 cups of feed in the final meal of the day. The answer is 21. Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many oranges are good? A: Let's think step by step There are 25 oranges in the basket, of which 1 is bad, 20% are unripe, and 2 are sour. This means that 1 + 20% + 2 = 23 oranges are not good. Therefore, 25 - 23 = 2 oranges are good. The answer is 2. Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many oranges are good? (Hint: The answer is near to 2). A: We know the Answer Hints: 2. With the Answer Hints: 2, we will answer the question. Let's think step by step There are 25 oranges in the basket. 1 is bad, so there are 24 oranges left. 20% of the oranges are unripe, so 20% of 24 = 4.8 oranges are unripe. 2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left. The rest of the oranges are good, so the number of good oranges is 17.2. The answer is 17.2. Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many oranges are good? (Hint: The answer is near to 2, 17.2). A: We know the Answer Hints: 2, 17.2. With the Answer Hints: 2, 17.2, we will answer the question. Let's think step by step There are 25 oranges in the basket. 1 is bad, so there are 24 oranges left. 20% of 24 oranges are unripe, so 20% of 24 = 4.8 oranges are unripe. 2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left. These 17.2 oranges are good. The answer is 17.2. Question: If the probability that Stock A will increase in value during the next month is 0.56, and the probability that Stock B will increase in value during the next month is 0.74. What is the greatest value for the probability that neither of these two events will occur? Answer Choices: (A) 0.22 (B) 0.26 (C) 0.37 (D) 0.46 (E) 0.63 A: Let's think step by step The probability that Stock A will not increase in value during the next month is 1 - 0.56 = 0.44 The probability that Stock B will not increase in value during the next month is 1 - 0.74 = 0.26 The probability that neither of these two events will occur is the product of the two probabilities, which is 0.44 x 0.26 = 0.1144. The greatest value for the probability that neither of these two events will occur is 0.1144. The answer is (A), 0.22. Question: If the probability that Stock A will increase in value during the next month is 0.56, and the probability that Stock B will increase in value during the next month is 0.74. What is the greatest value for the probability that neither of these two events will occur? Answer Choices: (A) 0.22 (B) 0.26 (C) 0.37 (D) 0.46 (E) 0.63 (Hint: The answer is near to (A)). A: We know the Answer Hints: (A). With the Answer Hints: (A), we will answer the question. Let's think step by step The probability that neither of these two events will occur is 1 - (0.56 + 0.74) = 1 - 1.30 = 0.30 The greatest value for the probability that neither of these two events will occur is 0.30. The answer is (A), 0.22.
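All of the cases above follow the same mechanical loop: ask the question with no hint, extract the answer, re-ask with every previous answer appended as a hint, and stop once two consecutive answers agree. The sketch below is a minimal reconstruction of that loop from these transcripts, not the released implementation; `ask_model` and `extract_answer` are hypothetical helpers (an LLM call and a parser for the final "The answer is ..." span), and the `max_rounds` cap is an added safeguard rather than part of the published method.

```python
def progressive_hint_prompting(question: str, base_prompt: str, max_rounds: int = 10):
    """Query repeatedly, feeding all previous answers back as hints,
    until two consecutive rounds return the same answer."""
    hints, previous = [], None
    for _ in range(max_rounds):
        if hints:
            # Hints accumulate across rounds, duplicates included,
            # e.g. "(Hint: The answer is near to -2, 145, -2, 2)."
            q = f"{question} (Hint: The answer is near to {', '.join(hints)})."
        else:
            q = question  # first round: no hint
        reply = ask_model(f"{base_prompt}\nQuestion: {q}\nA:")  # hypothetical LLM call
        answer = extract_answer(reply)  # hypothetical parser for "The answer is ..."
        if answer == previous:  # two consecutive rounds agree: stop
            return answer
        previous = answer
        hints.append(answer)
    return previous
```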
## A.7 Standard Prompt

## A.7.1 For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K

## Base Standard

Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: The answer is 8. Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: The answer is 8.

## Progressive-Hint Prompting Standard (PHP-Standard)

Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is near to 6). A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? (Hint: The answer is near to 10, 8). A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question. The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? (Hint: The answer is near to 30, 35). A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? (Hint: The answer near to 8, 12). A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question. The answer is 8. Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? (Hint: The answer is near to 9, 5). A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? (Hint: The answer is near to 20). A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? (Hint: The answer is near to 45). A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint: The answer is near to 8). A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. The answer is 8.

## A.7.2 For AQuA

## Base Standard

Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: The answer is (b).

## Progressive-Hint Prompting Standard (PHP-Standard)

Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint: The answer is near to (a)). Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)). Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to (d), (c)). Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (a)). Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (b). Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint: The answer is near to (a), (d)). Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)). Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to (b)). Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (b), (a)). Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question. The answer is (b).

## A.8 Chain-Of-Thought Prompt

## A.8.1 For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K

## Base Chain-Of-Thought

Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were planted. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39 pieces left in total. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8 lollipops. The answer is 8. Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4 more toys. Now he has 5 + 4 = 9 toys. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each day from monday to thursday, 5 more computers were installed. So 4 * 5 = 20 computers were added. Now 9 + 20 = 29 computers are now in the server room. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he had 58 - 23 = 35 at the end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars. Now she has 23 - 15 = 8 dollars left. The answer is 8. ## Progressive-Hint Prompting Chain-Of-Thought (Php-Cot) Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is near to 6). A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were planted. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? (Hint: The answer is near to 10, 8). A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question. There are originally 3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot. The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? (Hint: The answer is near to 30, 35). A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question. Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39 pieces left in total. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? (Hint: The answer near to 8, 12). A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question. Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8 lollipops. The answer is 8. Q: Shawn has five toys. 
For Christmas, he got two toys each from his mom and dad. How many toys does he have now? (Hint: The answer is near to 9, 5). A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question. Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4 more toys. Now he has 5 + 4 = 9 toys. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? (Hint: The answer is near to 20). A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. There were originally 9 computers. For each day from monday to thursday, 5 more computers were installed. So 4 * 5 = 20 computers were added. Now 9 + 20 = 29 computers are now in the server room. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? (Hint: The answer is near to 45). A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. Michael started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he had 58 - 23 = 35 at the end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint: The answer is near to 8). A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. Olivia had 23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars. Now she has 23 - 15 = 8 dollars left. The answer is 8.

## A.8.2 For AQuA

## Base Chain-Of-Thought

Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).

## Progressive-Hint Prompting Chain-Of-Thought (PHP-CoT)

Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint: The answer is near to (a)). Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to (d), (c)). Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question. The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (a)). Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint: The answer is near to (a), (d)). Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question. If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)). Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to (b)). Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (b), (a)). Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question. There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).

## A.9 Complex Chain-Of-Thought Prompt

## A.9.1 For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K

## Base Complex Chain-Of-Thought

Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day? A: Let's think step by step.
Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters = 6 hours total. For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total. Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days. However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total hours x 10 minutes = 120 extra minutes for breaks. They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes. And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours. So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total. They want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75 They will need to plan to study 4 days to allow for all the time they need. The answer is 4 Question: Mark's basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their opponents score double the 2 pointers but half the 3 pointers and free throws. What's the total number of points scored by both teams added together? A: Let's think step by step Mark's team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers. His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws. All together his team scored 50+24+10= 84 points Mark's opponents scored double his team's number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers. His opponents scored half his team's number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers. They also scored half Mark's team's points in free throws, meaning they scored 10/2=5 points in free throws. All together Mark's opponents scored 100+12+5=117 points The total score for the game is both team's scores added together, so it is 84+117=201 points The answer is 201 Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. If she buys 2/5 times more of each item, what would be the total number of the items she will have if she currently has 60 marbles? A: Let's think step by step When Bella buys 2/5 times more marbles, she'll have increased the number of marbles by 2/5*60 = 24 The total number of marbles she'll have is 60+24 = 84 If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees. If Bella buys 2/5 times more frisbees, she'll have 2/5*30 = 12 more frisbees. The total number of frisbees she'll have will increase to 30+12 = 42 Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards If she buys 2/5 times more deck cards, she'll have 2/5*10 = 4 more deck cards. The total number of deck cards she'll have is 10+4 = 14 Together, Bella will have a total of 14+42+84 = 140 items The answer is 140 Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of each fruit in the fourth basket. How many fruits are there? A: Let's think step by step For the first three baskets, the number of apples and oranges in one basket is 9+15=24 In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets. 
Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets. The number of apples in the fourth basket is 9-2=7 There are also 15-2=13 oranges in the fourth basket The combined number of oranges and apples in the fourth basket is 13+7=20 The fourth basket also contains 14-2=12 bananas. In total, the fourth basket has 20+12=32 fruits. The four baskets together have 32+114=146 fruits. The answer is 146 Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly split between oranges, apples and watermelons, and the price of 1 orange is $0.50. How much does 1 apple cost if your total bill was $66? A: Let's think step by step If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6 If my total bill was $66 and I spent $6 on oranges then I spent $66 - 6 =60 on the other 2 fruit types. Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price and that the price of one apple is A, then 1W=4A If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A 60 = 48A + 12A 60 = 60A Then we know the price of one apple (A) is $60/60= $1 The answer is 1 Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with only 300 students. At the start of the school year, Susy had 100 social media followers. She gained 40 new followers in the first week of the school year, half that in the second week, and half of that in the third week. Sarah only had 50 social media followers at the start of the year, but she gained 90 new followers the first week, a third of that in the second week, and a third of that in the third week. After three weeks, how many social media followers did the girl with the most total followers have? A: Let's think step by step After one week, Susy has 100+40 = 140 followers. In the second week, Susy gains 40/2 = 20 new followers. In the third week, Susy gains 20/2 = 10 new followers. In total, Susy finishes the three weeks with 140+20+10 = 170 total followers. After one week, Sarah has 50+90 = 140 followers. After the second week, Sarah gains 90/3 = 30 followers. After the third week, Sarah gains 30/3 = 10 followers. So, Sarah finishes the three weeks with 140+30+10 = 180 total followers. Thus, Sarah is the girl with the most total followers with a total of 180. The answer is 180 Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he make in total, in dollars? A: Let's think step by step Sam bought 12 boxes x 10 =120 worth of highlighters. He bought 12 * 30 = 360 highlighters in total. Sam then took 5 boxes × 6 highlighters/box = 30 highlighters. He sold these boxes for 5 * $3 = $15 After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining. These form 330 / 3 = 110 groups of three pens. He sold each of these groups for $2 each, so made 110 * 2 = $220 from them. In total, then, he earned $220 + $15 = $235. Since his original cost was $120, he earned $235 - $120 = $115 in profit. 
The answer is 115 Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the female students like to play basketball. What percent of the population of the school do not like to play basketball if the ratio of the male to female students is 3:2 and there are 1000 students? A: Let's think step by step The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females. Each part represents 1000/5 = 200 students. So, there are 3 x 200 = 600 males. And there are 2 x 200 = 400 females. Hence, 600 x 2/3 = 400 males play basketball. And 400 x 1/5 = 80 females play basketball. A total of 400 + 80 = 480 students play basketball. Therefore, 1000 - 480 = 520 do not like to play basketball. The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52 The answer is 52 ## Progressive-Hint Prompting Complex Chain-Of-Thought (Php-Complex Cot) Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day? (Hint: The answer is near to 4). A: We know the Answer Hints: 4. With the Answer Hints: 4, we will answer the question. Let's think step by step. Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters = 6 hours total. For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total. Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days. However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total hours x 10 minutes = 120 extra minutes for breaks. They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes. And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours. So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total. They want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75 They will need to plan to study 4 days to allow for all the time they need. The answer is 4 Question: Mark's basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their opponents score double the 2 pointers but half the 3 pointers and free throws. What's the total number of points scored by both teams added together?(Hint: The answer is near to 201). A: We know the Answer Hints: 201. With the Answer Hints: 201, we will answer the question. Let's think step by step Mark's team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers. His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws. All together his team scored 50+24+10= 84 points Mark's opponents scored double his team's number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers. 
His opponents scored half his team's number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers. They also scored half Mark's team's points in free throws, meaning they scored 10/2=5 points in free throws All together Mark's opponents scored 100+12+5=117 points The total score for the game is both team's scores added together, so it is 84+117=201 points The answer is 201 Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. If she buys 2/5 times more of each item, what would be the total number of the items she will have if she currently has 60 marbles? (Hint: The answer is near to 160,145). A: We know the Answer Hints: 160, 145. With the Answer Hints: 160, 145, we will answer the question. Let's think step by step When Bella buys 2/5 times more marbles, she'll have increased the number of marbles by 2/5*60 = 24 The total number of marbles she'll have is 60+24 = 84 If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees. If Bella buys 2/5 times more frisbees, she'll have 2/5*30 = 12 more frisbees. The total number of frisbees she'll have will increase to 30+12 = 42 Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards If she buys 2/5 times more deck cards, she'll have 2/5*10 = 4 more deck cards. The total number of deck cards she'll have is 10+4 = 14 Together, Bella will have a total of 14+42+84 = 140 items The answer is 140 Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of each fruit in the fourth basket. How many fruits are there? (Hint: The answer is near to 93, 141). A: We know the Answer Hints: 93, 141. With the Answer Hints: 93, 141, we will answer the question. Let's think step by step For the first three baskets, the number of apples and oranges in one basket is 9+15=24 In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets. Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets. The number of apples in the fourth basket is 9-2=7 There are also 15-2=13 oranges in the fourth basket The combined number of oranges and apples in the fourth basket is 13+7=20 The fourth basket also contains 14-2=12 bananas. In total, the fourth basket has 20+12=32 fruits. The four baskets together have 32+114=146 fruits. The answer is 146 Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly split between oranges, apples and watermelons, and the price of 1 orange is $0.50. How much does 1 apple cost if your total bill was $66? (Hint: The answer is near to 1, 5). A: We know the Answer Hints: 1, 5. With the Answer Hints: 1, 5, we will answer the question. Let's think step by step If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6 If my total bill was $66 and I spent $6 on oranges then I spent $66 - $6 = $60 on the other 2 fruit types. 
Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price and that the price of one apple is A, then 1W=4A If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A $60 = 48A + 12A $60 = 60A Then we know the price of one apple (A) is $60/60= $1 The answer is 1 Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with only 300 students. At the start of the school year, Susy had 100 social media followers. She gained 40 new followers in the first week of the school year, half that in the second week, and half of that in the third week. Sarah only had 50 social media followers at the start of the year, but she gained 90 new followers the first week, a third of that in the second week, and a third of that in the third week. After three weeks, how many social media followers did the girl with the most total followers have? (Hint: The answer is near to 180, 160). A: We know the Answer Hints: 180, 160. With the Answer Hints: 180, 160, we will answer the question. Let's think step by step After one week, Susy has 100+40 = 140 followers. In the second week, Susy gains 40/2 = 20 new followers. In the third week, Susy gains 20/2 = 10 new followers. In total, Susy finishes the three weeks with 140+20+10 = 170 total followers. After one week, Sarah has 50+90 = 140 followers. After the second week, Sarah gains 90/3 = 30 followers. After the third week, Sarah gains 30/3 = 10 followers. So, Sarah finishes the three weeks with 140+30+10 = 180 total followers. Thus, Sarah is the girl with the most total followers with a total of 180. The answer is 180 Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he make in total, in dollars? (Hint: The answer is near to 120). A: We know the Answer Hints: 120. With the Answer Hints: 120, we will answer the question. Let's think step by step Sam bought 12 boxes x $10 = $120 worth of highlighters. He bought 12 * 30 = 360 highlighters in total. Sam then took 5 boxes × 6 highlighters/box = 30 highlighters. He sold these boxes for 5 * $3 = $15 After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining. These form 330 / 3 = 110 groups of three pens. He sold each of these groups for $2 each, so made 110 * 2 = $220 from them. In total, then, he earned $220 + $15 = $235. Since his original cost was $120, he earned $235 - $120 = $115 in profit. The answer is 115 Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the female students like to play basketball. What percent of the population of the school do not like to play basketball if the ratio of the male to female students is 3:2 and there are 1000 students? (Hint: The answer is near to 45). A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. Let's think step by step The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females. Each part represents 1000/5 = 200 students. So, there are 3 x 200 = 600 males. And there are 2 x 200 = 400 females. Hence, 600 x 2/3 = 400 males play basketball. And 400 x 1/5 = 80 females play basketball. 
A total of 400 + 80 = 480 students play basketball. Therefore, 1000 - 480 = 520 do not like to play basketball. The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52 The answer is 52

## A.9.2 For AQuA

## Base Complex Chain-Of-Thought

Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of the mess were increased by rs . 84 per day while the average expenditure per head diminished by re 1 . what was the original expenditure of the mess ? Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630 (e) none of these A: Let's think step by step let the original average expenditure be rs . x then , 42 ( x - 1 ) - 35 x = 84 7 x = 126 x = 18 therefore original expenditure = rs . ( 35 * 18 ) = rs . 630. The answer is (d), rs . 630 Question: a train 200 m long passes a man , running at 5 km / hr in the same direction in which the train is going , in 10 seconds . the speed of the train is ? Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12 A: Let's think step by step speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec. [ ( 20 ) * ( 18 / 5 ) ] km / hr = 72 km / hr. let the speed of the train be x km / hr. then , relative speed = ( x - 5 ) km / hr. x - 5 = 72, x = 77 km / hr . The answer is (c), 77 Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of material a and 70 % of material b . a mixture of both these solutions contains 22 % of material a in the final product . how much solution x is present in the mixture ? Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 % A: Let's think step by step we can assume the total weight of the mixture = 100 conc of a in the final mixture = 22 let weight of a in the mixture be x. conc given = 20% = 0.2 therefore , weight of b = 100 - x. conc given = 30% = 0.3 now , accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22 solving , we get x = 80 since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%. The answer is (c), 80% Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how much profit will the trder earn on 40 metres of cloth ? Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these A: Let's think step by step price of 1 metre cloth = 8200 / 40 = rs 205 cost of 1 metre cloth = rs 205 - 35 = rs 170 cost on 40 metres = 170 x 40 = rs . 6800 profit earned on 40 metres cloth = rs . 8200 - rs . 6800 = rs . 1400 The answer is (d), rs . 1400 Question: if x < y < z and y - x > 5 , where x is an even integer and y and z are odd integers , what is the least possible value s of z - x ? Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10 A: Let's think step by step We know x < y < z to find the least possible value for z - x, we need to find the values for z and x that can be closest to each other. if x is some even number, then what could be minimum possible odd z. if x is some even number, y - x > 5 ; y > x + 5 minimum value for y = x + 5 + 2 = x + 7 (note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2) minimum value for z = y + 2 = x + 7 + 2 = x + 9 (note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2) s = z - x = x + 9 - x = 9 The answer is (d), 9 Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum compounded yearly and half - yearly ? Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s .
2.45 A: Let's think step by step c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000 * 26 / 25 * 51 / 50 = rs . 6364.8 c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 / 50 * 51 / 50 ) = rs . 6367.25 difference = ( 6367.25 - 6364.8 ) = rs . 2.45. The answer is (e), s . 2.45 Question: the average weight of a , b and c is 45 kg . if the average weight of a and b be 40 kg and that of b and c be 45 kg , then the weight of b is : Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these A: Let's think step by step let a , b , c represent their respective weights. then , we have : a + b + c = ( 45 x 3 ) = 135 . . . ( i ) a + b = ( 40 x 2 ) = 80 . . . ( ii ) b + c = ( 45 x 2 ) = 90 . . . ( iii ) adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 . . . ( iv ) subtracting ( i ) from ( iv ), we get : b = 35. b' s weight = 35 kg . The answer is (d), 35 kg Question: the compound and the simple interests on a certain sum at the same rate of interest for two years are rs . 11730 and rs . 10200 respectively . find the sum Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 170287 (e) rs . 171881 A: Let's think step by step the simple interest for the first year is 10200 / 2 is rs . 5100 and compound interest for first year also is rs . 5100. the compound interest for second year on rs . 5100 for one year so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000 The answer is (b), rs . 17000

## Progressive-Hint Prompting Complex Chain-Of-Thought (PHP-Complex CoT)

Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of the mess were increased by rs . 84 per day while the average expenditure per head diminished by re 1 . what was the original expenditure of the mess ? (Hint: The answer is near to (d)). Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630 (e) none of these A: We know the Answer Hints: (d). With the Answer Hints: (d), we will answer the question. Let's think step by step let the original average expenditure be rs . x then , 42 ( x - 1 ) - 35 x = 84 7 x = 126 x = 18 therefore original expenditure = rs . ( 35 * 18 ) = rs . 630. The answer is (d), rs . 630 Question: a train 200 m long passes a man , running at 5 km / hr in the same direction in which the train is going , in 10 seconds . the speed of the train is ? (Hint: The answer is near to (c)). Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12 A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question. Let's think step by step speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec. [ ( 20 ) * ( 18 / 5 ) ] km / hr = 72 km / hr. let the speed of the train be x km / hr. then , relative speed = ( x - 5 ) km / hr. x - 5 = 72, x = 77 km / hr . The answer is (c), 77 Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of material a and 70 % of material b . a mixture of both these solutions contains 22 % of material a in the final product . how much solution x is present in the mixture ? (Hint: The answer is near to (a)). Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 % A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. Let's think step by step we can assume the total weight of the mixture = 100 conc of a in the final mixture = 22 let weight of a in the mixture be x.
conc given = 20% = 0.2 therefore , weight of b = 100 - x. conc given = 30% = 0.3 now , accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22 solving , we get x = 80 since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%. The answer is (c), 80% Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how much profit will the trder earn on 40 metres of cloth ? (Hint: The answer is near to (c)). Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question. Let's think step by step price of 1 metre cloth = 8200 / 40 = rs 205 cost of 1 metre cloth = rs 205 - 35 = rs 170 cost on 40 metres = 170 x 40 = rs . 6800 profit earned on 40 metres cloth = rs . 8200 - rs . 6800 = rs . 1400 The answer is (d), rs . 1400 Question: if x < y < z and y - x > 5 , where x is an even integer and y and z are odd integers , what is the least possible value s of z - x ? (Hint: The answer is near to (a), (c)). Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10 A: We know the Answer Hints: (a), (c). With the Answer Hints: (a), (c), we will answer the question. Let's think step by step We know x < y < z to find the least possible value for z - x, we need to find the values for z and x that can be closest to each other. if x is some even number, then what could be minimum possible odd z. if x is some even number, y - x > 5 ; y > x + 5 minimum value for y = x + 5 + 2 = x + 7 (note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2) minimum value for z = y + 2 = x + 7 + 2 = x + 9 (note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2) s = z - x = x + 9 - x = 9 The answer is (d), 9 Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum compounded yearly and half - yearly ? (Hint: The answer is near to (e), (d)). Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s . 2.45 A: We know the Answer Hints: (e), d). With the Answer Hints: (e), (d), we will answer the question. Let's think step by step c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000 * 26 / 25 * 51 / 50 = rs . 6364.8 c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 / 50 * 51 / 50 ) = rs . 6367.25 difference = ( 6367.25 - 6364.8 ) = rs . 2.45. The answer is (e), s . 2.45 Question: the average weight of a , b and c is 45 kg . if the average weight of a and b be 40 kg and that of b and c be 45 kg , then the weight of b is : (Hint: The answer is near to (d), (b)). Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these A: We know the Answer Hints: (d), (b). With the Answer Hints: (d), (b), we will answer the question. Let's think step by step let a , b , c represent their respective weights. then , we have : a + b + c = ( 45 x 3 ) = 135 . . . ( i ) a + b = ( 40 x 2 ) = 80 . . . ( ii ) b + c = ( 45 x 2 ) = 90 . . . ( iii ) adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 . . . ( iv ) subtracting ( i ) from ( iv ), we get : b = 35. b' s weight = 35 kg . The answer is (d), 35 kg Question: the compound and the simple interests on a certain sum at the same rate of interest for two years are rs . 11730 and rs . 10200 respectively . find the sum (Hint: The answer is near to (e), (c)). Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 
170287 (e) rs . 171881 A: We know the Answer Hints: (e), (c). With the Answer Hints: (e), (c), we will answer the question. Let's think step by step the simple interest for the first year is 10200 / 2 is rs . 5100 and compound interest for first year also is rs . 5100. the compound interest for second year on rs . 5100 for one year so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000 The answer is (b), rs . 17000
Review 1:

Summary: This paper proposes a progressive-hint prompting method to improve the reasoning ability of large language models. Specifically, the paper adds the current answer as a hint to the prompt of the next turn and repeats this process until the answers of two consecutive rounds are the same, at which point the proposed method considers the LLM to have reached its final answer. The method is evaluated on seven benchmarks and achieves new SOTA scores.

Strengths and Weaknesses:

**Strengths**: The experiments are diverse and solid. The proposed method is evaluated on seven benchmarks and achieves new SOTA scores.

**Weaknesses**:

1. The paper lacks novelty. The proposed method just iteratively adds the current answer to the next prompt. I don't see any novelty here.

2. Many claims in this paper are not proved. To be specific:

2.1 On page 2, "If the current answer matches the previous answer, it is more likely to be correct"; page 4, "If the current answer is the same as the previous one, we can have confidence that the current answer is correct". To me, if the answers in two consecutive rounds don't change, it just means that the LLM has reached a fixed answer and would not change its answer any more even with more information. However, it doesn't mean that the answer is more likely to be correct. If the authors want to claim this, please show the evidence.

2.2 Page 4, "PHP Design Principle": according to whether the hints are the same as the correct answer or not, the authors utilize the standard prompt, the CoT prompt and the Complex CoT prompt. I don't see much correlation between the design principle and the implemented design methods. The authors could elaborate on this.

2.3 Page 6, "The LLM is more powerful, the more correct answer it might give, the fewer interactions it needs; The prompt is more powerful, the better reasoning ability it has, the more interactions it needs". These two observations seem contradictory.

3. Do the authors set a maximum iteration number? What happens if the number of rounds is too large?

Requested Changes: Please see the weaknesses above. I hope the authors can provide clear evidence for those claims.

Broader Impact Concerns: No

==================================================

Review 2:

Summary: Currently, the performance of LLM-based applications heavily depends on prompt design. The authors point out that existing methods do not utilize the generated answers to guide subsequent responses. Therefore, this paper introduces progressive-hint prompting. Specifically, the authors build a progressive-hint prompt based on the previously generated answers ("*We know the Answer Hints*" and "*With the Answer Hints, we will answer the question*"). Experimental results demonstrate the effectiveness of the proposed method.

Strengths and Weaknesses:

**Strengths**

1. This paper elaborately presents a prompt to progressively refine the answers of LLMs by using hints. Experimental results demonstrate the effectiveness of the proposed method.

2. The writing of this paper is good.

**Weaknesses**

1. Progressive refinement is not a novel idea in ML tasks. Many methods (e.g., Reflexion [1], DTG [2]) have adopted the similar idea of refining results based on previous responses. Therefore, I think the novelty of the proposed method is incremental; it just introduces a simple prompt template to progressively refine the answer. And the claim that such a prompt design is the optimal choice still lacks evidence.

2. Some conclusions in Sec 4.1 are trivial: it is straightforward that the more powerful the LLM is, the better the model performance.
3. It seems that the proposed method is only suitable for arithmetic problems. What happens on more complex reasoning tasks (e.g., commonsense reasoning)?

4. Although PHP has improved model performance, its results do not explicitly present any reasoning paths, so it is hard to understand how it utilizes these hints.

5. Based on my observations above, the proposed method is only suitable for arithmetic problems. Currently, there are many works that utilize tools to address mathematical problems, so are there any advantages of the proposed method when compared with tool-based methods?

[1] Reflexion: Language Agents with Verbal Reinforcement Learning
[2] Deliberate then Generate: Enhanced Prompting Framework for Text Generation

Requested Changes:

1. As aforementioned, the proposed method (PHP) seems only suitable for arithmetic problems. I think more experiments should be provided to prove its generalization to different reasoning tasks.

2. PHP requires multiple queries, which brings additional cost. Currently, there are many well-known prompting strategies, like ToT and GoT. How does PHP compare with these methods?

Broader Impact Concerns: I do not think this paper has any broader impact concerns.

==================================================

Review 3:

Summary: The paper presents a modified self-consistency method which iteratively prompts an LLM with the list of previously given answers until two consecutive answers are the same. The method is interesting, and demonstrates apparently real improvements on benchmarks. However, there are some key ablations missing from the experiments, and it is possible that most of the reason the method is good is that it replaces non-adaptive self-consistency with adaptive self-consistency (stop once two consecutive answers are the same).

**Note:** I'm putting down "No" for "Are the claims made in the submission supported by accurate, convincing and clear evidence?" but believe this may be quite fixable as discussed below, and will change that answer if it is fixed.

Strengths and Weaknesses:

### Strengths

1. The numerical results seem like a good improvement, though I do not have full command of the chain-of-thought literature so it is possible there are stronger alternative methods to use as baselines.

2. Multiple baselines are used, and the method seems to be an improvement when compared with several existing methods.

3. There are several useful ablations (section 4.3).

### Weaknesses

First, the prompt looks odd! I don't have much a priori intuition that prompting with not necessarily correct hints would improve the results so significantly, which is why I suspect a lurking alternate explanation.

Second, section 4.4 on self-consistency may be overlooking a key effect which might explain the benefits of the method in ways only vaguely related to the prompt. The paper says

> As shown in Figure 3, using Complex CoT with self-consistency, a 78.1% accuracy can be reached with 40 sample paths, while incorporating PHP reduces the required sample amount to 10×2.1531=21.531 paths, and results in an even better accuracy of 78.2%.

The issue is that this is comparing a non-adaptive method (sample a fixed number of times) with an adaptive method (sample until two consecutive answers are the same). For a toy model of this situation, consider a true/false question where a single sample has probability p of being correct. Let $q$ be the probability of correctness of the "sample until a consecutive match" procedure, and $n$ the expected number of samples. Assume $p \approx 1$. Then we have

\begin{align*}
q &= \frac{p (1 - (1-p)^2)}{1 - p(1-p)} \approx 1 - 2(1-p)^2 \\
n &= \frac{2 + p(1-p)}{1 - p(1-p)} \approx 2 + 3(1-p)
\end{align*}
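These closed forms are straightforward to check numerically; a minimal Monte Carlo sketch (an illustrative addition here, assuming i.i.d. binary samples that are each correct with probability p) agrees with both expressions:

```python
import random

def stop_on_consecutive_match(p: float, rng: random.Random):
    # Draw binary answers (True = correct, with probability p) until two
    # consecutive draws agree; return the agreed answer and #samples used.
    last, count = rng.random() < p, 1
    while True:
        cur = rng.random() < p
        count += 1
        if cur == last:
            return cur, count
        last = cur

rng, trials, p = random.Random(0), 200_000, 0.9
runs = [stop_on_consecutive_match(p, rng) for _ in range(trials)]
print(sum(ok for ok, _ in runs) / trials,          # empirical q, ~0.979
      sum(k for _, k in runs) / trials)            # empirical n, ~2.297
print(p * (1 - (1 - p) ** 2) / (1 - p * (1 - p)),  # closed-form q
      (2 + p * (1 - p)) / (1 - p * (1 - p)))       # closed-form n
```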
Then we have \begin{align*} q &= \frac{p (1 - (1-p)^2)}{1 - p(1-p)} \approx 1 - 2(1-p)^2 \\\\ n &= \frac{2 + p(1-p)}{1 - p(1-p)} \approx 2 + 3(1-p) \end{align*} Since the expected number of samples used is close to 2, plausibly the situation is well approximated by this toy model with $p \approx 1$, in which case the error rate of their method would be significantly lower with just a bit more than 2 turns. This could explain the situation where 21.531 turns with PHP is as good as 40 samples without. This is still interesting: if adaptive methods have not been presented before in a chain-of-thought context it would be good to point this out. I should say that the ablations in section 4.3 partially conflict with this version of the story, but Table 4 in particular seems like evidence but not *much* evidence: on some datasets their method is weaker than the baseline. I am also not completely sure I understand the setup of Table 4: I *think* the "no P1 + P2" versions are still iterating until consecutive equal answers, but the paper does not say this explicitly so I am not sure.
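For reference, the closed forms above can be checked numerically; here is a minimal simulation sketch (mine, not the paper's), assuming a binary answer and i.i.d. samples that are correct with probability p:

```python
import random

def sample_until_consecutive_match(p):
    """One episode of the toy model: draw binary answers (correct with
    probability p) until two consecutive draws agree; return whether the
    final answer is correct and how many samples were used."""
    prev, n = random.random() < p, 1
    while True:
        cur, n = random.random() < p, n + 1
        if cur == prev:  # two consecutive answers agree: stop
            return cur, n
        prev = cur

p, trials = 0.9, 200_000
runs = [sample_until_consecutive_match(p) for _ in range(trials)]
q_hat = sum(correct for correct, _ in runs) / trials
n_hat = sum(n for _, n in runs) / trials
q = p * (1 - (1 - p) ** 2) / (1 - p * (1 - p))  # closed form for q above
n = (2 + p * (1 - p)) / (1 - p * (1 - p))       # closed form for n above
print(f"q: {q_hat:.4f} vs {q:.4f}, n: {n_hat:.3f} vs {n:.3f}")
```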
Requested Changes: ### Understanding the reasons for gains The main question is whether further clarity can be reached around the source of the method's improvements, and in particular how much of the effect is due to 1. **Adaptive vs. non-adaptive self-consistency.** It is a bit hard with the current paper to read out how much of the gain is due to sample efficiency vs. prompting. First, it's important to clarify whether the ablations in section 4.3 are indeed doing the iteration-until-consecutive-equal-answers with the prompt stripped. Second, it would be good to have a version of Figure 4(3) showing the ablations as well. The axes of Figure 3 should also be changed to reflect total cost (accounting for numbers of turns). Finally, if adaptive vs. non-adaptive is indeed part of the story, I am skeptical that the best way to exploit this idea is to sample 10 times and stop each sample independently based on that one sample's turns: I would expect there is a better method that stops the overall basket of samples based on partial agreement. 2. **Mundane prompt changes, such as increasing entropy or length.** The ablations in section 4.3 are some evidence that the prompt effect is real, but I have a few doubts. I am not confident that these should be required, or which set is optimal, so I will list a few options and leave decisions to the authors and to other reviewer discussion. Here are some alternate ablations that might probe the effect more precisely: 2a. First, does the answer matter in the hint? I would be curious to see an ablation that just picks a random number from some plausibly similar distribution, or even picks a number known to be correct vs. a number known to be incorrect and compares them. 2b. Second, is the effect due to just increasing the length of the prompt? The fact that Complex CoT is the strongest baseline is notable, as it simply reranks from the set of samples with high sentence count. This method increases the sentence count by 2. Do other similar sentences also do similar things? 2c. Third, an alternative prompting approach would be to just show the previous attempt and say "I'm not sure if I got that right. Let me try again." ### Smaller requested changes 1. This language is over the top, and should be toned down: “With the successful deployment of the advanced GPT-4 model, we have witnessed a remarkable breakthrough in achieving a new state-of-the-art (SOTA) level of performance across a diverse range of benchmarks. These benchmarks, including SVAMP with an impressive accuracy rate of 91.9%, GSM8K at 95.5%, AQuA with a commendable 79.9%, and even the formidable MATH dataset, where our model now achieves a remarkable 53.90% accuracy, have all been positively impacted by our innovative PHP methodology. The strength of our proposed PHP method becomes especially evident as it consistently demonstrates its ability to enhance the performance of the GPT-4 model. Notably, even in the face of the most challenging dataset, MATH, the PHP methodology shines, elevating the performance from an initial 50.3% to an impressive 53.90%, and improving the performance of all subdatasets. This stark contrast to the performance of GPT-3.5-turbo, which may exhibit a minor decline on the Precalculus dataset after PHP implementation, further underscores the notion that PHP truly excels when harnessed with more powerful models.“ 2. There is a link called “Figure 10” which goes to “Table 10”. I would recommend using the cref latex package. 3. As mentioned above, Figure 3 is misleading at first glance: the x-axis of “sampled reasoning paths” doesn't effectively measure the cost of different methods, as it doesn't account for the number of turns taken. Broader Impact Concerns: I have no broader impact concerns: the method is an incremental but possibly useful improvement to a large literature on chain-of-thought, and most of the broader impact is baked into that thriving literature existing. ================================================== Metareview: Recommendation: Reject Comment: This paper presents a method called progressive hint prompting (PHP) aimed at improving the reasoning capabilities of large language models (LLMs). All three reviewers have expressed reservations, with their assessments ranging from Leaning Reject to outright Reject. Their concerns include the following. I concur with the reviewers regarding the assumption that consecutive matching answers imply correctness, as well as the seeming discrepancy between the proficiency of the LLM and the required number of interactions, two points that the authors' response did not adequately address. One reviewer highlights the potential limitation of the method to arithmetic problems. Although the authors demonstrate enhanced performance on two non-math datasets (CommonsenseQA and StrategyQA), the details of how the prompts are constructed for these tasks remain unclear, particularly since they cannot be structured around proximity to numerical values. Another reviewer expresses skepticism about the significant results attributed to the method due to the potential inclusion of incorrect hints and the lack of clear differentiation between non-adaptive and adaptive methods. Despite requests for clarification and further experimentation to clarify the method's benefits, the authors' subsequent efforts failed to persuade the reviewer, especially regarding the role of adaptive sampling. The reviewer found the additional experiments to be lacking and believed that the paper did not effectively address the confounding factors. ==================================================
# GraphPrivatizer: Improved Structural Differential Privacy for Graph Neural Networks

Anonymous authors Paper under double-blind review

## Abstract

Graph privacy is crucial in systems that present a graph structure, where the confidentiality and privacy of participants play a significant role in the integrity of the system itself. For instance, it is necessary to ensure the integrity of banking systems and transaction networks, protecting the privacy of customers' financial information and transaction details. We propose a method called GraphPrivatizer that privatizes the structure of a graph and protects it under Differential Privacy. GraphPrivatizer performs a controlled perturbation of the graph structure by randomly replacing the neighbors of a node with other similar neighbors, according to some similarity metric. With regard to neighbor perturbation, we find that aggregating features to compute similarities and imposing a minimum similarity score between the original and the replaced nodes provides the best privacy-utility trade-off. We use our method to train a Graph Neural Network server-side without disclosing users' private information to the server. We conduct experiments on real-world graph datasets and empirically evaluate the privacy of our models against privacy attacks.

## 1 Introduction

In recent years, many research efforts have been made to effectively learn from graph-structured data. Graph-based approaches have been successful in a variety of tasks such as fake news detection in social networks (Benamira et al., 2019) and drug discovery (Gaudelet et al., 2021). Graphs can incorporate both information about individual data points and about their interactions: Graph Neural Networks (GNNs, Scarselli et al., 2008) have been effective in capturing this information and learning over graph-structured data. Both the information about the individual data points and the relational information can, however, be of a sensitive nature and must, therefore, be protected. Large-scale machine learning models may require sending information to a server where the training is performed, which poses a privacy risk. Efforts have thus been recently made to address privacy attacks on graphs (Zhang et al., 2021; 2022). One possibility to protect private information on graphs is to use the formal privacy guarantees offered by Differential Privacy (DP, Dwork, 2006), whose range of applications on graph-structured data has been recently expanding (Mueller et al., 2022b). DP has been used both in centralized settings where a server has graph-wide access to information (Olatunji et al., 2023; Wu et al., 2022; Sajadmanesh et al., 2023) and in local settings (Sajadmanesh & Gatica-Perez, 2021). In a centralized privacy setting, a trusted entity is allowed to gather the private data and learn on it, while promising to release a model from which private information cannot be inferred. In this setting, the training procedure itself must therefore be DP (Abadi et al., 2016). A local privacy setting is instead desirable when no entity is trusted to gather all the private information: in this case, data must be privatized locally before it is made available to a central entity where the training is performed. The local privacy setting is therefore crucial in cases where no central entity is trusted to be willing and capable of keeping private data secure.
Once the data has been locally privatized, the central server is not able to infer private data and the training procedure itself does not need to be DP (Mueller et al., 2022b). Despite these advantages, a local privacy setting often results in reduced performance compared to the centralized one (Cormode et al., 2018; Yang et al., 2023), due to the large amount of noise that the local privatization entails. While recent efforts have been made towards improving the privacy-utility trade-off in a local privacy setting (Yang et al., 2023), little prior work has investigated locally privatizing the edges of a graph that is then used for training on a central entity (e.g., a server). Motivated by real-world applications such as private learning on social network data, we therefore focus on protecting the relational information contained in graphs by means of local privatization techniques that act on the individual users' side. To address this problem, we propose GraphPrivatizer, a method to locally privatize a graph's structure that does not rely on graph-wide information, and which protects the privacy of features and labels as well. In particular: (i) we introduce a definition of edge LDP for our local privacy setting; (ii) we propose GraphPrivatizer, a method that locally privatizes the structure of a graph while preserving the out-degree of nodes by construction and keeping labels and features private too; (iii) taking advantage of the notion of message passing in GNNs, we parametrize the perturbations of a node's neighborhood, which allows neighbors to be replaced only with other similar nodes to improve utility while preserving privacy; and (iv) we evaluate our proposal on different real-world datasets to investigate its privacy-utility trade-off, empirically assessing its privacy guarantees using privacy attacks that try to recover the private structure of the graph.

## 2 Related Work

GNNs have gained increasing popularity as the framework of choice to solve graph-based learning tasks in recent years. The efficacy of GNNs in graph representation learning has motivated the proposal of several GNN variants such as Graph Convolutional Networks (Zhang et al., 2019), Graph Attention Networks (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017), as well as architectures for large multi-relation graphs (Iyer et al., 2021; Wang et al., 2019). Recent research efforts have also been made to address privacy attacks on graphs (Zhang et al., 2021; 2022). Privacy attacks can be categorized as graph property attacks (inferring, e.g., the number of nodes), membership attacks (inferring, e.g., whether a subgraph is part of a graph), and graph reconstruction attacks (Zhang et al., 2022). Specifically, the existence of an edge between two nodes is often sensitive in nature (Mueller et al., 2022b) and should be kept private. Differential Privacy (DP, Dwork, 2006) offers formal privacy guarantees to protect information about individual training points, and has been used to provide privacy guarantees in GNNs as well. DP has been utilized in centralized settings where a server has access to information on the entire graph (Olatunji et al., 2023; Wu et al., 2022; Sajadmanesh et al., 2023), and in local settings (Sajadmanesh & Gatica-Perez, 2021; Joshi & Mishra, 2022; 2023).
Different formulations of DP on graphs aim at protecting the relationship between nodes (*edge*-level DP) (Raskhodnikova & Smith, 2016; Hidano & Murakami, 2022), the individual nodes themselves (*node*-level DP) (Raskhodnikova & Smith, 2016; Ayle et al., 2022; Kasiviswanathan et al., 2013; Olatunji et al., 2023), or the entire graph as a single entity (*graph*-level DP) (Mueller et al., 2022a). For a survey on recent advances in DP approaches on structured data, refer to Mueller et al. (2022b). In this work, we focus on protecting the structure of the graph, i.e., hiding edges. Previous work has addressed structural privacy using central (Sajadmanesh et al., 2023; Olatunji et al., 2023) or local (Joshi & Mishra, 2022; 2023; Hidano & Murakami, 2022) DP; these approaches, however, either require some entity to have access to the entire noiseless adjacency matrix of the graph (Sajadmanesh et al., 2023; Joshi & Mishra, 2022) or to part of it (Joshi & Mishra, 2023; Hidano & Murakami, 2022) in order to privatize the graph structure, which poses privacy concerns, or depend on the availability of public data (Olatunji et al., 2023). Additionally, approaches such as Sajadmanesh et al. (2023) require the introduction of a custom architecture. Hidano & Murakami (2022) propose a degree-preserving randomized response algorithm for graph classification on unattributed graphs, where they empirically show the benefit of preserving the degree of nodes after graph perturbations. Besides considering a different learning task (we focus on node classification for graphs with node features and labels), Hidano & Murakami (2022) nonetheless require that each node is aware of how many nodes the graph contains. Joshi & Mishra (2023) is, to the best of our knowledge, the closest existing approach that guarantees edge-level local DP: their approach requires, however, that each node has noiseless access to portions of the adjacency matrix, and it comes at a substantial reduction in model performance if compared to a non-private model. Additionally, Joshi & Mishra (2023) do not test against privacy attacks that try to recover the edges of the graph. In comparison, our approach can be used with any conventional GNN architecture and we do not rely on public data: we adopt a local privacy setting where the individual nodes have noise-free access only to their own edges, features, and labels, and where no entity has access to the complete adjacency matrix of the graph.

## 3 Preliminaries and Problem Statement

In this section we recall the definitions of Graph Neural Network (GNN, Scarselli et al., 2008) and Local Differential Privacy (LDP, Dwork, 2006; Yang et al., 2020). Then, we briefly discuss randomized response (RR, Warner, 1965) and edge privacy in graphs, as well as LinkTeller (Wu et al., 2022), the privacy attack we use to validate our approach. Finally, we describe our local privacy setting and problem statement.

## 3.1 Graph Neural Networks

Consider an unweighted graph defined as a tuple G = (V, E, X, Y), where V = VL ∪ VU is the set union of labeled nodes VL and unlabelled nodes VU, E is the set of edges, $X \in \mathbb{R}^{|V| \times d}$ is a feature matrix consisting of d-dimensional feature vectors, one for each node v ∈ V, and Y is the set of labels. Let N(v) denote the neighborhood of v, that is, the set of nodes which are adjacent to v. Let deg(v) denote the *degree* of v, that is, the size of its neighborhood.
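As a purely illustrative rendering of this notation, a minimal Python sketch of a graph with neighborhoods and degrees might look as follows; the helper names are ours, not the paper's:

```python
import numpy as np

V = [0, 1, 2, 3]                      # nodes
E = {(0, 1), (1, 2), (2, 3), (0, 2)}  # unweighted, undirected edges
X = np.random.rand(len(V), 8)         # |V| x d feature matrix (here d = 8)
Y = {0: 0, 2: 1}                      # labels for the labeled subset V_L

def neighborhood(v):
    """N(v): the set of nodes adjacent to v."""
    return {b for (a, b) in E if a == v} | {a for (a, b) in E if b == v}

def degree(v):
    """deg(v): the size of v's neighborhood."""
    return len(neighborhood(v))
```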
Graph Neural Networks (GNNs, Scarselli et al., 2008) are a class of models that have been effective in learning over graph-structured data. A typical GNN consists of L layers, where the embeddings of the nodes in a certain layer are obtained from the previous layer by means of an *aggregation* and an *update* function. Specifically, the embedding $h_v^l$ for a node v in layer l is obtained by aggregating the embeddings of its neighbors N(v) from layer l − 1 and passing the resulting aggregated message $m_v^l$ through the update function. The aggregation function is a permutation-invariant and differentiable function, while the update function is a trainable and non-linear function:

$$m_{v}^{l}=\mathrm{Aggregate}(\{h_{u}^{l-1}\mid u\in{\mathcal{N}}(v)\}),\qquad(1)$$

$$h_{v}^{l}=\mathrm{Update}(\{h_{v}^{l-1},m_{v}^{l}\}).\qquad(2)$$

Common choices for the Aggregate function are the sum or the mean, while the Update function can, for instance, be a neural network.
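A minimal sketch of one such layer, with a mean Aggregate and a single ReLU layer as Update, might look as follows (our illustration under those assumptions, not the paper's implementation):

```python
import numpy as np

def gnn_layer(h_prev, neighborhood, W, b):
    """One message-passing layer: Eq. (1) aggregates neighbor embeddings with
    a mean, Eq. (2) updates each node with a trainable non-linear transform.
    h_prev maps node -> embedding vector; W has shape (d_out, 2 * d_in)."""
    h_next = {}
    for v, h_v in h_prev.items():
        nbrs = list(neighborhood(v))
        m_v = (np.mean([h_prev[u] for u in nbrs], axis=0)
               if nbrs else np.zeros_like(h_v))                          # Eq. (1)
        h_next[v] = np.maximum(0.0, W @ np.concatenate([h_v, m_v]) + b)  # Eq. (2)
    return h_next
```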
## 3.2 Differential Privacy

Differential Privacy (DP, Dwork, 2006) is a formal definition of privacy that protects individual training points. As originally introduced by Dwork (2006), central or *global* DP, simply referred to as DP, was designed for a *centralized* setting where a trusted entity gathers all user data and guarantees to process it while preserving the privacy of users. More formally, DP guarantees that an attacker cannot confidently infer whether the output of a DP mechanism M was obtained from a database D or from a database D′, where D and D′ differ in a single record and are thus said to be *adjacent* datasets. In a *local* privacy setting, no trusted entity can process the private data of users, and the datasets D consist of data from individual users, which is then protected under Local Differential Privacy (LDP, Yang et al., 2020).

Definition 3.1 (ϵ LDP). *Let* ϵ > 0. *Consider a randomized mechanism* M : D → R *and probabilities* Pr *taken over the coin tosses of* M. M *satisfies* ϵ *local differential privacy if, for any possible pair of user's private data points* x, x′ ∈ D *and for any possible outputs* S ⊆ R*:*

$$\Pr[M(x)\in S]\leq e^{\epsilon}\Pr[M(x^{\prime})\in S].$$

We refer to ϵ as the *privacy budget* of the algorithm. In particular, ϵ = 0 indicates that the randomized mechanism is perfectly private and implies that the output of the mechanism is independent of the input. On the other hand, ϵ = ∞ provides no privacy guarantee. The choice of ϵ is both problem and data dependent (Lee & Clifton, 2011), with common ranges often considering values ϵ ∈ (0, 10] (Wu et al., 2022; Sajadmanesh & Gatica-Perez, 2021). For a given deterministic function, DP can be achieved by adding random noise to the output of the function to hide the contribution of individual training points, where the amount of noise added depends on the choice of privacy budget ϵ (Dwork et al., 2014). As no single entity is trusted with all the users' private data, LDP is generally a stronger privacy model than DP. However, the lack of such a trusted central entity, and thus the need to add noise to the local data of each user before communicating it to the central entity, entails a greater total amount of noise, which can negatively affect model performance (Cormode et al., 2018). This, in turn, enhances the importance of novel approaches that can provide LDP with good accuracy, which are thus our focus.

## 3.3 DP and Privacy Attacks in GNNs

DP can be applied in the graph context by defining a notion of adjacency for graphs. Considering the (centralized) DP setting first, we say two graphs G and G′ are *edge adjacent* if they differ in exactly one edge, that is, if G′ can be obtained from G by adding or removing a single edge. Similarly, G and G′ are *node adjacent* if they differ in exactly one node, that is, if G′ can be obtained from G by adding or removing a single node and its edges. In our local privacy setting we consider LDP, which can guarantee privacy on graphs in the sense of Definition 3.1 on any pair of adjacent user inputs. Focusing here on edge privacy, one can define a notion of edge adjacency to protect a user's edges. A common definition of edge LDP considers a user v's neighbor list, represented by a |V|-dimensional bit vector (b1, . . . , b|V|), where bi = 1 if and only if there is an edge between node v and node vi, and bi = 0 otherwise, for i = 1, . . . , |V|.

Definition 3.2 (ϵ-edge LDP, (Qin et al., 2017)). *Let* ϵ > 0. *A randomized mechanism* M : D → R *satisfies* ϵ*-edge local differential privacy if, for any possible pair of user's neighbor lists* b, b′ *differing by one bit, and for any possible outputs* S ⊆ R*, it holds that:*

$$\Pr[{\mathcal{M}}(b)\in S]\leq e^{\epsilon}\Pr[{\mathcal{M}}(b^{\prime})\in S].$$

Edge LDP can be achieved by perturbing the neighbor list of each node using *randomized response* (RR, Warner, 1965; Qin et al., 2017). RR flips every bit of each neighbor list with probability 1/(e^ϵ + 1), guaranteeing ϵ-edge LDP. However, as large values of ϵ are undesirable because they imply low privacy, and as the neighbor list is often sparse, RR increases the connectivity of the graph. The addition of many spurious edges has negative consequences on the performance of a GNN trained on the perturbed graph (Joshi & Mishra, 2022): thus, we develop different perturbation techniques which offer a better trade-off between privacy and model accuracy for graph data. In this work, we propose an alternative definition of adjacency for edge privacy which imposes additional conditions on the edges RR can act on. To validate our approach, we test it against a LinkTeller attack. LinkTeller (Wu et al., 2022) is an influence-analysis-based attack that recovers private edges from trained GNNs. The attack assumes that the trained GNN exposes an inference API; at test time, a user can provide node features to query the API and obtain predictions for said nodes. For each pair of nodes, the attacker provides perturbed feature vectors for the first node and queries the API with them, evaluating the effect this has on the predictions of the second node. The attacker then guesses the presence of edges between the pairs of nodes which have a high influence on each other. See Wu et al. (2022) for more details on LinkTeller. As LinkTeller cannot deal with randomized models such as GraphSAGE (Wu et al., 2022), we instead attack GraphSAGE using the LSA2 attack described in He et al. (2021). LSA2 leverages node-level information and computes the model *posterior* for every node the attacker possesses, assigning edges to pairs of nodes whose posteriors have a high correlation.

## 3.4 Problem Statement

We address private learning on graphs with LDP, where each node is considered as an individual user: for a graph G = (V, E, X, Y) we consider V, E, X, and Y to be private to individual nodes/users, which only share noisy versions of them with a server. The server uses the information to train a GNN for node classification, learning to predict labels Y in a semi-supervised learning setting.
More explicitly, no entity (neither users/nodes, nor the server) has complete and noise-free information about the graph. The only noiseless information a node can have access to is its own features, label, and edges (Figure 1). We elect this as our setting of choice because of its similarities with real-world scenarios where individual nodes/users may not have knowledge about the entire graph beyond the nodes/users they directly interact with.

![3_image_0.png](3_image_0.png)

Figure 1: In our setting, a node v knows only about its immediate, 1-hop neighbors. In fact, a node may not be aware of its 2-hop neighbors in a real-world scenario.

## 4 Approach

In this section, we introduce our approach, called GraphPrivatizer, which provides edge, feature, and label privacy. Specifically, we focus on investigating the trade-off between edge privacy and GNN performance in the local privacy setting described in Section 3.4. A global notion of differential privacy is not allowed by our local privacy setting, as no central entity is trusted with the entire graph. ϵ-edge LDP (Definition 3.2) too has limitations in our privacy setting, as we will discuss in the next section. Therefore, we provide a new definition of edge privacy for our setting and propose a novel edge-private algorithm based on it. To guarantee label and/or feature privacy, GraphPrivatizer uses existing techniques which will be briefly presented in Section 4.3.

## 4.1 Adjacent Neighborhoods

The definition of ϵ-edge LDP provided in Section 3.3 entails the perturbation of the neighbor list of a node v using RR, where two neighbor lists are adjacent if they differ by one bit. As previously mentioned, this perturbation leads to a great increase in the connectivity of the graph for small, thus desirable, values of ϵ. Moreover, in the local privacy setting described in Section 3.4, a node's neighbor list includes only its immediate neighbors, and there are therefore no graph-wide neighbor lists to perturb with the standard RR approach. To define a notion of privacy which is appropriate for our setting, we choose to consider the set of neighbors of node v: two neighbor sets are said to be adjacent if they differ by a single node.

Definition 4.1 (Adjacent neighborhoods). *Consider a node* v. *Let* b = {v1, . . . , vd} = N(v), b′ = {v′1, . . . , v′d} = N′(v) *be two neighbor sets, with* d = deg(v). *We say* N(v) *and* N′(v) *are adjacent if they differ in only one element; that is, they are adjacent neighborhoods if, without loss of generality,* v1 ≠ v′1 *and* vi = v′i *for* i = 2, . . . , d.

We use Definition 4.1 to introduce an *edge set* notion of LDP, which we refer to as ϵ-edge set LDP.

Definition 4.2 (ϵ-edge set LDP). *Let* ϵ > 0. *A randomized mechanism* M : D → R *satisfies* ϵ*-edge set local differential privacy if, for any possible pair of user's neighbor sets* b, b′ *that are adjacent according to Definition 4.1, and for any possible outputs* S ⊆ R*, it holds that:*

$$\Pr[{\mathcal{M}}(b)\in S]\leq e^{\epsilon}\Pr[{\mathcal{M}}(b^{\prime})\in S].$$

ϵ-edge set LDP can be seen as a relaxation of ϵ-edge LDP (Definition 3.2), as it practically entails a more controlled perturbation of the edges of a node. To be more explicit on the relation between ϵ-edge set LDP (Definition 4.2) and the ϵ-edge LDP (Definition 3.2) notion used by related work such as Joshi & Mishra (2023), note that one can obtain any set perturbation as described in Definition 4.1 by means of two bit flips on the neighbor list of a node. A 2ϵ-edge LDP mechanism is thus also ϵ-edge set LDP. The converse does not hold, as not all pairs of bit flips on the neighbor list of a node correspond to a set perturbation as described in Definition 4.1. Moreover, and in contrast with ϵ-edge LDP, adjacent neighbor sets according to Definition 4.1 have the same number of nodes. Thus, a perturbation of the neighborhoods based on this notion of adjacency preserves, by construction, the degree of nodes. The preservation of node degrees after private perturbations is a desirable property as empirically shown, albeit for the different task of graph classification on unattributed graphs, by Hidano & Murakami (2022).
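As a small, purely illustrative check of Definition 4.1 (ours, not the paper's): two neighbor sets are adjacent exactly when they have the same size and their symmetric difference contains one node from each set.

```python
def are_adjacent_neighborhoods(n1: set, n2: set) -> bool:
    """Definition 4.1: same degree, and the sets differ in exactly one element."""
    return len(n1) == len(n2) and len(n1 ^ n2) == 2  # one node from each set

# {a, b, c} vs {a, b, d}: adjacent; {a, b} vs {a, b, c}: not (degrees differ).
assert are_adjacent_neighborhoods({"a", "b", "c"}, {"a", "b", "d"})
assert not are_adjacent_neighborhoods({"a", "b"}, {"a", "b", "c"})
```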
## 4.2 Edge Privacy

Equipped with the notion of adjacency in Definition 4.1, we develop a perturbation technique that acts on the neighbor set of a node and is edge set LDP in the sense of Definition 4.2. Our approach informally seeks to replace nodes in a neighbor set with other nodes that are *similar* with respect to some similarity measure. The perturbed neighborhood can in this way retain more of the original information content of the neighborhood and provide good performance. Specifically, our proposal randomly replaces a neighbor u of a node v with one of the neighbors of u itself. That is, we perform perturbations considering nodes in the two-hop extended local view (Sun et al., 2019) of v. Two neighborhoods of v are then adjacent according to Definition 4.1 if one can be obtained from the other by means of an edge perturbation within the two-hop extended local view of v which preserves the degree of v. The replacement itself then occurs via RR, which ensures the privacy of the procedure. Conceptually, we can describe our method as consisting of two steps. The neighbor set N(v) of a node v is perturbed by: (i) selecting a set of candidate replacement nodes for the neighbors of v and (ii) randomly picking among the replacement nodes using RR. Algorithm 1 describes the procedure. We provide a summary of the main notation for ease of reading.

Notation:
- δ: threshold on similarity
- α: aggregation coefficient
- xu: feature vector of node u
- xu,α: aggregated feature vector of node u and N(u), eq. (3)
- sα(v, u): cosine similarity between xv,α and xu,α, eq. (4)

Algorithm 1 Perturb neighborhood
Input: Graph G = (V, E, X, Y), node v ∈ V, similarity sα, threshold δ, aggregation coeff. α, strategy g(sα, α, δ)
Output: N′(v): perturbed neighborhood of node v
1: N(v) ← GetNeighbors(v, 1)
2: N′(v) ← ∅
3: for u ∈ N(v) do
4:   u′ = RR(u, QuerySimilar(G, u, sα, α, δ, g))
5:   N′(v) ← N′(v) ∪ u′
6: end for
7: return N′(v)

Consider a node v and assume we want to perturb its neighbor set N(v) by replacing some of its nodes. Nodes u ∈ N(v) are randomly replaced with nodes picked from a set of candidates, where the candidates are selected from the nodes in the two-hop extended local view of v according to a similarity measure s. That is, the set of nodes which are considered as candidates to replace a node u consists of nodes u′ that are similar to u.
Given our local privacy setting, non-parametric and non-learnable similarity measures are a natural choice, as no prior information on the data is available. We provide a comparison of the performance of our approach using the Euclidean distance and the cosine similarity in Appendix B, and find that the cosine similarity is preferable. We therefore measure the similarity between u and a candidate u′ using the cosine similarity s of their feature vectors xu and xu′:

$$s(u,u^{\prime})=\frac{x_{u}\cdot x_{u^{\prime}}}{\|x_{u}\|\,\|x_{u^{\prime}}\|}.$$

In this regard, we devise two strategies to obtain the set of candidates based on similarity, which are described in Algorithm 3 and Algorithm 4. For both strategies, only the nodes u′ which have a similarity score exceeding a threshold δ are selected as the set of candidates. As GNNs aggregate the features of neighbors to produce embeddings, we additionally propose to use such aggregated features to compute similarity scores. We therefore evaluate the similarity of a node u using Aggu = Aggregate({xn : n ∈ N(u)}) instead of xu. We denote with α the hyper-parameter which parameterizes the contribution of aggregated features in the similarity computation. That is, for each node u we compute the aggregated feature vector xu,α of node u and N(u) as

$$x_{u,\alpha}=(1-\alpha)x_{u}+\alpha\mathrm{Agg}_{u}.\qquad(3)$$

The similarity between two nodes u and u′ is then computed as

$$s_{\alpha}(u,u^{\prime})=\frac{x_{u,\alpha}\cdot x_{u^{\prime},\alpha}}{\|x_{u,\alpha}\|\,\|x_{u^{\prime},\alpha}\|}.\qquad(4)$$

To summarize, δ > 0 can be used to filter out dissimilar replacement candidates, while α > 0 is used to introduce aggregate information in the similarity computation. The case α = δ = 0 introduces no thresholds for the application of RR and no aggregation, and will thus be used as our reference to investigate the impact of such thresholds and aggregations. When computing similarities/aggregations between nodes, *noisy* feature vectors obtained according to Section 4.3 are used, ensuring that feature privacy is not violated. Our method is thus consistent with the setting described in Section 3.4, as we assume that the only noiseless information a node has access to consists of its own features, labels, and edges to 1-hop neighbors. Algorithm 2 describes the procedure to select replacement candidates.

Algorithm 2 QuerySimilar
Input: Graph G = (V, E, X, Y), node v ∈ V, similarity sα, threshold δ, aggregation coeff. α, strategy g(sα, α, δ)
Output: S(v): set of nodes similar to v, according to sα
1: S(v) ← ∅
2: S(v) = g(v, sα, α, δ)  # get similar nodes according to strategy g
3: return S(v)
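A minimal sketch of the similarity computation in equations (3) and (4), assuming NumPy feature vectors and a mean Aggregate (the function names are ours, not the paper's):

```python
import numpy as np

def aggregated_features(x, neighborhood, u, alpha):
    """Eq. (3): x_{u,alpha} = (1 - alpha) * x_u + alpha * Agg_u, mean Aggregate."""
    agg_u = np.mean([x[n] for n in neighborhood(u)], axis=0)
    return (1.0 - alpha) * x[u] + alpha * agg_u

def similarity(x, neighborhood, u, v, alpha):
    """Eq. (4): cosine similarity of the aggregated feature vectors of u and v."""
    xu = aggregated_features(x, neighborhood, u, alpha)
    xv = aggregated_features(x, neighborhood, v, alpha)
    return float(xu @ xv / (np.linalg.norm(xu) * np.linalg.norm(xv)))
```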
As anticipated, we utilize two different strategies g to select candidates for a node u based on their similarity. Specifically, we either consider u's most similar neighbor as a candidate for replacement, provided it exceeds the similarity threshold δ (Algorithm 3), or all the neighbors that exceed the similarity threshold δ (Algorithm 4).

Algorithm 3 Most-similar neighbor
Input: node v, its neighborhood N(v), similarity function sα, threshold δ, aggregation coeff. α
Output: mv: v's most similar neighbor
1: mv = arg max_{u∈N(v)} sα(v, u)
2: if sα(v, mv) ≥ δ then
3:   return mv
4: end if
5: return v

Algorithm 4 Threshold-based similar neighbors
Input: node v, its neighborhood N(v), similarity function sα, threshold δ, aggregation coeff. α
Output: T(v): set of neighbors similar to v
1: T(v) ← ∅
2: for u ∈ N(v) do
3:   if sα(v, u) ≥ δ then
4:     T(v) ← T(v) ∪ u
5:   end if
6: end for
7: return T(v)

For each node v, the number of similarity values which need to be computed to perturb its neighborhood depends on the number of nodes in the two-hop extended local view of v and is upper bounded by $\sum_{u\in\mathcal{N}(v)}\deg(u)$, which may thus be computationally expensive for very dense graphs. However, in a practical setting where nodes are distributed among different computing units, the neighbor perturbation is performed locally by the individual nodes. Moreover, the nodes only require the (perturbed, possibly aggregated) features of their neighbors to compute their similarity with them. In terms of communication cost, this amounts to two applications of the Aggregate function (Section 3.1) and is done as a pre-processing step before training. With these considerations, and the observation that real-world datasets have a small average degree (Table 4), we expect an efficient implementation of our approach to scale well to sparse large graphs.

Once a set of candidate nodes has been obtained, the neighbor perturbation is performed in Algorithm 1 with RR. The probabilities of replacement associated with RR differ depending on whether we consider one candidate replacement (strategy in Algorithm 3) or a set of candidate replacements (strategy in Algorithm 4). Denote with Pr[u → u′] the probability that node u gets replaced with node u′. If we consider the most similar replacement candidate only, RR is applied as follows:

$$\Pr[u\to u^{\prime}]={\begin{cases}{\frac{e^{\epsilon}}{e^{\epsilon}+1}}&{{\mathrm{if~}}u^{\prime}=u}\\ {\frac{1}{e^{\epsilon}+1}}&{{\mathrm{otherwise}}}\end{cases}}\qquad(5)$$

If we consider the threshold-based strategy, RR is instead applied as follows, where d − 1 denotes the number of replacement candidates for u:

$$\Pr[u\to u^{\prime}]={\begin{cases}{\frac{e^{\epsilon}}{e^{\epsilon}+d-1}}&{{\mathrm{if~}}u^{\prime}=u}\\ {\frac{1}{e^{\epsilon}+d-1}}&{{\mathrm{otherwise}}}\end{cases}}\qquad(6)$$

To distinguish the two strategies, we refer to our method as GraphPrivatizer-m (GP-m) when using the most-similar strategy, and as GraphPrivatizer-t (GP-t) when using the threshold-based strategy. Regardless of the strategy, our approach is ϵ-edge set private.
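A minimal sketch of this replacement step, covering both Eq. (5) (a single candidate, d = 2) and Eq. (6) (d − 1 candidates); this is our illustration under those assumptions, not the paper's code:

```python
import math
import random

def rr_replace(u, candidates, eps):
    """Randomized response over u and its replacement candidates:
    keep u with probability e^eps / (e^eps + d - 1), where d counts the
    possible outputs (u plus the d - 1 candidates); otherwise return a
    uniformly chosen candidate, each with probability 1 / (e^eps + d - 1).
    With a single candidate this reduces to Eq. (5): e^eps / (e^eps + 1)."""
    if not candidates:
        return u
    d = len(candidates) + 1
    keep_prob = math.exp(eps) / (math.exp(eps) + d - 1)
    return u if random.random() < keep_prob else random.choice(candidates)
```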
![7_image_0.png](7_image_0.png)

Figure 2: Scheme of GraphPrivatizer and how it acts on the features, edges, and labels of a node v. The shaded area highlights our contribution and where the algorithms we propose are utilized.

Theorem 4.3. *Algorithm 1 is* ϵ*-edge set LDP.*

Proof. Algorithm 1 with the neighbor selection described in Algorithm 3 is a randomized mechanism based on RR: we denote it as M. The proof follows from the DP of RR (see, e.g., Qin et al., 2017). Denote with Pr[v → u] the probability that a node v gets replaced by a node u, let p = 1/(e^ϵ + 1) be the probability of node replacement according to RR, and q = 1 − p. Note that ϵ > 0 implies q > p. Let b = {v1, . . . , vd}, b′ = {v′1, . . . , v′d} be two neighbor sets which differ in only one element; assume, without loss of generality, that v1 ≠ v′1. Then, given any output s = {s1, . . . , sd} of M, it holds that:

$${\frac{\operatorname*{Pr}[\mathcal{M}(b)=s]}{\operatorname*{Pr}[\mathcal{M}(b^{\prime})=s]}}={\frac{\operatorname*{Pr}[v_{1}\to s_{1}]\cdots\operatorname*{Pr}[v_{d}\to s_{d}]}{\operatorname*{Pr}[v_{1}^{\prime}\to s_{1}]\cdots\operatorname*{Pr}[v_{d}^{\prime}\to s_{d}]}}={\frac{\operatorname*{Pr}[v_{1}\to s_{1}]}{\operatorname*{Pr}[v_{1}^{\prime}\to s_{1}]}}\leq{\frac{q}{p}}=e^{\epsilon}$$

With analogous reasoning (see, e.g., Wang et al., 2016), one can show that Algorithm 1 with the neighbor selection described in Algorithm 4 is ϵ-edge set private.

## 4.3 Feature and Label Privacy

In addition to edge privacy, GraphPrivatizer also ensures feature and label LDP. That is, for each node, GraphPrivatizer ensures that an attacker cannot confidently infer the feature vector or the label. Specifically, we make use of the Drop algorithm introduced in Sajadmanesh & Gatica-Perez (2021), which enables efficient LDP GNN training with both private labels and node features. Features are privatized with a *multi-bit* mechanism, which allows individual nodes to perturb their features before communicating them. Labels are, instead, privatized using RR: a node's class is randomly replaced with one of the other available classes with the same approach described in Equation (6). We refer the reader to the original publication for more details. We assign a privacy budget ϵx for feature privacy and ϵy for label privacy.

## 4.4 Complete Architecture

Figure 2 summarizes the complete approach: GraphPrivatizer produces perturbed features X′, edges E′, and labels Y′, which are then shared with a server to train a GNN. In particular, Figure 2 schematically shows which components of GraphPrivatizer have access to the unperturbed, private data.

Theorem 4.4. *GraphPrivatizer is* (ϵ + ϵx + ϵy)*-LDP.*

Proof. The private feature vectors are only used by the multi-bit mechanism, the private labels are only used by the RR mechanism for labels, and the private edge information is only used by Algorithm 1. In particular, Algorithm 1 only post-processes the privatized feature vectors and, due to the composition and robustness-to-post-processing properties of DP (Dwork et al., 2014), GraphPrivatizer is thus (ϵ + ϵx + ϵy)-LDP.

## 5 Experiments

In this section, we empirically investigate the performance of GraphPrivatizer (code available at this repository) and assess the trade-off between edge privacy and GNN accuracy in node classification tasks across several datasets. In particular, we (i) compare our approach against the baseline described in Section 5.1, and (ii) analyze the effects of the parameters α and δ on the privacy-accuracy trade-off for GraphPrivatizer. With respect to this trade-off, as discussed in Section 4.2, we use the setting α = δ = 0 as our reference and perform comparisons with α, δ > 0 to evaluate the benefits of higher threshold δ and aggregation α coefficients. We empirically evaluate the edge privacy of our approach against attacks that try to recover the private edges. We use a variety of GNN architectures that include traditional convolutional GNNs, graph attention networks, and transformer networks, and perform experiments on the most commonly used benchmark datasets for node classification, which include citation, co-purchase, and social networks.
We experiment with GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), GT (a graph transformer adapted from Shi et al. (2021)), GATv2 (Brody et al., 2022), and GraphConv (the graph convolution operator introduced in Morris et al. (2019)) on the Cora (Yang et al., 2016), Pubmed (Yang et al., 2016), LastFM (Rozemberczki & Sarkar, 2020), Facebook (Rozemberczki et al., 2021), and Amazon Photo (Shchur et al., 2018) datasets. In what follows, we take the feature privacy budget ϵx and the label privacy budget ϵy to be fixed: the results and discussion will therefore focus on the edge privacy parameter ϵ, which is simply referred to as the privacy budget. We leave additional details on hyper-parameters and on the datasets used to Appendix A.

With regard to the privacy attack, we assume the trained GNN exposes an inference API that an attacker can query. We assume the attacker possesses feature information on pairs of nodes and wishes to determine whether an edge connects each pair of nodes. It should be noted that, according to the data model we adopt and describe in Section 3.4, no entity, either users or server, possesses noise-free information about other nodes' features. For this reason, we assume that the attacker itself may only have access to the noisy feature vector that a node sends to the server for, e.g., training. With these assumptions, we attack all models with LinkTeller (Wu et al., 2022) except for GraphSAGE, which we attack with LSA2 (He et al., 2021; Wu et al., 2022). For LinkTeller, we use the default influence and graph density parameters of 0.001 and 1 (Wu et al., 2022). In all cases, we randomly sample 500 pairs of nodes that are connected in the original, unperturbed graph, as well as 500 pairs of nodes that are not connected in the original, unperturbed graph. The task of the attacker is to decide which of these pairs are connected or not connected, in a binary classification problem. We evaluate the performance of the attacker using the AUC, which we report multiplied by a factor of 10², that is, AUC ∈ [0, 100]. As LinkTeller is a threshold-based binary classifier, we use the AUC as a performance measure to capture all threshold values. A higher AUC denotes a higher ability of the attack to correctly identify the edges of the graph, and thus lower edge privacy, where an increase of 1 AUC point can be interpreted as a 1% increase in the likelihood of correctly identifying edges. It should be noted that the attacker has access to the API of a GNN which was trained on *perturbed* data, while we evaluate the attack AUC with respect to the original *unperturbed* data, which is what should be protected.
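A minimal sketch of this evaluation protocol, where `pair_score` is a hypothetical stand-in for the attack's pairwise score (e.g., LinkTeller influence or LSA2 posterior correlation):

```python
from sklearn.metrics import roc_auc_score

def attack_auc(pair_score, connected_pairs, unconnected_pairs):
    """AUC of the edge/no-edge binary classification, scaled to [0, 100].
    Labels come from the original unperturbed graph; scores come from an
    attack run against the GNN trained on the perturbed graph."""
    pairs = list(connected_pairs) + list(unconnected_pairs)
    labels = [1] * len(connected_pairs) + [0] * len(unconnected_pairs)
    scores = [pair_score(u, v) for (u, v) in pairs]
    return 100.0 * roc_auc_score(labels, scores)
```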
## 5.1 Comparison Against Baseline Approaches

We compare GraphPrivatizer against a baseline approach adapted from existing literature on private node classification. The baseline approach is adapted from Sajadmanesh & Gatica-Perez (2021) and Wu et al. (2022) to our setting. Specifically, for the baseline we consider a slight relaxation of the setting described in Figure 1 and assume that each node has access to the list of nodes in its 2-hop neighborhood. We then apply randomized response with privacy budget ϵ to this list, thus perturbing the 2-hop neighborhood of the node. This approach corresponds to a local version of the *EdgeRand* approach described in Wu et al. (2022), applied to each node. These adaptations are necessary as the original algorithms assume access to the full adjacency matrix, which is not compatible with our local privacy setting. A very similar approach is also used in Joshi & Mishra (2023), who apply RR to 2-hop neighborhoods as well. We refer to this baseline simply as a *randomized response* baseline and denote it with RR. The baseline approach does not, in general, preserve the degree of the nodes and thus the sparsity of the adjacency matrix of the graph. Additionally, no form of threshold or aggregation is considered for the baseline. We highlight that the baseline approach assumes that nodes have access to the list of nodes in the entire 2-hop neighborhood, while GraphPrivatizer operates with the more strict (and private) condition that only immediate neighbors are known. Our experiments show that, despite this more strict setting, GraphPrivatizer outperforms the baseline in terms of accuracy with comparable privacy and thus offers a better privacy-utility trade-off. We additionally compare our results with LPGNN (Sajadmanesh & Gatica-Perez, 2021), which is mirrored by our experiments with ϵ = ∞, where no edge privacy is guaranteed. For all comparisons we set the threshold and aggregation parameters for GraphPrivatizer to δ = 0 and α = 0.5, respectively. With reference to the results in Section 5.2, δ = 0 ensures the best privacy as it imposes no threshold during the neighborhood perturbation, while α = 0.5 corresponds to an intermediate amount of aggregation which provides good accuracy without sacrificing privacy. Refer to Section 5.2 for more details on the effects of α and δ.

We present the results of our comparison between GraphPrivatizer (GP) and the RR baseline in Table 1: our approach is better in terms of accuracy without sacrificing privacy across all datasets. We performed statistical testing using a paired Wilcoxon signed-rank test to compare the accuracy and privacy of our approach against the baseline across all privacy budgets, and report the p-values P in Table 1a. For accuracy, we tested the null hypothesis H0: AccGP − AccRR = 0 and found that GraphPrivatizer has higher accuracy across all datasets with statistical significance. For attack performance, we tested the null hypothesis H0: AUCGP − AUCRR = 0 and found that there is no statistically significant difference in the privacy of our approach and the RR baseline across all datasets. In Table 1b we use GAP to denote the utility gap, i.e., the accuracy loss with respect to the non-edge-private, ϵ = ∞ setting corresponding to LPGNN (Sajadmanesh & Gatica-Perez, 2021). While the non-edge-private setting is expected to provide better accuracy than the edge-private one, we are interested in evaluating how closely GraphPrivatizer and RR can match its accuracy while providing edge privacy. We report results for ϵ = 0.1 and find that GraphPrivatizer narrows the utility gap when compared to the RR baseline across all datasets.
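A sketch of this significance test, assuming paired per-configuration results (the arrays below are hypothetical placeholders, not the paper's data):

```python
from scipy.stats import wilcoxon

# acc_gp[i] and acc_rr[i]: accuracies of GraphPrivatizer and the RR baseline
# under the i-th matched (model, dataset, privacy budget) configuration.
acc_gp = [78.0, 81.1, 82.0, 89.5, 83.3]  # hypothetical values
acc_rr = [74.1, 71.0, 81.2, 84.5, 77.8]  # hypothetical values

# Paired Wilcoxon signed-rank test of H0: Acc_GP - Acc_RR = 0.
stat, p_value = wilcoxon(acc_gp, acc_rr)
print(f"p = {p_value:.4f}")  # small p: reject H0, the accuracy gain is significant
```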
Table 1: Aggregate results across models for GraphPrivatizer (GP) and a randomized response (RR) baseline. For GraphPrivatizer, we report results for α = 0.5 and δ = 0. Sub-table (a) reports the average improvement in accuracy of GraphPrivatizer over the baseline as well as the average difference in attack performance between GraphPrivatizer and the baseline across all models and privacy budgets. We report p-values P for a paired Wilcoxon signed-rank test, testing respectively the null hypotheses H0: AccGP − AccRR = 0 and H0: AUCGP − AUCRR = 0. In sub-table (b), GAP denotes the utility gap, i.e., the accuracy loss with respect to the non-edge-private, ϵ = ∞ setting corresponding to LPGNN (Sajadmanesh & Gatica-Perez, 2021), where we report aggregate results across models as (mean ± standard deviation), for ϵ = 0.1.

(a) Accuracy and AUC results across all privacy budgets.

| dataset | AccGP − AccRR | AUCGP − AUCRR |
|---|---|---|
| Cora | 3.9 (P<.001) | 0.2 (P>.05) |
| LastFM | 10.1 (P<.001) | 0.5 (P>.05) |
| PubMed | 0.8 (P<.001) | 2.7 (P>.05) |
| Facebook | 5.0 (P<.001) | 0.6 (P>.05) |
| Amazon Photo | 5.5 (P<.01) | 3.9 (P>.05) |

(b) Utility gap (GAP) for ϵ = 0.1.

| dataset | GAPGP | GAPRR |
|---|---|---|
| Cora | 6.3 ± 2.2 | 12.4 ± 6.3 |
| LastFM | 6.3 ± 3.2 | 21.7 ± 13.8 |
| PubMed | 1.9 ± 0.2 | 2.7 ± 1.7 |
| Facebook | 5.0 ± 0.6 | 13.2 ± 8.5 |
| Amazon Photo | 3.4 ± 2.7 | 8.0 ± 3.3 |

Across all models and datasets, GraphPrivatizer provides an average 6.6 AUC points improvement in privacy with respect to the non-edge-private LPGNN (Sajadmanesh & Gatica-Perez, 2021) setting while suffering a 4.9% decrease in accuracy, while the RR baseline provides a similar 6.4 AUC points improvement in privacy but with a much less desirable 12.5% decrease in accuracy. Overall, GraphPrivatizer thus achieves a better privacy-utility trade-off than RR. Additional results are in Appendix C.

## 5.2 Results for GraphPrivatizer and Discussion

While in Section 5.1 we show that GraphPrivatizer outperforms the RR baseline approach and improves upon the privacy-utility trade-off, here we investigate our approach in more detail and determine to which degree the use of thresholds and aggregations described in Section 4 is beneficial. We are interested in establishing whether positive threshold and aggregation parameters (i.e., α, δ > 0) consistently provide a better privacy-utility trade-off than the reference case where no aggregation or threshold is considered (i.e., α = δ = 0).

Table 2: Aggregate results for GraphPrivatizer. We denote as AccGP and AUCGP the average accuracy and AUC results for GraphPrivatizer with α, δ > 0, where the average is taken across all positive tested values of α and δ. We denote with ∆Acc the average accuracy difference with the α = δ = 0 case across all tested values of α and δ larger than zero, where positive values of ∆Acc indicate that GraphPrivatizer performs better when some threshold is applied and/or aggregation is performed during the neighbor perturbation. Analogously, we denote with ∆AUC the average AUC × 10² difference, where values close to zero indicate that GraphPrivatizer offers the same protection against privacy attacks when introducing a positive threshold and/or feature aggregation during the neighbor perturbation. We denote the datasets we use as Cora (Cr), LastFM (FM), PubMed (PM), Facebook (Fb), and Amazon Photo (Ph). Results reported as (average values ± standard deviation), for ϵ = 0.1.
GP-t:

| Model | Dataset | AccGP | AUCGP | ∆Acc | ∆AUC |
|---|---|---|---|---|---|
| GAT | Cr | 78.0 ± 1.7 | 92 ± 1 | 7 ± 3 | 3 ± 4 |
| GAT | FM | 81.1 ± 3.9 | 57 ± 2 | 6 ± 3 | 3 ± 2 |
| GAT | PM | 82.0 ± 0.3 | 70 ± 5 | 1.5 ± 0.7 | 4 ± 2 |
| GAT | Fb | 89.5 ± 0.7 | 63 ± 4 | 4 ± 2 | 2 ± 2 |
| GAT | Ph | 83.3 ± 1.7 | 79 ± 4 | 11 ± 5 | 9 ± 6 |
| GCN | Cr | 79.7 ± 1.1 | 72 ± 5 | 4 ± 2 | 8 ± 5 |
| GCN | FM | 85.7 ± 0.7 | 90 ± 2 | 3 ± 1 | 10 ± 5 |
| GCN | PM | 82.0 ± 0.3 | 96 ± 1 | 1.4 ± 0.7 | 10 ± 5 |
| GCN | Fb | 90.4 ± 0.2 | 96 ± 1 | 4 ± 2 | 6 ± 3 |
| GCN | Ph | 81.4 ± 1.9 | 95 ± 1 | 13 ± 7 | 3 ± 3 |
| SAGE | Cr | 79.7 ± 1.0 | 78 ± 2 | 4 ± 2 | 5 ± 3 |
| SAGE | FM | 83.5 ± 2.0 | 85 ± 3 | 3 ± 2 | 1 ± 1 |
| SAGE | PM | 81.7 ± 0.3 | 56 ± 3 | 1.3 ± 0.7 | 0 ± 1 |
| SAGE | Fb | 90.4 ± 0.2 | 74 ± 2 | 4 ± 2 | 4 ± 2 |
| SAGE | Ph | 83.4 ± 0.6 | 85 ± 2 | 13 ± 5 | 17 ± 9 |

GP-m:

| Model | Dataset | AccGP | AUCGP | ∆Acc | ∆AUC |
|---|---|---|---|---|---|
| GAT | Cr | 78.9 ± 1.8 | 73 ± 6 | 3 ± 2 | 0 ± 3 |
| GAT | FM | 82.1 ± 3.0 | 57 ± 1 | 2 ± 2 | 1 ± 1 |
| GAT | PM | 82.1 ± 0.3 | 72 ± 5 | 1.5 ± 0.7 | 0 ± 1 |
| GAT | Fb | 90.3 ± 0.7 | 64 ± 5 | 2 ± 1 | 0 ± 1 |
| GAT | Ph | 85.9 ± 3.8 | 81 ± 5 | 0.5 ± 0.9 | 0 ± 1 |
| GCN | Cr | 80.4 ± 1.1 | 93 ± 0.1 | 4 ± 2 | 5 ± 3 |
| GCN | FM | 85.7 ± 0.4 | 92 ± 2 | 3 ± 1 | 5 ± 3 |
| GCN | PM | 82.0 ± 0.3 | 97 ± 1 | 1.5 ± 0.7 | 2 ± 2 |
| GCN | Fb | 91.3 ± 0.2 | 98 ± 1 | 1.5 ± 0.7 | 2 ± 1 |
| GCN | Ph | 83.1 ± 3.8 | 96 ± 1 | 1.2 ± 1.0 | 2 ± 4 |
| SAGE | Cr | 80.4 ± 1.0 | 77 ± 3 | 4 ± 2 | 3 ± 2 |
| SAGE | FM | 84.2 ± 1.3 | 84 ± 3 | 3 ± 1 | 1 ± 1 |
| SAGE | PM | 81.6 ± 0.3 | 55 ± 3 | 1.5 ± 0.7 | 0 ± 1 |
| SAGE | Fb | 90.9 ± 0.2 | 74 ± 2 | 2 ± 1 | 1 ± 2 |
| SAGE | Ph | 86.1 ± 3.5 | 90 ± 2 | 0.8 ± 0.6 | 8 ± 5 |

We present an aggregate overview of our results in Table 2. We focus here on the case ϵ = 0.1 and on the GCN, GraphSAGE, and GAT models, with additional results available in Appendix C. GraphPrivatizer consistently performs better with α, δ > 0 in terms of accuracy, as it provides a positive accuracy improvement ∆Acc in almost the totality of cases. In more than half of the cases, it also provides performance in defending against privacy attacks which is equivalent to the α = δ = 0 setting, having ∆AUC which overlaps with zero. It should moreover be noted that the ∆AUC results reported in Table 2 are, as mentioned, multiplied by a factor of 10². For this reason, most of the cases where ∆AUC > 0 correspond to only a small decrease in privacy, with the majority of cases reporting ∆AUC ≤ 2, which indicates an increase in the likelihood that the attacker is able to correctly identify edges of at most 2%. That is, it is on average preferable to introduce feature aggregation and/or a threshold to select similar neighbors during the neighbor perturbation. If we compare the different models tested, GCN appears to be more prone to worse performance against privacy attacks. This behavior is consistent with the observations reported in Wu et al. (2022) and may be explained by highlighting how the influence computation that underlies the LinkTeller attack is better suited for graph convolution aggregations, and thus for GCNs (more details in Wu et al. (2022)).
While information propagation between nodes can be exploited for other GNN architectures, the introduction of, e.g., the attention mechanism in GAT/GATv2 can negatively impact the attack performance. Nevertheless, Table 1 and Table 2 show that GraphPrivatizer performs well for different GNN models and datasets. While Table 2 shows that the setting α, δ > 0 is generally preferable when considering averaged results across all values of α and δ, a more detailed analysis shows that, depending on the model and the dataset, specific combinations of α and δ provide the best performance. We report here more detailed results for GP-t with GAT, leaving additional results to Appendix C.

![11_image_0.png](11_image_0.png)

Figure 3: Results for GP-t with GAT, for ϵ = 0.1. Average values across 10 runs. (a) ∆Acc for different values of α and δ; higher is better. (b) ∆AUC for different values of α and δ; smaller is better. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

Figure 3 exemplifies the trade-off between improvements in model accuracy and a decrease in edge privacy for all combinations of α and δ, with the combination α = δ = 0 representing the reference base case. Model accuracy tends to increase with larger values of α and δ, while privacy tends to decrease. In fact, a higher threshold δ implies that fewer nodes may be selected as candidate replacements during perturbation, with δ = 1 requiring that only nodes with a similarity of 1 may be considered. With respect to the aggregation parameter α, values larger than zero tend to provide increased accuracy for small δ. This suggests that, when computing similarities, it is beneficial to do so on aggregate features: in this way, nodes which behave similarly with respect to the output of the Aggregate function (Section 3.1), which is then used to train the GNN, will be favoured as candidates for replacement. Small values of α and δ therefore provide a good trade-off between model accuracy and privacy, with some combinations remarkably offering both improved accuracy and better privacy over the α = δ = 0 case. For instance, the combination α = 0.5, δ = 0.1 for Cora offers a 5.4% improvement in accuracy and a 0.8% improvement in edge privacy. Considering the additional visualizations provided in Appendix C, this behavior generally holds across datasets, for both GP-t and GP-m. Even for GCN (see for instance Table 3b), where average results show an unfavorable ∆AUC > 0, specific combinations of α and δ can provide accuracy improvements over the α = δ = 0 case at a small or null privacy cost.

Table 3: Accuracy and AUC across different values of ϵ. We denote with the subscript GP the average results for GraphPrivatizer across α, δ > 0, while the subscript b denotes the base case α = δ = 0. (a) GCN on LastFM with GP-t, averaged across α and δ. (b) GCN on LastFM with GP-t, values for α = 0.75 and δ = 0.
(a)

| ϵ | ∆Acc | AccGP | Accb | AUCGP | AUCb |
|-----|-----------|------------|--------|---------|--------|
| 0.1 | 3.2 ± 1.0 | 85.7 ± 1.5 | 82.6 | 90 ± 5 | 80 |
| 1 | 1.7 ± 1.0 | 85.9 ± 1.1 | 84.2 | 92 ± 3 | 84 |
| 2 | 1.7 ± 1.0 | 86.2 ± 0.7 | 84.5 | 93 ± 2 | 89 |
| 8 | 0.1 ± 0.0 | 86.6 ± 0.1 | 86.4 | 94 ± 0 | 94 |
| ∞ | 0.0 ± 0.0 | 86.5 ± 0.0 | 86.5 | 94 ± 0 | 94 |

(b)

| ϵ | ∆Acc | AccGP | Accb | AUCGP | AUCb |
|-----|------|-------|------|-------|------|
| 0.1 | 0.4 | 83.0 | 82.6 | 80 | 81 |
| 1 | 1.1 | 84.1 | 83.0 | 85 | 85 |
| 2 | 0.6 | 85.1 | 84.5 | 89 | 89 |
| 8 | 0.2 | 86.6 | 86.4 | 94 | 95 |
| ∞ | 0 | 86.6 | 86.6 | 95 | 95 |

Finally, we consider what the benefits of GraphPrivatizer are across the range of privacy budgets we tested. We focus here on GP-t on LastFM for GCN: Table 2 shows for this case a decrease in protection against privacy attacks corresponding to a relatively large ∆AUC = 10 ± 5 when introducing a positive α or δ. Analyzing this case in more detail, Table 3a shows that, as previously observed, GraphPrivatizer on average entails a 10% increase in the attack performance for this specific experiment for positive thresholds or aggregation coefficients. Nevertheless, AccGP for ϵ = 0.1 is close to that of the non-private model for ϵ = ∞, despite having a smaller AUC. In more detail, when compared to GraphPrivatizer for ϵ = 0.1, the baseline with no aggregation and no threshold achieves a better accuracy only for ϵ = 8, but with a worse AUC. That is, on average, GraphPrivatizer with threshold and feature aggregation provides a better trade-off between accuracy and privacy. Furthermore, the improvement is more evident for specific values of α and δ: for instance, Table 3b shows that, for ϵ = 0.1, α = 0.75 and δ = 0 provide both a 0.4% improvement in accuracy and a 1% improvement in AUC. Similarly, higher privacy budgets still favor GraphPrivatizer with α, δ > 0, which always outperforms the α = δ = 0 case at a smaller or equal AUC.

Considering existing results available in the literature, Sajadmanesh et al. (2023) perform experiments on the Facebook dataset and obtain an accuracy of 76.3 ± 0.21 for a total privacy budget of ϵ = 4. This result is, however, not directly comparable to ours as it considers a different privacy setting (global vs. local privacy) and assumes access to the adjacency matrix. Closer to our approach is that of Joshi & Mishra (2023), who train an edge-LDP GNN where nodes have noiseless access to part of the adjacency matrix: despite the less strict privacy setting, they obtain worse accuracy and report a best accuracy of ≈ 50 on Cora, ≈ 70 on LastFM, and ≈ 70 on PubMed, while not testing the effectiveness of their method against privacy attacks that try to recover the edges. Their approach is, indeed, very similar to the one we adopt in Section 5.1 as a baseline, which we have shown is consistently outperformed by our method and provides a generally worse privacy-utility trade-off across all datasets.

## 6 Conclusions

Motivated by real-world applications, we investigated LDP learning in GNNs. Considering a local privacy setting where the individual nodes of the graph can only have noise-free access to their own features, labels, and edges, we (i) introduced a new definition of LDP for our local privacy setting, (ii) proposed our private algorithm GraphPrivatizer, and (iii) empirically validated its performance on real-world datasets and against privacy attacks. GraphPrivatizer is a fully private algorithm that protects edges, features, and labels.
Specifically, we introduced a new methodology to protect edges by means of controlled perturbations which replace the neighbors of a node with other similar nodes according to some similarity measure. We furthermore evaluated the impact of thresholds on the similarity between the original and the perturbed neighbor nodes and of feature aggregation in computing the similarity scores, finding that a positive threshold and aggregation coefficient provide the best privacy-utility trade-off. Compared to existing approaches which do not provide edge privacy (Sajadmanesh & Gatica-Perez, 2021) or do so while requiring a central entity to have complete information about the edges of the graph (Sajadmanesh et al., 2023; Joshi & Mishra, 2022), GraphPrivatizer provides LDP without requiring a trusted entity to have access to the adjacency matrix of the graph and is applicable to a variety of GNN models.

Future work could focus on addressing some of the limitations of GraphPrivatizer. First, GraphPrivatizer does not consider datasets containing edge features. While it could be possible to, e.g., privatize categorical edge features using randomized response, our local privacy setting would need to be adapted to determine which entity can have noise-free access to the edge features. Additionally, more fine-grained personalized user privacy requirements could be considered. In particular, the introduction of a notion of *trust* between nodes could allow two neighboring nodes that trust each other to exchange less noisy information, thus improving the GNN performance. Finally, while GraphPrivatizer is generally applicable to various types of GNN models, edge perturbations which are tailored to a specific GNN architecture can possibly provide a better privacy-utility trade-off and could therefore be explored in future work.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pp. 308–318, 2016.

Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, and Stephan Günnemann. Training differentially private graph neural networks with random walk sampling. In *Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022*, 2022.

Adrien Benamira, Benjamin Devillers, Etienne Lesot, Ayush K Ray, Manal Saadi, and Fragkiskos D Malliaros. Semi-supervised learning and graph neural networks for fake news detection. In *Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining*, pp. 568–569, 2019.

Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=F72ximsx7C1.

Graham Cormode, Somesh Jha, Tejas Kulkarni, Ninghui Li, Divesh Srivastava, and Tianhao Wang. Privacy at scale: Local differential privacy in practice. In *Proceedings of the 2018 International Conference on Management of Data*, pp. 1655–1658, 2018.

Cynthia Dwork. Differential privacy. In *Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10-14, 2006, Proceedings, Part II 33*, pp. 1–12. Springer, 2006.

Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. *Foundations and Trends® in Theoretical Computer Science*, 9(3–4):211–407, 2014.
Thomas Gaudelet, Ben Day, Arian R Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy BR Hayter, Richard Vickers, Charles Roberts, Jian Tang, et al. Utilizing graph machine learning within drug discovery and development. *Briefings in bioinformatics*, 22(6):bbab159, 2021. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. Stealing links from graph neural networks. In *30th USENIX Security Symposium (USENIX Security 21)*, pp. 2669–2686, 2021. Seira Hidano and Takao Murakami. Degree-preserving randomized response for graph neural networks under local differential privacy. *arXiv preprint arXiv:2202.10209*, 2022. Roshni G Iyer, Wei Wang, and Yizhou Sun. Bi-level attention graph neural networks. In *2021 IEEE* International Conference on Data Mining (ICDM), pp. 1126–1131. IEEE, 2021. Rucha Bhalchandra Joshi and Subhankar Mishra. Edge-level privacy in graph neural networks. In *18th* International Workshop on Mining and Learning with Graphs, 2022. Rucha Bhalchandra Joshi and Subhankar Mishra. Locally and structurally private graph neural networks. Digital Threats: Research and Practice, 2023. Shiva Prasad Kasiviswanathan, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Analyzing graphs with node differential privacy. In Theory of Cryptography: 10th Theory of Cryptography Conference, TCC 2013, Tokyo, Japan, March 3-6, 2013. Proceedings, pp. 457–476. Springer, 2013. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. Jaewoo Lee and Chris Clifton. How Much Is Enough? Choosing ϵ for Differential Privacy. In Xuejia Lai, Jianying Zhou, and Hui Li (eds.), *Information Security*, Lecture Notes in Computer Science, pp. 325–340, Berlin, Heidelberg, 2011. Springer. ISBN 978-3-642-24861-0. doi: 10.1007/978-3-642-24861-0_22. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In *Proceedings* of the AAAI conference on artificial intelligence, volume 33, pp. 4602–4609, 2019. Tamara T Mueller, Johannes C Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, and Georgios Kaissis. Differentially private graph classification with gnns. *arXiv preprint arXiv:2202.02575*, 2022a. Tamara T Mueller, Dmitrii Usynin, Johannes C Paetzold, Daniel Rueckert, and Georgios Kaissis. Sok: Differential privacy on graph-structured data. *arXiv preprint arXiv:2203.09205*, 2022b. Iyiola Emmanuel Olatunji, Thorben Funke, and Megha Khosla. Releasing graph neural networks with differential privacy guarantees. *Transactions on Machine Learning Research*, 2023. Zhan Qin, Ting Yu, Yin Yang, Issa Khalil, Xiaokui Xiao, and Kui Ren. Generating synthetic decentralized social graphs with local differential privacy. In *Proceedings of the 2017 ACM SIGSAC Conference on* Computer and Communications Security, pp. 425–438, 2017. Sofya Raskhodnikova and Adam Smith. Differentially private analysis of graphs. *Encyclopedia of Algorithms*, 2016. Benedek Rozemberczki and Rik Sarkar. Characteristic functions on graphs: Birds of a feather, from statistical descriptors to parametric models. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2020. 
Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. *Journal of* Complex Networks, 9(2):cnab014, 2021. Sina Sajadmanesh and Daniel Gatica-Perez. Locally private graph neural networks. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 2130–2145, 2021. Sina Sajadmanesh, Ali Shahin Shamsabadi, Aurélien Bellet, and Daniel Gatica-Perez. Gap: Differentially private graph neural networks with aggregation perturbation. In USENIX Security 2023-32nd USENIX Security Symposium, 2023. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE transactions on neural networks*, 20(1):61–80, 2008. Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. *arXiv preprint arXiv:1811.05868*, 2018. Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjing Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. In Zhi-Hua Zhou (ed.), *Proceedings* of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pp. 1548–1554. ijcai.org, 2021. doi: 10.24963/IJCAI.2021/214. URL https://doi.org/10.24963/ijcai.2021/214. Haipei Sun, Xiaokui Xiao, Issa Khalil, Yin Yang, Zhan Qin, Hui Wang, and Ting Yu. Analyzing subgraph statistics from extended local views with decentralized differential privacy. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 703–717, 2019. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *International Conference on Learning Representations*, 2018. Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention network. In *The world wide web conference*, pp. 2022–2032, 2019. Yue Wang, Xintao Wu, and Donghui Hu. Using randomized response for differential privacy preserving data collection. In *EDBT/ICDT Workshops*, volume 1558, pp. 0090–6778, 2016. Stanley L Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965. Fan Wu, Yunhui Long, Ce Zhang, and Bo Li. Linkteller: Recovering private edges from graph neural networks via influence analysis. In *2022 IEEE Symposium on Security and Privacy (SP)*, pp. 2005–2024. IEEE, 2022. Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, and Kwok-Yan Lam. Local differential privacy and its applications: A comprehensive survey. *arXiv preprint arXiv:2008.03686*, 2020. Mengmeng Yang, Taolin Guo, Tianqing Zhu, Ivan Tjuawinata, Jun Zhao, and Kwok-Yan Lam. Local differential privacy and its applications: A comprehensive survey. *Computer Standards & Interfaces*, pp. 103827, 2023. Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In *International conference on machine learning*, pp. 40–48. PMLR, 2016. Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: a comprehensive review. *Computational Social Networks*, 6(1):1–23, 2019. Zaixi Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chengqiang Lu, Chuanren Liu, and Enhong Chen. Graphmi: Extracting private graph data from graph neural networks. 
In Zhi-Hua Zhou (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pp. 3749–3755. International Joint Conferences on Artificial Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/516. URL https://doi.org/10.24963/ijcai.2021/516. Main Track.

Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, and Yang Zhang. Inference attacks against graph neural networks. In *31st USENIX Security Symposium (USENIX Security 22)*, pp. 4543–4560, 2022.

## A Experimental Details

For all experiments, we divide our dataset with 50:25:25 train:validation:test set ratios, similarly to Sajadmanesh & Gatica-Perez (2021). We train all the models for 100 epochs, with learning rate 10⁻², weight decay 10⁻³, and dropout rate 0.5. We run experiments with edge privacy budget ϵ ∈ {0.1, 1, 2, 8, ∞}, α ∈ {0, 0.25, 0.5, 0.75, 1.0}, and δ ∈ {0, 0.1, 0.25, 0.5, 1}. We perform 10 runs for each value of ϵ, α, and δ with different seeds and consider average results. We perform label and feature perturbations as described in Section 4.3, with privacy budgets fixed at ϵx = 3 and ϵy = 3. Additionally, we set the KProp hyperparameters in the Drop algorithm to the best values described in Sajadmanesh & Gatica-Perez (2021): we use Kx = 16, Ky = 2 for Cora, Kx = 4, Ky = 2 for Facebook, and Kx = 16, Ky = 0 for LastFM and PubMed. After a grid-search tuning, we use Kx = 4, Ky = 2 for Amazon Photo.

Table 4: Statistics of the datasets. Cora and PubMed Yang et al. (2016) are citation networks where an edge (i, j) exists between two documents if document i cites document j, and features consist of bag-of-words representations of the documents. Classes consist of document categories. Facebook Rozemberczki et al. (2021) is a page-page graph of verified Facebook pages, where nodes correspond to official Facebook pages and edges correspond to mutual likes between pages. Node features are extracted from the site descriptions and the classes correspond to various page categories. LastFM Rozemberczki & Sarkar (2020) is a friendship graph of LastFM users where nodes represent users and edges friendships between users. The classes correspond to the home countries of the users. Amazon Photo (Shchur et al., 2018) is a co-purchase network where nodes represent goods and edges correspond to two goods which have been frequently bought together on Amazon. The features consist of bag-of-words representations of the goods and the classes correspond to product categories.

| Dataset | Classes | Nodes | Edges | Features | Avg. Degree |
|-------------------|-----------|---------|---------|------------|---------------|
| Cora (Cr) | 7 | 2708 | 5278 | 1433 | 3.90 |
| LastFM (FM) | 10 | 7083 | 25814 | 7842 | 7.29 |
| PubMed (PM) | 3 | 19717 | 44324 | 500 | 4.50 |
| Facebook (Fb) | 4 | 22470 | 170912 | 4714 | 15.21 |
| Amazon Photo (Ph) | 8 | 7650 | 238162 | 745 | 31.13 |

## B Similarity Metric

The performance of GraphPrivatizer is affected by the choice of similarity measure in Algorithm 2. Non-parametric and non-learnable similarity measures are desirable in our local privacy setting, as they do not require access to data. We compare the cosine similarity with the Euclidean distance, and find that the cosine similarity is preferable. Refer to Section 5 for details on how privacy attacks are performed.
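For concreteness, the sketch below (our own illustration; the mean-based neighborhood aggregation is an assumption standing in for the Aggregate function of Section 3.1, and the function names are ours) shows the cosine similarity computed on features blended with a neighborhood aggregate through the coefficient α:

```python
import numpy as np

def blended_features(x, neighbors, alpha):
    # x: (n, d) feature matrix; neighbors: list of neighbor-index lists.
    # Blend each node's own features with the mean of its neighbors'
    # features; alpha = 0 uses the raw features only. The mean here is
    # an assumed stand-in for the Aggregate function of Section 3.1.
    agg = np.stack([x[nb].mean(axis=0) if len(nb) > 0 else x[i]
                    for i, nb in enumerate(neighbors)])
    return (1.0 - alpha) * x + alpha * agg

def cosine_similarity(a, b, eps=1e-12):
    # Similarity between two (blended) feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```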
| Dataset | ∆Acc | ∆AUC |
|--------------|-----------|---------|
| Cora | 4.8 ± 1.2 | 2 ± 3 |
| Facebook | 1.6 ± 0.5 | 1 ± 1 |
| LastFM | 2.4 ± 1.4 | 1 ± 1 |
| Amazon Photo | −0.9 ± 5 | −1 ± 10 |
| PubMed | 2.8 ± 0.3 | 3 ± 7 |

Table 5: We denote with ∆Acc the average accuracy difference between results obtained with the cosine similarity and with the Euclidean distance. Analogously, we denote with ∆AUC the average AUC × 10² difference. Results reported as (average ± standard deviation), for ϵ = 0.1 across 5 runs. The cosine similarity is preferable as it provides better accuracy (∆Acc > 0) with comparable privacy (∆AUC ≈ 0).

## C Additional Results

## C.1 Comparison Against the RR Baseline

Table 6: Results on all datasets for GraphPrivatizer and the RR baseline discussed in Section 5.1. The subscript ∞ denotes the non-edge-private results.

| Dataset | Model | ∆Acc | ∆AUC | AccGP | AccRR | AUCGP | AUCRR | Acc∞ | AUC∞ |
|---------|-------|------|------|-------|-------|-------|-------|------|------|
| Cora | GAT | 3.2 | −1.1 | 71.9 ± 2.6 | 68.8 ± 5.4 | 65.0 ± 4.8 | 66.0 ± 7.9 | 80.0 ± 1.7 | 78.5 ± 6.9 |
| Cora | GATv2 | 1.2 | 1.2 | 71.0 ± 2.3 | 69.8 ± 2.3 | 57.0 ± 2.8 | 55.8 ± 3.1 | 79.7 ± 2.0 | 61.6 ± 7.4 |
| Cora | GCN | 3.1 | −8.8 | 75.7 ± 1.0 | 72.6 ± 2.4 | 83.4 ± 0.9 | 92.2 ± 2.0 | 81.7 ± 1.0 | 95.7 ± 0.9 |
| Cora | GConv | 22.2 | 5.8 | 71.1 ± 2.9 | 48.9 ± 6.4 | 56.6 ± 1.3 | 50.8 ± 0.5 | 74.0 ± 4.9 | 53.9 ± 1.4 |
| Cora | GT | 3.4 | 4.1 | 71.6 ± 2.9 | 68.3 ± 2.6 | 58.0 ± 0.9 | 53.8 ± 0.8 | 79.2 ± 2.8 | 62.3 ± 1.1 |
| Cora | SAGE | 3.4 | −5.1 | 77.0 ± 1.0 | 73.6 ± 2.1 | 72.0 ± 3.3 | 77.1 ± 2.8 | 81.8 ± 0.7 | 79.3 ± 3.1 |
| Facebook | GAT | 4.2 | 5.1 | 85.7 ± 0.9 | 81.5 ± 0.5 | 60.8 ± 3.7 | 55.7 ± 3.3 | 90.9 ± 0.6 | 64.6 ± 3.7 |
| Facebook | GATv2 | 3.4 | −0.7 | 83.0 ± 1.0 | 79.5 ± 1.1 | 53.7 ± 1.7 | 54.4 ± 3.5 | 88.6 ± 2.1 | 61.3 ± 5.9 |
| Facebook | GCN | 5.9 | −1.6 | 86.5 ± 0.3 | 80.6 ± 0.4 | 91.3 ± 0.9 | 92.9 ± 1.1 | 91.9 ± 0.2 | 99.1 ± 0.4 |
| Facebook | GConv | 26.7 | 2.3 | 84.6 ± 0.8 | 57.9 ± 8.8 | 52.4 ± 0.9 | 50.1 ± 0.1 | 88.5 ± 0.9 | 51.1 ± 0.5 |
| Facebook | GT | 3.9 | 1.5 | 86.7 ± 0.5 | 82.8 ± 0.4 | 53.8 ± 0.7 | 52.2 ± 0.7 | 91.7 ± 0.1 | 60.3 ± 1.7 |
| Facebook | SAGE | 5.3 | −0.8 | 87.0 ± 0.2 | 81.7 ± 0.4 | 69.7 ± 2.0 | 70.5 ± 3.5 | 91.9 ± 0.2 | 74.7 ± 2.3 |
| LastFM | GAT | 24.2 | 2.9 | 75.7 ± 3.2 | 51.5 ± 14.8 | 54.7 ± 1.2 | 51.8 ± 1.2 | 83.6 ± 3.3 | 57.0 ± 2.2 |
| LastFM | GATv2 | 9.8 | 1.5 | 68.9 ± 8.9 | 59.1 ± 18.6 | 52.5 ± 0.7 | 50.9 ± 0.7 | 77.3 ± 8.3 | 55.4 ± 1.2 |
| LastFM | GCN | 3.3 | −5.9 | 82.2 ± 1.9 | 78.9 ± 2.3 | 81.7 ± 2.0 | 87.6 ± 3.1 | 86.5 ± 0.3 | 94.8 ± 1.9 |
| LastFM | GConv | 42.3 | 2.8 | 65.2 ± 6.9 | 22.9 ± 12.2 | 52.9 ± 1.7 | 50.1 ± 0.2 | 66.6 ± 10.8 | 51.8 ± 1.6 |
| LastFM | GT | 8.1 | 3.4 | 69.0 ± 4.4 | 60.9 ± 9.1 | 54.8 ± 0.7 | 51.5 ± 0.3 | 79.1 ± 5.1 | 59.3 ± 1.6 |
| LastFM | SAGE | 4.8 | −1.2 | 80.5 ± 1.7 | 75.7 ± 2.4 | 82.8 ± 3.5 | 84.0 ± 5.9 | 86.0 ± 0.6 | 86.1 ± 2.4 |
| PubMed | GAT | 0.5 | 4.3 | 80.4 ± 0.4 | 79.9 ± 0.4 | 68.7 ± 3.8 | 64.5 ± 5.8 | 82.5 ± 0.2 | 72.8 ± 5.6 |
| PubMed | GATv2 | 0.2 | 2.8 | 80.7 ± 0.4 | 80.5 ± 0.3 | 59.1 ± 2.5 | 56.3 ± 3.2 | 82.6 ± 0.2 | 66.0 ± 4.7 |
| PubMed | GCN | 0.2 | −12.3 | 80.6 ± 0.5 | 80.3 ± 0.4 | 85.7 ± 1.0 | 98.1 ± 0.5 | 82.4 ± 0.3 | 99.2 ± 0.7 |
| PubMed | GConv | 3.9 | 20.5 | 79.8 ± 0.6 | 75.9 ± 1.3 | 71.3 ± 3.9 | 50.7 ± 0.4 | 82.0 ± 0.5 | 75.0 ± 2.2 |
| PubMed | GT | −0.7 | 7.7 | 80.0 ± 0.6 | 80.7 ± 0.4 | 60.5 ± 0.5 | 52.8 ± 0.6 | 82.2 ± 0.3 | 64.3 ± 1.2 |
| PubMed | SAGE | 0.1 | −1.5 | 80.3 ± 0.3 | 80.3 ± 0.5 | 56.9 ± 3.1 | 58.4 ± 3.8 | 82.0 ± 0.2 | 56.6 ± 1.7 |
| Amazon Photo | GAT | 2.5 | 11.6 | 82.7 ± 3.5 | 80.2 ± 1.1 | 72.9 ± 3.4 | 61.3 ± 4.0 | 86.8 ± 0.6 | 84.6 ± 3.1 |
| Amazon Photo | GATv2 | 9.0 | 8.8 | 83.5 ± 0.5 | 74.4 ± 4.2 | 63.5 ± 1.7 | 54.7 ± 1.6 | 85.0 ± 0.4 | 76.8 ± 5.1 |
| Amazon Photo | GCN | 8.1 | 3.6 | 80.4 ± 6.5 | 72.3 ± 1.7 | 95.1 ± 0.7 | 91.4 ± 0.2 | 80.8 ± 9.1 | 96.3 ± 1.7 |
| Amazon Photo | GConv | −3.6 | −0.7 | 55.7 ± 10.1 | 59.3 ± 1.3 | 52.7 ± 1.6 | 53.4 ± 2.1 | 61.3 ± 18.9 | 52.2 ± 1.3 |
| Amazon Photo | GT | 7.3 | 12.1 | 84.4 ± 0.6 | 77.1 ± 2.4 | 68.5 ± 1.0 | 56.4 ± 3.7 | 86.7 ± 0.3 | 87.4 ± 1.8 |
| Amazon Photo | SAGE | 9.5 | −11.8 | 85.0 ± 0.4 | 75.5 ± 1.8 | 78.9 ± 5.1 | 90.6 ± 0.7 | 86.4 ± 0.2 | 93.8 ± 1.0 |
## C.2 Additional Results for the GT, GATv2, and GConv Models

Table 7: Aggregate results for GraphPrivatizer for all models. We denote as AccGP and AUCGP the average accuracy and AUC results for GraphPrivatizer with α, δ > 0, where the average is taken across all positive tested values of α and δ. We denote with ∆Acc the average accuracy difference with the α = δ = 0 case across all tested values of α and δ larger than zero, where positive values of ∆Acc indicate that GraphPrivatizer performs better when some thresholds are applied and/or aggregation is performed during the neighbor perturbation. Analogously, we denote with ∆AUC the average AUC × 10² difference, where values close to zero indicate that GraphPrivatizer offers the same protection against privacy attacks when introducing a positive threshold and/or feature aggregation during the neighbor perturbation. We denote the datasets we use as Cora (Cr), LastFM (FM), PubMed (PM), Facebook (Fb), and Amazon Photo (Ph). Results reported as (average values ± standard deviation), for ϵ = 0.1.

| Model | Dataset | AccGP (GP-t) | AUCGP (GP-t) | ∆Acc (GP-t) | ∆AUC (GP-t) | AccGP (GP-m) | AUCGP (GP-m) | ∆Acc (GP-m) | ∆AUC (GP-m) |
|-------|---------|--------------|--------------|-------------|-------------|--------------|--------------|-------------|-------------|
| GAT | Cr | 78.0 ± 1.7 | 92 ± 1 | 7 ± 3 | 3 ± 4 | 78.9 ± 1.8 | 73 ± 6 | 3 ± 2 | 0 ± 3 |
| GAT | FM | 81.1 ± 3.9 | 57 ± 2 | 6 ± 3 | 3 ± 2 | 82.1 ± 3.0 | 57 ± 1 | 2 ± 2 | 1 ± 1 |
| GAT | PM | 82.0 ± 0.3 | 70 ± 5 | 1.5 ± 0.7 | 4 ± 2 | 82.1 ± 0.3 | 72 ± 5 | 1.5 ± 0.7 | 0 ± 1 |
| GAT | Fb | 89.5 ± 0.7 | 63 ± 4 | 4 ± 2 | 2 ± 2 | 90.3 ± 0.7 | 64 ± 5 | 2 ± 1 | 0 ± 1 |
| GAT | Ph | 83.3 ± 1.7 | 79 ± 4 | 11 ± 5 | 9 ± 6 | 85.9 ± 3.8 | 81 ± 5 | 0.5 ± 0.9 | 0 ± 1 |
| GCN | Cr | 79.7 ± 1.1 | 72 ± 5 | 4 ± 2 | 8 ± 5 | 80.4 ± 1.1 | 93 ± 0.1 | 4 ± 2 | 5 ± 3 |
| GCN | FM | 85.7 ± 0.7 | 90 ± 2 | 3 ± 1 | 10 ± 5 | 85.7 ± 0.4 | 92 ± 2 | 3 ± 1 | 5 ± 3 |
| GCN | PM | 82.0 ± 0.3 | 96 ± 1 | 1.4 ± 0.7 | 10 ± 5 | 82.0 ± 0.3 | 97 ± 1 | 1.5 ± 0.7 | 2 ± 2 |
| GCN | Fb | 90.4 ± 0.2 | 96 ± 1 | 4 ± 2 | 6 ± 3 | 91.3 ± 0.2 | 98 ± 1 | 1.5 ± 0.7 | 2 ± 1 |
| GCN | Ph | 81.4 ± 1.9 | 95 ± 1 | 13 ± 7 | 3 ± 3 | 83.1 ± 3.8 | 96 ± 1 | 1.2 ± 1.0 | 2 ± 4 |
| SAGE | Cr | 79.7 ± 1.0 | 78 ± 2 | 4 ± 2 | 5 ± 3 | 80.4 ± 1.0 | 77 ± 3 | 4 ± 2 | 3 ± 2 |
| SAGE | FM | 83.5 ± 2.0 | 85 ± 3 | 3 ± 2 | 1 ± 1 | 84.2 ± 1.3 | 84 ± 3 | 3 ± 1 | 1 ± 1 |
| SAGE | PM | 81.7 ± 0.3 | 56 ± 3 | 1.3 ± 0.7 | 0 ± 1 | 81.6 ± 0.3 | 55 ± 3 | 1.5 ± 0.7 | 0 ± 1 |
| SAGE | Fb | 90.4 ± 0.2 | 74 ± 2 | 4 ± 2 | 4 ± 2 | 90.9 ± 0.2 | 74 ± 2 | 2 ± 1 | 1 ± 2 |
| SAGE | Ph | 83.4 ± 0.6 | 85 ± 2 | 13 ± 5 | 17 ± 9 | 86.1 ± 3.5 | 90 ± 2 | 0.8 ± 0.6 | 8 ± 5 |
| GT | Cr | 78.1 ± 1.7 | 61 ± 2 | 6 ± 3 | 3 ± 2 | 78.4 ± 1.4 | 62 ± 1 | 4 ± 2 | 1 ± 1 |
| GT | FM | 78.6 ± 1.1 | 54 ± 1 | 7 ± 4 | 3 ± 2 | 79.6 ± 2.9 | 59 ± 1 | 3 ± 1 | 1 ± 1 |
| GT | PM | 81.7 ± 0.3 | 64 ± 1 | 2 ± 1 | 3 ± 1 | 81.7 ± 0.3 | 64 ± 1 | 2 ± 1 | 1 ± 1 |
| GT | Fb | 90.5 ± 0.3 | 59 ± 1 | 4 ± 2 | 5 ± 2 | 90.1 ± 0.3 | 59 ± 1 | 3 ± 1 | 1 ± 1 |
| GT | Ph | 84.3 ± 1.3 | 78 ± 4 | 7 ± 4 | 13 ± 8 | 86.2 ± 3 | 81 ± 4 | 1.0 ± 0.7 | 5 ± 7 |
| GATv2 | Cr | 77.3 ± 2.2 | 60 ± 3 | 7 ± 3 | 4 ± 2 | 77.8 ± 1.9 | 60 ± 5 | 5 ± 2 | 2 ± 2 |
| GATv2 | FM | 73.8 ± 1.1 | 54 ± 1 | 4 ± 3 | 2 ± 1 | 76.2 ± 8.2 | 55 ± 1 | 1 ± 2 | 0 ± 1 |
| GATv2 | PM | 82.2 ± 0.4 | 64 ± 4 | 1.2 ± 0.7 | 3 ± 2 | 82.1 ± 0.3 | 65 ± 4 | 1.4 ± 0.7 | 4 ± 1 |
| GATv2 | Fb | 87.3 ± 1.5 | 58 ± 4 | 4 ± 1 | 3 ± 2 | 87.8 ± 1.7 | 59 ± 4 | 3 ± 1 | 2 ± 2 |
| GATv2 | Ph | 81.1 ± 1.6 | 68 ± 5 | 11 ± 6 | 7 ± 7 | 82.3 ± 2.6 | 70 ± 5 | 2 ± 2 | 2 ± 5 |
| GConv | Cr | 75.4 ± 3.6 | 55 ± 2 | 4 ± 2 | −2 ± 1 | 75.5 ± 3.1 | 54 ± 1 | 1 ± 1 | −2 ± 1 |
| GConv | FM | 69.9 ± 5.7 | 52 ± 1 | 3 ± 2 | −1 ± 1 | 70.0 ± 6.2 | 53 ± 2 | 1 ± 2 | −1 ± 1 |
| GConv | PM | 81.4 ± 0.4 | 74 ± 3 | 1.7 ± 0.7 | 3 ± 2 | 81.5 ± 0.4 | 75 ± 3 | 1.2 ± 0.7 | −1 ± 1 |
| GConv | Fb | 87.7 ± 0.8 | 51 ± 1 | 4 ± 2 | −1 ± 1 | 88.1 ± 0.8 | 51 ± 1 | 3 ± 1 | 0 ± 0 |
| GConv | Ph | 64.3 ± 7.9 | 51 ± 1 | 3 ± 6 | 0 ± 1 | 71.1 ± 9.9 | 51 ± 2 | 5 ± 7 | 0 ± 1 |
## C.3 Results for Different Privacy Budgets

We denote with the subscript GP the average results for GraphPrivatizer across α, δ > 0, while the subscript b denotes the baseline α = δ = 0.
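To make this reporting convention concrete, the following minimal sketch (ours; the column names are illustrative and the actual analysis code may differ) shows one way the AccGP, Accb, ∆Acc, AUCGP, and AUCb entries of the tables below could be computed from per-run results:

```python
import pandas as pd

# Illustrative schema: one row per run, with columns
# model, dataset, eps, alpha, delta, seed, acc, auc.
def aggregate(df: pd.DataFrame, model: str, dataset: str, eps: float):
    runs = df.query("model == @model and dataset == @dataset and eps == @eps")
    base = runs.query("alpha == 0 and delta == 0")        # subscript b
    gp = runs.query("alpha > 0 and delta > 0")            # subscript GP
    acc_gp, acc_b = gp["acc"].mean(), base["acc"].mean()
    auc_gp, auc_b = gp["auc"].mean(), base["auc"].mean()
    return acc_gp - acc_b, acc_gp, acc_b, auc_gp, auc_b   # ∆Acc first
```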
| Model | ϵ | ∆Acc (GP-t) | AccGP (GP-t) | Accb (GP-t) | AUCGP (GP-t) | AUCb (GP-t) | ∆Acc (GP-m) | AccGP (GP-m) | Accb (GP-m) | AUCGP (GP-m) | AUCb (GP-m) |
|-------|-----|-------------|--------------|-------------|--------------|-------------|-------------|--------------|-------------|--------------|-------------|
| GAT | 0.1 | 8.2 ± 1.0 | 79.3 ± 1.4 | 71.1 | 74.2 ± 2.5 | 69.2 | 3.4 ± 2.0 | 79.8 ± 1.6 | 76.4 | 74.8 ± 2.7 | 73.2 |
| GAT | 1.0 | 4.2 ± 1.0 | 79.7 ± 1.0 | 75.6 | 73.6 ± 3.1 | 69.1 | 2.0 ± 1.0 | 80.3 ± 0.7 | 78.3 | 73.8 ± 2.6 | 70.6 |
| GAT | 2.0 | 1.9 ± 1.0 | 79.8 ± 0.6 | 77.8 | 74.2 ± 2.2 | 72.5 | 0.6 ± 0.0 | 80.7 ± 0.4 | 80.1 | 74.9 ± 2.1 | 74.9 |
| GAT | 8.0 | −0.6 ± 0.0 | 80.0 ± 0.4 | 80.6 | 73.7 ± 2.8 | 78.7 | −0.4 ± 0.0 | 80.9 ± 0.4 | 81.3 | 75.5 ± 1.8 | 73.6 |
| GAT | ∞ | −0.1 ± 0.0 | 80.0 ± 0.3 | 80.2 | 77.3 ± 1.4 | 77.5 | 0.0 ± 0.0 | 81.0 ± 0.2 | 81.0 | 76.4 ± 1.3 | 73.3 |
| GATv2 | 0.1 | 8.2 ± 1.0 | 78.8 ± 0.8 | 70.6 | 61.1 ± 1.1 | 56.6 | 5.8 ± 1.0 | 79.0 ± 1.0 | 73.2 | 61.3 ± 0.7 | 58.6 |
| GATv2 | 1.0 | 4.6 ± 0.0 | 79.2 ± 0.3 | 74.5 | 61.6 ± 0.8 | 60.3 | 2.0 ± 1.0 | 79.1 ± 0.5 | 77.1 | 61.5 ± 0.8 | 60.1 |
| GATv2 | 2.0 | 2.3 ± 1.0 | 79.2 ± 0.5 | 77.0 | 62.3 ± 1.3 | 59.2 | 2.0 ± 1.0 | 79.3 ± 0.7 | 77.4 | 62.2 ± 1.0 | 61.1 |
| GATv2 | 8.0 | 0.5 ± 0.0 | 79.1 ± 0.5 | 78.6 | 62.0 ± 0.9 | 61.3 | 0.1 ± 1.0 | 79.2 ± 0.6 | 79.1 | 60.9 ± 1.2 | 62.2 |
| GATv2 | ∞ | −0.2 ± 0.0 | 79.9 ± 0.3 | 80.0 | 62.3 ± 0.6 | 62.6 | −0.0 ± 0.0 | 79.8 ± 0.5 | 79.8 | 62.0 ± 0.5 | 62.0 |
| GCN | 0.1 | 5.4 ± 1.0 | 80.8 ± 1.1 | 75.4 | 94.8 ± 2.4 | 83.9 | 4.9 ± 1.0 | 81.3 ± 1.1 | 76.4 | 95.0 ± 1.7 | 88.8 |
| GCN | 1.0 | 4.0 ± 1.0 | 81.0 ± 0.5 | 76.9 | 95.1 ± 1.3 | 88.4 | 1.3 ± 0.0 | 81.8 ± 0.4 | 80.4 | 95.5 ± 0.6 | 93.7 |
| GCN | 2.0 | 1.4 ± 0.0 | 81.2 ± 0.4 | 79.8 | 95.6 ± 0.4 | 92.6 | 0.9 ± 0.0 | 82.0 ± 0.4 | 81.1 | 95.8 ± 0.4 | 95.2 |
| GCN | 8.0 | 0.4 ± 0.0 | 81.3 ± 0.2 | 80.9 | 95.7 ± 0.3 | 95.9 | −0.4 ± 0.0 | 81.6 ± 0.3 | 81.9 | 95.9 ± 0.3 | 95.9 |
| GCN | ∞ | 0.0 ± 0.0 | 81.7 ± 0.0 | 81.7 | 95.7 ± 0.0 | 95.7 | 0.0 ± 0.0 | 81.8 ± 0.0 | 81.8 | 96.2 ± 0.0 | 96.2 |
| GConv | 0.1 | 5.5 ± 1.0 | 76.5 ± 1.0 | 71.0 | 54.6 ± 0.3 | 56.6 | 1.4 ± 1.0 | 75.6 ± 1.1 | 74.1 | 54.3 ± 0.3 | 57.0 |
| GConv | 1.0 | 4.6 ± 1.0 | 75.8 ± 0.5 | 71.2 | 54.3 ± 0.4 | 56.7 | 1.0 ± 1.0 | 75.2 ± 0.7 | 74.3 | 54.6 ± 0.6 | 54.9 |
| GConv | 2.0 | 0.7 ± 1.0 | 77.0 ± 1.1 | 76.4 | 54.3 ± 0.3 | 54.7 | −1.4 ± 1.0 | 74.8 ± 1.0 | 76.2 | 54.6 ± 0.5 | 54.0 |
| GConv | 8.0 | 1.1 ± 0.0 | 76.4 ± 0.4 | 75.3 | 54.2 ± 0.4 | 54.5 | −0.8 ± 0.0 | 74.8 ± 0.5 | 75.6 | 54.1 ± 0.3 | 53.6 |
| GConv | ∞ | 0.0 ± 0.0 | 74.0 ± 0.0 | 74.0 | 53.9 ± 0.0 | 53.9 | 0.0 ± 0.0 | 74.0 ± 0.0 | 74.0 | 53.9 ± 0.0 | 53.9 |
| GT | 0.1 | 7.5 ± 0.0 | 79.5 ± 0.5 | 72.1 | 62.4 ± 0.6 | 58.3 | 4.8 ± 0.0 | 79.4 ± 0.4 | 74.6 | 62.2 ± 0.5 | 60.4 |
| GT | 1.0 | 5.3 ± 0.0 | 79.6 ± 0.5 | 74.3 | 62.3 ± 0.6 | 59.1 | 1.0 ± 0.0 | 79.2 ± 0.5 | 78.2 | 62.1 ± 0.4 | 61.0 |
| GT | 2.0 | 2.7 ± 0.0 | 79.6 ± 0.4 | 76.9 | 62.1 ± 0.4 | 60.8 | 0.4 ± 0.0 | 79.4 ± 0.3 | 79.0 | 62.6 ± 0.6 | 62.0 |
| GT | 8.0 | −0.6 ± 0.0 | 79.7 ± 0.5 | 80.2 | 62.4 ± 0.5 | 62.6 | 0.1 ± 0.0 | 79.7 ± 0.4 | 79.6 | 62.5 ± 0.6 | 62.5 |
| GT | ∞ | −0.4 ± 1.0 | 79.4 ± 0.6 | 79.7 | 62.4 ± 0.5 | 61.9 | 0.8 ± 1.0 | 79.5 ± 0.6 | 78.7 | 62.2 ± 0.5 | 62.5 |
| SAGE | 0.1 | 5.3 ± 1.0 | 80.8 ± 1.1 | 75.5 | 79.9 ± 2.1 | 72.6 | 4.5 ± 1.0 | 81.3 ± 1.1 | 76.8 | 78.6 ± 1.2 | 74.7 |
| SAGE | 1.0 | 3.4 ± 1.0 | 81.1 ± 0.5 | 77.7 | 79.5 ± 0.5 | 75.9 | 1.5 ± 0.0 | 81.5 ± 0.4 | 80.0 | 79.3 ± 0.4 | 78.1 |
| SAGE | 2.0 | 1.8 ± 0.0 | 81.3 ± 0.3 | 79.5 | 79.7 ± 0.6 | 76.5 | 0.4 ± 0.0 | 81.6 ± 0.2 | 81.2 | 79.6 ± 0.5 | 78.5 |
| SAGE | 8.0 | 0.4 ± 0.0 | 81.4 ± 0.1 | 81.1 | 80.0 ± 0.4 | 80.1 | −0.1 ± 0.0 | 81.5 ± 0.2 | 81.7 | 79.6 ± 0.5 | 79.8 |
| SAGE | ∞ | 0.0 ± 0.0 | 81.8 ± 0.0 | 81.8 | 79.3 ± 0.0 | 79.3 | 0.0 ± 0.0 | 81.8 ± 0.0 | 81.8 | 79.3 ± 0.0 | 79.3 |
Table 8: Results on Cora.

| Model | ϵ | ∆Acc (GP-t) | AccGP (GP-t) | Accb (GP-t) | AUCGP (GP-t) | AUCb (GP-t) | ∆Acc (GP-m) | AccGP (GP-m) | Accb (GP-m) | AUCGP (GP-m) | AUCb (GP-m) |
|-------|-----|-------------|--------------|-------------|--------------|-------------|-------------|--------------|-------------|--------------|-------------|
| GAT | 0.1 | 7.4 ± 1.0 | 82.5 ± 1.0 | 75.1 | 57.9 ± 1.0 | 54.4 | 3.2 ± 1.0 | 83.1 ± 0.8 | 79.9 | 57.4 ± 0.5 | 56.7 |
| GAT | 1.0 | 2.8 ± 1.0 | 83.3 ± 0.9 | 80.5 | 57.6 ± 0.5 | 55.3 | 2.4 ± 1.0 | 82.8 ± 0.8 | 80.4 | 57.6 ± 0.4 | 57.0 |
| GAT | 2.0 | 1.2 ± 1.0 | 83.6 ± 0.7 | 82.3 | 57.7 ± 0.5 | 56.3 | −0.8 ± 1.0 | 83.3 ± 1.1 | 84.1 | 57.4 ± 0.6 | 58.0 |
| GAT | 8.0 | −1.9 ± 1.0 | 82.8 ± 0.9 | 84.7 | 57.8 ± 0.4 | 58.6 | 2.6 ± 1.0 | 84.1 ± 0.8 | 81.5 | 58.1 ± 0.6 | 56.9 |
| GAT | ∞ | −0.5 ± 1.0 | 82.9 ± 0.8 | 83.5 | 57.5 ± 0.3 | 57.8 | 0.4 ± 1.0 | 83.5 ± 0.6 | 83.1 | 57.5 ± 0.3 | 57.6 |
| GATv2 | 0.1 | 4.9 ± 2.0 | 74.6 ± 1.9 | 69.8 | 54.8 ± 0.5 | 52.6 | 1.3 ± 2.0 | 77.0 ± 1.8 | 75.7 | 55.1 ± 0.4 | 54.8 |
| GATv2 | 1.0 | 6.2 ± 2.0 | 77.1 ± 1.9 | 70.9 | 55.1 ± 0.3 | 52.7 | −1.6 ± 2.0 | 75.6 ± 1.5 | 77.2 | 55.0 ± 0.5 | 54.8 |
| GATv2 | 2.0 | 6.0 ± 4.0 | 77.9 ± 3.5 | 71.9 | 54.9 ± 0.4 | 53.2 | 0.4 ± 1.0 | 76.9 ± 1.5 | 76.5 | 55.1 ± 0.4 | 55.1 |
| GATv2 | 8.0 | −1.7 ± 2.0 | 75.6 ± 1.8 | 77.3 | 54.8 ± 0.5 | 54.6 | −1.7 ± 1.0 | 76.9 ± 1.1 | 78.6 | 55.2 ± 0.3 | 54.6 |
| GATv2 | ∞ | 0.2 ± 0.0 | 77.1 ± 0.4 | 76.9 | 55.5 ± 0.2 | 55.7 | −0.2 ± 0.0 | 77.1 ± 0.4 | 77.3 | 55.5 ± 0.1 | 55.3 |
| GCN | 0.1 | 3.9 ± 0.0 | 86.5 ± 0.4 | 82.6 | 93.5 ± 2.0 | 80.8 | 3.2 ± 0.0 | 86.3 ± 0.5 | 83.1 | 94.2 ± 1.2 | 87.4 |
| GCN | 1.0 | 2.3 ± 0.0 | 86.5 ± 0.2 | 84.2 | 94.0 ± 1.2 | 84.7 | 1.2 ± 0.0 | 86.5 ± 0.2 | 85.3 | 94.9 ± 0.6 | 90.9 |
| GCN | 2.0 | 2.0 ± 0.0 | 86.5 ± 0.2 | 84.5 | 94.6 ± 0.8 | 89.2 | 0.3 ± 0.0 | 86.6 ± 0.1 | 86.2 | 94.5 ± 0.5 | 94.5 |
| GCN | 8.0 | 0.1 ± 0.0 | 86.6 ± 0.1 | 86.4 | 94.7 ± 0.5 | 94.9 | −0.2 ± 0.0 | 86.5 ± 0.1 | 86.7 | 94.0 ± 0.4 | 93.5 |
| GCN | ∞ | 0.0 ± 0.0 | 86.5 ± 0.0 | 86.5 | 94.8 ± 0.0 | 94.8 | 0.0 ± 0.0 | 86.6 ± 0.0 | 86.6 | 94.5 ± 0.0 | 94.5 |
| GConv | 0.1 | 4.0 ± 1.0 | 70.5 ± 1.2 | 66.5 | 51.9 ± 0.2 | 53.3 | 1.1 ± 3.0 | 70.4 ± 3.2 | 69.3 | 52.5 ± 0.6 | 53.7 |
| GConv | 1.0 | 2.2 ± 1.0 | 71.6 ± 1.3 | 69.4 | 52.2 ± 0.3 | 51.8 | 2.0 ± 2.0 | 69.1 ± 1.8 | 67.1 | 52.7 ± 0.8 | 53.1 |
| GConv | 2.0 | 2.4 ± 1.0 | 69.9 ± 1.3 | 67.5 | 52.6 ± 0.7 | 52.9 | −2.8 ± 3.0 | 70.0 ± 2.5 | 72.7 | 52.2 ± 0.3 | 52.8 |
| GConv | 8.0 | 1.3 ± 2.0 | 69.6 ± 2.2 | 68.4 | 52.5 ± 0.6 | 52.4 | 1.2 ± 2.0 | 69.7 ± 2.4 | 68.5 | 52.5 ± 0.6 | 52.6 |
| GConv | ∞ | 0.0 ± 0.0 | 66.6 ± 0.0 | 66.6 | 51.8 ± 0.0 | 51.8 | 0.0 ± 0.0 | 66.6 ± 0.0 | 66.6 | 51.8 ± 0.0 | 51.8 |
| GT | 0.1 | 8.3 ± 1.0 | 80.2 ± 0.8 | 71.9 | 58.8 ± 0.5 | 54.6 | 3.5 ± 1.0 | 80.3 ± 0.7 | 76.8 | 58.9 ± 0.3 | 57.5 |
| GT | 1.0 | 6.8 ± 1.0 | 80.5 ± 0.8 | 73.7 | 59.4 ± 0.4 | 55.4 | 2.8 ± 1.0 | 81.0 ± 0.6 | 78.2 | 59.1 ± 0.4 | 57.9 |
| GT | 2.0 | 5.7 ± 1.0 | 79.8 ± 0.9 | 74.2 | 59.4 ± 0.4 | 56.5 | 0.5 ± 0.0 | 80.6 ± 0.5 | 80.1 | 59.0 ± 0.4 | 58.2 |
| GT | 8.0 | −1.0 ± 1.0 | 80.1 ± 0.8 | 81.1 | 59.3 ± 0.3 | 59.0 | −1.2 ± 1.0 | 79.9 ± 0.9 | 81.2 | 58.9 ± 0.4 | 58.8 |
| GT | ∞ | −0.4 ± 0.0 | 80.3 ± 0.5 | 80.7 | 58.9 ± 0.2 | 58.9 | −0.4 ± 1.0 | 80.4 ± 0.6 | 80.8 | 58.8 ± 0.2 | 59.0 |
| SAGE | 0.1 | 4.6 ± 1.0 | 84.6 ± 0.8 | 80.0 | 85.9 ± 1.2 | 83.7 | 3.2 ± 1.0 | 84.9 ± 0.9 | 81.8 | 85.2 ± 0.8 | 83.2 |
| SAGE | 1.0 | 5.7 ± 1.0 | 84.6 ± 0.5 | 78.9 | 86.0 ± 1.1 | 83.5 | 3.1 ± 1.0 | 85.4 ± 0.7 | 82.3 | 85.2 ± 1.0 | 84.7 |
| SAGE | 2.0 | 1.8 ± 0.0 | 84.6 ± 0.3 | 82.7 | 85.7 ± 0.8 | 85.4 | 1.8 ± 1.0 | 85.1 ± 0.8 | 83.4 | 85.8 ± 0.4 | 85.4 |
| SAGE | 8.0 | −0.0 ± 1.0 | 85.3 ± 0.6 | 85.3 | 85.7 ± 0.9 | 83.5 | 1.2 ± 0.0 | 85.3 ± 0.5 | 84.2 | 85.8 ± 0.6 | 85.1 |
| SAGE | ∞ | 0.0 ± 0.0 | 86.0 ± 0.0 | 86.0 | 86.1 ± 0.0 | 86.1 | 0.0 ± 0.0 | 86.0 ± 0.0 | 86.0 | 86.1 ± 0.0 | 86.1 |

Table 9: Results on LastFM.
| Model | ϵ | ∆Acc (GP-t) | AccGP (GP-t) | Accb (GP-t) | AUCGP (GP-t) | AUCb (GP-t) | ∆Acc (GP-m) | AccGP (GP-m) | Accb (GP-m) | AUCGP (GP-m) | AUCb (GP-m) |
|-------|-----|-------------|--------------|-------------|--------------|-------------|-------------|--------------|-------------|--------------|-------------|
| GAT | 0.1 | 1.9 ± 0.0 | 82.4 ± 0.1 | 80.5 | 71.6 ± 1.1 | 67.3 | 1.8 ± 0.0 | 82.4 ± 0.2 | 80.6 | 72.6 ± 0.9 | 72.0 |
| GAT | 1.0 | 1.4 ± 0.0 | 82.4 ± 0.1 | 81.1 | 72.4 ± 1.4 | 65.4 | 0.7 ± 0.0 | 82.5 ± 0.1 | 81.8 | 72.2 ± 1.3 | 73.8 |
| GAT | 2.0 | 0.8 ± 0.0 | 82.5 ± 0.1 | 81.7 | 71.8 ± 1.3 | 69.7 | 0.0 ± 0.0 | 82.4 ± 0.1 | 82.4 | 71.9 ± 1.4 | 69.5 |
| GAT | 8.0 | 0.1 ± 0.0 | 82.5 ± 0.1 | 82.4 | 72.8 ± 1.2 | 70.9 | 0.1 ± 0.0 | 82.5 ± 0.1 | 82.4 | 72.7 ± 1.3 | 73.3 |
| GAT | ∞ | −0.0 ± 0.0 | 82.5 ± 0.0 | 82.5 | 73.0 ± 0.2 | 73.2 | −0.0 ± 0.0 | 82.5 ± 0.0 | 82.5 | 73.0 ± 0.3 | 72.8 |
| GATv2 | 0.1 | 1.6 ± 0.0 | 82.5 ± 0.1 | 80.9 | 64.5 ± 0.6 | 60.3 | 1.8 ± 0.0 | 82.5 ± 0.1 | 80.8 | 65.5 ± 0.8 | 61.7 |
| GATv2 | 1.0 | 1.1 ± 0.0 | 82.4 ± 0.1 | 81.3 | 65.2 ± 0.9 | 61.4 | 0.7 ± 0.0 | 82.5 ± 0.1 | 81.9 | 65.8 ± 0.8 | 63.7 |
| GATv2 | 2.0 | 0.8 ± 0.0 | 82.5 ± 0.1 | 81.7 | 64.3 ± 0.8 | 62.8 | 0.4 ± 0.0 | 82.5 ± 0.1 | 82.1 | 64.9 ± 1.4 | 64.9 |
| GATv2 | 8.0 | 0.0 ± 0.0 | 82.5 ± 0.1 | 82.4 | 65.2 ± 1.2 | 65.3 | −0.0 ± 0.0 | 82.5 ± 0.0 | 82.6 | 65.2 ± 1.0 | 66.5 |
| GATv2 | ∞ | −0.1 ± 0.0 | 82.6 ± 0.1 | 82.7 | 66.2 ± 0.2 | 66.2 | 0.0 ± 0.0 | 82.6 ± 0.0 | 82.6 | 66.2 ± 0.2 | 66.2 |
| GCN | 0.1 | 1.8 ± 0.0 | 82.3 ± 0.1 | 80.6 | 98.5 ± 1.0 | 85.9 | 1.8 ± 0.0 | 82.3 ± 0.1 | 80.5 | 98.7 ± 0.7 | 93.4 |
| GCN | 1.0 | 1.0 ± 0.0 | 82.4 ± 0.1 | 81.4 | 98.9 ± 0.5 | 89.9 | 0.7 ± 0.0 | 82.3 ± 0.1 | 81.6 | 99.1 ± 0.2 | 97.2 |
| GCN | 2.0 | 0.5 ± 0.0 | 82.4 ± 0.1 | 81.9 | 99.1 ± 0.3 | 94.4 | 0.0 ± 0.0 | 82.4 ± 0.1 | 82.3 | 99.1 ± 0.2 | 98.9 |
| GCN | 8.0 | 0.1 ± 0.0 | 82.4 ± 0.0 | 82.3 | 99.1 ± 0.2 | 99.1 | 0.1 ± 0.0 | 82.4 ± 0.1 | 82.3 | 99.1 ± 0.2 | 98.8 |
| GCN | ∞ | 0.0 ± 0.0 | 82.4 ± 0.0 | 82.4 | 99.2 ± 0.0 | 99.2 | 0.0 ± 0.0 | 82.4 ± 0.0 | 82.4 | 99.2 ± 0.0 | 99.2 |
| GConv | 0.1 | 2.0 ± 0.0 | 81.7 ± 0.1 | 79.7 | 74.8 ± 1.0 | 71.4 | 1.5 ± 0.0 | 81.8 ± 0.1 | 80.3 | 74.4 ± 0.8 | 76.0 |
| GConv | 1.0 | 1.1 ± 0.0 | 81.7 ± 0.1 | 80.6 | 74.8 ± 0.6 | 73.1 | 0.6 ± 0.0 | 81.9 ± 0.1 | 81.3 | 74.6 ± 0.7 | 75.8 |
| GConv | 2.0 | 0.7 ± 0.0 | 81.7 ± 0.1 | 81.1 | 74.1 ± 0.7 | 77.5 | 0.6 ± 0.0 | 81.9 ± 0.1 | 81.3 | 74.5 ± 1.0 | 74.4 |
| GConv | 8.0 | 0.2 ± 0.0 | 81.8 ± 0.1 | 81.6 | 74.4 ± 0.5 | 74.9 | 0.0 ± 0.0 | 81.8 ± 0.2 | 81.7 | 73.8 ± 1.3 | 74.5 |
| GConv | ∞ | 0.0 ± 0.0 | 82.0 ± 0.0 | 82.0 | 75.0 ± 0.0 | 75.0 | 0.0 ± 0.0 | 82.0 ± 0.0 | 82.0 | 75.0 ± 0.0 | 75.0 |
| GT | 0.1 | 1.9 ± 0.0 | 82.1 ± 0.1 | 80.1 | 64.3 ± 0.2 | 60.2 | 2.3 ± 0.0 | 82.0 ± 0.1 | 79.8 | 64.3 ± 0.3 | 63.2 |
| GT | 1.0 | 1.2 ± 0.0 | 82.0 ± 0.1 | 80.8 | 64.1 ± 0.5 | 61.0 | 0.8 ± 0.0 | 82.0 ± 0.1 | 81.2 | 64.1 ± 0.2 | 64.1 |
| GT | 2.0 | 1.0 ± 0.0 | 82.1 ± 0.1 | 81.0 | 64.3 ± 0.2 | 62.2 | 0.4 ± 0.0 | 82.0 ± 0.1 | 81.7 | 64.4 ± 0.3 | 64.4 |
| GT | 8.0 | 0.1 ± 0.0 | 82.1 ± 0.1 | 82.0 | 64.2 ± 0.2 | 64.1 | 0.3 ± 0.0 | 82.1 ± 0.0 | 81.9 | 64.4 ± 0.3 | 64.5 |
| GT | ∞ | 0.0 ± 0.0 | 82.1 ± 0.0 | 82.1 | 64.2 ± 0.2 | 64.3 | −0.0 ± 0.0 | 82.1 ± 0.0 | 82.1 | 64.2 ± 0.2 | 64.0 |
| SAGE | 0.1 | 1.6 ± 0.0 | 82.0 ± 0.2 | 80.4 | 56.2 ± 0.9 | 56.1 | 1.9 ± 0.0 | 82.0 ± 0.1 | 80.1 | 55.9 ± 0.5 | 57.0 |
| SAGE | 1.0 | 1.1 ± 0.0 | 82.0 ± 0.1 | 80.9 | 55.6 ± 0.6 | 57.0 | 0.6 ± 0.0 | 82.0 ± 0.1 | 81.4 | 56.1 ± 0.7 | 56.6 |
| SAGE | 2.0 | 0.5 ± 0.0 | 82.0 ± 0.1 | 81.5 | 55.8 ± 0.5 | 54.8 | 0.2 ± 0.0 | 82.0 ± 0.1 | 81.8 | 55.9 ± 0.6 | 54.6 |
| SAGE | 8.0 | −0.0 ± 0.0 | 82.0 ± 0.1 | 82.1 | 55.9 ± 0.5 | 55.7 | 0.1 ± 0.0 | 82.0 ± 0.1 | 82.0 | 56.0 ± 0.6 | 57.0 |
| SAGE | ∞ | 0.0 ± 0.0 | 82.0 ± 0.0 | 82.0 | 56.6 ± 0.0 | 56.6 | 0.0 ± 0.0 | 82.0 ± 0.0 | 82.0 | 56.6 ± 0.0 | 56.6 |

Table 10: Results on PubMed.
| Model | ϵ | ∆Acc (GP-t) | AccGP (GP-t) | Accb (GP-t) | AUCGP (GP-t) | AUCb (GP-t) | ∆Acc (GP-m) | AccGP (GP-m) | Accb (GP-m) | AUCGP (GP-m) | AUCb (GP-m) |
|-------|-----|-------------|--------------|-------------|--------------|-------------|-------------|--------------|-------------|--------------|-------------|
| GAT | 0.1 | 4.8 ± 1.0 | 90.5 ± 1.1 | 85.8 | 63.8 ± 1.3 | 60.8 | 3.1 ± 0.0 | 91.0 ± 0.4 | 87.9 | 64.8 ± 0.6 | 64.0 |
| GAT | 1.0 | 4.1 ± 1.0 | 90.8 ± 0.8 | 86.6 | 65.2 ± 1.7 | 62.9 | 1.0 ± 0.0 | 91.0 ± 0.2 | 90.0 | 64.7 ± 1.1 | 66.1 |
| GAT | 2.0 | 2.5 ± 0.0 | 90.8 ± 0.4 | 88.4 | 64.1 ± 1.4 | 64.9 | 0.2 ± 0.0 | 90.9 ± 0.1 | 90.7 | 64.4 ± 1.0 | 64.9 |
| GAT | 8.0 | 0.1 ± 0.0 | 91.1 ± 0.2 | 91.0 | 63.8 ± 1.1 | 67.2 | −0.3 ± 0.0 | 91.1 ± 0.2 | 91.4 | 64.9 ± 1.3 | 64.4 |
| GAT | ∞ | 0.1 ± 0.0 | 90.9 ± 0.1 | 90.8 | 64.7 ± 0.3 | 64.8 | −0.1 ± 0.0 | 90.9 ± 0.1 | 91.0 | 64.6 ± 0.3 | 64.3 |
| GATv2 | 0.1 | 4.9 ± 1.0 | 88.2 ± 0.7 | 83.3 | 58.5 ± 1.8 | 54.8 | 3.8 ± 0.0 | 88.6 ± 0.4 | 84.8 | 60.0 ± 1.3 | 57.0 |
| GATv2 | 1.0 | 4.0 ± 0.0 | 88.1 ± 0.4 | 84.1 | 58.4 ± 1.0 | 54.8 | 1.0 ± 1.0 | 87.9 ± 0.6 | 86.9 | 59.3 ± 1.4 | 57.6 |
| GATv2 | 2.0 | 3.2 ± 0.0 | 88.4 ± 0.4 | 85.1 | 58.8 ± 1.0 | 55.9 | 0.2 ± 1.0 | 88.3 ± 0.5 | 88.1 | 59.9 ± 1.6 | 59.3 |
| GATv2 | 8.0 | 0.1 ± 0.0 | 88.6 ± 0.5 | 88.5 | 61.2 ± 2.8 | 59.7 | −0.2 ± 0.0 | 88.2 ± 0.4 | 88.4 | 58.6 ± 2.1 | 57.8 |
| GATv2 | ∞ | 0.1 ± 0.0 | 88.6 ± 0.1 | 88.4 | 61.7 ± 0.4 | 61.8 | 0.2 ± 0.0 | 88.6 ± 0.1 | 88.4 | 61.9 ± 0.4 | 61.3 |
| GCN | 0.1 | 4.8 ± 1.0 | 91.4 ± 1.0 | 86.6 | 98.1 ± 1.5 | 90.8 | 2.9 ± 0.0 | 91.8 ± 0.3 | 89.0 | 98.9 ± 0.2 | 96.9 |
| GCN | 1.0 | 4.0 ± 1.0 | 91.6 ± 0.7 | 87.6 | 98.5 ± 1.0 | 93.2 | 1.1 ± 0.0 | 91.9 ± 0.1 | 90.8 | 99.0 ± 0.1 | 98.7 |
| GCN | 2.0 | 3.0 ± 0.0 | 91.7 ± 0.5 | 88.7 | 98.8 ± 0.4 | 95.8 | 0.5 ± 0.0 | 91.9 ± 0.1 | 91.5 | 99.0 ± 0.0 | 99.1 |
| GCN | 8.0 | 0.1 ± 0.0 | 91.9 ± 0.0 | 91.9 | 99.1 ± 0.2 | 99.2 | 0.1 ± 0.0 | 92.0 ± 0.0 | 91.9 | 99.0 ± 0.1 | 99.0 |
| GCN | ∞ | 0.0 ± 0.0 | 91.9 ± 0.0 | 91.9 | 99.1 ± 0.0 | 99.1 | 0.0 ± 0.0 | 91.9 ± 0.0 | 91.9 | 99.1 ± 0.0 | 99.1 |
| GConv | 0.1 | 4.4 ± 0.0 | 88.5 ± 0.3 | 84.1 | 51.2 ± 0.3 | 52.4 | 2.5 ± 0.0 | 88.6 ± 0.2 | 86.1 | 51.2 ± 0.2 | 51.8 |
| GConv | 1.0 | 4.0 ± 0.0 | 88.5 ± 0.2 | 84.6 | 51.1 ± 0.2 | 52.1 | 0.4 ± 0.0 | 88.5 ± 0.2 | 88.1 | 51.1 ± 0.1 | 51.7 |
| GConv | 2.0 | 2.0 ± 0.0 | 88.5 ± 0.3 | 86.5 | 51.1 ± 0.2 | 52.2 | 0.1 ± 0.0 | 88.3 ± 0.2 | 88.2 | 51.2 ± 0.4 | 51.3 |
| GConv | 8.0 | −0.0 ± 0.0 | 88.5 ± 0.3 | 88.6 | 51.0 ± 0.1 | 51.1 | 0.5 ± 0.0 | 88.4 ± 0.3 | 88.0 | 51.1 ± 0.1 | 51.0 |
| GConv | ∞ | 0.0 ± 0.0 | 88.5 ± 0.0 | 88.5 | 51.1 ± 0.0 | 51.1 | 0.0 ± 0.0 | 88.5 ± 0.0 | 88.5 | 51.1 ± 0.0 | 51.1 |
| GT | 0.1 | 4.7 ± 0.0 | 91.4 ± 0.4 | 86.7 | 60.1 ± 0.8 | 54.2 | 3.3 ± 0.0 | 91.5 ± 0.2 | 88.2 | 60.4 ± 0.4 | 57.2 |
| GT | 1.0 | 3.7 ± 0.0 | 91.5 ± 0.2 | 87.8 | 60.4 ± 0.5 | 55.7 | 1.0 ± 0.0 | 91.6 ± 0.1 | 90.6 | 60.3 ± 0.4 | 58.8 |
| GT | 2.0 | 2.6 ± 0.0 | 91.5 ± 0.1 | 88.9 | 60.2 ± 0.6 | 56.7 | 0.5 ± 0.0 | 91.6 ± 0.1 | 91.1 | 60.7 ± 0.3 | 59.5 |
| GT | 8.0 | 0.0 ± 0.0 | 91.6 ± 0.1 | 91.6 | 60.7 ± 0.2 | 60.0 | 0.1 ± 0.0 | 91.6 ± 0.1 | 91.4 | 60.5 ± 0.6 | 59.7 |
| GT | ∞ | −0.0 ± 0.0 | 91.6 ± 0.1 | 91.6 | 60.3 ± 0.4 | 60.1 | 0.1 ± 0.0 | 91.6 ± 0.1 | 91.5 | 60.3 ± 0.2 | 60.1 |
| SAGE | 0.1 | 4.4 ± 1.0 | 91.3 ± 0.9 | 86.9 | 75.0 ± 1.5 | 69.8 | 2.6 ± 1.0 | 91.5 ± 0.6 | 88.9 | 75.0 ± 1.4 | 72.9 |
| SAGE | 1.0 | 3.6 ± 1.0 | 91.5 ± 0.7 | 87.8 | 74.7 ± 0.7 | 70.9 | 1.0 ± 0.0 | 91.7 ± 0.2 | 90.7 | 74.7 ± 0.4 | 76.0 |
| SAGE | 2.0 | 2.5 ± 0.0 | 91.6 ± 0.4 | 89.1 | 74.6 ± 0.8 | 71.0 | 0.4 ± 0.0 | 91.8 ± 0.1 | 91.4 | 75.2 ± 0.5 | 75.2 |
| SAGE | 8.0 | −0.0 ± 0.0 | 91.8 ± 0.0 | 91.8 | 75.6 ± 0.5 | 76.2 | 0.1 ± 0.0 | 91.8 ± 0.0 | 91.7 | 75.8 ± 0.9 | 74.2 |
| SAGE | ∞ | 0.0 ± 0.0 | 91.9 ± 0.0 | 91.9 | 74.7 ± 0.0 | 74.7 | 0.0 ± 0.0 | 91.9 ± 0.0 | 91.9 | 74.7 ± 0.0 | 74.7 |

Table 11: Results on Facebook.
| Model | ϵ | ∆Acc (GP-t) | AccGP (GP-t) | Accb (GP-t) | AUCGP (GP-t) | AUCb (GP-t) | ∆Acc (GP-m) | AccGP (GP-m) | Accb (GP-m) | AUCGP (GP-m) | AUCb (GP-m) |
|-------|-----|-------------|--------------|-------------|--------------|-------------|-------------|--------------|-------------|--------------|-------------|
| GAT | 0.1 | 13.9 ± 2.0 | 85.6 ± 2.4 | 71.8 | 80.9 ± 3.9 | 70.1 | 1.4 ± 0.0 | 86.3 ± 0.5 | 84.9 | 82.3 ± 4.2 | 79.4 |
| GAT | 1.0 | 15.4 ± 2.0 | 85.2 ± 2.4 | 69.8 | 80.5 ± 4.6 | 71.5 | 0.2 ± 1.0 | 86.2 ± 1.1 | 86.0 | 80.0 ± 4.5 | 75.9 |
| GAT | 2.0 | 12.8 ± 2.0 | 85.6 ± 1.8 | 72.8 | 83.5 ± 1.9 | 72.2 | −0.2 ± 1.0 | 86.2 ± 0.9 | 86.3 | 82.4 ± 3.6 | 82.6 |
| GAT | 8.0 | 0.2 ± 0.0 | 86.4 ± 0.3 | 86.1 | 84.9 ± 2.9 | 73.6 | −0.2 ± 1.0 | 86.3 ± 1.1 | 86.5 | 83.4 ± 2.6 | 83.1 |
| GAT | ∞ | −0.0 ± 0.0 | 86.8 ± 0.0 | 86.8 | 83.9 ± 0.5 | 84.4 | −0.0 ± 0.0 | 86.8 ± 0.0 | 86.8 | 83.7 ± 0.6 | 83.7 |
| GATv2 | 0.1 | 13.9 ± 2.0 | 83.8 ± 2.4 | 69.9 | 71.0 ± 4.6 | 61.7 | 2.6 ± 2.0 | 82.7 ± 2.2 | 80.1 | 70.2 ± 4.2 | 68.2 |
| GATv2 | 1.0 | 12.2 ± 4.0 | 82.6 ± 4.0 | 70.4 | 69.4 ± 4.8 | 57.2 | −0.6 ± 2.0 | 83.7 ± 1.9 | 84.3 | 70.9 ± 4.1 | 72.4 |
| GATv2 | 2.0 | 13.9 ± 5.0 | 82.2 ± 5.3 | 68.3 | 68.3 ± 2.6 | 60.0 | −0.9 ± 1.0 | 84.1 ± 0.9 | 85.0 | 72.0 ± 3.2 | 76.8 |
| GATv2 | 8.0 | −1.2 ± 1.0 | 83.8 ± 1.3 | 85.0 | 72.9 ± 3.4 | 71.8 | −1.1 ± 2.0 | 83.4 ± 1.8 | 84.5 | 72.6 ± 3.4 | 69.9 |
| GATv2 | ∞ | 0.0 ± 0.0 | 84.9 ± 0.0 | 84.9 | 76.7 ± 0.8 | 77.6 | −0.0 ± 0.0 | 84.9 ± 0.0 | 84.9 | 76.6 ± 0.9 | 77.3 |
| GCN | 0.1 | 15.4 ± 3.0 | 84.1 ± 2.9 | 68.7 | 96.4 ± 1.7 | 89.2 | 0.5 ± 2.0 | 83.2 ± 1.6 | 82.7 | 96.2 ± 0.9 | 95.9 |
| GCN | 1.0 | 10.6 ± 3.0 | 83.7 ± 3.5 | 73.1 | 95.9 ± 1.2 | 92.3 | −0.7 ± 2.0 | 83.5 ± 1.9 | 84.2 | 96.2 ± 0.8 | 95.8 |
| GCN | 2.0 | 12.8 ± 3.0 | 83.3 ± 3.1 | 70.4 | 95.3 ± 0.6 | 93.0 | −3.8 ± 3.0 | 81.9 ± 2.6 | 85.7 | 96.1 ± 1.0 | 95.5 |
| GCN | 8.0 | −1.7 ± 2.0 | 84.2 ± 1.5 | 85.9 | 95.7 ± 0.9 | 95.3 | −4.4 ± 2.0 | 81.8 ± 1.9 | 86.2 | 96.0 ± 0.6 | 96.5 |
| GCN | ∞ | 0.0 ± 0.0 | 80.8 ± 0.0 | 80.8 | 96.3 ± 0.0 | 96.3 | 0.0 ± 0.0 | 80.8 ± 0.0 | 80.8 | 96.3 ± 0.0 | 96.3 |
| GConv | 0.1 | −3.2 ± 6.0 | 64.1 ± 6.1 | 67.3 | 51.8 ± 1.0 | 52.0 | −5.3 ± 7.0 | 70.8 ± 7.1 | 76.1 | 52.2 ± 0.6 | 51.7 |
| GConv | 1.0 | 5.4 ± 5.0 | 65.9 ± 5.0 | 60.5 | 52.3 ± 0.9 | 51.7 | −7.7 ± 9.0 | 62.2 ± 9.0 | 69.9 | 52.1 ± 0.7 | 52.2 |
| GConv | 2.0 | 11.8 ± 4.0 | 66.3 ± 4.3 | 54.4 | 52.2 ± 1.5 | 51.8 | −7.4 ± 6.0 | 63.0 ± 6.2 | 70.3 | 51.9 ± 0.5 | 51.8 |
| GConv | 8.0 | −13.0 ± 11.0 | 60.8 ± 10.5 | 73.7 | 52.2 ± 0.8 | 52.8 | −5.5 ± 7.0 | 65.5 ± 6.6 | 71.0 | 52.0 ± 0.6 | 51.7 |
| GConv | ∞ | 0.0 ± 0.0 | 61.3 ± 0.0 | 61.3 | 52.2 ± 0.0 | 52.2 | 0.0 ± 0.0 | 61.3 ± 0.0 | 61.3 | 52.2 ± 0.0 | 52.2 |
| GT | 0.1 | 8.7 ± 1.0 | 86.1 ± 0.9 | 77.3 | 81.5 ± 6.6 | 66.4 | 1.3 ± 1.0 | 86.4 ± 0.6 | 85.2 | 82.7 ± 4.2 | 74.5 |
| GT | 1.0 | 7.6 ± 1.0 | 86.1 ± 0.9 | 78.6 | 78.4 ± 5.5 | 67.3 | 0.3 ± 1.0 | 86.4 ± 0.7 | 86.0 | 82.8 ± 3.1 | 73.8 |
| GT | 2.0 | 3.4 ± 1.0 | 86.2 ± 0.7 | 82.8 | 80.8 ± 4.2 | 67.1 | 0.0 ± 0.0 | 86.4 ± 0.5 | 86.3 | 83.5 ± 3.1 | 77.1 |
| GT | 8.0 | 0.0 ± 0.0 | 86.6 ± 0.2 | 86.6 | 82.5 ± 2.0 | 80.3 | −0.2 ± 0.0 | 86.6 ± 0.1 | 86.8 | 84.7 ± 2.4 | 81.0 |
| GT | ∞ | −0.0 ± 0.0 | 86.7 ± 0.0 | 86.7 | 87.5 ± 0.5 | 87.6 | 0.1 ± 0.0 | 86.7 ± 0.0 | 86.6 | 87.3 ± 0.6 | 87.8 |
| SAGE | 0.1 | 14.9 ± 2.0 | 85.9 ± 1.7 | 71.0 | 85.1 ± 6.9 | 70.9 | 1.0 ± 0.0 | 86.3 ± 0.3 | 85.3 | 86.2 ± 7.0 | 79.0 |
| SAGE | 1.0 | 14.6 ± 1.0 | 85.9 ± 0.9 | 71.3 | 78.4 ± 2.1 | 71.6 | 0.2 ± 0.0 | 86.3 ± 0.2 | 86.1 | 77.6 ± 2.8 | 83.0 |
| SAGE | 2.0 | 13.0 ± 2.0 | 85.8 ± 1.6 | 72.7 | 80.5 ± 4.3 | 72.7 | 0.2 ± 0.0 | 86.4 ± 0.2 | 86.3 | 79.0 ± 2.3 | 83.0 |
| SAGE | 8.0 | −0.2 ± 0.0 | 86.7 ± 0.1 | 86.9 | 80.0 ± 1.9 | 80.0 | 0.1 ± 0.0 | 86.5 ± 0.1 | 86.4 | 78.7 ± 2.0 | 83.0 |
| SAGE | ∞ | 0.0 ± 0.0 | 86.4 ± 0.0 | 86.4 | 93.8 ± 0.0 | 93.8 | 0.0 ± 0.0 | 86.4 ± 0.0 | 86.4 | 93.8 ± 0.0 | 93.8 |

Table 12: Results on Amazon Photo.

## C.4 Additional Results with GAT

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 4: Results for GP-t with GAT, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.
(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 5: Results for GP-m with GAT, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

## C.5 Additional Results with SAGE

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 6: Results for GP-t with SAGE, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 7: Results for GP-m with SAGE, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

## C.6 Additional Results with GCN

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 8: Results for GP-t with GCN, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 9: Results for GP-m with GCN, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

## C.7 Additional Results with GATv2

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 10: Results for GP-t with GATv2, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.
(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 11: Results for GP-m with GATv2, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

## C.8 Additional Results with GT

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 12: Results for GP-t with GT, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 13: Results for GP-m with GT, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.

## C.9 Additional Results with GConv

We report results for ϵ = 0.1 and different values of α and δ, for all our experiments, as averaged values across 10 runs.

(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 14: Results for GP-t with GConv, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.
(a) ∆Acc for different values of α and δ. Higher is better.

(b) ∆AUC for different values of α and δ. Smaller is better.

Figure 15: Results for GP-m with GConv, for ϵ = 0.1. Average values across 10 runs. The colormap is normalized to interpret all ∆Acc > 0 and ∆AUC ≤ 2 as desirable.
Review 1:

Summary: The authors propose GraphPrivatizer that privatizes the structure of a graph under Differential Privacy. It uses controlled perturbation of the graph structure by randomly replacing the neighbors of a node with other similar neighbors, according to some similarity metric. Specifically, authors aggregate features to compute similarities and imposing a minimum similarity score between the original and the replaced nodes provides the best privacy-utility trade-off, and then train a Graph Neural Network server-side without disclosing users' private information to the server.

Strengths and Weaknesses: The graph neural network architectures mentioned in the related work are quite outdated, as there are recent state-of-the-art architectures that use bi-level attention based graph aggregation for large-scale multi-relation general-purpose graphs. The authors should mention these works in their related work section for relevance and recency to the latest research:

"Bi-Level Attention Graph Neural Networks," 2021 IEEE International Conference on Data Mining (ICDM), Auckland, New Zealand, 2021, pp. 1126-1131, doi: 10.1109/ICDM51629.2021.00133.

Heterogeneous Graph Attention Network. In The World Wide Web Conference (WWW '19). Association for Computing Machinery, New York, NY, USA, 2022–2032.

Furthermore, the authors should highlight what the novelty of their work is and the limitations of existing models, since controlled perturbation of the graph structure by randomly replacing the neighbors of a node with other similar neighbors, according to some similarity metric, has been proposed by other prior works.

Requested Changes: In addition to the above comments:

- Please also provide one figure summarizing where algorithms 2-4 fit in, in terms of the overall architecture (Figure 2).
- Please provide dataset properties and statistics summarized in one table (e.g., node count, edge count etc).
- The experiment baselines also seem highly outdated, with GCN and GraphSAGE being very old models.

Broader Impact Concerns: This work investigates a useful topic of research in Differential Privacy for graphs, however I have a concern regarding novelty of the works, outdated baselines (for experiments) and overall architecture (as detailed in the above comments).

==================================================

Review 2:

Summary: The authors introduce a framework that utilizes Graph Neural Networks (GNNs) in conjunction with differential privacy techniques to derive representations from graph-structured data while adhering to privacy requirements applicable in real-world scenarios, particularly emphasizing a local setting. To ensure the privacy of features and labels, the graph structure undergoes local privatization through a perturbation strategy, where private nodes are substituted with similar ones based on a similarity score.

Strengths and Weaknesses:

**Strengths:**
- The paper is generally easy to follow and clear in terms of key points in methodology and experiments.
- An experimental evaluation of the proposed method for different datasets and GNN architectures is provided along with a thorough analysis of the results.
- A complete list of hyperparameters for the proposed algorithm is provided in the Appendix, enhancing the reproducibility of the experimental setup.

**Weaknesses:**
- Definition 4.2 as presented is unclear whether it is derived from the cited paper on page 5 (Hidano & Murakami, 2022) or provided by the authors.
If it is the latter, the statement that “by construction, node degree is preserved” and the conclusion “while the converse does not hold” require further explanation. Additionally, since the authors claim to provide a definition of ε edge-set LDP for their local privacy settings as a main contribution, it is necessary to clarify any algorithmic or theoretical overlap with the cited works in the methods section, especially concerning edge-set LDP. - Given that the cited work (Hidano & Murakami, 2022) provides a degree-preserving randomized response, which appears closely related to the proposed method (operating on neighboring lists), comparisons between the two methods are necessary. The authors should justify if such a comparison is not meaningful, by providing more information for the cited work in the related works section. - For reproducibility purposes (it is not mentioned if the code will be publicly available), the exact hyperparameters of the GNNs need to be specified for the different variants, including the number of layers and hidden dimensions. - Including a limitations section in the conclusion discussing scenarios where the proposed algorithm may not be applicable would be crucial for guiding future research directions. Requested Changes: **Motivation:** - *Privacy Considerations:* Local and global privacy, as well as the limitations of the considered setups in real-world scenarios, should be more thoroughly addressed in the introduction and contributions sections to contextualize the impact of the proposed method for differential privacy (DP). The significance of focusing on the local setting should be elucidated, explaining why this is crucial and whether the proposed method could potentially extend to other settings. **Algorithm and Experiments:** - *Baseline Comparisons:* The authors compare their method with only one baseline adapted from two cited methods (Sajadmanesh & Gatica-Perez, 2021, and Wu et al., 2022). It is important to justify why these modifications are necessary and whether other related works could be applied with fewer adjustments. - *Distinction from Related Work:* Further clarification is needed regarding the distinction from the cited work (Hidano & Murakami, 2022). An experimental comparison should be included, or the authors should justify why such a comparison is not applicable. - *Similarity Metrics:* Cosine similarity is used as the sole measure for node replacement. The authors should justify this choice and discuss its potential impact on performance. Comparisons with other similarity metrics would provide valuable insights and could serve as a useful ablation study for the proposed algorithm. **Readability:** - *Symbol Clarity:* Symbols involved in definitions should be clearly denoted alongside the equations in which they are used. For instance, symbol $m_v$ in Equations (1) and (2) in section 3.1 and $Pr[.]$ in Definition 3.1 should be explicitly defined. Similarly, symbols in algorithms 2, 3, and 4 mentioned in Section 4.2 are hard to identify in the provided text. The basic notations currently provided in the large upper paragraph of page 6 could be repositioned into equations or summarized in a table of basic notations to enhance symbol clarity throughout the paper. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper investigates differentially private algorithms within the framework of graph neural networks, with a particular focus on edge-level differential privacy. 
The primary motivation is to address the limitations of existing methods, such as the Randomized Response technique, which can undesirably increase the connectivity of the underlying graph. The authors propose a novel algorithm that ensures $\epsilon$-edge set Local Differential Privacy (LDP). Empirical results demonstrate the relative effectiveness of the proposed approach. Strengths and Weaknesses: **Strong points:** - The problem considered here is very interesting and relevant. - The algorithm is theoretically validated to achieve $\epsilon$-edge set LDP. - The proposed algorithm seems novel and is "model-agnostic," making it easily applicable to the majority of available GNNs (also also shown in the experimental section). - The empirical results look promising. **Weak points:** - [W1] In section 2, you claim: “individual nodes have only access to their own edges, features, and labels,” but in the algorithm, each node has access not only to its neighborhood but also to its 2-hop neighbors (and their labels to compute similarity). Can you please clarify this? - [W2] The authors use a similarity score based on the cosine similarity of node features. I believe more advanced and relevant similarity scores could yield better results. Additionally, considering feature privacy can be confusing as the algorithm becomes a two-step randomization. Specifically, ensuring node features' privacy will likely affect edge-level privacy. - [W3] The code implementation seems to be missing, raising possible questions about the validity and reproducibility of the empirical results. Could you please provide it as part of the submission? - [W4] The paper lacks a discussion on time complexity. Specifically, how does the algorithm scale with larger datasets? - [W5] On a minor personal note, the ordering of the sections was confusing. It would be better to divide all the concepts into a preliminary section and then adapt the problem setup section to include everything related to the authors’ work and hypotheses. Requested Changes: In addition to the previous weak points that should be addressed, a number of questions are also relevant and would be great to clarify in the manuscript: - In the beginning of Section 4.2, you state: “new definition.” Can you clarify this statement? Specifically, how novel is your “introduced definition” of adjacent neighborhoods compared to other available node-level privacy definitions (such as [1])? Note that while you aim for edge-level privacy, you seem to be using a node-level definition (Definition 4.1). - How can the similarity score be adapted in cases where nodes do not have labels? Generally, in the absence of node features, some crafted vectors (usually based on the node’s degree) are used. How can this be adapted in your case where the neighborhood is perturbed? - In accordance with W2, it seems that the interaction between label and edge privacy is not considered in Theorem 4.4. Can you please clarify this possible non-dependence? Additionally, a more formal and complete proof of this algorithm would be beneficial. - How does the algorithm scale with larger datasets? It would be great to provide results on larger datasets (perhaps OGBN-Arxiv?). - Related to the previous question, how can the model be adapted for datasets that contain edge features? How can this information be used to compute similarities? - In the experimental results, you use a 50:25:25 train/val/test split ratio (Appendix A). 
Is there a reason why you did not use the public folds available with some datasets (such as Cora and PubMed)?
- As a suggestion, it would be great to expand your results to the graph classification setting. I believe your algorithm can be easily adapted to this setting as well.

---

[1] Preserving Node-level Privacy in Graph Neural Networks, Xiang et al., 2024.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper was reviewed by three expert reviewers. The reviewers initially raised concerns about the presentation of the paper, the employed similarity metric and the scalability of the proposed approach. They also complained about missing related work and a lack of discussion on how the proposed approach differs from prior work. Most of these concerns were addressed by the authors in the revision and all reviewers recommended weak acceptance of the paper. I thus think that the paper is now ready for publication.

Since the main content of the paper is now longer than 12 pages, the authors should select "Long submission (more than 12 pages of main content)" instead of "Regular submission (no more than 12 pages of main content)" on OpenReview. Furthermore, the authors should add a link to the repository where the code is hosted.

==================================================
# DPVIm: Differentially Private Variational Inference Improved

Joonas Jälkö∗ *joonas.jalko@helsinki.fi*
Department of Computer Science, University of Helsinki

Lukas Prediger *lukas.m.prediger@aalto.fi*
Department of Computer Science, Aalto University

Antti Honkela *antti.honkela@helsinki.fi*
Department of Computer Science, University of Helsinki

Samuel Kaski *samuel.kaski@aalto.fi*
Department of Computer Science, Aalto University
Department of Computer Science, University of Manchester

∗Work done while at Aalto University

Reviewed on OpenReview: *https://openreview.net/forum?id=GlhM6XX1wv*

## Abstract

Differentially private (DP) release of multidimensional statistics typically considers an aggregate sensitivity, e.g. the vector norm of a high-dimensional vector. However, different dimensions of that vector might have widely different magnitudes and therefore DP perturbation disproportionately affects the signal across dimensions. We observe this problem in the gradient release of the DP-SGD algorithm when using it for variational inference (VI), where it manifests in poor convergence as well as high variance in outputs for certain variational parameters, and make the following contributions: (i) We mathematically isolate the cause for the difference in magnitudes between gradient parts corresponding to different variational parameters. Using this as prior knowledge we establish a link between the gradients of the variational parameters, and propose an efficient yet simple fix for the problem to obtain a less noisy gradient estimator, which we call *aligned* gradients. This approach allows us to obtain the updates for the covariance parameter of a Gaussian posterior approximation without a privacy cost. We compare this to alternative approaches for scaling the gradients using analytically derived preconditioning, e.g. natural gradients. (ii) We suggest using iterate averaging over the parameter iterates recovered during training, to reduce the DP-induced noise in parameter estimates at no additional cost in privacy. Finally, (iii) to accurately capture the additional uncertainty DP introduces to the model parameters, we infer the DP-induced noise from the parameter iterates and include that in the learned posteriors to make them *noise aware*. We demonstrate the efficacy of our proposed improvements through various experiments on real data.

## 1 Introduction

Differential privacy (DP) (Dwork et al., 2006) protects the privacy of data subjects by limiting how much about the input data can be learned from the output of an algorithm. Additive noise mechanisms achieve DP by adding noise calibrated to the maximum change in function output due to a single individual, known as sensitivity. When releasing high-dimensional data through such mechanisms, different variables may have widely different sensitivities. However, this issue of varying sensitivities is often neglected. Instead, the sensitivity of the release is computed as an aggregate over all the dimensions, which we call total sensitivity, in contrast to *variable-specific sensitivity*. As the DP noise is subsequently scaled with this total sensitivity, it affects dimensions with lower sensitivities more. A prominent example where this occurs is the gradient release in DP stochastic gradient descent (DP-SGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016), where it can affect the convergence rate of the corresponding parameters.
Furthermore, the final parameters released from DP-SGD are noisy estimators of the optimal parameters, and the resulting error is usually treated as an unavoidable trade-off of providing privacy (Abadi et al., 2016). As a result, final parameter estimates with small variable-specific sensitivity may exhibit larger errors due to DP randomness compared to other dimensions. The combination of these two issues means that parameters with relatively small sensitivity are at a double disadvantage.

We discover these issues in the perturbed gradients used in the DP-SGD based DP variational inference (DPVI) algorithm (Jälkö et al., 2017), which is a widely applicable state-of-the-art method for privacy-preserving (approximate) Bayesian inference. We find that gradient magnitudes for different parameters in DPVI often differ significantly, resulting in severe errors in capturing the posterior. This results e.g. in poor predictive uncertainty estimation, making the predictions of the learned model less accurate. We mathematically isolate the cause for these problems in DPVI and propose and evaluate two ways of alleviating the problem of gradient scales in DPVI: one scales gradients with a preconditioning matrix before applying the DP mechanism, the other is based on insights into the mathematical structure of the gradients, which reveals that their components are mathematically linked and can be derived from each other in a post-processing step. Additionally, we theoretically and experimentally evaluate the method of iterate averaging as a way to further improve the parameter estimate, as well as approximate the additional variance induced by DP perturbations to DPVI to make the posterior approximation noise aware at no additional cost in privacy.

## 1.1 Related Work

In the context of DP-SGD, the following previous works acknowledge the different sensitivities of different parts of the full gradient: McMahan et al. (2018) suggested clipping the gradients of a neural network separately for each layer to avoid the clipping-induced bias (Chen et al., 2020). Other lines of work (Andrew et al., 2021; Wang et al., 2022) suggest adaptive clipping, where the total sensitivity is re-evaluated throughout the optimisation process to avoid adding excessive amounts of noise to the gradients. However, since in all of these the perturbation is still scaled with the total sensitivity aggregated over the dimensions, this approach does not improve the disparate effect that the Gaussian noise will have on the dimensions with smaller gradients, so we see these approaches as orthogonal to our work. Besides the aforementioned works that study the tuning of the clipping threshold, there are some recent works that study the use of clipping in terms of obtaining optimal rates in DP convex optimization (Kamath et al., 2022; Lowy & Razaviyayn, 2023).

For noise aware DP Bayesian inference, the most related work is by Bernstein and Sheldon (Bernstein & Sheldon, 2018; 2019) and Kulkarni et al. (2021). These works include the DP perturbation mechanism into a probabilistic model using perturbed sufficient statistics as the inputs. This allows capturing the DP-induced additional uncertainty in the posterior distribution of model parameters.

## 2 Preliminaries

## 2.1 Differential Privacy

Definition 2.1 (Differential Privacy (Dwork et al., 2006)).
For ϵ ≥ 0 and δ ∈ [0, 1], a randomised mechanism M : D → R satisfies (ϵ, δ)-differential privacy if for any two data sets differing in only one element, D, D′ ∈ D, and for all outputs S ⊆ im(M), the following constraint holds:

$$\Pr(\mathcal{M}(D)\in S)\leq e^{\epsilon}\Pr(\mathcal{M}(D^{\prime})\in S)+\delta.\tag{1}$$

Property 2.1 (Post-processing immunity (cf. Dwork et al., 2014)). Let M : D → R be an (ϵ, δ)-DP mechanism and f : R → Z any function that does not access the sensitive data. Then f ◦ M is (ϵ, δ)-DP.

## 2.2 Variational Inference

Variational inference is a commonly applied technique in probabilistic inference, where the aim is to learn an approximation for a (typically intractable) posterior distribution of the parameters of a probabilistic model (Jordan et al., 1999). The goal is to minimise the Kullback-Leibler (KL) divergence of this approximation to the true posterior. However, computing the KL divergence directly is intractable as well, so instead we maximise a quantity called the *evidence lower bound* (ELBO) over the parameters of the variational approximation. For a probabilistic model p(D, θ), where D denotes the observed variables and θ the model parameters, and for a variational approximation q(θ) of the posterior, the ELBO L(q) is derived as follows:

$$\mathrm{KL}(q(\boldsymbol{\theta})\,||\,p(\boldsymbol{\theta}\,|\,D))=\mathbb{E}_{q(\boldsymbol{\theta})}\left[\log\frac{q(\boldsymbol{\theta})p(D)}{p(D,\boldsymbol{\theta})}\right]\tag{2}$$

$$=\log p(D)-\underbrace{\mathbb{E}_{q(\boldsymbol{\theta})}\left[\log\frac{p(D,\boldsymbol{\theta})}{q(\boldsymbol{\theta})}\right]}_{:=\mathcal{L}(q)}.\tag{3}$$

Now, as the KL divergence is non-negative, we have

$$\log p(D)\geq\mathcal{L}(q),\tag{4}$$

hence the quantity L(q) is called the evidence lower bound. We can easily see that if the KL divergence between q and the posterior is 0, the ELBO matches the evidence. Since the evidence is independent of the variational parameters, minimizing the KL divergence w.r.t. the variational parameters is equivalent to maximizing the ELBO. In the remainder of this paper we use the following equivalent formulation of the ELBO:

$$\mathcal{L}(q)=\mathbb{E}_{q(\boldsymbol{\theta})}\left[\log p(D,\boldsymbol{\theta})\right]+H(q),\tag{5}$$

where H(q) denotes the (differential) entropy of q.

In the following we first restrict ourselves to the commonly used *mean-field* variational inference, i.e., using a Gaussian with diagonal covariance as the posterior approximation. We will later generalise this to a full-rank covariance approximation. For a d-dimensional parameter vector θ, the diagonal approximation is parametrised by the means mq ∈ R^d and the dimension-wise standard deviations σq ∈ R^d. We further reparametrise the model with sq = T^{-1}(σq), where T : R → R₊ is monotonic, in order to facilitate optimisation in an unconstrained domain. Both T and T^{-1} are applied element-wise for each of the parameters. Common choices for T are the exponential function T(s) = exp(s) and the softplus function T(s) = log(exp(s) + 1) (used e.g. in the Pyro probabilistic programming package (Bingham et al., 2019)). We use ξ = (mq, sq) to refer to the complete set of variational parameters.
A draw from this posterior distribution can then be written as (Kingma & Welling, 2014):

$$\boldsymbol{\theta}:=\boldsymbol{\theta}(\boldsymbol{\eta};\boldsymbol{m}_{q},\boldsymbol{s}_{q})=\boldsymbol{m}_{q}+T(\boldsymbol{s}_{q})\boldsymbol{\eta},\tag{6}$$

where η ∼ N(0, I_d), θ, η ∈ R^d and I_d is a d-dimensional identity matrix. Kucukelbir et al. (2017) use this reparametrisation trick together with single-sample MC integration to give the ELBO a differentiable form with gradients:

$$\mathbf{g}_{m}:=\nabla_{\mathbf{m}_{q}}\mathcal{L}(q)=\nabla_{\mathbf{m}_{q}}\log p(D,\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))\tag{7}$$

$$\mathbf{g}_{s}:=\nabla_{\mathbf{s}_{q}}\mathcal{L}(q)=\nabla_{\mathbf{s}_{q}}\log p(D,\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))+\nabla_{\mathbf{s}_{q}}H(q),\tag{8}$$

where η ∼ N(0, I). Throughout this work we assume that the likelihood factorises as p(D | θ) = ∏_{x∈D} p(x | θ). Using N to denote the size of D, we can now further decompose the gradients in (7) and (8) as

$$\mathbf{g}_{m}=\sum_{x\in D}\left(\nabla_{\mathbf{m}_{q}}\log p(x\,|\,\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))+\frac{1}{N}\nabla_{\mathbf{m}_{q}}\log p(\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))\right)\tag{9}$$

$$\mathbf{g}_{s}=\sum_{x\in D}\left(\nabla_{\mathbf{s}_{q}}\log p(x\,|\,\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))+\frac{1}{N}\nabla_{\mathbf{s}_{q}}\log p(\boldsymbol{\theta}(\boldsymbol{\eta};\mathbf{m}_{q},\mathbf{s}_{q}))\right)+\nabla_{\mathbf{s}_{q}}H(q).\tag{10}$$

We denote the per-example gradient components (i.e., those for each individual x) that appear in the above sums with g_{m,x} and g_{s,x} respectively.

A common approach to performing variational inference in practice is to initialise sq to small values, which allows the algorithm to move mq quickly close to its optimal values, because the narrow approximation induces a large error in the KL term of the ELBO.

Assumptions for the probabilistic model and variational posterior For the rest of the paper, we make the following assumptions:

- **Exchangeability**: We assume that there is no ordering in observing the elements of data set D: p(D | θ) = ∏_{x∈D} p(x | θ).
- **Gaussian posterior approximation**: q(θ) = N_d(mq, Σq), where N_d denotes the pdf of a d-dimensional Gaussian. When working with a diagonal-covariance Gaussian posterior approximation, we denote the dimension-wise standard deviations with σq.
- **Optimising the σq**: For the diagonal Gaussian, we use a mapping function T : R → R₊ to optimise the variational standard deviations. We apply this function element-wise to the parameter vector sq. Commonly used examples for this are the softplus function T(s) = log(1 + exp(s)) as well as the exponential function T(s) = exp(s).

## 3 Differentially Private Variational Inference

The first algorithm for differentially private variational inference for non-conjugate models (Jälkö et al., 2017) optimises the ELBO using gradients (7) and (8) with differentially private stochastic gradient descent (Abadi et al., 2016) to provide privacy. This involves concatenating each of the per-example gradients to obtain g_x = (g_{m,x}^T, g_{s,x}^T)^T, clipping g_x so that it has ℓ2 norm no larger than a threshold C to limit the sensitivity, and finally adding Gaussian noise to the sum of these clipped per-example gradients to obtain g̃, which is used for the parameter update. We refer to this algorithm in the following as *vanilla DPVI*.

This formulation induces a problem which, while seemingly minor at first glance, severely affects the accuracy of solutions. We next isolate this problem and then propose a solution through a detailed analysis of the gradients of the variational parameters.
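To make the clip-and-perturb release concrete, the following is a minimal NumPy sketch of a single vanilla DPVI gradient release. This is an illustration under our own assumptions, not the d3p implementation: the per-example gradients are random placeholders standing in for whatever autodiff would return for a concrete model, and all names are our own.

```python
# A minimal sketch of one vanilla DPVI gradient release (not the d3p API).
import numpy as np

rng = np.random.default_rng(0)
n, d = 128, 5                          # batch size, model parameter dimension
g_m = rng.normal(size=(n, d))          # placeholder per-example gradients w.r.t. m_q
g_s = 0.05 * rng.normal(size=(n, d))   # typically much smaller, cf. Sec. 3.1

C, sigma_dp = 1.0, 2.0                 # clipping threshold, noise multiplier

# g_x = (g_{m,x}^T, g_{s,x}^T)^T: concatenate both gradient parts per example
g = np.concatenate([g_m, g_s], axis=1)                    # shape (n, 2d)
# Clip each per-example gradient to l2 norm at most C ...
scale = np.minimum(1.0, C / np.linalg.norm(g, axis=1, keepdims=True))
# ... then sum and add Gaussian noise calibrated to the total sensitivity C
g_tilde = (scale * g).sum(axis=0) + sigma_dp * C * rng.normal(size=2 * d)
g_m_tilde, g_s_tilde = g_tilde[:d], g_tilde[d:]           # used in the update
```

Note that both halves of g̃ receive noise of the same scale σ_DP·C regardless of their typical magnitudes; this is exactly the disparity analysed next.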
## 3.1 Disparate Perturbation Of Variational Parameter Gradients

While the clipping of the gradients allows us to bound the global sensitivity of the gradient vector, it completely ignores any differences in gradient magnitudes across the dimensions. As DPVI (and more generally DP-SGD) proceeds to add Gaussian noise with standard deviation proportional to the clipping threshold to all of the dimensions, the signal-to-noise ratio can vary greatly across the parameter dimensions. Parameter dimensions that experience a low signal-to-noise ratio will converge much slower than others (cf. Domke, 2019 and references therein).¹

Next, we will show that such a magnitude difference arises between the gradients of variational parameters mq and sq. Note that the gradient of Equation (6) w.r.t. mq is ∇_{mq} θ(η;mq, sq) = 1, which leads to the following proposition (a more detailed derivation can be found in Appendix B):

Proposition 3.1. Assume q to be diagonal Gaussian, then the gradient gs in Equation (8) becomes

$$\mathbf{g}_{s}=\boldsymbol{\eta}T^{\prime}(\mathbf{s}_{q})\mathbf{g}_{m}+\nabla_{\mathbf{s}_{q}}H(q),\tag{11}$$

where T′ denotes the derivative of T.

As the entropy term is independent of the data, our update equation for sq depends on the data only through gm. In order to show that this term gets affected by the noise more than gm itself, it suffices to inspect the magnitudes of η and T′(sq).

¹We also provide a high-level argument why this is the case in Appendix A.
The natural gradients g nat are computed using the inverse of the Fisher information matrix I as $$\begin{array}{l}{{\mathcal{I}=\mathbb{E}_{\theta|_{\theta_{s},m_{q}}}\left[(\nabla_{\theta}\log q(\theta))(\nabla_{\theta}\log q(\theta))^{T}\right]}}\\ {{\mathbf{g}^{n a t}=\mathcal{I}^{-1}\mathbf{g}.}}\end{array}$$ $$\quad(15)$$ $$\quad(16)$$ For our setting this leads to $$\begin{array}{l}{{g_{m}^{n a t}=T(s_{q})^{2}\nabla_{m_{s}}{\mathcal{L}}(q)}}\\ {{g_{s}^{n a t}=\frac{1}{2T^{\prime}(s_{q})}\left(\eta g_{m}^{n a t}+\frac{T(s_{q})^{2}}{T^{\prime}(s_{q})}\nabla_{s_{q}}H(q)\right).}}\end{array}$$ We observe that in the natural gradients the scaling by T ′(sq) in the gradients of sq is now reversed, meaning that for small T ′(sq) the gradients of sq will tend to dominate over those of mq. Therefore we expect natural gradients to result in a different instance of the problem of disproportionate DP noise instead of resolving it. Preconditioning of Gradients The simplest way to fix the disproportionate DP noise is preconditioning of the gradients to undo the downscaling of the data-dependent part in Eq. (11), by multiplying with (T ′(sq))−1, to obtain $$\mathbf{g}_{s}^{p r e c o n}={\frac{1}{T^{\prime}(\mathbf{s}_{q})}}\mathbf{g}_{s}=\eta\mathbf{g}_{m}+{\frac{\nabla_{\mathbf{s}_{q}}H(q)}{T^{\prime}(\mathbf{s}_{q})}}.$$ We can see that the data dependent part of g precon s(the first term) is of the same magnitude as gm, and thus the noise affects the gradient components equally.2 Note that while this approach addresses the issue of different magnitude in the gradients, it does so at the cost of increasing the overall ℓ2-norm of the full gradient (by increasing that of gs while keeping gm fixed). This in turn requires a higher clipping threshold in order to avoid additional bias due to clipping, which increases DP noise variance. 2Note that this scaling also affects the data-independent entropy term in the gradient for sq. While the scaling term (T ′(sq))−1does increase the entropy part for small sq, the data-dependent term is still typically much larger and will dominate the gradient. $$(17)$$ Aligned Gradients We will now discuss a new alternative method for resolving the disproportionate DP noise problem that addresses the gradient magnitude problem while avoiding the issues of the preconditioning approach. Equation (11) shows that we can write gs in terms of gm and an additional entropy term. Since neither the scaling factor ηT ′(sq) nor the entropy gradient ∇sqH(q) depend on the data D, it suffices to release the gradients gm under DP as g˜m, from which we obtain the g˜s via Eq. (11). As this is simply post-processing, it does not incur additional DP cost. Because g˜s is now computed directly as a transformation of g˜m, the noise term in both gradients is aligned in proportion to the gradient signals. We refer to this approach as *aligned DPVI* for the rest of the paper. The procedure for computing the aligned DPVI gradients is summarized in Algorithm 1. 
Algorithm 1 The aligned gradient procedure (single step)
1: θ ← mq + T(sq)η, where η ∼ N(0, I) ▷ Draw a sample from the variational posterior
2: g_{m,x} ← ∇_{mq} L(q) for x ∈ D ▷ Compute the per-example gradients for mq
3: γ_x ← min(1, C/∥g_{m,x}∥) for x ∈ D ▷ Compute the clipping multiplier
4: g̃_m ← Σ_{x∈D} γ_x g_{m,x} + σ_{DP} C ψ, where ψ ∼ N(0, I) ▷ Get DP release for g_m
5: g̃_s^{aligned} ← ηT′(sq) g̃_m + ∇_{sq} H(q) ▷ Get DP aligned g_s via post-processing

Corollary 3.1. Let δ(ϵ; σ_{DP}, T) be any privacy accounting oracle. Let ϵ > 0, σ_{DP} > 0. Then aligned DPVI consisting of T iterations of Alg. 1 is (ϵ, δ(ϵ; σ_{DP}, T))-DP.

Proof. The g̃m released in step 4 of Algorithm 1 satisfies the privacy guarantee provided by the Gaussian mechanism. As η, T′(sq) and the entropy contribution ∇_{sq}H(q) are independent of the data, g̃_s^{aligned} satisfies the same privacy guarantee through post-processing immunity (Property 2.1). Hence the privacy guarantees of the entire algorithm follow from the composed privacy cost of the released gradients for the variational means {g̃_m^{(t)}}_{t=1}^{T}.

The following theorem (proved in Appendix D) guarantees that the variance in the gradients of sq is reduced in aligned DPVI:

Theorem 3.1. Assume that C^{vanilla} ≥ C^{aligned}. If we obtain σq through a transformation T such that T′(s) ≤ 1, then for any fixed batch,

$$\mathrm{Var}_{\boldsymbol{\eta},\boldsymbol{\psi}}\left[\tilde{\mathbf{g}}_{s}^{aligned}\right]<\mathrm{Var}_{\boldsymbol{\eta},\boldsymbol{\psi}}\left[\tilde{\mathbf{g}}_{s}^{vanilla}\right],\tag{18}$$

where η is the random variable of the MC approximation to the ELBO and ψ that of the DP perturbation. Note that the softplus transformation used in our experiments satisfies the assumption T′(s) ≤ 1.

Algorithm 2 The aligned natural gradient procedure (single step)
1: θ ← mq + T(sq)η, where η ∼ N(0, I) ▷ Draw a sample from the variational posterior
2: g^{nat}_{m,x} ← I_m^{-1} ∇_{mq} L(q) for x ∈ D ▷ Compute per-example natural gradients for mq
3: γ_x ← min(1, C/∥g^{nat}_{m,x}∥) for x ∈ D ▷ Compute the clipping multiplier
4: g̃^{nat}_m ← Σ_{x∈D} γ_x g^{nat}_{m,x} + σ_{DP} C ψ, where ψ ∼ N(0, I) ▷ Get DP release for g^{nat}_m
5: g̃^{nat,aligned}_s ← I_s^{-1}(ηT′(sq) I_m g̃^{nat}_m + ∇_{sq} H(q)) ▷ Get DP aligned g^{nat}_s via post-processing

Aligned Natural Gradients Finally we also consider a combination of natural gradients and aligning, to enable the benefits of natural gradients for convergence while simultaneously removing the need to consider the gradient of sq for DP clipping and perturbation. The full procedure is given in Algorithm 2. We use I_m, I_s to refer to the blocks of I corresponding to the gradient components.
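As a concrete companion to Algorithm 1, here is a minimal NumPy sketch of a single aligned-gradient step for the diagonal case with T = softplus. The per-example gradients are placeholders for autodiff output, the entropy gradient uses H(q) = Σ_j log T(s_{q,j}) + const for a diagonal Gaussian, and none of the names come from the d3p implementation.

```python
# A minimal sketch of the single step in Algorithm 1 (not the d3p API).
import numpy as np

rng = np.random.default_rng(1)
n, d = 128, 5
m_q, s_q = np.zeros(d), -2.0 * np.ones(d)
C, sigma_dp = 1.0, 2.0

softplus = lambda s: np.log1p(np.exp(s))        # T
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))    # T' (derivative of softplus)

eta = rng.normal(size=d)
theta = m_q + softplus(s_q) * eta               # step 1: draw from q
g_m_x = rng.normal(size=(n, d))                 # step 2: placeholder per-example grads

gamma = np.minimum(1.0, C / np.linalg.norm(g_m_x, axis=1, keepdims=True))  # step 3
g_m_tilde = (gamma * g_m_x).sum(axis=0) \
            + sigma_dp * C * rng.normal(size=d)  # step 4: DP release of g_m

grad_H = sigmoid(s_q) / softplus(s_q)            # data-free entropy gradient T'/T
g_s_tilde = eta * sigmoid(s_q) * g_m_tilde + grad_H  # step 5: post-processing
```

Only step 4 touches the data through the Gaussian mechanism; step 5 is pure post-processing, which is why the accounting in Corollary 3.1 only tracks the releases of g̃m.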
## 3.1.2 Extending To Full-Rank Covariance Matrices

So far we have only considered a diagonal Gaussian as the variational posterior. Due to the low dimensionality of the variational parameters, this approach is computationally effective and often applied in practice, but it has limitations: it cannot capture correlations among different model parameters and, more importantly, it will underestimate the marginal variances of the parameters when the true covariance structure is non-diagonal. For those reasons, a full-rank covariance approximation would be favoured to correctly capture the uncertainty of the parameters. However, learning the full-rank covariance approximation results in a quadratic (in the number of dimensions d) expansion of the number of learnable parameters. This not only increases computational costs but also implies less accurate learning of the parameters under DP, as the available privacy budget has to be spread over more parameters.

Fortunately, the aligning procedure can be extended to full-rank Gaussian approximations as well, which allows us to alleviate the issue of increased sensitivity. The proof is very similar to the diagonal case. Instead of the parameters sq corresponding to marginal standard deviations, we now consider a parameter vector aq ∈ R^{d(d+1)/2} and a transformation function T : R^{d(d+1)/2} → R^{d×d} such that T(aq) corresponds to the Cholesky factor of the posterior covariance. That is, T must guarantee that T(aq) is a lower triangular matrix with positive entries along its diagonal, which requires similar transformations as in the purely diagonal covariance case discussed previously. Now, the reparametrisation step in (6) becomes

$$\boldsymbol{\theta}:=\mathbf{m}_{q}+T(\mathbf{a}_{q})\boldsymbol{\eta},\tag{19}$$

and the gradient w.r.t. aq can be written as

$$\mathbf{g}_{a}=J_{a}(T(\mathbf{a}_{q})\boldsymbol{\eta})\mathbf{g}_{m}+\nabla_{\mathbf{a}_{q}}H(q),\tag{20}$$

where J_a denotes the Jacobian of T w.r.t. aq. Therefore, the gradient ga can again be written as a data-independent transformation of gm. Thus, under the post-processing immunity of DP, we can get the DP gradients for aq from DP versions of gm without suffering the quadratic increase in the size of the input to the underlying Gaussian mechanism present in vanilla DPVI.
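To illustrate the post-processing in Eq. (20), here is a small JAX sketch. The Cholesky transform `T_chol`, the entropy expression and all variable names are our own illustration, not the authors' implementation; the key point is that the Jacobian term is a data-free vector-Jacobian product applied to the DP release g̃m.

```python
# A sketch of the full-rank aligned post-processing in Eq. (20), assuming a
# softplus-positive diagonal for the Cholesky factor (our own choice).
import jax
import jax.numpy as jnp

d = 3

def T_chol(a_q):
    """Map the d(d+1)/2 vector a_q to a lower-triangular Cholesky factor."""
    L = jnp.zeros((d, d)).at[jnp.tril_indices(d)].set(a_q)
    diag = jax.nn.softplus(jnp.diag(L))          # enforce positive diagonal
    return L - jnp.diag(jnp.diag(L)) + jnp.diag(diag)

def entropy(a_q):
    # H(q) = sum(log diag(L)) + const for q = N(m_q, L L^T)
    return jnp.sum(jnp.log(jnp.diag(T_chol(a_q))))

key = jax.random.PRNGKey(0)
eta = jax.random.normal(key, (d,))
a_q = jnp.zeros(d * (d + 1) // 2)
g_m_tilde = jnp.ones(d)                          # DP-released mean gradient (placeholder)

# g_a = J_a(T(a_q) eta)^T g_m_tilde + grad_a H(q); both terms are data free.
_, vjp = jax.vjp(lambda a: T_chol(a) @ eta, a_q)
g_a_tilde = vjp(g_m_tilde)[0] + jax.grad(entropy)(a_q)
```

Because the VJP and the entropy gradient never touch the data, the d(d+1)/2 covariance parameters are obtained at the privacy cost of releasing only the d-dimensional g̃m.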
## 3.2 Leveraging DP Parameter Iterates To Reduce Error And For Uncertainty Estimation

As with other applications of DP-SGD, vanilla DPVI does not take the uncertainty that the DP mechanism introduces into account. Instead, after a finite number of iterations, the values found in the last iteration are usually treated as the true variational parameters. We argue that treating them this way can lead to severe errors in accuracy because these values are merely a noisy estimate of the optimal values. Annealing the learning rate to reduce fluctuations around the optimum does not help: without knowledge of the convergence point, the distance to an optimum can still be large due to the random walk prior to annealing. Instead, in the following we suggest making use of some fraction of the DP parameter iterates output by the algorithm to 1) average out and 2) estimate the additional variance introduced by DP noise additions, making the learned model approximately noise-aware. While 1) is a known technique called iterate averaging (Polyak & Juditsky, 1992), 2) has not been previously applied in this context to the best of our knowledge. Note that privacy guarantees of DP-SGD algorithms extend to all parameter iterates, so this comes at no additional privacy cost.

We first briefly review some theory about the random walk behaviour around the optimum and then discuss the details of our proposed approach. Mandt et al. (2017) investigated the random walk behaviour around the optimum for regular (non-DP) SGD arising from subsampling. They assume that near the optimum ξ∗ the loss function is well approximated by a quadratic approximation L(ξ) ≈ ½(ξ − ξ∗)^T A(ξ − ξ∗), and show that the stochastic process around the optimum can be characterised as an Ornstein-Uhlenbeck (OU) process

$$d\boldsymbol{\xi}(t)=-\alpha\mathbf{A}(\boldsymbol{\xi}(t)-\boldsymbol{\xi}^{*})dt+\frac{1}{\sqrt{S}}\alpha\mathbf{B}dW(t),\tag{21}$$

where W(t) is a Wiener process, α is the step size of the SGD (assumed constant), S is the size of the subsampled data and B the Cholesky decomposition of the covariance matrix Z of the noise due to the subsampling. Directly adapting this analysis, we suggest that under the same regularity assumptions, DP-SGD is still an OU process. The principle of the proof is straightforward: DP-SGD adds an additional Gaussian noise component, allowing us to add the (diagonal) covariance matrix of the DP noise to Z and obtain a B̂ such that

$$\mathbf{Z}+S\sigma_{DP}^{2}\mathbf{I}=\hat{\mathbf{B}}\hat{\mathbf{B}}^{T}.\tag{22}$$

The more detailed proof can be found in Appendix E. This insight allows us to make the following two suggestions to improve the parameter estimates of DPVI.

Iterate averaging to reduce noise in parameter estimate In order to reduce noise in our learned variational parameters, we apply iterate averaging and average the parameter traces, i.e., the sequence of parameter iterates during optimisation, over the last T_burn-out iterates for which we assume the trace has converged. As the OU process in Equation (21) is symmetric around the optimum, the mean of the trace is an unbiased estimator of ξ∗. Compared to using the final iterate of the chain (also an unbiased estimator of ξ∗), the averaged trace reduces the variance of the estimator by up to a factor of T_burn-out^{-1}. Iterate averaging has previously been used to reduce the noise of DP-SGD, for example by Bassily et al. (2014), who scale the learning rate w.r.t. the total number of iterations, and by Bassily et al. (2019) and Lowy & Razaviyayn (2021), who average over all the iterates obtained during training. However, the key difference in our method is that we actively estimate the point of convergence of the iterates, and only average after that. The convergence is required for estimating the DP-induced noise from the OU process.

Estimating the increased variance due to DP Finally, since our posterior approximation is Gaussian and the stationary distribution of the OU process is Gaussian as well, we can add the variance of the averaged traces to the variances of our posterior to absorb the remaining uncertainty due to the inference process, and recover a noise-aware posterior approximation.

Now the remaining problem is to determine T_burn-out, the length of the trace over which the parameters have converged. For this we suggest a simple convergence check based on linear regression: for each of the traces, we fit linear regression models over different candidate T_burn-out. The regressor X_linreg is set to the interval [0, 1] split into T_burn-out points in ascending order. The responses y are set to the corresponding parameter values in the trace, e.g. y = {m_q^{(t)}}_{t=T−T_burn-out}^{T}. If the linear regression model has a sufficiently small slope coefficient, we consider the trace as converged and pick the longest T_burn-out for which this is the case.
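A minimal sketch of this convergence check and the resulting noise-aware estimate follows. The candidate tail lengths are our own illustrative choices; the slope threshold 0.05 matches the value later used in Sec. 4.3.

```python
# A minimal sketch of the linear-regression convergence check and iterate
# averaging described above; `trace` is the iterate sequence of one parameter.
import numpy as np

def burn_out_length(trace, candidates=(100, 250, 500, 1000), slope_tol=0.05):
    """Longest candidate tail length whose least-squares fit over x in [0, 1]
    has absolute slope below slope_tol; None if no tail passes."""
    best = None
    for t_bo in candidates:          # candidates in ascending order
        if t_bo > len(trace):
            continue
        x = np.linspace(0.0, 1.0, t_bo)
        slope = np.polyfit(x, np.asarray(trace[-t_bo:]), 1)[0]
        if abs(slope) < slope_tol:
            best = t_bo
    return best

def noise_aware_estimate(trace):
    """Iterate-averaged estimate plus the tail's std; the std is what gets
    added to the posterior variances to make them noise aware."""
    t_bo = burn_out_length(trace)
    if t_bo is None:                 # no converged tail: fall back to last iterate
        return trace[-1], 0.0
    tail = np.asarray(trace[-t_bo:])
    return tail.mean(), tail.std()
```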
## 4 Experiments

We experimentally test our methods for two different tasks using the mean-field approximation with real data: learning a probabilistic generative model for private data sharing and learning a logistic regression model. We also experimentally explore aligned DPVI with a full-rank Gaussian approximation using simulated data.

## 4.1 Implementation Details

We implemented the different variants of DPVI introduced in Sec. 3.1.1 using the d3p package (Prediger et al., 2022) for the NumPyro probabilistic programming framework (Phan et al., 2019; Bingham et al., 2019). To compute the privacy cost in our experiments, we use the Fourier accountant method (Koskela et al., 2021). The hyperparameters used in our experiments are discussed in Appendix F. We use the softplus function as our transformation T in all experiments. The code for reproducing the experiments can be found at https://github.com/DPBayes/dpvim-experiments.

In order to assess learning over multiple parameters which converge to different values, and over repeated runs with different initial values, we define a *mean proportional absolute error (MPAE)*: let ξ^{(t)} ∈ R^D be the parameter vector at iteration t and ξ∗ be the parameter vector at the optimum.³ We measure the MPAE at iteration t as

$$\mathrm{MPAE}(\boldsymbol{\xi}^{(t)})=\frac{1}{D}\sum_{d=1}^{D}\frac{|\boldsymbol{\xi}_{d}^{(t)}-\boldsymbol{\xi}_{d}^{*}|}{|\boldsymbol{\xi}_{d}^{(0)}-\boldsymbol{\xi}_{d}^{*}|}.\tag{23}$$

An MPAE value of 0 indicates perfect recovery of the optimum; a value of 1 suggests that the parameters on average did not move away from their initialisation.

³Since the optimal value ξ∗ is typically unknown, we instead use the results of classical non-DP variational inference in its place in practice.

![8_image_0.png](8_image_0.png)

Figure 1: **UKB experiment:** (a) Aligned DPVI makes the smallest error in learning the variational parameters across all different initial values for σq, implying it is the most robust. (b) Aligned DPVI converges faster than the other variants while also having less deviation across the repeats (all initialised at σq = 1). Both subfigures show averaged MPAE for vanilla, aligned and preconditioned DPVI with error bars in (b) indicating standard error over repeats.

## 4.2 Using DPVI To Learn A Generative Model

Recently, Jälkö et al. (2021) suggested using DPVI to learn a probabilistic generative model for differentially private data sharing. Note that in this application it is especially crucial to learn the posterior variances well to faithfully reproduce the uncertainty of the original data in the synthetic data set.

A recent study by Niedzwiedz et al. (2020) on personal health data from the United Kingdom Biobank (UKB) (Sudlow et al., 2015) studied how socio-economic factors affect an individual's risk of catching the SARS-CoV-2 virus. We aim to produce synthetic data, using DPVI to learn the generative model, from which we can draw similar discoveries. Following Niedzwiedz et al. (2020), we consider a subset of UKB data which comprises 58 261 individuals with d = 7 discrete (categorical) features. We split the features into a set of explanatory variables and a response variable indicating whether the individual was infected by SARS-CoV-2.
We place a mixture model for the explanatory variables X, and a Poisson regression model mapping the explanatory variables to the responses y, using θ_X, θ_y and π to designate the model parameters:

$$p(\mathbf{X}\mid\boldsymbol{\theta}_{\mathbf{X}},\boldsymbol{\pi})=\sum_{k=1}^{K}\pi_{k}\prod_{j=1}^{d}\mathrm{Categorical}(\mathbf{X}_{j}\mid\boldsymbol{\theta}_{\mathbf{X}}^{(k)})\tag{24}$$

$$p(\mathbf{y}\mid\mathbf{X},\boldsymbol{\theta}_{\mathbf{y}})=\mathrm{Poisson}(\mathbf{y}\mid\exp(\mathbf{X}\boldsymbol{\theta}_{\mathbf{y}})).\tag{25}$$

In our experiments, we set the number of mixture components to K = 16, chosen based on internal tests. Priors for the model parameters are specified in Appendix G.1.

Aligned DPVI is more robust to initialisation We first demonstrate that aligned DPVI improves robustness to initialisation over vanilla and preconditioned DPVI. To do so, we fix a privacy budget of ε = 1 and the number of passes over the entire data set, i.e., *epochs*, to 1000, and vary the initial value of sq such that σq is one of 0.01, 0.032, 0.1, 0.316 or 1. We perform 10 repetitions with different random seeds, over which we keep the initialisation of sq fixed but initialise mq randomly. We compute the MPAE over the parameters of the Poisson regression part of the model, which corresponds directly to the downstream prediction task we are ultimately interested in.

![9_image_0.png](9_image_0.png)

Figure 2: **UKB experiment:** The aligned variant remains the most accurate method, even if we run the algorithm for longer. Initial σq = 1.

![9_image_1.png](9_image_1.png)

Figure 3: **UKB experiment:** RMSE of parameters found in downstream analysis when doing iterate averaging with different T_burn-out. Iterate averaging can reduce the error and significantly reduce the variance of the error compared to using only the last iterate. Initial σq = 1, 4000 epochs of aligned DPVI.

Figure 1a shows the trade-off the different variants of DPVI make between the MPAE in variational means (mq) and stds (sq), averaged over the 10 repetitions for the different initial values of sq. We observe that the aligned variant is able to achieve small errors in mq and sq simultaneously while the alternatives cannot. To see how the MPAEs for mq and sq behave individually w.r.t. the initial value of sq, refer to Appendix H. The natural gradient as well as the aligned natural gradient method performed slightly worse than the aligned method in this experiment and we report results for them in the appendix as well.

Longer runs do not help vanilla DPVI Figure 1b suggests that vanilla DPVI has not converged in terms of MPAE, in the allotted number of iterations, for an initial σq = 1. An obvious solution then seems to be to run the inference for longer. We now fix the initialisation of σq to 1, which designates the least relative scaling of gradients at the beginning of training and thus a best-case scenario for vanilla DPVI. We vary the number of training epochs from 200 to 8000 while always keeping the privacy budget fixed at ε = 1. Since longer runs require more accesses to the data, the DP perturbation scale increases with the number of iterations. As before, we repeat 10 inference runs for each parameter choice.

Figure 2 shows the final MPAE over all 10 repetitions and all parameters in the Poisson regression part of the model for the different numbers of total epochs.⁴ The upper panel shows that with an increasing number of epochs, the difference in MPAE of variational means between vanilla and aligned DPVI vanishes.
However, the lower panel shows clearly that even in a long training regime, vanilla DPVI still does not converge in the variational variances and is consistently beaten by our aligned variant.

Iterate averaging increases robustness of downstream task Next, we test the iterate averaging of noisy parameter traces for the generative model. We use the linear regression technique discussed in Sec. 3.2, individually for each parameter, to determine the length of the trace to average. We then use synthetic data from the generative model to learn the Poisson regression model used by Niedzwiedz et al. (2020) and compare the regression coefficients against those obtained from the original data. Further details on the downstream analysis setup are given in Appendix I.

Figure 3 shows that the results from the iterate-averaged model are less noisy compared to just using the last iterate as the true parameters. However, the approach appears to be somewhat unstable: changing the initial values of σq to 0.1 causes the variance of the error for ε = 1 to increase over the non-averaged case. This is likely due to the simple linear regression heuristic we used in this experiment not detecting convergence correctly in this case.

⁴Note that it is not showing the evolution of the error over a single training of 8000 epochs.

## 4.2.1 Experiments On The US Census Data Set

To ensure the results are not specific to only a single data set, we next applied DPVI to a large set of US Census 1990 data (from the UCI repository; Dua & Graff, 2017), in the same data sharing setting as the UK Biobank experiment. We focused on individuals with a military background and, using a similar mixture model/Poisson regression combination as with the UK Biobank experiment, we tried to predict the poverty indicator of an individual given military service related and demographic features. After preprocessing, the data comprised 320 754 samples with 13 features. The hyperparameters for the different DPVI variants were identical to the ones used in the UK Biobank experiment. Figure 4 shows again that the aligned variant learns the variational scales better in terms of the mean proportional absolute error (MPAE) of DPVI parameters to the non-private optimum, as was the case with the UK Biobank data.

## 4.3 Logistic Regression With The Adult Data Set

![10_image_0.png](10_image_0.png)

Figure 4: **US Census experiment:** The aligned DPVI method learns the variational standard deviations faster than the alternatives. The plot shows the average MPAE and the standard deviation across 10 independent repeats of the variational inference with σq initialised to 1.0, applied to the US Census data sharing experiment.

As the UK Biobank data is access-restricted, we further demonstrate our methods on the publicly available Adult data set from the UCI machine learning repository (Dua & Graff, 2017), which contains 30 162 training records. We learn a logistic regression model, classifying whether the income feature of the data exceeds $50k based on all other features.

Aligned natural gradient outperforms the other variants We compare our private logistic regression coefficients to the ones obtained using privacy-agnostic VI. We also test the aligned natural gradient and the natural gradient variants for this data.

![10_image_1.png](10_image_1.png)

Figure 5: **Adult logistic regression experiment:** (a) The aligned natural gradient method is closest to the non-private variational parameters.
Error is computed as the mean ℓ2-norm against the non-private baseline over 20 repeats. Error bars show the standard error of the mean. (b) The standard deviation inferred from the converged traces (determined by the linear regression method) is close to that computed over the last iterates across different repeats. Lines show the average MSE, error bars show the standard deviation across repeats. The baselines show the mean squared norm of the standard deviation estimated from repeated runs, corresponding to the MSE for not estimating DP-induced noise.

Figure 5a shows the ℓ2-norm between the variational parameters learned with and without DP. From this figure we see that the aligned natural gradient method clearly outperforms all the other variants in this setting. Additionally, we clearly see again that vanilla DPVI learns the stds poorly, and also that the natural gradient variant reverses the problem compared to vanilla and struggles in learning the variational means, as suggested in Section 3.1.1.

DP noise can be inferred from the (converged) traces We test how well we can recover the DP noise effect from the parameter traces. We limit the test to the coefficients that have converged according to the linear regression test described in Section 3.2. Based on our internal tests, we chose a slope of 0.05 as the threshold for convergence. We compare the standard deviation of the converged parameter trace to an estimate of the DP-induced noise given by the standard deviation of the last iterates over 50 repeats, in terms of mean squared error over the different parameter sites. Figure 5b shows that the noise std estimated from the converged traces is close to the noise std across the last iterates of multiple independent repeats.

## 4.4 Experiments With Full-Rank Covariance

As a final experiment, we investigate aligned gradients for a full-rank Gaussian posterior approximation. We perform Bayesian linear regression over a simulated data set where we can control the number of feature dimensions as well as the strength of correlations. We control the latter by setting the rate of nonzero off-diagonal entries in the covariance matrix for simulated data points. Further details of the experimental setup can be found in Appendix J.

Figure 6 confirms that aligned gradients improve the average predictive log-likelihood of the posterior approximation over a held-out test set. This is true even when the data is not strongly correlated (panels in the right column), as the large increase in parameters over which vanilla DPVI has to split the privacy budget negatively impacts the learning.

## 5 Discussion

In this paper we introduced the aligned gradient solution for the specific task of learning a Gaussian variational posterior. The technique should be applicable also in other tasks where gradients with respect to different parameters depend on data through a common term. Detection of such cases could even be automated by inspecting how the data enters the computational graph of the task. This would be an interesting future direction.

![11_image_0.png](11_image_0.png)

Figure 6: **Full-rank covariance experiment:** DPVI with aligned gradients achieves better predictive log-likelihood for a full-rank approximation than vanilla DPVI. Higher density means more nonzero entries in the data covariance. ε = 1. 50 repetitions.

A limitation of DP in general is that it guarantees indistinguishability among the individuals in the data set by aiming to preserve more common characteristics of the data.
Therefore the utility of a DP algorithm might be worse for individuals from less common groups.

The somewhat unexpected performance of aligned natural gradients in the UKB example might be due to bad hyperparameter choices. While we performed some hyperparameter tuning with limited success, a more comprehensive search would be needed to fully assess the performance of these methods.

The literature on MCMC holds many existing diagnostics for the convergence of chains, such as the (split) R-hat estimator (Gelman & Rubin, 1992; Vehtari et al., 2021), that could be used to test the convergence of the parameter traces as well. Tuning these methods to work well in diagnosing the convergence of the parameter trace would require extensive testing, which we leave for future work.

## Acknowledgements

This work was supported by the Research Council of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI; and grants 325572, 325573), the Strategic Research Council (SRC) established within the Research Council of Finland (grant 336032), UKRI Turing AI World-Leading Researcher Fellowship, EP/W002973/1 as well as the European Union (Project 101070617). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them. The authors also acknowledge the computational resources provided by the Aalto Science-IT project.

This research has been conducted using the UK Biobank Resource under Application Number 65101. This work used data provided by patients and collected by the NHS as part of their care and support (Copyright © 2021, NHS England. Re-used with the permission of the NHS England and UK Biobank. All rights reserved). This work used data assets made available by National Safe Haven as part of the Data and Connectivity National Core Study, led by Health Data Research UK in partnership with the Office for National Statistics and funded by UK Research and Innovation (grant MC_PC_20058).

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security*, CCS '16, pp. 308–318, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450341394. doi: 10.1145/2976749.2978318. URL https://doi.org/10.1145/2976749.2978318.

Shun-ichi Amari. Natural gradient works efficiently in learning. *Neural Computation*, 10(2):251–276, 1998. ISSN 0899-7667. doi: 10.1162/089976698300017746.

Galen Andrew, Om Thakkar, Hugh Brendan McMahan, and Swaroop Ramaswamy. Differentially private learning with adaptive clipping. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, 2021. URL https://openreview.net/forum?id=RUQ1zwZR8_.

Raef Bassily, Adam D. Smith, and Abhradeep Thakurta. Private empirical risk minimization, revisited. *CoRR*, abs/1405.7085, 2014. URL http://arxiv.org/abs/1405.7085.

Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B.
Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 11279–11288, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/3bd8fdb090f1f5eb66a00c84dbc5ad51-Abstract.html.

Garrett Bernstein and Daniel R. Sheldon. Differentially private Bayesian inference for exponential families. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS) 2018*, 2018.

Garrett Bernstein and Daniel R. Sheldon. Differentially private Bayesian linear regression. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS) 2019*, 2019.

Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul A. Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deep universal probabilistic programming. *J. Mach. Learn. Res.*, 20:28:1–28:6, 2019. URL http://jmlr.org/papers/v20/18-403.html.

Xiangyi Chen, Zhiwei Steven Wu, and Mingyi Hong. Understanding gradient clipping in private SGD: A geometric perspective. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/9ecff5455677b38d19f49ce658ef0608-Abstract.html.

Justin Domke. Provable gradient variance guarantees for black-box variational inference. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography, Third Theory of Cryptography Conference, TCC 2006, Proceedings*, 2006.

Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. *Foundations and Trends® in Theoretical Computer Science*, 9(3–4):211–407, 2014.

Andrew Gelman and Donald B. Rubin. Inference from iterative simulation using multiple sequences. *Statistical Science*, 7(4):457–472, 1992. doi: 10.1214/ss/1177011136.

Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, and Juha Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. *Journal of Machine Learning Research*, 11(106):3235–3268, 2010. URL http://jmlr.org/papers/v11/honkela10a.html.

Joonas Jälkö, Onur Dikmen, and Antti Honkela. Differentially private variational inference for non-conjugate models. In *Uncertainty in Artificial Intelligence 2017, Proceedings of the 33rd Conference (UAI)*, 2017.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. *Mach. Learn.*, 37(2):183–233, 1999. doi: 10.1023/A:1007665907178. URL https://doi.org/10.1023/A:1007665907178.

Joonas Jälkö, Eemil Lagerspetz, Jari Haukka, Sasu Tarkoma, Antti Honkela, and Samuel Kaski. Privacy-preserving data sharing via probabilistic modeling. *Patterns*, 2(7):100271, 2021. ISSN 2666-3899. doi: https://doi.org/10.1016/j.patter.2021.100271.
URL https://www.sciencedirect.com/science/article/pii/S2666389921000970.

Gautam Kamath, Xingtu Liu, and Huanyu Zhang. Improved rates for differentially private stochastic convex optimization with heavy-tailed data. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), *International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA*, volume 162 of *Proceedings of Machine Learning Research*, pp. 10633–10660. PMLR, 2022. URL https://proceedings.mlr.press/v162/kamath22a.html.

Mohammad Emtiyaz Khan and Didrik Nielsen. Fast yet simple natural-gradient descent for variational inference in complex models. In *2018 International Symposium on Information Theory and Its Applications (ISITA)*, pp. 31–35, 2018. doi: 10.23919/ISITA.2018.8664326.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Yoshua Bengio and Yann LeCun (eds.), *2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings*, 2014. URL http://arxiv.org/abs/1312.6114.

Antti Koskela, Joonas Jälkö, Lukas Prediger, and Antti Honkela. Tight differential privacy for discrete-valued mechanisms and for the subsampled Gaussian mechanism using FFT. In Arindam Banerjee and Kenji Fukumizu (eds.), *The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event*, volume 130 of *Proceedings of Machine Learning Research*, pp. 3358–3366. PMLR, 2021. URL http://proceedings.mlr.press/v130/koskela21a.html.

Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic differentiation variational inference. *J. Mach. Learn. Res.*, 18:14:1–14:45, 2017. URL http://jmlr.org/papers/v18/16-107.html.

Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, and Antti Honkela. Differentially private Bayesian inference for generalized linear models. In *Proceedings of the 38th International Conference on Machine Learning*, Proceedings of Machine Learning Research. PMLR, 2021.

Andrew Lowy and Meisam Razaviyayn. Locally differentially private federated learning: Efficient algorithms with tight risk bounds. *CoRR*, abs/2106.09779, 2021. URL https://arxiv.org/abs/2106.09779.

Andrew Lowy and Meisam Razaviyayn. Private stochastic optimization with large worst-case Lipschitz parameter: Optimal rates for (non-smooth) convex losses and extension to non-convex losses. In Shipra Agrawal and Francesco Orabona (eds.), *International Conference on Algorithmic Learning Theory, February 20-23, 2023, Singapore*, volume 201 of *Proceedings of Machine Learning Research*, pp. 986–1054. PMLR, 2023. URL https://proceedings.mlr.press/v201/lowy23a.html.

Stephan Mandt, Matthew D. Hoffman, and David M. Blei. Stochastic gradient descent as approximate Bayesian inference. *J. Mach. Learn. Res.*, 18(1):4873–4907, 2017. ISSN 1532-4435.

H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada, Conference Track Proceedings*, 2018.

Shubhankar Mohapatra, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar.
The role of adaptive optimizers for honest private hyperparameter selection. November 2021.

Claire L Niedzwiedz, Catherine A O'Donnell, Bhautesh Dinesh Jani, Evangelia Demou, Frederick K Ho, Carlos Celis-Morales, Barbara I Nicholl, Frances S Mair, Paul Welsh, Naveed Sattar, et al. Ethnic and socioeconomic differences in SARS-CoV-2 infection: prospective cohort study using UK Biobank. *BMC Medicine*, 18(1):1–14, 2020.

Du Phan, Neeraj Pradhan, and Martin Jankowiak. Composable effects for flexible and accelerated probabilistic programming in NumPyro. *arXiv preprint arXiv:1912.11554*, 2019.

B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. *SIAM Journal on Control and Optimization*, 30(4):838–855, 1992. doi: 10.1137/0330046.

Lukas Prediger, Niki Loppi, Samuel Kaski, and Antti Honkela. d3p - a Python package for differentially-private probabilistic programming. *Proceedings on Privacy Enhancing Technologies*, 2022(2):407–425, 2022. doi: 10.2478/popets-2022-0052.

Jerome P Reiter and Trivellore E Raghunathan. The multiple adaptations of multiple imputation. *Journal of the American Statistical Association*, 102(480):1462–1471, 2007.

Donald B Rubin. *Multiple imputation for nonresponse in surveys*, volume 81. John Wiley & Sons, 2004.

Hugh Salimbeni, Stefanos Eleftheriadis, and James Hensman. Natural gradients in practice: Non-conjugate variational inference in Gaussian process models. In Amos Storkey and Fernando Perez-Cruz (eds.), *Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics*, volume 84 of *Proceedings of Machine Learning Research*, pp. 689–697. PMLR, Apr 2018.

Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with differentially private updates. In *IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013, Austin, TX, USA, December 3-5, 2013*, pp. 245–248. IEEE, 2013. doi: 10.1109/GlobalSIP.2013.6736861. URL https://doi.org/10.1109/GlobalSIP.2013.6736861.

Cathie Sudlow, John Gallacher, Naomi Allen, Valerie Beral, Paul Burton, John Danesh, Paul Downey, Paul Elliott, Jane Green, Martin Landray, Bette Liu, Paul Matthews, Giok Ong, Jill Pell, Alan Silman, Alan Young, Tim Sprosen, Tim Peakman, and Rory Collins. UK Biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. *PLOS Medicine*, 12(3):1–10, 03 2015. doi: 10.1371/journal.pmed.1001779.

Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner. Rank-normalization, folding, and localization: An improved R̂ for assessing convergence of MCMC (with discussion). *Bayesian Analysis*, 16(2):667–718, 2021.

Ning Wang, Yang Xiao, Yimin Chen, Ning Zhang, Wenjing Lou, and Y. Thomas Hou. Squeezing more utility via adaptive clipping on differentially private gradients in federated meta-learning. In *Proceedings of the 38th Annual Computer Security Applications Conference*, ACSAC '22, pp. 647–657, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450397599. doi: 10.1145/3564625.3564652. URL https://doi.org/10.1145/3564625.3564652.

## Appendices

## A Intuitive Reasoning Why Larger Noise Would Slow Convergence

We consider only a single optimisation step in a single dimension for simplicity. Assume we are at θ^{(t)} and have the noisy gradient

$$g^{(t)}=\nabla\mathcal{L}(\theta^{(t)})+\eta,\quad\eta\sim\mathcal{N}(0,\sigma^{2}),\tag{A.1}$$

for some perturbation scale σ.
We update the parameter as

$$\theta^{(t+1)}=\theta^{(t)}-\alpha g^{(t)}\tag{A.2}$$

with learning rate α. In order to get closer to the optimum, we want sign(g^{(t)}) = sign(∇L(θ^{(t)})). Assume w.l.o.g. that ∇L(θ^{(t)}) > 0. Then

$$\begin{aligned}\Pr\left[\operatorname{sign}(g^{(t)})=\operatorname{sign}(\nabla\mathcal{L}(\theta^{(t)}))\right]&=\Pr\left[\nabla\mathcal{L}(\theta^{(t)})+\eta>0\right]&&\text{(A.3)}\\&=\Pr\left[\eta>-\nabla\mathcal{L}(\theta^{(t)})\right]&&\text{(A.4)}\\&=1-\Pr\left[\eta\leq-\nabla\mathcal{L}(\theta^{(t)})\right]&&\text{(A.5)}\\&=1-\Phi(-\nabla\mathcal{L}(\theta^{(t)}))&&\text{(A.6)}\\&=\Phi(\nabla\mathcal{L}(\theta^{(t)}))&&\text{(A.7)}\\&=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{\nabla\mathcal{L}(\theta^{(t)})}{\sigma\sqrt{2}}\right)\right).&&\text{(A.8)}\end{aligned}$$

erf(·) is a monotonically increasing function, so we see from the above that the probability of progressing towards the optimum decreases with decreasing ∇L(θ^{(t)})/σ. I.e., for a fixed gradient, a larger variance σ² will decrease the probability of progressing towards the optimum in each step.

## B Proof Of Proposition 3.1

We begin by restating Proposition 3.1.

Proposition B.1 (**Proposition 3.1**). Assume q to be diagonal Gaussian, then the gradient g_s in Equation (8) becomes

$$\mathbf{g}_{s}=\eta T'(\mathbf{s}_{q})\mathbf{g}_{m}+\nabla_{\mathbf{s}_{q}}H(q),$$

where T′ denotes the derivative of T.

Proof. We first recall the reparametrisation for the diagonal Gaussian approximation from Eq. (6) as θ(η; m_q, s_q) = m_q + T(s_q)η and observe that ∇_{m_q}θ = 1 and ∇_{s_q}θ = ηT′(s_q) (where we abbreviate θ(η; m_q, s_q) to simply θ). With this we obtain the gradient of the ELBO with respect to m_q by applying the chain rule in Eq. (7) as:

$$\mathbf{g}_{m}=\nabla_{\mathbf{m}_{q}}\mathcal{L}(q)=\nabla_{\boldsymbol{\theta}}\log p(D,\boldsymbol{\theta})\tag{B.1}$$

Similarly applying the chain rule in Eq. (8) and inserting the above yields

$$\begin{aligned}\mathbf{g}_{s}&=\nabla_{\mathbf{s}_{q}}\log p(D,\boldsymbol{\theta})+\nabla_{\mathbf{s}_{q}}H(q)\\&=\nabla_{\boldsymbol{\theta}}\log p(D,\boldsymbol{\theta})\,\nabla_{\mathbf{s}_{q}}\boldsymbol{\theta}+\nabla_{\mathbf{s}_{q}}H(q)&&\text{(B.2)}\\&=\nabla_{\boldsymbol{\theta}}\log p(D,\boldsymbol{\theta})\,\eta T'(\mathbf{s}_{q})+\nabla_{\mathbf{s}_{q}}H(q)&&\text{(B.3)}\\&=\eta T'(\mathbf{s}_{q})\mathbf{g}_{m}+\nabla_{\mathbf{s}_{q}}H(q).&&\text{(B.4)}\end{aligned}$$

## C Proof Of T′(s_q) ≤ T(s_q) For Softplus And Exponential Function

**T(s_q) = softplus(s_q)** Consider transforming s_q into the positive real numbers using the softplus function:

$$T(\mathbf{s}_{q})=\log(1+\exp(\mathbf{s}_{q})).\tag{C.1}$$

First, we make the following observation which connects the softplus to the sigmoid function:

$$\begin{aligned}T(\mathbf{s}_{q})&=-\log\left(\frac{1}{1+\exp(\mathbf{s}_{q})}\right)\\&=-\log\left(1-\frac{1}{1+\exp(-\mathbf{s}_{q})}\right)\\&=-\log\left(1-\sigma(\mathbf{s}_{q})\right),\end{aligned}\tag{C.2}$$

where σ denotes the sigmoid function. We then get T′(s_q) = σ(s_q). It is easy to see that log(x) ≤ x − 1 and hence

$$\begin{aligned}T(\mathbf{s}_{q})&=-\log\left(1-\sigma(\mathbf{s}_{q})\right)\\&\geq1-\left(1-\sigma(\mathbf{s}_{q})\right)=\sigma(\mathbf{s}_{q})=T'(\mathbf{s}_{q}).\end{aligned}\tag{C.3}$$

We have therefore shown that T(s_q) ≥ T′(s_q) ∀s_q ∈ R.

**T(s_q) = exp(s_q)** For T(s_q) = exp(s_q) the proof follows immediately from the fact that T′(s_q) = exp(s_q) = T(s_q).
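The two facts above are easy to check numerically. The following is a minimal NumPy sketch (our own illustration, not part of the original derivation) that verifies Proposition 3.1 by finite differences on a toy joint log-density log p(D, θ) = −½‖θ‖², together with the softplus inequality T′(s_q) ≤ T(s_q):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
m_q, s_q = rng.normal(size=d), rng.normal(size=d)
eta = rng.normal(size=d)  # fixed reparametrisation noise

softplus = lambda s: np.log1p(np.exp(s))      # T
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))  # T' for the softplus transform

# Toy joint log-density: log p(D, theta) = -0.5 * ||theta||^2
log_p = lambda th: -0.5 * np.sum(th**2)
grad_log_p = lambda th: -th

# Entropy of a diagonal Gaussian with sigma_q = T(s_q), up to an additive constant
entropy = lambda s: np.sum(np.log(softplus(s)))

# Single-sample ELBO estimate as a function of s_q (m_q and eta held fixed)
elbo = lambda s: log_p(m_q + softplus(s) * eta) + entropy(s)

theta = m_q + softplus(s_q) * eta
g_m = grad_log_p(theta)                                        # Eq. (B.1)
g_s = eta * sigmoid(s_q) * g_m + sigmoid(s_q) / softplus(s_q)  # Prop. 3.1: eta T'(s_q) g_m + grad_s H(q)

# Finite-difference check of g_s against the chain rule
eps = 1e-6
g_s_fd = np.array([(elbo(s_q + eps * np.eye(d)[i]) - elbo(s_q - eps * np.eye(d)[i])) / (2 * eps)
                   for i in range(d)])
assert np.allclose(g_s, g_s_fd, atol=1e-5)

# Appendix C: T'(s) <= T(s) holds everywhere for the softplus transform
s_grid = np.linspace(-10.0, 10.0, 1001)
assert np.all(sigmoid(s_grid) <= softplus(s_grid))
print("Proposition 3.1 and the softplus inequality check out numerically.")
```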
## D Proof Of Theorem 3.1: Variance In Aligned Scale Gradients Is Smaller

We begin by restating the theorem:

Theorem D.1 (**Theorem 3.1**). Assume that C^{vanilla} ≥ C^{aligned}. If we obtain σ_q through a transformation T such that T′(s) ≤ 1, then for any fixed batch,

$$\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}_{s}^{aligned}\right]<\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}_{s}^{vanilla}\right],$$

where η ∼ N(0, 1) is the random variable of the MC approximation to the ELBO in the reparametrisation approach and ψ ∼ N(0, 1) that of the DP perturbation.

Proof. We want to show that the variance of the perturbed std parameter gradient for the aligned method is less than or equal to the one for the vanilla approach in all the dimensions of the gradient. We start by setting the clipping threshold for the vanilla approach as C and the one for the aligned as C_a. Denote the jth dimension of g̃_s obtained from the vanilla approach as g̃_{s,j}, and similarly g̃^{aligned}_{s,j} for the aligned. For the vanilla approach, we have

$$\begin{aligned}\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}_{s,j}\right]&=\operatorname{Var}_{\eta,\psi}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)+\frac{\partial}{\partial\mathbf{s}_{q,j}}H(q)+\psi_{j}\sigma_{DP}\sqrt{r}C\right]&&\text{(D.1)}\\&=\operatorname{Var}_{\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)\right]+\sigma_{DP}^{2}rC^{2}\operatorname{Var}_{\psi}\left[\psi_{j}\right]&&\text{(D.2)}\\&=\operatorname{Var}_{\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)\right]+\sigma_{DP}^{2}rC^{2},&&\text{(D.3)}\end{aligned}$$

and for the aligned

$$\begin{aligned}\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}^{aligned}_{s,j}\right]&=\operatorname{Var}_{\eta,\psi}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)+\frac{\partial}{\partial\mathbf{s}_{q,j}}H(q)+\eta_{j}T'(\mathbf{s}_{q,j})\psi_{j}\sigma_{DP}C_{a}\right]&&\text{(D.4)}\\&=\operatorname{Var}_{\eta}\left[\operatorname{E}_{\psi\mid\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)+\frac{\partial}{\partial\mathbf{s}_{q,j}}H(q)+\eta_{j}T'(\mathbf{s}_{q,j})\psi_{j}\sigma_{DP}C_{a}\right]\right]&&\text{(D.5)}\\&\quad+\operatorname{E}_{\eta}\left[\operatorname{Var}_{\psi\mid\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)+\frac{\partial}{\partial\mathbf{s}_{q,j}}H(q)+\eta_{j}T'(\mathbf{s}_{q,j})\psi_{j}\sigma_{DP}C_{a}\right]\right]&&\text{(D.6)}\\&=\operatorname{Var}_{\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)\right]+\operatorname{E}_{\eta}\left[\eta_{j}^{2}T'(\mathbf{s}_{q,j})^{2}\sigma_{DP}^{2}C_{a}^{2}\right]&&\text{(D.7)}\\&=\operatorname{Var}_{\eta}\left[\eta_{j}T'(\mathbf{s}_{q,j})\frac{\partial}{\partial\mathbf{m}_{q,j}}\mathcal{L}(q)\right]+T'(\mathbf{s}_{q,j})^{2}\sigma_{DP}^{2}C_{a}^{2}.&&\text{(D.8)}\end{aligned}$$

We can easily see that the two variances differ only in the noise term. Now, it is easy to see that if we set C_a = C and have a transformation T s.t. T′(s) ≤ 1, we have

$$\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}^{aligned}_{s,j}\right]\leq\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}_{s,j}\right],\ \forall j.\tag{D.9}$$

Setting C = C_a already shows that the aligned method's variance cannot exceed the vanilla method's. However, consider now that C_a is a clipping threshold that satisfies Pr(‖g_m‖ > C_a) < α for some α ∈ [0, 1), i.e. only an α fraction of the gradients get clipped in the aligned approach. Now, since the vanilla method clips based on the norm ‖g‖² = ‖g_m‖² + ‖g_s‖² ≥ ‖g_m‖², we need to set C at least as large as C_a to facilitate the same probability of clipping. Furthermore, we have the equality ‖g‖² = ‖g_m‖² only in the case when ‖g_s‖ = 0, which would only happen if the std parameter has converged. Therefore, C should be chosen larger than C_a to avoid clipping prior to the convergence of s_q. Note that the magnitude of g_s does not affect the aligned approach, and therefore the clipping bound C_a.
Now, as we need to choose C > C_a and we have assumed T′(s) ≤ 1, we have

$$\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}^{aligned}_{s,j}\right]<\operatorname{Var}_{\eta,\psi}\left[\tilde{\mathbf{g}}_{s,j}\right],\ \forall j.\tag{D.10}$$

Furthermore, from the above variance expressions, it is easy to see that the variance in the aligned approach gets significantly smaller than the vanilla one when T′(s) ≪ 1. This would be the case for example if we initialize the variational posterior with small s_q, or similarly towards the end of the convergence for models with small posterior variance.

## E Variance Of DP From OU Process On Convergence

We follow closely the analysis performed by Mandt et al. (2017), Sec. 3.2., which makes the following assumptions for the loss function L around its optimum ξ*:

1. Mini-batch gradients of the loss function are well approximated by a zero-mean Gaussian distribution with covariance matrix (1/S)Z, where S denotes the size of the mini-batch,
2. L is locally well approximated by a quadratic function.

We make the additional assumption that the clipping threshold C is chosen such that no clipping occurs for gradients close to the optimum in order to avoid clipping-induced bias. We begin by stating the SGD parameter update equation and the resulting update step Δξ(t) for update steps close to the optimum:

$$\begin{aligned}\boldsymbol{\xi}(t+1)&=\boldsymbol{\xi}(t)-\alpha\left(\nabla_{\boldsymbol{\xi}}\mathcal{L}(\boldsymbol{\xi}(t))+\frac{1}{\sqrt{S}}\mathbf{B}\boldsymbol{\nu}+\sigma_{DP}\mathbf{I}\boldsymbol{\psi}\right),\quad\boldsymbol{\nu}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\ \boldsymbol{\psi}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\\&=\boldsymbol{\xi}(t)-\alpha\left(\nabla_{\boldsymbol{\xi}}\mathcal{L}(\boldsymbol{\xi}(t))+\left(\frac{1}{\sqrt{S}}\mathbf{B}+\sigma_{DP}\mathbf{I}\right)\boldsymbol{\psi}'\right),\quad\boldsymbol{\psi}'\sim\mathcal{N}(\mathbf{0},\mathbf{I})\end{aligned}\tag{E.1}$$

$$\Delta\boldsymbol{\xi}(t):=\boldsymbol{\xi}(t+1)-\boldsymbol{\xi}(t)=-\alpha\nabla_{\boldsymbol{\xi}}\mathcal{L}(\boldsymbol{\xi}(t))-\alpha\left(\frac{1}{\sqrt{S}}\mathbf{B}+\sigma_{DP}\mathbf{I}\right)\boldsymbol{\psi}'\tag{E.2}$$

We denote with B the triangular matrix resulting from the Cholesky decomposition of Z = BB^T. Since we are performing DP-SGD, we have an additional independent noise term with scale σ_DP. However, as both sources of stochasticity are independent zero-mean Gaussians, we can easily reformulate using a single Gaussian source of noise with the total variance. We now restate (E.2) as the following stochastic differential equation:

$$d\boldsymbol{\xi}(t)=-\alpha\nabla_{\boldsymbol{\xi}}\mathcal{L}(\boldsymbol{\xi}(t))\,dt-\alpha\left(\frac{1}{\sqrt{S}}\mathbf{B}+\sigma_{DP}\mathbf{I}\right)d\mathbf{W}(t)\tag{E.3}$$

From our second assumption, we know that

$$\mathcal{L}(\boldsymbol{\xi})\approx\frac{1}{2}(\boldsymbol{\xi}-\boldsymbol{\xi}^{*})^{T}\mathbf{A}(\boldsymbol{\xi}-\boldsymbol{\xi}^{*}),\tag{E.4}$$

where A = ∂²L/(∂ξ)² evaluated at ξ*. Inserting (E.4) into (E.3), we get

$$d\boldsymbol{\xi}(t)=-\alpha\mathbf{A}(\boldsymbol{\xi}-\boldsymbol{\xi}^{*})\,dt+\alpha\left(\frac{1}{\sqrt{S}}\mathbf{B}+\sigma_{DP}\mathbf{I}\right)d\mathbf{W}(t),\tag{E.5}$$

which describes an Ornstein-Uhlenbeck (OU) process with Gaussian stationary distribution

$$q(\boldsymbol{\xi})\propto\exp\left\{-\frac{1}{2}(\boldsymbol{\xi}-\boldsymbol{\xi}^{*})^{T}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\xi}-\boldsymbol{\xi}^{*})\right\},\tag{E.6}$$

where Σ satisfies the Lyapunov equation

$$\boldsymbol{\Sigma}\mathbf{A}+\mathbf{A}\boldsymbol{\Sigma}=\alpha\left(\frac{1}{S}\mathbf{Z}+\sigma_{DP}^{2}\mathbf{I}\right).\tag{E.7}$$

Even without determining A we can already make an important discovery from this: Since A is fixed around the optimum, we see that the noise covariance of our OU process scales linearly with respect to σ²_DP. Note that the above analysis holds for a constant learning rate.
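As an illustration of this scaling, the following one-dimensional simulation (a sketch of our own, with arbitrarily chosen constants A, z, S and α) runs the update (E.1) on the quadratic loss (E.4) and compares the empirical stationary variance with the one-dimensional solution α(z/S + σ²_DP)/(2A) of the Lyapunov equation (E.7); the variance grows linearly in σ²_DP, as claimed:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, A, z, S = 0.01, 1.0, 1.0, 64      # learning rate, curvature, gradient cov., batch size
n_steps, burn_in = 200_000, 20_000

for sigma_dp in (0.5, 1.0, 2.0):
    xi, trace = 0.0, []
    for t in range(n_steps):
        grad_noise = np.sqrt(z / S) * rng.normal()   # mini-batch noise (std sqrt(z/S))
        dp_noise = sigma_dp * rng.normal()           # DP perturbation
        xi -= alpha * (A * xi + grad_noise + dp_noise)
        if t >= burn_in:
            trace.append(xi)
    predicted = alpha * (z / S + sigma_dp**2) / (2 * A)   # 1-D solution of Eq. (E.7)
    print(f"sigma_dp = {sigma_dp}: empirical var {np.var(trace):.2e}, "
          f"Lyapunov prediction {predicted:.2e}")
```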
We have used the Adam optimization method (Kingma & Ba, 2015) in our experiments throughout the paper. While Adam does adapt the learning rate, recently Mohapatra et al. (2021) showed that the learning rate of Adam will converge to a static value, which means that the analysis above still holds as it is only concerned with the learning rate at convergence.

## F Hyperparameters

We use Adam (Kingma & Ba, 2015) as the optimiser for all the experiments with a starting learning rate of 10^{-3}. In all of our experiments, the δ privacy parameter was set to 1/N, where N denotes the size of the training data.

**For the UKB experiment** In the experiments, we used various training lengths (depicted e.g. in Figure 2). For all of our runs, we set the subsampling rate as 0.01. The clipping threshold C was set to C = 2.0 for the aligned and vanilla variants, 4.0 for the preconditioned variant, and 0.1 for the natural gradient based variants.

**For the Adult experiment** The training was run for 4 000 epochs with a subsampling ratio of 0.01, corresponding to a total of 400 000 gradient steps. We chose the clipping thresholds for the gradient perturbation algorithm as the 97.5% upper quantile of the training data gradient norms at the non-private optima. This was done to avoid clipping-induced bias, thus making the models comparable to the non-private baseline. This led to the clipping thresholds C presented in Table F.1.

Table F.1: Clipping thresholds for the Adult data logistic regression model

| Variant | C |
|-----------------------|-----|
| Aligned | 3.0 |
| Aligned Natural Grad. | 0.1 |
| Natural Grad. | 0.1 |
| Vanilla | 3.0 |
| Preconditioned | 4.0 |

## G Model Priors

## G.1 For The UKB And The US Census Experiments

Recall the probabilistic model used in the experiments:

$$p(\mathbf{X}\mid\boldsymbol{\theta}_{\mathbf{X}},\boldsymbol{\pi})=\sum_{k=1}^{K}\pi_{k}\prod_{j=1}^{d}\text{Categorical}(\mathbf{X}_{j}\mid\boldsymbol{\theta}_{\mathbf{X}_{j}}^{(k)})\tag{G.1}$$

$$p(\mathbf{y}\mid\mathbf{X},\boldsymbol{\theta}_{\mathbf{y}})=\text{Poisson}(\mathbf{y}\mid\exp(\mathbf{X}\boldsymbol{\theta}_{\mathbf{y}})).\tag{G.2}$$

The categorical probabilities θ^{(k)}_{X_j} for each of the categorical features X_j were given a uniform Dirichlet(1) prior. Similarly, the mixture weights π were assigned a uniform Dirichlet prior. The regression coefficients θ_y were given a std. normal N(0, I) prior.

## G.2 For The Adult Experiment

We use the following model and prior:

$$\mathbf{y}\sim\sigma(\mathbf{X}\mathbf{w}),\tag{G.3}$$
$$\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\tag{G.4}$$

where σ(·) denotes the logistic regression function, σ(x) = 1/(1 + exp{−x}).

## H More Results For Robustness

Figure H.1 shows how the MPAE for the different DPVI variants behaves for different initial values of s_q, for both the variational mean (upper panel) and standard deviation (lower panel), after 1 000 epochs. Figure H.2 shows the parameter traces for s_q initialised such that σ_q = 0.01 (left) and σ_q = 0.1 (right), with the same split into upper and lower panels, similar to Figure 1b for σ_q = 1.0 in the main body of the paper. We observe that with decreasing initial values for s_q (/σ_q) it becomes increasingly difficult for vanilla DPVI to learn the variational standard deviation, but the learning of means is slightly improved. Preconditioned DPVI performs better overall in terms of standard deviation but learns means worse. Aligned DPVI consistently outperforms both competing variants.
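For concreteness, the Adult model of Appendix G.2 above can be expressed compactly in NumPyro (Phan et al., 2019). The snippet below is an illustrative sketch only and not the exact experiment code; in particular, it sets up a non-private SVI baseline with a diagonal Gaussian approximation, whereas the experiments additionally require per-example clipping and DP noise, e.g. via a DP-VI package such as d3p (Prediger et al., 2022):

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoDiagonalNormal
from numpyro.optim import Adam

def adult_model(X, y=None):
    """Bayesian logistic regression: w ~ N(0, I), y ~ Bernoulli(sigma(X w))."""
    w = numpyro.sample("w", dist.Normal(jnp.zeros(X.shape[1]), 1.0).to_event(1))
    numpyro.sample("y", dist.Bernoulli(logits=jnp.dot(X, w)), obs=y)

# Diagonal (mean-field) Gaussian approximation; non-private baseline only
# (no gradient clipping and no DP perturbation in this sketch).
guide = AutoDiagonalNormal(adult_model)
svi = SVI(adult_model, guide, Adam(1e-3), Trace_ELBO())
```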
Figure H.1: The aligned variant has consistently low error across different initial values of σ_q. The figure shows the error for ϵ = 1 over 1000 epochs of training.

Figure H.2: Aligned DPVI consistently converges faster than the other methods for different initialisations of s_q. **On left**, s_q is initialised such that σ_q = 0.01 and **on right** such that σ_q = 0.1.

## H.1 Natural Gradients And The Aligned Natural Gradients In The UKB Experiment

Besides the vanilla and aligned variants, we also fitted the UKB model using the natural gradient and aligned natural gradient variants. From Figure H.3 we can see the trade-off natural gradients make; the means are learned worse than standard deviations, which is what we expect based on the analysis of Section 3.1.1. Somewhat surprisingly, the aligned natural grad. variant performs slightly worse than the aligned variant in this experiment. This might be due to a poor choice of hyperparameters; for example, the learning rate for the Adam optimiser used in the experiments was set to 10^{-3} for all the variants, while we know that the natural grad. variants tend to have smaller gradients than the others - although Adam should in theory be able to adapt to that.

Figure H.3: The natural gradient based variants achieve comparable performance in terms of the variational parameters for std but perform worse for the variational means. This is strongly observable for the plain natural gradients, while the aligned natural gradients improve the error in variational means in comparison. However, both are outperformed by (non-natural) aligned gradients in this experiment. Lines show the mean MPAE over 10 independent repeats as well as the std. of the mean as error bars. The clipping threshold for both natural gradient based variants is set to C = 0.02; σ_q was initialised to 1. The natural gradient variant appears to diverge from the true variational std., which might be caused by clipping-induced bias, while still struggling to learn the correct variational mean.

To evaluate whether the difference between the aligned and aligned natural grad. results is due to the clipping threshold being too large or too small for the aligned natural grad. approach, thus perturbing the gradients excessively or introducing excessive clipping bias, we repeat the experiment with higher and lower clipping thresholds. Figure H.4a shows that increasing the clipping threshold to C = 0.1 harms the natural gradient based variants, most likely due to introducing too much privacy noise and thus preventing convergence of the variational mean parameters. Conversely, Figure H.4b shows that setting the clipping threshold to C = 0.01 for the natural gradient variants is too small, and the aligned natural gradients start to suffer from clipping-induced bias. These observations highlight the importance of choosing the clipping threshold appropriately for the method, which is not a trivial task. For this experiment, this seems to have a direct influence on how well aligned natural gradients can rein in the inversion of the relative scaling of components present in plain natural gradients.
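For reference, the clipping step whose threshold C is being tuned here has the generic DP-SGD form sketched below (our own schematic, not the code used in the experiments):

```python
import numpy as np

def privatize_sum(per_example_grads, C, sigma_dp, rng):
    """Clip each per-example gradient to norm at most C, sum, and add Gaussian
    noise scaled by sigma_dp * C. Generic DP-SGD step; C is the clipping
    threshold whose choice is discussed above."""
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    return total + sigma_dp * C * rng.normal(size=total.shape)
```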
For future work, it would be interesting to perform a more extensive hyperparameter tuning, including for example the learning rate and the number of iterations as tunable hyperparameters, to see if the natural gradient methods make some non-trivial trade-offs in the learning that differ fundamentally from the aligned and vanilla methods.

## I Further Details On Downstream Analysis For UKB Data

In the UKB experiment, we use the learned variational posterior to sample a synthetic data set from the posterior predictive distribution (PPD), as suggested by Jälkö et al. (2021). We test the method by comparing the synthetic data in a downstream analysis to the original data. As the downstream task, we fitted a Poisson regression model that aims to predict whether an individual catches SARS-CoV-2 based on the predictors in the data. Note that this downstream task perfectly overlaps with our generative model.

In order to properly reflect the uncertainty arising from the data generating process in the final results computed from the synthetic data, we employ the so-called *Rubin's rules* (Rubin, 2004). In this procedure, we first sample multiple synthetic data sets from the PPD and compute the downstream analysis on each of the sampled synthetic data sets. Next, the results are aggregated according to a set of rules, and we finally recover a more robust estimator for our downstream analysis. Further discussion of Rubin's rules can be found for example in (Reiter & Raghunathan, 2007).

Figure H.4: Tests with smaller clipping thresholds for the natural gradient variants. The clipping threshold for the vanilla and aligned variants is still set to 2.0. (a) The natural gradient based variants struggle to converge in the UKB experiment with the clipping threshold set to 0.1. The figure shows the evolution of the MPAE for both of the variational parameters over 1000 epochs. (b) Both natural gradient variants start to diverge if the clipping threshold is set too low (C = 0.01).

In our experiments, we sampled 100 data sets from the PPD learned using the aligned variant, and applied Rubin's rules to compute a mean and std. estimate for the Poisson regression coefficients. Finally, the obtained means were compared to the Poisson regression coefficients learned using the original data.

## J Experimental Setup For Full-Rank Gaussian Approximation

In this experiment we create simulated data where we control the amount of correlation between data dimensions as the ratio ρ of non-zero off-diagonal entries in the correlation matrix. To generate data with d dimensions and correlation density ρ, we

1. generate a correlation matrix C using Algorithm J.1 with inputs d, ρ, α = 8, β = 10,
2. sample a diagonal matrix D of marginal variances, where {D}_{ii} ∼ exp(N(0, 0.2²)),
3. obtain the covariance matrix Σ = DCD,
4. sample N = 10 000 data points x_n ∼ N(0, Σ),
5. sample a random regression weight vector w ∼ N(0, I),
6. sample y ∼ N(Xw, σ²_y), with σ_y = 1.

We perform the above for all combinations of d = 100, 200 and ρ = 0.2, 0.8. We then use vanilla DPVI and DPVI with aligned gradients to learn the full-rank Gaussian posterior approximation to the Bayesian linear regression model with the priors

$$\mathbf{y}\sim\mathcal{N}(\mathbf{X}\mathbf{w},\sigma_{y}^{2}),\tag{J.1}$$
$$\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\tag{J.2}$$
$$\sigma_{y}\sim\text{Gamma}(0.1,0.1).\tag{J.3}$$

We run the inference for 1 000 epochs, with gradient clipping threshold 0.2 and subsampling ratio 0.01. For the same d, ρ we then generate another 10 000 data points and compute the log-likelihood using the obtained posterior approximation.
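A compact NumPy sketch of this data-generating procedure (steps 1–6 above together with Algorithm J.1 below) is as follows; it is our own illustration, and the exact sampling code used in the experiments may differ in details (note also that, as in Algorithm J.1 as stated, positive definiteness of C is not explicitly enforced):

```python
import numpy as np

def generate_correlation(d, rho, a=8.0, b=10.0, rng=None):
    """Algorithm J.1: random correlation matrix with off-diagonal density rho
    and correlation strengths drawn from Beta(a, b) with random signs."""
    rng = rng or np.random.default_rng()
    K = int(rho * d * (d - 1) / 2)                  # number of non-zero off-diagonal pairs
    C = np.eye(d)
    pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
    for i, j in map(tuple, rng.permutation(pairs)[:K]):
        C[i, j] = C[j, i] = rng.beta(a, b) * rng.choice([-1.0, 1.0])
    return C

def generate_data(d, rho, N=10_000, rng=None):
    rng = rng or np.random.default_rng()
    C = generate_correlation(d, rho, rng=rng)                 # step 1
    D = np.diag(np.exp(rng.normal(0.0, 0.2, size=d)))         # step 2: {D}_ii ~ exp(N(0, 0.2^2))
    Sigma = D @ C @ D                                         # step 3
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)   # step 4
    w = rng.normal(size=d)                                    # step 5
    y = X @ w + rng.normal(size=N)                            # step 6, sigma_y = 1
    return X, y, w
```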
We repeat the inference and evaluation 50 times for each method and combination of d and ρ, keeping the generated training and testing sets fixed.

Algorithm J.1 Routine to generate a d-dimensional correlation matrix with given density ρ and strength of correlations controlled by α, β.

Require: d, ρ ∈ [0, 1], α > 0, β > 0
Ensure: correlation matrix C
K ← ρ·d(d−1)/2 ▷ number of non-zero off-diagonal entries
U ← ∅
C ← I_d
for k = 1, . . . , K do
  sample (i, j) ∈ T \ U at random ▷ T denotes the set of all off-diagonal index pairs
  sample c ∼ Beta(α, β) ▷ sample correlation strength, controlled by α and β
  sample f ∈ {−1, 1} at random ▷ sample sign of correlation
  C_{ij} ← f c
  C_{ji} ← C_{ij}
  U ← U ∪ {(i, j), (j, i)}
end for

## K Runtimes

## K.1 UKB Experiments

In this experiment, we ran all the variants separately for 10 seeds and 4 levels of privacy. Additionally, we experimented with different runtimes and initialisations. For a training of 1 000 epochs, a single repeat takes between 20 and 40 minutes. The runtime scales linearly with the number of epochs. Further, we computed the downstream task for the aligned variants, which includes generating the 100 synthetic data sets and fitting the downstream Poisson regression model on those 100 synthetic data sets. This procedure takes between 30 and 60 minutes to complete. A rough estimate of the runtimes for the UKB experiment is given in Table K.1. A single CPU core with 8 GB of memory was used for all the runs.

Table K.1: Estimated runtimes for the UKB experiment

| #epochs | single repeat runtime | repeats | # epsilon values | # initial values |
|---------|-----------------------|---------|------------------|------------------|
| 200 | 4–8 min | 10 | 4 | 5 |
| 400 | 8–16 min | 10 | 4 | 1 |
| 600 | 12–18 min | 10 | 4 | 1 |
| 800 | 16–32 min | 10 | 4 | 1 |
| 1000 | 20–40 min | 10 | 4 | 1 |
| 2000 | 40–80 min | 10 | 4 | 1 |
| 4000 | 80–160 min | 10 | 4 | 1 |
| 8000 | 160–320 min | 10 | 4 | 1 |

## K.2 Adult Experiments

A single training repeat of learning the logistic regression model for all the different variants of DPVI took between 10 and 30 minutes to finish on a single CPU core with 8 GB of memory assigned. In total, the Adult experiment was repeated 50 times for four different levels of privacy. Therefore the total runtime of all the experiments is **between 2 000 and 6 000 minutes**. The variance in running times is likely due to differences between the computation nodes of the cluster, assigned by an automatic run scheduler.

## K.3 Full-Rank Experiments

A single run for this experiment consisted of the inference using both vanilla and aligned DPVI with full-rank approximations. All runs were executed on a computing cluster utilising Nvidia K80, A100, P100 and V100 GPU hardware, to which the runs were allocated automatically to balance the overall load. As a result, runtimes varied slightly: Runs for 100-dimensional data took 6–8 minutes to finish, runs for 200 dimensions took 8–10 minutes. With a total of 4 data set configurations and 50 repeats for each, the total runtime is 1 400 to 1 800 minutes.

## L Gradient Distributions For Different Variants

Figure L.1

Figure L.1 shows the distributions of gradient norms for variational means and scales for the different variants of DPVI discussed in Section 3.1.1 when s_q is set to 0.1. Figure L.1a clearly shows the different magnitudes for the variational standard deviation in vanilla DPVI.
Figure L.1b demonstrates that natural gradients simply reverse the problem. Figure L.1c shows that the scaling approach achieves matching magnitudes quite well. However, it comes at the cost of increasing the norm of the full (combined) gradient and therefore increased sensitivity.
Review 1:

Summary: This paper studies the problem of learning a Gaussian variational posterior with differential privacy. They propose a version of DP SGD to learn the mean and variance of the Gaussian posterior which they call "aligned DPVI". Rather than use the Gaussian mechanism to add spherical noise to the gradient wrt both the mean and standard deviation, they notice that the gradient wrt the standard deviation can be written as a function of the gradient wrt the mean. They propose using the Gaussian mechanism to release the gradient wrt the mean, then post-process this into an estimate of the gradient wrt the standard deviation. They also compare to a version that scales the gradient wrt standard deviation so its scale matches the gradient wrt the mean, then adds Gaussian noise to both. They perform a series of experiments to show that these three algorithms behave as one would expect (aligned outperforms both other mechanisms).

Strengths and Weaknesses: The paper is well-written and easy to follow. The experimental results are thorough, and the authors explore a variety of improvements over the vanilla algorithms (including iterate averaging and natural gradients). The paper offers an easy-to-implement improvement over the vanilla Gaussian mechanism for the problem of interest.

The key observation of this paper seems to be that 1. if you want to privately estimate 2 statistics, and one is a function of the other, then you shouldn't spend privacy budget learning both, and 2. for the case of variational inference with a Gaussian posterior, the derivative wrt the standard deviation is a function of the derivative wrt the mean. The first fact is not very surprising. The proof of the second is short, but (given that the authors do not cite prior work for this result) does seem to be new to this work.

The experimental results (excluding Figure H.3) behave as one would expect in that the "aligned" version of the algorithm uniformly outperforms the other methods in estimating both the mean and standard deviation. I was surprised by the swapping of the order of the aligned and vanilla versions of natural grad in estimating the standard deviation in Figure H.3. The authors mention this is possibly due to hyper-parameter tuning? I would have liked to hear more about this since, other than this graph, it seems like the aligned version is uniformly better than the vanilla, making the decision of whether to use it in practice very simple.

Minor: On page 1 the authors refer to global sensitivity and local sensitivity as (sensitivity over all variables) and (sensitivity wrt a single variable). This is not the standard definition of these terms in differential privacy, so the authors may want to consider alternate terminology.

Minor: Is the second last paragraph in the discussion referring to Figure H.3? If so, a pointer would be helpful.

Requested Changes: Discussion of Figure H.3 discussing under what circumstances the aligned algorithm can be worse than the vanilla algorithm.

Broader Impact Concerns: None

==================================================

Review 2:

Summary: The paper considers differentially private (DP) variational inference (VI). The paper identifies a problem of "misaligned'' gradients that occurs when the standard (DP-SGD-based) solution is used: the gradients of different components of the loss functions may have different magnitudes and sensitivities, requiring different amounts of privacy noise.
To address this issue, the paper proposes an "aligned gradient procedure" which reduces the variance of the estimates of the gradients of the VI loss. Moreover, the paper proposes using iterate averaging to reduce the variance of the parameter estimate. The efficacy of their methods is evaluated empirically.

Strengths and Weaknesses:

**Strengths:**
- The Algorithms 1 and 2 make sense and are intuitive ways to address the misaligned gradients problem.
- Empirically, the algorithms show improvements over vanilla and preconditioned DPVI

**Weaknesses:**
- The writing is unclear in many places, with long sentences and/or insufficient explanation of the core ideas and/or excessive detail too early. Specifically:
-- The Abstract is too detailed and not accessible. I am a DP expert, and was confused after reading the abstract, as there was a lot of non-standard terminology (e.g. "parameter traces'') used before it was defined. Now imagine someone reading the abstract who is only somewhat familiar with DP and/or VI. I would suggest that you re-write the abstract to focus on: explaining the problem of DP VI clearly; then explain what the previous approaches to tackling DP VI are; then explain the shortcomings of these prior approaches; then explain (at a high level, not in great detail, and without using undefined terminology) how you address these shortcomings.
-- Intro: "which is typically ignored or unknown'' is unclear. I would suggest deleting this and replacing with a new sentence: ``However, the issue of varying sensitivities is often neglected or overlooked.''
-- last sentence of paragraph 1 is unclear. Delete or re-word.
-- The problem of DP VI should be defined early in the introduction, *before* the particular problem of perturbed gradients in DP VI (paragraph 2) is discussed.
-- ``DPVI is a widely applicable...based on DP-SGD'' sentence is out of place. I suggest moving it to the second sentence of paragraph 2.
(More writing issues may be brought up later.)
- Missed references:
-- While not directly relevant, it would be good to cite (e.g. in a sentence at the end of the first paragraph of the Related Work section) works that study the use of gradient clipping for obtaining optimal (at least in reference 2 below) rates in DP heavy-tailed stochastic optimization:
1) Kamath, G., Liu, X. and Zhang, H., 2022, June. Improved rates for differentially private stochastic convex optimization with heavy-tailed data. In International Conference on Machine Learning (pp. 10633-10660). PMLR.
2) Lowy, A. and Razaviyayn, M., 2023, February. Private Stochastic Optimization with Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses. In International Conference on Algorithmic Learning Theory (pp. 986-1054). PMLR.
-- When discussing iterate averaging to reduce noise in parameter estimation, it should be noted that this general idea is not novel and has been used to obtain optimal rates for various problems in DP optimization, e.g.:
1) Bassily, R., Smith, A. and Thakurta, A., 2014, October. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th annual symposium on foundations of computer science (pp. 464-473). IEEE.
2) Bassily, R., Feldman, V., Talwar, K. and Guha Thakurta, A., 2019. Private stochastic convex optimization with optimal rates. Advances in neural information processing systems, 32.
3) Lowy, A. and Razaviyayn, M., 2021.
Private federated learning without a trusted server: Optimal algorithms for convex losses. arXiv preprint arXiv:2106.09779.
- Section 2.2: explanation/motivation/intuition of ELBO (and why it is called ELBOW) would be great
- $\mathbf{\sigma}_q$ is actually a scalar in the diagonal case
- Section 3.1: $\eta$ is random. Thus, the statement ``$\eta_j = O(1)$'' does not make sense without some probabilistic condition.
- In Equation 9, you take absolute value of vector-valued quantities. This does not make sense. Also, since these are random variables, you need some probabilistic condition.
- The claim that as $\sigma_q$ shrinks, $g_s$ becomes small relative to $g_m$ is not precisely stated. Please state this precisely and explain clearly why it is true. Also, you don't address the gradient of the entropy term in your current discussion around Equation 9.
- Aligned gradients: please explain why ``it comes at the cost of increased $\ell_2$-norm of the full gradient.'' This is an important point that needs to be clearly justified.
- DP of Algorithm 1: Although the post-processing argument seems correct, a rigorous proof that Algorithm 1 is DP should be provided somewhere in the main body or appendix.
- Theorem 3.1: The first sentence is not clear/precise. Also, the claim before Theorem 3.1 that the variance is ``reduced'' is not quite established by Theorem 3.1 since the inequality in Equation 15 may not be strict. Either strengthen (15) to a strict inequality if possible, or weaken the claim before Theorem 3.1.
- Section 3.2 needs major improvements. The purpose of the section is unclear at the outset. Even the title of the section is not very informative. Clarify what the section aims to show or accomplish at the beginning of the section. Quantification of uncertainty (e.g. by DP confidence intervals) would be interesting, but it doesn't seem that you do that. Instead, you seem to aim to reduce the variance of your estimator via iterate averaging.
- Experiments: can you add plots showing error vs. epsilon for DPVI?
- experiments: In Figure 1a, where is the initial value of $\sigma_q$? I only see a plot comparing error in estimating mean vs. error in estimating std (which, by the way, is not necessarily the most natural or informative plot to include as your first plot).
- Section 4.3: logistic regression is usually thought of as a machine learning problem, not a VI problem. Please clarify why this is a VI problem. Is the ``Vanilla'' baseline just DP-SGD? Please clarify

Requested Changes: Please see "**Weaknesses**'' above.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: The paper studied differentially private variational inference problems solved by gradient-based methods. A new method called the aligned gradient procedure is proposed, which provides a less noisy gradient estimator. The method can be extended to learn the full-rank covariance approximation. Experiment results on simulated data and real data are provided.

Strengths and Weaknesses:

Strength:
1. The message is clear and simple.
2. Experiments show that the proposed method is effective and efficient.

Weakness:
1. The idea of Alg 1 is natural, and there is no theoretical innovation.
2. My main concern is the assumption 'Assume C is chosen such that the bias induced by clipping is the same in vanilla and aligned DPVI' in Thm 3.1. Is this assumption fair/reasonable? Are there any references? In addition, in the experiments, how were C and C' chosen for the two methods?
3.
Some statements are not rigorous. For example, in the paragraph above Eq. (9), the author stated that "As $\eta\sim N(0,I_d)$, we have $\eta_j=O(1)$". This is not correct; we could only say that, with high probability, $\eta_j\le M$ where $M$ is a constant.

Requested Changes:
1. The related statements about $\eta$ mentioned in Weakness 3 should be revised if I am correct.
2. More discussion on the assumption on $C$ in Thm 3.1.
3. More details for the experiments, i.e., how to choose $C$.

Broader Impact Concerns: No.

==================================================

Metareview:

Recommendation: Accept as is

Comment: All the reviewers agree that the main concerns have been addressed, and the paper is technically solid. The paper is a great contribution to the differentially private variational inference problem. Thus I recommend acceptance.

==================================================
# A Characteristic Function For Shapley-Value-Based Attribution Of Anomaly Scores

Naoya Takeishi *ntake@g.ecc.u-tokyo.ac.jp*
The University of Tokyo
RIKEN Center for Advanced Intelligence Project

Yoshinobu Kawahara *kawahara@ist.osaka-u.ac.jp*
Osaka University
RIKEN Center for Advanced Intelligence Project

Reviewed on OpenReview: *https://openreview.net/forum?id=eLX5XrajXh*

## Abstract

In anomaly detection, the degree of irregularity is often summarized as a real-valued anomaly score. We address the problem of attributing such anomaly scores to input features for interpreting the results of anomaly detection. We particularly investigate the use of the Shapley value for attributing anomaly scores of semi-supervised detection methods. We propose a characteristic function specifically designed for attributing anomaly scores. The idea is to approximate the absence of some features by locally minimizing the anomaly score with regard to the to-be-absent features. We examine the applicability of the proposed characteristic function and other general approaches for interpreting anomaly scores on multiple datasets and multiple anomaly detection methods. The results indicate the potential utility of the attribution methods including the proposed one.

## 1 Introduction

Anomaly detection has been one of the major tasks of machine learning and data mining, and a number of different methods have been proposed (see, e.g., Chandola et al., 2009; Pimentel et al., 2014). Anomaly localization (or root cause analysis) is a closely related task whose goal is to identify which features are the most responsible for the irregularity of anomaly-detected data. It is obviously an essential problem in many applications; understanding why specific instances have been considered anomalous is highly valuable for decision-making.

Rigorous localization of anomalies is possible only with some physical or causal models of a data-generating process (Budhathoki et al., 2022). However, such models are not always available for complex natural and artificial systems, for which learning-based anomaly detection often has a high demand. Hence, we focus on the structure-agnostic interpretation of anomaly detection results, instead of the rigorous anomaly localization based on causal structures. We particularly study methods for post hoc attribution of anomaly scores to input features.

In recent years, post hoc attribution of machine learning models, not only in anomaly detection but in general, has been an active area of study. One of the most popular approaches refers to the notion of the Shapley value, which was originally discussed in game theory (Shapley, 1953) and has been applied to machine learning (Lipovetsky, 2006; Štrumbelj & Kononenko, 2014; Datta et al., 2016; Lundberg & Lee, 2017; Sundararajan et al., 2017; Owen & Prieur, 2017; Ancona et al., 2019; Olsen et al., 2022). It has also been utilized for anomaly interpretation (Antwarg et al., 2021; Giurgiu & Schumann, 2019; Takeishi, 2019). These studies have concluded affirmatively on the use of the Shapley value for the interpretation of anomaly detection based on the attribution of anomaly scores.

The previous studies on using the Shapley value for anomaly score attribution adopted general definitions of the characteristic function of the Shapley value. The characteristic function, namely $v : 2^d \to \mathbb{R}$, is a set function that represents a game's gain under each coalition of players.
In using the Shapley value for attributing machine learning models, it takes a subset of features (and their values) and should return the behavior of the model only with the selected features given as the model's input. Several working definitions of such a characteristic function for general machine learning models have been studied (see, e.g., Lundberg & Lee, 2017; Sundararajan & Najmi, 2020). However, when it comes to applying them to the interpretation of anomaly detection, they do not take into account the specific nature of anomaly scores, which might limit the utility of the attribution method.

In this paper, we propose a definition of the characteristic function for the Shapley value particularly for attributing anomaly scores. The idea is to approximate the absence of some features by minimizing an anomaly score in the proximity of the original data point to be interpreted. As the exact computation of such a characteristic function is usually prohibitively time-consuming, we also present practical relaxed definitions. We empirically examine not only the proposed characteristic function but also some existing definitions of the characteristic function by applying them to anomaly scores. We show the results of experiments on synthetic and real anomalies with multiple datasets and anomaly detection methods to discuss the applicability of attribution methods.

## 2 Background

## 2.1 Anomaly Detection

We focus on the type of anomaly detection methods where some *anomaly score* is computed. Semi-supervised anomaly detection¹ usually falls into this category, where a mechanism to detect anomalies is sought given only normal data.

¹Following the terminology of Chandola et al. (2009); synonymous to novelty detection (e.g., Pimentel et al., 2014).

A typical solution of semi-supervised anomaly detection consists of two phases (see, e.g., Sections 4, 6, and 7 of Chandola et al., 2009). In the first phase (i.e., the training phase), a model, such as a density estimator, a subspace-based model, or a set of clusters, is learned with the normal data. Then, in the second phase (i.e., the test phase), an anomaly score $e : \mathcal{X} \to \mathbb{R}$ is computed using the learned model, where $\mathbf{x} \in \mathcal{X}$ is a test data sample in some data space $\mathcal{X}$ with $d$ features, such as $\mathcal{X} \subset \mathbb{R}^d$ or $\mathcal{X} \subset \{0, 1\}^d$. The anomaly score, $e(\mathbf{x})$, should indicate a large value when $\mathbf{x}$ is anomalous. An alarm is issued if $e(\mathbf{x})$ exceeds some threshold value, $e_{\mathrm{th}}$. Semi-supervised anomaly detection methods are useful in many practical settings, since they do not assume labeled anomalies and generally do not require storing all the data after the training phase.

A popular choice of $e(\mathbf{x})$ is the negative log-likelihood, $e(\mathbf{x}) = -\log q(\mathbf{x})$, where $q$ is some approximation of the data's density by models such as Gaussian mixture models (GMMs). Another popular choice is the reconstruction error of dimensionality reduction methods such as principal component analysis (PCA) or autoencoders, $e(\mathbf{x}) = \|\mathbf{x} - g(f(\mathbf{x}))\|^2$, where $f$ and $g$ are an encoder and a decoder between the data and latent spaces, respectively. Moreover, these two views are not mutually exclusive. For example, Zong et al. (2018) reported a good detection performance by using both the energy of the latent representations and the reconstruction errors of an autoencoder.

A straightforward way to interpret such anomaly scores is to examine decomposed or marginal values of the scores. For example, reconstruction errors in Euclidean space are inherently decomposable into the errors of individual features. We may also refer to the marginal likelihood of each feature if the model admits such decomposition.
However, we cannot always compute such quantities in general; for example, marginal distributions are usually not tractable for nonlinear generative models.

## 2.2 Attribution With The Shapley Value

The Shapley value (Shapley, 1953) is a concept solution of coalitional games. Let $v : 2^d \to \mathbb{R}$ be a set function that represents a game's gain obtained from each coalition (i.e., a subset) of the game's players, and let $D = \{1, \dots, d\}$ denote the set of all players. This function $v$ is called a *characteristic function*. A game is defined as a pair $(v, D)$. The Shapley value of $(v, D)$ is to distribute the total gain $v(D)$ to each player in accordance with each one's contribution. The Shapley value of the player $i \in D$, namely $\phi_i$, is the weighted average of the marginal contributions, that is,

$$\phi_{i}=\sum_{S\subseteq D\setminus\{i\}}\frac{|S|!\,(d-|S|-1)!}{d!}\left(v(S\cup\{i\})-v(S)\right),\tag{1}$$

where $S$ denotes a subset of $D\setminus\{i\} = \{1, \dots, i-1, i+1, \dots, d\}$, and the sum is taken over all such subsets.

The Shapley value has been utilized for interpreting outputs of statistical machine learning (e.g., Lipovetsky, 2006; Štrumbelj & Kononenko, 2014; Datta et al., 2016; Lundberg & Lee, 2017; Sundararajan et al., 2017; Owen & Prieur, 2017; Ancona et al., 2019; Olsen et al., 2022), where the players of a game mean input features, and the gain of the game means the output of a machine learning model. Major challenges in utilizing the Shapley value include the following two points: (i) How to compute the summation over $O(2^d)$ terms (i.e., $\sum_{S\subseteq D\setminus\{i\}}$ in Eq. (1))? (ii) How to define a characteristic function $v$?

The former challenge, the exponential complexity, is a general difficulty and not limited to machine learning interpretation. A common remedy is Monte Carlo approximation (see, e.g., Castro et al., 2017, and references therein). In this work, we also use a Monte Carlo approximation and compute $\{\phi_i\}$ using the reformulation of Eq. (1) as a weighted least squares problem (Charnes et al., 1988; Lundberg & Lee, 2017). We will describe its concrete procedures later in Section 3.2.

The latter challenge, the definition of $v$, arises specifically in interpreting machine learning because $v(S)$ should simulate the "absence" of the features not in $S$ for a machine learning model. It is, in principle, a question that admits no unique solution; the most straightforward definition would be based on re-training of models for all subsets of $D$, which is not realistic in practice. We will see the existing approaches in the next subsection, Section 2.3.

## 2.3 Approaches To Defining Characteristic Function

In the use of the Shapley value for machine learning model attribution in general, a characteristic function $v$ has often been defined in either of the following two approaches:

**Reference-based approach** replaces the values of "absent" features by some reference values (Sundararajan et al., 2017; Lundberg & Lee, 2017; Ancona et al., 2019). Suppose $\mathbf{x} = [x_1, \dots, x_d] \in \mathbb{R}^d$. Let us denote the subvector of $\mathbf{x}$ corresponding to an index set $S \subset \{1, \dots, d\}$ by $\mathbf{x}_S \in \mathbb{R}^{|S|}$. Let $S^c := \{1, \dots, d\}\setminus S$ be the complement of $S$. Then, the value of $v$ for a machine learning model $h : \mathbb{R}^d \to \mathcal{Y}$, where $\mathcal{Y}$ is some output space, is defined as the value of $h$ on a sample where $\mathbf{x}_{S^c}$ is replaced by some reference value $\mathbf{r}_{S^c} \in \mathbb{R}^{|S^c|}$.
A challenge here is to choose a good reference vector $\mathbf{r}_{S^c}$. $\mathbf{r}_{S^c}$ is often set to be zero or the average of $\mathbf{x}_{S^c}$, but there is no unique definition.

**Marginalization-based approach** marginalizes out "absent" features (Štrumbelj & Kononenko, 2014; Datta et al., 2016; Lundberg & Lee, 2017; Olsen et al., 2022). That is, $v$ is computed as the conditional expectation of $h(\mathbf{x})$ given $\mathbf{x}_S$, where $\mathbf{x}_{S^c}$ is marginalized over some distribution $p(\mathbf{x}_{S^c} \mid \mathbf{x}_S)$. A challenge here is the computation of the conditional expectation, which is intractable in general. Typically, it is approximated via nearest neighbors in training data. Meanwhile, Sundararajan & Najmi (2020) argue that the marginalization-based approach loses some nice properties as attribution because it depends not only on the target function $h$ but also on the data distribution. If the features are independent, this approach reduces to (the average of) the reference-based one with multiple reference vectors.

Apart from the general point of view, we now briefly review how the Shapley value has been used for attributing anomaly scores. Giurgiu & Schumann (2019) and Antwarg et al. (2021) adopted the reference-based approach to defining $v$ for anomaly scores. Since a good reference vector $\mathbf{r}_\cdot$ depends on a query data point $\mathbf{x}$ and a target feature set $S$, it should be determined adaptively to both $\mathbf{x}$ and $S$. However, in Antwarg et al. (2021), the reference does not depend on $\mathbf{x}$ nor $S$ in principle. Giurgiu & Schumann (2019) proposed to choose a reference value adaptively using the influence weights between a query data point $\mathbf{x}$ and a training dataset. Such a reference is adaptive to $\mathbf{x}$ but not to $S$, meaning the same reference value is used for every $S$. We also note that their method requires storing a sufficient portion of the training data, which may be undesirable in some applications of semi-supervised anomaly detection.

Takeishi (2019) adopted the marginalization-based approach to defining $v$ for the anomaly score computed with the probabilistic principal component analysis. Because the conditional expectation is available for probabilistic PCA models, the marginalization can be computed exactly. This approach is free from choosing a reference value, but the type of applicable anomaly detection models is obviously restrictive. Moreover, considering conditional distributions under the presence of anomalies is not necessarily meaningful because the learned distribution is no longer reliable for out-of-distribution data points.

Figure 1: Proposed characteristic function, $v(S; \mathbf{x})$ in Eq. (2), for attributing an anomaly score $e : \mathcal{X} \to \mathbb{R}$. $\mathbf{x}$ has $d$ features, which are split into disjoint sets $S \subset \{1, \dots, d\}$ and $S^c = \{1, \dots, d\}\setminus S$. $v(S; \mathbf{x})$ is defined as a local minimum of $e(\mathbf{x})$ in the proximity of $\mathbf{x}$ (i.e., $M_{\mathbf{x}}$) with $\mathbf{x}_S$ fixed at the original. Although the axes of $\mathbf{x}_S$ and $\mathbf{x}_{S^c}$ are depicted as one dimension, they are $|S|$- and $|S^c|$-dimensional in general.

## 3 Anomaly Score Attribution With The Shapley Value

We propose a definition of the characteristic function that takes into account the nature of anomaly scores. We take the reference-based approach; differently from the existing studies, the proposed method determines the reference value, $\mathbf{r}_\cdot$, adaptively to both $\mathbf{x}$ and $S$ and without referring to training data.

## 3.1 A Characteristic Function For Anomaly Scores

Let $e : \mathcal{X} \to \mathbb{R}$ be an anomaly score function that outputs a large value when the input, $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^d$, is anomalous.
We propose to define a characteristic function, which we will denote by $v(S; \mathbf{x})$ to manifest the dependency on $\mathbf{x}$, for an anomaly score $e$ as follows. Recall that we want to design $v(S; \mathbf{x})$ that simulates the "absence" of the features not in $S$. In other words, $v(S; \mathbf{x})$ should represent how anomalous $\mathbf{x}_S$ is with the remaining features $\mathbf{x}_{S^c}$ ignored. To this end, we define $v(S; \mathbf{x})$ as *the smallest value of the anomaly score,* $e$*, achieved in a neighborhood of* $\mathbf{x}$ *with* $\mathbf{x}_S$ *being fixed*. Rephrasing the idea, $v(S; \mathbf{x})$ is defined as follows:

$$v(S;\mathbf{x}):=e(\mathbf{x}^{\star}(S;\mathbf{x})),\quad\text{where}\quad\mathbf{x}^{\star}(S;\mathbf{x}):=\operatorname*{arg\,min}_{\mathbf{y}}\,e(\mathbf{y})\quad\text{s.t.}\quad\mathbf{y}\in M_{\mathbf{x}}\subset\mathcal{X},\ \mathbf{y}_{S}=\mathbf{x}_{S}.\tag{2}$$

The constraint of the optimization, $M_{\mathbf{x}} \subset \mathcal{X}$, is some compact neighborhood of $\mathbf{x} \in \mathcal{X}$. Figure 1 illustrates the idea; while $\mathbf{x}_S$ is fixed at the original value, $\mathbf{x}_{S^c}$ is moved within $M_{\mathbf{x}}$ so that the value of $e$ is minimized. The characteristic function in Eq. (2) enables us to examine how $e(\mathbf{x})$ becomes large solely due to the features in $S$. Although it falls in the category of reference-based characteristic functions, it is different from the general methods in which the absence of features is simulated by the replacement with *predefined* reference values (Antwarg et al., 2021; Giurgiu & Schumann, 2019). The proposed method automatically determines the reference values depending on $\mathbf{x}_S$ via solving the optimization problem in Eq. (2).

Practically, it is unrealistic to determine $M_{\mathbf{x}}$ manually because it should depend both on the geometry of $\mathcal{X}$ and on the property of $e(\mathbf{x})$. Hence, we propose a variant of Eq. (2) that admits less complexity of manual tuning. Instead of taking a minimum in a compact neighborhood, we consider a local minimum of $e$ regularized by the distance from the original $\mathbf{x}_S$. That is, we define a variant of $v$, namely $\hat{v}$, as follows:

$$\hat{v}(S;\mathbf{x}):=e(\hat{\mathbf{x}}^{\star}(S;\mathbf{x})),\quad\text{where}\quad\hat{\mathbf{x}}^{\star}(S;\mathbf{x}):=\operatorname*{arg\,min}_{\mathbf{y}}\ \ell_{S,\mathbf{x}}(\mathbf{y})\quad\text{s.t.}\quad\mathbf{y}\in\mathcal{X},\ \mathbf{y}_{S}=\mathbf{x}_{S},\\ \text{and}\quad\ell_{S,\mathbf{x}}(\mathbf{y}):=e(\mathbf{y})+\frac{\gamma}{|S^{c}|}\sum_{i\in S^{c}}\operatorname{dist}(y_{i},x_{i}).\tag{3}$$

The second term of $\ell_{S,\mathbf{x}}$ is a regularizer. The hyperparameter, $\gamma \geq 0$, controls how far $\mathbf{x}_{S^c}$ can move from the original value and semantically corresponds to the radius of $M_{\mathbf{x}}$ in Eq. (2). Selecting the value of $\gamma$ is not straightforward in general because it depends on the geometry of the data space. A practical suggestion is to roughly set a reference value of $\gamma$ such that the magnitudes of the two terms of $\ell_{S,\mathbf{x}}$ become similar and try different values around it. To evaluate each $\gamma$, one can create anomalous data artificially based on prior knowledge and see how well the artificial anomalies are attributed. Meanwhile, we empirically found that the performance of the attribution method was not so sensitive to the value of $\gamma$, as reported in Appendix B.1. $\operatorname{dist}(\cdot, \cdot)$ is some dissimilarity function in each dimension of $\mathcal{X}$. It is to be defined by a user in accordance with the nature of the data space, $\mathcal{X}$. We set it as the Euclidean distance in the experiments in Section 5.

Comparing Eq. (2) and Eq. (3), we can understand that the constraint in Eq.
(2) (i.e., $\mathbf{y} \in M_{\mathbf{x}}$) is relaxed to be the regularizer in Eq. (3) (i.e., the second term of $\ell_{S,\mathbf{x}}$).

Despite the relaxation from $v$ in Eq. (2) to $\hat{v}$ in Eq. (3), the computation can still be prohibitively heavy for a moderate number of features (e.g., $d \gtrsim 10$) because computing $\hat{v}$ needs to run the local minimization $O(2^d)$ times. We suggest a heuristic relaxation that reduces the number of minimization runs to $O(d)$. The bottleneck lies in computing $\hat{\mathbf{x}}^{\star}(S; \mathbf{x})$, a local minimum of $\ell_{S,\mathbf{x}}$. Hence, instead of $\hat{\mathbf{x}}^{\star}$, we define an "ansatz" $\underline{\mathbf{x}}^{\star}$ as follows:

$$\underline{\mathbf{x}}^{\star}(S;\mathbf{x})=\mathbf{z}\quad\text{s.t.}\quad\mathbf{z}_{S}=\mathbf{x}_{S}\quad\text{and}\quad\mathbf{z}_{S^{c}}=\frac{1}{|S|+1}\left(\hat{\mathbf{x}}^{\star}(\emptyset;\mathbf{x})+\sum_{i\in S}\hat{\mathbf{x}}^{\star}(\{i\};\mathbf{x})\right)_{S^{c}}.\tag{4}$$

The computation process in Eq. (4) can be rephrased in words as follows. The subvector of $\underline{\mathbf{x}}^{\star}(S; \mathbf{x})$ corresponding to $S$ (which we will call the $S$-subvector of $\underline{\mathbf{x}}^{\star}$) is from the original $\mathbf{x}$ (i.e., $\mathbf{x}_S$). This is the same as the $S$-subvector of $\hat{\mathbf{x}}^{\star}$, and the difference between $\hat{\mathbf{x}}^{\star}$ and $\underline{\mathbf{x}}^{\star}$ lies in the remaining part, the $S^c$-subvector. While the $S^c$-subvector of $\hat{\mathbf{x}}^{\star}$ is defined directly as a minimizer of $\ell_{S,\mathbf{x}}$, the $S^c$-subvector of $\underline{\mathbf{x}}^{\star}$ is defined as the average of $\hat{\mathbf{x}}^{\star}$ computed for the singletons of $S$'s elements and the empty set. It means that we need to compute the minimization in Eq. (3) only $|S| + 1$ times and that we can reuse its results for every $S$ afterward to compute $\underline{\mathbf{x}}^{\star}$. Consequently, we define a relaxed characteristic function, namely $\underline{v}$, as

$$\underline{v}(S;\mathbf{x}):=e(\underline{\mathbf{x}}^{\star}(S;\mathbf{x})).\tag{5}$$

The relaxation from $\hat{\mathbf{x}}^{\star}$ in Eq. (3) to $\underline{\mathbf{x}}^{\star}$ in Eq. (4) is heuristic, and the latter is not necessarily an approximation of the former in a proper sense. Meanwhile, we empirically found that the heuristic relaxation still works to some extent, as reported in Section 5. We note the idea of the relaxation as follows. First, recall that the original definition in Eq. (3) is the solution of the minimization problem with all the elements of $\mathbf{x}$ indexed in $S$ fixed. Then, in Eq. (4), we make its surrogate by averaging the local solutions of minimization problems with *each* element of $\mathbf{x}$ indexed in $S$ fixed in each problem. We also count the empty set as one of the cases of this process, hence the $|S| + 1$ cases in Eq. (4).

## 3.2 Overall Algorithm

The main contribution of this paper lies in the definition of the characteristic function for attributing anomaly scores, which we have presented so far. Meanwhile, we note the overall algorithm to approximately compute
The model can be GMMs, autoencoders, their combinations or ensembles, or anything else, and the anomaly score can be negative log-likelihoods, reconstruction errors, or others. - Lines 4–8 of Algorithm 1 perform the Monte Carlo approximation of the Shapley value computation. Upon the approximation, we utilize the weighted least squares formulation of the Shapley value (Charnes et al., 1988; Lundberg & Lee, 2017; Aas et al., 2021). - The only hyperparameter specific to the proposed method is γ. We found that the performance was not sensitive to γ (see the appendix). m is a hyperparameter generally present in Shapley value approximation, and we used the value recommended in Lundberg & Lee (2017), m = 2d + 211. ## 4 Related Work As mentioned earlier, anomaly score attribution with the Shapley value has already been studied (Antwarg et al., 2021; Giurgiu & Schumann, 2019; Takeishi, 2019), though the characteristic functions used in these studies are not necessarily specialized for anomaly scores. We will also try general methods of the Shapley value-based attribution in our experiments in Section 5 for comparison. For semi-supervised anomaly detection, other types of interpretation methods, not explicitly based on the Shapley value, have also been proposed. Siddiqui et al. (2019) formulated the sequential feature explanation, in which they seek a most convincing order of features to explain an anomaly. Zhang et al. (2019) suggested using a linear surrogate model learned via perturbation around a data point for explaining why the data point was regarded anomalous. We will also examine these methods in Section 5. The anomaly attribution method of Idé et al. (2021) is notable for its conceptual similarity with ours. Their method is for attributing anomalies found in a regression model x 7→ y, where y is the regressed variable. It seeks a local correction δ of x such that x + δ maximizes the likelihood p(y | x + δ) under the regression model, and the elements of δ are regarded as attributions. While the targeted type of anomaly detection is different from ours, their idea reminds us of the intermediate quantity of our approach, x ?, in Eq. (2); we seek x ?such that it minimizes the anomaly score in the vicinity of the original input. The method of Idé et al. (2021) could be roughly understood as using δ = x ?(∅; x)−x as attribution. In contrast, we compute the attribution based on the values of x ?(S; x) not only for S = ∅ but also for different configurations of S. Apart from the semi-supervised setting, many studies have been done on the interpretation of unsupervised anomaly detection (e.g., Knorr & Ng, 1999; Kriegel et al., 2012; Keller et al., 2012; Dang et al., 2016; Vinh et al., 2015; Liu et al., 2018; Yepmo et al., 2022) or *outlier detection*. These methods could be adjusted for the semi-supervised setting, but how it should be done depends on each method. We did not include these methods in the comparison in Section 5, since there already are several methods (as exemplified above) Table 1: Datasets properties. d is the number of features. Note that |Dtest| = 2|Dnorm test | = 2|Danom test |. 
Table 1: Dataset properties. $d$ is the number of features. Note that $|\mathcal{D}_{\rm test}| = 2|\mathcal{D}_{\rm test}^{\rm norm}| = 2|\mathcal{D}_{\rm test}^{\rm anom}|$.

| Name | $d$ | Type | $|\mathcal{D}_{\rm train}|$ | $|\mathcal{D}_{\rm valid}|$ | $|\mathcal{D}_{\rm test}|$ |
|------------|-----|--------|------|------|-----|
| Thyroid | 6 | real | 2869 | 717 | 186 |
| BreastW | 9 | real | 164 | 41 | 478 |
| U2R | 10 | real | 12310 | 3078 | 420 |
| Lympho | 59 | binary | 109 | 27 | 12 |
| Musk | 166 | real | 2294 | 574 | 194 |
| Arrhythmia | 274 | real | 256 | 54 | 132 |

Adapting the interpretation methods for outlier detection to semi-supervised anomaly detection would be an independent study.

Budhathoki et al. (2022) address a related problem with a fundamentally different assumption; they assume to have the knowledge of the causal structure behind data. They compute the Shapley value of anomaly scores by referring to a functional causal model. While their method is not applicable to our setting as we assume rather the absence of causal knowledge, exploring new problem settings between the two extrema (complete presence or absence of causal knowledge) would be an interesting future direction.

## 5 Experiments

We present the empirical performance of the proposed method and baseline methods for different datasets and anomaly detectors. We will summarize the implications of the results later in Section 6.

## 5.1 Experimental Setting

## 5.1.1 Datasets

We used six public datasets with different numbers of features, as listed in Table 1. The U2R dataset is a subset of the NSL-KDD dataset (Tavallaee et al., 2009), which is a modified version of the KDDCup'99 data. We created the U2R dataset by extracting the U2R attack type from the NSL-KDD dataset and by eliminating categorical and constant-valued features. The other datasets are from the ODDS repository (Rayana, 2016). For the Lympho dataset, we converted the categorical features into binary by one-hot encoding, which resulted in the 59-dimensional dataset.

We split each dataset into a training set $\mathcal{D}_{\rm train}$, a validation set $\mathcal{D}_{\rm valid}$, and a test set $\mathcal{D}_{\rm test} = \mathcal{D}_{\rm test}^{\rm norm} \cup \mathcal{D}_{\rm test}^{\rm anom}$. We used the whole anomaly part of each dataset as $\mathcal{D}_{\rm test}^{\rm anom}$ and randomly chose the same number of normal data points as $\mathcal{D}_{\rm test}^{\rm norm}$. $\mathcal{D}_{\rm train}$ and $\mathcal{D}_{\rm valid}$ were created from the remaining normal data. We used $\mathcal{D}_{\rm train}$ and $\mathcal{D}_{\rm valid}$ for training and selection of anomaly detectors, respectively, whereas $\mathcal{D}_{\rm test}^{\rm norm}$ and $\mathcal{D}_{\rm test}^{\rm anom}$ were reserved for measuring the performance of attribution methods. We normalized the real-valued datasets using each training set's mean and standard deviation.

## 5.1.2 Anomaly Scores

We examined the following types of anomaly detectors with the corresponding $e(\mathbf{x})$ (details are in the appendix):

GMM $e(\mathbf{x})$ as the energy (i.e., negative log-likelihood) of a GMM.
VAE-r $e(\mathbf{x})$ as the reconstruction error of a VAE.
VAE-e $e(\mathbf{x})$ as the negative evidence lower bound of a VAE.
DAGMM $e(\mathbf{x})$ as the energy of the deep autoencoding Gaussian mixture models (Zong et al., 2018).

## 5.1.3 Attribution Methods

We tried the following three baseline methods:

marg Marginal scores of individual features. We examined the energy of marginal distributions for GMM and the reconstruction error of each feature for *VAE-r*. In principle, this baseline is not applicable to *VAE-e* and *DAGMM* as they do not admit a straightforward decomposition of the anomaly scores.
SFE The sequential feature explanation (Siddiqui et al., 2019). We used the greedy algorithm named "sequential marginal". This method is applicable only to GMM and *VAE-r*, as with marg.
ACE The anomaly contribution explainer (Zhang et al., 2019).
We also tried the following variants of the general Shapley value-based methods:

IG The integrated gradients (Sundararajan et al., 2017), which can be regarded as a realization of the Aumann–Shapley value. We used the mean of the training data as a reference value.
KSH The kernel SHAP (Lundberg & Lee, 2017). As suggested by the authors' implementation, we used the cluster centers computed by k-means (k = 8) on the training data as reference values. We note that Antwarg et al. (2021) also used the kernel SHAP for anomaly scores.
wKSH A variant of the kernel SHAP, namely weighted kernel SHAP, in which we selected reference values by $\mathbf{x}$'s k-nearest neighbors from the training data with k = 8. This approach is similar to the method proposed by Giurgiu & Schumann (2019), where they used the influence weights instead.

Finally, with regard to our proposed method, we tried two variants:

comp Attribution as the absolute value of each element of $\boldsymbol{\delta} = \hat{\mathbf{x}}^{\star}(\emptyset;\mathbf{x}) - \mathbf{x}$, where $\hat{\mathbf{x}}^{\star}$ is the intermediate quantity of the proposed method in Eq. (3). This is conceptually similar to the method of Idé et al. (2021) (anomaly compensation).
ASH The full proposed method, namely anomaly Shapley, as outlined in Algorithm 1. We used the most relaxed definition of our characteristic function, $\underline{v}$ in Eq. (5), and set $\gamma = 0.01$, unless otherwise stated.

## 5.2 Attribution For Synthetic Anomaly

We investigated the performance of the attribution methods under controlled conditions using the normal half of the test data, $\mathcal{D}_{\rm test}^{\rm norm}$. We generated synthetic anomalies by perturbing some features of the normal data and then tried to localize them with the attribution results. We synthesized the anomalies as follows: for every data point of $\mathcal{D}_{\rm test}^{\rm norm}$, we selected a set of $d_{\rm anom}$ features and perturbed the values of the selected features by adding noise drawn uniformly from $[-2,-1] \cup [1,2]$ (for real-valued datasets). We set this magnitude of the noise because the real-valued datasets were normalized to have unit standard deviation. For the binary-valued dataset (i.e., Lympho), instead of adding real-valued noise, we flipped the values of the selected features; a code sketch of this protocol and of the evaluation metric is given below. All the anomaly detectors achieved similarly good detection performance, probably because the synthesized anomalies were obvious, so we focus only on the attribution performance. We defer the results with $d_{\rm anom} > 1$ to the appendix, though the overall tendency is the same as what we observed from the results with $d_{\rm anom} = 1$.

For $d_{\rm anom} = 1$, we report the mean reciprocal rank (MRR) of the attribution of the perturbed feature. This is the average of the reciprocal of the rank of the perturbed feature's attribution in descending order, so, if the largest value is attributed to the perturbed feature for every data point, then the MRR becomes 1. The MRR values are detailed for all the datasets and the detectors in Table 2. The proposed approach, ASH, is better than or comparable with the other methods in many dataset–detector pairs. It is worth noting that the simple strategy, marg, works to some extent in most cases when it is applicable, though an obvious drawback is its limited applicability. We report the result with another criterion, Hits@3 (i.e., the proportion of samples where the perturbed feature was in the top-3 attribution), in Table 3 for completeness, where we observe the same tendency.
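The following is a minimal NumPy sketch of the perturbation protocol and the MRR computation; the function names are illustrative, and the attribution method itself is left abstract.

```python
# A minimal sketch of the synthetic-anomaly protocol and the MRR metric.
# Assumes standardized real-valued data X of shape (N, d); names illustrative.
import numpy as np

def perturb(X, d_anom=1, seed=0):
    """Add noise drawn uniformly from [-2,-1] U [1,2] to d_anom random features."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    idx = np.stack([rng.choice(X.shape[1], size=d_anom, replace=False)
                    for _ in range(len(X))])
    noise = rng.uniform(1.0, 2.0, size=idx.shape) * rng.choice([-1.0, 1.0],
                                                               size=idx.shape)
    for n in range(len(X)):
        X[n, idx[n]] += noise[n]
    return X, idx

def mean_reciprocal_rank(attr, true_idx):
    """MRR of the single perturbed feature (d_anom = 1), descending order."""
    order = np.argsort(-np.abs(attr), axis=1)
    ranks = np.argmax(order == true_idx[:, None], axis=1) + 1
    return float(np.mean(1.0 / ranks))

# Usage: X_anom, idx = perturb(X_test_norm); attr = some_attribution(X_anom)
#        mrr = mean_reciprocal_rank(attr, idx[:, 0])
```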
Table 2: MRR (the larger, the better) of the synthetic anomalies, for $d_{\rm anom} = 1$. The percentiles shown at the bottom of the table are statistics of the values over all the dataset–detector pairs. We show the percentiles for marg and SFE just for information and do not compare them with the other methods' statistics because the applicable detectors are different. The same applies to the following tables.

| Dataset | Detector | marg | SFE | ACE | IG | KSH | wKSH | comp | ASH |
|---|---|---|---|---|---|---|---|---|---|
| Thyroid | GMM | 0.57 | 0.59 | 0.73 | 0.70 | 0.48 | 0.71 | 0.48 | 0.78 |
| | VAE-r | 0.75 | 0.75 | 0.77 | 0.75 | 0.58 | 0.77 | 0.75 | 0.75 |
| | VAE-e | - | - | 0.74 | 0.76 | 0.56 | 0.77 | 0.46 | 0.75 |
| | DAGMM | - | - | 0.39 | 0.23 | 0.38 | 0.46 | 0.66 | 0.51 |
| Breastw | GMM | 0.78 | 0.79 | 0.77 | 0.78 | 0.42 | 0.76 | 0.47 | 0.78 |
| | VAE-r | 0.67 | 0.67 | 0.71 | 0.83 | 0.66 | 0.81 | 0.77 | 0.77 |
| | VAE-e | - | - | 0.72 | 0.86 | 0.66 | 0.80 | 0.52 | 0.85 |
| | DAGMM | - | - | 0.37 | 0.13 | 0.38 | 0.68 | 0.49 | 0.57 |
| U2R | GMM | 0.84 | 0.82 | 0.70 | 0.86 | 0.36 | 0.90 | 0.57 | 0.86 |
| | VAE-r | 0.84 | 0.84 | 0.88 | 0.87 | 0.36 | 0.91 | 0.81 | 0.89 |
| | VAE-e | - | - | 0.83 | 0.89 | 0.37 | 0.90 | 0.41 | 0.88 |
| | DAGMM | - | - | 0.58 | 0.11 | 0.29 | 0.74 | 0.78 | 0.71 |
| Lympho | GMM | 0.48 | 0.56 | 0.16 | 0.89 | 0.08 | 0.08 | 0.15 | 0.73 |
| | VAE-r | 0.92 | 0.92 | 0.16 | 0.66 | 0.08 | 0.08 | 0.03 | 0.87 |
| | VAE-e | - | - | 0.16 | 0.69 | 0.08 | 0.08 | 0.03 | 0.83 |
| | DAGMM | - | - | 0.08 | 0.12 | 0.08 | 0.08 | 0.16 | 0.37 |
| Musk | GMM | 0.08 | 0.10 | 0.24 | 0.28 | 0.20 | 0.92 | 0.11 | 0.97 |
| | VAE-r | 0.98 | 0.98 | 0.60 | 0.80 | 0.54 | 0.98 | 0.20 | 0.98 |
| | VAE-e | - | - | 0.61 | 0.80 | 0.54 | 0.98 | 0.11 | 0.97 |
| | DAGMM | - | - | 0.10 | 0.03 | 0.06 | 0.47 | 0.11 | 0.28 |
| Arrhythmia | GMM | 0.22 | 0.24 | 0.25 | 0.42 | 0.09 | 0.48 | 0.08 | 0.49 |
| | VAE-r | 0.72 | 0.72 | 0.35 | 0.54 | 0.36 | 0.58 | 0.17 | 0.71 |
| | VAE-e | - | - | 0.35 | 0.54 | 0.36 | 0.58 | 0.18 | 0.72 |
| | DAGMM | - | - | 0.13 | 0.00 | 0.06 | 0.22 | 0.08 | 0.17 |
| 25th percentile | | (0.55) | (0.58) | 0.22 | 0.27 | 0.09 | 0.47 | 0.11 | 0.67 |
| 50th percentile | | (0.73) | (0.73) | 0.48 | 0.69 | 0.36 | 0.72 | 0.30 | 0.76 |
| 75th percentile | | (0.84) | (0.82) | 0.72 | 0.81 | 0.49 | 0.83 | 0.53 | 0.86 |

## 5.3 Comparison To Attribution Of Supervised Classifier

For assessing the performance of the attribution methods in a more realistic situation, we use the other half of the test data, $\mathcal{D}_{\rm test}^{\rm anom}$. Although we know that $\mathcal{D}_{\rm test}^{\rm anom}$ comprises somewhat anomalous data points, we do not know which features are most anomalous for each data point, so the ground truth of anomaly score attribution is not available. Hence, we resort to comparing the attributions of the semi-supervised anomaly scores with the attribution of a supervised classifier's outputs, expecting that the supervised model can capture richer information on the data than the semi-supervised models can, since the former is explicitly informed by both normal and anomalous labeled data, while the latter is only informed by normal data.

For each dataset, we first train a binary classifier (specifically, an SVM with the RBF kernel) using all the data, with the two classes being normal ($\mathcal{D}_{\rm train} \cup \mathcal{D}_{\rm valid} \cup \mathcal{D}_{\rm test}^{\rm norm}$) and anomalous ($\mathcal{D}_{\rm test}^{\rm anom}$).² As the two classes are imbalanced, we preprocessed the data using SMOTE (Chawla et al., 2002).
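A hedged sketch of this reference classifier follows, assuming scikit-learn and imbalanced-learn; `X_normal` and `X_anom` are synthetic placeholders for the two class matrices, and the exact SVM hyperparameters used in the paper are not specified, so the defaults below are illustrative.

```python
# A hedged sketch of the supervised reference classifier; see lead-in.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(200, 6))          # placeholder for the normal data
X_anom = rng.normal(2.0, 1.0, size=(20, 6))   # placeholder for D_test^anom

X = np.vstack([X_normal, X_anom])
y = np.r_[np.zeros(len(X_normal)), np.ones(len(X_anom))]

# Rebalance the (heavily imbalanced) classes, then fit the RBF-kernel SVM.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
clf = SVC(kernel="rbf", probability=True).fit(X_bal, y_bal)
```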
We then compute the attribution of the classifier's output for each data point of $\mathcal{D}_{\rm test}^{\rm anom}$ using kernel SHAP (Lundberg & Lee, 2017) and choose the feature with the largest absolute value of the attribution as the most anomalous feature for each data point. We only use instances for which the following holds:

$$|\phi_{i_{1}}^{\mathrm{sup}}|\geq2|\phi_{i_{2}}^{\mathrm{sup}}|,\tag{6}$$

where $\phi_{i_1}^{\mathrm{sup}}$ and $\phi_{i_2}^{\mathrm{sup}}$ respectively denote the largest and the second-largest elements of the set $\{\phi_1^{\mathrm{sup}}, \ldots, \phi_d^{\mathrm{sup}}\}$, which is the set of the supervised classifier's attributions to the $d$ features. With Eq. (6), we only use instances for which the $i_1$-th feature is significantly more attributed than the runner-up $i_2$-th feature. We finally evaluate how well such features can be localized by the attributions of the semi-supervised anomaly scores computed by the methods listed in Section 5.1.3.

²We use $\mathcal{D}_{\rm test}^{\rm anom}$ both for training and attribution, which is not problematic because our purpose does not lie in predicting the label by the supervised classifier; it just works as a reference of attribution.
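Continuing the previous sketch, the reference attribution and the Eq. (6) filter could look as follows; this assumes the `shap` package, and `phi_sup`, `keep`, and `ref_feature` are illustrative names.

```python
# Continues the previous sketch: kernel SHAP attribution of the classifier's
# output, followed by the Eq. (6) instance filter.
import numpy as np
import shap

explainer = shap.KernelExplainer(lambda Z: clf.predict_proba(Z)[:, 1],
                                 shap.kmeans(X_bal, 8))
phi_sup = np.abs(np.asarray(explainer.shap_values(X_anom)))  # shape (N, d)

a = np.sort(phi_sup, axis=1)
keep = a[:, -1] >= 2.0 * a[:, -2]               # Eq. (6): top >= 2 * runner-up
ref_feature = np.argmax(phi_sup[keep], axis=1)  # per-instance golden standard
```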
Table 3: Hits@3 (the larger, the better) of the synthetic anomalies, for $d_{\rm anom} = 1$.

| Dataset | Detector | marg | SFE | ACE | IG | KSH | wKSH | comp | ASH |
|---|---|---|---|---|---|---|---|---|---|
| Thyroid | GMM | 0.79 | 0.81 | 0.85 | 0.89 | 0.66 | 0.89 | 0.46 | 0.88 |
| | VAE-r | 0.86 | 0.86 | 0.88 | 0.86 | 0.78 | 0.84 | 0.86 | 0.86 |
| | VAE-e | - | - | 0.85 | 0.85 | 0.72 | 0.85 | 0.44 | 0.85 |
| | DAGMM | - | - | 0.39 | 0.15 | 0.48 | 0.47 | 0.79 | 0.57 |
| Breastw | GMM | 0.85 | 0.90 | 0.85 | 0.88 | 0.55 | 0.82 | 0.44 | 0.88 |
| | VAE-r | 0.78 | 0.78 | 0.76 | 0.92 | 0.71 | 0.89 | 0.89 | 0.80 |
| | VAE-e | - | - | 0.78 | 0.95 | 0.72 | 0.88 | 0.51 | 0.93 |
| | DAGMM | - | - | 0.32 | 0.03 | 0.37 | 0.69 | 0.53 | 0.68 |
| U2R | GMM | 0.97 | 0.93 | 0.74 | 0.97 | 0.47 | 0.91 | 0.57 | 0.97 |
| | VAE-r | 0.97 | 0.97 | 0.94 | 0.96 | 0.47 | 0.90 | 0.97 | 0.96 |
| | VAE-e | - | - | 0.95 | 0.96 | 0.49 | 0.90 | 0.41 | 0.96 |
| | DAGMM | - | - | 0.60 | 0.00 | 0.28 | 0.74 | 0.93 | 0.83 |
| Lympho | GMM | 0.41 | 0.63 | 0.14 | 0.98 | 0.06 | 0.06 | 0.14 | 0.98 |
| | VAE-r | 0.95 | 0.95 | 0.14 | 0.73 | 0.06 | 0.06 | 0.00 | 0.96 |
| | VAE-e | - | - | 0.14 | 0.82 | 0.06 | 0.06 | 0.00 | 0.84 |
| | DAGMM | - | - | 0.06 | 0.13 | 0.06 | 0.06 | 0.14 | 0.46 |
| Musk | GMM | 0.05 | 0.07 | 0.24 | 0.28 | 0.25 | 0.93 | 0.14 | 0.97 |
| | VAE-r | 0.98 | 0.98 | 0.64 | 0.82 | 0.60 | 0.99 | 0.24 | 0.99 |
| | VAE-e | - | - | 0.63 | 0.82 | 0.59 | 0.99 | 0.14 | 0.97 |
| | DAGMM | - | - | 0.10 | 0.03 | 0.04 | 0.56 | 0.14 | 0.30 |
| Arrhythmia | GMM | 0.21 | 0.23 | 0.27 | 0.49 | 0.07 | 0.57 | 0.07 | 0.53 |
| | VAE-r | 0.75 | 0.75 | 0.41 | 0.70 | 0.45 | 0.71 | 0.18 | 0.78 |
| | VAE-e | - | - | 0.41 | 0.70 | 0.45 | 0.71 | 0.22 | 0.79 |
| | DAGMM | - | - | 0.14 | 0.00 | 0.04 | 0.26 | 0.06 | 0.17 |
| 25th percentile | | (0.67) | (0.72) | 0.21 | 0.25 | 0.07 | 0.54 | 0.14 | 0.76 |
| 50th percentile | | (0.82) | (0.83) | 0.51 | 0.82 | 0.46 | 0.78 | 0.32 | 0.85 |
| 75th percentile | | (0.95) | (0.94) | 0.80 | 0.90 | 0.59 | 0.89 | 0.54 | 0.96 |

Table 4: MRR of the most-attributed features by the supervised classifier for real anomalies.

| Dataset | Detector | marg | SFE | ACE | IG | KSH | wKSH | comp | ASH |
|---|---|---|---|---|---|---|---|---|---|
| Thyroid | GMM | 0.52 | 0.44 | 0.64 | 0.57 | 0.54 | 0.75 | 0.68 | 0.91 |
| | VAE-r | 0.87 | 0.87 | 0.86 | 0.87 | 0.71 | 0.76 | 0.87 | 0.87 |
| | VAE-e | - | - | 0.82 | 0.87 | 0.77 | 0.75 | 0.68 | 0.87 |
| | DAGMM | - | - | 0.47 | 0.18 | 0.42 | 0.36 | 0.64 | 0.68 |
| U2R | GMM | 0.72 | 0.69 | 0.48 | 0.73 | 0.37 | 0.45 | 0.82 | 0.73 |
| | VAE-r | 0.64 | 0.64 | 0.70 | 0.65 | 0.38 | 0.37 | 0.71 | 0.66 |
| | VAE-e | - | - | 0.69 | 0.69 | 0.39 | 0.40 | 0.14 | 0.66 |
| | DAGMM | - | - | 0.55 | 0.13 | 0.54 | 0.43 | 0.70 | 0.51 |
| Musk | GMM | 0.02 | 0.03 | 0.26 | 0.01 | 0.05 | 0.16 | 0.64 | 0.04 |
| | VAE-r | 0.06 | 0.06 | 0.24 | 0.14 | 0.20 | 0.12 | 0.63 | 0.10 |
| | VAE-e | - | - | 0.18 | 0.14 | 0.20 | 0.12 | 0.64 | 0.01 |
| | DAGMM | - | - | 0.24 | 0.01 | 0.02 | 0.18 | 0.64 | 0.14 |
| 25th percentile | | (0.17) | (0.15) | 0.26 | 0.14 | 0.20 | 0.17 | 0.64 | 0.13 |
| 50th percentile | | (0.58) | (0.54) | 0.52 | 0.38 | 0.39 | 0.39 | 0.66 | 0.66 |
| 75th percentile | | (0.70) | (0.68) | 0.69 | 0.70 | 0.54 | 0.53 | 0.70 | 0.77 |

In Table 4, we report the MRR of the attribution of the semi-supervised anomaly scores, with the features most attributed by the supervised classifier being the golden standard. The reciprocal rank is 1 when the attribution of the supervised classifier and that of the semi-supervised anomaly score assign the largest value to the same feature. Although ASH is competitive or better compared to the other methods for the Thyroid and U2R datasets, it (as well as the other methods except comp) fails for the Musk dataset. Meanwhile, the comp baseline method is still successful to some extent for the Musk dataset. This is interesting, but the current data resource does not allow further analysis of why this was the case. We report the results only for the three datasets because, for the remaining datasets, only a few instances of each of them satisfied the condition in Eq. (6), and thus the performance comparison for those datasets was less reliable.

In Fig. 2, we show examples of the anomaly score attributions. The right panel of the figure reports the normalized absolute values of the attributions for the GMM detector on the Thyroid dataset. Each row corresponds to one of four data points, which are emphasized in the scatter plot matrix in the left panel by the associated markers. Here, we note that all of these four points are anomalous data points; attributing anomaly scores only makes sense for anomalous queries. In the first two examples, the largest attribution by ASH successfully coincides with that of the supervised classifier. In contrast, they do not match in the last two examples.
We would note the following observations:

- The Shapley value-based attributions (i.e., IG, KSH, wKSH, and ASH) tend to give relatively sparse (or close to sparse) attributions compared to the other methods. This tendency was commonly observed in other examples not shown here, too.
- The attributions by KSH for the first two examples are similar. In contrast, the attributions by wKSH and ASH are very different between these examples. Such a difference in behavior can be explained by the fact that while KSH uses a fixed reference value in computing the characteristic function, the reference values used by wKSH and ASH are adaptive to the query data point.
- In the third example, the attribution by ASH significantly deviates from the supervised classifier's attribution. This may be due to the extreme value of the corresponding data point in the scatter plot matrix. At the same time, ASH's attribution (largest for Feature #3) is not necessarily meaningless because the point exhibits an extreme value also for Feature #3.
- In contrast to the third example, the data point of the fourth example has no clear extreme values and rather lies in the proximity of the points of the first two examples. This fact makes the reason why ASH failed with this point less clear. Meanwhile, the failure is lighter than in the previous case because Feature #0 has the second-largest value in the attribution by ASH.

## 5.4 Validity Of Heuristic Relaxation

Recall that in Section 3.1, we introduced a heuristic relaxation of the characteristic function. We investigated the validity of the relaxation by comparing the original relaxed characteristic function (i.e., $\hat{v}$ in Eq. (3)) and the heuristically approximated one (i.e., $\underline{v}$ in Eq. (5)). In Table 5, we compare the Shapley value-based attribution using $\hat{v}$ and $\underline{v}$. In terms of the computation time for each $\mathbf{x}$, the attribution with $\underline{v}$ is significantly faster than that with $\hat{v}$, which is a natural consequence of the definitions. Meanwhile, the resulting MRRs with each characteristic function (in the setting of the synthetic anomaly experiment in Section 5.2) are similar to each other. We report the comparison only for the two datasets in Table 5 because the computation of $\hat{v}$ for datasets with more features was basically infeasible.

## 6 Discussion

The proposed strategy, ASH, resulted in relatively stable, good performance. Nonetheless, it sometimes failed while others were good (though the same could be said for all the methods; no one always wins). These observations tell us that using multiple attribution methods with multiple detection methods in ensembles may be good practice for interpreting anomaly detection.

![11_image_0.png](11_image_0.png)

Figure 2: (*Left*) Scatter plot matrix of the Thyroid dataset ($d = 6$). The blue and orange points are from $\mathcal{D}_{\rm test}^{\rm norm}$ and $\mathcal{D}_{\rm test}^{\rm anom}$, respectively. The labels from 0 to 5 are the feature indices. (*Right*) Examples of the anomaly score attributions for the GMM detector. The normalized absolute values of the attributions are reported. "Sup" refers to the reference attribution of the supervised classifier, $\{\phi_i^{\mathrm{sup}}\}$. Each row corresponds to a data point of the Thyroid dataset. The associated markers point to the locations of the corresponding data points in the left panel. Note that the points with those markers are anomalous data.

Table 5: Comparison of the two definitions of the characteristic functions for anomaly scores, $\hat{v}$ (without the heuristics, in Eq. (3)) and $\underline{v}$ (with the heuristics, in Eq. (5)).
The per-sample computation time and the performance of the attribution for the synthetic anomaly with $d_{\rm anom} = 1$ are reported.

| Dataset | Detector | Time with $\hat{v}$ (STD) [sec/sample] | Time with $\underline{v}$ (STD) [sec/sample] | MRR with $\hat{v}$ | MRR with $\underline{v}$ |
|---|---|---|---|---|---|
| Thyroid ($d = 6$) | GMM | 26.40 (13.19) | 6.19 (3.05) | 0.76 | 0.78 |
| | VAE-r | 23.50 (12.06) | 5.58 (3.04) | 0.75 | 0.75 |
| | VAE-e | 23.81 (13.50) | 5.42 (3.28) | 0.75 | 0.75 |
| | DAGMM | 51.52 (37.19) | 12.45 (8.43) | 0.57 | 0.51 |
| Breastw ($d = 9$) | GMM | 278.84 (169.07) | 7.10 (4.28) | 0.78 | 0.78 |
| | VAE-r | 397.87 (235.73) | 14.76 (8.21) | 0.81 | 0.77 |
| | VAE-e | 478.76 (272.03) | 15.86 (8.94) | 0.84 | 0.85 |
| | DAGMM | 1071.99 (596.88) | 18.37 (12.48) | 0.57 | 0.57 |

Analysis of each method's failure is an important question, though it is out of the scope of this paper because such failure analysis is meaningful only with in-depth experimentation for each particular anomaly detector, characteristic function, data, and anomaly type. This paper, instead, has provided a general overview of the relevant attribution methods to investigate their applicability. Although we provided examples of attributions in Fig. 2, we think it is not safe to say anything more detailed than what we listed in Section 5.3 because the visualization reveals only limited aspects of the dataset, and the true cause of anomalies cannot necessarily be ascribed to a single feature as we did.

In the experiment, we also observed that the simplest approach, marg, worked well in some cases. It is, in a sense, reassuring because marg has been the only way of anomaly attribution practiced in many use cases for a long time. However, it was sometimes substantially outperformed by other methods and can only be computed for limited types of anomaly scores, which motivates the use of the other attribution methods, including the proposed one. Another general observation is that the anomaly score by *DAGMM* was relatively difficult to attribute, especially when $d > 10$, probably because of its strong non-additive nature. Such "attribution hardness" would be an interesting topic of future studies. Finally, as stated earlier, the evaluation using the real anomaly data has inherent limitations (as is often the case with the evaluation of interpretation methods), since the ground truth was a surrogate. More specific evaluations with real anomalies should be done in each application domain with experts' supervision.

## Broader Impact Statement

This paper presents some methods for interpreting anomaly detection results, which can be utilized in various real-world domains including industrial sectors. The users must be constantly aware of the heuristic and data-driven nature of the methods; the correctness of the interpretation can never be guaranteed automatically. The methods can be useful in helping the user's decision-making processes but cannot replace any critical roles of human operators. Moreover, as advised in the paper, multiple attribution methods should be used together in ensembles for gaining stability.

## Acknowledgments

The major part of this work was conducted when the first author was working at RIKEN Center for Advanced Intelligence Project. Afterward, major revision and additional experiments were done while the first author was at the University of Applied Sciences Western Switzerland.
## References

Kjersti Aas, Martin Jullum, and Anders Løland. Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. *Artificial Intelligence*, 298:103502, 2021.

Marco Ancona, Cengiz Öztireli, and Markus Gross. Explaining deep neural networks with a polynomial time algorithm for Shapley values approximation. In *Proceedings of the 36th International Conference on Machine Learning*, pp. 272–281, 2019.

Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, and Lior Rokach. Explaining anomalies detected by autoencoders using Shapley Additive Explanations. *Expert Systems with Applications*, 186(30):115736, 2021.

Kailash Budhathoki, Lenon Minorics, Patrick Blöbaum, and Dominik Janzing. Causal structure-based root cause analysis of outliers. In *Proceedings of the 39th International Conference on Machine Learning*, pp. 2357–2369, 2022.

Javier Castro, Daniel Gómez, Elisenda Molina, and Juan Tejada. Improving polynomial estimation of the Shapley value by stratified random sampling with optimum allocation. *Computers & Operations Research*, 82:180–188, 2017.

Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. *ACM Computing Surveys*, 41(3):1–58, 2009.

A. Charnes, B. Golany, M. Keane, and J. Rousseau. Extremal principle solutions of games in characteristic function form: Core, Chebychev and Shapley value generalizations. In *Econometrics of Planning and Efficiency*, volume 11 of *Advanced Studies in Theoretical and Applied Econometrics*, pp. 123–133. 1988.

Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. *Journal of Artificial Intelligence Research*, 16:321–357, 2002.

Xuan-Hong Dang, Arlei Silva, Ambuj Singh, Ananthram Swami, and Prithwish Basu. Outlier detection from network data with subnetwork interpretation. In *Proceedings of the 16th IEEE International Conference on Data Mining*, pp. 847–852, 2016.

Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence. In *Proceedings of the 2016 IEEE Symposium on Security and Privacy*, pp. 598–617, 2016.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

Ioana Giurgiu and Anika Schumann. Additive explanations for anomalies detected from multivariate temporal data. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pp. 2245–2248, 2019.

Tsuyoshi Idé, Amit Dhurandhar, Jiří Navrátil, Moninder Singh, and Naoki Abe. Anomaly attribution with likelihood compensation. In *Proceedings of the 35th AAAI Conference on Artificial Intelligence*, pp. 4131–4138, 2021.

Fabian Keller, Emmanuel Müller, and Klemens Böhm. HiCS: High contrast subspaces for density-based outlier ranking. In *Proceedings of the 28th IEEE International Conference on Data Engineering*, pp. 1037–1048, 2012.

Edwin M. Knorr and Raymond T. Ng. Finding intensional knowledge of distance-based outliers. In *Proceedings of the 25th International Conference on Very Large Data Bases*, pp. 211–222, 1999.

Hans-Peter Kriegel, Peer Kröger, Erich Schubert, and Arthur Zimek. Outlier detection in arbitrarily oriented subspaces. In *Proceedings of the 12th IEEE International Conference on Data Mining*, pp. 379–388, 2012.

Stan Lipovetsky. Entropy criterion in logistic regression and Shapley value of predictors. *Journal of Modern Applied Statistical Methods*, 5(1):95–106, 2006.

Ninghao Liu, Donghwa Shin, and Xia Hu.
Contextual outlier interpretation. In *Proceedings of the 27th International Joint Conference on Artificial Intelligence*, pp. 2461–2467, 2018.

Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In *Advances in Neural Information Processing Systems 30*, pp. 4765–4774, 2017.

Lars Henry Berge Olsen, Ingrid Kristine Glad, Martin Jullum, and Kjersti Aas. Using Shapley values and variational autoencoders to explain predictive models with dependent mixed features. *Journal of Machine Learning Research*, 23(213):1–51, 2022.

Art B. Owen and Clémentine Prieur. On Shapley value for measuring importance of dependent inputs. *SIAM/ASA Journal on Uncertainty Quantification*, 5(1):986–1002, 2017.

Marco A. F. Pimentel, David A. Clifton, Lei Clifton, and Lionel Tarassenko. A review of novelty detection. *Signal Processing*, 99:215–249, 2014.

Shebuti Rayana. ODDS library, 2016. URL http://odds.cs.stonybrook.edu.

Lloyd S. Shapley. A value for n-person games. In *Contributions to the Theory of Games*, volume II of *Annals of Mathematics Studies*, pp. 307–317. Princeton University Press, 1953.

Md Amran Siddiqui, Alan Fern, Thomas G. Dietterich, and Weng-Keen Wong. Sequential feature explanations for anomaly detection. *ACM Transactions on Knowledge Discovery from Data*, 13(1):1–22, 2019.

Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. *Knowledge and Information Systems*, 41(3):647–665, 2014.

Mukund Sundararajan and Amir Najmi. The many Shapley values for model explanation. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 9269–9278, 2020.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 3319–3328, 2017.

Naoya Takeishi. Shapley values of reconstruction errors of PCA for explaining anomaly detection. In *Proceedings of the 2019 International Conference on Data Mining Workshops*, pp. 793–798, 2019.

Mahbod Tavallaee, Ebrahim Bagheri, Wei Lu, and Ali A. Ghorbani. A detailed analysis of the KDD CUP 99 data set. In *Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications*, 2009.

Nguyen Xuan Vinh, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, and Jian Pei. Scalable outlying-inlying aspects discovery via feature ranking. In *Advances in Knowledge Discovery and Data Mining*, volume 9078 of *Lecture Notes in Computer Science*, pp. 422–434. 2015.

Véronne Yepmo, Grégory Smits, and Olivier Pivert. Anomaly explanation: A review. *Data & Knowledge Engineering*, 137:101946, 2022.

Xiao Zhang, Manish Marwah, I.-ta Lee, Martin Arlitt, and Dan Goldwasser. ACE - An anomaly contribution explainer for cyber-security applications. In *Proceedings of the 2019 IEEE International Conference on Big Data*, pp. 1991–2000, 2019.

Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In *Proceedings of the 6th International Conference on Learning Representations*, 2018.

## A Experimental Settings

## A.1 Datasets

Thyroid is originally from the UCI machine learning repository (Dua & Graff, 2017), and we used the reformatted dataset provided as a part of the ODDS repository (Rayana, 2016). The reformatted dataset comprises six real-valued features, eliminating the 15 categorical features of the original.
The original task for which the dataset was prepared is to determine whether a patient is hypothyroid or not. For anomaly detection purposes, within the three classes of normal functioning, subnormal functioning, and hyperfunction, the first two are used as normal data, and the last as anomalous data.

BreastW is originally from the UCI machine learning repository, and we used the reformatted dataset provided as a part of the ODDS repository. The dataset comprises nine features that take categorical values from 1 to 10. The original problem is the classification between benign and malignant classes. For anomaly detection purposes, the malignant class is used as anomalous data.

U2R is a part of the NSL-KDD dataset (Tavallaee et al., 2009), which is a modified version of the KDD Cup 1999 dataset. From the NSL-KDD dataset, we used the part of the dataset corresponding to the U2R attack type. We eliminated the six categorical features and a real-valued feature that does not change in the dataset, which resulted in the following ten features (in the original names):

1. *duration*
2. *hot*
3. *num_compromised*
4. *root_shell*
5. *num_root*
6. *num_file_creations*
7. *srv_count*
8. *dst_host_count*
9. *dst_host_srv_count*
10. *dst_host_same_src_port_rate*

Lympho is originally from the UCI machine learning repository, and we used the reformatted dataset provided as a part of the ODDS repository. The dataset comprises 18 categorical features, and we transformed them by one-hot encoding, which resulted in the 59-dimensional dataset. The original task is the classification between four classes, two of which are quite small. For anomaly detection purposes, these small classes are used as anomalous data.

Musk is originally from the UCI machine learning repository, and we used the reformatted dataset provided as a part of the ODDS repository. The dataset comprises 166 real-valued features. The original task is to classify molecules into musk and non-musk classes. For anomaly detection purposes, three non-musk classes are used as normal data, and two musk classes as anomalous data.

Arrhythmia is originally from the UCI machine learning repository, and we used the reformatted dataset provided as a part of the ODDS repository. The reformatted dataset comprises 274 real-valued features. The original task is a 16-class classification to distinguish between the presence and absence of cardiac arrhythmia. For anomaly detection purposes, the eight smallest classes are used as anomalous data.

## A.2 Anomaly Detection Methods

## A.2.1 Gaussian Mixture Models (GMMs)

We selected the best number of mixture components, $K$, from 2, 3, or 4, based on the validation likelihood. The corresponding anomaly score is the energy (i.e., negative log-likelihood) of the GMM:

$$e(\mathbf{x})=-\log\sum_{k=1}^{K}\pi_{k}\exp\Bigg(-\frac{d\log(2\pi)}{2}-\frac{1}{2}\log\det(\mathbf{\Sigma}_{k})-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_{k})^{\mathsf{T}}\mathbf{\Sigma}_{k}^{-1}(\mathbf{x}-\boldsymbol{\mu}_{k})\Bigg),$$

where $\{\pi_1,\ldots,\pi_K\}$ is the set of mixture weights, $\{\boldsymbol{\mu}_1,\ldots,\boldsymbol{\mu}_K\}$ is the set of means, and $\{\mathbf{\Sigma}_1,\ldots,\mathbf{\Sigma}_K\}$ is the set of covariance matrices.
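The energy above translates directly into code; a minimal NumPy/SciPy sketch (with the fitted parameters `pi`, `mus`, `Sigmas` assumed given) is:

```python
# A direct NumPy/SciPy transcription of the GMM energy above; the mixture
# parameters (pi, mus, Sigmas) are assumed to come from a fitted GMM.
import numpy as np
from scipy.special import logsumexp

def gmm_energy(x, pi, mus, Sigmas):
    d = len(x)
    log_terms = []
    for k in range(len(pi)):
        diff = x - mus[k]
        _, logdet = np.linalg.slogdet(Sigmas[k])
        maha = diff @ np.linalg.solve(Sigmas[k], diff)  # Mahalanobis term
        log_terms.append(np.log(pi[k])
                         - 0.5 * (d * np.log(2.0 * np.pi) + logdet + maha))
    return -logsumexp(log_terms)  # energy = negative log-likelihood
```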
## A.2.2 Variational Autoencoders (VAEs)

The encoder is a multilayer perceptron with one hidden layer, whereas the decoder has two hidden layers. Every activation function is the softplus function. We selected the best values of the dimensionality of the latent variable, dim(z), and the number of hidden units of the multilayer perceptrons, dim(MLP), based on the validation ELBO. The candidates for dim(z) were the rounded values of $0.2d$, $0.4d$, $0.6d$, and $0.8d$, where $d$ is the dimensionality of each dataset. The candidates for dim(MLP) were the rounded values of $0.5d$, $d$, and $2d$. The loss function used for learning was the negative ELBO with the mean squared loss for the real-valued datasets or with the cross-entropy loss for the binary-valued dataset. We used the Adam optimizer with the learning rate 0.001 and stopped the optimization observing the validation set loss. The corresponding anomaly scores are

$$e(\mathbf{x})=\left\|\mathbf{x}-\mathbf{f}(\mathbf{g}(\mathbf{x}))\right\|^{2}$$

and

$$e(\mathbf{x})=-\mathbb{E}_{q_{\psi}(\mathbf{z}\mid\mathbf{x})}\big[\log p_{\theta}(\mathbf{x}\mid\mathbf{z})+\log p(\mathbf{z})-\log q_{\psi}(\mathbf{z}\mid\mathbf{x})\big],$$

for *VAE-r* and *VAE-e*, respectively. $\theta$ and $\psi$ denote the sets of parameters of the decoder and the encoder, respectively, and, consistently with the reconstruction error above, $\mathbf{g}$ and $\mathbf{f}$ denote the encoder and the decoder's mean parameter function, respectively.

## A.2.3 Deep Autoencoding Gaussian Mixture Models (DAGMMs)

The architectures of the encoder and decoder are the same as those of the VAEs above. What is specific to DAGMMs is the so-called estimation network that outputs the estimated cluster assignment of a GMM learned on the latent representations and the reconstruction error values. In our experiments, the estimation network is a multilayer perceptron with one hidden layer using the softplus function as activation. The candidates of the hyperparameters were the same as in the above cases, both for the autoencoder part and the GMM part. We used the Adam optimizer with the learning rate 0.0001 and stopped the optimization observing the validation set loss.

## B Experimental Results

## B.1 Sensitivity To Hyperparameter

In Table 6, we report the results of the synthetic anomaly experiment (in Section 5.2) using the GMM detector with $\gamma$ varied from 0.001 to 10. We can observe that the performance is insensitive to $\gamma$.

Table 6: MRR for the synthetic anomaly. The values for the GMM detector using different values of $\gamma$ are reported.
| Dataset | $\gamma = 0.001$ | $\gamma = 0.01$ | $\gamma = 0.1$ | $\gamma = 1.0$ | $\gamma = 10.0$ |
|---|---|---|---|---|---|
| Thyroid | 0.77 | 0.78 | 0.78 | 0.75 | 0.74 |
| Breastw | 0.78 | 0.78 | 0.78 | 0.77 | 0.78 |
| U2R | 0.86 | 0.86 | 0.86 | 0.86 | 0.86 |
| Lympho | 0.73 | 0.73 | 0.73 | 0.73 | 0.74 |
| Musk | 0.98 | 0.97 | 0.95 | 0.93 | 0.92 |
| Arrhythmia | 0.49 | 0.49 | 0.49 | 0.50 | 0.50 |

Table 7: Per-sample computation time of the attribution with the proposed characteristic function, $\underline{v}$.

| Dataset | Detector | Average computation time (STD) [sec/sample] |
|---|---|---|
| Thyroid ($d = 6$) | GMM | 6.19 (3.05) |
| | VAE-r | 5.58 (3.04) |
| | VAE-e | 5.42 (3.28) |
| | DAGMM | 12.45 (8.43) |
| Breastw ($d = 9$) | GMM | 7.10 (4.28) |
| | VAE-r | 14.76 (8.21) |
| | VAE-e | 15.86 (8.94) |
| | DAGMM | 18.37 (12.48) |
| U2R ($d = 10$) | GMM | 10.35 (7.64) |
| | VAE-r | 15.13 (7.30) |
| | VAE-e | 11.53 (6.36) |
| | DAGMM | 34.69 (18.01) |
| Lympho ($d = 59$) | GMM | 81.41 (33.14) |
| | VAE-r | 213.68 (0.17) |
| | VAE-e | 226.29 (0.27) |
| | DAGMM | 96.68 (32.34) |
| Musk ($d = 166$) | GMM | 217.82 (103.43) |
| | VAE-r | 184.01 (62.34) |
| | VAE-e | 464.39 (92.27) |
| | DAGMM | 503.65 (85.80) |
| Arrhythmia ($d = 274$) | GMM | 1187.44 (418.56) |
| | VAE-r | 580.67 (298.37) |
| | VAE-e | 1480.86 (161.63) |
| | DAGMM | 404.46 (270.18) |

## B.2 Runtime

With the heuristic relaxation we introduced (i.e., from Eq. (3) to Eq. (4)), the number of minimization problems to be solved decreases from $O(2^d)$ to $O(d)$. While the overall time complexity of the attribution algorithm certainly depends on this number, the actual runtime also depends on many other factors. Specifically, each of the minimization problems is solved with gradient descent, whose stopping rule refers to the empirical convergence of the objective, and thus the number of iterations differs significantly between different data. We report the average computation time of the attribution method with the proposed characteristic function (i.e., ASH) for the six datasets in Table 7. The overall averages of the computation time over the four detectors are summarized in Fig. 3. Note that the evaluation for the last two datasets ($d = 166$ and $d = 274$) is difficult because they show large variances.

![17_image_1.png](17_image_1.png)

![17_image_0.png](17_image_0.png)

Figure 3: Per-sample computation time reported in Table 7. The overall averages over the four detectors are plotted. The left panel is for the first three datasets, and the right panel is for the last four datasets.
Table 8: Average AUROC for synthetic anomaly with $d_{\rm anom} = 2$.

| Dataset | Detector | marg | SFE | ACE | IG | KSH | wKSH | comp | ASH |
|---|---|---|---|---|---|---|---|---|---|
| Thyroid | GMM | 0.73 | 0.73 | 0.78 | 0.76 | 0.56 | 0.73 | 0.52 | 0.83 |
| | VAE-r | 0.82 | 0.82 | 0.83 | 0.82 | 0.68 | 0.80 | 0.82 | 0.82 |
| | VAE-e | - | - | 0.82 | 0.82 | 0.67 | 0.80 | 0.48 | 0.82 |
| | DAGMM | - | - | 0.43 | 0.17 | 0.49 | 0.54 | 0.73 | 0.58 |
| Breastw | GMM | 0.90 | 0.89 | 0.85 | 0.89 | 0.60 | 0.86 | 0.48 | 0.89 |
| | VAE-r | 0.77 | 0.77 | 0.70 | 0.87 | 0.81 | 0.86 | 0.88 | 0.83 |
| | VAE-e | - | - | 0.68 | 0.89 | 0.81 | 0.85 | 0.51 | 0.86 |
| | DAGMM | - | - | 0.48 | 0.13 | 0.58 | 0.73 | 0.53 | 0.68 |
| U2R | GMM | 0.95 | 0.88 | 0.68 | 0.90 | 0.53 | 0.86 | 0.54 | 0.89 |
| | VAE-r | 0.91 | 0.91 | 0.89 | 0.92 | 0.59 | 0.89 | 0.94 | 0.93 |
| | VAE-e | - | - | 0.87 | 0.93 | 0.60 | 0.88 | 0.46 | 0.92 |
| | DAGMM | - | - | 0.59 | 0.06 | 0.50 | 0.71 | 0.92 | 0.73 |
| Lympho | GMM | 0.89 | 0.93 | 0.70 | 0.97 | 0.50 | 0.50 | 0.63 | 0.95 |
| | VAE-r | 0.99 | 0.99 | 0.71 | 0.90 | 0.50 | 0.50 | 0.29 | 0.95 |
| | VAE-e | - | - | 0.71 | 0.93 | 0.50 | 0.50 | 0.29 | 0.93 |
| | DAGMM | - | - | 0.60 | 0.40 | 0.50 | 0.50 | 0.71 | 0.86 |
| Musk | GMM | 0.69 | 0.69 | 0.65 | 0.81 | 0.74 | 0.95 | 0.48 | 0.96 |
| | VAE-r | 0.99 | 0.99 | 0.92 | 0.95 | 0.90 | 0.99 | 0.75 | 1.00 |
| | VAE-e | - | - | 0.91 | 0.95 | 0.90 | 0.99 | 0.48 | 0.99 |
| | DAGMM | - | - | 0.64 | 0.25 | 0.55 | 0.75 | 0.48 | 0.77 |
| Arrhythmia | GMM | 0.90 | 0.87 | 0.86 | 0.90 | 0.73 | 0.90 | 0.48 | 0.91 |
| | VAE-r | 0.98 | 0.98 | 0.92 | 0.94 | 0.89 | 0.93 | 0.85 | 0.96 |
| | VAE-e | - | - | 0.92 | 0.94 | 0.89 | 0.93 | 0.52 | 0.95 |
| | DAGMM | - | - | 0.57 | 0.11 | 0.62 | 0.67 | 0.48 | 0.68 |
| 25th percentile | | (0.81) | (0.81) | 0.65 | 0.67 | 0.52 | 0.70 | 0.48 | 0.82 |
| 50th percentile | | (0.90) | (0.89) | 0.71 | 0.89 | 0.60 | 0.82 | 0.52 | 0.89 |
| 75th percentile | | (0.96) | (0.94) | 0.86 | 0.93 | 0.76 | 0.89 | 0.73 | 0.95 |

## B.3 Other Performance Measures

We show the results of the synthetic anomaly experiment (in Section 5.2) with $d_{\rm anom} > 1$. Since the MRR and the Hits@3 are not necessarily meaningful when $d_{\rm anom} > 1$, we report the area under the receiver operating characteristic curve (AUROC) in Tables 8 and 9; given a set of attributions to features, we sweep a threshold value for the attributions from the largest attribution to the smallest attribution to define the ROCs. The overall tendency of the performance is the same as we reported in Section 5.2.
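A minimal sketch of this per-instance AUROC follows, assuming scikit-learn; `attr` is one attribution vector and `perturbed` holds the indices of the $d_{\rm anom}$ perturbed features.

```python
# A minimal sketch of the per-instance AUROC described above.
import numpy as np
from sklearn.metrics import roc_auc_score

def attribution_auroc(attr, perturbed):
    labels = np.zeros(len(attr), dtype=int)
    labels[list(perturbed)] = 1
    # roc_auc_score effectively sweeps a threshold over the attribution values.
    return roc_auc_score(labels, np.abs(attr))

# Average over all test instances:
# mean_auc = np.mean([attribution_auroc(a, p) for a, p in zip(A, P)])
```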
Table 9: Average AUROC for synthetic anomaly with $d_{\rm anom} = 3$.

| Dataset | Detector | marg | SFE | ACE | IG | KSH | wKSH | comp | ASH |
|---|---|---|---|---|---|---|---|---|---|
| Thyroid | GMM | 0.74 | 0.71 | 0.77 | 0.75 | 0.58 | 0.71 | 0.50 | 0.82 |
| | VAE-r | 0.85 | 0.85 | 0.82 | 0.85 | 0.66 | 0.81 | 0.85 | 0.85 |
| | VAE-e | - | - | 0.82 | 0.85 | 0.66 | 0.80 | 0.46 | 0.85 |
| | DAGMM | - | - | 0.52 | 0.16 | 0.47 | 0.57 | 0.75 | 0.67 |
| Breastw | GMM | 0.91 | 0.89 | 0.85 | 0.90 | 0.63 | 0.85 | 0.46 | 0.91 |
| | VAE-r | 0.74 | 0.74 | 0.63 | 0.81 | 0.79 | 0.79 | 0.91 | 0.81 |
| | VAE-e | - | - | 0.63 | 0.84 | 0.79 | 0.80 | 0.53 | 0.81 |
| | DAGMM | - | - | 0.53 | 0.10 | 0.60 | 0.69 | 0.53 | 0.66 |
| U2R | GMM | 0.96 | 0.82 | 0.64 | 0.87 | 0.53 | 0.84 | 0.48 | 0.83 |
| | VAE-r | 0.92 | 0.92 | 0.89 | 0.94 | 0.57 | 0.92 | 0.94 | 0.94 |
| | VAE-e | - | - | 0.85 | 0.95 | 0.57 | 0.91 | 0.52 | 0.94 |
| | DAGMM | - | - | 0.56 | 0.06 | 0.52 | 0.66 | 0.92 | 0.68 |
| Lympho | GMM | 0.89 | 0.91 | 0.71 | 0.96 | 0.50 | 0.50 | 0.58 | 0.93 |
| | VAE-r | 0.97 | 0.97 | 0.71 | 0.88 | 0.50 | 0.50 | 0.29 | 0.91 |
| | VAE-e | - | - | 0.71 | 0.91 | 0.50 | 0.50 | 0.29 | 0.88 |
| | DAGMM | - | - | 0.66 | 0.44 | 0.50 | 0.50 | 0.71 | 0.82 |
| Musk | GMM | 0.68 | 0.71 | 0.71 | 0.83 | 0.73 | 0.96 | 0.47 | 0.95 |
| | VAE-r | 1.00 | 1.00 | 0.91 | 0.95 | 0.90 | 1.00 | 0.75 | 0.99 |
| | VAE-e | - | - | 0.91 | 0.95 | 0.90 | 1.00 | 0.47 | 0.99 |
| | DAGMM | - | - | 0.70 | 0.25 | 0.58 | 0.77 | 0.47 | 0.79 |
| Arrhythmia | GMM | 0.91 | 0.88 | 0.87 | 0.91 | 0.71 | 0.92 | 0.47 | 0.91 |
| | VAE-r | 0.98 | 0.98 | 0.91 | 0.94 | 0.88 | 0.95 | 0.87 | 0.98 |
| | VAE-e | - | - | 0.91 | 0.94 | 0.88 | 0.95 | 0.53 | 0.97 |
| | DAGMM | - | - | 0.56 | 0.10 | 0.61 | 0.68 | 0.47 | 0.69 |
| 25th percentile | | (0.82) | (0.80) | 0.64 | 0.67 | 0.53 | 0.68 | 0.47 | 0.81 |
| 50th percentile | | (0.91) | (0.89) | 0.71 | 0.86 | 0.60 | 0.80 | 0.53 | 0.86 |
| 75th percentile | | (0.96) | (0.93) | 0.85 | 0.94 | 0.74 | 0.92 | 0.75 | 0.94 |
Review 1: Summary: The paper is concerned with anomaly attribution. Specifically, while the Shapley value has been used for anomaly detection it is difficult to use it for anomaly attribution. The authors present a method for doing that where a minimization problem is solved over the ``absent'' features. Since the problem in turn is computationally expensive to solve, algorithms to approximate this solution are used and validated empirically. Strengths and Weaknesses: A-The idea of minimizing the score function over the other features seems very reasonable and one can say that it is a suitable approach in applications where false positive anomaly detection is costly. B-My issue is that while the way the optimization in (2) on page 4 is formulated is rigorous, the modifications to (3) and (4) on page 5 feel very heuristic. But it seems that some of the methods here have already been used in the literature such as randomly sampling features in Lundberg et al. C-I mostly work on the theory side and I'm not that familiar with anomaly detection, so it is not clear to me how significant the contribution of this paper is. It seems to give a performance which is on par or at times better than previous methods. I believe that would be within the TMLR scope. Requested Changes: A-In the definition of the Shapley value (in Eq(1) page 2), $S$ is not defined at all or mentioned which is confusing. Minor Points: B-I think that adding a citation for semi-supervised methods would be good for the presentation. Specifically in the following sentence in the first paragraph of section 2.1: ``A typical solution of semi-supervised anomaly detection consists of two phases [references to this line of work] '' C-It seems to me that in the following sentences "and" should be replaced by 'or': 1-2nd paragraph in section 2.1: such as ``$X ⊂ R^d$ and $X ⊂ \{0, 1\}^d$ '' → such as $X ⊂ R^d$ or $X ⊂ \{0, 1\}^d$. 2-3rd paragraph in section 2.1: ``such as principal component analysis (PCA) and autoencoders'' → such as principal component analysis (PCA) or autoencoders D-Section 6 (Discussion): ``$d \ge O(10)$''. Asymptotic notation is meaningless for constants. Broader Impact Concerns: I don't think that there are any concerns about the broader impact. ================================================== Review 2: Summary: The authors consider the problem of attribution in anomaly detection. The objective is to identify which features lead to a particular sample being detected as anomalous. They propose to perform this attribution using Shapley values, which are frequently used for attribution in supervised learning tasks. To adapt the approach to anomaly detection, they propose a new characteristic function along with a faster heuristic relaxation for it. They demonstrate that their proposed anomaly Shapley (ASH) attribution method performs well for attributing synthetically generated anomalies created by perturbing features on real data. The main contributions I observe are as follows: - Proposal of a new reference-based characteristic function for anomaly scores to be used for computing Shapley values. - Proposal of a heuristic relaxation for the characteristic function that is computationally efficient. - Demonstration of effectiveness of proposed ASH relaxed characteristic function on real data sets using two different evaluation approaches. *After author revision:* I have changed Claims and Evidence from No to Yes given the newly added examples and experiment results. My concerns regarding the paper have been addressed.
Strengths and Weaknesses: Strengths: - The authors design a characteristic function for computing Shapley values that is specifically targeted at anomaly attribution. - Strong empirical performance in 2 different settings. It is not possible to compare attribution accuracy on real anomaly data because there is no ground truth to compare against, so the authors use both synthetically generated anomalies (where they can control the anomalous features) and real anomalies while treating the attributions from a supervised classifier as the "ground truth" for comparison. Weaknesses: - The paper does not show the computed Shapley values for any datasets. This is very strange given that the main purpose of this paper is about attribution, but then it doesn't show any attribution on real data. - No experimental validation of time complexity for the proposed heuristic relaxation is provided. Requested Changes: Major issues: - I strongly recommend that the authors add a case study showing Shapley values computed on one of the real datasets. Perhaps this could be done in the same setting as in Section 5.3, where they compare the attribution performance with that of a supervised classifier. It would be highly useful to see the distributions of Shapley values with the different attribution methods to see how they differ and perhaps why ASH performs well. This can be added without removing content from the current paper and staying within 12 pages of main body. - The authors state that analysis of each method's failure is out of the scope of this paper. I agree that a detailed analysis of the type they describe is out of scope; however, an illustration of instances where different attribution methods differ, perhaps as part of the case study I suggest above, would be useful and very informative. Minor issues: - An experiment validating the $O(d)$ time complexity of the heuristic relaxation compared to the $O(2^d)$ complexity without the relaxation would be useful as an addition to the appendix. We can get a limited idea of the empirical time complexity from Table 5, but it's only 2 data points for $d$. - In Table 1, it would be good to show also a column with the size of $\mathcal{D}_{\text{test}}^{\text{anom}}$ so that the reader can see how balanced the nominal and anomalous classes are. - Why do you only extract U2R attack type from the NSL-KDD data rather than using all of the different attacks? Broader Impact Concerns: The included statement is reasonable. ================================================== Review 3: Summary: This paper proposes a new framework for the task of model agnostic (black box) anomaly attribution based on Shapley value (SV). Unlike prior methods that simply use an existing implementation or formula ironically as a black box, the authors look deep into the very definition of SV. In particular, they propose a new definition of SV's characteristic function that depends on the local minimum of the anomaly score. More specifically, the key ideas are: 1. To use the SV value for the anomaly scoring function rather than the prediction function itself, 2. To use a characteristic function that depends on a "reference vector", and 3. To determine the reference vector as a local minimum point of the anomaly score. Although they propose a well-defined optimization problem in Eq. (3) as the canonical definition of the proposed framework, it is not something they actually used in the empirical evaluation.
To handle computational difficulties, the authors introduce two heuristic approaches. Instead of solving the regularized optimization problem (3), they propose to use a combination between the all-variable local minimum and one-variable conditional minima. Also, they use a SHAP-like least squared variant of SV. Strengths and Weaknesses: **Strength** The idea of - applying SV to the anomaly score rather than the prediction function and - using the local minimum as the reference point to define the characteristic function of SV is quite unique. I found the novelty and the originality of the paper very high. The idea alone definitely deserves immediate acceptance to top ML journals like TMLR. **Weakness** However, although this work indeed has the potential to become the first crucial work on SV-based attribution since SV's introduction to the ML community, I'd point out that the treatment after Eq. (3) looks rough. My biggest complaint is that the authors do not show the relationship between (3) and (4). I don't even understand if it is an approximation or something else. More specifically, - I couldn't find any clear description of the choice of dist(). - I couldn't find any clear description of the choice of $\gamma$. - The second condition in (4) is not even the average, as the summation runs over $2S$ terms. A clear justification is needed. - No clear justification is provided about mixing the two local minima between those for $\{i\}$ and $\emptyset$. - No clear justification is given for the use of the least square "approximation" of SV. I'm not sure if Lundberg's paper uses it as an approximation. This is not the authors' fault, but Lundberg's paper does not provide a clear-cut definition of the characteristic function, and its discussion is logically hard to follow. I do not think it is a good idea to use it blindly without justification. Requested Changes: This paper can be accepted as-is. This is not a mandatory request at all, but I encourage the authors to address the unclarity listed above. It will improve the readability of the paper significantly. Broader Impact Concerns: Except for very few exceptions, almost all model-agnostic anomaly explanation papers for black-box models ironically use existing attribution formulas/codes as a black box. This paper shines in such unfortunate darkness. ================================================== Metareview: Recommendation: Accept as is Comment: This paper studies the problem of attribution in anomaly detection, aiming to identify the important features causing an anomaly. The feature attribution is studied via adapting the Shapley values, using a new characteristic function and a faster heuristic relaxation of it. The approach shows performance gains on attributing synthetically generated anomalies. Three reviewers provided assessment reports for this paper, all favoring the technical contributions while raising major concerns. These issues were well addressed in the revised version, and all reviewers in general agreed that the paper is acceptable. The AE oversaw the reviewing process and all evidence, and supported the Accept decision. ==================================================
# Variational Elliptical Processes

Anonymous authors Paper under double-blind review

## Abstract

We present elliptical processes—a family of non-parametric probabilistic models that subsumes the Gaussian processes and the Student's t processes. This generalization includes a range of new heavy-tailed behaviors while retaining computational tractability. The elliptical processes are based on a representation of elliptical distributions as a continuous mixture of Gaussian distributions. We parameterize this mixture distribution as a spline normalizing flow, which we train in two different ways using variational inference. The proposed form of the variational posterior enables a sparse variational elliptical process applicable to large-scale problems. We highlight some advantages compared to a Gaussian process through regression and classification experiments. Elliptical processes can replace Gaussian processes in several settings, including cases where the likelihood is non-Gaussian or when accurate tail modeling is essential.

## 1 Introduction

Systems for autonomous decision-making are increasingly dependent on predictive models. To ensure safety and reliability, it is essential that these models capture uncertainty and risk accurately. Gaussian processes (GPs) offer a powerful framework for probabilistic modeling that is widely used, in part because it provides such uncertainty estimates. However, these estimates are only reliable to the extent that the model is correctly specified, i.e., that the assumptions of Gaussianity hold true. On the contrary, heavy-tailed data arise in many real-world applications, including finance (Mandelbrot, 1963), signal processing (Zoubir et al., 2012) and geostatistics (Diggle et al., 1998). We use a combination of normalizing flows and modern variational inference techniques to extend the modeling capabilities of GPs to the more general class of elliptical processes (EPs).

Elliptical processes. The elliptical processes subsume the Gaussian process and the Student's t process (Rasmussen & Williams, 2006; Shah et al., 2014). They are based on the elliptical distribution—a scale-mixture of Gaussian distributions that is attractive mainly because it can describe heavy-tailed distributions while retaining most of the Gaussian distribution's computational tractability (Fang et al., 1990). We use a normalizing flow (Papamakarios et al., 2021a) to model the continuous scale-mixture, which provides an added flexibility that can benefit a range of applications. We explore the use of elliptical processes as both a prior (over functions) and a likelihood, as well as the combination thereof. We also explore the use of EPs as a variational posterior that can adapt its shape to match complex posterior distributions.

![0_image_0.png](0_image_0.png)

Figure 1: Posterior distributions of an elliptical process and a Gaussian process with equal kernel hyperparameters and covariance. The shaded areas are confidence intervals of the posterior processes. The elliptical confidence regions are wider due to the process's heavier tail, which makes the confidence region similar to the Gaussian's close to the mean, but also allows samples further out in the tail.

Variational inference. Variational inference is a powerful tool for approximate inference that uses optimization to find a member of a predefined family of distributions that is close to the target distribution (Wainwright et al., 2008; Blei et al., 2017).
Significant advances made in the last decade have made variational inference the method of choice for scalable approximate inference in complex parametric models (Ranganath et al., 2014; Hoffman et al., 2013; Kingma & Welling, 2013; Rezende et al., 2014). It is thus not surprising that the quest for more expressive and scalable variations of Gaussian processes has gone hand-in-hand with the developments in variational inference. For instance, sparse GPs use variational inference to select inducing points to approximate the prior (Titsias, 2009). Inducing points are a common building block in deep probabilistic models such as deep Gaussian processes (Damianou & Lawrence, 2013; Salimbeni et al., 2019) and can also be applied in Bayesian neural networks (Maroñas et al., 2021; Ober & Aitchison, 2021). Similarly, the combination of inducing points and variational inference enables scalable approximate inference in models with non-Gaussian likelihoods (Hensman et al., 2013a), such as when performing GP classification (Hensman et al., 2015; Wilson et al., 2016).

However, the closeness of the variational distribution to the target distribution is bounded by the flexibility of the variational distribution. Consequently, the success of deep (neural network) models has inspired various suggestions on flexible yet tractable variational distributions, often based on parameterized transformations of a simple base distribution (Tran et al., 2016). In particular, models using a composition of invertible transformations, known as normalizing flows, have been especially popular (Rezende & Mohamed, 2015; Papamakarios et al., 2021a).

![1_image_0.png](1_image_0.png)

Figure 2: **Left:** A contour plot of a two-dimensional, correlated elliptical distribution with zero means. The name derives from its elliptical level sets. **Right:** Three examples of one-dimensional elliptical distributions with zero means and varying tail-heaviness. Elliptical distributions are symmetric around the mean E[X] = µ.

Our contributions. We propose an adaptation of elliptical distributions and processes in the same spirit as modern Gaussian processes. Constructing elliptical distributions based on a normalizing flow provides a high degree of flexibility without sacrificing computational tractability. This makes it possible to sidestep the "curse of Gaussianity" and adapt to heavy-tailed behavior when called for. We thus foresee many synergies between EPs and recently developed GP methods. We make a first exploration of these, and simultaneously demonstrate the versatility of the elliptical process as a model for the prior and/or the likelihood, or as the variational posterior. In more detail, our contributions are:

- a construction of the elliptical process and the elliptical likelihood as a continuous scale-mixture of Gaussian processes parameterized by a normalizing flow;
- a variational approximation that can either learn an elliptical likelihood or handle known non-Gaussian likelihoods, such as in classification problems;
- a sparse variational approximation for large-scale problems, together with two different training schemes that we develop and compare;
- extensions to heteroscedastic and multi-path data enabled by amortized variational inference.

## 2 Background

In this section, we present the necessary background on elliptical distributions, elliptical processes and normalizing flow models.
Throughout, we consider the regression problem, where we are given a set of N scalar observations, y = [y1, · · · , yN]⊤, at the locations [x1, · · · , xN]⊤, where xn is D-dimensional. The measurements yn are assumed to be noisy, such that

$$y_n = f(\mathbf{x}_n) + \epsilon_n, \tag{1}$$

where ϵn is zero-mean, i.i.d., noise. The task is to infer the underlying function, f : R^D → R.

## 2.1 Elliptical Distributions

The elliptical process is based on elliptical distributions (Figure 2), which include Gaussian distributions as well as more heavy-tailed distributions, such as the Student's t distribution and the Cauchy distribution. The probability density of a random variable Y ∈ R^N that follows the elliptical distribution can be expressed as

$$p(u;\,\eta) = c_{N,\eta}\,|\Sigma|^{-1/2}\, g_N(u;\,\eta), \tag{2}$$

where u = (y − µ)⊤Σ−1(y − µ) is the squared Mahalanobis distance, µ is the location vector, Σ is the non-negative definite scale matrix, and c_{N,η} is a normalization constant. The density generator g_N(u; η) is a non-negative function with finite integral, parameterized by η, which determines the shape of the distribution.

Elliptical distributions are consistent, i.e., closed under marginalization, if and only if p(u; η) is a scale-mixture of Gaussian distributions (Kano, 1994). The density can then be expressed as

$$p(u;\,\eta) = |\Sigma|^{-\frac{1}{2}}\int_{0}^{\infty}\left(\frac{1}{2\pi\xi}\right)^{\frac{N}{2}} e^{\frac{-u}{2\xi}}\, p(\xi;\,\eta_\xi)\, d\xi, \tag{3}$$

using a mixing variable ξ ∼ p(ξ; ηξ). Any mixing distribution p(ξ; ηξ) that is strictly positive can be used to define a consistent elliptical process. In particular, we recover the Gaussian distribution if the mixing distribution is a Dirac delta function and the Student's t distribution if it is a scaled inverse chi-square distribution. For more information on the elliptical distribution, see Appendix A.

## 2.2 Elliptical Processes

The elliptical process is defined, analogously to a Gaussian process, as:

Definition 1 *An elliptical process (EP) is a collection of random variables such that every finite subset has a consistent elliptical distribution, where the scale matrix is given by a covariance kernel.*

This means that an EP is specified by a mean function µ(x), a scale matrix (kernel) k(x, x′), and a mixing distribution p(ξ; ηξ). Since the EP is built upon consistent elliptical distributions it is closed under marginalization. The marginal mean µ is the same as the mean for the Gaussian distribution, and the covariance is Cov[Y] = E[ξ] Σ, where Y is an elliptical random variable, Σ is the covariance for a Gaussian distribution, and ξ is the mixing variable.

Formally, a stochastic process {Xt : t ∈ T} on a probability space (Ω, F, P) consists of random maps Xt : ω → St, t ∈ T, for measurable spaces (St, St), t ∈ T (Bhattacharya & Waymire, 2007). We focus on the setting where S = R and the index set T is a subset of R^N, in particular, the half-line [0, ∞). Due to Kolmogorov's extension theorem, we may construct the EP from the family of finite-dimensional, consistent, elliptical distributions, which is easy to check due to the restriction to S = R (which is a Polish space) and Kano's characterization above.
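To make the construction in (3) concrete, the sketch below draws samples via the hierarchical recipe it implies: first a mixing variable, then a Gaussian with the scaled covariance. This mirrors the stochastic representation used in Appendix A; the specific mixing samplers are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_elliptical(mu, Sigma, sample_xi, size=1):
    """Draw from a consistent elliptical distribution via its Gaussian
    scale mixture: Y = mu + Sigma^{1/2} Z xi^{1/2}, with xi ~ p(xi)."""
    L = np.linalg.cholesky(Sigma)                 # Sigma^{1/2}
    xi = sample_xi(size)                          # mixing variable samples
    z = rng.standard_normal((size, len(mu)))      # standard normal draws
    return mu + np.sqrt(xi)[:, None] * (z @ L.T)

# Student's t with nu degrees of freedom: scaled inverse chi-square mixing.
nu = 3.0
student_mix = lambda n: nu / rng.chisquare(nu, size=n)
samples_t = sample_elliptical(np.zeros(2), np.eye(2), student_mix, size=10_000)

# Gaussian special case: the mixing distribution is a point mass at one.
samples_gauss = sample_elliptical(np.zeros(2), np.eye(2), lambda n: np.ones(n))
```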
Identifiability. When using a GP for regression or classification we usually assume that the data originate from a single sample path, i.e., a single draw from the GP. An elliptical process, on the other hand, can be viewed as a hierarchical model, constructed by first sampling ξ ∼ p(ξ; ηξ) and then f ∼ GP(f; µ, Kξ). This structure implies that it is not possible to infer the mixing distribution p(ξ; ηξ) from a single path. In other words, the identification condition for the mixing distribution p(ξ; ηξ) is to have draws from multiple paths. This point is explored further in Sections 3.4 and 4.5.

Prediction. To use the EP for predictions, we need the conditional mean and covariance of the corresponding elliptical distribution. The conditional distribution is guaranteed to be a consistent elliptical distribution but not necessarily the same as the original one—the shape depends on the training samples. (Recall that consistency only concerns the marginal distribution.) The conditional distribution can be derived analytically (see Appendix B), but we will instead approximate the posterior p(ξ|y; ηξ) with a variational distribution q(ξ; φξ). The approximate inference framework also lets us incorporate (non-Gaussian) noise according to the graphical models in Figure 3. We aim to model mixing distributions that can capture any shape of the elliptical noise in the data. One way to learn complex probability distributions is to use normalizing flows, which we describe next.

## 2.3 Flow Based Models

Normalizing flows are a family of generative models that map simple distributions to complex ones through a series of learned transformations (Papamakarios et al., 2021b). Suppose we have a random variable x that follows an unknown probability distribution px(x). The main idea of a normalizing flow is then to express x as a transformation Tγ of a variable z with a known, simple probability distribution pz(z). The transformation Tγ has to be bijective, it can have learnable parameters γ, and both Tγ and its inverse have to be differentiable. The probability density of x is obtained by a change of variables:

$$p_x(\mathbf{x}) = p_z(\mathbf{z})\left|\det\left(\frac{\partial T_\gamma(\mathbf{z})}{\partial\mathbf{z}}\right)\right|^{-1}. \tag{4}$$

We focus on one-dimensional flows, since we are interested in modeling the mixing distribution. In particular, we use *linear rational spline flows* (Dolatabadi et al., 2020; Durkan et al., 2019), wherein the mapping Tγ is an elementwise, monotonic linear rational spline: a piecewise function where each piece is a linear rational function. The parameters are the number of pieces (bins) and the knot locations. To train the model parameters, we use amortized variational inference, which we go through next.

## 2.4 Amortized Variational Inference

In *amortized variational inference* (Gershman & Goodman, 2014) we replace the variational parameters φ with a function that maps the input to the variational parameters, φ = g(x). This is convenient for modeling local latent variables, i.e., variables associated directly with individual data points xn, each with corresponding variational parameters φn. By replacing the local parameters with a function, φn = g(xn), we reduce the problem to fitting the function g, rather than fitting each φn. Furthermore, it becomes easy to add new data points, since the local variational parameters are then given by the function g.
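As a minimal illustration of the change of variables in (4), the PyTorch sketch below builds a strictly positive mixing distribution by pushing a standard normal base through an invertible transformation. The paper uses linear rational splines; the affine-plus-exp transform here is a simpler stand-in chosen only to keep the example short, and the parameter names are assumptions.

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import AffineTransform, ExpTransform

# Learnable parameters of the (stand-in) flow.
loc = torch.tensor(0.0, requires_grad=True)
log_scale = torch.tensor(0.0, requires_grad=True)

# Base N(0, 1) pushed through z -> exp(loc + scale * z): a positive mixing
# distribution whose density follows Eq. (4); torch tracks the Jacobian.
mixing = TransformedDistribution(
    Normal(0.0, 1.0),
    [AffineTransform(loc, log_scale.exp()), ExpTransform()],
)

xi = mixing.rsample((5,))    # reparameterized samples, so gradients flow
log_p = mixing.log_prob(xi)  # log density via the change of variables
log_p.sum().backward()       # gradients w.r.t. loc and log_scale
```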
## 3 Method

We propose the variational EP with elliptical noise, where the variational EP can learn any consistent elliptical process and the elliptical noise can capture any consistent elliptical noise. The key idea is to model the mixing distributions with a normalizing flow. The joint probability distribution of the model (see Figure 3c) is

$$p(\mathbf{y},\mathbf{f},\omega,\xi;\,\eta) = \underbrace{p(\mathbf{f}|\xi;\,\eta_{\mathbf{f}})\,p(\xi;\,\eta_\xi)}_{\mathrm{prior}}\ \underbrace{\prod_{i=1}^{N} p(y_i|f_i,\omega)\,p(\omega;\,\eta_\omega)}_{\mathrm{likelihood}}. \tag{5}$$

Here, p(f|ξ; ηf) ∼ N(0, Kξ) is a regular EP prior with the covariance kernel K containing the parameters ηf, p(ξ; ηξ) is the process mixing distribution, and p(ω; ηω) is the noise mixing distribution. To learn the mixing distributions p(ξ; ηξ) and p(ω; ηω) by gradient-based optimization, they need to be differentiable with respect to the parameters ηξ and ηω, in addition to being flexible and computationally efficient to sample and evaluate. Based on these criteria, a spline flow (Section 2.3) is a natural fit. We construct the mixing distributions by transforming a sample from a standard normal distribution with a spline flow. The output of the spline flow is then projected onto the positive real axis using a differentiable function such as *Softplus* or *Sigmoid*.

![4_image_0.png](4_image_0.png)

Figure 3: Graphical models of (a) the elliptical likelihood, (b) the EP prior, and (c) the EP with independent elliptical noise.

In the following sections, we detail the construction of the model and show how to train it using variational inference. For clarity, we describe the likelihood first, before combining it with the prior and describing a (computationally efficient) sparse approximation.

## 3.1 Likelihood

By definition, the likelihood (Figure 3a) describes the measurement noise ϵn (Equation (1)). The probability distribution of the independent elliptical likelihood is

$$p(\epsilon_n;\,\sigma,\eta_\omega) = \int\mathcal{N}(\epsilon_n; 0, \sigma^2\omega)\, p(\omega;\,\eta_\omega)\, d\omega, \tag{6}$$

where σ can be set to unity without loss of generality. In other words, the likelihood is a continuous mixture of Gaussian distributions where, e.g., ϵn follows a Student's t distribution if ω is scaled inverse chi-square distributed.

Parameterization. We parameterize p(ω; ηω) as a spline flow,

$$p(\omega;\,\eta_\omega) = p(\zeta)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1}, \tag{7}$$

although it could, in principle, be any positive, finite probability distribution. Here, p(ζ) ∼ N(0, 1) is the base distribution and ω = T(ζ; ηω) represents the spline flow transformation followed by a *Softplus* transformation to guarantee that ω is positive. The flexibility of this flow-based construction lets us capture a broad range of elliptical likelihoods, but we could also specify an appropriate likelihood ourselves. For instance, using a categorical likelihood enables EP classification, see Section 4.3.
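For intuition, the mixture likelihood (6) can also be estimated by plain Monte Carlo over the mixing variable. The sketch below assumes σ = 1 and a given sampler for ω; it is not the training procedure used in the paper, which instead relies on the variational bound introduced next.

```python
import numpy as np
from scipy.special import logsumexp

def elliptical_loglik(eps, sample_omega, num_mc=1024, seed=0):
    """Monte Carlo estimate of Eq. (6): p(eps_n) = E_omega[N(eps_n; 0, omega)]."""
    rng = np.random.default_rng(seed)
    omega = sample_omega(num_mc, rng)                       # (num_mc,)
    log_norm = -0.5 * (np.log(2 * np.pi * omega)[None, :]   # log N(eps; 0, omega)
                       + eps[:, None] ** 2 / omega[None, :])
    return logsumexp(log_norm, axis=1) - np.log(num_mc)     # per-residual log-lik

# Example: Student's t noise via a scaled inverse chi-square mixing variable.
nu = 4.0
omega_t = lambda n, rng: nu / rng.chisquare(nu, size=n)
ll = elliptical_loglik(np.array([0.1, -2.5, 0.7]), omega_t)
```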
Training objective. Now, assume that we observe N independent and identically distributed residuals ϵn = yn − fn between the observations y and some function f. We are primarily interested in estimating the noise for the purpose of "denoising" the measurements. Hence, we fit an elliptical distribution to the residuals by maximizing the (log) marginal likelihood with respect to the parameters ηω, that is,

$$\log p(\epsilon;\,\eta_\omega) = \sum_{n=1}^{N}\log\int\mathcal{N}\left(\epsilon_n; 0, T(\zeta;\,\eta_\omega)\right)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1} p(\zeta)\, d\zeta. \tag{8}$$

For general mixing distributions this integral is intractable, but we can approximate it using variational inference.

Variational approximation. Instead of optimizing the marginal likelihood (8) directly, we approximate the posterior base distribution p(ζ|ϵn) ≈ q(ζn; φζn), where φζn are variational parameters, and maximize the corresponding evidence lower bound (ELBO):

$$\mathcal{L}(\eta_\omega, \varphi_{\zeta_1},\ldots,\varphi_{\zeta_N}) = \sum_{n=1}^{N}\left(\mathbb{E}_{q(\zeta_n;\,\varphi_{\zeta_n})}\left[\log\left(\mathcal{N}\left(\epsilon_n; 0, T(\zeta;\,\eta_\omega)\right)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1}\right)\right] - D_{\text{KL}}\left(q(\zeta_n;\,\varphi_{\zeta_n})\,||\,p(\zeta)\right)\right). \tag{9}$$

In this variational approximation we have one set of variational parameters φζn for each observed residual ϵn. To reduce complexity, we amortize (see Section 2.4) the variational parameters by letting φζn = g(ϵn; γζ), which reduces the ELBO to L(ηω, γζ). Specifically, we model the variational posterior as a Normal distribution q(ζn) = N(µζ(ϵn), σζ(ϵn)), where the variational parameters φζn, namely the mean µζ and standard deviation σζ, are functions defined by a neural network with parameters γζ. The parameters of the normalizing flow and variational posterior are trained jointly by gradient-based optimization of the ELBO, ∇ηω,γζ L(ηω, γζ). The gradients are estimated using black-box variational inference (Bingham et al., 2019). Ultimately, we arrive at the likelihood

$$p(\mathbf{y}|\mathbf{f}) = \prod_{n=1}^{N}\int\mathcal{N}\left(y_n; f_n, T(\zeta;\,\eta_\omega)\right)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1} p(\zeta)\, d\zeta. \tag{10}$$

Note that the variational posterior q(ζn) does not appear in this expression—it is only used as an aid for training the mixing distribution (specifically, the parameters ηω).
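A minimal sketch of the amortization just described: a small network maps each residual to the parameters of q(ζn). The architecture and layer width are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AmortizedPosterior(nn.Module):
    """Maps a residual eps_n to (mu_zeta, sigma_zeta) of
    q(zeta_n) = N(mu_zeta(eps_n), sigma_zeta(eps_n))."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def forward(self, eps):
        out = self.net(eps.unsqueeze(-1))
        mu, log_sigma = out[..., 0], out[..., 1]
        return mu, log_sigma.exp()  # exp keeps the std. deviation positive

g = AmortizedPosterior()
mu_z, sigma_z = g(torch.randn(128))            # one pair per residual
q_zeta = torch.distributions.Normal(mu_z, sigma_z)
```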
## 3.2 Prior

Recall that our main objective is to infer the latent *function* f∗ = f(x∗) at arbitrary locations x∗ ∈ R^D given a finite set of noisy observations y. In probabilistic machine learning, the mapping y 7→ f∗ is often defined by the posterior predictive distribution

$$p(f^*|\mathbf{y}) = \int p(f^*|\mathbf{f})\, p(\mathbf{f}|\mathbf{y})\, d\mathbf{f}, \tag{11}$$

which turns modeling into a search for suitable choices of p(f∗|f) and p(f|y). Accordingly, the noise estimation described in the previous section is only done in pursuit of this higher purpose.

Sparse formulation. For an elliptical process (EP) we can rewrite the posterior predictive distribution as

$$p(\mathbf{f}^*|\mathbf{y}) = \int p(\mathbf{f}^*|\mathbf{f},\xi)\, p(\mathbf{f},\mathbf{u},\xi|\mathbf{y})\, d\mathbf{f}\, d\mathbf{u}\, d\xi, \tag{12}$$

where we are marginalizing not only over the mixing variable ξ and the function values f (at the given inputs x) but also over the function values u at the M so-called inducing inputs Xu. Introducing inducing points lets us derive a *sparse* variational EP—a computationally scalable version of the EP similar to the sparse variational GP (Titsias, 2009). We refer to Appendix D for a non-sparse version of the model. The need for approximation arises because of the intractable second factor, p(f, u, ξ|y), in (12). (The first factor, p(f∗|f, ξ), is simply a Normal distribution.)

Variational approximation. We make the variational ansatz p(f, u, ξ|y) ≈ p(f|u, ξ) q(u|ξ) q(ξ), and parameterize this variational posterior as an elliptical distribution. We do so for two reasons: first, this makes the variational posterior similar to the true posterior, and second, we can then use the conditional distribution to make predictions. In full detail, we factorize the posterior as

$$q(\mathbf{f},\mathbf{u},\xi;\,\varphi) = p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{\mathbf{f}})\, q(\mathbf{u}|\xi;\,\varphi_{\mathbf{u}})\, q(\xi;\,\varphi_\xi), \tag{13}$$

where φ = (φf, φu, φξ) are the variational parameters, and q(u|ξ; φu) = N(m, Sξ) is a Gaussian distribution with the variational mixing distribution ξ ∼ q(ξ; φξ). Again, q(ξ; φξ) could be any positive finite distribution, but we parameterize it with a spline flow.

Note that, because of the conditioning on ξ, the first two factors in (13) form a Gaussian conjugate pair in u. Thus, marginalization over u results in a Gaussian distribution, for which the marginals of fn only depend on the corresponding input xn (Salimbeni et al., 2019):

$$q(f_n|\xi;\,\varphi) = \mathcal{N}(f_n|\mu_f(\mathbf{x}_n), \sigma_f(\mathbf{x}_n)\,\xi), \tag{14}$$

where

$$\mu_f(\mathbf{x}_n) = \mathbf{k}_n^\top\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{m}, \tag{15}$$

$$\sigma_f(\mathbf{x}_n) = k_{nn} - \mathbf{k}_n^\top\left(\mathbf{K}_{\mathbf{uu}}^{-1} - \mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{S}\mathbf{K}_{\mathbf{uu}}^{-1}\right)\mathbf{k}_n, \tag{16}$$

and kn = k(xn, Xu), knn = k(xn, xn), and Kuu = k(Xu, Xu). Predictions on unseen data points, x∗, are then computed according to (see Appendix E)

$$p(f^*|\mathbf{y};\,\mathbf{x}^*) = \mathbb{E}_{q(\xi;\,\varphi_\xi)}\left[\mathcal{N}(f^*|\mu_f(\mathbf{x}^*), \sigma_f(\mathbf{x}^*)\,\xi)\right]. \tag{17}$$

We consider two distinct methods for training the variational parameters: (VI) variational inference, i.e., maximizing the evidence lower bound to indirectly maximize the marginal likelihood, and (PP) maximum likelihood estimation of the posterior predictive distribution (Jankowiak et al., 2020). Both models are, however, trained by stochastic gradient descent and black-box variational inference (Bingham et al., 2019; Wingate & Weber, 2013; Ranganath et al., 2014).
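The per-point marginals (15)-(16) involve only standard kernel algebra, as the NumPy sketch below shows (an explicit inverse is used for clarity; a Cholesky factorization would be preferable in practice).

```python
import numpy as np

def sparse_marginal(k_n, k_nn, K_uu, m, S):
    """Marginal q(f_n | xi) = N(mu_f(x_n), sigma_f(x_n) * xi) from
    Eqs. (14)-(16), given k_n = k(x_n, X_u), k_nn = k(x_n, x_n),
    K_uu = k(X_u, X_u) and the variational parameters (m, S)."""
    K_inv = np.linalg.inv(K_uu)
    mu_f = k_n @ K_inv @ m                                     # Eq. (15)
    sigma_f = k_nn - k_n @ (K_inv - K_inv @ S @ K_inv) @ k_n   # Eq. (16)
    return mu_f, sigma_f
```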
VI training. The marginal likelihood is

$$p(\mathbf{y};\,\eta_f,\eta_\xi) = \int p(\mathbf{y},\mathbf{f},\mathbf{u},\xi;\,\eta_f,\eta_\xi)\, d\mathbf{f}\, d\mathbf{u}\, d\xi = \int p(\mathbf{y}|\mathbf{f})\, p(\mathbf{f}|\mathbf{u},\xi;\,\eta_f)\, p(\mathbf{u}|\xi;\,\eta_{\mathbf{u}})\, p(\xi;\,\eta_\xi)\, d\mathbf{f}\, d\mathbf{u}\, d\xi. \tag{18}$$

However, this integral is intractable—just as it was for the elliptical likelihood—since p(ξ; ηξ) is parameterized by a spline flow. To overcome this we use the same procedure as for the likelihood model and approximate the marginal likelihood with the ELBO

$$\begin{split}\mathcal{L}_{\text{ELBO}}(\eta_f,\eta_\xi,\varphi_f,\varphi_\xi) &= \mathbb{E}_{q(\mathbf{f},\xi;\,\varphi)}\left[\log p(\mathbf{y}|\mathbf{f})\right] - D_{\text{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,||\,p(\mathbf{u},\xi;\,\eta)\right)\\ &= \sum_{n=1}^{N}\mathbb{E}_{q(f_n,\xi;\,\varphi)}\left[\log p(y_n|f_n)\right] - D_{\text{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,||\,p(\mathbf{u},\xi;\,\eta)\right).\end{split}\tag{19}$$

Had the likelihood been Gaussian, the expectation E_{q(fn,ξ; φ)}[log p(yn|fn; ηf)] could have been computed analytically. In our case, however, it is elliptical and we therefore use a Monte Carlo estimate instead. Inserting the elliptical likelihood (10) from the previous section gives

$$\begin{split}\mathcal{L}(\eta,\varphi) &= \sum_{n=1}^{N}\mathbb{E}_{q(f_n,\xi;\,\varphi)\,q(\zeta_n;\,\varphi_{\zeta_n})}\left[\log\left(\mathcal{N}\left(y_n; f_n, T(\zeta;\,\eta_\omega)\right)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1}\right)\right]\\ &\quad - \sum_{n=1}^{N} D_{\text{KL}}\left(q(\zeta_n;\,\varphi_{\zeta_n})\,||\,p(\zeta)\right) - D_{\text{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,||\,p(\mathbf{u},\xi;\,\eta)\right).\end{split}\tag{20}$$

PP training. Training directly on the predictive posterior of the elliptical process has the effect of moving the posterior distribution q(fn|ξ; xn) inside the log (Jankowiak et al., 2020),

$$\begin{split}\mathcal{L}(\eta,\varphi) &= \sum_{n=1}^{N}\mathbb{E}_{q(\zeta_n;\,\varphi_{\zeta_n})\,q(\xi;\,\varphi_\xi)}\left[\log\left(\mathcal{N}\left(y_n;\,\mu_f(\mathbf{x}_n),\,\sigma_f(\mathbf{x}_n)\xi + T(\zeta;\,\eta_\omega)\right)\left|\frac{\partial T(\zeta;\,\eta_\omega)}{\partial\zeta}\right|^{-1}\right)\right]\\ &\quad - \sum_{n=1}^{N} D_{\text{KL}}\left(q(\zeta_n;\,\varphi_{\zeta_n})\,\|\,p(\zeta)\right) - D_{\text{KL}}\left(q(\mathbf{u},\xi;\,\varphi)\,\|\,p(\mathbf{u},\xi;\,\eta)\right).\end{split}\tag{21}$$

Note that when drawing a single Monte Carlo sample from q(fn, ξ; φ) the two methods are equivalent. Similarly, the expectation over q(ξ; φξ) can be moved inside the log if a single Monte Carlo sample from q(ξ; φξ) is used.
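With single Monte Carlo samples, one term of the PP objective reduces to a Gaussian log-density with an inflated variance. The sketch below shows this reduced form only; it deliberately leaves out the flow's Jacobian factor appearing in (21), so it conveys the structure of the objective rather than the exact expression.

```python
import torch

def pp_term(y_n, mu_f, sigma_f, xi, omega):
    """Single-sample sketch of one PP term: log N(y_n; mu_f, sigma_f*xi + omega),
    with xi ~ q(xi) and omega drawn from the noise mixing distribution."""
    var = sigma_f * xi + omega
    return torch.distributions.Normal(mu_f, var.sqrt()).log_prob(y_n)
```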
## 3.3 Extension To Heteroscedastic Noise

We extend the elliptical likelihood by modeling heteroscedastic noise. First, recall from Section 3.1 that we amortized the variational mixing distribution for the elliptical likelihood. Here, we describe how we can model elliptical heteroscedastic noise by letting the parameters ηω of the mixing distribution of the likelihood depend on the input location.

In heteroscedastic regression, the noise depends on the input location xn. For example, heteroscedastic elliptical noise can be useful in a time series where the noise variance and tail-heaviness change over time. Examples of this can be found in statistical finance (Liu et al., 2020) and robotics (Kersting et al., 2007). To model this, we return to the idea of amortized variational inference and use a neural network with parameters γω to represent the mapping from input location to spline flow parameters, xn 7→ ηωn.

To train the likelihood (see Section 3.1) we used a variational approximation of the posterior base distribution but kept the same flow as the mixing distribution. As shown later, in Section 4.1, this works well for homoscedastic noise. For heteroscedastic noise, on the other hand, we got better results by instead keeping the base distribution fixed and learning a different spline flow with parameters dependent on both input location and noise, φωn = g(xn, ϵn; γ̃ω). We train the model by maximizing the ELBO

$$\mathcal{L}(\gamma_\omega,\tilde{\gamma}_\omega) = \sum_{n=1}^{N}\mathbb{E}_{\omega_n\sim q(\omega_n;\,\varphi_{\omega_n})}\left[\log p(\epsilon_n|\,\omega_n)\right] - D_{\mathrm{KL}}\left(q(\omega_n;\,\varphi_{\omega_n})\,||\,p(\omega_n;\,\eta_{\omega_n})\right). \tag{22}$$

This model can be extended by including additional inputs to the spline flow.

## 3.4 Extension To Multi-Path Data

Here, we look at data with multiple independent realizations from the same EP prior, called sample paths, see Figure 4. For example, it could be multiple time series generated by an underlying physical process like temperature or pressure. Suppose we have M sample paths such that every pair (y^m, X^m) represents one of the trajectories. We create a model where the sample paths share the same EP prior, i.e., have the same mixing distribution and kernel, but where each sample path has its own approximate posterior

$$q(\xi;\,\varphi_\xi^m) \approx p(\xi|\mathbf{y}^m;\,\eta_\xi), \qquad q(\mathbf{u};\,\varphi_{\mathbf{u}}^m) \approx p(\mathbf{u}|\mathbf{y}^m;\,\eta_{\mathbf{u}}). \tag{23}$$

By sharing the prior, the hope is that the model will be less prone to overfitting. Also, the final EP prior represents a unified representation of all sample paths, which may provide additional insight into the underlying process.

![7_image_0.png](7_image_0.png)

Figure 4: Illustration of multi-path EP data, where the data include multiple draws from a single EP prior.

The main challenge of this model is that each path has a separate tuple of variational parameters (φ_ξ^m, m^m, S^m), which can be problematic if there are many of them. We, again, resolve this by amortizing the variational parameters:

$$\varphi_\xi^m = g(\mathbf{y}^m,\mathbf{X}^m;\,\gamma_\xi), \qquad \mathbf{m}^m,\mathbf{S}^m = g(\mathbf{y}^m,\mathbf{X}^m;\,\gamma_{\mathbf{u}}). \tag{24}$$

The functions g(·; γξ) and g(·; γu) are parameterized by neural networks with parameters γξ and γu in a similar fashion as in Jafrasteh et al. (2021).
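The amortization in (24) can be sketched as below, borrowing the layer widths later reported in Section 4.5 (two hidden layers of 512 units for φ_ξ^m; 512 and 1024 units for the q(u) parameters). The flattening of (y^m, X^m), the output dimensions, and the diagonal parameterization of S^m are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

T, n_flow, n_inducing = 365, 19, 20   # assumed path length and output sizes

# phi_xi^m = g(y^m, X^m; gamma_xi): path -> spline flow parameters.
g_xi = nn.Sequential(nn.Linear(2 * T, 512), nn.ReLU(),
                     nn.Linear(512, 512), nn.ReLU(),
                     nn.Linear(512, n_flow))

# (m^m, S^m) = g(y^m, X^m; gamma_u): path -> inducing-point posterior.
g_u = nn.Sequential(nn.Linear(2 * T, 512), nn.ReLU(),
                    nn.Linear(512, 1024), nn.ReLU(),
                    nn.Linear(1024, 2 * n_inducing))

path = torch.cat([torch.randn(T), torch.linspace(0, 1, T)])  # flattened (y^m, X^m)
phi_xi_m = g_xi(path)
m_m, s_raw = g_u(path).chunk(2)
S_m_diag = torch.nn.functional.softplus(s_raw)               # positive variances
```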
![8_image_0.png](8_image_0.png)

Figure 5: The posterior predictive distribution when using a GP with elliptical noise, modeled with a spline flow. The histograms show the learned and the true noise mixing distribution.

## 4 Experiments

We examine the variational elliptical processes using five different experiments. In the **first** experiment, we investigate how well the elliptical likelihood (Section 3.1) recovers known elliptical noise in synthetic data. In the **second** experiment, we investigate the benefits of using the sparse EP compared to the sparse GP for regression on standard benchmarks. In the **third** experiment, we examine if using an EP is beneficial in classification tasks. In the **fourth** experiment, we investigate the amortized elliptical processes described in Section 3.3 to model heteroscedastic noise. In the **fifth** and last experiment we illustrate how we can use an amortized multi-path EP (Section 3.4) on a dataset with multiple similar trajectories.

Implementation. The mixing distribution of the variational EP uses a linear rational spline flow, where we transform the likelihood flow p(ω) using *Softplus* and the posterior flow p(ξ) using a *Softmax* to ensure that it is bounded from below and positive. We use a squared exponential kernel with independent length scales in all experiments. See Appendix F for further implementation details. The code from the experiments will be published on GitHub if the paper is accepted, with a link added here.

## 4.1 Noise Identification

To examine how well the elliptical likelihood, described in Section 3.1, captures different types of elliptical noise, we created three synthetic datasets, each with N = 300 data points, by using the function fn = sin(3xn)/2, where x ∈ R is uniformly sampled, xn ∼ U(−2, 2). Each of the datasets has its own independent elliptical noise ϵn, which is randomly sampled and added to the function, yn = fn + ϵn. For each dataset, we trained a sparse variational GP with a variational elliptical likelihood. For the loss, we used the parametric posterior (21).

![9_image_0.png](9_image_0.png)

Figure 6: The predictive distribution of the latent function together with the 99% credibility interval when using (**top row**) a GP with elliptical noise, modeled with a spline flow, and (**bottom row**) a GP with Gaussian noise. The histograms show the learned and the true noise mixing distribution.

Figure 5 illustrates the results from the experiments. The histograms illustrate the trained mixing distribution p(ω; ηω), which we compare to the actual mixing distribution (the red curve) from which the noise ϵn originated. The learned distribution follows the shape of the actual mixing distribution quite well, which indicates that it is possible to learn a noise mixing distribution. The figure also presents the predictive posterior of the final models, demonstrating that the models learned suitable kernel parameters at the same time as they learned the likelihood mixing distribution.

Figure 6 compares the final mixing distribution using an elliptical likelihood and a Gaussian one. We see that the Gaussian likelihood matches the heavier-tailed mixing distribution with a variance (ω = 0.4) that is too wide. This results in a latent function confidence interval that is extremely narrow.
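The synthetic data of Section 4.1 are straightforward to reproduce, as sketched below. The paper does not list the three noise distributions, so the Student's t variant shown (drawn through its Gaussian scale mixture) is one illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
x = rng.uniform(-2.0, 2.0, size=N)       # x_n ~ U(-2, 2)
f = np.sin(3 * x) / 2                     # f_n = sin(3 x_n) / 2

# Heavy-tailed noise via the scale mixture: draw omega_n, then N(0, omega_n).
nu = 3.0
omega = nu / rng.chisquare(nu, size=N)
y = f + rng.standard_normal(N) * np.sqrt(omega)   # y_n = f_n + eps_n
```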
## 4.2 Regression

We investigated the effects of the elliptical process and the elliptical noise by running experiments on several datasets from the UCI repository (Dua & Graff, 2017). The models we investigate, summarized in Table 1, all use a GP prior, but for some models, we use an approximated EP posterior instead of the regular GP posterior. We compare our model to the sparse variational GP described in Hensman et al. (2013b), which we call VI-GP-GP, and the parametric GP described in Jankowiak et al. (2020), which we call PP-GP-GP. We also compare the models to an exact GP for all but the two largest datasets.

Figure 7 summarizes the results from the experiment by plotting the mean and standard deviation over ten randomly sampled training, validation, and test splits. The figures show a hold-out test set's mean squared error (MSE) and the negative test log-likelihood (LL). See Appendix G for more details.

![10_image_0.png](10_image_0.png)

Figure 7: Predictive negative log-likelihood (LL) and mean-squared error (MSE) on the hold-out sets from the experiments (smaller is better). We show the average of the ten folds and standard deviation as a line.

Table 1: The different types of models we train on the regression datasets.

| Name     | Approx      | Loss                | Likelihood   | Posterior   |
|----------|-------------|---------------------|--------------|-------------|
| Exact GP | Exact       | Marginal likelihood | Gaussian     | Gaussian    |
| VI-GP-GP | Variational | ELBO                | Gaussian     | Gaussian    |
| VI-EP-GP | Variational | ELBO                | Elliptic     | Gaussian    |
| VI-EP-EP | Variational | ELBO                | Elliptic     | Elliptic    |
| PP-GP-GP | Variational | Parametric          | Gaussian     | Gaussian    |
| PP-EP-GP | Variational | Parametric          | Elliptic     | Gaussian    |
| PP-EP-EP | Variational | Parametric          | Elliptic     | Elliptic    |

An elliptic likelihood gives a lower negative log-likelihood than a Gaussian likelihood on most datasets. However, the advantage of an elliptical likelihood on the three smaller datasets is small at best. We hypothesize that the elliptic likelihood may be too flexible and overfits the training data. Potentially, a less flexible mixing distribution combined with stronger regularization might improve performance. On the larger datasets the elliptic posterior yields lower negative log-likelihoods compared to a Gaussian posterior, even though the extra benefit from only the elliptic likelihood is marginal. Theoretically, a Gaussian prior combined with an elliptical likelihood should yield an elliptical posterior. However, finding the correct posterior during training might be challenging, which could be why we only see the benefit for the largest datasets.

We notice that for the majority of the datasets, we get a lower negative predictive log-likelihood when we train the predictive log-likelihood directly. This is true for both the GP and the EP models. However, the improvement is not as clear when considering the mean squared error, even though we see a considerably decreased MSE on the three largest datasets.

## 4.3 Binary Classification

To evaluate the EP on classification tasks, we perform variational EP and GP classification by simply replacing the likelihood with a binary one. To derive the expectation in Equation (19) we first sample fn ∼ N(fn|µf(xn), σf(xn)ξ) and then evaluate the likelihood Ber(Sigmoid(fn)). This setting is interesting since here we do not have a likelihood that captures the noise in the data; instead, the process itself has to do it. Therefore, we can assess the value of the elliptical process itself, without the elliptical noise. We compare two sparse EP models with a sparse GP model using 20 inducing points. The two EPs differ in the prior mixing distribution. We used a GP prior and an EP posterior for the first model. For the second model, we instead replaced the GP prior with an elliptical one.
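The sampling scheme just described amounts to a reparameterized draw of fn followed by a Bernoulli log-likelihood; the helper below is a hypothetical sketch, not the authors' code.

```python
import torch

def classification_loglik(y, mu_f, sigma_f, xi):
    """One Monte Carlo term for EP classification: sample
    f_n ~ N(mu_f(x_n), sigma_f(x_n) * xi), then score y_n under
    Bernoulli(sigmoid(f_n))."""
    f = mu_f + (sigma_f * xi).sqrt() * torch.randn_like(mu_f)
    return torch.distributions.Bernoulli(logits=f).log_prob(y)
```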
We can see the trainable prior mixing distribution as defining a continuous scale-mixture of Gaussian processes, which can be more expressive than a single GP.

![11_image_0.png](11_image_0.png)

Figure 8: The classification AUC (Area Under the Curve) and accuracy score from the ten-fold cross-validation (higher is better).

We trained the models on three classification datasets, described in Appendix I. The results from a ten-fold cross-validation are presented in Figure 8. From the area under the curve (AUC) score, we see that the EP models separate the two classes better. It seems that the variational elliptical distribution mainly contributes to the higher AUC score. Training the mixing distribution of the EP prior did not improve the score.

## 4.4 Elliptical Heteroscedastic Noise

In this experiment, we aimed to learn heteroscedastic noise as described in Section 3.3 on a synthetic dataset of 150 samples, see Figure 9. We created the dataset using the function f(x) = sin(5x) + x. We then added Student's t noise, ϵ(x) ∼ St(ν(x), σ(x)), where we decreased the degrees of freedom as ν(x) = 25 − 11|x + 1|^0.9 and increased the standard deviation as σ(x) = 0.5|x + 1|^1.6 + 0.001. We used a variational sparse GP with heteroscedastic noise as described in Section 3.3. We used six bins for the prior mixing distribution and eight bins for the posterior mixing distribution, which resulted in 19 and 35 parameters to predict, respectively. We had more bins for the posterior mixing distribution since we wanted the approximate posterior to be as flexible as possible to fit the true posterior.

The results from the experiments are depicted in Figure 9 and show that the model was able to capture the varying noise, both in terms of the scale and the increasing heaviness of the tail. A single spike in the mixing distribution indicates that the noise is Gaussian, and the *wider* the mixing distribution is, the heavier-tailed the noise is.

![11_image_1.png](11_image_1.png)

Figure 9: The result from training a GP with heteroscedastic elliptical noise on a synthetic dataset. The histograms show the resulting noise mixing distributions at different xn.

![12_image_0.png](12_image_0.png)

Figure 11: Posterior predictive distributions of the wind speed during one year at nine different locations in Australia. They all share the same EP prior but have data-dependent posterior predictive distributions.

## 4.5 Multi-Path Data

Here we experimented with multi-path data, as described in Section 3.4. The dataset (Young & Young, 2020) contains daily temperature observations from J = 49 different locations in Australia in 2015. We randomly divided trajectories (time series) corresponding to different locations into training and test sets. We used a multi-path EP (Section 3.4) since we assumed that temperature trajectories at different locations still have an underlying similarity and thus correspond to sample paths from the same EP prior. Furthermore, we used the same elliptical likelihood for all trajectories. The variational mixing distribution parameters φ_ξ^m are amortized by a dense neural network with two hidden layers, each with 512 hidden units. The parameters µ^m and Σ^m are amortized by dense neural networks with two hidden layers, with 512 and 1024 hidden units for µ^m and Σ^m, respectively.

![12_image_1.png](12_image_1.png)

Figure 10: Negative predictive log-likelihood (LL) of the multi-path EP compared with modeling every trajectory individually (ind EP).
Figure 11 illustrates the resulting posterior distributions q(f^m; φ) for some of the sample paths. We compare with the negative log-likelihood obtained when training all trajectories individually (Figure 10) and see a decrease in negative log-likelihood when sharing the EP prior.

## 5 Related Work

In general, attempts at modeling heavy-tailed stochastic processes modify either the likelihood or the stochastic process prior—rarely both. Approximate inference is typically needed when going beyond Gaussian likelihoods (Neal, 1997; Jylänki et al., 2011), e.g., for robust regression, but approximations that preserve analytical tractability have been proposed (Shah et al., 2014).

Ma et al. (2019) describe a class of stochastic processes where the finite-dimensional distributions are only defined implicitly as a parameterized transformation of some base distribution, thereby generalizing earlier work on warped Gaussian processes (Snelson et al., 2004; Rios & Tobar, 2019). However, the price of this generality is that standard variational inference is no longer possible. Based on an assumption of a Gaussian likelihood, they describe an alternative based on the wake-sleep algorithm by Hinton et al. (1995). Other attempts at creating more expressive GP priors include Maroñas et al. (2021), who used a GP in combination with a normalizing flow, and Luo & Sun (2017), who used a discrete mixture of Gaussian processes. Similar ideas combining mixtures and normalizing flows have also been proposed to create more expressive likelihoods (Abdelhamed et al., 2019; Daemi et al., 2019; Winkler et al., 2019; Rivero & Dvorkin, 2020) and variational posteriors (Nguyen & Bonilla, 2014). Non-stationary extensions of Gaussian processes, such as when modeling heteroscedastic noise, are quite rare, but the mixture model of Li et al. (2021) and the variational model of Lázaro-Gredilla & Titsias (2011) are two examples.

In the statistics literature, it is well known that elliptical processes can be defined as scale-mixtures of Gaussian processes (Huang & Cambanis, 1979; O'Hagan, 1991; O'Hagan et al., 1999). However, unlike in machine learning, little emphasis is placed on building the models from data (i.e., training). These models have found applications in environmental statistics because of the field's inherent interest in modeling spatial extremes (Davison et al., 2012). Several works take the mixing distribution as the starting point, like us, and make localized predictions of quantiles (Maume-Deschamps et al., 2017) or other tail-risk measures (Opitz, 2016).

## 6 Conclusions

The Gaussian distribution is the default choice in statistical modeling for good reasons. Even so, far from everything is Gaussian, and casually pretending it is comes at a risk. The elliptical distribution offers a computationally tractable alternative that can capture heavy-tailed distributions. The same reasoning applies when comparing the Gaussian process to the elliptical process. We believe that a sensible approach in many applications would be to start from the weaker assumptions of the elliptical process and let the data decide whether the evidence supports Gaussianity.

We constructed the elliptical processes as a scale-mixture of Gaussian distributions. By parameterizing the mixing distribution using a normalizing flow, we showed how a corresponding elliptical process can be trained using variational inference.
The variational approximation we propose enables us to capture heavy-tailed posteriors and makes it straightforward to create a sparse variational elliptical process that scales to large datasets. We performed experiments on regression and classification in which we compared the elliptical processes with the Gaussian process. Our experiments show that the elliptical process achieves a lower negative predictive log-likelihood on the majority of the datasets, in particular the larger ones (n > 10000). The added flexibility of the elliptical processes could benefit a range of applications, both classical and new. However, advanced statistical models are not a cure-all, and one needs to avoid over-reliance on such models, especially in safety-critical applications.

## References

Abdelrahman Abdelhamed, Marcus A Brubaker, and Michael S Brown. Noise flow: Noise modeling with conditional normalizing flows. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3165–3173, 2019. Jesús Alcalá-Fdez, Alberto Fernández, Julián Luengo, Joaquín Derrac, Salvador García, Luciano Sánchez, and Francisco Herrera. Keel data-mining software tool: data set repository, integration of algorithms and experimental analysis framework. *Journal of Multiple-Valued Logic & Soft Computing*, 17, 2011. Rabindra Nath Bhattacharya and Edward C Waymire. *A basic course in probability theory*, volume 69. Springer, 2007. Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. *Journal of Machine Learning Research*, 2018. Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. *The Journal of Machine Learning Research*, 20(1):973–978, 2019. David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *Journal of the American Statistical Association*, 112(518):859–877, 2017. Atefeh Daemi, Hariprasad Kodamana, and Biao Huang. Gaussian process modelling with Gaussian mixture likelihood. *Journal of Process Control*, 81:209–220, 2019. Andreas Damianou and Neil D Lawrence. Deep Gaussian processes. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 207–215, 2013. Anthony C Davison, Simone A Padoan, Mathieu Ribatet, et al. Statistical modeling of spatial extremes. *Statistical Science*, 27(2):161–186, 2012. Peter J Diggle, Jonathan A Tawn, and Rana A Moyeed. Model-based geostatistics. *Journal of the Royal Statistical Society: Series C (Applied Statistics)*, 47(3):299–350, 1998. Hadi Mohaghegh Dolatabadi, Sarah Erfani, and Christopher Leckie. Invertible generative modeling using linear rational splines. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 4236–4246, 2020. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. Kai-Tai Fang, Samuel Kotz, and Kai Wang Ng. *Symmetric multivariate and related distributions*. Chapman and Hall, 1990. Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning.
In Proceedings of the annual meeting of the cognitive science society, volume 36, 2014. James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence (UAI), pp. 282, 2013a. James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Ann Nicholson and Padhraic Smyth (eds.), *Uncertainty in Artificial Intelligence*, volume 29. AUAI Press, 2013b. James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 351– 360, 2015. Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The wake-sleep algorithm for unsupervised neural networks. *Science*, 268(5214):1158–1161, 1995. Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 2013. Steel T Huang and Stamatis Cambanis. Spherically invariant processes: Their nonlinear structure, discrimination, and estimation. *Journal of Multivariate Analysis*, 9(1):59–83, 1979. Bahram Jafrasteh, Carlos Villacampa-Calvo, and Daniel Hernández-Lobato. Input dependent sparse Gaussian processes. *arXiv preprint arXiv:2107.07281*, 2021. Martin Jankowiak, Geoff Pleiss, and Jacob Gardner. Parametric Gaussian process regressors. In International Conference on Machine Learning (ICML), pp. 4702–4712. PMLR, 2020. Pasi Jylänki, Jarno Vanhatalo, and Aki Vehtari. Robust Gaussian process regression with a Student-t likelihood. *Journal of Machine Learning Research*, 12(Nov):3227–3257, 2011. Yutaka Kano. Consistency property of elliptic probability density functions. *Journal of Multivariate Analysis*, 51(1):139–147, 1994. Kristian Kersting, Christian Plagemann, Patrick Pfaff, and Wolfram Burgard. Most likely heteroscedastic Gaussian process regression. In *International Conference on Machine Learning (ICML)*, pp. 393–400, 2007. Dennis Kibler, David W Aha, and Marc K Albert. Instance-based prediction of real-valued attributes. Computational Intelligence, 5(2):51–57, 1989. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *International Conference* on Learning Representations, 12 2015. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Miguel Lázaro-Gredilla and Michalis K Titsias. Variational heteroscedastic Gaussian process regression. In ICML, 2011. Tao Li, Di Wu, and Jinwen Ma. Mixture of robust Gaussian processes and its hard-cut EM algorithm with variational bounding approximation. *Neurocomputing*, 452:224–238, 2021. Bingqing Liu, Ivan Kiskin, and Stephen Roberts. An overview of Gaussian process regression for volatility forecasting. In *International Conference on Artificial Intelligence in Information and Communication* (ICAIIC), pp. 681–686, 2020. Chen Luo and Shiliang Sun. Variational mixtures of Gaussian processes for classification. In *IJCAI*, volume 357, pp. 4603–4609, 2017. Chao Ma, Yingzhen Li, and José Miguel Hernández-Lobato. Variational implicit processes. In International Conference on Machine Learning (ICML), pp. 4222–4233, 2019. Benoit Mandelbrot. The variation of certain speculative prices. *The Journal of Business*, 36(4):394–419, 1963. Juan Maroñas, Oliver Hamelijnck, Jeremias Knoblauch, and Theodoros Damoulas. Transforming Gaussian processes with normalizing flows. In *International Conference on Artificial Intelligence and Statistics* (AISTATS), pp. 
1081–1089, 2021. Véronique Maume-Deschamps, Didier Rullière, and Antoine Usseglio-Carleve. Quantile predictions for elliptical random fields. *Journal of Multivariate Analysis*, 159:1–17, 2017. Radford M Neal. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Technical Report 9702, Department of Statistics, University of Toronto, 1997. Trung V Nguyen and Edwin V Bonilla. Automated variational inference for Gaussian process models. Advances in Neural Information Processing Systems, 27, 2014. Sebastian W Ober and Laurence Aitchison. Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes. In *International Conference on Machine Learning (ICML)*, pp. 8248–8259, 2021. Anthony O'Hagan. Bayes–Hermite quadrature. *Journal of statistical planning and inference*, 29(3):245–260, 1991. Anthony O'Hagan, Marc C Kennedy, and Jeremy E Oakley. Uncertainty analysis and other inference tools for complex computer codes. In *Bayesian statistics 6*, pp. 503–524. Oxford University Press, 1999. Thomas Opitz. Modeling asymptotically independent spatial extremes based on Laplace random fields. Spatial Statistics, 16:1–18, 2016. R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. *Statistics & Probability Letters*, 33(3): 291–297, 1997. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. *Journal of Machine Learning* Research, 22(57):1–64, 2021a. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. *Journal of Machine Learning* Research, 22(57):1–64, 2021b. Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In *Artificial intelligence* and statistics, pp. 814–822, 2014. Carl Edward Rasmussen and Christopher K I Williams. *Gaussian processes for machine learning*. The MIT Press, 2006. p. 194. Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International Conference on Machine Learning (ICML)*, pp. 1530–1538, 2015. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning (ICML)*, pp. 1278– 1286, 2014. Gonzalo Rios and Felipe Tobar. Compositionally-warped Gaussian processes. *Neural Networks*, 118:235–246, 2019. Ana Diaz Rivero and Cora Dvorkin. Flow-based likelihoods for non-Gaussian inference. *Physical Review D*, 102(10):103507, 2020. Hugh Salimbeni, Vincent Dutordoir, James Hensman, and Marc Deisenroth. Deep Gaussian processes with importance-weighted variational inference. In *International Conference on Machine Learning (ICML)*, pp. 5589–5598, 2019. Amar Shah, Andrew Wilson, and Zoubin Ghahramani. Student-t processes as alternatives to Gaussian processes. In *Artificial Intelligence and Statistics*, pp. 877–885, 2014. Jack W Smith, James E Everhart, WC Dickson, William C Knowler, and Robert Scott Johannes. Using the adap learning algorithm to forecast the onset of diabetes mellitus. In *Proceedings of the annual symposium* on computer application in medical care, pp. 261. American Medical Informatics Association, 1988. Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. Warped Gaussian processes. Advances in neural information processing systems, 16:337–344, 2004. Michalis Titsias. 
Variational learning of inducing variables in sparse Gaussian processes. In *Artificial Intelligence and Statistics*, pp. 567–574, 2009. Dustin Tran, Rajesh Ranganath, and David M Blei. Variational Gaussian process. In *International Conference on Learning Representations (ICLR)*, 2016. Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. *Foundations and Trends® in Machine Learning*, 1(1–2):1–305, 2008. Andrew G Wilson, Zhiting Hu, Russ R Salakhutdinov, and Eric P Xing. Stochastic variational deep kernel learning. *Advances in Neural Information Processing Systems*, 29, 2016. David Wingate and Theophane Weber. Automated variational inference in probabilistic programming. *arXiv preprint arXiv:1301.1299*, 2013. Christina Winkler, Daniel Worrall, Emiel Hoogeboom, and Max Welling. Learning likelihoods with conditional normalizing flows. *arXiv preprint arXiv:1912.00042*, 2019. I-C Yeh. Modeling of strength of high-performance concrete using artificial neural networks. *Cement and Concrete Research*, 28(12):1797–1808, 1998. Young and Young. Rain in Australia. https://www.kaggle.com/datasets/jsphyg/weather-dataset-rattle-package, 2020. Accessed: 2022-01-22. Abdelhak M Zoubir, Visa Koivunen, Yacine Chakhchoukh, and Michael Muma. Robust estimation in signal processing: A tutorial-style treatment of fundamental concepts. *IEEE Signal Processing Magazine*, 29(4):61–80, 2012.

## A The Elliptical Distribution

The Gaussian distribution—the basic building block of Gaussian processes—has several attractive properties that we wish the elliptical process to inherit, namely (i) closure under marginalization, (ii) closure under conditioning, and (iii) straightforward sampling. This leads us to consider the family of *consistent* elliptical distributions. Following Kano (1994), we say that a family of elliptical distributions {p(u(yN); η) | N ∈ N} is consistent if and only if

$$\int_{-\infty}^{\infty} p\left(u(\mathbf{y}_{N+1});\,\eta\right) dy_{N+1} = p\left(u(\mathbf{y}_N);\,\eta\right). \tag{25}$$

In other words, a consistent elliptical distribution is closed under marginalization. Far from all elliptical distributions are consistent, but the complete characterization of those that are is provided by the following theorem (Kano, 1994).

Theorem 1 *An elliptical distribution is consistent if and only if it originates from the integral*

$$p(u;\,\eta) = |\Sigma|^{-\frac{1}{2}}\int_{0}^{\infty}\left(\frac{1}{2\pi\xi}\right)^{\frac{N}{2}} e^{\frac{-u}{2\xi}}\, p(\xi;\,\eta_\xi)\, d\xi, \tag{26}$$

where ξ is a mixing variable with the corresponding, strictly positive finite, mixing distribution p(ξ; η), that is independent of N.

This shows that consistent elliptical distributions p(u; η) are scale-mixtures of Gaussian distributions, with a mixing variable ξ ∼ p(ξ; η). Note that any mixing distribution fulfilling Theorem 1 can be used to define a consistent elliptical process. We recover the Gaussian distribution if the mixing distribution is a Dirac delta function and the Student's t distribution if it is a scaled inverse chi-square distribution. If p(u; η) is a scale-mixture of normal distributions, it has the stochastic representation

$$\mathbf{Y}|\,\xi \sim \mathcal{N}(\mu, \Sigma\xi), \quad \xi \sim p(\xi;\,\eta). \tag{27}$$
By using the following representation of the elliptical distribution,

$$\mathbf{Y} = \mu + \Sigma^{1/2}\mathbf{Z}\,\xi^{1/2}, \tag{28}$$

where Z follows the standard normal distribution, we get the mean

$$\mathbb{E}[\mathbf{Y}] = \mu + \Sigma^{1/2}\,\mathbb{E}\left[\mathbf{Z}\right]\,\mathbb{E}[\xi^{1/2}] = \mu \tag{29}$$

and the covariance

$$\operatorname{Cov}(\mathbf{Y}) = \mathbb{E}\left[(\mathbf{Y}-\mu)(\mathbf{Y}-\mu)^\top\right] = \mathbb{E}\left[(\Sigma^{1/2}\mathbf{Z}\sqrt{\xi})(\Sigma^{1/2}\mathbf{Z}\sqrt{\xi})^\top\right] = \mathbb{E}\left[\xi\,\Sigma^{1/2}\mathbf{Z}\mathbf{Z}^\top(\Sigma^{1/2})^\top\right] = \mathbb{E}\left[\xi\right]\Sigma. \tag{30}$$

The covariance is thus a scaled version of the scale matrix Σ; to obtain it we have to derive E[ξ]. Note that if ξ follows the scaled inverse chi-square distribution, E[ξ] = ν/(ν − 2). We recognize this from the Student's t distribution, where Cov(Y) = ν/(ν − 2) Σ.

## B Conditional Distribution

To use the EP for predictions, we need the conditional mean and covariance of the corresponding elliptical distribution, which are derived next. We partition the data as y = [y1, y2], where y1 is the N1 observed data points, y2 is the N2 data points to predict, and N1 + N2 = N. We have the following result:

Proposition 1 *If the data* y = [y1, y2] *originate from the consistent elliptical distribution in (3), the conditional distribution originates from the distribution*

$$p_{\mathbf{y}_2|\mathbf{y}_1}(\mathbf{y}_2) = \frac{c_{N_1,\eta}}{\left|\Sigma_{22|1}\right|^{\frac{1}{2}}(2\pi)^{\frac{N_2}{2}}}\int_{0}^{\infty}\xi^{-\frac{N}{2}} e^{-(u_{2|1}+u_1)\frac{1}{2\xi}}\, p(\xi;\,\eta)\, d\xi, \tag{31}$$

with the conditional mean E[y2|y1] = µ2|1 *and the conditional covariance*

$$\operatorname{Cov}[\mathbf{Y}_2|\mathbf{Y}_1=\mathbf{y}_1] = \mathbb{E}[\hat{\xi}]\,\Sigma_{22|1}, \quad \hat{\xi} \sim \xi|\mathbf{y}_1, \tag{32}$$

where u1 = (y1 − µ1)⊤Σ11^{−1}(y1 − µ1), u2|1 = (y2 − µ2|1)⊤Σ22|1^{−1}(y2 − µ2|1), and c_{N1,η} is a normalization constant. The conditional scale matrix Σ22|1 and the conditional location vector µ2|1 are the same as the covariance matrix and mean for a Gaussian distribution. The proof is given below.

The conditional distribution is guaranteed to be a consistent elliptical distribution but not necessarily the same as the original one—the shape depends on the training samples. (Recall that consistency only concerns the marginal distribution.)

To prove Proposition 1, we partition the data y as [y1, y2], so that N1 data points belong to y1, N2 data points belong to y2, and N1 + N2 = N.

Proof of Proposition 1. The joint distribution of [y1, y2] is p(y1, y2|ξ)p(ξ; η) and the conditional distribution of y2 given y1 is p(y2|y1, ξ)p(ξ|y1; η). For a given ξ, p(y2|y1, ξ) is the conditional normal distribution and so

$$p(\mathbf{y}_2|\mathbf{y}_1,\xi) \sim \mathcal{N}(\mu_{2|1}, \Sigma_{22|1}\hat{\xi}), \quad \hat{\xi} \sim p(\xi|\mathbf{y}_1;\,\eta), \tag{33}$$

where

$$\mu_{2|1} = \mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(\mathbf{y}_1 - \mu_1), \tag{34}$$

$$\Sigma_{22|1} = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{21}^\top, \tag{35}$$

the same as for the conditional Gaussian distribution.
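Since (34)-(35) coincide with Gaussian conditioning, the conditional location and scale can be computed with standard linear algebra; only the mixing variable changes. A NumPy sketch:

```python
import numpy as np

def conditional_moments(mu1, mu2, S11, S21, S22, y1):
    """Conditional location and scale matrix of Proposition 1,
    Eqs. (34)-(35), identical to Gaussian conditioning."""
    A = S21 @ np.linalg.inv(S11)          # Sigma_21 Sigma_11^{-1}
    mu_2g1 = mu2 + A @ (y1 - mu1)         # Eq. (34)
    S_22g1 = S22 - A @ S21.T              # Eq. (35)
    return mu_2g1, S_22g1
```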
Using Bayes' theorem we get

$$\begin{aligned}p(\xi|\mathbf{y}_{1};\,\eta)&\propto p(\mathbf{y}_{1}|\xi)\,p(\xi;\,\eta)\\&\propto\left|\mathbf{\Sigma}_{11}\xi\right|^{-1/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\eta)\\&\propto\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\eta).\end{aligned}\tag{37}$$

Recall that $u_1=(\mathbf{y}_1-\mu_1)^{\top}\mathbf{\Sigma}_{11}^{-1}(\mathbf{y}_1-\mu_1)$. We normalize the distribution by

$$c_{N_{1},\,\eta}^{-1}=\int_{0}^{\infty}\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\eta)\,d\xi.\tag{38}$$

The conditional mixing distribution is

$$p(\xi|\mathbf{y}_{1};\,\eta)=c_{N_{1},\eta}\,\xi^{-N_{1}/2}\exp\left\{-\frac{u_{1}}{2\xi}\right\}p(\xi;\,\eta).\tag{39}$$

The conditional distribution of y2 given y1 is derived by using the consistency formula

$$p(\mathbf{y}_{2}|\mathbf{y}_{1})=\frac{1}{|\mathbf{\Sigma}_{22|1}|^{1/2}(2\pi)^{N_{2}/2}}\int_{0}^{\infty}\xi^{-N_{2}/2}\exp\left\{-\frac{u_{2|1}}{2\xi}\right\}p(\xi|\mathbf{y}_{1})\,d\xi,\tag{40}$$

where $u_{2|1}=(\mathbf{y}_2-\mu_{2|1})^{\top}\mathbf{\Sigma}_{22|1}^{-1}(\mathbf{y}_2-\mu_{2|1})$. Using (39) we get

$$p(\mathbf{y}_{2}|\mathbf{y}_{1})=\frac{c_{N_{1},\,\eta}}{|\mathbf{\Sigma}_{22|1}|^{1/2}(2\pi)^{N_{2}/2}}\int_{0}^{\infty}\xi^{-N/2}e^{-(u_{2|1}+u_{1})/(2\xi)}\,p(\xi;\,\eta)\,d\xi.\tag{41}$$

## C Derivation Of The Confidence Regions Of The Elliptical Process

We derive the confidence region of the elliptical process by using a Monte Carlo approximation of the integral:

$$\begin{aligned}p(-z\sigma<x<z\sigma)&=\frac{1}{\sigma\sqrt{2\pi}}\int_{-z\sigma}^{z\sigma}\int_{0}^{\infty}\xi^{-1/2}e^{-x^{2}/(2\xi\sigma^{2})}p(\xi)\,d\xi\,dx&(42)\\&\approx\frac{1}{\sigma\sqrt{2\pi}}\int_{-z\sigma}^{z\sigma}\frac{1}{m}\sum_{i=1}^{m}\xi_{i}^{-1/2}e^{-x^{2}/(2\xi_{i}\sigma^{2})}\,dx&(43)\\&=\frac{1}{\sigma m\sqrt{2\pi}}\sum_{i=1}^{m}\xi_{i}^{-1/2}\int_{-z\sigma}^{z\sigma}e^{-x^{2}/(2\xi_{i}\sigma^{2})}\,dx&(44)\\&=\frac{2}{m\sqrt{\pi}}\sum_{i=1}^{m}\int_{0}^{z/\sqrt{2\xi_{i}}}e^{-u^{2}}\,du&(45)\\&=\frac{1}{m}\sum_{i=1}^{m}\operatorname{erf}\left(\frac{z}{\sqrt{2\xi_{i}}}\right).&(46)\end{aligned}$$

For every mixing distribution we can derive the confidence of the prediction. The number of Monte Carlo samples m determines the accuracy of the confidence estimate.
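As a small, hedged sketch of this Monte Carlo estimate (the erf average in (46)): the snippet below assumes SciPy for erf and uses a scaled inverse chi-square mixing purely as an example; neither choice is prescribed by the paper.

```python
import numpy as np
from scipy.special import erf

def confidence_level(z, xi_samples):
    """Monte Carlo estimate of p(-z*sigma < x < z*sigma), eq. (46):
    (1/m) * sum_i erf(z / sqrt(2 * xi_i))."""
    return float(np.mean(erf(z / np.sqrt(2.0 * xi_samples))))

rng = np.random.default_rng(0)
nu = 4.0
xi = nu / rng.chisquare(nu, size=100_000)  # example mixing: scaled inverse chi-square
print(confidence_level(2.0, xi))           # < 0.954: heavier tails than a Gaussian
print(confidence_level(2.0, np.ones(1)))   # Dirac mixing: erf(2/sqrt(2)) ~ 0.9545
```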
## D Training The Elliptical Process

For a Gaussian process, the posterior of the latent variables f is

$$p(\mathbf{f}|\mathbf{y})\propto p(\mathbf{y}|\mathbf{f})\,p(\mathbf{f}).\tag{47}$$

Here, the prior p(f|x) ∼ N(0, K) is a Gaussian process with kernel matrix K, and the likelihood p(y|x, f) ∼ N(f, σ²I) is Gaussian. The posterior is then

$$p(\mathbf{f}|\mathbf{y})\sim\mathcal{N}\left(\mathbf{f}\,\middle|\,\mathbf{K}\left(\mathbf{K}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y},\left(\mathbf{K}^{-1}+\sigma^{-2}\mathbf{I}\right)^{-1}\right)\tag{48}$$

and we can derive the predictive distribution at an arbitrary input location x* by

$$p(f^{*}|\mathbf{y})=\int p(f_{*}|\mathbf{f})\,p(\mathbf{f}|\mathbf{y})\,d\mathbf{f},\tag{49}$$

where p(f*|f, x, x*) is the conditional distribution, which is again Gaussian:

$$\mathcal{N}\left(f_{*}\,\middle|\,\mathbf{k}_{*}^{\top}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{y},\;k_{**}-\mathbf{k}_{*}^{\top}(\mathbf{K}+\sigma^{2}\mathbf{I})^{-1}\mathbf{k}_{*}\right).\tag{50}$$

We want to derive the predictive distribution for the elliptical process, but the problem is that the posterior is intractable. In order to get a tractable posterior, we train the model using variational inference, where we approximate the intractable posterior with a tractable one,

$$p(\mathbf{f},\xi,\omega|\mathbf{y};\,\eta)\approx q(\mathbf{f},\xi,\omega;\,\varphi)=q(\mathbf{f}|\xi;\,\varphi_{f})\,q(\xi;\,\varphi_{\xi})\,q(\omega;\,\varphi_{\omega}).\tag{51}$$

Here, q(f|ξ; φf) ∼ N(mf, Sf ξ), where mf and Sf are variational parameters, and q(ξ; φξ) and q(ω; φω) are parameterized with any positive distribution, such as a normalizing flow. We use this approximation when we derive the predictive distribution

$$\begin{aligned}p(f^{*}|\mathbf{y})&=\int p(f_{*}|\mathbf{f},\xi;\,\eta)\,p(\mathbf{f},\xi|\mathbf{y};\,\eta)\,d\mathbf{f}\,d\xi&(52)\\&=\int p(f_{*}|\mathbf{f},\xi;\,\eta_{f})\,p(\mathbf{f},\xi|\mathbf{y};\,\eta)\,d\mathbf{f}\,d\xi&(53)\\&\approx\int p(f_{*}|\mathbf{f},\xi;\,\eta_{f})\,q(\mathbf{f}|\xi;\,\varphi_{f})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\xi.&(54)\end{aligned}$$

If we first take a look at the prior distribution p(f*, f|ξ) for a fixed ξ, which is

$$\begin{bmatrix}f^{*}\\ \mathbf{f}\end{bmatrix}\Bigg|\,\xi\sim\mathcal{N}\left(0,\begin{bmatrix}k_{**}&\mathbf{k}_{*}^{\top}\\ \mathbf{k}_{*}&\mathbf{K}\end{bmatrix}\xi\right),\tag{56}$$

we get the conditional distribution

$$\begin{aligned}p(f^{*}|\mathbf{f},\xi;\,\eta)&=\mathcal{N}\left(\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{f},\left(k_{**}-\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{k}_{*}\right)\xi\right)&(57)\\&=\mathcal{N}\left(\mathbf{a}^{\top}\mathbf{f},\,b\xi\right).&(58)\end{aligned}$$

Here, $\mathbf{a}^{\top}=\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}$ and $b=k_{**}-\mathbf{k}_{*}^{\top}\mathbf{K}^{-1}\mathbf{k}_{*}$. We use this expression and the variational approximation to derive the posterior predictive distribution:

$$\begin{aligned}p(f^{*}|\mathbf{y})&=\int p(f_{*}|\mathbf{f},\xi;\,\eta)\,q(\mathbf{f}|\xi;\,\varphi_{f})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\xi&(59)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\int p(f_{*}|\mathbf{f},\xi)\,q(\mathbf{f}|\xi;\,\varphi_{f})\,d\mathbf{f}\right]&(60)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\int\mathcal{N}\left(f_{*}\,\middle|\,\mathbf{a}^{\top}\mathbf{f},\,b\xi\right)\mathcal{N}(\mathbf{f}|\mathbf{m},\mathbf{S}\xi)\,d\mathbf{f}\right]&(61)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\mathcal{N}\left(f_{*}\,\middle|\,\mathbf{a}^{\top}\mathbf{m},\,\mathbf{a}^{\top}\mathbf{S}\mathbf{a}\,\xi+b\xi\right)\right]&(62)\\&=\mathbb{E}_{q(\xi;\,\varphi_{\xi})}\left[\mathcal{N}(f_{*}|m_{*},\,s_{*}\xi)\right],&(63)\end{aligned}$$

where

$$m_{*}=\mathbf{a}^{\top}\mathbf{m}\tag{64}$$
$$s_{*}=\mathbf{a}^{\top}\mathbf{S}\mathbf{a}+b,\tag{65}$$

and we get the covariance from E[ξ]s*.

## Optimizing The ELBO

We train the model by optimizing the evidence lower bound (ELBO), given by

$$\mathcal{L}(\varphi,\eta)=\mathbb{E}_{q(\mathbf{f}|\xi;\,\varphi_{f})q(\xi;\,\varphi_{\xi})q(\omega;\,\varphi_{\omega})}\big[\log p(\mathbf{y},\mathbf{f},\xi,\omega;\,\eta)-\log\big(q(\mathbf{f}|\xi;\,\varphi_{f})\,q(\xi;\,\varphi_{\xi})\,q(\omega;\,\varphi_{\omega})\big)\big].\tag{66}$$

The model is implemented in Pyro (Bingham et al., 2018); see Section F for details.

## E Sparse Elliptical Processes

With the variational inference framework we can create a sparse version of the model,

$$\int p(\mathbf{f},\mathbf{u},\xi;\,\eta)\,d\xi=\int p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{f})\,p(\mathbf{u}|\xi;\,\eta_{u})\,p(\xi;\,\eta_{\xi})\,d\xi,\tag{67}$$

where u are outputs of the elliptical process located at the inducing inputs Xu.
We approximate the posterior with

$$p(\mathbf{f},\mathbf{u},\xi|\mathbf{y};\,\eta)\approx p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{f})\,q(\mathbf{u}|\xi;\,\varphi_{u})\,q(\xi;\,\varphi_{\xi}).\tag{68}$$

The posterior predictive distribution is given by

$$\begin{aligned}p(f^{*}|\mathbf{y})&=\int p(f_{*}|\mathbf{f},\mathbf{u},\xi;\,\eta)\,p(\mathbf{f},\mathbf{u},\xi|\mathbf{y};\,\eta)\,d\mathbf{f}\,d\mathbf{u}\,d\xi\\&\approx\int p(f_{*}|\mathbf{f},\mathbf{u},\xi;\,\eta)\,p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{f})\,q(\mathbf{u}|\xi;\,\varphi_{u})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{f}\,d\mathbf{u}\,d\xi\\&=\int\left[\int p(f_{*}|\mathbf{f},\mathbf{u},\xi;\,\eta)\,p(\mathbf{f}|\mathbf{u},\xi;\,\eta_{f})\,d\mathbf{f}\right]q(\mathbf{u}|\xi;\,\varphi_{u})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{u}\,d\xi.\end{aligned}\tag{69}$$

We can simplify the inner expression by using the fact that the elliptical distribution is consistent:

$$\int p(f_{*}|\mathbf{f},\mathbf{u},\xi;\,\eta)\,p(\mathbf{f}|\mathbf{u},\xi;\,\eta)\,d\mathbf{f}=\int p(f_{*},\mathbf{f}|\mathbf{u},\xi;\,\eta)\,d\mathbf{f}=p(f_{*}|\mathbf{u},\xi;\,\eta).\tag{70}$$

Hence, Equation (69) simplifies to

$$p(f^{*}|\mathbf{y})=\int p(f_{*}|\mathbf{u},\xi;\,\eta)\,q(\mathbf{u}|\xi;\,\varphi_{u})\,q(\xi;\,\varphi_{\xi})\,d\mathbf{u}\,d\xi,\tag{71}$$

where q(u|ξ; φu) = N(mu, Suξ) with the variational parameters mu and Su, and ξ is parameterized, e.g., by a normalizing flow. Finally, we obtain the posterior $p(f^*|\mathbf{x}^*)=\mathbb{E}_{q(\xi;\varphi_{\xi})}[\mathcal{N}(f_*|\mu_f(\mathbf{x}^*),\sigma_f(\mathbf{x}^*)\xi)]$, where

$$\mu_{f}(\mathbf{x}_{n})=\mathbf{k}_{n}^{\top}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{m}\tag{72}$$
$$\sigma_{f}(\mathbf{x}_{n})=k_{nn}-\mathbf{k}_{n}^{\top}\left(\mathbf{K}_{\mathbf{uu}}^{-1}-\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{S}\mathbf{K}_{\mathbf{uu}}^{-1}\right)\mathbf{k}_{n}.\tag{73}$$

Here kn = k(xn, Xu), knn = k(xn, xn), and Kuu = k(Xu, Xu).

## F Implementation: Variational Inference

We used the Pyro library (Bingham et al., 2018), which is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. In Pyro, we trained a model with variational inference (Kingma & Welling, 2013) by creating "stochastic functions" called a **model** and a **guide**, where the **model** samples from the prior latent distributions p(f, ξ, ω; η) and the observed distribution p(y|f, ω), and the **guide** samples the approximate posterior q(f|ξ; φf) q(ξ; φξ) q(ω; φω). We then trained the model by optimizing the ELBO, where we simultaneously optimized the model parameters η and the variational parameters φ. (See https://pyro.ai/examples/svi_part_i.html for more details.)

To implement the model in Pyro, we created the guide and the model (see Algorithms 1 and 2), which we did by building upon the already implemented variational Gaussian process. We used the guide and the model to derive the evidence lower bound (ELBO), which we then optimized with stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2015). We used Pyro's already implemented rational linear spline flow for the normalizing flow.
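To illustrate the model/guide pattern described above, here is a minimal, self-contained Pyro sketch of the non-sparse case. It substitutes a log-normal for the spline-flow mixing distribution and fixes the kernel and noise level for brevity; these choices, and all names, are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

def rbf(X1, X2, ls=1.0):
    return torch.exp(-0.5 * torch.cdist(X1, X2) ** 2 / ls**2)

def model(X, y):
    K = rbf(X, X) + 1e-4 * torch.eye(len(X))
    xi = pyro.sample("xi", dist.LogNormal(0.0, 1.0))  # prior on the mixing variable
    f = pyro.sample("f", dist.MultivariateNormal(torch.zeros(len(X)), K * xi))
    with pyro.plate("data", len(X)):
        pyro.sample("y", dist.Normal(f, 0.1), obs=y)  # fixed noise for simplicity

def guide(X, y):
    n = len(X)
    loc = pyro.param("xi_loc", torch.tensor(0.0))
    scale = pyro.param("xi_scale", torch.tensor(1.0), constraint=constraints.positive)
    xi = pyro.sample("xi", dist.LogNormal(loc, scale))  # q(xi; phi_xi)
    m = pyro.param("m", torch.zeros(n))
    L = pyro.param("L", torch.eye(n), constraint=constraints.lower_cholesky)
    # q(f | xi) = N(m, S * xi) with S = L L^T, matching the paper's factorization
    pyro.sample("f", dist.MultivariateNormal(m, scale_tril=L * xi.sqrt()))

X = torch.linspace(-2, 2, 30).unsqueeze(-1)
y = torch.sin(3 * X.squeeze()) + 0.1 * torch.randn(30)
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step(X, y)
```

Pyro's SVI object optimizes the negative ELBO jointly over the variational parameters registered via pyro.param and any learnable model hyperparameters.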
Algorithm 1 PyTorch implementation of the variational sparse elliptical process (VI-EP-EP).

1: **procedure** model(X, y)
2: Sample ξ from p(ξ; ηξ) (normalizing flow)
3: Sample u from N(0, ξKuu) ▷ Take a sample from the latents u, ξ, and ω
4: Derive the variational posterior $\prod_{n=1}^{N}q(f_n|\xi;\,\varphi)=\mathcal{N}(\mu_f(\mathbf{x}_n),\sigma_f(\mathbf{x}_n)\xi)$. ▷ During training ξ is sampled from the posterior/guide.
5: Take a Monte Carlo sample $\hat f_n$ from each $q(f_n|\xi;\,\varphi)$
6: For each xn, sample ζn from N(0, 1)
7: Derive ωn = T(ζn; ηω)
8: Derive the log probability of $\prod_{n=1}^{N}\mathcal{N}(y_n|\hat f_n,\omega_n)$ ▷ During training ωn is sampled from the posterior/guide.
9: **end procedure**
10: **procedure** guide
11: Sample ξ from q(ξ; φξ) (normalizing flow)
12: Sample u from N(m, Sξ)
13: For each xn, sample ζn from N(µζ((yn − fn)²), σζ((yn − fn)²)).
14: **end procedure**

Algorithm 2 PyTorch implementation of the variational sparse parametric elliptical process (PP-EP-EP).

1: **procedure** model(X, y)
2: Sample ξ from p(ξ; ηξ) (normalizing flow)
3: Sample u from N(0, ξKuu) ▷ Take a sample from the latents u, ξ, and ω
4: Derive the variational posterior $\prod_{n=1}^{N}q(f_n|\xi;\,\varphi)=\mathcal{N}(\mu_f(\mathbf{x}_n),\sigma_f(\mathbf{x}_n)\xi)$. ▷ During training ξ is sampled from the posterior/guide.
5: For each xn, sample ζn from N(0, 1)
6: Derive ωn = T(ζn; ηω)
7: Derive the log probability of $\prod_{n=1}^{N}\mathcal{N}(y_n|\mu_f(\mathbf{x}_n),\sigma_f(\mathbf{x}_n)\xi+\omega_n)$ ▷ During training ωn is sampled from the posterior/guide.
8: **end procedure**
9: **procedure** guide
10: Sample ξ from q(ξ; φξ) (normalizing flow)
11: Sample u from N(m, Sξ)
12: For each xn, sample ζn from N(µζ(zn), σζ(zn)), where zn = [(yn − µf(xn))², σf(xn)].
13: **end procedure**

## G Regression Experiment Setup

In the regression experiments in Section 4.2, we ran all experiments using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.003 that was sequentially decreased during training. For all experiments, we created ten random train/val/test splits with the proportions 0.75/0.1/0.15, except for the two smallest datasets (machine and mpg), where we instead evaluated the model on the training data (the split proportions were train/test = 0.75/0.25). We used the model with the lowest predictive probability on the validation set. For the large datasets (n > 1000), we used 500 inducing points and a batch size of 1000. For the small datasets, we used 100 inducing points and no batching. We ran the training for 250 epochs on the large datasets and 5000 epochs on the small datasets. For the full GP, we used a learning rate of 0.01, which we decreased during training. For the large datasets (n > 1000), we trained the full GP on a single split.

Elliptical process setup. The likelihood mixing distribution uses a spline flow with 9 bins and a *Softplus* output transformation. The elliptical posterior mixing distribution uses a spline flow with 5 bins and a *Sigmoid* output transformation. The reason we use a *Sigmoid* for the process is that we want to regularize it more, since we hypothesize it is more difficult to learn. The posterior likelihood mixing distribution uses a two-layer neural network with 32 hidden dimensions.

## H Results

The regression results from Figure 7 are presented in Tables 2 and 3.

Table 2: Predictive mean square error (MSE) on the held-out sets from the experiments. We show the average of the ten runs and one standard deviation in parentheses.
| | Machine | MPG | Concrete | Elevators | California | Kin40k | Protein |
|-----------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| PP-EP-EP | 0.472 (0.516) | 0.111 (0.029) | 0.131 (0.025) | 0.166 (0.006) | 0.230 (0.013) | 0.106 (0.004) | 0.506 (0.007) |
| PP-EP-GP | 0.503 (0.478) | 0.121 (0.027) | 0.123 (0.009) | 0.170 (0.006) | 0.237 (0.013) | 0.139 (0.003) | 0.533 (0.012) |
| PP-GP-GP | 0.395 (0.304) | 0.125 (0.019) | 0.154 (0.015) | 0.173 (0.006) | 0.249 (0.013) | 0.176 (0.006) | 0.575 (0.006) |
| VI-EP-EP | 0.580 (0.518) | 0.108 (0.022) | 0.134 (0.024) | 0.155 (0.005) | 0.255 (0.015) | 0.154 (0.003) | 0.651 (0.009) |
| VI-EP-GP | 0.274 (0.460) | 0.117 (0.025) | 0.122 (0.013) | 0.160 (0.005) | 0.261 (0.015) | 0.152 (0.004) | 0.647 (0.013) |
| VI-GP-GP | 0.261 (0.219) | 0.121 (0.027) | 0.119 (0.015) | 0.157 (0.005) | 0.251 (0.011) | 0.170 (0.003) | 0.652 (0.013) |
| Exact GP | 0.417 (0.417) | 0.123 (0.123) | 0.098 (0.098) | 0.175 (0.175) | 0.227 (0.227) | | |

Table 3: Negative log likelihood on the held-out sets from the experiments. We show the average of the ten runs and one standard deviation in parentheses.

| | Machine | MPG | Concrete | Elevators | California | Kin40k | Protein |
|----------|-----------------|---------------|---------------|---------------|---------------|---------------|---------------|
| PP-EP-EP | -0.053 (0.215) | 0.168 (0.076) | 0.247 (0.075) | 0.393 (0.012) | 0.453 (0.015) | 0.247 (0.010) | 0.975 (0.007) |
| PP-EP-GP | -0.206 (0.315) | 0.215 (0.087) | 0.180 (0.062) | 0.397 (0.011) | 0.464 (0.016) | 0.312 (0.005) | 1.006 (0.010) |
| PP-GP-GP | -0.403 (0.245) | 0.220 (0.066) | 0.359 (0.050) | 0.426 (0.009) | 0.554 (0.017) | 0.387 (0.007) | 1.060 (0.004) |
| VI-EP-EP | 0.237 (0.559) | 0.240 (0.082) | 0.379 (0.085) | 0.466 (0.011) | 0.618 (0.020) | 0.527 (0.010) | 1.209 (0.006) |
| VI-EP-GP | -0.134 (0.250) | 0.265 (0.066) | 0.340 (0.042) | 0.481 (0.005) | 0.633 (0.038) | 0.558 (0.009) | 1.207 (0.009) |
| VI-GP-GP | -0.063 (0.259) | 0.339 (0.091) | 0.339 (0.052) | 0.491 (0.010) | 0.730 (0.018) | 0.608 (0.005) | 1.210 (0.009) |
| Exact GP | -0.135 (-0.135) | 0.438 (0.438) | 0.180 (0.180) | 0.520 (0.520) | 0.771 (0.771) | | |

## I Datasets

The Elevators dataset (Dua & Graff, 2017) is obtained from the task of controlling an F16 aircraft; the objective is related to an action taken on the elevators of the aircraft according to the status attributes of the airplane.

The Physicochemical Properties of Protein Tertiary Structure dataset is taken from CASP 5-9. There are 45,730 decoys, with sizes varying from 0 to 21 angstroms.

The California Housing dataset was originally published by Pace & Barry (1997). There are 20,640 samples and 9 feature variables in this dataset. The targets are prices of houses in the California area.

The Concrete dataset (Yeh, 1998) has 8 input variables and 1,030 observations. The target variable is the concrete compressive strength.

The Machine CPU dataset (Kibler et al., 1989) has the relative performance of the CPU as its target value. The dataset consists of 209 samples with nine attributes.

The Auto MPG dataset (Alcalá-Fdez et al., 2011) is originally from the StatLib library, which is maintained at Carnegie Mellon University. The data concern city-cycle fuel consumption in miles per gallon and consist of 392 samples with five features each.

The Pima Indians Diabetes Database (Smith et al., 1988) is originally from the National Institute of Diabetes and Digestive and Kidney Diseases.
The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. The dataset consists of 768 samples with eight attributes.

The Cleveland Heart Disease dataset consists of 13 input variables and 270 samples. The target classifies whether a person is suffering from heart disease or not.

The Mammographic Mass dataset predicts the severity (benign or malignant) of a mammographic mass lesion from BI-RADS attributes and the patient's age. This dataset consists of 961 samples with six attributes.
Review 1:

Summary: The purpose of the paper is to demonstrate efficient large-scale modeling and inference with elliptical distributions and elliptical processes. The main modeling insight is that a consistent elliptical distribution/process is a scale mixture of Gaussians; so, the conditional distributions are Gaussian (conditioned on the mixing variable, that is). This insight is leveraged by parameterizing the mixing distribution as a normalizing flow and using variational inference over the resulting family to form approximate posteriors. Large-scale inference is accomplished by inducing variables (as done in sparse Gaussian processes). A heteroscedastic noise model with input-dependent normalizing flow parameter(s) is also described. Regression and classification results on synthetic and real datasets are provided.

Strengths and Weaknesses:

Strengths:

* Since the elliptical distribution subsumes many other thin- and heavy-tailed distributions, efficient and algorithmically robust/stable inference in this family is a very attractive proposition. The conditionalization property highlighted is nicely leveraged by the proposed variational inference approach. That it also leads directly to a sparse formulation is kind of neat too! The paper hints at a strong foundation here.
* The paper's exposition was fairly clear: laid out well and mostly easy to follow.

Weaknesses:

* The biggest weakness is the lack of compelling evidence to use the proposed model and inference approach. First, the reported results in Table 1 do not show a strong, statistically significant performance advantage of the proposed method over a standard GP (w.r.t. reported error bars). The classification results are reported in a per-fold manner and show maybe a slight advantage for the proposed method, but not a significant one. Second, there are no competitors included in the experimental results, e.g., sparse variational GPs, full GPs with variational approximation, etc. If the heavy-tail modeling capability of elliptical distributions was to be spotlighted, an application in robust regression with real data would have been expected somewhere. Third, there are virtually no experiments on medium-to-large datasets. With the exception of the California dataset, all of the datasets are small (sample size 1030 or less). A sparse approximation, as utilized in the experimental section, should not be required in these cases (unless I am misunderstanding something about the proposed model and inference approach).
* A second weakness (less substantial than the previous) is the lack of any discussion on how difficult or easy hyperparameter selection with variational elliptical processes is. For example, a standard issue with sparse variational GPs with non-Gaussian likelihoods is how finicky hyperparameter optimization vs. variational parameter optimization can be. Another issue that a reader may be curious about is how stable variational EM is in this scenario when modeling the mixing distribution with a normalizing flow.
* There were two confusing points in the exposition regarding ergodicity and the conditional distribution that hindered understanding a little.

Requested Changes:

Critical:

* Please include some baselines in the experimental section. There are many to choose from. The paper also needs to include similar experiments on large datasets. For example, "Parametric Gaussian Process Regressors" (2020) by Jankowiak et al. lists numerous medium-to-large datasets that could be utilized.
The synthetic heteroscedastic experiment was cool, but perhaps the proposed method could be compared, e.g., to Table 2 in "Variational Heteroscedastic Gaussian Process Regression"? Also, why not compare how variational elliptical processes do in robust regression, as in "Robust Gaussian Process Regression with a Student-t Likelihood"? To be clear, I'm not suggesting running all the baselines above; I'm suggesting running the proposed method in the same experimental settings as those baselines and comparing against published results.

* Please provide some discussion of the practical usability of the proposed method. How does one select the normalizing flow parameterization for a novel dataset? Training time comparisons to baselines would be great to include in the supplement or experiments.

Non-critical:

* Could the authors clear up the confusing points? One was a brief section on ergodicity. I believe the paper was just stating that the mixing distribution cannot be learned from data that represents a single draw of an elliptical process. However, the section seemed to imply that either mixture processes or non-stationary processes (or both?) cannot be estimated, which would be helped by an illustrative example or two. Gaussian processes with non-stationary kernels are routinely used, and estimation in a mixture model is essentially Type 2 ML. What is the effect of this point of ergodicity on the experimental conclusions? Perhaps I am missing something obvious here? The second confusing point was the statement on the first half of page 4: "The conditional distribution is guaranteed to be a consistent elliptical distribution but not necessarily the same as the original one..." How does this issue specifically manifest in inference? Could an example be provided?
* Describe how the predictive mean (RHS of (16), (20)) is calculated for the experiments.

Broader Impact Concerns: No impact statement was provided, but I don't see any broader impact concerns with this work.

==================================================

Review 2:

Summary: The paper proposes *elliptical processes*, a new family of probabilistic stochastic processes on top of Gaussian processes (GPs) and Student-t processes (SPs), whose underlying mechanism is based on the elliptical distribution. For performing inference over such processes, the paper develops approximate methods and uses modern density estimators, in particular variational inference and normalizing flows with spline mappings. To achieve both inference and predictive tasks, the methodology faces several issues concerning the tractability of integrals, which in the end are circumvented via approximations. The paper also proposes two extensions, for sparse approximations (to scale up the model) and *heteroscedastic* noise. Experimental results show evidence of performance in regression, classification and *heteroscedastic* tasks.

Strengths and Weaknesses:

**Strengths**

> In my opinion, the paper tackles an interesting idea and is in general headed in a good direction. The authors seem to know the literature and existing approaches well, and the spirit is somewhat similar to Shah (ICML 2014). This last point is also positive to me. The construction of the whole approach based on the elliptical distribution seems technically correct, and I did not miss any particular detail in the first part of the manuscript. Despite some small inconsistencies in the notation, the formulation is somewhat clear and well introduced.
The decisions taken for inference and the methods used are meaningful and similar to the strategies taken in the state of the art, which makes the approach reliable. In the experiments, the authors positively address three different tasks to characterise the performance of the method.

**Weaknesses**

> Even considering that the paper is well-written and thorough in many aspects, there is a general doubt that comes to mind when reading the paper. In the original Student-t paper of Shah (ICML 2014), the idea was to explore alternatives that circumvent the limitation of *Gaussianity* (a lot of probability mass around the mean, but tails quickly going to zero as the Euclidean distance grows). The idea of considering Student-t processes was interesting back then because one could obtain posterior densities with tighter or heavier tails depending on which type of process one wants. In the end, this translated to more certainty, or at least that was my impression. On the other hand, the pursuit of heteroscedastic GPs is also of interest in the community, since one might want different noise levels depending on the input region. However, I detect a general satisfaction in the paper with having heavier and heavier tails, which, in the end, looks to me like having more uncertainty in the model. I say this because I do not fully understand the motivation behind this search, or at least I would like to understand it better. This might come from a bit of a lack of motivation on the application side, e.g., what is the specific scenario where a heavy-tailed process beats the GP? Another example of this can be found in the second paragraph of the Conclusion, where the authors indicate

```
"The variational approximation we propose enables us to capture heavy-tailed posteriors and makes it straightforward to create a sparse variational elliptical process that scales to large datasets"
```

This is a bit odd to me, since one generally desires a posterior which is more certain, with as little probability mass away from its mean as possible. Claiming a posterior distribution with heavier tails seems to go in the opposite direction.

Another aspect that concerns me is the presentation of the methodology in **Section 3**. I think this could be improved significantly, at least to make it clearer why the decisions are taken. In particular, I do not understand the modelling of the likelihood noise $\epsilon_n$ in the likelihood subsection instead of the output data points $y_n$. Later, the normalising flow is introduced here, which makes things a bit more difficult to follow.

The last point of weakness, to me, is that I find the experimental results somewhat limited. In particular, the toy regression task seems to be too simple, with the sinusoidal function and only N=300 datapoints. Also, I would love to find a clear result that motivates the search for heavier tails in the stochastic process, but somehow I do not see it. Additionally, the $\mathcal{EP}^2$ model used in the following regression experiments looks unclear to me, as it is indicated to use both a GP prior and a GP posterior. Why is it an $\mathcal{EP}$ then? In general, even if the experiments look to be in a good direction, addressing three tasks, they do not give an idea of the computational cost or, for example, the difficulty of fitting the model. This could be improved quite a lot.

Requested Changes: To me, the manuscript would have to be improved on three main points:

**Change 1**: Introduce, motivate and explain better why heavier tails are desired and why the reader should be interested in them.
Connect that point with uncertainty and with the properties one desires to achieve when having both an elliptical process and an elliptical likelihood model on top. Perhaps it is too much, but if the EP is a general extension of GPs and SPs, it would be nice to have more references to this connection and what it implies.

**Change 2**: Section 3 should be re-written to be clearer and cleaner in its message, at least to make it clear why decisions are taken and why the likelihood is on the likelihood noise. And why, for instance, we observe this noise (I think the authors mention something like this). In its current state, this is not clear enough.

**Change 3**: I would suggest improving the experiments a bit, at least with toy experiments where it is very clear that the EP is doing what the authors claim (capturing an underlying process with heavier tails). Also, I would add more details and results on the computational cost and perhaps on the sparse extension that is mentioned. The classification results on AUC are a bit odd for the GP community, so extra metrics or visual results would also help. In general, there is quite a lot of room for improvement in the experiments, which I consider would be good for the manuscript.

Broader Impact Concerns: I do not detect particular ethical implications of the work. A slight detail would be around the claim that heavier-tailed posteriors are of interest for some applications. This is perhaps odd, as we generally want systems to be more certain on their tasks for safety reasons. I think it would be helpful to have some comments on the relationship between heavier tails and uncertainty.

==================================================

Review 3:

Summary: This paper claims to:

- construct elliptical processes as a distribution on functions,
- build heteroskedastic likelihood functions based on elliptically distributed noise, and
- perform inducing-point-based variational inference.

The novelty in the construction of elliptical processes is the flexible parameterisation of the distribution on the scale parameter $\xi$.

Strengths and Weaknesses: The paper is clearly written, with only a few passages where it would be nice to have more details. While elliptical processes have been studied before (see the unpublished preprint https://arxiv.org/abs/2003.00720), the paper does contain some novelties, in particular the flexible distributions on scale parameters and the variational approximations.

For this paper to be useful, the experiments need to really clearly show the situations in which this methodology is useful. This is where the paper is currently lacking, and I believe this needs to be improved before the paper can be useful to the ML community.

§4.1 contains a useful synthetic experiment showing that heavy-tailed noise can be dealt with well. This could be improved by including an example of how a GP with the typical Gaussian noise behaves poorly.

§4.2 contains some real datasets, on which EPs are illustrated. The datasets are small, although this is not a bad thing per se. Working with small datasets is a sensible research question. The main issue is that the experiments are superficial, in that they only provide performance metrics, without providing clear ablations of the different components of the proposed method. I can see that there was some attempt to do this, based on the two EP models that were introduced. However, it is not clear what questions these comparisons are trying to address. In addition, it seems strange that EP^2 uses both a GP prior and posterior.
Why call this an EP? Is this a typo? This section really should be a key support of the paper. To improve this section, I suggest addressing the following questions:

1. Does an EP prior on the latent function improve performance, as compared to a GP? The comparison really should be to a _full GP_, without any inducing point approximations, to ensure that _model_ assumptions are tested, not any error that is introduced by the approximation. All datasets (even the 20k one) can be run using a full Cholesky decomposition on modern desktop machines. Using 20 or 50 inducing points is not enough.
2. Does an elliptical noise distribution help with prediction? For this you can compare a GP with Gaussian noise to a GP with elliptical noise.
3. How much do an EP and elliptical noise together further improve performance?

Extending the range of datasets to include more of the UCI datasets will also be helpful.

Similar issues surrounding a lack of inducing points also hold in the classification settings of §4.3. In addition, it is rather unusual to provide the result for each fold in a run of cross-validation, as is done in Fig. 5. Would it not be better to present the results in a table, with the averaged results being provided? It is perhaps likely that the differences won't look very large after averaging. Perhaps this is not very surprising in classification, as an EP effectively only performs Bayesian inference over the scale parameter, which classification isn't very sensitive to. It is sensitive in cases where data is scarce and there are somewhat overlapping classes, which could be the case in the datasets that are investigated.

Requested Changes:

## Citations

There are a few strange citations, which I would suggest be changed:

- Page 1: Tran et al 2016. The proposed variational elliptical process method uses approximate posteriors in a way almost exactly the same as in regular approximate GPs (e.g. Hensman et al 2015, which is cited, and Hensman et al 2013, GPs for big data, which isn't cited). This is a very different setting to the Tran et al 2016 paper, which aims somehow to use a GP in a variational distribution to parameterise more complex distributions. I think this citation is not appropriate here.
- Page 1 and elsewhere: Student's t processes were discussed significantly before Shah et al 2014. See §9.9 in _Gaussian Processes for Machine Learning_ by Rasmussen & Williams. A more comprehensive list of citations should be included.
- Page 6: It's mentioned that BBVI is used, and Wingate & Weber and Ranganath et al are cited. However, in the appendix it is stated that Pyro is used, and references are given to the reparameterisation trick, which is not used in the Wingate & Weber and Ranganath et al papers. These papers can be mentioned in related work on BBVI, but I would suggest that the method that is _actually_ used be cited in the main explanation of the method.
- There is one unpublished preprint on elliptical processes which is not cited: https://arxiv.org/abs/2003.00720. Since this is very relevant, I do think it should be cited.

## Experiments

I believe that the experiments need to be significantly expanded, in accordance with what is mentioned above. Particularly, the comparison to a full GP is important, because a) models should not be artificially limited in capacity when benchmarking, and b) the effect of the change in prior can then be investigated independently of effects from the approximation.

Broader Impact Concerns: None.
==================================================

Metareview:

Recommendation: Reject

Comment: Overall, I concur with the reviewers that the paper is well-written and clear, and presents nice ideas that I believe will be of interest to many working on Gaussian processes. While I am recommending reject due to the need for some additional experiments (as detailed below), I believe that the changes needed should be fairly straightforward, and I hope to see a resubmission soon.

While the experiments have been notably improved since the first submission, they still do not succeed at addressing the fundamental question of whether using an elliptical process is beneficial over a Gaussian process (when stripped of the various approximation techniques used both in this paper and in comparison methods), and what the impact of your approximations is. The comment from July 30 by Reviewer 9XiL lays out a proposed set of experiments to address this, which seems a manageable amount of additional work. I believe these results, if they do show a notable improvement over the GP, will really highlight the fundamental benefits of your approach, and will also highlight areas where future work could be beneficial. I agree with your proposed changes and would like to see these in the resubmitted manuscript, particularly the proposed new experiments on real-world and larger datasets.

Overall, I agree with the decision to remove the multi-path results (as the reviewers say, they detract from the main story, aren't really a standard use case of GPs, and are a little underexplored). I do think the discussions on identifiability are important though, and think it is fine to explain that identifiability can be achieved in a multi-path context.

I believe the paper is very close to acceptance, and I would hope to see a revised version accepted. The reason I am recommending reject rather than accept with minor revisions is the need to properly evaluate the approximation and compare with a standard GP. If these experiments do not show a key difference in performance over the GP, then an accept with minor revisions would prove inappropriate. However, if those experiments show the desired results, I do not think that the changes needed to the paper are too dramatic.

==================================================
# LoRA Learns Less and Forgets Less

Dan Biderman1,2, Jacob Portes2, Jose Javier Gonzalez Ortiz2, Mansheej Paul2, Philip Greengard1, Connor Jennings2, Daniel King2, Sam Havens2, Vitaliy Chiley2, Jonathan Frankle2, Cody Blakeney2, John P. Cunningham1

1**Columbia University** {db3236, pg2118, jpc2181}@columbia.edu
2**Databricks Mosaic Research** {jacob.portes, j.gonzalez, mansheej.paul, connor.jennings, daniel.king, sam.havens, vitaliy.chiley, jfrankle, cody.blakeney}@databricks.com

Reviewed on OpenReview: **https://openreview.net/forum?id=aloEru2qCG**

## Abstract

Low-Rank Adaptation (LoRA) is a widely-used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low-rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and full finetuning on two target domains, programming and mathematics. We consider both the instruction finetuning (≈100K prompt-response pairs) and continued pretraining (≈20B unstructured tokens) data regimes. Our results show that, in the standard low-rank settings, LoRA substantially underperforms full finetuning. Nevertheless, LoRA better maintains the base model's performance on tasks outside the target domain. We show that LoRA mitigates forgetting more than common regularization techniques such as weight decay and dropout; it also helps maintain more diverse generations. Finally, we show that full finetuning learns perturbations with a rank that is 10-100× greater than typical LoRA configurations, possibly explaining some of the reported gaps. We conclude by proposing best practices for finetuning with LoRA.

## 1 Introduction

Finetuning large language models (LLMs) with billions of weights requires a non-trivial amount of GPU memory. Parameter-efficient finetuning methods reduce the memory footprint during training by freezing a pretrained LLM and only training a small number of additional parameters, often called adapters. Low-Rank Adaptation (LoRA; Hu et al. (2021)) trains adapters that are low-rank perturbations to selected weight matrices.

LoRA is widely adopted for finetuning LLMs under hardware constraints, but the jury is still out on whether it compromises performance compared to full finetuning. The two seminal methods papers on the topic, introducing LoRA (Hu et al., 2021) and its more recent combination with model quantization (QLoRA; Dettmers et al. (2024)), reported that LoRA performs better than or equivalent to full finetuning. More empirical work (Ghosh et al., 2024; Zhao et al., 2024b) reaches a similar conclusion, and this sentiment is echoed in an array of industry blog posts as well (e.g., Raschka (2023); Niederfahrenhorst et al. (2023)). At the same time, there is evidence that LoRA underperforms full finetuning (Ivison et al., 2023; Zhuo et al., 2024), and the need to improve upon LoRA has led to the development of enhanced LoRA variants (Hayou et al., 2024; Meng et al., 2024; Li et al., 2023b; Shi et al., 2024) or alternative low-rank approximation methods (e.g., Liu et al. (2024); Zhao et al. (2024a)). To shed light on this ongoing debate, we ask: under which conditions does LoRA approximate full finetuning accuracy on challenging target domains, such as code and math?

By training fewer parameters, LoRA is hypothesized to constrain the finetuned model from diverging significantly from the base model (Sun et al., 2023; Du et al., 2024).
This potential characteristic is particularly helpful for LLM finetuning, a form of continual learning where specializing in new domains can come at the expense of base model capabilities (Wang et al., 2024) (a phenomenon known in its extreme form as "catastrophic forgetting"; McCloskey & Cohen (1989); French (1999)). To date, only a few studies have examined forgetting in modern LLMs (Kleiman et al., 2023; Kalajdzievski, 2024; Vu et al., 2022). To address this gap, **we also ask: when performing continual learning on a new domain, to what extent does LoRA mitigate forgetting of base model capabilities?**

In this study, we compare LoRA and full finetuning for Llama-2 7B models across two challenging target domains, code and mathematics. Within each domain, we explore two training regimes. The first regime is *continued pretraining*, which involves training on billions of unlabeled domain-specific tokens, most commonly via full finetuning; here we use the StarCoder-Python (Li et al., 2023a) and OpenWebMath (Paster et al., 2023) datasets (Table 1). The second is *instruction finetuning*, the common scenario for LoRA involving question-answer datasets with tens to hundreds of millions of tokens. Here, we use Magicoder-Evol-Instruct-110K (Wei et al., 2023) and MetaMathQA (Yu et al., 2023).

We evaluate target-domain performance (henceforth, *learning*) via challenging coding and math benchmarks (HumanEval; Chen et al. (2021), and GSM8K; Cobbe et al. (2021)). We evaluate source-domain *forgetting* performance on language understanding, world knowledge, and common-sense reasoning tasks (Zellers et al., 2019; Sakaguchi et al., 2019; Clark et al., 2018).

We find that with commonly used low-rank settings, LoRA substantially underperforms full finetuning, while typically requiring longer training (Sec. 4.1). In continued pretraining, the performance gap between full finetuning and LoRA is not closed even with high ranks. In instruction finetuning, on the other hand, high ranks can match full finetuning performance. Despite LoRA's limitations, we show that it consistently maintains better source-domain performance compared to full finetuning (Sec. 4.2). Furthermore, we characterize the tradeoff between learning and forgetting (Sec. 4.3). We then show that LoRA - even with higher rank - mitigates forgetting more aggressively than classic regularization techniques that aim to prevent overfitting, such as dropout (Srivastava et al., 2014; Goodfellow et al., 2013) and weight decay (Goodfellow et al., 2016). Moreover, by analyzing the generated solutions to HumanEval problems, we demonstrate that while full finetuning tends to produce a limited set of solutions, LoRA produces a wider range of solutions more akin to those of the base model (Sun et al., 2023; Du et al., 2024).

Why does LoRA underperform full finetuning? LoRA was originally motivated in part by the hypothesis that finetuning results in low-rank perturbations to the base model's weight matrix (Li et al., 2018; Aghajanyan et al., 2020; Hu et al., 2021). However, the tasks explored by these prior works are relatively easy for modern LLMs, and certainly easier than the coding and math domains studied here. Thus, we perform a singular value decomposition to show that full finetuning barely changes the spectrum of the base model's weight matrices, and yet the difference between the two (i.e., the perturbation) is high rank. The rank of the perturbation grows as training progresses, with ranks 10-100× higher than typical LoRA configurations (Figure 6).
We conclude by proposing best practices for training models with LoRA. We find that LoRA is very sensitive to hyperparameters, including learning rates, choice of target modules, ranks, and scaling factors; setting these properly is a prerequisite to approach full finetuning performance.

To summarize, we contribute the following results:

- Full finetuning is more accurate and sample-efficient than LoRA in CPT for code and math; in instruction finetuning, higher ranks can close most of the gaps (Sec. 4.1).
- LoRA forgets less of the source domain (Sec. 4.2 and 4.3).
- LoRA forgets less than common regularization techniques; it also helps maintain the diversity of generations (Sec. 4.5).
- Full finetuning finds high-rank weight perturbations (Sec. 4.6).
- A hyperparameter sensitivity analysis for LoRA, as well as practical recommendations (Sec. 4.7).

Model checkpoints and LoRA adapters can be accessed at https://github.com/danbider/lora-tradeoffs.

| | Code | Math |
|-----|------|------|
| CPT | StarCoder-Python (up to 20B tokens) | OpenWebMath (up to 20B tokens) |
| IFT | Magicoder-Evol-Instruct-110K (72.3M tokens) | MetaMathQA (103M tokens) |

Table 1: Datasets and token counts for math and code experiments

## 2 Background

LoRA involves freezing a pretrained weight matrix $W_{\text{pretrained}} \in \mathbb{R}^{d\times k}$, and learning only a low-rank perturbation to it, denoted here as ∆, as follows:

$$W_{\mathrm{finetuned}}=W_{\mathrm{pretrained}}+\Delta,$$
$$\Delta=\gamma_{r}AB,\quad A\in\mathbb{R}^{d\times r},\quad B\in\mathbb{R}^{r\times k}.$$

Most common implementations initialize $A_0 \sim \mathcal{N}(0, 1)$, $B_0 = 0$ and set the scalar $\gamma_r = \alpha/r$ with a controllable hyperparameter α. The user chooses which $W_{\text{pretrained}}$ to adapt ("target modules"), the rank $r \ll d, k$, and the hyperparameter α. By doing so, only $d \times r + r \times k$ parameters are trained per module instead of $d \times k$, which reduces the memory and FLOPS required for computing the gradient. As an example, applying an r = 16 LoRA to a 7B-model weight matrix with d = k = 4096 trains < 1% of the original parameter count. Appendix Sec. H lays out the approximate memory savings by LoRA.

LoRA's introduction and first applications targeted only the $W_q$ and $W_v$ matrices in the self-attention module (Hu et al., 2021). Since then, it has become best practice to target all transformer modules (Raschka, 2023; Dettmers et al., 2024), i.e., $\{W_q^{(l)}, W_k^{(l)}, W_v^{(l)}, W_o^{(l)}\}_{l=1}^{L}$ in the self-attention modules and $\{W_{\text{gate}}^{(l)}, W_{\text{up}}^{(l)}, W_{\text{down}}^{(l)}\}_{l=1}^{L}$ in the feedforward modules, for L layers in, say, a Llama architecture (Hu et al., 2021; Touvron et al., 2023).
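To make the parameterization above concrete, here is a minimal PyTorch sketch of a LoRA-adapted linear map, written directly from the equations in this section. The module name and any details beyond $A_0 \sim \mathcal{N}(0,1)$, $B_0 = 0$, and $\gamma_r = \alpha/r$ are illustrative; this is not the implementation used in the experiments (which rely on the HuggingFace peft library).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen W_pretrained in R^{d x k} plus a trainable low-rank update
    Delta = gamma_r * A @ B, with A in R^{d x r} and B in R^{r x k}."""
    def __init__(self, W_pretrained: torch.Tensor, r: int, alpha: float):
        super().__init__()
        d, k = W_pretrained.shape
        self.W = nn.Parameter(W_pretrained, requires_grad=False)  # frozen base weight
        self.A = nn.Parameter(torch.randn(d, r))                  # A_0 ~ N(0, 1)
        self.B = nn.Parameter(torch.zeros(r, k))                  # B_0 = 0
        self.gamma = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d) -> (..., k); at initialization B = 0, so output = x @ W
        return x @ self.W + self.gamma * (x @ self.A) @ self.B

layer = LoRALinear(torch.randn(4096, 4096), r=16, alpha=32)  # alpha = 2r
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable / layer.W.numel())  # ~0.0078, i.e., < 1% of the original parameters
```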
## 3 Experimental Setup

We train on code and math datasets that have been shown to increase downstream performance. We motivate the training datasets and evaluation benchmarks below. All training was done using the Databricks MosaicML composer1, streaming2, and llm-foundry3 repositories, as well as the HuggingFace peft library.

## 3.1 Datasets For Continued Pretraining (CPT) And Instruction Finetuning (IFT)

**Coding CPT - StarCoder-Python** (Li et al., 2023a) This dataset consists of permissively licensed repositories from GitHub, including Git commits, in 80+ programming languages. We chose the Python subset and sub-sampled it to 20B tokens.

**Math CPT - OpenWebMath** (Paster et al., 2023) This dataset contains 14.7B tokens derived from mathematical web pages from Common Crawl, correctly formatted to preserve mathematical content such as LaTeX equations.4 To match the StarCoder-Python dataset, we trained on up to 20B tokens, repeating tokens beyond the first 14.7B. An analysis of this dataset shows that it contains a considerable amount of full English sentences.5

1https://github.com/mosaicml/composer
2https://github.com/mosaicml/streaming
3https://github.com/mosaicml/llm-foundry
4https://huggingface.co/datasets/open-web-math/open-web-math
5Out of a random selection of 100K examples, a regex search shows that 75% of the examples contain LaTeX. The data is classified as 99.7% English and "overwhelmingly English" by the langdetect and fasttext tools.

**Coding IFT - Magicoder-Evol-Instruct-110K** (Wei et al., 2023) This dataset contains 72.97M tokens of programming questions and answers. It reproduces the "Evol-Instruct" dataset of WizardCoder (Luo et al., 2023b) by iteratively prompting an LLM (GPT-4) to increase the difficulty of a set of question-answer pairs from Code Alpaca (Chaudhary, 2023).

**Math IFT - MetaMathQA** (Yu et al., 2023) This dataset was built by bootstrapping mathematical word problems from the *training* sets of GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) by rewriting the questions with variations using GPT-3.5. This dataset contains 395K question-answer pairs and roughly 103M tokens.6

We quantify learning and forgetting via benchmarks reported on the Open LLM Leaderboard7 for state-of-the-art open-source LLMs such as Llama (Touvron et al., 2023).

## 3.2 Measuring Learning With Coding And Math Benchmarks (Target-Domain Evaluation)

**Coding - HumanEval** (Chen et al., 2021) This benchmark contains 164 problems that involve generating a Python program given a docstring and a function signature. A generation is considered correct if it passes all supplied unit tests. We use the Code Generation LM Evaluation Harness (Ben Allal et al., 2022) configured to output 50 generations per problem, and calculate "pass@1" with softmax temperature=0.2 and top_p = 0.95 for 0-shot HumanEval.
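For context on the pass@1 metric just mentioned: when n generations are sampled per problem and c of them pass the unit tests, pass@k is commonly computed with the unbiased estimator of Chen et al. (2021), sketched below. This is a standard reference implementation included for clarity, not code from the paper's evaluation pipeline.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k of Chen et al. (2021): 1 - C(n-c, k) / C(n, k),
    computed as a numerically stable product."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

print(pass_at_k(n=50, c=10, k=1))  # 0.2: with 10/50 passing samples, pass@1 = c/n
```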
All forgetting metrics were computed using the MosaicML Gauntlet evaluation harness (Dohmann, 2023).8 ## 4 Results 4.1 Target-Domain Performance: Lora At Low Ranks Underperforms Full Finetuning We compare LoRA and full finetuning after performing an exhaustive learning rate sweep for each method, which we found to be crucial (Dettmers et al., 2024). We include learning rate sweep results in Figure S1. We perform a sample-efficiency analysis - i.e., compute the learning metrics as a function of training samples seen - for both LoRA and full finetuning. For IFT, we train separate models for 1, 2, 4, 8, and 16 epochs. For CPT, we vary the number of training tokens (0.25, 0.5, 1, 2, 4, 8, 16, 20 billion), using individual learning rate cooldown schedules. For each condition, we train one full finetuning model and three LoRA models with ranks r = 16, 64, 256 noting that most LoRA papers use a "low" rank of 8-64, (e.g., Dettmers et al. (2024); Zhuo et al. (2024)). The LoRA models target all transformer modules and use α = 2r, as known to be best practice (Raschka, 2023). For further details on experimental setup and hyperparameters, see Appendix A. The results appear in Fig. 1. We first note that for both programming and math, IFT improves evaluation scores much more than CPT, which is expected because the samples in each IFT dataset are more similar to the evaluation problems (e.g., for code, IFT achieves maximum HumanEval of 0.497 vs. 0.263 for CPT). 6https://huggingface.co/datasets/meta-math/MetaMathQA 7https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard 8https://github.com/mosaicml/llm-foundry/tree/main/scripts/eval ![4_image_0.png](4_image_0.png) Figure 1: LoRA performance scales by rank and underperforms full finetuning in code and math. (A) Starcoder-Python, (B) Magicoder-Evol-Instruct-110K, (C) OpenWebMath, (D) MetaMathQA. In (A) and (B) y-axis: HumanEval pass@1. In (C) and (D) y-axis: GSM8K strict match. In all panels, "base model" indicates Llama-2-7b without instruction finetuning. Note that 16 epochs are ~1.16B and ~1.6B tokens, for Magicoder-Evol-Instruct-110K and MetaMathQA. For Code CPT (Fig. 1A and Table $1), we identify a substantial gap between full finetuning and LoRA that grows with more data. The best LoRA model, with rank r = 256, peaks at 20B tokens with HumanEval=0.224, roughly matching full finetuning with 4B tokens (HumanEval=0.218). Full finetuning reaches its peak HumanEval of 0.263 at 20B tokens. A clear ordering by rank emerges after the initial 1B CPT tokens. For Code IFT (Fig. 1B and Table $5), HumanEval accuracy is clearly ordered by rank from the very first epoch. The more common r = 16 and r = 64 LoRA configurations have lower accuracy than full finetuning, with HumanEval scores of 0.358 and 0.417 at epoch 4, respectively). With a high LoRA rank (r = 256), full finetuning performance can be matched (LoRA=0.498 in epoch 4, full finetuning=0.497 in epoch 8). In Appendix Sec. F we perform a more sensitive HumanEval analysis, calculating pass@k as a function of k = 1*, . . . ,* 256 with a higher temperature of 0.8 for full finetuning and the LoRA models (at epoch 4). This analysis shows that full finetuning is superior to r = 256 for k < 64, after which the two are equal. Math CPT (Fig. 1C and S3) results closely echo those of code CPT. Consistent patterns in GSM8K emerge at 4B tokens. Full finetuning opens a gap in GSM8K which widens with more data. Similarly, LoRA performance is ordered by rank. 
The best LoRA (r = 256) peaks at 16B tokens (GSM8K=0.203), underperforming full finetuning at 4B tokens (GSM8K=0.224) and at its peak at 20B tokens (GSM8K=0.293).

LoRA closes much of the gap with full finetuning on the **Math IFT** (Fig. 1D and Table S7) dataset, while remaining less sample-efficient. Both methods substantially improve upon the base model; LoRA (r = 256) peaks at 8 epochs (GSM8K=0.634) while full finetuning achieves GSM8K=0.641 at 2 epochs and peaks at 4 epochs, with GSM8K=0.642.9 Unlike the code IFT dataset, r = 64 suffices here to approach full finetuning and achieve GSM8K=0.624 at epoch 4. We suggest that lower ranks are effective here because English mathematics problems involve a smaller domain shift from the pretraining data as compared to coding ones.

In summary, in CPT, LoRA underperforms full finetuning across all configurations. In IFT, and especially in code, high LoRA ranks are required to close the gap with full finetuning.

## 4.2 LoRA Forgets Less Than Full Finetuning

Here, we investigate the extent of forgetting (defined in Sec. 3.2) as a function of training data in Fig. 2. Overall, we observe that (1) IFT induces more forgetting than CPT, (2) programming induces more forgetting than math, and (3) forgetting tends to worsen with training duration. Most importantly, LoRA forgets less than full finetuning, and the extent of forgetting is controlled by rank.

In code - both CPT and IFT - full finetuning forgets substantially more than any LoRA configuration. In code CPT (Table S2), at 20B tokens, full finetuning scores 0.545 versus 0.617 for LoRA r = 256. In code IFT (Table S6), full finetuning scores 0.414 versus 0.509 for LoRA r = 64. In math - for both CPT and IFT - LoRA with r = 256 forgets nearly as much as full finetuning. In CPT (Table S4), LoRA scores 0.616 (20B tokens) versus 0.613 for full finetuning (16B tokens). In IFT (Table S8), LoRA and full finetuning respectively degrade to 0.567 and 0.559 at epoch 16. We note that the least forgetting occurs for the OpenWebMath dataset, which is dominated by English sentences (see Sec. 3.1 for details).

## 4.3 The Learning-Forgetting Tradeoff

It is trivial that models that change less when finetuned to a new target domain will forget less of the source domain. The nontrivial question is: do LoRA and full finetuning differ in how they trade off learning and forgetting? Can LoRA achieve similar target-domain performance but with diminished forgetting?

We form learning-forgetting Pareto curves by plotting the forgetting metric versus the learning metric for each training duration (Fig. 3). As models train on more data, they learn more and forget more, traveling up and left in this space. As we increase LoRA ranks, we find that the curves shift up and left as well, again learning more and forgetting more, doing so more consistently in IFT than CPT. Each dataset presents a unique tradeoff pattern, which makes it difficult to conclude whether LoRA and full finetuning offer fundamentally different learning-forgetting tradeoffs. We will review each dataset next.

For Code CPT, though the full finetuning curve reaches much higher values of HumanEval, it appears to forget more for any given HumanEval value, which LoRA can reach if trained on more tokens. This pattern does not hold for math CPT, where the LoRA and full finetuning curves are roughly overlapping until full finetuning shoots up (at 4B tokens) to achieve much higher GSM8K scores without increased forgetting. In code IFT, LoRA r = 256 offers comparable HumanEval accuracy while strictly forgetting less. Lower ranks
In code IFT, LoRA r = 256 offers comparable HumanEval accuracy while strictly forgetting less. Lower ranks do not reach high values of HumanEval to compare to full finetuning. In math IFT, LoRA and full finetuning seem to lie on adjacent learning-forgetting tradeoff curves, with full finetuning offering preferable tradeoffs.

9We note that the original MetaMath paper reports a maximum accuracy of 0.665 when (fully) finetuning Llama-2-7B on the MetaMathQA dataset. We attribute this to small differences in hyperparameters; they trained on 3 epochs with a batch size of 128 using the AdamW optimizer, a learning rate of 2e-5, and a learning rate warmup of 3%.

![6_image_0.png](6_image_0.png)

Figure 2: LoRA forgets less than full finetuning. In all panels, the y-axis shows the average of HellaSwag, ARC-Challenge and Winogrande for Llama-2-7B trained on: (A) StarCoder-Python (B) Magicoder-Evol-Instruct-110k (C) OpenWebMath (D) MetaMathQA.

With the caveats mentioned above, it seems that LoRA can offer preferable learning-forgetting tradeoffs for code, while full finetuning can offer preferable tradeoffs for math. Moreover, the choice of LoRA rank can serve as a knob to navigate the learning-forgetting tradeoffs.

## 4.4 For The Tülu-V2-Mix Dataset, LoRA Is On Par With Full Finetuning

So far, we analyzed how LoRA and full finetuning specialize in very specific domains. Often, code or math problems appear as part of larger IFT data mixtures that include multi-turn conversations and a variety of other NLP tasks, such as summarization (e.g., Wei et al. (2021)). We therefore finetuned LoRA and full finetuning models on one such popular dataset, the Tülu-v2-mix (Ivison et al., 2023). The results are presented in the Appendix (Sec. C and Table S9). In summary, we find that both LoRA and full finetuning meaningfully improve upon the base model, and that LoRA, even with lower ranks, can match full finetuning in chat quality as measured by the Multi-Turn Benchmark (MT-bench; Zheng et al. (2024)), GSM8K (Cobbe et al., 2021), and Massive Multitask Language Understanding (MMLU; Hendrycks et al. (2020)). At longer training durations (6 epochs), LoRA also forgets less.

![7_image_0.png](7_image_0.png)

The learning-forgetting tradeoff

Figure 3: LoRA vs. full finetuning trade-off for Llama-2-7B. Relative to full finetuning, LoRA learns less (lower values on the y-axis) and forgets less (higher values on the x-axis). Each dot is a separate model, with marker size corresponding to training duration (from 0.25-20 billion tokens for CPT, and 1-16 epochs for IFT). Same data as Figures 1, 2.

## 4.5 How Strongly Does LoRA Constrain The Finetuning Process?

In this section, we analyze Llama-2-7B models trained on the Magicoder-Evol-Instruct-110K dataset. We first compare the learning-forgetting tradeoffs between LoRA and classic regularization techniques, and then analyze the diversity of the generated text.

![8_image_0.png](8_image_0.png)

Figure 4: **LoRA forgets less than attention dropout and weight decay.** Results from Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K. Left panel: learning as measured by accuracy on HumanEval. Right panel: forgetting as measured by the average of HellaSwag, ARC-Challenge and WinoGrande scores. The solid slateblue line shows that LoRA (r=256) learns as much as full finetuning, weight decay, and attention dropout, while forgetting much less.
LoRA forgets less than attention dropout and weight decay We compare LoRA (r = 16, 256, training all modules) to weight decay (Goodfellow et al., 2016) with values 5e−5, 1e−4 and attention dropout (Srivastava et al., 2014) with values 0.05, 0.1. Both regularization techniques appear to learn and forget as much as full finetuning, except that weight decay starts to generally deteriorate at longer training durations (epochs 8 and 16). LoRA, with the common r = 16, learns less and forgets less than all other models. LoRA r = 256, on the other hand, learns as much as the other methods while forgetting less.

LoRA helps maintain diversity of token generations. We scrutinize the generated solution strings for HumanEval problems. We calculate the number of unique output strings out of 50 generations (for the base model, full finetuning, and LoRA), serving as a coarse proxy for predictive diversity. In Figure 5 we separately show the results for correct and incorrect answers. As in the reinforcement learning from human feedback literature (Du et al., 2024; Sun et al., 2023), we find that full finetuning results in fewer unique generations ("distribution collapse") compared to the base model, for both pass and fail generations, with LoRA in between the two. The above works also suggest that LoRA could even substitute a common Kullback-Leibler divergence term that keeps the probabilities of the generated text similar between the finetuned and base model. We reiterate that exact string matching between generations is not a sensitive metric of predictive diversity, as generations can slightly vary in format and remain functionally identical.

![9_image_0.png](9_image_0.png)

Figure 5: **LoRA maintains output token diversity relative to full finetuning.**

## 4.6 Full Finetuning On Code And Math Does Not Learn Low-Rank Perturbations

In this section, we seek to study whether we should expect low-rank training to be a good approximation to full finetuning, and if so, what is the necessary rank. Recall that full finetuning can be written as Wfinetuned = Wpretrained + ∆; here we compute the Singular Value Decomposition of all three terms in the equation. We focus on continued pretraining for code, where there are drastic differences between LoRA and full finetuning. We analyze checkpoints obtained at 0.25, 0.5, 1, 2, 4, 8, 16, and 20 billion training tokens.

First, in Figure S8 we present results for the Wq projection at layer 26 of Llama-2-7B (with dimensions d × d, d = 4096). We show that the spectrum of the finetuned weight matrix is very similar to that of the base weight matrix, both decaying slowly and requiring keeping ≈ 50% of singular vectors (≈ 2000/4096) to explain 90% of the variance in the weight matrix. Critically, the difference ∆ also has a similar spectrum to the finetuned and base weight matrices (up to a multiplicative scaling). These results are in line with the analysis in Zeng & Lee (2024) showing that any transformer model can be well approximated with r = d/2. Additionally, we suggest that there is nothing extraordinary about the full finetuning spectra; similar spectra can be achieved by adding low-magnitude Gaussian i.i.d. noise to a weight matrix (Fig. S9).

Next, we ask when during training does the perturbation become high rank, and whether it meaningfully varies between module types and layers. We estimate the rank needed to explain 90% of the variance in the matrix. The results appear in Figure 6.
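To make this estimate concrete, the sketch below computes the rank needed to reach a given fraction of explained variance from the singular values of ∆. This is a minimal sketch assuming PyTorch; explained variance is taken over squared singular values (the usual convention, though the paper's exact normalization may differ), and the toy tensors below are stand-ins for actual checkpoints rather than our measured weights.

```python
import torch

def rank_to_explain_variance(delta: torch.Tensor, threshold: float = 0.90) -> int:
    """Smallest k such that the top-k singular values of `delta` explain
    at least `threshold` of its variance (squared-spectrum convention)."""
    s = torch.linalg.svdvals(delta.float())            # singular values, descending
    explained = torch.cumsum(s**2, dim=0) / (s**2).sum()
    return int(torch.searchsorted(explained, threshold).item()) + 1

# Hypothetical usage: `w_base` and `w_finetuned` stand in for, e.g., the
# 4096 x 4096 W_q at layer 26 before and after continued pretraining.
w_base = torch.randn(4096, 4096) / 64
w_finetuned = w_base + 0.01 * torch.randn(4096, 4096)
print(rank_to_explain_variance(w_finetuned - w_base))  # high rank for i.i.d. noise
```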
We find that: (1) The earliest checkpoint at 0.25B CPT tokens exhibits ∆ matrices with a rank that is 10−100× larger than typical LoRA ranks; (2) the rank of ∆ increases when trained on more data; (3) MLP modules have higher ranks compared to attention modules; (4) first and last layers seem to be lower rank compared to middle layers.

## 4.7 Hyperparameter Sensitivity Analyses For LoRA

Our goal in this work was to optimally configure LoRA so that it has the best chances of matching full finetuning. This is nontrivial, as LoRA has a large number of hyperparameters to choose from: target modules, rank, scaling factors, and learning rates. We turn to analyze the importance of each, and provide some practical recommendations.

First, we found that the choice α = 2r is crucial for high ranks. Most common packages, e.g., HuggingFace's PEFT10, scale the LoRA matrices by α/r, effectively scaling down higher ranks (see also Kalajdzievski (2023)). One might think that high learning rate values may compensate for fixed low α's, but doing so creates instabilities and often leads to inferior performance. To show this, we performed a joint hyperparameter sweep over α and learning rate for the Magicoder dataset, training a r = 256 LoRA for 4 epochs (Fig. S3). We find that α = 512 does much better than 256 or 32 across all learning rates.

Next, to assess the relative contribution of target modules and rank, we trained Llama-2-7b models on 4 epochs of the Magicoder dataset, sweeping over target modules ("Attention", "MLP", and "All", their union) and ranks (r = 16, 64, 256), setting α = 2r. Fig. 7 shows that HumanEval performance increases with rank, and that targeting just "Attention" underperforms both "MLP" and "All", where in the latter, most gains are interestingly driven by the "MLP" modules. This is potential evidence that the MLP blocks are the primary loci for continual learning in LoRA, at least in our datasets.

10https://huggingface.co/docs/peft/en/index

![10_image_0.png](10_image_0.png)

Figure 6: Dynamics of rank for Llama-2-7B trained on the Starcoder (CPT) data. In each panel, the x-axis denotes layer number and the y-axis denotes the rank needed to explain at least 90% of the variance (maximal dimensionality is 4096). Colors denote CPT tokens, with lighter colors trained for longer.

![10_image_1.png](10_image_1.png)

Figure 7: Targeting MLP or All modules is superior to training Attention modules alone. All Llama-2-7B checkpoints were trained on Magicoder for 1, 2 and 4 epochs with rank 16 (left), 64 (center) and 256 (right).

For IFT, we find that LoRA is more sensitive to learning rates compared to full finetuning, and benefits from the highest learning rate that enables stable training for the chosen training duration (see Appendix Sec. B and Fig. S1). LoRA's best learning rates should be set one order of magnitude higher than full finetuning's, often ranging between 5e−5 and 5e−4 for these combinations of model architecture and dataset.

In Appendix Sec. I, we benchmark throughput and peak GPU memory of different LoRA configurations, showing that for standard implementations and a fixed batch size, LoRA tends to train slower than full finetuning.
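For concreteness, the strongest configuration from these sweeps can be expressed with HuggingFace's peft library, which the experiments here used alongside llm-foundry (see Appendix A). This is a minimal sketch, not our exact training script; the module names are the standard HuggingFace Llama-2 ones, corresponding to the seven projections targeted in Appendix A.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Rank 256 with alpha = 2r: since peft scales updates by alpha/r, this gives
# a constant scaling factor of 2 regardless of rank. Dropout follows App. A.
lora_config = LoraConfig(
    r=256,
    lora_alpha=512,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # roughly 0.64B trainable vs ~7B total
```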
To conclude, based on our main results and hyperparameter sweeps, we recommend: (a) using LoRA for instruction finetuning and not continued pretraining; (b) if GPU memory allows, targeting "All" transformer modules with a rank of 256, since ranks 16−64 tend not to suffice for code tasks; (c) using α = 2r; and (d) sweeping over learning rates between [1e−5, 5e−4], picking the highest value that enables stable training.

## 5 Related Work

Extensions to LoRA LoRA has inspired many variants and extensions. One group of methods improves training with LoRA by focusing on initialization or scaling (Meng et al., 2024; Hayou et al., 2024; Li et al., 2023b; Kalajdzievski, 2023; Nikdan et al., 2024), sequential training procedures (Xia et al., 2024), or architectural modifications (Shi et al., 2024). Other works propose alternative low-rank approximations altogether (Liu et al., 2024; Zhao et al., 2024a; Jiang et al., 2024a; Kopiczko et al., 2023). In this study we chose to analyze the classic LoRA setup; while many of these proposed variations of LoRA seem promising, we leave a rigorous comparison of these techniques to future work.

Benchmarking LoRA vs. Full Finetuning The original LoRA paper (Hu et al., 2021) reported that LoRA matched full finetuning performance for RoBERTa (Liu et al., 2019) on GLUE (Wang et al., 2018), for GPT-2 on the E2E NLG Challenge (Novikova et al., 2017), and for GPT-3 on WikiSQL (Zhong et al., 2017), MNLI (Williams et al., 2017), and SAMSum (Gliwa et al., 2019). Many subsequent studies follow this template and report encoder model performance on tasks in GLUE such as SST-2 (Socher et al., 2013) and MNLI (Williams et al., 2017). Models such as RoBERTa have fewer than 340M parameters, however, and classification tasks such as MNLI are quite trivial for modern billion-parameter LLMs such as Llama-2-7B. Despite LoRA's popularity, only a few studies have rigorously compared LoRA to full finetuning in this setting and with challenging domains such as code and math.

Dettmers et al. (2024), for example, found that QLoRA matched full finetuning MMLU (Hendrycks et al., 2020) performance when finetuning Llama-1 7B, 13B, 33B and 65B on the Alpaca (Taori et al., 2023) and FLAN (Chung et al., 2024) datasets. Ivison et al. (2023), on the other hand, found that QLoRA did not perform as well as full finetuning for Llama-2-7B, 13B and 70B models trained on the Tülu-v2-mix dataset when evaluated across MMLU, GSM8K, AlpacaEval (which uses LLM-as-a-judge; Dubois et al. (2024)) and HumanEval. One recent notable study is Astraios, which found that LoRA at rank r = 8 performed worse than full finetuning on 8 datasets and across 4 model sizes (up to 16 billion parameters), on 5 representative code tasks (Zhuo et al., 2024). Our study corroborates these results and shows that with higher ranks and proper hyperparameter choices, LoRA can perform much better.

The conclusions have also been mixed with regards to the practical details surrounding LoRA target modules and rank: Raschka (2023) and Dettmers et al. (2024) show that optimized LoRA configurations perform as well as full finetuning, and that performance is governed by the choice of target modules but not rank. However, in those works, the scalar α was not modified with rank, and we found that increasing it to 2r was necessary to unlock improvements by rank. In contrast, Liu et al. (2024) shows that LoRA is sensitive to ranks. It is likely that some of these discrepancies are due to differences in finetuning datasets and evaluations.
Continual learning on code and math A growing body of work investigates ways of specializing LLMs to code and math. In code, models such as StarCoder (Li et al., 2023a; Lozhkov et al., 2024), DeepSeek Coder (Guo et al., 2024), and SantaCoder (Allal et al., 2023) were pretrained from scratch on large-scale code datasets. Alternatively, some works start with a generic pretrained base model and combine continued pretraining on large code datasets with subsequent IFT on code problems (usually with full finetuning), e.g., Codex (Chen et al., 2021), Code-Qwen (Bai et al., 2023), and CodeLlama (Roziere et al., 2023). Some perform only IFT on top of a base model, like Magicoder (Wei et al., 2023) or WizardCoder (Luo et al., 2023b). OctoCoder (Muennighoff et al., 2023) performs IFT with LoRA.

Similarly, much recent work aims to improve mathematical capabilities. Models like DeepSeek Math (Shao et al., 2024) perform continued pretraining on top of a base model, while other methods focus on finetuning with high-quality synthetic math problems, scaled to millions of examples. Luo et al. (2023a) takes the Evol-Instruct approach to data generation (akin to the Magicoder dataset; Sec. 3.1), which it then uses to train reward models for instruction quality and solution correctness, which are in turn used for LLM finetuning. Other work develops Monte Carlo Tree Search methods to automatically supervise the intermediate reasoning steps while solving math problems (Luo et al., 2024), and Yue et al. (2024) generates questions and answers from the pretraining web corpus. Toshniwal et al. (2024) uses an LLM to synthesize Code-Interpreter-style solutions to the GSM8K and MATH benchmarks; the proposed solutions can be verified against the official solutions. Singh et al. (2023) iterate over this procedure multiple times ("Self-training") using an expectation-maximization approach. All reviewed methods meaningfully improve math capabilities.

Learning-Forgetting tradeoffs Vu et al. (2022) shows that prompt tuning (Lester et al., 2021), another parameter-efficient finetuning method, can aid in mitigating forgetting for cross-lingual summarization tasks (using multilingual variants of the T5 model). With large Llama-style LLMs, it has been reported that code-finetuned LLMs lose some of their capabilities in language understanding and commonsense reasoning (Li et al., 2023a; Roziere et al., 2023; Wei et al., 2023). A common approach to mitigate forgetting involves "replaying" source-domain data during continual learning, which can be done by storing the data in a memory buffer, or generating it on the fly (Lesort et al., 2022; Scialom et al., 2022; Sun et al., 2019).

## 6 Discussion

Does the difference between LoRA and full finetuning change with model size? Studies in the past have hinted at a relationship between the effectiveness of finetuning and model size (Aghajanyan et al., 2020; Hu et al., 2021; Zhuo et al., 2024). While recent studies have successfully applied LoRA to 70B parameter models (Ivison et al., 2023; Yu et al., 2023; Niederfahrenhorst et al., 2023; Turgutlu, 2024), and previous work shows that techniques like prompt tuning become more effective for larger models (Vu et al., 2022), we leave a rigorous study of these intriguing scaling properties to future work.

Limitations of the spectral analysis. The observation that full finetuning tends to find high rank solutions does not rule out the possibility of low-rank solutions; rather, it shows that they are not typically found.
An alternative interpretation is that the rank needed to reconstruct the weight matrix is higher than the rank needed for a downstream task. We also only presented SVD analysis for the continued pretraining setting. It is possible that a similar analysis for the instruction finetuning setting would reveal that the full finetuning does not tend to be as high rank. ## 7 Conclusion This work sheds light on the downstream performance of 7 billion parameter LLMs trained with LoRA and full finetuning. Unlike most prior work, we use domain-specific datasets in code and math, associated with sensitive evaluation metrics. We show that LoRA, with commonly used low-rank settings, underperforms full finetuning across domains. We also show that LoRA keeps the finetuned model's behavior close to that of the base model, with diminished source-domain forgetting and more diverse generations at inference time. We show that LoRA mitigates forgetting more than classical regularization techniques, and also show that full finetuning finds weight perturbations that are far from being low-rank. We conclude by analyzing LoRA's increased sensitivity to hyperparameters and highlighting best practices. ## Acknowledgements We would like to thank the editor and the three anonymous reviewers who provided high-quality feedback on this work. We are also grateful to Daniel Han and Damjan Kalajdzievski for carefully reading our work and pointing out the importance of setting α = 2r for training with high ranks. ## Author Contributions D.B. led this project by developing code, running experiments, analyzing results, and writing the manuscript. J.P. ran experiments and assisted in the writing of the manuscript. J.G.O. wrote code and ran experiments. P.G. advised the SVD analysis, C.J. ran experiments, and D.K. wrote code. M.P., S.H., V.C., J.F., C.B., and J.P.C. advised this work. ## References Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. *arXiv preprint arXiv:2012.13255*, 2020. Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. Santacoder: don't reach for the stars! arXiv preprint arXiv:2301.03988, 2023. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. *arXiv preprint arXiv:2309.16609*, 2023. Loubna Ben Allal, Niklas Muennighoff, Logesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. A framework for the evaluation of code generation models. https://github.com/bigcode-project/ bigcode-evaluation-harness, 2022. Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https://github. com/sahil280114/codealpaca, 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. 
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. *ArXiv*, abs/1803.05457, 2018. URL https://api.semanticscholar.org/CorpusID:3922816. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*, 2021. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. *Advances in Neural Information Processing Systems*, 36, 2024. Jeremy Dohmann. Blazingly fast llm evaluation for in-context learning, February 2023. URL https: //www.databricks.com/blog/llm-evaluation-for-icl. Blog post, Mosaic AI Research. Yuqing Du, Alexander Havrilla, Sainbayar Sukhbaatar, Pieter Abbeel, and Roberta Raileanu. A study on improving reasoning in language models. In *I Can't Believe It's Not Better Workshop: Failure Modes in* the Age of Foundation Models, 2024. URL https://openreview.net/forum?id=tCZFmDyPFm. Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. *Advances in Neural Information Processing Systems*, 36, 2024. Robert M French. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3(4): 128–135, 1999. Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 12 2023. URL https://zenodo.org/records/10256836. Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, Deepali Aneja, Zeyu Jin, Ramani Duraiswami, Dinesh Manocha, et al. A closer look at the limitations of instruction tuning. *arXiv preprint arXiv:2402.05119*, 2024. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. *arXiv preprint arXiv:1911.12237*, 2019. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016. Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. *arXiv preprint arXiv:1312.6211*, 2013. Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meets programming–the rise of code intelligence. *arXiv preprint arXiv:2401.14196*, 2024. Soufiane Hayou, Nikhil Ghosh, and Bin Yu. 
Lora+: Efficient low rank adaptation of large models. *arXiv* preprint arXiv:2402.12354, 2024. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. *arXiv preprint arXiv:2009.03300*, 2020. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A Smith, Iz Beltagy, et al. Camels in a changing climate: Enhancing lm adaptation with tulu 2. *arXiv preprint arXiv:2311.10702*, 2023. Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, et al. Mora: High-rank updating for parameter-efficient fine-tuning. *arXiv* preprint arXiv:2405.12130, 2024a. Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu, Yu Zhang, Zhenguo Li, and James T. Kwok. Forwardbackward reasoning in large language models for mathematical verification, 2024b. Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023. Damjan Kalajdzievski. Scaling laws for forgetting when fine-tuning large language models. *arXiv preprint* arXiv:2401.05605, 2024. Anat Kleiman, Jonathan Frankle, Sham M Kakade, and Mansheej Paul. Predicting task forgetting in large language models. 2023. URL https://openreview.net/pdf?id=0BMg0OgNTP. Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki Markus Asano. Vera: Vector-based random matrix adaptation. *arXiv preprint arXiv:2310.11454*, 2023. Timothée Lesort, Oleksiy Ostapenko, Diganta Misra, Md Rifat Arefin, Pau Rodríguez, Laurent Charlin, and Irina Rish. Challenging common assumptions about catastrophic forgetting. *arXiv preprint* arXiv:2207.04543, 2022. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. *arXiv preprint arXiv:1804.08838*, 2018. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! *arXiv* preprint arXiv:2305.06161, 2023a. Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, and Tuo Zhao. Loftq: Lora-fine-tuning-aware quantization for large language models. *arXiv preprint arXiv:2310.08659*, 2023b. Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. *arXiv preprint arXiv:2402.09353*, 2024. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. 
Starcoder 2 and the stack v2: The next generation. *arXiv preprint arXiv:2402.19173*, 2024. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct, 2023a. URL https://arxiv.org/abs/2308.09583. Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. Improve mathematical reasoning in language models by automated process supervision, 2024. URL https://arxiv.org/abs/2406.06592. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. *arXiv* preprint arXiv:2306.08568, 2023b. Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109–165. Elsevier, 1989. Fanxu Meng, Zhaohui Wang, and Muhan Zhang. Pissa: Principal singular values and singular vectors adaptation of large language models. *arXiv preprint arXiv:2404.02948*, 2024. Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro Von Werra, and Shayne Longpre. Octopack: Instruction tuning code large language models. *arXiv preprint arXiv:2308.07124*, 2023. Artur Niederfahrenhorst, Kourosh Hakhamaneshi, and Rehaan Ahmad. Fine-tuning llms: Lora or fullparameter? an in-depth analysis with llama 2, September 2023. URL https://www.anyscale.com/blog/ fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2. Blog post. Mahdi Nikdan, Soroush Tabesh, and Dan Alistarh. Rosa: Accurate parameter-efficient fine-tuning via robust adaptation. *arXiv preprint arXiv:2401.04679*, 2024. Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. The e2e dataset: New challenges for end-to-end generation. *arXiv preprint arXiv:1706.09254*, 2017. Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open dataset of high-quality mathematical web text. *arXiv preprint arXiv:2310.06786*, 2023. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In *SC20: International Conference for High Performance Computing,* Networking, Storage and Analysis, pp. 1–16. IEEE, 2020. Sebastian Raschka. Practical tips for finetuning llms using lora (low-rank adaptation), 2023. URL https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms\# %C2%A7enable-lora-for-more-layers. Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale, 2019. Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. Fine-tuned language models are continual learners. *arXiv preprint arXiv:2205.12393*, 2022. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.03300. 
Shuhua Shi, Shaohan Huang, Minghui Song, Zhoujun Li, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. Reslora: Identity residual mapping in low-rank adaption. *arXiv preprint* arXiv:2402.18039, 2024. Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, et al. Beyond human data: Scaling self-training for problem-solving with language models. *arXiv preprint arXiv:2312.06585*, 2023. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631–1642, 2013. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1): 1929–1958, 2014. Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. Lamol: Language modeling for lifelong language learning. arXiv preprint arXiv:1909.03329, 2019. Simeng Sun, Dhawal Gupta, and Mohit Iyyer. Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of rlhf, 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023. Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset, 2024. URL https://arxiv.org/abs/ 2402.10176. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. *arXiv preprint arXiv:2307.09288*, 2023. Kerem Turgutlu. Efficient finetuning of llama 3 with fsdp qdora, April 2024. URL https://www.answer.ai/ posts/2024-04-26-fsdp-qdora-llama3.html. Blog post. Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant. Overcoming catastrophic forgetting in zero-shot cross-lingual generation. *arXiv preprint arXiv:2205.12647*, 2022. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multitask benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018. Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application, 2024. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. *arXiv preprint arXiv:2109.01652*, 2021. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Source code is all you need. *arXiv preprint arXiv:2312.02120*, 2023. Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao. Large language models are better reasoners with self-verification. *arXiv preprint arXiv:2212.09561*, 2022. Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. *arXiv preprint arXiv:1704.05426*, 2017. Wenhan Xia, Chengwei Qin, and Elad Hazan. Chain of lora: Efficient fine-tuning of language models via residual learning. 
*arXiv preprint arXiv:2401.04151*, 2024.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. *arXiv preprint arXiv:2309.12284*, 2023.

Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the web, 2024. URL https://arxiv.org/abs/2405.03548.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence?, 2019.

Yuchen Zeng and Kangwook Lee. The expressive power of low-rank adaptation, 2024. URL https://arxiv.org/abs/2310.17513.

Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian. Galore: Memory-efficient llm training by gradient low-rank projection. *arXiv preprint arXiv:2403.03507*, 2024a.

Justin Zhao, Timothy Wang, Wael Abid, Geoffrey Angus, Arnav Garg, Jeffery Kinnison, Alex Sherstinsky, Piero Molino, Travis Addair, and Devvret Rishi. Lora land: 310 fine-tuned llms that rival gpt-4, a technical report. *arXiv preprint arXiv:2405.00732*, 2024b.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. *Advances in Neural Information Processing Systems*, 36, 2024.

Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. *arXiv preprint arXiv:1709.00103*, 2017.

Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, and Niklas Muennighoff. Astraios: Parameter-efficient instruction tuning code large language models. *arXiv preprint arXiv:2401.00788*, 2024.

## Appendix A Experimental Setup

LoRA configuration for all experiments. All experiments were done with the Databricks MosaicML composer, streaming, and llm-foundry libraries, in conjunction with the HuggingFace peft library, on 32×H100-80GB GPUs. We targeted all trainable modules inside each of the L Llama transformer blocks: $\{W_q^{(l)}, W_k^{(l)}, W_v^{(l)}, W_o^{(l)}, W_{\text{gate}}^{(l)}, W_{\text{up}}^{(l)}, W_{\text{down}}^{(l)}\}_{l=1}^{L}$. We used ranks of r = 16, 64, 256 and set α = 2r to achieve a constant scaling factor γ_r = 2 across ranks. We use lora_dropout=0.05.

For both the Code CPT and Math CPT settings, we train the model once for 20B tokens. We then perform individual cooldowns using intermediate checkpoints as follows: we set a target max training duration (e.g., 8 billion tokens), and define the last 20% of the max training duration as the cooldown period. We then retrain from the latest available checkpoint prior to the cooldown period.

Code CPT. Llama-2-7B trained on the StarCoder-Python dataset.

- **seq_len**: 4096
- **optimizer**: decoupled_lionw (betas=[0.9, 0.95])
- **learning_rate**: 1.0e-05 for LoRA and full finetuning
- **scheduler**: inv_sqrt_with_warmup (t_scale=1000ba, t_warmup=1000ba, t_cooldown=5086ba, alpha_f_decay=1, alpha_f_cooldown=0). We note that this ends up looking very much like a trapezoidal schedule.
- **weight_decay**: 1.0e-06
- **precision**: amp_bf16
- **global_train_batch_size**: 192
- **device_train_microbatch_size**: 6
- **gradient_clipping**: norm (threshold=1)
- **num_gpus**: 32
- **LR scheduler**: inverse square root with warmup, t_warmup = 500 batches, t_scale = 500 batches, t_cooldown = 5200 batches, alpha_f_decay = 1.0, alpha_f_cooldown = 0.0

Math CPT. Llama-2-7B trained on the OpenWebMath dataset.
- **max_seq_len**: 4096
- **optimizer**: decoupled_lionw (betas=[0.9, 0.95])
- **learning_rate**: 1.0e-05 for full finetuning, 4.0e-05 for LoRA
- **scheduler**: inv_sqrt_with_warmup (t_scale=1000ba, t_warmup=1000ba, t_cooldown=5086ba, alpha_f_decay=1, alpha_f_cooldown=0). We note that this ends up looking very much like a trapezoidal schedule.
- **weight_decay**: 0
- **precision**: amp_bf16
- **global_train_batch_size**: 192
- **device_train_microbatch_size**: 6
- **gradient_clipping**: norm (threshold=1)
- **num_gpus**: 32

Code IFT: Finetuning Llama-2-7b on the Magicoder-Evol-Instruct-110K dataset

- **max_seq_len**: 4096
- **optimizer**: decoupled_lionw (betas=[0.9, 0.95])
- **learning_rate**: 2e-4 for ranks r = 16, 64; 1e-4 for r = 256 with α = 2r = 512 (due to instabilities/loss spikes at 2e-4)
- **scheduler**: cosine_with_warmup (alpha_f=0.01, t_warmup=0.1dur)
- **weight_decay**: 0
- **precision**: amp_bf16
- **global_train_batch_size**: 192
- **device_train_microbatch_size**: 6
- **gradient_clipping**: norm (threshold=1)
- **num_gpus**: 32

Math IFT: Finetuning Llama-2-7b on the MetaMathQA dataset

- **seq_len**: 1024
- **optimizer**: decoupled_lionw (betas=[0.9, 0.95])
- **learning_rate**: full finetuning: 1e-5; LoRA: 1e-4 for r = 16, 64 and 5e-5 for r = 256, due to instabilities
- **scheduler**: cosine_with_warmup (alpha_f=0.01, t_warmup=0.1dur)
- **weight_decay**: 0
- **precision**: amp_bf16
- **global_train_batch_size**: 768
- **device_train_microbatch_size**: 24
- **gradient_clipping**: norm (threshold=1)
- **num_gpus**: 32

## A.1 Training The Input And Output Embedding Layers

Vanilla LoRA and other popular methods such as QLoRA (Dettmers et al., 2024) often do not train the input and output embedding layers. Recent open-source work11, on the other hand, shows that it might be beneficial to supplement LoRA with full finetuning of these two modules (an additional ≈ 200M parameters for a 7B model). We view this approach as a hybrid of LoRA and full finetuning, and therefore leave its empirical investigation for future work. Moreover, this hybrid approach involves further hyperparameter optimization: the input and output layers require tuning their own separate learning rates, which should typically be 2-10X smaller than the LoRA learning rates (training with a single learning rate results in instabilities).

## B Learning Rate Searches

We perform a learning rate sensitivity analysis for Llama-2-7B, trained for two epochs on the code and math IFT datasets, followed by HumanEval and GSM8K evaluation, respectively. Fig. S1 shows that LoRA improves monotonically with learning rate up to a value at which training diverges, with best learning rates of 5e−4 for code and 2e−4 for math. On both datasets, these best LoRA learning rates are outperformed by four alternative full finetuning learning rates. The best full finetuning learning rates are 5e−5 and 1e−5, respectively, an order of magnitude smaller than LoRA's. For LoRA, we cannot find alternative learning rates that achieve at least 90% of the best learning rate's performance. For full finetuning, there are two viable alternative learning rates for code and three for math. Note that in these experiments, the LoRA models target all modules but the W_gate, with α = 32, which should preferably be higher for r = 64.

## B.1 Learning Rate Sensitivity Analysis Across Optimizers

We compared the AdamW and Decoupled LionW optimizers by training for two epochs of Magicoder-Evol-Instruct-110K using different learning rates.
We found that Decoupled LionW performed better on HumanEval for both LoRA and full finetuning, and across learning rates, as seen in Fig. S2.

11https://unsloth.ai/blog/contpretraining, see also the following blogpost https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2 (Niederfahrenhorst et al., 2023)

![21_image_0.png](21_image_0.png)

Figure S1: LoRA is more sensitive to learning rates compared to full finetuning. Llama-2-7B models (A) trained on Magicoder-Evol-Instruct-110k (Wei et al., 2023) and evaluated on HumanEval, (B) trained on MetaMathQA (Yu et al., 2023) and evaluated on GSM8K. Experiments here are performed with LionW; see Fig. S2 for a comparison to AdamW.

![21_image_1.png](21_image_1.png)

Figure S2: Comparing LionW to AdamW across learning rates for two epochs of the Magicoder-Evol-Instruct-110K dataset. Left: HumanEval; Right: Average of "Language Understanding" benchmarks in the MosaicML evaluation gauntlet. Both methods peak at the learning rate used in the original paper (Wei et al., 2023).

## B.2 The Importance Of The α Parameter For LoRA

We found that the performance of all models was particularly sensitive to the LoRA α hyperparameter. Figure S3 shows two experiments on two separate datasets (Magicoder-Evol-Instruct-110K and OpenWebMath) for LoRA with rank r = 256. In both cases the best accuracy is achieved when α = 2r.

![22_image_1.png](22_image_1.png)

![22_image_0.png](22_image_0.png)

(a) Jointly sweeping over LoRA α and learning rate. The optimal choice is α = 2r (blue). (b) Continued pretraining with two different choices of α, where α = 2r is best (blue).

Figure S3: LoRA performance is sensitive to the α hyperparameter. We show that for Code IFT (a) and math CPT (b), an α that is scaled with rank such that α = 2r leads to the highest accuracy.

## C Finetuning On The Tülu-V2-Mix Dataset

We finetuned Llama-2-7b models on the Tülu-v2-mix (Ivison et al., 2023), a mixture of finetuning datasets containing chain-of-thought reasoning, multi-turn assistant conversations, math and science problems, code, and more. There are roughly 326k samples in this dataset. As in all main experiments, we compared full finetuning and LoRA r = 16, 64, 256, targeting all transformer modules. For each of the four experimental conditions, we trained a model for up to 6 epochs and evaluated it after 2, 4, and 6 epochs. Different from the main experiments, the checkpoints evaluated are "hot" and are not cooled down for each training duration.

As in the original paper (Ivison et al., 2023), we assess math capabilities with **GSM8K** (Cobbe et al., 2021); STEM, humanities, and social science capabilities as the average of 57 subjects of the Massive Multitask Language Understanding benchmark (MMLU; Hendrycks et al. (2020)); and conversational capabilities with the **Multi-Turn Benchmark** (MT-bench; Zheng et al. (2024)), which includes 80 multi-turn conversations where the model responses are evaluated automatically by GPT-4. We also compute the same average forgetting score as for all other datasets in this paper. Since datasets like Tülu-v2-mix are where LoRA is mostly used, we ask: can LoRA, even with a low rank, achieve full-finetuning accuracy both in specific domains and in general conversational capabilities?
## C.1 Experimental Setup

After an initial learning rate sweep, we chose the following hyperparameters:

- **max_seq_len**: 4096
- **optimizer**: decoupled_lionw (betas=[0.9, 0.95])
- **learning_rate**: full finetuning: 5e-6; LoRA: 1e-4
- **scheduler**: cosine_with_warmup (alpha_f=0.01, t_warmup=0.1dur)
- **weight_decay**: 0
- **precision**: amp_bf16
- **global_train_batch_size**: 192
- **device_train_microbatch_size**: 6
- **gradient_clipping**: norm (threshold=1)
- **num_gpus**: 32

## C.2 Results

First, we find that on MT-bench (Fig. S5), both LoRA and full finetuning meaningfully improve upon the base model (2.74), starting from the second epoch and improving only slightly when trained for longer. Crucially, all LoRA models are within one standard error of the mean of the full finetuning model (computed with 160 datapoints = 80 questions × 2 turns). That is, one can achieve full finetuning conversational capabilities with r = 16. The caveat is that only 80 questions appear in this benchmark and that the within-model variance is high.

In GSM8K (Fig. S6a), again, all models significantly improve upon the base model (0.145). Remarkably, even in this specific domain, LoRA and full finetuning are overlapping, with the best model being LoRA r = 256 at epoch 4, followed by full finetuning at epoch 2. Here too, as in the other math datasets in the paper, there is an ordering by LoRA rank.

In MMLU (Fig. S6b), full finetuning and LoRA are overlapping, with LoRA r = 64 as the best model (epoch 4), followed by full finetuning at epoch 2. Here there is no ordering by rank.

As for forgetting (Fig. S7), we find overall mild forgetting compared to the rest of the datasets in the paper. At two epochs, full finetuning does better than LoRA. The former starts to degrade at epoch 4. At epoch 6, the findings of the main paper are replicated: full finetuning forgets the most and we find a clear ordering of forgetting by rank.

![24_image_0.png](24_image_0.png)

Figure S4: LoRA forgets less even on a more diverse IFT dataset like Tülu-v2-mix. We plot the average forgetting score, same as in all other datasets, as a function of training duration.

Across all evaluations - learning and forgetting - full finetuning is the best model at epoch 2, and only degrades afterwards. LoRA, on the other hand, needs 4 epochs to train, mirroring the findings in the main part of the paper. LoRA r = 16 seems to offer competitive conversational capabilities and minimal forgetting, but it underperforms in domain-specific knowledge like math. Future work should seek to understand why this is the case.

![24_image_1.png](24_image_1.png)

Figure S5: Average MT-bench score with GPT-4 as a judge, calculated over 80 questions with two turns each. The base model value is as reported in the MT-bench paper. We note that the Tülu paper reports a 6.3 MT-bench value from full finetuning of the Llama-2-7b base model, which only slightly exceeds the standard error of our average score.

![25_image_0.png](25_image_0.png)

Figure S6: On Tülu-v2-mix, LoRA and full finetuning both improve upon the base model and perform comparably.

![25_image_1.png](25_image_1.png)

Figure S7: LoRA forgets less even on a more diverse IFT dataset like Tülu-v2-mix. We plot the average forgetting score, same as in all other datasets, as a function of training duration.

## D Supplementary Tables
| Num. tokens (billions) | 0.25 | 0.50 | 1 | 2 | 4 | 8 | 16 | 20 |
|---|---|---|---|---|---|---|---|---|
| LoRA (r=16) | 0.143 | 0.144 | 0.141 | 0.141 | 0.154 | 0.159 | 0.162 | 0.162 |
| LoRA (r=64) | 0.142 | 0.146 | 0.141 | 0.153 | 0.157 | 0.176 | 0.194 | 0.196 |
| LoRA (r=256) | 0.144 | 0.142 | 0.143 | 0.159 | 0.159 | 0.208 | 0.211 | 0.224 |
| Full Finetuning | 0.152 | 0.153 | 0.172 | 0.181 | 0.218 | 0.258 | 0.255 | 0.263 |

Table S1: Starcoder-Python Results (HumanEval pass@1)

| Num. tokens (billions) | 0.25 | 0.50 | 1 | 2 | 4 | 8 | 16 | 20 |
|---|---|---|---|---|---|---|---|---|
| LoRA (r=16) | 0.645 | 0.642 | 0.645 | 0.642 | 0.644 | 0.640 | 0.638 | 0.635 |
| LoRA (r=64) | 0.646 | 0.644 | 0.646 | 0.646 | 0.639 | 0.634 | 0.626 | 0.626 |
| LoRA (r=256) | 0.644 | 0.645 | 0.643 | 0.639 | 0.636 | 0.630 | 0.618 | 0.617 |
| Full Finetuning | 0.625 | 0.624 | 0.625 | 0.616 | 0.599 | 0.583 | 0.551 | 0.545 |

Table S2: Starcoder-Python Results (Forgetting Average)

| Num. tokens (billions) | 0.25 | 0.50 | 1 | 2 | 4 | 8 | 16 | 20 |
|---|---|---|---|---|---|---|---|---|
| LoRA (r=16) | 0.162 | 0.157 | 0.161 | 0.155 | 0.165 | 0.156 | 0.152 | 0.158 |
| LoRA (r=64) | 0.163 | 0.167 | 0.150 | 0.166 | 0.164 | 0.168 | 0.179 | 0.163 |
| LoRA (r=256) | 0.162 | 0.161 | 0.140 | 0.170 | 0.193 | 0.196 | 0.203 | 0.202 |
| Full Finetuning | 0.155 | 0.152 | 0.165 | 0.158 | 0.224 | 0.238 | 0.283 | 0.293 |

Table S3: OpenWebMath Results (GSM8K)

| Num. tokens (billions) | 0.25 | 0.50 | 1 | 2 | 4 | 8 | 16 | 20 |
|---|---|---|---|---|---|---|---|---|
| LoRA (r=16) | 0.640 | 0.641 | 0.646 | 0.641 | 0.643 | 0.641 | 0.636 | 0.637 |
| LoRA (r=64) | 0.640 | 0.640 | 0.638 | 0.637 | 0.643 | 0.634 | 0.634 | 0.627 |
| LoRA (r=256) | 0.638 | 0.638 | 0.637 | 0.634 | 0.633 | 0.620 | 0.620 | 0.616 |
| Full Finetuning | 0.634 | 0.634 | 0.640 | 0.630 | 0.629 | 0.619 | 0.613 | 0.618 |

Table S4: OpenWebMath Results (Forgetting Average)

| Epoch | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| LoRA (r=16) | 0.197 | 0.275 | 0.358 | 0.338 | 0.324 |
| LoRA (r=64) | 0.249 | 0.339 | 0.417 | 0.392 | 0.405 |
| LoRA (r=256) | 0.299 | 0.385 | 0.498 | 0.437 | 0.466 |
| Full Finetuning | 0.302 | 0.464 | 0.470 | 0.497 | 0.416 |

Table S5: Magicoder-Evol-Instruct-110K Results (HumanEval pass@1)

| Epoch | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| LoRA (r=16) | 0.653 | 0.648 | 0.652 | 0.646 | 0.609 |
| LoRA (r=64) | 0.652 | 0.651 | 0.632 | 0.580 | 0.510 |
| LoRA (r=256) | 0.655 | 0.659 | 0.631 | 0.552 | 0.517 |
| Full Finetuning | 0.595 | 0.579 | 0.512 | 0.446 | 0.414 |

Table S6: Magicoder-Evol-Instruct-110K Results (Forgetting Average)

| Epoch | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| LoRA (r=16) | 0.447 | 0.528 | 0.580 | 0.578 | 0.569 |
| LoRA (r=64) | 0.527 | 0.588 | 0.624 | 0.624 | 0.595 |
| LoRA (r=256) | 0.557 | 0.607 | 0.625 | 0.634 | 0.594 |
| Full Finetuning | 0.604 | 0.641 | 0.642 | 0.619 | 0.599 |

Table S7: MetaMathQA Results (GSM8K)

| Epoch | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| LoRA (r=16) | 0.628 | 0.617 | 0.616 | 0.616 | 0.596 |
| LoRA (r=64) | 0.617 | 0.609 | 0.608 | 0.586 | 0.568 |
| LoRA (r=256) | 0.613 | 0.607 | 0.599 | 0.584 | 0.567 |
| Full Finetuning | 0.598 | 0.599 | 0.590 | 0.572 | 0.559 |

Table S8: MetaMathQA Results (Forgetting Average)
## Table S9: Tülu-v2-mix Results

| Epoch | 2 | 4 | 6 |
|---|---|---|---|
| LoRA (r=16) | 5.681 | 5.997 | 5.712 |
| LoRA (r=64) | 5.597 | 5.725 | 5.944 |
| LoRA (r=256) | 5.788 | 5.834 | 5.894 |
| Full Finetuning | 5.825 | 5.838 | 5.862 |

Table S10: Tülu-v2-mix MT-Bench

| Epoch | 2 | 4 | 6 |
|---|---|---|---|
| LoRA (r=16) | 0.491 | 0.502 | 0.504 |
| LoRA (r=64) | 0.503 | 0.509 | 0.504 |
| LoRA (r=256) | 0.502 | 0.496 | 0.492 |
| Full Finetuning | 0.507 | 0.504 | 0.502 |

Table S11: Tülu-v2-mix MMLU

| Epoch | 2 | 4 | 6 |
|---|---|---|---|
| LoRA (r=16) | 0.251 | 0.275 | 0.280 |
| LoRA (r=64) | 0.285 | 0.270 | 0.295 |
| LoRA (r=256) | 0.296 | 0.335 | 0.301 |
| Full Finetuning | 0.324 | 0.291 | 0.303 |

Table S12: Tülu-v2-mix GSM8K

| Epoch | 2 | 4 | 6 |
|---|---|---|---|
| LoRA (r=16) | 0.650 | 0.657 | 0.657 |
| LoRA (r=64) | 0.649 | 0.655 | 0.647 |
| LoRA (r=256) | 0.653 | 0.649 | 0.629 |
| Full Finetuning | 0.660 | 0.652 | 0.621 |

Table S13: Tülu-v2-mix Forgetting Average

## E Supplementary Figures For SVD Analysis

![29_image_0.png](29_image_0.png)

Figure S8: SVD analysis for the 4096 × 4096 matrix W, at layer 26. Left: singular values for base weights, finetuned weights, and their difference. Right: cumulative explained variance. Notice that for all three matrices, a rank > 1500 is needed to explain 90% of the variance.

![29_image_1.png](29_image_1.png)

![29_image_2.png](29_image_2.png)

(a) Spectrum of A and A + cB as well as cB for c = 0.1. Notably, A, cB, and A + cB are all high rank. (b) Mean absolute difference between spectra of A and A + cB for various c.

Figure S9: Analyzing the spectra of the sum of two 1000 × 1000 Gaussian i.i.d. matrices. A and B are 1000 × 1000 random matrices with i.i.d. standard normal Gaussian entries.

## F Solution Generation Diversity On HumanEval

For the best set of Llama-2-7B models trained on Magicoder, we evaluate how their pass@k metric on the HumanEval benchmark increases as we increase the parameter k, which controls the acceptance criterion. The pass@k metric (Chen et al., 2021) is defined as

$$\mathrm{pass}@k:=\mathbb{E}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right],\qquad(1)$$

where n is the number of generations, c the number of correct generations, and k determines the size of the sample set of generations considered for acceptance. Assuming we sample from the model outputs, i.e. sampling temperature T > 0, increasing k will increase the diversity of generations, and increase the likelihood of a passing generation being present in a random subset of size k.

Figure S10 reports pass@k for the LoRA models trained on the Magicoder dataset as well as the base Llama-2-7B model. For all models, as we increase k, pass@k consistently and monotonically improves. Finetuned models' scores are substantially higher than the base model's. At k = 1, full finetuning outperforms the LoRA models, whose scores are ordered from largest to smallest rank, as expected. As k increases we observe all models improving their pass@k scores, with the gap between them shrinking once k > 16. We note that full finetuning is superior across all values of k with temperature 0.8. This complements the results in Fig. 1, which used a temperature of 0.2 and pass@1, where the improvements upon r = 256 at epoch 4 are less clear.
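Eq. (1) is typically evaluated with the numerically stable product form popularized by Chen et al. (2021) rather than with raw binomial coefficients. A minimal sketch follows; the values of n, c, and k in the usage lines are illustrative, not measured results.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator of Eq. (1): 1 - C(n-c, k) / C(n, k),
    computed as 1 - prod_{i=n-c+1}^{n} (1 - k/i) for numerical stability."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct generation
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example matching the setup of Fig. S10, with n = 256 generations per problem:
print(pass_at_k(n=256, c=8, k=1))    # equals 8/256
print(pass_at_k(n=256, c=8, k=64))   # close to 1: large k rewards diversity
```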
![30_image_0.png](30_image_0.png)

Figure S10: HumanEval pass@k **for models trained on the Magicoder dataset.** For every model, we sample 256 independent generations with temperature 0.8.

## G Training Datasets

## G.1 MetaMathQA (Math IFT)

The MetaMathQA dataset (Yu et al. (2023), https://huggingface.co/datasets/meta-math/MetaMathQA) contains 395,000 samples that are bootstrapped from the GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) training sets. These samples are augmented by GPT-3.5 using the following methods:

- Answer Augmentation (155k samples, Yu et al. (2023)): this method, proposed by the MetaMathQA authors, generates multiple reasoning paths for a given mathematical question and filters for generated reasoning paths that contain the correct final answer.
- Rephrasing (130k samples, Yu et al. (2023)): this method, proposed by the MetaMathQA authors, uses GPT-3.5 to rephrase questions. They check for the correctness of rephrased questions by using few-shot Chain of Thought prompting to compare reasoning chains and proposed answers with ground truth answers.

Both Self-Verification (Weng et al., 2022) and FOBAR (Jiang et al., 2024b) fall under the category of "backward reasoning," where the question starts with a given condition and requires reasoning backwards to solve for an unknown variable. In order to generate new mathematical questions, a numerical value in the original question is masked as a variable X, and the question is rephrased accordingly.

- Self-Verification (55k samples, Weng et al. (2022)): the question is rephrased into a declarative statement followed by a question about the masked variable X.
- FOBAR (55k samples, Jiang et al. (2024b)): this approach is similar to Self-Verification but directly appends the answer to the question using the template "If we know the answer to the above question is Aˆc, what is the value of unknown variable x?"

MetaMathQA samples are organized by 4 columns: type, original_question, query and response. We include two full examples below:

Example G.1: MetaMathQA

type: GSM_Rephrased

query: Scott wants to prepare and freeze a large portion of ratatouille. He purchases 5 pounds of eggplants and 4 pounds of zucchini at a rate of $2.00 per pound from the farmers' market. Additionally, he needs 4 pounds of tomatoes that cost $3.50 per pound, 3 pounds of onions priced at $1.00 per pound, and a pound of basil sold for $2.50 per half pound. If this quantity yields 4 quarts, what is the cost per quart?

original_question: Scott wants to make and freeze a large batch of ratatouille. At the farmers' market he buys 5 pounds of eggplants and 4 pounds of zucchini at $2.00 a pound. He needs 4 pounds of tomatoes that are $3.50 a pound. The onions are $1.00 a pound and he needs 3 pounds. Then he needs a pound of basil which is sold for $2.50 per half pound. If this yields 4 quarts, how much does each quart cost?

response: The cost of the eggplants is 5 pounds x $2.00 per pound = $10.00. The cost of the zucchini is 4 pounds x $2.00 per pound = $8.00. The cost of the tomatoes is 4 pounds x $3.50 per pound = $14.00. The cost of the onions is 3 pounds x $1.00 per pound = $3.00. The cost of the basil is 1 pound x ($2.50 per half pound x 2) = $5.00. The total cost for all the ingredients is $10.00 + $8.00 + $14.00 + $3.00 + $5.00 = $40.00. Since this quantity yields 4 quarts, the cost per quart is $40.00 / 4 quarts = $10.00 per quart.
#### 10 The answer is: 10

Example G.2: MetaMathQA

type: MATH_SV

query: If a school bus leaves school with X students on board, and one-half of the students get off the bus at each of the first three stops, The number of students remaining on the bus after the third stop is 6. What is the value of unknown variable X?

original_question: If a school bus leaves school with 48 students on board, and one-half of the students get off the bus at each of the first three stops, how many students remain on the bus after the third stop?

response: To solve this problem, we need to determine the value of x, which represents the initial number of students on the bus. Let's break down the information given: Number of students remaining on the bus after the third stop: 6 Number of students getting off the bus at each stop: 1/2 Number of stops: 3 We can set up the equation as follows: Number of students on the bus * (1/2)^3 = Number of students remaining on the bus after the third stop x * (1/2)^3 = 6 Let's simplify and solve for x: x * (1/8) = 6 To isolate x, we multiply both sides of the equation by 8: x * (1/8) * 8 = 6 * 8 x = 48 The value of x is 48. The answer is: 48

## G.2 Magicoder-Evol-Instruct-110K (Code IFT)

Example G.3: Magicoder-Evol-Instruct-110K

```
instruction: Please amend the subsequent Python script so that it includes a
'while' loop rather than the existing 'for' loop, which iterates through the
items of an integer list. The script currently has a bug where it attempts to
print an object that is outside the bounds of the list. Fix this error and
modify the script to use 'while' instead of 'for' loop. Ensure your script
correctly handles empty lists.

'''python
# Establish an integer list
arr = [1, 2, 3, 4]

# Determine the length of the list
n = len(arr)

# Traverse the list and output each individual element
for i in range(n+1):
    print(arr[i])
'''
```

response:

```
'''python
# Establish an integer list
arr = [1, 2, 3, 4]

# Determine the length of the list
n = len(arr)

# Initialize index at 0
i = 0

# Traverse the list and output each individual element
while i < n:
    print(arr[i])
    i += 1
'''
In the given code, it tries to access 'arr[n]' which is out of bounds as python
list indexing starts from 0 and ends at n-1 for a list of length n. Also, I
used 'while' loop as per your request. By initializing 'i=0' before the loop
and 'i += 1' within the loop, we can iterate over the list. This iteration
continues until 'i' is less than 'n', the length of the list. The 'while' loop
will not proceed if the list is empty so it correctly handles empty lists.
```

## G.3 StarCoder Python (Code CPT)

Example G.4: Starcoder-Python

```
'''python
"""
function of invoking Gitee API
"""

import base64
import logging

import requests
from flask import current_app
from requests import exceptions

logger = logging.getLogger(__name__)

ORG_URL = "https://gitee.com/api/v5/orgs"
REPO_URL = "https://gitee.com/api/v5/repos"

def get_request(url, params):
    """ get request """
    logger.debug("Get request, connect url: %s", url)
    try:
        response = requests.get(url, params=params)
        return True, response
    except exceptions.ConnectionError as err:
        logger.error(err)
        return False, 'connection error'
    except IOError as err:
        logger.error(err)
        return False, 'IO error'
'''
more functions truncated...
```
## G.4 OpenWebMath (Math CPT)

Example G.5: OpenWebMath

url: http://math.stackexchange.com/questions/222974/probability-of-getting-2-aces-2-kings-and-1-queen-in-a-five-card-poker-hand-pa

text: # Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand (Part II)

So I reworked my formula in method 1 after getting help with my original question - Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand. But I am still getting results that differ...although they are much much closer than before, but I must still be making a mistake somewhere in method 1. Anyone know what it is?

Method 1 $P(2A \cap 2K \cap 1Q) = P(Q|2A \cap 2K)P(2A|2K)P(2K)$

$$= \frac{1}{12}\frac{{4 \choose 2}{46 \choose 1}}{50 \choose 3}\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}$$

$$= \frac{(6)(17296)(6)(46)}{(2598960)(19600)(12)}$$

$$= 4.685642 * 10^{-5}$$

Method 2

$$\frac{{4 \choose 2} {4 \choose 2}{4 \choose 1}}{52 \choose 5} = \frac{3}{54145} \approx 5.540678 * 10^{-5}$$

- Please make an effort to make the question self-contained and provide a link to your earlier question. - Sasha Oct 28 '12 at 19:56 I think we would rather ahve you edit your initial question by adding your new progress. This avoids having loss of answer and keeps track of progress - Jean-Sébastien Oct 28 '12 at 19:56 But there already answers to my original question so those answers would not make sense now that I am using a new formula for method 1. - sonicboom Oct 28 '12 at 20:03 Conditional probability arguments can be delicate. Given that there are exactly two Kings, what's the $46$ doing? That allows the possibility of more Kings. - André Nicolas Oct 28 '12 at 20:26 The $46$ is because have already taken two kings from the pack leaving us with 50. And now we have chosen 2 aces and we have to pick the other 1 card from the 50 remaining cards less the 4 aces? - sonicboom Oct 28 '12 at 20:42 show 1 more comment

$$\frac{1}{11}\frac{{4 \choose 2}{44 \choose 1}}{48 \choose 3}\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}$$

If you wrote this as

$$\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}\frac{{4 \choose 2}{44 \choose 1}}{48 \choose 3}\frac{{4 \choose 1}{40 \choose 0}}{44 \choose 1}$$

it might be more obvious why they are the same.

date: 2014-03-07 11:01:44

There is often some confusion about the memory gains that vanilla LoRA offers, both in theory and in practice. In Appendix H we discuss some of the theoretical benefits of LoRA and show how it can enable training both on GPUs with less memory and on fewer total GPUs (in the multi-GPU setting). In Appendix I we show how LoRA in practice leads to memory savings relative to full finetuning, but can in fact lead to slower throughput for particular hardware and software settings.

## H Theoretical Memory Efficiency Gains with LoRA for Single- and Multi-GPU Settings

Modern systems for training neural networks store and operate on the following objects (following the conventions in Rajbhandari et al. (2020)). Most memory requirements relate to *model states*, which include:

- parameter weights
- gradients
- higher-order optimization quantities, such as the optimizer momentum and variance in the Adam optimizer, and the momentum in the Lion optimizer

The remaining memory requirements come from the *residual states*:

- activations (which depend on batch size and maximum sample sequence length)
- temporary buffers for intermediate quantities in the forward and backward pass, which require more memory as the batch size and maximum sequence length increase
LoRA offers memory savings with respect to the *model states*. The next two sections describe these memory savings in the single-GPU and multi-GPU settings, with examples loosely inspired by Rajbhandari et al. (2020). The data stored at single precision includes:

- a "master copy" of the tuned parameter weights
- the gradient
- all optimizer states (both momentum and variance for Adam, and just momentum for Lion)

For simplicity, we do not consider mixed-precision training, which involves storing critical data at single precision (fp32; 4 bytes per number) while performing some computations at half precision (fp16 or bfloat16; 2 bytes per number).

## H.1 Training on a Single GPU

In the single-GPU setup, the difference in memory requirements between LoRA and full finetuning is particularly drastic when using the Adam optimizer (Hu et al., 2021; Rajbhandari et al., 2020). Storing the master weights in fp32 requires 4 bytes per parameter, while storing the gradient in fp32 requires 4 bytes *per tuned parameter*. In order to maintain the optimizer state in fp32 for Adam, 8 bytes per tuned parameter are required: 4 bytes for the momentum term and 4 bytes for the variance term. Let Ψ be the number of model parameters. Therefore, in the Adam full finetuning setting of a Ψ = 7B parameter model, the total memory requirements are at least roughly 4 × Ψ + 4 × Ψ + 8 × Ψ = 112 GB.

The Lion optimizer only uses a momentum term in the gradient calculation, so the variance term in Adam disappears. In the Lion full finetuning setting of a Ψ = 7B parameter model, the total memory requirements are therefore roughly 4 × Ψ + 4 × Ψ + 4 × Ψ = 84 GB.

LoRA, on the other hand, does not calculate the gradients or maintain optimizer states (momentum and variance terms) *for most of the parameters*, so the amount of memory used for these terms is drastically reduced.

| 7B Training | 1 GPU    | 8 GPUs   | 16 GPUs   | 32 GPUs   | 64 GPUs   |
|-------------|----------|----------|-----------|-----------|-----------|
| Adam        | 112 GB   | 14 GB    | 7 GB      | 3.5 GB    | 1.75 GB   |
| Adam + LoRA | 15.12 GB | 1.89 GB  | 0.945 GB  | 0.4725 GB | 0.236 GB  |
| Lion        | 84 GB    | 10.5 GB  | 5.25 GB   | 2.625 GB  | 1.3125 GB |
| Lion + LoRA | 14.84 GB | 1.855 GB | 0.9275 GB | 0.464 GB  | 0.232 GB  |

Table S14: **Theoretical memory required to store the model and optimizer state during training** for a 7B parameter model. Note that the numbers exclude memory needed to store activations.

A LoRA setting with Adam that only tunes matrices amounting to 1% of the total parameter count (e.g., a Ψ = 7B base model with 70M additional parameters used by LoRA) requires roughly 4 × Ψ(1 + 0.01) + 4 × Ψ × 0.01 + 8 × Ψ × 0.01 = 29.12 GB of memory. Theoretically this can be reduced further to 2 × Ψ + 16 × Ψ × 0.01 = 15.12 GB *if the non-tuned parameter weights are stored in bfloat16*. We use this assumption for the subsequent examples. Note again that these numbers do not take into consideration sample batch size or sequence length, which affect the memory requirements of the activations.
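The arithmetic above (and its sharded extension in Table S14) is easy to script. The following minimal Python sketch is our own illustration, not code from the paper; it assumes fp32 tuned weights, gradients, and optimizer terms, bfloat16 frozen base weights under LoRA, and even sharding of all model states across GPUs:

```python
def per_gpu_memory_gb(psi, lora_frac=0.0, optimizer="adam", n_gpus=1):
    """Theoretical model-state memory per GPU in GB (activations excluded)."""
    opt_terms = 2 if optimizer == "adam" else 1  # Adam: momentum + variance; Lion: momentum
    if lora_frac == 0.0:
        # Full finetuning: fp32 weights, gradients, and optimizer states for all parameters.
        total_bytes = (4 + 4 + 4 * opt_terms) * psi
    else:
        # LoRA: frozen base weights in bf16; only the tuned fraction carries
        # fp32 master weights, gradients, and optimizer states.
        tuned = lora_frac * psi
        total_bytes = 2 * psi + (4 + 4 + 4 * opt_terms) * tuned
    return total_bytes / n_gpus / 1e9

psi = 7e9  # 7B parameters
print(per_gpu_memory_gb(psi))                            # 112.0 (Adam, 1 GPU)
print(per_gpu_memory_gb(psi, lora_frac=0.01))            # 15.12 (Adam + LoRA, 1 GPU)
print(per_gpu_memory_gb(psi, n_gpus=8))                  # 14.0  (Adam, sharded over 8 GPUs)
print(per_gpu_memory_gb(psi, lora_frac=0.01, n_gpus=8))  # 1.89  (Adam + LoRA, 8 GPUs)
```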
## H.2 Training on Multiple GPUs with Fully Sharded Data Parallelism

Past approaches for training LLMs across multiple GPUs include model parallelism, where different layers of the LLM are stored on different GPUs. However, this requires high communication overhead and has very poor throughput (Rajbhandari et al., 2020). Fully Sharded Data Parallelism (FSDP) instead shards the parameters, the gradient, and the optimizer states across GPUs. This is incredibly efficient and is actually competitive with the memory savings offered by LoRA in certain settings: sharding the parameter and optimizer states across N devices can result in less per-GPU memory usage than LoRA. LoRA, on the other hand, enables training on GPUs with far less memory and enables training without needing as many GPUs to shard across.

For example, in the Adam full finetuning setting of a Ψ = 7B parameter model on 8 GPUs with FSDP, the total memory requirement for *each* GPU is roughly (4 × Ψ + 4 × Ψ + 8 × Ψ)/8 = 14 GB. This reduces further to 3.5 GB for FSDP with 32 GPUs (see Table S14). The LoRA with Adam setup on 8 GPUs (where Ψ = 7B for the base model and there are 70M additional LoRA parameters) requires roughly (2 × Ψ + 16 × Ψ × 0.01)/8 = 1.89 GB of memory per GPU. With 32 GPUs this decreases further to 0.4725 GB.

Standard industry-level GPUs have on-device memory between 16 GB (e.g., V100s) and 80 GB (e.g., A100s and H100s). As Table S14 demonstrates, the per-GPU memory requirements for training a 7B parameter model decrease drastically as the number of GPUs increases. The memory requirements for training a 7B model with Adam + LoRA on a single GPU are 15.12 GB, but the same per-GPU memory requirement for training a 7B model with Adam but *without* LoRA on 8 GPUs is 14 GB. In this 8-GPU scenario, the efficiency gains from LoRA disappear.

Table S15 applies similar calculations to a 70B parameter model. Finetuning such a large model on 8 GPUs is *only* possible using a technique like LoRA; where Adam requires 140 GB per GPU, Adam + LoRA requires 18.9 GB per GPU. The efficiency gains of LoRA relative to FSDP therefore depend on the model size and GPU availability/cost considerations. We do the same analysis for a 405B parameter model to highlight how LoRA is beneficial as model size scales (Table S16).

| 70B Training | 1 GPU    | 8 GPUs   | 16 GPUs  | 32 GPUs  | 64 GPUs   |
|--------------|----------|----------|----------|----------|-----------|
| Adam         | 1.12 TB  | 140 GB   | 70 GB    | 35 GB    | 17.5 GB   |
| Adam + LoRA  | 151.2 GB | 18.9 GB  | 9.45 GB  | 4.725 GB | 2.36 GB   |
| Lion         | 840 GB   | 105 GB   | 52.5 GB  | 26.25 GB | 13.125 GB |
| Lion + LoRA  | 148.4 GB | 18.55 GB | 9.275 GB | 4.64 GB  | 2.32 GB   |

Table S15: **Theoretical memory required to store the model and optimizer state during training** for a 70B parameter model.

| 405B Training | 1 GPU | 8 GPUs  | 16 GPUs | 32 GPUs | 64 GPUs | 128 GPUs | 256 GPUs |
|---------------|-------|---------|---------|---------|---------|----------|----------|
| Adam          | 6480  | 810     | 405     | 202.5   | 101.25  | 50.625   | 25.3     |
| Adam + LoRA   | 874.8 | 109.35  | 54.65   | 27.34   | 13.67   | 6.83     | 3.42     |
| Lion          | 4860  | 607.5   | 303.75  | 151.875 | 75.94   | 37.97    | 18.98    |
| Lion + LoRA   | 858.6 | 107.325 | 53.66   | 26.83   | 13.42   | 6.71     | 3.35     |

Table S16: **Theoretical memory required to store the model and optimizer state during training** for a 405B parameter model. Units are in gigabytes (GB).

## I LoRA Throughput and Memory Measurements

We report training efficiency comparisons between full finetuning and models trained with LoRA for various choices of rank. We measured both the throughput (in tokens per second) and peak active memory (in GB) for training runs representative of the experiments reported in the paper. We performed the runs using a single node of 8×H100-80GB GPUs.
We used a per-GPU micro batch size of 1 and targeted all linear layer weights with LoRA (i.e., both Attention and MLP). In Figure S11 we observe that there is a significant gap between full finetuning and LoRA runs, related to the additional overheads of the LoRA computations. In general, **LoRA leads to an approximately 15% reduction in throughput for a given batch size.** LoRA with higher ranks is slower than lower ranks across all batch sizes; this is particularly noticeable for rank r = 512. Similarly, LoRA settings with higher batch sizes have slightly higher throughput relative to lower batch sizes.

Some of the slowdown is intrinsically related to the overheads of performing LoRA, since in practice it involves more computations of intermediate activations. However, we note that we did not optimize the LoRA implementation and used the publicly available HuggingFace peft library, which might be amenable to further optimizations that could reduce the gap in throughput.

For peak memory, we notice that for small batch sizes, LoRA provides a substantial reduction in peak memory (∼40%). This is expected since the optimizer state is significantly smaller when using parameter-efficient methods. However, as batch size increases, the size of intermediate activations increases proportionally, dominating the required memory. We limit the per-GPU micro batch size to 8 to prevent out-of-memory errors, so for batch sizes 64 and above, we perform gradient accumulation. This leads to the throughput and memory stabilizing for batch size 64 and above, with around ∼15% memory savings for larger batch sizes.

![38_image_0.png](38_image_0.png)

Figure S11: **Throughput and Memory Measurements for LoRA vs. full finetuning**. (left) Training throughput measured in tokens per second across all 8 GPUs. (right) Peak active memory used by the training process in a single GPU (max GPU memory is 80 GB).
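To make the accumulation scheme concrete, the following PyTorch sketch (our own illustration; the model and data here are stand-ins) simulates a large batch by accumulating gradients over several micro batches before each optimizer step, which is why throughput and memory stabilize once the per-GPU micro batch size is capped:

```python
import torch

model = torch.nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = [(torch.randn(8, 512), torch.randn(8, 512)) for _ in range(32)]  # micro batches of 8
accum_steps = 8  # simulated batch size = 8 micro batches x 8 samples = 64

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()  # scale so accumulated gradients average correctly
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one update per simulated large batch
        optimizer.zero_grad()
```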
Review 1: Summary: The authors provide an in-depth analysis of a popular parameter-efficient fine-tuning method, LoRA. They compare LoRA with full fine-tuning in both math and coding domains and find that LoRA tends to forget the source domain less, while learning less than full fine-tuning, which is more sample-efficient. They also show that LoRA is a regularizer that promotes generation diversity. Another interesting finding is that, contrary to popular belief, full-parameter fine-tuning finds high-rank weight perturbations, which shakes one of the key motivations for utilizing LoRA. Strengths and Weaknesses: ### Strengths - A large number of useful contributions are made throughout this paper. This is a very insightful paper that will be of great interest to the community. - The experimental setup is sound and the results are well presented. The authors fine-tune the models several times to reduce the variance in the results, which is a good practice and will help to ensure the results are reliable. Additionally, the authors utilize open-access datasets and models, which will help in the reproducibility of the results and foster further research in this area. - The hyper-parameter analysis for LoRA is a great addition. It's widely known that LoRA is sensitive to hyper-parameters, and the authors provide a comprehensive analysis of the hyper-parameters that affect LoRA the most. Interestingly, they find that selecting appropriate target modules is more important than rank itself. Future work may focus on understanding why this is the case. ### Weaknesses - While the authors provide inference hyperparameters for HumanEval and GSM8K, they do not provide them for HellaSwag, Arc, and WinoGrande. This information is vital for reproducibility and should be included in the paper. Additionally, for HumanEval and GSM8K, the authors should provide the top-p and top-k hyperparameters used, if any. - The analysis on sample diversity for LoRA could be improved. The authors utilize string matching to measure diversity, which is a very limited metric, especially for code generation tasks. Requested Changes: - Please include the missing inference hyperparameters used for evaluation in the paper. - What is the impact on pass@k, k>1, for LoRA compared to full fine-tuning? Given the claim that LoRA promotes generation diversity, it would be interesting to see how this affects the pass@k metric with ks greater than 1. For this experiment, the authors should consider using a higher temperature parameter than 0.2. [1] finds a temperature of 0.8 to be optimal for this task. One way to show this is to graph the pass@k metric against k for both LoRA and full fine-tuning. [1] Chen et al., "Evaluating Large Language Models Trained on Code" Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper conducts a comprehensive analysis of LoRA fine-tuning and full fine-tuning. Under various fine-tuning tasks, the authors make several interesting observations. On the downstream tasks, LoRA underperforms full fine-tuning. However, LoRA incurs less loss of performance on pretraining tasks. Also, LoRA achieves stronger regularization compared to common techniques. The authors also analyzed the rank of perturbations from full fine-tuning and observed that it is much larger than the ranks used in LoRA, possibly explaining the gap between LoRA and full fine-tuning. Strengths and Weaknesses: Strengths: 1.
The paper is well-written and the key message is presented clearly, supported by strong empirical evidence. 2. This paper conducts a deep analysis of LoRA and full finetuning. One notable thing is that the authors have evaluated on many code and math datasets. 3. The question that this paper tries to answer is important in practice. With the huge computational resources required to train LLMs, we often face the problem of choosing between LoRA and full fine-tuning. I think this paper provides a valuable contribution on this front. Weaknesses: 1. The authors have found in Section 4.1 that LoRA underperforms full fine-tuning on math and coding tasks. This seems to be expected, given that LoRA uses far fewer parameters. I wonder if this finding is novel or just verifies an existing hypothesis? 2. I think this paper lacks some discussion on the existing practice of fine-tuning on math and code datasets. For example, one question would be: when people fine-tune on these math or code datasets, what fine-tuning strategies do they often use? Are there tradeoffs that people have observed before, or is this the first work? 3. There is some ambiguity around the regularization here. In section 4.4, the authors mentioned that "we define regularization (loosely) as a training mechanism that keeps the finetuned LLM similar to the base LLM". However, this does not seem to be the common meaning of regularization, which I think refers to techniques that make the training trajectory easier. So my question is: why would people care about the regularization property as defined in the paper? It would also be good to add more clarification on what the authors mean by regularization in the paper. 4. Beyond code and math tasks, I think another important and common fine-tuning direction is fine-tuning for chatting purposes. I wonder if the authors have done experiments on this. 5. The paper concluded with some suggestions on practical takeaways for LoRA fine-tuning, i.e. Section 4.6. However, I find this part unrelated to the topic of this paper, which is to study the tradeoff between LoRA and full finetuning. I would suggest the paper add more aspects on how practitioners can better choose between LoRA and full fine-tuning, given the observations from this paper. Requested Changes: See weaknesses for details. For me the biggest concern is that this paper lacks a clear discussion of the existing studies comparing LoRA and full fine-tuning, e.g. performance and efficiency. I would suggest the authors describe the existing consensus regarding this tradeoff clearly in the introduction. If there is no study on this before, it should also be mentioned. Also, the authors should clarify the meaning of the term "regularization" used in the paper. I think the current description is too vague and short. Broader Impact Concerns: I do not have concerns regarding the broader impacts of this paper. ================================================== Review 3: Summary: This work empirically investigates the effectiveness of low-rank-based finetuning of language models for code and maths domains. The authors look at performance on old domains under domain shift (forgetting) versus task performance and generation diversity (learning). The baseline each time is full finetuning of the LLM. Their investigation focuses on Llama 2 models of different sizes only. The authors find that LoRA underperforms full finetuning in code generation, but that it does mitigate forgetting, confirming the title of the paper.
In some rare cases, LoRA-trained models perform just as well as full-finetuned models in the target domain, while forgetting less about the source domain. However, these outliers are not sufficient to say that LoRA learns just as much while forgetting less. The work also compares the regularization effect of LoRA to regularization techniques such as dropout, and finds that LoRA compares favorably, also resulting in more diverse outputs than full finetuning. The authors hypothesize that the low-rank restriction proved effective in other tasks because the tasks were simple, and show hints that this holds by comparing the ranks of weights of LoRA-finetuned models to those of full finetuning models. Finally, they make concrete recommendations for finetuning with LoRA, namely to use it for instruction fine-tuning but to favor full finetuning for continued pretraining. Strengths and Weaknesses: Strengths: - The work investigates a large number of settings and benchmarks: continued pretraining vs instruction finetuning, benchmarks for Python code and maths generation, using multiple LoRA configurations per setting. - The authors attempt to understand why full finetuning learns more by investigating the regularization effect of LoRA and looking at the rank of weight matrices post-training. Although the observation that "less flexible models change less" may seem trivial, this investigation provides additional insight into this specific finetuning method, and supporting evidence that LoRA finetuning is not always a good idea or an easy drop-in replacement for full finetuning. - The authors provide practical recommendations for LoRA-based finetuning (for IFT vs CPT), making this paper potentially useful reference material for practitioners - even if the used datasets are perhaps less common than some text datasets. Weaknesses: - The authors ask an important question in section 4.3: "The nontrivial question is: do LoRA and full finetuning differ in how they tradeoff learning and forgetting? Can LoRA achieve similar target domain performance but with diminished forgetting?". I don't believe that this specific question is answered in the paper as there is insufficient data to answer positively or negatively. - For code and math, it does look like we should use full finetuning (from section 4.1). - But: the authors do observe that there are outliers where LoRA performs as well while forgetting less. As far as I can tell, they do not draw conclusions from these observations, however, nor make recommendations for when to use LoRA. - This work would have been stronger if full finetuning runs were run until they reach similar forgetting-or-learning performance, to enable direct comparison. At the moment the LoRA and full finetuning runs mostly live in different parts of the tradeoff curve. Of course, full finetuning is expensive so it is understandable that there are fewer points for this setup. - The authors show results for multiple ranks in the appendix, but as far as I can tell only for a single value of alpha for some code runs, while keeping it to 2*rank for the maths runs. Given that there are many setup differences between code and maths runs, including LoRA hyperparameters, it may be less meaningful to compare LoRA performance between these two settings. That said, I'm not sure if using the same LoRA configuration in both domains is useful either. - The insights about forgetting and learning seem to depend strongly on the domain and dataset.
The paper title makes the claim more general than it may be in practice. LoRA is often used in domains not related to maths and code generation, and we cannot say whether the same insights will hold there. Most LoRA variations seem to look at GLUE or other text datasets. "Forgetting" is more difficult to test when going from text to text domains, but learning can probably be measured! I don't think restricting to "just" Llama-2 is a downside of this study; this is a large model that's expensive to train/finetune, and the authors try two very large versions. Obviously the insights would be more general if other models were tried as well, but I don't think that's a prerequisite for acceptance. Additionally: - I think the claims in the submission are accurate and convincing. My only (small) concern is that the general claim in the title has a caveat "in code and maths generation tasks". The authors provide clear evidence that LoRA learns less and forgets less, characterized by the learning/forgetting tradeoff curve. The study seems broad, looking at multiple datasets in domains in detail, even if it only tackles math and code (and not, say, text generation in a specific domain). The authors make no claims that LoRA is as good as full finetuning, as only a handful of datapoints show this. - I believe this study is interesting for TMLR's audience, especially the learning/forgetting tradeoff and the practical recommendations for LoRA-based finetuning on small domains. Choosing whether to use LoRA or not based on specific points on the tradeoff curve could be useful in practice. Additionally, this study is one of the first to show that LoRA is not always a good drop-in replacement for full finetuning on specific domains. Requested Changes: - Please provide more information about the experimental setup, especially the motivation for using different LoRA configurations between maths and code settings. Are hyperparameter choices motivated by the original works or benchmark datasets? - Please provide an answer for the question you ask in section 4.3, even if that answer is "there is insufficient data to answer this question". - For Figure 4C, we see performance on GSM8k decrease for most runs wrt the base model, for both full finetuning and LoRA. Does this indicate a mismatch between the GSM8k and WebMath dataset? It would be useful to provide more intuition for the lack of learning here as this figure seems to be the odd one out. - There are many knobs to tune for LLM finetuning, especially when switching from one domain to another. How were these parameters chosen? It would be beneficial to explain the training procedure in more detail if possible, including what is done with tokens+embeddings (if anything). - Related: Is the tokenizer frozen for code and math finetuning runs? In the domain shift setting, would a model benefit from adding new tokens or adapting the tokenizer, potentially reaching better performance or preventing forgetting if tokens for code/maths are added? See e.g. "Efficient domain adaptation of language models via adaptive tokenization", Sachidananda 2021. Some discussion on why these parameters are good under domain shift would strengthen the claims, IMO. Broader Impact Concerns: I don't believe there are ethical implications to this work that would require a more detailed impact statement.
================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper conducts a comprehensive analysis of LoRA fine-tuning and full fine-tuning. Under code and math tasks, the authors empirically observed that LoRA learns less and forgets less. After author rebuttal, it received Accept, Accept, and Leaning Accept recommendations. All the reviewers agree that (1) the paper is well-written and the key message is presented clearly, supported by strong empirical evidence, and (2) the question that this paper tries to answer is important in practice, and the paper is insightful and will be of great interest to the community. On the other hand, reviewers have asked many questions, and the authors have also promised many changes. However, the current draft has not yet reflected these changes. Therefore, the AC would like to recommend accept, conditional on the promised changes being made. The authors have promised to add new results, including: 1. the new analysis suggested by the reviewer, using temperature=0.8 and computing HumanEval pass@k for k=1, …, 256 for both LoRA (at different ranks, targeting all modules) and full finetuning. 2. the new experiment results on the Tülu-2 finetuning dataset, evaluating chat quality with MT-Bench. 3. full finetuning experiments for shorter durations; we are currently running additional LoRA experiments with identical scaling factors across math and code (alpha=2r). 4. rerunning the experiments in Fig. 4C with more tokens, to get closer to the regime in the Starcoder experiment. In terms of paper writing, the authors have promised to include the following revisions: 1. detail the experimental setup and evaluation hyper-parameters used, and also link to the evaluation repositories we used, to ensure reproducibility. 2. revise the Related Work section to review previous work studying continual learning on math and code datasets (via continued pretraining or instruction finetuning). 3. try to avoid the use of "regularization", in favor of terminology that is more explicit about learning and forgetting in downstream tasks. 4. better streamline the connection between the core experiments and our hyperparameter sensitivity analyses and recommendations. 5. the introduction will review work comparing LoRA vs full finetuning, and make the point that the jury is still out. 6. Section 4.3 will state that our results cannot conclusively tell whether LoRA and full finetuning live on the same or different tradeoff curves. It will also discuss potential avenues of future work given these results. ==================================================
# Boomerang: Local Sampling on Image Manifolds Using Diffusion Models

Lorenzo Luzi *enzo@rice.edu* Rice University

Paul M. Mayer *pmm3@rice.edu* Rice University

Josue Casco-Rodriguez *jc135@rice.edu* Rice University

Ali Siahkoohi *alisk@rice.edu* Rice University

Richard G. Baraniuk *richb@rice.edu* Rice University

Reviewed on OpenReview: *https://openreview.net/forum?id=NYdThkjNW1*

## Abstract

The inference stage of diffusion models can be seen as running a reverse-time diffusion stochastic differential equation, where samples from a Gaussian latent distribution are transformed into samples from a target distribution that usually reside on a low-dimensional manifold, e.g., an image manifold. The intermediate values between the initial latent space and the image manifold can be interpreted as noisy images, with the amount of noise determined by the forward diffusion process noise schedule. We utilize this interpretation to present Boomerang, an approach for local sampling of image manifolds exploiting the reverse diffusion process dynamics. As implied by its name, Boomerang local sampling involves adding noise to an input image, moving it closer to the latent space, and then mapping it back to the image manifold through a partial reverse diffusion process. Thus, Boomerang generates images on the manifold that are "similar," but nonidentical, to the original input image. We can control the proximity of the generated images to the original by adjusting the amount of noise added. Furthermore, due to the stochastic nature of the partial reverse diffusion process in Boomerang, the generated images display a certain degree of stochasticity, allowing us to obtain ample local samples from the manifold without encountering any duplicates. Boomerang offers the flexibility to work seamlessly with any pretrained diffusion model, such as Stable Diffusion, without necessitating any adjustments to the reverse diffusion process. We present three applications for local sampling using Boomerang. First, we provide a framework for constructing privacy-preserving datasets having controllable degrees of anonymity. Second, we show that using Boomerang for data augmentation increases generalization performance and outperforms state-of-the-art synthetic data augmentation. Lastly, we introduce a perceptual image enhancement framework powered by Boomerang, which enables resolution enhancement.

## 1 Introduction

Generative models have seen a tremendous rise in popularity and applications over the past decade, ranging from image synthesis (Grcić et al., 2021; Dhariwal & Nichol, 2021; Sauer et al., 2022), audio generation (van
den Oord et al., 2016; Klejsa et al., 2019; Kong et al., 2020; 2021), and out-of-distribution data detection (Li et al., 2022; Dionelis et al., 2022) to reinforcement learning and drug synthesis (Kingma & Welling, 2014; Goodfellow et al., 2014; Bond-Taylor et al., 2021).

![1_image_0.png](1_image_0.png)

Figure 1: An example using Boomerang via Stable Diffusion (Rombach et al., 2022). Starting from an original image x0 ∼ p(x0), we add varying levels of noise to the latent variables according to the noise schedule of the forward diffusion process. Boomerang maps the noisy latent variables back to the image manifold by running the reverse diffusion process starting from the reverse step associated with the added noise out of T = 1000. The resulting images are local samples from the image manifold, where the closeness is determined by the amount of added noise. Note how, as t approaches T, the content of Boomerang-generated images *strays* further away from the starting image. While Boomerang here is applied to the Stable Diffusion model, it is applicable to other types of diffusion models, e.g., denoising diffusion models (Ho et al., 2020). Additional images are provided in Appendix A.1. A Boomerang Colab demo is available at https://colab.research.google.com/drive/1PV5Z6b14HYZNx1lHCaEVhId-Y4baKXwt.

One of the key benefits of generative models is that they can generate new samples using training samples from an unknown probability distribution. A family of generative models known as diffusion models has recently gained attention in both the academic and public spotlights with the advent of Dall-E 2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022a), and Stable Diffusion (Rombach et al., 2022).

Generative models estimate the underlying probability distribution, or manifold, of a dataset by learning to sample from this distribution. Sampling with generative models involves mapping samples from a latent distribution, e.g., Gaussian, via the trained generative model to samples from the target distribution, which yields a *global* sampling mechanism. While global sampling is useful for modeling the whole distribution, there are several important problems which require *local* sampling—the ability to produce samples close to a particular data point. The anonymization of data is one application of local sampling, in which the identity of subjects in the dataset should be erased while maintaining data fidelity. Another application is data augmentation, which involves applying transformations onto copies of data points to produce new data points (Wong et al., 2016). A third application of local sampling is to remove noise from an image, especially in the cases of severe noise, where traditional denoising methods might fail (Kawar et al., 2021).

Despite their success in global sampling, GANs (Goodfellow et al., 2014), variational autoencoders (VAEs) (Kingma & Welling, 2014), and normalizing flows (NFs) (Rezende & Mohamed, 2015) are not the best candidates for local sampling. GANs do not lend themselves well to local sampling: GAN inversion is required, and finding or training the best methods for GAN inversion is a difficult problem and an ongoing area of research (Karras et al., 2020; Xia et al., 2022; Yu et al., 2020). VAEs and NFs, on the other hand, can both project a data point x into a latent vector z in their respective latent spaces and then re-project z back into the original data space, producing an estimate x′ of the original data point. As such, VAEs and NFs can perform local sampling of a data point x by projecting a perturbed version of its corresponding latent vector z back into the original data space. While the straightforward tractability of VAE- or NF-based local sampling is attractive, VAEs and NFs are not state-of-the-art (SOTA) on popular tasks such as image synthesis. For these reasons, we propose local sampling with diffusion models, which are SOTA (Dhariwal & Nichol, 2021) and do not require latent space inversion. We propose the Boomerang algorithm to enable local sampling of image manifolds using *pretrained* diffusion models.
Boomerang earns its name from its principal mechanism: adding noise of a certain variance to push data away from the image manifold, and then using a diffusion model to pull the noised data back onto the manifold. The variance of the noise is the only parameter in the algorithm, and governs how similar the new image is to the original image, as reported by Ho et al. (2020). We apply this technique to three applications: (1) data anonymization for privacy-preserving machine learning; (2) data augmentation; and (3) perceptual enhancement for low-resolution images. We show that the proposed local sampling technique is able to: (1a) anonymize entire datasets to varying degrees; (1b) trick facial recognition algorithms; and (1c) anonymize datasets while maintaining better classification accuracy when compared with SOTA synthetic datasets. For data augmentation, we: (2a) obtain higher classification accuracy when training on the Boomerang-augmented dataset versus no augmentation at all; and (2b) outperform SOTA synthetic data augmentation. Finally, we show that Boomerang can be used for perceptual image enhancement. The images generated via local sampling: (3a) have better perceptual quality than those generated with competing methods; (3b) are generated faster than other deep-learning-based methods such as the Deep Image Prior (Ulyanov et al., 2020); and (3c) can be used for any desired upsampling factor without needing to train or fine-tune the network.

In Section 2 we discuss the training framework of diffusion models, introducing the forward and reverse processes. In Section 3 we introduce our proposed local sampling method—Boomerang—and provide insights on how the amount of added noise affects the locality of the resulting samples. Finally, we describe three applications (Sections 4 to 6) in which Boomerang can be used without any modification to the diffusion model pretraining.

## 2 Diffusion Models

Diffusion models sample from a distribution by learning to reverse a forward diffusion process that turns data from the training dataset into realizations of a Gaussian distribution (Ho et al., 2020). The forward diffusion process involves adding Gaussian noise to the input data, x0 ∼ p(x0), in T steps:

$$\mathbf{x}_{t}:={\sqrt{1-\beta_{t}}}\mathbf{x}_{t-1}+\mathbf{\epsilon}_{t},\quad\mathbf{\epsilon}_{t}\sim{\mathcal{N}}(\mathbf{0},\beta_{t}\mathbf{I}),\quad t=1,\ldots,T,\qquad(1)$$

where βt ∈ (0, 1), t = 1, . . . , T, is the noise variance at step t, which is typically chosen beforehand (Song & Ermon, 2020). Since the transition from step t − 1 to t is defined by a Gaussian distribution in the form of $q(\mathbf{x}_t|\mathbf{x}_{t-1}):=\mathcal{N}(\sqrt{1-\beta_t}\,\mathbf{x}_{t-1},\beta_t\mathbf{I})$, the distribution of xt conditioned on the clean input image x0 can be expressed as a Gaussian distribution,

$$q(\mathbf{x}_{t}|\mathbf{x}_{0})={\mathcal{N}}\left({\sqrt{\alpha_{t}}}\mathbf{x}_{0},(1-\alpha_{t})\mathbf{I}\right),\quad t=1,\ldots,T,\qquad(2)$$

with $\alpha_t=\prod_{i=1}^{t}(1-\beta_i)$. During training, diffusion models learn to reverse the forward process by starting at t = T with a sample from the standard Gaussian distribution xT ∼ N(0, I). The reverse process is defined via a Markov chain over x0, x1, . . . , xT such that

$$\mathbf{x}_{t-1}:=f_{\phi}(\mathbf{x}_{t},t)+\mathbf{\eta}_{t},\quad\mathbf{\eta}_{t}\sim\mathcal{N}(\mathbf{0},\bar{\beta}_{t}\mathbf{I}),\quad t=1,\ldots,T.\qquad(3)$$
In the above expression, fϕ(xt, t) is parameterized by a neural network with weights ϕ, and β̄tI denotes the covariance at step t. Equation (3) represents a chain with transition probabilities defined by Gaussian distributions with density

$$p_{\phi}(\mathbf{x}_{t-1}|\mathbf{x}_{t}):=\mathcal{N}(f_{\phi}(\mathbf{x}_{t},t),\bar{\beta}_{t}\mathbf{I}),\quad t=1,\ldots,T.\qquad(4)$$

The covariance β̄tI at different steps can also be parameterized by neural networks; however, we follow Luhman & Luhman (2022) and choose $\bar{\beta}_t=\frac{1-\alpha_{t-1}}{1-\alpha_t}\beta_t$, which matches the forward posterior distribution when conditioned on the input image, q(xt−1|xt, x0) (Ho et al., 2020).

To ensure the Markov chain in Equation (3) reverses the forward process (Equation (1)), the parameters ϕ are updated such that the resulting image at step t = 0 via the reverse process represents a sample from the target distribution p(x0). This can be enforced by maximizing—with respect to ϕ—the likelihood pϕ(x0) of training samples, where x0 represents the outcome of the reverse process at step t = 0. Unfortunately, the density pϕ(x0) does not permit a closed-form expression. Instead, given the Gaussian transition probabilities defined in Equation (4), the joint distribution over all the T + 1 states can be factorized as

$$p_{\phi}(\mathbf{x}_{0},\ldots,\mathbf{x}_{T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\phi}(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\quad p_{T}(\mathbf{x}_{T})={\mathcal{N}}(\mathbf{0},\mathbf{I}),\qquad(5)$$

with all the terms on the right-hand side of the equality having closed-form expressions. To obtain a tractable expression for training diffusion models, we treat x1, . . . , xT as latent variables and use the negative evidence lower bound (ELBO) expression for pϕ(x0) as the loss function,

$$\mathcal{L}(\boldsymbol{\phi}):=\mathbb{E}_{p(\boldsymbol{x}_{0})}\mathbb{E}_{q(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{T}|\boldsymbol{x}_{0})}\left[-\log p_{T}(\boldsymbol{x}_{T})-\sum_{t=1}^{T}\log\frac{p_{\phi}(\boldsymbol{x}_{t-1}|\boldsymbol{x}_{t})}{q(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1})}\right]\geq\mathbb{E}_{p(\boldsymbol{x}_{0})}\left[-\log p_{\phi}(\boldsymbol{x}_{0})\right].\qquad(6)$$

After training, new global samples from pϕ(x0) ≈ p(x0) are generated by running the reverse process in Equation (3) starting from xT ∼ N(0, I). Due to the stochastic nature of the reverse process, in particular the additive noise during each step, starting from two close initial noise vectors at step T does not necessarily lead to close-by images on the image manifold. The next section describes our proposed method for local sampling on the image manifold.

## 3 Boomerang Method

Our method, Boomerang, allows one to locally sample a point x′0 on an image manifold X close to a point x0 ∈ X using a pretrained diffusion model fϕ. Since we are mainly interested in images, we suppose that x0 and x′0 are images on the image manifold X. We control how close to x0 we want x′0 to be by setting the hyperparameter tBoom. We perform the forward process of the diffusion model tBoom times, from t = 0 to t = tBoom in Equation (1), and use fϕ to perform the reverse process from t = tBoom back to t = 0. If tBoom = T, we perform the full forward diffusion and hence lose all information about x0; this is simply equivalent to globally sampling from the diffusion model.
We denote this partial forward and reverse process as B(x0, tBoom) = x′0 and call it *Boomerang* because x0 and x′0 are close for small tBoom, which can be seen in Figure 1. When performing the forward process of Boomerang, it is not necessary to iteratively add noise tBoom times. Instead, we simply calculate the corresponding αtBoom and sample from Equation (2) once to avoid unnecessary computations. The reverse process must be done step by step, however, which is where most of the computation takes place, much like regular (global) sampling of diffusion models. Nonetheless, sampling with Boomerang has significantly lower computational costs than global sampling; the time required for Boomerang is approximately tBoom/T times the time for regular sampling. Moreover, we can use Boomerang to perform local sampling along with faster sampling schedules, e.g., sampling schedules that reduce sampling time by 90% (Kong & Ping, 2021) before Boomerang is applied. Pseudocode for the Boomerang algorithm is shown in Algorithm 1.

We present a quantitative analysis to measure the variability of Boomerang-generated images as tBoom is changed. As an expression of this variability, we consider the distribution of samples generated through the Boomerang procedure conditioned on the associated noisy input image at step tBoom, i.e., pϕ(x′0|xtBoom). According to Bayes' rule, we relate this distribution to the distribution of noisy images at step tBoom of the forward process,

$$\begin{aligned}p_{\phi}(\mathbf{x}_{0}^{\prime}|\mathbf{x}_{t_{\mathrm{Boom}}})&\propto p_{\phi}(\mathbf{x}_{t_{\mathrm{Boom}}}|\mathbf{x}_{0}^{\prime})\,p(\mathbf{x}_{0}^{\prime})\\&\approx q(\mathbf{x}_{t_{\mathrm{Boom}}}|\mathbf{x}_{0}^{\prime})\,p(\mathbf{x}_{0}^{\prime})\\&=\mathcal{N}\left(\sqrt{\alpha_{t_{\mathrm{Boom}}}}\,\mathbf{x}_{0}^{\prime},(1-\alpha_{t_{\mathrm{Boom}}})\mathbf{I}\right)p(\mathbf{x}_{0}).\end{aligned}\qquad(7)$$

Algorithm 1: Boomerang local sampling, given a diffusion model f_ϕ(x, t)

    Input: x_0, t_Boom, {α_t}_{t=1}^T, {β_t}_{t=1}^T
    Output: x′_0
    ϵ ← N(0, I)
    x′_{t_Boom} ← √α_{t_Boom} · x_0 + √(1 − α_{t_Boom}) · ϵ
    for t = t_Boom, ..., 1 do
        if t > 1 then
            β̄_t ← ((1 − α_{t−1}) / (1 − α_t)) · β_t
            η ∼ N(0, β̄_t I)
        else
            η ← 0
        end if
        x′_{t−1} ← f_ϕ(x′_t, t) + η
    end for
    return x′_0

The second line in the expression above assumes that, by training the diffusion model via the loss function in Equation (6), the model will be able to reverse the diffusion process at each step of the process. The last line in the equation above follows from Equation (2) and the fact that p(x′0) = p(x0). The latter can be understood by noting that p(x′0) is the distribution of images generated by the diffusion model when the reverse process is initiated at step tBoom using noisy images obtained from x′tBoom = √αtBoom x0 + √(1 − αtBoom) ϵ, where the original images x0 are drawn from p(x0) and ϵ ∼ N(0, I). The distribution of these noisy images is equivalent to N(√αtBoom x0, (1 − αtBoom)I), with x0 ∼ p(x0), which is equal to the forward diffusion process distribution at step tBoom, denoted as q(xtBoom|x0) (recall Equation (2)). Given that the diffusion model is well trained, we can expect that its output matches the original image distribution regardless of the step from which the reverse process is initiated, as long as the same forward diffusion process noise schedule is used. Equation (7) suggests that the density of Boomerang-generated images is proportional to the density of a Gaussian distribution with covariance (1 − αtBoom)I times the clean image density p(x0). In other words, the resulting density will have very small values far away from the mean of the Gaussian distribution. In addition, the high-probability region of pϕ(x′0|xtBoom) grows as 1 − αtBoom becomes larger. This quantity monotonically increases as tBoom goes from one to T, since αt = ∏_{i=1}^{t}(1 − βi) and βi ∈ (0, 1). As a result, we expect the variability in Boomerang-generated images to increase as we run Boomerang for larger tBoom steps.

Since Boomerang depends on a pretrained diffusion model fϕ, it does not require the user to have access to a significant amount of computational resources or data. This makes Boomerang very accessible to practitioners and even everyday users who do not have specialized datasets or hardware to train a diffusion model for their specific problem. Boomerang requires that the practitioner find a diffusion model that represents the desired image manifold. With the advent of diffusion models trained on diverse datasets, such as Stable Diffusion (Rombach et al., 2022), finding such models is becoming less and less of a problem. Overall, our Boomerang method allows local sampling on image manifolds without requiring significant amounts of computational resources or data.
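As a minimal illustration (our own sketch, not the authors' released code), Algorithm 1 translates almost line-for-line into PyTorch. Here `f_phi` stands in for a pretrained network that outputs the reverse-process mean fϕ(xt, t), and `alphas` and `betas` are tensors holding the noise schedule from Equations (1) and (2):

```python
import torch

def boomerang(f_phi, x0, t_boom, alphas, betas):
    """Local sampling per Algorithm 1: partial forward diffusion, then reverse."""
    # Forward process: jump directly to step t_boom in one shot using Eq. (2).
    eps = torch.randn_like(x0)
    x = alphas[t_boom].sqrt() * x0 + (1 - alphas[t_boom]).sqrt() * eps
    # Partial reverse process from step t_boom back to step 0.
    for t in range(t_boom, 0, -1):
        if t > 1:
            beta_bar = (1 - alphas[t - 1]) / (1 - alphas[t]) * betas[t]
            eta = beta_bar.sqrt() * torch.randn_like(x)
        else:
            eta = torch.zeros_like(x)
        x = f_phi(x, t) + eta
    return x
```

With a latent diffusion model such as Stable Diffusion, the same recipe amounts to noising the latent code at strength tBoom/T and running the pipeline's denoising loop from that step onward.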
## 4 Application 1: Anonymization of Data

Local sampling anonymizes data by replacing original data with close, but nonidentical, samples on the learned data manifold. Overparameterized (i.e., deep) models are vulnerable to membership inference attacks (Shokri et al., 2017; Tan et al., 2022), which attempt to recover potentially sensitive data (e.g., medical data, financial information, and private images) given only access to a model that was trained on said data. Boomerang local sampling enables privacy-preserving machine learning by generating data points that are similar to real (i.e., sensitive) data, yet not the same. The degree of similarity to the real data can be coarsely controlled through the tBoom parameter; however, Boomerang cannot remove specific sensitive attributes while retaining other attributes. We prove the effectiveness of Boomerang anonymization by anonymizing various datasets, quantitatively measuring the degree of anonymity for anonymized face datasets, and successfully training classifiers on fully anonymous datasets.

![5_image_0.png](5_image_0.png)

Figure 2: Using Boomerang on CIFAR-10 to change the visual features of images. These images were created with FastDPM (Kong & Ping, 2021) using tBoom/T = 40/100 = 40%.

## 4.1 Data and Models

To show the versatility of Boomerang anonymization, we apply it to several datasets, such as the LFWPeople (Huang et al., 2007), CelebA-HQ (Karras et al., 2018), CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), FFHQ (Karras et al., 2019), and ILSVRC2012 (ImageNet) (Russakovsky et al., 2015) datasets. For the ImageNet-200 experiments, we use a 200-class subset of ImageNet that we call ImageNet-200; these are the 200 classes that correspond to Tiny ImageNet (Russakovsky et al., 2015). Furthermore, to show that Boomerang can be applied independent of the specific diffusion model architecture, we apply Boomerang to the Stable Diffusion (Rombach et al., 2022), Patched Diffusion (Luhman & Luhman, 2022),1 Denoising Likelihood Score Matching (DLSM) (Chao et al., 2022),2 and FastDPM (Kong & Ping, 2021)3 models. We compare Boomerang-generated data with purely synthetic data from the SOTA StyleGAN-XL (Sauer et al., 2022) and DLSM models.
When generating Boomerang samples for data anonymization or augmentation, we pick tBoom so that the Boomerang samples look visually different from the original samples. With the FastDPM model we use tBoom/T = 40/100 = 40%;3 with Patched Diffusion, we use tBoom/T = 75/250 = 30%; and with DLSM, we use tBoom/T = 250/1000 = 25%.

## 4.2 Anonymization

Boomerang can anonymize entire datasets to varying degrees controlled by the hyperparameter tBoom, which coarsely defines the anonymization level. For example, we anonymize commonly used datasets of face images. Additionally, we anonymize natural images. Specifically, we define that a natural image x0 is anonymized to x′0 if the features of each image are visibly different such that an observer would guess that the two images are of different objects (note that we do not control which features are being anonymized). For each diffusion model, we pick tBoom so that the anonymized images are different, but not drastically different, from the original dataset images. Some examples of anonymized images are shown in Figures 2 to 4.

Using Boomerang, we successfully anonymize datasets of faces, such that established facial recognition networks infer that the anonymized and original images are from different people. First, we apply Boomerang Stable Diffusion to the LFWPeople and CelebA-HQ datasets at several ratios of tBoom/T: {0.2, 0.5, 0.7, 0.8}. Random samples (Figure 4) qualitatively establish that sufficiently large values of tBoom replace identifiable features of the original images. Meanwhile, dataset-wide evaluations via facial recognition network embedding distances show that the distributions of perceptual dissimilarity between the original and Boomerang-anonymized images shift upward as a function of tBoom (Figure 5 and Appendix A.1). In fact, the percentage of Boomerang-anonymized images that the facial recognition networks declare as originating from different people than the original dataset images approaches 100% as tBoom increases. Therefore, we qualitatively and quantitatively establish that Boomerang anonymization is an efficient method of anonymizing images by local sampling.

1We use this repository for Patched Diffusion. 2We use this repository for DLSM. 3We use the models from this repository, which distills the original 1000 diffusion steps down to 100 via a STEP DDPM sampler.

![6_image_0.png](6_image_0.png)

Figure 3: Using Boomerang on ImageNet-200 to change the visual features of images. These images were created with Patched Diffusion (Luhman & Luhman, 2022) using tBoom/T = 75/250 = 30%. The FID values for these images have been plotted in Figure 15 in the Appendix.

![6_image_1.png](6_image_1.png)

Figure 4: Nine randomly selected LFWPeople samples anonymized via Boomerang Stable Diffusion with the same random seed and with the same prompt: "picture of a person".

Since training on anonymous data enables privacy-preserving machine learning, we show in Table 1 that our local sampling method outperforms state-of-the-art generative models on the task of generating entirely anonymous datasets upon which to train a classifier. Specifically, we compare Boomerang anonymization against the strongest alternative: image synthesis by StyleGAN-XL (SOTA for CIFAR-10 and ImageNet) and DLSM (SOTA for CIFAR-100). We observe across all tasks that purely synthetic data, even from SOTA models, cannot reach the quality of data anonymized via Boomerang local sampling—even when the local sampling is done with diffusion models that are not SOTA. For example, on ImageNet the synthetic data produces a test accuracy of 39.8%, while the Boomerang-anonymized data produces a test accuracy of 57.8%. This phenomenon makes sense because anonymization via local sampling sits in between generating purely synthetic data and using real data. Thus, training on Boomerang-anonymized data outperforms training on synthetic data.
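The verification protocol behind these percentages can be sketched as follows (our own illustration; `embed` stands in for a pretrained VGG-Face or Facenet embedding network, and cosine distance is one common choice of embedding distance, which the paper does not specify):

```python
import numpy as np

def declared_different(embed, img_original, img_boomerang, threshold=0.4):
    """Flag two face images as 'different people' when their facial-recognition
    embedding distance exceeds the default threshold of 0.4 (see Figure 5)."""
    e1, e2 = embed(img_original), embed(img_boomerang)
    e1, e2 = e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2)
    distance = 1.0 - float(np.dot(e1, e2))  # cosine distance of unit vectors
    return distance > threshold, distance
```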
For example, on ImageNet the synthetic data produces a test accuracy of 39.8%, while the Boomerang-anonymized data produces a test accuracy of 57.8%. This phenomenon makes sense because anonymization via local sampling is in-between generating purely synthetic data and using real data. Thus, training on Boomerang-anonymized data outperforms training on synthetic data. ![7_image_0.png](7_image_0.png) Figure 5: Facial recognition embedding distances between Boomeranged images and the original images as a function of tBoom. We use the VGG-Face and Facenet models (Wang & Deng, 2021) to calculate the embeddings; both models have a default minimum distance threshold of 0.4 to declare that two images are of different people. The largest standard error is 0.0017 for embedding distance and 0.21% for the number of images over the threshold. Figure 10 contains the full distribution of VGG-Face embedding distances. Table 1: With respect to training CIFAR-10, CIFAR-100, ImageNet-200, and ImageNet classifiers on anonymous data, Boomerang-anonymized data is a superior alternative to purely synthetic data from the SOTA StyleGAN-XL and DLSM models. | Classification | Training data | Top-1 Test | Top-5 Test | |------------------|---------------------------------------------|--------------|--------------| | Task | Accuracy | Accuracy | | | CIFAR-10 | StyleGAN-XL generated data (SOTA synthetic) | 81.5% | | | CIFAR-10 | FastDPM Boomerang data (ours) | 84.4% | | | CIFAR-10 | CIFAR-10 data (no anonymization) | 87.8% | | | CIFAR-100 | DLSM generated data (SOTA synthetic) | 26.9% | | | CIFAR-100 | DLSM Boomerang data (ours) | 55.6% | | | CIFAR-100 | CIFAR-100 data (no anonymization) | 62.7% | | | ImageNet-200 | StyleGAN-XL generated data (SOTA synthetic) | 50.2% | 73.0% | | ImageNet-200 | Patched Diffusion Boomerang data (ours) | 61.8% | 83.4% | | ImageNet-200 | ImageNet-200 data (no anonymization) | 66.6% | 85.6% | | ImageNet | StyleGAN-XL generated data (SOTA synthetic) | 39.8% | 62.1% | | ImageNet | Patched Diffusion Boomerang data (ours) | 57.8% | 81.3% | | ImageNet | ImageNet data (no anonymization) | 63.3% | 85.3% | In conclusion, we have established that data anonymization via Boomerang can operate on entire datasets, successfully anonymize facial images, and is a superior alternative to generating purely synthetic data for downstream tasks such as image classification. Table 2: Boomerang-generated data for data augmentation increases test accuracy of CIFAR-100, ImageNet200, and ImageNet classification tasks. The purely sythetic data augmentation was done using state-of-theart (SOTA) models: Denoising Likelihood Score Matching (DLSM) (Chao et al., 2022) and StyleGANXL (Sauer et al., 2022). For the Boomerang data, we used DLSM and Patched Diffusion (Luhman & Luhman, 2022). 
| Classification Task | Training data                                        | Top-1 Test Accuracy | Top-5 Test Accuracy |
|---------------------|------------------------------------------------------|---------------------|---------------------|
| CIFAR-100           | CIFAR-100 only (no data augmentation)                | 62.7%               |                     |
| CIFAR-100           | CIFAR-100 + DLSM DA (SOTA synthetic)                 | 56.0%               |                     |
| CIFAR-100           | CIFAR-100 + DLSM Boomerang DA (ours)                 | 63.6%               |                     |
| ImageNet-200        | ImageNet-200 only (no data augmentation)             | 66.6%               | 85.6%               |
| ImageNet-200        | ImageNet-200 + StyleGAN-XL DA (SOTA synthetic)       | 65.8%               | 85.5%               |
| ImageNet-200        | ImageNet-200 + Patched Diffusion Boomerang DA (ours) | 70.5%               | 88.3%               |
| ImageNet            | ImageNet only (no data augmentation)                 | 63.3%               | 85.3%               |
| ImageNet            | ImageNet + StyleGAN-XL DA (SOTA synthetic)           | 63.3%               | 85.3%               |
| ImageNet            | ImageNet + Patched Diffusion Boomerang DA (ours)     | 64.4%               | 86.0%               |

## 5 Application 2: Data Augmentation

Data augmentation for image classification is essentially done by sampling points on the image manifold near the training data. Typical augmentation techniques include random image flips, rotations, and crops, which exploit symmetry and translation invariance properties of images in order to create new data from the training set. Although there are many techniques for data augmentation, most involve modifying the training data in ways that make the new data resemble the original data while still being different, i.e., they attempt to perform local sampling on the image manifold. Since Boomerang can also locally sample on manifolds, we investigate whether using Boomerang is beneficial for data augmentation on classification tasks.

For our data augmentation experiments, we pick tBoom to be large enough to produce visual differences between the original dataset and the Boomerang-generated dataset, as shown in Figures 2 and 3 and as discussed in Section 4.1. Due to the intrinsic computational costs of diffusion models, we generate the augmented data before training instead of generating it on the fly during training. We then randomly choose to use the training data or the Boomerang-generated data with probability 0.5 at each epoch. We use ResNet-18 (He et al., 2016) for our experiments, with the same models and datasets as described in Section 4.1.

To demonstrate the impact of data augmentation using Boomerang, we begin by comparing classification accuracy with and without Boomerang augmentation. We find that using Boomerang for data augmentation increases generalization performance over not using data augmentation at all for a wide range of datasets and classifier architectures. As shown in Table 2, using Boomerang data augmentation on the CIFAR-100, ImageNet-200, and ImageNet classification tasks increases test accuracy from 62.7% to 63.6%, from 66.6% to 70.5%, and from 63.3% to 64.4%, respectively. Applying Boomerang data augmentation requires only selecting a single hyperparameter, namely tBoom, and it does not make any explicit assumptions about data invariances, making it a versatile and general-purpose data augmentation tool. Additionally, Boomerang data augmentation can be combined with other data augmentation techniques, e.g., flips, crops, and rotations.

We also observe that using Boomerang for data augmentation increases generalization performance over using purely synthetic data augmentation. Specifically, we see that using purely synthetic data augmentation does not appear to help generalization performance at all.
The generalization performance is actually lower than or equal to the baseline for the CIFAR-100, ImageNet-200, and ImageNet classification tasks with SOTA synthetic data augmentation (see Table 2). This is somewhat surprising because the models that we used for the Boomerang data augmentation are not SOTA, with the exception of the DLSM model. Meanwhile, the models that we use for the purely synthetic data augmentation are SOTA, and thus we compare to the hardest setting possible. For example, the reported FID (a metric of image quality; lower is better) for the Patched Diffusion model (Luhman & Luhman, 2022) that we use for Boomerang on ImageNet is 8.57, whereas it is 2.3 for StyleGAN-XL (Sauer et al., 2022), which is used for the synthetic data augmentation. Therefore, Boomerang data augmentation seems to be more beneficial than purely synthetic data augmentation even if Boomerang is done with a lower-performing model.

In conclusion, we showed that Boomerang data augmentation can be used to increase generalization performance over no augmentation and that it even beats synthetic data augmentation using SOTA models.

## 6 Application 3: Perceptual Resolution Enhancement For Low-Resolution Images

As the final application of Boomerang local sampling, we propose perceptual resolution enhancement (PRE) for images. This process aims to upsample low-resolution images with a focus on perceptual quality, even if traditional handcrafted quality measures such as peak signal-to-noise ratio (PSNR) are low. We begin by framing PRE as a local sampling problem, followed by two proposed PRE methods using Boomerang.

## 6.1 Perceptual Resolution Enhancement Via Local Sampling

Downsampled or otherwise corrupted images belong to a different distribution than realistic or natural images. For example, the dimension of a ground-truth image xtrue ∈ R^n is higher than that of its downsampled image xds ∈ R^d. We frame PRE as a local sampling approach to restore xds through: (1) obtaining a rough approximation xup ∈ R^n of the ground-truth high-resolution image using standard resolution enhancement techniques, and (2) restoring perceptual quality by applying Boomerang to xup. By locally sampling around xup ≈ xtrue, Boomerang produces a point on the manifold of high-resolution images near the ground-truth image xtrue. The goal of PRE is to improve the perceptual quality of a corrupted or low-dimensional image xds without being forced to match xds at every pixel: i.e., unlike traditional super-resolution, PRE is allowed to modify the pixel values of the downsampled image instead of just interpolating between them.

## 6.2 Vanilla Boomerang Perceptual Resolution Enhancement

We perform Boomerang PRE by:

1. Bringing the corrupted image into the ambient space of the desired result using some off-the-shelf interpolation method (cubic, nearest-neighbor, etc.). In all our experiments, we used linear interpolation.

2. Selecting a tBoom corresponding to the desired "radius" of the search space. As tBoom/T → 1, we move from locally sampling around a given point to sampling globally on the learned manifold.

3. Performing Boomerang as described in Section 3.

The resulting image is an enhanced version of the input image in the neighborhood of the ground-truth image. The parameter tBoom corresponds to the strength of the PRE applied, with larger values more appropriate for more corrupted images. While others such as Saharia et al. (2022b) and Rombach et al.
(2022) have used diffusion models for image enhancement (including super-resolution), Boomerang has two key advantages over existing diffusion-model-based methods. The first is that Boomerang's local sampling keeps the output "close" to the input on the image manifold, i.e., we implicitly account for the geodesics of the manifold instead of merely optimizing a metric in Euclidean space. The second is that adjusting tBoom allows easy "tuning" of the PRE strength, controlling the resulting trade-off between image fidelity and perceptual quality. This means that *the same* pretrained network can perform perceptual enhancement on images with different levels of degradation or downsampling: one can choose a larger value of tBoom to fill in more details of the final image. Furthermore, by varying tBoom for different passes of the same input image, one can generate multiple images, each with a different detail/variance trade-off. This can be seen in Figure 11 in Appendix A.2, wherein more aggressive enhancement improves clarity at the cost of distance from the ground-truth image. Empirical tests showed that setting tBoom ≈ 100 on the Patched Diffusion model (out of T = 250) produced a good balance between sharpness and the features of the ground-truth image, as seen in Appendix A.2. Finally, the same image can be passed through Boomerang multiple times with a fixed tBoom to generate different candidate images.

Our subsequent PRE experiments with Boomerang all use the Patched Diffusion model (Luhman & Luhman, 2022). With Stable Diffusion (Rombach et al., 2022), empirical results showed that detail was not added to the low-resolution images except for large tBoom, where the generated images no longer strongly resembled the ground-truth images. We believe this is because noise is added in the latent space with Stable Diffusion (as opposed to in the image space with Patched Diffusion). We conclude that the space in which the noise is added affects which kinds of inverse problems Boomerang will be effective on.

Since Boomerang PRE emphasizes perceptual quality, we evaluate its performance on metrics correlating with human perceptual quality, such as the Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS) (Heusel et al., 2017; Zhang et al., 2018). FID measures the distance (lower is better) between the distributions of "real" images and generated images. LPIPS, on the other hand, compares deep embeddings of corresponding patches between two images and correlates well with human visual perception. We follow the recommendation of Zhang et al. (2018) and use AlexNet as the backend for LPIPS. Unlike FID, LPIPS is an image-to-image metric.

We compare Boomerang PRE with (1) classical super-resolution methods such as nearest, linear, and cubic interpolation; and (2) the Deep Image Prior (DIP) (Ulyanov et al., 2020), an *untrained* method that performs well on super-resolution tasks. We chose not to compare Boomerang to deep learning methods explicitly trained to do image enhancement or super-resolution, as Boomerang and DIP require neither training nor fine-tuning. As seen in Table 3, Boomerang provides superior performance to all competing methods on perceptual metrics such as LPIPS and FID. Furthermore, the hyperparameter tBoom provides a unique trade-off between data fidelity and perceptual quality, enabling cost-effective and controllable generation of perceptually enhanced images.
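The three-step procedure above reduces to a few lines given any pretrained denoiser and its noise schedule. Below is a minimal, illustrative sketch using the Hugging Face diffusers scheduler API; the model class, variable names, and the `boomerang_pre` helper are our own illustrative choices under those assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

def boomerang_pre(model: UNet2DModel, scheduler: DDPMScheduler,
                  x_lowres: torch.Tensor, t_boom: int, out_size: int = 1024):
    """Vanilla Boomerang PRE (sketch): interpolate, partially diffuse, denoise."""
    # Step 1: bring the corrupted image into the ambient space (linear interpolation).
    x_up = F.interpolate(x_lowres, size=(out_size, out_size),
                         mode="bilinear", align_corners=False)

    # Step 2: partial forward process q(x_t | x_0); add_noise implements
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps in closed form.
    t = torch.tensor([t_boom], device=x_up.device)
    x_t = scheduler.add_noise(x_up, torch.randn_like(x_up), t)

    # Step 3: partial reverse process, starting at t_boom rather than T
    # (assuming a full T-step schedule with integer step indices).
    for step in reversed(range(t_boom)):
        with torch.no_grad():
            eps_pred = model(x_t, step).sample
        x_t = scheduler.step(eps_pred, step, x_t).prev_sample
    return x_t  # a local sample near the ground-truth image on the manifold
```

Because only t_boom of the T reverse steps are executed, the cost of one pass scales with tBoom/T; with tBoom = 100 out of T = 250, this is 40% of a full sampling run, which is also where the speed advantage over per-image optimization methods such as DIP comes from.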
Another benefit of Boomerang is its flexibility: Boomerang can perform variable perceptual image enhancement at various degrees of degradation using the same pretrained diffusion model. This is shown in Figure 13, where we enhance images downsampled by 4x, 8x, and even 16x by simply varying tBoom. Although methods like DIP and linear interpolation are also flexible in the sense that they are applicable to images downsampled by different factors, many super-resolution methods are not. In particular, many deep learning super-resolution methods require separate training or fine-tuning depending on the desired fidelity, which requires more compute and explicit model selection. In our examples, which used 1024×1024 images, we found that tBoom = 50 worked well with images downsampled by 4x, while tBoom = 100 worked best with images downsampled by 8x. Our intuition suggests that a larger value of tBoom is more appropriate for more degraded images, and our empirical results are in agreement.

Furthermore, Boomerang PRE is relatively fast and computationally efficient. On 1024 × 1024 images, Boomerang takes 0.932 minutes with a standard deviation of 0.0321 minutes per image.⁴ DIP, on the other hand, takes about 36.426 minutes with a standard deviation of 0.867 minutes per image. All of these statistics were calculated using 50 random images. In the time it takes to perform PRE on one image with DIP, 39 images can be enhanced with Boomerang. Additionally, Boomerang benefits from diffusion acceleration techniques like batch processing and distillation, whereas DIP cannot because each image is its own optimization problem. Although classical methods, such as linear interpolation, are much faster than Boomerang, their results are not perceptually pleasing, and they were not designed to do data-driven perceptual enhancement.

⁴These times are reported for a single Nvidia GeForce GTX Titan X GPU.

Table 3: FID and LPIPS scores of different super-resolution methods (lower is better). Each was calculated from 5000 images from the FFHQ dataset (Karras et al., 2019). Since LPIPS is an image-to-image metric, these values are the average scores across the individual images. FID scores around 5 are within the common range between test and training splits of the same dataset. For the cascaded results, the number of cascades was selected by a human observer who chose the image with the best perceptual quality.

| Image Enhancement Method       | LPIPS Score | FID   |
|--------------------------------|-------------|-------|
| Nearest Interpolation          | 0.550       | 18.82 |
| Linear Interpolation           | 0.449       | 20.98 |
| Cubic Interpolation            | 0.443       | 16.60 |
| Deep Image Prior               | 0.353       | 7.14  |
| Boomerang with tBoom = 50      | 0.341       | 8.93  |
| Boomerang with tBoom = 100     | 0.338       | 5.10  |
| Boomerang with tBoom = 150     | 0.394       | 5.19  |
| Boomerang Cascaded, tBoom = 50 | 0.351       | 4.47  |

## 6.3 Cascaded Boomerang Image Enhancement

As tBoom is increased, the variance of the generated images dramatically increases due to diffusion models' stochastic nature (noise is added to the data at each step t; see Algorithm 1). As a result, increasing tBoom eventually causes the generated images to vary so much that they no longer resemble the input image at all (i.e., we move from local sampling to global sampling). Even for modest values of tBoom, the large variance of the added noise causes repeated PRE attempts to differ significantly. In this section, we propose a simple method to keep the variance of the generated images manageable.
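Concretely, the cascade described in the next paragraph amounts to repeating the vanilla pass with a small tBoom. A minimal sketch, reusing the illustrative `boomerang_pre` helper and imports from the previous listing (again an assumption-laden sketch, not the authors' code):

```python
def boomerang_pass(model, scheduler, x0, t_boom):
    """One Boomerang pass: partial forward diffusion to t_boom, then reverse."""
    t = torch.tensor([t_boom], device=x0.device)
    x_t = scheduler.add_noise(x0, torch.randn_like(x0), t)
    for step in reversed(range(t_boom)):
        with torch.no_grad():
            eps_pred = model(x_t, step).sample
        x_t = scheduler.step(eps_pred, step, x_t).prev_sample
    return x_t

def boomerang_cascade(model, scheduler, x_lowres, t_boom=50, n_cascade=6):
    """Cascaded Boomerang (sketch): x_cascade = B(B(...B(x_up)...))."""
    # The first pass includes the interpolation step of the vanilla method.
    x = boomerang_pre(model, scheduler, x_lowres, t_boom)
    # Later passes re-noise and re-denoise the current, already ambient-space,
    # estimate; in practice one stops once the desired clarity is reached.
    for _ in range(n_cascade - 1):
        x = boomerang_pass(model, scheduler, x, t_boom)
    return x
```

Because each pass uses a small tBoom, every step of the cascade remains a local sample, which is the stabilizing effect discussed below.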
Cascaded Boomerang describes repeated passes of a corrupted image through a diffusion network with a small value of tBoom, as opposed to passing an image through Boomerang once with a large value of tBoom. If we denote the Boomerang PRE method of the previous section applied to an input image xds as x0 = Bfϕ(xds), then the method described here is xcascade = Bfϕ(Bfϕ(...(Bfϕ(xds)))). We designate ncascade as the number of times we repeat Boomerang on the intermediate result. In addition to stabilizing independent image enhancement attempts, the cascade method allows users to iteratively choose the desired PRE detail: simply stop repeating the cascade process once the desired clarity is achieved. An example of cascaded perceptual enhancement with Boomerang is shown in Figure 6, with more examples shown in Figure 12 in Appendix A.2.

The cascade method achieved lower FID scores (better) but higher LPIPS scores (worse) than vanilla Boomerang PRE. Since the FID score measures distances between distributions, and cascaded Boomerang has a stabilizing effect on the resulting images, this is consistent with our observations. Despite the higher LPIPS score, we found the subjective quality of the cascaded Boomerang PRE images to be superior to that of the vanilla generated images. The cascaded Boomerang method thus allows for progressive PRE and multiple candidate generation beyond the vanilla Boomerang method.

Boomerang is a quick and efficient method of performing perceptual resolution enhancement with pretrained diffusion models. As we have shown, Boomerang allows the user to easily adjust the strength of the enhancement by varying tBoom. The same procedure and pretrained network can thus be used to perform perceptual enhancement at different strengths and for images at any dimension smaller than the output dimension of the diffusion network (we used images with dimensions 1024 × 1024): for larger scaling factors, simply increase tBoom or cascade the result until the desired fidelity is achieved. This also avoids the issue of needing to train different networks depending on the magnitude of enhancement desired or the given factor of downsampling, and it reflects the practical reality that we often want to "enhance" images that may not conform to a specific scale factor or dimensionality. As with the previous applications, Boomerang's generated images look realistic, yet Boomerang requires no additional fine-tuning or training.

![12_image_0.png](12_image_0.png)

Figure 6: Cascaded Boomerang perceptual resolution enhancement with tBoom = 50.

## 7 Related Work

The method with the closest connection to our work, in terms of the underlying algorithmic procedure involving the reverse process, is SDEdit (Meng et al., 2022). This method uses pretrained unconditional diffusion models to edit or generate photorealistic images from sketches or non-natural images. While SDEdit, like Boomerang, utilizes the distinct characteristics of the reverse diffusion process to accomplish its goal, it differs fundamentally from Boomerang in that we focus on local sampling, as opposed to image editing. Specifically, we start with a natural image and generate nearby natural images on the learned manifold, and we show its applicability to data anonymization, data augmentation, and image perceptual quality enhancement. In contrast, SDEdit lacks the ability for local sampling, as it requires an input sketch to generate a variation of the original natural image.
Additionally, the SDEdit algorithm executes the complete forward and reverse process with an altered noise schedule, whereas Boomerang solely carries out a partial forward and reverse process without modifying the noise schedule, which makes Boomerang more computationally efficient. Inspired by SDEdit, Haque et al. (2023) employ an image-conditioned diffusion model to edit scenes generated using NeRF (Mildenhall et al., 2021). The method shares the concept of starting the diffusion reverse process from an intermediate step, but it differs from our approach in that we carry out unconditioned local sampling rather than relying on a task-specific conditional diffusion model.

In an effort to reduce the sampling cost in diffusion models, Zheng et al. (2023) proposed initiating the reverse diffusion process from an intermediate step rather than from pure Gaussian noise. To generate samples from the diffusion model, the authors suggest utilizing another generative model, such as a VAE, to learn the distribution of noisy data at the intermediate step and employ the samples provided by this model for the reverse process. This methodology differs from our approach as it does not involve local sampling; instead, the authors employ a partial reverse diffusion process to expedite the sampling procedure.

In their work, Chung et al. (2022) aim to decrease the computational costs of solving inverse problems that are implicitly regularized by diffusion models. To achieve this, the authors propose an iterative scheme that involves updating the estimate by taking a gradient step using a data fidelity loss function, followed by projecting it into the range of a diffusion model. To expedite the projection, the authors run the reverse process from an intermediate noisy estimate obtained by partially forward diffusing the current estimate. In contrast, our approach for enhancing image perceptual quality differs from Chung et al. (2022) in that we compute the gradient step only once at the beginning, followed by projecting it onto the range of the diffusion model. This allows us to perform local sampling without modifying the reverse process, while Chung et al. (2022) alter the reverse process, making it conditional.

Other related work modifies the reverse diffusion process to perform data anonymization, augmentation, or upsampling. Recently, Klemp et al. (2023) proposed a technique for dataset anonymization utilizing pretrained diffusion models. Specifically, they remove sensitive components from an image and then use a pretrained Stable Diffusion inpainting model to fill in the removed details. In comparison to their approach, our proposed method is more controllable (through the use of the parameter tBoom), requires fewer iterations to execute (especially for low values of tBoom), and does not require the diffusion model to be specifically trained for inpainting purposes. Trabucco et al. (2023) present a novel data augmentation methodology that utilizes a pretrained text-to-image diffusion model. While Boomerang is applicable to any generic diffusion model, their approach specifically necessitates the use of a text-to-image model and, importantly, requires textual input to generate augmented images. Kawar et al. (2022) propose a novel approach that leverages pretrained unconditional diffusion models to remove JPEG artifacts. Specifically, their methodology involves modifying the reverse process by conditioning it on the JPEG-compressed image. In another related work, Lugmayr et al.
(2022) alter the reverse process to condition it on an image with occlusions to perform inpainting. Both of these methods, for JPEG artifact removal and image inpainting, utilize a partial reverse diffusion process; however, they modify the reverse process, whereas Boomerang does not make any modifications.

The image editing methodology proposed by Ackermann & Li (2022) presents a multi-stage upscaling process suitable for high-resolution images using the Blended Diffusion model (Avrahami et al., 2022). Specifically, this image editing approach involves passing a low-resolution image through an off-the-shelf super-resolution model, followed by the addition of noise and reverse diffusion from an intermediate stage using the text-guided Blended Diffusion model (Avrahami et al., 2022). While the usage of a partial reverse diffusion process is shared with Boomerang, this method is fundamentally different in that it aims to perform image editing instead of local sampling.

Finally, the recently introduced consistency models (Song et al., 2023) aim to reduce the computational complexity of diffusion models during inference. In their few-step method, a network is trained to perform the reverse process using a very coarse time discretization. The last step of the few-step method, which maps an intermediate noisy image to the final image manifold, is, in fact, an instance of Boomerang. In addition, the few-step method provides a means to trade off compute for sample quality. While this resembles the trade-off that we describe in our Boomerang-based method for perceptual image quality enhancement, in which tBoom trades off compute and data fidelity for perceptual quality, consistency models are essentially a model distillation approach and do not perform local sampling.

## 8 Conclusions

We have introduced the Boomerang algorithm, a straightforward and computationally efficient method for performing local sampling on an image manifold using pretrained diffusion models. Given an image from the manifold, Boomerang generates images that are "close" to the original image by reversing the diffusion process starting from an intermediate diffusion step initialized with the diffused original image. The choice of the intermediate diffusion step tBoom determines the degree of locality in Boomerang sampling, serving as a tuning parameter that can be adjusted based on the specific requirements of the downstream application. Boomerang can be run on a single GPU, without any re-training, modifications to the model, or changes to the reverse diffusion process. We showed the applicability of Boomerang to various tasks, such as anonymization, data augmentation, and perceptual enhancement. Future work includes further experiments on its efficacy for data augmentation, as well as applying the Boomerang algorithm to other data domains, such as audio and text. Finally, recent work has shown that diffusion models can work with non-stochastic transforms instead of additive Gaussian noise (Bansal et al., 2022; Daras et al., 2022), and evaluating the Boomerang algorithm with such diffusion models would provide further insight into the nature of local sampling with diffusion models.

## Acknowledgement

This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N0001418-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; DOE grant DE-SC0020345; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
## References

Johannes Ackermann and Minjun Li. High-resolution image editing via multi-stage blended diffusion, 2022. URL https://arxiv.org/abs/2210.12965.

Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 18208–18218, June 2022.

Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise. *arXiv preprint arXiv:2208.09392*, 2022.

Sam Bond-Taylor, Adam Leach, Yang Long, and Chris G. Willcocks. Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–1, 2021. doi: 10.1109/TPAMI.2021.3116668.

Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang, Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, and Chun-Yi Lee. Denoising likelihood score matching for conditional score-based data generation. *arXiv preprint arXiv:2203.14206*, 2022.

Hyungjin Chung, Byeongsu Sim, and Jong Chul Ye. Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 12403–12412, 2022. doi: 10.1109/CVPR52688.2022.01209.

Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G Dimakis, and Peyman Milanfar. Soft diffusion: Score matching for general corruptions. *arXiv preprint arXiv:2209.05442*, 2022.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 8780–8794, 2021.

Nikolaos Dionelis, Sotirios A. Tsaftaris, and Mehrdad Yaghoobi. OMASGAN: Out-of-distribution minimum anomaly score GAN for anomaly detection. In *2022 Sensor Signal Processing for Defence Conference (SSPD)*, pp. 1–5, 2022. doi: 10.1109/SSPD54131.2022.9896220.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 27, 2014.

Matej Grcić, Ivan Grubišić, and Siniša Šegvić. Densely connected normalizing flows. In *Advances in Neural Information Processing Systems*, volume 34, pp. 23968–23982, 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/c950cde9b3f83f41721788e3315a14a3-Paper.pdf.

Ayaan Haque, Matthew Tancik, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. InstructNeRF2NeRF: Editing 3D scenes with instructions. *arXiv preprint arXiv:2303.12789*, 2023.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. *Advances in Neural Information Processing Systems*, 30, 2017.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models.
In *Advances in Neural Information Processing Systems*, volume 33, pp. 6840–6851, 2020.

Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, University of Massachusetts, Amherst, 2007.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In *International Conference on Learning Representations*, 2018.

Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4401–4410, 2019.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020.

Bahjat Kawar, Gregory Vaksman, and Michael Elad. Stochastic image denoising by sampling from the posterior distribution. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops*, pp. 1866–1875, 2021.

Bahjat Kawar, Jiaming Song, Stefano Ermon, and Michael Elad. JPEG artifact correction using denoising diffusion restoration models. In *NeurIPS 2022 Workshop on Score-Based Methods*, 2022. URL https://openreview.net/forum?id=O3WJOt79289.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In *International Conference on Learning Representations*, 2014. doi: 10.48550/ARXIV.1312.6114.

Janusz Klejsa, Per Hedelin, Cong Zhou, Roy Fejgin, and Lars Villemoes. High-quality speech coding with sample RNN. In *ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 7155–7159, 2019. doi: 10.1109/ICASSP.2019.8682435.

Marvin Klemp, Kevin Rösch, Royden Wagner, Jannik Quehl, and Martin Lauer. LDFA: Latent diffusion face anonymization for self-driving applications. *arXiv preprint arXiv:2302.08931*, 2023.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. *Advances in Neural Information Processing Systems*, 33:17022–17033, 2020.

Zhifeng Kong and Wei Ping. On fast sampling of diffusion probabilistic models. In *ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models*, 2021.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A versatile diffusion model for audio synthesis. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=a-xFK8Ymz5J.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009.

Yewen Li, Chaojie Wang, Xiaobo Xia, Tongliang Liu, Xin Miao, and Bo An. Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical VAE. In *Advances in Neural Information Processing Systems*, volume 35, pp. 7383–7396, 2022.

Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. RePaint: Inpainting using denoising diffusion probabilistic models, 2022.

Troy Luhman and Eric Luhman. Improving diffusion model efficiency through patching. *arXiv preprint arXiv:2207.04316*, 2022.

Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations.
In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=aBsCjcPu_tE.

Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. *Commun. ACM*, 65(1):99–106, Dec 2021. ISSN 0001-0782. doi: 10.1145/3503250.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. *arXiv preprint arXiv:2204.06125*, 2022.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37, pp. 1530–1538, 2015.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 10684–10695, 2022.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. *International Journal of Computer Vision*, 115(3):211–252, 2015.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022a.

Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J. Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–14, 2022b. doi: 10.1109/TPAMI.2022.3204461.

Axel Sauer, Katja Schwarz, and Andreas Geiger. StyleGAN-XL: Scaling StyleGAN to large diverse datasets. In *ACM SIGGRAPH 2022 Conference Proceedings*, pp. 1–10, 2022.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In *2017 IEEE Symposium on Security and Privacy (SP)*, pp. 3–18, 2017. doi: 10.1109/SP.2017.41.

Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In *Advances in Neural Information Processing Systems*, volume 33, pp. 12438–12448, 2020.

Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models, 2023. URL https://arxiv.org/abs/2303.01469.

Jasper Tan, Blake Mason, Hamid Javadi, and Richard G Baraniuk. Parameters or privacy: A provable tradeoff between overparameterization and membership inference. *arXiv preprint arXiv:2202.01243*, 2022.

Brandon Trabucco, Kyle Doherty, Max Gurinas, and Ruslan Salakhutdinov. Effective data augmentation with diffusion models, 2023.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. *International Journal of Computer Vision*, 128(7):1867–1888, Mar 2020. doi: 10.1007/s11263-020-01303-4. URL https://doi.org/10.1007%2Fs11263-020-01303-4.

Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. In *Proc. 9th ISCA Workshop on Speech Synthesis Workshop (SSW 9)*, pp. 125, 2016.

Mei Wang and Weihong Deng. Deep face recognition: A survey. *Neurocomputing*, 429:215–244, 2021. doi: 10.1016/j.neucom.2020.10.081.
URL https://www.sciencedirect.com/science/article/pii/S0925231220316945.

Sebastien C. Wong, Adam Gatt, Victor Stamatescu, and Mark D. McDonnell. Understanding data augmentation for classification: When to warp? In *2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)*, pp. 1–6, 2016. doi: 10.1109/DICTA.2016.7797091.

Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, and Ming-Hsuan Yang. GAN inversion: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, and Mario Fritz. Inclusive GAN: Improving data and minority coverage in generative models. In *European Conference on Computer Vision (ECCV)*, 2020. doi: 10.1007/978-3-030-58542-6_23.

Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. *CoRR*, abs/1801.03924, 2018. URL http://arxiv.org/abs/1801.03924.

Huangjie Zheng, Pengcheng He, Weizhu Chen, and Mingyuan Zhou. Truncated diffusion probabilistic models and diffusion-based adversarial auto-encoders. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=HDxgaKk956l.

## A Appendix

## A.1 Boomerang-Generated Images Via The Stable Diffusion Model

Here we present additional images created via the Boomerang method that show the evolution of the predicted image as we increase tBoom. These images are generated via the pretrained Stable Diffusion model (Rombach et al., 2022), where noise is added in the latent space rather than the image space during the forward process. Figures 7 to 9 showcase this: the images on the bottom rows show the noisy latent variables, whereas the ones on the top rows show the Boomerang predictions with increasing amounts of added noise from left to right, except for the rightmost image, which is created using an alternate prompt.

## A.2 Vanilla Boomerang Super-Resolution

Here we present the result of Boomerang perceptual resolution enhancement as described in Section 6.2. Figure 11 illustrates the results for image perceptual enhancement. The top-left image in this figure shows the low-resolution image, and the top-right and bottom-left images are the result of vanilla Boomerang perceptual enhancement with tBoom = 100 and tBoom = 150, respectively. When compared with the high-resolution image in the bottom-right corner of Figure 11, we observe that the resulting image with tBoom = 100 is plausible, while the result with tBoom = 150 looks high-resolution but is inconsistent with the ground-truth image.

## B Trade-Off Between Accuracy And Boomerang Distance In Data Augmentation

The observed benefits of Boomerang in data augmentation presented in Section 5 are for specific values of tBoom/T. One can observe that there would be no benefit in picking tBoom/T = 0%, because then no modification of the training data is performed and hence the augmented data will be the same as the non-augmented data, i.e., the training set. On the other hand, if we use tBoom/T = 100%, we are essentially just sampling purely synthetic images from the diffusion model and using those for data augmentation. As shown in Section 5, there exists a value of tBoom/T that is better than both of these extremes. In Figure 14 we show accuracy on the CIFAR-100 classification task for intermediate values of tBoom/T, demonstrating a trade-off between accuracy and tBoom/T.
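The augmentation protocol swept in Figure 14 is the one from Section 5: Boomerang data is generated offline with a fixed tBoom, and training then draws the real or the Boomerang version of each example with probability 0.5. A minimal PyTorch sketch of one way to implement this; the class and argument names are illustrative rather than the authors' code, and we sample per example, whereas a per-epoch choice as described in the paper would be a one-line change:

```python
import random
from torch.utils.data import Dataset

class BoomerangAugmentedDataset(Dataset):
    """Mixes a real dataset with its pre-generated Boomerang counterpart."""

    def __init__(self, real_dataset, boomerang_dataset, p_boomerang=0.5):
        # Boomerang images are generated before training because diffusion
        # sampling is too slow for on-the-fly augmentation.
        assert len(real_dataset) == len(boomerang_dataset)
        self.real = real_dataset
        self.boom = boomerang_dataset
        self.p = p_boomerang

    def __len__(self):
        return len(self.real)

    def __getitem__(self, idx):
        # Draw the Boomerang-augmented version with probability p (0.5 in
        # the paper); labels are shared between the two versions.
        source = self.boom if random.random() < self.p else self.real
        return source[idx]
```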
![18_image_0.png](18_image_0.png)

Figure 7: The Boomerang method using Stable Diffusion (T = 1000), as in Figure 1, with an image of a cat.

![19_image_0.png](19_image_0.png)

Figure 8: The Boomerang method using Stable Diffusion (T = 1000), as in Figure 1, with an image of a bedroom.

![19_image_1.png](19_image_1.png)

Figure 9: The Boomerang method using Stable Diffusion (T = 1000), as in Figure 1, with an image of Albert Einstein.

![20_image_0.png](20_image_0.png)

Figure 10: Distribution of facial recognition embedding distances from VGG-Face for the LFWPeople and CelebA-HQ datasets.

![20_image_1.png](20_image_1.png)

Figure 11: Boomerang perceptual resolution enhancement for different values of tBoom.

![21_image_0.png](21_image_0.png)

Figure 12: Cascaded Boomerang perceptual resolution enhancement. For the top row, notice how the best quality image is seen after 6 cascade steps, with tBoom = 50. After 8 cascades, details such as the teeth begin to be removed. On the bottom row, however, ncasc = 8 provides better results and more detail compared to previous steps. This shows the value of introducing the cascade method: different images may have better results with more cascade steps than others.

![22_image_0.png](22_image_0.png)

Figure 13: Perceptual resolution enhancement with Boomerang. Here, we show that by adjusting tBoom, we can easily control the strength of the perceptual resolution enhancement applied, where larger values are more appropriate for lower-resolution images.

![23_image_0.png](23_image_0.png)

Figure 14: The trade-off between accuracy and tBoom/T has a maximum beneficial peak somewhere between 0% and 100% data augmentation. At 0% we have essentially no data augmentation because Boomerang does not modify the augmented training data. At 100% we augment with purely synthetic data, which typically has lower quality than the training data.

![23_image_1.png](23_image_1.png)

Figure 15: Using Boomerang with Stable Diffusion on the LFWPeople and CelebA-HQ datasets results in larger FID values for larger values of tBoom/T. This is in part because real images are better than synthetic ones, even if one uses an excellent diffusion model such as Stable Diffusion.

![24_image_0.png](24_image_0.png)

Figure 16: Here we use the same original image from Figure 1 to compare Boomerang local sampling (top) with UnCLIP (https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip) and Image Variations (https://huggingface.co/lambdalabs/sd-image-variations-diffusers), which are Stable Diffusion models fine-tuned to accept CLIP guidance to produce variations of an input image. The same two random seeds are used across each method. For Boomerang and UnCLIP, we used the default guidance of 7.5 and the prompt "picture of a person". UnCLIP has a noise level parameter that behaves similarly to tBoom. Image Variations cannot accept prompts and thus only has guidance as its input parameter. Unlike either CLIP guidance method, Boomerang can smoothly interpolate between locally sampling very closely and very distantly from the original data point while maintaining consistent image quality. For example, notice that neither CLIP guidance method can recover the original image.
Review 1:

Summary: In this paper, the authors propose a simple technique to do "local sampling" of images from a pre-trained diffusion model, where the term "local sampling" roughly translates to sampling images related to a given image. The idea is very simple: instead of inverting an image all the way to noise, you stop mid-way and return to the image manifold (giving the approach its name: Boomerang). Since we return mid-way, the image does not completely lose its perceptual quality; however, since the generative process is stochastic, the generated image differs from the original image. Importantly, the simple idea can be implemented with minimal changes. The authors employ this simple technique for constructing privacy-preserving datasets, augmenting datasets, and enhancing image resolution.

Strengths and Weaknesses:

### Strengths

- The technique is extremely simple and can be easily implemented by any practitioner to augment datasets and create similar-looking pictures without much overhead.
- It is important to demonstrate the applications in a paper with a simple technique. The authors nailed two important applications: preserving privacy and data augmentation; both are extremely important use cases.
- The empirical evidence for how this simple technique can generate samples that are close to the original image is compelling.

### Weaknesses

- The biggest flaw I see is being unable to control any part of the generation, in the sense that you cannot control what part of the image is being anonymized. Similarly, you cannot control how (or if) each cascading boomerang enhances the image resolution; it will produce a similar image to the output of the previous cascade, but you cannot guide the generation towards enhanced resolution. Therefore, the anonymization and super-resolution applications feel more like a stretch to me.
- Can the authors precisely clarify the changes between the anonymization and data augmentation experiments? In particular, if you took the data you used from the augmentation experiments and trained only on the augmented data, is the performance the same as with the anonymized data?
- How do you decide if the image is being anonymized or prepared for data augmentation? Is the augmented data always anonymized? Is the anonymized data always augmented?
- Also, there are no error bars in any experiment. The absence of error bars is generally a concern. Could the authors comment on the lack of variance estimates? Isn't the data generation process stochastic, and shouldn't it lead to multiple answers no matter where you apply Boomerang?
- While data augmentation is the primary application, it is fairly limited by being unable to control what the diffusion model would generate. If someone wants to use Boomerang to create data, then adjusting the $t_{\mathrm{Boom}}$ parameter has a huge cost. You first sample a huge dataset with $t_{\mathrm{Boom}} = 100$. You train and check the performance. It did not work great. So, you do that again with $t_{\mathrm{Boom}} = 200$, and so on.
- Also, what is the synthetic data in Table 2? Is this the synthetic data generated from some pre-trained diffusion model? How do we know this generated synthetic data is relevant to the classification problem? This is more of a curiosity than a criticism. I want to understand the comparison.
- In Table 3, you mention that the number of cascades was selected based on the "best images." What are the best images? Who decides the best images?
- In the last paragraph of Section 6.2, you talk about the time it takes to generate an image with Boomerang. On what computing architecture are you reporting these numbers?
- In the perceptual resolution enhancement, can the authors comment on what fraction of the images enhanced using Boomerang falls below the minimum distance threshold from Figure 5? Can this be a good metric to demonstrate super-resolution?
- The authors do a good job of not complicating a simple procedure. Well, for the most part. The purpose behind equation 7 was unclear. In the text, you state that the probability of $p(x_0')$ is the same as $p(x_0)$. Why is that true? Also, what is the inference from this small analysis? That the variability increases with increasing $t_{\mathrm{Boom}}$? Don't we already know this without the confusing math?

Overall, I feel that this paper is born out of a simple and beautiful observation. However, I wonder if this beautiful observation is as important as the authors claim. Clarifying the above questions can probably convince me otherwise.

Requested Changes: See the weaknesses section.

Broader Impact Concerns: The most important concern is in terms of privacy-preserving applications. As far as I can see, there is no discussion on ensuring that the proposed method can preserve the user's privacy. Consider this scenario: a person deploys a system based on this technique where they take photos of users and "anonymize" them. However, they did not boomerang for long enough, and now the identity of the person in the photo is still revealed. My concern may seem unwarranted to the authors. However, this technique is so simple and so easy to employ that anybody with access to a GPU can run this "anonymization" scheme. So, I believe there has to be a rigorous discussion about the claims of anonymity.

==================================================

Review 2:

Summary: The authors propose a framework, termed Boomerang, that can be used to generate variations of a given image using a pre-trained diffusion model. The algorithm works by adding some amount of noise to the input image and then denoising the image using the reverse SDE initialized at the intermediate noise level. One can control how close the generated images will be to the original image by changing the amount of noise added. The authors show how this framework can be used to perform data augmentation, anonymization, and image enhancement.

Strengths and Weaknesses: I think the idea is very simple, yet effective. The applications of the framework are really important. Data anonymization becomes more and more important, especially as it becomes clear that diffusion models trained on LAION (such as Stable Diffusion) can memorize training images. The synthetic data application is also very important to avoid overfitting the dataset (in the case of small datasets) and to potentially mitigate biases in the model (e.g., by balancing different classes in the dataset). Finally, I really like the idea of using a pre-trained diffusion model to perform image enhancement without further training.

My main concern with this work is the novelty of the proposed method. I believe that the trick of producing image variations by adding noise and then denoising has been known to the community of people working with Stable Diffusion and other foundation models.
For example, this feature seems to be supported out-of-the-box in this well-known open-source repo: [https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2918](https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2918). Additionally, there are other ways to perform image variations too, e.g., by getting the CLIP embedding of the input image and then using it to guide the forward process. Adjusting the guidance scale has a similar effect to controlling the amount of noise added. I don't see any comparison with this method. Again, producing image variations seems to be something that practitioners can already do and there are tools to do so, e.g., see [here](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations). Finally, an alternative way to produce image variations is to use Dreambooth or some other framework for personalized text-to-image generation. I understand that this framework requires some additional work (finetuning the model), but the cost of this seems to be trivial with LoRA and other similar tricks widely used by practitioners.

My overall comment is that I get the sense that producing image variations has been a topic of interest, at the very least to the community of practitioners, and I believe that the authors do not establish well enough the connection to the existing efforts. I tried to follow the references in these open-source implementations, and it seems that many of them credit the SDEdit paper, which is very similar to the proposed framework. I appreciated the discussion of the differences with SDEdit in the main paper, but I am not convinced that the idea is significantly different. The authors mention that in the SDEdit paper the authors run the whole reverse process with a modified noise schedule. However, the initialization of both methods is the same (a noisy version of the reference image), and even Boomerang can be thought of as SDEdit with a modified noise schedule that spends no time in the first part of the reverse SDE. At the very least, I think that a comparison with SDEdit is required.

Another weakness of the paper is that for each of the applications there are alternative ways to achieve it, and there is no thorough comparison to these approaches. For example, there have been many recent works about diffusion models and synthetic data augmentation. It would be useful to thoroughly explain the differences to such methods and provide numerical comparisons. Further, data anonymization can be achieved by projecting an image to the latent space of a GAN (image inversion) and then perturbing this latent. Finally, the data anonymization might not work if the image to be anonymized was in the training set of the foundation model.

Requested Changes: My main concerns were listed in the section above. The main thing I would like to see is a discussion and a potential comparison with alternative methods for image variations that are widely used by the community, including SDEdit and the image variations using CLIP guidance.

Broader Impact Concerns: I do not have any ethical concerns that are specific to this paper.

==================================================

Review 3:

Summary: The authors present Boomerang: a method for locally sampling from the data manifold using pre-trained diffusion models.
More specifically, local sampling is achieved by noising the input data using the diffusion forward process and using the noisy sample as an initialization for the reverse process to generate the final sample. The degree of variability between the input and the final sample can be controlled by the amount of noise and is a hyperparameter of the method. The authors demonstrate the effectiveness of the method in a number of applications.

Strengths and Weaknesses:

Strengths:
1) The presented method is simple and intuitive. Moreover, having a single hyperparameter and compatibility with pretrained diffusion models makes adoption easy for practitioners.
2) The authors do a nice job of empirically validating the method across a number of important use cases. More specifically, the data augmentation results in Section 5 demonstrate that augmenting datasets with purely synthetically generated samples might be suboptimal, which is interesting.

Weaknesses: See Requested Changes

Requested Changes:
1) For the qualitative results in Fig. 4, it looks like the visual quality of the samples becomes worse as the noise added to the input image increases (for instance, at 70% some blurry artifacts are evident). It would be nice if the authors could include some quantitative results (in terms of FID) comparing the quality of the original images with the generated samples at different noising scales (or t_boom values).
2) For the results in Section 5, I think it might be worth including an experiment demonstrating the impact of augmenting with different t_boom on the classification accuracy for a small dataset like CIFAR-10/100.
3) For the super-resolution results, is there a general guideline for selecting t_boom for different downsampling factors? Intuitively, I would expect t_boom to be larger for larger downsampling factors, since high-frequency details are added towards the end of the reverse process.
4) It is common to use deterministic samplers like DDIM, DPM-Solver, etc. for fast diffusion model sampling. It might be worth adding an experiment demonstrating the robustness of the proposed method within the existing experimental setup (e.g., super-resolution) in the context of these samplers.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Two of the three reviewers acknowledged that the author response and paper revision have addressed most of the concerns raised in the original reviews and are leaning towards acceptance. One outstanding concern from the third reviewer is about the applicability of the method for anonymization applications and the lack of sufficient experimental support for this, which I agree with. In particular, it is not clear how the proposed method can anonymize a sensitive attribute while retaining other attributes of interest, since the only controllability it allows is through controlling the variance of the noise added to the data. I agree with this concern and suggest the authors revise the paper to clearly describe the limitations of the method when used for the anonymization task.

==================================================
# HICO-DET-SG and V-COCO-SG: New Data Splits For Evaluating The Systematic Generalization Performance Of Human-Object Interaction Detection Models

Anonymous authors

Paper under double-blind review

## Abstract

Human-Object Interaction (HOI) detection is a task to localize humans and objects in an image and predict the interactions in human-object pairs. In real-world scenarios, HOI detection models are required to generalize systematically, i.e., to novel combinations of objects and interactions, because the train data are expected to cover a limited portion of all possible combinations. However, to our knowledge, no open benchmarks or previous work exist for evaluating the systematic generalization performance of HOI detection models. To address this issue, we created two new sets of HOI detection data splits named HICO-DET-SG and V-COCO-SG, based on the HICO-DET and V-COCO datasets, respectively. When evaluated on the new data splits, representative HOI detection models performed much more poorly than when evaluated on the original splits. This reveals that systematic generalization is a challenging goal in HOI detection. By analyzing the evaluation results, we also gain insights for improving the systematic generalization performance and identify four possible future research directions. We hope that our new data splits and presented analysis will encourage further research on systematic generalization in HOI detection.

## 1 Introduction

Human-Object Interaction (HOI) detection is a task to localize humans and objects in an image and predict the interactions in human-object pairs. HOI detection has been attracting large interest in computer vision, as it is useful for various applications such as self-driving cars, anomaly detection, and the analysis of surveillance video. The outputs of this task are typically represented as <human, **interaction**, object> triplets. The publication of HOI detection datasets (Koppula et al., 2013; Everingham et al., 2014; Gupta & Malik, 2015; Chao et al., 2018; Gu et al., 2018; Chiou et al., 2021) has triggered a large number of studies on this task (Gao et al., 2018; Gkioxari et al., 2018; Li et al., 2019; Gao et al., 2020; Li et al., 2020; Liao et al., 2020; Kim et al., 2021; Chen & Yanai, 2021; Tamura et al., 2021; Zhang et al., 2021; Chen et al., 2021; Zou et al., 2021; Zhang et al., 2022; Liao et al., 2022; Ma et al., 2023).

HOI detection is an advanced computer vision task, as it requires a model not only to localize humans and objects but also to predict interactions between them. Moreover, humans can have different interactions with the same object (e.g., **wash** a horse and **walk** a horse) and the same interaction with different objects (e.g., **wash** a horse and **wash** a car). In real-world scenarios, the train data will likely cover a limited portion of all possible combinations of objects and interactions of interest. Thus, HOI detection models must be generalizable to novel combinations of known objects and interactions. Such generalization to novel combinations of known concepts, called systematic generalization, is a highly desired property for machine learning models.
The systematic generalization performance is evaluated in various tasks such as sequence-to-sequence parsing (Lake & Baroni, 2018), language understanding (Ruis et al., 2020; Bergen et al., 2021), visual properties extraction (Ullman et al., 2021), and visual question answering (Johnson et al., 2017; Bahdanau et al., 2019; 2020; D'Amario et al., 2021; Hsu et al., 2022; Yamada et al., 2023; Kamata et al., 2023). However, to our knowledge, no open benchmarks or evaluation studies have been published for systematic generalization in HOI detection.

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of a data split for evaluating the systematic generalization performance of Human-Object Interaction (HOI) detection models. All images and annotations are selected from HICO-DET-SG split3. The train data consists of combinations such as <human, **wash**, car>, <human, **wash**, elephant>, <human, walk, horse>, and <human, **straddle**, horse>. After being trained on such data, an HOI detection model is tested on whether it can generalize to novel combinations in the test data such as <human, **wash**, horse>. To systematically generalize to such novel combinations, the model must learn the visual cues of the object (in this case, horse) and the interaction (in this case, **wash**) independently of the specifically paired interaction/object classes in the train data.

The existing HOI detection datasets cannot be used as they are for evaluating the systematic generalization performance for novel combinations of known objects and interactions, because their train and test data contain the same object-interaction combinations. On such datasets, a model might predict an interaction class based solely on the paired object class, or an object class based solely on the paired interaction class, rather than by capturing visual cues such as human posture and positional relations.

In this paper, we introduce two new sets of HOI detection data splits named HICO-DET-SG and V-COCO-SG, which we created based on the HICO-DET (Chao et al., 2018) and V-COCO (Gupta & Malik, 2015) datasets, respectively, for evaluating the systematic generalization (SG) capabilities of HOI detection models. An illustration of such a data split is shown in Figure 1. To ensure that the test performance is not an artifact of a specific selection of combinations in the train and test data, we prepared three distinct train-test splits of HICO-DET-SG and V-COCO-SG, respectively. We evaluated recent representative HOI detection models on our data splits and found a large degradation from the performances on the original data splits. We also analyzed the results and gained insights for improving the systematic generalization performance of HOI detection models.

Our contributions are summarized below:

- We created two new sets of HOI detection data splits with no overlapping object-interaction combinations in train and test data, which serve for studying systematic generalization in HOI detection.

- We evaluated the systematic generalization performances of representative HOI detection models on our new data splits and found large decreases in the test performance from those on the original splits; this reveals that systematic generalization is a challenging goal in HOI detection.
- We derived four possible future directions to improve the systematic generalization performance in HOI detection, based on the analysis of our experimental results and considerations of related work: 1) increasing the diversity of the train data, 2) introducing two-stage or other modular structures into a model, 3) utilizing pretraining, and 4) integrating commonsense knowledge from external natural language resources.

The JSON files determining HICO-DET-SG and V-COCO-SG and the source code that created the files are publicly available at a GitHub repository. For the review, they can be accessed via the following anonymized repository https://anonymous.4open.science/r/hoi_sg-58CE/.

## 2 Related Work

In this section, we first briefly review the HOI detection task. We then explain the work related to the new research topic dealt with in this study, namely, systematic generalization in HOI detection.

## 2.1 Overview Of Human-Object Interaction (HOI) Detection

As explained in the Introduction, HOI detection is a task to localize humans and objects in an image and predict the interactions between them. The HICO-DET (Chao et al., 2018) and V-COCO (Gupta & Malik, 2015) are the two most popular datasets for HOI detection. Preceding the HICO-DET dataset, the HICO dataset (Chao et al., 2015) was created for the HOI recognition task, which classifies an object and a human's interaction with that object in an image without bounding boxes. Subsequently, the HICO-DET dataset was created based on HICO by adding bounding boxes around the humans and objects in the images. Moreover, one image in the dataset is associated with multiple human, object, and interaction annotations. The V-COCO dataset was created based on the Microsoft COCO object detection dataset (Lin et al., 2014) by adding annotations of interactions (verbs). Statistics of the HICO-DET and V-COCO datasets are given in Table 1.

By definition, HOI detection consists of two tasks: localizing the human and object instances in a given image and predicting the interactions between them. Accordingly, there are two types of model architectures to solve the HOI detection task: two-stage models and one-stage models. Two-stage models (Gao et al., 2018; Li et al., 2019; Gao et al., 2020; Li et al., 2020; Liao et al., 2022; Zhang et al., 2022) detect humans and objects at the first stage and then classify the interactions in all human-object pairs at the second stage. Aiming to improve both the instance detection and interaction classification via multi-task learning and to reduce inference time, one-stage models (Gkioxari et al., 2018; Liao et al., 2020) have been proposed recently and gained popularity over two-stage models. Some recent one-stage models (Kim et al., 2021; Tamura et al., 2021; Chen & Yanai, 2021; Zhang et al., 2021; Chen et al., 2021; Zou et al., 2021; Ma et al., 2023) are based on the Transformer architecture (Vaswani et al., 2017), which is designed to capture long-range relationships in an input and has been successful in natural language processing (Kalyan et al., 2021; Lin et al., 2022) and computer vision (Khan et al., 2021).

HOI detection is closely related to Scene Graph Generation (SGG) (Johnson et al., 2015), which is a task to generate a visually-grounded scene graph that most accurately correlates with an image. A scene graph consists of nodes corresponding to object bounding boxes with their object categories, and edges indicating the pairwise relations between objects.
While HOI detection is closely related to SGG, it differs from SGG in two main ways. First, the subjects in SGG can be of any type (humans, cars, etc.), whereas in HOI detection they are only humans. Second, the relations in SGG can be both positional relations (e.g., next to) and actions (e.g., play with), whereas in HOI detection they only consist of the latter. Therefore, HOI detection can be regarded as a subset of SGG in a sense. On the other hand, HOI detection can also be regarded as a task focusing on measuring a model's ability for complex scene understanding, because action recognition requires additional information, not merely the locations of humans and objects.

## 2.2 Studies Related To Systematic Generalization In HOI Detection

Systematic generalization (Lake & Baroni, 2018; Bahdanau et al., 2019; Ruis et al., 2020; Bahdanau et al., 2020; Bergen et al., 2021; D'Amario et al., 2021; Yamada et al., 2023; Kamata et al., 2023), also referred to as compositional generalization (Johnson et al., 2017; Kim & Linzen, 2020; Hsu et al., 2022) or combinatorial generalization (Vankov & Bowers, 2020; Ullman et al., 2021), is a special case of Out-of-Distribution (OoD) generalization, i.e., generalization to data distributions that differ from the train data (Teney et al., 2020; Hendrycks et al., 2021; Shen et al., 2021; Ye et al., 2022). Among the many types of OoD generalization, systematic generalization has been particularly regarded as a hallmark of human intelligence and contrasted with the properties of the artificial neural networks of each era (Fodor & Pylyshyn, 1988; Marcus, 2001; van der Velde et al., 2004; Lake et al., 2017; 2019; Baroni, 2020; Smolensky et al., 2022). Recently, the systematic generalization capability of deep neural networks, including Transformer-based models, has been actively studied in various tasks, as explained in the Introduction.

In HOI detection, the HICO-DET dataset provides a rare-triplets evaluation to measure the few-shot generalization ability of models (rare triplets are defined as those which appear fewer than 10 times in the train data). Generalization to rare triplets is a type of OoD generalization, and some existing work (Baldassarre et al., 2020; Ji et al., 2021) has attempted to improve this performance. However, to our knowledge, no benchmarks or previous work have tackled systematic generalization, i.e., zero-shot generalization, in HOI detection. We present the first data splits for evaluating the systematic generalization performance, together with benchmark results of representative HOI models, in this paper.

In SGG, there is some previous work evaluating (Tang et al., 2020) and improving (Lu et al., 2016; Kan et al., 2021) systematic generalization for new combinations of subject, relation, and object classes under the name of zero-shot generalization. All these studies revealed large performance degradations in systematic generalization compared to in-distribution generalization (generalization within the same combinations as the train data) unless specific techniques are intentionally applied. As explained in the previous subsection, HOI detection can be regarded as a subset of SGG focusing on measuring a model's capability for complex scene understanding. Thus, we regard the improvement of systematic generalization performance in HOI detection as an early step toward better models for SGG and other visual understanding tasks.
## 3 HICO-DET-SG And V-COCO-SG

In this section, we first explain the creation process of HICO-DET-SG and V-COCO-SG, the novel data splits for evaluating the systematic generalization performance in the HOI detection task. We then present the statistics of HICO-DET-SG and V-COCO-SG.

## 3.1 The Creation Process Of The Systematic Generalization (SG) Splits

When designing a pair of train data and test data of a systematic generalization (SG) split, we disallowed overlapping object-interaction combination classes between the train and test data, so that HOI detection models are required to generalize to novel combinations. For example, in Figure 1, the train data consists of combinations such as <human, **wash**, car>, <human, **wash**, elephant>, <human, **walk**, horse>, and <human, **straddle**, horse>. An HOI detection model is tested on whether it can generalize to novel combinations in the test data, such as <human, **wash**, horse>. In the train data, we ensure that every object class is paired with multiple interaction classes, and that every interaction class is paired with multiple object classes. This split design makes it possible for a model to learn the concepts of object/interaction themselves independently from the specific interaction/object classes paired in the train data.

To ensure that the test performance is not an artifact of a specific selection of combinations in the train and the test data, we prepared three distinct train-test splits of HICO-DET-SG and V-COCO-SG, respectively. When designing the test data, we eliminated images containing the same object-interaction combination classes as the train data. Consequently, the SG splits contain fewer images and HOIs in total than the original splits.

Table 1: Statistics of the HICO-DET-SG and V-COCO-SG as well as the original HICO-DET and V-COCO. The train and test data of the systematic generalization (SG) splits are composed of non-overlapping object-interaction combination classes. The test data of the SG splits contain fewer images and HOI triplets than the original test data because, when designing the test data, we eliminated images for which the object-interaction combination classes also exist in the train data.

| Data splits | # of images (train) | # of images (test) | # of HOI triplets (train) | # of HOI triplets (test) | # of object-interaction comb. classes (train) | # of object-interaction comb. classes (test) |
|---|---|---|---|---|---|---|
| Original HICO-DET | 38,118 | 9,061 | 117,871 | 33,405 | 600 | 600 |
| HICO-DET-SG split1 | 38,312 | 8,515 | 119,331 | 14,475 | 540 | 60 |
| HICO-DET-SG split2 | 39,213 | 7,656 | 122,299 | 14,811 | 540 | 60 |
| HICO-DET-SG split3 | 40,672 | 6,229 | 120,096 | 8,994 | 540 | 60 |
| Original V-COCO | 5,400 | 4,946 | 14,153 | 12,649 | 228 | 228 |
| V-COCO-SG split1 | 7,297 | 2,850 | 18,214 | 3,872 | 160 | 68 |
| V-COCO-SG split2 | 7,057 | 3,066 | 15,644 | 4,322 | 160 | 68 |
| V-COCO-SG split3 | 6,210 | 3,888 | 10,951 | 6,244 | 160 | 68 |

The creation process of the SG splits is further detailed in Appendix A.

## 3.2 Statistics Of The HICO-DET-SG And V-COCO-SG

The new HICO-DET-SG data splits were created based on the HICO-DET dataset (Chao et al., 2018) as explained above. The statistics of the original HICO-DET dataset and the HICO-DET-SG data splits are given in Table 1 (upper half). The original HICO-DET dataset contains 80 classes of objects, 117 classes of interactions, and 600 classes of object-interaction combinations.
In the HICO-DET-SG splits, 540 out of 600 object-interaction combination classes are assigned to the train data; the remaining 60 classes are assigned to the test data.

The new V-COCO-SG data splits were created by the same process based on the V-COCO dataset (Gupta & Malik, 2015). The statistics of the original V-COCO dataset and the V-COCO-SG splits are given in Table 1 (lower half). The original V-COCO dataset contains 80 classes of objects, 29 classes of interactions, and 228 classes of object-interaction combinations. In the V-COCO-SG splits, 160 out of 228 object-interaction combination classes are assigned to the train data; the remaining 68 classes are assigned to the test data.

## 4 Experimental Setups For Evaluating Representative HOI Detection Models

This section describes the experimental setups for evaluating the systematic generalization ability of recent representative HOI detection models. Subsection 4.1 explains the evaluated models and the reasons for their selection. Subsection 4.2 explains the experimental conditions in detail.

## 4.1 HOI Detection Models

We evaluated the systematic generalization performance of four representative HOI detection models: HOTR (Kim et al., 2021), QPIC (Tamura et al., 2021), FGAHOI (Ma et al., 2023), and STIP (Zhang et al., 2022). The characteristics of the four models are shown in Table 2. All models except STIP adopt the recently popularized one-stage architecture. The backbone (feature extraction) network of each model was pretrained on the object detection task using the Microsoft COCO dataset (Lin et al., 2014). The encoders and decoders of HOTR, QPIC, and STIP were also pretrained on the object detection task using the COCO dataset. However, the encoder and decoder of FGAHOI cannot be pretrained on the object detection task because there are some modifications compared to its base object-detection model, namely, deformable DETR (Zhu et al., 2021). Therefore, we report the results of HOTR, QPIC, and STIP both with and without pretraining the encoder and decoder on the object detection task in Figure 2 of Section 5 and Tables 3 and 4 of Appendix B for the sake of a fair comparison. The details of each model are given below.

| | HOTR | QPIC | FGAHOI | STIP |
|---|---|---|---|---|
| Architecture type | One-stage, parallel | One-stage, end-to-end | One-stage, end-to-end | Two-stage |
| Feature extractor | CNN | CNN | Multi-scale Transformer | CNN |
| Base model | DETR | DETR | deformable DETR | DETR |
| mAP (%) on original HICO-DET | 25.73 | 29.90 | 37.18 | 32.22 |
| mAP (%) on original HICO-DET (rare) | 17.34 | 23.92 | 30.71 | 28.15 |
| mAP (%) on original V-COCO | 63.8 | 61.0 | 61.2 | 70.65 |

Table 2: Characteristics of the four HOI detection models: HOTR, QPIC, FGAHOI, and STIP. The upper half of the table describes the architecture types, feature extractors, and base models of the four models. The lower half gives the mean average precision (mAP, where higher values are desired) on the original HICO-DET and V-COCO reported in the original papers of each model.

HOTR. HOTR (Human-Object interaction detection TRansformer) (Kim et al., 2021) is among the first Transformer-based models for HOI detection. This model adopts a one-stage parallel architecture and consists of a CNN-based backbone (feature extractor), a shared encoder, an instance (human + object) decoder, and an interaction decoder.
To obtain the matching between the instance and interaction decoder outputs, three independent feed-forward networks, named HO Pointers, are trained to predict the correct <human, interaction, object> combination matching. Most of the network (except the HO Pointers) is based on DETR (DEtection TRansformer) (Carion et al., 2020), a Transformer-based object detector. Therefore, we can pretrain the backbone, shared encoder, instance decoder, and interaction decoder with DETR weights trained on the object detection task.

QPIC. QPIC (Query-based Pairwise human-object interaction detection with Image-wide Contextual information) (Tamura et al., 2021) is another Transformer-based HOI detection model proposed around the same time as HOTR. Like HOTR, it is based mainly on DETR, but unlike HOTR, it adopts a one-stage end-to-end architecture with a single decoder and no HO Pointers. The backbone, encoder, and decoder can be pretrained using the weights of DETR trained on the object detection task.

FGAHOI. FGAHOI (Fine-Grained Anchors for Human-Object Interaction detection) (Ma et al., 2023) had exhibited the best performance on the HICO-DET dataset on the Papers with Code leaderboard1 at the time of our experiments. The encoder of FGAHOI is trained to generate query-based anchors representing the points with high objectness scores in an image. To improve the computational efficiency, the decoder of FGAHOI predicts objects and interactions on the anchors alone and uses deformable DETR (Zhu et al., 2021), a modified version of DETR that computes self-attention from a limited range of feature maps. Therefore, the model can extract multi-scale feature maps from an image. Although the basic components described above are based on those of QAHOI (Chen & Yanai, 2021), FGAHOI can generate more fine-grained anchors than its predecessor due to the combination of three novel components: a multi-scale sampling mechanism, a hierarchical spatial-aware merging mechanism, and a task-aware merging mechanism. A novel training strategy (stage-wise training) is also designed to reduce the training pressure caused by the overly complex tasks done by FGAHOI. The backbone of the network can be pretrained because it is based on Swin Transformer (Liu et al., 2021), but the encoder and the decoder of FGAHOI cannot be pretrained because there are some modifications on deformable DETR (the object detection model). For this reason, the performances of FGAHOI are reported only for the non-pretrained encoder and decoder cases in Figure 2 of Section 5 and Tables 3 and 4 of Appendix B.

STIP. STIP (Structure-aware Transformer over Interaction Proposals) (Zhang et al., 2022) had exhibited the best performance on the V-COCO dataset on the Papers with Code leaderboard2 at the time of our experiments. This model has a two-stage architecture to perform HOI set prediction from non-parametric interaction queries detected by an independent instance detector. Therefore, the model can explore inter- and intra-interaction structures during early training epochs by fixing the correspondence between the interaction query and each target HOI. The backbone and object detector can both be pretrained using the weights of DETR pretrained on an object detection task because the first stage of the network is the same as DETR.

1 https://paperswithcode.com/sota/human-object-interaction-detection-on-hico
2 https://paperswithcode.com/sota/human-object-interaction-detection-on-v-coco
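To make the one-stage/two-stage distinction in this subsection concrete, the toy sketch below implements the second stage of a generic two-stage pipeline: instances from an independent stage-1 detector are paired exhaustively, and each human-object pair is passed to an interaction classifier. All names are illustrative, and STIP's actual implementation is considerably more elaborate (e.g., it builds learned interaction queries over the proposals):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2)
    label: str
    score: float

def second_stage(detections: List[Detection],
                 classify: Callable[[Detection, Detection], str]) -> List[tuple]:
    """Enumerate human-object pairs from stage-1 detections and classify
    the interaction of each pair with an arbitrary classifier."""
    humans = [d for d in detections if d.label == "human"]
    return [(h, classify(h, o), o)
            for h in humans for o in detections if o is not h]

# Toy usage with a dummy interaction classifier:
dets = [Detection((0, 0, 50, 100), "human", 0.9),
        Detection((40, 60, 120, 110), "horse", 0.8)]
print(second_stage(dets, lambda h, o: "straddle"))
```

A one-stage model would instead emit <human, interaction, object> triplets directly from a single set of decoder queries, coupling instance localization and interaction classification in one pass.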
## 4.2 Pretraining, Hyperparameters, And Other Conditions

We used the official source code of the four models taken from the publicly-available repositories (URLs are listed in the References). The backbone (feature extraction) networks of all models were pretrained on the object detection task using the Microsoft COCO dataset (Lin et al., 2014) under the settings reported in the respective original papers. The feasibility of pretraining the encoder and decoder parts depends on the model structure. The FGAHOI results were obtained without pretraining the encoder and decoder because, as explained previously, the encoder and decoder of FGAHOI cannot be pretrained. For a fair comparison, we report the results of HOTR, QPIC, and STIP both with and without pretraining the encoder and decoder on the object detection task using the COCO dataset, although the original papers encouraged the use of pretrained weights to optimize the performance on the original HOI detection task.

Adopting the hyperparameters reported in the original papers and the official code repositories, we trained each model as described below.

For HOTR, ResNet-50 (He et al., 2016) was used as the backbone, the number of both encoder and decoder layers was set to 6, the number of attention heads was set to 8, the number of query embeddings was set to 300, the hidden dimension of embeddings in the Transformer was set to 256, and the dropout rate was set to 0.1. The model was trained for 100 epochs using the AdamW optimizer (Loshchilov & Hutter, 2019) with a batch size of 2, an initial learning rate of 10−5 for the backbone network and 10−4 for the other networks, and a weight decay of 10−4. Both learning rates decayed after 80 epochs.

For QPIC, ResNet-101 (He et al., 2016) was used as the backbone, the number of both encoder and decoder layers was set to 6, the number of attention heads was set to 8, the number of query embeddings was set to 100, the hidden dimension of embeddings in the Transformer was set to 256, and the dropout rate was set to 0.1. The model was trained for 150 epochs using the AdamW optimizer with a batch size of 16, an initial learning rate of 10−5 for the backbone network and 10−4 for the other networks, and a weight decay of 10−4. Both learning rates decayed after 100 epochs. The hyperparameters of the Hungarian costs and loss weights related to the bounding box were set 2.5 times larger than those unrelated to the bounding box.

For FGAHOI, Swin-Large∗+ (Liu et al., 2021) was used as the backbone, the number of both encoder and decoder layers was set to 6, the number of attention heads was set to 8, the number of query embeddings was set to 300, the hidden dimension of embeddings in the Transformer was set to 256, and dropout was not applied. The model was trained with the AdamW optimizer with a batch size of 16, an initial learning rate of 10−5 for the backbone network and 10−4 for the other networks, and a weight decay of 10−4. On HICO-DET and HICO-DET-SG, the base network was trained for 150 epochs and the learning rate was dropped from the 120th epoch during the first stage of training. Subsequent training was performed over 40 epochs with a learning rate drop at the 15th epoch. On V-COCO and V-COCO-SG, the base network was trained for 90 epochs and the learning rate was dropped from the 60th epoch during the first stage of training. Subsequent training was performed over 30 epochs with a learning rate drop at the 10th epoch.
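The three recipes above (and, with a single learning rate, the STIP recipe below) share one optimizer pattern: AdamW with a smaller learning rate for the pretrained backbone than for the rest of the network, plus a step decay at a fixed epoch. Below is a minimal PyTorch sketch of this setup, assuming (as in the DETR-family code bases) that backbone parameters can be identified by the substring "backbone" in their names; it is an illustration, not the models' actual training code:

```python
from torch import nn, optim

def build_optimizer(model: nn.Module, lr_backbone: float = 1e-5,
                    lr_other: float = 1e-4, weight_decay: float = 1e-4):
    """AdamW with separate learning rates for backbone vs. other parameters."""
    backbone, other = [], []
    for name, p in model.named_parameters():
        if p.requires_grad:
            (backbone if "backbone" in name else other).append(p)
    return optim.AdamW([{"params": backbone, "lr": lr_backbone},
                        {"params": other, "lr": lr_other}],
                       weight_decay=weight_decay)

# e.g., for the HOTR recipe: decay both learning rates after 80 of 100 epochs
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.1)
```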
For STIP, ResNet-50 (He et al., 2016) was used as the backbone, the number of both encoder and decoder layers was set to 6, the number of attention heads was set to 8, the number of query embeddings for the object detector was set to 100, the number of queries for the interaction decoder was set to 32, the hidden dimension of embeddings in the Transformer was set to 256, and the dropout rate was set to 0.1. The model was trained for 30 epochs using the AdamW optimizer with a batch size of 8 and a learning rate of 5 × 10−5.

We trained and tested the following seven types of models: HOTR, QPIC, FGAHOI, and STIP without pretraining the encoder and decoder, and HOTR, QPIC, and STIP with pretraining the encoder and decoder. Training and testing were performed once on each split. One training run required approximately 1-2 days using 4 NVIDIA V100 GPUs.

![7_image_0.png](7_image_0.png)

Figure 2: Evaluation results of the systematic generalization performances on (a) HICO-DET-SG and (b) V-COCO-SG data splits, and the in-distribution generalization performances on the original splits. The mAPs (%) on the test data (higher is better) are considerably lower for all models on both HICO-DET-SG and V-COCO-SG (dark blue) than on the original splits (pale blue) that evaluate the in-distribution generalization ability. The dark blue bars and the error bars represent the averages and the standard deviations, respectively, across the three distinct SG splits. "Not pretrained" denotes that the encoders and decoders of the models were trained from scratch, whereas "Pretrained" denotes that the initial encoder and decoder weights were copied from DETR trained on the object detection task using the Microsoft COCO dataset. The results of FGAHOI are reported only for the "Not pretrained" cases because the encoder and decoder of FGAHOI cannot be pretrained on the object detection task, as explained in Section 4. Further details of the evaluation results are given in Tables 3 and 4 in Appendix B.

## 5 Evaluation Results

This section reports the systematic generalization performances of the four representative HOI detection models (HOTR (Kim et al., 2021), QPIC (Tamura et al., 2021), FGAHOI (Ma et al., 2023), and STIP (Zhang et al., 2022)) evaluated on the HICO-DET-SG and V-COCO-SG. A qualitative inspection of failure cases is also presented.

## 5.1 Degradation In The Systematic Generalization Performance

Figure 2 compares the evaluation results on HICO-DET-SG and V-COCO-SG with those on the original splits. The evaluation metric is the mean average precision (mAP), which was adopted in the original papers of the four models. A high mAP indicates that a model is well-performing. For HICO-DET-SG and V-COCO-SG, the averages and standard deviations calculated over the three distinct splits are presented in each subfigure.

The mAPs of all models are considerably lower on both HICO-DET-SG and V-COCO-SG (dark blue) than on the original splits (pale blue) that evaluate the in-distribution generalization ability. The differences among the test mAPs on the three SG splits are less than 3 percentage points, regardless of the model and base dataset. This means that the performance of every model degraded substantially for any selection of object-interaction combinations in the train and test data. These results highlight the difficulty of systematic generalization in the HOI detection task, i.e., recognizing novel combinations of known objects and interactions.
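The reported numbers come from the official evaluation tools of each dataset. As a rough illustration of the metric only, the sketch below computes non-interpolated average precision for a single object-interaction class, assuming the common HOI matching criterion that a prediction is a true positive when both its human box and its object box overlap an as-yet-unmatched ground-truth pair with IoU ≥ 0.5; the mAP is the mean of this quantity over all object-interaction classes:

```python
import numpy as np

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def hoi_ap(preds, gts, iou_thr=0.5):
    """preds: list of (score, image_id, human_box, object_box);
    gts: dict mapping image_id -> list of (human_box, object_box) pairs."""
    n_gt = sum(len(v) for v in gts.values())
    matched = {img: [False] * len(v) for img, v in gts.items()}
    hits = []
    for score, img, h_box, o_box in sorted(preds, key=lambda p: -p[0]):
        best, best_j = 0.0, -1
        for j, (gt_h, gt_o) in enumerate(gts.get(img, [])):
            ov = min(iou(h_box, gt_h), iou(o_box, gt_o))  # both boxes count
            if ov > best:
                best, best_j = ov, j
        if best_j >= 0 and best >= iou_thr and not matched[img][best_j]:
            matched[img][best_j] = True  # each GT pair is matched at most once
            hits.append(1)
        else:
            hits.append(0)
    if n_gt == 0 or not hits:
        return 0.0
    hits = np.asarray(hits)
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    # each true positive contributes 1/n_gt of recall; sum precision there
    return float((precision * hits).sum() / n_gt)
```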
![8_image_0.png](8_image_0.png)

Figure 3: Three failure cases of STIP with the pretrained encoder and decoder after training and testing on HICO-DET-SG split3. (a) An example of predicting the wrong interaction class. The model predicted the interaction as **straddle**, although the correct class is **wash**. (b) An example of detecting the wrong object. The model predicted an irrelevant region as the wrong class bench, although it should detect a bed under the person. (c) An example of wrong class prediction of both object and interaction. The model predicted the <human, hit, baseball bat> triplet although the correct answer is <human, **swing**, tennis racket>.

Further details of the evaluation results are given in Appendix B.

## 5.2 Qualitative Inspection Of Failure Cases

To further reveal the difficulty of systematic generalization in HOI detection, we inspected the failure cases. Figure 3 shows the outputs of STIP with the pretrained encoder and decoder trained and tested on HICO-DET-SG split3, which achieved the highest mAP among all models on all SG splits.

Figure 3 (a) shows an example of predicting the wrong interaction class, the most frequently observed error type. In this example, the model predicts the interaction as **straddle**, although the correct class is **wash**. The <human, **straddle**, horse> triplet appears in the train data, but the <human, **wash**, horse> triplet appears only in the test data (the **wash** interaction appears with other objects in the train data). The model appears to predict the interaction from the object class (horse) alone and cannot generalize to the novel combination of <human, **wash**, horse>.

Figure 3 (b) shows an example of detecting the wrong object. The model is supposed to detect a bed under the person in the image but instead predicts an irrelevant region of the wrong class, bench. The <human, **lie on**, bench> triplet appears in the train data, but the <human, **lie on**, bed> triplet appears only in the test data (bed appears with other interactions in the train data). Besides being unable to generalize to the novel combination, the model appears to predict the interaction (through its interaction decoder) mainly from visual cues of the human posture rather than from visual cues of the object or the positional relationships.

Figure 3 (c) shows an example of wrong class prediction of both the object and the interaction. The model predicted the tennis racket as a baseball bat and **swing** as hit. The <human, hit, baseball bat> triplet appears in the train data, but the <human, **swing**, tennis racket> triplet appears only in the test data; moreover, the train data include the <human, **swing**, baseball bat> triplet but not the <human, hit, tennis racket> triplet. The model detected the object as a baseball bat at the first stage. Based on the detection result, the interaction was predicted as hit at the second stage, most likely because the baseball bat frequently appeared with the hit interaction in the train data.

## 6 Discussion

## 6.1 Comparison Of The Results On HICO-DET-SG And V-COCO-SG

Comparing the results on HICO-DET-SG and V-COCO-SG, we find that the performance gap between the original split and the systematic generalization split is smaller for HICO-DET than for V-COCO. This difference might reflect differences in the number of images and HOIs between the datasets. The HICO-DET-SG train data contains approximately 5.6 times as many images and 12.6 times as many HOIs as the train data of V-COCO-SG (Table 1).
In more detail, the variety of object-interaction combination classes is 3.4 times higher in the HICO-DET-SG train data than in the V-COCO-SG train data, and more examples per object-interaction combination exist in the HICO-DET-SG train data. In other computer vision tasks, increasing the diversity of the train data is known to improve the systematic generalization performance (Lake & Baroni, 2018; Bahdanau et al., 2019; D'Amario et al., 2021; Madan et al., 2022). The same trend might be expected in HOI detection.

## 6.2 Comparison Across Models

The models achieved different performances on the SG splits. Without encoder and decoder pretraining, HOTR completely failed to generalize, as evidenced by the nearly 0% mAP. Even with the encoder and decoder pretrained on the object detection task, the mAP of HOTR only improved to less than 3%. FGAHOI also underperformed on the SG splits, with mAPs of approximately 6% or less. In contrast, QPIC showed some generalizability to novel combinations, especially when using pretrained DETR weights: it achieved approximately 20% mAP on the HICO-DET-SG splits.

STIP with pretraining outperformed all other models on both the HICO-DET-SG and V-COCO-SG data splits. This superior performance might be attributed to the two-stage architecture of STIP, in which the instance and interaction detectors are independent and less affected by each other than in one-stage architectures. Supporting this inference, modular structures improved the systematic generalization ability of deep neural networks in other computer vision tasks (Purushwalkam et al., 2019; Bahdanau et al., 2019; 2020; D'Amario et al., 2021; Madan et al., 2022; Yamada et al., 2023; Kamata et al., 2023).

## 6.3 Importance Of Pretraining The Encoder And Decoder

Pretraining the encoder and decoder on the object detection task using the Microsoft COCO dataset improved the systematic generalization performances of HOTR, QPIC, and STIP. With pretraining, the mAP on HICO-DET-SG improved from 0.4% to 2.7% for HOTR, from 12.1% to 20.8% for QPIC, and from 0.0% to 23.2% for STIP. Likewise, the mAP on V-COCO-SG improved from around 0.2% to around 2.1% for HOTR, from around 0.6% to around 3.8% for QPIC, and from around 0.0% to 6.3% for STIP. Note that pretraining using the COCO dataset improved the systematic generalization performance not only on V-COCO-SG (which is based on the COCO dataset) but also on HICO-DET-SG.

In general, vision Transformers require a large amount of training data to achieve high performance when trained from scratch (Khan et al., 2021). Therefore, without pretraining, it is natural that the Transformer-based HOI detection models perform poorly on in-distribution generalization and even more poorly on systematic generalization. The performance of STIP was particularly degraded without pretraining, possibly because the number of training epochs (30) was much smaller for STIP than for the other models (100). FGAHOI could not be evaluated with its encoder and decoder pretrained, because the network is designed only for the HOI detection task and it is hard to obtain pretrained weights from other tasks such as object detection. If we modified the final part of FGAHOI to enable training on other tasks, the systematic generalization performance of FGAHOI would presumably improve with the pretrained weights.
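In practice, the pretraining discussed above amounts to copying matching weights from a DETR checkpoint into the HOI model before fine-tuning on HOI detection. The sketch below illustrates such a weight transfer; it assumes the checkpoint stores its weights under a "model" key (as the official DETR checkpoints do), and the key-name filter is purely illustrative, since each official repository ships its own conversion logic:

```python
import torch

def init_from_detr(model, detr_ckpt_path):
    """Copy DETR backbone/encoder/decoder weights whose names and shapes
    match the HOI model; HOI-specific heads stay randomly initialized."""
    detr_state = torch.load(detr_ckpt_path, map_location="cpu")["model"]
    own_state = model.state_dict()
    transferred = {k: v for k, v in detr_state.items()
                   if k in own_state and own_state[k].shape == v.shape
                   and any(s in k for s in ("backbone", "encoder", "decoder"))}
    own_state.update(transferred)
    model.load_state_dict(own_state)
    return sorted(transferred)  # report which parameters were copied
```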
## 6.4 Toward The Improvement Of Systematic Generalization Performance In HOI Detection

Based on the above experimental results, we propose the following suggestions for improving the systematic generalization performance of future HOI detection models.

First, increasing the diversity of the train data (the variety of object-interaction combination classes) would likely improve the systematic generalization ability of HOI detection models. This possible solution is supported by the difference between the results on HICO-DET-SG and V-COCO-SG and by the results in other computer vision tasks (Lake & Baroni, 2018; Bahdanau et al., 2019; D'Amario et al., 2021; Madan et al., 2022) (see Subsection 6.1).

Second, introducing two-stage or other modular structures might improve the systematic generalization performance of HOI detection models. This suggestion is supported by the higher performance of the pretrained STIP compared with the other models and by the results in other computer vision tasks (Purushwalkam et al., 2019; Bahdanau et al., 2019; 2020; D'Amario et al., 2021; Madan et al., 2022; Yamada et al., 2023; Kamata et al., 2023) (see Subsection 6.2). Two-stage models have also shown their effectiveness for systematic generalization in Scene Graph Generation (Lu et al., 2016), a task closely related to HOI detection (see Section 2).

Third, our results also confirmed that pretraining the encoder and decoder improves the systematic generalization performance of HOI detection models (see Subsection 6.3). In this study, the encoder and decoder were initialized with the weights of DETR (Carion et al., 2020) trained on the object detection task using the Microsoft COCO dataset. Pretraining on other related tasks such as Scene Graph Generation might also improve the systematic generalization performance of HOI detection models.

In addition, previous work (Lu et al., 2016; Kan et al., 2021) has shown that integrating commonsense knowledge from external natural language resources is effective for improving the systematic generalization performance in Scene Graph Generation. This is the fourth direction we think is worth pursuing in HOI detection as well.

## 7 Conclusion

We created new splits of two HOI detection datasets, HICO-DET-SG and V-COCO-SG, whose train and test data consist of separate combinations of object-interaction classes, for evaluating the systematic generalization performance of HOI detection models. The test performances of representative HOI detection models were considerably worse on our SG splits than on the original splits, indicating that systematic generalization is a challenging goal in HOI detection. We also analyzed the results and presented four possible research directions for improving the systematic generalization performance. We hope that our new data splits and presented analysis will encourage further research on systematic generalization in HOI detection.

## Reproducibility Statement

The JSON files determining HICO-DET-SG and V-COCO-SG and the source code that created the files are publicly available at a GitHub repository. For the review, they can be accessed via the following anonymized repository: https://anonymous.4open.science/r/hoi_sg-58CE/. The URLs of other existing assets (datasets and source code) used in this study are provided in the References. The experimental setups for the performance evaluation of representative HOI detection models are described in Section 4.

## References

Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville.
Systematic generalization: What is required and can it be learned? In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/ forum?id=HkezXnA9YX. Dzmitry Bahdanau, Harm de Vries, Timothy J. O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron C. Courville. CLOSURE: Assessing systematic generalization of CLEVR models. arXiv preprint, arXiv:1912.05783v2, 2020. DOI 10.48550/arXiv.1912.05783. Federico Baldassarre, Kevin Smith, Josephine Sullivan, and Hossein Azizpour. Explanation-based weaklysupervised learning of visual relations with graph networks. In Proceedings of the 16th European Conference on Computer Vision (ECCV), pp. 612–630, 2020. DOI 10.1007/978-3-030-58604-1_37. Marco Baroni. Linguistic generalization and compositionality in modern artificial neural networks. *Philosophical Transactions of the Royal Society B: Biological Sciences*, 375(1791):20190307, 2020. DOI 10.1098/rstb.2019.0307. Leon Bergen, Timothy J. O'Donnell, and Dzmitry Bahdanau. Systematic generalization with edge transformers. In *Advances in Neural Information Processing Systems 34 (NeurIPS)*, pp. 1390–1402, 2021. URL https: //proceedings.neurips.cc/paper/2021/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Proceedings of the 16th European Conference on Computer Vision (ECCV), pp. 213–229, 2020. DOI 10.1007/978-3-030-58452-8_13. Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. HICO: A benchmark for recognizing human-object interactions in images. In *Proceedings of the 2015 IEEE/CVF International Conference on* Computer Vision (ICCV), pp. 1017–1025, 2015. DOI 10.1109/ICCV.2015.122. Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In Proceedings of the 2018 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 381–389, 2018. DOI 10.1109/WACV.2018.00048 The HICO-DET dataset is publicly available at http://www-personal.umich.edu/~ywchao/hico/ (Accessed on September 29th, 2022). Junwen Chen and Keiji Yanai. QAHOI: Query-based anchors for human-object interaction detection. arXiv preprint, arXiv:2112.08647, 2021. DOI 10.48550/arXiv.2112.08647. Mingfei Chen, Yue Liao, Si Liu, Zhiyuan Chen, Fei Wang, and Chen Qian. Reformulating HOI detection as adaptive set prediction. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9000–9009, 2021. DOI 10.1109/CVPR46437.2021.00889. Meng-Jiun Chiou, Chun-Yu Liao, Li-Wei Wang, Roger Zimmermann, and Jiashi Feng. ST-HOI: A spatialtemporal baseline for human-object interaction detection in videos. In Proceedings of the Workshop on Intelligent Cross-Data Analysis and Retrieval, pp. 9–17, 2021. DOI 10.1145/3463944.3469097. Vanessa D'Amario, Tomotake Sasaki, and Xavier Boix. How modular should neural module networks be for systematic generalization? In *Advances in Neural Information Processing Systems 34* (NeurIPS), pp. 23374–23385, 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/ hash/c467978aaae44a0e8054e174bc0da4bb-Abstract.html. Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman. The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111:98–136, 2014. DOI 10.1007/s11263-014-0733-5. 
Jerry A. Fodor and Zenon W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71, 1988. DOI 10.1016/0010-0277(88)90031-5. Chen Gao, Yuliang Zou, and Jia-Bin Huang. iCAN: Instance-centric attention network for human-object interaction detection. In *Proceedings of the 29th British Machine Vision Conference (BMVC)*, 2018. DOI 10.48550/arXiv.1808.10437. Chen Gao, Jiarui Xu, Yuliang Zou, and Jia-Bin Huang. DRG: Dual relation graph for human-object interaction detection. In *Proceedings of the 16th European Conference on Computer Vision (ECCV)*, 2020. DOI 10.1007/978-3-030-58610-2_41. Georgia Gkioxari, Ross Girshick, Piotr Dollár, and Kaiming He. Detecting and recognizing human-object interactions. In *Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition* (CVPR), pp. 8359–8367, 2018. DOI 10.1109/CVPR.2018.00872. Chunhui Gu, Chen Sun, David A. Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6047–6056, 2018. DOI 10.1109/CVPR.2018.00633. Saurabh Gupta and Jitendra Malik. Visual semantic role labeling. arXiv preprint, arXiv:1505.04474, 2015. DOI 10.48550/arXiv.1505.04474 The V-COCO dataset is publicly available at https://github.com/s-gupta/v-coco (Accessed on September 29th, 2022). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. DOI 10.1109/CVPR.2016.90. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8320–8329, 2021. DOI 10.1109/ICCV48922.2021.00823. Joy Hsu, Jiayuan Mao, and Jiaju Wu. DisCo: Improving compositional generalization in visual reasoning through distribution coverage. *Transactions on Machine Learning Research*, 2022. URL https://openreview.net/forum?id=EgHnKOLaKW. Zhong Ji, Xiyao Liu, Yanwei Pang, Wangli Ouyang, and Xuelong Li. Few-shot human-object interaction recognition with semantic-guided attentive prototypes network. *IEEE Transactions on Image Processing*, 30:1648–1661, 2021. DOI 10.1109/TIP.2020.3046861. Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei-Fei. Image retrieval using scene graphs. In *Proceedings of the 2015 IEEE/CVF Conference on Computer Vision* and Pattern Recognition (CVPR), pp. 3668–3678, 2015. DOI 10.1109/CVPR.2015.7298990. Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1988–1997, 2017. DOI 10.1109/CVPR.2017.215. Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sangeetha. AMMUS : A survey of transformer-based pretrained models in natural language processing. 
arXiv preprint, arXiv:2108.05542, 2021. DOI 10.48550/arXiv.2108.05542. Yuichi Kamata, Moyuru Yamada, and Takayuki Okatani. Self-Modularized Transformer: Learn to modularize networks for systematic generalization. In *Proceedings of the 18th International Joint Conference on* Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, pp. 599–606, 2023. DOI 10.5220/0011682100003417. Xuan Kan, Hejie Cui, and Carl Yang. Zero-shot scene graph relation prediction through commonsense knowledge integration. In *Proceedings of the 2021 European Conference on Machine Learning and Principles* and Practice of Knowledge Discovery in Databases (ECML PKDD), pp. 466–482, 2021. DOI 10.1007/9783-030-86520-7_29. Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. *ACM Computing Survey*, 54(10s):1–41, 2021. DOI 10.1145/3505244. Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, and Hyunwoo J. Kim. HOTR: End-to-end humanobject interaction detection with transformers. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 74–83, 2021. DOI 10.1109/CVPR46437.2021.00014 The official source code of HOTR is publicly available at https://github.com/kakaobrain/HOTR (Accessed on September 29th, 2022). Najoung Kim and Tal Linzen. COGS: A compositional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 9087–9105, 2020. DOI 10.18653/v1/2020.emnlp-main.731. Hema Swetha Koppula, Rudhir Gupta, and Ashutosh Saxena. Learning human activities and object affordances from RGB-D videos. *International Journal of Robotics Research*, 32(8):951–970, 2013. DOI 10.1177/0278364913478446. Brenden M. Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 2873–2882, 2018. URL http://proceedings.mlr.press/v80/lake18a.html. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. *Behavioral and Brain Sciences*, 40:e253, 2017. DOI 10.1017/S0140525X16001837. Brenden M. Lake, Tal Linzen, and Marco Baroni. Human few-shot learning of compositional instructions. In Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci), pp. 611–617, 2019. URL https://dblp1.uni-trier.de/rec/conf/cogsci/LakeLB19.html. Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yanfeng Wang, and Cewu Lu. Transferable interactiveness knowledge for human-object interaction detection. In *Proceedings of the 2019* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3580–3589, 2019. DOI 10.1109/CVPR.2019.00370. Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, and Cewu Lu. HOI analysis: Integrating and decomposing human-object interaction. In *Advances in Neural Information Processing Systems 33 (NeurIPS)*, pp. 5011–5022, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ 3493894fa4ea036cfc6433c3e2ee63b0-Abstract.html. Yue Liao, Si Liu, Fei Wang, Yanjie Chen, Chen Qian, and Jiashi Feng. PPDM: Parallel point detection and matching for real-time human-object interaction detection. In *Proceedings of the 2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 482–490, 2020. 
DOI 10.1109/CVPR42600.2020.00056. Yue Liao, Aixi Zhang, Miao Lu, Yongliang Wang, Xiaobo Li, and Si Liu. GEN-VLKT: Simplify association and enhance interaction understanding for HOI detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 20123–20132, 2022. DOI 10.1109/CVPR52688.2022.01949. Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. A survey of transformers. *AI Open*, 3:111–132, 2022. DOI 10.1016/j.aiopen.2022.10.001. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision (ECCV), pp. 740–755, 2014. DOI 10.1007/978-3-319-10602-1_48. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9992–10002, 2021. DOI 10.1109/ICCV48922.2021.00986. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/ forum?id=Bkg6RiCqY7. Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language priors. In *Proceedings of the 14th European Conference on Computer Vision (ECCV)*, pp. 852–869, 2016. DOI 10.1007/978-3-319-46448-0_51. Shuailei Ma, Yuefeng Wang, Shanze Wang, and Ying Wei. FGAHOI: Fine-grained anchors for human-object interaction detection. arXiv preprint, arXiv:2301.04019, 2023. DOI 10.48550/arXiv.2301.04019 The official source code of FGAHOI is publicly available at https://github.com/xiaomabufei/FGAHOI (Accessed on March 23th, 2023). Spandan Madan, Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, Frédo Durand, Hanspeter Pfister, and Xavier Boix. When and how convolutional neural networks generalize to out-of-distribution category–viewpoint combinations. *Nature Machine Intelligence*, 4(2):146–153, 2022. DOI 10.1038/s42256-021-00437-5. Gary F. Marcus. *The algebraic mind: Integrating connectionism and cognitive science*. MIT press, 2001. Senthil Purushwalkam, Maximillian Nickel, Abhinav Gupta, and Marc'Aurelio Ranzato. Task-driven modular networks for zero-shot compositional learning. In *Proceedings of the 2019 IEEE/CVF International* Conference on Computer Vision (ICCV), pp. 3592–3601, 2019. DOI 10.1109/ICCV.2019.00369. Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, and Brenden M. Lake. A benchmark for systematic generalization in grounded language understanding. In *Advances in Neural Information* Processing Systems 33 (NeurIPS), 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ e5a90182cc81e12ab5e72d66e0b46fe3-Abstract.html. Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, and Peng Cui. Towards out-of-distribution generalization: A survey. arXiv preprint, arxiv.2108.13624, 2021. DOI 10.48550/arXiv:2108.13624. Paul Smolensky, Richard Thomas McCoy, Roland Fernandez, Matthew Goldrick, and Jianfeng Gao. Neurocompositional computing: From the central paradox of cognition to a new generation of AI systems. AI Magazine, 43(3):308–322, 2022. DOI 10.1002/aaai.12065. Masato Tamura, Hiroki Ohashi, and Tomoaki Yoshinaga. QPIC: Query-based pairwise human-object interaction detection with image-wide contextual information. 
In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10405–10414, 2021. DOI 10.1109/CVPR46437.2021.01027 The official source code of QPIC is publicly available at https://github.com/hitachi-rd-cv/qpic (Accessed on September 29th, 2022). Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. Unbiased scene graph generation from biased training. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3713–3722, 2020. DOI 10.1109/CVPR42600.2020.00377. Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, and Anton van den Hengel. On the value of out-of-distribution testing: An example of Goodhart's law. In *Advances in Neural* Information Processing Systems 33 (NeurIPS), pp. 407–417, 2020. URL https://proceedings.neurips. cc/paper/2020/hash/045117b0e0a11a242b9765e79cbf113f-Abstract.html. Shimon Ullman, Liav Assif, Alona Strugatski, Ben-Zion Vatashsky, Hila Levi, Aviv Netanyahu, and Adam Uri Yaari. Image interpretation by iterative bottom-up top-down processing. Technical Report CBMM Memo No. 120, Center for Brains, Minds and Machines, 2021. URL https://cbmm.mit.edu/publications/ image-interpretation-iterative-bottom-top-down-processing. Frank van der Velde, Gwendid T. van der Voort van der Kleij, and Marc de Kamps. Lack of combinatorial productivity in language processing with simple recurrent networks. *Connection Science*, 16(1):21–46, 2004. DOI 10.1080/09540090310001656597. Ivan I. Vankov and Jeffrey S. Bowers. Training neural networks to encode symbols enables combinatorial generalization. *Philosophical Transactions of the Royal Society B: Biological Sciences*, 375(1791):20190309, 2020. DOI 10.1098/rstb.2019.0309. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems 30 (NeurIPS)*, 2017. URL https://papers.nips.cc/paper_files/paper/2017/hash/ 3f5ee243547dee91fbd053c1c4a845aa-Abstract.html. Moyuru Yamada, Vanessa D'Amario, Kentaro Takemoto, Xavier Boix, and Tomotake Sasaki. Transformer Module Networks for systematic generalization in visual question answering. Technical Report CBMM Memo No. 121, Ver.2, Center for Brains, Minds and Machines, 2023. URL https://cbmm.mit.edu/publications/ transformer-module-networks-systematic-generalization-visual-question-answering. Nanyang Ye, Kaican Li, Haoyue Bai, Runpeng Yu, Lanqing Hong, Fengwei Zhou, Zhenguo Li, and Jun Zhu. OoD-Bench: Quantifying and understanding two dimensions of out-of-distribution generalization. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7937–7948, 2022. DOI 10.1109/CVPR52688.2022.00779. Aixi Zhang, Yue Liao, Si Liu, Miao Lu, Yongliang Wang, Chen Gao, and Xiaobo Li. Mining the benefits of two-stage and one-stage HOI detection. In *Advances in Neural Information Processing Systems 34 (NeurIPS)*, 2021. URL https://papers.nips.cc/paper_files/paper/2021/hash/ 8f1d43620bc6bb580df6e80b0dc05c48-Abstract.html. Yong Zhang, Yingwei Pan, Ting Yao, Rui Huang, Tao Mei, and Chang-Wen Chen. Exploring structure-aware transformer over interaction proposals for human-object interaction detection. In *Proceedings of the 2022* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19526–19535, 2022. 
DOI 10.1109/CVPR52688.2022.01894 The official source code of STIP is publicly available at https://github.com/zyong812/STIP (Accessed on September 29th, 2022).

Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable DETR: Deformable transformers for end-to-end object detection. In Proceedings of the 9th International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=gZ9hCDWe6ke.

Cheng Zou, Bohan Wang, Yue Hu, Junqi Liu, Qian Wu, Yu Zhao, Boxun Li, Chenguang Zhang, Chi Zhang, Yichen Wei, and Jian Sun. End-to-end human object interaction detection with HOI transformer. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11820–11829, 2021. DOI 10.1109/CVPR46437.2021.01165.

## A Further Details Of The Creation Process Of Systematic Generalization (SG) Splits

This Appendix explains the process of splitting the HICO-DET and V-COCO datasets for evaluating systematic generalization performance.

First, we decided the numbers of object-interaction combination classes in the train and test data. Initially, we attempted to match their ratio to the ratio of HOI triplets in the original train and test data, but eventually included more combination classes in the train data (540 in HICO-DET-SG and 160 in V-COCO-SG) to ensure that every object class is paired with multiple interaction classes and that every interaction class is paired with multiple object classes. This makes it possible for a model to learn the concepts of object/interaction themselves independently of the specific interaction/object paired in the train data.

We then created the SG splits as described in Algorithm 1. From the test data, we eliminated images containing the same object-interaction combination classes as the train data. Consequently, the SG splits contain fewer images and HOI triplets in total than the original datasets (Table 1).

```
flag = True
seed = 0
while flag do
    test_combinations = SELECT(all_combinations, seed)
    train_data = []
    test_data = []
    for scene in dataset do
        sum = 0
        test_hois = []
        for hoi in scene.hois do
            match = COUNT(test_combinations, [hoi.object_class, hoi.interaction_class])
            sum = sum + match
            if match > 0 then
                test_hois.append(hoi)
            end if
        end for
        if sum == length(scene.hois) then
            test_data.append(scene)
        else if sum == 0 then
            train_data.append(scene)
        end if
    end for
    if VERIFY(train_data) then
        flag = False
    else
        seed += 1
    end if
end while
```

Algorithm 1: Creation of the systematic generalization (SG) splits. Remark: "test_combinations" is a list of object-interaction combination classes to be contained only in the test data, "all_combinations" is a list of all combinations of object-interaction classes in the original dataset, "SELECT" is the function that randomly selects "test_combinations" from "all_combinations" with a specified seed, "dataset" represents the set of images in the original dataset (the whole of the train and test data) and their annotations, and "scene" represents a set of one image and the HOI triplets in the image. The "COUNT" function returns the number of components in the first argument which are equal to the second argument. The "VERIFY" function verifies that every individual object and interaction class is contained in the train data, that every object class is paired with multiple interaction classes, and that every interaction class is paired with multiple object classes.
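For readers who prefer running code, below is a minimal Python rendering of Algorithm 1. The data structures are simplified for illustration: each scene is a dict with a "hois" list of (object_class, interaction_class) pairs, "all_combinations" is a list of such pairs, and `verify` implements only the pairing conditions of VERIFY (the full check also requires that every individual object and interaction class occurs in the train data):

```python
import random
from collections import defaultdict

def verify(train_data):
    """Simplified VERIFY: every object class must pair with at least two
    interaction classes, and vice versa."""
    obj2acts, act2objs = defaultdict(set), defaultdict(set)
    for scene in train_data:
        for obj, act in scene["hois"]:
            obj2acts[obj].add(act)
            act2objs[act].add(obj)
    return (all(len(a) >= 2 for a in obj2acts.values())
            and all(len(o) >= 2 for o in act2objs.values()))

def create_sg_split(dataset, all_combinations, n_test_classes, seed=0):
    """SELECT test combinations, then assign each scene to train or test;
    scenes mixing train-only and test-only combinations are discarded."""
    while True:
        rng = random.Random(seed)
        test_combinations = set(rng.sample(all_combinations, n_test_classes))
        train_data, test_data = [], []
        for scene in dataset:
            hits = sum((obj, act) in test_combinations
                       for obj, act in scene["hois"])
            if hits == len(scene["hois"]):
                test_data.append(scene)   # only held-out combinations
            elif hits == 0:
                train_data.append(scene)  # only training combinations
        if verify(train_data):
            return train_data, test_data
        seed += 1  # retry with the next seed, as in Algorithm 1
```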
When creating another distinct split, the initial seed is set to a value larger than the seed used for creating the existing split. Finally, we verified that every individual object and interaction class is contained in the train data, while certain object–interaction combinations are contained only in the test data. To ensure that the test performance is not an artifact of a specific selection of object-interaction combination classes in the train and test data, we prepared three distinct train-test splits for each of HICO-DET-SG and V-COCO-SG.

The actual source code that created the SG splits is publicly available at the following anonymized repository for review: https://anonymous.4open.science/r/hoi_sg-58CE/.

## B Further Details Of The Evaluation Results

To ensure that the models were well-trained, we evaluated the performances of the models on the train data before the evaluation on the test data. The train-data parts of Table 3 and Table 4 display the mAPs on the train data of HICO-DET-SG and V-COCO-SG, respectively. For both datasets and for all the models, the mAPs on the train data of the SG splits are comparable to or slightly higher than those on the original splits. This is probably attributable to the lower variety of triplets in the train data of the SG splits: HICO-DET-SG and V-COCO-SG contain 540 and 160 object-interaction combinations, respectively, out of the 600 and 228 combinations contained in the original datasets. The test-data parts of these tables represent the raw data used for constructing Figure 2.

Table 3: Further details of the systematic generalization performances evaluated on the HICO-DET-SG data splits. The mAPs (%) for both train and test data are given. The test-data part of the table represents the raw data used for constructing Figure 2 (a). The values in brackets are those reported in the original papers. The term "pretraining" means that the initial weights of the model's encoder and decoder were copied from DETR (Carion et al., 2020) trained on object detection using the Microsoft COCO dataset (Lin et al., 2014). The results of FGAHOI are reported only for the non-pretrained cases due to the reasons given in Section 4. The test performances of all models are considerably lower on HICO-DET-SG compared to the original HICO-DET.
(✓ = with pretraining, ✗ = without pretraining)

Evaluation on train data (reference):

| mAP (%) | HOTR ✗ | HOTR ✓ | QPIC ✗ | QPIC ✓ | FGAHOI ✗ | STIP ✗ | STIP ✓ |
|---|---|---|---|---|---|---|---|
| SG split1 | 33.92 | 46.03 | 34.05 | 49.70 | 58.97 | 13.69 | 33.17 |
| SG split2 | 31.48 | 42.04 | 30.23 | 48.28 | 55.95 | 15.13 | 34.44 |
| SG split3 | 32.05 | 44.91 | 35.54 | 47.90 | 54.72 | 13.25 | 30.61 |
| Average | 32.48 | 44.33 | 33.30 | 48.63 | 56.55 | 14.02 | 32.74 |
| Original HICO-DET | 30.54 | 44.80 | 33.29 | 46.28 | 55.82 | 13.47 | 32.23 |

Evaluation on test data (main):

| mAP (%) | HOTR ✗ | HOTR ✓ | QPIC ✗ | QPIC ✓ | FGAHOI ✗ | STIP ✗ | STIP ✓ |
|---|---|---|---|---|---|---|---|
| SG split1 | 0.33 | 2.81 | 11.23 | 21.94 | 6.14 | 0.00 | 24.53 |
| SG split2 | 0.40 | 2.97 | 12.87 | 19.98 | 5.93 | 0.00 | 24.14 |
| SG split3 | 0.62 | 3.01 | 15.43 | 20.58 | 6.83 | 0.00 | 26.19 |
| Average | 0.45 | 2.93 | 13.18 | 20.83 | 6.30 | 0.00 | 24.95 |
| Original HICO-DET | 17.63 | 26.30 | 21.70 | 29.59 | 36.91 | 13.21 | 31.57 |
| (mAP in literature) | | (25.73) | | (29.90) | (37.18) | | (32.22) |

Table 4: Further details of the systematic generalization performances evaluated on the V-COCO-SG data splits. The mAPs (%) for both train and test data are given. The test-data part of the table represents the raw data used for constructing Figure 2 (b). The values in brackets are those reported in the original papers. The term "pretraining" means that the initial weights of the model's encoder and decoder were copied from DETR (Carion et al., 2020) trained on object detection using the Microsoft COCO dataset (Lin et al., 2014). The results of FGAHOI are reported only for the non-pretrained cases due to the reasons given in Section 4. The test performances of all models are considerably lower on V-COCO-SG compared to the original V-COCO.

Evaluation on train data (reference):

| mAP (%) | HOTR ✗ | HOTR ✓ | QPIC ✗ | QPIC ✓ | FGAHOI ✗ | STIP ✗ | STIP ✓ |
|---|---|---|---|---|---|---|---|
| V-COCO-SG split1 | 30.57 | 65.79 | 31.24 | 67.25 | 67.49 | 23.51 | 71.91 |
| V-COCO-SG split2 | 31.53 | 67.28 | 32.53 | 68.43 | 69.28 | 20.04 | 74.38 |
| V-COCO-SG split3 | 28.21 | 61.07 | 29.83 | 60.47 | 63.24 | 22.41 | 73.43 |
| Average over SG splits | 30.10 | 64.71 | 31.20 | 65.38 | 66.67 | 21.99 | 73.24 |
| Original V-COCO | 28.23 | 64.72 | 30.61 | 65.63 | 64.27 | 19.10 | 72.89 |

Evaluation on test data (main):

| mAP (%) | HOTR ✗ | HOTR ✓ | QPIC ✗ | QPIC ✓ | FGAHOI ✗ | STIP ✗ | STIP ✓ |
|---|---|---|---|---|---|---|---|
| V-COCO-SG split1 | 0.31 | 2.23 | 1.12 | 4.22 | 4.23 | 0.17 | 6.87 |
| V-COCO-SG split2 | 0.29 | 3.02 | 0.68 | 4.85 | 4.84 | 0.00 | 6.27 |
| V-COCO-SG split3 | 0.38 | 2.13 | 0.39 | 2.96 | 3.99 | 0.00 | 6.91 |
| Average over SG splits | 0.33 | 2.46 | 0.73 | 4.08 | 4.35 | 0.06 | 6.68 |
| Original V-COCO | 24.26 | 62.54 | 27.64 | 63.41 | 61.57 | 18.43 | 70.43 |
| (mAP in literature) | | (63.8) | | (61.0) | (61.2) | | (70.7) |
Review 1:

Summary:
This paper presents several new data splits for the Human-Object Interaction (HOI) detection task, which aim to evaluate HOI models' ability to detect novel (unseen) object-interaction combinations. The paper shows that SOTA HOI models perform much worse on novel combinations than on the original test split. It conducts comprehensive comparisons and analysis, and provides some insights about possible future research directions.

Strengths and Weaknesses:

Strengths:
- This paper contributes new data splits for the HICO-DET and V-COCO datasets, which can be used for evaluating zero-shot (object-interaction combination) HOI detection ability.
- This paper is easy to follow and the compared HOI methods are well-illustrated. The comparative experiments are well-organized and the analysis is comprehensive.

Weaknesses:
- The novelty of this paper is limited and the survey of related works is insufficient. This paper is **definitely not the first one** to present new HOI data splits of novel (unseen) object-interaction combinations.
  - Hou et al. [1] proposed a visual compositional learning method that composes new interaction features (e.g., ride-horse) from known interaction concepts (e.g., feed-horse and ride-bicycle), improving zero-shot HOI detection performance.
  - Recent work [2] provided a new split of the HICO-DET and V-COCO datasets with novel object-interaction combinations. The authors of [2] even annotated extra possible object-interaction combinations; e.g., they extend the number of object-interaction concepts to 1,681 on the HICO-DET dataset (compared to the original 600 concepts) and to 401 object-interaction concepts on the V-COCO dataset (originally 228 concepts).
  - SWiG-HOI [3] is a larger dataset than HICO-DET and V-COCO, and it is not surveyed or discussed in this paper. It contains 1,000 object classes and 406 action (interaction) classes, which can produce more object-interaction combinations than HICO-DET and V-COCO (and consequently more novel/unseen combinations in its test split).
- The contribution of this paper is limited. First, creating new splits of HOI datasets is a relatively simple process. Moreover, this paper only compares several SOTA HOI methods on the proposed new splits of HOI datasets. It does not introduce any approach for improving the zero-shot combination detection ability (or the systematic generalization ability, as it is called in the paper).
- The insights about improving systematic generalization (which are claimed as contributions) are somewhat trivial, e.g., increasing the diversity of the training data and pre-training the encoder & decoder on object detection tasks can improve performance. These seem to be common knowledge in the HOI detection community.

[1] Hou, Zhi, et al. "Visual compositional learning for human-object interaction detection." ECCV 2020.
[2] Hou, Zhi, Baosheng Yu, and Dacheng Tao. "Discovering human-object interaction concepts via self-compositional learning." ECCV 2022.
[3] Wang, Suchen, et al. "Discovering human interactions with large-vocabulary objects via query and multi-scale detection." ICCV 2021.

Requested Changes:
- A more comprehensive survey is suggested. More related work should be considered and included in this paper.
- Given the fact that existing HOI models perform much more poorly on test splits with novel combinations, it would be better to introduce some inspiring methods to improve the zero-shot detection ability for novel object-interactions (with comparisons vs. existing HOI models).
Broader Impact Concerns: NA.

==================================================

Review 2:

Summary:
This paper proposes open benchmarks, termed HICO-DET-SG and V-COCO-SG, for investigating the systematic generalization of human-object interaction (HOI) detection, based on the HICO-DET and V-COCO datasets. The authors also evaluate some representative HOI methods on the newly proposed benchmarks with and without additional pretraining data. Lastly, the authors introduce four possible directions for the HOI systematic generalization topic.

Strengths and Weaknesses:

Strengths:
**1**: The proposed benchmarks are practically useful, since the generalization of object and interaction classes is very important when applying HOI methods.
**2**: The four derived directions for improving the systematic generalization performance of HOI methods could benefit researchers.

Weaknesses:
**1**: Limited analysis of the proposed benchmarks. The authors only focus on the number of images and classes, as shown in Table 1. However, the distribution of interaction triplets is very important for this topic. For example, is there a long-tailed distribution? What is the distribution like for objects or interactions? What is the distribution of relations between objects and interactions? Since the proposed benchmark is regarded as the most important contribution of this paper, I think adding a more in-depth analysis is necessary.
**2**: Limited evaluation. This paper misses some newly developed HOI methods based on vision-language models, such as RLIPv1 [1] and RLIPv2 [2]. Language guidance can benefit from large-scale pre-trained vision-language models and use more flexible text guidance to solve the HOI problem, especially for the systematic generalization topic that this paper mainly addresses. Therefore, I encourage the authors to add an analysis of vision-language-based methods on the proposed benchmarks.

minor:
**1**: I find that the results comparing different methods in Table 2 and Figure 2 convey similar conclusions, i.e., existing methods perform well on existing benchmarks but not on the proposed benchmarks. So, I question the necessity of Table 2.
**2**: I find Section 4.2 (introducing the hyperparameters of existing methods) not helpful for understanding the importance of the SG split datasets. I think it would be better if the experimental part focused on evaluating the proposed benchmarks (different datasets and splits). The setup part can be moved to the appendix.

Overall, I suggest the authors reorganize the experimental part and convey more valuable insights into the systematic generalization problem of HOI.

[1] Yuan, Hangjie, et al. "RLIP: Relational language-image pre-training for human-object interaction detection." NeurIPS 2022.
[2] Yuan, Hangjie, et al. "RLIPv2: Fast Scaling of Relational Language-Image Pre-Training." ICCV 2023.

Requested Changes: Please see the weaknesses above.

Broader Impact Concerns: This paper focuses on evaluating HOI methods. Potential bias in the dataset may mislead the usage of HOI methods.

==================================================

Review 3:

Summary:
This paper presents new data splits for the HOI detection datasets HICO-DET and V-COCO. The key idea is to avoid training and testing on the same object-interaction pairs, which previous splits allowed. The new splits more convincingly evaluate the generalization capability of HOI models.
The results show that model accuracies on the new splits are much lower, which is surprising and informative.

Strengths and Weaknesses:
This paper is well written, the work is well motivated, and the results are interesting. Although I am not working in this area, I believe the work is useful to the community. I appreciate the section summarizing potential ways to improve results, although none of these are novel (and are not claimed to be so).

A downside of the paper is that the contributions are quite narrow. Simplicity is good, but narrowness may mean limited interest. Still, for the people working on HOI (and particularly working on HICO-DET and V-COCO), I think the interest will be high.

Requested Changes:
"no open benchmarks or evaluation studies have been published for the systematic generalization in HOI detection" — This might be true, but are the authors aware of Bongard-HOI (CVPR 2022)? It's a different problem setup, but it does aim to measure generalization by separating object-interaction pairs between train and test, as done here.

Typo: HOI detection models are required systematic generalization -> HOI detection models require systematic generalization

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Reject

Comment: This paper presents an effort to improve Human-Object Interaction (HOI) detection models by proposing new data splits, HICO-DET-SG and V-COCO-SG, for evaluating systematic generalization performance. However, the paper falls short in several key areas (please see the detailed breakdown above). Firstly, the novelty of the approach is limited, as creating new data splits is a relatively straightforward task and does not constitute a significant advancement in the field. Secondly, the paper lacks comprehensive validation on larger, more diverse benchmarks, which is crucial for establishing the effectiveness and generalizability of the proposed method. Additionally, the analysis presented is somewhat narrow, lacking in-depth comparisons and insights that could strengthen the paper's contributions. Given these limitations, the paper does not meet the high standards of innovation and thoroughness expected by TMLR.

==================================================
# Deciphering Attention Mechanisms: Optimization And Fenchel Dual Solutions

Anonymous authors

Paper under double-blind review

## Abstract

Attention has been widely adopted in many state-of-the-art deep learning models. While the significant performance improvements it brings have attracted great interest, the theoretical understanding of attention remains limited. This paper presents a new perspective on understanding attention by showing that it can be seen as a solver of a family of estimation problems. Specifically, we explore a convex optimization problem central to many estimation tasks prevalent in the development of deep learning architectures. Instead of solving this problem directly, we address its Fenchel dual and derive a closed-form approximation of the optimal solution. This approach results in a generalized attention framework, with the popular dot-product attention used in transformer networks being a special case. We show that the T5 transformer has implicitly adopted the general form of the solution by demonstrating that this expression unifies the word mask and the positional encoding functions. Finally, we discuss how these new attention structures can be practically applied in model design and argue that the underlying convex optimization problem offers a principled justification for the architectural choices in attention mechanisms.

## 1 Introduction

Attention-based deep neural networks are now integrated into cutting-edge language models that have revolutionized a broad range of tasks: machine translation (Bahdanau et al., 2014; Luong et al., 2015), sentiment classification (Wang et al., 2016), image captioning (Xu et al., 2015), and unsupervised representation learning (Devlin et al., 2019). In particular, attention plays a pivotal role in the construction of the transformer architecture (Vaswani et al., 2017), which has had a profound impact on the deep learning field. Despite great empirical success, the design principle of attention has not been well studied in the literature, and there is no in-depth understanding of why attention-based models (e.g., BERT (Devlin et al., 2019)) perform significantly better than other models. This gap in understanding limits practitioners' ability to effectively employ attention layers, posing challenges in developing new attention-based architectures.

In this paper, we offer a new perspective for understanding attention by showing that it is in fact a solver for a certain type of optimization problem that corresponds to an inference task. We give several examples, all of which can be characterized as follows: given 1) an unreliable estimate of the mean of an unknown distribution $p$ on $\mathbb{R}^d$ and 2) a preference distribution $u$ on $\mathbb{R}^d$ encoding beliefs on $p$'s selection, the inference task is to obtain a better estimate of $p$'s mean from its unreliable estimate and $u$. We derive a convex optimization problem that is abstracted from the task and solve it by instead solving its Fenchel dual (Rockafellar, 1970, p. 104). Remarkably, the derived expression for the improved estimate of $p$'s mean gives a generalized attention structure whose special case is equivalent to the popular dot-product attention (Luong et al., 2015) that is also applied in the transformer network (Vaswani et al., 2017). In addition, we show that our generalized attention expression has been implicitly adopted by the T5 transformer (Raffel et al., 2020), as the expression unifies the concept of word masks and its positional encoding functions.
Further examples are given to show how the generalized attention structures can be used in practice, and a novel optimal transport (OT)-based attention is derived to show how our framework helps develop more general attention structures. Additionally, experiments are performed that validate our theoretical work.

## 2 Related Work

Since 2019, several authors have investigated the properties and working mechanisms of attention. This line of work mainly addresses whether the attention mechanism can serve as a proxy for saliency (Michel et al., 2019; Voita et al., 2019; Jain & Wallace, 2019; Wiegreffe & Pinter, 2019; Serrano & Smith, 2020; Vashishth et al., 2020). Most of these works obtain insights into the attention mechanism through empirical studies. The related methods include analyzing the behaviours of trained attention-based models (Clark et al., 2019), pruning a few heads and analyzing the effects of altering the attention weights (Michel et al., 2019; Voita et al., 2019), or a mixture of these (Jain & Wallace, 2019; Vashishth et al., 2020).

Beyond empirical understanding, theoretical results by Brunner et al. (2019) and Hahn (2020) indicate that self-attention layers are not identifiable, meaning multiple combinations of attention weights can yield equally good predictions. This non-uniqueness complicates interpretability. Additionally, Tsai et al. (2019) reformulated attention using kernel theory, showing it can be viewed as applying a kernel smoother over the inputs. Recent studies have also explored the expressivity of attention (Dong et al., 2021; Baldi & Vershynin, 2022; Mahdavi et al., 2024). To understand the underpinning inductive bias of attention, Sahiner et al. (2022) investigated convex relaxations through the lens of convex duality by replacing the softmax function with element-wise nonlinear functions. While our work views the problem through a similar lens, our framework covers the unaltered attention architecture and focuses more on the design motivation of attention and its generalization.

Another important approach to understanding attention is to analyze its asymptotic behaviour when the number of heads and the network width approach infinity (Yang, 2019; Hron et al., 2020). In this limit, the entire network behaves as a Gaussian process (Lee et al., 2018), allowing for closed-form characterizations not available in the finite regime. Since 2021, several theoretical works have explored attention outside this asymptotic regime. Lu et al. (2021) set up a simple attention-based classification model and derived a closed-form relationship between a word's embedding norm and the product of its key and the query; they empirically show that such a relationship also exists in a more practical configuration. Similarly, Jelassi et al. (2022); Li et al. (2023); Deora et al. (2024) characterize optimization and generalization properties of gradient-descent training. Ramsauer et al. (2021) established an equivalence between attention and a newly proposed Hopfield network with continuous states, demonstrating that the new Hopfield network's update rule is equivalent to the attention mechanism used in transformers (Vaswani et al., 2017).

## 3 Setup Of A Design Problem

We consider a prediction task: given an input $X$, predict an output quantity $Y = (Y^{(1)}, Y^{(2)}, \ldots, Y^{(K)})$ consisting of $K$ components. We will present several machine-learning problems and show that they can be unified and abstracted into a mean estimation problem.
Specifically, the goal is to estimate the mean of a distribution $p$, given a prototype of $p$ and a noisy estimate of the discrepancy between their means. By framing the problem in this way, we can devise a unified convex optimization framework to address these various scenarios. The solutions derived under this framework yield attention-like structures, which can be used to tackle the original prediction tasks. Furthermore, by plugging in various functions for the closeness constraints, we recover the original dot-product attention (Sec 6) and derive a variant with added properties (Sec 9).

**Translation Problem (TP).** In this problem, the input $X$ is a sentence, or a sequence of words, in the source language. The output $Y$ is the sequence of words in the target sentence, where $Y^{(k)}$ is the $k$-th word.

**Image Captioning (IC).** In this problem, the input $X$ is a raw image and the output $Y$ is the sequence of words in the caption, where $Y^{(k)}$ is the $k$-th word.

**Filling in the Blanks Task (FB).** This task has been used to train the BERT model (Devlin et al., 2019). The input $X$ is a sequence of words with a certain percentage of words masked. The output $Y$ consists of the predicted masked words, where $Y^{(k)}$ denotes the $k$-th masked one.

The objective in any of these problems, and the one we address in this paper, is to learn a function $F$ mapping from the space of $X$ to the space of $Y$ so that $Y = F(X)$. We will denote by $F^{(k)}$ the part of $F$ responsible for predicting $Y^{(k)}$ (Fig 1a), namely, $Y^{(k)} = F^{(k)}(X)$. Although we here express $F$ as separate functions $(F^{(1)}, F^{(2)}, \ldots, F^{(K)})$, we note that different $F^{(k)}$'s may in fact share some components. Without loss of generality, we now focus on the design of $F^{(k)}$.

![2_image_0.png](2_image_0.png)

Figure 1: A conceptual graph of the deep learning model that we work with. The block $g^{(k)}$ is the one we will investigate. (a) shows the general structure of a sequence generation model, with $F^{(k)}$ responsible for the $k$-th output. Our focus is on the architecture in (b), where $F^{(k)}$ contains a component $g^{(k)}$ that infers a distribution's mean $\mathbf{h}^{(k)}$ based on its noisy estimations from two aspects: its preference (prior) distribution $u^{(k)}$ and a noisy estimation of its mean shift $\mathbf{z}^{(k)}$ from $u^{(k)}$'s. We show that $g^{(k)}$ should implement the expression in (c), which includes the dot-product attention as a special case (Luong et al., 2015).

## 3.1 The Design Problem

In deep learning research, a typical approach to solving the three running tasks is to first use a neural network to extract vector representations $\{\mathbf{t}_1^{(k)}, \mathbf{t}_2^{(k)}, \ldots, \mathbf{t}_M^{(k)}\} \subseteq \mathbb{R}^d$ of $X$, which are referred to as templates. Collectively, we will denote the set $\{\mathbf{t}_1^{(k)}, \mathbf{t}_2^{(k)}, \ldots, \mathbf{t}_M^{(k)}\}$ of templates by $\mathbf{T}^{(k)}$.¹ (If $X$ is a sequence of words, typical choices of the neural network include RNNs, LSTMs, etc. If $X$ is an image, a typical choice is a CNN.) Let $\mathcal{A} \subseteq \mathbb{R}^d$ denote the space containing all templates. For each $Y^{(k)}$, some mechanism $g^{(k)}$ is needed to adaptively combine the representations of $X$ to obtain $\mathbf{h}^{(k)}$, which is then fed into a classifier $f_{\text{out}}^{(k)}$ to predict $Y^{(k)}$.

To obtain an idea of how to produce $\mathbf{h}^{(k)}$, consider the TP task, where $\mathbf{h}^{(k)}$ corresponds to a vector (also known as an embedding) representing the $k$-th word in the target sentence, and $\mathbf{T}^{(k)} = \{\mathbf{t}_1^{(k)}, \mathbf{t}_2^{(k)}, \ldots, \mathbf{t}_M^{(k)}\}$ are the templates of the source sentence.² Then, the inference of $\mathbf{h}^{(k)}$ corresponds to combining the semantic meanings encoded in $\mathbf{T}^{(k)}$ to produce the $k$-th word embedding in the target sentence.
This can be simply modelled as

$$\mathbf{h}^{(k)}=\int_{\Omega}\mathbf{t}\;p^{(k)}(\mathbf{t})\;\mathrm{d}\mathbf{t}\quad\text{s.t.}\;p^{(k)}(\mathbf{t})\geq0\;\;\text{for all}\;\mathbf{t}\in\mathbf{T}^{(k)}\;\text{and}\;\;\int_{\Omega}p^{(k)}(\mathbf{t})\;\mathrm{d}\mathbf{t}=1,\tag{1}$$

where $\Omega = \mathbf{T}^{(k)}$. That is, $\mathbf{h}^{(k)}$ is a convex combination of $\mathbf{t} \in \mathbf{T}^{(k)}$, or equivalently, the mean of an unknown distribution $p^{(k)}$ on $\mathbf{T}^{(k)}$. For generality, our following discussion extends the support of $p^{(k)}$ to all possible templates by setting $\Omega = \mathcal{A}$. In Sec 9, we show that this extension enables optimal transport-based attention, which takes into account words whose embeddings are similar to those in $\mathbf{T}^{(k)}$. That is, even if a word is not present in the source sentence, its embedding will still be optimized if a similar embedding exists in $\mathbf{T}^{(k)}$.

In practice, the cardinality of $\mathcal{A}$ may be huge or infinite; therefore, it is important to design a mechanism that allows users to inject prior knowledge to guide the production of $\mathbf{h}^{(k)}$. For example, in the TP task, $\mathcal{A}$ would be the set of all word embeddings, which could contain more than 10K elements. However, $\mathbf{h}^{(k)}$ should largely depend on the templates associated with the words (in the input sentence) whose locations are similar to that of the $k$-th word in the target sentence. If we could effectively inject this prior information, the inference task would be largely simplified. One natural way to do so is to use a neural network module $f_{\text{pref}}^{(k)}$ to propose a prototype of $p^{(k)}$, referred to as the preference distribution $u^{(k)}$, and let $p^{(k)}$ be close to $u^{(k)}$. Specifically, $u^{(k)}$ puts non-zero probability masses on the templates $\mathbf{t}_1^{(k)}, \mathbf{t}_2^{(k)}, \ldots, \mathbf{t}_M^{(k)}$, with probabilities $u_1^{(k)}, u_2^{(k)}, \ldots, u_M^{(k)}$, respectively (which sum to 1). For TP, $u^{(k)}$ is expected to have larger values for the words at a location similar to that of the $k$-th word of the target sentence.

![3_image_0.png](3_image_0.png)

Figure 2: The model architectures of the three running examples. For the $f_{\text{evd}}^{(k)}$ in (a) and (b), the dashed links exist throughout training and are replaced by the dotted ones in the generation stage.

The preference distribution $u^{(k)}$ is considered a good approximation of $p^{(k)}$ in the sense that the support of $p^{(k)}$ is contained in the set $\mathbf{T}^{(k)}$ of templates. Note that if $\mathbb{R}^d$ is the word embedding space for a large vocabulary, and if the size $M$ of the template set $\mathbf{T}^{(k)}$ is relatively small, then restricting the support of $p^{(k)}$ to be within $\mathbf{T}^{(k)}$ imposes a strong constraint on $p^{(k)}$. On the other hand, $u^{(k)}$ is not a sufficiently accurate approximation of $p^{(k)}$, in the sense that $u^{(k)}$ may assign probabilities to $\mathbf{T}^{(k)}$ somewhat differently. For example, in TP, the choice of $Y^{(k)}$ depends on both $X$ and the already generated words $Y^{(i<k)}$.³ While $u^{(k)}$ provides a strong prior that $p^{(k)}$ should mainly focus on the words appearing in the source sentence, it is inherently difficult for $u^{(k)}$ to capture the semantic evolution in $Y^{(i<k)}$. This difficulty shifts the mean $\boldsymbol{\mu}^{(k)}$ of $u^{(k)}$ away from the mean $\mathbf{h}^{(k)}$ of $p^{(k)}$. To alleviate the problem, we need another piece of information $\mathbf{z}^{(k)} \in \mathbb{R}^d$ that is generated by another network module $f_{\text{evd}}^{(k)}$ and provides information regarding the mean shift. (In TP, $\mathbf{z}^{(k)}$ depends on $Y^{(i<k)}$.)

¹We add the superscript $k$ to note that the inference of different $Y^{(k)}$'s does not necessarily share the set of templates.

²We mainly use TP to motivate the design and discussion. In Sec 3.2, we show the same ideas also apply to IC and FB.

³We assume the sentence generation process is Markovian. More details are given in Sec 3.2.
In particular, we assume that $\mathbf{z}^{(k)}$ is a noisy version of the shift; more precisely,

$$\mathbf{z}^{(k)}=\mathbf{h}^{(k)}-\boldsymbol{\mu}^{(k)}+\boldsymbol{\epsilon},\tag{2}$$

where $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I)$ is a spherical Gaussian noise in $\mathbb{R}^d$ with covariance $\sigma^2 I$. We refer to $\mathbf{z}^{(k)}$ as the evidence. We summarize the problem setup in Fig 1b. Then the design problem is *to construct a function, or a network block, $g$, which infers the unknown distribution $p^{(k)}$ and hence its mean $\mathbf{h}^{(k)}$ based on the evidence $\mathbf{z}^{(k)}$ and the preference distribution $u^{(k)}$.*

## 3.2 Additional Examples

Having demonstrated how the setup applies to the translation problem (TP), we now illustrate its applicability to the other two examples.

**Image Captioning (IC).** The caption generation model in Fig 2b has an architecture similar to the one adopted by Xu et al. (2015), where $f_{\text{pref}}^{(k)}$ extracts templates from the image using a CNN. In this task, a word's position is independent of the object's location, so all CNN-extracted templates have the same preference weight. Similar objects in the image have similar CNN features. Allowing non-$\mathbf{T}$ templates to influence $\mathbf{h}^{(k)}$ could introduce irrelevant information, harming word-inference accuracy. To improve the estimation of $\mathbf{h}^{(k)}$, we constrain $p^{(k)}$ to have support only within that of $u^{(k)}$. As generation progresses, $\mathbf{h}^{(k)}$ should evolve to provide relevant image information for the next word. This semantic evolution is captured by $\mathbf{z}^{(k)} = f_{\text{evd}}^{(k)}$, which predicts the shift of $\boldsymbol{\mu}^{(k)}$ from $\mathbf{h}^{(k)}$. So $\boldsymbol{\mu}^{(k)} + \mathbf{z}^{(k)}$ estimates $\mathbf{h}^{(k)}$ and should be close to it, as should $u^{(k)}$ and $p^{(k)}$.

**Filling in the Blanks Task (FB).** For filling-in-the-blank tasks, consider a BERT-like model (Fig 2c) where $f_{\text{pref}}^{(k)}$ and $f_{\text{evd}}^{(k)}$ share transformation layers, as is common in NLP tasks. $f_{\text{pref}}^{(k)}$ applies a linear map $V$ to the output sequence of the previous layer to form the template set $\mathbf{T}$ supporting $u^{(k)}$, with preference weights specified by the positional encoding. Concurrently, $\mathbf{z}^{(k)} = f_{\text{evd}}^{(k)}$ estimates the shift of $\mathbf{h}^{(k)}$ from the mean $\boldsymbol{\mu}^{(k)}$ due to local variation. As before, we need $\boldsymbol{\mu}^{(k)} + \mathbf{z}^{(k)}$ close to $\mathbf{h}^{(k)}$ and $p^{(k)}$ close to $u^{(k)}$.

Notably, the formulation of the problem is based on the assumption that the network modules $f_{\text{evd}}^{(k)}$ and $f_{\text{pref}}^{(k)}$ are fixed and generate $\mathbf{z}^{(k)}$ and $u^{(k)}$ satisfying the properties assumed above. In reality, $f_{\text{evd}}^{(k)}$ and $f_{\text{pref}}^{(k)}$ are obtained via training. However, we argue that if $g$ is made to satisfy our design objective, we can at least *interpret* the trained $f_{\text{evd}}^{(k)}$ and $f_{\text{pref}}^{(k)}$ as serving to produce $\mathbf{z}^{(k)}$ and $u^{(k)}$ with our desired properties.

## 4 Formulation Of An Optimization Problem

The discussion in the previous section implies that the key optimization problem we focus on should ensure that

1. $\mathbf{h}^{(k)}$ is not too far from $\boldsymbol{\mu}^{(k)} + \mathbf{z}^{(k)}$, where $\mathbf{h}^{(k)}$ is constructed from $p^{(k)}$ according to (1) and $\boldsymbol{\mu}^{(k)}$ is the mean of the preference distribution $u^{(k)}$;
2. $p^{(k)}$ is close to $u^{(k)}$ while $p^{(k)}$'s support is a subset of $u^{(k)}$'s.

These two desiderata prompt us to optimize

$$\min_{p}\;\frac{\alpha}{2}\left\|(\boldsymbol{\mu}+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}\right\|^{2}+\mathcal{K}(p,u),\tag{3}$$

where $\alpha > 0$ controls the relative strength of the two terms (and can be interpreted as the reliability of $\boldsymbol{\mu} + \mathbf{z}$), and $\mathcal{K}(p, u)$ denotes the KL divergence of $u$ from $p$.⁴ By definition, $\mathcal{K}(p, u)$ has a finite value if and only if $p$ has zero values outside the support of $u$. Thus, both requirements in the second desideratum are satisfied by using the KL divergence as a measure for the closeness of $p$ and $u$. Let $\tilde{p}$ be the minimizer of (3). The estimate of $\mathbf{h}$ is

$$\hat{\mathbf{h}}=\int_{\mathbb{R}^{d}}\mathbf{a}\,\tilde{p}(\mathbf{a})\;\mathrm{d}\mathbf{a}.\tag{4}$$
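As a concrete illustration (ours, not part of the original derivation), the following minimal NumPy sketch solves (3) numerically for a discrete preference distribution $u$: $p$ is parameterized by logits over $u$'s support, which enforces the support and normalization constraints by construction, and the objective is minimized by simple finite-difference gradient descent; all tensors are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 5, 3, 0.5
T = rng.normal(size=(n, d))              # templates t_1..t_n (support of u)
u = np.array([0.4, 0.3, 0.1, 0.1, 0.1])  # preference weights (sum to 1)
mu = u @ T                               # mean of u
z = rng.normal(scale=0.1, size=d)        # noisy mean-shift evidence

def objective(logits):
    # Softmax parameterization keeps p >= 0, sum(p) = 1, supp(p) = supp(u).
    p = np.exp(logits - logits.max()); p /= p.sum()
    fit = 0.5 * alpha * np.sum((mu + z - p @ T) ** 2)  # first term of (3)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(u)))   # K(p, u)
    return fit + kl

logits = np.log(u)                       # start at p = u, where K(p, u) = 0
eps = 1e-5
for _ in range(2000):
    grad = np.array([(objective(logits + eps * np.eye(n)[i]) -
                      objective(logits)) / eps for i in range(n)])
    logits -= 0.1 * grad

p_tilde = np.exp(logits - logits.max()); p_tilde /= p_tilde.sum()
print("p~ =", np.round(p_tilde, 3))      # minimizer of (3)
print("h^ =", np.round(p_tilde @ T, 3))  # the estimate in (4)
```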
Naturally, this optimization problem can be derived from three different, though related, perspectives. Below, we present a less commonly known view that demonstrates how $\alpha$ affects the optimal solution from a hard-constraint perspective. The maximum likelihood and Bayesian perspectives are included in Appx B.

**A Maximum Entropy on the Mean Perspective.** Consider a problem that seeks a distribution $p$ such that the expectation $\int_{\mathbb{R}^d} \mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}$ is not far from $\boldsymbol{\mu}+\mathbf{z}$. Namely, we require $\left\|(\boldsymbol{\mu}+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}\right\|^{2}\leq\frac{1}{2\alpha}$. Given $\mathbf{z}$, there are infinitely many $p$'s that satisfy the constraint, making it difficult to select the "best" $p$. A technique in information theory, maximum entropy on the mean (MEM) (Rioux et al., 2020; Gamboa, 1989), addresses this by selecting the best guess of the ground truth $p^*$ that satisfies the constraint and minimizes the KL divergence:

$$\tilde{p}=\operatorname*{argmin}_{p}\,\mathcal{K}(p,u)\quad\text{s.t.}\;\left\|(\boldsymbol{\mu}+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}\right\|^{2}\leq\frac{1}{2\alpha},$$

which also minimizes (3) according to (Rioux et al., 2020, Eq (18)) and (Borwein & Lewis, 1992, Cor 4.9).

⁴As we will focus on a single step of sequence prediction, we simplify our notation by omitting the superscript $(k)$ in the rest of our discussion.

## 5 A Motivating Example To Find The Optimal Solution

To better illustrate our method for solving (3), we first examine a special case where the preference distribution $u$ follows a spherical Gaussian distribution, specifically $u \sim \mathcal{N}(\boldsymbol{\mu}, I_d)$. In this scenario, the convex problem can be solved in closed form. The derivation provides valuable insights into how the problem can be approached in a general context, as we demonstrate in Sec 6.

Let $\mathbf{b} = \boldsymbol{\mu} + \mathbf{z}$ serve as an unreliable observation of $\mathbf{h}_p$. Rioux et al. (2020) prove, via Fenchel duality (Rockafellar, 1970, p. 104), that the minimizer $p^*$ of (3) takes the form

$$p^{*}(\mathbf{a})=\frac{u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}^{*}\rangle}{\int u(\mathbf{a}^{\prime})\exp\langle\mathbf{a}^{\prime},\boldsymbol{\lambda}^{*}\rangle\;\mathrm{d}\mathbf{a}^{\prime}},\tag{5}$$

where

$$\boldsymbol{\lambda}^{*}=\operatorname*{argmax}_{\boldsymbol{\lambda}\in\mathbb{R}^{d}}\;\langle\mathbf{b},\boldsymbol{\lambda}\rangle-\frac{1}{2\alpha}\left\|\boldsymbol{\lambda}\right\|^{2}-\log\int u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}\rangle\;\mathrm{d}\mathbf{a}.\tag{6}$$

Note that $\int u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}\rangle\;\mathrm{d}\mathbf{a} = \exp(\langle\boldsymbol{\mu},\boldsymbol{\lambda}\rangle + \frac{1}{2}\|\boldsymbol{\lambda}\|^2)$, as it is the moment generating function (MGF) of $u \sim \mathcal{N}(\boldsymbol{\mu}, I_d)$. Substituting this expression into (6) and setting the derivative with respect to $\boldsymbol{\lambda}$ to zero yields $\boldsymbol{\lambda}^* = \frac{\alpha}{\alpha+1}(\mathbf{b} - \boldsymbol{\mu})$. By (5), $p^*(\mathbf{a}) \propto \exp(-\frac{1}{2}\|\mathbf{a} - \boldsymbol{\mu}\|^2 + \langle\mathbf{a},\boldsymbol{\lambda}^*\rangle) \propto \exp(-\frac{1}{2}\|\mathbf{a} - (\boldsymbol{\mu} + \boldsymbol{\lambda}^*)\|^2)$. Substituting $\boldsymbol{\lambda}^* = \frac{\alpha}{\alpha+1}(\mathbf{b} - \boldsymbol{\mu})$ into this expression implies that $p^*$ follows the Gaussian distribution $\mathcal{N}(\frac{1}{1+\alpha}\boldsymbol{\mu} + \frac{\alpha}{1+\alpha}\mathbf{b},\, I_d)$. Thus, our estimate of $\mathbf{h}_p$ is $\frac{1}{1+\alpha}\boldsymbol{\mu} + \frac{\alpha}{1+\alpha}\mathbf{b}$.
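This closed form is easy to check numerically. Below is a small sketch (ours, with toy values) that maximizes the dual objective (6) by gradient ascent, using the Gaussian log-MGF in closed form, and compares the result with $\boldsymbol{\lambda}^* = \frac{\alpha}{\alpha+1}(\mathbf{b}-\boldsymbol{\mu})$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 4, 0.7
mu = rng.normal(size=d)
b = mu + rng.normal(scale=0.5, size=d)   # unreliable observation b = mu + z

# For u ~ N(mu, I): log MGF(lam) = <mu, lam> + 0.5 ||lam||^2, so the gradient of
# the dual (6), <b, lam> - ||lam||^2 / (2 alpha) - log MGF(lam), is:
def grad(lam):
    return b - lam / alpha - mu - lam

lam = np.zeros(d)
for _ in range(5000):
    lam += 0.01 * grad(lam)              # gradient ascent on (6)

closed_form = alpha / (alpha + 1) * (b - mu)
print(np.max(np.abs(lam - closed_form)))  # ~0: matches the derivation above
```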
The value $\alpha$ in (3) can also be considered a measure of the reliability of the noisy observation $\mathbf{b}$, where a smaller $\alpha$ implies a less reliable $\mathbf{b}$. Then, the estimate of $\mathbf{h}_p$ should be less affected by $\mathbf{b}$ as $\alpha$ approaches zero, which is well captured by our derived expression $\frac{1}{1+\alpha}\boldsymbol{\mu} + \frac{\alpha}{1+\alpha}\mathbf{b}$. We will also see this relationship in a more general setting in our subsequent discussions. While a more complicated analysis is involved there, the underlying principles are essentially the same.

In Sec 6, we focus on a similar optimization problem that estimates $\mathbf{h}_p$ assuming that $u$ is instead a discrete distribution. By solving the optimization problem, we derive a closed-form approximation for the estimate of $\mathbf{h}_p$ via Fenchel duality. The approximation then gives a generalized attention layer structure, as shown in Fig 1. A special case of it is equivalent to the familiar dot-product attention (Luong et al., 2015) that is also adopted in transformers (Vaswani et al., 2017). Moreover, we will show that the T5 transformer (Raffel et al., 2020) implicitly adopts our generalized attention expression.

## 6 Attention As Inference Via Fenchel Duality

We now present how to solve (3) with a general $u$, where the solution yields the standard attention mechanism. Rioux et al. (2020) proved that the optimization problem stated in (3) has the following Fenchel dual:

**Theorem 1.** *The dual of (3) is given by*

$$\max_{\boldsymbol{\lambda}\in\mathbb{R}^{d}}\left\{\langle\boldsymbol{\lambda},\boldsymbol{\mu}+\mathbf{z}\rangle-\frac{1}{2\alpha}\left\|\boldsymbol{\lambda}\right\|^{2}-\log M(\boldsymbol{\lambda})\right\},\tag{7}$$

*where*

$$M(\boldsymbol{\lambda})=\int_{\mathbb{R}^{d}}u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}\rangle\;\mathrm{d}\mathbf{a}.\tag{8}$$

*Given a maximizer $\boldsymbol{\lambda}^*$ of (7), one can recover the minimizer $\tilde{p}$ of (3) via*

$$\tilde{p}(\mathbf{a})=\frac{u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}^{*}\rangle}{\int_{\mathbb{R}^{d}}u(\mathbf{a}^{\prime})\exp\langle\mathbf{a}^{\prime},\boldsymbol{\lambda}^{*}\rangle\;\mathrm{d}\mathbf{a}^{\prime}}.\tag{9}$$

By Theorem 1, the estimate $\hat{\mathbf{h}}$ defined in (4) can be rewritten as

$$\hat{\mathbf{h}}=\int_{\mathbb{R}^{d}}\mathbf{a}\,\tilde{p}(\mathbf{a})\;\mathrm{d}\mathbf{a}=\int_{\mathbb{R}^{d}}\mathbf{a}\,\frac{u(\mathbf{a})\exp\langle\mathbf{a},\boldsymbol{\lambda}^{*}\rangle}{\int_{\mathbb{R}^{d}}u(\mathbf{a}^{\prime})\exp\langle\mathbf{a}^{\prime},\boldsymbol{\lambda}^{*}\rangle\;\mathrm{d}\mathbf{a}^{\prime}}\;\mathrm{d}\mathbf{a},\tag{10}$$

where $\boldsymbol{\lambda}^*$ is a maximizer of (7). In general, $\boldsymbol{\lambda}^*$ does not have a closed-form expression in terms of $\alpha$, $u$ and $\mathbf{z}$, and a standard paradigm is to search for it using gradient ascent-based methods. In this paper, we will not search for $\boldsymbol{\lambda}^*$ in this way; instead, we will derive a closed-form expression to approximate it. Remarkably, this takes the form of the generalized attention presented in Fig 1.

Note that $M(\boldsymbol{\lambda})$ in (8) equals $\mathbb{E}_u[\exp\langle W, \boldsymbol{\lambda}\rangle]$, the expectation of the random variable $\exp\langle W, \boldsymbol{\lambda}\rangle$ where $W$ has probability distribution $u$. This expectation is just the moment generating function (MGF) of $W$, and the value $\log M(\boldsymbol{\lambda})$ is called the cumulant of $W$ (McCullagh, 1987, p. 26), which has the expansion (McCullagh, 1987, (2.4))

$$\log M(\boldsymbol{\lambda})=\langle\boldsymbol{\mu},\boldsymbol{\lambda}\rangle+\frac{1}{2}\langle\boldsymbol{\lambda},\Sigma\boldsymbol{\lambda}\rangle+o(\|\boldsymbol{\lambda}\|^{2}),\tag{11}$$

where $\boldsymbol{\mu}=\int\mathbf{a}\,u(\mathbf{a})\;\mathrm{d}\mathbf{a}$ and $\Sigma=\int(\mathbf{a}-\boldsymbol{\mu})(\mathbf{a}-\boldsymbol{\mu})^{T}u(\mathbf{a})\;\mathrm{d}\mathbf{a}$ respectively denote the expectation and the variance-covariance matrix of $W$.
Note that the expansion implicitly assumes that the random variable $W$ following distribution $u$ has bounded moments. (A derivation of (11) is given in Appx A.)

Now we assume that $\alpha$ is small, and we argue that this assumption is justified in practice. For instance, in the translation task, all words in the dictionary can serve as candidate templates, which could number more than 10,000, but $u$ reduces this size to the length of the source sentence (usually less than tens of words). The inference of $p$ should strongly anchor around this prior information; consequently, the information provided by $\mathbf{z}$ should weigh less. On the other hand, $\mathbf{z}$ can hardly provide an accurate estimate of the mean shift, since the generation of $\mathbf{z}$ is often ignorant of the templates selected by $u$ (for example, in the example translation and image captioning models) or is generated by a low-capacity module (as in the example filling-in-the-blank model). For these reasons, one should de-emphasize the constraint imposed by $\mathbf{z}$ and thus choose a small $\alpha$.

When $\alpha$ is picked to be small enough (see (7)), the optimization of $\boldsymbol{\lambda}$ incurs a large penalty on its L2 norm, and thus $\|\boldsymbol{\lambda}^*\|$ is close to zero. Then, by (11), we have

$$\log M(\boldsymbol{\lambda}^{*})\approx\langle\boldsymbol{\mu},\boldsymbol{\lambda}^{*}\rangle+\frac{1}{2}\langle\boldsymbol{\lambda}^{*},\Sigma\boldsymbol{\lambda}^{*}\rangle.\tag{12}$$

Note that the approximation becomes exact for any $\alpha > 0$ if $u$ is Gaussian, which is the case in the motivating example of Sec 5. Substituting (12) into (7) and setting the derivative with respect to $\boldsymbol{\lambda}$ to zero yields

$$\boldsymbol{\lambda}^{*}=\alpha(I_{d}+\alpha\Sigma)^{-1}\mathbf{z},\tag{13}$$

where $I_d$ denotes the $d \times d$ identity matrix.⁵ As $\alpha$ is assumed close to zero, (13) further reduces to

$$\boldsymbol{\lambda}^{*}=\alpha\mathbf{z}.\tag{14}$$

Plugging this expression into (10) gives the result stated as follows:

**Theorem 2.** *Given $u$ with bounded moments, for a small enough $\alpha > 0$, the estimate $\hat{\mathbf{h}}$ defined in (4) can be approximated by*

$$\hat{\mathbf{h}}=\int_{\mathbb{R}^{d}}\mathbf{a}\;\frac{u(\mathbf{a})\exp(\alpha\langle\mathbf{a},\mathbf{z}\rangle)}{\int_{\mathbb{R}^{d}}u(\mathbf{a}^{\prime})\exp(\alpha\langle\mathbf{a}^{\prime},\mathbf{z}\rangle)\;\mathrm{d}\mathbf{a}^{\prime}}\;\mathrm{d}\mathbf{a}.\tag{15}$$

For the case where $u$ is a discrete distribution with support $\{\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_n\}$ and preference probabilities $\{u_1, u_2, \ldots, u_n\}$, (15) becomes simply

$$\hat{\mathbf{h}}=\sum_{i=1}^{n}\mathbf{t}_{i}\;\frac{u_{i}\exp\left(\alpha\langle\mathbf{t}_{i},\mathbf{z}\rangle\right)}{\sum_{j=1}^{n}u_{j}\exp\left(\alpha\langle\mathbf{t}_{j},\mathbf{z}\rangle\right)}.\tag{16}$$

![7_image_0.png](7_image_0.png)

Figure 3: The approximation of $\hat{\mathbf{h}}$ for different choices of $\alpha$. The dots in orange compose the support of the discrete $u$, with the preference weights labelled above. The dark blue arrow starting from the mean $\boldsymbol{\mu}$ of $u$ denotes the evidence $\mathbf{z}$. The red square marks the $\hat{\mathbf{h}}$ constructed by (10) with the $\boldsymbol{\lambda}^*$ maximizing (7), while the purple one marks the $\hat{\mathbf{h}}$ approximated by (16). As we can observe, (16) gives a precise approximation of $\hat{\mathbf{h}}$ when $\alpha$ is sufficiently small.

In Fig 3, we set $d = 2$ and visualize the approximation of $\hat{\mathbf{h}}$ for various selections of $\alpha$. We observe that, as $\alpha$ decreases, (16) outputs a better approximation of $\hat{\mathbf{h}}$. Besides, as a decreasing $\alpha$ implies a less reliable $\boldsymbol{\mu}+\mathbf{z}$, $\hat{\mathbf{h}}$ is less affected by $\boldsymbol{\mu}+\mathbf{z}$ and gets close to $\boldsymbol{\mu}$. Note that our results do not suggest that $\alpha$ should be arbitrarily close to zero for a perfect approximation (which would leave $\mathbf{z}$ useless). Fig 3 shows that a good approximation is already achieved when $\alpha = 0.5$ or $1$, and for these two choices $\hat{\mathbf{h}}$ still significantly deviates from $\boldsymbol{\mu}$ (the latter corresponding to the case $\alpha = 0$, where $\mathbf{z}$ is useless). Thus, $\mathbf{z}$ still largely affects the final estimation results.
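To make (16) concrete, here is a minimal NumPy sketch (ours, with toy tensors) of the generalized attention block of Fig 1c; with uniform preference weights $u_i = 1/n$ and $\alpha = 1/\sqrt{d}$ it reduces exactly to scaled dot-product attention.

```python
import numpy as np

def generalized_attention(T, u, z, alpha):
    # Eq. (16): h = sum_i t_i * u_i exp(alpha <t_i, z>) / sum_j u_j exp(alpha <t_j, z>)
    logits = alpha * T @ z + np.log(u)   # preference weights act as a log-space bias
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ T

rng = np.random.default_rng(0)
n, d = 6, 8
T = rng.normal(size=(n, d))              # templates
z = rng.normal(size=d)                   # evidence (noisy mean-shift estimate)

# Special case: uniform u recovers dot-product attention with softmax weights.
h = generalized_attention(T, np.full(n, 1 / n), z, 1 / np.sqrt(d))
s = np.exp(T @ z / np.sqrt(d)); s /= s.sum()
print(np.allclose(h, s @ T))             # True
```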
The derived solution in (16) aligns with the original attention mechanisms discussed by Bahdanau et al. (2014) and Luong et al. (2015), where $u$ is set to a uniform distribution. These models incorporate most of the crucial components of the modern transformer architecture. In Sec 8, we will demonstrate that (16) also extends to more contemporary architectures, such as the BERT model (Devlin et al., 2019) and T5 (Raffel et al., 2020). Furthermore, we will show that a good approximation is achieved in practice by comparing the accurate solution with its approximated counterpart used in these pretrained models.

## 7 Discussion

In Section 6, we derived an alternative expression for $\hat{\mathbf{h}}$ defined in (4) by solving the Fenchel dual of the optimization problem (3). Although the expression is not in closed form, since we are only interested in the case when $\alpha$ is small, a closed-form approximation of $\hat{\mathbf{h}}$ is derived in Theorem 2, which reduces to the form stated in (16) when considering a discrete distribution $u$. As we pointed out, the block $g$ in Fig 2a, Fig 2b and Fig 2c is expected to find the inferred $\tilde{p}$ minimizing (3) and then plug it into (4) to construct $\hat{\mathbf{h}}$. Thus, one can complete the architecture designs of the three running examples by replacing $g$ with a network layer implementing (16), namely, the structure in Fig 1c.

**The relationship between the optimal solution and attention models.** Remarkably, the expression stated in (16) gives a generalized attention block. In particular, based on our framework, researchers can customize the implementations of $f_{\text{evd}}^{(k)}$ and $f_{\text{pref}}^{(k)}$ to generate $\mathbf{z}$ and $u$ and feed them into (16) to obtain an attention-like network architecture.⁶ For instance, by setting $u_i = \frac{1}{n}$ for all $i$, the expression is equivalent to the well-known dot-product attention (Luong et al., 2015), which is also applied in the transformer network (Vaswani et al., 2017). The equivalence of the expression for $\hat{\mathbf{h}}$ and the dot-product attention layer tells us: (a) *by applying a dot-product attention layer in a model, we essentially ask the model to perform the optimization task defined in (3) and construct the output according to (4);* (b) *the derivation of $\hat{\mathbf{h}}$ depends on two relatively independent pieces of information: a preference distribution given the global information, and an estimate of the output's deviation from the preference distribution's mean according to some local information.* This suggests that the design of an attention-based model can be decomposed into two parts that respectively estimate these two values.

⁵When $\Sigma = I_d$, (13) becomes $\boldsymbol{\lambda}^* = \alpha(I_d + \alpha I_d)^{-1}\mathbf{z} = \frac{\alpha}{1+\alpha}\mathbf{z}$. By (2), $\mathbf{b} = \mathbf{h} + \boldsymbol{\epsilon} = \mathbf{z} + \boldsymbol{\mu}$. Thus, $\boldsymbol{\lambda}^* = \frac{\alpha}{1+\alpha}(\mathbf{b} - \boldsymbol{\mu})$ recovers the expression of $\boldsymbol{\lambda}^*$ in the motivating example.

⁶Potential selections of $f_{\text{evd}}^{(k)}$ and $f_{\text{pref}}^{(k)}$ include constant functions, fixed formulas, and neural networks.

**The model consisting of a stack of attention layers.** Although our discussion focuses on the case of a single attention layer, any attention layer $L$ in an attention stack fits our framework (see Fig 1). In particular, all the attention layers closer to the input $X$ than $L$ can be grouped into the functions $f_{\text{pref}}$ or $f_{\text{evd}}$.
For those layers that take the current layer's output as input, we can group them into $f_{\text{out}}$, where $c$ may contain the outputs of other attention layers working in parallel.

**Multi-head attention.** For clarity, our derivation does not account for multi-head attention scenarios. In essence, an $n$-head attention structure can be viewed as having $n$ distinct mean-shift estimates. Consequently, the outputs of $n$-head attention can be interpreted as the solutions to $n$ underlying convex problems, which are subsequently stacked together at the end of the inference process.

**T5 transformer implicitly adopts the generalized attention structure.** Recent studies in NLP have shown that T5 (Raffel et al., 2020) can achieve state-of-the-art performance on many NLP benchmarks, including text summarization, classification, question answering, etc. While its transformer implementation is quite similar to the original transformer architecture (Vaswani et al., 2017; Devlin et al., 2019), it adopts trainable relative position embeddings to replace the sinusoidal position signals.⁷ This modification provides the model with extra flexibility to encode positional information at little computational cost. We will see that, in comparison to the original transformer implementation, the T5 transformer can be seen as a natural realization of the generalized attention in (16), where the preference weights $u$ unify the concepts of word masks and T5's positional encoding functions. Thus, the usefulness and validity of our framework are well supported by the state-of-the-art performance of T5 on many NLP tasks (Raffel et al., 2020).

Consider the running example of filling in the blanks, with the preference distribution

$$u(\mathbf{t}_i)=\begin{cases}0&\text{if the }i^{\text{th}}\text{ word is masked},\\ \exp(b_{j-i})/Z&\text{otherwise},\end{cases}\tag{17}$$

where $Z$ is a normalizing constant and $b_{j-i}$ is a trainable scalar that depends only on the relative position of word $i$ and word $j$ (which is the $k$-th masked word that we are inferring). Substituting such a $u$ into (16) with $\alpha = 1$ yields

$$\hat{\mathbf{h}}=\sum_{i=1}^{n}\mathbf{t}_{i}\,\frac{\exp\left(\langle\mathbf{t}_{i},\mathbf{z}\rangle+b_{j-i}+\mathbf{1}_{\text{masked}}(i)\right)}{\sum_{l=1}^{n}\exp\left(\langle\mathbf{t}_{l},\mathbf{z}\rangle+b_{j-l}+\mathbf{1}_{\text{masked}}(l)\right)},\tag{18}$$

where $\mathbf{1}_{\text{masked}}(i)$ is an indicator function that equals $-\infty$ if word $i$ is masked and zero otherwise. The expression in (18) has the same structure as that adopted in the T5 transformer, where the indicator function serves as the mask function preventing the model from assigning weights to the masked words. In this way, the concepts of word masks and the positional encoding functions are unified by $u$ in (17). Conversely, the T5 transformer is a realization of the generalized attention with the preference weights $u$ specified in (17).
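The following sketch (ours, with toy values) spells out (17)–(18): the preference weights come from a trainable relative-position bias table plus a mask, and the masked word receives zero attention weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, j = 8, 16, 3                       # sequence length, dim, index of masked word
T = rng.normal(size=(n, d))              # templates t_1..t_n
z = rng.normal(size=d)                   # evidence for position j
b = rng.normal(size=2 * n - 1)           # relative-position biases b_{j-i} (trainable)
masked = np.zeros(n, dtype=bool); masked[j] = True

# Eq. (18): content score + relative-position bias + mask, normalized by softmax.
logits = T @ z + b[(j - np.arange(n)) + (n - 1)]
logits[masked] = -np.inf                 # 1_masked(i) = -inf removes masked words
w = np.exp(logits - logits[~masked].max())
w /= w.sum()
h = w @ T                                # T5-style attention output for position j
print(np.round(w, 3))                    # weight is exactly 0 at the masked position
```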
**Generalized attention structures suggested by the optimal solution.** While the T5 transformer has implicitly adopted the generalized attention, (16) suggests potential for further generalizations. For instance, in the T5 transformer, the function that outputs the templates' preference weights considers only the word masks and the words' relative positions. This function could be generalized to also factor in the input sentence context, with the output weights encoding the importance of each word before incorporating the local information stored in $\mathbf{z}$. The same idea could be applied to the image captioning example to replace the uniform preference weights. By adding a neural network that takes the input image and generates non-uniform preference weights, we devise a mechanism to estimate the importance of each part of the image before caption generation. In this way, the newly added network collects global information from the image to propose a preference distribution, which can then be updated locally based on the current generation stage encoded in $\mathbf{z}$. Besides, although we mainly focus on the case where $u$ is discrete, we emphasize that the analysis performed in Section 6 also covers continuous $u$. This hints that a continuous attention mechanism could also be implemented, which might prove useful in some applications.

Moreover, our theoretical work enables the design of more general attention structures. For instance, the KL divergence in the optimization problem (3) requires the estimated distribution to share support with the preference distribution, which may not be desired in many tasks (e.g., translation, where the target should be unaffected if we replace some words in the source sentence with synonyms). Using our theory, in Sec 9, we show that this can be achieved by replacing the KL divergence with an optimal transport (OT)-based measure that handles word similarities in their embedding space.

⁷They also simplified the layer normalization (Lei Ba et al., 2016) for faster training and inference speed.

## 8 Empirical Evidence

To show that the proposed optimization problem (3) indeed provides a principle justifying the design of attention modules, we show that the maximizer $\boldsymbol{\lambda}^*$ of its dual problem (7) nearly coincides with its approximated counterpart used in the pretrained BERT model (Devlin et al., 2019) and T5-small (Raffel et al., 2020). Verification on other popular attention-based models yielded similar results.

Let $\mathbf{x}_i \in \mathbb{R}^d$ for $i \in \{1, 2, \ldots, n\}$, $\mathbf{y}_j \in \mathbb{R}^d$ for $j \in \{1, 2, \ldots, m\}$, and $K, Q, V \in \mathbb{R}^{d' \times d}$. The $k$-th outputs of BERT attention and T5 attention are, respectively,

$$\mathbf{BERT}:\;\sum_{i=1}^{n}V\mathbf{x}_{i}\,\frac{\exp\left(\langle K\mathbf{x}_{i},Q\mathbf{x}_{k}\rangle/\sqrt{d'}\right)}{\sum_{j=1}^{n}\exp\left(\langle K\mathbf{x}_{j},Q\mathbf{x}_{k}\rangle/\sqrt{d'}\right)}\qquad\mathbf{T5}:\;\sum_{i=1}^{n}V\mathbf{x}_{i}\,\frac{u_{i}\exp\left(\langle K\mathbf{x}_{i},Q\mathbf{y}_{k}\rangle/\sqrt{d'}\right)}{\sum_{j=1}^{n}u_{j}\exp\left(\langle K\mathbf{x}_{j},Q\mathbf{y}_{k}\rangle/\sqrt{d'}\right)}.\tag{19}$$

Here, T5 has three types of attention: self-attention in the encoder, self-attention in the decoder, and the cross-attention connecting them. For the two self-attentions, $\mathbf{x}_i = \mathbf{y}_i$ and $m = n$. Following the reparameterization method used by Ramsauer et al. (2021), for BERT, setting $\alpha = 1$, $\mathbf{t}_i = \frac{\mathbf{x}_i}{\sqrt{d'}}$, $\mathbf{z} = K^{\top}Q\mathbf{x}_k$, $V' = V\sqrt{d'}$, and $u_i \propto 1$ yields $V'\sum_{i=1}^{n}\mathbf{t}_i\,\frac{u_i\exp\langle\mathbf{t}_i,\mathbf{z}\rangle}{\sum_{j=1}^{n}u_j\exp\langle\mathbf{t}_j,\mathbf{z}\rangle}$, where the summation part is the expression derived in (16).⁸ Likewise, for T5, we use the same setting as for BERT, except that $u_i$ is computed based on its positional encoding and $\mathbf{z} = K^{\top}Q\mathbf{y}_k$ for the cross-attention. We find $\boldsymbol{\lambda}^*$ by plugging $\alpha$, the $u_i$'s, the $\mathbf{t}_i$'s and $\mathbf{z}$ into (7) and performing gradient ascent.
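A simplified version of this verification, with toy tensors standing in for the pretrained BERT weights, might look as follows: the dual (7) for a discrete $u$ is maximized by gradient ascent, and the relative deviation from $\alpha\mathbf{z}$ is reported.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 10, 16, 1.0
T = rng.normal(size=(n, d)) / np.sqrt(d)   # templates (scaled, cf. footnote 8)
u = np.full(n, 1.0 / n)                    # BERT case: uniform preference
z = 0.1 * rng.normal(size=d)               # toy stand-in for K^T Q x_k
mu = u @ T

def dual_grad(lam):
    # Gradient of (7): mu + z - lam/alpha - E_w[t], with w_i prop. to u_i exp<t_i, lam>.
    s = T @ lam
    w = u * np.exp(s - s.max()); w /= w.sum()
    return mu + z - lam / alpha - w @ T

lam = np.zeros(d)
for _ in range(20000):
    lam += 1e-2 * dual_grad(lam)           # gradient ascent on (7)

rel_dev = np.linalg.norm(lam - alpha * z) / np.linalg.norm(lam)
print(f"relative deviation = {rel_dev:.4f}")  # small, consistent with (13)-(14)
```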
We then calculate the relative deviation $\frac{\|\boldsymbol{\lambda}^*-\alpha\mathbf{z}\|}{\|\boldsymbol{\lambda}^*\|}$ of its approximated counterpart $\alpha\mathbf{z}$ and report its distribution in Fig 4 for each attention layer, taking the average over the attention heads. We report the distributions for each head in Appx C.

![9_image_0.png](9_image_0.png)

Figure 4: The distribution of relative deviations $\frac{\|\boldsymbol{\lambda}^*-\alpha\mathbf{z}\|}{\|\boldsymbol{\lambda}^*\|}$ for the attention in (a) BERT and (b) T5. The red vertical lines mark the average of the errors.

As Fig 4 indicates, $\boldsymbol{\lambda}^*$ almost coincides with its approximated counterpart $\alpha\mathbf{z}$ inferred by BERT and T5. As a result, the attention inference of BERT and T5 can be seen as solving the proposed convex problem, which corroborates that problem (3) provides a principle justifying the design of attention.

⁸Templates $\mathbf{t}_i$ absorb the scaling factor $d'^{-1/2}$ so that their norms remain largely unchanged as $d'$ increases. Thus, $u$ has bounded moments, and Theorem 2 applies. Note that it is common practice to scale outputs before performing theoretical analysis (e.g., see the work of Arora et al. (2019)).

## 9 An Optimal Transport-Based Attention

In Sec 7, we mentioned that our theoretical work enables the design of more general attention structures. Let $\mathbb{R}_+$ denote the set of non-negative real numbers. In this section, we provide an example by replacing the KL divergence in (3) with an entropy-regularized OT-based measure (Cuturi, 2013):

$$\mathcal{W}_{\gamma}(p,u;M)=\min_{X\in U(p,u)}\;\langle M,X\rangle-\gamma H(X),\tag{20}$$

where $\gamma > 0$, $H(X) = -\sum_{i,j=1}^{N} X_{ij}\log X_{ij}$ is the entropy of $X$, $U(p, u) = \{X \in \mathbb{R}_{+}^{|\mathcal{A}|\times|\mathcal{A}|}:\, X\mathbf{1} = p,\; X^{T}\mathbf{1} = u\}$, and $M \in \mathbb{R}_{+}^{|\mathcal{A}|\times|\mathcal{A}|}$ is a cost matrix that measures the similarity between each pair of templates in $\mathcal{A}$.⁹ The entropy regularization makes the minimizer $X^*$ in (20) change smoothly in terms of $p$, $u$ and $M$, which stabilizes and speeds up the evaluation of $\mathcal{W}$ (Cuturi, 2013). When $\gamma \to 0$ and $M(\mathbf{t}, \mathbf{t}') = d_{\mathcal{A}}(\mathbf{t}, \mathbf{t}')^{\rho}$, $\mathcal{W}_{\gamma}^{1/\rho}$ reduces to the Wasserstein $\rho$-distance. We note that, due to the entropy term, for fixed $u$ and $M$, the true preference distribution $\tilde{u}$ that minimizes $\mathcal{W}_{\gamma}(\tilde{u}, u; M)$ deviates slightly from $u$ and approaches $u$ as $\gamma \to 0$ (see Appx D for details). Let $\tilde{\boldsymbol{\mu}}$ denote the expectation of $\tilde{u}$. Then we can rewrite (3) as

$$\min_{p}\;\frac{\alpha}{2}\left\|(\tilde{\boldsymbol{\mu}}+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}\right\|^{2}+\mathcal{W}_{\gamma}(p,u;M).\tag{21}$$

Following a procedure similar to the one presented in Sec 6 (the derivation is given in Appx D), we can derive and solve its Fenchel dual problem and show that when both $\alpha$ and $\frac{\alpha}{\gamma}$ are small, the minimizer $p^*$ takes the form

$$p^{*}(\mathbf{t})=\sum_{i=1}^{n}u_{i}\;\frac{\exp\big(\big(\alpha\mathbf{t}^{T}\mathbf{z}-M(\mathbf{t},\mathbf{t}_{i})\big)/\gamma\big)}{Z_{i}},\tag{22}$$

with $Z_i=\sum_{\mathbf{t}'\in\mathcal{A}}\exp\big(\big(\alpha(\mathbf{t}')^{T}\mathbf{z}-M(\mathbf{t}',\mathbf{t}_i)\big)/\gamma\big)$. Substituting (22) into (4), we get the OT-based attention.

**The OT-based attention considers all templates in $\mathcal{A}$.** In comparison to the generalized attention derived in Sec 6, the OT-based one assigns non-zero weights to all templates in $\mathcal{A}$. To see how it works, consider an extreme case in which the templates are partitioned into several groups. If two templates $\mathbf{t}, \mathbf{t}'$ belong to the same group, $M(\mathbf{t}, \mathbf{t}') = 0$; otherwise, $M(\mathbf{t}, \mathbf{t}') = \infty$. Moreover, templates within the same group are very similar in the sense that their inner products with $\mathbf{z}$ are approximately equal. Suppose $\mathbf{t}_i$ belongs to a group $G$ and the other templates $\mathbf{t}_{j\neq i}$ do not; then for all $\mathbf{t} \in G$, we have $p^*(\mathbf{t}) = u_i/|G|$. That is, all templates of $G$ share the weight of $\mathbf{t}_i$ and can thus potentially be trained even if most of them do not appear in the input. In general, if a template $\mathbf{t}$ is similar to some $\mathbf{t}_i \in \mathbf{T}$ (i.e., $M(\mathbf{t}, \mathbf{t}_i)$ is small), it will share $\mathbf{t}_i$'s weight although it does not appear in $\mathbf{T}$. In contrast, for regular attention, only templates in $\mathbf{T}$ can be assigned non-zero weights.
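As an illustration (ours, with toy values), the weights in (22) can be computed directly when the template space $\mathcal{A}$ is a small finite set; the cost $M(\mathbf{a}, \mathbf{b}) = -\mathbf{a}^\top\mathbf{b} + C$ mirrors the choice used in the OT-ViT experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 12, 4, 8                    # |A| candidate templates, n appear in the input
A = rng.normal(size=(N, d))           # the whole (finite) template space
idx = np.arange(n)                    # indices in A of the input templates t_1..t_n
u = np.full(n, 1.0 / n)               # preference weights on the input templates
z = rng.normal(size=d)
alpha, gamma = 1.0, np.sqrt(d)

G = A @ A.T
M = -G + G.max()                      # cost M(a,b) = -a^T b + C, with C making M >= 0

# Eq. (22): p*(t) = sum_i u_i * softmax over all t in A of (alpha t^T z - M(t, t_i)) / gamma
scores = (alpha * A @ z)[:, None] - M[:, idx]      # shape (N, n)
W = np.exp((scores - scores.max(axis=0)) / gamma)
W /= W.sum(axis=0)                                 # column i: distribution over all of A
p_star = W @ u                                     # OT-based attention weights on A
h_hat = p_star @ A                                 # the estimate (4)
print(np.round(p_star, 3), p_star.sum())           # non-zero mass on every template
```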
This peculiar property of OT-based attention is desirable in some practical tasks. For example, in an NLP problem, synonyms intuitively have similar templates. Then, if a word appears in the input sentence and is trained, its synonyms should be trained in a similar way and thus be assigned a similar weight (because replacing a word with its synonym does not alter the input in a semantic sense). Likewise, in the Vision Transformer (ViT) (Dosovitskiy et al., 2021), images are divided into small patches, each of which is conceptually treated as a word. Consequently, an image composed of these patches is analogous to a sentence. A multilayer transformer, similar to BERT, is then used to extract features from these patches. Finally, a special learnable token is incorporated to aggregate these features (templates) using an attention mechanism, and the aggregated result is fed into a classifier for image classification. Intuitively, images of the same class consist of visually similar patches, and replacing patches in an image with visually similar patches should not alter its class. Thus, it is reasonable for the last attention layer to share the template sets across images of the same class and to adopt the OT-based attention to train the templates associated with visually similar patches.

⁹A smaller $M_{ij}$ implies a larger similarity between $\mathbf{t}_i$ and $\mathbf{t}_j$. While many OT-related problems define $M$ by embedding templates into a metric space $(\mathcal{A}, d_{\mathcal{A}})$ with $M(\mathbf{t}, \mathbf{t}') = d_{\mathcal{A}}(\mathbf{t}, \mathbf{t}')^{\rho}$, $\rho \geq 1$, our discussion makes no assumption on $M$ other than that it is non-negative and symmetric, with $M(\mathbf{t}, \mathbf{t}) < M(\mathbf{t}', \mathbf{t})$ for all $\mathbf{t}' \neq \mathbf{t}$.

To corroborate our claims, we test the ViT and its OT-based variant on Fashion-MNIST (Xiao et al., 2017), CIFAR10 and CIFAR100 (Krizhevsky, 2009). The OT-ViT model is identical to the ViT, except that the final transformer layer is substituted with OT-based attention, where $M(\mathbf{a}, \mathbf{b}) = -\mathbf{a}^{\top}\mathbf{b} + C$, $\alpha = 1$ and $\gamma = \sqrt{\text{hidden dim}}$. ($C$ is an upper bound on the inner products of all possible template pairs, which ensures $M$ is non-negative.) To improve training efficiency, when training OT-ViT, there is a 50% chance that the set $\mathbf{T}$ consists solely of templates extracted from the input image and a 50% chance that $\mathbf{T}$ also includes templates from another randomly selected image of the same class. During testing, $\mathbf{T}$ consists only of templates from the input image.

Table 1: Test accuracies of ViT and OT-ViT on CIFAR100 with various learning rates (LR).

| LR     | 3 × 10⁻³     | 3 × 10⁻⁴     | 3 × 10⁻⁵     |
|--------|--------------|--------------|--------------|
| ViT    | 0.228 ± 0.01 | 0.463 ± 0.01 | 0.452 ± 0.01 |
| OT-ViT | 0.175 ± 0.01 | 0.491 ± 0.01 | 0.412 ± 0.01 |

Table 2: Test accuracies of ViT and OT-ViT on Fashion-MNIST, CIFAR10 and CIFAR100.

|        | Fashion-MNIST | CIFAR10      | CIFAR100     |
|--------|---------------|--------------|--------------|
| ViT    | 0.928 ± 0.01  | 0.751 ± 0.01 | 0.463 ± 0.01 |
| OT-ViT | 0.937 ± 0.01  | 0.772 ± 0.02 | 0.491 ± 0.01 |

Throughout our experiments, we fixed the patch size to 4×4 and the dropout rate to 0.2. To ensure a fair and tractable comparison, we constrained both models to have 3.2M parameters.
Under this constraint, we traded off the number of layers and hidden dimensions of the Vision Transformer (ViT) model to achieve the best performance on CIFAR100 (Krizhevsky, 2009). The study showed that a six-layer ViT model with a hidden dimension of 512 had the optimal performance. We then used this setting for both the ViT and OT-ViT models throughout the remaining experiments. (Note that for a fixed hidden dimension, the OT-based attention has a nearly identical number of parameters to the regular transformer implementation.) Similarly, we searched for the optimal learning rate (LR) of both models on CIFAR100 and reported the test accuracy with the 95% confidence intervals in Table 1. The results indicate that both models achieved the best performance when the learning rate was set to 3 × 10⁻⁴. We, therefore, used this learning rate when training the models on the other datasets. In Table 2, we compare the performances of ViT and OT-ViT on Fashion-MNIST (Xiao et al., 2017), CIFAR10 and CIFAR100 (Krizhevsky, 2009) by reporting their test accuracies with the 95% confidence interval. As demonstrated, OT-ViT consistently outperforms ViT, highlighting the effectiveness of OT-based attention.

## 10 Conclusion

This paper presented a new perspective on understanding the attention mechanism by showing that it can be viewed as a solver of a family of inference tasks. These tasks involve improving the noisy estimate of a distribution p's mean by a preference distribution that encodes some beliefs of p's value. We have used three running examples with the typical model architectures to show that such tasks naturally exist in neural network design. We then abstracted a convex optimization problem from these tasks and derived a closed-form approximation of the optimal solution by solving the problem's Fenchel dual. We find that the closed-form approximation can be seen as a generalized attention layer, and one of its special cases is equivalent to the dot-product attention adopted in transformers. We further analyzed the general form and showed that the T5 transformer implicitly adopts the generalized attention structure with attention weights unifying the concepts of the word masks and the positional encoding functions. We empirically showed that our framework can well explain the attention inference in the pretrained BERT and T5 models. To demonstrate the potential for designing more general attention structures, we replaced the KL divergence with an OT-based measure, deriving an OT-based attention structure that removes the support constraints on $p^{(k)}$ mentioned in the examples.

This paper also presents a principled justification for the design of attention modules in neural networks. Specifically, there is a general assumption that because attention in humans narrows the search space, a similar phenomenon is at play in transformers. In this paper, we have shown that the mechanism corresponds to proposing a preference distribution over the templates, followed by adjusting it using a noisy mean shift estimation. The generalized attention structure presented potentially opens the door to a wide design space. For example, the preference weights need not be derived from the positional encoding functions; they could integrate a variety of information provided by other network components.
Additionally, this research successfully demonstrates a novel approach to analyzing the functioning of a neural network component, namely, by isolating the component from the complex network structure and asking: is there a "local problem" that is solved by the design of this component?

Broader impact. This paper presents a new perspective on understanding attention and derives a generalized attention structure. Our work is foundational, which we believe does not have direct negative societal impacts. Due to the very wide range of applications of attention, such as self-driving (Kim & Canny, 2017), healthcare (Ma et al., 2017) and protein interaction prediction (Tsubaki et al., 2018), we expect our work can facilitate algorithm development in these areas, which may have unexpected impacts.

## References

Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In *International Conference on Machine Learning (ICML)*, pp. 322–332, 2019.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In *ICLR 2015*, pp. 1–15, 2014.

Pierre Baldi and Roman Vershynin. The quarks of attention, 2022. arXiv:2202.08371.

J. M. Borwein and A. S. Lewis. Partially finite convex programming, part i: Quasi relative interiors and duality theory. *Mathematical Programming*, 57(1):15–48, 1992. doi: 10.1007/BF01581072. URL https://doi.org/10.1007/BF01581072.

Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. On Identifiability in Transformers. In *International Conference on Learning Representations (ICLR)*, 2019.

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What Does BERT Look at? An Analysis of BERT's Attention. *arXiv preprint arXiv:1906.04341*, 2019.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper/2013/file/af21d0c97db2e27e13572cbf59eb343d-Paper.pdf.

Marco Cuturi and Gabriel Peyre. A smoothed dual approach for variational wasserstein problems. *SIAM Journal on Imaging Sciences*, 9(1):320–343, 2016. ISSN 1936-4954.

Puneesh Deora, Rouzbeh Ghaderi, Hossein Taheri, and Christos Thrampoulidis. On the optimization and generalization of multi-head attention. *Transactions on Machine Learning Research*, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=wTGjn7JvYK.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://www.aclweb.org/anthology/N19-1423.

Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: pure attention loses rank doubly exponentially with depth, 2021. URL https://arxiv.org/abs/2103.03404.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

Fabrice Gamboa. Methode du maximum d'entropie sur la moyenne et applications. *PhD thesis*, 1989.

Michael Hahn. Theoretical Limitations of Self-Attention in Neural Sequence Models. *Transactions of the Association for Computational Linguistics*, 8:156–171, 2020.

Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, and Roman Novak. Infinite attention: NNGP and NTK for deep attention networks. In *International Conference on Machine Learning (ICML)*, 2020.

Sarthak Jain and Byron C. Wallace. Attention is not Explanation. In *Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)*, 2019.

Samy Jelassi, Michael Eli Sander, and Yuanzhi Li. Vision transformers provably learn spatial structure. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=eMW9AkXaREI.

Jinkyu Kim and John Canny. Interpretable learning for self-driving cars by visualizing causal attention. In *2017 IEEE International Conference on Computer Vision (ICCV)*, pp. 2961–2969, 2017. doi: 10.1109/ICCV.2017.320.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Jaehoon Lee, Jascha Sohl-Dickstein, Jeffrey Pennington, Roman Novak, Sam Schoenholz, and Yasaman Bahri. Deep neural networks as gaussian processes. In *International Conference on Learning Representations (ICLR)*, 2018.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer Normalization. *arXiv e-prints*, art. arXiv:1607.06450, July 2016.

Hongkang Li, Meng Wang, Sijia Liu, and Pin-Yu Chen. A theoretical understanding of shallow vision transformers: Learning, generalization, and sample complexity. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=jClGv3Qjhb.

Haoye Lu, Yongyi Mao, and Amiya Nayak. On the dynamics of training attention models. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=1OCTOShAmqB.

Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pp. 1412–1421, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1166. URL https://www.aclweb.org/anthology/D15-1166.

Fenglong Ma, Radha Chitta, Jing Zhou, Quanzeng You, Tong Sun, and Jing Gao. Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks. In *Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD '17, pp. 1903–1911, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450348874. doi: 10.1145/3097983.3098088. URL https://doi.org/10.1145/3097983.3098088.

Sadegh Mahdavi, Renjie Liao, and Christos Thrampoulidis. Memorization capacity of multi-head attention in transformers. In *The Twelfth International Conference on Learning Representations*, 2024.
URL https://openreview.net/forum?id=MrR3rMxqqv.

P. McCullagh. *Tensor Methods in Statistics: Monographs on Statistics and Applied Probability*. Chapman and Hall/CRC, Boca Raton, FL, first edition, 1987. ISBN 9781351077118.

Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? In *Neural Information Processing Systems (NIPS)*, volume 32. Curran Associates, Inc., 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67, 2020. URL http://jmlr.org/papers/v21/20-074.html.

Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Lukas Gruber, Markus Holzleitner, Thomas Adler, David Kreil, Michael K Kopp, Günter Klambauer, Johannes Brandstetter, and Sepp Hochreiter. Hopfield networks is all you need. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=tL89RnzIiCd.

Gabriel Rioux, Rustum Choksi, Tim Hoheisel, Pierre Marechal, and Christopher Scarvelis. The maximum entropy on the mean method for image deblurring. *Inverse Problems*, oct 2020. doi: 10.1088/1361-6420/abc32e. URL https://doi.org/10.1088/1361-6420/abc32e.

R. Tyrrell Rockafellar. *Convex analysis*. Princeton mathematical series; 28. Princeton University Press, Princeton, N.J, 1970. ISBN 0691080690.

Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, and Mert Pilanci. Unraveling attention via convex duality: Analysis and interpretations of vision transformers. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 19050–19088. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/sahiner22a.html.

Sofia Serrano and Noah A. Smith. Is attention interpretable? In *Annual Meeting of the Association for Computational Linguistics (ACL)*, 2020.

Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Transformer dissection: A unified understanding for transformer's attention via the lens of kernel. In *EMNLP*, 2019.

Masashi Tsubaki, Kentaro Tomii, and Jun Sese. Compound–protein interaction prediction with end-to-end learning of neural networks for graphs and sequences. *Bioinformatics*, 35(2):309–318, 07 2018.

Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. Attention Interpretability Across NLP Tasks. In *International Conference on Learning Representations (ICLR)*, 2020.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, July 2019.

Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao.
Attention-based LSTM for aspect-level sentiment classification. In *Empirical Methods in Natural Language Processing (EMNLP)*, pp. 606–615, Austin, Texas, November 2016. Association for Computational Linguistics.

Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In *2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, 2019.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *CoRR*, abs/1708.07747, 2017. URL http://arxiv.org/abs/1708.07747.

K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In *Proc. 32nd Int. Conf. Mach. Learn.*, pp. 257–261, 2015. doi: 10.1109/EEEI.2002.1178445.

Greg Yang. Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes. In *Neural Information Processing Systems (NIPS)*, 2019.

## A Derivation Of (11) For Preference Distributions Of Bounded Moments

Assume a preference distribution $u$ has bounded moments. Then its moment generating function

$$M(\lambda)=\int_{\mathbb{R}^{d}}\exp\langle\mathbf{a},\lambda\rangle\,u(\mathbf{a})\,\mathrm{d}\mathbf{a}=1+\langle M'(0),\lambda\rangle+\frac{1}{2}\langle\lambda,M''(0)\lambda\rangle+o(\|\lambda\|^{2}),\tag{23}$$

where

$$M'(0)=\int\mathbf{a}\,u(\mathbf{a})\,\mathrm{d}\mathbf{a}=\mu,\tag{24}$$

$$M''(0)=\int\mathbf{a}\mathbf{a}^{\top}u(\mathbf{a})\,\mathrm{d}\mathbf{a}.\tag{25}$$

Notice that

$$\log(1+t)=t-\frac{t^{2}}{2}+\frac{t^{3}}{3}-\frac{t^{4}}{4}+\cdots=t-\frac{t^{2}}{2}+o(t^{2}).\tag{26}$$

Thus,

$$\begin{aligned}\log(M(\lambda))&=\langle M'(0),\lambda\rangle+\frac{1}{2}\langle\lambda,M''(0)\lambda\rangle+o(\|\lambda\|^{2})-\frac{1}{2}\Big(\langle M'(0),\lambda\rangle+\frac{1}{2}\langle\lambda,M''(0)\lambda\rangle+o(\|\lambda\|^{2})\Big)^{2}\\&\quad+o\Big(\big(\langle M'(0),\lambda\rangle+\tfrac{1}{2}\langle\lambda,M''(0)\lambda\rangle+o(\|\lambda\|^{2})\big)^{2}\Big)\\&=\langle M'(0),\lambda\rangle+\frac{1}{2}\Big(\langle\lambda,M''(0)\lambda\rangle-\langle M'(0),\lambda\rangle^{2}\Big)+o(\|\lambda\|^{2})\\&=\langle\mu,\lambda\rangle+\frac{1}{2}\lambda^{\top}\Sigma\lambda+o(\|\lambda\|^{2}),\end{aligned}$$

where

$$\Sigma=M''(0)-M'(0)M'(0)^{\top}=\int(\mathbf{a}-\mu)(\mathbf{a}-\mu)^{\top}u(\mathbf{a})\,\mathrm{d}\mathbf{a}.$$

## B Other Perspectives To Derive (3)

A Maximum Likelihood Perspective. The optimization problem in (3) can be derived using the maximum log-likelihood method by treating the KL-divergence term as a regularizer. According to (2), the difference $(\mu+\mathbf{z})-\mathbf{h}$ follows a Gaussian distribution $\mathcal{N}(0,\sigma^{2}I)$. This implies the log-likelihood function $\ell(\mathbf{z})\propto-\frac{1}{2\sigma^{2}}\|(\mu+\mathbf{z})-\mathbf{h}\|^{2}$. Maximizing it with the KL-divergence term as a regularizer is the same as minimizing

$$\frac{1}{2\sigma^{2}}\left\|(\mu+\mathbf{z})-\mathbf{h}\right\|^{2}+\eta\mathcal{K}(p,u),\tag{27}$$

where $\eta>0$ controls the strength of the regularization. Substituting (1) into (27) followed by rearrangement yields

$$\min_{p}\frac{1}{2\eta\sigma^{2}}\left\|(\mu+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\;\mathrm{d}\mathbf{a}\right\|^{2}+\mathcal{K}(p,u),\tag{28}$$

which is equivalent to (3) by setting $\alpha^{-1}=\eta\sigma^{2}$.

A Bayesian perspective. Given observed data and prior belief about the distribution of parameters, Bayesian inference allows us to update this distribution to reflect the new knowledge. Assume that the distribution $p$ is specified by parameters $\theta$.
By considering $\mu+\mathbf{z}$ as the observed data, we will show that picking the $p_{\theta}$ that minimizes (3) is the same as choosing the $\theta^{*}$ that maximizes the posterior density of $\theta$ given the observed data. Let $\vartheta$ be the parameters of the preference distribution $u_{\vartheta}$ and suppose the prior distribution $f(\theta|\vartheta)$ of $\theta$ satisfies

$$f(\theta|\vartheta)\propto\exp\Big(-\eta\mathcal{K}(p_{\theta},u_{\vartheta})\Big),$$

where $\eta>0$ is a hyper-parameter that controls the decaying speed of the probability density as $p_{\theta}$ deviates from $u_{\vartheta}$. In (2), we have assumed that given $\theta$, $(\mu+\mathbf{z})-\mathbf{h}_{\theta}$ follows a spherical Gaussian distribution $\mathcal{N}(0,\sigma^{2}I)$, where $\mathbf{h}_{\theta}$ is the mean of $p_{\theta}$. Therefore, given its parameter $\theta$, the probability density function of $\mu+\mathbf{z}$ is

$$f(\mu+\mathbf{z}|\theta)=f(\mu+\mathbf{z}|\mathbf{h}_{\theta})\propto\exp\left(-\frac{1}{2\sigma^{2}}\left\|(\mu+\mathbf{z})-\mathbf{h}_{\theta}\right\|^{2}\right).\tag{29}$$

Then the posterior distribution of $\theta$ satisfies

$$f(\theta|\mu+\mathbf{z},\vartheta)\propto f(\mu+\mathbf{z}|\theta)\,f(\theta|\vartheta)\propto\exp\left(-\frac{1}{2\sigma^{2}}\left\|(\mu+\mathbf{z})-\mathbf{h}_{\theta}\right\|^{2}-\eta\mathcal{K}(p_{\theta},u_{\vartheta})\right).\tag{30}$$

Finding $\theta^{*}$ that maximizes the posterior $f(\theta|\mu+\mathbf{z},\vartheta)$ is the same as finding

$$p_{\theta}^{\star}=\operatorname*{argmin}_{p_{\theta}}\left\{\frac{1}{2\sigma^{2}}\left\|(\mu+\mathbf{z})-\mathbf{h}_{\theta}\right\|^{2}+\eta\mathcal{K}\big(p_{\theta},u_{\vartheta}\big)\right\}=\operatorname*{argmin}_{p_{\theta}}\left\{\frac{1}{2\eta\sigma^{2}}\left\|(\mu+\mathbf{z})-\mathbf{h}_{\theta}\right\|^{2}+\mathcal{K}\big(p_{\theta},u_{\vartheta}\big)\right\},$$

which is equivalent to (3) by setting $\alpha^{-1}=\eta\sigma^{2}$.

## C Extra Experimental Results

![18_image_0.png](18_image_0.png)

Figure 5: The distribution of relative errors $\frac{\|\lambda^{*}-\alpha\mathbf{z}\|}{\|\lambda^{*}\|}$ for the attention in BERT. The red vertical lines mark the average of the errors.

![19_image_0.png](19_image_0.png)

Figure 6: The distribution of relative errors $\frac{\|\lambda^{*}-\alpha\mathbf{z}\|}{\|\lambda^{*}\|}$ for the self-attention of the encoder in T5. The red vertical lines mark the average of the errors.

![19_image_1.png](19_image_1.png)

Figure 7: The distribution of relative errors $\frac{\|\lambda^{*}-\alpha\mathbf{z}\|}{\|\lambda^{*}\|}$ for the self-attention of the decoder in T5. The red vertical lines mark the average of the errors.

![20_image_0.png](20_image_0.png)

Figure 8: The distribution of relative errors $\frac{\|\lambda^{*}-\alpha\mathbf{z}\|}{\|\lambda^{*}\|}$ for the cross-attention in T5. The red vertical lines mark the average of the errors.

## D Details On The Derivation Of OT-Based Attention

According to the discussion in Sec 9, we consider the optimization problem

$$p^{*}=\operatorname*{argmin}_{p}{\frac{\alpha}{2}}\left\|({\tilde{\mu}}+\mathbf{z})-\int_{\mathbb{R}^{d}}\mathbf{a}\,p(\mathbf{a})\,\operatorname{d}\!\mathbf{a}\right\|^{2}+{\mathcal{W}}_{\gamma}(p,u;M),\tag{31}$$

where $\tilde{\mu}$ denotes the mean of the true preference distribution $\tilde{u}$ that minimizes $f(p)=\mathcal{W}_{\gamma}(p,u;M)$.
We will show in Prop 1 that

$$\tilde{\mu}=\sum_{\mathbf{t},\mathbf{t}'\in\mathcal{A}\times\mathcal{A}}u(\mathbf{t}')\;\frac{\exp\left(-M(\mathbf{t},\mathbf{t}')/\gamma\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(-M(\mathbf{t}'',\mathbf{t}')/\gamma\right)}\,\mathbf{t}.\tag{32}$$

Cuturi & Peyre (2016) proved that the Fenchel dual of $\mathcal{W}_{\gamma}(p,u;M)$ is

$$\mathcal{W}_{\gamma}^{*}(p;u,M)=\gamma\left(H(u)+\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\;\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}\exp\left(\gamma^{-1}\big(p(\mathbf{t})-M(\mathbf{t},\mathbf{t}')\big)\right)\right]\right)\tag{33}$$

for $p\in\mathbb{R}^{N}$; and, for $\mathbf{t}\in\mathcal{A}$,

$$\left[\nabla_{p}\mathcal{W}_{\gamma}^{*}(p;u,M)\right]_{\mathbf{t}}=\sum_{\mathbf{t}'\in\mathcal{A}}\frac{u(\mathbf{t}')\exp\left(\gamma^{-1}\big(p(\mathbf{t})-M(\mathbf{t},\mathbf{t}')\big)\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(\gamma^{-1}\big(p(\mathbf{t}'')-M(\mathbf{t}',\mathbf{t}'')\big)\right)},\tag{34}$$

where $\left[\nabla_{p}\mathcal{W}_{\gamma}^{*}(p;u,M)\right]_{\mathbf{t}}$ denotes the entry of $\nabla_{p}\mathcal{W}_{\gamma}^{*}(p;u,M)$ that is associated with template $\mathbf{t}$. By Fenchel's duality theorem, we know that $p^{*}$ in (31) takes the form

$$p^{*}(\mathbf{t})=\sum_{\mathbf{t}'\in\mathcal{A}}\frac{u(\mathbf{t}')\exp\left(\gamma^{-1}\big(\mathbf{t}^{\top}\lambda^{*}-M(\mathbf{t},\mathbf{t}')\big)\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(\gamma^{-1}\big((\mathbf{t}'')^{\top}\lambda^{*}-M(\mathbf{t}',\mathbf{t}'')\big)\right)},\tag{35}$$

where

$$\begin{aligned}\lambda^{*}&=\arg\max_{\lambda\in\mathbb{R}^{d}}\;\langle\tilde{\mu}+\mathbf{z},\lambda\rangle-\frac{1}{2\alpha}\left\|\lambda\right\|^{2}-\mathcal{W}_{\gamma}^{*}\big(\big[\mathbf{t}^{\top}\lambda\,\big|\,\mathbf{t}\in\mathcal{A}\big];u,M\big)\\&=\arg\max_{\lambda\in\mathbb{R}^{d}}\;\langle\tilde{\mu}+\mathbf{z},\lambda\rangle-\frac{1}{2\alpha}\left\|\lambda\right\|^{2}-\gamma\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\;\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}\exp\left(\frac{(\mathbf{t}')^{\top}\lambda-M(\mathbf{t},\mathbf{t}')}{\gamma}\right)\right].\end{aligned}\tag{36}$$

The true preference distribution. The Fenchel dual perspective allows us to derive a closed-form expression for the minimizer of $f(p)=\mathcal{W}_{\gamma}(p,u;M)$, which we refer to as the true preference distribution $\tilde{u}$ in the main text. We will also show that $\tilde{u}$ approaches the preference distribution $u$ as $\gamma\to0$. Notice that, by definition, $p^{*}$ reduces to $\tilde{u}$ when $\alpha\to0$ in (31). In this case, the optimization of $\lambda$ in (36) gets an infinite penalty on its L2 norm and thus $\|\lambda^{*}\|^{2}=0$. Therefore, we have

Proposition 1. $\mathcal{W}_{\gamma}(p,u;M)$ *has the minimizer* $\tilde{u}(\mathbf{t})$ *taking the form*

$$\tilde{u}(\mathbf{t})=\sum_{\mathbf{t}'\in\mathcal{A}}u(\mathbf{t}')\,\frac{\exp\left(-M(\mathbf{t},\mathbf{t}')/\gamma\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(-M(\mathbf{t}',\mathbf{t}'')/\gamma\right)},\tag{37}$$

for $\mathbf{t}\in\mathcal{A}$. *Besides, its mean*

$$\tilde{\mu}=\sum_{\mathbf{t},\mathbf{t}'\in\mathcal{A}\times\mathcal{A}}u(\mathbf{t}')\;\frac{\exp\left(-M(\mathbf{t},\mathbf{t}')/\gamma\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(-M(\mathbf{t}'',\mathbf{t}')/\gamma\right)}\,\mathbf{t}.\tag{38}$$

When $\gamma\to0$, the ratio $\frac{\exp(-M(\mathbf{t},\mathbf{t}')/\gamma)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp(-M(\mathbf{t}',\mathbf{t}'')/\gamma)}$ approaches 1 if $\mathbf{t}=\mathbf{t}'$ and 0 otherwise. Therefore, $\tilde{u}(\mathbf{t})\to u(\mathbf{t})$ for all $\mathbf{t}\in\mathcal{A}$.

The derivation of (22).
We now show how to derive (22) when $\alpha$ and $\alpha/\gamma$ are assumed small. Within the summation term of (36), for a fixed $\mathbf{t}$,

$$\begin{aligned}\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}\exp\left(\frac{(\mathbf{t}')^{\top}\lambda-M(\mathbf{t},\mathbf{t}')}{\gamma}\right)\right]&=\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}\exp\left(\frac{-M(\mathbf{t},\mathbf{t}')}{\gamma}\right)\exp\left(\frac{(\mathbf{t}')^{\top}\lambda}{\gamma}\right)\right]\\&=\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}q_{\mathbf{t}}(\mathbf{t}')Z(\mathbf{t})\exp\left(\frac{(\mathbf{t}')^{\top}\lambda}{\gamma}\right)\right]\\&=\log\left[\sum_{\mathbf{t}'\in\mathcal{A}}q_{\mathbf{t}}(\mathbf{t}')\exp\left(\frac{(\mathbf{t}')^{\top}\lambda}{\gamma}\right)\right]+\log Z(\mathbf{t})\\&=\log\mathcal{M}_{\mathbf{t}}(\lambda/\gamma)+\log Z(\mathbf{t}),\end{aligned}\tag{39}$$

where

$$q_{\mathbf{t}}(\mathbf{t}')=\exp\left(\frac{-M(\mathbf{t},\mathbf{t}')}{\gamma}\right)\Big/Z(\mathbf{t}),\qquad Z(\mathbf{t})=\sum_{\mathbf{t}'\in\mathcal{A}}\exp\left(\frac{-M(\mathbf{t},\mathbf{t}')}{\gamma}\right),$$

and $\mathcal{M}_{\mathbf{t}}$ denotes the MGF of $q_{\mathbf{t}}$. Note that $\log\mathcal{M}_{\mathbf{t}}(\lambda/\gamma)$ is called the cumulant of $q_{\mathbf{t}}$ and has the expansion

$$\log\mathcal{M}_{\mathbf{t}}(\lambda/\gamma)=\mu_{\mathbf{t}}^{\top}(\lambda/\gamma)+\frac{1}{2}\;(\lambda/\gamma)^{\top}\,\Sigma_{\mathbf{t}}\,(\lambda/\gamma)+\mathcal{O}(\|\lambda/\gamma\|^{3}),\tag{40}$$

where

$$\mu_{\mathbf{t}}=\sum_{\mathbf{t}'\in\mathcal{A}}q_{\mathbf{t}}(\mathbf{t}')\,\mathbf{t}'\tag{41}$$

and

$$\Sigma_{\mathbf{t}}=\sum_{\mathbf{t}'\in\mathcal{A}}q_{\mathbf{t}}(\mathbf{t}')\,(\mathbf{t}'-\mu_{\mathbf{t}})(\mathbf{t}'-\mu_{\mathbf{t}})^{\top}\tag{42}$$

respectively denote the mean and the variance-covariance matrix of $q_{\mathbf{t}}$. Substituting (39) and (40) into (36) yields

$$\begin{aligned}\lambda^{*}&=\arg\max_{\lambda\in\mathbb{R}^{d}}\;\langle\tilde{\mu}+\mathbf{z},\lambda\rangle-\frac{1}{2\alpha}\|\lambda\|^{2}-\gamma\left[\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\,\mu_{\mathbf{t}}^{\top}(\lambda/\gamma)+\frac{1}{2}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\,(\lambda/\gamma)^{\top}\Sigma_{\mathbf{t}}(\lambda/\gamma)+\mathcal{O}(\|\lambda/\gamma\|^{3})+\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\log Z(\mathbf{t})\right]\\&=\arg\max_{\lambda\in\mathbb{R}^{d}}\;\langle\tilde{\mu}+\mathbf{z},\lambda\rangle-\frac{1}{2\alpha}\|\lambda\|^{2}-\left[\left(\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\mu_{\mathbf{t}}\right)^{\!\top}\lambda+\frac{1}{2\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\,\lambda^{\top}\Sigma_{\mathbf{t}}\lambda+\gamma\,\mathcal{O}(\|\lambda/\gamma\|^{3})\right],\end{aligned}$$

where the term $\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\log Z(\mathbf{t})$ is dropped in the second line since it does not depend on $\lambda$. When $\alpha$ is assumed to be small, the optimization of $\lambda$ gets a large penalty on its L2 norm and thus $\|\lambda^{*}\|^{2}$ is close to zero.
So we have

$$\lambda^{*}\approx\arg\max_{\lambda\in\mathbb{R}^{d}}\;\langle\tilde{\mu}+\mathbf{z},\lambda\rangle-\frac{1}{2\alpha}\left\|\lambda\right\|^{2}-\left[\left(\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\mu_{\mathbf{t}}\right)^{\!\top}\lambda+\frac{1}{2\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\left(\lambda^{\top}\Sigma_{\mathbf{t}}\lambda\right)\right].$$

Taking the derivative with respect to $\lambda$ and setting it to zero yields

$$\left(\tilde{\mu}+\mathbf{z}\right)-\frac{1}{\alpha}\lambda^{*}-\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\mu_{\mathbf{t}}-\frac{1}{\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\Sigma_{\mathbf{t}}\lambda^{*}=0.$$

As

$$\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\mu_{\mathbf{t}}=\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\sum_{\mathbf{t}'\in\mathcal{A}}q_{\mathbf{t}}(\mathbf{t}')\,\mathbf{t}'=\sum_{\mathbf{t},\mathbf{t}'\in\mathcal{A}\times\mathcal{A}}u(\mathbf{t})\,\frac{\exp\left(-M(\mathbf{t},\mathbf{t}')/\gamma\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(-M(\mathbf{t},\mathbf{t}'')/\gamma\right)}\,\mathbf{t}'=\tilde{\mu},\tag{43}$$

we also have

$$\mathbf{z}-\left(\frac{1}{\alpha}I_{d}+\frac{1}{\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\Sigma_{\mathbf{t}}\right)\lambda^{*}=0.$$

That is,

$$\lambda^{*}=\left(\frac{1}{\alpha}I_{d}+\frac{1}{\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\Sigma_{\mathbf{t}}\right)^{-1}\mathbf{z}=\left(I_{d}+\frac{\alpha}{\gamma}\sum_{\mathbf{t}\in\mathcal{A}}u(\mathbf{t})\Sigma_{\mathbf{t}}\right)^{-1}(\alpha\mathbf{z}).$$

When $\alpha/\gamma$ is small, the expression becomes simply $\lambda^{*}=\alpha\mathbf{z}$. Plugging it into (35), we get

$$p^{*}(\mathbf{t})=\sum_{\mathbf{t}'\in\mathcal{A}}\frac{u(\mathbf{t}')\exp\left(\gamma^{-1}\big(\alpha\mathbf{t}^{\top}\mathbf{z}-M(\mathbf{t},\mathbf{t}')\big)\right)}{\sum_{\mathbf{t}''\in\mathcal{A}}\exp\left(\gamma^{-1}\big(\alpha(\mathbf{t}'')^{\top}\mathbf{z}-M(\mathbf{t}',\mathbf{t}'')\big)\right)},\tag{44}$$

which is (22).
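As a numerical sanity check on (44) (our own toy example, not from the paper), the following NumPy snippet verifies that the OT-based weights collapse to ordinary dot-product attention when the cost matrix vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
T = rng.normal(size=(n, d))            # templates t_1..t_n as rows
z = rng.normal(size=d)                 # evidence vector
u = np.full(n, 1.0 / n)                # preference distribution
alpha, gamma = 0.1, 1.0

M = np.zeros((n, n))                   # vanishing cost matrix
logits = (alpha * (T @ z)[:, None] - M) / gamma
cols = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
p_ot = cols @ u                        # Eq. (44)

s = alpha * (T @ z) / gamma            # dot-product attention weights
p_dot = np.exp(s) / np.exp(s).sum()

assert np.allclose(p_ot, p_dot)        # identical when M = 0
```

When $M=0$, every column of the normalized kernel equals the same softmax, so the mixture over $u$ is that softmax itself, independent of $u$.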
Review 1:
Summary: This paper tries to deepen the theoretical understanding of the popular attention mechanism by relating it to a general estimation problem formulation. The paper derives a (Fenchel) dual formulation of the original estimation problem, and shows that after some approximation steps, the solution can be derived in closed form. In that case, the solution resembles the classical attention module. Numerical experiments show that the approximation steps are reasonable to assume for real-world models.
Strengths and Weaknesses: The paper in general tries to keep notation simple in order to be easy to follow. However, sometimes this leads to parts that are hard to understand, for example "That is, even if a word is absent from the source sentence, it will still be optimized if it has a similar embedding in T(k)." How can a word be optimized? Or: what kind of mathematical objects are $\mathcal{A}$ and $a$?
Main Weaknesses:
1) The presented model to connect attention to a certain optimization problem does not encompass multi-head attention, skip connections, or the fact that many attention layers are stacked after each other. Both practical examples (BERT and T5) use at least some of the mentioned techniques (if not all), and thus do not match the presented model closely. This is briefly addressed on page 9, but given that modern transformer architectures are much more complicated than a single attention layer, I would ask the authors to explain in more detail why it would be sufficient to only focus on one attention layer, one head, no skip connection, etc.
2) Regarding the experiments, if I understood correctly, they only confirm that the actual solution $\lambda^*$ is close to the approximated solution $\alpha z$. However, this only confirms the arguments of Section 6, which, however, simply uses prior work and a few approximation steps. In my understanding, the main contribution of this paper would be to connect the optimization problem in Section 6 to attention/transformer architectures, but empirical validation of this model (e.g., for BERT and T5 on page 11, that is, the choice of z, t, u, etc.) is missing.
3) The connection between (18) and the models for BERT and T5 on page 11 seems to be very loose, which raises the question whether the claimed connection is very specific to attention at all. To be more specific: first, with your choice the value matrix V cannot be cast into (18), which weakens the claim of the connection. Second, if we choose $\alpha=1/d$ and $t_i=x_i$ instead, we also obtain BERT. This raises several questions that are not addressed in the paper: what does it mean if (18) fits BERT/T5 with several choices? In particular, the argument of Section 6 relies on $\alpha$ being small, but then what interpretation does $\alpha$ even have for BERT/T5?
**The above two points are the main reason I marked Claims and Evidence as No, but I am open to changing my evaluation after discussion/revision.**
Attention is a very well-studied mechanism and multiple interpretations of it have been proposed. It would improve the paper if a short literature overview of the models/interpretations of attention were given, in order to better understand how this paper can offer a new viewpoint. Based on this, it remains unclear to me why the presented viewpoint would be more helpful to understand attention or improve its design compared to previously offered explanations.
Requested Changes:
* Please address the main weaknesses that are mentioned in the previous part (discuss more realistic attention architectures, explain the relationship between BERT/T5 and the model of Section 6, and verify numerically that these models are accurate; discuss how the presented framework provides new understanding).
* I am not sure what the purpose of Section 5 is and how it is important for the rest of the paper.
* Regarding Sections 3.1 and 3.2, it does not become really clear what the meaning of z, t, u is in the context of the examples. Very late, the specific formulae are given for BERT and T5, but no interpretation or motivation for this choice is given, other than to match formula (18).
Broader Impact Concerns: None.
==================================================
Review 2:
Summary: The authors present an optimization problem central to many estimation tasks and show that a closed-form approximation of its Fenchel dual results in a generalized attention framework. The authors suggest that the success of the T5 architecture is because it has implicitly adopted the general form of the solution. To show that the proposed optimization problem provides a principle justifying the design of attention modules, they show the maximizer of its dual problem nearly coincides with the output of the attention block in BERT and T5. The authors use the generalized attention framework to propose an optimal transport-based attention. However, there are no experiments with the proposed method.
Strengths and Weaknesses:
Strengths:
* The generalized attention framework is likely of interest to the TMLR audience.
* The authors clearly explain that to solve the optimization problem, the problem must be reformulated as its Fenchel dual, solved, and converted back to one for the original problem.
* Fig. 4 shows the maximizer of its dual problem nearly coincides with the output of the attention block in BERT and T5 (it would be nice to see a naive baseline in these plots).
Weaknesses:
* The authors suggest potential further generalizations of the T5 architecture that arise from Eq. (18); however, there are no experiments showing the impact of these further generalizations.
* A reviewer previously criticized the paper for lacking sufficient insight on how the proposed framework could be used to design improved attention mechanisms. In response to this, the authors proposed an optimal transport-based attention. However, there are no experiments with the proposed method.
Requested Changes:
* The presentation in Sec. 4 needs cleaning up. The equations in the *A Maximum Entropy on the Mean Perspective* section are redundant with Eq. (4).
* The lack of experiments is the biggest weakness of this paper and why I think the claims made in the paper are not supported by evidence. The paper should include experiments demonstrating the impact of further generalizations to the T5 architecture or the proposed optimal transport-based attention. As is, I do not consider the proposed optimal transport-based attention a meaningful contribution of the paper.
Requested Clarifications:
* Looking at Eq. (2) from Rioux et al. (2020), shouldn't $\mathcal{K}(p, u)$ denote the KL divergence from $u$ to $p$? Not from $p$ to $u$?
* Would it be helpful if the number of bins in the histograms in Fig. 4 were increased? As is, these histograms do not show much detail.
Broader Impact Concerns: N/A
==================================================
Review 3:
Summary: This paper discusses the attention mechanism; namely, given some template (i.e., key-value) vectors $\mathbf{t}_1,\ldots,\mathbf{t}_M$, and an evidence (i.e., query) vector $\mathbf{z}$, the attention $\mathbf{h}$ is defined as
$$ \mathbf{h}:=\sum_{i=1}^{M}a_i\mathbf{t}_i $$
where
$$ a_i=\frac{\exp(\mathbf{t}_i\cdot\mathbf{z})}{\sum_j\exp(\mathbf{t}_j\cdot\mathbf{z})}. $$
Intuitively, the attention $\mathbf{h}$ can be understood as a convex combination of $\mathbf{t}_1,\ldots,\mathbf{t}_M$ that is "closest to" $\mathbf{z}$; this paper formalizes this intuition. Formally, it shows that $\mathbf{h}$ is approximately the solution of
$$ \min\frac{1}{2}\lVert (\mathbf{\mu}+\mathbf{z})- \mathbf{h}\rVert^2 + KL(a, u), $$
where $\mathbf{\mu}$ is the mean of $\mathbf{t}_1,\ldots,\mathbf{t}_M$, the $KL(\bullet)$ is the KL-divergence, $a=(a_1,\ldots,a_M)$ is the attention probabilities on $(1,\ldots,M)$ and $u$ is the uniform distribution on $(1,\ldots,M)$. I said "approximately", because more precisely one should consider the optimization problem
$$ \min\frac{\alpha}{2}\lVert (\mathbf{\mu}+\mathbf{z})- \mathbf{h}\rVert^2 + KL(a, u), $$
where $\alpha>0$ is a hyper-parameter; the authors show that this problem can be solved by
$$ a_i=\frac{\exp(\mathbf{t}_i\cdot\mathbf{\lambda})}{\sum_j\exp(\mathbf{t}_j\cdot\mathbf{\lambda})} $$
where $\mathbf{\lambda}=\alpha\mathbf{z}$, when $\alpha$ is sufficiently small. Then, empirical experiments on T5 and BERT show that the approximation is good enough at $\alpha=1$.
Strengths and Weaknesses:
Strengths: I like the way that this work formalizes a prevailing intuition of the attention mechanism. The formalization is new to me; I have learned something by reading the paper.
Weaknesses: I think the authors still need to answer the "so what?" question in order to publish this work. Why does this formalization matter? What can it be useful for? Currently, the authors claim that the formalization is a justification for the design of attention modules in neural networks. I feel this is a bit overreaching. The attention mechanism is justified by its wide adoption, its simplicity and flexibility in being applicable to various situations, and its good performance. Specific to the Transformer model, attention is critical because it models the correlation among tokens in a sequence. I don't see how the formalization of this paper can be connected to such strengths of attention in any significant way. One potentially fruitful direction, in my opinion, is that the formalization can be naturally generalized beyond the discrete space $(1,\ldots,M)$; i.e., besides attending to a fixed number of vectors $\mathbf{t}_1,\ldots,\mathbf{t}_M$, one may attend to some continuous distribution of vectors with some prior preference, or attend to some structured space. It would be excellent if the authors could find an application where such generalization is natural and can empirically outperform the vanilla attention significantly.
Requested Changes: Find at least one application of the formalization that can naturally generalize beyond the vanilla attention mechanism, and empirically verify its significance.
Broader Impact Concerns: None.
==================================================
Metareview:
Recommendation: Reject
Comment: This paper studies transformers and the attention mechanism using convex analysis. The results are further validated on pretrained language models.
While all reviewers saw some novel insights in this submission, they also pointed out some significant issues with the presentation, motivation, and experimental design. These issues cannot be addressed by the authors' revision and require another round of full review. In particular, the major issues include:
- Presentation: The current structure and storyline cause some confusion for reviewers and most likely other readers as well. Also, some terminology might not be easily understandable for the ML community (e.g., an explanation of how the templates relate to the key, query, and value of attention). This is important because the position of this paper is to help provide explanations to ML researchers and practitioners.
- Experiment: Some important details and ablations are missing. The connection between the theoretical part and the numerical findings needs to be strengthened.
In light of the review comments and my own reading, I recommend rejection and major revision for this submission. I hope the authors will incorporate the review comments into the resubmitted version.
==================================================
# A Simple Video Segmenter By Tracking Objects Along Axial Trajectories

Ju He jhe47@jhu.edu Johns Hopkins University

Qihang Yu qihang.yu@bytedance.com ByteDance

Inkyu Shin *dlsrbgg33@kaist.ac.kr* Korea Advanced Institute of Science and Technology

Reviewed on OpenReview: *https://openreview.net/forum?id=Sy6ZOStz5v*

## Abstract

Video segmentation requires consistently segmenting and tracking objects over time. Due to the quadratic dependency on input size, directly applying self-attention to video segmentation with high-resolution input features poses significant challenges, often leading to insufficient GPU memory capacity. Consequently, modern video segmenters either extend an image segmenter without incorporating any temporal attention or resort to window space-time attention in a naive manner. In this work, we present Axial-VS, a general and simple framework that enhances video segmenters by tracking objects along axial trajectories. The framework tackles video segmentation through two sub-tasks: short-term within-clip segmentation and long-term cross-clip tracking. In the first step, Axial-VS augments an off-the-shelf clip-level video segmenter with the proposed *axial-trajectory* attention, sequentially tracking objects along the height- and width-trajectories within a clip, thereby enhancing temporal consistency by capturing motion trajectories. The axial decomposition significantly reduces the computational complexity for dense features, and outperforms the window space-time attention in segmentation quality. In the second step, we further apply *axial-trajectory* attention to the object queries in clip-level segmenters, which are learned to encode object information, thereby aiding object tracking across different clips and achieving consistent segmentation throughout the video. Without bells and whistles, Axial-VS showcases state-of-the-art results on video segmentation benchmarks, emphasizing its effectiveness in addressing the limitations of modern clip-level video segmenters. Code and models are available here.

Xueqing Deng *xueqingdeng@bytedance.com* ByteDance

Alan Yuille *ayuille1@jhu.edu* Johns Hopkins University

Xiaohui Shen shenxiaohui@bytedance.com ByteDance

Liang-Chieh Chen liangchieh.chen@bytedance.com ByteDance

## 1 Introduction

Video segmentation is a challenging computer vision task that requires temporally consistent pixel-level scene understanding by segmenting objects and tracking them across a video. Numerous approaches have been proposed to address the task in a variety of ways. They can be categorized into frame-level (Kim et al., 2020; Wu et al., 2022c; Heo et al., 2023; Li et al., 2023a), clip-level (Athar et al., 2020; Qiao et al., 2021; Hwang et al., 2021; Mei et al., 2022), and video-level segmenters (Wang et al., 2021b; Heo et al., 2022; Zhang et al., 2023), which process the video either in a frame-by-frame, clip-by-clip, or whole-video manner. Among them, clip-level segmenters draw our special interest, as they innately capture the local motion within a short period of time (a few frames in the same clip) compared to frame-level segmenters. They also avoid the memory constraints incurred by video-level segmenters when processing long videos. Specifically, clip-level segmenters first pre-process the video into a set of short clips, each consisting of just a few frames. They then predict clip-level segmentation masks and associate them (i.e., tracking objects across clips) to form the final temporally consistent video-level results.
Concretely, the workflow of clip-level segmenters requires two types of tracking: short-term *within-clip* and long-term *cross-clip* tracking. Most existing clip-level segmenters (Li et al., 2023b; Shin et al., 2024) directly extend modern image segmentation models (Cheng et al., 2022; Yu et al., 2022b) to clip-level segmentation without any temporal attention, while TarViS (Athar et al., 2023) leverages a straightforward window space-time attention mechanism for within-clip tracking. However, none of the previous studies have fully considered the potential to enhance within-clip tracking and ensure long-term consistent tracking beyond neighboring clips. An intuitive approach to improve tracking ability is to naively calculate the affinity between features of neighboring frames (Vaswani et al., 2017). Another unexplored direction involves tracking objects along trajectories, where a variant of self-attention called trajectory attention (Patrick et al., 2021) was proposed to capture object motion by computing the affinity of down-sampled embedded patches in video classification. Nevertheless, in video segmentation, the input video is typically of high resolution and considerable length. Due to the quadratic complexity of attention computation with respect to input size, directly computing self-attention or trajectory attention for dense pixel features becomes computationally impractical. To address this challenge, we demonstrate the feasibility of decomposing and detecting object motions independently along the height (H-axis) and width (W-axis) dimensions, as illustrated in Fig. 1. This approach sequentially computes the affinity between features of neighboring frames along the height and width dimensions, a concept we refer to as *axial-trajectory* attention. The axial-trajectory attention is designed to learn the temporal correspondences between neighboring frames by estimating the motion paths sequentially along the height- and width-axes. By concurrently considering spatial and temporal information in the video, this approach harnesses the potential of attention mechanisms for dense pixel-wise tracking. Furthermore, the utilization of axial-trajectory attention can be expanded to compute the affinity between clip object queries. Modern clip-level segmenters encode object information in clip object queries, making this extension valuable for establishing robust long-term cross-clip tracking. These innovations serve as the foundations for our within-clip and cross-clip tracking modules. Building upon these components, we introduce Axial-VS, a general and simple framework for video segmentation. Axial-VS enhances a clip-level segmenter by incorporating within-clip and cross-clip tracking modules, leading to exceptional temporally consistent segmentation results. This comprehensive approach showcases the efficacy of axial-trajectory attention in addressing both short-term within-clip and long-term cross-clip tracking requirements. We instantiate Axial-VS by employing Video-kMaX (Shin et al., 2024) or Tube-Link (Li et al., 2023b) as the clip-level segmenters, yielding a significant improvement on video panoptic segmentation (Kim et al., 2020) and video instance segmentation (Yang et al., 2019), respectively. Without bells and whistles, Axial-VS improves over Video-kMaX (Shin et al., 2024) by 8.5% and 5.2% VPQ on VIPSeg (Miao et al., 2022) with ResNet50 (He et al., 2016) and ConvNeXt-L (Liu et al., 2022b), respectively.
Moreover, it also achieves a 3.5% VPQ improvement on VIPSeg compared to the state-of-the-art model DVIS (Zhang et al., 2023), when using ResNet50. Besides, Axial-VS can also boost the strong baseline Tube-Link (Li et al., 2023b) by 0.9% AP, 4.7% AP_long, and 6.5% AP on Youtube-VIS-2021 (Yang et al., 2021a), Youtube-VIS-2022 (Yang et al., 2022), and OVIS (Qi et al., 2022) with Swin-L (Liu et al., 2021).

![2_image_0.png](2_image_0.png)

Figure 1: **Visualization of Learned Axial-Trajectory Attention.** In this short clip depicting the action 'playing basketball', the basketball location at frame 1 is selected as the *reference point* (marked in red). We multiply the learned height and width axial-trajectory attentions and overlay them on frames 2, 3 and 4 to visualize the trajectory of the reference point over time. As observed, the axial-trajectory attention can capture the basketball's motion path.

## 2 Related Work

Attention for Video Classification The self-attention mechanism (Vaswani et al., 2017) is widely explored in modern video transformer design (Bertasius et al., 2021; Arnab et al., 2021; Neimark et al., 2021; Fan et al., 2021; Patrick et al., 2021; Liu et al., 2022a; Wang & Torresani, 2022) to reason about the temporal information for video classification. While most works treat time as just another dimension and directly apply global space-time attention, the divided space-time attention (Bertasius et al., 2021) applies temporal attention and spatial attention separately to reduce the computational complexity compared to the standard global space-time attention. Trajectory attention (Patrick et al., 2021) learns to capture the motion path of each query along the time dimension. Deformable video transformer (Wang & Torresani, 2022) exploits the motion displacements encoded in the video codecs to guide where each query should attend in their deformable space-time attention. However, most of the aforementioned explorations cannot be straightforwardly extended to video segmentation due to the quadratic computational complexity and the high-resolution input size of videos intended for segmentation. In this study, we propose decomposing object motion along the height- and width-axes separately, thereby incorporating the concept of axial-attention (Ho et al., 2019; Huang et al., 2019; Wang et al., 2020). This leads to the proposal of axial-trajectory attention, effectively enhancing temporal consistency while maintaining manageable computational costs.

Attention for Video Segmentation The investigation into attention mechanisms for video segmentation is under-explored, primarily hindered by the high-resolution input size of videos. Consequently, most existing works directly utilize modern image segmentation models (Cheng et al., 2022) to produce frame-level or clip-level predictions, while associating the cross-frame or cross-clip results through Hungarian Matching (Kuhn, 1955). VITA (Heo et al., 2022) utilizes window-based self-attention to effectively capture relations between cross-frame object queries in the object encoder. DVIS (Zhang et al., 2023) similarly investigates the use of standard self-attention to compute the affinity between cross-frame object queries, resulting in improved associating outcomes. TarViS (Athar et al., 2023) introduces a window space-time attention mechanism at the within-clip stage.
Axial-VS extends these concepts by computing axial-trajectory attention along object motion trajectories, sequentially along the height and width axes. This operation is considered more effective for improving within-clip tracking capabilities and facilitating simultaneous reasoning about temporal and spatial relations. Moreover, we apply axial-trajectory attention to object queries to efficiently correlate cross-clip predictions, thereby enhancing cross-clip consistency.

Video Segmentation Video segmentation aims to achieve consistent pixel-level scene understanding throughout a video. The majority of studies in this field primarily focus on video instance segmentation, addressing the challenges posed by 'thing' instances. Additionally, video panoptic segmentation is also crucial, emphasizing a comprehensive understanding that includes both 'thing' and 'stuff' classes. Both video instance and panoptic segmentation employ similar tracking modules, and thus we briefly introduce them together. Based on the input manner, they can be roughly categorized into frame-level segmenters (Yang et al., 2019; Kim et al., 2020; Yang et al., 2021b; Ke et al., 2021; Fu et al., 2021; Li et al., 2022; Wu et al., 2022c; Huang et al., 2022; Heo et al., 2023; Liu et al., 2023; Ying et al., 2023; Li et al., 2023a), clip-level segmenters (Athar et al., 2020; Qiao et al., 2021; Hwang et al., 2021; Wu et al., 2022a; Mei et al., 2022; Athar et al., 2023; Li et al., 2023b; Shin et al., 2024), and video-level segmenters (Wang et al., 2021b; Lin et al., 2021; Wu et al., 2022b; Heo et al., 2022; Zhang et al., 2023). Specifically, TubeFormer (Kim et al., 2022) tackles multiple video segmentation tasks in a unified manner (Wang et al., 2021a), while TarViS (Athar et al., 2023) proposes task-independent queries. Tube-Link (Li et al., 2023b) exploits contrastive learning to better align the cross-clip predictions. Video-kMaX (Shin et al., 2024) extends the image segmenter (Yu et al., 2022b) for clip-level video segmentation. VITA (Heo et al., 2022) presents a video-level segmenter framework by introducing a set of video queries. DVIS (Zhang et al., 2023) proposes a referring tracker to denoise the frame-level predictions and a temporal refiner to reason about long-term tracking relations. Our work focuses specifically on improving clip-level segmenters, and is thus mostly related to the clip-level segmenters Video-kMaX (Shin et al., 2024) and Tube-Link (Li et al., 2023b). Building on top of them, Axial-VS proposes the within-clip and cross-clip tracking modules for enhancing the temporal consistency within each clip and over the whole video, respectively. Our cross-clip tracking module is similar to VITA (Heo et al., 2022) and DVIS (Zhang et al., 2023) in the sense that object queries are refined to obtain the final video outputs. However, our model builds on top of clip-level segmenters instead of frame-level segmenters, and we use axial-trajectory attention to refine the object queries without extra complex designs, while VITA introduces another set of video queries and DVIS additionally cross-attends to the queries cached in the memory.

## 3 Method

In this section, we briefly overview the clip-level video segmenter framework in Sec. 3.1. We then introduce the proposed *within-clip* tracking and *cross-clip* tracking modules in Sec. 3.2 and Sec. 3.3, respectively.
## 3.1 Video Segmentation With Clip-Level Segmenter

Formulation of Video Segmentation Recent works (Kim et al., 2022; Li et al., 2022) have unified different video segmentation tasks as a simple set prediction task (Carion et al., 2020), where the input video is segmented into a set of tubes (a tube is obtained by linking segmentation masks along the time axis) to match the ground-truth tubes. Concretely, given an input video $V\in\mathbb{R}^{L\times3\times H\times W}$, where $L$ represents the video length and $H$, $W$ represent the frame height and width, video segmentation aims at segmenting it into a set of $N$ class-labeled tubes:

$$\{\hat{y}_{i}\}=\{(\hat{m}_{i},\hat{p}_{i}(c))\}_{i=1}^{N},\tag{1}$$

where $\hat{m}_{i}\in[0,1]^{L\times H\times W}$ and $\hat{p}_{i}(c)$ represent the predicted tube and its corresponding semantic class probability. The ground-truth set containing $M$ class-labeled tubes is similarly represented as $\{y_{i}\}=\{(m_{i},p_{i}(c))\}_{i=1}^{M}$. These two sets are matched through Hungarian Matching (Kuhn, 1955) during training to compute the losses.

Formulation of Clip-Level Video Segmentation The above video segmentation formulation is theoretically applicable to any length $L$ of video sequences. However, in practice, it is infeasible to fit the whole video into modern large network backbones during training. As a result, most works exploit frame-level or clip-level segmenters (a clip is a short video sequence, typically of two or three frames) to get frame-level or clip-level tubes first and further associate them to obtain the final video-level tubes. In this work, we focus on the clip-level segmenter, since it better captures local temporal information between frames in the same clip. Formally, we split the whole video $V$ into a set of *non-overlapping* clips: $v_{i}\in\mathbb{R}^{T\times3\times H\times W}$, where $T$ represents the length of each clip in the temporal dimension (assuming that $L$ is divisible by $T$ for simplicity; if not, we simply duplicate the last frame). For the clip-level segmenter, we require $T\geq2$.

Overview of Proposed Axial-VS Given the independently predicted clip-level segmentation, we propose Axial-VS, a meta-architecture that builds on top of an off-the-shelf clip-level segmenter (e.g., Video-kMaX (Shin et al., 2024) or Tube-Link (Li et al., 2023b)) to generate the final temporally consistent video-level segmentation results. Building on top of the clip-level segmenter, Axial-VS contains two additional modules: the within-clip tracking module and the cross-clip tracking module, as shown in Fig. 2. We detail each module in the following subsections, and choose Video-kMaX as the baseline for simplicity in describing the detailed designs.

![4_image_0.png](4_image_0.png)

Figure 2: Overview of Axial-VS, which builds two components on top of a clip-level segmenter (blue): the within-clip tracking and cross-clip tracking modules (orange). Both modules exploit the axial-trajectory attention to enhance temporal consistency. We obtain video features by concatenating all clip features output by the pixel decoder (totally K clips), and the video prediction by multiplying the video features and refined clip object queries.

![4_image_1.png](4_image_1.png)

Figure 3: The within-clip tracking module takes input clip features extracted by the network backbone, iteratively stacks Multi-Scale Deformable (MSDeform) Attention and axial-trajectory attention (sequentially along H- and W-axes) for $N_w$ times, and outputs the spatially and temporally consistent clip features.

## 3.2 Within-Clip Tracking Module
As shown in Fig. 3, the main component of the within-clip tracking module is the proposed axial-trajectory attention, which decomposes the object motion along the height-axis and width-axis, and effectively learns to track objects across the frames in the same clip (thus called *within-clip* tracking). In the module, we also enrich the features by exploiting the multi-scale deformable attention (Zhu et al., 2020) to enhance the spatial information extraction. We explain the module in detail below.

Axial-Trajectory Attention Trajectory attention (Patrick et al., 2021) was originally proposed to capture the object motion information contained in the video for the classification task. However, unlike video classification, where the input video is usually pre-processed into a small set of tokens and the output prediction is a single label, video segmentation requires dense prediction (i.e., per pixel) results, making it infeasible to directly apply trajectory attention, which has quadratic complexity proportional to the input size. To unleash the potential of tracking objects through attention in video segmentation, we propose *axial-trajectory attention* that tracks objects along axial trajectories, which not only effectively captures object motion information but also reduces the computational cost.

Formally, given an input video clip consisting of $T$ frames, we forward it through a frame-level network backbone (e.g., ConvNeXt (Liu et al., 2022b)) to extract the feature map $F\in\mathbb{R}^{T\times D\times H\times W}$, where $D$, $H$, $W$ stand for the dimension, height and width of the feature map, respectively. We note that the feature map $F$ is extracted frame-by-frame via the network backbone, and thus no temporal information is exchanged between frames. We further reshape the feature into $F_{h}\in\mathbb{R}^{W\times TH\times D}$ to obtain a sequence of $TH$ pixel features $\mathbf{x}_{th}\in\mathbb{R}^{D}$. Following (Vaswani et al., 2017), we linearly project $\mathbf{x}_{th}$ to a set of query-key-value vectors $\mathbf{q}_{th},\mathbf{k}_{th},\mathbf{v}_{th}\in\mathbb{R}^{D}$. We then perform *axial-attention* along trajectories (i.e., the probabilistic path of a point between frames). Specifically, we choose a specific time-height position $th$ as the *reference point* to illustrate the computation process of axial-trajectory attention. After obtaining its corresponding query $\mathbf{q}_{th}$, we construct a set of trajectory points $\widetilde{\mathbf{y}}_{tt'h}$, which represent the pooled information weighted by the trajectory probability. The *axial-trajectory* extends for the duration of the clip, and its point $\widetilde{\mathbf{y}}_{tt'h}\in\mathbb{R}^{D}$ at different times $t'$ is defined as:

$$\widetilde{\mathbf{y}}_{tt'h}=\sum_{h'}\mathbf{v}_{t'h'}\cdot\frac{\exp\left\langle\mathbf{q}_{th},\mathbf{k}_{t'h'}\right\rangle}{\sum_{\bar{h}}\exp\left\langle\mathbf{q}_{th},\mathbf{k}_{t'\bar{h}}\right\rangle}.\tag{2}$$

Note that this step computes the axial-trajectory attention along the H-axis (index $h'$), independently for each frame. It finds the axial-trajectory path of the reference point $th$ across frames $t'$ in the clip by comparing the reference point's query $\mathbf{q}_{th}$ to the keys $\mathbf{k}_{t'h'}$, only along the H-axis. To reason about the intra-clip connections, we further pool the trajectories over time $t'$.
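A minimal PyTorch sketch of this first step (Eq. (2)) is given below, assuming a single attention head and adding the usual $1/\sqrt{D}$ scaling inside the exponent (the equation above writes the unscaled inner product); the function name and the explicit projection matrices are illustrative stand-ins for learned layers.

```python
import torch

def h_axis_trajectories(x, wq, wk, wv):
    """Trajectory points along the H-axis, as in Eq. (2).

    x:  (W, T, H, D) clip features; the W-axis acts as a batch dimension.
    wq, wk, wv: (D, D) projection matrices.
    Returns (W, T, H, T, D): the trajectory point of reference (t, h)
    at every frame t' of the clip.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # Affinity of each reference (t, h) with every (t', h').
    logits = torch.einsum('wthd,wsgd->wthsg', q, k) / q.shape[-1] ** 0.5
    attn = logits.softmax(dim=-1)              # normalize over h' only
    return torch.einsum('wthsg,wsgd->wthsd', attn, v)
```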
To pool over time, we linearly project the trajectory points and obtain a new set of query-key-value vectors:

$$\widetilde{\mathbf{q}}_{th}=\mathbf{W}_{q}\widetilde{\mathbf{y}}_{tth},\quad\widetilde{\mathbf{k}}_{tt'h}=\mathbf{W}_{k}\widetilde{\mathbf{y}}_{tt'h},\quad\widetilde{\mathbf{v}}_{tt'h}=\mathbf{W}_{v}\widetilde{\mathbf{y}}_{tt'h},\tag{3}$$

where $\mathbf{W}_q, \mathbf{W}_k$, and $\mathbf{W}_v$ are the linear projection matrices for query, key, and value. We then update the reference point at the time-height position $th$ by applying 1D attention along the time $t'$:

$$\mathbf{y}_{th}=\sum_{t'}\widetilde{\mathbf{v}}_{tt'h}\cdot\frac{\exp\left\langle\widetilde{\mathbf{q}}_{th},\widetilde{\mathbf{k}}_{tt'h}\right\rangle}{\sum_{\bar{t}}\exp\left\langle\widetilde{\mathbf{q}}_{th},\widetilde{\mathbf{k}}_{t\bar{t}h}\right\rangle}.\tag{4}$$

With the above update rules, we propagate the motion information along the H-axis in the video clip. To capture global information, we further reshape the feature into $F_w \in \mathbb{R}^{H\times TW\times D}$ and apply the same axial-trajectory attention (but along the W-axis) consecutively to capture the width dynamics as well. The proposed axial-trajectory attention (illustrated in Fig. 4) effectively reduces the computational complexity of the original trajectory attention from $O(T^2H^2W^2)$ to $O(T^2H^2W + T^2W^2H)$, allowing us to apply it to the dense video feature maps, and to reason about the motion information across frames in the same clip.

Multi-Scale Attention To enhance the features spatially, we further adopt the multi-scale deformable attention (Zhu et al., 2020) for exchanging information at different scales of features. Specifically, we apply the multi-scale deformable attention to the feature map $F$ (extracted by the network backbone) frame-by-frame, which effectively exchanges the information across feature map scales (stride 32, 16, and 8) for each frame. In the end, the proposed within-clip tracking module is obtained by iteratively stacking multi-scale deformable attention and axial-trajectory attention (for Nw times) to ensure that the learned features are spatially consistent across the scales and temporally consistent across the frames in the same clip.

![6_image_0.png](6_image_0.png)

Figure 4: **Illustration of Axial-Trajectory Attention** (only *Height*-axis axial-trajectory attention is shown for simplicity), which includes two steps: computing the axial-trajectories $\widetilde{\mathbf{y}}$ along the *Height*-axis (Eq. 2) of the dense pixel feature maps $\mathbf{x} \in \mathbb{R}^{TH\times D}$, where $T$, $H$, and $D$ denote the clip length, feature height and channels, respectively, and then computing temporal attention along the axial-trajectories (Eq. 4) to obtain the temporally consistent features $\mathbf{y}$.

![6_image_1.png](6_image_1.png)

Figure 5: **Cross-clip tracking module** refines K sets of clip object queries by performing axial-trajectory attention and temporal atrous spatial pyramid pooling (Temporal-ASPP) for Nc times.

Transformer Decoder After extracting the spatially and temporally enhanced features, we follow typical video mask transformers (e.g., Video-kMaX (Shin et al., 2024) or Tube-Link (Li et al., 2023b)) to produce clip-level predictions, where clip object queries $C_k \in \mathbb{R}^{N\times D}$ (for the k-th clip) are iteratively refined by multiple transformer decoder layers (Carion et al., 2020). The resulting *clip object queries* are used to generate a set of N class-labeled tubes within the clip, as described in Sec. 3.1.
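Continuing the sketch from Sec. 3.2, the temporal pooling of Eqs. (3)-(4) can be written as below. `TemporalTrajectoryPooling` is a hypothetical module name; in our notation the query is taken at the diagonal trajectory point $\widetilde{\mathbf{y}}_{tth}$, and the same two-step computation is repeated along the W-axis after reshaping, as described above.

```python
import torch
import torch.nn as nn

class TemporalTrajectoryPooling(nn.Module):
    """Sketch of Eqs. (3)-(4): 1D attention along time over trajectory points."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)  # W_q
        self.to_k = nn.Linear(dim, dim, bias=False)  # W_k
        self.to_v = nn.Linear(dim, dim, bias=False)  # W_v

    def forward(self, y_traj):
        # y_traj: (W, T, H, T', D) -- trajectory points y~_{t t' h} from Eq. (2).
        W, T, H, Tp, D = y_traj.shape
        # Query at the diagonal t' = t, i.e. y~_{t t h} (Eq. 3).
        y_diag = torch.stack([y_traj[:, t, :, t] for t in range(T)], dim=1)
        q = self.to_q(y_diag)   # (W, T, H, D)
        k = self.to_k(y_traj)   # (W, T, H, T', D)
        v = self.to_v(y_traj)   # (W, T, H, T', D)
        # 1D attention along time t' (Eq. 4).
        logits = torch.einsum('wthd,wthsd->wths', q, k)
        probs = logits.softmax(dim=-1)
        return torch.einsum('wths,wthsd->wthd', probs, v)  # y_{t h}

# Trajectories of shape (32, 2, 32, 2, 64) -> pooled features (32, 2, 32, 64).
y = TemporalTrajectoryPooling(64)(torch.randn(32, 2, 32, 2, 64))
print(y.shape)
```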
Clip-Level (Near-Online) Inference With the above within-clip tracking module, our clip-level segmenter is capable of segmenting the video in a near-online fashion (i.e., clip-by-clip). Unlike Video-kMaX (Shin et al., 2024), which takes overlapping clips as input and uses video stitching (Qiao et al., 2021) to link predicted clip-level tubes, our method simply uses Hungarian Matching (Kuhn, 1955) to associate the clip-level tubes via the clip object queries (similar to MinVIS (Huang et al., 2022), but we work on the clip level instead of the frame level), since our input clips are non-overlapping.
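As a minimal sketch of this association step, the snippet below matches the object queries of two adjacent clips with Hungarian Matching; the cosine-similarity cost and the function name `match_clip_queries` are our illustrative choices and are not taken from any specific code-base.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clip_queries(queries_a, queries_b):
    """Associate object queries of two adjacent clips via Hungarian Matching.

    queries_a, queries_b: (N, D) arrays of clip object queries.
    Returns a permutation `perm` such that queries_b[perm[i]] continues
    track i from the previous clip.
    """
    a = queries_a / np.linalg.norm(queries_a, axis=1, keepdims=True)
    b = queries_b / np.linalg.norm(queries_b, axis=1, keepdims=True)
    cost = -a @ b.T  # negative cosine similarity: lower cost = better match
    row_ind, col_ind = linear_sum_assignment(cost)
    perm = np.empty(len(queries_a), dtype=int)
    perm[row_ind] = col_ind
    return perm

# Example: N=100 queries with D=256 channels per clip.
prev_q, cur_q = np.random.randn(100, 256), np.random.randn(100, 256)
print(match_clip_queries(prev_q, cur_q)[:5])
```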
## 3.3 Cross-Clip Tracking Module

Though axial-trajectory attention along with the multi-scale deformable attention effectively improves the within-clip tracking ability, the inconsistency between clips (i.e., beyond the clip length $T$) remains a challenging problem, especially in fast-moving or occluded scenes. To address these issues, we further propose a cross-clip tracking module to refine and better associate the clip-level predictions. Concretely, given all the clip object queries $\{C_k\}_{k=1}^{K} \in \mathbb{R}^{KN\times D}$ of a video (which is divided into $K = L/T$ non-overlapping clips, and the k-th clip has its own clip object queries $C_k \in \mathbb{R}^{N\times D}$), we first use Hungarian Matching to align the clip object queries as the initial tracking results (i.e., the "clip-level inference" in Sec. 3.2). Subsequently, these results are refined by our proposed cross-clip tracking module to capture temporal connections across the entire video, traversing all clips.

![7_image_0.png](7_image_0.png)

Figure 6: **Illustration of Temporal-ASPP,** which operates on the clip object queries and includes three parallel atrous convolutions with different atrous rates to aggregate local temporal cross-clip connections across different time spans, followed by a 1x1 convolution and layer norm to obtain the final updated clip object queries.

As shown in Fig. 5, the proposed cross-clip tracking module contains two operations: axial-trajectory attention and Temporal Atrous Spatial Pyramid Pooling (Temporal-ASPP). We elaborate on each operation in detail below.

Axial-Trajectory Attention For the k-th clip, the clip object queries $C_k$ encode the clip-level tube predictions (i.e., each query in $C_k$ generates the class-labeled tube for a certain object in the k-th clip). Therefore, associating clip-level prediction results is similar to finding the trajectory path of object queries in the whole video. Motivated by this observation, we suggest leveraging axial-trajectory attention to capture whole-video temporal connections between clips. This can be accomplished by organizing all clip object queries in a sequence based on the temporal order (i.e., clip index) and applying axial-trajectory attention along the sequence to infer global cross-clip connections. Formally, for a video divided into $K$ clips (each clip processed by $N$ object queries), each object query $C_{kn} \in \{C_k\}$ is first projected into a set of query-key-value vectors $\mathbf{q}_{kn}, \mathbf{k}_{kn}, \mathbf{v}_{kn} \in \mathbb{R}^{D}$. Then we compute a set of trajectory queries $\widetilde{Z}_{kk'n}$ by calculating the probabilistic path of each object query:

$$\widetilde{Z}_{kk'n}=\sum_{n'}\mathbf{v}_{k'n'}\cdot\frac{\exp\left\langle\mathbf{q}_{kn},\mathbf{k}_{k'n'}\right\rangle}{\sum_{\bar{n}}\exp\left\langle\mathbf{q}_{kn},\mathbf{k}_{k'\bar{n}}\right\rangle}.\tag{5}$$

After further projecting the trajectory queries $\widetilde{Z}_{kk'n}$ into $\widetilde{\mathbf{q}}_{kn}, \widetilde{\mathbf{k}}_{kk'n}, \widetilde{\mathbf{v}}_{kk'n}$, we aggregate the cross-clip connections along the trajectory path of object queries through:

$$Z_{kn}=\sum_{k'}\widetilde{\mathbf{v}}_{kk'n}\cdot\frac{\exp\left\langle\widetilde{\mathbf{q}}_{kn},\widetilde{\mathbf{k}}_{kk'n}\right\rangle}{\sum_{\bar{k}}\exp\left\langle\widetilde{\mathbf{q}}_{kn},\widetilde{\mathbf{k}}_{k\bar{k}n}\right\rangle}.\tag{6}$$

Temporal-ASPP While the above axial-trajectory attention reasons about the whole-video temporal connections, it can be further enriched by a short-term tracking module. Motivated by the success of the atrous spatial pyramid pooling (ASPP (Chen et al., 2017a; 2018)) in capturing spatially multi-scale context information, we extend it to the temporal domain. Specifically, our Temporal-ASPP module contains three parallel temporal atrous convolutions (Chen et al., 2015; 2017b) with different rates applied to all the clip object queries $Z$ for capturing motion at different time spans, as illustrated in Fig. 6.

Cross-Clip Tracking Module The proposed cross-clip tracking module iteratively stacks the axial-trajectory attention and Temporal-ASPP to refine all the clip object queries $\{C_k\}_{k=1}^{K}$ of a video, obtaining a temporally consistent prediction at the video level.

Video-Level (Offline) Inference With the proposed within-clip and cross-clip tracking modules, built on top of any clip-level video segmenter, we can now run inference on the whole video in an offline fashion by exploiting all the refined clip object queries. We first obtain the video features by concatenating all clip features produced by the pixel decoder (totally K clips). The predicted video-level tubes are then generated by multiplying all the clip object queries with the video features (similar to image mask transformers (Wang et al., 2021a; Yu et al., 2022a)). To obtain the predicted classes for the video-level tubes, we exploit another 1D convolution layer (i.e., the "Temporal 1D Conv" in the top-right of Fig. 5) to generate the temporally weighted class predictions, motivated by the fact that the object queries on the trajectory path should have the same class prediction.
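A minimal sketch of the Temporal-ASPP block described above is given below, assuming the atrous rates (1, 2, 3) ablated in Tab. 6b. Since the text does not specify how the three parallel branches are merged before the 1x1 convolution and layer norm, we simply sum them here, and the module name `TemporalASPP` as written is ours.

```python
import torch
import torch.nn as nn

class TemporalASPP(nn.Module):
    """Sketch of Temporal-ASPP over clip object queries."""

    def __init__(self, dim, rates=(1, 2, 3)):
        super().__init__()
        # Three parallel temporal atrous (dilated) 1D convolutions.
        self.branches = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.proj = nn.Conv1d(dim, dim, kernel_size=1)  # 1x1 convolution
        self.norm = nn.LayerNorm(dim)

    def forward(self, z):
        # z: (K, N, D) -- object queries for K clips, N queries per clip.
        # Convolve along the clip axis K, independently for each query index.
        x = z.permute(1, 2, 0)                      # (N, D, K)
        x = sum(branch(x) for branch in self.branches)  # merge branches (assumed: sum)
        x = self.proj(x).permute(2, 0, 1)           # back to (K, N, D)
        return self.norm(x)

# Example: a video of K=12 clips, N=100 queries, D=256 channels.
z = torch.randn(12, 100, 256)
print(TemporalASPP(256)(z).shape)  # torch.Size([12, 100, 256])
```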
Table 1: **Video Panoptic Segmentation (VPS) results.** We reproduce baseline Video-kMaX (column RP) by taking non-overlapping clips as input and replacing their hierarchical matching scheme with simple Hungarian Matching on object queries. We then compare our Axial-VS with other state-of-the-art works. Reported results of Axial-VS are averaged over 3 runs. WC: Our Within-Clip tracking module. CC: Our Cross-Clip tracking module.

a) **[VPS] Effects of proposed modules on VIPSeg val set**

| method | backbone | RP | WC | CC | VPQ | VPQTh | VPQSt |
|---|---|---|---|---|---|---|---|
| Video-kMaX | ResNet50 | - | - | - | 38.2 | - | - |
| Video-kMaX | ResNet50 | ✓ | - | - | 42.7 | 42.5 | 42.9 |
| Axial-VS | ResNet50 | ✓ | ✓ | - | 46.1 | 45.6 | 46.6 |
| Axial-VS | ResNet50 | ✓ | ✓ | ✓ | 46.7 | 46.7 | 46.6 |
| Video-kMaX | ConvNeXt-L | - | - | - | 51.9 | - | - |
| Video-kMaX | ConvNeXt-L | ✓ | - | - | 52.7 | 54.1 | 51.3 |
| Axial-VS | ConvNeXt-L | ✓ | ✓ | - | 56.2 | 58.4 | 54.0 |
| Axial-VS | ConvNeXt-L | ✓ | ✓ | ✓ | 57.1 | 59.3 | 54.8 |
| Axial-VS | ConvNeXtV2-L | ✓ | ✓ | - | 57.7 | 58.3 | 57.1 |
| Axial-VS | ConvNeXtV2-L | ✓ | ✓ | ✓ | 58.0 | 58.8 | 57.2 |

b) **[VPS] Comparisons with others on VIPSeg val set**

| method | backbone | VPQ | VPQTh | VPQSt |
|---|---|---|---|---|
| *online/near-online methods* | | | | |
| TarVIS (Athar et al., 2023) | ResNet50 | 33.5 | 39.2 | 28.5 |
| DVIS (Zhang et al., 2023) | ResNet50 | 39.2 | 39.3 | 39.0 |
| TarVIS (Athar et al., 2023) | Swin-L | 48.0 | 58.2 | 39.0 |
| DVIS (Zhang et al., 2023) | Swin-L | 54.7 | 54.8 | 54.6 |
| Axial-VS w/ Video-kMaX | ResNet50 | 46.1 | 45.6 | 46.6 |
| Axial-VS w/ Video-kMaX | ConvNeXt-L | 56.2 | 58.4 | 54.0 |
| Axial-VS w/ Video-kMaX | ConvNeXtV2-L | 57.7 | 58.3 | 57.1 |
| *offline methods* | | | | |
| DVIS (Zhang et al., 2023) | ResNet50 | 43.2 | 43.6 | 42.8 |
| DVIS (Zhang et al., 2023) | Swin-L | 57.6 | 59.9 | 55.5 |
| Axial-VS w/ Video-kMaX | ResNet50 | 46.7 | 46.7 | 46.6 |
| Axial-VS w/ Video-kMaX | ConvNeXt-L | 57.1 | 59.3 | 54.8 |
| Axial-VS w/ Video-kMaX | ConvNeXtV2-L | 58.0 | 58.8 | 57.2 |

## 4 Experimental Results

We evaluate Axial-VS based on two different clip-level segmenters on four widely used video segmentation benchmarks to show its generalizability. Specifically, for video panoptic segmentation (VPS), we build Axial-VS based on Video-kMaX (Shin et al., 2024) and report performance on VIPSeg (Miao et al., 2022). We also build Axial-VS on top of Tube-Link (Li et al., 2023b) for video instance segmentation (VIS) and report the performance on Youtube-VIS 2021 (Yang et al., 2021a), 2022 (Yang et al., 2022), and OVIS (Qi et al., 2022). Since Tube-Link is built on top of Mask2Former (Cheng et al., 2022) and thus already contains six layers of Multi-Scale Deformable Attention (MSDeformAttn), we simplify our within-clip tracking module by directly inserting axial-trajectory attention after each original MSDeformAttn. We follow the original settings of Video-kMaX and Tube-Link and use the same training losses. Note that when training the cross-clip tracking module, both the clip-level segmenter and the within-clip tracking module are frozen due to memory constraints.
We utilize Video Panoptic Quality (VPQ), as defined in VPSNet (Kim et al., 2020), and Average Precision (AP), as defined in MaskTrack R-CNN (Yang et al., 2019), for evaluating the models on VPS and VIS, respectively. We provide more implementation details in the appendix.

## 4.1 Improvements Over Baselines

We first provide a systematic study to validate the effectiveness of the proposed modules.

Video Panoptic Segmentation (VPS) Tab. 1a summarizes the improvements over the baseline Video-kMaX (Shin et al., 2024) on the VIPSeg dataset. For a fair comparison, we first reproduce Video-kMaX in our PyTorch framework (it was originally implemented in TensorFlow (Weber et al., 2021a)). Our re-implementation yields significantly improved VPQ results compared to the original model, with a 4.5% and 0.8% VPQ improvement using ResNet50 and ConvNeXt-L, respectively, establishing a solid baseline. As shown in the table, using the proposed within-clip tracking module improves over the reproduced solid baseline by 3.4% and 3.5% VPQ with ResNet50 and ConvNeXt-L, respectively. Employing the proposed cross-clip tracking module further improves the performance by an additional 0.6% and 0.9% VPQ with ResNet50 and ConvNeXt-L, respectively. Finally, using the modern ConvNeXtV2-L brings another 1.5% and 0.9% improvement, when compared to the ConvNeXt-L counterparts.

Video Instance Segmentation (VIS) Tab. 2 summarizes the improvements over the baseline Tube-Link (Li et al., 2023b) on the Youtube-VIS-21, -22, and OVIS datasets. Similarly, for a fair comparison, we first reproduce the Tube-Link results, using their official code-base. Our reproduction yields similar performance to the original model, except on OVIS, where we observe a gap of 4.1% AP for ResNet50. On Youtube-VIS-21 (Tab. 2a), the proposed within-clip tracking module improves the reproduced baselines by 0.6% and 0.6% for ResNet50 and Swin-L, respectively.

Table 2: **Video Instance Segmentation (VIS) results.** We reproduce baseline Tube-Link (column RP) with their official code-base. We then build on top of it with our Within-Clip tracking module (WC) and Cross-Clip tracking module (CC). For Youtube-VIS-22, we mainly report APlong (for long videos) and refer to the appendix for APshort (for short videos) and APall (their average). Reported results are averaged over 3 runs. §: Our best attempt to reproduce Tube-Link's performance (25.4%), lower than the result (29.5%) reported in the paper. Their provided checkpoints also yield lower results (26.7%). N/A: Not available from their code-base, but we have attempted to reproduce.
a) [VIS] Youtube-VIS-21 val set

| method | backbone | RP | WC | CC | AP |
|---|---|---|---|---|---|
| Tube-Link | ResNet50 | - | - | - | 47.9 |
| Tube-Link | ResNet50 | ✓ | - | - | 47.8 |
| Axial-VS | ResNet50 | ✓ | ✓ | - | 48.4 |
| Axial-VS | ResNet50 | ✓ | ✓ | ✓ | 48.5 |
| Tube-Link | Swin-L | - | - | - | 58.4 |
| Tube-Link | Swin-L | ✓ | - | - | 58.2 |
| Axial-VS | Swin-L | ✓ | ✓ | - | 58.8 |
| Axial-VS | Swin-L | ✓ | ✓ | ✓ | 59.1 |

b) [VIS] Youtube-VIS-22 val set

| method | backbone | RP | WC | CC | APlong |
|---|---|---|---|---|---|
| Tube-Link | ResNet50 | - | - | - | 31.1 |
| Tube-Link | ResNet50 | ✓ | - | - | 32.1 |
| Axial-VS | ResNet50 | ✓ | ✓ | - | 36.5 |
| Axial-VS | ResNet50 | ✓ | ✓ | ✓ | 37.0 |
| Tube-Link | Swin-L | - | - | - | 34.2 |
| Tube-Link | Swin-L | ✓ | - | - | 34.2 |
| Axial-VS | Swin-L | ✓ | ✓ | - | 35.9 |
| Axial-VS | Swin-L | ✓ | ✓ | ✓ | 38.9 |

c) [VIS] OVIS val set

| method | backbone | RP | WC | CC | AP |
|---|---|---|---|---|---|
| Tube-Link | ResNet50 | - | - | - | 29.5 |
| Tube-Link | ResNet50 | ✓ | - | - | 25.4§ |
| Axial-VS | ResNet50 | ✓ | ✓ | - | 27.6 |
| Axial-VS | ResNet50 | ✓ | ✓ | ✓ | 28.3 |
| Tube-Link | Swin-L | - | - | - | N/A |
| Tube-Link | Swin-L | ✓ | - | - | 33.3 |
| Axial-VS | Swin-L | ✓ | ✓ | - | 39.1 |
| Axial-VS | Swin-L | ✓ | ✓ | ✓ | 39.8 |

Using our cross-clip tracking module additionally improves the performance by 0.1% and 0.3% for ResNet50 and Swin-L, respectively. On Youtube-VIS-22 (Tab. 2b), our proposed modules bring more significant improvements, showing our method's ability to handle the challenging long videos in the dataset. Specifically, using our within-clip tracking module brings gains of 4.4% and 1.7% APlong for ResNet50 and Swin-L, respectively. Our cross-clip tracking module further improves the performance by 0.5% and 3.0% APlong for ResNet50 and Swin-L, respectively. On OVIS (Tab. 2c), even though we did not successfully reproduce Tube-Link (using their provided config files), we still observe a significant improvement brought by the proposed modules. Particularly, our within-clip tracking module improves the baselines by 2.2% and 5.8% AP for ResNet50 and Swin-L, respectively. Further improvements of 0.7% and 0.7% AP for ResNet50 and Swin-L can be attained with the proposed cross-clip tracking module. To summarize, our proposed modules bring more remarkable improvements for long and challenging datasets.

## 4.2 Comparisons With Other Methods

After analyzing the improvements brought by the proposed modules, we now move on to compare our Axial-VS with other state-of-the-art methods.

Video Panoptic Segmentation (VPS) As shown in Tab. 1b, in the online/near-online setting, when using ResNet50, our Axial-VS significantly outperforms TarVIS (Athar et al., 2023) (which co-trains on and exploits multiple video segmentation datasets) by a large margin of 12.6% VPQ. Axial-VS also outperforms the recent ICCV 2023 work DVIS (Zhang et al., 2023) by a healthy margin of 6.9% VPQ. When using stronger backbones, Axial-VS with ConvNeXt-L still outperforms TarVIS and DVIS with Swin-L by 8.2% and 1.5% VPQ, respectively.
The performance is further improved by using the modern ConvNeXtV2-L backbone, attaining 57.7% VPQ. In the offline setting, Axial-VS with ResNet50 outperforms DVIS by 3.5% VPQ, while Axial-VS with ConvNeXt-L performs comparably to DVIS with Swin-L. Finally, when using the modern ConvNeXtV2-L, Axial-VS achieves 58.0% VPQ, setting a new state-of-the-art.

Video Instance Segmentation (VIS) Tab. 3 compares Axial-VS with other state-of-the-art methods for VIS. On Youtube-VIS-21 (Tab. 3a), Axial-VS exhibits a slight performance advantage over TarVIS (Athar et al., 2023) and DVIS (Zhang et al., 2023) with an improvement of 0.1% and 1.1% AP, respectively. On Youtube-VIS-22 (Tab. 3b), Axial-VS outperforms DVIS in both online/near-online and offline settings by 5.3% and 1.1% APlong, respectively.

## 4.3 Ablation Studies

We conduct ablation studies on VIPSeg, which features diverse scenes and long videos, using ResNet50. Here, we present ablations on attention operations and cross-clip tracking design, as well as hyper-parameters such as the number of layers in the within-clip tracking and cross-clip tracking modules, clip length and sampling range. We further provide GFLOPs comparisons, more visualizations and failure cases in the appendix.

Table 3: **Video Instance Segmentation (VIS) results.** We compare our Axial-VS with other state-of-the-art works on Youtube-VIS-21 and Youtube-VIS-22 val set. Reported results of Axial-VS are averaged over 3 runs. ∗: All results are reproduced by us using their official checkpoints.

a) [VIS] Youtube-VIS-21 val set

| method | backbone | AP |
|---|---|---|
| *online/near-online methods* | | |
| TarVIS (Athar et al., 2023) | ResNet50 | 48.3 |
| Axial-VS | ResNet50 | 48.4 |
| *offline methods* | | |
| VITA (Heo et al., 2022) | ResNet50 | 45.7 |
| DVIS (Zhang et al., 2023) | ResNet50 | 47.4 |
| Axial-VS | ResNet50 | 48.5 |

b) [VIS] Youtube-VIS-22 val set

| method | backbone | APlong |
|---|---|---|
| *online/near-online methods* | | |
| DVIS (Zhang et al., 2023)∗ | ResNet50 | 31.2 |
| Axial-VS | ResNet50 | 36.5 |
| *offline methods* | | |
| VITA (Heo et al., 2022)∗ | ResNet50 | 31.9 |
| DVIS (Zhang et al., 2023)∗ | ResNet50 | 35.9 |
| Axial-VS | ResNet50 | 37.0 |

a) [Ablations] Attention in the within-clip tracking module

| attention operations | VPQ |
|---|---|
| - | 42.7 |
| Joint Space-Time Attn (Vaswani et al., 2017) | 43.2 |
| Divided Space-Time Attn (Bertasius et al., 2021) | 43.6 |
| MSDeformAttn (Zhu et al., 2020) | 44.5 |
| Axial-Trajectory Attn | 44.7 |
| MSDeformAttn + Window Space-Time Attn (Athar et al., 2023) | 44.9 |
| MSDeformAttn + Axial-Trajectory Attn | 46.1 |

b) [Ablations] Cross-clip tracking design

| cross-clip tracking design | video query | encoder | decoder | VPQ |
|---|---|---|---|---|
| - | - | - | - | 46.1 |
| VITA (Heo et al., 2022) | ✓ | ✓ | ✓ | 46.3 |
| cross-clip tracking module | ✗ | ✓ | ✗ | 46.7 |

Table 4: Ablations on attention operations in the within-clip tracking module and cross-clip tracking design. For the within-clip tracking module, we compare Joint Space-Time Attention (Vaswani et al., 2017), Divided Space-Time Attention (Bertasius et al., 2021), Multi-Scale Deformable Attention (Zhu et al., 2020) (MSDeformAttn), Axial-Trajectory Attention, and TarVIS Temporal Neck (Athar et al., 2023) (i.e., MSDeformAttn + Window Space-Time Attention). Visualizations are provided in Fig. 7 to illustrate the differences between the compared attentions. For the cross-clip tracking module, we compare VITA (Heo et al., 2022) and the proposed cross-clip tracking module. Reported results are averaged over 3 runs. −: Not using any operations. Our final setting is marked in grey.

Attention Operations in Within-Clip Tracking Module In Tab. 4a, we ablate the attention operations used in the within-clip tracking module. To begin with, we utilize joint space-time attention (Vaswani et al., 2017), achieving a performance of 43.2% VPQ, which is a 0.5% improvement over the baseline. Subsequently, we apply divided space-time attention (Bertasius et al., 2021) (i.e., decomposing space-time attention into space- and time-axes), resulting in a performance of 43.6% VPQ. This represents a further improvement of 0.4% VPQ over joint space-time attention, potentially due to its larger learning capacity, incorporating distinct learning parameters for temporal and spatial attention. Afterwards, we employ either only Multi-Scale Deformable Attention (Zhu et al., 2020) (MSDeformAttn) for spatial attention or only the proposed Axial-Trajectory Attention (AxialTrjAttn), sequentially along H- and W-axes, for temporal attention, obtaining the performance of 44.5% and 44.7% VPQ, respectively. Replacing the attention operations with the TarVIS Temporal Neck (Athar et al., 2023) (i.e., MSDeformAttn + Window Space-Time Attention) increases the performance to 44.9% VPQ. Finally, if we change the attention scheme to the proposed MSDeformAttn + AxialTrjAttn, it brings another performance gain of 1.2% over TarVIS's design, achieving 46.1% VPQ.

To better understand the distinctions among the four space-time self-attention schemes introduced above, we illustrate them in Fig. 7. On the temporal side, "Joint Space-Time Attention" simply attends to all pixels, while both "Divided Space-Time Attention" and "Window Space-Time Attention" focus on a fixed region across time. In contrast, our proposed *axial-trajectory attention* effectively tracks the object across time, capturing more accurate information and yielding more temporally consistent features.

Tracking Design in Cross-Clip Tracking Module In Tab. 4b, we ablate the cross-clip tracking design in the cross-clip tracking module. We experiment with the design of VITA (Heo et al., 2022) to learn an additional set of video object queries by introducing a decoder for decoding information from encoded clip queries, yielding 46.3% VPQ, with a slight gain of 0.2% over the baseline. Replacing the VITA design with the proposed simple encoder-only and video-query-free design leads to a better performance of 46.7% VPQ.

Within-Clip Tracking Module In Tab. 5, we ablate the design choices of the proposed within-clip tracking module.
To begin with, we employ one MSDeformAttn and one TrjAttn (Trajectory Attention) with Nw = 2, obtaining the performance of 45.3% VPQ (+2.6% over the baseline). Replacing the TrjAttn with AxialTrjAttn yields a comparable performance of 45.4%. We note that stacking two TrjAttn layers runs out of memory on a V100 GPU. Stacking two AxialTrjAttn layers in each block leads to our final setting with 46.1%. Increasing or decreasing the number of blocks Nw degrades the performance slightly. If we employ one more AxialTrjAttn layer per block, the performance drops by 0.4%. Finally, if we change the iterative stacking scheme to a sequential manner (i.e., stacking two MSDeformAttn, followed by four AxialTrjAttn), the performance also decreases slightly by 0.3%.

![11_image_0.png](11_image_0.png)

Figure 7: **Illustration of the four space-time self-attention schemes studied in this work.** For clarity, we represent the reference point at frame 1 in red and display its space-time attended pixels under each scheme in non-red colors. Pixels without color are not involved in the self-attention computation of the reference point. Different colors within a scheme represent attentions applied along distinct dimensions. Note that the visualizations of Multi-Scale Deformable Attention (MSDeformAttn) and Axial-Trajectory Attention are simplified for improving visual clarity.

Table 5: **Ablation on within-clip tracking module.** We vary the number of Multi-Scale Deformable Attention (\#MSDeformAttn), number of Trajectory Attention (\#TrjAttn), or number of Axial-Trajectory Attention (\#AxialTrjAttn). Nw denotes the number of blocks (i.e., repetitions). Numbers are averaged over 3 runs. −: Not using any operations. N/A: Not available due to insufficient GPU memory capacity. The final setting is marked in grey.

| #MSDeformAttn | #TrjAttn | #AxialTrjAttn | Nw | VPQ |
|---|---|---|---|---|
| - | - | - | - | 42.7 |
| 1 | 1 | - | 2 | 45.3 |
| 1 | 2 | - | 2 | N/A |
| 1 | - | 1 | 2 | 45.4 |
| 1 | - | 2 | 1 | 44.7 |
| 1 | - | 2 | 2 | 46.1 |
| 1 | - | 2 | 3 | 45.2 |
| 1 | - | 3 | 2 | 45.7 |
| 1 | - | 4 | 2 | 45.5 |
| 2 | - | 4 | 1 | 45.8 |

Cross-Clip Tracking Module Tab. 6 summarizes our ablation studies on the design choices of the proposed cross-clip tracking module. Particularly, in Tab. 6a, we adopt different operations in the module. Using self-attention (SelfAttn) instead of axial-trajectory attention (AxialTrjAttn) degrades the performance by 0.3% VPQ. Removing the Temporal-ASPP operation also decreases the performance by 0.2%. In Tab. 6b, we ablate the atrous rates used in the three parallel temporal convolutions of the proposed Temporal-ASPP. Using atrous rates (1, 2, 3) (i.e., rates set to 1, 2, and 3 for those three convolutions, respectively) leads to the best performance. In Tab. 6c, we find that using Nc = 4 blocks in the cross-clip tracking module yields the best result.

a) Operations

| SelfAttn | AxialTrjAttn | Temporal-ASPP | VPQ |
|---|---|---|---|
| ✓ | - | ✓ | 46.4 |
| - | ✓ | ✓ | 46.7 |
| - | ✓ | - | 46.5 |

b) Temporal-ASPP

| atrous rates | VPQ |
|---|---|
| (1, 2, 3) | 46.7 |
| (1, 2, 5) | 46.5 |
| (1, 3, 5) | 46.4 |

c) Number of blocks Nc

| Nc | VPQ |
|---|---|
| 4 | 46.7 |
| 6 | 46.7 |
| 8 | 46.2 |

Table 6: **Ablation on cross-clip tracking module.** We vary operations in the block, Temporal-ASPP (atrous rates), and number of blocks Nc.
Numbers are averaged over 3 runs. The final setting is marked in grey.

a) Clip length

| clip length | Video-kMaX | Axial-VS (near-online) | Axial-VS (offline) |
|---|---|---|---|
| 2 | 42.7 | 46.1 | 46.7 |
| 3 | 42.1 | 45.1 | 45.5 |
| 4 | 41.4 | 44.2 | 44.7 |

b) Clip sampling range

| range | Axial-VS (near-online) |
|---|---|
| ±1 | 46.1 |
| ±3 | 45.8 |
| ±10 | 43.9 |

Table 7: **Ablation on clip length and clip sampling range.** We vary the clip length T and sampling range (i.e., frame index interval) of a clip. Numbers are averaged over 3 runs. The final setting is marked in grey.

Clip Length and Clip Sampling Range Tab. 7 summarizes our ablation studies on the clip length T (i.e., number of frames in a clip) and clip sampling range (i.e., frame index interval). Concretely, in Tab. 7a, we adopt different clip sizes for training the segmenters. As observed, the performance of Video-kMaX (Shin et al., 2024) gradually decreases with the increase of clip size (42.7% → 42.1% → 41.4%). However, both our Axial-VS near-online (with within-clip tracking module) and Axial-VS offline (with within-clip + cross-clip tracking module) models consistently bring steady improvements. Specifically, for a clip size of 2, the within-clip tracking module enhances Video-kMaX performance by 3.4%, achieving 46.1% VPQ. Subsequently, our cross-clip tracking module further elevates the performance by 0.6% to 46.7% VPQ. These improvements are also notable for other clip sizes. In conclusion, the observed performance drops are primarily influenced by the performance variance of the deployed clip-level segmenters (i.e., Video-kMaX). We propose two main hypotheses: Firstly, in existing video panoptic segmentation datasets such as VIPSeg, objects typically exhibit slow movement. Therefore, neighboring frames contain the most informative data, with minimal additional information gained from including more frames. Secondly, the transformer decoders employed in Video-kMaX may encounter challenges when processing larger feature maps associated with longer clip lengths. As indicated in their original paper, Video-kMaX also adopts a clip length of 2 in their final settings when training on VIPSeg. In Tab. 7b, we ablate the sampling range used when constructing a training clip. Using continuous frames (i.e., ±1) leads to the best performance. While slightly increasing the sampling range to ±3 degrades the performance by 0.3% to 45.8% VPQ, increasing it to ±10 greatly hampers the learning of the within-clip tracking module, yielding only 43.9% VPQ.

## 5 Conclusion

In conclusion, our contribution, Axial-VS, represents a meta-architecture that elevates the capabilities of a standard clip-level segmenter through the incorporation of within-clip and cross-clip tracking modules. These modules, empowered by axial-trajectory attention, strategically enhance short-term and long-term temporal consistency. The exemplary performance of Axial-VS on video segmentation benchmarks underscores its efficacy in mitigating the limitations observed in contemporary clip-level video segmenters.

## References

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 6836–6846, 2021.

Ali Athar, Sabarinath Mahadevan, Aljoša Ošep, Laura Leal-Taixé, and Bastian Leibe.
STEm-Seg: Spatiotemporal embeddings for instance segmentation in videos. In Proceedings of the European Conference on Computer Vision, 2020. Ali Athar, Alexander Hermans, Jonathon Luiten, Deva Ramanan, and Bastian Leibe. Tarvis: A unified approach for target-based video segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18738–18748, 2023. Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In *International Conference on Machine Learning*. PMLR, 2021. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, pp. 213–229. Springer, 2020. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In International Conference on Learning Representations, 2015. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2017a. Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. *arXiv preprint arXiv:1706.05587*, 2017b. Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision, pp. 801–818, 2018. Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1290–1299, 2022. Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pp. 6824–6835, 2021. Yang Fu, Linjie Yang, Ding Liu, Thomas S Huang, and Humphrey Shi. Compfeat: Comprehensive feature aggregation for video instance segmentation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 1361–1369, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016. Miran Heo, Sukjun Hwang, Seoung Wug Oh, Joon-Young Lee, and Seon Joo Kim. Vita: Video instance segmentation via object token association. *Advances in Neural Information Processing Systems*, 35: 23109–23120, 2022. Miran Heo, Sukjun Hwang, Jeongseok Hyun, Hanjung Kim, Seoung Wug Oh, Joon-Young Lee, and Seon Joo Kim. A generalized framework for video instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14623–14632, 2023. Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. *arXiv preprint arXiv:1912.12180*, 2019. De-An Huang, Zhiding Yu, and Anima Anandkumar. Minvis: A minimal video instance segmentation framework without video-based training. *Advances in Neural Information Processing Systems*, 2022. 
Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Crisscross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 603–612, 2019. Sukjun Hwang, Miran Heo, Seoung Wug Oh, and Seon Joo Kim. Video instance segmentation using inter-frame communication transformers. *Advances in Neural Information Processing Systems*, 2021. Lei Ke, Xia Li, Martin Danelljan, Yu-Wing Tai, Chi-Keung Tang, and Fisher Yu. Prototypical cross-attention networks for multiple object tracking and segmentation. Advances in Neural Information Processing Systems, 34:1192–1203, 2021. Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Video panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9859–9868, 2020. Dahun Kim, Jun Xie, Huiyu Wang, Siyuan Qiao, Qihang Yu, Hong-Seok Kim, Hartwig Adam, In So Kweon, and Liang-Chieh Chen. Tubeformer-deeplab: Video mask transformer. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pp. 13914–13924, 2022. Harold W Kuhn. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2 (1-2):83–97, 1955. Junlong Li, Bingyao Yu, Yongming Rao, Jie Zhou, and Jiwen Lu. Tcovis: Temporally consistent online video instance segmentation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023a. Xiangtai Li, Wenwei Zhang, Jiangmiao Pang, Kai Chen, Guangliang Cheng, Yunhai Tong, and Chen Change Loy. Video k-net: A simple, strong, and unified baseline for video segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18847–18857, 2022. Xiangtai Li, Haobo Yuan, Wenwei Zhang, Guangliang Cheng, Jiangmiao Pang, and Chen Change Loy. Tube-link: A flexible cross tube baseline for universal video segmentation. In *Proceedings of the IEEE/CVF* International Conference on Computer Vision, 2023b. Huaijia Lin, Ruizheng Wu, Shu Liu, Jiangbo Lu, and Jiaya Jia. Video instance segmentation with a propose-reduce paradigm. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2021. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *Proceedings of the European* Conference on Computer Vision, 2014. Qihao Liu, Junfeng Wu, Yi Jiang, Xiang Bai, Alan L Yuille, and Song Bai. Instmove: Instance motion for object-centric video segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 6344–6354, 2023. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *Proceedings of the IEEE/CVF* International Conference on Computer Vision, pp. 10012–10022, 2021. Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3202–3211, 2022a. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11976–11986, 2022b. Jieru Mei, Alex Zihao Zhu, Xinchen Yan, Hang Yan, Siyuan Qiao, Liang-Chieh Chen, and Henrik Kretzschmar. 
Waymo open dataset: Panoramic video panoptic segmentation. In European Conference on Computer Vision, pp. 53–72. Springer, 2022. Jiaxu Miao, Xiaohan Wang, Yu Wu, Wei Li, Xu Zhang, Yunchao Wei, and Yi Yang. Large-scale video panoptic segmentation in the wild: A benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3163–3172, 2021. Mandela Patrick, Dylan Campbell, Yuki Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, and Joao F Henriques. Keeping your eye on the ball: Trajectory attention in video transformers. Advances in Neural Information Processing Systems, 34:12493–12506, 2021. Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai, Serge Belongie, Alan Yuille, Philip HS Torr, and Song Bai. Occluded video instance segmentation: A benchmark. International Journal of Computer Vision, 130(8):2022–2039, 2022. Siyuan Qiao, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Vip-deeplab: Learning visual perception with depth-aware video panoptic segmentation. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, 2021. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. Inkyu Shin, Dahun Kim, Qihang Yu, Jun Xie, Hong-Seok Kim, Bradley Green, In So Kweon, Kuk-Jin Yoon, and Liang-Chieh Chen. Video-kmax: A simple unified approach for online and near-online video panoptic segmentation. In *IEEE Winter Conference on Applications of Computer Vision (WACV)*, 2024. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017. Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In *Proceedings of the European Conference on* Computer Vision, 2020. Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Max-deeplab: End-to-end panoptic segmentation with mask transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5463–5474, 2021a. Jue Wang and Lorenzo Torresani. Deformable video transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14053–14062, 2022. Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-to-end video instance segmentation with transformers. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 8741–8750, 2021b. Mark Weber, Huiyu Wang, Siyuan Qiao, Jun Xie, Maxwell D Collins, Yukun Zhu, Liangzhe Yuan, Dahun Kim, Qihang Yu, Daniel Cremers, Laura Leal-Taixé, Alan Yuille, Florian Schroff, Hartwig Adam, and Liang-Chieh Chen. Deeplab2: A tensorflow library for deep labeling. *arXiv preprint arXiv:2106.09748*, 2021a. Mark Weber, Jun Xie, Maxwell Collins, Yukun Zhu, Paul Voigtlaender, Hartwig Adam, Bradley Green, Andreas Geiger, Bastian Leibe, Daniel Cremers, Aljosa Osep, Laura Leal-Taixe, and Liang-Chieh Chen. Step: Segmenting and tracking every pixel. 
*Neural Information Processing Systems (NeurIPS) Track on* Datasets and Benchmarks, 2021b. Sanghyun Woo, Dahun Kim, Joon-Young Lee, and In So Kweon. Learning to associate every segment for video panoptic segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 2705–2714, 2021. Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext v2: Co-designing and scaling convnets with masked autoencoders. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16133–16142, June 2023. Jialian Wu, Sudhir Yarram, Hui Liang, Tian Lan, Junsong Yuan, Jayan Eledath, and Gerard Medioni. Efficient video instance segmentation via tracklet query and proposal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 959–968, 2022a. Junfeng Wu, Yi Jiang, Song Bai, Wenqing Zhang, and Xiang Bai. Seqformer: Sequential transformer for video instance segmentation. In *Proceedings of the European Conference on Computer Vision*, pp. 553–569. Springer, 2022b. Junfeng Wu, Qihao Liu, Yi Jiang, Song Bai, Alan Yuille, and Xiang Bai. In defense of online models for video instance segmentation. In *Proceedings of the European Conference on Computer Vision*, pp. 588–605. Springer, 2022c. Linjie Yang, Yuchen Fan, and Ning Xu. Video Instance Segmentation. In Proceedings of IEEE International Conference on Computer Vision, 2019. Linjie Yang, Yuchen Fan, Yang Fu, and Ning Xu. The 3rd large-scale video object segmentation challenge - video instance segmentation track, June 2021a. Linjie Yang, Yuchen Fan, and Ning Xu. The 4th large-scale video object segmentation challenge - video instance segmentation track, June 2022. Shusheng Yang, Yuxin Fang, Xinggang Wang, Yu Li, Chen Fang, Ying Shan, Bin Feng, and Wenyu Liu. Crossover learning for fast online video instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8043–8052, 2021b. Kaining Ying, Qing Zhong, Weian Mao, Zhenhua Wang, Hao Chen, Lin Yuanbo Wu, Yifan Liu, Chengxiang Fan, Yunzhi Zhuge, and Chunhua Shen. Ctvis: Consistent training for online video instance segmentation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023. Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Cmt-deeplab: Clustering mask transformers for panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022a. Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. k-means Mask Transformer. In Proceedings of the European Conference on Computer Vision, pp. 288–307. Springer, 2022b. Tao Zhang, Xingye Tian, Yu Wu, Shunping Ji, Xuebo Wang, Yuan Zhang, and Pengfei Wan. Dvis: Decoupled video instance segmentation framework. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. In *International Conference on Learning Representations*, 2020. ## Appendix In the appendix, we provide additional information as listed below: - Sec. A provides the implementation details. - Sec. 
B provides additional experimental results, including computational cost (GFLOPs), running time (FPS) and memory consumption (VRAM) comparisons, as well as further comparisons with other methods for video panoptic segmentation (VPS) and video instance segmentation (VIS). - Sec. C provides prediction visualizations and additional axial-trajectory attention visualization results. - Sec. D discusses our method's limitations. - Sec. E provides the dataset information. - Sec. F discusses the broader impact of Axial-VS.

## A Implementation Details

Implementation Details The proposed Axial-VS is a unified approach for both near-online and offline video segmentation (i.e., the cross-clip tracking module is only used for the offline setting). For the near-online setting (i.e., employing the within-clip tracking module), we use a clip size of two and four for VPS and VIS, respectively. For the offline setting (i.e., employing the cross-clip tracking module), we adopt a video length of 24 (i.e., 12 clips) for VPS and 20 (i.e., 5 clips) for VIS. At this stage, we only train the cross-clip tracking module, while both the clip-level segmenter and the within-clip tracking module are frozen due to memory constraints. During testing, we directly run inference on the whole video with our full model. We experiment with four backbones for Axial-VS: ResNet50 (He et al., 2016), Swin-L (Liu et al., 2021), ConvNeXt-L (Liu et al., 2022b) and ConvNeXt V2-L (Woo et al., 2023). For VPS experiments, we first reproduce Video-kMaX (Shin et al., 2024) based on the official PyTorch re-implementation of kMaX-DeepLab (Yu et al., 2022b). We employ a specific pre-training protocol for VIPSeg, closely following the prior works (Weber et al., 2021b; Kim et al., 2022; Shin et al., 2024). Concretely, starting with an ImageNet (Russakovsky et al., 2015) pre-trained backbone, we pre-train the kMaX-DeepLab and the Multi-Scale Deformable Attention (MSDeformAttn) in our within-clip tracking module on COCO (Lin et al., 2014). The within-clip and cross-clip tracking modules deploy Nw = 2 and Nc = 4 blocks, respectively, for VPS. On the other hand, for VIS experiments, we use the official code-base of Tube-Link (Li et al., 2023b). Since Tube-Link is built on top of Mask2Former (Cheng et al., 2022) and thus already contains six layers of MSDeformAttn, we simplify our within-clip tracking module by directly inserting axial-trajectory attention after each original MSDeformAttn. As a result, the within-clip and cross-clip tracking modules use Nw = 6 and Nc = 4 blocks, respectively, for VIS. We note that we do not use any other video datasets (e.g., pseudo COCO videos) for pre-training axial-trajectory attention. We closely adhere to the training protocols established by the baseline clip-level segmenters. Specifically, for the VPS task with ResNet50 as the backbone, we adopt the training methodology of Video-kMaX. Our near-online Axial-VS is trained on the VIPSeg dataset with a clip size of 2 × 769 × 1345 and a batch size of 32, utilizing 16 V100 32G GPUs for 40k iterations. This training regimen spans approximately 13 hours. Additionally, our offline Axial-VS is trained on VIPSeg with a video size of 24 × 769 × 1345 (12 clips, each comprising 2 frames) and a batch size of 16, employing 8 A100 80G GPUs for 15k iterations. This training process requires approximately 10 hours.
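For quick reference, the VPS training schedule above can be summarized as a small configuration object. This is a hypothetical sketch for orientation only; the class and field names are ours and do not correspond to the actual code-base.

```python
from dataclasses import dataclass

@dataclass
class VPSTrainConfig:
    """Hypothetical summary of the near-online VPS (VIPSeg, ResNet50) schedule."""
    clip_frames: int = 2             # near-online: clips of 2 frames
    crop_size: tuple = (769, 1345)   # frame height x width
    batch_size: int = 32
    iterations: int = 40_000
    gpus: str = "16x V100 32G"

@dataclass
class OfflineVPSTrainConfig(VPSTrainConfig):
    """Offline stage: only the cross-clip tracking module is trained."""
    video_frames: int = 24           # 12 clips x 2 frames
    batch_size: int = 16
    iterations: int = 15_000
    gpus: str = "8x A100 80G"
    frozen: tuple = ("clip_level_segmenter", "within_clip_tracking_module")

print(OfflineVPSTrainConfig())
```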
For the VIS task with ResNet50 as the backbone, we adopt the Tube-Link training protocol. Our near-online Axial-VS is trained on Youtube-VIS with a batch size of 8 clips (each containing 4 frames) using 8 V100 32G GPUs for 15k iterations. We adhere to the literature by randomly resizing the shortest edge of each clip to a predetermined size within the range [288, 320, 352, 384, 416, 448, 480, 512]. This training process takes approximately 7 hours. Additionally, our offline Axial-VS is trained on Youtube-VIS with a batch size of 8 videos (each comprising 20 frames, equivalent to 5 clips) using 8 V100 32G GPUs for 10k iterations. This training process requires approximately 4 hours.

Table 8: **GFLOPs, FPS and VRAM comparisons on attention operations in the within-clip tracking module.** We compare Joint Space-Time Attention (Vaswani et al., 2017), Divided Space-Time Attention (Bertasius et al., 2021), Multi-Scale Deformable Attention (Zhu et al., 2020) (MSDeformAttn), the proposed Axial-Trajectory Attention, and TarVIS Temporal Neck (Athar et al., 2023) (i.e., MSDeformAttn + Window Space-Time Attention). The GFLOPs and FPS are obtained by measuring models with ResNet50 as the backbone on an A100 GPU. We report the VRAM for two different input resolutions: 2 × 513 × 897 and 2 × 769 × 1345, respectively. Reported results are averaged over 3 runs. −: Not using any operations. Our final setting is marked in grey.

| attention operations | VRAM (2×513×897) | GFLOPs (2×769×1345) | FPS (2×769×1345) | VRAM (2×769×1345) | VPQ |
|---|---|---|---|---|---|
| - | 6.74G | 354 | 14.3 | 11.90G | 42.7 |
| Joint Space-Time Attn (Vaswani et al., 2017) | 9.87G | 493 | 10.3 | 25.97G | 43.2 |
| Divided Space-Time Attn (Bertasius et al., 2021) | 8.58G | 430 | 12.6 | 19.58G | 43.6 |
| MSDeformAttn (Zhu et al., 2020) | 7.75G | 432 | 12.5 | 14.15G | 44.5 |
| Axial-Trajectory Attn | 7.57G | 443 | 11.7 | 13.81G | 44.7 |
| MSDeformAttn + Window Space-Time Attn (Athar et al., 2023) | 8.21G | 476 | 10.5 | 15.15G | 44.9 |
| MSDeformAttn + Axial-Trajectory Attn | 8.38G | 481 | 10.5 | 15.59G | 46.1 |

| cross-clip tracking design | video query | encoder | decoder | GFLOPs | VPQ |
|---|---|---|---|---|---|
| - | - | - | - | - | 46.1 |
| VITA (Heo et al., 2022) | ✓ | ✓ | ✓ | 47 | 46.3 |
| cross-clip tracking module | ✗ | ✓ | ✗ | 32 | 46.7 |

Table 9: **GFLOPs comparisons on cross-clip tracking design.** For the cross-clip tracking module, we compare VITA (Heo et al., 2022) and the proposed cross-clip tracking module and report their GFLOPs only. Reported results are averaged over 3 runs. −: Not using any operations. Our final setting is marked in grey.

## B Additional Experimental Results

In this section, we provide more experimental results, including the computational cost (GFLOPs), running time (FPS) and memory consumption (VRAM) comparisons on the proposed within-clip tracking module, along with GFLOPs comparisons on the cross-clip tracking module (Sec. B.1), as well as more detailed comparisons with other state-of-the-art methods (Sec. B.2).

## B.1 GFLOPs, FPS and VRAM Comparisons

We conduct the GFLOPs, FPS and VRAM comparisons on the VIPSeg dataset, using ResNet50.

GFLOPs, FPS and VRAM Comparisons on Attention Operations in Within-Clip Tracking Module In Tab. 8, we present a comparison of the GFLOPs, FPS and VRAM for the attention operations used in the within-clip tracking module.
The GFlops and FPS are evaluated using a short clip of size 2 × 769 × 1345 on an A100 GPU. Additionally, VRAM is reported for two different input clip resolutions: 2 × 513 × 897 and 2 × 769 × 1345, respectively. The table highlights that the proposed "Axial-Trajectory Attn" introduces a moderate increase in GFlops and VRAM, along with a modest decrease in FPS, while significantly enhancing performance in VPQ. Notably, "Divided Space-Time Attn", "MSDeformAttn", and "Axial-Trajectory Attn" exhibit comparable computational costs and GPU memory usage. Conversely, "Joint Space-Time Attn" imposes the highest computational load and GPU memory consumption due to its compute-intensive attention operations on high-resolution dense pixel feature maps. GFLOPs Comparisons on Tracking Design in Cross-Clip Tracking Module In Tab. 9, we present a comparison of the GFLOPs for the cross-clip tracking design. The numbers are computed using a video of size 24 × 769 × 1345, specifically measuring the computational costs of the cross-clip tracking module. The table shows that our cross-clip tracking module is more lightweight compared to VITA, yet achieves superior performance, which can be attributed to the simple and effective design of our cross-clip tracking module. | method | backbone | VPQ | VPQTh | VPQSt | |------------------------------------------------------------|-------------------|-------|---------|---------| | online/near-online methods ViP-DeepLab (Qiao et al., 2021) | ResNet50 | 16.0 | - | - | | VPSNet-FuseTrack (Kim et al., 2020) | ResNet50 | 17.0 | - | - | | VPSNet-SiamTrack (Woo et al., 2021) | ResNet50 | 17.2 | - | - | | Clip-PanoFCN (Miao et al., 2022) | ResNet50 | 22.9 | - | - | | Video K-Net (Li et al., 2022) | ResNet50 | 26.1 | - | - | | TubeFormer (Kim et al., 2022) | Axial-ResNet50-B3 | 31.2 | - | - | | TarVIS (Athar et al., 2023) | ResNet50 | 33.5 | 39.2 | 28.5 | | Video-kMaX (Shin et al., 2024) | ResNet50 | 38.2 | - | - | | Tube-Link (Li et al., 2023b) | ResNet50 | 39.2 | - | - | | DVIS (Zhang et al., 2023)‡ | ResNet50 | 39.2 | 39.3 | 39.0 | | TarVIS (Athar et al., 2023) | Swin-L | 48.0 | 58.2 | 39.0 | | Video-kMaX (Shin et al., 2024) | ConvNeXt-L | 51.9 | - | - | | DVIS (Zhang et al., 2023) | Swin-L | 54.7 | 54.8 | 54.6 | | Axial-VS w/ Video-kMaX (ours) | ResNet50 | 46.1 | 45.6 | 46.6 | | Axial-VS w/ Video-kMaX (ours) | ConvNeXt-L | 56.2 | 58.4 | 54.0 | | Axial-VS w/ Video-kMaX (ours) | ConvNeXt V2-L | 57.7 | 58.3 | 57.1 | | offline methods DVIS (Zhang et al., 2023) | ResNet50 | 43.2 | 43.6 | 42.8 | | DVIS (Zhang et al., 2023) | Swin-L | 57.6 | 59.9 | 55.5 | | Axial-VS w/ Video-kMaX (ours) | ResNet50 | 46.7 | 46.7 | 46.6 | | Axial-VS w/ Video-kMaX (ours) | ConvNeXt-L | 57.1 | 59.3 | 54.8 | | Axial-VS w/ Video-kMaX (ours) | ConvNeXt V2-L | 58.0 | 58.8 | 57.2 | Table 10: VIPSeg val **set results.** We provide more complete comparisons with other state-of-the-art methods. Numbers of Axial-VS are averaged over 3 runs. ‡: Evaluated using their open-source checkpoint. ## B.2 Comparisons With Other Methods Video Panoptic Segmentation (VPS) In Tab. 10, we compare with more state-of-the-art methods on the VIPSeg dataset. We observe the similar trend as discussed in the main paper, and thus simply list all the other methods for a complete comparison. Video Instance Segmentation (VIS) In Tab. 11, we report more state-of-the-art methods on the Youtube-VIS-21 dataset. 
Table 10: **VIPSeg val set results.** We provide more complete comparisons with other state-of-the-art methods. Numbers of Axial-VS are averaged over 3 runs. ‡: Evaluated using their open-source checkpoint.

| method | backbone | VPQ | VPQ^Th | VPQ^St |
|---|---|---|---|---|
| *online/near-online methods* | | | | |
| ViP-DeepLab (Qiao et al., 2021) | ResNet50 | 16.0 | − | − |
| VPSNet-FuseTrack (Kim et al., 2020) | ResNet50 | 17.0 | − | − |
| VPSNet-SiamTrack (Woo et al., 2021) | ResNet50 | 17.2 | − | − |
| Clip-PanoFCN (Miao et al., 2022) | ResNet50 | 22.9 | − | − |
| Video K-Net (Li et al., 2022) | ResNet50 | 26.1 | − | − |
| TubeFormer (Kim et al., 2022) | Axial-ResNet50-B3 | 31.2 | − | − |
| TarVIS (Athar et al., 2023) | ResNet50 | 33.5 | 39.2 | 28.5 |
| Video-kMaX (Shin et al., 2024) | ResNet50 | 38.2 | − | − |
| Tube-Link (Li et al., 2023b) | ResNet50 | 39.2 | − | − |
| DVIS (Zhang et al., 2023)‡ | ResNet50 | 39.2 | 39.3 | 39.0 |
| TarVIS (Athar et al., 2023) | Swin-L | 48.0 | 58.2 | 39.0 |
| Video-kMaX (Shin et al., 2024) | ConvNeXt-L | 51.9 | − | − |
| DVIS (Zhang et al., 2023) | Swin-L | 54.7 | 54.8 | 54.6 |
| Axial-VS w/ Video-kMaX (ours) | ResNet50 | 46.1 | 45.6 | 46.6 |
| Axial-VS w/ Video-kMaX (ours) | ConvNeXt-L | 56.2 | 58.4 | 54.0 |
| Axial-VS w/ Video-kMaX (ours) | ConvNeXt V2-L | 57.7 | 58.3 | 57.1 |
| *offline methods* | | | | |
| DVIS (Zhang et al., 2023) | ResNet50 | 43.2 | 43.6 | 42.8 |
| DVIS (Zhang et al., 2023) | Swin-L | 57.6 | 59.9 | 55.5 |
| Axial-VS w/ Video-kMaX (ours) | ResNet50 | 46.7 | 46.7 | 46.6 |
| Axial-VS w/ Video-kMaX (ours) | ConvNeXt-L | 57.1 | 59.3 | 54.8 |
| Axial-VS w/ Video-kMaX (ours) | ConvNeXt V2-L | 58.0 | 58.8 | 57.2 |

## B.2 Comparisons With Other Methods

**Video Panoptic Segmentation (VPS)** In Tab. 10, we compare with more state-of-the-art methods on the VIPSeg dataset. We observe a similar trend as discussed in the main paper, and thus simply list all the other methods for a complete comparison.

**Video Instance Segmentation (VIS)** In Tab. 11, we report more state-of-the-art methods on the Youtube-VIS-21 dataset. As shown in the table, our Axial-VS with a ResNet50 backbone demonstrates better performance than the other methods, as discussed in the main paper, while our Axial-VS with Swin-L performs slightly worse than TarVIS (Athar et al., 2023) in the online/near-online setting and than DVIS (Zhang et al., 2023) in the offline setting. We think the performance can be improved by exploiting more video segmentation datasets, as TarVIS did, or by improving the clip-level segmenter. In particular, our baseline Tube-Link with Swin-L performs worse than the other state-of-the-art methods with Swin-L. For the Youtube-VIS-22 results, we notice that the reported numbers in some recent papers are not comparable, since some papers report AP_long (AP for long videos) while others use AP_all, which is the average of AP_long and AP_short (AP for short videos). To carefully and fairly compare between methods, we therefore reproduce all the state-of-the-art results using their official open-source checkpoints, and clearly report AP_all, AP_long, and AP_short in Tab. 12. Similar to the discussion in the main paper, our Axial-VS with ResNet50 significantly improves over the baseline Tube-Link and performs better than other state-of-the-art methods, particularly in AP_long. However, our results with Swin-L lag behind other state-of-the-art methods with Swin-L, a gap that may be bridged by improving the baseline Tube-Link Swin-L. In Tab. 13, we summarize more comparisons with other state-of-the-art methods on OVIS. As shown in the table, our method remarkably improves over the baseline, but performs worse than the state-of-the-art methods, partially because we fail to fully reproduce the baseline Tube-Link that our method heavily depends upon. Similar to our other VIS results, we expect that improving the clip-level segmenter will also lead to an improvement of Axial-VS.

Table 11: **Youtube-VIS-21 val set results.** We provide more complete comparisons with other state-of-the-art methods. Numbers of Axial-VS are averaged over 3 runs.
| method | backbone | AP | AP50 | AP75 | AR1 | AR10 |
|---|---|---|---|---|---|---|
| *online/near-online methods* | | | | | | |
| MinVIS (Huang et al., 2022) | ResNet50 | 44.2 | 66.0 | 48.1 | 39.2 | 51.7 |
| IDOL (Wu et al., 2022c) | ResNet50 | 43.9 | 68.0 | 49.6 | 38.0 | 50.9 |
| GenVIS_near-online (Heo et al., 2023) | ResNet50 | 46.3 | 67.0 | 50.2 | 40.6 | 53.2 |
| DVIS (Zhang et al., 2023) | ResNet50 | 46.4 | 68.4 | 49.6 | 39.7 | 53.5 |
| GenVIS_online (Heo et al., 2023) | ResNet50 | 47.1 | 67.5 | 51.5 | 41.6 | 54.7 |
| Tube-Link (Li et al., 2023b) | ResNet50 | 47.9 | 70.0 | 50.2 | 42.3 | 55.2 |
| TarVIS (Athar et al., 2023) | ResNet50 | 48.3 | 69.6 | 53.2 | 40.5 | 55.9 |
| MinVIS (Huang et al., 2022) | Swin-L | 55.3 | 76.6 | 62.0 | 45.9 | 60.8 |
| IDOL (Wu et al., 2022c) | Swin-L | 56.1 | 80.8 | 63.5 | 45.0 | 60.1 |
| Tube-Link (Li et al., 2023b) | Swin-L | 58.4 | 79.4 | 64.3 | 47.5 | 63.6 |
| DVIS (Zhang et al., 2023) | Swin-L | 58.7 | 80.4 | 66.6 | 47.5 | 64.6 |
| GenVIS_online (Heo et al., 2023) | Swin-L | 59.6 | 80.9 | 65.8 | 48.7 | 65.0 |
| GenVIS_near-online (Heo et al., 2023) | Swin-L | 60.1 | 80.9 | 66.5 | 49.1 | 64.7 |
| TarVIS (Athar et al., 2023) | Swin-L | 60.2 | 81.4 | 67.6 | 47.6 | 64.8 |
| Axial-VS w/ Tube-Link (ours) | ResNet50 | 48.4 | 71.1 | 51.8 | 42.0 | 57.4 |
| Axial-VS w/ Tube-Link (ours) | Swin-L | 58.8 | 81.3 | 65.0 | 46.7 | 62.7 |
| *offline methods* | | | | | | |
| VITA (Heo et al., 2022) | ResNet50 | 45.7 | 67.4 | 49.5 | 40.9 | 53.6 |
| DVIS (Zhang et al., 2023) | ResNet50 | 47.4 | 71.0 | 51.6 | 39.9 | 55.2 |
| VITA (Heo et al., 2022) | Swin-L | 57.5 | 80.6 | 61.0 | 47.7 | 62.6 |
| DVIS (Zhang et al., 2023) | Swin-L | 60.1 | 83.0 | 68.4 | 47.7 | 65.7 |
| Axial-VS w/ Tube-Link (ours) | ResNet50 | 48.5 | 70.9 | 52.4 | 42.3 | 57.9 |
| Axial-VS w/ Tube-Link (ours) | Swin-L | 59.1 | 81.9 | 64.9 | 46.9 | 63.8 |

| method | backbone | AP_all | AP_short | AP50 | AP75 | AR1 | AR10 | AP_long | AP50 | AP75 | AR1 | AR10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *online/near-online methods* | | | | | | | | | | | | |
| MinVIS (Huang et al., 2022)∗ | ResNet50 | 32.8 | 43.9 | 66.9 | 47.5 | 38.8 | 51.9 | 21.6 | 42.9 | 18.1 | 18.8 | 25.6 |
| GenVIS_near-online (Heo et al., 2023)∗ | ResNet50 | 38.1 | 45.9 | 66.3 | 50.2 | 40.8 | 53.7 | 30.3 | 50.9 | 32.7 | 25.5 | 36.2 |
| DVIS (Zhang et al., 2023)∗ | ResNet50 | 38.6 | 46.0 | 68.1 | 50.4 | 39.7 | 53.5 | 31.2 | 50.4 | 36.8 | 30.2 | 35.7 |
| Tube-Link (Li et al., 2023b)∗ | ResNet50 | 39.5 | 47.9 | 70.4 | 50.5 | 42.6 | 55.9 | 31.1 | 56.1 | 31.2 | 29.1 | 36.3 |
| MinVIS (Huang et al., 2022)∗ | Swin-L | 43.5 | 55.0 | 77.8 | 60.6 | 45.3 | 60.3 | 31.9 | 51.4 | 33.0 | 28.2 | 35.3 |
| Tube-Link (Li et al., 2023b)∗ | Swin-L | 46.0 | 57.8 | 78.7 | 63.4 | 47.0 | 62.7 | 34.2 | 53.2 | 37.9 | 31.5 | 38.9 |
| DVIS (Zhang et al., 2023)∗ | Swin-L | 48.9 | 58.8 | 80.6 | 65.9 | 47.5 | 63.9 | 39.0 | 56.0 | 43.0 | 33.0 | 43.5 |
| Axial-VS w/ Tube-Link (ours) | ResNet50 | 41.6 | 46.8 | 68.1 | 50.5 | 41.5 | 56.2 | 36.5 | 61.1 | 41.7 | 32.3 | 42.3 |
| Axial-VS w/ Tube-Link (ours) | Swin-L | 47.3 | 58.7 | 81.1 | 64.9 | 46.9 | 62.7 | 35.9 | 62.0 | 37.0 | 34.2 | 39.7 |
| *offline methods* | | | | | | | | | | | | |
| VITA (Heo et al., 2022)∗ | ResNet50 | 38.8 | 45.7 | 66.6 | 50.1 | 41.0 | 53.1 | 31.9 | 53.8 | 37.0 | 31.1 | 37.3 |
| DVIS (Zhang et al., 2023)∗ | ResNet50 | 41.6 | 47.2 | 70.8 | 51.0 | 40.0 | 54.9 | 35.9 | 58.4 | 39.9 | 32.2 | 41.9 |
| VITA (Heo et al., 2022)∗ | Swin-L | 49.3 | 57.6 | 80.4 | 62.5 | 47.7 | 62.3 | 41.0 | 62.1 | 43.9 | 39.4 | 43.5 |
| DVIS (Zhang et al., 2023)∗ | Swin-L | 52.4 | 59.9 | 82.7 | 68.3 | 47.8 | 65.2 | 44.9 | 66.3 | 48.9 | 37.1 | 53.2 |
| Axial-VS w/ Tube-Link (ours) | ResNet50 | 41.3 | 45.6 | 68.0 | 51.1 | 40.2 | 54.7 | 37.0 | 63.4 | 36.7 | 29.0 | 40.2 |
| Axial-VS w/ Tube-Link (ours) | Swin-L | 48.8 | 58.7 | 81.0 | 64.2 | 46.6 | 63.5 | 38.9 | 64.4 | 39.3 | 32.0 | 42.3 |
Table 12: **Youtube-VIS-22 val set results.** We provide more complete comparisons with other state-of-the-art methods. Numbers of Axial-VS are averaged over 3 runs. ∗: All results are reproduced by us using their official checkpoints. We report AP_short and AP_long for short and long videos, respectively, and AP_all by averaging them.

## C Visualization Results

**Visualizations of Prediction** We provide visualization results in Fig. 8, Fig. 9, Fig. 10, and Fig. 11 for different video sequences. We compare with DVIS (Zhang et al., 2023) and our re-implemented Video-kMaX (Shin et al., 2024), with ResNet50 as the backbone and inference in an online/near-online fashion.

Table 13: **OVIS val set results.** We provide more complete comparisons with other state-of-the-art methods. Numbers of Axial-VS are averaged over 3 runs. §: Reproduced by us using their official code-base.

| method | backbone | AP | AP50 | AP75 | AR1 | AR10 |
|---|---|---|---|---|---|---|
| *online/near-online methods* | | | | | | |
| MinVIS (Huang et al., 2022) | ResNet50 | 25.0 | 45.5 | 24.0 | 13.9 | 29.7 |
| Tube-Link (Li et al., 2023b)§ | ResNet50 | 25.4 | 44.9 | 26.5 | 14.1 | 30.1 |
| Tube-Link (Li et al., 2023b) | ResNet50 | 29.5 | 51.5 | 30.2 | 15.5 | 34.5 |
| IDOL (Wu et al., 2022c) | ResNet50 | 30.2 | 51.3 | 30.0 | 15.0 | 37.5 |
| DVIS (Zhang et al., 2023) | ResNet50 | 30.2 | 55.0 | 30.5 | 14.5 | 37.3 |
| TarVIS (Athar et al., 2023) | ResNet50 | 31.1 | 52.5 | 30.4 | 15.9 | 39.9 |
| GenVIS_near-online (Heo et al., 2023) | ResNet50 | 34.5 | 59.4 | 35.0 | 16.6 | 38.3 |
| GenVIS_online (Heo et al., 2023) | ResNet50 | 35.8 | 60.8 | 36.2 | 16.3 | 39.6 |
| Tube-Link (Li et al., 2023b)§ | Swin-L | 33.3 | 54.6 | 32.8 | 16.8 | 37.7 |
| MinVIS (Huang et al., 2022) | Swin-L | 39.4 | 61.5 | 41.3 | 18.1 | 43.3 |
| IDOL (Wu et al., 2022c) | Swin-L | 42.6 | 65.7 | 45.2 | 17.9 | 49.6 |
| TarVIS (Athar et al., 2023) | Swin-L | 43.2 | 67.8 | 44.6 | 18.0 | 50.4 |
| GenVIS_online (Heo et al., 2023) | Swin-L | 45.2 | 69.1 | 48.4 | 19.1 | 48.6 |
| GenVIS_near-online (Heo et al., 2023) | Swin-L | 45.4 | 69.2 | 47.8 | 18.9 | 49.0 |
| DVIS (Zhang et al., 2023) | Swin-L | 47.1 | 71.9 | 49.2 | 19.4 | 52.5 |
| Axial-VS w/ Tube-Link | ResNet50 | 27.6 | 50.1 | 27.2 | 14.6 | 32.5 |
| Axial-VS w/ Tube-Link | Swin-L | 39.1 | 62.3 | 39.8 | 18.5 | 42.3 |
| *offline methods* | | | | | | |
| VITA (Heo et al., 2022) | ResNet50 | 19.6 | 41.2 | 17.4 | 11.7 | 26.0 |
| DVIS (Zhang et al., 2023) | ResNet50 | 33.8 | 60.4 | 33.5 | 15.3 | 39.5 |
| VITA (Heo et al., 2022) | Swin-L | 27.7 | 51.9 | 24.9 | 14.9 | 33.0 |
| DVIS (Zhang et al., 2023) | Swin-L | 48.6 | 74.7 | 50.5 | 18.8 | 53.8 |
| Axial-VS w/ Tube-Link | ResNet50 | 28.3 | 50.7 | 27.0 | 14.6 | 34.0 |
| Axial-VS w/ Tube-Link | Swin-L | 39.8 | 64.5 | 40.1 | 17.9 | 43.7 |

**Visualizations of Learned Axial-Trajectory Attention** We provide more visualizations of the learned axial-trajectory attention maps in Fig. 12, Fig. 13 and Fig. 14. Concretely, in Fig. 12, we illustrate the feasibility of
decomposing object motion into height- and width-axis components. We select the football in the first frame as the *reference point* and show the height and width axial-trajectory attentions separately. We then multiply the height and width axial-trajectory attentions to visualize the trajectory of the reference point over time. In Fig. 13, we select the basketball in the first frame as the *reference point* and show that our axial-trajectory attention accurately tracks it along its moving trajectory. In Fig. 14, we select the black table in the first frame as the *reference point*. We note that the camera motion is very small in this short clip, and the table thus remains static. Our axial-trajectory attention still accurately keeps attending to the same location as time goes by.
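The multiplication of height and width axial-trajectory attentions used for these visualizations can be sketched as follows. This is an illustrative reconstruction: the tensor names and shapes are assumptions, not the exact implementation.

```python
import torch

def trajectory_heatmaps(h_attn, w_attn):
    """Combine per-frame height- and width-axis attentions of one reference
    point into 2D trajectory heatmaps.

    h_attn: (T, H) attention of the reference point along the height axis.
    w_attn: (T, W) attention of the reference point along the width axis.
    returns: (T, H, W) heatmaps, one per frame, peaking at the tracked location.
    """
    # Outer product per frame: heat[t, y, x] = h_attn[t, y] * w_attn[t, x]
    heat = torch.einsum("th,tw->thw", h_attn, w_attn)
    # Normalize each frame to [0, 1] for visualization
    heat = heat / heat.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)
    return heat
```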
**Failure Cases for Prediction** We provide visualizations of failure cases of Axial-VS in Fig. 15 and Fig. 16. In general, we observe three common patterns of errors: heavy occlusion, fast-moving objects, and extreme illumination. Specifically, the first challenge is that when there are heavy occlusions caused by multiple close-by instances, Axial-VS suffers from ID switching, leading it to assign inconsistent IDs to the same instance. For example, in clip (a) of Fig. 15, the ID of the human in the red dress changes between frames 2 and 3, while in clip (b) of Fig. 15 the two humans in the back are recognized as only one human until frame 3 due to the heavy occlusion. The second common error is that in videos containing fast motion, Axial-VS struggles to precisely predict the boundary of the moving object. In clip (c) of Fig. 16, the human's legs are not segmented out in frames 1 and 3. The last common error is that in videos containing extreme or varying illumination, Axial-VS might fail to detect the objects and thus fails to generate consistent segmentation. In clip (d) of Fig. 16, the objects under the extreme illumination cannot be well segmented.

**Failure Cases for Learned Axial-Trajectory Attention** In Fig. 17 and Fig. 18, we show two failure cases of axial-trajectory attention where the selected *reference point* is not discriminative enough, sometimes yielding an inaccurate axial-trajectory. To be specific, in Fig. 17, we select the left light of the subway in the first frame as the reference point. Though axial-trajectory attention precisely associates its position in the second frame, in the third frame the attention becomes sparse, mostly because there are many similar 'light' objects in the third frame, and the attention dilutes. Similarly, in Fig. 18, we select the head of the human as the reference point. Since the human wears a black jacket with a black hat, the selected reference point has a similar but ambiguous appearance to the human body, yielding sparse attention activation over the whole human region.

![22_image_0.png](22_image_0.png)

Figure 8: **Qualitative comparisons on videos with unusual viewpoints in VIPSeg.** Axial-VS exhibits consistency in prediction even with an unusual view, while DVIS and Video-kMaX fail to consistently detect all animals over time.

![22_image_1.png](22_image_1.png)

Figure 9: **Qualitative comparisons on videos with complex indoor scenes as background in VIPSeg.** Axial-VS accurately segments out the boundary of the cat with correct classes, while DVIS and Video-kMaX fail.

![23_image_0.png](23_image_0.png)

Figure 10: **Qualitative comparisons on videos with light and shade in VIPSeg.** Axial-VS makes accurate and consistent predictions under different illumination situations. DVIS fails at the junction between light and shade (e.g., the fish tank), while Video-kMaX completely fails at dark places.

![23_image_1.png](23_image_1.png)

Figure 11: **Qualitative comparisons on videos with multiple instances in VIPSeg.** Axial-VS detects more instances with accurate boundaries. DVIS fails to segment out the crowded humans, while Video-kMaX performs badly on the stuff classes.

![24_image_0.png](24_image_0.png)

Figure 12: **Illustration of Tracking Objects along Axial Trajectories.** In this short clip of two frames depicting the action 'playing football', the football at frame 1 is selected as the *reference point* (marked in red). We multiply the height and width axial-trajectory attentions to visualize the trajectory of the *reference point* over time.

![24_image_1.png](24_image_1.png)

Figure 13: **Visualization of Learned Axial-Trajectory Attention.** In this short clip of three frames depicting the action 'playing basketball', the basketball at frame 1 is selected as the *reference point* (marked in red). The axial-trajectory attention can accurately track the moving basketball across frames. Best viewed by zooming in.

![25_image_0.png](25_image_0.png)

Figure 14: **Visualization of Learned Axial-Trajectory Attention.** In this short clip of three frames depicting a student at class, the right static table at frame 1 is selected as the *reference point* (marked in red). Though the table remains static across the frames, axial-trajectory attention can accurately track it. Best viewed by zooming in.

![25_image_1.png](25_image_1.png)

Figure 15: **Failure modes caused by heavy occlusion.** Axial-VS fails to predict a consistent ID for the same instance when there is heavy occlusion. (a) The ID of the human changes between frames 2 and 3; refer to the red box for details. (b) The two humans are recognized as only one until frame 3; refer to the red box for details. Best viewed by zooming in.

![26_image_0.png](26_image_0.png)

Figure 16: **Failure modes caused by fast-moving and extreme illumination scenarios.** Axial-VS fails to predict accurate boundaries due to the large motion and extreme illumination. (c) The human's legs are not segmented out in frames 1 and 3; refer to the red box for details. (d) The objects under extreme illumination cannot be well segmented; refer to the red box for details. Best viewed by zooming in.

## D Limitations

The proposed Axial-VS builds on top of off-the-shelf clip-level segmenters with the proposed within-clip and cross-clip tracking modules. Even though flexible, its performance depends on the underlying clip-level segmenter employed. Additionally, when training the proposed cross-clip tracking module, the clip-level segmenter and the within-clip tracking module are frozen due to insufficient GPU memory capacity, which may lead to a sub-optimal result, since ideally end-to-end training leads to better performance. We leave it as future work to efficiently fine-tune the whole model for processing long videos.

## E Datasets

VIPSeg (Miao et al., 2022) is a new large-scale video panoptic segmentation dataset, targeting diverse in-the-wild scenes. The dataset contains 124 semantic classes, consisting of 58 'thing' and 66 'stuff' classes, with 3536 videos, where each video spans 3 to 10 seconds. The main evaluation metric adopted on this benchmark is video panoptic quality (VPQ) (Kim et al., 2020).
Youtube-VIS (Yang et al., 2019) is a popular benchmark on video instance segmentation, where only 'thing' classes are segmented and tracked. It has multiple versions. YouTube-VIS-2019 (Yang et al., 2019) consists of 40 semantic classes, while YouTube-VIS-2021 (Yang et al., 2021a) and YouTube-VIS-2022 (Yang et al., 2022) are improved versions with a higher number of instances and videos. Youtube-VIS adopts track AP (Yang et al., 2019) for evaluation.

OVIS (Qi et al., 2022) is a challenging video instance segmentation dataset that focuses on long videos (12.77 seconds on average) and objects with severe occlusion and complex motion patterns. The dataset contains 25 semantic classes and also adopts track AP (Yang et al., 2019) for evaluation.

![27_image_0.png](27_image_0.png)

Figure 17: **[Failure mode] Visualization of Learned Axial-Trajectory Attention.** In this short clip of three frames depicting a moving subway, the left front light at frame 1 is selected as the *reference point* (marked in red). While the axial-trajectory attention can still more or less capture the same front light at frame 2, it gradually loses focus since there are many similar "light" objects in the clip. Best viewed by zooming in.

## F Broader Impact Statement

This paper introduces Axial-VS, which enhances a standard clip-level segmenter with the proposed axial-trajectory attention, thus advancing the field of video segmentation. While there may be potential societal consequences, we do not feel any of them must be specifically highlighted here.

![28_image_0.png](28_image_0.png)

Figure 18: **[Failure mode] Visualization of Learned Axial-Trajectory Attention.** In this short clip of three frames depicting the action 'downhill ski', the head of the human at frame 1 is selected as the *reference point* (marked in red). Since the head and the human body have a similar appearance, the axial-trajectory attention becomes diluted over the human body. Best viewed by zooming in.
Review 1:

Summary: This paper presents "Axial-VS," an approach that can enhance video segmentation by tracking objects along axial trajectories in videos. Technically, the authors propose an axial-attention module that performs self-attention along the height or width axis. The authors also further decompose the task into two main sub-tasks: within-clip segmentation and cross-clip tracking. In the within-clip segmentation module, Axial-VS uses axial-trajectory attention to track objects within individual clips, while in the cross-clip tracking module, the framework applies axial-trajectory attention to the clip-level segmenters' object queries, enabling effective tracking of objects across different clips. This helps achieve consistent segmentation throughout the video. The proposed method can improve the baseline models with less memory and time consumption; however, there is a lack of evidence supporting this claim.

Strengths and Weaknesses:

Strengths:
- The video demos look great.
- The code will be released.
- The results, both qualitative and quantitative ones, are promising.

Weaknesses:
- The writing of the paper should be further polished: e.g., the phrase "GPU Out-Of-Memory errors" in the abstract is informal; there are missing words in the first sentence of Paragraph 3 (intro).
- In the second-to-last paragraph of the introduction, why does the axial attention "concurrently consider spatial and temporal information"? The answer can be found in the method section, but it is confusing in the introduction.
- The metric VPQ is never defined in the main paper.
- Running time and memory consumption should be reported.

Other concerns:
- As claimed by the authors, the decomposition of the task and the axial attention modules help to address the memory consumption. However, some sparse attention modules and memory-efficient attention operations can help to reduce the computational cost but maintain the performance of the full attention operations. How do the proposed axial-attention modules compare with these methods?
- The object queries remain the same for different video clips. Given that a transformer decoder is used inside the framework to produce the output queries, will using the output object query from clip t-1 as the input object query for clip t help improve the performance? I.e., if we update the object queries for different video clips, will the performance be further improved?
- Missing ablation study: if full attention is used inside the within-clip tracking module, will the performance be improved? How about the running time and memory consumption?
- Results with Swin-L or other baseline models with ConvNeXt should be reported for a fair comparison in Table 1 (b).

**Overall, I think this paper should be further revised.**

Requested Changes: The text should be further polished. Moreover, additional experimental evaluations should be included; see the weaknesses:
- Running time and memory consumption should be reported.
- Results with Swin-L or other baseline models with ConvNeXt should be reported for a fair comparison in Table 1 (b).
- Missing ablation study: if full attention is used inside the within-clip tracking module, will the performance be improved? How about the running time and memory consumption?

Broader Impact Concerns: The authors briefly discussed the broader impact. No concerns are found there.
==================================================

Review 2:

Summary: The paper proposes a framework called Axial-VS to learn dense video features in an efficient way to facilitate downstream segmentation tasks. It involves factorizing the attention op to operate along the height and width dimensions one after the other rather than exhaustively in one go, thus resulting in a reduced memory footprint. The paper proposes network blocks for space-time attention within clips, and also across clips, where it is specifically designed for DETR-based video segmentation architectures that rely on object queries. The proposed framework is applied to two existing models, namely Tube-Link and Video-kMaX, and is evaluated for Video Panoptic Segmentation (VPS) on VIPSeg, and Video Instance Segmentation (VIS) on YouTube-VIS and OVIS. Several ablation experiments are also reported that evaluate the efficacy of individual contributions.

Strengths and Weaknesses:

**STRENGTHS**

Overall, it is a pretty solid paper that is really well-written and easy to read. The contributions are also well-motivated and strongly backed by experimental evaluation. The workflow for the proposed network blocks is nicely explained with consistent notation and illustrations. On the experimental side, the evaluation is thorough with 3 datasets for 2 different tasks. The proposed Axial-VS is applied to two existing architectures. For each, a self-trained baseline is created and the differences in performance from both within-clip and across-clip attentions are shown separately. Ablations are also thorough, with all the major design choices covered and averages over multiple runs being reported.

**WEAKNESSES**

1. Given that the paper focuses on video-centric feature learning with a focus on the temporal aspect of video, it is a bit of a letdown to see that the performance peaks at a clip length of 2 (Table 7a), which is the bare minimum increment over a single image.

2. Since the motivation behind axial attention across height and width separately is to be efficient in terms of VRAM, there should be some experimental analysis for this, e.g., a comparison of the proposed Axial-VS to a vanilla baseline with full-fledged space-time attention, and the other existing video attention mechanisms in Fig. 7. I can see some analysis in terms of GFLOPs in the supplementary, but IMO VRAM would be more interesting and should be briefly discussed in the main text.

Requested Changes: The paper should be modified to address the weaknesses, i.e.:

1. Discussion about clip length and why the performance deteriorates for higher clip lengths even though temporal attention is explicitly handled in the proposed framework.

2. Analysis of VRAM usage for the proposed framework against existing baselines. Ideally this should be done for a few different feature map resolutions.

3. Some of the most important implementation details should be part of the main text, e.g., the feature map resolution at which Axial-VS is applied, the number of GPUs utilized for training, and the total training time.

Broader Impact Concerns: The paper proposes a low-level video feature learning mechanism. It is hard to see any serious ethical implications of such a work.

==================================================

Review 3:

Summary: The authors' core contribution is the axial-trajectory attention, an extension of a previous method (Patrick et al. 2021).
The axial-trajectory attention is a two-phase attention mechanism that initially operates on the T*H (or W) dimension, producing a feature vector for each point in TH across different times, as I understand it. This idea is interesting, and the duplication of time makes it less intuitive and more intriguing than other types of attention methods. Using the proposed attention mechanism, the authors enhance features for an off-the-shelf clip segmenter and generate better clip queries than the original clip segmenter. They then propose inputting these clip queries into a cross-clip tracking module, which utilizes a similar attention mechanism and a temporal ASPP to refine the results.

Strengths and Weaknesses:

Pros:
+ The paper is well written and easy to understand with minimal reading of previous works.
+ The method is novel, interesting, and not intuitive, and it improves the results of existing works.
+ The authors provided good ablations and experimental evaluations.
+ The authors have reimplemented the competitors' results, improving their initial results, and applied more modern networks within the said works. Reporting these new, better results is a very good practice and should be commended.

Cons:
- The usage of axial attention in the second part is not clear, and I did not fully understand how the main idea (the axial part across H and W) is translated there. The attention mechanism is similar in the regard that the K clips are used in a similar fashion to the previous usage of time, but it does not reflect the spatial context of the original attention; clarity could be improved there.

Nitpicking:
* Multiple frames in the appendix have really similar colors for adjacent instance segmentations, making it hard to tell the difference.
* The subsubsection titled "Within-Clip Tracking Module" in 3.2 is mostly about the multi-scale attention, and not the entire module; the title should reflect this.
* Italic 'reference point' (mid-end page 5) gives the notion that it is defined somewhere (e.g., the axial-attention); it is not, and some context should be added to it.
* In Eqn 3, there is a missing apostrophe in the first term on the RHS.

Requested Changes: In addition to the weaknesses and nitpicking mentioned above, it would be beneficial to include a section on the failure cases of the model, not just examples in the appendix. Since different models excel in different scenarios, it would be interesting to know when to use, and more importantly, when not to use your proposed method. Specifically, insights into the limitations of the proposed attention mechanism, which could be applicable to other tasks as well, would be very useful.

Broader Impact Concerns: I do not see any additional broader impact concerns.

==================================================

Metareview:

Recommendation: Accept as is

Comment: All the reviewers were positive about the paper. They cite extensive experimental evaluation as the main strength of the work, as well as the novel design of the proposed axial-trajectory attention. The authors engaged with the reviewers and successfully addressed all the concerns that were raised. Thus, I recommend the paper for acceptance to TMLR without any further revisions.

==================================================
# A Comprehensive Study Of Real-Time Object Detection Networks Across Multiple Domains: A Survey

Elahe Arani‡1,2, Shruthi Gowda‡1, Ratnajit Mukherjee1, Omar Magdy1, Senthilkumar Kathiresan1, Bahram Zonooz1,2 {elahe.arani, shruthi.gowda}@navinfo.eu, bahram.zonooz@gmail.com 1*Advanced Research Lab, NavInfo Europe, Eindhoven, The Netherlands* 2*Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands* ‡*Contributed equally.*

Reviewed on OpenReview: *https://openreview.net/forum?id=ywr5sWqQt4*

## Abstract

Deep neural network based object detectors are continuously evolving and are used in a multitude of applications, each having its own set of requirements. While safety-critical applications need high accuracy and reliability, low-latency tasks need resource- and energy-efficient networks. Real-time detection networks, which are a necessity in high-impact real-world applications, are continuously proposed, but they overemphasize the improvements in accuracy and speed while other capabilities such as versatility, robustness, resource, and energy efficiency are omitted. Neither a reference benchmark for existing networks nor a standard evaluation guideline for designing new networks exists, which results in ambiguous and inconsistent comparisons. We, therefore, conduct a comprehensive study on multiple real-time detection networks (anchor-based, keypoint-based, and transformer-based) on a wide range of datasets and report results on an extensive set of metrics. We also study the impact of variables such as image size, anchor dimensions, confidence thresholds, and architecture layers on the overall performance. We analyze the robustness of detection networks against distribution shift, natural corruptions, and adversarial attacks. Also, we provide a calibration analysis to gauge the reliability of the predictions. Finally, to highlight the real-world impact, we conduct two unique case studies, on autonomous driving and healthcare applications. To further gauge the capability of networks in critical real-time applications, we report the performance after deploying the detection networks on edge devices. Our extensive empirical study can act as a guideline for the industrial community to make an informed choice on the existing networks. We also hope to inspire the research community towards a new direction of design and evaluation of networks that focuses on the bigger and holistic overview for a far-reaching impact.

## 1 Introduction

Recent advancements in deep neural networks have led to remarkable breakthroughs in the field of object detection. Object detection combines both classification and localization by providing the locations of the object instances along with the class label and confidence scores. Detectors find their way into multiple applications such as autonomous driving systems (ADS), surveillance, robotics, and healthcare. An ADS needs to detect vehicles, traffic signs, and other obstructions accurately in real-time, and additionally, to ensure safety, it needs the detectors to perform reliably and consistently in different lighting and weather conditions. Healthcare applications require high accuracy even if inference is not extremely fast. Low-latency applications require deployment on edge devices and hence need detectors to be fast and also compact enough to fit on low-power hardware devices. Different applications have different criteria, and real-world settings come with time and resource constraints.
Therefore, detectors need to be resource- and energy-efficient to ensure deployment in high-impact real-world applications.

Table 1: Detection heads, backbones, datasets, and hardware used in the experiments in this study.

| Detection Head | Backbone | Dataset | Hardware |
|---|---|---|---|
| ThunderNet; YOLO; SSD; DETR; CenterNet; TTFNet; FCOS; NanoDet | ShuffleNet-v2; EfficientNet-B0; MobileNet-v2; DeiT-T; DarkNet-19; ResNet-18; Xception; HarDNet-68; VoVNet | VOC; COCO; Corrupted COCO; BDD; Cityscapes; Kvasir-SEG | NVIDIA 2080Ti; Jetson Xavier; Jetson TX2 |

This calls for a detailed analysis of real-time detection networks on different criteria. A number of real-time detection networks have been proposed which achieve state-of-the-art performance, and while they primarily focus on reporting accuracy and speed, other metrics such as simplicity, versatility, and energy efficiency are omitted. As newer detectors are being proposed frequently, without a standard evaluation framework, the trend is moving more towards a short-sighted direction that focuses only on nominal improvements. Furthermore, there is ambiguity in the parameters used, which makes reproducibility and fair comparison hard. Underplaying metrics such as resource and energy consumption can also result in a negative environmental impact (Badar et al., 2021). Several survey papers (Liu et al., 2018a; Zhao et al., 2018b; Zou et al., 2019) provide an overview of detection networks over the years, but certain gaps prevent them from being used as a standard benchmark. Most of the analyses are done on a subset of networks (mostly on older two-stage architectures) and do not focus on real-time detectors. Hence, they do not prove beneficial in comparing and choosing networks for real-time applications. Also, the focus is on accuracy and speed, while network capabilities such as generalization, robustness, and reliability are not covered. The studies are limited to a few networks and datasets, and the results are reported on desktop GPUs only. These shortcomings present a challenge in comparing existing detection networks in a fair and detailed manner to choose a network for a specific application, and also in designing new object detectors, as there is neither a detailed reference benchmark nor a standard evaluation guideline. We, therefore, conduct a comprehensive study on real-time object detection with multiple detectors on diverse datasets; along with the generic object detection benchmarks, we also provide two case studies on autonomous driving and healthcare applications. In addition to accuracy and inference time, we evaluate the resource and energy consumption of each model to estimate the environmental impact. We choose a multitude of networks and create a uniform framework where different combinations of backbones (or feature extractors) and detection heads can be analyzed with ease (Table 1). To further assess the performance gain/loss in detail, we decouple the effects of different variables such as image size, anchor size, confidence thresholds, and architecture layer types. For a uniform evaluation, we follow a standard experimentation pipeline and report all the parameters used. Each combination is trained and tested on two widely used generic datasets (PASCAL VOC (Everingham et al., 2010) and MS COCO (Lin et al., 2014)).
Networks should be robust to changing lighting and weather conditions in real-time applications; hence, we further conduct an extensive robustness analysis of the networks under distribution shift and natural corruptions. For safety-critical applications, networks should also be robust to adversarial images, which contain changes imperceptible to the human eye; hence, we evaluate the robustness of the networks to such attacks. Likewise, for these applications, a measure of uncertainty is useful for taking timely decisions, and hence we also provide a reliability analysis of each of the networks. Finally, to demonstrate the real-world impact, we conduct two exclusive case studies on the autonomous driving and healthcare domains. For the former, we report the detector performances on the Berkeley Deep Drive (BDD) (Yu et al., 2018) dataset, which is more relevant for ADS applications. We also show the generalization capability and report performance on an out-of-distribution (OOD) dataset, Cityscapes (Cordts et al., 2016). To highlight the feasibility of real-time deployment of the detectors, we deploy the NVIDIA TensorRT optimized models on embedded hardware and report the real-time performances on low-power devices. For the healthcare case study, we present the capability of the networks to detect polyps from medical images, which are used to detect cancer in patients. These applications cover two diverse domains with different requirements, and our case studies offer a unique perspective to look beyond the standard benchmarks and gauge the capability of the detectors on diverse datasets that are more relevant and applicable in real-time applications.

![2_image_0.png](2_image_0.png)

Figure 1: Overall performance of different object detector networks (with HarDNet-68 backbone) trained on the COCO dataset on different metrics. All metrics are normalized w.r.t. the minimum and maximum, while for the four metrics with the −1 superscript, normalization was applied to the inverse values. Therefore, a network with full coverage of the octagon has all the ideal features: the highest accuracy, natural/adversarial robustness, and speed with the lowest number of parameters, MAC count, energy consumption, and calibration error (check Section A.1 for more details).

Our comprehensive analyses are summarized in Figure 1, where each vertex corresponds to a metric and the eight different colors represent different detectors (check Section A.1 for more details). We report eight metrics, namely accuracy, robustness to natural and adversarial corruptions, speed, number of parameters, MAC (Multiply-Accumulate operations) count, energy consumption, and calibration error (which measures reliability). The plot is constructed such that the ideal network should occupy the whole octagon; such a network has the highest accuracy, robustness, and speed, and the lowest number of parameters and MAC count, while consuming the lowest energy and being the best calibrated.
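The normalization behind Figure 1 can be made concrete with a small sketch: each metric is min-max normalized across the networks, and for lower-is-better metrics (those with the −1 superscript) the inverse values are normalized instead, so that a larger score is always better. The metric values below are placeholders for illustration only.

```python
import numpy as np

def radar_scores(values, lower_is_better=False):
    """Min-max normalize one metric across all networks to [0, 1].

    For lower-is-better metrics (e.g., parameters, MAC count, energy,
    calibration error), normalization is applied to the inverse values.
    """
    v = np.asarray(values, dtype=float)
    if lower_is_better:
        v = 1.0 / v  # invert so that smaller raw values map to larger scores
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical example: accuracy (higher is better), energy (lower is better)
accuracy_axis = radar_scores([28.1, 33.5, 30.2])
energy_axis = radar_scores([12.0, 8.5, 15.3], lower_is_better=True)
```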
The only real-time two-stage detector, *ThunderNet*, designed for mobile devices, is resource-efficient but falls short in accuracy and natural robustness, and is one of the slowest networks. *YOLO*, an anchor-based detector, performs second-best in energy consumption and lies in the middle range of calibration, but falls behind in speed, accuracy, and robustness. *SSD*, another anchor-based detector, lies in the middle of the spectrum and provides a good balance between accuracy and speed. It also has the best calibration score and is more reliable. *DETR*, the transformer-based detector, is the second best in terms of adversarial robustness but has the lowest calibration score and hence less reliable predictions. *CenterNet* has the highest robustness to adversarial attacks, is the second fastest, and also lies in the good range on all other metrics. *TTFNet* lies in the middle of the spectrum. *FCOS* has the highest accuracy and robustness but falters in other metrics. *NanoDet* is the fastest and second-best in terms of accuracy, and has the lowest resource consumption. These four belong to the anchor-free keypoint-based detector category. Overall, *NanoDet* reaches the highest point on most of the vertices and has average scores on calibration, and hence is a contender for applications that need to be run on low-power devices with high speed and accuracy.

Apart from these extensive analyses, we also extract useful insights and details. The keypoint-based methods generally outperform the anchor-based and two-stage methods in terms of accuracy and speed. We also note that while higher MAC counts may result in higher energy consumption, they do not necessarily lead to an improvement in accuracy. All the detectors provide higher accuracy on medium and large objects but suffer in the case of small object detection. *FCOS* performs relatively better in detecting small objects when paired with heavier backbones such as *HarDNet-68*. We observe that an increase in input image size is not always beneficial, as the decline in speed often outweighs the increase in accuracy. The non-deterministic way in which anchor sizes affect the predictions of anchor-based detectors makes them difficult to adapt to newer datasets. Keypoint-based detectors tend to generalize well across multiple datasets. The variation in accuracy and speed seen with different confidence thresholds shows the ambiguity in reproducing the results. As transformers employ attention blocks to capture global context, they are less sensitive to varying image sizes and offer consistent performance. The calibration results show that keypoint-based methods are cautious and underconfident, thus proving useful in safety-critical applications. But, interestingly, *DETR* is the most overconfident among all the networks. As transformers are gaining more traction, such detailed analysis will offer more insights into the capabilities and pitfalls of this new architectural paradigm. The case study on ADS reveals that the performance trend seen on one device does not necessarily translate to the embedded hardware used in deployment. The healthcare case study shows that networks with relatively higher precision may not have higher recall values, which are more important for medical data (as a false negative is more harmful than a false positive).

Overall, we highlight the importance of a standard evaluation guideline that can address ambiguous comparisons and omissions of important metrics in the present literature. To that end, we provide a holistic view of real-time detection by providing both the big picture and a more zoomed-in view into the architectures and the analyses. The industrial community can benefit by referring to the extensive results to make an informed choice about the detector based on the application requirements. We also hope to encourage the research community to focus more on essential and far-reaching issues instead of nominal performance improvements.
Since new detection networks are being introduced frequently, this study can also be used as a guideline while designing new detectors. All the networks (detection heads and backbones), datasets, and hardware used in this study are summarized in Table 1. The contributions of the paper are summarized below:

- An extensive empirical study across combinations of nine feature extraction networks and eight detection heads ranging from two-stage, single-stage, anchor-based, keypoint-based, and transformer-based architectures.
- Detailed results including accuracy, speed, number of learnable parameters, MAC count, and energy consumption on the benchmark datasets.
- Effect of variables such as image size, anchor size, confidence thresholds, and specific architecture design layouts on the overall performance.
- Robustness analysis of all the networks against fifteen different natural corruptions and adversarial attacks with varying strength.
- Reliability analysis by assessing the calibration score of all the networks.
- A case study on Autonomous Driving Systems by performing analysis on the more relevant BDD dataset, and generalization performance on out-of-distribution data by testing the networks on the Cityscapes dataset.
- Deployment of TensorRT-optimized detectors on edge devices: Jetson Xavier and Jetson TX2.
- A case study on a healthcare application by performing analysis on the Kvasir-SEG dataset to detect cancerous polyps.

## 2 Related Work

Logothetis & Sheinberg (1996) conducted one of the initial surveys approaching visual object recognition from a neuroscience perspective. Grauman & Leibe (2011) and Andreopoulos & Tsotsos (2013) conducted surveys on computer vision-based visual object recognition systems through the years. However, these surveys focus entirely on classical computer vision techniques based on geometric properties and textures, covering the broader topic of object recognition and not specifically detection. Given the rapid progress of object detection in recent years, more surveys (Liu et al., 2018a; Zhao et al., 2018b; Zou et al., 2019) have been published providing broad coverage of deep learning-based detection techniques. Huang et al. (2017) benchmarked three object detectors along with three backbones on the COCO dataset. The paper reports accuracy and speed for the Faster R-CNN (Ren et al., 2015), R-FCN (Dai et al., 2016), and SSD (Liu et al., 2016) detectors in combination with the MobileNet (Howard et al., 2017), ResNet-101 (He et al., 2016), and InceptionResNet-V2 (Szegedy et al., 2016a) backbones. They mostly focus on two-stage detectors except for SSD and analyze their results on a single dataset, i.e., the MS COCO dataset. Wu et al. (2020) delve into more recent advances in deep learning for object detection and provide information on the evolution of object detection from traditional methods to deep learning-based methods. They provide results mostly on two-stage detectors belonging to the R-CNN family. Accuracy is the main focus again, while generalization, resource consumption, energy, robustness, and other metrics are omitted. The results are reported with resource-heavy backbones such as VGG-16 or ResNet-101 and do not qualify for a real-time detection survey. All the survey papers provide an overview of detection over the years, but there are shortcomings and gaps that have not been addressed. First, newer detector designs such as anchor-free and transformer-based detectors have not been considered.
Second, metrics such as energy consumption, MAC count, and parameter count of the architectures are not compared to provide a more detailed analysis of the resources needed to use the networks. Third, other relevant capabilities such as robustness, reliability, and out-of-distribution generalization are omitted, thus preventing a holistic analysis of all the detectors. Fourth, none of the surveys focus on real-time detection and hence cannot be used as a guide when trying to select (from existing networks) or design a new network for real-time applications. We focus on real-time object detection and use a standardized (plug-&-play style) pipeline to evaluate a large number of feature extractors across anchor-, keypoint-, and transformer-based detectors. We decouple the various factors that affect the detection results, such as image size, anchor size, deformable layers, and confidence thresholds, which helps in analyzing the networks in a more flexible manner. This facilitates a fair comparison framework to evaluate the detection models across a wide range of metrics and datasets, thus providing a holistic and unique insight into network design considerations. We, further, do an extensive robustness analysis against different corruptions and adversarial attacks. Robustness against adversarial attacks has been reported predominantly in the image recognition domain and not so much in object detection. To bridge this gap, we report the robustness performance of all the detection networks against varying attack strengths. We also perform a calibration study to evaluate the reliability of the predictions. Moreover, to analyze real-time performance, we benchmark all the networks on low-powered embedded devices, Nvidia TX2 and Xavier. Finally, we present two unique case studies on two diverse and equally important applications in the Autonomous Driving and Healthcare domains.

## 3 Object Detection: Brief Overview

Object detection combines both classification and localization by providing class labels and bounding box coordinates of the object instances. Convolutional Neural Network (CNN) based object detectors are typically classified into two categories, viz., two-stage and one-stage detection methods.

## 3.1 Two-Stage Detection

The two-stage detectors consist of a separate Region Proposal Network (RPN) along with classification. The extracted features from the proposed Regions of Interest (ROIs) in the RPN are passed both to the classification head, to determine the class labels, and to the regression head, to determine the bounding boxes (Girshick et al., 2014; Girshick, 2015; Ren et al., 2015; He et al., 2017; He et al., 2015). The Region-based Convolutional Neural Network (RCNN) uses a selective search algorithm to find regions of pixels in the image that might represent objects, and the shortlisted candidates are then passed through a CNN (Girshick et al., 2014). The extracted features from the CNN become inputs to a Support Vector Machine (SVM) that classifies the regions and to a regressor that predicts the bounding boxes. RCNN requires progressive multi-stage training and is quite slow. To overcome the shortcomings of RCNN, Fast-RCNN proposed certain modifications (Girshick, 2015).
Table 2: Overview of the detection heads used in this study: stage, localization type, multi-scale detection, neck type, convolution layers, loss functions, and NMS layer. Convolution layers: vanilla convolution (V), deformable convolution (DCN), depth-wise separable convolution (DWS), up-sampling (UP), and transposed convolution (T).

| Head | Stage | Localization Type | Multi-scale | Scales | Neck Type | Convolution Layers | Cls Loss | Loc Loss | NMS Layer |
|---|---|---|---|---|---|---|---|---|---|
| ThunderNet | two-stage | anchor-based | ✓ | 3 | CEM | V | CE | reg: Smooth L1 | Soft-NMS |
| YOLO | one-stage | anchor-based | ✓ | 2 | − | V | FL | reg: L2; conf: L2 | NMS |
| SSD | one-stage | anchor-based | ✓ | 6 | FPN | V | CE | reg: Smooth L1 | NMS |
| DETR | one-stage | self-attention | ✗ | − | − | − | CE | reg: L1 + GIoU | − |
| CenterNet | one-stage | keypoint-based | ✗ | − | − | V, DCN, T | FL | off: L1; emb: L1 | Maxpool |
| TTFNet | one-stage | keypoint-based | ✓ | 4 | − | V, DCN, UP | FL | reg: GIoU | Maxpool |
| FCOS | one-stage | keypoint-based | ✓ | 5 | FPN | V | FL | reg: IoU; cent: BCE | NMS |
| NanoDet | one-stage | keypoint-based | ✓ | 3 | PAN | V, DWS | GFL | reg: GIoU | NMS |

First, instead of using a CNN to extract features of the regions proposed by selective search, a CNN is used to directly extract features of the entire image. From the extracted features, a Region of Interest (ROI) pooling layer pools the features corresponding to the proposed image regions. Second, the SVM classifier and the regressor are each replaced with fully connected layers. Faster-RCNN proposed further improvements to get rid of the slow selective search algorithm for region proposals. The features extracted by the backbone CNN are sent to an additional CNN-based Region Proposal Network (RPN) that provides region proposals (Ren et al., 2015). However, despite high accuracy, the aforementioned two-stage detection methods are not suitable for real-time applications (Liu et al., 2018a; Zhao et al., 2018b; Zou et al., 2019). A lightweight two-stage detector named ThunderNet (Qin et al., 2019) was proposed with an efficient RPN coupled with a small backbone for real-time detection (Section 5.1).

## 3.2 One-Stage Detection

One-stage detectors consist of a single end-to-end feedforward network performing classification and regression in a monolithic setting. These detectors do not have a separate stage for proposal generation and instead consider all positions in the image as potential proposals. Each of these is used to predict class probabilities, bounding box locations, and confidence scores. The confidence score determines how sure the network is about its class prediction. The two main categories within one-stage detectors are anchor-based and keypoint-based detectors. Anchor-based detectors use predetermined anchor boxes (or priors) to aid predictions. Salient examples of this approach are You Only Look Once (YOLO; Redmon et al. (2016); Redmon & Farhadi (2017; 2018)) and the Single Shot Detector (SSD; Liu et al. (2016)). YOLO works by visualizing the input image as a grid of cells, where each cell is responsible for predicting a bounding box if the center of a box falls within the cell. Each grid cell predicts multiple bounding boxes and outputs the location and the class label along with its confidence score. SSD was the first single-stage detector to match the accuracy of contemporary two-stage detectors while maintaining real-time speed. SSD predicts scores and box offsets for a fixed set of anchors of different scales at each location on several feature maps extracted from a Feature Pyramid Network (FPN), which facilitates multi-resolution features (Lin et al., 2017a).
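As a concrete illustration of how anchor-based heads such as SSD and YOLO turn predicted offsets into boxes, the sketch below decodes center and size offsets relative to an anchor. The parameterization shown is the commonly used one from Ren et al. (2015); the exact details differ between detectors.

```python
import numpy as np

def decode_boxes(anchors, offsets):
    """Decode predicted offsets (tx, ty, tw, th) with respect to anchors.

    anchors: (N, 4) array of (cx, cy, w, h); offsets: (N, 4) head predictions.
    returns: (N, 4) boxes as (x1, y1, x2, y2).
    """
    cx = anchors[:, 0] + offsets[:, 0] * anchors[:, 2]  # shift center by tx * w_a
    cy = anchors[:, 1] + offsets[:, 1] * anchors[:, 3]  # shift center by ty * h_a
    w = anchors[:, 2] * np.exp(offsets[:, 2])           # scale width by exp(tw)
    h = anchors[:, 3] * np.exp(offsets[:, 3])           # scale height by exp(th)
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```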
Anchor-based detectors have the disadvantage of dealing with hyperparameters such as the number, aspect ratio, and size of the anchors, which are highly dataset-dependent. This resulted in the introduction of a new paradigm of anchor-free (aka keypoint-based) object detectors. Keypoint-based methods treat objects as points instead of modeling them as bounding boxes. Keypoints, such as corners or centers of objects, are estimated, and the width and height are regressed from these points instead of from predetermined anchors. Several keypoint-based networks were introduced, namely CornerNet, CenterNet, FCOS, NanoDet and TTFNet (Law & Deng, 2018; Zhou et al., 2019a; Tian et al., 2019; Lyu, 2020; Liu et al., 2020). Although both anchor-based and keypoint-based detectors have achieved remarkable accuracy in generic object detection, the field has largely been dominated by CNN-based architectures, which lack global context. Moreover, modern detectors typically perform regression and classification on a large set of proposals, anchors, or window centers. Thus, their performance suffers from complex post-processing tasks such as NMS (Non-Maximum Suppression, to remove near-identical sets of proposals; more details in Section 4.1.3). Vision transformers have been introduced as an alternative architectural paradigm to CNNs. Transformer-based detectors, such as DETR (Carion et al., 2020), make use of self-attention modules which explicitly model all interactions between elements in a given sequence, thus providing global context. The overall design of transformers also bypasses hand-crafted processes such as NMS by making direct predictions on a given input. Detailed information about each of these architectures is provided in Section 5 and Table 2.

## 4 Object Detection Formulation

Here, we first explain all the basic components of detection networks and then explain the main challenges in object detection.

## 4.1 Basic Components

The object detection problem can be formalized as follows: given an arbitrary image $I_i$ and a predefined list of object classes, the object detection model not only classifies the type of object instances present in the image $\{c_1, c_2, ..., c_m\}$ but also returns the location of each object in the form of bounding boxes $\{b_1, b_2, ..., b_m\}$, where $b_i = \{(x_1, y_1), (x_2, y_2)\}$ denotes the top-left and bottom-right coordinates of the bounding box. Object detectors, both single-stage and two-stage, typically consist of a feature extractor (hereafter, backbone) and a detection head. A backbone is usually a CNN-based network that extracts the most prominent representations of a scene (from low- to high-level features). Most backbones use pooling/convolution layers with strides to gradually reduce the size of feature maps and increase the receptive field of the network. The output feature maps are then passed to the detection head, which performs classification and regression to determine the label and location of the object instances (Figure 2 shows an overview of a generic object detector).

![6_image_0.png](6_image_0.png)

Figure 2: Schematic diagram of the main components of an object detector.

## 4.1.1 Loss Functions

We present a brief overview of the loss functions used to train an object detector. Two objective functions are typically used to train CNN-based detectors, i.e., the classification and the regression losses. The classification loss is usually defined by the Cross-Entropy (CE) loss:

$$\mathcal{L}_{CE}=-\sum_{i=1}^{n}t_{i}\log(p_{i})\tag{1}$$

where $t_i$ is the ground-truth label and $p_i$ is the softmax probability for the $i$-th class.
However, the CE loss does not account for imbalanced datasets, where less frequent examples are much harder to learn than frequently occurring objects. Thus, Lin et al. (2017b) proposed the Focal Loss (FL), which addresses the class imbalance by assigning more importance to hard samples while down-weighting the loss contribution of easy samples,

$$\mathcal{L}_{FL}=-\alpha_{i}(1-p_{i})^{\gamma}\log(p_{i})\tag{2}$$

where $\alpha_i$ is the weighting parameter and $\gamma \geq 0$ is the tunable modulating parameter. The regression loss is usually an L1 (Least Absolute Deviations) or L2 (Least Square Error) loss on all four bounding box coordinates between the ground-truth and the predicted bounding box.
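A minimal sketch of the focal loss in Eq. (2) is given below, using the common binary formulation; with γ = 0 and α = 1 it reduces to the standard cross-entropy of Eq. (1).

```python
import torch

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Focal loss of Eq. (2) for binary targets.

    probs:   (N,) predicted probabilities for the positive class.
    targets: (N,) ground-truth indicators t_i (1 for positives, 0 otherwise).
    """
    p_t = torch.where(targets == 1, probs, 1 - probs)  # prob of the true class
    loss = -alpha * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))
    return loss.mean()
```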
The heatmap predictions, along with embeddings, are used to estimate the correct location and size of the predicted boxes. For center keypoints, the distances from the center to the four sides of the object bounding box are predicted for detection (Figure 3c).

## 4.1.3 Non-Maximum Suppression (Nms)

Object detectors produce far too many proposals, many of which are redundant. To remove the dense set of duplicate predictions, detectors commonly use a post-processing step called NMS. The NMS module first sorts the predicted proposals for each instance by their confidence score and selects the proposal with the highest confidence. Subsequently, a Jaccard overlap is computed with the near-identical set of proposals for each instance,

$$IoU(b_{m},b_{i})=\frac{b_{m}\cap b_{i}}{b_{m}\cup b_{i}}\tag{3}$$

where $b_m$ is the proposal with the highest confidence and $b_i$ represents each of the near-identical proposals for the same instance. The duplicates are removed if this value is greater than the set NMS threshold (typically 0.5). However, one of the issues associated with NMS is that valid proposals are rejected when the proposals (for different instances) are close to each other or overlapping in some cases. This is especially true for crowded scenes. Therefore, Soft-NMS was proposed to relax the NMS constraint. In Soft-NMS, the scores of detection proposals (for other instances) that have only a slight overlap with $b_m$ are mildly decayed, while proposals that have a higher overlap with $b_m$ are decayed more strongly, such that the duplicates can still be removed. This is done by multiplying the confidence score of a proposal $b_i$ by the complement of its IoU with $b_m$, as shown below (a short code sketch of this procedure is given later in this section):

$$s_{i}=\begin{cases}s_{i},&\text{if}\ IoU(b_{m},b_{i})<threshold\\ s_{i}(1-IoU(b_{m},b_{i})),&\text{if}\ IoU(b_{m},b_{i})\geq threshold\end{cases}\tag{4}$$

Keypoint-based detectors do not use this IoU-based NMS, as they process points on heatmaps instead of overlapping boxes. The NMS in these networks is a simple peak-based maxpool operation that is computationally less expensive.

## 4.2 Challenges Of Object Detection

Object detection as a computer vision problem is inherently challenging, as an ideal detector has to deliver both high accuracy and high performance at reasonable energy and computational expense. Later, in Section 8, we discuss the pros and cons of several backbone and detection head combinations to demonstrate the trade-off between accuracy and speed. Detection accuracy and inference speed also depend on both image sizes and object sizes. While higher image resolutions yield better accuracy by extracting more information from the scene, they also lower inference speed. Therefore, choosing an image size that provides the right balance between accuracy and speed is of paramount importance. Additionally, object sizes play a major role in detection accuracy. While detectors can achieve high accuracy on medium to large objects, almost all detectors struggle to detect smaller objects in the scene (Liu et al., 2021). In Sections 9.1 and 9.2, we study the effects of object sizes and image sizes on detection accuracy and speed. To deliver high accuracy, the detector needs to be robust and localize and classify real-world objects with significant intra-class variation (for instance, variations in size, shape, and type of objects), pose, and non-rigid deformations.
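Returning to the post-processing step of Section 4.1.3, the sketch below makes the IoU test of Eq. 3 and the Soft-NMS decay of Eq. 4 concrete. It is a minimal PyTorch version under our own assumptions: boxes are given in (x1, y1, x2, y2) format, and decayed scores falling below `score_thresh` are treated as removed.

```python
import torch

def iou_one_to_many(box: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
    """Eq. 3: IoU of one box (4,) against N boxes (N, 4) in (x1, y1, x2, y2) format."""
    x1 = torch.maximum(box[0], boxes[:, 0])
    y1 = torch.maximum(box[1], boxes[:, 1])
    x2 = torch.minimum(box[2], boxes[:, 2])
    y2 = torch.minimum(box[3], boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes: torch.Tensor, scores: torch.Tensor,
             iou_thresh: float = 0.5, score_thresh: float = 0.01) -> list:
    """Eq. 4: decay (rather than drop) the scores of proposals overlapping b_m."""
    scores = scores.clone()
    keep = []
    while scores.max() > score_thresh:
        m = int(scores.argmax())          # b_m: current highest-confidence proposal
        keep.append(m)
        ious = iou_one_to_many(boxes[m], boxes)
        decay = torch.where(ious >= iou_thresh, 1.0 - ious, torch.ones_like(ious))
        scores *= decay                   # heavier overlap -> stronger decay
        scores[m] = 0.0                   # retire the selected proposal
    return keep
```

Setting the decay to zero instead of `1 - IoU` recovers standard hard NMS.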
For anchor-based detectors, the optimization of anchors is a challenge, as they are dataset-dependent. In Section 9.3, we demonstrate how varying anchor sizes affect detection accuracy. Another major challenge is maintaining consistent performance in varying weather (rain, snow, blizzard) and lighting conditions. For applications such as autonomous driving, the detector also has to account for cluttered backgrounds, crowded scenes, and camera effects. We provide a detailed study on the robustness of the detectors in Section 11. Finally, deep neural networks tend to rely on shortcut learning cues, hence overfitting to the training data distribution and not generalizing to out-of-distribution (OOD) data. As generalization to unseen data is the most important concern, we provide a detailed analysis on both in-distribution and OOD data in Section 13.

## 5 Object Detection Heads

Since the scope of our study is real-time object detection, we focus on one **two-stage** detector: ThunderNet (Qin et al., 2019), two **anchor-based** detectors: SSD (Liu et al., 2016) and YOLO (Redmon & Farhadi, 2017), four (anchor-free) **keypoint-based** detectors: CenterNet (Zhou et al., 2019a), FCOS (Tian et al., 2019), NanoDet (Lyu, 2020) and TTFNet (Liu et al., 2020), and one **transformer-based** detector: DETR (Carion et al., 2020).

Figure 4: Schematic diagram of the two-stage detector, ThunderNet.

## 5.1 Thundernet

ThunderNet revisits the two-stage detector architecture, improves upon Lighthead-RCNN (Li et al., 2017), and uses a variant of ShuffleNet-v2 (Ma et al., 2018) as the backbone. The detection head increases the number of channels in the early stages of the network to encode low-level features and provide a boost in accuracy. ThunderNet uses two new modules: the Context Enhancement Module (CEM) and the Spatial Attention Module (SAM). CEM aggregates features from three different scales, enlarging the receptive field by utilizing local and global features. SAM refines the features by strengthening foreground features while suppressing background features (Figure 4). The output of the SAM module is given as (a code sketch of this gating is given below):

$$F_{SAM}=F_{CEM}\cdot\sigma(\mathcal{T}(F_{RPN}))\tag{5}$$

where $F_{SAM}$, $F_{CEM}$, and $F_{RPN}$ represent the output features of the SAM, CEM, and RPN modules, respectively. $\sigma(\cdot)$ is the sigmoid function and $\mathcal{T}(\cdot)$ denotes the dimension transformation function used to match the number of output channels of $F_{CEM}$ and $F_{RPN}$.

$$\mathcal{L}_{rpn}=\frac{1}{N_{b}}\sum_{i}\mathcal{L}_{cls}\left(p_{i},t_{i}\right)+\lambda\frac{1}{N_{a}}\sum_{i}t_{i}\mathcal{L}_{reg}(b_{i},b_{g})\tag{6}$$

where $\mathcal{L}_{cls}$ is the log loss over two classes (object or not). $b_i$ and $b_g$ represent the bounding box regression targets for anchor $i$ and the ground-truth, respectively. Anchors whose overlap with any ground-truth box is higher than a given threshold are considered positive ($t_i = 1$), and the rest of the anchors are considered negative ($t_i = 0$). Thus, the multiplicative term $t_i$ ensures that the regression loss is activated only for positive anchors. $N_a$ and $N_b$ represent the number of anchor locations and the mini-batch size, and $\lambda$ is the balancing weight.
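A minimal PyTorch sketch of the SAM gating in Eq. 5 follows; the channel counts and the choice of a 1×1 convolution for the dimension transformation $\mathcal{T}(\cdot)$ are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttentionModule(nn.Module):
    """Eq. 5: F_SAM = F_CEM * sigmoid(T(F_RPN))."""

    def __init__(self, rpn_channels: int = 256, cem_channels: int = 245):
        super().__init__()
        # T(.): match the RPN feature channels to the CEM output channels
        self.transform = nn.Conv2d(rpn_channels, cem_channels, kernel_size=1)

    def forward(self, f_cem: torch.Tensor, f_rpn: torch.Tensor) -> torch.Tensor:
        attention = torch.sigmoid(self.transform(f_rpn))  # foreground saliency in [0, 1]
        return f_cem * attention                          # suppress background features

# usage with assumed feature shapes
sam = SpatialAttentionModule()
f_cem = torch.randn(1, 245, 20, 20)
f_rpn = torch.randn(1, 256, 20, 20)
out = sam(f_cem, f_rpn)   # (1, 245, 20, 20)
```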
Similar to Fast R-CNN, ROI pooling is performed, and these regions are sent through two branches for classification and regression, with the objective function as follows:

$$\mathcal{L}=\mathcal{L}_{cls}(p,u)+\lambda[u\geq1]\mathcal{L}_{reg}(b_{u},b_{g})\tag{7}$$

where $\mathcal{L}_{cls}$ is the log loss for the true class $u$ and $\lambda$ is the balancing weight. $\mathcal{L}_{reg}$ computes the regression loss over the ground-truth bounding box and the predicted box for class $u$. $[u \geq 1]$ is the Iverson indicator function that evaluates to 1 when $u \geq 1$ ($u = 0$ is the background class).

## 5.2 You Only Look Once (Yolo)

YOLO (Redmon et al., 2016) is a single-stage object detection network that was targeted at real-time processing. YOLO divides the image into grid cells, and each cell predicts one object defined by bounding boxes and scores. An object is said to belong to a particular grid cell if its center lies in it. YOLO is fast and simple but suffers from low recall. Redmon & Farhadi (2017) proposed YOLO v2 to improve the accuracy and speed of YOLO. Instead of making arbitrary guesses for bounding boxes, YOLO v2 uses anchors of different sizes and aspect ratios in each grid to cover different positions and scales of the entire image. Anchors can be made more accurate by calculating anchor sizes using an IoU-based k-Means clustering over the particular dataset. The network predictions are the offsets of each of the anchor boxes. YOLO v2 makes bounding box predictions on a single feature map obtained by concatenating two feature map scales (Figure 5).

Figure 5: Schematic diagram of the single-stage anchor-based detector, YOLO.

Multiple other versions of YOLO have been introduced, and all of them are built upon the fundamental concept of YOLO v2, but with many tips and tricks to achieve higher performance. As we are trying to evaluate on a fair and simple framework, in this study we consider only YOLO v2, as it is simple, fast, and has a minimal bag of tricks. The loss function is composed of the classification loss, the localization loss, and the confidence loss (which measures the objectness of the box):

$$\mathcal{L}=\mathcal{L}_{cls}+\lambda\mathcal{L}_{reg}+\lambda^{\prime}\mathcal{L}_{conf}\tag{8}$$

where $\mathcal{L}_{cls}$ is the Focal Loss, and $\mathcal{L}_{reg}$ and $\mathcal{L}_{conf}$ are both L2 losses. $\mathcal{L}_{conf}$ measures the objectness of a box (e.g., if an object is not present in the bounding box, its objectness confidence is reduced). $\lambda$ and $\lambda^{\prime}$ are the balancing weights.

## 5.3 Single Shot Multibox Detector (Ssd)

SSD (Liu et al., 2016) has a feedforward CNN which produces bounding boxes, confidence scores, and classification labels for multiple object instances in a scene. SSD uses multiple feature maps at progressively reduced resolutions, emulating input images of different sizes while sharing computation across scales. While the feature maps from shallow layers are used to learn coarse features of smaller objects, the features from deeper layers are used to localize larger objects in the scene. The detection head employs separate predefined anchor boxes for each feature map scale and combines the predictions of all default anchors at different scales and aspect ratios (Figure 6).
The scale and size of the anchors for each feature map $k$ is defined as:

$$s_{k}=s_{min}+\frac{s_{max}-s_{min}}{m-1}(k-1)\tag{9}$$

where $k \in [1, m]$, and the default values for $s_{min}$ and $s_{max}$ are given as 0.2 and 0.9, respectively. $m = 6$ feature maps are used in SSD.

Figure 6: Schematic diagram of the single-stage anchor-based detector, SSD.

Figure 7: Schematic diagram of the single-stage keypoint-based detector, CenterNet.

SSD produces a diverse set of predictions covering object instances of various shapes and sizes. SSD uses a matching strategy to determine which anchors correspond to the ground-truth, and the anchor with the highest overlap is used to predict that object's location and class. The objective function is derived from the multibox objective (He et al., 2015) and extended to multiple categories. The overall objective function is

$$\mathcal{L}=\frac{1}{N_{pos}}\mathcal{L}_{cls}+\lambda\mathcal{L}_{reg}\tag{10}$$

where $\mathcal{L}_{cls}$ is the cross-entropy loss and $\mathcal{L}_{reg}$ is the sum of the Smooth L1 loss across all bounding box properties for the matched positive boxes. $N_{pos}$ is the number of positive samples and $\lambda$ is the balancing weight.

## 5.4 Centernet

Anchor-based detectors have to deal with hyperparameters such as the number, aspect ratio, and size of anchors, which are also highly dataset-dependent. CornerNet was proposed as the first alternative to the anchor-based approach, reducing the object detection problem to a keypoint estimation problem (Law & Deng, 2018). Amongst the multiple methods proposed in (Law et al., 2019; Zhou et al., 2019b;a), we use CenterNet (Zhou et al., 2019a), since it not only achieves higher accuracy than CornerNet but also simplifies keypoint estimation. The detection algorithm augments the backbone with three transposed convolutional layers to produce high-resolution outputs. The first branch outputs a heatmap to estimate the keypoints or the center points of the objects, and the number of heatmaps is equal to the number of classes. Ground-truth heatmaps are created by applying Gaussian kernels to the centers of the ground-truth boxes. The peaks are used to estimate the center of an instance and also to determine the instance category. There are two more branches: the embedding branch regresses the dimensions of the boxes, i.e. width and height, and the offset branch accounts for the discretization error caused by mapping the center coordinates to the original input dimension (see Figure 7). The overall objective function is given as:

$$\mathcal{L}=\mathcal{L}_{cls}+\lambda\mathcal{L}_{off}+\lambda^{\prime}\mathcal{L}_{emb}\tag{11}$$

where $\mathcal{L}_{cls}$ is a penalty-reduced pixel-wise logistic regression with Focal Loss (Lin et al., 2017b), $\mathcal{L}_{off}$ is an L1 loss that minimizes the discretization error on the center coordinates, and $\mathcal{L}_{emb}$ is also an L1 loss that minimizes errors in computing the width and height of the predicted boxes. $\lambda$ and $\lambda^{\prime}$ are balancing weights.

## 5.5 Fully Convolution One-Stage Object Detection (Fcos)

FCOS, a fully convolutional anchor-free detector, reformulates object detection as a per-pixel prediction problem, similar to semantic segmentation (Tian et al., 2019). The detector uses multi-level prediction with an FPN (Lin et al., 2017a) to improve recall and resolve the ambiguity caused by overlapping bounding boxes.
Five feature maps are obtained at different scales, and pixel-by-pixel regression is performed on each of these layers. This increases recall but gives rise to low-quality predictions at locations far away from the center of the object. To avoid this, an additional branch is added in parallel to predict the centerness of a location (Figure 8). The overall loss function is given as:

$$\mathcal{L}=\frac{1}{N_{pos}}\mathcal{L}_{cls}+\frac{\lambda}{N_{pos}}\mathcal{L}_{reg}+\mathcal{L}_{cent}\tag{12}$$

Figure 8: Schematic diagram of the fully convolutional single-stage keypoint-based detector, FCOS.

where $\mathcal{L}_{cls}$ is the Focal Loss, $\mathcal{L}_{reg}$ is the IoU regression loss, and $\mathcal{L}_{cent}$ is the centerness loss, which uses a Binary Cross-Entropy (BCE) loss. $N_{pos}$ is the number of positive samples and $\lambda$ is the balancing weight. The IoU regression is based on UnitBox (Yu et al., 2016) and is a form of cross-entropy loss with the IoU as input. Instead of the L2 loss, which optimizes the coordinates independently, the IoU loss treats them as one unit. The final score is weighted by the centerness score. Hence, this branch down-weights the scores of bounding boxes predicted farther away from the object center, which helps the final NMS in filtering out low-quality predictions.

## 5.6 Nanodet

Inspired by FCOS, NanoDet was introduced as a lightweight anchor-free detector (Lyu, 2020). The proposed detector uses the Adaptive Training Sample Selection (ATSS) module (Zhang et al., 2020), which automatically selects positive and negative training samples based on object characteristics. The detector uses a Generalized Focal Loss (GFL) (Li et al., 2020) for classification and regression. GFL aims to extend the Focal Loss from the discrete to the continuous domain for better optimization. It is a combination of the Quality Focal Loss (QFL) and the Distributed Focal Loss (DFL). QFL combines classification and IoU quality to provide one score for both, and DFL views the prediction boxes as a continuous distribution and optimizes it. The Generalized IoU loss (GIoU) is useful for non-overlapping cases, as it grows the predicted box to overlap with the target box by moving slowly towards it. The overall loss function used for training NanoDet is given as:

$$\mathcal{L}=\frac{1}{N_{pos}}\sum_{z}\mathcal{L}_{QFL}+\frac{1}{N_{pos}}\sum_{z}1_{\{c_{z}^{*}>0\}}(\lambda\mathcal{L}_{GIoU}+\lambda^{\prime}\mathcal{L}_{DFL})\tag{13}$$

where $\mathcal{L}_{QFL}$ and $\mathcal{L}_{DFL}$ are the Quality and Distributed Focal losses and $\mathcal{L}_{GIoU}$ is the Generalized IoU loss. $N_{pos}$ is the number of positive samples, and $\lambda$ and $\lambda^{\prime}$ are balancing weights. $z$ denotes all locations on the pyramid feature maps. While FCOS uses five feature maps which are passed to a multi-level FPN, NanoDet uses three feature maps which are passed to three individual Path Aggregation Network (PAN) (Liu et al., 2018b) blocks. PAN is similar to FPN but enhances the lower-level features by adding a bottom-up path augmentation. The outputs from the PAN blocks are connected to individual detection heads which compute the classification labels and bounding boxes for a specific feature map.

Figure 9: Schematic diagram of the fully convolutional single-stage keypoint-based detector, NanoDet.

Figure 10: Schematic diagram of the transformer-based detector, DETR.

NanoDet also removes the centerness branch which exists in FCOS, thus making it a faster variant.
The outputs from the three heads are finally passed to the NMS to obtain the final boxes and classification labels for the input image (see Figure 9).

## 5.7 Detection Transformer (Detr)

Transformers are a new design paradigm in computer vision which relies on the attention mechanism; they were introduced to object detection for the first time by DETR (Carion et al., 2020). DETR casts the object detection task as a direct set prediction problem, which eliminates duplicate bounding box predictions. Transformers capture the pairwise relations between the objects based on the entire image context by using the self-attention module, thus avoiding repetitive predictions. This is a benefit over conventional object detectors, which use post-processing steps such as NMS to eliminate the duplicate predictions, creating a computational bottleneck. DETR consists of an encoder-decoder transformer and a FeedForward Network (FFN) that makes the final prediction (Figure 10). The encoder consists of a Multi-Head Self-Attention (MHSA) module (Vaswani et al., 2017) and an FFN. These blocks are permutation-invariant and hence, a fixed positional encoding is added to the input of every attention layer. The decoder uses the encoder features and transforms the object queries into an output embedding using multiple MHSA modules. The $N$ output embeddings are used by two different FFN layers, one for predicting class labels and the other for predicting box coordinates. DETR finds the best predicted box for every given ground-truth using unique bipartite matching. The one-to-one mapping of each of the $N$ queries to each of the $N$ ground-truths is computed efficiently using the Hungarian optimization algorithm. After obtaining all matched pairs for the set, a standard cross-entropy loss for classification and a linear combination of L1 and GIoU losses for regression are used. Auxiliary losses are added after each decoder layer to help the model output the right number of objects in each class. Given that $\lambda$ and $\lambda^{\prime}$ are the balancing weights, the total loss is as follows:

$$\mathcal{L}=\lambda\mathcal{L}_{cls}+\lambda^{\prime}\mathcal{L}_{reg}\tag{14}$$

## 5.8 Training-Time-Friendly Network (Ttfnet)

Inspired by CenterNet (Zhou et al., 2019a), TTFNet uses the same strategy wherein detection is treated as a two-part problem of center localization and bounding box size regression (Liu et al., 2020). For center localization, TTFNet adopts a Gaussian kernel to produce a heatmap with higher activations near the object center, similar to CenterNet; additionally, it also considers the aspect ratio of the bounding boxes. For size regression, instead of choosing just the center pixel as a training sample, TTFNet treats all the pixels in the Gaussian area as training samples. Also, these samples are weighted by a factor calculated from the target size and the Gaussian probability, thus utilizing more information. The motivation is that encoding more training samples is similar to increasing the batch size, which helps in expanding the learning rate and speeding up the training process. TTFNet modifies the Gaussian kernel by constructing a sub-region around the center of an object and extracting training samples only from this region (Figure 11). The Gaussian probability is used as the weight to emphasize samples close to the center of the target, hence alleviating the overlapping ambiguity.
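To illustrate the heatmap encoding shared by CenterNet and TTFNet, here is a minimal NumPy sketch that renders a Gaussian peak for one object center. The elliptical sigmas (which let the kernel follow the box aspect ratio, as TTFNet does) and their values are our own illustrative assumptions.

```python
import numpy as np

def draw_center_gaussian(heatmap: np.ndarray, center, sigma_x: float, sigma_y: float):
    """Render one object's Gaussian peak onto a single-class heatmap (in place).

    heatmap: (H, W) array; center: (cx, cy) in feature-map coordinates.
    """
    h, w = heatmap.shape
    cx, cy = center
    ys, xs = np.ogrid[:h, :w]
    gauss = np.exp(-((xs - cx) ** 2 / (2 * sigma_x ** 2)
                     + (ys - cy) ** 2 / (2 * sigma_y ** 2)))
    np.maximum(heatmap, gauss, out=heatmap)  # keep the max where objects overlap

# one (num_classes, H, W) target; the sigmas would be derived from the box size
target = np.zeros((80, 128, 128), dtype=np.float64)
draw_center_gaussian(target[3], center=(40, 60), sigma_x=5.0, sigma_y=8.0)
```

The per-pixel Gaussian values are exactly the weights used to emphasize regression samples near the center of the target.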
Due to the large variance in object sizes, larger objects produce significantly more samples than smaller ones, and hence the contribution of smaller objects becomes negligible, which can hamper detection accuracy. Thus, a loss balancing strategy is introduced which makes good use of the richer annotation information in large objects while retaining the information of smaller objects.

Figure 11: Schematic diagram of the single-stage keypoint-based detector, TTFNet.

$$\mathcal{L}_{reg}=\frac{1}{N}\sum_{(i,j)\in A_{m}}GIoU(\hat{b}_{ij},b_{m})\times W_{ij}\tag{15}$$

Given a ground-truth box $b_m$, a Gaussian kernel is adopted and each pixel within a sub-area $A_m$ is treated as a regression sample. $\hat{b}_{ij}$ is the predicted box, $W_{ij}$ is the balancing weight, and $N$ is the number of regression samples. Thus, the overall loss is as follows:

$$\mathcal{L}=\lambda\mathcal{L}_{cls}+\lambda^{\prime}\mathcal{L}_{reg}\tag{16}$$

where $\lambda = 1.0$ and $\lambda^{\prime} = 5.0$ are the classification and regression balancing weights, and $\mathcal{L}_{cls}$ is a modified version of the Focal Loss presented in Kong et al. (2019).

## 6 Backbones

In this study, we choose nine feature extractors as backbones based on factors such as speed, energy, and memory efficiency, specifically catering to real-time applications. In the following, we present the backbones in chronological order.

ResNet: He et al. (2016) reformulated the network layers as learning residual functions with residual skip connections. Networks with skip connections are easier to optimize and can gain considerable accuracy with increased depth. *ResNet-18*, a lightweight variant of deep residual networks consisting of four residual blocks, each with two convolutions followed by batch normalization layers, is considered in this study.

DarkNet: Redmon & Farhadi (2017) proposed a computationally lightweight feature extractor, DarkNet, as part of their proposal for a real-time object detection algorithm, YOLO. DarkNet improves over VGG-16 by reducing the number of parameters. For the purpose of real-time detection, only *DarkNet-19* is considered in this study.

Xception: Chollet (2017) proposed *Xception* as an improvement to Inception-V3 (Szegedy et al., 2016b) which is entirely based on depth-wise separable convolutions (DWS; Kaiser et al. (2017)). The proposed architecture is a linear stack of 36 depth-wise separable convolution layers, structured into 14 modules, all of which have residual connections except the first and last.

MobileNet: Sandler et al. (2018) designed *MobileNet-v2* as a lightweight backbone specifically for real-time object detection on embedded devices. The architecture uses inverted residual blocks with linear bottlenecks and depth-wise separable convolutions. It is termed inverted because skip connections exist between the narrow parts of the network, which results in a lower number of parameters. Additionally, the network contains skip connections for feature re-usability between the input and output bottlenecks.
Table 3: Summary of all the datasets used in this study, including the type of data. U and R stand for urban and residential areas, whereas D and N stand for day and night cases.

| Dataset | Year | Type | #cls | Resolution | Total Img (annotations) | Training Img (annotations) | Test Img (annotations) |
|---|---|---|---|---|---|---|---|
| VOC | '07, '12 | Generic | 20 | Various | 21,503 (52,090) | 16,551 (40,058) | 4952 (12,032) |
| COCO | '17 | Generic | 80 | Various | 123,287 (896,782) | 118,287 (860,001) | 5000 (36,781) |
| BDD | '18 | AD: U/R/D/N | 10 | 1280×720 | 80,000 (1,472,397) | 70,000 (1,286,871) | 10000 (185,526) |
| Cityscapes | '16 | AD: U/D | 10 | 2048×1024 | 5000 (99,523) | 4500 (83,574) | 500 (15,949) |
| Kvasir-SEG | '20 | Medical | 1 | various¹ | 1000 | 800 | 200 |

¹ Ranging from 332×487 to 1920×1072.

ShuffleNet-v2: Ma et al. (2018) designed *ShuffleNet-v2* to optimize inference latency by reducing the memory access cost. The building blocks of this architecture consist of a channel split operation that divides the input into two parts, each of which is fed to a residual block. A channel shuffle operation is introduced to enable information transfer between the two splits to improve accuracy. The high efficiency of each building block enables using more feature channels and a larger network capacity.

VoVNet: Lee et al. (2019) proposed VoVNet as an energy-efficient backbone for real-time detection. It is built using One-Shot Aggregation (OSA) modules, which concatenate all intermediate features only once in the last feature map. The OSA block has convolution layers with the same input/output channels to minimize the MAC count, which in turn improves the GPU computation efficiency. *VoVNet-39*, the faster and more energy-efficient variant, is used in this study.

EfficientNet: Tan & Le (2019) designed EfficientNet, a feature extractor, using an automated multi-objective architecture search which is optimized for accuracy and MAC count. The proposed architecture achieves high accuracy by re-adjusting and balancing the network depth, width, and resolution. The building blocks of this architecture use Mobile Inverted Bottleneck Convolutions (MBConv) and also include Squeeze-and-Excitation (SE) modules (Hu et al., 2018). Amongst the several versions proposed, we use *EfficientNet-B0*, which is the most lightweight version of this architecture.

HarDNet: Chao et al. (2019) proposed Harmonic Densely Connected Networks (HarDNet) to achieve high efficiency in terms of both MAC count and memory traffic. HarDNet stands out among all the other backbones in terms of reduced DRAM (Dynamic Random Access Memory) traffic. The sparsification scheme proposes a connection pattern between the layers such that it resembles an overlapping of power-of-two harmonic waves (hence the name). The proposed connection pattern forms a group of layers called a Harmonic Dense Block (HDB), and instead of passing through all layers, gradients in an HDB of depth L pass through only log L layers. The output of an HDB is the concatenation of layer L and all its preceding odd-numbered layers, and the output of the even layers is discarded once the HDB is done. Also, stride-8 is used instead of stride-16 (which is adopted in many CNN networks) to enhance local feature extraction. Along with reducing the traffic of feature maps, this also provides other advantages such as lower latency, higher accuracy, and higher speed. Amongst the several versions proposed, *HarDNet-68* is used for this study.
DeiT: Touvron et al. (2021) modified vision transformers to be used as feature extractors for dense prediction tasks. The proposed architecture, Data-efficient image Transformer (DeiT), consists of repeated blocks of self-attention, feedforward layers, and an additional distillation module. To extract meaningful image representations, the learned embeddings from the final Transformer block are sent to an extra block to obtain features at different scales before passing them to the detection head. The smallest version of this architecture, i.e. *DeiT-T* (where T stands for tiny), is used for this study.

## 7 Empirical Evaluation

We briefly introduce the datasets, the evaluation metrics, and the experimental setup. The overall summary of all datasets is shown in Table 3. The experimental setup and hyperparameter details for all the detectors are outlined in the Appendix, Table 12.

## 7.1 Datasets

PASCAL VOC, henceforth termed VOC (Everingham et al., 2010), consists of 20 object categories divided into two datasets, viz. VOC 2007 and 2012, with a combined total of 21,493 images containing 52,090 annotations. We combine the training and validation sets of both VOC '07 and '12 for training. The final testing is conducted over the VOC '07 test set consisting of 4952 images.

COCO (Lin et al., 2014) is a more challenging dataset and consists of 80 object categories. Following Zhou et al. (2019a), we use the 2017 split of the dataset, which contains 118,287 images (860,001 labeled instances) for training. For the final testing, we use 5000 images with 36,781 instances.

BDD (Yu et al., 2018) is one of the largest and most challenging datasets on autonomous driving. It contains a variety of driving scenarios, including urban, highway, and rural areas, as well as a variety of weather and day/night driving conditions representing real-life driving challenges. The training set contains 69,863 images with ≈ 1.28 million labeled instances, and the test set contains 10,000 images with 185,526 labeled instances across 10 object categories.

Cityscapes (Cordts et al., 2016) is a well-documented dataset for complete scene understanding in challenging urban scenarios. We extracted the bounding boxes from the instance segmentation, and the labeled annotations were then grouped into 10 super-categories to match the categories of BDD. For OOD evaluation, we used the test set of 500 images with 15,949 bounding boxes.

Corrupted COCO. To test the robustness of the models, we create a dataset that emulates different external influences found in real-world scenarios by adding corruptions (Michaelis et al., 2019) to the original COCO dataset. There are 15 different corruptions, and we categorize them into four groups: Noise, Blur, Weather, and Digital effects. Noise consists of Gaussian, Impulse, and Shot noise. The Blur group consists of Defocus, Glass, Motion, and Zoom blur effects. We use Brightness, Fog, Frost, and Snow to mimic different Weather conditions. Finally, we account for Digital effects by adding changes in Contrast, Elastic Transformation, JPEG compression, and Pixelation. These 15 corruptions are applied at 5 different levels of severity, ranging from 1 (least severe corruption) to 5 (most severe corruption).

Kvasir-SEG (Jha et al., 2020) is a bio-medical dataset for localizing polyps in the gastrointestinal region. The dataset consists of 1000 images with the segmentation masks of the polyps present in each image.
The dataset also has bounding boxes obtained from the segmentation masks. Here, the dataset is divided into 800 images for training and 200 images for testing.

## 7.2 Evaluation Metrics

Object detectors make predictions in terms of a bounding box and a class label. Here, we first measure the overlap between the predicted bounding box and the ground-truth by calculating the IoU (given in Eq. 3). Based on an IoU threshold, the predicted box is categorized as a True Positive (TP), False Positive (FP), or False Negative (FN). Next, we compute the precision and recall,

$$precision=\frac{TP}{TP+FP},\qquad recall=\frac{TP}{TP+FN}.\tag{17}$$

Precision measures how accurate the predictions are, while recall shows how well the model finds all the positives. High precision but low recall implies more FN, while the reverse implies more FP. A precision-recall (PR) curve shows the trade-off between the precision and recall values for different thresholds. It is downward sloping because, as the threshold is decreased, more predictions are made (higher recall) but they are less precise (lower precision). We compute the average of the precisions (AP) across all the recall values (between 0 and 1) at various IoU thresholds, which can be interpreted as the area under the PR curve. Finally, the mAP (mean Average Precision) is computed by averaging the AP over all the classes. The PASCAL VOC challenge (Everingham et al., 2010) evaluates the mAP at the 0.5 IoU threshold (@IoU:0.5), whereas the COCO challenge (Lin et al., 2014) sets ten different thresholds @IoU:[0.5-0.95] using a 0.05 step size. In some applications, like healthcare, the recall measure holds more value, as having more FN is more harmful than having more FP. The Average Recall is measured by averaging the recall over all IoUs, and the average of these over all classes is reported as the mean Average Recall (mAR). We also report the F1 score metric, which measures the balance between precision and recall. The F1 score is computed as follows:

$$F1=2\times\left(\frac{precision\times recall}{precision+recall}\right)\tag{18}$$

We compute the MAC (multiply-accumulate operations) count for the convolution, batch norm, and fully connected layers of the detector and also report the number of learnable parameters (in millions). We also report the inference speed in frames per second (FPS) for the combination of each backbone and head. The inference speed is computed over 500 images and averaged to remove any bias. Finally, given the recent trend towards energy-efficient AI (Schwartz et al., 2019), we compute the inference energy consumption of a model on the whole test dataset. We use the NVIDIA Management Library (NVIDIA, 2019) to approximate the power consumed by the GPU during inference. The inference energy consumption on a dataset is reported in kilojoules (kJ), and we do not include the power consumption of other components.

## 7.3 Experimental Setup

All the networks are re-implemented in our repository. The original results from the corresponding papers are reproduced prior to performing other experiments. Our complete framework is in PyTorch 1.7 (Paszke et al., 2019) and includes all the backbones and detection heads needed to execute all the training and evaluations. For a fair comparison between the different object detection networks, we use a consistent and uniform scheme for both training and evaluation. Unless otherwise stated, all our experiments use the following training scheme.
It is important to note that some of the detection heads (such as YOLO and DETR) use multi-scale training in their original implementations, but for a uniform training scheme, we use single-scale training with an image size of 512. All images are first normalized by per-channel mean subtraction using the mean values of the ImageNet dataset (Russakovsky et al., 2015). Default PyTorch weight initialization, with a fixed seed value, is used for the detection heads, and pre-trained ImageNet weights are used for the backbones. For data augmentation, we use random expansion, random horizontal flip, random crop, and random photometric distortions, which include random contrast within the range of [0.5, 1.5], saturation [0.5, 1.5], and hue [-18, +18]. We use a batch size of 32 and train the models using the Stochastic Gradient Descent (SGD) optimizer (Bottou, 2010) with 0.9 momentum and a learning rate decay factor of 0.1. The learning rate scheduler is chosen to ensure the convergence of all the models. The only exception to this rule is DETR, where we follow the authors and use the AdamW (Loshchilov & Hutter, 2017) optimizer. The NMS threshold is set to 0.45 and the confidence threshold to 0.01 wherever applicable. For all the experiments, we evaluate the models on an NVIDIA RTX 2080Ti GPU unless otherwise stated. The PyTorch models are converted into optimized high-performance inference engines using NVIDIA TensorRT (version 8.0) to facilitate deployment on embedded hardware. The TensorRT conversion optimizes the network by fusing multiple layers (including convolution and batch normalization layers) to enable parallel processing. The inference energy consumption is calculated using the NVIDIA NVML API (Corporation, 2020) on a single clean machine with minimal running applications.

Figure 12: Summary result of all detection models with various backbones on the COCO dataset, reported on three evaluation metrics: accuracy, speed, and energy. The darker shades in accuracy and inference speed and the lighter shade in energy represent the ideal case.

## 8 Results

We provide a broad overview of the results for all networks in Figure 12 to demonstrate the overall trend for three metrics: inference accuracy, speed, and energy consumption on the COCO dataset. The backbones and detection heads are ordered taking "accuracy" as the pivotal metric and sorting the results from low to high (all the following tables and figures follow this specific order). For easier performance analysis, we categorize the backbones and heads into three spectra, namely low, middle, and high, where low and high are the least and most accurate backbone and head combinations, and the middle spectrum contains networks with average performance.

Accuracy: Figure 12 demonstrates that three backbones, i.e. VoVNet-39, *HarDNet-68* and *Xception*, consistently achieve high accuracy across all heads and belong to the high spectrum. The accuracy of VoVNet and HarDNet can be attributed to the One-Shot Aggregation (OSA) modules and the local feature enhancement modules, respectively, while the linear stack of 36 CNN layers helps Xception. The middle spectrum is occupied by ResNet-18, *DarkNet-19* and *DeiT-T*. ResNet and DarkNet are both lightweight architectures, and DeiT enjoys the benefits of self-attention modules, which help with global context. Finally, EfficientNet-B0, *MobileNet-v2* and *ShuffleNet-v2*, which are sleek networks mainly designed for reducing the MAC count, achieve the lowest accuracy.
Amongst the heads, the single-stage keypoint-based detectors such as NanoDet, FCOS, *TTFNet* and *CenterNet* consistently achieve high accuracy. All of these detectors are keypoint estimation-based methods and do not have the hassle of defining anchor sizes based on the object sizes. *FCOS* has an FPN to perform multi-scale prediction and a centerness branch to filter out low-quality predictions. *NanoDet* incorporates a PAN, which enhances low-level features, and uses a GIoU loss that helps refine the locations. *TTFNet* and *CenterNet* also incorporate multiple resolutions and further refine the locations. The attention modules present in *DETR* result in improved accuracy, but since the backbone is still a CNN, the performance is pushed to the middle spectrum. SSD also occupies the middle spectrum, and the lower spectrum consists of *YOLO* and *ThunderNet*, which are anchor-based and two-stage detectors, respectively.

Speed: The inference speed in Figure 12 (middle plot) shows a different trend compared to the accuracy. Amongst the backbones, the networks that were positioned in the middle spectrum in terms of "accuracy" are the fastest, namely ResNet-18, *DarkNet-19* and *DeiT-T*, and thus facilitate a good trade-off between accuracy and speed. Although *ShuffleNet-v2* is one of the least accurate, its inference speed is quite high owing to its pruned architecture designed for low latency. *VoVNet-39* and *HarDNet-68*, which are in the highest spectrum in accuracy, are positioned in the middle spectrum in terms of speed. However, *Xception* is one of the slowest due to its big linear stack of convolution layers.

Figure 13: Speed-accuracy trade-off of all combinations of detection heads and backbones from Table 4 on the COCO dataset. The size of the bubble corresponds to the number of parameters present in the model. Colors indicate detection heads and the letters represent different backbones.

Amongst the heads, CenterNet, *TTFNet*, and *NanoDet* are the fastest and outperform the other detectors by a significant margin. *CenterNet* and *TTFNet* have no NMS bottleneck (as they use a heatmap peak-based maxpooling NMS instead of the IoU-based NMS), which helps in increasing the inference speed. *FCOS*, which has the highest accuracy, lies in the lowest spectrum in terms of speed owing to its heavy architecture with five feature maps and an additional centerness branch. *NanoDet* is similar to *FCOS* but has a more optimized and lightweight architecture, with just three feature maps and no separate branch, which results in improved inference speed. *DETR* is in the middle spectrum here, as the transformer architecture is not as hardware-optimized as CNNs (Ivanov et al., 2020). SSD and *YOLO* also lie in the middle spectrum, achieving average speed. The two-stage detector, *ThunderNet*, is the slowest.

Further, Figure 13 shows the speed, accuracy, and parameter trade-off for all detection heads and backbones (72 combinations) on the COCO dataset. The size of each bubble indicates the number of parameters in the network. Most of the combinations result in a speed range of 15 to 50 FPS, while NanoDet and CenterNet achieve higher speeds for all the backbones.

Energy and Resources: Figure 12 (last plot) shows that, amongst the backbones, the low spectrum networks consume the least amount of energy, as these networks are quite small in size, which is also reflected in the lower number of parameters (in Table 4).
*DarkNet* proves to be quite energy-efficient. Amongst the heads, the high spectrum detectors fare quite well except *FCOS*, as its architecture is heavier than that of the other keypoint-based detectors. *NanoDet*, which was specifically designed to run on mobile hardware, consumes the least amount of energy. SSD and *DETR* maintain the middle spectrum in terms of energy consumption. ThunderNet has the proposal stage in addition to the classification and regression stages and consumes more energy compared to the single-stage detectors.

Detailed Analysis: Table 4 provides more detailed information for the eight detection heads with nine backbones across two different datasets: VOC and COCO. We report detailed information on the accuracy (mAP and F1 score), inference speed (in FPS), inference energy consumption (in kJ), and also resource information such as the GMAC count and the number of parameters (in millions).

Table 4: Results of all detection models with various backbones on the COCO and VOC datasets. MAC count (G), number of parameters (#P; in millions), inference energy consumption (in kilojoules), accuracy in terms of F1 score and mAP, and inference speed in terms of FPS are reported. mAP is reported @IoU: [0.5-0.95] for COCO and @IoU: 0.5 for VOC. The best performance for each detector model is in bold and the overall best performance for each metric and dataset is highlighted.

| Head | Backbone | COCO MAC (G) | COCO #P (M) | COCO Energy (kJ) | COCO F1 | COCO mAP | COCO FPS | VOC MAC (G) | VOC #P (M) | VOC Energy (kJ) | VOC F1 | VOC mAP | VOC FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ThunderNet | ShuffleNet-v2 | **1.71** | **2.51** | **32.31** | 36.52 | 18.45 | 18.65 | **1.41** | **2.21** | **12.69** | 71.26 | 68.24 | 41.41 |
| ThunderNet | MobileNet-v2 | 2.71 | 4.12 | 33.83 | 40.66 | 21.73 | 18.87 | 2.41 | 3.81 | 16.23 | 74.85 | 72.14 | 41.92 |
| ThunderNet | EfficientNet-B0 | 2.91 | 5.60 | 38.32 | 42.06 | 22.83 | 17.93 | 2.60 | 5.29 | 24.26 | 77.11 | 74.76 | 36.67 |
| ThunderNet | DeiT-T | 11.89 | 12.58 | 54.21 | 40.62 | 22.07 | 18.01 | 11.58 | 12.27 | 31.41 | 74.45 | 72.01 | 35.63 |
| ThunderNet | DarkNet-19 | 15.57 | 24.83 | 37.93 | 41.15 | 22.09 | 19.47 | 15.26 | 24.52 | 22.81 | 78.75 | 76.59 | **44.91** |
| ThunderNet | ResNet-18 | 17.54 | 16.33 | 37.94 | 43.56 | 23.83 | 19.09 | 17.24 | 16.02 | 24.20 | 77.49 | 75.12 | 44.41 |
| ThunderNet | Xception | 25.48 | 26.78 | 61.56 | **49.89** | **29.11** | 17.88 | 25.17 | 26.47 | 36.76 | **81.64** | **79.92** | 34.05 |
| ThunderNet | HarDNet-68 | 23.32 | 18.04 | 42.19 | 47.75 | 27.32 | 18.95 | 23.01 | 17.73 | 27.51 | 77.70 | 75.28 | 40.18 |
| ThunderNet | VoVNet-39 | 38.16 | 23.11 | 53.66 | 48.99 | 28.34 | **19.77** | 37.85 | 22.80 | 30.62 | 81.00 | 79.31 | 41.21 |
| YOLO | ShuffleNet-v2 | **7.44** | **27.22** | **15.65** | 34.33 | 14.20 | 37.05 | **7.36** | **26.91** | **7.97** | 36.91 | 51.04 | **90.60** |
| YOLO | MobileNet-v2 | 10.24 | 35.70 | **15.65** | 40.16 | 17.71 | 39.88 | 10.16 | 35.39 | 12.22 | 71.45 | 71.19 | 56.75 |
| YOLO | EfficientNet-B0 | 8.28 | 28.23 | 18.76 | 32.58 | 13.27 | 32.65 | 8.20 | 27.92 | 15.78 | 41.25 | 57.26 | 75.66 |
| YOLO | DeiT-T | 17.60 | 37.85 | 30.52 | 27.06 | 10.10 | 33.78 | 17.52 | 37.54 | 28.42 | 64.35 | 63.35 | 43.86 |
| YOLO | DarkNet-19 | 22.53 | 54.63 | 17.96 | 44.07 | 20.62 | **43.33** | 22.33 | 50.66 | 18.07 | 71.14 | 73.76 | 71.12 |
| YOLO | ResNet-18 | 43.03 | 37.58 | 25.21 | 45.43 | 21.07 | 27.02 | 23.25 | 41.23 | 21.15 | 69.82 | 69.55 | 58.24 |
| YOLO | Xception | 34.75 | 65.73 | 37.38 | 47.74 | 22.49 | 34.26 | 34.67 | 65.43 | 30.20 | 73.26 | 75.88 | 43.80 |
| YOLO | HarDNet-68 | 30.26 | 47.69 | 24.40 | 46.90 | 22.20 | 40.09 | 30.18 | 47.39 | 25.52 | 75.04 | 76.86 | 48.70 |
| YOLO | VoVNet-39 | 54.79 | 66.78 | 33.31 | **51.02** | **25.68** | 39.67 | 44.99 | 52.43 | 24.67 | **76.03** | **78.61** | 53.94 |
| SSD | ShuffleNet-v2 | 4.24 | 15.14 | **24.16** | 37.45 | 17.31 | 20.28 | **1.92** | 8.21 | **14.55** | 72.35 | 67.38 | 22.40 |
| SSD | MobileNet-v2 | 4.30 | 13.60 | 26.22 | 43.97 | 21.52 | 23.84 | 2.51 | **6.03** | 18.08 | 78.56 | 72.54 | 48.53 |
| SSD | EfficientNet-B0 | **3.52** | **10.16** | 26.68 | 31.77 | 22.15 | 24.72 | 2.46 | 6.46 | 19.86 | 79.54 | 73.17 | 45.92 |
| SSD | DeiT-T | 15.68 | 21.53 | 35.39 | 48.98 | 22.05 | 31.89 | 12.78 | 14.41 | 21.83 | 79.31 | 74.54 | 52.92 |
| SSD | DarkNet-19 | 21.44 | 35.63 | 31.54 | 50.72 | 26.27 | 32.06 | 16.56 | 27.06 | 16.52 | 83.17 | 77.84 | 71.13 |
| SSD | ResNet-18 | 21.35 | 26.47 | 31.81 | 50.87 | 26.00 | 31.10 | 17.95 | 18.45 | 17.22 | 82.00 | 76.47 | **74.00** |
| SSD | Xception | 33.55 | 44.93 | 47.56 | 55.59 | 30.49 | 27.08 | 27.04 | 30.96 | 27.03 | 82.59 | 78.86 | 50.05 |
| SSD | HarDNet-68 | 30.85 | 35.49 | 35.34 | 55.79 | 29.86 | **33.53** | 24.91 | 24.99 | 21.07 | 85.37 | 80.41 | 60.12 |
| SSD | VoVNet-39 | 48.66 | 41.70 | 40.77 | **58.88** | **32.55** | 33.28 | 40.59 | 30.37 | 21.20 | **85.80** | **81.57** | 65.29 |
| DETR | ShuffleNet-v2 | 4.17 | 22.98 | **16.93** | 47.27 | 22.38 | 45.58 | 4.16 | 22.97 | **11.90** | 77.52 | 69.72 | 46.97 |
| DETR | MobileNet-v2 | 5.21 | **20.75** | 18.88 | 49.81 | 24.20 | 46.85 | 5.20 | **20.73** | 14.91 | 78.15 | 70.78 | 48.24 |
| DETR | EfficientNet-B0 | 5.49 | 22.21 | 26.56 | 53.68 | 26.45 | 40.82 | 5.48 | 22.20 | 23.17 | 81.95 | 75.31 | 42.76 |
| DETR | DeiT-T | 14.35 | 27.21 | 30.64 | 53.09 | 27.08 | 43.21 | 14.34 | 27.20 | 27.09 | 80.11 | 73.07 | 43.95 |
| DETR | DarkNet-19 | 17.92 | 41.29 | 25.09 | 49.72 | 24.32 | **51.40** | 17.91 | 41.28 | 21.59 | 81.00 | 74.29 | 51.58 |
| DETR | ResNet-18 | 19.90 | 32.79 | 26.32 | 51.68 | 25.75 | 51.10 | 19.90 | 32.77 | 21.32 | 79.68 | 72.68 | 50.88 |
| DETR | Xception | 27.65 | 43.20 | 36.46 | 54.90 | 28.17 | 40.57 | 27.64 | 43.19 | 31.51 | 83.13 | 77.32 | 41.39 |
| DETR | HarDNet-68 | 25.78 | 38.41 | 30.86 | 56.44 | 29.59 | 41.61 | 25.77 | 38.40 | 27.57 | 83.70 | 77.44 | 41.64 |
| DETR | VoVNet-39 | 40.53 | 43.45 | 34.23 | **57.88** | **30.22** | 42.67 | 40.52 | 43.43 | 28.97 | **84.57** | **78.67** | 44.08 |
| CenterNet | ShuffleNet-v2 | 12.04 | 15.20 | 19.31 | 48.20 | 23.79 | 88.85 | 11.92 | 15.19 | 16.48 | 76.36 | 68.42 | 79.71 |
| CenterNet | MobileNet-v2 | 12.90 | 16.71 | 20.84 | 52.21 | 26.63 | 87.04 | 12.77 | 16.70 | 15.70 | 80.11 | 72.84 | 89.44 |
| CenterNet | EfficientNet-B0 | 13.09 | **13.76** | 23.61 | 54.29 | 28.43 | 72.04 | 12.97 | **13.76** | 18.69 | 81.05 | 73.14 | 73.65 |
| CenterNet | DeiT-T | 21.89 | 19.65 | 24.12 | 58.05 | 31.49 | 75.12 | 21.77 | 19.64 | 18.48 | 81.03 | 75.24 | 75.14 |
| CenterNet | DarkNet-19 | 25.65 | 36.08 | 19.35 | 55.62 | 29.47 | 104.67 | 25.53 | 36.07 | 15.32 | 82.66 | 76.47 | 103.19 |
| CenterNet | ResNet-18 | 27.63 | 25.22 | **19.05** | 54.49 | 28.59 | **105.78** | 27.50 | 25.21 | **14.69** | 81.68 | 75.18 | **104.98** |
| CenterNet | Xception | 35.50 | 42.70 | 29.33 | 57.35 | 31.19 | 57.99 | 35.38 | 42.69 | 24.25 | 82.73 | 77.30 | 57.07 |
| CenterNet | HarDNet-68 | 33.55 | 33.20 | 25.97 | 58.19 | 31.84 | 65.82 | 33.42 | 33.19 | 20.81 | 82.69 | 77.49 | 65.70 |
| CenterNet | VoVNet-39 | 48.35 | 38.23 | 26.25 | **60.92** | **34.17** | 69.59 | 48.23 | 38.23 | 22.05 | **84.88** | **79.73** | 64.98 |
| TTFNet | ShuffleNet-v2 | 5.55 | 7.66 | 15.33 | 44.15 | 19.32 | 94.93 | 5.44 | 7.65 | 12.08 | 75.93 | 68.37 | 94.49 |
| TTFNet | MobileNet-v2 | **2.72** | **4.50** | **15.14** | 41.74 | 18.22 | 95.61 | **2.68** | **4.50** | **11.89** | 78.37 | 71.56 | 97.45 |
| TTFNet | EfficientNet-B0 | 3.12 | 5.26 | 19.17 | 37.71 | 15.22 | 73.07 | 3.08 | 5.26 | 15.90 | 65.32 | 60.97 | 74.23 |
| TTFNet | DeiT-T | 21.14 | 20.35 | 24.85 | 50.37 | 24.76 | 70.85 | 21.09 | 20.34 | 19.52 | 79.67 | 74.41 | 70.57 |
| TTFNet | DarkNet-19 | 46.77 | 35.16 | 23.40 | 34.31 | 13.65 | 79.63 | 46.52 | 35.14 | 18.16 | 83.62 | 77.62 | 79.80 |
| TTFNet | ResNet-18 | 24.81 | 18.15 | 18.05 | 53.27 | 26.76 | **115.37** | 24.68 | 18.14 | 13.57 | 81.62 | 75.31 | **113.08** |
| TTFNet | Xception | 59.15 | 48.72 | 34.43 | 54.73 | 26.73 | 47.65 | 58.89 | 48.70 | 29.49 | 81.43 | 74.70 | 46.75 |
| TTFNet | HarDNet-68 | 65.57 | 36.64 | 32.27 | 57.52 | **30.63** | 50.96 | 65.26 | 36.62 | 26.72 | 85.17 | 80.15 | 50.00 |
| TTFNet | VoVNet-39 | 160.17 | 53.98 | 46.65 | **57.77** | 30.36 | 34.20 | 159.66 | 53.94 | 42.60 | **86.05** | **81.20** | 33.11 |
| FCOS | ShuffleNet-v2 | **106.58** | 25.55 | 43.45 | 50.73 | 25.43 | 33.99 | **105.07** | 25.27 | 37.98 | 79.95 | 71.41 | 33.67 |
| FCOS | MobileNet-v2 | 107.24 | **23.28** | **42.96** | 54.71 | 28.72 | 34.85 | 105.73 | **23.00** | **37.27** | 83.56 | 76.06 | 34.88 |
| FCOS | EfficientNet-B0 | 107.77 | 24.38 | 47.93 | 52.99 | 26.70 | 30.75 | 106.26 | 24.10 | 43.22 | 83.33 | 75.74 | 30.46 |
| FCOS | DeiT-T | 117.39 | 30.40 | 48.64 | 54.02 | 26.64 | 30.81 | 115.88 | 30.12 | 43.46 | 84.22 | 77.19 | 30.72 |
| FCOS | DarkNet-19 | 120.23 | 44.09 | 43.05 | 59.82 | 33.50 | 35.76 | 118.73 | 43.81 | 37.33 | 86.12 | 79.61 | 35.78 |
| FCOS | ResNet-18 | 122.34 | 35.51 | 43.80 | 58.32 | 31.87 | 35.59 | 120.83 | 35.24 | 37.36 | 84.85 | 77.98 | 35.58 |
| FCOS | Xception | 130.70 | 46.50 | 51.94 | 57.75 | 31.99 | 28.50 | 129.19 | 46.22 | 46.53 | 86.26 | 80.08 | 28.44 |
| FCOS | HarDNet-68 | 128.76 | 41.43 | 49.16 | 63.67 | 36.45 | 29.96 | 127.25 | 41.16 | 43.56 | 87.86 | 81.59 | 30.03 |
| FCOS | VoVNet-39 | 144.04 | 46.63 | 49.65 | 64.63 | 37.98 | 30.18 | 142.53 | 46.36 | 43.73 | 88.01 | 81.79 | 30.37 |
| NanoDet | ShuffleNet-v2 | 1.04 | 1.43 | 10.66 | 45.66 | 20.90 | 88.54 | 1.00 | 1.41 | 7.28 | 75.88 | 66.81 | 87.45 |
| NanoDet | MobileNet-v2 | 1.89 | 2.45 | 13.79 | 52.09 | 25.82 | 96.99 | 1.86 | 2.44 | 10.80 | 81.92 | 74.67 | 98.91 |
| NanoDet | EfficientNet-B0 | 2.18 | 3.74 | 18.39 | 50.48 | 24.11 | 70.55 | 2.15 | 3.72 | 15.49 | 82.69 | 75.45 | 70.70 |
| NanoDet | DeiT-T | 11.71 | 10.31 | 22.99 | 47.91 | 21.89 | 71.09 | 11.68 | 10.29 | 18.84 | 81.47 | 75.11 | 71.64 |
| NanoDet | DarkNet-19 | 14.76 | 20.09 | 14.45 | 59.27 | 31.51 | 124.46 | 14.72 | 20.07 | 11.55 | 84.32 | 79.13 | 128.48 |
| NanoDet | ResNet-18 | 16.83 | 15.32 | 16.41 | 55.80 | 29.12 | 111.12 | 16.79 | 15.30 | 13.21 | 82.87 | 76.15 | 112.33 |
| NanoDet | Xception | 24.48 | 21.19 | 24.84 | 61.22 | 33.35 | 69.61 | 24.45 | 21.18 | 19.59 | 85.59 | 79.94 | 69.99 |
| NanoDet | HarDNet-68 | 22.64 | 16.83 | 21.21 | 61.00 | 33.61 | 70.09 | 22.61 | 16.81 | 18.25 | 85.91 | 80.35 | 71.42 |
| NanoDet | VoVNet-39 | 37.53 | 21.89 | 22.26 | 64.84 | 36.85 | 80.54 | 37.50 | 21.87 | 17.32 | 86.09 | 81.67 | 80.58 |

VOC is simpler in complexity, and the number of classes is 20 compared to COCO, which has 80 classes. The accuracy (both mAP and F1 score) is on a higher scale for VOC compared to COCO, but the networks show similar trends on both datasets. The *FCOS + VoVNet-39* pair has the highest accuracy, while the *NanoDet + DarkNet-19* pair has the highest inference speed on both datasets. Among the mid-spectrum backbones, *DeiT-T* displays a lower MAC count. The resources across both datasets are similar, except for SSD, where the number of parameters increases (in some cases around 2×) upon changing the dataset from VOC to COCO. As SSD uses six feature maps with separate anchor boxes for each, there is a resource overhead when the number of classes increases. The effect is less amplified in anchor-free designs.

Overall, among the backbones, the high spectrum networks *HarDNet-68* and *VoVNet-39* perform well across all metrics, but *Xception* falls short in all the metrics except accuracy. The middle spectrum, comprising ResNet-18, *DarkNet-19* and *DeiT-T*, offers a good balance between accuracy, speed, and resources. *DeiT-T*, being convolution-free, is the most resource-friendly, with the least MAC count. Among the detection heads, *NanoDet* achieves high accuracy and speed while also being computationally efficient. *CenterNet* and *TTFNet* also offer a good balance, while *TTFNet* facilitates faster training times. *DETR* (likewise, *NanoDet*) displays a low MAC count when paired with the lighter backbones. Our detailed results serve as an atlas for choosing a network based on specific requirements. Furthermore, they also act as a comparative baseline when designing and testing new architectures.

To further demonstrate the importance of evaluating the networks on different metrics, we report the Pearson correlation between all the metrics in Figure 14 for the results of both the COCO and VOC datasets in Table 4. As the plot is symmetric, only the lower triangle is displayed. As seen, accuracy is positively correlated with all the other metrics, with GMAC being the most correlated after the F1 score. Speed is only highly negatively correlated with inference energy. Energy has the highest positive correlation with GMAC, indicating that networks with more MAC operations tend to consume more energy.
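The correlation analysis behind Figure 14 is straightforward to reproduce; the sketch below uses NumPy on a small illustrative subset of the Table 4 columns (four ThunderNet rows on COCO), whereas the figure aggregates all head-backbone combinations.

```python
import numpy as np

# illustrative subset of Table 4 (ThunderNet on COCO): each array is one metric
metrics = {
    "mAP":    np.array([18.45, 21.73, 22.83, 29.11]),
    "FPS":    np.array([18.65, 18.87, 17.93, 17.88]),
    "GMAC":   np.array([1.71, 2.71, 2.91, 25.48]),
    "Energy": np.array([32.31, 33.83, 38.32, 61.56]),
}
names = list(metrics)
corr = np.corrcoef(np.stack([metrics[n] for n in names]))  # pairwise Pearson r

# print the lower triangle, as in Figure 14
for i, a in enumerate(names):
    for j in range(i):
        print(f"corr({a:>6}, {names[j]:>6}) = {corr[i, j]:+.2f}")
```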
Figure 14: Correlation between the different metrics reported in Table 4 for all heads and backbones.

Figure 15: Accuracy (mAP) of all detection networks in conjunction with various backbones, reported for different object sizes (small, medium and large) @IoU: [0.5-0.95] on the COCO dataset. Darker shades represent higher accuracy values.

The qualitative results of each detector on the COCO dataset are presented in Figure 23 in the Appendix.

## 9 Decoupling Different Effects

The accuracy of the networks can depend on multiple factors; thus, in this section, we provide further analyses to decouple these variables. This provides detailed insights into the capabilities of the networks that are often overlooked when providing benchmarks. We study the networks' performance on various object sizes, viz. small, medium, and large. We also show the gain/loss in accuracy and speed when using input images of different resolutions. The role of the confidence threshold parameter when reporting performance is explained with examples. For anchor-based approaches, we show the sensitivity of the networks to different anchor sizes. Finally, we decouple an architectural element, namely the Deformable Convolution (DCN) layer, to understand the accuracy and speed trade-off. This section provides a rare outlook on the effect of many understated variables on object detectors' performance, painting a holistic picture that helps in both the deployment and the design of architectures.

## 9.1 Effect Of Object Size

Detection of small objects is a challenging problem for most object detectors (Liu et al., 2021). To show the performance of the networks across objects of different sizes, we compare the accuracy of the backbones and detection heads across three different sizes, i.e. small, medium, and large. Figure 15 shows that all backbones and detection heads struggle to obtain high accuracy for smaller objects. In this context, TTFNet, *NanoDet*, and *FCOS* outperform the rest, mostly when complemented by the best-performing heavy backbones (see Table 4) such as HarDNet-68 or *VoVNet-39*. The higher-resolution feature maps from the heavier backbones, combined with the FPN/PAN in these detection heads, work in their favor. The accuracy on medium and large objects is considerably better. Among the heads, *FCOS* and *NanoDet* perform better overall for objects of all sizes. TTFNet, *CenterNet*, and SSD, positioned in the middle spectrum, can be good choices with faster backbones for applications that require higher inference speed. The consistent performance of *FCOS* can be attributed to its heavier architecture owing to the use of five feature maps of varying resolutions. For further analyses, we consider all the detection heads with the *HarDNet-68* backbone, as it provides the best compromise on the more complex dataset, namely COCO.

## 9.2 Effect Of Input Image Size

The resolution of the input image used for training plays a major role in the final accuracy. The image resolution used in all prior experiments is 512×512. To analyze the accuracy and speed trade-off at other input resolutions, we use different image sizes for training, including 256, 384, 512, and 736. The image sizes are chosen as even multiples of 16, matching a common feature-map stride used by multiple detection heads.

Figure 16: Effect of image sizes on the accuracy and inference speed of detection models with the HarDNet-68 backbone on the COCO dataset.
As image size increases, the gain in accuracy is masked by the decline in speed. Figure 16 demonstrates that the accuracy across the detection heads follows a trend of "diminishing returns". In most cases, there is a significant accuracy jump from the 256 to the 384 image resolution. However, the gain decreases as the image size increases further, with the least gain observed when the image size changes from 512 to 736 (in some cases, the accuracy even drops). Additionally, we observe that the gain in accuracy at higher image sizes is masked by a bigger decline in speed. For instance, a ∼4.4% accuracy gain in *FCOS*, from resolution 512 to 736, is masked by a 37% decrease in speed, and hence switching to higher resolutions is not an efficient choice. The speed decline with increasing resolution is more pronounced for *YOLO*, TTFNet, and *FCOS*, as the number of operations in the multi-resolution feature concatenation in *YOLO* and in the FPN (in the others) increases with image size. *DETR*, which employs attention blocks to capture the global context of the images, and *ThunderNet*, which computes separate region proposals for each sample, are less sensitive to varying image sizes.

## 9.3 Effect Of Anchor Box Size

Anchor sizes and aspect ratios need to be in line with the sizes of the objects present in the dataset and hence are an important parameter in anchor-based networks. The anchor sizes also need to be known a priori and are hence difficult to adapt to new datasets. Out of the eight detection heads in our study, SSD, YOLO, and *ThunderNet* use an anchor-based approach for detection. To analyze the impact of anchors on detection performance, we conduct experiments on all three anchor-based detection heads with varying anchor box sizes, which are defined by their respective widths and heights. The aim is to add offsets to the anchors' width and height and analyze the change in the speed and accuracy of the networks. Instead of linearly increasing/decreasing the anchor dimensions, we sample the offsets from a Gaussian distribution with varying sigma and add them to the original anchor width and height to create the modified anchor dimensions. Moreover, the modified anchors are kept constant throughout each specific experiment. The width and height of the original anchors (from the original architecture of these networks) are considered as the baseline.

Table 5: Effect of anchor box sizes on accuracy (mAP) and inference speed (FPS) for the three anchor-based detection networks with the HarDNet-68 backbone on the COCO dataset (@IoU: [0.5, 0.95]). Numbers are reported as the percentage change w.r.t. the original anchor sizes. The best performance for each detector is in bold. There is no common pattern, and the performance changes in a non-deterministic way with respect to the anchor box sizes.

| Anchor size | ThunderNet mAP | ThunderNet FPS | YOLO mAP | YOLO FPS | SSD mAP | SSD FPS |
|---|---|---|---|---|---|---|
| Orig. | 27.32 | 18.65 | 22.20 | 40.09 | **29.86** | 33.53 |
| 0.2σ | +0.26 | +4.00 | +3.38 | -2.51 | -0.21 | -2.46 |
| 0.3σ | +1.06 | **+4.90** | +2.32 | -6.70 | -4.29 | **+5.85** |
| 0.5σ | **+1.79** | +4.72 | **+3.69** | **+2.38** | -9.76 | -2.09 |

Table 5 shows the change in accuracy and inference speed with the modified anchors, in percentage w.r.t. the baseline (first row). We observe that the change in accuracy for anchors of different sizes follows a random pattern, and there is no correlation between accuracy and speed. *ThunderNet* is less sensitive to changes in
## 9.4 Effect Of Confidence Thresholds

Object detectors produce many boxes, and a threshold is used to filter out the redundant and low-confidence predictions. Varying this threshold affects precision and recall; hence, the confidence threshold plays a vital role in calculating accuracy and inference speed. There has been a discrepancy in reproducing the results from prior object detection literature, as such parameters are not explicitly mentioned. Using different thresholds shows a significant difference in accuracy and inference numbers. To exhibit this non-uniformity, we analyze the networks with different confidence thresholds and report the performance in terms of mAR, mAP, and speed for each. *ThunderNet* has region-based proposals and utilizes Soft-NMS, where the scores are decayed instead of being cut at a hard threshold; hence, this parameter does not affect its results. *CenterNet* and *TTFNet* use max-pooling to select the predictions, and *DETR* removes the traditional detection modules; hence, they do not use this threshold. Thus, only YOLO, SSD, *FCOS*, and *NanoDet* are considered for this study.

Table 6: Effect of confidence threshold on performance and inference speed (FPS) on detectors with HarDNet-68 backbone on COCO dataset (@IoU: [0.5-0.95]). The results show that a lower threshold yields higher accuracy while a higher threshold yields higher speed.

| Head | Threshold 0.01 | | | Threshold 0.4 | | |
|---------|------|------|-------|------|------|-------|
| | mAR | mAP | FPS | mAR | mAP | FPS |
| YOLO | 53.74 | 22.20 | 40.09 | 30.51 | 17.15 | 68.52 |
| SSD | 66.26 | 29.86 | 33.53 | 39.05 | 24.32 | 49.95 |
| FCOS | 76.74 | 36.45 | 29.96 | 43.97 | 29.24 | 31.12 |
| NanoDet | 74.20 | 33.61 | 70.09 | 44.73 | 28.26 | 72.15 |

Table 6 shows the dip in accuracy and the significant increase in speed when using the higher threshold. For example, the mAP of *YOLO* dips by ∼22% while the speed increases by ∼71% when changing the threshold from 0.01 to 0.4. For applications such as medical imaging, where the recall metric is more relevant for eliminating false negative cases, the decline in accuracy is even more prominent and hence can have a higher impact. This analysis is an attempt to encourage transparency and fairness in comparisons for all future works, which will help reproducibility.
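As a minimal illustration of this post-processing step (the function name and structure are ours, not taken from any of the detectors' codebases):

```python
def filter_predictions(boxes, scores, conf_threshold=0.4):
    # Keep only the predictions whose confidence exceeds the threshold.
    # A low threshold (e.g. 0.01) retains many boxes, raising mAR/mAP but
    # slowing down the subsequent NMS; a high threshold (e.g. 0.4)
    # discards boxes early, trading recall for speed (cf. Table 6).
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= conf_threshold]
    return [b for b, _ in kept], [s for _, s in kept]
```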
Table 7: Effect of DCN layers on accuracy and efficiency metrics. The detectors with HarDNet-68 backbone are trained and tested on COCO and BDD datasets (@IoU: [0.5-0.95]). The gain in accuracy using DCN comes at the cost of lower speed and higher resource requirements and energy consumption.

| Head | DCN Layer | COCO mAP | COCO Inf Speed (FPS) | COCO #Params (M) | COCO Energy (kJ) | BDD mAP | BDD Inf Speed (FPS) | BDD #Params (M) | BDD Energy (kJ) |
|---|---|---|---|---|---|---|---|---|---|
| CenterNet | ✓ | 31.84 | 65.82 | 33.20 | 25.97 | 22.12 | 66.57 | 33.19 | 43.69 |
| | ✗ | 29.67 | 71.77 | 32.76 | 24.72 | 21.72 | 72.43 | 32.76 | 38.58 |
| TTFNet | ✓ | 30.63 | 50.96 | 36.64 | 32.27 | 22.01 | 55.81 | 36.22 | 51.81 |
| | ✗ | 29.13 | 56.10 | 36.16 | 30.34 | 20.98 | 56.98 | 36.14 | 51.13 |

## 9.5 Effect Of Deformable Convolution Layer

Dai et al. (2017) introduced the Deformable Convolution (DCN) layer, which helps in detecting objects with geometric deformations. Conventional convolutions use a fixed rectangular grid on the image, based on the defined kernel size. In DCN, each grid point can be moved by a learnable offset, i.e. the grid is deformable. DCN benchmarks focus mainly on accuracy gains and not on the other metrics. To obtain more information on speed and resource requirements, we analyze the effect of DCN layers on the two detectors that use DCN in their originally proposed architectures, i.e. *CenterNet* and *TTFNet*.

Table 7 provides the accuracy, speed, number of parameters, and energy consumption of the two aforementioned detectors with and without DCN layers. The results are reported on two datasets, COCO and BDD. On the BDD dataset, replacing the DCN layers with standard convolution layers in *CenterNet* results in a 1.8% decrease in accuracy, while the speed increases by more than 8%. The usage of DCN layers also increases the number of parameters and leads to 13% higher energy consumption. On the COCO dataset, replacing the DCN layers in *TTFNet* with standard convolution layers decreases the accuracy by less than 5%, while the speed increases by 10% and the energy consumption improves by around 6%. These results demonstrate that there is an inherent accuracy, speed, and resource requirement trade-off when using DCN layers. Overall, this section acts as a guideline for new design considerations and also for selecting existing architectures based on the application criteria.

## 10 Calibration Of Object Detectors

Many applications, especially safety-critical ones, need the detection networks to be highly accurate and reliable. Detectors must not only be accurate, but should also indicate when they are likely to be incorrect. Model calibration provides insight into model uncertainty, which can afterwards be communicated to end-users or used in further processing of the model outputs. It refers to the degree to which the confidence associated with a prediction reflects its actual likelihood of being correct. Most works focus on improving only the predictive accuracy of the networks, but it is essential to have a model that is also well calibrated. Large and accurate networks tend to be overconfident (Guo et al., 2017) and miscalibrated. Hence, there is an urgent need to revisit and measure the calibration of the SOTA detectors to get a complete assessment.
Most of the work on calibration has been concentrated on the classification domain, but Kuppers et al. (2020) include the bounding box predictions along with the classification labels to evaluate the overall calibration of detectors. Expected Calibration Error (ECE) (Naeini et al., 2015) is a common metric devised to measure calibration; it measures the difference in expectation between predicted confidence and accuracy. In the classification domain, this score signifies the deviation of the classification accuracy from the estimated confidence. The Detection ECE (D-ECE) (Kuppers et al., 2020) measures the deviation of the observed average precision (AP) w.r.t. both the classification and the bounding box properties. The confidence space as well as the bounding box space is partitioned into equal bins, and D-ECE is calculated by iterating over all the bins and accumulating the difference between AP and confidence in each bin. The one-dimensional case considers only the confidence, but we use the multidimensional D-ECE that combines all the factors p, cx, cy, w, and h, representing the class probability, center coordinates, width, and height of the predictions, respectively.

![26_image_0.png](26_image_0.png)

Figure 17: Reliability diagrams (based on classification prediction) of detection networks with HarDNet-68 backbone trained and tested on COCO dataset. Green shaded areas indicate the error compared to perfect calibration - darker shade of green indicates underconfident predictions whereas the lighter shade indicates overconfident predictions. The better the calibration, the more reliable the predictions are. SSD is relatively well calibrated. In general, the single-stage detectors are more underconfident while the two-stage ThunderNet and DETR are very overconfident.

Table 8: Calibration error for detection networks with HarDNet-68 backbone trained and tested on COCO dataset. Each row shows the error for a specific dimension: classification confidence only (ECE) and confidence + bounding box (D-ECE). The lowest calibration errors across all detectors are highlighted.

| Head | ThunderNet | YOLO | SSD | DETR | CenterNet | TTFNet | FCOS | NanoDet |
|---|---|---|---|---|---|---|---|---|
| ECE (%) | 8.09 | 5.88 | 4.07 | 18.46 | 4.31 | 6.17 | 21.56 | 6.48 |
| D-ECE (%) | 14.94 | 10.92 | 7.60 | 23.48 | 8.04 | 10.62 | 22.20 | 9.02 |

Reliability diagrams (DeGroot & Fienberg, 1983) are used to visually represent model calibration, where accuracy is plotted as a function of confidence. Reliability scores and diagrams are provided in Table 8 and Figure 17, respectively. In the reliability diagrams, the diagonal represents perfect calibration and the green shades represent the gap in calibration. Among the anchor-based detectors, SSD is quite well calibrated, while YOLO is more underconfident. All the keypoint-based methods (last row in the diagram) lean more towards being underconfident and are more cautious about their predictions, and hence can be more favorable for safety-critical applications. In contrast, the transformer-based (*DETR*) and two-stage (*ThunderNet*) detectors are highly overconfident, which can be undesirable in safety-critical applications. The calibration error increases when the localization is also included (as reflected in D-ECE). The reliability diagrams for the VOC dataset are provided in Figure 20.
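For concreteness, the following is a minimal sketch of the binning computation behind the one-dimensional ECE (Naeini et al., 2015); D-ECE follows the same scheme but bins additionally over the box properties cx, cy, w, and h, and accumulates the gap between the observed precision and the confidence in each bin. The function name and the default bin count are our own choices.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Partition the confidence space into equal-width bins and accumulate
    # the |accuracy - mean confidence| gap, weighted by the fraction of
    # samples falling into each bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```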
We further provide the reliability diagrams of all detectors trained on the COCO dataset and tested on the Corrupted COCO dataset in Figure 21. A very similar pattern holds for the OOD calibration analysis. For more details, see Section A.4 in the Appendix. We note that there are several calibration solutions in the domain of classification, such as histogram binning (Zadrozny & Elkan, 2001), logistic calibration/Platt scaling (Platt et al., 1999), temperature scaling (Guo et al., 2017), and beta calibration (Kull et al., 2017). However, applying these to object detection might not be as effective; thus, other works (Neumann et al., 2018; Kuppers et al., 2020) have been proposed to procure well-calibrated estimates for object detection specifically. In this study, we concentrate on comparing the reliability of different detectors as is, and do not delve deep into the solutions to improve their calibration.

![27_image_0.png](27_image_0.png)

Figure 18: Robustness of detection networks with HarDNet-68 backbone trained on COCO and tested on Corrupted COCO dataset. Results are shown on four categories of corruptions: Noise, Blur, Weather, and Digital.

## 11 Natural Robustness

Real-time object detection applications such as autonomous driving put huge emphasis on safety and precision. Object detectors used in such applications need to be consistent in their predictions and robust to various factors such as changing weather conditions, lighting, and various other imaging effects. Public datasets do not have sufficient coverage of all these effects, and hence we simulate them by adding different corruptions to the data. As explained in Section 7.1, the Corrupted COCO dataset is created with 15 different corruptions. Figure 18 shows the results of each head on the four categories of corruptions: Noise, Blur, Weather, and Digital effects. The accuracy values are the mean over the different corruptions in that particular category. Severity level 0 is the performance of the networks on the original data. The performance of all the networks deteriorates on all the corruptions, and it declines faster as the severity increases. On the Noise, Blur, and Digital effects, the networks show a relatively steeper decline in performance compared to the Weather category. For all corruption categories, *FCOS* is the most robust while *YOLO* is the least robust. The top, middle, and low spectra of detectors in terms of accuracy on IID data (Figure 12) also hold in the OOD setting (Miller et al., 2021). *FCOS*, which proved to be the most accurate on the IID test set, continues this performance even in the challenging OOD setting, i.e. on naturally corrupted data. In general, the keypoint-based detectors are relatively more robust to the natural corruptions than the other detectors, as seen from the upper clusters in all the graphs.

![28_image_0.png](28_image_0.png)

Figure 19: Adversarial robustness of all detection networks (with HarDNet-68 backbone) trained on COCO, against PGD attacks of varying strengths (Epsilon). Epsilon=0 represents the original natural accuracy.

## Detailed Analysis:

To provide a more detailed analysis, we show results on all fifteen corruptions for each network in Figure 22. In each of the heatmaps, we calculate the mean Corruption Accuracy (mCA) by averaging over all the corruptions. All the detectors show a similar declining trend in performance across the three noises (Gaussian, shot, and impulse noise).
*FCOS* and *TTFNet* have the least decline and are relatively more robust to noisy corruptions compared to the others. Amongst the blur corruptions, the decline is steadier for defocus and motion blur, while for glass blur the accuracy declines gradually at first but dips severely after severity level 3. On zoom blur, the dip in the performance of all detectors starts right from severity level 1. All the detectors are more robust to varying brightness and fog than to frost and snow. The worst performance is seen in snowy conditions, and the trend is similar across all networks. Among the digital effects, the networks are more robust to elastic transformation and JPEG compression compared to pixelation and contrast. All the models are less robust to contrast changes, with *YOLO* being the least robust. The visualizations of the predictions of each network on the different types of corruptions are presented in Figure 24 in the Appendix.

## 12 Adversarial Robustness

Several works have shown the vulnerability of deep neural networks to adversarial attacks. Adversarial perturbations are small amounts of noise that, when added to the data, are imperceptible to the human eye but can lead the networks to make wrong predictions. In safety-critical applications such as AD, robustness is all the more important to prevent networks from making untimely decisions. Hence, robustness against adversarial attacks is a critical metric for object detection. However, it is not prominently present in the literature. Here, we evaluate the robustness of all eight detector networks against adversarial attacks.

Table 9: Results of detection models with HarDNet-68 backbone trained on BDD and tested on BDD (in-distribution test) and Cityscapes (out-of-distribution test) datasets. MAC count, number of parameters, inference energy consumption (kJ), mAP (@IoU: 0.7), and inference speed (FPS) are reported. The best performance for each metric is highlighted.

| Head | MAC (G) | #Params (M) | BDD mAP | BDD FPS | BDD kJ | Cityscapes mAP | Cityscapes FPS | Cityscapes kJ |
|---|---|---|---|---|---|---|---|---|
| ThunderNet | 22.96 | 17.68 | 21.51 | 45.35 | 47.94 | 18.69 | 44.26 | 4.44 |
| YOLO | 30.17 | 47.34 | 6.38 | 42.01 | 50.53 | 4.47 | 36.46 | 4.44 |
| SSD | 24.27 | 23.46 | 26.15 | 10.69 | 29.97 | 20.07 | 11.58 | 3.16 |
| DETR | 25.77 | 38.40 | 13.80 | 40.10 | 60.13 | 11.29 | 36.26 | 5.45 |
| CenterNet | 36.58 | 32.76 | 24.19 | 73.18 | 44.23 | 19.22 | 65.62 | 4.86 |
| TTFNet | 69.57 | 36.14 | 23.05 | 58.56 | 52.94 | 18.74 | 50.09 | 5.69 |
| FCOS | 126.99 | 41.11 | 27.89 | 30.63 | 93.23 | 23.05 | 28.08 | 9.16 |
| NanoDet | 22.60 | 16.81 | 30.09 | 55.51 | 47.12 | 22.88 | 48.55 | 4.69 |

We employ gradient-based attacks, which utilize the gradient information of the network to generate the perturbation. Projected Gradient Descent (PGD) (Madry et al., 2017), a common untargeted attack, maximizes the training loss to generate an adversarial perturbation, which is clamped within an epsilon bound. We use both the classification loss and the regression loss as the objective for the PGD attack. We perform the PGD attack at varying attack strengths and report the resulting accuracy in Figure 19. The accuracy at Epsilon=0 refers to the clean accuracy on the original test set. As the attack strength increases, the performance declines. *CenterNet* and *DETR* exhibit consistent and better robustness compared to the other detectors. *FCOS* has the highest natural accuracy and shows good resistance to very weak attacks, but its performance sharply declines for higher perturbations. *TTFNet* and *ThunderNet* are the next best performers. YOLO, *NanoDet*, and SSD occupy the next spectrum.
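For reference, the following is a minimal PyTorch sketch of the untargeted L∞ PGD attack used here; `loss_fn` stands for a hypothetical callable combining the detector's classification and regression losses on the model outputs, and the step size and the [0, 1] image range are our assumptions.

```python
import torch

def pgd_attack(model, images, loss_fn, epsilon, alpha=None, steps=10):
    # Untargeted PGD (Madry et al., 2017): repeatedly ascend the loss
    # gradient and project the perturbation back into the L-inf ball of
    # radius epsilon around the clean images.
    alpha = alpha if alpha is not None else 2.5 * epsilon / steps
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv))  # assumed: cls + regression losses
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                         # ascent step
            adv = images + (adv - images).clamp(-epsilon, epsilon)  # projection
            adv = adv.clamp(0.0, 1.0)  # assume images normalized to [0, 1]
    return adv.detach()
```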
## 13 Case Study: Autonomous Driving

Real-time object detection is highly pertinent in the autonomous driving (AD) domain, and the network needs to learn the various objects such as pedestrians, vehicles, and road signs present on city roads and highways. Most benchmarks for detection networks are provided on the VOC and COCO datasets, which consist of mostly household objects. Results on these datasets do not suffice for gauging the capability of the networks to perform in an AD setting. Hence, we conduct a realistic case study for AD using the BDD dataset (Yu et al., 2018), which is one of the largest and most diverse datasets in this domain. First, we show the performance of all the networks on this complex dataset. Then, we address Out-of-Distribution (OOD) generalization by using the models trained on BDD and testing them on a different dataset (i.e. Cityscapes (Cordts et al., 2016)). Finally, we deploy all the models trained on BDD on embedded devices and report the numbers to showcase the real-time application capability of each network. As shown in Section 9.5, the accuracy gain due to DCN is not significant compared to the decline in speed. Hence, in this section, we consider all the networks uniformly without DCN layers.

## 13.1 Generalization On IID Data

In Table 9, we present the results obtained on the BDD validation set. Similar to Zhao et al. (2018a), we compute the accuracy (mAP) of our models at IoU = 0.7. *NanoDet* exhibits the best accuracy. *FCOS* is the next most accurate but falters in speed and also has the highest energy consumption. *CenterNet* is the fastest with marginally lower accuracy. SSD consumes the least energy, and *YOLO* has the lowest accuracy. Interestingly, the bias of the object positions in the BDD dataset results in fewer region proposals being generated, thus making *ThunderNet* faster. The qualitative results of each detector on BDD are presented in Figure 25 in the Appendix.

Table 10: Inference speed (FPS) of all TensorRT-optimized detection networks with HarDNet-68 backbone on BDD dataset deployed on three devices at FP32, FP16, and INT8 precision (note that INT8 is not supported on TX2). The best performances are highlighted.

| Head | 2080Ti FP32 | 2080Ti FP16 | 2080Ti INT8 | Xavier FP32 | Xavier FP16 | Xavier INT8 | TX2 FP32 | TX2 FP16 |
|---|---|---|---|---|---|---|---|---|
| ThunderNet | 151 | 212 | 225 | 15 | 30 | 36 | 10 | 16 |
| YOLO | 174 | 244 | 280 | 15 | 33 | 45 | 9 | 17 |
| SSD | 154 | 212 | 225 | 15 | 30 | 36 | 10 | 16 |
| DETR | 138 | 184 | 184 | 14 | 29 | 30 | 8 | 13 |
| CenterNet | 141 | 229 | 228 | 11 | 29 | 29 | 7 | 13 |
| TTFNet | 101 | 193 | 206 | 7 | 19 | 19 | 4 | 8 |
| FCOS | 53 | 116 | 133 | 4 | 10 | 13 | 2 | 4 |
| NanoDet | 153 | 210 | 213 | 15 | 27 | 33 | 9 | 14 |

## 13.2 Generalization On OOD Data

Generalization to distribution shift is one of the main challenges in AD scenarios. The networks, when deployed in real-life applications, need to adapt to unseen data and perform consistently. However, a majority of deep learning benchmarks are reported on a test set that has the same distribution as the training data (Geirhos et al., 2020). Therefore, to test the robustness of the networks to distribution shift, we test the BDD-trained models on Cityscapes data. We extract the ground-truth bounding boxes from the instance segmentation annotations of the Cityscapes dataset.
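A minimal sketch of this ground-truth extraction step is given below; the handling of the Cityscapes instance-id encoding is simplified and should be treated as an assumption.

```python
import numpy as np

def boxes_from_instance_map(instance_map, min_instance_id=1000):
    # Each unique instance id becomes one axis-aligned box
    # (x_min, y_min, x_max, y_max). Cityscapes encodes "thing" instances
    # with large ids (class_id * 1000 + instance number); smaller ids
    # denote "stuff" classes without instances and are skipped here.
    boxes = []
    for inst_id in np.unique(instance_map):
        if inst_id < min_instance_id:
            continue
        ys, xs = np.nonzero(instance_map == inst_id)
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```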
In Table 9, we observe that *FCOS* has the best accuracy, with *NanoDet* coming a close second. *CenterNet* is the fastest network and SSD is the most energy efficient in both sets. In general, keypoint-based detectors are good candidates for generalization across challenging AD datasets.

## 13.3 Performance On Embedded Devices

AD applications have power and resource constraints, as the networks are deployed on on-board edge devices. The real-time performance of detection networks on low-power devices is paramount to their efficacy. For deployment, we use the TensorRT library (ten) to convert the networks into optimized high-performance inference engines. TensorRT is NVIDIA's parallel programming model that can optimize neural networks for deployment on embedded or automotive product platforms. These engines are then tested on three different ranges of GPUs by NVIDIA: (1) 2080Ti, a commonly used desktop GPU, (2) Jetson-Xavier, a powerful mobile GPU, and (3) Jetson-TX2, a low-power mobile GPU. Table 10 shows the inference speed of all eight detectors for three precision modes, i.e. FP32, FP16, and INT8. The performance trend is not necessarily the same as seen earlier, as it depends on the optimization of the different layers by TensorRT. The optimization fuses subsequent layers and parallelizes the computation. The anchor-based detectors ThunderNet, *YOLO*, and SSD have relatively simple architectures and achieve the highest gain in speed after optimization. *YOLO*, being the least complex, gets optimized the most and is the fastest across all the platforms. In contrast, all the keypoint-based detectors obtain the least gain in speed from optimization. *DETR* lies in the middle of the spectrum: since the transformer architecture is relatively new, its layers are not yet as well optimized by the TensorRT engine as convolution layers. This unique case study reveals that the performance trend seen on one device does not necessarily translate to other hardware. This benchmark proves useful when choosing models to deploy on edge devices for real-time AD applications.

## 14 Case Study: Healthcare

The recent advancements in deep learning have enabled AI models to assist surgeons and radiologists in diagnosing and treating life-threatening diseases. Manual detection requires expertise, takes time, and can also be subject to human error.

Table 11: Results of detection networks with HarDNet-68 backbone trained and tested on Kvasir-SEG. MAC count, number of parameters, inference energy consumption, mAR and mAP (@IoU: [0.5-0.95]), and inference speed (FPS) are reported. The best performance for each metric is highlighted.

| Head | MAC (G) | #Params (M) | Inf Energy (kJ) | Inf Speed (FPS) | mAR | mAP |
|---------|--------|---------|------------|------------|------------|------------|
| ThunderNet | 22.92 | 17.63 | 4.34±0.02 | 51.79±2.01 | 89.53±0.25 | 67.49±1.14 |
| YOLO | 30.16 | 47.29 | 5.05±0.02 | 53.16±0.95 | 94.10±1.02 | 60.25±1.34 |
| SSD | 23.02 | 21.66 | 4.02±0.02 | 68.70±1.27 | 90.42±0.25 | 66.98±0.82 |
| DETR | 25.77 | 38.39 | 5.07±0.05 | 37.85±1.11 | 89.97±0.68 | 63.70±1.04 |
| CenterNet | 33.38 | 33.19 | 4.44±0.04 | 55.99±1.02 | 92.33±0.68 | 69.30±0.93 |
| TTFNet | 65.16 | 36.62 | 5.31±0.06 | 43.58±0.47 | 91.30±0.51 | 65.58±1.30 |
| FCOS | 126.78 | 41.07 | 6.76±0.03 | 26.76±0.54 | 94.84±1.28 | 71.42±0.79 |
| NanoDet | 22.60 | 16.80 | 4.07±0.03 | 65.02±1.20 | 91.59±0.00 | 70.44±1.08 |
AI-based detection solutions aid in reducing cost and resources, and can provide an accurate tool for detection in medical imaging. One such application is using DNNs to detect polyps in medical images. Colon and rectum (colorectal) cancer is commonly caused by the polyps found on the inner lining of the colon or rectum. Detecting those polyps and treating them at a very early stage is vital for cancer treatment. Medical images have a drastically different distribution than standard datasets such as COCO and VOC; hence, standard benchmarks may not be able to provide vital information about which model to choose for this application. Also, different metrics are more relevant depending on the application. While standard benchmarks focus on the precision metric for accuracy, in the healthcare industry, where even one false negative can cause more damage than a false positive result, recall is more important. To address this new data distribution and these metrics, we conduct a case study specifically for medical images by evaluating the detectors on the Kvasir-SEG dataset.

Table 11 presents the results obtained on the test split of Kvasir-SEG. The recall is more relevant to this application, and we therefore report the mean average recall (mAR) along with mAP. Certain networks like *YOLO* may not have the highest precision but fare well w.r.t. recall. *FCOS* has the highest recall and precision, which makes it an ideal candidate for such test cases. In terms of speed, SSD is the fastest, while *NanoDet* comes second. The qualitative results of each detector are presented in Figure 26 in the Appendix.

## 15 Discussion

We provide a comprehensive study of combinations of feature extractors and detectors (ranging over two-stage, single-stage, anchor-based, keypoint-based, and transformer-based architectures) under a uniform experimental setup across different datasets. We report an extensive set of results including accuracy, speed, resources, and energy consumption, as well as robustness and calibration analyses. We evaluate the robustness of the detectors against both natural corruptions and adversarial attacks. Additionally, detailed insights are highlighted to get a complete understanding of the effect of different variables on the final result. Different variables such as the effect of the backbone architecture, image size, object size, confidence threshold, and specific architectural layers are decoupled and studied. We also contribute two unique case studies on two diverse industries: autonomous driving and healthcare. We further optimize and benchmark the networks on embedded hardware to check the feasibility of deploying the networks on edge devices.

The combination of results in Section 8 suggests that keypoint-based detectors tend to generalize well across multiple datasets, as anchor box optimization is no longer required. *NanoDet* fares well in terms of both accuracy and speed while also being resource-friendly. *CenterNet* is the second fastest and also lies in the good spectrum on all other metrics, and *TTFNet* lies in the middle of the spectrum. *FCOS* has the highest accuracy but falters on the other metrics, while *DETR*, the transformer-based detector, lies in the middle of the spectrum. Among the backbones, modern networks designed specifically for low memory traffic, such as HarDNet, provide the best balance between accuracy, inference speed, and energy consumption.
All detectors underperform when detecting small objects, with *FCOS* performing relatively better. Varying the anchors affects the performance in a non-deterministic way, making anchor settings difficult to generalize. We report the accuracy-speed-resource requirement trade-off that should be taken into consideration when switching to higher image sizes or using DCN layers. On robustness against natural corruptions, the performance of all the networks deteriorates on all fifteen corruptions, and it declines faster as the severity increases. In general, the keypoint-based detectors are relatively more robust to natural corruptions than the other detectors. *FCOS* is the most robust while *YOLO* is the least robust. *FCOS* and *TTFNet* are relatively more robust to noisy and blurry corruptions, but all detectors fail in snowy conditions. *CenterNet* proves to be the most robust against adversarial perturbations, while *FCOS* and *DETR* are also quite resistant to these attacks. In the reliability analysis, SSD is relatively well calibrated, while keypoint-based detectors are more prudent in their predictions, thus rendering them useful in safety-critical applications. *ThunderNet* and *DETR* lean towards being overconfident. The analysis of transformer-based detectors in these various settings thus offers more insight into the capabilities and pitfalls of this new architectural paradigm.

Case studies on AD and healthcare cover two important domains with different requirements. The AD case study reports the performance on a more relevant dataset related to driving scenarios and also reports the OOD generalization performance. The deployment on three different GPUs (desktop and embedded) reveals that the performance trend on embedded hardware differs from that on desktop GPUs. Anchor-based detectors such as SSD and *YOLO* are better optimized owing to their simple architectures and run faster on edge devices. This study helps compare and choose a network based on the hardware capability of an application. The healthcare case study highlights the importance of the recall metric (compared to just precision results), and we observe that the most accurate network is not necessarily the best performer w.r.t. the recall metric. These case studies offer a unique perspective on looking beyond the standard benchmarks and gauging the capability of the detectors on diverse datasets that are more relevant and applicable in real-time applications.

## Broader Impact Statement

We provide a holistic and comprehensive analysis of deep learning-based real-time object detection networks across multiple datasets and domains in a standard pipeline. We present the high-level general results while also zooming in on detailed, often overlooked insights. Our extensive analyses also provide insights into the capabilities and pitfalls of new architectural paradigms (transformers vs CNNs). Different applications have different criteria, and our study can act as a guideline for the industrial community to gauge the different trade-offs while choosing detectors for the respective application. Since new detection networks are introduced frequently, we also hope to inspire the research community to use this study as a precept for new designs. This study highlights the importance of a standardized, transparent, and fair pipeline and also emphasizes the need to shift the focus from nominal improvements to a broader perspective.
We hope this will help pave a new way for future research.

## References

NVIDIA TensorRT. URL https://developer.nvidia.com/tensorrt.

Alexander Andreopoulos and John K. Tsotsos. 50 years of object recognition: Directions forward. *Computer Vision and Image Understanding*, 117(8):827-891, 2013. ISSN 1077-3142. doi: https://doi.org/10.1016/j.cviu.2013.04.005. URL http://www.sciencedirect.com/science/article/pii/S107731421300091X.

Ahmed Badar, Arnav Varma, Adrian Staniec, Mahmoud Gamal, Omar Magdy, Haris Iqbal, Elahe Arani, and Bahram Zonooz. Highlighting the importance of reducing research bias and carbon emissions in cnns. *arXiv preprint arXiv:2106.03242*, 2021.

Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Yves Lechevallier and Gilbert Saporta (eds.), *Proceedings of COMPSTAT'2010*, pp. 177–186, Heidelberg, 2010. Physica-Verlag HD.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. *arXiv preprint arXiv:2005.12872*, 2020.

Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. Hardnet: A low memory traffic network. In *The IEEE International Conference on Computer Vision (ICCV)*, October 2019.

Francois Chollet. Xception: Deep learning with depthwise separable convolutions. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, July 2017.

Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2016.

Nvidia Corporation. NVIDIA Management Library: Reference Guide. Technical report, June 2020. URL https://docs.nvidia.com/pdf/NVML_API_Reference_Guide.pdf.

Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-fcn: Object detection via region-based fully convolutional networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 29*, pp. 379–387. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6465-r-fcn-object-detection-via-region-based-fully-convolutional-networks.pdf.

Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In *Proceedings of the IEEE international conference on computer vision*, pp. 764–773, 2017.

Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. *Journal of the Royal Statistical Society: Series D (The Statistician)*, 32(1-2):12–22, 1983.

Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *International Journal of Computer Vision*, 88(2):303–338, Jun 2010. ISSN 1573-1405. doi: 10.1007/s11263-009-0275-4. URL https://doi.org/10.1007/s11263-009-0275-4.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *arXiv preprint arXiv:2004.07780*, 2020.

Ross Girshick. Fast r-cnn. In *The IEEE International Conference on Computer Vision (ICCV)*, December 2015.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation.
In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 580–587, 2014.

Kristen Grauman and Bastian Leibe. Visual object recognition. *Synthesis lectures on artificial intelligence and machine learning*, 5(2):1–181, 2011.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning*, pp. 1321–1330. PMLR, 2017.

K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 37(9):1904–1916, Sep. 2015. ISSN 0162-8828. doi: 10.1109/TPAMI.2015.2389824.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2016.

Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In *The IEEE International Conference on Computer Vision (ICCV)*, Oct 2017.

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *CoRR*, abs/1704.04861, 2017. URL http://arxiv.org/abs/1704.04861.

Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7132–7141, 2018.

Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, and Kevin Murphy. Speed/accuracy trade-offs for modern convolutional object detectors. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, July 2017.

Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, and Torsten Hoefler. Data movement is all you need: A case study on optimizing transformers. *arXiv preprint arXiv:2007.00072*, 2020.

Debesh Jha, Pia H Smedsrud, Michael A Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D Johansen. Kvasir-seg: A segmented polyp dataset. In *International Conference on Multimedia Modeling*, pp. 451–462. Springer, 2020.

Lukasz Kaiser, Aidan N. Gomez, and François Chollet. Depthwise separable convolutions for neural machine translation. *CoRR*, abs/1706.03059, 2017. URL http://arxiv.org/abs/1706.03059.

Tao Kong, Fuchun Sun, Huaping Liu, Yuning Jiang, and Jianbo Shi. Foveabox: Beyond anchor-based object detector. *arXiv preprint arXiv:1904.03797*, 2019.

Meelis Kull, Telmo Silva Filho, and Peter Flach. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In *Artificial Intelligence and Statistics*, pp. 623–631. PMLR, 2017.

Fabian Kuppers, Jan Kronenberger, Amirhossein Shantia, and Anselm Haselhoff. Multivariate confidence calibration for object detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, pp. 326–327, 2020.

Hei Law and Jia Deng. Cornernet: Detecting objects as paired keypoints. In *The European Conference on Computer Vision (ECCV)*, September 2018.

Hei Law, Yun Teng, Olga Russakovsky, and Jia Deng. Cornernet-lite: Efficient keypoint based object detection. *CoRR*, abs/1904.08900, 2019. URL http://arxiv.org/abs/1904.08900.

Youngwan Lee, Joong-won Hwang, Sangrok Lee, Yuseok Bae, and Jongyoul Park. An energy and GPU-computation efficient backbone network for real-time object detection. *CoRR*, abs/1904.09730, 2019.
URL http://arxiv.org/abs/1904.09730.

Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. *arXiv preprint arXiv:2006.04388*, 2020.

Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yangdong Deng, and Jian Sun. Light-head r-cnn: In defense of two-stage object detector. *arXiv preprint arXiv:1711.07264*, 2017.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), *Computer Vision - ECCV 2014*, pp. 740–755, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10602-1.

Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, July 2017a.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, Oct 2017b.

Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, and Matti Pietikäinen. Deep learning for generic object detection: A survey. *arXiv preprint arXiv:1809.02165*, 2018a.

Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2018b.

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In *European conference on computer vision*, pp. 21–37. Springer, 2016.

Yang Liu, Peng Sun, Nickolas Wergeles, and Yi Shang. A survey and performance evaluation of deep learning methods for small object detection. *Expert Systems with Applications*, 172:114602, 2021. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2021.114602. URL https://www.sciencedirect.com/science/article/pii/S0957417421000439.

Zili Liu, Tu Zheng, Guodong Xu, Zheng Yang, Haifeng Liu, and Deng Cai. Training-time-friendly network for real-time object detection. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 11685–11692, 2020.

Nikos K. Logothetis and David L. Sheinberg. Visual object recognition. *Annual Review of Neuroscience*, 19(1):577–621, 1996. doi: 10.1146/annurev.ne.19.030196.003045. URL https://doi.org/10.1146/annurev.ne.19.030196.003045. PMID: 8833455.

Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. *CoRR*, abs/1711.05101, 2017. URL http://arxiv.org/abs/1711.05101.

Rangi Lyu. Super fast and lightweight anchor-free object detection model, real-time on mobile devices. 2020. URL https://github.com/RangiLyu/nanodet.

Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In *The European Conference on Computer Vision (ECCV)*, September 2018.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017.

Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S Ecker, Matthias Bethge, and Wieland Brendel.
Benchmarking robustness in object detection: Autonomous driving when winter is coming. *arXiv preprint arXiv:1907.07484*, 2019.

John P Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In *International Conference on Machine Learning*, pp. 7721–7735. PMLR, 2021.

Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In *Twenty-Ninth AAAI Conference on Artificial Intelligence*, 2015.

Lukas Neumann, Andrew Zisserman, and Andrea Vedaldi. Relaxed softmax: Efficient confidence auto-calibration for safe pedestrian detection. 2018.

NVIDIA. pynvml: Python bindings to the nvidia management library (nvml). https://docs.nvidia.com/deploy/nvml-api/index.html, 2019.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in large margin classifiers*, 10(3):61–74, 1999.

Zheng Qin, Zeming Li, Zhaoning Zhang, Yiping Bao, Gang Yu, Yuxing Peng, and Jian Sun. Thundernet: Towards real-time generic object detection. *arXiv preprint arXiv:1903.11752*, 2019.

Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7263–7271, 2017.

Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement. *arXiv preprint arXiv:1804.02767*, 2018.

Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 779–788, 2016.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 28*, pp. 91–99. Curran Associates, Inc., 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. *International Journal of Computer Vision (IJCV)*, 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 4510–4520, 2018.

Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI. *CoRR*, abs/1907.10597, 2019. URL http://arxiv.org/abs/1907.10597.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke.
Inception-v4, inception-resnet and the impact of residual connections on learning. *CoRR*, abs/1602.07261, 2016a. URL http://arxiv.org/abs/1602.07261.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2016b.

Mingxing Tan and Quoc V Le. Efficientnet: Rethinking model scaling for convolutional neural networks. *arXiv preprint arXiv:1905.11946*, 2019.

Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In *The IEEE International Conference on Computer Vision (ICCV)*, October 2019.

Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In *International Conference on Machine Learning*, pp. 10347–10357. PMLR, 2021.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017.

Xiongwei Wu, Doyen Sahoo, and Steven CH Hoi. Recent advances in deep learning for object detection. *Neurocomputing*, 396:39–64, 2020.

Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. BDD100K: A diverse driving video database with scalable annotation tooling. *CoRR*, abs/1805.04687, 2018. URL http://arxiv.org/abs/1805.04687.

Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. Unitbox: An advanced object detection network. In *Proceedings of the 24th ACM international conference on Multimedia*, pp. 516–520, 2016.

Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In *Icml*, volume 1, pp. 609–616. Citeseer, 2001.

Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z. Li. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2020.

Qijie Zhao, Tao Sheng, Yongtao Wang, Feng Ni, and Ling Cai. Cfenet: An accurate and efficient single-shot object detector for autonomous driving. *CoRR*, abs/1806.09790, 2018a. URL http://arxiv.org/abs/1806.09790.

Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, and Xindong Wu. Object detection with deep learning: A review. *arXiv preprint arXiv:1807.05511*, 2018b.

Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as points. *CoRR*, abs/1904.07850, 2019a. URL http://arxiv.org/abs/1904.07850.

Xingyi Zhou, Jiacheng Zhuo, and Philipp Krahenbuhl. Bottom-up object detection by grouping extreme and center points. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2019b.

Zhengxia Zou, Zhenwei Shi, Yuhong Guo, and Jieping Ye. Object detection in 20 years: A survey. *arXiv preprint arXiv:1905.05055*, 2019.

## A Appendix

## A.1 Overall Performance Graph

Figure 1 shows the overall performances of all detectors across varying metrics. The results are from detectors trained on COCO dataset with image resolution 512 and HarDNet-68 backbone. Each vertex corresponds to a metric and the eight different colors represent different detection heads.
We report eight metrics: accuracy (in terms of mAP), natural robustness (in terms of mAP), speed (in terms of FPS), number of parameters, MAC count, energy consumption (in terms of kJ), calibration error (which measures reliability), and adversarial robustness (in terms of mAP). For procuring one natural robustness value per detector, the accuracy of each detector (with HarDNet-68 backbone) is averaged over all the (fifteen) corruptions for all five levels of severity (Figure 22). For adversarial robustness, the robustness accuracy is averaged over all four attack strengths (Epsilon: [0.25, 0.5, 1, 2]; Figure 19). The calibration errors are the D-ECE scores from Table 8. The rest of the metric values are obtained from Table 4.

Considering an ideal application, a well-performing detector is expected to have the highest accuracy, the highest (natural/adversarial) robustness, the highest speed, the lowest number of parameters, the lowest MAC count, the lowest energy consumption, and the lowest calibration error. Hence, for the accuracy, robustness, and speed metrics, higher values are naturally better, while for the others, lower values are better. We therefore use the inverse values for the latter metrics: number of parameters, MAC count, energy consumption, and calibration error (hence represented with a superscript "−1" in the plot). We then normalize each metric between 0 and 1:

$$\hat{z}_{i}=\frac{z_{i}-\min(z)}{\max(z)-\min(z)}\qquad(19)$$

Therefore, the ideal network should occupy the whole octagon, i.e. have a value of 1 for all metrics.

## A.2 Details Of Implementations

Here, we briefly discuss the implementation challenges and details. The official code of ThunderNet is not released. ShuffleNet-v2 was introduced in the paper as the backbone, but ImageNet-pretrained weights for this new backbone were not available and had to be trained in order to reproduce the results; moreover, not all the hyperparameters are included in the paper, which made this implementation very hard and challenging. Recently, some unofficial repositories have shared TensorFlow and PyTorch implementations of ThunderNet. YOLOv2 is implemented using DarkNet, an open-source framework implemented in C and CUDA; this has a different structure compared to the other repositories and hence was challenging to use. There are many other unofficial repositories in PyTorch, but none had completely reproduced the results and many had open issues. SSD has multiple good repositories, and the hyperparameters were tabulated well enough to reproduce the results. CenterNet and TTFNet use DCN layers, and the original repositories had built these layers for an older version of PyTorch; they needed to be re-built for newer versions and were not easily available. TTFNet, designed to train in fewer epochs, needed some tuning to find the best number of epochs for a particular dataset, as the settings given in the paper did not work for all datasets; it has a faster training time, but finding converging parameters is somewhat challenging. NanoDet, albeit having a good repository, does not have a publication; reproducing the results using the repository was easy, but some concepts and loss functions are derived from multiple different works, and a paper would have enhanced the learning experience. Both FCOS and DETR had good repositories and were easy to reproduce.

On the other hand, for TensorRT conversion, we need to follow two steps: (i) exporting the model to the ONNX format, and (ii) converting the model from ONNX to TensorRT engines.
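A minimal sketch of this two-step pipeline is given below; the stand-in model, input resolution, opset version, and file names are illustrative assumptions rather than our exact export code.

```python
import torch
import torch.nn as nn

# Stand-in for a trained detector; in practice this is the detection
# network with its backbone (e.g. HarDNet-68).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()

# Step (i): export to ONNX with a dummy input at the training resolution.
dummy = torch.randn(1, 3, 512, 512)
torch.onnx.export(model, dummy, "detector.onnx", opset_version=11,
                  input_names=["images"])

# Step (ii) happens outside Python, e.g. with NVIDIA's trtexec tool:
#   trtexec --onnx=detector.onnx --saveEngine=detector_fp16.trt --fp16
# (INT8 additionally requires calibration data and is not supported on TX2.)
```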
The complexity of these two steps is determined by the number of custom layers present in the model and their support availability in TensorRT Open Source Software (OSS). SSD, YOLO, FCOS, and NanoDet have only one custom layer at the end, i.e. the NMS layer, for which support is available in TRT-OSS. Hence, with some additional work, these models can be converted into TensorRT inference engines. CenterNet, TTFNet, and DETR do not have any custom layers, which makes them easier to convert without any additional work. ThunderNet contains additional custom layers, such as the ROI proposal and ROI align layers. Since these layers are not exactly supported in TRT-OSS, an equivalent layer has to be selected while exporting to ONNX, which made the conversion very difficult and challenging. Note that, apart from custom layers, functions such as "upsample" and "gather" are version sensitive in ONNX and TensorRT.

Table 12: The detailed settings of the experiments for each network. The initial learning rate (LR) is decayed by a factor of 0.1, and we use a common *batch_size* of 32 and *confidence_threshold* of 0.01 for all the experiments. BG stands for the background class and Opt stands for the optimizer.

| Head | BG | Opt. | #Iters: VOC/COCO/BDD/Med | LR: VOC/COCO/BDD/Med | LR Steps (% of #Iters)¹ | Warmup Iters | Weight Decay |
|---|---|---|---|---|---|---|---|
| ThunderNet | ✓ | SGD | 120K/400K/240K/50K | 0.005/0.005/0.005/0.005 | (70, 90) | 500 | 0.0001 |
| YOLO | ✗ | SGD | 120K/400K/240K/50K | 0.0001/0.0001/0.0001/0.0001 | (70, 90)² | 500 | 0.0005 |
| SSD | ✓ | SGD | 120K/400K/240K/50K | 0.001/0.001/0.005/0.001 | (70, 90) | 500 | 0.0005 |
| DETR | ✗ | AdamW | 120K/400K/240K/50K | 0.0001/0.0001/0.0001/0.0001 | 70 | 0 | 0.0001 |
| CenterNet | ✗ | SGD | 120K/400K/240K/50K | 0.001/0.001/0.005/0.001 | (70, 90) | 500 | 0.0001 |
| TTFNet | ✗ | SGD | 30K/120K³/72K/50K | 0.001/0.016/0.005/0.016 | (70, 90) | 500 | 0.0001⁴ |
| FCOS | ✓ | SGD | 120K/400K/240K/50K | 0.001/0.001/0.005/0.001 | (70, 90) | 500 | 0.0005 |
| NanoDet | ✗ | SGD | 120K/400K/240K/50K | 0.001/0.001/0.005/0.001 | (70, 90) | 500 | 0.0005 |

## A.3 Resource Analysis

The MAC count is a measure of the multiply-accumulate operations in neural networks. However, it has been brought to our attention that the standard flop counter libraries available online may not accurately account for MACs in transformer-based architectures that utilize multi-head attention modules. These libraries are typically designed for use with convolutional neural networks and do not consider transformer-based architectures. To address this issue, we evaluated various flop counter libraries and ultimately chose the fvcore (v0.1.5) library⁵ to calculate the flops of our transformer-based backbone (DEIT-T) and detector (DETR). This library was selected because it is capable of accurately accounting for MACs in transformer-based architectures, in addition to the common layers used in neural networks. We also found that the MAC counts for CNN-based architectures using the fvcore library were the same as those obtained using the previously used library.

## A.4 Reliability Analysis On OOD Data

Calibration for the IID COCO dataset is provided in Section 10.
Here, we report the calibration results for the OOD variant by testing the networks on the Corrupted COCO dataset from Section 7.1. The COCO validation set consists of 5000 images; we apply one corruption, chosen at random out of the 15 corruptions, to each sample and evaluate the calibration on this set. Reliability diagrams are provided in Figure 21, where the diagonal represents perfect calibration and the green shades represent the gap in calibration.

¹ With the exception of the VOC dataset, where 67% of #Iters is used for DETR and (67%, 83%) for all the other detectors.
² YOLO starts with a lower learning rate of 1e−4, but it is slowly increased to 1e−2 in the first few epochs.
³ TTFNet-EfficientNet-B0 needs twice these epochs to converge.
⁴ With the exception of the MED dataset, where a weight decay of 0.0005 is used.
⁵ https://github.com/facebookresearch/fvcore

![40_image_0.png](40_image_0.png) ![40_image_1.png](40_image_1.png)

Figure 20: Reliability diagrams on VOC dataset for all detection heads with HarDNet-68 backbone. Green boxes indicate the error compared to perfect calibration (darker shade of green indicates underconfident predictions, whereas the lighter shade indicates overconfident predictions). The better the calibration is, the more reliable the predictions are.

![40_image_2.png](40_image_2.png) ![40_image_3.png](40_image_3.png)

Figure 21: Reliability diagrams (based on classification prediction) of detection networks with HarDNet-68 backbone trained on COCO dataset and tested on Corrupted COCO dataset. Green (striped) boxes indicate the error compared to perfect calibration - darker shade of green indicates underconfident predictions whereas the lighter shade indicates overconfident predictions. The better the calibration, the more reliable the detector is. SSD is relatively well calibrated on OOD data as well. Overall, in line with the IID reliability analyses, the single-stage detectors are more underconfident while the two-stage ThunderNet and DETR are very overconfident.
Figure 22: Robustness accuracy of all the eight detection heads with HarDNet-68 backbone when tested on Corrupted COCO dataset. Each panel reports, per head, the accuracy under the 15 corruption types - Noise (Gaussian, Impulse, Shot), Blur (Defocus, Glass, Motion, Zoom), Weather (Brightness, Fog, Frost, Snow), and Digital (Contrast, Elastic, JPEG, Pixelate) - along with the mean corruption accuracy (mCA). The corruption has 5 levels of severity, with level 0 being the result on original COCO data.

![43_image_0.png](43_image_0.png)

Figure 23: Visualization of predictions of all eight heads on COCO dataset.

![44_image_0.png](44_image_0.png)

![45_image_0.png](45_image_0.png)

Figure 24: Robustness accuracy of the eight detection heads with HarDNet-68 backbone when tested on Corrupted COCO dataset. The corruption has 5 levels of severity, with level 0 being the result on original COCO data.

![46_image_0.png](46_image_0.png)

Figure 25: Visualization of predictions of all eight heads on BDD dataset.

![47_image_0.png](47_image_0.png)

Figure 26: Visualization of predictions of all eight heads on Kvasir-SEG medical dataset.
Review 1:
Summary: This paper is a survey of modern object detection approaches with a focus on real-time architectures. The authors (to quite an impressive extent) compare architectures in an apples-to-apples fashion and, going beyond some previous surveys, evaluate not just on COCO and Pascal VOC but on a number of other domains (self-driving, healthcare). Moreover, the authors focus on metrics beyond speed and accuracy, including quantities like energy consumption, robustness and reliability, as well as benchmarks on edge devices. Overall the paper should serve as a reasonably good handbook for a practitioner who needs to select an architecture.

Strengths and Weaknesses:
### Strengths:
This survey contains good coverage of many modern architectures, and some of the specific focuses in this work (e.g. real-time inference, out-of-distribution performance) separate it from prior object detection surveys. Moreover, apples-to-apples comparisons of these approaches (done quite well in this work) are often very difficult to do — typically numbers are just cited from prior papers with little effort to standardize on backbones or hardware. One surprising outcome, for example, is that older architectures like SSD are shown to still be somewhat competitive when evaluated using backbones comparable to those today's architectures use.

### Weaknesses:
On the other hand, it's worth at least discussing some of the apples-to-apples choices more carefully. For example, it's not completely obvious that the "right" thing to do is to put all models on the same footing during training (e.g. training with batch size 32). For example, what if some model for whatever reason really needs a large batch size? Alternatively, we could let the performance of a model be defined by the optimum performance it can reach with a given input resolution when trained on consistent hardware…

Also, as far as I can tell, there are no details given about the specific implementations used for each architecture here. Did the authors reimplement everything? If so, how did they verify that they were able to capture the original authors' implementations faithfully? And if they reimplemented everything, will there be a code release? (This point definitely should be addressed if the paper is to be accepted.)

Finally, a number of follow-up works are not addressed in this paper (e.g. further versions of MobileNet and YOLO — e.g. YOLOv5, MobileNetV3 — and these choices to omit certain architectures should be explained).

Requested Changes: The following are recommendations that would strengthen the work (though not critical to securing my recommendation for acceptance).
* *Correlations/dependencies between metrics*: a number of metrics are correlated with each other — this deserves a longer discussion. For example, what causes a model to do better on energy consumption than would be expected from its FLOP count?
* *Self-contained explanations of detectors*: Section 3 is a nice survey of detectors from the last several years — however some of the descriptions are not completely self-contained (the overall mechanism of the detectors could not be guessed based on the description). One example: Eqn 6: how does this equation capture classification into K categories at the end? I recommend having a reader who is familiar with classifiers but not object detectors try reading this section and then try explaining the training procedure to the authors.
* *Other ideas for out-of-distribution generalization*: Another idea would be to train on some cities then test on held-out cities (see, e.g., Google's Auto-Arborist dataset). Or train on static images and test on video frames. (Note this is just a suggestion for strengthening the work, not a condition of acceptance.)
* *CenterNet vs FCOS*: I'm picking out this particular pairing (but I think the comparison might be relevant to other pairs in the work too). The original CenterNet best results were achieved with an hourglass backbone. Both CornerNet and CenterNet showed results that (for whatever reason) were much stronger with hourglass-shaped backbones than with standard ResNet-with-FPN backbones. So I'm wondering if we are being fair to CenterNet in this paper, since when we look at mean AP from both papers, the performance of the two architectures is not so different, but according to this paper FCOS is flat-out superior to CenterNet.
* *Batch norm folding*: Do the authors fold batch norm into convolutions when benchmarking?
* *Non max suppression details*: Noting the effect of the confidence threshold on speed is nice — could the authors explain this? Is the extra running time due purely to NMS? If so, I think this issue could be discussed more carefully. In practice, we have to choose an operating point, and since we don't always know what choice will be made, one way to view the mAP metric is as an integrated metric over all choices of operating point. Thus a very low threshold (e.g. even 1e-8) makes sense for computing mAP. But of course in practice, we'd often use a much higher threshold, which would reduce the NMS portion of compute. Thus setting a confidence threshold of 0.4 is more about achieving a particular precision/recall tradeoff (and we might not discuss mean AP in this setting since it is the average over all choices of confidence threshold). Note also that running time might depend on the NMS variant (some implementations of soft-NMS can be much slower! — so what is the running time complexity of the implementation used in this work?)
* *Robustness*: I recommend comparing results here to those in "Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization" by Miller et al. Specifically, if you look at Figure 16, a striking feature of these plots is that the relative ordering among the different detectors remains mostly unchanged over all corruption levels. This meshes well with the result in Miller et al. And it is not surprising therefore that FCOS is the most robust detector (given that it performs the best under no corruption).
* *Calibration*: Could calibration not be fixed to a large extent by applying something simple like Platt scaling?
* *Misc fixes/questions (apologies that I wrote these out of order)*:
  * Can you plot the speed-accuracy tradeoff? (I.e., speed on one axis, mean AP on the other — as is done by many prior works.) This would allow us to see a "pareto-optimal" front of detectors.
  * Is Figure 1 averaged over (e.g.) resolution, backbone, dataset, etc.? Are scales comparable? It seems the variation in one metric might be very small but the variation in another might be very large…
  * Eqn 7: Lconf is not defined
  * Fig 4: typo: enchancement
  * Also everything is written out horizontally except NMS, which looks funny
  * Clarification: In Sec 9.3 - what does it mean to modify anchor dimensions using a Gaussian?
  * Usage of "quintessentially" in the Xception paragraph in Section 6 — is this correct?
* Paragraph above Eqn 14 (second paragraph of Section 5.8) mentions that FCOS and CenterNet solve the overlapping issue by using centerness and FPN (just note that CenterNet doesn't use FPN - or else there is also a problem with Table 2)
* Part of Table 4 shows up in a larger font
* The colors in Figure 15 are somewhat confusing — I think the dark green is meant to indicate that blue goes all the way to the top and then the gap is the difference between the "blue" (which we can't actually see) and the red dotted line. It might be more intuitive to visualize the gap with an arrow

Broader Impact Concerns: None

==================================================

Review 2:
Summary: The paper is a survey of real-time object detection networks that, in addition to the standard accuracy, documents the energy consumption, robustness, calibration, and speed of a large number of popular and recent object detection networks. The goal is to aid practitioners in choosing an object detection network suitable for their particular real-time application and to encourage researchers to evaluate future models across a more comprehensive set of metrics.

Contributions:
- This survey paper sets itself apart from prior surveys of object detection networks by focusing on a holistic set of metrics tailored to real-time applications and by considering a more modern set of networks, including newer anchor-free and transformer-based detectors.
- Network backbones and detection heads are evaluated in several combinations with a standardized plug-and-play pipeline across a large set of datasets.
- The authors test robustness of the networks to corruptions on the Corrupted COCO dataset, as well as OOD generalization in a self-driving application.
- They also evaluate calibration, as well as the impact of object size, image size, anchor box size, confidence thresholds, and deformable convolution layers.
- Two case studies for healthcare and self-driving applications are considered.

Strengths and Weaknesses:
Strengths:
- Overall, I found the survey comprehensive and well-executed. I think it will serve as a valuable resource for practitioners and researchers alike.
- The authors make several insightful comments about the field in general when motivating their work, e.g. "As newer detectors are being proposed frequently, without a standard evaluation framework, the trend is moving more towards a short-sighted direction that focuses only on nominal improvements."
- The standardized evaluation protocol for a large combination of backbones and detector heads is especially valuable in a field where minor implementation details across different papers make apples-to-apples comparisons difficult.

Weaknesses:
- Figure 1 is a nice overview of different networks, but leaves out robustness to distribution shifts as a key metric. It's also not clear how each metric/vertex was obtained - i.e. is this a summary across different datasets/backbones?
- Calibration is not evaluated on OOD shifts. Networks are also evaluated without an off-the-shelf post-hoc calibration method, which would be fairly easy for any practitioner to implement.
- Simplicity of design and ease of implementation / flexibility are important factors that the survey could cover more comprehensively. Parameter count does not necessarily equate to simplicity. For example, for ease of implementation, we might document how many open-source frameworks the detectors are implemented in.

The strengths of the paper outweigh the relatively minor weaknesses.
Requested Changes:
- Give more explanation of how Figure 1 was created. Consider adding robustness to OOD shifts as a vertex.
- Include an analysis of calibration on Corrupted COCO shifts and/or the self-driving OOD generalization dataset.
- Think more carefully about how to evaluate ease of use or adoption, code complexity, amount of tuning needed, and simplicity. Include a more in-depth discussion or evaluation if possible.

Broader Impact Concerns: n/a

==================================================

Review 3:
Summary: The paper is a very detailed survey on the study of real-time object detection networks. The paper details and evaluates different detection heads with different backbones, different datasets, and different hardware. There is no methodological novelty, but the proposed study deals with an interesting problem in many aspects, with many experiments.

Strengths and Weaknesses:
Strengths:
- The paper is well written and easy to follow.
- The study is clear and very well detailed. The paper is very complete, with a lot of evaluation across different architectures and different datasets. I think this can be of great value to the community.
- The applications considered are interesting.

Weaknesses:
- Overfitting evaluation: It would be interesting to do an overfitting study by testing the different approaches considered with several seeds, to better understand the significance of the results. Since there are a lot of approaches and a lot of datasets, maybe do it with a not-too-expensive configuration just to get an idea of the overfitting.

Requested Changes: I would like to see a study on overfitting, but this is not critical as the paper is already very exhaustive.

Broader Impact Concerns: Nothing special to mention.

==================================================

Metareview:
Recommendation: Accept as is
Comment: The problem of object detection is one of the most important applications in computer vision, requiring the ability to identify not just what resides in an image but also where each object instance resides. In the last few years there have been a multitude of advancements in the field of object detection, including basic backbone architectures, detection stems, cascade architectures, data augmentations, etc. Stitching together what all of these individual advancements mean for the larger field or a modern-day practitioner is a difficult enterprise, because it is difficult to ascertain which methods are additive and what trade-offs result from employing various methods in tandem on modern detection problems.

This paper provides a survey of many of the most important modern object detection approaches, with a focus on real-time architectures. The authors go to great lengths to provide an in-depth analysis of accuracy, energy consumption, robustness, calibration, and speed. They test these methods not just on standard academic datasets (e.g. Pascal VOC and COCO) but on non-standard and important problems in medical imaging and self-driving applications, and they provide more comprehensive evaluations of real-time performance.

The reviewers highlight how this work provides a great handbook for future practitioners in the field. Given the many advancements in the field, this survey additionally provides a common evaluation framework and benchmarks for future advancements in the field. The paper is accepted assuming all of the comments from the reviewers are fully addressed.

==================================================
# Size Lowerbounds For Deep Operator Networks

Anirbit Mukherjee *anirbit.mukherjee@manchester.ac.uk*
Department of Computer Science
The University of Manchester

Amartya Roy∗ *Amartya.Roy@in.bosch.com*
Robert Bosch GmbH, Coimbatore, India

∗ A part of the work was done while the author was at Jadavpur University.

Reviewed on OpenReview: *https://openreview.net/forum?id=RwmWODTNFE*

## Abstract

Deep Operator Networks are an increasingly popular paradigm for solving regression in infinite dimensions and hence for solving families of PDEs in one shot. In this work, we aim to establish a first-of-its-kind data-dependent lower bound on the size of DeepONets required for them to be able to reduce empirical error on noisy data. In particular, we show that for low training errors to be obtained on $n$ data points it is necessary that the common output dimension of the branch and the trunk net scales as $\Omega(n^{1/4})$. This inspires our experiments with DeepONets solving the advection-diffusion-reaction PDE, where we demonstrate the possibility that at a fixed model size, to leverage an increase in this common output dimension and get a monotonic lowering of the training error, the size of the training data might necessarily need to scale at least quadratically with it.

## 1 Introduction

Data-driven approaches to analyze, model, and optimize complex physical systems are becoming more popular as Machine Learning (ML) methodologies gain prominence. The dynamic behaviour of such systems is frequently characterized using systems of Partial Differential Equations (PDEs). A large body of literature exists for using analytical or computational techniques to solve these equations under a variety of situations, such as various domain geometries, input parameters, and initial and boundary conditions.

Very often one wants to solve a "parametric" family of PDEs, i.e. to have a mechanism for quickly obtaining new solutions to the PDE upon variation of some parameter in the PDE, like, say, the viscosity in a fluid dynamics model. This is tantamount to obtaining a mapping between the space of possible parameters and the corresponding solutions to the PDE. The cost of doing this task with conventional tools such as finite element methods (Brenner & Carstensen, 2004) is enormous, since distinct simulations must be run for each unique value of the parameter, be it domain geometry or some input or boundary value. Fortuitously, in recent times there has risen a host of ML methods under the umbrella of "operator learning" to achieve this with more attractive speed-accuracy trade-offs than conventional methods (Ray et al., 2023).

As reviewed in (Ray et al., 2023), we recognize that operator learning is itself a part of the larger program of rapidly increasing interest, "physics informed machine learning" (Karniadakis et al., 2021). This program encompasses all the techniques that are being developed to utilize machine learning methods, in particular neural networks, for the numerical solution of the dynamics of physical systems, often described as differential equations. Notable methodologies that fall under this ambit are Physics Informed Neural Nets (Raissi & Karniadakis, 2018), DeepONet (Lu et al., 2019), Fourier Neural Operator (Li et al., 2020b), Wavelet Neural Operator (Tripura & Chakraborty, 2022), Convolutional Neural Operators (Raonic et al., 2023), etc.

Physics-Informed Neural Networks (PINNs) have emerged as a notable approach when there is one specific PDE of interest that needs to be solved.
To the best of our knowledge, some of the earliest proposals of this were made in (Dissanayake & Phan-Thien, 1994; Lagaris et al., 1998; 2000). The modern avatar of this idea and the naming of PINNs happened in (Raissi et al., 2019). This learning framework involves minimizing the residual of the underlying partial differential equation (PDE) within the class of neural networks. Notably, PINNs are by definition an unsupervised learning method, and hence they can solve PDEs with no need for knowing any sample solutions. They have demonstrated significant efficacy and computational efficiency in approximating solutions to PDEs, as evidenced by (Raissi et al., 2018), (Lu et al., 2021), (Mao et al., 2020), (Pang et al., 2019), (Yang et al., 2021), (Jagtap & Karniadakis, 2021), (Jagtap et al., 2020), (Bai et al., 2021). A detailed review of this field can be seen in (Cuomo et al., 2022).

As opposed to the question being solved by PINNs, Deep Operator Networks train a pair of nets in tandem to learn a (possibly nonlinear) operator mapping between infinite-dimensional Banach spaces - which de facto then becomes a way to solve a family of parametric PDEs in "one shot". Its shallow version was proposed in (Chen & Chen, 1995b); more recently its deeper versions were investigated in (Lu et al., 2019), and its theoretical foundations were laid in (Lanthaler et al., 2022a). To date, numerous variants of DeepONet models (Park et al., 2023), (Liu & Cai, 2021), (Hadorn, 2022), (Almeida et al., 2022), (Lin et al., 2022), (Xu et al., 2022), (Tan & Chen, 2022), (Zhang et al., 2022), (Goswami et al., 2022) have been proposed, and this training process takes place offline within a predetermined input space. As a result, the inference phase is rapid, because no additional training is needed as long as the new conditions fall within the input space that was used during training. Other such neural operators like FNO (Li et al., 2020b) and WNO (Tripura & Chakraborty, 2022) enable efficient and accurate solutions to complex mathematical problems, opening up new possibilities for scientific computing and data-driven modeling. They have shown promise in various scientific and engineering applications including physics simulations (Choubineh et al., 2023), (Gopakumar et al., 2023), (Li et al., 2022b), (Lehmann et al., 2023), (Li et al., 2022a), image processing (Johnny et al., 2022), (Tripura et al., 2023), and weather modelling (Kurth et al., 2022), (Pathak et al., 2022).

A deep mystery with neural nets is the effect of their size on their performance. On one hand, we know from various experiments as well as theory that asymptotically wide nets are significantly weaker than actual neural nets and that they have very different training dynamics than what is true for practically relevant nets. But it is also known that there are specific ranges of overparametrization at which a neural net performs better than at any lower size. Modern learning architectures exploit this possibility, and they are almost always designed with a number of trainable parameters far larger than the size of the training set. It seems to be surprisingly easy to find overparametrized architectures which generalize well.
This contradicts the traditional understanding of the trade-off between approximation and generalization, which suggests that the generalization error initially decreases but then increases due to overfitting as the number of parameters increases (forming a U-shaped curve). However, recent research has revealed a puzzling non-monotonic dependence on model size of the generalization error at the empirical risk minimum of neural networks. This curious pattern is referred to as the "double-descent" curve (Belkin et al., 2019). Some of the current authors had pointed out (Gopalani & Mukherjee, 2021) that the nature of this double-descent curve might be milder (and hence that the classical region exists for a much larger range of model sizes) for DeepONets - which is the focus of this current study. It is worth noting that this phenomenon has been observed in decision trees and random features and in various kinds of deep neural networks such as ResNets, CNNs, and Transformers (Nakkiran et al., 2021). Also, various theoretical approaches have been suggested towards deriving the double-descent risk curve, (Belkin et al., 2018a), (Belkin et al., 2018b), (Deng et al., 2022), (Kini & Thrampoulidis, 2020).

In recent times, many kinds of generalization bounds for neural nets have also been derived, like those based on Rademacher complexity (Sellke, 2023), (Golowich et al., 2018), (Bartlett et al., 2017), which are uniform convergence bounds independent of the trained predictor, or results as in (Li et al., 2020a) and (Muthukumar & Sulam, 2023), which have developed data-dependent non-uniform bounds. These help explain how the generalization error of deep neural nets might not explicitly scale with the size of the nets. Some of the current authors had previously shown (Gopalani et al., 2022) first-of-its-kind Rademacher complexity bounds for DeepONets which do not explicitly scale with the width (and hence the number of trainable parameters) of the nets involved. Despite all these efforts, to the best of our knowledge, it has generally remained unclear how one might explain the necessity of overparameterization for good performance in any such neural system.

In light of this, a key advancement was made in (Bubeck & Sellke, 2023). They showed that, with high probability over the sampling of $n$ training data in $d$ dimensions, if there has to exist a neural net $f$ of depth $D$ and $p$ parameters such that it has empirical squared-loss error below a measure of the noise in the labels, then it must be true that $\mathrm{Lip}(f) \geq \tilde{\Omega}\left(\sqrt{\frac{nd}{Dp}}\right)$. This can be interpreted as an indicator of why large models might be necessary to get low training error on real-world data. Building on this work, we prove the following result (stated informally) for the specific instance of operator learning that we consider,

Theorem 1.1 (Informal Statement of Theorem 4.2). *Suppose one considers a DeepONet function class at a fixed bound on the weights and the total number of parameters, with both the branch and the trunk nets ending in a layer of sigmoid gates. Then with high probability over sampling an* $n$*-sized training data set, if this class has to have a predictor which can achieve empirical training error below a label-noise dependent threshold, then necessarily the common output dimension of the branch and the trunk must be lower bounded as* $\Omega(n^{1/4})$. *And notably, the prefactors suppressed by* $\Omega$ *scale inversely with the bound on the weights and the size of the model.*
Thus, to the best of our knowledge, our result here makes first-of-its-kind progress in explaining the size requirement of DeepONets, and in particular how that is related to the available size of the training data. Further, motivated by the above, we shall give experiments to demonstrate that at a fixed model size, for DeepONets to leverage an increase in the size of the common output dimension of branch and trunk, the size of the training data might need to be scaled at least quadratically with it.

The proof in (Bubeck & Sellke, 2023) critically uses the Lipschitzness condition on the predictors to leverage isoperimetry of the data distribution. And that is a fundamental mismatch with the setup of operator learning - since DeepONets are not Lipschitz functions. Thus our work embarks on a program to look for an insight analogous to that of (Bubeck & Sellke, 2023) that applies to DeepONets.

## 1.1 The Formal Setup Of Deeponets

We recall the formal setup of DeepONets (Ryck & Mishra, 2022). Given $T > 0$ and $D \subset \mathbb{R}^d$ compact, consider functions $u : [0,T] \times D \to \mathbb{R}^k$, for $k \geq 1$, that solve the following time-dependent PDE,

$$\mathcal{L}_a(u)(t,x) = 0 \quad \text{and} \quad u(x,0) = u_0 \quad \forall (t,x) \in [0,T] \times D.$$

This abstracts out the use case of wanting to find the time evolution of $k$-dimensional vector fields on a $d$-dimensional space, governed by a specific PDE. Further, let $\mathcal{H}$ be the function space of PDE solutions of the above. Define a function space $Y$ s.t. $u_0 \in Y \subset L^2(D)$, i.e. the space of initial conditions; then the differential operator above can be chosen to map as $\mathcal{L}_a : \mathcal{H} \to L^2([0,T] \times D)$, and these operators are indexed by a function $a \in Z \subset L^2(D)$. Corresponding to the above we have the solution operator $\mathcal{G} : X \to \mathcal{H} : f \mapsto u$, where $f \in \{u_0, a\}$ and $X \in \{Y, Z\}$ - the two choices correspond to the two natural settings one might consider: that of wanting to solve the PDE for various initial conditions at a fixed differential operator, or to solve for a fixed initial condition at different parameter values of the differential operator.

The DeepONet architecture, as shown in Figure 1, consists of two nets, namely a Branch Net and a Trunk Net. The former is denoted by $\mathcal{N}_{\mathrm{B}}$ and maps $\mathbb{R}^{d_1} \to \mathbb{R}^q$; in use it takes as input a $d_1$-point discretization of a real-valued function $f$ as a vector, $\mathbf{s} = (f(x_1), f(x_2), \ldots, f(x_{d_1}))$, corresponding to some arbitrary choice of "sensor points" $\{x_j \mid 1 \leq j \leq d_1\} \subset D$. The Trunk Net, denoted by $\mathcal{N}_{\mathrm{T}}$, maps $\mathbb{R}^{d_2} \to \mathbb{R}^q$ and takes as input any point in the domain of the functions in the solution space of the PDE. (In the context of the above PDE we would have $d_2 = 1 + d$.) Note that in the above, $q$ is an arbitrary constant. Then the DeepONet with parameters $\theta$ (inclusive of both its nets) can be defined as the following map,

$$\mathcal{G}_{\theta}\Big(\underbrace{f(x_1), f(x_2), \cdots, f(x_{d_1})}_{\mathbf{s}}\Big)(\mathbf{p}) := \left\langle \mathcal{N}_{\mathrm{B}}(\mathbf{s}), \mathcal{N}_{\mathrm{T}}(\mathbf{p}) \right\rangle. \tag{1}$$

One would often want to constrain $\mathbf{p} \in U$, where $U$ is a compact domain in $\mathbb{R}^{d_2}$. Given the setup as described above, the objective of a DeepONet is to approximate the value $\mathcal{G}(f)(\mathbf{p})$ by $\mathcal{G}_\theta(f(x_1), f(x_2), \cdots, f(x_{d_1}))(\mathbf{p})$.

![3_image_0.png](3_image_0.png)

Figure 1: A Sketch of the DeepONet Architecture
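To make the architecture concrete, the following is a minimal sketch of a DeepONet of the above form, assuming simple MLP branch and trunk nets whose layer widths are illustrative choices; the final sigmoid layers anticipate the boundedness assumption (the constant $\mathcal{C}$, here with $\mathcal{C} = 1$) used in our theorems later.

```python
# A minimal sketch of the DeepONet of equation (1) / Figure 1, assuming MLP
# branch and trunk nets; all layer sizes here are illustrative choices.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, d1, d2, q, width=64):
        super().__init__()
        # Branch net N_B: the d1-point discretization s of f  ->  R^q
        self.branch = nn.Sequential(nn.Linear(d1, width), nn.ReLU(),
                                    nn.Linear(width, q), nn.Sigmoid())
        # Trunk net N_T: a query point p in R^{d2}  ->  R^q
        self.trunk = nn.Sequential(nn.Linear(d2, width), nn.ReLU(),
                                   nn.Linear(width, q), nn.Sigmoid())

    def forward(self, s, p):
        # G_theta(s)(p) := < N_B(s), N_T(p) >, an inner product over R^q
        return (self.branch(s) * self.trunk(p)).sum(dim=-1)

model = DeepONet(d1=100, d2=1, q=20)
s = torch.randn(8, 100)   # a batch of discretized input functions
p = torch.rand(8, 1)      # a batch of query points
print(model(s, p).shape)  # torch.Size([8])
```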
**Review of the Universal Approximation Property of DeepONets** A universal approximation theorem for shallow DeepONets was established in (Chen & Chen, 1995a). A more general version of it was established in (Lanthaler et al., 2022b), which we shall now briefly review. Consider two compact domains, $D \subset \mathbb{R}^d$ and $U \subset \mathbb{R}^n$, and two compact subsets of infinite-dimensional Banach spaces, $K_1 \subset C(D)$ and $K_2 \subset C(U)$, where $C(D)$ represents the collection of all continuous functions defined on the domain $D$, and similarly for $C(U)$. We then define a (possibly nonlinear) continuous operator $\mathcal{G} : K_1 \to K_2$.

Theorem 1.2. *(Restatement of a key result from (Lanthaler et al., 2022b) on Generalised Universal Approximation for Operators). Let* $\mu \in P(C(D))$ *be a probability measure on* $C(D)$. *Assume that the mapping* $\mathcal{G} : C(D) \to L^2(U)$ *is Borel measurable and satisfies* $\mathcal{G} \in L^2(\mu)$. *Then, for any positive value* $\varepsilon$, *there exists an operator* $\tilde{\mathcal{G}} : C(D) \to L^2(U)$ *(a DeepONet composed with a discretization map for functions in* $C(D)$*), such that*

$$\|\mathcal{G} - \tilde{\mathcal{G}}\|_{L^2(\mu)} = \left(\int_{C(D)} \|\mathcal{G}(u) - \tilde{\mathcal{G}}(u)\|^2_{L^2(U)} \, d\mu(u)\right)^{1/2} < \varepsilon$$

In other words, $\tilde{\mathcal{G}}$ can approximate the original operator $\mathcal{G}$ arbitrarily closely in the $L^2(\mu)$-norm with respect to the measure $\mu$. The above approximation guarantee between DeepONets and solution operators of differential equations ($\mathcal{G}$) clearly motivates the use of DeepONets for solving differential equations.

## 2 Related Works

In (Lanthaler et al., 2022b) the authors have defined the DeepONet approximation error as follows,

$$\widetilde{\mathcal{E}} = \left(\int_{C(D)} \int_U |\mathcal{G}(u)(y) - \mathcal{N}(\bar{u})(y)|^2 \, \mathrm{d}y \, \mathrm{d}\mu(u)\right)^{1/2},$$

where the DeepONet $\mathcal{N}$ approximates the underlying operator $\mathcal{G} : C(D) \to C(U)$, $\mu$ is as defined previously, and $\bar{u}$ is a fixed finite-grid discretization of $u$. To the best of our knowledge, the following is the only DeepONet size lower bound proven previously,

Theorem 2.1. *Let* $\mu \in P(L^2(T))$. *Let* $u \mapsto \mathcal{G}(u)$ *denote the operator mapping initial data* $u(x)$ *to the solution at time* $t = \pi/2$, *for the Burgers' PDE (Hon & Mao, 1998). Then there exists a universal constant* $C > 0$ *(depending only on* $\mu$, *but independent of the neural network architecture), such that the DeepONet approximation error* $\widetilde{\mathcal{E}}$ *satisfies*

$$\widetilde{\mathcal{E}} \geq \frac{C}{\sqrt{q}}$$

*where* $q$ *is the common output dimension of the branch and the trunk net.*

*Firstly,* from the above it does not seem possible to infer any relationship between the size of the neural architecture required for any specified level of performance and the amount of training data that is available to use. And that is a key connection that is being established in our work. *Secondly,* the above lower bound is in a setting where there is no noise in the training labels - while our bound specifically targets understanding how the architecture is constrained when trying to get the empirical error below a measure of the noise in the labels, i.e. our setup is that of solving PDEs in a supervised way while accounting for uncertainty in the data. *Thirdly,* and most critically, the lower bound above is specific to Burgers' PDE, while our theorem is PDE-independent.

**Organization** Starting in the next section we shall give the formal setup of our theory. In Section 4 we shall give the full statement of our theorem. In Section 5 we shall state all the intermediate lemmas that we need, and their proofs. In Section 6 we give the proof of our main theorem.
Motivated by the theoretical results, in Section 7 we give an experimental demonstration revealing a property of DeepONets about how much training data is required to leverage any increase in the common output dimension of the branch and the trunk. We conclude in Section 8, delineating some open questions.

## 3 Our Setup

In this section, we give all the definitions about the training data and the function spaces that we shall need to state our main results. As a specific illustration of the definitions, we will also give an explicit example of a DeepONet loss function.

Definition 1. **Training Datasets**
*Let* $(y_i, (\mathbf{s}_i, \mathbf{p}_i))$ *be i.i.d. sampled input-output pairs from a distribution on* $[-B, B] \times D \times U$, *where* $D$ *and* $U$ *are compact subsets of* $\mathbb{R}^{d_1}$ *and* $\mathbb{R}^{d_2}$ *respectively, and we define the conditional expectation* $g(\mathbf{s}_i, \mathbf{p}_i) := \mathbb{E}[y \mid (\mathbf{s}_i, \mathbf{p}_i)]$.

Definition 2. **Branch Functions & Trunk Functions**

$$\mathcal{B} := \left\{B_{\mathbf{w}} \text{ a function with} \leq d_B \text{ parameters} \;\middle|\; B_{\mathbf{w}} : \mathbb{R}^{d_1} \to \mathbb{R}^q,\ \mathrm{Lip}(B_{\mathbf{w}}) \leq L_B \ \&\ \|\mathbf{w}\|_2 \leq W_B \ \&\ \|B_{\mathbf{w}}\|_\infty \leq \mathcal{C}\right\}$$
$$\mathcal{T} := \left\{T_{\mathbf{w}} \text{ a function with} \leq d_T \text{ parameters} \;\middle|\; T_{\mathbf{w}} : \mathbb{R}^{d_2} \to \mathbb{R}^q,\ \mathrm{Lip}(T_{\mathbf{w}}) \leq L_T \ \&\ \|\mathbf{w}\|_2 \leq W_T \ \&\ \|T_{\mathbf{w}}\|_\infty \leq \mathcal{C}\right\}$$

*The functions in the set* $\mathcal{B}$ *shall be called the "Branch Functions" and the functions in the set* $\mathcal{T}$ *would be called the "Trunk Functions".*

The bound of $\mathcal{C}$ in the above definitions abstracts out the model of the branch and the trunk functions being nets having a layer of bounded activation functions in their output layer - while they can have any other activation (like ReLU) in the previous layers.

Definition 3. **DeepONets**
*Given the function classes in Definition 2, we define the corresponding class of DeepONets as,*

$$\mathcal{H} := \left\{h_{\mathbf{w}_b,\mathbf{w}_t} = h_{(\mathbf{w}_b,\mathbf{w}_t)} \;\middle|\; \mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \ni (\mathbf{s},\mathbf{p}) \mapsto h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s},\mathbf{p}) := \langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_t}(\mathbf{p}) \rangle \in \mathbb{R},\ B_{\mathbf{w}_b} \in \mathcal{B} \ \&\ T_{\mathbf{w}_t} \in \mathcal{T}\right\}$$

*Further, note that* $\forall \theta > 0$ *there exists a "*$\theta$*-cover" of this function space,* $\mathcal{H}_\theta$, *such that* $\forall\, h_{\mathbf{w}_b,\mathbf{w}_t} \in \mathcal{H}$, $\exists\, h_{(\mathbf{w}_{b,\theta/2},\, \mathbf{w}_{t,\theta/2})} \in \mathcal{H}_\theta$ *s.t.* $\|\mathbf{w}_b - \mathbf{w}_{b,\theta/2}\| \leq \frac{\theta}{2}$ *and* $\|\mathbf{w}_t - \mathbf{w}_{t,\theta/2}\| \leq \frac{\theta}{2}$, *with* $\mathbf{w}_{b,\theta/2}$ *and* $\mathbf{w}_{t,\theta/2}$ *being elements of the* $\frac{\theta}{2}$*-covering space of the set of branch and trunk weights respectively.*

It is easy to see how the above definition of $\mathcal{H}$ includes functions representable by the architecture given in Figure 1. Now we recall the following result about neural nets from (Bubeck & Sellke, 2023).

Lemma 3.1. *Let* $f_w$ *be a neural network of depth* $D$, *mapping into* $\mathbb{R}$, *with the vector of parameters being* $w \in \mathbb{R}^p$ *and all the parameters being bounded in magnitude by* $W$, *i.e. the set of neural networks parametrized by* $w \in [-W, W]^p$. *Let* $Q$ *be the maximum number of matrix or bias terms that are tied to a single parameter* $w_a$ *for some* $a \in [p]$. *Corresponding to it we define* $B(w) := \prod_{j \in [D]} \max\left(\|W_j\|_{\mathrm{op}}, 1\right)$, *where* $W_j$ *is the matrix in the* $j$*-th layer of the net. Let* $x \in \mathbb{R}^d$ *be such that* $\|x\| \leq R$, *and* $w_1, w_2 \in \mathbb{R}^p$ *be such that* $B(w_1), B(w_2) \leq \bar{B}$. *Then one has*

$$|f_{w_1}(x) - f_{w_2}(x)| \leq \bar{B}^2 Q R \sqrt{p}\, \|w_1 - w_2\|.$$

*Moreover, for any* $w \in [-W, W]^p$ *with* $W \geq 1$, *one has* $B(w) \leq (W\sqrt{pQ})^D$.

In light of the above, we define $J$ as follows,

Definition 4 (**Defining** $J$). *Given any two valid weight vectors* $\mathbf{w}_1$ *and* $\mathbf{w}_2$ *for a "branch function"* $B$, *we assume to have the following inequality for some fixed* $J > 0$,

$$\sup_{\mathbf{s}} \|B_{\mathbf{w}_1}(\mathbf{s}) - B_{\mathbf{w}_2}(\mathbf{s})\|_\infty \leq J \cdot \|\mathbf{w}_1 - \mathbf{w}_2\|$$

*And similarly for the trunk functions over the compact domain* $U$.

One can see that the above inequality is easy to satisfy if the space of inputs to the branch or the trunk is bounded. Thus invocation of this inequality implicitly constrains the support of the data distribution.
**An Example of a DeepONet Loss** To put the above definitions in context, let us consider an explicit example of using a DeepONet to solve the forced pendulum ODE, $\mathbb{R}^2 \ni \frac{d(y,v)}{dt} = \left(v,\, -k\cdot\sin(y) + f(t)\right) \in \mathbb{R}^2$, at different forcings $f$ for a given initial condition. Corresponding to this, the training data a DeepONet would need would be 3-tuples of the form $(x_B(f), x_T, y)$, where $x_B(f)$ is a discretization of a forcing function $f$ onto a grid of "sensor points", and $y \in \mathbb{R}$ is the angular position of the pendulum at time $t = x_T$ for forcing $f$. Typically $y$ is a standard ODE solver's approximate solution. It is clear that here $y$, being an angle, is bounded, as was the setup in Definition 1. Referring to Figure 1, we note that $\mathbf{s} = x_B(f)$, a $d_1$-point discretization of a forcing function $f$, would be the input to the branch net, and $\mathbf{p} = x_T \in \mathbb{R}^{d_2} = \mathbb{R}$ would be the trunk input, i.e. the time instant at which we want the location ($y$) of the pendulum. Then the output of the architecture in Figure 1 is the evaluation of the following inner product, $\mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \ni (\mathbf{s},\mathbf{p}) \mapsto \mathrm{DeepONet}(\mathbf{s},\mathbf{p}) := \langle \mathrm{Branch\text{-}Net}(\mathbf{s}), \mathrm{Trunk\text{-}Net}(\mathbf{p})\rangle$. And given $n$ such data as above, the $\ell_2$ empirical loss would be, $L := \frac{1}{2n}\sum_{i=1}^n \left(y_i - \mathrm{DeepONet}(x_B(f_i), x_{T,i})\right)^2$. This empirical loss, when minimized, would yield a trained architecture of the form in Figure 1, which when queried at new forcing functions and time instants would yield accurate estimates of the corresponding pendulum locations.
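A minimal sketch of how such training tuples and the loss above can be assembled is given below; the forcing family, sensor grid, and solver settings are illustrative assumptions rather than a prescribed protocol, and `model` stands for any callable DeepONet of the form in Figure 1.

```python
# A minimal sketch of building the pendulum training tuples (x_B(f), x_T, y)
# and the l2 empirical loss above; all numerical choices here are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k, d1 = 1.0, 100
sensor_grid = np.linspace(0.0, 10.0, d1)          # the "sensor points" for f

def sample_forcing(rng):
    a, w = rng.uniform(0.5, 1.5), rng.uniform(0.5, 2.0)
    return lambda t: a * np.sin(w * t)            # a simple random forcing family

rng = np.random.default_rng(0)
data = []
for _ in range(32):
    f = sample_forcing(rng)
    sol = solve_ivp(lambda t, s: [s[1], -k * np.sin(s[0]) + f(t)],
                    (0.0, 10.0), [0.0, 0.0], dense_output=True)
    x_T = rng.uniform(0.0, 10.0)                  # trunk input: a query time
    y = sol.sol(x_T)[0]                           # label: the angle at time x_T
    data.append((f(sensor_grid), x_T, y))         # branch input is f on the grid

def empirical_loss(model, data):
    # L = (1 / 2n) * sum_i (y_i - DeepONet(x_B(f_i), x_{T,i}))^2
    return sum((y - model(xb, xt)) ** 2 for xb, xt, y in data) / (2 * len(data))
```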
## 4 The Main Theorem

In the setup of the definitions given above, we can now state our main result as follows,

Theorem 4.1. $\forall \delta \in (0,1)$ *and an arbitrary positive constant* $\epsilon > 0$, *and for* $\mathcal{C} \geq 1$ *(from Definition 2), if with probability at least* $1-\delta$ *with respect to the sampling of the data* $\{(y_i, (\mathbf{s}_i,\mathbf{p}_i)) \mid i = 1,\ldots,n\}$, $\exists\, h_{\mathbf{w}_b,\mathbf{w}_t} \in \mathcal{H}$ *s.t.*

$$\frac{1}{n}\sum_{i=1}^n \left(y_i - h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s}_i,\mathbf{p}_i)\right)^2 \leq \sigma^2 - \epsilon\left(1 + \mathcal{C}\cdot J\cdot\left(B + 2\cdot\mathcal{C}^2\right)\right)$$

*then,*

$$q \geq n^{\frac{1}{4}} \cdot \left(\frac{\epsilon^2}{288\cdot B^2} \cdot \frac{1}{\ln\left(\left(\frac{4\min\{d_B,d_T\}^2}{\epsilon}\right)^{d_B+d_T}\cdot\left(W_B\sqrt{d_B}\right)^{d_B}\cdot\left(W_T\sqrt{d_T}\right)^{d_T} + 2\right) + \ln\left(\frac{2}{1-\delta}\right)}\right)^{\frac{1}{4}} \tag{2}$$

*where* $\sigma^2 := \frac{1}{n}\sum_{i=1}^n \mathbb{E}\left[\left(y_i - g(\mathbf{s}_i,\mathbf{p}_i)\right)^2\right]$ *and* $g(\mathbf{s},\mathbf{p}) = \mathbb{E}[y \mid (\mathbf{s},\mathbf{p})]$.

The proof of the above can be seen in Section 6. Note that the lower bound proven here for $q$ is a necessary (and not a sufficient) condition for the required high probability (over data sampling) of the existence of a DeepONet with empirical risk below the threshold given above.

For further insight, we now specialize our Theorem 4.1 to using $\mathcal{C} = 1$ - which then encompasses the case that we shall do experiments with, that of having DeepONets whose branch and trunk nets end in a sigmoid gate. Also, towards the following weakened bound - for a more intuitive presentation - we assume a common upper bound of $W$ on the 2-norm of the parameter vectors of the branch and the trunk net, and define $s := d_B + d_T$ as the upper bound on the total number of parameters in the predictor being trained.

Theorem 4.2. **(Lowerbounds for DeepONets Whose Branch and Trunk End in Sigmoid Gates)**
*Let* $\mathcal{C} = 1$ *and let the constants* $W$ *and* $s$ *be bounds on the norms of the weights of the branch and the trunk and the total number of trainable parameters, respectively. Then* $\forall \delta \in (0,1)$, *and any arbitrary positive constant* $\epsilon > 0$, *if with probability at least* $1-\delta$ *with respect to the sampling of the data* $\{(y_i, (\mathbf{s}_i,\mathbf{p}_i)) \mid i = 1,\ldots,n\}$, $\exists\, h_{\mathbf{w}_b,\mathbf{w}_t} \in \mathcal{H}$ *s.t.*

$$\frac{1}{n}\sum_{i=1}^n \left(y_i - h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s}_i,\mathbf{p}_i)\right)^2 \leq \sigma^2 - \epsilon\left(1 + J\cdot(B+2)\right)$$

*then,*

$$q \geq n^{\frac{1}{4}} \cdot \left(\frac{\epsilon^2}{288\cdot B^2} \cdot \frac{1}{\ln\left(2 + e^{-s\cdot\alpha'}\cdot\left(\frac{4\min\{d_B,d_T\}^2}{\epsilon}\cdot W\sqrt{s}\right)^s\right) + \ln\left(\frac{2}{1-\delta}\right)}\right)^{\frac{1}{4}} \tag{3}$$

*where* $\sigma^2 := \frac{1}{n}\sum_{i=1}^n \mathbb{E}[(y_i - g(\mathbf{s}_i,\mathbf{p}_i))^2]$ *and* $g(\mathbf{s},\mathbf{p}) = \mathbb{E}[y \mid (\mathbf{s},\mathbf{p})]$, *and if the branch net has an* $\alpha$*-fraction of the training parameters, then* $\alpha' = \frac{\alpha}{2}\ln\frac{1}{\alpha} + \frac{1-\alpha}{2}\ln\frac{1}{1-\alpha}$.

To interpret the above theorem, consider a sequence of DeepONet trainings being done for fixed training data (and hence a fixed $n$) and on different architectures - but having the same weight bound, the same number of parameters, and the same $q$, the common output dimension of the branch and the trunk functions. Now we can see how the above theorem reveals a largeness requirement for DeepONets - that if there has to exist an architecture which can get the training error below a certain label-noise dependent threshold, then necessarily the branch/trunk output dimension $q$ has to be $\Omega\left(n^{1/4}\right)$ in the training data size $n$.

Later, in Section 7, we shall conduct an experimental study motivated by the above and reveal something more than what the above theorem guarantees. We will see that, over a sequence of trainings being done on different DeepONet architectures (and a fixed PDE) having nearly the same number of parameters, one can get a monotonic improvement in performance upon increasing the training data size $n$ if it is accompanied by an increase in $q$ s.t. $q/\sqrt{n}$ is nearly constant. We also show that a slightly smaller rate of growth of $n$ for the same sequence of $q$'s would break this monotonicity. Thus it reveals a "scaling law" for DeepONets - which is not yet within the ambit of our theoretical analysis.
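To make the quantitative content of the above concrete, the following is a small numerical sketch of the two scalings just discussed: the necessary $q = \Omega(n^{1/4})$ growth from Theorem 4.2, and the $q/\sqrt{n}$ heuristic suggested by our experiments. The constants `c0` and `kappa` are hypothetical stand-ins lumping together the $\epsilon$-, $B$-, $W$- and $s$-dependent prefactors; they are not values computed from the theorem.

```python
# A small numerical illustration of the scalings discussed above; c0 and kappa
# are hypothetical lumped constants, not values derived from equation (3).
c0, kappa = 0.5, 0.2

# Necessary condition: the branch/trunk output dimension must grow like n^(1/4).
for n in (10**3, 10**4, 10**5, 10**6):
    q_min = c0 * n ** 0.25
    print(f"n = {n:>8d}  ->  q must be at least ~ {q_min:5.1f}")

# Conversely, the Section 7 experiments suggest that to leverage a given q, the
# training set may need q / sqrt(n) roughly constant, i.e. n growing like q^2.
for q in (10, 20, 40):
    n_needed = (q / kappa) ** 2
    print(f"q = {q:3d}  ->  n of order ~ {n_needed:,.0f}")
```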
## 5 Lemmas Towards Proving Theorem 4.1

Lemma 5.1. *For any space* $X$ *with the Euclidean metric, we denote by* $N(\theta, X)$ *its covering number at scale* $\theta$. *Further recall from Definition 2 that* $d_B$ *and* $d_T$ *are the total number of parameters in any function in the sets* $\mathcal{B}$ *and* $\mathcal{T}$ *respectively. Let* $\mathcal{W}_B \subseteq \mathbb{R}^{d_B}$, $\mathcal{W}_T \subseteq \mathbb{R}^{d_T}$ *and* $\mathcal{W}_H = \mathcal{W}_B \times \mathcal{W}_T$ *denote the sets of allowed weights of* $\mathcal{B}$, $\mathcal{T}$, *and* $\mathcal{H}$ *(Definition 3), respectively. Then the following three bounds hold for any* $\theta > 0$,

$$N(\theta, \mathcal{W}_B) \leq \left(\frac{2W_B\sqrt{d_B}}{\theta}\right)^{d_B}, \qquad N(\theta, \mathcal{W}_T) \leq \left(\frac{2W_T\sqrt{d_T}}{\theta}\right)^{d_T},$$
$$N(\theta, \mathcal{W}_H) \leq N(\theta/2, \mathcal{W}_B) \cdot N(\theta/2, \mathcal{W}_T).$$

The proof of the above lemma is given in Section 5.1.1.

Lemma 5.2. *We recall the definition of* $\mathcal{H}$ *from Definition 3,* $B$ *as given in Definition 1, and* $J$ *from Definition 4. Further, for any* $h$ *and any training data of the form given in Theorem 4.1, we denote the corresponding empirical risk as* $\hat{\mathcal{R}}(h) := \frac{1}{n}\sum_{i=1}^n (y_i - h(\mathbf{s}_i,\mathbf{p}_i))^2$. *Then* $\forall \theta > 0$ *we have,*

$$\hat{\mathcal{R}}(h_{(\mathbf{w}_{b,\theta/2},\,\mathbf{w}_{t,\theta/2})}) \leq \hat{\mathcal{R}}(h_{(\mathbf{w}_b,\mathbf{w}_t)}) + q\,\mathcal{C}\,J\theta\cdot\left(B + 2q\,\mathcal{C}^2\right),$$

*where* $\mathbf{w}_{b,\theta/2}$ *and* $\mathbf{w}_{t,\theta/2}$ *are s.t.* $\|\mathbf{w}_b - \mathbf{w}_{b,\theta/2}\| \leq \frac{\theta}{2}$ *and* $\|\mathbf{w}_t - \mathbf{w}_{t,\theta/2}\| \leq \frac{\theta}{2}$.

Thus we see that it is quantifiable how much the empirical risk increases when, for a given training data set, a DeepONet is replaced by another with weights within a distance of $\theta$ from the original - and that this increment is parametric in $\theta$. The proof of the above lemma is given in Section 5.1.2.

Lemma 5.3. *We recall the definition of* $\mathcal{H}_\theta$ *from Definition 3;* $d_B$, $d_T$, $W_B$, $W_T$, $\mathcal{C}$ *and* $q$ *from Definition 2; and* $B$ *as given in Definition 1. Then* $\forall \theta > 0$, *and for* $z_i := y_i - g(\mathbf{s}_i,\mathbf{p}_i)$,

$$\mathbb{P}\left(\exists\, h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})} \in \mathcal{H}_\theta \;\middle|\; \frac{1}{n}\sum_{i=1}^n h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\, z_i \geq \frac{\theta}{4}\right) \leq \frac{2^{2(d_B+d_T)+1}}{\theta^{(d_B+d_T)}}\cdot\left(W_B\sqrt{d_B}\right)^{d_B}\cdot\left(W_T\sqrt{d_T}\right)^{d_T}\cdot\exp\left(-\frac{2n\theta^2}{8^4\cdot\left(B\cdot q\,\mathcal{C}^2\right)^2}\right) + 2\exp\left(-\frac{n\theta^2}{8^3\cdot B^2\cdot q^2\cdot\mathcal{C}^4}\right)$$

The proof of the above lemma is given in Section 5.1.3.

Lemma 5.4. *We continue in the same setup as in the previous lemma, and further recall the definition of* $\sigma$ *as in Theorem 4.1. Then* $\forall \theta > 0$,

$$\mathbb{P}\left(\exists\, h_{\mathbf{w}_b,\mathbf{w}_t} \in \mathcal{H} \;\middle|\; \frac{1}{n}\sum_{i=1}^n \left(y_i - h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s}_i,\mathbf{p}_i)\right)^2 \leq \sigma^2 - \theta\right) \leq 2\exp\left(-\frac{n\theta^2}{288B^2}\right)\cdot\mathbb{P}\left(\exists\, h_{\mathbf{w}_b,\mathbf{w}_t} \in \mathcal{H} \;\middle|\; \frac{1}{n}\sum_{i=1}^n h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s}_i,\mathbf{p}_i)\, z_i \geq \frac{\theta}{4}\right).$$

The above lemma reveals an intimate connection between the empirical error of DeepONets and the correlation of their outputs with the label noise. The proof of the above lemma is given in Section 5.1.4.

## 5.1 Proofs Of The Lemmas

In the following sections we give the proofs of the lemmas listed above.

## 5.1.1 Proof Of Lemma 5.1

Proof. The first two inequalities are standard results, cf. Example 27.1 of (Shalev-Shwartz & Ben-David, 2014). Further, define $d(x,y) = \|x - y\|_2$. Then, let $S \subset \mathbb{R}^{d_B}$ be a witness for $N(\theta/2, \mathcal{W}_B)$; that is, for all $\mathbf{w}_b \in \mathcal{W}_B$, there is some $s \in S$ such that $d(\mathbf{w}_b, s) \leq \theta/2$. Similarly, let $P \subset \mathbb{R}^{d_T}$ be a witness for $N(\theta/2, \mathcal{W}_T)$. Then for all $\mathbf{w}_b \in \mathcal{W}_B$, $\mathbf{w}_t \in \mathcal{W}_T$, there exist corresponding cover points $s \in S$ and $p \in P$. And since $(\mathbf{w}_b, \mathbf{w}_t) \in \mathcal{W}_H$:

$$\begin{aligned} d((\mathbf{w}_b,\mathbf{w}_t),(s,p)) &\leq d((\mathbf{w}_b,\mathbf{w}_t),(s,\mathbf{w}_t)) + d((s,\mathbf{w}_t),(s,p)) \\ &= d(\mathbf{w}_b,s) + d(\mathbf{w}_t,p) \quad \text{(under } d \sim \ell_2\text{-norm)} \\ &\leq \theta \quad \text{(by the definition of } S \text{ and } P\text{)} \end{aligned}$$

Hence, $S \times P$ is a $\theta$-cover of $\mathcal{W}_H$.

## 5.1.2 Proof Of Lemma 5.2

Proof. Given a $\theta > 0$ and an $h_{(\mathbf{w}_b,\mathbf{w}_t)} \in \mathcal{H}$, let $\mathbf{w}_{b,\theta/2}$ and $\mathbf{w}_{t,\theta/2}$ be s.t. $\|\mathbf{w}_b - \mathbf{w}_{b,\theta/2}\| \leq \frac{\theta}{2}$ and $\|\mathbf{w}_t - \mathbf{w}_{t,\theta/2}\| \leq \frac{\theta}{2}$. Then from the definition of $J$ in Definition 4, the following inequalities hold,

$$\sup_{\mathbf{s}}\left\|B_{\mathbf{w}_b}(\mathbf{s}) - B_{\mathbf{w}_{b,\theta/2}}(\mathbf{s})\right\|_\infty \leq J\cdot\frac{\theta}{2} \quad\text{and}\quad \sup_{\mathbf{p}}\left\|T_{\mathbf{w}_t}(\mathbf{p}) - T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\right\|_\infty \leq J\cdot\frac{\theta}{2}$$

Further, we can simplify as follows, for any valid $(\mathbf{s},\mathbf{p})$ input to the function $h_{\mathbf{w}_b,\mathbf{w}_t} = \langle B_{\mathbf{w}_b}, T_{\mathbf{w}_t}\rangle$, and similarly for $h_{\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2}}$.
$$\begin{aligned} &\left|\langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_t}(\mathbf{p})\rangle - \langle B_{\mathbf{w}_{b,\theta/2}}(\mathbf{s}), T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle\right| \\ &= \left|\langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_t}(\mathbf{p})\rangle - \langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle + \langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle - \langle B_{\mathbf{w}_{b,\theta/2}}(\mathbf{s}), T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle\right| \\ &\leq \left|\langle B_{\mathbf{w}_b}(\mathbf{s}),\, T_{\mathbf{w}_t}(\mathbf{p}) - T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle\right| + \left|\langle T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p}),\, B_{\mathbf{w}_b}(\mathbf{s}) - B_{\mathbf{w}_{b,\theta/2}}(\mathbf{s})\rangle\right| \end{aligned}$$

To upper bound the above, we recall (a) the definition of $\mathcal{C}$ from Definition 2, and (b) that for any two $q$-dimensional vectors $a$ and $b$ we have $|\langle a,b\rangle| \leq \sum_{i=1}^q |a_i||b_i| \leq \left(\max_{i=1,\ldots,q}|b_i|\right)\sum_{i=1}^q |a_i|$. Thus we have,

$$\forall(\mathbf{s},\mathbf{p}),\ \left|\langle B_{\mathbf{w}_b}(\mathbf{s}), T_{\mathbf{w}_t}(\mathbf{p})\rangle - \langle B_{\mathbf{w}_{b,\theta/2}}(\mathbf{s}), T_{\mathbf{w}_{t,\theta/2}}(\mathbf{p})\rangle\right| \leq 2\cdot\left(\frac{J\theta}{2}\cdot q\cdot\mathcal{C}\right) \tag{4}$$

$$\implies \forall(\mathbf{s},\mathbf{p}),\ \left|h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s},\mathbf{p}) - h_{\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2}}(\mathbf{s},\mathbf{p})\right| \leq q\cdot\mathcal{C}\cdot J\theta \tag{5}$$

Define $r_{1,i} := \left(y_i - h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right)$ and $r_{2,i} := \left(y_i - h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i)\right)$. Now,

$$\begin{aligned} r_{1,i}^2 - r_{2,i}^2 &= \left(h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)^2 - h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i)^2\right) + 2y_i\left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) - h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) \\ &\leq \left|h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) - h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right|\left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) + 2\cdot B\cdot q\cdot\mathcal{C}\cdot J\theta \\ &\leq \left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right)\cdot q\cdot\mathcal{C}\cdot J\theta + B\cdot q\cdot\mathcal{C}\cdot J\theta \\ &\leq q\cdot\mathcal{C}\cdot J\theta\cdot\left(\left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) + B\right) \end{aligned}$$

Averaging the above over all training data we get,

$$\frac{1}{n}\sum_{i=1}^n r_{1,i}^2 \leq \frac{1}{n}\sum_{i=1}^n r_{2,i}^2 + \frac{1}{n}\sum_{i=1}^n q\cdot\mathcal{C}\cdot J\theta\cdot\left(\left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) + B\right) \tag{6}$$

Using Cauchy-Schwarz on the inner product in the definition of $h$, we get,

$$\left|h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i)\right| \leq \sqrt{q}\,\mathcal{C}\cdot\sqrt{q}\,\mathcal{C} \leq q\cdot\mathcal{C}^2 \implies \left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) \leq 2q\cdot\mathcal{C}^2 \tag{7}$$

Substituting the above into equation 6 and invoking the definition of $\hat{\mathcal{R}}$,

$$\begin{aligned} \hat{\mathcal{R}}(h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}) &\leq \hat{\mathcal{R}}(h_{(\mathbf{w}_b,\mathbf{w}_t)}) + \frac{1}{n}\sum_{i=1}^n q\cdot\mathcal{C}\cdot J\theta\cdot\left(\left(h_{(\mathbf{w}_b,\mathbf{w}_t)}(\mathbf{s}_i,\mathbf{p}_i) + h_{(\mathbf{w}_{b,\theta/2},\mathbf{w}_{t,\theta/2})}(\mathbf{s}_i,\mathbf{p}_i)\right) + B\right) \\ &\leq \hat{\mathcal{R}}(h_{(\mathbf{w}_b,\mathbf{w}_t)}) + \left(q\cdot\mathcal{C}\cdot J\theta\cdot B\right) + \left(q\cdot\mathcal{C}\cdot J\theta\right)\cdot\left(2q\cdot\mathcal{C}^2\right) \\ &\leq \hat{\mathcal{R}}(h_{(\mathbf{w}_b,\mathbf{w}_t)}) + q\,\mathcal{C}\,J\theta\cdot\left(B + 2q\,\mathcal{C}^2\right) \end{aligned}$$

The above is what we set out to prove.

## 5.1.3 Proof Of Lemma 5.3

Proof. Recall that for each data point $i$ we had defined the random variable $z_i := y_i - g(\mathbf{s}_i,\mathbf{p}_i)$. Since $g(\mathbf{s},\mathbf{p}) = \mathbb{E}[y \mid (\mathbf{s},\mathbf{p})]$, we can note that $\mathbb{E}[z_i] = 0$. Further,

$$z_i^2 = \left(y_i - g(\mathbf{s}_i,\mathbf{p}_i)\right)^2 = y_i^2 - 2\cdot y_i\cdot g(\mathbf{s}_i,\mathbf{p}_i) + g(\mathbf{s}_i,\mathbf{p}_i)^2 \leq 4B^2 \tag{8}$$
that $\left|h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}(\mathbf{s}_i,\mathbf{p}_i)\right|\leq q\cdot\mathcal{C}^2$. For brevity, within this proof we write $h_{\frac{\theta}{2}}$ for $h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}$. For each data point $i$, we further define the random variable

$$Y_{\theta,i}:=\left(h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_i$$

Now note that, using $\mathbb{E}[z_i]=0$ and $z_i=y_i-g(\mathbf{s}_i,\mathbf{p}_i)$,

$$\mathbb{E}[Y_{\theta,i}]=\mathbb{E}\left[\left(h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_i\right]=\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\cdot y_i\right]-\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\cdot g(\mathbf{s}_i,\mathbf{p}_i)\right]$$

Next, we use the tower property of the conditional expectation to expand the first term:

$$\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\cdot y_i\right]=\mathbb{E}\left[\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\,y_i\mid(\mathbf{s}_i,\mathbf{p}_i)\right]\right]=\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\,\mathbb{E}[y\mid(\mathbf{s}_i,\mathbf{p}_i)]\right]=\mathbb{E}\left[h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\cdot g(\mathbf{s}_i,\mathbf{p}_i)\right]$$

Substituting this back into the previous equation, we get $\mathbb{E}[Y_{\theta,i}]=0$. Further,

$$|Y_{\theta,i}|=\left|h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)-\mathbb{E}[h_{\frac{\theta}{2}}]\right|\cdot|z_i|\leq\left(\left|h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\right|+\left|\mathbb{E}[h_{\frac{\theta}{2}}]\right|\right)\cdot2B\leq4\cdot\mathcal{C}^2\cdot B\cdot q$$

We shall repeatedly use the following classical concentration inequality.

**Theorem 5.5** (Hoeffding's inequality). *Let $Z_1,\ldots,Z_n$ be independent bounded random variables with $Z_i\in[a,b]$ for all $i$, where $-\infty<a\leq b<\infty$. Then*

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\left(Z_{i}-\mathbb{E}\left[Z_{i}\right]\right)\geq t\right)\leq\exp\left(-\frac{2nt^{2}}{\left(b-a\right)^{2}}\right)\quad\text{and}\quad\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\left(Z_{i}-\mathbb{E}\left[Z_{i}\right]\right)\leq-t\right)\leq\exp\left(-\frac{2nt^{2}}{\left(b-a\right)^{2}}\right)$$

*for all $t\geq0$.*

Applying Theorem 5.5 to $Y_{\theta,i}$, whose range has length at most $8\cdot B\cdot q\mathcal{C}^2$, we get

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_{i}\geq t\right)\leq\exp\left(-\frac{2nt^{2}}{(8\cdot B\cdot q\mathcal{C}^{2})^{2}}\right)\tag{9}$$

We choose $t=\frac{\theta}{8}$ to get

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_{i}\right|\geq\frac{\theta}{8}\right)\leq2\cdot\exp\left(-\frac{2n\theta^{2}}{8^{4}\cdot(B\cdot q\mathcal{C}^{2})^{2}}\right)$$

We define two events:

$$\mathbf{E}_5:=\left\{\left|\frac{1}{n}\sum_{i=1}^{n}z_i\right|\geq\frac{\theta}{8\cdot q\mathcal{C}^2}\right\}\quad\&\quad\mathbf{E}_6:=\left\{\exists\,h_{\frac{\theta}{2}}\in\mathcal{H}_\theta\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[h_{\frac{\theta}{2}}]\,z_i\geq\frac{\theta}{8}\right\}$$

Recalling the bound on the $h$ function, we have $\frac{1}{q\cdot\mathcal{C}^2}\cdot\left|\mathbb{E}[h_{\frac{\theta}{2}}]\right|\in[0,1]$, and hence if $\mathbf{E}_5^c$ happens, then for such a sample of $\{z_i\}_{i=1}^{n}$:
$$\forall\,h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}:\quad\frac{\theta}{8\cdot q\mathcal{C}^{2}}>\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\geq\frac{1}{q\cdot\mathcal{C}^{2}}\cdot\left|\mathbb{E}[h_{\frac{\theta}{2}}]\right|\cdot\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\geq\frac{1}{n\cdot q\mathcal{C}^{2}}\sum_{i=1}^{n}\mathbb{E}[h_{\frac{\theta}{2}}]\,z_{i}$$

Hence $\mathbf{E}_5^c\implies\mathbf{E}_6^c$, and therefore $\mathbb{P}(\mathbf{E}_6)\leq\mathbb{P}(\mathbf{E}_5)$, i.e.

$$\mathbb{P}\left(\exists\,h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[h_{\frac{\theta}{2}}]\,z_{i}\geq\frac{\theta}{8}\right)\leq\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\geq\frac{\theta}{8\cdot q\cdot\mathcal{C}^{2}}\right)\tag{10}$$

Recalling that $|z_i|\leq2B$, by Hoeffding's inequality we have

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\geq\frac{\theta}{8\cdot q\cdot\mathcal{C}^{2}}\right)\leq2\exp\left(\frac{-n\theta^{2}}{8^{3}\cdot B^{2}\cdot q^{2}\cdot\mathcal{C}^{4}}\right)\tag{11}$$

Now we define three events $\mathbf{E}_7$, $\mathbf{E}_8$ and $\mathbf{E}_9$ as follows:

$$\mathbf{E}_7:=\left\{\forall\,h_{\frac{\theta}{2}}\in\mathcal{H}_\theta,\ \frac{1}{n}\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_i\leq\frac{\theta}{8}\right\}$$

$$\mathbf{E}_8:=\left\{\forall\,h_{\frac{\theta}{2}}\in\mathcal{H}_\theta,\ \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[h_{\frac{\theta}{2}}]\,z_i\leq\frac{\theta}{8}\right\}$$

$$\mathbf{E}_9:=\left\{\forall\,h_{\frac{\theta}{2}}\in\mathcal{H}_\theta,\ \frac{1}{n}\sum_{i=1}^{n}h_{\frac{\theta}{2}}(\mathbf{s}_i,\mathbf{p}_i)\,z_i\leq\frac{\theta}{4}\right\}$$

Observe that if $\mathbf{E}_7$ and $\mathbf{E}_8$ hold, then $\mathbf{E}_9$ also holds. Hence,

$$\mathbb{P}(\mathbf{E}_{7}\cap\mathbf{E}_{8})\leq\mathbb{P}(\mathbf{E}_{9})\implies\mathbb{P}(\mathbf{E}_{9}^{c})\leq\mathbb{P}(\mathbf{E}_{7}^{c})+\mathbb{P}(\mathbf{E}_{8}^{c})$$

Thus, we can invoke equations 10 and 11 to get

$$\begin{aligned}\mathbb{P}\left(\exists\,h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})\,z_{i}\geq\frac{\theta}{4}\right)&\leq\mathbb{P}\left(\exists\,h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\left|\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_{i}\right|\geq\frac{\theta}{8}\right)+\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}z_{i}\right|\geq\frac{\theta}{8\cdot q\cdot\mathcal{C}^{2}}\right)\\&\leq\mathbb{P}\left(\bigcup_{h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}}\left\{\frac{1}{n}\left|\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_{i}\right|\geq\frac{\theta}{8}\right\}\right)+2\exp\left(\frac{-n\theta^{2}}{8^{3}\cdot B^{2}\cdot q^{2}\cdot\mathcal{C}^{4}}\right)\\&\leq\sum_{h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}}\mathbb{P}\left(\frac{1}{n}\left|\sum_{i=1}^{n}\left(h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})-\mathbb{E}[h_{\frac{\theta}{2}}]\right)z_{i}\right|\geq\frac{\theta}{8}\right)+2\exp\left(\frac{-n\theta^{2}}{8^{3}\cdot B^{2}\cdot q^{2}\cdot\mathcal{C}^{4}}\right)\end{aligned}$$

Hence,

$$\mathbb{P}\left(\exists\,h_{\frac{\theta}{2}}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}h_{\frac{\theta}{2}}(\mathbf{s}_{i},\mathbf{p}_{i})\,z_{i}\geq\frac{\theta}{4}\right)\leq\frac{2^{2(d_{B}+d_{T})}}{\theta^{(d_{B}+d_{T})}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}\cdot2\cdot\exp\left(-\frac{2n\theta^{2}}{8^{4}\cdot(B\cdot q\mathcal{C}^{2})^{2}}\right)+2\exp\left(-\frac{n\theta^{2}}{8^{3}\cdot B^{2}\cdot q^{2}\cdot\mathcal{C}^{4}}\right)$$

And the above is what we set out to prove.

## 5.1.4 Proof Of Lemma 5.4

Proof. Recall the definition of $z_i$ from the previous proof, and that from the assumptions in Theorem 4.2 we have $\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[z_i^2]=\sigma^2$. Recalling that $z_i\in[-2B,2B]$ and that they are i.i.d.,
we can invoke Hoeffding's inequality (Theorem 5.5, with $t=\frac{\theta}{6}$, $b=2B$, $a=-2B$) to get

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}z_{i}^{2}\leq\sigma^{2}-\frac{\theta}{6}\right)\leq\exp\left(-\frac{n\theta^{2}}{288B^{2}}\right)\tag{12}$$

Further note that the $z_{i}\cdot g(\mathbf{s}_{i},\mathbf{p}_{i})$ are i.i.d. with mean $0$, since $\mathbb{E}[z_{i}\mid(\mathbf{s}_{i},\mathbf{p}_{i})]=0$, and $|z_{i}\cdot g(\mathbf{s}_{i},\mathbf{p}_{i})|\leq2B$. Applying Hoeffding's inequality again,

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}z_{i}\,g\left(\mathbf{s}_{i},\mathbf{p}_{i}\right)\leq-\frac{\theta}{6}\right)\leq\exp\left(-\frac{n\theta^{2}}{288B^{2}}\right)\tag{13}$$

Given an $h_{\mathbf{w}_b,\mathbf{w}_t}\in\mathcal{H}$, we define the following vector-valued random variables:

$$Z:=\frac{1}{\sqrt{n}}\left(z_{1},z_{2},\cdots,z_{n}\right)\tag{14}$$

$$G:=\frac{1}{\sqrt{n}}\left(g\left(\mathbf{s}_{1},\mathbf{p}_{1}\right),g\left(\mathbf{s}_{2},\mathbf{p}_{2}\right),\cdots,g\left(\mathbf{s}_{n},\mathbf{p}_{n}\right)\right)\tag{15}$$

$$F:=\frac{1}{\sqrt{n}}\left(h_{\mathbf{w}_{b},\mathbf{w}_{t}}\left(\mathbf{s}_{1},\mathbf{p}_{1}\right),h_{\mathbf{w}_{b},\mathbf{w}_{t}}\left(\mathbf{s}_{2},\mathbf{p}_{2}\right),\cdots,h_{\mathbf{w}_{b},\mathbf{w}_{t}}\left(\mathbf{s}_{n},\mathbf{p}_{n}\right)\right)\tag{16}$$

Note that

$$\|G+Z-F\|^{2}=\left\|\frac{1}{\sqrt{n}}\left(g\left(\mathbf{s}_{1},\mathbf{p}_{1}\right),\cdots,g\left(\mathbf{s}_{n},\mathbf{p}_{n}\right)\right)+\frac{1}{\sqrt{n}}\left(z_{1},\ldots,z_{n}\right)-\frac{1}{\sqrt{n}}\left(h_{\mathbf{w}_{b},\mathbf{w}_{t}}(\mathbf{s}_{1},\mathbf{p}_{1}),\cdots,h_{\mathbf{w}_{b},\mathbf{w}_{t}}\left(\mathbf{s}_{n},\mathbf{p}_{n}\right)\right)\right\|^{2}\tag{17}$$

Recalling that $z_{i}:=y_{i}-g\left(\mathbf{s}_{i},\mathbf{p}_{i}\right)$ and the definition of the empirical risk of the predictor, $\widehat{\mathcal{R}}(h_{\mathbf{w}_b,\mathbf{w}_t}):=\frac{1}{n}\sum_{i=1}^{n}(y_i-h_{\mathbf{w}_b,\mathbf{w}_t}(\mathbf{s}_i,\mathbf{p}_i))^2$, we realize that

$$\left\|Z+G-F\right\|^{2}=\widehat{\mathcal{R}}(h_{\mathbf{w}_{b},\mathbf{w}_{t}})$$

Suppose $\|Z\|^{2}\geq\sigma^{2}-\frac{\theta}{6}$ and $\langle Z,G\rangle\geq-\frac{\theta}{6}$. Then we have

$$\left\|Z+G-F\right\|^{2}=\left\|Z\right\|^{2}+2\langle Z,G-F\rangle+\left\|G-F\right\|^{2}=\left\|Z\right\|^{2}+2\langle Z,G\rangle-2\langle Z,F\rangle+\left\|G-F\right\|^{2}\geq\sigma^{2}-\frac{\theta}{6}-2\cdot\frac{\theta}{6}-2\langle Z,F\rangle\geq\sigma^{2}-\frac{\theta}{2}-2\langle Z,F\rangle$$

If further we have $\|Z+G-F\|^{2}\leq\sigma^{2}-\theta$, then we have from the above that $\langle F,Z\rangle\geq\frac{\theta}{4}$.

Motivated by the above, we define the following 4 events, namely $\mathbf{E}_i$, $i=1,\ldots,4$:

$$\mathbf{E}_{1}:=\left\{\|Z\|^{2}\geq\sigma^{2}-\frac{\theta}{6}\right\},\ \mathbf{E}_{2}:=\left\{\langle Z,G\rangle\geq-\frac{\theta}{6}\right\},\ \mathbf{E}_{3}:=\left\{\exists\,h_{\mathbf{w}_b,\mathbf{w}_t}\in\mathcal{H}\,\big|\,\widehat{\mathcal{R}}(h_{\mathbf{w}_b,\mathbf{w}_t})\leq\sigma^{2}-\theta\right\}\ \&\ \mathbf{E}_{4}:=\left\{\exists\,h_{\mathbf{w}_b,\mathbf{w}_t}\in\mathcal{H}\,\big|\,\langle F,Z\rangle\geq\frac{\theta}{4}\right\}$$

Thus our above argument can be summarized by saying that if the events $\mathbf{E}_1$, $\mathbf{E}_2$ and $\mathbf{E}_3$ hold, then $\mathbf{E}_4$ will also hold. This we can write as $\mathbb{P}(\mathbf{E}_1\cap\mathbf{E}_2\cap\mathbf{E}_3)\leq\mathbb{P}(\mathbf{E}_4)$, which implies $\mathbb{P}(\mathbf{E}_4)\geq1-\mathbb{P}((\mathbf{E}_1\cap\mathbf{E}_2\cap\mathbf{E}_3)^c)$. But, by union bounding, $\mathbb{P}(\mathbf{E}_1^c\cup\mathbf{E}_2^c\cup\mathbf{E}_3^c)\leq\mathbb{P}(\mathbf{E}_1^c)+\mathbb{P}(\mathbf{E}_2^c)+\mathbb{P}(\mathbf{E}_3^c)\leq3-(\mathbb{P}(\mathbf{E}_1)+\mathbb{P}(\mathbf{E}_2)+\mathbb{P}(\mathbf{E}_3))$. Hence, combining, we have

$$\mathbb{P}(\mathbf{E}_4)\geq-2+(\mathbb{P}(\mathbf{E}_1)+\mathbb{P}(\mathbf{E}_2)+\mathbb{P}(\mathbf{E}_3))$$

From equations 12 and 13 we obtain that $(1-\mathbb{P}(\mathbf{E}_1))\leq\exp\left(-\frac{n\theta^2}{288B^2}\right)$, and similarly for $(1-\mathbb{P}(\mathbf{E}_2))$.
Thus, substituting in the above, we get

$$\mathbb{P}(\mathbf{E}_{3})\leq2\exp\left(-\frac{n\theta^{2}}{288B^{2}}\right)+\mathbb{P}(\mathbf{E}_{4})$$

Thus we have proven what we had set out to prove.

## 6 Proof Of The (Main) Theorem 4.1

A careful study of the proof of Lemma 5.4 would reveal that it can as well be invoked on $\mathcal{H}_\theta$. By doing so we get

$$\mathbb{P}\left(\exists\,h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}(\mathbf{s}_{i},\mathbf{p}_{i})\right)^{2}\leq\sigma^{2}-\theta\right)\leq2\exp\left(-\frac{n\theta^{2}}{288\cdot B^{2}}\right)+\mathbb{P}\left(\exists\,h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}\in\mathcal{H}_{\theta}\;\Big|\;\frac{1}{n}\sum_{i=1}^{n}h_{(\mathbf{w}_{b,\frac{\theta}{2}},\mathbf{w}_{t,\frac{\theta}{2}})}(\mathbf{s}_{i},\mathbf{p}_{i})\,z_{i}\geq\frac{\theta}{4}\right)$$

Using Lemma 5.3, the right-hand side is at most

$$2\exp\left(-\frac{n\theta^{2}}{288\cdot B^{2}}\right)+\frac{2^{2(d_{B}+d_{T})+1}}{\theta^{(d_{B}+d_{T})}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}\cdot\exp\left(-\frac{2n\theta^{2}}{8^{4}\cdot(B\cdot q\mathcal{C}^{2})^{2}}\right)+2\exp\left(-\frac{n\theta^{2}}{8^{3}\cdot(B\cdot q\cdot\mathcal{C}^{2})^{2}}\right)\tag{18}$$

Invoking Lemma 5.2 at $\theta=\frac{\epsilon}{q^{2}}$ and recalling that $q\geq1$ (by definition), we have

$$\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})})\leq\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b},\mathbf{w}_{t})})+\mathcal{C}\cdot J\cdot\epsilon\cdot\left(B+2\cdot\mathcal{C}^{2}\right)\tag{19}$$

With respect to the random sampling of the training data, we define two events, $\mathbf{E}_1$ (corresponding to the function class $\mathcal{H}$) and $\mathbf{E}_2$ (corresponding to the $\frac{\epsilon}{2q^{2}}$-cover of $\mathcal{H}$):

$$\mathbf{E}_{1}:=\left\{\exists\,h_{(\mathbf{w}_{b},\mathbf{w}_{t})}\in\mathcal{H}\;\Big|\;\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b},\mathbf{w}_{t})})\leq\sigma^{2}-\epsilon-\mathcal{C}\cdot J\cdot\epsilon\cdot\left(B+2\cdot\mathcal{C}^{2}\right)\right\}$$

$$\mathbf{E}_{2}:=\left\{\exists\,h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})}\in\mathcal{H}_{\frac{\epsilon}{q^{2}}}\;\Big|\;\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})})\leq\sigma^{2}-\epsilon\right\}$$

Thus, if $\mathbf{E}_1$ is true, we can invoke the above inequality to get

$$\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})})\leq\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b},\mathbf{w}_{t})})+\mathcal{C}\cdot J\cdot\epsilon\cdot\left(B+2\cdot\mathcal{C}^{2}\right)\leq\sigma^{2}-\epsilon-\mathcal{C}\cdot J\cdot\epsilon\cdot\left(B+2\cdot\mathcal{C}^{2}\right)+\mathcal{C}\cdot J\cdot\epsilon\cdot\left(B+2\cdot\mathcal{C}^{2}\right)\leq\sigma^{2}-\epsilon$$

Thus we observe that $\mathbf{E}_1\implies\mathbf{E}_2$, and therefore $\mathbb{P}(\mathbf{E}_1)\leq\mathbb{P}(\mathbf{E}_2)$; we can then invoke equation 18 to get

$$\mathbb{P}\left(\exists\,h_{(\mathbf{w}_{b},\mathbf{w}_{t})}\in\mathcal{H}\;\Big|\;\underbrace{\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-h_{\mathbf{w}_{b},\mathbf{w}_{t}}(\mathbf{s}_{i},\mathbf{p}_{i})\right)^{2}}_{\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b},\mathbf{w}_{t})})}\leq\sigma^{2}-\epsilon\left(1+\mathcal{C}\cdot J\cdot\left(B+2\cdot\mathcal{C}^{2}\right)\right)\right)\leq\mathbb{P}\left(\exists\,h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})}\in\mathcal{H}_{\frac{\epsilon}{q^{2}}}\;\Big|\;\underbrace{\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})}(\mathbf{s}_{i},\mathbf{p}_{i})\right)^{2}}_{\widehat{\mathcal{R}}(h_{(\mathbf{w}_{b,\frac{\epsilon}{2q^{2}}},\mathbf{w}_{t,\frac{\epsilon}{2q^{2}}})})}\leq\sigma^{2}-\epsilon\right)\leq2\exp\left(-\frac{n\epsilon^{2}}{288\cdot B^{2}\cdot q^{4}}\right)+\frac{2^{2(d_{B}+d_{T})+1}}{\left(\frac{\epsilon}{q^{2}}\right)^{(d_{B}+d_{T})}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}\cdot\exp\left(-\frac{2n\epsilon^{2}}{8^{4}\cdot(B\cdot q\mathcal{C}^{2})^{2}\cdot q^{4}}\right)+2\exp\left(-\frac{n\epsilon^{2}}{8^{3}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}\right)\tag{20}$$

Hence, if the required probability has to be at least $1-\delta$, it is necessary that we have

$$(1-\delta)\leq2\exp\left(-\frac{n\epsilon^{2}}{288\cdot B^{2}\cdot q^{4}}\right)+\frac{2^{2(d_{B}+d_{T})+1}}{\left(\frac{\epsilon}{q^{2}}\right)^{(d_{B}+d_{T})}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}\cdot\exp\left(-\frac{2n\epsilon^{2}}{8^{4}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}\right)+2\exp\left(-\frac{n\epsilon^{2}}{8^{3}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}\right)$$

Note that $\frac{n\epsilon^{2}}{8^{3}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}>\frac{2n\epsilon^{2}}{8^{4}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}$.
Further recalling that $q,\mathcal{C}\geq1$, we have $\frac{n\epsilon^{2}}{288\cdot B^{2}\cdot q^{4}}>\frac{n\epsilon^{2}}{8^{3}\cdot B^{2}\cdot q^{6}\cdot\mathcal{C}^{4}}$. Thus a necessary condition to satisfy the above inequality is obtained by weakening it, replacing all three exponentials in there with the smallest of them:

$$1-\delta\leq2\cdot\exp\left(\frac{-n\epsilon^{2}}{288\cdot B^{2}\cdot q^{4}}\right)\cdot\left(\left(\frac{4q^{2}}{\epsilon}\right)^{d_{B}+d_{T}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}+2\right)$$

And the necessary condition above leads to the lower bound

$$q\geq n^{\frac{1}{4}}\cdot\left(\frac{\epsilon^{2}}{288\cdot B^{2}}\cdot\frac{1}{\ln\left(\left(\frac{4q^{2}}{\epsilon}\right)^{d_{B}+d_{T}}\cdot\left(W_{B}\sqrt{d_{B}}\right)^{d_{B}}\cdot\left(W_{T}\sqrt{d_{T}}\right)^{d_{T}}+2\right)+\ln\left(\frac{2}{1-\delta}\right)}\right)^{\frac{1}{4}}\tag{21}$$

The claimed lower bound in the theorem statement follows by further weakening the inequality above, recalling that by definition we have $q\leq\min\{d_B,d_T\}$.

## 7 The Experiment Set-Up

In this section, we shall demonstrate that, at a fixed number of total parameters, increasing the output dimension ($q$) and increasing the training data as $q^2$ can cause a monotonic decrease in the training error of a DeepONet.

The advection-diffusion-reaction P.D.E. (Rahaman et al., 2022) (referred to as the ADR PDE from here on) plays a crucial role in modeling various physical, chemical, and biological processes, and it shall be our example for the experimental studies. More specifically, given a function $f$, this PDE is specified as follows:

$$\frac{\partial u}{\partial t}=D\frac{\partial^{2}u}{\partial x^{2}}+ku^{2}+f(x),\quad x\in[0,1],\ t\in[0,1]\tag{22}$$

with zero initial and boundary conditions. For our preliminary experiments we shall use $D=0.01$ as the diffusion coefficient and $k=0.01$ as the reaction rate. We use DeepONets to learn the operator $G$ mapping from $f(x)$ to the PDE solution $u(x,t)$; that is, the learned operator $G_\theta$ should map the source term $f(x)$ to the PDE solution $u(x,t)$. Given a choice of $m$ sensor points, we shall denote a discretization of $f$ onto the sensor points as the vector $\mathbf{f}\in\mathbb{R}^m$. Recalling the DeepONet operator loss, we realize that minimizing it is trying to induce $G_\theta(\mathbf{f})(x,t)\approx G(f)(x,t)$.

For sampling $f$ we have considered the Gaussian random field (GRF) distribution. Here we have used the mean-zero GRF, $f\sim\mathcal{G}(0,k_l(x_1,x_2))$, where the covariance kernel $k_l(x_1,x_2)=\exp(-\|x_1-x_2\|^2/2l^2)$ is the radial-basis function (RBF) kernel with a length-scale parameter $l>0$. For our experiments we have taken $l=10^{-3}$. After sampling $f$ from the chosen function space, we solve the PDE by a second-order finite difference method to obtain the reference solutions. For $n$ training data samples, the $\ell_2$ empirical loss being minimised is

$$\hat{L}_{\text{DeepONet}}:=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-G_\theta(\mathbf{f}_i)(\mathbf{p}_i)\right)^2,$$

where $\mathbf{p}_i$ is a randomly sampled point in the $(x,t)$ space and $y_i$ is the approximate PDE solution at $\mathbf{p}_i$ corresponding to $f_i$, which we recall was obtained from a conventional solver.

## 7.1 Implementations & Results

We created 10 DeepONet models in each experimental setting such that each model has a depth of 5 and a width varying between 24 and 50 for each layer, while keeping the total number of trainable parameters approximately equal across those 10 models. For each case the branch input dimension is 40 (i.e. the number of sensor points), and the trunk input dimension is 2.
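To make the architecture concrete, here is a minimal PyTorch sketch of a DeepONet of the kind just described. It is illustrative only: the `tanh` activations, the fixed hidden width, and the class names are assumptions on our part, not the exact configuration used in the experiments.

```python
# A minimal DeepONet sketch. Assumptions: tanh activations and a single hidden
# width; the paper instead varies the widths between 24 and 50 so that all 10
# models have roughly the same number of trainable parameters.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=32, depth=5):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    def __init__(self, m=40, trunk_in=2, q=10):
        super().__init__()
        self.branch = mlp(m, q)        # takes f discretized at m sensor points
        self.trunk = mlp(trunk_in, q)  # takes a query point (x, t)

    def forward(self, f_sensors, xt):
        # h(f, (x, t)) = <B(f), T(x, t)>: inner product over the q output dims
        return (self.branch(f_sensors) * self.trunk(xt)).sum(dim=-1)

model = DeepONet(q=10)
f_batch = torch.randn(8, 40)  # 8 sampled source functions on 40 sensors
xt_batch = torch.rand(8, 2)   # one random (x, t) query point per function
print(model(f_batch, xt_batch).shape)  # torch.Size([8])
```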
The smallest number of training data ($n$) we use is $10^4$, and twice we make a choice of 10 different $(q,n)$ values parameterizing the learning setups: once keeping the ratio $\frac{q}{\sqrt{n}}$ approximately constant, and then holding the ratio $\frac{q}{n^{2/3}}$ almost fixed. All the DeepONet models were trained by the stochastic Adam optimizer at its default parameters. The code for this experiment can be found in our GitHub repository (link).

**Experiments in the fixed $\frac{q}{\sqrt{n}}$ setting.** In this setting, the $q$ value was varied from 5 to 50, in increments of 5. We have taken the starting value of $n$ as $10^4$. In Figure 2 we have plotted the training loss dynamics for these 10 models being trained over 120 epochs.

**Experiments in the fixed $\frac{q}{n^{2/3}}$ setting.** We repeat the above experiment but while approximately fixing the value of $\frac{q}{n^{2/3}}$. The corresponding plots are shown in Figure 3.

Figure 2: Training Loss vs Epoch in the fixed $\frac{q}{\sqrt{n}}$ setting

Figure 3: Training Loss vs Epoch in the fixed $\frac{q}{n^{2/3}}$ setting

Further experiments of the above kinds, at other values of $D$ and $k$ for orders of magnitude above and below what is considered here, can be seen in Appendix A. In Appendix A.1 we shall show that the emergence of a scaling law as demonstrated for the fixed $\frac{q}{\sqrt{n}}$ experiments persists even for fixed $\frac{q}{n^{1/6}}$ experiments, as is to be expected as the amount of available data increases for each model considered. A summary table of all parameters studied for the ADR PDE can be seen in Section A.2.

We draw two primary conclusions from the above results. *Firstly*, from Figure 2 we can observe that if $q$ and $n$ increase at a fixed $\frac{q}{\sqrt{n}}$, then performance improves almost monotonically. *Secondly*, from Figure 3 it is clearly visible that the previous monotonicity is breaking; that is, the rate of increase of the data size in the latter experiment was not sufficient to leverage the increase in the output dimension of the branch and the trunk, as was happening in the first figure.

## 8 Discussion

Our key result, Theorem 4.1, shows that a certain data-size-dependent largeness of $q$ is needed if there is to exist a bounded-weight DeepONet at that $q$ which can have its empirical error below the label-noise threshold. From our experiments, we have shown that there is some non-trivial range of $q$ (the common output dimension) along which the empirical risk improves with $q$ for a fixed model size, if the amount of training data is scaled quadratically with $q$. We envisage that trying to prove this "scaling law" can be a very interesting direction for future exploration in theory.

Secondly, we note that our result has not yet fully exploited the structure of the neural nets used in the branch and the trunk. Also, it would be interesting to understand how to tune the argument specifically for the different variations of this architecture (Kontolati et al., 2023; Bonev et al., 2023) that are getting deployed. Lastly, we note that our result is currently agnostic to the PDE being attempted to be solved. There is a tantalizing possibility that the methods in this proof could be extended to derive bounds which can distinguish PDEs that are significantly hard for operator learning.

## References

J. Almeida, P. R. B. Rocha, Allan Moreira De Carvalho, and A. C. Nogueira. A coupled variational encoder-decoder-deeponet surrogate model for the rayleigh-benard convection problem, 2022.

Genming Bai, Ujjwal Koley, Siddhartha Mishra, and Roberto Molinaro.
Physics informed neural networks (pinns) for approximating nonlinear dispersive pdes. *arXiv preprint arXiv:2104.05584*, 2021. Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. *Advances in neural information processing systems*, 30, 2017. Mikhail Belkin, Daniel J Hsu, and Partha Mitra. Overfitting or perfect fitting? risk bounds for classification and regression rules that interpolate. *Advances in neural information processing systems*, 31, 2018a. Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In *International Conference on Machine Learning*, pp. 541–549. PMLR, 2018b. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849– 15854, 2019. Boris Bonev, Thorsten Kurth, Christian Hundt, Jaideep Pathak, Maximilian Baust, Karthik Kashinath, and Anima Anandkumar. Spherical fourier neural operators: Learning stable dynamics on the sphere. *arXiv* preprint arXiv:2306.03838, 2023. Susanne C. Brenner and Carsten Carstensen. Finite Element Methods, nov 15 2004. URL http://dx.doi. org/10.1002/0470091355.ecm003. Sébastien Bubeck and Mark Sellke. A universal law of robustness via isoperimetry. *Journal of the ACM*, 70 (2):1–18, 2023. Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. *IEEE Transactions on Neural* Networks, 6(4):911–917, 1995a. Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. *IEEE Transactions on Neural* Networks, 6(4):911–917, 1995b. doi: 10.1109/72.392253. A. Choubineh, Jie Chen, David A. Wood, Frans Coenen, and Fei Ma. Fourier neural operator for fluid flow in small-shape 2d simulated porous media dataset. *Algorithms*, 2023. doi: 10.3390/a16010024. Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics–informed neural networks: Where we are and what's next. *Journal of Scientific Computing*, 92(3):88, 2022. Zeyu Deng, Abla Kammoun, and Christos Thrampoulidis. A model of double descent for high-dimensional binary linear classification. *Information and Inference: A Journal of the IMA*, 11(2):435–495, 2022. MWMG Dissanayake and Nhan Phan-Thien. Neural-network-based approximations for solving partial differential equations. *communications in Numerical Methods in Engineering*, 10(3):195–201, 1994. Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In *Conference On Learning Theory*, pp. 297–299. PMLR, 2018. Vignesh Gopakumar, S. Pamela, L. Zanisi, Zong-Yi Li, Anima Anandkumar, and Mast Team. Fourier neural operator for plasma modelling. *null*, 2023. doi: null. Pulkit Gopalani and Anirbit Mukherjee. Investigating the Role of Overparameterization While Solving the Pendulum with DeepONets. In The Symbiosis of Deep Learning and Differential Equations, NeurIPS Workshop, 2021. URL https://openreview.net/forum?id=q1rTts5XOIB. Pulkit Gopalani, Sayar Karmakar, and Anirbit Mukherjee. Capacity Bounds for the DeepONet Method of Solving Differential Equations. 2022. URL https://doi.org/10.48550/arXiv.2205.11359. S. 
Goswami, Katiana Kontolati, M. Shields, and G. Karniadakis. Deep transfer learning for partial differential equations under conditional shift with deeponet. *arXiv.org*, 2022. doi: 10.48550/arxiv.2204.09810. Patrik Hadorn. Shift-deeponet: Extending deep operator networks for discontinuous output function-patrik hadorn. *null*, 2022. doi: null. Yiu-Chung Hon and XZ Mao. An efficient numerical scheme for burgers' equation. Applied Mathematics and Computation, 95(1):37–50, 1998. Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (xpinns): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. In *AAAI Spring Symposium: MLPS*, pp. 2002–2041, 2021. Ameya D Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. *Computer* Methods in Applied Mechanics and Engineering, 365:113028, 2020. Williamson Johnny, Hatzinakis Brigido, Marcelo Ladeira, and Joao Carlos Felix Souza. Fourier neural operator for image classification. *Iberian Conference on Information Systems and Technologies*, 2022. doi: 10.23919/cisti54924.2022.9820128. George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physicsinformed machine learning. *Nature Reviews Physics*, 3(6):422–440, 2021. Ganesh Ramachandra Kini and Christos Thrampoulidis. Analytic study of double descent in binary classification: The impact of loss. In *2020 IEEE International Symposium on Information Theory (ISIT)*, pp. 2527–2532. IEEE, 2020. Katiana Kontolati, Somdatta Goswami, George Em Karniadakis, and Michael D Shields. Learning in latent spaces improves the predictive accuracy of deep neural operators. *arXiv preprint arXiv:2304.07599*, 2023. T. Kurth, Shashank Subramanian, P. Harrington, Jaideep Pathak, M. Mardani, D. Hall, Andrea Miele, K. Kashinath, and Anima Anandkumar. Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive fourier neural operators. *Platform for Advanced Scientific Computing Conference*, 2022. doi: 10.1145/3592979.3593412. Isaac E Lagaris, Aristidis Likas, and Dimitrios I Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. *IEEE transactions on neural networks*, 9(5):987–1000, 1998. Isaac E Lagaris, Aristidis C Likas, and Dimitris G Papageorgiou. Neural-network methods for boundary value problems with irregular boundaries. *IEEE Transactions on Neural Networks*, 11(5):1041–1049, 2000. Samuel Lanthaler, Siddhartha Mishra, and George E Karniadakis. Error estimates for DeepONets: a deep learning framework in infinite dimensions. *Transactions of Mathematics and Its Applications*, 6 (1), 03 2022a. ISSN 2398-4945. doi: 10.1093/imatrm/tnac001. URL https://doi.org/10.1093/imatrm/ tnac001. tnac001. Samuel Lanthaler, Siddhartha Mishra, and George E Karniadakis. Error estimates for deeponets: A deep learning framework in infinite dimensions. *Transactions of Mathematics and Its Applications*, 6(1):tnac001, 2022b. F. Lehmann, F. Gatti, M. Bertin, and D. Clouteau. Fourier neural operator surrogate model to predict 3d seismic waves propagation. *arXiv.org*, 2023. doi: 10.48550/arxiv.2304.10242. Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, and Furong Huang. Understanding generalization in deep learning via tensor methods. In *International Conference on Artificial Intelligence and Statistics*, pp. 504–515. 
PMLR, 2020a. Zhijie Li, Wenhui Peng, Zelong Yuan, and Jianchun Wang. Fourier neural operator approach to large eddy simulation of three-dimensional turbulence. *Theoretical and Applied Mechanics Letters*, 2022a. doi: 10.1016/j.taml.2022.100389. Zongyi Li, Nikola B. Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew M. Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv: Learning, 2020b. doi: null. Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator with learned deformations for pdes on general geometries. *Cornell University - arXiv*, 2022b. doi: 10.48550/ arxiv.2207.05209. Guang Lin, Christian Moya, and Zecheng Zhang. B-deeponet: An enhanced bayesian deeponet for solving noisy parametric pdes using accelerated replica exchange sgld. *Journal of Computational Physics*, 2022. doi: 10.1016/j.jcp.2022.111713. Lizuo Liu and Wei Cai. Multiscale deeponet for nonlinear operators in oscillatory function spaces for building seismic wave responses. *arXiv.org*, 2021. doi: null. Lu Lu, Pengzhan Jin, and George Em Karniadakis. Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. *arXiv: Learning*, 2019. doi: null. Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. Deepxde: A deep learning library for solving differential equations. *SIAM review*, 63(1):208–228, 2021. Zhiping Mao, Ameya D Jagtap, and George Em Karniadakis. Physics-informed neural networks for highspeed flows. *Computer Methods in Applied Mechanics and Engineering*, 360:112789, 2020. Ramchandran Muthukumar and Jeremias Sulam. Sparsity-aware generalization theory for deep neural networks. In *The Thirty Sixth Annual Conference on Learning Theory*, pp. 5311–5342. PMLR, 2023. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. *Journal of Statistical Mechanics: Theory and* Experiment, 2021(12):124003, 2021. Guofei Pang, Lu Lu, and George Em Karniadakis. fpinns: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4):A2603–A2626, 2019. Jaewan Park, Shashank Kushwaha, Junyan He, S. Koric, D. Abueidda, and I. Jasiuk. Sequential deep learning operator network (s-deeponet) for time-dependent loads. *arXiv.org*, 2023. doi: 10.48550/arxiv.2306.08218. Jaideep Pathak, Shashank Subramanian, P. Harrington, S. Raja, A. Chattopadhyay, M. Mardani, Thorsten Kurth, D. Hall, Zong-Yi Li, K. Azizzadenesheli, P. Hassanzadeh, K. Kashinath, and Anima Anandkumar. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv.org, 2022. doi: null. Muhammad Masudur Rahaman, Humaira Takia, Md Kamrul Hasan, Md Bellal Hossain, Shamim Mia, and Khokon Hossen. Application of advection diffusion equation for determination of contaminants in aqueous solution: A mathematical analysis. *Applied Mathematics*, 10(1):24–31, 2022. Maziar Raissi and George Em Karniadakis. Hidden physics models: Machine learning of nonlinear partial differential equations. *Journal of Computational Physics*, 357:125–141, 2018. Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: A navier-stokes informed deep learning framework for assimilating flow visualization data. *arXiv preprint arXiv:1808.04327*, 2018. Maziar Raissi, Paris Perdikaris, and George E Karniadakis. 
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686–707, 2019. Bogdan Raonic, Roberto Molinaro, Tobias Rohner, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators. In *ICLR 2023 Workshop on Physics for Machine Learning*, 2023. Deep Ray, Orazio Pinti, and Assad A Oberai. Deep learning and computational physics (lecture notes). arXiv preprint arXiv:2301.00942, 2023. Tim De Ryck and Siddhartha Mishra. Generic bounds on the approximation error for physics-informed (and) operator learning. *ArXiv*, abs/2205.11393, 2022. Mark Sellke. On size-independent sample complexity of relu networks. *arXiv preprint arXiv:2306.01992*, 2023. Shai Shalev-Shwartz and Shai Ben-David. *Understanding machine learning: From theory to algorithms*. Cambridge university press, 2014. Lesley Tan and Liang Chen. Enhanced deeponet for modeling partial differential operators considering multiple input functions. *arXiv.org*, 2022. doi: null. Tapas Tripura and S. Chakraborty. Wavelet neural operator: a neural operator for parametric partial differential equations. *arXiv.org*, 2022. doi: 10.48550/arxiv.2205.02191. Tapas Tripura, Abhilash Awasthi, Sitikantha Roy, and Souvik Chakraborty. A wavelet neural operator based elastography for localization and quantification of tumors. *Comput. Methods Programs Biomed.*, 2023. doi: 10.1016/j.cmpb.2023.107436. Wuzhe Xu, Yulong Lu, and Li Wang. Transfer learning enhanced deeponet for long-time prediction of evolution equations. *arXiv.org*, 2022. doi: 10.48550/arxiv.2212.04663. Liu Yang, Xuhui Meng, and George Em Karniadakis. B-pinns: Bayesian physics-informed neural networks for forward and inverse pde problems with noisy data. *Journal of Computational Physics*, 425:109913, 2021. Jiahao Zhang, Shiqi Zhang, and Guang Lin. Multiauto-deeponet: A multi-resolution autoencoder deeponet for nonlinear dimension reduction, uncertainty quantification and operator learning of forward and inverse stochastic problems. *arXiv.org*, 2022. doi: 10.48550/arxiv.2204.03193. ## A Appendix In this section we extended the scope of our demonstration by revisiting the experiments detailed in Section 7.1 and redoing them at further more values of the diffusion coefficient (D) and the reaction rate (k), as specified in Equation 22. This time we go a couple of orders of magnitude above as well as below the value of D = k chosen in Section 7.1. We conducted four sets of experiments, at different common values for D and k, namely 1) 1 in Figure 4, 2) 0.1 in Figure 5, 3) 0.001 in Figure 6 and 4) 0.0001 in Figure 7. Our findings indicate that for any given pair of (*D, k*) value, at a fixed q√n setting, performance keeps on increasing monotonically i.e the best empirical loss obtained monotonically falls with increasing q. However, for the fixed q n 2 3 setting, this monotonicity breaks, particularly for higher values of D = k. Thus the key insights about a possible scaling law for DeepONets continues to hold as was motivated earlier ![21_image_0.png](21_image_0.png) in Section 7.1. Figure 4: (D & k value as 1) Left: Training Loss vs Epoch in fixed q√n setting. Right: Training Loss vs Epoch in fixed q ![21_image_1.png](21_image_1.png) n 2 3 setting. Figure 5: (D & k value as 0.1) Left: Training Loss vs Epoch in fixed q√n setting. Right: Training Loss vs Epoch in fixed q n 2 3 setting. 
Figure 6: (D & k value as 0.001) Left: Training Loss vs Epoch in the fixed $\frac{q}{\sqrt{n}}$ setting. Right: Training Loss vs Epoch in the fixed $\frac{q}{n^{2/3}}$ setting.

Figure 7: (D & k value as 0.0001) Left: Training Loss vs Epoch in the fixed $\frac{q}{\sqrt{n}}$ setting. Right: Training Loss vs Epoch in the fixed $\frac{q}{n^{2/3}}$ setting.

## A.1 Experiment At A Fixed $\frac{q}{n^{1/6}}$ Setting On The ADR PDE

Here we conducted two sets of experiments, at $q$ and $n$ settings more closely inspired by Theorem 4.2, i.e. we chose a set of nets of increasing $q$ such that the size of the nets and $\frac{q}{n^{1/6}}$ are almost fixed. We do experiments at different common values for $D$ and $k$, namely 0.0001 & 1, in Figure 8.

Figure 8: (D & k value as 0.0001) Left: Training Loss vs Epoch in the fixed $\frac{q}{n^{1/6}}$ setting. (D & k value as 1) Right: Training Loss vs Epoch in the fixed $\frac{q}{n^{1/6}}$ setting.

The above plots suggest that the monotonic performance improvement with increasing $q$, as seen in all the earlier experiments, continues to hold even at the scaling chosen here.

## A.2 Table Of Data For All Experiments On The ADR PDE

## A.2.1 Experiments where $\frac{q}{n^{1/6}}$ (and the size of the operator net) are kept nearly constant

| # Trainable Parameters in the DeepONet | q | n, D = k = $10^{-4}$ | n, D = k = 1 |
|------------------------------------------|-----|-------------------|----------------|
| 18112 | 6 | 11650 | 11650 |
| 18316 | 8 | 65511 | 65511 |
| 18520 | 10 | 249906 | 249906 |
| 18724 | 12 | 746215 | 746215 |

Table 1: The last 2 columns correspond to Figure 8.

## A.2.2 Experiments where $\frac{q}{\sqrt{n}}$ (and the size of the operator net) are kept nearly constant

| # Trainable Parameters in the DeepONet | q | n, D = k = $10^{-4}$ | n, D = k = $10^{-3}$ | n, D = k = $10^{-2}$ | n, D = k = $10^{-1}$ | n, D = k = 1 |
|------------------------------------------|-----|-------------------|-------------------|-------------------|-------------------|----------------|
| 18010 | 5 | 10000 | 10000 | 10000 | 10000 | 10000 |
| 18520 | 10 | 40000 | 40000 | 40000 | 40000 | 40000 |
| 18568 | 15 | 90000 | 90000 | 90000 | 90000 | 90000 |
| 18719 | 40 | 640000 | 640000 | 640000 | 640000 | 640000 |
| 18714 | 45 | 810000 | 810000 | 810000 | 810000 | 810000 |
| 18760 | 50 | 1000000 | 1000000 | 1000000 | 1000000 | 1000000 |

Table 2: The column for D = k = $10^{-2}$ corresponds to Figure 2, and the rest of the last 5 columns, starting from the rightmost above, correspond to the left panels of Figures 4, 5, 6, and 7.

## A.2.3 Experiments where $\frac{q}{n^{2/3}}$ (and the size of the operator net) are kept nearly constant

| # Trainable Parameters in the DeepONet | q | n, D = k = $10^{-4}$ | n, D = k = $10^{-3}$ | n, D = k = $10^{-2}$ | n, D = k = $10^{-1}$ | n, D = k = 1 |
|-----------------------------------------|-----|-------------------|-------------------|-------------------|-------------------|----------------|
| 18010 | 5 | 10000 | 10000 | 10000 | 10000 | 10000 |
| 18520 | 10 | 31623 | 31623 | 31623 | 31623 | 31623 |
| 18568 | 15 | 58000 | 58000 | 58000 | 58000 | 58000 |
| 18719 | 40 | 252982 | 252982 | 252982 | 252982 | 252982 |
| 18714 | 45 | 301870 | 301870 | 301870 | 301870 | 301870 |
| 18760 | 50 | 353553 | 353553 | 353553 | 353553 | 353553 |

Table 3: The column for D = k = $10^{-2}$ corresponds to Figure 3, and the rest of the last 5 columns, starting from the rightmost above, correspond to the right panels of Figures 4, 5, 6, and 7.
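As a cross-check on the tables above, the following is a small sketch of how the $n$ columns follow from holding $q/n^{\alpha}$ fixed. The anchor constants `c` are our own inference from the tables (e.g. $400\,q^2$ matches Table 2 exactly); the other two schedules match only approximately, consistent with the ratios being held "almost" fixed.

```python
# Sketch: regenerate the n schedules of Tables 1-3 from a fixed ratio q / n**alpha.
# The constants c are inferred from the tables, not quoted from the paper.
def n_for_q(q, alpha, c):
    return round(c * q ** (1.0 / alpha))

table2 = {q: n_for_q(q, 1 / 2, 400) for q in range(5, 55, 5)}   # fixed q / sqrt(n)
table3 = {q: n_for_q(q, 2 / 3, 1000) for q in range(5, 55, 5)}  # fixed q / n**(2/3)
table1 = {q: n_for_q(q, 1 / 6, 0.25) for q in (6, 8, 10, 12)}   # fixed q / n**(1/6)

print(table2[10], table3[40], table1[10])  # 40000, 252982, 250000 (vs 249906 in Table 1)
```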
Review 1: Summary: This paper analyses Deep Operator Networks and gives a bound which relates the choice of the $q$ parameter (the dimension of the latent space connecting the 'branch' and 'trunk' networks in a DeepONet) to the size $n$ of the empirical data set. To my knowledge, the relationship derived is novel and of interest to the operator learning literature for PDE surrogate modelling. From my understanding, I see two main contributions of this paper: 1) how to derive a bound which relates the choice of $q$ to the choice of $n$; and 2) empirical evidence to show that monotonic convergence can be achieved by keeping $q/\sqrt{n}$ constant with a fixed parameter budget on a certain PDE. For the first point, deriving this relationship is the main contribution of the paper, with Lemmas 5.1-5.4 necessary to pave the path for achieving the bound given in Theorem 4.1. As the authors note, there is a discrepancy between the bound they derive and what achieves monotonic convergence in their experiments: "Later, in Section 8, we shall conduct an experimental study motivated by the above and reveal something more than what the above theorem guarantees.". From this, I see this paper as a first step towards quantifying an optimal scaling relationship between $q$ and $n$ in DeepONets, but there is still future work to possibly sharpen this result. Because of this, I find what I consider their second contribution to be rather interesting and understated compared to point 1). Motivated by the relationship they derive, the authors conduct a numerical convergence test and identify monotonic convergence at a rate different to what they found in Theorem 4.1. Whilst welcomed, I view this result as separate to their theoretical analysis and possibly suggestive of a stronger result yet to be proved. Strengths and Weaknesses: Strengths: The proofs in the paper are cleanly set out and reasonable to follow. Splitting Theorem 4.1 into four smaller steps makes it easier to identify how the relationship between $q$ and the dataset size is derived, and the splitting of events $E_i$ makes this proof well structured. The paper also clearly lays out previous work and restates relevant theorems pertaining to the approximation theory of DeepONets. This paper is strong in completing its main objective of identifying how to connect the parameter $q$ with the dataset size $n$. Weaknesses: The computational study in this paper is limited compared to the theoretical analysis. Whilst the authors present two settings, one where $q/\sqrt{n}$ is fixed and one where $q/n^{2/3}$ is fixed, neither of these corresponds to the rate that they derive in their analysis. Whilst, from what I gather, their computational evidence here is stronger than what they prove, for completeness the paper should have a small experiment for the rate they derive. The authors also only consider one test problem of a single PDE, with variations of the coefficients in the appendix. Whilst a computational study is not the focal point of the paper, it would be interesting to know if this monotonic convergence for $q/\sqrt{n}$ occurs elsewhere, or if this is just relevant to the PDE the authors consider. Currently, it is unclear to me whether the authors are trying to convey a conjecture that the rate $q/\sqrt{n}$ might be expected in practice generally, or if this rate could be given by specific additional structure of the particular PDE investigated. The paper also has many grammatical inconsistencies and mistakes.
These would be easy to correct, but should be done to improve the readability and flow of the paper. Requested Changes: Suggested changes: I might be incorrect in my understanding here, but currently I see that the experiments run show a sharper rate than what is proven in the main theorem. If this is the case, I believe the text could be improved with the addition of a simple table of $q$ and $n$ comparing the values used in the experiments and the rate required from the theory to make this clear. I also think having a small example of the rate predicted by the theory, even if it is a weaker experiment, would help in the presentation of the results. Currently it is a little difficult to immediately understand how the current bound influences the choice of $q$ and $n$ in the experiments. I also recommend reordering some of the sections to improve the flow of the paper. I currently do not understand why the proofs of the lemmas 5.1-5.4 are listed in section 7 when section 5 states all of the lemmas and section 6 begins with: "A careful study of the proof of Lemma 5.4 would reveal that it can as well be invoked on Hθ.". For the readability of the document, I would recommend just putting the proofs of the lemmas with their statements in section 5, especially seeing as the current section 6 is also proof based and follows on from the proofs of the lemmas. Minor comments: The authors swap between PDE, PDEs and P.D.E in various sections (see Introduction and Section 8). This type of inconsistency is present throughout the document; for instance, the authors swap between Physics Informed Neural Networks and Physics Inspired Neural Networks in an introductory paragraph. Making sure that abbreviations are consistent would aid the reader throughout the document. There are several spelling/grammar mistakes throughout the current document which can be straightforwardly fixed; the ones I can easily garner are: "oftenn" and "reviewed" (instead of review, I think) in the second paragraph of the introduction; Equation 1 and the equation at the beginning of Section 1 are missing punctuation; Definitions 1 and 3 are not complete sentences. I would request that the authors ensure that the sentences are complete, especially around the mathematics and definitions, to help the flow of the document. Broader Impact Concerns: - ================================================== Review 2: Summary: The authors propose a lower bound on the size of DeepONets (meaning the number of trunk and branch nets), mainly as a function of the number of samples in the training data set $n$, with a resulting lower bound $\Omega(\sqrt{n})$. Inspiration is drawn from the work of Bubeck and Sellke (2023). The theory is supplemented by an experiment where DeepONets were trained for fixed ratios of $q/n^\gamma$, suggesting a sort of scaling law. Strengths and Weaknesses: *Strengths* - S1) Theory for operator learning has mainly focused on providing upper bounds on the network size in terms of the error tolerance. It is a very interesting topic to investigate whether corresponding lower bounds can also be proved, and how these depend on all parameters. This work could be a valuable contribution in this area. - S2) Although the evidence is rather limited, the scaling law that the experiment shows is very interesting and would be useful for practitioners. *Weaknesses* - W1) Just like the results of Bubeck and Sellke (2023), the lower bounds are only relevant in the case of noisy training data.
In the context of DeepONets, noise in the training data can occur when physical measurements are used, or (more commonly) when the training data set is obtained through a numerical solver. Suppose one wants to guarantee an accuracy of $\sigma^2/2$ in the error bound of Theorem 4.1 (bottom of page 6); then one can either (1) use a large enough DeepONet following Theorem 4.1, or (2) simply use a slightly better numerical solver such that the variance of the noise is $\sigma^2/2$ rather than $\sigma^2$. I believe that such considerations are important and put the proposed results in a different context, as results such as Theorem 2.1 do not depend on data noise. - W2) Notation throughout the paper is very inconsistent and therefore hard to follow. The true operator is denoted by both $\mathcal{G}$ and $g$, the input function of the DeepONet is denoted by either $f$ or $s$, whereas $s$ is at the same time the total number of parameters in the DeepONet. The number of branch/trunk nets is denoted by both $p$ and $q$. - W3) There sometimes is a lack of mathematical rigour, which again makes the paper harder to follow. For instance, in Definition 1 it is unclear where the $(y_i, (s_i,p_i))$ is sampled from (which space, which distribution). In Section 1.1 one considers a compact domain $D$ yet in Definition 2 the trunk and branch nets are considered over $\mathbb{R}^{d_i}$. It is unclear which setting is considered in the main results. In Definition 3, which vector norm is used? In Definition 4, over which set is the supremum taken? In the main theorem, $\theta$ is defined in terms of $q$ but $q$ is undefined at that point and then only later a lower bound for $q$ is introduced. - W4) The authors state that the downside of Theorem 2.1 is that it doesn't make a connection between the lower bound and the size of the branch net, but the main theorem also only gives a lower bound on the number of trunk nets, and not (at least not explicitly) on the size of the branch net. - W5) Finally, the main result is stated in a way that is very hard to interpret. In particular, there are three equations that link $q$, $n$ and $\epsilon$: namely $\theta \leq \epsilon/q^2$, the assumption on the training set size $n\geq C_1 / \theta^2$ and the final lower bound $q\geq C_2 \theta \sqrt{n}$ (simplified). Rearranging these equations leads to $q^2 \lesssim \epsilon n$. This can either be interpreted as (1) an upper bound on $q$ that needs to hold if the lower bound is valid (which is a bit weird as the goal is to obtain a *lower* bound), or (2) a lower bound for $\epsilon$ (which is hopefully smaller than $(1+\mathcal{C} J(B+2\mathcal{C}^2))^{-1}$ as otherwise the assumption of the theorem is invalid). Finally, the total number of parameters of the DeepONet $d_B$ and $d_T$ (and their sum $s$) also depend on $q$. This means that the current upper bound on $q$ is more of an implicit relationship between various quantities, rather than an explicit upper bound. All of this makes it currently very hard to get a clean understanding of the meaning of the main result. Requested Changes: It would be great if the authors can address or clarify the weaknesses mentioned above. Some additional remarks: - RC1) Regarding W1: I would suggest to clarify further that the obtained lower bounds are more of an artefact of the way the training data is constructed, rather than an intrinsic property of DeepONets or the underlying PDE (as was the case in Theorem 2.1).
- RC2) Regarding W4: I would suggest to clarify how Theorem 4.1 addresses the second weakness mentioned after Theorem 2.1. - RC3) In Lemma 5.1 one uses the notation $N$ before defining it at the end of the lemma, which is very confusing. Similarly in Lemma 5.2 with $\hat{\mathcal{R}}$. - RC4) In the beginning of page 10 (proof of Theorem 4.1) it is written that one requires that the probability is at least $1-\delta$. In the proof of the main result of Bubeck and Sellke (2023) one requires that the probability is at most $\delta$. Why is it different here? - RC5) Experiments could be repeated with different constants, and perhaps different PDEs. Broader Impact Concerns: / ================================================== Review 3: Summary: This paper studies the necessary lower bound on the size of a DeepONet required to attain a given empirical training error. It is shown that the output dimension of the DeepONet needs to scale as $O(\sqrt{n})$, where $n$ denotes the number of training data, when the training error is achieved below a label-noise-dependent threshold. Some numerical experiments are provided to verify the necessary growth of the output dimension as a function of the size of the training data to monotonically decrease the training error. Strengths and Weaknesses: - Strengths. Proving complexity lower bounds for DeepONets is an important question for operator learning which has been less explored. The paper proved an interesting lower bound for learning a generic operator with a DeepONet. To the best of my knowledge, this result seems to be new and relies on the recent result on robustness of neural networks by Bubeck & Sellke (2023). - Weaknesses. I do not have much to comment on the weakness, but I do have some concerns and questions on the statements of the main results, which can lead to a major mistake/weakness of the paper. See the Questions below. Requested Changes: - One major concern I have is about the statements of the main results. For example, in Theorem 4.1, I am confused by which parameter is fixed (and which needs to be bounded). If I understand it correctly, given a fixed sample size $n$ the goal is to obtain a lower bound on the output dim $q$ as a function of $n$. However, the introduction of the parameter $\theta$ complicates the process and makes me really confused. In the preamble of Theorem 4.1, $\theta \leq \epsilon/q^2$ and $n \geq 288B^2/\theta^2 \ln{4/(1-\delta)}$. This means that $n \gtrsim q^4/\epsilon^2$ or equivalently $q \lesssim n^{1/4}$. Would that not contradict the lower bound requirement $q\gtrsim O(n^{1/2})$? The authors should elaborate and clarify this in the revision. - I strongly recommend the authors elaborate more on the Setup, especially the discussion on the training datasets. For readers who do not know the background of operator learning, the current description of the training dataset is way too sketchy. If I understand it correctly, the data set should be the input and output functions. But what is the meaning of $y_i$? - In Theorem 2.1 and several other places, the authors mention that $p$ is the size of the trunk net. Is this the total number of neurons or just the width of the network? The authors are recommended to submit a major revision of the paper addressing the comments above before it is considered for publication in TMLR.
================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper has been reviewed by 3 reviewers: two recommend "leaning accept" (qWBR and mYsr) and one recommends "leaning reject" (reviewer pViX). It provides a lower bound on the size of DeepONets, mainly as a function of the number N of training data. Some numerical results have also been provided to verify experimentally the necessary growth of the output dimension as a function of N. During the rebuttal phase, the paper was significantly improved, but the authors erroneously did not make the answers visible to reviewer pViX. This has been corrected and there has been a lively discussion to clarify the remaining issues this reviewer had. I recommend acceptance of the paper subject to the final comments of reviewer pViX being taken into account. ==================================================
# Pathwise Gradient Variance Reduction With Control Variates In Variational Inference

Anonymous authors

Paper under double-blind review

## Abstract

Stochastic gradient descent is a workhorse in modern deep learning. The gradient of interest is almost always the gradient of an expectation, which is unavailable in closed form. The pathwise and score-function gradient estimators represent the most common approaches to estimating the gradient of an expectation. When it is applicable, the pathwise gradient estimator is often preferred over the score-function gradient estimator because it has substantially lower variance. Indeed, the latter is almost always applied with some variance reduction techniques. However, a series of works suggests, in the context of variational inference, that pathwise gradient estimators may also benefit from variance reduction. In this work, we review existing control-variates-based variance reduction methods for pathwise gradient estimators to determine their effectiveness. Work in this vein generally relies on approximations of the integrand, which necessitates that the functional form of the variational family be simple. In light of this limitation, we also propose applying zero-variance control variates to pathwise gradient estimators, as these control variates have the advantage of requiring few assumptions on the variational distribution, other than the ability to sample from it.

## 1 Introduction

Stochastic optimisation, in particular stochastic gradient descent, is ubiquitous in modern machine learning. In many applications, deployment of stochastic gradient descent necessitates estimating the gradient of an expectation,

$$g(\lambda)=\nabla_{\lambda}\mathbb{E}_{q(z;\lambda)}[r(z;\lambda)],\tag{1}$$

where $r$ is some real-valued function and $\lambda\in\mathbb{R}^{\dim_\lambda}$ is the parameter over which we optimise. Note that since the expectation is taken with respect to a distribution $q$ that is parameterised by $\lambda$, we cannot simply push the gradient operator through the expectation. The two most common approaches to estimating $g(\lambda)$ are the **pathwise gradient estimator** and the **score-function gradient estimator**. We will introduce them in the context of variational inference (VI).

Given an observed dataset $x$ that is governed by a data generating process which depends on latent variables $z\in\mathbb{R}^{\dim_z}$ and a prior $p(z)$ over the latent variables, the posterior distribution is given by $p(z|x)\propto p(x|z)p(z)$, which is only known up to a normalising constant. VI seeks to approximate the posterior with a simple, tractable distribution from the variational family $\mathcal{Q}=\{q(z;\lambda):\lambda\in\mathbb{R}^{\dim_\lambda}\}$. This is often done by finding the $\lambda$ that minimises the Kullback-Leibler divergence between the variational distribution $q$ and the posterior (in that order):

$$\lambda^{*}=\arg\min_{\lambda}\mathbb{E}_{q(z;\lambda)}[\log q(z;\lambda)-\log p(z|x)].\tag{2}$$

In practice, this is done by maximising the so-called evidence lower bound (ELBO),

$$\lambda^{*}=\arg\max_{\lambda}\operatorname{ELBO}(\lambda),\tag{3}$$

where

$$\operatorname{ELBO}(\lambda)=\mathbb{E}_{q(z;\lambda)}[\log p(z,x)-\log q(z;\lambda)].$$

It is easy to see that minimising the KL divergence in (2) is equivalent to maximising the ELBO in (3). Notably, computation of the latter avoids the intractable normalising constant in the posterior $p(z|x)$. The closed-form solution for $\lambda^*$ is generally unavailable.
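Since the ELBO is itself an expectation under $q$, it can at least be estimated by plain Monte Carlo. The following is a minimal sketch, assuming a mean-field Gaussian variational family and a toy `log_joint` standing in for $\log p(z,x)$; none of these names come from the paper.

```python
# A minimal sketch (not the paper's code) of a Monte Carlo estimate of the
# ELBO in eq. (3), for a mean-field Gaussian q(z; lambda) = N(mu, diag(sigma^2)).
import numpy as np

def elbo_estimate(log_joint, mu, sigma, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + sigma * eps  # samples from q(z; lambda)
    log_q = (-0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)
             - 0.5 * np.log(2 * np.pi)).sum(axis=1)
    log_p = np.array([log_joint(zl) for zl in z])
    return np.mean(log_p - log_q)

# Toy model (up to additive constants): unit-Gaussian prior, one observation x = 1.
log_joint = lambda z: -0.5 * np.sum(z ** 2) - 0.5 * np.sum((1.0 - z) ** 2)
print(elbo_estimate(log_joint, mu=np.zeros(2), sigma=np.ones(2)))
```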
Stochastic VI (Hoffman et al., 2013), which optimises with minibatch stochastic gradient descent, became a game changer and opened up new applications for VI. Stochastic VI requires gradient computation of the *minibatch ELBO*,

$$\mathrm{mELBO}(\lambda)=\mathbb{E}_{q(z;\lambda)}\left[r(z;\lambda)\right],$$

where

$$r(z;\lambda)=\frac{N}{B}\sum_{i\in\text{batch}}\left[\log p(x_{i}|z)\right]+\log\frac{p(z)}{q(z;\lambda)}.\tag{4}$$

Here $N$ and $B$ are the data and batch sizes respectively, and $x_i$ denotes the $i$-th element of the dataset.

## 2 Gradient Estimators For VI

There are two main types of gradient estimators in the VI literature for computing the gradient of the mELBO: 1) the pathwise gradient, also known as the reparametrization trick; and 2) the score-function estimator, also known as REINFORCE. The latter is more broadly applicable than the former but comes at the cost of larger variance. Indeed, the score-function estimator is almost always used in conjunction with control variates to reduce its variance (see, for example, Ranganath et al., 2014; Ji et al., 2021). It is far less common to see variance reduction employed for the pathwise gradient estimator, but a series of recent works suggest potential benefits from doing so (Miller et al., 2017; Geffner & Domke, 2020). In this work, we propose a new variance reduction technique for the pathwise gradient estimator in the context of VI. But first, in this section, we review the pathwise gradient estimator.

The pathwise gradient estimator is only readily applicable for *reparametrizable* distributions $q(z;\lambda)$, i.e. distributions where $z$ can be equivalently generated as $z=T(\epsilon;\lambda)$, where $\epsilon\in\mathbb{R}^{\dim_z}\sim q_0(\epsilon)$ and $q_0$ is referred to as the *base distribution*, which is independent of $\lambda$. Take $z\sim\mathcal{N}(\mu,\sigma^2 I)$ as an example: the corresponding transformation is $T(\epsilon;\lambda)=\mu+\sigma\epsilon$, where $\epsilon$ is standard Gaussian and $\lambda=(\mu,\sigma)$. When $q$ is reparametrizable, the gradient operator can be pushed inside the expectation, and we get that the gradient of the mELBO is given by

$$g(\lambda):=\nabla_{\lambda}\,\mathrm{mELBO}(\lambda)=\mathbb{E}_{q_{0}(\epsilon)}\varphi(\epsilon;\lambda),\tag{5}$$

where we define

$$\varphi(\epsilon;\lambda)=\nabla_{\lambda}\left[r(T(\epsilon;\lambda);\lambda)\right].\tag{6}$$

The pathwise gradient estimator is then simply a Monte Carlo estimator of (5) using a set of samples $\{\epsilon_{[l]}\}_{l=1}^{L}$ from $q_0$:

$$\hat{g}(\epsilon_{[1]},\ldots,\epsilon_{[L]};\lambda):=\frac{1}{L}\sum_{l=1}^{L}\varphi(\epsilon_{[l]};\lambda).\tag{7}$$

We will refer to $L$ as the number of gradient samples. The variance of the gradient estimator is thought to play an important role in the convergence properties of stochastic gradient descent. Henceforth, any expectations or variances without a subscript refer to $q_0$. Define the variance of the pathwise gradient estimator to be

$$\mathbb{V}[\hat{g}]=\mathbb{E}\|\hat{g}\|^{2}-\|\mathbb{E}\hat{g}\|^{2}=\frac{1}{L}(\mathbb{E}\|\varphi\|^{2}-\|\mathbb{E}\varphi\|^{2})=\frac{1}{L}\mathbb{V}[\varphi].$$

To reduce the variance of (7), we may add a *control variate* to the pathwise gradient estimator:

$$\frac{1}{L}\sum_{l=1}^{L}\left[\varphi(\epsilon_{[l]};\lambda)+c(\epsilon_{[l]})\right],\tag{8}$$

where the control variate $c(\cdot)\in\mathbb{R}^{\dim_\lambda}$ is a random variable with zero expectation, so that $\mathbb{E}[\varphi+c]=\mathbb{E}\varphi$.
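The following is a minimal PyTorch sketch of (5)-(8), assuming a mean-field Gaussian $q$, a toy $\log p(z,x)$, and, purely for illustration, the weak zero-mean control variate $c(\epsilon)=\beta\epsilon$ applied to the $\mu$-gradient; the control variates actually studied in this paper are constructed in Sections 4 and 5.

```python
# Sketch of the pathwise gradient estimator (7) with the control-variate
# adjustment (8). Illustrative assumptions: mean-field Gaussian q(z; lambda),
# a toy log p(z, x), and the simple zero-mean control variate c(eps) = beta * eps.
import torch

def log_joint(z):  # toy log p(z, x): unit-Gaussian prior, one observation x = 1
    return -0.5 * (z ** 2).sum(-1) - 0.5 * ((1.0 - z) ** 2).sum(-1)

def phi(eps, mu, log_sigma):
    """One sample of phi(eps; lambda) = grad_lambda r(T(eps; lambda); lambda)."""
    z = mu + log_sigma.exp() * eps                 # z = T(eps; lambda)
    log_q = (-0.5 * eps ** 2 - log_sigma).sum(-1)  # log q(z; lambda), up to a constant
    r = log_joint(z) - log_q                       # the (m)ELBO integrand
    return torch.autograd.grad(r, (mu, log_sigma))

mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
L, beta = 4, -0.5  # in practice beta would be fitted, e.g. as in Section 4.2
adjusted = []
for _ in range(L):
    eps = torch.randn(2)
    g_mu, g_log_sigma = phi(eps, mu, log_sigma)
    adjusted.append(g_mu + beta * eps)  # eq. (8), applied here to the mu-gradient only
print(torch.stack(adjusted).mean(0))    # the control-variate-adjusted estimate
```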
A good control variate c has a strong, negative correlation with φ, since

$$\begin{aligned}
\mathbb{V}\Big[\frac{1}{L}\sum_{l=1}^{L}\varphi(\epsilon_{[l]};\lambda)+c(\epsilon_{[l]})\Big]&=\frac{1}{L}\mathbb{V}[\varphi+c]\\
&=\frac{1}{L}\big(\mathbb{V}[\varphi]+\mathbb{V}[c]+2\operatorname{Tr}(\mathbb{C}[\varphi,c])\big)\\
&=\mathbb{V}[\hat{g}]+\frac{1}{L}\big(\mathbb{V}[c]+2\operatorname{Tr}(\mathbb{C}[\varphi,c])\big).
\end{aligned}\tag{9}$$

Therefore, as long as Tr(C[φ, c]) < 0 and |Tr(C[φ, c])| > ½V[c], the control-variate-adjusted gradient estimator (8) will have a smaller variance than (7).

Finally, a control variate may also be formed as a linear combination of control variates. Let $C : \mathbb{R}^{\dim_z} \to \mathbb{R}^{\dim_\lambda \times J}$ be a matrix with J control variates in its columns. This leads to the control-variate-adjusted gradient estimator

$$\hat{h}(\epsilon_{[1]},\ldots,\epsilon_{[L]};\lambda):=\frac{1}{L}\sum_{l=1}^{L}\left[\varphi(\epsilon_{[l]};\lambda)+C(\epsilon_{[l]})\beta\right],\tag{10}$$

which remains a valid control-variate adjustment due to the linearity of the expectation operator, i.e. E[Cβ] = 0. Here $\beta \in \mathbb{R}^{J}$ is a vector of coefficients, one for each control variate. This construction allows us to combine multiple weak control variates into a strong one by adjusting β.

For obvious reasons, applying control variates is only worthwhile if the computation of the control variates is cheaper than increasing the number of samples in (7). From (9), we can see that, for example, the estimator variance can be reduced by half either by doubling L or by halving V[φ + c]. This presents a unique challenge when applying control variates in the low-L regime (such as the gradient estimator of VI, where L is often very low), since the cost of applying control variates will more likely outweigh the cost of increasing L to achieve the same variance reduction. The control variates that were developed in the Markov chain Monte Carlo (MCMC) community cannot be easily applied in our work because 1) they require large L, but L can be as small as one in stochastic VI, and 2) MCMC variance reduction is usually performed once at the very end, while variance reduction is required at every gradient update in stochastic VI.

In this work, we review existing control-variates-adjusted pathwise gradient estimators in the context of VI. We are primarily interested in whether employing control variates can achieve faster convergence with respect to wall-clock time. We are also motivated by the gap in the VI literature on gradient variance reduction when q is reparametrizable but the mean and covariance of q are not available in closed form. A good example of such a q is a normalizing flow, where z is the result of pushing forward a base distribution q0 through an invertible transformation T(·; λ) that is parameterised by λ, i.e. z = T(ϵ; λ) with ϵ ∼ q0. Such a transformation T can be arbitrarily complex and usually involves neural networks. To this end, we introduce a control-variate-adjusted gradient estimator based on zero-variance control variates (Assaraf & Caffarel, 1999; Mira et al., 2013; Oates et al., 2017), which does not have this limitation.

This paper is structured as follows: in Section 3, we provide a review of the latest advancements in variance reduction techniques for pathwise gradient estimators in the context of VI. This is followed by a discussion of methods for selecting β and C of (10) in Sections 4 and 5, respectively. The novel method based on zero-variance control variates is introduced in Section 5.2. Finally, we present the experimental results in Section 6.
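To make the estimators (7) and (8) concrete, the following JAX sketch implements the pathwise gradient estimator for a mean-field Gaussian with a log-scale parameterisation (as used in our experiments). The `log_joint` placeholder and the array shapes are assumptions for illustration, not part of our implementation.

```python
import jax
import jax.numpy as jnp

def log_joint(z):
    # Placeholder for log p(z, x); any differentiable function of z works.
    return -0.5 * jnp.sum(z ** 2)

def r(z, mu, log_sigma):
    # Integrand of the mELBO: log p(z, x) - log q(z; lambda).
    log_q = jnp.sum(-0.5 * ((z - mu) / jnp.exp(log_sigma)) ** 2
                    - log_sigma - 0.5 * jnp.log(2.0 * jnp.pi))
    return log_joint(z) - log_q

def phi(eps, params):
    # varphi(eps; lambda) in (6): differentiate r(T(eps; lambda); lambda)
    # through both the sample path T and the explicit lambda-dependence.
    def obj(p):
        mu, log_sigma = p
        z = mu + jnp.exp(log_sigma) * eps   # T(eps; lambda)
        return r(z, mu, log_sigma)
    return jax.grad(obj)(params)

def pathwise_grad(key, params, L=10):
    # Monte Carlo estimator (7); a control variate c(eps_[l]) would simply
    # be added to each phi term here to obtain (8) or (10).
    eps = jax.random.normal(key, (L, params[0].shape[0]))
    grads = jax.vmap(lambda e: phi(e, params))(eps)
    return jax.tree_util.tree_map(lambda g: g.mean(axis=0), grads)
```

Here `params = (mu, log_sigma)` plays the role of λ, and the returned estimate is a pytree matching `params`.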
## 3 Related Work

Variance reduction for the pathwise gradient estimator in VI has been explored in Miller et al. (2017) and Geffner & Domke (2020). These works focused on designing a single control variate (i.e. C has only one column) of the form C = E[φ̃] − φ̃, where φ̃(ϵ; λ) is an approximation of φ(ϵ; λ). The expectation E[φ̃] is designed to be theoretically tractable, but this usually implies some restriction on T (and therefore on q) to make the expectation easy to compute. Miller et al. (2017) proposed a φ̃ based on a first-order Taylor expansion of ∇z log p(z, x). However, this Taylor expansion requires the expensive computation of the Hessian ∇²_z log p(z, x). Geffner & Domke (2020) improved upon their work and proposed using a quadratic function to approximate log p(z, x). Their method only requires the first-order gradient ∇z log p(z, x), and their E[φ̃] is available in closed form as a function of the mean and covariance of q. A direct modification of their method is to estimate E[φ̃] empirically when the mean and covariance of q are unavailable; see Section 5.1 for more details. Both Miller et al. (2017) and Geffner & Domke (2020) only considered Gaussian q in their work, although the latter can be applied to a larger class of variational families for which the mean and covariance of q are known.

The proposed estimator based on zero-variance control variates is similar in spirit to an earlier work from the same group, Geffner & Domke (2018). Like Geffner & Domke (2018), we propose combining weak control variates into a stronger one, but our work differs in the construction of the individual control variates and in the optimisation criterion for β.

## 4 Selecting β For The Control-Variate-Adjusted Pathwise Gradient Estimator

The utility of control variates depends on the choice of β and C in (10). In this section, we discuss various strategies to pick an appropriate β given a family of control variates C.

## 4.1 A Unique Set Of β For Each Dimension Of λ

The formulation in (10) uses the same set of β across the dimensions of φ. This can be too restrictive for C that is weakly correlated with φ. In such instances, having a unique set of β coefficients for each dimension of φ can be beneficial, as it allows the coefficients to be selected on a per-dimension basis. In fact, this can easily be done by turning C into a dimλ × (Jdimλ)-dimensional, block-diagonal matrix

$$\begin{bmatrix}C_{1,:}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&C_{\dim_{\lambda},:}\end{bmatrix},$$

where $C_{i,:}$ is the $i$th row of the original C. In other words, we expand the number of control variates to Jdimλ, and each control variate only reduces the variance of a single dimension of φ.

## 4.2 Optimisation Criteria For β

β is usually chosen to minimise the variance of ĥ. In practice, this variance is usually replaced by an empirical approximation due to the lack of a closed-form expression. There are three approximations in the literature, the first of which is a direct approximation of the variance with the samples $\{\epsilon_{[l]}\}_{l=1}^{L}$,

$$\mathbb{V}[\varphi+C\beta]\approx{\frac{1}{L(L-1)}}\sum_{l>l^{\prime}}\|\varphi(\epsilon_{[l]})+C(\epsilon_{[l]})\beta-\varphi(\epsilon_{[l^{\prime}]})-C(\epsilon_{[l^{\prime}]})\beta\|^{2},\tag{11}$$

as seen in Belomestny et al. (2018).
The second approximation is based on the definition of variance:

$$\mathbb{V}[\varphi+C\beta]=\mathbb{E}\|\varphi+C\beta-\mathbb{E}[\varphi+C\beta]\|^{2}\approx\min_{\alpha\in\mathbb{R}^{\dim_{\lambda}}}\frac{1}{L}\sum_{l=1}^{L}\|\varphi(\epsilon_{[l]})+\alpha+C(\epsilon_{[l]})\beta\|^{2},\tag{12}$$

where $\alpha \in \mathbb{R}^{\dim_\lambda}$ is an intercept term standing in for the unknown E[φ + Cβ]. The second approximation in (12) is generally cheaper to compute than the first approximation in (11), as it only requires O(L) operations rather than O(L²); see Si et al. (2022) for details.

Minimising (12) with respect to β is essentially a least squares problem with a well-known closed-form solution,

$$\begin{bmatrix}\alpha^{*}\\ \beta^{*}\end{bmatrix}=\arg\min_{\alpha,\beta}\sum_{l=1}^{L}\left\|\varphi(\epsilon_{[l]})+\begin{bmatrix}\mathbb{I}_{\dim_{\lambda}}&C(\epsilon_{[l]})\end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\right\|^{2}\tag{13}$$

$$=-(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\varphi,\tag{14}$$

where $\mathbb{I}_{\dim_\lambda}$ is an identity matrix of size dimλ, and φ and the design matrix X are given by

$$\varphi=\begin{bmatrix}\varphi(\epsilon_{[1]})\\ \vdots\\ \varphi(\epsilon_{[L]})\end{bmatrix},\quad\mathbf{X}=\begin{bmatrix}\mathbb{I}_{\dim_{\lambda}}&C(\epsilon_{[1]})\\ \vdots&\vdots\\ \mathbb{I}_{\dim_{\lambda}}&C(\epsilon_{[L]})\end{bmatrix}.$$

However, the matrix inversion in (14) can be problematic to compute, especially when X⊤X is rank-deficient. In such instances, we can include a penalty in (13) and solve for the penalised least squares solution (see, for example, South et al., 2022), or use an iterative optimisation algorithm to minimise (13), as suggested in Si et al. (2022). A code sketch of this least-squares fit is given at the end of this section.

Finally, the third approach relies on the assumption that E[Cβ] = 0 and is based on the observation that

$$\mathbb{V}[\varphi+C\beta]=\mathbb{E}\|\varphi+C\beta\|^{2}-\|\mathbb{E}\varphi\|^{2}.$$

This suggests that the variance can be equivalently minimised by minimising only the expected squared norm, E∥φ+Cβ∥². Geffner & Domke (2018) show that the minimiser β* = arg min_β E∥φ+Cβ∥² is given by

$$\beta^{*}=-\mathbb{E}[C^{\top}C]^{-1}\mathbb{E}[C^{\top}\varphi],\tag{15}$$

and suggest replacing E[C⊤C] and E[C⊤φ] with their empirical estimates. This approach, however, requires a costly inversion of a J × J matrix.

## 4.3 Potential Bias In The Gradient Estimator

The unbiasedness of the control-variate-adjusted Monte Carlo estimator (10) relies on the assumption that β is independent of C, since E[C(ϵ)β(ϵ)] ≠ 0 in general. This necessitates that β and C be estimated with independent sets of ϵ samples. In practice, however, β is often estimated with the same set of ϵ used in C to save computational time, at the cost of introducing bias into the gradient estimates.
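As a concrete illustration of the least-squares criterion (12)-(14), the sketch below builds the design matrix X and solves for (α*, β*) with a standard least-squares routine. The array shapes are assumptions for illustration.

```python
import jax.numpy as jnp

def fit_beta_least_squares(phis, Cs):
    # phis: (L, dim_lambda), rows varphi(eps_[l]); Cs: (L, dim_lambda, J),
    # slices C(eps_[l]). Solves (13) via the closed-form solution (14).
    L, dim_lam, J = Cs.shape
    eye = jnp.broadcast_to(jnp.eye(dim_lam), (L, dim_lam, dim_lam))
    X = jnp.concatenate([eye, Cs], axis=-1).reshape(L * dim_lam, dim_lam + J)
    y = phis.reshape(L * dim_lam)
    # lstsq solves min ||X s + y||^2, i.e. [alpha*; beta*] = -(X^T X)^{-1} X^T phi.
    sol, *_ = jnp.linalg.lstsq(X, -y)
    return sol[:dim_lam], sol[dim_lam:]  # alpha*, beta*
```

A least-squares routine returns a minimum-norm solution even when X⊤X is rank-deficient, which partially mitigates the inversion issue noted above.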
## 5 Control Variates

Having reviewed methods to select β given a family of control variates C, we now turn our attention to constructing C. We first propose a simple modification of Geffner & Domke (2020) that enables it to work for variational distributions q with unknown mean and covariance. Subsequently, we introduce zero-variance control variates, which can be constructed without knowledge of q or T.

## 5.1 Quadratic Approximation Control Variates

In this section we review the quadratic-approximation control variates proposed in Geffner & Domke (2020). An important distinction at the outset is their assumption that the entropy term in the mELBO,

$$-\mathbb{E}_{q(z;\lambda)}\log q(z;\lambda),$$

is *known*. As such, the focus of Geffner & Domke (2020) is to reduce the variance of

$$\mathbb{E}\nabla_{\lambda}f(T(\epsilon;\lambda)),$$

where

$$f(z)={\frac{N}{B}}\sum_{i\in\mathrm{batch}}\,[\log p(x_{i}|z)]+\log p(z).\tag{16}$$

Geffner & Domke (2020) proposed control variates of the form

$$C(\epsilon)=\mathbb{E}[\nabla_{\lambda}\tilde{f}(T(\epsilon;\lambda))]-\nabla_{\lambda}\tilde{f}(T(\epsilon;\lambda)),\tag{17}$$

where

$${\tilde{f}}(z;v)=b_{v}^{\top}(z-z_{0})+{\frac{1}{2}}(z-z_{0})^{\top}B_{v}(z-z_{0})$$

is a quadratic approximation of (16). Here, v = {Bv, bv} are the parameters of the quadratic approximation, chosen to minimise the L² difference between ∇f(z) and ∇f̃(z). We drop v in f̃ for the sake of brevity. The location parameter z0 is set to E[T(ϵ; λ)]. This quadratic approximation of f can also be viewed as a linear approximation of ∇f.

The first term in (17) has a closed-form expression that depends on the mean and covariance of q(z; λ), making the expectation cheap to evaluate when they are readily available. However, this is not the case when T(ϵ; λ) is arbitrarily complex, e.g. a normalizing flow. A direct workaround of this limitation is to replace E[∇λ f̃(T(ϵ; λ))] with an empirical estimate based on a sample of ϵ's. Note that f̃ requires z0 = E[T(ϵ; λ)], which we estimate using another independent set of ϵ's. See Algorithm 1 for a summary of the procedure. As (17) is a part of the Monte Carlo estimator (10), it could be tempting to estimate E[∇λ f̃(T(ϵ; λ))] with an average of the ∇λ f̃(T(ϵ; λ)) evaluations that have already been computed in (10). This is to be avoided, as it would result in the two terms of (17) cancelling each other out.

As (17) is designed to be strongly correlated with φ when f̃ is reasonably close to f, the choice of β becomes less significant. Geffner & Domke (2020) opted to minimise the expected squared norm with a scalar β (note that C is a column vector in this case), the solution of which is given in (15). In their work, the expectations E[C⊤φ] and E[C⊤C] are replaced with empirical estimates computed from the C and φ in (10), instead of fresh evaluations. However, as discussed in Section 4.2, the resulting gradient estimate is biased due to the dependency between β and C. While this bias is not mentioned explicitly in Geffner & Domke (2020), we conjecture that they overcame the issue by estimating the expectations with the C and φ computed from previous iterations, as specified in Algorithm 1; their β is therefore independent of the C in the current iteration. This avoids introducing bias into the gradient estimate at the cost of a sub-optimal β. They also claimed that their estimates of E[C⊤φ] and E[C⊤C] (and, by extension, β) do not differ much across iterations. Moreover, their β largely acts as an auxiliary 'switch' for the control variate when f̃ is a poor approximation of f, rather than as the primary mechanism for reducing the estimator variance: β will be almost 0 when f̃ does not approximate f well (i.e. C[φ, C] ≈ 0), and their C only kicks in when it is sufficiently correlated with φ.

Lastly, let us return to the discussion of the entropy term at the beginning of this section. Our setup is more general than that of Geffner & Domke (2020), as we do not assume the entropy term −E_{q(z;λ)} log q(z; λ) to have a closed-form expression; i.e. our r(z; λ) includes −log q(z; λ). Although it was claimed in Geffner & Domke (2020) that their quadratic approximation control variate can be similarly designed for r(z; λ) in (4) rather than f(z) in (16), we found the implementation difficult because the updating step of v requires the gradient ∇z log q(z; λ), and in turn ∂λ/∂z, which is challenging to compute.
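The following sketch shows the empirical variant of the quadratic-approximation control variate (17) described above, with E[∇λ f̃(T(ϵ; λ))] replaced by an average over an independent set of ϵ samples. The reparameterisation `T`, a flat parameter vector `lam`, and the quadratic parameters (`z0`, `b`, `B`) are assumed given; all names are illustrative.

```python
import jax
import jax.numpy as jnp

def f_tilde(z, z0, b, B):
    # Quadratic approximation f~(z; v) with v = (b, B) and location z0.
    d = z - z0
    return jnp.dot(b, d) + 0.5 * jnp.dot(d, B @ d)

def quad_cv(eps, eps_fresh, lam, T, z0, b, B):
    # C(eps) in (17); grad_lambda f~(T(eps; lambda)) is obtained by autodiff.
    g = lambda e: jax.grad(lambda l: f_tilde(T(e, l), z0, b, B))(lam)
    # First term of (17), estimated on an *independent* set of eps samples;
    # reusing the eps already consumed by (10) would cancel the two terms.
    e_grad = jax.vmap(g)(eps_fresh).mean(axis=0)
    return e_grad - g(eps)
```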
## 5.2 Zero-Variance Control Variates

The control variates in Geffner & Domke (2020) require one to know the mean and covariance of q(z; λ). To avoid this requirement, we propose the use of gradient-based control variates (Assaraf & Caffarel, 1999; Mira et al., 2013; Oates et al., 2017). These control variates are generated by applying a so-called Stein operator L to a class of user-specified functions P(z). Typically the Stein operator uses ∇z log q(z), the gradient of the log probability density function of the distribution over which the expectation is taken, but it does not require any other information about φ or T.

Algorithm 1 Quadratic approximation control variates with empirical estimates of E[f̃]

Require: Learning rates γ(λ), γ(v). Initialise λ, v and the control variate weight β = 0.
for k = 0, 1, 2, . . . do
  Sample ϵ[1], . . . , ϵ[L] ∼ q0 to compute φ(ϵ[l]; λk)
  Generate an independent set of 100 ϵ samples to estimate z0 = E[T(ϵ; λ)]
  Generate an independent set of 100 ϵ samples to estimate E[∇λ f̃(T(ϵ; λ); vk)]
  Compute h = (1/L) Σ_{l=1}^{L} [φ(ϵ[l]; λk) + C(ϵ[l])β]   ▷ see (17)
  Take an ascent step λ_{k+1} ← λk + γ(λ) h
  Estimate E[C⊤C] and E[C⊤φ] with ϵ[1], . . . , ϵ[L], and update β with (15)
  Take a descent step v_{k+1} ← vk − γ(v) (1/(2L)) Σ_{l=1}^{L} ∇v ∥∇z f(T(ϵ[l]; λk)) − ∇z f̃(T(ϵ[l]; λk); vk)∥²
end for

We will focus on the form of gradient-based control variates known as zero-variance control variates (ZVCV, Assaraf & Caffarel, 1999; Mira et al., 2013). ZVCV uses the second-order Langevin Stein operator and a polynomial $P(z)=\sum_{j=1}^{J}\beta_{j}P_{j}(z)$, where $P_j(z)$ is the $j$th monomial in the polynomial and J is the number of monomials. The control variates are

$$\{{\mathcal{L}}P_{j}(z)\}_{j=1}^{J}=\{\Delta_{z}P_{j}(z)+\nabla_{z}P_{j}(z)\cdot\nabla_{z}\log q(z)\}_{j=1}^{J},$$

where Δz is the Laplacian operator and q(z) is the probability density function of the distribution over which the expectation is taken. A sufficient condition for these control variates to have zero expectation is that the tails of q decay faster than polynomially (Appendix B of Oates et al., 2016), which is satisfied by Gaussian q, for example. In this paper, we only consider first-order polynomials, so there are J = dimz control variates of the form

$$\left\{{\frac{\partial}{\partial z_{j}}}\log q(z)\right\}_{j=1}^{\dim_{z}}.$$

Here, $z_j$ refers to the $j$th dimension of z. We do not find second-order polynomials to have any advantage over first-order polynomials; see Appendix F for a discussion. For pathwise gradient estimators using a standard Gaussian as the base distribution, these control variates simplify further to

$$\left\{\frac{\partial}{\partial\epsilon_{j}}\log q_{0}(\epsilon)\right\}_{j=1}^{\dim_{z}}=\{-\epsilon_{j}\}_{j=1}^{\dim_{z}}.$$

We also use the same set of control variates across the different dimensions of φ, but assign each dimension a unique set of β. That is, the matrix of control variates C is a block-diagonal matrix of size dimλ × dimλdimz:

$$C(\epsilon)=\begin{bmatrix}-\epsilon^{\top}&\dots&0\\ \vdots&\ddots&\vdots\\ 0&\dots&-\epsilon^{\top}\end{bmatrix}.\tag{18}$$

This is in contrast to Geffner & Domke (2020), where the values in C differ across dimensions but β is shared.
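Because of the block-diagonal structure of (18), the adjusted estimator never requires materialising C: with one coefficient vector β_i per dimension of λ, each adjusted gradient sample is simply φ_i(ϵ) − ϵ⊤β_i. A minimal sketch, assuming the β coefficients have already been fitted (e.g. as in Section 5.2.1 below):

```python
import jax.numpy as jnp

def zvcv_adjusted_grad(phis, eps_batch, betas):
    # phis: (L, dim_lambda) gradient samples varphi(eps_[l]; lambda);
    # eps_batch: (L, dim_z) base-distribution samples;
    # betas: (dim_lambda, dim_z), one coefficient vector per dimension of lambda.
    adjustments = -eps_batch @ betas.T        # row l holds C(eps_[l]) beta from (18)
    return (phis + adjustments).mean(axis=0)  # control-variate-adjusted estimator (10)
```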
The simplicity of ZVCV comes with the drawback that it is often not as strongly correlated with the integrand as approximation-based control variates. This makes the choice of β crucial.

## 5.2.1 Exact Least Squares

As discussed in Section 4.2, the optimal β can be selected by solving (13), the solution of which is given in (14). We can further exploit the block-diagonal structure of (18) and decompose (13) into a series of linear regressions, one for each dimension of λ. In other words, we can solve for β one dimension of λ at a time:

$$\begin{bmatrix}\alpha_{i}^{*}\\ \beta_{i}^{*}\end{bmatrix}=-(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\varphi_{i},\quad i=1,\ldots,\dim_{\lambda},\tag{19}$$

where i denotes the dimension of λ to which $\alpha_i^*$, $\beta_i^*$ and $\varphi_i$ correspond, e.g. $\varphi_i=[\varphi_i(\epsilon_{[1]}),\ldots,\varphi_i(\epsilon_{[L]})]^{\top}$ is the subset of φ in (14) that corresponds to the $i$th dimension of λ, and

$$\mathbf{X}={\begin{bmatrix}1&-\epsilon_{[1]}^{\top}\\ \vdots&\vdots\\ 1&-\epsilon_{[L]}^{\top}\end{bmatrix}}.$$

Note that the inversion of X⊤X only needs to be performed once and can be reused across the different φi, thereby scaling well to models with high-dimensional λ. The control variate for φi can then be computed as −ϵ⊤βi*. We also propose using the ϵ[1], . . . , ϵ[L] that have already been generated in (10) to compute β*, as the resulting control variate tends to have a lower mean squared error.

## 5.2.2 Least Squares With Gradient Descent

There are several challenges in applying ZVCV. For example, X⊤X in (19) must be invertible. This is often not the case for models with large dimz, as we usually keep L low and thus X is rank-deficient. This can be addressed by adding a penalty term to (13) (i.e. shrinking β towards 0), and empirical evidence suggests that this is more effective than using a subset of the control variates (Geffner & Domke, 2018; South et al., 2022). However, penalised least squares remains prohibitively expensive to solve, as it still involves inverting a matrix of size dimz. Instead, we propose mimicking penalised least squares by minimising (13) with respect to α and β using gradient descent, as follows (a code sketch of this inner loop is given at the end of this subsection):

1. Initialise α at $-\frac{1}{L}\sum_{l=1}^{L}\varphi(\epsilon_{[l]})$ and β at the zero vector. Set γ(α,β) to a low value;
2. Take a descent step $(\alpha_{m+1},\beta_{m+1})\leftarrow(\alpha_{m},\beta_{m})-\gamma^{(\alpha,\beta)}\frac{1}{L}\sum_{l=1}^{L}\nabla_{\alpha,\beta}\|\varphi(\epsilon_{[l]})+\alpha_{m}+C(\epsilon_{[l]})\beta_{m}\|^{2}$;
3. Repeat Step 2 a few times.

See Algorithm 2 for a complete description. The combination of learning rate and number of iterations is analogous to the penalty in penalised least squares: a lower number of iterations and learning rate γ(α,β) result in a near-zero β, corresponding to a stronger penalty (more shrinkage of β towards 0). This procedure is also similar in spirit to Si et al. (2022).
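A minimal JAX sketch of the inner gradient descent above, operating directly on the per-dimension coefficients so that the block-diagonal C of (18) is never formed. The learning rate and step count correspond to the hyperparameters discussed in Section 6; the array shapes are assumptions for illustration.

```python
import jax
import jax.numpy as jnp

def zvcv_gd_fit(phis, eps_batch, lr=1e-3, num_steps=4):
    # phis: (L, dim_lambda); eps_batch: (L, dim_z).
    L, dim_lam = phis.shape
    dim_z = eps_batch.shape[1]
    alpha = -phis.mean(axis=0)            # Step 1: initialise alpha
    betas = jnp.zeros((dim_lam, dim_z))   # and beta at the zero vector

    def loss(params):
        alpha, betas = params
        # Residual rows: varphi(eps_[l]) + alpha + C(eps_[l]) beta,
        # where C(eps) beta reduces to -eps @ betas.T per dimension.
        resid = phis + alpha - eps_batch @ betas.T
        return jnp.mean(jnp.sum(resid ** 2, axis=1))

    params = (alpha, betas)
    for _ in range(num_steps):            # Steps 2-3: a few descent steps
        grads = jax.grad(loss)(params)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params                         # (alpha*, beta*)
```

A few small steps from β = 0 with a low learning rate mimic a penalised fit, shrinking β towards zero exactly as described above.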
## 6 Experiments

In these experiments, we assess the efficacy of various control variate strategies.

Model and datasets We perform VI on the following model-dataset pairs: logistic regression on the *a1a* dataset, a hierarchical Poisson model on the *frisk* dataset, and a Bayesian neural network (BNN) on the *redwine* dataset. For the BNN model, we consider a full-batch gradient estimator trained on a subset of 100 data points of the *redwine* dataset, following the experimental setup in Geffner & Domke (2020) and Miller et al. (2017). We also consider a mini-batch estimator of batch size 32 trained on the full *redwine* dataset; see Appendix A for more details. With the exception of the mini-batch BNN, these models appeared in either Geffner & Domke (2020) or Miller et al. (2017).

Variational families Three classes of variational families are considered:

- **Mean-field Gaussian** The covariance of the Gaussian distribution N(µ, Σ) is parameterised by log-scale parameters, i.e. Σ = diag(exp(2 log σ1, . . . , 2 log σdimz)).
- **Rank-5 Gaussian** The covariance of the Gaussian distribution N(µ, Σ) is parameterised by a factor F ∈ R^{dimz×5} and diagonal components, i.e. Σ = FF⊤ + diag(exp(2 log σ1, . . . , 2 log σdimz)).
- **Real NVP** We use a real NVP normalizing flow (Dinh et al., 2017) with two coupling layers, composed in an alternating pattern. The flow has a standard multivariate Gaussian as its base distribution. The scale and translation networks share the same architecture of 8×16×16 hidden units with ReLU activations, followed by a fully connected layer. There is an additional tanh activation at the tail of the scale network to prevent the exponential term from blowing up.

Algorithm 2 ZVCV-GD

Require: Learning rates γ(λ), γ(α,β). Initialise λ.
for k = 0, 1, 2, . . . do
  Sample ϵ[1], . . . , ϵ[L] ∼ q0
  Compute φ(ϵ[l]; λk), ∀l = 1, . . . , L   ▷ see (6)
  Initialise α0 = −(1/L) Σ_{l=1}^{L} φ(ϵ[l]; λk) and β0 at the zero vector
  for m = 0, 1, 2, . . . do
    Take a descent step (α_{m+1}, β_{m+1}) ← (αm, βm) − γ(α,β) (1/L) Σ_{l=1}^{L} ∇_{α,β} ∥φ(ϵ[l]; λk) + αm + C(ϵ[l])βm∥²
  end for
  Set β* as the final value of β from the inner loop
  Compute h = (1/L) Σ_{l=1}^{L} [φ(ϵ[l]; λk) + C(ϵ[l])β*]   ▷ see (18)
  Take an ascent step λ_{k+1} ← λk + γ(λ) h
end for

We only present the results for the mean-field Gaussian and real NVP in the main text. The results for the rank-5 Gaussian are included in Appendix C, as they are largely similar to those obtained for the mean-field Gaussian.

Optimiser and learning rate We use an Adam optimiser and set its learning rate γ(λ) = 0.01, except for the BNN models with real NVP, where we set γ(λ) = 0.001. These learning rates were selected as the best options, in terms of convergence time to a respectable ELBO, from the set {0.1, 0.01, 0.001, 0.0001}.

Control variates The gradient estimator is equipped with the following control variate strategies:

- **NoCV** The vanilla gradient estimator without any control variates.
- **ZVCV-GD** ZVCV with β minimising the least squares objective via an inner gradient descent, as described in Algorithm 2 and Section 5.2.2. We set the learning rate γ(α,β) = 0.001 and iterated the inner gradient descent 4 times for each outer Adam step. These hyperparameter choices may not always yield the maximum variance reduction in every situation, but they represent a good compromise with computation time. Additionally, we have found that prolonging the inner gradient descent iterations does not necessarily lead to better variance reduction; for a more comprehensive discussion, please refer to Appendix F.
- **QuadCV** This is the original algorithm presented in Geffner & Domke (2020) when q is Gaussian (i.e. the mean and covariance of q are readily available). When q is real NVP, we use Algorithm 1 and 100 samples to estimate E[T(ϵ; λ)] and E[∇λ f̃(T(ϵ; λ))].
The learning rate γ(v) is set to γ(λ), following the original work.

Note that we only compare our method in detail with Geffner & Domke (2020), as it is a direct improvement of Miller et al. (2017).

Initialisations We repeated each experiment five times, each time using a different initialisation of λ, to assess the convergence performance of each method under varying initial conditions. For the mean-field Gaussian, the λ values were randomly sampled from a zero-mean Gaussian distribution with a scale parameter of 0.5. For real NVP, we initialised the λ values using a Glorot normal initialiser (Glorot & Bengio, 2010). These initialisers were chosen deliberately to ensure a wide range of initial values, covering both favourable and unfavourable starting points. Consequently, we expect to observe a diverse range of ELBO trajectories.

Evaluation settings We report the ELBO and the variance of the gradient estimators. The ELBO for evaluation purposes is always computed with the full dataset (even when using the mini-batched ELBO for optimisation) and 500 samples from q. We also present the variance ratio V[ĥ]/V[ĝ], where ĝ and ĥ are as defined in (7) and (10) respectively, every 50 iterations; a ratio less than 1 indicates a reduction in variance relative to the corresponding NoCV estimator with the same L. The variance of a gradient estimator is computed by repeatedly sampling 100 gradients (say, ĝ[1], . . . , ĝ[100]) from the estimator and using $\mathbb{V}[\hat{g}]\approx\frac{1}{100}\sum_{j}\|\hat{g}_{[j]}-(\frac{1}{100}\sum_{i}\hat{g}_{[i]})\|^{2}$. See Appendix B for more details on the calculation of the variance ratio.

## 6.1 ELBO Against Iteration Counts

The results in Figure 1 demonstrate that QuadCV generally outperforms NoCV, while ZVCV-GD provides only marginal improvement and can even converge to a suboptimal maximum in some cases (e.g. logistic regression with real NVP and L = 10). The performance gap between the estimators also decreases as the number of gradient samples L increases, as seen in the bottom rows of Figures 1a and 1b. It should be noted that QuadCV may perform poorly in the early stages of gradient descent (e.g. logistic regression with mean-field Gaussian and hierarchical Poisson with real NVP), as it takes time to learn the quadratic function f̃.

In general, there is also a high degree of variability in ELBO across different runs. This is especially noticeable in Figure 1a, owing to the substantial impact of the λ initialisation on optimisation convergence. For a more detailed examination of the individual trajectories under the various initialisations, please refer to Appendix E.

The variance ratios of the gradient estimators help explain the performance gap observed in Figure 1. As shown in Figure 2, QuadCV generally achieves a lower variance than ZVCV-GD, particularly for Gaussian q, where E[∇λ f̃] can be computed exactly. The estimator with ZVCV-GD and larger L tends to perform better in models with fewer control variates (i.e. low dimz), as the β is less susceptible to overfitting when solving the least squares problem with the gradient descent algorithm discussed in Section 5.2.2. On the contrary, in models with large dimz, such as BNNs, ZVCV-GD fails to reduce variance. A noteworthy characteristic of QuadCV is that variance reduction only becomes prominent after f̃ in (17) has been adequately trained, which typically occurs as the optimisation process nears convergence.
With a QuadCV-adjusted gradient estimator, it is possible to push the ELBO at convergence a few nats further, although significant time has to be spent to reach convergence at all. However, this raises an interesting question about the worthiness of such an effort, as a relatively minor improvement in ELBO may not necessarily translate into substantially improved downstream metrics; see Appendix E for a more in-depth discussion.

The comparison between L = 10 and L = 50 in Figure 1 suggests that variance reduction in the early stages can facilitate quicker convergence in terms of iteration counts (notice the leftward shift in the trajectories for L = 50). This observation implies that employing a larger number of gradient samples is an effective strategy for improving the convergence of stochastic VI, as long as the computation of additional gradient samples remains cost-effective in the overall optimisation process. It is important to note that increasing L from 10 to 50 immediately reduces the gradient estimator's variance five-fold (equivalent to a variance ratio of 0.2) from the very first iteration of the optimisation, in contrast to QuadCV. These results suggest that variance reduction is most beneficial during the initial stages of optimisation, when the goal is to expedite convergence towards a satisfactory ELBO rather than to attain the maximum achievable ELBO.

![10_image_0.png](10_image_0.png)

(a) Mean-field Gaussian with 10 (top) and 50 (bottom) gradient samples.

![10_image_1.png](10_image_1.png)

(b) Real NVP with 10 (top) and 50 (bottom) gradient samples.

Figure 1: ELBO is plotted against the number of gradient descent steps for different numbers of gradient samples L and two families of q. The bold lines represent the median of the ELBO values recorded at the same iteration across five repetitions. The shaded area illustrates the range of ELBO values across the five repetitions. The ELBO values are smoothed using an exponential moving average. The trajectories of ZVCV-GD and NoCV are nearly identical in both full-batch and mini-batch BNN when L = 10. A higher ELBO indicates better performance. See Figure 6 for plots where the bold lines represent the mean ELBO.

![11_image_0.png](11_image_0.png)

Figure 2: We present the variance ratio V[ĥ]/V[ĝ], where ĝ is NoCV and ĥ is either ZVCV-GD or QuadCV, at each iteration. We show only the median variance ratios recorded at the same iteration across five repetitions, omitting the individual variance ratios from each repetition to prevent clutter in the plots. The ratios for mean-field Gaussian and real NVP are shown in the top and bottom rows respectively. Note that NoCV (in red) is always 1 by definition. We see that ZVCV-GD (in blue) struggles to reduce variance in the BNN models. There is also a significant overlap in QuadCV between L = 10 (solid green) and L = 50 (dotted green). A lower ratio indicates better performance. See Figure 7 for plots where the bold lines represent the mean variance ratios.

## 6.2 ELBO Against Wall-Clock Time

To assess whether the computational expense of calculating control variates or additional gradient samples justifies the potential improvement in ELBO, we measure the ELBO against wall-clock time, as illustrated in Figure 3. We timed our VI implementation, written in JAX, on an Nvidia A100 80GB GPU.
It is worth noting that the recorded times may vary among computing platforms and implementations, given that our code was compiled with XLA (resulting in platform-dependent binaries) and ran without memory constraints. Our experiments reveal that NoCV generally converges to a respectable ELBO more swiftly. Furthermore, the performance gap between the estimators is even narrower when L = 50. An unexpected observation is that increasing L from 10 to 50 incurs negligible computational cost but produces meaningfully faster convergence, as is evident when comparing the top and bottom rows of Figures 3a and 3b. It is important to note that the computational cost of extra gradient samples may vary depending on the construction of φ, and increasing L might not always be a worthwhile strategy for achieving faster convergence (see, for example, the BNN experiments in Figure 4b of Appendix C).

QuadCV does succeed in increasing the maximum achievable ELBO in certain scenarios, albeit at the expense of longer convergence times. For instance, QuadCV can improve the ELBO by approximately 0.7 nats and 6 nats in hierarchical Poisson and full-batch BNN, respectively, when using a mean-field Gaussian q at L = 10. However, this comes at a cost of roughly 50% to 100% more runtime compared to NoCV. Given finite computational resources and the absence of a universal guarantee that a slight ELBO increase will substantially enhance downstream metrics (as discussed in, for example, Yao et al., 2018; 2019; Foong et al., 2020; Masegosa, 2020; Deshpande et al., 2022), it is left to practitioners to determine whether implementing control variates is a worthwhile endeavour.

![12_image_0.png](12_image_0.png)

(a) Mean-field Gaussian with 10 (top) and 50 (bottom) gradient samples.

![12_image_1.png](12_image_1.png)

(b) Real NVP with 10 (top) and 50 (bottom) gradient samples.

Figure 3: ELBO is plotted against wall-clock time for different numbers of gradient samples L and two families of q. The bold lines represent the median of the ELBO values recorded at the same iteration across five repetitions. The shaded area illustrates the range of ELBO values across the five repetitions. The ELBO values are smoothed using an exponential moving average. A higher ELBO indicates better performance. See Figure 8 for plots where the bold lines represent the mean ELBO.

## 7 Conclusion

In our study of the pathwise gradient estimator in VI, we reviewed the existing state-of-the-art control variates for reducing gradient variance, namely QuadCV from Geffner & Domke (2020). We identified a gap in the literature on variance reduction for pathwise gradient estimators in stochastic VI, arising in the setting where the variational distribution has intractable mean and covariance, which renders the state-of-the-art represented by Geffner & Domke (2020) not directly applicable. To address this gap, we proposed using ZVCV, which makes no such assumptions on the variational distribution. However, our empirical results showed that neither the ZVCV-adjusted nor the QuadCV-adjusted estimator provided improvement substantial enough, against our evaluation criteria, to justify its implementation. Instead, we found that increasing the number of gradient samples is a highly cost-effective method for improving convergence time.

Taking a step back, it is worth discussing the fundamental value of performing variance reduction for pathwise gradient estimators in stochastic VI.
For one, it is quite interesting that a dramatic reduction in gradient variance can fail to deliver any discernible effect on the ELBO. This is what was observed in the experiments: even when the variance ratio was substantially lower than 1, the control-variate-adjusted gradient estimator, compared to the vanilla gradient estimator, did not move the needle in a meaningful way on the ELBO optimisation objective. As such, it can be expected that downstream metrics, including the log pointwise predictive density or the predictive mean squared error, will also reveal the general futility of equipping the gradient estimator with a control variate. These findings point to a negative phenomenon for pathwise gradients in stochastic VI: reducing the gradient variance is insufficient for improving downstream performance.

In future work, we hope to explore ZVCV-adjusted gradient estimators in generative models, where they can truly shine. Namely, ZVCV is particularly powerful when the distribution of interest is difficult to sample from. One class of distributions that fits this description is energy-based models (Song & Kingma, 2021). Relatedly, there is a class of stochastic VI methods known as implicit VI. The variational distribution employed is still required to be reparametrizable, but we drop the requirement that the so-called pathwise score, ∇z log q(z; λ), be known, as it is, e.g., in normalizing flows. It was shown in Titsias & Ruiz (2019) that the pathwise score may itself be written as an expectation, ∇z log q(z; λ) = E_{q(ϵ|z;λ)}∇z log q(z|ϵ; λ), where q(ϵ|z; λ) is referred to as the reverse conditional. In Titsias & Ruiz (2019), the expectation with respect to the reverse conditional is estimated with MCMC samples. We could conceivably improve the efficiency by employing ZVCV here.

## References

Roland Assaraf and Michel Caffarel. Zero-Variance Principle for Monte Carlo Algorithms. *Physical Review Letters*, 83(23):4682–4685, December 1999. doi: 10.1103/PhysRevLett.83.4682.

D. V. Belomestny, L. S. Iosipoi, and N. K. Zhivotovskiy. Variance Reduction in Monte Carlo Estimators via Empirical Variance Minimization. *Doklady Mathematics*, 98(2):494–497, September 2018. ISSN 1531-8362. doi: 10.1134/S1064562418060261.

Sameer Deshpande, Soumya Ghosh, Tin D. Nguyen, and Tamara Broderick. Are you using test log-likelihood correctly? In *I Can't Believe It's Not Better Workshop: Understanding Deep Learning Through Empirical Falsification*, December 2022.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In *International Conference on Learning Representations*, 2017.

Andrew Foong, David Burt, Yingzhen Li, and Richard Turner. On the Expressiveness of Approximate Inference in Bayesian Neural Networks. In *Advances in Neural Information Processing Systems*, volume 33, pp. 15897–15908. Curran Associates, Inc., 2020.

Tomas Geffner and Justin Domke. Using Large Ensembles of Control Variates for Variational Inference. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Tomas Geffner and Justin Domke. Approximation Based Variance Reduction for Reparameterization Gradients. In *Advances in Neural Information Processing Systems*, volume 33, pp. 2397–2407. Curran Associates, Inc., 2020.

Andrew Gelman, Jeffrey Fagan, and Alex Kiss. An Analysis of the New York City Police Department's "Stop-and-Frisk" Policy in the Context of Claims of Racial Bias. *Journal of the American Statistical Association*, 102(479):813–823, September 2007. ISSN 0162-1459. doi: 10.1198/016214506000001040.
Andrew Gelman, Hal S. Stern, John B. Carlin, David B. Dunson, Aki Vehtari, and Donald B. Rubin. *Bayesian Data Analysis*. CRC Press, third edition, 2013.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, pp. 249–256. JMLR Workshop and Conference Proceedings, March 2010.

Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic Variational Inference. *Journal of Machine Learning Research*, 14(40):1303–1347, 2013. ISSN 1533-7928.

Geng Ji, Debora Sujono, and Erik B. Sudderth. Marginalized Stochastic Natural Gradients for Black-Box Variational Inference. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 4870–4881. PMLR, July 2021.

Andres Masegosa. Learning under Model Misspecification: Applications to Variational and Ensemble methods. In *Advances in Neural Information Processing Systems*, volume 33, pp. 5479–5491. Curran Associates, Inc., 2020.

Andrew Miller, Nick Foti, Alexander D'Amour, and Ryan P. Adams. Reducing Reparameterization Gradient Variance. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017.

Antonietta Mira, Reza Solgi, and Daniele Imparato. Zero variance Markov chain Monte Carlo for Bayesian estimators. *Statistics and Computing*, 23(5):653–662, September 2013. ISSN 1573-1375. doi: 10.1007/s11222-012-9344-6.

Chris J. Oates, Theodore Papamarkou, and Mark Girolami. The Controlled Thermodynamic Integral for Bayesian Model Evidence Evaluation. *Journal of the American Statistical Association*, 111(514):634–645, April 2016. ISSN 0162-1459. doi: 10.1080/01621459.2015.1021006.

Chris J. Oates, Mark Girolami, and Nicolas Chopin. Control functionals for Monte Carlo integration. *Journal of the Royal Statistical Society. Series B (Statistical Methodology)*, 79(3):695–718, 2017. ISSN 1369-7412.

Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black Box Variational Inference. In *Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics*, pp. 814–822. PMLR, April 2014.

Shijing Si, Chris J. Oates, Andrew B. Duncan, Lawrence Carin, and François-Xavier Briol. Scalable Control Variates for Monte Carlo Methods via Stochastic Optimization. In Alexander Keller (ed.), *Monte Carlo and Quasi-Monte Carlo Methods*, Springer Proceedings in Mathematics & Statistics, pp. 205–221, Cham, 2022. Springer International Publishing. ISBN 978-3-030-98319-2. doi: 10.1007/978-3-030-98319-2_10.

Yang Song and Diederik P. Kingma. How to Train Your Energy-Based Models. arXiv preprint, February 2021.

L. F. South, C. J. Oates, A. Mira, and C. Drovandi. Regularized Zero-Variance Control Variates. *Bayesian Analysis*, advance publication, pp. 1–24, January 2022. ISSN 1936-0975, 1931-6690. doi: 10.1214/22-BA1328.

Michalis K. Titsias and Francisco Ruiz. Unbiased Implicit Variational Inference. In *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, pp. 167–176. PMLR, April 2019.

Jiayu Yao, Weiwei Pan, Soumya Ghosh, and Finale Doshi-Velez. Quality of Uncertainty Quantification for Bayesian Neural Network Inference. arXiv preprint, June 2019.

Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Yes, but Did It Work?: Evaluating Variational Inference. In *Proceedings of the 35th International Conference on Machine Learning*, pp. 5581–5590. PMLR, July 2018.
## A Models And Datasets

Logistic regression with the *a1a* dataset We extracted the a1a dataset from the repository hosting Geffner & Domke (2020). We used the full dataset $\{x_i, y_i\}_{i=1}^{1605}$, with 90% of the dataset used for training. The response $y_i$ is binary and is modelled as

$$w_{0},\mathbf{w}\sim{\mathcal{N}}(0,10^{2}),$$
$$p(y_{i}|\mathbf{x}_{i},z)=\text{Bernoulli}\left(\frac{1}{1+\exp(-w_{0}-\mathbf{w}^{\top}\mathbf{x}_{i})}\right),$$

where z = {w0, w} and dimz = 120. The training and test sets contain 1440 and 165 points, respectively.

Hierarchical Poisson regression with the *frisk* dataset This example comes from Gelman et al. (2007). We only used a subset of the data (weapon-related crimes, precincts with a 10%–40% black proportion), as in Miller et al. (2017) and Geffner & Domke (2020). The response $y_{ep}$ denotes the number of frisk events due to weapons crimes within ethnicity group e in precinct p over a 15-month period in New York City:

$$\mu\sim\mathcal{N}(0,10^2)$$
$$\log\sigma_\alpha,\log\sigma_\beta\sim\mathcal{N}(0,10^2)$$
$$\alpha_e\sim\mathcal{N}(0,\sigma_\alpha^2)$$
$$\beta_p\sim\mathcal{N}(0,\sigma_\beta^2)$$
$$\log\lambda_{ep}=\mu+\alpha_e+\beta_p+\log N_{ep}$$
$$p(y_{ep}|z)=\mathrm{Poisson}(\lambda_{ep}),$$

where z = {α1, α2, β1, . . . , β32, µ, log σα, log σβ} and dimz = 37. $N_{ep}$ is the (scaled) total number of arrests of ethnicity group e in precinct p over the same period of time. We do not split out a test set due to the small size of the dataset (96 points in total).

Bayesian neural network with the *redwine* dataset We push a vector input $x_i$ through a 50-unit hidden layer with ReLU activations to predict wine quality. The response $y_i$ is an integer from 1 to 10 (inclusive) measuring the score of a red wine. We place a uniform improper prior on the log-variance of the weights and error (see Section 5.7 of Gelman et al., 2013, for a discussion of the prior choice):

$$p(\log\alpha^{2})\propto1,\quad\text{equivalent to }p(\alpha)\propto\alpha^{-1}$$
$$p(\log\tau^{2})\propto1,\quad\text{equivalent to }p(\tau)\propto\tau^{-1}$$
$$w_{i}\sim\mathcal{N}(0,\alpha^{2}),\quad i=1,\ldots,651$$
$$y_{i}|x_{i},w,\tau\sim\mathcal{N}(\phi(x_{i},w),\tau^{2}),$$

where ϕ is a multi-layer perceptron. Here, z = {log α², log τ², w} and dimz = 653. For full-batch gradient descent, we use two mutually exclusive subsets of 100 data points as the train and test sets, as in Miller et al. (2017) and Geffner & Domke (2020). For mini-batch gradient descent, we use 90% of the full dataset for training and the rest for testing (the train and test sets contain 1431 and 168 points, respectively).

## B Computation Of Variance Ratio

The variance ratio V[ĥ]/V[ĝ] was computed with the following steps:

1. Collect 100 samples of ĝ, resulting in $\{\hat{g}_{[j]}\}_{j=1}^{100}$;
2. For each ĝ[j], compute its corresponding control-variate-adjusted gradient estimate ĥ from (10) to collect $\{\hat{h}_{[j]}\}_{j=1}^{100}$;
3. Estimate $\mathbb{V}[\hat{g}]\approx\frac{1}{100}\sum_{j}\|\hat{g}_{[j]}-(\frac{1}{100}\sum_{i}\hat{g}_{[i]})\|^{2}$. Repeat the same step for V[ĥ];
4. Calculate the ratio V[ĥ]/V[ĝ].

This ratio is designed to evaluate the effectiveness of control variates in reducing variance relative to a corresponding gradient estimator without control variates. Therefore, in our work, the ratio is always computed with a pair of ĝ and ĥ using the same number of samples.
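A sketch of the procedure above, where `sample_g` and `sample_h` are hypothetical callables returning one draw of ĝ and of its control-variate-adjusted counterpart ĥ (at the same L), keyed so that the draws are paired:

```python
import jax
import jax.numpy as jnp

def empirical_variance(samples):
    # Step 3: (1/100) sum_j ||g_[j] - mean(g)||^2 over stacked samples.
    return jnp.mean(jnp.sum((samples - samples.mean(axis=0)) ** 2, axis=1))

def variance_ratio(key, sample_g, sample_h, num_reps=100):
    keys = jax.random.split(key, num_reps)
    g = jnp.stack([sample_g(k) for k in keys])   # Step 1
    h = jnp.stack([sample_h(k) for k in keys])   # Step 2: paired h-draws
    return empirical_variance(h) / empirical_variance(g)  # Step 4
```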
## C Results From Rank-5 Gaussian

The insights derived from Figures 4 and 5 below are similar to those obtained from Figures 1, 3 and 2. In most cases, the cost of evaluating the control variates outweighs the improvement in ELBO achieved through variance reduction in the gradient estimator. We observe only marginal gains in ELBO, despite the estimators with control variates taking longer to converge.

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

(a) ELBO versus gradient descent steps.

(b) ELBO versus wall-clock time.

Figure 4: ELBO is plotted against gradient descent steps and wall-clock time for varying numbers of gradient samples L using the rank-5 Gaussian. The bold lines represent the median of the ELBO values recorded at the same iteration across five repetitions. The ELBO values have been smoothed using an exponential moving average. A higher ELBO indicates better performance. See Figure 9 for plots where the bold lines represent the mean ELBO.

![18_image_0.png](18_image_0.png)

(a) Variance ratio

Figure 5: We present the variance ratio V[ĥ]/V[ĝ] for the rank-5 Gaussian, where ĝ is NoCV and ĥ is either ZVCV-GD or QuadCV, at each iteration. We show only the median variance ratios recorded at the same iteration across five repetitions, omitting the individual variance ratios from each repetition to prevent clutter in the plots. Note that NoCV (in red) is always 1 by definition. We see that ZVCV-GD (in blue) struggles to reduce variance in the BNN models. There is also some overlap between L = 10 (solid green) and L = 50 (dotted green). A lower ratio indicates better performance. See Figure 10 for plots where the bold lines represent the mean variance ratios.

## D Mean ELBO Trajectories And Variance Ratio

We have recreated the figures in Section 6 and Appendix C, with the exception that the bold lines now represent the means of the ELBO or variance ratios, as opposed to their medians. Using means provides a more transparent depiction of the robustness of each method, although the mean can be substantially influenced by the repetition that starts farthest from the optimal λ. Ideally, individual trajectories should be plotted separately (as in Appendix E), but this is not feasible due to space limitations. Nonetheless, the findings of this study hold whether one interprets the mean or the median of the evaluation statistics.

![19_image_0.png](19_image_0.png)

![19_image_1.png](19_image_1.png)

(a) Mean-field Gaussian with 10 (top) and 50 (bottom) gradient samples.

(b) Real NVP with 10 (top) and 50 (bottom) gradient samples.

Figure 6: ELBO is plotted against the number of gradient descent steps for different numbers of gradient samples L and two families of q. The bold lines represent the mean of the ELBO values recorded at the same iteration across five repetitions. The shaded area illustrates the range of ELBO values across the five repetitions. The ELBO values are smoothed using an exponential moving average. The trajectories of ZVCV-GD and NoCV are nearly identical in both full-batch and mini-batch BNN when L = 10. A higher ELBO indicates better performance. See Figure 1 for plots where the bold lines represent the median ELBO.

![20_image_0.png](20_image_0.png)

Figure 7: We present the variance ratio V[ĥ]/V[ĝ], where ĝ is NoCV and ĥ is either ZVCV-GD or QuadCV, at each iteration. We show only the mean variance ratios recorded at the same iteration across five repetitions, omitting the individual variance ratios from each repetition to prevent clutter in the plots. The ratios for mean-field Gaussian and real NVP are shown in the top and bottom rows respectively. Note that NoCV (in red) is always 1 by definition.
We see that ZVCV-GD (in blue) struggles to reduce variance in the BNN models. There is also a significant overlap in QuadCV between L = 10 (solid green) and L = 50 (dotted green). A lower ratio indicates better performance. See Figure 2 for plots where the bold lines represent the median variance ratios.

![21_image_0.png](21_image_0.png)

(a) Mean-field Gaussian with 10 (top) and 50 (bottom) gradient samples.

![21_image_1.png](21_image_1.png)

(b) Real NVP with 10 (top) and 50 (bottom) gradient samples.

Figure 8: ELBO is plotted against wall-clock time for different numbers of gradient samples L and two families of q. The bold lines represent the mean of the ELBO values recorded at the same iteration across five repetitions. The shaded area illustrates the range of ELBO values across the five repetitions. The ELBO values are smoothed using an exponential moving average. A higher ELBO indicates better performance. See Figure 3 for plots where the bold lines represent the median ELBO.

![22_image_0.png](22_image_0.png)

![22_image_1.png](22_image_1.png)

![22_image_2.png](22_image_2.png)

(a) ELBO versus gradient descent steps.

(b) ELBO versus wall-clock time.

Figure 9: ELBO is plotted against gradient descent steps and wall-clock time for varying numbers of gradient samples L using the rank-5 Gaussian. The bold lines represent the mean of the ELBO values recorded at the same iteration across five repetitions. The ELBO values have been smoothed using an exponential moving average. A higher ELBO indicates better performance. See Figure 4 for plots where the bold lines represent the median ELBO.

![23_image_0.png](23_image_0.png)

(a) Variance ratio

Figure 10: We present the variance ratio V[ĥ]/V[ĝ] for the rank-5 Gaussian, where ĝ is NoCV and ĥ is either ZVCV-GD or QuadCV, at each iteration. We show only the mean variance ratios recorded at the same iteration across five repetitions, omitting the individual variance ratios from each repetition to prevent clutter in the plots. Note that NoCV (in red) is always 1 by definition. We see that ZVCV-GD (in blue) struggles to reduce variance in the BNN models. There is also some overlap between L = 10 (solid green) and L = 50 (dotted green). A lower ratio indicates better performance. See Figure 5 for plots where the bold lines represent the median variance ratios.

## E Individual Runs Of Full-Batch BNN With Mean-Field Gaussian

We zoom in on a particular model and variational family from the experiments in the main text. Our aim in this section is to examine the trajectory of each initialisation separately, to help visualise the impact of initialisation on convergence. Due to space limitations, we have only included trajectories from the full-batch BNN with a mean-field Gaussian. In addition to the ELBO reported in the main text, we also report a downstream metric, the log pointwise predictive density evaluated on a test set (test lppd), which is popular in the VI literature. Mathematically, the test lppd is defined as

$$\sum_{x\in\mathcal{D}_{\mathrm{test}}}\log\left(|\mathcal{Z}|^{-1}\sum_{z\in\mathcal{Z}}p(x|z)\right).$$

Here, $\mathcal{D}_{\mathrm{test}}$ represents a test set, $\mathcal{Z}$ is a set of samples drawn from q(z; λ), and $|\mathcal{Z}|$ indicates the cardinality of $\mathcal{Z}$. We set $|\mathcal{Z}| = 1000$ in our experiments. The test lppd is also referred to as the test log-likelihood, test log-predictive, or predictive log-likelihood in the literature (see, for example, Yao et al., 2019; Deshpande et al., 2022).
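The test lppd above is computed in log space for numerical stability. A minimal sketch, where `log_lik(x, z)` is a hypothetical stand-in for log p(x|z) and `z_samples` holds the set Z of draws from q(z; λ):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def test_lppd(x_test, z_samples, log_lik):
    # For each test point x: log( |Z|^{-1} sum_z p(x|z) )
    # = logsumexp_z log p(x|z) - log |Z|, then summed over the test set.
    def lppd_one(x):
        log_p = jax.vmap(lambda z: log_lik(x, z))(z_samples)
        return logsumexp(log_p) - jnp.log(z_samples.shape[0])
    return jnp.sum(jax.vmap(lppd_one)(x_test))
```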
Figure 11a clearly shows that the trajectories vary substantially with different initialisations. This is consistent with the high variability of the ELBO trajectories in Figure 1. In all cases, increasing L, the number of gradient samples, effectively reduces the variance of the gradient estimator from the outset of the optimisation process. This stands in contrast to QuadCV, which only becomes effective after the quadratic approximation f̃ in (17) has been adequately trained (Figure 11b). Consequently, QuadCV performs poorly in the early and middle stages of optimisation (as seen in Repetitions 2 and 3 of Figure 11a).

Prior research on variance reduction for pathwise gradient estimators (e.g. Miller et al., 2017; Geffner & Domke, 2018; 2020) often aims to push the boundaries of the attainable ELBO. Achieving this typically requires longer training periods. However, we are of the opinion that the additional ELBO gained through this effort does not warrant the extra computational cost incurred by implementing control variates. This is particularly relevant given that the improvements in downstream metrics, such as test lppd, are marginal when compared to the improvements achieved in the earlier stages of optimisation (note the y-axis scale in Figures 12a and 12b). For instance, in Repetition 1, there is only a 3-nat improvement in test lppd (over a test set of size 100), while substantial improvements are observed in the earlier stages, often on the scale of hundreds of nats. These 3 nats come at a cost of over 50% additional computation time compared to NoCV (as indicated in Figure 3a). Furthermore, it is worth noting that an improvement in ELBO does not invariably guarantee a substantial improvement in downstream statistics, as evidenced in previous works such as Yao et al. (2018; 2019); Foong et al. (2020); Masegosa (2020); Deshpande et al. (2022).

![25_image_0.png](25_image_0.png)

(a) ELBO trajectories. This is a zoomed-out version of the last column of Figure 1a. Higher values are preferred.

![25_image_1.png](25_image_1.png)

(b) Variance ratios. A reading of 1 indicates no variance reduction. Lower values are preferable.

Figure 11: The trajectories of the ELBO and variance ratio for the full-batch BNN with mean-field Gaussian are depicted over the course of the iterations, with each of the five repetitions presented individually. By definition, the variance ratio of NoCV (red) is 1. Notably, there is a substantial overlap between NoCV (in red) and ZVCV-GD (in blue). In some cases, the distinctions between all three methods are hardly discernible. However, there is a relatively noticeable difference between L = 10 and L = 50.

![26_image_0.png](26_image_0.png)

(a) Test lppd trajectories.

![26_image_1.png](26_image_1.png)

(b) Test lppd trajectories, zooming in on lppd ∈ (−145, −135).

Figure 12: The trajectories of test lppd for the full-batch BNN with mean-field Gaussian are depicted over the course of the iterations, with each of the five repetitions presented individually. Notably, there is a substantial overlap between NoCV (in red) and ZVCV-GD (in blue). In some cases, the distinctions between all three methods are hardly discernible. However, there is a relatively noticeable difference between L = 10 and L = 50. Higher values are preferred.
## F Comparison Of ZVCV-GD With Different Hyperparameters

We conducted experiments with ZVCV-GD exploring various hyperparameter settings, running with both first- and second-order polynomials (Figure 13), and testing different numbers of steps in the inner gradient descent loop (Figure 14). We focus on the hierarchical Poisson model using a mean-field Gaussian and L = 10. We repeated the experiment five times, each time with a different initialisation. The red trajectories in Figures 13 and 14 correspond to the default settings of ZVCV-GD, as specified in Section 6.

Figure 13b reveals that second-order ZVCV-GD did not effectively reduce variance in the gradient estimator; instead, it introduced additional noise into the estimator. This detrimental impact is also evident in the ELBO trajectories, as shown in Figure 13a. In light of these findings, we conclude that the simpler first-order ZVCV-GD is preferable to the second-order variant.

![27_image_0.png](27_image_0.png)

(a) ELBO trajectories. Higher values are preferable.

![27_image_1.png](27_image_1.png)

(b) Variance ratios. A reading of 1 indicates no variance reduction. Lower values are preferable.

Figure 13: ELBO trajectories and variance ratios for the hierarchical Poisson model using a mean-field Gaussian and ZVCV-GD with L = 10, comparing first- and second-order ZVCV-GD, both with 4 inner GD steps. The experiment was repeated five times, each time with a different initialisation.

In Figure 14, we present the ELBO trajectories and variance ratios obtained by running the inner gradient descent (GD) of ZVCV-GD with three different settings: 4 steps, 20 steps, and 'until convergence'. Here, 'convergence' is defined as the point at which the residual of the inner least squares problem in (13) no longer decreases substantially. We observe that running the inner GD until convergence does not necessarily yield the greatest variance reduction, as illustrated in Figure 14b. This phenomenon can be attributed to overfitting the linear regression in (13), where the number of rows in the design matrix is considerably smaller than the number of columns. On the other hand, iterating the inner GD 20 times achieves a more substantial variance reduction than the default 4 steps. However, it is worth highlighting that there is no discernible impact on the ELBO trajectories when varying the number of GD steps, as demonstrated in Figure 14a. The optimal number of steps is not always evident without experimentation. Hence, we typically opt for 4 steps as a balance between computational efficiency and the risk of over-optimising the inner GD process.

![28_image_0.png](28_image_0.png)

(a) ELBO trajectories. Higher values are preferable.

![28_image_1.png](28_image_1.png)

(b) Variance ratios. A reading of 1 indicates no variance reduction. Lower values are preferable.

Figure 14: ELBO trajectories and variance ratios for the hierarchical Poisson model using a mean-field Gaussian and (first-order) ZVCV-GD with L = 10, run with different numbers of steps in the inner gradient descent. The experiment was repeated five times, each time with a different initialisation. The ELBO trajectories for the different numbers of GD steps are practically indistinguishable. The erratic variance ratio readings occur during the early optimisation stages in the low-ELBO region, where gradient magnitudes are substantial.
Review 1:
Summary:
The paper proposes a new control variates method to reduce the variance of the pathwise (reparametrization) gradient in the context of variational inference. The new method relaxes the requirements that existing methods targeting the same problem impose on the variational distribution.

Strengths and Weaknesses:
Strengths:
- The paper contains a clear review of the related literature.
- It identifies the limitations of existing methods.
- It provides a fair discussion of the experimental results.

Weaknesses:
- One major concern is the experimental results. In the main results of Figure 1, the proposed ZVCV-GD does not achieve the optimal objective value in any case. This raises the question of its effectiveness.
- Based on this observation, the paper argues that "dramatic reduction in gradient variance can fail to deliver any discernible effect on the ELBO", "reducing the gradient variance is insufficient to improving downstream performance," and "we found that increasing the number of gradient samples is a highly cost-effective method for reducing variance." The reviewer has similar questions about whether it makes a big difference to reduce the variance of the reparameterization gradient, and whether variance reduction methods can be more efficient than taking multiple Monte-Carlo samples. Since practitioners often take only a single-sample estimate of the reparameterization gradient, this might hint at the negative results concluded by the paper. Unfortunately, the negative results may not serve as sufficient grounds to support acceptance.
- I do hold some concerns about the negative results. First, in Figure 1, Quadratic CV does achieve the best performance, and in Figure 2, Quadratic CV has the lowest ratio. This seems to suggest the variance reduction is useful? The authors may design a toy example (e.g. the one in [1]) where the analytic gradient and the reparametrization gradient are both computable. Such an example can help validate the negative conclusions in the paper.
- A minor comment on the paper's clarity: the review of existing work extends to page 6 and contains only weakly relevant information. For example, if the paper learns \beta by Eq 12, it might be distracting to review all three methods for learning \beta, at least before introducing the proposed method.

Additional questions:
- The C matrix in Sec. 4.2 is very high dimensional (dim(\lambda) x Jdim(\lambda)), with dim(\lambda) being the number of neural-network parameters. How scalable is solving Eq 12?
- The paper mentions, "A good example of such a variational distribution q is normalizing flow, where z is the result of pushing forward a base distribution q through a neural network parameterised by λ". A normalizing flow requires the transformation to be invertible, but a neural network is generally non-invertible. Maybe this refers to the semi-implicit distribution [2,3]?

[1] REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models, Tucker et al. 2017
[2] Semi-Implicit Variational Inference, Yin & Zhou, 2018
[3] Unbiased Implicit Variational Inference, Titsias & Ruiz, 2018

Requested Changes:
- Reflect the empirical advantages of the proposed method in the simulations.
- Improve the writing clarity and accuracy.

Broader Impact Concerns: N/A

==================================================

Review 2:
Summary:
This work proposes a new variance reduction technique for variational inference with the reparameterization trick.
It applies zero-variance control variates (ZVCV) constructed via Stein operators to reduce the variance of estimating the expectation. These control variates do not require knowledge of the analytical form of the mean and covariance of the variational distribution. However, the empirical results showed that the proposed gradient estimator did not provide substantial variance reduction. Instead, it is found that increasing the number of gradient samples is a more cost-effective method for reducing variance.

Strengths and Weaknesses:

## Strengths
* This paper is clearly written and has sufficient discussion of closely related work.
* The use of the Langevin Stein operator to avoid the difficulty of computing the mean and variance of normalizing-flow-type variational distributions is well motivated. This is a major weakness of the Geffner et al. (2020) approach and is correctly identified in this work.

## Weaknesses
* Only using first-order polynomials in the control variate construction feels rather limited. The fact that the control variate falls back to a linear form is also quite unsatisfying. This means the control variates may not be very effective when the gradient of the log-likelihood term is highly nonlinear.
* Although it can be argued that in large-data problems the posterior will become more Gaussian and the linear control variate can potentially be effective, this does not justify the sophisticated derivation needed to obtain such a CV. Could we apply standard regression-based linear control variates instead? I assume the results would be very similar.
* The way the coefficient before the control variates is estimated is also concerning. It requires an inner loop of gradient descent to solve an optimization problem, mainly because the exact solution is computationally expensive. However, the inner loop is only iterated 4 times, and it is unclear whether the coefficients obtained in this way will be close to the exact solution.
* The empirical results are negative. This is not to say that negative results are not valued. But the negative result can largely be predicted from the reasons listed above: the control variates are too simple.

Requested Changes:
Since my concern is more about the methodology itself, I would suggest the authors revisit their proposed approach and try to improve upon the points raised above.

Broader Impact Concerns: N/A

==================================================

Review 3:
Summary:
The authors (probably) aim to demonstrate the application of zero-variance control variates, initially proposed to reduce variance in pathwise (reparameterized) gradients, especially in scenarios where determining the variational parameters during training is challenging (e.g., normalizing flows). This paper provides comprehensive explanations and reviews of existing studies (from Section 1 to Section 5.1). In Sections 5.2 and 5.3, they introduce an approximation-based variance reduction method, which minimizes a penalized least-squares loss using gradient descent, based on a similar approach in Si et al. (2022). Empirical validation of the proposed method is conducted in Section 6 across three variational families: Mean-field Gaussian, Rank-5 Gaussian, and Real NVP.

Strengths and Weaknesses:
First and foremost, I would like to express my sincere respect for all the efforts the authors have invested in this paper.
Furthermore, it should be noted that the following review is explicitly written with an understanding that TMLR places importance on *accurate, convincing, and clear evidence*, as well as *capturing readers' interest*, over novelty and impact.

# Strengths
- The authors have focused on an intriguing problem: the reparameterized gradient estimator cannot be applied to variational inference scenarios where parameters cannot be traced, such as in the case of normalizing flows. This issue has the potential to become an interesting research topic.
- This paper provides a detailed review of existing research on pathwise gradient estimation from Section 1 to Section 5.1.

# Weaknesses

## Concerns about discussion based on accurate evidence
- Since this is a methodological paper, it is crucial to provide accurate and convincing empirical evidence to establish the validity of the contribution. However, I have concerns about the credibility of the experimental results. I raise several points of concern as follows.
- **Figure 1, overall results**: For each method, there appears to be a large discrepancy in the results obtained by repeating the experiment five times. For example, when examining the simplest experimental setting (the top-left figure; Hierarchical Poisson (L=10)), the convergence of the method without CV varies greatly (in some cases it converges at $8000$ iterations, while in others it has not converged even after $30,000$ iterations). This extreme divergence seems odd if the experiment is repeated for the same model and hyperparameter settings under a simple experimental setup. Additionally, in cases like *Mini-batch BNN (L=50)*, the ELBO indicated by the red line suddenly drops around iteration $15,000$. In my opinion, these behaviors are highly unusual, and it seems unlikely that they would occur in an appropriately controlled experimental environment. Similar irregularities can be observed sporadically in other experimental settings as well.
- **Figure 1, regarding the "median" ELBO among the 5 repetitions**: It is unclear whether this *median* refers to the median calculated from the numerical ELBO values recorded at the same iteration across the 5 repetitions, or to choosing the third-best of the 5 experimental runs. It appears to be the latter from Figure 1. This choice of adopting the *third-best* experimental results raises questions about the rationale behind such a selection. Without a valid explanation for why the *third-best* experimental results were chosen, the interpretability of the evaluation method seems to be compromised. Furthermore, this evaluation approach leads to cases where methods are unfairly evaluated. For instance, in the experimental results for *Logistic regression (L=10)* and similar cases, the proposed method appears to perform competitively with existing methods, yet the bold line is placed considerably lower. It is difficult for me to accept, as accurate and convincing evidence, the results of an experiment evaluated in a way that seems inappropriate. Considering that the discussion primarily revolves around the gradient variance due to i.i.d. Monte Carlo samples, it would be more appropriate to evaluate performance using the mean ± std. of the ELBO.
- **Figure 1, the top-right figure**: It appears that the experimental results for *No CV* are missing from the figure (red line).
## Concerns regarding the interest of TMLR readers
- To capture readers' interest, readability is one of the crucial points. However, in this paper, the following elements have a negative impact on readability.
- **Positioning of this paper**: If this paper is positioned as a review paper on pathwise gradient variance reduction, then the detailed review of existing studies extending up to Section 5.1 would indeed contribute to researchers and students interested in the relevant field. However, if the algorithm proposed in Section 5.2.2 is to be considered an essential contribution, its explanation seems inadequate. In the current structure, the evaluation of this paper may vary depending on which perspective is taken. Therefore, the authors should first clarify the main claim they want to convey in this paper.
- **Discussion of related studies**: In this paper, it is assumed that in problem settings where the variational distribution during training cannot be obtained exactly, as in the case of Real NVP, the variance in gradient estimation negatively affects convergence and predictive performance. However, there seems to be a lack of discussion of related studies on this aspect, leaving the paper's position unclear. Is there any related work on this issue? Given that this aspect appears to be of significant interest to TMLR readers, providing adequate discussion of it would likely make the paper even stronger.
- **Citation deficiencies etc.**: There are inconsistencies throughout the paper in how references are presented and how abbreviations are used, which significantly impair the overall readability and raise concerns about detracting from the interests of the intended readers. For instance, there is a lack of reference information for [Belomestny et al. (2018)]. Furthermore, although abbreviations such as *CV* and *VI* are introduced, the paper later inconsistently reverts to using the expressions *control variates* and *variational inference*.

Requested Changes:
- It would be greatly appreciated if you could clarify the positioning of this paper. In other words, is this intended to be a review paper on pathwise gradient variance reduction methods, or is the main contribution centered around the methods proposed in Section 5.2.2? Please provide explicit clarification on this point.
- It would be advisable to check the implementation and environment of the experiments for any issues and to reconsider the evaluation methods. At least to me, there seems to be an unnatural discrepancy in the results of repeated experiments. If there are no issues, it would be beneficial to investigate why this phenomenon occurs and provide an explanation for it in Section 6. This aspect is closely related to one of the main acceptance criteria of TMLR: *a discussion based on accurate evidence.*
- Please enhance the overall quality of the paper's presentation. Mistakes in citations and inconsistencies in descriptions may reduce the reader's interest.

Broader Impact Concerns:
I believe that this work does not raise any ethical concerns because it is a methodological study focused on gradient estimation with variance reduction.

==================================================

Metareview:
Recommendation: Reject
Comment:
Unfortunately, all three reviewers unanimously voted for rejection of the paper, for the aforementioned reasons.
Reviewer CLe6 argued that the findings are not sufficiently significant: "The negative results are interesting, but they are relatively predictable given the control variates construction and the general use of reparametrization gradient." Reviewer wCDN had concerns about the experimental environment being inconsistent (e.g., by conducting experiments with both unrealistic and reasonable initialization). The positioning of the paper is still not clear to any of the reviewers, who found it ambiguous. In its current form, the paper is closer to a methodology paper proposing ZVCV-GD, while the authors claim it is a review paper on control-variate-adjusted pathwise gradients. For a review paper, the paper would be missing important connections with existing approaches and also with existing review papers: (1) there is no discussion of the motivation behind focusing on pathwise gradient variance reduction [Reviewer wCDN]; (2) it remains unclear how it differs from existing review papers on Monte Carlo gradient estimation [Reviewer CLe6]; (3) the paper does not discuss the relevance to other approaches (e.g., better Monte Carlo techniques) [Reviewer wCDN]; and (4) it should provide insights on why some of these methods work better than others [Reviewer hVtQ].

==================================================
# Stochastic Mirror Descent: Convergence Analysis And Adaptive Variants Via The Mirror Stochastic Polyak Stepsize

Ryan D'Orazio *ryan.dorazio@mila.quebec* Mila, Université de Montréal

Nicolas Loizou *nloizou@jhu.edu* Johns Hopkins University

Issam Laradji *issam.laradji@gmail.com* ServiceNow Research

Ioannis Mitliagkas *ioannis@iro.umontreal.ca* Mila, Université de Montréal; Canada CIFAR AI Chair

Reviewed on OpenReview: *https://openreview.net/forum?id=28bQiPWxHl*

## Abstract

We investigate the convergence of stochastic mirror descent (SMD) under interpolation in relatively smooth and smooth convex optimization. In relatively smooth convex optimization we provide new convergence guarantees for SMD with a constant stepsize. For smooth convex optimization we propose a new adaptive stepsize scheme - the mirror stochastic Polyak stepsize (mSPS). Notably, our convergence results in both settings do not make bounded gradient assumptions or bounded variance assumptions, and we show convergence to a neighborhood that vanishes under interpolation. Consequently, these results correspond to the first convergence guarantees under interpolation for the exponentiated gradient algorithm for fixed or adaptive stepsizes. mSPS generalizes the recently proposed stochastic Polyak stepsize (SPS) (Loizou et al., 2021) to mirror descent and remains both practical and efficient for modern machine learning applications while inheriting the benefits of mirror descent. We complement our results with experiments across various supervised learning tasks and different instances of SMD, demonstrating the effectiveness of mSPS.

## 1 Introduction

We consider the constrained stochastic optimization problem,

$$\min_{x\in\mathcal{X}} f(x) = \mathbb{E}_{\xi}\left[f_{\xi}(x)\right], \tag{1}$$

where $\mathcal{X} \subseteq \mathbb{R}^d$ is a non-empty closed convex set that is possibly unbounded, and ξ is a random vector supported on a set Ξ such that $\mathbb{E}_{\xi}[f_{\xi}(x)]$ is always well defined. We assume that it is possible to generate a sequence of independent and identically distributed (i.i.d.) realizations of ξ, and that $\mathbb{E}_{\xi}[\nabla f_{\xi}(x)] = \nabla f(x)$ for each $x \in \mathcal{X}$. We use $\mathcal{X}_* \subset \mathcal{X}$ to denote the set of minimizers $x_*$ of (1) and assume that $\mathcal{X}_*$ is not empty. A special case of interest is the finite-sum optimization problem where $\Xi = \{1, \cdots, n\}$ and $\mathbb{E}_{\xi}[f_{\xi}(x)] = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$. Finite-sum optimization problems are often used in machine learning tasks where the vector x denotes the model parameters, $f_i(x)$ represents the loss on the training point i, and the goal is to minimize the average loss f(x) across the training points while satisfying the problem constraints (expressed as $x \in \mathcal{X}$). A common iterative approach to solve (1) when $\mathcal{X} = \mathbb{R}^d$ is stochastic gradient descent (SGD) (Robbins & Monro, 1951; Gower et al., 2019), in which iterates are updated in the negative direction of a gradient computed from a single realization of ξ. When the problem is constrained, $\mathcal{X} \subset \mathbb{R}^d$, one may employ projected methods such as stochastic projected gradient descent (SPGD). However, the convergence guarantees of both SGD and SPGD depend on values measured by the Euclidean norm. If the Euclidean structure is not naturally suited to the problem, then SGD and SPGD can suffer a worse dependence on the dimension d. A powerful generalization of SGD and SPGD is stochastic mirror descent (SMD), permitting better convergence guarantees by matching the geometry of the problem (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003). For example, in some cases SMD can improve SGD's $\sqrt{d}$ dependence to $\sqrt{\log(d)}$ (Ben-Tal et al., 2001; Beck & Teboulle, 2003).
Furthermore, mirror descent leverages non-Euclidean projections, allowing for different choices of projections that are perhaps better suited to the constraint set. For example, in sequential games particular instances of mirror descent have been designed to allow for efficient projections on the strategy spaces of players (Hoda et al., 2010; Kroer et al., 2020). A classical analysis of mirror descent and first-order methods often relies on smoothness with respect to some norm $\|\cdot\|$. The norm $\|\cdot\|$ is often used in selecting the appropriate instance of mirror descent (Dekel et al., 2012; Bubeck, 2015). However, a recent trend is to study non-Euclidean methods like mirror descent under the more general assumption of relative smoothness (Birnbaum et al., 2011; Bauschke et al., 2017; Lu et al., 2018). Several applications of interest are not smooth but relatively smooth, *e.g.*, algorithmic game theory (Birnbaum et al., 2011); Poisson inverse problems (Bertero et al., 2009); and more (Lu et al., 2018). In contrast to deterministic methods, stochastic methods under relative smoothness have received less attention. We contribute to the literature on SMD with new constant-stepsize results under relative smoothness and new adaptive-stepsize results under smoothness, both of which use weak assumptions on the noise.

## 1.1 Main Contributions

The key contributions of this work are as follows:

- **Technical assumptions on the noise.** Unlike most of the SMD literature, all our convergence results, with relative smoothness or smoothness, and for fixed and adaptive stepsizes, do not make bounded gradient or bounded variance assumptions. Instead we use the *finite optimal objective difference*, introduced by Loizou et al. (2021), for our adaptive smooth setting and introduce a new constrained version for the relative smooth setting. More precisely, Loizou et al. (2021) assume

$$\sigma^{2}:=f(x_{*})-\mathbb{E}_{\xi}\left[f_{\xi}^{*}\right]<\infty, \tag{2}$$

where $f(x_*) = \min_{x\in\mathcal{X}} f(x)$ and $f_{\xi}^{*} := \inf_{x\in\mathbb{R}^d} f_{\xi}(x)$. In the finite-sum case, $\mathbb{E}\left[f_i^*\right] = \frac{1}{n}\sum_{i=1}^{n} f_i^*$. For our relative smooth results we introduce the *constrained finite optimal objective difference*, a refinement that depends on the constraint $\mathcal{X}$,

$$\sigma_{\mathcal{X}}^{2}:=f(x_{*})-\mathbb{E}_{\xi}\left[f_{\xi}^{*}(\mathcal{X})\right]<\infty, \tag{3}$$

where $f_{\xi}^{*}(\mathcal{X}) = \inf_{x\in\mathcal{X}} f_{\xi}(x)$, and in the finite-sum case $\mathbb{E}\left[f_i^*(\mathcal{X})\right] = \frac{1}{n}\sum_{i=1}^{n} \inf_{x\in\mathcal{X}} f_i(x)$. By definition, the constrained version is a weaker assumption than its unconstrained counterpart, since $\sigma_{\mathcal{X}}^{2} \leq \sigma^{2}$.

- **Novel adaptive SMD.** We propose the mirror stochastic Polyak stepsize (mSPS) as an adaptive stepsize for SMD. Contrary to most adaptive mirror descent methods for stochastic optimization, we do not use an online to batch reduction (Cesa-Bianchi et al., 2004; Littlestone, 1989). Hence, we avoid common assumptions like bounded constraints and provide efficient convergence results like linear convergence under strong convexity and smoothness.

- **Exact convergence with interpolation.** In modern machine learning, overparametrized models capable of driving the training error to zero have become increasingly important both in theory and in practice (Ma et al., 2018; Zhang et al., 2021). Under these conditions (see Definition 4) it has been shown that SGD enjoys favourable guarantees with exact convergence (Ma et al., 2018).
Our analysis with $\sigma^2$ and $\sigma_{\mathcal{X}}^2$ shows that SMD inherits similar guarantees with fast and exact convergence under interpolation. We are unaware of other similar results for SMD with adaptive stepsizes. Moreover, our results provide the *first* convergence guarantees under interpolation for the exponentiated gradient algorithm under both the relative smoothness and the classic smoothness settings.

- **Extensive numerical experiments for adaptive SMD.** We demonstrate the adaptive capability of our proposed adaptive stepsize across a wide variety of domains and mirror descent algorithms for both constrained and unconstrained problems.

## 2 Related Work

Stochastic Mirror Descent. SMD is often analyzed as a stochastic method for optimizing non-smooth Lipschitz continuous convex functions (Nemirovski et al., 2009; Bubeck, 2015; Beck, 2017). These results can be derived from online to batch reductions yielding a $O(1/\sqrt{T})$ convergence rate (Cesa-Bianchi et al., 2004; Duchi et al., 2010; Orabona, 2019). In the case of non-smooth and strongly convex functions, several works improve the results following from online regret bounds to a $O(1/T)$ convergence rate (Hazan & Kale, 2014; Ghadimi & Lan, 2012; Juditsky & Nesterov, 2010). Under smoothness, similar improvements can be made (Dekel et al., 2012; Bubeck, 2015). All of these results use bounded variance or bounded gradient assumptions.1 These assumptions can be difficult to verify, and may impose further restrictions. For example, one cannot in general assume a bounded gradient with strong convexity if $\mathcal{X}$ is unbounded; therefore it is common to assume $\mathcal{X}$ is compact. In relative smooth optimization Hanzely & Richtarik (2021) make an assumption similar to bounded variance. More recently, Dragomir et al. (2021) avoid making bounded variance or bounded gradient assumptions under relative smoothness but place larger restrictions on the class of problems and mirror descent methods. We make an in-depth comparison with these works in Section 5. We also add that there are several works related to randomized coordinate descent methods (Hanzely & Richtarik, 2021; Gao et al., 2020; Hendrikx et al., 2020), and with variance reduction (Hendrikx et al., 2020; Dragomir et al., 2021).

Interpolation in constrained optimization. Interpolation conditions have mostly been studied with SGD in unconstrained settings (Gower et al., 2019; Vaswani et al., 2019a) or with SMD and conditions that do not incorporate constraints (Hanzely & Richtarik, 2021). Consequently, these conditions can yield large or unbounded neighborhoods of convergence in constrained optimization. Xiao et al. (2022) address some of these shortcomings by introducing the variance-based weak growth condition to model interpolation under stochastic constrained optimization. However, the condition only holds under interpolation and requires the variance to be zero at the optimum. In comparison, our constraint-aware condition $\sigma_{\mathcal{X}}^2$ can hold without interpolation and does not require the variance to be zero at the optimum.

Adaptive stepsizes. Adaptive stepsizes for mirror descent have a long history. Accumulating past gradients or subgradients to set a stepsize, $\eta_t \propto 1/\sqrt{\sum_{s=1}^{t} \|g_s\|_*^2}$, can be traced back to online learning (Auer et al., 2002; Streeter & McMahan, 2010). Recently, similar coordinate-wise stepsizes such as ADAGRAD (McMahan & Streeter, 2010; Duchi et al., 2011) have been proposed. The convergence guarantees for these methods in convex optimization use online regret bounds, requiring sublinear regret.
Unfortunately, all mirror descent methods with the aforementioned stepsizes require a bounded constraint; when the problem is unconstrained, Orabona & Pál (2018) prove a Ω(T) worst-case lower bound for the regret.2 Furthermore, in the stochastic case, bounded gradient and variance assumptions are made when using the online to batch reduction (Duchi, 2018; Orabona, 2019). In contrast, our methods employ a completely different stepsize and we make a very weak assumption on the noise. Another line of related work includes adaptive stepsizes for mirror descent with non-smooth functional constraints (Bayandina, 2017; Bayandina et al., 2018; Stonyakin et al., 2019).

1 Lei & Tang (2018) derive results for non-smooth and strongly convex functions without bounded subgradients but assume a weak growth condition.
2 Convergence results may still be possible without online to batch reductions; for example, in the case of unconstrained SGD see Li & Orabona (2019).

Polyak stepsize. Our adaptive stepsizes are in the spirit of Polyak's stepsize - originally proposed for deterministic projected subgradient descent (Polyak, 1987). In the deterministic setting, Polyak's results have been successfully extended and used for solving weakly convex and smooth problems (Boyd et al., 2003; Davis et al., 2018; Hazan & Kakade, 2019). More recently, variations of the Polyak stepsize have been proposed for stochastic optimization (Loizou et al., 2021; Prazeres & Oberman, 2021; Berrada et al., 2020; Gower et al., 2021). The adaptive stepsizes proposed herein generalize the SPS stepsize proposed and analyzed by Loizou et al. (2021) (see Section 4.2) to the constrained case and to mirror descent.

## 3 Background

We denote vectors within the feasible set as $x \in \mathcal{X} \subseteq \mathbb{R}^d$, where $\mathbb{R}$ is the set of real numbers. We use the subscript to denote time; after t time steps the average of the iterates $x_1, \cdots, x_t$ is $\bar{x}_t = \frac{1}{t}\sum_{s=1}^{t} x_s$. With a slight abuse of notation we may also refer to the ith coordinate of x as $x_i$, $x = (x_1, \cdots, x_d)$. Whether the subscript refers to time or the coordinate is clear from context. We denote $\|\cdot\|_2$ as the Euclidean norm and $\|\cdot\|$ as an arbitrary norm with corresponding dual norm $\|x\|_* = \sup_y\{\langle x, y\rangle : \|y\| \leq 1\}$. For a differentiable function ψ, we define the difference between ψ(x) and its first-order approximation at y as the Bregman divergence $B_\psi(x; y)$.

Definition 1 (Bregman divergence). *Let $\psi : \mathcal{D} \to \mathbb{R}$ be differentiable on $\mathrm{int}\,\mathcal{D}$. Then the Bregman divergence with respect to ψ is $B_\psi : \mathcal{D} \times \mathrm{int}\,\mathcal{D} \to \mathbb{R}$, defined as*

$$B_{\psi}(x;y)=\psi(x)-\psi(y)-\langle\nabla\psi(y),x-y\rangle.$$

A differentiable function f is convex on a convex set $\mathcal{X}$ if $B_f(x; y) \geq 0$ for any $x, y \in \mathcal{X}$. Similarly, a function f is L-smooth with respect to a norm $\|\cdot\|$ if $B_f(x; y) \leq \frac{L}{2}\|x - y\|^2$, and is µ-strongly convex with respect to the norm $\|\cdot\|$ if $\frac{\mu}{2}\|x - y\|^2 \leq B_f(x; y)$. We will also refer to the generalizations of smoothness and strong convexity - relative smoothness and relative strong convexity - defined below.

Definition 2. *A function f is L-smooth relative to ψ on $\mathcal{X}$ if for all $(x, y) \in \mathcal{X} \times (\mathcal{X} \cap \mathrm{int}\,\mathcal{D})$ it holds that $B_f(x; y) \leq L B_\psi(x; y)$.*

Definition 3. *A function f is µ-strongly convex relative to ψ on $\mathcal{X}$ if for all $(x, y) \in \mathcal{X} \times (\mathcal{X} \cap \mathrm{int}\,\mathcal{D})$ it holds that $\mu B_\psi(x; y) \leq B_f(x; y)$.*
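To make Definition 1 concrete, the following minimal sketch (with illustrative inputs and our own function names) evaluates $B_\psi$ for the two mirror maps used most often below: the squared Euclidean norm, and the negative entropy, whose Bregman divergence on the simplex is the KL divergence.

```python
import numpy as np

def bregman(psi, grad_psi, x, y):
    """B_psi(x; y) = psi(x) - psi(y) - <grad psi(y), x - y>  (Definition 1)."""
    return psi(x) - psi(y) - grad_psi(y) @ (x - y)

# psi = 0.5 * ||.||_2^2 gives B_psi(x; y) = 0.5 * ||x - y||_2^2.
sq_norm = lambda x: 0.5 * x @ x
grad_sq_norm = lambda x: x

# psi = negative entropy gives, on the simplex, B_psi(x; y) = KL(x || y).
neg_entropy = lambda x: np.sum(x * np.log(x))
grad_neg_entropy = lambda x: np.log(x) + 1.0

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.25, 0.25, 0.5])
print(bregman(sq_norm, grad_sq_norm, x, y))          # = 0.5 * ||x - y||_2^2
print(bregman(neg_entropy, grad_neg_entropy, x, y))  # = sum_i x_i * log(x_i / y_i)
```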
## 3.1 Mirror Descent

To solve problem (1) we consider the general stochastic mirror descent update with a convex function ψ and domain $\mathcal{D}$,

$$x_{t+1}=\operatorname*{arg\,min}_{x\in{\mathcal{X}}}\left\langle\nabla f_{\xi_{t}}(x_{t}),x\right\rangle+{\frac{1}{\eta_{t}}}B_{\psi}(x;x_{t}), \tag{4}$$

where $\xi_t$ is an i.i.d. realization of ξ. In the non-smooth or deterministic setting, $\nabla f_{\xi_t}(x_t)$ may be replaced by a subgradient or the full gradient, respectively. To make the updates well defined, all we require is that $x_{t+1} \in \mathrm{int}\,\mathcal{D}$ in update (4); otherwise $B_\psi(\cdot\,; x_{t+1})$ will be undefined at the next step.

Assumption 1. *Let ψ be convex with domain $\mathcal{D}$, differentiable over $\mathrm{int}\,\mathcal{D}$, and $\mathcal{X} \subseteq \mathcal{D}$. For any g and any stepsize $\eta_t > 0$, $x_{t+1} = \arg\min_{x\in\mathcal{X}} \langle g, x\rangle + \frac{1}{\eta_t} B_\psi(x; x_t) \in \mathrm{int}\,\mathcal{D}$.*

For example, the following assumption by Orabona (2019) would be sufficient.

Assumption (Section 6.4 Orabona (2019)). *Let $\psi : \mathcal{D} \to \mathbb{R}$ be a strictly convex function such that $\mathcal{X} \subseteq \mathcal{D}$; we require either one of the following to hold: $\lim_{x\to\partial\mathcal{X}} \|\nabla\psi(x)\|_2 = +\infty$ or $\mathcal{X} \subseteq \mathrm{int}\,\mathcal{D}$.*3

The first requirement from Orabona (2019) amounts to assuming ψ is a Legendre function (i.e., essentially smooth and strictly convex), which implies that $x_{t+1} \in \mathrm{int}\,\mathcal{D}$ (Cesa-Bianchi & Lugosi, 2006). Otherwise, if the second condition holds then the update is also well defined. Furthermore, we note that other assumptions can be made to guarantee $x_{t+1} \in \mathrm{int}\,\mathcal{D}$; for more examples see Bauschke et al. (2003). Another common assumption is that ψ is strongly convex over $\mathcal{X}$, which will be important for our adaptive stepsize in Section 6; however, it is not needed for the constant stepsize results of Section 5. The following is a standard one-step mirror descent lemma and will be used often (Beck, 2017; Bubeck, 2015; Orabona, 2019; Duchi, 2018); this particular statement and proof are taken from Lemma 6.7 of Orabona (2019) and we include the full proof in the appendix for completeness. All other omitted proofs are deferred to the appendix.

Lemma 1. *Let $B_\psi$ be the Bregman divergence with respect to a convex function $\psi : \mathcal{D} \to \mathbb{R}$ and assume Assumption 1 holds. Let $x_{t+1} = \arg\min_{x\in\mathcal{X}} \langle g_t, x\rangle + \frac{1}{\eta_t} B_\psi(x; x_t)$. Then for any $x_* \in \mathcal{X}$*

$$B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle g_{t},x_{t}-x_{*}\rangle-B_{\psi}(x_{t+1};x_{t})+\eta_{t}\langle g_{t},x_{t}-x_{t+1}\rangle. \tag{5}$$

*Furthermore, if ψ is $\mu_\psi$-strongly convex over $\mathcal{X}$ then*

$$B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle g_{t},x_{t}-x_{*}\rangle+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\,\|g_{t}\|_{*}^{2}. \tag{6}$$

The SMD update (4) recovers both SGD and SPGD if ψ is taken to be $\frac{1}{2}\|\cdot\|_2^2$. Some other interesting examples include the case where $\psi(x) = \frac{1}{2}\|x\|_p^2$ for $1 < p \leq 2$ (Grove et al., 2001; Gentile, 2003). If we instead use $\psi(x) = \frac{1}{2}\langle x, Mx\rangle = \frac{1}{2}\|x\|_M^2$ for a positive definite matrix M, then we recover the scaled projected gradient algorithm, $x_{t+1} = \arg\min_{x\in\mathcal{X}} \left\|x_t - \eta_t M^{-1} g_t - x\right\|_M^2$ (Bertsekas & Tsitsiklis, 2003). Another common setup is when ψ is taken to be the negative entropy with the constraint set $\mathcal{X} = \Delta^d = \{x : x_i \geq 0, \sum_{i=1}^{d} x_i = 1\}$. In this case ψ is 1-strongly convex with respect to $\|\cdot\|_1$ and the update rule corresponds to the exponentiated gradient algorithm (Littlestone & Warmuth, 1994; Kivinen & Warmuth, 1997; Beck & Teboulle, 2003; Cesa-Bianchi & Lugosi, 2006).

3 $\partial\mathcal{X}$ denotes the boundary of $\mathcal{X}$.
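As a minimal sketch of these closed forms (the function names are ours), update (4) with the negative-entropy mirror map on the simplex reduces to a multiplicative update followed by normalisation, while the Euclidean mirror map with a box constraint reduces to a projected gradient step:

```python
import numpy as np

def eg_step(x, grad, eta):
    """Exponentiated gradient: update (4) with psi = negative entropy, X = simplex."""
    y = x * np.exp(-eta * grad)  # unnormalised multiplicative update
    return y / y.sum()           # normalisation = Bregman projection onto the simplex

def spgd_step(x, grad, eta, lo=0.0, hi=1.0):
    """SPGD: update (4) with psi = 0.5 * ||.||_2^2 and the box constraint [lo, hi]^d."""
    return np.clip(x - eta * grad, lo, hi)

x = np.full(4, 0.25)                 # uniform point on the simplex
g = np.array([1.0, -0.5, 0.0, 0.2])  # a stochastic gradient
print(eg_step(x, g, eta=0.1))        # stays on the simplex
print(spgd_step(x, g, eta=0.1))      # stays in the box
```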
## 3.2 Overparameterization, Interpolation, And Constrained Interpolation

Modern machine learning models are expressive and often over-parametrized, i.e., they can fit or interpolate the training dataset (Zhang et al., 2021). For example, when problem (1) is the training problem of an overparametrized model such as a deep neural network (Ma et al., 2018), or involves solving a consistent linear system (Loizou & Richtárik, 2020b;a), or a problem such as deep matrix factorization (Rolinek & Martius, 2018; Vaswani et al., 2019b), each individual loss function $f_i$ attains its minimum at $x_*$. That is, the following interpolation condition is satisfied.

Definition 4 (Interpolation). *We say that the interpolation condition holds when there exists $x_* \in \mathcal{X}_*$ such that $f_\xi(x_*) = \inf_{x\in\mathbb{R}^d} f_\xi(x)$ almost surely.*

In the finite-sum setting this condition amounts to $f_i(x_*) = \inf_{x\in\mathbb{R}^d} f_i(x)$ for all $i \in \{1, \cdots, n\}$. Note that when the interpolation condition is satisfied, it follows that $\sigma^2 = 0$ (see (2)). The use of interpolation in the literature has mostly been discussed within the context of unconstrained optimization. Despite Definition 4 being designed to study SGD for unconstrained optimization, we show that constrained optimization with mirror descent enjoys similar convergence benefits if $\sigma^2 = 0$. However, the standard definition of interpolation, as given by Definition 4, does not adequately describe interpolation with respect to the constraint $\mathcal{X}$. For example, consider the case where $f_i(x) = f(x)$ for all $x \in \mathcal{X}$ but the $f_i$ do not agree outside the constraint $\mathcal{X}$. In this case, it is possible to have $\sigma^2$ arbitrarily large while there is no variance in the stochastic gradient, $\mathbb{E}\left[\|\nabla f(x) - \nabla f_i(x)\|_2^2\right] = 0$ - rendering the problem non-stochastic. Xiao et al. (2022) describe an interpolation-like condition within constrained optimization; however, this condition requires the stochastic gradient to have zero variance at the optimum, which need not hold generally despite all $f_\xi$ sharing a common minimum (see for example Figure 1). Therefore, we also make use of the following constrained interpolation condition:

Definition 5 (Constrained Interpolation). *We say that the interpolation condition holds with respect to the constraint $\mathcal{X}$ if there exists $x_* \in \mathcal{X}_*$ such that $f_\xi(x_*) = \inf_{x\in\mathcal{X}} f_\xi(x)$ almost surely.*

Similar to Definition 4, interpolation with respect to $\mathcal{X}$ holds when $\sigma_{\mathcal{X}}^2 = 0$. In the finite-sum setting, constrained interpolation reduces to $f_i(x_*) = \inf_{x\in\mathcal{X}} f_i(x)$ for all $i \in \{1, \cdots, n\}$; the two conditions are contrasted numerically in the sketch at the end of Section 4.1.

## 4 Constant And Polyak Stepsize For Mirror Descent

In this section we provide background on constant stepsize selection for mirror descent. For non-constant stepsizes, we introduce our natural extensions of the classic Polyak stepsize and SPS to mirror descent.

## 4.1 Constant Stepsize

When a function is L-smooth with respect to the Euclidean norm, a common stepsize for gradient descent is η = 1/L, allowing for convergence in many settings (Bubeck, 2015). Similarly, for an L-relatively smooth function with respect to ψ, the prescribed stepsize for mirror descent using ψ is η = 1/L (Birnbaum et al., 2011; Lu et al., 2018). In the stochastic and relatively smooth case, Hanzely & Richtarik (2021) use η = 1/L as well as different stepsize schedules. In Section 5, we provide new convergence guarantees (under weaker assumptions) for SMD with η = 1/L.
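Before turning to the Polyak stepsize, the following minimal sketch illustrates the gap between (2) and (3) on a hypothetical one-dimensional finite sum over $\mathcal{X} = [0, 1]$ (the quadratic coefficients are our own choices): a non-convex component makes $\sigma^2$ infinite, while $\sigma^2_{\mathcal{X}}$ remains finite.

```python
import numpy as np

# Hypothetical finite sum on X = [0, 1]: f_i(x) = a_i x^2 + b_i x + c_i.
a = np.array([1.0, -0.5, 2.0])   # one concave (non-convex) component
b = np.array([-2.0, 0.5, -4.0])
c = np.array([1.5, 0.0, 2.5])

grid = np.linspace(0.0, 1.0, 10001)                          # dense grid over X
vals = a[:, None] * grid**2 + b[:, None] * grid + c[:, None]  # all f_i on the grid
f = vals.mean(axis=0)                                         # f(x) on the grid
x_star = grid[np.argmin(f)]

# (2): unconstrained infima; -inf whenever a_i < 0, so sigma^2 = +inf here.
f_star = np.where(a > 0, c - b**2 / (4 * a), -np.inf)
sigma2 = f.min() - f_star.mean()

# (3): constrained infima over [0, 1] are always finite.
sigma2_X = f.min() - vals.min(axis=1).mean()
print(x_star, sigma2, sigma2_X)   # sigma2_X <= sigma2, as claimed in Section 1.1
```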
## 4.2 Polyak Stepsize

An alternative method for selecting a stepsize, as suggested by Polyak (Polyak, 1987), is to take $\eta_t$ by minimizing an upper bound on $\|x_{t+1} - x_*\|_2^2$. From Lemma 1, if we take $\psi = \frac{1}{2}\|\cdot\|_2^2$ and assume $g_t \in \partial f(x_t)$ is a subgradient of f at $x_t$, then we recover a well-known inequality for projected subgradient descent4

$$\frac{1}{2}\left\|x_{*}-x_{t+1}\right\|_{2}^{2}\leq\frac{1}{2}\left\|x_{*}-x_{t}\right\|_{2}^{2}-\eta_{t}(f(x_{t})-f(x_{*}))+\frac{\eta_{t}^{2}}{2}\left\|g_{t}\right\|_{2}^{2}.$$

Minimizing the right-hand side with respect to $\eta_t$ yields Polyak's stepsize, $\eta_t = \frac{f(x_t)-f(x_*)}{\|g_t\|_2^2}$ (Polyak, 1987; Beck, 2017). Following in a similar fashion, we propose a generalization of Polyak's stepsize for mirror descent. If ψ is $\mu_\psi$-strongly convex5 with respect to the norm $\|\cdot\|$, then we can minimize the right-hand side of equation (6) to arrive at the mirror Polyak stepsize

$$\eta_{t}={\frac{\mu_{\psi}(f(x_{t})-f(x_{*}))}{\|g_{t}\|_{*}^{2}}}. \tag{7}$$

Despite the well-known connection between projected subgradient descent and mirror descent (Beck & Teboulle, 2003), this generalization of Polyak's stepsize is absent from the literature. For completeness, we include an analysis of the non-smooth case in Section D of the appendix, including both a $O(1/\sqrt{t})$ convergence result and a last-iterate convergence result. As expected, mirror descent with the mirror Polyak stepsize maintains the benefits of mirror descent - it permits a mild dependence on the dimension of the space. However, it inherits the impractical requirements of the Polyak stepsize - knowledge of $f(x_*)$ and an exact gradient or subgradient. In the stochastic setting, Loizou et al. (2021) propose the more practical stochastic Polyak stepsize (SPS), $\eta_t = \frac{f_{\xi_t}(x_t)-f_{\xi_t}^*}{c\,\|\nabla f_{\xi_t}(x_t)\|_2^2}$, and the bounded variant SPSmax, $\eta_t = \min\left\{\frac{f_{\xi_t}(x_t)-f_{\xi_t}^*}{c\,\|\nabla f_{\xi_t}(x_t)\|_2^2},\, \eta_b\right\}$, where $f_{\xi_t}^*$ is known in many machine learning applications, and c is a scaling parameter that depends on the class of functions being optimized (Loizou et al., 2021). Similar to our generalization of Polyak's stepsize (7), we propose a generalization of SPS and SPSmax for mirror descent: the mirror stochastic Polyak stepsize (mSPS) and the bounded variant mSPSmax,

$$\mathrm{mSPS}:\ \eta_{t}={\frac{\mu_{\psi}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{c\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{*}^{2}}}, \tag{8}$$

$$\mathrm{mSPS}_{\max}:\ \eta_{t}=\min\left\{{\frac{\mu_{\psi}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{c\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{*}^{2}}},\,\eta_{b}\right\}. \tag{9}$$

4 After using the fact that $g_t$ is a subgradient, $f(x_t) - f(x_*) \leq \langle g_t, x_t - x_*\rangle$.
5 Without loss of generality we could assume ψ to be 1-strongly convex and scale ψ by $1/\mu_\psi$. The stepsize would remain the same; scaling ψ inversely scales the stepsize.

## 4.2.1 Self-Bounding Property Of mSPS

An important property of SPS and mSPS is the following self-bounding property, which holds when $f_{\xi_t}$ is L-smooth and µ-strongly convex with respect to a norm $\|\cdot\|$:

$$\frac{\mu_{\psi}}{2cL}\leq\eta_{t}=\frac{\mu_{\psi}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{c\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{*}^{2}}\leq\frac{\mu_{\psi}}{2c\mu}.\tag{10}$$

We extensively use the lower bound, also known as the self-bounding property of smooth functions (Srebro et al., 2010), and we provide a complete proof in the appendix (Section E). A proof of the upper bound can be found in Orabona (2019)[Corollary 7.6].
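As a minimal sketch (with our own function names), computing (8) and (9) requires only the stochastic loss value, its infimum $f^*_{\xi_t}$, and the dual norm of the stochastic gradient for the chosen mirror map:

```python
import numpy as np

def msps(f_val, f_star, grad, mu_psi=1.0, c=1.0, eta_b=None, dual_norm=None):
    """mSPS (8); passing eta_b gives the bounded variant mSPS_max (9)."""
    if dual_norm is None:
        dual_norm = np.linalg.norm  # Euclidean psi: ||.||_* = ||.||_2, recovering SPS
    eta = mu_psi * (f_val - f_star) / (c * dual_norm(grad) ** 2)
    return eta if eta_b is None else min(eta, eta_b)

# Exponentiated gradient: psi (negative entropy) is 1-strongly convex w.r.t.
# ||.||_1, whose dual norm is ||.||_inf.
g = np.array([0.3, -1.2, 0.4])
eta_t = msps(f_val=0.8, f_star=0.0, grad=g, mu_psi=1.0, c=1.0, eta_b=10.0,
             dual_norm=lambda v: np.linalg.norm(v, ord=np.inf))
print(eta_t)
```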
## 5 Convergence With Constant Stepsize In Relatively Smooth Optimization

In this section we provide new convergence results for SMD with a constant stepsize under relative smoothness. We begin with the following lemma, which allows us to bound the last two terms in (5). This result can be seen as a generalization of Lemma 2 in Collins et al. (2008), where the exponentiated gradient algorithm is studied under the relative smoothness assumption.

Lemma 2. *Suppose f is L-smooth relative to ψ. Then if $\eta \leq \frac{1}{L}$ we have*

$$-B_{\psi}(x_{t+1};x_{t})+\eta\langle\nabla f(x_{t}),x_{t}-x_{t+1}\rangle\leq\eta(f(x_{t})-f(x_{t+1})).$$

## 5.1 Relative Smoothness And Strong Convexity

For an appropriately selected stepsize, SMD enjoys a linear rate of convergence to a neighborhood of the minimum $x_*$.

Theorem 1. *Assume ψ satisfies Assumption 1. Furthermore assume f to be µ-strongly convex relative to ψ over $\mathcal{X}$, and $f_\xi$ to be L-smooth relative to ψ over $\mathcal{X}$ almost surely. Then SMD with stepsize $\eta \leq \frac{1}{L}$ guarantees*

$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq(1-\mu\eta)^{t}B_{\psi}(x_{*};x_{1})+\frac{\sigma_{\mathcal{X}}^{2}}{\mu}.$$

Importantly, we do not assume $f_\xi$ to be convex, and under interpolation with respect to $\mathcal{X}$ we have $\sigma_{\mathcal{X}}^2 = 0$, implying SMD will converge to the true solution if ψ is strictly convex. If ψ is strongly convex then Theorem 1 provides a linear rate on the expected distance $\|x_{t+1} - x_*\|^2$ for some norm $\|\cdot\|$. For example, Collins et al. (2008) show that a particular loss $f_\xi$ appearing in the dual problem of fitting regularized log-linear models is both smooth and strongly convex relative to the negative entropy function. In this case, our results provide a linear rate on $\mathbb{E}\left[\|x_{t+1} - x_*\|_1^2\right]$ for the stochastic exponentiated gradient algorithm, since ψ (negative entropy) is strongly convex with respect to the norm $\|\cdot\|_1$. In the case of interpolation, we have that $B_\psi(x_*; x_{t+1}) \to 0$ almost surely. If ψ is strictly convex then $x_t \to x_*$ almost surely.

Corollary 2. *Under the same assumptions as Theorem 1, if $\sigma_{\mathcal{X}}^2 = 0$ then $B_\psi(x_*; x_{t+1}) \to 0$ almost surely.*

## 5.2 Relative Smoothness Without Convexity

Similar to Theorem 1, we show convergence of a quantity to a neighborhood, only assuming $f_\xi$ to be L-smooth relative to ψ, where f or $f_\xi$ need not be convex.

Theorem 3. *Assume ψ satisfies Assumption 1. Furthermore assume $f_\xi$ to be L-smooth relative to ψ over $\mathcal{X}$ almost surely. Then SMD with stepsize $\eta \leq \frac{1}{L}$ guarantees*

$$\mathbb{E}\left[{\frac{1}{t}}\sum_{s=1}^{t}B_{f}(x_{*};x_{s})\right]\leq{\frac{B_{\psi}(x_{*};x_{1})}{\eta t}}+\sigma_{\mathcal{X}}^{2}.$$

If f is convex, the above guarantee also implies a result for the "best" iterate, $\mathbb{E}\left[\min_{1\leq s\leq t} B_f(x_*; x_s)\right]$, to a neighborhood. If f is strictly convex then this implies at least one iterate $x_s$ converges to a neighborhood of $x_*$ in expectation. If f is strictly convex and 1-coercive6 then its conjugate function $f^*$ is also strictly convex (Hiriart-Urruty & Lemaréchal, 2004)[Corollary 4.1.3] and we have $B_f(x_*; x_s) = B_{f^*}(\nabla f(x_s); \nabla f(x_*))$ (Bauschke et al., 1997)[Theorem 3.7], implying that the average gradient $\frac{1}{t}\sum_{s=1}^{t} \nabla f(x_s)$, or at least one of the $\nabla f(x_s)$, converges to a neighborhood of $\nabla f(x_*)$. If f happens to be convex and L-smooth with respect to a norm $\|\cdot\|$, then $\frac{1}{2L}\|\nabla f(x_*) - \nabla f(x_s)\|_*^2 \leq B_f(x_*; x_s)$ (Nesterov, 2018)[Theorem 2.1.5], providing a similar convergence guarantee on the distance of the gradients to $\nabla f(x_*)$. Similar to Theorem 1, an almost sure convergence result follows from Theorem 3 under interpolation.

Corollary 4.
*Under the assumptions of Theorem 3, if f is convex and $\sigma_{\mathcal{X}}^2 = 0$ then $B_f(x_*; x_t) \to 0$ almost surely.*

## 5.2.1 Application Of Theorem 3, Solving Linear Systems

Theorem 3 provides convergence for the unconventional quantity $B_f(x_*; x_t)$; however, this quantity is sometimes equal to $f(x_t) - f(x_*)$, as in the case of solving linear systems. In this case Theorem 3 automatically gives a result for the quantity $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right]$ if f is convex. More formally, solving a constrained linear system amounts to finding $x_*$ such that

$$A x_{*}=b,{\text{ and }}x_{*}\in{\mathcal{X}}. \tag{11}$$

Problem (11) can be reformulated as a constrained finite-sum problem with $f_i(x) = \frac{1}{2}(\langle A_{i:}, x\rangle - b_i)^2$, where $A_{i:}$ and $b_i$ denote the ith row and component of A and b respectively. Note that $x_*$ interpolates all $f_i$, since $f_i(x_*) = 0$ by construction, with $\nabla f_i(x_*) = 0$ and $\sigma_{\mathcal{X}}^2 = 0$. Since the divergence $B_{f_i}$ is symmetric7 and $B_f$ is simply the average over the $B_{f_i}$, it holds that

$$B_{f}(x_{*};x_{t})=\sum_{i=1}^{n}{\frac{B_{f_{i}}(x_{*};x_{t})}{n}}=\sum_{i=1}^{n}{\frac{B_{f_{i}}(x_{t};x_{*})}{n}}=\sum_{i=1}^{n}{\frac{f_{i}(x_{t})-f_{i}(x_{*})}{n}}=f(x_{t})-f(x_{*}),$$

where the third equality follows from the fact that $\nabla f_i(x_*) = 0$. Theorem 3 therefore gives a convergence result for the gap $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right]$, and Corollary 4 guarantees $f(x_t) \to f(x_*)$, provided each $f_i$ is relatively smooth. Relative smoothness of $f_i$ holds if ψ is strongly convex, since for any norm $\|\cdot\|$ there exists a constant $L_i$ for which $f_i$ is $L_i$-smooth with respect to $\|\cdot\|$. Therefore, taking $L = \max_i L_i$ gives

$$B_{f_{i}}(x;y)\leq{\frac{L}{2}}\left\|x-y\right\|^{2}={\frac{L\mu_{\psi}}{2\mu_{\psi}}}\left\|x-y\right\|^{2}\leq{\frac{L}{\mu_{\psi}}}B_{\psi}(x;y).$$

6 A function is 1-coercive if $\lim_{\|x\|\to\infty} f(x)/\|x\| = +\infty$.
7 See Proposition 1 in the appendix for a proof.

## 5.2.2 EG For Finding Stationary Distributions Of Markov Chains

An important example of problem (11) for which $x_*$ exists and $\mathcal{X} \neq \mathbb{R}^d$ is the problem of finding a stationary distribution of a Markov chain with transition matrix P. That is, to find $x_*$ such that

$$(P^{\top}-I)x_{*}=0,{\text{ and }}x_{*}\in\Delta^{n}. \tag{12}$$

Problem (12) is ubiquitous in science and machine learning; for example, in online learning many algorithms require computing a stationary distribution of a Markov chain at each iteration (Greenwald et al., 2006; Blum & Mansour, 2007). From the above discussion we can formulate problem (12) as a finite-sum problem with constraint $\mathcal{X} = \Delta^n$. A natural choice for the simplex constraint is the stochastic EG algorithm (SMD with ψ taken to be negative entropy). Denoting $g_i = (P^{\top} - I)_{i:}$, we have that $f_i(x) = \frac{1}{2}\langle g_i, x\rangle^2$ is 1-smooth with respect to $\|\cdot\|_1$. Since negative entropy is 1-strongly convex with respect to $\|\cdot\|_1$, we have that $f_i$ is also 1-smooth relative to ψ. Therefore, Theorem 3 and Corollary 4 guarantee that $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right]$ and $f(x_t) - f(x_*)$ converge to zero, respectively, for the following stochastic update: sample $i \in \{1, \cdots, n\}$ uniformly at random and

$$y_{t+1}=x_{t}\odot\exp\left(-\nabla f_{i}(x_{t})\right),\quad x_{t+1}=\frac{y_{t+1}}{\|y_{t+1}\|_{1}},\tag{13}$$

where ⊙ and exp denote component-wise multiplication and component-wise exponentiation respectively. We highlight that no other existing works show convergence without a neighborhood under interpolation for the EG algorithm.
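A minimal simulation of update (13) on a small random chain (the transition matrix below is an arbitrary illustrative instance, not one from the paper) looks as follows; since the problem interpolates, the residual of (12) should shrink toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)  # a random row-stochastic transition matrix
G = P.T - np.eye(n)                # rows g_i of problem (12)

x = np.full(n, 1.0 / n)            # start from the uniform distribution
for t in range(20000):
    i = rng.integers(n)            # sample a row uniformly at random
    grad = (G[i] @ x) * G[i]       # grad f_i(x) for f_i(x) = 0.5 * <g_i, x>^2
    y = x * np.exp(-grad)          # multiplicative step of (13); stepsize 1 = 1/L
    x = y / y.sum()                # renormalise onto the simplex

print(np.abs(P.T @ x - x).max())   # residual of (12); small near a stationary point
```

Running longer tightens the residual further, consistent with the exact convergence guaranteed under interpolation.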
Additionally, problem (12) exhibits a natural occurrence where $f_i^*$ is known and equal to 0, rendering mSPS (8) computable. In Section 6, we demonstrate similar guarantees with mSPS.

## 5.3 Comparison With Related Works

In the constant stepsize and relatively smooth regime, Hanzely & Richtarik (2021) and Dragomir et al. (2021) provide convergence guarantees for SMD under different assumptions and to different neighborhoods. We provide an in-depth comparison, and demonstrate via an example a case where convergence to the solution is not guaranteed by previous works but is possible by Theorem 1. Hanzely & Richtarik (2021) make an assumption akin to bounded variance; they assume $\mathbb{E}\left[\langle\nabla f(x_t) - \nabla f_{\xi_t}(x_t),\, x_{t+1} - \tilde{x}_{t+1}\rangle \mid x_t\right]/\eta \leq \sigma^2$, where $\tilde{x}_{t+1}$ is the mirror descent iterate using the true gradient $\nabla f(x_t)$. In the case of relative smoothness and relative strong convexity they show a linear rate of convergence to a neighborhood for $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right]$ (Theorem 5.3), where $\bar{x}_t$ is a particular weighted average of the iterates $(x_1, \cdots, x_t)$. Without relative strong convexity, a similar result is shown for the uniform average $\bar{x}_t$ with a rate of $O(1/T)$ using a particular schedule of stepsizes (Corollary 5.5). Our results more closely resemble the work of Dragomir et al. (2021), giving the same rates and exact convergence with interpolation (Theorem 1 and Theorem 3); however, there are several *important* differences. Firstly, our results apply to a wider range of problems and mirror descent methods. Dragomir et al. (2021) do not decouple the domain $\mathcal{D}$ of the function ψ and the constraint set $\mathcal{X}$; they assume that $x_{t+1} \in \mathrm{int}\,\mathcal{X}$ where $\nabla\psi(x_{t+1}) = \nabla\psi(x_t) - \eta_t\nabla f_{\xi_t}(x_t)$. Therefore their definition of mirror descent *precludes* the famous exponentiated gradient algorithm and projected gradient descent - no projection steps are allowed in their definition. Furthermore, our analysis allows $x_*$ to be anywhere in $\mathcal{X}$, while Dragomir et al. (2021) require $\nabla f(x_*) = 0$ and exclude the case when $x_*$ is on the boundary of $\mathcal{X}$. Secondly, our neighborhoods of convergence are different; they show convergence to a different neighborhood $\eta\sigma^2/\mu$ ($\eta\sigma^2$ in the smooth case),

![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

Figure 1: Finite-sum example of Theorem 1 with SPGD and $\psi = \frac{1}{2}\|\cdot\|_2^2$. (left) f(x) is strongly convex and is a sum of smooth $f_i$ that are either non-convex or strongly convex functions. (right) As predicted by Theorem 1, linear convergence is observed for the mean trajectory of SPGD over 10,000 runs.

where $\sigma^2$ is such that $\mathbb{E}\left[\|\nabla f_\xi(x_*)\|_{\nabla^2\psi^*(z_t)}^2\right] \leq \sigma^2$.8 Unlike our results, their neighborhood can be controlled with a smaller stepsize if interpolation does not hold. Thirdly, our results hold for $\eta \leq 1/L$ while they require $\eta \leq 1/2L$. Finally, when f is strongly convex relative to ψ they require $f_\xi$ to be convex, while we allow $f_\xi$ to be non-convex.

## 5.3.1 An Example Of Theorem 1

To demonstrate the differences with previous works we consider a finite-sum example with SPGD and $\psi = \frac{1}{2}\|\cdot\|_2^2$, where each $f_i : \mathbb{R} \to \mathbb{R} : x \mapsto a_i x^2 + b_i x + c_i$ is quadratic, and the constraint is the closed interval $\mathcal{X} = [0, 1]$. As demonstrated in Figure 1, f is the average of both non-convex and strongly convex quadratics, and f is strongly convex relative to ψ. Additionally, each $f_i$ is $\max_j 2|a_j|$-smooth relative to ψ. More importantly, as depicted in Figure 1 we consider the case where $\sigma_{\mathcal{X}}^2 = 0$: interpolation relative to $\mathcal{X}$ holds with $f(0) = f_i^*(\mathcal{X})$ for all i.
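The exact coefficients behind Figure 1 are not reproduced here, so the following minimal sketch uses hypothetical quadratics with the same structure: one non-convex and two convex components, all minimised over $\mathcal{X} = [0, 1]$ at $x_* = 0$, so that $\sigma_{\mathcal{X}}^2 = 0$ even though the gradients $\nabla f_i(x_*)$ are non-zero.

```python
import numpy as np

# Hypothetical f_i(x) = a_i x^2 + b_i x on X = [0, 1]; each is minimised at x* = 0.
a = np.array([2.0, -1.0, 1.0])  # the second component is concave (non-convex)
b = np.array([1.0, 2.0, 0.0])   # yet grad f_i(0) = b_i is non-zero for i = 1, 2

rng = np.random.default_rng(0)
L = 2 * np.abs(a).max()         # each f_i is (max_j 2|a_j|)-smooth relative to psi
x, eta = 1.0, 1.0 / L           # constant stepsize eta <= 1/L, as in Theorem 1
for t in range(200):
    i = rng.integers(len(a))
    grad = 2 * a[i] * x + b[i]
    x = np.clip(x - eta * grad, 0.0, 1.0)  # SPGD step on the interval [0, 1]
print(x)                         # converges to x* = 0, as Theorem 1 predicts
```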
In comparison to Hanzely & Richtarik (2021), their results apply with a neighborhood of convergence equal to ≈ 168. We note that their neighborhood depends on the choice of stepsize, and we report the smallest neighborhood guaranteed by their results by using their prescribed stepsize. In comparison to Dragomir et al. (2021), their results do not apply for several reasons: SPGD is not included in their analysis (only SGD in the Euclidean case is allowed), $f_i$ is not always convex, and $\nabla f(x_*) \neq 0$. Nevertheless, their variance term in the Euclidean case corresponds to $\mathbb{E}\left[\|\nabla f_i(x_*)\|_2^2\right]$, the expected squared norm at the optimum, which has a value of 520 in this constrained finite-sum example.

## 6 Convergence Of Mirror SPS

In this section we present our convergence results for SMD with mSPSmax when the $f_i$ are $L_i$-smooth, under varying assumptions. First, we consider the case when f is strongly convex relative to ψ, a common assumption when analysing mirror descent under strong convexity (Hazan & Kale, 2014). Then we present rates under convexity and smoothness but without relative strong convexity. Afterwards, we discuss the results under interpolation and provide examples.

8 Note that Dragomir et al. (2021) assume ψ to be twice differentiable and strictly convex, and $\psi^*$ is the conjugate function of ψ. $z_t$ is some point within the line segment between $\nabla\psi(x_t) - 2\eta\nabla f_i(x_*)$ and $\nabla\psi(x_t)$.

## 6.1 Smooth And Strongly Convex

With strong convexity of ψ and f being relatively strongly convex with respect to ψ, we can show a linear rate of convergence to a neighborhood.

Theorem 5. *Assume $f_\xi$ is convex and L-smooth almost surely with respect to the norm $\|\cdot\|$. Furthermore, assume that f is µ-strongly convex relative to ψ over $\mathcal{X}$, where ψ is $\mu_\psi$-strongly convex over $\mathcal{X}$ with respect to the norm $\|\cdot\|$, and Assumption 1 holds. Then SMD with mSPSmax and $c \geq \frac{1}{2}$ guarantees*

$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq(1-\mu\alpha)^{t}B_{\psi}(x_{*};x_{1})+\frac{\eta_{b}\sigma^{2}}{\alpha\mu},$$

*where $\alpha := \min\{\mu_\psi/2cL,\, \eta_b\}$.*

Since ψ is strongly convex, we get a guarantee on the expected distance to the minimum, as $\frac{\mu_\psi}{2}\|x_* - x_{t+1}\|^2 \leq B_\psi(x_*; x_{t+1})$. Also, if each $f_i$ is a strongly convex function or satisfies the Polyak-Łojasiewicz (PL) condition (Assumption 2), then mSPS is upper bounded by equation (10) and equivalent to mSPSmax with $\eta_b = \mu_\psi/2c\mu$. Therefore, SMD with mSPS converges by Theorem 5. A similar result was shown for SGD with SPSmax by Loizou et al. (2021)[Theorem 3.1]. Indeed, Theorem 5 generalizes their results; by taking $\psi(x) = \frac{1}{2}\|x\|_2^2$ we recover a result which is true for both SGD and SPGD.

Corollary 6. *Assume $f_\xi$ is convex and L-smooth with respect to the norm $\|\cdot\|_2$ almost surely and that f is µ-strongly convex with respect to $\|\cdot\|_2$ over $\mathcal{X}$. Then SPGD (SGD if $\mathcal{X} = \mathbb{R}^d$) with SPSmax guarantees $\mathbb{E}\left[\frac{1}{2}\|x_* - x_{t+1}\|_2^2\right] \leq (1-\mu\alpha)^t \frac{1}{2}\|x_* - x_1\|_2^2 + \frac{\eta_b\sigma^2}{\alpha\mu}$.*

For the case of preconditioned SGD, $\psi(x) = \frac{1}{2}\|x\|_M^2$ and $\mathcal{X} = \mathbb{R}^d$, we can go further and extend a non-convex result similar to Theorem 3.6 in Loizou et al. (2021). We include the result and proof in Section G.3. In Loizou et al. (2021) constant stepsize results are derived as a special case of SPSmax; similarly, if in mSPSmax $\eta_b$ is selected such that $\eta_b \leq \mu_\psi/2cL_{\max}$, then $\eta_t$ is a constant and we can derive new constant stepsize results for SMD. However, using Theorem 5 and mSPSmax to analyze constant stepsize SMD yields weaker results than Theorem 1. The assumptions made in Theorem 5 are stronger.
For example, Theorem 5 requires ψ to be both strongly convex and smooth on $\mathcal{X}$ with respect to a norm, which would not be possible if ψ is Legendre over $\mathcal{X}$ and $\mathcal{X}$ is bounded. This limitation, however, does not apply to Theorem 1 or to the next result for smooth convex losses, since we do not enforce a smoothness condition on ψ.

## 6.2 Smooth And Convex

Without f being relatively strongly convex, we can attain convergence results on the average function value.

Theorem 7. *If $f_\xi$ is convex and L-smooth with respect to a norm $\|\cdot\|$ almost surely, Assumption 1 holds, and ψ is $\mu_\psi$-strongly convex over $\mathcal{X}$ with respect to $\|\cdot\|$, then mirror descent with mSPSmax and $c \geq 1$ guarantees*

$$\mathbb{E}\left[f(\bar{x}_{t})-f(x_{*})\right]\leq\frac{2B_{\psi}(x_{*};x_{1})}{\alpha t}+\frac{2\eta_{b}\sigma^{2}}{\alpha},$$

*where $\alpha := \min\{\mu_\psi/2cL,\, \eta_b\}$.*

Similarly to Theorem 5 we can derive constant stepsize results, except here we require $\eta_b \leq \mu_\psi/2L_{\max}$ (with c = 1); see Section G.2.1 for details. Unlike Theorem 5, however, this result does not require ψ to be smooth over $\mathcal{X}$.

Comparison with SPS. Unlike the analysis of SPS, our results and stepsize depend on the choice of the mirror map ψ. This dependence, as observed historically, is an important motivation for mirror descent, allowing for tighter bounds and better dependence on the dimension d. For example, suppose $\mathcal{X} = \Delta^d$ and $f_\xi$ is L-smooth with respect to $\|\cdot\|_1$, and for simplicity $x_1 = (1/d, \cdots, 1/d)$, c = 1, and $\eta_b$ is selected large enough such that $\alpha = \mu_\psi/2cL_{\max}$. Then the bound in Theorem 7 for EG gives $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right] \leq 4L\log(d)/t + 4L\eta_b\sigma^2$. Meanwhile, under SPGD the bound is $4dL/t + 4dL\eta_b\sigma^2$, since $f_\xi$ is $\tilde{L}$-smooth with respect to $\|\cdot\|_2$ with $\tilde{L} = dL$. Note that, unlike SPGD, the neighborhood of convergence for EG is independent of d! Moreover, under interpolation EG converges at a rate that scales logarithmically in d, which is otherwise not possible with SGD. Therefore, selecting the appropriate ψ and stepsize allows for better dependence on d with a smaller neighborhood of convergence.

## 6.3 Exact Convergence With Adaptive Stepsizes And Interpolation

As a consequence of the previous results, we have several convergence guarantees under interpolation ($\sigma^2 = 0$). In fact, when $\sigma^2 = 0$ the upper bound $\eta_b$ is not needed; the unbounded variant mSPS enjoys the same convergence rates as mSPSmax. Additionally, similar to Section 5, we can attain almost sure convergence results analogous to Corollary 2 and Corollary 4. To the best of our knowledge, all existing results are with constant stepsize (Section 5, Dragomir et al., 2021; Azizan & Hassibi, 2019), or with conditions on the initialization of parameters (Azizan et al., 2019). In contrast, with mSPS we have provided exact global convergence guarantees with an adaptive stepsize.

## 6.4 Mirror Descent Examples

To demonstrate the generality of our results we consider two instances of Theorem 7. We examine the so-called p-norm algorithms and preconditioned SGD. Similar results can also be derived with the exponentiated gradient algorithm and the norm $\|\cdot\|_1$.

Corollary 8 (p-norm). *Suppose the assumptions of Theorem 7 hold with $\|\cdot\| = \|\cdot\|_p$ for $1 < p \leq 2$ and $\psi(x) = \frac{1}{2}\|x\|_p^2$. Let q be such that $\frac{1}{p} + \frac{1}{q} = 1$. Then SMD with stepsizes $\eta_t = \min\left\{\frac{(p-1)(f_{\xi_t}(x_t)-f_{\xi_t}^*)}{\|\nabla f_{\xi_t}(x_t)\|_q^2},\, \eta_b\right\}$ guarantees $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right] \leq \frac{2B_\psi(x_*;x_1)}{\alpha t} + \frac{2\eta_b\sigma^2}{\alpha}$.*

Another interesting case is SGD with preconditioning, $x_{t+1} = x_t - \eta M^{-1}\nabla f_i(x_t)$, for some positive definite matrix M.
In other words, ψ is taken to be $\psi(x) = \frac{1}{2}\|x\|_M^2$, with $B_\psi(x; y) = \frac{1}{2}\|x - y\|_M^2$ and $\mathcal{X} = \mathbb{R}^d$.

Corollary 9 (Preconditioned SGD). *Suppose $\mathcal{X} = \mathbb{R}^d$ and the assumptions of Theorem 7 hold with $\|\cdot\| = \|\cdot\|_M$, for a positive definite matrix M. Then SMD with $\psi(x) = \frac{1}{2}\|x\|_M^2$ and stepsizes $\eta_t = \min\left\{\frac{f_{\xi_t}(x_t)-f_{\xi_t}^*}{\|\nabla f_{\xi_t}(x_t)\|_{M^{-1}}^2},\, \eta_b\right\}$ guarantees $\mathbb{E}\left[f(\bar{x}_t) - f(x_*)\right] \leq \frac{\|x_* - x_1\|_M^2}{\alpha t} + \frac{2\eta_b\sigma^2}{\alpha}$.*

## 7 Experiments

We test the performance of mSPS on different supervised learning domains and with different instances of SMD. We use mSPS in our convex experiments with c = 1. In theory the bounded stepsize mSPSmax is required in the absence of interpolation; however, in practice we observe that mSPS converges, likely because the problems are close to interpolation. For our non-convex deep learning experiments we follow Loizou et al. (2021) by selecting c = 0.2 and a smoothing procedure to set a moving upper bound for mSPSmax.9 To compare against a constant stepsize we sweep over $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10^{1}, 10^{2}, 10^{3}, 10^{4}, 10^{5}\}$. Code for our experiments and implementation is available at: https://github.com/IssamLaradji/mirror-sps. We consider 4 series of experiments. First, we consider unconstrained convex problems with mSPS and different p-norm algorithms, $\psi(x) = \frac{1}{2}\|x\|_p^2$. Second, we evaluate the performance of mSPS with SPGD and positive constraints. Third, we solve a convex problem with an $\ell_1$ constraint using mSPS and the exponentiated gradient algorithm (EG). Finally, Section 7.4 demonstrates that our method shows competitive performance against highly tuned stepsizes for deep learning without any tuning of the hyper-parameters.

9 This technique is a moving upper bound. More precisely, we run mSPSmax with an upper bound at time t given by $\eta_b^t = \tau^{b/n}\eta_{t-1}$, where b and n are the batch size and the number of examples respectively, which amounts to $\tau^{b/n} \approx 1$ in our experiments, with $\eta_b^t \approx \eta_{t-1}$.

![12_image_0.png](12_image_0.png)

Figure 2: Comparison between mSPS with c = 1 and constant stepsizes on convex binary-classification problems with no constraints (row 1), with non-negative (NN) constraints (row 2), and with $\ell_1$ constraints (row 3).

![12_image_1.png](12_image_1.png)

Figure 3: Comparison between mSPS and constant stepsizes on non-convex multiclass classification with deep networks. The leftmost plot shows the stepsize evolution of mSPS with smoothing and c = 0.2 for different p values.

## 7.1 Mirror Descent Across p-Norms

We consider a convex binary-classification problem using radial basis function (RBF) kernels. We experiment on the ijcnn dataset obtained from LIBSVM (Chang & Lin, 2011), which does not satisfy interpolation.10 ijcnn has 22 dimensions, 39,992 training examples, and 9,998 test examples. We selected the kernel bandwidth 0.05 following Vaswani et al. (2019b). For these experiments we compare between mSPS and the standard constant stepsize across $p \in \{1.2, 1.4, 1.6, 1.8\}$. The first row of Figure 2 shows the training loss for the different optimizers with a softmax loss. We make the following observations: (i) mSPS performs reasonably well across different values of p and outperforms most constant stepsizes of SMD. (ii) mSPS performs well on ijcnn even though it is not separable in the kernel space (*i.e.* there is no interpolation). This demonstrates some robustness to violations of the interpolation condition and to different values of p. Note that each optimizer was run with five different random seeds to demonstrate robustness.

10 In the appendix we include results on the mushroom dataset, where interpolation is satisfied.
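For reference, a minimal sketch of the unconstrained p-norm update used in these experiments (the link-function form, with our own function names) is given below; with constraints, a Bregman projection would follow the dual step.

```python
import numpy as np

def link(v, r):
    """Gradient of 0.5 * ||.||_r^2: the p-norm link map (inverse link uses r = q)."""
    nrm = np.linalg.norm(v, ord=r)
    if nrm == 0.0:
        return np.zeros_like(v)
    return np.sign(v) * np.abs(v) ** (r - 1) / nrm ** (r - 2)

def pnorm_step(x, grad, eta, p):
    """One unconstrained SMD step with psi = 0.5 * ||.||_p^2 (X = R^d)."""
    q = p / (p - 1)                  # 1/p + 1/q = 1
    theta = link(x, p) - eta * grad  # gradient step in the dual space
    return link(theta, q)            # map back with the inverse link

x = np.array([0.5, -0.3, 0.1])
g = np.array([0.2, 0.1, -0.4])
print(pnorm_step(x, g, eta=0.1, p=1.4))
```

As a sanity check, p = 2 makes both link maps the identity, recovering a plain SGD step.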
## 7.2 Projected Gradient Descent

In this setup we consider optimizing the logistic loss with a non-negativity constraint on the parameters. We run our optimizers on two real datasets, ijcnn and rcv1. rcv1 has 47,236 dimensions, 16,194 training examples, and 4,048 test examples. Following Vaswani et al. (2019b) we selected the RBF kernel on rcv1 with bandwidth 0.25. We also ran the optimizers on two synthetic binary-classification datasets that are linearly separable with margins 0.01 and 0.05, respectively. Linear separability ensures the interpolation condition will hold. For each margin, we generate a dataset of 10k examples with $d = 20$ features and binary targets. We observe in the second row of Figure 2 that mSPS outperforms the best tuned constant stepsize in most cases, and is competitive in the rest. This result underlines the importance of adaptive stepsizes.

## 7.3 Exponentiated Gradient

To test the effectiveness of mSPS with EG we consider the datasets in Section 7.2 with logistic regression where the parameters are constrained to the $\ell_1$ ball, $\mathcal{X} = \{x : \|x\|_1 \leq \lambda\}$. To solve this problem with EG, we employ the common trick of reducing an $\ell_1$-ball constraint to a simplex constraint with dimension $2d - 1$ (Schuurmans & Zinkevich, 2016). For these experiments we test our optimizers on rcv1 and ijcnn and the two synthetic datasets mentioned in Section 7.2, and report the results in row 3 of Figure 2. As in the previous experiments, mSPS is significantly faster than most constant stepsizes even though $c$ is kept at 1, and in some cases it outperforms the best tuned SMD. Note that the constant stepsizes that do not appear in the plots have diverged.

## 7.4 p-Norm for Optimizing Deep Networks

For multiclass classification with deep networks, we considered the $p$-norm algorithms on the CIFAR10 dataset. CIFAR10 has 10 classes; we used the standard training set consisting of 50k examples and a test set of 10k. As in the kernel experiments, we evaluated the optimizers using the softmax loss for different values of $p$. We used the experimental setup proposed in Loizou et al. (2021) with a batch size of 128 for all methods and datasets, and the standard image-classification architecture ResNet-34 (He et al., 2016). As in the other experiments, each optimizer was run with five different random seeds in the final experiment. The optimizers were run until the performance of most methods saturated: 200 epochs for the models on the CIFAR10 dataset. From Figure 3, we observe that: (i) mSPS with $c = 0.2$ and smoothing consistently converges to a good solution much faster than most constant stepsizes. (ii) The gap between the performance of mSPS and the constant stepsizes increases as $p$ decreases, suggesting that, as in the convex setting, our method is robust to different values of $p$.

## 8 Conclusions and Future Work

Stochastic mirror descent (SMD) is a powerful generalization of stochastic projected gradient descent for solving problems without a Euclidean structure. We provide a new convergence analysis for SMD with constant stepsize in relatively smooth optimization, and with the new adaptive stepsizes mSPS and mSPS$_{\max}$ in the smooth case. Consequently, we achieve the first convergence results for the EG algorithm under interpolation.
A main novelty of our results is the use of the finite optimal objective difference assumption (Loizou et al., 2021) with mirror descent, allowing for convergence without bounded gradient or variance assumptions and achieving exact convergence under interpolation. In relatively smooth optimization we refine the finite optimal objective difference assumption to better capture interpolation with constraints, and achieve convergence in cases not guaranteed by existing works. In smooth optimization we experimentally validate mSPS in several supervised learning domains and across various instances of mirror descent. mSPS requires no tuning but is nonetheless competitive with or better than extensively hand-tuned stepsizes. This adaptivity is important for tackling different problem domains with different versions of mirror descent.

Beyond the scope of this paper there are several interesting directions for future work. For example, we critically rely on relative smoothness or smoothness; it would be interesting to attain rates of convergence under the finite optimal objective difference assumption without smoothness. Additionally, our convergence result for mSPS$_{\max}$ in Theorem 5 requires $\psi$ to be smooth over $\mathcal{X}$, an assumption not required for our constant stepsize results in Section 5; it would be interesting to unify the results by developing a variant of mSPS$_{\max}$ for the more general relatively smooth problem.

## References

Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. *Journal of Computer and System Sciences*, 64(1):48–75, 2002. ISSN 0022-0000.

Navid Azizan and Babak Hassibi. Stochastic gradient/mirror descent: Minimax optimality and implicit regularization. In *International Conference on Learning Representations*, 2019.

Navid Azizan, Sahin Lale, and Babak Hassibi. Stochastic mirror descent on overparameterized nonlinear models: Convergence, implicit regularization, and generalization. *arXiv preprint arXiv:1906.03830*, 2019.

Heinz H Bauschke, Jonathan M Borwein, et al. Legendre functions and the method of random Bregman projections. *Journal of Convex Analysis*, 4(1):27–67, 1997.

Heinz H Bauschke, Jonathan M Borwein, and Patrick L Combettes. Bregman monotone optimization algorithms. *SIAM Journal on Control and Optimization*, 42(2):596–636, 2003.

Heinz H Bauschke, Jérôme Bolte, and Marc Teboulle. A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications. *Mathematics of Operations Research*, 42(2):330–348, 2017.

Anastasia Bayandina. Adaptive stochastic mirror descent for constrained optimization. In *2017 Constructive Nonsmooth Analysis and Related Topics (dedicated to the memory of V.F. Demyanov) (CNSA)*, pp. 1–4, 2017.

Anastasia Bayandina, Pavel Dvurechensky, Alexander Gasnikov, Fedor Stonyakin, and Alexander Titov. *Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints*, pp. 181–213. Springer International Publishing, Cham, 2018. ISBN 978-3-319-97478-1.

Amir Beck. *First-Order Methods in Optimization*. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2017.

Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. *Operations Research Letters*, 31(3):167–175, 2003. ISSN 0167-6377.

Aharon Ben-Tal, Tamar Margalit, and Arkadi Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. *SIAM Journal on Optimization*, 12(1):79–108, 2001.
Leonard Berrada, Andrew Zisserman, and M Pawan Kumar. Training neural networks for and by interpolation. In *International Conference on Machine Learning*, pp. 799–809. PMLR, 2020. Mario Bertero, Patrizia Boccacci, Gabriele Desiderà, and Giuseppe Vicidomini. Image deblurring with poisson data: from cells to galaxies. *Inverse Problems*, 25(12):123006, 2009. Dimitri P Bertsekas and John N Tsitsiklis. Parallel and distributed computation: numerical methods. 2003. Benjamin Birnbaum, Nikhil R Devanur, and Lin Xiao. Distributed algorithms via gradient descent for fisher markets. In *Proceedings of the 12th ACM conference on Electronic commerce*, pp. 127–136, 2011. Avrim Blum and Yishay Mansour. From external to internal regret. *Journal of Machine Learning Research*, 8(6), 2007. Stephen Boyd, Lin Xiao, and Almir Mutapcic. Subgradient methods. *lecture notes of EE392o, Stanford* University, Autumn Quarter, 2004:2004–2005, 2003. Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015. Nicolo Cesa-Bianchi and Gábor Lugosi. *Prediction, learning, and games*. Cambridge university press, 2006. Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. *IEEE Transactions on Information Theory*, 50(9):2050–2057, 2004. Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. *ACM Transactions on Intelligent Systems and Technology*, 2011. Software available at http://www.csie.ntu.edu.tw/ ~cjlin/libsvm. Gong Chen and Marc Teboulle. Convergence analysis of a proximal-like minimization algorithm using bregman functions. *SIAM Journal on Optimization*, 3(3):538–543, 1993. Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras Pérez, and Peter Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. Journal of Machine Learning Research, 9:1775–1822, 2008. Damek Davis, Dmitriy Drusvyatskiy, Kellie J MacPhee, and Courtney Paquette. Subgradient methods for sharp weakly convex functions. *Journal of Optimization Theory and Applications*, 179(3):962–982, 2018. Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. *Journal of Machine Learning Research*, 13:165–202, 2012. Radu Alexandru Dragomir, Mathieu Even, and Hadrien Hendrikx. Fast stochastic bregman gradient methods: Sharp analysis and variance reduction. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning* Research, pp. 2815–2825. PMLR, 18–24 Jul 2021. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12:2121–2159, 2011. John C Duchi. Introductory lectures on stochastic optimization. *The mathematics of data*, 25:99, 2018. John C Duchi, Shai Shalev-Shwartz, Yoram Singer, and Ambuj Tewari. Composite objective mirror descent. In *23rd Conference on Learning Theory, COLT 2010*, pp. 14–26, 2010. Barbara Franci and Sergio Grammatico. Convergence of sequences: A survey. *Annual Reviews in Control*, 2022. Tianxiang Gao, Songtao Lu, Jia Liu, and Chris Chu. Randomized bregman coordinate descent methods for non-lipschitz optimization. *arXiv preprint arXiv:2001.05202*, 2020. Claudio Gentile. The robustness of the p-norm algorithms. *Machine Learning*, 53(3):265–299, 2003. 
Saeed Ghadimi and Guanghui Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization i: A generic algorithmic framework. *SIAM Journal on Optimization*, 22(4): 1469–1492, 2012. Robert Gower, Othmane Sebbouh, and Nicolas Loizou. Sgd for structured nonconvex functions: Learning rates, minibatching and interpolation. In *International Conference on Artificial Intelligence and Statistics*, pp. 1315–1323. PMLR, 2021. Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. Sgd: General analysis and improved rates. In *International Conference on Machine Learning*, pp. 5200– 5209. PMLR, 2019. Amy Greenwald, Zheng Li, and Casey Marks. Bounds for regret-matching algorithms. In *AI&M*, 2006. Adam J Grove, Nick Littlestone, and Dale Schuurmans. General convergence results for linear discriminant updates. *Machine Learning*, 43(3):173–210, 2001. Filip Hanzely and Peter Richtarik. Fastest rates for stochastic mirror descent methods. *Computational* Optimization and Applications, 79(3):717–766, 2021. Elad Hazan and Sham Kakade. Revisiting the polyak step size. *arXiv preprint arXiv:1905.00313*, 2019. Elad Hazan and Satyen Kale. Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization. *The Journal of Machine Learning Research*, 15(1):2489–2512, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. Hadrien Hendrikx, Francis Bach, and Laurent Massoulié. Dual-free stochastic decentralized optimization with variance reduction. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19455–19466. Curran Associates, Inc., 2020. Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. *Fundamentals of convex analysis*. Springer Science & Business Media, 2004. Samid Hoda, Andrew Gilpin, Javier Pena, and Tuomas Sandholm. Smoothing techniques for computing nash equilibria of sequential games. *Mathematics of Operations Research*, 35(2):494–512, 2010. Anatoli Juditsky and Yuri Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. *Technical Report*, 2010. Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. *Information and Computation*, 132(1):1–63, 1997. ISSN 0890-5401. Christian Kroer, Kevin Waugh, Fatma Kılınç-Karzan, and Tuomas Sandholm. Faster algorithms for extensive-form game solving via improved smoothing functions. *Mathematical Programming*, 179(1):385– 417, 2020. Yunwen Lei and Ke Tang. Stochastic composite mirror descent: Optimal bounds with high probabilities. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, volume 89 of *Proceedings of Machine Learning* Research, pp. 983–992. PMLR, 16–18 Apr 2019. Nick Littlestone. From on-line to batch learning. In *Proceedings of the second annual workshop on Computational learning theory*, pp. 269–284, 1989. Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. 
*Information and computation*, 108(2):212–261, 1994. Nicolas Loizou and Peter Richtárik. Convergence analysis of inexact randomized iterative methods. SIAM Journal on Scientific Computing, 42(6):A3979–A4016, 2020a. Nicolas Loizou and Peter Richtárik. Momentum and stochastic momentum for stochastic gradient, newton, proximal point and subspace descent methods. *Computational Optimization and Applications*, 77(3):653– 710, 2020b. Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak stepsize for sgd: An adaptive learning rate for fast convergence. In International Conference on Artificial Intelligence and Statistics, pp. 1306–1314. PMLR, 2021. Stanislaw Łojasiewicz. Une propriété topologique des sous-ensembles analytiques réels, 1963. Haihao Lu, Robert M Freund, and Yurii Nesterov. Relatively smooth convex optimization by first-order methods, and applications. *SIAM Journal on Optimization*, 28(1):333–354, 2018. Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of* the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning* Research, pp. 3325–3334. PMLR, 2018. H Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. COLT, 2010. Arkadi Nemirovski and David Borisovich Yudin. *Problem Complexity and Method Efficiency in Optimization*. A Wiley-Interscience publication. Wiley, 1983. ISBN 9780471103455. Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. *SIAM Journal on optimization*, 19(4):1574–1609, 2009. Yurii Nesterov. *Lectures on convex optimization*, volume 137. Springer, 2018. Francesco Orabona. A modern introduction to online learning. *arXiv preprint arXiv:1912.13213*, 2019. Francesco Orabona and Dávid Pál. Scale-free online learning. *Theoretical Computer Science*, 716:50–69, 2018. ISSN 0304-3975. Special Issue on ALT 2015. Boris Polyak. *Introduction to optimization*. New York : Optimization Software, Publications Division, 1987. Boris T Polyak. Gradient methods for solving equations and inequalities. *USSR Computational Mathematics* and Mathematical Physics, 4(6):17–32, 1964. Mariana Prazeres and Adam M Oberman. Stochastic gradient descent with polyak's learning rate. *Journal* of Scientific Computing, 89(1):1–16, 2021. Herbert Robbins and Sutton Monro. A stochastic approximation method. *The annals of mathematical* statistics, pp. 400–407, 1951. Michal Rolinek and Georg Martius. L4: Practical loss-based stepsize adaptation for deep learning. In Advances in Neural Information Processing Systems, 2018. Dale Schuurmans and Martin A Zinkevich. Deep learning games. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016. Samuel Sokota, Ryan D'Orazio, J Zico Kolter, Nicolas Loizou, Marc Lanctot, Ioannis Mitliagkas, Noam Brown, and Christian Kroer. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=DpE5UYUQzZH. Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Optimistic rates for learning with a smooth loss. arXiv preprint arXiv:1009.3896, 2010. 
Fedor Sergeevich Stonyakin, M Alkousa, Aleksei Nikolaevich Stepanov, and Aleksandr Aleksandrovich Titov. Adaptive mirror descent algorithms for convex and strongly convex optimization problems with functional constraints. *Journal of Applied and Industrial Mathematics*, 13(3):557–574, 2019.

Matthew Streeter and H Brendan McMahan. Less regret via online conditioning. *arXiv preprint arXiv:1002.4862*, 2010.

Sharan Vaswani, Francis Bach, and Mark Schmidt. Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, volume 89 of *Proceedings of Machine Learning Research*, pp. 1195–1204. PMLR, 16–18 Apr 2019a. URL https://proceedings.mlr.press/v89/vaswani19a.html.

Sharan Vaswani, Aaron Mishkin, Issam Laradji, Mark Schmidt, Gauthier Gidel, and Simon Lacoste-Julien. Painless stochastic gradient: Interpolation, line-search, and convergence rates. In *Advances in Neural Information Processing Systems*, pp. 3727–3740, 2019b.

Sharan Vaswani, Olivier Bachem, Simone Totaro, Robert Müller, Shivam Garg, Matthieu Geist, Marlos C. Machado, Pablo Samuel Castro, and Nicolas Le Roux. A general class of surrogate functions for stable and efficient reinforcement learning. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, pp. 8619–8649. PMLR, 28–30 Mar 2022. URL https://proceedings.mlr.press/v151/vaswani22a.html.

Tesi Xiao, Krishnakumar Balasubramanian, and Saeed Ghadimi. Improved complexities for stochastic conditional gradient methods under interpolation-like conditions. *Operations Research Letters*, 50(2):184–189, 2022. ISSN 0167-6377. doi: https://doi.org/10.1016/j.orl.2022.01.015. URL https://www.sciencedirect.com/science/article/pii/S0167637722000219.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Commun. ACM*, 64(3):107–115, 2021.

## A Appendix

The appendices include omitted proofs, other results, and additional experiments. The material is organized as follows: equivalent definitions of relative smoothness are given in Section B; standard mirror descent results are presented in Section C; the non-smooth analysis of mirror descent with the mirror Polyak stepsize is given in Section D; the lower bound proof for mSPS is included in Section E; the proofs for the results of Section 5 are presented in Section F, including Theorem 1 and Theorem 3; proofs for Section 6 are given in Section G, including a non-convex result for preconditioned SGD in Section G.3; experiment details are given in Section H.

## B Relative Smoothness

Relative smoothness is a generalization of smoothness in first-order optimization that includes non-Lipschitz gradients. A function is $L$-smooth with respect to a norm $\|\cdot\|$ if
$$\|\nabla f(x)-\nabla f(y)\|_{*}\leq L\|x-y\|.$$
With a Lipschitz gradient the error in the first-order approximation of $f$ grows at most quadratically:
$$f(x)-\left(f(y)+\langle\nabla f(y),x-y\rangle\right)=B_{f}(x;y)\leq\frac{L}{2}\|x-y\|^{2}.$$
Relative smoothness replaces the quadratic upper bound with a divergence relative to a convex function $\psi$: $B_f(x; y) \leq LB_\psi(x; y)$. If $\psi$ is strongly convex then smoothness with respect to a norm implies relative smoothness with respect to $\psi$. However, a relatively smooth function may not admit a Lipschitz gradient, as was first remarked by Birnbaum et al. (2011) and later rediscovered by Lu et al. (2018) and Bauschke et al. (2017).

Similar to traditional smoothness, equivalent definitions of relative smoothness exist in the literature. For example, Lu et al. (2018) prove that the following conditions are equivalent:

1. $f$ is $L$-smooth relative to $\psi$;
2. $L\psi - f$ is convex;
3. under twice differentiability, $\nabla^2 f \preceq L\nabla^2\psi$;
4. $\langle\nabla f(x) - \nabla f(y), x - y\rangle \leq L\langle\nabla\psi(x) - \nabla\psi(y), x - y\rangle$.

Similar conditions can also be stated for relative strong convexity.
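As a small sanity check of condition 2 above, consider a toy example (ours, not from the paper): $f(x) = x^4/4$ has the non-Lipschitz gradient $x^3$ on $\mathbb{R}$, yet $f$ is $1$-smooth relative to $\psi(x) = x^4/4 + x^2/2$, since $\psi - f = x^2/2$ is convex. The following script verifies $B_f(x; y) \leq B_\psi(x; y)$ numerically:

```python
import numpy as np

# f(x) = x^4/4 has gradient x^3 (not Lipschitz on R), yet f is 1-smooth
# relative to psi(x) = x^4/4 + x^2/2 because (psi - f)(x) = x^2/2 is convex.
f = lambda x: x**4 / 4
gf = lambda x: x**3
psi = lambda x: x**4 / 4 + x**2 / 2
gpsi = lambda x: x**3 + x

def breg(h, gh, x, y):
    # Bregman divergence B_h(x; y) = h(x) - h(y) - <grad h(y), x - y>
    return h(x) - h(y) - gh(y) * (x - y)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-5, 5, 1000), rng.uniform(-5, 5, 1000)
assert np.all(breg(f, gf, xs, ys) <= breg(psi, gpsi, xs, ys) + 1e-9)
print("B_f(x; y) <= 1 * B_psi(x; y) on all sampled pairs")
```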
Defining curvature and smoothness relative to a function $\psi$ allows for a wider application of first-order methods via mirror descent. Recently, such assumptions have been used to establish both new results and new algorithms in machine learning. In reinforcement learning, Vaswani et al. (2022) use relative smoothness and mirror descent to provide a new perspective on policy optimization. In algorithmic game theory, Sokota et al. (2023) use relative strong convexity to establish fast convergence to quantal response equilibria in extensive-form games.

## C Mirror Descent Lemmas

Lemma 3 (Three Point Property (Chen & Teboulle, 1993)). *Let $B_\psi$ be the Bregman divergence with respect to $\psi: \mathcal{D} \to \mathbb{R}$. Then for any three points $x, y \in \operatorname{int}\mathcal{D}$ and $z \in \mathcal{D}$, the following holds:*
$$B_{\psi}(z;x)+B_{\psi}(x;y)-B_{\psi}(z;y)=\langle\nabla\psi(y)-\nabla\psi(x),z-x\rangle.$$

## C.1 Proof of Lemma 1

Lemma 1. *Let $B_\psi$ be the Bregman divergence with respect to a convex function $\psi: \mathcal{D} \to \mathbb{R}$ and assume Assumption 1 holds. Let $x_{t+1} = \arg\min_{x\in\mathcal{X}}\langle g_t, x\rangle + \frac{1}{\eta_t}B_\psi(x; x_t)$. Then for any $x_* \in \mathcal{X}$*
$$B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle g_{t},x_{t}-x_{*}\rangle-B_{\psi}(x_{t+1};x_{t})+\eta_{t}\langle g_{t},x_{t}-x_{t+1}\rangle.\tag{14}$$
*Furthermore, if $\psi$ is $\mu_\psi$-strongly convex over $\mathcal{X}$, then*
$$B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle g_{t},x_{t}-x_{*}\rangle+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\left\|g_{t}\right\|_{*}^{2}.\tag{15}$$

Proof. The proof follows closely the one presented in Orabona (2019)[Lemma 6.7]. First observe that $x_{t+1}$ satisfies the first-order optimality condition
$$\langle\eta_{t}g_{t}+\nabla\psi(x_{t+1})-\nabla\psi(x_{t}),x_{*}-x_{t+1}\rangle\geq0,$$
since $\nabla_x B_\psi(x; x_t) = \nabla\psi(x) - \nabla\psi(x_t)$. We start by examining the inner product $\langle\eta_t g_t, x_t - x_*\rangle$, adding and subtracting quantities to make the first-order optimality condition appear:
$$\begin{aligned}
\langle\eta_{t}g_{t},x_{t}-x_{*}\rangle&=\langle\eta_{t}g_{t}+\nabla\psi(x_{t+1})-\nabla\psi(x_{t}),x_{t+1}-x_{*}\rangle+\langle\nabla\psi(x_{t+1})-\nabla\psi(x_{t}),x_{*}-x_{t+1}\rangle+\langle\eta_{t}g_{t},x_{t}-x_{t+1}\rangle\\
&\leq\langle\nabla\psi(x_{t+1})-\nabla\psi(x_{t}),x_{*}-x_{t+1}\rangle+\langle\eta_{t}g_{t},x_{t}-x_{t+1}\rangle&&\text{(first-order optimality)}\\
&=B_{\psi}(x_{*};x_{t})-B_{\psi}(x_{*};x_{t+1})-B_{\psi}(x_{t+1};x_{t})+\langle\eta_{t}g_{t},x_{t}-x_{t+1}\rangle&&\text{(three point property)}.
\end{aligned}$$
Rearranging gives the first result. Note that at this point we only require $\psi$ to be convex and differentiable at $x_t$ and $x_{t+1}$, which is guaranteed by Assumption 1.
To obtain the second result, observe
$$\begin{aligned}
\langle\eta_{t}g_{t},x_{t}-x_{*}\rangle&\leq B_{\psi}(x_{*};x_{t})-B_{\psi}(x_{*};x_{t+1})-B_{\psi}(x_{t+1};x_{t})+\langle\eta_{t}g_{t},x_{t}-x_{t+1}\rangle&&\text{(from above)}\\
&\leq B_{\psi}(x_{*};x_{t})-B_{\psi}(x_{*};x_{t+1})-\frac{\mu_{\psi}}{2}\left\|x_{t+1}-x_{t}\right\|^{2}+\langle\eta_{t}g_{t},x_{t}-x_{t+1}\rangle&&\text{(strong convexity)}\\
&\leq B_{\psi}(x_{*};x_{t})-B_{\psi}(x_{*};x_{t+1})+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\left\|g_{t}\right\|_{*}^{2}&&\text{(Fenchel--Young inequality)}.
\end{aligned}$$
Rearranging gives the second result.

## D Non-Smooth Analysis of Mirror SPS for Lipschitz Functions

As we have already mentioned in the main paper, the Polyak stepsize is used extensively in the literature on projected subgradient descent for solving non-smooth optimization problems. However, to the best of our knowledge, there is no efficient generalization of this stepsize to the more general mirror descent update.

Theorem 10 (Non-smooth deterministic). *Assume $f$ is convex with bounded subgradients, $\|\partial f(x_t)\|_* \leq G$. Let $\psi$ be $\mu_\psi$-strongly convex with respect to the norm $\|\cdot\|$, and assume that Assumption 1 holds. Then mirror descent with stepsize $\eta_t = \frac{\mu_\psi(f(x_t)-f(x_*))}{\|\partial f(x_t)\|_*^2}$ satisfies*
$$f\left(\bar{x}_{t}\right)-f(x_{*})\leq G\sqrt{\frac{\frac{2}{\mu_{\psi}}B_{\psi}(x_{*};x_{1})}{t}},$$
*where $\bar{x}_t = \frac{1}{t}\sum_{s=1}^t x_s$. The same result holds for the best iterate $f(x_t^*) = \min_s\{f(x_s)\}_{1\leq s\leq t}$. Moreover, we have $\lim_{t\to\infty} f(x_t) = f(x_*)$.*

Proof. Let $g_t$ be a subgradient of $f$ at $x_t$ used to compute $\eta_t$. Then by Lemma 1 we have
$$\begin{aligned}
B_{\psi}(x_{*};x_{t+1})&\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle g_{t},x_{t}-x_{*}\rangle+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\|g_{t}\|_{*}^{2}\\
&\leq B_{\psi}(x_{*};x_{t})-\eta_{t}(f(x_{t})-f(x_{*}))+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\|g_{t}\|_{*}^{2}&&\text{(by convexity)}\\
&=B_{\psi}(x_{*};x_{t})-\mu_{\psi}\frac{(f(x_{t})-f(x_{*}))^{2}}{\|g_{t}\|_{*}^{2}}+\mu_{\psi}\frac{(f(x_{t})-f(x_{*}))^{2}}{2\|g_{t}\|_{*}^{2}}&&\text{(by definition of }\eta_{t}\text{)}\\
&=B_{\psi}(x_{*};x_{t})-\mu_{\psi}\frac{(f(x_{t})-f(x_{*}))^{2}}{2\|g_{t}\|_{*}^{2}}.
\end{aligned}$$
Rearranging and summing across time we have
$$\sum_{s=1}^{t}\frac{\mu_{\psi}\left(f(x_{s})-f(x_{*})\right)^{2}}{2\left\|g_{s}\right\|_{*}^{2}}\leq B_{\psi}(x_{*};x_{1})-B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{1}).\tag{16}$$
Applying the upper bound $\|g_s\|_* \leq G$ and taking the square root gives
$$\sqrt{\sum_{s=1}^{t}\left(f(x_{s})-f(x_{*})\right)^{2}}\leq G\sqrt{\frac{2B_{\psi}(x_{*};x_{1})}{\mu_{\psi}}}.$$
The result then follows by the convexity of $f$ and concavity of the square root function,
$$f(\bar{x}_{t})-f(x_{*})\leq\frac{1}{t}\sum_{s=1}^{t}(f(x_{s})-f(x_{*}))=\frac{1}{t}\sum_{s=1}^{t}\sqrt{(f(x_{s})-f(x_{*}))^{2}}\leq\sqrt{\frac{1}{t}\sum_{s=1}^{t}(f(x_{s})-f(x_{*}))^{2}}\leq G\sqrt{\frac{2B_{\psi}(x_{*};x_{1})}{t\mu_{\psi}}}.$$
To obtain the best-iterate result notice that $f(x_t^*) - f(x_*) \leq \frac{1}{t}\sum_{s=1}^t(f(x_s) - f(x_*))$. To attain the limiting result observe that (16) implies
$$\sum_{s=1}^{\infty}\mu_{\psi}\left(f(x_{s})-f(x_{*})\right)^{2}\leq2G^{2}B_{\psi}(x_{*};x_{1})<+\infty,$$
giving the result $\lim_{t\to\infty} f(x_t) = f(x_*)$.
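As a quick illustration of Theorem 10 (our own toy example, not from the paper's code): with $\psi(x) = \frac{1}{2}\|x\|_2^2$, so $\mu_\psi = 1$, the stepsize reduces to the classical Polyak stepsize, and the following sketch minimizes the non-smooth $f(x) = \|x - a\|_1$, for which $f(x_*) = 0$ is known:

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - a).sum()       # non-smooth, f(x_*) = 0 at x = a
subgrad = lambda x: np.sign(x - a)      # a subgradient of f at x

x = np.zeros(3)
for _ in range(200):
    g = subgrad(x)
    # Polyak stepsize with f* = 0; the tiny constant guards g = 0 at optimum
    eta = f(x) / (np.linalg.norm(g) ** 2 + 1e-12)
    x = x - eta * g
print(f(x))  # approaches 0
```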
## D.1 Last-Iterate Convergence to a Solution

Under the same assumptions as Theorem 10 we have that mirror descent converges to a point. First we provide a useful lemma applicable to mirror descent with a strongly convex mirror map $\psi$.

Lemma 4. *Suppose $\psi$ is strongly convex. If the sequence $\{x_t\}_{t\geq1}$ is Bregman monotone with respect to a set $S$, that is, for any $x \in S$ we have*
$$B_{\psi}(x;x_{t+1})\leq B_{\psi}(x;x_{t}),$$
*then $x_t \to x_* \in S$ if and only if all the sequential cluster points of $\{x_t\}_{t\geq1}$ are contained in $S$.*

Proof. If the sequence $\{x_t\}_{t\geq1}$ is Bregman monotone then by strong convexity
$$\frac{\mu_{\psi}}{2}\left\|x-x_{t+1}\right\|^{2}\leq B_{\psi}(x;x_{1}),$$
hence the sequence is bounded. Therefore the sequence has a limit point $x_l$, i.e., there exists a subsequence $x_{b_t} \to x_l$. Assume $x_l \in S$ and consider the sequence $\{y_t = B_\psi(x_l; x_t)\}_{t\geq1}$. Since $y_t$ is monotonically decreasing and bounded below, $y_t \to L$ for some $L \in \mathbb{R}$. However, the subsequence $\{B_\psi(x_l; x_{b_t})\}$ converges to zero; therefore $\lim_{t\to\infty} B_\psi(x_l; x_t) = 0$, implying that $\lim_{t\to\infty} x_t = x_l$ by strong convexity of $\psi$.

Corollary 11. *Under the same assumptions as Theorem 10, mirror descent converges to a solution,*
$$\lim_{t\to\infty}x_{t}=x_{*},$$
*for some $x_* \in \mathcal{X}_*$.*

Proof. From Theorem 10 we have the following inequality:
$$B_{\psi}(x_{*};x_{t+1})\leq B_{\psi}(x_{*};x_{t})-\frac{\mu_{\psi}\left(f(x_{t})-f(x_{*})\right)^{2}}{2\left\|g_{t}\right\|_{*}^{2}}.$$
Therefore, by Lemma 4 it remains to show that all limit points of $x_t$ are contained within $\mathcal{X}_*$. By Theorem 10 we have that $f(x_t) \to f(x_*)$. For any limit point $x_l$ we have a subsequence $x_{b_t}$ such that $x_{b_t} \to x_l$, and by continuity of $f$, $x_l$ must be a solution:
$$f(x_{*})=\lim_{t\to\infty}f(x_{t})=\lim_{t\to\infty}f(x_{b_{t}})=f(\lim_{t\to\infty}x_{b_{t}})=f(x_{l}).$$
The result then follows by Lemma 4.

## E Proof of mSPS Lower Bound in Section 4

The lower bound (10) on mSPS when $f_\xi$ is $L$-smooth, restated below, is vital to our analysis:
$$\frac{\mu_{\psi}}{2cL}\leq\eta_{t}=\frac{\mu_{\psi}(f_{\xi}(x_{t})-f_{\xi}^{*})}{c\,\|\nabla f_{\xi}(x_{t})\|_{*}^{2}}.$$
Notice the above inequality is equivalent to
$$\frac{1}{2L}\leq\frac{f_{\xi}(x_{t})-f_{\xi}^{*}}{\|\nabla f_{\xi}(x_{t})\|_{*}^{2}}.$$
The first inequality is attained by multiplying both sides by $\mu_\psi/c$. We provide a detailed proof below.

Lemma 5. *If $f: \mathbb{R}^n \to \mathbb{R}$ is $L$-smooth with respect to a norm $\|\cdot\|$ then*
$$\frac{\left\|\nabla f(x)\right\|_{*}^{2}}{2L}\leq f(x)-\inf_{y\in\mathbb{R}^{n}}f(y).$$
*Rearranging and defining $f^* = \inf_{y\in\mathbb{R}^n} f(y)$ gives*
$$\frac{1}{2L}\leq\frac{f(x)-f^{*}}{\|\nabla f(x)\|_{*}^{2}}.$$

Proof. Since $f$ is $L$-smooth we have
$$f(y)\leq f(x)+\langle\nabla f(x),y-x\rangle+\frac{L}{2}\left\|x-y\right\|^{2}\quad\forall x,y\in\mathbb{R}^{n}.$$
Writing $y = x + rz$ with $r \geq 0$ and $\|z\| \leq 1$, we have the following upper bound on $\inf_y f(y)$:
$$\begin{aligned}
\inf_{y}f(y)&\leq\min_{y}\;f(x)+\langle\nabla f(x),y-x\rangle+\frac{L}{2}\|x-y\|^{2}\\
&=\min_{r\geq0,\|z\|\leq1}\;f(x)+r\langle\nabla f(x),z\rangle+\frac{L}{2}r^{2}\|z\|^{2}\\
&\leq\min_{r\geq0,\|z\|\leq1}\;f(x)+r\langle\nabla f(x),z\rangle+\frac{L}{2}r^{2}\\
&=f(x)+\min_{r\geq0}\left\{\min_{\|z\|\leq1}r\langle\nabla f(x),z\rangle+\frac{L}{2}r^{2}\right\}\\
&=f(x)+\min_{r\geq0}\left\{-r\max_{\|z\|\leq1}\langle\nabla f(x),-z\rangle+\frac{L}{2}r^{2}\right\}\\
&=f(x)+\min_{r\geq0}\left\{-r\left\|\nabla f(x)\right\|_{*}+\frac{L}{2}r^{2}\right\}&&\text{(by the definition of }\|\cdot\|_{*}\text{)}\\
&=f(x)-\frac{\|\nabla f(x)\|_{*}^{2}}{L}+\frac{\|\nabla f(x)\|_{*}^{2}}{2L}&&(r=\|\nabla f(x)\|_{*}/L).
\end{aligned}$$
Simplifying and rearranging gives the result.

## F Proofs for Section 5

In this section we provide proofs of our main results in the relatively smooth setting. For convenience we denote the expectation conditional upon $(\xi_1, \xi_2, \cdots, \xi_t)$ as $\mathbb{E}_t[\cdot]$. All statements hold almost surely.

## F.1 Proof of Lemma 2

Lemma 2. *Suppose $f$ is $L$-smooth relative to $\psi$. Then if $\eta \leq \frac{1}{L}$ we have*
$$-B_{\psi}(x_{t+1};x_{t})+\eta\langle\nabla f(x_{t}),x_{t}-x_{t+1}\rangle\leq\eta\left(f(x_{t})-f(x_{t+1})\right).$$

Proof. Since $f$ is $L$-smooth relative to $\psi$ it is also $\frac{1}{\eta}$-smooth relative to $\psi$ (because $L \leq \frac{1}{\eta}$ and $\psi$ is convex). Therefore,
$$B_{f}(x_{t+1};x_{t})\leq\frac{1}{\eta}B_{\psi}(x_{t+1};x_{t})\implies-B_{\psi}(x_{t+1};x_{t})+\eta B_{f}(x_{t+1};x_{t})\leq0.$$
Now we examine the inner product $\eta\langle\nabla f(x_t), x_t - x_{t+1}\rangle$:
$$\begin{aligned}
\eta\langle\nabla f(x_{t}),x_{t}-x_{t+1}\rangle&=\eta\left(f(x_{t+1})-f(x_{t})-\langle\nabla f(x_{t}),x_{t+1}-x_{t}\rangle+f(x_{t})-f(x_{t+1})\right)\\
&=\eta\left(B_{f}(x_{t+1};x_{t})+f(x_{t})-f(x_{t+1})\right).
\end{aligned}$$
Therefore, we have the following:
$$\begin{aligned}
-B_{\psi}(x_{t+1};x_{t})+\eta\langle\nabla f(x_{t}),x_{t}-x_{t+1}\rangle&=-B_{\psi}(x_{t+1};x_{t})+\eta B_{f}(x_{t+1};x_{t})+\eta(f(x_{t})-f(x_{t+1}))\\
&\leq\eta(f(x_{t})-f(x_{t+1})).
\end{aligned}$$

Theorem 1. *Assume $\psi$ satisfies Assumption 1. Furthermore assume $f$ to be $\mu$-strongly convex relative to $\psi$ over $\mathcal{X}$, and $f_\xi$ to be $L$-smooth relative to $\psi$ over $\mathcal{X}$ almost surely. Then SMD with stepsize $\eta \leq \frac{1}{L}$ guarantees*
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq(1-\mu\eta)^{t}B_{\psi}(x_{*};x_{1})+\frac{\sigma_{\mathcal{X}}^{2}}{\mu}.$$

Proof. From Lemma 1 (before applying strong convexity, but assuming convexity of $\psi$) we have
$$\begin{aligned}
B_{\psi}(x_{*};x_{t+1})&\leq B_{\psi}(x_{*};x_{t})-\eta\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle-B_{\psi}(x_{t+1};x_{t})+\eta\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{t+1}\rangle\\
&\leq B_{\psi}(x_{*};x_{t})-\eta\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\eta(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{t+1}))&&\text{(by Lemma 2)}\\
&\leq B_{\psi}(x_{*};x_{t})-\eta\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\eta(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*}(\mathcal{X}))&&\text{(by definition of }f_{\xi_{t}}^{*}(\mathcal{X})\text{)}\\
&=B_{\psi}(x_{*};x_{t})-\eta\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\eta(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*}))+\eta(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}(\mathcal{X})).
\end{aligned}$$
Taking an expectation conditioned on $(\xi_1, \cdots, \xi_t)$ we obtain
$$\begin{aligned}
\mathbb{E}_{t}\left[B_{\psi}(x_{*};x_{t+1})\right]&\leq B_{\psi}(x_{*};x_{t})-\eta\langle\nabla f(x_{t}),x_{t}-x_{*}\rangle+\eta(f(x_{t})-f(x_{*}))+\eta\left(f(x_{*})-\mathbb{E}_{t}\left[f_{\xi_{t}}^{*}(\mathcal{X})\right]\right)\\
&=B_{\psi}(x_{*};x_{t})-\eta\underbrace{\left(f(x_{*})-f(x_{t})-\langle\nabla f(x_{t}),x_{*}-x_{t}\rangle\right)}_{B_{f}(x_{*};x_{t})}+\eta\left(f(x_{*})-\mathbb{E}_{t}\left[f_{\xi_{t}}^{*}(\mathcal{X})\right]\right)\qquad(17)\\
&\leq B_{\psi}(x_{*};x_{t})(1-\mu\eta)+\eta\left(f(x_{*})-\mathbb{E}_{t}\left[f_{\xi_{t}}^{*}(\mathcal{X})\right]\right)&&\text{(by relative strong convexity of }f\text{)}.
\end{aligned}$$
Now by the tower property of expectations and applying the definition of $\sigma_{\mathcal{X}}^2$,
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq\mathbb{E}\left[B_{\psi}(x_{*};x_{t})\right](1-\mu\eta)+\eta\sigma_{\mathcal{X}}^{2}.$$
Iterating the inequality gives
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq B_{\psi}(x_{*};x_{1})(1-\mu\eta)^{t}+\sum_{s=0}^{t-1}\eta\sigma_{\mathcal{X}}^{2}(1-\mu\eta)^{s}\leq B_{\psi}(x_{*};x_{1})(1-\mu\eta)^{t}+\frac{\sigma_{\mathcal{X}}^{2}}{\mu},$$
where the last inequality follows from $\sum_{s=0}^{t-1}(1-\mu\eta)^s \leq \sum_{s=0}^{\infty}(1-\mu\eta)^s = 1/\mu\eta$.

Corollary 2. *Under the same assumptions as Theorem 1, if $\sigma_{\mathcal{X}}^2 = 0$ then $B_\psi(x_*; x_{t+1}) \to 0$ almost surely.*

Proof. By Theorem 1 the following inequality holds:
$$\mathbb{E}_{t}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq B_{\psi}(x_{*};x_{t})(1-\mu\eta).\tag{18}$$
The result then follows by Franci & Grammatico (2022)[Lemma 4.7].

Theorem 3. *Assume $\psi$ satisfies Assumption 1. Furthermore assume $f_\xi$ to be $L$-smooth relative to $\psi$ over $\mathcal{X}$ almost surely. Then SMD with stepsize $\eta \leq \frac{1}{L}$ guarantees*
$$\mathbb{E}\left[\frac{1}{t}\sum_{s=1}^{t}B_{f}(x_{*};x_{s})\right]\leq\frac{B_{\psi}(x_{*};x_{1})}{\eta t}+\sigma_{\mathcal{X}}^{2}.$$

Proof. Note that in the proof of Theorem 1 relative strong convexity is not used to attain inequality (17). Therefore we have
$$\mathbb{E}_{t}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq B_{\psi}(x_{*};x_{t})-\eta B_{f}(x_{*};x_{t})+\eta\left(f(x_{*})-\mathbb{E}_{t}\left[f_{\xi_{t}}^{*}(\mathcal{X})\right]\right).$$
After applying the tower property, the definition of $\sigma_{\mathcal{X}}^2$, and rearranging, we have
$$\eta\mathbb{E}\left[B_{f}(x_{*};x_{t})\right]\leq\mathbb{E}\left[B_{\psi}(x_{*};x_{t})\right]-\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]+\eta\sigma_{\mathcal{X}}^{2}.$$
Summing across time and dividing by $\eta t$ gives the result.

Corollary 4.
*Under the assumptions of Theorem 3, if $f$ is convex and $\sigma_{\mathcal{X}}^2 = 0$ then $B_f(x_*; x_t) \to 0$ almost surely.*

Proof. Under interpolation $f(x_*) - \mathbb{E}_t\left[f_{\xi_t}^*(\mathcal{X})\right] = 0$. From Theorem 3 the following inequality holds:
$$\mathbb{E}_{t}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq B_{\psi}(x_{*};x_{t})-\eta B_{f}(x_{*};x_{t}).\tag{19}$$
Since $f$ is convex, $B_f(x_*; x_t) \geq 0$; therefore, by the Robbins--Siegmund Lemma (e.g., Franci & Grammatico (2022)[Lemma 4.1]), $B_f(x_*; x_t) \to 0$ almost surely.

Proposition 1. *Let $f(x) = \frac{1}{2}\left(\langle g, x\rangle - b\right)^2$. Then $B_f(x; y) = B_f(y; x)$.*

Proof. Note that $\nabla f(x) = (\langle g, x\rangle - b)g$. Therefore,
$$\begin{aligned}
B_{f}(x;y)&=\tfrac{1}{2}(\langle g,x\rangle-b)^{2}-\tfrac{1}{2}(\langle g,y\rangle-b)^{2}-(\langle g,y\rangle-b)\langle g,x-y\rangle&&(20)\\
&=\tfrac{1}{2}\langle g,x\rangle^{2}-\langle g,x\rangle b+\tfrac{1}{2}b^{2}-\tfrac{1}{2}\langle g,y\rangle^{2}+\langle g,y\rangle b-\tfrac{1}{2}b^{2}-(\langle g,y\rangle-b)\langle g,x-y\rangle&&(21)\\
&=\tfrac{1}{2}\langle g,x\rangle^{2}-\langle g,x\rangle b-\tfrac{1}{2}\langle g,y\rangle^{2}+\langle g,y\rangle b-(\langle g,y\rangle-b)\langle g,x-y\rangle&&(22)\\
&=\tfrac{1}{2}\langle g,x\rangle^{2}-\langle g,x\rangle b-\tfrac{1}{2}\langle g,y\rangle^{2}+\langle g,y\rangle b-\langle g,y\rangle\langle g,x\rangle+b\langle g,x\rangle+\langle g,y\rangle^{2}-b\langle g,y\rangle&&(23)\\
&=\tfrac{1}{2}\langle g,x\rangle^{2}+\tfrac{1}{2}\langle g,y\rangle^{2}-\langle g,y\rangle\langle g,x\rangle.&&(24)
\end{aligned}$$
It follows that $B_f$ is symmetric.
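Proposition 1 can also be verified symbolically. The following one-dimensional SymPy check is our own verification script, not part of the paper's released code:

```python
import sympy as sp

# For f(x) = 0.5*(g*x - b)^2 the Bregman divergence is symmetric (Prop. 1).
x, y, g, b = sp.symbols('x y g b')
f = sp.Rational(1, 2) * (g * x - b) ** 2

def breg(expr, u, v):
    # B_f(u; v) = f(u) - f(v) - f'(v) * (u - v)
    return expr.subs(x, u) - expr.subs(x, v) - sp.diff(expr, x).subs(x, v) * (u - v)

assert sp.simplify(breg(f, x, y) - breg(f, y, x)) == 0
print("B_f(x; y) == B_f(y; x)")
```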
## G Proofs for Section 6

In this section we provide proofs of our main results in the smooth setting. For convenience we denote the expectation conditional upon $(\xi_1, \xi_2, \cdots, \xi_t)$ as $\mathbb{E}_t[\cdot]$. All statements hold almost surely. Notice that by definition of mSPS$_{\max}$ we have the following upper bound:
$$\eta_{t}\leq\frac{\mu_{\psi}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{c\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{*}^{2}}.$$
Multiplying both sides of the inequality by $\eta_t\|\nabla f_{\xi_t}(x_t)\|_*^2/\mu_\psi$ gives the following useful inequality:
$$\frac{\eta_{t}^{2}\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{*}^{2}}{\mu_{\psi}}\leq\frac{\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{c}.\tag{25}$$
The inequality holds with equality for mSPS.

Theorem 5. *Assume $f_\xi$ is convex and $L$-smooth almost surely with respect to the norm $\|\cdot\|$. Furthermore, assume that $f$ is $\mu$-strongly convex relative to $\psi$ over $\mathcal{X}$, where $\psi$ is $\mu_\psi$-strongly convex over $\mathcal{X}$ with respect to the norm $\|\cdot\|$ and Assumption 1 holds. Then SMD with mSPS$_{\max}$ and $c \geq \frac{1}{2}$ guarantees*
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq(1-\mu\alpha)^{t}B_{\psi}(x_{*};x_{1})+\frac{\eta_{b}\sigma^{2}}{\alpha\mu},$$
*where $\alpha := \min\{\mu_\psi/2cL, \eta_b\}$.*

Proof.
$$\begin{aligned}
B_{\psi}(x_{*};x_{t+1})&\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\|\nabla f_{\xi_{t}}(x_{t})\|_{*}^{2}\\
&\overset{(25)}{\leq}B_{\psi}(x_{*};x_{t})-\eta_{t}\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\frac{\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{2c}\\
&\overset{(c\geq1/2)}{\leq}B_{\psi}(x_{*};x_{t})-\eta_{t}\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})\\
&=B_{\psi}(x_{*};x_{t})-\eta_{t}\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*})+f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*})\\
&=B_{\psi}(x_{*};x_{t})-\eta_{t}\underbrace{\left(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}(x_{t})-\langle\nabla f_{\xi_{t}}(x_{t}),x_{*}-x_{t}\rangle\right)}_{\geq0}+\eta_{t}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*})\\
&\overset{(10)}{\leq}B_{\psi}(x_{*};x_{t})-\min\left\{\frac{\mu_{\psi}}{2cL},\eta_{b}\right\}\left(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}(x_{t})-\langle\nabla f_{\xi_{t}}(x_{t}),x_{*}-x_{t}\rangle\right)+\eta_{b}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}).
\end{aligned}$$
Taking an expectation over $\xi_t$ conditioned on $x_t$ gives
$$\begin{aligned}
\mathbb{E}_{t}\left[B_{\psi}(x_{*};x_{t+1})\right]&\leq B_{\psi}(x_{*};x_{t})-\min\left\{\frac{\mu_{\psi}}{2cL},\eta_{b}\right\}\left(f(x_{*})-f(x_{t})-\langle\nabla f(x_{t}),x_{*}-x_{t}\rangle\right)+\eta_{b}\mathbb{E}_{t}\left[f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}\right]\\
&\leq B_{\psi}(x_{*};x_{t})\left(1-\mu\min\left\{\frac{\mu_{\psi}}{2cL_{\max}},\eta_{b}\right\}\right)+\eta_{b}\mathbb{E}_{t}\left[f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}\right]&&\text{(by relative strong convexity of }f\text{)}\\
&=B_{\psi}(x_{*};x_{t})(1-\mu\alpha)+\eta_{b}\mathbb{E}_{t}\left[f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}\right].
\end{aligned}$$
Now by the tower property of expectations and applying the definition of $\sigma^2$,
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq\mathbb{E}\left[B_{\psi}(x_{*};x_{t})\right](1-\mu\alpha)+\eta_{b}\sigma^{2}.$$
Iterating the inequality gives
$$\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\leq B_{\psi}(x_{*};x_{1})(1-\mu\alpha)^{t}+\sum_{s=0}^{t-1}\eta_{b}\sigma^{2}(1-\mu\alpha)^{s}\leq B_{\psi}(x_{*};x_{1})(1-\mu\alpha)^{t}+\frac{\eta_{b}\sigma^{2}}{\alpha\mu},$$
where the last inequality follows from $\sum_{s=0}^{t-1}(1-\mu\alpha)^s \leq \sum_{s=0}^{\infty}(1-\mu\alpha)^s = 1/\mu\alpha$.

Theorem 7. *If $f_\xi$ is convex and $L$-smooth with respect to a norm $\|\cdot\|$ almost surely, Assumption 1 holds, and $\psi$ is $\mu_\psi$-strongly convex over $\mathcal{X}$ with respect to $\|\cdot\|$, then mirror descent with mSPS$_{\max}$ and $c \geq 1$ guarantees*
$$\mathbb{E}\left[f(\bar{x}_{t})-f(x_{*})\right]\leq\frac{2B_{\psi}(x_{*};x_{1})}{\alpha t}+\frac{2\eta_{b}\sigma^{2}}{\alpha},$$
*where $\alpha := \min\{\mu_\psi/2cL, \eta_b\}$.*

Proof. We begin with Lemma 1:
$$\begin{aligned}
B_{\psi}(x_{*};x_{t+1})&\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\langle\nabla f_{\xi_{t}}(x_{t}),x_{t}-x_{*}\rangle+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\|\nabla f_{\xi_{t}}(x_{t})\|_{*}^{2}\\
&\leq B_{\psi}(x_{*};x_{t})-\eta_{t}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*})\right)+\frac{\eta_{t}^{2}}{2\mu_{\psi}}\|\nabla f_{\xi_{t}}(x_{t})\|_{*}^{2}&&\text{(by convexity)}\\
&\overset{(25)}{\leq}B_{\psi}(x_{*};x_{t})-\eta_{t}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*})\right)+\frac{\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{2c}\\
&\overset{(c\geq1)}{\leq}B_{\psi}(x_{*};x_{t})-\eta_{t}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*})\right)+\frac{\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{2}\\
&=B_{\psi}(x_{*};x_{t})-\eta_{t}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*}+f_{\xi_{t}}^{*}-f_{\xi_{t}}(x_{*})\right)+\frac{\eta_{t}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})}{2}\\
&=B_{\psi}(x_{*};x_{t})-\frac{\eta_{t}}{2}\underbrace{\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*}\right)}_{\geq0}+\eta_{t}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*})\\
&\overset{(10)}{\leq}B_{\psi}(x_{*};x_{t})-\frac{\alpha}{2}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*}\right)+\eta_{b}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*})\\
&=B_{\psi}(x_{*};x_{t})-\frac{\alpha}{2}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*}))-\frac{\alpha}{2}\left(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}\right)+\eta_{b}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*})\\
&\leq B_{\psi}(x_{*};x_{t})-\frac{\alpha}{2}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*}))+\eta_{b}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}).
\end{aligned}$$
Recall from (10) that we have
$$\alpha=\min\left\{\frac{\mu_{\psi}}{2cL_{\max}},\eta_{b}\right\}\leq\eta_{t}\leq\eta_{b}.$$
By a simple rearrangement we have
$$\frac{\alpha}{2}\left(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*})\right)\leq B_{\psi}(x_{*};x_{t})-B_{\psi}(x_{*};x_{t+1})+\eta_{b}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}).$$
Taking an expectation on both sides, multiplying by $2/\alpha$, and applying the definition of $\sigma^2$ yields
$$\mathbb{E}\left[f(x_{t})-f(x_{*})\right]\leq\frac{2}{\alpha}\left(\mathbb{E}\left[B_{\psi}(x_{*};x_{t})\right]-\mathbb{E}\left[B_{\psi}(x_{*};x_{t+1})\right]\right)+\frac{2\eta_{b}}{\alpha}\sigma^{2}.$$
Summing across time, applying convexity of $f$, and dividing by $t$ gives
$$\mathbb{E}\left[f(\bar{x}_{t})-f(x_{*})\right]\leq\frac{1}{t}\sum_{s=1}^{t}\mathbb{E}\left[f(x_{s})-f(x_{*})\right]\leq\frac{2B_{\psi}(x_{*};x_{1})}{\alpha t}+\frac{2\eta_{b}\sigma^{2}}{\alpha}.$$

## G.2.1 Constant Stepsize Corollary

In this section we present the constant stepsize corollary for Theorem 7. If $\eta_b \leq \mu_\psi/2L$ then mSPS$_{\max}$ with $c = 1$ is a constant stepsize because of the lower bound (10), $\eta_t = \eta_b$, and we have $\eta_b = \alpha$. Plugging these values into Theorem 7 gives the following corollary.

Corollary 8. *Assume $f_\xi$ is convex and $L$-smooth with respect to a norm $\|\cdot\|$ almost surely, Assumption 1 holds, and $\psi$ is $\mu_\psi$-strongly convex over $\mathcal{X}$ with respect to the norm $\|\cdot\|$. Then stochastic mirror descent with $\eta \leq \mu_\psi/2L$ guarantees*
$$\mathbb{E}\left[f(\bar{x}_{t})-f(x_{*})\right]\leq\frac{2B_{\psi}(x_{*};x_{1})}{\eta t}+2\sigma^{2}.$$

## G.3 SGD with Preconditioning

In this section we extend the result of mSPS$_{\max}$ to the non-convex setting when $f$ is smooth and satisfies the PL condition. The result generalizes Theorem 3.6 in Loizou et al. (2021) by replacing SGD with preconditioned SGD. Note that in this case $\psi(x) = \frac{1}{2}\langle x, Mx\rangle$ is ($\mu_\psi = 1$)-strongly convex with respect to the norm $\|\cdot\|_M$, and $B_\psi(x; y) = \frac{1}{2}\|x - y\|_M^2$.
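Concretely, the update analyzed in this subsection takes the following form; this is a minimal sketch of our own, assuming access to $f_{\xi}^*$ (which equals $0$ in our experiments):

```python
import numpy as np

def precond_msps_step(x, grad, f_val, M_inv, f_star=0.0, c=1.0, eta_b=1.0):
    """One step of preconditioned SGD with the mSPS_max stepsize of
    Section G.3: eta_t = min{(f - f*)/(c * ||grad||_{M^{-1}}^2), eta_b},
    followed by x <- x - eta_t * M^{-1} grad."""
    grad_Minv_sq = grad @ (M_inv @ grad)   # ||grad||_{M^{-1}}^2
    eta = min((f_val - f_star) / (c * grad_Minv_sq), eta_b)
    return x - eta * (M_inv @ grad)
```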
Assumption 2 (Polyak (1964); Łojasiewicz (1963)). *Assume that $f: \mathbb{R}^n \to \mathbb{R}$ satisfies the PL condition with respect to the norm $\|\cdot\|_*$: there exists $\mu > 0$ such that for all $x \in \mathbb{R}^n$*
$$\|\nabla f(x)\|_{*}^{2}\geq2\mu(f(x)-f^{*}).\tag{26}$$

Theorem 9. *Assume that $f$ and $f_\xi$ are $L$-smooth with respect to the norm $\|\cdot\|_M$ almost surely, where $M$ is a positive definite matrix. Furthermore, assume that $f$ satisfies the PL condition (26) with respect to the norm $\|\cdot\|_{M^{-1}}$. Then unconstrained stochastic mirror descent with $\psi(x) = \frac{1}{2}\|x\|_M^2$ and stepsizes*
$$\eta_{t}=\min\left\{\frac{f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*}}{c\left\|\nabla f_{\xi_{t}}(x_{t})\right\|_{M^{-1}}^{2}},\eta_{b}\right\},$$
*with $c > \frac{L_{\max}}{4\mu}$ and $\eta_b < \max\left\{1/\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right), \frac{1}{2cL_{\max}}\right\}$, guarantees*
$$\mathbb{E}\left[f(x_{t+1})-f(x_{*})\right]\leq\nu^{t}(f(x_{1})-f(x_{*}))+\frac{L\sigma^{2}\eta_{b}}{2(1-\nu)c},$$
*where $\alpha=\min\{\frac{1}{2cL_{\max}},\eta_{b}\}$ and $\nu=\eta_{b}(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c})\in(0,1)$.*

Proof. We have that the algorithm performs updates of the form
$$x_{t+1}=x_{t}-\eta_{t}M^{-1}\nabla f_{\xi_{t}}(x_{t}).$$
We first apply the $L$-smoothness upper bound on $f$:
$$\begin{aligned}
f(x_{t+1})&\leq f(x_{t})+\langle\nabla f(x_{t}),x_{t+1}-x_{t}\rangle+\frac{L}{2}\|x_{t+1}-x_{t}\|_{M}^{2}\\
&=f(x_{t})-\eta_{t}\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L\eta_{t}^{2}}{2}\left\|M^{-1}\nabla f_{\xi_{t}}(x_{t})\right\|_{M}^{2}\\
&=f(x_{t})-\eta_{t}\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L\eta_{t}^{2}}{2}\langle M^{-1}\nabla f_{\xi_{t}}(x_{t}),MM^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle\\
&=f(x_{t})-\eta_{t}\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L\eta_{t}^{2}}{2}\|\nabla f_{\xi_{t}}(x_{t})\|_{M^{-1}}^{2}.
\end{aligned}$$
Dividing by $\eta_t$ and applying (25) we obtain
$$\begin{aligned}
\frac{f(x_{t+1})-f(x_{t})}{\eta_{t}}&\leq-\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L\eta_{t}}{2}\|\nabla f_{\xi_{t}}(x_{t})\|_{M^{-1}}^{2}\\
&\overset{(25)}{\leq}-\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L}{2c}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}^{*})\\
&=-\langle\nabla f(x_{t}),M^{-1}\nabla f_{\xi_{t}}(x_{t})\rangle+\frac{L}{2c}(f_{\xi_{t}}(x_{t})-f_{\xi_{t}}(x_{*}))+\frac{L}{2c}(f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}).
\end{aligned}$$
We proceed by taking an expectation over $\xi_t$ conditioned on $x_t$:
$$\begin{aligned}
\mathbb{E}_{t}\left[\frac{f(x_{t+1})-f(x_{t})}{\eta_{t}}\right]&\leq-\langle\nabla f(x_{t}),M^{-1}\nabla f(x_{t})\rangle+\frac{L}{2c}(f(x_{t})-f(x_{*}))+\frac{L}{2c}\mathbb{E}_{t}\left[f_{\xi_{t}}(x_{*})-f_{\xi_{t}}^{*}\right]\\
&\leq-\left\|\nabla f(x_{t})\right\|_{M^{-1}}^{2}+\frac{L}{2c}(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}\\
&\overset{(26)}{\leq}-2\mu(f(x_{t})-f(x_{*}))+\frac{L}{2c}(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}.
\end{aligned}$$
Let $\alpha = \min\{\frac{\mu_\psi}{2cL_{\max}}, \eta_b\}$. Then
$$\begin{aligned}
\mathbb{E}_{t}\left[\frac{f(x_{t+1})-f(x_{*})}{\eta_{t}}\right]&\leq\mathbb{E}_{t}\left[\frac{f(x_{t})-f(x_{*})}{\eta_{t}}\right]-2\mu(f(x_{t})-f(x_{*}))+\frac{L}{2c}(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}\\
&\leq\frac{1}{\alpha}(f(x_{t})-f(x_{*}))-2\mu(f(x_{t})-f(x_{*}))+\frac{L}{2c}(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}\\
&=\left(\frac{1}{\alpha}-2\mu+\frac{L}{2c}\right)(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}\\
&\leq\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right)(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}.
\end{aligned}$$
Therefore we have the following sequence of inequalities:
$$\mathbb{E}_{t}\left[\frac{f(x_{t+1})-f(x_{*})}{\eta_{b}}\right]\leq\mathbb{E}_{t}\left[\frac{f(x_{t+1})-f(x_{*})}{\eta_{t}}\right]\leq\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right)(f(x_{t})-f(x_{*}))+\frac{L}{2c}\sigma^{2}.$$
By the tower property of expectations and multiplying both sides by $\eta_b$ we have
$$\mathbb{E}\left[f(x_{t+1})-f(x_{*})\right]\leq\underbrace{\eta_{b}\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right)}_{\nu}\mathbb{E}\left[f(x_{t})-f(x_{*})\right]+\frac{\eta_{b}L}{2c}\sigma^{2}.$$
If $\nu \in (0, 1)$ then iterating the inequality and summing the geometric series gives the result:
$$\mathbb{E}\left[f(x_{t+1})-f(x_{*})\right]\leq\nu^{t}(f(x_{1})-f(x_{*}))+\sum_{s=0}^{t-1}\nu^{s}\frac{\eta_{b}L}{2c}\sigma^{2}\leq\nu^{t}(f(x_{1})-f(x_{*}))+\frac{\eta_{b}L\sigma^{2}}{2(1-\nu)c}.$$
Therefore, it remains to show that $0 < \nu < 1$. For the lower bound notice that $\alpha \leq \frac{1}{2cL_{\max}}$:
$$\nu=\eta_{b}\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right)\geq\eta_{b}\left(2cL_{\max}-2\mu+\frac{L_{\max}}{2c}\right)=\eta_{b}\left(\left(2c+\frac{1}{2c}\right)L_{\max}-2\mu\right)>0.$$
Following similar arguments as in Loizou et al. (2021)[Theorem 3.6], we can show $\nu < 1$ by considering two cases. Recall from our assumptions that $c > \frac{L_{\max}}{4\mu}$ and $\eta_b < \max\left\{1/\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right), \frac{1}{2cL_{\max}}\right\}$; therefore we consider the following two cases:
$$\eta_{b}<\frac{1}{2cL_{\max}}\tag{27}$$
$$\eta_{b}<\frac{1}{\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}}.\tag{28}$$
For the first case (27) we have $\alpha = \eta_b$ and
$$\nu=\eta_{b}\left(\frac{1}{\eta_{b}}-2\mu+\frac{L_{\max}}{2c}\right)=1-2\eta_{b}\mu+\frac{L_{\max}}{2c}\eta_{b}\overset{(c>\frac{L_{\max}}{4\mu})}{<}1-2\mu\eta_{b}+2\mu\eta_{b}=1.$$
For the second case (28) we have $\alpha = \frac{1}{2cL_{\max}}$ and by the upper bound we have
$$\nu=\eta_{b}\left(\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}\right)<1.$$
However, we also have $\alpha = \frac{1}{2cL_{\max}} \leq \eta_b$; to avoid a contradiction we need
$$\frac{1}{2cL_{\max}}<\frac{1}{\frac{1}{\alpha}-2\mu+\frac{L_{\max}}{2c}}=\frac{1}{2cL_{\max}-2\mu+\frac{L_{\max}}{2c}},$$
which holds by assumption since $c > \frac{L_{\max}}{4\mu}$.

## H Experiment Details

In this section we provide details for our experiments, including the updates for the different mirror descent algorithms. Note that in all our experiments we have $f_\xi^* = 0$.

## H.1 Compute Resources

We ran around a thousand experiments using an internal cluster, where each experiment uses a single NVIDIA Tesla P100 GPU, 40GB of RAM, and 4 CPUs. Some experiments, like the synthetic ones, took only a few minutes to complete, while the deep learning experiments like CIFAR10 took about 12 hours.

## H.2 Mirror Descent Across p-Norms

We select $\psi(x) = \frac{1}{2}\|x\|_p^2$ and $\mathcal{X} = \mathbb{R}^d$ for $1 < p \leq 2$.
We have in this case that $\psi$ is $\mu_\psi = (p-1)$-strongly convex with respect to the norm $\|\cdot\|_p$, with dual norm $\|\cdot\|_q$ where $q$ is such that $1/p + 1/q = 1$ (Orabona, 2019). Therefore, as defined in Corollary 8, mSPS$_{\max}$ with $c = 1$ is
$$\eta_{t}=\min\left\{\frac{(p-1)(f_{\xi}(x_{t})-f_{\xi}^{*})}{\left\|\nabla f_{\xi}(x_{t})\right\|_{q}^{2}},\eta_{b}\right\},$$
and similarly for mSPS. The closed-form update for mirror descent in this case is given by the following coordinate-wise updates (Duchi, 2018): let $\phi^p: \mathbb{R}^d \to \mathbb{R}^d$ have component functions $\phi_i^p(x) = (\|x\|_p)^{2-p}\operatorname{sign}(x_i)|x_i|^{p-1}$; then the mirror descent update with stepsize $\eta_t$ is
$$x_{t+1}=\phi^{q}(\phi^{p}(x_{t})-\eta_{t}\nabla f_{\xi}(x_{t})).$$

## H.3 Projected Gradient Descent with Positive Constraints

We select $\psi(x) = \frac{1}{2}\|x\|_2^2$ to recover projected gradient descent; in this case $\psi$ is $\mu_\psi = 1$-strongly convex with respect to the Euclidean norm and $\|\cdot\|_* = \|\cdot\|_2$. Since $\mathcal{X} = \mathbb{R}_+^d$, the non-negative orthant, the projection step amounts to clipping negative values (setting them to zero).

## H.4 Exponentiated Gradient with $\ell_1$ Constraint

We consider the case of supervised learning with constraint set $\mathcal{X} = \{x : \|x\|_1 \leq \lambda\}$. In our experiments we set $\lambda = 10{,}000 \cdot d$. To apply the exponentiated gradient algorithm we equivalently write the set $\mathcal{X}$ as the convex hull of its corners, $\mathcal{X} = \{\Lambda x : x \in \Delta_{2d}\}$, where $\Delta_{2d}$ is the $(2d-1)$-dimensional probability simplex and $\Lambda$ is a matrix with $2d$ columns and $d$ rows,
$$\Lambda=\begin{bmatrix}\lambda&-\lambda&0&0&\cdots&0&0\\0&0&\lambda&-\lambda&\cdots&0&0\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\0&0&0&0&\cdots&\lambda&-\lambda\end{bmatrix}.$$
Therefore we can use the exponentiated gradient algorithm with constraint set $\Delta_{2d}$ by selecting $\psi(x) = \sum_{i=1}^{2d} x_i\log(x_i)$. In this case $\psi$ is $\mu_\psi = 1$-strongly convex on $\Delta_{2d}$ with respect to the norm $\|\cdot\|_1$. Since the dual norm is $\|\cdot\|_* = \|\cdot\|_\infty$, we have that mSPS$_{\max}$ with $c = 1$ is
$$\eta_{t}=\min\left\{\frac{f_{\xi}(x_{t})-f_{\xi}^{*}}{\left\|\nabla f_{\xi}(x_{t})\right\|_{\infty}^{2}},\eta_{b}\right\},$$
and similarly for mSPS. The mirror descent update can then be written in two steps (Bubeck, 2015):
$$y_{t+1}=x_{t}\odot\exp(-\eta_{t}\nabla f_{\xi}(x_{t})),\qquad x_{t+1}=\frac{y_{t+1}}{\left\|y_{t+1}\right\|_{1}},$$
where $\odot$ and $\exp$ denote component-wise multiplication and component-wise exponentiation, respectively.
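The following minimal NumPy sketch (our own illustration; the released code may structure this differently) collects the three updates described in Sections H.2–H.4:

```python
import numpy as np

def phi_p(x, p):
    """Coordinate-wise map phi^p of Section H.2, i.e. the gradient of
    psi(x) = 0.5 * ||x||_p^2; assumes x is not the zero vector."""
    norm = np.linalg.norm(x, ord=p)
    return norm ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

def pnorm_md_step(x, grad, eta, p):
    """Mirror descent step x_{t+1} = phi^q(phi^p(x_t) - eta * grad)."""
    q = p / (p - 1.0)  # dual exponent, 1/p + 1/q = 1
    return phi_p(phi_p(x, p) - eta * grad, q)

def spgd_nonneg_step(x, grad, eta):
    """Projected gradient step for X = R^d_+ (Section H.3): clip negatives."""
    return np.maximum(x - eta * grad, 0.0)

def eg_step(x, grad, eta):
    """Exponentiated gradient step on the simplex (Section H.4)."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()
```

Note that $\phi^q$ is the inverse of $\phi^p$ when $1/p + 1/q = 1$, which is what makes the two-step $p$-norm update above a valid mirror descent step.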
## H.5 Additional Results Across p-Norms

We observe in Figure 4 that mSPS outperforms a large grid of stepsizes for most values of $p$. Note that we used the mushroom dataset with the kernel bandwidth selected in Vaswani et al. (2019b), which satisfies interpolation.

Figure 4: Comparison between mSPS with $c = 1$ and constant stepsizes on a convex binary-classification problem on the mushroom dataset.

For the non-convex multiclass classification problem in Figure 5 we use MNIST. MNIST has a training set consisting of 60k examples and a test set of 10k examples. We use a 1-hidden-layer multi-layer perceptron (MLP) of width 1000. We again observe that mSPS is either competitive with or better than most constant stepsizes across various values of $p$.

Figure 5: Comparison between mSPS with $c = 0.2$ and constant stepsizes on MNIST across different values of $p$.

Figure 6: Comparison between mSPS with and without smoothing on non-convex multiclass classification with deep networks across different values of $p$.
Review 1: Summary: The paper investigates the convergence of stochastic mirror descent (SMD) under interpolation in relatively smooth and smooth convex optimization. The authors provide new convergence guarantees for SMD with a constant stepsize in relatively smooth convex optimization. They also propose a new adaptive stepsize scheme, the mirror stochastic Polyak stepsize (mSPS), for smooth convex optimization. Notably, their convergence results do not make bounded gradient assumptions or bounded variance assumptions, and they show convergence to a neighborhood that vanishes under interpolation. Strengths and Weaknesses: The paper is well-structured and provides a comprehensive analysis of the SMD under different conditions. The introduction of Polyak as an adaptive stepsize for SMD seems to be an interesting contribution, and the authors' approach to not using an online to batch reduction is noteworthy. The paper also stands out for its extensive numerical experiments for adaptive SMD across a wide variety of domains and mirror descent algorithms for both constrained and unconstrained problems. In terms of technical aspects, the analysis of relative smoothness and Polyak step size appears to be fairly standard, and the convergence results are as expected. Additionally, it's unclear to me what the standard for TMLR is, but in my view, this paper may not meet the high standards set by conferences such as ICML or NeurIPS. Requested Changes: The use of Polyak'stepsize seems to be of limited use. Maybe I missed something, is it possible to use the adaptive stepsize when the optimal value is unknown? can you provide a discussion? Broader Impact Concerns: In my opinion, there don't appear to be any specific concerns regarding Broader Impact. ================================================== Review 2: Summary: This paper considers stochastic mirror descent with both constant step sizes and Polyak stepsizes. For the constant step sizes, the paper presents convergence in a neighborhood for strongly convex problems, and convergence in terms of Bregman distance of $f$ for smooth problems. For the Polyak step sizes, the paper presents convergence in a neighborhood for strongly convex problems, and convergence in function values for smooth and convex problems. Experimental results are presented to show the effectiveness of the proposed algorithm. Strengths and Weaknesses: **Strength** - Mirror descent is a powerful extension of gradient descent which covers many instantiations of existing algorithms, e.g., exponentiated gradient, gradient descent. The paper studies convergence rate of stochastic mirror descent. The proposed results are general. - The existing results often require a bounded sub-gradient assumption or a bounded variance assumption. The paper gives convergence rates without these assumptions. - The existing analysis on Polyak step size is conducted for SGD. The paper gives the first analysis on Polyak step size for stochastic mirror descent. The analysis seems to be rigorous. - There are various experimental results. **Weakness** - While mirror descent is more general, the framework of studying the convergence of mirror descent has already been established in the literature. The analysis presented in the paper seems to be standard in the mirror descent literature. For example, all the analysis is based on Lemma 1, which has already been developed in the literature. - Within the recent progress in optimization, people have already known how to handle the unbounded gradient assumption. 
For example, it is now becoming standard to remove the bounded gradient assumption by using the smoothness assumption of the loss function. Therefore, it is not surprising to remove the bounded gradient assumption in the paper. - The analysis of Polyak step sizes has already been done for SGD. The paper extends it to SMD. While this extension is definitely useful, it seems that the extension follows standard techniques. Requested Changes: The paper is very well written and the results are solid. My only concern is the novelty of the analysis. Both the analysis and the results are a bit standard. I would like to see more discussions on the novelty of the paper in the revision, e.g., what are the technical challenges of the extension and technical contributions of the analysis? Minor comments: Section 3.1: "it it" should be "it" Section 5.3.1: "is a quadratic" should be "is quadratic" Section C.1: "stronly" should be "strongly" Broader Impact Concerns: I have no impact concerns. ================================================== Review 3: Summary: This paper studies the mirror descent method for stochastic optimization. The main contributions are two-fold: on the one hand it proves new theory for constant step-size mirror descent based on an interpolation constant. On the other hand, it proposes a new method which is reminiscent of Polyak's step size adapted to the setting of mirror descent (mSPS). For this new method, theoretical results (mainly for the convex case) are established; experiments show that mSPS achieves good convergence with less tuning due to an adaptive step size. Strengths and Weaknesses: Strengths: Adaptive learning rates such as the Polyak step size have drawn interest in the recent years because of their favourable performance for problems with (almost) interpolation and their improved sensitivity to learning rate tuning. This paper studies the important question how we can make use of these techniques for mirror descent methods where specific variable constraints or domains motivate the use of a different proximal term. Even though the results of section 5 and 6 seem somewhat unrelated at first sight, the paper has a clear structure and the contributions of these two sections are linked through their relation to the interpolation constant. The paper gives a thorough literature review and relates their results nicely within the context of previous work which makes the paper an enjoyable read. Weaknesses: * I think that the result of section 5 (e.g. Theorem 1) is useful mainly in the cases where the interpolation constant is reasonably small. A major difference to the works compared in the paper is that one can not force the constant term small by making the step size small (even though the other terms are large). For the result of e.g. Dragomir et al 2021 this is different - the constant term in their work multiplies with the step size. Hence, if the interpolation constant is not very small, the constant term in Thm. 1 is not in control of the user. While this is in general fine for me, I think that it needs to be pointed out and explained clearer as it limits the applicability of the convergence result to problems with zero or reasonably small interpolation constant. * Regarding the point above, I think that the example in 5.3.1 is insightful to showcase the advantage of your results but also slightly one-sided. 
You could give a second example where the interpolation constant is large to show what happens in such a case (I would not see it as a weakness if your result is less useful for such settings). * The same remark applies to Theorem 5 and subsequent results: if $\eta_b$ is set small then at some point $\alpha=\eta_b$ and it will cancel out in the constant term. Hence, the constant term cannot be made small by reducing the parameter $\eta_b$. * The theoretical results (except for Thm 9 using the PL condition) are mainly in the convex setting. From the current paper it is unclear to me if the presented results could be extended to more general nonconvex settings. Requested Changes: Some of the below are questions which I had while reading - it might be sufficient to clarify them during the discussion period without changing the paper. On the theoretical part: * Do you assume that $\mathcal{X} \subset \mathcal{D}$? This seems to be somewhat implicitly assumed in Def. 2 as $B_{\psi}$ is defined via $\mathcal{D}$, but I couldn't find it in the paper. * I think you need to assume that $B_{\psi}$ is non-negative (e.g. for Corollary 2), i.e. $\psi$ is convex. I couldn't find this assumption in the paper, but maybe I just skipped it. Can you clarify? * Page 3: what do you mean by "bounded constraint"? * For the theory of section 6, you use the interpolation constant $\sigma^2$ and not $\sigma^2_{\mathcal{X}}$. I think this comes from the lower bound of $\eta_t$, but it seemed a bit hidden to me why this is the case - I guess that one would prefer to have $\sigma^2_{\mathcal{X}}$ if it was possible. Maybe you could explain this phenomenon in a short sentence/paragraph. On the experiments: * For some experiments, it looks like the biggest step size is still the best one for mirror descent, e.g. first row of cifar10, p=1.4. I would have expected that the largest one is always too large, so that we can be sure that the best step size is among the ones that were run. But maybe it's just misleading because there are lines missing from diverged runs? If not, I think you should extend the step size range in order to make sure that the (approximately) best step size is among the ones that were run. * The runs for ijcnn and rcv1 for the l1-constrained problem look a bit strange: it looks to me like mirror descent would need a smaller step size, as the ones that are displayed are already quite unstable. Also, the scale of the loss there (on the order of 10^6) looks uncommonly high to me. Related to this, I did not find the value of $\lambda$ you chose. Further, in section G.4, in the mirror descent update I can't see any influence of $\lambda$ - but it should have one. I think the result from Bubeck, 2015 is only valid if the domain is the simplex, but in your case we have an additional $\Lambda$? * I understand that you use the smoothing of $\eta_b$ and the values of $c$ for mSPS based on the Loizou et al 2021 paper. However, we should be careful as this introduces additional tuning for mSPS which has to be accounted for - in the sense that some hyperparameters are varied per experiment, which is not done for Mirror Descent. It is unclear to me when $c=1$ or when $c=0.2$ should be used - is it only depending on convexity, and what's the intuition for this? For completeness, it would be nice to see - for example - also the results for the choice $c=1$ in the experiments where $c=0.2$ is used. Minor remarks: * In several spots, you write $f_i$ instead of $f_\xi$ (e.g.
page 2, end of first bullet or several times in Appendix F). * You cite an Assumption right below Assum. 1, from the book of Orabona. As this book has lots of content, it would be nice to be more specific where that assumption can be found in the book. The same applies to the result of Hanzely & Richtarik 2021 and Dragomir et al 2021 in section 5.3. Please state explicitly which theorem you compare to as it makes it easier for the reader. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Accept with minor revision Comment: In consensus with the reviewers, I find this paper to have enough novelty and significance to warrant publication, so I am happy to recommend publication. It's well-written and pushes existing results forward in a few new directions. Please make a few small changes, e.g., some of the authors in the bibliography have abbreviated first names (A. Nemirovski...) while most others don't; please update references to journal versions when possible; and I'd change the "Anatoli Iouditski" and Nesterov paper to the "Juditsky" transliteration (as he himself uses on his PDF copy, https://arxiv.org/pdf/1401.1792.pdf ). Lemma 3, the three point property, was proven by Chen and Teboulle, "Convergence analysis of a proximal-like minimization algorithm using Bregman functions", 1993, so I would cite that instead of Bubeck '15/Orabona '19. One reviewer requested a link to an open source implementation of the experiments, if you have that. Reviewer qw5m had requested that the case c=1 to be run for the non-convex experiments (in Fig. 3). If this doesn't fit in well with the main paper, it would be OK to put it in the appendix. Finally, to make the paper more accessible to a general audience, I would add a small section on relative smoothness to the appendix, since it's a new concept (from Lu, Freund & Nesterov '16) ==================================================
# Sample-Efficient Self-Supervised Imitation Learning

Anonymous authors Paper under double-blind review

## Abstract

Imitation learning allows an agent to acquire skills or mimic behaviors by observing an expert performing a given task. While imitation learning approaches successfully replicate the observed behavior, they are limited to the trajectories generated by the expert, both regarding their quality and availability. In contrast, while reinforcement learning does not need a supervised signal to learn the task, it requires a lot of computation, which can result in sub-optimal policies when we are dealing with resource constraints. To address those issues, we propose Reinforced Imitation Learning (RIL), a method that learns optimal policies using a very small sample of expert behavior to substantially speed up the process of reinforcement learning. RIL leverages expert trajectories to learn how to mimic behavior while also learning from its own experiences in a typical reinforcement learning fashion. A thorough set of experiments shows that our method outperforms both imitation and reinforcement learning methods, providing a good compromise between sample efficiency and task performance.

## 1 Introduction

Humans have the ability to learn by observing other individuals performing certain activities. We can learn actions from individuals even without having prior information on their behavior (Rizzolatti & Sinigaglia, 2010). For example, we can learn tasks such as cooking, drawing, playing an instrument, or playing games just by watching videos. Our capabilities go beyond merely imitating; we can learn from a demonstration of a given task despite differences in the environment, body, or objects that constitute the demonstration. Despite being studied in different areas, such as psychology (Vogt & Thomaschke, 2007; Király et al., 2013) and robotics (Schaal, 1999; Ratliff et al., 2007; Raza et al., 2012), learning by imitation recently became a prominent field in the area of artificial intelligence (AI) (Bandura & Walters, 1977; Hussein et al., 2017; Fang et al., 2019).

Reinforcement learning (RL), in turn, is one of the approaches employed in AI for learning without supervised signals, often in a trial-and-error strategy based on post-action rewards. The output of RL is a policy that specifies how an agent should act at any given time. Value-based methods compute the optimal policy by first estimating the expected value of each action using samples from the environment, and then choosing the actions with the maximum estimated expected values. However, certain environments with sparse rewards may require a very large number of interactions before the agent reaches a reward that can be propagated to other states, making RL a strategy considerably slower than supervised learning. The assumption that the only information available to the learning agent is the immediate environmental reward also makes the learning problem very hard (Sutton & Barto, 2018, Ch. 3).

By contrast, humans can learn much faster if they have an expert showing them what to do. Imitation learning (IL) builds on this idea by mimicking the behavior of an agent (known as *the expert*) that successfully completes the task of interest, even when its behavior is not necessarily what could be considered the *optimal* behavior. However, having access to a large number of samples provided by an expert is not the case in most application domains, in which we are often only provided with very few observations.
For addressing the disadvantages of both IL and RL, we develop a novel approach called Reinforced Imitation Learning (RIL), shown in Figure 1. Our method guides itself through the environment looking at both immediate rewards and a limited amount of observations from an expert.

![1_image_0.png](1_image_0.png)

Figure 1: Reinforced Imitation Learning framework.

We show that RIL outperforms state-of-the-art IL approaches based on behavior cloning, regarding both *performance* (P) and *Average Episodic Reward* (AER) metrics. We also show that RIL is competitive with (and often better than) RL approaches, though with the advantage of learning much more efficiently, being thus the strategy that presents the best trade-off between effectiveness and efficiency for real-world applications.

## 2 Related Work

We briefly review recent approaches to both reinforcement and imitation learning, starting with the former. Deep Q-Network (DQN) (Mnih et al., 2013) is an approach that learns from the agent's experience using a deep neural network with hierarchical layers to approximate the optimal Q-function. Proximal Policy Optimization (PPO) (Schulman et al., 2017) combines ideas from Asynchronous Advantage Actor-Critic (Mnih et al., 2016) and Trust Region Policy Optimization (Schulman et al., 2015). It uses multiple workers to avoid a replay buffer and employs a trust region to guarantee monotonic improvement. Like PPO, Actor-Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al., 2017) uses ideas from other methods to improve the efficiency of the learning process.

The most straightforward form of imitation learning from observation is Behavioral Cloning (BC) (Pomerleau, 1988), which treats imitation learning as a supervised problem. It uses samples comprised of the state at time $t$, the action, and the resulting state $(s_t, a, s_{t+1})$ from an expert to learn a policy that approximates the expert's trajectory. However, such an approach becomes costly for more complex scenarios, requiring more samples and information about the action effects on the environment. Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) solves this issue by matching the state-action frequencies of the agent to those seen in the demonstrations. GAIL uses adversarial training to discriminate state-action pairs from either the agent or the expert while minimizing the difference between both. It requires less expert data, though it needs substantial interaction with the environment. Recent self-supervised approaches (Torabi et al., 2018; Gavenski et al., 2020) that learn from observations make use of the expert's transitions $(s_t, s_{t+1})$ and leverage random transitions $(s_t, a, s_{t+1})$ in order to learn the inverse dynamics of the environment, and afterwards employ this knowledge to generate pseudo-labels for the expert's trajectories. Imitating Latent Policies from Observation (ILPO) (Edwards et al., 2019) differs from those previous studies by trying to estimate the probability of a latent action given a state. Within a limited number of environment steps, it remaps the latent actions to the corresponding actions.

There is a specific line of research in which RL uses demonstrations to assist the policy, and that could be viewed as a middle ground (or hybrid approach) between RL and IL. DAGGER (Ross et al., 2011) iteratively produces new policies based on pulling the expert policy outside its original state space.
Deep Q-learning from Demonstration (DQfD) (Hester et al., 2018) uses human demonstrations in a DQN fashion to pre-train its policy. Active Deep Q-learning with Demonstration (ARLD) (Chen et al., 2020) improves the approach of using human demonstrations to guide the learning process by introducing an active learning mechanism. Even though most of those hybrid approaches appear in the recent literature (Chen et al., 2021), note that they require the expert policy or ground-truth labels to provide feedback to the agent.

In this paper, we assume scenarios in which we do not have access to ground-truth labels for performing IL. All we have access to is the trajectory performed by an expert when acting in a given scenario, while all the rest can be learned self-supervisedly. We want to make it clear that we do not have access to the actions performed by the expert. In contrast, by having access to ground-truth information, those hybrid methods require an additional setup that our method does not, *e.g.,* training a policy to act as the expert or annotating a large number of demonstrations with the corresponding actions. Furthermore, these methods use IL as a form of divergence minimization, while our proposed approach uses IL as its primary training source and RL as a trajectory correction between observed and actual data. With that being said, in this paper we only compare our proposed method with baselines that do either reinforcement or imitation learning, but not to hybrid approaches that have access to ground-truth labels.

## 3 Reinforced Imitation Learning

Reinforced Imitation Learning (RIL) interleaves imitation and reinforcement learning steps to converge to an optimal policy in a very sample-efficient manner. RIL employs the idea of self-supervisedly learning policies based on an inverse dynamics model (Torabi et al., 2018; Monteiro et al., 2020; Gavenski et al., 2020) and then refining and improving such a policy with the reward-based exploration typically performed in q-learning (Watkins & Dayan, 1992; Mnih et al., 2013; Kaiser et al., 2019). The final goal is a new algorithm capable of using samples of an expert's trajectory to guide the design of a policy while also using its own experiences to correct its trajectory by exploring states outside the expert's trajectories. In this work, we base our implementation of the RL component on the original unmodified DQN architecture, since it shares interesting similarities with the self-supervised IL component (*e.g.,* both are off-policy), and we know that *exploration versus exploitation* trade-offs play a crucial role in achieving higher rewards.

## 3.1 Problem Formulation

Our problem assumes an agent acting in a Markov Decision Process (MDP) represented by a five-tuple $\mathcal{M} = \{S, A, T, r, \gamma\}$ (Sutton & Barto, 2018), in which: $S$ is the state space, $A$ is the action space, $T$ is the transition model, $r$ is the immediate reward function, and $\gamma$ is the discount factor. Solving an MDP yields a stochastic policy $\pi(a \mid s)$ with a probability distribution over actions for an agent in state $s$ to perform. Imitation from observation (Torabi et al., 2018) aims to learn the inverse dynamics $M_a^{s_t, s_{t+1}} = P(a \mid s_t, s_{t+1})$ of the agent, *i.e.,* the probability distribution of each action $a$ when the agent transitions from state $s_t$ to $s_{t+1}$.
In this problem, the learning agent knows neither the reward function nor the actions performed by the expert, so we want to find an imitation policy from a set of state-only demonstrations of the expert $D = \{\zeta_i\}_{i=1}^{N}$, where $\zeta$ is a variable-length state-only trajectory.

## 3.2 Self-Supervised Imitation Learning

Self-supervised imitation learning is a framework that usually comprises a module to learn the inverse dynamics of the environment (Inverse Dynamics Model, IDM) and a module to learn an imitation policy. The IDM is responsible for learning the actions through a transition of states $M_\theta(a \mid s_t, s_{t+1})$, while the policy ($\pi_\phi$) acts as a stationary model predicting the most likely action $a$ given $s_t$. To learn these transitions, we can use $\pi_\phi$ with random weights to create a pre-demonstration dataset ($I^{pre}$) comprised of $(s_t, a, s_{t+1})$ samples. The IDM then uses $I^{pre}$ to learn the inverse dynamics of the agent by finding the parameters $\theta^*$ that best describe the state transitions. Since there are no expert labels, we make use of pairs of states from the expert demonstration $(s^e_t, s^e_{t+1})$ and the IDM to predict the action responsible for each expert transition. Subsequently, the policy model uses these self-supervised labels to learn $\pi(\hat{a} \mid s_t)$; however, considering that $I^{pre}$ contains random actions, the pseudo-labels generated by the IDM can be far from the expert's. To mitigate this issue, we can rely on an iterative process, where the updated policy creates new samples $I^{pos}$ and balances $I^s$ with all trajectories that reach the environment goal. This process allows the model to maintain a weighted distribution between the random and updated policy samples and avoid local minima, since the probability of actions vanishing in each iteration is minimal.

## 3.3 Exploration With Neural Networks And Q-Values

The second element of our proposed approach is an exploration mechanism via RL based on q-values and neural networks. One such method is DQN (Mnih et al., 2013), which employs a deep neural network with hierarchical layers to approximate the optimal Q-function in Equation 1, where $R_{t+1}$ is the reward received when transitioning from state $S_t$ to $S_{t+1}$, $\alpha$ is the learning rate, $\gamma$ is the discount factor, $a$ is the action, and $Q$ is approximated by a deep neural network.

$$Q\left(S_{t},A_{t}\right)\leftarrow Q\left(S_{t},A_{t}\right)+\alpha\left[R_{t+1}+\gamma\max_{a}Q\left(S_{t+1},a\right)-Q\left(S_{t},A_{t}\right)\right] \tag{1}$$

DQN implements an experience replay mechanism that stores a set of observations from the environment to update the Q-function with random samples. This process solves the correlation issue between sequences of observations and smooths changes in the data distribution. For each episode, the algorithm has a probability $\epsilon$ of selecting a random action instead of using the action-value function. The value of $\epsilon$ usually decreases as training progresses to trade the exploration of early states for the exploitation of late states (near the goal). The agent executes the $\epsilon$-greedy action in the environment, which returns a reward and the next state.
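To make the update in Equation 1 concrete, the sketch below implements it for a tabular Q-function together with the $\epsilon$-greedy action selection just described. This is a minimal illustrative version, not the DQN implementation used in our experiments, which replaces the table with a deep network and samples minibatches from the experience replay.

```python
import numpy as np

def epsilon_greedy(Q, state, n_actions, epsilon, rng):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def td_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Equation 1: Q(S_t, A_t) <- Q(S_t, A_t)
       + alpha * [R_{t+1} + gamma * max_a Q(S_{t+1}, a) - Q(S_t, A_t)]"""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# Toy usage on a discretized state space (e.g., a 20x20 grid with 3 actions):
rng = np.random.default_rng(0)
Q = np.zeros((20 * 20, 3))
s, a, r, s_next = 0, 1, -1.0, 5   # one fictitious transition
td_update(Q, s, a, r, s_next)
next_action = epsilon_greedy(Q, s_next, n_actions=3, epsilon=0.1, rng=rng)
```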
## 3.4 Combining IL And RL

Reinforced Imitation Learning iteratively creates new samples using the environment and combines ideas from both IL and RL to understand how a policy can benefit from both approaches. Since the construction of a new $I^{pos}$ consists of using the updated policy in the environment, where we have access to states and rewards, we introduce a reinforcement learning approach to learn by experience. Thus, the entire learning pipeline of RIL is presented in Algorithm 1, whose main steps are: (i) create dataset $I^{pre}$ using $\pi_\phi$ and set it as $I^s$ (lines 4-5); (ii) use $I^s$ to learn the inverse dynamics of the environment (line 7); (iii) label the expert actions $\hat{A}$ responsible for the state transitions in the expert samples $T^e$ with the IDM network (line 8); (iv) use $T^e$ and $\hat{A}$ to train the policy $\pi_\phi$ in an IL fashion (line 9); (v) use $\pi_\phi$ in the environment to create new state transitions $I^{pos}$ and employ the temporal-difference update to further learn from its own experiences (lines 10-13); (vi) use a sampling mechanism to create a new dataset $I^s$ to feed the IDM network (line 14); and (vii) repeat steps ii-vi until no further improvement is noticed (either when no actions change between two consecutive epochs or no significant reduction in loss is observed in consecutive epochs).

Algorithm 1 Reinforced Imitation Learning
1: Initialize model $M_\theta$ as a random approximator
2: Initialize policy $\pi_\phi$ with random weights
3: Generate state transitions $T^e$ from expert demonstration
4: Generate $I^{pre}$ using policy $\pi_\phi$
5: Set $I^s = I^{pre}$
6: **while** $\pi_\phi$ improves from either method **do**
7: &nbsp;&nbsp;Update $M_\theta$ by train($M_\theta$, $I^s$)
8: &nbsp;&nbsp;Generate pseudo-labels $\hat{A}$ by $M_\theta(T^e)$
9: &nbsp;&nbsp;Update $\pi_\phi$ by BCLoss($T^e$, $\hat{A}$)
10: &nbsp;&nbsp;**for** $e \leftarrow 1$ **to** $|E|$ **do**
11: &nbsp;&nbsp;&nbsp;&nbsp;Use $\pi_\phi$ to solve environment $e$
12: &nbsp;&nbsp;&nbsp;&nbsp;Append samples $I^{pos} \leftarrow (s_t, a_t, s_{t+1})$
13: &nbsp;&nbsp;&nbsp;&nbsp;Update $\pi_\phi$ by tdLoss($I^{pos}$, $A$)
14: &nbsp;&nbsp;$I^s \leftarrow$ goalSampler($I^{pos}$)

RIL uses $\epsilon$-greedy exploration in its RL-based component for learning states outside the expert's trajectories. Most often, $\epsilon$-greedy approaches decrease the exploration chance as time passes; however, RIL interchanges learning from demonstration and experience. Shifting its learning approaches can result in acquiring information that might not be ideal from both perspectives. Therefore, RIL adapts its exploration behavior according to its certainty. RIL's policy uses the softmax distribution of its output to predict the action. Thus, as the policy learns to separate the different actions, it chooses actions other than the maximum *a posteriori* label less often. In every iteration of RIL, we compute the exploration ratio of the policy during the self-supervised learning component (Line 9) and use the same number for epsilon (Lines 10-13). This approach allows the model to explore at the beginning, as $\epsilon$-greedy policies would, and as the model learns to differentiate the actions, it allows for less exploration and more exploitation. This strategy does not need to rely on a time-dependent decaying function for the exploration values (as commonly seen in RL), and allows for increased exploration when the policy finds itself in local minima.

RL approaches are not commonly designed to learn an optimal policy in few episodes. Thus, we also need to adapt the size of the experience replay: if its size is too big, there will be fewer updates during each iteration, which can lead to less desirable actions; if too small, there will be fewer samples, resulting in sub-optimal weight updates. Considering that RIL has access to the average size of each expert trajectory, we use this information to decide the size of the replay memory according to each environment. RIL sets the experience replay size to be 10× the average size of the expert samples it has access to for each domain. The drawback of RIL is that part of its learning approach depends on a reward signal (RL) and another part is self-supervised (IL), in an iterative process that constantly shifts its data and labels. A sketch of one such iteration is given below.
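The sketch below summarizes one RIL iteration in Python, including the certainty-based $\epsilon$ described above. The five trailing arguments (train_idm, bc_update, run_episode, td_update_policy, goal_sampler) are hypothetical placeholders standing in for the components of Algorithm 1, not our actual implementation.

```python
import numpy as np
from collections import deque

def exploration_ratio(action_probs):
    """Average probability mass the policy assigns to non-greedy actions;
    RIL reuses this certainty measure as epsilon in the following RL phase."""
    return float(np.mean(1.0 - np.max(action_probs, axis=1)))

def ril_iteration(idm, policy, expert_pairs, I_s, envs, avg_expert_len,
                  train_idm, bc_update, run_episode, td_update_policy, goal_sampler):
    """One pass over steps (ii)-(vi) of Algorithm 1; helper names are ours."""
    train_idm(idm, I_s)                                     # (ii) fit the IDM on I^s
    pseudo_labels = idm.predict(expert_pairs)               # (iii) label expert pairs
    probs = bc_update(policy, expert_pairs, pseudo_labels)  # (iv) BC step; softmax outputs
    epsilon = exploration_ratio(probs)                      # certainty-based exploration rate
    replay = deque(maxlen=10 * avg_expert_len)              # replay sized 10x expert trajectory
    new_samples = []
    for env in envs:                                        # (v) act and apply the TD update
        new_samples += run_episode(env, policy, epsilon, replay)
        td_update_policy(policy, replay)
    return goal_sampler(new_samples)                        # (vi) build the next I^s
```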
To overcome this data and label shift, we modify both components in three different ways: (i) we clip the gradients of both the reinforcement and imitation learning components, (ii) we add layer normalization to both the IDM and policy models, and (iii) we adapt the sampling mechanism of the self-supervised strategy to reduce the inherited covariate shift. The first adaptation reduces the gradients of both learning methods. Since both parts of RIL learn with different objectives, the clipping allows for lower variance during learning. The reason for the second modification is the shift in data and labels that constantly happens due to the self-supervised iterative nature of the method. During the training of typical IL methods, the expert dataset contains a vast number of samples, which minimizes the consequences of the covariate shift on the training procedure. However, RIL contains much fewer expert samples. Thus, when the pseudo-labels from the IDM significantly change from one epoch to another, a deterioration of the policy occurs. Adding layer normalization to RIL allows the model to learn how to correctly normalize all neurons in each layer according to the samples, which in turn allows the policy to learn. Finally, we alter the sampling mechanism that shapes the policy's action distribution in $I^s$ according to the model's capability of solving the environment. Sampling from the softmax distribution allows the IDM model to rapidly learn a distribution outside $I^{pre}$, which is balanced for all actions. However, this approach does not yield good results in cases with very few expert samples. The constant change in the IDM's predictions combined with the small number of examples deteriorates the policy. To avoid this issue, we introduce an upper limit on the number of samples taken from $I^{pos}$, which we compute using Equation 2, where $n$ is a hyperparameter used to define the number of epochs it takes for the upper limit to be set to 100%, $e$ is the current epoch, and $k$ is the slope of the curvature. Such an approach allows RIL to have smoother changes between epochs, reducing the covariate shift.

$$\limsup \mathcal{I}^{pos} = 1 - \frac{1}{1 + \left(\frac{n}{e} - 1\right)^{-k}} \tag{2}$$

## 4 Experimental Methodology

Regarding the upper limit of samples, we set $k = 2$ for all main experiments in this work and give each algorithm 150 epochs ($e = 150$) for training. These values will only vary in the ablation studies we conduct in later sections. Gradient clipping values for the RL model are between $[-0.5, 0.5]$ and for the IL models between $[-1, 1]$. Details on the neural network topologies, *e.g.,* number of layers, neurons, $\alpha$, and more are all described in Section 4.1 for each environment.

We evaluate RIL and the related IL work in terms of both *Average Episodic Reward* and *Performance* metrics. AER is the average reward of 100 episodes for each environment. Since AER depends on an environment's reward function, its value differs from task to task. AER measures how well the algorithm performs the task and indicates how difficult it is for the agent to imitate the expert's behavior. *Performance* is the average reward for each run scaled from 0 to 1, where 0 is the random policy reward and 1 is the expert's. A model can achieve scores lower than zero if it performs worse than random actions and higher than one if it performs better than the expert. We do not use accuracy as a metric since achieving high accuracy in $I^s$ does not guarantee solving the problems.
The accuracy of the policy highly correlates with the predictions of the IDM, which do not carry information beyond the agent's behavior. For the RL approaches, we compare the sample efficiency of each method by counting how many samples each algorithm receives before reaching a certain reward, instead of counting the usual timesteps, since RIL uses expert samples as well as environment samples; this allows for a fair comparison. We use two different DQN methods: the original version (Mnih et al., 2013) (DQN1), from which we borrow several mechanisms for RIL; and its latest version (Schaul et al., 2015) (DQN2), which holds the state of the art for most environments experimented with in this paper. We also use two other RL algorithms as baselines: PPO and ACKTR. We select these particular algorithms because they present very different approaches that end up resulting in optimal policies for the Acrobot and MountainCar environments. In this work, all IL methods apart from RIL do not employ any RL mechanism.

## 4.1 Environments And Network Topologies

The models' memory usage varies since the network topologies vary (⩽ 1GB). For the IL models, both the IDM and policy networks use the Cross-Entropy Loss, while the Temporal Difference Loss is used by the RL model. We employ the Adam optimizer (Kingma & Ba, 2014) with its default values in all models. Below we briefly describe each environment and its respective neural network topologies and learning rates:

i) CartPole-v1 is an environment where an agent pushes a cart sideways intending to sustain a pole vertically upward as long as possible. The environment has a discrete action space composed of *left* and *right* actions, while the state space has 4 dimensions: *cart position*, *cart velocity*, *pole angle*, and *pole velocity at tip*. Barto et al. (1983) define solving CartPole as getting an average reward of 195 over 100 consecutive trials. The learning rate for this domain is $5 \times 10^{-4}$ for both models. The architecture for both the IDM and policy models here is an MLP with two layers of 8 neurons activated with LReLU, and a self-attention layer with layer normalization.

ii) Acrobot-v1 includes an agent with two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height. The state space consists of $\{\cos \theta_1, \sin \theta_1, \cos \theta_2, \sin \theta_2, \dot{\theta}_1, \dot{\theta}_2\}$, and the action space consists of the 3 possible forces. Sutton (1996) first described Acrobot and later Geramifard et al. (2015) improved it, which is the version we use. Acrobot-v1 is an unsolved environment, *i.e.,* it does not have a specified reward threshold at which it is considered solved. The learning rate for this domain is $5 \times 10^{-5}$ for the IDM and policy models, and $5 \times 10^{-4}$ for the RL model. Both models share the same architecture, which is a two-layer model with 32 neurons activated with LReLU, self-attention, and layer normalization.

iii) MountainCar-v0 consists of a car on a one-dimensional track positioned between two "mountains". The state space has two dimensions, the car's position along the track and its velocity, and the action space consists of 3 possible signals to move the car (−1, 0, or 1). To achieve the goal in this environment, the car has to acquire the required momentum and reach a flag placed on the top of the second mountain. Moore (1990) defines solving MountainCar as getting an average reward of −110 over 100 consecutive trials.
Here the learning rate is set to $5 \times 10^{-3}$ for the IDM model, and $5 \times 10^{-4}$ for the policy and RL models, while the network topology is kept the same as in the Acrobot-v1 environment.

iv) LunarLander-v2 is an environment where an agent needs to land on the moon. The agent has four different actions (do nothing, fire the left engine, fire the right engine, and fire the main engine to reduce the falling velocity), and the way the agent actuates influences the reward. Firing an engine costs reward (−0.3 per frame for the main engine). If the agent moves toward the designated landing area (always at coordinates (0, 0)), the environment returns a positive value. However, moving away from these coordinates results in losing the previously earned reward. Finally, when reaching the ground, the environment checks whether the agent has landed or crashed and awards 100 or −100 points, respectively. To solve the LunarLander-v2 environment, the agent must receive a reward of 200 over 100 consecutive trials. The learning rate for this domain is $5 \times 10^{-4}$ for the IDM model, $5 \times 10^{-7}$ for the policy model, and $5 \times 10^{-6}$ for the RL model. Both models share the same architecture, which is a two-layer model with 128 neurons activated with LReLU, with self-attention layers and layer normalization.

## 5 Experimental Results

## 5.1 Policy Optimization Behavior

IL and RL methods work on the same premise: an agent needs to learn an approximation to a theoretically optimal policy in the form of an MDP. Nevertheless, IL focuses on a more specific optimal policy, *i.e.,* the expert's. At the same time, RL learns how to optimize its value function, thus achieving one of many possible optimal functions for each environment. We hypothesize that RIL yields better policies than IL methods since it learns from its own experiences, while also achieving results much more efficiently than RL methods. To validate that hypothesis, we conduct an experiment where we compute the KL Divergence of a trajectory with four different policies: (i) the optimal policy ($\pi^*$), (ii) an RL policy (Mnih et al., 2013), (iii) an IL policy (Gavenski et al., 2020), and (iv) the RIL policy. Since $\pi^*$ may be one of many theoretically optimal policies, we do not use these results as a form of quantifying any of the policies created. However, upon carefully analyzing them, we can draw intuitions regarding the combination of RL and IL into a single policy.

We compute two different KL Divergence values. First, we compare all models with $\pi^*$ and compute the difference over the probabilities of all possible actions. This result shows how similar a policy is to the optimal one regarding its mapping of the likelihood of actions given a state. However, such a difference is trivial when an agent during evaluation uses only a greedy approach for choosing an action. Thus, we also compute the KL Divergence using one-hot encodings for the specific action given a state (KL-Divergence∗).
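For clarity, the snippet below shows how the two divergences can be computed from per-state action distributions along a trajectory. The toy arrays, the smoothing constant, and the clipping used to keep the one-hot variant finite are our assumptions for illustration, not the exact evaluation code.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """Mean KL divergence over a trajectory; p and q are (T, n_actions) arrays.
    Clipping is our assumption so the one-hot variant stays finite."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q), axis=1).mean())

def one_hot(probs):
    """Greedy one-hot encoding of each state's maximum a posteriori action."""
    encoded = np.zeros_like(probs)
    encoded[np.arange(len(probs)), probs.argmax(axis=1)] = 1.0
    return encoded

# Toy two-state trajectory with three actions:
pi_star = np.array([[0.34, 0.33, 0.33], [0.90, 0.05, 0.05]])
policy  = np.array([[0.10, 0.80, 0.10], [0.70, 0.20, 0.10]])

kl_soft   = kl(pi_star, policy)                    # "KL Divergence" in Table 1
kl_greedy = kl(one_hot(pi_star), one_hot(policy))  # "KL Divergence*" in Table 1
```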
Table 1 shows that RIL is the furthest policy from $\pi^*$ regarding the first metric (9.6476). This is likely to occur for two reasons: the expert may not be acting optimally, and the reinforcement learning updates can drastically alter the softmax probabilities. For the first hypothesis, we can look at Figure 2, which shows the probability of the *maximum a posteriori* action for all discretized values. When comparing $\pi^*$ to the other policies, we can see that in the middle of the *valley* $\pi^*$ has a degree of uncertainty (≈ 33% for each action), which does not show for the other policies. When comparing RL and IL, we see that the RL policy shows behavior more similar to $\pi^*$ than IL does. The IL policy carries more certainty than the other policies for all the discretized values (≈ 58%). This behavior can explain the poor performance of this model. Since IL approaches only take expert samples into consideration in a supervised fashion, the model's certainty only takes into account the classification problem, without considering the sparse reward that MountainCar presents.

![6_image_0.png](6_image_0.png)

Figure 2: Visualization of the MountainCar-v0 environment. Each figure illustrates the maximum a posteriori probabilities in a 3D mesh.

| Metric | π∗ | RL (DQN) | IL (IUPE) | RIL |
|----------------|-------|------------|-------------|--------|
| Reward | -86 | -87 | -162 | -84 |
| KL Divergence | - | 2.0869 | 4.9863 | 9.6476 |
| KL Divergence∗ | - | 1.7345 | 1.7345 | 0.8094 |

Table 1: KL Divergence for all three models when compared to an optimal policy ($\pi^*$).

On the other hand, when we compare RIL to the other methods, we observe that the *valley* could be a mash-up of the RL and IL approaches. Although RIL still does not generate a result more in line with $\pi^*$, it balances its certainty to create a more moderate mapping for the discretized values. When comparing RIL with the greedy optimal policy, our method achieves the highest similarity (0.8094). This result shows that both methods, despite their difference in probabilities, are quite similar, *i.e.,* both methods agree on the action for the same state. Figure 3 illustrates how close RIL is to $\pi^*$ by discretizing the continuous state space of MountainCar into a 20×20 Q-table, plotting the *maximum a posteriori* action, and coloring the states that are equal to $\pi^*$. The figure shows that, in a discrete space, RIL is closer to the optimal policy than the remaining methods. We do not consider the reward in this episode a detrimental factor for the result in KL-Divergence∗, since RL is as distant from the optimal policy as IL. Finally, upon analyzing the regular KL Divergence, RIL should be closer to zero than RL if the reward were a significant factor for performing well in the MountainCar environment.

![7_image_0.png](7_image_0.png)

Figure 3: Comparison between policies trained in the MountainCar-v0 environment. We only color the tiles that have the same action as $\pi^*$ for easier visualization.

## 5.2 Sample Efficiency

To understand how RIL benefits from both approaches, we conduct two different experiments to answer the following research questions: (i) how does each IL algorithm perform when only given one episode from its expert, and (ii) how many samples does each RL method use before solving the environment (or, in the cases where the algorithm is not able to, before reaching its maximum reward)?

## 5.2.1 Imitation Learning

Since RIL reaches P ⩾ 1 with only one episode from the expert, we give the same single trajectory to all other IL methods during this experiment. The results in Table 2 show the average and standard deviation
Algorithms Metric | CartPole | Acrobot | MountainCar | LunarLander | Average P | | |-----------------------------------------|-------------|-----------------|------------------|----------------|------------------|-------------| | Random | AER | 18.7 | −482.6 | −200 | −182.72 | 0 ± 0 | | P | 0 | 0 | 0 | 0 | | | | Expert | AER | 500 | −85 | −106 | 235.96 | 1 ± 0 | | P | 1 | 1 | 1 | 1 | | | | BC | AER | 490.96 ± 19.65 | −122.75 ± 2.99 | −129.92 ± 4.14 | 131.84 ± 53.25 | 0.84 ± 0.12 | | P | 0.98 ± 0.04 | 0.91 ± 0.01 | 0.75 ± 0.04 | 0.75 ± 0.13 | | | | GAIL | AER | 185.07 ± 168.25 | −279.02 ± 104.91 | −196 ± 10.99 | 59.03 ± 87.76 | 0.36 ± 0.23 | | P | 0.35 ± 0.34 | 0.51 ± 0.26 | 0.04 ± 0.12 | 0.58 ± 0.21 | | | | ILPO | AER | 456.87 ± 4.10 | −125.92 ± 19.23 | −200 ± 0 | −451.81 ± 247.53 | 0.29 ± 0.75 | | P | 0.91 ± 0.01 | 0.90 ± 0.04 | 0 ± 0 | −0.64 ± 0.59 | | | | IUPE | AER | 144.64 ± 11.65 | −232.38 ± 50.92 | −198.00 ± 6.00 | −203.05 ± 35.51 | 0.19 ± 0.27 | | P | 0.26 ± 0.02 | 0.55 ± 0.16 | 0.02 ± 0.06 | −0.05 ± 0.08 | | | | RIL | AER | 500 ± 0 | −79.52 ± 4.49 | −100.29 ± 1.59 | 261.73 ± 9.91 | 1.04 ± 0.03 | | P | 1 ± 0 | 1.01 ± 0.01 | 1.06 ± 0.02 | 1.06 ± 0.02 | | | Table 2: *Performance* (P) and *Average Episode Reward* (AER) for each IL methods with only one expert's trajectory as data. of 10 different runs for each learning algorithm. We also perform an ablation study and test each algorithm with an increased number of trajectories in Section 6.3. Considering that for the CartPole and Acrobot environments there is almost no variation in their initial states, one trajectory should be enough to achieve their goal, even though not optimally. We hypothesize that all methods have comparable results in these cases. Nevertheless, just ILPO and RIL results were good enough to achieve the goal for the CartPole environment, *i.e.,* r ⩾ 195. In contrast, GAIL and IUPE achieved performance around 0.30, with GAIL being only 10 reward points from the goal, though far from the expert. The Acrobot environment does not have a defined goal, but we can define that a reward close to −80 can be considered ideal, as is the case for the expert. However, only RIL was able to reach such a result. ILPO achieves a performance of 0.9, with IUPE being close with 0.8 performance points. Since a random policy achieves an AER of −482.6, both methods converge to policies closer to the expert than GAIL, which only achieves −279.02 reward points. By contrast, MountainCar depends heavily on the agent starting position, while LunarLander alters its objective during each iteration. Having only one trajectory to learn how to mimic the expert is a significant disadvantage. This limitation is evident in the overall results for all IL methods that reach a performance of ≈ 0.06 in MountainCar, and of ≈ −0.04 in LunarLander. These policies are further away from the expert and the goal of the environment (−110 and 200). Since the BC method uses the ground-truth labels, we hypothesize that this approach yield similar results to RIL, even though the number of trajectories is limited. In the CartPole and Acrobot environments, the BC method achieves results closer to the expert (P ≃ 0.9); however, performance and rewards decrease significantly during the MountainCar and LunarLander environments. This experiment shows that RIL's capability of learning with its own experience is a substantial advantage, even with a small number of examples. 
## 5.2.2 Reinforcement Learning

By comparing RIL with the IL methods, we show that it reaches better results with fewer expert samples. However, RIL uses its own experiences in the form of q-value mapping to create the optimal policy, which the other IL methods had no access to. Hence, we also compare it to RL methods to understand whether RIL reaches the same results with fewer samples. Results in Table 3 show the number of timesteps needed for each method to reach its maximum reward. Since RIL uses both offline and online learning, we count both the samples used during the RL training and the expert samples used during the IL training. Thus, if the expert trajectory has a size of 500 samples, for each RIL iteration we count the number of timesteps from the RL component training plus 500.

| Environment | DQN1 | DQN2 | PPO | ACKTR | RIL |
|---------------|---------|---------|----------|---------|---------|
| CartPole | 211,000 | 64,500 | 15,000 | 169,500 | 14,800 |
| Acrobot | 427,500 | 498,000 | 98,000 | 482,000 | 16,365 |
| MountainCar | 224,500 | 155,500 | 680,000 | 507,500 | 29,745 |
| LunarLander | 357,000 | 239,000 | 154,000† | 646,000 | 281,204 |

Table 3: Average timesteps needed to reach the maximum reward (Table 4) for each algorithm in each environment.

As expected, DQN1 and DQN2 yield similar results in reaching their maximum reward, though DQN2 solves most environments while DQN1 does not. DQN2 achieves its maximum reward more efficiently than the other RL methods, besides achieving higher or comparable rewards. DQN2 presents P = 0.92, while PPO and ACKTR achieve P ≈ 0.85, with the difference between PPO and DQN2 being negligible for CartPole and significant for MountainCar. The exception for DQN2 is the Acrobot environment, where PPO's smoother exploration mechanism can achieve better results with fewer samples. We observe the same behavior with RIL. Since the ϵ-greedy strategy used during the RL training initiates with a value corresponding to the exploration rate of the policy in the previous epoch, exploration becomes less frequent. Thus, it allows for a more efficient path to reaching its maximum reward. While ACKTR achieves the best result among the RL methods for MountainCar, it also requires ≈ 500,000 timesteps (477,755 more than RIL). We note that for LunarLander, the number of timesteps PPO needs to reach its maximum reward is lower than all the other algorithms. Since PPO does not reach the environmental goal, we do not consider it a relevant result. These experiments show that the trajectory of the expert can be used as a shortcut, allowing RIL to achieve its maximum reward and the goal of all environments more efficiently than the RL approaches. Nevertheless, RIL inherits from IL the problem of using an expert's trajectory as a guide, making it quite difficult to learn how to behave in those environments in which the goal constantly shifts, *e.g.,* LunarLander.
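The sample accounting described at the start of this subsection can be made explicit in a few lines: for RIL, the expert samples consumed by each IL step are added to the environment steps of the RL step, so the comparison with the pure RL baselines in Table 3 remains fair. The per-iteration numbers below are purely illustrative.

```python
def ril_sample_count(rl_steps_per_iteration, expert_trajectory_len):
    """Total samples: RL environment steps plus the expert samples reused
    by the IL component in every iteration (see Section 5.2.2)."""
    n_iterations = len(rl_steps_per_iteration)
    return sum(rl_steps_per_iteration) + expert_trajectory_len * n_iterations

# e.g., a 500-sample expert trajectory reused over 20 iterations of 500 RL steps:
print(ril_sample_count([500] * 20, expert_trajectory_len=500))  # 20000
```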
## 5.3 Quantitative Results

Table 4 presents all results for both paradigms. In this experiment, we compare the capability of each method to learn a policy capable of reaching the highest reward, and we also show the average P when compared to the expert's performance. Considering that we previously hindered all IL methods with a single expert trajectory, we now use 100 different trajectories (≈ 100× more samples). For the RL approaches, we train each algorithm for 2,000,000 timesteps.

As expected, DQN1 achieves lower rewards than every other RL algorithm, in agreement with our premise that the RL component of RIL by itself should not be enough to solve the environment. Except for the CartPole environment, in which all methods reach r ⩾ 195, the first DQN was closer to the random policy than to the expert reward. By comparison, DQN2 achieves the environmental goal, and the Acrobot ideal reward, for all environments but MountainCar, something that no other RL approach was capable of. When analyzing PPO's results, we see that even though it achieves the best result of all the RL methods for Acrobot, it performs worse than ACKTR and DQN2 for the other environments, while ACKTR achieves the best result for MountainCar but is worse in the rest of the environments. We expected MountainCar to be a complex task considering it has a sparse reward function and a goal that rewards less exploration from the policy.

On the other hand, for the LunarLander environment, even though most of the RL algorithms did not achieve the goal, *i.e.,* r ⩾ 200, we note that achieving a positive result can be quite difficult. Since its goal and reward system do not have a strong correlation, such as in CartPole, and its goal is not fixed, unlike, *e.g.,* MountainCar and Acrobot, LunarLander has the lowest performance among all tested environments. The average P for all RL approaches is approximately 0.8.

![10_image_0.png](10_image_0.png)

Figure 4: Boxplot of the *Average Episodic Reward* for all methods and environments.

When comparing the IL methods (except for BC), it becomes clear that they can have significant difficulties when dealing with the LunarLander environment as well. While the RL methods achieve a positive result, the IL strategies deteriorate over time. We assume that such behavior originates from the policies learning how to mimic the expert's landing position, which does not correlate with the goal. Since these methods lack the reward signal to correct themselves, the result is closer to the random policy than to the expert. We observe that the IL methods achieve, on average, a higher result in the Acrobot environment (≈ −81.69) than the RL methods (≈ −86.87, excluding DQN1). This is due to the fact that IL methods learn an optimal trajectory without much exploration. However, the IL approaches tend to replicate the average of the actions in a given state as the number of trajectories grows. This is a good thing in cases such as CartPole and Acrobot, because the expert's states do not vary as much. The policies can predict the correct answers even when a particular state was absent from the learned trajectories, or rapidly correct themselves. Given the self-supervised nature of these methods, we observe a decrease in reward in MountainCar. This outcome happens because those algorithms make use of pseudo-labels and only approximate the expert's actions. An incorrect action might cause the car to lose momentum in this environment, resulting in fewer reward points. A solution would be to use the ground-truth labels from the expert.
| | Reinforcement Learning | | | | Imitation Learning | | | | |
|--------------|---------|---------|---------|---------|--------|---------|---------|---------|---------|
| Environments | DQN1 | DQN2 | PPO | ACKTR | BC | GAIL | ILPO | IUPE | RIL |
| CartPole | 431.87 | 500.00 | 500.00 | 487.70 | 500.00 | 500.00 | 500.00 | 500.00 | 500.00 |
| | ±5.31 | ±0.00 | ±0.00 | ±64.76 | ±0.00 | ±0.00 | ±0.00 | ±0.00 | ±0.00 |
| Acrobot | -191.51 | -87.83 | -83.43 | -89.36 | -82.92 | -83.12 | -83.84 | -78.10 | -75.72 |
| | ±64.29 | ±27.96 | ±23.29 | ±24.89 | ±2.63 | ±20.95 | ±2.50 | ±10.56 | ±4.49 |
| MountainCar | -145.00 | -135.28 | -142.16 | -112.55 | -99.69 | -186.74 | -177.56 | -130.70 | -100.37 |
| | ±31.18 | ±24.10 | ±21.67 | ±21.19 | ±0.69 | ±0.65 | ±27.77 | ±15.23 | ±2.50 |
| LunarLander | -127.64 | 273.07 | 105.24 | 85.85 | 214.93 | 83.56 | -421.62 | -211.2 | 266.55 |
| | ±70.58 | ±37.92 | ±51.90 | ±64.72 | ±5.56 | ±65.26 | ±180.41 | ±40.77 | ±19.34 |
| Average P | 0.53 | 0.92 | 0.83 | 0.88 | 1.01 | 0.70 | 0.42 | 0.67 | 1.04 |

Table 4: Quantitative results for all RL and IL algorithms used in this work as baselines. We also display the average *performance* over all environments. DQN1 is the unmodified DQN architecture (Mnih et al., 2013), while DQN2 is the version from Schaul et al. (2015).

BC has results similar to the best RL method (DQN2) and close to RIL, with 1.01 performance points. While its performance is similar to the expert's, it has access to ground-truth labels, which can be hard to acquire, or ineffective if an agent has to learn the environment to produce a significant number of annotated trajectories. Hence, RIL's overall performance is a significant improvement over the current IL methods. RIL performs equal to or better than the expert in all environments without any ground-truth labels from its expert. It achieves higher rewards than the RL methods as well, with the single exception of DQN2 in the LunarLander environment. For that, we analyze the standard deviation of both methods and the boxplot presented in Figure 4. RIL achieves a reward of 266.55 with a standard deviation of 19.34, while DQN2 achieves 273.07 reward points with a higher deviation (37.92). RIL stands within the interval of DQN2 and achieves a reward far higher than needed for solving the LunarLander environment, *i.e.,* r ⩾ 200. We hypothesize that applying a harsher gradient clipping during the RL training within RIL is responsible for that result. A solution would be to decrease the clipping values for the IL training while increasing them for the RL component as the epochs progress.

We note that the standard deviation of RIL is smaller than that of every other method but BC. Its average deviation over all environments is ≈ 6.58, while that of DQN2, which has the best results among the RL methods, is ≈ 23. For BC, the average standard deviation was 2.22, only 4.36 lower than RIL's, a difference that is not really significant considering that BC uses ground-truth labels for the policy to learn. For that reason, we plot the interval for all methods in all environments in Figure 4, allowing us to understand how RIL compares to other methods in terms of variance (stability). Apart from BC, we observe that RIL presents the lowest variance among all methods. In the case of MountainCar, RIL does not achieve the highest possible value - ACKTR is the method that achieves it, with a non-outlier maximum value of ≈ −60, though RIL's behavior surpasses the median behavior of ACKTR.
Note that RIL presents a very low variance, which translates into stability, a desirable property for a policy that must perform well in production settings. A similar thing happens in the LunarLander environment. Even though DQN2 has the highest average of 273.07 and a maximum value of ≈ 290, RIL's behavior is within the interval of the RL method, with a difference of 6.52. We recall that LunarLander is very difficult for the IL methods because the landing position strongly correlates with the final reward. We are computing the average of the environment over 100 episodes, *i.e.,* 100 different landing positions. No other environment we use in this work has the same characteristic of a moving goal. RIL's behavior shows that employing a hybrid RL/IL approach results in policies with lower variance. Moreover, by comparing its stability with the RL approaches, we observe that the adaptation capability of the latter is not as on-point as RIL's. We observe the same trend for the IL approaches, though their average result is more in line with RIL's in the Acrobot environment.

## 6 Discussion

## 6.1 Off-Policy And Imitation Learning

When trying to learn an optimal policy, all learning methods need to find a good trade-off between instant (greedy) locally-optimal behavior and sub-optimal (though perhaps better in the long term) behavior, giving rise to the well-known *exploration vs exploitation* dichotomy. Off-policy methods make use of two different policies to address this trade-off. The first one is the target/optimal policy, while the second is the behavior/exploration policy. The behavior policy uses exploration mechanisms to generate state transitions, which will be used by the target policy during learning. Off-policy methods have access to their behavior policy, its state transitions, and its output in the form of $(s_t, s_{t+1}, \pi(A_t \mid s_t), r)$. Such methods can use this information paired with an importance sampling mechanism to estimate expected values under a distribution.
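The following single-step simplification illustrates the reweighting available to off-policy methods: because the probability $b(a \mid s)$ of the action actually taken is logged, returns collected under the behavior policy can be reweighted to estimate expectations under the target policy; this is exactly the quantity that is missing when only expert states are observed. The numbers are illustrative.

```python
import numpy as np

def importance_sampling_estimate(returns, target_probs, behavior_probs):
    """Single-step simplification: estimate E_target[G] from data gathered
    under the behavior policy via the ratio target_probs / behavior_probs."""
    weights = np.asarray(target_probs) / np.asarray(behavior_probs)
    return float(np.mean(weights * np.asarray(returns)))

returns        = [1.0, 0.0, 1.0]   # returns observed under the behavior policy
behavior_probs = [0.5, 0.5, 0.25]  # b(a|s) of the actions actually taken (logged)
target_probs   = [0.9, 0.1, 0.5]   # pi(a|s) of the same actions under the target policy
print(importance_sampling_estimate(returns, target_probs, behavior_probs))
```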
On the other hand, imitation learning addresses the trade-off by leveraging an unknown expert policy. The most simplistic approach, Behavioral Cloning, uses the state transitions and actions $(s_t, s_{t+1}, a_t)$ in a supervised manner to learn the expert's optimal behavior without the need for exploration. However, this becomes costly as the domain's complexity rises due to the need for additional data. Other IL approaches often only have access to the state transitions $(s_t, s_{t+1})$, since state transitions are more accessible than entire annotated datasets. In this case, the IL policies cannot use importance sampling to learn the optimal solution for a given problem. The action performed by the expert is not accessible to the policy either. Hence, an iterative process such as the one in RIL is vital to improve the policy. By using pseudo-labels that constantly change due to weight updates from the IDM, the policy receives different $\hat{a}$ for the same $s_t$. This behavior allows the weights of the policy to receive more updates, which helps avoid local minima. We illustrate an example of this behavior in Figure 5c, where the change in probabilities becomes more abrupt for the policy.

![12_image_0.png](12_image_0.png)

Figure 5: Figures 5a and 5b present different values for k and n and their effect in Equation 2, while Figure 5c shows the action probability distribution representation of $\pi^*$ for each algorithm in a given trajectory within the MountainCar-v0 environment.

For these reasons, the IL component in RIL cannot be considered an off-policy method, nor the RL component an IL method. We borrow mechanisms from both IL and RL, which are vital for the performance of RIL.

## 6.2 Iterative Vs Sequential Learning

RIL combines RL and IL components following the insight that both offline and online learning can provide benefits in terms of both efficiency and effectiveness. In this section, we compare two possibilities regarding the use of the IL and RL components within our RIL framework. In the regular setting, IL and RL are intertwined within the same iterative learning process, and thus the reward signal helps correct the policy's path by visiting unexplored states while approximating the policy to the expert's trajectories. In an alternative setting, we execute the IL component first and afterwards train the RL component to improve the policy with its own experiences. For that, we run the IL component for 100 epochs and afterwards the RL component for 2,000,000 timesteps. Results for this setting are presented in Table 5, denoted as RIL∗.

| Environment | RIL∗ | RIL |
|---------------|-----------------|----------------|
| CartPole | 382.95 ± 208.29 | 500.00 ± 0.00 |
| Acrobot | −493.63 ± 44.61 | −75.72 ± 4.49 |
| MountainCar | −200.00 ± 0.00 | −100.29 ± 1.59 |
| LunarLander | −114.09 ± 56.84 | 266.55 ± 19.34 |
| Average P | 0.22 | 1.04 |

Table 5: AER and *Performance* (P) for all environments with the sequenced (RIL∗) and iterative (RIL) approaches.

During this investigation, we observe that two possible scenarios can occur when using the methods in sequence: (i) if we keep a low number of expert trajectories, the IL policy can get stuck in bad local minima, from which the RL policy cannot escape; or (ii) the IL policy comes close to solving the environment, but due to the ϵ-greedy exploration nature of the RL policy, all the learned behavior is lost in the early exploration steps. The IL component getting stuck in bad local minima occurs in MountainCar, where RIL∗ achieves r = −200.00 (the minimum environment reward). The scenario of RL deteriorating the IL behavior occurs in CartPole and Acrobot, where RIL∗ achieves 382.95 and −493.63, respectively. We confirm that consecutively swapping between both approaches indeed helps the policy to adjust itself better. By applying the IL component, the policy can correct its exploration mistakes with the expert's trajectory, whereas applying the RL component allows the policy to deviate from the expert whenever needed.

## 6.3 Impact Of The Amount Of Expert Samples

In the previous experiments, we showed results with 1 and 100 expert trajectories. While the difference seems to be minor, we want to understand how different numbers of trajectories impact RIL. Table 6 presents results when using 1, 25, 50, 75, and 100 different expert trajectories. Given the forgiving nature of the CartPole environment, we expected no differences when using more (or fewer) trajectories. RIL achieves 500 with no standard deviation between runs in this environment. The Acrobot environment presents low variance among the number of trajectories. Using 100 different trajectories, RIL achieves the best reward (−75.72). However, this improvement represents a performance of 1.02, less than 1% better than using a single trajectory. Considering the cost of producing 100 trajectories, we assume that using a single trajectory for this environment is enough.
For the MountainCar environment, we observe that all results stay within the standard deviation of both Tables 2 and 4. We hypothesize that RIL can correctly balance the experiences from its IL and RL components and access the proper action given a state. This behavior is crucial given that IL methods perform worse than RL in this environment. The policy's own experiences can over-correct the trajectory to predict better actions and prevent the agent from slowing the car's momentum.

| Environment | Trajectories | Samples Amount | Reward |
|-------------|--------------|----------------|---------|
| CartPole | 1 | 500 | 500.00 |
| | 25 | 12,500 | 500.00 |
| | 50 | 25,000 | 500.00 |
| | 75 | 37,500 | 500.00 |
| | 100 | 50,000 | 500.00 |
| Acrobot | 1 | 74 | -79.52 |
| | 25 | 1,946 | -77.90 |
| | 50 | 4,089 | -77.06 |
| | 75 | 6,163 | -76.50 |
| | 100 | 8,099 | -75.72 |
| MountainCar | 1 | 106 | -100.29 |
| | 25 | 2,466 | -100.20 |
| | 50 | 4,974 | -101.90 |
| | 75 | 7,512 | -99.78 |
| | 100 | 10,046 | -100.37 |
| LunarLander | 1 | 453 | 261.73 |
| | 25 | 10,314 | 221.80 |
| | 50 | 20,846 | 246.32 |
| | 75 | 32,311 | 264.20 |
| | 100 | 42,384 | 266.55 |

Table 6: RIL performance when using a variable number of trajectories from the expert.

In the LunarLander environment, we note that as the number of expert samples increases, the policy initially tends to deteriorate. We attribute such behavior to the IL component, because as the number of expert trajectories grows, more data is available to the policy during the behavioral cloning update. These weight updates deteriorate the policy, since the expert's landing positions do not align with the goal in all episodes. Thus, when reaching 100 trajectories, the policy has far more examples to generalize from and performs well. As is the case for all other environments, the RL component helps the policy maintain a higher reward. These results show that even though RIL benefits from a more diverse expert dataset, the overall gain is usually not significant enough to justify the cost of acquiring more extensive datasets. When paired with fewer samples, the IL component helps the policy map the actions, guiding the agent to an early trajectory. In contrast, the RL component helps correct the trajectory by maximizing the reward signal.

## 6.4 Upper Limit From $I^{pos}$

In this section, we show how different values for $k$ and $n$ in Equation 2 affect the performance of RIL. We first investigate hyperparameter $k$ to understand how a more relaxed upper limit impacts RIL. Next, we test different values for $n$, which controls the shift from $I^{pre}$ and $I^{pos}$ to $I^s$, and how different values of this hyperparameter can affect RIL.

## 6.4.1 Varying K

Hyperparameter $k$ allows us to understand whether RIL benefits from a stricter upper limit or a more relaxed one. A more rigid upper limit results in fewer shifts in the labeled data, reducing the covariate shift between iterations. In contrast, a more relaxed upper limit allows $I^s$ to receive a larger number of $I^{pos}$ samples, becoming closer to the expert trajectories and constantly shifting data for the IDM. Figure 5a presents the upper limit for $I^{pos}$ obtained by varying $k$ from 1 to 4. When $k = 1$, the upper limit grows linearly with the epochs, and as $k$ increases, the growth becomes exponential in the initial epochs and logarithmic in the final epochs.
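Equation 2 is simple enough to reproduce directly; the sketch below implements it and prints the limit at a few epochs, matching the behavior described above (linear growth for k = 1 and an S-shaped curve for larger k). The handling of e ⩾ n, where the expression would otherwise divide by zero, is our assumption.

```python
def upper_limit(e, n=150, k=2):
    """Upper limit on the fraction of I^s drawn from I^pos at epoch e (Equation 2)."""
    if e >= n:
        return 1.0  # assumption: the limit saturates at 100% from epoch n onward
    return 1.0 - 1.0 / (1.0 + (n / e - 1.0) ** -k)

for e in (1, 37, 75, 112, 149):
    print(e, round(upper_limit(e), 3))  # ~0.0, 0.097, 0.5, 0.897, ~1.0 for k = 2
```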
Table 7 shows the AER for a policy trained with different k values (we keep the other hyperparameters the same: n = 150, |T^e| = 1).

Table 7: *Average Episode Reward* (AER) for varying k values with n = 150 and |T^e| = 1.

| Environments | k = 1 | k = 1.5 | k = 2 | k = 2.5 | k = 3 | k = 3.5 | k = 4 |
|--------------|--------|---------|---------|---------|---------|---------|---------|
| CartPole | 500.00 | 500.00 | 500.00 | 500.00 | 500.00 | 500.00 | 500.00 |
| Acrobot | -81.84 | -79.95 | -79.52 | -80.37 | -81.05 | -83.69 | -85.43 |
| MountainCar | -150.3 | -102.30 | -100.29 | -105.30 | -109.20 | -112.80 | -149.90 |
| LunarLander | 132.00 | 145.36 | 266.55 | 247.50 | 209.40 | 174.6 | 171.00 |

For CartPole, the value of k makes no difference to RIL's results. Since the environment can be easily solved, we did not expect any differences for this hyperparameter. However, as we analyze the other environments, it becomes clear that guaranteeing smoother transitions between sample sets is critical for learning a proper policy. In Acrobot, as k increases, the policy degrades. We believe this is because the random policy's transitions are not enough for the IDM to properly create pseudo-labels for the expert's trajectory. Hence, as the values decrease, the policy achieves higher rewards, with the optimal value being 2. As for MountainCar, we observe that, just as in Table 6, the results do not show a significant variation. Except for the values of 1 and 4, which yield rewards of ≈ −150, we observe a standard deviation of 5.08, which is lower than all RL learning approaches in Table 4. In LunarLander, we see a rapid deterioration of the reward as k deviates from 2. We observe the same behavior in MountainCar, where 1 provides the worst result and 4 the second worst result. We attribute such behavior to RIL's capability of using its RL component to counteract the IL sub-optimal knowledge, thus acquiring higher rewards than the RL methods by themselves and comparable results among different k values. These results show that using k = 2 is the best strategy for RIL. It offers a good trade-off by maintaining I^s mainly from I^pre earlier in the process, while later drawing primarily from I^pos. This behavior allows the IDM to learn the action transitions from I^s without suffering from heavy shifts in the dataset.

## 6.4.2 Varying N

Hyperparameter n alters how early in the learning process the upper limit reaches its maximum value of 1 (or 100%). Figure 5b presents the behavior of the upper limit for 150 epochs with n varying in {37, 75, 112, 150}, which is equivalent to 25%, 50%, 75%, and 100% of the epochs. Table 8 shows the AER for a policy trained with distinct n values, while all other hyperparameters stay the same, *i.e.,* k = 2 and |T^e| = 1. The policy once again achieves the maximum reward for the CartPole environment independently of its hyperparameters. On the other hand, as n decreases, the policy deteriorates in the remaining environments. We believe this is due to the high variance in samples for I^s. The steeper the upper-limit curve becomes, the faster I^s becomes I^pos, *i.e.,* fewer samples from I^pre are used to complement I^s. This shift in data can deteriorate the IDM's capability of predicting the correct action given (s_t, s_{t+1}). However, the results achieved by the policy are still solid enough to solve each environment. They corroborate our hypothesis that the RL component in RIL can partially use the expert's knowledge to achieve good results with its own experiences.
Table 8: *Average Episode Reward* (AER) for distinct n values.

| Environments | n = 150 | n = 115 | n = 75 | n = 37 |
|--------------|---------|---------|---------|---------|
| CartPole | 500.00 | 500.00 | 500.00 | 500.00 |
| Acrobot | -79.52 | -80.90 | -82.23 | -85.31 |
| MountainCar | -100.29 | -100.50 | -106.70 | -112.10 |
| LunarLander | 266.55 | 163.70 | 148.10 | 120.98 |

These results show that using n = 150 (the total number of epochs) is the best strategy for RIL.

## 7 Usage Of Sub-Optimal Experts

As a premise for IL methods, the expert acts as an optimal policy in the environment. Thus, we experiment with how RIL behaves when given sub-optimal experts. We hypothesize that, with an increasingly degraded expert, RIL's performance should degrade accordingly. Nevertheless, considering its capability of using its own experiences, the drop in performance should not be drastic, at least in theory. We do not use the LunarLander environment in this experiment, considering its aforementioned complexity (high correlation with the landing position, which is variable). For the other environments, we generate experts by continuously reducing their performance in solving the task. Table 9 shows the reward of the expert employed for learning RIL's policy and the πϕ results for all environments. The first value for each environment is the expert used in the main experiments. Each expert then becomes gradually worse (decreasing values of reward).

Table 9: *Average Episodic Reward* for the learned policy given sub-optimal trajectories from an expert.

| Environment | Expert Reward | Policy Reward |
|----------------|---------------|---------------|
| CartPole | 500 | 500.00 |
| | 400 | 500.00 |
| | 300 | 500.00 |
| | 200 | 500.00 |
| | 100 | 500.00 |
| Acrobot | -85 | -79.52 |
| | -100 | -81.41 |
| | -150 | -80.27 |
| | -200 | -80.03 |
| | -250 | -82.72 |
| MountainCar | -106 | -100.29 |
| | -140 | -104.07 |
| | -150 | -111.20 |

The CartPole policy achieves the maximum reward even with an expert policy distant from its goal, *e.g.,* r ⩾ 195. This behavior was verified in every other experiment and points to the simplicity of solving this particular environment. The same behavior does not occur for Acrobot and MountainCar, which followed our hypothesis of gradual degradation. The degradation of the expert's policy results in a worse πϕ, though note that such degradation is sub-linear with the decrease in reward. Even with severely sub-optimal experts, RIL is capable of achieving rewards equal or very close to the environment's goal. As the expert decayed by 50 reward points for Acrobot, πϕ decayed by only ≈ 0.80. The same happens in MountainCar, where the expert decays by 44 reward points while πϕ decreases by only 10.91 points. The results from MountainCar provide evidence that the trajectory of the expert helps the policy learn how to act in an environment, while the RL component helps adjust that behavior towards the maximum reward.

## 8 Conclusions And Future Work

In this work, we proposed Reinforced Imitation Learning (RIL), a framework that combines IL and RL components into an intertwined iterative process. RIL uses unlabeled expert samples and its own experiences to achieve state-of-the-art results in distinct benchmarking environments.
The IL component of RIL uses unlabeled expert trajectories to guide the policy toward a theoretically optimal policy. The RL component, in turn, can adjust the policy towards a more coherent path by exploring the Q-value functions from its own experiences. RIL offers two main advantages: (i) it is capable of working even with very few expert trajectories due to its self-supervised learning strategy; and (ii) it achieves state-of-the-art results without large amounts of data or timesteps due to its capability of leveraging the advantages of both IL and RL paradigms. Compared to other IL methods, RIL achieves better results with fewer samples while also matching their performance in scenarios with a high number of expert trajectories. Experiments also show that RIL achieves comparable (and often better) results than RL baselines, though in a much more efficient way (*i.e.,* fewer timesteps). As future work, we intend to adapt RIL to continuous environments, and also to scenarios in which the states are represented by visual information. Since the mechanisms implemented in RIL require discretization of continuous environments, we intend to perform modifications that will allow RIL to perform continuous exploration seamlessly. Environments whose states are represented as images (visual domains), such as Atari games, are often harder to learn due to their larger state spaces, so it will be interesting to verify whether RIL can also achieve state-of-the-art results in those scenarios.

## References

Albert Bandura and Richard H Walters. *Social Learning Theory*. Prentice-Hall, Englewood Cliffs, NJ, 1 edition, 1977.

Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. *IEEE Transactions on Systems, Man, and Cybernetics*, 1(5):834–846, 1983.

Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, and Masashi Sugiyama. Active deep q-learning with demonstration. *Machine Learning*, 109(9):1699–1725, 2020.

Tao Chen, Jie Xu, and Pulkit Agrawal. A system for general in-hand object re-orientation. *arXiv preprint arXiv:2111.03043*, 2021.

Ashley D Edwards, Himanshu Sahni, Yannick Schroecker, and Charles L Isbell. Imitating latent policies from observation. In *Proceedings of the 36th International Conference on Machine Learning*, ICML 2019, pp. 1755–1763, 2019.

Bin Fang, Shidong Jia, Di Guo, Muhua Xu, Shuhuan Wen, and Fuchun Sun. Survey of imitation learning for robotic manipulation. *International Journal of Intelligent Robotics and Applications*, 3(4):362–369, 2019.

Nathan Gavenski, Juarez Monteiro, Roger Granada, Felipe Meneguzzi, and Rodrigo C Barros. Imitating unknown policies via exploration. In *Proceedings of the 2020 British Machine Vision Virtual Conference*, BMVC 2020, pp. 1–8. BMVA, 2020.

Alborz Geramifard, Christoph Dann, Robert H. Klein, William Dabney, and Jonathan P. How. RLPy: A value-function-based reinforcement learning framework for education and research. *Journal of Machine Learning Research*, 16(46):1573–1578, 2015. URL http://jmlr.org/papers/v16/geramifard15a.html.

Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In *Thirty-Second AAAI Conference on Artificial Intelligence*, 2018.

Jonathan Ho and Stefano Ermon.
Generative adversarial imitation learning. In *Advances in Neural Information Processing Systems*, pp. 4565–4573, 2016.

Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. Imitation learning: A survey of learning methods. *ACM Computing Surveys*, 50(2):21:1–21:35, 2017.

Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. Model-based reinforcement learning for atari. *arXiv preprint arXiv:1903.00374*, 2019.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Ildikó Király, Gergely Csibra, and György Gergely. Beyond rational imitation: Learning arbitrary means actions from communicative demonstrations. *Journal of Experimental Child Psychology*, 116(2):471–486, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International Conference on Machine Learning*, pp. 1928–1937. PMLR, 2016.

Juarez Monteiro, Nathan Gavenski, Roger Granada, Felipe Meneguzzi, and Rodrigo Barros. Augmented behavioral cloning from observation, 2020.

Andrew William Moore. *Efficient memory-based learning for robot control*. PhD thesis, University of Cambridge, 1990.

Dean A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In *Proceedings of the 1st Conference on Neural Information Processing Systems*, NIPS 1988, pp. 305–313, 1988.

Nathan Ratliff, J Andrew Bagnell, and Siddhartha S Srinivasa. Imitation learning for locomotion and manipulation. In *2007 7th IEEE-RAS International Conference on Humanoid Robots*, pp. 392–397. IEEE, 2007.

Saleha Raza, Sajjad Haider, and Mary-Anne Williams. Teaching coordinated strategies to soccer robots via imitation. In *Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics*, ROBIO 2012, pp. 1434–1439, 2012.

Giacomo Rizzolatti and Corrado Sinigaglia. The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. *Nature Reviews Neuroscience*, 11(4):264–274, 2010.

Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics*, pp. 627–635. JMLR Workshop and Conference Proceedings, 2011.

Stefan Schaal. Is imitation learning the route to humanoid robots? *Trends in Cognitive Sciences*, 3(6):233–242, 1999.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. *arXiv preprint arXiv:1511.05952*, 2015.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International Conference on Machine Learning*, pp. 1889–1897, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms.
*arXiv preprint arXiv:1707.06347*, 2017.

Richard S Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In *Advances in Neural Information Processing Systems*, pp. 1038–1044, 1996.

Richard S Sutton and Andrew G Barto. *Reinforcement Learning: An Introduction*. MIT Press, 2018.

Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. In *Proceedings of the 27th International Joint Conference on Artificial Intelligence*, pp. 4950–4957, 2018.

Stefan Vogt and Roland Thomaschke. From visuo-motor interactions to imitation learning: behavioural and brain imaging studies. *Journal of Sports Sciences*, 25(5):497–517, 2007.

Christopher JCH Watkins and Peter Dayan. Q-learning. *Machine Learning*, 8(3-4):279–292, 1992.

Yuhuai Wu, Elman Mansimov, Roger B Grosse, Shun Liao, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. *Advances in Neural Information Processing Systems*, 30:5279–5288, 2017.
Review 1:
Summary:
The authors propose an algorithm that tackles the Learning from Demonstration problem in the more specific setting where the demonstrations contain only observations and no actions (Learning from Observations). The agent thus has access to the environment to interact with, to the reward information, and to demonstrations with only observations in them. The authors propose to:
- initialize a policy $\pi_{\theta}$
- learn an inverse model to predict $a_t$ given $s_t$ and $s_{t+1}$ based on random exploration
- train $\pi_{\theta}$ with a behavioral cloning loss on the demonstration dataset with the actions predicted by the inverse model.
- interact with the environment to
  - train $\pi_{\theta}$ with a TD-loss on the reward
  - augment the dataset the inverse model is trained on

There is a goal sampling technique mentioned in Algorithm 1 that I don't understand given that all environments considered are not goal-conditioned.

Strengths and Weaknesses:
# Strength
It is an interesting setup to tackle LfD from observation only.
# Weaknesses
- Writing
  - It was not clear to me until page 5 what the actual setup considered was (and I am still not 100% sure). To understand what the tackled problem is (LfD from obs.), the reader has to gather information from 6 different places. Some details about it are never even mentioned, like the restriction to the discrete-action setup.
  - The algorithm explanation lacks a lot of details. E.g., the authors write "using $\pi_{\theta}$ as $I^s$" although $I^s$ has not been introduced before. The authors probably rely on Figure 1 for the reader to understand on their own.
  - A lot of the notations used are actually never introduced, or change across the explanation (r vs R, $\pi$ vs $\pi_{\theta}$, etc.).
- Algorithm and contributions
  - The contributions are not clearly stated. The algorithm is described as a bag of tricks that are poorly linked to the relevant literature.
  - Using DQN with a BC loss is actually the core of the DQfD [1] algorithm (which was extended in DDPGfD [2] and R2D3 [3], or used differently in AQuaDem [4]). Inverse models have been extensively studied in the literature, e.g., as a reward bonus in ICM [5], and in many other works.
  - Tricks are stated as contributions, like acting following softmax(logits), but various uncited works have been doing this in the past years, like SoftDQN [6] or Munchausen RL [7].
  - The authors don't bring the reader any intuition or mathematical explanation of why the algorithm should work.
- Experimental setup
  - The experimental setup is weak in the following sense: many crucial details are lacking: where do the demonstrations come from? How many steps are the algorithms trained on? A proper description of the baselines is missing: baseline scores are very low on these very simple environments, and I strongly suspect the use of weak baselines. GAIL has been improved a lot since its initial publication, as shown in the following work [8].
  - The authors consider only very low-dimensional environments where learning the model is probably very simple. At least one larger example would really be welcome to showcase the performance of the algorithm.
  - An improper hyperparameter selection method where different HPs are needed in every environment.
[1] https://arxiv.org/abs/1704.03732
[2] https://arxiv.org/pdf/1707.08817.pdf
[3] https://arxiv.org/abs/1909.01387
[4] https://arxiv.org/abs/2110.10149
[5] https://arxiv.org/pdf/1705.05363.pdf
[6] https://arxiv.org/abs/1912.10891
[7] https://arxiv.org/abs/2007.14430
[8] https://arxiv.org/abs/2106.00672

Requested Changes:
- A much clearer introduction of the notations
- A clear statement of the contributions
- A refactoring of Section 3, which completely mixes contributions with related work
- A clearer description of the algorithm
- Proper citations of the related work when using techniques that were previously used
- Mathematical expressions when needed (for example, when introducing the considered KL-divergence)
- A clear explanation of the baselines
- A choice of stronger baselines
- A clear explanation of the hyperparameter selection process, with a single set for all environments
- At least one higher-dimensional environment

Broader Impact Concerns: No particular concern.

==================================================

Review 2:
Summary:
The authors propose to mix Imitation Learning and Reinforcement Learning. On their journey, they position their approach with respect to related work, propose an algorithm RIL, state experimental hypotheses, perform a large variety of experiments, and present the empirical results.

Strengths and Weaknesses:
Strengths: the scientific methodology is complete: positioning, algorithmic innovation, empirical hypothesis, and empirical validation. The figures are high quality.
Weakness: unfortunately, the writing and formalization are unclear. In my opinion, it undermines the submission too much to recommend acceptance. After reading the paper, I am left not understanding the motivation, the objectives, the claims, the empirical results, etc. As a result, I am unable to answer positively to the main reviewing question: *Are the claims made in the submission supported by accurate, convincing and clear evidence?* The next section provides many examples of imprecise formulations and inconsistent claims.

Requested Changes:
This section enumerates remarks and questions in an unorganized manner (in chronological order):
- Abstract: *it requires a lot of computation* => I'm not sure what it means given that the RIL algorithm has an inner RL optimization loop.
- Abstract: *a method that learns an optimal policy* => RIL looks like IL with some additional RL components, but if the objective is to learn an optimal policy, then it is more like IL-guided RL. **Nowhere in the submission is the objective of the RIL algorithm clearly stated.**
- Introduction 2nd paragraph: *at any given time* => no, a policy defines what the agent does (not should do) in any context. It is a function from the state space to a distribution over actions.
- Introduction 2nd paragraph: *Value-based methods compute the optimal policy* => no, they search for the optimal policy.
- Introduction 2nd paragraph: *The assumption that the only information available to the learning agent are the immediate environmental rewards* => the information signal is much richer than that with the transition function. Yann Le Cun even made this (in)famous quote that the reward is only the cherry on the machine learning cake.
- Figure 1: the figure is absolutely impossible to understand when it is introduced. Not a single notation is introduced, the figure is not explained, etc.
- Introduction 4th paragraph: *both performance and average episodic reward (AER) metrics*.
These are not introduced and their difference is unclear at this point. We learn later that performance is actually just a rescaling of AER, so they are simply the same indicator presented in a different way.
- Related work: the RL related work is overly centered on the deep RL literature; it is also outdated and off-topic (not connected to RIL).
- Related work: *BC [...] learns how to approximate the agent's trajectory from the expert's* => not really. BC intends to reproduce the policy by predicting the action as a function of the state.
- Related work: *require the expert policy or ground-truth labels to provide feedback to the agent.* => it was very unclear to me what ground-truth label meant in the RL setting. I understood much later that it meant the action performed by the expert. Is it really a strong assumption that the expert actions are observable?
- Problem formulation: *Solving an MDP yields a stochastic policy π(a | s) with a probability distribution over actions for an agent in state s to perform* => I have no idea what this sentence intends to say. Solving an MDP means to find a policy optimizing an objective function.
- Self-supervised Imitation Learning: *the policy (π_ϕ) acts as a stationary model predicting the most likely action a given s_t.* => a policy is a function of the state that returns a distribution over actions. It is not predicting anything. If the authors were referring to the Imitation policy, then it intends to reproduce not the most likely action, but the state-conditioned distribution of actions of the behavioral policy.
- Self-supervised Imitation Learning: *balances I^s* => at this point, we don't know what I^s is and what balance is being sought.
- Self-supervised Imitation Learning: *that reach the environment goal* => does the environment need to be goal-based?
- Self-supervised Imitation Learning: *This process allows the model to maintain a weighted distribution between the random and updated policy samples and avoid local minima since the probability of actions vanishing in each iteration is minimal.* I cannot understand this sentence. What weights are we talking about? What local minima? With respect to which objective? And finally, what does it mean for the probability of actions vanishing in each iteration to be minimal? Minimal inside which set?
- Exploration with Neural Networks: how is ε-greedy exploration with neural networks?
- Exploration with Neural Networks: *to approximate the optimal Q-function in Equation 1* => there is no optimal Q-function in Equation 1, only a Q-learning update.
- Combining IL and RL: *(iv)* Is T^e different from dataset D?
- Combining IL and RL: *(vi) use a sampling mechanism* => is it detailed somewhere? Is it eq 2?
- Algorithm 1 lines 10-13: What is this set of environments? What do you mean by π_φ to solve environment e? What does it mean that π_φ is in turn updated until convergence to different objectives?
- Combining IL and RL: *we compute the exploration ratio from the policy during the self-supervised learning component (Line 9)* => what is an exploration ratio? How is it computed? There is too much vagueness in many of the steps; **I cannot understand why it is done this way, and how to reproduce.**
- Combining IL and RL: *covariance shift* => covariate shift.
- Combining IL and RL: *k the slope of curvature* => is it another hyperparameter?
- Equation 2: what is a lim sup of a dataset?
The text explains well what it means: it is the number of samples coming from I^pos, but the formula returns a number that is smaller than 1. Should I understand that it is a ratio of samples coming from I^pos?
- Equation 2: We may also notice that this formula is equivalent to $\frac{1}{1+(\frac{n}{e}-1)^k}$. This formula is weird (and again not motivated or even explained) since when e tends to $\infty$, it changes the sign inside the parenthesis, yielding non-monotonic behaviors.
- Experimental methodology: *and its latest version (Schaul et al., 2015) (DQN2)* => it is not the *latest version* of DQN (whatever it means).
- Policy optimization behavior: *IL and RL methods work on the same premise that an agent needs to learn an approximation to a theoretical optimal policy in the form of an MDP.* Do you mean that both approaches return a policy? Because RL does not aim at learning a policy, but at yielding the maximum discounted sum of rewards. In contrast, IL indeed aims at reproducing a policy, which may be interpreted in different ways (minimize policy KL or state density KL, to name two).
- Policy optimization behavior: *First, we compare all models with π∗ and compute the difference with the probability of all possible actions.* => A divergence is not a difference.
- Policy optimization behavior: *the KL Divergence using one-hot encodings for the specific action given a state* => I don't understand what it means. Which specific action? Formalizing with actual equations would help to understand the difference between KL-divergence and KL-divergence*.
- Policy optimization behavior: *When comparing π∗ to the other policies, we can see that in the middle of the valley...* => I do not understand what we are asked to look at in Figure 2 (Figure 2 is composed of 4 sub-figures).
- Figure 2: what is a maximum a posteriori probability?
- Figure 2a (and the others): why do probabilities have negative values?
- Table 1: how can RIL yield more reward than the optimal policy? It raises the question: with respect to what is it optimal?
- Table 1: it is suspicious that both RL and IL get exactly the same KL divergence*. Maybe it's possible. I can't tell since I did not understand how it is computed.
- Policy optimization behavior: *Figure 3 illustrates how close RIL is from π∗* => why would we look at this? IL's objective is to reproduce another policy, while RL's is to maximize return, which is only distantly connected to mimicking a specific policy (there may be several RL (near-)optimal policies).

After page 8, I felt that I was too lost to continue the reading and be able to provide relevant feedback on the paper. I will only mention that I've noticed that RIL consistently obtains performance strictly higher than 1, although 1 is supposed to be the performance of the optimal policy, once again raising the question of what the optimal policy is.

Broader Impact Concerns: My little understanding of what/why/how things are made in this submission did not allow me to formulate any broader impact concerns analysis.

==================================================

Review 3:
Summary:
_tldr: BCO + DQN._ This paper proposes an RL algorithm for the setting where the agent is also given access to a set of state-only expert demonstrations. The proposed method alternates between (1) running behavioral cloning from observations (Torabi '18), which performs imitation learning on the expert demos; and (2) deep Q-learning.
On 4 simple tasks, the proposed method outperforms baselines that only perform imitation learning (i.e., omit step 2) and baselines that only perform RL (i.e., omit step 1).

Strengths and Weaknesses:
**Strengths**
* The paper includes fairly detailed ablation experiments.
* The high-level problem studied in this paper, improving the sample efficiency of RL methods, is very important.

**Weaknesses**
* The paper is hard to follow. After reading the introduction, I was unsure exactly what problem the paper was solving (e.g., would the method be evaluated based on reward maximization or divergence minimization).
* The proposed method makes more assumptions than the baselines. Some baselines don't use the reward signal; the rest of the baselines don't use the expert demos. This makes the comparisons seem a bit unfair.
* The experiments are only on very simple tasks.

Requested Changes:
**[very important]** Compare to recent, competitive baselines that make the same assumptions as the proposed method. Use more challenging tasks, like those used in these recent works (e.g., the DeepMind Control Suite, CARLA).
**[very important]** Some parts of the writing were imprecise (or worse, wrong); many claims were missing citations. I would highly recommend revising the paper (especially the abstract and Sections 1 -- 3) to make sure that the writing is precise. A few examples:
* "[imitation learning approaches] are limited to trajectories where ..." I didn't understand what this means.
* "reinforcement learning does not need a supervision signal" -- I don't think this is true: the reward is a supervision signal.
* "a method that learns optimal policies" -- The paper doesn't include any analysis proving that the proposed method always returns the optimal policy; "optimal" isn't formally defined.
**[medium importance]** The paper seems much too long. I'm not sure what the general standard for TMLR is, but it seems like the paper could be made 30-50% shorter without losing anything important. For example, the section on ablation experiments could be shortened to 1 page.
**[medium importance]** "by counting how many samples..." This seems to hide an important difference: expert samples and agent samples don't necessarily cost the same amount. I would highly recommend comparing to baselines that make the same assumptions. In addition, it could be interesting to look at a sort of Pareto frontier of performance after (say) 10k samples vs % of expert samples.
**[less important]** Minor writing comments
* "recently became ... [Bandura '77]" -- It's a bit odd to refer to a 44-year-old paper as recent.
* "post-action rewards" -- Unclear what this means.
* "Humans can learn much faster" -- Cite.
* "not necessarily what could" -- Cite.
* In the related work section, I'd recommend focusing on just the most relevant prior methods (those that combine RL + IL).
* "we do not have access to ground truth labels .... we have a trajectory performed by an expert" -- This seems like a contradiction.
* "very sample-efficient" -> "sample efficient"
* "q-learning" -> "Q-learning"
* "RIL employs ... outside the expert's trajectories" -- Potential run-on sentence.
* "DQN ... shares similarities with ... IL" -- I didn't understand this claim. I would recommend explaining it.
* "Markov Decision Process" -> "Markov decision process"
* "pseudo-labels ... can be far from the experts" -- Why?
* Sec. 3.3 -- I found this section confusing because DQN is an RL algorithm, not an exploration algorithm.
I'd recommend explaining that the method is an RL+IL method, and mentioning that the RL algorithm is discussed in Sec. 3.3 and the IL algorithm is discussed in Sec. X.
* Alg 1, L10: "e" was defined to indicate things coming from the expert, so it's unclear how one can enumerate over e.
* "softmax distribution" -- I'd recommend discussing the similarities/differences with Boltzmann exploration.
* Sec. 4.1 could be moved to the appendix.

Broader Impact Concerns: N/A. The proposed RL algorithm doesn't directly raise any concerns in this regard.

==================================================

Metareview:
Recommendation: Reject
Comment: There have been extensive discussions between the reviewers and the authors. The reviewers' comments more or less highlight similar strengths and weaknesses. They appreciated the originality of the work, its timeliness, and the fact that this line of work is very relevant to the journal. It brings a new perspective on relevant problems in the imitation / RL community. Yet they also unanimously agree on the lack of clarity of the paper and major discrepancies between claims and experiments (like using the reward in an imitation learning setting). These are the main criteria for acceptance at TMLR, and these issues unfortunately prevent publication at this stage. This is why all reviewers recommended rejecting the paper this time. The authors seem to have understood most of the issues from the reviewers' comments. I really appreciated the discussion, and I'm sure it will help the authors come up with a much better version of their paper. Yet, at this moment, there is too much work to be done to rely on this discussion as a basis for accepting the paper.

==================================================
# Spectral Self-Supervised Feature Selection

Anonymous authors
Paper under double-blind review

## Abstract

Choosing a meaningful subset of features from high-dimensional observations in unsupervised settings can greatly enhance the accuracy of downstream analysis, such as clustering or dimensionality reduction, and provide valuable insights into the sources of heterogeneity in a given dataset. In this paper, we propose a self-supervised graph-based approach for unsupervised feature selection. Our method's core involves computing robust pseudo-labels by applying simple processing steps to the graph Laplacian's eigenvectors. The subset of eigenvectors used for computing pseudo-labels is chosen based on a model stability criterion. We then measure the importance of each feature by training a surrogate model to predict the pseudo-labels from the observations. Our approach is shown to be robust to challenging scenarios, such as the presence of outliers and complex substructures. We demonstrate the effectiveness of our method through experiments on real-world datasets, showing its robustness across multiple domains and, in particular, its strong performance on biological datasets.

## 1 Introduction

Improvements in sampling technology enable scientists across many disciplines to acquire numerous variables from biological or physical systems. One of the critical challenges in real-world scientific data is the presence of noisy, information-poor, or nuisance features. While such features could be mildly harmful to supervised learning, they could dramatically affect the outcome of downstream analysis tasks (e.g., clustering or manifold learning) in the unsupervised setting (Mahdavi et al., 2019). There is thus a growing need for unsupervised feature selection schemes that enhance latent signals of interest by removing nuisance variables and thus advance reliable data-driven scientific discovery.

Unsupervised Feature Selection (UFS) methods are designed to identify a set of informative features that can improve the outcome of downstream analysis tasks such as clustering and manifold learning. With the lack of labels, however, selecting features becomes a challenge since the downstream task cannot be used to drive the selection of features. As an alternative, most UFS methods use a label-free criterion that correlates with the downstream task. For instance, many UFS schemes rely on a reconstruction prior (Li et al., 2017) and seek a subset of features that can be used to reconstruct the entire set of features as accurately as possible. Several works use Autoencoders (AE) to learn a reduced representation of the data while applying a sparsification penalty to force the AE to remove redundant features. This idea was implemented with several types of sparsity-inducing regularizers, including ℓ2,1-based (Chandra and Sharma, 2015; Han et al., 2018), relaxed ℓ0 (Balın et al., 2019; Shaham et al., 2022; Svirsky and Lindenbaum), and more.

One of the most commonly used criteria for UFS is feature smoothness. According to this hypothesis, the structure of interest, such as clusters or a manifold, can be captured using the graph Laplacian matrix (Ng et al., 2001). The smoothness of features is measured using the Laplacian Score (LS) (He et al., 2005), which is based on the Rayleigh quotient of the Laplacian. A feature that is smooth with respect to the graph is considered to be associated with the primary underlying data structures. There are many other UFS methods that use a graph to select informative features Li et al.
(2018); Roffo et al. (2017); Zhu et al. (2017; 2020); Xie et al. (2023). Li et al. (2012) derived Nonnegative Discriminative Feature Selection (NDFS), which performs feature selection and spectral clustering simultaneously. Its extension Li and Tang (2015) adds a loss term to prevent the joint selection of correlated features.

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of SSFS. In (a) we show a t-SNE scatter plot of noisy MNIST digits (3, 6, 8). (b) presents the six leading eigenvectors computed based on the graph Laplacian of the data. Samples are ordered according to the identity of the digit. (c) We then use the k-medoids algorithm to define pseudo-labels y∗_i. These are presented as colors overlaid on the eigenvectors. (d) We select the three eigenvectors whose pseudo-labels are the most "stable" with respect to several prediction models (see Section 3.2). (e) For each data feature we estimate its importance score for each of the selected eigenvectors (see Section 3.3). (f) We aggregate the feature scores across eigenvectors.

Embedded unsupervised feature selection schemes aim to cluster the data while simultaneously removing irrelevant features. Examples include Wang et al. (2015), which performs the selection directly on the clustering matrix, and Zhu and Yang (2018), which learns feature weights while maximizing the distance between clusters. In recent years, several works have derived self-supervised learning methods for feature selection. The key idea is to design a supervised-type learning task with pseudo-labels that do not require human annotation. A seminal work based on this paradigm is Multi-Cluster Feature Selection (MCFS) (Cai et al., 2010). MCFS uses the eigenvectors of the graph Laplacian as pseudo-labels and learns the informative features by optimizing over an ℓ1-regularized least squares problem. More recently, Lee et al. (2021) used self-supervision with correlated random gates to enhance the performance of feature selection.

In this work, we present a spectral self-supervised scheme for feature selection. The key idea is to selectively and discriminatively use the eigenvectors of the graph Laplacian. We implement this process through a multi-stage approach. Firstly, we generate robust discrete pseudo-labels from the eigenvectors and filter them based on a stability measure. Next, we fit flexible surrogate classification models on the selected eigenvectors and query the models for feature scores. Using these components, we can identify informative features that are effective for clustering on real-world datasets.

## 2 Preliminaries

## 2.1 Laplacian Score And Representation-Based Feature Selection

Generating a graph-based representation for a group of high-dimensional observations has become a common practice for unsupervised learning tasks. In manifold learning, methods such as ISOMAP (Tenenbaum et al., 2000), LLE (Roweis and Saul, 2000), Laplacian eigenmaps (Belkin and Niyogi, 2003), and diffusion maps (Coifman and Lafon, 2006) compute a low-dimensional representation that is associated with the manifold's latent structure. In spectral clustering, a set of points is partitioned by applying the k-means algorithm to the leading Laplacian eigenvectors (Ng et al., 2001). In graph methods, each node vi corresponds to one of the observations xi ∈ R^p. The weight Wij between two nodes vi, vj is computed based on some kernel function K(xi, xj).
For example, the popular Gaussian kernel is equal to

$$K(\mathbf{x}_{i},\mathbf{x}_{j})=\exp\bigg(-\,\frac{\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{2\sigma^{2}}\bigg),$$

where the parameter σ determines the bandwidth of the kernel function. Let D be a diagonal matrix with the degree of each node on the diagonal, such that $D_{ii}=\sum_j W_{ij}$. The unnormalized graph Laplacian matrix is equal to L = D − W. For any vector v ∈ R^n we have the following equality (Von Luxburg, 2007),

$$\mathbf{v}^{T}\mathbf{L}\mathbf{v}={\frac{1}{2}}\sum_{i,j}\left(v_{i}-v_{j}\right)^{2}W_{i,j}.\tag{1}$$

The quadratic form in equation 1 gives rise to a notion of graph *smoothness* (Ricaud et al., 2019; Shuman et al., 2013). A vector is smooth with respect to a graph if it has similar values on pairs of nodes connected with an edge with a significant weight. This notion underlies the Laplacian score suggested as a measure for unsupervised feature selection (He et al., 2005). Let fm ∈ R^n denote the values of the m-th feature for all observations. The Laplacian score sm is equal to

$$s_{m}=\mathbf{f}_{m}^{T}\mathbf{L}\mathbf{f}_{m}=\frac{1}{2}\sum_{i,j}\left(f_{m,i}-f_{m,j}\right)^{2}W_{ij}.\tag{2}$$

A low score indicates that a feature is smooth with respect to the computed graph and thus strongly associated with the latent structure of the high-dimensional data x1, . . . , xn. The notion of the Laplacian score has been the basis of several other feature selection methods as well (Lindenbaum et al., 2021; Shaham et al., 2022; Zhu et al., 2012). Let vi, λi denote the i-th smallest eigenvector and eigenvalue of the Laplacian L. A slightly different interpretation of equation 2 is that the score for each feature is equal to a weighted sum of its correlation with the eigenvectors, such that

$$s_{m}=\sum_{i=1}^{n}\lambda_{i}(\mathbf{f}_{m}^{T}\mathbf{v}_{i})^{2}.$$

A potential drawback of the Laplacian score is its dependence on many eigenvectors. This may reduce its stability in measuring a feature's importance to the data's main structures. To overcome this limitation, Zhao and Liu (2007) derived an alternative score based only on a feature's correlation to the leading Laplacian eigenvectors. A related, more sophisticated approach is Multi-Cluster Feature Selection (MCFS) (Cai et al., 2010), which computes the solutions to the generalized eigenvector problem Lv = λDv. The leading eigenvectors are then used as pseudo-labels for a regression task with ℓ1 regularization. Specifically, MCFS applies Least Angle Regression (LARS) (Efron et al., 2004) to obtain, for each leading eigenvector vi, a sparse vector of coefficients β^i ∈ R^p. A feature's score is computed by maximizing the absolute values of its corresponding coefficients, $s_j = \max_i |\beta^i_j|$. The output of MCFS is the set of features with the highest scores. In the next section, we derive Spectral Self-supervised Feature Selection (SSFS), which improves upon the MCFS algorithm in several critical aspects.

## 3 Spectral Self-Supervised Feature Selection

## 3.1 Rationale

As its title suggests, MCFS aims to uncover features that separate clusters in the data. Let us consider an ideal case where the observations are partitioned into k well-separated clusters, denoted A1, . . . , Ak, such that Wij = 0 if xi, xj are in separate clusters. Let e^i denote an indicator vector for cluster i such that

$$e_{j}^{i}={\begin{cases}1/{\sqrt{|A_{i}|}}&j\in A_{i}\\ 0&{\mathrm{otherwise,}}\end{cases}}$$

where |Ai| denotes the size of cluster Ai. In this scenario, the zero eigenvalue of the graph Laplacian has multiplicity k, and the corresponding eigenvectors are equal, up to a rotation matrix, to a matrix E ∈ R^{n×k} whose columns are equal to e^1, . . . , e^k. In such a case, the k leading eigenvectors are indeed suitable for use as pseudo-labels for the feature selection task. Assuming that the clusters are amenable to a linear separation, the MCFS algorithm should provide highly informative features in terms of cluster separation.
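This ideal-cluster claim is easy to check numerically. The following minimal sketch — our own illustration, not part of SSFS — builds a Gaussian-kernel Laplacian on two well-separated blobs and verifies that the zero eigenvalue has multiplicity two, with null-space eigenvectors that are (up to rotation) constant within each cluster:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Two well-separated clusters in the plane (30 and 20 points).
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])

# Gaussian kernel weights and unnormalized Laplacian L = D - W.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / (2 * 0.5 ** 2))
L = np.diag(W.sum(axis=1)) - W

eigvals, eigvecs = eigh(L)
print(np.round(eigvals[:3], 6))      # two eigenvalues ~ 0, then a clear gap
v2 = eigvecs[:, 1]                   # a null-space eigenvector
print(np.std(v2[:30]), np.std(v2[30:]))  # ~ 0 within each cluster
```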
However, cluster separation is often imperfect in many applications, which can make using the leading eigenvectors for regression suboptimal. Here are some common scenarios:

![3_image_0.png](3_image_0.png)

Figure 2: The first four Laplacian eigenvectors of two real datasets. Samples are sorted according to the real class label and colored by the outcome of a one-dimensional k-medoids per eigenvector. The vertical bar indicates the separation between the classes. In Prostate-GE, v4 is the most informative to the class labels, and an outlier can be seen on the upper left in the third and fourth eigenvectors. In TOX-171, v3 is more informative to the class labels than v2.

1) High-dimensional datasets may contain substructures in the top eigenvectors, while the main structure of interest appears later in the spectrum. For illustration, consider the MNIST dataset visualized via t-SNE in Figure 1(a). The data contains images of the digits 3, 6 and 8. Panel (b) shows the elements of the six leading eigenvectors of the graph Laplacian matrix, sorted by their corresponding digits. The leading eigenvector shows a clear gap between images of digit 6 and the rest of the data. However, there is no clear separation between digits 3 and 8. Indeed, the next eigenvector is not associated with such a separation. Applying feature selection with this eigenvector may produce spurious features irrelevant to separating the two digits. This scenario is prevalent in the real datasets used in the experimental section. For example, Figure 2a shows four eigenvectors of a graph computed from observations containing genetic expression data from prostate cancer patients and controls (Singh et al., 2002). The leading two eigenvectors, however, are not associated with the patient-control separation.

2) The leading eigenvectors may be affected by outliers. For example, an eigenvector may indicate a small group of outliers separated from the rest of the data. This phenomenon can also be seen in the third and fourth vectors of the Prostate-GE example in Figure 2a. While the fourth eigenvector separates the categories, it is corrupted by outliers and, hence, unsuitable for use as pseudo-labels in a classical regression task, as it might highlight features associated with the outliers.

3) The relation between important features and the separation of clusters may be highly non-linear. In such cases, applying linear regression models to obtain feature scores may be too restrictive.

Motivated by the above scenarios, we derive Spectral Self-supervised Feature Selection (SSFS). We explain our approach in detail in the following two sections.

## 3.2 Eigenvector Processing And Selection

**Generating binary labels.** Given the Laplacian eigenvectors V = (v1, . . . , vd), our goal is to generate pseudo-labels that are highly informative to the cluster separation in the data.
To that end, for each eigenvector vi, we compute a binary label vector y∗_i (pseudo-labels) by applying a one-dimensional k-medoids algorithm (Kaufman and Rousseeuw, 1990) to the elements of vi. In contrast to k-means, in k-medoids the cluster centers are set to one of the input points, which makes the algorithm robust to outliers. In Figure 2, the eigenvectors are colored according to the output of the k-medoids. After binarization, the fourth eigenvector of the Prostate-GE dataset is highly indicative of the category. The feature selection is thus based on a classification rather than a regression task, which is more aligned with selecting features for clustering. In Section 5.2 we show the impact of the binarization step on multiple real-world datasets.

**Eigenvector selection.** Selecting k eigenvectors according to their eigenvalues may be unstable in cases where the eigenvalues exhibit a small spectral gap. We derive a robust criterion for selecting informative eigenvectors that is based on the stability of a model learned for each vector. Formally, we consider a surrogate model h : R^p → R and a feature score function s(h) ∈ R^p, where p denotes the number of features. The feature scores are non-negative and their sum is normalized to one. For example, h can be the logistic regression model h(x) = σ(β^T x). In that case, a natural score function is the absolute value of the coefficient vector β. For each eigenvector vi, we train a model hi on B (non-mutually exclusive) subsets of the input data X and the pseudo-labels y∗_i. We then estimate the variance of the feature score function, for every feature m ∈ {1, . . . , p}:

$$\widehat{\mathrm{Var}}(s_{m}(h_{i}))=\frac{1}{B-1}\sum_{b=1}^{B}(s_{m}(h_{i,b})-\bar{s}_{m}(h_{i}))^{2}.$$

This procedure is similar (though not identical) to the Delete-d Jackknife method for variance estimation (Shao and Wu, 1989). We keep, as pseudo-labels, the k binarized eigenvectors with the lowest sum of variances, $\hat{S}_i = \sum_{m=1}^{p} \widehat{\mathrm{Var}}(s_m(h_i))$. We denote the set of selected eigenvectors by I. Pseudo-code for the pseudo-label generation and eigenvector selection appears in Algorithm 1.

## 3.3 Feature Selection

For the feature selection step, we train k models, denoted {fi | i ∈ I}, to predict the selected binary pseudo-labels based on the original data. Similarly to the eigenvector selection step, each model is associated with a feature score function s(fi). The features are then scored according to the following maximum criterion,

$$\operatorname{score}(m)=\operatorname*{max}_{i\in I}s_{m}(f_{i}).$$

Finally, the features are ranked by their scores, and the top-ranked features are selected for the subsequent analysis. The choice of model for this step can differ from that used in the eigenvector selection step, allowing for flexibility in the modeling approach (see Section 3.4 for details). Pseudo-code for SSFS appears in Algorithm 2.

## 3.4 Choice Of Surrogate Models

Our algorithm is compatible with any supervised model capable of providing feature importance scores. We combine the structural information from the graph Laplacian with the capabilities of various supervised models for unsupervised feature selection. Empirical evidence supports the use of more complex models, such as Gradient-Boosted Decision Trees, for various complex, real-world datasets (McElfresh et al., 2023; Chen and Guestrin, 2016). These models are capable of capturing complex nonlinear relationships, which we leverage by training them on pseudo-labels derived from the Laplacian's eigenvectors. For example, for eigenvector selection, one can use a simple logistic regression model for fast training on the resampling procedure and a more complex gradient boosting model such as XGBoost (Chen and Guestrin, 2016) for the feature selection step.
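To make the full pipeline concrete, the sketch below strings the pieces together on synthetic data. It is a simplified illustration of Algorithms 1-2 under stated substitutions, not the paper's exact implementation: an exact 1-D, two-cluster k-medoids is implemented directly, logistic regression supplies the stability scores, scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the median-distance bandwidth and 80% subsamples are illustrative defaults.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

def kmedoids_1d(v):
    """Exact 1-D k-medoids with k=2: optimal clusters are contiguous in sorted
    order, so scan all split points and minimize the L1 cost to each side's
    medoid (a median data point)."""
    order = np.argsort(v)
    s = v[order]
    best_cost, best_split = np.inf, 1
    for t in range(1, len(s)):
        cost = (np.abs(s[:t] - np.median(s[:t])).sum()
                + np.abs(s[t:] - np.median(s[t:])).sum())
        if cost < best_cost:
            best_cost, best_split = cost, t
    labels = np.zeros(len(v), dtype=int)
    labels[order[best_split:]] = 1
    return labels

def ssfs(X, k=2, d=6, B=20, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Gaussian-kernel graph Laplacian (bandwidth: median squared distance).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / np.median(sq))
    L = np.diag(W.sum(1)) - W
    _, vecs = eigh(L)
    # Binarize the d leading nontrivial eigenvectors into pseudo-labels.
    Y = [kmedoids_1d(vecs[:, i]) for i in range(1, d + 1)]
    # Stability: variance of normalized |coefficients| over B subsamples.
    S_hat = []
    for y in Y:
        scores = []
        for _ in range(B):
            idx = rng.choice(n, size=int(0.8 * n), replace=False)
            if len(np.unique(y[idx])) < 2:
                continue
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            s = np.abs(clf.coef_).ravel()
            scores.append(s / s.sum())
        scores = np.array(scores)
        S_hat.append(np.inf if len(scores) < 2
                     else scores.var(axis=0, ddof=1).sum())
    selected = np.argsort(S_hat)[:k]
    # Final scores: max over surrogate models fit on the selected pseudo-labels.
    feat_scores = np.zeros(p)
    for i in selected:
        gb = GradientBoostingClassifier(n_estimators=50).fit(X, Y[i])
        feat_scores = np.maximum(feat_scores, gb.feature_importances_)
    return np.argsort(feat_scores)[::-1]

# Toy data: features 0 and 1 each carry an independent two-cluster structure,
# the remaining eight features are pure noise.
rng = np.random.default_rng(1)
n = 120
f0 = np.where(rng.random(n) < 0.5, 0.0, 4.0) + rng.normal(0, .3, n)
f1 = np.where(rng.random(n) < 0.5, 0.0, 4.0) + rng.normal(0, .3, n)
X = np.column_stack([f0, f1, rng.normal(0, 1, (n, 8))])
print(ssfs(X)[:2])  # expected to rank features 0 and 1 first
```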
## 4 The Importance Of A Proper Selection Of Eigenvectors: Analysis Of The Product Manifold Model

As described in Section 3.1, the principle of selecting the leading Laplacian eigenvectors as pseudo-labels is inspired by the case of highly separable clusters, where observations in different clusters have very low connectivity between them. In many cases, the separation between meaningful states (i.e., biological or medical conditions) may not be that clear. To illustrate this point, consider the MNIST example in Figure 1. The separation between digit 6 and the rest of the data is clear and appears in the leading Laplacian eigenvector. In contrast, digits 8 and 3 are not clearly separated. Figure 3a shows a scatter plot of these digits, where each image is located according to its coordinates in the third and fourth eigenvectors. Even when considering the most relevant eigenvectors, there is no clear separation between the digits. Instead, the transition between 3 and 8 is smooth and depends on the properties of the digits. To provide some insight into the importance of eigenvector selection, we analyze a *product of manifold* model. Our analysis is based on results from two research topics: (i) the convergence, under the manifold assumption, of the Laplacian eigenvectors to the eigenfunctions of the Laplace-Beltrami operator associated with the manifold, and (ii) the properties of manifold products. We next provide a brief background on these two topics.

Algorithm 1 Pseudo-code for Eigenvector Selection and Pseudo-labels Generation

Input: Dataset X ∈ R^{n×p} (with n samples and p features), number of eigenvectors to select k, number of eigenvectors to compute d, surrogate models H = {hi | i ∈ [d]}, feature scoring function s : F → R^p, number of resamples B

1: Initialize an empty list for the pseudo-labels Y∗ and an empty list for the sums of feature variances Ŝ
2: Compute the d leading eigenvectors of the Laplacian of X: V = (v1, . . . , vd)
3: for i = 1 to d do
4:   Binarize the eigenvector vi using k-medoids to obtain y∗_i, and append to Y∗
5:   for b = 1 to B do
6:     Subsample ((X)_b, (y∗_i)_b) from (X, y∗_i)
7:     Fit the model h_{i,b} to ((X)_b, (y∗_i)_b)
8:   **end for**
9:   for m = 1 to p do
10:     Estimate the variance of the m-th feature score:
$$\widehat{\mathrm{Var}}(s_{m}(h_{i}))=\frac{1}{B-1}\sum_{b=1}^{B}(s_{m}(h_{i,b})-\bar{s}_{m}(h_{i}))^{2}$$
11:   **end for**
12:   $\hat{S}_i = \sum_{m=1}^{p} \widehat{\mathrm{Var}}(s_m(h_i))$
13:   $\hat{S} \leftarrow \hat{S} \cup \{\hat{S}_i\}$
14: **end for**
15: Select the indices of the k smallest elements in Ŝ and store them in I
16: **return** Y∗, I
Algorithm 2 Pseudo-code for Spectral Self-supervised Feature Selection (SSFS)

Input: Dataset X ∈ R^{n×p} (with n samples and p features), number of eigenvectors to select k, number of eigenvectors to compute d, surrogate eigenvector selection models H = {hi | i ∈ [d]}, surrogate feature selection models F = {fi | i ∈ [d]}, feature scoring function s : F → R^p, number of resamples B, number of features to select ℓ

1: Apply Algorithm 1 to obtain the pseudo-labels and the selected eigenvectors: Y∗, I = **EigenvectorSelection**(X, k, d, H, s, B)
2: for i in I do
3:   Fit the model fi on (X, y∗_i)
4:   Calculate the feature scores s(fi)
5:   Normalize the feature scores such that their sum is one
6: **end for**
7: for m = 1 to p do
8:   Compute the final score for the m-th feature: score(m) = max_{i∈I} s_m(fi)
9: **end for**
10: **return** a list of the ℓ features with the highest scores

## 4.1 Convergence Of The Laplacian Eigenvectors

In many applications, the high-dimensional observations are assumed to reside close to some manifold M with low intrinsic dimensionality, which we denote by d. Many papers in recent decades have analyzed the relation between the Laplacian eigenvectors and the manifold structure Von Luxburg et al. (2008); Singer and Wu (2017); García Trillos et al. (2020); Wormell and Reich (2021); Dunson et al. (2021); Calder and Trillos (2022). More formally, let vk denote the k-th eigenvector of the graph Laplacian, and let gk denote the k-th eigenfunction of the Laplace-Beltrami (LB) operator. We usually assume that gk is normalized such that

$$\int_{\mathcal{M}}g_{k}(\mathbf{x})^{2}\mu(\mathbf{x})\,dV(\mathbf{x})=1,$$

where µ is the distribution function over M. Under several assumptions and proper normalization of gk, we have

$$\mathbf{v}_{k}\ {\xrightarrow[n\to\infty]{}}\ g_{k}(X),$$

where gk(X) is a vector of size n containing samples of the function gk at the n rows of the data matrix X. Let us provide a simple example. Consider n points sampled uniformly at random over the interval [0, 1]. The LB operator over an interval is the second derivative, whose eigenfunctions are the harmonic functions gk(x) = cos(kπx). Figure 3 shows the leading Laplacian eigenvector computed with n = 10^2, 10^3 and 3·10^3 points. As n → ∞, the difference between gk(X) and vk decreases to 0.

Here, we use a convergence result from Cheng and Wu (2022), derived under the following assumptions: (i) the n observations were generated according to a uniform distribution over the manifold, such that µ(x) equals a constant µ; (ii) let λk denote the eigenvalue associated with the eigenfunction gk; to ensure the stability of the eigenvectors, we assume a spectral gap between the smallest K eigenvalues bounded away from 0, such that

$$\operatorname*{min}_{i=1}^{K-1}(\lambda_{i+1}-\lambda_{i})>\gamma>0;$$

(iii) the graph weights are computed by a Gaussian kernel exp(−∥xi − xj∥^2/ϵn), with a bandwidth ϵn → 0^+ as n → ∞ that satisfies $\epsilon_n^{d/2+2} > C_K \frac{\log n}{n}$ for a constant C_K.

Theorem 1 (Theorem 5.4 of Cheng and Wu (2022)) *For* n → ∞ *and under assumptions (i)-(iii), with probability larger than* $1-4K^{2}n^{-10}-(2K+6)n^{-9}$*, the k-th eigenvector* vk *of the unnormalized Laplacian satisfies*

$$\left\|\mathbf{v}_{k}-\alpha\mathbf{g}_{k}(\mathbf{X})\right\|_{2}={\mathcal{O}}(\epsilon_{n})+{\mathcal{O}}\bigg({\sqrt{\frac{\log n}{n\epsilon_{n}^{d/2+1}}}}\bigg),\qquad k\leq K,\tag{3}$$

where ∥vk∥ = 1 and |α| = o(1).

## 4.2 The Product Of Manifold Model

In a product of two manifolds, denoted M = M1 × M2, every point x ∈ M is associated with a pair of points x1, x2, where x1 ∈ M1 and x2 ∈ M2. We denote by π1(x), π2(x) the canonical projections of a point in M to its corresponding points x1, x2 in M1, M2, respectively. For example, a 2D rectangle is a product of two 1D manifolds, where π1(x) and π2(x) select, respectively, the first and second coordinates.
## 4.2 The Product Of Manifold Model

In a product of two manifolds, denoted $\mathcal{M} = \mathcal{M}_1 \times \mathcal{M}_2$, every point $\mathbf{x} \in \mathcal{M}$ is associated with a pair of points $(\mathbf{x}_1, \mathbf{x}_2)$, where $\mathbf{x}_1 \in \mathcal{M}_1$ and $\mathbf{x}_2 \in \mathcal{M}_2$. We denote by $\pi_1(\mathbf{x}), \pi_2(\mathbf{x})$ the canonical projections of a point in $\mathcal{M}$ to its corresponding points $\mathbf{x}_1, \mathbf{x}_2$ in $\mathcal{M}_1, \mathcal{M}_2$, respectively. For example, a 2D rectangle is a product of two 1D manifolds, where $\pi_1(\mathbf{x})$ and $\pi_2(\mathbf{x})$ select, respectively, the first and second coordinates.

We denote by $g_i^{(1)}(\mathbf{x}), g_i^{(2)}(\mathbf{x})$ the $i$-th eigenfunction of the LB operator of $\mathcal{M}_1, \mathcal{M}_2$, respectively, evaluated at a point $\mathbf{x}$, and by $\lambda_i^{(1)}, \lambda_i^{(2)}$ the corresponding eigenvalues. In a manifold product $\mathcal{M}_1 \times \mathcal{M}_2$, the eigenfunctions are equal to the pointwise product of the eigenfunctions of the LB operators of $\mathcal{M}_1, \mathcal{M}_2$, and the corresponding eigenvalues are equal to the sum of the eigenvalues, such that
$$g_{l,k}(\mathbf{x}) = g_l^{(1)}(\pi_1(\mathbf{x})) \cdot g_k^{(2)}(\pi_2(\mathbf{x})), \qquad \lambda_{l,k} = \lambda_l^{(1)} + \lambda_k^{(2)}. \tag{4}$$
For simplicity, we denote by $\mathbf{v}_{l,k}$ the $(l, k)$-th eigenvector of the Laplacian matrix, as ordered by $\lambda_{l,k}$. An example of a product of 2 manifolds is illustrated in Figure 4b. The figure shows the leading eight eigenvectors of the graph Laplacian. The eigenvectors are indexed by the vector $\mathbf{b} = [l, k]$. The full details of this example will be provided in the next section.

Figure 3: Panel (a) shows a scatter plot of the noisy MNIST dataset, containing digits 3 and 8, where each image is located according to its coordinates in the third and fourth eigenvectors. Panel (b) shows the leading eigenvector of a graph computed over $n$ points on a 1D interval and the leading eigenfunction $\cos(\pi x)$.

Figure 4: Panel (a) illustrates three features of a simulated dataset. Each feature is equal to a different polynomial of the same random latent variable $\theta_1$. Each point in the 3D scatter plot is located according to the values of the three features and colored by the value of $\theta_1$. Panel (b) shows the eigenvectors of the graph Laplacian matrix. Each point is located according to the value of $(\theta_1, \theta_2)$ and colored by the value of its corresponding element in the eight leading eigenvectors. The eigenvectors are indexed by the vector $\mathbf{b}$, whose elements $b_i$ determine the eigenvector order in the submanifold $\mathcal{M}^{(i)}$.

## 4.3 Considerations For Eigenvector Selection In A Product-Of-Manifold Model

We analyze a setting where the $p$ features can be partitioned into $H$ sets according to their dependencies on a set of latent and independent random variables $\theta_1, \ldots, \theta_H$ with some bounded support. A feature $\mathbf{f}_m$ that depends on $\theta_h$ consists of samples from a smooth transformation $F_m: \theta_h \mapsto \mathbf{f}_m$. We denote by $\mathbf{X}^{(h)}$ the submatrix that contains the features associated with $\theta_h$. The smoothness of the transformations implies that the rows of $\mathbf{X}^{(h)}$ constitute random samples from a manifold of intrinsic dimension 1. Figure 4a shows a 3D scatter plot, where the axes are three such features with values generated by three polynomials of $\theta_1$. The figure is an illustration of a manifold with a single intrinsic dimension embedded in a 3D space. The independence of the latent variables $\theta_h$ implies that the observations $\mathbf{x}_i \in \mathbb{R}^p$ are samples from a product of $H$ manifolds, each of dimensionality 1 (Zhang et al., 2021; He et al., 2023). The canonical projection $\pi^{(h)}(\mathbf{x})$ selects the features associated with the latent variable $\theta_h$. According to the eigenfunction properties in equation 4, the eigenfunctions are equal to the product of $H$ eigenfunctions of the submanifolds $\mathcal{M}^{(h)}$, and can thus be indexed by a vector of size $H$, which we denote by $\mathbf{b} \in \mathbb{N}^H$:
$$g_{\mathbf{b}}(\mathbf{x}) = \prod_{h=1}^{H} g_{\mathbf{b}_h}^{(h)}\big(\pi^{(h)}(\mathbf{x})\big), \qquad \lambda_{\mathbf{b}} = \sum_{h=1}^{H} \lambda_{\mathbf{b}_h}^{(h)}. \tag{5}$$
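A discrete analogue of equations 4 and 5 can be checked directly: for the Cartesian product of two graphs (e.g., a 2D grid built from two path graphs, the discrete counterpart of a rectangle), the Laplacian is a Kronecker sum, its eigenvalues are the pairwise sums of the factors' eigenvalues, and its eigenvectors are Kronecker products. The sketch below verifies the eigenvalue statement; it illustrates the algebraic structure, not the manifold convergence itself.

```python
import numpy as np

def path_laplacian(n):
    """Unnormalized Laplacian of a path graph with n nodes (a discrete interval)."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

L1, L2 = path_laplacian(8), path_laplacian(5)
lam1, _ = np.linalg.eigh(L1)
lam2, _ = np.linalg.eigh(L2)

# Laplacian of the Cartesian product graph (a 2D grid) is the Kronecker sum.
Lprod = np.kron(L1, np.eye(5)) + np.kron(np.eye(8), L2)
lam, _ = np.linalg.eigh(Lprod)

pairwise_sums = np.sort((lam1[:, None] + lam2[None, :]).ravel())
print(np.allclose(np.sort(lam), pairwise_sums))   # True: eigenvalues add across factors
```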
Let $\mathbf{e}^{(h)}$ denote an index vector with elements $e^{(h)}_j = 1$ if $j = h$ and 0 otherwise. The first eigenfunctions $g_0^{(h)}$ are equal to a constant for all submanifolds $\mathcal{M}^{(h)}$. Thus, the eigenfunctions $g_{\mathbf{e}^{(h)}}$ are equal to
$$g_{\mathbf{e}^{(h)}}(\mathbf{x}) = g_1^{(h)}\big(\pi^{(h)}(\mathbf{x})\big) \prod_{j \neq h}^{H} g_0^{(j)}\big(\pi^{(j)}(\mathbf{x})\big) = C g_1^{(h)}\big(\pi^{(h)}(\mathbf{x})\big),$$
where $C$ is some constant. Importantly, the functions $g_1^{(h)}$, and thus $g_{\mathbf{e}^{(h)}}$, depend only on the parameter $\theta_h$. We define by $\mathcal{E}$ the family of vectors in $\mathbb{N}^H$ that include the indicator vectors $\mathbf{e}^{(h)}$ or their integer products (e.g., $2\mathbf{e}^{(h)}$, $3\mathbf{e}^{(h)}$, etc.). A similar derivation as in equation 5 shows that for every index vector $\mathbf{b} \in \mathcal{E}$, the eigenfunction $g_{\mathbf{b}}$ depends on only one of the latent variables $\theta_1, \ldots, \theta_H$.

On the relevance of features for choosing eigenvectors as pseudo-labels. Our goal is to select a set of features that contains at least one (or more) features from each of the $H$ partitions. Such a choice would ensure that the set contains information about all the $H$ latent variables. Clearly, this imposes a requirement on the set of pseudo-label vectors: we would like at least one vector of pseudo-labels that is correlated with each latent variable. It is instructive to consider the asymptotic case where $n \to \infty$; then, according to Theorem 1 and the properties of manifold products, the eigenvectors $\mathbf{v}_{\mathbf{b}}$ converge to $g_{\mathbf{b}}(\mathbf{X})$. A proper choice of eigenvectors for pseudo-labels would be the set $\{\mathbf{v}_{\mathbf{e}^{(h)}}\}_{h=1}^{H}$, as each of these vectors converges to the samples $g_1^{(h)}(\mathbf{X})$ and is thus associated with a different latent variable. However, there is no guarantee that these eigenvectors have the smallest eigenvalues. Consider, for example, the data illustrated in Figure 4a. Figure 4b shows the leading eight eigenvectors of the graph Laplacian. The leading two eigenvectors are functions of $\theta_1$, and by choosing them we completely disregard $\theta_2$, with an obvious impact on the feature selection accuracy. A better choice for pseudo-labels would be the first and third eigenvectors, indexed by $\mathbf{e}^{(1)}$ and $\mathbf{e}^{(2)}$. Therefore, we need an improved criterion for selecting eigenvectors to serve as pseudo-labels for the feature selection process. The following theorem, proven in Appendix A.1, implies that the feature vectors $\mathbf{f}_i$ are relevant for developing such a criterion.

Theorem 2 *We assume that the samples are generated according to our specified latent variable model and that assumptions (i)-(iii) are satisfied. Let $\mathbf{f}_i \in \mathbb{R}^n$ be a normalized, zero mean feature vector associated with parameter $\theta_h$. Then,*
$$\mathbf{f}_i^T \mathbf{v}_{\mathbf{b}} = \mathcal{O}(\epsilon_n) + \mathcal{O}\left( \sqrt{\frac{\log n}{n \epsilon_n^{d/2+1}}} \right) \qquad \forall \mathbf{b} \notin \mathcal{E}.$$

The theorem is proved via the following two steps; the details of the proof are provided in the appendix. Step 1: We show that the inner product $\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})$ can be written as the inner product of two random vectors with independent elements. Thus, $\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})$ is of order $\mathcal{O}(1/\sqrt{n})$ by standard concentration inequalities. Step 2: Combine the convergence of $\mathbf{v}_{\mathbf{b}}$ to $g_{\mathbf{b}}(\mathbf{X})$ with the concentration result of Step 1. Theorem 2 implies that one can use the inner products to avoid selecting less informative eigenvectors that depend on more than one variable.
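The inner-product criterion suggested by Theorem 2 takes only a few lines to compute. The sketch below is a minimal illustration, assuming the features are given as the columns of a matrix and the computed eigenvectors as the columns of another; `eigenvector_feature_alignment` is an illustrative name.

```python
import numpy as np

def eigenvector_feature_alignment(X, V):
    """For each eigenvector v_b, the largest |f_i^T v_b| over all normalized,
    zero-mean features f_i. By Theorem 2, eigenvectors that mix several latent
    variables stay near the O(1/sqrt(n)) noise floor for *every* feature."""
    F = X - X.mean(axis=0)
    F = F / np.linalg.norm(F, axis=0)     # columns: normalized feature vectors
    V = V / np.linalg.norm(V, axis=0)
    return np.abs(F.T @ V).max(axis=0)    # one alignment score per eigenvector

# Eigenvectors scoring well above 1/sqrt(n) are candidate pseudo-labels, e.g.
# candidates = np.argsort(-eigenvector_feature_alignment(X, V))[:k]
```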
Further guarantees, such as selection of a single vector for each variable, require additional assumptions on the feature values, which we do not make here. In Algorithm 1, we compute the normalized measure of stability for the feature scores $\{s_m(h_i)\}_{m=1}^p$ obtained by the model $h_i$ to predict the labels computed from the $i$-th eigenvector. When the model $h_i$ is linear (or generalized linear), the score is strongly related to the simple inner product of Theorem 2. In that case, Theorem 2 indicates that the inner product between an uninformative eigenvector and all features is close to zero. Thus, we expect the variance (after normalization) to be similar to the variance of random positive noise. The advantage of the stability measure over the simple linear product as an eigenvector selection criterion is that it allows for more flexibility in the choice of model.

## 5 Experiments

## 5.1 Evaluation On Real World Datasets

Data and experiment description. We applied SSFS to eight real-world datasets from various domains. Table 4 in Appendix C.1 gives the number of features, samples, and classes in each dataset. All datasets are available online.¹ We compare the performance of our approach to the following alternatives: (i) the standard Laplacian score (LS) (He et al., 2005), (ii) Multi-Cluster Feature Selection (MCFS) (Cai et al., 2010), (iii) Nonnegative Discriminative Feature Selection (NDFS) (Li et al., 2012), (iv) Unsupervised Discriminative Feature Selection (UDFS) (Yang et al., 2011), (v) Laplacian Score-regularized Concrete Autoencoder (LS-CAE) (Shaham et al., 2022), (vi) Unsupervised Feature Selection Based on Iterative Similarity Graph Factorization and Clustering by Modularity (KNMFS) (Oliveira et al., 2022), and (vii) a naive baseline, where random selection is applied with a different seed for each number of selected features.

For evaluation, we adopt a criterion that is similar, but not identical, to the one used in prior studies (Li et al., 2012; Wang et al., 2015). We select the top 2, 5, 10, 20, 30, 40, 50, 100, 150, 200, 250, and 300 features as scored by each method. Then, we apply k-means 20 times on the selected features and report the average clustering accuracy (along with the standard deviation), computed by (Cai et al., 2011):
$$\mathrm{ACC} = \max_{\pi} \frac{1}{N} \sum_{i=1}^{N} \delta(\pi(c_i), l_i),$$
where $c_i$ and $l_i$ are the assigned cluster and true label of the $i$-th data point, respectively, $\delta(x, y)$ is the delta function, which equals one if $x = y$ and zero otherwise, and $\pi$ represents a permutation of the cluster labels, optimized via the Kuhn-Munkres algorithm (Munkres, 1957).

Unlike the evaluation approach taken by Wang et al. (2015); Li et al. (2012), which entailed a grid search over hyper-parameters to report the optimal results for each method, our analysis employed the default hyper-parameters as specified by the respective implementations, including SSFS. This approach aims for a fair comparison that avoids favoring methods that are more sensitive to hyper-parameter adjustments. In addition, it acknowledges the practical constraints of unsupervised settings, where hyper-parameter tuning is typically infeasible. Such differences in the approach to hyper-parameter selection could account for discrepancies between the results reported in previous studies and those in our study. See Appendix C for additional details.
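For reference, the clustering accuracy above can be computed with a few lines of Python; the sketch below uses SciPy's implementation of the Kuhn-Munkres (Hungarian) algorithm to find the optimal permutation $\pi$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels_true, labels_pred):
    """ACC: accuracy under the best permutation of predicted cluster labels,
    found with the Kuhn-Munkres (Hungarian) algorithm."""
    labels_true, labels_pred = np.asarray(labels_true), np.asarray(labels_pred)
    clusters, classes = np.unique(labels_pred), np.unique(labels_true)
    # cost[i, j] = minus the number of points assigned cluster i with true label j
    cost = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, l in enumerate(classes):
            cost[i, j] = -np.sum((labels_pred == c) & (labels_true == l))
    rows, cols = linear_sum_assignment(cost)
    return -cost[rows, cols].sum() / len(labels_true)
```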
Table 1 shows, for each method, the highest average accuracy and the number of features for which it was achieved, similarly to Li et al. (2012); Wang et al. (2015). Figure 5 presents a comparative analysis of clustering accuracy across various datasets and methods, considering the full spectrum of selected features. This comparison aims to account for the inherent variance in each method, addressing a limitation where the criterion of the maximum accuracy over the number of selected features might inadvertently favor methods exhibiting higher variance.

¹https://jundongl.github.io/scikit-feature/datasets.html

Table 1: Average clustering accuracy on benchmark datasets, along with the standard deviation. The number of selected features yielding the best clustering performance is shown in parentheses; the best result for each dataset is highlighted in bold.

| Dataset | Random | LS | MCFS | NDFS | UDFS | KNMFS | LS-CAE | SSFS |
|---|---|---|---|---|---|---|---|---|
| COIL20 | 65.1±2.1(250) | 61.9±2.4(300) | 67.4±3.3(300) | 63.4±2.6(200) | 61.9±3.5(300) | **68.1±2.0(300)** | 64.2±3.1(30) | 67.1±2.8(300) |
| GISETTE | 70.2±0.1(150) | 70.0±0.0(250) | 70.7±0.0(5) | 58.3±1.9(100) | 69.1±0.1(50) | 54.9±0.0(40) | **70.8±0.0(200)** | 69.7±0.0(150) |
| Yale | 47.8±3.5(250) | 43.9±3.2(300) | 44.4±2.9(300) | 43.5±2.5(250) | 43.8±2.3(50) | 47.2±4.3(300) | 46.2±1.6(10) | **50.3±2.3(100)** |
| TOX-171 | 44.2±1.8(250) | 51.3±1.0(5) | 44.5±0.5(5) | 47.3±0.1(150) | 40.2±3.8(250) | 48.1±3.5(20) | 50.1±5.3(200) | **59.4±2.5(100)** |
| ALLAML | 73.2±1.7(300) | 72.2±0.0(200) | 75.0±0.0(150) | **76.6±0.7(2)** | 66.4±1.3(50) | 59.9±9.2(150) | 63.9±0.0(2) | 75.4±3.2(100) |
| Prostate-GE | 63.0±0.7(30) | 58.8±0.0(2) | 61.8±0.0(100) | 58.8±0.0(2) | 63.6±0.3(50) | 62.7±0.0(50) | 63.7±0.0(40) | **75.9±0.5(10)** |
| ORL | 58.9±1.8(300) | 51.6±1.7(300) | 57.0±2.8(300) | 59.1±2.5(300) | 57.3±2.4(300) | **63.2±2.0(150)** | 61.0±2.0(300) | 61.0±2.2(200) |
| ISOLET | 59.5±1.8(300) | 48.9±2.0(300) | 50.7±1.5(300) | **63.1±2.4(200)** | 44.6±1.7(300) | 52.7±2.3(300) | 63.0±2.6(300) | 59.9±1.4(100) |
| Mean rank | 4.12 | 5.88 | 4.62 | 4.94 | 6.44 | 4.38 | 3.31 | 2.31 |
| Median rank | 4.0 | 6.5 | 5.5 | 5.5 | 6.5 | 4.5 | 2.75 | 2.25 |

Figure 5: Clustering accuracy vs. the number of selected features on eight real-world datasets.

For SSFS, we use the following surrogate models: (i) the eigenvector selection model $h_i$ is set to logistic regression with $\ell_2$ regularization. We use scikit-learn's (Pedregosa et al., 2011) implementation with a default regularization value of C = 1.0. Feature scores are equal to the absolute values of the model's coefficients. (ii) The feature selection model $f_i$ is set to an XGBoost classifier with *Gain* feature importance. We use the popular implementation by DMLC (Chen and Guestrin, 2016). We employ the default hyper-parameters for all surrogate models as provided in their widely used implementations; however, one can undoubtedly leverage domain knowledge to select surrogate models and hyper-parameters better suited to the specific domain. For each dataset, SSFS selects k from d = 2k eigenvectors, where k is the number of distinct classes in the data.
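The surrogate-model configuration described above can be written down compactly. The sketch below uses the defaults of the scikit-learn and DMLC implementations (`max_iter` is raised only to avoid convergence warnings and is our own addition); `feature_scores` is an illustrative helper, not part of either library.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Surrogate models with the default hyper-parameters used in the experiments.
h = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # eigenvector selection
f = XGBClassifier(importance_type="gain")                   # feature selection

def feature_scores(model, X, y):
    """Fit a surrogate model and return its normalized feature scores:
    |coefficients| for (generalized) linear models, gain importance otherwise."""
    model.fit(X, y)
    s = (np.abs(model.coef_).ravel() if hasattr(model, "coef_")
         else model.feature_importances_)
    return s / s.sum()   # normalized so the scores sum to one
```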
Results. SSFS is ranked as the best method in four out of eight datasets. It shows a significant advantage over the other competing methods, especially on the Yale, TOX-171, and Prostate-GE datasets. As discussed in Section 3.1, the Prostate-GE dataset has several outliers, and the fourth eigenvector plays a vital role in providing information about the class labels compared to the earlier eigenvectors. SSFS can effectively deal with such challenging scenarios, which might explain its superior performance. Although our method is not ranked first in the other four datasets, it produces results comparable to the leading method.

## 5.2 Ablation Study

We demonstrate the importance of three SSFS components: (i) eigenvector selection, (ii) self-supervision with nonlinear surrogate models, and (iii) binarization of the Laplacian eigenvectors along with classifiers instead of regressors as surrogate models. The ablation study is performed on a synthetic dataset described in Section 5.2.1, and on the eight real datasets used for evaluation in Section 5.1.

## 5.2.1 Synthetic Data

Figure 6: Visualizations of the synthetic data. Panel (a): scatter plot of the first five features corresponding to the Gaussian blobs, colored by the real label. Panel (b): the covariance matrix of the dataset. Panel (c): the top-4 eigenvectors; samples are sorted by the label and partitioned by the vertical bar, colored according to the output of k-medoids.

Table 2: Ablation study: average clustering accuracy on benchmark datasets. The number of selected features yielding the best clustering accuracy over the feature range is shown in parentheses.

| Dataset | no selection | no XGBoost | no selection, regression | regression | SSFS |
|---|---|---|---|---|---|
| COIL20 | 65.0 (150) | 62.1 (150) | 70.5 (100) | 69.0 (300) | 67.1 (300) |
| GISETTE | 72.5 (10) | 64.9 (300) | 64.6 (5) | 64.6 (5) | 69.7 (150) |
| Yale | 48.6 (50) | 42.7 (250) | 49.8 (200) | 47.4 (250) | 50.3 (100) |
| TOX-171 | 50.9 (2) | 45.6 (20) | 45.0 (5) | 45.5 (50) | 59.4 (100) |
| ALLAML | 75.4 (100) | 66.7 (50) | 71.1 (300) | 71.1 (300) | 75.4 (100) |
| Prostate-GE | 59.8 (30) | 69.6 (30) | 61.8 (150) | 61.8 (150) | 75.9 (10) |
| ORL | 60.0 (300) | 56.8 (300) | 58.5 (300) | 58.5 (200) | 61.1 (200) |
| ISOLET | 57.0 (150) | 57.1 (300) | 61.3 (300) | 58.7 (300) | 59.9 (100) |
| Mean rank | 2.94 | 4.0 | 3.0 | 3.5 | 1.56 |
| Median rank | 2.5 | 4.5 | 3.5 | 3.5 | 1.25 |

We generate a synthetic dataset as follows: the first five features are generated from two isotropic Gaussian blobs; these blobs define the clusters of interest. An additional 45 nuisance features are generated according to a multivariate Gaussian distribution with zero mean and a block-structured covariance matrix $\Sigma$, such that each block contains 15 features. The covariance elements $\Sigma_{i,j}$ are equal to 0.5 if $i, j$ are in the same block and to 0.01 otherwise. We generated a total of 500 samples; see Appendix B.1 for further details. Figure 6a shows a scatter plot of the first five features, and Figure 6b shows a visualization of the covariance matrix.

Our goal is to identify the features that can distinguish between the two groups. As Figure 6a demonstrates, the two clusters are linearly separated by three distinct features. Furthermore, examining Figure 6c reveals that while the fourth eigenvector distinctly separates the clusters, the higher-ranked eigenvectors do not exhibit this behavior. This pattern arises due to the correlated noise, which significantly influences the graph structure.
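A minimal sketch of this data-generating process is given below. It assumes, per the description above, that the diagonal covariance entries also fall under the same-block rule (the text does not specify them separately); variable names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)
n, blocks, block_size = 500, 3, 15

# Five informative features from two isotropic Gaussian blobs.
X_blobs, y = make_blobs(n_samples=n, n_features=5, centers=2,
                        cluster_std=1.0, random_state=0)

# 45 nuisance features: zero-mean Gaussian noise with block covariance
# (0.5 within a block -- including the diagonal -- and 0.01 across blocks).
q = blocks * block_size
Sigma = np.full((q, q), 0.01)
for b in range(blocks):
    s = slice(b * block_size, (b + 1) * block_size)
    Sigma[s, s] = 0.5
X_noise = rng.multivariate_normal(np.zeros(q), Sigma, size=n)

X = np.hstack([X_blobs, X_noise])   # shape (500, 50); features 0-4 are informative
```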
The evaluation on this dataset is performed by calculating the true positive rate (TPR) with respect to the top-selected features and the discriminative features sampled from the two Gaussian blobs. The performance on the real-world datasets is measured as in Section 5.1.

## 5.2.2 Results

Table 3: Synthetic data results: top-3 selected features (sorted in descending order by rank), along with their TPR (relative to the first five features).

| Method | Top-3 Features | TPR |
|---|---|---|
| SSFS | 2, 9, 19 | 0.3 |
| (no XGBoost) | 4, 3, 2 | 1.0 |
| (no selection) | 43, 30, 49 | 0.0 |
| (regression) | 15, 17, 14 | 0.0 |
| MCFS | 47, 7, 43 | 0.0 |

Eigenvector selection. We compare to a variation of SSFS termed SSFS (no selection), where we do not filter the eigenvectors. We train the surrogate feature selector model on the leading k eigenvectors, with k set to the number of distinct classes in the data. Figure 7b shows that our eigenvector selection scheme provides an advantage in seven out of eight datasets. Similarly to Section 5.1, filtering the eigenvectors is especially advantageous on the Prostate-GE dataset, as our method successfully selects the most discriminative eigenvectors (see Figure 2a). On the synthetic dataset, the selection procedure provides a large advantage, as seen in Table 3. Figure 6c illustrates that the fourth eigenvector is the informative one with respect to the Gaussian blobs. Indeed, the fourth eigenvector and the third eigenvector are selected by the selection procedure. This eigenvector yields better features than MCFS and SSFS (no selection), which rely on the top two eigenvectors.

Classification and regression. We compare the following regression variants of SSFS, which use the original continuous eigenvectors as pseudo-labels (without binarization): (i) SSFS (regression): uses ridge regression for eigenvector selection and XGBoost regression for feature selection as surrogate models. (ii) SSFS (no selection, regression): uses the top k eigenvectors without binarization and XGBoost regression. Figure 7a and Table 2 show that SSFS performs best on six of the eight real-world datasets. Interestingly, when using continuous regression as a surrogate model, the selection procedure does not seem to provide an advantage compared to no selection.

Figure 7: Ablation study results on the real-world datasets. The best clustering accuracy over the number of selected features is shown for each method.

Complex nonlinear models as surrogate models. We compare SSFS to a variant of our method, denoted SSFS (no XGBoost), which employs logistic regression instead of XGBoost as the surrogate feature selector model. Figure 7b shows that XGBoost provides an advantage compared to the linear model on the real-world datasets. On the synthetic dataset, the linear variant provides better coverage of the top-3 features that separate the Gaussian blobs, compared to XGBoost (see Table 3 and Figure 6a). That is not surprising since, in this example, the cluster separation is linear in each informative feature. We note, however, that the top-ranked feature by SSFS with XGBoost is a discriminative feature for the clusters in the data (see Figure 6a); therefore, its selection can still be considered successful in the case of single feature selection.
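For completeness, the TPR of Table 3 can be computed as in the sketch below; this formulation (with the number of selected features in the denominator) is our reading of the table, since three informative picks out of three yield a TPR of 1.0.

```python
import numpy as np

def top_feature_tpr(scores, informative, ell=3):
    """Fraction of the top-ell ranked features that belong to the informative
    set (features 0-4 of the synthetic data)."""
    top = np.argsort(-np.asarray(scores))[:ell]
    return len(set(top) & set(informative)) / ell

# e.g. SSFS selecting features {2, 9, 19} gives 1/3 = 0.33, consistent with
# Table 3's rounded value of 0.3.
```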
## 6 Discussion And Future Work We proposed a simple procedure for filtering eigenvectors of the graph Laplacian and demonstrated that its application could have a significant impact on the outcome of the feature selection process. The selection is based on the stability of a classification model in predicting binary pseudo-labels. However, additional criteria, such as the accuracy of a specific model or the overlap of the chosen features for different eigenvectors, may provide information on the suitability of a specific vector for a feature selection task. We also illustrated the utility of expressive models, typically used for supervised learning, in unsupervised feature selection. Another direction for further research is using self-supervised approaches for *group feature selection* (GFS) for single modality (Sristi et al., 2022) or multi-modal data (Yang et al., 2023; Yoffe et al., 2024). In contrast to standard feature selection where the output is sparse, GFS aims to uncover groups of features with joint effects on the data. Learning models based on different eigenvectors may provide information about group effects with potential applications such as detecting brain networks in Neuroscience and gene pathways in genetics. ## References Muhammed Fatih Balın, Abubakar Abid, and James Zou. Concrete autoencoders: Differentiable feature selection and reconstruction. In *International conference on machine learning*, pages 444–453. PMLR, 2019. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003. Deng Cai, Chiyuan Zhang, and Xiaofei He. Unsupervised feature selection for multi-cluster data. In *Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining*, pages 333–342, 2010. Deng Cai, Xiaofei He, and Jiawei Han. Locally consistent concept factorization for document clustering. IEEE Transactions on Knowledge and Data Engineering, 23(6):902–913, 2011. Jeff Calder and Nicolas Garcia Trillos. Improved spectral convergence rates for graph laplacians on ε-graphs and k-nn graphs. *Applied and Computational Harmonic Analysis*, 60:123–175, 2022. B Chandra and Rajesh K Sharma. Exploring autoencoders for unsupervised feature selection. In 2015 International Joint Conference on Neural Networks (IJCNN), pages 1–6. IEEE, 2015. Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In *Proceedings of the 22nd* acm sigkdd international conference on knowledge discovery and data mining, pages 785–794, 2016. Xiuyuan Cheng and Nan Wu. Eigen-convergence of gaussian kernelized graph laplacian by manifold heat interpolation. *Applied and Computational Harmonic Analysis*, 61:132–190, 2022. Ronald R Coifman and Stéphane Lafon. Diffusion maps. *Applied and computational harmonic analysis*, 21 (1):5–30, 2006. David B Dunson, Hau-Tieng Wu, and Nan Wu. Spectral convergence of graph Laplacian and heat kernel reconstruction in l∞ from random samples. *Applied and Computational Harmonic Analysis*, 55:282–336, 2021. Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. *The Annals* of Statistics, 32(2):407 - 499, 2004. Nicolás García Trillos, Moritz Gerlach, Matthias Hein, and Dejan Slepčev. Error estimates for spectral convergence of the graph laplacian on random geometric graphs toward the Laplace-Beltrami operator. Foundations of Computational Mathematics, 20(4):827–887, 2020. 
Kai Han, Yunhe Wang, Chao Zhang, Chao Li, and Chao Xu. Autoencoder inspired unsupervised feature selection. In *2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)*, pages 2941–2945. IEEE, 2018. Jesse He, Tristan Brugère, and Gal Mishne. Product manifold learning with independent coordinate selection. In *Topological, Algebraic and Geometric Learning Workshops 2023*, pages 267–277. PMLR, 2023. Xiaofei He, Deng Cai, and Partha Niyogi. Laplacian score for feature selection. In Proceedings of the 18th International Conference on Neural Information Processing Systems, NIPS'05, page 507–514, Cambridge, MA, USA, 2005. MIT Press. Leonard Kaufman and Peter J. Rousseeuw. *Finding Groups in Data: An Introduction to Cluster Analysis.* John Wiley, 1990. ISBN 978-0-47031680-1. Changhee Lee, Fergus Imrie, and Mihaela van der Schaar. Self-supervision enhanced feature selection with correlated gates. In *International Conference on Learning Representations*, 2021. Jundong Li, Jiliang Tang, and Huan Liu. Reconstruction-based unsupervised feature selection: An embedded approach. In *IJCAI*, pages 2159–2165, 2017. Xuelong Li, Han Zhang, Rui Zhang, Yun Liu, and Feiping Nie. Generalized uncorrelated regression with adaptive graph for unsupervised feature selection. IEEE transactions on neural networks and learning systems, 30(5):1587–1595, 2018. Zechao Li and Jinhui Tang. Unsupervised feature selection via nonnegative spectral analysis and redundancy control. *IEEE Transactions on Image Processing*, 24(12):5343–5355, 2015. Zechao Li, Yi Yang, Jing Liu, Xiaofang Zhou, and Hanqing Lu. Unsupervised feature selection using nonnegative spectral analysis. In *Proceedings of the AAAI conference on artificial intelligence*, volume 26, pages 1026–1032, 2012. Ofir Lindenbaum, Uri Shaham, Erez Peterfreund, Jonathan Svirsky, Nicolas Casey, and Yuval Kluger. Differentiable unsupervised feature selection based on a gated laplacian. *Advances in Neural Information* Processing Systems, 34:1530–1542, 2021. Kaveh Mahdavi, Jesus Labarta, and Judit Gimenez. Unsupervised feature selection for noisy data. In Advanced Data Mining and Applications: 15th International Conference, ADMA 2019, Dalian, China, November 21–23, 2019, Proceedings 15, pages 79–94. Springer, 2019. Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Ganesh Ramakrishnan, Micah Goldblum, Colin White, et al. When do neural nets outperform boosted trees on tabular data? arXiv preprint arXiv:2305.02997, 2023. James R. Munkres. Algorithms for the assignment and transportation problems. *Journal of The Society* for Industrial and Applied Mathematics, 10:196–210, 1957. URL https://api.semanticscholar.org/ CorpusID:15996572. Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. *Advances* in neural information processing systems, 14, 2001. Marcos de S Oliveira, Sergio R de M Queiroz, and Francisco de AT de Carvalho. Unsupervised feature selection method based on iterative similarity graph factorization and clustering by modularity. Expert Systems with Applications, 208:118092, 2022. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Benjamin Ricaud, Pierre Borgnat, Nicolas Tremblay, Paulo Gonçalves, and Pierre Vandergheynst. 
Fourier could be a data scientist: From graph fourier transform to signal processing on graphs. Comptes Rendus Physique, 20(5):474–488, 2019. Giorgio Roffo, Simone Melzi, Umberto Castellani, and Alessandro Vinciarelli. Infinite latent feature selection: A probabilistic latent graph-based ranking approach. In *Proceedings of the IEEE international conference* on computer vision, pages 1398–1406, 2017. Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326, 2000. Uri Shaham, Ofir Lindenbaum, Jonathan Svirsky, and Yuval Kluger. Deep unsupervised feature selection by discarding nuisance and correlated features. *Neural Networks*, 152:34–43, 2022. Jun Shao and CF Jeff Wu. A general theory for jackknife variance estimation. *The annals of Statistics*, pages 1176–1197, 1989. David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. *IEEE signal processing magazine*, 30(3):83–98, 2013. Amit Singer and Hau-Tieng Wu. Spectral convergence of the connection laplacian from random samples. Information and Inference: A Journal of the IMA, 6(1):58–123, 2017. Dinesh Singh, Phillip Febbo, Kenneth Ross, Donald Jackson, Judith Manola, Christine Ladd, Pablo Tamayo, Andrew Renshaw, Anthony D'Amico, Jerome Richie, Eric Lander, Massimo Loda, Philip Kantoff, Todd Golub, and William Sellers. Gene expression correlates of clinical prostate cancer behavior. *Cancer cell*, 1:203–9, 04 2002. doi: 10.1016/S1535-6108(02)00030-2. Ram Dyuthi Sristi, Gal Mishne, and Ariel Jaffe. Disc: Differential spectral clustering of features. Advances in Neural Information Processing Systems, 35:26269–26282, 2022. Jonathan Svirsky and Ofir Lindenbaum. Interpretable deep clustering for tabular data. In Forty-first International Conference on Machine Learning. Joshua B Tenenbaum, Vin de Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. *science*, 290(5500):2319–2323, 2000. Roman Vershynin. High-dimensional probability. *University of California, Irvine*, 2020. Ulrike Von Luxburg. A tutorial on spectral clustering. *Statistics and computing*, 17:395–416, 2007. Ulrike Von Luxburg, Mikhail Belkin, and Olivier Bousquet. Consistency of spectral clustering. *The Annals* of Statistics, pages 555–586, 2008. Suhang Wang, Jiliang Tang, and Huan Liu. Embedded unsupervised feature selection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015. Caroline L Wormell and Sebastian Reich. Spectral convergence of diffusion maps: Improved error bounds and an alternative normalization. *SIAM Journal on Numerical Analysis*, 59(3):1687–1734, 2021. Xijiong Xie, Zhiwen Cao, and Feixiang Sun. Joint learning of graph and latent representation for unsupervised feature selection. *Applied Intelligence*, pages 1–14, 2023. Junchen Yang, Ofir Lindenbaum, Yuval Kluger, and Ariel Jaffe. Multi-modal differentiable unsupervised feature selection. In *Uncertainty in Artificial Intelligence*, pages 2400–2410. PMLR, 2023. Yi Yang, Heng Tao Shen, Zhigang Ma, Zi Huang, and Xiaofang Zhou. L2,1-norm regularized discriminative feature selection for unsupervised learning. In *Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Volume Two*, IJCAI'11, page 1589–1594. AAAI Press, 2011. ISBN 9781577355144. 
Shira Yoffe, Amit Moscovich, and Ariel Jaffe. Spectral extraction of unique latent variables. *arXiv preprint arXiv:2402.18741*, 2024.

Sharon Zhang, Amit Moscovich, and Amit Singer. Product manifold learning. In *International Conference on Artificial Intelligence and Statistics*, pages 3241–3249. PMLR, 2021.

Zheng Zhao and Huan Liu. Spectral feature selection for supervised and unsupervised learning. In *Proceedings of the 24th international conference on Machine learning*, pages 1151–1157, 2007.

Linling Zhu, Linsong Miao, and Daoqiang Zhang. Iterative laplacian score for feature selection. In *Chinese Conference on Pattern Recognition*, 2012.

Qi-Hai Zhu and Yu-Bin Yang. Discriminative embedded unsupervised feature selection. *Pattern Recognition Letters*, 112:219–225, 2018.

Xiaofeng Zhu, Shichao Zhang, Rongyao Hu, Yonghua Zhu, et al. Local and global structure preservation for robust unsupervised spectral feature selection. *IEEE Transactions on Knowledge and Data Engineering*, 30(3):517–529, 2017.

Xiaofeng Zhu, Shichao Zhang, Yonghua Zhu, Pengfei Zhu, and Yue Gao. Unsupervised spectral feature selection with dynamic hyper-graph learning. *IEEE Transactions on Knowledge and Data Engineering*, 34(6):3016–3028, 2020.

## A Product Of Manifold Perspective

## A.1 Proof Of Theorem 2

As mentioned in the main text, the theorem is proven with the following two main steps:

Step 1: Prove that the inner product $|\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})|$ is of order $\mathcal{O}(1/\sqrt{n})$ for all eigenfunctions $g_{\mathbf{b}}(\mathbf{X})$ indexed by a vector $\mathbf{b} \notin \mathcal{E}$.

Step 2: Combine the result of Step 1 with the convergence guarantees in Theorem 1 to bound the inner product $\mathbf{f}_i^T \mathbf{v}_{\mathbf{b}}$.

Step 1: According to our model, feature $i$ is equal to a smooth transformation of a single latent variable. Assume w.l.o.g. that the single variable is $\theta_1$, such that $\mathbf{f}_i = F_i(\theta_1)$. By the product of manifold assumption, the eigenfunction $g_{\mathbf{b}}$ is equal to
$$g_{\mathbf{b}}(\mathbf{x}) = \prod_{h=1}^{H} g_{\mathbf{b}_h}\big(\pi^{(h)}(\mathbf{x})\big) = g_{\mathbf{b}_1}\big(\pi^{(1)}(\mathbf{x})\big) \prod_{h=2}^{H} g_{\mathbf{b}_h}\big(\pi^{(h)}(\mathbf{x})\big).$$
Let $\otimes$ denote the Hadamard product. We can write the inner product $\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})$ as
$$\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X}) = \left(\mathbf{f}_i \otimes g_{\mathbf{b}_1}(\pi^{(1)}(\mathbf{X}))\right)^T \left(g_{\mathbf{b}_2}(\pi^{(2)}(\mathbf{X})) \otimes \cdots \otimes g_{\mathbf{b}_H}(\pi^{(H)}(\mathbf{X}))\right). \tag{6}$$
The vectors $\mathbf{f}_i$ and $g_{\mathbf{b}_1}(\pi^{(1)}(\mathbf{X}))$ both depend on $\theta_1$ only. The vectors $\{g_{\mathbf{b}_h}(\pi^{(h)}(\mathbf{X}))\}_{h=2}^{H}$ depend, respectively, on $\theta_2, \ldots, \theta_H$. We set
$$\mathbf{a}(\theta_1) = \mathbf{f}_i \otimes g_{\mathbf{b}_1}(\pi^{(1)}(\mathbf{X})), \qquad \mathbf{d}(\theta_2, \ldots, \theta_H) = g_{\mathbf{b}_2}(\pi^{(2)}(\mathbf{X})) \otimes \cdots \otimes g_{\mathbf{b}_H}(\pi^{(H)}(\mathbf{X})).$$
The elements of the random vectors $\mathbf{a}(\theta_1)$ and $\mathbf{d}(\theta_2, \ldots, \theta_H)$ are statistically independent. In addition, we have that $\|\mathbf{f}_i\| = 1$ and $\|g_{\mathbf{b}_h}(\pi^{(h)}(\mathbf{X}))\| = 1 + o(1)$ for all $h$; see for example (Cheng and Wu, 2022, Lemma 3.4). This implies that both $\mathbf{a}(\theta_1)$ and $\mathbf{d}(\theta_2, \ldots, \theta_H)$ are bounded by $1 + o(1)$. The inner product between two independent random vectors with unit norm and i.i.d. elements is of order $\mathcal{O}(1/\sqrt{n})$ (see for example (Vershynin, 2020, Remark 3.2.5)). Thus,
$$|\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})| = |\mathbf{a}(\theta_1)^T \mathbf{d}(\theta_2, \ldots, \theta_H)| = \mathcal{O}(1/\sqrt{n}).$$
Step 2: By the triangle inequality,
$$|\mathbf{f}_i^T \mathbf{v}_{\mathbf{b}}| = |\mathbf{f}_i^T(\mathbf{v}_{\mathbf{b}} - g_{\mathbf{b}}(\mathbf{X}) + g_{\mathbf{b}}(\mathbf{X}))| \le |\mathbf{f}_i^T(\mathbf{v}_{\mathbf{b}} - g_{\mathbf{b}}(\mathbf{X}))| + |\mathbf{f}_i^T g_{\mathbf{b}}(\mathbf{X})|. \tag{7}$$
The first term on the right-hand side of equation 7 can be bounded by the Cauchy-Schwartz inequality and Theorem 1 via
$$|\mathbf{f}_i^T(\mathbf{v}_{\mathbf{b}} - g_{\mathbf{b}}(\mathbf{X}))| \le \|\mathbf{f}_i\| \|\mathbf{v}_{\mathbf{b}} - g_{\mathbf{b}}(\mathbf{X})\| = \mathcal{O}(\epsilon_n) + \mathcal{O}\left(\sqrt{\frac{\log n}{n\epsilon_n^{d/2+1}}}\right). \tag{8}$$
The second term is bounded by Step 1. Since the term in equation 8 dominates $\mathcal{O}(1/\sqrt{n})$ for any $\epsilon_n$, this concludes the proof.

## B Ablation Study

## B.1 Synthetic Data Generation

For the synthetic data, we generated 500 samples, where we used the make_blobs function from scikit-learn to generate the first five features, with arguments cluster_std=1, centers=2.

Figure 8: Ablation study: clustering accuracy on real-world datasets.

## B.2 Detailed Experimental Results

In this section, we provide more detailed results of the ablation study. Figure 8 contains a comparative analysis of the performance over the whole selected feature range.

## C Additional Experimental Details

## C.1 Datasets

Table 4 provides information about the real-world datasets used in the experiments.

Table 4: Real-world datasets description.

| Dataset | Samples | Dim | Classes | Domain |
|---|---|---|---|---|
| COIL20 | 1440 | 1024 | 20 | Image |
| ORL | 400 | 1024 | 40 | Image |
| Yale | 165 | 1024 | 15 | Image |
| ALLAML | 72 | 7129 | 2 | Bio |
| Prostate-GE | 102 | 5966 | 2 | Bio |
| TOX-171 | 171 | 5748 | 4 | Bio |
| ISOLET | 1560 | 617 | 26 | Speech |
| GISETTE | 7000 | 5000 | 2 | Image |

For all datasets, the features are z-score normalized to have zero mean and unit variance.

## C.2 Hyperparameters

For SSFS, we use the same hyperparameters across datasets, as follows:

- The number of eigenvectors to select, k, is set to the number of distinct classes in the specific dataset; they are selected from a total of d = 2k eigenvectors.
- The size of each subsample is 95% of the original dataset.
- 500 resamples are performed for every dataset.
- For the affinity matrix, we used a Gaussian kernel with an adaptive scale $\sigma_i \sigma_j$, where $\sigma_i$ is the distance from $\mathbf{x}_i$ to its k = 2 nearest neighbor. The Laplacian we used is the symmetric normalized Laplacian.

In the ablation study, for regression, we use scikit-learn's ridge regression (for eigenvector selection) and DMLC's XGBoost regressor (for the final feature scoring) with their default hyperparameters. For all of the baseline methods, we used the default hyperparameters. Thus, for all methods, including SSFS, the hyperparameters are fixed across all datasets (excluding parameters that correspond to the number of features to select and the number of clusters). For LS, MCFS, UDFS, and NDFS, we used the implementation from the scikit-feature library² and inputted the same similarity matrices as SSFS for the methods that accepted such an argument. We fixed a bug in the MCFS implementation to choose by the maximum of the absolute value of the coefficients instead of the maximum of the coefficients (this improved MCFS performance). For LS-CAE, we used an existing implementation.³ For KNMFS, we used an existing implementation.⁴
2https://github.com/jundongl/scikit-feature 3https://github.com/jsvir/lscae 4https://github.com/marcosd3souza/KNMFS
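A sketch of the graph construction described in Appendix C.2 above — the adaptive-scale Gaussian affinity and the symmetric normalized Laplacian — is given below; `symmetric_normalized_laplacian` is an illustrative name, not from the released code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def symmetric_normalized_laplacian(X, k=2):
    """Gaussian affinity with adaptive scale sigma_i * sigma_j, where sigma_i
    is the distance from x_i to its k-th nearest neighbor (k = 2 in the
    experiments), followed by L_sym = I - D^{-1/2} W D^{-1/2}."""
    D = squareform(pdist(X))                  # pairwise Euclidean distances
    sigma = np.sort(D, axis=1)[:, k]          # column 0 is the self-distance
    W = np.exp(-D ** 2 / (sigma[:, None] * sigma[None, :]))
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    return np.eye(len(X)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
```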
# Revisiting Topic-Guided Language Models

Carolina Zheng *carozheng@cs.columbia.edu*
Department of Computer Science, Columbia University

Keyon Vafa *kv2294@columbia.edu*
Department of Computer Science, Columbia University

David M. Blei *david.blei@columbia.edu*
Department of Statistics and Department of Computer Science, Columbia University

Reviewed on OpenReview: *https://openreview.net/forum?id=lXBEwFfxpA*

## Abstract

A recent line of work in natural language processing has aimed to combine language models and topic models. These *topic-guided language models* augment neural language models with topic models, unsupervised learning methods that can discover document-level patterns of word use. This paper compares the effectiveness of these methods in a standardized setting. We study four topic-guided language models and two baselines, evaluating the held-out predictive performance of each model on four corpora. Surprisingly, we find that *none* of these methods outperform a standard LSTM language model baseline, and most fail to learn good topics. Further, we train a probe of the neural language model that shows that the baseline's hidden states already encode topic information. We make public all code used for this study.

## 1 Introduction

Recurrent neural networks (RNNs) and LSTMs have been an important class of models in the development of methods for many tasks in natural language processing, including machine translation, summarization, and speech recognition. One of the most successful applications of these models is in language modeling, where they are effective at modeling small text corpora. Even with the advent of transformer-based language models, RNNs and LSTMs can outperform non-pretrained transformers on various small datasets (Melis et al., 2020).

While powerful, RNN- and LSTM-based models struggle to capture long-range dependencies in their context history (Bai et al., 2018; Sankar et al., 2019). Additionally, they are not designed to learn interpretable structure in a corpus of documents. To this end, multiple researchers have proposed adapting these models by incorporating topic models (Dieng et al., 2017; Lau et al., 2017; Rezaee & Ferraro, 2020; Guo et al., 2020). The motivation for combining language models and topic models is to decouple local syntactic structure, which can be modeled by a language model, from document-level semantic concepts, which can be captured by a topic model (Khandelwal et al., 2018; O'Connor & Andreas, 2021). The topic model component is also designed to uncover latent structure in documents. We refer to these models as *topic-guided language models*. Broadly, this body of research has reported good results: topic-guided language models improve next-word predictive performance and learn interpretable topics.

In this work, we re-investigate this class of models by evaluating four representative topic-guided language model (TGLM) papers in a unified setting. We train the models from Dieng et al. (2017); Lau et al. (2017); Rezaee & Ferraro (2020); Guo et al. (2020) on three document-level corpora and evaluate their held-out perplexity. Unlike some prior work, during next-word prediction, we take care to condition the topic model component on only previous words, rather than the entire document. Moreover, we use a baseline language model that is conditioned on all previously seen document words, rather than being restricted to the current sentence (Lau et al., 2017; Rezaee & Ferraro, 2020; Guo et al., 2020).
Additionally, we choose baseline language models with comparable model sizes to ensure valid comparisons. Our finding: no predictive improvement of TGLMs over a standard LSTM-LM baseline (Zaremba et al., 2014).

In order to understand why topic-guided language models offer no predictive improvement, we probe the LSTM-LM's hidden representations. A probe is a trained predictor used to measure the extent to which fitted "black-box" models, such as neural models, have learned specific linguistic features of the input (Hewitt & Liang, 2019). The probe reveals that the LSTM-LM already encodes topic information, rendering a formal topic model component redundant.

Additionally, topic-guided language models were developed to provide insight into text corpora by uncovering latent topics. This method of exploratory text analysis is commonly used in the social sciences and digital humanities (Griffiths & Steyvers, 2004; Blei & Lafferty, 2007; Grimmer & Stewart, 2013; Mohr & Bogdanov, 2013). Here, we show that the topics learned by topic-guided language models are not better than a standard topic model and, for some of the models, qualitatively poor.

This paper contributes to a line of reproducibility studies in machine learning that aim to evaluate competing methods in a consistent and equitable manner. These studies have uncovered instances where results are not directly comparable, as reported numbers are borrowed from prior works that used different experimental settings (Marie et al., 2021; Hoyle et al., 2021). Furthermore, they identify cases where baselines are either too weak or improperly tuned (Dacrema et al., 2019; Nityasya et al., 2023). We observe analogous issues within the topic-guided language modeling literature. To support transparency and reproducibility, we make public all code used in this study.¹

Finally, we consider how these insights apply to other models. While prior work has incorporated topic model components into RNNs and LSTMs, the topic-guided language model framework is agnostic to the class of neural language model used. We conclude by discussing how the results in this paper are relevant to researchers considering incorporating topic models into more powerful neural language models, such as transformers.

## 2 Study Design

Let $\mathbf{x}_{1:T} = \{x_1, \ldots, x_T\}$ be a sequence of tokens collectively known as a document, where each $x_t$ indexes one of $V$ words in a vocabulary (words outside the vocabulary are mapped to a special out-of-vocabulary token). Given a corpus of documents, the goal of language modeling is to learn a model $p(\mathbf{x}_{1:T})$ that approximates the probability of observing a document. A document can be modeled autoregressively using the chain rule of probability,
$$p(\mathbf{x}_{1:T}) = \prod_{t=1}^{T} p(x_t \mid \mathbf{x}_{<t}), \tag{1}$$
where $\mathbf{x}_{<t}$ denotes all the words in a document before $t$. A language model parameterizes the predictive distribution of the next word, $p_\mu(x_t \mid \mathbf{x}_{<t})$, with a set of parameters $\mu$. Given a set of documents indexed by $D_{\mathrm{train}}$, we compute a parameter estimate $\hat{\mu}$ by maximizing the log likelihood objective,
$$\sum_{d=1}^{D_{\mathrm{train}}} \sum_{t=1}^{T_d} \log p_\mu(x_{d,t} \mid \mathbf{x}_{d,<t}),$$
with respect to $\mu$.

Language models are evaluated using perplexity on a held-out set of documents. With $D_{\mathrm{test}}$ as the index set of the test documents, perplexity is defined as
$$\exp\left\{ -\frac{1}{\sum_d T_d} \sum_{d=1}^{D_{\mathrm{test}}} \sum_{t=1}^{T_d} \log p_{\hat{\mu}}(x_{d,t} \mid \mathbf{x}_{d,<t}) \right\}.$$
Perplexity is the inverse geometric average of the likelihood of observing each word in the set of test documents under the fitted model; a lower perplexity indicates a better model fit.

¹https://github.com/carolinazheng/revisiting-tglms
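As a sanity check of the definition, perplexity can be computed directly from the summed next-word log-likelihoods; the short sketch below is illustrative.

```python
import math

def perplexity(total_log_likelihood, total_tokens):
    """Corpus perplexity from summed next-word log-likelihoods (natural log)."""
    return math.exp(-total_log_likelihood / total_tokens)

# A model assigning every word probability 1/V has perplexity exactly V.
V, T = 10000, 250
assert abs(perplexity(T * math.log(1 / V), T) - V) < 1e-6
```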
## 3 Language Models And Topic Models

Here, we provide an overview of the two components of topic-guided language models: neural language models and topic models. The topic-guided language model literature has focused on models based on RNNs and LSTMs, which are the neural language models we focus on here.

RNN language model. A recurrent neural network (RNN) language model (Mikolov et al., 2010) defines each conditional probability in Equation (1) as
$$\mathbf{h}_{t-1} = f(x_{t-1}, \mathbf{h}_{t-2}) \tag{2}$$
$$p_{\mathrm{RNN}}(x_t \mid \mathbf{x}_{<t}) = \mathrm{softmax}(\mathbf{W}^\intercal \mathbf{h}_{t-1}), \tag{3}$$
where $\mathbf{W} \in \mathbb{R}^{D \times V}$ and $\mathbf{h}_t \in \mathbb{R}^D$. The hidden state $\mathbf{h}_{t-1}$ summarizes the information in the preceding sequence, while the function $f$ combines $\mathbf{h}_{t-1}$ with the word at time $t$ to produce a new hidden state, $\mathbf{h}_t$. The function $f$ is parameterized by a recurrent neural network (RNN). The parameter $\mathbf{W}$ and the RNN model parameters are trained by maximizing the log likelihood of training documents using backpropagation through time (BPTT) (Williams & Peng, 1990). (In practice, the backpropagation of gradients is truncated after a specified sequence length.) The model directly computes the predictive distribution of the next word, $p(x_t \mid \mathbf{x}_{<t})$.

The baselines use the widely used RNN architecture, the LSTM (Hochreiter & Schmidhuber, 1997), as the language model, which we call LSTM-LM (Zaremba et al., 2014). The LSTM architecture is described in Appendix A. To make full use of the document context, it is natural to condition on all previous words of the document when computing $p(x_t \mid \mathbf{x}_{<t})$. Even when the full document does not fit into memory, this can be done at no extra computational cost by storing the previous word's hidden state ($\mathbf{h}_{t-2}$ in Equation (2)) (Melis et al., 2017). This is our main baseline. One can also define $\mathbf{x}_{<t}$ to be only the previous words in the current sentence. In this scenario, the model will not condition on all prior words in the document. This is the LSTM-LM baseline used in many TGLM papers (Lau et al., 2017; Guo et al., 2020; Rezaee & Ferraro, 2020). We call this model the sentence-level LSTM-LM.

Topic model. Another way to model a document is with a bag-of-words model that represents documents as word counts. One such model is a probabilistic topic model, which assumes the observed words are conditionally independent given a latent variable $\boldsymbol{\theta}$. In a topic model, the probability of a document is
$$p(\mathbf{x}_{1:T}) = \int \prod_{i=1}^{T} p(x_i \mid \boldsymbol{\theta}) p(\boldsymbol{\theta}) d\boldsymbol{\theta}, \tag{4}$$
where $p(\boldsymbol{\theta})$ is a prior distribution on $\boldsymbol{\theta}$ and $p(x \mid \boldsymbol{\theta})$ is the likelihood of word $x$ conditional on $\boldsymbol{\theta}$.

A widely used probabilistic topic model is Latent Dirichlet Allocation (LDA) (Blei et al., 2003). LDA posits that a corpus of text is comprised of $K$ latent topics. Each document $d$ contains a distribution over topics, $\boldsymbol{\theta}_d$, and each topic $k$ is associated with a distribution over words, $\boldsymbol{\beta}_k$. These two terms combine to form the distribution of each word in a document. The generative model for LDA is:

1. Draw $K$ topics: $\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_K \sim \mathrm{Dirichlet}_V(\gamma)$.
2. For each document:
(a) Draw topic proportions, $\boldsymbol{\theta} \sim \mathrm{Dirichlet}_K(\alpha)$.
(b) For each word $x_1, \ldots, x_T$:
i. Draw topic indicator, $z_{x_t} \sim \mathrm{Categorical}(\boldsymbol{\theta})$.
ii. Draw word, $x_t \sim \mathrm{Categorical}(\boldsymbol{\beta}_{z_{x_t}})$.
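The generative process is straightforward to simulate; a minimal NumPy sketch with illustrative sizes and symmetric Dirichlet hyperparameters follows.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, num_docs, doc_len = 1000, 20, 5, 50   # illustrative sizes
gamma, alpha = 0.1, 0.5                     # symmetric Dirichlet hyperparameters

beta = rng.dirichlet(gamma * np.ones(V), size=K)      # K topics over the vocabulary
docs = []
for _ in range(num_docs):
    theta = rng.dirichlet(alpha * np.ones(K))         # document topic proportions
    z = rng.choice(K, size=doc_len, p=theta)          # per-word topic indicators
    docs.append([rng.choice(V, p=beta[zt]) for zt in z])  # words from chosen topics
```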
Since each word is drawn conditionally independent of the preceding words in the document, LDA is not able to capture word order or syntax. However, it can capture document-level patterns, since the topic for each word is drawn from a document-specific distribution.

Practitioners typically rely on approximate posterior inference to estimate the LDA posterior. The most common methods are Gibbs sampling (Griffiths & Steyvers, 2004) and variational inference (Blei et al., 2003). After approximating the posterior distribution over topics from the training documents, the next-word posterior predictive distribution is
$$p_{\mathrm{LDA}}(x_t \mid \mathbf{x}_{<t}) = \int p(x_t \mid \boldsymbol{\theta}) p(\boldsymbol{\theta} \mid \mathbf{x}_{<t}) d\boldsymbol{\theta}. \tag{5}$$
Given words $\mathbf{x}_{<t}$ from a document, one can use approximate posterior inference to estimate the topic proportions posterior, $p(\boldsymbol{\theta} \mid \mathbf{x}_{<t})$, and then draw Monte Carlo samples of $\boldsymbol{\theta}$ to estimate the predictive distribution.

## 4 Topic-Guided Language Model

We now discuss topic-guided language models (TGLMs), which are a class of language models that combine topic models and neural language models. TGLMs were initially proposed to combine the fluency of neural language models with the document modeling capabilities of topic models. Dieng et al. (2017) and Lau et al. (2017), who propose two of the models that we study here, argue that long-range dependency in language is captured well by topic models. Subsequent TGLM papers build on Dieng et al. (2017) and Lau et al. (2017), but differ from these previous works in evaluation setting (Wang et al., 2018; Rezaee & Ferraro, 2020; Guo et al., 2020).

Topic-guided language models can be divided into two frameworks, differing in whether they model the document's bag-of-words counts in addition to the typical next-word prediction objective. In this section, we discuss the two frameworks: a topic-biased language model and a joint topic and language model. The graphical structures of these models are shown in Figure 1.

## 4.1 Topic-Biased Language Models

A topic-biased language model defines the next-word probability to be the sum of two terms: a linear transformation of the hidden state, as in an RNN, and the distribution of words according to a document's topics, as in a topic model. Each document follows the data generating mechanism below:

1. Draw topic vector, $\boldsymbol{\theta} \sim \mathrm{Dirichlet}_K(\cdot)$.
2. For each word $x_1, \ldots, x_T$:
(a) $\mathbf{h}_t = \mathrm{RNN}(x_t, \mathbf{h}_{t-1})$.
(b) Draw $\ell_t \sim \mathrm{Bernoulli}(\sigma(\mathbf{u}^\intercal \mathbf{h}_t))$.
(c) Draw $z_t \sim \mathrm{Categorical}(\boldsymbol{\theta})$.
(d) Draw $x_{t+1} \propto \exp(\mathbf{W}^\intercal \mathbf{h}_t + (1-\ell_t)\boldsymbol{\beta}_{z_t})$.

Here, $\sigma(\cdot)$ denotes the sigmoid function. The model parameters are the parameters of the RNN, the weights $\mathbf{W} \in \mathbb{R}^{D \times V}$ and $\mathbf{u} \in \mathbb{R}^D$, and the topics $\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_K \in \mathbb{R}^V$. The latent variable $\boldsymbol{\theta}$ determines the document's topic proportions.

Figure 1: Graphical model representations of the two frameworks of topic-guided language models. (a) is the topic-biased language model. (b) is the joint topic and language model. Circles denote random variables, while squares denote deterministic variables. Shading indicates that the variable is observed.

Of the two additive terms in a word's likelihood, the RNN term encourages fluency and syntax, while the topic modeling term can be understood as a bias toward the document's global structure. Since topic models struggle with modeling very common words ("stop words") (Wallach et al., 2009), a word's likelihood only includes the topic modeling term if it is not predicted to be a stop word ($\ell_t = 0$).
The realizations of the stop word indicators are observed during training ($\ell_t = 1$ if $x_{t+1}$ belongs to a list of stop words, and 0 otherwise). During prediction, the stop word indicators are treated as latent variables and are marginalized out. Hence, topic-biased language models learn to interpolate between a standard language model's predictions and topics.

TopicRNN. TopicRNN (Dieng et al., 2017) approximates Step 2(d) by marginalizing $z_t$ before normalizing:
$$p_{\mathrm{TRNN}}(x_{t+1} \mid \mathbf{h}_t, \boldsymbol{\theta}) \propto \exp(\mathbb{E}[\mathbf{W}^\intercal \mathbf{h}_t + (1-\ell_t)\boldsymbol{\beta}_{z_t}]) = \exp(\mathbf{W}^\intercal \mathbf{h}_t + (1-\ell_t)\boldsymbol{\beta}^\intercal \boldsymbol{\theta}).$$
The topic matrix $\boldsymbol{\beta} \in \mathbb{R}^{K \times V}$ contains the topic vectors $\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_K$ as rows. Additionally, in Step 1, TopicRNN draws $\boldsymbol{\theta}$ from a standard Gaussian, rather than a Dirichlet distribution.

VRTM. VRTM (Rezaee & Ferraro, 2020) (short for Variational Recurrent Topic Model) exactly computes Step 2(d) by marginalizing $z_t$ after normalizing:
$$p_{\mathrm{VRTM}}(x_{t+1} \mid \mathbf{h}_t, \boldsymbol{\theta}) = \mathbb{E}[\mathrm{softmax}(\mathbf{W}^\intercal \mathbf{h}_t + (1-\ell_t)\boldsymbol{\beta}_{z_t})] = \sum_{k=1}^{K} \boldsymbol{\theta}_k \cdot \mathrm{softmax}(\mathbf{W}^\intercal \mathbf{h}_t + (1-\ell_t)\boldsymbol{\beta}_k).$$
This makes VRTM a mixture-of-RNNs (Yang et al., 2017), where the mixture proportions are determined by $\boldsymbol{\theta}$.

Inference. The model parameters for topic-biased language models are learned using variational inference (Wainwright et al., 2008; Blei et al., 2017). We provide a high-level overview of the method here. The goal of variational inference is to approximate the posterior of the latent variable $\boldsymbol{\theta}$, $p(\boldsymbol{\theta} \mid \mathbf{x}_{1:T})$, with a learned distribution $q_\phi(\boldsymbol{\theta})$, called the variational distribution. To fit $q_\phi(\boldsymbol{\theta})$, variational inference minimizes the KL divergence between the two distributions. This is equivalent to maximizing a lower bound of the marginal log likelihood, the evidence lower bound (ELBO):
$$\log p(\mathbf{x}_{1:T}) = \log \int p_\mu(\mathbf{x}_{1:T} \mid \boldsymbol{\theta}) p(\boldsymbol{\theta}) d\boldsymbol{\theta} \ge \mathbb{E}_{q_\phi(\boldsymbol{\theta})}[\log p_\mu(\mathbf{x}_{1:T} \mid \boldsymbol{\theta})] - \mathrm{KL}(q_\phi(\boldsymbol{\theta}) \| p(\boldsymbol{\theta})).$$
The ELBO contains two terms: a reconstruction loss, which is the expected log probability of the data under $q_\phi(\boldsymbol{\theta})$, and the KL divergence between the variational distribution and the prior on $\boldsymbol{\theta}$. By maximizing the ELBO, we can simultaneously learn the model parameters and the variational distribution parameters, represented by $\mu$ and $\phi$, respectively. In order to share the learned variational parameters, the variational distribution is defined to be a function of the data, i.e., we learn $q_\phi(\boldsymbol{\theta} \mid \mathbf{x}_{1:T})$, where $q_\phi$ is parameterized by a neural network. In practice, the ELBO is maximized with respect to the parameters $\mu$ and $\phi$ using backpropagation. The expectation is estimated using samples from $q_\phi(\boldsymbol{\theta} \mid \mathbf{x}_{1:T})$, and the KL can often be computed analytically (e.g., when both distributions are Gaussians) (Kingma & Welling, 2014).

Prediction. For both models, the next-word predictive distribution is
$$p(x_t \mid \mathbf{x}_{<t}) = \mathbb{E}_{p(\boldsymbol{\theta} \mid \mathbf{x}_{<t})}[p(x_t \mid \mathbf{x}_{<t}, \boldsymbol{\theta})]. \tag{6}$$
Using the learned variational distribution in place of the exact posterior, we approximate the expectation by plugging in the variational mean, i.e., we let $\hat{\boldsymbol{\theta}} = \mathbb{E}_{q_\phi(\boldsymbol{\theta} \mid \mathbf{x}_{<t})}[\boldsymbol{\theta}]$. Then $p(x_t \mid \mathbf{x}_{<t}) \approx p(x_t \mid \mathbf{x}_{<t}, \hat{\boldsymbol{\theta}})$. For computational reasons, in practice, $\hat{\boldsymbol{\theta}}$ is only updated in a sliding window (i.e., every $N$ words).
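A sketch of the topic-biased predictive computation, with the stop-word indicator marginalized out exactly as a two-component mixture, is shown below (PyTorch, single-token shapes for clarity; tensor names are illustrative and not from any released implementation).

```python
import torch
import torch.nn.functional as F

def topic_biased_next_word(h_t, theta, W, beta, u):
    """Predictive distribution with the stop-word indicator l_t marginalized:
    a mixture of the plain RNN softmax and the topic-biased softmax, weighted
    by the predicted stop-word probability."""
    lm_logits = W.t() @ h_t                       # (V,) standard RNN logits
    p_stop = torch.sigmoid(u @ h_t)               # P(l_t = 1 | h_t)
    p_if_stop = F.softmax(lm_logits, dim=-1)      # topic term switched off
    p_if_content = F.softmax(lm_logits + beta.t() @ theta, dim=-1)
    return p_stop * p_if_stop + (1 - p_stop) * p_if_content

# shapes: h_t (D,), theta (K,), W (D, V), beta (K, V), u (D,)
```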
## 4.2 Joint Topic And Language Model

A joint topic and language model learns the topic model and language model simultaneously, essentially fitting two views of the same data. The two models share the document-level latent variable $\boldsymbol{\theta}$. Each document during training has a pair of representations, its bag-of-words $\mathbf{x}^{\mathrm{TM}}_{1:T}$ and its word sequence $\mathbf{x}^{\mathrm{LM}}_{1:T}$, generated by the topic model and the language model, respectively. For each document, a basic version of the data generating mechanism is:

1. Draw topic vector, $\boldsymbol{\theta} \sim \mathrm{Dirichlet}_K(\cdot)$.
2. Draw the bag-of-words $\mathbf{x}^{\mathrm{TM}}_{1:T}$ from a topic model (Section 3):
(a) $\mathbf{x}^{\mathrm{TM}}_{1:T} \sim \mathrm{TopicModel}(\boldsymbol{\theta})$.
3. For each word $x^{\mathrm{LM}}_1, \ldots, x^{\mathrm{LM}}_T$:
(a) $\mathbf{h}_t = \mathrm{RNN}(x^{\mathrm{LM}}_t, \mathbf{h}_{t-1})$.
(b) $\mathbf{g}_t = a(\mathbf{h}_t, \boldsymbol{\theta})$.
(c) Draw $x^{\mathrm{LM}}_{t+1} \propto \exp(\mathbf{W}^\intercal \mathbf{g}_t)$.

Here, the latent variable $\boldsymbol{\theta}$ determines the document's topic proportions in the topic model. In the language model, the hidden state $\mathbf{h}_t$ is combined with $\boldsymbol{\theta}$ in a differentiable function $a$, usually the Gated Recurrent Unit (Cho et al., 2014). The GRU architecture is described in Appendix B. The model parameters are the parameters of the topic model, the parameters of the RNN, the parameters of $a$, and the weights $\mathbf{W} \in \mathbb{R}^{D \times V}$.

TDLM. TDLM (Lau et al., 2017) (short for Topically Driven Language Model) is a variant of the model outlined above. There are two major differences. First, $\boldsymbol{\theta}$ is not considered to be a latent variable. Instead, an encoder function maps a bag-of-words to $\boldsymbol{\theta}$. In the topic model, the bag-of-words used is from the entire document, i.e., $\boldsymbol{\theta}^{\mathrm{TM}} = \mathrm{enc}(\mathbf{x}^{\mathrm{TM}}_{1:T})$. Second, the language model component of the data generating process (Step 3) uses a different $\boldsymbol{\theta}$ than the topic modeling component, which we call $\boldsymbol{\theta}^{\mathrm{LM}}$. To prevent the model from memorizing the current sentence, $\boldsymbol{\theta}^{\mathrm{LM}}$ is computed from the document bag-of-words excluding the current sentence. In the language model, if $j$ is the index set of the words in the current sentence, $\boldsymbol{\theta}^{\mathrm{LM}} = \mathrm{enc}(\mathbf{x}^{\mathrm{TM}}_{1:T \setminus j})$.

Inference. TDLM is trained by maximizing the log likelihood of the topic model and the language model jointly. The objective, $\mathcal{L}_{\mathrm{TDLM}}$, is:
$$\mathcal{L}_{\mathrm{TM}} = \log p(\mathbf{x}^{\mathrm{TM}}_{1:T} \mid \boldsymbol{\theta}^{\mathrm{TM}})$$
$$\mathcal{L}_{\mathrm{LM}} = \sum_t \log p(x^{\mathrm{LM}}_t \mid \mathbf{x}^{\mathrm{LM}}_{<t}, \boldsymbol{\theta}^{\mathrm{LM}})$$
$$\mathcal{L}_{\mathrm{TDLM}} = \mathcal{L}_{\mathrm{TM}} + \mathcal{L}_{\mathrm{LM}}.$$
Although the original model only conditions on previous words in the current sentence when forming $\mathcal{L}_{\mathrm{LM}}$, we condition on all prior words in the document because it improves performance.

Prediction. Let $\hat{\boldsymbol{\theta}} = \mathrm{enc}(\mathbf{x}^{\mathrm{TM}}_{<t})$. The next-word predictive distribution is
$$p(x^{\mathrm{LM}}_t \mid \mathbf{x}^{\mathrm{LM}}_{<t}) = p(x^{\mathrm{LM}}_t \mid \mathbf{x}^{\mathrm{LM}}_{<t}, \hat{\boldsymbol{\theta}}), \tag{7}$$
which is defined in Step 3 of the data generating process. In practice, as in prediction for the topic-biased LMs, we recompute $\hat{\boldsymbol{\theta}}$ in a sliding window.
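The combining function $a$ is typically a small gating network. The sketch below shows one GRU-style instantiation, assuming single-vector inputs; it illustrates the role of $a$ rather than TDLM's exact architecture, and all names are illustrative.

```python
import torch
import torch.nn as nn

class TopicGate(nn.Module):
    """GRU-style gating unit a(h_t, theta): mixes the LSTM hidden state with
    the topic vector to produce g_t, which feeds the output softmax."""
    def __init__(self, hidden_dim, topic_dim):
        super().__init__()
        self.z = nn.Linear(hidden_dim + topic_dim, hidden_dim)  # update gate
        self.c = nn.Linear(hidden_dim + topic_dim, hidden_dim)  # candidate state

    def forward(self, h_t, theta):
        x = torch.cat([h_t, theta], dim=-1)
        z = torch.sigmoid(self.z(x))
        c = torch.tanh(self.c(x))
        return (1 - z) * h_t + z * c   # g_t
```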
rGBN-RNN. rGBN-RNN (Guo et al., 2020) is an extension of the model outlined in this section. In rGBN-RNN's topic model, each sentence j has a unique bag-of-words: it is the document's bag-of-words with the sentence excluded, denoted $\mathbf{x}^{\text{TM}}_{1:T\setminus j}$. In Step 1 of the data generating mechanism, a different topic vector is drawn sequentially for each sentence. For sentences $j=1,\dots,J$, where J is the total number of sentences, $\boldsymbol{\theta}_j\sim\text{Gamma}(\boldsymbol{\Pi}\boldsymbol{\theta}_{j-1},\tau_0)$, where Π and τ0 are model parameters. In Step 2, for each sentence j, its bag-of-words is drawn:

$$\mathbf{x}_{1:T\setminus j}^{\text{TM}}\sim\text{Poisson}(\boldsymbol{\Phi}\boldsymbol{\theta}_{j}),$$

where Φ is a model parameter. For the language modeling component (Step 3 of the data generating mechanism), rGBN-RNN generates individual sentences. In Step 3, each sentence j is conditionally independent of the other sentences, given its corresponding topic vector, θj. In other words,

$$p(x_{j_{t}}^{\text{LM}}\mid\mathbf{x}_{<j_{t}}^{\text{LM}},\boldsymbol{\theta}_{j})=p(x_{j_{t}}^{\text{LM}}\mid\mathbf{x}_{j,<t}^{\text{LM}},\boldsymbol{\theta}_{j}),\tag{8}$$

where $x^{\text{LM}}_{j_t}$ is the t'th word of sentence j, $\mathbf{x}^{\text{LM}}_{<j_t}$ denotes all the words in the document before the t'th word of the j'th sentence, and $\mathbf{x}^{\text{LM}}_{j,<t}$ denotes only the words in the j'th sentence before the t'th word. rGBN-RNN also introduces multiple stochastic layers to both the topic model and language model, which is simplified to one layer in this exposition. In the experiments, we use the full original model.

Inference. rGBN-RNN is trained using a combination of variational inference and stochastic gradient MCMC (Guo et al., 2018). We refer the reader to Guo et al. (2020) for further mathematical details of the model and inference algorithm.

Prediction. For each sentence j, the next-word predictive distribution is

$$p(x_{j,t}^{\text{LM}}\mid\mathbf{x}_{<j_{t}}^{\text{LM}})=\mathbb{E}_{p(\boldsymbol{\theta}_{j}\mid\mathbf{x}_{<j_{1}}^{\text{TM}})}[p(x_{j,t}^{\text{LM}}\mid\mathbf{x}_{j,<t}^{\text{LM}},\boldsymbol{\theta}_{j})].$$

The expectation is approximated using a sample from the approximate posterior of θj computed during inference. The topic model parameters, Φ and Π, are similarly marginalized out via MCMC sampling (see Guo et al. (2020) for more details).

Table 1: Held-out perplexity for each model and dataset (standard deviations over three runs in parentheses); lower is better. Parameter counts are listed in Appendix E.

| Model | LSTM Size | Topic Size | APNEWS | IMDB | BNC | WT-2 |
|---|---|---|---|---|---|---|
| LSTM-LM (sentence-level) | 600 | - | 65.0 (0.3) | 76.8 (0.1) | 112.9 (0.3) | 115.3 (0.8) |
| LSTM-LM | 600 | - | 56.5 (0.3) | 73.1 (0.2) | 96.4 (0.1) | 90.7 (0.7) |
| TopicRNN (Dieng et al., 2017) | 600 | 100 | 56.6 (0.3) | 73.0 (0.1) | 96.8 (0.2) | 93.2 (0.6) |
| VRTM (Rezaee & Ferraro, 2020) | 600 | 50 | 56.8 (0.3) | 73.6 (0.2) | 96.3 (0.2) | 90.8 (0.6) |
| LSTM-LM | 600 (+GRU) | - | 53.5 (0.2) | 68.8 (1.3) | 91.2 (0.1) | 89.9 (0.8) |
| TDLM (Lau et al., 2017) | 600 (+GRU) | 100 | 53.7 (0.1) | 68.8 (0.1) | 91.4 (0.2) | 90.4 (0.7) |
| LSTM-LM | 600x3 | - | 51.9 (0.4) | 66.6 (0.8) | 88.8 (0.3) | 89.5 (0.7) |
| rGBN-RNN (Guo et al., 2020) | 600x3 | 100-80-50 | 52.6 (0.3) | 64.8 (0.2) | 97.7 (1.1) | - |

## 5 Experiments

In this section, we detail the reproducibility study and results. We also investigate the quality of learned topics and probe the LSTM-LM's hidden representations to find the amount of retained topic information.

## 5.1 Reproducibility Study

We evaluate the held-out perplexity of four TGLMs and corresponding LSTM-LM baselines on four document-level corpora.

Datasets. We use four publicly available natural language datasets: APNEWS,2 IMDB (Maas et al., 2011), BNC (Consortium, 2007), and WikiText-2 (Merity et al., 2017). We follow the training, validation, and test splits from Lau et al. (2017) and Merity et al. (2017). Details about the datasets and preprocessing steps are in Appendix C.
Models. The LSTM-LM is described in Section 2. The four topic-guided language models are TDLM (Lau et al., 2017), TopicRNN (Dieng et al., 2017), rGBN-RNN (Guo et al., 2020), and VRTM (Rezaee & Ferraro, 2020), and are described in Section 4. We implement TopicRNN, TDLM, and VRTM from scratch. For rGBN-RNN, we use the publicly available codebase and make minimal adjustments to ensure that preprocessing and evaluation are consistent. Some other topic-guided language models do not have public code (Wang et al., 2018; Tang et al., 2019; Wang et al., 2019) and are not straightforward to implement. These models are not part of the study, but their architecture is similar to that of Lau et al. (2017), which we compare to.

For all LSTM-LM baselines, we use a hidden size of 600, word embeddings of size 300 initialized with Google News word2vec embeddings (Mikolov et al., 2013), and dropout of 0.4 between the LSTM input and output layers (and between the hidden layers for the 3-layer models). For the four TGLMs we study, we use the same settings as LSTM-LM for the LSTM components. For the additional TGLM-specific components, we use the architectures and settings from the original papers, except for small details reported in Appendix D to make certain settings consistent across models.3

To obtain comparable baselines to all TGLMs studied, we train three LSTM-LMs of varying sizes. The default baseline is a 1-layer LSTM-LM. To control for the additional GRU layer in the language model component of TDLM, we train a 1-layer LSTM-LM with a GRU layer between the LSTM output and the output embedding layer. To compare to rGBN-RNN, a hierarchical model, we train a 3-layer LSTM-LM. Finally, we also compare to a baseline considered in prior work, an LSTM-LM conditioned only on previous words in the same sentence. We call this model the sentence-level LSTM-LM.

2https://www.ap.org/en/
3The one additional change is that Guo et al. (2020) reports a 600-512-256 size model, but their public code only supports 3-layer models with same-size RNN layers, so we use 600-600-600.

Training Details. We train the RNN components using truncated backpropagation through time with a sequence length of 30. For sequences within the same document (or sentence for the sentence-level LSTM-LM), if $\mathbf{h}_i$ is the final hidden state computed by the RNN for the i-th sequence, we initialize the initial hidden state for the (i+1)-th sequence with stop_gradient($\mathbf{h}_i$). Although the TDLM and rGBN-RNN models assume conditional independence between sentences, we found that in practice, keeping hidden states between sentences improved their performance.

Following Lau et al. (2017), Rezaee & Ferraro (2020), and Guo et al. (2020), we use the Adam optimizer with a learning rate of 0.001 on APNEWS, IMDB, and BNC. For WikiText-2, we follow Merity et al. (2017) and use stochastic gradient descent; the initial learning rate is 20 and is divided by 4 when validation perplexity is worse than the previous iteration. The models are trained until validation perplexity does not improve for 5 epochs and we use the best validation checkpoint. The models in our codebase train to convergence within three days on a single Tesla V100 GPU. rGBN-RNN, trained using its public codebase, trains to convergence within one week on the same GPU. We do not include WikiText-2 results for rGBN-RNN because its perplexity did not decrease during training.
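A minimal sketch of this training loop (a plain LSTM language model with hypothetical sizes, not the study's full codebase); the key detail is detaching the carried hidden state at sequence boundaries:

```python
import torch
import torch.nn as nn

V, D, BPTT = 1000, 600, 30
model = nn.LSTM(D, D, batch_first=True)
embed, out = nn.Embedding(V, D), nn.Linear(D, V)
opt = torch.optim.Adam(list(model.parameters()) + list(embed.parameters())
                       + list(out.parameters()), lr=1e-3)

doc = torch.randint(0, V, (1, 301))  # one document of 301 tokens
state = None                         # (h, c); reset at document boundaries
for i in range(0, doc.shape[1] - 1, BPTT):
    x = doc[:, i:i + BPTT]           # input chunk
    y = doc[:, i + 1:i + 1 + BPTT]   # next-word targets
    h, state = model(embed(x), state)
    loss = nn.functional.cross_entropy(out(h).flatten(0, 1), y.flatten())
    opt.zero_grad(); loss.backward(); opt.step()
    # stop_gradient(h_i): carry the state forward without backpropagating
    # across the truncation boundary.
    state = tuple(s.detach() for s in state)
```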
Results. Table 1 shows the results. After controlling for language model size, the LSTM-LM baseline consistently matches or outperforms topic-guided language models with the same number of parameters. Although most topic-guided language models improve over a baseline considered in prior work - the sentence-level LSTM-LM - they are matched by an LSTM which, like topic-guided language models, conditions on all prior words in a document. As discussed in Section 2, this is a standard practice that can be performed at no extra computational cost.

## 5.2 Probing The RNN Hidden States

The motivation for topic-guided language models is to augment language models with document-level information captured by topic models (Dieng et al., 2017). To assess whether topic models are adding useful information, we perform a probe experiment. In NLP, probe tasks are designed to understand the extent to which linguistic structure is encoded in the representations produced by black-box models (Alain & Bengio, 2016; Conneau et al., 2018; Hewitt & Liang, 2019; Pimentel et al., 2020). In this case, we probe the baseline LSTM-LM's hidden representations to assess how much topic information it already captures.

Specifically, we evaluate whether an LSTM-LM's hidden representation of the document's first t words, $\mathbf{h}_t$, is predictive of the topic vector estimated from the document's first t words, $\boldsymbol{\theta}_t$, of a topic-guided language model. We evaluate TDLM since it is the best performing topic-guided language model and has the highest quality topics (see Section 5.3). We also evaluate TopicRNN, a topic-guided language model with lower quality topics.

We use the fitted models from Section 5.1 to create training data for the probe experiment. The input is the LSTM-LM's initial hidden state $\mathbf{h}_t$ for each sequence in a document (in this experiment, we define a sequence to be a 30-word chunk). The output is TDLM's topic proportions at the sequence, transformed with inverse-softmax to ensure it is real-valued: $\tilde{\boldsymbol{\theta}}_{t}=\log(\boldsymbol{\theta}_{t})-\sum_{j=1}^{K}\log(\theta_{t,j})$, where j indexes the dimensions of θ. In the experiment, a linear model is trained to predict $\tilde{\boldsymbol{\theta}}_t$ from $\mathbf{h}_t$. The loss function is the mean squared error summed across the topic components. The held-out prediction quality of this model (or "probe") can be viewed as a proxy for the mutual information between the LSTM-LM's hidden state and the topic proportion vector θ, which is not easily estimable. For each topic-guided language model, we also run a baseline experiment where we probe a randomly initialized LSTM-LM that was not fit to data.

Table 2: The probe experiment reveals that the hidden state of an LSTM-LM is predictive of the topic information of two topic-guided language models, TDLM and TopicRNN, on held-out data. We also train baselines where the probe data is from a randomly initialized LSTM-LM. Standard deviations are listed in Appendix F.

| Target | Data | APNEWS | | | IMDB | | | BNC | | | WT-2 | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | Acc-1 | Acc-5 | R² | Acc-1 | Acc-5 | R² | Acc-1 | Acc-5 | R² | Acc-1 | Acc-5 | R² |
| TDLM | init. | .036 | .135 | .134 | .028 | .132 | .023 | .098 | .294 | .073 | .128 | .379 | .018 |
| TDLM | trained | .340 | .681 | .621 | .180 | .449 | .400 | .314 | .638 | .379 | .510 | .858 | .352 |
| TopicRNN | init. | .075 | .224 | .022 | .086 | .259 | .014 | .077 | .195 | .017 | .197 | .442 | -.010 |
| TopicRNN | trained | .210 | .540 | .232 | .238 | .575 | .231 | .172 | .424 | .170 | .208 | .486 | .067 |
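Before turning to the results, a compact sketch of the probe setup; the hidden states and topic proportions below are randomly generated stand-ins (in the actual experiment they come from the fitted LSTM-LM and TDLM):

```python
import torch

N, D, K = 5000, 600, 100
H = torch.randn(N, D)                          # LSTM-LM hidden states h_t
theta = torch.softmax(torch.randn(N, K), -1)   # TDLM topic proportions theta_t

# Inverse-softmax transform from Section 5.2, so the targets are real-valued.
theta_tilde = theta.log() - theta.log().sum(dim=1, keepdim=True)

probe = torch.nn.Linear(D, K)                  # the linear probe
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
for _ in range(100):
    # Mean squared error summed across the topic components.
    loss = ((probe(H) - theta_tilde) ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Acc-1: does the probe recover the largest topic?
acc1 = (probe(H).argmax(1) == theta.argmax(1)).float().mean()
```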
Table 2 shows the results of the probe experiment. The linear model can reconstruct TDLM's topic proportion vector to some extent; 62% of the variance in a held-out set of APNEWS topic proportions can be explained by the LSTM's hidden state. Moreover, the hidden state predicts TDLM's largest topic for between 18% and 51% of test sequences across datasets, and improves 15% to 30% over the initialization-only baseline. These accuracies indicate that the LSTM-LM has learned to capture TDLM's most prevalent topic. Compared to TDLM, the TopicRNN probe exhibits a smaller improvement over its baseline. Nevertheless, the probe task shows that a notable amount of broader topic information is captured by the baseline LSTM's hidden state.

## 5.3 Topic Quality

Although the predictions from topic-guided language models are matched or exceeded by an LSTM-LM baseline, it is possible that the learned topics will still be useful to practitioners. We compare the topics learned by topic-guided language models to those learned by a classical topic model, LDA (Blei et al., 2003). While LDA's next word predictions are worse than those of neural topic models (Dieng et al., 2020), its topics may be more interpretable.

To assess the quality of learned topics, we compute an automated metric that correlates with human judgements of topic coherence (Aletras & Stevenson, 2013; Lau et al., 2014). Automated coherence can be estimated using normalized pointwise mutual information (NPMI) scores. The NPMI score of a topic from its N top words is defined as

$$\binom{N}{2}^{-1}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\frac{\log\frac{p(w_{i},w_{j})}{p(w_{i})p(w_{j})}}{-\log p(w_{i},w_{j})},\tag{9}$$

and ranges from −1 to 1. To compute the word co-occurrence statistics (estimates of p(wi) and p(wi, wj) in Equation (9)), we use the corresponding dataset as the reference corpus. To obtain the model-level coherence, we average the scores from the top 5/10/15/20 topic words for each topic, then average over all topics.4

4We use gensim (Rehurek & Sojka, 2011) to calculate NPMI scores, with a window size of 10. In processing the reference corpora, we retain only terms that exist in the topic model vocabulary.
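For illustration, a small stand-alone sketch of Equation (9); it estimates p(wi) and p(wi, wj) from document-level co-occurrence counts, whereas the paper uses gensim with a sliding window of 10:

```python
import itertools
import math

def npmi_coherence(top_words, docs):
    # docs: a list of sets of words from the reference corpus.
    n_docs = len(docs)
    def p(*words):  # document-level (co-)occurrence probability
        return sum(all(w in d for w in words) for d in docs) / n_docs
    scores = []
    for wi, wj in itertools.combinations(top_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0:
            scores.append(-1.0)  # convention for never co-occurring pairs
            continue
        pmi = math.log(p_ij / (p(wi) * p(wj)))
        scores.append(pmi / -math.log(p_ij))  # normalize to [-1, 1]
    return sum(scores) / len(scores)          # average over all word pairs

docs = [{"plane", "aviation", "pilot"}, {"plane", "airport"}, {"tax", "budget"}]
print(npmi_coherence(["plane", "aviation", "airport"], docs))
```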
The coherence for each model is in Table 3. The topic-biased language models (TopicRNN and VRTM) learn largely incoherent topics, while only TDLM (Lau et al., 2017) achieves coherences comparable to LDA in two out of the four corpora, APNEWS and BNC. Since the quality of automated topic evaluation metrics has been disputed for neural topic models (Hoyle et al., 2021), Appendix G includes the top words from randomly sampled topics. The joint topic and language models (TDLM and rGBN-RNN) learn qualitatively acceptable topics, while VRTM fails to learn distinct topics entirely.

Table 3: Topic coherences across corpora and models. The topic-guided language models do not learn topics that are more coherent than LDA. In parentheses are standard deviations in coherence from training with three different random seeds per model. We do not include VRTM in the table because the model did not learn distinct topics (see Table 7).

| Model | APNEWS | IMDB | BNC | WT-2 |
|---|---|---|---|---|
| LDA | .125 (.00) | .080 (.01) | .124 (.00) | .093 (.01) |
| TDLM | .176 (.00) | .011 (.02) | .104 (.01) | -.026 (.03) |
| TopicRNN | -.330 (.00) | -.327 (.01) | -.293 (.01) | -.311 (.00) |
| rGBN-RNN | .047 (.01) | .002 (.01) | -.017 (.01) | – |

## 6 Discussion

This reproducibility study compares topic-guided language models to LSTM baselines. We find that the baselines outperform the topic-guided language models in predictive performance. This finding differs from the results reported in the topic-guided language modeling literature, which show improvements over baselines. In general, these differences are due to weaker baselines in the literature and a form of evaluation that considers future words.

Baselines. The baseline compared to in most prior work (Lau et al., 2017; Rezaee & Ferraro, 2020; Guo et al., 2020) is the sentence-level LSTM-LM, which we report as the weakest model in Table 1; this baseline does not condition on all words in a document's history. Similarly, the baseline in Dieng et al. (2017) does not condition on the history beyond a fixed-context window. In contrast, the LSTM-LM baseline in this work conditions on all previous words in the document during training and evaluation. Our findings suggest that the predictive advantage of topic-guided language models stems from conditioning on all prior words in a document via a representation of topic proportions.

Additionally, topic-guided language models typically augment their language model component with additional parameters. We only compare topic-guided language models to baselines with a similar number of language model parameters. TDLM (Lau et al., 2017) adds parameters to its language model via an additional GRU; however, TDLM was originally compared to a baseline without this module. We find that TDLM's predictive advantage fades when scaling the LSTM-LM baseline to match TDLM's language model size.

Evaluation. Different papers have evaluated the performance of topic-guided language models in different ways. In this paper, we standardize the evaluation of models and their baselines to make sure results are comparable. As described in Section 4, topic-guided language models use a representation of the full document to estimate the topic proportions vector θ during training. During evaluation, θ must be estimated using only previous document words. Otherwise, a model would be looking ahead when making next-word predictions.5 Some methods in the TGLM literature have conditioned on future words in their evaluation, making them incomparable to language models that only condition on previous words. For example, TDLM (Lau et al., 2017) is proposed as a sentence-level model, and thus the paper reports results when conditioning on future words. Additionally, VRTM (Rezaee & Ferraro, 2020) and rGBN-RNN (Guo et al., 2020) are proposed as document-level models, but in the respective evaluation scripts of their public codebases, θ is estimated using future words.

Finally, conditioning during evaluation isn't the only difference among these models. Some prior papers do not use consistent language model vocabulary sizes, which makes reported numbers incomparable. For example, VRTM employs a smaller vocabulary size than the baselines and other models it compares to. These discrepancies in evaluation may account for differences in reported results.

5We also ran experiments that corrected this mismatch by estimating θ using only previous document words in both training and evaluation, but found that this did not help performance.
## 7 Conclusion

We find that compared to a standard LSTM language model, topic-guided language models do not improve language modeling performance. The probe experiment shows that this is due both to the standard LSTM already possessing some level of topic understanding and to the fact that capturing the exact topic vector is unnecessary for the task. For the two topic-biased language models, the quality of the learned topics is poor. This may be due to the choice of latent variable for the topic proportions vector θ, as previous work has found that using a Gaussian leads to low topic coherence, while using a Dirichlet with reparameterization gradients is prone to posterior collapse (Srivastava & Sutton, 2017; Burkhardt & Kramer, 2019).

While this study shows that current topic-guided language models do not improve next-word predictive performance, it is possible that incorporating a topic model can provide greater control or diversity in language model generations. Topic-guided language models can generate text conditional on topics (Lau et al., 2017; Guo et al., 2020); one potential direction for future work is a systematic investigation of controllability in topic-guided language models.

The topic-guided language modeling literature has focused on LSTMs, but we note that this framework is agnostic to the class of neural language model used. This means the same framework can be used to incorporate topic models into more powerful neural language models, such as transformers (Vaswani et al., 2017). However, if incorporating topic models into transformers does not improve predictive performance or provide meaningful latent variables, it is not necessarily because of architectural differences between transformers and LSTMs. Rather, the probing results in this paper indicate that neural language models are sufficiently expressive that they already retain topic information. Transformers, which are more expressive than LSTMs, are likely even more capable of capturing topic information without explicitly modeling topics. Novel approaches are needed to enable joint learning of expressive neural language models and interpretable, topic-based latent variables.

## Acknowledgements

We thank Adji Dieng and the reviewers for their thoughtful comments and suggestions, which have greatly improved the paper. This work is supported by NSF grant IIS 2127869, ONR grants N00014-17-1-2131 and N00014-15-1-2209, the Simons Foundation, and Open Philanthropy.

## References

Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. *arXiv preprint arXiv:1610.01644*, 2016.

Nikolaos Aletras and Mark Stevenson. Evaluating topic coherence using distributional semantics. In *International Conference on Computational Semantics*, 2013.

Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. *arXiv preprint arXiv:1803.01271*, 2018.

David M. Blei and John D. Lafferty. A correlated topic model of science. *The Annals of Applied Statistics*, 1(1), 2007.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. *Journal of Machine Learning Research*, 3(Jan), 2003.

David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. *Journal of the American Statistical Association*, 112(518), 2017.

Sophie Burkhardt and Stefan Kramer. Decoupling sparsity and smoothness in the Dirichlet variational autoencoder topic model.
*Journal of Machine Learning Research*, 20(131), 2019. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–decoder approaches. In *Workshop on Syntax, Semantics and Structure in* Statistical Translation, 2014. Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. What you can cram into a single $&!\#* vector: Probing sentence embeddings for linguistic properties. In *Association* for Computational Linguistics, 2018. BNC Consortium. The British National Corpus, version 3 (BNC XML Edition), 2007. M. Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In *Recommender Systems*, 2019. Adji B. Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A recurrent neural network with long-range semantic dependency. In *International Conference on Learning Representations*, 2017. Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei. Topic modeling in embedding spaces. Transactions of the Association for Computational Linguistics, 8, 2020. Thomas L. Griffiths and Mark Steyvers. Finding scientific topics. *National Academy of Sciences*, 101, 2004. Justin Grimmer and Brandon M Stewart. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. *Political Analysis*, 21(3), 2013. Dandan Guo, Bo Chen, Hao Zhang, and Mingyuan Zhou. Deep Poisson Gamma dynamical systems. In Neural Information Processing Systems, 2018. Dandan Guo, Bo Chen, Ruiying Lu, and Mingyuan Zhou. Recurrent hierarchical topic-guided RNN for language generation. In *International Conference on Machine Learning*, 2020. John Hewitt and Percy Liang. Designing and interpreting probes with control tasks. In Empirical Methods in Natural Language Processing, 2019. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8), 1997. Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, and Philip Resnik. Is automated topic model evaluation broken? The incoherence of coherence. 2021. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp nearby, fuzzy far away: How neural language models use context. In *Association for Computational Linguistics*, 2018. Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014. Dan Klein and Christopher D. Manning. Accurate unlexicalized parsing. In *Association for Computational* Linguistics, 2003. Jey H. Lau, David Newman, and Timothy Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In *European Chapter of the Association for Computational* Linguistics, 2014. Jey H. Lau, Timothy Baldwin, and Trevor Cohn. Topically driven neural language model. In Association for Computational Linguistics, 2017. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In *Association for Computational Linguistics: Human* Language Technologies, 2011. Benjamin Marie, Atsushi Fujita, and Raphael Rubino. Scientific credibility of machine translation research: A meta-evaluation of 769 papers. In *Association for Computational Linguistics*, 2021. Andrew McCallum. MALLET: A machine learning for language toolkit. 2002. Gábor Melis, Chris Dyer, and Phil Blunsom. 
On the state of the art of evaluation in neural language models. CoRR, 2017. Gábor Melis, Tomáš Kočisk`y, and Phil Blunsom. Mogrifier LSTM. In *International Conference on Learning* Representations, 2020. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. Recurrent neural network based language model. In *Interspeech*, volume 2, 2010. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. *arXiv preprint arXiv:1301.3781*, 2013. John W. Mohr and Petko Bogdanov. Topic models: What they are and why they matter, 2013. Made Nindyatama Nityasya, Haryo Wibowo, Alham Fikri Aji, Genta Winata, Radityo Eko Prasojo, Phil Blunsom, and Adhiguna Kuncoro. On "scientific debt" in NLP: A case for more rigour in language model pre-training research. In *Association for Computational Linguistics*, 2023. Joe O'Connor and Jacob Andreas. What context features can transformer language models use? In *Association for Computational Linguistics*, 2021. Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. Information-theoretic probing for linguistic structure. In *Association for Computational Linguistics*, 2020. Radim Rehurek and Petr Sojka. Gensim–python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2), 2011. Mehdi Rezaee and Francis Ferraro. A discrete variational recurrent topic model without the reparametrization trick. *Neural Information Processing Systems*, 2020. Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, and Yoshua Bengio. Do neural dialog systems use the conversation history effectively? An empirical study. In Association for Computational Linguistics, 2019. Akash Srivastava and Charles Sutton. Autoencoding variational inference for topic models. In International Conference on Learning Representations, 2017. Hongyin Tang, Miao Li, and Beihong Jin. A topic augmented text generation model: Joint learning of semantics and structural features. In *Empirical Methods in Natural Language Processing*, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Neural Information Processing Systems*, 2017. Martin J. Wainwright, Michael I. Jordan, et al. Graphical models, exponential families, and variational inference. *Foundations and Trends in Machine Learning*, 1(1–2), 2008. Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In Neural Information Processing Systems, 2009. Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. Topic compositional neural language model. In International Conference on Artificial Intelligence and Statistics, 2018. Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. Topic-guided variational auto-encoder for text generation. In North American Chapter of the Association for Computational Linguistics, 2019. Ronald J. Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. *Neural Computation*, 2(4). 
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. *CoRR*, 2017.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization, 2014.

## A LSTM

The LSTM (Hochreiter & Schmidhuber, 1997) components are

$$\begin{aligned}
i_t&=\sigma(\mathbf{W}_i\mathbf{v}_t+\mathbf{U}_i\mathbf{h}_{t-1}+b_i)\\
f_t&=\sigma(\mathbf{W}_f\mathbf{v}_t+\mathbf{U}_f\mathbf{h}_{t-1}+b_f)\\
o_t&=\sigma(\mathbf{W}_o\mathbf{v}_t+\mathbf{U}_o\mathbf{h}_{t-1}+b_o)\\
\hat{\mathbf{c}}_t&=\tanh(\mathbf{W}_c\mathbf{v}_t+\mathbf{U}_c\mathbf{h}_{t-1}+b_c)\\
\mathbf{c}_t&=f_t\odot\mathbf{c}_{t-1}+i_t\odot\hat{\mathbf{c}}_t\\
\mathbf{h}_t&=o_t\odot\tanh(\mathbf{c}_t).
\end{aligned}$$

The symbol ⊙ denotes the element-wise product, while $i_t$, $f_t$, $o_t$ are the input, forget, and output activations at time t. Additionally, $\mathbf{v}_t$, $\mathbf{h}_t$, $\mathbf{c}_t$ are the input word embedding, hidden state, and cell state at time t. Finally, W, U, b are model parameters.

## B GRU

The GRU (Cho et al., 2014) components are

$$\begin{aligned}
z_t&=\sigma(\mathbf{W}_z\mathbf{v}_t+\mathbf{U}_z\mathbf{h}_t+b_z)\\
r_t&=\sigma(\mathbf{W}_r\mathbf{v}_t+\mathbf{U}_r\mathbf{h}_t+b_r)\\
\hat{\mathbf{h}}_t&=\tanh(\mathbf{W}_h\mathbf{v}_t+\mathbf{U}_h(r_t\odot\mathbf{h}_t)+b_h)\\
\mathbf{h}'_t&=(1-z_t)\odot\mathbf{h}_t+z_t\odot\hat{\mathbf{h}}_t.
\end{aligned}$$

Here, $z_t$ and $r_t$ are the update and reset gate activations at time t. Meanwhile, $\mathbf{v}_t$ and $\mathbf{h}_t$ are the input vector and the hidden state at time t, while W, U, and b are model parameters.
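A direct transcription of these GRU equations into PyTorch (single step, unbatched, randomly initialized parameters; for illustration only):

```python
import torch

D = 8  # hypothetical dimensionality; v_t and h_t share it here
params = {k: torch.randn(D, D) for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
bias = {k: torch.randn(D) for k in ("bz", "br", "bh")}

def gru_step(v_t, h_t):
    # Update and reset gates.
    z_t = torch.sigmoid(params["Wz"] @ v_t + params["Uz"] @ h_t + bias["bz"])
    r_t = torch.sigmoid(params["Wr"] @ v_t + params["Ur"] @ h_t + bias["br"])
    # Candidate state, with the reset gate applied to h_t.
    h_hat = torch.tanh(params["Wh"] @ v_t + params["Uh"] @ (r_t * h_t) + bias["bh"])
    # Convex combination of the old state and the candidate.
    return (1 - z_t) * h_t + z_t * h_hat  # h'_t

h_next = gru_step(torch.randn(D), torch.randn(D))
```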
## C Datasets

We evaluate on four publicly available corpora. APNEWS is a collection of Associated Press news articles from 2009 to 2016. IMDB is a set of movie reviews collected by Maas et al. (2011). BNC is the written portion of the British National Corpus (Consortium, 2007), which contains excerpts from journals, books, letters, essays, memoranda, news, and other types of text. WikiText-2 is a subset of the verified Good or Featured Wikipedia articles (Merity et al., 2017). A random subset of APNEWS and BNC is selected for the experiments. Table 4 shows the dataset statistics.

Table 4: Dataset statistics.

| Dataset | Vocab Size | Training Docs | Training Tokens | Validation Docs | Validation Tokens | Test Docs | Test Tokens |
|---|---|---|---|---|---|---|---|
| APNEWS | 34231 | 50K | 15M | 2K | 0.6M | 2K | 0.6M |
| IMDB | 36009 | 75K | 20M | 12.5K | 0.3M | 12.5K | 0.3M |
| BNC | 43703 | 15K | 18M | 1K | 1M | 1K | 1M |
| WikiText-2 | 33280 | 6182 | 2M | 620 | 218K | 704 | 246K |

The data is preprocessed as follows. For WikiText-2, we use the standard vocabulary, tokenization, and splits from Merity et al. (2017). We determine documents based on section header lines in the data. The EOS token is prepended to the start of each document and added to the end of each document. For APNEWS, IMDB, and BNC, documents are lowercased and tokenized using Stanford CoreNLP (Klein & Manning, 2003). Tokens that occur less than 10 times are replaced with the UNK token. The SOS token is prepended to the start of each document; the EOS token is appended to the end of each document. While we use the same vocabulary and tokenization as Lau et al. (2017), we do not add extra SOS and EOS tokens to the beginning and end of each sentence, so our perplexity numbers are not directly comparable to Lau et al. (2017); Rezaee & Ferraro (2020); Guo et al. (2020), who evaluate on the same datasets. When we redo the reproducibility experiment to use their original preprocessing settings, the trends in model performance are nearly identical to the results in this paper.

We use the same splits as Lau et al. (2017). Each model uses the same vocabulary for next-word prediction, so predictive performance is comparable across models. Models have different specifications for the number of words in the topic model component. For rGBN-RNN and TDLM, we follow the vocabulary preprocessing steps outlined in the respective papers. We exclude the top 0.1% most frequent tokens and tokens that appear in the Mallet stop word list. For TopicRNN and VRTM, we additionally exclude words that appear in fewer than 100 documents, following Wang et al. (2018).

## D Experiment Settings

We train all models on single GPUs with a language model batch size of 64. The experiments can be replicated on an AWS Tesla V100 GPU with 16GB of GPU memory. LSTM-LM, TopicRNN, VRTM, and TDLM are implemented in our codebase in PyTorch 1.12. We use the original implementation of rGBN-RNN, which uses TensorFlow 1.9. We note differences between our experiment settings and the original papers here. All other settings are the same as in the original papers, and we refer the reader to them for details.

For the topic model components of the topic-guided language models, we keep the settings from the original papers. However, in some cases, the original papers use different language model architectures and settings. In order for a topic-guided language model's performance not to be confounded by use of a stronger or weaker language model component, it was necessary to equalize these architectures and settings in the reproducibility study. Specifically, we use 600 hidden units for the language model component and a truncated BPTT length of 30 for all topic-guided language models. Additionally, we initialize VRTM with pre-trained word embeddings rather than a random initialization, and we strengthen TopicRNN's stop word prediction component by replacing the linear layer with an MLP. We found these changes to help the performance of the respective models, so we included them in the reproducibility study. As noted in the main paper, for rGBN-RNN, we use a model size of 600-600-600 because their public code only supports 3-layer models with same-size RNN layers.

As described in the main text, each model in Table 1 is trained until the validation perplexity does not improve for 5 epochs. After convergence, we use the checkpoint with the best validation perplexity. For each model, we perform three runs with random initializations trained until convergence. We report the mean of these runs along with their standard deviations.

We train LDA via Gibbs sampling using Mallet (McCallum, 2002). The hyperparameters are: α (topic density) = 50, β (word density) = 0.01, number of iterations = 1000.

## E Model Sizes

Table 5 contains parameter counts for the baselines and topic-guided language models. Here, the TGLMs have 100 topics (except rGBN-RNN, which has 100-80-50 hierarchical topics) and the same topic model vocabulary from the APNEWS dataset.
Table 5: Parameter counts for each LSTM-LM baseline and topic-guided language model.

| Model | Excluding embeddings | | | Including embeddings | | |
|---|---|---|---|---|---|---|
| | Total | LM | TM | Total | LM | TM |
| LSTM-LM (sentence-level) | 2.2M | 2.2M | - | 33M | 33M | - |
| LSTM-LM (1 layer) | 2.2M | 2.2M | - | 33M | 33M | - |
| TopicRNN (Dieng et al., 2017) | 2.5M | 2.4M | 0.1M | 45M | 33M | 12M |
| VRTM (Rezaee & Ferraro, 2020) | 3.3M | 2.4M | 0.9M | 37M | 33M | 4.1M |
| LSTM-LM (1 layer, with GRU) | 3.3M | 3.3M | - | 46M | 46M | - |
| TDLM (Lau et al., 2017) | 3.3M | 3.3M | 0.01M | 46M | 34M | 12M |
| LSTM-LM (3 layers) | 7.9M | 7.9M | - | 39M | 39M | - |
| rGBN-RNN (Guo et al., 2020) | 11.6M | 11.6M | 0.05M | 90M | 87M | 3.3M |

## F Full Probing Results

Table 6 shows the full results for the probe experiment. Standard deviations are computed from three runs with different random seeds for both the LSTM-LM baseline and the topic-guided language model.

Table 6: Probing results with standard deviations.

| Target | Data | APNEWS | | | IMDB | | |
|---|---|---|---|---|---|---|---|
| | | Acc-1 | Acc-5 | R² | Acc-1 | Acc-5 | R² |
| TDLM | init. | .036 (.01) | .135 (.02) | .134 (.00) | .028 (.01) | .132 (.02) | .023 (.00) |
| TDLM | trained | .340 (.01) | .681 (.01) | .621 (.01) | .180 (.02) | .449 (.04) | .400 (.02) |
| TopicRNN | init. | .075 (.00) | .224 (.02) | .022 (.00) | .086 (.01) | .259 (.04) | .014 (.00) |
| TopicRNN | trained | .210 (.00) | .540 (.01) | .232 (.00) | .238 (.02) | .575 (.00) | .231 (.01) |

| Target | Data | BNC | | | WT-2 | | |
|---|---|---|---|---|---|---|---|
| | | Acc-1 | Acc-5 | R² | Acc-1 | Acc-5 | R² |
| TDLM | init. | .098 (.02) | .294 (.05) | .073 (.02) | .128 (.05) | .379 (.06) | .018 (.01) |
| TDLM | trained | .314 (.07) | .638 (.07) | .379 (.05) | .510 (.07) | .858 (.02) | .352 (.05) |
| TopicRNN | init. | .077 (.01) | .195 (.05) | .017 (.00) | .197 (.17) | .442 (.10) | -.010 (.00) |
| TopicRNN | trained | .172 (.01) | .424 (.02) | .170 (.00) | .208 (.14) | .486 (.11) | .067 (.01) |

## G Topic-Guided LM Topics

Table 7 includes randomly sampled topics from each topic-guided language model fit to APNEWS.

Table 7: Randomly selected learned topics from each model on APNEWS.

| Model | Topics |
|---|---|
| TDLM | stolen robber robbed store robbery stole theft jewelry robbers suspect |
| | plane aviation aircraft passengers helicopter pilots airport crashed faa guard |
| | crash driver vehicle truck highway car accident died injuries scene |
| | assange fifa wikileaks nsa snowden blatter iran ukraine journalists russian |
| | emissions nuclear renewable energy congress trade turbines obama reactor china |
| TopicRNN | arriving unsuccessful wash. fail audio bases bargaining sunset first-quarter install |
| | marion evacuate ceiling skull caliber tend evacuation exist shanghai sank |
| | graham turner ellis gordon albany edwards albuquerque davis cia contributions |
| | malloy dannel cheyenne buffalo indian hudson paris india carbon broadway |
| | follow-up scenario rebound rodham luxury rebel ordinary referring prohibiting insist |
| rGBN-RNN | ground site family left dead kilometers miles residents village members |
| | world american disease america military blood days information pentagon top |
| | film art movie actor collection artists artist studio festival theater |
| | recent called lost past washington small big good place today |
| | workers system employees pay cost services agreement authority union contract |
| VRTM | counties family high reported billion prosecutors community percentage asked caliber |
| | angeles vegas moines half ap guilty prosecutors paso smith brown |
| | counties reported high family billion asked prosecutors gov. percentage earlier |
| | high prosecutors gov. family part american recent earlier past long |
| | gov. prosecutors high family recent earlier american past part top |
Review 1:
Summary: This paper revisits __topic-guided language models__, broadly defined as language models that combine the standard causal, left-to-right language modelling next-word prediction task with __topic models__ (Blei et al., 2003; inter alia). This line of work is motivated by earlier results on the limited ability of RNN and LSTM-based models in capturing long-range dependencies. In theory, such a combination of language models and topic models would enhance topic-guided language models' ability to model long-range dependencies, by decoupling the predictions that only require *local* context (e.g. local syntactic and semantic dependencies), which RNN and LSTM-based language models can already do quite well, from those that require long-range, *document-level*, semantic context, which topic models are able to capture well by uncovering latent topics. To that end, prior work has claimed positive results for topic-guided language modelling, demonstrating that such topic-guided language models achieve better perplexity than standard LSTM-based models, whilst also learning interpretable topics as an important byproduct; this stands in contrast to the hidden state activations of standard LSTM-based language models, which are comparatively harder to interpret.

The paper begins with a review of both language models (in particular, those that are based on RNNs and LSTMs) and topic models, before summarising four major prior works in topic-guided language modelling, covering both __topic-biased language models__ and __joint topic and language models__. The paper then asks three key questions:

- Under comparable and scientifically rigorous experimental conditions, to what extent --- if at all --- do topic-guided language models really outperform the standard LSTM language model baselines that are properly tuned to the same extent? In this work, the comparable experimental condition is achieved by controlling for three factors: (i) First, the topic model component should depend *only on the preceding words*, in order to preserve the left-to-right causality of the language modelling task; (ii) second, the baseline LSTM models should condition on prior words in the *entire document* history (as is standard in the language modelling literature), as opposed to only the prior words in the current sentence, as done in most prior topic-guided language modelling work; and lastly, (iii) the baseline models and the topic-guided ones should be compared under *similar model sizes*, as model size can be an important determinant of language modelling performance.
- To what extent --- if at all --- do the learnt topics from topic-guided language models outperform those of much simpler, standard topic model approaches?
- To what extent can a properly tuned, standard LSTM-based language model encode a similar degree of topic information (in its hidden state activations) as the latent topics uncovered by more complicated, topic-guided language modelling approaches?

To that end, the paper found that, under the comparable and scientifically rigorous experimental conditions outlined above, the baseline LSTM LM approach can actually __outperform__ the four topic-guided language models in terms of perplexity; these results are consistent across three different document-level language modelling benchmarks. Furthermore, the latent topics uncovered by these topic-guided language models are, in fact, __less coherent__ than those found by a much simpler, standard LDA (Blei et al., 2003) approach.
Finally, despite the baseline model's lack of an explicit objective function or encouragement to uncover latent topics, probing analysis demonstrates that the hidden state activations of standard LSTM LMs do, in fact, encode a fairly similar extent of latent topic information as that uncovered by a topic-guided language model approach (TDLM, Lau et al., 2017). Strengths and Weaknesses: __Strengths__ - Given the incredibly fast pace of the machine learning, NLP, and language modelling literature, __reproducibility and scientific rigor__ --- in particular, by conducting fair comparisons of different models under comparable experimental conditions, and accounting for various potential confounders like model sizes, proper tuning of the baseline models, etc. --- are extremely important in the field. This paper makes an important step in this direction: By comparing standard, well-tuned LSTM language models with their topic-guided counterparts in a level playing field, the paper derives useful insights (primarily around the fact that topic-guided language models do *not* necessarily outperform standard, well-tuned LSTM baselines, and that they do not necessarily learn coherent topics) that will be useful for other researchers in the field and the broader community. This kind of work is important to advance our __scientific understanding__ of what works well today (and what doesn't), above and beyond achieving the next state of the art results. - The paper provides a comprehensive review of the important concepts, such as topic models and topic-guided language models, that are necessary for less familiar readers to understand the key ideas, and therefore interpret the paper's key findings. I find the review of background material to be particularly thorough, especially given the page limit. - The paper conducts experiments on three different document-level language modelling datasets, and also takes into account the variance resulting from different random seeds, which improves the credibility of the findings and reflects more scientific rigor. __Weaknesses__ - The choice of evaluation datasets should be broadened to include more commonly used document-level language modelling benchmarks, such as WikiText-2 and WikiText-103. This would make it easier to benchmark the paper's results against prior work on strong, document-level language models, which is difficult to do at the moment (currently Table 1 in the paper only includes the authors' own LSTM-LM implementation as the baseline). This would enable the readers to assess how the authors' baseline implementation compares against strong LSTM LMs used in prior work (e.g. Mogrifier LSTM, Melis et al., 2019), including those with a similar number of parameters. This would further improve the credibility of the findings. - The mathematical notation in the paper can be improved to be clearer and more precise. Following standard notation, I would recommend using bold uppercase letters for matrices, bold lowercase letters for vectors, and non-bold letters for scalars. Similarly, I would recommend using bold lowercase letters to denote a __sequence of words__ (e.g. $\mathbf{x}_{<t}$ ). In contrast, a single word can be denoted with a lower, non-bold, italic lowercase letter, such as $x_t$. - The probing analysis --- while helpful --- can be strengthened by using control task (Hewitt and Liang, 2019) or minimum description length (Voita and Titov, 2020). 
This would help disentangle how much of the probing performance is due to the presence of the information in the hidden state vector, as opposed to the strength of the classifier itself.
- Because the primary benefit of topic-guided language models is claimed to be long-range, document-level semantic coherence (maybe human evaluation over the generated text?), it would be nice if there were a targeted metric for this, to assess whether or not such models really have an advantage over standard, well-tuned LSTM LMs in terms of long-range semantic coherence.
- There are still some open questions, suggestions, and stylistic suggestions (as listed below) that are not resolved yet.

__Questions__
- Does the topic-biased language model (Section 4.1) basically learn to interpolate between the topic model and the standard LSTM LM? If so, it might be worth saying so more explicitly.
- In the "Inference" section of the TDLM, the topic model (TM) and the language model (LM) have two different sets of parameters. But in Eq. (7), $\hat{\boldsymbol{\theta}}$ is based on an encoding of the prefix by the topic model (TM) parameters $\boldsymbol{\theta}^{\text{TM}}$, which is then fed into the LM parameters, even though the equation above it mentioned that $\mathcal{L}_{\text{LM}}$ depends on $\boldsymbol{\theta}^{\text{LM}}$, not $\boldsymbol{\theta}^{\text{TM}}$. Could you please clarify this?
- The backpropagation through time used a truncated length of 30 (page 8, training details). This seems to be on the shorter end; did you examine how the results would change with a longer truncated length?
- Another potential benefit of topic-guided language models is the __controllability__ angle, as we can ask the model to generate an article for a given topic vector. Having a discussion on whether this advantage is likely to hold (or mentioning it as a potential topic of investigation for future work) would be nice.

__Stylistic Suggestions__
- In Table 1, I recommend putting the best model (lowest perplexity) for a given parameter count in bold, which would make the table much easier to read (this might require some rearranging of how the table is structured).
- On page 3, "memoizing" is a typo.
- On page 9, "prevelant" is a typo.
- On page 10, "the the" is a grammatical error.
- I find the last sentence in the conclusion ("Future work can prioritize...") to be a bit hard to parse.

__Citation__
There is a recent paper that discusses a similar issue (lack of fair comparison and scientific rigor when comparing different models, leading the community to have the wrong understanding of what works and what doesn't): Nityasya et al. (2023): "On "Scientific Debt" in NLP: A Case for More Rigour in Language Model Pre-Training Research".
Requested Changes: - **Strongly Recommended**: Conducting experiments on a more commonly used document-level language modelling benchmark, such as WikiText-2 or WikiText-103. This would give the readers a better idea of how the numbers compare with strong language models in prior work (weakness 1 above). - **Strongly Recommended**: Improving the mathematical notations to be more standard and precise (weakness 2 above). - **Strongly Recommended**: Re-running probing analysis with either control task or minimum description length probing, to assess how much of the probe results are really due to the probed information being present in the hidden state activation (as opposed to the classifier's expressivity) (weakness 3 above).
- **Strongly Recommended**: Resolving the questions (and incorporating the relevant reply into the paper where applicable), stylistic suggestions, and citation as raised above. Broader Impact Concerns: I do not foresee any broader impact concerns. ================================================== Review 2: Summary: The paper takes a deep dive on evaluating work on topic-guided language modeling (TGLM). TGLM aims to improve language model performance by integrating more traditional topic models with RNN-family language models, under the assumption that topic models will capture global document information better than RNNs do on their own. The authors point to multiple flaws in the experimental design of prior work, and show that when these flaws are fixed and thus evaluation is more apples-to-apples, there is no performance gain to TGLM compared to vanilla LMs. The primary results focus on the language modeling performance, but the authors include two sections of additional results, one in which they use probing to measure whether vanilla LMs capture topic information natively, and one in which they look at the quality of the learned topics using automated metrics. Strengths and Weaknesses: Overall, the paper represents a nice, if modest, contribution. It is very well written and easy to read. The intuition that the authors support (i.e., that LMs already capture topic information and thus topic modeling is in some ways “redundant”) is an intuition I have personally had when seeing work on TGLMs and so I can’t help but find it satisfying that the authors confirm this intuition. I have some qualms with the experiments (below), in particular the probing studies, and I think augmenting the experiments to address these (or rearranging so that material from appendix can appear in the main text, if needed) would significantly strengthen the work. Requested Changes: Questions about Experiments/Content: * Since this is a “reproducibility study”, I would like to see more depth and discussion surrounding the differences in your (re)implementation of prior work vs. what was reported in that prior work. In particular, for many models, you say you reimplement from scratch. This is fine—but can you report how your reimplementation compares to the prior work before you make any adjustments to their setup? I.e., can you quantify how much better/worse your implementation’s performance is assuming your goal is just to exactly replicate the prior work? I think it is important to show this, so we can fully understand where the disconnect between your findings and the prior work’s findings stems from. Put differently: a skeptic might say that your lack of difference between models isn’t due to what you say (i.e., isn’t due to a more apples-to-apples comparison) but rather might just mean you did a bad job reimplementing the work! So if you can show that your reimplementation is legit, that strengthens your argument. * I am skeptical of the section on probing, I think you need to add more in order to make these results compelling. Specifically: —> Can you include a baseline or two in Table 2? E.g., maybe most frequent topic, and performance of a probe trained on a randomly initialized LM? I don’t know how to contextualize these numbers, so I can’t tell if these support the claim that there is significant topic information captured. —> Why do you only probe one TGLM? Can you probe the others too, to help contextualize? 
—> The issue you mention about how probe performance is correlated with topic model performance concerns me. Can you dig into this more? I feel like there are some artifacts driving the results which aren't transparently captured in your analysis. Maybe better baselines will help disentangle, or maybe you can run some other analysis to convince us that the probe is capturing exactly what you want it to capture, and not random "other stuff".
* (This is a thought which is harder to address, so not expecting anything, just dropping it here.) I wonder if you can design a baseline or experiment that gets to the heart of what you are trying to claim here. Namely, I feel like you are trying to say that both topic modeling and language modeling are capturing largely the same stuff, e.g., the principal components of the corpus perhaps? And thus a topic-augmented LM is in some ways "redundant". Can you quantify this intrinsic connection between the models somehow? Either experimentally or mathematically?
Comments/Suggestions on Writing and Style:
* The paper is nicely written, and whoever did the writing is clearly a good teacher/good at explaining things. :) But that said, there was a lot of time spent on background that was not novel to the paper and could be cut. For example, the section on variational inference felt odd. Either the reader already knows VI and will skip this section, or they don't know about VI, in which case this short description will hardly suffice to catch them up to speed. Moreover, understanding VI is not at all necessary for your contribution. I'd recommend cutting sections of this type of background (on VI, on LSTMs, on LDA, etc.) and instead using the extra space to deepen your experimental results and analysis.
* The term "reproducibility study" is a bit of a misnomer here. That implies that you are simply trying to replicate the numbers that others produced, and are not able to do so. But you are actually trying to correct previous problems with experimental design. So it's not a problem with reproducibility, but rather that when you fix certain problems, you draw an opposite conclusion from the original work.
Broader Impact Concerns: None

==================================================

Review 3:
Summary: This paper compares the effectiveness of topic-guided language models with standard LSTM language model baselines in a unified setting. Four topic-guided language models and two baselines are evaluated by the held-out predictive performance of each model on three corpora. The authors find that none of these topic-guided language models outperform a standard LSTM language model baseline, and most fail to learn good topics. Probing analysis shows that the baseline's hidden states already encode topic information, which explains why the topic-guided language models do not yield better performance.
The major contribution is that this work provides an insightful analysis showing that topic-guided language models do not perform as effectively as reported in prior work due to inconsistent settings with baselines. In contrast, the LSTM language model has already captured effective topic information inside its hidden states.
Strengths and Weaknesses: Strengths: 1. The paper is well-written and easy to read. The problem is clearly defined, the motivation is clear, and the analysis flows smoothly. 2. A comprehensive analysis of topic-guided language models is conducted, reaching insightful conclusions.
The conclusion challenges the effectiveness of the methodology in prior work, which would be of interest to the community. 3. This paper would contribute to a line of reproducibility studies that aim to evaluate competing methods in a consistent and equitable manner. The authors promise that the code will be publicly available. Weaknesses: 1. There are different experimental settings in previous studies. The choice of the unified setting lacks clarity. It is not clear if the conclusion still holds when using different experimental settings, e.g., different or no pre-trained word embeddings, different architecture, etc. 2. Only LSTM approaches are evaluated. As there are transformer-based topic-guided language models proposed in recent years, it would be interesting to see if those transformer approaches also have a similar phenomenon. Requested Changes: More comprehensive studies with other model architectures and other tasks are suggested. Broader Impact Concerns: n/a ================================================== Metareview: Recommendation: Accept with minor revision Comment: Overall, it is a good study and makes good contributions to the community (more details can be found in reviews). My recommendation is "accept with minor revision". The authors need to address the concern about test datasets and revise the paper: This work focuses on empirical study, but conducted experiments on less well-used datasets, which makes the conclusions less convincing, as there is not much prior work on these datasets. Therefore, the authors need to test on widely used benchmarks like WikiText-103 or other larger datasets and add new results into the final version. ==================================================
# Variational Autoencoding Of Dental Point Clouds

Johan Ziruo Ye (Technical University of Denmark, 3Shape), Johan.Ye@3shape.com
Thomas Ørkild (3Shape), Thomas.Orkild@3shape.com
Peter Lempel Søndergaard (3Shape), Peter.Soendergaard@3shape.com
Søren Hauberg (Technical University of Denmark), sohau@dtu.dk

Reviewed on OpenReview: https://openreview.net/forum?id=nH416rLatI

## Abstract

Digital dentistry has made significant advancements, yet numerous challenges remain. This paper introduces the FDI 16 dataset, an extensive collection of tooth meshes and point clouds. Additionally, we present a novel approach: Variational FoldingNet (VF-Net), a fully probabilistic variational autoencoder for point clouds. Notably, prior latent variable models for point clouds lack a one-to-one correspondence between input and output points. Instead, they rely on optimizing Chamfer distances, a metric that lacks a normalized distributional counterpart, rendering it unsuitable for probabilistic modeling. We replace the explicit minimization of Chamfer distances with a suitable encoder, increasing computational efficiency while simplifying the probabilistic extension. This allows for straightforward application in various tasks, including mesh generation, shape completion, and representation learning. Empirically, we provide evidence of lower reconstruction error in dental reconstruction and interpolation, showcasing state-of-the-art performance in dental sample generation while identifying valuable latent representations.

## 1 Introduction

Recent advancements and widespread adoption of intraoral scanners in dentistry have made micrometer-resolution 3D models readily available. Consequently, the demand for efficiently organizing these noisy scans has grown in parallel. To this end, we propose a variational autoencoder (Kingma & Welling, 2014; Rezende et al., 2014) specifically designed for point clouds, enabling the identification of continuous representations. This approach effectively captures the continuous changes and degradation of teeth over time. Our solution is a probabilistic latent variable model that ensures a one-to-one correspondence between points in the observed and generated point cloud. This one-to-one connection throughout the network allows for optimization of the original variational autoencoder objective. This is achieved by projecting the point cloud onto an intrinsic 2D surface representation, which allows for efficient sampling and also discourages storing information about the overall shape within this space. These 2D projections impart a strong inductive bias, proving highly beneficial when the input point cloud and the 2D surface share topology. Notably, this also bottlenecks the model, preventing it from learning the identity mapping. Specifically, Variational FoldingNet (VF-Net) learns a projection from the 3D point cloud input down to 2D space, which is then deformed back to reconstruct the input point cloud. Finally, these projections facilitate mesh generation without further training, as well as straightforward shape completion and shape extrapolation, all without compromising the quality of the learned representations (see Fig. 1 for samples).

![1_image_0.png](1_image_0.png)

Figure 1: VF-Net teeth samples, generated by our probabilistic variational autoencoder for point clouds. Note the wide variety in the samples, which retain anatomical details in their cusp/fissure composition.
Previous point cloud models generally lack one-to-one correspondence throughout the network due to their invariant architecture design. Instead, they evaluate reconstruction error using Chamfer distances (CD) (Barrow et al., 1977), defined as

$$\text{CHAMF-DIST}(\mathbf{x},\mathbf{y})=\frac{1}{|\mathbf{x}|}\sum_{i=1}^{m}\min_{y_{j}\in\mathbf{y}}\|x_{i}-y_{j}\|_{2}+\frac{1}{|\mathbf{y}|}\sum_{j=1}^{n}\min_{x_{i}\in\mathbf{x}}\|y_{j}-x_{i}\|_{2},\tag{1}$$

where m and n are the number of elements of x and y, respectively. This metric solves the invariance problem. However, it also poses a new one: the Chamfer distance does not readily lead to a likelihood, preventing its use in probabilistic modeling. For instance, when used in the Gaussian distribution, the function $\mathbf{x} \mapsto \frac{1}{C}\exp(-\text{CHAMF-DIST}^2(\mathbf{x}, \boldsymbol{\mu}))$ cannot be normalized to have unit integral due to the explicit minimization in Eq. 1. Consequently, previous latent variable models are closer to regularized autoencoders than the variational autoencoder. Since our model ensures one-to-one correspondence between points in the point clouds, we can easily build a proper probabilistic model.
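For concreteness, Eq. (1) is only a few lines of NumPy. The sketch below assumes point clouds stored as (m, 3) and (n, 3) arrays; these shapes and the toy data are our assumptions, not the paper's.

```python
import numpy as np

def chamfer_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Symmetric Chamfer distance of Eq. (1) between clouds
    x of shape (m, 3) and y of shape (n, 3), possibly of different cardinality."""
    # Pairwise Euclidean distances, shape (m, n).
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    # Average distance of each point to its nearest neighbour in the other cloud.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

x, y = np.random.randn(2048, 3), np.random.randn(1024, 3)
print(chamfer_distance(x, y))
```

Note how the inner `min` is exactly the explicit minimization that blocks normalization of a Chamfer-based density.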
Moreover, to encourage further research, we release a new dataset, the FDI 16 Tooth Dataset, providing a large collection of dental scans, available as both meshes and point clouds. This dataset provides real-world representations with planar topology. We consider this an excellent compromise between high-quality computer-aided design (CAD) models and sparse LiDAR scans (Chang et al., 2015; 2017; Caesar et al., 2020; Armeni et al., 2016). In digital dentistry, significant challenges are found in diagnostics, tooth (crown) generation, shape completion of obstructed areas of the teeth, sorting of point clouds, etc.

In summary, we present the first fully probabilistic variational autoencoder for point clouds, VF-Net, characterized by a highly expressive decoder with state-of-the-art generative capabilities, all while learning compressed representations and being adaptable for shape completion tasks. Furthermore, we release a dataset of 7,732 tooth meshes to facilitate further research on real-world 3D data.

## 2 Related Work

We focus on point cloud representations of 3D objects, but there are many alternative methods of representation, including voxel grids (Zheng et al., 2021; Wu et al., 2018), multi-angle inference (Wen et al., 2019; Han et al., 2019), and meshes (Alldieck et al., 2019; Wang et al., 2018; Groueix et al., 2018). A major paradigm in neural networks for point clouds is to remain permutation and cardinality invariant. In terms of encoder-decoder models, this frequently leads to designs without a one-to-one correspondence between inputs and outputs (Yang et al., 2018; Groueix et al., 2018). This becomes an obstacle in adapting the variational autoencoder to point clouds. Accordingly, other methods have become prominent, including GANs (Li et al., 2018; 2019), diffusion models (Zhou et al., 2021; Zeng et al., 2022; Zhou et al., 2023), and traditional autoencoders (Achlioptas et al., 2018; Groueix et al., 2018; Pang et al., 2021).

Existing Point Cloud Variational Autoencoders. Previous attempts to design a variational autoencoder for point clouds frequently rely on Chamfer distances as an approximation of the reconstruction term in the standard evidence lower bound. Consequently, these VAEs fail to evaluate a likelihood, a key characteristic of VAEs. These include works such as EditVAE, which aims to disentangle each point cloud into smaller parts. For each disentangled part, they use the Chamfer distance individually and a superquadric loss that consists of another Chamfer distance term and a regularization term to prevent overlapping parts (Li et al., 2022). The Venatus Geometric Variational Auto-Encoder (VG-VAE) introduces a Geometric Proximity Correlator module to better capture local geometric signatures. However, their work also relies on the Chamfer distance as the reconstruction term. Another latent variable model for point clouds is SetVAE (Kim et al., 2021), which uses transformers to process point clouds as sets. Their primary novelty is the introduction of a latent space with an enforced prior inside the transformer block. These transformer blocks are then stacked to form a hierarchical variational autoencoder (Sønderby et al., 2016), which complicates evaluation of its representations. However, SetVAE also approximates its reconstruction loss via Chamfer distances. Without explicit likelihood evaluation, these models become closer to a regularized autoencoder than the variational autoencoder.

Other Generative Models. On the other hand, LION (Zeng et al., 2022) is a latent diffusion model (Rombach et al., 2022) that maintains a one-to-one mapping throughout the network, allowing for probabilistic evaluation. However, they only implicitly utilize this by optimizing an L1-loss. Similar to our work, they encode their points in a separate space, but instead of bottlenecking this, they map them to a higher-dimensional space. This, unfortunately, leads to information about the shape being stored here, preventing direct sampling/modification of the embedded points in this space. Similarly to SetVAE, evaluating the quality of representations in LION, a hierarchical latent variable model, poses challenges. Recently, Zhou et al. (2023) presented FrePolad, another latent diffusion model. Their primary novelty is the introduction of the frequency rectification module that better captures high-frequency signals in point clouds. They train their model via a modified VAE loss to account for frequency-rectified distances. One fully probabilistic work is PointFlow (Yang et al., 2019). PointFlow utilizes a continuous normalizing flow (CNF) both as a prior and decoder, similar to approaches previously applied to images (Kingma et al., 2017; Sadeghi et al., 2019). Intuitively, one CNF models the distribution of shapes, while the other models the point distribution given the shape. In a comparable way, VF-Net's encoder maps to a global latent space, with point encoding projections providing a latent mapping for each input point. However, PointFlow's two CNFs are trained separately, whereas VF-Net trains them simultaneously, resulting in a more integrated and efficient process. PointFlow is unfortunately very slow to train (Kim et al., 2021). On our full proprietary dataset, PointFlow would have required 200 GPU days of training. Thus, we excluded it from our baselines.

Table 1: VF-Net is a generative model (GENERATIVE) for point clouds, but it can generate meshes without additional training (MESH) and do simple shape completion (COMPLETION). It is also fully probabilistic (PROBABILISTIC) and can identify interpretable lower-dimensional representations (REPRESENTATIONS). The compared rows are SetVAE, LION, FrePolad, PointFlow, DPM, PVD, FoldingNet, and VF-Net (ours); the individual per-method check marks are not recoverable here.
The diffusion probabilistic model (DPM) (Luo & Hu, 2021) and point-voxel diffusion (PVD) (Zhou et al., 2021) are two diffusion models for point clouds; PVD in particular generates accurate new samples. However, diffusion models do not find compressed, structured representations of the data as our VF-Net does; see Table 1 for a model property overview.

Digital Dentistry. In computational dentistry, extrapolating the tooth's obstructed sides is a well-known task. Qiu et al. (2013) present an attempt to use classic computational geometry methods. They attempt to reconstruct the missing parts of the distal and mesial sides of the tooth. This leads to a very smooth extrapolation, which performs well. Several works within dentistry take this a step further, e.g., attempting to extrapolate not just the sides but also the roots of the teeth (Wei et al., 2015; Zhou et al., 2018; Wu et al., 2016). We are optimistic that our model could adapt to such a task if dental cone beam computed tomography (CBCT) of the dental roots were available in the training data. Unfortunately, CBCT scans are expensive and rare; thus, we do not have a large enough dataset for neural network training.

![3_image_0.png](3_image_0.png)

Figure 2: VF-Net is a variational autoencoder with a normalizing flow prior over the shape latent. Individual points are projected to 2D space, establishing a one-to-one connection and facilitating mesh generation and shape completion. The decoder follows FoldingNet's with added residual connections, while the variance network consists of 3 folding modules as introduced in FoldingNet.

## 3 Variational Point Cloud Inference

Background: FoldingNet. To handle varying sizes and arbitrary order in point clouds, a common strategy is to employ neural networks exhibiting invariance to changes in cardinality and permutation, as proposed by Qi et al. (2017) in PointNet. FoldingNet employs a very similar encoder, e, that operates independently on each point of the point cloud to identify a latent code, z. Subsequently, the folding-based decoder, $f : \mathcal{Z} \times \mathbb{R}^2 \to \mathbb{R}^3$, "folds" a chosen constant base shape with points, c, according to the latent code. In our case, the base shape is a constant uniform grid in the two-dimensional planar patch $[-1, 1]^2$ (Yang et al., 2018). Both the encoder, e, and the decoder, f, are jointly trained to minimize the reconstruction error approximated via Chamfer distances (1),

$$\mathcal{E} = \text{CHAMF-DIST}\left(\mathbf{x},\, f(e(\mathbf{x}), \mathbf{c})\right).\tag{2}$$

This ensures invariance to cardinality and permutation changes, although it complicates variational inference extensions. A variational autoencoder yields a distribution for each input point (Kingma & Welling, 2014; Rezende et al., 2014). However, FoldingNet and most current permutation-invariant neural networks do not have a corresponding output for each individual input point in a point cloud.

## 3.1 The Variational FoldingNet

Motivated by unsupervised probabilistic representation learning's benefits across many tasks, including generative modeling (Kingma & Welling, 2014; Rezende et al., 2014; Dinh et al., 2017; Ho et al., 2020), out-of-distribution detection (Nalisnick et al., 2019; Havtorn et al., 2021), and handling missing data (Mattei & Frellsen, 2019), we introduce Variational FoldingNet (VF-Net). Architecturally, VF-Net closely resembles FoldingNet, employing a PointNet encoder, with the decoder structure mirroring that of FoldingNet.
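To make the folding operation concrete, here is a minimal PyTorch sketch of a FoldingNet-style decoder in the spirit of Eq. (2): a fixed uniform grid on $[-1, 1]^2$ is concatenated with the latent code and passed through two successive folding MLPs. The 45x45 grid, latent size 512, and two folding stages follow FoldingNet's published design, but the layer widths are illustrative assumptions, and the sketch omits VF-Net's residual connections and variance network (cf. Fig. 2).

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Folds a fixed 2D grid into a 3D point cloud, conditioned on a latent z."""
    def __init__(self, latent_dim: int = 512, grid_side: int = 45):
        super().__init__()
        # Constant base shape c: a uniform grid on the planar patch [-1, 1]^2.
        lin = torch.linspace(-1.0, 1.0, grid_side)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u.flatten(), v.flatten()], dim=1))
        self.fold1 = nn.Sequential(
            nn.Linear(latent_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))
        self.fold2 = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) -> points: (batch, grid_side**2, 3)
        n = self.grid.shape[0]
        z_rep = z[:, None, :].expand(-1, n, -1)
        c = self.grid[None].expand(z.shape[0], -1, -1)
        pts = self.fold1(torch.cat([z_rep, c], dim=-1))     # first folding
        return self.fold2(torch.cat([z_rep, pts], dim=-1))  # second folding

decoder = FoldingDecoder()
print(decoder(torch.randn(4, 512)).shape)  # torch.Size([4, 2025, 3])
```

VF-Net's key departure, described next, is to decode learned per-point projections instead of this static grid.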
For a complete overview, consult Fig. 2. The major technical innovation is the introduction of a novel projection for each input point into the planar space, defined as $\mathcal{G} = [-1, 1]^2$. Let x be a point cloud of points $x_1, \ldots, x_n$. Each point has a corresponding projection $g_1, \ldots, g_n$ in the set g. We will refer to these projections as our point encodings. It is important to note that the point encodings are not constrained by any prior distribution. Decoding these point encodings instead of a static planar patch establishes a one-to-one correspondence throughout the entire network, a necessity for evaluating likelihoods using the classical variational autoencoder objective. As VF-Net learns the point projections from x, the projected points, g, are now dependent on x. The folding of the point encodings, f(z, g), continues to be governed by the latent parameter vector z predicted by the PointNet encoder, e. The optimal projections are thus given by

$$g_{i}=\operatorname*{arg\,min}_{g^{\prime}\in{\mathcal{G}}}\|x_{i}-f(\mathbf{z},g^{\prime})\|^{2},\tag{3}$$

where $g_i \in \mathbf{g}$. We use a neural network to amortize the calculation of g such that the encoder network outputs both g and the distribution of z. By enabling the model to adjust the point encodings, we circumvent the need for optimizing through costly Chamfer distances. Furthermore, the learned projections allow the point encodings to adapt to their input, mitigating common pitfalls observed in FoldingNet; see Fig. 4.

![4_image_0.png](4_image_0.png)

Figure 3: Top: Mesh data samples from our released FDI 16 dataset and their corresponding VF-Net reconstructions. Note the large variety in health conditions between the teeth.

With a one-to-one point correspondence established across the network, we optimize our model using traditional variational autoencoder methods. In this context, the variational extension aligns closely with traditional methods, yet with a notable adjustment: the evaluation of the likelihood now also depends on the projected points, $p(\mathbf{x}) = \int p(\mathbf{x} \mid \mathbf{z}, \mathbf{g})\, p(\mathbf{z})\, d\mathbf{z}$. This integral remains intractable, and approximations are necessary. Following conventional inference (Kingma & Welling, 2014; Rezende et al., 2014), an evidence lower bound (ELBO) on p(x) is given by

$${\mathcal{L}}(\mathbf{x})=\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}[\log p(\mathbf{x}|\mathbf{z},\mathbf{g})]-\mathrm{KL}(q(\mathbf{z}|\mathbf{x})\,\|\,p(\mathbf{z})),\tag{4}$$

where q(z|x) is an approximation to the posterior p(z|x), which is assumed to follow a Gaussian distribution. Note that Eq. 3 is implicitly optimized in the likelihood term of the ELBO. Most current point cloud models replace the likelihood with a Chamfer distance, making the models closer to regularized autoencoders (Yang et al., 2018; Groueix et al., 2018; Kim et al., 2021). This design loses one-to-one correspondences between input and output, making likelihood evaluation difficult. In particular, no suitable normalization constant can be derived for probabilistic distributions using Chamfer distances. Our novel method for probabilistic evaluation of 3D reconstruction networks avoids the computationally expensive Chamfer distance (1). In supplementary Fig. S1, we empirically demonstrate that our projections can effectively replace Chamfer distances. We observe that the two metrics closely align, with Euclidean distances acting as an upper bound that tightens with improved reconstruction precision.
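In code, Eq. (4) reduces to a standard single-sample Monte Carlo VAE objective in which the reconstruction term is evaluated point-wise through the decoded point encodings. The sketch below is assumption-laden: it uses a Gaussian likelihood and a standard-normal prior for brevity, whereas the paper uses a Student-t likelihood (introduced next) and a normalizing-flow prior; `decode` stands in for the trained folding decoder and is assumed to return per-point means and log-variances aligned one-to-one with x.

```python
import torch

def elbo(x, z_mean, z_logvar, decode):
    """One-sample Monte Carlo estimate of the ELBO in Eq. (4).
    x: (batch, n, 3) input cloud; z_mean, z_logvar: (batch, latent) from q(z|x)."""
    # Reparameterized sample z ~ q(z|x).
    z = z_mean + torch.exp(0.5 * z_logvar) * torch.randn_like(z_mean)
    x_mean, x_logvar = decode(z)  # each (batch, n, 3), via the point encodings g
    # Gaussian log-likelihood log p(x | z, g); the paper uses a Student-t instead.
    log_px = -0.5 * (x_logvar + (x - x_mean) ** 2 / x_logvar.exp()
                     + torch.log(torch.tensor(2 * torch.pi))).sum(dim=(1, 2))
    # Closed-form KL(q(z|x) || N(0, I)); the paper instead fits a flow prior.
    kl = -0.5 * (1 + z_logvar - z_mean ** 2 - z_logvar.exp()).sum(dim=1)
    return (log_px - kl).mean()
```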
During the evaluation of the ELBO loss, we use a multivariate Student-t distribution with isotropic variance and three degrees of freedom as the reconstruction term. This choice helps to decrease the emphasis on outliers and instead focus more on the majority of the data points (Takahashi et al., 2018):

$$p(\mathbf{x} \mid \mathbf{z}, \mathbf{g}) = \text{Student-t}\left(\mathbf{x} \mid f(\mathbf{z}, \mathbf{g}),\, \sigma^2(\mathbf{z}, \mathbf{g})\, I,\, \nu\right),$$

where $f : \mathcal{Z} \times \mathbb{R}^2 \to \mathbb{R}^3$ and $\sigma^2 : \mathcal{Z} \times \mathbb{R}^2 \to \mathbb{R}_+$ are neural networks. No major changes were made to the generative process. We let a normalizing flow model the prior, p(z), which describes the shape of an object (Kingma et al., 2017). Note that this is trained subsequently, and no downscaling of the Kullback-Leibler divergence term occurs. When the input, x, and the projections, g, share topology, the bias allows for uniform sampling in the planar patch $[-1, 1]^2$. As in FoldingNet, this grid is subsequently deformed according to z. New samples can thus be generated by first sampling z and then mapping the uniformly sampled grid points through f and σ,

$$\mathbf{x}=f(\mathbf{z},\mathbf{g})+\sigma(\mathbf{z},\mathbf{g})\cdot\mathbf{t},\quad\mathbf{t}\sim\mathrm{Student\text{-}t}(\nu).\tag{5}$$

This also enables straightforward mesh generation, as the deformations are smooth: points projected close to each other correspond to points close in output space. Consequently, we can generate meshes by simply defining the facets in the 2D planar space.
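The generative process of Eq. (5) then takes only a few lines: sample z from the prior, lay down a uniform grid of point encodings, decode the per-point means, and add Student-t noise scaled by the variance network. Here `f`, `sigma_sq`, and `prior` are placeholders for the trained networks, not names from the paper, and the expected tensor shapes are our assumptions.

```python
import torch

def sample_point_cloud(f, sigma_sq, prior, grid_side=45, nu=3.0):
    """Draw one point cloud following Eq. (5)."""
    z = prior(1)                                   # shape latent, (1, latent_dim)
    lin = torch.linspace(-1.0, 1.0, grid_side)
    u, v = torch.meshgrid(lin, lin, indexing="ij")
    g = torch.stack([u.flatten(), v.flatten()], dim=1)[None]  # (1, n, 2)
    mean = f(z, g)                                 # decoded means, (1, n, 3)
    std = sigma_sq(z, g).sqrt()                    # isotropic std, (1, n, 1)
    t = torch.distributions.StudentT(df=nu).sample(mean.shape)
    return mean + std * t                          # Eq. (5)
```

Because neighboring grid points stay neighbors after folding, the same grid's triangulation directly yields mesh facets for the sample.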
## 4 The FDI 16 Tooth Dataset

To improve the state-of-the-art modeling of dental scans, we will release an extensive new dataset alongside this paper under the CC BY-NC-SA 4.0 license. The FDI 16 dataset is a collection of 7,732 irregular triangle meshes of the right-side first maxillary molar tooth, formally denoted 'FDI 16' following ISO 3950 notation (see Fig. 3). These meshes were acquired from fully anonymized intraoral scans primarily scanned using 3Shape's TRIOS 3 scanners. Each tooth in the FDI 16 Tooth dataset was algorithmically segmented from an upper jaw scan by 3Shape's Ortho Systems 2023. As the teeth are a subsection of a full intraoral jaw scan, there will be areas obstructed by the adjacent teeth. The teeth, therefore, constitute open meshes and have clear boundaries with no representation of interior object volume. All tooth meshes are from patients undergoing aligner treatment, and accordingly, aligner attachments will be present in a substantial number of scans. This introduces a bias towards younger individuals, who generally have fewer restorations and dental problems. The top row of Fig. 3 shows examples of such meshes. All scans have been made publicly available fully anonymously as meshes and point clouds at millimeter scale. The teeth have been algorithmically rotated to ensure that the x-axis is turned towards the neighboring tooth (FDI 17), while the y-axis points in the occlusal direction of the biting surface. Finally, the z-axis is given by the cross-product to ensure a right-hand coordinate system.

Dental scans have a diverse set of research applications. This study explores reconstruction, generation of new teeth, representation learning, and shape completion, all of which have different but critical applications in digital dentistry. We believe that the FDI 16 dataset addresses a crucial niche within 3D datasets by offering a dataset that strikes a balance between the highly detailed but idealized CAD scans (Chang et al., 2015) and sparser real-world LiDAR scans (Chang et al., 2017; Caesar et al., 2020; Armeni et al., 2016). Note that any method considered for deployment must be capable of running efficiently on edge devices without a significant performance overhead. This is particularly important as intraoral scanners must function seamlessly even in areas with limited network connectivity.

## 5 Experimental Results

We next evaluate VF-Net's performance on point cloud generation, auto-encoding, shape completion, and unsupervised representation learning. Note that FrePolad (Zhou et al., 2023), EditVAE (Li et al., 2022), and VG-VAE (Anvekar et al., 2022) have been excluded from comparison as no public implementation is available.

Point cloud generation. To compare sampling performance, we deploy three established metrics for 3D generative model evaluation (Yang et al., 2019). Minimum matching distance (MMD) measures the average distance from each ground-truth test point cloud to its nearest neighbor among the generated samples. Coverage (COV) measures the fraction of point clouds in the ground-truth test set that are the nearest neighbor of at least one generated sample. 1-nearest neighbor accuracy (1-NNA) uses a 1-NN classifier to classify whether a sample is generated or from the ground-truth dataset; an accuracy of 50% means generated samples are indistinguishable from the test set. Data handling and training details for the FDI 16 experiments can be found in supplementary sections S1.3 and S1.4, respectively.

Table 2: Across five seeds, VF-Net produces close to as large a variety of teeth as PVD and LION while generating samples much closer to real teeth. MMD has been multiplied by 100.

| Method | MMD CD (↓) | MMD EMD (↓) | COV CD (%↑) | COV EMD (%↑) | 1-NNA CD (%↓) | 1-NNA EMD (%↓) |
|---|---|---|---|---|---|---|
| Train subsampled | 21.00±0.09 | 51.53±0.06 | 49.00±0.64 | 46.95±2.79 | 49.83±0.68 | 50.97±0.82 |
| SetVAE | 39.00±0.78 | 66.66±0.38 | 10.66±0.66 | 9.52±0.27 | 97.99±0.32 | 97.95±0.34 |
| DPM | 20.71±0.10 | 51.94±0.09 | 36.94±0.65 | 33.28±0.65 | 70.30±0.82 | 75.75±0.90 |
| PVD | 21.58±0.03 | 51.64±0.08 | 44.11±0.76 | 43.23±0.92 | 62.85±0.78 | 60.70±1.06 |
| LION | 22.12±0.15 | 52.75±0.12 | 45.12±0.60 | 43.32±1.28 | 68.56±0.73 | 66.76±0.94 |
| VF-Net (Ours) | 20.38±0.09 | 49.72±0.04 | 42.85±0.84 | 40.20±0.71 | 56.31±0.39 | 56.05±0.32 |

![6_image_0.png](6_image_0.png)

Figure 4: FoldingNet's mesh reconstructions have gaps and highly distorted facets. Conversely, VF-Net's mesh facets are even more regular than the input point cloud, and points in the reconstruction are placed closely resembling its input.

Sampling from VF-Net can be done by sampling a uniform grid in the latent point encoding space, akin to FoldingNet. However, the corners of the uniform grid cause edge artifacts in the generated samples, evident in the generated meshes in Fig. S2. This can also be observed in the generated meshes in Fig. 3 and Fig. 4, although it is more difficult to spot. The sampling metrics heavily punish such artifacts. Instead, we trained a minor network, similar to the decoder of FoldingNet, to predict the point encodings from the latent representation. We emphasize that this is entirely unnecessary for regular sampling. The sampling evaluations across five different seeds can be found in Table 2. The results demonstrate that VF-Net generates much more accurate samples, as evidenced by the significantly lower MMD and 1-NNA scores, while being close in diversity to PVD and LION (Zhou et al., 2021; Zeng et al., 2022). Furthermore, sampling is much faster than with PVD and LION, as VF-Net does not depend on an iterative diffusion process. Note that while MMD is very stable across seeds, the COV and 1-NNA scores may vary. Outside of the FDI 16 dataset, we also train VF-Net on a proprietary dataset, which includes the remaining teeth from the FDI 16 jaws; see supplementary section S1.5 for training details. However, we did not quantify sampling performance, as sampling evaluation on 40k test samples would be exceedingly computationally expensive. We observe that VF-Net can sample from all major teeth types (incisors, canines, premolars, and molars); see Fig. 1. Additional mesh samples may be found in supplementary Fig. S2.
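Of the three generation metrics above, 1-NNA is the least standard, so a minimal sketch follows. It assumes precomputed pairwise distance matrices between and within the generated and reference sets (each cloud-to-cloud distance being CD or EMD); an accuracy near 50% indicates indistinguishable sets.

```python
import numpy as np

def one_nna(d_gen_gen, d_gen_ref, d_ref_ref):
    """1-nearest-neighbour accuracy between a generated and a reference set.
    d_gen_gen: (n_gen, n_gen), d_gen_ref: (n_gen, n_ref), d_ref_ref: (n_ref, n_ref).
    Note: mutates the two square matrices to mask self-distances."""
    n_gen, n_ref = d_gen_ref.shape
    np.fill_diagonal(d_gen_gen, np.inf)  # leave-one-out: ignore self-distance
    np.fill_diagonal(d_ref_ref, np.inf)
    # A sample is classified correctly if its nearest neighbour (excluding
    # itself) belongs to the same set.
    correct_gen = d_gen_gen.min(axis=1) < d_gen_ref.min(axis=1)
    correct_ref = d_ref_ref.min(axis=1) < d_gen_ref.min(axis=0)
    return (correct_gen.sum() + correct_ref.sum()) / (n_gen + n_ref)
```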
Table 3: Reconstruction error measured in Chamfer distances (CD) and earth mover's distances (EMD). Note both values have been multiplied by 100.

| Method | FDI 16 Tooth CD | FDI 16 Tooth EMD | All FDIs CD | All FDIs EMD |
|---|---|---|---|---|
| DPM | 10.04 | 43.98 | 5.67 | 35.8 |
| SetVAE | 21.50 | 59.24 | 9.98 | 51.48 |
| LION | 5.35 | 22.85 | 3.02 | 9.66 |
| FoldingNet | 5.26 | 33.67 | 3.43 | 31.25 |
| VF-Net (ours) | 1.21 | 6.30 | 0.97 | 5.30 |

Point cloud auto-encoding. We compare VF-Net's reconstruction quality to that of the previously mentioned generative models and FoldingNet. This evaluation was performed on both the FDI 16 dataset and the larger proprietary dataset. Please consult supplementary sections S1.3 and S1.5 for data handling and training details. We compared the reconstruction errors using the Chamfer distance and the earth mover's distance (Rubner et al., 2000),

$$\mathrm{EMD}(\mathbf{x},\mathbf{y})=\min_{\phi:\mathbf{x}\to\mathbf{y}}\sum_{x_{i}\in\mathbf{x}}\|x_{i}-\phi(x_{i})\|_{2}.\tag{6}$$

The earth mover's distance measures the least expensive one-to-one transportation between two distributions. However, this is computationally expensive and thus rarely used for model optimization (Wu et al., 2021). The reconstruction errors are presented in Table 3. Point-Voxel Diffusion (PVD) (Zhou et al., 2021) was excluded from comparison due to not returning the same tooth upon reconstruction. VF-Net achieves a significantly lower reconstruction error than our comparison methods on both the FDI 16 dataset and the proprietary dataset comprising 119,496 teeth, encompassing 32 distinct teeth.

![7_image_0.png](7_image_0.png)

As shown in Fig. 4, VF-Net's one-to-one correspondence is evident in its reconstruction. The point placements mimic those in the input point cloud, while FoldingNet's points are evenly distributed. VF-Net and FoldingNet can both generate meshes without any additional training of the model. However, FoldingNet folds the edge across the tooth to accommodate teeth of different sizes. Besides mesh gaps, this also leads to highly irregular facets that intersect one another. On the other hand, VF-Net can adjust the point encoding area to avoid such artifacts. However, VF-Net's reconstructions often exhibit excessive smoothness and lack the desired level of detail, a common observation in variational autoencoders (Kingma & Welling, 2014; Vahdat & Kautz, 2021; Tolstikhin et al., 2019).
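For equal-cardinality clouds, the earth mover's distance in Eq. (6) is an optimal assignment problem and can be solved exactly with the Hungarian algorithm. A minimal sketch follows; it is exact but O(n^3), which is why cheaper approximations are usually preferred in practice.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def earth_movers_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Exact EMD of Eq. (6) between clouds x and y of equal shape (n, 3)."""
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # (n, n)
    rows, cols = linear_sum_assignment(cost)  # the optimal bijection phi
    return cost[rows, cols].sum()
```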
Variance estimation for point clouds. Predicted variances from the variance network are shown in Fig. 5, where red indicates a higher variance and green indicates a lower variance within each point cloud. Note that all variances shown are relative intra-point cloud variances. Notably, the network assigns higher variance to the fifth cusp and aligner attachments, features only present in a subset of samples. Furthermore, the border of the mesh tends to be assigned higher variance, likely due to a combination of data loading and segmentation artifacts. When the network is not in doubt about the two factors mentioned previously, it assigns the highest variance to the occlusal surface. All of this aligns with expectations of which areas of the teeth have the most variance.

Figure 5: Intra-point cloud relative predicted variance (red is high, green is low). Notably, the Carabelli cusp and aligner attachment areas exhibit high variance, two features only present in a subset of individuals.

Reconstructions and the Corresponding Latent Point Encoding

![7_image_1.png](7_image_1.png)

Figure 6: Left: Red points are removed from the point cloud. Right: Reconstruction and projected point encodings remain highly similar despite point deletion. Sampling the missing area is facilitated by sampling within the corresponding empty region of the latent point encoding.

Simulated shape completion. One significant benefit of the inductive bias from the point encodings is straightforward shape completion and shape extrapolation. In computational dentistry, inferring the obstructed sides of a tooth and reconstructing the tooth surface beneath obstructions such as braces pose a key challenge. Paired data of obstructed and unobstructed surfaces is exceedingly rare. Therefore, developing a model capable of extrapolating such surfaces without explicit training is highly desirable. To this end, we simulate the task by evaluating the interpolation performance of each model. This is done by sampling a point on the outward side of the tooth and deleting its nearest neighbors to a total of 200 points. Selecting a mid-buccal point simulates bracket removal prediction ("Bracket sim"), while opting for a lower buccal point simulates the obstructed side prediction ("Gap sim"). An example of a synthetic hole is depicted in Fig. 6, where the red points are to be removed. Both reconstructions and latent point encodings remain highly similar despite the removal of the red points. Extrapolation/interpolation can be performed by sampling in the point encoding space. To quantify the interpolation performance, we calculate the distance from the deleted points to their nearest neighbor in the completed point cloud; see supplementary Sec. S1.7 for more experiment details. To contextualize the performance, we trained several shape completion methods (PVD (Zhou et al., 2021), PoinTr (Yu et al., 2021), VRCNet (Pan et al., 2021)). Since these methods only predict the missing area, a completely fair comparison cannot be made. The results can be found in Table 4, under "Bracket sim" and "Gap sim", simulating the removed bracket and the gap between teeth, respectively. Here, VF-Net outperforms its peers when it comes to untrained interpolation, and as expected there is a gap in performance between the trained and untrained methods. Shape completion using LION's latent points from the original tooth contains information about the shape, rendering a fair comparison infeasible.

Table 4: Unsupervised generative models in the top half are untrained interpolation, while the bottom half are trained models. All Chamfer distances have been multiplied by 100.

| Method | Bracket sim | Gap sim |
|---|---|---|
| DPM (unsupervised) | 15.88 | 38.00 |
| SetVAE (unsupervised) | 11.50 | 13.35 |
| FoldingNet (unsupervised) | 16.42 | 20.14 |
| VF-Net (ours, unsupervised) | 3.55 | 4.35 |
| PVD (trained) | 2.23 | 2.37 |
| PoinTr (trained) | 1.84 | 1.83 |
| VRCNet (trained) | 2.42 | 2.04 |
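The synthetic holes behind "Bracket sim" and "Gap sim" are straightforward to reproduce: pick a seed point and delete its nearest neighbors up to 200 points. In the sketch below, the seed index and the random toy cloud are placeholders; the paper selects mid- or lower-buccal seed points on real teeth.

```python
import numpy as np

def make_hole(points: np.ndarray, seed_idx: int, hole_size: int = 200):
    """Remove the `hole_size` nearest neighbours of points[seed_idx].
    Returns the punctured cloud and the deleted points."""
    d = np.linalg.norm(points - points[seed_idx], axis=1)
    deleted = np.argsort(d)[:hole_size]          # seed point plus its neighbours
    keep = np.setdiff1d(np.arange(len(points)), deleted)
    return points[keep], points[deleted]

cloud = np.random.randn(2048, 3)
punctured, hole = make_hole(cloud, seed_idx=0)
print(punctured.shape, hole.shape)  # (1848, 3) (200, 3)
```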
Table 5: Percentage of teeth whose classifier prediction changed as expected when moved in the tooth-wear direction. L, M, and H denote light, medium, and heavy wear, respectively.

| Method | L→H | L→M | M→L | M→H | H→M | H→L |
|---|---|---|---|---|---|---|
| FoldingNet | 91.77 | 91.77 | 95.02 | 94.89 | 97.80 | 97.80 |
| VF-Net (ours) | 92.11 | 99.31 | 97.04 | 96.37 | 98.24 | 99.12 |

Representation learning. We compare our latent representation to FoldingNet's, as it is the comparison model with the most interpretable latent variables. First, we follow FoldingNet's proposed evaluation method of classifying the input point cloud from the latent space, using a linear support vector machine (SVM) to classify which tooth from the larger proprietary dataset is embedded (a 32-class problem). Here, the SVM achieves 96.80% accuracy on VF-Net's latent codes compared to 96.36% for FoldingNet. This indicates that all global point cloud information is stored in the latent variables, meaning the latent point encodings exclusively contain information about specific points. No information pertaining to the overall point cloud shape is stored in the point encodings. For qualitative assessment, an interpolation between two FDI 16 teeth and an interpolation example between an incisor and a premolar can be found in Fig. 7. Both interpolations exhibit a seamless transition in the latent space; for a more detailed view, see supplementary Fig. S3.

![8_image_0.png](8_image_0.png)

Figure 7: Interpolating between two teeth by interpolating their latent codes using the same mesh decoding.

![9_image_0.png](9_image_0.png)

Figure 8: Moving in the tooth-wear direction in latent space (panels: removed tooth wear, medium wear reconstruction, added tooth wear). Left: Red areas have higher values than the original. Middle: The original reconstruction. Right: Blue areas are lower than the original. As the level of tooth wear increases, we observe a gradual smoothing of the occlusal surface.

Next, we attempt to add and remove tooth wear; see Fig. 8. We navigate the latent space of VF-Net in the direction of tooth wear or away from it. The direction was determined by calculating the average change in latent representations when encoding 10 teeth from their counterparts with synthetically induced tooth wear. These teeth were manually sculpted to simulate tooth wear; see supplementary Fig. S4. We observe behavior that closely aligns with our expectations of how the tooth would change when adding or subtracting tooth wear. To quantify the performance, we train a small PointNet model (Qi et al., 2017) on a proprietary dataset of 1400 teeth annotated with light/medium/heavy tooth wear. We subsequently validate whether a change in the latent space yields the expected change in classifier prediction. In Table 5, each class denotes the base class before adding/removing tooth wear. For light and heavy wear, we added and removed tooth wear, respectively, while medium-wear teeth were evaluated both when adding and removing wear. The findings presented in Table 5 indicate that VF-Net's latent representations show greater robustness.
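The tooth-wear edits follow a classic latent-arithmetic recipe: average the latent displacement over the (original, synthetically worn) pairs, then add a multiple of that direction to any latent code before decoding. In this sketch, `encode` and `decode` are placeholders for the trained VF-Net networks, not names from the paper.

```python
import torch

def wear_direction(encode, originals, worn):
    """Average latent displacement from original teeth to their synthetically
    worn counterparts (the paper uses 10 manually sculpted pairs)."""
    deltas = [encode(w) - encode(o) for o, w in zip(originals, worn)]
    return torch.stack(deltas).mean(dim=0)

def edit_tooth_wear(encode, decode, cloud, direction, strength=1.0):
    """Add (strength > 0) or remove (strength < 0) tooth wear by moving the
    latent code along the learned direction before decoding."""
    return decode(encode(cloud) + strength * direction)
```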
Limitations. Similar to variational autoencoders in other domains, VF-Net tends to produce overly smooth samples. This characteristic could impact applications such as crown generation, where precise replication of the biting surface is crucial to prevent patient discomfort. Moreover, the model's tendency towards smoothness suggests potential challenges in capturing finer details of teeth, which are essential for comprehensive representation learning.

Until now, the inductive bias from folding a 2D plane to a point cloud has proven highly beneficial. This is only the case when the input point cloud shares topology with the 2D plane. Unfortunately, this inductive bias is not as beneficial when the two topologies differ. We trained VF-Net on ShapeNet data (Chang et al., 2015). The drawback is not evident through the reconstructions; see supplementary Table S1. VF-Net has a low reconstruction error, but LION boasts the lowest. Issues arise when attempting to generate new samples, due to information about the shape being stored in the latent point encodings, as depicted in Fig. 9. The latent point encodings form a non-continuous distribution, posing challenges for sampling new models. Note that for point clouds sharing topology, VF-Net is strongly biased towards generating a continuous distribution; see Fig. 9. This issue could potentially be addressed by training a flow or diffusion prior for the point encodings, similar to the approach used in LION (Zeng et al., 2022). However, since this was not the focus of our model, we did not pursue this idea.

Reconstructions and the Corresponding Latent Point Encoding

![9_image_1.png](9_image_1.png)

Figure 9: Left: While accurately reconstructed, the airplane forms a non-continuous distribution in the latent point encoding, posing challenges for sampling. Right: An incisor and its corresponding point encodings.

## 6 Conclusion

We have introduced the FDI 16 dataset and Variational FoldingNet (VF-Net), a fully probabilistic point cloud model in the same spirit as the original variational autoencoder (Kingma & Welling, 2014; Rezende et al., 2014). The key technical innovation is the introduction of a point-wise encoder network that replaces the commonly used Chamfer distance, allowing for probabilistic modeling. Importantly, we have shown that VF-Net offers better auto-encoding than current state-of-the-art generative models and more realistic sample generation for dental point clouds. Additionally, VF-Net offers straightforward shape completion and extrapolation due to its latent point encodings, all while identifying highly interpretable latent representations.

Impact statement. This paper contributes a generative model that is particularly suitable for dental data. This translates into several positive use cases within clinical practice. However, previous generative models have been shown to be useful for less positive use cases, such as deep fakes and fake news. It is unclear how this could take form in digital dentistry, but destructive minds tend to be creative.

Acknowledgements. This work was funded in part by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). SH was supported in part by research grants (15334, 42062) from VILLUM FONDEN. JY was supported by Innovation Fund Denmark (1044-00172B).

## References

Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning Representations and Generative Models for 3D Point Clouds, June 2018. URL http://arxiv.org/abs/1707.02392. arXiv:1707.02392 [cs].

Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, and Marcus Magnor.
Tex2Shape: Detailed Full Human Body Geometry From a Single Image. arXiv:1904.08645 [cs], September 2019. URL http://arxiv.org/abs/1904.08645.

Tejas Anvekar, Ramesh Ashok Tabib, Dikshit Hegde, and Uma Mudengudi. VG-VAE: A Venatus Geometry Point-Cloud Variational Auto-Encoder. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2977-2984, New Orleans, LA, USA, June 2022. IEEE. ISBN 978-1-6654-8739-9. doi: 10.1109/CVPRW56347.2022.00336. URL https://ieeexplore.ieee.org/document/9857384/.

Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3D Semantic Parsing of Large-Scale Indoor Spaces. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1534-1543, Las Vegas, NV, USA, June 2016. IEEE. ISBN 978-1-4673-8851-1. doi: 10.1109/CVPR.2016.170. URL http://ieeexplore.ieee.org/document/7780539/.

Harry G. Barrow, Jay M. Tenenbaum, Robert C. Bolles, and Helen C. Wolf. Parametric Correspondence and Chamfer Matching: Two New Techniques for Image Matching. August 1977. URL https://openreview.net/forum?id=rkb6wXfdWB.

Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving, May 2020. URL http://arxiv.org/abs/1903.11027. arXiv:1903.11027 [cs, stat].

Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D Data in Indoor Environments, September 2017. URL http://arxiv.org/abs/1709.06158. arXiv:1709.06158 [cs].

Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository, December 2015. URL http://arxiv.org/abs/1512.03012. arXiv:1512.03012 [cs].

Nicki S. Detlefsen, Martin Jørgensen, and Søren Hauberg. Reliable training and estimation of variance networks, November 2019. URL http://arxiv.org/abs/1906.03260. arXiv:1906.03260 [cs, stat].

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP, February 2017. URL http://arxiv.org/abs/1605.08803. arXiv:1605.08803 [cs, stat].

Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry. AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. arXiv:1802.05384 [cs], July 2018. URL http://arxiv.org/abs/1802.05384.

Zhizhong Han, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds from Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction. arXiv:1907.12704 [cs], July 2019. URL http://arxiv.org/abs/1907.12704.

Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, and Lars Maaløe. Hierarchical VAEs Know What They Don't Know. arXiv:2102.08248 [cs, stat], March 2021. URL http://arxiv.org/abs/2102.08248.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models, December 2020. URL http://arxiv.org/abs/2006.11239. arXiv:2006.11239 [cs, stat].

Jinwoo Kim, Jaehoon Yoo, Juho Lee, and Seunghoon Hong. SetVAE: Learning Hierarchical Composition for Generative Modeling of Set-Structured Data, March 2021. URL http://arxiv.org/abs/2103.15619. arXiv:2103.15619 [cs].
Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv:1312.6114 [cs, stat], May 2014. URL http://arxiv.org/abs/1312.6114.

Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow, January 2017. URL http://arxiv.org/abs/1606.04934. arXiv:1606.04934 [cs, stat].

Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, and Ruslan Salakhutdinov. Point Cloud GAN. arXiv:1810.05795 [cs, stat], October 2018. URL http://arxiv.org/abs/1810.05795.

Ruihui Li, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-GAN: a Point Cloud Upsampling Adversarial Network. arXiv:1907.10844 [cs], July 2019. URL http://arxiv.org/abs/1907.10844.

Shidi Li, Miaomiao Liu, and Christian Walder. EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation, March 2022. URL http://arxiv.org/abs/2110.06679. arXiv:2110.06679 [cs].

Shitong Luo and Wei Hu. Diffusion Probabilistic Models for 3D Point Cloud Generation, June 2021. URL http://arxiv.org/abs/2103.01458. arXiv:2103.01458 [cs].

Pierre-Alexandre Mattei and Jes Frellsen. MIWAE: Deep Generative Modelling and Imputation of Incomplete Data, February 2019. URL http://arxiv.org/abs/1812.02633. arXiv:1812.02633 [cs, stat].

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do Deep Generative Models Know What They Don't Know?, February 2019. URL http://arxiv.org/abs/1810.09136. arXiv:1810.09136 [cs, stat].

Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai Yi, and Ziwei Liu. Variational Relational Point Completion Network, April 2021. URL http://arxiv.org/abs/2104.10154. arXiv:2104.10154 [cs].

Jiahao Pang, Duanshun Li, and Dong Tian. TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations. arXiv:2006.10187 [cs], September 2021. URL http://arxiv.org/abs/2006.10187.

Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv:1612.00593 [cs], April 2017. URL http://arxiv.org/abs/1612.00593.

Nina Qiu, Ran Fan, Lihua You, and Xiaogang Jin. An efficient and collision-free hole-filling algorithm for orthodontics. The Visual Computer, 29(6):577-586, June 2013. ISSN 1432-2315. doi: 10.1007/s00371-013-0820-6. URL https://doi.org/10.1007/s00371-013-0820-6.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv:1401.4082 [cs, stat], May 2014. URL http://arxiv.org/abs/1401.4082.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models, April 2022. URL http://arxiv.org/abs/2112.10752. arXiv:2112.10752 [cs].

Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The Earth Mover's Distance as a Metric for Image Retrieval. International Journal of Computer Vision, 40(2):99-121, November 2000. ISSN 1573-1405. doi: 10.1023/A:1026543900054. URL https://doi.org/10.1023/A:1026543900054.

Hossein Sadeghi, Evgeny Andriyash, Walter Vinci, Lorenzo Buffoni, and Mohammad H. Amin. PixelVAE++: Improved PixelVAE with Discrete Prior, August 2019. URL http://arxiv.org/abs/1908.09948. arXiv:1908.09948 [cs, stat].
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder Variational Autoencoders. arXiv:1602.02282 [cs, stat], May 2016. URL http://arxiv.org/abs/1602.02282.

Hiroshi Takahashi, Tomoharu Iwata, Yuki Yamanaka, Masanori Yamada, and Satoshi Yagi. Student-t Variational Autoencoder for Robust Density Estimation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pp. 2696-2702, Stockholm, Sweden, July 2018. International Joint Conferences on Artificial Intelligence Organization. ISBN 978-0-9992411-2-7. doi: 10.24963/ijcai.2018/374. URL https://www.ijcai.org/proceedings/2018/374.

Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein Auto-Encoders, December 2019. URL http://arxiv.org/abs/1711.01558. arXiv:1711.01558 [cs, stat].

Arash Vahdat and Jan Kautz. NVAE: A Deep Hierarchical Variational Autoencoder. arXiv:2007.03898 [cs, stat], January 2021. URL http://arxiv.org/abs/2007.03898.

Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images. arXiv:1804.01654 [cs], August 2018. URL http://arxiv.org/abs/1804.01654.

Xiaomeng Wei, Li Chen, and Chaowei Gao. Automatic mesh fusion for dental crowns and roots in a computer aided orthodontics system. In 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), pp. 280-290, October 2015. doi: 10.1109/BMEI.2015.7401516.

Chao Wen, Yinda Zhang, Zhuwen Li, and Yanwei Fu. Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation. arXiv:1908.01491 [cs], August 2019. URL http://arxiv.org/abs/1908.01491.

Chenglei Wu, Derek Bradley, Pablo Garrido, Michael Zollhöfer, Christian Theobalt, Markus Gross, and Thabo Beeler. Model-based teeth reconstruction. ACM Transactions on Graphics, 35(6):1-13, November 2016. ISSN 0730-0301, 1557-7368. doi: 10.1145/2980179.2980233. URL https://dl.acm.org/doi/10.1145/2980179.2980233.

Jiajun Wu, Chengkai Zhang, Xiuming Zhang, Zhoutong Zhang, William T. Freeman, and Joshua B. Tenenbaum. Learning Shape Priors for Single-View 3D Completion and Reconstruction. arXiv:1809.05068 [cs], September 2018. URL http://arxiv.org/abs/1809.05068.

Tong Wu, Liang Pan, Junzhe Zhang, Tai Wang, Ziwei Liu, and Dahua Lin. Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion, November 2021. URL http://arxiv.org/abs/2111.12702. arXiv:2111.12702 [cs].

Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, and Bharath Hariharan. PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows, September 2019. URL http://arxiv.org/abs/1906.12320. arXiv:1906.12320 [cs].

Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation. arXiv:1712.07262 [cs], April 2018. URL http://arxiv.org/abs/1712.07262.

Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, and Jie Zhou. PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers, August 2021. URL http://arxiv.org/abs/2108.08839. arXiv:2108.08839 [cs].

Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, and Karsten Kreis. LION: Latent Point Diffusion Models for 3D Shape Generation, October 2022. URL http://arxiv.org/abs/2210.06978. arXiv:2210.06978 [cs, stat].

Zerong Zheng, Tao Yu, Qionghai Dai, and Yebin Liu. Deep Implicit Templates for 3D Shape Representation.
In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1429-1439, Nashville, TN, USA, June 2021. IEEE. ISBN 978-1-6654-4509-2. doi: 10.1109/CVPR46437.2021.00148. URL https://ieeexplore.ieee.org/document/9578218/.

Chenliang Zhou, Fangcheng Zhong, Param Hanji, Zhilin Guo, Kyle Fogarty, Alejandro Sztrajman, Hongyun Gao, and Cengiz Oztireli. FrePolad: Frequency-Rectified Point Latent Diffusion for Point Cloud Generation, November 2023. URL http://arxiv.org/abs/2311.12090. arXiv:2311.12090 [cs].

Linqi Zhou, Yilun Du, and Jiajun Wu. 3D Shape Generation and Completion through Point-Voxel Diffusion, August 2021. URL http://arxiv.org/abs/2104.03670. arXiv:2104.03670 [cs].

Xinwen Zhou, Yangzhou Gan, Jing Xiong, Dongxia Zhang, Qunfei Zhao, and Zeyang Xia. A Method for Tooth Model Reconstruction Based on Integration of Multimodal Images. Journal of Healthcare Engineering, 2018:1-8, June 2018. ISSN 2040-2295, 2040-2309. doi: 10.1155/2018/4950131. URL https://www.hindawi.com/journals/jhe/2018/4950131/.
Review 1: Summary: The paper introduces Variational FoldingNet (VF-Net), a novel model that improves point cloud processing by utilizing a probabilistic encoder instead of traditional Chamfer distance minimization, enhancing computational efficiency and simplifying probabilistic modeling. It demonstrates state-of-the-art performance in dental reconstruction and interpolation tasks, achieving lower reconstruction errors. Additionally, the paper contributes to the field by releasing the FDI 16 Tooth Dataset, a comprehensive resource for future research in digital dentistry.
Strengths and Weaknesses:
## Strengths
- **Dataset Contribution:** The release of the FDI 16 Tooth Dataset is a valuable contribution to the field. This dataset provides a rich resource for future research in digital dentistry.
- **Approach:** The introduction of Variational FoldingNet (VF-Net) effectively replaces the traditional Chamfer distance minimization with a probabilistic encoder, increasing computational efficiency and simplifying the probabilistic modeling.
- **Empirical Performance:** The paper demonstrates lower reconstruction errors and state-of-the-art performance in dental sample generation, showcasing the superiority of VF-Net in dental reconstruction and interpolation tasks.
## Weaknesses
- **Reproducibility:** The dataset release is a valuable contribution; however, the use of a private dataset for certain experiments, particularly in the "Representation Learning" section, limits the study's reproducibility. Additionally, while annotations are provided for the proprietary dataset, they appear to be missing for the openly released dataset, which further hinders replication efforts.
- **Discussion of diffusion models:** The authors discuss the non-compression of the space of diffusion models; however, they overlook approaches such as the Latent Diffusion Model [1], which effectively addresses this issue. Including a discussion on these methods would enhance the study.
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022.
Requested Changes: My main requests regard the discussion of the reproducibility of some of the experiments and a more thorough discussion of the applicability of Diffusion Models in the field.
Broader Impact Concerns: The authors already included an Impact Statement.
==================================================
Review 2: Summary: The contributions of this paper are summarised as follows:
1. The paper proposes a new VAE method for representation learning and generation of 3D point clouds, with a focus on a dental point cloud dataset. The proposed method appears to achieve better performance than a few recent methods on the dataset.
2. The paper introduces a new 3D dental point cloud dataset.
Strengths and Weaknesses:
Strengths:
1. The experiments of the paper look relatively comprehensive and several evaluations are covered, including point cloud generation, shape completion, representation learning, etc. The baselines seem to be recent advances in the area.
2. The release of a new dataset is highly welcome, which can be valuable to the research area.
Weaknesses:
1. The focus of the paper is on the newly introduced dental point cloud dataset. It is a bit unclear what the unique characteristics of the new dataset are compared with existing point cloud datasets.
For example, does the new dataset bring new research challenges that existing methods cannot address but that can be addressed by the proposed method? If not, it might be interesting to evaluate the proposed method on other benchmark datasets as well.
2. The paper claims that the proposed method is the first fully probabilistic VAE for point clouds. However, there are existing VAE methods such as those in [1, 2]. Given the existing works, the claim seems bold to me. Moreover, discussions of and comparisons to the following methods are also needed.
[1] Anvekar, Tejas, et al. "VG-VAE: a venatus geometry point-cloud variational auto-encoder." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Li, Shidi, Miaomiao Liu, and Christian Walder. "EditVAE: Unsupervised parts-aware controllable 3D point cloud shape generation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 2. 2022.
3. The clarity of the paper needs to be improved significantly.
- A few details are missing in the paper, including: what are the architectures of the encoder and decoder (I guess the paper reuses those from FoldingNet, but it is important to introduce the details)? What are the prior and variational posterior of $z$?
- A few configurations of the method are not justified, motivated, or supported with an ablation study, including why the projection in Eqs. 3 and 4 is useful, and why the Student-t distribution is used as the likelihood function.
- It is hard to understand the core part of the proposed method. Specifically, Eq 4 is a minimisation. How is this minimisation incorporated in the ELBO of the VAE? Eq 4 is also a reconstruction term of the input $x$. Why is a square error used here?
Requested Changes:
1. More discussions of and/or comparisons to the methods in [1, 2], and adjustment to the claims of the contributions accordingly.
2. Improvement to clarity.
Broader Impact Concerns: A Broader Impact Statement has been presented in the paper.
==================================================
Review 3: Summary: This work proposes a VAE-based generative model for dental point clouds. A single point cloud here corresponds to a single tooth. Its contributions are threefold:
- The authors propose VF-Net, a variational autoencoder for point clouds. To my understanding (which is incomplete - see weaknesses), the unique idea here is to build a VAE using an autoencoding structure inspired by FoldingNet, which contains two types of latent variables: $\mathbf{z}$, which captures shape structure, and $\mathbf{g}$, which capture individual points' locations on the shape.
- The authors also introduce a new dataset, FDI 16, of 7732 tooth meshes.
- This appears to be the first application of generative shape modelling to digital dentistry. The proposed model is state-of-the-art at modelling teeth.
Strengths and Weaknesses:
## Weaknesses
1. The main weakness in my view is that the method description (Section 3) is difficult to understand.
- For one, some of the notation is all over the place. For example, $\mathcal{G}$ is a latent space, $\hat{\mathcal{G}}$ is a finite set of latent points, and $\hat{\mathcal{G}}(\mathbf{x}, \mathbf{z})$ is a specific "projected" point in the set $\hat{\mathcal{G}}$. These are confusing for a couple reasons.
- Referring to specific points as $\hat{\mathcal{G}}(\mathbf{x}, \mathbf{z})$ is misleading in light of the fact that a lowercase $\mathbf{g}$ is otherwise used for generic points in the same set.
- The $\hat{\mathcal{G}}(\mathbf{x}, \mathbf{z})$ notation for a projection may or may not mean the same thing as a different notation introduced beforehand: $\text{proj}_\mathcal{G}(\mathbf{x}): \mathbb{R}^3 \to \mathcal{G}$. These are just a couple of examples of where notational choices could be more consistent and suggestive of the meaning they signify. The fact that most of these notations are introduced in one paragraph and then never used again makes it even harder to infer what's going on.
- How are latent point encodings $\mathbf{g}$ obtained? There seem to be two ways to do this: 1. Through the encoder (Figure 2 seems to suggest that point encodings are obtained through "local feature vectors"). 2. Through the projection $\text{proj}_\mathcal{G}(\mathbf{x})$ (or maybe $\hat{\mathcal{G}}(\mathbf{x}, \mathbf{z})$) given by Equation (4).
- No loss is given for this model. Equation (5) outlines a generic ELBO loss, but that doesn't apply to this model because, for example, $p(\mathbf{x} | \mathbf{z})$ is never defined. Only $p(\mathbf{x} | \mathbf{z}, \mathbf{g})$ is defined. Is $p(\mathbf{x} | \mathbf{z}) := p(\mathbf{x} | \mathbf{z}, \text{proj}_\mathcal{G}(\mathbf{x}))$?
- The structure of the encoder is discussed, but not how it is used to parameterize the approximate posterior $q(\mathbf{z} | \mathbf{x})$.
- Most of the discussion and notation in Section 3 pertains to individual points $\mathbf{x}$, not entire point clouds $\mathbf{X}$. The section never makes clear how the full point cloud structure $\mathbf{X}$ is incorporated into the optimization process.
2. A key advantage touted by the paper is that VF-Net computes the likelihood using the L2 loss rather than the Chamfer distance, which lacks a tractable normalizing constant but is used by competing methods. I question how much of an advantage this really is; it only really affects the coefficient of the reconstruction term in the loss, right? The authors put forward "probabilistic evaluation" as an advantage; I assume this means the ability to evaluate true densities/likelihoods/ELBOs. Does that actually have utility in this setting?
3. As the authors point out, the model generates excessively smooth shapes (akin to VAEs in other domains).
## Strengths
- Outside of Section 3, the paper is clear and well-motivated. I like that the authors explain clearly how shape modelling is useful in computational dentistry: tooth generation, shape completion, diagnostics, etc.
- By multiple metrics, VF-Net generates more realistic teeth than other shape generation algorithms.
- VF-Net also demonstrates some interesting domain-specific capabilities, like point-wise uncertainty quantification and simulated shape completion.
- The representation learning experiments were also cleverly designed, in my opinion. They clearly show that the learned representations are more robust than FoldingNet's.
- It's always nice to see a new dataset released alongside a paper. Hopefully this will open up future research in digital dentistry.
Requested Changes:
## Critical changes
- Please take some time to clean up the notation, especially in Section 3, to make it more understandable. Take any approach you find to be the most clear, but in my opinion, a bulletproof way to remove ambiguity would be to include all of the following:
- All the key variables (e.g., $\mathbf{x}$, $\mathbf{g}$, $\mathbf{z}$, $\mathcal{G}(\mathbf{x},\mathbf{z})$), clearly defined, ideally including the set to which they belong.
- All the functions introduced in the section, including the encoder, decoder, and "projection" function, with clearly defined domains and codomains.
- The form of the approximate posterior $q(\mathbf{z} | \mathbf{x})$ (along with $p(\mathbf{x} | \mathbf{z})$ and $p(\mathbf{z})$, which are already included).
- The loss used to train the model, expressed using the functions and key variables you've defined as per the previous three bullets.

## Minor but necessary changes

- Figure 1 caption: anatomically -> anatomical
- The assertion that Chamfer Distances cannot be used in probabilistic modelling is overly strong - "probabilistic modelling" can be interpreted quite broadly, and not every distribution used in "probabilistic modelling" needs to (1) be Euclidean-valued or (2) have a tractable normalizing constant.
- Figure 2 caption: "Figure 2" appears twice
- Figure 2 mentions that the encoder is "graph-based," but the main text never mentions this anywhere.
- Page 2: planer -> planar
- Equation 2, RHS: this should read CHAMF-DIST$(\mathbf{x}, f((e(\mathbf{x}), \mathcal{G})))$, not CHAMF-DIST$(\mathbf{x} - f((e(\mathbf{x}), \mathcal{G})))$
- The meaning of the notation $\mathbf{x}$ seems ambiguous. In equation (1) it represents an individual point in a point cloud, whereas in equation (2) it appears to represent an entire point cloud. Later it seems to represent an individual point again, though equation (5) is again ambiguous.
- Weird line spacing on page 8 in the "Simulated Shape Completion" paragraph

## Non-critical changes

- The work doesn't contain any comparisons to previous algorithms designed specifically for digital dentistry. It would be really cool if VF-Net were to outperform more bespoke tooth modelling algorithms, such as the work of Qiu et al. (2013), which I think is applicable to the "Simulated Shape Completion" paragraph in Section 5. It would be helpful, then, to see Qiu et al. (2013) added to Table 4.
- "Probabilistic evaluation" is a nonstandard term - either more precise terminology or some other added explanation would be helpful.
- One of the model's limitations is that generated teeth are overly smooth. It would be nice to see some discussion about the downstream effects of this limitation in actual applications; I assume it would make the model less likely to capture the distribution of teeth with highly localized defects, for example.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The submission introduces a new dataset for digital dentistry and a novel method, Variational FoldingNet (VF-Net), as a probabilistic autoencoder for point clouds. Reviewers originally raised concerns about the novelty of the method and the quality of the presentation. Post revision and discussion, all reviewers agreed that the revised paper had addressed their concerns and recommended acceptance. The AE agreed with the recommendation.

==================================================
# Zero-Order One-Point Gradient Estimate In Consensus-Based Distributed Stochastic Optimization

Anonymous authors Paper under double-blind review

## Abstract

In this work, we consider a distributed multi-agent stochastic optimization problem, where each agent holds a local objective function that is smooth and strongly convex and that is subject to a stochastic process. The goal is for all agents to collaborate to find a common solution that optimizes the sum of these local functions. With the practical assumption that agents can only obtain noisy numerical function queries at precisely one point at a time, we consider an extension of a standard consensus-based distributed stochastic gradient (DSG) method to the bandit setting where we do not have access to the gradient, and we introduce a zero-order (ZO) one-point estimate (1P-DSG). We analyze the convergence of this technique using stochastic approximation tools, and we prove that it converges almost surely to the optimum despite the biasedness of our gradient estimate. We then study the convergence rate of our method. With constant step sizes, our method competes with its first-order (FO) counterparts by achieving a linear rate $O(\varrho^k)$ as a function of the number of iterations $k$. To the best of our knowledge, this is the first work that proves this rate in the noisy estimation setting or with one-point estimators. With vanishing step sizes, we establish a rate of $O(1/\sqrt{k})$ after a sufficient number of iterations $k > K_0$. This rate matches the lower bound of centralized techniques utilizing one-point estimators. We then provide a regret bound of $O(\sqrt{k})$ with vanishing step sizes. We further illustrate the usefulness of the proposed technique using numerical experiments.

## 1 Introduction

Gradient-free optimization is an old topic in the research community; however, interest in it has increased recently, especially in machine learning applications, where optimization problems are typically solved with gradient descent algorithms. Successful applications of gradient-free methods in machine learning include competing with an adversary in bandit problems (Flaxman et al., 2004; Agarwal et al., 2010), generating adversarial attacks for deep learning models (Chen et al., 2019; Liu et al., 2019), and reinforcement learning (Vemula et al., 2019). Gradient-free optimization aims to solve optimization problems with only functional ZO information rather than FO gradient information. These techniques are essential in settings where explicit gradient computation may be impractical, expensive, or impossible. Instances of such settings include high data dimensionality, time- or resource-straining function differentiation, or the cost function not having a closed form. ZO information-based methods include direct search methods (Golovin et al., 2019) and one-point methods (Flaxman et al., 2004; Bach & Perchet, 2016; Vemula et al., 2019; Li & Assaad, 2021), where a function $f(\cdot, S): \mathbb{R}^d \to \mathbb{R}$ is evaluated at a single point with some randomization to estimate the gradient as

$$g_{\gamma,z}^{(1)}(x,S)=\frac{d}{\gamma}f(x+\gamma z,S)z,\tag{1}$$

with $x$ the optimization variable, $\gamma > 0$ a small value, and $z$ a random vector following a symmetrical distribution.
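To make (1) concrete, the following is a minimal NumPy sketch of a single query-and-estimate step. The oracle interface `f` and the choice of drawing $z$ uniformly from the unit sphere are our illustrative assumptions; any symmetric distribution with suitable moments fits the scheme.

```python
import numpy as np

def one_point_estimate(f, x, gamma, rng):
    """Classical one-point gradient estimate g^(1) of Eq. (1).

    f     : zeroth-order oracle, f(x) -> float (a single noisy query per call)
    x     : point at which the gradient of E[f] is estimated, shape (d,)
    gamma : small exploration radius, gamma > 0
    rng   : numpy Generator supplying the random perturbation z
    """
    d = x.shape[0]
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)                       # z uniform on the unit sphere (symmetric)
    return (d / gamma) * f(x + gamma * z) * z    # one query, scaled by d/gamma

# Illustrative use on a noisy quadratic f(x) = ||x||^2 + noise:
rng = np.random.default_rng(0)
f = lambda x: x @ x + 0.01 * rng.standard_normal()
print(one_point_estimate(f, np.ones(5), gamma=0.1, rng=rng))
```

Note how the single noisy query is multiplied by $d/\gamma$: as $\gamma$ vanishes, this factor blows up, which is the source of the unbounded variance of one-point estimates discussed below.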
ZO methods also include two- or more-point estimators (Duchi et al., 2015; Nesterov & Spokoiny, 2017; Gorbunov et al., 2018; Bach & Perchet, 2016; Hajinezhad et al., 2019; Kumar Sahu et al., 2018; Agarwal et al., 2010; Chen et al., 2019; Liu et al., 2019; Vemula et al., 2019), where the functional difference at various points is employed for estimation, generally having the respective structures

$$g_{\gamma,z}^{(2)}(x,S)=d\,\frac{f(x+\gamma z,S)-f(x-\gamma z,S)}{2\gamma}z\tag{2}$$

and

$$g_{\gamma}^{(2d)}(x,S)=\sum_{j=1}^{d}\frac{f(x+\gamma e_{j},S)-f(x-\gamma e_{j},S)}{2\gamma}e_{j},\tag{3}$$

where $\{e_j\}_{j=1,\ldots,d}$ is the canonical basis, as well as other methods such as ones using the sign information of gradient estimates (Liu et al., 2019).

Another area of great interest is distributed multi-agent optimization, where agents try to cooperatively solve a problem with information exchange limited to immediate neighbors in the network. Distributed computing and data storing are particularly essential in fields such as vehicular communications and coordination, data processing and distributed control in sensor networks (Shi & Eryilmaz, 2020), big-data analytics (Daneshmand et al., 2015), and federated learning (McMahan et al., 2017). More specifically, one direction of research integrates (sub)gradient-based methods with a consensus/averaging strategy; the local agent incorporates one or multiple consensus steps alongside evaluating the local gradient during optimization. Hence, these algorithms can tackle a fundamental challenge: overcoming differences between agents' local data distributions.

## 1.1 Problem Description

Consider a set of agents $\mathcal{N} = \{1, 2, \ldots, n\}$ connected by a communication network. Each agent $i$ is associated with a local objective function $f_i(\cdot, S): \mathcal{K} \to \mathbb{R}$, where $\mathcal{K} \subset \mathbb{R}^d$ is a convex feasible set. The global goal of the agents is to collaboratively locate the decision variable $x \in \mathcal{K}$ that solves the stochastic optimization problem:

$$\min_{x\in\mathcal{K}}\mathcal{F}(x)=\frac{1}{n}\sum_{i=1}^{n}F_{i}(x)\tag{4}$$

where

$$F_{i}(x)=\mathbb{E}_{S}f_{i}(x,S),$$

with $S \in \mathcal{S}$ denoting an i.i.d. ergodic stochastic process describing uncertainties in the communication system. We assume that at each time step, agent $i$ can only query the function values of $f_i$ at exactly one point, and can only communicate with its neighbors. Further, we assume that the function queries are noisy, $\tilde{f}_i = f_i + \zeta_i$, with $\zeta_i$ some additive noise. Agent $i$ must then employ this query to estimate the gradient in the form $g_i(x, S_i)$.

## 1.2 Function Classes And Gradient Estimate Assumptions

Consider the following five classes of functions:

- The convex class $C_{cvx}$ containing all functions $f: \mathbb{R}^d \to \mathbb{R}$ that are convex.
- The strongly convex class $C_{sc}$ containing all functions $f: \mathbb{R}^d \to \mathbb{R}$ that are continuously differentiable and admit a constant $\lambda_f$ such that
$$\langle\nabla f(x)-\nabla f(y),x-y\rangle\geq\lambda_{f}\|x-y\|^{2},\ \ \forall x,y\in\mathbb{R}^{d}.$$
- The Lipschitz continuous class $C_{lip}$ containing all functions $f: \mathbb{R}^d \to \mathbb{R}$ that admit a constant $L_f$ such that
$$|f(x)-f(y)|\leq L_{f}\|x-y\|,\ \ \forall x,y\in\mathbb{R}^{d}.$$
- The smooth class $C_{smo}$ containing all functions $f: \mathbb{R}^d \to \mathbb{R}$ that are continuously differentiable and admit a constant $G_f$ such that
$$\|\nabla f(x)-\nabla f(y)\|\leq G_{f}\|x-y\|,\ \ \forall x,y\in\mathbb{R}^{d}.$$
- The gradient dominated class $C_{gd}$ containing all functions $f: \mathbb{R}^d \to \mathbb{R}$ that are differentiable, have a global minimizer $x^*$, and admit a constant $\nu_f$ such that
$$2\nu_{f}(f(x)-f(x^{*}))\leq\|\nabla f(x)\|^{2},\ \ \forall x\in\mathbb{R}^{d}.$$
This gradient domination property can be viewed as a nonconvex analogy of strong convexity. In addition, consider the following assumptions on the gradient estimate:

- A gradient estimate $g$ is said to be unbiased w.r.t. the true gradient $\nabla f$ if for all $x \in \mathbb{R}^d$ and independent $S \in \mathcal{S}$, it satisfies the equality $\mathbb{E}_S[g(x, S)|x] = \nabla f(x)$.
- Otherwise, it is said to be biased and satisfies $\mathbb{E}_S[g(x, S)|x] = \nabla f(x) + b(x)$, with $b(x)$ some bias term.
- A gradient estimate $g$ is said to have bounded variance when for all $x \in \mathbb{R}^d$ and independent $S \in \mathcal{S}$, $\mathbb{E}_S[\|g(x, S) - \nabla f(x)\|^2 \,|\, x] \leq \sigma$ for some $\sigma > 0$.
- Otherwise, when this bound is unknown or does not exist, it is said to have unbounded variance.

In general, FO stochastic gradient estimates are unbiased and have bounded variance. ZO estimates, on the other hand, are biased. While multi-point ZO estimates have bounded or even vanishing variance, one-point estimates have unbounded variance (Liu et al., 2020).

## 1.3 Related Work

**FO Consensus-Based Distributed Methods:** The optimal convergence rate for solving problem (4), assuming the objective function $\mathcal{F}$ is strongly convex with Lipschitz continuous gradients, has been established as $O(1/k)$ under a diminishing step size with full gradient information (Pu & Nedić, 2018; Nemirovski et al., 2009). However, when employing a constant step size $\alpha > 0$ that is sufficiently small, the iterates produced by a stochastic gradient method converge exponentially fast (in expectation) to an $O(\alpha)$-neighborhood of the optimal solution (Pu & Nedić, 2018); this is known as the linear rate $O(\varrho^k)$. The literature dedicated to solving problem (4) is vast. In what follows, we highlight some of the contributions. Towfic et al. (2016) and Tu & Sayed (2012) study distributed stochastic gradient methods where they compare the adapt-then-combine (ATC) and combine-then-adapt (CTA) strategies, and prove that the ATC strategy outperforms the CTA one in terms of convergence rate, whether with vanishing or with constant step sizes, and that it is more robust against data distribution drifts and network topology. Jakovetic et al. (2018) consider the CTA strategy with noisy FO gradients over random networks and establish an $O(1/k)$ convergence rate for strongly convex and smooth objectives and vanishing step size. Matei & Baras (2011) and Yuan et al. (2016) also consider random networks, solve problem (4) using a noise-free (sub)gradient instead, and achieve a linear rate to a neighborhood of the optimum with constant step sizes. Nedić & Olshevsky (2016) consider time-varying and directed networks and present a subgradient-push method based on noisy FO gradients that achieves an $O(\ln k / k)$ rate under the same assumptions on the objective function and vanishing step size. Both the works of Shi et al. (2015) and Qu & Li (2018) consider a static version of the objective function and propose methods that employ history information of the gradient. They both obtain a rate of $O(1/k)$ for general convex and smooth objectives with constant step sizes. Under the further strong convexity assumption, the static nature of the objective allows them to establish a linear convergence rate to the exact solution instead of a neighborhood of it. Qu & Li (2018) inspire the vast literature on gradient tracking extended to the stochastic setting (Pu & Nedić, 2018; Pu, 2020; Xin et al., 2019) that utilizes local auxiliary variables to track the average of all agents' gradients; the linear rate, however, is established to a neighborhood of the optimum.

| Estimate | OP | Function class | Step size | Regret bound | Convergence rate |
|---|---|---|---|---|---|
| ZO one-point | Centralized | $C_{cvx} \cap C_{lip}$ | f. | $O(k^{3/4})$ | $O(1/\sqrt[4]{k})$ Flaxman et al. (2004) |
| ZO one-point | Centralized | $C_{sc} \cap C_{lip} \cap C_{smo}$ | v. | $O(\sqrt{k})$ | $O(1/\sqrt{k})$ Bach & Perchet (2016) |
| ZO one-point | Distributed | $C_{sc} \cap C_{smo}$ | v. | $O(\sqrt{k})$ | $O(1/\sqrt{k})$ **1P-DSG** (this work) |
| ZO one-point | Distributed | $C_{sc} \cap C_{smo}$ | f. | - | $O(\varrho^{k})$ **1P-DSG** (this work) |
| ZO two-point | Centralized | $C_{cvx} \cap C_{lip}$ | v. | $O(\sqrt{k})$ | $O(1/\sqrt{k})$ Agarwal et al. (2010) |
| ZO two-point | Centralized | $C_{sc} \cap C_{lip}$ | v. | $O(\log k)$ | $O(\log k / k)$ Agarwal et al. (2010) |
| ZO two-point | Distributed | $C_{lip} \cap C_{smo}$ | v. | - | $O(\frac{1}{\sqrt{k}}\log k)$ Tang et al. (2021) |
| ZO two-point | Distributed | $C_{smo} \cap C_{gd}$ | v. | - | $O(1/k)$ Tang et al. (2021) |
| ZO 2d-point | Distributed | $C_{smo}$ | f. | - | $O(1/k)$ Tang et al. (2021) |
| ZO 2d-point | Distributed | $C_{smo} \cap C_{gd}$ | f. | - | $O(\varrho^{k})$ Tang et al. (2021) |
| ZO 2d-point | Distributed | $C_{sc} \cap C_{smo}$ | v. | - | $O(1/\sqrt{k})$ Kumar Sahu et al. (2018) |
| FO (unbiased/BV) | Distributed | $C_{sc} \cap C_{smo}$ | f. | - | $O(\varrho^{k})$ Matei & Baras (2011); Yuan et al. (2016); Pu & Nedić (2018) |
| FO (unbiased/BV) | Distributed | $C_{sc} \cap C_{smo}$ | v. | - | $O(1/k)$ Jakovetic et al. (2018); Pu & Nedić (2018) |

Table 1: Convergence rates for various algorithms related to our work, classified according to the nature of the gradient estimate, whether the optimization problem (OP) is centralized or distributed, the assumptions on the objective function, whether the step size is fixed (f.) or varying (v.), and the achieved regret bound and convergence rate.

**ZO Centralized Methods:** ZO methods are known to have worse convergence rates than their FO counterparts under the same conditions. For example, under a convex centralized setting, Flaxman et al. (2004) prove a regret bound of $O(k^{3/4})$ (or equivalently a rate of $O(1/\sqrt[4]{k})$) with a one-point estimator for Lipschitz continuous functions. For strongly convex and smooth objective functions, Hazan & Levy (2014) and Ito (2020) improve upon this result by proving a regret of $O(\sqrt{k}\log k)$, and Bach & Perchet (2016) that of $O(\sqrt{k})$. In the work of Agarwal et al. (2010), when the number of points is two, they prove regret bounds of $\tilde{O}(\sqrt{k})$ with high probability and of $O(\log k)$ in expectation for strongly convex loss functions. With $d+1$ points, they prove regret bounds of $O(\sqrt{k})$ and, under strong convexity, of $O(\log k)$. The performance improves as the number of points in the estimate grows because the variance of multi-point estimates can be bounded, unlike that of one-point estimates (Liu et al., 2020). However, when the function queries are subject to noise, multi-point estimates start behaving like one-point ones. In the noisy function query (centralized) scenario, it has been proven that gradient-free methods cannot achieve a better convergence rate than $\Omega(1/\sqrt{k})$, which is the lower bound derived by Duchi et al. (2015), Jamieson et al. (2012), and Shamir (2013) for strongly convex and smooth objective functions. In the work of Bubeck et al.
(2021), a kernelized loss estimator is proposed where a generalization of Bernoulli convolutions is adopted, and an annealing schedule for exponential weights is used to control the estimator's variance in a focus region for dimensions higher than 1. Their method achieves a regret bound of $O(\sqrt{k})$.

**ZO Consensus-Based Distributed Methods:** In distributed settings, Tang et al. (2021) develop two algorithms for a noise-free nonconvex multi-agent optimization problem aiming at consensus. One of them is based on gradient tracking with a 2d-point estimator of the gradient with vanishing variance; with fixed step sizes, it achieves a rate of $O(1/k)$ under smoothness assumptions and a linear rate under an extra $\nu$-gradient-dominated objective assumption. The other is based on a 2-point estimator following an ATC strategy instead of gradient tracking and achieves a rate of $O(\frac{1}{\sqrt{k}}\log k)$ under Lipschitz continuity and smoothness conditions and $O(1/k)$ under an extra gradient-dominated function structure. Kumar Sahu et al. (2018) propose a standard CTA method where they consider a 2d-point estimate with noisy function queries over random networks. Under smoothness and strong convexity, they establish an $O(1/\sqrt{k})$ convergence rate with vanishing step sizes. We highlight some of the mentioned convergence rates from the literature in Table 1.

## 1.4 Contributions

While consensus-based distributed methods have been extended to the ZO case (Tang et al., 2021; Kumar Sahu et al., 2018), these approaches rely on multi-point gradient estimators. The multi-point estimation technique assumes the ability to observe multiple instances of the objective function under identical system conditions, i.e., many function queries are made for the same realization of $S$ in (2) and (3). However, this assumption does not hold in applications such as mobile edge computing (Mao et al., 2017; Chen et al., 2021; Zhou et al., 2022), where computational tasks from mobile users are offloaded to servers within the cellular network. Thus, queries requested from the servers by the users are subject to the wireless environment and are corrupted by noise that is not necessarily additive. Other applications include sensor selection for accurate parameter estimation (Liu et al., 2018), where the observation of each sensor is continuously changing. In such scenarios, one-point estimates offer a vital alternative for solving online optimization/learning problems. Yet, one-point estimators are not generally used because of their slow convergence rate, mainly due to their unbounded variance. To avoid this unbounded variance, in this work we do not use the estimate given in (1); instead, we extend the one-point approach of Li & Assaad (2021), where each agent's action is a scalar and different agents hold different variables, to our consensus problem with vector variables. The difference is that in our gradient estimate, we do not divide by $\gamma$. This brings additional challenges in proving that our algorithm converges and that consensus can be achieved by all agents. Even with bounded variance, there remains a difficulty in achieving good (linear) convergence rates with two-point estimates due to the constant upper bound on the variance (Tang et al., 2021). Here, despite this constant bound, we were able to go beyond two-point estimates and achieve a linear rate. Moreover, while it requires $2d$ points *with* the gradient tracking method to achieve a linear rate in Tang et al.
(2021)'s work, which is twice the dimension of the gradient itself, here we only need a single scalar query. This is much more computationally efficient. We further replace the gradient tracking method with a standard ATC strategy, which is more communication efficient as it requires sharing only one vector instead of two. We summarize our contribution in the following points:

- We consider smooth and strongly convex local objectives, and we consider the distributed stochastic gradient method in the case where we do not have access to the gradient in the noisy setting. Under the realistic assumption that the agent only has access to a single noisy function value at each time, without necessarily knowing the form of this function, we propose a one-point estimator in a stochastic framework.
- Naturally, one-point estimators are biased with respect to the true gradient and suffer from high variance (Liu et al., 2020). Despite this, in this work, we analyze and indeed prove the algorithm's convergence *almost surely* with a biased estimate. This convergence is stronger than the in-expectation convergence analysis usually established for ZO optimization. We also consider that a stochastic process influences the objective function from one iteration to the other, which provides a practical model for real-world scenarios that involve various sources of stochasticity, not necessarily additive noise.
- We then study the convergence rate, and we demonstrate that with fixed step sizes, the algorithm achieves a linear convergence rate $O(\varrho^k)$ to a neighborhood of the optimal solution, marking the first instance where this rate is attained in ZO optimization with one-point/two-point estimates and in a noisy query setting, to the best of our knowledge. This linear rate competes with FO methods and even centralized algorithms in terms of convergence speed (Pu & Nedić, 2018).
- When the step sizes are vanishing, we prove that a rate of $O(1/\sqrt{k})$ to an exact solution is attainable after a sufficient number of iterations $k > K_0$. This rate matches the lower bounds achieved by its centralized counterparts in the same derivative-free setting (Duchi et al., 2015; Jamieson et al., 2012; Shamir, 2013).
- We then show that a regret bound of $O(\sqrt{k})$ is achieved for this algorithm.
- Finally, we support our theoretical claims by providing numerical evidence and comparing the algorithm's performance to its FO and centralized counterparts.

The rest of this paper is organized as follows. In subsection 1.5, we present the mathematical notation followed in this paper. In subsection 1.6, we present the main assumptions of our optimization problem. We then describe our gradient estimate, followed by the proposed algorithm, in subsection 2.1. We prove the almost sure convergence of our algorithm in subsection 3.1 and study its rate with varying step sizes in subsection 3.2. In subsection 3.3, we find its regret bound, and in subsection 3.4, we consider the case of fixed step sizes and study the convergence of our algorithm and its rate. Finally, we provide numerical evidence in section 4 and conclude the paper in section 5.

## 1.5 Notation

In all that follows, vectors are column-shaped unless defined otherwise, and $\mathbf{1}$ denotes the vector with all entries equal to 1. For two vectors $a, b$ of the same dimension, $\langle a, b \rangle$ denotes the inner product.
For two matrices $A, B \in \mathbb{R}^{n \times d}$, we define

$$\langle A,B\rangle=\sum_{i=1}^{n}\langle A_{i},B_{i}\rangle,$$

where $A_i$ (respectively, $B_i$) represents the $i$-th row of $A$ (respectively, $B$). This matrix product is the Hilbert-Schmidt inner product, which can be written as $\langle A, B\rangle = \operatorname{tr}(AB^T)$. $\|\cdot\|$ denotes the 2-norm for vectors and the Frobenius norm for matrices. We next let $\Pi_{\mathcal{K}}(\cdot)$ denote the Euclidean projection of a vector on the set $\mathcal{K}$. We know that this projection on a closed convex set $\mathcal{K}$ is nonexpansive (Kinderlehrer & Stampacchia (2000), Corollary 2.4), i.e.,

$$\|\Pi_{\mathcal{K}}(x)-\Pi_{\mathcal{K}}(x^{\prime})\|\leq\|x-x^{\prime}\|,\ \ \forall x,x^{\prime}\in\mathbb{R}^{d}.\tag{5}$$

We assume that each agent $i$ maintains a local copy $x_i \in \mathcal{K}$ of the decision variable, and each agent's local function is subject to the stochastic variable $S_i \in \mathbb{R}^m$. At iteration $k$, the respective values are denoted as $x_{i,k}$ and $S_{i,k}$. Bold notations denote the concatenated versions of the variables, i.e., $\mathbf{x} := [x_1, x_2, \ldots, x_n]^T$ is of dimension $n \times d$ and $\mathbf{S} := [S_1, S_2, \ldots, S_n]^T$ of dimension $n \times m$. We then define the mean of the decision variable as $\bar{x} := \frac{1}{n}\mathbf{1}^T\mathbf{x}$, whose dimension is $1 \times d$. We define the gradient of $F_i$ at the local variable $\nabla F_i(x_i) \in \mathbb{R}^d$ and its Hessian matrix $\nabla^2 F_i(x_i) \in \mathbb{R}^{d \times d}$, and we let $\nabla F(\mathbf{x}) := [\nabla F_1(x_1), \nabla F_2(x_2), \ldots, \nabla F_n(x_n)]^T \in \mathbb{R}^{n \times d}$ and

$$\mathbf{g}:=g(\mathbf{x},\mathbf{S}):=[g_{1}(x_{1},S_{1}),g_{2}(x_{2},S_{2}),\ldots,g_{n}(x_{n},S_{n})]^{T}\in\mathbb{R}^{n\times d}.$$

We define its mean $\bar{g} := \frac{1}{n}\mathbf{1}^T\mathbf{g} \in \mathbb{R}^{1\times d}$, and we denote each agent's gradient estimate at time $k$ by $g_{i,k} = g_i(x_{i,k}, S_{i,k})$.

## 1.6 Basic Assumptions

In this subsection, we introduce the fundamental assumptions that ensure the performance of the 1P-DSG algorithm.

Assumption 1.1. *(on the graph) The topology of the network is represented by the graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$, where the edges in $\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$ represent communication links. The graph $\mathcal{G}$ is undirected, i.e., $(i,j) \in \mathcal{E}$ iff $(j,i) \in \mathcal{E}$, and connected (there exists a path of links between any two agents). $W = [w_{ij}] \in \mathbb{R}^{n \times n}$ denotes the agents' coupling matrix, where agents $i$ and $j$ are connected iff $w_{ij} = w_{ji} > 0$ ($w_{ij} = w_{ji} = 0$ otherwise). $W$ is a nonnegative matrix and doubly stochastic, i.e., $W\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T W = \mathbf{1}^T$. All diagonal elements $w_{ii}$ are strictly positive.*

Assumption 1.2. *(on the objective function) We assume the existence and the continuity of both $\nabla F_i(x)$ and $\nabla^2 F_i(x)$. Let $x^* \in \mathcal{K}$ denote the solution of problem (4) such that $\mathcal{F}(x^*) = \min_{x\in\mathcal{K}} \mathcal{F}(x)$. We next assume that $\mathcal{F}(x)$ is $\lambda$-strongly convex, where*

$${\mathcal{F}}(y)\geq{\mathcal{F}}(x)+\langle\nabla{\mathcal{F}}(x),y-x\rangle+{\frac{\lambda}{2}}\|y-x\|^{2},\;\forall x,y\in{\mathcal{K}}.$$

*We further assume the boundedness of the local Hessians: there exists a constant $c_1 \in \mathbb{R}^+$ such that $\|\nabla^2 F_i(x)\|_2 \leq c_1$, $\forall x \in \mathcal{K}$, $\forall i \in \mathcal{N}$, where the spectral norm suffices here (keeping in mind that for a matrix $A$, $\|A\|_2 \leq \|A\|_F$).*

Assumption 1.3. *(on the additive noise) $\zeta_{i,k}$ is a zero-mean uncorrelated noise with bounded variance, where $\mathbb{E}(\zeta_{i,k}) = 0$ and $\mathbb{E}(\zeta_{i,k}^2) = c_2 < \infty$, $\forall i \in \mathcal{N}$.*

Lemma 1.4. (Qu & Li, 2018) *Let $\rho_w$ be the spectral norm of $W - \frac{1}{n}\mathbf{1}\mathbf{1}^T$. When Assumption 1.1 is satisfied, we have the following inequality:*

$$\|W\omega-{\bf1}\bar{\omega}\|\leq\rho_{w}\|\omega-{\bf1}\bar{\omega}\|,\ \forall\omega\in\mathbb{R}^{n\times d}\ \mathrm{with}\ \bar{\omega}=\frac{1}{n}{\bf1}^{T}\omega,$$

*and $\rho_w < 1$.*

Lemma 1.5.
(Pu & Nedić, 2018) *Define $h(\mathbf{x}) := \frac{1}{n}\mathbf{1}^T\nabla F(\mathbf{x}) \in \mathbb{R}^{1\times d}$. Due to the boundedness of the second derivative in Assumption 1.2, there exists a scalar $L > 0$ such that the objective function is $L$-smooth, and*

$$\|\nabla{\mathcal{F}}({\bar{x}})-h(\mathbf{x})\|\leq{\frac{L}{\sqrt{n}}}\|\mathbf{x}-\mathbf{1}{\bar{x}}\|.$$

Proof: See Appendix A.

## 2 Distributed Stochastic Gradient Methods

We propose to employ a zero-order one-point estimate of the gradient subject to the stochastic process $S$ and an additive noise $\zeta$, while a stochastic perturbation and a step size are introduced, and we assume that each agent can perform this estimation at each iteration. To elaborate, let $g_{i,k}$ denote the aforementioned gradient estimate for agent $i$ at time $k$; we define it as

$$g_{i,k}=\Phi_{i,k}\tilde{f}_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})=\Phi_{i,k}(f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})+\zeta_{i,k}),\tag{6}$$

where $\gamma_k > 0$ is a vanishing step size and $\Phi_{i,k} \in \mathbb{R}^d$ is a perturbation randomly and independently generated by each agent $i$. In fact, $g_{i,k}$ is a biased estimate of the gradient $\nabla F_i(x_{i,k})$, and the algorithm can converge under the condition that all parameters are properly chosen. For clarification on the form of this bias and more on the properties of this estimate, refer to Appendix B.

## 2.1 The 1P-DSG Algorithm

We consider a zero-order distributed stochastic gradient algorithm aiming for consensus with a one-point estimate. We denote it as 1P-DSG, employing the gradient estimate $g_{i,k}$ in (6). Every agent $i$ initializes its variables with an arbitrarily valued vector $x_{i,0} \in \mathcal{K}$ and computes $g_{i,0}$ at that variable. Then, at each time $k \in \mathbb{N}$, agent $i$ updates its variables independently according to the following steps:

$$z_{i,k+1}=\sum_{j=1}^{n}w_{ij}(x_{j,k}-\alpha_{k}g_{j,k}),\quad x_{i,k+1}=\Pi_{\mathcal{K}}(z_{i,k+1}),\quad\text{perform the action }x_{i,k+1}+\gamma_{k+1}\Phi_{i,k+1},\tag{7}$$

where $\alpha_k > 0$ is a step size. Algorithm (7) can then be written in the following compact matrix form for clarity of analysis:

$$\mathbf{z}_{k+1}=W(\mathbf{x}_{k}-\alpha_{k}\mathbf{g}_{k}),\quad\mathbf{x}_{k+1}=[x_{1,k+1},x_{2,k+1},\ldots,x_{n,k+1}]^{T},\quad\text{perform the action }\mathbf{x}_{k+1}+\gamma_{k+1}\mathbf{\Phi}_{k+1},\tag{8}$$

where $\mathbf{\Phi}_k := [\Phi_{1,k}, \Phi_{2,k}, \ldots, \Phi_{n,k}]^T \in \mathbb{R}^{n \times d}$. As is evident from the update of the variables, the exchange between agents is limited to neighboring nodes, and it encompasses the value $\mathbf{x}_k - \alpha_k\mathbf{g}_k$, i.e., the local gradient descent step. We remark on the effect of the gradient estimate's variance on the convergence by carefully examining the steps in (8). Naturally, when the estimates have a large variance, the estimated gradients can vary widely from one sample to another. This means that the norm of $\mathbf{x}_{k+1}$, which is directly affected by this variance, may also grow considerably. It may then take longer to converge to the optimal solution because the algorithm cannot reliably discern the direction of steepest descent. In the worst case, a huge variance causes instability, as the optimizer may oscillate around the optimum or even diverge if the variance is too high, making it difficult to converge to a satisfactory solution. In this work, we use the fact that the local functions and the noise variance are bounded to prove that the variance of the gradient estimate presented in (6) is indeed bounded.
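As a concrete illustration of the updates in (7), the following is a minimal NumPy sketch of the 1P-DSG loop. It is our sketch under illustrative assumptions, not the authors' reference implementation: we build $W$ with Metropolis-Hastings weights (one standard way to satisfy Assumption 1.1), take $\mathcal{K}$ to be a box so the projection is a clip, and use the symmetric Bernoulli perturbation and the polynomial step sizes that appear later in Section 4; `f_list` holds each agent's noisy one-point query function $\tilde{f}_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_weights(adj):
    """Doubly stochastic coupling matrix W (Assumption 1.1) from an
    undirected adjacency matrix, using Metropolis-Hastings weights."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # strictly positive diagonal
    return W

def run_1p_dsg(W, f_list, d, iters, alpha0=0.05, gamma0=0.8, lo=-10.0, hi=10.0):
    """Sketch of the 1P-DSG updates (7): one noisy scalar query per agent
    per iteration, a consensus step with W, then projection onto K = [lo, hi]^d."""
    n = W.shape[0]
    x = np.zeros((n, d))                          # local copies x_i of the decision variable
    for k in range(iters):
        alpha = alpha0 * (k + 1) ** -0.75         # vanishing step sizes, as in Section 4
        gamma = gamma0 * (k + 1) ** -0.25
        # symmetric Bernoulli perturbations: entries of Phi_i in {-1/sqrt(d), 1/sqrt(d)}
        Phi = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)
        # one-point estimate (6): g_i = Phi_i * f_i(x_i + gamma * Phi_i)  (no division by gamma)
        g = np.array([Phi[i] * f_list[i](x[i] + gamma * Phi[i]) for i in range(n)])
        x = np.clip(W @ (x - alpha * g), lo, hi)  # z_{k+1} = W(x_k - alpha_k g_k), then Pi_K
    return x
```

Note that the only quantity exchanged between neighbors is $x_{j,k} - \alpha_k g_{j,k}$ (implemented above by the single matrix product with $W$), matching the communication pattern described in the text.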
This boundedness of the gradient estimate's variance, alongside the properties of the matrix $W$ in Assumption 1.1, allows us to find an upper bound on the variation of $\mathbf{x}_{k+1}$ with respect to its mean and on the variation of this mean with respect to the optimizer at every iteration, and to analyze the convergence of both. We then consider the following assumptions for the subsequent convergence analysis. We must note that the first assumption is only taken into account when we study the algorithm's behavior with varying step sizes; otherwise, it is dropped.

Assumption 2.1. *(on the step sizes) Both $\alpha_k$ and $\gamma_k$ vanish to 0 as $k \to \infty$ and satisfy the following conditions:*

$$\sum_{k=1}^{\infty}\alpha_{k}\gamma_{k}=\infty,\;\sum_{k=1}^{\infty}\alpha_{k}^{2}<\infty,\;\text{and}\;\sum_{k=1}^{\infty}\alpha_{k}\gamma_{k}^{2}<\infty.$$

Assumption 2.2. *(on the random perturbation) Let $\Phi_{i,k}=(\phi_{i,k}^{1},\phi_{i,k}^{2},\ldots,\phi_{i,k}^{d})^{T}$. Each agent $i$ chooses its $\Phi_{i,k}$ vector independently of the other agents $j \neq i$. In addition, the elements of $\Phi_{i,k}$ are assumed i.i.d. with $\mathbb{E}(\phi_{i,k}^{d_1}\phi_{i,k}^{d_2}) = 0$ for $d_1 \neq d_2$, and there exists $c_3 > 0$ such that $\mathbb{E}(\phi_{i,k}^{d_j})^2 = c_3$, $\forall d_j$, $\forall i$, almost surely. We further assume that there exists a constant $c_4 > 0$ where $\|\Phi_{i,k}\| \leq c_4$, $\forall i$, almost surely.*

Example 2.3. *One example is to take $\alpha_k = \alpha_0(k+1)^{-\upsilon_1}$ and $\gamma_k = \gamma_0(k+1)^{-\upsilon_2}$ with the constants $\alpha_0, \gamma_0, \upsilon_1, \upsilon_2 \in \mathbb{R}^+$. As $\sum_{k=1}^{\infty}\alpha_k\gamma_k$ diverges for $\upsilon_1 + \upsilon_2 \leq 1$, $\sum_{k=1}^{\infty}\alpha_k^2$ converges for $\upsilon_1 > 0.5$, and $\sum_{k=1}^{\infty}\alpha_k\gamma_k^2$ converges for $\upsilon_1 + 2\upsilon_2 > 1$, we can find pairs of $\upsilon_1$ and $\upsilon_2$ so that Assumption 2.1 is satisfied. To achieve the conditions in Assumption 2.2, we can choose the probability distribution of $\phi_{i,k}^{d_j}$ to be the symmetrical Bernoulli distribution where $\phi_{i,k}^{d_j} \in \{-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}}\}$ with $\mathbb{P}(\phi_{i,k}^{d_j} = -\frac{1}{\sqrt{d}}) = \mathbb{P}(\phi_{i,k}^{d_j} = \frac{1}{\sqrt{d}}) = 0.5$, $\forall d_j$, $\forall i$.*

Assumption 2.4. *(on the local functions) $\mathcal{K}$ is a compact convex set, and all local functions $x \mapsto f_i(x, S)$ are bounded on the $c_4\gamma_0$-neighborhood of $\mathcal{K}$, i.e.,*

$$|f_{i}(x,S)|<\infty,\ \forall x\in N_{c_{4}\gamma_{0}}(\mathcal{K}),\ \forall S\in\mathbb{R}^{m},\ \forall i\in\mathcal{N},$$

*where $N_{c_4\gamma_0}(\mathcal{K}) = \{x \in \mathbb{R}^d \,|\, \inf_{a\in\mathcal{K}} \|x - a\| < c_4\gamma_0\}$ is the $c_4\gamma_0$-neighborhood of $\mathcal{K}$.*

## 3 Analysis Of The 1P-DSG Algorithm

In this section, we analyze the 1P-DSG algorithm presented in (7) and (8).

## 3.1 Convergence Results

The goal of this part is to analyze the asymptotic behavior of Algorithm (8). We start the analysis by defining $\mathcal{H}_k$ as the history sequence $\{\mathbf{x}_0, \mathbf{y}_0, \mathbf{S}_0, \ldots, \mathbf{x}_{k-1}, \mathbf{y}_{k-1}, \mathbf{S}_{k-1}, \mathbf{x}_k\}$ and denoting by $\mathbb{E}[\,\cdot\,|\mathcal{H}_k]$ the conditional expectation given $\mathcal{H}_k$. We define $\tilde{g}_k$ to be the expected value of $\bar{g}_k$ with respect to all the stochastic terms $S$, $\Phi$, $\zeta$ given $\mathcal{H}_k$, i.e.,

$$\tilde{g}_{k}=\mathbb{E}_{S,\Phi,\zeta}[\bar{g}_{k}|{\mathcal{H}}_{k}].$$

In what follows, we use $\tilde{g}_k = \mathbb{E}[\bar{g}_k|\mathcal{H}_k]$ as shorthand notation. We define the error $e_k$ to be the difference between the value of a single realization of $\bar{g}_k$ and its conditional expectation $\tilde{g}_k$, i.e., $e_k = \bar{g}_k - \tilde{g}_k$, where $e_k$ can be seen as a stochastic noise. The following lemma, describing the vanishing of the stochastic noise, is essential for our main result.

Lemma 3.1. *If Assumptions 1.2, 1.3, 2.1, 2.2, and 2.4 all hold, then for any constant $\nu > 0$, we have*

$$\mathbb{P}(\operatorname*{lim}_{K\to\infty}\operatorname*{sup}_{K^{\prime}\geq K}\|\sum_{k=K}^{K^{\prime}}\alpha_{k}e_{k}\|\geq\nu)=0,\;\forall\nu>0.$$

Proof: See Appendix C.
For any integer $k \geq 0$, we define the divergence, or the error between the average action taken by the agents $\bar{x}_k$ and the optimal solution $x^*$ within $\mathcal{K}$, as

$$d_{k}=\|{\bar{x}}_{k}-x^{*}\|^{2}.\tag{9}$$

The following theorem describes the main convergence result.

Theorem 3.2. *If Assumptions 1.1-1.3, 2.1-2.2, and 2.4 all hold, then as $k \to \infty$, $d_k \to 0$, $\bar{x}_k \to x^*$, and $x_{i,k} \to \bar{x}_k$, for all $i \in \mathcal{N}$, almost surely by applying the Algorithm.*

Proof: See Appendix D.

## 3.2 Convergence Rate With Vanishing Step Sizes

This part deals with how fast the expected divergence vanishes, in order to find the proposed algorithm's expected convergence rate. To do so, we define the expected divergence as

$$D_{k}=\mathbb{E}[\|{\bar{x}}_{k}-x^{*}\|^{2}].$$

The goal is to bound this divergence from above by sequences whose convergence rate is known. The analysis is highly associated with the parameters $\alpha_k$ and $\gamma_k$, which play a significant role in determining this upper bound. Hence, in what follows, the analysis starts with a general form of $\alpha_k$ and $\gamma_k$; then a particular case is considered.

## 3.2.1 General Form Of $\alpha_k$ And $\gamma_k$

We first study the rate of convergence of the consensus error by introducing the following lemma.

Lemma 3.3. *Let Assumptions 1.1-1.3, 2.1-2.2, and 2.4 hold. Define*

$$K_{1}=\operatorname*{arg\,min}_{k}\left\{k:\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}>\frac{1+\rho_{w}^{2}}{2}\right\}.\tag{10}$$

*Then, for $k \geq K_1$, there exist $0 < \vartheta_1, \vartheta_2 < \infty$ such that*

$$\|{\bf x}_{k}-{\bf1}\bar{x}_{k}\|^{2}<\vartheta_{1}^{2}\alpha_{k}^{2}\ \ \text{and}\ \ \|{\bf z}_{k+1}-{\bf1}\bar{x}_{k}\|^{2}\leq\vartheta_{2}^{2}\alpha_{k}^{2}.\tag{11}$$

Proof: Refer to Appendix D.3.

Our main result regarding the convergence rate is summarized in the following theorem.

Theorem 3.4. *Let Assumptions 1.1-1.3, 2.1-2.2, and 2.4 hold. We define the constants*

$$A=\frac{\lambda c_{3}}{2},\quad B=\frac{4c_{3}L^{2}\vartheta_{1}^{2}}{\lambda n},\quad C=\frac{c_{1}^{2}c_{4}^{6}}{c_{3}\lambda},\quad E=\frac{\vartheta_{2}}{n},$$

$$K_{2}=\operatorname*{arg\,min}_{k}\{k:A\alpha_{k}\gamma_{k}<1\},\quad\text{and}\quad K_{0}=\operatorname*{max}\{K_{1},K_{2}\}.$$

*We finally define the following parameters:*

$$\kappa_{k}=\frac{1-(\gamma_{k+1}/\gamma_{k})^{2}}{\alpha_{k}\gamma_{k}},\quad\sigma_{1}=\max_{k\geq K_{0}}\kappa_{k},\quad\sigma_{2}=\max_{k\geq K_{0}}\frac{\alpha_{k}^{2}}{\gamma_{k}^{2}},\quad\sigma_{3}=\max_{k\geq K_{0}}\frac{\alpha_{k}}{\gamma_{k}^{3}},$$

$$\tau_{k}=\frac{1-\frac{\alpha_{k+1}\gamma_{k+1}^{-1}}{\alpha_{k}\gamma_{k}^{-1}}}{\alpha_{k}\gamma_{k}},\quad\sigma_{4}=\max_{k\geq K_{0}}\tau_{k},\quad\sigma_{5}=\max_{k\geq K_{0}}\alpha_{k}\gamma_{k},\quad\sigma_{6}=\max_{k\geq K_{0}}\frac{\gamma_{k}^{3}}{\alpha_{k}}.$$

*If $\kappa_k < A$ for any $k \geq K_0$, then*

$$D_{k}\leq\varsigma_{1}\gamma_{k}^{2},\ \forall k\geq K_{0},\tag{12}$$

*with*

$$\varsigma_{1}\geq\max\left\{\frac{D_{K_{0}}}{\gamma_{K_{0}}^{2}},\frac{B\sigma_{2}+E\sigma_{3}+C}{A-\sigma_{1}}\right\}.\tag{13}$$

*If $\tau_k < A$ for any $k \geq K_0$, then*

$$D_{k}\leq\varsigma_{2}\frac{\alpha_{k}}{\gamma_{k}},\ \forall k\geq K_{0},\tag{14}$$

*with*

$$\varsigma_{2}\geq\max\left\{\frac{D_{K_{0}}\gamma_{K_{0}}}{\alpha_{K_{0}}},\frac{B\sigma_{5}+C\sigma_{6}+E}{A-\sigma_{4}}\right\}.\tag{15}$$

Proof: See Appendix E.1.

## 3.2.2 A Special Case Of $\alpha_k$ And $\gamma_k$

We now consider the special case mentioned in Example 2.3:

$$\alpha_{k}=\alpha_{0}(k+1)^{-\upsilon_{1}}\ \ \text{and}\ \ \gamma_{k}=\gamma_{0}(k+1)^{-\upsilon_{2}},\tag{16}$$

where $0.5 < \upsilon_1 < 1$, $0 < \upsilon_2 \leq 1 - \upsilon_1$, and $\upsilon_1 + 2\upsilon_2 > 1$.

Theorem 3.5. *Let $\alpha_k$ and $\gamma_k$ have the forms given in (16) and consider the same assumptions as Theorem 3.4. If $\alpha_0\gamma_0 \geq \max\{2\upsilon_2, \upsilon_1 - \upsilon_2\}/A$, then there exists $\Upsilon < \infty$ such that*

$$D_{k}\leq\Upsilon(k+1)^{-\operatorname*{min}\{2\upsilon_{2},\upsilon_{1}-\upsilon_{2}\}},\ \forall k\geq K_{0}.$$

Proof: See Appendix E.4.
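As a quick numerical sanity check of the exponent $\min\{2\upsilon_2, \upsilon_1-\upsilon_2\}$ in Theorem 3.5 (this snippet is our own illustration, not part of the paper's experiments), one can grid-search the admissible $(\upsilon_1, \upsilon_2)$ pairs and confirm the optimal choice discussed next.

```python
import numpy as np

# Search the admissible exponents of (16) -- 0.5 < v1 < 1, 0 < v2 <= 1 - v1,
# v1 + 2*v2 > 1 -- for the pair maximizing the decay exponent
# min{2*v2, v1 - v2} of D_k in Theorem 3.5.
candidates = [
    (min(2 * v2, v1 - v2), round(v1, 2), round(v2, 2))
    for v1 in np.arange(0.51, 1.0, 0.01)
    for v2 in np.arange(0.01, 0.50, 0.01)
    if v2 <= 1 - v1 + 1e-9 and v1 + 2 * v2 > 1
]
print(max(candidates))  # ~ (0.5, 0.75, 0.25), i.e., D_k = O(k^{-1/2})
```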
The parameters clearly affect the upper bound on the convergence rate, i.e., the rate of expected divergence decay, in Theorem 3.5. As it is evident that

$$\operatorname*{min}\{2\upsilon_{2},\upsilon_{1}-\upsilon_{2}\}\leq0.5$$

under the above constraints, the best choice is when equality holds, attained for $\upsilon_1 = 0.75$ and $\upsilon_2 = 0.25$. With the sufficient condition on the parameters in Theorem 3.5, we can finally state that our algorithm converges with a rate of $O(1/\sqrt{k})$ after a sufficient number of iterations $k > K_0$ when the step sizes are vanishing.

## 3.3 Regret Bound

To further examine the performance of our algorithm, we present the following theorem on the achieved regret bound.

Theorem 3.6. *Let the assumptions of Theorem 3.4 hold. When $\alpha_k$ and $\gamma_k$ have the forms of (16) with $\upsilon_1 = 0.75$ and $\upsilon_2 = 0.25$, the regret bound is given by*

$$\mathbb{E}\biggl[{\frac{1}{n}}\sum_{k=1}^{K}\sum_{i=1}^{n}F_{i}(x_{i,k})-F_{i}(x^{*})\biggr]\leq O({\sqrt{K}}).$$

Proof: See Appendix F.

## 3.4 Convergence Rate With Constant Step Sizes

In this subsection, we fix the step sizes to $\alpha_k = \alpha > 0$ and $\gamma_k = \gamma > 0$, $\forall k \geq 0$, and we assume them to be two arbitrarily small values. We also define the following terms:

$$A=\frac{\lambda c_{3}}{2},\quad B=\frac{4c_{3}L^{2}}{\lambda n},\quad C=\frac{c_{1}^{2}c_{4}^{6}}{c_{3}\lambda},\quad\text{and}\quad R=\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|^{2}.$$

We let $M$ denote the upper bound on $\|\bar{g}_k\|^2$. We then let

$$G_{1}=\frac{2nM(1+\rho_{w}^{2})}{(1-\rho_{w}^{2})^{2}}\quad\text{and}\quad G_{2}=nM\left[\left(\frac{1+\rho_{w}^{2}}{1-\rho_{w}^{2}}\right)^{2}+\frac{1+\rho_{w}^{2}}{1-\rho_{w}^{2}}\right].$$

We finally define $\varrho_1 = 1 - A\alpha\gamma$ and $\varrho_2 = \frac{1+\rho_w^2}{2}$.

Theorem 3.7. *Assume $\alpha\gamma < \frac{1}{A}$ and $\alpha < \gamma$. Let Assumptions 1.1-1.3, 2.2, and 2.4 hold; then*

$$\|{\bf x}_{k+1}-{\bf1}\bar{x}_{k+1}\|^{2}\leq\varrho_{2}^{k+1}R+\alpha^{2}G_{1}\;\;\;\text{and}\;\;\|{\bf z}_{k+1}-{\bf1}\bar{x}_{k}\|^{2}\leq\varrho_{2}^{k+1}R+\alpha^{2}G_{2}.\tag{17}$$

*Meaning, $\|\mathbf{x}_{k+1}-\mathbf{1}\bar{x}_{k+1}\|^2$ converges with the linear rate of $O(\varrho_2^k)$ for an arbitrarily small $\alpha$ almost surely. Further,*

- *When $\varrho_1 \leq \varrho_2$,*

$$D_{k+1}\leq\varrho_{1}^{k+1}D_{0}+\varrho_{2}^{k+1}\frac{2R\Big(B\alpha\gamma+\frac{\varrho_{2}}{n}\Big)}{2A\alpha\gamma+\rho_{w}^{2}-1}+\alpha^{2}\frac{BG_{1}}{A}+\frac{\alpha}{\gamma}\frac{G_{2}}{nA}+\gamma^{2}\frac{C}{A}.\tag{18}$$

*Then, for arbitrarily small step sizes, $D_k$ converges with the linear rate of $O(\varrho_2^k)$.*

- *When $\varrho_1 > \varrho_2$,*

$$D_{k+1}\leq\varrho_{1}^{k+1}\left(D_{0}+\frac{2RB\alpha\gamma+\frac{2R\alpha}{n}}{1-2A\alpha\gamma-\rho_{w}^{2}}\right)+\alpha^{2}\frac{BG_{1}}{A}+\frac{\alpha}{\gamma}\frac{G_{2}}{nA}+\gamma^{2}\frac{C}{A}.\tag{19}$$

*Then, for arbitrarily small step sizes, $D_k$ converges with the linear rate of $O(\varrho_1^k)$.*

Proof: See Appendix G.

Taking arbitrarily small values of $\alpha, \gamma$ satisfying $\alpha\gamma < \frac{1}{A}$ and $\alpha < \gamma$, and setting $\varrho = \max\{\varrho_1, \varrho_2\}$, the convergence rate becomes $O(\varrho^k)$, achieving the same rate as with FO information.

## 4 Numerical Results

In this section, we provide numerical examples to illustrate the performance of the 1P-DSG algorithm. We compare it with FO distributed methods aiming to achieve consensus, namely DSGT (Pu & Nedić, 2018) and EXTRA (Shi et al., 2015), and with a ZO centralized algorithm based on gradient descent (e.g., Flaxman et al. (2004) and Bach & Perchet (2016)) using the alternative one-point estimate presented in (1). We denote the ZO centralized algorithm by 1P-GD. For DSGT and EXTRA, we calculate the exact gradient and add white noise to it to form an unbiased FO estimator. The network topology is a connected Erdős-Rényi random graph with a connection probability of 0.05. We consider a logistic classification problem to classify $m$ images of two digits, labeled as $y_{ij} = +1$ or $-1$, from the MNIST data set (LeCun & Cortes, 2005).
Each image $X_{ij}$ is a 785-dimensional vector and is compressed using a lossy autoencoder to become 10-dimensional, denoted as $X'_{ij}$, i.e., $d = 10$. The total images are split equally among the agents such that each agent has $m_i = \frac{m}{n}$ images and, for privacy constraints, no access to the other ones. However, the goal is still to make use of all images and to solve collaboratively

$$\operatorname*{min}_{\theta\in\mathcal{K}}{\frac{1}{n}}\sum_{i=1}^{n}{\frac{1}{m}}\sum_{j=1}^{m_{i}}\mathbb{E}_{u\sim{\mathcal{N}}(1,\sigma_{u})}\ln(1+\exp(-u_{ij}y_{ij}\cdot X_{ij}^{T}\theta))+c\|\theta\|^{2},$$

while reaching consensus on the decision variable $\theta \in \mathcal{K}$ with $\mathcal{K} = [-10, 10]^d$. We note here that $u$ models some perturbation on the local querying of every example, adding to the randomization of the communication process. We consider classifying the digits 1 and 2 with $m = 12700$ images. There are $n = 100$ agents in the network, and thus each has a local batch of $m_i = 127$ images. We take $\sigma_u = 0.01$ and let $\alpha_k = 0.05(k+1)^{-0.75}$ and $\gamma_k = 0.8(k+1)^{-0.25}$ for 1P-DSG with vanishing step sizes, and $\alpha = 0.05$ and $\gamma = 0.6$ with constant step sizes. We choose $\Phi_k \in \{-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}}\}^d$ with equal probability. Also, every function query is subject to a white noise generated by the standard normal distribution. For the DSGT algorithm, we set the step size to $\alpha_k = 0.015(k+1)^{-1}$ when it is vanishing and $\alpha = 0.015$ when constant, and we consider neither the perturbation nor the noise on the objective function, only the noise on the exact gradient. We do the same for EXTRA and set its step size to $\alpha = 0.01$. For the centralized 1P-GD algorithm, we set $\alpha = 0.001$ and $\gamma = 0.5$. We let $c = 0.1$ and use the same initialization for all algorithms, with $\theta_{i,0}$ uniformly chosen from $[-0.5, 0.5]^d$, $\forall i \in \mathcal{N}$, per instance. We finally average the simulations over 30 instances.

The expected evolution of the loss objective function is presented in Figure 1, and the graphs are zoomed in on in Figure 2. The experimental results seem to validate our theoretical results: our proposed algorithm converges linearly fast with constant step sizes; however, the final gap is due to converging to an $O(\alpha)$-neighborhood of the optimal solution. 1P-DSG with vanishing step sizes converges at an $O(1/\sqrt{k})$ rate, while DSGT with a vanishing step size converges at a rate of $O(1/k)$. Using a constant vs. a vanishing step size does not seem to affect the convergence rate of the loss function of DSGT. EXTRA consistently performs similarly to DSGT. The most interesting point is that 1P-DSG, with both vanishing and constant step sizes, outperforms the centralized ZO counterpart 1P-GD to an impressive extent, highlighting the significance of our proposed estimate and method. We also note that 1P-GD fluctuates a lot due to the division by the small value $\gamma$ in its gradient estimate, which causes instability and makes tuning generally difficult. In Figure 3, we measure at every iteration the classification accuracy against an independent test set of 2167 images using the updated mean vector $\bar{\theta}_k = \frac{1}{n}\sum_{i=1}^{n}\theta_{i,k}$ of the local decision variables. The interest of the constant step sizes appears in the convergence rate of this accuracy, where our algorithm is able to compete with DSGT with full FO information, and to outperform DSGT with a vanishing step size.
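For completeness, here is a sketch of how the noisy local logistic oracle above can be wired into the `run_1p_dsg` sketch from Section 2.1. The helper names and the synthetic stand-in data are ours; the paper uses autoencoder-compressed MNIST features, and the regularization weight, perturbation variance, and query-noise level mirror the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_local_oracle(X_i, y_i, c=0.1, sigma_u=0.01, noise_std=1.0):
    """Noisy one-point oracle for agent i's regularized logistic loss.
    Each call draws fresh multiplicative perturbations u ~ N(1, sigma_u)
    (playing the role of S) plus additive white query noise (zeta)."""
    def f(theta):
        u = rng.normal(1.0, sigma_u, size=y_i.shape)   # per-example perturbation
        margins = u * y_i * (X_i @ theta)
        loss = np.log1p(np.exp(-margins)).mean() + c * theta @ theta
        return loss + rng.normal(0.0, noise_std)       # additive query noise
    return f

# Synthetic stand-in for the compressed MNIST batches (d = 10, m_i = 127):
n, m_i, d = 100, 127, 10
oracles = [make_local_oracle(rng.standard_normal((m_i, d)),
                             rng.choice([-1.0, 1.0], size=m_i))
           for _ in range(n)]
# With W = metropolis_weights(adj) over an Erdos-Renyi graph, the run is then:
# theta = run_1p_dsg(W, oracles, d=d, iters=5000)
```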
This accuracy behavior is an important result, as it shows that the classification goal with ZO is well met despite the limiting upper bounds on the convergence rate, and that the $O(\alpha)$-neighborhood of the optimal solution, reached linearly fast, can be sufficient to achieve the best possible accuracy. The reason for this better accuracy attainment is generally that the constant step-size version of a gradient method often achieves better generalization performance due to its ability to find a good balance between exploration and exploitation during the optimization process. In other words, it allows the algorithm to explore a wide range of parameter values while still exploiting gradients to make consistent progress toward the optimal solution. This balance enhances the model's ability to capture the underlying structure of the data and avoid overfitting. While a vanishing step-size version also offers this balance at the beginning, as the step sizes become smaller, exploration is substituted for exploitation, causing worse/slower generalization. This is well confirmed by the centralized 1P-GD vs. 1P-DSG with vanishing step sizes: despite the latter outperforming the former in convergence speed (of the objective function), the former with constant step sizes seems to generalize better.

![12_image_0.png](12_image_0.png)

Figure 1: Expected loss function evolution of the proposed algorithm vs. DSGT, EXTRA, and 1P-GD considering vanishing vs. constant step sizes.

![12_image_1.png](12_image_1.png)

Figure 2: Expected loss function evolution (zoomed in) of the proposed algorithm vs. DSGT, EXTRA, and 1P-GD considering vanishing vs. constant step sizes.

![12_image_2.png](12_image_2.png)

Figure 3: Expected test accuracy evolution of the proposed algorithm vs. DSGT, EXTRA, and 1P-GD considering vanishing vs. constant step sizes.

In Figures 4, 5, and 6, the curves show the evolution of the expected consensus error, i.e., $\mathbb{E}\big[\sum_{i=1}^{n}\|\theta_{i,k}-\bar{\theta}_k\|^2\big]$, which is the expected error between the local decision variables and their average. For all algorithms, the error again validates the theoretical bounds and decreases quite fast. Generally, as evident in Figure 6, for all algorithms, vanishing step sizes allow the consensus error to completely vanish, while constant step sizes leave an $O(\alpha^2)$-gap. We add other numerical examples for different image labels in Appendix H.

## 5 Conclusion

In this work, we extended the distributed stochastic gradient algorithm to present a practical solution to a relevant problem with realistic assumptions. A novel ZO algorithm was studied and proved to converge with a biased, high-variance one-point gradient estimate and a stochastic perturbation on the objective function. In the context of noisy ZO optimization, we have successfully established a linear convergence rate of $O(\varrho^k)$ using fixed step sizes and a rate of $O(1/\sqrt{k})$ with vanishing step sizes. These rates align with the optimal expectations examined in the existing literature. We also prove a regret bound of $O(\sqrt{k})$ with vanishing step sizes. A numerical application confirmed the success and efficiency of the algorithm.

## Acknowledgment

We are grateful for the comments made by the editor and the reviewers that have substantially improved the quality of this work.

![13_image_1.png](13_image_1.png)

Figure 4: Expected consensus error evolution of the proposed algorithm vs. DSGT and EXTRA considering vanishing vs. constant step sizes.

![13_image_0.png](13_image_0.png)

Figure 5: Expected consensus error evolution of the proposed algorithm vs.
DSGT and EXTRA considering vanishing vs. constant step sizes.

![13_image_2.png](13_image_2.png)

Figure 6: Expected consensus error evolution of the proposed algorithm vs. DSGT and EXTRA considering vanishing vs. constant step sizes.

## References

Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In *COLT*, 2010.

Francis Bach and Vianney Perchet. Highly-smooth zeroth-order online optimization, 2016. URL https://arxiv.org/abs/1605.08165.

Sébastien Bubeck, Ronen Eldan, and Yin Tat Lee. Kernel-based methods for bandit convex optimization. *J. ACM*, 68(4), jun 2021. ISSN 0004-5411. doi: 10.1145/3453721. URL https://doi.org/10.1145/3453721.

Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, and David Cox. ZO-AdaMM: Zeroth-order adaptive momentum method for black-box optimization. In *NeurIPS*, 2019.

Ying Chen, Zhiyong Liu, Yongchao Zhang, Yuan Wu, Xin Chen, and Lian Zhao. Deep reinforcement learning-based dynamic resource management for mobile edge computing in industrial internet of things. *IEEE Transactions on Industrial Informatics*, 17(7):4925–4934, 2021. doi: 10.1109/TII.2020.3028963.

Amir Daneshmand, Francisco Facchinei, Vyacheslav Kungurtsev, and Gesualdo Scutari. Hybrid random/deterministic parallel algorithms for convex and nonconvex big data optimization. *IEEE Transactions on Signal Processing*, 63(15):3914–3929, 2015. doi: 10.1109/TSP.2015.2436357.

Joseph L. Doob. *Stochastic Processes*. 1953.

John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. *IEEE Transactions on Information Theory*, 61(5):2788–2806, 2015. doi: 10.1109/TIT.2015.2409256.

Abraham Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. *CoRR*, cs.LG/0408007, 2004. URL http://arxiv.org/abs/cs.LG/0408007.

Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, and Qiuyi (Richard) Zhang. Gradientless descent: High-dimensional zeroth-order optimization. *CoRR*, abs/1911.06317, 2019. URL http://arxiv.org/abs/1911.06317.

Eduard Gorbunov, Pavel Dvurechensky, and Alexander Gasnikov. An accelerated method for derivative-free smooth stochastic convex optimization, 2018. URL https://arxiv.org/abs/1802.09022.

Davood Hajinezhad, Mingyi Hong, and Alfredo Garcia. ZONE: Zeroth-order nonconvex multiagent optimization over networks. *IEEE Transactions on Automatic Control*, 64(10):3995–4010, 2019. doi: 10.1109/TAC.2019.2896025.

Elad Hazan and Kfir Levy. Bandit convex optimization: Towards tight bounds. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper_files/paper/2014/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.

Shinji Ito. An optimal algorithm for bandit convex optimization with strongly-convex and smooth loss. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 2229–2239. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/ito20a.html.

Dusan Jakovetic, Dragana Bajovic, Anit Kumar Sahu, and Soummya Kar. Convergence rates for distributed stochastic optimization over random networks.
In *2018 IEEE Conference on Decision and Control (CDC)*, pp. 4238–4245, 2018. doi: 10.1109/CDC.2018.8619228.

Kevin G. Jamieson, Robert D. Nowak, and Benjamin Recht. Query complexity of derivative-free optimization. In *NIPS*, 2012.

David Kinderlehrer and Guido Stampacchia. *An Introduction to Variational Inequalities and Their Applications*. Classics in Applied Mathematics, 31, 2000. doi: 10.1137/1.9780898719451.

Anit Kumar Sahu, Dusan Jakovetic, Dragana Bajovic, and Soummya Kar. Distributed zeroth order optimization over random networks: A Kiefer-Wolfowitz stochastic approximation approach. In *2018 IEEE Conference on Decision and Control (CDC)*, pp. 4951–4958, 2018. doi: 10.1109/CDC.2018.8619044.

Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 2005.

Wenjie Li and Mohamad Assaad. Distributed stochastic optimization in networks with low informational exchange. *IEEE Transactions on Information Theory*, 67(5):2989–3008, 2021. doi: 10.1109/TIT.2021.3064925.

Sijia Liu, Jie Chen, Pin-Yu Chen, and Alfred Hero. Zeroth-order online alternating direction method of multipliers: Convergence analysis and applications. In Amos Storkey and Fernando Perez-Cruz (eds.), *Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics*, volume 84 of *Proceedings of Machine Learning Research*, pp. 288–297. PMLR, 09–11 Apr 2018. URL https://proceedings.mlr.press/v84/liu18a.html.

Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong. signSGD via zeroth-order oracle. In *ICLR*, 2019.

Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred O. Hero III, and Pramod K. Varshney. A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. *IEEE Signal Processing Magazine*, 37(5):43–54, 2020. doi: 10.1109/MSP.2020.3003837.

Yuyi Mao, Changsheng You, Jun Zhang, Kaibin Huang, and Khaled B. Letaief. A survey on mobile edge computing: The communication perspective. *IEEE Communications Surveys Tutorials*, 19(4):2322–2358, 2017. doi: 10.1109/COMST.2017.2745201.

Ion Matei and John S. Baras. Performance evaluation of the consensus-based distributed subgradient method under random communication topologies. *IEEE Journal of Selected Topics in Signal Processing*, 5(4):754–771, 2011. doi: 10.1109/JSTSP.2011.2120593.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Aarti Singh and Jerry Zhu (eds.), *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics*, volume 54 of *Proceedings of Machine Learning Research*, pp. 1273–1282. PMLR, 20–22 Apr 2017. URL https://proceedings.mlr.press/v54/mcmahan17a.html.

Angelia Nedić and Alex Olshevsky. Stochastic gradient-push for strongly convex functions on time-varying directed graphs. *IEEE Transactions on Automatic Control*, 61(12):3936–3947, 2016. doi: 10.1109/TAC.2016.2529285.

A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. *SIAM Journal on Optimization*, 19(4):1574–1609, 2009. doi: 10.1137/070704277. URL https://doi.org/10.1137/070704277.

Yurii Nesterov and Vladimir G. Spokoiny. Random gradient-free minimization of convex functions. *Foundations of Computational Mathematics*, 17:527–566, 2017.

Shi Pu. A robust gradient tracking method for distributed optimization over directed networks. In *2020 59th IEEE Conference on Decision and Control (CDC)*, pp. 2335–2341, 2020.
doi: 10.1109/CDC42340.2020.9303917.

Shi Pu and Angelia Nedić. Distributed stochastic gradient tracking methods, 2018. URL https://arxiv.org/abs/1805.11454.

Guannan Qu and Na Li. Harnessing smoothness to accelerate distributed optimization. *IEEE Transactions on Control of Network Systems*, 5(3):1245–1260, 2018. doi: 10.1109/TCNS.2017.2698261.

Ohad Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. In Shai Shalev-Shwartz and Ingo Steinwart (eds.), *Proceedings of the 26th Annual Conference on Learning Theory*, volume 30 of *Proceedings of Machine Learning Research*, pp. 3–24, Princeton, NJ, USA, 12–14 Jun 2013. PMLR. URL https://proceedings.mlr.press/v30/Shamir13.html.

Wei Shi, Qing Ling, Gang Wu, and Wotao Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. *SIAM Journal on Optimization*, 25(2):944–966, 2015. doi: 10.1137/14096668X. URL https://doi.org/10.1137/14096668X.

Zai Shi and Atilla Eryilmaz. A zeroth-order ADMM algorithm for stochastic optimization over distributed processing networks. In *IEEE INFOCOM 2020 - IEEE Conference on Computer Communications*, pp. 726–735, 2020. doi: 10.1109/INFOCOM41043.2020.9155520.

Yujie Tang, Junshan Zhang, and Na Li. Distributed zero-order algorithms for nonconvex multiagent optimization. *IEEE Transactions on Control of Network Systems*, 8(1):269–281, 2021. doi: 10.1109/TCNS.2020.3024321.

Zaid J. Towfic, Jianshu Chen, and Ali H. Sayed. Excess-risk of distributed stochastic learners. *IEEE Transactions on Information Theory*, 62(10):5753–5785, 2016. doi: 10.1109/TIT.2016.2593769.

Sheng-Yuan Tu and Ali H. Sayed. Diffusion strategies outperform consensus strategies for distributed estimation over adaptive networks. *IEEE Transactions on Signal Processing*, 60(12):6217–6234, 2012. doi: 10.1109/TSP.2012.2217338.

Anirudh Vemula, Wen Sun, and J. Andrew Bagnell. Contrasting exploration in parameter and action space: A zeroth-order optimization perspective. *ArXiv*, abs/1901.11503, 2019.

Ran Xin, Anit Kumar Sahu, Usman A. Khan, and Soummya Kar. Distributed stochastic optimization with gradient tracking over strongly-connected networks. In *2019 IEEE 58th Conference on Decision and Control (CDC)*, pp. 8353–8358, 2019. doi: 10.1109/CDC40024.2019.9029217.

Kun Yuan, Qing Ling, and Wotao Yin. On the convergence of decentralized gradient descent. *SIAM Journal on Optimization*, 26(3):1835–1854, 2016. doi: 10.1137/130943170. URL https://doi.org/10.1137/130943170.

Fanqin Zhou, Lei Feng, Michel Kadoch, Peng Yu, Wenjing Li, and Zhili Wang. Multiagent RL aided task offloading and resource management in Wi-Fi 6 and 5G coexisting industrial wireless environment. *IEEE Transactions on Industrial Informatics*, 18(5):2923–2933, 2022. doi: 10.1109/TII.2021.3106973.
## A L-Smoothness Property

$$\begin{aligned}
\|\nabla\mathcal{F}(\bar{x})-h(\mathbf{x})\|&=\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\nabla F_{i}(\bar{x})-\nabla F_{i}(x_{i})\right)\right\|\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\left\|\nabla F_{i}(\bar{x})-\nabla F_{i}(x_{i})\right\|\\
&\leq\frac{L}{n}\sum_{i=1}^{n}\left\|\bar{x}-x_{i}\right\|\\
&=\frac{L}{n}\sum_{i=1}^{n}\left\|x_{i}-\bar{x}\right\|\\
&\stackrel{(a)}{\leq}\frac{L}{n}\sqrt{n}\Big(\sum_{i=1}^{n}\left\|x_{i}-\bar{x}\right\|^{2}\Big)^{\frac{1}{2}}\\
&\stackrel{(b)}{=}\frac{L}{\sqrt{n}}\|\mathbf{x}-\mathbf{1}\bar{x}\|,
\end{aligned}$$

where (a) is by applying the Cauchy-Schwarz inequality, $|\sum_{i=1}^{n}a_{i}\cdot1|\leq(\sum_{i=1}^{n}a_{i}^{2})^{\frac{1}{2}}\cdot(\sum_{i=1}^{n}1^{2})^{\frac{1}{2}}=n^{\frac{1}{2}}(\sum_{i=1}^{n}a_{i}^{2})^{\frac{1}{2}}$, and (b) is by definition of the Frobenius norm, $\|\mathbf{x}-\mathbf{1}\bar{x}\|^{2}=\sum_{i=1}^{n}\|x_{i}-\bar{x}\|^{2}$.

## B Estimated Gradient

In this section, we derive the bias of the gradient estimate with respect to the real gradient of the local objective function. Let

$$\check{g}_{i,k}=\mathbb{E}_{S,\Phi,\zeta}[g_{i,k}|\mathcal{H}_{k}].$$

Thus, by Assumption 1.3 and the definition in (4),

$$\begin{aligned}
\check{g}_{i,k}&=\mathbb{E}_{S,\Phi,\zeta}[\Phi_{i,k}(f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})+\zeta_{i,k})|\mathcal{H}_{k}]\\
&=\mathbb{E}_{S,\Phi}[\Phi_{i,k}f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})|\mathcal{H}_{k}]\\
&=\mathbb{E}_{\Phi}[\Phi_{i,k}F_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k})|\mathcal{H}_{k}].
\end{aligned}$$

By Taylor's theorem and the mean value theorem, there exists $\tilde{x}_{i,k}$ located between $x_{i,k}$ and $x_{i,k}+\gamma_{k}\Phi_{i,k}$ such that

$$F_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k})=F_{i}(x_{i,k})+\gamma_{k}\langle\Phi_{i,k},\nabla F_{i}(x_{i,k})\rangle+\frac{\gamma_{k}^{2}}{2}\langle\Phi_{i,k},\nabla^{2}F_{i}(\tilde{x}_{i,k})\Phi_{i,k}\rangle.$$

Substituting in the previous definition,

$$\begin{aligned}
\check{g}_{i,k}&=F_{i}(x_{i,k})\mathbb{E}_{\Phi}[\Phi_{i,k}]+\gamma_{k}\mathbb{E}_{\Phi}[\Phi_{i,k}\Phi_{i,k}^{T}]\nabla F_{i}(x_{i,k})+\frac{\gamma_{k}^{2}}{2}\mathbb{E}_{\Phi}[\Phi_{i,k}\Phi_{i,k}^{T}\nabla^{2}F_{i}(\tilde{x}_{i,k})\Phi_{i,k}|\mathcal{H}_{k}]\\
&=c_{3}\gamma_{k}[\nabla F_{i}(x_{i,k})+b_{i,k}].
\end{aligned}$$

Thus, the estimation bias has the form

$$\begin{aligned}
b_{i,k}&=\frac{\check{g}_{i,k}}{c_{3}\gamma_{k}}-\nabla F_{i}(x_{i,k})\\
&=\frac{\gamma_{k}}{2c_{3}}\mathbb{E}_{\Phi}[\Phi_{i,k}\Phi_{i,k}^{T}\nabla^{2}F_{i}(\tilde{x}_{i,k})\Phi_{i,k}|\mathcal{H}_{k}].
\end{aligned}$$

Let Assumptions 1.2 and 2.2 hold. Then, we can bound the bias as

$$\begin{aligned}
\|b_{i,k}\|&\leq\frac{\gamma_{k}}{2c_{3}}\mathbb{E}_{\Phi}[\|\Phi_{i,k}\|_{2}\|\Phi_{i,k}^{T}\|_{2}\|\nabla^{2}F_{i}(\tilde{x}_{i,k})\|_{2}\|\Phi_{i,k}\|_{2}|\mathcal{H}_{k}]\\
&\leq\gamma_{k}\frac{c_{4}^{3}c_{1}}{2c_{3}}.
\end{aligned}\tag{20}$$

We can see $\|b_{i,k}\|\to0$ as $k\to\infty$ since $\gamma_{k}$ is vanishing. We remark that

$$\begin{aligned}
\check{\bar{g}}_{k}&=\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\\
&=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[g_{i,k}|\mathcal{H}_{k}]\\
&=\frac{1}{n}\sum_{i=1}^{n}c_{3}\gamma_{k}[\nabla F_{i}(x_{i,k})+b_{i,k}]\\
&=c_{3}\gamma_{k}[h(\mathbf{x}_{k})+\bar{b}_{k}]
\end{aligned}\tag{21}$$

is also a biased estimator of $h(\mathbf{x}_{k})$ with

$$\begin{aligned}
\|\bar{b}_{k}\|&=\Big\|\frac{1}{n}\sum_{i=1}^{n}b_{i,k}\Big\|\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\|b_{i,k}\|\\
&\leq\frac{1}{n}\sum_{i=1}^{n}\gamma_{k}\frac{c_{4}^{3}c_{1}}{2c_{3}}\\
&=\gamma_{k}\frac{c_{4}^{3}c_{1}}{2c_{3}}.
\end{aligned}\tag{22}$$

Lemma B.1. *Let all Assumptions 1.3, 2.2, and 2.4 hold. Then there exists a bounded constant $\bar{M}>0$ such that $\mathbb{E}[\|\bar{g}_{k}\|^{2}|\mathcal{H}_{k}]<\bar{M}$.*

Proof. $\forall i\in\mathcal{N}$, we have

$$\begin{aligned}
\mathbb{E}[\|g_{i,k}\|^{2}|\mathcal{H}_{k}]&=\mathbb{E}[\|\Phi_{i,k}(f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})+\zeta_{i,k})\|^{2}|\mathcal{H}_{k}]\\
&=\mathbb{E}[\|\Phi_{i,k}\|^{2}(f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})+\zeta_{i,k})^{2}|\mathcal{H}_{k}]\\
&\stackrel{(a)}{\leq}c_{4}^{2}\,\mathbb{E}[(f_{i}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})+\zeta_{i,k})^{2}|\mathcal{H}_{k}]\\
&\stackrel{(b)}{=}c_{4}^{2}\,\mathbb{E}[f_{i}^{2}(x_{i,k}+\gamma_{k}\Phi_{i,k},S_{i,k})|\mathcal{H}_{k}]+c_{4}^{2}c_{2}\\
&\stackrel{(c)}{<}\infty,
\end{aligned}$$

where (a) is due to Assumption 2.2, (b) to Assumption 1.3, and (c) to Assumption 2.4. Then, $\mathbb{E}[\|\mathbf{g}_{k}\|^{2}|\mathcal{H}_{k}]=\mathbb{E}\big[\sum_{i=1}^{n}\|g_{i,k}\|^{2}\,\big|\,\mathcal{H}_{k}\big]=\sum_{i=1}^{n}\mathbb{E}[\|g_{i,k}\|^{2}|\mathcal{H}_{k}]<\infty$ and

$$\begin{aligned}
\mathbb{E}[\|\bar{g}_{k}\|^{2}|\mathcal{H}_{k}]&=\mathbb{E}\Big[\Big\|\frac{1}{n}\sum_{i=1}^{n}g_{i,k}\Big\|^{2}\,\Big|\,\mathcal{H}_{k}\Big]\\
&=\frac{1}{n^{2}}\mathbb{E}\Big[\Big\|\sum_{i=1}^{n}g_{i,k}\Big\|^{2}\,\Big|\,\mathcal{H}_{k}\Big]\\
&\leq\frac{n}{n^{2}}\mathbb{E}\Big[\sum_{i=1}^{n}\|g_{i,k}\|^{2}\,\Big|\,\mathcal{H}_{k}\Big]\\
&=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[\|g_{i,k}\|^{2}|\mathcal{H}_{k}]\\
&<\infty.
\end{aligned}$$
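For concreteness, the following short numerical sketch (ours, not part of the analysis above) illustrates the one-point estimator just studied. It assumes the specific perturbation law $\Phi$ uniform on the sphere of radius $\sqrt{d}$, for which $\mathbb{E}[\Phi]=0$, $\mathbb{E}[\Phi\Phi^{T}]=I$ (so $c_{3}=1$), and $\|\Phi\|=\sqrt{d}$ (so $c_{4}=\sqrt{d}$); the objective and all constants are illustrative choices rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma, noise_std, n_samples = 5, 0.05, 1e-3, 400_000

x = rng.normal(size=d)
grad_true = x ** 3                        # F(x) = (1/4) sum_i x_i^4, a smooth non-quadratic

# Phi uniform on the sphere of radius sqrt(d): E[Phi] = 0, E[Phi Phi^T] = I (c3 = 1),
# and ||Phi|| = sqrt(d) (c4 = sqrt(d)), matching the moment conditions assumed above.
Phi = rng.normal(size=(n_samples, d))
Phi *= np.sqrt(d) / np.linalg.norm(Phi, axis=1, keepdims=True)

# One-point oracle: a single noisy function VALUE at the perturbed point per sample.
f_vals = 0.25 * np.sum((x + gamma * Phi) ** 4, axis=1) + noise_std * rng.normal(size=n_samples)
g = Phi * f_vals[:, None]

est = g.mean(axis=0) / gamma              # approximates grad F(x) + b, with ||b|| = O(gamma)
rel_err = np.linalg.norm(est - grad_true) / np.linalg.norm(grad_true)
print(f"relative error = {rel_err:.3f}  (Monte Carlo noise + O(gamma) smoothing bias)")
```

The residual error mixes the $O(\gamma_{k})$ smoothing bias of (20) with Monte Carlo noise, so it shrinks only when both $\gamma$ is small and the sample count is large; in the algorithm itself, averaging happens implicitly over iterations rather than over repeated samples.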
## C Stochastic Noise

To prove Lemma 3.1, we begin by demonstrating that the sequence $\{\sum_{k=K}^{K'}\alpha_{k}e_{k}\}_{K'\geq K}$ is a martingale. To do so, we have to prove that for all $K'\geq K$, $X_{K'}=\sum_{k=K}^{K'}\alpha_{k}e_{k}$ satisfies the following two conditions:

(i) $\mathbb{E}[X_{K'+1}|X_{K'}]=X_{K'}$;

(ii) $\mathbb{E}[\|X_{K'}\|^{2}]<\infty$.

We know that

$$\mathbb{E}[e_{k}]=\mathbb{E}[\bar{g}_{k}-\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]]=\mathbb{E}_{\mathcal{H}_{k}}\Big[\mathbb{E}\big[\bar{g}_{k}-\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\,\big|\,\mathcal{H}_{k}\big]\Big]=0$$

by the law of total expectation. Hence,

$$\mathbb{E}[X_{K'+1}|X_{K'}]=\mathbb{E}\Big[\alpha_{K'+1}e_{K'+1}+\sum_{k=K}^{K'}\alpha_{k}e_{k}\,\Big|\,\sum_{k=K}^{K'}\alpha_{k}e_{k}\Big]=0+\sum_{k=K}^{K'}\alpha_{k}e_{k}=X_{K'}.\tag{23}$$

In addition, $e_{k}$ and $e_{k'}$ are uncorrelated for any $k\neq k'$ since (assuming $k>k'$)

$$\mathbb{E}\big[e_{k}^{T}e_{k'}\big]=\mathbb{E}\big[\mathbb{E}[e_{k}^{T}e_{k'}|\mathcal{H}_{k}]\big]=\mathbb{E}\big[\mathbb{E}[e_{k}^{T}|\mathcal{H}_{k}]\,e_{k'}\big]=0.$$

Thus,

$$\begin{aligned}
\mathbb{E}\Big(\Big\|\sum_{k=K}^{K'}\alpha_{k}e_{k}\Big\|^{2}\Big)&=\mathbb{E}\Big(\sum_{k=K}^{K'}\sum_{k'=K}^{K'}\alpha_{k}\alpha_{k'}\langle e_{k},e_{k'}\rangle\Big)\\
&\stackrel{(a)}{=}\mathbb{E}\Big(\sum_{k=K}^{K'}\|\alpha_{k}e_{k}\|^{2}\Big)\\
&\leq\sum_{k=K}^{\infty}\mathbb{E}(\alpha_{k}^{2}\|\bar{g}_{k}-\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\|^{2})\\
&=\sum_{k=K}^{\infty}\alpha_{k}^{2}\Big(\mathbb{E}(\|\bar{g}_{k}\|^{2})-\mathbb{E}_{\mathcal{H}_{k}}(\|\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\|^{2})\Big)\\
&\leq\sum_{k=K}^{\infty}\alpha_{k}^{2}\mathbb{E}(\|\bar{g}_{k}\|^{2})\\
&\stackrel{(b)}{\leq}M\sum_{k=K}^{\infty}\alpha_{k}^{2}\stackrel{(c)}{<}\infty,
\end{aligned}\tag{24}$$

where (a) is due to the uncorrelatedness $\mathbb{E}[\langle e_{k},e_{k'}\rangle]=0$, (b) is by Lemma B.1, and (c) is by Assumption 2.1. Therefore, both (i) and (ii) are satisfied and we can say that $\{\sum_{k=K}^{K'}\alpha_{k}e_{k}\}_{K'\geq K}$ is a martingale. This permits us to use Doob's martingale inequality (Doob, 1953): for any constant $\nu>0$,

$$\mathbb{P}\Big(\sup_{K'\geq K}\Big\|\sum_{k=K}^{K'}\alpha_{k}e_{k}\Big\|\geq\nu\Big)\leq\frac{1}{\nu^{2}}\mathbb{E}\Big(\Big\|\sum_{k=K}^{K'}\alpha_{k}e_{k}\Big\|^{2}\Big)\stackrel{(a)}{\leq}\frac{M}{\nu^{2}}\sum_{k=K}^{\infty}\alpha_{k}^{2},\tag{25}$$

where (a) follows the exact same steps as (24). Since $M$ is a bounded constant and $\lim_{K\to\infty}\sum_{k=K}^{\infty}\alpha_{k}^{2}=0$ by Assumption 2.1, we get $\lim_{K\to\infty}\frac{M}{\nu^{2}}\sum_{k=K}^{\infty}\alpha_{k}^{2}=0$ for any bounded constant $\nu$. Hence, the probability that $\|\sum_{k=K}^{K'}\alpha_{k}e_{k}\|\geq\nu$ also vanishes as $K\to\infty$, which concludes the proof.
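Before turning to the convergence proofs, the following minimal sketch (ours) makes the analyzed recursion concrete. It implements the compact update $\mathbf{z}_{k+1}=W(\mathbf{x}_{k}-\alpha_{k}\mathbf{g}_{k})$ followed by a per-agent Euclidean projection onto $\mathcal{K}$, cf. (8) and (26), with the one-point estimates of Appendix B. The ring network, quadratic local objectives, box constraint set, and step-size constants are illustrative assumptions; only the exponents $\upsilon_{1}=3/4$, $\upsilon_{2}=1/4$ match a choice used later in the regret analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, T = 8, 3, 20_000
v1, v2, a0, g0 = 0.75, 0.25, 1.0, 1.0       # step sizes a_k = a0 (k+1)^{-v1}, g_k = g0 (k+1)^{-v2}
radius = 5.0                                 # K = l_inf ball: a compact convex constraint set

# Doubly stochastic mixing matrix W for a ring graph (lazy Metropolis weights).
W = np.eye(n) / 2
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

targets = rng.normal(size=(n, d))            # F_i(x) = 0.5 ||x - t_i||^2 (strongly convex)
x_star = targets.mean(axis=0)                # minimizer of (1/n) sum_i F_i over R^d (inside K here)
x = rng.normal(size=(n, d))

for k in range(T):
    a_k, g_k = a0 * (k + 1) ** -v1, g0 * (k + 1) ** -v2
    Phi = rng.normal(size=(n, d))
    Phi *= np.sqrt(d) / np.linalg.norm(Phi, axis=1, keepdims=True)
    # Each agent queries only a noisy function VALUE at its perturbed point.
    f_vals = 0.5 * np.sum((x + g_k * Phi - targets) ** 2, axis=1) + 1e-3 * rng.normal(size=n)
    g = Phi * f_vals[:, None]                # one-point zeroth-order gradient estimate
    z = W @ (x - a_k * g)                    # consensus step combined with the descent step
    x = np.clip(z, -radius, radius)          # Euclidean projection onto the box K

print("divergence  ||xbar_K - x*||^2 =", np.sum((x.mean(axis=0) - x_star) ** 2))
print("consensus error ||x - 1 xbar||^2 =", np.sum((x - x.mean(axis=0)) ** 2))
```

How small the printed quantities are after a fixed horizon depends on the step-size constants; the sketch is only meant to make the update form and the two tracked errors (divergence and consensus error) concrete.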
## D Proof Of Convergence

We start by stating the following lemma that will be useful for the proof of convergence.

Lemma D.1. *If all Assumptions 1.1-1.3, 2.1-2.2, and 2.4 hold, then $\lim_{k\to\infty}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}=0$. In fact, we have*

$$\sum_{k=0}^{\infty}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}<\infty,\ \sum_{k=0}^{\infty}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}<\infty,\ \text{and}\ \sum_{k=0}^{\infty}\gamma_{k}\alpha_{k}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|<\infty,$$

*almost surely.*

Proof: See Appendix D.2.

## D.1 Proof Of Theorem 3.2

By using the compact form of the algorithm in (8), we know that

$$\bar{z}_{k+1}=\frac{1}{n}\mathbf{1}^{T}W(\mathbf{x}_{k}-\alpha_{k}\mathbf{g}_{k})\stackrel{(a)}{=}\frac{1}{n}\mathbf{1}^{T}(\mathbf{x}_{k}-\alpha_{k}\mathbf{g}_{k})=\bar{x}_{k}-\alpha_{k}\bar{g}_{k},\tag{26}$$

where (a) is again due to the doubly stochastic property of $W$. The divergence at time $k+1$ can then be written as

$$\begin{aligned}
d_{k+1}&=\|\bar{x}_{k+1}-x^{*}\|^{2}\\
&=\Big\|\frac{1}{n}\sum_{i=1}^{n}(x_{i,k+1}-x^{*})\Big\|^{2}\\
&\leq\frac{n}{n^{2}}\sum_{i=1}^{n}\|x_{i,k+1}-x^{*}\|^{2}\\
&\stackrel{(a)}{\leq}\frac{1}{n}\sum_{i=1}^{n}\|z_{i,k+1}-x^{*}\|^{2}\\
&=\frac{1}{n}\sum_{i=1}^{n}\|z_{i,k+1}-\bar{x}_{k}+\bar{x}_{k}-x^{*}\|^{2}\\
&=\frac{1}{n}\sum_{i=1}^{n}\|z_{i,k+1}-\bar{x}_{k}\|^{2}+\frac{2}{n}\sum_{i=1}^{n}\langle z_{i,k+1}-\bar{x}_{k},\bar{x}_{k}-x^{*}\rangle+\frac{1}{n}\sum_{i=1}^{n}\|\bar{x}_{k}-x^{*}\|^{2}\\
&=\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}+2\langle\bar{z}_{k+1}-\bar{x}_{k},\bar{x}_{k}-x^{*}\rangle+\|\bar{x}_{k}-x^{*}\|^{2}\\
&\stackrel{(b)}{=}\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}+2\langle-\alpha_{k}\bar{g}_{k},\bar{x}_{k}-x^{*}\rangle+d_{k}\\
&=d_{k}-2\alpha_{k}\langle\bar{x}_{k}-x^{*},\bar{g}_{k}-\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]+\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\rangle+\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}\\
&=d_{k}-2\alpha_{k}\langle\bar{x}_{k}-x^{*},\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\rangle-2\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle+\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}\\
&\stackrel{(c)}{=}d_{k}-2c_{3}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},h(\mathbf{x}_{k})+\bar{b}_{k}\rangle-2\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle+\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}\\
&=d_{k}-2c_{3}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k})\rangle+2c_{3}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k})-h(\mathbf{x}_{k})\rangle\\
&\qquad-2c_{3}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\bar{b}_{k}\rangle-2\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle+\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}\\
&\stackrel{(d)}{\leq}d_{k}-2c_{3}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k})\rangle+\frac{2c_{3}L\gamma_{k}\alpha_{k}}{\sqrt{n}}\|\bar{x}_{k}-x^{*}\|\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|\\
&\qquad+2c_{3}\gamma_{k}\alpha_{k}\|\bar{x}_{k}-x^{*}\|\|\bar{b}_{k}\|-2\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle+\frac{1}{n}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2},
\end{aligned}\tag{27}$$

where (a) is by the projection inequality (5) noting that $x^{*}\in\mathcal{K}$ (so projecting it onto $\mathcal{K}$ gives us the same point), (b) is by (26), (c) is due to (21), and (d) is due to Lemma 1.5.

By recursion of inequality (27), we have

$$\begin{aligned}
d_{K+1}\leq\ &d_{0}-2c_{3}\sum_{k=0}^{K}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k})\rangle+\frac{2c_{3}L}{\sqrt{n}}\sum_{k=0}^{K}\gamma_{k}\alpha_{k}\|\bar{x}_{k}-x^{*}\|\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|\\
&+2c_{3}\sum_{k=0}^{K}\gamma_{k}\alpha_{k}\|\bar{x}_{k}-x^{*}\|\|\bar{b}_{k}\|-2\sum_{k=0}^{K}\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle+\frac{1}{n}\sum_{k=0}^{K}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}.
\end{aligned}\tag{28}$$

By Lemma 3.1, we have $\lim_{K\to\infty}\|\sum_{k=0}^{K}\alpha_{k}e_{k}\|<\infty$ almost surely. Since $\|\bar{x}_{k}-x^{*}\|<\infty$ by the compactness of $\mathcal{K}$ in Assumption 2.4, hence

$$\lim_{K\to\infty}\Big\|\sum_{k=0}^{K}\alpha_{k}\langle\bar{x}_{k}-x^{*},e_{k}\rangle\Big\|<\infty.\tag{29}$$

From (42) in Lemma D.1, we have

$$\lim_{K\to\infty}\sum_{k=0}^{K}\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}<\infty.\tag{30}$$

As stated in Lemma D.1, we have $\sum_{k=0}^{\infty}\gamma_{k}\alpha_{k}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|<\infty$; adding to $\|\bar{x}_{k}-x^{*}\|<\infty$ by Assumption 2.4, then

$$\lim_{K\to\infty}\sum_{k=0}^{K}\gamma_{k}\alpha_{k}\|\bar{x}_{k}-x^{*}\|\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|<\infty.\tag{31}$$

By (22), we know that $\|\bar{b}_{k}\|\leq\frac{c_{4}^{3}c_{1}}{2c_{3}}\gamma_{k}$ and $\|\bar{x}_{k}-x^{*}\|<\infty$ by Assumption 2.4, so

$$\lim_{K\to\infty}\sum_{k=0}^{K}\gamma_{k}^{2}\alpha_{k}\|\bar{x}_{k}-x^{*}\|<\infty\tag{32}$$

by Assumption 2.1.
From the above inequalities (28)-(32), we see that there exists 0 < D0 < ∞ such that dK+1 ≤ D0 + zK, with zK defined as $$z_{K}=-2c_{3}\sum_{k=0}^{K}\gamma_{k}\alpha_{k}\langle\bar{x}_{k}-x^{*},\nabla{\cal F}(\bar{x}_{k})\rangle.\tag{1}$$ By the strong convexity, we have $$-\left\langle\bar{x}_{k}-x^{*},\nabla{\cal F}(\bar{x}_{k})\right\rangle\leq{\cal F}(x^{*})-{\cal F}(\bar{x}_{k})-\frac{\lambda}{2}\|\bar{x}_{k}-x^{*}\|^{2}\leq0,$$ as F(¯xk) ≥ F(x ∗) by the definition of x ∗ being the optimum in K and x¯k ∈ K (by the property of a convex set). Thus, zK ≤ 0, confirming dK+1 < ∞. Let's assume that ∀h > 0, ∃Kh such that kx¯k − x ∗k 2 > h for k ≥ Kh, meaning $$\operatorname*{lim}_{K\to\infty}-\sum_{k=K_{h}}^{K}\gamma_{k}\alpha_{k}\|{\bar{x}}_{k}-x^{*}\|^{2}<-\epsilon_{h}\operatorname*{lim}_{K\to\infty}\sum_{k=K_{h}}^{K}\gamma_{k}\alpha_{k}<-\infty,$$ $$(33)$$ $$(34)$$ $$(35)$$ since Pk αkγk diverges by Assumption 2.1. However, this implies that zK < −∞ and as a consequence, dK+1 ≤ D0 + zK < −∞ which is a contradiction as dK+1 ≥ 0. We conclude that limk→∞ dk = 0 and limk→∞ x¯k = x ∗, almost surely. ## D.2 Proof Of Lemma D.1 The goal is to bound kxk+1 − 1x¯k+1k 2 by kxk − 1x¯kk 2 and other vanishing terms. kxk+1 − 1x¯k+1k 2 =kxk+1 − 1x¯k + 1x¯k − 1x¯k+1k 2 =kxk+1 − 1x¯kk 2 + 2hxk+1 − 1x¯k, 1x¯k − 1x¯k+1i + k1x¯k − 1x¯k+1k 2 (a) =kxk+1 − 1x¯kk 2 − k1x¯k − 1x¯k+1k 2 ≤kxk+1 − 1x¯kk 2 = Xn i=1 kxi,k+1 − x¯kk 2 (b) ≤ Xn i=1 kzi,k+1 − x¯kk 2 =kzk+1 − 1x¯kk 2 =kWxk − αkWgk − 1x¯kk 2 =kWxk − 1x¯kk 2 − 2αkhWxk − 1x¯k, Wgki + α 2 kkWgkk 2 (c) ≤ kWxk − 1x¯kk 2 + αk[ 1 − ρ 2 w 2ρ 2wαk kWxk − 1x¯kk 2 + 2ρ 2 wαk 1 − ρ 2w kWgkk 2] + α 2 kkWgkk 2 (d) ≤ρ 2 wkxk − 1x¯kk 2 + αk[ 1 − ρ 2 w 2αkkxk − 1x¯kk 2 + 2ρ 2 wαk 1 − ρ 2w kWgkk 2] + α 2 kkWgkk 2 = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 k 1 + ρ 2 w 1 − ρ 2w kWgkk 2 = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 k 1 + ρ 2 w 1 − ρ 2w kWgk − 1g¯k + 1g¯kk 2 (e) = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 k 1 + ρ 2 w 1 − ρ 2w kWgk − 1g¯kk 2 + α 2 k n(1 + ρ 2 w) 1 − ρ 2w kg¯kk 2 22 $$\leq\frac{1+\rho_{w}^{2}}{2}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}+\alpha_{k}^{2}\frac{\rho_{w}^{2}(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}\|\mathbf{g}_{k}-\mathbf{1}\bar{g}_{k}\|^{2}+\alpha_{k}^{2}\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}\|\bar{g}_{k}\|^{2}$$ $$\leq\frac{(J)}{2}\frac{1+\rho_{w}^{2}}{2}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}+\alpha_{k}^{2}\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}M.$$ $$(36)$$ where (a) is by (37), (b) is the projection inequality (5) noting that x¯k ∈ K since K is a convex set (so projecting it onto K gives us the same point), (c) is by −2 × 1 h*a, b*i = −2ha, 1 bi ≤ 2kak 2 + 1 2 kbk 2(d) is by Lemma 1.4, (e) is by (38), and (f) is by (39) and (40). 2hxk+1 − 1x¯k, 1x¯k − 1x¯k+1i =2Xn i=1 hxi,k+1 − x¯k, x¯k − x¯k+1i =2h Xn i=1 (xi,k+1 − x¯k), x¯k − x¯k+1i (37) =2hn(¯xk+1 − x¯k), x¯k − x¯k+1i = − 2nhx¯k − x¯k+1, x¯k − x¯k+1i = − 2nkx¯k − x¯k+1k 2 = − 2k1x¯k − 1x¯k+1k 2. 
$$\langle W\mathbf{g}_{k}-\mathbf{1}\bar{g}_{k},\mathbf{1}\bar{g}_{k}\rangle=\sum_{i=1}^{n}\langle\sum_{j=1}^{n}w_{ij}g_{j,k}-\bar{g}_{k},\bar{g}_{k}\rangle\tag{38}$$ $$=\langle\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}g_{j,k}-n\bar{g}_{k},\bar{g}_{k}\rangle$$ $$=\langle\sum_{j=1}^{n}(\sum_{i=1}^{n}w_{ij})g_{j,k}-n\bar{g}_{k},\bar{g}_{k}\rangle$$ $$=\langle\sum_{j=1}^{n}g_{j,k}-n\bar{g}_{k},\bar{g}_{k}\rangle$$ $$=0.$$ From Lemma B.1, we know that kg¯kk 2 ≤ M < ∞ almost surely, $$\|\mathbf{g}_{k}-\mathbf{1}\bar{g}_{k}\|^{2}=\sum_{i=1}^{n}\|g_{i,k}-\frac{1}{n}\sum_{j=1}^{n}g_{j,k}\|^{2}$$ $$=\sum_{i=1}^{n}\left(\|g_{i,k}\|^{2}-2(g_{i,k},\frac{1}{n}\sum_{j=1}^{n}g_{j,k})+\|\bar{g}_{k}\|^{2}\right)$$ $$=\|\mathbf{g}_{k}\|^{2}-2n\|\bar{g}_{k}\|^{2}+n\|\bar{g}_{k}\|^{2}$$ $$=\|\mathbf{g}_{k}\|^{2}-n\|\bar{g}_{k}\|^{2}$$ $$\quad(39)$$ Then, $$\rho_{w}^{2}\|\mathbf{g}_{k}-\mathbf{1}\bar{g}_{k}\|^{2}+n\|\bar{g}_{k}\|^{2}=\rho_{w}^{2}\|\mathbf{g}_{k}\|^{2}+n(1-\rho_{w}^{2})\|\bar{g}_{k}\|^{2}$$ $$\leq\rho_{w}^{2}nM+n(1-\rho_{w}^{2})M$$ $$=nM.$$ $$(40)$$ 1. **Proving** limK→∞ PK k=0 kxk −1x¯kk 2 < ∞, limK→∞ PK k=0 kzk+1 −1x¯kk 2 < ∞**, and** limk→∞ kxk − 1x¯kk 2 = 0 Reconsider (36), $$\|{\bf x}_{k+1}-{\bf1}\bar{x}_{k+1}\|^{2}\leq\frac{1+\rho_{w}^{2}}{2}\|{\bf x}_{k}-{\bf1}\bar{x}_{k}\|^{2}+\alpha_{k}^{2}\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}M$$ $$\|{\bf x}_{k}-{\bf1}\bar{x}_{k}\|^{2}\leq\frac{1+\rho_{w}^{2}}{2}\|{\bf x}_{k-1}-{\bf1}\bar{x}_{k-1}\|^{2}+\alpha_{k-1}^{2}\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}M\tag{41}$$ $$\cdots$$ $$\|\mathbf{x}_{1}-\mathbf{1}\bar{x}_{1}\|^{2}\leq{\frac{1+\rho_{w}^{2}}{2}}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|^{2}+\alpha_{0}^{2}{\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}}M.$$ Adding all inequalities in (41), we obtain $$\|\mathbf{x}_{k+1}-\mathbf{1}{\bar{x}}_{k+1}\|^{2}\leq-{\frac{1-\rho_{w}^{2}}{2}}\sum_{l=1}^{k}\|\mathbf{x}_{l}-\mathbf{1}{\bar{x}}_{l}\|^{2}+{\frac{1+\rho_{w}^{2}}{2}}\|\mathbf{x}_{0}-\mathbf{1}{\bar{x}}_{0}\|^{2}+{\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}}M\sum_{l=0}^{k}\alpha_{l}^{2}\mathbf{x}_{l}^{2}+\mathbf{1}\alpha_{l}^{2}\mathbf{x}_{l}^{2},$$ Let k → ∞, then the second and third terms are bounded due to Assumption 2.1. There are then 2 cases: Pl kxl − 1x¯lk 2 P either diverges or converges. Assume the validity of the hypothesis H2 ) l kxl − 1x¯lk 2 diverges, i.e., P∞ l=1 kxl − 1x¯lk 2 → ∞. This leads to $$\left\|\mathbf{x}_{k+1}-\mathbf{1}{\bar{x}}_{k+1}\right\|^{2}<-\infty,$$ as − 1−ρ 2 w 2 < 0. However, kxk+1 − 1x¯k+1k 2should be positive. Thus, hypothesis H2 cannot be true and Pl kxl − 1x¯lk 2converges. Hence, limk→∞ kxk − 1x¯kk 2 = 0 almost surely. Thus, reconsider (36), $$\|{\bf z}_{k+1}-{\bf1}\bar{x}_{k}\|^{2}\leq\frac{1+\rho_{w}^{2}}{2}\|{\bf x}_{k}-{\bf1}\bar{x}_{k}\|^{2}+\alpha_{k}^{2}\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}M$$ $$\sum_{k=0}^{K}\|{\bf z}_{k+1}-{\bf1}\bar{x}_{k}\|^{2}\leq\frac{1+\rho_{w}^{2}}{2}\sum_{k=0}^{K}\|{\bf x}_{k}-{\bf1}\bar{x}_{k}\|^{2}+\frac{n(1+\rho_{w}^{2})}{1-\rho_{w}^{2}}M\sum_{k=0}^{K}\alpha_{k}^{2}\tag{42}$$ $$<\infty.$$ $$(43)$$ 2. **Proving** P∞ k=0 γkαkkxk − 1x¯kk < ∞ By induction from (36), we have $$\|\mathbf{x}_{k+1}-\mathbf{1}{\bar{x}}_{k+1}\|^{2}\leq\big({\frac{1+\rho_{w}^{2}}{2}}\big)^{k+1}\|\mathbf{x}_{0}-\mathbf{1}{\bar{x}}_{0}\|^{2}+{\frac{2n M}{1-\rho_{w}^{2}}}\sum_{j=0}^{k}\big({\frac{1+\rho_{w}^{2}}{2}}\big)^{j+1}\alpha_{k-j}^{2}.$$ . 
$$(43)$$

Since $\sqrt{a+b}<\sqrt{a}+\sqrt{b}$,

$$\|\mathbf{x}_{k+1}-\mathbf{1}\bar{x}_{k+1}\|\leq\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{k+1}{2}}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|+\sqrt{\frac{2nM}{1-\rho_{w}^{2}}}\sum_{j=0}^{k}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{j+1}{2}}\alpha_{k-j}.\tag{44}$$

Then, substituting into the sum $\sum_{k=0}^{\infty}\gamma_{k}\alpha_{k}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|$,

$$\begin{aligned}
&\sum_{k=1}^{\infty}\gamma_{k}\alpha_{k}\Bigg(\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{k}{2}}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|+\sqrt{\frac{2nM}{1-\rho_{w}^{2}}}\sum_{j=0}^{k-1}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{j+1}{2}}\alpha_{k-1-j}\Bigg)\\
&\leq\gamma_{0}\alpha_{0}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|\frac{\sqrt{1+\rho_{w}^{2}}}{\sqrt{2}-\sqrt{1+\rho_{w}^{2}}}+\sqrt{\frac{2nM}{1-\rho_{w}^{2}}}\sum_{k=1}^{\infty}\gamma_{k}\alpha_{k}\sum_{j=0}^{k-1}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{j+1}{2}}\alpha_{k-1-j},
\end{aligned}$$

where the inequality is due to the fact that $\gamma_{k}$ and $\alpha_{k}$ are both decreasing step sizes and we have a geometric sum of ratio $\sqrt{\frac{1+\rho_{w}^{2}}{2}}<1$. We then study the sums in the second term:

$$\begin{aligned}
\sum_{k=1}^{\infty}\gamma_{k}\alpha_{k}\sum_{j=0}^{k-1}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{j+1}{2}}\alpha_{k-1-j}&\leq\sum_{k=1}^{\infty}\gamma_{k}\sum_{j=0}^{k-1}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{j+1}{2}}\alpha_{k-1-j}^{2}\\
&=\sum_{k=1}^{\infty}\gamma_{k}\sum_{j=1}^{k}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{k-j+1}{2}}\alpha_{j-1}^{2}\\
&=\sum_{j=1}^{\infty}\alpha_{j-1}^{2}\sum_{k=j}^{\infty}\gamma_{k}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{k-j+1}{2}}\\
&\leq\gamma_{0}\sum_{j=1}^{\infty}\alpha_{j-1}^{2}\sum_{k=j}^{\infty}\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{k-j+1}{2}}\\
&=\gamma_{0}\frac{\sqrt{1+\rho_{w}^{2}}}{\sqrt{2}-\sqrt{1+\rho_{w}^{2}}}\sum_{j=1}^{\infty}\alpha_{j-1}^{2}\\
&<\infty,
\end{aligned}$$

as $\sum\alpha_{k}^{2}$ converges by Assumption 2.1. Finally, $\sum_{k=0}^{\infty}\gamma_{k}\alpha_{k}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|<\infty$.

## D.3 Convergence Rate Of The Consensus Error $\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}$ And Of $\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}$

As $\sum_{k}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}<\infty$, let us assume that $\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}$ vanishes with the same rate as $\alpha_{k}^{2}$. Then, there must be a scalar $\vartheta_{1}>0$ such that $\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}<\vartheta_{1}^{2}\alpha_{k}^{2}$. To test if such $\vartheta_{1}$ exists, we employ (36) to check whether $\|\mathbf{x}_{k+1}-\mathbf{1}\bar{x}_{k+1}\|^{2}<\vartheta_{1}^{2}\alpha_{k+1}^{2}$ holds:

$$\begin{aligned}
\|\mathbf{x}_{k+1}-\mathbf{1}\bar{x}_{k+1}\|^{2}&\leq\frac{1+\rho_{w}^{2}}{2}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}+\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\alpha_{k}^{2}\\
&\leq\frac{1+\rho_{w}^{2}}{2}\vartheta_{1}^{2}\alpha_{k}^{2}+\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\alpha_{k}^{2}\\
&=\Big(\frac{1+\rho_{w}^{2}}{2}\vartheta_{1}^{2}+\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\Big)\alpha_{k}^{2}.
\end{aligned}\tag{45}$$

Then, testing

$$\begin{aligned}
\Big(\frac{1+\rho_{w}^{2}}{2}\vartheta_{1}^{2}+\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\Big)\alpha_{k}^{2}&\leq\vartheta_{1}^{2}\alpha_{k+1}^{2}\\
\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}&\leq\vartheta_{1}^{2}\Big(\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}-\frac{1+\rho_{w}^{2}}{2}\Big)\\
\frac{\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}}{\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}-\frac{1+\rho_{w}^{2}}{2}}&\leq\vartheta_{1}^{2}.
\end{aligned}\tag{46}$$

Thus, $0<\vartheta_{1}<\infty$ whenever $\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}-\frac{1+\rho_{w}^{2}}{2}>0$. Let us consider $\alpha_{k}$ having the form in Example 2.3; then $\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}=\big(\frac{k+1}{k+2}\big)^{2\upsilon_{1}}$ is an increasing function of $k$ taking values between 0 and 1, and define

$$K_{1}=\operatorname*{arg\,min}_{k:\ \frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}>\frac{1+\rho_{w}^{2}}{2}}k.$$

To test whether $K_{1}$ grows very large, we find the intersection $\frac{\alpha_{k+1}^{2}}{\alpha_{k}^{2}}=\frac{1+\rho_{w}^{2}}{2}$:

$$\begin{aligned}
\Big(\frac{k+1}{k+2}\Big)^{2\upsilon_{1}}&=\frac{1+\rho_{w}^{2}}{2}\\
\frac{k+1}{k+2}&=\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{1}{2\upsilon_{1}}}\\
k+1&=(k+2)\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{\frac{1}{2\upsilon_{1}}}\\
k&=\frac{2\big(\frac{1+\rho_{w}^{2}}{2}\big)^{\frac{1}{2\upsilon_{1}}}-1}{1-\big(\frac{1+\rho_{w}^{2}}{2}\big)^{\frac{1}{2\upsilon_{1}}}}.
\end{aligned}\tag{47}$$

Define the function $h(x,\upsilon_{1})=\frac{2x^{\frac{1}{2\upsilon_{1}}}-1}{1-x^{\frac{1}{2\upsilon_{1}}}}$ for $0<x<1$ and $0.5<\upsilon_{1}<1$. Then

$$\frac{\partial h(x,\upsilon_{1})}{\partial\upsilon_{1}}=-\frac{\exp\big(\frac{\ln x}{2\upsilon_{1}}\big)\ln x}{2\upsilon_{1}^{2}\big(1-x^{\frac{1}{2\upsilon_{1}}}\big)^{2}}>0\ \text{for fixed}\ 0<x<1,\qquad\frac{\partial h(x,\upsilon_{1})}{\partial x}=\frac{x^{\frac{1-2\upsilon_{1}}{2\upsilon_{1}}}}{2\upsilon_{1}\big(1-x^{\frac{1}{2\upsilon_{1}}}\big)^{2}}>0\ \text{for fixed}\ 0.5<\upsilon_{1}<1.$$

Taking an extreme case of $x=\upsilon_{1}=0.99$, we obtain $h(0.99,0.99)\approx196$ iterations. For $x=\upsilon_{1}=0.95$, $h(0.95,0.95)\approx36$ iterations. It decreases even more drastically for realistic choices of $\rho_{w}$ and $\upsilon_{1}$. Thus, it is reasonable to study the rate for $k\geq K_{1}$.
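The values reported above are easy to reproduce. The following short check (ours, with illustrative constants) evaluates $h(x,\upsilon_{1})$ from (47) and also locates $K_{1}$ directly from its definition:

```python
import numpy as np

def h(x, v1):
    """Intersection point (47): the k at which alpha_{k+1}^2/alpha_k^2 = (1+rho_w^2)/2 = x."""
    u = x ** (1.0 / (2.0 * v1))
    return (2.0 * u - 1.0) / (1.0 - u)

print(round(h(0.99, 0.99)))   # ~196, the extreme case quoted in the text
print(round(h(0.95, 0.95)))   # ~36

# K1 located directly from its definition, for alpha_k = (k+1)^{-v1}:
rho_w_sq, v1 = 0.9, 0.95      # chosen so that (1 + rho_w^2)/2 = 0.95
K1 = next(k for k in range(10**6)
          if ((k + 1) / (k + 2)) ** (2 * v1) > (1 + rho_w_sq) / 2)
print(K1)                     # 36: agrees with the ceiling of h(0.95, 0.95)
```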
We conclude that for $k\geq K_{1}$, there exists $0<\vartheta_{1}<\infty$ such that

$$\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}<\vartheta_{1}^{2}\alpha_{k}^{2}.\tag{48}$$

Thus, from (36), for $k\geq K_{1}$, we also have

$$\begin{aligned}
\|\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_{k}\|^{2}&\leq\frac{1+\rho_{w}^{2}}{2}\|\mathbf{x}_{k}-\mathbf{1}\bar{x}_{k}\|^{2}+\alpha_{k}^{2}\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\\
&\leq\Big(\frac{1+\rho_{w}^{2}}{2}\vartheta_{1}^{2}+\frac{n(1+\rho_{w}^{2})M}{1-\rho_{w}^{2}}\Big)\alpha_{k}^{2}\\
&:=\vartheta_{2}^{2}\alpha_{k}^{2}.
\end{aligned}\tag{49}$$

## E Convergence Rate

Our primary result, stated in the following lemma, is based on finding a relation between two successive iterations of the expected divergence.

Lemma E.1. *Let $A=\frac{\lambda c_{3}}{2}$, $B=\frac{4c_{3}L^{2}\vartheta_{1}^{2}}{\lambda n}$, $C=\frac{c_{1}^{2}c_{4}^{6}}{c_{3}\lambda}$, and $E=\frac{\vartheta_{2}}{n}$. Then, for $k>K_{1}$,*

$$D_{k+1}\leq(1-A\alpha_{k}\gamma_{k})D_{k}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}.\tag{50}$$

Proof: See Appendix E.2.

Next, we let

$$K_{2}=\operatorname*{arg\,min}_{k:\ A\alpha_{k}\gamma_{k}<1}k$$

and $K_{0}=\max\{K_{1},K_{2}\}$. For the ensuing part, the purpose is to locate a vanishing upper bound of $D_{k}$, making use of the inequality (50). The idea is to propose a decreasing sequence $U_{k+1}\leq U_{k}$, suppose that $D_{k}\leq U_{k}$, $\forall k\geq K_{0}$, and then verify that $D_{k+1}\leq U_{k+1}$ by induction. The choice of $U_{k}$ is the most difficult component, as one has to keep in mind the general forms of $\alpha_{k}$ and $\gamma_{k}$ in (50) and what kind of decisions to take regarding these forms. An essential property of $U_{k}$ is presented in the subsequent lemma.

Lemma E.2. *If a decreasing sequence $U_{k+1}\leq U_{k}$ for $k\geq K_{0}$ exists such that $D_{k+1}\leq U_{k+1}$ can be deduced from $D_{k}\leq U_{k}$ and (50), then*

$$U_{k}\geq\frac{B}{A}\alpha_{k}^{2}+\frac{C}{A}\gamma_{k}^{2}+\frac{E}{A}\frac{\alpha_{k}}{\gamma_{k}}.\tag{51}$$

Proof: See Appendix E.3.

An important remark is that the lower bound of $U_{k}$ in (51) is vanishing, as $\alpha_{k}^{2}$, $\gamma_{k}^{2}$, and $\frac{\alpha_{k}}{\gamma_{k}}$ are all vanishing. This lower bound provides an insight on the convergence rate of $D_{k}$, as it cannot be better than that of $\alpha_{k}^{2}$, $\gamma_{k}^{2}$, or $\frac{\alpha_{k}}{\gamma_{k}}$. The previous lemma allows us to move forward in confirming the existence of the constants $\varsigma_{1}$ and $\varsigma_{2}$ that permit $D_{k}\leq\varsigma_{1}\gamma_{k}^{2}$ and $D_{k}\leq\varsigma_{2}\frac{\alpha_{k}}{\gamma_{k}}$ in Theorem 3.4, respectively.

## E.1 Proof Of Theorem 3.4

1. **Proof of (12)**

By definition of $\varsigma_{1}$, $D_{K_{0}}\leq\varsigma_{1}\gamma_{K_{0}}^{2}$. The next step is to make sure that $D_{k+1}\leq U_{k+1}$ can be obtained from $D_{k}\leq U_{k}$, $\forall k\geq K_{0}$. Take $U_{k}=\varsigma_{1}\gamma_{k}^{2}$, let $D_{k}\leq U_{k}$ hold, and substitute in (50):

$$D_{k+1}\leq(1-A\alpha_{k}\gamma_{k})\varsigma_{1}\gamma_{k}^{2}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}.$$
We solve Dk+1 ≤ Uk+1 for ς1 ∈ R + $$(1-A\alpha_{k}\gamma_{k})\varsigma_{1}\gamma_{k}^{2}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}\leq U_{k+1}=\varsigma_{1}\gamma_{k+1}^{2}.$$ Then, by considering κk = lg $\kappa_k=\frac{1-(\frac{\gamma_k+1}{\gamma_k})^2}{\alpha_k\gamma_k}>0$ as given in (11), $$B\alpha_{k}^{2}\gamma_{k}^{-2}+E\alpha_{k}\gamma_{k}^{-3}+C\leq\varsigma_{1}(A-\kappa_{k}),$$ and assuming A − κk > 0, we find a constant ς¯1 such that $$\varsigma_{1}\geq\bar{\varsigma_{1}}=\frac{B\alpha_{k}^{2}\gamma_{k}^{-2}+E\alpha_{k}\gamma_{k}^{-3}+C}{A-\kappa_{k}},$$ keeping in mind that Bα2k γ −2 k + Eαkγ −3 k + C is positive by definition. Examine the parameters σ1, σ2, and σ3 as they are introduced in (11), then $$\bar{\varsigma_{1}}\leq\frac{B\sigma_{2}+E\sigma_{3}+C}{A-\sigma_{1}},$$ We conclude that Dk ≤ ς1γ 2 k where ς1 satisfies the definition (13). 2. **Proof of** (14) DK0 ≤ ς2 γK0 αK0 by definition of ς2. ∀k ≥ K0, let Dk ≤ ς2 $K_0$, let $D_k\leq\varsigma_2\frac{\alpha_k}{\gamma_k}$. , then $$D_{k+1}\leq(1-A\alpha_{k}\gamma_{k})\varsigma_{2}\frac{\alpha_{k}}{\gamma_{k}}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}.$$ Solving Dk+1 ≤ ς2 αk+1 γk+1 for ς2 ∈ R $$\mathbf{\tau}_{2}\in\mathbb{R}^{+},$$ $$(1-A\alpha_{k}\gamma_{k})\varsigma_{2}\frac{\alpha_{k}}{\gamma_{k}}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}\leq\varsigma_{2}\frac{\alpha_{k+1}}{\gamma_{k+1}}.$$ Take $\tau_{k}=\frac{\frac{\alpha_{k}}{\tau_{k}}-\frac{\alpha_{k+1}}{\tau_{k+1}}}{\alpha_{k}^{2}}>0$ as given in (11), then $$B\alpha_{k}\gamma_{k}+C\alpha_{k}^{-1}\gamma_{k}^{3}+E\leq(A-\tau_{k})\varsigma_{2}.$$ If $\frac{\alpha_k}{\gamma_k}-\frac{\alpha_{k+1}}{\gamma_{k+1}}<A\alpha_k^2$, then $\exists\ \bar{\varsigma_2}$ such that . $$\varsigma_{2}\geq\bar{\varsigma_{2}}=\frac{B\alpha_{k}\gamma_{k}+C\alpha_{k}^{-1}\gamma_{k}^{3}+E}{(A-\tau_{k})}.$$ $$\beth_2\leq\beth_2=\cfrac{(A-\tau_k)}{(A-\tau_k)}$$ Examine $\sigma_4,\sigma_5$, and $\sigma_6$ that are defined in (11), we can say $$B\sigma_5+C\sigma_6+E_4$$. $$\bar{\varsigma_{2}}\leq\frac{B\sigma_{5}+C\sigma_{6}+E}{(A-\sigma_{4})}.$$ We conclude that Dk ≤ ς2 αk γk with ς2 satisfying (15). ## E.2 Proof Of Lemma E.1 Starting with the same steps as in (27), Dk+1 =E[kx¯k+1 − x ∗k 2] ≤E[ 1 n kzk+1 − 1x¯kk 2 + 2h−αkg¯k, x¯k − x ∗i + dk] =Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2αkE[hx¯k − x ∗, g¯ki] (a) = Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2c3αkγkE[hx¯k − x ∗, h(xk) + ¯bki] =Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2c3αkγkE[hx¯k − x ∗, ∇F(¯xk)i] + 2c3αkγkE[hx¯k − x ∗, ∇F(¯xk) − h(xk)i] − 2c3αkγkE[hx¯k − x ∗, ¯bki] (52) where (a) is due to both E[ek|Hk] = 0 and (21): $$_{k}|{\mathcal{H}}_{k}]=0{\mathrm{~and~}}(21){\mathrm{:}}$$ E[hx¯k − x ∗, g¯ki] = E[hx¯k − x ∗, g¯k − E[¯gk|Hk] + E[¯gk|Hk]i] = E[hx¯k − x ∗, eki] + E[hx¯k − x ∗, E[¯gk|Hk]i] = EHk [E[hx¯k − x ∗, eki|Hk]] + E[hx¯k − x ∗, E[¯gk|Hk]i] = 0 + E[hx¯k − x ∗, E[¯gk|Hk]i]. From Lemma B.1, we have E[kg¯kk 2] < M¯ with M¯ a bounded constant. By the strong convexity in Assumption 1.2, we have $$-2c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\langle\tilde{x}_{k}-x^{*},\nabla\mathcal{F}(\tilde{x}_{k})\rangle]\leq2c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\mathcal{F}(x^{*})-\mathcal{F}(\tilde{x}_{k})]-\lambda c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\|\tilde{x}_{k}-x^{*}\|^{2}]\tag{53}$$ $$\leq-\lambda c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\|\tilde{x}_{k}-x^{*}\|^{2}]$$ $$=-\lambda c_{3}\alpha_{k}\gamma_{k}D_{k},$$ where we used the fact that F(x ∗) − F(¯xk) ≤ 0. 
Next, from Lemma 1.5, we have 2c3αkγkhx¯k − x ∗, ∇F(¯xk) − h(xk)i ≤ 2c3αkγk L √n kx¯k − x ∗kkxk − 1x¯kk (a) ≤ λc3αkγk 4kx¯k − x ∗k 2 + 4c3αkγk L 2 λnkxk − 1x¯kk 2, where (a) is due to 2 √ × √ 1 ha, bi = 2h √a, √ 1 bi ≤ kak 2 + 1 kbk 2. From (48), we have for k ≥ K1, kxk − 1x¯kk 2 ≤ ϑ 2 1α 2 k . Hence, $$2c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\langle\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k})-h(\mathbf{x}_{k})\rangle]\leq\frac{\lambda c_{3}\alpha_{k}\gamma_{k}}{4}D_{k}+\frac{4c_{3}L^{2}\partial_{1}^{2}}{\lambda n}\alpha_{k}^{3}\gamma_{k}.$$ From (22), $$-2c_{3}\alpha_{k}\gamma_{k}\mathbb{E}[\langle\tilde{x}_{k}-x^{*},\tilde{b}_{k}\rangle]\leq\frac{\lambda c_{3}\alpha_{k}\gamma_{k}}{4}D_{k}+\frac{4c_{3}\alpha_{k}\gamma_{k}}{\lambda}\mathbb{E}[\|\tilde{b}_{k}\|^{2}]$$ $$\leq\frac{\lambda c_{3}\alpha_{k}\gamma_{k}}{4}D_{k}+\frac{c_{1}^{2}c_{4}^{2}\alpha_{k}\gamma_{k}^{3}}{c_{3}\lambda}$$ $$(54)$$ $$\left(55\right)$$ $$(56)$$ $$\left(57\right)$$ From (49), for k ≥ K1, we have ${\frac{1}{n}\mathbb{E}[||\mathbf{z}_{k+1}-\mathbf{1}\bar{x}_k||^2]\leq\frac{\vartheta_2}{n}\alpha_k^2.}$ (55) and (56) we get (50). Finally, by combining (52), (53), (54), (55), and (56) we get (50). ## E.3 Proof Of Lemma E.2 Since 1 − Aαkγk > 0 when k ≥ K0, we may substitute Dk ≤ Uk in (50), $$D_{k+1}\leq(1-A\alpha_{k}\gamma_{k})U_{k}+B\alpha_{k}^{3}\gamma_{k}$$ 3 k + Eα2k . Testing Dk+1 ≤ Uk+1 in the previous inequality, we get $$(1-A\alpha_{k}\gamma_{k})U_{k}+B\alpha_{k}$$ $$\cdot A\alpha_{k}\gamma_{k})U_{k}+B\alpha_{k}^{3}\gamma_{k}+C\alpha_{k}\gamma_{k}^{3}+E\alpha_{k}^{2}\leq U_{k+1}\leq U_{k}$$ $$\frac{B}{A}\alpha_{k}^{2}+\frac{C}{A}\gamma_{k}^{2}+\frac{E}{A}\frac{\alpha_{k}}{\gamma_{k}}\leq U_{k}.\tag{10.1}$$ ## E.4 Proof Of Theorem 3.5 Theorem 3.4 indicates that the convergence rate is a function of υ1 and υ2, as γ 2 k ∝ (k + 1)−2υ2 and αk γk ∝ (k + 1)−(υ1−υ2). Nonetheless, we must still verify the validity of the assumptions presented in the theorem, meaning: - Are σ1 < A and σ4 < A fulfilled? - Are ς1 and ς2 bounded? We must remark that in what follows, the analysis is done for k ≥ K0. Let αk and γk have the forms given in (16). 1. **Verifying that** σ1 < A and σ4 < A The idea is to find a bound on α0 and γ0 to guarantee σ1 < A and σ4 < A. We start by bounding σ1 and σ4 from above, i.e., $$\sigma_{1}=\operatorname*{max}_{k\geq K_{0}}{\frac{1-({\frac{\gamma_{k+1}}{\gamma_{k}}})^{2}}{\alpha_{k}\gamma_{k}}}=\operatorname*{max}_{k\geq K_{0}}{\frac{1-(1+{\frac{1}{k+1}})^{-2v_{2}}}{\alpha_{0}\gamma_{0}(k+1)^{-v_{1}-v_{2}}}}$$ and $$\sigma_{4}=\operatorname*{max}_{k\geq K_{0}}{\frac{1-{\frac{\alpha_{k+1}\gamma_{k+1}^{-1}}{\alpha_{k}\gamma_{k}^{-1}}}}{\alpha_{k}\gamma_{k}}}=\operatorname*{max}_{k\geq K_{0}}{\frac{1-(1+{\frac{1}{k+1}})^{-(v_{1}-v_{2})}}{\alpha_{0}\gamma_{0}(k+1)^{-v_{1}-v_{2}}}}.$$ To do so, we define a function q(x) = x −a(1 − (1 + x) −b) with *a, b, x* ∈ (0, 1]. Since x −a ≤ x −1, we have q(x) ≤ x −1(1 − (1 + x) −b) = r(x). To further bound q(x), We study the derivative of r(x) as it is simpler to do so, $$r^{\prime}(x)=x^{-2}{\bigg(}((b+1)x+1)(1+x)^{-b-1}-1{\bigg)}=x^{-2}s(x).$$ Hence the sign of r 0(x) is that of s(x). We again calculate the derivative of s(x) to find its sign, s 0(x) = −b(b + 1)x(1 + x) −b−2 ≤ 0 since b > 0 and x > 0. Then, s(x) is a decreasing function of x over (0, 1]. We remark that limx→0 s(x) = 0, meaning s(x) < 0 and r 0(x) < 0, ∀x ∈ (0, 1]. 
Finally, $$r(x)<\operatorname*{lim}_{x\to0}r(x)={\frac{1-(1+x)^{-b}}{x}}=b,$$ and q(x) ≤ r(x) < b, noting that limx→0 q(x) = b for a = 1. We conclude that σ1 <2υ2 α0γ0 and σ4 < υ1−υ2 α0γ0 . For σ1 < A and σ4 < A to be valid, we must have $$(58)$$ $$\alpha_{0}\gamma_{0}\geq\operatorname*{max}\{2v_{2},v_{1}-v_{2}\}/A.$$ α0γ0 ≥ max{2υ2, υ1 − υ2}/A. (58) ## 2. Verifying That Σ1 And Σ2 Are Bounded The goal is to verify that the constant term in the convergence rate is bounded. Thus, we must check that the lower bounds given in (13) and (15) are indeed finite. We start by analyzing σ2 and σ5, $$\sigma_{2}=\alpha_{0}^{2}\gamma_{0}^{-2}\operatorname*{max}_{k\geq K_{0}}\left(1+k\right)^{-2(v_{1}-v_{2})}=\alpha_{0}^{2}\gamma_{0}^{-2}(1+K_{0})^{-2(v_{1}-v_{2})},\ \ \mathrm{as}\ \ 0<v_{2}\leq v_{1}.$$ and $\sigma_{5}=\alpha_{0}\gamma_{0}\max_{k\geq K_{0}}\left(1+k\right)^{-(v_{1}+v_{2})}=\alpha_{0}\gamma_{0}(1+K_{0})^{-(v_{1}+v_{2})},\ \ \mbox{as}\ \ 0<v_{2}+v_{1}.$ We end with the analysis of σ3 and σ6, i.e., $$\sigma_{3}=\alpha_{0}\gamma_{0}^{-3}\operatorname*{max}_{k\geq K_{0}}\left(1+k\right)^{-\left(\nu_{1}-3\nu_{2}\right)}=\left\{\begin{array}{l l}{{\alpha_{0}\gamma_{0}^{-3}(1+K_{0})^{-\left(\nu_{1}-3\nu_{2}\right)},}}&{{\mathrm{if~}\nu_{1}\geq3\nu_{2},}}\\ {{\infty,}}&{{\mathrm{if~}\nu_{1}<3\nu_{2},}}\end{array}\right.$$ and $$\sigma_{6}=\alpha_{0}^{-1}\gamma_{0}^{3}\operatorname*{max}_{k\geq K_{0}}\;(1+k)^{\upsilon_{1}-3\upsilon_{2}}=\left\{\begin{array}{c l}{{\alpha_{0}^{-1}\gamma_{0}^{3}(1+K_{0})^{\upsilon_{1}-3\upsilon_{2}},}}&{{\mathrm{if}\;\upsilon_{1}\leq3\upsilon_{2},}}\\ {{\infty,}}&{{\mathrm{if}\;\upsilon_{1}>3\upsilon_{2}.}}\end{array}\right.$$ There are clearly 3 cases: - υ1 > 3υ2 Thus, σ3 is bounded. Since σ2 and ς1 (by definition) are also bounded provided that α0γ0 ≥ 2υ2 Ain (58). However, ς2 → ∞ since σ6 → ∞ resulting in a loose upper bound in (14). To that end, we can write Dk ≤ Υ1(1 + k) −2υ2 with Υ1 a bounded constant. - υ1 < 3υ2 Similarly, σ6 is bounded while σ3 → ∞. Then, ∃ Υ2 < ∞, where Dk ≤ Υ2(1 + k) −(υ1−υ2) provided that α0γ0 ≥ υ1−υ2 A. - υ1 = 3υ2 Both σ3 and σ6 are bounded allowing both previous inequalities corresponding to Dk to be valid. By this analysis, we conclude the proof of Theorem 3.5. We present Figure 7 for easier reading of the conditions on the step sizes' exponents where we plot υ2 vs. υ1. Figure 7: Plot of υ2 vs. υ1 where the ![30_image_0.png](30_image_0.png) yellow shaded area is the feasibility region determined by Assumption 2.1. ## F Regret Analysis Consider the following modified definition of expected divergence that we denote by D0k . 
$$D_{k}^{\prime}=\mathbb{E}\Big[{\frac{1}{n}}\sum_{i=1}^{n}\|x_{i,k}-x^{*}\|^{2}\Big].$$ We then develop this entity, D0k+1 =E h1 n Xn i=1 kxi,k+1 − x ∗k 2i (a) ≤E h1 n Xn i=1 kzi,k+1 − x ∗k 2i =E h1 n Xn i=1 Xn j=1 wij (xj,k − αkgj,k) − x ∗ 2i =E h1 n Xn i=1 Xn j=1 wij (xj,k − αkgj,k − x ∗) 2i (b) ≤E h1 n Xn i=1 Xn j=1 wijkxj,k − αkgj,k − x ∗k 2i (c) =E h1 n Xn j=1 kxj,k − αkgj,k − x ∗k 2i =D0k − 2αk 1 n Xn j=1 E[hxj,k − x ∗, gj,ki] + α 2 k 1 n Xn j=1 E[kgj,kk 2] =D0k − 2c3αkγk 1 n Xn j=1 E[hxj,k − x ∗, ∇Fj (xj,k) + bj,ki] + α 2 k 1 n Xn j=1 E[kgj,kk 2] (d) ≤D0k − 2c3αkγk 1 n Xn j=1 E[Fj (xj,k) − Fj (x ∗)] + c3αkγk 1 n Xn j=1 E[kxj,k − x ∗k 2 + kbj,kk 2] + α 2 kM (e) ≤D0k − 2c3αkγk 1 n Xn j=1 E[Fj (xj,k) − Fj (x ∗)] + c3αkγkD0k + c3αkγ 3 k c 6 4 c 2 1 4c 2 3 + α 2 kM, where (a) is by applying the projection inequality (5), (b) is by the convexity of the norm square function, (c) is by the doubly stochastic nature of the matrix W, (d) is by the convexity of the objective function and Lemma B.1, and (e) is by (20). Then, $$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[F_{i}(x_{i,k})-F_{i}(x^{*})]\leq\frac{D^{\prime}_{k}-D^{\prime}_{k+1}}{c_{3}\alpha_{k}\gamma_{k}}+D^{\prime}_{k}+\frac{c_{3}^{0}c_{1}^{2}}{4c_{3}^{2}}\gamma_{k}^{2}+\frac{\alpha_{k}}{c_{3}\gamma_{k}}M.\tag{59}$$ We know that D0k can be written as D0k =E h1 n Xn i=1 kxi,k − x ∗k 2i =E h1 n Xn i=1 kxi,k − x¯k + ¯xk − x ∗k 2i =E h1 n Xn i=1 kxi,k − x¯kk 2 + 2hxi,k − x¯k, x¯k − x ∗i + kx¯k − x ∗k 2i (60) =E h1 n kxk − 1x¯kk 2 + 2hx¯k − x¯k, x¯k − x ∗i + kx¯k − x ∗k 2i =E h1 n kxk − 1x¯kk 2 + kx¯k − x ∗k 2i. Hence, to find the regret bound, we write E 1 n X K k=K0 Xn i=1 Fi(xi,k) − Fi(x ∗) (a) ≤ X K k=K0 D0k − D0k+1 c3αkγk+ D0k + c 6 4 c 2 1 4c 2 3 γ 2 k + αk c3γk M ! =X K k=K0+1 D0k 1 c3αkγk −1 c3αk−1γk−1 +D0K0 c3αK0 γK0 +D0K+1 c3αK+1γK+1 + X K k=K0 D0k + c 6 4 c 2 1 4c 2 3 γ 2 k + αk c3γk M (b) = 1 c3α0γ0 + 1 X K k=K0+1 D0k + 1 c3αK0 γK0 + 1D0K0 +D0K+1 c3αK+1γK+1 + X K k=K0 c 6 4 c 2 1γ 2 0 4c 2 3 1 √k + 1 + Mα0 c3γ0 1 √k + 1 (c) ≤ 1 c3α0γ0 + 1 X K k=K0+1 ϑ 2 1α 2 0 n 1 (k + 1) 32 + Υ 1 √k + 1 + (K + 2) c3α0γ0 ϑ 2 1α 2 0 n 1 (K + 2) 32 + Υ 1 √K + 2 + 1 c3αK0 γK0 + 1D0K0 + c 6 4 c 2 1γ 2 0 4c 2 3 + Mα0 c3γ0 X K 1 √k + 1 k=K0 where (a) is following up from (59), (b) is by substituting αk = α0(k + 1)− 34 and γk = γ0(k + 1)− 14 , and (c) is by (60), Lemma 3.3, and Theorem 3.5. To find an upper bound, we interpret the sums over K0+1 ≤ k ≤ K as Riemann sums in which the functions 1 (u+1) 3 2 and √ 1 u+1 are evaluated at the right endpoint of the interval [i − 1, i] for i = K0 + 1, K0 + 2*, . . . , K*. Since the functions 1 (u+1) 3 2and √ 1 u+1 are monotonically decreasing, the sums are in fact *lower* Riemann sums and therefore bounded from above by the integrals R K K0 1 (u+1) 3 2 du and R K K0 √ 1 u+1 du, respectively. $$\sum_{k=K_{0}+1}^{K}{\frac{1}{(k+1)^{\frac{3}{2}}}}\leq\int_{K_{0}}^{K}{\frac{1}{(u+1)^{\frac{3}{2}}}}d u=2\Big({\frac{1}{\sqrt{K_{0}+1}}}-{\frac{1}{\sqrt{K+1}}}\Big)$$ $$\sum_{k=K_{0}+1}^{K}{\frac{1}{\sqrt{k+1}}}\leq\int_{K_{0}}^{K}{\frac{1}{\sqrt{u+1}}}d u=2(\sqrt{K+1}-\sqrt{K_{0}+1})$$ Finally, E 1 n X K k=K0 Xn i=1 Fi(xi,k) − Fi(x ∗) ≤2 1 c3α0γ0 + 1 ϑ 2 1α 2 0 n 1 √K0 + 1 −1 √K + 1 + Υ(√K + 1 −pK0 + 1)! 
+1 c3α0γ0 ϑ 2 1α 2 0 n 1 √K + 2 + Υ√K + 2 + 1 c3αK0 γK0 + 1D0K0 + c 6 4 c 2 1γ 2 0 2c 2 3 + 2Mα0 c3γ0 ( √K + 1 −pK0) ## G Convergence Rate With Constant Step Sizes We start by going over previous derivations, $$\tilde{g}_{i,k}=\mathbb{E}_{\Phi,\xi_{i}}[\Phi_{i,k}(f_{i}(x_{i,k}+\gamma\Phi_{i,k},S_{i,k})+\zeta_{i,k})|\mathcal{H}_{k}]$$ $$=\mathbb{E}_{\Phi}[\Phi_{i,k}F_{i}(x_{i,k}+\gamma\Phi_{i,k})|\mathcal{H}_{k}]$$ $$=F_{i}(x_{i,k})\mathbb{E}_{\Phi}[\Phi_{i,k}]+\gamma\mathbb{E}_{\Phi}[\Phi_{i,k}\Phi_{i,k}^{T}|\mathcal{H}_{k}]\nabla F_{i}(x_{i,k})+\frac{\gamma^{2}}{2}\mathbb{E}_{\Phi}[\Phi_{i,k}\Phi_{i,k}^{T}\nabla^{2}F_{i}(\tilde{x}_{i,k})\Phi_{i,k}|\mathcal{H}_{k}]$$ $$=c_{37}[\nabla F_{i}(x_{i,k})+b_{i,k}].$$ Thus, bi,k =γ 2c3 EΦ[Φi,kΦ T i,k∇2Fi(˜xi,k)Φi,k|Hk]. Let Assumptions 1.2 and 2.2 hold. Then, we can bound the bias as $$\|b_{i,k}\|\leq\frac{\gamma}{2c_{3}}\mathbb{E}_{\Phi}[\|\Phi_{i,k}\|_{2}\|\Phi_{i,k}^{T}\|_{2}\|\nabla^{2}F_{i}(\tilde{x}_{i,k})\|_{2}\|\Phi_{i,k}\|_{2}|\mathcal{H}_{k}]$$ $$\leq\gamma\frac{c_{4}^{3}c_{1}}{2c_{3}}.$$ We remark that $$\begin{split}\tilde{g}_{k}&=\mathbb{E}[\bar{g}_{k}|\mathcal{H}_{k}]\\ &=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[g_{i,k}|\mathcal{H}_{k}]\\ &=\frac{1}{n}\sum_{i=1}^{n}c_{3}\gamma[\nabla F_{i}(x_{i,k})+b_{i,k}]\\ &=c_{3}\gamma[h(\mathbf{x}_{k})+\bar{b}_{k}]\end{split}$$ (61) $\frac{1}{2}$ is also a biased estimator of h(xk) with $$\begin{split}\|\bar{b}_{k}\|&=\|\frac{1}{n}\sum_{i=1}^{n}b_{i,k}\|\\ &\leq\frac{1}{n}\sum_{i=1}^{n}\|b_{i,k}\|\\ &\leq\frac{1}{n}\sum_{i=1}^{n}\gamma\frac{c_{4}^{3}c_{1}}{2c_{3}}\\ &=\gamma\frac{c_{4}^{3}c_{1}}{2c_{3}}.\end{split}$$ $$(62)$$ Lemma G.1. Let all Assumptions 1.3, 2.2, and 2.4 hold, then there exists a bounded constant M > ¯ 0, such that E[kg¯kk 2] < M¯ . Proof. ∀i ∈ N , we have $$\mathbb{E}[\|g_{i,k}\|^{2}|{\mathcal{H}}_{k}$$ E[kgi,kk 2|Hk] = E[kΦi,k(fi(xi,k + γΦi,k, Si,k) + ζi,k)k 2|Hk] = E[kΦi,kk 2kfi(xi,k + γΦi,k, Si,k) + ζi,kk 2|Hk] (a) ≤ c 2 4E[(fi(xi,k + γΦi,k, Si,k) + ζi,k) 2|Hk] (b) = c 2 4E[f 2 i (xi,k + γΦi,k, Si,k)|Hk] + c 2 4 c2 (c) < ∞, $\square$ where $(a)$ is due to Assumption 2.2, $(b)$ Assumption 1.3, and $(c)$ Assumption 2.4. The stochastic noise is still defined as $e_{k}=\tilde{g}_{k}-\tilde{g}_{k}$ and retains its property $$\mathbb{E}[e_{k}]=\mathbb{E}[\tilde{g}_{k}-\mathbb{E}[\tilde{g}_{k}|\mathcal{H}_{k}]]=\mathbb{E}_{\mathcal{H}_{k}}\left[\mathbb{E}\Big{[}\tilde{g}_{k}-\mathbb{E}[\tilde{g}_{k}|\mathcal{H}_{k}]\Big{|}\mathcal{H}_{k}\Big{]}\right]=0.$$ 1. **Proving** kxk − 1x¯kk 2 and kzk+1 − 1x¯kk 2 **converge linearly** kxk+1 − 1x¯k+1k 2 =kxk+1 − 1x¯k + 1x¯k − 1x¯k+1k 2 =kxk+1 − 1x¯kk 2 + 2hxk+1 − 1x¯k, 1x¯k − 1x¯k+1i + k1x¯k − 1x¯k+1k 2 (a) =kxk+1 − 1x¯kk 2 − k1x¯k − 1x¯k+1k 2 ≤kxk+1 − 1x¯kk 2 = Xn i=1 kxi,k+1 − x¯kk 2 (b) ≤ Xn i=1 kzi,k+1 − x¯kk 2 =kzk+1 − 1x¯kk 2 =kWxk − αWgk − 1x¯kk 2 =kWxk − 1x¯kk 2 − 2αhWxk − 1x¯k, Wgki + α 2kWgkk 2 (c) ≤ kWxk − 1x¯kk 2 + α[ 1 − ρ 2 w 2ρ 2wα kWxk − 1x¯kk 2 + 2ρ 2 wα 1 − ρ 2w kWgkk 2] + α 2kWgkk 2 (d) ≤ρ 2 wkxk − 1x¯kk 2 + α[ 1 − ρ 2 w 2αkxk − 1x¯kk 2 + 2ρ 2 wα 1 − ρ 2w kWgkk 2] + α 2kWgkk 2 = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 1 + ρ 2 w 1 − ρ 2w kWgkk 2 = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 1 + ρ 2 w 1 − ρ 2w kWgk − 1g¯k + 1g¯kk 2 (e) = 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 1 + ρ 2 w 1 − ρ 2w kWgk − 1g¯kk 2 + α 2 n(1 + ρ 2 w) 1 − ρ 2w kg¯kk 2 ≤ 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 ρ 2 w(1 + ρ 2 w) 1 − ρ 2w kgk − 1g¯kk 2 + α 2 n(1 + ρ 2 w) 1 − ρ 2w kg¯kk 2 (f) ≤ 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 n(1 + ρ 2 w) 1 − ρ 2wM. 
(63) where (a) is by (37), (b) is the projection inequality (5) noting that x¯k ∈ K since K is a convex set (so projecting it onto K gives us the same point), (c) is by −2× 1 h*a, b*i = −2ha, 1 bi ≤ 2kak 2 + 1 2 kbk 2 (d) is by Lemma 1.4, (e) is by (38), and (f) is by (39) and (40). By induction, we have $$\begin{split}\|\mathbf{x}_{k+1}-\mathbf{1}\bar{x}_{k+1}\|^{2}&\leq\big{(}\frac{1+\rho_{\mathrm{w}}^{2}}{2}\big{)}^{k+1}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|^{2}+\alpha^{2}\frac{2nM}{1-\rho_{\mathrm{w}}^{2}}\sum_{j=0}^{k}\big{(}\frac{1+\rho_{\mathrm{w}}^{2}}{2}\big{)}^{j+1}\\ &\leq\big{(}\frac{1+\rho_{\mathrm{w}}^{2}}{2}\big{)}^{k+1}\|\mathbf{x}_{0}-\mathbf{1}\bar{x}_{0}\|^{2}+\alpha^{2}\frac{2nM(1+\rho_{\mathrm{w}}^{2})}{(1-\rho_{\mathrm{w}}^{2})^{2}},\end{split}\tag{64}$$ where the last inequality is due to the geometric sum with 1+ρ to the geometric sum with $\frac{1+\rho_w^2}{2}<1$ . We conclude that kxk − 1x¯kk 2converges linearly to an α 2 neighborhood of 0, almost surely. Substituting in (63), kzk+1 − 1x¯kk 2 ≤ 1 + ρ 2 w 2kxk − 1x¯kk 2 + α 2 n(1 + ρ 2 w) 1 − ρ 2wM ≤ 1 + ρ 2 w 2 1 + ρ 2 w 2 kkx0 − 1x¯0k 2 + α 2 2nM(1 + ρ 2 w) (1 − ρ 2w) 2 ! + α 2 n(1 + ρ 2 w) 1 − ρ 2wM (65) =1 + ρ 2 w 2 k+1kx0 − 1x¯0k 2 + α 2nM1 + ρ 2 w 1 − ρ 2w 2+ 1 + ρ 2 w 1 − ρ 2w Finally, kzk+1 − 1x¯kk 2converges linearly to an α 2 neighborhood of 0, almost surely, as well. 2. **Proving** Dk = E[kx¯k − x ∗k 2] **converges linearly** Dk+1 =E[kx¯k+1 − x ∗k 2] ≤E[ 1 n kzk+1 − 1x¯kk 2 + 2h−αg¯k, x¯k − x ∗i + dk] =Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2αE[hx¯k − x ∗, g¯ki] (a) = Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2c3αγE[hx¯k − x ∗, h(xk) + ¯bki] =Dk + 1 n E[kzk+1 − 1x¯kk 2] − 2c3αγE[hx¯k − x ∗, ∇F(¯xk)i] + 2c3αγE[hx¯k − x ∗, ∇F(¯xk) − h(xk)i] − 2c3αγE[hx¯k − x ∗, ¯bki] (66) where (a) is due to both E[ek|Hk] = 0 and (21): E[hx¯k − x ∗, g¯ki] = E[hx¯k − x ∗, g¯k − E[¯gk|Hk] + E[¯gk|Hk]i] = E[hx¯k − x ∗, eki] + E[hx¯k − x ∗, E[¯gk|Hk]i] = EHk [E[hx¯k − x ∗, eki|Hk]] + E[hx¯k − x ∗, E[¯gk|Hk]i] = 0 + E[hx¯k − x ∗, E[¯gk|Hk]i]. From Lemma G.1, we have E[kg¯kk 2] < M¯ with M¯ a bounded constant. By the strong convexity in Assumption 1.2, we have $$-2c_{3}\alpha\gamma\mathbb{E}[(\bar{x}_{k}-x^{*},\nabla\mathcal{F}(\bar{x}_{k}))]\leq2c_{3}\alpha\gamma\mathbb{E}[\mathcal{F}(x^{*})-\mathcal{F}(\bar{x}_{k})]-\lambda c_{3}\alpha\gamma\mathbb{E}[\|\bar{x}_{k}-x^{*}\|^{2}]$$ $$\leq-\lambda c_{3}\alpha\gamma\mathbb{E}[\|\bar{x}_{k}-x^{*}\|^{2}]$$ $$=-\lambda c_{3}\alpha\gamma D_{k},$$ $$(67)$$ where we used the fact that F(x ∗) − F(¯xk) ≤ 0. Next, from Lemma 1.5, we have 2c3αγhx¯k − x ∗, ∇F(¯xk) − h(xk)i ≤ 2c3αγ L √n kx¯k − x ∗kkxk − 1x¯kk (a) ≤ λc3αγ 4kx¯k − x ∗k 2 + 4c3αγ L 2 λnkxk − 1x¯kk 2, where (a) is due to 2 √ × √ 1 ha, bi = 2h √a, √ 1 bi ≤ kak 2 + 1 kbk 2. In (64) and (65), we let R = kx0 − 1x¯0k 2, and G1 = 2nM(1+ρ 2 w) (1−ρ2w) 2 , G2 = nM1+ρ 2 w 1−ρ2w 2+ 1+ρ 2 w 1−ρ2w , kxk − 1x¯kk 2 ≤1 + ρ 2 w 2 kR + α 2G1 and kzk+1 − 1x¯kk 2 ≤1 + ρ 2 w 2 k+1R + α 2G2. (68) Hence, $$2c_{3}\alpha\gamma\mathbb{E}[\left(\bar{x}_{k}-x^{\ast},\nabla\mathcal{F}(\bar{x}_{k})-h(\mathbf{x}_{k})\right)]\leq\frac{\lambda c_{3}\alpha\gamma}{4}D_{k}+4c_{3}\alpha\gamma\frac{L^{2}}{\lambda n}\Big[\Big(\frac{1+\rho_{w}^{2}}{2}\Big)^{k}R+\alpha^{2}G_{1}\Big].$$ (62) i. 
(69) From (62), $$-2c_{3}\alpha\gamma\mathbb{E}[\langle\bar{x}_{k}-x^{*},\bar{b}_{k}\rangle]\leq\frac{\lambda c_{3}\alpha\gamma}{4}D_{k}+\frac{4c_{3}\alpha\gamma}{\lambda}\mathbb{E}[\|\bar{b}_{k}\|^{2}]$$ $$\leq\frac{\lambda c_{3}\alpha\gamma}{4}D_{k}+\alpha\gamma^{3}\frac{c_{1}^{2}c_{4}^{6}}{c_{3}\lambda}$$ $$(69)$$ $$(70)$$ Finally, by combining (66), (67), (68), (69), and (70), and setting now A = λc3 2 , B = 4c3L 2 λn , and ``` C = c 2 1 c 6 4 c3λ , we get ``` $$D_{k+1}\leq(1-A\alpha\gamma)D_{k}+B\alpha\gamma\Big{[}\Big{(}\frac{1+\rho_{\rm w}^{2}}{2}\Big{)}^{k}R+\alpha^{2}G_{1}\Big{]}+\frac{1}{n}\Big{[}\Big{(}\frac{1+\rho_{\rm w}^{2}}{2}\Big{)}^{k+1}R+\alpha^{2}G_{2}\Big{]}+C\alpha\gamma^{3}\tag{71}$$ $$=(1-A\alpha\gamma)D_{k}+R\Big{[}B\alpha\gamma+\frac{1}{n}\Big{(}\frac{1+\rho_{\rm w}^{2}}{2}\Big{)}\Big{]}\Big{(}\frac{1+\rho_{\rm w}^{2}}{2}\Big{)}^{k}+\alpha^{3}\gamma BG_{1}+\alpha^{2}\frac{1}{n}G_{2}+\alpha\gamma^{3}C.$$ Let %1 = 1 − Aαγ and %2 = 1+ρ 2 w 2. Then, assuming αγ < 1A and taking the telescoping sum Dk+1 ≤% k+1 1 D0 + R Bαγ + %2 n X k i=0 % i 1% k−i 2 + (α 3γBG1 + α 2 1 n G2 + αγ3C) X k i=0 % i 1 =% k+1 1 D0 + R Bαγ + %2 n X k i=0 % i 1% k−i 2 + (α 3γBG1 + α 2 1 n G2 + αγ3C) 1 − % k+1 1 1 − %1 (72) =% k+1 1 D0 + R Bαγ + %2 n X k i=0 % i 1% k−i 2 + α 2 BG1 A+ α γ G2 nA + γ 2 C A (1 − % k+1 1) ≤% k+1 1 D0 + R Bαγ + %2 n X k i=0 % i 1% k−i 2 + α 2 BG1 A+ α γ G2 nA + γ 2 C A where in the last equality, we further imposed the step sizes to satisfy α < γ. $$(73)$$ In what follows, we discuss the summation in the second term of the inequality to avoid setting loose bounds. We know that this summation can be written as follows, $$\sum_{i=0}^{k}\varrho_{1}^{i}\varrho_{2}^{k-i}=\sum_{i=0}^{k}\varrho_{1}^{k-i}\varrho_{2}^{i}\tag{1}$$ Thus, without imposing further assumptions on the step sizes, we consider the following function the two cases: - When %1 ≤ %2, we use the left hand side of the previous equality Dk+1 ≤% k+1 1 D0 + R Bαγ + %2 n % k 2 X k i=0 % i 1% −i 2 + α 2 BG1 A+ α γ G2 nA + γ 2 C A ≤% k+1 1 D0 + R Bαγ + %2 n % k 2 1 1 − %1 %2 + α 2 BG1 A+ α γ G2 nA + γ 2 C A (74) =% k+1 1 D0 + % k+1 2 2R Bαγ + %2 n 2Aαγ + ρ 2w − 1 + α 2 BG1 A+ α γ G2 nA + γ 2 C A Then, for arbitrary small step sizes satisfying *αγ <* 1A and *α < γ*, Dk converges with the linear rate of O% k 2 . - When %1 > %2, we use the right hand side Dk+1 ≤% k+1 1 D0 + R Bαγ + %2 n % k 1 X k i=0 % −i% i 2 + α 2 BG1 A+ α γ G2 nA + γ 2 C A ≤% k+1 1 D0 + R Bαγ + %2 n % k 1 1 1 − %2 %1 + α 2 BG1 A+ α γ G2 nA + γ 2 C A (75) =% k+1 1D0 + 2RBαγ + 2R%2 n 1 − 2Aαγ − ρ 2w + α 2 BG1 A+ α γ G2 nA + γ 2 C A Then, for arbitrary small step sizes satisfying *αγ <* 1A and *α < γ*, Dk converges with the linear rate of O% k 1 . ## H Additional Numerical Examples Figures 8-10 depict the classification of images with the labels 2 and 3 and Figures 11-13 depict those with ![36_image_0.png](36_image_0.png) the labels 3 and 4. Figure 8: Expected loss function evolution of the proposed algorithm vs. DSGT, EXTRA, and 1PGD considering vanishing vs. constant step sizes classifying images with labels 2 and 3. Figure 9: Expected test accuracy evolution of the ![36_image_1.png](36_image_1.png) proposed algorithm vs. DSGT, EXTRA, and 1PGD considering vanishing vs. constant step sizes classifying images with labels 2 and 3. ![37_image_1.png](37_image_1.png) Figure 10: Expected consensus error evolution of the proposed algorithm vs. DSGT and EXTRA considering vanishing vs. constant step sizes classifying images with labels 2 and 3. 
Figure 11: Expected loss function evolution of the proposed algorithm vs. DSGT, EXTRA, and 1P-GD considering vanishing vs. constant step sizes classifying images with labels 3 and 4.

Figure 12: Expected test accuracy evolution of the proposed algorithm vs. DSGT, EXTRA, and 1P-GD considering vanishing vs. constant step sizes classifying images with labels 3 and 4.

Figure 13: Expected consensus error evolution of the proposed algorithm vs. DSGT and EXTRA considering vanishing vs. constant step sizes classifying images with labels 3 and 4.
# Normed Spaces For Graph Embedding

Diaaeldin Taha∗ *diaaeldin.taha@mis.mpg.de*
Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany

Wei Zhao∗ *wei.zhao@abdn.ac.uk*
University of Aberdeen, Aberdeen, United Kingdom

J. Maxwell Riestenberg *max.riestenberg@mis.mpg.de*
Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany

Michael Strube *michael.strube@h-its.org*
Heidelberg Institute for Theoretical Studies, Heidelberg, Germany

Reviewed on OpenReview: *https://openreview.net/forum?id=4E2XLydJiv*

## Abstract

Theoretical results from discrete geometry suggest that normed spaces can abstractly embed finite metric spaces with surprisingly low theoretical bounds on distortion in low dimensions. Inspired by this theoretical insight, we highlight in this paper normed spaces as a more flexible and computationally efficient alternative to several popular Riemannian manifolds for learning graph embeddings. Normed space embeddings significantly outperform several popular manifolds on a large range of synthetic and real-world graph reconstruction benchmark datasets while requiring significantly fewer computational resources. We also empirically verify the superiority of normed space embeddings on growing families of graphs associated with negative, zero, and positive curvature, further reinforcing the flexibility of normed spaces in capturing diverse graph structures as graph sizes increase. Lastly, we demonstrate the utility of normed space embeddings on two applied graph embedding tasks, namely, link prediction and recommender systems. Our work highlights the potential of normed spaces for geometric graph representation learning, raises new research questions, and offers a valuable tool for experimental mathematics in the field of finite metric space embeddings. We make our code and data publicly available.¹

## 1 Introduction

Graph representation learning aims to embed real-world graph data into ambient spaces while sufficiently preserving the geometric and statistical graph structures for subsequent downstream tasks and analysis. Graph data in many domains exhibit non-Euclidean features, making Euclidean embedding spaces an unfit choice. Motivated by the manifold hypothesis (see, e.g., Bengio et al. (2013)), recent research work has proposed embedding graphs into Riemannian manifolds (Chamberlain et al., 2017; Defferrard et al., 2020; Grattarola et al., 2020; Gu et al., 2019; Tifrea et al., 2019). These manifolds introduce inductive biases, such as symmetry and curvature, that can match the underlying graph properties, thereby enhancing the quality of the embeddings. For instance, Chamberlain et al. (2017) and Defferrard et al. (2020) proposed embedding graphs into hyperbolic and spherical spaces, with the choice determined by the graph structures.

¹https://github.com/andyweizhao/graphs-normed-spaces
∗These authors contributed equally to this work.

More recently, López et al. (López et al., 2021; López et al., 2021) proposed Riemannian symmetric spaces as a framework that unifies many Riemannian manifolds previously considered for representation learning. They also highlighted the Siegel and SPD symmetric spaces, whose geometries combine the sought-for inductive biases of many manifolds. However, operations in these non-Euclidean spaces are computationally demanding and technically challenging, making them impractical for embedding large graphs.
In this work, we highlight normed spaces, particularly $\ell_1^d$ and $\ell_\infty^d$, as a more flexible, more computationally efficient, and less technically challenging alternative to several popular Riemannian manifolds for learning graph embeddings. In particular, normed spaces are empirically observed to be easier to train and to perform better in general graph embedding settings that leverage gradient descent. Our proposal is motivated by theoretical results from discrete geometry, which suggest that normed spaces can abstractly embed finite metric spaces with surprisingly low theoretical bounds on distortion in low dimensions. This is evident in the work of Bourgain (1985); Johnson & Lindenstrauss (1984) and Johnson et al. (1987).

We evaluate the representational capacity of normed spaces on synthetic and real-world benchmark graph datasets through a graph reconstruction task. Our empirical results corroborate the theoretical motivation; as observed in our experiments, diverse classes of graphs with varying structures can be embedded in low-dimensional normed spaces with low average distortion. Second, we find that normed spaces consistently outperform Euclidean spaces, hyperbolic spaces, Cartesian products of these spaces, Siegel spaces, and spaces of SPD matrices across test setups. Further empirical analysis shows that the embedding capacity of normed spaces remains robust across varying graph curvatures and with increasing graph sizes. Moreover, the computational resource requirements for normed spaces grow much more slowly than those of other Riemannian manifold alternatives as the graph size increases. Lastly, we showcase the versatility of normed spaces in two applied graph embedding tasks, namely, link prediction and recommender systems, with the $\ell_1$ normed space surpassing the baseline spaces.

As the field increasingly shifts towards technically challenging geometric methods, our work underscores the untapped potential of simpler geometric techniques. As demonstrated by our experiments, normed spaces set a compelling baseline for future work in geometric representation learning.

## 2 Related Work

Graph embeddings are mappings of discrete graphs into continuous spaces, commonly used as substitutes for the graphs in machine learning pipelines. There are numerous approaches for producing graph embeddings, and we highlight some representative examples: (1) matrix factorization methods (Belkin & Niyogi, 2001; Cai et al., 2010; Tang & Liu, 2011), which decompose adjacency or Laplacian matrices into smaller matrices, providing robust mathematical vector representations of nodes; (2) graph neural networks (GNNs) (Kipf & Welling, 2017; Veličković et al., 2018; Chami et al., 2019a), which use message passing to aggregate node information, effectively capturing local and global statistical graph structures; (3) autoencoder approaches (Kipf & Welling, 2016; Salha et al., 2019), which involve a two-step process of encoding and decoding to generate graph embeddings; (4) random walk approaches (Perozzi et al., 2014; Grover & Leskovec, 2016; Kriege, 2022), which simulate random walks on the graph, capturing node proximity in the embedding space through co-occurrence probabilities; and (5) geometric approaches (Gu et al., 2019; López et al., 2021), which leverage the geometric inductive bias of embedding spaces to align with the inherent graph structures, typically aiming to learn approximate isometric embeddings of the graphs in the embedding spaces. We note that these categories are not mutually exclusive.
For instance, matrix factorization can be seen as a linear autoencoder approach, and geometric approaches can be combined with graph neural networks. Here we follow previous work (Gu et al., 2019; López et al., 2021; López et al., 2021; Giovanni et al., 2022) and use a geometric approach to produce graph embeddings in normed spaces.

Recently, there has been a growing interest in geometric deep learning, especially in the use of Riemannian manifolds for graph embeddings. Those manifolds include hyperbolic spaces (Chamberlain et al., 2017; Ganea et al., 2018; Nickel & Kiela, 2018; López et al., 2019), spherical spaces (Meng et al., 2019; Defferrard et al., 2020), combinations thereof (Bachmann et al., 2020; Grattarola et al., 2020; Law & Stam, 2020), Cartesian products of spaces (Gu et al., 2019; Tifrea et al., 2019), Grassmannian manifolds (Huang et al., 2018), spaces of symmetric positive definite matrices (SPD) (Huang & Gool, 2017; Cruceru et al., 2020), and Siegel spaces (López et al., 2021). All these spaces are special cases of Riemannian symmetric spaces, also known as *homogeneous spaces*. Non-homogeneous spaces have also been explored for embedding heterogeneous graphs (Giovanni et al., 2022). Other examples of mathematical spaces include Hilbert spaces (Sriperumbudur et al., 2010; Herath et al., 2017), Lie groups (Falorsi et al., 2018) (such as the torus (Ebisu & Ichise, 2018)), non-abelian groups (Yang et al., 2020), and pseudo-Riemannian manifolds of constant nonzero curvature (Law & Stam, 2020). These spaces introduce inductive biases that align with critical graph features. For instance, hyperbolic spaces, known for embedding infinite trees with arbitrarily low distortion (Sarkar, 2012), are particularly suitable for hierarchical data. Though the computations involved in working with these spaces are tractable, they incur non-trivial computational costs and often pose technical challenges. In contrast, normed spaces, which we focus on in this work, avoid these complications.

## 3 Theoretical Inspiration

In discrete geometry, abstract embeddings of finite metric spaces into normed spaces, which are characterized by low theoretical distortion bounds, have long been studied. Here we review some of the existence results that motivated our work. These results provide a rationale for using normed spaces to embed various graph types, much like hyperbolic spaces are often matched with hierarchical graph structures. While these results offer a strong motivation for our experiments, we emphasize that these theoretical insights do not immediately translate to or predict our empirical results. Theoretical results and embedding spaces investigated in this work are summarized in Tables 7 and 8 (Appendix).

It is a well-known fact that any $n$-point metric space can theoretically be isometrically embedded into $\ell_\infty^n$. For many classes of graphs, the theoretical bound on dimension can be substantially lowered: the complete graph $K_n$ can theoretically be isometrically embedded in $\ell_1^{\lceil\log_2(n)\rceil}$, every tree $T$ with $n$ vertices can theoretically be isometrically embedded in $\ell_\infty^{O(\log n)}$, and every tree $T$ with $\ell$ leaves can theoretically be isometrically embedded in $\ell_\infty^{O(\log \ell)}$ (Linial et al., 1995).
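As a side remark, the first fact above admits a one-line construction, the Fréchet embedding $u \mapsto (d(u, v))_{v \in V}$, which is an exact isometry into $\ell_\infty^n$ by the triangle inequality. The following short check (ours; it assumes scipy is available and that the sampled random graph is connected) verifies this numerically on a shortest-path metric:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                       # undirected random graph
np.fill_diagonal(A, 0)

D = shortest_path(A, unweighted=True, directed=False)   # shortest-path metric d_G
assert np.isfinite(D).all(), "sampled graph is disconnected; try another seed"

# Frechet embedding: node u -> (d_G(u, v))_{v in V}, one l_inf^n coordinate per node.
emb_dists = pdist(D, metric="chebyshev")     # l_inf distances between rows of D
graph_dists = squareform(D)                  # the original pairwise graph distances

print(np.max(np.abs(emb_dists - graph_dists)))   # 0.0: the embedding is an exact isometry
```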
Bourgain showed that similar dimension bounds can be obtained for finite metric spaces in general by relaxing the requirement that the embedding is isometric. A map f : X → Y between two metric spaces (X, d_X) and (Y, d_Y) is called a *D-embedding* for a real number D ≥ 1 if there exists a number r > 0 such that for all x1, x2 ∈ X,

r · d_X(x1, x2) ≤ d_Y(f(x1), f(x2)) ≤ D · r · d_X(x1, x2).

The infimum of the numbers D such that f is a D-embedding is called the *distortion* of f. Every n-point metric space (X, d) can be embedded in an O(log n)-dimensional Euclidean space with O(log n) distortion (Bourgain, 1985). Johnson and Lindenstrauss obtained stronger control on the distortion at the cost of increasing the embedding dimension: any set of n points in a Euclidean space can be mapped to R^t with t = O(log n / ε²) and distortion at most 1 + ε in the distances, and such a mapping may be found in randomized polynomial time (Johnson & Lindenstrauss, 1984). Similar embedding theorems were obtained by Linial et al. for other ℓp-spaces: in randomized polynomial time, (X, d) may be embedded in ℓp^{O(log n)} (for any 1 ≤ p ≤ 2) with distortion O(log n), or into ℓp^{O(log² n)} (for any p > 2) with distortion O(log n) (Linial et al., 1995).

When the class of graphs is restricted, stronger embedding theorems are known. Krauthgamer et al. obtain embeddings with bounded distortion for graphs that exclude certain minors; we mention the special case of planar graphs: any n-point edge-weighted planar graph X, equipped with the shortest path metric, embeds into ℓ∞^{O(log n)} with O(1) distortion (Krauthgamer et al., 2004). Furthermore, there are results on the limitations of embedding graphs into ℓp-spaces. For example, Linial et al. show that their embedding result for 1 ≤ p ≤ 2 is sharp by considering expander graphs: every embedding of an n-vertex constant-degree expander into an ℓp space, 1 ≤ p ≤ 2, of any dimension, has distortion Ω(log n), and the metric space of such a graph cannot be embedded with constant distortion in any normed space of dimension O(log n) (Linial et al., 1995).

Figure 1: Embedding distortion across spaces on a small synthetic graph, with color indicating distortion levels (the absolute difference between graph edge and norm distances). The graph embeds well in the ℓ1 and ℓ∞ normed spaces but endures distortion in other spaces.

These theoretical results illustrate the principle that large classes of finite metric spaces can in theory be abstractly embedded with low theoretical bounds on distortion in low-dimensional normed spaces. Furthermore, the distortion and dimension can be substantially improved when the class of metric spaces is restricted. This leaves open many practical questions about the embeddability of real-world data into normed spaces and about translating these theoretical results into predictions about the empirical results from experiments.
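For a finite metric space, the distortion in the definition above reduces to a ratio computation: the optimal r is attained at the smallest expansion ratio, so the distortion equals the largest ratio d_Y/d_X divided by the smallest. A minimal sketch (ours; the function name is our own):

```python
import numpy as np

def distortion(d_X: np.ndarray, d_Y: np.ndarray) -> float:
    """Distortion of an embedding of a finite metric space.

    d_X[i, j]: distances in the source space; d_Y[i, j]: distances between
    the embedded points. Both are symmetric with a zero diagonal.
    By the definition, the distortion is the smallest D such that some r > 0
    satisfies r*d_X <= d_Y <= D*r*d_X; for finite spaces this equals
    max(d_Y/d_X) / min(d_Y/d_X) over all pairs i != j.
    """
    iu = np.triu_indices_from(d_X, k=1)  # each unordered pair once
    ratios = d_Y[iu] / d_X[iu]
    return float(ratios.max() / ratios.min())
```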
## 4 Experiments

We evaluate the graph embedding capacity of normed spaces alongside other popular Riemannian manifolds in the graph reconstruction task on various synthetic and real-world graphs (§4.1). We further analyze (a) the space capacity and computational costs for varying graph sizes and curvatures and (b) the space dimension; in Appendix E, we extend our analysis to (c) expander graphs and (d) the asymmetry of the loss function. Additionally, we evaluate normed spaces in two tasks: link prediction (§4.2) and recommender systems (§4.3). For link prediction, we investigate the impact of normed spaces on **four** popular graph neural networks.

## 4.1 Benchmark: Graph Reconstruction

Shortest Path Metric Embeddings. A *metric embedding* is a mapping f : X → Y between two metric spaces (X, d_X) and (Y, d_Y). Ideally, one would like metric embeddings to be distance preserving. In practice, accepting some distortion can be necessary. In this case, the overall quality of an embedding can be evaluated by *fidelity measures* such as the average distortion Davg and the *mean average precision* (mAP); cf. Appendix D.1 for the definitions. A special case of metric embedding is the *shortest path metric embedding*, also known as a low-distortion or *approximately isometric* embedding, where X is the node set V of a graph G = (V, E) and d_G corresponds to the shortest path distance within G. These embeddings represent or reconstruct the original graph G in the chosen embedding space Y, ideally preserving the desirable geometric features of the graph.

Learning Framework. To compute these embeddings, we optimize a distance-based loss function inspired by generalized MDS (Bronstein et al., 2006), which was used earlier in, e.g., Gu et al. (2019); López et al. (2021); Giovanni et al. (2022). Given graph distances d_G(u, v) between all pairs u, v ∈ V of nodes connected by a path in G, which we denote u ∼ v, the loss is defined as:

$${\cal L}(f)=\sum_{u\sim v}\left|\left(\frac{d_{Y}(f(u),f(v))}{d_{G}(u,v)}\right)^{2}-1\right|,\tag{1}$$

where d_Y(f(u), f(v)) is the distance between the corresponding node embeddings in the target embedding space. In this context, the model parameters are a finite collection of points f(u) in Y, each indexed by a specific node u of G. These parameters, i.e., the coordinates of the points, are optimized by minimizing the loss function through gradient descent. This loss function treats the distortion of different path lengths uniformly during training. We provide more context for the loss function in Appendix B. We also empirically evaluate embeddings learned with the distance-based loss from eq. (1) against another popular distance-based loss function, namely the mean squared error loss, in Appendix D.1.
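For concreteness, the sketch below is a minimal PyTorch re-implementation of this learning framework (our own illustration, not the released code), assuming an ℓ∞ target space and using eq. (1) as the objective:

```python
import torch
import networkx as nx

# Toy graph and its shortest-path matrix (the training targets d_G(u, v)).
G = nx.balanced_tree(2, 4)
n = G.number_of_nodes()
d_G = torch.tensor(nx.floyd_warshall_numpy(G), dtype=torch.float32)

# Model parameters: one free point per node in a 20-dimensional normed space.
X = torch.nn.Parameter(0.01 * torch.randn(n, 20))
opt = torch.optim.Adam([X], lr=0.05)

idx_u, idx_v = torch.triu_indices(n, n, offset=1)  # all node pairs u ~ v

for step in range(2000):
    opt.zero_grad()
    # l_inf distance between the embedded endpoints of every pair.
    d_Y = (X[idx_u] - X[idx_v]).abs().amax(dim=-1)
    loss = ((d_Y / d_G[idx_u, idx_v]) ** 2 - 1).abs().sum()  # eq. (1)
    loss.backward()
    opt.step()
```

The ℓ∞ distance is non-smooth but differentiable almost everywhere, so standard (sub)gradient-based optimizers apply directly; Riemannian baselines would instead require manifold-aware optimizers (Bonnabel, 2011; Bécigneul & Ganea, 2019; Kochurov et al., 2020).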
Motivation. In geometric machine learning, graph reconstruction tasks have achieved a "de facto" benchmark status for empirically quantifying the representational capacity of geometric spaces for preserving graph structures, given through their local close-neighborhood information, their global all-node interactions, or an intermediate of both (Nickel & Kiela, 2017; 2018; Gu et al., 2019; Cruceru et al., 2020; López et al., 2021; López et al., 2021). This fidelity to structure is crucial for downstream tasks such as link prediction and recommender systems, where knowing the relationships between nodes or users is key. Other applications include embedding large taxonomies. For instance, Nickel & Kiela (2017; 2018) proposed embedding WordNet while maintaining its local graph structure (semantic relationships between words) and applied these embeddings to downstream NLP tasks. Additionally, low-distortion metric embeddings have applications in approximation algorithms (Linial et al., 1995), online algorithms (Bansal et al., 2015), and distributed algorithms (Khan et al., 2008). We note that though we employ a global loss function, the resulting normed space embeddings preserve both local and global structure notably well.

Experimental Setup. Following the work of Gu et al. (2019), we train graph embeddings by minimizing the previously mentioned distance-based loss function. Following López et al. (2021), we do not apply any scaling to either the input graph distances or the distances calculated in the space, unlike earlier work (Gu et al., 2019; Cruceru et al., 2020). We report the average results across five runs in terms of (a) the average distortion Davg and (b) the *mean average precision* (mAP). We provide the training details and the data statistics in Appendix D.1.

Baseline Comparison. We compare the performance of normed spaces with many other spaces for graph embedding. These spaces fall into four classes: (a) normed spaces: R^20_{ℓ1}, R^20_{ℓ2}, and R^20_{ℓ∞}; (b) Riemannian symmetric spaces (Cruceru et al., 2020; López et al., 2021; López et al., 2021), including the space of SPD matrices SPD^6_R, the Siegel upper half spaces S^4_R, S^4_{F1}, S^4_{F∞}, the bounded symmetric spaces B^4_R, B^4_{F1}, B^4_{F∞}, hyperbolic spaces (Nickel & Kiela, 2017) H^20_R (Poincaré model), and product spaces (Gu et al., 2019) H^10_R × H^10_R; (c) Cartesian product spaces involving normed spaces: R^10_{ℓ1} × R^10_{ℓ∞}, R^10_{ℓ1} × H^10_R, R^10_{ℓ2} × H^10_R, and R^10_{ℓ∞} × H^10_R; and (d) the pseudo-Euclidean space (Goldfarb, 1985; Vishwakarma & Sala, 2022) R^{10+,10−}_{PSE}. The notation for all metrics follows a standardized format: the superscript scales with the space dimension, and the subscript denotes the specific distance metric used (e.g., R for Riemannian and F for Finsler). Following López et al. (2021), we ensure uniformity across metric spaces by using the same number of free parameters, specifically a dimension of 20; more importantly, we are concerned with the capacity of normed spaces at such low dimensions, where non-Euclidean spaces have demonstrated success for embedding graphs (Chami et al., 2019b; Gu et al., 2019). We also investigate the capacities of spaces with growing dimensions, observing that other spaces necessitate much higher dimensions to match the capacity of the ℓ∞ normed space (see Tab. 4). Note that S^n and B^n have n(n + 1) dimensions, and SPD^n has n(n + 1)/2 dimensions. We elaborate on these metric spaces in Appendix A.
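For the Cartesian product baselines, distances combine factor-wise as d(x, y) = sqrt(d1(x1, y1)² + d2(x2, y2)²) (see Table 7 in the Appendix). A minimal sketch for a point of R^10_{ℓ1} × H^10_R (our illustration; function names are ours, and the Poincaré distance follows Table 7):

```python
import numpy as np

def dist_l1(x, y):
    # Induced l1 distance.
    return np.abs(x - y).sum()

def dist_poincare(x, y):
    # Distance in the Poincare ball model (cf. Table 7); assumes ||x||, ||y|| < 1.
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / denom)

def dist_product(x, y):
    # First 10 coordinates live in l1, the last 10 in the Poincare ball.
    d1 = dist_l1(x[:10], y[:10])
    d2 = dist_poincare(x[10:], y[10:])
    return np.sqrt(d1 ** 2 + d2 ** 2)
```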
Synthetic Graphs. Following the work of López et al. (2021), we compare the representational capacity of various geometric spaces on several synthetic graphs, including grids, trees, and their Cartesian and rooted products. Further, we extend our analysis to three expander graphs, which can be considered theoretical worst-case scenarios for normed-space embedding theorems (Linial et al., 1995), and thus are challenging setups for graph reconstruction.

| Space | 4D Grid Davg | 4D Grid mAP | Tree Davg | Tree mAP | Tree × Tree Davg | Tree × Tree mAP | Tree ⋄ Grid Davg | Tree ⋄ Grid mAP | Grid ⋄ Tree Davg | Grid ⋄ Tree mAP |
|---|---|---|---|---|---|---|---|---|---|---|
| R^20_{ℓ1} | 1.08±0.00 | 100.00 | 1.62±0.02 | 73.56 | 1.22±0.01 | 100.00 | 1.22±0.01 | 71.91 | 1.75±0.02 | 60.13 |
| R^20_{ℓ2} | 11.24±0.00 | 100.00 | 3.92±0.04 | 42.30 | 9.78±0.00 | 96.03 | 3.86±0.02 | 34.21 | 4.28±0.04 | 27.50 |
| R^20_{ℓ∞} | 0.13±0.00 | 100.00 | 0.15±0.01 | 100.00 | 0.58±0.01 | 100.00 | 0.09±0.01 | 100.00 | 0.23±0.02 | 99.39 |
| H^20_R | 25.23±0.05 | 63.74 | 0.54±0.02 | 100.00 | 20.59±0.11 | 75.67 | 14.56±0.27 | 44.14 | 14.62±0.13 | 30.28 |
| SPD^6_R | 11.24±0.00 | 100.00 | 1.79±0.02 | 55.92 | 8.83±0.01 | 98.49 | 1.56±0.02 | 62.31 | 1.83±0.00 | 72.17 |
| S^4_R | 11.27±0.01 | 100.00 | 1.35±0.02 | 78.53 | 8.68±0.02 | 98.03 | 1.45±0.09 | 72.49 | 1.54±0.08 | 76.66 |
| S^4_{F∞} | 5.92±0.06 | 99.61 | 1.23±0.28 | 99.56 | 3.31±0.06 | 99.95 | 10.88±0.19 | 63.52 | 10.48±0.21 | 72.53 |
| S^4_{F1} | 0.01±0.00 | 100.00 | 0.76±0.02 | 91.57 | 1.08±0.16 | 100.00 | 1.03±0.00 | 78.71 | 0.84±0.06 | 80.52 |
| B^4_R | 11.28±0.01 | 100.00 | 1.27±0.05 | 74.77 | 8.74±0.09 | 98.12 | 2.88±0.32 | 72.55 | 2.76±0.11 | 96.29 |
| B^4_{F∞} | 7.32±0.16 | 97.92 | 1.51±0.13 | 99.73 | 4.26±0.26 | 99.70 | 6.55±1.77 | 73.80 | 7.15±0.85 | 90.51 |
| B^4_{F1} | 0.39±0.02 | 100.00 | 0.77±0.02 | 94.64 | 1.28±0.16 | 100.00 | 1.09±0.03 | 76.55 | 0.99±0.01 | 81.82 |
| R^10_{ℓ1} × R^10_{ℓ∞} | 0.16±0.00 | 100.00 | 0.63±0.02 | 99.73 | 0.62±0.00 | 100.00 | 0.54±0.01 | 99.84 | 0.60±0.01 | 94.81 |
| R^10_{ℓ1} × H^10_R | 0.55±0.00 | 100.00 | 1.13±0.01 | 99.73 | 0.62±0.01 | 100.00 | 1.76±0.02 | 50.74 | 1.65±0.01 | 89.47 |
| R^10_{ℓ2} × H^10_R | 11.24±0.00 | 100.00 | 1.19±0.04 | 100.00 | 9.30±0.04 | 98.03 | 2.15±0.05 | 58.23 | 2.03±0.01 | 97.88 |
| R^10_{ℓ∞} × H^10_R | 0.14±0.00 | 100.00 | 0.22±0.02 | 96.96 | 1.91±0.01 | 99.13 | 0.15±0.01 | 99.96 | 0.57±0.01 | 90.34 |
| H^10_R × H^10_R | 18.74±0.01 | 78.47 | 0.65±0.02 | 100.00 | 8.61±0.03 | 97.63 | 1.08±0.06 | 77.20 | 2.80±0.65 | 84.88 |
| R^{10+,10−}_{PSE} | 5.65±0.02 | 99.87 | 4.91±0.03 | 33.38 | 3.46±0.04 | 99.70 | 4.32±0.03 | 40.09 | 4.93±0.03 | 27.53 |

Table 1: Results on the five synthetic graphs: 4D Grid (|V| = 625, |E| = 2000), Tree (364, 363), Tree × Tree (225, 420), Tree ⋄ Grid (775, 1270), Grid ⋄ Tree (775, 790). Lower Davg is better; higher mAP is better. Metrics are given as percentages.

Tab. 1 and 2 report the results on synthetic graphs and expanders. Overall, the ℓ∞ normed space largely outperforms all other metric spaces considered on the graph configurations we examine. Notably, it excels over manifolds typically paired with specific graph topologies. For instance, the ℓ∞ space significantly outperforms hyperbolic spaces, and surpasses Cartesian products of hyperbolic spaces and the pseudo-Euclidean space on embedding tree graphs.
Further, the ℓ∞ space outperforms sophisticated symmetric spaces such as S^4_{F1} and B^4_{F1} on the graphs with mixed Euclidean and hyperbolic structure (Tree ⋄ Grid and Grid ⋄ Tree), although these symmetric spaces have compound geometries that combine Euclidean and hyperbolic subspaces. We also observe competitive performance from the ℓ1 space, which outperforms the ℓ2, hyperbolic, and symmetric spaces equipped with Riemannian and Finsler infinity metrics. Interestingly, combining ℓ∞ and hyperbolic spaces using the Cartesian product does not bring added benefits and is less effective than using the ℓ∞ space alone. Further, combining ℓ1 and ℓ∞ spaces yields intermediate performance between the individual ℓ1 and ℓ∞ spaces, due to the substantial performance gap between these two spaces. These findings underline the high capacity of the ℓ1 and ℓ∞ spaces, aligning with our theoretical motivations.

In Tab. 2, we report graph reconstruction results for three expander graphs, namely Margulis-Gabber-Galil, Paley, and Chordal-Cycle graphs (Bollobás & Bollobás, 1998; Lubotzky, 1994; Vadhan et al., 2012). We note that expanders are considered representative of complex structures due to their high degree of sparsity and connectivity. As observed in our results, none of the metric spaces investigated aligns well with these intricate structures, leading to substantial distortion of graph structures across all spaces. Overall, graph structures in the ℓ1 and ℓ∞ spaces endure much lower distortion in terms of Davg compared to the other spaces. Importantly, even with considerable distortion, the ℓ1 and ℓ∞ spaces yield favorable mAP scores, particularly on the Chordal-Cycle graph. This suggests that the Chordal-Cycle graph, while not isometrically embedded, is nearly isomorphic in these spaces, indicative of high-quality embeddings. In sum, these results affirm that the ℓ1 and ℓ∞ spaces are well suited for embedding graphs, showing robust performance whether their geometry aligns closely, or even poorly, with the graph structures.
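For reference, all three expander families are available as standard NetworkX generators; the instances below match the node counts in Tab. 2 (a sketch under the assumption that these standard constructions correspond to the graphs used; we collapse multi-edges and orientations, so edge counts may differ from the table):

```python
import networkx as nx

# Margulis-Gabber-Galil expander on n^2 nodes: n = 25 gives |V| = 625.
margulis = nx.Graph(nx.margulis_gabber_galil_graph(25))

# Paley graph on a prime p of nodes: p = 101 gives |V| = 101.
paley = nx.Graph(nx.paley_graph(101))

# Chordal-cycle graph on a prime p of nodes: p = 523 gives |V| = 523.
chordal = nx.Graph(nx.chordal_cycle_graph(523))

for G in (margulis, paley, chordal):
    print(G.number_of_nodes(), G.number_of_edges())
```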
| Space | Margulis Davg | Margulis mAP | Paley Davg | Paley mAP | Chordal Davg | Chordal mAP |
|---|---|---|---|---|---|---|
| R^20_{ℓ1} | 13.4±0.00 | 87.97 | 22.7±0.00 | 65.84 | 10.7±0.01 | 99.66 |
| R^20_{ℓ2} | 14.0±0.01 | 83.99 | 23.6±0.02 | 60.80 | 12.8±0.01 | 87.79 |
| R^20_{ℓ∞} | 14.2±0.01 | 82.73 | 16.1±0.01 | 66.88 | 10.5±0.01 | 98.39 |
| H^20_R | 16.8±0.01 | 69.47 | 23.8±0.02 | 60.76 | 22.8±0.02 | 59.19 |
| SPD^6_R | 14.1±0.01 | 84.98 | 23.6±0.01 | 61.76 | 12.8±0.01 | 77.59 |
| S^4_{F1} | 24.2±0.02 | 2.24 | 26.6±0.01 | 51.94 | 38.1±0.02 | 1.40 |
| B^4_{F1} | 24.1±0.01 | 2.17 | 26.5±0.01 | 52.97 | 37.2±0.01 | 1.43 |
| R^10_{ℓ1} × R^10_{ℓ∞} | 13.8±0.00 | 87.25 | 20.4±0.01 | 60.09 | 10.6±0.01 | 99.47 |
| R^10_{ℓ1} × H^10_R | 14.2±0.00 | 83.63 | 23.3±0.00 | 62.77 | 11.7±0.00 | 82.95 |
| R^10_{ℓ2} × H^10_R | 14.4±0.00 | 79.12 | 23.7±0.01 | 60.72 | 12.8±0.00 | 81.93 |
| R^10_{ℓ∞} × H^10_R | 14.6±0.01 | 86.26 | 20.8±0.00 | 60.57 | 12.1±0.01 | 88.98 |
| H^10_R × H^10_R | 15.4±0.01 | 75.77 | 23.7±0.01 | 60.33 | 17.2±0.00 | 58.25 |
| R^{10+,10−}_{PSE} | 15.7±0.02 | 45.25 | 24.7±0.01 | 56.78 | 9.1±0.02 | 76.48 |

Table 2: Results on the three expander graphs: Margulis (|V| = 625, |E| = 2500), Paley (101, 5050), Chordal (523, 1569). Metrics are given as percentages.

Real-World Graph Networks. We evaluate the representational capacity of metric spaces on five popular real-world graph networks. These include (a) USCA312 (Hahsler & Hornik, 2007) and EuroRoad (Šubelj & Bajec, 2011), representing North American city networks and European road systems, respectively; (b) bio-diseasome (Goh et al., 2007), a biological graph representing the relationships between human disorders and diseases and their genetic origins; (c) CSPHD (Nooy et al., 2011), a graph of Ph.D. advisor-advisee relationships in computer science; and (d) Facebook (McAuley & Leskovec, 2012), a dense social network from Facebook.

| Space | USCA312 Davg | bio-diseasome Davg | bio-diseasome mAP | csphd Davg | csphd mAP | EuroRoad Davg | EuroRoad mAP | Facebook Davg | Facebook mAP |
|---|---|---|---|---|---|---|---|---|---|
| R^20_{ℓ1} | 0.29±0.01 | 1.62±0.01 | 89.14 | 1.59±0.02 | 52.34 | 1.73±0.01 | 93.61 | 2.38±0.02 | 31.22 |
| R^20_{ℓ2} | 0.18±0.01 | 3.83±0.01 | 76.31 | 4.04±0.01 | 47.37 | 4.50±0.00 | 87.70 | 3.16±0.01 | 32.21 |
| R^20_{ℓ∞} | 0.95±0.02 | 0.53±0.01 | 98.24 | 0.42±0.01 | 99.28 | 1.06±0.01 | 99.48 | 0.71±0.02 | 42.21 |
| H^20_R | 2.39±0.02 | 6.83±0.08 | 91.26 | 22.42±0.23 | 60.24 | 43.56±0.44 | 54.25 | 3.72±0.00 | 44.85 |
| SPD^6_R | 0.21±0.02 | 2.54±0.00 | 82.66 | 2.92±0.11 | 57.88 | 19.54±0.99 | 92.38 | 2.92±0.05 | 33.73 |
| S^4_R | 0.28±0.03 | 2.40±0.02 | 87.01 | 4.30±0.18 | 59.95 | 29.21±0.91 | 84.92 | 3.07±0.04 | 30.98 |
| S^4_{F∞} | 0.57±0.08 | 2.78±0.49 | 93.95 | 27.27±1.00 | 59.45 | 46.82±1.02 | 72.03 | 1.90±0.11 | 45.58 |
| S^4_{F1} | 0.18±0.02 | 1.55±0.04 | 90.42 | 1.50±0.03 | 64.11 | 3.79±0.07 | 94.63 | 2.37±0.07 | 35.23 |
| B^4_R | 0.24±0.07 | 2.69±0.10 | 89.11 | 28.65±3.39 | 62.66 | 53.45±2.65 | 48.75 | 3.58±0.10 | 30.35 |
| B^4_{F∞} | 0.21±0.04 | 4.58±0.63 | 90.36 | 26.32±6.16 | 54.94 | 52.69±2.28 | 48.75 | 2.18±0.18 | 39.15 |
| B^4_{F1} | 0.18±0.07 | 1.54±0.02 | 90.41 | 2.96±0.91 | 67.58 | 21.98±0.62 | 91.63 | 5.05±0.03 | 39.87 |
| R^10_{ℓ1} × R^10_{ℓ∞} | 0.47±0.01 | 1.56±0.01 | 98.22 | 1.38±0.02 | 89.18 | 1.65±0.02 | 98.34 | 2.16±0.02 | 39.90 |
| R^10_{ℓ1} × H^10_R | 0.72±0.01 | 1.99±0.01 | 93.78 | 1.83±0.02 | 78.10 | 2.26±0.02 | 96.19 | 2.77±0.02 | 33.79 |
| R^10_{ℓ2} × H^10_R | 0.18±0.00 | 2.52±0.02 | 91.99 | 3.06±0.02 | 73.25 | 4.24±0.02 | 89.93 | 2.80±0.01 | 34.26 |
| R^10_{ℓ∞} × H^10_R | 0.42±0.02 | 1.42±0.02 | 96.51 | 1.16±0.01 | 76.91 | 1.77±0.01 | 97.38 | 1.41±0.02 | 35.03 |
| H^10_R × H^10_R | 0.47±0.18 | 2.57±0.05 | 95.00 | 7.02±1.07 | 79.22 | 23.30±1.62 | 75.07 | 2.51±0.00 | 36.39 |
| R^{10+,10−}_{PSE} | 0.41±0.01 | 3.87±0.02 | 70.82 | 3.17±0.02 | 35.41 | 2.49±0.01 | 90.33 | 5.10±0.03 | 20.36 |

Table 3: Results on the five real-world graphs: USCA312 (|V| = 312, |E| = 48516), bio-diseasome (516, 1188), csphd (1025, 1043), EuroRoad (1039, 1305), Facebook (4039, 88234). Only Davg is reported for the weighted graph USCA312. Metrics are given as percentages.
In Tab. 3, the ℓ1 and ℓ∞ spaces generally outperform all other metric spaces on real-world graphs, consistent with the synthetic graph results. However, for USCA312, a weighted graph of North American cities whose edge lengths match actual spherical distances, the inherent spherical geometry limits effective embedding into the ℓ1 and ℓ∞ spaces at lower dimensions.

Graph Representational Capacity. We assess the capacity of the metric spaces for embedding graphs of increasing size, focusing on trees (negative curvature), grids (zero curvature), and fullerenes (positive curvature); see the illustrations of these graphs in Appendix F. For trees, we fix the valency at 3 and vary the tree height from 1 to 7; for 4D grids, we vary the grid side length from 2 to 7; for fullerenes, we vary the number of carbon atoms from 20 to 240. We report the average results across three runs (a generation sketch for these graph families is given at the end of this discussion).

Figure 2: Metric space capacities with growing graph size and unnoticeable distortion variance.

Fig. 2 (left) reports the results on trees. Within the range of computationally feasible graphs, we see that as graph size grows, the capacity of all metric spaces, barring the ℓ∞ space and its product with hyperbolic space, improves significantly before plateauing. In hyperbolic spaces, which are not scale-invariant, embedding trees with high fidelity to all path lengths could require scaling (see, e.g., Sarkar (2012); Sala et al. (2018)). This can be seen in the high embedding distortion of small trees, specifically those with a height less than 4. Further empirical analysis demonstrates that the optimization goal localizes unavoidable distortion to some paths of short combinatorial length, and their contribution to the average loss becomes smaller with increased size, since there are relatively fewer of them. In contrast, the ℓ∞ space consistently exhibits a high capacity, largely unaffected by graph size, and significantly outperforms the hyperbolic space within the observed range.

Fig. 2 (center) reports the results on 4D grids with zero curvature. We find that the metric spaces whose geometry aligns poorly with grid structures, such as the ℓ2 space, the hyperbolic space, and their products, exhibit weak representational capacity. In contrast, the ℓ1 and ℓ∞ spaces preserve grid structures consistently well as the graph size increases. Fig. 2 (right) reports the results on fullerenes with positive curvature. Given that none of the spaces considered features a positively curved geometry, they are generally ill-suited for embedding fullerenes. However, we see that the ℓ1 and ℓ∞ spaces, and the product spaces containing either of them, consistently outperform the others even as the number of carbon atoms increases. Overall, these results show that the ℓ1 and ℓ∞ spaces consistently surpass other metric spaces in terms of representational capacity. They exhibit small performance fluctuations across various curvatures and maintain robust performance within the graph configurations and size ranges we consider.
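The tree and grid families of this capacity study can be instantiated with standard NetworkX generators (a sketch under our assumptions; NetworkX ships no fullerene generator, so those graphs would require a custom construction):

```python
import networkx as nx

# Trees of valency 3: branching factor r = 2, so internal nodes have degree 3.
trees = [nx.balanced_tree(r=2, h=h) for h in range(1, 8)]

# 4D grids with side length k varying from 2 to 7.
grids = [nx.grid_graph(dim=[k, k, k, k]) for k in range(2, 8)]

for G in trees + grids:
    print(G.number_of_nodes(), G.number_of_edges())
```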
Training Efficiency. Fig. 3 compares the training time for different metric spaces on grids and trees of growing size. The training time grows as C(space) × O(number of paths), where C(space) is a constant that depends on the embedding space. Among these spaces, SPD demands the largest training effort, even when dealing with small grids and trees. For the other spaces, the differences in training time become more noticeable with increasing graph size. The largest difference appears at a grid size of 7 and a tree height of 6: the ℓ1, ℓ2, and ℓ∞ normed spaces exhibit the highest efficiency, outperforming product, hyperbolic, and SPD spaces in training time. These results are expected, given that transcendental functions and eigendecompositions are computationally costly operations (a micro-benchmark sketch of this cost gap appears at the end of this subsection). Overall, normed spaces show high scalability with increasing graph size: their training time grows much more slowly than that of product spaces and Riemannian alternatives.

Figure 3: Training time scales up as the graph size increases.

Space Dimension. Tab. 4 compares the results of different spaces across dimensions on the bio-diseasome dataset. The surveyed theoretical results suggest that in sufficiently high dimensions, space capacities approach their theoretical limits, so the performance of different spaces could become similar. However, we find that the other spaces necessitate very high dimensions to match the capacity of the ℓ∞ normed space. For instance, even after more than tripling the space dimension, H^66_R and SPD^11_R still perform much worse than R^20_{ℓ∞}. R^66_{ℓ1} rivals R^20_{ℓ∞} only in mAP, and R^33_{ℓ1} × R^33_{ℓ∞} surpasses R^20_{ℓ∞} in mAP but lags behind in Davg. López et al. (2021) similarly evaluated Euclidean, hyperbolic, and spherical spaces, their products, and Siegel spaces at n = 306. Their best results on bio-diseasome were Davg = 0.73 and mAP = 99.09 for S^17_{F1}. In contrast, ℓ∞^20 and ℓ1^18 × ℓ∞^18 achieved Davg = 0.5±0.01 and mAP = 99.4, respectively. These results show that normed spaces efficiently yield low-distortion embeddings at much lower dimensions than other spaces.

| Space | n = 20 Davg | n = 20 mAP | n = 36 Davg | n = 36 mAP | n = 66 Davg | n = 66 mAP |
|---|---|---|---|---|---|---|
| R^n_{ℓ1} | 1.6±0.01 | 89.1 | 1.6±0.01 | 94.3 | 1.7±0.01 | 98.1 |
| R^n_{ℓ2} | 3.8±0.01 | 76.3 | 3.8±0.01 | 85.9 | 3.9±0.01 | 86.2 |
| R^n_{ℓ∞} | 0.5±0.01 | 98.2 | 0.5±0.01 | 98.3 | 0.6±0.01 | 99.2 |
| H^n_R | 6.8±0.08 | 91.2 | 5.8±0.06 | 93.6 | 5.9±0.05 | 93.2 |
| SPD^k_R | 2.5±0.00 | 82.6 | 2.4±0.02 | 87.8 | 2.3±0.02 | 90.5 |
| R^{n/2}_{ℓ1} × R^{n/2}_{ℓ∞} | 1.5±0.01 | 98.2 | 1.2±0.01 | 99.4 | 1.4±0.01 | 99.8 |
| R^{n/2}_{ℓ1} × H^{n/2}_R | 1.9±0.01 | 93.7 | 1.8±0.01 | 95.8 | 1.7±0.01 | 98.4 |
| R^{n/2}_{ℓ2} × H^{n/2}_R | 2.5±0.02 | 91.9 | 2.6±0.02 | 92.3 | 2.5±0.01 | 94.6 |
| R^{n/2}_{ℓ∞} × H^{n/2}_R | 1.4±0.02 | 96.5 | 1.1±0.02 | 98.6 | 0.8±0.01 | 98.8 |
| H^{n/2}_R × H^{n/2}_R | 2.5±0.05 | 95.0 | 2.5±0.04 | 97.4 | 2.6±0.05 | 97.6 |

Table 4: Results on bio-diseasome. For n = 20, 36, 66, k in SPD^k_R is 6, 8, 11, respectively. Metrics are given as percentages.
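The per-distance cost gap referenced above can be seen directly: one ℓ∞ distance requires only a subtraction and a max, whereas, e.g., the Poincaré distance involves transcendental functions (and SPD distances require eigendecompositions). A rough micro-benchmark sketch (our own, not the paper's benchmark code):

```python
import timeit
import numpy as np

# 1000 pairs of 20-dimensional points, scaled to lie inside the unit ball.
x, y = np.random.rand(1000, 20) * 0.1, np.random.rand(1000, 20) * 0.1

def linf():
    return np.abs(x - y).max(axis=1)

def poincare():
    sq = ((x - y) ** 2).sum(axis=1)
    denom = (1 - (x ** 2).sum(axis=1)) * (1 - (y ** 2).sum(axis=1))
    return np.arccosh(1 + 2 * sq / denom)

print("l_inf:    ", timeit.timeit(linf, number=1000))
print("Poincare: ", timeit.timeit(poincare, number=1000))
```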
## 4.2 Application 1: Link Prediction

Experimental Setup. We evaluate the impact of normed spaces on four popular graph neural network (GNN) architectures, namely GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), SGC (Wu et al., 2019), and GIN (Xu et al., 2019). Following Kipf & Welling (2016) and Chami et al. (2019a), we evaluate the GNNs on the link prediction task on two citation network datasets: Cora and Citeseer (Sen et al., 2008). This task aims to predict the presence of edges (links) between nodes that are not seen during training. We split each dataset into train, development, and test sets corresponding to 70%, 10%, and 20% of the citation links, sampled at random. We report the average performance in AUC across five runs and provide the training details in Appendix D.2.

Results. Tab. 5 reports the results of GNNs across different spaces for link prediction. While previous works showed the superiority of hyperbolic over Euclidean space for GNNs at lower dimensions on these datasets (Chami et al., 2019a; Zhao et al., 2023), our findings indicate the opposite (see the results for H^n_R and R^n_{ℓ2}). This is attributed to the vanishing impact of hyperbolic space when operating GNNs in a larger dimensional space (with up to 128 dimensions needed to achieve optimal performance on the development sets). The ℓ1 normed space consistently outperforms the other spaces (including the ℓ∞ and product spaces), demonstrating its superiority for link prediction. A minimal sketch of a distance-based link prediction decoder is given after Tab. 5.

| Space | Cora GCN | Cora GAT | Cora SGC | Cora GIN | Citeseer GCN | Citeseer GAT | Citeseer SGC | Citeseer GIN |
|---|---|---|---|---|---|---|---|---|
| R^n_{ℓ1} | 93.4±0.3 | 92.8±0.4 | 93.7±0.5 | 91.6±0.5 | 93.1±0.3 | 93.1±0.4 | 93.8±0.4 | 92.4±0.3 |
| R^n_{ℓ2} | 92.1±0.5 | 91.7±0.5 | 91.1±0.3 | 90.2±0.5 | 91.4±0.5 | 91.1±0.4 | 93.8±0.4 | 92.0±0.3 |
| R^n_{ℓ∞} | 89.5±0.4 | 88.2±0.5 | 88.8±0.3 | 88.4±0.5 | 90.3±0.4 | 89.5±0.5 | 91.7±0.3 | 90.5±0.3 |
| H^n_R | 86.1±0.5 | 92.1±0.6 | 89.9±0.4 | 87.7±0.3 | 92.5±0.2 | 91.1±0.3 | 91.0±0.3 | 91.7±0.4 |
| R^{n/2}_{ℓ1} × R^{n/2}_{ℓ∞} | 93.0±0.5 | 92.3±0.3 | 93.5±0.5 | 90.9±0.4 | 92.9±0.5 | 92.8±0.6 | 94.6±0.5 | 92.4±0.4 |
| R^{n/2}_{ℓ1} × H^{n/2}_R | 89.5±0.5 | 90.7±0.4 | 88.7±0.4 | 89.0±0.6 | 91.5±0.4 | 90.3±0.3 | 90.6±0.4 | 90.3±0.5 |
| R^{n/2}_{ℓ2} × H^{n/2}_R | 90.2±0.4 | 90.6±0.5 | 88.3±0.6 | 87.8±0.4 | 90.8±0.5 | 91.2±0.3 | 91.0±0.5 | 90.4±0.3 |
| R^{n/2}_{ℓ∞} × H^{n/2}_R | 89.7±0.5 | 90.7±0.3 | 89.2±0.3 | 87.7±0.4 | 90.5±0.3 | 90.2±0.4 | 90.8±0.4 | 90.4±0.3 |
| H^{n/2}_R × H^{n/2}_R | 85.2±0.3 | 88.0±0.4 | 88.7±0.4 | 87.6±0.3 | 90.4±0.3 | 87.2±0.5 | 89.2±0.3 | 91.3±0.5 |

Table 5: Results of GNNs in different spaces for link prediction, where n is a hyperparameter of the space dimension that we tune on the development sets. Results are reported in AUC.
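To illustrate how a normed space enters a GNN link predictor, the sketch below scores a candidate edge from the ℓ1 distance between node embeddings, in the style of the Fermi-Dirac decoder used in hyperbolic GNN work (Chami et al., 2019a). This is our minimal illustration; the paper's exact decoder and hyperparameters are described in Appendix D.2, and the names here (score_edge, temperature t, offset r) are ours:

```python
import torch

def score_edge(z_u: torch.Tensor, z_v: torch.Tensor,
               r: float = 2.0, t: float = 1.0) -> torch.Tensor:
    """Edge probability for (u, v) from embeddings z_u, z_v in l1 space.

    A Fermi-Dirac-style decoder: small l1 distance -> probability near 1.
    """
    d = (z_u - z_v).abs().sum(dim=-1)  # l1 distance between node embeddings
    return torch.sigmoid((r - d) / t)  # monotone decreasing in d

# Usage: z = gnn(x, edge_index) produces node embeddings; the model is then
# trained with binary cross-entropy on observed edges vs. sampled non-edges.
```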
## 4.3 Application 2: Recommender Systems

Experimental Setup. Following López et al. (2021), we conduct a comparative examination of the impact of the choice of metric space on a recommendation task. This task can be seen as a binary classification problem on a bipartite graph, in which users and items are treated as two distinct subsets of nodes, and recommendation systems are tasked with predicting the interactions between user-item pairs. We adopt the approach of prior research (Vinh Tran et al., 2020; López et al., 2021) and base the recommendation systems on graph embeddings in metric spaces. Our experiments include three popular datasets: (a) ml-100k (Harper & Konstan, 2015) from MovieLens for movie recommendation; (b) last.fm (Cantador et al., 2011) for music recommendation; and (c) MeetUp (Pham et al., 2015) from Meetup.com in NYC for event recommendation. We use the train/dev/test splits of these datasets from the work of López et al. (2021) and report the average results across five runs in terms of two evaluation metrics: hit ratio (HR) and normalized discounted cumulative gain (nDG), both at 10. We provide the training details and the statistics of the graphs in Appendix D.3.

| Space | ml-100k HR@10 | ml-100k nDG | last.fm HR@10 | last.fm nDG | MeetUp HR@10 | MeetUp nDG |
|---|---|---|---|---|---|---|
| R^20_{ℓ1} | 54.5±1.2 | 28.2 | 69.3±0.4 | 48.9 | 82.1±0.4 | 63.3 |
| R^20_{ℓ2} | 54.6±1.0 | 28.7 | 55.4±0.3 | 24.6 | 79.8±0.2 | 59.5 |
| R^20_{ℓ∞} | 50.1±1.1 | 25.5 | 54.9±0.5 | 31.7 | 70.2±0.2 | 45.3 |
| H^20_R | 53.4±1.0 | 28.2 | 54.8±0.5 | 24.9 | 79.1±0.5 | 58.8 |
| SPD^6_R | 53.3±1.4 | 28.0 | 55.4±0.2 | 25.3 | 78.5±0.5 | 58.6 |
| S^4_{F1} | 55.6±1.3 | 29.4 | 61.1±1.2 | 38.0 | 80.4±0.5 | 61.1 |
| R^10_{ℓ1} × R^10_{ℓ∞} | 52.0±1.1 | 27.1 | 68.2±0.4 | 47.3 | 79.6±0.3 | 60.1 |
| R^10_{ℓ1} × H^10_R | 53.1±1.2 | 27.6 | 69.2±0.5 | 49.9 | 80.6±0.3 | 61.2 |
| R^10_{ℓ2} × H^10_R | 53.1±1.3 | 27.9 | 45.5±0.4 | 18.9 | 79.3±0.2 | 58.9 |
| R^10_{ℓ∞} × H^10_R | 54.9±1.2 | 28.4 | 66.2±0.5 | 48.2 | 77.8±0.4 | 57.2 |
| H^10_R × H^10_R | 54.8±0.9 | 29.1 | 55.0±0.6 | 24.6 | 79.5±0.2 | 59.2 |

Table 6: Results on the three recommendation bipartite graphs. Higher HR@10 and nDG are better.

Results. Tab. 6 reports the results on the three bipartite graphs. We find that the performance gaps between metric spaces are small on ml-100k; the choice of metric space does not influence performance much on this graph. In contrast, the gaps are quite noticeable on the other two graphs. For instance, the ℓ1 space largely outperforms all the other spaces, particularly on last.fm. This showcases the importance of choosing a suitable metric space for downstream tasks. It is noteworthy that the ℓ1 norm outperforms the ℓ∞ norm on the recommender systems task, while the opposite is true for the graph reconstruction task. This raises intriguing questions about how normed space embeddings leverage the geometries of the underlying normed spaces.

## 5 Conclusions

Classical discrete geometry results suggest that normed spaces can abstractly embed a wide range of finite metric spaces, including graphs, with surprisingly low theoretical bounds on distortion. Motivated by these theoretical insights, we highlight normed spaces as a valuable complement to popular manifolds for graph representation learning. Our empirical findings show that normed spaces consistently outperform other manifolds across several real-world and synthetic graph reconstruction benchmark datasets. Notably, normed spaces demonstrate an enhanced capacity to embed graphs of varying curvatures, an advantage that becomes increasingly evident as graph sizes grow. We further illustrate the practical utility of normed spaces in two applied graph embedding tasks, namely link prediction and recommender systems, underscoring their potential for applications.
Moreover, while delivering superior performance, normed spaces require significantly fewer computational resources and pose fewer technical challenges than competing solutions, further enhancing their appeal. Our work not only emphasizes the importance of normed spaces for graph representation learning but also naturally raises several questions and motivates further research directions:

Modern and Classical AI/ML Applications. The potential of normed space embeddings can be tested across a wide range of AI applications. In many machine learning applications, normed spaces provide a promising alternative to existing Riemannian manifolds, such as hyperbolic spaces (Nickel & Kiela, 2017; 2018; Chami et al., 2020a;b) and other symmetric spaces, as embedding spaces. Classical non-differentiable discrete methods for embedding graphs into normed spaces have found applications in various areas (Livingston & Stout, 1988; Linial et al., 1995; Deza & Shtogrin, 2000; Mohammed, 2005). Our work demonstrates the efficient computation of graph embeddings into normed spaces using a modern differentiable programming paradigm. Integrating normed spaces into deep learning frameworks holds the potential to advance graph representation learning and its applications, bridging modern and classical AI research.

Discrete Geometry. Further analysis is needed to describe how normed space embeddings leverage the geometry of normed spaces. It is also important to investigate which emergent geometric properties of the embeddings can be used for analyzing graph structures, such as hierarchies. Lastly, we anticipate our work will provide a valuable tool for experimental mathematics.

Limitations and Future Research. Our work, like others in the geometric machine learning literature (e.g., Nickel & Kiela, 2017; 2018; Chami et al., 2019a; Cruceru et al., 2020; López et al., 2021; Giovanni et al., 2022), lacks theoretical guarantees. It is crucial to connect the theoretical bounds on distortion for abstract embeddings with the empirical results, especially for real-world graphs. In particular, it is possible that hyperparameter tuning, or different spaces and training techniques than normed spaces and gradient descent, can achieve better performance in general settings, beyond the restricted cases where some geometric spaces perform exceptionally well on particular graphs, such as hyperbolic geometry on trees. It would also be valuable to analyze more growing families of graphs, such as expanders and mixed-curvature graphs. Furthermore, embedding larger real-world networks would provide insights into scalability in practical settings and into how the benefits translate to graphs of larger scale. Lastly, future work should expand this study to dynamic graphs with evolving structures and investigate the transferability of embeddings learned on one task (e.g., link prediction with GNNs) to other tasks.

## Acknowledgments

We would like to extend our thanks to Anna Wienhard, Maria Beatrice Pozetti, Ullrich Koethe, Federico López, and Steve Trettel for many interesting conversations and valuable insights. We extend our sincere gratitude to the reviewers for their insightful comments and suggestions, which have significantly enhanced the quality of this paper. J.M. Riestenberg was supported by the RTG 2229 "Asymptotic Invariants and Limits of Groups and Spaces" and by the DFG under Project-ID 338644254 - SPP2026. Wei Zhao was supported by the Klaus Tschira Foundation and a Young Marsilius Fellowship at Heidelberg University.
## References G. Bachmann, G. Bécigneul, and O.-E. Ganea. Constant curvature graph convolutional networks. In 37th International Conference on Machine Learning (ICML), 2020. Nikhil Bansal, Niv Buchbinder, Aleksander Madry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. *Journal of the ACM (JACM)*, 62(5):1–49, 2015. Gary Bécigneul and Octavian-Eugen Ganea. Riemannian adaptive optimization methods. In 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA, May 2019. URL https:// openreview.net/forum?id=r1eiqi09K7. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Thomas G. Dietterich, Suzanna Becker, and Zoubin Ghahramani (eds.), *Advances in Neural Information* Processing Systems 14 [Neural Information Processing Systems: Natural and Synthetic, NIPS 2001, December 3-8, 2001, Vancouver, British Columbia, Canada], pp. 585–591. MIT Press, 2001. URL https: //proceedings.neurips.cc/paper/2001/hash/f106b7f99d2cb30c3db1c3cc0fde9ccb-Abstract.html. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013. Béla Bollobás and Béla Bollobás. *Random graphs*. Springer, 1998. Silvère Bonnabel. Stochastic gradient descent on Riemannian manifolds. *IEEE Transactions on Automatic* Control, 58, 11 2011. doi: 10.1109/TAC.2013.2254619. J. Bourgain. On Lipschitz embedding of finite metric spaces in Hilbert space. *Israel J. Math.*, 52(1-2):46–52, 1985. ISSN 0021-2172. doi: 10.1007/BF02776078. URL https://doi.org/10.1007/BF02776078. Alexander M Bronstein, Michael M Bronstein, and Ron Kimmel. Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. *Proceedings of the National Academy of* Sciences, 103(5):1168–1172, 2006. Deng Cai, Xiaofei He, Jiawei Han, and Thomas S Huang. Graph regularized nonnegative matrix factorization for data representation. *IEEE transactions on pattern analysis and machine intelligence*, 33(8):1548–1560, 2010. Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec 2011). In *Proceedings of the 5th ACM Conference on Recommender* Systems, RecSys 2011, New York, NY, USA, 2011. ACM. Ben Chamberlain, Marc Deisenroth, and James Clough. Neural embeddings of graphs in hyperbolic space. In Proceedings of the 13th International Workshop on Mining and Learning with Graphs (MLG), 2017. Ines Chami, Zhitao Ying, Christopher Ré, and Jure Leskovec. Hyperbolic graph convolutional neural networks. In *Advances in Neural Information Processing Systems 32*, pp. 4869– 4880. Curran Associates, Inc., 2019a. URL https://proceedings.neurips.cc/paper/2019/file/ 0415740eaa4d9decbc8da001d3fd805f-Paper.pdf. Ines Chami, Zhitao Ying, Christopher Ré, and Jure Leskovec. Hyperbolic graph convolutional neural networks. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on* Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 4869–4880, 2019b. Ines Chami, Albert Gu, Vaggos Chatziafratis, and Christopher Ré. From trees to continuous embeddings and back: Hyperbolic hierarchical clustering. 
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS* 2020, December 6-12, 2020, virtual, 2020a. URL https://proceedings.neurips.cc/paper/2020/hash/ ac10ec1ace51b2d973cd87973a98d3ab-Abstract.html. Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. Low-dimensional hyperbolic knowledge graph embeddings. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, pp. 6901–6914, Online, July 2020b. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.617. URL https://www.aclweb.org/anthology/2020.acl-main.617. C. Cruceru, G. Bécigneul, and O.-E. Ganea. Computationally tractable Riemannian manifolds for graph embeddings. In *37th International Conference on Machine Learning (ICML)*, 2020. Michaël Defferrard, Martino Milani, Frédérick Gusset, and Nathanaël Perraudin. DeepSphere: A graphbased spherical CNN. In *International Conference on Learning Representations*, 2020. URL https: //openreview.net/forum?id=B1e3OlStPB. M Deza and Mikhail Ivanovich Shtogrin. Embeddings of chemical graphs in hypercubes. *Mathematical Notes*, 68:295–305, 2000. Takuma Ebisu and Ryutaro Ichise. Toruse: Knowledge graph embedding on a Lie group. In Sheila A. McIlraith and Kilian Q. Weinberger (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 1819–1826. AAAI Press, 2018. URL https://www.aaai.org/ocs/index. php/AAAI/AAAI18/paper/view/16227. Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, and Taco S. Cohen. Explorations in homeomorphic variational auto-encoding, 2018. Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. Hyperbolic entailment cones for learning hierarchical embeddings. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 1646–1655, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/ v80/ganea18a.html. Francesco Di Giovanni, Giulia Luise, and Michael M. Bronstein. Heterogeneous manifolds for curvature-aware graph embedding. *CoRR*, abs/2202.01185, 2022. URL https://arxiv.org/abs/2202.01185. Kwang-Il Goh, Michael E Cusick, David Valle, Barton Childs, Marc Vidal, and Albert-László Barabási. The human disease network. *Proceedings of the National Academy of Sciences*, 104(21):8685–8690, 2007. Lev Goldfarb. A new approach to pattern recognition. *Progress in pattern recognition*, 2:241–402, 1985. Daniele Grattarola, Daniele Zambon, Lorenzo Livi, and Cesare Alippi. Change detection in graph streams by learning graph embeddings on constant-curvature manifolds. *IEEE Trans. Neural Networks Learn. Syst.*, 31 (6):1856–1869, 2020. doi: 10.1109/TNNLS.2019.2927301. URL https://doi.org/10.1109/TNNLS.2019. 2927301. Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. 
Aggarwal, Dou Shen, and Rajeev Rastogi (eds.), *Proceedings* of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pp. 855–864. ACM, 2016. doi: 10.1145/2939672.2939754. URL https://doi.org/10.1145/2939672.2939754. Albert Gu, Frederic Sala, Beliz Gunel, and Christopher Ré. Learning mixed-curvature representations in product spaces. In *International Conference on Learning Representations*, 2019. URL https://openreview. net/forum?id=HJxeWnCcF7. Michael Hahsler and Kurt Hornik. Tsp-infrastructure for the traveling salesperson problem. Journal of Statistical Software, 23(2):1–21, 2007. F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4), December 2015. ISSN 2160-6455. doi: 10.1145/2827872. URL https: //doi.org/10.1145/2827872. Sigurdur Helgason. *Differential geometry, Lie groups, and symmetric spaces*. Academic Press New York, 1978. ISBN 0123384605. Samitha Herath, Mehrtash Tafazzoli Harandi, and Fatih Porikli. Learning an invariant Hilbert space for domain adaptation. In *2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu,* HI, USA, July 21-26, 2017, pp. 3956–3965. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.421. URL https://doi.org/10.1109/CVPR.2017.421. Zhiwu Huang and Luc Van Gool. A Riemannian network for SPD matrix learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pp. 2036–2042. AAAI Press, 2017. Zhiwu Huang, Jiqing Wu, and Luc Van Gool. Building deep networks on Grassmann manifolds. In Sheila A. McIlraith and Kilian Q. Weinberger (eds.), *Proceedings of the Thirty-Second AAAI Conference on Artificial* Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 3279–3286. AAAI Press, 2018. URL https://www.aaai.org/ocs/index. php/AAAI/AAAI18/paper/view/16846. William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In *Conference in modern analysis and probability (New Haven, Conn., 1982)*, volume 26 of *Contemp.* Math., pp. 189–206. Amer. Math. Soc., Providence, RI, 1984. doi: 10.1090/conm/026/737400. URL https://doi.org/10.1090/conm/026/737400. William B. Johnson, Joram Lindenstrauss, and Gideon Schechtman. On Lipschitz embedding of finite metric spaces in low-dimensional normed spaces. In *Geometrical aspects of functional analysis (1985/86)*, volume 1267 of *Lecture Notes in Math.*, pp. 177–184. Springer, Berlin, 1987. doi: 10.1007/BFb0078145. URL https://doi.org/10.1007/BFb0078145. Mahmut Kaya and Hasan Şakir Bilge. Deep metric learning: A survey. *Symmetry*, 11(9):1066, 2019. Maleq Khan, Fabian Kuhn, Dahlia Malkhi, Gopal Pandurangan, and Kunal Talwar. Efficient distributed approximation algorithms via probabilistic tree embeddings. In Proceedings of the twenty-seventh ACM symposium on Principles of distributed computing, pp. 263–272, 2008. Thomas N. Kipf and Max Welling. Variational graph auto-encoders. *CoRR*, abs/1611.07308, 2016. URL http://arxiv.org/abs/1611.07308. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. 
URL https://openreview.net/forum?id=SJU4ayYgl. Max Kochurov, Rasul Karimov, and Sergei Kozlukov. Geoopt: Riemannian optimization in PyTorch. *ArXiv*, abs/2005.02819, 2020. Robert Krauthgamer, James R Lee, Manor Mendel, and Assaf Naor. Measured descent: A new embedding method for finite metrics. In *45th Annual IEEE Symposium on Foundations of Computer Science*, pp. 434–443. IEEE, 2004. Nils M. Kriege. Weisfeiler and leman go walking: Random walk kernels revisited. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 20119–20132. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/ paper_files/paper/2022/file/7eed2822411dc37b3768ae04561caafa-Paper-Conference.pdf. Brian Kulis et al. Metric learning: A survey. Foundations and Trends® *in Machine Learning*, 5(4):287–364, 2013. Marc T. Law and Jos Stam. Ultrahyperbolic representation learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Nathan Linial, Eran London, and Yuri Rabinovich. The geometry of graphs and some of its algorithmic applications. *Combinatorica*, 15(2):215–245, 1995. ISSN 0209-9683. doi: 10.1007/BF01200757. URL https://doi.org/10.1007/BF01200757. Marilynn Livingston and Quentin F Stout. Embeddings in hypercubes. *Mathematical and Computer Modelling*, 11:222–227, 1988. Federico López, Benjamin Heinzerling, and Michael Strube. Fine-grained entity typing in hyperbolic space. In *Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)*, pp. 169–180, Florence, Italy, August 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-4319. URL https://www.aclweb.org/anthology/W19-4319. Federico López, Beatrice Pozzetti, Steve Trettel, Michael Strube, and Anna Wienhard. Symmetric spaces for graph embeddings: A finsler-riemannian approach. In Marina Meila and Tong Zhang (eds.), *Proceedings* of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pp. 7090–7101. PMLR, 2021. URL http: //proceedings.mlr.press/v139/lopez21a.html. Federico López, Beatrice Pozzetti, Steve Trettel, Michael Strube, and Anna Wienhard. Vector-valued distance and gyrocalculus on the space of symmetric positive definite matrices. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 34. Curran Associates, Inc., 2021. Alex Lubotzky. *Discrete groups, expanding graphs and invariant measures*, volume 125. Springer Science & Business Media, 1994. Julian McAuley and Jure Leskovec. Learning to discover social circles in ego networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS'12, pp. 539–547, Red Hook, NY, USA, 2012. Curran Associates Inc. Yu Meng, Jiaxin Huang, Guangyuan Wang, Chao Zhang, Honglei Zhuang, Lance Kaplan, and Jiawei Han. Spherical text embedding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32, pp. 8208–8217. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 043ab21fc5a1607b381ac3896176dac6-Paper.pdf. 
Qatawneh Mohammed. Embedding linear array network into the tree-hypercube network. European Journal of Scientific Research, 10(2):72–76, 2005. Maximilian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in* Neural Information Processing Systems 30, pp. 6341–6350. Curran Associates, Inc., 2017. URL https: //proceedings.neurips.cc/paper/2017/file/59dfa2df42d9e3d41f5b02bfc32229dd-Paper.pdf. Maximillian Nickel and Douwe Kiela. Learning continuous hierarchies in the Lorentz model of hyperbolic geometry. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine* Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 3779–3788, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/nickel18a.html. Wouter De Nooy, Andrej Mrvar, and Vladimir Batagelj. *Exploratory Social Network Analysis with Pajek*. Cambridge University Press, USA, 2011. ISBN 0521174805. OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences, 2023. Published electronically at http://oeis.org. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: online learning of social representations. In Sofus A. Macskassy, Claudia Perlich, Jure Leskovec, Wei Wang, and Rayid Ghani (eds.), The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA - August 24 - 27, 2014, pp. 701–710. ACM, 2014. doi: 10.1145/2623330.2623732. URL https://doi.org/10.1145/2623330.2623732. T. N. Pham, X. Li, G. Cong, and Z. Zhang. A general graph-based model for recommendation in event-based social networks. In *2015 IEEE 31st International Conference on Data Engineering*, pp. 567–578, 2015. doi: 10.1109/ICDE.2015.7113315. Frederic Sala, Chris De Sa, Albert Gu, and Christopher Re. Representation tradeoffs for hyperbolic embeddings. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 4460–4469, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/sala18a.html. Guillaume Salha, Romain Hennequin, Viet Anh Tran, and Michalis Vazirgiannis. A degeneracy framework for scalable graph autoencoders. In *28th International Joint Conference on Artificial Intelligence (IJCAI)*, 2019. Rik Sarkar. Low distortion delaunay embedding of trees in hyperbolic plane. In Marc van Kreveld and Bettina Speckmann (eds.), *Graph Drawing*, pp. 355–366, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. ISBN 978-3-642-25878-7. Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. *AI magazine*, 29(3):93–93, 2008. Carl Ludwig Siegel. Symplectic geometry. *American Journal of Mathematics*, 65(1):1–86, 1943. ISSN 00029327, 10806377. URL http://www.jstor.org/stable/2371774. Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R.G. Lanckriet. Hilbert space embeddings and metrics on probability measures. *J. Mach. Learn. Res.*, 11:1517–1561, August 2010. ISSN 1532-4435. Lovro Šubelj and Marko Bajec. Robust network community detection using balanced propagation. The European Physical Journal B, 81:353–362, 2011. Lei Tang and Huan Liu. Leveraging social media networks for classification. 
*Data Mining and Knowledge Discovery*, 23:447–478, 2011.

The Sage Developers. *SageMath, the Sage Mathematics Software System (Version x.y.z)*, 2023. https://www.sagemath.org.

Alexandru Tifrea, Gary Bécigneul, and Octavian-Eugen Ganea. Poincare GloVe: Hyperbolic word embeddings. In *7th International Conference on Learning Representations, ICLR*, New Orleans, LA, USA, May 2019. URL https://openreview.net/forum?id=Ske5r3AqK7.

Salil P Vadhan et al. *Pseudorandomness*, volume 7. Now Publishers, Inc., 2012.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.

Lucas Vinh Tran, Yi Tay, Shuai Zhang, Gao Cong, and Xiaoli Li. HyperML: A boosting metric learning approach in hyperbolic space for recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM '20, pp. 609–617, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450368223. doi: 10.1145/3336191.3371850. URL https://doi.org/10.1145/3336191.3371850.

Harit Vishwakarma and Frederic Sala. Lifting weak supervision to structured prediction. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 37563–37574. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/f463d31ed2fdd7b0ec585c041ec1baa8-Paper-Conference.pdf.

Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In *International conference on machine learning*, pp. 6861–6871. PMLR, 2019.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km.

Tong Yang, Long Sha, and Pengyu Hong. *NagE: Non-Abelian Group Embedding for Knowledge Graphs*, pp. 1735–1742. Association for Computing Machinery, New York, NY, USA, 2020. ISBN 9781450368599. URL https://doi.org/10.1145/3340531.3411875.

Wei Zhao, Federico Lopez, J Maxwell Riestenberg, Michael Strube, Diaaeldin Taha, and Steve Trettel. Modeling graphs beyond hyperbolic: Graph neural networks in symmetric positive definite matrices. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2023.

## A Embedding Spaces

Metric Spaces. Let X be a non-empty set. A *metric space* is an ordered pair (X, d), where d : X × X → R is a function, called the metric or *distance function*, that satisfies the following properties for all x, y, z ∈ X: (i) d(x, y) ≥ 0; (ii) d(x, y) = 0 if and only if x = y; (iii) d(x, y) = d(y, x); and (iv) d(x, z) ≤ d(x, y) + d(y, z). A map f : X → Y between two metric spaces (X, d_X) and (Y, d_Y) is an *isometric embedding* if it preserves distances, i.e., d_Y(f(x1), f(x2)) = d_X(x1, x2) for all x1, x2 ∈ X.

Riemannian Manifolds. Let M be a smooth manifold, p ∈ M a point, and T_pM the tangent space at p. A Riemannian manifold (M, g) is a smooth manifold M equipped with a Riemannian metric g given by a smooth inner product g_p : T_pM × T_pM → R at each point p ∈ M. Euclidean space is the simplest example of a Riemannian manifold.
Normed Spaces. A *normed space* is a vector space V over the real numbers $\mathbb{R}$ or complex numbers $\mathbb{C}$ equipped with a norm. A *norm* is a function $\lVert\cdot\rVert : V \to [0, +\infty)$ satisfying the following properties for all vectors $x, y \in V$ and all scalars $\alpha$ (real or complex, respectively): (i) $\lVert x\rVert \geq 0$, with equality if and only if $x = 0$, (ii) $\lVert \alpha x\rVert = |\alpha|\lVert x\rVert$, and (iii) $\lVert x + y\rVert \leq \lVert x\rVert + \lVert y\rVert$. Normed spaces induce metric spaces via the *induced distance function*, defined as $d(x, y) = \lVert x - y\rVert$. The p-norms are among the most important examples of norms. For a real number $p \geq 1$, the p-norm of a vector $x \in \mathbb{R}^d$ is given by

$$\lVert x\rVert_p := \left(|x_1|^p + |x_2|^p + \cdots + |x_d|^p\right)^{\frac{1}{p}}.$$

The definition is extended to $p = \infty$ as $\lVert x\rVert_\infty := \max_{1\leq i\leq d} |x_i|$. The space $\mathbb{R}^d$ equipped with the p-norm is denoted as $\ell_p^d$. Here we focus on the cases $p = 1, 2$, and $\infty$.

Pseudo-Euclidean Spaces. Denote the product $\mathbb{R}^{d^+} \times \mathbb{R}^{d^-}$, where $d^+$ and $d^-$ are non-negative integers, by $\mathbb{R}^{d^+,d^-}_{\mathrm{PSE}}$, and write any element $x \in \mathbb{R}^{d^+,d^-}_{\mathrm{PSE}}$ as $x = (x^+, x^-)$, where $x^+ \in \mathbb{R}^{d^+}$ and $x^- \in \mathbb{R}^{d^-}$. Then the *pseudo-Euclidean space* $\mathbb{R}^{d^+,d^-}_{\mathrm{PSE}}$ is the set $\mathbb{R}^{d^+} \times \mathbb{R}^{d^-}$ with the squared distance function defined by

$$d^2_{\mathrm{PSE}}(x, y) = \lVert x^+ - y^+\rVert_2^2 - \lVert x^- - y^-\rVert_2^2,$$

where $\lVert\cdot\rVert_2^2$ is the square of the $\ell_2$ norm on the respective space.

Hyperbolic Space. Hyperbolic space is a Riemannian manifold with a constant negative curvature. There are several models of hyperbolic space, such as the Poincaré ball model and the Lorentz model. The models are essentially the same in a mathematical sense (they are pairwise isometric), but one model can have computational advantages over another.

Definition 1 (Poincaré Ball Model). Let $\lVert\cdot\rVert$ be the Euclidean norm. Given a negative curvature c, the Poincaré ball model is a Riemannian manifold $(\mathbb{B}^n_c, g^{\mathbb{B}}_x)$, where $\mathbb{B}^n_c = \{x \in \mathbb{R}^n : \lVert x\rVert^2 < -1/c\}$ is an open ball with radius $1/\sqrt{|c|}$ and $g^{\mathbb{B}}_x = (\lambda^c_x)^2\,\mathrm{Id}$, where $\lambda^c_x = 2/(1 + c\lVert x\rVert_2^2)$ and Id is the identity matrix.

Product Manifold. Let $M_1, M_2, \ldots, M_k$ be a sequence of smooth manifolds. The product manifold is given by the Cartesian product $M = M_1 \times M_2 \times \cdots \times M_k$. Each point $p \in M$ has the coordinates $p = (p_1, \ldots, p_k)$, with $p_i \in M_i$ for all i. Similarly, a tangent vector $v \in T_pM$ can be written as $(v_1, \ldots, v_k)$, with each $v_i \in T_{p_i}M_i$. If each $M_i$ is equipped with a Riemannian metric $g_i$, then the product manifold M can be given the product metric, where $g(v, w) = \sum_{i=1}^{k} g_i(v_i, w_i)$.

Riemannian Symmetric Spaces. Riemannian symmetric spaces are connected Riemannian manifolds such that the geodesic symmetry² at each point defines a global isometry of the space. For simply connected manifolds this condition is equivalent to having covariantly constant curvature tensor. A key consequence of the definition is that symmetric spaces are homogeneous manifolds. Intuitively, this means that the manifold "looks the same" at every point. Furthermore, simply connected symmetric spaces decompose into products of irreducible symmetric spaces and Euclidean space. Irreducible symmetric spaces can be described in terms of semisimple Lie groups. Basic examples of Riemannian symmetric spaces include Euclidean spaces, hyperbolic spaces, and spheres. In the following we will describe two further special cases: Siegel space and the space of symmetric positive definite (SPD) matrices.

²For any point p in any Riemannian manifold, there exists a sufficiently small $\epsilon > 0$ such that the map $S_p : B(p, \epsilon) \to B(p, \epsilon)$ defined by $S_p(c(t)) = c(-t)$ is well-defined for any unit-speed geodesic $c : (-\epsilon, \epsilon) \to B(p, \epsilon)$ with $c(0) = p$. Such a map $S_p$ is called the *geodesic symmetry* at p.
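Returning to the normed and pseudo-Euclidean spaces defined above, the following minimal NumPy sketch (ours; the function names are illustrative, not from any referenced codebase) evaluates the $\ell_1$, $\ell_2$, and $\ell_\infty$ induced distances and the pseudo-Euclidean squared distance for a point written as $x = (x^+, x^-)$:

```python
import numpy as np

def lp_distance(x, y, p):
    """Induced distance d(x, y) = ||x - y||_p in the normed space l_p^d."""
    if p == np.inf:
        return np.max(np.abs(x - y))
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def pse_squared_distance(x, y, d_pos):
    """Squared pseudo-Euclidean distance for x = (x+, x-), y = (y+, y-),
    where the first d_pos coordinates carry the positive signature."""
    diff = x - y
    return np.sum(diff[:d_pos] ** 2) - np.sum(diff[d_pos:] ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 5.0, 2.0])
print(lp_distance(x, y, 1), lp_distance(x, y, 2), lp_distance(x, y, np.inf))
print(pse_squared_distance(x, y, d_pos=2))  # may be negative: not a metric
```

Note that the pseudo-Euclidean squared distance can be negative, which is why a pseudo-Euclidean distance is only defined on pairs where the expression is non-negative.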
| Space | Underlying Set | Distance |
|---|---|---|
| $\mathbb{R}^n_{\ell_p}$ (normed space) | $\mathbb{R}^n$ | $d(x, y) = \lVert x - y\rVert_p$ |
| $\mathbb{H}^n_{R}$ (hyperbolic space) | $\{x \in \mathbb{R}^n : \lVert x\rVert_2 < 1\}$ | $d(x, y) = \operatorname{arcosh}\big(1 + 2\,\lVert x - y\rVert_2^2 / ((1 - \lVert x\rVert_2^2)(1 - \lVert y\rVert_2^2))\big)$ |
| $\mathrm{SPD}^k_{R}$ (SPD; López et al., 2021) | symmetric positive definite $k \times k$ matrices | $d(x, y) = \lVert(\lambda_i(x^{-1}y))_{i=1}^{k}\rVert_2$, where $\lambda_i(x^{-1}y)$ is the $i$-th eigenvalue of $x^{-1}y$ |
| $\mathcal{S}^k_{R}$, $\mathcal{S}^k_{F_\infty}$, $\mathcal{S}^k_{F_1}$ (Siegel spaces, upper half model; López et al., 2021) | $\{Z = X + iY \in \mathrm{Sym}(n, \mathbb{C}) \mid Y \gg 0\}$ | see Algorithm 1 |
| $\mathcal{B}^k_{R}$, $\mathcal{B}^k_{F_\infty}$, $\mathcal{B}^k_{F_1}$ (bounded symmetric domain model) | $\{Z \in \mathrm{Sym}(n, \mathbb{C}) \mid \mathrm{Id} - Z^*Z \gg 0\}$ | see Algorithm 1 |
| $\mathbb{R}^{d^+,d^-}_{\mathrm{PSE}}$ (pseudo-Euclidean space; Vishwakarma & Sala, 2022) | $\mathbb{R}^{d^+} \times \mathbb{R}^{d^-}$ | $d(x, y) = \sqrt{\lVert x^+ - y^+\rVert_2^2 - \lVert x^- - y^-\rVert_2^2}$ (when defined), where $x = (x^+, x^-)$ and $y = (y^+, y^-)$ |
| $M_1 \times M_2$ (product space) | $M_1 \times M_2$ | $d(x, y) = \sqrt{d_1(x_1, y_1)^2 + d_2(x_2, y_2)^2}$, where $x = (x_1, x_2)$ and $y = (y_1, y_2)$ |

Table 7: A summary of the embedding spaces.

Siegel spaces, $\mathrm{HypSPD}_n$, are matrix versions of the hyperbolic plane, accommodating many products of hyperbolic planes and copies of SPD as submanifolds. These spaces support Finsler metrics that induce the $\ell_1$ and the $\ell_\infty$ metric on the Euclidean subspaces. $\mathrm{HypSPD}_n$ has the two following models with $n(n + 1)$ dimensions, both of which are open subsets of the space $\mathrm{Sym}(n, \mathbb{C})$ over $\mathbb{C}$. These two models generalize the Poincaré disk and the upper half plane model of the hyperbolic space.

Definition 2 (Bounded Symmetric Domain Model). The bounded symmetric domain model for $\mathrm{HypSPD}_n$ generalizes the Poincaré disk. It is given by $\mathcal{B}_n := \{Z \in \mathrm{Sym}(n, \mathbb{C}) \mid \mathrm{Id} - Z^*Z \gg 0\}$.

Definition 3 (Siegel Upper Half Space Model). The Siegel upper half space model for $\mathrm{HypSPD}_n$ generalizes the upper half plane model of the hyperbolic plane by $\mathcal{S}_n := \{Z = X + iY \in \mathrm{Sym}(n, \mathbb{C}) \mid Y \gg 0\}$.

There exists an isomorphism from $\mathcal{B}_n$ to $\mathcal{S}_n$ given by the Cayley transform, which is a matrix analogue of the familiar map from the Poincaré disk to the upper half space model of the hyperbolic plane:

$$Z\mapsto i(Z+\mathrm{Id})(Z-\mathrm{Id})^{-1}.$$

We refer readers to Siegel (1943) and López et al. (2021) for an in-depth overview of Siegel spaces and their applications in graph embeddings.

Definition 4 (SPD Space). $\mathrm{SPD}_n$ is the space of positive definite real symmetric $n \times n$ matrices, given by $\mathrm{SPD}(n, \mathbb{R}) := \{X \in \mathrm{Sym}(n, \mathbb{R}) \mid X \gg 0\}$. It has the structure of a Riemannian manifold of non-positive curvature of $n(n + 1)/2$ dimensions. The Riemannian metric on $\mathrm{SPD}_n$ is defined as follows: if $U, V \in S_n$ are tangent vectors based at $P \in \mathrm{SPD}_n$, their inner product is given by $\langle U, V\rangle_P = \mathrm{Tr}(P^{-1}UP^{-1}V)$.
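As a small numerical illustration of Definition 4 (ours, not from any referenced codebase), the sketch below evaluates the inner product $\langle U, V\rangle_P = \mathrm{Tr}(P^{-1}UP^{-1}V)$ at a point $P \in \mathrm{SPD}_3$ and checks positive-definiteness of the metric on a symmetric tangent vector:

```python
import numpy as np

def spd_inner_product(P, U, V):
    """Riemannian inner product <U, V>_P = Tr(P^{-1} U P^{-1} V) on SPD_n."""
    P_inv = np.linalg.inv(P)
    return np.trace(P_inv @ U @ P_inv @ V)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = A @ A.T + 3 * np.eye(3)                          # a point of SPD_3
U = rng.standard_normal((3, 3)); U = (U + U.T) / 2   # symmetric tangent vectors
V = rng.standard_normal((3, 3)); V = (V + V.T) / 2
print(spd_inner_product(P, U, U) > 0)  # the metric is positive definite
print(spd_inner_product(P, U, V))
```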
The tangent space to any point of $\mathrm{SPD}_n$ can be identified with the vector space $S_n$ of all real symmetric $n \times n$ matrices. $\mathrm{SPD}_n$ is more flexible than Euclidean or hyperbolic geometries, or products thereof. In particular, it contains n-dimensional Euclidean subspaces, (n − 1)-dimensional hyperbolic subspaces, products of $\lfloor n/2\rfloor$ hyperbolic planes, and many other interesting spaces as totally geodesic submanifolds; see the reference (Helgason, 1978) for an in-depth introduction.

## B Graph Reconstruction Loss Function

The graph reconstruction task aims to empirically quantify the capacity of a space for embedding graph structure given through its node-to-node shortest paths. Recent work has generally employed local, global, or hybrid loss functions, focusing on close neighborhood information, all-node interactions, or an intermediate of both.

| Finite Metric Space | Embedding Space | Distortion Bound | Reference |
|---|---|---|---|
| Complete graph ($K_n$) | $\ell_1^{\lceil \log_2(n)\rceil}$ | $O(1)$ | (Linial et al., 1995) |
| Tree ($T_n$) | $\ell_\infty^{O(\log n)}$ | $O(1)$ | (Linial et al., 1995) |
| Planar graph with n vertices | $\ell_\infty^{O(\log n)}$ | $O(1)$ | (Krauthgamer et al., 2004) |
| Expander with n vertices | $\ell_p$ of any dimension ($1 \leq p \leq 2$) | $\Omega(\log n)$ | (Linial et al., 1995) |
| Metric space $(X, d)$ with n points | $\ell_p^{O(\log^2 n)}$ (for any $1 \leq p \leq 2$) | $O(\log n)$ | (Linial et al., 1995) |
| Metric space $(X, d)$ with n points | $\ell_p^{O(\log^2 n)}$ (for any $p > 2$) | $O(\log n)$ | (Linial et al., 1995) |

Table 8: A summary of theoretical results.

Local loss functions emphasize preserving neighborhoods, exemplified by the loss function

$${\mathcal{L}}(f)=-\sum_{(u,v)\in E}\log{\frac{\exp\big(-d_{Y}(f(u),f(v))\big)}{\sum_{w\in{\mathcal{N}}(u)}\exp\big(-d_{Y}(f(u),f(w))\big)}}$$

from Nickel & Kiela (2017; 2018), where $\mathcal{N}(u) = \{w \mid (u, w) \notin E\} \cup \{v\}$ is the set of negative examples for u (including v). The resulting embeddings are typically favored by rank-based evaluation metrics such as the mean average precision (mAP). On the other hand, global loss functions emphasize preserving distances directly via loss functions motivated by generalized MDS (Bronstein et al., 2006), exemplified by the loss function

$${\mathcal{L}}(f)=\sum_{u\sim v}\left|\left({\frac{d_{Y}(f(u),f(v))}{d_{{\mathcal{G}}}(u,v)}}\right)^{2}-1\right|,$$

from Gu et al. (2019), and which we use in this work. The resulting embeddings are typically favored by average distortion $D_{avg}$. Lastly, hybrid loss functions, such as the Riemannian Stochastic Neighbor Embedding (RSNE) from Cruceru et al. (2020), aim to balance the emphasis on local and global, sometimes with a tunable parameter for controlling the optimization goal. We note that though we employ a global loss function, the resulting normed space embeddings notably perform well on both $D_{avg}$ and mAP.
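A minimal PyTorch sketch (ours) of the global distortion loss above, evaluated on the $\ell_\infty$ pairwise distances of a toy embedding; the variable names and the randomly drawn toy graph distances are illustrative assumptions:

```python
import torch

def distortion_loss(emb_dist, graph_dist):
    """Global distortion loss: sum over pairs of |(d_Y / d_G)^2 - 1|."""
    return torch.abs((emb_dist / graph_dist) ** 2 - 1.0).sum()

def linf_pairwise_dist(x):
    """Pairwise l_inf distances between rows of the embedding matrix x."""
    return (x[:, None, :] - x[None, :, :]).abs().amax(dim=-1)

# toy example: 5 nodes embedded in l_inf^3, with made-up shortest-path distances
x = torch.randn(5, 3, requires_grad=True)
graph_dist = torch.randint(1, 4, (5, 5)).float()
graph_dist = torch.triu(graph_dist, diagonal=1)   # keep each pair u < v once
mask = graph_dist > 0
loss = distortion_loss(linf_pairwise_dist(x)[mask], graph_dist[mask])
loss.backward()                                   # gradients for RAdam/SGD steps
```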
## C Metric Learning

Metric learning is a machine learning approach concerned with learning distance metrics in an embedding space, with the aim of using the distances between data points as features for tasks such as classification, regression, clustering, and image retrieval.

In connection with our work, metric learning pipelines broadly consist of a learnable embedding function $f : X \to Y$ that maps data from an input space X into a target embedding space Y equipped with a distance metric $d_Y$, a machine learning component that takes the values of the distance $d_f(\cdot, \cdot) := d_Y(f(\cdot), f(\cdot))$ as input features for a task, and appropriate optimization algorithms for learning the parameters in the pipeline. The embedding function and the parameters of the machine learning components (if any) could be jointly learned in a supervised manner or otherwise hand-crafted or leveraged from embeddings trained on another task. In *deep* metric learning, typically, the embedding function and the machine learning component are differentiable, and parameter optimization takes place by minimizing or maximizing an appropriate loss function with gradient descent. For reference, we recommend the following excellent surveys on metric learning: Kulis et al. (2013); Kaya & Bilge (2019).

We note that many geometric machine learning pipelines, including ours, align with the metric learning paradigm, where the metric space has a Riemannian manifold structure. See, for example, Nickel & Kiela (2017; 2018); Chami et al. (2019a); Cruceru et al. (2020); López et al. (2021); Giovanni et al. (2022). We summarize the metric learning pipelines used in this work in Table 9. A *shallow embedding* is a function that simply assigns each entity to a point in the target embedding space, with these points serving as the learnable parameters. On the other hand, a *graph neural network* maps the features of a node and its neighbors to a point in the target space, with the network weights being the learnable parameters.

| Task | Embedding Function | ML Component | Loss Function |
|---|---|---|---|
| Graph Reconstruction (§4.1, D.1) | shallow embedding | – | distance-based loss (eq. 1) |
| Link Prediction (§4.2, D.2) | graph neural network | Fermi-Dirac decoder (eq. 4) | binary cross-entropy (eq. 5) |
| Recommender System (§4.3, D.3) | shallow embedding | similarity score (eq. 6) | hinge loss (eq. 7) / binary cross-entropy (eq. 8) |

Table 9: Summary of metric learning pipelines.

| Graph | Nodes | Edges | Triples | Grid Layout | Tree Valency | Tree Height |
|---|---|---|---|---|---|---|
| 4D Grid | 625 | 2,000 | 195,000 | $(5)^4$ | – | – |
| Tree | 364 | 363 | 66,066 | – | 3 | 5 |
| Tree × Tree | 225 | 420 | 25,200 | – | 2 | 3 |
| Tree ⋄ Grids | 775 | 1,270 | 299,925 | 5 × 5 | 2 | 4 |
| Grid ⋄ Trees | 775 | 790 | 299,925 | 5 × 5 | 2 | 4 |

Table 10: Characteristics of synthetic graphs.

## D Experiments

Hardware and Code Release. All experiments were executed on an Intel(R) Xeon(R) CPU E5-2650 computer, equipped with 48 CPUs operating at 2.2 GHz and a single Tesla P40 GPU with 24GB of memory running on CUDA 11.2.

## D.1 Graph Reconstruction
Implementation Details. In all setups, we use the RAdam optimizer (Bécigneul & Ganea, 2019) and run the same grid search to train graph embeddings. The implementations of all baselines are taken from Geoopt (Kochurov et al., 2020) and López et al. (2021). We train for 3000 epochs, and stop training when the average distortion has not decreased for 200 epochs. We experiment with three hyperparameters: (a) learning rate ∈ {0.1, 0.01, 0.001}; (b) batch size ∈ {512, 1024, 2048, −1}, with −1 denoting the node count within a graph; and (c) maximum gradient norm ∈ {10, 50, 250}. Tables 10 and 11 report the stats of all the synthetic and real-world graphs.

| Graph | Nodes | Edges | Triples |
|---|---|---|---|
| USCA312 | 312 | 48,516 | 48,516 |
| bio-diseasome | 516 | 1,188 | 132,870 |
| csphd | 1,025 | 1,043 | 524,800 |
| road-euroroad | 1,039 | 1,305 | 539,241 |
| facebook | 4,039 | 88,234 | 8,154,741 |
| Margulis | 625 | 2,500 | 195,000 |
| Paley | 101 | 5,050 | 5,050 |
| Chordal | 523 | 1,569 | 136,503 |

Table 11: Characteristics of real-world and expander graphs.

Evaluation Metrics. We evaluate the quality of the learned embeddings using distortion and precision metrics. Consider a graph $\mathcal{G}$, a target metric space Y, and a metric embedding $f : \mathcal{G} \to Y$. The distortion of the embedding of a pair of nodes u, v is given by:

$$\mathrm{distortion}(u, v) = \frac{|d_Y(f(u), f(v)) - d_{\mathcal{G}}(u, v)|}{d_{\mathcal{G}}(u, v)}.$$

We denote the average of distortion over all pairs of nodes by $D_{avg}$. The other metric that we consider is the mean average precision (mAP). It is a ranking-based measure for local neighborhoods that does not track explicit distances. For the mAP metric, consider $\mathcal{G} = (V, E)$ as a graph and $\mathcal{N}_a$ as the neighborhood of the node $a \in V$. Let $R_{a,b_i}$ be the smallest neighborhood of $f(a)$ in the space Y that contains $f(b_i)$, with $f : \mathcal{G} \to Y$ as a metric embedding. Then, mAP can be defined as follows:

$$\mathrm{mAP}(f)={\frac{1}{|V|}}\sum_{a\in V}{\frac{1}{\mathrm{deg(a)}}}\sum_{i=1}^{|{\mathcal{N}}_{a}|}{\frac{|{\mathcal{N}}_{a}\cap R_{a,b_{i}}|}{|R_{a,b_{i}}|}}.$$

mAP quantifies how well the embedding approximates graph isomorphism, and it is applicable only to unweighted graphs. mAP measures the average discrepancy between the neighborhood of each node $u \in V$ and the neighborhood of $f(u) \in Y$. It is important to note that an embedding with zero average distortion guarantees a perfect mean average precision score (i.e., 100.00), but the converse is not always true: an embedding that effectively preserves the adjacency structure might not be an isometry.
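The two evaluation metrics can be computed directly from the pairwise graph distances and embedding distances; the following NumPy sketch (ours; it ignores ties in the embedding distances and assumes every node has at least one neighbor) mirrors the definitions of $D_{avg}$ and mAP above:

```python
import numpy as np

def average_distortion(d_emb, d_graph):
    """D_avg: mean of |d_Y - d_G| / d_G over all node pairs u < v."""
    iu = np.triu_indices_from(d_graph, k=1)
    return np.mean(np.abs(d_emb[iu] - d_graph[iu]) / d_graph[iu])

def mean_average_precision(d_emb, adj):
    """mAP of the embedding against the adjacency structure of the graph."""
    n = adj.shape[0]
    total = 0.0
    for a in range(n):
        nbrs = np.flatnonzero(adj[a])
        order = np.argsort(d_emb[a])          # nodes ranked by embedding distance
        order = order[order != a]
        ranks = {v: r + 1 for r, v in enumerate(order)}
        # R_{a,b}: smallest ball around f(a) containing f(b); its size is rank(b)
        prec = [np.isin(order[:ranks[b]], nbrs).sum() / ranks[b] for b in nbrs]
        total += np.mean(prec)                # inner sum divided by deg(a)
    return total / n
```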
Distortion Loss vs. Stress Loss. We note that distance-based loss functions inspired by generalized MDS (Bronstein et al., 2006) include distortion and stress loss functions. *Distortion loss*, which is the main loss function we use for learning embeddings in this work, is given by:

$${\cal L}_{\rm distortion}(f)=\sum_{u\sim v}\left|\left(\frac{d_{Y}(f(u),f(v))}{d_{\cal G}(u,v)}\right)^{2}-1\right|.\tag{2}$$

(Compare eq. (2) with eq. (1).) On the other hand, *stress loss*, also known as *mean squared error loss*, is given by:

$${\mathcal{L}}_{\mathrm{stress}}(f)=\sum_{u\sim v}\left(d_{\mathcal{G}}(u,v)-d_{Y}(f(u),f(v))\right)^{2}.\tag{3}$$

In Table 12, we compare the impacts of the distortion loss and the mean squared error loss functions on graph embeddings for graph reconstruction. Our results show that with respect to the $D_{avg}$ and mAP evaluation metrics, $\ell_\infty$ embeddings trained with both loss functions consistently outperform the other embeddings, and for each space, the embeddings trained with the distortion loss outperform the embeddings trained with mean squared error. This justifies our choice of learning embeddings with the distortion loss.

| | BIO-DISEASOME | | CSPHD | |
|---|---|---|---|---|
| $(\lvert V\rvert, \lvert E\rvert)$ | (516, 1188) | | (1025, 1043) | |
| | $D_{avg}$ | mAP | $D_{avg}$ | mAP |
| Stress loss: $\mathbb{R}^{20}_{\ell_1}$ | 2.79±0.01 | 87.09 | 2.16±0.01 | 45.55 |
| Stress loss: $\mathbb{R}^{20}_{\ell_2}$ | 4.41±0.02 | 76.71 | 4.51±0.01 | 39.05 |
| Stress loss: $\mathbb{R}^{20}_{\ell_\infty}$ | 1.88±0.01 | 88.86 | 1.54±0.01 | 69.00 |
| Stress loss: $\mathbb{H}^{20}_{R}$ | 11.34±0.05 | 66.55 | 30.88±0.06 | 19.62 |
| Distortion loss: $\mathbb{R}^{20}_{\ell_1}$ | 1.62±0.01 | 89.14 | 1.59±0.02 | 52.34 |
| Distortion loss: $\mathbb{R}^{20}_{\ell_2}$ | 3.83±0.01 | 76.31 | 4.04±0.01 | 47.37 |
| Distortion loss: $\mathbb{R}^{20}_{\ell_\infty}$ | 0.53±0.01 | 98.24 | 0.42±0.01 | 99.28 |
| Distortion loss: $\mathbb{H}^{20}_{R}$ | 6.83±0.08 | 91.26 | 22.42±0.23 | 60.24 |

Table 12: Comparison of distortion and stress loss functions.

Space Dimension. In Tables 13, 14, 15, and 16, we vary the dimension size on datasets with varying curvatures. Our results show that for $\ell_1$ and $\ell_\infty$, performance generally improves as the dimension grows. We also observe that increasing the dimension does not seem to help much when there is a mismatch between the geometry of a space and a graph, such as is the case with $\mathbb{R}^n_{\ell_2}$ and $\mathbb{H}^n_{R}$ on Grid, and $\mathbb{H}^n_{R}$ on Fullerenes. Lastly, we highlight that López et al. (2021) similarly evaluated embeddings on the BIO-DISEASOME dataset in Euclidean, hyperbolic and spherical spaces, their products, and Siegel space at dimension n = 306; our low-dimensional normed space embeddings outperformed their high-dimensional embeddings.

Choice of Norm. The choice of the norm could be considered a hyperparameter to be tuned. In our experiments, $\ell_\infty$ performs best in the graph reconstruction task on 10/13 datasets, whereas for downstream tasks (link prediction and recommender systems), $\ell_1$ performs consistently best in almost all cases on five datasets. So, we recommend using $\ell_\infty$ for graph reconstruction and $\ell_1$ for downstream tasks. Alternatively, we recommend the product of $\ell_1$ and $\ell_\infty$ in all setups, as that product performs consistently as the second-best option behind $\ell_1$ and $\ell_\infty$.

Link Prediction. We evaluate the performance of task-agnostic shortest-path metric embeddings in a link prediction task. Here, *task-agnostic* and *task-specific* refer to the technique used for training the embeddings. We first embed the nodes of the Cora and Citeseer datasets by minimizing the distance-based distortion loss defined in eq. (1) using a training set of existing edges. We note that no node features are used in this process. Subsequently, we train a logistic regression classifier on the Hadamard product of source and target node embeddings to predict whether a link exists between the nodes for a training set that includes existing and non-existing edges. Lastly, we evaluate the classifier on a test set of existing and non-existing edges. We follow the same experimental setup for link prediction from Appendix D.2 and tune hyperparameters (including dimension size and learning rate) on the development set.

Table 17 compares the results for link prediction using the task-agnostic and task-specific embeddings on the Cora and Citeseer datasets. (The results for the task-specific embeddings are taken from Table 5.) We observe that the $\ell_1$ space performs best among task-agnostic embeddings on these link prediction datasets, but overall, task-agnostic embeddings underperform their task-specific counterparts in our setup. Even though shortest-path metric embeddings capture enough information to enable non-trivial accuracy in link prediction, they fall short of capturing the more nuanced information present in node features and higher-order proximity, resulting in performance that lags behind task-specific embeddings that do capture the aforementioned information. Thus, among the embeddings considered for link prediction in this work, we recommend task-specific $\ell_1$ GNN embeddings for practitioners.
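A compact sketch of the task-agnostic link prediction pipeline just described (scikit-learn; the array names are ours): the classifier only sees the Hadamard product of the endpoint embeddings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def edge_features(emb, pairs):
    """Hadamard product of source and target node embeddings."""
    return emb[pairs[:, 0]] * emb[pairs[:, 1]]

# emb: (num_nodes, dim) array of pre-trained shortest-path embeddings;
# pos/neg: (m, 2) arrays of existing / non-existing edges (train, test splits)
def link_prediction_auc(emb, pos_tr, neg_tr, pos_te, neg_te):
    X_tr = np.vstack([edge_features(emb, pos_tr), edge_features(emb, neg_tr)])
    y_tr = np.r_[np.ones(len(pos_tr)), np.zeros(len(neg_tr))]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    X_te = np.vstack([edge_features(emb, pos_te), edge_features(emb, neg_te)])
    y_te = np.r_[np.ones(len(pos_te)), np.zeros(len(neg_te))]
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```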
| | n = 20 | | n = 36 | | n = 66 | |
|---|---|---|---|---|---|---|
| | $D_{avg}$ | mAP | $D_{avg}$ | mAP | $D_{avg}$ | mAP |
| $\mathbb{R}^n_{\ell_1}$ | 1.59±0.02 | 52.34 | 1.32±0.01 | 68.35 | 1.11±0.01 | 82.40 |
| $\mathbb{R}^n_{\ell_2}$ | 4.04±0.01 | 47.37 | 3.84±0.01 | 62.35 | 3.77±0.02 | 68.86 |
| $\mathbb{R}^n_{\ell_\infty}$ | 0.42±0.01 | 99.28 | 0.50±0.01 | 99.16 | 0.47±0.01 | 99.57 |
| $\mathbb{H}^n_{R}$ | 22.42±0.23 | 60.24 | 21.81±0.20 | 74.62 | 21.54±0.15 | 75.45 |

Table 13: Results on CSPHD.

| | n = 20 | | n = 36 | | n = 66 | |
|---|---|---|---|---|---|---|
| | $D_{avg}$ | mAP | $D_{avg}$ | mAP | $D_{avg}$ | mAP |
| $\mathbb{R}^n_{\ell_1}$ | 1.08±0.00 | 100.00 | 0.36±0.00 | 100.00 | 0.21±0.00 | 100.00 |
| $\mathbb{R}^n_{\ell_2}$ | 11.24±0.00 | 100.00 | 11.23±0.00 | 100.00 | 11.22±0.00 | 100.00 |
| $\mathbb{R}^n_{\ell_\infty}$ | 0.13±0.00 | 100.00 | 0.02±0.00 | 100.00 | 0.02±0.00 | 100.00 |
| $\mathbb{H}^n_{R}$ | 25.23±0.05 | 63.74 | 25.30±0.05 | 68.69 | 25.25±0.05 | 68.78 |

Table 14: Results on Grid (zero curvature).

| | n = 20 | | n = 36 | | n = 66 | |
|---|---|---|---|---|---|---|
| | $D_{avg}$ | mAP | $D_{avg}$ | mAP | $D_{avg}$ | mAP |
| $\mathbb{R}^n_{\ell_1}$ | 1.62±0.02 | 73.56 | 0.96±0.02 | 90.61 | 0.68±0.02 | 97.80 |
| $\mathbb{R}^n_{\ell_2}$ | 3.92±0.04 | 42.30 | 3.13±0.02 | 55.19 | 2.77±0.02 | 55.59 |
| $\mathbb{R}^n_{\ell_\infty}$ | 0.15±0.01 | 100.00 | 0.02±0.01 | 100.00 | 0.02±0.01 | 100.00 |
| $\mathbb{H}^n_{R}$ | 0.54±0.02 | 100.00 | 0.43±0.02 | 100.00 | 0.51±0.02 | 100.00 |

Table 15: Results on Tree (negative curvature).

| | n = 20 | | n = 36 | | n = 66 | |
|---|---|---|---|---|---|---|
| | $D_{avg}$ | mAP | $D_{avg}$ | mAP | $D_{avg}$ | mAP |
| $\mathbb{R}^n_{\ell_1}$ | 3.32±0.02 | 100.00 | 3.06±0.03 | 100.00 | 2.97±0.03 | 100.00 |
| $\mathbb{R}^n_{\ell_2}$ | 8.53±0.03 | 100.00 | 8.45±0.03 | 100.00 | 8.25±0.03 | 100.00 |
| $\mathbb{R}^n_{\ell_\infty}$ | 2.95±0.02 | 100.00 | 1.96±0.01 | 100.00 | 1.59±0.01 | 100.00 |
| $\mathbb{H}^n_{R}$ | 25.18±0.05 | 84.13 | 25.34±0.04 | 84.51 | 25.22±0.05 | 84.89 |

Table 16: Results on Fullerenes-140 (positive curvature).

## D.2 Link Prediction

Implementation Details. For each dataset, we use grid search to tune hyperparameters on the development set. Our hyperparameters include (a) dimension ∈ {32, 64, 128} and (b) learning rate ∈ {0.1, 0.01, 0.001}. We set the batch size to the number of nodes present in each graph dataset. We train for 1000 epochs and stop training when the loss on the development set has not decreased for 200 epochs. We report the average performance in AUC across five runs. Following Chami et al. (2019a), we use the Fermi-Dirac decoder to compute the likelihood of a link between node pairs, and generate negative sets by randomly selecting links from non-connected node pairs. All graph neural networks are trained by optimizing the cross-entropy loss function. We extend the implementation of Poincaré GCN (Chami et al., 2019a) to support the other three architectures, enabling them to operate in both hyperbolic space and product spaces. We reduce the learning rate by a factor of 5 if the GNNs cannot improve the performance after 50 epochs for hyperbolic and product spaces.
Model. Given a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with a vertex set $\mathcal{V}$ and edge set $\mathcal{E}$, node features $\mathbf{x}_u \in \mathbb{R}^d$ for each node $u \in \mathcal{V}$, and a target metric space $(Z, d_Z)$, a GNN is used to map each node u to an embedding $\mathbf{z}_u \in Z$. The Fermi-Dirac decoder is used to compute probability scores for edges:

$$p_{u,v}=\left(1+\exp\left(\frac{d_{Z}^{2}({\bf z}_{u},{\bf z}_{v})-r}{t}\right)\right)^{-1},\tag{4}$$

where $p_{u,v}$ is the probability of an edge existing between nodes u and v, $d_Z(\mathbf{z}_u, \mathbf{z}_v)$ is the distance between the embeddings of the nodes, r is a learnable parameter that adjusts the decision boundary, and t is a learnable temperature parameter that controls the sharpness of the decision boundary.

Loss Function. Given a training set $\mathcal{E}_{\text{train}} := \mathcal{E}_{\text{pos}} \cup \mathcal{E}_{\text{neg}}$ consisting of existing edges $\mathcal{E}_{\text{pos}}$ and non-existing edges $\mathcal{E}_{\text{neg}}$, the binary cross-entropy loss used to train the model is given by:

$$\mathcal{L}=-\left(\sum_{(u,v)\in\mathcal{E}_{\text{pos}}}\log(\sigma(p_{u,v}))+\sum_{(u,v)\in\mathcal{E}_{\text{neg}}}\log(1-\sigma(p_{u,v}))\right),\tag{5}$$

where $p_{u,v}$ is the output of the model and $\sigma(x) = \frac{1}{1+e^{-x}}$ is the sigmoid function.
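A short PyTorch sketch of eqs. (4)–(5) (ours; it treats $p_{u,v}$ itself as the edge probability inside the cross-entropy, and models r and t as learnable scalars):

```python
import torch

class FermiDiracDecoder(torch.nn.Module):
    """Edge probability p_uv = 1 / (1 + exp((d^2 - r) / t)), as in eq. (4)."""
    def __init__(self):
        super().__init__()
        self.r = torch.nn.Parameter(torch.tensor(2.0))  # decision boundary
        self.t = torch.nn.Parameter(torch.tensor(1.0))  # temperature

    def forward(self, sq_dist):
        # 1 / (1 + exp(z)) == sigmoid(-z)
        return torch.sigmoid(-(sq_dist - self.r) / self.t)

decoder = FermiDiracDecoder()
# sq_dist: squared embedding distances for positive and negative edges (toy)
sq_pos, sq_neg = torch.rand(8) * 2, torch.rand(8) * 6
p = torch.cat([decoder(sq_pos), decoder(sq_neg)])
y = torch.cat([torch.ones(8), torch.zeros(8)])
loss = torch.nn.functional.binary_cross_entropy(p, y)  # cross-entropy of eq. (5)
```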
Space Dimension. In Table 18, we vary the dimension size on the Cora dataset for link prediction. Our results show that $\ell_1$ performs consistently best across GNNs and dimensions from 32 to 128, and overall, the results show improvement with increased dimension.

| Embedding | Cora | Citeseer |
|---|---|---|
| Low-distortion embeddings (task-agnostic): $\mathbb{R}^n_{\ell_1}$ | 81.3±0.3 | 76.3±0.5 |
| Low-distortion embeddings (task-agnostic): $\mathbb{R}^n_{\ell_2}$ | 79.5±0.4 | 73.8±0.3 |
| Low-distortion embeddings (task-agnostic): $\mathbb{R}^n_{\ell_\infty}$ | 77.0±0.3 | 76.3±0.5 |
| GCN embeddings (task-specific): GCN in $\mathbb{R}^n_{\ell_1}$ | 93.4±0.3 | 93.1±0.3 |
| GCN embeddings (task-specific): GCN in $\mathbb{R}^n_{\ell_2}$ | 92.1±0.5 | 91.4±0.5 |
| GCN embeddings (task-specific): GCN in $\mathbb{R}^n_{\ell_\infty}$ | 89.5±0.4 | 90.3±0.4 |

Table 17: Results for link prediction on Cora and Citeseer using different embeddings.

| | n = 32 | | | | n = 64 | | | | n = 128 | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | GCN | GAT | SGC | GIN | GCN | GAT | SGC | GIN | GCN | GAT | SGC | GIN |
| $\mathbb{R}^n_{\ell_1}$ | 93.4±0.3 | 91.2±0.3 | 92.5±0.3 | 90.2±0.4 | 92.1±0.3 | 92.2±0.3 | 93.7±0.5 | 91.6±0.5 | 92.5±0.3 | 92.8±0.4 | 93.0±0.3 | 91.0±0.3 |
| $\mathbb{R}^n_{\ell_2}$ | 91.3±0.3 | 91.1±0.2 | 90.7±0.1 | 90.2±0.5 | 91.5±0.5 | 90.9±0.5 | 90.8±0.3 | 89.5±0.5 | 92.1±0.5 | 91.7±0.5 | 91.1±0.3 | 89.2±0.5 |
| $\mathbb{R}^n_{\ell_\infty}$ | 89.0±0.4 | 86.2±0.2 | 87.6±0.3 | 87.4±0.4 | 89.1±0.4 | 87.2±0.5 | 88.8±0.3 | 88.4±0.5 | 89.5±0.4 | 88.2±0.5 | 87.8±0.3 | 87.2±0.5 |
| $\mathbb{H}^n_{R}$ | 84.8±0.3 | 84.3±0.2 | 87.7±0.3 | 86.6±0.3 | 86.1±0.3 | 89.0±0.5 | 88.7±0.3 | 85.9±0.3 | 86.1±0.5 | 92.1±0.6 | 89.9±0.4 | 87.7±0.3 |

Table 18: Results on Cora for link prediction.

## D.3 Recommender Systems

Implementation Details. We follow a metric learning approach (Vinh Tran et al., 2020), with the implementation of all baselines taken from López et al. (2021). We minimize the hinge loss function for ml-100k and last.fm, while minimizing the binary cross-entropy (BCE) function for MeetUp. We use the RSGD optimizer (Bonnabel, 2011) to tune graph node embeddings. In all setups, we run the same grid search to train recommender systems. We train for 500 epochs, and reduce the learning rate by a factor of 5 if the model does not improve the performance after 50 epochs. We stop training when the loss on the dev set has not decreased for 50 epochs. We use the burn-in strategy (Nickel & Kiela, 2017; Cruceru et al., 2020) that trains recommender systems with a 10-times-smaller learning rate for the first 10 epochs. We experiment with three hyperparameters: (a) learning rate ∈ {0.1, 0.01, 0.001}; (b) batch size ∈ {512, 1024, 2048}; and (c) maximum gradient norm ∈ {5, 10, 50}. Table 19 reports the stats of all the bipartite graphs in the recommendation task.

| Dataset | Users | Items | Interactions | Density (%) |
|---|---|---|---|---|
| ml-100k | 943 | 1,682 | 100,000 | 6.30 |
| last.fm | 1,892 | 17,632 | 92,834 | 0.28 |
| meetup-nyc | 46,895 | 16,612 | 277,863 | 0.04 |

Table 19: Recommender system dataset stats.

Model. Given a set of entities $\mathcal{E}$ and a target metric space $(X, d_X)$, we associate with each entity $e \in \mathcal{E}$ an embedding $f(e) \in X$ and bias terms $b_{e,\mathrm{lhs}}, b_{e,\mathrm{rhs}} \in \mathbb{R}$, where $f : \mathcal{E} \to X$ is a learnable embedding function. Given a pair of entities $e_1, e_2 \in \mathcal{E}$, the model computes a similarity score $\phi(e_1, e_2)$ as follows:

$$\phi_{f,b,X}(e_{1},e_{2}):=b_{e_{1},\mathrm{lhs}}+b_{e_{2},\mathrm{rhs}}-d_{X}^{2}(f(e_{1}),f(e_{2})).\tag{6}$$

Subtracting the squared distance ensures that entities whose embeddings are closer in the metric space have a higher score, making it a suitable representation of similarity. The model we use is *shallow*: it learns a collection of points $f(e) \in X$ indexed by the entities $e \in \mathcal{E}$. In our setting, $\mathcal{E} = \mathcal{U} \cup \mathcal{V}$, where $\mathcal{U}$ is the space of users and $\mathcal{V}$ is the space of items.

Hinge Loss Function. Given a set $\mathcal{T} = \{(u, v)\}$ of observed user-item interactions, the hinge loss function is given by:

$$\mathcal{L}=\sum_{(u,v)\in\mathcal{T}}\sum_{(u,w)\not\in\mathcal{T}}[m-\phi_{f,b,X}(u,v)+\phi_{f,b,X}(u,w)]_{+},\tag{7}$$

where w is an item the user u has not interacted with, m is the hinge margin, and $[z]_+ = \max(0, z)$. For each user u, we generate a negative set by randomly selecting 100 items that the user has not interacted with.

Binary Cross-Entropy Loss Function. Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be a set of observed user-item interactions and a set of non-interactions, respectively. Consider $\mathcal{T} = \mathcal{T}_1 \cup \mathcal{T}_2$ as the collection of all interactions and non-interactions. For each pair $(u, v) \in \mathcal{T}$, let $y_{u,v} \in \{0, 1\}$ denote the true label: if the pair belongs to $\mathcal{T}_1$, then $y_{u,v} = 1$; otherwise $y_{u,v} = 0$. The BCE loss function is given by:

$${\mathcal{L}}=\sum_{(u,v)\in{\mathcal{T}}}-y_{u,v}\cdot\log(\sigma(\phi_{f,b,X}(u,v)))-(1-y_{u,v})\cdot\log(1-\sigma(\phi_{f,b,X}(u,v))),\tag{8}$$

where $\sigma(x) = \frac{1}{1+e^{-x}}$ is the sigmoid function. For each user u, we generate a negative set by randomly selecting one item that the user has not interacted with.
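A minimal sketch of the scoring function and ranking loss of eqs. (6)–(7) (ours; the $\ell_1$ distance and the sampled negatives are illustrative assumptions, and the hinge is written so that the positive item must outscore each negative by the margin m):

```python
import numpy as np

def score(emb, b_lhs, b_rhs, u, v):
    """phi(u, v) = b_u,lhs + b_v,rhs - d_X^2(f(u), f(v)), here with d_X = l_1."""
    d = np.abs(emb[u] - emb[v]).sum()
    return b_lhs[u] + b_rhs[v] - d ** 2

def hinge_loss(emb, b_lhs, b_rhs, u, v, negatives, m=1.0):
    """Ranking hinge: penalize negatives w that come within margin m of v."""
    s_pos = score(emb, b_lhs, b_rhs, u, v)
    return sum(max(0.0, m - s_pos + score(emb, b_lhs, b_rhs, u, w))
               for w in negatives)
```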
## E Supplementary Graph Reconstruction Analysis

Results of Expander Graphs. For readability, we choose to embed a small expander graph into various spaces and visually compare the embedding distortion in these spaces, as displayed in Figure 4. We find that both normed spaces perform much better than the other spaces. Further, we see that the graph undergoes small distortion in the $\ell_1$ space and unnoticeable distortion in the $\ell_\infty$ space when dealing with a small expander, although embedding expanders into normed spaces is a well-known challenge (Linial et al., 1995, Proposition 4.2).

![25_image_0.png](25_image_0.png)

Figure 4: Embedding distortion shown in various spaces on a small expander-chordal graph. Color range indicates distortion levels.

## F Trees, Grids, and Fullerenes

In our Large Graph Representational Capacity experiments (Section 4), we used trees, grids, and fullerenes as discretizations of manifolds with negative, zero, and positive curvatures, respectively. Refer to Figure 5 for a visual illustration of these discretizations.

![26_image_0.png](26_image_0.png)

Figure 5: Top: surfaces of negative, zero, and positive curvature (from left to right). Bottom: graphs of negative, zero, and positive curvature (from left to right).

In chemistry, a *fullerene* is any molecule composed entirely of carbon in the form of a hollow spherical, ellipsoidal, or cylindrical mesh. In our experiments, we used combinatorial graphs representing spherical fullerenes. We generated the fullerene graphs using the graphs.fullerenes() function from SageMath (The Sage Developers, 2023). The number of possible fullerenes grows fast as a function of the number of nodes (OEIS Foundation Inc., 2023, A007894), and we used the first fullerene graph generated by graphs.fullerenes() for each node count. The graph data for the specific fullerenes used in our experiments can be found in our code repository, ensuring reproducibility and facilitating further analysis. Trees and grids are well-known and require no further description.
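For reference, a minimal snippet of the generation step just described (ours); it must be run inside SageMath, and graphs.fullerenes additionally requires Sage's optional buckygen package:

```python
# Run inside SageMath; graphs.fullerenes(n) iterates over all fullerene
# graphs on n vertices (requires the optional buckygen package).
for n in [60, 140, 180]:
    F = next(graphs.fullerenes(n))   # first fullerene graph for this node count
    print(n, F.order(), F.size())    # node and edge counts of the cubic planar graph
```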
Review 1:
Summary:
This paper explores normed spaces for embedding finite metric spaces with low distortion bounds in low dimensions. Drawing inspiration from the theoretical foundations in prior works (Johnson & Lindenstrauss, 1984; Bourgain, 1985; Linial et al., 1995; Krauthgamer et al., 2004), the authors advocate for normed spaces as a flexible and computationally efficient alternative to popular Riemannian manifolds for learning graph embeddings. Through extensive experimentation on synthetic and real-world graph reconstruction benchmarks, normed space embeddings demonstrate superior performance compared to traditional manifolds while requiring significantly fewer computational resources. Additionally, empirical validation across graph families associated with various curvatures underscores the adaptability of normed spaces in capturing diverse graph structures as graph sizes increase. Furthermore, the paper showcases the practical utility of normed space embeddings in tasks such as link prediction and recommender systems, offering a valuable tool for geometric graph representation learning.
Strengths and Weaknesses:
Strengths:
1. The paper offers a novel approach by underscoring normed spaces as a more flexible, computationally efficient, and technically less challenging alternative to popular methods for learning graph embeddings.
2. The proposal is grounded in theoretical results from discrete geometry, providing a solid foundation for using normed spaces in embedding finite metric spaces with low distortion bounds in low dimensions.
3. Empirical evaluation on synthetic and real-world benchmark graph datasets demonstrates the superior representational capacity of normed spaces, outperforming various Riemannian manifold alternatives across diverse graph structures and curvatures.
Weaknesses:
I do not see any major weaknesses. The requested changes section lists my concerns, questions, and suggestions below.
Requested Changes:
The paper is generally well written; however, from the point of view of a general reader in ML, it would be helpful to make these changes.
1. Section 3 would be much better with some more background on normed spaces and other embedding spaces. I can see it in the Appendix; it would be helpful to have an abridged version of it in the main paper.
2. In Section 3, the main point being made is "large classes of finite metric spaces can in theory be abstractly embedded with low theoretical bounds on distortion in low dimensional normed spaces". It would be better to clearly state this at the start of the section and then give the evidence by discussing the prior works, as is done in the middle paragraphs. If possible, add some more details about the kind of metric spaces for which the results apply. It would be great if these results could be discussed in depth in the Appendix, with a table summarizing the results in the main paper.
3. The authors did not discuss pseudo-Euclidean embeddings (PSE). These are also a great choice for embedding discrete metric spaces, since any discrete metric space is isometrically embeddable in PSE spaces and they also contain normed vector spaces. See Figure 2.1 in [1]. I am curious to see how these embeddings would perform for the settings considered in the paper. Please see the references below for PSE.
4. Have a table (maybe in the appendix) summarizing the various embedding spaces used in the experimental results.
[1] Goldfarb 1985, A new approach to Pattern Recognition, Progress in Pattern Recognition.
[2] Vishwakarma & Sala, 2022, Lifting Weak Supervision to Structured Prediction, NeurIPS 2022.
Broader Impact Concerns: NA

==================================================

Review 2:
Summary:
The authors propose using normed spaces, with particular emphasis on $\mathbb R^n_{\ell^1}, \mathbb R^n_{\ell^\infty}$, to represent nodes in a graph. They contrast this with metric spaces such as hyperbolic or spherical embeddings, which have been motivated by the fact that their metric may more naturally align with capturing distances in certain graphs, and have provable bounds on the distortion. The authors of this work show empirically that embeddings in $\mathbb R^n_{\ell^1}$ and $\mathbb R^n_{\ell^\infty}$ can be effectively trained via gradient descent to outperform a variety of alternative geometries, including hyperbolic and spherical, on graph reconstruction, link prediction, and recommendation.
Strengths and Weaknesses:
### Strengths
The paper is very well written, includes the relevant background and cites prior work appropriately. The authors support their claims in both synthetic settings as well as real-world tasks, and the results look significant. The authors' proposed embedding space is also computationally more efficient than the baselines, and more straightforward to implement.
### Weaknesses
No theoretical justification for the observed results is presented. The authors' claims rest entirely on empirical observation, and since all methods under consideration are trained via gradient descent, this leaves open the possibility that hyperparameter tuning or different training techniques could allow opposite results to be achieved.
The size of the graphs under consideration is relatively small; it is unclear if these benefits would translate to graphs of larger scale.
Requested Changes:
**Asymmetric Loss Function:** In the discussion here the authors claim that Figure 4 (b) and (c) shows that the asymmetry in the loss function has a bigger impact on the small tree. This is not clear to me from the figures. The asymmetry in the loss function would have suggested that nodes which are closer than they should be get less of a penalty than nodes which are farther apart, but the graphs show the opposite (namely, that smaller trees have a larger number of nodes which are farther apart than they should be). In addition, I think more graphs should be inspected if one wants to make a more general statement in this regard. For example, trying a variety of graph sizes and calculating, for each one, the number of pairs which over- vs. under-estimate the distance, and then plotting the average of this for each graph size. To be fair, however, I think the asymmetry is not much of an issue - after all, the loss function directly targets the distortion, so it is minimizing exactly the metric it will be evaluated on.
**Link Prediction:** In Section 4.1 "baseline experiments", the authors state that their focus was on settings with small dimensions, where it was known that geometric embeddings tended to outperform Euclidean. The results in Table 3 suggested $\mathbb R_{\ell^2}$ and $\mathbb R_{\ell^\infty}$ achieved lower distortion even when $n$ was small. When evaluating link prediction in Section 4.2, however, the authors switch to using a larger embedding dimension. Are larger embeddings required to see a benefit here? Do the results of Section 4.1, where $\mathbb R^n_{\ell^2}$ and $\mathbb R_{\ell^\infty}^n$ outperformed alternative geometric embeddings in low dimensions, not transfer to this setting?
**Training vs. Representational Capacity:** Some of the proposed baselines have very strong theoretical bounds on distortion for certain graphs - for example, the fact that hyperbolic space can embed any tree with arbitrarily low distortion - whereas the results for normed spaces are less impressive. Thus, it seems to me that an implication of the authors' claims is that normed spaces can be *trained* more effectively than their more complicated geometric counterparts. On the other hand, for $\mathbb R_{\ell^\infty}$ gradient descent would essentially be operating as coordinate descent, since only one dimension would appear at a time. Have I correctly inferred the implication of the authors' claims, and if so, how is this explained in light of the sparse gradient in the $\mathbb R_{\ell^\infty}$ setting?
Broader Impact Concerns: None

==================================================

Review 3:
Summary:
This work proposes to embed nodes of a graph in normed spaces. The authors use a very simple algorithm which randomly initializes the embeddings of the nodes and uses gradient descent to minimize an asymmetric loss function which compares norm distances against shortest path distances. The proposed embedding method is evaluated on the task of graph reconstruction, but also on the task of link prediction and on a recommendation task. In most cases, the normed space embeddings outperform the other baseline embeddings.
Strengths and Weaknesses:
Strengths:
- The proposed method is simple and can be integrated into different techniques. Evaluating the loss function is computationally efficient in case the shortest paths are pre-computed. Thus, the proposed method can be applied to very large graphs.
- The presented empirical results are strong. Normed space embeddings lead to low distortions and can thus tackle the graph reconstruction problem better than other types of embeddings.
- The paper is well-written and easy to read. All details are clearly explained.
Weaknesses:
- It is not clear to me how Section 3 is connected to the rest of the paper. In my understanding, the theoretical statements presented in this Section do not apply to the proposed embeddings, and the authors should explain why.
- There is a lack of explanations or insights about the employed loss function. What are the advantages of such an asymmetric loss function over standard loss functions such as the mean squared error or the mean absolute error?
- The learned embeddings are not directly compatible with standard machine learning algorithms that operate in Euclidean spaces. Embeddings serve as general representations of objects that can be used in different downstream tasks. Could the learned embeddings be combined with a standard classification algorithm such as logistic regression or an MLP?
- In the reported results, the embedding dimension is always set to 20. Typically, the embedding dimension is treated as a hyperparameter, since different values might lead to very different results.
- No intuition is given on why a specific norm ($\ell_\infty$) works better than the rest of the norms in most cases. It would be useful to a practitioner to know which norm to use in a specific application.
Requested Changes:
The requested changes listed below are related to the weaknesses of the paper presented above.
- From Section 3: every tree $T$ with $n$ vertices can be embedded isometrically in $\ell_\infty^{\mathcal{O}(\log n)}$.
Will the proposed learning algorithm embed isometrically a tree $T$ with $n$ vertices in $\ell_\infty^{\mathcal{O}(\log n)}$? If the answer is no, please explain why. - Give a justification why the proposed loss function is chosen over standard loss functions such as the mean squared error. I would suggest the authors provide some empirical results where the proposed function is replaced by the mean squared error. - I would suggest the authors investigate whether the proposed embeddings could be used along with standard learning algorithms. For example, the authors could evaluate the learned embeddings + logistic regression in a node classification task (such as Cora, etc.). - The authors should experiment with different embedding dimension sizes. Ideally, the embedding dimension size could be treated as a hyperparameter and thus be tuned. - Please, provide some intuition on how a practitioner can choose which norm to employ in a specific application. Broader Impact Concerns: no concerns. ================================================== Metareview: Recommendation: Accept as is Comment: The paper studies network embeddings into normed spaces. This builds on earlier work embedding graphs into Riemannian manifolds, which showed that Euclidean embeddings can be outperformed (i.e., lower distortion at the same number of dimensions). The authors propose an even simpler approach. The work presents empirical results showing very strong results. All reviewers suggest accepting. I agree with their assessment. This is strong empirical work showing how a simple idea works very well. Such work is very useful, especially for a field where the trend is to look for more and more exotic manifolds to embed into. ==================================================
# AdaFed: Fair Federated Learning via Adaptive Common Descent Direction

Shayan Mohajer Hamidi *smohajer@uwaterloo.ca*
Department of Electrical and Computer Engineering
University of Waterloo

En-Hui Yang *ehyang@uwaterloo.ca*
Department of Electrical and Computer Engineering
University of Waterloo

Reviewed on OpenReview: *https://openreview.net/forum?id=rFecyFpFUp*

## Abstract

Federated learning (FL) is a promising technology via which some edge devices/clients collaboratively train a machine learning model orchestrated by a server. Learning an unfair model is known as a critical problem in federated learning, where the trained model may unfairly advantage or disadvantage some of the devices. To tackle this problem, in this work, we propose AdaFed. The goal of AdaFed is to find an updating direction for the server along which (i) all the clients' loss functions are decreasing; and (ii) more importantly, the loss functions for the clients with larger values decrease with a higher rate. AdaFed adaptively tunes this common direction based on the values of local gradients and loss functions. We validate the effectiveness of AdaFed on a suite of federated datasets, and demonstrate that AdaFed outperforms state-of-the-art fair FL methods.

## 1 Introduction

Conventionally, a machine learning (ML) model is trained in a centralized approach where the training data is available at a data center or a cloud server. However, in many new applications, devices often do not want to share their private data with a remote server. As a remedy, federated learning (FL) was proposed in McMahan et al. (2017), where each device participates in training using only its locally available dataset with the help of a server. Specifically, in FL, devices share only their local updates with the server, and not their raw datasets.

A well-known setup to carry out such decentralized training is FedAvg (McMahan et al., 2017), which combines local stochastic gradient descent (SGD) on each client with iterative model averaging. The server sends the most recent global model to some selected clients (Eichner et al., 2019; Wang et al., 2021a), and then these clients perform a number of epochs of local SGD on their local training data and send the local gradients back to the central server. The server then finds the (weighted) average of the gradients to update the global model, and the process repeats.

In FedAvg, the vector of *averaged gradients* computed by the server is in fact a common direction along which the global model is updated. However, finding the common direction in this manner may result in a direction which is not descent for some clients. Consequently, the learnt model could perform quite poorly once applied to the private dataset of the clients, yielding an unfair global model (Li et al., 2019a; Bonawitz et al., 2019; Kairouz et al., 2021); that is, although the average accuracy might be high, some clients whose data distributions differ from the majority of the clients are prone to perform poorly on the learnt model.

One possible method to find a direction that is descent for all the clients is to treat the FL task as a multi-objective minimization (MoM) problem (Hu et al., 2022). In this setup, a Pareto-stationary solution of the MoM yields a descent direction for all the clients. However, having a common descent direction is not enough *per se* to train a fair model with uniform test accuracies across the clients¹.
This is because data heterogeneity across different clients makes the local loss functions vary significantly in value, and therefore the loss functions with larger values should decrease at a higher rate to learn a fair model.

To address the above-mentioned issues and to train a fair global model, in this work, we propose AdaFed. The aim of AdaFed is to help the server find a common direction $d_t$ that (i) is a descent direction for all the clients, which is a *necessary* condition to decrease the clients' loss functions in the SGD algorithm; and (ii) along which the loss functions with larger values decrease at higher rates. The latter enforces a global model with uniform test accuracies across the clients. We note that if the directional derivatives of the clients' loss functions along the normalized common direction $d_t$ are all positive, then $-d_t$ is a common descent direction for all the clients. As such, AdaFed adaptively tunes $d_t$ such that these directional derivatives (i) remain positive over the course of the FL process, and (ii) are larger for loss functions with higher values, enforcing them to decrease more during the global update by the server.

The contributions of the paper are summarized as follows:

- We introduce AdaFed, a method to realize fair FL via an adaptive common descent direction.

- We provide a closed-form solution for the common direction found by AdaFed. This is in contrast with many existing fair FL methods, which deploy iterative or generic quadratic programming methods.

- Under some common assumptions in the FL literature, we prove the convergence of AdaFed under different FL setups to a Pareto-stationary solution.

- By conducting thorough experiments over seven different datasets (six vision datasets, and a language one), we show that AdaFed can yield a higher level of fairness among the clients while achieving similar prediction accuracy compared to the state-of-the-art fair FL algorithms.

- The experiments conducted in this paper evaluate many existing fair FL algorithms over different datasets under different FL setups, and therefore can pave the way for future research.

## 2 Related Works

There are many different perspectives in the literature to combat the problem of fairness in FL. These methods include client selection (Nishio & Yonetani, 2019; Huang et al., 2020a; 2022; Yang et al., 2021), contribution evaluation (Zhang et al., 2020; Lyu et al., 2020; Song et al., 2021; Le et al., 2021), incentive mechanisms (Zhang et al., 2021; Kang et al., 2019; Ye et al., 2020; Zhang et al., 2020), and methods based on the loss function. Specifically, our work falls into the latter category. In this approach, the goal is to attain uniform performance across the clients in terms of test accuracy. To this end, the works using this approach target reducing the variance of the test accuracy across the participating clients. In the following, we briefly review some of these works.

One of the pioneering methods in this realm is agnostic federated learning (AFL) (Mohri et al., 2019). AFL optimizes the global model for the worst-case realization of the weighted combination of the user distributions. Their approach boils down to solving a saddle-point optimization problem, for which they used a fast stochastic optimization algorithm. Yet, AFL performs well only for a small number of clients, and when the number of participating clients becomes large, the generalization guarantees of the model may not be satisfied.
Du et al. (2021) deployed the notion of AFL and proposed the AgnosticFair algorithm. Specifically, they linearly parametrized model weights by kernel functions and showed that AFL can be viewed as a special case of AgnosticFair. To overcome the generalization problem in AFL, q-fair federated learning (q-FFL) (Li et al., 2019a) was proposed to achieve more uniform test accuracy across users. The main idea of q-FFL stemmed from fair resource allocation methods in wireless communication networks. Afterward, Li et al. (2020a) developed TERM, a tilted empirical risk minimization algorithm which handles outliers and class imbalance in statistical estimation procedures. Compared to q-FFL, TERM has demonstrated better performance in many FL applications. Deploying a similar notion, Huang et al. (2020b) proposed using training accuracy and frequency to adjust the weights of devices to promote fairness. Furthermore, FCFC (Cui et al., 2021) minimizes the loss of the worst-performing client, leading to a version of AFL. Later, Li et al. (2021) devised Ditto, a multitask personalized FL algorithm. After optimizing a global objective function, Ditto allows local devices to run more steps of SGD, subject to some constraints, to minimize their own losses. Ditto can significantly improve testing accuracy among local devices and encourage fairness. Our approach is more similar to *FedMGDA+* (Hu et al., 2022), which treats the FL task as a multi-objective optimization problem. In this scenario, the goal is to minimize the loss function of each FL client simultaneously. To avoid sacrificing the performance of any client, *FedMGDA+* uses Pareto-stationary solutions to find a common descent direction for all selected clients.

¹Similarly to other fields like ML (Barocas et al., 2017), communications (Huaizhou et al., 2013), and justice (Rawls, 2020), the notion of fairness does not have a unique definition in FL. However, following (Li et al., 2019a; 2021), we use the standard deviation of the clients' test accuracies—and some other metrics discussed in Section 7—to measure how uniform the global model performs across the clients. Please refer to Appendix C for more in-depth discussions.

## 3 Notation And Preliminaries

## 3.1 Notation

We denote by [K] the set of integers $\{1, 2, \cdots, K\}$. In addition, we define $\{f_k\}_{k\in[K]} = \{f_1, f_2, \ldots, f_K\}$ for a scalar/function f. We use bold small letters to represent vectors. Denote by $u_i$ the i-th element of vector $\mathbf{u}$. For two vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^d$, we say $\mathbf{u} \leq \mathbf{v}$ iff $u_i \leq v_i$ for all $i \in [d]$; i.e., two vectors are compared w.r.t. partial ordering. In addition, denote by $\mathbf{v} \cdot \mathbf{u}$ their inner product, and by $\mathrm{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{v}\cdot\mathbf{u}}{\mathbf{u}\cdot\mathbf{u}}\,\mathbf{u}$ the projection of $\mathbf{v}$ onto the line spanned by $\mathbf{u}$.

## 3.2 Preliminaries And Definitions

In Hu et al. (2022), the authors demonstrated that FL can be regarded as a multi-objective minimization (MoM) problem. In particular, denote by $f(\theta) = \{f_k(\theta)\}_{k\in[K]}$ the set of local clients' objective functions. Then, the aim of MoM is to solve

$$\theta^{*}=\arg\min_{\theta}f(\theta),\tag{1}$$

where the minimization is performed w.r.t. the *partial ordering*. Finding $\theta^*$ could enforce fairness among the users since, by setting $\theta = \theta^*$, it is not possible to reduce any of the local objective functions $f_k$ without increasing at least another one. Here, $\theta^*$ is called a Pareto-optimal solution of Equation (1). In addition, the collection of function values $\{f_k(\theta^*)\}_{k\in[K]}$ of all the Pareto-optimal points $\theta^*$ is called the Pareto front.
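As a quick illustration of the partial ordering in Equation (1), the following sketch (ours, purely illustrative) checks Pareto dominance between two vectors of client losses:

```python
import numpy as np

def pareto_dominates(f_a, f_b):
    """True iff f_a <= f_b in the partial ordering with at least one strict
    coordinate, i.e. f_a improves some client loss without hurting any."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

print(pareto_dominates([0.4, 0.9], [0.5, 0.9]))   # True
print(pareto_dominates([0.4, 1.1], [0.5, 0.9]))   # False: incomparable vectors
```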
Although finding Pareto-optimal solutions can be challenging, there are several methods to identify Pareto-stationary solutions instead, which are defined as follows:

Definition 3.1. Pareto-stationary (Mukai, 1980): The vector $\theta^*$ is said to be Pareto-stationary iff there exists a convex combination of the gradient vectors $\{g_k(\theta^*)\}_{k\in[K]}$ which is equal to zero; that is, $\sum_{k=1}^{K} \lambda_k g_k(\theta^*) = 0$, where $\lambda \geq 0$ and $\sum_{k=1}^{K} \lambda_k = 1$.

Lemma 3.2. (Mukai, 1980) Any Pareto-optimal solution is Pareto-stationary. On the other hand, if all $\{f_k(\theta)\}_{k\in[K]}$'s are convex, then any Pareto-stationary solution is weakly Pareto-optimal².

There are many methods in the literature to find Pareto-stationary solutions, among which we elaborate on two well-known ones, namely linear scalarization and the multiple gradient descent algorithm (MGDA) (Mukai, 1980; Fliege & Svaiter, 2000; Désidéri, 2012).

²$\theta^*$ is called a weakly Pareto-optimal solution of Equation (1) if there does not exist any θ such that $f(\theta) < f(\theta^*)$; meaning that it is not possible to improve all of the objective functions in $f(\theta^*)$. Obviously, any Pareto-optimal solution is also weakly Pareto-optimal, but the converse may not hold.

- **Linear scalarization:** this approach is essentially the core principle behind the FedAvg algorithm. To elucidate, in FedAvg, the server updates θ by minimizing the weighted average of the clients' loss functions:

$$\min_{\mathbf{\theta}}f(\mathbf{\theta})=\sum_{k=1}^{K}\lambda_{k}f_{k}(\mathbf{\theta}),\tag{2}$$

where the weights $\{\lambda_k\}_{k\in[K]}$ are assigned by the server and satisfy $\sum_{k=1}^{K} \lambda_k = 1$. These fixed $\{\lambda_k\}_{k\in[K]}$ are assigned based on some a priori information about the clients, such as the size of their datasets. We note that different values for $\{\lambda_k\}_{k\in[K]}$ yield different Pareto-stationary solutions. Referring to Definition 3.1, any solution of Equation (2) is a Pareto-stationary solution of Equation (1). To perform FedAvg, at iteration t, client k, k ∈ [K], sends its gradient vector $g_k(\theta_t)$ to the server, and the server updates the global model as

$$\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_{t}-\eta_{t}\boldsymbol{d}_{t},\ \ \text{where}\ \ \boldsymbol{d}_{t}=\sum_{k=1}^{K}\lambda_{k}\boldsymbol{g}_{k}(\boldsymbol{\theta}_{t}).\tag{3}$$

However, linear scalarization can only converge to Pareto points that lie on the *convex* envelope of the Pareto front (Boyd & Vandenberghe, 2004). Furthermore, the weighted average of the gradients with pre-defined weights yields a vector $d_t$ whose direction might not be descent for all the clients, because some clients may have conflicting gradients with opposing directions due to the heterogeneity of their local datasets (Wang et al., 2021b). As a result, FedAvg may result in an unfair accuracy distribution among the clients (Li et al., 2019a; Mohri et al., 2019).

- **MGDA:** To mitigate the above issue, Hu et al. (2022) proposed to exploit the MGDA algorithm in FL to converge to a fair solution on the Pareto front. Unlike linear scalarization, MGDA adaptively tunes $\{\lambda_k\}_{k\in[K]}$ by finding the minimal-norm element of the convex hull of the gradient vectors, defined as follows (we drop the dependence of $g_k$ on $\theta_t$ for ease of notation hereafter):

$$\mathcal{G}=\{\mathbf{g}\in\mathbb{R}^{d}\,|\,\mathbf{g}=\sum_{k=1}^{K}\lambda_{k}\mathbf{g}_{k};\ \lambda_{k}\geq0;\ \sum_{k=1}^{K}\lambda_{k}=1\}.\tag{4}$$

Denote the minimal-norm element of $\mathcal{G}$ by $d(\mathcal{G})$. Then, either (i) $d(\mathcal{G}) = 0$, in which case, based on Lemma 3.2, the current point is Pareto-stationary; or (ii) $d(\mathcal{G}) \neq 0$ and the direction of $-d(\mathcal{G})$ is a common descent direction for all the objective functions $\{f_k(\theta)\}_{k\in[K]}$ (Désidéri, 2009), meaning that all the directional derivatives $\{g_k \cdot d(\mathcal{G})\}_{k\in[K]}$ are positive. Having positive directional derivatives is a *necessary* condition to ensure that the common direction is descent for all the objective functions.

## 4 Motivation And Methodology

We first discuss our motivation in Section 4.1, and then elaborate on the methodology in Section 4.2.

## 4.1 Motivation

Although any solution on the Pareto front is fair in the sense that decreasing one of the loss functions is not possible without sacrificing some others, not all such solutions impose uniformity among the loss functions (see Figure 1a). As such, we aim to find solutions on the Pareto front which enjoy such uniformity. First, we note that having a common descent direction is a necessary condition to find such uniform solutions, but it is not enough. Additionally, we stipulate that the rate of decrease in the loss function should be greater for clients whose loss functions are larger. In fact, the purpose of this paper is to find an update direction for the server that satisfies both of the following conditions at the same time:

- *Condition (I)*: It is a descent direction for all $\{f_k(\theta)\}_{k\in[K]}$, which is a *necessary* condition for the loss functions to decrease when the server updates the global model along that direction.
## 4 Motivation And Methodology

We first discuss our motivation in Section 4.1, and then elaborate on the methodology in Section 4.2.

## 4.1 Motivation

Although any solution on the Pareto front is fair in the sense that decreasing one of the loss functions is not possible without sacrificing some others, not all such solutions impose uniformity among the loss functions (see Figure 1a). As such, we aim to find solutions on the Pareto front which enjoy such uniformity. First, we note that having a common descent direction is a necessary condition for finding such uniform solutions, but it is not enough. Additionally, we stipulate that the rate of decrease in the loss function should be greater for clients whose loss functions are larger. In fact, the purpose of this paper is to find an update direction for the server that satisfies both of the following conditions at the same time:

- *Condition (I)*: It is a descent direction for all {fk(θ)}k∈[K], which is a *necessary* condition for the loss functions to decrease when the server updates the global model along that direction.

- *Condition (II)*: It is more inclined toward the clients with larger losses, so that the directional derivatives of the loss functions over the common direction are larger for those with larger loss functions.

![4_image_0.png](4_image_0.png)

Figure 1: (a) The Pareto front for two objective functions f1(θ) and f2(θ) is depicted. MGDA may converge to any point on the Pareto front. (b)-(c) Illustration of the convex hull G and the minimal-norm vector d(G) for two gradient vectors g1 and g2. In (b), ∥g1∥₂² < ∥g2∥₂², where the direction of d(G) is more inclined toward g1. In (c), ∥g1∥₂² = ∥g2∥₂² = 1, where the direction of d(G) is the same as that of the bisection of g1 and g2.

To satisfy *Condition (I)*, it is enough to find d(G) using the MGDA algorithm (as Hu et al. (2022) use MGDA to enforce fairness in the FL setup). Nevertheless, we aim to further satisfy *Condition (II)* on top of *Condition (I)*. To this end, we investigate the direction of d(G), and note that it is more inclined toward the gradient with the smallest norm, min{∥gk∥₂²}k∈[K]. For instance, consider the simple example depicted in Figure 1b, where ∥g1∥₂² < ∥g2∥₂². The convex hull G and the vector d(G) are depicted for g1 and g2. As seen, the direction of d(G) is mostly influenced by that of g1. However, this phenomenon is not favourable for satisfying *Condition (II)*: after some rounds of communication between the server and the clients, the norm of gk becomes small for those objective functions which are close to their minimum points. Consequently, the direction of d(G) is mostly controlled by these small gradients, which is undesirable. Note that gk · d(G) represents how fast fk(θ) changes if θ changes in the direction of d(G). In fact, the direction of d(G) should be more inclined toward the gradients of those clients with larger loss functions.
One possible solution could be to naively normalize {gk}k∈[K] by their norms to obtain {gk/∥gk∥₂}k∈[K], whose convex hull is denoted by Gnorm, and then use this normalized set of gradients to find d(Gnorm). Yet, the normalization makes all the {gk · d(Gnorm)}k∈[K] equal (see Figure 1c), which is still undesirable since the rate of decrease becomes equal for all {fk(θ)}k∈[K].

Based on these observations, the gradient vectors should be somehow *scaled* if one aims to also satisfy Condition (II). Finding such a *scaling* factor is not straightforward in general. To tackle this issue, and to be able to find a closed-form formula, we instead find the minimal-norm vector in the convex hull of mutually-orthogonal scaled gradients, and prove that this yields a common direction for which both *Conditions (I)* and *(II)* are satisfied.

## 4.2 Methodology

To devise an appropriate *scaling* as explained above, we carry out the following two phases.

## 4.2.1 Phase 1, Orthogonalization

In order to be able to find a closed-form formula for the common descent direction, in the first phase we orthogonalize the gradients. Once the gradient updates {gk}k∈[K] are transmitted by the clients, the server first generates a mutually-orthogonal³ set {g̃k}k∈[K] that spans the same K-dimensional subspace of R^d as that spanned by {gk}k∈[K]. To this aim, the server exploits a modified Gram–Schmidt orthogonalization process over {gk}k∈[K] in the following manner⁴:

$$\tilde{\mathbf{g}}_{1}=\mathbf{g}_{1}/|f_{1}|^{\gamma},\tag{5}$$

$$\tilde{\mathbf{g}}_{k}=\frac{\mathbf{g}_{k}-\sum_{i=1}^{k-1}\mathrm{proj}_{\tilde{\mathbf{g}}_{i}}(\mathbf{g}_{k})}{|f_{k}|^{\gamma}-\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}},\quad\text{for}\ k=2,3,\ldots,K,\tag{6}$$

where γ > 0 is a scalar.

³Here, orthogonality is in the sense of the standard inner product in Euclidean space.

⁴The reason for such normalization is to satisfy *Conditions I* and *II*. This will be proven later in this section.

- **Why is such orthogonalization possible?** First, note that the orthogonalization approach in *Phase 1* is feasible if we assume that the K gradient vectors {gk}k∈[K] are linearly independent. Indeed, this assumption is reasonable considering that (i) the gradient vectors {gk}k∈[K] are K vectors in a d-dimensional space, and d ≫ K for current deep neural networks (DNNs)⁵; and (ii) the gradient vectors have a random nature due to the non-iid distributions of the local datasets. The validity of this assumption is further confirmed in our thorough experiments over different datasets and models.

⁵Also, note that to tackle the non-iid distribution of user-specific data, it is a common practice that the server selects a different subset of clients in each round (McMahan et al., 2017).

## 4.2.2 Phase 2, Finding The Optimal λ∗

In this phase, we aim to find the minimum-norm vector in the convex hull of the mutually-orthogonal gradients found in *Phase 1*. First, denote by G̃ the convex hull of the gradient vectors {g̃k}k∈[K] obtained in *Phase 1*; that is,

$$\tilde{\mathcal{G}}=\{\mathbf{g}\in\mathbb{R}^{d}\,|\,\mathbf{g}=\sum_{k=1}^{K}\lambda_{k}\tilde{\mathbf{g}}_{k};\ \lambda_{k}\geq0;\ \sum_{k=1}^{K}\lambda_{k}=1\}.\tag{7}$$

In the following, we find the minimal-norm element in G̃, and then we show that this element is a descent direction for all the objective functions. Denote by λ∗ the weights corresponding to the minimal-norm vector in G̃. To find the weight vector λ∗, we solve

$$\mathbf{g}^{*}=\arg\min_{\mathbf{g}\in\tilde{\mathcal{G}}}\|\mathbf{g}\|_{2}^{2},\tag{8}$$

which accordingly finds λ∗.
For an element g ∈ G̃, we have

$$\|\mathbf{g}\|_{2}^{2}=\Big\|\sum_{k=1}^{K}\lambda_{k}\tilde{\mathbf{g}}_{k}\Big\|_{2}^{2}=\sum_{k=1}^{K}\lambda_{k}^{2}\|\tilde{\mathbf{g}}_{k}\|_{2}^{2},\tag{9}$$

where we used the fact that the {g̃k}k∈[K] are orthogonal. To solve Equation (8), we first ignore the inequality constraints λk ≥ 0, k ∈ [K], and then observe that these constraints are automatically satisfied. Therefore, we form the following Lagrangian to solve the minimization problem in Equation (8):

$$L(\boldsymbol{\lambda},\alpha)=\|\mathbf{g}\|_{2}^{2}-\alpha\Big(\sum_{k=1}^{K}\lambda_{k}-1\Big)=\sum_{k=1}^{K}\lambda_{k}^{2}\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}-\alpha\Big(\sum_{k=1}^{K}\lambda_{k}-1\Big).\tag{10}$$

Hence,

$$\frac{\partial L}{\partial\lambda_{k}}=2\lambda_{k}\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}-\alpha,\tag{11}$$

and by setting Equation (11) to zero, we obtain:

$$\lambda_{k}^{*}=\frac{\alpha}{2\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}.\tag{12}$$

On the other hand, since $\sum_{k=1}^{K}\lambda_{k}=1$, from Equation (12) we obtain

$$\alpha=\frac{2}{\sum_{k=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}},\tag{13}$$

from which the optimal λ∗ is obtained as follows:

$$\lambda_{k}^{*}=\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}\sum_{i=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{i}\|_{2}^{2}}},\quad\text{for}\ k\in[K].\tag{14}$$

Note that λ∗k > 0, and therefore the minimum-norm vector we found indeed belongs to G̃. Using the λ∗ found in (14), we can calculate $\mathbf{d}_{t}=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}$ as the minimum-norm element in the convex hull G̃. In the following (Theorem 4.1), we show that the negative of dt satisfies both *Conditions (I)* and *(II)*.

Theorem 4.1. The negative of $\mathbf{d}_{t}=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}$ satisfies both *Conditions (I)* and *(II)*.

Proof. We find the directional derivatives of the loss functions {fk}k∈[K] over dt. For all k ∈ [K] we have

$$\mathbf{g}_{k}\cdot\mathbf{d}_{t}=\Bigg(\tilde{\mathbf{g}}_{k}\Big(|f_{k}|^{\gamma}-\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}\Big)+\sum_{i=1}^{k-1}\mathrm{proj}_{\tilde{\mathbf{g}}_{i}}(\mathbf{g}_{k})\Bigg)\cdot\Big(\sum_{i=1}^{K}\lambda_{i}^{*}\tilde{\mathbf{g}}_{i}\Big)\tag{15}$$

$$=\lambda_{k}^{*}\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}\Big(|f_{k}|^{\gamma}-\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}\Big)+\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}\,\lambda_{i}^{*}\|\tilde{\mathbf{g}}_{i}\|_{2}^{2}\tag{16}$$

$$=\frac{\alpha}{2}\Big(|f_{k}|^{\gamma}-\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}\Big)+\frac{\alpha}{2}\sum_{i=1}^{k-1}\frac{\mathbf{g}_{k}\cdot\tilde{\mathbf{g}}_{i}}{\tilde{\mathbf{g}}_{i}\cdot\tilde{\mathbf{g}}_{i}}\tag{17}$$

$$=\frac{\alpha}{2}\,|f_{k}|^{\gamma}=\frac{|f_{k}|^{\gamma}}{\sum_{k=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}}>0,\tag{18}$$

where (i) Equation (15) is obtained by using the definition of g̃k in Equation (6), (ii) Equation (16) follows from the orthogonality of the vectors {g̃k}k∈[K], and (iii) Equation (17) is obtained by using Equation (12). As seen in Equation (18), the directional derivatives over dt are positive, meaning that the direction of −dt is descent for all {fk}k∈[K]. In addition, the values of these directional derivatives are proportional to |fk|^γ. This implies that if the server changes the global model in the direction of dt, the rate of decrease is higher for those functions with larger loss values. Thus, −dt satisfies both *Conditions (I)* and *(II)*.

Remark 4.2. As seen, Equation (14) yields a closed-form formula for the optimal weights of the orthogonal scaled gradients {g̃k}k∈[K], based on which the common direction is obtained. On the contrary, *FedMGDA+* (Hu et al., 2022) solves an iterative algorithm to find the updating directions. The complexity of such algorithms is greatly controlled by the size of the model (and the number of participating devices). As recent DNNs are large in size, deploying such iterative algorithms slows down the FL process. Furthermore, we note that the computational cost of the proposed algorithm is negligible (see Appendix F for details).
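To make Phases 1 and 2 concrete, the following NumPy sketch implements Equations (5), (6) and (14), and numerically checks the identity gk · dt = |fk|^γ / Σk 1/∥g̃k∥₂² from Equation (18). The function and variable names are ours, and the toy gradients and losses are arbitrary; this is an illustration of the closed form, not the authors' reference implementation.

```python
import numpy as np

def scaled_gram_schmidt(G: np.ndarray, f: np.ndarray, gamma: float) -> np.ndarray:
    """Phase 1, Equations (5)-(6): orthogonalize the rows of G (client gradients)
    with the |f_k|^gamma scaling; assumes nonzero denominators in Eq. (6)."""
    Gt = np.zeros_like(G)
    Gt[0] = G[0] / np.abs(f[0]) ** gamma
    for k in range(1, len(G)):
        c = Gt[:k] @ G[k] / np.sum(Gt[:k] ** 2, axis=1)  # (g_k . g~_i) / (g~_i . g~_i)
        Gt[k] = (G[k] - c @ Gt[:k]) / (np.abs(f[k]) ** gamma - c.sum())
    return Gt

def adafed_direction(G: np.ndarray, f: np.ndarray, gamma: float):
    """Phase 2, Equation (14): closed-form lambda*, then d_t = sum_k lambda*_k g~_k."""
    Gt = scaled_gram_schmidt(G, f, gamma)
    inv_sq = 1.0 / np.sum(Gt ** 2, axis=1)   # 1 / ||g~_k||_2^2
    lam = inv_sq / inv_sq.sum()              # Eq. (14); all entries are positive
    return lam @ Gt, Gt

rng = np.random.default_rng(0)
K, d, gamma = 4, 50, 1.0
G = rng.normal(size=(K, d))          # toy client gradients (linearly independent w.h.p.)
f = np.array([0.5, 1.0, 2.0, 4.0])   # toy client losses
d_t, Gt = adafed_direction(G, f, gamma)
lhs = G @ d_t                        # directional derivatives g_k . d_t
rhs = np.abs(f) ** gamma / np.sum(1.0 / np.sum(Gt ** 2, axis=1))
print(np.allclose(lhs, rhs))         # True: matches Equation (18)
```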
## 5 The AdaFed Algorithm

At iteration t, the server computes dt using the methodology described in Section 4.2, and then updates the global model as

$$\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_{t}-\eta_{t}\mathbf{d}_{t}.\tag{19}$$

Similarly to conventional GD, we note that updating the global model as in (19) is a *necessary* condition to have f(θt+1) ≤ f(θt). In Theorem 5.1, we state the *sufficient* condition to satisfy f(θt+1) ≤ f(θt).

Theorem 5.1. Assume that f = {fk}k∈[K] are L-Lipschitz smooth. If the step-size ηt ∈ [0, (2/L) min{|fk|^γ}k∈[K]], then f(θt+1) ≤ f(θt), and equality is achieved iff dt = 0.

Proof. If all the {fk}k∈[K] are L-smooth, then

$$f(\boldsymbol{\theta}_{t+1})\leq f(\boldsymbol{\theta}_{t})+\mathbf{g}^{T}(\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t})+\frac{L}{2}\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t}\|_{2}^{2}.\tag{20}$$

Now, for client k ∈ [K], by using the update rule of Equation (19) in Equation (20) we obtain

$$f_{k}(\boldsymbol{\theta}_{t+1})\leq f_{k}(\boldsymbol{\theta}_{t})-\eta_{t}\,\mathbf{g}_{k}\cdot\mathbf{d}_{t}+\eta_{t}^{2}\frac{L}{2}\|\mathbf{d}_{t}\|_{2}^{2}.\tag{21}$$

To impose fk(θt+1) ≤ fk(θt), we should have ηt gk · dt ≥ ηt² (L/2)∥dt∥₂²; that is,

$$\mathbf{g}_{k}\cdot\mathbf{d}_{t}\geq\frac{\eta_{t}L}{2}\sum_{k=1}^{K}\frac{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{4}\Big(\sum_{i=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{i}\|_{2}^{2}}\Big)^{2}}\tag{22}$$

$$\Leftrightarrow\quad\frac{|f_{k}|^{\gamma}}{\sum_{k=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}}\geq\frac{\eta_{t}L}{2}\,\frac{1}{\sum_{k=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}}\tag{23}$$

$$\Leftrightarrow\quad\eta_{t}\leq\frac{2}{L}\,|f_{k}|^{\gamma},\tag{24}$$

where in Equation (22) we expanded $\|\mathbf{d}_{t}\|_{2}^{2}=\sum_{k=1}^{K}(\lambda_{k}^{*})^{2}\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}$ using Equation (14), and in Equation (23) we used Equation (18). Therefore, if the step-size ηt ∈ [0, (2/L) min{|fk|^γ}k∈[K]], then f(θt+1) ≤ f(θt).

Lastly, similarly to many recent FL algorithms (McMahan et al., 2017; Li et al., 2019a; Hu et al., 2022), we allow each client to perform a couple of local epochs e before sending its update to the server. In this case, the pseudo-gradients (the opposites of the local updates) are abusively used as the gradient vectors. It is important to note that we provide a convergence guarantee for this scenario in Section 5.1. We summarize AdaFed in Algorithm 1.

## Algorithm 1 AdaFed

1: **Input:** Number of global epochs T, number of local epochs e, global learning rate ηt, local learning rate η, initial global model θ0, local datasets {Dk}k∈[K].
2: **for** t = 0, 1, . . . , T − 1 **do**
3: Server randomly selects a subset of devices St and sends θt to them.
4: **for** device k ∈ St in parallel **do** [local training]
5: Store the value θt in θinit; that is, θinit ← θt.
6: **for** e epochs **do**
7: Perform (stochastic) gradient descent over the local dataset Dk to update: θt ← θt − η∇fk(θt, Dk).
8: **end for**
9: Send the pseudo-gradient gk := θinit − θt and the local loss value fk(θt) to the server.
10: **end for**
11: **for** k = 1, 2, . . . , |St| **do**
12: Find g̃k from Equations (5) and (6).
13: **end for**
14: Find λ∗ from Equation (14).
15: Calculate $\mathbf{d}_{t}:=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}$.
16: θt+1 ← θt − ηt dt.
17: **end for**
18: **Output:** Global model θT.

Remark 5.2. When e > 1, an alternative approach is to use the accumulated loss rather than the loss from the last iteration in line (9) of Algorithm 1. However, based on our experiments, we observed that using the accumulated loss does not affect the overall performance of the algorithm, including its convergence speed, accuracy and fairness. This stands in contrast to the use of pseudo-gradients, which serves the clear purposes of accelerating convergence and reducing communication costs.
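The following self-contained sketch simulates Algorithm 1 on a toy problem with quadratic client losses. The construction and hyper-parameters are our own illustrative choices (untuned), and the run is kept short because the orthogonalization in Equations (5)-(6) becomes ill-conditioned once the client gradients grow nearly dependent close to the Pareto front.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, gamma, e = 5, 20, 1.0, 3
eta_local, eta_global, T = 0.1, 0.5, 30
A = [np.diag(rng.uniform(0.5, 2.0, size=d)) for _ in range(K)]  # toy per-client curvatures
b = [0.3 * rng.normal(size=d) for _ in range(K)]  # f_k(x) = 0.5 (x - b_k)' A_k (x - b_k)

def loss(k, x):
    return 0.5 * (x - b[k]) @ A[k] @ (x - b[k])

def local_update(k, theta):
    """Lines 5-9 of Algorithm 1: e epochs of local GD, then return the
    pseudo-gradient g_k := theta_init - theta and the final local loss."""
    x = theta.copy()
    for _ in range(e):
        x = x - eta_local * (A[k] @ (x - b[k]))
    return theta - x, loss(k, x)

def adafed_round(theta):
    """Lines 3-16 of Algorithm 1 with all clients participating."""
    pairs = [local_update(k, theta) for k in range(K)]
    G = np.stack([g for g, _ in pairs])
    f = np.array([fk for _, fk in pairs])
    Gt = np.zeros_like(G)                   # lines 11-13: Equations (5)-(6)
    Gt[0] = G[0] / abs(f[0]) ** gamma
    for k in range(1, K):
        c = Gt[:k] @ G[k] / np.sum(Gt[:k] ** 2, axis=1)
        Gt[k] = (G[k] - c @ Gt[:k]) / (abs(f[k]) ** gamma - c.sum())
    inv_sq = 1.0 / np.sum(Gt ** 2, axis=1)
    lam = inv_sq / inv_sq.sum()             # line 14: Equation (14)
    return theta - eta_global * (lam @ Gt)  # lines 15-16

theta = np.zeros(d)
print(np.round([loss(k, theta) for k in range(K)], 3))  # per-client losses before
for _ in range(T):
    theta = adafed_round(theta)
print(np.round([loss(k, theta) for k in range(K)], 3))  # per-client losses after T rounds
```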
## 5.1 Convergence Results

In the following, we prove the convergence guarantees of AdaFed based on how the clients update the local models: (i) using SGD with e = 1, (ii) using GD with e > 1, and (iii) using GD with e = 1. Of course, the strongest convergence guarantee is provided for the latter case.

Theorem 5.3 (e = 1 & local SGD). Assume that f = {fk}k∈[K] are l-Lipschitz continuous and L-Lipschitz smooth, and that the global step-size ηt satisfies the following three conditions: (i) ηt ∈ (0, 1/(2L)], (ii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\to\infty$, and (iii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\sigma_{t}<\infty$, where σt² = E[∥g̃λ∗ − g̃sλ∗s∥²] is the variance of the stochastic common descent direction. Then

$$\lim_{T\to\infty}\ \min_{t=0,\ldots,T}\mathbf{E}[\|\mathbf{d}_{t}\|]\to0.\tag{25}$$

Theorem 5.4 (e > 1 & local GD). Assume that f = {fk}k∈[K] are l-Lipschitz continuous and L-Lipschitz smooth. Denote by ηt and η the global and local learning rates, respectively. Also, define ζt = ∥λ∗ − λ∗e∥, where λ∗e is the optimum weight vector obtained from the pseudo-gradients after e local epochs. We have

$$\lim_{T\to\infty}\ \min_{t=0,\ldots,T}\|\mathbf{d}_{t}\|\to0,\tag{26}$$

if the following conditions are satisfied: (i) ηt ∈ (0, 1/(2L)], (ii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\to\infty$, (iii) $\lim_{t\to\infty}\eta_{t}\to0$, (iv) $\lim_{t\to\infty}\eta\to0$, and (v) $\lim_{t\to\infty}\zeta_{t}\to0$.

Before introducing Theorem 5.5, we first introduce some notation. Denote by ϑ the Pareto-stationary solution set⁶ of the minimization problem arg minθ f(θ). Then, denote by θ∗t the projection of θt onto the set ϑ; that is, θ∗t = arg min_{θ∈ϑ} ∥θt − θ∥₂².

⁶In general, the Pareto-stationary solutions of a multi-objective minimization problem form a set with infinite cardinality (Mukai, 1980).

Theorem 5.5 (e = 1 & local GD). Assume that f = {fk}k∈[K] are l-Lipschitz continuous and σ-convex, and that the global step-size ηt satisfies the following two conditions: (i) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}\to\infty$, and (ii) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}^{2}<\infty$. Then almost surely θt → θ∗t; that is,

$$\mathbb{P}\Big(\lim_{t\to\infty}\big(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\big)=0\Big)=1,\tag{27}$$

where P(E) denotes the probability of event E.

The proofs for Theorems 5.3 to 5.5 are provided in Appendices A.1 to A.3, respectively, where we further discuss that the assumptions made in the theorems are common in the FL literature. Note that all of Theorems 5.3 to 5.5 provide some type of convergence to a Pareto-optimal solution of the optimization problem in Equation (1). Specifically, a diminishing dt in Theorems 5.3 and 5.4 implies that we are approaching a Pareto-optimal point (Désidéri, 2009). On the other hand, Theorem 5.5 explicitly provides this convergence guarantee in an almost-sure fashion.

## 6 AdaFed Features And A Comparative Analysis With FedAdam

## 6.1 AdaFed Features

Aside from satisfying fairness among the users, we mention some notable features of AdaFed in this section. The inequality f(θt+1) ≤ f(θt) gives the clients motivation to participate in the FL task, as their loss functions would decrease upon participating.
In addition, since the common direction is more inclined toward the gradients of the loss functions with larger values, a new client could join the FL task in the middle of the FL process. In this case, for some consecutive rounds, the loss function for the newly-joined client decreases more compared to those of the other clients.

The parameter γ is a hyper-parameter of AdaFed. In fact, different γ values yield different levels of fairness; thus, γ should be tuned to achieve the desired fairness level⁷. In general, a moderate γ value enforces a larger respective directional derivative for the devices with the worst performance (larger loss functions), imposing more uniformity on the training accuracy distribution.

⁷Most (if not all) of the fair FL methods introduce an extra hyper-parameter to tune in order to establish a trade-off between fairness and accuracy, for instance: (i) ϵ in FedMGDA+ (Hu et al., 2022), (ii) q in q-FFL (Li et al., 2019a), and (iii) t in TERM (Li et al., 2020a). Similarly, AdaFed introduces a new parameter to make this trade-off.

Lastly, we note that AdaFed is orthogonal to popular FL methods such as FedProx (Li et al., 2020b) and q-FFL (Li et al., 2019a). Therefore, it could be combined with the existing FL algorithms to achieve a better performance, especially with those using personalization (Li et al., 2021).

## 6.2 Comparison With FedAdam And Its Variants

Similarly to AdaFed, there are some FL algorithms in the literature in which the server adaptively updates the global model. Despite this similarity, we note that the purpose of AdaFed is rather different from that of these algorithms. For instance, let us consider FedAdam (Reddi et al., 2020); this algorithm changes the global update rule of FedAvg from one-step SGD to one-step adaptive gradient optimization by adopting an Adam optimizer on the server side. Specifically, after gathering the local pseudo-gradients and finding their average as $\mathbf{g}_{t}=\frac{1}{|\mathcal{S}_{t}|}\sum_{k\in\mathcal{S}_{t}}\mathbf{g}_{t}^{k}$, the server updates the global model with the Adam optimizer:

$$m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})\mathbf{g}_{t},\tag{28}$$
$$v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})\mathbf{g}_{t}^{2},\tag{29}$$
$$\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_{t}+\eta\,\frac{m_{t}}{\sqrt{v_{t}}+\epsilon},\tag{30}$$

where β1 and β2 are two hyper-parameters of the algorithm, and ϵ is used for numerical stabilization purposes. We note that other variants of this algorithm, such as FedAdagrad and FedYogi (Reddi et al., 2020) and FedAMSGrad (Tong et al., 2020), involve slight changes in the variance term vt. The primary objective of FedAdam (as well as its variants) is to enhance convergence behavior (Reddi et al., 2020); this is achieved by retaining information from previous epochs, which helps prevent significant fluctuations in the server update. In contrast, AdaFed is designed to promote fairness among the clients. Such differences can be confirmed by comparing Algorithm 1 with Equations (28) to (30).
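For comparison, a minimal sketch of this server-side update (Equations (28) to (30)) is given below; the hyper-parameter defaults, moment initialization, and stand-in pseudo-gradients are our own assumptions for illustration.

```python
import numpy as np

def fedadam_update(theta, pseudo_grads, state, eta=0.1, beta1=0.9, beta2=0.99, eps=1e-3):
    """One FedAdam server step, Equations (28)-(30); `state` carries the moments."""
    g = np.mean(pseudo_grads, axis=0)              # average pseudo-gradient over S_t
    m = beta1 * state["m"] + (1 - beta1) * g       # Eq. (28): first moment
    v = beta2 * state["v"] + (1 - beta2) * g ** 2  # Eq. (29): element-wise second moment
    theta = theta + eta * m / (np.sqrt(v) + eps)   # Eq. (30): adaptive server update
    return theta, {"m": m, "v": v}

d = 10
theta = np.zeros(d)
state = {"m": np.zeros(d), "v": np.zeros(d)}
pseudo_grads = np.random.default_rng(2).normal(size=(4, d))  # stand-in for client updates
theta, state = fedadam_update(theta, pseudo_grads, state)
```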
## 7 Experiments

In this section, we conclude the paper with several experiments that demonstrate the performance of AdaFed and compare its effectiveness with state-of-the-art alternatives under several performance metrics.

- **Datasets:** We conduct a thorough set of experiments over **seven** datasets. The results for four datasets, namely CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), FEMNIST (Caldas et al., 2018) and Shakespeare (McMahan et al., 2017), are reported in this section; those for Fashion MNIST (Xiao et al., 2017), TinyImageNet (Le & Yang, 2015), and CINIC-10 (Darlow et al., 2018) are reported in Appendix D. In particular, in order to demonstrate the effectiveness of AdaFed in different FL scenarios, for each of the datasets reported in this section we consider two different FL setups. In addition, we tested the effectiveness of AdaFed over a real-world noisy dataset, namely Clothing1M (Xiao et al., 2015), in Appendix I.

- **Benchmarks:** We compare the performance of AdaFed against various fair FL algorithms in the literature, including q-FFL (Li et al., 2019a), TERM (Li et al., 2020a), FedMGDA+ (Hu et al., 2022), AFL (Mohri et al., 2019), Ditto (Li et al., 2021), FedFA (Huang et al., 2020b), and lastly FedAvg (McMahan et al., 2017). In our experiments, we conduct a grid search to find the best hyper-parameters for each of the benchmark methods, including AdaFed. The details are reported in Appendix E, and here we only report the results obtained with the best hyper-parameters.

- **Performance metrics:** Denote by ak the prediction accuracy on device k. We use $\bar{a}=\frac{1}{K}\sum_{k=1}^{K}a_{k}$ as the average test accuracy of the underlying FL algorithm, and $\sigma_{a}=\sqrt{\frac{1}{K}\sum_{k=1}^{K}(a_{k}-\bar{a})^{2}}$ as the standard deviation of the accuracy across the clients (similarly to (Li et al., 2019a; 2021)). Furthermore, we report the worst 10% (5%) and best 10% (5%) accuracies, a common metric in fair FL algorithms (Li et al., 2020a); see the sketch after Tables 1 and 2 below.

- **Notations:** We use **bold** and underlined numbers to denote the best and second-best performance, respectively. We use e and K to represent the number of local epochs and the number of clients, respectively.

| | Setup 1 | | | | Setup 2 | | | |
|-----------|-------|------|----------|---------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 5% | Best 5% | ā | σa | Worst 10% | Best 10% |
| FedAvg | 46.85 | 3.54 | 19.84 | 69.28 | 63.55 | 5.44 | 53.40 | 72.24 |
| q-FFL | 46.30 | 3.27 | 23.39 | 68.02 | 57.27 | 5.60 | 47.29 | 66.92 |
| FedMGDA+ | 45.34 | 3.37 | 24.00 | 68.51 | 62.05 | 4.88 | 52.69 | 70.77 |
| FedFA | 46.40 | 3.61 | 19.33 | 69.30 | 63.05 | 4.95 | 48.69 | 70.88 |
| TERM | 47.11 | 3.66 | 28.21 | 69.51 | 64.15 | 5.90 | 56.21 | 72.20 |
| Ditto | 46.31 | 3.44 | 27.14 | 68.44 | 63.49 | 5.70 | 55.99 | 71.34 |
| AdaFed | 46.42 | 3.01 | 31.12 | 69.41 | 64.80 | 4.50 | 58.24 | 72.45 |

Table 1: Test accuracy on CIFAR-10. The reported results are averaged over 5 different random seeds.

| | Setup 1 | | | | Setup 2 | | | |
|-----------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| FedAvg | 30.05 | 4.03 | 25.20 | 40.31 | 20.15 | 6.40 | 11.20 | 33.80 |
| q-FFL | 28.86 | 4.44 | 25.38 | 39.77 | 20.20 | 6.24 | 11.09 | 34.02 |
| FedMGDA+ | 29.12 | 4.17 | 25.67 | 39.71 | 20.15 | 5.41 | 11.12 | 33.92 |
| AFL | 30.28 | 3.68 | 25.33 | 39.45 | 18.92 | 4.90 | 11.29 | 28.60 |
| TERM | 30.34 | 3.51 | 27.03 | 39.35 | 17.88 | 5.98 | 10.09 | 31.68 |
| Ditto | 29.81 | 3.79 | 26.90 | 39.39 | 17.52 | 5.65 | 10.21 | 31.25 |
| AdaFed | 31.42 | 3.03 | 28.91 | 40.41 | 20.02 | 4.45 | 11.81 | 34.11 |

Table 2: Test accuracy on CIFAR-100. The reported results are averaged over 5 different random seeds.
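As referenced in the performance-metrics bullet above, the sketch below shows one plausible way to compute these metrics from per-client accuracies. The paper does not spell out the exact convention for the worst/best k% columns, so we read them as the mean accuracy over the worst/best k% of clients (an assumption); the function name and example values are ours.

```python
import numpy as np

def fairness_metrics(acc: np.ndarray, frac: float = 0.10) -> dict:
    """Average accuracy, std across clients, and mean over the worst/best `frac` of clients."""
    acc = np.sort(acc)                       # ascending per-client accuracies
    n = max(1, int(round(frac * len(acc))))
    return {
        "avg": float(acc.mean()),            # a_bar
        "std": float(acc.std()),             # sigma_a (population std, 1/K)
        f"worst {int(frac * 100)}%": float(acc[:n].mean()),
        f"best {int(frac * 100)}%": float(acc[-n:].mean()),
    }

acc = np.array([53.4, 61.0, 63.2, 64.8, 66.1, 67.0, 68.9, 70.2, 71.5, 72.2])
print(fairness_metrics(acc))
```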
## 7.1 CIFAR-10

The CIFAR-10 dataset (Krizhevsky et al., 2009) contains 40K training and 10K test colour images of size 32 × 32, labeled for 10 classes. The batch size is equal to 64 for both of the following setups.

- **Setup 1:** Following (Wang et al., 2021b), we sort the dataset based on the classes, and then split it into 200 shards. Each client randomly selects two shards without replacement so that all clients have local datasets of the same size. We use a feedforward neural network with 2 hidden layers. We fix e = 1 and K = 100. We carry out 2000 rounds of communication, and sample 10% of the clients in each round. We run SGD on the local datasets with step-size η = 0.1.

- **Setup 2:** We distribute the dataset among the clients deploying Dirichlet allocation (Wang et al., 2020) with β = 0.5. We use ResNet-18 (He et al., 2016) with Group Normalization (Wu & He, 2018). We perform 100 communication rounds, in each of which all clients participate. We set e = 1, K = 10 and η = 0.01.

Results for both setups are reported in Table 1. Additionally, we depict the average accuracy over the course of training for setup 1 in Appendix G.1.

## 7.2 CIFAR-100

CIFAR-100 (Krizhevsky et al., 2009) contains the same number of samples as CIFAR-10, yet it contains 100 classes instead. The model for both setups is ResNet-18 (He et al., 2016) with Group Normalization (Wu & He, 2018), and all clients participate in each round. We also set e = 1 and η = 0.01. The batch size is equal to 64. The results are reported in Table 2 for both of the following setups:

- **Setup 1**: We set K = 10 and β = 0.5 for Dirichlet allocation, and use 400 communication rounds.

- **Setup 2**: We set K = 50 and β = 0.05 for Dirichlet allocation, and use 200 communication rounds.

Additionally, we perform the same experiments with more local epochs, specifically e = 10, 20, as presented in Appendix H.

## 7.3 FEMNIST

FEMNIST (Federated Extended MNIST) (Caldas et al., 2018) is a federated image classification dataset distributed over 3,550 devices by the dataset creators. This dataset has 62 classes containing 28 × 28-pixel images of digits (0-9) and English characters (A-Z, a-z) written by different people. For implementation, we use a CNN model with 2 convolutional layers followed by 2 fully-connected layers. The batch size is 32, and e = 2 for both of the following setups:

- **FEMNIST-original:** We use the setting in Li et al. (2021), and randomly sample K = 500 devices (from the 3,550 ones) and train models using the default data stored on each device.
- **FEMNIST-skewed:** Here K = 100. We first sample 10 lower-case characters ('a'-'j') from Extended MNIST (EMNIST), and then randomly assign 5 classes to each of the 100 devices.

Similarly to (Li et al., 2019a), we use two new fairness metrics for this dataset: (i) the angle between the accuracy distribution and the all-ones vector 1, denoted by Angle (◦), and (ii) the KL divergence between the normalized accuracy a and the uniform distribution u, denoted by KL (a∥u). Results for both setups are reported in Table 3. In addition, we report the distribution of accuracies across clients in Appendix G.2.

| | FEMNIST-original | | | | FEMNIST-skewed | | | |
|-----------|-------|-------|-----------|----------|-------|-------|-----------|----------|
| Algorithm | ā | σa | Angle (◦) | KL (a∥u) | ā | σa | Angle (◦) | KL (a∥u) |
| FedAvg | 80.42 | 11.16 | 10.18 | 0.017 | 79.24 | 22.30 | 12.29 | 0.054 |
| q-FFL | 80.91 | 10.62 | 9.71 | 0.016 | 84.65 | 18.56 | 12.01 | 0.038 |
| FedMGDA+ | 81.00 | 10.41 | 10.04 | 0.016 | 85.41 | 17.36 | 11.63 | 0.032 |
| TERM | 81.08 | 10.32 | 9.15 | 0.015 | 84.29 | 13.88 | 11.27 | 0.025 |
| AFL | 82.45 | 9.85 | 9.01 | 0.012 | 85.21 | 14.92 | 11.44 | 0.027 |
| Ditto | 83.77 | 10.13 | 9.34 | 0.014 | 92.51 | 14.32 | 11.45 | 0.022 |
| AdaFed | 82.26 | 6.58 | 8.12 | 0.009 | 92.21 | 7.56 | 9.44 | 0.011 |

Table 3: Test accuracy on FEMNIST. The reported results are averaged over 5 different random seeds.

## 7.4 Text Data

We use *The Complete Works of William Shakespeare* (McMahan et al., 2017) as the dataset, and train an RNN whose input is an 80-character sequence to predict the next character. There are about 1,129 speaking roles in this dataset and, naturally, each speaking role in the play is treated as a device. Each device stores its own text data, which is used to train an RNN locally. The dataset is available on the LEAF website (Caldas et al., 2018). We use e = 1, and let all the devices participate in each round. The results are reported in Table 4 for the following two setups:

- **Setup 1**: Following McMahan et al. (2017), we subsample 31 speaking roles, and assign each role to a client (K = 31) to complete 500 communication rounds. We use a model with two LSTM layers (Hochreiter & Schmidhuber, 1997) and one densely-connected layer. The initial η = 0.8 with a decay rate of 0.95.

- **Setup 2**: Among the 31 speaking roles, the 20 with more than 10,000 samples are selected and assigned to 20 clients (K = 20). We use one LSTM followed by a fully-connected layer. η = 2, and the number of communication rounds is 100.

| | Setup 1 | | | | Setup 2 | | | |
|-----------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| FedAvg | 53.21 | 9.25 | 51.01 | 54.41 | 50.48 | 1.24 | 48.20 | 52.10 |
| q-FFL | 53.90 | 7.52 | 51.52 | 54.47 | 50.72 | 1.07 | 48.90 | 52.29 |
| FedMGDA+ | 53.08 | 8.14 | 52.84 | 54.51 | 50.41 | 1.09 | 48.18 | 51.99 |
| AFL | 54.58 | 8.44 | 52.87 | 55.84 | 52.45 | 1.23 | 50.02 | 54.17 |
| TERM | 54.16 | 8.21 | 52.09 | 55.15 | 52.17 | 1.11 | 49.14 | 53.62 |
| Ditto | 60.74 | 8.32 | 53.57 | 55.92 | 53.12 | 1.20 | 50.94 | 55.23 |
| AdaFed | 55.65 | 6.55 | 53.79 | 55.86 | 52.89 | 0.98 | 51.02 | 54.48 |

Table 4: Test accuracy on Shakespeare. The reported results are averaged over 5 different random seeds.

## 7.5 Analysis Of Results

Based on Tables 1 to 4, we can draw some notable insights. Compared to the other benchmark models, AdaFed leads to significantly fairer solutions. In addition, the average accuracy is not sacrificed; interestingly, in some cases it is even improved. We also note that the advantage of AdaFed becomes more pronounced when the level of non-iidness is high. For instance, referring to FEMNIST-skewed in Table 3, we observe a considerable superiority of AdaFed. Note that the average accuracy of Ditto over FEMNIST is greater than that of AdaFed. This is comprehensible, since Ditto provides a personalized solution to each device, while AdaFed only returns a global parameter θ. We observe a similar trend on the three other datasets reported in Appendix D. We further analyse the effect of the hyper-parameter γ in AdaFed in Appendix E.
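The level of non-iidness mentioned above is controlled in Sections 7.1 and 7.2 through the Dirichlet concentration parameter β. A minimal sketch of this standard partitioning recipe is shown below; it is our own illustrative implementation of the common scheme, not the code used for the paper's experiments.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, K: int, beta: float, seed: int = 0):
    """Split sample indices among K clients with Dirichlet(beta) class proportions.
    Smaller beta yields more skewed (more non-iid) local label distributions."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(K)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet(beta * np.ones(K))            # class-c share of each client
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            clients[k].extend(part.tolist())
    return clients

labels = np.random.default_rng(1).integers(0, 10, size=5000)  # stand-in for CIFAR-10 labels
parts = dirichlet_partition(labels, K=10, beta=0.5)
print([len(p) for p in parts])  # per-client dataset sizes
```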
## 7.5.1 Percentage Of Improved Clients

We measure the training loss before and after each communication round for all participating clients and report the percentage of clients whose loss function decreased or remained unchanged, defined as

$$\rho_{t}=\frac{\sum_{k\in\mathcal{S}_{t}}\mathbb{I}\{f_{k}(\boldsymbol{\theta}_{t+1})\leq f_{k}(\boldsymbol{\theta}_{t})\}}{|\mathcal{S}_{t}|},\tag{31}$$

where St is the set of participating clients in round t, and I(·) is the indicator function. Then, we plot ρt versus communication rounds for different fair FL benchmarks, including AdaFed. The curves for the CIFAR-10 and CIFAR-100 datasets are reported in Figure 2a and Figure 2b, respectively. As seen, both AdaFed and FedMGDA+ consistently outperform the other benchmark methods, in that fewer clients' performances get worse after participation. This is a unique feature of these two methods. We further note that after a sufficient number of communication rounds, the curves for both AdaFed and FedMGDA+ converge to 100% (with a bit of fluctuation).

## 7.5.2 Rate Of Decrease In Loss Function

In this part, we observe the loss function values of two clients over the course of training to verify Theorem 4.1. This theorem asserts that the rate of decrease in the loss function is higher for clients with larger initial loss function values. To this end, we select two clients—one with a low initial loss function and one with a high initial loss function—and depict their respective training losses as a function of communication rounds.

![13_image_0.png](13_image_0.png)

Figure 2: The percentage of improved clients as a function of communication rounds for (a) CIFAR-10 setup one in Section 7.1; and (b) CIFAR-100 setup one in Section 7.2.

![13_image_1.png](13_image_1.png)

Figure 3: The training loss for two clients trained in the AdaFed framework vs. the communication rounds for (a) CIFAR-10 setup one in Section 7.1; and (b) CIFAR-100 setup one in Section 7.2.

The curves are illustrated in Figure 3a and Figure 3b for the CIFAR-10 and CIFAR-100 datasets, respectively. As observed in both curves, the rate of decrease in the loss function of the client with the larger initial loss is higher. Additionally, close to the end of the training task, the loss values of the two clients converge to almost the same value, indicating fairness among the clients.

## 7.5.3 Convergence Of ∥dt∥

In this part, we aim to observe the behaviour of ∥dt∥ over the course of training. Particularly, we consider three cases following Theorems 5.3 to 5.5, namely (i) e = 1 & local SGD, (ii) e > 1 & local GD, and (iii) e = 1 & local GD. For training, we follow the setup in Section 7.1; however, we change e and the local training method (either GD or SGD) to generate the three cases mentioned above. Then, for these three cases, we normalize ∥dt∥ and depict it versus the communication rounds. As observed in Figures 4a to 4c, in all three cases ∥dt∥ tends to zero. Nonetheless, the curve for e = 1 & local GD is smoother.

![14_image_0.png](14_image_0.png)

Figure 4: The convergence of ∥dt∥ as a function of communication rounds for (a) e = 1 and local SGD, (b) e > 1 and local GD, and (c) e = 1 and local GD. The dataset is CIFAR-10.

## 8 Conclusion

In this paper, we proposed a method to enforce fairness in the FL task, dubbed AdaFed. In AdaFed, the aim is to adaptively tune a common direction along which the server updates the global model.
The common direction found by AdaFed enjoys two properties: (i) it is descent for all the local loss functions, and (ii) the loss functions for the clients with worst performance decrease with a higher rate along this direction. These properties were satisfied by using the notion of directional derivative in the multi-objective optimization task. We then derived a closed-form formula for such common direction, and proved that AdaFed converges to a Pareto-stationary point. The effectiveness of AdaFed was demonstrated via thorough experimental results. ## References Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. *Nips tutorial*, 1:2017, 2017. Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konečn`y, Stefano Mazzocchi, Brendan McMahan, et al. Towards federated learning at scale: System design. *Proceedings of machine learning and systems*, 1:374–388, 2019. Stephen P Boyd and Lieven Vandenberghe. *Convex optimization*. Cambridge university press, 2004. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečn`y, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. *arXiv preprint* arXiv:1812.01097, 2018. Sen Cui, Weishen Pan, Jian Liang, Changshui Zhang, and Fei Wang. Addressing algorithmic disparity and performance inconsistency in federated learning. *Advances in Neural Information Processing Systems*, 34: 26091–26102, 2021. Luke N Darlow, Elliot J Crowley, Antreas Antoniou, and Amos J Storkey. Cinic-10 is not imagenet or cifar-10. arXiv preprint arXiv:1810.03505, 2018. Jean-Antoine Désidéri. *Multiple-gradient descent algorithm (MGDA)*. PhD thesis, INRIA, 2009. Jean-Antoine Désidéri. Multiple-gradient descent algorithm (mgda) for multiobjective optimization. *Comptes* Rendus Mathematique, 350(5-6):313–318, 2012. Wei Du, Depeng Xu, Xintao Wu, and Hanghang Tong. Fairness-aware agnostic federated learning. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 181–189. SIAM, 2021. Hubert Eichner, Tomer Koren, Brendan McMahan, Nathan Srebro, and Kunal Talwar. Semi-cyclic stochastic gradient descent. In *International Conference on Machine Learning*, pp. 1764–1773. PMLR, 2019. Jörg Fliege and Benar Fux Svaiter. Steepest descent methods for multicriteria optimization. Mathematical methods of operations research, 51(3):479–494, 2000. Shayan Mohajer Hamidi and Oussama Damen. Fair wireless federated learning through the identification of a common descent direction. *IEEE Communications Letters*, pp. 1–1, 2024. doi: 10.1109/LCOMM.2024. 3350378. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997. Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, and Yaoliang Yu. Federated learning meets multi-objective optimization. *IEEE Transactions on Network Science and Engineering*, 2022. SHI Huaizhou, R Venkatesha Prasad, Ertan Onur, and IGMM Niemegeers. Fairness in wireless networks: Issues, measures and challenges. *IEEE Communications Surveys & Tutorials*, 16(1):5–24, 2013. Tiansheng Huang, Weiwei Lin, Wentai Wu, Ligang He, Keqin Li, and Albert Y Zomaya. An efficiency-boosting client selection scheme for federated learning with fairness guarantee. 
*IEEE Transactions on Parallel and* Distributed Systems, 32(7):1552–1564, 2020a. Tiansheng Huang, Weiwei Lin, Li Shen, Keqin Li, and Albert Y Zomaya. Stochastic client selection for federated learning with volatile clients. *IEEE Internet of Things Journal*, 9(20):20055–20070, 2022. Wei Huang, Tianrui Li, Dexian Wang, Shengdong Du, and Junbo Zhang. Fairness and accuracy in federated learning. *arXiv preprint arXiv:2012.10069*, 2020b. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends® *in Machine Learning*, 14(1–2):1–210, 2021. Jiawen Kang, Zehui Xiong, Dusit Niyato, Han Yu, Ying-Chang Liang, and Dong In Kim. Incentive design for efficient federated learning in mobile networks: A contract theory approach. In *2019 IEEE VTS Asia* Pacific Wireless Communications Symposium (APWCS), pp. 1–5. IEEE, 2019. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. P. Langley. Crafting papers on machine learning. In Pat Langley (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207–1216, Stanford, CA, 2000. Morgan Kaufmann. Tra Huong Thi Le, Nguyen H Tran, Yan Kyaw Tun, Minh NH Nguyen, Shashi Raj Pandey, Zhu Han, and Choong Seon Hong. An incentive mechanism for federated learning in wireless cellular networks: An auction approach. *IEEE Transactions on Wireless Communications*, 20(8):4874–4887, 2021. Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. *CS 231N*, 7(7):3, 2015. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Tian Li, Maziar Sanjabi, Ahmad Beirami, and Virginia Smith. Fair resource allocation in federated learning. In *International Conference on Learning Representations*, 2019a. Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. Tilted empirical risk minimization. In International Conference on Learning Representations, 2020a. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Proceedings of Machine Learning and Systems*, 2:429–450, 2020b. Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In *International Conference on Machine Learning*, pp. 6357–6368. PMLR, 2021. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. *arXiv preprint arXiv:1907.02189*, 2019b. Lingjuan Lyu, Xinyi Xu, Qian Wang, and Han Yu. Collaborative fairness in federated learning. Federated Learning: Privacy and Incentive, pp. 189–204, 2020. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communicationefficient learning of deep networks from decentralized data. In *Artificial intelligence and statistics*, pp. 1273–1282. PMLR, 2017. Quentin Mercier, Fabrice Poirion, and Jean-Antoine Désidéri. A stochastic multiple gradient descent algorithm. European Journal of Operational Research, 271(3):808–817, 2018. Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic federated learning. In International Conference on Machine Learning, pp. 4615–4625. PMLR, 2019. Hiroaki Mukai. Algorithms for multicriterion optimization. 
*IEEE Transactions on Automatic Control*, 25(2):177–186, 1980.

Takayuki Nishio and Ryo Yonetani. Client selection for federated learning with heterogeneous resources in mobile edge. In *ICC 2019-2019 IEEE International Conference on Communications (ICC)*, pp. 1–7. IEEE, 2019.

John Rawls. *A theory of justice: Revised edition*. Harvard University Press, 2020.

Sashank J Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In *International Conference on Learning Representations*, 2020.

Zhendong Song, Hongguang Sun, Howard H Yang, Xijun Wang, Yan Zhang, and Tony QS Quek. Reputation-based federated learning for secure wireless networks. *IEEE Internet of Things Journal*, 9(2):1212–1226, 2021.

Qianqian Tong, Guannan Liang, and Jinbo Bi. Effective federated adaptive gradient methods with non-iid decentralized data. *arXiv preprint arXiv:2009.06557*, 2020.

Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. Federated learning with matched averaging. *arXiv preprint arXiv:2002.06440*, 2020.

Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization. *arXiv preprint arXiv:2107.06917*, 2021a.

Zheng Wang, Xiaoliang Fan, Jianzhong Qi, Chenglu Wen, Cheng Wang, and Rongshan Yu. Federated learning with fair averaging. *arXiv preprint arXiv:2104.14937*, 2021b.

Yuxin Wu and Kaiming He. Group normalization. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 3–19, 2018.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.

Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2691–2699, 2015.

Jingyi Xu, Zihan Chen, Tony QS Quek, and Kai Fong Ernest Chong. Fedcorr: Multi-stage federated learning for label noise correction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10184–10193, 2022.

Miao Yang, Ximin Wang, Hongbin Zhu, Haifeng Wang, and Hua Qian. Federated learning with class imbalance reduction. In *2021 29th European Signal Processing Conference (EUSIPCO)*, pp. 2174–2178. IEEE, 2021.

Dongdong Ye, Rong Yu, Miao Pan, and Zhu Han. Federated learning in vehicular edge computing: A selective model aggregation approach. *IEEE Access*, 8:23920–23935, 2020.

Jingfeng Zhang, Cheng Li, Antonio Robles-Kelly, and Mohan Kankanhalli. Hierarchically fair federated learning. *arXiv preprint arXiv:2004.10386*, 2020.

Jingwen Zhang, Yuezhou Wu, and Rong Pan. Incentive mechanism for horizontal federated learning based on reputation and reverse auction. In *Proceedings of the Web Conference 2021*, pp. 947–956, 2021.

## A Convergence Of AdaFed

In the following, we provide three theorems to analyse the convergence of AdaFed under different scenarios. Specifically, we consider three cases: (i) Theorem A.1 considers e = 1 and using SGD for local updates, (ii) Theorem A.2 considers an arbitrary value for e and using GD for local updates, and (iii) Theorem A.4 considers e = 1 and using GD for local updates.

## A.1 Case 1: e = 1 & Local SGD

**Notations:** We use the subscript (·)s to indicate a stochastic value.
Using this notation for the values we introduced in the paper, our notations used in the proof of Theorem A.1 are summarized in Table 5.

| Notation | Description |
|----------|-------------|
| gk,s | Stochastic gradient vector of client k. |
| gs | Matrix of stochastic gradient vectors [g1,s, . . . , gK,s]. |
| g̃k,s | Stochastic gradient vector of client k after the orthogonalization process. |
| g̃s | Matrix of orthogonalized stochastic gradient vectors [g̃1,s, . . . , g̃K,s]. |
| λ∗k,s | Optimum weights obtained from Equation (14) using the stochastic gradients g̃s. |
| ds | Optimum direction obtained using the stochastic g̃s; that is, $\mathbf{d}_{s}=\sum_{k=1}^{K}\lambda_{k,s}^{*}\tilde{\mathbf{g}}_{k,s}$. |

Table 5: Notations used in Theorem A.1 for e = 1 & local SGD.

Theorem A.1. Assume that f = {fk}k∈[K] are l-Lipschitz continuous and L-Lipschitz smooth, and that the step-size ηt satisfies the following three conditions: (i) ηt ∈ (0, 1/(2L)], (ii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\to\infty$, and (iii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\sigma_{t}<\infty$, where σt² = E[∥g̃λ∗ − g̃sλ∗s∥²] is the variance of the stochastic common descent direction. Then

$$\lim_{T\to\infty}\min_{t=0,\ldots,T}\mathbf{E}[\|\mathbf{d}_{t}\|]\to0.\tag{32}$$

Proof. Since the orthogonal vectors {g̃k}k∈[K] span the same K-dimensional space as that spanned by the gradient vectors {gk}k∈[K], we have

$$\exists\{\lambda_{k}^{\prime}\}_{k\in[K]}\ \ \text{s.t.}\ \ \mathbf{d}=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}=\sum_{k=1}^{K}\lambda_{k}^{\prime}\mathbf{g}_{k}=\mathbf{g}\boldsymbol{\lambda}^{\prime}.\tag{33}$$

Similarly, for the stochastic gradients we have

$$\exists\{\lambda_{k,s}^{\prime}\}_{k\in[K]}\ \ \text{s.t.}\ \ \mathbf{d}_{s}=\sum_{k=1}^{K}\lambda_{k,s}^{*}\tilde{\mathbf{g}}_{k,s}=\sum_{k=1}^{K}\lambda_{k,s}^{\prime}\mathbf{g}_{k,s}=\mathbf{g}_{s}\boldsymbol{\lambda}_{s}^{\prime}.\tag{34}$$

Define Δt = gλ′ − gsλ′s = g̃λ∗ − g̃sλ∗s, where the last equality is due to the definitions in Equations (33) and (34). We can find an upper bound for f(θt+1) as follows:

$$f(\boldsymbol{\theta}_{t+1})=f(\boldsymbol{\theta}_{t}-\eta_{t}\mathbf{d}_{s})\tag{35}$$
$$=f\Big(\boldsymbol{\theta}_{t}-\eta_{t}\sum_{k=1}^{K}\lambda_{k,s}^{*}\tilde{\mathbf{g}}_{k,s}\Big)\tag{36}$$
$$=f(\boldsymbol{\theta}_{t}-\eta_{t}\mathbf{g}_{s}\boldsymbol{\lambda}_{s}^{\prime})\tag{37}$$
$$\leq f(\boldsymbol{\theta}_{t})-\eta_{t}\mathbf{g}^{T}(\mathbf{g}_{s}\boldsymbol{\lambda}_{s}^{\prime})+\frac{L\eta_{t}^{2}}{2}\|\mathbf{g}_{s}\boldsymbol{\lambda}_{s}^{\prime}\|^{2}\tag{38}$$
$$\leq f(\boldsymbol{\theta}_{t})-\eta_{t}\mathbf{g}^{T}(\mathbf{g}\boldsymbol{\lambda}^{\prime})+L\eta_{t}^{2}\|\mathbf{g}\boldsymbol{\lambda}^{\prime}\|^{2}+\eta_{t}\mathbf{g}^{T}\boldsymbol{\Delta}_{t}+L\eta_{t}^{2}\|\boldsymbol{\Delta}_{t}\|^{2}\tag{39}$$
$$\leq f(\boldsymbol{\theta}_{t})-\eta_{t}(1-L\eta_{t})\|\mathbf{g}\boldsymbol{\lambda}^{\prime}\|^{2}+l\eta_{t}\|\boldsymbol{\Delta}_{t}\|+L\eta_{t}^{2}\|\boldsymbol{\Delta}_{t}\|^{2},\tag{40}$$

where (36) uses the stochastic gradients in the updating rule of AdaFed, (37) is obtained from the definition in (34), (38) holds following the quadratic bound for the smooth functions f = {fk}k∈[K], and lastly (40) holds considering the Lipschitz continuity of f = {fk}k∈[K]. Assuming ηt ∈ (0, 1/(2L)] and taking expectations of both sides, we obtain:

$$\min_{t=0,\ldots,T}\mathbf{E}[\|\mathbf{d}_{t}\|]\leq\frac{f(\boldsymbol{\theta}_{0})-\mathbf{E}[f(\boldsymbol{\theta}_{T+1})]+\sum_{t=0}^{T}\eta_{t}(l\sigma_{t}+L\eta_{t}\sigma_{t}^{2})}{\frac{1}{2}\sum_{t=0}^{T}\eta_{t}}.\tag{41}$$

Using the assumptions $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\to\infty$ and $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\sigma_{t}<\infty$, the theorem is concluded. Note that a *vanishing* dt implies reaching a Pareto-stationary point of the original MoM problem. Yet, the convergence rate is different in different scenarios, as we see in the following theorems.
## A.1.1 Discussing The Assumptions

- **The assumptions on the local loss functions:** The two assumptions of l-Lipschitz continuity and L-Lipschitz smoothness of the local loss functions are standard assumptions in FL papers providing some sort of convergence guarantee (Li et al., 2019b).

- **The assumptions on the step-size:** The three assumptions we impose on the step-size can easily be satisfied, as explained in the sequel. For instance, one can pick ηt = κ1/t for some constant κ1 such that ηt ∈ (0, 1/(2L)] is satisfied. Then, even if σt has an extremely loose upper bound, say σt < κ2/t^ε for a small ε ∈ R+ and a constant number κ2, all three assumptions on the step-size in the theorem are satisfied. Note that the convergence rate of AdaFed depends on how fast σt diminishes, which in turn depends on how heterogeneous the users are.

## A.2 Case 2: e > 1 & Local GD

The notations used in this subsection are elaborated in Table 6.

Theorem A.2. Assume that f = {fk}k∈[K] are l-Lipschitz continuous and L-Lipschitz smooth. Denote by ηt and η the global and local learning rates, respectively. Also, define ζt = ∥λ∗ − λ∗e∥, where λ∗e is the optimum weight vector obtained from the pseudo-gradients after e local epochs. Then,

$$\lim_{T\to\infty}\ \min_{t=0,\ldots,T}\|\mathbf{d}_{t}\|\to0,\tag{42}$$

if the following conditions are satisfied: (i) ηt ∈ (0, 1/(2L)], (ii) $\lim_{T\to\infty}\sum_{t=0}^{T}\eta_{t}\to\infty$, (iii) $\lim_{t\to\infty}\eta_{t}\to0$, (iv) $\lim_{t\to\infty}\eta\to0$, and (v) $\lim_{t\to\infty}\zeta_{t}\to0$.

| Notation | Description |
|----------|-------------|
| θ(k,e)t | Updated weights of client k after e local epochs at the t-th round of FL. |
| gk,e | gk,e = θt − θ(k,e)t; that is, the update vector of client k after e local epochs. |
| ge | Matrix of update vectors [g1,e, . . . , gK,e]. |
| g̃k,e | Update vector of client k after the orthogonalization process. |
| g̃e | Matrix of orthogonalized update vectors [g̃1,e, . . . , g̃K,e]. |
| λ∗k,e | Optimum weights obtained from Equation (14) using g̃e. |
| de | Optimum direction obtained using g̃e; that is, $\mathbf{d}_{e}=\sum_{k=1}^{K}\lambda_{k,e}^{*}\tilde{\mathbf{g}}_{k,e}$. |

Table 6: Notations used in Theorem A.2 for e > 1 and local GD.

Proof. As discussed in the proof of Theorem A.1, we can write

$$\exists\{\lambda_{k}^{\prime}\}_{k\in[K]}\ \ \text{s.t.}\ \ \mathbf{d}=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}=\sum_{k=1}^{K}\lambda_{k}^{\prime}\mathbf{g}_{k}=\mathbf{g}\boldsymbol{\lambda}^{\prime},\tag{43}$$

$$\exists\{\lambda_{k,e}^{\prime}\}_{k\in[K]}\ \ \text{s.t.}\ \ \mathbf{d}_{e}=\sum_{k=1}^{K}\lambda_{k,e}^{*}\tilde{\mathbf{g}}_{k,e}=\sum_{k=1}^{K}\lambda_{k,e}^{\prime}\mathbf{g}_{k,e}=\mathbf{g}_{e}\boldsymbol{\lambda}_{e}^{\prime}.\tag{44}$$

To prove Theorem A.2, we first introduce a lemma whose proof is provided in Appendix B.

Lemma A.3. Using the notation of Theorem A.2, and assuming that f = {fk}k∈[K] are L-Lipschitz smooth, we have ∥gk,e − gk∥ ≤ ηel.
Using Lemma A.3, we have

$$\|\mathbf{d}-\mathbf{d}_{e}\|=\|\tilde{\mathbf{g}}\boldsymbol{\lambda}^{*}-\tilde{\mathbf{g}}_{e}\boldsymbol{\lambda}_{e}^{*}\|\leq\|\tilde{\mathbf{g}}\boldsymbol{\lambda}^{*}-\tilde{\mathbf{g}}\boldsymbol{\lambda}_{e}^{*}\|+\|\tilde{\mathbf{g}}\boldsymbol{\lambda}_{e}^{*}-\tilde{\mathbf{g}}_{e}\boldsymbol{\lambda}_{e}^{*}\|\tag{45}$$
$$\leq\|\tilde{\mathbf{g}}\|\|\boldsymbol{\lambda}^{*}-\boldsymbol{\lambda}_{e}^{*}\|+\|\mathbf{g}\boldsymbol{\lambda}_{e}^{\prime}-\mathbf{g}_{e}\boldsymbol{\lambda}_{e}^{\prime}\|\tag{46}$$
$$\leq\|\tilde{\mathbf{g}}\|\|\boldsymbol{\lambda}^{*}-\boldsymbol{\lambda}_{e}^{*}\|+\eta e l\tag{47}$$
$$\leq\zeta_{t}l\sqrt{K}+\eta e l,\tag{48}$$

where Equation (45) follows from the triangle inequality, Equation (46) is obtained from Equations (43) and (44), and Equation (47) uses Lemma A.3. As seen, if $\lim_{t\to\infty}\eta\to0$ and $\lim_{t\to\infty}\zeta_{t}\to0$, then ∥d − de∥ → 0. Now, by writing the quadratic upper bound we obtain:

$$f(\boldsymbol{\theta}_{t+1})\leq f(\boldsymbol{\theta}_{t})-\eta_{t}\mathbf{g}^{T}(\mathbf{g}_{e}\boldsymbol{\lambda}_{e}^{\prime})+\frac{L\eta_{t}^{2}}{2}\|\mathbf{g}_{e}\boldsymbol{\lambda}_{e}^{\prime}\|^{2}\tag{49}$$
$$\leq f(\boldsymbol{\theta}_{t})-\eta_{t}\mathbf{g}^{T}(\mathbf{g}\boldsymbol{\lambda}^{\prime})+L\eta_{t}^{2}\|\mathbf{g}\boldsymbol{\lambda}^{\prime}\|^{2}+\eta_{t}\mathbf{g}^{T}(\mathbf{d}-\mathbf{d}_{e})+L\eta_{t}^{2}\|\mathbf{d}-\mathbf{d}_{e}\|^{2}\tag{50}$$
$$\leq f(\boldsymbol{\theta}_{t})-\eta_{t}(1-L\eta_{t})\|\mathbf{g}\boldsymbol{\lambda}^{\prime}\|^{2}+l\eta_{t}\|\mathbf{d}-\mathbf{d}_{e}\|+L\eta_{t}^{2}\|\mathbf{d}-\mathbf{d}_{e}\|^{2}.\tag{51}$$

Noting that ηt ∈ (0, 1/(2L)], and utilizing telescoping, yields

$$\min_{t=0,\ldots,T}\|\mathbf{d}_{t}\|\leq\frac{f(\boldsymbol{\theta}_{0})-f(\boldsymbol{\theta}_{T+1})+\sum_{t=0}^{T}\eta_{t}(\|\mathbf{d}-\mathbf{d}_{e}\|+L\eta_{t}\|\mathbf{d}-\mathbf{d}_{e}\|^{2})}{\frac{1}{2}\sum_{t=0}^{T}\eta_{t}}.\tag{52}$$

Using ∥d − de∥ → 0, Theorem A.2 is concluded.

## A.3 Case 3: e = 1 & Local GD

Denote by ϑ the Pareto-stationary solution set of the minimization problem arg minθ f(θ). Then, define θ∗t = arg min_{θ∈ϑ} ∥θt − θ∥₂².

Theorem A.4. Assume that f = {fk}k∈[K] are l-Lipschitz continuous and σ-convex, and that the step-size ηt satisfies the following two conditions: (i) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}\to\infty$ and (ii) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}^{2}<\infty$. Then almost surely θt → θ∗t; that is,

$$\mathbb{P}\Big(\lim_{t\to\infty}\big(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\big)=0\Big)=1,\tag{53}$$

where P(E) denotes the probability of event E.

Proof. The proof is inspired by Mercier et al. (2018). Without loss of generality, we assume that all users participate in all rounds. Based on the definition of θ∗t, we can say

$$\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t+1}^{*}\|_{2}^{2}\leq\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}=\|\boldsymbol{\theta}_{t}-\eta_{t}\mathbf{d}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}\tag{54}$$
$$=\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}-2\eta_{t}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*})\cdot\mathbf{d}_{t}+\eta_{t}^{2}\|\mathbf{d}_{t}\|_{2}^{2}.\tag{55}$$

To bound the third term in Equation (55), we note from Equation (23) that

$$\eta_{t}^{2}\|\mathbf{d}_{t}\|_{2}^{2}=\frac{\eta_{t}^{2}}{\sum_{k=1}^{K}\frac{1}{\|\tilde{\mathbf{g}}_{k}\|_{2}^{2}}}\leq\frac{\eta_{t}^{2}l^{2}}{K}.\tag{56}$$

To bound the second term, first note that since the orthogonal vectors {g̃k}k∈[K] span the same K-dimensional space as that spanned by the gradient vectors {gk}k∈[K],

$$\exists\{\lambda_{k}^{\prime}\}_{k\in[K]}\ \ \text{s.t.}\ \ \mathbf{d}=\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}=\sum_{k=1}^{K}\lambda_{k}^{\prime}\mathbf{g}_{k}.\tag{57}$$
Using Equation (57) and the σ-convexity of {fk}k∈[K], we obtain

$$(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*})\cdot\mathbf{d}_{t}=(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*})\cdot\sum_{k=1}^{K}\lambda_{k}^{*}\tilde{\mathbf{g}}_{k}\tag{58}$$
$$=(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*})\cdot\sum_{k=1}^{K}\lambda_{k}^{\prime}\mathbf{g}_{k}\tag{59}$$
$$\geq\sum_{k=1}^{K}\lambda_{k}^{\prime}\big(f_{k}(\boldsymbol{\theta}_{t})-f_{k}(\boldsymbol{\theta}_{t}^{*})\big)+\frac{\sigma}{2}\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}\tag{60}$$
$$\geq\frac{\lambda_{\alpha}^{\prime}M}{2}\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}+\frac{\sigma}{2}\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}\tag{61}$$
$$=\frac{\lambda_{\alpha}^{\prime}M+\sigma}{2}\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}.\tag{62}$$

Now, we return to Equation (55) and take the conditional expectation w.r.t. θt as follows:

$$\mathbf{E}[\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t+1}^{*}\|_{2}^{2}\mid\boldsymbol{\theta}_{t}]\leq\big(1-\eta_{t}\mathbf{E}[\lambda_{\alpha}^{\prime}M+\sigma\mid\boldsymbol{\theta}_{t}]\big)\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}+\frac{\eta_{t}^{2}l^{2}}{K}.\tag{63}$$

Assuming that E[λ′αM + σ | θt] ≥ c, and taking another expectation, we obtain:

$$\mathbf{E}[\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t+1}^{*}\|_{2}^{2}]\leq(1-\eta_{t}c)\,\mathbf{E}[\|\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\|_{2}^{2}]+\frac{\eta_{t}^{2}l^{2}}{K},\tag{64}$$

which is a recursive expression. By solving Equation (64) we obtain

$$\mathbf{E}[\|\boldsymbol{\theta}_{t+1}-\boldsymbol{\theta}_{t+1}^{*}\|_{2}^{2}]\leq\underbrace{\prod_{j=0}^{t}(1-\eta_{j}c)\,\mathbf{E}[\|\boldsymbol{\theta}_{0}-\boldsymbol{\theta}_{0}^{*}\|_{2}^{2}]}_{\text{First term}}+\underbrace{\sum_{m=1}^{t}\frac{\prod_{j=1}^{t}(1-\eta_{j}c)\,\eta_{m}^{2}l^{2}}{K\prod_{j=1}^{m}(1-\eta_{j}c)}}_{\text{Second term}}.\tag{65}$$

It is observed that if the limits of both the First term and the Second term in Equation (65) go to zero, then E[∥θt+1 − θ∗t+1∥₂²] → 0. For the First term, from the arithmetic-geometric mean inequality we have

$$\lim_{t\to\infty}\prod_{j=0}^{t}(1-\eta_{j}c)\leq\lim_{t\to\infty}\Bigg(\frac{\sum_{j=0}^{t}(1-\eta_{j}c)}{t}\Bigg)^{t}=\lim_{t\to\infty}\Bigg(1-c\,\frac{\sum_{j=0}^{t}\eta_{j}}{t}\Bigg)^{t}\tag{66}$$
$$=\lim_{t\to\infty}e^{-c\sum_{j=0}^{t}\eta_{j}}.\tag{67}$$

From Equation (67) it is seen that if $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}\to\infty$, then the First term converges to zero as t → ∞. On the other hand, consider the Second term in Equation (65). Obviously, if $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}^{2}<\infty$, then the Second term converges to zero as t → ∞. Hence, if (i) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}\to\infty$ and (ii) $\lim_{t\to\infty}\sum_{j=0}^{t}\eta_{j}^{2}<\infty$, then E[∥θt+1 − θ∗t+1∥₂²] → 0. Consequently, based on a standard supermartingale argument (Mercier et al., 2018), we have

$$\mathbb{P}\Big(\lim_{t\to\infty}\big(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{t}^{*}\big)=0\Big)=1.\tag{68}$$

## B Proof Of Lemma A.3

Proof.

$$\mathbf{g}_{k,e}=\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{(k,e)^{t}}=(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}_{(k,1)^{t}})+(\boldsymbol{\theta}_{(k,1)^{t}}-\boldsymbol{\theta}_{(k,2)^{t}})+\cdots+(\boldsymbol{\theta}_{(k,e-1)^{t}}-\boldsymbol{\theta}_{(k,e)^{t}})\tag{69}$$
$$=\eta\mathbf{g}_{k}(\boldsymbol{\theta}_{t})+\eta\mathbf{g}_{k,1}+\cdots+\eta\mathbf{g}_{k,e-1}.\tag{70}$$

Hence,

$$\|\mathbf{g}_{k,e}-\mathbf{g}_{k}\|=\Big\|\eta\sum_{j=1}^{e}\mathbf{g}_{k,j}\Big\|\leq\eta\sum_{j=1}^{e}\|\mathbf{g}_{k,j}\|\leq\eta e l.\tag{71}$$

## C More About Fairness In FL

## C.1 Sources Of Unfairness In Federated Learning

Unfairness in FL can arise from various sources and is a concern that needs to be addressed in FL systems. Here are some of the key sources of unfairness in FL:
## C More About Fairness in FL

## C.1 Sources of Unfairness in Federated Learning

Unfairness in FL can arise from various sources and is a concern that needs to be addressed in FL systems. Here are some of the key reasons for unfairness in FL:

1. **Non-Representative Data Distribution**: Unfairness can occur when the distribution of data across participating devices or clients is non-representative of the overall population. Some devices may have more or less relevant data, leading to biased model updates.

2. **Data Bias**: If the data collected or used by different clients is inherently biased due to the data collection process, it can lead to unfairness. For example, if certain demographic groups are underrepresented in the training data of some clients, the federated model may not perform well for those groups.

3. **Heterogeneous Data Sources**: Federated learning often involves data from a diverse set of sources, including different device types, locations, or user demographics. Variability in data sources can introduce unfairness, as the models may not generalize equally well across all sources.

4. **Varying Data Quality**: Data quality can vary among clients, leading to unfairness. Some clients may have noisy or less reliable data, while others may have high-quality data, affecting the model's performance.

5. **Data Sampling**: The way data is sampled and used for local updates can introduce unfairness. If some clients have imbalanced or non-representative data sampling strategies, it can lead to biased model updates.

6. **Aggregation Bias**: The learned model may exhibit a bias towards devices with larger amounts of data or, if devices are weighted equally, it may favor more commonly occurring devices.

## C.2 Fairness in Conventional ML vs. FL

The concept of fairness is often used to address social biases or performance disparities among different individuals or groups in the machine learning (ML) literature (Barocas et al., 2017). However, in the context of FL, the notion of fairness differs slightly from that in traditional ML. In FL, fairness primarily pertains to the consistency of performance across various clients. In fact, the difference in the notion of fairness between traditional ML and FL arises from the distinct contexts and challenges of these two settings:

1. **Centralized vs. decentralized data distribution:**
   - In traditional ML, data is typically centralized, and fairness is often defined in terms of mitigating biases or disparities within a single, homogeneous dataset. Fairness is evaluated based on how the model treats different individuals or groups within that dataset.
   - In FL, data is distributed across multiple decentralized clients or devices. Each client may have its own unique data distribution, and fairness considerations extend to addressing disparities across these clients, ensuring that the federated model provides uniform and equitable performance for all clients.

2. **Client autonomy and data heterogeneity:**
   - In FL, clients are autonomous and may have different data sources, labeling processes, and data collection practices. Fairness in this context involves adapting to the heterogeneity and diversity among clients while still achieving equitable outcomes.
   - Traditional ML operates under a centralized, unified data schema and is not inherently designed to handle data heterogeneity across sources.

We should note that in certain cases where devices can be naturally clustered into groups with specific attributes, the definition of fairness in FL can be seen as a relaxed version of that in ML, i.e., we optimize for similar but not necessarily identical performance across devices (Li et al., 2019a).
Nevertheless, despite the differences mentioned above, to maintain consistency with the terminology used in the FL literature and the papers we have cited in the main body of this work, we will continue to use the term "fairness" to denote the uniformity of performance across different devices.

## D Three Additional Datasets

In this section, we evaluate the performance of AdaFed against some benchmarks over three additional datasets, namely Fashion MNIST, CINIC-10, and TinyImageNet, whose respective results are reported in Appendices D.1 to D.3.

## D.1 Fashion MNIST

Fashion MNIST (Xiao et al., 2017) is an extension of the MNIST dataset (LeCun et al., 1998) with images resized to 32 × 32 pixels. We use a fully-connected neural network with 2 hidden layers, and use the same setting as that used in Li et al. (2019a) for our experiments. We set e = 1, use the full batch size, and set η = 0.1; we then conduct 300 rounds of communication. For the benchmarks, we use the same methods as those used in the CIFAR-10 experiments. The results are reported in Table 7. By observing the three different classes reported in Table 7, we see that the fairness level attained by AdaFed is not limited to a dominant class.

| Algorithm | ā | σa | shirt | pullover | T-shirt |
|-----------|-------|------|-------|----------|---------|
| FedAvg    | 80.42 | 3.39 | 64.26 | 87.00 | 89.90 |
| q-FFL     | 78.53 | 2.27 | 71.29 | 81.46 | 82.86 |
| FedMGDA+  | 79.29 | 2.53 | 72.46 | 79.74 | 85.66 |
| FedFA     | 80.22 | 3.41 | 63.71 | 86.87 | 89.94 |
| AdaFed    | 79.14 | 2.12 | 72.49 | 79.81 | 86.99 |

Table 7: Test accuracy on Fashion MNIST. The reported results are averaged over 5 different random seeds.

## D.2 CINIC-10

CINIC-10 (Darlow et al., 2018) has 4.5 times as many images as CIFAR-10 (270,000 sample images in total). In fact, it is obtained from the ImageNet and CIFAR-10 datasets. As a result, this dataset fits FL scenarios well, since the constituent elements of CINIC-10 are not drawn from the same distribution. Furthermore, we add more non-iidness to the dataset by distributing the data among the clients using Dirichlet allocation with β = 0.5 (a sketch of this partitioning scheme is given at the end of this appendix).

For the model, we use ResNet-18 with group normalization, and set η = 0.01. There are 200 communication rounds in which all the clients participate with e = 1. Also, K = 50. Results are reported in Table 8.

| Algorithm | ā | σa | Worst 10% | Best 10% |
|-----------|-------|-------|-----------|----------|
| q-FFL  | 86.57 | 14.91 | 57.70 | 100.00 |
| Ditto  | 86.31 | 15.14 | 56.91 | 100.00 |
| AFL    | 86.49 | 15.12 | 57.62 | 100.00 |
| TERM   | 86.40 | 15.10 | 57.30 | 100.00 |
| AdaFed | 86.34 | 14.85 | 57.88 | 99.99 |

Table 8: Test accuracy on CINIC-10. The reported results are averaged over 5 different random seeds.

## D.3 TinyImageNet

Tiny-ImageNet (Le & Yang, 2015) is a subset of ImageNet with 100k samples of 200 classes. We distribute the dataset among K = 20 clients using Dirichlet allocation with β = 0.05. We use ResNet-18 with group normalization, and set η = 0.02. There are 400 communication rounds in which all the clients participate with e = 1. The results are reported in Table 9.

| Algorithm | ā | σa | Worst 10% | Best 10% |
|-----------|-------|------|-----------|----------|
| q-FFL     | 18.90 | 3.20 | 13.12 | 23.72 |
| AFL       | 16.55 | 2.38 | 12.40 | 20.25 |
| TERM      | 16.41 | 2.77 | 11.52 | 21.02 |
| FedMGDA+  | 14.00 | 2.71 | 9.88  | 19.21 |
| AdaFed    | 18.05 | 2.35 | 13.24 | 23.08 |

Table 9: Test accuracy on TinyImageNet. The reported results are averaged over 5 different random seeds.
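The Dirichlet allocation used above for CINIC-10 and TinyImageNet can be sketched as follows (our illustration of the standard scheme, not the authors' released code): for each class, a Dirichlet(β) draw determines the fraction of that class assigned to each client, so a smaller β yields more label skew.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    """Split sample indices among clients with per-class Dirichlet(beta) draws."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Proportion of class-c samples assigned to each client.
        proportions = rng.dirichlet(beta * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

# Example mirroring the CINIC-10 setup above: K = 50 clients, beta = 0.5.
# The labels here are synthetic placeholders for illustration.
fake_labels = np.random.randint(0, 10, size=270_000)
shards = dirichlet_partition(fake_labels, num_clients=50, beta=0.5)
print([len(s) for s in shards[:5]])  # per-client sample counts vary with beta
```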
## E Experimental Details and Hyper-Parameter Tuning

For the benchmark methods and also for AdaFed, we used grid search to find the best hyper-parameters for the underlying algorithms. The parameters we tested for each method are as follows:

- **AdaFed:** γ ∈ {0, 0.01, 0.1, 1, 5, 10}.
- **q-FFL:** q ∈ {0, 0.001, 0.01, 0.1, 1, 2, 5, 10}.
- **TERM:** t ∈ {0.1, 0.5, 1, 2, 5}.
- **AFL:** ηt ∈ {0.01, 0.05, 0.1, 0.5, 1}.
- **Ditto:** λ ∈ {0.01, 0.05, 0.1, 0.5, 1, 2, 5}.
- **FedMGDA+:** ϵ ∈ {0.01, 0.05, 0.1, 0.5, 1}.
- **FedFA:** (α, β) = {(0.5, 0.5)}, (γs, γc) = {(0.5, 0.9)}.

To better understand how the parameter γ in AdaFed affects the performance of the FL task, we report the results for different values of γ in AdaFed in this section.

## E.1 CIFAR-10

The best hyper-parameters for the benchmark methods are: q = 10 for q-FFL, ϵ = 0.5 for FedMGDA+, and (α, β) = {(0.5, 0.5)}, (γs, γc) = {(0.5, 0.9)} for FedFA. The detailed results for different γ in AdaFed are reported in Table 10. We used γ = 5 as the best point for Table 1.

| | Setup 1 | | | | Setup 2 | | | |
|--------------|-------|------|----------|---------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 5% | Best 5% | ā | σa | Worst 10% | Best 10% |
| AdaFedγ=0    | 45.44 | 3.43 | 20.18 | 68.04 | 59.88 | 4.89 | 48.12 | 70.62 |
| AdaFedγ=0.01 | 45.77 | 3.36 | 23.55 | 68.07 | 60.39 | 4.81 | 49.43 | 70.63 |
| AdaFedγ=0.1  | 46.01 | 3.18 | 27.12 | 68.12 | 60.98 | 4.70 | 50.91 | 70.70 |
| AdaFedγ=1    | 46.55 | 3.18 | 27.75 | 68.20 | 63.24 | 4.54 | 54.55 | 71.12 |
| AdaFedγ=5    | 46.42 | 3.01 | 31.12 | 67.73 | 64.80 | 4.50 | 58.24 | 72.45 |
| AdaFedγ=10   | 46.00 | 2.88 | 35.21 | 67.35 | 63.25 | 4.66 | 51.74 | 71.25 |

Table 10: Tuning γ in AdaFed over CIFAR-10. Reported results are averaged over 5 different random seeds.

## E.2 CIFAR-100

The best hyper-parameters for the benchmark methods are: q = 0.1 for q-FFL, t = 0.5 for TERM, and ηt = 0.5 for AFL. In addition, the detailed results for different γ in AdaFed are reported in Table 11. We used γ = 1 as the best point for Table 2.

| | Setup 1 | | | | Setup 2 | | | |
|--------------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| AdaFedγ=0    | 29.41 | 4.45 | 24.41 | 39.21 | 17.05 | 6.71 | 10.04 | 27.41 |
| AdaFedγ=0.01 | 30.12 | 4.05 | 25.23 | 39.41 | 17.77 | 6.09 | 10.43 | 28.42 |
| AdaFedγ=0.1  | 31.05 | 3.52 | 26.13 | 40.12 | 19.51 | 4.95 | 10.89 | 32.10 |
| AdaFedγ=1    | 31.42 | 3.03 | 28.91 | 40.41 | 20.02 | 4.45 | 11.81 | 34.11 |
| AdaFedγ=5    | 31.23 | 2.95 | 28.12 | 40.20 | 19.79 | 4.31 | 11.86 | 33.67 |
| AdaFedγ=10   | 31.34 | 2.91 | 28.52 | 40.15 | 19.61 | 4.56 | 11.42 | 32.91 |

Table 11: Tuning γ in AdaFed over CIFAR-100. Reported results are averaged over 5 different random seeds.

## E.3 Fashion MNIST

The best hyper-parameters for the benchmark methods are: ϵ = 0.5 for FedMGDA+, q = 0.1 for q-FFL, and (α, β) = {(0.5, 0.5)}, (γs, γc) = {(0.5, 0.9)} for FedFA. The detailed results for different γ in AdaFed are reported in Table 12. We used γ = 1 as the best point for Table 7.
| Algorithm | ā | σa | shirt | pullover | T-shirt |
|--------------|-------|------|-------|----------|---------|
| AdaFedγ=0    | 78.84 | 2.55 | 71.77 | 78.34 | 84.12 |
| AdaFedγ=0.01 | 78.88 | 2.41 | 71.73 | 78.66 | 85.62 |
| AdaFedγ=0.1  | 79.24 | 2.30 | 72.46 | 79.14 | 85.66 |
| AdaFedγ=1    | 79.14 | 2.12 | 72.33 | 79.81 | 83.99 |
| AdaFedγ=5    | 79.04 | 2.09 | 71.55 | 78.37 | 85.41 |
| AdaFedγ=10   | 78.91 | 1.96 | 71.43 | 78.04 | 85.82 |

Table 12: Tuning γ in AdaFed over Fashion MNIST. Reported results are averaged over 5 different seeds.

## E.4 FEMNIST

The best hyper-parameters for the benchmark methods are: λ = 0.1 for Ditto, q = 0.1 for q-FFL, t = 0.5 for TERM, and ηt = 0.5 for AFL. Also, the detailed results for different γ in AdaFed are reported in Table 13. We used γ = 1 as the best point for Table 3.

| | FEMNIST-original | | | | FEMNIST-skewed | | | |
|--------------|-------|-------|-----------|----------|-------|-------|-----------|----------|
| Algorithm | ā | σa | Angle (◦) | KL (a∥u) | ā | σa | Angle (◦) | KL (a∥u) |
| AdaFedγ=0    | 81.32 | 13.59 | 10.85 | 0.019 | 84.39 | 13.54 | 11.32 | 0.024 |
| AdaFedγ=0.01 | 82.67 | 12.03 | 10.68 | 0.018 | 87.66 | 12.02 | 10.91 | 0.019 |
| AdaFedγ=0.1  | 81.60 | 8.72  | 9.23  | 0.011 | 88.62 | 10.59 | 10.75 | 0.017 |
| AdaFedγ=1    | 82.26 | 6.58  | 8.12  | 0.009 | 92.21 | 7.56  | 9.44  | 0.011 |
| AdaFedγ=5    | 80.10 | 5.16  | 7.29  | 0.007 | 90.12 | 5.82  | 7.31  | 0.009 |
| AdaFedγ=10   | 80.05 | 3.03  | 6.44  | 0.007 | 84.38 | 4.49  | 6.99  | 0.008 |

Table 13: Tuning γ in AdaFed over FEMNIST. Reported results are averaged over 5 different seeds.

## E.5 Shakespeare

The best hyper-parameters for the benchmark methods are: q = 0.1 for q-FFL, λ = 0.1 for Ditto, and ηt = 0.5 for AFL. Furthermore, the results obtained for different γ values in AdaFed are reported in Table 14. We used γ = 0.1 as the best point for Table 4.

| | Setup 1 | | | | Setup 2 | | | |
|--------------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| AdaFedγ=0    | 48.40 | 13.5 | 44.12 | 51.29 | 48.80 | 1.58 | 46.23 | 51.12 |
| AdaFedγ=0.01 | 53.55 | 8.01 | 50.96 | 54.46 | 51.67 | 1.10 | 48.71 | 53.16 |
| AdaFedγ=0.1  | 55.65 | 6.55 | 53.79 | 55.86 | 52.89 | 0.98 | 51.02 | 54.48 |
| AdaFedγ=1    | 53.91 | 5.10 | 51.94 | 54.06 | 51.44 | 1.06 | 50.88 | 54.52 |
| AdaFedγ=5    | 54.40 | 4.15 | 52.17 | 54.77 | 51.20 | 1.05 | 50.72 | 54.61 |
| AdaFedγ=10   | 54.56 | 4.22 | 52.20 | 54.73 | 51.19 | 1.07 | 50.70 | 54.01 |

Table 14: Tuning γ in AdaFed over Shakespeare. Reported results are averaged over 5 different seeds.

## E.6 CINIC-10

The best hyper-parameters for the benchmark methods are: t = 0.5 for TERM, q = 0.1 for q-FFL, λ = 0.1 for Ditto, and ηt = 0.5 for AFL. Furthermore, the results obtained for different γ values in AdaFed are reported in Table 15. We used γ = 1 as the best point for Table 8.

| Algorithm | ā | σa | Worst 10% | Best 10% |
|--------------|-------|-------|-----------|----------|
| AdaFedγ=0    | 85.17 | 15.71 | 54.67 | 99.92 |
| AdaFedγ=0.01 | 85.87 | 15.54 | 56.12 | 99.95 |
| AdaFedγ=0.1  | 86.13 | 15.32 | 57.01 | 99.98 |
| AdaFedγ=1    | 86.34 | 14.85 | 57.88 | 99.99 |
| AdaFedγ=5    | 86.03 | 15.01 | 57.72 | 99.98 |
| AdaFedγ=10   | 85.49 | 15.08 | 57.23 | 99.99 |

Table 15: Tuning γ in AdaFed over CINIC-10. Reported results are averaged over 5 different seeds.
## E.7 TinyImageNet

The best hyper-parameters for the benchmark methods are: ϵ = 0.05 for FedMGDA+, t = 0.5 for TERM, q = 0.1 for q-FFL, and ηt = 0.5 for AFL. Furthermore, the results obtained for different γ values in AdaFed are reported in Table 16. We used γ = 1 as the best point for Table 9.

| Algorithm | ā | σa | Worst 10% | Best 10% |
|--------------|-------|------|-----------|----------|
| AdaFedγ=0    | 13.25 | 3.14 | 9.82  | 19.24 |
| AdaFedγ=0.01 | 14.38 | 2.72 | 10.12 | 19.97 |
| AdaFedγ=0.1  | 16.20 | 2.65 | 11.65 | 21.12 |
| AdaFedγ=1    | 18.05 | 2.35 | 13.24 | 23.08 |
| AdaFedγ=5    | 17.76 | 2.31 | 12.44 | 22.84 |
| AdaFedγ=10   | 17.05 | 2.38 | 12.58 | 23.67 |

Table 16: Tuning γ in AdaFed over TinyImageNet. Reported results are averaged over 5 different seeds.

## E.8 The Effect of Parameter γ

In this section, we reported the results for AdaFed over different datasets when γ takes different values. Based on the tables reported in this section, we observe a broadly similar trend across all the datasets. As a rule of thumb, a higher (lower) γ yields higher (lower) fairness and slightly lower (higher) accuracy. Nevertheless, the best performance of AdaFed (in terms of establishing an appropriate trade-off between average accuracy and fairness) is achieved for a moderate value of γ. This is also consistent with the other fairness methods in the literature, where in most cases the best hyper-parameter is a moderate one.

## F Computation Cost of AdaFed

## F.1 Comparing to FedMGDA+

First, note that the AdaFed concept is built upon that of FedMGDA+ (Hu et al., 2022) (and FairWire in Hamidi & Damen (2024)), in that both use the notion of Pareto optimality to enforce fairness in the FL task. Note that the optimal solutions in MoM usually form a set (in general, of infinite cardinality). As discussed, what distinguishes FedMGDA+ and AdaFed is the point of this set to which these algorithms converge. In particular, AdaFed converges to more uniform solutions (Figure 1a). This is because the FedMGDA+ algorithm only satisfies *Condition (I)*, whereas in AdaFed both *Conditions (I)* and *(II)* hold.

Interestingly, the cost of performing AdaFed is lower than that of performing FedMGDA+. To elucidate, FedMGDA+ also finds the minimum-norm vector in the convex hull of the gradients' space in order to find a common descent direction. To this end, it uses generic quadratic programming, which entails iteratively finding the minimum-norm vector in the convex hull of the local gradients. One advantage of AdaFed is that it finds the common descent direction without performing any iterations over the gradient vectors. Thus, AdaFed not only yields a higher level of fairness compared to FedMGDA+, but also resolves its complexity issue.

## F.2 Running Time for AdaFed

Assume that the number of clients is K, and the dimension of the gradient vectors is d. Then, the orthogonalization for the k-th client, k ∈ [K], needs O(2dk) operations (by operations we mean multiplications and additions). Hence, the total number of operations needed for the orthogonalization process is O(2dK²) (also note that Gram–Schmidt is the most efficient algorithm for orthogonalization).

In our experimental setup, we found that the overhead of AdaFed is negligible, resulting in almost the same overall running time for FedAvg and AdaFed. To justify this fact, please refer to the FedMGDA+ paper, where the authors discuss the overhead of their proposed method; as they claim, the overhead is negligible, yielding the same running time as FedAvg. A minimal sketch of the Gram–Schmidt orthogonalization step discussed above is given below.
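The sketch below is our illustration of the orthogonalization step, not the authors' code; it orthogonalizes the K client gradients of dimension d sequentially, costing O(dk) work for the k-th client and O(dK²) overall.

```python
import numpy as np

def gram_schmidt(gradients):
    """gradients: array of shape (K, d); returns orthogonal (unnormalized) vectors."""
    ortho = []
    for g in gradients:
        v = g.copy()
        for u in ortho:  # subtract projections onto previously processed clients
            v -= (v @ u) / (u @ u) * u
        ortho.append(v)
    return np.stack(ortho)

K, d = 8, 1000
g_tilde = gram_schmidt(np.random.randn(K, d))
# All pairwise inner products are (numerically) zero:
print(np.abs(np.triu(g_tilde @ g_tilde.T, k=1)).max())
```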
On the other hand, as explained in Appendix F.1, the complexity of AdaFed is lower than that of FedMGDA+.

## G Curves and Histograms

## G.1 Training Curves for CIFAR-10 and CIFAR-100

In this subsection, we depict the average test accuracy over the course of training for the CIFAR-10 dataset using setup one (see Section 7.1). In particular, we plot the average test accuracy across all the clients vs. the number of communication rounds. Additionally, to demonstrate the convergence of the FL algorithms after 2000 communication rounds, we depict the training curve for 4000 rounds. The curve for each method is obtained by using the best hyper-parameter of the respective method (we discussed the details of hyper-parameter tuning in Appendix E). Furthermore, the curves are averaged over five different seeds. The results are shown in Figure 5 and Figure 6 for CIFAR-10 and CIFAR-100, respectively.

Notably, in Figure 5, AdaFed converges faster than the benchmark methods. Specifically, AdaFed reaches an average test accuracy of 40% after around 400 communication rounds, whereas the benchmark methods reach this accuracy only after around 900 rounds. Indeed, this is another advantage of AdaFed in addition to imposing fairness across the clients.

![29_image_0.png](29_image_0.png)

Figure 5: Average test accuracy across clients for different FL methods on CIFAR-10. The setup for the experiments is elaborated in Section 7.1, setup 1.

![29_image_1.png](29_image_1.png)

Figure 6: Average test accuracy across clients for different FL methods on CIFAR-100. The setup for the experiments is elaborated in Section 7.2, setup 1.

## G.2 Histogram of Accuracies

To better observe the spread of client accuracies, we depict the histogram of accuracy across 500 clients for the Original FEMNIST dataset (the setup for the experiment is discussed in Appendix D.1). To this end, we plot the histogram of the clients' accuracies for three different methods: (i) FedAvg, (ii) q-FFL, and (iii) AdaFed (γ = 5), all using their well-tuned hyper-parameters. The result is depicted in Figure 7. As seen, the distribution of the accuracy is more concentrated (fair) for AdaFed.

![30_image_0.png](30_image_0.png)

Figure 7: The distribution of clients' accuracy for the Original FEMNIST dataset using three different methods, namely: (i) FedAvg, (ii) q-FFL, and (iii) AdaFed.

## H CIFAR-100 Results with More Local Epochs

In this section, we test the performance of AdaFed using a larger number of local epochs e. To this end, we use the same setups as those used in the main body of the paper to produce the results for CIFAR-100, but we change the number of local epochs e to 10 and 20 (we selected e = {10, 20} because these values have been commonly utilized in the literature for the CIFAR-100 dataset). The results for e = 10 and e = 20 are reported in Table 17 and Table 18, respectively. We highlight two key observations from the tables:

- AdaFed can still provide a higher level of fairness compared to the benchmark methods;
- While increasing the number of local epochs from 1 to 10 results in higher average accuracy, this trend is not observed when further increasing e to 20.
| | Setup 1 | | | | Setup 2 | | | |
|-----------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| FedAvg | 31.14 | 4.09 | 25.30 | 40.54 | 20.54 | 6.42 | 11.12 | 33.47 |
| q-FFL  | 29.45 | 4.66 | 25.35 | 39.91 | 20.77 | 6.20 | 11.05 | 33.52 |
| AFL    | 31.17 | 3.69 | 25.12 | 39.52 | 19.32 | 4.85 | 11.23 | 28.93 |
| TERM   | 30.56 | 3.63 | 27.19 | 39.46 | 17.91 | 5.87 | 10.11 | 32.00 |
| AdaFed | 31.19 | 3.14 | 28.81 | 40.42 | 20.41 | 4.71 | 11.39 | 34.08 |

Table 17: Test accuracy on CIFAR-100 with e = 10. The reported results are averaged over 5 different random seeds.

| | Setup 1 | | | | Setup 2 | | | |
|-----------|-------|------|-----------|----------|-------|------|-----------|----------|
| Algorithm | ā | σa | Worst 10% | Best 10% | ā | σa | Worst 10% | Best 10% |
| FedAvg | 29.11 | 4.31 | 24.61 | 39.45 | 18.05 | 6.12 | 9.15  | 30.12 |
| q-FFL  | 29.15 | 4.23 | 25.12 | 39.67 | 19.02 | 6.15 | 8.41  | 31.66 |
| AFL    | 30.38 | 3.78 | 25.00 | 39.12 | 17.74 | 4.96 | 10.01 | 27.08 |
| TERM   | 31.15 | 3.62 | 27.02 | 40.41 | 15.81 | 5.68 | 8.17  | 29.26 |
| AdaFed | 30.41 | 3.19 | 27.35 | 40.18 | 18.37 | 4.21 | 10.55 | 31.78 |

Table 18: Test accuracy on CIFAR-100 with e = 20. The reported results are averaged over 5 different random seeds.

## I Integration with a Label Noise Correction Method

## I.1 What is Label Noise in FL?

Label noise in FL refers to inaccuracies or errors in the ground-truth labels associated with the data used for training. It occurs when the labels assigned to data points are incorrect or noisy for various reasons. Label noise can be introduced at different stages of data collection, annotation, or transmission, and it can have a significant impact on the performance and reliability of FL models. Label noise in FL is particularly challenging to address because FL relies on decentralized data sources, and participants may have limited control over label quality in remote environments. Dealing with label noise often involves developing robust models and FL algorithms that can adapt to the presence of inaccuracies in the labels.

## I.2 Are Fair FL Algorithms Robust Against Label Noise?

The primary intention of fair FL algorithms, including AdaFed, is to ensure fairness among the clients while maintaining the average accuracy across them. Yet, these algorithms are not robust against label noise (mislabeled instances). Nonetheless, AdaFed can be integrated with label-noise-resistant methods from the literature, yielding an FL method which (i) satisfies fairness among the clients, and (ii) is robust against label noise.

In particular, among the label-noise-resistant FL algorithms in the literature, we select FedCorr (Xu et al., 2022) to be integrated with AdaFed. FedCorr introduces a dimensionality-based filter to identify noisy clients, which is accomplished by measuring the local intrinsic dimensionality (LID) of local model prediction subspaces. They demonstrate that it is possible to distinguish clean datasets from noisy ones by observing the behavior of LID scores during the training process (we omit further discussion of FedCorr, and refer interested readers to their paper for more details).
Similarly to FedCorr, we use a real-world noisy dataset, namely Clothing1M9 (Xiao et al., 2015), and we use exactly the same setting as they used for this dataset10. In particular, we use local SGD with a momentum of 0.5, a batch size of 16, and five local epochs, and set the hyper-parameter T1 = 2 in their algorithm. In addition, when integrating with AdaFed, we set γ = 5 for AdaFed.

The results are summarized in Table 19. As observed, the average accuracy obtained by AdaFed is around 2.2% lower than that obtained from FedCorr, which shows that AdaFed is not robust against label noise. Moreover, as expected, AdaFed results in fairer client accuracy. On the other hand, when AdaFed is combined with FedCorr, the average accuracy improves while maintaining satisfactory fairness among the clients.

9Clothing1M contains 1M clothing images in 14 classes. It is a dataset with noisy labels, since the data is collected from several online shopping websites and includes many mislabelled samples.
10https://github.com/Xu-Jingyi/FedCorr

| Algorithm | ā | σa | Worst 10% | Best 10% |
|------------------|-------|-------|-----------|----------|
| FedAvg           | 70.49 | 13.25 | 43.09 | 91.05 |
| FedCorr          | 72.55 | 13.27 | 43.12 | 91.15 |
| AdaFed           | 70.35 | 5.17  | 49.91 | 90.77 |
| FedCorr + AdaFed | 72.29 | 8.12  | 46.52 | 91.02 |

Table 19: Test accuracy on the Clothing1M dataset. The reported results are averaged over 5 different random seeds.
Review 1:
Summary: This paper studies the problem of learning an unfair model in federated learning, where the trained model may unfairly advantage or disadvantage some of the devices. The authors propose AdaFed, which adaptively finds an updating direction for the server along which (i) all the clients' loss functions are decreasing; and (ii) the loss functions for the clients with larger values decrease at a higher rate. This results in a method that better balances training across multiple devices and is effective on a suite of federated datasets in improving fairness.

Strengths and Weaknesses:
Strengths:
1. The paper is solid and well motivated, and generally well written.
2. The method is very well presented and analyzed, with theoretical results supporting the main claims.
3. Experiments are extensive over many FL datasets and settings; results are consistently better than baselines.

Weaknesses:
1. Although the authors have acknowledged the lack of a standard definition of fairness, I am wondering if there is another more suitable term - fairness is commonly used to refer to social biases or performance disparities across different individuals or groups in the ML literature. Furthermore, the authors have not really analyzed why unfairness can occur - is it mostly due to imbalanced client dataset sizes? Out-of-distribution clients, or clients with data from a different modality? Analyzing these reasons seems important to build solutions.
2. Quite a few missing baselines; in particular FedMGDA, which is directly built upon for motivation, is a missing baseline in Tables 2, 3, and 4. TERM and Ditto are also missing in some of the tables.
3. Section 7.5 (Analysis of Results) could be significantly expanded. It would be good if the main theorems regarding convergence and scaling were analyzed empirically here as well. Another experiment would be to more closely analyze which populations/devices the model found to be unfair and ended up scaling to improve fairness. Any insights here for how to better design FL distributions/study fairness/improve modeling?
4. Not a critical change, but the paper would be much more impactful if there were some real-world FL experiments with actual data distributions that exhibit unfairness when trained normally, such as certain hospitals, demographic groups, etc.

Requested Changes: see weakness points 1-4
Broader Impact Concerns: none

==================================================

Review 2:
Summary: The authors introduce a new server-side averaging method for federated learning. For each client's update vector that is sent to the server, a scalar multiplicative value is found such that the average of the scaled update vectors results in a fairer model at convergence. The authors motivate their method thoroughly, providing analysis and intuition. Their approach is accompanied by convergence proofs (that I have not read in detail), as well as experimental results across a wide variety of datasets & benchmarks.

Strengths and Weaknesses:
Strengths:
- Well-motivated, good exposition, good writing
- Wide selection of datasets, tasks, models and baselines for experimental evaluation
- Relevant topic

Weaknesses:
- While the experimental setup is extensive, I see potential red flags that I would ask the authors to address (detailed below)
- Baselines: A common approach to improved server-side averaging is to use FedAdam (or other higher-moment optimizers) at the server (https://arxiv.org/abs/2003.00295).
I would like to understand why the authors did not compare against these algorithms.

Requested Changes: The experimental evaluation is very extensive, but I am surprised by the decision to optimize for only 200 or 400 rounds for CIFAR-100. In my own experience with this dataset and model in the FL context, convergence is achieved only after a higher number of rounds - I have not replicated the exact setting of course, but I would be curious to see a learning curve. Along the same lines, Figure 2 for CIFAR-10 seems to suggest that, in terms of average test accuracy, the differently trained algorithms have not converged yet. I would like to understand how much of a compromise AdaFed makes in terms of accuracy compared to these baselines when trained to convergence. As mentioned above, I would like to understand whether adaptive server-side optimizers should serve as a relevant baseline in these settings, and if yes, please include one in your baselines. That being said, I really enjoyed reading this paper and believe it to be strong.

Broader Impact Concerns: No concerns

==================================================

Review 3:
Summary: This work proposes a way to tackle the fairness problem in federated learning (FL). Defining the problem as the variance in accuracies across clients, this work employs multi-objective minimization to develop a novel approach to global model aggregation such that this problem can be addressed. Additionally, some theoretical analysis has been conducted to justify the choice of global learning rate and to prove convergence. In the experimental section, results on extensive datasets show the promise of the proposed AdaFed approach. In particular, AdaFed performs outstandingly in reducing the variance of accuracies across the clients, i.e., the problem this paper attempts to address.

Strengths and Weaknesses:
# Strengths
1. This work successfully correlates the loss variance with the aggregation algorithm using principles derived from multi-objective optimization (MOO). This is a unique and well-crafted concept.
2. Further analysis has been undertaken to rationalize the choice of global learning rate and to prove convergence, adding to the paper's reliability.
3. This work clearly explains how the proposed approach is derived, making the work accessible to an audience not working directly on MOO. This has the advantage of attracting more researchers in the field of FL to study MOO.
4. Experimental results show that the variance in accuracy is effectively reduced compared with the baselines, proving the effectiveness of AdaFed.

# Weaknesses
1. The derivation of AdaFed is essentially based on distributed SGD instead of FL. The main difference between these two approaches is that one performs aggregation at every iteration, while the other performs aggregation after many epochs of local training. While I believe some insights from distributed SGD can be transferred to FL, they are not always generalizable. This work lacks discussion regarding this.
2. When applied to a real FL application, accumulated gradients are regarded as pseudo-gradients. However, only the loss of the last iteration is used to re-scale the gradient (Line 9, Algorithm 1). Can the authors explain the reason for choosing this approach instead of the accumulated loss?
3. The experimental results and settings are not very convincing. In particular, the number of epochs is mostly set to 1, which is rare for FL applications due to concerns about communication overhead.
Additionally, the data splitting does not seem to impose a very difficult learning task, and the networks are not small for the respective datasets; however, the reported accuracies are quite low.

Requested Changes: The following changes can strengthen the work in my view:
1. More experiments with a larger number of epochs. Even negative results can add credibility to this work.
2. Discussion regarding the weaknesses.
3. Modern datasets usually contain mislabeled instances. It seems the proposed approach is susceptible to such problems. Discussion and experiments to show the limitation are preferred.

Broader Impact Concerns: N/A

==================================================

Metareview:
Recommendation: Accept as is
Comment: The reviewers unanimously commended the exposition of the paper, citing that it is clear and coherent. The reviewers also found the claims to be correct. The problem considered by the paper is also of potential interest to the federated learning community. Some suggestions made by reviewers were partially addressed, but they are not critical.

==================================================
# Attention Beats Concatenation For Conditioning Neural Fields

Daniel Rebain *drebain@google.com* University of British Columbia, Google Research
Mark J. Matthews *mjmatthews@google.com* Google Research
Kwang Moo Yi *kmyi@cs.ubc.ca* University of British Columbia
Gopal Sharma *gopalsharma.research@gmail.com* University of British Columbia
Dmitry Lagun *dlagun@google.com* Google Research
Andrea Tagliasacchi *taglia@google.com* Google Research, Simon Fraser University

Reviewed on OpenReview: *https://openreview.net/forum?id=GzqdMrFQsE*

## Abstract

Neural fields model signals by mapping coordinate inputs to sampled values. They are becoming an increasingly important backbone architecture across many fields, from vision and graphics to biology and astronomy. In this paper, we explore the differences between common conditioning mechanisms within these networks, an essential ingredient in shifting neural fields from memorization of signals to generalization, where the set of signals lying on a *manifold* is modelled jointly. In particular, we are interested in the scaling behaviour of these mechanisms to increasingly high-dimensional conditioning variables. As we show in our experiments, high-dimensional conditioning is key to modelling complex data distributions; thus it is important to determine what architecture choices best enable this when working on such problems. To this end, we run experiments modelling 2D, 3D, and 4D signals with neural fields, employing concatenation, hyper-network, and attention-based conditioning strategies - a necessary but laborious effort that has not been performed in the literature. We find that attention-based conditioning outperforms other approaches in a variety of settings.

## 1 Introduction

Neural fields, also called coordinate-based neural networks, have demonstrated excellent results in learning complex signals. Neural fields learn to reproduce signals by mapping input coordinates to output values. They are most commonly implemented as Multilayer Perceptrons (MLP) and have found widespread application over a range of tasks: 1D time sequences (Sitzmann et al., 2020), 2D image regression (Tancik et al., 2020), 3D radiance fields (Mildenhall et al., 2020), and 4D light fields (Suhail et al., 2022). Many previous works overfit to a single example (e.g. an image, sound, object, or scene) (Sitzmann et al., 2020; Mildenhall et al., 2020), employing neural networks to "memorize" the target signal. However, there are many potential applications which require jointly representing *multiple* signals, commonly as a distribution over a
Of particular interest to us is the *dimensionality* of the latent code, which directly determines the dimensionality of the manifold on which the output signals lie. If we wish to perfectly model some distribution of data, then we must use a latent dimension at least equal to that of the manifold on which that distribution lies. We demonstrate this effect on a toy dataset where we explicitly control the dimensionality of the data manifold, and find that increasing dimension of this manifold corresponds to worse reconstruction quality across different models. Based on this observation, and inspired by methods such as VQ-VAE (van den Oord et al., 2017), which have utilized high-dimensional conditioning to great effect in CNN-based networks, we wish to determine whether neural field networks are capable of scaling up to condition on such high-dimensional latent codes, and what factors may affect this. Unlike CNNs, which "share" computation across the different (discrete) output samples of signals that they model, neural fields typically repeat the majority of their computation for each sample. As such, the question of how to most efficiently incorporate high-dimensional conditioning signals into a neural field network does not have an obvious answer - a problem which we intend to address in this paper. To facilitate this, we undertake a comprehensive study of three common methods of conditioning on latent inputs in neural fields: concatenated latent vectors, hyper-networks, and attention-based set latent representations. We evaluate each of these methods across three application domains: image auto-encoding, novel view synthesis using neural volumetric rendering, and light field networks. Performing the experiments reported in this paper required a very large amount of compute time: on the order of 23k GPU-hours. Due to the significant expense involved, we chose experimental parameters carefully and focused on architectural choices which appeared most likely to affect the outcome of the experiments, and therefore the conclusions of our analysis. We also did not run any additional hyper-parameter tuning or searches on top of our primary sweeps and ablation. It is our hope that by incurring this expense and sharing our conclusions, others can avoid the need to run similarly costly experiments when making design decisions for neural field implementations. In summary, our contributions include: - A series of extensive quantitative evaluations, where we compare how concatenation, hyper-network, and attention-based neural field architectures perform in modelling high-entropy data distributions for a number of application domains: 2D image auto-encoding, 3D radiance fields, and 4D light fields, benchmarked on standard datasets. - An ablation study comparing different approaches to conditioning MLP neural field networks by concatenation when the latent dimension is very high. We find that for very large latent codes, splitting the code between the hidden layers of the MLP provides the best results; ![2_image_0.png](2_image_0.png) Figure 2: **Related works –** a temporal overview of how techniques have evolved, with arrows roughly denoting the dependency graph of technical contributions. At a high-level, we reveal how recently proposed coordinate network architectures (Sajjadi et al., 2022) are significantly more effective than MLPs (Stanley, 2007) for the task of conditional image generation (Rebain et al., 2022). Abbreviations for the various methods are highlighted in Section 2. 
- An analysis showing that attention-based conditioning is broadly more effective for high-dimensional latent codes, given a particular compute budget. ## 2 Related Works Our work mainly focuses on coordinate-based networks, also called neural fields. Introduced by Stanley (2007) under the name of Conditional Pattern Producing Networks (**CPPN**), recently became popular as differentiable representations for 3D data (Tewari et al., 2021; Xie et al., 2022). We now position our work with respect to the temporal progression of research in this topic (Figure 2), and in particular discussing how conditioning is implemented in architectures. Coordinate networks w/ 3D supervision. Neural implicit networks for 3D shape representation receive as input a 3D point coordinate along with a latent encoding the shape and output either occupancy (**OccNet** (Mescheder et al., 2019) and **IMNet** (Chen & Zhang, 2019)) or (truncated) signed distance fields (**DeepSDF** Park et al. (2019)). These architectures are either trained with an auto-encoder (IMNet, OccNet) or an auto-decoder (DeepSDF) fashion, and rely on direct 3D supervision for training. Conditioning is implemented either by input concatenation (IMNet, DeepSDF), or by modulation of activations (DeepSDF). Coordinate networks w/ 2D supervision. Moving away from 3D supervision, Scene Representation Networks (SRN) (Sitzmann et al., 2019) pioneered the use of 2D *images as supervision* of an underlying 3D neural scene, by relying on differentiable ray-marching. Similarly to previously discussed works, each scene in SRN is represented as latent code while conditioning realized via hyper-networks (Ha et al., 2017), but the generated images lack fine-grained details. Meanwhile, **NeuralVolumes** (Lombardi et al., 2019) introduced the use of *differentiable volume rendering* as a way to learn models with high visual-quality, but still relied on grids for scene representation. Marrying methods that store radiance within a coordinate network (Ren et al., 2013) with differentiable volume rendering (NeuralVolumes), neural scene representations (SRN), Mildenhall et al. (2020) introduced Neural Radiance Fields (**NeRF**) as an effective way to capture a 3D scene via differentiable rendering, and achieved novel-view synthesis results of unforeseen visual quality1. Overcoming the spectral bias. To represent high-frequency signals in neural scene representations one must overcome the so-called "spectral bias" of MLP architectures (Basri et al., 2020; Rahaman et al., 2019) - an inductive bias that causes fully connected neural networks to prefer representing low-frequency signals. NeRF overcame this issue by borrowing the ideas of (sinusoidal) *positional encoding* from transformers (Vaswani et al., 2017), SIREN (Sitzmann et al., 2020) proposed the use of sinusoidal activations (and careful initialization). These are relatively similar solutions, with the core difference that sinusoidal encoding is applied at the input 1Concurrent work by Yariv et al. (2020) and Niemeyer et al. (2020) is similar in spirit but have a more restrictive setup. ![3_image_0.png](3_image_0.png) Figure 3: **Application domains –** We consider three main application domains: 2D image encoding, 3D radiance fields (radiance integrated along rays, but without directional dependence), 4D light fields. layer in NeRF, and throughout the entire architecture in SIREN. 
Further, note that both of these solutions originally targeted the representation of a *single* scene, that is, working in the overfitting (i.e. non-conditional) training regime; the next section will cover multi-scene training, which is achieved via conditioning. Conditional networks w/ 2D supervision. The recent **pi-GAN** (Chan et al., 2021) and **CodeNeRF** (Jang & Agapito, 2021) demonstrate conditioning FiLM SIREN and MLP models respectively to obtain (class-specific) conditional neural radiance fields for novel view synthesis. While CodeNeRF struggles with low image quality, pi-GAN is capable of generating high quality samples by leveraging generative adversarial networks. Somewhat concurrently to CodeNeRF, Rebain et al. (2022) has shown that, in the conditional novel view synthesis setting, **LoLNeRF** can achieve competitive performance in comparison to GANs given a sufficiently large network and latent code. However, we reveal how there are diminishing returns in employing large MLP networks and large latent codes when MLP architectures are employed. We investigate this problem from the point of view of representation power. Our extensive experiments reveal how recently proposed alternatives to MLPs for coordinate-based neural networks (Sajjadi et al., 2022)(SRT) overcome this issue across a number of application domains. ## 3 Method We experimentally analyze the representation power of different neural field conditioning mechanisms. With that objective, we first introduce the basics of neural fields in Section 3.1, their training strategies in Section 3.2, and their architecture in Section 3.3. ## 3.1 Neural Fields Given a signal (1D for audio, 2D for images, 3D for geometry etc., See Figure 3) mapping a bounded set of coordinate inputs X ⊂ R dto outputs O ∈ R n, a neural field fθ with parameters θ, represents a signal, taking a coordinate x ∈ X as input and output fθ ∈ O: $$f_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{n},\quad\mathbf{x}\mapsto f_{\theta}(\mathbf{x})$$ n, x 7→ fθ(x) (1) Examples include fields receiving as input: - x ∈ R 1 1D time coordinates, outputting scalar audio intensity (Sitzmann et al., 2020); - x ∈ R 2 2D pixel coordinates, outputting RGB color for pixels (Stanley, 2007); - x ∈ R 3 3D coordinate, outputting a (truncated) signed distance from a shape (Park et al., 2019); - x ∈ R 4 4D encoded camera rays, outputting pixel color (Sitzmann et al., 2021); - x ∈ R 5 3D coordinate and 2D view direction, outputting density and radiance (Mildenhall et al., 2020). $$(1)$$ Networks can be supervised minimizing a loss function over the observations: $$\operatorname*{arg\,min}_{\theta}\;\mathbb{E}_{x\sim{\mathcal{X}}}\left[||f_{\theta}(\mathbf{x})-S(\mathbf{x})||_{2}^{2}\right].$$ $$\left(2\right)$$ . (2) When using MLP architectures to implement fθ, directly inputting the coordinates to the neural network, they have a bias towards representing low-frequency signals (Rahaman et al., 2019), leading to low-quality signal reconstruction. 
To encourage high-frequency signal learning, the input coordinates may be independently mapped to a higher dimension using sine and cosine functions with increasingly large frequencies (Vaswani et al., 2017; Mildenhall et al., 2020):

$$\gamma(x)=\left(\sin(2^{0}\pi x),\sin(2^{1}\pi x),\ldots,\sin(2^{l-1}\pi x),\cos(2^{0}\pi x),\cos(2^{1}\pi x),\ldots,\cos(2^{l-1}\pi x)\right).\tag{3}$$

Several other formulations of position encoding have been explored (Rahimi & Recht, 2007; Barron et al., 2021; Tancik et al., 2020), but we use this one in our experiments as it has proven widely effective and remains commonly used.

## 3.2 Neural Fields: Training Methodology

Coordinate networks can be used to overfit (memorize) a bounded signal for such purposes as image denoising (Sitzmann et al., 2020) or novel view synthesis of scenes given a finite set of input images and associated poses (Mildenhall et al., 2020), but this memorization process does not generalize to new inputs. To overcome these limitations, conditional networks have been designed, where the decoder takes as input an *instance-specific* latent code z ∈ R^m along with the coordinates to regress the desired quantities. Below, we describe two approaches to training networks conditioned on latent codes: (1) auto-encoders2 and (2) auto-decoders.

Auto-encoder. An auto-encoder (or encoder-decoder) employs a domain-specific encoder E, parameterized by θe, that takes a dataset element I ∈ I as input and outputs the latent code as E : I ↦ E(I) = z ∈ R^m. The element I can be an image, point cloud, audio signal, etc. A decoder D, parameterized by θd, accepts the latent code and reconstructs the input as D : z ↦ D(z) = Î. These networks are trained to jointly optimize encoder and decoder parameters using the reconstruction objective:

$$\operatorname*{arg\,min}_{\theta_{e},\theta_{d}}\;\mathbb{E}_{\mathbf{I}\sim\mathcal{I}}\left[\|\mathcal{D}_{\theta_{d}}(\mathcal{E}_{\theta_{e}}(\mathbf{I}))-\mathbf{I}\|_{2}^{2}\right].\tag{4}$$

Auto-decoder. Also known as Generative Latent Optimization (GLO) (Bojanowski et al., 2017), auto-decoders are a form of generative model which generates an output conditioned on a latent code z. Here, the latent code is not predicted by an encoder, but rather jointly optimized with the decoder parameters. A code-book Z = {z_i} is used, where each row consists of a latent code corresponding to a training instance. This alleviates the need to design a task-specific encoder. The network is trained to optimize the parameters of the decoder θd and the latent codes Z using the reconstruction loss:

$$\operatorname*{arg\,min}_{\mathcal{Z},\theta_{d}}\;\mathbb{E}_{(\mathbf{I},\mathbf{z})\sim(\mathcal{I},\mathcal{Z})}\left[\|\mathcal{D}_{\theta_{d}}(\mathbf{z})-\mathbf{I}\|_{2}^{2}+\rho(\mathbf{z})\right],\tag{5}$$

where ρ is an optional regularization function applied to the latent z when a particular distribution is desired. For example, choosing ρ(·) = ∥·∥²₂ results in the codes Z being distributed as a Gaussian (Park et al., 2019). During inference, commonly referred to as *test-time optimization* in the literature, the latent code is discovered by optimizing the above loss function while keeping the network parameters constant:

$$\operatorname*{arg\,min}_{\mathbf{z}}\;\mathbb{E}_{\mathbf{I}\sim\mathcal{I}}\left[\|\mathcal{D}_{\theta_{d}}(\mathbf{z})-\mathbf{I}\|_{2}^{2}+\rho(\mathbf{z})\right].\tag{6}$$

A minimal sketch of this training scheme is given below.

2An extension of auto-encoders, Variational Auto-Encoders (VAE) (Kingma & Welling, 2013), are a probabilistic formulation that defines a distribution of latent codes generating elements matching the input dataset distribution. Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are another common method of synthesis that maps a latent code to an output. However, for the purpose of this paper we do not analyze VAEs and GANs, as we are not interested in the generative aspects of the problem, and benchmark our results as a regression task.
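The following is a minimal PyTorch sketch (our illustration, not the paper's implementation) of Equations (3) and (5): a learned code-book Z of per-instance latent codes is optimized jointly with a concatenation-conditioned MLP decoder, using the Gaussian regularizer ρ(z) = ∥z∥²₂. All dimensions and hyper-parameters (latent_dim, widths, learning rate, reg) are illustrative assumptions.

```python
import torch

L = 6  # number of positional encoding frequencies (l in Equation (3))

def gamma(x):  # positional encoding of Equation (3); x: (..., d)
    freqs = (2.0 ** torch.arange(L, dtype=torch.float32)) * torch.pi  # (L,)
    ang = x.unsqueeze(-1) * freqs                                     # (..., d, L)
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)      # (..., 2*d*L)

num_images, latent_dim, d = 100, 64, 2
decoder = torch.nn.Sequential(  # conditioning-by-concatenation MLP decoder
    torch.nn.Linear(2 * d * L + latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),  # RGB output
)
Z = torch.nn.Parameter(0.01 * torch.randn(num_images, latent_dim))  # code-book
opt = torch.optim.Adam(list(decoder.parameters()) + [Z], lr=1e-3)

def train_step(image_ids, coords, targets, reg=1e-4):
    z = Z[image_ids]                                   # (B, latent_dim)
    pred = decoder(torch.cat([gamma(coords), z], dim=-1))
    # Reconstruction term plus rho(z) = ||z||_2^2 (Equation (5)).
    loss = (pred - targets).square().mean() + reg * z.square().sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Test-time optimization (Equation (6)) reuses the same step with the decoder parameters frozen and only a freshly initialized code being optimized.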
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are another common method of synthesis that map a latent code to output. However, for the purpose of this paper we do not analyze VAEs and GANs, as we are not interested in generative aspects of the problem, and benchmark our results as a regression task. $$\left(5\right)$$ $$\left(6\right)$$ ![5_image_0.png](5_image_0.png) Figure 4: **Architectures –** Multi-layer perceptrons (MLPs) can be conditioned using a concatenation of coordinate and latent code (left). Hyper-networks use a secondary MLP that computes weights for the primary network which operates on the coordinate (middle). Attention networks condition on a set-latent representation that is incorporated into the network through attention layers, with query vectors being derived from the coordinates (right). ## 3.3 Neural Fields: Decoder Architectures Many neural field architectures have been proposed in the literature. In this section, we review some common varieties including the ones tested in our experiments. Of particular relevance is the different ways in which these networks incorporate *conditioning* signals. For additional details on how we implement these architectures, see Section 4 and the Appendix. Concatenation - Figure 4 (left). Arguably the simplest way to construct a neural field is with an MLP applied to the input coordinates (either as x or γ(x)), and the simplest method to condition it by concatenating a latent vector to these coordinates (Park et al., 2021; Martin-Brualla et al., 2021). This input is commonly re-concatenated to one or more hidden layers using *skip connections* to improve stability (Chen & Zhang, 2019). This approach has proven effective for many applications, but has undesirable characteristics such as requiring O(k(m + k)) parameters for the linear layers, where m is the latent dimension, and k is the hidden layer width. Critically for our purposes, when m becomes larger than k, the hidden layers effectively perform a linear dimensionality reduction of the latent codes, limiting the network's ability to utilize high-dimensional latents effectively. This can be partially mitigated by partitioning the latents to be distributed among the hidden layers, but even this is limited by the layer count; see Section 4.3. Hyper-networks - Figure 4 (middle). One approach to overcoming the inefficiency of concatenation in MLPs is hyper-networks (Ha et al., 2017; Sitzmann et al., 2019). In this approach, a secondary network Ψ(z) applied to the latent code z regresses the *weights* ϕ of the MLP fϕ, possibly allowing for a more efficient incorporation of latent information. While the coordinate network itself is very efficient in this setup, it off-loads complexity to the hyper-network, which can become very large in order to predict entire layer weights as output. How much of a problem this is depends on how frequently the hyper-network needs to be invoked. If only a few instances (and thus latent codes) are seen per batch, the hyper-network cost will be small. In cases like Rebain et al. (2022) however, where thousands of latent codes per batch are required, this cost can become prohibitive. A related approach to hyper-networks is *feature-wise linear modulation* (FiLM) (Perez et al., 2018; Mehta et al., 2021; Chan et al., 2021), which similarly incorporates a secondary network to operate on latent codes, but restricts the output of this network to feature-wise *modulation* of activations in the MLP, rather than predicting all of its weights. 
This reduces the computational load of the hyper-network, but also reduces its expressiveness in terms of how it can affect the main MLP. Attention - Figure 4 (right). More recently, a class of neural field decoder architectures has emerged based on the sequence decoder mechanism used in transformers (Vaswani et al., 2017). Introduced by Jiang et al. (2021) and extended to higher-dimensional fields by Sajjadi et al. (2022), this architecture defines a neural field using *cross-attention* layers. These layers compute attention between a query derived from the encoded sample position γ(x), and a *set of tokens*, which fill a role equivalent to latent codes in the previously described architectures. This method of incorporating conditioning information from a *set-latent* representation (which subdivides the latent code into fixed-width tokens) provides unique benefits compared to other approaches, namely the ability to efficiently condition on very high-dimensional inputs and the ability to vary this conditioning in a position-dependent way. This efficiency comes from the fact that a set of latent tokens can encode information equivalent to the weights of a linear layer predicted by a hyper-network, but are produced by a network that is *shared over the tokens* rather than predicting the entire token set in a single operation. To quantify this, the complexity of predicting a set-latent is O(n 2), compared to the O(n 3) required to predict weights for a linear layer with a hyper-network3. Compared to other more efficient hyper-network-like architectures like FiLM, this also allows the flow of conditioning information into the coordinate network to vary over space, as the attention is derived from the encoded position. While it might seem at first glance that such set-latent prediction is not necessary for networks like auto-decoders, we have observed that the inclusion of a token-level mapping network can prevent training from collapsing when conditioning with a learned latent table. A number of recent works (Müller et al., 2022; Chan et al., 2022; Liu et al., 2020; Chen et al., 2022) construct neural field style networks which condition on sample positions, but also incorporate a *spatial grid*, where each cell is associated with latent vector(s). Conceptually, this is similar to attention-based conditioning, as it also derives a sample-level feature from a larger set of vectors using weights derived from the sample position. Unlike the attention-based architectures we focus on, grid-based methods can perform spatial "look-up" nearly for free, significantly increasing their computational efficiency. Nonetheless, the application of these models to category-level models has been limited so far, and they suffer from the *curse of dimensionality* in scaling to problems beyond 3 dimensions. As such, we consider grids to be a complementary approach, and focus our analysis on the other architectures previously mentioned. ## 4 Experiments Given the architectures described in Section 3.3, we are interested in analysing how they scale to very high-dimensional latent codes. In Section 4.1, we verify our assumptions on an *image auto-encoding* problem to investigate how changes in latent dimensionality and complexity of the data distribution affects performance. Then, in Section 4.2, we experiment on different application domains, each consisting of several datasets, to evaluate the latent scaling performance in practical settings. 
## 4 Experiments

Given the architectures described in Section 3.3, we are interested in analysing how they scale to very high-dimensional latent codes. In Section 4.1, we verify our assumptions on an *image auto-encoding* problem to investigate how changes in latent dimensionality and complexity of the data distribution affect performance. Then, in Section 4.2, we experiment on different application domains, each consisting of several datasets, to evaluate the latent scaling performance in practical settings. Finally, in Section 4.3, we test alternative concatenation schemes to the one we choose for MLPs, to demonstrate that we are performing a fair comparison against conditioning-by-concatenation.

## 4.1 Image Auto-Encoding

We implement a simple image auto-encoder which compresses images with a CNN to a latent representation of some dimension, using a neural field decoder to reconstruct the images from this latent code by interpreting it as a single latent code (concatenation, hyper-network) or a set-latent (attention). In all of these experiments we change only two variables for each dataset: *latent dimension* and *decoder architecture*. This allows us to isolate the variance between decoder architectures in their ability to utilize the information from the encoder. We test on two datasets, respectively discussed in Section 4.1.1 and Section 4.1.2, with network implementation details given in Section A.3.

## 4.1.1 Tiled MNIST - Figure 5 and Table 1

We design a dataset with controllable complexity to demonstrate how much the performance of an architecture can be affected by the size of the latent code and the dimensionality of the data manifold. Loosely inspired by Lin et al. (2018), it consists of images formed by a 16×16 grid of MNIST digits, where each digit is down-scaled to 16×16 pixels, for a total image resolution of 256×256. The digits are chosen *randomly* from the 60,000 images of the MNIST dataset, creating up to $60{,}000^{16 \times 16}$ unique possible combinations. This *on-the-fly* image formation strategy ensures that memorization of the entire dataset is impossible. To test the effect of dataset complexity on network performance, we vary the number of *unique* digits in each image: either 1, 16, or 256, by setting consecutive 16×16, 4×4, or 1×1 blocks of digits to use a single repeated digit. We do this so that the images of each variant have the *same* size and visual properties, but vary *significantly* in the amount of information they contain.

![7_image_0.png](7_image_0.png)

| # unique digits      | 1    | 1    | 1    | 16   | 16   | 16   | 256  | 256  | 256  |
|----------------------|------|------|------|------|------|------|------|------|------|
| Latent dimension (N) | 32   | 128  | 512  | 32   | 128  | 512  | 32   | 128  | 512  |
| Concatenation        | 31.5 | 32.8 | 32.6 | 19.5 | 24.4 | 25.0 | 16.9 | 17.0 | 16.8 |
| Hyper-network        | 30.8 | 33.3 | 33.7 | 19.3 | 24.4 | 25.4 | 18.8 | 21.7 | 21.8 |
| Attention            | 35.0 | 36.3 | 33.9 | 26.1 | 31.3 | 31.4 | 23.5 | 24.3 | 24.6 |

Table 1: **Tiled MNIST –** We compare the reconstruction quality, in terms of PSNR, of three variants of our toy dataset. We find that the reconstruction quality drops significantly with data complexity, as expected, and that for both complex distributions the best-performing configuration is the attention-based model with the largest latent code. As expected, we also find that large latent dimensions are not helpful for very simple data.

Figure 5: **Tiled MNIST –** Qualitative reconstruction results (panels: Ground Truth, Concat (N = 32), Concat (N = 512), Attn (N = 32), Attn (N = 512)).
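The on-the-fly image formation described above can be sketched as follows; this is a minimal NumPy helper of our own (function and argument names are assumptions), not the paper's code.

```python
import numpy as np

def tiled_mnist_image(digits, unique=16, rng=None):
    # digits: (60000, 16, 16) array of down-scaled MNIST digits.
    # unique: number of unique digits per image (1, 16, or 256); each unique
    # digit is repeated over a consecutive square block of tiles, so all
    # variants are 256x256 images with identical visual statistics.
    rng = rng or np.random.default_rng()
    block = int((256 // unique) ** 0.5)      # 16, 4, or 1 tiles per block side
    idx = rng.integers(len(digits), size=(16 // block, 16 // block))
    idx = np.kron(idx, np.ones((block, block), dtype=int))  # repeat in blocks
    tiles = digits[idx]                      # (tile_row, tile_col, 16, 16)
    return tiles.transpose(0, 2, 1, 3).reshape(256, 256)
```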
Analysis. In Table 1, we report average reconstruction PSNR, leading to the following key observations:

- despite the similarity of the images, the reconstruction performance varies significantly between the variants as the number of unique digits increases, confirming that the more complex datasets are indeed more difficult to model;
- increasing latent code size correlates with reconstruction quality for the more complex datasets, but not for the single-digit dataset, confirming that latent code size affects reconstruction by accounting for the dimensionality of the data manifold;
- for concatenation-based decoders on the 256-digit variant, the quality is very poor and does not improve with latent code size; this suggests that the concatenation-conditioned network does not have the capacity to model this complex, high-dimensional data manifold.

![8_image_0.png](8_image_0.png)

Figure 6: **Image auto-encoding** - Qualitative reconstruction results on CelebA-HQ for N = 8192.

| Latent dimension (N)  | 512    | 512   | 512  | 2048   | 2048  | 2048 | 8192   | 8192  | 8192 |
|-----------------------|--------|-------|------|--------|-------|------|--------|-------|------|
| Decoder architecture  | Concat | Hyper | Attn | Concat | Hyper | Attn | Concat | Hyper | Attn |
| CelebA-HQ             | 21.4   | 21.1  | 21.3 | 24.7   | 23.5  | 26.0 | 24.5   | 23.6  | 26.3 |
| Decoder params (×10⁶) | 0.6    | 2.1   | 1.7  | 1.0    | 2.2   | 1.7  | 2.6    | 2.5   | 1.7  |

Table 2: **Image auto-encoding –** We repeat our auto-encoding experiments on the CelebA-HQ dataset, and report the reconstruction quality (PSNR). We again find that the attention-based network in combination with high-dimensional latent codes provides the best performance. This advantage cannot be explained by network capacity alone, as we find that the attention model performs the best for the largest codes, despite having the smallest number of parameters.

## 4.1.2 CelebA-HQ

In addition to the Tiled MNIST dataset, we also experiment with the CelebA-HQ dataset introduced by Karras et al. (2018). In contrast to the previous dataset, these are 30k real images consisting of high-resolution versions of CelebA (Liu et al., 2015) human face images. We resize these images to 256×256 resolution to keep the CNN encoder the same as for Tiled MNIST, and to limit the computational cost of training. In Table 2, we observe that the attention-based decoder achieves the highest reconstruction PSNR, whereas the concatenation and hyper-network decoders saturate in performance earlier.
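For reference, the PSNR metric reported throughout these tables can be computed as in the following small helper of our own, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher is better.
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```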
## 4.2 Novel View Synthesis

For the remaining experiments, we focus on the more challenging real-world task of *novel view synthesis*, one of the main application domains of neural fields. Given one or more images of an object or a scene, this is the task of generating images from novel viewpoints. We experiment with two different neural field-based approaches to novel view synthesis: neural radiance fields (Mildenhall et al., 2020), and light field networks (Sitzmann et al., 2021). Both are analyzed using the following datasets, where we randomly select 10% of views to be held out from training and used for testing:

- **HUMBI (Yu et al., 2020)**: is a large multiview dataset of 772 human subjects across a variety of demographics, captured with 107 synchronized HD cameras. We down-scale center crops of 97 views to 256×256 for 308 full body subjects. We multiply the RGB values by a computed segmentation mask, such that the networks only reconstruct the subjects and not the background.
- **SRN Cars and Chairs (Sitzmann et al., 2019)**: We use the rendered dataset of cars and chairs from ShapeNet (Chang et al., 2015). The chair dataset consists of 3654 models, with each instance being rendered from 200 unique viewpoints, which we downscale to 128×128 resolution, while the cars dataset consists of 2151 models, with each instance being rendered from 250 unique viewpoints, which we upscale to 256×256 resolution.

![9_image_0.png](9_image_0.png)

Figure 7: **Neural Radiance Fields** - Qualitative reconstruction results on SRN Chairs for N = 8192.

| Latent dimension (N)  | 512    | 512   | 512  | 2048   | 2048  | 2048 | 8192   | 8192  | 8192 |
|-----------------------|--------|-------|------|--------|-------|------|--------|-------|------|
| Decoder architecture  | Concat | Hyper | Attn | Concat | Hyper | Attn | Concat | Hyper | Attn |
| HUMBI                 | 29.3   | 30.2  | 31.0 | 27.9   | 30.6  | 31.0 | 28.3   | 29.9  | 31.0 |
| SRN-Chairs            | 20.1   | 24.5  | 24.1 | 20.2   | 23.2  | 24.4 | 20.9   | 23.5  | 25.0 |
| SRN-Cars              | 25.8   | 28.6  | 29.3 | 25.0   | 28.0  | 29.1 | 25.9   | 28.0  | 29.4 |
| Decoder params (×10⁶) | 0.6    | 2.1   | 2.6  | 1.1    | 2.2   | 2.6  | 2.6    | 2.8   | 2.6  |

Table 3: **Neural Radiance Fields –** We compare the performance of different conditioning approaches on the NeRF novel view synthesis task, using the reconstruction PSNR metric on the HUMBI, SRN-Chairs and SRN-Cars datasets. Attention conditioning with the largest latent size achieves the best result in all cases.

Latent codes. To avoid the complexity of deriving the parameters of a 3D scene from images through an encoder network, we adopt the approach of Rebain et al. (2022) for our novel view synthesis experiments, and implement an auto-decoder with a learnable latent table. The rows of the table, which each correspond to an object in the dataset, are interpreted as either single latent vectors, or set-latents with a fixed token width of 128 and a varying number of tokens (such that the overall dimension is unchanged), depending on the decoder architecture.

Training. All novel view synthesis methods are supervised using the pixel-wise reconstruction loss in (2), applied on the training images and rendered pixel values for training views. For all datasets and architectures, training batches consist of 64 instances, with 2 views per instance, and 64 pixels sampled per image.

## 4.2.1 Neural Radiance Fields

A neural radiance field network fθ defines a function from a point in space x and viewing direction d to radiance and optical density: $f_\theta : \mathbb{R}^5 \rightarrow \mathbb{R}^4$, $(x, d) \mapsto f_\theta(x, d) = (c, \sigma)$. The radiance and density values can be integrated along a ray using the approach described by Mildenhall et al. (2020) to produce rendered per-pixel color values. In our experiments, for simplicity, and because view-dependent effects are absent from two of our datasets and insignificant in the third, we omit the view direction inputs from the NeRF network. For volume rendering, we use the hierarchical sampling approach described by Mildenhall et al. (2020), but without allocating separate "coarse" and "fine" networks - instead sampling both coarse and fine values from the same network; specifically, we use 64 fine/importance samples and 128 coarse/uniform samples for each ray, which we found to be the minimal counts required to avoid noticeable artifacts with our data.

Analysis. We report the reconstruction metrics on our test images in Table 3, and Figure 7 compares images rendered using different NeRF architectures. As these numbers show, attention-based decoders consistently achieve the best results for large latent codes, as well as the best performance overall.
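To make the rendering step concrete, the following is a minimal sketch of the standard volume rendering quadrature of Mildenhall et al. (2020), which turns per-sample (c, σ) predictions along a ray into a pixel color. Variable names are our own, and the hierarchical sampling logic is omitted.

```python
import torch

def render_ray(colors, sigmas, deltas):
    # colors: (S, 3) radiance, sigmas: (S,) density, deltas: (S,) segment
    # lengths for S samples along one ray, ordered near-to-far.
    alphas = 1.0 - torch.exp(-sigmas * deltas)          # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)  # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])  # T_i = prod_{j<i}
    weights = alphas * trans
    return (weights[:, None] * colors).sum(dim=0)       # composited RGB
```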
![10_image_0.png](10_image_0.png)

Figure 8: **Light Fields** - Qualitative reconstruction results on SRN Chairs for N = 8192.

| Latent dimension (N)  | 512    | 512   | 512  | 2048   | 2048  | 2048 | 8192   | 8192  | 8192 |
|-----------------------|--------|-------|------|--------|-------|------|--------|-------|------|
| Decoder architecture  | Concat | Hyper | Attn | Concat | Hyper | Attn | Concat | Hyper | Attn |
| HUMBI                 | 25.2   | 29.2  | 29.8 | 24.7   | 29.0  | 29.8 | 24.9   | 29.0  | 29.9 |
| SRN-Chairs            | 17.4   | 20.3  | 20.7 | 17.6   | 19.9  | 21.7 | 18.2   | 19.9  | 22.2 |
| SRN-Cars              | 23.8   | 26.8  | 27.5 | 23.8   | 26.3  | 28.1 | 24.2   | 26.3  | 28.4 |
| Decoder params (×10⁶) | 0.6    | 2.2   | 2.6  | 1.1    | 2.3   | 2.6  | 2.7    | 2.9   | 2.6  |

Table 4: **Light Fields –** We compare the performance of different conditioning approaches on the light field novel view synthesis task, using the reconstruction PSNR metric on the HUMBI, SRN-Chairs and SRN-Cars datasets. Only attention conditioning shows consistent improvement in PSNR on all datasets as latent size is increased.

## 4.2.2 Light Field Networks

Neural light fields define a function from ray to radiance: $f_\theta : \mathbb{R}^5 \rightarrow \mathbb{R}^3$, $x \mapsto f_\theta(x) = \hat{c}$. In this formulation, the camera parameters map pixel coordinates directly to rays, which the light field network maps directly to pixel color values. For our experiments we employ Plücker coordinates (Sitzmann et al., 2021), which assign 5D ray coordinates to points on a 4D manifold, such that co-linear rays with the same direction map to the same point. This simplifies the mapping that the network needs to learn, though at the cost of requiring view rays to originate from a point outside the bounds of the scene.

Analysis. We report the reconstruction metrics on test images in Table 4, and qualitative results in Figure 8. In all cases, the transformer decoder achieves the highest reconstruction quality, which either increases with latent code size (SRN chairs and cars) or remains approximately constant (HUMBI) - indicating that either the data manifold dimensionality has been reached, or that the manifold is too complex for the network to model with the capacity it has. Hyper-networks remain approximately flat or decrease slightly with increasing latent size, indicating a poor ability to integrate large conditioning vectors. Concatenation provides the worst performance on average, and shows inconsistent behaviour with respect to latent size.
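As a reference for this ray parameterization, here is a small helper of our own sketching the Plücker embedding: with a normalized direction, the moment vector m = o × d is independent of where the origin lies along the ray, so co-linear rays with the same direction coincide.

```python
import numpy as np

def plucker(origin, direction):
    # Ray through `origin` with direction `direction` -> embedding (d, m).
    d = direction / np.linalg.norm(direction)
    m = np.cross(origin, d)  # moment vector, orthogonal to d
    return np.concatenate([d, m])

# Two origins on the same ray map to identical coordinates:
o = np.array([1.0, 2.0, 3.0]); d = np.array([0.0, 0.0, 1.0])
assert np.allclose(plucker(o, d), plucker(o + 2.5 * d, d))
```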
## 4.3 Ablation Of Concatenation Methods

| Variants                    | PSNR  |
|-----------------------------|-------|
| 8-way split                 | 24.46 |
| 4-way split                 | 23.27 |
| 2-way split                 | 22.40 |
| 1 skip connection           | 22.12 |
| no split or skip connection | 20.53 |

As our primary goal is to compare attention and concatenation as conditioning methods, it is important to verify that the approach we take to concatenating large latent codes into an MLP network is optimal. To this end, we perform an ablation study of different concatenation schemes, using the largest latent code size of 8192 for one of our main experiment setups: CelebA-HQ auto-encoding. The first scheme we test is simply concatenating the entire latent code to the first layer input. Next, we try adding a skip connection from the input of this first layer to a layer half-way through the network. For the rest, we try splitting the latent code evenly into 2, 4, or 8 parts, and concatenating these sub-vectors to different layers spread evenly through the network. Note that the number of splits is limited by the depth of the network (8 layers in all our experiments). The result of this study is shown in the inset table. Splitting the latent code into 8 parts (as we do in our main experiments) and concatenating them to individual layers provides the best results. This would suggest that the linear dimensionality reduction effect described in Section 3.3 is indeed the limiting factor for concatenation efficiency. As such, we split all of our latent codes for concatenation-based decoders into either 8 sub-codes with widths larger than the hidden layer width of 256, or into $\lceil N/256 \rceil$ sub-codes when the latent dimension N is less than 2048.

## 5 Conclusion

In this paper we have provided an analysis of performance for some common conditioning architectures for neural fields. We found a strong trend in favor of attention-based architectures being the most effective option, particularly for cases where the conditioning signal is very high-dimensional. This result is valuable for making decisions about architecture design for methods which require high-dimensional conditioning to model complex data distributions. By sharing the results of these *very* computationally expensive experiments, we hope to reduce the cost burden for future work in this area.

Limitations (of our analysis). For architectures like hyper-networks which split computation into instance-level and sample-level stages, there is an inherent trade-off between the efficiency of the two stages. For applications where new instances are seen infrequently, it can be advantageous to save resources by having a more expensive instance-level computation and a cheaper sample-level one. If the application requires many invocations of the instance-level network however, e.g. because training batches require many hundreds or thousands of instances as in Rebain et al. (2022); Sajjadi et al. (2022), such a configuration may be prohibitively inefficient. As detailed in Section A.1, we balance the hyper-parameter choices in our networks such that each is comparable in efficiency when considering *both* the instance-level and sample-level computations. This provides an analysis that is widely applicable to different applications, but may overlook some application-specific routes for optimization that take advantage of caching the results of the instance-level network.

Broader impact. Our analysis is fundamental research having broad implications for a wide range of ML applications, and thus it is difficult, if not impossible, to anticipate all specific consequences of our research. We do however note a few potential areas. First, we note that conditioned neural fields could be used for generative methods, and thus come with all the consequent baggage of synthetic media generation. We feel these issues have been widely discussed in the community at large, with our analysis not adding anything new other than the potential for incrementally higher quality synthetic data. Secondly, we note the large amount of energy required to run these experiments. We undertook this investigation as a service to the community so that others don't have to, and make our source code available for verifiability. Our results should allow others to make informed design decisions without needing to repeat our experiments.

Acknowledgements.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, NSERC Collaborative Research and Development Grant, Google Research (Canada), Digital Research Alliance of Canada, and Advanced Research Computing at the University of British Columbia. We thank Wei Jiang, Weiwei Sun, Eric Hedlin, Daniel Watson, and David Fleet for their comments. ## References Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In *ICCV*, 2021. Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, and Shira Kritchman. Frequency bias in neural networks for input of non-uniform density. In *ICML*, 2020. Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. Optimizing the latent space of generative networks. *arXiv*, 2017. Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In *CVPR*, 2021. Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In *CVPR*, 2022. Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, L. Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository. *arXiv*, 2015. Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In ECCV, 2022. Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In *CVPR*, 2019. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *NeurIPS*, 2014. David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. In *ICLR*, 2017. Wonbong Jang and Lourdes Agapito. Codenerf: Disentangled neural radiance fields for object categories. In ICCV, 2021. Wei Jiang, Eduard Trulls, Jan Hosang, Andrea Tagliasacchi, and Kwang Moo Yi. COTR: Correspondence Transformer for Matching Across Images. In *ICCV*, 2021. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In *ICLR*, 2018. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv*, 2013. Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. In *NeurIPS*, 2018. Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. In *NeurIPS*, 2020. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *ICCV*, 2015. Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. *ACM TOG*, 2019. Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In CVPR, 2021. Ishit Mehta, Michaël Gharbi, Connelly Barnes, Eli Shechtman, Ravi Ramamoorthi, and Manmohan Chandraker. 
Modulated periodic activations for generalizable local functional representations. In *ICCV*, 2021. Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In *CVPR*, 2019. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In *ECCV*, 2020. Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. *ACM TOG*, 2022. Michael Niemeyer, Lars M. Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In *CVPR*, 2020. Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In *CVPR*, June 2019. Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M. Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. *ACM TOG*, 2021. Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In *AAAI*, 2018. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In *ICML*, 2019. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In *NeurIPS*, 2007. Daniel Rebain, Mark Matthews, Kwang Moo Yi, Dmitry Lagun, and Andrea Tagliasacchi. Lolnerf: Learn from one look. In *CVPR*, 2022. Peiran Ren, Jiaping Wang, Minmin Gong, Stephen Lin, Xin Tong, and Baining Guo. Global illumination with radiance regression functions. *ACM TOG*, 2013. Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, and Andrea Tagliasacchi. Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations. In *CVPR*, 2022. Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In *NeurIPS*, 2019. Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In *NeurIPS*, 2020. Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In *NeurIPS*, 2021. Kenneth O. Stanley. Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines, 2007. Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Light field neural rendering. In CVPR, 2022. Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. In *NeurIPS*, 2020. A. Tewari, O. Fried, J. Thies, V. Sitzmann, S. Lombardi, Z. Xu, T. Simon, M. Nießner, E. Tretschk, L. Liu, B. Mildenhall, P. Srinivasan, R. Pandey, S. Orts-Escolano, S. 
Fanello, M. Guo, G. Wetzstein, J.-Y. Zhu, C. Theobalt, M. Agrawala, D. B Goldman, and M. Zollhöfer. Advances in neural rendering. In *ACM SIGGRAPH Courses*, 2021.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In *NeurIPS*, 2017.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *NeurIPS*, 2017.

Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. *Comput. Graph. Forum*, 2022.

Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. In *NeurIPS*, 2020.

Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, and Hyun Soo Park. Humbi: A large multiview dataset of human body expressions. In *CVPR*, 2020.

## A Appendix

## A.1 Balancing Instance-Level W/ Sample-Level Computation

As previously noted, depending on workload, a conditional network may require more computation at the sample level (e.g. rendering all the pixels in a large image), or at the instance level (e.g. batch sizes of hundreds or thousands of images). This can have a drastic effect on which architecture is more efficient - if most of the computation can be off-loaded from the sample level to the instance level, as in a large hyper-network, then the training or inference process may be much more efficient for cases where only few instances need to be considered at once. We chose to balance our choice of hyper-parameters for the case where such an efficiency gain is limited, as is the case for methods like SRT and LOLNeRF. In practice for our problem domains, this means choosing batch sizes with large numbers of images, with pixels sparsely sampled from each. The most noticeable effect of this choice is that we choose a network width of 64 neurons for our hyper-networks, as this keeps the memory and compute cost close to that of attention and concatenation for our training setup. This results in most of our decoders having similar parameter counts, as the total number of FLOPs is close to proportional to the parameter count when instance-level computations are not heavily re-used.

## A.2 Neural Field Decoder Architecture Details

The decoder takes coordinates x as input, along with encoded latents z. All networks use sinusoidal positional encoding to map raw coordinates to high-dimensional vectors before conditioning on them. For the concatenation and hyper-network decoders, the set-latent output of the encoder is flattened to a vector, while the set structure is retained for the cross-attention layers of the attention decoder. The network is supervised using the pixel-wise reconstruction loss of Eq. 2. The concatenation and hyper-network models both consist of 8-layer MLPs in all cases, while the attention models use 5 attention stages with three dense layers after each. With the exception of the hyper-network layers, all MLPs use a network width of 256 neurons. All multi-head attention layers use 16 heads and 256-dimensional keys unless otherwise specified.
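As an illustration of the sinusoidal positional encoding mentioned above, here is a minimal NumPy sketch (our own helper; the number of frequency bands is an assumption, not the paper's setting):

```python
import numpy as np

def positional_encoding(x, num_bands=10):
    # x: (..., D) raw coordinates -> (..., 2 * num_bands * D) encoded features.
    freqs = 2.0 ** np.arange(num_bands) * np.pi   # geometric frequency ladder
    angles = x[..., None] * freqs                 # (..., D, num_bands)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)
```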
## A.3 Image Auto-Encoding Architecture Details

We use an SRT-style (Sajjadi et al., 2022) network to encode an input image into patch-wise latent vectors; a CNN feature extractor maps the image to an M×M×D feature map, from which P patches are extracted and re-shaped to P×F latent tokens. These tokens are further processed with a single multi-head self-attention layer, resulting in P×G latent tokens z, such that P·G = N (the latent dimension). In all cases, the encoder architecture is identical for each decoder architecture used. The decoder takes pixel coordinates as input and directly outputs pixel color. In order to vary the latent dimension output from the encoder (equivalent here to the bottleneck width of the auto-encoder), the width and height of patches extracted from the last layer of the CNN before the self-attention layer is varied, such that the number of latent tokens varies while their individual dimensionality remains constant.

We also configure the attention decoder for this application to use *single-head* attention (and correspondingly reduce the key dimension to 64), rather than multi-head as with the others, as we found it to provide slightly superior performance. This is likely due to the one-to-one nature of the mapping from output pixel coordinates to relevant information in the input features. For training auto-encoders, we use a batch size of 128 images with 512 pixels per image.

## A.4 NeRF And Light Field Auto-Decoder Architecture Details

We directly store the latent code/set-latent values in a learnable latent table (initialized to zero), which is trained alongside the decoder network. For both hyper-network and attention models we include a learnable 64-dimensional embedding for each latent token, which is initialized to a random value sampled from a normal distribution - we found this to be necessary, as the initially zero-valued codes would otherwise cause the training of the layers conditioned only on latents to collapse. This embedding, unlike the latent codes, is the same for each object in the dataset, so the non-zero initialization is not harmful to convergence. For concatenation, the latents are directly combined with previous layer activations, so we did not find additional embeddings to be necessary.

The NeRF decoder takes in 3D world-space sample positions as inputs, and outputs radiance and density values which are used to predict color for each ray according to the volume rendering equation described by Mildenhall et al. (2020). The light field decoder takes ray coordinates, mapped into Plücker coordinates, as input, and directly outputs pixel color for the ray. For training auto-decoders, we use a batch size of 128 images with 64 pixels per image.

## A.5 Neural Architecture Details

In this section we give additional details on our neural network architecture choices. All MLPs are relu-activated, and use the original layer normalization strategy of the method each architecture is based on:

- Concatenation: none (Rebain et al., 2022)
- Hyper-networks: at each layer (Sitzmann et al., 2019)
- Attention: after skip connections (Sajjadi et al., 2022)

The network stage/layer count hyper-parameters were chosen to provide as close as possible to equal parameter counts across all experiments for the largest codes, without changing the architecture for each experiment, and while minimizing the number of hyper-parameters that differ from those used in previous works.
Finally, we give details specific to each conditioning method.

Concatenation. The concatenation decoder consists of an 8-layer MLP f(x), which maps positional-encoded coordinates x to the task-specific output. Conditioning is achieved by splitting the latent code z into 8 equally sized sub-codes and concatenating these to the input of each layer.

Hyper-network. The hyper-network decoder consists of the hyper-network H(z), which maps latent codes z to the weights of an 8-layer MLP f(x), which maps positional-encoded coordinates x to the task-specific output. More specifically, the hyper-network first splits the latent code into 8 components, as in the concatenation-based decoder; each of these sub-codes is then mapped by a 2-layer MLP to a weight and bias tensor to form one of the layers in f(x).

Attention. The attention decoder is a function g(x) which maps positional-encoded coordinates x to the task-specific output. g(x) consists of 5 network stages, each of which has a cross-attention layer followed by a 3-layer MLP, both of which are bridged by a skip-connection as in Scene Representation Transformer (Sajjadi et al., 2022). The cross-attention layer uses the same standard multi-head formulation as SRT, and maps latent tokens to key and value vectors, and the input network activations from previous stages to query vectors. 16 heads are used for NeRF and light fields, and one head is used for image auto-encoding (as this gave slightly better results).
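To make the hyper-network decoder above concrete, here is a minimal PyTorch sketch in which each latent sub-code is regressed by a 2-layer MLP into the weight and bias of one primary-network layer. All names and the illustrative sizes (e.g., the 128-wide hyper-network hidden layer) are our assumptions; the primary-network width of 64 follows the choice discussed in Section A.1.

```python
import torch
import torch.nn as nn

class HyperNetDecoder(nn.Module):
    # Each of the 8 latent sub-codes is regressed into the weight matrix and
    # bias vector of one layer of the primary MLP f(x).
    def __init__(self, coord_dim=2, latent_dim=512, hidden=64, depth=8, out_dim=3):
        super().__init__()
        self.dims = [coord_dim] + [hidden] * (depth - 1) + [out_dim]
        sub = latent_dim // depth
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(sub, 128), nn.ReLU(),
                nn.Linear(128, (din + 1) * dout),  # +1 row holds the bias
            )
            for din, dout in zip(self.dims[:-1], self.dims[1:])
        )

    def forward(self, x, z):
        # x: (B, coord_dim) encoded coordinates; z: (B, latent_dim) latent code.
        h = x
        subs = z.chunk(len(self.heads), dim=-1)
        for i, (head, s) in enumerate(zip(self.heads, subs)):
            din, dout = self.dims[i], self.dims[i + 1]
            wb = head(s).view(-1, din + 1, dout)   # per-instance weight + bias
            h = torch.baddbmm(wb[:, -1:, :], h.unsqueeze(1), wb[:, :-1, :]).squeeze(1)
            if i < len(self.heads) - 1:
                h = torch.relu(h)
        return h
```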
Review 1:
Summary:
The authors conduct a large-scale study on the architectural bias used to condition neural fields. They compare concatenation, HyperNetworks, and an attention-based conditioning mechanism on different datasets, and observe an advantage for the attention-based mechanism across most setups.
Strengths and Weaknesses:
In general, I find the details given in the manuscript not sufficient to 1) properly understand what they did and 2) how the experiments were conducted. This makes it impossible to evaluate the study; see below.
Requested Changes:
Please provide a clear description of the architectures, their parameter counts, their training time, and the hyperparameters used. Although you try to describe the different architectures in words, I am not sure what you do until I see a proper mathematical description of how exactly the 3 versions, especially the HyperNetwork and the attention version, are constructed, and what happens if you change e.g. the latent dimension.
In general, I am not convinced about the approach put forward in this paper. I would have liked a much more fine-grained study + ablations on some moderate dataset, e.g. the MNIST one studied here, but where the architecture search is much more thorough. What roles do initialisation, regularisation (strength), learning rates, etc. play? Also, architecture-wise, the HyperNetwork can also be used with sharing, meaning that the HyperNetwork gets as input a "layer token" and a conditioning z token, producing the weights of every single layer (see e.g. https://arxiv.org/abs/1906.00695).
In the ablation that you did, would even more splits for the concatenation method then lead to further improved performance, better than the attention mechanism?
Also, please provide ablations for the attention mechanism. When does it fail, what makes it work? It seems that the performance of methods often decreases with higher dimensionality; can you try different initialisation schemes to prevent that from happening? Or is this related to regularisation, and one wants a lower-dimensional latent space? Why does this not apply to the attention-based mechanism?
More technical issues:
1. "If we wish to perfectly model some distribution of data, then we must use a latent dimension at least equal to that of the manifold on which that distribution lies." I would be very careful with these kinds of statements. I agree that the latent dimension plays a role in practice when modeling a mapping from Z to some image distribution, but theoretically this is a much more delicate issue, since one can densely sample from a high-probability distribution through a mapping coming from a one-dimensional (e.g. uniform) distribution, see e.g. https://arxiv.org/abs/2006.16664. Since this is one of the major claims of the paper, I would study these issues in detail.
2. "For example, choosing ρ(·) = ∥·∥₂² results in the codes Z being distributed as a Gaussian". I think this is not true: as far as I understand, you are not modeling the posterior or a distribution, but a set of parameters for which you penalize large values. Why would these "samples" converge to ones from a Gaussian?
Broader Impact Concerns:
Does not apply, I think.
==================================================
Review 2:
Summary:
This paper considers the topic of conditioning neural fields by a latent vector, and aims to find the best way to infuse that information into the network. The authors consider several popular choices, and run extensive experiments, comparing them across various datasets and latent vector sizes.
They conclude that attention over a set of latent subvectors works best, especially at scale.
Strengths and Weaknesses:
=== Strengths ===
- (S1): The authors pose a very concrete question, and do a thorough job at trying to answer it. They experiment across various datasets with differing complexities, ranging from toy to large scale. In the toy experiment with MNIST, the authors reduce the intrinsic dimensionality while retaining the image size by duplicating digits, which allows them to decouple these two factors; I found this idea quite clever.
- (S2): The writing is excellent: it reads very clearly, there are no typos/errors, and overall it feels quite polished. The basics of what neural fields are and how they are used are well-explained. I also liked the inclusion of Figure 2 to visualize the dependencies between the different works.
=== Weaknesses ===
I did not observe significant weaknesses, given the type of the work. While clearly there is no novelty presented in the paper, this is expected from a "large scale evaluation"-style work, and it doesn't make it any less useful. I have some clarification questions (rather minor) which I defer to the "Questions" section below.
=== Questions ===
- (Q1): In the attention conditioning, the network queries the set-latent with a query based on the location x. Is it fair to assume that each of the vectors in the set-latent will possibly end up corresponding to a part of the object that is a coherent slice of the space? In other words, does it make sense to view this as a relaxed/learned version of having a different latent vector per every voxel of some predefined grid?
- (Q2): At the bottom of page 6, the authors talk about the complexity of predicting the set-latent and how that compares to the complexity of predicting the network weights using a hypernetwork. Is this discussion specific to the auto-encoder setup (as in the auto-decoder the latent would not have to be predicted)?
=== Nitpicks ===
- In "Our analysis is fundamental research (...), and thus difficult, if not impossible, to anticipate all specific consequences of our research", shouldn't there be a verb somewhere near "difficult" e.g. "it is difficult"?
Requested Changes:
I think the work is already in good shape, and I don't have significant changes to request.
Broader Impact Concerns:
I see no unaddressed ethical concerns, and the discussion the authors include in their paper is sufficient.
==================================================
Review 3:
Summary:
This paper studies the design choice of neural field conditioning. The authors mainly consider 'concatenation', 'hyper-network', and 'attention' approaches on 2D, 3D, and 4D signals, where the backbone neural field is based on MLP architectures with positional encoding. The authors empirically demonstrate that 'attention' achieves the best performance across modalities, and that scaling the 'attention' strategy, i.e., enlarging the conditioning dimension, also shows consistent improvement.
Strengths and Weaknesses:
**Strengths**
The observational study is very meaningful, as the current neural field literature does not have a consensus on which design choice to use.
Experiments are considered over various neural field applications, e.g., images, 3D radiance fields, and light fields.
------------
**Weakness**
The writing of the methodology part is slightly confusing.
- Except for "Concatenation", the mathematical definitions are not clear (as concatenation is somewhat trivial).
- Can the authors provide explicit mathematical definitions of "Hypernetwork" and "Attention"?

While the finding is very interesting, it is slightly unclear why "Attention" is the best choice.
- Can the authors provide some intuition about the observation?

While the authors have roughly categorized the design choices (into three), there exist various methods that are known to be effective. Considering more design choices will definitely give more intuition to the readers.
- Functa [1]: only adds a bias term on a globally shared network
- CIPS [2]: similar to Pi-GAN
- Modulated SIREN [3]

More extensive experiments across neural field architectures
- For instance, does this observation hold for various neural field architectures such as SIREN [4] and FFN [5]?

------------
**Questions**\
Have the authors tried such observational studies on different modalities, such as audio (t) or video (x, y, t)?

------------
**Reference**\
[1] Dupont et al., From data to functa: Your data point is a function and you can treat it like one, ICML 2022\
[2] Anokhin et al., Image Generators with Conditionally-Independent Pixel Synthesis, CVPR 2021\
[3] Mehta et al., Modulated Periodic Activations for Generalizable Local Functional Representations, ICCV 2021\
[4] Sitzmann et al., Implicit Neural Representations with Periodic Activation Functions, NeurIPS 2020\
[5] Tancik et al., Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains, NeurIPS 2020

Requested Changes:
**Mathematical Definition**: Explicit mathematical definitions of the design choices (in the main text) are encouraged.
**Intuition**: Providing some intuition about the observation, i.e., why "Attention" is better than the others, would be great.
**More verification** (optional): Since this paper is an observational study, more extensive verification is encouraged to support the claim. While this is optional, more extensive experiments in the following areas would truly make the paper stronger.
- (i) hyper-network design choices [1,2,3]
- (ii) neural field architectures [4,5]
- (iii) more modalities, e.g., audio, video

Broader Impact Concerns:
N/A
==================================================
Metareview:
Recommendation: Accept as is
Comment:
This paper studies the design choice of neural field conditioning, and considers 'concatenation', 'hyper-network', and 'attention' approaches on 2D, 3D, and 4D signals, where the backbone neural field is based on MLP architectures. The authors empirically demonstrate that 'attention' achieves the best performance. It received three reviews. After rebuttal, two reviewers recommended Accept, and one reviewer recommended Leaning Accept. On one hand, all the reviewers agree that the authors have posed a very concrete question, the empirical study is meaningful, and the experiments are comprehensive. The paper is also well written. On the other hand, two reviewers mentioned that a clearer definition of "HyperNet" and "Attention" is needed. Reviewer HshV suggested extra experiments to further strengthen the paper. In the paper revision, the authors have provided clarification on additional architecture details. Overall, all the reviewers are happy with the paper, and the Action Editor recommends acceptance of the paper.
==================================================
# DisCo: Improving Compositional Generalization In Visual Reasoning Through Distribution Coverage

Joy Hsu *joycj@stanford.edu* Department of Computer Science, Stanford University

Jiayuan Mao *jiayuanm@mit.edu* Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Jiajun Wu *jiajunwu@cs.stanford.edu* Department of Computer Science, Stanford University

Reviewed on OpenReview: *https://openreview.net/forum?id=EgHnKOLaKW*

## Abstract

We present DisCo, a learning paradigm for improving compositional generalization of visual reasoning models by leveraging *unlabeled, out-of-distribution* images from the test distribution. DisCo has two components. The first is an iterative pseudo-labeling framework with an entropy measure, which effectively labels images of novel attribute compositions paired with randomly sampled questions. The second is a distribution coverage metric, serving as a model selection strategy that approximates generalization capability to test examples drawn from a different attribute-combination distribution than the train set, without the use of labeled data from the test distribution. Both components are built on strong empirical evidence of the correlation between the chosen metric and model generalization, and improve distribution coverage on unlabeled images. We apply DisCo to visual question answering, with three backbone networks (FiLM, TbD-net, and the Neuro-Symbolic Concept Learner), and demonstrate that it consistently enhances performance on a variety of compositional generalization tasks with varying levels of train data bias.

## 1 Introduction

A long-standing goal of visual reasoning is to build machines that can respond to queries about images in a flexible and general way, as humans do. To achieve this, machines must contend with the combinatorial complexity of natural images and queries: a scene has multiple objects, each object has a collection of attributes, and objects form various spatial and functional relationships. The combinatorial explosion of image spaces, together with practical data limitations in downstream tasks, makes many learning problems ill-posed (Bienenstock et al., 1996; Lake et al., 2017). In this paper, we focus on compositional generalization to novel combinations of object attributes, generalizing from the reasoning of *blue cubes* and *red cylinders* to that of *red cubes*. This is an important desideratum for machine learning systems: it is impossible for any dataset to include all possible combinations of object attributes for model training.

![0_image_0.png](0_image_0.png)

Figure 1: The base VQA model in its original training paradigm is trained on labeled data, while DisCo leverages *unlabeled, out-of-distribution* images from the test distribution to improve compositional generalization performance.

We address this problem by introducing DisCo, a learning paradigm that leverages *unlabeled, out-of-distribution* images from the test distribution to help visual reasoning systems better generalize compositionally (see Figure 1). Concretely, we focus on the task of visual question answering (VQA), though our framework is model-agnostic and can be used for a variety of vision domains that have combinatorial structures, such as in the tasks of referring expression comprehension, grounded instructions, and robotic manipulation (Achlioptas et al., 2020; Shridhar et al., 2020; Chevalier-Boisvert et al., 2019; Shridhar et al., 2022).
Given a labeled train set of image, question, and answer triplets, and unlabeled images from the test distribution that contains novel attribute combinations, DisCo bootstraps a visual reasoning model by iteratively mining data instances derived from the unlabeled dataset. Starting from a base VQA model trained on a labeled, possibly biased dataset, our framework couples unlabeled images that are out-of-distribution with randomly sampled questions, and discovers pairs that are answerable. These newly-created data points are trained with equal weighting as the original labeled data points, and bootstrap learning: the model gradually labels more and more new image-question pairs, increasing *distribution coverage* on out-of-distribution image sets that contain novel attribute compositions, with individual attributes seen with labeled data but under different combinations. Pseudo-labeling is particularly difficult in the visual question answering setting, as given an unlabeled question-image pair, 1) there is a high probability of *presupposition* adherence failure (e.g., the question asks about the color of the cube in the image, but there is no cube), and 2) the current model may not have the capability to reason about the out-of-distribution image correctly. These two failure cases make pseudo-labeling images from the test distribution especially noisy. The effectiveness of our iterative learning paradigm is hence based on the empirical insight that, for a pretrained visual reasoning model, the *entropy* of its predicted answer distribution correlates strongly with its accuracy on images with novel attribute compositions. This entropy metric approximates both presupposition adherence as well as compositional generalization accuracy, thus is crucial to DisCo choosing unlabeled samples to be pseudo-labeled. Moreover, in the compositional generalization setting, model tuning and model selection are challenging, as there is limited access to labeled data in the test distribution. Validation sets are in the same distribution as the train set, and hence unable to approximate test set performance, leaving methods unable to select for model checkpoints that best generalize to unseen data. To address this issue, we propose a distribution coverage metric, which computes the percentage of unlabeled, out-of-distribution images drawn from the test distribution that can be answered confidently by the current VQA model. This distribution coverage measure well approximates model accuracy on generalization test splits with unseen attribute compositions, allowing us to effectively tune and select models without labeled data points from the test distribution. We validate the effectiveness of our approach on biased versions of the CLEVR dataset (Johnson et al., 2017) created for compositional generalization in visual reasoning. Specifically, in addition to the original CLEVR CoGenT dataset, we also construct datasets that contain questions with referred objects as well as one-hop relational questions, and demonstrate generalization improvement with DisCo compared to base VQA models with their original training paradigm. We demonstrate that DisCo consistently helps three VQA models, FiLM (Perez et al., 2018), TbD-net (Mascharka et al., 2018), and the Neuro-Symbolic Concept Learner (NSCL; Mao et al., 2019), perform better on a test set with a distribution shift of attribute combinations from the train set. 
We also show that our framework outperforms other methods that leverage unlabeled data, including generative modeling and contrastive learning, across different levels of bias in the training data. Our model exhibits an advantage in generalization to novel attribute combinations, and is a step towards contending with the combinatorial complexity of the visual world.

## 2 Related Work

Compositional generalization. Prior work on improving the compositional generalization of visual reasoning systems generally falls into two groups. The first line of research leverages explicit structures of compositional concepts, such as visual grammars (Zhu & Mumford, 2007; Chen et al., 2007), compositional embeddings (Misra et al., 2017), neural operators (Nagarajan & Grauman, 2018), neural module networks (Purushwalkam et al., 2019), and causal graphs (Niu et al., 2021; Yang et al., 2021b). The second line of research introduces additional supervision, such as the taxonomy of concepts, to improve model generalization (Han et al., 2019). We present a novel perspective on compositional generalization, which is to leverage unlabeled, out-of-distribution data from the test distribution.

Self-supervised learning for visual reasoning. Our iterative pseudo-labeling framework is also related to prior work on self-supervised learning for visual reasoning. Specifically, Kim et al. (2021) and Lin & Parikh (2017) use active learning to select image and question pairs to be labeled. Askarian et al. (2021) and Li et al. (2020) use curriculum learning to prioritize training data for visual reasoning models. Methods such as Kim et al. (2019); Zhu et al. (2020); Liu et al. (2018) apply adversarial self-supervised learning to overcome language priors in vision-language models. Although some earlier work has explored entropy-based measures similar to ours, our work differs from them in two key aspects. First, our method studies a different setting, where no additional labels will be requested on the unlabeled dataset. Second, in contrast to data efficiency or task performance, our work shows that an entropy-based measure is especially beneficial for the compositional generalization capability of models on a test set with novel attribute combinations.

Semi-supervised learning. DisCo generally falls into the category of semi-supervised learning, whose idea is to leverage unlabeled data to improve model performance. Specifically, early work on pseudo-labeling (Nigam & Ghani, 2000; Grandvalet & Bengio, 2004) has drawn important theoretical connections between entropy-based self-training and expectation maximization algorithms. Prior work has also used signals such as high values in density-based clustering (Choi et al., 2019) and label propagation (Iscen et al., 2019) to choose and infer pseudo-labeled examples, and has introduced regularization techniques to learn better between-class separability (Shi et al., 2018). A related work, Rizve et al. (2021), proposes choosing pseudo-labels based on confidence and uncertainty of network predictions for classification, while DisCo tackles the complex VQA task with out-of-distribution images from the specified test distribution that contain unseen attribute combinations. We refer readers to Van Engelen & Hoos (2020) and Yang et al. (2021a) as two recent comprehensive surveys. Our paper uses a similar broader framework, and focuses on the empirical evidence in visual reasoning domains for using unlabeled images to generalize in combinatorially complex settings.
## 3 Methods

We present DisCo as a method to leverage *unlabeled, out-of-distribution* image data from the test distribution for compositional generalization. At a high level, DisCo is a pseudo-labeling framework applied to VQA models that iteratively learns to label images farther from the train data distribution. Intuitively, DisCo chooses unlabeled images from a test distribution of novel attribute compositions that the current reasoning model can effectively answer questions about. As training progresses, DisCo selects more difficult question-image pairs, which increases distribution coverage on the unlabeled image set. We show that on the CLEVR dataset, after training, models can reason about simulated objects in scenes with new attribute compositions not seen with labels during training.

In this section, we first describe our problem formulation (Section 3.1) and broader learning paradigm with unlabeled images (Section 3.2). We then discuss critical components in the framework. We describe image proposal methods for efficiently generating answerable images in DisCo (Section 3.3). Then, we propose an entropy-based threshold as a measure for accurate pseudo-labeling of out-of-distribution images from the test distribution (Section 3.4). Finally, we present a distribution coverage measure as an effective model selection strategy for compositional generalization (Section 3.5).

## 3.1 Problem Formulation

In this paper, we focus on improving the object-level compositional generalization of visual reasoning models. Intuitively, objects in images are associated with many concepts, such as color, shape, and material. During training, the model may only see a finite number of possible concept combinations for objects. The goal of object-level compositional generalization is to let a model trained with limited labeled data generalize to novel object concept compositions. Our proposal is to leverage unlabeled, out-of-distribution images from the test distribution, which are significantly more available than human-annotated data, to improve the performance of visual reasoning models in combinatorially complex domains.

Algorithm 1 The DisCo framework described in Section 3.2.

Input: Dtrain: the labeled train dataset; f: image proposal function derived from the unlabeled test dataset Dtest; M(v, q; θ): visual question answering model; n: entropy threshold.
Output: θ, the learned parameters of M(v, q; θ).

1: Pretrain M with Dtrain.
2: Initialize distribution coverage tracker c.
3: for i ← 0, 1, 2, . . . do
4:   (v+, q+, a+) ∼ Dtrain
5:   v− ← f() ▷ Image proposal function Dtest.sample() or GAN.generate(); see Section 3.3.
6:   p+ ← M(v+, q+; θ) ▷ Retrieve predictions from pretrained M.
7:   p− ← M(v−, q+; θ)
8:   if entropy(p−) < n then ▷ Entropy measure to select pseudo-label targets; see Section 3.4.
9:     Update c with distribution coverage.
10:    Update M with xent(p+, a+).
11:    Update M with xent(p−, arg max(p−)). ▷ Update M with equal weighting of + and −.
12:  else
13:    Reject sample.
14:  end if
15: end for
16: Choose checkpoint of M through c. ▷ Coverage used for model selection; see Section 3.5.

DisCo is trained on a labeled dataset of VQA triplets, with each data point containing a visual scene, question, and answer; we denote this as (v+, q+, a+) ∈ Dtrain. The training dataset only contains a subset of attribute combinations of *colors* and *shapes* (e.g., *blue cubes* but not *red cubes*), while the test dataset contains objects of different color-shape combinations. Our training objective is thus to bootstrap from a VQA model to iteratively improve the distribution coverage of out-of-distribution test examples.
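For concreteness, the following is a minimal PyTorch-style sketch of one iteration of Algorithm 1. The `model`, `propose_image`, and dataset interfaces are placeholders we introduce for illustration, not the authors' implementation; the returned acceptance rate corresponds to the coverage tracker c used for model selection in Section 3.5.

```python
import torch
import torch.nn.functional as F

def disco_step(model, optimizer, train_batch, propose_image, threshold):
    # Pair proposed unlabeled images with questions sampled from the labeled
    # train set, pseudo-label low-entropy predictions, and train with equal
    # weighting on labeled and pseudo-labeled triplets.
    v_pos, q_pos, a_pos = train_batch       # labeled triplet batch from D_train
    v_neg = propose_image()                 # D_test.sample() or GAN.generate()

    logits_neg = model(v_neg, q_pos)
    p_neg = F.softmax(logits_neg, dim=-1)
    entropy = -(p_neg * torch.log(p_neg + 1e-12)).sum(dim=-1)

    accepted = entropy < threshold          # rejected pairs feed the coverage metric
    if accepted.any():
        pseudo_label = p_neg.argmax(dim=-1)  # sharpened self-prediction
        loss = F.cross_entropy(model(v_pos, q_pos), a_pos) \
             + F.cross_entropy(logits_neg[accepted], pseudo_label[accepted])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return accepted.float().mean()          # per-batch distribution coverage
```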
## 3.2 Iterative Pseudo-Labeling

Our learning paradigm has three steps. First, we assume a base visual question answering model M(v, q; θ), and train M to convergence on Dtrain without modifications to the original training procedure. Second, we bootstrap the model on *unlabeled, out-of-distribution* data from the test distribution with a proposed pseudo-labeling framework. Lastly, with the distribution coverage produced by DisCo, we perform model selection for a checkpoint of M that generalizes best to novel attribute compositions. We describe this paradigm in Alg 1 and show the overview in Figure 2.

DisCo utilizes unlabeled images in Dtest through an image proposal function f derived from Dtest (Alg 1, L5). The proposal function can be a random sampler of unlabeled images in Dtest, or a learned generative model, such as a generative adversarial network (GAN; Goodfellow et al., 2014), trained on the unlabeled images in Dtest. Either choice of f yields unlabeled images v− from the test distribution to be used in our framework.

In the iterative pseudo-labeling process, DisCo couples images v− with randomly sampled questions q+ from the labeled train set (Alg 1, L7); let p− denote the answer distribution produced by M, i.e., p− = M(v−, q+; θ). We use an entropy measure to select confident predictions such that question-image pairs satisfy presuppositions and are answerable (Alg 1, L8). The (v−, q+) pair will be pseudo-labeled with arg max(p−), its own sharpened prediction (Alg 1, L11). During training, we keep track of the percentage of pairs (v−, q+) that can be confidently answered by the model (Alg 1, L9). This distribution coverage metric will be used for model tuning and checkpoint selection (Alg 1, L16). At each pseudo-labeling step, given a pseudo-labeled triplet (v−, q+, arg max(p−)) that satisfies the entropy metric, a labeled training triplet (v+, q+, a+) will also be sampled to be trained with equal weighting (Alg 1, L10). This weight balancing allows the model to learn from both image distributions, and acts as a model-correctness regularization.

![4_image_0.png](4_image_0.png)

Figure 2: Overview of the DisCo framework. At each pseudo-labeling step, labeled VQA triplets from the train distribution are used with equal weighting to pseudo-labeled VQA triplets, which contain images from the test distribution of novel attribute compositions.

## 3.3 Image Proposals

DisCo is compatible with various kinds of unlabeled image distributions. In this paper, we focus on two prevalent choices for f: 1) direct sampling from unlabeled images in Dtest, and 2) generation from a generative adversarial network (GAN) trained on unlabeled images in Dtest.

We can directly sample unlabeled images from Dtest and propose each image as a potential pseudo-label target. We show in experiments that this method achieves strong performance on Dtest, as well as on an unseen dataset that has the same distribution as Dtest—both of which are *out-of-distribution* compared to the labeled set Dtrain. A potential approach to better cover the test image distribution is to use generative models. We first train a GAN on unlabeled test images and draw samples from the trained model as image proposals, which essentially acts as data augmentation. In this work, we train an unconditional StyleGAN v2 (Karras et al., 2020) on images from Dtest.
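A minimal sketch of the two proposal functions described above (the interfaces are placeholders for illustration):

```python
import random
import torch

def make_sampler_proposal(unlabeled_images):
    # f as a uniform random sampler over the unlabeled test-distribution images.
    return lambda: random.choice(unlabeled_images)

def make_gan_proposal(gan, latent_dim=512):
    # f as inference from a generative model trained on the unlabeled images;
    # `gan` is a placeholder callable mapping a latent vector to an image.
    return lambda: gan(torch.randn(1, latent_dim))
```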
We also show that both sampling methods achieve improved performance in a setting with an *unknown* test distribution, where unlabeled images are drawn from a distribution that is different from both the training and test set.

## 3.4 Pseudo-Label Selection With Entropy Threshold

Pseudo-labeling out-of-distribution test examples in visual reasoning is especially noisy and challenging. This is not only because unlabeled images contain novel attribute compositions correlated with visual challenges such as occlusion, but also because randomly-sampled questions may contain presuppositions that the images must satisfy. That is, the referred objects in the question may not exist in the image. When the vocabulary of concepts is large, most randomly sampled image and question pairs will be unanswerable. Without an effective measure for filtering the pseudo-labeled training data, the model will be corrupted by a high percentage of inaccurate labels that give a poor signal for generalization.

Recall that there are two types of errors we want to filter out. The first is presupposition failure, where the referred object in q+ does not exist in v−. For example, the question asks "what size is the cylinder", but there is no cylinder in the image. The second is questions that are difficult to answer due to limited training data or other visual challenges. For example, questions regarding a novel color-shape combination of a gray cylinder may be difficult to answer given the partial obstruction of the object. Below, we introduce and validate an entropy-based measure, which effectively filters both types of errors.

In this work, we leverage a strong correlation between the entropy of p ← M(v−, q+; θ) and compositional generalization accuracy. We demonstrate that this metric is an effective measure for pseudo-labeling out-of-distribution images from the test distribution with unseen attribute compositions. The entropy is calculated from the softmax of model logits. Let k denote the number of elements in the output vocabulary; the entropy is computed as $H(X) = -\sum_{i=1}^{k} p(x_i) \log p(x_i)$.

In Figure 3, we empirically verify this relationship between entropy and question-answering accuracy on images with novel attribute combinations. The left graph 3a) shows a cumulative entropy-to-accuracy plot on a log scale, with presupposition adherence accuracy, prediction accuracy, and prediction accuracy given presupposition adherence. Presupposition adherence accuracy (blue) is the percentage of question-image pairs whose unlabeled image satisfies the sampled question's presupposition (i.e., the referred-to object exists). Prediction accuracy (red) is calculated such that the presupposition is satisfied and the predicted answer is correct. Prediction accuracy given presupposition adherence (purple) is the prediction accuracy of only those pairs that adhere to the presupposition. The right graph 3b) depicts a cumulative entropy histogram.

Interestingly, we find that while prediction accuracy for images that satisfy presuppositions (purple) does not decrease significantly as entropy increases, presupposition adherence accuracy (blue) does decrease significantly, i.e., the percentage of question-image pairs that violate presuppositions grows. This suggests that our entropy measure well captures presupposition failure, and is hence effective and necessary for this learning paradigm. The black line on both graphs indicates an approximately 30th-percentile entropy threshold, based on the histogram of entropies, which yields a 0.8207 prediction accuracy on out-of-distribution test images and a 0.8506 presupposition adherence accuracy. Among the question-image pairs that passed the presupposition check at this threshold, 0.9648 were answered correctly.
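A small NumPy sketch of the entropy computation and of the percentile-based threshold discussed above may be useful; `logits_pool` (logits collected over sampled (v−, q+) pairs) and the function names are illustrative placeholders, not names from the released code.

```python
import numpy as np

def prediction_entropy(logits):
    """H(X) = -sum_i p(x_i) log p(x_i), with p = softmax(logits)."""
    z = logits - logits.max()                 # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    p = np.clip(p, 1e-12, 1.0)                # guard against log(0)
    return float(-(p * np.log(p)).sum())

def percentile_threshold(logits_pool, q=30):
    """Set the entropy gate n at the q-th percentile of entropies observed
    over a pool of (unlabeled image, train question) predictions."""
    ents = np.array([prediction_entropy(l) for l in logits_pool])
    return float(np.percentile(ents, q))
```

Per the experiments later in the paper, the gate is set at the 30th percentile (corresponding to the black line in Figure 3), lowered to the 10th percentile for the more complex CoGenT dataset.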
![5_image_0.png](5_image_0.png)

Figure 3: Relationship between entropy and compositional generalization accuracy and count. The base VQA model is trained on a biased train dataset; entropy and prediction accuracy are evaluated on unlabeled images sampled from the test dataset, paired with questions from the train set.

## 3.5 Model Selection

For compositional generalization tasks, because we do not have access to ground-truth labels for images from the test distribution, there is no natural criterion for model tuning and model selection. A common practice of previous methods for model selection is the maximization of validation set accuracy; however, the validation set has the same data distribution as the train set, and is thus biased and not a good measure for test distributions.

In this work, we propose a more effective measure by leveraging the unlabeled image set for model tuning and model selection. Specifically, DisCo employs a distribution-coverage-based metric, which does not require any labeled examples from the test distribution. The high-level idea is to maximize the distribution coverage on the unlabeled dataset. Formally, recall that during training, the sampler produces a pair (v−, q+) for pseudo-labeling, where v− is from the unlabeled distribution and q+ is from the train set. Our algorithm keeps track of the percentage of pairs (v−, q+) that are rejected by the entropy thresholding for each model (checkpoint). After training, we select the model with the maximum coverage of the test set, i.e., the model that rejects the fewest pseudo-labeling pairs (v−, q+).

Empirically, we validate that the distribution coverage from our framework well approximates compositional generalization accuracy on the test set, while the standard validation set accuracy does not. In Figure 4, the left plot 4a) depicts the correlation between test set accuracy and validation set accuracy over model checkpoints, while the right plot 4b) depicts the correlation with distribution coverage. The Pearson correlation coefficient between validation and test set accuracy is 0.2565. In this experiment, we can see that validation set accuracies are mostly close to 1.0, while test set accuracies range from 0.90 to 0.98. In comparison, the distribution coverage value has a strong correlation with test performance: the Pearson correlation coefficient is 0.6066, making it a more effective metric for model selection.

![6_image_0.png](6_image_0.png)

Figure 4: Correlation between test set and validation set accuracy (left plot, with a Pearson correlation coefficient of 0.2565) as well as between test set accuracy and our distribution coverage metric (right plot, with a coefficient of 0.6066).

## 4 Experiments

We evaluate DisCo on a set of CLEVR datasets and three visual question-answering models—FiLM (Perez et al., 2018), a representative end-to-end attention-based approach; TbD-net (Mascharka et al., 2018), a state-of-the-art neural module network-based approach; and NS-CL (Mao et al., 2019), a neuro-symbolic and object-centric approach.
DisCo considerably improves the compositional generalization performance of the base models compared to their original training paradigm or other semi-supervised learning approaches. Specifically, we compare our framework against two baselines that leverage unlabeled data: variational autoencoders (VAEs; Kingma & Welling, 2014) and SimCLR (Chen et al., 2020). For both baselines, we first train a VAE or a SimCLR model, and use their encoding networks to initialize the feature extractor of the visual reasoning model. Both baselines use the same amount and exact set of unlabeled data as DisCo. We describe our datasets and implementation details in Section 4.1, compare our work against prior work in Section 4.2, provide ablation studies in Section 4.3, and present further analyses in Section 4.4.

Table 1: DisCo performance compared to the original training paradigm and baselines, where 0.5% Ref (unk) is performance on an unknown test distribution, with the unlabeled dataset not drawn solely from the specified test distribution, and 0.5% Ref (unseen) is performance on an unseen test set not exposed during training.

|                | 0.0% Ref | 0.1% Ref | 0.5% Ref | 0.5% Ref (unk) | 0.5% Ref (unseen) | 0.5% OneHop | 0.5% CoGenT |
|----------------|----------|----------|----------|----------------|-------------------|-------------|-------------|
| FiLM           | 0.7589   | 0.7931   | 0.9265   | 0.9265         | 0.9251            | 0.9270      | 0.7859      |
| FiLM + vae     | 0.7500   | 0.7993   | 0.9387   | 0.9396         | 0.9362            | 0.9228      | 0.7868      |
| FiLM + simclr  | 0.7520   | 0.8036   | 0.9288   | 0.9255         | 0.9257            | 0.9261      | 0.7926      |
| FiLM + DisCo-S | 0.7582   | 0.8363   | 0.9621   | 0.9609         | 0.9616            | 0.9469      | 0.8004      |
| FiLM + DisCo-G | 0.7760   | 0.8191   | 0.9545   | 0.9524         | 0.9510            | 0.9311      | 0.7979      |

Table 2: Comparison of DisCo with baselines on the TbD-net model. Performance reported on the original, unknown, and unseen test sets of 0.5% Ref.

|                | 0.5% Ref | 0.5% Ref (unk) | 0.5% Ref (unseen) |
|----------------|----------|----------------|-------------------|
| TbD            | 0.8993   | 0.8993         | 0.9027            |
| TbD + vae      | 0.9018   | 0.9027         | 0.9025            |
| TbD + simclr   | 0.9073   | 0.9054         | 0.9077            |
| TbD + DisCo-S  | 0.9206   | 0.9205         | 0.9197            |
| TbD + DisCo-G  | 0.9189   | 0.9156         | 0.9145            |

Table 3: Comparison of DisCo with baselines on the NS-CL model. Performance reported on the original, unknown, and unseen test sets of 0.5% Ref.

|                 | 0.5% Ref | 0.5% Ref (unk) | 0.5% Ref (unseen) |
|-----------------|----------|----------------|-------------------|
| NS-CL           | 0.7622   | 0.7622         | 0.7633            |
| NS-CL + vae     | 0.7572   | 0.7583         | 0.7589            |
| NS-CL + simclr  | 0.7739   | 0.7727         | 0.7758            |
| NS-CL + DisCo-S | 0.8024   | 0.8016         | 0.8027            |
| NS-CL + DisCo-G | 0.7820   | 0.7798         | 0.7793            |

## 4.1 Datasets & Implementation Details

In addition to the original CLEVR compositional generalization (CoGenT) dataset1 (Johnson et al., 2017) (released under the CC BY 4.0 license), we also report results on multiple CoGen datasets based on CLEVR. Specifically, we generate images with two to three objects per image, with a CoGen split following that of the original CLEVR dataset. In this setup, there are two sets of colors, with the first as [*gray, blue, brown, yellow*], and the second as [*red, green, purple, cyan*]. CoGen split A contains cubes in the first set of colors, and cylinders in the second. CoGen split B is reversed in the attribute combinations. Both CoGen split A and B contain spheres of all eight colors.

1We use the term CoGenT to specifically refer to the original compositional generalization dataset introduced in Johnson et al. (2017), and use the term CoGen to generally refer to dataset splits (e.g., CoGen split A, CoGen split B) generated for testing compositional generalization.
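For concreteness, the color-shape constraints of the two CoGen splits described above can be expressed as a small predicate. This is an illustrative sketch of the split rule only, not code from the released dataset generator.

```python
FIRST_COLORS = {"gray", "blue", "brown", "yellow"}
SECOND_COLORS = {"red", "green", "purple", "cyan"}

def in_split(shape, color, split="A"):
    """Return True if a (shape, color) object may appear in the given CoGen split.

    Split A: cubes take the first color set and cylinders the second;
    split B reverses the assignment; spheres appear in all eight colors
    in both splits. Other shapes are not part of this setup.
    """
    if shape == "sphere":
        return True
    cube_colors, cyl_colors = (
        (FIRST_COLORS, SECOND_COLORS) if split == "A"
        else (SECOND_COLORS, FIRST_COLORS)
    )
    return color in (cube_colors if shape == "cube" else cyl_colors)
```

Under this rule, `in_split("cube", "red", "A")` is False while `in_split("cube", "red", "B")` is True, which is exactly the kind of novel attribute composition the biased training setup below withholds.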
We study a biased setup, where Dtrain consists of images in CoGen split A with either zero or only a small percentage p of objects from CoGen split B. We show performance with train datasets of p ∈ {0.0, 0.001, 0.005}, evaluated on a full CoGen split B test set.

Based on the aforementioned image setup, we generate two additional datasets with different types of questions. The first is the referred object dataset (Ref), with questions of the form, "What [attribute] is the [referred object]?" (e.g., "What material is the red object?"). Although this question template is simple, it reflects one of the most important problems in pseudo-labeling for VQA: the satisfaction of question presuppositions. The second dataset consists of "one-hop"2 questions from the CLEVR dataset (*OneHop*), which consists of more complex relational questions, such as "How many red objects are to the left of the sphere?" or "There is a large object to the right of the metal thing; what is its color?". We also evaluate the models on a biased CLEVR CoGenT dataset (*CoGenT*) with the full set of complex objects and questions.

2In *one-hop* questions, the target objects are referred to by relating to another unique object.

We use the official implementations of FiLM, TbD-net, and NS-CL along with their original hyperparameters in our framework. The GAN image proposal function is the unconditional StyleGAN2 (Karras et al., 2020), trained with the Adam optimizer with a learning rate of 0.002. We set our entropy threshold n at the 30th percentile. Empirically, we find this threshold value to be robust to changes of around 10 percentile points in either direction. For the more complex CoGenT dataset, we lowered the entropy threshold to the 10th percentile to account for the naturally lower percentage of presupposition adherence. All models are trained on a single Titan RTX GPU.

## 4.2 Results

We train DisCo with FiLM on five datasets—three Ref datasets with varying bias levels, from 0.0% (fully biased), to 0.1% biased, to 0.5% biased, as well as the 0.5% biased OneHop dataset and the 0.5% biased CoGenT dataset. We show the results of our framework with both image proposal functions (DisCo-S for direct sampling and DisCo-G for GAN generation).

Note that for VAE, SimCLR, and our framework DisCo, the VQA models see unlabeled test set images during pretraining or pseudo-labeling. Thus, for a fair comparison, we additionally report accuracy on a larger test set that contains unseen images following the same distribution as the original test set, for 0.5% biased Ref. In addition, while we mainly focus on experiments that leverage unlabeled data from a known test distribution to improve performance under that specific distribution shift, we also present results in a setting where the test dataset is from an unknown test distribution. In this setting, the unlabeled data is drawn from an expanded set of distributions including the test distribution, but not solely consisting of samples that follow the test distribution. We set up this experiment with the training dataset from CoGen split A, the test dataset from CoGen split B, and the unlabeled dataset from the full CLEVR split consisting of both split A and B with all color-shape attributes. Table 1 shows our results; DisCo outperforms the original training paradigm and both baselines.
In addition, our framework, with both image proposal functions, is robust both to images from an unknown test distribution and to unseen images from a known test distribution. Comparing the two image proposal approaches across experiments, DisCo-S achieves better performance than DisCo-G. We conjecture that this is because, in the sampler method, the model is trained with the exact set of *real*, unlabeled test images that we evaluate with. Moreover, in the fully biased Ref experiments, the GAN image proposal function outperforms the direct sampler. We attribute this to the GAN's generation of more diverse unlabeled images, which can better cover the image space of possible camera angles and lighting conditions, allowing DisCo to improve model performance when there are few signals from the labeled VQA dataset. The fully biased Ref experiments illustrate a difficult and important setting; there are often cases where it is useful to perform well without any labeled data in the compositional generalization setting, e.g., when we only have access to objects with a specified set of attributes during training and do not know the test distribution, or when we need to adapt to new test distributions without labels.

Table 4: Ablation of DisCo with the direct sampler, trained on the 0.5% biased Ref dataset. (PL = pseudo-labeling, EM = entropy measure, CS = coverage selection).

|                | FiLM       |
|----------------|------------|
| base           | 0.9265     |
| base+PL        | 0.6749     |
| base+PL+EM     | 0.9605     |
| base+PL+EM+CS  | **0.9621** |

Integration with TbD-net and NS-CL. DisCo can be integrated with other visual reasoning models, too. In this paper, we implement TbD-net and NS-CL with DisCo to showcase this flexibility. The results on TbD-net are presented on the 0.5% Ref split in Table 2, and results on NS-CL in Table 3. The observations are consistent with the FiLM-based experiments: DisCo similarly outperforms the original training paradigm and baselines by a noticeable margin.

## 4.3 Ablations

In Table 4, we present ablation studies of our framework with a direct sampler as the image proposal function. First, we see that directly adding the pseudo-labeling (PL) module (i.e., pseudo-labeling all images in the test set with the pretrained FiLM model without any thresholding) significantly degrades model accuracy. Second, adding our entropy measure (EM) improves compositional generalization performance. This finding is consistent with our visualizations of the correlation between entropy and accuracy in Fig. 3. Moreover, adding coverage-based selection (CS) further improves test accuracy.

Table 5 zooms in on the candidate model selection methods. Specifically, we compare test accuracy from our coverage-based selection strategy (Coverage) with test accuracy from the standard, validation accuracy-based selection strategy (Val acc). Our framework shows a consistent advantage across the FiLM, TbD-net, and NS-CL VQA models. Note that the distribution coverage metric is not directly applicable to the pretraining-based baselines (VAE and SimCLR) because they do not compute pseudo-labels for test images.

Table 5: Comparison of test accuracy with baselines on model selection strategy, trained on 0.5% Ref.
|                | FiLM Val acc | FiLM Coverage | TbD-net Val acc | TbD-net Coverage | NS-CL Val acc | NS-CL Coverage |
|----------------|--------------|---------------|-----------------|------------------|---------------|----------------|
| Base           | 0.9265       | N/A           | 0.8993          | N/A              | 0.7622        | N/A            |
| Base + vae     | 0.9387       | N/A           | 0.9018          | N/A              | 0.7572        | N/A            |
| Base + simclr  | 0.9288       | N/A           | 0.9073          | N/A              | 0.7739        | N/A            |
| Base + DisCo-S | 0.9605       | 0.9621       | 0.9000          | 0.9206           | 0.7887        | 0.8024         |
| Base + DisCo-G | 0.9508       | 0.9545       | 0.9020          | 0.9189           | 0.7780        | 0.7820         |

## 4.4 Analyses

Qualitative examples. We qualitatively analyze the performance gain brought by our framework. Figure 5 (top row) shows two example images in the CoGen split B test set. We apply the base FiLM model, the FiLM model trained with DisCo-S (direct sampling), and the FiLM model trained with DisCo-G (GAN) on the fully biased Ref dataset and retrieve their predictions.

In the top left example, the test set question asks "What color is the cylinder?" of the *brown cylinder* in the image. Brown cylinders are never seen in the fully biased train set; hence the FiLM model answers incorrectly with *red*, unable to identify the referred object. DisCo with direct sampling also produces a wrong answer, likely due to a lack of signal from the labeled image set to bootstrap visual reasoning. In this case, we see that DisCo with the GAN is able to answer correctly with *brown* and shows better compositional generalization. We conjecture that this is due to the GAN covering a denser image distribution. In the bottom row of Figure 5, we see two similar images—the left image taken from the train set with a *brown cube* and *cyan cylinder*, and the right image generated by our GAN with a *brown cylinder* in a closely aligned scene. We hypothesize that it is diverse image proposals like this that enable visual reasoning models to better learn the concept of a *brown cylinder*. In the top right example of Figure 5, similarly, *purple cubes* are never seen in the fully biased train set, thus the FiLM model answers a completely incorrect color, while DisCo with both image proposal functions is able to generalize to this novel attribute combination.

Limitations. DisCo provides a framework for bootstrapping visual reasoning; however, it relies on the inductive bias of convolutional networks to compositionally generalize. Without priors on visual attributes, given a *red cube* (a novel color-shape combination not seen in the train set), the model could learn that the object is neither *red* nor a *cube*—as *red* could be learned as a color only existing on cylinders and spheres, while *cube* could be learned as a shape that is only paired with colors that are gray, blue, brown, or yellow, as seen in the labeled train set. Assumptions about the inductive biases of visual attributes in convolutional networks allow our framework to learn generalization, and hence DisCo is limited to, and also especially powerful in, the vision domain. We do not report test results on more real-world datasets due to the lack of datasets that evaluate compositional generalization. This leaves several open questions for future work, such as how DisCo may perform with more variability in the dataset, and with a base model that exhibits poor generalization ability and is challenging to bootstrap from. Additionally, exploring thresholding metrics based on variants of the softmax score (Hendrycks & Gimpel, 2017; Liang et al., 2018) or the energy score (Liu et al., 2020) may also be fruitful future work.

![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
![9_image_2.png](9_image_2.png)

Figure 5: Top row: two prediction examples from FiLM, DisCo-S, and DisCo-G on the fully biased Ref dataset.
Bottom row: two closely aligned images from the train set (left) and from the GAN proposal function (right).

## 5 Conclusion

We have presented DisCo, a framework for improving compositional generalization by leveraging *unlabeled, out-of-distribution* images from the test distribution through iterative pseudo-labeling. We studied and proposed the entropy measure as an effective signal for presupposition adherence and pseudo-label accuracy on out-of-distribution test examples, and also introduced the distribution coverage model selection strategy, which well approximates test performance on novel attribute combinations while requiring only unlabeled data. We demonstrated our framework's ability to improve compositional generalization performance, and showed potential for future work to leverage unlabeled images to achieve generalization in evaluation regimes with combinatorial complexity.

## Broader Impact Statement

Our work shows the importance of learning unbiased concepts from datasets with better distribution coverage. We expect minimal negative societal impact. However, when using our framework, it is important to ensure that the unlabeled dataset itself has enough distribution coverage to minimize dataset bias. Our goal is for DisCo to help models perform well in data-limited environments for good.

## Acknowledgments

We thank Eric Chan for providing valuable feedback on the paper. This work is in part supported by the Stanford Institute for Human-Centered AI (HAI), Toyota Research Institute (TRI), NSF RI \#2211258, ONR MURI N00014-22-1-2740, and Analog, JPMC, Meta, Salesforce, and Samsung. JH is supported by the Knight Hennessy fellowship and the NSF Graduate Research Fellowship.

## References

Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3D: Neural Listeners for Fine-grained 3D Object Identification in Real-world Scenes. In *ECCV*, 2020.

Narjes Askarian, Ehsan Abbasnejad, Ingrid Zukerman, Wray Buntine, and Gholamreza Haffari. Curriculum Learning Effectively Improves Low Data VQA. In Annual Workshop of the Australasian Language Technology Association, 2021.

Elie Bienenstock, Stuart Geman, and Daniel Potter. Compositionality, MDL Priors, and Object Recognition. In *NeurIPS*, 1996.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A Simple Framework for Contrastive Learning of Visual Representations. In *ICML*, 2020.

Yuanhao Chen, Long Zhu, Chenxi Lin, Hongjiang Zhang, and Alan L Yuille. Rapid Inference on a Novel and/or Graph for Object Detection, Segmentation and Parsing. In *NeurIPS*, 2007.

Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning. In *ICLR*, 2019.

Jaehoon Choi, Minki Jeong, Taekyung Kim, and Changick Kim. Pseudo-labeling Curriculum for Unsupervised Domain Adaptation. In *BMVC*, 2019.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In *NeurIPS*, 2014.

Yves Grandvalet and Yoshua Bengio. Semi-supervised Learning by Entropy Minimization. In *NeurIPS*, 2004.

Chi Han, Jiayuan Mao, Chuang Gan, Joshua B. Tenenbaum, and Jiajun Wu. Visual Concept Metaconcept Learning. In *NeurIPS*, 2019.

Dan Hendrycks and Kevin Gimpel. A Baseline for Detecting Misclassified and Out-of-distribution Examples in Neural Networks. In *ICLR*, 2017.
Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label Propagation for Deep Semi-Supervised Learning. In *CVPR*, 2019.

Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. In *CVPR*, 2017.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and Improving the Image Quality of StyleGAN. In *CVPR*, 2020.

Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, and In So Kweon. Image Captioning with Very Scarce Supervised Data: Adversarial Semi-Supervised Learning Approach. In *EMNLP*, 2019.

Dong-Jin Kim, Jae Won Cho, Jinsoo Choi, Yunjae Jung, and In So Kweon. Single-Modal Entropy based Active Learning for Visual Question Answering. In *BMVC*, 2021.

Diederik P Kingma and Max Welling. Auto-encoding Variational Bayes. In *ICLR*, 2014.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building Machines that Learn and Think like People. *Behav. Brain Sci.*, 40, 2017.

Qing Li, Siyuan Huang, Yining Hong, and Song-Chun Zhu. A Competence-Aware Curriculum for Visual Concepts Learning via Question Answering. In *ECCV*, 2020.

Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the Reliability of Out-of-distribution Image Detection in Neural Networks. In *ICLR*, 2018.

Xiao Lin and Devi Parikh. Active Learning for Visual Question Answering: An Empirical Study. arXiv:1711.01732, 2017.

Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based Out-of-distribution Detection. In *NeurIPS*, 2020.

Xihui Liu, Hongsheng Li, Jing Shao, Dapeng Chen, and Xiaogang Wang. Show, Tell and Discriminate: Image Captioning by Self-Retrieval with Partially Labeled Data. In *ECCV*, 2018.

Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision. In *ICLR*, 2019.

David Mascharka, Philip Tran, Ryan Soklaski, and Arjun Majumdar. Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning. In *CVPR*, 2018.

Ishan Misra, Abhinav Gupta, and Martial Hebert. From Red Wine to Red Tomato: Composition with Context. In *CVPR*, 2017.

Tushar Nagarajan and Kristen Grauman. Attributes as Operators: Factorizing Unseen Attribute-Object Compositions. In *ECCV*, 2018.

Kamal Nigam and Rayid Ghani. Analyzing the Effectiveness and Applicability of Co-training. In *CIKM*, 2000.

Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. Counterfactual VQA: A Cause-Effect Look at Language Bias. In *CVPR*, 2021.

Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual Reasoning with a General Conditioning Layer. In *AAAI*, 2018.

Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, and Marc'Aurelio Ranzato. Task-Driven Modular Networks for Zero-Shot Compositional Learning. In *ICCV*, 2019.

Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. In Defense of Pseudo-labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning. In *ICLR*, 2021.

Weiwei Shi, Yihong Gong, Chris Ding, Zhiheng Ma, Xiaoyu Tao, and Nanning Zheng. Transductive Semi-Supervised Deep Learning using Min-Max Features. In *ECCV*, 2018.

Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox.
Alfred: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In *CVPR*, 2020.

Mohit Shridhar, Lucas Manuelli, and Dieter Fox. CLIPort: What and Where Pathways for Robotic Manipulation. In *CoRL*, 2022.

Jesper E Van Engelen and Holger H Hoos. A Survey on Semi-Supervised Learning. *MLJ*, 109(2):373–440, 2020.

Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu. A Survey on Deep Semi-Supervised Learning. arXiv:2103.00550, 2021a.

Xu Yang, Hanwang Zhang, and Jianfei Cai. Deconfounded Image Captioning: A Causal Retrospect. *IEEE TPAMI*, 2021b.

Song-Chun Zhu and David Mumford. *A Stochastic Grammar of Images*. Now Publishers Inc, 2007.

Xi Zhu, Zhendong Mao, Chunxiao Liu, Peng Zhang, Bin Wang, and Yongdong Zhang. Overcoming Language Priors with Self-Supervised Learning for Visual Question Answering. In *IJCAI*, 2020.

## A Appendix

The supplementary material is organized as follows. First, in Section A.1, we provide released code, and in Section A.2, we describe our dataset construction. Section A.3 shows results from our image proposal functions, while Section A.4 and Section A.5 demonstrate ablations on entropy thresholds and the robustness of the distribution coverage metric. We show additional experiment results on settings with an unknown test distribution in Section A.6. In Section A.7, we provide details on baseline implementations. Last, in Section A.8 and Section A.9, we report quantitative analyses of our model performance as well as qualitative examples.

## A.1 Code Release

Code for DisCo with the FiLM model can be found at https://github.com/joyhsu0504/disco, based on the FiLM codebase (https://github.com/ethanjperez/film). We want to highlight that when using DisCo, it is important to ensure that the unlabeled dataset has enough distribution coverage to minimize dataset bias.

## A.2 Dataset

We generate additional CLEVR datasets of two to three objects based on the CoGen (compositional generalization) split introduced in Johnson et al. (2017). As a recap, in this setup, there are two sets of colors, with the first as [*gray, blue, brown, yellow*], and the second as [*red, green, purple, cyan*]. CoGen split A contains cubes in the first set of colors, and cylinders in the second. CoGen split B is reversed in the attribute combinations. Both CoGen split A and B contain spheres of all eight colors. In our construction, the train set of CoGen split A consists of 8,000 images, and the validation set of CoGen split A and the test set of CoGen split B consist of 2,000 images each. The larger, unseen test set consists of 8,000 images. The Ref datasets include questions of the form "What [attribute] is the [referred object]?", while the OneHop dataset includes one-hop relation questions as defined in Johnson et al. (2017).

## A.3 Image Proposal Examples

In Figure 6, we provide examples of image proposals from our two functions—direct sampling and GAN generation. Both capture a range of novel attribute combinations not present in the labeled train set images.

![13_image_0.png](13_image_0.png)

Figure 6: Image proposals from direct sampling and GAN generation, used as unlabeled image input in DisCo.

## A.4 Entropy Thresholds

In this ablation study, we validate the robustness of DisCo with respect to different entropy thresholds. We show that DisCo yields strong results at different values of the hyperparameter. See Table 6 for results of FiLM with DisCo-S on entropy thresholds at varying percentiles.
In addition, we present test accuracy curves for each run, compared to the base FiLM model, in Figure 7. The black line indicates where DisCo begins pseudo-labeling from the pretrained FiLM model. We observe that DisCo considerably improves upon the base VQA model. These results also show the robustness of our method against different random seeds. For each run shown, the network weights, as well as the data samples, are generated based on different random seeds, but the improvements are consistent.

Table 6: Comparison of different percentiles of entropy thresholds on the 0.5% biased Ref dataset.

|                | 30th   | 35th   | 40th   | 45th   |
|----------------|--------|--------|--------|--------|
| FiLM + DisCo-S | 0.9612 | 0.9616 | 0.9667 | 0.9643 |

## A.5 Distribution Coverage

We demonstrate that our distribution coverage measure well approximates test accuracy, even when the VQA model is decreasing in performance. In Figure 8, we report the test accuracy and distribution coverage curves of a FiLM + DisCo-S experiment at too high an entropy threshold, where performance quickly degrades. In Figure 9, we present the curves of a FiLM + DisCo-G experiment that slightly degrades in performance before recovering. The Pearson correlation coefficients between test accuracy and distribution coverage for these experiments are 0.6585 and 0.7987, respectively, both showing highly correlated values allowing for effective model selection.

We see that DisCo does not tend to oversample or propagate errors; instead, it lowers the number of samples chosen when compositional generalization performance decreases. We conjecture that this is due to our model correctness regularization of training on labeled triplets, which contain different combinations of attributes. When the base VQA model is trained on corrupted pseudo-labels, the model is no longer confident in its predictions given conflicting signals from labeled and pseudo-labeled triplets, and hence fewer samples are chosen, as intended in our framework.

![14_image_0.png](14_image_0.png)

Figure 7: Test accuracy curves of the base FiLM model (red), and DisCo-S at different percentiles of entropy thresholds.

![14_image_1.png](14_image_1.png)

Figure 8: Test accuracy and distribution coverage curves of an experiment with a large decrease in model performance, with a Pearson correlation coefficient of 0.6585.

## A.6 Experiments With Unknown Test Distributions

Although we mainly focus on settings where the unlabeled dataset is drawn from a known test distribution, we also show experiments on settings where the test distribution is unknown. We report results on settings where the unlabeled dataset is a superset of the unknown test distribution (UNK in the main text), as well as on settings where the unlabeled dataset is a subset of the unknown test distribution (UNK-SUB below). In the latter setup, we train DisCo on labeled CoGen split A, leverage unlabeled CoGen split B, and test model performance on the full CLEVR split. We see in Table 7, Table 8, and Table 9 that DisCo shows a consistent performance gain compared to the base and baseline models in this setting for all three VQA backbones.

## A.7 Baseline Implementation

We implemented two pretraining approaches that leverage unlabeled images. Specifically, we first train a variational autoencoder (VAE; Kingma & Welling, 2014) and a SimCLR (Chen et al., 2020) model and use their encoding networks to initialize the feature extractor of the visual reasoning model.
The VAE implementation is based on https://github.com/AntixK/PyTorch-VAE, and the SimCLR implementation on https://github.com/Spijkervet/SimCLR. For both methods, we use retrieved image features from ResNet101 as input to the VAE and SimCLR models. We add encoding layers of [Conv2d, BatchNorm2d, and LeakyReLU] to both networks, and use feature-level reconstruction and contrastive losses to supervise learning. After pretraining, we use the newly added layers as the additional encoding for our VQA models.

![15_image_0.png](15_image_0.png)

Figure 9: Test accuracy and distribution coverage curves of an experiment with a slight decrease in model performance, with a Pearson correlation coefficient of 0.7987.

Table 7: DisCo on the FiLM model. Performance reported on a test set of 0.5% Ref from an unknown test distribution.

|                | 0.5% Ref (unk-sub) |
|----------------|--------------------|
| FiLM           | 0.9634             |
| FiLM + vae     | 0.9683             |
| FiLM + simclr  | 0.9635             |
| FiLM + DisCo-S | 0.9798             |
| FiLM + DisCo-G | 0.9745             |

Table 8: DisCo on the TbD-net model. Performance reported on a test set of 0.5% Ref from an unknown test distribution.

|               | 0.5% Ref (unk-sub) |
|---------------|--------------------|
| TbD           | 0.9506             |
| TbD + vae     | 0.9495             |
| TbD + simclr  | 0.9510             |
| TbD + DisCo-S | 0.9582             |
| TbD + DisCo-G | 0.9536             |

Table 9: DisCo on the NS-CL model. Performance reported on a test set of 0.5% Ref from an unknown test distribution.

|                 | 0.5% Ref (unk-sub) |
|-----------------|--------------------|
| NS-CL           | 0.8810             |
| NS-CL + vae     | 0.8789             |
| NS-CL + simclr  | 0.8872             |
| NS-CL + DisCo-S | 0.9011             |
| NS-CL + DisCo-G | 0.8902             |

## A.8 Quantitative Analyses

In Table 10, we examine the test set accuracy per color-shape combination of referred objects in the fully biased Ref dataset. We compare FiLM with DisCo-S and DisCo-G, and report metrics on attribute combinations not seen in the labeled train set. Interestingly, the best-performing model with GAN-generated image proposals performs significantly better on cubes, with a decrease in accuracy on some cylinders in comparison to FiLM and direct sampling. We might instead expect uniformly increased performance on all color-shape combinations, but empirical results reveal that models that compositionally generalize may learn to do so better on some sets of novel attribute combinations.

Table 10: Comparison of test accuracy per color-shape combination of the referred object, trained on the fully biased Ref dataset.
|                 | FiLM   | FiLM + DisCo-S | FiLM + DisCo-G |
|-----------------|--------|----------------|----------------|
| red cube        | 0.6794 | 0.6589         | 0.7658         |
| green cube      | 0.6399 | 0.638          | 0.7173         |
| purple cube     | 0.6478 | 0.6478         | 0.7676         |
| cyan cube       | 0.6313 | 0.6313         | 0.6774         |
| gray cylinder   | 0.6481 | 0.6490         | 0.6292         |
| blue cylinder   | 0.6431 | 0.6450         | 0.5752         |
| brown cylinder  | 0.6654 | 0.6774         | 0.6719         |
| yellow cylinder | 0.6385 | 0.6459         | 0.6106         |
| red sphere      | 1.0    | 0.994          | 0.9859         |
| green sphere    | 1.0    | 0.9937         | 0.9958         |
| purple sphere   | 0.9958 | 0.9958         | 0.9917         |
| cyan sphere     | 1.0    | 1.0            | 0.9981         |
| gray sphere     | 0.9948 | 0.9923         | 0.9794         |
| blue sphere     | 1.0    | 1.0            | 0.9961         |
| brown sphere    | 1.0    | 1.0            | 0.9981         |
| yellow sphere   | 1.0    | 0.9980         | 0.9940         |

## A.9 Qualitative Examples

In Figure 10, we show examples of VQA pairs at different values of entropy output by the model. We see that low-entropy examples satisfy both presuppositions and correctness; at higher entropy values, only presuppositions are satisfied but the answer predicted by the model for out-of-distribution objects is incorrect; and at the highest values of entropy there is presupposition failure. In this way, DisCo is able to choose suitable pairs to be added to training.

We also present additional qualitative examples of predictions from FiLM, DisCo-S, and DisCo-G. In Figure 11, we see examples where DisCo-S and DisCo-G outperform FiLM (first row) and examples where DisCo-G outperforms DisCo-S and FiLM (second row), on the fully biased Ref dataset.

![17_image_0.png](17_image_0.png)
![17_image_1.png](17_image_1.png)

Figure 10: Qualitative examples of VQA pairs at different levels of entropy. Panel questions: "What shape is the rubber thing?", "What shape is the small thing?", "What material is the cylinder?", "What color is the sphere?".

![17_image_2.png](17_image_2.png)
![17_image_3.png](17_image_3.png)

Figure 11: Qualitative examples of FiLM, DisCo-S, and DisCo-G on the fully biased Ref dataset.
Review 1:
Summary:
This work aims to improve the compositional generalization of visual question answering (VQA) models by leveraging unlabeled, out-of-distribution image samples. This work, DisCo, proposes two techniques for better exploitation of unlabeled images: 1) an entropy-based metric for filtering pseudo-labels on unlabeled images that are not appropriate for self-training, and 2) a distribution coverage metric for selecting a model checkpoint that approximately minimizes generalization errors. Both of the techniques are based on empirical observations. The proposed learning paradigm DisCo successfully improves various VQA models (including FiLM, TbD-net, and NS-CL) on the CLEVR compositional generalization dataset, and outperforms unsupervised representation baselines such as VAE and SimCLR.
Strengths and Weaknesses:
Strengths:
- This work studies a novel setting where the VQA model can have access to unlabeled images and extract learning signals from them. Different from prior work, this new setting explores how to achieve better compositional generalization using unlabeled data resources. Considering the efforts in collecting human annotations for the VQA task, this unsupervised learning paradigm is practical and helpful for future research.
- This work has conducted extensive experiments to validate the proposed method DisCo. Three base VQA models (FiLM, TbD-net, and NS-CL), two baselines (VAE and SimCLR), and various dataset settings have been evaluated and compared. DisCo can consistently improve the base VQA models.
- The writing is clear and easy to follow.
Weaknesses:
- This work claims to improve the compositional generalization of VQA models, which means that the model can better generalize to images that contain objects with new compositions of attributes (e.g., shape+color in this work). However, this claim does not seem to align well with the current training and evaluation protocol: At training time, unlabeled **test images** are exposed to DisCo (and other unsupervised learning baselines as well), and thus the model can learn from the different composition distribution in the test images, even though these images are not labeled. During evaluation, the model is tested with exactly the same test images or new images drawn from the same test distribution ("Ref unseen" in the experiments). In other words, the model has already learned from the test-time data distribution during training before testing. This experiment setup seems to be weak for supporting the claim of improved generalizability. In practice, it is not likely to know the test-time compositions while training the VQA model. A stronger setting might be to evaluate the learned VQA model with another distribution that is different from both the labeled and unlabeled images.
- The entropy-based metric for thresholding pseudo-labels is highly related to prior work in out-of-distribution (OOD) detection, such as [1][2][3]. A comprehensive comparison between the proposed entropy-based metric and the related methods is necessary. Moreover, on this specific task of VQA, is the proposed metric better than well-established OOD detection methods like [3]? Further experiments that compare the proposed metric and previous OOD detection methods might be helpful.
- This work is mainly established on empirical observations. Some theoretical insights could further strengthen the proposed strategies.
- The performance gain seems limited.
For example, as shown in Table 1, the performance improvement brought by DisCo (from 0.7589 to 0.7760) is less than simply introducing only 0.1% labeled data from the target distribution (from 0.7589 to 0.7931). More clarification might be helpful for readers to understand how significant this improvement is.
References:
[1] Dan Hendrycks, and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.
[2] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. In ICLR, 2018.
[3] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In NeurIPS, 2020.
Requested Changes:
Additional explanation and experimental results can help to address the weaknesses as detailed above. In addition to those, the following minor changes can be made:
- The "out-of-distribution images" introduced in this work contain objects with different **compositions** of attributes, but these individual attributes (e.g., colors or shapes) have already been seen in the labeled data. It might be better to clearly explain this when the work first introduces out-of-distribution images.
- In the introduction, it is mentioned that "our framework is model-agnostic and can be used for a variety of vision domains that have combinatorial structures." Could the authors provide some specific examples other than VQA?
- Figure 2: To be consistent with the notations in Section 3.2, please use subscripts for $+$ and $-$ signs.
- Figure 3: To be consistent with the index of this whole figure, please rename the sub-figures as 3a) and 3b). Similar issue in Figure 4.
- Figure 4-left: Why do some checkpoints (points on the left side) have validation accuracy close to 0.0, while having test accuracy close to 1.0? In the text, it is mentioned that "validation set accuracies are mostly close to 1.0." Please clarify.
Broader Impact Concerns: No concerns.
==================================================
Review 2:
Summary:
The paper proposes to improve out-of-distribution performance via two main mechanisms. First, the use of pseudo-labeling and second, and more importantly, the use of unlabeled out-of-distribution data to improve distribution coverage. The experiments are conducted on the popular CLEVR dataset with familiar backbone networks for CLEVR, and the results show improved compositional generalization to OOD samples under the varying levels of distribution shift introduced.
Strengths and Weaknesses:
Strengths:
1. Technically sound and relevant to TMLR: I can find no obvious red flags with regard to technical correctness. The proposed entropy-based method for pseudo-label generation, or generation of samples using GAN, and the model selected using "distribution coverage" are all technically sound and can be expected to work in the manner proposed by the paper. The topic is also highly relevant to TMLR and should be of interest to a good subsection of its readers.
2. Thorough experiments: The experiments are clearly presented and are thorough enough to allow readers to know the effect of different choices used in the model. I do not see any holes or additional experiments that I would request. I do have a concern about whether the current experiments are enough to validate the main claims in the paper, but that is a slightly orthogonal issue which I discuss below.
This should not change the range of experiments presented, and I would still be totally okay with the same set of experiments repeated with a corrected training setup.
Weaknesses:
There is just one main weakness to me in the current draft.
1. There are two main ways that the paper introduces unlabeled "OOD" samples to the proposed pipeline: A) direct sampling from unlabeled images in Dtest, and B) generation from a generative adversarial network (GAN) trained on unlabeled images in Dtest. Both of these are quite problematic and are antithetical to OOD testing. To this reviewer, the whole point of OOD testing should be to estimate performance on an *unknown* distribution different from training. In practice, this is approximated by a differently distributed test set, but there is more than a single way to have OOD samples. Here, the paper studies OOD in terms of color distribution shift, but there are other ways to be OOD. It is totally okay to use unlabeled data from distributions *different* from training, but it is not okay to (only) use the one that is literally drawn from the same distribution as the test. That is not just OOD, but rather one specific distribution, that we use to measure OOD under a controlled setting. To properly validate the claims of the proposed technique, the work should focus on creating/sampling images from all possible distribution shifts. To me, the current setup invalidates the main claims of the paper that the use of unlabeled/pseudo-labeled data can improve OOD generalization. See more details below under "requested changes". The same parallels can be drawn for model selection: "The high-level idea is to maximize the distribution coverage on the unlabeled dataset." is currently more like "...coverage on an unlabeled dataset sampled from one specific distribution, which also happens to be the same as our test set", which is problematic in the same way as above!
Requested Changes:
- Both direct sampling and the generative model should draw from an expanded set of distributions **INCLUDING** the test distribution, not solely consisting of samples that follow the test-set distribution. There should at least be a variant where the model does not have knowledge of the test-set distribution. The current experiments can be kept as-is, but this should also be included. The use of samples generated from the test distribution alone casts serious doubt on the claims made in the paper in the form that they appear currently. If the authors are unwilling to do this, the claims and writing should be changed to reflect the new setup (i.e., the model assumes direct and explicit knowledge of what exactly the "test" distribution looks like, OR, for known distribution shifts, having unlabeled data can help improve performance on *that* specific distribution shift). This is critical to securing my recommendation for accept.
Broader Impact Concerns: N/A
==================================================
Review 3:
Summary:
The paper proposes a learning method called DisCo (Distribution Coverage) to improve compositional generalization on the CLEVR dataset. It proposes two components: the pseudo-labeling framework pairs up compositionally novel, unlabeled images with randomly selected questions to improve compositional generalization, and the model selection strategy uses a distribution coverage metric to select models that generalize better. The results show improved compositional generalization on the CLEVR dataset.
Strengths and Weaknesses: **Strengths:** [S1] Overall, the paper is clearly written; the proposed method is sensible and the results support the core claim of compositional generalization (if the scope of the claim is narrowed down (see weaknesses section)). [S2] Both sampling and using a generative model are sound approaches to gathering unlabeled, test-like data. [S3] Use of entropy threshold to filter out samples e.g., unanswerable ones is well motivated and Fig 3 does a really great job of justifying the use of entropy. This is also a simple measure to compute for practical purposes. [S4] Model selection procedure is also sound as long as one has access to unlabeled test/test-like data. Sec 3.5 clearly shows the need and utility of this approach over a labeled val set that does not reflect test-like shifts. **Weaknesses:** The main weakness is that the experimental setup does not fully support the current broad claim of compositional generalization. To support the scope of the current claims, I think the paper needs to study different types and levels of shifts between unlabeled and test distributions, which is perhaps best studied with a more realistic dataset e.g., GQA instead of CLEVR. The other option would be to revise certain claims/sections. Requested Changes: It would be ideal to test the method on more realistic scenarios i.e., with varying types/degrees of shifts between the unlabeled and the actual test distributions preferably on datasets more realistic than CLEVR. I think such experiments could provide evidence for some of the claims with broad scope e.g., those made in Sec 3: > As training progresses, DisCo selects more “difficult” question-image pairs, which increases distribution coverage on the unlabeled image set. After training, models can reason about objects with new attribute compositions, not seen with labels during training. This is however not critical for acceptance as long as the claims are edited to reflect the scope/assumptions. Broader Impact Concerns: The existing section looks good. ================================================== Metareview: Recommendation: Accept as is Comment: This paper proposes a learning method called DisCo for better distribution coverage to improve compositional generalization for visual reasoning on the CLEVR dataset. After author response, it received 2 Leaning Accept, and 1 Accept recommendations. On one hand, all the reviewers agree that the paper is clearly written, the proposed method is reasonable and the results support the core claim of compositional generalization after revision. The experiments are thorough and the ablation is clear. On the other hand, the main concern raised by all the reviewers is that the experimental setup does not fully support the broad claim of compositional generalization. Specifically, the model has already learned from the test-time data distribution during training before testing. This experiment setup is a little bit weak, since the specific test distribution should be unknown at training time. During rebuttal, the authors have adjusted the claims and added some new results, which the reviewers appreciated. Another weakness is that all the experiments are performed on the toy CLEVR dataset, so it remains unclear how the technique developed in this paper is really useful for real-world datasets. The authors have added this limitation discussion in the revision. Overall, the rebuttal solved most of the concerns raised by the reviewers, and all the reviewers tend to accept the paper. 
Though some weaknesses still exist, the paper itself is still self-contained, and will be interesting to researchers specifically working in this sub-field. Therefore, the editor decided to recommend accept by the end. ==================================================
# Booster-Shot: Boosting Stacked Homography Transformations For Multiview Pedestrian Detection With Attention

Anonymous authors
Paper under double-blind review

## Abstract

Improving multi-view aggregation is integral for multi-view pedestrian detection, which aims to obtain a bird's-eye-view pedestrian occupancy map from images captured through a set of calibrated cameras. Inspired by the success of attention modules for deep neural networks, we first propose a Homography Attention Module (HAM), which is shown to boost the performance of existing end-to-end multiview detection approaches by utilizing a novel channel gate and spatial gate. Additionally, we propose Booster-SHOT, an end-to-end convolutional approach to multiview pedestrian detection incorporating our proposed HAM as well as elements from previous approaches, such as view-coherent augmentation and stacked homography transformations. Booster-SHOT achieves 92.9% and 94.2% for MODA on Wildtrack and MultiviewX, respectively, outperforming the state-of-the-art by 1.4% on Wildtrack and 0.5% on MultiviewX, achieving state-of-the-art performance overall for standard evaluation metrics used in multi-view pedestrian detection.1

1Code will be made public after publication and is available in the supplementary.

## 1 Introduction

Multi-view detection Sankaranarayanan et al. (2008); Aghajan & Cavallaro (2009); Hou et al. (2020b) leverages multiple camera views for object detection using synchronized input images captured from varying view angles. Compared to a single-camera setup, the multi-view setup alleviates the occlusion issue, one of the fundamental problems in many computer vision applications. In this work, we consider the problem of multi-view pedestrian detection. As shown in Figure 1, a bird's-eye-view representation is obtained from the synchronized images of multiple calibrated cameras, which is then further used to detect pedestrians in the scene.

A central problem in multi-view detection is to obtain a correct *multi-view aggregation*. The change in viewpoint and occlusions make it challenging to match object features across different view angles. Various works attempted to address this problem, ranging from early approaches leveraging "classical" computer vision Alahi et al. (2011) and hybrid approaches further incorporating deep learning, to end-to-end trainable deep learning architectures Hou et al. (2020b); Hou & Zheng (2021); Song et al. (2021). One core challenge in multiview detection is designing how the multiple views should be aggregated. MVDet Hou et al. (2020b) proposes a fully convolutional end-to-end trainable solution for the multi-view detection task. MVDet aggregates different views by projecting the convolution feature map via perspective transformation to a single ground plane and concatenating the multiple projected feature maps. Given the aggregated representation, MVDet applies convolutional layers to detect pedestrians in the scene. Song et al. Song et al. (2021) identified that the projection of the different camera views to a single ground plane is not accurate due to misalignments. Consequently, they propose to project the feature maps onto different height levels according to different semantic parts of pedestrians. Additionally, they use a neural-network-based soft-selection module to assign a likelihood to each pixel of the features extracted from the different views. They termed their approach SHOT, due to the use of the Stacked HOmography Transformations.
MVDeTr Hou & Zheng (2021) extends MVDet by introducing a shadow transformer to attend differently at different positions to deal with various shadow-like distortions as well as a view-coherent data augmentation 1Code will be made public after publication and is available in the supplementary. ![1_image_0.png](1_image_0.png) Figure 1: Overview of multiview detection with homography attention module (HAM) method, which applies random augmentations while maintaining multiview-consistency. MVDeTr currently constitutes the SotA approach for multiview detection. In recent years the attention mechanism for deep neural networks has played a crucial role in deep learning Hu et al. (2018); Woo et al. (2018); Guo et al. (2021b) due to the non-trivial performance gains that it enabled. Attention mechanisms have provided benefits for various vision tasks, e.g. image classification Hu et al. (2018); Woo et al. (2018), object detection Dai et al. (2017); Carion et al. (2020), semantic segmentation Fu et al. (2019); Yuan et al. (2021), or Point Cloud Processing Xie et al. (2018); Guo et al. (2021a). However, to this date, no dedicated attention mechanism has been proposed for the task of multiview pedestrian detection. In this work, we fill this gap and propose an attention mechanism specifically designed to boost existing multiview detection frameworks. Our proposed Homography Attention Module (HAM) is specifically tailored for the core task of multiview aggregation in modern multiview detection frameworks. As shown in the lower part of Figure 1 our proposed solution consists of a channel gate module and a spatial gate module. The channel gate is directly applied to the accumulated image features from the different views. The intuition behind our *channel gate* is that different channels hold meaningful information for different homographies. The channel gate is followed by our *spatial gate*. We conjecture, that for each view and homography combination different spatial features are of higher importance. Our proposed attention mechanism can be readily plugged into existing methods. We also combine insight from previous approaches and HAM to propose Booster-SHOT, a new end-to-end multiview pedestrian detection framework. Our experimental results show that both incorporating HAM into previous frameworks and Booster-SHOT improves over previous multiview detection frameworks and achieves state-of-the-art performance. Additionally, we provide quantitative and qualitative results to verify and justify our design choices. ## 2 Related Work 2.1 Multiview Detection It can be difficult to detect pedestrians from a single camera view, due to crowded scenes and occlusions. Hence, many research works study pedestrian detection in the multi-camera setup. Multiple calibrated and synchronized cameras capturing a scene from different view angles can provide a richer representation of the environment. Camera calibrations additionally produces a correspondence between each location on the ground plane and its bounding boxes in multiple camera views. This is possible since a 2D bounding box can be calculated, once an average human height and width is assumed via perspective transformation. Research in multiview pedestrian detection has been explored intensively in the past. Early methods mainly relied on background subtraction, geometric constraints, occlusion reasoning, etc. Fleuret et al. (2007); Sankaranarayanan et al. (2008); Berclaz et al. (2011). Given different views from two to four video streams Fleuret *et al*. 
Fleuret et al. (2007) first estimate a probabilistic occupancy map of the ground plane via a generative model, which is followed by a tracking mechanism to track up to six individuals. The work by Sankaranarayanan *et al*. Sankaranarayanan et al. (2008) emphasizes how geometric constraints in multicamera problems can be leveraged for detection, tracking, and recognition. Similar to Fleuret et al. (2007), Coates and Ng Coates & Ng (2010) also leverage a probabilistic method to fuse the outputs of multiple object detectors from different views to enhance multi-view detection in the context of robotics. Using the k-shortest paths algorithm, Berclaz *et al*. Berclaz et al. (2011) propose a multiple object tracking framework, requiring as input an occupancy map from a detector. Their algorithm handles unknown numbers of objects while filtering out false positives and bridging gaps due to false negatives. Similar to previous approaches, given the output of several object detectors for different viewpoints, Roig *et al*. Roig et al. (2011) estimate the ground plane location of the objects. They model the problem via Conditional Random Fields (CRFs) to simultaneously predict the labeling of the entire scene. With the success of deep learning, deep neural networks have also been successfully applied to the multiview detection problem Chavdarova & Fleuret (2017); Baqué et al. (2017); Hou et al. (2020b); Song et al. (2021); Hou & Zheng (2021). Chavdarova and Fleuret Chavdarova & Fleuret (2017) propose an end-to-end deep learning architecture that combines the early layers of pedestrian detectors for monocular views with a multiview network optimized for multi-view joint detection. Baqué *et al*. Baqué et al. (2017) identify that the performance of multiview systems degrades significantly in crowded scenes. To overcome this, they propose a hybrid approach of CRFs and CNNs. To perform robustly in crowded scenes, their end-to-end trainable model leverages a high-order CRF to model occlusions. Recently, MVDet Hou et al. (2020b), an end-to-end trainable multiview detector, has been proposed. To aggregate cues from multiple views, MVDet transforms feature maps obtained from multiple views to a single ground plane. To aggregate information from spatially neighboring locations, MVDet leverages large kernel convolutions on the multiview aggregated feature map. MVDet has been further extended through stacked homographies and shadow transformers Song et al. (2021); Hou & Zheng (2021). Song *et al*. Song et al. (2021) identified that the projection to a single view in MVDet Hou et al. (2020b) leads to inaccuracies in the alignments. Consequently, they propose to approximate projections in 3D world coordinates via a stack of homographies. Additionally, to assemble occupancy information across views, they propose a soft selection module. The soft-selection module predicts a likelihood map that assigns each pixel of the features extracted from individual views to one of the homographies. MVDeTr Hou & Zheng (2021) adopts shadow transformers to aggregate multiview information, which attend differently based on position and camera differences to deal with shadow-like distortions. Additionally, MVDeTr introduces a view-coherent data augmentation method, which applies random augmentations while maintaining multiview consistency. To the best of our knowledge, MVDeTr currently constitutes the SOTA approach for multiview pedestrian detection.
## 2.2 Attention Mechanism In Computer Vision

Attention mechanisms for computer vision emphasize more important regions of an image or a feature map and suppress less relevant parts Guo et al. (2021b). They can be broadly divided into channel attention, spatial attention, and a combination of the two variants. Channel Attention selects important channels through an attention mask across the channel domain. Pioneered by Hu *et al*. Hu et al. (2018), various works have extended the Squeeze-and-Excitation (SE) mechanism Gao et al. (2019); Lee et al. (2019); Yang et al. (2020); Qin et al. (2021). Spatial Attention selects important spatial regions of an image or a feature map. Early spatial attention variants are based on recurrent neural networks (RNNs) Mnih et al. (2014); Ba et al. (2014). In the literature, various variants of visual attention-based models can be found Xu et al. (2015); Oktay et al. (2018). To achieve transformation invariance while letting CNNs focus on important regions, Spatial Transformer Networks Jaderberg et al. (2015) were introduced. Similar mechanisms have been introduced in deformable convolutions Dai et al. (2017); Zhu et al. (2019). Originating from the field of natural language processing, self-attention mechanisms have been examined for computer vision applications Wang et al. (2018); Carion et al. (2020); Dosovitskiy et al. (2020); Chen et al. (2020); Zhu et al. (2020). Channel Attention & Spatial Attention can also be used in combination. Residual Attention Networks Wang et al. (2017) extend ResNet He et al. (2016) through a channel & spatial attention mechanism on the feature representations. A spatial and channel-wise attention mechanism for image captioning has been introduced in Chen et al. (2017). The Bottleneck Attention Module (BAM) Park et al. (2018) and Convolutional Block Attention Module (CBAM) Woo et al. (2018) both infer attention maps along the channel and spatial pathways. While in the previous two methods the channel and spatial pathways are computed separately, triplet attention Misra et al. (2021) was introduced to account for cross-dimension interaction between the spatial dimensions and the channel dimension of the input. Channel & spatial attention has also been applied in the context of segmentation Roy et al. (2018); Fu et al. (2019). Further combinations of channel and spatial attention include self-calibrated convolutions Liu et al. (2020), coordinate attention Hou et al. (2021) and strip pooling Hou et al. (2020a).

## 3 Preliminaries

Let the input images for $N$ camera views be $(I^1, \ldots, I^N)$. The respective feature maps obtained from the feature extractor in the initial step of the general framework are denoted as $(F^1, \ldots, F^N)$. The intrinsic and extrinsic parameters of the $i$-th camera are $G^i \in \mathbb{R}^{3 \times 3}$ and $E^i = [R^i \mid t^i] \in \mathbb{R}^{3 \times 4}$, respectively, where $R^i$ is the $3 \times 3$ rotation matrix in 3D space and $t^i$ is the $3 \times 1$ translation vector. Following MVDet Hou et al. (2020b), we quantize the ground plane into grids and define an additional matrix $F^i \in \mathbb{R}^{3 \times 3}$ that maps world coordinates to the aforementioned grid. While the mathematical concept of homography is an isomorphism of projective spaces, we use the term homography to describe correspondence relations between points on a given plane parallel to the ground as they are seen from the bird's-eye-view and from a separate camera view. This is in line with SHOT Song et al. (2021), where the authors explain their projections as homographies describing the translation of a plane under the pin-hole camera model. We will go into further depth regarding the homography transforms in our supplementary materials.
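To make the plane-to-image correspondence above concrete, the following minimal numpy sketch (ours, not the authors' code) constructs the 3 × 3 matrix that maps homogeneous coordinates $(x, y, 1)$ on a world plane $z = h$ to image pixels under the pin-hole model, from the intrinsics $G$ and extrinsics $[R \mid t]$ defined above:

```python
import numpy as np

def plane_to_image_homography(G, R, t, h):
    """3x3 homography from the world plane z = h to the image plane.

    A world point (x, y, h) projects to G @ (R @ [x, y, h] + t)
    = G @ (x * r1 + y * r2 + h * r3 + t), so in homogeneous plane
    coordinates (x, y, 1) the mapping is the linear map below.
    """
    r1, r2, r3 = R[:, 0], R[:, 1], R[:, 2]
    return G @ np.column_stack([r1, r2, h * r3 + t])
```

Warping camera-view features onto the quantized bird's-eye-view grid for a homography at height $h$ then amounts to composing the world-to-grid matrix $F$ with the inverse of this mapping, i.e. `F @ np.linalg.inv(plane_to_image_homography(G, R, t, h))`; SHOT stacks several such heights, while MVDet uses only $h = 0$.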
## 4 Methodology

## 4.1 Previous Multiview Detection Methods

Before presenting our proposed attention module, we outline the previous multiview detection frameworks in which we have implemented and tested its performance. MVDet Hou et al. (2020b) presented a multiview detection framework that functions as follows: First, the input images from different viewpoints are passed through a generic feature extractor such as ResNet18 with minor modifications. The feature maps are passed through an additional convolutional neural network that detects the heads and feet of pedestrians, an auxiliary task that aids the network during training. Next, the feature maps are projected to the ground plane via homography transformation and concatenated. Additionally, *x, y* coordinate maps are concatenated to the stack of transformed feature maps as in CoordConv Liu et al. (2018). Finally, this is passed through a CNN to output a bird's-eye-view (BEV) heatmap which is then post-processed via thresholding and non-maximum suppression. Extending upon MVDet, MVDeTr Hou & Zheng (2021) proposed the use of affine transformations (rotation, translation, shear, scale, cropping), which are view-coherent augmentations. Additionally, the final CNN to generate the BEV heatmap is replaced with a shadow transformer, with the purpose of handling various distortion patterns during multiview aggregation. MVDeTr further replaces the MSE loss used in MVDet with Focal Loss Law & Deng (2018) coupled with an offset regression loss.

Table 1: Settings for each approach

| Method | Aug. | Loss | BEV gen. | Multi Homogr. |
|---|---|---|---|---|
| MVDet Hou et al. (2020b) | ✗ | MSE | CNN | ✗ |
| SHOT Song et al. (2021) | ✗ | MSE | CNN | ✓ |
| MVDeTr Hou & Zheng (2021) | ✓ | Focal | Transformer | ✗ |
| Booster-SHOT | ✓ | Focal | CNN, Transformer | ✓ |

While MVDet and MVDeTr both project the feature maps to the ground plane, SHOT Song et al. (2021) proposes to approximate projections in 3D world coordinates via a stack of homographies. In line with MVDet, SHOT uses a ResNet18 as its feature extractor. Contrary to MVDet, SHOT introduces additional planes parallel to the ground plane with different distances to the ground. The features are selectively projected from the camera view to these different bird's-eye-views. As the projection to some planes may be of more importance than others, SHOT introduces a soft selection module where a network learns which homography should be used for which pixel. The soft selection module has two shortcomings when compared to HAM. First, it uses a softmax activation, therefore each pixel gets projected to some extent to each homography. Since even the homography given the lowest score by the soft selection module affects the projected outcome, this introduces some noise into the final projected feature map. In addition, all feature channels corresponding to a single pixel are multiplied by the same value when projected to a homography. However, different channels attend to different features, and some will be useful for the selected homography while others will not. In contrast, HAM selects channels in a discrete manner for each homography to avoid the two problems mentioned above.
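To make this contrast concrete, the following illustrative PyTorch snippet (ours; the shapes follow the 270 × 480 feature maps used later in Section 5.2, and the random score tensors stand in for the learned selection networks) juxtaposes SHOT-style per-pixel soft selection with HAM-style discrete per-homography channel selection:

```python
import torch

B, C, D, K, H, W = 1, 128, 4, 32, 270, 480
feat = torch.randn(B, C, H, W)          # features from one camera view

# SHOT-style soft selection: one softmax weight per pixel and homography,
# shared across all C channels, so every homography receives every pixel.
pixel_scores = torch.randn(B, D, H, W)  # stand-in for the soft selection net
w = pixel_scores.softmax(dim=1)
soft = [feat * w[:, d:d + 1] for d in range(D)]      # D maps of shape (B, C, H, W)

# HAM-style discrete selection: a K-channel subset per homography, so a
# channel contributes to a homography only if it is actually selected.
chan_scores = torch.randn(B, D, C)      # stand-in for the channel gate
idx = chan_scores.topk(K, dim=2).indices             # (B, D, K)
hard = [torch.gather(feat, 1, idx[:, d, :, None, None].expand(-1, -1, H, W))
        for d in range(D)]                           # D maps of shape (B, K, H, W)
```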
To show the efficiency of our homography attention module (HAM), we use the approaches as-is, without modification to their loss or training configuration, and simply plug in our proposed HAM. As the soft selection module in SHOT is rendered obsolete by our proposed HAM, we remove it when comparing the performance of SHOT equipped with our module against the reported values for SHOT. Additionally, based on the advances made in previous works and our proposed attention module, we further propose Booster-SHOT (see Section 4.3).

## 4.2 Homography Attention Module

In this section, we introduce our proposed homography attention module (HAM) for boosting the performance of multi-view pedestrian detection. HAM consists of a channel gate and several spatial gates, their number equal to the number of homographies used. Note that our attention module is specifically designed for view-aggregation in the context of multiview detection and is hence only applied in the multiview aggregation part. The image feature maps are first passed through the channel gate, then the spatial gate, and finally through the homography transformation, followed by the BEV heatmap generator.

Channel Gate Our proposed channel gate follows the intuition that, depending on the homography, different channels are of importance. Taking into consideration the multiple homography layers deployed at different heights, different feature information becomes more valuable. For instance, when we consider the homography at Z = 0, discriminative feature information near the ground plane, such as a person's feet, ankles, and lower legs, may offer a more significant representation. This is because the homography at Z = 0 focuses on objects that are closer to the ground plane, which makes features near the ground plane more informative. This is in contrast to the approach proposed by SHOT, which feeds all feature maps through each of the homographies. Figure 2 outlines the architecture of our proposed channel gate, which broadly consists of the *channel selection module* and the *top-K selection module*.

![5_image_0.png](5_image_0.png)

Figure 2: Diagram showing our proposed channel gate.

Given the stack of feature maps acquired from the different views, the channel selection module is applied first. The channel selection module first applies max pooling and average pooling along the spatial dimension. Both pooled feature maps are passed through a shared 2-layer MLP. As the number of channels in the output from the last layer of the MLP is determined by the number of homographies (denoted as D) together with the number of channels in the input (denoted as C), we obtain C channels for each homography, or in other words a C × D channel output size. Afterward, we apply the softmax function along the channel dimension for each of the outputs. The outputs are then fed into the top-K selection module. The top-K selection module takes these D different C-dimensional outputs and selects the top K largest values. The corresponding top-K selected channels from the original input are then concatenated, resulting in a subset of the original input with K channels. In the case of D = 1 (using only one homography, usually the ground plane), the top-K selection module defaults to an identity function. To retain the channel-wise aspect of our module, in this scenario we multiply the output of the channel selection module element-wise with the input. This completes the channel gate, which outputs D feature maps of K channels each, which are then fed into the spatial gate.
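A minimal PyTorch sketch of the channel gate as we read the description above (not the authors' released code; the MLP reduction ratio `r` and the summation of the two pooled branches before the softmax are our assumptions):

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Channel selection + top-K selection for D homographies."""

    def __init__(self, C, D, K, r=16):    # r: MLP reduction ratio (assumption)
        super().__init__()
        self.D, self.K = D, K
        self.mlp = nn.Sequential(          # shared 2-layer MLP
            nn.Linear(C, C // r), nn.ReLU(), nn.Linear(C // r, C * D))

    def forward(self, x):                  # x: (B, C, H, W) accumulated features
        B, C, H, W = x.shape
        avg = x.mean(dim=(2, 3))           # average pooling over spatial dims
        mx = x.amax(dim=(2, 3))            # max pooling over spatial dims
        scores = self.mlp(avg) + self.mlp(mx)           # fuse branches (assumption)
        scores = scores.view(B, self.D, C).softmax(-1)  # softmax per homography
        outs = []
        for d in range(self.D):            # one K-channel subset per homography
            idx = scores[:, d].topk(self.K, dim=1).indices       # (B, K)
            idx = idx[:, :, None, None].expand(-1, -1, H, W)
            outs.append(torch.gather(x, 1, idx))
        return outs                        # D tensors, each (B, K, H, W)
```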
Spatial Gate Our spatial gate is motivated by our conjecture that for each view and homography combination different spatial features are of different importance. This intuition is based on the understanding that the view determines the camera's position and orientation, the homographies correspond to different heights in the scene, and the spatial features capture the patterns, textures, and shapes of the objects in the scene. Depending on the specific view and homography combination, certain spatial features may be more informative and relevant for feature extraction than others. For example, features closer to the lower image border might be more important for the combination of a view whose camera is oriented nearly parallel to the ground plane and the homography at Z = 0. By using a spatial gate to selectively weight and filter the spatial features for each combination, our proposed method can effectively capture the relevant information from the image and improve performance. Figure 3 shows the architecture of our spatial gate.

![6_image_0.png](6_image_0.png)

Figure 3: Diagram showing our proposed spatial gate.

The input is max and average pooled along the channel dimension, then concatenated channel-wise. This 2-channel input is then passed through a 2-layer convolutional neural network to generate the spatial attention map. Finally, this spatial attention map is multiplied with the original input element-wise to create an output with dimensions identical to the input. For each homography path, a separate spatial gate is applied.
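A matching sketch of one spatial gate (again our reading of the description above; the hidden width, kernel size, and sigmoid activation are assumptions not stated in the text):

```python
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Spatial attention over the K channels routed to one homography."""

    def __init__(self, hidden=8, k=7):     # hidden width / kernel size: assumptions
        super().__init__()
        self.net = nn.Sequential(           # 2-layer CNN on the 2-channel pooled map
            nn.Conv2d(2, hidden, k, padding=k // 2), nn.ReLU(),
            nn.Conv2d(hidden, 1, k, padding=k // 2), nn.Sigmoid())

    def forward(self, x):                   # x: (B, K, H, W) from the channel gate
        pooled = torch.cat([x.mean(1, keepdim=True),   # channel-wise avg pool
                            x.amax(1, keepdim=True)],  # channel-wise max pool
                           dim=1)
        return x * self.net(pooled)         # element-wise re-weighting
```

In the full module, each of the D outputs of the channel gate is passed through its own SpatialGate before being warped by the corresponding homography and fed to the BEV heatmap generator.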
Architecture-wise, while SHOT uses a "soft selection module" to estimate the importance of each homography plane for each pixel, HAM estimates the importance of channels and spatial information for each homography. Also, while MVDeTr introduced a "shadow transformer" after the homography transforms to remove shadow-like background noise, HAM uses attention to optimize the image features fed to each homography and is applied prior to the homography transforms.

## 4.3 Booster-SHOT

Given the insights collected from previous approaches in addition to our proposed HAM, we design a multiview pedestrian detection architecture, which we term Booster-SHOT. Booster-SHOT bases itself on SHOT Song et al. (2021), using its stacked homography approach, and leverages MVDeTr's Focal loss and offset regression loss along with the view-coherent augmentation. We retain SHOT's convolutional architecture used to generate the BEV heatmap but remove the soft selection module, as the implementation of our module renders it obsolete. Figure 1 outlines how our proposed module is implemented in Booster-SHOT. Table 1 outlines the design choices of Booster-SHOT alongside previous methods.

## 5 Experiments

## 5.1 Datasets

Our method is tested on two datasets for multiview pedestrian detection. Wildtrack Chavdarova et al. (2018) consists of 400 synchronized frames from 7 cameras, constituting a total of 2,800 images. The images cover a region with dimensions 12 meters by 36 meters. The ground plane is denoted using a grid of dimensions 480×1440, such that each grid cell is a 2.5-centimeter by 2.5-centimeter square. Annotations are provided at 2fps and there are, on average, 20 people per frame. Each location within the scene is covered by an average of 3.74 cameras. MultiviewX Hou et al. (2020b) is a synthetic dataset created using human models from PersonX Sun & Zheng (2019) and the Unity engine. It consists of 1080 × 1920 images taken from 6 cameras that cover a 16-meter by 25-meter area. Per the method adopted in Wildtrack, the ground plane is represented as a 640 × 1000 grid of 2.5-centimeter squares. Annotations are provided for 400 frames at 2fps. An average of 4.41 cameras cover each location, while an average of 40 people are present in a single frame.

## 5.2 Settings And Metrics

In accordance with previous methods, we report four metrics: Multiple Object Detection Accuracy (MODA), Multiple Object Detection Precision (MODP), precision, and recall. Let us define $N$ as the number of ground truth pedestrians. If the true positives (TP), false positives (FP) and false negatives (FN) are known, precision and recall can be calculated as $\frac{TP}{TP+FP}$ and $\frac{TP}{N}$, respectively. MODA is an accuracy metric for object detection tasks and is therefore obtained by calculating $1 - \frac{FP+FN}{N}$. MODP is computed with the formula $\frac{\sum \left(1 - d[d<t]/t\right)}{TP}$, where $d$ is the distance from a detection to its ground truth (GT) and $t$ is the threshold for a correct detection. We keep the original threshold of 20 that was proposed in SHOT.
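For concreteness, the metric definitions above translate directly into the following sketch (ours), where `tp_distances` holds the distance $d$ of each true positive to its matched ground truth:

```python
def detection_metrics(tp, fp, fn, tp_distances, n_gt, t=20):
    """MODA, MODP, precision and recall as defined above (t: match threshold)."""
    precision = tp / (tp + fp)
    recall = tp / n_gt
    moda = 1 - (fp + fn) / n_gt
    modp = sum(1 - d / t for d in tp_distances if d < t) / tp
    return moda, modp, precision, recall
```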
Our implementation is based on the released code for MVDet Hou et al. (2020b), SHOT Song et al. (2021), and MVDeTr Hou & Zheng (2021) and follows the training settings (optimizer, learning rate, etc.) for each. For all instances, the input images are resized to 720 × 1280. The output features are 270 × 480 for MVDet and SHOT and 90 × 160 for MVDeTr. ∆z (the distance between homographies) is set to 10 on Wildtrack and 0.1 on MultiviewX. All experiments are run on two A30 GPUs (depending on the framework) with a batch size of 1. For experiments implementing our module in SHOT, our base approach involves selecting the top-32 channels each for 4 homographies. We note that SHOT's base approach uses 5 homographies.

Table 2: Performance comparison (in %) on the Wildtrack (WT) and MultiviewX (MX) datasets

| Method | WT MODA | WT MODP | WT precision | WT recall | MX MODA | MX MODP | MX precision | MX recall |
|---|---|---|---|---|---|---|---|---|
| MVDet | 88.2 | 75.7 | 94.7 | 93.6 | 83.9 | 79.6 | 96.8 | 86.7 |
| MVDet + HAM | 89.6 ± 0.35 | 80.4 ± 0.21 | 95.7 ± 1.06 | 93.8 ± 0.42 | 91.3 ± 0.35 | 81.7 ± 0.14 | 98.3 ± 0.49 | 91.9 ± 2.26 |
| SHOT | 90.2 | 76.5 | 96.1 | 94.0 | 88.3 | 82.0 | 96.6 | 91.5 |
| SHOT + HAM | 90.2 ± 0.49 | 77.4 ± 0.57 | 96.2 ± 0.07 | 93.9 ± 0.42 | 91.2 ± 0.53 | 86.9 ± 4.14 | 98.2 ± 1.25 | 92.9 ± 0.78 |
| MVDeTr | 91.5 | 82.1 | 97.4 | 94.0 | 93.7 | 91.3 | 99.5 | 94.2 |
| MVDeTr + HAM | 92.8 ± 0.49 | 82.4 ± 0.71 | 96.6 ± 0.85 | 96.6 ± 1.25 | 94.2 ± 0.07 | 91.4 ± 0.57 | 99.4 ± 0.21 | 94.8 ± 0.21 |
| Booster-SHOT + Tr | 92.5 ± 0.64 | 82.0 ± 0.71 | 96.3 ± 0.78 | 96.3 ± 1.50 | 93.8 ± 0.49 | 91.8 ± 0.07 | 98.8 ± 0.64 | 95.0 ± 1.06 |
| Booster-SHOT | 92.8 ± 0.17 | 84.9 ± 4.42 | 97.5 ± 1.25 | 95.3 ± 1.17 | 94.4 ± 0.18 | 92.0 ± 0.04 | 99.4 ± 0.07 | 94.9 ± 0.21 |

## 5.3 Comparison With Previous Methods

As shown in Table 2, we provide a comparison for the three most recent methods before and after applying our module. Applying our module to MVDet, SHOT and MVDeTr improved (or matched) all four metrics reported in their respective papers for MultiviewX. Specifically, the average performance of MVDet with our module improves over the reported values for MVDet on MultiviewX by 7.4%, 2.1%, 1.5%, and 5.2% for MODA, MODP, precision, and recall, respectively. For Wildtrack, the use of our module again improved all four metrics, with the exception of MVDeTr. For MVDeTr, our precision was still comparable with the reported value, as there was only a 0.8% decrease in precision, while MODA, MODP, and recall each improved by 1.3%, 0.3%, and 2.6%, respectively. When compared with MVDet, SHOT and MVDeTr, Booster-SHOT outperforms them on all metrics except for precision against MVDeTr. As MVDeTr proposed the shadow transformer as a way to improve performance, we applied it to Booster-SHOT; the results are denoted in Table 2 as Booster-SHOT + Tr. However, we were unable to obtain any meaningful improvement over the purely convolutional approach.

## 5.4 Ablation Experiments

Number of homographies As shown in SHOT Song et al. (2021), since using multiple homographies is essentially a quantized version of a 3D projection, using more homographies leads to better performance for multi-view pedestrian detection. As our method assigns fewer channels to each homography as the number of homographies increases, we test the performance of SHOT with our module implemented for 2, 4, 6, and 8 homographies. Overall, all four metrics show improvement as the number of homographies increases (see Table 3). The 6-homography case has the highest MODP and recall, while the 8-homography case has the highest precision. Both cases have the highest MODA. As the overall performance is very similar, we conclude that the improvement from the increased number of homographies has reached an equilibrium with the decreased number of channels passed to each homography.

Table 3: Performance depending on the number of homographies (MultiviewX)

| Method | #H | MODA | MODP | precision | recall |
|---|---|---|---|---|---|
| SHOT | 5 | 88.3 | 82.0 | 96.6 | 91.5 |
| SHOT + HAM | 2 | 89.4 | 80.8 | 95.2 | 94.2 |
| SHOT + HAM | 4 | 90.6 | 82.2 | 96.8 | 93.8 |
| SHOT + HAM | 6 | 91.4 | 83.1 | 97.4 | 93.9 |
| SHOT + HAM | 8 | 91.4 | 82.6 | 97.5 | 93.8 |

Number of top-K channels Our approach initially determined the number of channels selected per homography based on the number of homographies and the number of input channels. For example, our base approach for 128 input channels and 4 homographies involves selecting the top-32 channels for each homography. We further test the performance of our module when we fix the number of channels selected per homography (hereon denoted as K, in accordance with the name top-K selection) and change the number of output channels accordingly. Setting K = 64 for 4 homographies and 128 input channels indicates we take the top-64 channels for each homography and output 64 × 4 = 256 channels. Table 4 outlines the results we get for K = 4, 8, 16, 32, 64, 128. For MODA, MODP and precision, using the top-16 channels for each homography outperforms the other instances with considerable margins. The top-32 instance (our base approach) improves on the top-16 instance only for recall. We conclude that our channel selection approach is effective in removing irrelevant channels and concentrating relevant information into the selected channels for each homography.
Table 4: Performance depending on the number of selected channels (MultiviewX)

| Method | K | MODA | MODP | precision | recall |
|---|---|---|---|---|---|
| SHOT + HAM | 4 | 90.6 | 81.8 | 97.7 | 92.7 |
| SHOT + HAM | 8 | 90.4 | 82.2 | 97.9 | 92.4 |
| SHOT + HAM | 16 | 91.8 | 82.6 | 98.9 | 92.9 |
| SHOT + HAM | 32 | 90.6 | 82.2 | 96.8 | 93.8 |
| SHOT + HAM | 64 | 90.2 | 82.2 | 96.9 | 93.2 |
| SHOT + HAM | 128 | 89.2 | 81.8 | 96.0 | 93.0 |

Attention Mechanisms In Table 5, we outline the effects of the channel gate and the spatial gate on MVDet, as well as their combination (HAM). It can be observed that both the channel gate and the spatial gate individually improve the performance over MVDet. However, using the channel gate and spatial gate sequentially, in other words HAM, improves MODA and recall while retaining similar precision compared to MVDet, leading to an overall improvement in performance.

Table 5: Performance of attention modules on MVDet (Wildtrack)

| Method | MODA | MODP | precision | recall |
|---|---|---|---|---|
| MVDet | 88.2 | 75.7 | 94.7 | 93.6 |
| MVDet + Channel Gate | 88.8 | 76.0 | 95.1 | 93.6 |
| MVDet + Spatial Gate | 88.6 | 76.6 | 95.5 | 93.0 |
| MVDet + HAM | 89.4 | 75.7 | 95.2 | 94.1 |

## 5.5 Analysis

Efficacy of HAM in comparison to existing methods We emphasize that the novelty of HAM lies in the architectural integration of the attention mechanism for the specific purpose of multi-view aggregation, for which, to the best of our knowledge, our work is the first. Previous attention mechanisms (e.g. CBAM Woo et al. (2018), CCG Abati et al. (2020)) are applied at the convolutional blocks in the backbone network, while HAM is applied after the backbone network since it is tailored toward multi-view aggregation. Consequently, HAM can be seen as complementary to existing attention mechanisms. To illustrate the importance of the design choices of HAM, we compare it with the naive integration of SENet, CBAM, and CCG into Booster-SHOT on MultiviewX. SENet, CBAM, and CCG come after the feature extractor in place of HAM. To provide a common baseline for HAM, SENet, CBAM, and CCG, we provide additional results for "Booster-SHOT without attention". This implementation is equivalent to SHOT Song et al. (2021) with Focal Loss and training-time augmentations. As shown in Table 6, Booster-SHOT outperforms all of the compared methods across the board.

Table 6: Booster-SHOT performance with HAM vs. pre-existing attention mechanisms (MultiviewX)

| Method | MODA | MODP | precision | recall |
|---|---|---|---|---|
| Booster-SHOT w/o attention | 93.2 ± 0.18 | 91.2 ± 0.07 | 99.4 ± 0.04 | 93.7 ± 0.20 |
| Booster-SHOT (SE) | 93.7 ± 0.23 | 88.2 ± 5.66 | 98.1 ± 1.11 | 95.5 ± 0.90 |
| Booster-SHOT (CBAM) | 93.2 ± 0.14 | 90.5 ± 0.14 | 98.5 ± 0.53 | 94.7 ± 0.35 |
| Booster-SHOT (CCG) | 93.4 ± 0.18 | 91.4 ± 0.04 | 99.1 ± 0.11 | 94.2 ± 0.07 |
| Booster-SHOT | 94.4 ± 0.18 | 92.0 ± 0.04 | 99.4 ± 0.07 | 94.9 ± 0.21 |
Only Booster-SHOT without attention shows similar results in precision, a very saturated metric for which Booster-SHOT shows only slightly lower performance. In addition, when compared with Booster-SHOT without attention, adding CBAM, CCG, and SE showed only an increase of at most 0.5% in MODA, while adding HAM boosted MODA by 1.2%.

Attention for different homographies We previously conjectured that the significance of each channel differs for each homography. In the following, we validate this hypothesis through empirical evidence. Note that the following results are shown for the synthetic MultiviewX dataset. Since the results for the real-world Wildtrack dataset are consistent, we refer the reader to the supplementary for the Wildtrack visualizations. Figure 4 shows images created from camera view 1 of the MultiviewX dataset containing output from the channel selection module corresponding to each homography. The channel selection module output is average pooled channel-wise (in this instance, the output for each homography contains 32 channels) and superimposed onto a grayscale version of the original image from the MultiviewX dataset. Yellow areas indicate high values in the output, indicating that the network is attending strongly to those regions. We denote the ground plane as H0 (homography 0) and number the remaining homographies accordingly. We can observe that the output from the channel selection module is homography-dependent, as the yellow areas in all four images differ. We also note that the body parts with the brightest colors align with the heights of the homographies. H0 highlights the feet while H1 highlights the lower body, especially around the knee area. H2 and H3 both highlight the upper body, but H3 extends a bit farther upwards compared to H2. A similar phenomenon has been reported by the SHOT authors for their soft selection module. However, our channel selection module output shows more distinct highlighting of the body parts and is obtained through a completely different method. Overall, these results support the importance of selecting different channels for different homographies.

![10_image_0.png](10_image_0.png)

Figure 4: Homography-wise output from channel selection (left) and spatial attention maps (right)

![10_image_1.png](10_image_1.png)

Figure 5: Heatmap representation of channel selection homography-wise. Deeper yellow colors indicate that the channel is selected most of the time, while deeper blue colors are assigned to channels that are seldom selected.

Figure 4 also shows the attention values from the spatial attention block at the end of our proposed module. All four attention maps show starkly different distributions, confirming our conjecture that different pixels in the feature map can differ in importance for each homography. The results shown above were obtained through an experiment where the distance between homography planes was increased from 10cm to 60cm for MultiviewX. We noticed that, due to the low height of even the top homography plane in the 10cm case (30cm off the ground), the difference between the attention module outputs was not easily noticeable. By increasing the distance between homography planes, we were able to obtain images that clearly show that homographies that are higher off the ground attend to higher regions of the human body.
In addition, we noticed that the foot regression auxiliary loss caused a bias toward the foot region in the extracted image features, thus distorting our heatmap visualization of the attention module outputs. As such, the experiments from which Figure 4, Figure 5 and Figure 6 were obtained did not include auxiliary losses during training (see supplementary). We further provide results averaged over the entire MultiviewX test dataset. Specifically, we visualize how often certain channels are selected for each homography for a given view. We select Booster-SHOT for this experiment. For each channel, we count the number of times it is selected for each homography, divide by the total number of frames in the test set, and display the resulting heatmap in Figure 5. First, it can be observed that the channels that are selected often (yellow hues) show almost no overlap across homographies, again providing evidence for our previous claim that different channels attend to different homographies. Although there are minor differences in the specific number of times some channels are chosen, the channels that are selected for the majority of the test set for each homography are unchanged (see supplementary). Interestingly, we also observe that some channels are not selected at all by any homography while other channels appear to be selected by multiple homographies.

Attention across different views Figure 6 further presents evidence that our channel selection module output is only homography-dependent. We denote the camera views as C1 (camera 1) through C6 for the homography to the ground plane (H0). For all 6 images, the feet and surrounding areas of the pedestrians are highlighted.

![11_image_0.png](11_image_0.png)

Figure 6: Camera view-wise output from channel selection module

Therefore, we conclude that the output from the channel selection module attends consistently to certain features across all camera views.

Generalization across camera views. To evaluate the generalization capabilities for unseen views, we compared different methods on non-overlapping camera subsets from MultiviewX. Adding the HAM module to MVDet and SHOT showed significant improvement in MODA and precision while retaining comparable performance for MODP and recall, providing evidence that HAM facilitates generalization. For extended analysis results, see the supplementary.

**Computational cost, memory consumption and runtime.** We evaluate the benefits of our method in terms of computational cost, via Giga FLoating-point OPerations (GFLOPs), memory consumption, and runtime in seconds. Through one-time inference, we account for floating-point operations such as addition, multiplication, and division.2 For evaluating memory consumption, we count model parameters and buffers. For the runtime, we ran 20 randomly generated tensors with values between 0 and 1 through the models. With our method, upon selecting K channels from the image heatmaps for each homography and using D homographies, the input to the bird's-eye-view (BEV) heatmap generator has K × D channels. A smaller K indicates fewer channels in the input and fewer parameters in the BEV heatmap generator. In addition, as the spatial attention module input in our spatial gate has K channels, we further save on computations during channel-wise pooling.
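As a sketch of how such measurements can be produced with the ptflops package referenced below (our illustration, not the authors' script; the keyword name `x` for the model's forward argument and the use of single random multi-view tensors are assumptions):

```python
import time
import torch
from ptflops import get_model_complexity_info

def profile(model, views=7, h=720, w=1280, n_runs=20):
    # Complexity for one (1, views, 3, h, w) multi-view input.
    def input_constructor(shape):
        return {"x": torch.rand(1, *shape)}  # forward-arg name "x" is an assumption

    macs, params = get_model_complexity_info(
        model, (views, 3, h, w), input_constructor=input_constructor,
        as_strings=True, print_per_layer_stat=False)
    print(f"complexity: {macs}, parameters: {params}")

    # Runtime over n_runs random tensors with values in [0, 1).
    with torch.no_grad():
        times = []
        for _ in range(n_runs):
            x = torch.rand(1, views, 3, h, w)
            start = time.time()
            model(x)
            times.append(time.time() - start)
    print(f"runtime: {sum(times) / len(times):.3f}s")
```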
Table 7: Computational complexity comparison between methods for 4 homographies

| Method | GFLOPs | # of parameters | runtime (seconds) |
|---|---|---|---|
| SHOT | 4.71k | 19.0M | 0.33 ± 0.076 |
| SHOT + HAM (top 4) | 4.09k | 14.9M | 0.29 ± 0.071 |
| SHOT + HAM (top 16) | 4.23k | 16.4M | 0.28 ± 0.074 |
| SHOT + HAM (top 32) | 4.42k | 18.5M | 0.22 ± 0.054 |
| MVDeTr | 2.59k | 12.8M | 0.19 ± 0.00032 |
| MVDeTr + HAM | 2.69k | 12.9M | 0.21 ± 0.0029 |
| Booster-SHOT | 2.54k | 13.3M | 0.19 ± 0.0015 |
| Booster-SHOT w/ Tr | 2.54k | 12.9M | 0.22 ± 0.0016 |

2As the post-processing cost is unchanged regardless of our settings, we take into account the calculations required to go from the RGB image input to a bird's-eye-view heatmap.

Utilizing the ptflops3 package, we count the number of FLOPs for SHOT and SHOT with HAM. Table 7 shows results for SHOT and SHOT with HAM using 4, 16, and 32 channels per homography. We find that SHOT with 4 homographies, our baseline, takes 4705.73 GFLOPs and 19,048,768 parameters. SHOT with HAM and 4 homographies, using only the top-32 channels for each homography, takes 4423.78 GFLOPs and 18,530,184 parameters, yielding a 6.16% improvement in computational cost and a 2.63% improvement in memory usage while outperforming SHOT in all four metrics. Our best performing approach (top 16 in Table 7) takes 4228.66 GFLOPs and 16,438,088 parameters, outperforming SHOT in all four metrics while achieving a 13.7% and 10.2% reduction in computational cost and memory usage. Our most lightweight approach (top 4 in Table 7) takes 4089.63 GFLOPs and 14,881,112 parameters, showing a 21.6% and 13.2% reduction, respectively. It also outperforms SHOT in MODA, precision, and recall with comparable MODP (-0.2%) (see Table 4). In addition, we also tested the additional computational cost and runtime incurred by applying our HAM to previous approaches such as MVDet, SHOT, and MVDeTr. We test the computational cost and runtime of pre-existing methods before and after applying HAM. All experiments use an input tensor of size (1, 7, 3, 720, 1280) (consistent with Wildtrack/MultiviewX).

3https://github.com/sovrasov/flops-counter.pytorch

- SHOT (4 homographies): 4705.73 GFLOPs
- SHOT + HAM (4 homographies): 4423.78 GFLOPs
- MVDeTr: 2594.53 GFLOPs
- MVDeTr + HAM: 2685.69 GFLOPs
- Booster-SHOT: 2537.40 GFLOPs

For SHOT, because we replaced the "soft selection module" with the lighter HAM (HAM takes fewer channels as input, reducing the number of parameters), the computational cost decreases by 5.99%. We also found that by reducing the number of channels (K) selected per homography, applying HAM to SHOT can reduce the computational cost by 13.2% while still improving the performance. For MVDeTr, adding HAM (one homography) results in a 3.51% increase in computational cost. The additional cost of increasing the number of homographies is also minimal:

- SHOT + HAM (6 homographies): 4444.46 GFLOPs
- SHOT + HAM (8 homographies): 4480.92 GFLOPs

The results of the runtime evaluation overall follow the same trend as that of the computational cost. Overall, the results indicate that HAM incurs minimal additional cost when naively applied to existing methods and enables tuning the model to reduce computational and memory costs while boosting performance.

## 6 Conclusion

In this work, we propose a homography attention module (HAM) as a way to improve performance across existing multiview pedestrian detection approaches.
HAM consists of a novel channel gate module that selects the most important channels for each homography and a spatial gate module that applies spatial attention for each homography. In addition, we outline an end-to-end multiview pedestrian detection framework (Booster-SHOT) that takes insight from previous approaches while also incorporating our proposed module. For both Booster-SHOT and previous approaches with HAM, we report new state-of-the-art performance on standard benchmarks while providing extensive empirical evidence that our conjectures and design choices are sound.

## References

Davide Abati, Jakub Tomczak, Tijmen Blankevoort, Simone Calderara, Rita Cucchiara, and Babak Ehteshami Bejnordi. Conditional channel gated networks for task-aware continual learning. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020.

Hamid Aghajan and Andrea Cavallaro. *Multi-camera networks: principles and applications*. Academic press, 2009.

Alexandre Alahi, Laurent Jacques, Yannick Boursier, and Pierre Vandergheynst. Sparsity driven people localization with a heterogeneous network of cameras. *Journal of Mathematical Imaging and Vision*, 2011.

Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. *arXiv preprint arXiv:1412.7755*, 2014.

Pierre Baqué, François Fleuret, and Pascal Fua. Deep occlusion reasoning for multi-camera multi-target detection. In *International Conference on Computer Vision (ICCV)*, 2017.

Jerome Berclaz, Francois Fleuret, Engin Turetken, and Pascal Fua. Multiple object tracking using k-shortest paths optimization. *Transactions on pattern analysis and machine intelligence (T-PAMI)*, 2011.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In *European Conference on Computer Vision (ECCV)*, 2020.

Tatjana Chavdarova and François Fleuret. Deep multi-camera people detection. In *International Conference on Machine Learning and Applications (ICMLA)*, 2017.

Tatjana Chavdarova, Pierre Baqué, Stéphane Bouquet, Andrii Maksai, Cijo Jose, Timur Bagautdinov, Louis Lettry, Pascal Fua, Luc Van Gool, and François Fleuret. Wildtrack: A multi-camera hd dataset for dense unscripted pedestrian detection. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2018.

Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In *Conference on computer vision and pattern recognition (CVPR)*, 2017.

Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *International Conference on Machine Learning (ICML)*, 2020.

Adam Coates and Andrew Y Ng. Multi-camera object detection for robotics. In *International Conference on Robotics and Automation (ICRA)*, 2010.

Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In *International conference on computer vision (ICCV)*, 2017.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
Francois Fleuret, Jerome Berclaz, Richard Lengagne, and Pascal Fua. Multi-camera people tracking with a probabilistic occupancy map. *Transactions on pattern analysis and machine intelligence (T-PAMI)*, 2007.

Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019.

Zilin Gao, Jiangtao Xie, Qilong Wang, and Peihua Li. Global second-order pooling convolutional networks. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019.

Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R Martin, and Shi-Min Hu. Pct: Point cloud transformer. *Computational Visual Media*, 2021a.

Meng-Hao Guo, Tian-Xing Xu, Jiang-Jiang Liu, Zheng-Ning Liu, Peng-Tao Jiang, Tai-Jiang Mu, Song-Hai Zhang, Ralph R Martin, Ming-Ming Cheng, and Shi-Min Hu. Attention mechanisms in computer vision: A survey. *arXiv preprint arXiv:2111.07624*, 2021b.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Conference on computer vision and pattern recognition (CVPR)*, 2016.

Qibin Hou, Li Zhang, Ming-Ming Cheng, and Jiashi Feng. Strip pooling: Rethinking spatial pooling for scene parsing. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020a.

Qibin Hou, Daquan Zhou, and Jiashi Feng. Coordinate attention for efficient mobile network design. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021.

Yunzhong Hou and Liang Zheng. Multiview detection with shadow transformer (and view-coherent data augmentation). In *ACM International Conference on Multimedia*, 2021.

Yunzhong Hou, Liang Zheng, and Stephen Gould. Multiview detection with feature perspective transformation. In *European Conference on Computer Vision (ECCV)*, 2020b.

Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In *Conference on computer vision and pattern recognition (CVPR)*, 2018.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. *Advances in neural information processing systems (NeurIPS)*, 2015.

Hei Law and Jia Deng. Cornernet: Detecting objects as paired keypoints. In *European Conference on Computer Vision (ECCV)*, 2018.

HyunJae Lee, Hyo-Eun Kim, and Hyeonseob Nam. Srm: A style-based recalibration module for convolutional neural networks. In *International Conference on Computer Vision (ICCV)*, 2019.

Jiang-Jiang Liu, Qibin Hou, Ming-Ming Cheng, Changhu Wang, and Jiashi Feng. Improving convolutional networks with self-calibrated convolutions. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020.

Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. *Advances in neural information processing systems (NeurIPS)*, 2018.

Diganta Misra, Trikay Nalamada, Ajay Uppili Arasanipalai, and Qibin Hou. Rotate to attend: Convolutional triplet attention module. In *Winter Conference on Applications of Computer Vision (WACV)*, 2021.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In *Advances in neural information processing systems (NeurIPS)*, 2014.

Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. Attention u-net: Learning where to look for the pancreas.
*arXiv preprint arXiv:1804.03999*, 2018. Jongchan Park, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Bam: Bottleneck attention module. The British Machine Vision Conference (BMVC), 2018. Zequn Qin, Pengyi Zhang, Fei Wu, and Xi Li. Fcanet: Frequency channel attention networks. In *International* Conference on Computer Vision (ICCV), 2021. Gemma Roig, Xavier Boix, Horesh Ben Shitrit, and Pascal Fua. Conditional random fields for multi-camera object detection. In *International Conference on Computer Vision (ICCV)*, 2011. Abhijit Guha Roy, Nassir Navab, and Christian Wachinger. Recalibrating fully convolutional networks with spatial and channel "squeeze and excitation" blocks. *Transactions on medical imaging*, 2018. Aswin C. Sankaranarayanan, Ashok Veeraraghavan, and Rama Chellappa. Object detection, tracking and recognition for multiple smart cameras. *Proceedings of the IEEE*, 2008. Liangchen Song, Jialian Wu, Ming Yang, Qian Zhang, Yuan Li, and Junsong Yuan. Stacked homography transformations for multi-view pedestrian detection. In *International Conference on Computer Vision* (ICCV), 2021. Xiaoxiao Sun and Liang Zheng. Dissecting person re-identification from the viewpoint of viewpoint. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, and Xiaoou Tang. Residual attention network for image classification. In *Conference on computer vision and pattern* recognition (CVPR), 2017. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In *Computer* vision and pattern recognition (CVPR), 2018. Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In *European conference on computer vision (ECCV)*, 2018. Saining Xie, Sainan Liu, Zeyu Chen, and Zhuowen Tu. Attentional shapecontextnet for point cloud recognition. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2018. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning (ICML), 2015. Zongxin Yang, Linchao Zhu, Yu Wu, and Yi Yang. Gated channel transformation for visual recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Yuhui Yuan, Lang Huang, Jianyuan Guo, Chao Zhang, Xilin Chen, and Jingdong Wang. Ocnet: Object context for semantic segmentation. *International Journal of Computer Vision*, 2021. Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019. Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. *arXiv preprint arXiv:2010.04159*, 2020.
Review 1: Summary: The paper proposes an attention mechanism called the Homography Attention Module (HAM), designed to improve the performance of existing multiview detection frameworks for the task of multiview pedestrian detection. The proposed HAM consists of a channel gate and a spatial gate module and can be incorporated into existing methods. The authors also introduce a new end-to-end multiview pedestrian detection framework called Booster-SHOT, which combines HAM with previous approaches and achieves state-of-the-art performance. The paper provides experimental results to support the effectiveness of the proposed approach. Strengths and Weaknesses: --- **Strengths**: * The proposed attention mechanism is a useful addition to existing multi-view detectors, as it can improve their performance with minimal modifications. * The paper provides extensive experimental results on multiple benchmarks. --- **Weaknesses**: * The paper is not easy to read. The authors use terminology such as "homography" without adequately defining it, making it challenging for readers to understand the proposed method and its contributions. Additionally, the proposed attention module lacks proper intuition and explanation, which further complicates comprehension. * The technical novelty of this paper is limited. The key contribution, the homography attention module, combines spatial and channel attention, both of which have been widely explored in the literature. Although the authors acknowledge this in their related work section, they should provide more information on how their proposed module differs from existing methods, as well as more experimental results to confirm its effectiveness. In my opinion, the proposed approach does not introduce any fundamentally new concepts or techniques. * The proposed method does not seem to be limited to pedestrian detection. The authors should consider verifying the effectiveness of their proposed method on other BEV detection/segmentation tasks. --- Requested Changes: Please refer to the weaknesses section for more details. Broader Impact Concerns: N/A ================================================== Review 2: Summary: In this paper, the authors propose an extension to SHOT (stacked homography transformations for multiview pedestrian detection). The key difference from SHOT is that the authors designed a homography attention module (HAM) and inserted it between the multi-view feature extractor and the camera-to-BEV projection module. Albeit simple, results in Section 5 show that HAM brings consistent performance improvements to different multi-view pedestrian detection frameworks including MVDet, SHOT and MVDeTr. Strengths and Weaknesses: ## Strengths: - The authors did a good job in summarizing previous papers in the field of multi-view detection and attention for computer vision (Sec 2, Sec 4). These sections lay a good foundation for the readers to understand the proposed method. - The proposed method is easy to follow and it achieves consistent performance improvements as indicated in Table 2. - The authors did spend good effort in ablating different components in their design. ## Weaknesses: - The proposed method has limited novelty. At a high level, I did not see too many differences between the proposed HAM and the well-known squeeze-and-excitation (SE) module. The authors also cited a bunch of attention papers in vision (p4), but they did not compare the proposed HAM with any of the methods cited.
This makes me question whether it is necessary to propose a new attention mechanism. One could also directly reuse existing designs such as SE, global context blocks, non-local blocks, etc. - The authors did not seem to mention a lot of details about the backbone network used in this paper. I'm curious whether the proposed HAM will still work if there are a lot of spatial/channel attention designs in the backbone (e.g., ViT or SE-Nets as the backbone). - According to the ablation studies, it seems that neither channel gating nor spatial gating works for the recall metric. The overall improvement brought by HAM on MODP also seems to be non-existent. - The authors did not provide error bars for all experiments. I'm not sure whether the improvements are significant enough when a fluctuation range is taken into consideration. Requested Changes: The authors are expected to respond to the aforementioned "weaknesses" in the revision. It is very important to distinguish HAM from existing attention modules in vision. Besides that, I'm also interested in the runtime overhead of HAM (mentioned by Reviewer iVUb) and the possibility of incorporating HAM into self-driving workloads (mentioned by the other two reviewers). Broader Impact Concerns: The paper does not have ethical implications in my opinion. ================================================== Review 3: Summary: The paper proposes an attention mechanism, called the Homography Attention Module (HAM), specifically designed to improve multi-view pedestrian detection frameworks. The HAM consists of a channel gate module and a spatial gate module, and can be readily integrated into existing methods. The paper also introduces Booster-SHOT, a new end-to-end multi-view pedestrian detection framework that combines HAM and insights from previous approaches. Experimental results show that incorporating HAM into previous frameworks and using Booster-SHOT leads to improved performance and achieves state-of-the-art results. The paper fills a gap in the field of multi-view pedestrian detection, as no dedicated attention mechanism had been proposed for this task before. Strengths and Weaknesses: Pros: 1. Detailed ablation studies demonstrate the design choice of each component and parameter. Cons: 1. The organization of the paper should be improved. For example, the paper lacks an overview of the proposed method and the closely related work SHOT, which makes it hard for readers who are unfamiliar with the context to follow. 2. If the top-K channels are selected for each homography, how is the entire method fully end-to-end differentiable? 3. The proposed method is quite similar to CBAM in my view. According to the description of the authors, the spatial attention module is the same as in previous work, while the difference in the channel attention module only lies in splitting the channels into homography groups. I am not sure whether these contributions meet the acceptance criteria of the journal. Requested Changes: 1. The paper should be more self-contained. An introduction to SHOT would be helpful to understand how the proposed component is injected into existing methods. 2. Besides the theoretical FLOPs, actual running time is also needed. 3. Is it possible to integrate the proposed module into modern BEV detectors for autonomous driving? It seems the principle of multi-view aggregation is quite similar. If the authors could provide such evidence, the results would be more convincing.
Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: Post revision, all three expert reviewers remain unconvinced about the validation of the proposed technique, as well as the scope and applications of the work. They all recommend rejection. The AE agrees and recommends rejection. ==================================================
# DFML: Decentralized Federated Mutual Learning

Yasser H. Khalil *yasser.khalil1@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Amir H. Estiri *amir.hossein.estiri1@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Mahdi Beitollahi *mahdi.beitollahi@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Nader Asadi *nader.asadi@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Sobhan Hemati *sobhan.hemati@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Xu Li *xu.lica@huawei.com* Huawei Technologies Canada Inc., Ottawa, Canada.

Guojun Zhang *guojun.zhang@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Xi Chen *xi.chen4@huawei.com* Huawei Noah's Ark Lab, Montreal, Canada.

Reviewed on OpenReview: *https://openreview.net/forum?id=I9HvzJbUbh*

## Abstract

In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure. Additionally, contemporary devices inherently exhibit model and data heterogeneity. Existing work lacks a Decentralized FL (DFL) framework capable of accommodating such heterogeneity without imposing architectural restrictions or assuming the availability of additional data. To address these issues, we propose a Decentralized Federated Mutual Learning (DFML) framework that is serverless, supports nonrestrictive heterogeneous models, and avoids reliance on additional data. DFML effectively handles model and data heterogeneity through mutual learning, which distills knowledge between clients, and through cyclically varying the amount of supervision and distillation signals. Extensive experimental results demonstrate the consistent effectiveness of DFML in both convergence speed and global accuracy, outperforming prevalent baselines under various conditions. For example, with the CIFAR-100 dataset and 50 clients, DFML achieves a substantial increase of **+17.20%** and **+19.95%** in global accuracy under Independent and Identically Distributed (IID) and non-IID data shifts, respectively.

## 1 Introduction

Federated Learning (FL) stands as a promising paradigm in machine learning that enables decentralized learning without sharing raw data, thereby enhancing data privacy. Although Centralized FL (CFL) has been predominant in the literature (McMahan et al., 2017; Alam et al., 2022; Diao et al., 2020; Horvath et al., 2021; Caldas et al., 2018), it relies on a central server. Communication with a server can be a bottleneck, especially when numerous dispersed devices exist, and a server is vulnerable to a single point of failure. To avoid these challenges, Decentralized FL (DFL) serves as an alternative, facilitating knowledge sharing among clients without the need for a central server (Beltrán et al., 2023; Yuan et al., 2023; Giuseppi et al., 2022; Li et al., 2021). DFL also offers computational and energy efficiency, as resources are distributed across clients instead of being concentrated in a centralized source. Furthermore, the distribution of computational loads across clients allows DFL to offer greater scalability, enabling the involvement of a larger number of clients and even the support of larger-scale models without overburdening a central server. Federated Averaging (FedAvg) (McMahan et al., 2017) is a widely adopted FL approach that disperses knowledge by averaging the parameters of models. Despite its advantages, FedAvg faces a significant limitation: the lack of support for model heterogeneity.
This limitation becomes impractical in real-world scenarios where devices inherently possess diverse architectures. The problem is exacerbated by heterogeneity in data between clients, which affects the preservation of global knowledge (Ye et al., 2023). Figure 1 demonstrates the adverse effects of model and data heterogeneity using decentralized FedAvg. This entails a need for a novel framework that better supports model and data heterogeneity in DFL. In this paper, we quantify global knowledge using global accuracy, which is measured on a global test dataset.

![1_image_0.png](1_image_0.png)

Figure 1: Demonstrating the adverse impact of model and data heterogeneity on global accuracy using decentralized FedAvg. The experiment uses the CIFAR-100 dataset with 50 clients. Homogeneous models and IID data signify clients with identical model architectures and data distributions. In contrast, heterogeneous models and non-IID data indicate variations in both model architectures and data distributions among clients. Additional experimental details can be found in Section 4.4.1.

Researchers have extended FedAvg to support model heterogeneity, but these extensions often impose constraints on model architectures (Diao et al., 2020; Alam et al., 2022; Shen et al., 2020). Another approach that addresses model heterogeneity in FL involves mutual learning, where models collaboratively learn by teaching each other a task (Zhang et al., 2018; Li et al., 2021). Mutual learning enables knowledge transfer as models mimic each other's class probabilities. In this process, each client acts as both a student and a teacher, and the objective function comprises a supervision loss component and another component responsible for distilling knowledge from experts (Hinton et al., 2015). However, existing works utilizing knowledge distillation require a server or additional data (Lin et al., 2020; Li & Wang, 2019; Li et al., 2020a). Reliance on public data can be impractical, especially in sensitive domains such as health. Other works rely on generating synthetic data to perform knowledge transfer (Zhang et al., 2022b;a; Li et al., 2020b; Heinbaugh et al., 2023; Dai et al., 2024), which introduces an extra privacy concern. Thus, a solution that avoids using additional data is more desirable.

To this end, we propose a Decentralized Federated Mutual Learning (DFML) framework that is 1) serverless, 2) supports model heterogeneity without imposing any architectural constraints, and 3) does not require additional public data. We define decentralized or serverless systems as those lacking a dedicated, centralized client responsible for managing knowledge transfer. Table 1 highlights the advantages of our proposed DFML over prior arts. As will be shown in the results section, DFML outperforms the other baselines in addressing the model and data heterogeneity problems in DFL.

Table 1: Comparison between DFML and other FL methods.

| Framework   | No Server | Nonrestrictive Heterogeneous | No Additional Data |
|-------------|-----------|------------------------------|--------------------|
| FedAvg      | ✗         | ✗                            | ✓                  |
| HeteroFL    | ✗         | ✗                            | ✓                  |
| FedRolex    | ✗         | ✗                            | ✓                  |
| FML         | ✗         | ✗                            | ✓                  |
| FedDF       | ✗         | ✓                            | ✗                  |
| FedMD       | ✗         | ✓                            | ✗                  |
| Def-KT      | ✓         | ✗                            | ✓                  |
| DFML (Ours) | ✓         | ✓                            | ✓                  |

![2_image_0.png](2_image_0.png)

Figure 2: Our proposed DFML framework.
In each communication round t, randomly selected clients (senders) send their locally trained models Wn to another randomly chosen client (aggregator). Mutual learning takes place at the aggregator using α(t). The updated models W+_n and α(t) are then transmitted back to the senders. α(t) controls the impact of the loss components in the objective function (see Section 3) and is computed based on a scheduler function. t denotes the current communication round. Different shapes and sizes signify model and data heterogeneity. In this example, clients 2 and 4 act as senders, while client 1 serves as the aggregator.

Figure 2 depicts our proposed DFML framework. In each communication round, multiple clients (senders) transmit their models to another client (aggregator) for mutual learning, effectively handling model heterogeneity. DFML leverages the aggregator's data for knowledge distillation, eliminating the need for extra data. Additionally, DFML addresses data heterogeneity by employing re-Weighted SoftMax (WSM) cross-entropy (Legate et al., 2023), which prevents models from drifting toward local objectives. Furthermore, we observed varying performance levels¹ when using different fixed values to balance the ratio between the supervision and distillation loss components in the objective function. In response, we propose a cyclic variation (Loshchilov & Hutter, 2016; Smith, 2017) of the ratio between these loss components. The cyclical approach iteratively directs the model's objective between fitting the aggregator's data and relying on experts to distill knowledge using that data. Gradually, as the distillation signal becomes dominant, global knowledge increases and eventually reaches a peak. This cyclical knowledge distillation further enhances global knowledge compared to using a fixed ratio between the supervision and distillation loss components.

In this paper, we empirically demonstrate the superior performance of DFML over state-of-the-art baselines in terms of convergence speed and global accuracy. The main contributions of this paper are summarized as follows:

- We propose a novel mutual learning framework that operates in DFL, supports nonrestrictive heterogeneous models, and does not rely on additional data.
- We propose cyclically varying the ratio between the supervision and distillation signals in the objective function to enhance global accuracy.

## 2 Related Work

Due to the vastness of the FL literature, we restrict our review to the works most relevant to our research. This includes studies that support homogeneous and heterogeneous architectures, operate in DFL, address catastrophic forgetting, and adapt knowledge distillation.

¹ Refer to Section 5.2.

## 2.1 Homogeneous And Restrictive Heterogeneous Support

FL aims to learn global knowledge from distributed clients without sharing their private data. FedAvg (McMahan et al., 2017) uses a server to perform parameter averaging on locally updated models, but its support is limited to homogeneous architectures due to the nature of parameter averaging. In contrast, decentralized FedAvg (Li et al., 2021; Roy et al., 2019; Giuseppi et al., 2022; Savazzi et al., 2020) depends on client-to-client communication, with aggregation occurring on any of the participating clients. Other methods support heterogeneous models through parameter averaging; however, their support is constrained.
Methods like Federated Dropout (Caldas et al., 2018), HeteroFL (Diao et al., 2020), and FedRolex (Alam et al., 2022) employ partial training with a random, static, and rolling technique for sub-model extraction, respectively. Lastly, FML (Shen et al., 2020) conducts mutual learning between personalized (heterogeneous) models and a global (homogeneous) model. However, FML assumes the existence of a global model, and all the global knowledge resides in that model instead of in the clients' heterogeneous models. This differs from our goal of transferring global knowledge to each of the clients' heterogeneous models.

## 2.2 Nonrestrictive Heterogeneous Support

Works that support model heterogeneity without imposing constraints exist; however, they require assistance from a server and the availability of public data (Li & Wang, 2019; Lin et al., 2020). FedMD (Li & Wang, 2019) assumes the availability of public data on all clients. In FedMD, clients generate predictions using the public data, which are then communicated to the server for averaging. The averaged predictions are then communicated back to the clients and are used to update the heterogeneous models via knowledge distillation. FedDF (Lin et al., 2020) communicates heterogeneous models to a server, where prototypes are assumed to exist. The server facilitates parameter averaging of models with the same architectures, and knowledge distillation using unlabelled public data. The reliance on a server and additional data limits the applicability of these methods.

## 2.3 Decentralized FL

Def-KT (Li et al., 2021) operates within a DFL framework, where clients communicate models to other clients (aggregators) for mutual learning. Despite its serverless nature, Def-KT only supports homogeneous architectures, as it replaces the aggregator's model with the incoming model. This hinders its effectiveness in scenarios with model heterogeneity.

## 2.4 Catastrophic Forgetting

Work on catastrophic forgetting focuses on acquiring new tasks without forgetting previous knowledge (Wang et al., 2023; De Lange et al., 2021). In the context of FL among clients with non-IID data distributions, catastrophic forgetting occurs and prevents models from reaching optimal global accuracy. To mitigate this issue, researchers aim to shield the learned representations from drastic adaptation. Asymmetric cross-entropy (ACE) is applied to the supervision signal of the objective function to address the representation drift problem (Caccia et al., 2021); ACE uses a masked softmax on the classes that do not exist in the current data. Another approach, proposed by Legate et al. (2023), modifies the loss function of each client using re-Weighted SoftMax (WSM) cross-entropy, with re-weighting based on each client's class distribution.

## 2.5 Adaptive Knowledge Distillation

In the literature, existing works have explored scaling the loss function in knowledge distillation (Zhou et al., 2021; Clark et al., 2019); however, the advancements in this area have been minimal. WSL (Zhou et al., 2021) handles a sample-wise bias-variance trade-off during distillation, while ANL-KD (Clark et al., 2019) gradually transitions the model from distillation to supervised learning. Early in training, the model primarily distills knowledge to leverage a useful training signal, and towards the end of training, it relies more on labels to surpass its teachers. Additionally, although FedYogi (Reddi et al., 2020) proposes adaptive optimization, its reliance on a server limits its applicability.
## 3 Proposed Approach

To begin with, we outline the DFL system within which our proposed DFML operates. Then, we provide a detailed explanation of DFML without delving into the discussion of varying the balance between the loss components. Lastly, we explain cyclical knowledge distillation and peak models.

## 3.1 System Setup

In our DFL setup, there are N clients; each client n is equipped with local training data Dn ∈ {D1, D2, ..., DN}, a regular model Wn ∈ {W1, W2, ..., WN}, a peak model Ŵn ∈ {Ŵ1, Ŵ2, ..., ŴN}, and a local weight alpha αn ∈ {α1, α2, ..., αN}. In each communication round, a client is chosen at random to act as the aggregator a; the choice of the aggregator is determined by the previous aggregator from the preceding round. In the initial round, the client with the lowest ID number is assigned the role of the aggregator. Additionally, several clients are randomly chosen as senders S to send their models to the aggregator.

## 3.2 DFML Formulation

The goal of DFML is to enable the exchange of local information among clients without the need to share raw data. DFML facilitates knowledge sharing across multiple communication rounds T until convergence. In each round t ∈ T, a set of participants P = S ∪ {a} is randomly selected. Each selected client n ∈ P first trains its regular model Wn locally on its private data Dn. This ensures that Wn retains its local knowledge, allowing knowledge transfer to other participants during the aggregation process; the process of distilling knowledge to other models is referred to as aggregation. Subsequently, all locally trained models are sent to the aggregator for the aggregation process. When multiple clients send their models to the aggregator, multiple experts contribute during aggregation, enhancing the accuracy of the global knowledge transfer.

**Algorithm 1** DFML

```
Input: Initialize N clients; each client n has data Dn and two models: regular Wn and peak Ŵn.
       The local weight alpha of each client is αn = 0.
for communication round t = 1, 2, ..., T do
    Randomly select one aggregator a ∈ {1, ..., N}
    Randomly select senders S ⊂ {1, ..., N}, a ∉ S
    Participants P = S ∪ {a}
    \\ Client side
    for all n ∈ P do
        for all batches Xn ∈ local data Dn do
            Wn ← locally train Wn using Equation 2
    Send locally updated models Ws for all s ∈ S to a
    \\ Aggregator side
    α(t) ← scheduler(·)
    for k = 1, 2, ..., K do
        for all batches Xa ∈ local data Da do
            for all n ∈ P do
                zn ← logits(Wn, Xa)
            for all n ∈ P do
                Update Wn using α(t) and the logits according to Equation 1
    Send back the updated models W+_s and α(t) for all s ∈ S
    \\ Client side
    for all n ∈ P do
        Wn ← W+_n                  \\ Update regular model
        if α(t) ≥ αn then
            Ŵn ← W+_n              \\ Update peak model
            αn ← α(t)
```
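To make the round structure concrete, the following is a minimal Python sketch of one communication round mirroring Algorithm 1. The `Client` container with `.model` and `.data` fields, and the `local_train` and `mutual_learning_step` callables, are illustrative assumptions standing in for the Equation 2 (local WSM) and Equation 1 (weighted mutual learning) updates; they are not part of the paper.

```python
# Minimal sketch of one DFML communication round (cf. Algorithm 1).
import random

def communication_round(clients, alpha_t, num_senders,
                        local_train, mutual_learning_step, K=10):
    # Randomly pick one aggregator and a disjoint set of senders.
    aggregator = random.choice(clients)
    senders = random.sample([c for c in clients if c is not aggregator], num_senders)
    participants = senders + [aggregator]

    # Client side: each participant trains locally on its private data D_n.
    for client in participants:
        for x, y in client.data:
            local_train(client.model, x, y)                    # Equation 2

    # Aggregator side: K passes of weighted mutual learning on D_a only,
    # so no additional or public data is ever required.
    for _ in range(K):
        for x, y in aggregator.data:
            logits = {c: c.model(x) for c in participants}
            for c in participants:
                mutual_learning_step(c.model, logits, alpha_t, x, y)  # Equation 1

    return participants  # the updated models W+_n are sent back to the senders
```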
DFML employs weighted mutual learning for aggregation to allow models to collaboratively learn from each other using the aggregator's data. This technique ensures that larger models contribute more significantly to knowledge transfer than smaller models, leveraging their finer knowledge. DFML conducts the aggregation process K times to maximize knowledge transfer without impacting the communication cost per round. Following this, all updated models W+_n are transmitted back to their respective senders, and each participant n ∈ P replaces its model Wn with the updated version W+_n. This entire process is repeated until global knowledge has been disseminated across the entire network. Cyclical knowledge distillation and the peak models are explained in Section 3.2.1. Algorithm 1 describes our proposed DFML framework.

The objective function of DFML is as follows:

$$\mathcal{L}=(1-\alpha)\mathcal{L}_{\mathrm{WSM}}+\alpha\mathcal{L}_{\mathrm{KL}},\tag{1}$$

where $\mathcal{L}_{\mathrm{WSM}}$ represents the supervision loss signal computed using re-weighted softmax cross-entropy (WSM) (Legate et al., 2023), and $\mathcal{L}_{\mathrm{KL}}$ represents the distillation loss signal computed by the Kullback–Leibler divergence (KL). The hyperparameter α controls the balance between these two loss components. $\mathcal{L}_{\mathrm{WSM}}$ is defined as follows:

$$\mathcal{L}_{\mathrm{WSM}}=-\sum_{x\in X_{a}}\left[z_{n}(x)_{y(x)}-\log\left(\sum_{c\in C}\beta_{c}\,e^{z_{n}(x)_{c}}\right)\right],\tag{2}$$

where Xa represents a data batch drawn from the distribution Da of aggregator a, zn denotes the logits of data sample x under model weights Wn of client n, y(x) is the data label, βc is a vector entry representing the proportion of label c present in the aggregator's dataset, and C is the set of classes in the entire dataset. During the aggregation process, $\mathcal{L}_{\mathrm{WSM}}$ has access only to the aggregator's data and is computed for each Wn available at the aggregator. During the local training stage, i.e., before the models are sent to the aggregator, $\mathcal{L}_{\mathrm{WSM}}$ is also used at each client n, exploiting its private data for local training. The distillation loss component $\mathcal{L}_{\mathrm{KL}}$ is defined as follows:

$$\mathcal{L}_{\mathrm{KL}}=\sum_{x\in X_{a}}\sum_{\begin{subarray}{c}q\in\mathcal{P}\\ q\neq n\end{subarray}}\left[\frac{\Phi_{q}}{\sum_{\begin{subarray}{c}u\in\mathcal{P}\\ u\neq n\end{subarray}}\Phi_{u}}\,\mathrm{KL}\bigl(p_{q}(x)\,\|\,p_{n}(x)\bigr)\right],\tag{3}$$

where Xa again corresponds to a data batch drawn from the distribution Da of aggregator a, and P denotes the set of participants in mutual learning, including the senders and the aggregator. Φ represents the model size based on the number of trainable parameters, and u is a dummy variable indexing all teachers. Finally, pq(x) and pn(x) are the predictions of teacher q and student n for the data sample x under model weights Wq and Wn, respectively.

The use of WSM in both local training and mutual learning serves as a protective measure against catastrophic forgetting, which arises from shifts in data distribution between clients. WSM ensures that the models update their parameters in proportion to the available labels. This strategy prevents models from altering their accumulated knowledge on labels that are not present in the current data distribution, thereby safeguarding against catastrophic forgetting.
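A sketch of how Equations 1–3 could be computed in PyTorch is given below. The tensor layout, the `beta` vector (the aggregator's per-class label proportions βc), and the `sizes` list (each teacher's parameter count Φ) are assumptions about representation rather than prescriptions from the paper; the temperature is left at 1, as in our implementation details (Appendix A.2).

```python
# Sketch of the DFML objective (Equations 1-3) in PyTorch.
import torch
import torch.nn.functional as F

def wsm_loss(logits, labels, beta):
    # Equation 2: -[z_y - log(sum_c beta_c * exp(z_c))]. Classes absent from
    # the aggregator's data have beta_c = 0 and are effectively masked out.
    weighted_lse = torch.logsumexp(logits + torch.log(beta), dim=1)
    return (weighted_lse - logits.gather(1, labels[:, None]).squeeze(1)).mean()

def dfml_loss(student_logits, teacher_logits, sizes, labels, beta, alpha):
    # Equation 3: KL to every teacher, weighted by its share of the total
    # teacher parameter count, so larger models contribute more.
    total = float(sum(sizes))
    kl = sum(
        (phi / total) * F.kl_div(
            F.log_softmax(student_logits, dim=1),
            F.softmax(t_logits, dim=1),
            reduction="batchmean",
        )
        for t_logits, phi in zip(teacher_logits, sizes)
    )
    # Equation 1: convex combination of supervision and distillation signals.
    return (1 - alpha) * wsm_loss(student_logits, labels, beta) + alpha * kl
```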
## 3.2.1 Cyclic Knowledge Distillation

Cyclical knowledge distillation is manifested by periodically adjusting the value of α in the objective function across communication rounds. Inspired by (Loshchilov & Hutter, 2016; Smith, 2017; Izmailov et al., 2018), we use a cosine annealing scheduler, defined in Equation 4, to generate the cyclical behavior. This dynamic variation in α occurs with each new aggregator selection, leading to mutual learning with a distinct α value at each round. The cyclical process, influencing the balance between the supervision and distillation loss components, contributes to an overall increase in global knowledge, where global knowledge is measured by global accuracy throughout training:

$$\alpha^{(t)}=\alpha_{min}+\frac{1}{2}(\alpha_{max}-\alpha_{min})\left(1+\cos\!\left(\frac{t}{T}\pi\right)\right),\tag{4}$$

where α^(t) is the α value at the current communication round t, T is the maximum communication round, and αmin and αmax define the range of α values.

Figure 3 depicts the impact of cyclical α on global accuracy. When the supervision signal dominates, each Wn exclusively learns from Da without collaborating with other models (experts). This focus on the supervision signal directs the model's objective toward Da, causing Wn to lose previously acquired global knowledge from earlier rounds. As the distillation signal gains prominence, Wn begins to reacquire global knowledge, facilitating simultaneous knowledge distillation among all models. With a dominant distillation signal, each model exclusively learns from the other experts, maximizing the global knowledge in each one. Therefore, the peak in global accuracy is reached when the distillation signal is dominant (α is maximum), and the lowest is attained when the supervision signal is dominant (α is minimum). We observed that cyclically changing between the two signals leads to a higher global accuracy compared to using either one exclusively or a linear combination of them. This cyclical adjustment of α is crucial for the continuous growth in global accuracy throughout training, albeit with fluctuations in global accuracy.

![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

![6_image_2.png](6_image_2.png)

Figure 3: Illustrating the impact of cyclically varying α on global accuracy. Peak models are updated up to the first α maximum and every subsequent time α reaches its maximum limit. In this example, α is varied using a cosine annealing scheduler.

## 3.2.2 Peak Models

To counteract the undesirable fluctuations in global accuracy resulting from the cyclical process, we introduce an additional model for each client, termed the *peak model* Ŵn. The primary role of Ŵn is to retain the best global parameters of Wn. Specifically, each Ŵn is updated whenever Wn is aggregated with a dominant distillation signal. The Ŵn are detached from the training process and are kept in a frozen state, preserving the maximum global accuracy achieved so far. Also, the peak models are continuously updated from the initial communication round up to the first α maximum, allowing them to quickly reach the first global accuracy peak. Thus, the peak models act as a stabilizing mechanism, retaining the optimal global knowledge attained.
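The schedule and the peak-model rule can be summarized in a few lines. The sketch below assumes each client object tracks the best α it has seen so far (the αn of Algorithm 1); the function names are illustrative.

```python
# Sketch of the cyclic alpha schedule (Equation 4) and the peak-model update.
import copy
import math

def alpha_schedule(t, T, alpha_min=0.0, alpha_max=1.0):
    # Equation 4: alpha is alpha_max at t = 0 (distillation-dominant) and
    # anneals to alpha_min at t = T (supervision-dominant); restarting the
    # cycle produces the oscillation shown in Figure 3.
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1 + math.cos(math.pi * t / T))

def adopt_update(client, updated_model, alpha_t):
    client.model = updated_model                          # regular model W_n
    if alpha_t >= client.alpha:                           # distillation dominant
        client.peak_model = copy.deepcopy(updated_model)  # frozen peak model
        client.alpha = alpha_t
```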
## 4 Experiments

## 4.1 Dataset

We evaluate our proposed DFML against prevalent baselines using five datasets, including CIFAR-10/100, FMNIST, Caltech101, Oxford Pets, and Stanford Cars. The evaluation covers experiments on two data distribution shifts: Independent and Identically Distributed (IID) and non-IID. The non-IID distribution involves a heavy label shift based on the Dirichlet distribution with β = 0.1. Each dataset is distributed among clients by dividing the train set into N splits, either evenly for IID or using the Dirichlet distribution for non-IID. Each split is further segmented into training and validation sets following an 80:20 ratio. For Caltech101, samples are first split 80:20, where the 20% represents the global test set, and the remaining samples follow the splitting strategy defined above. Validation sets are employed to assess local performance, while the entire test set evaluates the global accuracy of the clients. The global accuracy of DFML is evaluated by examining the peak models unless stated otherwise. The data partitions for all clients are available in Appendix A.1.
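For concreteness, the following is a minimal sketch of this label-shift partitioning. The function name and seeding are illustrative assumptions, while the Dirichlet concentration β = 0.1 and the 80:20 train/validation split follow the setup stated above.

```python
# Sketch of the Dirichlet-based non-IID partition with a per-client
# 80:20 train/validation split.
import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Share of class c assigned to each client; a small beta yields a
        # heavy label shift, while a large beta approaches the IID split.
        proportions = rng.dirichlet(beta * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, shard in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    splits = []
    for ci in client_indices:
        ci = rng.permutation(ci)          # shuffle before the 80:20 split
        cut = int(0.8 * len(ci))
        splits.append((ci[:cut], ci[cut:]))
    return splits
```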
## 4.2 Implementations

In our experiments, we utilize CNN, ResNet, ViT, and EfficientNet as our model architectures. The evaluation of DFML encompasses three architecture modes: homogeneous, restrictive heterogeneous, and nonrestrictive heterogeneous. We name these three modes H0, H1, and H2, respectively. Details of these modes and the associated model architectures are outlined in Table 10. To ensure a fair comparison, all experiments, including the baselines, are run with WSM instead of Cross-Entropy (CE), unless specified otherwise. Further implementation details are available in Appendix A.2.

Table 2: Different model specifications supported by the baselines and DFML in the network.

| Framework     | Different Model Types | Different # of Layers | Different Width of Layers |
|---------------|-----------------------|-----------------------|---------------------------|
| Dec. FedAvg   | ✗                     | ✗                     | ✗                         |
| Dec. FedProx  | ✗                     | ✗                     | ✗                         |
| Def-KT        | ✗                     | ✗                     | ✗                         |
| Dec. HeteroFL | ✗                     | ✗                     | ✓                         |
| Dec. FedRolex | ✗                     | ✗                     | ✓                         |
| DFML (Ours)   | ✓                     | ✓                     | ✓                         |

## 4.3 Baselines

To address the absence of baselines tailored for DFL settings, we derive baselines by decentralizing some state-of-the-art CFL algorithms, adapting them to function within our DFL system. We begin our experiments by constraining model architectures to enable comparison with existing baselines. The initial experiments focus on homogeneous architectures for direct comparison with baselines such as decentralized FedAvg, FedProx (Li et al., 2020c), and Def-KT; our derived decentralized versions of FedAvg and FedProx are referred to as decentralized FedAvg and decentralized FedProx, respectively, while Def-KT intrinsically operates in a DFL framework. Subsequently, we conduct experiments with restrictive heterogeneous architectures, where models have the same number of layers but different widths. This allows comparison of DFML with our derived decentralized versions of the partial training methods (HeteroFL and FedRolex) alongside FedAvg. Further details on the derived decentralized baselines are provided in Appendix A.3. Following this, we demonstrate the full capabilities of DFML by conducting experiments with nonrestrictive heterogeneous architectures, which are incompatible with partial training algorithms. The only baseline available for this set of experiments is decentralized FedAvg. In this paper, we omit the comparison of DFML with baselines that require additional data, such as (Lin et al., 2020; Li & Wang, 2019), to ensure fairness.

Table 2 summarizes the model features supported by the baselines and our proposed DFML in the network. The decentralized FedAvg, decentralized FedProx, and Def-KT baselines only accommodate homogeneous models, requiring all models to be of the same type, with the same number of layers, and with each layer having the same dimensions. The decentralized HeteroFL and FedRolex baselines support models of the same type, with the same number of layers, but they allow layers to have different widths. In contrast, our proposed DFML can support different model types, models with different numbers of layers, and varying widths.

## 4.4 Results

We evaluate DFML against other state-of-the-art baselines. We first demonstrate the effectiveness of DFML in handling model and data heterogeneity. Second, we show that DFML outperforms all baselines in terms of final convergence speed and final global accuracy across three architecture modes: homogeneous, restrictive heterogeneous, and nonrestrictive heterogeneous architectures. Next, we demonstrate the performance of each cluster of architectures in an experiment with nonrestrictive heterogeneous architectures. Finally, we present the scalability of DFML under a significant model heterogeneity scenario.

## 4.4.1 Model And Data Heterogeneity

Figure 4 demonstrates that our proposed DFML, under model and data heterogeneity, mitigates the impact on global accuracy more effectively than decentralized FedAvg and HeteroFL. To ensure a fair comparison between homogeneous and heterogeneous experiments, we maintained the same average number of parameters in each case. This was achieved by selecting the median model of the five different architectures from the heterogeneous experiment as the model used in the homogeneous experiment.

![8_image_0.png](8_image_0.png)

Figure 4: Demonstrating the global accuracy gain DFML achieves in comparison with decentralized FedAvg and HeteroFL under model and data heterogeneity. The CIFAR-100 dataset is used with 50 clients and CNN architectures.

## 4.4.2 Homogeneous Architectures

Table 3 demonstrates that our DFML outperforms decentralized FedAvg, decentralized FedProx, and Def-KT in terms of global accuracy across datasets and under the two data distributions. Larger improvements are recorded under non-IID data shifts. To align the number of communications per round for all baselines, adjustments were made for Def-KT by setting the number of aggregators to 25; this is necessary because in Def-KT each sending model must be received by a different aggregator. Decentralized FedProx is not explored further in the remaining experiments, as it yielded approximately the same performance as decentralized FedAvg under homogeneous architectures. Centralized FedProx requires the availability of a global model at each participating client to restrict local updates to stay close to the global model. However, in a decentralized setting, a single global model does not exist, and each client treats its local model, before local training, as the global model. This limitation explains why FedProx did not outperform FedAvg in the decentralized setting.

Table 3: Global accuracy comparison using homogeneous CNN architectures with 50 clients and 25 senders. For Def-KT, 25 aggregators are selected, and 1 aggregator for the other methods.

| Method       | CIFAR-10 IID     | CIFAR-10 non-IID | CIFAR-100 IID    | CIFAR-100 non-IID |
|--------------|------------------|------------------|------------------|-------------------|
| Dec. FedAvg  | 80.00 ± 0.19     | 74.68 ± 0.06     | 48.32 ± 0.18     | 43.17 ± 0.16      |
| Dec. FedProx | 79.93 ± 0.04     | 74.55 ± 0.19     | 47.97 ± 0.34     | 42.41 ± 0.47      |
| Def-KT       | 79.29 ± 0.18     | 72.59 ± 0.34     | 48.43 ± 0.18     | 43.84 ± 0.24      |
| DFML (Ours)  | **80.96 ± 0.07** | **76.27 ± 0.40** | **50.47 ± 0.29** | **46.41 ± 0.48**  |

Table 4: Global accuracy comparison using restrictive heterogeneous architectures with 50 clients and 25 senders. Architectures used are CNN, ResNet, and ViT.

**CNN**

| Method        | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|---------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg   | 72.09 ± 0.20 | 59.84 ± 0.18     | 33.59 ± 0.06  | 27.03 ± 0.23      |
| Dec. HeteroFL | 73.96 ± 0.21 | 64.21 ± 0.67     | 37.62 ± 0.16  | 32.98 ± 0.10      |
| Dec. FedRolex | 74.38 ± 0.02 | 64.30 ± 0.41     | 37.65 ± 0.27  | 32.83 ± 0.16      |
| DFML (Ours)   | 79.02 ± 0.04 | 73.87 ± 0.19     | 48.60 ± 0.03  | 44.26 ± 0.20      |

**ResNet**

| Method        | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|---------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg   | 79.84 ± 0.16 | 54.93 ± 0.76     | 39.98 ± 0.45  | 28.80 ± 0.24      |
| Dec. HeteroFL | 81.70 ± 0.11 | 68.92 ± 0.60     | 43.33 ± 0.34  | 34.92 ± 0.29      |
| Dec. FedRolex | 81.70 ± 0.17 | 69.41 ± 0.19     | 43.30 ± 0.20  | 34.94 ± 0.43      |
| DFML (Ours)   | 85.68 ± 0.01 | 71.24 ± 0.02     | 53.32 ± 0.07  | 46.27 ± 0.38      |

**ViT**

| Method        | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|---------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg   | 64.50 ± 0.13 | 50.76 ± 0.26     | 27.43 ± 0.22  | 22.27 ± 0.03      |
| Dec. HeteroFL | 65.08 ± 0.61 | 53.90 ± 0.09     | 29.60 ± 0.53  | 25.65 ± 0.25      |
| Dec. FedRolex | 65.17 ± 0.40 | 53.56 ± 0.61     | 30.44 ± 0.21  | 25.81 ± 0.06      |
| DFML (Ours)   | 68.38 ± 0.03 | 54.50 ± 0.01     | 36.48 ± 0.20  | 28.56 ± 0.60      |
Table 5: Global accuracy comparison using restrictive heterogeneous architectures. The experiments are conducted using ResNet architectures with 50 clients and 25 senders.

| Method        | FMNIST IID       | FMNIST non-IID   | Caltech101 IID   | Caltech101 non-IID | Oxford Pets IID  | Oxford Pets non-IID | Stanford Cars IID | Stanford Cars non-IID |
|---------------|------------------|------------------|------------------|--------------------|------------------|---------------------|-------------------|-----------------------|
| Dec. FedAvg   | 89.07 ± 0.05     | 73.53 ± 0.13     | 36.74 ± 0.61     | 22.83 ± 0.34       | 10.25 ± 0.43     | 9.21 ± 0.31         | 2.83 ± 0.24       | 3.44 ± 0.21           |
| Dec. HeteroFL | 91.06 ± 0.15     | 85.49 ± 0.10     | 49.05 ± 0.21     | 38.04 ± 0.16       | 23.35 ± 0.14     | 17.68 ± 0.12        | 5.55 ± 0.14       | 6.25 ± 0.10           |
| Dec. FedRolex | 91.26 ± 0.20     | 86.06 ± 0.19     | 49.14 ± 0.32     | 37.75 ± 0.22       | 22.67 ± 0.14     | 18.39 ± 0.15        | 5.88 ± 0.11       | 6.09 ± 0.09           |
| DFML (Ours)   | **92.01 ± 0.02** | **87.31 ± 0.03** | **60.05 ± 0.04** | **50.07 ± 0.06**   | **40.42 ± 0.16** | **31.25 ± 0.11**    | **41.12 ± 0.10**  | **25.31 ± 0.09**      |

## 4.4.3 Heterogeneous Architectures With Restrictions

In this set of experiments, some restrictions are applied to the heterogeneous architectures. Tables 4 and 5 demonstrate that DFML consistently outperforms all baselines in terms of global accuracy across different architectures, datasets, and both IID and non-IID data distributions. Another observation is that the partial training baselines outperformed decentralized FedAvg, which is expected, as more knowledge is shared by averaging overlapping parameters of the models rather than only averaging models with the same architectures. Furthermore, we implemented decentralized Federated Dropout; however, its results are not reported as it did not work: the poor performance is attributed to local models being assigned random parameters from the global model generated at the aggregator.
## 4.4.4 Heterogeneous Architectures Without Restrictions

After confirming that our proposed DFML competes with the prevalent baselines in DFL, we showcase the strength of DFML in knowledge transfer using nonrestrictive heterogeneous models. Additionally, we evaluate DFML under different numbers of clients N. From Tables 6 and 7, it is evident that across various numbers of clients, datasets, and data distributions, DFML consistently achieves superior global accuracy compared to the baseline. Table 8 shows that DFML requires fewer communication rounds to reach the same accuracy as decentralized FedAvg does at specific rounds. The communication cost per round for all experiments is 50, involving the transmission of 25 models to and from the aggregator. Appendix A.4 provides further experimental analysis on DFML.

Table 6: Global accuracy comparison using nonrestrictive heterogeneous architectures with different numbers of clients N.

**N = 10**

| Method      | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg | 72.65 ± 0.17 | 43.78 ± 0.06     | 33.58 ± 0.08  | 23.01 ± 0.09      |
| DFML (Ours) | 83.87 ± 0.22 | 74.30 ± 0.88     | 54.03 ± 0.15  | 49.67 ± 0.20      |

**N = 50**

| Method      | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg | 71.51 ± 0.24 | 59.05 ± 0.22     | 33.19 ± 0.19  | 26.27 ± 0.30      |
| DFML (Ours) | 81.74 ± 0.04 | 75.51 ± 0.12     | 50.39 ± 0.09  | 46.22 ± 0.07      |

**N = 100**

| Method      | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------|--------------|------------------|---------------|-------------------|
| Dec. FedAvg | 71.00 ± 0.32 | 57.79 ± 0.11     | 32.30 ± 0.30  | 26.93 ± 0.13      |
| DFML (Ours) | 79.94 ± 0.03 | 71.75 ± 0.05     | 47.66 ± 0.06  | 42.84 ± 0.23      |

Table 7: Global accuracy comparison using nonrestrictive heterogeneous architectures. The experiments are conducted using EfficientNet architectures with 10 clients and 5 senders.

| Method      | Caltech101 IID   | Caltech101 non-IID | Oxford Pets IID  | Oxford Pets non-IID | Stanford Cars IID | Stanford Cars non-IID |
|-------------|------------------|--------------------|------------------|---------------------|-------------------|-----------------------|
| Dec. FedAvg | 54.75 ± 0.24     | 30.92 ± 0.16       | 18.34 ± 0.31     | 15.34 ± 0.21        | 6.07 ± 0.25       | 11.83 ± 0.12          |
| DFML (Ours) | **83.42 ± 0.20** | **65.70 ± 0.13**   | **68.90 ± 0.22** | **42.94 ± 0.13**    | **70.29 ± 0.19**  | **50.63 ± 0.23**      |

Table 8: Communication rounds required by DFML to reach the same global accuracy that decentralized FedAvg achieves at rounds 100 and 500. Different numbers of clients N are used.

**N = 10**

| Dec. FedAvg round | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------------|--------------|------------------|---------------|-------------------|
| 100               | 10           | 20               | 20            | 10                |
| 500               | 20           | 20               | 20            | 20                |

**N = 50**

| Dec. FedAvg round | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------------|--------------|------------------|---------------|-------------------|
| 100               | 30           | 40               | 30            | 30                |
| 500               | 90           | 90               | 90            | 80                |

**N = 100**

| Dec. FedAvg round | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|-------------------|--------------|------------------|---------------|-------------------|
| 100               | 50           | 60               | 40            | 50                |
| 500               | 150          | 190              | 130           | 130               |

Table 9: Global accuracy of DFML with different supervision signals, with and without cyclically varying α. The dataset used is CIFAR-100, with 50 clients and 25 senders.

| Supervision | Cyclical α | CNN IID      | CNN non-IID  | ResNet IID   | ResNet non-IID | ViT IID      | ViT non-IID  |
|-------------|------------|--------------|--------------|--------------|----------------|--------------|--------------|
| CE          | ✗          | 47.44 ± 0.08 | 29.16 ± 2.27 | 49.13 ± 0.30 | 18.78 ± 0.67   | 35.66 ± 0.02 | 14.02 ± 0.01 |
| CE          | ✓          | 48.44 ± 0.23 | 41.47 ± 0.30 | 55.29 ± 0.13 | 41.07 ± 1.27   | 36.62 ± 0.30 | 22.34 ± 0.12 |
| WSM         | ✗          | 47.57 ± 0.26 | 43.59 ± 0.35 | 49.77 ± 0.24 | 30.91 ± 0.96   | 34.04 ± 0.21 | 25.04 ± 0.56 |
| WSM         | ✓          | 48.60 ± 0.03 | 44.26 ± 0.20 | 53.32 ± 0.07 | 46.27 ± 0.38   | 36.48 ± 0.20 | 28.56 ± 0.60 |
A visual illustration of the convergence speedup achieved by DFML compared to the baselines is presented in Appendix A.4.4. Moreover, a comparison between DFML and a decentralized version of FML is provided in Appendix A.4.11.

## 4.4.5 Performance Per Cluster Of Architectures

Figure 5 presents the performance of each architecture cluster in DFML compared to decentralized FedAvg. In this experiment, five different CNN architectures are distributed among 50 clients. The figure demonstrates that the bigger the architecture size, the higher the attained global accuracy. Moreover, all clusters in DFML surpass their corresponding clusters in decentralized FedAvg.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

Figure 5: Performance comparison between the different architecture clusters in both DFML and decentralized FedAvg. In this experiment, five nonrestrictive heterogeneous architectures are distributed among 50 clients. C0, C1, C2, C3, and C4 represent the global accuracy average of all models with CNN architectures [32, 64, 128, 256], [32, 64, 128], [32, 64], [16, 32, 64], and [8, 16, 32, 64], respectively.

![10_image_2.png](10_image_2.png)

![10_image_3.png](10_image_3.png)

Figure 6: Comparison between DFML and decentralized FedAvg under significant model heterogeneity. Ten different architectures are distributed among 50 clients, including 2 ViT, 4 ResNet, and 4 CNN architectures.

## 4.4.6 High Model Heterogeneity

To showcase the scalability of DFML with model heterogeneity, we conduct an experiment involving 50 clients with significant model heterogeneity, and we compare the results obtained by DFML with those of decentralized FedAvg. In this experiment, ten different architectures are deployed, selected from Table 10: the two largest ViT architectures, the four largest ResNet architectures, and the four largest CNN architectures under the H2 category. Figure 6 demonstrates that DFML performs effectively under heavy model heterogeneity and greatly outperforms decentralized FedAvg.

## 5 Analysis

In this section, we first demonstrate the effect of using different fixed α values versus cyclically adjusting it. Second, we present the impact of cyclical α on global accuracy by evaluating the regular models. Subsequently, we examine the behavior exhibited by both the regular and peak models as α changes, and highlight the ability of the peak models to capture the peaks in global accuracy. Lastly, we conduct an analysis to understand the effects of different supervision signals and cyclical α on global accuracy.

## 5.1 Regular Vs Peak Models

Figure 7 shows the fluctuations in global accuracy as α is cyclically varied. When α = 0, representing only the supervision signal in the objective function, global accuracy is at its lowest. This is attributed to models being optimized solely toward the aggregator's local data. However, as α increases and eventually reaches its maximum defined value, models gain knowledge from each other through knowledge distillation, resulting in a peak in global accuracy.

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

Figure 7: Performance comparison of the regular models against cyclically oscillating α with full and half cycles, respectively. The supervision signal used is CE.

![11_image_2.png](11_image_2.png)

Figure 8: Performance comparison between DFML regular models, which are communicated and updated in each communication round, and peak models, which are updated only when a peak occurs. The supervision signal used is CE.
The decline in global accuracy starts when the distillation signal diminishes and the supervision signal takes over. To maintain global accuracy at the peaks, the peak models are used. Initially, the peak models are updated with every regular model update until the first maximum α value is reached. After that, the peak models are only updated when the regular models are aggregated using the highest value of α. Figure 8 depicts the global accuracy of both the regular and peak models, highlighting the stability achieved by the peak models throughout training.

## 5.2 Fixed Vs Cyclical α

As shown in Figure 7, the highest global accuracy is consistently achieved when α reaches its maximum defined value. Now, we address the impact of using different fixed values for α and whether setting α = 1 throughout training yields similar performance to cyclically changing α. Figure 9 demonstrates that using different fixed values of α, under different distribution shifts, results in varied performance levels. Furthermore, fixing α = 1 leads to the worst global accuracy, because when only the distillation signal is present throughout training without any supervision, noise is propagated; this results in experts teaching each other incorrect information.

## 5.3 Supervision Signal And Cyclical α

The impact of different supervision signals (CE and WSM) and cyclical α on DFML is presented in Table 9. The results indicate that the use of WSM primarily enhances global accuracy under non-IID data distribution shifts. Furthermore, the addition of cyclical α on top of CE and WSM further improves global accuracy. The best outcomes are mostly reported when both WSM and cyclical α are applied. Additional experimental analysis on cyclical knowledge distillation can be found in Appendix A.5.

![12_image_0.png](12_image_0.png)

![12_image_1.png](12_image_1.png)

Figure 9: Performance comparison between different fixed α values and cyclically varying it, under IID and non-IID data distributions. The supervision signal used is CE.

## 6 Limitations

While DFML presents significant advancements over existing baselines, it does have certain limitations that need to be acknowledged. One notable limitation is its computational expense: DFML requires aggregators to possess sufficient memory to receive multiple models and enough computational power to perform mutual learning, which involves both forward and backward passes on all models at the aggregator. Another limitation is the lack of convergence analysis in this paper; proving the convergence properties of DFML is beyond the current scope and is left for future work.

## 7 Conclusion

We proposed DFML, a framework that supports decentralized knowledge transfer among heterogeneous models without architectural constraints or reliance on additional data. DFML overcomes common centralization issues such as communication bottlenecks and single points of failure, making it a robust alternative for real-world applications. DFML outperformed state-of-the-art baselines in addressing model and data heterogeneity in DFL, showcasing better convergence speed and global accuracy.

## References

Samiul Alam, Luyang Liu, Ming Yan, and Mi Zhang. Fedrolex: Model-heterogeneous federated learning with rolling sub-model extraction. *Advances in Neural Information Processing Systems*, 35:29677–29690, 2022.
Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Pedro Miguel Sánchez Sánchez, Sergio López Bernal, Gérôme Bovet, Manuel Gil Pérez, Gregorio Martínez Pérez, and Alberto Huertas Celdrán. Decentralized federated learning: Fundamentals, state of the art, frameworks, trends, and challenges. *IEEE Communications Surveys & Tutorials*, 2023. Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, and Eugene Belilovsky. New insights on reducing abrupt representation change in online continual learning. *arXiv preprint* arXiv:2104.05025, 2021. Sebastian Caldas, Jakub Konečny, H Brendan McMahan, and Ameet Talwalkar. Expanding the reach of federated learning by reducing client resource requirements. *arXiv preprint arXiv:1812.07210*, 2018. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D Manning, and Quoc V Le. Bam! born-again multi-task networks for natural language understanding. *arXiv preprint arXiv:1907.04829*, 2019. Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, and Bo Han. Enhancing one-shot federated learning through data and ensemble co-boosting. *arXiv preprint arXiv:2402.15070*, 2024. Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. IEEE transactions on pattern analysis and machine intelligence, 44(7):3366–3385, 2021. Enmao Diao, Jie Ding, and Vahid Tarokh. Heterofl: Computation and communication efficient federated learning for heterogeneous clients. *arXiv preprint arXiv:2010.01264*, 2020. Alessandro Giuseppi, Sabato Manfredi, Danilo Menegatti, Antonio Pietrabissa, and Cecilia Poli. Decentralized federated learning for nonintrusive load monitoring in smart energy communities. In *2022 30th* Mediterranean Conference on Control and Automation (MED), pp. 312–317. IEEE, 2022. Clare Elizabeth Heinbaugh, Emilio Luz-Ricca, and Huajie Shao. Data-free one-shot federated learning under very high statistical heterogeneity. In *The Eleventh International Conference on Learning Representations*, 2023. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris, and Nicholas Lane. Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout. Advances in Neural Information Processing Systems, 34:12876–12889, 2021. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. *arXiv preprint arXiv:1803.05407*, 2018. Seung Hoon Lee, Seunghyun Lee, and Byung Cheol Song. Vision transformer for small-size datasets. *arXiv* preprint arXiv:2112.13492, 2021. Gwen Legate, Lucas Caccia, and Eugene Belilovsky. Re-weighted softmax cross-entropy to control forgetting in federated learning. *arXiv preprint arXiv:2304.05260*, 2023. Chengxi Li, Gang Li, and Pramod K Varshney. Decentralized federated learning via mutual knowledge transfer. *IEEE Internet of Things Journal*, 9(2):1136–1147, 2021. Daliang Li and Junpu Wang. Fedmd: Heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581, 2019. Qinbin Li, Bingsheng He, and Dawn Song. Practical one-shot federated learning for cross-silo setting. *arXiv* preprint arXiv:2010.01017, 2020a. Qinbin Li, Bingsheng He, and Dawn Song. 
Practical one-shot federated learning for cross-silo setting. *arXiv preprint* arXiv:2010.01017, 2020b.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. *Proceedings of Machine learning and systems*, 2:429–450, 2020c.

Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. *Advances in Neural Information Processing Systems*, 33:2351–2363, 2020.

Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. *arXiv preprint* arXiv:1608.03983, 2016.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In *Artificial intelligence and statistics*, pp. 1273–1282. PMLR, 2017.

Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. *arXiv preprint* arXiv:2003.00295, 2020.

Abhijit Guha Roy, Shayan Siddiqui, Sebastian Pölsterl, Nassir Navab, and Christian Wachinger. Braintorrent: A peer-to-peer environment for decentralized federated learning. *arXiv preprint* arXiv:1905.06731, 2019.

Stefano Savazzi, Monica Nicoli, Vittorio Rampa, and Sanaz Kianoush. Federated learning with mutually cooperating devices: A consensus approach towards server-less model optimization. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3937–3941. IEEE, 2020.

Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, and Chao Wu. Federated mutual learning. *arXiv preprint* arXiv:2006.16765, 2020.

Leslie N Smith. Cyclical learning rates for training neural networks. In *2017 IEEE winter conference on applications of computer vision (WACV)*, pp. 464–472. IEEE, 2017.

Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. *arXiv preprint* arXiv:2302.00487, 2023.

Mang Ye, Xiuwen Fang, Bo Du, Pong C Yuen, and Dacheng Tao. Heterogeneous federated learning: State-of-the-art and research challenges. *ACM Computing Surveys*, 56(3):1–44, 2023.

Liangqi Yuan, Lichao Sun, Philip S Yu, and Ziran Wang. Decentralized federated learning: A survey and perspective. *arXiv preprint* arXiv:2306.01603, 2023.

Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, and Chao Wu. Dense: Data-free one-shot federated learning. *Advances in Neural Information Processing Systems*, 35:21414–21428, 2022a.

Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 10174–10183, 2022b.

Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. Deep mutual learning. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4320–4328, 2018.

Helong Zhou, Liangchen Song, Jiajie Chen, Ye Zhou, Guoli Wang, Junsong Yuan, and Qian Zhang. Rethinking soft labels for knowledge distillation: A bias-variance tradeoff perspective. *arXiv preprint* arXiv:2102.00650, 2021.

![15_image_0.png](15_image_0.png)

Figure 10: Data partitions for the IID and non-IID distributions of the CIFAR-10 and CIFAR-100 datasets. The size of the red circle represents the magnitude of data samples for each class label in each client.
## A Appendix

## A.1 Dataset Distributions

The data partitions for both the IID and non-IID distributions of the CIFAR-10 and CIFAR-100 datasets are illustrated in Figure 10.

![16_image_0.png](16_image_0.png)

Figure 11: Cyclic oscillation of α with incremental period increase. α ranges from 0 → 1.

## A.2 Implementation Details

In our setup, we assume a star topology where all clients can send and receive models from any other client. Each communication round involves randomly selecting 50% of the clients as senders S, with an additional client randomly chosen as the aggregator a (unless specified otherwise). We utilize the SGD optimizer for each client with momentum 0.9 and weight decay 5e-4. The learning rate is selected from {0.1, 0.01, 0.001}. The batch size is set to 8 for the EfficientNet experiments, 16 for the ResNet experiments using the Caltech101, Oxford Pets, and Stanford Cars datasets, and 64 for all other experiments. For the cyclic α scheduler, we apply cosine annealing. The initial oscillating period is set to 10 and is incrementally increased after each completion. α is oscillated from 0 to a maximum value selected from {0.8, 0.9, 1.0}. Figure 11 illustrates an example of the behavior of α throughout training. The number of mutual learning epochs K, performed at the aggregator, is set to 10. Moreover, the temperature is configured to 1. All experiments are repeated for 3 trials with random seeds.

With the α and period schedulers in place, aggregators need to be aware of the communication round t and of the round when the period was last updated, in order to compute the period and the α value. To achieve this, each aggregator communicates these two values to the next aggregator.

The architectures employed in our experiments are presented in Table 10. Modes H0, H1, and H2 refer to homogeneous, restrictive heterogeneous, and nonrestrictive heterogeneous architectures, respectively. In mode H1, the model rates for the different model types are $[2^{0}, 2^{-1}, 2^{-2}, 2^{-3}, 2^{-4}]$; here, smaller models are scaled versions of the largest model in terms of width. Mode H2 designates heterogeneous architectures with no constraints, allowing each client to have a different number of layers and hidden channel sizes. In our H1 and H2 experiments, the models are evenly distributed among clients.

For CNNs, the values inside the array represent the number of channels in each layer, and the array's length corresponds to the number of layers. Each CNN layer has 5 × 5 kernels followed by the ReLU activation function, 2 × 2 max pooling, and layer normalization. In ResNets, the largest model is pre-activated ResNet18, while the others are scaled versions of ResNet18; the values inside the array represent the number of channels per layer, and each layer consists of two blocks. Similarly, for ViTs, the largest model comprises 2 layers with 512 channels each, and the others are scaled versions of the largest ViT. Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) (Lee et al., 2021) are used in our ViT architectures to address the lack of locality inductive bias and to enable the use of non-pretrained ViTs on small datasets. Additionally, for ViTs, the patch size is set to 4 × 4, head dimensions to 64, depth to 2, dropout to 0.1, and embedding dropout to 0.1.
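As an illustration of the CNN recipe above, the sketch below builds a width-scaled variant of the largest H1 CNN. The function name and the classification head are assumptions, but the layer pattern (5 × 5 convolutions, ReLU, 2 × 2 max pooling, layer normalization) and the width rates follow this section.

```python
# Sketch of a width-scaled CNN following the H1 recipe: each layer applies a
# 5x5 convolution, ReLU, 2x2 max pooling, and layer normalization.
import torch.nn as nn

def scaled_cnn(base_channels=(128, 256), rate=1.0, num_classes=100,
               in_channels=3, image_size=32):
    layers, c_in, size = [], in_channels, image_size
    for c in base_channels:
        c_out = max(1, int(c * rate))   # H1 rates: 2**0, 2**-1, ..., 2**-4
        size //= 2                      # spatial size after 2x2 max pooling
        layers += [
            nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.LayerNorm([c_out, size, size]),
        ]
        c_in = c_out
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(c_in * size * size, num_classes))
```

For example, `scaled_cnn(rate=0.5)` would correspond to the [64, 128] variant from Table 10.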
Table 10: Different architecture modes.

| Mode Name | Heterogeneity                | Model Type   | Architectures                                                                               |
|-----------|------------------------------|--------------|---------------------------------------------------------------------------------------------|
| H0        | Homogeneous                  | CNN          | [32, 64]                                                                                     |
| H1        | Restrictive Heterogeneous    | CNN          | [128, 256], [64, 128], [32, 64], [16, 32], [8, 16]                                          |
|           |                              | ResNet       | [64, 128, 256, 512], [32, 64, 128, 256], [16, 32, 64, 128], [8, 16, 32, 64], [4, 8, 16, 32] |
|           |                              | ViT          | [512, 512], [256, 256], [128, 128], [64, 64], [32, 32]                                      |
| H2        | Nonrestrictive Heterogeneous | CNN          | [32, 64, 128, 256], [32, 64, 128], [32, 64], [16, 32, 64], [8, 16, 32, 64]                  |
|           |                              | EfficientNet | B0, B1, B2, B3, B4                                                                           |

## A.3 Baselines

## A.3.1 Decentralized FedAvg

When senders send their heterogeneous models to the aggregator, decentralized FedAvg performs parameter averaging exclusively among models with identical architectures. In the homogeneous scenario, averaging encompasses all available models. In contrast, in heterogeneous scenarios, clusters of global models are formed, as the identical models in each group are averaged together. The resulting global models are then communicated back to the clients with the same model architecture. During parameter averaging, weights are assigned based on the number of data samples in each client. Algorithm 2 provides a detailed explanation of decentralized FedAvg. Model parameters in FedAvg are aggregated as follows:

$$W_{g}=\frac{1}{\sum_{n\in\mathcal{P}}d_{n}}\sum_{n\in\mathcal{P}}d_{n}W_{n},\tag{5}$$

where Wg is the global model and Wn is the model of client n. The weight dn is based on the number of data samples in the client.

## A.3.2 Decentralized FedProx

Decentralized FedProx is similar to decentralized FedAvg in that a subset of clients send their locally trained models to the aggregator for parameter averaging to form a global model, which is then sent back to the senders. However, FedProx adds a proximal term to the local training objective, which helps improve the method's stability by restricting the local updates to be closer to the initial (global) model. The proximal term necessitates keeping a copy of the global model while performing local training. In a decentralized setting with partial client participation, multiple versions of global models exist throughout the network; decentralizing FedProx therefore requires each client to treat its own model as the global model before performing local training. Algorithm 2 provides a detailed explanation of decentralized FedProx. The hyperparameter µ is selected from {0.5, 1, 2}.

**Algorithm 2** Decentralized FedAvg and FedProx

```
Input: Initialize N clients; each client n has a model Wn and data Dn.
       All models with the same architecture have the same initialization.
for communication round t = 1, 2, ..., T do
    Randomly select one aggregator a ∈ {1, ..., N}
    Randomly select senders S ⊂ {1, ..., N}, a ∉ S
    Participants P = S ∪ {a}
    \\ Client side
    for all n ∈ P do
        W̃n ← Wn
        for all batches Xn ∈ local data Dn do
            FedAvg:  Wn ← locally train Wn using Equation 2
            FedProx: Wn ← locally train Wn using Equation 2 + (µ/2) ‖W̃n − Wn‖²
    Send locally updated models Ws for all s ∈ S to a
    \\ Aggregator side
    Each cluster u ∈ U contains a group of models with the same architecture.
    for all u ∈ U do
        W^u_g ← Aggregate the homogeneous models Wn ∈ u according to Equation 5
    for all u ∈ U do
        for all Wn ∈ u do
            Wn ← W^u_g    \\ Fork W^u_g into the local models
    Send back the updated models Ws for all s ∈ S
```
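A minimal sketch of this cluster-wise averaging is shown below, assuming the models have already been grouped by identical architecture and that parameters are manipulated through their state dictionaries; the helper name is illustrative.

```python
# Sketch of Equation 5: weighted parameter averaging within one cluster of
# architecturally identical models; sample_counts holds each client's d_n.
import copy
import torch

@torch.no_grad()
def fedavg_cluster(models, sample_counts):
    total = float(sum(sample_counts))
    global_model = copy.deepcopy(models[0])
    for name, param in global_model.state_dict().items():
        param.copy_(sum((d / total) * m.state_dict()[name]
                        for m, d in zip(models, sample_counts)))
    return global_model  # forked back into every model of the cluster
```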
## A.3.3 Decentralized Partial Training

Similar to DFML and decentralized FedAvg, in decentralized partial training methods each client owns a local model, and in each communication round several clients are randomly selected: one client is designated as the aggregator, while the others act as senders. The senders transmit their models to the aggregator after training them locally. At the aggregator, the largest available model serves as the global model Wg. Algorithm 3 provides a detailed description of decentralized Federated Dropout, HeteroFL, and FedRolex. Model parameters in partial training methods are aggregated as follows:

$$W_{g,[i,j]}=\frac{1}{\sum_{n\in\mathcal{P}}d_{n}}\sum_{n\in\mathcal{P}}d_{n}W_{n,[i,j]},\tag{6}$$

where Wg,[i,j] is the j-th parameter at layer i of the global model Wg, while Wn,[i,j] is the j-th parameter at layer i of client n. The client weight is equal for all clients, dn = 1/|P|.

**Algorithm 3** Decentralized Federated Dropout, HeteroFL, and FedRolex

```
Input: Initialize N clients; each client n has a model Wn and data Dn. Each model has a rate rn,
       which is the model's size rate compared to the largest model in the network.
       Models with the same rate have the same initialization.
for communication round t = 1, 2, ..., T do
    Randomly select one aggregator a ∈ {1, ..., N}
    Randomly select senders S ⊂ {1, ..., N}, a ∉ S
    Participants P = S ∪ {a}
    \\ Client side
    for all n ∈ P do
        for all batches Xn ∈ local data Dn do
            Wn ← locally train Wn using Equation 2
    Send locally updated models Ws for all s ∈ S to a
    \\ Aggregator side
    Set Wg to be like the largest available model
    Local models are assigned indices Xn,i for all i and n ∈ P,
        where Xn,i is from Equation 7, Equation 8, or Equation 9
    Aggregate the models into Wg at the assigned indices according to Equation 6
    for all n ∈ P do
        Wn ← Wg,Xn,i for all i    \\ Split Wg into local models
    Send back the updated models Ws for all s ∈ S
```

**Decentralized Federated Dropout** In each communication round t of Federated Dropout (Caldas et al., 2018), sub-models (local models) are extracted from the centralized Wg based on a random selection process. The parameters of the sub-models that are randomly chosen through this selection are transmitted to the clients for local training, and after local training, the updated parameters are sent back to Wg for aggregation. The random extraction scheme for layer i of sub-model Wn of client n is as follows:

$$\mathcal{X}_{n,i}=\{j_{s}\mid\text{integer}\ j_{s}\in[0,J_{i}-1]\ \text{for}\ 1\leq s\leq\lfloor r_{n}J_{i}\rfloor\},\tag{7}$$

where Xn,i is the set of parameter indices of layer i extracted from Wg, rn denotes the rate of Wn relative to Wg, and Ji denotes the total number of parameters in layer i of Wg. A total of ⌊rnJi⌋ indices are randomly selected from layer i of Wg for Wn.

Our derived decentralized version of Federated Dropout involves generating random indices (Equation 7) at the aggregator, guided by the largest available model (Wg). Each model is then assigned a random set of indices matching its size, and the aggregation process is carried out using these randomly selected indices. Once the aggregation is finalized, sub-models are created from Wg using the same set of indices. Finally, these sub-models are transmitted back to the respective participating clients.
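The overlap-aware averaging of Equation 6 can be sketched per layer as below. This is one interpretation under stated assumptions: parameters are flattened to one float vector per layer, client weights are equal as noted above, and any global entry not covered by a sub-model keeps its previous value.

```python
# Sketch of Equation 6 for one layer: each global entry is averaged over the
# sub-models that actually contain it (equal client weights, d_n = 1/|P|).
import numpy as np

def aggregate_layer(global_layer, client_layers, index_maps):
    accum = np.zeros_like(global_layer)
    counts = np.zeros(len(global_layer))
    for params, idx in zip(client_layers, index_maps):
        accum[idx] += params      # sub-model parameters at their global indices
        counts[idx] += 1
    covered = counts > 0
    global_layer[covered] = accum[covered] / counts[covered]
    return global_layer           # entries with no contributor are left as-is
```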
Decentralized HeteroFL Unlike Federated Dropout, HeteroFL (Diao et al., 2020) consistently extracts submodels from a predefined section of Wg. Specifically, HeteroFL extracts sub-models from Wg starting from index 0 up to the maximum layer size of Wn. The extraction scheme is defined as follows:

$$\mathcal{X}_{n,i}=\{0,1,\ldots,\lfloor r_{n}J_{i}\rfloor-1\}.\tag{8}$$

In the decentralized HeteroFL approach, the parameters of each layer in all sub-models share the same starting point (index 0). Consequently, at the aggregator, parameter averaging takes place with overlapping indices from the available models. After aggregation, the updated parameters of each layer from all models, spanning from index 0 up to the maximum size, are communicated back to their respective clients.

Decentralized FedRolex In FedRolex (Alam et al., 2022), local clients are initially generated from the global model beginning from index 0 and extending up to the local models' capacity. In the first communication round, the sub-models are generated similarly to HeteroFL. However, in each subsequent communication round, the starting point of the indices shifts to the right. The sub-model extraction in FedRolex is defined as follows:

$$\mathcal{X}_{n,i}^{t}=\begin{cases}\{\tilde{t},\tilde{t}+1,\ldots,\tilde{t}+\lfloor r_{n}J_{i}\rfloor-1\}&\text{if }\tilde{t}+\lfloor r_{n}J_{i}\rfloor\leq J_{i},\\ \{\tilde{t},\tilde{t}+1,\ldots,J_{i}-1\}\cup\{0,1,\ldots,\tilde{t}+\lfloor r_{n}J_{i}\rfloor-1-J_{i}\}&\text{otherwise,}\end{cases}\tag{9}$$

where $\tilde{t}=t \bmod J_i$, $t$ is the current communication round, and $J_i$ is the size of layer $i$ of $W_g$. In decentralized FedRolex, the size of Wg is determined by the largest available model, and as a result, the rightward shift in indices is computed based on the current communication round t and the Ji of the selected Wg. The indices are calculated using $\mathcal{X}_{n,i}^{t}$ from Equation 9. These indices are utilized for aggregation and to extract the updated local models after aggregation. In decentralized FedRolex, since aggregators must be aware of t to compute the indices, each aggregator needs to communicate t to the next aggregator.

![20_image_0.png](20_image_0.png)

![20_image_1.png](20_image_1.png)

Figure 12: Performance comparison between centralized (with server) and decentralized (serverless) settings.

## A.3.4 Decentralized FML

FML (Shen et al., 2020) is a centralized framework that relies on a central server. In FML, each client owns a local model, which is not transmitted during training. In each communication round, the global model forks its model into all participating clients. Shen et al. (2020) named the forked models *meme models*. Subsequently, at each client, the local (heterogeneous) model engages in mutual learning with the meme (homogeneous) model using their respective local data. After mutual learning is complete, the meme models are communicated back to the server for aggregation. In decentralized FML, two models are dedicated to each client: the first is the heterogeneous model, and the second is the homogeneous (meme) model. In each communication round, several clients perform mutual learning between their heterogeneous and homogeneous models using their local data. Next, the homogeneous models from all participating clients are transmitted to the aggregator for aggregation. After aggregation is complete, the aggregated model is transmitted back to all participating clients.
This process repeats for the remaining communication rounds. We decentralized FML for comparison with our proposed DFML. **It is crucial to emphasize that decentralized FML and our proposed DFML are two distinct frameworks.** The key differences are as follows: 1) DFML uses one heterogeneous model per client for training, while decentralized FML uses two models per client for training; 2) DFML aims to transfer global knowledge to the heterogeneous models, whereas FML treats the heterogeneous models as personalized models and uses the homogeneous models to hold the global knowledge; and 3) FML requires a server, and we derived the decentralized version of FML to facilitate comparison with our proposed DFML.

## A.4 DFML: Further Analysis

## A.4.1 Centralized Vs Decentralized

We highlight a challenge of a serverless setting compared to having a centralized server in the network. For simplicity, we illustrate the difference using FedAvg. In centralized FedAvg, a global model exists at the server, and in each communication round, the global model is distributed to randomly selected clients. Even with partial client participation, there is always exactly one version of the global model at the server. In a decentralized setting with partial participation, clients send their model to another client for aggregation and receive back the aggregated model, resulting in a version of the global model that differs from the previous round, especially when different clients are selected for participation. The existence of multiple versions of global models among clients in a serverless network affects the convergence speed compared to a centralized server, where the latest version of the global model is transmitted to clients in every round. Figure 12 illustrates the convergence speed drop between a centralized and a decentralized network using FedAvg and homogeneous architectures. Even so, our proposed DFML remains competitive with centralized FedAvg. As mentioned in Section 1, there are other advantages of a serverless network compared to having a centralized server.

![21_image_0.png](21_image_0.png)

![21_image_1.png](21_image_1.png)

Figure 13: Performance comparison between using the same client (client 0) and randomly selecting a client in each communication round to perform the aggregation.

![21_image_3.png](21_image_3.png)

![21_image_2.png](21_image_2.png)

Figure 14: Performance comparison between using both distillation and supervision signals and only using the supervision signal at the aggregator to perform knowledge transfer.

## A.4.2 Fixed Vs Random Aggregator

Figure 13 shows that using a fixed client to perform the aggregation significantly limits performance, since the same local data is used in the knowledge transfer between models. In this experiment, the senders are randomly selected in each communication round, while the aggregator is always client 0.

## A.4.3 Effect Of Knowledge Distillation

We explore the effect of utilizing a distillation signal in the objective function compared to just having a supervision signal at the aggregator (Equation 1). Figure 14 illustrates the communication cost difference between having both supervision and distillation signals in the aggregator's objective function versus just having the supervision signal. The figure shows that without the distillation signal, the communication cost is approximately an order of magnitude higher than with the addition of the distillation signal to the supervision signal at the aggregator, as described in Equation 1.
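For reference, a minimal sketch of the aggregator objective discussed here; Equation 1 itself appears in the main text, and we assume the form L = (1 − α)L_CE + αL_KL described in Section A.5.3, with a distillation temperature added purely for illustration.

```python
import torch.nn.functional as F

def aggregator_objective(student_logits, teacher_logits, labels, alpha, tau=1.0):
    # Supervision signal: cross-entropy against the aggregator's local labels.
    ce = F.cross_entropy(student_logits, labels)
    # Distillation signal: KL divergence from a teacher's softened predictions.
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits.detach() / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2
    # The two signals are scaled in opposite directions (assumed form of Equation 1).
    return (1.0 - alpha) * ce + alpha * kl
```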
## A.4.4 Convergence Speedup

We illustrate in Figures 15, 16, and 17 the convergence speedup achieved by DFML compared to the baselines under three heterogeneity settings: homogeneous architectures, restrictive heterogeneous architectures, and nonrestrictive heterogeneous architectures, respectively.

![22_image_0.png](22_image_0.png)

Figure 15: Comparison between DFML, decentralized FedAvg, and Def-KT using homogeneous architectures.

![22_image_3.png](22_image_3.png)

![22_image_1.png](22_image_1.png)

![22_image_2.png](22_image_2.png)

![22_image_4.png](22_image_4.png)

Figure 16: Comparison between DFML, decentralized partial training algorithms, and decentralized FedAvg using restrictive heterogeneous architectures.

![22_image_5.png](22_image_5.png)

Figure 17: Comparison between DFML and decentralized FedAvg for different numbers of clients and data distributions using non-restrictive heterogeneous architectures.

## A.4.5 Local Accuracy

Our proposed DFML not only surpasses the baselines in global accuracy but also achieves competitive results in local accuracy. As shown in Figure 18, the local accuracy attained by DFML generally exceeds that of decentralized FedAvg. Although DFML exhibits slightly lower local accuracy compared to decentralized FedAvg in the CIFAR-10 non-IID experiment with 10 clients, it remains competitive. Moreover, the corresponding global accuracy achieved by DFML in that experiment surpasses decentralized FedAvg.

![23_image_0.png](23_image_0.png)

Figure 18: Local accuracy comparison between DFML and decentralized FedAvg for different numbers of clients and data distributions using non-restrictive heterogeneous architectures.

## A.4.6 Different Number Of Participants

We investigate the performance of DFML with a varying number of senders S in each communication round. Figure 19 compares DFML and decentralized FedAvg with 50%, 20%, and 10% senders. DFML exhibits effective learning even with fewer participants compared to decentralized FedAvg. Additionally, both methods with a reduced number of participants demonstrate slower convergence speed compared to the 50% scenario, which is expected. Due to the low participation rate, we increase the number of peak updates. With a limited number of participating models, updating them only when the maximum α is reached results in slower convergence speeds. Therefore, we allow multiple updates instead of updating the models solely at the maximum α. We estimate that an appropriate number of peak updates is N/|S| for |S| < 50% of the clients. Consequently, with |S| = 10%, updates are applied in the largest five α values. If cyclical α is not added to DFML, adjusting the number of peak updates is unnecessary, as peak models will no longer be needed and only the updated models must be communicated back to senders.

![24_image_0.png](24_image_0.png)

Figure 19: Evaluating DFML against decentralized FedAvg with a smaller number of senders S.

## A.4.7 Weighted Vs Vanilla Average Of KL Divergences (KLs)

In Equation 3, we use a weighted average of KL divergences (KLs) between all teacher models and the student. The weighting is determined based on the number of trainable parameters in each teacher model. Figure 20 demonstrates that the weighted average leads to better convergence speed and global accuracy compared to vanilla averaging. The reason is that larger models tend to have a higher probability of possessing finer knowledge; thus, giving them more weight during knowledge distillation results in better knowledge transfer.
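A minimal sketch of this weighted distillation average, under our reading of Equation 3; `weighted_kl_distillation` is a hypothetical helper name.

```python
import torch.nn.functional as F

def weighted_kl_distillation(student_logits, teachers, teacher_logits):
    # Teacher weights are proportional to each teacher's number of trainable
    # parameters, so larger models contribute more to the distillation signal.
    sizes = [sum(p.numel() for p in t.parameters() if p.requires_grad)
             for t in teachers]
    total = float(sum(sizes))
    log_p = F.log_softmax(student_logits, dim=1)
    loss = 0.0
    for logits_t, size in zip(teacher_logits, sizes):
        kl = F.kl_div(log_p, F.softmax(logits_t.detach(), dim=1),
                      reduction="batchmean")
        loss = loss + (size / total) * kl
    return loss
```

Vanilla averaging corresponds to replacing `size / total` with `1 / len(teachers)`.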
Figure 20: Weighted vs vanilla average of KL divergences (KLs) in the distillation signal of the objective function.

Figure 21: Mutual vs vanilla knowledge transfer.

![24_image_2.png](24_image_2.png)

![24_image_1.png](24_image_1.png)

Figure 22: Effect of increasing mutual learning epochs K on global accuracy with respect to (Left) communication rounds and (Right) computation cost.

## A.4.8 Mutual Vs Vanilla Knowledge Transfer

With vanilla knowledge transfer, only the aggregator's model is updated, while the other models act as teachers. Conversely, with mutual learning, all models are updated. Figure 21 demonstrates the significant improvement in global accuracy and convergence speed achieved through mutual learning compared to vanilla knowledge transfer.

## A.4.9 Effect Of Increasing Mutual Learning Epochs K

Figure 22 shows that increasing the number of mutual learning epochs K at each aggregator contributes to a faster convergence speed. A higher K enables more knowledge to be distilled from the experts, leading to improved convergence speed. This, in turn, results in a more efficient communication cost throughout training. On the other hand, the computational cost increases significantly. From the figure, we can see that increasing K beyond 10 does not enhance convergence speed enough to justify the additional computation, whereas selecting K lower than 10 saves computation but increases the communication cost. Thus, there is a trade-off between computation and communication cost for convergence. Finally, even though decentralized FedAvg is more computationally efficient than DFML, it does not reach the same global accuracy level as DFML.

## A.4.10 Different Topology

In all previous experiments, we used a mesh topology, where all clients can reach each other. However, in this subsection, we explore a different network topology, as illustrated in Figure 23. In this new topology, clients are divided into two groups, each connected in a mesh configuration, with a single link connecting the two groups. The first group contains clients with odd addresses [1, 3, 5, . . . , N − 1] and the second group contains clients with even addresses [2, 4, 6, . . . , N]. The clients connecting group 1 and group 2 are those with the median address of their respective groups. We conducted experiments using both restrictive and nonrestrictive heterogeneous architectures; a sketch of how this topology can be constructed follows the tables below.

![25_image_0.png](25_image_0.png)

Figure 23: Different topology.

Table 11: Global accuracy comparison using restrictive heterogeneous architectures with 50 clients and 25 senders. The experiments are conducted with ResNet architectures using the topology illustrated in Figure 23.

| Method | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|---|---|---|---|---|
| Dec. FedAvg | 44.71 ± 0.12 | 22.11 ± 0.06 | 10.36 ± 0.16 | 7.51 ± 0.14 |
| Dec. HeteroFL | 58.91 ± 0.17 | 43.45 ± 0.21 | 18.24 ± 0.13 | 14.81 ± 0.22 |
| Dec. FedRolex | 58.91 ± 0.09 | 43.54 ± 0.19 | 18.20 ± 0.08 | 15.10 ± 0.11 |
| DFML (Ours) | **78.87 ± 0.01** | **68.52 ± 0.02** | **45.02 ± 0.02** | **40.31 ± 0.03** |

Table 12: Global accuracy comparison using nonrestrictive heterogeneous architectures with 50 clients and 25 senders. The experiments are conducted with CNN architectures using the topology illustrated in Figure 23.

| Method | CIFAR-10 IID | CIFAR-10 non-IID | CIFAR-100 IID | CIFAR-100 non-IID |
|---|---|---|---|---|
| Dec. FedAvg | 51.45 ± 0.01 | 25.03 ± 0.02 | 27.10 ± 0.11 | 20.16 ± 0.21 |
| DFML (Ours) | **78.87 ± 0.10** | **68.52 ± 0.18** | **45.01 ± 0.15** | **40.30 ± 0.15** |
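As promised above, a small sketch of one way to build this two-group topology; the median-link placement is our reading of the description, and `two_group_topology` is a hypothetical helper name.

```python
def two_group_topology(n):
    """Edge set for Figure 23's topology: odd- and even-addressed clients each
    form a full mesh, bridged by one link between the median-addressed client
    of each group (n is assumed even, with addresses running from 1 to n)."""
    odd = list(range(1, n + 1, 2))    # group 1: [1, 3, 5, ..., N-1]
    even = list(range(2, n + 1, 2))   # group 2: [2, 4, 6, ..., N]
    edges = set()
    for group in (odd, even):         # full mesh within each group
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                edges.add((a, b))
    edges.add((odd[len(odd) // 2], even[len(even) // 2]))  # bridging link
    return edges
```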
The results, provided in Tables 11 and 12, demonstrate that even with this different topology, DFML continues to surpass the baselines.

## A.4.11 DFML Vs Decentralized FML

In decentralized FML, the clients' heterogeneous models are the same as the models used in our proposed DFML, which are based on CNN architectures and mode H2 from Table 10. The homogeneous model size used is [32, 64], which is the median of the five different architectures. Figure 24 compares our DFML and decentralized FML. We include the global accuracy of the local heterogeneous models and the homogeneous meme models. We observe that the homogeneous meme models have a better convergence speed than our DFML and, in some cases, lead to better final accuracy. However, the global knowledge performance of the heterogeneous models in decentralized FML is much worse than DFML and is even lower than decentralized FedAvg. As our goal is to transfer global knowledge to clients' heterogeneous models, we consider that our DFML significantly outperforms the decentralized FML framework.

![26_image_0.png](26_image_0.png)

Figure 24: Comparison between our proposed DFML and decentralized FML.

![26_image_2.png](26_image_2.png)

![26_image_1.png](26_image_1.png)

Figure 25: Comparison between different supervision signals.

Figure 26: Comparing various adaptive techniques with our proposed DFML. The supervision signal used is CE.

Figure 27: Different schedulers assigned to the loss components in the objective function.

## A.5 Cyclical Knowledge Distillation: Further Analysis

## A.5.1 Different Supervision Signals

As previously mentioned, mitigating catastrophic forgetting is crucial in applications such as FL where data distribution shift exists across clients. Training a model on one client's dataset optimizes its objective towards that specific local data, causing it to forget tasks previously learned from other clients. ACE (Caccia et al., 2021) and WSM (Legate et al., 2023) are two approaches to mitigate catastrophic forgetting. Figure 25 illustrates the improvement in global accuracy when the supervision signal is changed from CE to ACE or WSM. Results show that WSM as a supervision signal leads to the highest global accuracy, as it takes into account the number of samples for each class label in the clients. The maximum range of α oscillation is tuned for each supervision signal to achieve the best final accuracy. Tuning the maximum α range is important, particularly in scenarios where the supervisory signal is non-noisy, such as in IID distribution shift, or when using ACE or WSM in non-IID settings. Completely diminishing the supervisory signal (equivalent to setting α = 1) in such cases would lead to a performance decline. Therefore, in situations where the supervision signal is not noisy, the maximum α value is better set to 0.8 or 0.9. In contrast, in non-IID cases with CE as the supervision signal, where the signal is very noisy, setting α = 1 yields the best performance.

## A.5.2 Different Adaptive Techniques

In Figure 26, we compare the performance of different adaptive techniques, including WSL (Zhou et al., 2021) and ANL-KD (Clark et al., 2019), with our proposed cyclical DFML framework. The results indicate that WSL and ANL-KD show negligible improvement compared to DFML with a fixed α = 0.5.

![27_image_0.png](27_image_0.png)

Figure 28: Comparison between fixing and increasing the period throughout training. The initial period is set to 10.
## A.5.3 Different Schedulers For Loss Components

In Equation 1, the objective function consists of two loss components: the supervision and distillation loss signals. In Equation 10, we examine the impact of independently varying each loss component on global accuracy:

$$\mathcal{L}=\gamma\mathcal{L}_{\mathrm{CE}}+\alpha\mathcal{L}_{\mathrm{KL}}.\tag{10}$$

Specifically, we compare additional scenarios where one scheduler is assigned to LCE alone, another where a scheduler is assigned to LKL alone, and a third where one scheduler scales both loss components with the same factor. For this experiment, we use CE as the supervision signal, as the improvements are more notable under the non-IID distribution shifts. Figure 27 illustrates the comparison of assigning independent schedulers to the loss components. In this experiment, γ oscillates from 1 → 0, and α oscillates from 0 → 1.

Scaling the distillation signal alone and leaving the scale of LCE untouched does not yield any advancement in performance compared to keeping α fixed for both loss components. This indicates that the supervision signal has a more significant impact than the distillation signal. Performance gains are observed when the LCE signal is reduced, allowing more influence on the LKL signal. In contrast, when LCE oscillates while the LKL scale is kept fixed, the result is the same performance as when both LCE and LKL signals are scaled in opposite directions (Equation 1). This is because the LCE signal is dominant without any scaling, and as it diminishes it allows the LKL signal to take precedence. The peak value is attained when the LCE signal reaches 0 and the LKL signal's scale is 1. Finally, scaling both LCE and LKL signals with a common scheduler leads to inferior performance compared to scaling LCE alone or scaling both signals in opposite directions. The poor performance of oscillating the LKL signal alone is attributed to the continuous dominance of the LCE signal during mutual learning, causing the models to drift toward the aggregator's local data.

## A.5.4 Fixed Vs Increasing Period

Figure 28 demonstrates that increasing the period over time results in better convergence speed, higher global accuracy, and enhanced stability. The period is initially set to 10. In the fixed period experiment, the period is kept constant, while in the increasing period experiment, the period is incremented. Starting with a small period is crucial for more frequent peak updates, which accelerates convergence speed. However, over time, increasing the period proves beneficial, allowing models to transition from the supervision to the distillation signal more slowly. This extended time in the distillation-dominant region enhances global accuracy. For instance, if all clients are participating and the period is initially set at 100, better accuracy is achieved after 100 communication rounds compared to a constant period of 10. However, the convergence speed is notably affected, as a period of 100 results in one peak at communication round 100, while a period of 10 leads to 10 peaks. Further, in scenarios with partial participation, after 100 rounds only the participating clients will be updated, whereas a smaller initial period ensures that, on average, all clients are updated several times within the first 100 rounds. Therefore, to reap the benefits of both high convergence speed and improved final accuracy, we set the period to be initially small and increment it gradually.
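To illustrate the schedule described here, a minimal sketch of a cyclical α with an increasing period; the cosine shape and the increment rule are our assumptions for illustration, not the paper's exact schedule.

```python
import math

def cyclical_alpha(t, base_period=10, alpha_max=1.0, increase_period=True):
    # Locate the current cycle; each completed cycle grows by base_period rounds,
    # so peaks (alpha = alpha_max) become progressively less frequent.
    period, start = base_period, 0
    while increase_period and t >= start + period:
        start += period
        period += base_period
    phase = ((t - start) % period) / period
    # alpha rises from 0 to alpha_max and falls back within one cycle.
    return alpha_max * 0.5 * (1.0 - math.cos(2.0 * math.pi * phase))
```

With `increase_period=False`, this reduces to the fixed-period baseline of Figure 28.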
Review 1:

Summary: The paper proposes a novel Decentralized Federated Mutual Learning (DFML) framework that is serverless, supports heterogeneous models without architectural constraints, and does not rely on additional data. DFML handles model and data heterogeneity through mutual learning that distills knowledge between clients, and by cyclically varying the amount of supervision and distillation signals. Extensive experiments demonstrate DFML consistently outperforms prevalent baselines in convergence speed and global accuracy under various conditions.

Strengths and Weaknesses:

Strengths
- DFML supports nonrestrictive heterogeneous models and does not rely on additional data, making it more practical for real-world scenarios.
- Extensive experiments demonstrate that DFML consistently outperforms state-of-the-art baselines in terms of convergence speed and global accuracy under various conditions.

Weaknesses
- The impact of different network topologies on the performance of DFML is not investigated. The star topology is symmetric and has good stability. I think you should evaluate DFML on more irregular topologies.
- The experiments do not cover a wide range of real-world datasets and applications, which could help validate the generalizability of the proposed framework.
- The impact of the number of mutual learning epochs K on the performance of DFML is not thoroughly investigated. The paper mentions that increasing K contributes to faster convergence but does not provide a detailed analysis of the trade-off between computational cost and performance improvement.
- Although the framework does not rely on additional data, the privacy risks associated with sharing model parameters and the potential for privacy leaks during the mutual learning process are not addressed. Nowadays, there exist some powerful model inversion attack methods that may cause privacy leakage.

Requested Changes: Overall, I think this paper is well-written and has significant research value. However, there are some issues to be addressed.
- (major) Investigate the impact of different network topologies on the performance of DFML.
- (major) Conduct an ablation study to evaluate the individual contributions of the key components of DFML, such as mutual learning, cyclical variation of α, and the use of peak models.
- (minor) Expand the experiments to cover a wider range of real-world datasets and applications.
- (minor) Include a theoretical analysis of the convergence properties and limitations of DFML.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper presents Decentralized Federated Mutual Learning (DFML) to address heterogeneity in a Decentralized Federated Learning (DFL) setting. Specifically, DFML transmits models to a randomly selected aggregator to handle model heterogeneity and uses re-weighted SoftMax to address data heterogeneity. The aggregation is performed by distilling knowledge on the aggregator client. A cyclical approach is used to balance two loss functions. The experimental results show the proposed method is effective in addressing the non-IID challenge.

Strengths and Weaknesses:

Strengths: Handling heterogeneity in the Decentralized Federated Learning setting is well-motivated. The experiments are extensive and easy to follow. The results show the proposed method is effective.

Weaknesses: Although the communication cost is not impacted, the computation cost might be high for the selected aggregator (client).
If a less powerful edge device is selected as the aggregator and it cannot run the models sent from other clients, how can the method still work? For example, if an edge device can only deploy a CNN model, how can it aggregate a ViT model sent from other clients? Will the learned model be biased toward the data on the last aggregator at the end of training? In the global accuracy comparison using homogeneous CNN architectures, can the authors compare DFML with a decentralized version of an FL method designed for data heterogeneity?

Requested Changes: See above. Please consider using \citep{} if the authors are not part of the sentence.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: This work aims at serverless and relaxed-constraint FL. To this end, the authors propose a novel learning framework in which models are uploaded to certain clients and fine-tuned using their data. Some experiments are conducted to verify the effectiveness of the proposed method.

Strengths and Weaknesses:

[Strengths]
1 The studied problem is promising.
2 Introducing accuracy on IID data is convincing.
3 Multiple architectures are leveraged for evaluation.

[Weaknesses]
1 The claimed "no server" (or serverless) setting is confusing and requires careful clarification. Specifically, it seems like changing the server in FedAvg to the "no server" setting claimed in this work induces no challenge. Namely, we can perform aggregation on any client, as claimed in this work.
2 The claimed architectural constraints are widely studied in one-shot FL, where limited or no constraints are leveraged.
3 The proposed cyclical knowledge distillation would introduce substantial costs as the FL system scales up, e.g., with more data or clients.
4 The experimental results are not convincing, as merely CIFAR-10 and CIFAR-100 are considered in the evaluation.
5 The proposed method would approach the centralized training setting and performance if the proposed learning paradigm does not fix the client for knowledge distillation.

Requested Changes:
1 The motivation/significance of the highlighted contributions should be carefully clarified; see Weaknesses 1, 2, and 5.
2 More experiments are required.

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper provides a new idea for handling heterogeneity in decentralized federated learning systems and proposes a new method of knowledge distillation for clients. Before the rebuttal, reviewers raised several concerns regarding the extensive computational demands of the method, potential biases inherent in the model, the need for more rigorous ablation studies, and a perceived lack of sufficient empirical evidence supporting the method's efficacy. After the rebuttal, reviewers acknowledged that the major concerns had been addressed. The main limitation acknowledged by most reviewers is the method's computational cost, which should not be a major reason for rejecting this paper. The authors have also clearly acknowledged these limitations in their manuscript.

==================================================
# Prompt-Based Exemplar Super-Compression And Regeneration For Class-Incremental Learning

Anonymous authors

Paper under double-blind review

## Abstract

Replay-based methods in class-incremental learning (CIL) have attained remarkable success. Despite their effectiveness, the inherent memory restriction results in saving a limited number of exemplars with poor diversity. In this paper, we introduce ESCORT, a novel approach that substantially increases the quantity and enhances the diversity of exemplars based on a pre-trained general-purpose diffusion model, without fine-tuning it on target datasets or storing it in the memory buffer. Images are compressed into visual and textual prompts, which are saved instead of the original images, decreasing memory consumption by a factor of 24. In subsequent phases, diverse exemplars are regenerated by the diffusion model. We further propose partial compression and diffusion-based data augmentation to minimize the domain gap between generated exemplars and real images. Comprehensive experiments demonstrate that ESCORT significantly improves CIL performance across multiple benchmarks, e.g., 3.2% above the previous state-of-the-art on ImageNet-100.

## 1 Introduction

Ideally, AI systems should be adaptive to ever-changing environments, where the data are continuously observed. The AI models should be capable of learning concepts from new data while maintaining the ability to recognize the old ones. In practice, AI systems often have constrained memory budgets, because of which most of the historical data must be abandoned. However, deep AI models suffer from catastrophic forgetting when being updated by abundant new data and limited historical data, as previous knowledge can be overridden by the new information (McCloskey & Cohen, 1989; Ratcliff, 1990). To study how to overcome catastrophic forgetting, the class-incremental learning (CIL) protocol (Rebuffi et al., 2017) is established. CIL assumes training samples from various classes are introduced to the model in phases, with previous data mostly discarded from memory. CIL has enjoyed significant advancements (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018; Chaudhry et al., 2018; Lee et al., 2017; Yoon et al., 2017; Yan et al., 2021; Wang et al., 2022a; Zhou et al., 2022; Pham et al., 2021), among which replay-based methods (Rebuffi et al., 2017; Hou et al., 2019; Liu et al., 2021a;b; Yan et al., 2021; Liu et al., 2020) stand out in terms of performance by employing a memory buffer to store a limited number of representative samples (i.e., exemplars) from former classes. During subsequent learning phases, these exemplars are revisited to help retain previously learned knowledge.

Although replay-based methods demonstrate notable effectiveness, they are still restricted by two main drawbacks. Firstly, since the exemplar set is much smaller compared to the new training data, the model is biased towards the new classes. Secondly, the poor exemplar diversity leads to overfitting on old classes. These two issues are essentially incurred by the lack of quantity and diversity of exemplars, respectively. Therefore, by tackling these two problems, all replay-based CIL methods can potentially be enhanced. We consider these questions in CIL: is it efficient to save old-class information as RGB images? Can we compress the images into something more compact so that more information can be stored with the same memory consumption?
In this paper, we propose a novel approach named **E**xemplar **S**uper-**CO**mpression and **R**egeneration based on promp**T**s (ESCORT). Instead of directly storing the previous images, we compress them into visual and textual prompts, e.g., edge maps and textual descriptions, and save these prompts in the memory. In subsequent phases, diverse high-resolution exemplars are regenerated from the prompts by leveraging an off-the-shelf pre-trained diffusion model, e.g., ControlNet (Zhang et al., 2023).

![1_image_0.png](1_image_0.png)

Figure 1: Comparison between traditional replay-based CIL methods and our approach. (a) **Traditional** replay-based CIL methods can only select and save a small number of exemplars due to the memory restriction, leading to two severe issues: firstly, the relatively small size of the exemplar set compared to the new training dataset gives rise to a pronounced imbalance between old and new classes; secondly, the limited diversity of the exemplar set compared to the original training set incurs an overfitting problem. (b) **ESCORT** compresses the old images into visual and textual prompts, e.g., edge maps and class tags, and saves these prompts to the memory. In subsequent phases, diverse high-resolution exemplars are regenerated from these prompts via an off-the-shelf pre-trained diffusion model, e.g., ControlNet. ESCORT dramatically improves the quantity and diversity of exemplars without violating the memory constraint.

Compared to traditional direct replay methods, ESCORT enjoys increased **quantity** and enhanced **diversity** of the exemplar set, as illustrated in Figure 1. Firstly, the exemplar quantity is boosted by compression: since the memory consumption of a 1-bit edge map is merely 1/24 that of its 8-bit, 3-channel RGB image counterpart, 24 times more exemplars can be saved within the same memory budget. We call this process super-compression, as this compression ratio is far beyond that of existing image compression algorithms, without even compromising the image quality. Secondly, instead of relying only on the images from the dataset itself, we apply diffusion-based image generation to produce unseen samples with great diversity. This can be achieved simply by changing the random seed of the diffusion model. However, utilizing generated images for CIL model training leads to a potentially huge domain gap between old exemplars and new data. We propose two techniques to mitigate this problem, i.e., partial compression and diffusion-based data augmentation, enabling the CIL model to properly benefit from the synthetic exemplars without the need to fine-tune the diffusion model on the target dataset. Since the same pre-trained diffusion model can be directly downloaded from the public cloud at any time when necessary, we do not need to store the generator in our own memory. Extensive experiments show that our ESCORT achieves top performance on both fine-grained and coarse-grained classification datasets: Caltech-256 (Griffin et al., 2022), Food-101 (Bossard et al., 2014), Places-100 (Zhou et al., 2016b), and ImageNet-100 (Deng et al., 2009). We fully investigate the effect of ESCORT under different CIL settings and demonstrate that our approach achieves tremendous improvements compared to the state-of-the-art (SOTA) CIL method, e.g., substantially increasing the average learning-from-half accuracy on 11-phase ImageNet-100 by 3.2%. Our contributions can be summarized as follows.
- We challenge the traditional manner of saving old-class exemplars as RGB images in CIL and propose a memory-efficient data storage approach based on prompts, significantly increasing exemplar quantity with the same memory cost.
- We employ a general-purpose ControlNet, without fine-tuning it on our target datasets, to regenerate diverse high-resolution images from prompts during incremental stages.
- We devise two techniques, i.e., partial compression and diffusion-based data augmentation, to alleviate the domain gap between generated exemplars and real images.
- We conduct extensive experiments on four classification datasets, two CIL protocols, seven CIL methods, and three budget settings to evaluate the performance of our approach.

## 2 Related Work

Class-incremental learning. The goal of class-incremental learning is to develop machine learning models that can effectively adapt to and learn from data presented in a series of training stages. This is closely associated with topics known as continual learning (De Lange et al., 2019a; Lopez-Paz & Ranzato, 2017) and lifelong learning (Chen & Liu, 2018; Aljundi et al., 2017). Recent incremental learning approaches are either task-based, i.e., all-class data come but are from a different task for each new phase (Shin et al., 2017; Zhao et al., 2020; Li & Hoiem, 2017), or class-based, i.e., each phase has the data of a new set of classes coming from the identical dataset (Yoon et al., 2017; Yan et al., 2021; Wang et al., 2022a; Zhou et al., 2022; Pham et al., 2021). The latter is typically called class-incremental learning (CIL). CIL methods can be grouped into three main categories: replay-based, regularization-based, and parameter-based (De Lange et al., 2019b; Prabhu et al., 2020). **Replay-based** methods employ a memory buffer to store information from previous classes for rehearsal in later stages. Direct replay (Isele & Cosgun, 2018; Aljundi et al., 2019; Iscen et al., 2020; Bang et al., 2021; Liu et al., 2021b) saves representative images from the dataset, while generative replay (Shin et al., 2017; He et al., 2018; Hu et al., 2018; Zhu et al., 2021; Petit et al., 2023; Liu et al., 2021c) saves generators trained by previous images. These strategies are centered around selecting key samples, training generators, and effectively enhancing the classification model by utilizing a dataset that combines both exemplars and new data. **Regularization-based** methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Aljundi et al., 2018; Chaudhry et al., 2018; Lee et al., 2017) incorporate regularization terms into the loss function to reinforce previously acquired knowledge while assimilating new data. **Parameter-based** methods (Yoon et al., 2017; Yan et al., 2021; Wang et al., 2022a; Zhou et al., 2022; Pham et al., 2021) allocate distinct model parameters to each incremental phase, aiming to avoid model forgetting that arises from overwriting parameters.

Exemplar compression. Attempts have been made to compress the exemplars and reduce their memory consumption, so that exemplar quantity can be increased with the same memory buffer. MRDC (Wang et al., 2022b) employs JPEG (Wallace, 1992) to compress exemplars and studies the trade-off between quality and quantity of the compressed images. CIM (Luo et al., 2023) identifies foreground regions of exemplars by CAM (Zhou et al., 2016a) and downsamples the background to compress images. These approaches have two weaknesses. 1) The compression is lossy and the exemplar image quality degrades.
2) The compression ratio is data-dependent, as JPEG compression falls short with irregular patterns and non-smooth gradients, and background downsampling is inefficient with dominating foreground regions. Our ESCORT, in contrast, guarantees high-resolution exemplars and a constant compression ratio of 24, independent of the image dataset we choose.

Diffusion models. Diffusion models function by progressively deteriorating the data, introducing Gaussian noise in incremental steps, and then learning to reconstruct the data by reversing the noise introduction process (Ho et al., 2020; Singer et al., 2022; Villegas et al., 2022). They have demonstrated notable success in producing high-resolution and photorealistic images from a range of conditional inputs, e.g., natural language descriptions, segmentation maps, and keypoints (Ho et al., 2020; 2022; Zhang et al., 2023). Recently, text-to-image diffusion models such as Stable Diffusion (Rombach et al., 2022) have been employed to enhance training data as well. SDDR (Jodelet et al., 2023) leverages a pre-trained Stable Diffusion model to generate extra exemplars from class tags for CIL, but image generation based only on class tags can hardly recover detailed information of images in that class.

Diffusion models with spatial control. The recent development (Zhang et al., 2023) of adding spatial conditioning controls to pre-trained text-to-image diffusion models, as seen in the work referred to as ControlNet, has garnered significant interest. ControlNet, a variation of the diffusion model, achieves this by integrating a pre-trained large-scale diffusion model, like Stable Diffusion (Rombach et al., 2022), with zero convolutions in the network. This allows ControlNet to create high-quality images from the input text and a visual prompt, which can be edges, depth, segmentation, or human pose representations. This extraordinary ability sheds light on a new potential data format for storing old-class exemplars.

![3_image_0.png](3_image_0.png)

Figure 2: The CIL model training process with ESCORT in the i-th phase (i ≥ 2). A pre-trained ControlNet is downloaded in advance and remains frozen for image generation. Initially, we have three subsets of data because of partial compression: new real images of this phase, old prompts (i.e., edge maps and class tags), and old exemplars. Firstly, we transform the new real images and old exemplars into edge maps by Canny edge detection. Then, we use ControlNet to generate images from all the prompts we have. Finally, we train the CIL model with real and generated images. An image that has both generated and real versions appears only once in each epoch, with a probability p of being the generated version and 1 − p of being the real version.

Since visual prompts such as edge maps cost much less space than the original RGB images, more prompts can be saved within the same memory budget, and more high-resolution images with the same details can be regenerated by ControlNet in later stages for CIL model training.

## 3 Methodology

As illustrated in Figure 2, our approach generates diverse exemplars from former prompts to overcome the forgetting problem in CIL. Specifically, we describe the problem setup of replay-based CIL in Section 3.1. Then, we explain how to compress images to prompts in Section 3.2. Next, we show how to regenerate the exemplars from prompts and train the CIL model with them in Section 3.3. We introduce two techniques to reduce the domain gap in Section 3.4.
The overall algorithm is provided in Section 3.5 and Algorithm 1.

## 3.1 Problem Setup

Replay-based CIL has multiple phases during which the number of classes gradually increases to the maximum (Douillard et al., 2020; Hou et al., 2019; Liu et al., 2020). In the 1-st phase, we observe data D1, using them to learn an initial model Θ1. After training, we can only store a small subset of D1 (i.e., exemplars denoted as E1) in memory used as replay samples in later phases. In the i-th phase (i ≥ 2), we get new class data Di and load exemplars E1:i−1 = E1 ∪ · · · ∪ Ei−1 from the memory. Then, we initialize Θi with Θi−1, and train it using E1:i−1 ∪ Di. We evaluate the model Θi on a test set Q1:i with all classes observed so far. Eventually, we select exemplars E1:i from E1:i−1 ∪ Di and save them in the memory. For the growing memory buffer setup with a budget of b units per class (where one unit corresponds to the memory cost of one image), we need to ensure that the final number of exemplars stored at this stage satisfies |E1:i| ≤ ib. For the fixed memory buffer setup with a budget of B units in total, we need to have |E1:i| ≤ B in all phases.

## 3.2 Prompt-Based Super-Compression

The performance of replay-based methods is severely limited by the quantity and diversity of exemplars. To tackle these two issues, we compress old images into visual and textual prompts and use them to regenerate the exemplars by utilizing an off-the-shelf diffusion model. As the prompts require much less memory compared to RGB images, we are able to save a large number of prompts within the same memory budget, so that far more exemplars can be regenerated in subsequent phases.

Prompt extraction. At the end of the i-th phase, we first randomly select a training instance (x, y) from the dataset Di, where x denotes the image and y denotes the classification label. Next, we extract the visual prompts. The visual prompts should preserve as many details as possible to help the CIL model retain the old-class knowledge. At the same time, they should be small enough to reduce the memory cost. Therefore, we choose Canny edge maps as the visual prompts. We use the classical Canny edge detector (Canny, 1986) to obtain the edge map e as follows:

$$e=\mathrm{CannyEdge}(\mathbf{x}).\tag{1}$$

Then, we directly save the class label t as the textual prompt. For example, if the class label is "cupcakes", then t = 'cupcakes'. We repeat the above process to add other visual and textual prompts to the memory until the memory budget is exhausted. After that, we obtain the final prompt set $\mathcal{P}_i$ for the i-th phase, i.e., $\mathcal{P}_i=\{(e_j,t_j)\}_{j=1}^{R_i}$, where $R_i$ denotes the maximum number of prompts at the i-th phase that fit in the memory.

Prompt memory consumption. Storing edge maps and class tags in the memory buffer takes far less space than storing the original images. For an 8-bit RGB image, compressing it to a 1-bit edge map of equal size achieves a compression ratio of 8 × 3 = 24. This means that each memory unit, which can originally store 1 exemplar image, can now store 24 edge maps. The class labels, which usually contain one or two words, consume negligible memory.
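As an illustration of this prompt extraction and 1-bit storage (Equation 1 and the ×24 ratio above), here is a minimal sketch assuming OpenCV and NumPy; the Canny thresholds are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def image_to_prompt(image_bgr, class_tag, low=100, high=200):
    # Equation 1: extract the Canny edge map as the visual prompt.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)                 # uint8 map with values {0, 255}
    # Pack to 1 bit/pixel: 1/24 the footprint of the 8-bit, 3-channel original.
    bits = np.packbits((edges > 0).astype(np.uint8))
    return bits, edges.shape, class_tag                # textual prompt stored alongside

def prompt_to_edge_map(bits, shape):
    # Unpack the stored bits back into a binary edge map for ControlNet.
    flat = np.unpackbits(bits)[: shape[0] * shape[1]]
    return (flat.reshape(shape) * 255).astype(np.uint8)
```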
## 3.3 Exemplar Regeneration And CIL Model Training

To regenerate exemplars from past prompts, we leverage a general-purpose pre-trained ControlNet (Zhang et al., 2023), which can produce high-resolution images based on visual and textual prompts. We directly apply the off-the-shelf ControlNet without fine-tuning it on target datasets. Therefore, we do not need to allocate any memory space to save the ControlNet model, as we are able to re-download the ControlNet model from the cloud at the beginning of each phase. This setting has its strengths and weaknesses. By avoiding saving the generator in our buffer, we take full advantage of the memory to store old data. However, we have to sacrifice the generator's capability to fit our dataset. Consequently, the generated images might greatly differ from our images in various properties such as brightness, contrast, and noise, leading to a potentially huge domain gap. We propose two solutions to tackle this issue in Section 3.4.

Exemplar regeneration. In the i-th phase, we first take out a pair of visual and textual prompts (e, t) from the memory $\mathcal{P}_{1:i-1}=\mathcal{P}_1\cup\cdots\cup\mathcal{P}_{i-1}$ of $R_{1:i-1}=\sum_{m=1}^{i-1}R_m$ prompts in total. Then, we resize the edge map e with nearest neighbor interpolation to meet the input size requirement of ControlNet (i.e., both height and width must be a multiple of 64). After that, we forward the prompts (e, t) to ControlNet and generate K new exemplars with K different random seeds:

$$\hat{\mathbf{x}}_{k}=\mathrm{ControlNet}(e,t,s_{k}),\quad k=1,\cdots,K,\tag{2}$$

where k and sk are the index and its corresponding random seed, respectively. The generated image xˆk is then resized to the original size of x for consistency of image resolution. We repeat this operation until we finish processing the entire prompt set P1:i−1. Finally, we obtain the regenerated exemplar set Eˆ1:i−1, which contains R1:i−1K exemplars.
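A minimal sketch of this regeneration step (Equation 2) using the Hugging Face diffusers library; the checkpoint names are common public choices rather than ones specified by the paper.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

def regenerate_exemplars(edge_map, class_tag, seeds):
    # Equation 2: K exemplars from one (edge map, class tag) pair, one per seed.
    # Height and width of edge_map are assumed to be multiples of 64 already.
    cond = Image.fromarray(edge_map).convert("RGB")
    return [
        pipe(class_tag, image=cond,
             generator=torch.Generator("cuda").manual_seed(s)).images[0]
        for s in seeds
    ]
```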
CIL model training. Then, we combine the regenerated exemplars Eˆ1:i−1 with the new training data Di to train the CIL model:

$$\Theta_{i}\leftarrow\Theta_{i}-\gamma\nabla_{\Theta_{i}}\mathcal{L}(\Theta_{i};\hat{\mathcal{E}}_{1:i-1}\cup\mathcal{D}_{i}),\tag{3}$$

where γ denotes the learning rate, and L is the CIL training loss. To avoid bias towards a large number of similar regenerated images, in each epoch, we randomly sample only one image from the K synthetic exemplars originating from the same base image for training.

## 3.4 Exemplar-Image Domain Gap Reduction

With ESCORT, we can significantly increase the quantity and diversity of exemplars without breaking the memory constraint. However, the domain gap between the synthetic exemplars and real images is still a concern. Therefore, directly using the regenerated exemplars for CIL model training leads to suboptimal performance in practice. A common approach to solve this problem is to fine-tune the generative model on the target dataset. However, if the generator is updated, we have to store it at the end of each phase, taking considerable memory of our buffer. To overcome this issue, we introduce the following two techniques to reduce the domain gap without fine-tuning ControlNet on our datasets.

Partial compression. It is indeed tempting to utilize all memory budget to save edge maps, as we would obtain 24 times the original number of exemplars, not to mention the K random generations from each edge map. However, excessive exemplars incur a greater domain gap. Therefore, we consider only spending a part of our memory on saving edge maps, while leaving the rest to save RGB image exemplars as usual. Specifically, for each phase i, we assume the memory budget at this phase is Bi units in total, i.e., we are allowed to save at most Bi images as exemplars normally. We set α as the compressed proportion of the dataset, whose value can be adjusted depending on the CIL setting. So, only αBi memory units will be allocated for edge maps, saving 24αBi edge maps, while the remaining (1 − α)Bi units will be allocated for images, saving (1 − α)Bi original images. We first sort the training images in Di by herding (Rebuffi et al., 2017) and save the (1 − α)Bi most representative ones directly, while the next 24αBi images are stored as edge maps in the buffer. This strategy preserves the most representative information of old classes in their original form. The remaining less representative images in Di are discarded.

Diffusion-based data augmentation. Another technique we apply to attenuate the domain gap is diffusion-based data augmentation. During CIL model training, every real image x has a certain probability of being replaced by one of its K generated copies. Before training starts at the i-th phase, for each instance (x, y) with a real image x, we extract the edge map e from x and obtain K generated copies $\{\hat{\mathbf{x}}_k\}_{k=1}^{K}$ by Equation 2. In each training epoch, x has a probability p of being replaced by any one of $\{\hat{\mathbf{x}}_k\}_{k=1}^{K}$ using uniform sampling. This augmentation operation enables the model to learn from generated features and mitigates the domain gap between real and synthetic images.

Quantity–quality trade-off. By adjusting the compressed proportion α and augmentation probability p, we can control the trade-off between the quantity and quality of generated information that is learned by the CIL model. Larger α and p let the model learn from more generated exemplars more frequently, but the widened domain gap might cause performance degradation. Smaller α and p alleviate the domain gap and improve the information quality, but the exemplar quantity and learning frequency are compromised. The optimal choice of α and p depends on the CIL setting.

Overall CIL training loss. The CIL training process of the i-th phase (Equation 3) is adjusted to:

$$\Theta_{i}\leftarrow\Theta_{i}-\gamma\nabla_{\Theta_{i}}\mathcal{L}(\Theta_{i};(\hat{\mathcal{E}}_{1:i-1}\vee\mathcal{E}_{1:i-1})\cup(\hat{\mathcal{D}}_{i}\vee\mathcal{D}_{i})),\tag{4}$$

where Eˆ1:i−1 and E1:i−1 represent the regenerated subset and the real-image subset of previous exemplars, respectively, Dˆi and Di are the augmented version and the original version of the new dataset at phase i, respectively, and ∨ denotes the logic OR operation.
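The per-epoch sampling rule of the augmentation above can be sketched as a thin dataset wrapper; `DiffusionAugmentedDataset` is a hypothetical name for illustration.

```python
import random

class DiffusionAugmentedDataset:
    """Each base image is replaced, with probability p, by one of its K
    generated copies (uniformly sampled), so an image appears once per epoch
    in either its real or its generated version."""

    def __init__(self, base_samples, generated_copies, p):
        self.base = base_samples            # list of (image, label) pairs
        self.generated = generated_copies   # generated_copies[i]: K copies of image i
        self.p = p

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        image, label = self.base[i]
        copies = self.generated[i]
        if copies and random.random() < self.p:
            image = random.choice(copies)   # uniform over the K synthetic copies
        return image, label
```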
Algorithm 1: CIL with ESCORT (Phase i with growing budget setup)

Input: New class data Di; old real-image exemplars E1:i−1; old prompts P1:i−1; CIL model Θi−1; random seeds $\{s_k\}_{k=1}^{K}$; compressed proportion α; augmentation probability p; phase budget B
Output: New real-image exemplars Ei; new prompts Pi; CIL model Θi
1  Download the pre-trained ControlNet;
2  Set Dˆi = ∅;
3  for (x, y) ∈ Di do
4      Get the prompts (e, t) from (x, y) by Equation 1;
5      for k = 1, · · · , K do
6          Generate xˆk from (e, t, sk) by Equation 2 and add (xˆk, y) to Dˆi;
7      end
8  end
9  Set Eˆ1:i−1 = ∅;
10 for (x, y) ∈ E1:i−1 do
11     Get the prompts (e, t) from (x, y) by Equation 1;
12     for k = 1, · · · , K do
13         Generate xˆk from (e, t, sk) by Equation 2 and add (xˆk, y) to Eˆ1:i−1;
14     end
15 end
16 for (e, t) ∈ P1:i−1 do
17     for k = 1, · · · , K do
18         Generate xˆk from (e, t, sk) by Equation 2 and add (xˆk, y) to Eˆ1:i−1;
19     end
20 end
21 Initialize Θi with Θi−1;
22 for iterations do
23     Sample a training data point from Dˆi ∪ Eˆ1:i−1 with probability p or from Di ∪ E1:i−1 with probability 1 − p;
24     Update the model Θi by Equation 4;
25 end
26 Select the (1 − α)B most representative samples (x, y) from Di to form Ei;
27 Select the next 24αB representative samples (x, y) from Di and add their prompts (e, t) to Pi;
28 Store Ei and Pi in the memory buffer.

## 3.5 Algorithm

In Algorithm 1, we summarize the overall procedures of the proposed ESCORT in the i-th incremental learning phase (i ≥ 2) with a growing budget setting of B units per phase. Line 1 corresponds to the ControlNet preparation step. Lines 2-15 encapsulate the diffusion-based data augmentation for the training set. Lines 16-20 explicate the exemplar regeneration process. Lines 21-25 elucidate the CIL training procedures. (For minibatch updates, we just repeat this operation in each iteration to sample a minibatch.) Lines 26-28 explain the exemplar updating approach with partial compression.

## 4 Experiments

## 4.1 Experiment Settings

Datasets. We conduct CIL experiments and evaluate ESCORT on four image classification datasets: Caltech-256, Food-101, Places-100, and ImageNet-100. **Caltech-256** (Griffin et al., 2022) is an object recognition dataset with 30,607 images from 257 classes (256 object classes and a clutter class), each class having 80 to 827 images. We remove the clutter class and keep at most 150 images in each class by random
Learning from Scratch (LFS) | | | | | Learning from Half (LFH) | | | | | | | | |-------------------------------------------------------------------------------------------------|-------------|--------------|-------------|------|----------------------------|------------|------|--------------|------|------|------|------| | CIL Method | Caltech-256 | ImageNet-100 | Caltech-256 | | Food-101 | Places-100 | | ImageNet-100 | | | | | | | N=5 | 10 | 5 | 10 | 5 | 10 | 5 | 10 | 5 | 10 | 5 | 10 | | iCaRL (Rebuffi et al., 2017) | 57.7 | 48.9 | 66.2 | 58.7 | 53.4 | 49.1 | 58.6 | 50.0 | 42.2 | 37.6 | 59.2 | 50.3 | | WA (Zhao et al., 2020) | 66.2 | 54.6 | 76.2 | 69.7 | 60.1 | 46.8 | 74.2 | 64.6 | 62.0 | 57.3 | 73.6 | 66.2 | | MEMO (Zhou et al., 2022) | 65.7 | 61.4 | 78.5 | 74.0 | 62.6 | 60.1 | 71.1 | 50.1 | 53.4 | 48.8 | 72.5 | 70.6 | | PODNet (Douillard et al., 2020) | 67.0 | 60.9 | 76.4 | 68.7 | 68.4 | 66.6 | 79.5 | 77.0 | 68.3 | 66.3 | 79.4 | 77.2 | | FOSTER (Wang et al., 2022a) | 41.3 | 36.4 | 81.1 | 78.7 | 62.4 | 60.9 | 81.2 | 78.9 | 69.4 | 68.5 | 81.6 | 78.6 | | CIM (Luo et al., 2023) | 65.5 | 66.3 | 82.0 | 78.0 | 64.1 | 65.5 | 79.5 | 77.1 | 71.1 | 70.5 | 80.5 | 79.5 | | DER (Yan et al., 2021) | 68.1 | 64.8 | 81.8 | 78.5 | 68.4 | 66.8 | 82.0 | 80.4 | 70.4 | 69.5 | 81.8 | 80.2 | | ESCORT (ours) | 72.8 | 69.8 | 83.4 | 81.3 | 72.1 | 71.3 | 83.9 | 83.2 | 72.3 | 71.8 | 84.2 | 83.4 | | indicate R real and S synthetic exemplars/class are saved in the buffer. p 20+0 19+24 18+48 17+72 16+96 15+120 | | 14+144 | 13+168 | 12+192 | | | | | | |------------------------------------------------------------------------------------------------------------------|------|----------|----------|----------|------|------|------|------|------| | 0.0 | 81.8 | 82.4 | 82.4 | 82.5 | 82.8 | 82.6 | 81.6 | 81.4 | 80.8 | | 0.1 | 82.4 | 82.9 | 83.4 | 83.4 | 83.5 | 83.4 | 83.8 | 83.4 | 83.5 | | 0.2 | 82.8 | 83.0 | 83.3 | 83.6 | 83.9 | 84.2 | 83.7 | 83.4 | 83.9 | | 0.3 | 82.5 | 83.4 | 83.6 | 84.1 | 83.8 | 84.0 | 84.2 | 84.0 | 84.0 | | 0.4 | 82.9 | 83.1 | 83.5 | 84.0 | 83.7 | 83.9 | 83.8 | 84.0 | 83.6 | | 0.5 | 82.4 | 82.2 | 83.1 | 82.8 | 83.5 | 83.5 | 83.4 | 83.7 | 83.4 | Table 1: Average N-phase LFS and (N + 1)-phase LFH accuracies (%) of different methods, with b = 5 memory units/class for Caltech-256 and b = 20 for other datasets. selection to avoid extreme class imbalance. The remaining images of each class are randomly split into training (80%) and test (20%) sets. **Food-101** (Bossard et al., 2014) contains 101,000 food images of 101 classes, each class with 750 training and 250 test images. **Places-100** is a subset of Places-365-Standard (Zhou et al., 2016b), a large-scale dataset including 365 scene categories with 3,068 to 5,000 training and 100 validation images per class. We construct the subset by randomly choosing 100 classes with seed 0. Then 3,000 training images are randomly chosen from each category for class balance. We use their validation set as the test set. **ImageNet-100** is a subset of ImageNet-1000 (Deng et al., 2009) randomly sampled with seed 1993, following (Hou et al., 2019), and each class has about 1,300 training and 50 test images. CIL protocols. We adopt two protocols in our experiments: learning from half (LFH) and learning from scratch (LFS). LFH assumes the model is trained on half of the classes in the first phase and on the remaining classes evenly in the following N phases. LFS assumes the model learns from an equal number of classes in each of the N phases. 
We set N to 5 or 10 in our experiments. At each phase, the model is evaluated on all the classes observed so far, and the final average classification accuracy is reported.

Memory budget. Although MEMO (Zhou et al., 2022) recently proposed a new buffer setting that takes model memory into account, based on which MEMO attains SOTA performance by storing lighter models, we still follow the common setup and assign a fixed number of b memory units per class for all methods. This growing budget setting is more challenging than the fixed total budget setting, in which the memory in earlier phases is much more abundant. As Caltech-256 has fewer images per class, we set b = 5 on Caltech-256 and b = 20 on other datasets by default, unless otherwise specified.

Textual prompt extraction. The textual prompt of each class is directly derived from its class label with minimal processing. For Caltech-256, we remove the prefix and suffix and replace the hyphen with a space, e.g., "063.electric-guitar-101" becomes "electric guitar". For Food-101, we replace the underscore with a space, e.g., "apple_pie" becomes "apple pie". For Places-100, which adopts a bilevel categorization scheme for some classes, such as "general_store/indoor", "general_store/outdoor", and "train_station/platform", we transform them to "indoor general store", "outdoor general store", and "train station platform" to ensure semantic meaningfulness. For ImageNet-100, the original class labels are used directly as textual prompts.

Table 3: Average 11-phase LFH accuracies (%) on ImageNet-100 of six CIL methods with and without ESCORT plugged in. The accuracy improvements by applying ESCORT are listed in the last row.

| | iCaRL | WA | MEMO | PODNet | FOSTER | DER |
|--------------|-------|------|------|--------|--------|------|
| Baseline | 50.3 | 66.2 | 70.6 | 77.2 | 78.6 | 80.2 |
| ESCORT | 66.4 | 71.7 | 77.6 | 80.6 | 79.4 | 83.4 |
| Improvements | +16.1 | +5.5 | +7.0 | +3.4 | +0.8 | +3.2 |

Table 4: Average 6-phase LFH accuracies (%) of ESCORT on ImageNet-100 with b = 20 memory units/class and K generated copies per image. Diffusion augmentation is applied with probability p = 0.4, and R = 16 real and S = 96 synthetic exemplars/class are saved in the buffer after partial compression.

| K | 0 | 1 | 5 | 10 | 15 | 20 | 25 |
|----------|------|------|------|------|----------|------|------|
| Accuracy | 81.8 | 83.1 | 83.7 | 83.4 | **83.8** | 83.7 | 83.4 |

Training setup. All CIL algorithms are evaluated on ResNet-18 (He et al., 2016), which is trained for 200 epochs in the first phase and 170 epochs in subsequent phases with SGD. Data augmentations include random resized cropping, horizontal flip, color jitter, and AutoAugment (Cubuk et al., 2019), following (Wang et al., 2022a). We adopt the hyperparameters in PyCIL (Zhou et al., 2023) to implement all the CIL methods. The previous SOTA exemplar compression approach CIM (Luo et al., 2023) is also implemented by plugging it into DER (Yan et al., 2021) and FOSTER (Wang et al., 2022a), and we report the better result for each experiment. Unless otherwise mentioned, we incorporate ESCORT into DER (Yan et al., 2021), which generally has the best performance across various settings. We choose α and p by grid search and find that α ∈ [0.05, 0.3] and p ∈ [0.2, 0.4] work best in general. We generate K = 5 synthetic copies per image for diffusion-based augmentation, but we do not train the model for more epochs.
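As a concrete illustration of the label-to-prompt rules described under "Textual prompt extraction" above, the sketch below implements them in Python. The function name is our own, and the exact handling of corner cases beyond the quoted examples (e.g., the numeric prefix/suffix patterns for Caltech-256, or which Places modifiers read as prefixes) is an assumption.

```python
import re

def label_to_prompt(label: str, dataset: str) -> str:
    """Derive a textual prompt from a class label (illustrative sketch of the
    rules described above; corner cases beyond them are assumptions)."""
    if dataset == "Caltech-256":
        # "063.electric-guitar-101" -> "electric guitar"
        label = re.sub(r"^\d+\.", "", label)   # drop numeric prefix "063."
        label = re.sub(r"-\d+$", "", label)    # drop numeric suffix "-101"
        return label.replace("-", " ")
    if dataset == "Food-101":
        # "apple_pie" -> "apple pie"
        return label.replace("_", " ")
    if dataset == "Places-100":
        if "/" in label:
            category, modifier = label.split("/", 1)
            category = category.replace("_", " ")
            # "indoor"/"outdoor" read naturally as prefixes ("indoor general
            # store"); other modifiers as suffixes ("train station platform").
            if modifier in ("indoor", "outdoor"):
                return f"{modifier} {category}"
            return f"{category} {modifier}"
        return label.replace("_", " ")
    return label  # ImageNet-100: use the original class label directly

assert label_to_prompt("063.electric-guitar-101", "Caltech-256") == "electric guitar"
assert label_to_prompt("general_store/indoor", "Places-100") == "indoor general store"
```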
## 4.2 Results and Analysis

Comparison with previous approaches. We test the CIL performance of different methods under the LFS and LFH protocols and report the results in Table 1. ESCORT enhances the baseline method DER by a large margin: in the 10-phase LFS setting, ESCORT improves accuracy by 5.0% and 2.8% on Caltech-256 and ImageNet-100, respectively; in the 11-phase LFH setting, ESCORT improves accuracy by 4.5%, 2.8%, 2.3%, and 3.2% on Caltech-256, Food-101, Places-100, and ImageNet-100, respectively.

Ablation study. We investigate the effect of partial compression (by setting α > 0) and diffusion-based data augmentation (by setting p > 0) on ESCORT in Table 2. These two operations respectively focus on improving the quantity and the diversity of exemplars. For a straightforward presentation, we directly show the number of real (R) and synthetic (S) exemplars per class instead of α; the compressed ratio can be expressed as α = 1 − R/b. It can be observed that when augmentation is not applied, the improvement from increasing exemplar quantity is relatively limited. When augmentation is applied, as the domain gap is reduced, the model benefits much more from additional exemplars. Jointly applying the two techniques yields the best result. However, excessive synthetic exemplars (when α > 0.3) harm the model performance, as the model becomes biased towards learning from the dominating synthetic data.

ESCORT with different CIL methods. We incorporate ESCORT into six different CIL methods and measure the accuracy increase in Table 3. ESCORT improves the performance of five methods considerably (by +3.2% to +16.1%) and yields only a small gain on FOSTER. As reported in (Wang et al., 2022a), increasing the number of exemplars per class from 20 to 50 in FOSTER brings almost no improvement; with more than 50 exemplars, the domain gap additionally degrades performance for ESCORT, so the accuracy increase is not as large as for the other methods.

Number of generated copies per image. We explore the influence of increasing K (the number of generated copies per image) on ESCORT. Table 4 illustrates the accuracy improvement obtained by generating more copies, thanks to the growing sample diversity. The performance increment gradually diminishes when K ≥ 5, due to 1) the duplicated features generated from each edge map, 2) the limited number of training epochs to learn from additional generations, and 3) the widened domain gap between real and superfluous synthetic images. Therefore, we use K = 5 copies per image for all the experiments.
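The memory accounting behind the R+S columns in Table 2 can be made explicit with a short sketch. The 24× compression factor from RGB image to Canny edge map is stated in the paper; the bit-level derivation in the comment is our own reading, and the function name is illustrative.

```python
def exemplar_split(b: int, alpha: float) -> tuple[int, int]:
    """Split a per-class budget of b memory units into R real images and
    S stored edge-map prompts (sketch of the accounting used in the paper).

    One RGB image costs 1 unit; a binary Canny edge map costs 1/24 of a unit
    (our reading: 3 bytes/pixel * 8 = 24 bits vs. 1 bit per pixel), so the
    alpha*b compressed units buy 24*alpha*b prompts.
    """
    R = round((1 - alpha) * b)  # exemplars kept as RGB images
    S = round(24 * alpha * b)   # prompts kept after partial compression
    return R, S

# Reproduces the column labels of Tables 2 and 7 for b = 20:
for alpha in (0.00, 0.05, 0.10, 0.15, 0.20):
    R, S = exemplar_split(20, alpha)
    print(f"alpha={alpha:.2f}: {R}+{S} -> {R + S} exemplars/class")
# alpha=0.05 gives 19+24 = 43, alpha=0.10 gives 18+48 = 66, etc.
```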
ESCORT with different memory budgets. Due to data privacy and hardware limitations in real-world applications, the memory for storing exemplars in CIL problems is often highly restricted, sometimes far less than 20 units per class. We vary the memory budget b and quantify the improvements of ESCORT in Table 5. Remarkably, our approach consistently enhances CIL performance even under more restricted budgets.

Table 5: Average 11-phase LFH accuracies (%) of DER with and without ESCORT under different memory budgets b (units/class). The improvements by applying ESCORT are listed in the last row.

| | Food-101 | | | Places-100 | | | ImageNet-100 | | |
|--------------|------|------|------|------|------|------|------|------|------|
| | b=5 | 10 | 20 | 5 | 10 | 20 | 5 | 10 | 20 |
| DER | 77.3 | 79.1 | 80.4 | 67.1 | 68.6 | 69.5 | 77.9 | 79.3 | 80.2 |
| DER+ESCORT | 80.1 | 81.3 | 83.2 | 70.4 | 70.9 | 71.8 | 81.1 | 81.4 | 83.4 |
| Improvements | +2.8 | +2.2 | +2.8 | +3.3 | +2.3 | +2.3 | +3.2 | +2.1 | +3.2 |

Grad-CAM visualization of generated exemplars. To verify that the class objects generated by ControlNet can be successfully identified by the CIL model, we use Grad-CAM (Selvaraju et al., 2017) to detect the image region with the most importance for the classification decision. Displayed in Figure 3 are two synthetic exemplars of Food-101 with their activation maps, produced by a model trained without diffusion-based data augmentation and a model trained with augmentation. Evidently, the augmentation process is vital for the classification model to comprehensively detect the generated objects. This ensures that the model can properly benefit from the synthetic exemplars during training in subsequent stages.

![9_image_0.png](9_image_0.png)

Figure 3: Two generated exemplars of Food-101 with their Grad-CAM given by the classification model trained without diffusion-based augmentation and with augmentation. Activation regions given by the model trained with augmentation capture the class objects much more accurately.

Sampling order of prompt-based exemplars. We study the effect of the sampling order for choosing exemplars to store as prompts. As specified in Lines 26-27 of Algorithm 1, the default sampling approach is to pick the most representative (1 − α)b samples of each class to store as RGB images, while picking the next 24αb representative ones to store as edges. We refer to this scheme as *least representative prompts*. Its alternative, *most representative prompts*, saves the most representative 24αb samples as prompts and preserves the next (1 − α)b as original images. A third scheme, *random sampling*, first picks the 24αb + (1 − α)b most representative samples and then randomly chooses (1 − α)b of them to save as original images and 24αb to save as prompts. We compare these three sampling schemes in Table 6. The three sampling modes yield similar accuracies, meaning that picking more or less representative images to be prompts does not have a significant impact on the final performance.

Table 6: Average and last 6-phase LFH accuracies (last accuracies in parentheses, %) of ImageNet-100 with different exemplar sampling modes. Augmentation probability p = 0.4, budget b = 20 units/class, and R = 16 real and S = 96 synthetic exemplars/class are saved. Each sampling mode is run three times and the average results are reported. (LRP: least representative prompts; MRP: most representative prompts.)

| Sampling Mode | LRP | MRP | Random |
|---------------|-----|-----|--------|
| Accuracy | **83.7** (79.7) | 83.6 (79.6) | 83.6 (79.7) |

Training time. We provide the time to train a CIL model (with DER) in Table 7 with different numbers of exemplars per class. By compressing α = 10% of the data, we gain 66 exemplars per class (3.3 times the original). This costs approximately 32% more training time, but the model accuracy is substantially increased from 81.9% to 83.2%.

Table 7: Total training time of DER in 2×A5000 GPU hours and average LFH accuracies (%) on 11-phase Food-101, with budget b = 20 units/class and augmentation probability p = 0.4. R real and S synthetic exemplars/class are saved in the buffer. The compressed ratio is α = 1 − R/b.

| #Exemplars/Class = R + S | 20=20+0 | 43=19+24 | 66=18+48 | 89=17+72 |
|--------------------------|---------|----------|----------|----------|
| Compressed ratio α | 0.00 | 0.05 | 0.10 | 0.15 |
| Training time | 22.3 | 25.9 | 29.5 | 34.1 |
| Accuracy | 81.9 | 82.0 | **83.2** | 82.4 |
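The three sampling schemes compared in Table 6 amount to different ways of slicing a representativeness-ranked list. The sketch below is our own illustration of that slicing; the ranking procedure itself (e.g., herding) is assumed to be given, and the function name is hypothetical.

```python
import random

def pick_exemplars(ranked: list, R: int, S: int, mode: str = "LRP"):
    """Split a representativeness-ranked list (most representative first)
    into R samples stored as RGB images and S stored as edge-map prompts.
    Illustrative sketch; the ranking itself is assumed to be given."""
    if mode == "LRP":    # least representative prompts (default)
        return ranked[:R], ranked[R:R + S]
    if mode == "MRP":    # most representative prompts
        return ranked[S:S + R], ranked[:S]
    if mode == "random":
        pool = ranked[:R + S]       # slice copy, so shuffling is local
        random.shuffle(pool)
        return pool[:R], pool[R:]
    raise ValueError(f"unknown mode: {mode}")
```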
## 5 Conclusion

In this paper, we propose ESCORT, an exemplar super-compression and regeneration approach that enhances replay-based class-incremental learning methods by significantly increasing the quantity and diversity of exemplars under the same memory restriction. We challenge the conventional viewpoint that data from former classes can only be stored as RGB images and present a novel prompt-based data storage approach: at the end of each incremental phase, the selected images are compressed to Canny edge maps, reducing memory consumption by a factor of 24. In subsequent stages, images of great diversity are regenerated from the edge maps and class tags by ControlNet, a pre-trained text-to-image diffusion model. To bridge the domain gap between real and generated exemplars, partial compression and diffusion-based data augmentation are introduced to let the model properly benefit from the synthetic exemplars. This enables us to directly apply ControlNet to our target datasets without any fine-tuning, so the generator does not have to be saved in our memory buffer. Comprehensive experiments demonstrate that our approach consistently improves CIL model performance by a large margin across numerous datasets and CIL settings.

## References

Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. In *CVPR*, pp. 3366–3375, 2017.

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In *ECCV*, pp. 139–154, 2018.

Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. *NeurIPS*, 32, 2019.

Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. Rainbow memory: Continual learning with a memory of diverse samples. In *CVPR*, pp. 8218–8227, 2021.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In *ECCV*, pp. 446–461. Springer, 2014.

John F. Canny. A computational approach to edge detection. *TPAMI*, 8(6):679–698, 1986.

Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In *ECCV*, pp. 532–547, 2018.

Zhiyuan Chen and Bing Liu. Lifelong machine learning. *Synthesis Lectures on Artificial Intelligence and Machine Learning*, 12(3):1–207, 2018.

Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In *CVPR*, pp. 113–123, 2019.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. *arXiv*, 1909.08383, 2019a.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. Continual learning: A comparative study on how to defy forgetting in classification tasks. *arXiv*, 1909.08383, 2019b.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.

Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle. Podnet: Pooled outputs distillation for small-tasks incremental learning. In *ECCV*, pp. 86–102. Springer, 2020.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech 256, Apr 2022.

Chen He, Ruiping Wang, Shiguang Shan, and Xilin Chen. Exemplar-supported generative reproduction for class incremental learning. In *BMVC*, pp. 98, 2018.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *NeurIPS*, 2020. Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. *JMLR*, 23(47):1–33, 2022. Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In *CVPR*, pp. 831–839, 2019. Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, and Rui Yan. Overcoming catastrophic forgetting for continual learning via model adaptation. In *ICLR*, 2018. Ahmet Iscen, Jeffrey Zhang, Svetlana Lazebnik, and Cordelia Schmid. Memory-efficient incremental learning through feature adaptation. In *ECCV*, pp. 699–715. Springer, 2020. David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In *AAAI*, volume 32, 2018. Quentin Jodelet, Xin Liu, Yin Jun Phua, and Tsuyoshi Murata. Class-incremental learning using diffusion model for distillation and replay. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3425–3433, 2023. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *PNAS*, 114(13):3521–3526, 2017. Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. *NeurIPS*, 30, 2017. Zhizhong Li and Derek Hoiem. Learning without forgetting. *TPAMI*, 40(12):2935–2947, 2017. Yaoyao Liu, Yuting Su, An-An Liu, Bernt Schiele, and Qianru Sun. Mnemonics training: Multi-class incremental learning without forgetting. In *CVPR*, pp. 12245–12254, 2020. Yaoyao Liu, Bernt Schiele, and Qianru Sun. Adaptive aggregation networks for class-incremental learning. In *CVPR*, pp. 2544–2553, 2021a. Yaoyao Liu, Bernt Schiele, and Qianru Sun. Rmm: Reinforced memory management for class-incremental learning. *NeurIPS*, 34:3478–3490, 2021b. Yaoyao Liu, Qianru Sun, Xiangnan He, An-An Liu, Yuting Su, and Tat-Seng Chua. Generating face images with attributes for free. *TNNLS*, 32(6):2733–2743, 2021c. David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In *NIPS*, pp. 6467–6476, 2017. Zilin Luo, Yaoyao Liu, Bernt Schiele, and Qianru Sun. Class-incremental exemplar compression for classincremental learning. In *CVPR*, pp. 11371–11380, 2023. Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109–165. Elsevier, 1989. Grégoire Petit, Adrian Popescu, Hugo Schindler, David Picard, and Bertrand Delezoide. Fetril: Feature translation for exemplar-free class-incremental learning. In *WACV*, pp. 3911–3920, 2023. Quang Pham, Chenghao Liu, and Steven Hoi. Dualnet: Continual learning, fast and slow. *NeurIPS*, 34: 16131–16144, 2021. Ameya Prabhu, Philip HS Torr, and Puneet K Dokania. Gdumb: A simple approach that questions our progress in continual learning. In *ECCV*, 2020. R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. *Psychological Review*, 97:285–308, 1990. 
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In *CVPR*, pp. 2001–2010, 2017. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *CVPR*, pp. 10684–10695, 2022. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pp. 618–626, 2017. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. NeurIPS, 30, 2017. Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. *arXiv* preprint arXiv:2209.14792, 2022. Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual description. *arXiv preprint arXiv:2210.02399*, 2022. Gregory K Wallace. The jpeg still picture compression standard. *IEEE Transactions on Consumer Electronics*, 38(1):xviii–xxxiv, 1992. Fu-Yun Wang, Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. Foster: Feature boosting and compression for class-incremental learning. In *ECCV*, pp. 398–414. Springer, 2022a. Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, and Jun Zhu. Memory replay with data compression for continual learning. *arXiv* preprint arXiv:2202.06592, 2022b. Shipeng Yan, Jiangwei Xie, and Xuming He. Der: Dynamically expandable representation for class incremental learning. In *CVPR*, pp. 3014–3023, 2021. Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. *arXiv preprint arXiv:1708.01547*, 2017. Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, pp. 3987–3995. PMLR, 2017. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In *ICCV*, pp. 3836–3847, 2023. Bowen Zhao, Xi Xiao, Guojun Gan, Bin Zhang, and Shu-Tao Xia. Maintaining discrimination and fairness in class incremental learning. In *CVPR*, pp. 13208–13217, 2020. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In *CVPR*, pp. 2921–2929, 2016a. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, and Aude Oliva. Places: An image database for deep scene understanding. *arXiv preprint arXiv:1610.02055*, 2016b. Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, and De-Chuan Zhan. A model or 603 exemplars: Towards memoryefficient class-incremental learning. *arXiv preprint arXiv:2205.13218*, 2022. Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, and De-Chuan Zhan. Pycil: a python toolbox for class-incremental learning. *SCIENCE CHINA Information Sciences*, 66(9):197101–, 2023. doi: https://doi.org/10.1007/ s11432-022-3600-y. Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. Prototype augmentation and selfsupervision for incremental learning. In *CVPR*, pp. 5871–5880, 2021. ## A Dataset Information We present the detailed information of the datasets used in our experiments in Table 8. 
Table 8: Detailed information of the four datasets, including the number of classes, the total and average number of training/test images per class, and the median image size.

| Dataset | Caltech-256 | Food-101 | Places-100 | ImageNet-100 |
|---------|-------------|----------|------------|--------------|
| # classes | 256 | 101 | 100 | 100 |
| Total/average # training images | 21,436/84 | 75,750/750 | 300,000/3,000 | 128,856/1,289 |
| Total/average # test images | 5,472/21 | 25,250/250 | 10,000/100 | 5,000/50 |
| Median image size (h × w) | 289 × 300 | 512 × 512 | 512 × 683 | 375 × 500 |

## B Data Preprocessing

The image transformation procedures are listed in Table 9. Following FOSTER (Wang et al., 2022a), we apply AutoAugment (Cubuk et al., 2019) in all CIL methods for a fair comparison. The same training transformations are applied to both real and generated images.

Table 9: Training and test image transformations. Normalization has mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225].

| Training transformations | Test transformations |
|-------------------------------|----------------------|
| RandomResizedCrop(224) | Resize(256) |
| RandomHorizontalFlip(p = 0.5) | CenterCrop(224) |
| ColorJitter(brightness=63/255) | ToTensor() |
| ImageNetPolicy() | Normalize() |
| ToTensor() | |
| Normalize() | |

## C Image Resizing

There are two approaches to transform an image (h × w) into an edge map (H × W), which may have a different size in order to accommodate the input requirement of ControlNet. *Resizing edge map*: convert the image (h × w) to an edge map (h × w) and then resize the edge map to (H × W) by nearest-neighbor interpolation. *Resizing image*: resize the image (h × w) into an intermediate image (H × W) by Lanczos interpolation and then convert it to an edge map (H × W). We compare these two approaches qualitatively in Figure 4 and find that resizing the image produces generations of higher quality. The two approaches consume similar memory in total.

![14_image_0.png](14_image_0.png)

Figure 4: Two example images from Caltech-256 and their generated versions by resizing edge map and resizing image.

## D Image Generations

From each dataset, we show two images with their class labels, Canny edge maps, and generations by ControlNet in Figure 5. In general, the quality of generations is satisfying. ControlNet is able to generate diverse images simply by changing the random seed.

![15_image_0.png](15_image_0.png)

![15_image_1.png](15_image_1.png)

Figure 5: Random images selected from the four datasets, with their class labels, edge maps, and five generations from ControlNet. Random seeds 0, 1, 2, 3, and 4 are used to generate the five images respectively.
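To make the two resizing routes of Appendix C concrete, here is a minimal sketch using OpenCV and PIL. The Canny thresholds (100, 200) and the function names are illustrative assumptions; only the order of operations and the interpolation choices follow the description above.

```python
import cv2
import numpy as np
from PIL import Image

def edge_map_resize_edges(img: Image.Image, H: int, W: int) -> np.ndarray:
    """Route 1 (resizing edge map): Canny at the original resolution, then
    nearest-neighbor resize of the binary edge map to (H, W)."""
    edges = cv2.Canny(np.array(img.convert("L")), 100, 200)  # thresholds assumed
    return cv2.resize(edges, (W, H), interpolation=cv2.INTER_NEAREST)

def edge_map_resize_image(img: Image.Image, H: int, W: int) -> np.ndarray:
    """Route 2 (resizing image): Lanczos resize to (H, W) first, then Canny on
    the resized image. Found above to yield higher-quality generations."""
    resized = img.convert("L").resize((W, H), resample=Image.LANCZOS)
    return cv2.Canny(np.array(resized), 100, 200)  # thresholds assumed
```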
# Enhancing Vision-Language Model With Unmasked Token Alignment

Jihao Liu *jihaoliu@link.cuhk.edu.hk* CUHK MMLab

Jinliang Zheng *2toinf@bupt.edu.cn* Institute for AI Industry Research (AIR), Tsinghua University

Boxiao Liu *liuboxiao@sensetime.com* Sensetime Research

Yu Liu *liuyuisanai@gmail.com* Sensetime Research

Hongsheng Li *hsli@ee.cuhk.edu.hk* CUHK MMLab, CPII under InnoHK, Shanghai AI Laboratory

Reviewed on OpenReview: *https://openreview.net/forum?id=JkFEVbW6wE*

## Abstract

Contrastive pre-training on image-text pairs, exemplified by CLIP, has become a standard technique for learning multi-modal visual-language representations. Although CLIP has demonstrated remarkable performance, training it from scratch on noisy web-scale datasets is computationally demanding. On the other hand, mask-then-predict pre-training approaches, like Masked Image Modeling (MIM), offer efficient self-supervised learning for single-modal representations. This paper introduces Unmasked Token Alignment (UTA), a method that leverages existing CLIP models to further enhance their vision-language representations. UTA trains a Vision Transformer (ViT) by aligning unmasked visual tokens to the corresponding image tokens from a frozen CLIP vision encoder, which automatically aligns the ViT model with the CLIP text encoder. The pre-trained ViT can be directly applied for zero-shot evaluation even without training on image-text pairs. Compared to MIM approaches, UTA does not suffer from training-finetuning inconsistency and is much more training-efficient by avoiding the extra [MASK] tokens. Extensive experimental results demonstrate that UTA can enhance CLIP models and outperform existing MIM methods on various uni- and multi-modal benchmarks.

## 1 Introduction

Contrastive pre-training, e.g., CLIP (Radford et al., 2021), with web-scale image-text pairs is becoming the mainstream technique for learning multi-modal visual-language representations. The pre-trained CLIP model has unlocked the potential of various downstream applications, including zero-shot image classification and retrieval, and high-quality text-to-image generation (Rombach et al., 2022; Ramesh et al., 2022). Furthermore, the pre-trained visual and text encoders can be further used for multi-modal and even uni-modal tasks. Unlike classical supervised learning on human-annotated classification datasets, CLIP and its variants are typically trained on much noisier datasets found on the web, such as LAION (Schuhmann et al., 2022) and WIT (Radford et al., 2021), and require an extremely large batch size to work well. Directly training on those datasets from scratch requires a lot of computing resources, making it inaccessible to most researchers.

In contrast, mask-then-predict pre-training approaches, e.g., Masked Image Modeling (MIM) (He et al., 2021; Xie et al., 2021) and Masked Language Modeling (MLM) (Devlin et al., 2019), have been shown to be an efficient and powerful way to learn single-modal (visual or language) representations in a self-supervised manner and can achieve strong performance by fine-tuning the pre-trained models on downstream tasks. The key design of those methods is to predict the masked tokens from the other visible and unmasked input tokens. We ask the question: can we take advantage of both types of methods and further enhance the vision-language representations over CLIP? There are recent works, e.g., EVA (Fang et al., 2023b), utilizing a pre-trained CLIP model for generating the prediction targets for MIM.
The resulting vision models show stronger performance than encoders pre-trained using either MIM or CLIP alone, demonstrating the effectiveness of combining MIM and CLIP for multi-modal feature learning. However, those methods are limited to learning single-modal representations, and extra contrastive fine-tuning is needed for multi-modal feature learning, as proposed in EVA-CLIP (Sun et al., 2023).

In this paper, we propose an efficient method, Unmasked Token Alignment (UTA), for enhancing the alignment between vision and language representations, which better utilizes existing pre-trained CLIP models. In particular, our method trains a Vision Transformer (ViT) (Dosovitskiy et al., 2021) model from scratch by using the unmasked and sparse visual tokens to align with the corresponding image tokens of a frozen CLIP model. For the train-from-scratch ViT model, we randomly mask a portion of image tokens with a *reversed* masking strategy, where only the unmasked (i.e., kept) tokens (including the [CLS] token) are inputted into the ViT model and aligned with the output of the frozen CLIP visual model. We maximize the cosine similarity for token alignment, and therefore, the ViT model is automatically aligned with the CLIP text encoder in the normalized embedding space.

There are two major advantages of using the proposed unmasked token alignment strategy. 1) After pre-training the vision model, we can directly conduct zero-shot classification and retrieval using the normalized features of the trained ViT model and the CLIP text encoder. We illustrate the pre-training and fine-tuning pipeline of UTA in Fig. 1. In contrast, the masked prediction objective used in existing MIM works (EVA (Fang et al., 2023b), BEiT-3 (Wang et al., 2022b)) relies on the [MASK] tokens to predict the CLIP features, while the unmasked tokens are not trained to align with the CLIP model as we do. They do not support zero-shot evaluation without contrastive fine-tuning, as only the unmasked tokens are used for zero-shot evaluation. 2) MIM works suffer from training-finetuning inconsistency, as a large portion of [MASK] tokens never appears during fine-tuning. In contrast, our approach better maintains training-finetuning consistency by only inputting and aligning the unmasked tokens, which are processed both in training and inference. We also empirically find that further adding the masked prediction objective to our UTA results in much worse zero-shot performance.

Compared to the existing MIM approach that relies on the [MASK] tokens to predict the CLIP features with the masked prediction objective, our method is conceptually simple and computationally efficient: by avoiding the [MASK] tokens, it can reduce the training FLOPs by up to 50%. At the same time, our pre-trained models are also suitable for fine-tuning on downstream uni-modal or multi-modal tasks. In particular, our pre-trained ViT-L obtains 78.5% zero-shot accuracy on ImageNet without contrastive fine-tuning on image-text pairs. After fine-tuning with the DataComp-1B dataset (Gadre et al., 2023), we obtain 80.8% zero-shot accuracy on ImageNet, surpassing the DataComp baseline and EVA-02-CLIP by 1.6% and 1.0%, respectively. On the more recent multi-modal benchmark, i.e., LLaVA-Bench (Liu et al., 2023), we outperform CLIP and EVA-02 by 2.2% and 1.4%, respectively.
We also fine-tune the pre-trained vision model on object detection and segmentation tasks and demonstrate better results than the competitive EVA-02 (Fang et al., 2023a) models on those tasks.

## 2 Method

In this section, we first review the widely used Masked Image Modeling (MIM) pre-training and its more advanced variant equipped with a pre-trained CLIP model. We then introduce the Unmasked Token Alignment (UTA) approach and its implementation.

## 2.1 A Revisit of Masked Image Modeling with CLIP

MIM methods (Bao et al., 2021; He et al., 2021; Xie et al., 2021) typically use a Vision Transformer (ViT) (Dosovitskiy et al., 2021) for pre-training. An input image is first divided into non-overlapping image patches, which are converted into a sequence of tokens with a projection layer and positional embedding. Then a portion of the tokens are randomly sampled, and the masked tokens are filled with a special [MASK] token. The masked image is processed by the ViT to produce latent representations, and a lightweight head is utilized to predict the original image based on the latent representations. After pre-training, the ViT is used for further fine-tuning on downstream visual tasks.

![2_image_0.png](2_image_0.png)

Figure 1: Overview of Unmasked Token Alignment (UTA). During the pre-training of UTA, only the unmasked tokens are inputted into the vision encoder and aligned with the CLIP vision encoder. After pre-training, the pre-trained vision encoder is automatically aligned with the CLIP text encoder and can be directly applied for zero-shot evaluation even without contrastive training on image-text pairs. The pre-trained vision encoder can be further fine-tuned for uni-modal or multi-modal downstream tasks.

Some recent papers (Peng et al., 2022; Fang et al., 2023b; Hou et al., 2022; Xiao et al., 2022) utilize the hidden features of a pre-trained CLIP model as the reconstruction targets and achieve much better performance than methods using low-level pixels as the targets (He et al., 2021; Xie et al., 2021). In particular, the unmasked image is fed into the visual encoder of the CLIP model to obtain the full image's hidden feature map. The masked prediction objective is to align the predicted features with the CLIP's visual features on the masked tokens.

## 2.2 Unmasked Token Alignment

Using the masked prediction objective to align a train-from-scratch ViT model with the pre-trained CLIP visual model still uses the problematic [MASK] tokens. It causes training-finetuning inconsistency and makes the trained ViT unable to perform zero-shot classification without fine-tuning. To tackle the issue, we propose a simple yet effective solution that does not utilize the extra [MASK] tokens. We align the feature maps of the two models with a dense distillation objective, where the feature maps of the train-from-scratch ViT model and the CLIP vision encoder are obtained from a partial view and a full view, respectively. Specifically, given an input image, we use a random mask to mask a portion of image tokens. Unlike previous works that use the [MASK] tokens to fill in the masked patches, we directly drop the masked tokens and only input the remaining tokens into the ViT encoder. For the pre-trained CLIP model, we input the original image and obtain the full hidden feature map. Then we select the corresponding unmasked (kept) tokens from the CLIP vision encoder's feature map, which are used as the targets for the train-from-scratch ViT encoder. The cosine similarity is maximized for the token alignment. After pre-training, the ViT encoder is aligned with the CLIP vision encoder in the normalized embedding space. Therefore, the ViT encoder is also aligned with the CLIP text encoder, as the CLIP vision and text encoders share the same embedding space. As a result, we can directly conduct zero-shot evaluation with the pre-trained ViT encoder and the CLIP text encoder even without training on image-text pairs. We show that we can already achieve decent zero-shot performance after the unmasked alignment.
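To make the alignment objective concrete, here is a minimal PyTorch-style sketch of one pre-training step under our reading of the description above. The module names and call signatures (`student_vit`, `clip_vision`) are illustrative assumptions, not the released implementation; the 0.5 reversal probability follows the reversed block-wise masking strategy described next.

```python
import torch
import torch.nn.functional as F

def reversed_block_mask(block_mask: torch.Tensor) -> torch.Tensor:
    """Reverse each sample's block-wise mask with probability 0.5 (see the
    'Reversed block-wise masking' paragraph below). True = masked/dropped."""
    flip = torch.rand(block_mask.size(0), 1, device=block_mask.device) < 0.5
    return block_mask ^ flip

def uta_loss(student_vit, clip_vision, images, block_mask):
    """One UTA step (our own sketch). The student sees only the kept tokens
    plus [CLS]; the frozen CLIP vision encoder sees the full image, and we
    maximize cosine similarity on the corresponding kept tokens. Assumes a
    fixed number of kept tokens per sample (fixed mask ratio)."""
    mask = reversed_block_mask(block_mask)           # (B, N), True = dropped
    keep = ~mask                                     # kept patch tokens

    student = student_vit(images, keep)              # (B, 1 + N_keep, D)
    with torch.no_grad():
        target = clip_vision(images)                 # (B, 1 + N, D), frozen

    # Select the [CLS] token and the kept patch tokens from the CLIP targets.
    cls_keep = torch.ones(keep.size(0), 1, dtype=torch.bool, device=keep.device)
    target = target[torch.cat([cls_keep, keep], dim=1)].view_as(student)

    # Cosine alignment: equivalent to MSE between L2-normalized tokens.
    return 1 - F.cosine_similarity(student, target, dim=-1).mean()
```

Because the loss aligns tokens up to normalization, the pre-trained student can be paired directly with the CLIP text encoder for zero-shot classification and retrieval, exactly as described above.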
Reversed block-wise masking. Previous works (Bao et al., 2021) typically use block-wise masking to preserve the structure of input images. However, we note that such masking is spatially non-uniform: it tends to mask the center area of the images with a much higher probability, so tokens in the border area are trained many more times than tokens in the center area. We introduce a reversed block-wise masking strategy, which first generates a mask with block-wise masking and then randomly reverses the mask with a probability of 0.5. Our masking strategy preserves the structure of the input images and also alleviates the spatial non-uniformity.

Pre-training efficiency analysis. As we do not need to process the extra [MASK] tokens during pre-training, we can largely improve masked training efficiency. In practice, we use a large mask ratio, e.g., 0.5, for pre-training. Thus, compared to EVA (Fang et al., 2023b) or BEiT v2 (Peng et al., 2022), which require inputting extra [MASK] tokens, our UTA can reduce the training FLOPs by 50%.

## 2.3 Implementation

Vision transformer architecture. We follow EVA-02 (Fang et al., 2023a) and introduce architectural modifications on the vision transformer to improve performance and training stability. In particular, we add the extra relative positional embedding introduced by Su et al. (2021) in the self-attention layer. We replace the original feedforward network (FFN) in the vision transformer with the SwiGLU variant introduced by Shazeer (2020). Moreover, we add an extra LayerNorm (Ba et al., 2016) layer in the FFN to stabilize the training, as proposed by Wang et al. (2022a). Note that these architectural modifications are not contributions of our method; we refer the readers to EVA-02 (Fang et al., 2023a) for a detailed performance comparison of these modifications.

CLIP teacher model. Instead of using the original CLIP models for pre-training, we follow Fang et al. (2023a) and use a better-performing CLIP model, i.e., the giant-sized EVA-01-CLIP model (Sun et al., 2023), to provide the alignment targets during pre-training. Our experiments show that the stronger CLIP model brings large zero-shot accuracy improvements. Additionally, we find that our ViT-L model can surpass the giant-sized CLIP models (e.g., Open-CLIP and EVA-01-CLIP) after contrastive fine-tuning, as shown in Tab. 1.

## 3 Experimental Setup

To demonstrate the effectiveness of the proposed Unmasked Token Alignment (UTA), we conduct experiments that pre-train a ViT to align with the CLIP vision-language representation on a large-scale dataset and apply the pre-trained models to downstream multi-modal and uni-modal tasks. The multi-modal tasks include zero-shot classification, zero-shot retrieval, and the more recent LLaVA-Bench (Liu et al., 2023). The uni-modal tasks include ImageNet classification (Deng et al., 2009), object detection, and segmentation.
Pre-training. All ViT models are pre-trained on the ImageNet-21K (Deng et al., 2009) dataset using 224×224 input resolution and a patch size of 14. Unless otherwise specified, we pre-train for 150 epochs with a batch size of 4096. We use the AdamW (Loshchilov & Hutter, 2017) optimizer with a weight decay of 0.05. The learning rate is linearly increased to 1.5 × 10⁻³ within 1 epoch of training and decays to 10⁻⁵ with a cosine schedule (Loshchilov & Hutter, 2016). By default, we use reversed block-wise masking with mask ratios of 0.4 and 0.5 for the base and large models, respectively.

Contrastive fine-tuning. Although the pre-trained ViT model already demonstrates excellent zero-shot capabilities without contrastive fine-tuning, we also perform a much shorter contrastive fine-tuning, similar to other CLIP counterparts, to further improve its zero-shot performance, especially on out-of-distribution tasks. In particular, we initialize the vision and text encoders with the pre-trained ViT model and the CLIP text encoder. Then we perform contrastive fine-tuning on the DataComp-1B dataset (Gadre et al., 2023). The temperature parameter in the contrastive loss (Radford et al., 2021) is fixed to 0.01 during our training, as the vision and text encoders are already aligned initially. To be comparable with other methods (Cherti et al., 2023), we use a patch size of 16 for fine-tuning UTA-B.

Fine-tuning. For evaluation on LLaVA-Bench (Liu et al., 2023) and the uni-modal tasks, we keep only the pre-trained ViT. On LLaVA-Bench, we follow the default settings to first train a projection layer on the CC-3M dataset (Sharma et al., 2018) for feature alignment and then fine-tune the projection layer and the Large Language Model (LLM) (Chiang et al., 2023) on the LLaVA-Instruct-150K dataset (Liu et al., 2023). For the object detection and instance segmentation tasks, we adopt the Cascade Mask R-CNN (He et al., 2017; Cai & Vasconcelos, 2019) framework and separately fine-tune on the COCO (Lin et al., 2014) and LVIS (Gupta et al., 2019) datasets. For the semantic segmentation task, we adopt the UperNet (Xiao et al., 2018) framework and fine-tune on the ADE20K (Zhou et al., 2017) dataset. Please refer to Appendix A.1 for more detailed configurations.

## 4 Main Results

In this section, we compare the proposed Unmasked Token Alignment (UTA) to prior arts on various benchmarks. We first compare UTA with previous zero-shot results in Sec. 4.1. We then compare UTA with other pre-training methods on LLaVA-Bench in Sec. 4.2. To show the transferability of UTA, we present the transfer learning results on core vision tasks in Sec. 4.3.
Table 1: Zero-shot classification accuracy (%) on ImageNet-1K, its variants (IN-A, IN-R, IN-V2, IN-S), and ObjectNet. Following Sun et al. (2023), the numbers of I-T pairs do not account for the I-T pairs used to train the teacher model.

| Method | Model | # I-T Pairs | IN-1K | IN-A | IN-R | IN-V2 | IN-S | ObjectNet | Average |
|--------|-------|-------------|-------|------|------|-------|------|-----------|---------|
| CLIP | B/16@224 | 13B | 68.3 | 50.0 | 77.7 | 61.9 | 48.2 | 55.3 | 60.2 |
| Open-CLIP | B/16@224 | 34B | 70.2 | 38.2 | 80.6 | 62.3 | 56.1 | 56.0 | 60.6 |
| EVA-02-CLIP | B/16@224 | 8B | 74.7 | 54.1 | 82.5 | 67.0 | 57.7 | 62.3 | 66.4 |
| UTA | B/14@224 | 0B | 76.0 | 54.2 | 76.7 | 68.1 | 52.5 | 63.6 | 65.2 |
| UTA | B/16@224 | 2B | 77.0 | 59.8 | 84.1 | 69.5 | 60.2 | 68.3 | 69.8 |
| CLIP | L/14@224 | 13B | 74.0 | 48.0 | 86.5 | 66.4 | 61.8 | 61.1 | 66.3 |
| Open-CLIP | L/14@224 | 32B | 75.5 | 70.8 | 87.8 | 69.9 | 59.6 | 69.0 | 72.1 |
| DataComp | L/14@224 | 13B | 79.2 | 69.6 | 90.8 | 72.1 | 68.0 | 74.3 | 75.7 |
| EVA-02-CLIP | L/14@224 | 4B | 79.8 | 76.1 | 92.7 | 72.9 | 68.1 | 75.3 | 77.5 |
| UTA | L/14@224 | 0B | 78.5 | 69.4 | 89.4 | 71.7 | 63.9 | 72.7 | 74.3 |
| UTA | L/14@224 | 4B | 80.8 | 79.1 | 92.3 | 73.7 | 68.4 | 77.6 | 78.6 |
| CLIP | L/14@336 | 13B | 76.6 | 77.5 | 89.0 | 70.9 | 61.0 | 72.0 | 74.5 |
| EVA-02-CLIP | L/14@336 | 6B | 80.4 | 82.9 | 93.2 | 73.8 | 68.9 | 78.4 | 79.6 |
| UTA | L/14@336 | 4B | 81.4 | 84.2 | 92.9 | 74.6 | 69.1 | 80.1 | 80.4 |
| Open-CLIP | g/14@224 | 34B | 78.5 | 60.8 | 90.2 | 71.7 | 67.5 | 69.2 | 73.0 |
| EVA-01-CLIP | g/14@224 | 11B | 79.3 | 74.1 | 92.5 | 72.1 | 68.1 | 75.3 | 76.9 |
| UTA | g/14@224 | 0B | 79.3 | 73.5 | 91.6 | 72.6 | 66.7 | 74.6 | 76.4 |
| UTA | g/14@224 | 2B | 81.5 | 81.9 | 93.5 | 74.8 | 69.6 | 79.7 | 80.2 |
| UTA | g/14@336 | 3B | 83.9 | 87.5 | 94.5 | 76.9 | 71.6 | 81.9 | 82.7 |

## 4.1 Zero-Shot Results

We conduct zero-shot classification and retrieval and compare the results with other CLIP variants (Radford et al., 2021; Cherti et al., 2023; Sun et al., 2023). In Tab. 1, we show that the pre-trained ViT-B model obtains 76.0% zero-shot accuracy on ImageNet-1K even without training on image-text pairs. After fine-tuning with only 2B image-text samples, our ViT-B obtains 77.0% zero-shot accuracy on ImageNet-1K, surpassing Open-CLIP (Cherti et al., 2023) and EVA-02-CLIP (Sun et al., 2023) by 2.3% and 1.0%, respectively. On the challenging ObjectNet (Barbu et al., 2019) dataset, we outperform Open-CLIP and EVA-02-CLIP by 11.3 and 6.0 points, respectively. Our pre-trained ViT-L model obtains 78.5% zero-shot accuracy on ImageNet-1K. After fine-tuning with 4B samples, we achieve 80.8% accuracy, which outperforms Open-CLIP and EVA-02-CLIP by 5.3% and 1.0%, respectively. Compared to the strong EVA-02-CLIP, we achieve an average improvement of 1.1% over the 6 evaluation datasets. We also fine-tune with 336×336 input resolution using 200M samples and obtain an average improvement of 1.8 points on the 6 evaluation datasets. We find that fine-tuning on the larger but noisier DataComp-1B dataset (Gadre et al., 2023) can greatly boost the performance on the ImageNet robust variants.

Table 2 presents the zero-shot retrieval results on the Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014) datasets. We find that the pre-trained model can already outperform other CLIP models on all evaluated metrics.
In particular, the base model outperforms Open-CLIP and EVA-02-CLIP by an average of 4% top-1 recall over the two datasets. For the large model, we outperform Open-CLIP and EVA-02-CLIP by an average of 3.4% and 1.8% top-1 recall, respectively. We also find that further fine-tuning on the DataComp-1B dataset can improve the text retrieval performance but degrade the image retrieval performance, which may be related to the filtering strategies used for DataComp-1B (Gadre et al., 2023).

Table 2: Zero-shot retrieval performance on Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014). R@1, R@5, and R@10 denote the recall performance among the top-1, top-5, and top-10 results, respectively. Following Sun et al. (2023), the numbers of I-T pairs do not account for the I-T pairs used to train the teacher model.

| Method | Model | # I-T Pairs | Text Retrieval | | | | | | Image Retrieval | | | | | |
|--------|-------|-------------|------|------|------|------|------|------|------|------|------|------|------|------|
| | | | Flickr30k | | | COCO | | | Flickr30k | | | COCO | | |
| | | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 |
| CLIP | B | 13B | 81.9 | 96.2 | 98.8 | 52.4 | 76.8 | 84.7 | 62.1 | 85.6 | 91.8 | 33.1 | 58.4 | 69.0 |
| Open-CLIP | B | 34B | 86.3 | 97.9 | 99.4 | 59.4 | 81.8 | 88.6 | 69.8 | 90.4 | 94.6 | 42.3 | 66.7 | 77.1 |
| EVA-02-CLIP | B | 8B | 85.7 | 96.7 | 98.9 | 58.7 | 80.7 | 88.2 | 71.2 | 91.0 | 94.7 | 42.4 | 66.9 | 76.3 |
| UTA | B | 0B | 88.4 | 98.5 | 99.5 | 63.4 | 83.9 | 90.0 | 75.5 | 93.1 | 96.4 | 46.8 | 71.5 | 80.8 |
| UTA | B | 2B | 91.3 | 98.9 | 99.7 | 64.7 | 85.0 | 90.5 | 74.5 | 93.1 | 96.0 | 45.9 | 70.5 | 79.3 |
| CLIP | L | 13B | 85.2 | 97.3 | 99.0 | 56.3 | 79.3 | 86.7 | 65.2 | 87.3 | 92.0 | 36.5 | 61.0 | 71.1 |
| Open-CLIP | L | 34B | 88.7 | 98.4 | 99.2 | 62.1 | 83.4 | 90.3 | 75.0 | 92.5 | 95.6 | 46.1 | 70.7 | 79.4 |
| DataComp | L | 13B | 89.5 | 98.6 | 99.7 | 63.3 | 84.2 | 90.4 | 73.4 | 91.7 | 95.4 | 45.7 | 70.0 | 79.2 |
| EVA-02-CLIP | L | 4B | 89.7 | 98.6 | 99.2 | 63.7 | 84.3 | 90.4 | 77.3 | 93.6 | 96.8 | 47.5 | 71.2 | 79.7 |
| UTA | L | 0B | 91.2 | 98.7 | 99.8 | 66.6 | 86.5 | 91.5 | 78.3 | 94.1 | 96.9 | 49.5 | 73.4 | 81.9 |
| UTA | L | 4B | 93.0 | 99.0 | 99.7 | 66.5 | 86.9 | 92.2 | 77.4 | 93.8 | 96.6 | 48.7 | 72.3 | 80.9 |
| Open-CLIP | g | 34B | 91.4 | 99.2 | 99.6 | 66.4 | 86.0 | 91.8 | 77.7 | 94.1 | 96.9 | 48.8 | 73.3 | 81.5 |
| EVA-01-CLIP | g | 11B | 91.6 | 99.3 | 99.8 | 68.2 | 87.5 | 92.5 | 78.9 | 94.5 | 96.9 | 50.3 | 74.0 | 82.1 |
| UTA | g | 0B | 92.2 | 99.1 | 99.7 | 68.0 | 87.2 | 92.4 | 79.0 | 94.5 | 97.2 | 50.3 | 74.2 | 82.5 |
| UTA | g | 2B | 93.2 | 99.4 | 99.8 | 68.2 | 87.6 | 93.0 | 78.2 | 94.4 | 96.7 | 48.7 | 72.9 | 81.1 |

Question: What is the position of the skateboard in the image?

![5_image_0.png](5_image_0.png)

EVA: The skateboard is on the ground, with the person standing on top of it.

UTA: The skateboard is positioned upright, with the wheels off the ground, and the deck facing upwards.

Question: What is the man sitting in the middle doing in the image?

EVA: The man in the image is sitting down, holding a glass of beer, and making a gesture or a sign with his hand.

UTA: The man in the image is sitting down, talking on his cell phone, and holding his hands up while doing so.
Figure 2: Qualitative examples generated by LLaVA models fine-tuned with EVA-02 and UTA.

## 4.2 Multi-Modal Results

The emergent multi-modal capabilities of GPT-4 (OpenAI, 2023) have attracted widespread attention, and there are various re-implementations of such capabilities using open-sourced vision and large language models (Liu et al., 2023; Zhu et al., 2023). We adopt the LLaVA framework (Liu et al., 2023) and evaluate the pre-trained models on LLaVA-Bench (Liu et al., 2023). The results are presented in Tab. 3. Note that all the results are obtained with the vision encoders' parameters fixed, which directly reflects the representation quality of the pre-trained models. Notably, our model achieves the best results in the overall category. Compared to the original CLIP large model (Radford et al., 2021), we obtain an overall improvement of 2.2%. Using the same pre-training dataset and iterations, we also outperform EVA-02 (Fang et al., 2023a) by 1.4%. We compare the outputs generated by the two LLaVA models and highlight the differences in Fig. 2, showing that our approach captures more fine-grained details and produces better answers.

Table 3: Results on LLaVA-Bench (Liu et al., 2023). The results of CLIP and EVA-02 are obtained by our re-implementation with official checkpoints. Conversation, Detail, and Reasoning represent the performance on conversation, detailed description, and complex reasoning, respectively. The overall result is a summary of the results of the three categories.

| Method | Model | Conversation | Detail | Reasoning | Overall |
|--------|-------|--------------|--------|-----------|---------|
| CLIP | B/16 | 74.5 | **69.9** | 90.3 | 78.3 |
| EVA-02 | B/16 | 75.3 | 61.1 | **91.8** | 76.2 |
| UTA | B/16 | **80.8** | 66.2 | 88.8 | **78.8** |
| CLIP | L/14 | 78.7 | 70.4 | 90.0 | 79.8 |
| EVA-02 | L/14 | 80.4 | 71.6 | 91.1 | 80.6 |
| UTA | L/14 | 81.4 | 72.2 | 91.8 | **82.0** |
| EVA-01 | g/14 | 79.9 | **72.2** | 91.0 | 80.8 |
| UTA | g/14 | **84.1** | 71.3 | 93.5 | **83.1** |

Table 4: ImageNet classification and ADE20K segmentation results. ZS and FT denote the zero-shot and fine-tuning top-1 accuracy on ImageNet, respectively. † denotes the model after contrastive fine-tuning. ‡ indicates results equal to random choice without contrastive fine-tuning.

| Method | Model | #Params | ImageNet | | | ADE20K | |
|--------|-------|---------|------|------|------|------|------|
| | | | Input Size | ZS | FT | Input Size | mIoU |
| MAE | B | 86M | 224 | - | 83.6 | 512 | 48.1 |
| BEiT v2 | B | 86M | 224 | - | 85.5 | 512 | 53.1 |
| CLIP | B | 86M | 224 | 68.3 | 85.7 | - | - |
| EVA-02 | B | 86M | 224 | 0.1‡ | 87.4 | 512 | 55.3 |
| UTA | B | 86M | 224 | 76.0 | 87.5 | 512 | 55.6 |
| UTA† | B | 86M | 224 | 77.0 | 87.4 | 512 | 55.1 |
| MAE | L | 304M | 224 | - | 85.9 | 512 | 53.6 |
| BEiT v2 | L | 304M | 224 | - | 87.3 | 512 | 56.7 |
| CLIP | L | 304M | 224 | 74.0 | 88.0 | - | - |
| EVA-02 | L | 304M | 224 | 0.1‡ | 89.0 | 512 | 58.3 |
| UTA | L | 304M | 224 | 78.5 | 89.2 | 512 | 58.8 |
| EVA-01-CLIP | g | 1011M | 224 | 79.3 | 89.1 | 512 | 57.4 |

## 4.3 Core Vision Task Results

Prior arts (Bao et al., 2021; He et al., 2021) demonstrate that MIM pre-trained models achieve superior performance when fine-tuned on downstream tasks, including ImageNet classification, object detection, and image segmentation. Some recent papers (Xie et al., 2023) show that the mask-then-predict objective is the key to such fine-tuning capabilities. In our empirical evaluation, we show that our UTA pre-training also has such capabilities.
We present the results of ImageNet classification in Tab. 4. Compared to recent MIM works (e.g., BEiT v2 (Peng et al., 2022)) that also utilize a pre-trained CLIP model for pre-training, we obtain an improvement of ∼2 points after fine-tuning. We also largely outperform the CLIP model in both zero-shot and fine-tuning accuracy. Compared with EVA-02, although we only slightly improve the fine-tuning accuracy, we largely improve the zero-shot accuracy.

We show the results of object detection and instance segmentation on the COCO and LVIS datasets in Tab. 5. Compared to MAE pre-training (He et al., 2021), our UTA improves APbox by more than 1 mAP on COCO and 6 mAP on the more challenging LVIS. Additionally, our approach also performs better than EVA-02, with 2.0 and 0.6 APbox improvements on LVIS for the base and large models, respectively.

Table 5: Object detection and instance segmentation results on the COCO and LVIS datasets. † denotes the model after contrastive fine-tuning.

| Method | Model | #Enc. Params | COCO | | LVIS | |
|--------|-------|--------------|-------|--------|-------|--------|
| | | | APbox | APmask | APbox | APmask |
| ViTDet | B | 86M | 54.0 | 46.7 | 43.0 | 38.9 |
| EVA-02 | B | 86M | 55.5 | 47.1 | 47.1 | 41.4 |
| UTA | B | 86M | 55.8 | 47.7 | 49.1 | 43.1 |
| UTA† | B | 86M | 55.6 | 47.5 | 47.9 | 42.2 |
| ViTDet | L | 304M | 57.6 | 50.0 | 49.2 | 44.5 |
| EVA-02 | L | 304M | 58.5 | 50.3 | 55.3 | 48.6 |
| UTA | L | 304M | 58.7 | 50.5 | 55.9 | 49.5 |
| EVA-01-CLIP | g | 1011M | 58.7 | 50.4 | 53.8 | 47.7 |

Table 6: The effect of pre-training objectives. FD denotes our re-implementation of the Feature Distillation method (Wei et al., 2022). ZS and FT denote the zero-shot and fine-tuned top-1 accuracy on ImageNet, respectively.

| Config | FLOPs | ImageNet | | COCO | | LVIS | | ADE20K |
|--------|-------|------|------|-------|--------|-------|--------|------|
| | | ZS | FT | APbox | APmask | APbox | APmask | mIoU |
| FD | 1.0× | 74.7 | 87.2 | 55.2 | 47.0 | 47.9 | 42.2 | 54.7 |
| MIM | 1.0× | - | 86.9 | 54.7 | 46.6 | 46.6 | 41.1 | 54.3 |
| UTA+MIM | 1.0× | 70.7 | 87.2 | 55.4 | 47.1 | 47.7 | 42.0 | 54.8 |
| UTA | 0.6× | 75.0 | 87.3 | 55.7 | 47.4 | 48.9 | 43.1 | 55.4 |

## 5 Ablation Studies

In this section, we conduct ablation studies to evaluate the impact of different design choices of our proposed Unmasked Token Alignment (UTA). Unless otherwise specified, we use the ViT-B backbone and pre-train it for 90 epochs on the ImageNet-21K (Deng et al., 2009) dataset.

Pre-training objectives. We thoroughly explore the effect of pre-training objectives and show the results in Tab. 6. We also explore combining UTA and MIM by inputting masked and unmasked tokens simultaneously and conducting token alignment for unmasked tokens and feature prediction for masked tokens. We find that UTA performs best on all evaluated benchmarks while requiring the least computation cost. In particular, we find the improvements on LVIS are the most significant compared to other approaches. Moreover, we show that combining UTA and MIM leads to much worse zero-shot accuracy than using UTA alone, but similar fine-tuning accuracy on ImageNet.
We suspect the training-finetuning inconsistency introduced by the extra [MASK] tokens is more significant when the backbone is fixed for evaluation.

Different pre-trained CLIP models. We study the impact of different pre-trained CLIP models on downstream performance. As shown in Tab. 7, using a stronger CLIP teacher leads to better downstream performance. Additionally, we observe that the performance gap on COCO and ADE20K is not as significant as on ImageNet, probably because the classes of those datasets can already be easily classified by CLIP-L/14.

Table 7: The effect of the pre-trained CLIP teacher model. The first ImageNet ZS column is the teacher's own zero-shot accuracy; the remaining columns report our pre-trained model.

| CLIP Model | Teacher | Ours | | COCO | | ADE20K |
|------------|------|------|------|-------|--------|------|
| | ImageNet ZS | ImageNet ZS | ImageNet FT | APbox | APmask | mIoU |
| CLIP-L/14 | 74.0 | 67.7 | 86.6 | 55.6 | 47.3 | 53.7 |
| EVA-01-CLIP-g/14 | 79.3 | 75.0 | 87.3 | 55.7 | 47.4 | 55.4 |

UTA for pre-training the text encoder. While we perform UTA to pre-train only the vision encoder by default, we also explore using it to pre-train a text encoder from scratch. We train a smaller text encoder on DataComp-1B for 1 epoch. We use the text encoder of EVA-01-CLIP as the teacher model and apply random masking during pre-training. The hyper-parameters are the same as those used for pre-training the vision model. Empirically, we obtain only 54.5% zero-shot accuracy after pre-training, which is much lower than using the CLIP text encoder. We hypothesize that the text encoder is too small to benefit from such pre-training. Thus, we do not perform UTA for pre-training the text encoder.

Mask ratio and mask type. We examine the effect of the mask ratio and mask type on the final performance. As shown in Tab. 8 (left), a mask ratio of 0.4 achieves the best computation-performance trade-off. Additionally, the reversed block-wise masking performs best on all evaluated datasets.

Table 8: The effect of mask ratio (left) and mask type (right). Block-R denotes the reversed block-wise masking. We use a mask ratio of 0.5 for the mask type ablation.

| Ratio | FLOPs | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|-------|-------|------|------|-------|--------|------|
| 0.0 | 1.0× | 74.7 | 87.2 | 55.2 | 47.0 | 54.7 |
| 0.4 | 0.6× | 75.0 | 87.3 | 55.7 | 47.4 | 55.4 |
| 0.5 | 0.5× | 74.8 | 87.3 | 55.3 | 46.8 | 55.0 |
| 0.7 | 0.3× | 74.0 | 87.0 | 55.0 | 46.6 | 54.8 |

| Mask | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|--------|------|------|-------|--------|------|
| Block | 74.2 | 87.2 | 55.3 | 46.6 | 54.1 |
| Random | 74.7 | 87.2 | 55.1 | 46.4 | 54.6 |
| Block-R | 74.8 | 87.3 | 55.3 | 46.8 | 55.0 |

## 6 Related Works

Vision(-Language) Foundation Models. The Transformer architecture (Vaswani et al., 2017) has rapidly evolved to become a pivotal paradigm in both Computer Vision (CV) and Natural Language Processing (NLP). Models like BERT (Devlin et al., 2019) and the GPT series (Floridi & Chiriatti, 2020), built upon the Transformer architecture, have exhibited exceptional prowess across various language tasks. Simultaneously, in the field of CV, Vision Transformers (ViTs) (Dosovitskiy et al., 2021) have emerged as potent contenders, gradually displacing CNNs in various downstream vision tasks.
Furthermore, the fusion of text and images in a shared embedding space, exemplified by CLIP (Radford et al., 2021), has rendered the Transformer an indispensable tool for versatile uni- and multi-modal tasks. As training CLIP requires a large amount of computational resources, FLIP (Li et al., 2023b) proposes to mask the visual input tokens to accelerate the training process of CLIP. Recently, large-scale visual pre-training methods based on the Transformer architecture, such as BEiT-3 (Wang et al., 2022a) and EVA (Sun et al., 2023), have continuously pushed the performance boundaries of various downstream visual tasks. In this work, we introduce a simple yet effective large-scale pre-training method for enhancing multi-modal representations and demonstrate competitive performance on various uni- and multi-modal tasks.

**Masked Image Modeling (MIM).** MIM is a popular pretext task in which the vision model learns rich visual representations by reconstructing corrupted images. Its initial introduction can be traced back to ViT (Dosovitskiy et al., 2021) and iGPT (Chen et al., 2020). Subsequent advancements in the field, exemplified by the notable contributions of BEiT (Bao et al., 2021), MAE (He et al., 2021), and others (Wang et al., 2022b; Liu et al., 2022; Xie et al., 2021), have consistently elevated the performance of the MIM method across diverse downstream tasks. Recent works (Fang et al., 2023b; Peng et al., 2022; Hou et al., 2022; Xiao et al., 2022) have highlighted the utilization of carefully devised reconstruction targets, like the hidden features from a pre-trained CLIP model, which have been shown to help MIM acquire superior visual representations. However, these methods rely on the [MASK] tokens to predict the masked features/pixels, which introduces the training-finetuning inconsistency. While UMT (Li et al., 2023a) does not use the [MASK] tokens and only processes the unmasked tokens, it focuses on training video models and does not align with the CLIP text model without contrastive fine-tuning. In contrast, our UTA automatically aligns the train-from-scratch ViT model with the CLIP text model and enables zero-shot evaluation even without training on image-text pairs.

## 7 Conclusion

In this paper, we introduce the Unmasked Token Alignment (UTA) method, which enhances the alignment between vision and language representations by leveraging pre-trained CLIP models. UTA trains a Vision Transformer (ViT) by aligning the unmasked tokens with the corresponding visual tokens of a frozen CLIP model. UTA does not suffer from training-finetuning inconsistency and is training-efficient by avoiding the use of extra [MASK] tokens. The pre-trained ViT model and CLIP text model can be directly applied for zero-shot evaluation even without contrastive training on image-text pairs. Experimental results demonstrate the effectiveness of UTA across various uni- and multi-modal downstream tasks, outperforming existing MIM and CLIP methods.

**Limitations.** While the proposed UTA method presents promising results and advantages, it also has some limitations. Firstly, UTA relies on the availability of a pre-trained CLIP model, which may limit its applicability in scenarios where such models are not accessible or suitable. Additionally, although UTA achieves strong zero-shot performance without contrastive fine-tuning, it still benefits from further fine-tuning on large-scale image-text pairs, especially for robustness evaluation.
While UTA shows great potential for enhancing multi-modal representations, further research is needed to address these limitations and improve its applicability in a wider range of applications. ## References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016. Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. In *ICLR*, 2021. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. Advances in neural information processing systems, 32, 2019. Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: High quality object detection and instance segmentation. IEEE transactions on pattern analysis and machine intelligence, 43(5):1483–1498, 2019. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *ICML*, 2020. Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2818–2829, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021. Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva-02: A visual representation for neon genesis. *arXiv preprint arXiv:2303.11331*, 2023a. Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pp. 19358–19369, 2023b. Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. *Minds and Machines*, 30: 681–694, 2020. Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. *arXiv preprint arXiv:2304.14108*, 2023. Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5356–5364, 2019. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In *Proceedings of the IEEE international* conference on computer vision, pp. 2961–2969, 2017. 
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In *CVPR*, 2021.

Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8340–8349, 2021a.

Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15262–15271, 2021b.

Zejiang Hou, Fei Sun, Yen-Kuang Chen, Yuan Xie, and Sun-Yuan Kung. Milan: Masked image pretraining on language assisted representation. *arXiv preprint arXiv:2208.06049*, 2022.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In *ECCV*, 2016.

Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, and Yu Qiao. Unmasked teacher: Towards training-efficient video foundation models. *arXiv preprint arXiv:2303.16058*, 2023a.

Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollár, Kaiming He, and Ross Girshick. Benchmarking detection transfer learning with vision transformers. *arXiv:2111.11429*, 2021.

Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichtenhofer, and Kaiming He. Scaling language-image pretraining via masking. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 23390–23400, 2023b.

Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In *ECCV*, 2014.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. *arXiv preprint arXiv:2304.08485*, 2023.

Jihao Liu, Xin Huang, Yu Liu, and Hongsheng Li. Mixmim: Mixed and masked image modeling for efficient visual representation learning. *arXiv preprint arXiv:2205.13137*, 2022.

Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. *arXiv preprint arXiv:1608.03983*, 2016.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *ICLR*, 2017.

OpenAI. Gpt-4 technical report. *ArXiv*, abs/2303.08774, 2023. URL https://api.semanticscholar.org/CorpusID:257532815.

Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. Beit v2: Masked image modeling with vector-quantized visual tokenizers. *arXiv preprint arXiv:2208.06366*, 2022.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 1(2):3, 2022.

Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In *International conference on machine learning*, pp. 5389–5400. PMLR, 2019.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 10684–10695, 2022.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. *Advances in Neural Information Processing Systems*, 35:25278–25294, 2022.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 2556–2565, 2018.

Noam Shazeer. Glu variants improve transformer. *arXiv preprint arXiv:2002.05202*, 2020.

Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. *arXiv preprint arXiv:2104.09864*, 2021.

Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. Eva-clip: Improved training techniques for clip at scale. *arXiv preprint arXiv:2303.15389*, 2023.

Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *NeurIPS*, 2017.

Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In *Advances in Neural Information Processing Systems*, pp. 10506–10518, 2019.

Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, et al. Foundation transformers. *arXiv preprint arXiv:2210.06423*, 2022a.

Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. *arXiv preprint arXiv:2208.10442*, 2022b.

Yixuan Wei, Han Hu, Zhenda Xie, Zheng Zhang, Yue Cao, Jianmin Bao, Dong Chen, and Baining Guo. Contrastive learning rivals masked image modeling in fine-tuning via feature distillation. *arXiv:2205.14141*, 2022.

Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In *ECCV*, 2018.

Tete Xiao, Ilija Radosavovic, Trevor Darrell, and Jitendra Malik. Masked visual pre-training for motor control. *arXiv:2203.06173*, 2022.

Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In *CVPR*, 2021.

Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14475–14485, 2023.

Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. *arXiv preprint arXiv:1904.00962*, 2019.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. *Transactions of the Association for Computational Linguistics*, 2:67–78, 2014.

Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In *CVPR*, 2017.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.
Minigpt-4: Enhancing vision-language understanding with advanced large language models. *arXiv preprint arXiv:2304.10592*, 2023.

## A Appendix

## A.1 Training Details

**Contrastive fine-tuning on DataComp-1B.** We initialize the model with the pre-trained ViT encoder and CLIP text encoder and fix the temperature value in the CLIP loss to 0.01. We use a total batch size of 49,152 for fine-tuning. Following Sun et al. (2023), we use the LAMB (You et al., 2019) optimizer with peak learning rates of 2 × 10−4 and 4 × 10−4 for the base and large models respectively. We use a layer-wise learning rate for fine-tuning and set the decay rate to 0.75 and 0.85 for the base and large models respectively. The weight decay is set to 0.05 for all models. We use a cosine learning rate schedule and decay the learning rate to 0. We use the prompts provided in CLIP (Radford et al., 2021) for zero-shot evaluation.

**Fine-tuning and evaluation with LLaVA.** Following Liu et al. (2023), we use a two-stage instruction-tuning procedure for LLaVA model training. **Stage 1: Feature alignment.** At this stage, we train the linear projection layer between the frozen vision encoder and the Large Language Model (LLM) for 1 epoch, utilizing a filtered dataset containing 585K image-text pairs from CC-3M (Sharma et al., 2018). We use the AdamW (Loshchilov & Hutter, 2017) optimizer with a learning rate of 2 × 10−3. The learning rate is linearly warmed up for the first 150 iterations and decayed to 0 with a cosine schedule. We use a batch size of 128 and apply no weight decay. **Stage 2: End-to-end fine-tuning.** We fine-tune the LLaVA model on 158K unique language-image instruction-following samples for 3 epochs while keeping the vision encoder weights frozen. We use the same optimizer and learning rate schedule as in the first stage except for changing the batch size to 32 and setting the learning rate to 2 × 10−5. We do not apply weight decay during this stage either. We evaluate the results on LLaVA-Bench (Liu et al., 2023), which measures the instruction-following capabilities of multi-modal models over conversation, detailed description, and complex reasoning dimensions. We refer the readers to the LLaVA (Liu et al., 2023) paper for a detailed description of LLaVA-Bench.

**Object detection and segmentation.** Following (Li et al., 2021), we adopt the Cascade Mask R-CNN (He et al., 2017; Cai & Vasconcelos, 2019) framework for fine-tuning on COCO (Lin et al., 2014) and LVIS (Gupta et al., 2019). We follow most of the hyper-parameter settings in EVA-02 (Fang et al., 2023a). On COCO, we use a batch size of 128 and fine-tune for 60k iterations. We use learning rates of 5 × 10−5/6 × 10−5, drop path rates (Huang et al., 2016) of 0.1/0.4, and layer-wise decay rates of 0.7/0.8 for the base/large models. On LVIS, we use a batch size of 64 and fine-tune for 100k iterations. The learning rate is set to 10−4. The drop path rate and layer-wise decay rate are the same as those used on COCO. We adopt the UperNet (Xiao et al., 2018) framework for semantic segmentation on ADE20K (Zhou et al., 2017). In particular, we use a batch size of 32 and fine-tune for 60k iterations. We use learning rates of 6 × 10−5/4 × 10−5, drop path rates of 0.15/0.2, and layer-wise decay rates of 0.85/0.9 for the base/large models.

## A.2 Analysis Of Masking Strategies

We show the masking probabilities of different locations in Fig. 3. We find that the original block-wise masking tends to mask the center locations with a much higher probability. In contrast, the reversed block-wise masking (Block-R) can guarantee spatial equalization; a minimal sketch of the scheme is given below.
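As a concrete illustration (not the exact implementation; the grid size, the block-sampling heuristic and the function names below are our own assumptions), Block-R can be obtained by drawing an ordinary block-wise mask and inverting it half of the time, which equalises how often centre and border patches are masked:

```python
import numpy as np

def block_wise_mask(grid=14, mask_ratio=0.5, rng=None):
    """Greedily place random rectangular blocks until `mask_ratio`
    of the patch grid is masked (True = masked)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((grid, grid), dtype=bool)
    target = int(mask_ratio * grid * grid)
    while mask.sum() < target:
        h, w = rng.integers(1, grid // 2 + 1, size=2)
        top = rng.integers(0, grid - h + 1)
        left = rng.integers(0, grid - w + 1)
        mask[top:top + h, left:left + w] = True
    return mask

def block_reversed_mask(grid=14, mask_ratio=0.5, rng=None):
    """Block-R sketch: with probability 0.5, draw blocks covering the
    *kept* fraction and invert, so masked regions concentrate on the
    borders while the overall mask ratio is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:
        return block_wise_mask(grid, mask_ratio, rng)
    return ~block_wise_mask(grid, 1.0 - mask_ratio, rng)
```

Averaged over many samples, the two branches together flatten the centre bias that is visible for plain block-wise masking in Fig. 3.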
![13_image_0.png](13_image_0.png)

Figure 3: Masking probabilities of different locations. The probabilities are calculated by averaging over 5000 random samples.

## A.3 Loss Curve

In Fig. 4, we show the training and validation loss curves of UTA-B during the pre-training phase. For simplicity, we apply random masking with a mask ratio of 0.5 for validation. Note that the loss is negative since we minimize the negative cosine similarities during pre-training. We observe a similar downward trend in both curves and do not observe any over-fitting phenomenon over the pre-training process.

## A.4 Comparison With The Teacher Model

To have a fairer comparison with the teacher model, we conduct additional fine-tuning with the pre-trained teacher model to align the overall computation cost. Specifically, we utilize the OpenAI CLIP-L (Radford et al., 2021) as the teacher model and pre-train a UTA-L. We also fine-tune the OpenAI CLIP-L on DataComp-1B. For OpenAI CLIP-L, we fine-tune the model on DataComp-1B using about 2B samples. For UTA-L, we pre-train the model on ImageNet-21K for 90 epochs (about 1.2B samples), and then fine-tune it on DataComp-1B using about 0.8B samples. We obtain zero-shot accuracy scores of 76.5% for OpenAI CLIP-L and 77.8% for UTA-L. Since the model sizes are comparable, we use the number of samples as an indicator to align the overall computation. Under such a setting, UTA-L is still able to outperform the teacher model. However, we note that the 77.8% score for UTA-L is lower than the result reported in Tab. 1, indicating that we can benefit more from a stronger teacher model, as also shown in our ablation study in Tab. 7.

Since our pre-training utilizes reversed block-wise masking to improve the training efficiency, we also fine-tune the OpenAI CLIP-L with such a masking strategy. In particular, we fine-tune CLIP-L using about 4B samples with a mask ratio of 0.5. We keep the batch size the same to ensure a fair comparison. With this setup, we obtain a zero-shot accuracy score of 76.8% on ImageNet-1K, which is still lower than that of our UTA-L.

![14_image_0.png](14_image_0.png)

Figure 4: Training and validation loss over the pre-training process.
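For reference, the pre-training objective whose value is plotted in Fig. 4 (the negative cosine similarity between the student's unmasked tokens and the frozen CLIP teacher's tokens at the same positions) can be sketched as follows; the function signatures, the projection layer and the tensor shapes are illustrative assumptions rather than the exact implementation:

```python
import torch
import torch.nn.functional as F

def uta_loss(student, teacher, proj, images, keep_idx):
    """Unmasked token alignment: only the kept (unmasked) tokens are
    processed by the student and aligned with the teacher's tokens.

    images:   (B, 3, H, W) batch of images.
    keep_idx: (B, N_keep) indices of the unmasked patch tokens.
    """
    with torch.no_grad():                       # teacher is frozen
        target = teacher(images)                # (B, N, D) CLIP tokens
    d = target.shape[-1]
    target = torch.gather(target, 1, keep_idx[..., None].expand(-1, -1, d))
    pred = proj(student(images, keep_idx))      # (B, N_keep, D)
    # Negative cosine similarity; its minimum is -1, hence negative losses.
    return -F.cosine_similarity(pred, target, dim=-1).mean()
```

Because the student never sees [MASK] tokens, the same forward pass is used at pre-training and fine-tuning time, which is the source of the consistency argued for in the main text.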
Review 1:

Summary: This paper introduces a distillation method for enhancing CLIP's visual encoder, named unmasked token alignment (UTA). UTA aligns the unmasked tokens with CLIP's supervisory tokens, inspired by the EVA and EVA-CLIP strategies, which utilize a pre-trained vision transformer to initialize the vision encoder alongside CLIP's text encoder. Unlike EVA, which relies on masked image modeling (MIM) and requires contrastive finetuning due to its training on ImageNet alone, UTA directly aligns with CLIP's visual encoder using only unmasked tokens. This approach differs significantly from MIM's reliance on masked tokens and avoids the need for additional contrastive finetuning while maintaining alignment with the text encoder. The authors support their methodology with experimental evidence and ablation studies to validate the effectiveness of UTA. The evaluations follow the evaluation protocol of CLIP, assessing zero-shot capability on ImageNet and ImageNet-variant datasets, multi-modal capability on LLaVA-Bench, and further dense prediction metrics on ADE20K/COCO/LVIS.

Strengths and Weaknesses:

Pros)
+ This paper is well-written and easy to follow.
+ Its core concept is both straightforward and impactful, especially the innovative approach of distilling a CLIP visual encoder by aligning partial tokens with the text encoder, eliminating the need for additional finetuning.
+ The experimental results presented are remarkable and demonstrate significant performance enhancements.

Cons)
- The paper lacks clear explanations about the effectiveness of its method:
- It does not sufficiently clarify why training on IN-21k and aligning tokens after dropping a few from the ViT backbone yields better results than the traditional approach of pre-training solely on IN-21k, followed by finetuning on IN-1k and additional contrastive finetuning with more images.
- There is no insight/intuition into why token dropping improves results compared to using all tokens for alignment.
- The experiments conducted are not comprehensive enough. For instance, only the EVA-02 architecture was tested, leaving uncertainty about whether UTA could consistently enhance training across different baseline backbones. The paper would benefit from further experimentation with more ViTs from OpenAI-CLIP, OpenCLIP, or smaller/other CLIP variants.
- The impact of the introduced block-wise reversed masking (Block-R) on ADE20K mIoU, as shown in Table 8, is significant. However, the paper does not provide insights into why this method is so effective.

Questions)
- It is understood that using an IN-21k pre-trained ViT is meant to match EVA's setup, but the necessity of IN-21k pretraining for aligning with CLIP supervision is unclear (because doing so is costly). It would be beneficial for the authors to conduct additional experiments using ViTs pre-trained on IN-1k or with random initialization, to assess the impact of IN-21k pretraining on the effectiveness of their method.
- In Table 1, EVA-01-CLIP outperforms UTA, despite UTA utilizing EVA-02, which is an improvement over EVA-01, and this is achieved without contrastive finetuning. An explanation from the authors on why this occurs, especially given the supposed advantages of UTA, would be helpful.
- The difference between 'ZS' and 'ImageNet ZS' in Table 7 needs clarification from the authors. Additionally, the authors should explain the purpose of including Table 7 in the paper.

Requested Changes:
- Reflect on the previously provided comments.
- Include training and validation loss trends for the pre-training phase using token alignment to aid in understanding the proposed method.
- Correct the typo in Table 1: change "B/14" to "B/16".

Broader Impact Concerns: The method itself may not directly raise ethical issues; however, employing datasets like DataComp could introduce concerns due to potential biases or problematic images within these datasets.

==================================================

Review 2:

Summary: This paper introduces a novel method named Unmasked Token Alignment (UTA), aimed at enhancing the performance of vision foundation models through masked modeling-based distillation. The key contributions and findings of this paper can be summarized as: 1) proposing UTA in contrast to MIM, 2) extensive experiments and evaluation, and 3) superior performance over existing methods with and without finetuning.

Strengths and Weaknesses:

[strengths]
- Interesting results: The paper demonstrates high performance for different tasks without any finetuning with image and text pairs.
- Well-Written: The paper is exceptionally crafted, presenting a clear and detailed overview of the method in Section 2, enriched with insights from related works.
- Innovative Approach: Introduces a simple yet effective method to enhance vision-language representations, building on pretrained CLIP models, using unmasked tokens, enhanced with a token alignment loss.
- Addressing Existing Limitations: Skillfully highlights the limitations of current approaches that use masked tokens in the vision encoder.
- Effective Experiments: Conducts well-organized and thorough experiments that convincingly demonstrate the superiority of the proposed method over existing ones.

[weaknesses]
- Some detailed analysis of the results is missing, e.g. page 5, last sentences: why does finetuning UTA decrease the performance for image retrieval tasks?
- It is not clear to the reader whether the improvements over related works come from factors such as changes to the ViT architecture or the dataset distribution.
- What would you respond if I hypothesize that the large gain after finetuning on DataComp comes from the better data distribution compared to CLIP models?

Requested Changes:
- Can you provide a fair comparison between CLIP models (OpenCLIP or EVA-02-CLIP) and UTA where both use DataComp image and text pairs, to see the effect of data distribution?

Broader Impact Concerns: While not the main focus of this paper, there might be biases in the datasets used at large scale, i.e. DataComp.

==================================================

Review 3:

Summary:
- This manuscript proposes Unmasked Token Alignment, UTA, a masked modeling based approach to distilling vision foundation models.
- It distills a large CLIP vision encoder and demonstrates zero-shot improvements when combined with the CLIP text encoder.
- A wide range of experiments evaluate the model against baselines, including zero-shot ImageNet classification, ImageNet robustness variants, zero-shot retrieval, segmentation on ADE20K, object detection on COCO and LVIS, etc.

Strengths and Weaknesses:

## Strengths
- UTA does not use `[mask]` tokens in the encoder and unlike MAE also does not have `[mask]` tokens in the decoder. In fact, there is no need for a decoder as CLIP features are being distilled. Thus UTA avoids the train-test gap otherwise caused by `[mask]` tokens during pre-training.
- Reversed block-wise masking (same as block-wise masking, but randomly inverting the mask to avoid center bias) is a simple and efficient masking method proposed in this manuscript. Ablation experiments in Table 8 (right) evaluate it on classification, detection and segmentation.

## Weaknesses

The manuscript restricts its scope to distilling CLIP, but does not ablate this position. The following experiments are needed:
(a) If the FLOPs used to train UTA are instead used to fine-tune EVA-02-CLIP itself on the DataComp-1B dataset.
(b) Same as (a), that is fine-tuning EVA-02-CLIP on DataComp-1B, but now also omitting vision tokens as UTA does.
I think Table 1 or Table 6 might be a good place for these comparisons.

## Clarification questions

- On page 4, line 1, "ViT-L model can surpass the giant-sized CLIP model after contrastive fine-tuning". Which table rows should I compare for this claim?
- On page 7, "Compared with EVA-02, ..., we can largely improve zero-shot accuracy". But in Table 4, EVA-02 does not have any zero-shot numbers to compare against.
- On page 8, "UTA for pre-training the text encoder" is unclear. Is the CLIP text encoder being distilled into a text model? Is Block-R masking still being used?
- The delta on ADE20K between Random/Block and Block-R in Table 8 (right) is massive. More analysis on what aspect of the model's output has improved would be very helpful. Are the boundaries better? Is the classification better? etc.

Requested Changes: The baselines suggested in the weaknesses section and the clarifications requested below it would be great. In Table 1, I-T pairs are written as 0B for UTA because in the distillation process no image-text (I-T) pairs are needed. But the original EVA-02-CLIP model being distilled used many image-text pairs. I believe that one should at least write 0B + 8B to clarify this (and similarly in other rows).

Broader Impact Concerns:
Pros: The manuscript shows that CLIP can be further improved without access to billions of image-text pairs and massive amounts of compute. With access to some image-text pairs, further improvement is possible. This slightly furthers the democratization of these foundation models.
Cons: Detection and segmentation are sensitive research topics as there is potential for misuse. Although these problems are not the main focus of this manuscript, a broader impact statement in the appendix would be nice.

==================================================

Review 4:

Summary: The paper introduces a new visual pretraining approach dubbed UTA, which effectively extends the previous MIM + CLIP distillation methods to distilling only the unmasked tokens instead. Comprehensive evaluations including zero-shot image classification, multi-modal tasks (LLaVA-Bench) and other core vision tasks (detection and segmentation) are conducted. Also, several ablation studies shed some light on how this approach works.

Strengths and Weaknesses:

Strengths
+ The paper is clear and well written; the technical part is very easy to follow.
+ The presented method is simple yet effective. Overall I think the method is neat and sound.
+ The empirical evaluation seems to be quite thorough. Although some results over the previous method are marginal, some gains are pretty clear.

Weaknesses
- The motivation of this line of study is questionable. I understand the approach can attain similar performance to the teacher model with only distillation and much less I-T finetuning.
However, the resulting model seems to have the same size as the original teacher model and the architecture is the same as well, with similar or slightly better results. If this is indeed the case, I don't quite understand the incentive of doing this -- why not just use the original teacher model? In light of this, I am curious about the potential of this method for distilling into smaller models. Some results on this would be fantastic.

Requested Changes: Please add some results of applying your approach to smaller models (to distill from larger models).

Broader Impact Concerns: N/A

==================================================

Review 5:

Summary: The paper introduces a novel method called unmasked token alignment (UTA) which enhances the representations of multi-modal vision-language models. During the training phase, UTA only inputs unmasked tokens to the vision encoder and trains these token representations to be aligned with the token representations extracted from the pretrained CLIP vision encoder. A key advantage of UTA is its applicability to zero-shot tasks, eliminating the need for additional fine-tuning with masked tokens. This feature ensures more consistency between the training and finetuning phases. The experimental results demonstrate the superiority of UTA over existing methodologies in the field.

Strengths and Weaknesses:

[Strengths]
- The paper proposes a simple and effective approach to learn enhanced vision-language representations based on pretrained CLIP models.
- The paper clearly points out the limitations of existing approaches which input masked tokens to the vision encoder and proposes a novel method which addresses such limitations by using the unmasked tokens as the only input while utilizing a token alignment loss.
- Overall, the well-organized experiments conducted in this study effectively demonstrate the effectiveness of the proposed method and its superiority over previous methods.

[Weaknesses]
- The paper lacks a detailed analysis of the experiment results. For example, in Table 2, further fine-tuning on the DataComp-1B dataset degrades the image retrieval performance, as mentioned at the end of page 5. This phenomenon is observed in Tables 1 and 5 as well. Why would additional fine-tuning deteriorate performance?
- Also, on page 7, the last sentence of the first paragraph says "We can largely improve the zero-shot accuracy." However, in Table 4, the zero-shot accuracy of EVA-02 is not reported.
- While the paper slightly modifies (or improves) the backbone architecture as mentioned in Section 2.3, it does not investigate the impact of this modification on performance improvement.
- As a minor weakness, there is a lack of explanation about the conducted experiments. The paper does not provide a detailed explanation of the criteria reported in Table 3 (e.g., Conversation, Detail, Reasoning, and Overall).

Requested Changes:
- A detailed explanation of the experiment settings and a deeper analysis of the experiment results are required.
- In addition, an analysis of the impact of the architectural modification on the performance improvement should also be included in the manuscript.

Broader Impact Concerns: Although the model is trained with a large-scale dataset, it may have unintentionally learned spurious correlations between representations due to the inherent dataset bias within the training set. This may cause unwanted impacts on certain minority groups when the model serves as a foundation model for various downstream tasks.
This issue can be alleviated by encouraging the model to learn debiased features during training or by utilizing post-processing strategies for debiasing. It is important to be aware of the potential issues regarding fairness.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The paper aims to enhance the representation power of vision-language models, and proposes a new method called unmasked token alignment (UTA), which, during the training phase, inputs only the unmasked tokens to the vision encoder and trains them to be aligned with the token representations extracted from the pretrained CLIP vision encoder. Thorough experiments demonstrate the superiority of UTA over existing methods. All reviewers are satisfied with the novelty and contributions made in the paper. They initially made a few requests for revisions, including providing additional explanations and insights on the effectiveness of the proposed method, additional experiments, and additional ablation studies. The authors made the corresponding revisions, with which the reviewers have been satisfied. The authors should make sure all these revisions are included in the final version.

==================================================
# Meta-Learning Sparse Compression Networks

Jonathan Richard Schwarz schwarzjn@google.com DeepMind University College London

Yee Whye Teh ywteh@google.com DeepMind

Reviewed on OpenReview: *https://openreview.net/forum?id=Cct7kqbHK6*

## Abstract

Recent work in Deep Learning has re-imagined the representation of data as functions mapping from a coordinate space to an underlying continuous signal. When such functions are approximated by neural networks this introduces a compelling alternative to the more common multi-dimensional array representation. Recent work on such Implicit Neural Representations (INRs) has shown that - following careful architecture search - INRs can outperform established compression methods such as JPEG (e.g. Dupont et al., 2021). In this paper, we propose crucial steps towards making such ideas scalable: Firstly, we employ state-of-the-art network sparsification techniques to drastically improve compression. Secondly, we introduce the first method allowing for sparsification to be employed in the inner loop of commonly used Meta-Learning algorithms, drastically improving compression and reducing the computational cost of learning INRs. The generality of this formalism allows us to present results on diverse data modalities such as images, manifolds, signed distance functions, 3D shapes and scenes, several of which establish new state-of-the-art results.

## 1 Introduction

An emerging sub-field of Deep Learning has started to re-imagine the representation of data items: While traditionally, we might represent an image or 3D shape as a multi-dimensional array, continuous representations of such data appear to be a more natural choice for the underlying signal. This can be achieved by defining a functional representation: mapping from spatial coordinates (x, y) to (r, g, b) values in the case of an image. The problem of learning such a function is then simply a supervised learning task, for which we may employ a neural network - an idea referred to as Implicit Neural Representations (INRs). An advantage of this strategy is that the algorithms for INRs are data agnostic - we may simply re-define the coordinate system and target signal values for other modalities and readily apply the same procedure. Moreover, the learned function can be queried at any point, allowing for the signal to be represented at higher resolutions once trained. Finally, the size of the network representation can be chosen by an expert or, as we propose in this work, by an algorithmic method, to be lower than the native dimensionality of the array representation. Thus, this perspective provides a compelling new avenue into the fundamental problem of data compression.

INRs are particularly attractive in cases where array representations scale poorly with the discretisation level (e.g. 3D shapes), where the underlying signal is inherently continuous such as in neural radiance fields (NeRF) (Mildenhall et al., 2020), or where discretisation is non-trivial, for example when data lies on a manifold. So far, the difficulty of adopting INRs as a compression strategy has been a trade-off between network size and approximation quality, requiring architecture search (e.g. Dupont et al., 2021) or strong inductive biases (e.g. Chan et al., 2021; Mehta et al., 2021). Furthermore, the cost of fitting a network to a single data point vastly exceeds the computational cost of standard compression methods such as JPEG (Wallace, 1992), an issue that is compounded when additional architecture search is required.
![1_image_0.png](1_image_0.png)

Figure 1: Overview of MSCN as a compression method. In order to compress a data item, we perform sparse adaptation of a meta-learned initialisation, leading to a small subset of changes δθ encoding the item. Sparsity reduces compression cost and avoids costly architecture search. Meta-Learning drastically cuts compression time. δθ can be subsequently compressed and encoded using standard techniques.

In this work, we specifically focus on improving the suitability of INRs as a compression method by tackling the aforementioned problems. First, we recognise the insight of recent deep learning studies (e.g. Frankle & Carbin, 2018) which show that only a small subset of parameters encodes the predictive function. We thus employ recent state-of-the-art sparsity techniques (Louizos et al., 2017) to explicitly optimise INRs using as few parameters as possible, drastically improving their compression cost. Secondly, in order to tackle the computational cost of learning INRs, we follow recent work (e.g. Lee et al., 2021; Tancik et al., 2021; Dupont et al., 2022a) by adopting Meta-Learning techniques such as MAML (Finn et al., 2017) which allow learning an INR representing a single signal by fine-tuning from a learned initialisation using only a handful of optimisation steps. Crucially, our sparsity procedure allows efficient backpropagation through this learning procedure, resulting in an initialisation that is specifically optimised for sparse signals. This allows us to re-imagine the sparsity procedure as uncovering a network structure most suitable for the task at hand. This is noticeably different from related work which decouples Meta-Learning from the sparsity procedure (Tian et al., 2020; Lee et al., 2021). Figure 1 shows an overview of our technique for compression. The key novelty of our work lies in the sparse adaptation phase, significantly reducing compression cost. Finally, our framework is flexible and allows for Weight-, Representation-, Group- and Gradient-Sparsity with minimal changes and is thus suitable for many applications outside of the core focus of this work.

## 2 Background

We start by reviewing relevant aspects of the Meta-Learning and INR literature that will constitute the fundamental building blocks of our method.

## 2.1 Implicit Neural Representations

Throughout the manuscript, we represent INRs as functions fθ : X → Y parameterised by a neural network with parameters θ mapping from a coordinate space X to the output space Y. INRs are instance-specific, i.e. they are unique networks representing single data items. An INR's approximation quality is thus measured across all coordinates making up the data item, represented by the discrete index set I. As an example, for a 3D shape the coordinate space is X = R³, with coordinates (x, y, z), and Y = [0, 1] are Voxel occupancies. I is the 3D grid {0, . . . , D}³, where D is the discretisation level. We can thus formulate the learning of an INR as a minimisation problem of the squared error between the INR's prediction and the underlying signal:

$$\min_{\boldsymbol{\theta}}\mathcal{L}(f_{\boldsymbol{\theta}},\mathbf{x}\in\mathcal{X},\mathbf{y}\in\mathcal{Y})=\min_{\boldsymbol{\theta}}\sum_{i\in\mathcal{I}}||f_{\boldsymbol{\theta}}(\mathbf{x}_{i})-\mathbf{y}_{i}||_{2}^{2}\tag{1}$$

which is typically minimised via Gradient Descent.
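As a minimal illustration of this procedure (the width, depth, step count and learning rate below are illustrative choices, not a prescription, and the sinusoidal activation is discussed next), the following sketch fits an INR to a single image by minimising Equation (1) with gradient descent:

```python
import torch

def fit_inr(image, steps=2000, lr=1e-4, width=256):
    """Fit f_theta: (x, y) -> (r, g, b) to one image via Eq. (1).

    image: (H, W, 3) tensor with values in [0, 1].
    """
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # index set I
    targets = image.reshape(-1, 3)

    layers = torch.nn.ModuleList(
        [torch.nn.Linear(2, width), torch.nn.Linear(width, width),
         torch.nn.Linear(width, width), torch.nn.Linear(width, 3)])

    def f(x):  # small MLP with SIREN-style sinusoidal activations
        for layer in layers[:-1]:
            x = torch.sin(30.0 * layer(x))
        return layers[-1](x)

    opt = torch.optim.Adam(layers.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((f(coords) - targets) ** 2).sum()  # squared error over I
        loss.backward()
        opt.step()
    return f
```

Even for a small image this takes thousands of steps, which is precisely the cost that the meta-learning machinery discussed below amortises.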
As a concrete choice for fθ, recent INR breakthroughs propose the combination of Multi-Layer Perceptrons (MLPs) with either positional encodings (Mildenhall et al., 2020; Tancik et al., 2020) or sinusoidal activation functions (Sitzmann et al., 2020b). Both methods significantly improve on the reconstruction error of standard ReLU networks (Nair & Hinton, 2010).

Of particular importance to the remainder of our discussion around compression is the recent observation that signals can be accurately learned using merely data-item specific modulations to a shared base network (Perez et al., 2018; Mehta et al., 2021; Dupont et al., 2022a). Specifically, in the forward pass of a network, each layer l represents the transformation x ↦ f(W(l)x + b(l) + m(l)), where {W(l), b(l)} are weights and biases shared between signals, with only the modulations m(l) being specific to each signal. This has the advantage of drastically reducing compression costs and is thus of particular interest to the problem considered by us.

## 2.2 Meta-Learning

It is worth pointing out that the minimisation of Equation (1) is extraordinarily expensive: Learning a single NeRF (Mildenhall et al., 2020) scene can take up to an entire day on a single GPU (Dupont et al., 2022a); even the compression of a single low-dimensional image requires thousands of iterative optimisation steps. Fortunately, we need not resort to Tabula rasa optimisation for each data item in turn: In recent years, developments in Meta-Learning (e.g. Thrun & Pratt, 2012; Andrychowicz et al., 2016) have provided a formalism that allows a great deal of learning to be shared among related tasks, or in our case signals in a dataset.

Model-agnostic Meta-Learning (MAML) (Finn et al., 2017) provides an optimisation-based approach to finding an initialisation from which we can specialise the network to an INR representing a signal in merely a handful of optimisation steps. Following custom notation, we will consider a set of tasks or signals {T1, . . . , Tn}. Finding a minimum of the loss on each signal is achieved by gradient-based learning on the task-specific data (xTi, yTi). Writing LTi(fθ) as a shorthand for L(fθ, xTi, yTi), a single gradient step from a shared initialisation θ0 takes the form:

$$\theta_{i}^{\prime}=\theta_{0}-\beta\nabla_{\theta}\mathcal{L}_{T_{i}}(f_{\theta})\tag{2}$$

which we can trivially iterate for multiple steps. The key insight in MAML is to define a Meta-objective for learning the initialisation θ0 as the minimisation of an expectation of the task-specific loss after its update:

$$\theta_{0}=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{T_{i}\sim p(\mathcal{T})}\mathcal{L}_{T_{i}}(f_{\theta_{i}^{\prime}})=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{T_{i}\sim p(\mathcal{T})}\mathcal{L}_{T_{i}}\big(f_{\theta-\beta\nabla_{\theta}\mathcal{L}_{T_{i}}(f_{\theta})}\big)\tag{3}$$

where p(T) is a distribution over signals in a dataset. The iterative optimisation of (3) is often referred to as the MAML "outer loop" and (2) as the "inner loop" respectively. Note that this is a second-order optimisation objective requiring differentiation through a learning process, although first-order approximations exist (Nichol & Schulman, 2018). Indeed, this procedure has been widely popular in the work on INRs, e.g. being successfully used for NeRF scenes in Tancik et al. (2021) or signed distance functions (Sitzmann et al., 2020a).
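A schematic of Equations (2) and (3) in code may help make the two loops concrete; this is a sketch under our own simplifications (a single inner step and an assumed `inr_loss` helper taking an explicit parameter list), not the full training setup:

```python
import torch

def maml_outer_step(theta0, tasks, inner_lr, outer_opt, inr_loss):
    """One outer update of Eq. (3) over a batch of signals.

    theta0: list of tensors (requires_grad=True), the shared init.
    tasks: list of (coords, targets) pairs, one per signal T_i.
    inr_loss: (params, coords, targets) -> scalar loss, as in Eq. (1).
    """
    outer_opt.zero_grad()
    meta_loss = 0.0
    for coords, targets in tasks:
        # Inner loop, Eq. (2): one gradient step from theta0.
        loss = inr_loss(theta0, coords, targets)
        grads = torch.autograd.grad(loss, theta0, create_graph=True)
        theta_i = [p - inner_lr * g for p, g in zip(theta0, grads)]
        # Outer objective, Eq. (3): loss *after* the inner update.
        meta_loss = meta_loss + inr_loss(theta_i, coords, targets)
    (meta_loss / len(tasks)).backward()  # second-order by default
    outer_opt.step()
```

Setting `create_graph=True` is what makes the objective second-order: gradients of the outer loss flow through the inner update, at the cost of extra memory per inner step.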
Finally, it should be noted that the idea of learning modulations explored in the previous section relies on the MAML process. Thus, the learning of weights and biases is achieved using (3), although only modulations are adapted in the inner loop (2). Meta-learning a subset of parameters in the inner loop corresponds to the MAML derivative CAVIA (Zintgraf et al., 2019).

## 3 Meta-Learning Sparse Compression Networks

This section introduces the key contributions of this work, providing a framework for sparse Meta-Learning which we instantiate in two concrete algorithms for learning INRs.

## 3.1 L0 Regularisation

While the introduction of sparsity in the INR setting is highly attractive from a compression perspective, our primary difficulty in doing so is finding a procedure compatible with the MAML process described in the previous section. This requires (i) differentiability and (ii) fast learning of both parameters and the subnetwork structure. We can tackle (i) by introducing L0 Regularisation (Louizos et al., 2017), a re-parameterised L0 objective using stochastic gates on parameters. Consider again the INR objective (1) with a sparse reparameterisation θ̃ of a dense set of parameters θ: θ̃ = θ ⊙ z; zj ∈ {0, 1}, and an L0 regularisation term on the gates with regularisation coefficient λ. We can learn a subnetwork structure by optimising distributional parameters π of a distribution q(z|π) on the gates, leading to the regularised objective:

$$\min_{\theta,\pi}\mathcal{L}^{R}(f_{\tilde{\theta}};\mathbf{x},\mathbf{y},\pi)=\mathbb{E}_{q(\mathbf{z}|\pi)}\Big[\sum_{i\in\mathcal{I}}||f_{\theta\odot\mathbf{z}}(\mathbf{x}_{i})-\mathbf{y}_{i}||_{2}^{2}\Big]+\lambda\sum_{j=1}^{\dim(\pi)}\pi_{j}\tag{4}$$

where the term λ Σj πj penalises the probability of gates being non-zero. The key to overcoming non-differentiability due to the discrete nature of z is smoothing the objective: This is achieved by choosing an underlying continuous distribution q(s|ϕ) and transforming its random variables by a hard rectification: z = min(1, max(0, s)) = g(s). In addition, we note that the L0 penalty can be naturally expressed by using the CDF of q to penalise the probability of a gate being non-zero (1 − Q(s ≤ 0|ϕj)):

$$\min_{\theta,\phi}\mathcal{L}^{R}(f_{\tilde{\theta}},\mathbf{x},\mathbf{y},\phi)=\mathbb{E}_{q(\mathbf{s}|\phi)}\Big[\sum_{i\in\mathcal{I}}||f_{\theta\odot g(\mathbf{s})}(\mathbf{x}_{i})-\mathbf{y}_{i}||_{2}^{2}\Big]+\lambda\sum_{j=1}^{\dim(\phi)}1-Q(s_{j}\leq0|\phi_{j})\tag{5}$$

A suitable choice for q(s|ϕ) is a distribution allowing for the reparameterisation trick (Kingma & Welling, 2013), i.e. the expression of the expectation in (5) as an expectation of a parameter-free noise distribution p(ϵ) from which we obtain samples of s through a transformation f(·; ϕ).
This allows a simple Monte-Carlo estimation of (5):

$$\min_{\theta,\phi}\mathcal{L}^{R}(f_{\tilde{\theta}},\mathbf{x},\mathbf{y},\phi)=\frac{1}{S}\sum_{s=1}^{S}\Big[\sum_{i\in\mathcal{I}}||f_{\theta\odot g(f(\epsilon_{s},\phi))}(\mathbf{x}_{i})-\mathbf{y}_{i}||_{2}^{2}\Big]+\lambda\sum_{j=1}^{\dim(\phi)}1-Q(s_{j}\leq0|\phi_{j});\quad\epsilon_{s}\sim p(\epsilon)\tag{6}$$

The choice for q(s|ϕ) in Louizos et al. (2017) is the hard concrete distribution, obtained by stretching the concrete distribution (Maddison et al., 2016), allowing for reparameterisation, evaluation of the CDF and exact zeros in the gates/masks z. A suitable estimator is chosen at test time.

## 3.2 Sparsity In The Inner Loop

We are now in place to build our method from the aforementioned building blocks. Concretely, consider the application of L0 Regularisation in the MAML inner-loop objective. Using a single Monte-Carlo sample for simplicity, we can re-write the MAML meta-objective as:

$$\theta_{0}=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{T_{i}\sim p(\mathcal{T})}\Big[\mathcal{L}_{T_{i}}^{R}(f_{\theta^{\prime},\phi^{\prime}})\Big]=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{T_{i}\sim p(\mathcal{T})}\Big[\mathcal{L}_{T_{i}}(f_{(\theta+\delta\theta)\odot\mathbf{z}^{\prime}})+\lambda\sum_{j=1}^{\dim(\phi)}1-Q(s_{j}\leq0|\phi_{j}^{\prime})\Big]\tag{7}$$

$$\mathbf{z}^{\prime}=g(f(\epsilon,\phi^{\prime}));\quad\phi^{\prime}=\phi_{0}+\delta\phi\quad\text{and}\quad\epsilon\sim p(\epsilon)$$

where δθ and δϕ are updates to both parameters and gate distributions computed in the inner loop (2). Note that this constitutes a fully differentiable sparse Meta-Learning algorithm. However, a moment of reflection reveals concerns with (7): (i) The joint learning of both model and mask parameters in the inner loop is expected to pose a more difficult optimisation problem due to the trade-off between regularisation cost and task performance. This is particularly troublesome as the extension of MAML to long inner loops is computationally very expensive and hence still an active area of research (e.g. Flennerhag et al., 2019; 2021). (ii) The sparsification (θ + δθ) ⊙ z′ is sub-optimal from a compression standpoint: As the signal-specific compression cost is δθ, there is no need to compute the INR using a sparse network, provided δθ is sparse. θ0 is a signal-independent set of parameters and its compression cost is thus amortised.

Our remedy to (i) is to take inspiration from improvements on MAML that propose learning further parameters of the inner optimisation process (Li et al., 2017), such as the step size (known as MetaSGD). In particular, we learn both an initial set of parameters θ0 and gates ϕ0 through the outer loop, i.e.
$$\theta_{0},\phi_{0}=\operatorname*{arg\,min}_{\theta,\phi}\mathbb{E}_{T_{i}\sim p(\mathcal{T})}\Big[\mathcal{L}_{T_{i}}\big(f_{(\theta+\delta\theta)\odot g(f(\epsilon,\phi+\delta\phi))}\big)+\lambda\sum_{j=1}^{\dim(\phi)}1-Q(s_{j}\leq0|(\phi+\delta\phi)_{j})\Big]\tag{8}$$

This has interesting consequences: While we previously relied solely on the inner-loop adaptation procedure to "pick out" an appropriate network for each signal, we have now in effect reserved a sub-network that provides a particularly suitable initialisation for the set of signals at hand. This sub-network can either be taken to be fixed (such as when the adapted networks are used to learn a prior or generative model (Dupont et al., 2022a)) or adapted within an acceptable budget of inner steps, providing a mostly overlapping but yet signal-specific set of gates.

With regards to concern (ii), it is worth noting that gates z may be applied *at any point* in the network. This is attractive as it provides us with a simple means to implement various forms of commonly encountered sparsity:

1. Unstructured sparsity by a direct application to all parameters
2. Structured sparsity by restricting the sparsity pattern
3. Group sparsity by sharing a single gate among sets of parameters
4. Representational sparsity by gating activations or
5. Gradient sparsity by masking updates in the inner loop.

This highlights a strength of our method: the principles discussed so far allow for a framework in which sparsity can be employed in a Meta-Learning process in a variety of different ways depending on the requirements of the problem at hand. We refer to this framework with a shorthand of this manuscript's title: MSCN.

With regards to compression considerations, a more natural objective would thus involve a term (θ0 + δθ ⊙ z) in the inner loop, which ensures that we directly optimise for per-signal performance using an update to as few parameters as possible - the real per-signal compression cost. Note that as θ0 is dense, the resulting θ0 + δθ (with δθ sparse) is still dense, thus allowing the use of more capacity in comparison to a fully sparse network. We suggest two concrete forms of δθ in the next section.

## 3.3 Forms Of Δθ

## 3.3.1 Unstructured Sparse Gradients

Perhaps the most natural form of implementing a direct sparsification of δθ is through gating of the gradients. Assuming a budget of inner-loop steps T and writing L^R for the regularised L0 objective and θt for the state of the parameters after t − 1 updates, this takes the form of unstructured sparse gradients:

![5_image_0.png](5_image_0.png)

![5_image_1.png](5_image_1.png)

![5_image_2.png](5_image_2.png)

Figure 2: Instantiations of the MSCN framework. (a) Unstructured Sparse Gradients.

$$\delta\theta=-\beta\sum_{t=1}^{T}\mathbf{z}\odot\nabla_{\theta_{t}}\mathcal{L}^{R}(f_{\theta_{t}},\cdot)\tag{9}$$

and thus forces updates to concentrate on parameters with the highest plasticity in a few-shot learning setting. We impose no restrictions on the gates, applying them to both weight and bias gradients, and thus allow for complex gradient sparsity patterns to emerge. In this setting, we found the use of the aforementioned MetaSGD (Li et al., 2017) to be particularly effective.
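To make the gating machinery concrete, a minimal sketch of the hard concrete gates and the gated update of Equation (9) is given below; it follows Louizos et al. (2017) with the usual stretch limits γ = −0.1, ζ = 1.1, while the temperature and the shape of `log_alpha` are illustrative assumptions:

```python
import math
import torch

GAMMA, ZETA = -0.1, 1.1  # stretch interval of the hard concrete

def sample_gates(log_alpha, beta=2.0 / 3.0):
    """Reparameterised sample z = g(f(eps, phi)), cf. Eq. (6)."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s = s * (ZETA - GAMMA) + GAMMA       # stretch beyond [0, 1]
    return torch.clamp(s, 0.0, 1.0)      # hard rectification g(.)

def l0_penalty(log_alpha, beta=2.0 / 3.0):
    """1 - Q(s <= 0 | phi): probability of a gate being non-zero."""
    return torch.sigmoid(log_alpha - beta * math.log(-GAMMA / ZETA)).sum()

def sparse_inner_step(params, log_alphas, loss, lr):
    """One gated inner-loop step, cf. Eq. (9): z masks the gradients,
    so the per-signal update delta-theta touches few parameters."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - lr * sample_gates(a) * g
            for p, a, g in zip(params, log_alphas, grads)]
```

Because sampling and rectification are differentiable almost everywhere, gradients flow to `log_alpha` through the MAML outer loop, which is what allows the sparsity pattern itself to be meta-learned.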
## 3.3.2 Structured Sparse Modulations

An alternative form allows known inductive biases to inform our application of sparsity: In the second proposed instantiation of the MSCN framework, we consider the case where only modulations {m(l)} for l = 1, . . . , L are allowed to adapt to each task-specific instance (see Section 2.1). The sparsification of those modulations is straightforward and is achieved by introducing a layer-specific gate:

$$\mathbf{x}\mapsto f(\mathbf{W}^{(l)}\mathbf{x}+\mathbf{b}^{(l)}+\mathbf{z}^{(l)}\odot\mathbf{m}^{(l)})\tag{10}$$

such that the only non-zero entries in δθ are for sparse modulations. For ease of notation we omit zero entries and simply write δθ = {z(1) ⊙ m(1), . . . , z(L) ⊙ m(L)}. This provides a particularly attractive form for compression due to the comparably low dimensionality of m(l). Further sparsifying the modulations has the advantage of allowing the use of very deep or wide base networks θ0, ensuring that common structure is modelled as accurately as possible while a small communication cost continues to be paid for δθ. Note also that an intuitive argument for deep networks is that modulations in early layers have a large impact on the rest of the network. Both forms are shown in Figure 2.

## 4 Related Work

## 4.1 Neural Network Sparsity

While dating back at least to the early 1990s, recent interest in sparsifying Deep Neural Networks has come both from experimental observations such as the lottery ticket hypothesis (Frankle & Carbin, 2018) and the unparalleled growth of models (e.g. Brown et al., 2020; Rae et al., 2021), often making the cost of training and inference prohibitive for all but the largest institutions in machine learning. While more advanced methods have been studied (e.g. LeCun et al., 1989; Thimm & Fiesler, 1995), most contemporary techniques rely on the simple yet powerful approach of magnitude-based pruning - removing a pre-determined fraction of parameters by absolute value. Current techniques can be mainly categorised as the iterative sparsification of a densely initialised network (Gale et al., 2019) or techniques that maintain constant sparsity throughout learning (e.g. Evci et al., 2020; Jayakumar et al., 2020). Critically, many recent techniques would not be suitable for learning in the inner loop of a meta-learning algorithm due to non-differentiability, motivating our choice of a relaxed L0 regularisation objective.

## 4.2 Sparse Meta-Learning

Despite Meta-Learning and Sparsity being well established research areas, the intersection of both topics has only recently started to attract increased attention. Noteworthy early works are Liu et al. (2019), who design a network capable of producing parameters for any sparse network structure, although an additional evolutionary procedure is required to determine well-performing networks. Also using MAML as a Meta-learner, Tian et al. (2020) employ weight sparsity as a means to improve generalisation.

Of particular relevance to this work is MetaSparseINR (Lee et al., 2021), who provide the first sparse Meta-Learning approach specifically designed for INRs. We can think of their procedure as the aforementioned iterative magnitude-based pruning technique (Ström, 1997; Gale et al., 2019) applied in the outer loop of MAML training. While this has the advantage of avoiding the difficulty of computing gradients through a pruning operation, iterative pruning is known to produce inferior results (Schwarz et al., 2021) and limits the application to an identical sparsity pattern for each signal.
## 4 Related Work

## 4.1 Neural Network Sparsity

While dating back at least to the early 1990s, recent interest in sparsifying Deep Neural Networks has come both from experimental observations such as the lottery ticket hypothesis (Frankle & Carbin, 2018) and the unparalleled growth of models (e.g. Brown et al., 2020; Rae et al., 2021), often making the cost of training and inference prohibitive for all but the largest institutions in machine learning. While more advanced methods have been studied (e.g. LeCun et al., 1989; Thimm & Fiesler, 1995), most contemporary techniques rely on the simple yet powerful approach of magnitude-based pruning, i.e., removing a pre-determined fraction of parameters by absolute value. Current techniques can be mainly categorised as the iterative sparsification of a densely initialised network (Gale et al., 2019) or techniques that maintain constant sparsity throughout learning (e.g. Evci et al., 2020; Jayakumar et al., 2020). Critically, many recent techniques would not be suitable for learning in the inner loop of a meta-learning algorithm due to non-differentiability, motivating our choice of a relaxed L0 regularisation objective.

## 4.2 Sparse Meta-Learning

Despite Meta-Learning and Sparsity being well-established research areas, the intersection of both topics has only recently started to attract increased attention. Noteworthy early works are Liu et al. (2019), who design a network capable of producing parameters for any sparse network structure, although an additional evolutionary procedure is required to determine well-performing networks. Also using MAML as a meta-learner, Tian et al. (2020) employ weight sparsity as a means to improve generalisation. Of particular relevance to this work is MetaSparseINR (Lee et al., 2021), who provide the first sparse Meta-Learning approach specifically designed for INRs. We can think of their procedure as the aforementioned iterative magnitude-based pruning technique (Ström, 1997; Gale et al., 2019) applied in the outer loop of MAML training. While this has the advantage of avoiding the difficulty of computing gradients through a pruning operation, iterative pruning is known to produce inferior results (Schwarz et al., 2021) and limits the application to an identical sparsity pattern for each signal. Due to the direct relevance to our work, we will re-visit MetaSparseINR as a key baseline in the experimental section.

## 4.3 Implicit Neural Representations

The compression perspective of INRs has recently been explored in the aforementioned COIN (Dupont et al., 2021) and in Davies et al. (2020), who focus on 3D shapes. Work on videos has naturally received increased attention, with Chen et al. (2021) focusing on convolutional networks for video prediction (where the time stamp is provided as additional input) and Zhang et al. (2021) proposing to learn differences between frames via flow warping. While still lagging behind standard video codecs, fast progress is being made and early results are encouraging. An advantage of our contribution is that the suggested procedures can be readily applied to almost all deep-learning-based architectures.

The recently proposed Functa (Dupont et al., 2022a) is a key baseline in our work as it also builds on the insights of the modulation-based approach. Rather than using sparsity, the authors introduce a second network which maps a low-dimensional latent vector to the full set of modulations. The instance-specific communication cost is thus the dimensionality of that latent vector. A minor disadvantage is the additional processing cost of running the latent-vector-to-modulations network. A particularly interesting observation is that quantisation of such modulations is highly effective (Strümpler et al., 2021; Dupont et al., 2022b), showing that simple uniform quantisation and arithmetic coding can significantly improve compression results. Finally, it is also worth noting that work on INRs is related to the literature on multimodal architectures, which have so far been mostly implemented through modality-specific feature extractors (e.g. Kaiser et al., 2017), although recent work has used a single shared architecture (Jaegle et al., 2021).

## 5 Experiments

We now provide an extensive empirical analysis of the MSCN framework in the two aforementioned instantiations. Our analysis covers a wide range of datasets and modalities, ranging from images to manifolds, voxels, signed distance functions and scenes. While network depth and width vary based on established best practices, in all cases we use SIREN-style (Sitzmann et al., 2020b) sinusoidal activations. Throughout the section, we make frequent use of the peak signal-to-noise ratio (PSNR), a metric commonly used to quantify reconstruction quality. For data standardised to the [0, 1] range this is defined as PSNR = −10 log10(MSE), where MSE is the mean-squared error (a minimal sketch is given after Figure 3). Further experimental details such as data processing steps and hyperparameter configurations are provided in the Appendix.

## 5.1 Unstructured Sparse Gradients

In the unstructured sparsity case (Section 3.3.1), the closest related work is MetaSparseINR (Lee et al., 2021), which will be the basis for our evaluations. Experiments in this section focus on images, covering the CelebA (Liu et al., 2015) & ImageNette (Howard, 2022) datasets as well as Signed Distance Functions (Tancik et al., 2021), all widely used in the INR community. All datasets have been pre-processed to a size of 178 × 178. We follow the MetaSparseINR authors in the choice of those datasets and thus compare directly to that work as well as the baselines discussed. We provide a description of those baselines in the Appendix.

Figure 3: Quantitative results for unstructured sparsity. Results show the mean and standard deviation over three runs of each method.
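For reference, the PSNR metric used throughout reduces to a few lines for data in [0, 1]; this is a generic sketch rather than code from our implementation.

```python
import torch

def psnr(x_hat, x):
    # PSNR in dB for signals standardised to [0, 1]: -10 * log10(MSE).
    mse = torch.mean((x_hat - x) ** 2)
    return -10.0 * torch.log10(mse)
```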
Closely following the MetaSparseINR setup, the network for all tasks is a 4-layer MLP with 256 units, which we meta-train for 2 inner-loop steps using a batch size of 3 examples. We use the entire image to compute gradients in the inner loop. In order to correct for the absence of MetaSGD (Li et al., 2017) in MetaSparseINR, we also provide results using a fixed learning rate for SDF & CelebA as a fair comparison, although we strongly suggest using MetaSGD as a default. Thanks to the kind cooperation of the MetaSparseINR authors, all baseline results are directly taken from their evaluation, ensuring an apples-to-apples comparison.

Full quantitative results across all datasets for varying levels of target sparsity are shown in Figure 3. In all cases we notice a significant improvement, especially at high sparsity levels, which are most important for good compression results. At its best, MSCN (with MetaSGD) on CelebA outperforms MetaSparseINR by over 9 dB at the highest and over 4 dB at the lowest sparsity levels considered. As PSNR is calculated on a log-scale, this is a significant improvement. To provide a more intuitive sense of those results we provide qualitative results in Figure 4b. Note that at extreme sparsity levels the MetaSparseINR result is barely distinguishable, while facial features in MSCN reconstructions are still clearly recognisable. Further qualitative examples are shown in the Appendix.

An interesting aspect of our method is the analysis of resulting sparsity patterns. Figure 4a shows the distribution of sparsity throughout the network at various global sparsity levels. We notice that such patterns vary both based on the overall sparsity level as well as the random initialisation, suggesting that optimal patterns are specific to each optimisation problem and thus cannot be specified in advance to a high degree of certainty. It is also worth pointing out that existing hand-designed sparsity distributions (e.g. Mocanu et al., 2018; Evci et al., 2020) would result in a different pattern, allocating equal sparsity to layers 2-4, whereas our empirical results suggest this might not be optimal in all cases.

Figure 4: Results on CelebA. (a) Sparsity pattern across layers at various levels of overall network sparsity. Results shown over 3 random seeds. (b) Qualitative results on CelebA for various levels of sparsity.

## 5.2 Structured Sparse Modulations

| Dataset, array size | Model | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|
| ERA5, 181 × 360 | Functa | 43.2 | 43.7 | 43.8 | 44.0 | 44.1 |
| | MSCN | 44.6 | 45.7 | 46.0 | 46.6 | 46.9 |
| CelebA-HQ, 64 × 64 | Functa | 21.6 | 23.5 | 25.6 | 28.0 | 30.7 |
| | MSCN | 21.8 | 23.8 | 25.7 | 28.1 | 30.9 |
| ShapeNet 10, 64³ | Functa | 99.30 | 99.40 | 99.44 | 99.50 | 99.55 |
| | MSCN | 99.43 | 99.50 | 99.56 | 99.63 | 99.69 |
| SRN Cars, 128 × 128 | Functa | 22.4 | 23.0 | 23.1 | 23.2 | 23.1 |
| | MSCN | 22.8 | 24.0 | 24.3 | 24.5 | 24.8 |

Table 1: Quantitative results for each dataset. Shown is the mean reconstruction performance at each modulation size (columns). Metrics are voxel accuracy for ShapeNet 10 and PSNR for all others. Corresponding sparsity levels for MSCN are: 64: 99.1%, 128: 98.2%, 256: 96.4%, 512: 92.9%, 1024: 85.7%.

Figure 5: Modulations show a high level of reuse (top). Modulation distribution for 1024 (middle) & 64 modulations (bottom).
In this section, we consider the evaluation of the sparse structured modulation setting (Section 3.3.2). The closest competitor is Functa (Dupont et al., 2022a), which we take to be our baseline method. We evaluate on *Voxels* using the top 10 classes of the ShapeNet dataset (Chang et al., 2015), *NeRF scenes* using SRN Cars (Sitzmann et al., 2019), *manifolds* using the ERA-5 dataset (Hersbach et al., 2019) and on *images* using the CelebA-HQ dataset (Karras et al., 2017). Due to the relatively low dimensionality of modulations, networks in this section are significantly deeper, comprising 15 layers of 512 units each. In accordance with Functa, we report results using MetaSGD with 3 inner-loop steps and varying batch sizes (see Appendix for details) due to memory restrictions.

We show full quantitative results in Table 1, noting that we outperform Functa by a noticeable margin in almost all settings. Resulting sparsity patterns shown in Figure 5 (middle & bottom) are particularly interesting, showing that our method leads to the intuitive result of preferring to allocate most of its modulation budget in earlier layers, as such modulations have the potential to have a large effect on the whole network. Indeed, Dupont et al. (2022a) write *"reconstructions are more sensitive to earlier modulation layers than later ones. Hence we can reduce the number of shift modulations by only using them for the first few layers of the MLP"*. A possible explanation for that observation is that while early modulations do make the largest difference, our method continues to make use of modulations in later layers. This is another argument for learned sparsity patterns over pre-defined distributions, which would result in a mostly uniform allocation throughout the network.

Interestingly, Figure 5 (top) shows that modulations exhibit a high degree of reuse. We plot the fraction of modulations that are re-used when starting an optimisation process from the same random initialisation but allowing for a larger number of modulations (x-axis) relative to a network with fewer modulations (distinguished by colours). In all cases the fraction of re-use is much higher than with a random allocation.

Furthermore, we provide qualitative results in Figures 6 and 7. In both cases we observe noticeable improvements over Functa. To provide a qualitative notion of the gains afforded by a larger number of modulations, we show results for MSCN in Figure 6d.

Figure 6: Qualitative results for structured sparsity on ShapeNet10. (a)-(c) Comparison with Functa for 1024 modulations. (d) Reconstruction quality for an increasing number of modulations.

Figure 7: Qualitative results on ERA5 (1024 modulations). (a)-(b) Prediction errors shown on the manifold for 1024 modulations. Best viewed as a .gif (Functa, MSCN). (c) Full error shown over the entire map.

## 5.3 Compression Performance

Returning to one of the key motivations of our work, we now provide a comparison with various commonly used compression schemes. The closest method to our work is COIN++ (Dupont et al., 2022b), which is an application of the previously discussed Functa for compression. The authors apply uniform quantisation and entropy coding to the dense latent vector and compare to a wide range of traditional and learned compression algorithms. In order for MSCN to be a competitive quantisation scheme, we follow the COIN++ approach to quantisation and entropy coding, which we apply to any non-zero modulations remaining after the trained architecture has been adapted to a single data item. Identical to the observations for COIN++, we find that modulations can be sparsified to a high level, allowing the quantisation of a standard 32-bit representation to only 5-6 bits with little loss of performance.
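A minimal sketch of this uniform quantisation step follows; the bit-width, names, and per-tensor range convention are illustrative assumptions, and the resulting integer symbols would subsequently be entropy-coded (e.g. with arithmetic coding).

```python
import torch

def quantise_modulations(m, n_bits=5):
    # Uniformly quantise the surviving (non-zero) modulations to 2**n_bits
    # levels. Returns integer symbols plus (lo, step) for dequantisation;
    # the symbols would then be entropy-coded. Illustrative sketch only.
    levels = 2 ** n_bits
    lo, hi = m.min(), m.max()
    step = (hi - lo) / (levels - 1)
    symbols = torch.round((m - lo) / step).long()
    return symbols, lo, step

def dequantise_modulations(symbols, lo, step):
    return lo + symbols.float() * step
```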
For the large images found in the Kodak dataset, we split each image into smaller patches that are represented separately. As any shared weights are identical for each example, we consider their cost amortised (i.e. they are part of the compression program) and thus do not report it in the following results. This is standard practice. For this reason, we force the use of identical sparsity patterns for each example (i.e. we do not update ϕ in Equation 8) to avoid the otherwise necessary cost of communicating the sparsity pattern. Finally, we found the structured sparsity formulation of Section 3.3.2 to be most suitable for optimal compression, combining inductive biases with structure learning algorithms. We provide further details on these experiments in the Appendix.

Figure 8: Rate-distortion plots on CIFAR10 (a) and Kodak (b). (c) shows a qualitative comparison on CIFAR10. The 128/1024 modulation rows correspond to evaluating at the leftmost and rightmost points of the MSCN curve in (a). Results are almost indistinguishable for 1024 modulations. Best viewed on a computer.

Figure 9: Qualitative examples on the Kodak dataset. (a) Original (b) Compressed (c) Residuals (d) Original (e) Compressed (f) Residuals. The PSNRs achieved are 25.58 (left) and 31.77 (right).

We provide a comparison with various compression schemes for CIFAR10 in Figure 8a and Kodak (for which we pre-train on the DIV2K dataset (Agustsson & Timofte, 2017), as in (Strümpler et al., 2021)) in Figure 8b. The x-axis shows the bit-rate in bits-per-pixel (BPP, computed as bits per parameter × number of parameters / data dimensionality). We note that MSCN consistently outperforms COIN++, suggesting that the previously observed performance improvements over Functa also hold for compression applications. We also find that our method is competitive specifically in the low bit-rate regime, where we achieve strong performance improvements over JPEG/JPEG2000. Note that the strong improvement over COIN++ on Kodak may be a result of the pre-training on DIV2K (COIN++ uses frames of Vimeo90k), whereas the results on CIFAR10 are in line with what can be expected given the improvement over Functa in the previous section. Qualitative results are shown in Figures 8c and 9.

As hypothesised in COIN++, the resulting gap to state-of-the-art codecs like BMS & CST may be due to their use of deep generative models for entropy coding, which could be added to our formulation in future work with little conceptual effort. Moreover, the use of more intelligent post-training compression has recently proven fruitful (Strümpler et al., 2021) and should further improve results. It is, however, important to state that such methods require significantly higher encoding times, thus being in conflict with one of our key motivations. We therefore avoid a direct comparison, as it would likely fail to communicate the inherent trade-off of quality versus compression speed.
As with deep entropy coding, such techniques could be straightforwardly used in conjunction with MSCN, demonstrating the flexibility of our method.

## 5.4 Discussion & Future Work

In this work we have introduced a principled framework for sparse Meta-Learning, demonstrating two instantiations particularly suitable for Implicit Neural Representations. Our extensive evaluations show competitive results, outperforming various state-of-the-art techniques. It is worth mentioning that the ideas introduced in this work can be straightforwardly combined with some of the considered baselines. In the Functa case, for instance, it would be reasonable to expect a latent variable approach to be more competitive if only a sparse subset of modulations needs to be reconstructed.

A key motivation was the goal of avoiding costly architecture search procedures used in related work (e.g. Dupont et al., 2021; Strümpler et al., 2021). We have shown that in both the structured and unstructured sparsity cases, we observe sparsity patterns that differ from previously hand-designed distributions and also adapt to the specific initialisation of the weights. In the structured sparsity case, we observe that this can be combined with inductive biases which have previously been found to be useful.

Our method continues the trend of rapid advances made with INRs for compression. As the field continues to challenge the state of the art in compression, we observe that sparsity is likely to be a key element in this endeavour. We further hypothesise that additional improvements are likely to come from alternative Meta-Learning techniques which avoid the high memory requirements of MAML. In addition, we expect methods with smarter quantisation and based on deep entropy coding to significantly improve results over our simple baseline. Current progress nevertheless inspires optimism for the prospect of a single learning algorithm that can be applied as a compression method to a vast set of modalities.

In the wider context of Meta-Learning, we anticipate this framework to be particularly suitable in applications where fast inference is required. Sharing the gates introduced in Section 3 among sets of parameters is an easy way of introducing group sparsity and thus reducing the floating-point operations required in the forward pass. A popular example would be on-device recommender systems (Galashov et al., 2019). Furthermore, the introduced framework could be used in Continual Meta-Learning as previously done in, e.g., Mallya & Lazebnik (2018); Von Oswald et al. (2021); Schwarz et al. (2021).

## References

Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops*, July 2017.

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. *Advances in neural information processing systems*, 29, 2016.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, et al.
Efficient geometry-aware 3d generative adversarial networks. *arXiv preprint arXiv:2112.07945*, 2021. Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. Hao Chen, Bo He, Hanyu Wang, Yixuan Ren, Ser Nam Lim, and Abhinav Shrivastava. Nerv: Neural representations for videos. *Advances in Neural Information Processing Systems*, 34, 2021. Thomas Davies, Derek Nowrouzezahrai, and Alec Jacobson. On the effectiveness of weight-encoded neural implicit 3d shapes. *arXiv preprint arXiv:2009.09808*, 2020. Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, and Arnaud Doucet. Coin: Compression with implicit neural representations. *arXiv preprint arXiv:2103.03123*, 2021. Emilien Dupont, Hyunjik Kim, SM Eslami, Danilo Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you should treat it like one. *arXiv preprint arXiv:2201.12204*, 2022a. Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee Whye Teh, and Arnaud Doucet. Coin++: Data agnostic neural compression. *arXiv preprint arXiv:2201.12904*, 2022b. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In *International Conference on Machine Learning*, pp. 2943–2952. PMLR, 2020. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pp. 1126–1135. PMLR, 2017. Sebastian Flennerhag, Andrei A Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, and Raia Hadsell. Meta-learning with warped gradient descent. *arXiv preprint arXiv:1909.00025*, 2019. Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, and Satinder Singh. Bootstrapped meta-learning. *arXiv preprint arXiv:2109.04504*, 2021. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *arXiv preprint arXiv:1803.03635*, 2018. Alexandre Galashov, Jonathan Schwarz, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, SM Eslami, and Yee Whye Teh. Meta-learning surrogate models for sequential decision making. *arXiv* preprint arXiv:1903.11907, 2019. Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. *arXiv preprint* arXiv:1902.09574, 2019. H Hersbach, B Bell, P Berrisford, G Biavati, A Horányi, J Muñoz Sabater, J Nicolas, C Peubey, R Radu, I Rozum, et al. Era5 monthly averaged data on single levels from 1979 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS), 10:252–266, 2019. J Howard. Imagenette. https://github.com/fastai/imagenette/, 2022. Version 2. Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General perception with iterative attention. In *International Conference on Machine Learning*, pp. 4651– 4664. PMLR, 2021. Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen. Top-kast: Top-k always sparse training. *Advances in Neural Information Processing Systems*, 33:20744–20754, 2020. Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. One model to learn them all. *arXiv preprint arXiv:1706.05137*, 2017. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 
Progressive growing of gans for improved quality, stability, and variation. *arXiv preprint arXiv:1710.10196*, 2017.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. *Advances in neural information processing systems*, 2, 1989.

Jaeho Lee, Jihoon Tack, Namhoon Lee, and Jinwoo Shin. Meta-learning sparse implicit neural representations. *Advances in Neural Information Processing Systems*, 34, 2021.

Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few-shot learning. *arXiv preprint arXiv:1707.09835*, 2017.

Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 3296–3305, 2019.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of the IEEE international conference on computer vision*, pp. 3730–3738, 2015.

Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through L0 regularization. *arXiv preprint arXiv:1712.01312*, 2017.

Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. *arXiv preprint arXiv:1611.00712*, 2016.

Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In *Proceedings of the IEEE conference on Computer Vision and Pattern Recognition*, pp. 7765–7773, 2018.

Ishit Mehta, Michaël Gharbi, Connelly Barnes, Eli Shechtman, Ravi Ramamoorthi, and Manmohan Chandraker. Modulated periodic activations for generalizable local functional representations. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 14214–14223, 2021.

Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In *European conference on computer vision*, pp. 405–421. Springer, 2020.

Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. *Nature communications*, 9(1):1–12, 2018.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In *ICML*, 2010.

Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. *arXiv preprint arXiv:1803.02999*, 2(3):4, 2018.

Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. *arXiv preprint arXiv:2112.11446*, 2021.

Jonathan Schwarz, Siddhant Jayakumar, Razvan Pascanu, Peter Latham, and Yee Teh. Powerpropagation: A sparsity inducing weight reparameterisation. *Advances in Neural Information Processing Systems*, 34, 2021.

Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein.
Scene representation networks: Continuous 3d-structure-aware neural scene representations. *Advances in Neural Information Processing Systems*, 32, 2019. Vincent Sitzmann, Eric Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. Metasdf: Meta-learning signed distance functions. *Advances in Neural Information Processing Systems*, 33:10136–10147, 2020a. Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. *Advances in Neural Information Processing Systems*, 33: 7462–7473, 2020b. Nikko Ström. Sparse connection and pruning in large dynamic artificial neural networks. In *Fifth European* Conference on Speech Communication and Technology. Citeseer, 1997. Yannick Strümpler, Janis Postels, Ren Yang, Luc Van Gool, and Federico Tombari. Implicit neural representations for image compression. *arXiv preprint arXiv:2112.04267*, 2021. Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. *Advances in Neural Information Processing Systems*, 33:7537–7547, 2020. Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P Srinivasan, Jonathan T Barron, and Ren Ng. Learned initializations for optimizing coordinate-based neural representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2846–2855, 2021. Georg Thimm and Emile Fiesler. Evaluating pruning methods. In Proceedings of the International Symposium on Artificial neural networks, pp. 20–25, 1995. Sebastian Thrun and Lorien Pratt. *Learning to learn*. Springer Science & Business Media, 2012. Hongduan Tian, Bo Liu, Xiao-Tong Yuan, and Qingshan Liu. Meta-learning with network pruning. In European Conference on Computer Vision, pp. 675–700. Springer, 2020. Johannes Von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, and João Sacramento. Learning where to learn: Gradient sparsity in meta and continual learning. Advances in Neural Information Processing Systems, 34:5250–5263, 2021. Gregory K Wallace. The jpeg still picture compression standard. *IEEE transactions on consumer electronics*, 38(1):xviii–xxxiv, 1992. Yunfan Zhang, Ties van Rozendaal, Johann Brehmer, Markus Nagel, and Taco Cohen. Implicit neural video compression. *arXiv preprint arXiv:2112.11312*, 2021. Luisa Zintgraf, Kyriacos Shiarli, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In *International Conference on Machine Learning*, pp. 7693–7702. PMLR, 2019.
Review 1:

Summary: The manuscript proposes a new method for sparsifying parameter updates in meta-learning, specifically within the context of data compression with INRs (implicit neural representations). The method is based on imposing an additional L_0 penalty (Louizos et al., 2017) on the model update ($\delta \theta$) in the MAML framework. Following Louizos et al., 2017, the Gumbel-Softmax trick is used to differentiate through samples of the binary distributions masking the weights, so the modified MAML inner loop is still differentiable (for the purpose of computing the gradient of the overall meta-learning objective), allowing for end-to-end (meta) training. The proposed method is evaluated on several datasets, demonstrating a superior sparsity vs. reconstruction quality tradeoff, especially compared to a closely related method (Meta-SparseINR; Lee et al., 2021), which performs iterative pruning in the MAML outer loop. However, I have concerns about how meaningful the experimental comparisons are.

Strengths and Weaknesses:

Strengths:
1. The method has broad applicability, not only to INRs, but also to other meta-learning tasks where a sparse model (or model update) is required -- although further experiments may be needed to verify this.
2. The method is conceptually straightforward, and allows end-to-end training and straightforward implementation with standard tools from deep learning.

Weaknesses:
1. Lack of comparison with baselines that directly target the bit-rate cost of compression with INR. As the authors write early on in the manuscript, "we specifically focus on improving the suitability of INRs as a compression method", and therefore argue (correctly) that we should focus on the compression cost of the parameter difference $\delta \theta$ that is transmitted during data compression, instead of the updated model $\theta_0 + \delta \theta$. Now, the sparsity of $\delta \theta$ is only a surrogate for its compressibility, and at the end of the day compression algorithms are evaluated on the file size (bitrate). The lack of such a comparison makes it hard to evaluate the success/failure of the sparsity-based method towards the stated goal of improving data compression with INR.
2. It's unclear what is being compared across some of the methods --- the sparsity of $\delta \theta$, or $\theta$ ($\theta_0 + \delta \theta$) itself. Clearly, the proposed method targets the sparsity of $\delta \theta$ -- "Note that as $\theta_0$ is dense, the resulting $\theta_0 + \delta \theta$ is still dense". By contrast, the Meta-SparseINR (Lee et al., 2021) baseline aims to sparsify $\theta_0$, and subsequently $\theta_0 + \delta \theta$. It's not surprising that, with a denser $\theta_0 + \delta \theta$ (although still sparse $\delta \theta$), the proposed MSCN method outperforms Meta-SparseINR in reconstruction quality. It would be rather meaningless if the comparison is between the sparsity level of $\delta \theta$ for the proposed MSCN method, vs. the sparsity level of $\theta_0 + \delta \theta$ for the Meta-SparseINR baseline.

Requested Changes:
1. As per weakness #1, adding results on a standard image compression benchmark in terms of bitrate vs. PSNR, against INR compression baselines such as (Strümpler et al., 2021; Dupont et al., 2022b), would make a much more convincing case for the viability of the proposed method for compression.
2. As per weakness #2, more details for the evaluation setup regarding where the sparsity numbers come from ($\delta \theta$ vs.
something else) for each method would add significant insight into the comparisons and behavior of the different methods.
3. Minor typo on page 4, right above Eq(8): "the other loop" -> "the outer loop".

Broader Impact Concerns: There are no obvious ethical implications of this work that need to be addressed in the manuscript.

==================================================

Review 2:

Summary: The main topic of this paper is lossy compression (LC) using implicit neural representations (INRs). In previous works on LC with INRs, encoding a message (e.g. image) is done by overfitting a neural network to learn a function from the spatial coordinates of images to the RGB values. The weights of the network are then stored and represent the code for that specific message. To decode, the network weights are restored and the function is evaluated to recover the message. The authors' contributions are two-fold:
1. (Reducing the required rate) The size of the network defines the final code-length. Therefore the authors incorporate state-of-the-art (according to them) learned-sparsity techniques to reduce the number of weights, which should bring down code-length. This technique is backpropagation-compatible, which is crucial for point (2.) below.
2. (Reducing the required compute) Instead of overfitting a network to each message, a meta-learning approach (MAML) is used to learn a common initialization across a set of messages.

Strengths and Weaknesses: I don't know much about meta-learning, so I'll comment on the aspects regarding compression. Overall the paper looks technically correct and I don't see any major technical issues with the method. However, the experiments and discussions for lossy compression are a bit lacking. The exact setup of the experiments that resulted in Figure 7 needs to be clarified. **Note that it is quite possible that I did not completely understand how MAML works or is used.** In detail:
1. How exactly would a compression scheme with this method work? You are given a sequence of images $X_1, \dots, X_n$ to compress. MAML is used to learn $(\theta_0 + \delta\theta_i)\circ\mathbf{z}^\prime$ for $i=1,\dots,n$ images. Then, what exactly is used to calculate the rate in Figure 7? Is it just the parameters $\\{\delta\theta_i\circ\mathbf{z}^\prime\\}_{i=1}^n$ or does $\theta_0\circ\mathbf{z}^\prime$ also enter the calculation? I'm assuming both are accounted for in the rate, as that would be the correct calculation, but the authors should clarify this.
2. Is the method highly sensitive to the number of images being compressed (i.e. $n$ above)? As you compress more images, $\theta_0$ needs to be a good initialization point for more and more samples. If so, it would only make sense to talk about the rate-distortion curve (Figure 7) as a function of $n$, especially because JPEG/JPEG2000 won't have a reduction in performance as $n$ increases (assuming it was applied individually to each image).
4. The idea of MAML was to reduce the overall compute necessary to compress a set of images with respect to previous methods using INRs. Maybe the authors could comment and give a sense of how much less time/steps/resources were required in the experiments.
5. The wording of the introduction, abstract and final paragraph of section 5.2 suggests that the main objective is compression. The paper mostly focuses on evaluating the resulting sparseness of the method. This makes sense because the weights are not entropy coded and are instead stored to disk using 16 or 32bit precision.
However, this is mentioned only at the very end of page 9, which kept me wondering while reading the paper why the authors focused on the sparsity and not the actual rate (e.g. BPP). Maybe this could be mentioned at the start of the paper.

Requested Changes:
1. (critical) I believe the authors should clarify how the experiments were conducted that resulted in Figure 7. See the previous section of this review.

All below are optional, but I'd highly recommend the following to strengthen the paper:
2. Usually other distortion measures, such as MS-SSIM, are placed alongside PSNR as they capture perceptual quality better (empirically).
3. This paper might be of interest and can serve as a motivation for the introduction: https://proceedings.mlr.press/v151/isik22a.html. They show that any good model compressor, under certain conditions, must perform some sort of pruning/sparsity.
4. Maybe the authors could comment and give a sense of how much less time/steps/resources were required in the experiments, as mentioned before.
5. Writing suggestions. Nothing major, but you might want to consider:
- I found the discussion of part (ii) in section 3.2 a bit confusing. I assume it refers to using $\theta + \delta\theta\circ\mathbf{z}'$ as discussed in the third paragraph on page 5. If so, I'd recommend merging that later paragraph into part (ii) to make it clearer.
- I believe equation (6) has a typo. The first 2 summations have the same indexing variable $i$.
- What does "highest plasticity" mean on page 5? Maybe make that clearer (unless it is a widely used term that I'm not aware of).
- Typo on page 6: "for ease of ~notion~ notation"
- Typo on page 9: "modulations in ~layer~ later layers"
- Vector notation is inconsistent. Some vectors are bold such as $\mathbf{x}$ and $\mathbf{y}$, but $\theta, \phi, s$ are not.

Broader Impact Concerns: No broader impact statement was available. The authors may want to consider discussing known algorithmic biases for model pruning, and how that may disproportionately affect underrepresented features in the data when working with images of humans. See https://arxiv.org/abs/2010.03058

==================================================

Review 3:

Summary: The paper introduces a method for data compression based on the idea of encoding the data in the weights of a neural network which maps coordinates to signal values. This has been explored before, but the present work uses a better method for sparsification of the weights based on a differentiable L0 penalty, and shows how this can be combined with a meta-learning approach that significantly improves encoding time as well as compression rates. The method is evaluated on a range of datasets with quite different kinds of data such as images, data on manifolds and shapes.

Strengths and Weaknesses: The idea of this paper makes sense to me, and the project is well executed. The paper is clearly written, and explains the important ideas well. Experimental validation is fairly extensive. The main weakness is that due to the way results are presented, it is not clear how this method compares to state-of-the-art (neural) compression methods. The main issues are that the datasets are non-standard in the compression literature, and the results are not presented as R/D curves (bits per pixel vs PSNR) but rather as "Fraction of surviving parameters" vs PSNR. I'm sure the latter can be turned into BPP, but it would be nice to not leave this job to the reader.
Also, the result would be most convincing if the BPP is actually computed from the file size on disk rather than a theoretical calculation (many practical issues can come up when trying to actually encode the data). Another good metric to report is BD-rate. Regarding the dataset, it would be nice if the authors could run their method on one of the standard benchmarks, such as KODAK, which is quite small. Even if the method is not competitive with the state of the art (non-INR based methods), I would likely recommend acceptance because the method is interesting and promising (perhaps with further innovations), and the method is very flexible, likely yielding at least decent results on a wide range of data modalities with fairly minimal effort.

Regarding novelty, I am not completely up to date with the literature, but it seems like Strumpler et al. also already considered sparsification, which is not mentioned although Strumpler is cited. They use an L1 penalty instead of L0. Comparing to this method would be nice. Other papers based on optimizing parameters for compression include Yang et al. and van Rozendaal et al.

Strumpler, Postels, Yang, Van Gool, Tombari, Implicit Neural Representations for Image Compression
Yang, Bamler, Mandt, Improving inference for neural image compression
van Rozendaal, Huijben, Cohen, Overfitting for Fun and Profit: Instance-Adaptive Data Compression

Requested Changes:
- Report standard metrics on standard datasets to enable direct comparison to existing methods based on INRs and other methods, as discussed above.
- Discuss the difference to Strumpler and other related work if deemed relevant.
- Eq. 3 is missing a min and a subscript 0, I think.

Broader Impact Concerns: none

==================================================

Metareview:

Recommendation: Accept as is

Comment: All reviewers agreed that the paper was innovative and technically correct. The authors have successfully addressed the reviewers' concerns on clarity, novelty, metrics, and baseline comparisons during the review period. These extensive improvements helped make the approach more comparable to earlier work in the field. No further revisions are necessary.

==================================================
# Graph-Level Representation Learning With Joint-Embedding Predictive Architectures

Anonymous authors

Paper under double-blind review

## Abstract

Joint-Embedding Predictive Architectures (JEPAs) have recently emerged as a novel and powerful technique for self-supervised representation learning. They aim to learn an energy-based model by predicting the latent representation of a target signal y from the latent representation of a context signal x. JEPAs bypass the need for negative and positive samples, traditionally required by contrastive learning, while avoiding the overfitting issues associated with generative pretraining. In this paper, we show that graph-level representations can be effectively modeled using this paradigm by proposing a Graph Joint-Embedding Predictive Architecture (Graph-JEPA). In particular, we employ masked modeling and focus on predicting the *latent* representations of masked subgraphs starting from the latent representation of a context subgraph. To endow the representations with the implicit hierarchy that is often present in graph-level concepts, we devise an alternative prediction objective that consists of predicting the coordinates of the encoded subgraphs on the unit hyperbola in the 2D plane. Through multiple experimental evaluations, we show that Graph-JEPA can learn highly semantic and expressive representations, as evidenced by the downstream performance in graph classification, regression, and distinguishing non-isomorphic graphs. The code will be made available upon acceptance.

## 1 Introduction

Graph data is ubiquitous in the real world due to its ability to universally abstract various concepts and problems (Ma & Tang, 2021; Veličković, 2023). To deal with this widespread data structure, Graph Neural Networks (GNNs) (Scarselli et al., 2008; Kipf & Welling, 2016a; Gilmer et al., 2017; Veličković et al., 2017) have established themselves as a staple solution. Nevertheless, most applications of GNNs usually rely on ground-truth labels for training. The growing amount of graph data in fields such as bioinformatics, chemoinformatics, and social networks makes manual labeling laborious, sparking significant interest in unsupervised graph representation learning. A particularly emergent area in this line of research is self-supervised learning (SSL). In SSL, alternative forms of supervision are created stemming from the input signal. This process is then typically followed by invariance-based or generative-based pretraining (Liu et al., 2023; Assran et al., 2023).

Invariance-based approaches optimize the model to produce comparable embeddings for different views of the input signal. A common paradigm associated with this procedure is contrastive learning (Tian et al., 2020). Typically, these alternative views are created by a data augmentation procedure. The views are then passed through their respective encoder networks (which may share weights), as shown in Fig. 1a. Finally, an energy function, usually framed as a distance, acts on the latent embeddings. In the graph domain, several works have applied contrastive learning by designing graph-specific augmentations (You et al., 2020), using multi-view learning (Hassani & Khasahmadi, 2020) and even adversarial learning (Suresh et al., 2021). Invariance-based pretraining is effective but comes with several drawbacks, i.e., the necessity to augment the data and process negative samples, which limits computational efficiency. In order to learn semantic embeddings that are useful for downstream tasks, the augmentations must also be non-trivial.
In order to learn semantic embeddings that are useful for downstream tasks, the augmentations must also be non-trivial. ![1_image_0.png](1_image_0.png) Figure 1: Illustration of the SSL approaches discussed in this paper: (a) Joint-Embedding (Contrastive) Architectures learn to create similar embeddings for inputs x and y that are compatible with each other and dissimilar embeddings otherwise. This compatibility is implemented in practice by creating different views of the input data. (b) Generative Architectures reconstruct a signal y from an input signal x by conditioning the decoder network on additional (potentially latent) variables z. (c) Joint-Embedding Predictive Architectures act as a bridge: They utilize a predictor network that processes the context x and is conditioned on additional (potentially latent) variables to predict the embedding of the target y *in latent space*. Generative-based pretraining methods, on the other hand, typically remove or corrupt portions of the input and predict them using an autoencoding procedure (Vincent et al., 2010; He et al., 2022) or rely on autoregressive modeling (Brown et al., 2020; Hu et al., 2020). Fig. 1b depicts the typical instantiation of these methods: The input signal x is fed into an encoder network that constructs the latent representation, and from it a decoder generates yˆ, the data corresponding to the target signal y. The energy function is then applied in data space, often through a reconstruction error. Generative models generally display strong overfitting tendencies van den Burg & Williams (2021) and can be non-trivial to train due to issues such as mode collapse Adiga et al. (2018). Moreover, the features they learn are not always useful for downstream tasks Meng et al. (2017). An intuitive explanation of this problem is that generative models have to estimate a data distribution that is usually quite complex, so the latent representations must be directly descriptive of the whole data space Loaiza-Ganem et al. (2022). This can become even more problematic for graphs because they live in a non-Euclidean and inhomogenous data space. Despite the aforementioned issues, masked autoencoding has recently shown promising results also in the graph domain with appropriately designed models (Hou et al., 2022; Tan et al., 2023). Inspired by the innovative Joint-Embedding Predictive Architecture (JEPA) (LeCun, 2022; Assran et al., 2023), we propose Graph-JEPA, a JEPA for the graph domain that can learn graph-level representations by bridging contrastive and generative models. As illustrated in Fig. 1c, a JEPA has two encoder networks that receive the input signals and produce the corresponding representations. The two encoders can potentially be different models and don't need to share weights. A predictor module outputs a prediction of the latent representation of one signal based on the other, possibly conditioned on another variable. Graph-JEPA does not require any negative samples or complex data augmentation, and by operating in the latent space avoids the pitfalls associated with learning high-level details needed to fit the data distribution. However, the graph domain presents several challenges needed to properly design such an architecture: context and target extraction, designing a latent prediction task optimal for graph-level concepts, and learning expressive representations. In response to these questions, we equip Graph-JEPA with a specific masked modeling objective. 
The input graph is first divided into several subgraphs, and then the latent representation of randomly chosen target subgraphs is predicted given a context subgraph. The subgraph representations are consequently pooled to create a graph-level representation that can be used for downstream tasks. The nature of graph-level concepts is often assumed to be hierarchical (Ying et al., 2018). We conjecture that the typical latent reconstruction objective used in current JEPA formulations is not enough to provide optimal downstream performance. To this end, we design a prediction objective that starts by expressing the target subgraph encoding as a high-dimensional description of the hyperbolic angle. The predictor module is then tasked with predicting the location of the target on the 2D unit hyperbola. This prediction is compared with the target coordinates obtained by using the aforementioned hyperbolic angle. In this self-predictive setting, we explain why the stop-gradient operation and a simple predictor parametrization are useful to prevent representation collapse. To experimentally validate our approach, we evaluate Graph-JEPA against established contrastive and generative graph-level SSL methods across various graph datasets from different domains. Our proposed method demonstrates superior performance, outperforming most competitors while maintaining efficiency and ease of training. Notably, we observe from our experiments that Graph-JEPA can run up to 2.5x faster than Graph-MAE (Hou et al., 2022) and 8x faster than MVGRL (Hassani & Khasahmadi, 2020). Finally, we empirically demonstrate Graph-JEPA's ability to learn highly expressive graph representations by showing that a linear classifier trained on the learned representations almost perfectly distinguishes pairs of non-isomorphic graphs that the 1-WL test cannot differentiate.

## 2 Related Work

## 2.1 Self-Supervised Graph Representation Learning

Graph Neural Networks (GNNs) (Wu et al., 2019; Scarselli et al., 2008; Kipf & Welling, 2016a; Veličković et al., 2017) are now established solutions to different graph machine learning problems such as node classification, link prediction, and graph classification. Nevertheless, the cost of labeling graph data is quite high, given the immense variability of graph types and the information they can represent. To alleviate this problem, SSL on graphs has become a research frontier, where we can distinguish between two major groups of methods (Xie et al., 2022b; Liu et al., 2023):

Contrastive Methods. Contrastive learning methods usually minimize an energy function (Hinton, 2002; Gutmann & Hyvärinen, 2010) between different views of the same data. InfoGraph (Sun et al., 2019) maximizes the mutual information between the graph-level representation and the representations of substructures at different scales. GraphCL (You et al., 2020) works similarly to distance-based contrastive methods in the imaging domain. The authors first propose four types of graph augmentations and then perform contrastive learning based on them. The work of Hassani & Khasahmadi (2020) goes one step further by contrasting structural views of graphs. They also show that a large number of views or multiscale training does not seem to be beneficial, contrary to the image domain. Another popular research direction for contrastive methods is learning graph augmentations and how to leverage them efficiently (Suresh et al., 2021; Jin et al., 2021).
Contrastive learning methods typically require a lot of memory due to data augmentation and negative samples. Graph-JEPA is much more efficient than typical formulations of these architectures, given that it does not require any augmentations or negative samples. Another major difference is that the prediction in latent space in JEPAs is done through a separate predictor network rather than using the common Siamese structure (Bromley et al., 1993) (Fig. 1a vs. 1c).

Generative Methods. The goal of generative models is to recover the data distribution, an objective that is typically implemented through a reconstruction process. In the graph domain, most generative architectures that are also used for SSL are extensions of Auto-Encoder (AE) architectures (Hinton & Zemel, 1993). These models learn an embedding from the input data and then use a reconstruction objective with (optional) regularization to learn the data distribution. Kipf & Welling (2016b) extended the framework of AEs and VAEs (Kingma & Welling, 2013) to graphs by using a GNN as an encoder and the reconstruction of the adjacency matrix as the training objective. However, the results on downstream tasks with embeddings learned in this way are often unsatisfactory compared with contrastive learning methods, a tendency also observed in other domains (Liu et al., 2023). A recent and promising direction is masked autoencoding (MAE) (He et al., 2022), which has proved to be a very successful framework for the image and text domains. GraphMAE (Hou et al., 2022) is an instantiation of MAEs in the graph domain, where the node attributes are perturbed and then reconstructed, providing a paradigm shift from the structure learning objective of GAEs. S2GAE (Tan et al., 2023) is one of the latest GAEs, which focuses on reconstructing the topological structure but adds several auxiliary objectives and additional designs. Our architecture differs from generative models in that it learns to predict directly in the latent space, thereby bypassing the necessity of remembering and overfitting the high-level details that help maximize the data evidence (Fig. 1b vs. 1c).

## 2.2 Joint-Embedding Predictive Architectures

Joint-Embedding Predictive Architectures (LeCun, 2022) are a recently proposed design for SSL. The idea is similar to both generative and contrastive approaches, yet JEPAs are non-generative since they cannot directly predict y from x, as shown in Fig. 1c. The energy of a JEPA is given by the prediction error in the embedding space, not the input space. These models can intuitively be understood as a way to capture abstract dependencies between x and y, potentially given another latent variable z. It is worth noting that the different models comprising the architecture may differ in terms of structure and parameters. An in-depth explanation of Joint-Embedding Predictive Architectures and their connections to human representation learning is provided by LeCun (2022). Some works acknowledged that latent self-predictive architectures were effective (Grill et al., 2020; Chen & He, 2021) even before JEPAs effectively became synonymous with this concept. Inspired by these trends, a number of related works have tried to employ latent prediction objectives for graph SSL, showing advantages mostly compared to contrastive learning. Thakoor et al. (2021) perform latent self-prediction on augmented views of a graph in a similar fashion to BYOL (Grill et al., 2020), while Zhang et al.
(2021) rely on ideas from Canonical Correlation Analysis to frame a learning objective that preserves feature invariance and forces decorrelation when necessary. The work of Lee et al. (2022) presents a model that learns latent positive examples through a k-NN and clustering procedure in the transductive setting, while Xie et al. (2022a) combine instance-level reconstruction (generative pretraining) and feature-level invariance (latent prediction). Given that these models learn using a latent self-predictive objective, similar to ours, we will refer to them also using the term self-predictive in the rest of the paper. Unlike previously proposed methods, Graph-JEPA operates exclusively in the latent space and implements a novel training task without the need for data augmentation. At the current state of the art, the JEPA framework has been implemented for images (Assran et al., 2023), video (Bardes et al., 2023b;a), and audio (Fei et al., 2023). We propose the first architecture implementing the modern JEPA principles for the graph domain and use it to learn graph-level representations.

## 3 Method

A general overview. We consider graphs G defined as G = (V, E), where V = {v1 . . . vN} is the set of nodes, with cardinality |V| = N, and E = {e1 . . . eM} is the set of edges, with cardinality |E| = M. For simplicity of the exposition, we consider symmetric, unweighted graphs, although our method can be generalized to weighted or directed graphs. In this setting, G can be represented by an adjacency matrix A ∈ {0, 1}^(N×N), with Aij = 1 if nodes vi and vj are connected and 0 otherwise.

Fig. 2 provides the overview of the proposed architecture. The high-level idea of Graph-JEPA is to divide the input graph into subgraphs (patches) (He et al., 2023) and then predict the representation of a randomly chosen target subgraph from the representation of a single context subgraph (Assran et al., 2023). Again, we would like to stress that this masked modeling objective is realized in latent space without needing negative (or positive) samples. The subgraph representations are then pooled to create a vector representation for the whole graph, i.e., a graph-level representation. Therefore, the learning procedure consists of a sequence of operations: 1. Spatial Partitioning; 2. Subgraph Embedding; 3. Context and Target Encoding; 4. Latent Target Prediction.

## 3.1 Spatial Partitioning

We base the initial part of our architecture on the recent work of He et al. (2023), but similar ideas consisting of graph partitioning have been proposed before for graph SSL (Jin et al., 2020). This step consists of creating different subgraphs (patches), similar to how Vision Transformers (ViT) (Dosovitskiy et al., 2020) operate on images. We rely on the METIS (Karypis & Kumar, 1998) graph clustering algorithm, which partitions a graph into a pre-defined number of non-overlapping clusters p (Fig. 2a), such that the number of within-cluster links is much higher than between-cluster links. Note that having non-overlapping subgraphs can be problematic since edges can be lost in this procedure, and it is possible to end up with empty "subgraphs". In order to maintain a simple notion of locality in each subgraph and avoid completely empty subgraphs, a one-hop neighborhood expansion of the nodes in each extracted subgraph (connecting to each node in the subgraph all of its adjacent nodes from the input graph that are not already present) is performed (Fig. 2b).

Figure 2: A complete overview of Graph-JEPA. We first extract non-overlapping subgraphs (patches) (a.), perform a 1-hop neighborhood expansion (b.), and encode the subgraphs with a GNN (c.).
After the subgraph encoding, one is randomly picked as the context and m others as the targets (d.), and they are fed into their respective encoders (e.). The embeddings generated from the target encoder are used to produce the target subgraphs' coordinates ψ^y. Finally, the predictor network is tasked with directly predicting the coordinates ψ̂^y for each target subgraph based on the context embedding and the positional embedding of each target subgraph (f.). A regression loss acts as the energy function D between the predicted and target coordinates. Note that the extracted subgraphs in (a.) and (b.) are meant for illustrative purposes only. The number of nodes in each subgraph can vary.

## 3.2 Subgraph Embedding

After partitioning the graph, we learn a representation for each subgraph through a GNN (Fig. 2c). The specific choice of the GNN is arbitrary and depends on what properties one wishes to induce in the representation. The learned node embeddings are mean-pooled to create a vector representation for each subgraph: {h_1, . . . , h_p}, h ∈ R^d. Given that these embeddings will be used as context or target variables, providing additional information regarding the subgraphs is key in order to guide the predictor network. Otherwise, the prediction task might be too difficult. Thus, we propose to use a positional embedding for each subgraph, which is implemented as the maximum Random Walk Structural Embedding (RWSE) of all the nodes in that subgraph. In this way, the position is characterized consistently for each patch. Formally, an RWSE (Dwivedi et al., 2021) for a node v can be defined as:

$$P_{v}=(M_{ii},M_{ii}^{2},\ldots,M_{ii}^{k})\tag{1}$$

with P_v ∈ R^k, M^k = (D^{-1}A)^k the random-walk transition matrix of order k, and i the index of node v in the adjacency matrix. Therefore, M^k_ii encodes the probability of node v landing back on itself after a k-step random walk. Given a subgraph l, let V_l denote the set of nodes in l. We define the subgraph RWSE as:

$$P_{l}=\max_{v\in V_{l}}P_{v}\tag{2}$$

where the max operation is performed elementwise. A straightforward explanation of the above definition is that the positional information of each subgraph is essentially contained in the node with the highest degree², which will act as an anchor reference when predicting the target subgraphs' latent representations. Intuitively, knowing P_l is useful for the prediction task because the features present in the most representative node will have been diffused the most (via the GNN) into the subgraph.

²A more suitable term would be the in-degree, but there is no difference in the undirected case.
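For concreteness, the following is a minimal NumPy sketch of Eqs. 1–2; it is an illustrative transcription rather than the authors' implementation, and the guard against isolated nodes is our own addition.

```python
import numpy as np

def rwse(adj: np.ndarray, k: int) -> np.ndarray:
    """Node-level RWSE (Eq. 1): return-to-self probabilities for 1..k random-walk steps."""
    deg = adj.sum(axis=1, keepdims=True)
    m = adj / np.maximum(deg, 1.0)          # random-walk transition matrix D^{-1} A
    p = np.eye(adj.shape[0])
    feats = []
    for _ in range(k):
        p = p @ m                           # t-step transition probabilities
        feats.append(np.diag(p).copy())     # M^t_ii: probability of landing back on node i
    return np.stack(feats, axis=1)          # shape: (num_nodes, k)

def subgraph_rwse(node_rwse: np.ndarray, nodes: list) -> np.ndarray:
    """Subgraph positional embedding (Eq. 2): elementwise max over the subgraph's nodes."""
    return node_rwse[nodes].max(axis=0)
```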
## 3.3 Context And Target Encoding

Given the subgraph representations and their respective positional embeddings, we frame the Graph-JEPA prediction task in a similar manner to I-JEPA (Assran et al., 2023). The goal of the network is to predict the latent embeddings of randomly chosen target subgraphs, given one random context subgraph. The prediction is conditioned on positional information regarding each subgraph. At each training step, we choose one random subgraph as the context x and m others as targets Y = {y_1, . . . , y_m} (Fig. 2d). These subgraphs are processed by the context and target encoders (Fig. 2e), which are parametrized by Transformer encoder blocks (without self-attention for the context) where normalization is applied first (Xiong et al., 2020). The target encoder uses Hadamard self-attention (He et al., 2023), but other choices, such as the standard self-attention mechanism (Vaswani et al., 2017), are perfectly viable. We can summarize this step as:

$$z^{x}=E_{c}(x),\quad Z^{y}=E_{t}(Y),\tag{3}$$

with z^x ∈ R^d and Z^y ∈ R^{m×d}. At this stage, we can use the predictor network to directly predict Z^y from z^x. This is the typical formulation of JEPAs, also followed by the popular work of Assran et al. (2023). We argue that learning how to organize concepts for abstract objects such as graphs or networks directly in Euclidean space is suboptimal. In the following subsection, we propose a simple trick to bypass this problem using the encoding and prediction mechanisms in Graph-JEPA. A discussion in Section 4.3 will provide additional insights regarding this topic.

## 3.4 Latent Target Prediction

Learning hierarchically consistent concepts (Deco et al., 2021) is considered a crucial aspect of human learning, especially during infancy and young age (Rosenberg & Feigenson, 2013). Networks in the real world often conform to some concept of hierarchy (Moutsinas et al., 2021), and this assumption is frequently used when learning graph-level representations (Ying et al., 2018). Thus, we conjecture that Graph-JEPA should operate in a hyperbolic space, where learned embeddings implicitly organize hierarchical concepts (Nickel & Kiela, 2017; Zhao et al., 2023). This gives rise to another issue: commonly used hyperbolic (Poincaré) embeddings are known to have several tradeoffs between dimensionality and performance (Sala et al., 2018; Guo et al., 2022), which severely limits the expressive ability of the model. Given that graphs can describe very abstract concepts, high expressivity in terms of model parameters is preferred. In simple words, we would ideally like a high-dimensional latent code that has a concept of hyperbolicity built into it. To achieve this, we think of the target embedding as a high-dimensional representation of the hyperbolic angle, which allows us to describe each target patch through its position in the 2D unit hyperbola. Formally, given a target patch l, its embedding Z^y_l and positional encoding P_l, we express the latent target as:

$$\psi_{l}^{y}=\begin{pmatrix}\cosh(\alpha_{l}^{y})\\ \sinh(\alpha_{l}^{y})\end{pmatrix},\quad\alpha_{l}^{y}=\frac{1}{d}\sum_{n=1}^{d}Z_{l}^{y\,(n)},\tag{4}$$

where cosh(·) and sinh(·) are the hyperbolic cosine and sine functions. The predictor module is then tasked with directly locating the target in the unit hyperbola, given the context embedding and the target patch's positional encoding:

$$\hat{\psi}_{l}^{y}=W_{2}(\sigma(W_{1}(z^{x}+P_{l})+b_{1}))+b_{2},\tag{5}$$

where W_n and b_n represent the n-th weight matrix and bias vector (i.e., the n-th fully connected layer), σ is an elementwise non-linear activation function, and ψ̂^y_l ∈ R². This allows us to frame the learning procedure as a regression problem, and the whole network can be trained end-to-end (Fig. 2f).
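To make Eqs. 4–5 concrete, the following is a minimal PyTorch sketch (not the authors' code); the hidden width, the GELU activation, and the assumption that the positional encoding P_l has already been projected to the embedding dimension d are ours.

```python
import torch
import torch.nn as nn

def latent_target(z_y: torch.Tensor) -> torch.Tensor:
    """Map target embeddings (m, d) to their 2D unit-hyperbola coordinates (m, 2), as in Eq. 4."""
    alpha = z_y.mean(dim=-1)                                  # one hyperbolic angle per target
    return torch.stack([torch.cosh(alpha), torch.sinh(alpha)], dim=-1)

class Predictor(nn.Module):
    """Two-layer MLP that locates each target on the unit hyperbola, as in Eq. 5."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, 2))

    def forward(self, z_x: torch.Tensor, p_l: torch.Tensor) -> torch.Tensor:
        # z_x: (d,) context embedding; p_l: (m, d) target positional encodings
        return self.net(z_x.unsqueeze(0) + p_l)               # (m, 2) predicted coordinates
```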
In practice, we use the smooth L1 loss as the distance function, as it is less sensitive to outliers compared to the typical L2 loss (Girshick, 2015):

$$L(y,\hat{y})=\frac{1}{N}\sum_{n=1}^{N}s_{n},\quad s_{n}=\begin{cases}0.5(y_{n}-\hat{y}_{n})^{2}/\beta,&\text{if }|y_{n}-\hat{y}_{n}|<\beta\\ |y_{n}-\hat{y}_{n}|-0.5\beta,&\text{otherwise}\end{cases}\tag{6}$$

Thus, we are effectively measuring how far apart the context and target patches are in the unit hyperbola of the plane, while the targets are described through a high-dimensional latent code (Eq. 4). We explicitly show the differences between this choice and using the Euclidean or hyperbolic distances as energy functions (in the latent space) in Section 4.3. Our proposed pretraining objective forces the context encoder to understand the differences in the hyperbolic angle between the target patches, which can be thought of as establishing an implicit hierarchy between them.

Preventing representation collapse. JEPAs are based on a self-distillation procedure. Therefore, they are by definition susceptible to representation collapse (Assran et al., 2023). This is due to the nature of the learning process, where both the context and target representations have to be learned. We formalize this intuition with an example and argue why two well-known training tricks from the literature are needed to prevent representation collapse: i) the stop-gradient operation on the target encoder followed by a momentum update (using an Exponential Moving Average (EMA) of the context encoder weights) (Grill et al., 2020; Chen & He, 2021); ii) a simpler parametrization of the predictor compared to the context and target networks (in terms of parameters) (Chen et al., 2020; Baevski et al., 2022).

Let us simplify the problem through the following assumptions: i) the predictor network is linear; ii) there is a one-to-one correspondence between context and target patches (this also holds in practice due to Eq. 5); iii) the self-predictive task is a least-squares problem in a finite-dimensional vector space. Based on these assumptions, we can write the context features as X ∈ R^{n×d}, the target coordinates as Y ∈ R^{n×2}, and the weights of the linear model as W ∈ R^{d×2}. The optimal weights of the predictor are given by solving:

$$\operatorname*{arg\,min}_{W}\|XW-Y\|^{2}\tag{7}$$

where ∥·∥ indicates the Frobenius norm. The (multivariate) OLS estimator gives the solution to this problem by setting W to:

$$W=(X^{T}X)^{-1}X^{T}Y\tag{8}$$

Plugging Eq. 8 into Eq. 7 and factorizing Y, the least-squares solution leads to the error:

$$\left\|(X(X^{T}X)^{-1}X^{T}-I_{n})Y\right\|^{2}\tag{9}$$

Thus, the optimality of a linear predictor is defined by the orthogonal projection of Y onto the orthogonal complement of a subspace of Col(X). As is commonly understood, this translates to finding the linear combination of X that is closest, in terms of ∥·∥₂, to Y. Similarly to what was shown by Richemond et al. (2023), we argue that this behavior unveils a key intuition: the target encoder, which estimates Y, must not share weights or be optimized via the same optimizer as the context encoder. If that were the case, the easiest solution to Eq. 9 would be learning a representation that is orthogonal to itself, i.e., the 0 vector, leading to representation collapse.
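The following is a minimal sketch of these two tricks in PyTorch; the momentum value τ = 0.996 and the exact update schedule are assumptions, and we assume the context and target encoders share the same architecture so that their parameter lists align.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_net, context_net, tau: float = 0.996):
    """Momentum (EMA) update of the target encoder from the context encoder weights."""
    for p_t, p_c in zip(target_net.parameters(), context_net.parameters()):
        p_t.mul_(tau).add_((1.0 - tau) * p_c)

# Sketch of one training step; only the context encoder and predictor receive gradients.
# target_encoder = copy.deepcopy(context_encoder)   # initialization before training
# z_x = context_encoder(context_patch)
# with torch.no_grad():                             # stop-gradient on the target branch
#     psi_y = latent_target(target_encoder(target_patches))
# loss = F.smooth_l1_loss(predictor(z_x, pos_enc), psi_y)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
# ema_update(target_encoder, context_encoder)
```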
Using a well-parametrized EMA update is what allows us to bypass this problem. In practice, even with the slower dynamics induced by the EMA procedure, it is possible to immediately encounter a degenerate solution with a non-linear and highly expressive network. For instance, consider a scenario where the target subgraphs are straightforward and similar. In this case, if the predictor network is sufficiently powerful, it can predict the target representations even without a well-learned context representation. Since the target encoder weights are updated via the EMA procedure, the learned representations will tend to be uninformative. Therefore, implementing the predictor network as a simpler and less expressive network is crucial to achieving the desired training dynamics.

| Hyperparameter | PROTEINS | MUTAG | DD | REDDIT-B | REDDIT-M5 | IMDB-B | IMDB-M | ZINC |
|----------------------|------------|---------|-------|------------|-------------|----------|----------|--------|
| # Subgraphs | 32 | 32 | 32 | 128 | 128 | 32 | 32 | 32 |
| # GNN Layers | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 2 |
| # Encoder Blocks | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Embedding size | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512 |
| RWSE size | 20 | 15 | 30 | 40 | 40 | 15 | 15 | 20 |
| # context - # target | 1 - 2 | 1 - 3 | 1 - 4 | 1 - 4 | 1 - 4 | 1 - 4 | 1 - 4 | 1 - 4 |

Table 1: Values of Graph-JEPA specific hyperparameters for the experiments on the TUD datasets.

## 4 Experiments

The experimental section introduces the empirical evaluation of the Graph-JEPA model in terms of downstream performance on different graph datasets and tasks, along with additional studies on the latent space's structure and the encoders' parametrization. Furthermore, a series of ablation studies is presented in order to elucidate the design choices behind Graph-JEPA.

## 4.1 Experimental Setting

We use the TUD datasets (Morris et al., 2020) as commonly done for graph-level SSL (Suresh et al., 2021; Tan et al., 2023). We utilize seven different graph-classification datasets: PROTEINS, MUTAG, DD, REDDIT-BINARY, REDDIT-MULTI-5K, IMDB-BINARY, and IMDB-MULTI. We report the accuracy of ten-fold cross-validation for all classification experiments over five runs (with different seeds). It is worth noting that we retrain the Graph-JEPA model for each fold, without ever having access to the testing partition in either the pretraining or the fine-tuning stage. We use the ZINC dataset for graph regression and report the Mean Squared Error (MSE) over ten runs (with different seeds), given that the testing partition is already separated. To produce the unique graph-level representations, we feed all the subgraphs through the trained target encoder and then use mean pooling, obtaining a single feature vector z_G ∈ R^d that represents the whole graph. This high-dimensional vector is then used to fit a *linear model* with L2 regularization for the downstream task. Specifically, we employ Logistic Regression with L2 regularization on the classification datasets and Ridge Regression for the ZINC dataset. For the datasets that do not natively have node and edge features, we use a simple constant (0) initialization. The subgraph embedding GNN (Fig. 2c) consists of the GIN operator with support for edge features (Hu et al., 2019), often referred to as GINE. The neural network modules were trained using the Adam optimizer (Kingma & Ba, 2014) and implemented using PyTorch (Paszke et al., 2019)
and PyTorch Geometric (Fey & Lenssen, 2019), while the linear classifiers and cross-validation procedure were implemented through the Scikit-Learn library (Pedregosa et al., 2011). All experiments were performed on Nvidia RTX 3090 GPUs. Finally, Table 1 shows the JEPA-specific hyperparameters used for the following experiments.

## 4.2 Downstream Performance

For the experiments on downstream performance, we follow Suresh et al. (2021) and also report the results of a fully supervised Graph Isomorphism Network (GIN) (Xu et al., 2018), denoted F-GIN in Table 2. We compare Graph-JEPA against four contrastive methods, two generative methods, and one latent self-predictive method (Xie et al., 2022a) (which also regularizes through instance-level reconstruction). As can be seen in Table 2, Graph-JEPA achieves competitive results on all datasets, setting the state-of-the-art as a pretrained backbone on five different datasets and coming second on one (out of eight total). Overall, our proposed framework learns semantic embeddings that work well on different graphs, showing that Graph-JEPA can be utilized as a general pretraining method for graph-level SSL. Notably, Graph-JEPA works well for both classification and regression and performs better than a supervised GIN on all classification datasets. We also provide results with BGRL (Thakoor et al., 2021), a node-level latent self-predictive strategy. We train this model using the official code and hyperparameters and then mean-pool the node representations for the downstream task. The results are underwhelming compared to the models reporting graph-level performance, which is to be expected, considering that methods performing well on graph-level learning are designed specifically for that setting.

Table 2: Performance of different graph SSL techniques on various TUD benchmark datasets, ordered by pretraining type: contrastive, generative, and self-predictive. F-GIN is an end-to-end supervised GIN and serves as a reference for the performance values. The results of the competitors are taken as the best values from (Hassani & Khasahmadi, 2020; Suresh et al., 2021; Tan et al., 2023). "-" indicates missing values from the literature. The **best results** are reported in boldface, and the second best are underlined. For the sake of completeness, we also report the training and evaluation results of GraphMAE on the DD, REDDIT-M5, and ZINC datasets in *italics*, along with the results of a node-level self-predictive method (BGRL), which does not originally report results on graph-level tasks.
| Model | PROTEINS ↑ | MUTAG ↑ | DD ↑ | REDDIT-B ↑ REDDIT-M5 ↑ | IMDB-B ↑ | IMDB-M ↑ | ZINC ↓ | | |------------------------------------|--------------|--------------|--------------|--------------------------|--------------|--------------|--------------|---------------| | F-GIN | 72.39 ± 2.76 | 90.41 ± 4.61 | 74.87 ± 3.56 | 86.79 ± 2.04 | 53.28 ± 3.17 | 71.83 ± 1.93 | 48.46 ± 2.31 | 0.254 ± 0.005 | | InfoGraph (Sun et al., 2019) | 72.57 ± 0.65 | 87.71 ± 1.77 | 75.23 ± 0.39 | 78.79 ± 2.14 | 51.11 ± 0.55 | 71.11 ± 0.88 | 48.66 ± 0.67 | 0.890 ± 0.017 | | GraphCL (You et al., 2020) | 72.86 ± 1.01 | 88.29 ± 1.31 | 74.70 ± 0.70 | 82.63 ± 0.99 | 53.05 ± 0.40 | 70.80 ± 0.77 | 48.49 ± 0.63 | 0.627 ± 0.013 | | MVGRL (Hassani & Khasahmadi, 2020) | - | - | - | 84.5 ± 0.6 | - | 74.2 ± 0.7 | 51.2 ± 0.5 | - | | AD-GCL-FIX (Suresh et al., 2021) | 73.59 ± 0.65 | 89.25 ± 1.45 | 74.49 ± 0.52 | 85.52 ± 0.79 | 53.00 ± 0.82 | 71.57 ± 1.01 | 49.04 ± 0.53 | 0.578 ± 0.012 | | AD-GCL-OPT (Suresh et al., 2021) | 73.81 ± 0.46 | 89.70 ± 1.03 | 75.10 ± 0.39 | 85.52 ± 0.79 | 54.93 ± 0.43 | 72.33 ± 0.56 | 49.89 ± 0.66 | 0.544 ± 0.004 | | GraphMAE (Hou et al., 2022) | 75.30 ± 0.39 | 88.19 ± 1.26 | 74.27 ± 1.07 | 88.01 ± 0.19 | 46.06 ± 3.44 | 75.52 ± 0.66 | 51.63 ± 0.52 | 0.935 ± 0.034 | | S2GAE (Tan et al., 2023) | 76.37 ± 0.43 | 88.26 ± 0.76 | - | 87.83 ± 0.27 | - | 75.76 ± 0.62 | 51.79 ± 0.36 | - | | BGRL (Thakoor et al., 2021) | 70.99 ± 3.86 | 74.99 ± 8.83 | 71.52 ± 2.97 | 50 ± 0 | 20 ± 0.1 | 0.5 ± 0 | 0.33 ± 0 | 1.2 ± 0.011 | | LaGraph (Xie et al., 2022a) | 75.2 ± 0.4 | 90.2 ± 1.1 | 78.1 ± 0.4 | 90.4 ± 0.8 | 56.4 ± 0.4 | 73.7 ± 0.9 | - | - | | Graph-JEPA | 75.67 ± 3.78 | 91.25 ± 5.75 | 78.64 ± 2.35 | 91.99 ± 1.59 | 56.73 ± 1.96 | 73.68 ± 3.24 | 50.69 ± 2.91 | 0.434 ± 0.014 | | Model | Accuracy ↑ | |--------------------------------------------|---------------| | GCN (Kipf & Welling, 2016a) | 51.90 ± 1.96 | | GatedGCN (Bresson & Laurent, 2017) | 51.73 ± 1.65 | | GINE (Xu et al., 2018) | 50.69 ± 1.39 | | GraphTransformer (Dwivedi & Bresson, 2020) | 52.35 ± 2.32 | | Graph-MLP-Mixer (He et al., 2023) | 100.00 ± 0.00 | | Graph-JEPA | 98.77 ± 0.99 | Table 3: Classification accuracy on the synthetic EXP dataset (Abboud et al., 2020), which contains 600 pairs of non-isomorphic graphs that are indistinguishable by the 1-WL test. Note that the competitor models are all trained with *end-to-end supervision*. The **best result** is reported in boldface, and the second best is underlined. Performances for all competitor models are taken from (He et al., 2023). We further explore the performance of our model on the synthetic EXP dataset (Abboud et al., 2020), compared to end-to-end supervised models. This experiment aims to empirically verify if Graph-JEPA can learn highly expressive graph representations (in terms of the commonly used WL hierarchy (Morris et al., 2019)) without relying on supervision. The results in Table 3 show that our model is able to perform much better than commonly used GNNs. Given its local and global exchange of information, this result is to be expected. Most importantly, Graph-JEPA almost matches the flawless performance achieved by He et al. (2023), who train fully supervised. ## 4.3 Exploring The Graph-Jepa Latent Space As discussed in Section 3.4, the choice of energy function has a big impact on the learned representations. Given the latent prediction task of Graph-JEPA, we expect the latent representations to display hyperbolicity. 
The predictor network is linearly approximating the behavior of the unit hyperbola such that it best matches the generated target coordinates (Eq. 6). Thus, the network is actually trying to estimate a space that can be considered a particular section of the hyperboloid model (Reynolds, 1993), where hyperbolas appear as geodesics. We are, therefore, evaluating our energy in a restricted part of hyperbolic space. As mentioned before, we find this task to offer great flexibility, as it is straightforward to implement and is computationally efficient compared to the hyperbolic distance typically used to learn hyperbolic embeddings in the Poincaré ball model (Nickel & Kiela, 2017). Table 4 provides empirical evidence for our conjectures regarding the suboptimality of Euclidean or Poincaré embeddings on 4 out of the 8 datasets initially presented in Table 2, where we make sure to choose different graph types for a fair comparison. The results reveal that learning the distance between patches in the 2D unit hyperbola provides a simple way to get the advantages of both embedding types. Hyperbolic embeddings must be learned in lower dimensions due to stability issues (Yu & De Sa, 2021), while Euclidean ones do not properly reflect the dependencies between subgraphs and the hierarchical nature of graph-level concepts. Our results suggest that the hyperbolic (Poincaré) distance is generally a better choice than the Euclidean distance in lower dimensions, but it is computationally unstable and expensive in high dimensions. The proposed approach provides the best overall results; a minimal sketch of the three energy functions compared here is given after Table 5. We provide a qualitative example of how the embedding space is altered by our latent prediction objective in Fig. 3.

Figure 3: 3D t-SNE (Van der Maaten & Hinton, 2008) of the latent representations used to train the linear classifier on the DD dataset: (a.) Euclidean objective, (b.) ours. The change in the curvature of the embedding using the Graph-JEPA objective (b.) is noticeable. Best viewed in color.

Table 4: Comparison of Graph-JEPA performance for different distance functions. The optimization for Poincaré embeddings in higher dimensions is problematic, as shown by the NaN loss on the IMDB-B dataset. LD stands for Lower Dimension, where we use a smaller embedding size.

| Distance function | Ours | Euclidean | Hyperbolic | Euclidean (LD) | Hyperbolic (LD) |
|---------------------|--------------|--------------|---------------|------------------|-------------------|
| MUTAG | 91.25 ± 5.75 | 87.04 ± 6.01 | 89.43 ± 5.67 | 86.63 ± 5.9 | 86.32 ± 5.52 |
| REDDIT-M | 56.73 ± 1.96 | 56.55 ± 1.94 | 56.19 ± 1.95 | 54.84 ± 1.6 | 55.07 ± 1.83 |
| IMDB-B | 73.68 ± 3.24 | 73.76 ± 3.46 | NaN | 72.5 ± 3.97 | 73.4 ± 4.07 |
| ZINC | 0.434 ± 0.01 | 0.471 ± 0.01 | 0.605 ± 0.01 | 0.952 ± 0.05 | 0.912 ± 0.04 |

| Dataset | Model | Num. parameters | Training time |
|------------|------------|-----------------|--------------------------|
| IMDB | MVGRL | 3674118 | ∼ 7 min |
| IMDB | GraphMAE | 2257193 | ∼ 1.5 min (1 min 36 sec) |
| IMDB | Graph-JEPA | 19219460 | < 1 min (56 sec) |
| REDDIT-M5 | MVGRL | 4198406 | OOM |
| REDDIT-M5 | GraphMAE | 2784555 | ∼ 46 min |
| REDDIT-M5 | Graph-JEPA | 19245060 | ∼ 18 min |

Table 5: Total training time and model parameters of MVGRL, GraphMAE, and Graph-JEPA for pretraining (single run) based on the optimal configuration for downstream performance. OOM stands for Out-Of-Memory.
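As referenced above, a minimal sketch of the three candidate energy functions follows; it assumes the inputs are already the unit-hyperbola coordinates of Eq. 4 for ours and raw latent codes for the baselines, and is not the exact implementation.

```python
import torch
import torch.nn.functional as F

def hyperbola_energy(pred_xy: torch.Tensor, target_xy: torch.Tensor) -> torch.Tensor:
    """Ours: smooth L1 between predicted and target unit-hyperbola coordinates (Eq. 6)."""
    return F.smooth_l1_loss(pred_xy, target_xy)

def euclidean_energy(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Baseline: squared Euclidean distance between latent codes."""
    return (pred - target).pow(2).sum(-1).mean()

def poincare_energy(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Baseline: Poincaré distance; numerically delicate as ||u||, ||v|| approach 1."""
    sq = (u - v).pow(2).sum(-1)
    nu = u.pow(2).sum(-1).clamp(max=1.0 - eps)
    nv = v.pow(2).sum(-1).clamp(max=1.0 - eps)
    return torch.acosh(1.0 + 2.0 * sq / ((1.0 - nu) * (1.0 - nv))).mean()
```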
Table 6: Performance when parametrizing the context and target encoders through MLPs vs. using the proposed Transformer encoders.

| Dataset | Transformer Encoders | MLP Encoders |
|-----------|------------------------|----------------|
| MUTAG | 91.25 ± 5.75 | 90.5 ± 5.99 |
| REDDIT-M5 | 56.73 ± 1.96 | 56.21 ± 2.29 |
| IMDB-B | 73.68 ± 3.24 | 74.26 ± 3.56 |
| ZINC | 0.434 ± 0.01 | 0.472 ± 0.01 |

## 4.4 Additional Insights And Ablation Studies

Model efficiency. In an attempt to characterize the efficiency of our proposed model, we perform a simple experimental check. In Table 5, we compare the total training time needed for different graph SSL strategies to provide a representation that performs optimally on the downstream task. We show results on the datasets with the largest graphs from Table 2: IMDB and REDDIT-M5. While the runtime is hardware-dependent, all experiments are performed on the same machine. Graph-JEPA displays superior efficiency and promising scaling behavior. The presented runtime naturally depends on the self-supervised scheme used in each model, so we do not regard it as a definitive descriptor, but rather as an indicator of the potential of fully latent self-predictive models.

MLP parametrization. Table 6 contains the results of parametrizing the whole architecture, other than the initial GNN encoder, through MLPs. This translates to not using the attention mechanism at all. For this experiment, and also the following ablations, we consider 4 out of the 8 datasets initially presented in Table 2, making sure to choose different graph domains for a fair comparison. Graph-JEPA still manages to perform well, showing the flexibility of our architecture, even though using the complete Transformer encoders leads to better overall performance and less variance in the predictions.

Positional embedding. Following He et al. (2023), it is possible to use the RWSE of the patches as conditioning information. Formally, let B ∈ {0, 1}^{p×N} be the patch assignment matrix, such that B_ij = 1 if v_j ∈ p_i. We can calculate a coarse patch adjacency matrix A′ = BB^T ∈ R^{p×p}, where each entry A′_ij contains the node overlap between p_i and p_j. The positional embedding can then be calculated for each patch by simply using the RWSE described in Eq. 1 on A′ (a small sketch of this construction follows below). We test Graph-JEPA with these relative positional embeddings and find that they still provide good performance but consistently fall behind the node-level (global) RWSE that we employ in our formulation (Table 7b). An issue with these relative patch RWSEs is that the number of shared neighbors can obscure the local peculiarities of each patch, rendering the context given to the predictor more ambiguous.
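A small sketch of this patch-level alternative, reusing the rwse function from the sketch in Section 3.2 (the dense matrix product is an illustrative choice):

```python
import numpy as np

def patch_rwse(assign: np.ndarray, k: int) -> np.ndarray:
    """Relative patch positional embeddings: RWSE (Eq. 1) on the coarse graph A' = B B^T.

    assign: (p, num_nodes) binary patch-assignment matrix B."""
    coarse = assign @ assign.T              # A'[i, j]: node overlap between patches i and j
    return rwse(coarse, k)                  # reuses the rwse sketch from Section 3.2
```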
Random subgraphs. A natural question that arises in our framework is how to design the spatial partitioning procedure. Using a structured approach like METIS (Karypis & Kumar, 1998) is intuitive and leads to favorable results. Another option would be to extract random, non-empty subgraphs as context and targets. As seen in Table 7a, the random patches also provide strong performance, showing that the proposed JEPA architecture is not reliant on the initial input, as is the case for many methods that rely on data augmentation for view generation (Lee et al., 2022). Even though our results show that using a structured way to extract the patches might not be necessary, random sampling can be problematic for larger graphs. Thus, we advocate extracting subgraphs with METIS, as it is a safer option in terms of generalizability across different graphs and the inductive bias it provides.

## 5 Conclusion

In this work, we introduce a new Joint-Embedding Predictive Architecture (JEPA) (LeCun, 2022) for graph-level Self-Supervised Learning (SSL). An appropriate design of the model, both in terms of data preparation and pretraining objective, reveals that it is possible for a neural network to self-organize the semantic knowledge embedded in a graph, demonstrating competitive performance on different graph data and tasks. Future research directions include extending the proposed method to node- and edge-level learning, theoretically exploring the expressiveness of Graph-JEPA, and gaining more insights into the optimal geometry of the embedding space for graph SSL.

(a)

| Dataset | METIS | Random |
|-----------|--------------|--------------|
| MUTAG | 91.25 ± 5.75 | 91.58 ± 5.82 |
| REDDIT-M5 | 56.73 ± 1.96 | 56.08 ± 1.69 |
| IMDB-B | 73.68 ± 3.24 | 73.52 ± 3.08 |
| ZINC | 0.434 ± 0.01 | 0.43 ± 0.01 |

(b)

| Dataset | Node RWSE | Patch RWSE |
|-----------|--------------|---------------|
| MUTAG | 91.25 ± 5.75 | 91.23 ± 5.86 |
| REDDIT-M5 | 56.73 ± 1.96 | 56.01 ± 2.1 |
| IMDB-B | 73.68 ± 3.24 | 73.58 ± 4.47 |
| ZINC | 0.434 ± 0.01 | 0.505 ± 0.005 |

Table 7: (a) Performance when extracting subgraphs with METIS vs. using random subgraphs. (b) Performance when using node-level vs. patch-level RWSEs.

## References

Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. *arXiv preprint arXiv:2010.01179*, 2020.

Sudarshan Adiga, Mohamed Adel Attia, Wei-Ting Chang, and Ravi Tandon. On the tradeoff between mode collapse and sample quality in generative adversarial networks. In *2018 IEEE global conference on signal and information processing (GlobalSIP)*, pp. 1184–1188. IEEE, 2018.

Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15619–15629, 2023.

Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In *International Conference on Machine Learning*, pp. 1298–1312. PMLR, 2022.

Adrien Bardes, Quentin Garrido, Jean Ponce, Xinlei Chen, Michael Rabbat, Yann LeCun, Mido Assran, and Nicolas Ballas. V-jepa: Latent video prediction for visual representation learning. 2023a.

Adrien Bardes, Jean Ponce, and Yann LeCun. Mc-jepa: A joint-embedding predictive architecture for self-supervised learning of motion and content features. *arXiv preprint arXiv:2307.12698*, 2023b.

Xavier Bresson and Thomas Laurent. Residual gated graph convnets. *arXiv preprint arXiv:1711.07553*, 2017.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. *Advances in neural information processing systems*, 6, 1993.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
Advances in neural information processing systems, 33:1877–1901, 2020. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *Proceedings of the* IEEE/CVF conference on computer vision and pattern recognition, pp. 15750–15758, 2021. G. Deco, D. Vidaurre, and M. Kringelbach. Revisiting the global workspace orchestrating the hierarchical organization of the human brain. *Nature Human Behaviour*, 5:497 - 511, 2021. doi: 10.1038/s41562-020-01003-6. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020. Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. *arXiv preprint arXiv:2110.07875*, 2021. Zhengcong Fei, Mingyuan Fan, and Junshi Huang. A-jepa: Joint-embedding predictive architecture can listen. *arXiv preprint arXiv:2311.15830*, 2023. Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International conference on machine learning*, pp. 1263–1272. PMLR, 2017. Ross Girshick. Fast r-cnn. In *Proceedings of the IEEE international conference on computer vision*, pp. 1440–1448, 2015. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020. Yunhui Guo, Haoran Guo, and Stella X Yu. Co-sne: Dimensionality reduction and visualization for hyperbolic data. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 21– 30, 2022. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings of the thirteenth international conference on artificial intelligence* and statistics, pp. 297–304. JMLR Workshop and Conference Proceedings, 2010. Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In International conference on machine learning, pp. 4116–4126. PMLR, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 16000–16009, 2022. Xiaoxin He, Bryan Hooi, Thomas Laurent, Adam Perold, Yann LeCun, and Xavier Bresson. A generalization of vit/mlp-mixer to graphs. In *International Conference on Machine Learning*, pp. 12724–12745. PMLR, 2023. Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. 
*Neural computation*, 14(8):1771–1800, 2002. Geoffrey E Hinton and Richard Zemel. Autoencoders, minimum description length and helmholtz free energy. Advances in neural information processing systems, 6, 1993. Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. Graphmae: Self-supervised masked graph autoencoders. In *Proceedings of the 28th ACM SIGKDD Conference on* Knowledge Discovery and Data Mining, pp. 594–604, 2022. Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In *International Conference on Learning Representations*, 2019. Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. Gpt-gnn: Generative pretraining of graph neural networks. In *Proceedings of the 26th ACM SIGKDD International Conference on* Knowledge Discovery & Data Mining, pp. 1857–1867, 2020. Wei Jin, Tyler Derr, Haochen Liu, Yiqi Wang, Suhang Wang, Zitao Liu, and Jiliang Tang. Self-supervised learning on graphs: Deep insights and new direction. *arXiv preprint arXiv:2006.10141*, 2020. Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, and Jiliang Tang. Automated self-supervised learning for graphs. In *International Conference on Learning Representations*, 2021. George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. *SIAM Journal on scientific Computing*, 20(1):359–392, 1998. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016a. Thomas N Kipf and Max Welling. Variational graph auto-encoders. *arXiv preprint arXiv:1611.07308*, 2016b. Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. *Open Review*, 62, 2022. Namkyeong Lee, Junseok Lee, and Chanyoung Park. Augmentation-free self-supervised learning on graphs. In *Proceedings of the AAAI conference on artificial intelligence*, volume 36, pp. 7372–7380, 2022. Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. *IEEE Transactions on Knowledge and Data Engineering*, 35(1): 857–876, 2023. doi: 10.1109/TKDE.2021.3090866. Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C Cresswell, and Anthony L Caterini. Diagnosing and fixing manifold overfitting in deep generative models. *Transactions on Machine Learning Research*, 2022. Yao Ma and Jiliang Tang. *Deep Learning on Graphs*. Cambridge University Press, 2021. doi: 10.1017/ 9781108924184. Qinxue Meng, Daniel Catchpoole, David Skillicom, and Paul J. Kennedy. Relational autoencoder for feature extraction. In *2017 International Joint Conference on Neural Networks (IJCNN)*, pp. 364–371, 2017. doi: 10.1109/IJCNN.2017.7965877. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In *Proceedings* of the AAAI conference on artificial intelligence, volume 33, pp. 4602–4609, 2019. Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. 
In ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020. Giannis Moutsinas, Choudhry Shuaib, Weisi Guo, and Stephen Jarvis. Graph hierarchy: a novel framework to analyse hierarchical structures in complex networks. *Scientific Reports*, 11(1):13943, 2021. Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. *Advances in neural information processing systems*, 30, 2017. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in neural information processing systems*, 32, 2019. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. William F. Reynolds. Hyperbolic geometry on a hyperboloid. *The American Mathematical Monthly*, 100(5): 442–455, 1993. ISSN 00029890, 19300972. Pierre Harvey Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, and Felix Hill. The edge of orthogonality: A simple view of what makes byol tick. In *International Conference on Machine Learning*, pp. 29063–29081. PMLR, 2023. R. Rosenberg and L. Feigenson. Infants hierarchically organize memory representations. *Developmental* science, 16 4:610–21, 2013. doi: 10.1111/desc.12055. Frederic Sala, Chris De Sa, Albert Gu, and Christopher Re. Representation tradeoffs for hyperbolic embeddings. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pp. 4460–4469. PMLR, 10–15 Jul 2018. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE transactions on neural networks*, 20(1):61–80, 2008. Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations, 2019. Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. *Advances in Neural Information Processing Systems*, 34:15920–15933, 2021. Qiaoyu Tan, Ninghao Liu, Xiao Huang, Soo-Hyun Choi, Li Li, Rui Chen, and Xia Hu. S2gae: Self-supervised graph autoencoders are generalizable learners with graph masking. In *Proceedings of the Sixteenth ACM* International Conference on Web Search and Data Mining, pp. 787–795, 2023. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. In International Conference on Learning Representations, 2021. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? *Advances in neural information processing systems*, 33:6827–6839, 2020. Gerrit van den Burg and Chris Williams. On memorization in probabilistic deep generative models. *Advances* in Neural Information Processing Systems, 34:27916–27928, 2021. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
*Journal of machine learning* research, 9(11), 2008. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. Petar Veličković. Everything is connected: Graph neural networks. *Current Opinion in Structural Biology*, 79:102538, 2023. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. *Journal of machine learning research*, 11(12), 2010. Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In *International conference on machine learning*, pp. 6861–6871. PMLR, 2019. Yaochen Xie, Zhao Xu, and Shuiwang Ji. Self-supervised representation learning via latent graph prediction. In *International Conference on Machine Learning*, pp. 24460–24477. PMLR, 2022a. Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. Self-supervised learning of graph neural networks: A unified review. *IEEE transactions on pattern analysis and machine intelligence*, 45 (2):2412–2429, 2022b. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pp. 10524–10533. PMLR, 2020. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2018. Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. *Advances in neural information processing* systems, 31, 2018. Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. *Advances in neural information processing systems*, 33:5812–5823, 2020. Tao Yu and Christopher M De Sa. Representing hyperbolic space accurately using multi-component floats. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in* Neural Information Processing Systems, volume 34, pp. 15570–15581. Curran Associates, Inc., 2021. Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, and Philip S Yu. From canonical correlation analysis to self-supervised graph neural networks. *Advances in Neural Information Processing Systems*, 34:76–89, 2021. Wei Zhao, Federico Lopez, Maxwell J Riestenberg, Michael Strube, Diaaeldin Taha, and Steve Trettel. Modeling graphs beyond hyperbolic: Graph neural networks in symmetric positive definite matrices. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 122–139. Springer, 2023.
# A Latent Diffusion Model For Protein Structure Generation

Anonymous authors

Paper under double-blind review

## Abstract

Proteins are complex biomolecules that perform a variety of crucial functions within living organisms. Designing and generating novel proteins can pave the way for many future synthetic biology applications, including drug discovery. However, it remains a challenging computational task due to the large modeling space of protein structures. In this study, we propose a latent diffusion model that can reduce the complexity of protein modeling while flexibly capturing the distribution of natural protein structures in a condensed latent space. Specifically, we propose an equivariant protein autoencoder that embeds proteins into a latent space and then use an equivariant diffusion model to learn the distribution of the latent protein representations. Experimental results demonstrate that our method can effectively generate novel protein backbone structures with high designability and efficiency.

## 1 Introduction

The discovery of novel proteins (Anand & Huang, 2018; Eguchi et al., 2022; Anand et al., 2019; Sabban & Markovsky, 2020; Luo et al., 2022; Shi et al., 2022) is crucial in bio-medicine (Liu et al., 2021b; 2022a;b; Wang et al., 2022a;b; Liu et al., 2021a) and materials (McMillan et al., 2019; Yan et al., 2022). Recently, instead of generating novel protein sequences (Wu et al., 2021; Anishchenko et al., 2021; Ferruz et al., 2022; Repecka et al., 2021; Hawkins-Hooker et al., 2021; Madani et al., 2020; Nijkamp et al., 2022; Karimi et al., 2020) and then predicting their corresponding structures, Trippe et al. (2022) and Wu et al. (2022a) propose to directly generate protein structures using diffusion models, due to the impressive modeling power and generation quality of diffusion models (Ho et al., 2020; Song et al., 2020; Xu et al., 2021; Jing et al., 2022; Rombach et al., 2022) for images and small molecules. However, generating 3D protein structures is a more challenging task because of their complex geometric structures and vast exploration space. Additionally, as the modeling space increases, the time and computational resources required to train and sample from diffusion models also increase significantly.

There have been attempts to reduce the modeling space of diffusion models in the image and small-molecule domains. Stable Diffusion (Rombach et al., 2022) combines a pretrained image autoencoder and a latent diffusion model to reduce the modeling space for large images. However, there are currently no robust and powerful 3D graph autoencoders and latent diffusion models for 3D protein structures. Torsional Diffusion (Jing et al., 2022) only focuses on torsional angles and employs RDKit (Landrum et al.) predictions for bond lengths and bond angles, as the distributions of bond angles and lengths are highly confined in small molecules. However, this assumption does not hold for protein structures.

In this paper, we reduce the diffusion modeling space of complex 3D protein structures by integrating a 3D graph autoencoder and a latent 3D diffusion model. To achieve this, the following challenges are addressed: (1) ensuring rotation and reflection equivariance in the autoencoder design, (2) accurately reconstructing intricate connection information in 3D graphs during decoding, and (3) developing a specialized latent diffusion process for 3D protein latent representations, including position and node latent representations.
In the following sections, we first recap the background and related works for protein backbone structure generation and diffusion models in Sec. 2, and then show in detail how we address the above challenges in Sec. 3. The efficiency and the ability of our proposed method to generate novel protein backbone structures are demonstrated in Sec. 4.

## 2 Background And Related Work

## 2.1 Protein Backbone Structure Generation

Protein backbone generation aims to generate novel protein backbone structures by learning from real data distributions. To this end, a mapping between known distributions, such as a Gaussian, and the real data distribution, which is high-dimensional and sparse, needs to be constructed. Since protein global geometric structures are mainly determined by backbones, the generation of protein structures can be simplified to the generation of backbones consisting of a sequence of amino acids and their corresponding positions. Following ProtDiff (Trippe et al., 2022), we use the positions of alpha carbons to represent amino acid positions. The protein backbone structure is then represented by

$$\mathcal{S}=\{(\mathbf{x}_{i},a_{i})\}_{i=1}^{n},\tag{1}$$

where x_i ∈ R^3 denotes the 3D position of the alpha carbon in the i-th amino acid, and a_i ∈ {k | 1 ≤ k ≤ 20, k ∈ Z} denotes the corresponding amino acid type. Instead of modeling amino acid types and alpha carbon positions together, previous studies (Trippe et al., 2022) have shown that it is better to decompose the whole generation process into two stages as p(x, a) = p(a|x)p(x), where x = [x_1, x_2, · · · , x_n] and a = [a_1, a_2, · · · , a_n]^T. Specifically, the positions of alpha carbons are first generated, and the corresponding amino acid types are then predicted using pretrained inverse folding models such as ProteinMPNN (Dauparas et al., 2022).

## 2.2 Denoising Diffusion Probabilistic Models

As a powerful class of generative models (Luo et al., 2021; Liu et al., 2021c; Luo & Ji, 2022), denoising diffusion probabilistic models (DDPM) (Ho et al., 2020) solve the Bayesian inverse problem of deriving the underlying data distribution (posterior) p_data(z) by establishing a bijective mapping between given prior distributions and p_data(z). We review the background of DDPM here following the adopted conventions of ScoreSDE (Song et al., 2020).

To enable faithful generation based on p_data(z) by sampling simpler prior distributions, a discrete Markov chain is employed to gradually diffuse inputs as a map from given training data into random noise, for example, following multivariate normal (Gaussian) distributions. For every training sample z_0 ∼ p_data(z), DDPMs consider a sequence of variance values 0 < β_1, β_2, . . . , β_N < 1 and construct a discrete Markov chain {z_0, z_1, . . . , z_N}, where p(z_i | z_{i−1}) = N(z_i; √(1 − β_i) z_{i−1}, β_i I). Based on this, we obtain p(z_i | z_0) = N(z_i; √(α_i) z_0, (1 − α_i) I), where α_i = ∏_{t=1}^{i} (1 − β_t). Hence, a sequence of noise scales can be predefined such that α_N → 0 and z_N is approximately distributed according to N(0, I). For the reverse mapping from N(0, I) to p_data(z), a reverse Markov chain is parameterized as p_θ(z_{i−1} | z_i) = N(z_{i−1}; μ_θ(z_i, i), β_i I), where μ_θ(z_i, i) = (1/√(1 − β_i)) (z_i − (β_i/√(1 − α_i)) s_θ(z_i, i)).
The reverse diffusion model s_θ is trained with a re-weighted evidence lower bound (ELBO) as below

$$\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\mathbb{E}_{t,\mathbf{z}_{0},\boldsymbol{\sigma}}[\|\boldsymbol{\sigma}-\mathbf{s}_{\boldsymbol{\theta}}(\sqrt{\alpha_{t}}\mathbf{z}_{0}+\sqrt{1-\alpha_{t}}\boldsymbol{\sigma},t)\|^{2}],\tag{2}$$

where σ ∼ N(0, I). After s_θ is trained, the reverse sampling process is conducted by first sampling z_N ∼ N(0, I) and then updating from time N to time 0 by the estimated reverse Markov chain

$$\mathbf{z}_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}\Big(\mathbf{z}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{z}_{t},t)\Big)+\sqrt{\beta_{t}}\boldsymbol{\sigma}.\tag{3}$$
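As a reference for Eqs. 2–3, a generic DDPM training loss and sampling loop in PyTorch is sketched below; this is not the latent diffusion implementation of this paper, and s_theta, betas, and the cumulative alphas are assumed inputs, with index 0 unused.

```python
import torch

def ddpm_loss(s_theta, z0: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Re-weighted ELBO of Eq. 2; alphas[i] is the cumulative product alpha_i."""
    t = torch.randint(1, len(alphas), (z0.size(0),), device=z0.device)
    a = alphas[t].view(-1, *([1] * (z0.dim() - 1)))
    noise = torch.randn_like(z0)
    z_t = a.sqrt() * z0 + (1.0 - a).sqrt() * noise
    return ((noise - s_theta(z_t, t)) ** 2).mean()

@torch.no_grad()
def ddpm_sample(s_theta, shape, betas: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Reverse sampling of Eq. 3: start from z_N ~ N(0, I) and denoise down to z_0."""
    z = torch.randn(shape)
    for t in range(len(betas) - 1, 0, -1):
        tt = torch.full((shape[0],), t)
        mean = (z - betas[t] / (1.0 - alphas[t]).sqrt() * s_theta(z, tt)) / (1.0 - betas[t]).sqrt()
        z = mean + betas[t].sqrt() * torch.randn_like(z) if t > 1 else mean
    return z
```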
## 2.3 Related Work

Diffusion Models for Protein Structure Generation. Recent research (Anand & Achim, 2022; Wu et al., 2022a; Trippe et al., 2022; Lee & Kim, 2022; Watson et al., 2022; Ingraham et al., 2022) has been exploring the use of diffusion models to generate novel protein structures, building on the successes of diffusion models in other areas such as images (Ho et al., 2020; Song et al., 2020) and small molecules (Xu et al., 2021; Jing et al., 2022; Hoogeboom et al., 2022). Among them, ProtDiff (Trippe et al., 2022) focuses on generating protein backbone structures by determining the positions of alpha carbons, while FoldingDiff (Wu et al., 2022a) represents protein backbone structures using bond and torsion angles and applies a sequence diffusion model to generate new backbone structures. Anand & Achim (2022) attempt to generate the entire protein structure by using three separate diffusion models to generate alpha carbon positions, amino acid types, and side chain rotation angles sequentially, but the joint modeling performance is relatively low. Additionally, Lee & Kim (2022) propose to diffuse 2D pairwise distances and angle matrices for amino acid residues, but further optimization using Rosetta minimization (Yang et al., 2020) is needed. It is notable that, while we were developing our method, two recent works, RFdiffusion (Watson et al., 2022) and Chroma (Ingraham et al., 2022), have been developed that enable generating long proteins with very high quality. RFdiffusion takes advantage of the powerful protein structure prediction model RoseTTAFold (Baek et al., 2021) to achieve remarkable results on many generation tasks: it pretrains RoseTTAFold on the protein structure prediction task and then finetunes it on generative tasks, and it demonstrates the effectiveness of generating proteins only when using pretrained weights. Chroma uses a correlated diffusion process to transform protein structures into random collapsed polymers and encodes the chain and radius-of-gyration constraints through a designed covariance model. In this way, Chroma can model the target distribution more efficiently by preserving some basic structures in proteins. Despite the success of protein backbone structure generation (Anand & Achim, 2022; Wu et al., 2022a; Trippe et al., 2022; Lee & Kim, 2022; Watson et al., 2022; Ingraham et al., 2022), the modeling space of diffusion models is vast and increases exponentially with the number of amino acids considered.

Decreasing Modeling Space for Protein Structure. The modeling space for protein structure generation has been reduced in several ways. ProtDiff (Trippe et al., 2022) only considers the positions of alpha carbons, while FoldingDiff (Wu et al., 2022a) represents protein backbone structures using bond and torsion angles and omits bond lengths to decrease the modeling space. Torsional Diffusion (Jing et al., 2022) uses RDKit-generated bond lengths and angles and only diffuses the torsional angles for the conformer generation of small molecules, but this is not applicable to protein structures. Recently, the impressive generative capability of Stable Diffusion (Rombach et al., 2022) in the image domain has attracted significant attention. By integrating a pre-trained image autoencoder with latent diffusion models, Stable Diffusion reduces the modeling space of large images and improves the generative power of image diffusion models. However, 3D geometric graphs for protein structures are different from images, and even though there are some equivariant networks for modeling interatomic potentials (Batzner et al., 2022) or predicting protein binding sites (Zhang et al., 2023), no robust 3D equivariant protein autoencoders and 3D latent diffusion models for protein structures have been proposed yet.

## 3 Method

In this section, we introduce our LatentDiff for generating protein backbone structures. We first illustrate the motivation for reducing the modeling space in Section 3.1. We then describe the design of our equivariant protein autoencoder in Section 3.2 and the latent space diffusion model in Section 3.3. We present the overall generation process in Section 3.4.

## 3.1 Motivation Of Reducing Modeling Space

In this section, we describe the motivation for designing a protein autoencoder to reduce the modeling space, in terms of modeling difficulty and parallel sampling efficiency, respectively.

An important motivation for reducing the modeling space through downsampling is that it makes it easier for the diffusion model to learn the desired distribution, as the modeling capacity of diffusion models is directly related to the size of their modeling space. By decreasing the modeling space, we aim to focus the generative model's attention on a more condensed space that is relevant to the original protein structure space. This reduction in complexity allows the model to more effectively learn and capture the underlying distribution of protein structures, resulting in improved generation quality. Moreover, a smaller modeling space helps address the challenge of high dimensionality and sparsity that is prevalent in protein structure data compared with small molecules. The vast space of possible protein conformations presents a considerable challenge for generative models to learn from limited training data.

Figure 1: Autoencoder network structure for proteins. Steps A, B, and C denote the encoder network. A. Augmented input protein sequence (white) with padding (red node), similar to image padding. B. (1) Edge building: create a fully connected graph (limited edges shown for simplicity) on the padded sequence; (2) Graph expansion: introduce new nodes (black) with specific connections according to the 1D-CNN convention. C. Compressed sequence (in latent space). Steps D, E, and F denote the decoder network. D. Padding the latent sequence for upsampling (similar to the padding operation in image transpose convolution). E. Edge building and graph expansion are similar to B. F. Reconstructed protein chain.
By narrowing down the modeling space, we provide the generative model with a more manageable and structured latent search space, enabling it to learn the essential features and patterns of protein structures more efficiently.

Another advantage of using a protein autoencoder to reduce the modeling space is that generation in latent space can improve memory efficiency, as the latent space is much smaller than the protein space. So for the same amount of GPU memory, more proteins can be sampled in latent space than in protein space. In practice, the screening procedure requires sampling a large number of proteins, so high-throughput sampling is desired. In this sense, parallel sampling in latent space could demonstrate significant efficiency improvement. More experiments on parallel sampling efficiency can be found in Section 4.7.

## 3.2 Equivariant Protein Autoencoder

We first introduce our equivariant autoencoder that helps reduce the protein design space. To design such an autoencoder, we identify some constraints and unique properties of protein backbones. First, Cα atoms in protein backbones have a fixed order due to the sequential nature of amino acid sequences. In general, downsampling or upsampling of sequence data can be achieved by 1D convolutional neural networks (CNNs). Also, since Cα atoms form a chain structure that can be preserved during upsampling, we do not need to reconstruct edge connections as a traditional graph autoencoder would. Second, despite the sequence representation of protein backbones, they also possess 3D geometries, which require equivariance during the downsampling and upsampling stages. A traditional CNN cannot meet this equivariance requirement, but graph neural networks (GNNs) are capable of dealing with this challenge. Based on these observations, we propose a novel equivariant protein autoencoder that considers both the amino acid sequence and the 3D graph information of protein backbones.

Overview. In the equivariant protein autoencoder, we first downsample the protein to a smaller size and then upsample the latent graph to reconstruct the original protein. There are four steps within each downsampling and upsampling layer, namely sequence padding, edge building, **graph expansion**, and equivariant message passing. The first three steps construct a graph that contains the input nodes and the initialized downsampling or upsampling nodes in the current layer. After the message passing, only the updated downsampling or upsampling nodes are kept as input to the next layer for further downsampling or upsampling. In the following, we describe the network input and the details of one downsampling layer. The upsampling layer shares the exact same steps except for sequence padding, which we also introduce in the sequence padding section.

Network Input. For a protein backbone structure S, we move the structure to the zero centroid so that the model does not need to capture translational degrees of freedom. We then augment the protein to a fixed length m to simplify the remaining operations in the network. So m is the maximum protein length that we can generate, and we choose m as 128 in this work. The augmented protein is shown as the white part in Figure 1.A. Specifically, we append m − n extra nodes to the end of the protein sequence. Each extra node is assigned a zero position and the same node type. We denote the augmented protein structure as Saug = (X, H), where X ∈ R^{3×m} and H ∈ R^{d×m} are node positions and node feature vectors, respectively. For X, the first n columns {x_i}_{i=1}^{n} denote the positions of all Cα atoms in the original protein, and the last m − n columns {x_i}_{i=n+1}^{m} denote the zero positions of the extra nodes. Each node feature vector h_i ∈ R^d is a d-dimensional type embedding indicating the corresponding node type. The preprocessed Saug is then the input to the first downsampling layer.
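To make the network input concrete, the following is a minimal NumPy sketch of the augmentation step. The embedding table here is a random, hypothetical stand-in for a learnable one, and the exact padding token and embedding details in our implementation may differ.

```python
import numpy as np

def augment_protein(ca_positions, residue_types, m=128, d=32, num_types=21):
    """Pad a protein of length n <= m to the fixed length m.

    ca_positions: (3, n) array of C-alpha coordinates.
    residue_types: length-n integer array of amino acid types.
    Returns X of shape (3, m) and H of shape (d, m).
    """
    n = ca_positions.shape[1]
    # Move the structure to the zero centroid (removes translation).
    ca_positions = ca_positions - ca_positions.mean(axis=1, keepdims=True)
    # Append m - n extra nodes with zero positions.
    X = np.concatenate([ca_positions, np.zeros((3, m - n))], axis=1)
    # Hypothetical type-embedding table; the extra "augmented" type is
    # assigned index num_types.
    embed = np.random.randn(num_types + 1, d)
    types = np.concatenate([residue_types, np.full(m - n, num_types)])
    H = embed[types].T  # (d, m)
    return X, H
```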
Sequence Padding. Similar to padding in image convolution, within each layer we first need to pad the augmented protein sequence Saug before downsampling or upsampling it, in order to obtain an output of the desired size. Assume that we have k nodes after sequence padding. Denote the padded sequence as Spad = (Xpad, Hpad), where Xpad ∈ R^{3×k} and Hpad ∈ R^{d×k}. As shown in Figure 1.A and D, red nodes are padding nodes. For downsampling, we pad the input sequence on the boundary by adding nodes with the same node position and node features as the boundary node. For example, in Figure 1.A, the red node is a duplicate of the last white node. For upsampling, we need both boundary padding and internal padding, similar to image padding in transpose convolution. The boundary padding is the same as that of downsampling. An internal padding node, such as the second red node in Figure 1.D, is initialized with the average of the positions and node features of its two nearest nodes on both sides.

Edge Building. After sequence padding, we perform an edge-building step to construct a graph from the padded protein sequence Spad. We could adopt fully connected graphs in order to capture interactions between all atom pairs. As shown in Figure 1.B, the edges in the constructed complete graph are in red; for simplicity, we only show the edge connections for one node. Note that the way edges are connected can be flexible in this step. Empirically, we find that constructing a complete graph only over the non-padded sequence during downsampling gives better reconstruction performance.

Graph Expansion. For the graph expansion step, we first initialize the downsampled nodes and connect them to the graph constructed in the edge-building step. We denote the expanded graph as Gexp = (Xexp, Hexp, Aexp), where Xexp = [Xpad, Xdown] ∈ R^{3×(k+m/2)}, Hexp = [Hpad, Hdown] ∈ R^{d×(k+m/2)}, and Aexp ∈ R^{(k+m/2)×(k+m/2)}. Specifically, we create a set of new nodes with positions Xdown ∈ R^{3×m/2} and node feature vectors Hdown ∈ R^{d×m/2}, which represent the downsampled sequence. The edge connections between the downsampled sequence and the augmented protein sequence are created in a 1D CNN convention: only nodes within a kernel-sized window are connected to a new node. For example, as shown in Figure 1.B, the green area denotes a kernel of size 3, and the first black node connects to the first three white nodes in the green area. Each new node is initialized as the average of its connected nodes, for both position and node feature.
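A minimal sketch of this graph expansion step is given below, assuming NumPy, a kernel size of 3, and a stride of 2 (so one layer roughly halves the sequence length); the kernel and stride in our implementation may be configured differently. Note that averaging positions commutes with rotations, so the initialization is equivariant.

```python
import numpy as np

def graph_expansion(X_pad, H_pad, kernel=3, stride=2):
    """Initialize downsampled nodes and their window connections.

    X_pad: (3, k) padded positions; H_pad: (d, k) padded features.
    Returns downsampled X, H and the adjacency between new and old nodes.
    """
    k = X_pad.shape[1]
    num_down = (k - kernel) // stride + 1
    X_down = np.zeros((3, num_down))
    H_down = np.zeros((H_pad.shape[0], num_down))
    A = np.zeros((num_down, k), dtype=bool)  # new-node-to-old-node edges
    for j in range(num_down):
        window = slice(j * stride, j * stride + kernel)
        A[j, window] = True
        # Each new node starts as the average of its connected nodes.
        X_down[:, j] = X_pad[:, window].mean(axis=1)
        H_down[:, j] = H_pad[:, window].mean(axis=1)
    return X_down, H_down, A
```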
Equivariant Message Passing. Next, we use an E(n) equivariant graph neural network (EGNN) (Satorras et al., 2021) to perform message passing on the expanded graph Gexp to update the downsampled nodes. Formally,

$$\hat{\mathbf{X}}_{\mathrm{exp}},\hat{\mathbf{H}}_{\mathrm{exp}}=\mathrm{EGNN}[\mathbf{X}_{\mathrm{exp}},\mathbf{H}_{\mathrm{exp}}],\tag{4}$$

where Xˆexp = [Xˆ, Xˆdown] and Hˆexp = [Hˆ, Hˆdown]. EGNN contains L equivariant convolution layers (EGCL). Each layer performs a position and feature update, such that x_i^{l+1}, h_i^{l+1} = EGCL[x_i^l, h_i^l], which is defined below:

$$\mathbf{m}_{ij}=\phi_{e}(\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l},d_{ij}^{2},a_{ij}),\tag{5}$$
$$\mathbf{h}_{i}^{l+1}=\phi_{h}(\mathbf{h}_{i}^{l},\sum_{j\neq i}\tilde{e}_{ij}\mathbf{m}_{ij}),\tag{6}$$
$$\mathbf{x}_{i}^{l+1}=\mathbf{x}_{i}^{l}+\sum_{j\neq i}\frac{\mathbf{x}_{i}^{l}-\mathbf{x}_{j}^{l}}{d_{ij}+1}\phi_{x}(\mathbf{m}_{ij}),\tag{7}$$

where d_{ij} = ∥x_i^l − x_j^l∥_2 denotes the Euclidean distance between nodes i and j, and a_{ij} = MLP([h_i^l, h_j^l]) is the edge feature for edge (i, j). Following Hoogeboom et al. (2022), we use d_{ij} + 1 to normalize the node distance to improve numerical stability, and we use an attention mechanism e˜_{ij} = ϕ_inf(m_{ij}) to infer a soft estimation of edges. After the message passing, we only keep the updated downsampled sequence (Xˆdown, Hˆdown) as the input of the next layer, as shown in Figure 1.C.

During the upsampling stage in the decoder, we perform the same four steps as introduced above. After upsampling to the original size of the input augmented protein, we obtain a reconstructed sequence with a position and a node embedding for each node. We then use an MLP to process the final node embedding and predict whether a reconstructed node belongs to the augmented node type, and another MLP to predict the amino acid type of each node.

Training Loss. The reconstruction loss of the autoencoder consists of six parts. First, we have a cross-entropy loss Laug on a binary classification task that determines whether each reconstructed node is an augmented node that does not belong to the original protein. Next, we use another cross-entropy loss Laa on the amino acid type prediction for each node. We then calculate the mean absolute error (MAE) of the position of each non-augmented node between the reconstructed protein and the ground truth, denoted Lpos. Apart from these three losses, to further account for the reconstruction of protein secondary structure, we also include an edge distance loss Ldist and a torsion angle loss Ltor calculated across the non-augmented nodes. Specifically, the edge distance is calculated as the Euclidean distance between every two consecutive Cα atoms, and the torsion angle is the angle between the two planes formed by four consecutive Cα atoms. To avoid latent node embeddings having arbitrarily high variance, we use a slight KL divergence loss Lreg to regularize the latent node embeddings, similar to a variational autoencoder. The total loss is the weighted sum of these individual losses. Formally,

$${\mathcal{L}}_{\mathrm{total}}={\mathcal{L}}_{\mathrm{aug}}+{\mathcal{L}}_{\mathrm{aa}}+{\mathcal{L}}_{\mathrm{pos}}+w_{1}*{\mathcal{L}}_{\mathrm{dist}}+w_{2}*{\mathcal{L}}_{\mathrm{tor}}+w_{3}*{\mathcal{L}}_{\mathrm{reg}},\tag{8}$$

where w1, w2, and w3 are relative weights that control the edge distance loss, torsion angle loss, and regularization loss, respectively. We want the network to optimize the absolute position of each node first and adjust the edge distances and torsion angles later, so we set w1 and w2 to 0.5. We also want the autoencoder to have good reconstruction performance, so we use only a very small regularization weight and set w3 to 1e-4.
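For concreteness, here is a minimal PyTorch sketch of one EGCL update from the message-passing step above, implementing Eqs. (5)-(7) on a fully connected graph. The MLP widths and the exact form of ϕ_inf are illustrative assumptions, not the settings used in our experiments.

```python
import torch
import torch.nn as nn

class EGCL(nn.Module):
    """One E(n) equivariant convolution layer (Satorras et al., 2021)."""
    def __init__(self, d, hidden=64):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * d + 2, hidden), nn.SiLU(),
                                   nn.Linear(hidden, hidden))
        self.phi_h = nn.Sequential(nn.Linear(d + hidden, hidden), nn.SiLU(),
                                   nn.Linear(hidden, d))
        self.phi_x = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(),
                                   nn.Linear(hidden, 1))
        self.phi_inf = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        self.edge_mlp = nn.Linear(2 * d, 1)  # a_ij = MLP([h_i, h_j])

    def forward(self, x, h):
        # x: (n, 3) positions, h: (n, d) node features.
        n = x.shape[0]
        diff = x[:, None, :] - x[None, :, :]          # (n, n, 3), x_i - x_j
        dist = diff.norm(dim=-1, keepdim=True)        # (n, n, 1), d_ij
        h_pair = torch.cat([h[:, None].expand(-1, n, -1),
                            h[None, :].expand(n, -1, -1)], dim=-1)
        a = self.edge_mlp(h_pair)                     # edge features a_ij
        m = self.phi_e(torch.cat([h_pair, dist ** 2, a], dim=-1))   # Eq. (5)
        e = self.phi_inf(m)                           # soft edge estimates
        mask = 1.0 - torch.eye(n, device=x.device).unsqueeze(-1)    # j != i
        h_new = self.phi_h(torch.cat([h, (e * m * mask).sum(1)], dim=-1))  # Eq. (6)
        x_new = x + (diff / (dist + 1) * self.phi_x(m) * mask).sum(1)      # Eq. (7)
        return x_new, h_new
```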
## 3.3 Latent Diffusion

Modeling the extracted latent representations (Xdown, Hdown) of protein backbone structures poses unique challenges, because they consist of 3D Euclidean positions, which differ from images and texts. In this section, we first explain the desired distribution E(3) invariance property and then provide a detailed description of the latent diffusion process that satisfies this property for the task of protein backbone generation. In this section, pdata, pmodel, and pθ denote the underlying data distribution, the output distribution of the whole model framework, and the latent distribution from the latent diffusion model, respectively.

Distribution E(3) Invariance. For a given protein backbone structure (X, H), we would like the learned data distribution to be E(3) invariant: pdata(X, H) = pdata(RX + b, H), as the geometric 3D structure remains unchanged under E(3) transformations, where R ∈ R^{3×3}, |R| = ±1, describes rotation and reflection transformations, and b ∈ R^3 is a translation in 3D space. Because our protein autoencoder is translation invariant, as described in Section 3.2, pmodel(X, H) = pmodel(X + b, H) holds naturally. Hence, the distribution rotation and reflection invariance pmodel(X, H) = pmodel(RX, H) needs to be satisfied for the latent diffusion process. In our approach, we propose to decompose the generation of protein backbone structures into two stages: (1) protein latent representation generation and (2) latent representation decoding. The model distribution can be defined as pmodel(X, H) = fdecoder(pθ(Xdown, Hdown)). Given that the decoding process is E(3) equivariant and deterministic, if the latent diffusion model sθ satisfies pθ(Xdown, Hdown) = pθ(RXdown + b, Hdown), then the distribution rotation and reflection invariance pmodel(X, H) = pmodel(RX + b, H) is satisfied.

![6_image_0.png](6_image_0.png)

Figure 2: Pipeline of LatentDiff. Encoder E and decoder D are pretrained via the equivariant protein autoencoder introduced in Section 3.2, and their parameters are fixed while training the latent diffusion model. Protein structures are encoded into latent representations via the encoder E, and the latent representations are gradually perturbed into Gaussian noise. During generation, we first sample Gaussian noise and use the learned denoising network to generate protein representations in the latent space. Then, the decoder D decodes the latent representations into protein structures.

The challenge of satisfying pθ(Xdown, Hdown) = pθ(RXdown + b, Hdown) can be addressed by (1) modeling a zero-mean geometric distribution for X, (2) using a high-dimensional Gaussian distribution as the prior distribution, and (3) employing a rotation and reflection equivariant reverse diffusion process (Hoogeboom et al., 2022; Xu et al., 2021). Specifically, the influence of translation transformations in 3D space is removed by subtracting the center position of X. Additionally, by using an isotropic high-dimensional Gaussian prior, we have pθ(XT, HT) = pθ(RXT, HT). The rotation and reflection equivariant reverse diffusion process further guarantees that pθ(Xt, Ht) = pθ(RXt, Ht) for any time t; the proof is provided in Appendix A.1.

Rotation and Reflection Invariant Latent Diffusion. Due to the aforementioned considerations, we propose rotation and reflection distribution invariant latent forward and reverse diffusion processes for the extracted protein backbone latent features (Xdown, Hdown). The implementation is based on Hoogeboom et al. (2022), with adjustments to support the latent diffusion process. The pipeline of our protein latent diffusion is shown in Figure 2.
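The rotation invariance of the isotropic Gaussian prior can be verified numerically. The following is a small self-contained check, not part of our training code, which confirms that the prior log-density is unchanged by an arbitrary rotation or reflection.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(X):
    # Log-density of an isotropic standard Gaussian over all coordinates.
    return -0.5 * np.sum(X ** 2) - 0.5 * X.size * np.log(2 * np.pi)

# Random orthogonal transform (rotation or reflection) via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
X_T = rng.normal(size=(3, 16))  # latent positions at time T

# p(X_T) = p(R X_T) for the isotropic prior.
assert np.isclose(gaussian_logpdf(X_T), gaussian_logpdf(Q @ X_T))
```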
During the forward process, the input latent representations (Xdown, Hdown) are slowly diffused into random noise by a sequence of noise scales 0 < β1, β2, . . . , βN < 1 as follows:

$$\begin{array}{l}{{\mathbf{X}_{i}=\sqrt{1-\beta_{i}}\mathbf{X}_{i-1}+\sqrt{\beta_{i}}\boldsymbol{\sigma}_{\mathbf{X}},}}\\ {{\mathbf{H}_{i}=\sqrt{1-\beta_{i}}\mathbf{H}_{i-1}+\sqrt{\beta_{i}}\boldsymbol{\sigma}_{\mathbf{H}},}}\end{array}$$

where σH ∼ N (0, I), and σX is first sampled from N (0, I) and then centered by subtracting its mean position, following Hoogeboom et al. (2022). The closed-form forward process can be written as

$$\mathbf{X}_{t}=\sqrt{\alpha_{t}}\mathbf{X}_{\mathrm{down}}+\sqrt{1-\alpha_{t}}\boldsymbol{\sigma}_{\mathbf{X}},\tag{9}$$
$$\mathbf{H}_{t}=\sqrt{\alpha_{t}}\mathbf{H}_{\mathrm{down}}+\sqrt{1-\alpha_{t}}\boldsymbol{\sigma}_{\mathbf{H}},\tag{10}$$

where αt = ∏_{i=0}^{t}(1 − βi). Since αt is a scalar value, we have pt(Xt, Ht) = p(Xdown, Hdown)p(σX, σH), where pt is the data distribution at time t and p(σX, σH) = p(σX)p(σH) denotes the corresponding multivariate Gaussian distributions. It can be seen that pt(Xt, Ht) = pt(RXt, Ht) because p(Xdown, Hdown)p(σX, σH) = p(RXdown, Hdown)p(σX, σH). Hence, the forward diffusion process satisfies rotation and reflection distribution invariance.

For the reverse diffusion process, a reverse Markov chain is formed as below:

$$\begin{array}{l}{{(\mathbf{X}_{t-1},\mathbf{H}_{t-1})=\frac{1}{\sqrt{1-\beta_{t}}}\boldsymbol{\mu}_{t}+\sqrt{\beta_{t}}(\boldsymbol{\sigma}_{\mathbf{X}},\boldsymbol{\sigma}_{\mathbf{H}}),}}\\ {{\boldsymbol{\mu}_{t}=(\mathbf{X}_{t},\mathbf{H}_{t})-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t},t),}}\end{array}$$

where sθ is a rotation and reflection equivariant network implemented based on EGNN (Satorras et al., 2021).

Training Loss. The reverse diffusion model sθ is trained with a re-weighted evidence lower bound (ELBO) following ProtDiff (Trippe et al., 2022) and DDPM (Ho et al., 2020) as below:

$$\boldsymbol{\theta}^{\star}=\arg\min_{\boldsymbol{\theta}}\mathbb{E}_{t,(\mathbf{X}_{\mathrm{down}},\mathbf{H}_{\mathrm{down}}),\boldsymbol{\sigma}}[\|\boldsymbol{\delta}\|^{2}],\tag{11}$$
$$\boldsymbol{\delta}=\boldsymbol{\sigma}-\mathbf{s}_{\boldsymbol{\theta}}(\sqrt{\alpha_{t}}(\mathbf{X}_{\mathrm{down}},\mathbf{H}_{\mathrm{down}})+\sqrt{1-\alpha_{t}}\boldsymbol{\sigma},t),\tag{12}$$

where σ = (σX, σH).

## 3.4 Overall Generation Process

We have introduced the main components of our protein latent diffusion model, LatentDiff. To generate a novel protein backbone structure, we first sample multivariate Gaussian noise and use the learned latent diffusion model to generate 3D positions and node embeddings in the latent space. To further improve generation quality, we also use low-temperature sampling (Ingraham et al., 2022) to guide the reverse process in the diffusion model. We then use the pre-trained decoder to generate backbone structures in the protein space. Note that the output of the decoder has a pre-defined fixed size. In order to generate proteins of various lengths, each node in the decoder output is predicted to be an augmented node or not; we simply find the first node that is classified as an augmented node and drop the remaining nodes in the generated protein backbone structure. Note that we do not use the reconstructed amino acid types for the corresponding nodes. Instead, we use the inverse folding model ProteinMPNN (Dauparas et al., 2022) to predict protein amino acid sequences from the generated backbone structures.
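The overall sampling procedure can be summarized by the following sketch, assuming NumPy, a trained denoising network `s_theta` (returning noise predictions for positions and features), and a trained `decoder` (returning positions and augmented-node flags); these interfaces are illustrative, and low-temperature sampling and the exact noise schedule are omitted for brevity.

```python
import numpy as np

def sample_backbone(s_theta, decoder, betas, num_latent=32, d=32, rng=None):
    """Reverse diffusion in latent space, then decode to a backbone."""
    rng = rng or np.random.default_rng()
    alphas = np.cumprod(1.0 - betas)
    X = rng.normal(size=(3, num_latent))
    X -= X.mean(axis=1, keepdims=True)  # zero-mean geometric prior
    H = rng.normal(size=(d, num_latent))
    for t in range(len(betas) - 1, -1, -1):
        eps_X, eps_H = s_theta(X, H, t)  # equivariant noise prediction
        coef = betas[t] / np.sqrt(1.0 - alphas[t])
        X = (X - coef * eps_X) / np.sqrt(1.0 - betas[t])
        H = (H - coef * eps_H) / np.sqrt(1.0 - betas[t])
        if t > 0:
            sigma_X = rng.normal(size=X.shape)
            sigma_X -= sigma_X.mean(axis=1, keepdims=True)  # keep zero centroid
            X += np.sqrt(betas[t]) * sigma_X
            H += np.sqrt(betas[t]) * rng.normal(size=H.shape)
    positions, is_augmented = decoder(X, H)
    # Truncate at the first node classified as augmented.
    aug = np.nonzero(is_augmented)[0]
    n = aug[0] if len(aug) else positions.shape[1]
    return positions[:, :n]
```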
## 4 Experiments

We empirically demonstrate the effectiveness and efficiency of our method for generating protein backbone structures. In Section 4.1, we first introduce the dataset we curated from existing protein databases and the benchmark baseline models. In Sections 4.2-4.7, we show the reconstruction performance of the pre-trained autoencoder, the quality and diversity of the generated protein backbone structures, and the parallel sampling efficiency of LatentDiff. In Appendix A.3, we describe the training details of the autoencoder and the latent diffusion model.

## 4.1 Experimental Setting

Dataset. We curate the dataset from the Protein Data Bank (PDB) and the Swiss-Prot data in the AlphaFold Protein Structure Database (AlphaFold DB) (Jumper et al., 2021; Varadi et al., 2022). Details of the dataset can be found in Appendix A.2. Note that the dataset employed in our research is larger than the ones used in both ProtDiff (Trippe et al., 2022) and FoldingDiff (Wu et al., 2022a). Our methodology requires the latent space to be well-structured and equivalent to the protein space; to realize this goal as far as possible, we use the maximum possible volume of data to train our model.

Baselines. To evaluate our proposed method, we compare with the state-of-the-art protein backbone structure generation methods, including ProtDiff (Trippe et al., 2022) and FoldingDiff (Wu et al., 2022a). Both are proposed to generate novel backbone-level protein structures, as discussed in Section 2.3.

## 4.2 Autoencoder Reconstruction

In this section, we demonstrate the reconstruction performance of the protein autoencoder. We compare autoencoders with different downsampling factors f = {2, 4, 8}, which we denote as *auto* - f.

Metrics. First, we evaluate the classification accuracy of augmented and non-augmented nodes (Augment Acc) and the accuracy of amino acid type classification (Residue Acc). We then use the following three geometric evaluations. We use the root mean square deviation (RMSD) to measure the absolute position error between the reconstructed Cα atoms and the ground truth. Additionally, we measure edge stability, the proportion of Cα-Cα distances that reside within the range [3.65 Å, 3.95 Å]; we choose this range because 99% of the Cα-Cα distances in the ground truth lie within it. We also calculate the mean absolute error (MAE) of the torsion angle. Note that all the geometric evaluations are performed on the original protein backbones, without considering augmented nodes.
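The three geometric metrics can be computed as in the following simplified sketch, assuming NumPy and that the reconstructed and ground-truth structures are already aligned index-wise; for simplicity, the torsion MAE here ignores angle periodicity.

```python
import numpy as np

def geometric_metrics(pred, true):
    """pred, true: (3, n) C-alpha coordinates of the same protein."""
    rmsd = np.sqrt(np.mean(np.sum((pred - true) ** 2, axis=0)))
    # Edge stability: fraction of consecutive C-alpha distances in [3.65, 3.95] A.
    d = np.linalg.norm(pred[:, 1:] - pred[:, :-1], axis=0)
    edge_stable = np.mean((d >= 3.65) & (d <= 3.95))

    def torsions(X):
        # Dihedral angle defined by four consecutive C-alpha atoms.
        b1 = (X[:, 1:-2] - X[:, :-3]).T
        b2 = (X[:, 2:-1] - X[:, 1:-2]).T
        b3 = (X[:, 3:] - X[:, 2:-1]).T
        n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
        m = np.cross(n1, b2 / np.linalg.norm(b2, axis=1, keepdims=True))
        return np.arctan2(np.sum(m * n2, axis=1), np.sum(n1 * n2, axis=1))

    tor_mae = np.mean(np.abs(torsions(pred) - torsions(true)))
    return rmsd, edge_stable, tor_mae
```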
Note that all the geometric evaluations are performed on the original protein backbones without considering augmented nodes. In Table 1, we summarize the results with respect to these five metrics for protein autoencoders with different downsampling factors. In order to reduce the modeling space of proteins and make it easier for the diffusion model to learn the latent distribution, larger downsampling factors are preferred; but meanwhile, it will become more difficult to achieve good reconstruction results. We can see that *auto* - 8 has the worst reconstruction performance because the autoencoder compresses information too much. Although *auto* - 2 performs the best among the three settings, the number of nodes in the latent space is still relatively large. So in order to achieve a balance between computation and reconstruction performance, we finally choose auto - 4 as the pre-trained model for generating latent space data and decoding protein backbones. ## 4.3 In-Silico **Evaluation** After sampling protein backbone structures, we also need to evaluate the designability of these generated structures. This means that for a generated backbone, whether we can connect some amino acids into a sequence and the sequence can naturally fold into that desired backbone structure. The most faithful and desirable evaluation is to check through a wet-lab experiment, but this is often resource demanding and not feasible. Here we use *in silico* evaluations as an alternative. Specifically, for a generated backbone structure, we first use an inverse folding model, ProteinMPNN (Dauparas et al., 2022), to predict eight amino acid sequences that could possibly fold into that backbone structure. OmegaFold (Wu et al., 2022b) is then used to predict folding structures for each amino acid sequence. Next, we adopt TMalign (Zhang & Skolnick, 2005) to compute the similarity between the generated backbone structure and each OmegaFold-predicted backbone structure and calculate a TM score to quantify the similarity. The maximum TM-score among these eight scores is referred to as the self-consistency TM-score (scTM). If a scTM score is larger than 0.5, two backbone structures are considered with the same fold and that generated backbone structure is designable. Similar as in previous works (Wu et al., 2022a; Trippe et al., 2022), we generate 780 backbone structures with various lengths between 50 and 128 and evaluate them by the scTM score, for which the sampling temperature in ProteinMPNN is 0.1. The histogram of scTM scores is shown in Figure 3, and the comparison with FoldingDiff and ProtDiff is shown in Table 2. For our LatentDiff, 247 of 780 (31.6%) generated structures have their scTM scores > 0.5. The achieved percentage of designable structures has a significant margin over FoldingDiff (14.2%) and ProtDiff (11.8%). In the reported results by these two baseline methods, the authors grouped their generated structures into short (50-70) and long (70-128) ones, and reported the designable percentage (scTM > 0.5) within each category. For the short category, 31.1% of our generated structures are designable, which is higher than FoldingDiff (27.1%) and ProtDiff (17.1%). For the long category, our percentage (35.6%) is in fact much better than these two baseline models (8.9% for ProtDiff and 9.4% for FoldingDiff), demonstrating significantly improved scalability ![9_image_0.png](9_image_0.png) Figure 3: scTM score distribution of generated backbone structures with length between 50 and 128. 
Similar to previous works (Wu et al., 2022a; Trippe et al., 2022), we generate 780 backbone structures with various lengths between 50 and 128 and evaluate them by the scTM score, for which the sampling temperature in ProteinMPNN is 0.1. The histogram of scTM scores is shown in Figure 3, and the comparison with FoldingDiff and ProtDiff is shown in Table 2. For our LatentDiff, 247 of the 780 (31.6%) generated structures have scTM scores > 0.5. The achieved percentage of designable structures has a significant margin over FoldingDiff (14.2%) and ProtDiff (11.8%). In the results reported by these two baseline methods, the authors grouped their generated structures into short (50-70) and long (70-128) ones and reported the designable percentage (scTM > 0.5) within each category. For the short category, 31.1% of our generated structures are designable, which is higher than FoldingDiff (27.1%) and ProtDiff (17.1%). For the long category, our percentage (35.6%) is in fact much better than those of these two baseline models (8.9% for ProtDiff and 9.4% for FoldingDiff), demonstrating significantly improved scalability due to our designed modeling space reduction in LatentDiff. We also visualize some exemplar backbones and OmegaFold-predicted backbone structures using PyMOL (DeLano, 2002) in Figure 4.

![9_image_0.png](9_image_0.png)

Figure 3: scTM score distribution of generated backbone structures with lengths between 50 and 128. 31.6% (247/780) of the generated samples are designable (scTM > 0.5).

![9_image_1.png](9_image_1.png)

Figure 4: Some samples of generated structures with scTM > 0.5. The top row shows our generated backbones, and the second row shows the backbone structures predicted by OmegaFold from the predicted amino acid sequences. We use the inverse folding model ProteinMPNN to generate these amino acid sequences, which are likely to fold into our generated structures.

## 4.4 Structure Distribution Analysis

After showing the success of the *in silico* tests, we illustrate the distributions of generated samples in both the original protein space and the latent space. First, we show the edge distance and bond angle distributions of generated backbones and test set backbones. As shown in Figure 5, the distributions of generated samples are similar to the test distributions. We further investigate the distributions in the latent space. Specifically, we show the distributions of node positions, edge distances, and node embeddings in the latent space. For simplicity, we only show the x coordinate of the latent node positions and the first dimension of the latent node embeddings. As shown in Figure 6, these distributions of generated latent samples almost recover the latent training data distributions.

![10_image_0.png](10_image_0.png)

Figure 5: Distribution comparison between generated backbone structures and test set protein backbones. (a) Edge distance between any two consecutive Cα atoms along a protein chain. (b) Bond angle formed by any three consecutive Cα atoms along a protein chain.

![10_image_1.png](10_image_1.png)

Figure 6: Distribution comparison between training data and generated samples in the latent space. (a) Position of latent nodes in the x direction. (b) Edge distance between any two consecutive nodes in the latent space. (c) First dimension of the latent node embeddings.

## 4.5 Secondary Structures

We use P-SEA (Labesse et al., 1997) to count the occurrence of two types of secondary structures in the generated proteins with scTM > 0.5. Specifically, we calculate the percentage of generated proteins that contain only α-helices, only β-sheets, and both α-helices and β-sheets, respectively. The results are shown in Table 3. More than half of the generated proteins include α-helices, and a large portion of the generated proteins contain β-sheets. This shows that our method can generate the various secondary structures found in natural proteins.

Table 3: Percentage of generated proteins that contain only α-helices, only β-sheets, and both α-helices and β-sheets, respectively.

| α-helix only | β-sheet only | α-helix + β-sheet |
|----------------|----------------|---------------------|
| 58.7% | 13.3% | 14.9% |

## 4.6 Diversity

We also evaluate the diversity of the generated proteins with scTM > 0.5 (designable), as shown in Table 4. Specifically, for each designable protein, we calculate its TM scores with all other designable proteins and take the maximum TM score as a measure of its similarity to the other generated proteins. We then average these maximum TM scores over all designable proteins to assess the diversity of the generated proteins (lower is better). From the table, we can see that LatentDiff generates more diverse protein structures than ProtDiff and is comparable with FoldingDiff.

Table 4: Diversity of generated designable proteins (scTM > 0.5). ↓ indicates that a lower value is better.

| Method | Diversity↓ |
|-------------|--------------|
| ProtDiff | 0.836±0.1648 |
| FoldingDiff | 0.585±0.1276 |
| LatentDiff | 0.615±0.0849 |

## 4.7 Parallel Sampling Efficiency Comparison

In this section, we demonstrate the parallel sampling efficiency of our method. Diffusion models usually need to perform thousands of reverse steps to generate a single data point, and the data size must be the same during every reverse step.
This generation process is therefore very time-consuming and computationally expensive, especially when the modeling space of diffusion models is large, which prohibits efficient parallel sampling with limited computing resources. Generation in latent space reduces memory usage and computational complexity, as the latent space is much smaller than the protein space, thereby improving generation throughput. We compare efficiency in terms of parallel sampling because, in practice, the screening procedure requires sampling a large number of proteins, so high-throughput sampling is desired. In this sense, sampling in latent space demonstrates a significant efficiency improvement.

For the experiments, we compare sampling 1000 proteins with different methods on a single NVIDIA 2080Ti GPU and summarize the results in Table 5. Note that these experiments are only used to test sampling efficiency, so the network weights are randomly initialized. For a fair comparison, and to rule out factors other than the modeling space, we compare with ProtDiff and with our LatentDiff without downsampling (named LatentDiff-P), as the denoising networks of these models are similar, and we also keep the number of parameters similar across models. For our model, the processing time of the decoder is orders of magnitude less than that of our latent diffusion model, so we do not take the decoder time into account. From the results, we can see that generating 1000 protein structures in the protein space takes about 2.9 hours, while it only takes about 11 minutes to generate in the latent space and then map to the protein space. Reducing the modeling space thus demonstrates potential usefulness in practice. The sampling time of LatentDiff scales linearly with the number of diffusion steps because the diffusion steps are performed sequentially. Moreover, since we use a fully connected graph for the diffusion model, increasing the number of latent nodes will quadratically increase memory consumption and computational complexity. Consequently, the sampling throughput will decrease and is contingent upon the GPU memory and computational capacity, with the throughput being constrained by whichever resource reaches its limit first.

Table 5: Sampling efficiency comparison between diffusion models in latent and protein space. LatentDiff-P denotes our model with no protein autoencoder, i.e., diffusion performed directly in the protein space.

| Method | Parameters | Protein Length | Latent node | Diffusion steps | Time (hrs) | Speed (sec/sample) |
|--------------|--------------|------------------|---------------|-------------------|--------------|----------------------|
| ProtDiff | 1974528 | 128 | N/A | 1000 | 1.9 | 6.85 |
| LatentDiff-P | 2016453 | 128 | N/A | 1000 | 2.9 | 10.66 |
| LatentDiff | 2027984 | 128 | 32 | 1000 | 0.18 | 0.68 |
| LatentDiff | 2027984 | 128 | 32 | 2000 | 0.36 | 1.33 |
| LatentDiff | 2027984 | 256 | 64 | 1000 | 0.73 | 2.66 |
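To illustrate the quadratic scaling of the fully connected latent graph, the following back-of-the-envelope sketch compares the number of pairwise edges processed per reverse step in protein space versus latent space; the node counts below follow our default setting with downsampling factor f = 4.

```python
# Pairwise edges per reverse step for a fully connected graph of n nodes.
def num_edges(n):
    return n * (n - 1)

protein_nodes = 128          # nodes in protein space
latent_nodes = 128 // 4      # nodes after downsampling with factor f = 4

# A 4x reduction in nodes gives roughly a 16x reduction in edge computation,
# so for a fixed memory budget, roughly 16x more samples fit in one batch.
print(num_edges(protein_nodes) / num_edges(latent_nodes))  # ~16.4
```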
## 5 Discussion

In this section, we discuss the limitations of our LatentDiff and potential future directions beyond LatentDiff.

Limitations. First, the input protein backbone structure needs to be padded to a fixed length for the protein autoencoder, and the actual length is predicted during the decoding process. Second, due to the modeling difficulty of structure and sequence co-design mentioned by previous works (Trippe et al., 2022; Wu et al., 2022a), we only use the generated protein backbone structures, and the corresponding generated sequences are not used. We believe the reason co-design does not work well is that directly modeling the joint distribution of structures and sequences is very hard, since the model needs to learn the complex correlation between protein structures and amino acid sequences. This essentially requires the generative model to implicitly learn the inverse folding or folding process, which are themselves complex tasks that need powerful inverse folding or protein folding prediction models to solve. Using a high-accuracy inverse folding model to predict sequences from structures, however, reduces the burden on the generative model to learn this complex correlation.

Future Directions. There are several potential directions beyond the current LatentDiff, and we leave them to future work. (1) The 3D protein autoencoder can be adjusted to support arbitrary-length inputs and to generate arbitrary-length protein backbone structures. (2) Due to limited computational resources, we cannot train our 3D protein autoencoder on all protein structures predicted by AlphaFold (Varadi et al., 2022), and the performance of the 3D protein autoencoder is likely to improve when more training protein structures are available. (3) The length of generated proteins is limited to 128 in our work. With more data and computing resources, our method has the potential to generate longer proteins that exhibit more diverse folds. (4) There is still an opportunity to improve structure and sequence co-design. With a more powerful protein autoencoder obtained via (1) and (2), the modeling difficulty of structure and sequence co-design may be addressed naturally by our proposed LatentDiff framework. Besides, an iterative refinement approach that alternates between sequence and structure generation steps might also help gradually improve the consistency between the generated sequence and structure. In addition, incorporating physical constraints into the generative modeling process, such as energy functions and geometric constraints, could guide the generation process toward sequences that are more likely to fold into the generated structures. (5) Conditional generation tasks are very useful in practice, as they enable protein generation with desired properties, and are worth more exploration in future work.

## 6 Broader Impact Statement

Our protein generation method, enabling the production of novel proteins, has significant broader impact potential. On one hand, it might offer potential opportunities for advancements in medicine, agriculture, and biotechnology, facilitating the development of innovative therapeutics, enzymes, and biomaterials in the future.
On the other hand, while considering the concerns raised regarding the computational selection of potentially dangerous agents, we should prioritize responsible research practices, with stringent safety protocols, adherence to regulations, and collaboration with biosecurity experts to ensure the responsible handling of generated proteins. By fostering collaboration and knowledge dissemination, we aim to advance protein design while actively managing any potential risks associated with our method. ## 7 Conclusion We have proposed LatentDiff, a 3D latent diffusion framework for protein backbone structure generation. To reduce the modeling space of protein structures, LatentDiff uses a pre-trained equivariant 3D autoencoder to transform protein backbones into a more compact latent space, and models the latent distribution with an equivariant latent diffusion model. LatentDiff is shown to be effective and efficient in generating designable protein backbone structures by comprehensive experimental results. ## References Namrata Anand and Tudor Achim. Protein structure and sequence generation with equivariant denoising diffusion probabilistic models. *arXiv preprint arXiv:2205.15019*, 2022. Namrata Anand and Possu Huang. Generative modeling for protein structures. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. Namrata Anand, Raphael Eguchi, and Po-Ssu Huang. Fully differentiable full-atom protein backbone generation, 2019. Ivan Anishchenko, Samuel J Pellock, Tamuka M Chidyausiku, Theresa A Ramelot, Sergey Ovchinnikov, Jingzhou Hao, Khushboo Bafna, Christoffer Norn, Alex Kang, Asim K Bera, et al. *De novo* protein design by deep network hallucination. *Nature*, 600(7889):547–552, 2021. Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. *Science*, 373(6557):871–876, 2021. Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E (3)-equivariant graph neural networks for dataefficient and accurate interatomic potentials. *Nature communications*, 13(1):2453, 2022. Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning–based protein sequence design using ProteinMPNN. *Science*, 378(6615):49–56, 2022. Warren L DeLano. The PyMOL molecular graphics system. *http://www. pymol. org*, 2002. Raphael R Eguchi, Christian A Choe, and Po-Ssu Huang. Ig-VAE: Generative modeling of protein structure by direct 3d coordinate generation. *PLoS computational biology*, 18(6):e1010271, 2022. Noelia Ferruz, Steffen Schmidt, and Birte Höcker. ProtGPT2 is a deep unsupervised language model for protein design. *Nature communications*, 13(1):4348, 2022. Alex Hawkins-Hooker, Florence Depardieu, Sebastien Baur, Guillaume Couairon, Arthur Chen, and David Bikard. Generating functional protein variants with variational autoencoders. *PLoS computational biology*, 17(2):e1008736, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020. 
Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022. John Ingraham, Max Baranov, Zak Costello, Vincent Frappier, Ahmed Ismail, Shan Tie, Wujie Wang, Vincent Xue, Fritz Obermeyer, Andrew Beam, et al. Illuminating protein space with a programmable generative model. *bioRxiv*, pp. 2022–12, 2022. Bowen Jing, Gabriele Corso, Regina Barzilay, and Tommi S Jaakkola. Torsional diffusion for molecular conformer generation. In *ICLR2022 Machine Learning for Drug Discovery*, 2022. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. *Nature*, 596(7873):583–589, 2021. Mostafa Karimi, Shaowen Zhu, Yue Cao, and Yang Shen. De novo protein design for novel folds using guided conditional Wasserstein generative adversarial networks. *Journal of chemical information and modeling*, 60(12):5667–5681, 2020. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015. Gilles Labesse, N Colloc'h, Joël Pothier, and J-P Mornon. P-SEA: a new efficient assignment of secondary structure from cα trace of proteins. *Bioinformatics*, 13(3):291–295, 1997. Greg Landrum et al. RDKit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Jin Sub Lee and Philip M Kim. ProteinSGM: Score-based generative modeling for *de novo* protein design. bioRxiv, 2022. Meng Liu, Cong Fu, Xuan Zhang, Limei Wang, Yaochen Xie, Hao Yuan, Youzhi Luo, Zhao Xu, Shenglong Xu, and Shuiwang Ji. Fast quantum property prediction via deeper 2d and 3d graph networks. arXiv preprint arXiv:2106.08551, 2021a. Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, Keqiang Yan, Haoran Liu, Cong Fu, Bora M Oztekin, Xuan Zhang, and Shuiwang Ji. DIG: A turnkey library for diving into graph deep learning research. *Journal of Machine Learning Research*, 22 (240):1–9, 2021b. Meng Liu, Keqiang Yan, Bora Oztekin, and Shuiwang Ji. GraphEBM: Molecular graph generation with energy-based models. In *Energy Based Models Workshop-ICLR 2021*, 2021c. Meng Liu, Youzhi Luo, Kanji Uchino, Koji Maruhashi, and Shuiwang Ji. Generating 3D molecules for target protein binding. In *Proceedings of The 39th International Conference on Machine Learning*, pp. 13912–13924, 2022a. Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3D molecular graphs. In *International Conference on Learning Representations*, 2022b. Shitong Luo, Yufeng Su, Xingang Peng, Sheng Wang, Jian Peng, and Jianzhu Ma. Antigen-specific antibody design and optimization with diffusion-based generative models. *bioRxiv*, 2022. Youzhi Luo and Shuiwang Ji. An autoregressive flow model for 3D molecular geometry generation from scratch. In *International Conference on Learning Representations*, 2022. Youzhi Luo, Keqiang Yan, and Shuiwang Ji. GraphDF: A discrete flow model for molecular graph generation. In *Proceedings of The 38th International Conference on Machine Learning*, pp. 7192–7203, 2021. Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R Eguchi, PoSsu Huang, and Richard Socher. ProGen: Language modeling for protein generation. 
arXiv preprint arXiv:2004.03497, 2020. Janet R McMillan, Oliver G Hayes, Peter H Winegar, and Chad A Mirkin. Protein materials engineering with dna. *Accounts of chemical research*, 52(7):1939–1948, 2019. Erik Nijkamp, Jeffrey Ruffolo, Eli N Weinstein, Nikhil Naik, and Ali Madani. ProGen2: exploring the boundaries of protein language models. *arXiv preprint arXiv:2206.13517*, 2022. Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018. Donatas Repecka, Vykintas Jauniskis, Laurynas Karpus, Elzbieta Rembeza, Irmantas Rokaitis, Jan Zrimec, Simona Poviloniene, Audrius Laurynenas, Sandra Viknander, Wissam Abuajwa, et al. Expanding functional protein sequence spaces using generative adversarial networks. *Nature Machine Intelligence*, 3(4): 324–333, 2021. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022. Sari Sabban and Mikhail Markovsky. RamaNet: Computational *de novo* helical protein backbone design using a long short-term memory generative neural network. *bioRxiv*, pp. 671552, 2020. Vıctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In International conference on machine learning, pp. 9323–9332. PMLR, 2021. Chence Shi, Chuanrui Wang, Jiarui Lu, Bozitao Zhong, and Jian Tang. Protein sequence and structure co-design with equivariant translation. *arXiv preprint arXiv:2210.08761*, 2022. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020. Brian L Trippe, Jason Yim, Doug Tischer, Tamara Broderick, David Baker, Regina Barzilay, and Tommi Jaakkola. Diffusion probabilistic modeling of protein backbones in 3d for the motif-scaffolding problem. arXiv preprint arXiv:2206.04119, 2022. Mihaly Varadi, Stephen Anyango, Mandar Deshpande, Sreenath Nair, Cindy Natassia, Galabina Yordanova, David Yuan, Oana Stroe, Gemma Wood, Agata Laydon, et al. AlphaFold protein structure database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. *Nucleic* acids research, 50(D1):D439–D444, 2022. Limei Wang, Yi Liu, Yuchao Lin, Haoran Liu, and Shuiwang Ji. ComENet: Towards complete and efficient message passing for 3D molecular graphs. In The 36th Annual Conference on Neural Information Processing Systems, 2022a. Zhengyang Wang, Meng Liu, Youzhi Luo, Zhao Xu, Yaochen Xie, Limei Wang, Lei Cai, Qi Qi, Zhuoning Yuan, Tianbao Yang, and Shuiwang Ji. Advanced graph and sequence neural networks for molecular property prediction and drug discovery. *Bioinformatics*, 38(9):2579–2586, 2022b. Joseph L Watson, David Juergens, Nathaniel R Bennett, Brian L Trippe, Jason Yim, Helen E Eisenach, Woody Ahern, Andrew J Borst, Robert J Ragotte, Lukas F Milles, et al. Broadly applicable and accurate protein design by integrating structure prediction networks and diffusion generative models. *bioRxiv*, pp. 2022–12, 2022. Kevin E Wu, Kevin K Yang, Rianne van den Berg, James Y Zou, Alex X Lu, and Ava P Amini. Protein structure generation via folding diffusion. *arXiv preprint arXiv:2209.15611*, 2022a. 
Ruidong Wu, Fan Ding, Rui Wang, Rui Shen, Xiwen Zhang, Shitong Luo, Chenpeng Su, Zuofan Wu, Qi Xie, Bonnie Berger, et al. High-resolution *de novo* structure prediction from primary sequence. *bioRxiv*, 2022b.

Zachary Wu, Kadina E Johnston, Frances H Arnold, and Kevin K Yang. Protein sequence design with deep generative models. *Current opinion in chemical biology*, 65:18–27, 2021.

Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. GeoDiff: A geometric diffusion model for molecular conformation generation. In *International Conference on Learning Representations*, 2021.

Keqiang Yan, Yi Liu, Yuchao Lin, and Shuiwang Ji. Periodic graph transformers for crystal material property prediction. In *The 36th Annual Conference on Neural Information Processing Systems*, 2022.

Jianyi Yang, Ivan Anishchenko, Hahnbeom Park, Zhenling Peng, Sergey Ovchinnikov, and David Baker. Improved protein structure prediction using predicted interresidue orientations. *Proceedings of the National Academy of Sciences*, 117(3):1496–1503, 2020.

Yang Zhang and Jeffrey Skolnick. TM-align: a protein structure alignment algorithm based on the TM-score. *Nucleic acids research*, 33(7):2302–2309, 2005.

Yang Zhang, Wenbing Huang, Zhewei Wei, Ye Yuan, and Zhaohan Ding. EquiPocket: an E(3)-equivariant geometric graph neural network for ligand binding site prediction. *arXiv preprint arXiv:2302.12177*, 2023.

## A Appendix

## A.1 Distribution Rotation and Reflection Invariant Reverse Diffusion Process

In this section, we provide proof that, by (1) using a high-dimensional Gaussian distribution as the prior distribution and (2) employing a rotation and reflection equivariant reverse diffusion model sθ (Hoogeboom et al., 2022; Xu et al., 2021), the challenge of pθ(Xdown, Hdown) = pθ(RXdown, Hdown) can be addressed. The proof borrows ideas from Xu et al. (2021) and Hoogeboom et al. (2022). First, because pθ(XT, HT) = N (0, I), and N (0, I) is isotropic, we have pθ(XT, HT) = pθ(RXT, HT), where R ∈ R^{3×3}, |R| = ±1, describes the rotation and reflection transformations in 3D space. Second, sθ is rotation and reflection equivariant for Xt and rotation and reflection invariant for Ht, and

$$\mathbf{X}_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}\left(\mathbf{X}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t},t)_{\mathbf{X}}\right)+\sqrt{\beta_{t}}\boldsymbol{\sigma}_{\mathbf{X}},\tag{13}$$
$$\mathbf{H}_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}\left(\mathbf{H}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t},t)_{\mathbf{H}}\right)+\sqrt{\beta_{t}}\boldsymbol{\sigma}_{\mathbf{H}},\tag{14}$$

where sθ(Xt, Ht, t)X and sθ(Xt, Ht, t)H denote the network predictions used to update X and H, respectively. When we apply a transformation R ∈ R^{3×3}, |R| = ±1, to Xt−1, we have

$$R\mathbf{X}_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}R\left(\mathbf{X}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t},t)_{\mathbf{X}}\right)+\sqrt{\beta_{t}}R\boldsymbol{\sigma}_{\mathbf{X}}\tag{15}$$
$$=\frac{1}{\sqrt{1-\beta_{t}}}\left(R\mathbf{X}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}R\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t},t)_{\mathbf{X}}\right)+\sqrt{\beta_{t}}R\boldsymbol{\sigma}_{\mathbf{X}}\tag{16}$$
$$=\frac{1}{\sqrt{1-\beta_{t}}}\left(R\mathbf{X}_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{s}_{\boldsymbol{\theta}}(R\mathbf{X}_{t},\mathbf{H}_{t},t)_{\mathbf{X}}\right)+\sqrt{\beta_{t}}R\boldsymbol{\sigma}_{\mathbf{X}},\tag{17}$$

and we have the following:

$$p_{\boldsymbol{\theta}}(\mathbf{X}_{t-1},\mathbf{H}_{t-1}|\mathbf{X}_{t},\mathbf{H}_{t})=p_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t})p(\boldsymbol{\sigma}_{\mathbf{X}},\boldsymbol{\sigma}_{\mathbf{H}})=p_{\boldsymbol{\theta}}(R\mathbf{X}_{t},\mathbf{H}_{t})p(R\boldsymbol{\sigma}_{\mathbf{X}},\boldsymbol{\sigma}_{\mathbf{H}})=p_{\boldsymbol{\theta}}(R\mathbf{X}_{t-1},\mathbf{H}_{t-1}|R\mathbf{X}_{t},\mathbf{H}_{t}).\tag{18}$$
Beyond this, for the reverse diffusion time t ∈ {T, T − 1, · · · , 1}, assume pθ(Xt, Ht) satisfies pθ(Xt, Ht) = pθ(RXt, Ht), where R ∈ R^{3×3}, |R| = ±1, describes the rotation and reflection transformations in 3D space. Then we have:

$$p_{\boldsymbol{\theta}}(R\mathbf{X}_{t-1},\mathbf{H}_{t-1})=\int_{(\mathbf{X}_{t},\mathbf{H}_{t})}p_{\boldsymbol{\theta}}(R\mathbf{X}_{t-1},\mathbf{H}_{t-1}|\mathbf{X}_{t},\mathbf{H}_{t})\,p_{\boldsymbol{\theta}}(\mathbf{X}_{t},\mathbf{H}_{t})$$
$$=\int_{(\mathbf{X}_{t},\mathbf{H}_{t})}p_{\boldsymbol{\theta}}(R\mathbf{X}_{t-1},\mathbf{H}_{t-1}|RR^{-1}\mathbf{X}_{t},\mathbf{H}_{t})\,p_{\boldsymbol{\theta}}(RR^{-1}\mathbf{X}_{t},\mathbf{H}_{t})$$
$$=\int_{(\mathbf{X}_{t},\mathbf{H}_{t})}p_{\boldsymbol{\theta}}(\mathbf{X}_{t-1},\mathbf{H}_{t-1}|R^{-1}\mathbf{X}_{t},\mathbf{H}_{t})\,p_{\boldsymbol{\theta}}(R^{-1}\mathbf{X}_{t},\mathbf{H}_{t}).$$

Let X′ = R^{−1}Xt. Since |det R| = 1, we have

$$p_{\boldsymbol{\theta}}(R\mathbf{X}_{t-1},\mathbf{H}_{t-1})=\int_{(\mathbf{X}^{\prime},\mathbf{H}_{t})}p_{\boldsymbol{\theta}}(\mathbf{X}_{t-1},\mathbf{H}_{t-1}|\mathbf{X}^{\prime},\mathbf{H}_{t})\,p_{\boldsymbol{\theta}}(\mathbf{X}^{\prime},\mathbf{H}_{t})*|\det R|=p_{\boldsymbol{\theta}}(\mathbf{X}_{t-1},\mathbf{H}_{t-1}),\tag{19}$$

so pθ(Xt−1, Ht−1) is invariant. By induction, pθ(XT−1, HT−1), . . . , pθ(X0, H0) are all invariant, and the proof is complete.

## A.2 Datasets

We curate the dataset from the Protein Data Bank (PDB) and the Swiss-Prot data in the AlphaFold Protein Structure Database (AlphaFold DB) (Jumper et al., 2021; Varadi et al., 2022). We filter all the single-chain protein data from PDB with Cα-Cα distances less than 5 Å and sequence lengths between 40 and 128 residues, resulting in 4460 protein sequences. We randomly split the data into an 80/10/10 train/validation/test split. In order to include more training data, we further curate protein data from two resources and add them to the current training set. The first part of the augmented training data comes from AlphaFold DB. Specifically, we filter single-chain proteins in Swiss-Prot with lengths between 40 and 128 and add these proteins to the training data. The second part of the augmented training data comes from PDB, where we curate data from those single-chain proteins with Cα-Cα distances larger than 5 Å and sequence lengths longer than 40. Specifically, we split these proteins at the positions where the Cα-Cα distance is larger than 5 Å to obtain protein fragments. We then add the fragments with lengths between 50 and 128 to the training data. The fragments with lengths longer than 256 are uniformly cut into lengths between 50 and 128 and added to the training data. After this data augmentation process, we finally obtain about 100k training data points.

## A.3 Experimental Details

For training the autoencoder, we use all the available training data. We then use the trained encoder to embed all the training proteins and use their latent representations to train the latent diffusion model. We train the autoencoder for 200 epochs with batch size 128, using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1e-3, β1 = 0.9, β2 = 0.999, and weight decay 2e-4. The latent diffusion model is trained for 13M steps with batch size 128, using the AMSGrad optimizer (Reddi et al., 2018) with learning rate 5e-5, β1 = 0.9, β2 = 0.999, and weight decay 1e-12. We use 1000 diffusion steps and the same noise scheduler as in Hoogeboom et al. (2022). We implement all the models in PyTorch and train all the models on a single NVIDIA A100 GPU.

## A.4 Latent Space Interpolation

Usually, it is natural to visualize the latent space and perform latent code interpolation to test whether the latent space is well-structured. However, a protein in our latent space is not represented by a single latent feature vector; rather, it is a set of nodes associated with 3D coordinates and node features. As such, it is difficult to use dimension reduction techniques like t-SNE to visualize the latent space.
In addition, we did not add a KL-divergence loss on the coordinates, since it would break equivariance. Even for the invariant node features, we only add a minimal KL-divergence penalty to control the variance of the latent space, as we aim to maintain high reconstruction accuracy for the autoencoder. Therefore, in our case, the latent space does not necessarily need to be well-structured, and arbitrary interpolation may not guarantee valid protein structures upon decoding. To show this, we pick two generated proteins with scTM > 0.5 (designable), whose corresponding latent representations are $(\mathbf{X}^{s}_{\mathrm{emb}}, \mathbf{H}^{s}_{\mathrm{emb}})$ and $(\mathbf{X}^{t}_{\mathrm{emb}}, \mathbf{H}^{t}_{\mathrm{emb}})$. We then interpolate these two latent representations as

$$(\mathbf{X}_{\mathrm{emb}}^{\mathrm{interp}},\mathbf{H}_{\mathrm{emb}}^{\mathrm{interp}})=(\mathbf{X}_{\mathrm{emb}}^{s}*(1-\lambda)+\mathbf{X}_{\mathrm{emb}}^{t}*\lambda,\;\mathbf{H}_{\mathrm{emb}}^{s}*(1-\lambda)+\mathbf{H}_{\mathrm{emb}}^{t}*\lambda).$$

We choose different values of λ, decode the interpolated latent representations into proteins, and calculate the scTM score, as shown in Table 6. We can see that if λ is close to 0 or 1, the decoded proteins are still designable. However, if λ is near 0.5, the decoded proteins are not valid, just as analyzed above.

Table 6: The scTM score of proteins decoded from the interpolation of two latent protein representations. λ is the interpolation weight. TM-left denotes the TM score with the start protein, and TM-right denotes the TM score with the end protein.

| λ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
|----------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------|
| scTM | 0.85 | 0.57 | 0.49 | 0.37 | 0.27 | 0.31 | 0.42 | 0.52 | 0.49 | 0.68 | 0.74 |
| TM-left | 1.0 | 0.78 | 0.66 | 0.61 | 0.51 | 0.35 | 0.36 | 0.38 | 0.40 | 0.42 | 0.43 |
| TM-right | 0.57 | 0.55 | 0.56 | 0.48 | 0.47 | 0.43 | 0.44 | 0.52 | 0.61 | 0.71 | 1.0 |
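The interpolation procedure above corresponds to the following sketch, where `encoder` and `decoder` stand in for the trained autoencoder networks from Section 3.2:

```python
import numpy as np

def interpolate_latents(encoder, decoder, protein_s, protein_t, lambdas):
    """Linearly interpolate two latent protein representations and decode."""
    X_s, H_s = encoder(*protein_s)
    X_t, H_t = encoder(*protein_t)
    decoded = []
    for lam in lambdas:
        X = X_s * (1 - lam) + X_t * lam
        H = H_s * (1 - lam) + H_t * lam
        decoded.append(decoder(X, H))
    return decoded

# Example: interpolation weights matching Table 6.
# decoded = interpolate_latents(encoder, decoder, p_s, p_t, np.arange(0, 1.1, 0.1))
```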
Review 1: Summary: The authors present a generative approach for protein backbones. The idea is to operate in a similar way to the stable diffusion approach for image generation, i.e. apply diffusion in the latent space. The authors propose to do the same for 3D graphs. They identify the following challenges: 1) E(3) equivariance for the autoencoder, 2) accuracy of decoding for 3D graphs, 3) diffusion process should be specialised for 3D. The contributions are: 1) a novel E(3) graph autoencoder, 2) a novel E(3) diffusion process, 3) an overall increase in sampling efficiency due to operating in a reduced latent space. Strengths and Weaknesses: The idea of working in a latent space that is subject to constraints such as E(3) is interesting. The authors should however show that this brings advantages with respect to other approaches, for example working with nodes with attributes of a different number of dimensions and not forced to be rotation and reflection invariant. Given that the latent representation consists of a small number of nodes in 3D space, it would be of interest to visualise these latent embeddings and identify trends, clustering properties, relationships to functional annotations of the associated proteins, etc. This seems to be an important opportunity that is completely missed. The authors offer a 3D equivariant graph autoencoder, but several other approaches exist in literature and these are not acknowledged nor compared to; why is the proposed approach better than other? e.g. 1. Batzner, Simon, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. "E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials." Nature communications 13, no. 1 (2022): 2453. 2. Zhang, Yang, Wenbing Huang, Zhewei Wei, Ye Yuan, and Zhaohan Ding. "EquiPocket: an E (3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction." arXiv preprint arXiv:2302.12177 (2023). One important aspect seems to be the joint generation of 3D coordinates and attributes (i.e. the amino acid type) but then the authors simply state that:<<Note that we do not use reconstructed amino acid types for the corresponding node. Instead, we use the inverse folding model ProteinMPNN (Dauparas et al., 2022) to predict protein amino acid sequences from generated backbone structures.>> The authors should offer some insights on why the predicted ACs are not of acceptable quality. In the Discussion's future directions the authors state that this is one aspect to improve but do not offer any analysis or suggestion on how to improve this shortcoming. In literature we see now conditional generative methods that can bias the generation process according to desired prompts/constraints, but the authors propose an unconditional model to simply sample protein structures at random; the authors should justify the use case for their approach. To improve the overall quality of the paper the authors should consider a clearer introduction of the main ideas, i.e. why using 3D latent nodes is a good idea; the paper does not read smoothly and clearly, its clarity could be significantly improved. Requested Changes: The paper proposes an efficient approach for the generation of protein backbones. The efficiency aspect is underdeveloped in the paper and the relative experimental results receive only 10 line in Section 4.5 and one numerical comparative result. 
Important questions are: how does the computational efficiency of the proposed approach scale with (and compare against other systems): 1. the length of the protein (the parameter m)? 2. the number of latent nodes? 3. the number of layers and the number of steps in the diffusion process? 4. the overall number of parameters of the whole architecture? To reiterate a previous point: since the latent representation consists of a small number of nodes in 3D space, it would be of interest to visualise these latent embeddings and identify trends, clustering properties, relationships to functional annotations of the associated proteins, etc. Broader Impact Concerns: The capacity to generate biological agents and filter them with computational approaches to select potentially dangerous agents should be considered. ================================================== Review 2: Summary: Diffusion models are quite popular for generating images, but generation is quite computationally demanding. Recent work has sped up generation by using the diffusion model to sample in the latent space of a pretrained image autoencoder (e.g., stable diffusion). Diffusion models have recently shown promise for sampling 3D protein structures, and this paper explores the potential for generation costs to be reduced using a pretrained 3D structure autoencoder. Much of the technical content of the paper is devoted to ensuring that the autoencoder and the diffusion model are equivariant to 3D rotations, flips, and translations. Experimental results demonstrate that samples from the model have similar quality to recent work, but are substantially faster to generate. Strengths and Weaknesses: # Strengths This is a useful contribution to the research dialog on diffusion models for proteins. Given the success of latent diffusion for images, many protein researchers will be curious to see how it generalizes to proteins. Being able to generate diverse novel proteins has broad interest in drug design, biocatalysis, etc. # Weaknesses ## Evaluation As the authors explain, the gold-standard evaluation setup would require synthesizing new proteins and measuring their structures. This would be extremely expensive, so this work (and prior work) depends on in-silico surrogate metrics. With this in mind, I found the description of this setup lacking in key details: Can you explain what the train-test split is? How would a model perform that just memorized a single training example? As far as I understand, it would do well under the criteria in Table 2. It's possible that your model has a better percentage of samples with scTM > 0.5, but that these samples are substantially less diverse than prior work. You write that the 'maximum TM-score among these eight scores is referred to as the self-consistency TM-score (scTM).' Can you please explain why taking this maximum is appropriate? In practice, if one was designing a protein, how would they choose among samples? Sec 4.3: Can you explain why you chose to use OmegaFold as your structure prediction model? Is this what the other systems in Table 2 used? If not, could this be an unnecessary confounder when comparing the systems? The visual analysis of samples was underwhelming. The paper writes: "we also visualize some exemplar backbones and OmegaFold-predicted backbone structures using PyMOL (DeLano, 2002) in Figure 4, which clearly shows that our LatentDiff is capable of generating designable proteins".
These are just a few examples for which scTM is high, so it doesn't provide new information over the previous tables. I would have appreciated, for example, a demonstration that the samples capture a diverse set of folds and secondary structure elements. ## Exposition I found the description of the encoder/decoder architectures quite confusing. It wasn't clear to me why the language of graph neural networks is necessary if the graph is fully connected and some of the update rules are convolutional. For example, why describe 'edge building' when the graph is fully connected? Could it be expressed in terms of CNN and MLP layers? Is it some sort of transformer? ## Related work I was very surprised that you didn't discuss the RFDiffusion paper: Watson et al. "Broadly applicable and accurate protein design by integrating structure prediction networks and diffusion generative models." Similarly, I was surprised by how little Ingraham et al. was discussed. Can you please discuss the relationship between your work and theirs? ## Speedup Results I'm very confused as to how a speedup is obtained by your system. The encoder-decoder model was tuned to provide a downsampling factor of 4. At first, it seems that the diffusion model would be generating data that is 4x lower-dimensional. However, the diffusion model also needs to generate not just X, but also H, so there are 4x fewer nodes in the graph, but each node has more dimensions. Can you please explain how such a big speedup was obtained in Section 4.5? Are you comparing results from two different software packages, or did you implement ProtDiff yourself? ## Sequence Length As with other recent work on diffusion for proteins, the authors can't generate proteins longer than ~128 amino acids, which is considerably shorter than most natural proteins. I expected that the latent-space sampling would enable generating longer sequences, but was disappointed to see that this was not explored. Requested Changes: Please address all of my comments in 'Weaknesses' above except 'Sequence Length'. Broader Impact Concerns: None ================================================== Review 3: Summary: This paper proposes a method to reduce the protein structure generative space by leveraging the downsampling and upsampling of the amino acids in the protein. Then, with diffusion, the generation can be performed in the reduced space. Strengths and Weaknesses: ### Strengths: - The proposed method appears to be simple and straightforward. ### Weaknesses & Questions: - The motivation for the subsampled nodes seems weak, as their purpose is to improve efficiency. However, many designed proteins are small, typically ranging from 50 to 200 amino acids, and smaller proteins are generally easier to design, synthesize, and analyze experimentally. The proposed model also uses a protein length of $m=128$. With such a small dimension, further compression to something like $m/4$ may not be necessary in practice. - Why can the proposed reduced-space method help to improve performance? This does not seem well aligned with the motivation of efficiency. - The model does not appear to be end-to-end. It first trains the encoder-decoder, then fixes the weights of the encoder-decoder, and finally trains the latent diffusion. Additionally, ProteinMPNN is required for amino acid prediction. - The random split for train/val/test data seems unreasonable. It would be better to split based on protein similarity.
Section A.2 is also confusing; for instance, it mentions "sequence length between 40 and 128 residues" and then "sequence lengths longer than 40." - The proposed space-reducing method might be more effective as a plugin for existing models, like RFDiffusion. Introducing a new model (encoder-decoder) makes it difficult to determine whether the gain comes from the proposed space-reducing method or the new model. - In Section 4.3, it is unclear how the 780 backbone structures were generated. Were they generated randomly? How do the baselines generate these backbones, and is the comparison fair? For example, do the models train on the same dataset? - In Figure 5, the CA-CA distance and bond angle appear unnatural and differ considerably from the ground truth values. - The provided benchmark appears to be inadequate, as it only showcases unconditional single-chain generation. Numerous intriguing tasks in RFDiffusion have not been assessed or explored. Requested Changes: Refer to "Weaknesses & Questions". Broader Impact Concerns: N/A ================================================== Review 4: Summary: The Latent Diffusion Model is a proposed method for generating protein structures that aims to reduce the complexity of protein modeling while capturing the distribution of natural protein structures in a condensed latent space. It combines a 3D graph autoencoder and a latent 3D diffusion model to reduce the modeling space of protein structures. The model solves the Bayesian inverse problem of deriving the underlying data distribution by establishing a bijective mapping between given prior distributions and p(z). To generate a novel protein backbone structure using the Latent Diffusion Model, the process is decomposed into two stages: protein latent representation generation and latent representation decoding. In the first stage, multivariate Gaussian noise is sampled and the learned latent diffusion model is used to generate 3D positions and node embeddings in the latent space. In the second stage, the pre-trained decoder is used to generate backbone structures in the protein space. Empirically, the Latent Diffusion Model is an effective and efficient approach for generating designable protein backbone structures by combining a 3D graph autoencoder and a latent 3D diffusion model. Strengths and Weaknesses: # Novelty The novel aspect of this model is the proposal of a latent diffusion model for generating protein structures. The model reduces the complexity of protein modeling while capturing the distribution of natural protein structures in a condensed latent space, making it more efficient and able to generate novel protein structures effectively. The method combines a 3D graph autoencoder and a latent 3D diffusion model to reduce the modeling space of protein structures, and it ensures rotation and reflection equivariance in the autoencoder design and accurately reconstructs intricate connection information in 3D graphs during decoding. The proposed method can effectively generate novel protein backbone structures with high designability and efficiency, which is important for many synthetic biology applications. # Advantage One advantage is the reduction of the complexity of protein modeling while capturing the distribution of natural protein structures in a condensed latent space. This can make protein modeling more efficient and cost-effective. Another advantage is the high designability and efficiency in generating novel protein backbone structures.
Additionally, the proposed method ensures rotation and reflection equivariance in the autoencoder design and accurately reconstructs intricate connection information in 3D graphs during decoding, which improves the accuracy and effectiveness of the model. Finally, the proposed method is more memory-efficient and demonstrates superiority in sampling speed compared to performing diffusion directly in the protein space. # Drawback The limitations of the proposed model include the need to pad input protein backbone structures to a fixed length, the inability to use the corresponding generated sequences, and the fact that the 3D protein autoencoder was not trained on all protein structures predicted by AlphaFold. Requested Changes: For reproducing the results, uploading the source code and experimental files is important. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: Thank you for submitting your manuscript to TMLR. We have carefully considered the feedback provided by three reviewers and have reached a decision regarding your paper. Reviewer 1 has expressed a preference for acceptance, acknowledging the paper's contribution to the discussion around the use of diffusion models for protein structures and the technical novelty of the approach. We noticed that the results are not as strong as recent work by high-profile labs in this area and understood that the lack of access to computational resources could have influenced the outcomes. Reviewer 2 expressed concerns about the fairness of the comparison with previous baselines due to the larger training data potentially providing an undue advantage. They strongly recommend conducting an ablation study on dataset sizes for LatentDiff to ensure a fair assessment of its performance relative to existing methods. We noticed some discussions between Reviewer 2 and the authors, and also observed the strong willingness of the authors to enhance their experiments according to the feedback (however, we understand this will take some time). Reviewer 3 stated that the paper's results are not strong enough compared to previous research in the field, noting that the key idea (latent diffusion) is not very new. However, we also note that it is unfair to require people in academia to have access to huge computational resources to compete with industrial labs in terms of model size and model performance. We also noticed that the application of latent diffusion to protein generation has good implications for the related fields. Based on the reviewers' comments, we believe that we cannot accept your paper in its current form, and we encourage you to revise your paper substantially before making a new submission. The revisions should include: 1) Conduct an ablation study on dataset sizes for LatentDiff to ensure a fair comparison with existing methods. 2) Provide a more in-depth discussion of the novelty of the latent diffusion approach and its contribution to the field. ==================================================
# Multi-Label Node Classification On Graph-Structured Data

Tianqi Zhao *T.Zhao-1@tudelft.nl* Department of Intelligent Systems, Delft University of Technology

Ngan Thi Dong *dong@l3s.de* L3S Research Center, Hannover, Germany

Alan Hanjalic *A.Hanjalic@tudelft.nl* Delft University of Technology

Megha Khosla *M.Khosla@tudelft.nl* Delft University of Technology

Reviewed on OpenReview: *https://openreview.net/forum?id=EZhkV2BjDP*

## Abstract

Graph Neural Networks (GNNs) have shown state-of-the-art improvements in node classification tasks on graphs. While these improvements have been largely demonstrated in a multi-class classification scenario, a more general and realistic scenario in which each node could have multiple labels has so far received little attention. The first challenge in conducting focused studies on multi-label node classification is the limited number of publicly available multi-label graph datasets. Therefore, as our first contribution, we collect and release three real-world biological datasets and develop a multi-label graph generator to generate datasets with tunable properties. While high label similarity (high homophily) is usually attributed to the success of GNNs, we argue that a multi-label scenario does not follow the usual semantics of homophily and heterophily so far defined for a multi-class scenario. As our second contribution, we define homophily and Cross-Class Neighborhood Similarity for the multi-label scenario and provide a thorough analysis of the collected 9 multi-label datasets. Finally, we perform a large-scale comparative study with 8 methods and 9 datasets and analyse the performances of the methods to assess the progress made by the current state of the art in the multi-label node classification scenario. We release our benchmark at https://github.com/Tianqi-py/MLGNC.

## 1 Introduction

Most of the existing works on graph node classification deploying Graph Neural Networks (GNNs) focus on a multi-class classification scenario while ignoring a more general and realistic scenario of multi-label classification, in which each node could have multiple labels. This scenario holds, for example, in protein-protein interaction networks, in which each protein is labeled with multiple protein functions or associated with different diseases, or in social networks, where each user may carry multiple interest labels. In this work, we focus on multi-label node classification on graph-structured data with graph neural networks. For the sake of brevity, in what follows we will refer to multi-label node classification on graphs simply as multi-label node classification. Regarding the approaches to deploying GNNs in a multi-label node classification scenario, a common practice is to transform the classification problem into multiple binary classification problems, one per label (Zhang & Zhou, 2013). In other words, |L| binary classifiers are trained, where each classifier j is responsible for predicting the 0/1 association for the corresponding label ℓj ∈ L. An assumption here, however, is that given the learned feature representations of the nodes, the labels are conditionally independent (Ma et al., 2020). The validity of this assumption cannot be assured in a GNN-based learning approach, as GNNs ignore the label correlation among the neighboring nodes and only focus on node feature aggregation during the representation learning step (Jia & Benson, 2020; Ma et al., 2020).
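As a concrete illustration of this binary-relevance reduction, the following sketch trains one independent 0/1 classifier per label on top of fixed node representations (an illustrative example only; the feature and label matrices are random stand-ins, not data from our benchmark):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# toy stand-ins: learned node representations X and a multi-hot label matrix Y
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))                 # node embeddings
Y = (rng.random((500, 4)) < 0.3).astype(int)   # |L| = 4 labels, ~30% positives each

# binary relevance: one independent binary classifier per label l_j in L
binary_relevance = MultiOutputClassifier(LogisticRegression(max_iter=1000))
binary_relevance.fit(X, Y)

# per-label positive-class probabilities, shape (n_nodes, |L|)
probs = np.column_stack([p[:, 1] for p in binary_relevance.predict_proba(X)])
```

Note that this construction treats the labels as conditionally independent given the representations, which is exactly the assumption questioned above.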
Furthermore, we note that the success of GNNs is widely attributed to feature smoothing over neighborhoods and high label similarity among the neighboring nodes. Graphs with high similarity among labels of neighboring nodes are referred to as having high homophily. Alternatively, in heterophilic graphs, the labels of neighboring nodes usually disagree. Consequently, approaches like H2Gcn (Zhu et al., 2020) have been proposed, which claim a high performance on both homophilic and heterophilic node classification datasets. A subtle point here, however, is that a network with nodes characterized by multiple labels does not obey the crisp separation of homophilic and heterophilic characteristics. As an illustration, consider a friendship network with node labels representing user interests. Each user might share only a very small fraction of their interests with each of their friends, which indicates low homophily in the local neighborhood. Yet their interests/labels could be fully determined by looking at their one-hop neighbors. The network is therefore also not heterophilic in the strict sense, as connected users do have related interests. Consequently, solutions taking into account higher-order neighborhoods to tackle low homophily might not always perform well in multi-label networks. The lack of focused studies on multi-label node classification can also be attributed to the scarcity of available benchmark multi-label graph datasets. As an example, there is a single multi-label classification dataset, namely OGB-Proteins, in the Open Graph Benchmark (OGB) (Hu et al., 2020). Moreover, the OGB-Proteins dataset has around 90% of the nodes unlabeled in the test set. While the OGB leaderboard reflects benchmarking of a large number of methods on OGB-Proteins, the lack of labels in the test set combined with the use of the Area Under the ROC Curve (AUROC) metric leads to overly exaggerated performance scores. In particular, the performance is measured by the average of AUROC scores across the L (L is the total number of labels) binary classification tasks. In such a scenario, the model already achieves a very high score if it outputs a high probability for the negative class for each binary classification task (a small numerical illustration is given at the end of this section). Our Contributions. Our work thoroughly investigates the problem of multi-label node classification with GNNs. **Firstly**, we analyze various characteristics of multi-label graph-structured datasets, including label distribution and label-induced first- and second-order similarities, that influence the performance of the prediction models. We observe that a large number of nodes in the current datasets only have a single label even if the average number of labels per node is relatively high. Moreover, for the popular OGB-Proteins dataset, around 89.38% of the nodes in the test set and 29.12% of the train nodes have no label assigned. **Secondly**, to remedy the lack of datasets, we build a benchmark of multi-label graph-structured datasets with varying structural properties and label-induced similarities. In particular, we curate 3 biological graph datasets using publicly available data. Besides, we develop a synthetic multi-label graph generator with tunable properties. The possibility to tune certain characteristics allows us to compare various learning methods rigorously. **Finally**, we perform a large-scale experimental study evaluating 8 methods from various categories for the node classification task over 9 datasets. We observe that simple baselines like DeepWalk outperform more sophisticated GNNs for several datasets. We present a comprehensive analysis of the performance of different methods based on their own and the dataset's characteristics.

## 2 Background And Related Work

## 2.1 Notations And The Problem Setting

Notations. Let G = (V, E) denote a graph where V = {v1, · · · , vn} is the set of vertices and E represents the set of links/edges among the vertices. We further denote the adjacency matrix of the graph by $A \in \{0, 1\}^{n \times n}$, where $a_{i,j}$ denotes whether there is an edge between $v_i$ and $v_j$. $\mathcal{N}(v)$ represents the immediate neighbors of node $v$ in the graph. Furthermore, let $X = \{x_1, \cdots, x_n\} \in \mathbb{R}^{n \times D}$ and $Y = \{y_1, \cdots, y_n\} \in \{0, 1\}^{n \times C}$ represent the feature and label matrices corresponding to the nodes in V. In the feature matrix and label matrix, the i-th row represents the feature/label vector of node i. Let ℓ(i) denote the set of labels that are assigned to node i. Finally, let F correspond to the feature set and L be the set of all labels.

Problem Setting. In this work, we focus on the multi-label node classification problem on graph-structured data. In particular, we are given a set of labeled and unlabelled nodes such that each node can have more than one label. We are then interested in predicting the labels of the unlabelled nodes. We assume that the training nodes are completely labeled. We deal with the transductive multi-label node classification problem, where the features and graph structure of the test nodes are present during training.

## 2.2 Related Work

Multi-label classification, which assigns multiple labels to each instance simultaneously, finds applications in multiple domains ranging from text classification to protein function prediction. In this work we focus on the case when the input data is graph-structured, for example, a protein-protein interaction network or a social network. For completeness, we also discuss the related works on using graph neural networks for multi-label classification on non-graph-structured data in Section 2.2.2. Several other paradigms of multi-label classification, including extreme multi-label classification (Liu et al., 2017; Song et al., 2022; Wang et al., 2019), partial multi-label classification (Huynh & Elhamifar, 2020; Jain et al., 2017), and multi-label classification with weak supervision (Chu et al., 2018; Hu et al., 2019) (for a complete overview see the recent survey (Liu et al., 2021)), are out of the scope of the current work.

## 2.2.1 Multi-Label Classification On Graph-Structured Data

Recent methods designed for multi-label node classification over graph-structured data can be categorized into four groups utilizing (1) node embedding approaches, (2) convolutional neural networks, (3) graph neural networks, and (4) the combination of label propagation and graph neural networks. Node representation or embedding approaches (Perozzi et al., 2014; Khosla et al., 2019; Ou et al., 2016) usually generate a lookup table of representations such that similar nodes are embedded closer. The learned representations are used as input features for various downstream prediction modules. While different notions of similarity are explored by different approaches, a prominent class of methods is random-walk based, defining similarity among nodes by their co-occurrence frequency in random walks. In this work, we specifically use DeepWalk (Perozzi et al., 2014) as a simple baseline that uses uniform random walks to define node similarity.
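The AUROC pitfall mentioned above can be made concrete with a small, self-contained experiment (our illustrative sketch, not part of the released benchmark; the positive rate is a hypothetical stand-in for one sparse per-label binary task): a mediocre scorer reaches a seemingly strong AUROC under extreme label sparsity, while average precision stays low.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# one sparse binary label, as in one of the L per-label tasks
rng = np.random.default_rng(0)
n = 100_000
y = (rng.random(n) < 0.001).astype(int)   # ~0.1% positives (hypothetical rate)

# a mediocre scorer that ranks positives only somewhat higher than negatives
scores = rng.normal(size=n) + 2.0 * y

print(f"AUROC: {roc_auc_score(y, scores):.3f}")            # ~0.92, looks strong
print(f"AP   : {average_precision_score(y, scores):.3f}")  # far lower under sparsity
```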
Other methods like (Shi et al., 2019; Zhou et al., 2021; Song et al., 2021) use **convolutional neural networks** to first extract node representations by aggregating feature information from a node's local neighborhood. The extracted feature vectors are then fused with label embeddings to generate final node embeddings. Finally, these node embeddings are used as input for the classification model to generate node labels. In this work, we adopt LANC (Zhou et al., 2021) as a baseline from this category, as previous works (Zhou et al., 2021; Song et al., 2021) have shown its superior performance compared to other commonly used baselines for the multi-label node classification task.

Graph neural networks (GNNs), popularised by the graph convolution network (Kipf & Welling, 2016) and its variants, compute node representations by recursive aggregation and transformation of the feature representations of a node's neighbors, which are then passed to a classification module. Let $\mathbf{x}_i^{(k)}$ be the feature representation of node $i$ at layer $k$ and $\mathcal{N}(i)$ denote the set of its 1-hop neighbors. The $k$-th layer of a graph convolutional operation can then be described as

$$\mathbf{z}_{i}^{(k)}=\text{AGGREGATE}\left(\left\{\mathbf{x}_{i}^{(k-1)},\left\{\mathbf{x}_{j}^{(k-1)}\mid j\in\mathcal{N}(i)\right\}\right\}\right),\quad\mathbf{x}_{i}^{(k)}=\text{TRANSFORM}\left(\mathbf{z}_{i}^{(k)}\right).$$

For multi-label node classification, a sigmoid layer is employed as the last layer to predict the class probabilities: $\hat{\mathbf{y}} \leftarrow \mathrm{sigmoid}(\mathbf{z}_i^{(L)}\theta)$, where $\theta$ corresponds to the learnable weight matrix in the classification module (a minimal sketch of this scheme is given at the end of this subsection). GNN models mainly differ in the implementation of the aggregation layer. The simplest model is the graph convolution network (GCN) (Kipf & Welling, 2016), which employs degree-weighted aggregation over neighborhood features. Gat (Veličković et al., 2018) employs several stacked Graph Attention Layers, which allow nodes to attend over their neighborhoods' features. GraphSage (Hamilton et al., 2017) follows a sample-and-aggregate approach in which only a random sample of the neighborhood is used for the feature aggregation step. GNNs, in general, show better performance on highly homophilic graphs in which the connected nodes tend to share the same labels. Recent approaches like H2Gcn (Zhu et al., 2020) show improvement on heterophilic graphs (in the multi-class setting). Specifically, it separates the information aggregated from the neighborhood from that of the ego node. Further, it utilizes higher-order neighborhood information to learn informative node representations.

Label propagation with GNNs. Prior to the advent of GNNs, label propagation (LPA) algorithms constituted popular approaches for the task of node classification. Both LPA and GNNs are based on message passing. While GNNs propagate and transform node features, LPA propagates node label information along the edges of the graph to predict the label distribution of the unlabelled nodes. A few recent works (Yang et al., 2021; Wang & Leskovec, 2020) have explored the possibilities of combining LPA and GNNs. (Yang et al., 2021) employs knowledge distillation, in which the trained GNN model (teacher model) is distilled into a combination of a GNN and a parameterized label propagation module. GCN-LPA (Wang & Leskovec, 2020) utilizes LPA as a regularizer to assist the GCN in learning proper edge weights that lead to improved classification performance. Different from our work, both of the above works implicitly assume a multi-class setting. Moreover, (Yang et al., 2021) focuses on the interpretable extraction of knowledge from a trained GNN and can only discover the patterns learned by the teacher GNN.
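The following sketch makes the aggregate-transform scheme and the per-label sigmoid output concrete (a minimal illustration using mean aggregation and random toy inputs; concrete GNNs such as Gcn, Gat, and GraphSage differ mainly in the AGGREGATE step, and the dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class MeanAggregateTransform(nn.Module):
    """One AGGREGATE/TRANSFORM layer with degree-normalized mean aggregation."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.transform = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: dense (n, n) adjacency with self-loops, x: (n, in_dim) features
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        z = (adj @ x) / deg                    # AGGREGATE over N(i) and i itself
        return torch.relu(self.transform(z))  # TRANSFORM

# multi-label head: per-label sigmoid, trained with binary cross-entropy
n, d, c = 100, 16, 15
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.T + torch.eye(n)) > 0).float()   # symmetrize, add self-loops
layer, head = MeanAggregateTransform(d, 32), nn.Linear(32, c)
logits = head(layer(x, adj))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (n, c)).float())
```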
## 2.2.2 Multi-Label Classification On Non-Graph-Structured Data Using Gnns

There has also been an increasing trend to use graph neural networks to exploit the implicit relations between the data and labels in multi-label classification problems on non-graph data. For example, (Saini et al., 2021) models the problem of extreme classification as that of link prediction in a document-label bipartite graph and uses graph neural networks together with the attention mechanism to learn superior node representations by performing graph convolutions over neighborhoods of various orders. ML-GCN (Shi et al., 2019) generates representations for the images using a CNN and extracts label correlation by constructing a label-label graph from the label co-occurrence matrix. LaMP (Lanchantin et al., 2019) treats labels as nodes on a label-interaction graph and computes the hidden representation of each label node conditioned on the input using attention-based neural message passing. Likewise, for the task of multi-label image recognition, (Chen et al., 2019) builds a directed graph over the object labels, where each label node is represented by word embeddings of the label, and a GCN is employed to map this label graph into a set of inter-dependent object classifiers. The above works and many more like (Li et al., 2021; Zong & Sun, 2020; Ma et al., 2021a; Zheng et al., 2022; Shi et al., 2020; Xu et al., 2021; Cheng et al., 2021; Pal et al., 2020) are not part of the current study as they are developed for non-graph-structured data.

## 3 A Detailed Analysis Of Existing And New Datasets

We commence by analyzing various properties of existing multi-label datasets, including label distributions, label similarities, and cross-class neighborhood similarity (CCNS), which could affect the performance of prediction models. In Section 3.2 we further curate new real-world biological datasets which to some extent improve the representativeness of multi-label graph datasets. Finally, in Section 4 we propose our synthetic multi-label graph generator to generate multi-label graph datasets with tunable properties. The possibility to control specific influencing properties of the dataset allows us to benchmark various learning methods effectively. We will need the following quantification of label similarity in multi-label datasets.

Label homophily. The performance of GNNs is usually argued in terms of label homophily, which quantifies the similarity among the neighboring nodes in the graph. In particular, label homophily is defined in (Zhu et al., 2020) as the fraction of the homophilic edges in the graph, where an edge is considered homophilic if it connects two nodes with the same label. This definition cannot be directly used for multi-label graph datasets, as each node can have more than one label and it is rare in multi-label datasets that the whole label sets of two connected nodes are the same. Usually, two nodes share a part of their labels. We, therefore, propose a new metric to measure the label homophily h of multi-label graph datasets as follows.
Definition 1 *Given a multi-label graph G, the label homophily h of G is defined as the average of the Jaccard similarity of the label sets of all connected nodes in the graph:*

$$h=\frac{1}{|{\mathcal{E}}|}\sum_{(i,j)\in{\mathcal{E}}}\frac{|\ell(i)\cap\ell(j)|}{|\ell(i)\cup\ell(j)|}.$$

Label homophily is a first-order label-induced similarity in that it quantifies the similarity among neighboring nodes based on their label distributions.

Cross-Class Neighborhood Similarity for multi-label graphs. Going beyond the label similarity among neighboring nodes, we consider a second-order label-induced metric which quantifies the similarity among the neighborhoods of any two nodes. Ma et al. (2021b) introduced Cross-Class Neighborhood Similarity (CCNS) for multi-class graphs. Using CCNS, (Ma et al., 2021b) attributes the improved performance of GNNs to the higher similarity among neighborhood label distributions of nodes of the same class as compared to different classes. We extend their proposed CCNS measure to analyze multi-label datasets. Given two classes c and c′, the CCNS score measures the similarity in the label distributions of the neighborhoods of nodes belonging to c and c′ and is defined as follows. One can visualize the CCNS scores between all class pairs in a C × C matrix, where C is the total number of classes. Common GNNs are expected to perform better on datasets with higher scores on the diagonal than on the off-diagonal elements of the corresponding CCNS matrix.

Definition 2 *Given a multi-label graph G and the set of node labels Y for all nodes, the multi-label cross-class neighborhood similarity between classes* $c, c' \in \mathcal{C}$ *is given by*

$$s(c,c^{\prime})=\frac{1}{|\mathcal{V}_{c}||\mathcal{V}_{c^{\prime}}|}\sum_{i\in\mathcal{V}_{c},j\in\mathcal{V}_{c^{\prime}},i\neq j}\frac{1}{|\ell(i)||\ell(j)|}\cos(\mathbf{d}_{i},\mathbf{d}_{j}),\tag{1}$$

where $\mathcal{V}_c = \{i \mid c \in \ell(i)\}$ is the set of nodes with c as one of their labels. The vector $\mathbf{d}_i \in \mathbb{R}^{C}$ corresponds to the empirical histogram (over $|C|$ classes) of node i's neighbors' labels, i.e., the c-th entry of $\mathbf{d}_i$ corresponds to the number of nodes in $\mathcal{N}(i)$ that have c as one of their labels, and the function $\cos(\cdot, \cdot)$ measures the cosine similarity and is defined as

$$\cos(\mathbf{d}_{i},\mathbf{d}_{j})={\frac{\mathbf{d}_{i}\cdot\mathbf{d}_{j}}{||\mathbf{d}_{i}||\,||\mathbf{d}_{j}||}}.$$

Note that as a node in a multi-label dataset could belong to multiple classes, we exclude the possibility of comparing a node to itself. By introducing the factor $|\ell(i)||\ell(j)|$ in the denominator, we are able to normalize the contribution of multi-labeled nodes over several class pairs.
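Definitions 1 and 2 translate directly into code; the following unoptimized sketch computes both measures (our illustration, which the released benchmark may implement differently; the dense O(n²) CCNS loop is only meant for small graphs, and nodes with empty neighbor histograms are skipped since their cosine is undefined):

```python
import numpy as np

def label_homophily(edges, labels):
    """Definition 1: average Jaccard similarity of label sets over all edges.
    edges: iterable of (i, j); labels: list of sets with labels[i] = l(i),
    assuming every node has at least one label."""
    return float(np.mean([len(labels[i] & labels[j]) / len(labels[i] | labels[j])
                          for i, j in edges]))

def multilabel_ccns(neighbors, labels, num_classes):
    """Definition 2: C x C multi-label cross-class neighborhood similarity.
    neighbors: list of neighbor-index lists."""
    n = len(labels)
    d = np.zeros((n, num_classes))
    for i in range(n):
        for j in neighbors[i]:
            for c in labels[j]:
                d[i, c] += 1.0                     # neighbor-label histogram d_i
    norms = np.linalg.norm(d, axis=1)
    members = [[i for i in range(n) if c in labels[i]] for c in range(num_classes)]
    s = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        for cp in range(num_classes):
            total = 0.0
            for i in members[c]:
                for j in members[cp]:
                    if i == j or norms[i] == 0 or norms[j] == 0:
                        continue                   # skip self-pairs and isolated nodes
                    cos = d[i] @ d[j] / (norms[i] * norms[j])
                    total += cos / (len(labels[i]) * len(labels[j]))
            denom = len(members[c]) * len(members[cp])
            s[c, cp] = total / denom if denom else 0.0
    return s

# tiny example: two agreeing edges and one disjoint edge -> h = (1 + 1 + 0) / 3
labels = [{0, 1}, {0, 1}, {2}]
print(label_homophily([(0, 1), (1, 0), (0, 2)], labels))
```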
## 3.1 Existing Datasets

We start by analyzing the four popular multi-label node classification datasets: (i) BlogCat (Tang & Liu, 2009), in which nodes represent bloggers and edges their relationships, and the labels denote the social groups a blogger is a part of; (ii) Yelp (Zeng et al., 2019), in which nodes correspond to customer reviews and edges to their friendships, with node labels representing the types of businesses; (iii) OGB-Proteins (Hu et al., 2020), in which nodes represent proteins and edges indicate different types of biologically meaningful associations between the proteins, such as physical interactions, co-expression, or homology (Consortium, 2018; D et al., 2019), with the labels corresponding to protein functions; and (iv) DBLP (Akujuobi et al., 2019), in which nodes represent authors and edges the co-authorship between the authors, and the labels indicate the research areas of the authors. We report in Table 1 the characteristics of these datasets, including the label homophily. In the following, we discuss in detail the various characteristics of these datasets along with their limitations for the effective evaluation of multi-label classification.

Table 1: Dataset statistics. |V| and |E| denote the number of nodes and edges in the graph. |F| is the dimension of the node features. clus and r_homo denote the clustering coefficient and the label homophily. C indicates the total number of labels in the graph. ℓ_med, ℓ_mean, and ℓ_max specify the median, mean, and max values of the number of labels of a node. '25%', '50%', and '75%' correspond to the 25th, 50th, and 75th percentiles of the sorted list of the number of labels of a node. "N.A." means the corresponding characteristic is not available in the graph.

| Dataset      | \|V\| | \|E\| | \|F\| | clus | r_homo | C   | ℓ_med | ℓ_mean | ℓ_max | 25% | 50% | 75% |
|--------------|-------|-------|-------|------|--------|-----|-------|--------|-------|-----|-----|-----|
| BlogCat      | 10K   | 333K  | N.A.  | 0.46 | 0.10   | 39  | 1     | 1.40   | 11    | 1   | 1   | 2   |
| Yelp         | 716K  | 7.34M | 300   | 0.09 | 0.22   | 100 | 6     | 9.44   | 97    | 3   | 6   | 11  |
| OGB-Proteins | 132K  | 39M   | 8     | 0.28 | 0.15   | 112 | 5     | 12.75  | 100   | 0   | 5   | 20  |
| DBLP         | 28K   | 68K   | 300   | 0.61 | 0.76   | 4   | 1     | 1.18   | 4     | 1   | 1   | 1   |

![5_image_0.png](5_image_0.png)

Figure 1: Label distributions. In BlogCat, the majority of the nodes have one label. In OGB-Proteins, around 41% of the total nodes have no labels, and only three nodes have the maximum number of 100 labels.

Skewed label distributions. Figure 1 illustrates the label distributions in the four datasets. Quantitatively, 72.34% of the nodes in BlogCat have only one label, while the most-labeled nodes are assigned 11 labels. Yelp has a total of 100 labels; the most-labeled data points have 97 labels, whereas over 50% of the nodes have 5 or fewer labels. Nevertheless, Yelp exhibits a high multi-label character, with 75% of the nodes having 3 or more labels. OGB-Proteins is an extreme case in which 40.27% of the nodes do not have any label. DBLP is the dataset with the highest portion of nodes with single labels, with the exact percentage being 85.4%.

Issue in evaluation using AUROC scores under high label sparsity. Another so-far-unreported issue in multi-label datasets is the unlabeled data. In OGB-Proteins, 40.27% of the nodes do not have labels. Moreover, 89.4% of the test nodes are unlabelled. More worrying is the use of the AUROC metric in the OGB leaderboard to benchmark methods for multi-label classification. In particular, a model that assigns "No Label" to each node (i.e., predicts the negative class for each of the L independent binary classification tasks) will already show a high AUROC score. We in fact observed that increasing the number of training epochs (which encourages the model to decrease the training loss by predicting the negative class) increased the AUROC score, whereas other metrics like AP or F1 score dropped or stayed unchanged.

Cross-class neighborhood similarity. In Figure 2 we visualize the cross-class neighborhood similarity matrix for DBLP and BlogCat.
The cells on the diagonal reflect the intra-class neighborhood similarity, whereas the other cells indicate the inter-class neighborhood similarity computed using Equation 1. The contrast in Figure 2a means that nodes from the same class tend to have similar label distributions in their neighborhoods, while nodes from different classes have rather different label distributions in their neighborhoods. We will later see in the experimental section that GNNs indeed benefit from this characteristic to correctly identify the nodes of the same classes in DBLP. On the contrary, the intra- and inter-class similarities are more alike in BlogCat, making it difficult for GNNs to classify the nodes into their corresponding classes.

![6_image_0.png](6_image_0.png)

Figure 2: Cross-class Neighborhood Similarity in real-world datasets

## 3.2 New Biological Interaction Datasets

Motivated by the natural applicability of the multi-label classification task to various biological datasets, and to improve the representativeness of available datasets, we collect three real-world biological datasets corresponding to different multi-label classification problems: the PCG dataset for protein phenotype prediction, and the HumLoc and EukLoc datasets for the human and eukaryote protein subcellular location prediction tasks, respectively. For each dataset, we build a graph in which each protein is modeled as a node. The node label is the corresponding protein's label. An edge represents a known interaction between two proteins retrieved from a public database. The detailed pre-processing steps and the original data sources are discussed in Appendix A.1.1, A.1.2, and A.1.3. Table 2 presents an overview of the three datasets' characteristics.

Table 2: Statistics for new datasets. The column notations are the same as in Table 1.

| Dataset | \|V\| | \|E\| | \|F\| | clus | r_homo | C  | ℓ_med | ℓ_mean | ℓ_max | 25% | 50% | 75% |
|---------|-------|-------|-------|------|--------|----|-------|--------|-------|-----|-----|-----|
| PCG     | 3K    | 37K   | 32    | 0.34 | 0.17   | 15 | 1     | 1.93   | 12    | 1   | 1   | 2   |
| HumLoc  | 3.10K | 18K   | 32    | 0.13 | 0.42   | 14 | 1     | 1.19   | 4     | 1   | 1   | 1   |
| EukLoc  | 7.70K | 13K   | 32    | 0.14 | 0.46   | 22 | 1     | 1.15   | 4     | 1   | 1   | 1   |

While most of the existing datasets have very low homophily, HumLoc and EukLoc show higher homophily. Moreover, these datasets improve the representativeness in terms of varying graph structure (reflected in the computed clustering coefficient) and in node, edge, and feature sizes. On the downside, these datasets also show a similarly low multi-label character, with the majority of nodes in these datasets still having a single label. Among the three datasets, PCG shows a somewhat more balanced label distribution (see Figure 3) as compared to the other two. Figure 4 provides the cross-class neighborhood similarity scores. All three datasets show different patterns according to the CCNS measure, which is desirable for analysing the differences in the methods' performance. While in PCG we see overall high CCNS scores, the difference between the inter- and intra-class similarities is not prominent. HumLoc shows a slightly more contrasting intra- and inter-class neighborhood similarity. EukLoc, on the other hand, shows very small neighborhood label similarities for nodes of the same or different classes.

Figure 3: Label distributions in the biological datasets (PCG, HumLoc, EukLoc; x-axis: label count per node). The majority of the nodes in all datasets have one label.
![7_image_0.png](7_image_0.png)

Figure 4: Cross-class Neighborhood Similarity in real-world datasets and the proposed biological datasets

## 4 Multi-Label Graph Generator Framework

In the previous sections, we analyzed various real-world dataset properties which could influence a method's performance. We now develop a multi-label graph generator that will allow us to build datasets with tunable properties for a holistic evaluation. With our proposed framework we can build datasets with *high multi-label* character, varying feature quality, varying label homophily, and varying CCNS. We now describe the two main steps of our multi-label graph generator.

Multi-label generator. In the first step, we generate a multi-label dataset using Mldatagen (Tomás et al., 2014). We start by fixing the total number of labels and features. We then construct a hypersphere $\mathcal{H} \subset \mathbb{R}^{|F|}$, centered at the origin and with unit radius. Corresponding to each label in the set L, we then generate a smaller hypersphere with a random radius, under the condition that it is contained in $\mathcal{H}$. We now start populating the smaller hyperspheres with randomly generated datapoints with |F| dimensions. Note that each datapoint may lie in a number of overlapping hyperspheres. The labels of a datapoint then correspond to the hyperspheres it lies in.

Graph generator. Having constructed the multi-label dataset, we now construct edges between the data points by using a social distance attachment model (Boguná et al., 2004). In particular, for two given datapoints (nodes) i and j, their corresponding feature vectors are their coordinates, given by $\mathbf{x}_i$ and $\mathbf{x}_j$ respectively. The corresponding label vectors are denoted by $\mathbf{y}_i$ and $\mathbf{y}_j$. We denote the Hamming distance between the label vectors of nodes i and j by $d(\mathbf{y}_i, \mathbf{y}_j)$. We then construct an edge $(i, j)$ between datapoints (nodes) i and j with probability given by

$$p_{ij}={\frac{1}{1+[b^{-1}d(\mathbf{y}_{i},\mathbf{y}_{j})]^{\alpha}}}\tag{2}$$

where α is a homophily parameter and b is the characteristic distance at which $p_{ij} = \frac{1}{2}$. Note that the edge density is dictated by both parameters α and b. A larger b results in denser graphs. A larger homophily parameter α assigns a higher connection probability to node pairs with shorter distances, i.e., nodes with similar labels. In particular, this is a random geometric graph model which, in the limit of large system size (number of nodes) and high homophily (large α), leads to sparsity, a non-trivial clustering coefficient, and positive degree assortativity (Talaga & Nowak, 2020), properties exhibited by real-world networks. By using different combinations of values of α and b, we can control the connection probability and, further, the label homophily of the generated synthetic graphs. We perform an extensive empirical analysis to study the relationship between α, b, and the label homophily of the generated datasets. We provide detailed instructions for using our synthetic data generator and our empirical analysis in Appendix A.2. A minimal sketch of the edge-sampling step is given below.
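The sketch samples edges according to Equation 2 (illustrative code, not the released generator; we assume the Hamming distance is normalized by the number of labels so that b is on a [0, 1] scale, and the parameter values shown are those reported for Synthetic1):

```python
import numpy as np

def sample_edges(y, alpha, b, seed=0):
    """Social-distance-attachment edge sampling (Equation 2).

    y: (n, C) binary label matrix; alpha: homophily parameter;
    b: characteristic distance at which p_ij = 1/2. Identical label
    vectors (d = 0) connect with probability 1.
    """
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    edges = []
    for i in range(n):
        # normalized Hamming distance between y_i and all later label vectors
        d = np.abs(y[i + 1:] - y[i]).mean(axis=1)
        p = 1.0 / (1.0 + (d / b) ** alpha)
        for off in np.flatnonzero(rng.random(n - i - 1) < p):
            edges.append((i, i + 1 + off))
    return edges

# toy usage: 100 nodes, 20 labels, the Synthetic1 parameter values
rng = np.random.default_rng(1)
labels = (rng.random((100, 20)) < 0.2).astype(float)
edges = sample_edges(labels, alpha=8.8, b=0.12)
```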
Synthetic datasets with fixed homophily and varying feature quality. For our experimental analysis, we generate synthetic datasets with 3K nodes, 10 features, and a total of 20 labels.

![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) ![8_image_3.png](8_image_3.png) ![8_image_4.png](8_image_4.png) ![8_image_5.png](8_image_5.png)

Figure 5: Cross-class Neighborhood Similarity in the hypersphere datasets with varying label homophily. Panel (a) shows the label distribution in the synthetic dataset, which is more balanced and exhibits a relatively high multi-label character, with about 50% of the nodes having multiple labels; panels (b)-(f) show the cross-class neighborhood similarity in graphs with label homophily 0.2, 0.4, 0.6, 0.8, and 1.0, respectively.

Towards analyzing the variation in the methods' performance with variation in *homophily* and *feature quality*, we first constructed a dataset with a fixed homophily of 0.377 (using α = 8.8, b = 0.12) and an edge set of size 1M. We refer to this dataset as Synthetic1. We create five variants of Synthetic1 with varying feature quality. In particular, we add 10 random features for every node, which we refer to as *irrelevant* features. We then generate its variants by removing original features such that the ratio of the number of original to that of irrelevant features varies as in {1, 0.8, 0.5, 0.2, 0}. From the label distribution plot in Figure 5a, we observe that the dataset is more multi-label than the real-world datasets because a higher number of nodes now have multiple labels.

Synthetic datasets with varying homophily and CCNS. We also use 5 different pairs of α and b and the same multi-label data from the first step to construct 5 synthetic graphs with label homophily (rounded up) in {0.2, 0.4, 0.6, 0.8, 1.0} to conduct the experiment in which we test the influence of the label homophily on the performance of the node classification methods. The detailed statistics of the synthetic datasets are provided in Appendix A.2.1 in Table 5. Figure 5 visualizes the cross-class neighborhood similarity in the five hypersphere datasets with varying homophily levels (homophily varies in {0.2, 0.4, 0.6, 0.8, 1.0}). In the synthetic graphs with low label homophily, the intra- and inter-class neighborhood similarities show no significant differences, i.e., the nodes from different classes have similar label distributions in their neighborhoods. We overall observe a high absolute value of neighborhood similarity. The reason is that, similar to BlogCat (which also shows overall high CCNS scores), these synthetic low-homophily graphs are highly connected and have a high average degree. As the label homophily gets higher, the contrast between the intra- and inter-class similarity becomes more significant.

## 5 Experiments

From our dataset analysis in the previous sections we observe that, unlike commonly used multi-class datasets, multi-label datasets usually have low label homophily and a more varied cross-class neighborhood similarity. Moreover, similar to the case of multi-class datasets, node features might not always be available (as in BlogCat) or might be noisy. In this section, we perform a large-scale empirical study comparing 8 methods over 7 real-world multi-label datasets and 2 sets of synthetic datasets with varying homophily and feature quality. Our experiments are designed to reveal and understand (i) the properties of datasets that favor certain methods over others and (ii) the effect of varying feature quality and label homophily on method performance. The training and hyperparameter settings for each model are summarized in Appendix A.3 in Tables 7 and 8. Our code is available at https://github.com/Tianqi-py/MLGNC.

Datasets.
We employ 7 real-world datasets, namely BlogCat, Yelp, OGB-Proteins, DBLP, PCG, HumLoc, and EukLoc, and 2 sets of synthetic datasets with varying homophily and feature quality. These datasets are already described in Section 3. For all datasets except OGB-Proteins, HumLoc, and EukLoc, we generate 3 random training, validation, and test splits with 60%, 20%, and 20% of the data. For OGB-Proteins, HumLoc, and EukLoc, we follow the predefined data splits from (Hu et al., 2020), (Shen & Chou, 2007), and (Chou & Shen, 2007), respectively. As BlogCat has no given node features, we use an identity matrix as the input feature matrix.

Compared Methods. For a holistic evaluation we include four classes of compared methods: (i) simple methods, which include the Multilayer Perceptron (Mlp), which only uses node features and ignores the graph structure, and DeepWalk, which only uses the graph structure and ignores the node features; (ii) convolutional neural network based methods, which employ convolutional operations to extract representations from a node's local neighborhood and merge them with label embeddings for the final classification; we choose LANC as a baseline from this category, as previous works (Zhou et al., 2021; Song et al., 2021) have shown its superior performance; (iii) graph neural networks, including (a) Gcn, Gat, and GraphSage, which are known to perform well for graphs with high label homophily, and (b) H2Gcn, which is designed to perform well both on homophilic and heterophilic graphs; and (iv) GCN-LPA, which combines label propagation and GCN for node classification. All these methods are also discussed in Section 2.

Evaluation Metrics. We report the average micro- and macro-F1 scores, the macro-averaged AUC-ROC score, the macro-averaged average precision score, and the standard deviation over the three random splits. Due to space constraints, we report average precision (AP) in the main paper; all detailed results are available in Tables 9, 10, and 11 in Appendix A.4. Our choice of using AP over AUROC as the metric is also motivated in Appendix A.5.

## 6 Results And Discussion

Table 3: Mean performance scores (Average Precision) on the real-world datasets. The best score is marked in bold. The second-best score is marked with an underline. "OOM" denotes the "Out Of Memory" error.

| Method    | BlogCat | Yelp  | OGB-Proteins | DBLP  | PCG   | HumLoc | EukLoc |
|-----------|---------|-------|--------------|-------|-------|--------|--------|
| Mlp       | 0.043   | 0.096 | 0.026        | 0.350 | 0.148 | 0.170  | 0.120  |
| DeepWalk  | 0.190   | 0.096 | 0.044        | 0.585 | 0.229 | 0.186  | 0.076  |
| LANC      | 0.050   | OOM   | 0.045        | 0.836 | 0.185 | 0.132  | 0.062  |
| Gcn       | 0.037   | 0.131 | 0.054        | 0.893 | 0.210 | 0.252  | 0.152  |
| Gat       | 0.041   | 0.150 | 0.021        | 0.829 | 0.168 | 0.238  | 0.136  |
| GraphSage | 0.045   | 0.251 | 0.027        | 0.868 | 0.185 | 0.234  | 0.124  |
| H2Gcn     | 0.039   | 0.226 | 0.036        | 0.858 | 0.192 | 0.172  | 0.134  |
| GCN-LPA   | 0.043   | 0.116 | 0.023        | 0.801 | 0.167 | 0.150  | 0.075  |

## 6.1 Results On Real-World Datasets

In Table 3 we provide the results for the 7 real-world datasets. In general, on datasets with low label homophily, such as BlogCat and PCG, node representation or embedding learning methods such as DeepWalk outperform the more sophisticated GNN-based methods and the simple Mlp baseline. Classical GNNs show better performance on datasets characterized by high label homophily.
H2Gcn, which is designed for multi-class datasets with heterophily, does not show a performance improvement over classical GNNs on multi-label graph datasets with low homophily. Likewise, the method that combines label propagation with GNNs achieves only comparable results to classical GNNs.

BlogCat. For BlogCat, all GNN approaches as well as LANC and GCN-LPA obtain scores close to that of Mlp, which does not use any graph structure. Notably, Mlp uses the identity matrix as input features, which in principle provides no useful information. The corresponding scores can be seen as the result of a random assignment. As a method unifying label and feature propagation, GCN-LPA does not show an improvement on BlogCat compared to other baselines. This is because GCN-LPA uses label propagation only to adjust the edge weights and still generates embeddings by aggregating features over the weighted graph, while DeepWalk takes advantage of the informative topological structure of the graph and achieves the best performance.

Yelp, OGB-Proteins and DBLP. Yelp has low label homophily and a low clustering coefficient but a more balanced label distribution as compared to other real-world datasets, so approaches that are designed specifically for low-homophily graphs and are capable of preserving information from both low- and high-order neighborhoods are expected to perform better on this dataset. Among the GNN-based baselines, H2Gcn, which has shown improvements on low-homophily multi-class datasets, outperforms Gcn, Gat, and GCN-LPA but is still outperformed by GraphSage. The use of CNNs in LANC to aggregate neighborhood features leads to excessive memory utilization for a graph with a very high maximum degree. This led to the out-of-memory error for Yelp. All methods perform poorly on OGB-Proteins, with Gcn and LANC slightly outperforming the others. Similar to BlogCat, it also has low homophily, which does not provide GNNs with any additional advantage. In particular, DeepWalk and H2Gcn outperform Gat and GraphSage as well as the more sophisticated GCN-LPA. It is worth noting that the previously reported results on the OGB-Proteins leaderboard (Hu et al., 2020) are significantly exaggerated due to the utilization of the AUROC metric in conjunction with excessive model training. When using the metric Average Precision, the scores we obtain are much lower than those reported on the leaderboard. As a co-authorship dataset, DBLP has the highest label homophily and the largest portion of nodes with a single label among all the real-world datasets. As shown in Figure 2a in Section 3.1, the inter-class similarity is much weaker than the intra-class similarity. Besides, the high clustering coefficient indicates that the local neighborhood is highly connected. All of these factors further justify the best performance shown by Gcn. Moreover, the relatively poor performances of Mlp and DeepWalk indicate that the features or the structure alone are not sufficient for estimating node labels.

PCG, HumLoc and EukLoc. If the features are highly predictive of the labels, the simple Mlp baseline using only feature information would be competitive.
On the datasets where the features are highly correlated with the assigned labels (evident from the relatively better performance of Mlp), i.e., the biological interaction datasets HumLoc and EukLoc (shown in Table 3), GNNs and GCN-LPA tend to perform better than DeepWalk and LANC, which only utilize the graph structure and features from the direct neighborhood. PCG exhibits low label homophily and a high clustering coefficient. Consistent with the observations on other low-homophily datasets, DeepWalk outperforms the other methods on PCG too. H2Gcn, on the other hand, is outperformed by simpler GNN baselines like Gcn.

## 6.2 Results On Synthetic Datasets

Table 4 provides results for the synthetic datasets with varying feature quality and label homophily. We provide a detailed analysis and argue about performance differences in the following sections.

Table 4: Average Precision (mean) on the synthetic datasets with varying levels of feature quality and homophily. r_ori_feat and r_homo refer to the fraction of original features and the homophily parameter value, respectively.

| Method    | r_ori_feat |       |       |       |       | r_homo |       |       |       |       |
|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
|           | 0.0   | 0.2   | 0.5   | 0.8   | 1.0   | 0.2   | 0.4   | 0.6   | 0.8   | 1.0   |
| Mlp       | 0.172 | 0.187 | 0.220 | 0.277 | 0.343 | 0.343 | 0.343 | 0.343 | 0.343 | 0.343 |
| DeepWalk  | 0.487 | 0.487 | 0.487 | 0.487 | 0.487 | 0.181 | 0.522 | 0.813 | 0.869 | 0.552 |
| LANC      | 0.337 | 0.342 | 0.365 | 0.353 | 0.391 | 0.190 | 0.380 | 0.434 | 0.481 | 0.629 |
| Gcn       | 0.313 | 0.316 | 0.311 | 0.301 | 0.337 | 0.261 | 0.343 | 0.388 | 0.450 | 0.493 |
| Gat       | 0.311 | 0.339 | 0.329 | 0.338 | 0.360 | 0.172 | 0.359 | 0.390 | 0.428 | 0.439 |
| GraphSage | 0.300 | 0.328 | 0.377 | 0.393 | 0.430 | 0.289 | 0.426 | 0.458 | 0.533 | 0.553 |
| H2Gcn     | 0.376 | 0.401 | 0.427 | 0.442 | 0.467 | 0.297 | 0.484 | 0.512 | 0.572 | 0.652 |
| GCN-LPA   | 0.337 | 0.333 | 0.368 | 0.363 | 0.391 | 0.170 | 0.408 | 0.495 | 0.604 | 0.583 |

## 6.2.1 Effect Of Varying Feature Quality

As the simple Mlp baseline and the GNN-based models use features as input, we expect them to be sensitive to varying feature quality. As DeepWalk uses the graph structure alone, its performance would not be affected by the varying feature quality. To further validate our hypothesis and test the robustness of the methods to feature quality, we compare the method performances on variants of the generated Synthetic1 dataset (see the sketch below for how such variants can be constructed). Specifically, we vary the ratio of the original to the irrelevant features as in {0, 0.2, 0.5, 0.8, 1.0}. As shown in Table 4, under all levels of feature quality, the performance of DeepWalk does not change, as it generates representations for the nodes solely from the graph structure. Mlp is unsurprisingly the method most sensitive to the varying feature quality because it completely ignores the graph structure. LANC is also sensitive to the change of the feature quality, as it extracts feature vectors from the local neighborhood by performing convolutional operations on the stacked feature matrix of the direct neighbors. The Synthetic1 dataset used in this experiment has a label homophily of 0.3768, which is relatively high for multi-label datasets. Surprisingly, GCN-LPA, which employs the label information, performs only a little better than GraphSage at the feature-to-noise ratios of 0 and 0.2. H2Gcn, on the other hand, outperforms all GNN-based baselines and GCN-LPA for all levels of feature quality.
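The feature-quality variants evaluated in the left half of Table 4 can be constructed as in the following sketch (an illustrative reconstruction under the assumption that each variant keeps a fraction of the 10 original feature columns and appends 10 random ones; this is not the released generator code):

```python
import numpy as np

def feature_quality_variant(x_orig, ratio, n_irrelevant=10, seed=0):
    """Keep `ratio` of the original feature columns and append
    `n_irrelevant` random (irrelevant) features.

    With 10 original features, ratio in {0, 0.2, 0.5, 0.8, 1.0}
    corresponds to the settings in Table 4.
    """
    rng = np.random.default_rng(seed)
    n, d = x_orig.shape
    keep = rng.choice(d, size=int(round(ratio * d)), replace=False)
    noise = rng.normal(size=(n, n_irrelevant))
    return np.concatenate([x_orig[:, keep], noise], axis=1)

# e.g., the ratio-0.5 variant of a 3K x 10 feature matrix
x = np.random.default_rng(2).normal(size=(3000, 10))
x_half = feature_quality_variant(x, ratio=0.5)   # shape (3000, 15)
```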
## 6.2.2 Effect Of Varying Label Homophily.

In this subsection, we test the robustness of the methods to varying homophily and further argue that low label homophily has different semantics on multi-label graphs than it does on multi-class datasets. Specifically, we vary the label homophily over {0.2, 0.4, 0.6, 0.8, 1.0}. As shown in Table 4, the performance of Mlp does not change under different levels of label homophily, owing to the fact that Mlp only uses the input features. H2Gcn is a method that has been shown to perform well on heterophilic multi-class datasets. In the multi-label scenario, it exhibits better performance than the other GNN methods but is outperformed by the simple Mlp baseline at a label homophily of 0.2. On the other hand, with most of the attention drawn to developing new, complicated methods for the node classification task, we observe that simple baselines such as DeepWalk outperform standard GNNs in several scenarios. On the synthetic datasets with label homophily of 0.4, 0.6 and 0.8, DeepWalk is the best-performing method. As shown in Table 4, the performance of DeepWalk drops when the label homophily is 1.0. However, we want to emphasize that this is because we fixed the same walk length for DeepWalk across all levels of label homophily, and an improvement could be obtained with suitable hyperparameter tuning. As mentioned in Section 4, the intra-class similarity is significantly stronger than the inter-class similarity in the synthetic graphs with higher label homophily, which helps the GNN-based models to better distinguish nodes from different classes and thus achieve better results on the node classification task.

## 7 Conclusion

We investigate the problem of multi-label node classification on graph-structured data. Filling in the gaps in the current literature, we (i) perform an in-depth analysis of the commonly used benchmark datasets, create and release several real-world multi-label datasets and a graph generator model to produce synthetic datasets with tunable properties, (ii) compare and analyse the performance of methods from different categories on the node classification task by conducting large-scale experiments on 9 datasets and 8 methods, and (iii) release our benchmark publicly. Our analysis of specific datasets and GNN approaches yields novel and compelling insights. For instance, we uncover the pitfalls of the commonly used OGB-Proteins dataset for model evaluation. While multi-label graph datasets usually show low homophily, we show that approaches that work on low-homophily multi-class datasets do not trivially carry over to multi-label datasets. While current graph-based machine learning methods are usually evaluated on multi-class datasets, we demonstrate that the acquired improvements cannot always be translated to the more general scenario in which nodes are characterized by multiple labels. We believe that our work will open avenues for more future work and bring much-deserved attention to multi-label classification on graph-structured data. In future work, we plan to study the interplay of different dataset characteristics (for example, edge density and label homophily) on model performance.

## References

Uchenna Akujuobi, Yufei Han, Qiannan Zhang, and Xiangliang Zhang. Collaborative graph walk for semi-supervised multi-label node classification. *CoRR*, abs/1910.09706, 2019. URL http://arxiv.org/abs/1910.09706.
Sanmitra Bhattacharya, Viet Ha-Thuc, and Padmini Srinivasan. Mesh: a window into full text for document summarization. *Bioinformatics*, 27(13):i120–i128, 2011.

Marián Boguná, Romualdo Pastor-Satorras, Albert Díaz-Guilera, and Alex Arenas. Models of social networks based on social distance attachment. *Physical Review E*, 70(5):056122, 2004.

Zhao-Min Chen, Xiu-Shen Wei, Peng Wang, and Yanwen Guo. Multi-label image recognition with graph convolutional networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5177–5186, 2019.

Yinlin Cheng, Mengnan Ma, Xingyu Li, and Yi Zhou. Multi-label classification of fundus images based on graph convolutional network. *BMC Medical Informatics and Decision Making*, 21(2):1–9, 2021.

Kuo-Chen Chou and Hong-Bin Shen. Euk-mploc: a fusion classifier for large-scale eukaryotic protein subcellular location prediction by incorporating multiple sites. *Journal of Proteome Research*, 6(5):1728–1734, 2007.

Hong-Min Chu, Chih-Kuan Yeh, and Yu-Chiang Frank Wang. Deep generative models for weakly-supervised multi-label classification. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 400–415, 2018.

Gene Ontology Consortium. The gene ontology resource: 20 years and still going strong. *Nucleic Acids Research*, 47(D1):D330–D338, 2018.

UniProt Consortium. Uniprot: a hub for protein information. *Nucleic Acids Research*, 43(D1):D204–D212, 2015.

Szklarczyk D, Gable AL, Lyon D, Junge A, Wyder S, Huerta-Cepas J, Simonovic M, Doncheva NT, Morris JH, Bork P, Jensen LJ, and Mering CV. String v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. *Nucleic Acids Research*, 47(D1):D607–D613, 2019.

Thi Ngan Dong and Megha Khosla. Towards a consistent evaluation of mirna-disease association prediction models. In *2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)*, pp. 1835–1842. IEEE, 2020.

William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. *CoRR*, abs/1706.02216, 2017. URL http://arxiv.org/abs/1706.02216.

Mengying Hu, Hu Han, Shiguang Shan, and Xilin Chen. Weakly supervised image classification through noise regularization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11517–11525, 2019.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. *arXiv preprint arXiv:2005.00687*, 2020.

Dat Huynh and Ehsan Elhamifar. Interactive multi-label cnn learning with partial labels. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9423–9432, 2020.

Vikas Jain, Nirbhay Modhe, and Piyush Rai. Scalable generative models for multi-label learning with missing labels. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1636–1644. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/jain17a.html.

Junteng Jia and Austin R. Benson. Residual correlation in graph neural network regression. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 588–598, 2020.
Samuel Kerrien, Bruno Aranda, Lionel Breuza, Alan Bridge, Fiona Broackes-Carter, Carol Chen, Margaret Duesbury, Marine Dumousseau, Marc Feuermann, Ursula Hinz, et al. The intact molecular interaction database in 2012. *Nucleic Acids Research*, 40(D1):D841–D846, 2012.

Megha Khosla, Jurek Leonhardt, Wolfgang Nejdl, and Avishek Anand. Node representation learning for directed graphs. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 395–411. Springer, 2019.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*, 2016.

Jack Lanchantin, Arshdeep Sekhon, and Yanjun Qi. Neural message passing for multi-label classification. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 138–163. Springer, 2019.

Irene Li, Tianxiao Li, Yixin Li, Ruihai Dong, and Toyotaro Suzumura. Heterogeneous graph neural networks for multi-label text classification. *CoRR*, abs/2103.14620, 2021. URL https://arxiv.org/abs/2103.14620.

Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. Deep learning for extreme multi-label text classification. In *Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR '17, pp. 115–124, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450350228. doi: 10.1145/3077136.3080834. URL https://doi.org/10.1145/3077136.3080834.

Weiwei Liu, Haobo Wang, Xiaobo Shen, and Ivor W Tsang. The emerging trends of multi-label learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(11):7955–7974, 2021.

Jiaqi Ma, Bo Chang, Xuefei Zhang, and Qiaozhu Mei. Copulagnn: towards integrating representational and correlational roles of graphs in graph neural networks. *arXiv preprint arXiv:2010.02089*, 2020.

Qianwen Ma, Chunyuan Yuan, Wei Zhou, and Songlin Hu. Label-specific dual graph neural network for multi-label text classification. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 3855–3864, 2021a.

Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks? *arXiv preprint arXiv:2106.06134*, 2021b.

Mingdong Ou, Peng Cui, Jian Pei, et al. Asymmetric transitivity preserving graph embedding. In *Proc. of the International Conference on Knowledge Discovery and Data Mining*, pp. 1105–1114, 2016.

Ankit Pal, Muru Selvakumar, and Malaikannan Sankarasubbu. Multi-label text classification using attention-based graph neural network. *arXiv preprint arXiv:2003.11644*, 2020.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In *Proc. of the International Conference on Knowledge Discovery and Data Mining*, 2014.

Janet Piñero, Juan Manuel Ramírez-Anguita, Josep Saüch-Pitarch, Francesco Ronzano, Emilio Centeno, Ferran Sanz, and Laura I Furlong. The disgenet knowledge platform for disease genomics: 2019 update. *Nucleic Acids Research*, 48(D1):D845–D855, 2020.

Deepak Saini, Arnav Kumar Jain, Kushal Dave, Jian Jiao, Amit Singh, Ruofei Zhang, and Manik Varma. Galaxc: Graph neural networks with labelwise attention for extreme classification. WWW '21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383127. doi: 10.1145/3442381.3449937. URL https://doi.org/10.1145/3442381.3449937.

Hong-Bin Shen and Kuo-Chen Chou.
Hum-mploc: an ensemble classifier for large-scale human protein subcellular location prediction by incorporating samples with multiple sites. *Biochemical and Biophysical Research Communications*, 355(4):1006–1011, 2007.

Min Shi, Yufei Tang, Xingquan Zhu, and Jianxun Liu. Multi-label graph convolutional network representation learning. *CoRR*, abs/1912.11757, 2019. URL http://arxiv.org/abs/1912.11757.

Min Shi, Yufei Tang, Xingquan Zhu, and Jianxun Liu. Multi-label graph convolutional network representation learning. *IEEE Transactions on Big Data*, 2020.

Dezhao Song, Andrew Vold, Kanika Madan, and Frank Schilder. Multi-label legal document classification: A deep learning-based approach with label-attention and domain-specific pre-training. *Inf. Syst.*, 106(C), May 2022. ISSN 0306-4379. doi: 10.1016/j.is.2021.101718. URL https://doi.org/10.1016/j.is.2021.101718.

Zixing Song, Ziqiao Meng, Yifei Zhang, and Irwin King. Semi-supervised multi-label learning for graph-structured data. In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*, pp. 1723–1733, 2021.

Szymon Talaga and Andrzej Nowak. Homophily as a process generating social networks: Insights from social distance attachment model. *Journal of Artificial Societies and Social Simulation*, 23(2):6, 2020. ISSN 1460-7425.

Lei Tang and Huan Liu. Relational learning via latent social dimensions. In *Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD '09, pp. 817–826, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605584959. doi: 10.1145/1557019.1557109. URL https://doi.org/10.1145/1557019.1557109.

Jimena Torres Tomás, Newton Spolaôr, Everton Alvares Cherman, and Maria Carolina Monard. A framework to generate synthetic multi-label datasets. *Electronic Notes in Theoretical Computer Science*, 302:155–176, 2014.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=rJXMpikCZ. Accepted as poster.

Bingyu Wang, Li Chen, Wei Sun, Kechen Qin, Kefeng Li, and Hui Zhou. Ranking-based autoencoder for extreme multi-label classification. *CoRR*, abs/1904.05937, 2019. URL http://arxiv.org/abs/1904.05937.

Hongwei Wang and Jure Leskovec. Unifying graph convolutional neural networks and label propagation. *CoRR*, abs/2002.06755, 2020. URL https://arxiv.org/abs/2002.06755.

Guanming Wu, Xin Feng, and Lincoln Stein. A human functional protein interaction network and its application to cancer data analysis. *Genome Biology*, 11(5):1–23, 2010.

Linli Xu, Sijie Teng, Ruoyu Zhao, Junliang Guo, Chi Xiao, Deqiang Jiang, and Bo Ren. Hierarchical multi-label text classification with horizontal and vertical category correlations. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 2459–2468, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.190. URL https://aclanthology.org/2021.emnlp-main.190.

Cheng Yang, Jiawei Liu, and Chuan Shi. Extract the knowledge of graph neural networks and go beyond it: An effective knowledge distillation framework. In *Proceedings of The Web Conference 2021 (WWW '21)*. ACM, 2021.

Xiaodi Yang, Shiping Yang, Qinmengge Li, Stefan Wuchty, and Ziding Zhang.
Prediction of human-virus protein-protein interactions through a sequence embedding-based machine learning method. *Computational and Structural Biotechnology Journal*, 18:153–161, 2020.

Yang Yang, Ryan N Lichtenwalter, and Nitesh V Chawla. Evaluating link prediction methods. *Knowledge and Information Systems*, 45(3):751–782, 2015.

Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna. Graphsaint: Graph sampling based inductive learning method. *CoRR*, abs/1907.04931, 2019. URL http://arxiv.org/abs/1907.04931.

Min-Ling Zhang and Zhi-Hua Zhou. A review on multi-label learning algorithms. *IEEE Transactions on Knowledge and Data Engineering*, 26(8):1819–1837, 2013.

Xiangping Zheng, Xun Liang, and Bo Wu. Capsule graph neural network for multi-label image recognition (student abstract). In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 13117–13118, 2022.

Cangqi Zhou, Hui Chen, Jing Zhang, Qianmu Li, Dianming Hu, and Victor S. Sheng. Multi-label graph node classification with label attentive neighborhood convolution. *Expert Systems with Applications*, 180:115063, 2021. ISSN 0957-4174. doi: 10.1016/j.eswa.2021.115063. URL https://www.sciencedirect.com/science/article/pii/S0957417421005042.

Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. *Advances in Neural Information Processing Systems*, 33, 2020.

Daoming Zong and Shiliang Sun. GNN-XML: graph neural networks for extreme multi-label text classification. *CoRR*, abs/2012.05860, 2020. URL https://arxiv.org/abs/2012.05860.

## A Appendix

Organization. We explain the construction details of the biological datasets in A.1. Furthermore, in Section A.2, we study the parameters of the graph generator and demonstrate how we generate the synthetic datasets with varying label homophily. We also summarize the characteristics of the generated synthetic graphs that were used in Section 5 for the varying-homophily and feature-quality experiments. In Section A.3, we summarize the hyperparameters of the models used in this work. In Section A.4, we provide the full original experimental results on all datasets, reported in Micro- and Macro-F1, AUROC, and Average Precision score, together with the standard deviation over the 3 random splits where the dataset is not pre-split. Note that for higher precision, the scores are provided in percentages. Last but not least, we provide the motivation for using Average Precision in the main paper in Section A.5.

## A.1 Biological Dataset Construction

## A.1.1 The Protein Phenotype Prediction Dataset.

A phenotype is any observable characteristic or trait of a disease. Identifying the phenotypes associated with a particular protein could help in clinical diagnostics or in finding possible drug targets. To construct the phenotype prediction dataset, we first retrieve the experimentally validated protein-phenotype associations from the DisGeNET (Piñero et al., 2020) database. We then (i) retain only those protein associations that are marked as "phenotype", (ii) match each disease to its first-level category in the MESH ontology (Bhattacharya et al., 2011), and (iii) remove any (phenotype) label with fewer than 100 associated proteins. To construct the edges, we acquire the protein functional interaction network from (Wu et al., 2010) (version 2020).
We then (i) model each protein as a node in the graph, (ii) retain only the protein-protein interactions between the proteins that have phenotype labels available, and (iii) remove any isolated nodes from the constructed graph. In the end, our dataset consists of 3,233 proteins and 37,351 edges. The node features are the 32-dimensional sequence-based embeddings retrieved from (Consortium, 2015) and (Yang et al., 2020).

## A.1.2 The Human Protein Subcellular Location Prediction Dataset (Humloc).

Proteins might exist at or move between different subcellular locations. Predicting protein subcellular locations can aid the identification of drug targets1. We retrieve the human protein subcellular location data from (Shen & Chou, 2007), which contains 3,106 proteins. Each protein can have one to several labels out of 14 possible locations. We then generate the graph multi-label node classification data as follows:

- We model each protein as a node in the graph. We retrieve the corresponding protein sequences from Uniprot (Consortium, 2015) and obtain the corresponding 32-dimensional node feature representations by feeding them to a model pre-trained on protein sequences (Yang et al., 2020).

- Each node's label is the one-hot encoding (i.e., 14 dimensions) generated from its sub-cellular information. Each value in the label vector represents one sub-cellular location: a value of 1 indicates that the corresponding protein exists at the respective location and 0 means otherwise.

- The edge information is generated from the protein-protein interactions retrieved from the IntAct (Kerrien et al., 2012) database. There is a connection between two nodes in the graph if there is an interaction between the corresponding proteins in IntAct. For each pair of proteins, more than one interaction of different types might exist. Therefore, we assign each edge a label, modeled as a 21-dimensional vector where each value represents the confidence score for a particular connection type.

In the end, the HumLoc dataset consists of 3,106 nodes and 18,496 edges. Each node can have one to several labels out of the 14 possible locations. Among the 3,106 different proteins, 2,580 belong to only 1 location, 480 belong to 2 locations, 43 belong to 3 locations, and 3 belong to 4 locations. Both the accession numbers and the sequences are given. None of the proteins has more than 25% sequence identity to any other protein in the same subset (subcellular location). For a more detailed description of the original dataset, we refer the readers to (Shen & Chou, 2007).

## A.1.3 The Eukaryote Protein Subcellular Location Prediction Dataset (Eukloc).

We retrieve the eukaryote protein subcellular location multi-label data from (Chou & Shen, 2007). We then employ the same data sources and pre-processing strategy as described for the HumLoc dataset to generate the multi-label node classification dataset for eukaryote protein subcellular location prediction. In the end, the final pre-processed data contains 7,766 proteins (nodes) and 13,818 connections (edges). Each protein (node) can receive one to several labels out of 22 possible locations. A minimal sketch of this assembly procedure is given below.

1https://en.wikipedia.org/wiki/Protein_subcellular_localization_prediction
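For concreteness, the following is a minimal sketch of how a HumLoc-style multi-label node classification dataset can be assembled from location annotations and protein interactions. All names and input formats here are hypothetical and only illustrate the structure of the data, not our exact pipeline.

```python
import numpy as np

def build_multilabel_graph(proteins, annotations, interactions, locations):
    """Assemble node labels and edges for a HumLoc-style dataset.

    proteins:     list of protein identifiers (one node per protein)
    annotations:  dict mapping protein -> iterable of location names
    interactions: iterable of (protein_a, protein_b) pairs
    locations:    list of all possible location names (label space)
    """
    node_index = {p: i for i, p in enumerate(proteins)}
    loc_index = {l: j for j, l in enumerate(locations)}

    # Multi-hot label matrix: Y[i, j] = 1 iff protein i occurs at location j.
    Y = np.zeros((len(proteins), len(locations)), dtype=np.int64)
    for p, locs in annotations.items():
        for l in locs:
            Y[node_index[p], loc_index[l]] = 1

    # Keep only interactions between labeled proteins, as in our pipeline.
    edges = [(node_index[a], node_index[b]) for a, b in interactions
             if a in node_index and b in node_index]
    return Y, edges

# toy usage
Y, E = build_multilabel_graph(
    proteins=["P1", "P2"],
    annotations={"P1": {"nucleus"}, "P2": {"nucleus", "cytoplasm"}},
    interactions=[("P1", "P2")],
    locations=["nucleus", "cytoplasm"])
```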
## A.2 Parameter Study of the Graph Generator Model

![17_image_0.png](17_image_0.png)

Figure 6: Visualization of the parameter study of the Graph Generator Model. The first two subplots demonstrate the relationships between the values of α and b and the label homophily of the generated synthetic datasets. The last subplot shows the edge density and the label homophily of the generated synthetic graphs, demonstrating that, from the same multi-label data, we can generate synthetic graphs with varying label homophily.

As mentioned in Section 4, the choice of α and b directly determines the connection probability pi,j of each pair of nodes i and j, and thereby the homophily ratio of the generated synthetic graph. Here, we demonstrate how we choose the values of α and b to generate synthetic graphs with varying homophily ratios. Note that the valid ranges of α and b may differ when a different distance metric is used. Firstly, we randomly choose 500 nodes from the Synthetic1 dataset, generate a series of small synthetic graphs with varying α and b, and observe the relationship between them. Since we have two hyperparameter ranges to determine, we first fix the value of one and explore the range of the other, and then vice versa. Recall that b indicates the characteristic distance at which pi,j = 1/2 and that our Hamming distance lies in the range [0, 1]; we therefore first fix b to 0.05. We chose a small value of b because a larger value would dominate the influence of α, obscuring the relationship between the homophily ratio and the value of α. The relationship between the homophily ratio of the generated synthetic graphs and the value of α is shown in Figure 6(a): the label homophily increases monotonically as α increases in the range [0, 10]. As α is interpreted as the homophily parameter in (Boguná et al., 2004), only positive values are meaningful. Similarly, we then fix α to the middle of its valid range, i.e., 5, explore the valid range of b, and visualize the relationship between the graph-level homophily ratio and the value of b in Figure 6(b). As illustrated in the subfigure, the label homophily decreases as b increases in the range (0, 0.25]. As b increases, node pairs at larger distances also reach a 50% probability of being connected, the number of edges grows, and the label homophily ratio decreases. As b decreases, only node pairs at small distances reach a 50% connection probability; the model becomes cautious about connecting node pairs, the number of edges decreases, and only node pairs that are alike get connected, so the label homophily ratio increases.

Then, we use combinations of α and b to generate synthetic graphs with specific homophily ratios. We sample 20 values of α and of b uniformly from their valid ranges with increments of 0.5 and 0.0125, respectively. Since the graph label homophily increases with α but decreases with b, we arrange the sampled values of b in reverse order and form 20 value pairs (α, b). We generate 20 synthetic graphs from the multi-label dataset corresponding to Synthetic1 with these value pairs and plot the homophily ratio and the edge density of the generated graphs in Figure 6(c). As shown in the subplot, using the same multi-label data we are able to generate synthetic graphs with varying homophily. Moreover, the edge density decreases as the label homophily increases: in graphs with higher label homophily, the generator only connects nodes that are highly similar to each other, whereas when the label homophily is low, the graph generator will connect every possible node pair in the graph, resulting in denser graphs. A minimal sketch of the underlying edge-sampling rule is given below.
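The sketch assumes the social distance attachment form p_{i,j} = 1/(1 + (d(i,j)/b)^α) of (Boguná et al., 2004), with the normalized Hamming distance between label vectors as d(i,j); function names are illustrative.

```python
import numpy as np

def connection_prob(d, alpha, b):
    # Social distance attachment (Boguná et al., 2004):
    # p = 1 / (1 + (d / b)^alpha), so p = 1/2 exactly at d = b,
    # and a larger alpha makes the generator more homophilic.
    return 1.0 / (1.0 + (d / b) ** alpha)

def sample_synthetic_graph(labels, alpha=5.0, b=0.05, seed=0):
    """Sample undirected edges given a multi-hot label matrix (n x L)."""
    rng = np.random.default_rng(seed)
    n = labels.shape[0]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.mean(labels[i] != labels[j])  # normalized Hamming distance
            if rng.random() < connection_prob(d, alpha, b):
                edges.append((i, j))
    return edges
```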
## A.2.1 Statistics Of The Synthetic Datasets

Here we summarize the characteristics of the generated synthetic graphs with varying label homophily and feature quality. The first row denotes the names of the synthetic graphs; in the varying-homophily experiments, the dataset variants are named by their label homophily. The Synthetic1 dataset is used in the varying-feature-quality experiment, where we remove relevant features to create dataset variants with different feature quality levels. Note that for all these datasets the label distribution stays the same, as they are simply different graphs generated from the same multi-label dataset. The statistics on the label distribution for these datasets are given in Table 5 and Table 6.

Table 5: The number of edges and clustering coefficient of the synthetic datasets with varying label homophily and of Synthetic1. The row '|E|' gives the number of edges and 'Clustering Coefficient' the clustering coefficient of these datasets.

| Dataset | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 | Synthetic1 |
|------------------------|-------|-------|-------|-------|-------|--------------|
| |E| | 2.37M | 598k | 298k | 79.5k | 47.6k | 1.00M |
| Clustering Coefficient | 0.53 | 0.37 | 0.39 | 0.49 | 0.93 | 0.57 |

Table 6: The label distribution of the synthetic dataset used in this work. The column notations are the same as in Table 1.

| | |L| | |Lmed| | |Lmean| | |Lmax| | 25% | 50% | 75% |
|--------------------|----------|-----------|----------|-------|-------|-------|----|
| label distribution | 20 | 3 | 3.23 | 12 | 1 | 3 | 5 |

## A.3 Hyperparameter Setting

In this section, we summarize all the hyperparameters used in the experiment section; the detailed settings are listed in Table 7 and Table 8. More specifically, for Mlp and all GNN-based methods, we report the number of layers, the dimension of the hidden layer, the learning rate, the patience of the Earlystopping, the weight decay, and the number of neighbors sampled for the models that require sampling. We use the same number of layers and the same hidden size for Mlp and the other GNN-based methods. For H2Gcn, the learning rate on the synthetic datasets in the varying feature quality and homophily experiments is 0.001 instead of the 0.01 used for the other models, because this further improves its performance. We also use Earlystopping with a patience of 100 epochs to train the models properly. For GraphSage, we sample 25 and 10 neighbors from the one- and two-hop neighborhoods, respectively. As the other GNN-based baselines do not use sampling, the corresponding cells are filled with "No". For the only random-walk-based method, DeepWalk, we use the default setting for all datasets, as it already shows competitive performance: we perform 10 random walks of length 10 per node to generate the sequences, use a window size of 5 for the training pairs, and set the embedding size to 64.

## A.4 The Experiment Results Reported In Four Metrics

We summarize the experimental results on the real-world datasets and the synthetic datasets in Table 9, Table 10, and Table 11, respectively. For better precision, we report the scores in percentages.
Specifically, for the scores reported on OGB-Proteins, the difference between our results and those reported on the benchmark is because 1) we use a 2-layer Mlp without Node2Vec features, 2) we use Earlystopping with a patience of 100 epochs to prevent the models from being overtrained, and 3) we use sampled local neighborhoods to have a consistent setting for all the datasets using GraphSage. The specific parameters we used are summarized in A.3. Another pre-processing step was to remove the isolated nodes in PCG; the details are described in Appendix A.1.1.

| | Mlp | Gcn, Gat, GraphSage | H2Gcn | GCN-LPA |
|------------------------|-------|-----------------------|---------|-----------------|
| Layers | 2 | 2 | 2 | 2 GCN + 5 LPA |
| Hidden size | 256 | 256 | 256 | 256 |
| Learning rate | 0.01 | 0.01 | real-world: 0.01, synthetic: 0.001 | 0.01 |
| Earlystopping patience | 100 | 100 | 100 | 100 |
| Weight decay | 5e-4 | 5e-4 | 5e-4 | 1e-4 |
| Sample for aggregation | No | GraphSage: [25, 10] | No | No |

Table 7: The hyperparameter setting for Mlp and the GNN baselines in this work for all datasets.

| | Number of walks | Walk length | Embedding size | Window size |
|-------------------|---------------|------------------|---------------|----|
| DeepWalk | 10 | 10 | 64 | 5 |

Table 8: The hyperparameter setting for DeepWalk in this work for all datasets.

| Dataset | Method | Micro-F1 | Macro-F1 | AUC-ROC | AP |
|-----------|--------------|--------------|--------------|--------------|-------------|
| BlogCat | Mlp | 17.11 ± 0.64 | 2.49 ± 0.18 | 50.30 ± 1.04 | 4.25 ± 0.63 |
| | DeepWalk | 35.59 ± 0.21 | 19.74 ± 0.54 | 73.20 ± 0.58 | 18.55 ± 0.17 |
| | LANC | 13.95 ± 2.02 | 4.55 ± 0.82 | 52.34 ± 0.91 | 5.03 ± 0.07 |
| | Gcn | 16.69 ± 0.47 | 2.63 ± 0.08 | 47.85 ± 0.06 | 3.69 ± 0.04 |
| | Gat | 17.22 ± 0.52 | 2.48 ± 0.08 | 50.88 ± 1.45 | 4.05 ± 0.09 |
| | GraphSage | 16.18 ± 0.31 | 2.38 ± 0.27 | 52.73 ± 0.82 | 4.50 ± 0.12 |
| | H2Gcn | 16.86 ± 0.34 | 2.60 ± 0.16 | 49.83 ± 1.08 | 3.92 ± 0.05 |
| | GCN-LPA | 17.15 ± 0.68 | 2.55 ± 0.19 | 51.35 ± 0.67 | 4.33 ± 0.31 |
| Yelp | Mlp | 26.04 ± 0.09 | 18.55 ± 0.20 | 50.17 ± 0.01 | 9.58 ± 0.01 |
| | DeepWalk | 49.78 ± 0.07 | 24.98 ± 0.04 | 50.67 ± 0.08 | 9.60 ± 0.02 |
| | LANC | − | − | − | − |
| | Gcn | 52.21 ± 0.07 | 27.60 ± 0.04 | 53.81 ± 0.13 | 13.14 ± 0.06 |
| | Gat | 51.24 ± 0.08 | 26.66 ± 0.06 | 67.80 ± 0.05 | 15.00 ± 0.07 |
| | GraphSage | 56.06 ± 0.10 | 31.26 ± 0.10 | 81.05 ± 0.25 | 25.09 ± 0.31 |
| | H2Gcn | 54.12 ± 0.01 | 30.52 ± 0.11 | 75.25 ± 0.46 | 22.57 ± 0.51 |
| | GCN-LPA | 50.31 ± 0.29 | 25.68 ± 0.43 | 61.09 ± 2.27 | 11.62 ± 0.74 |
| OGB-Proteins | Mlp | 2.55 | 2.40 | 54.05 | 2.59 |
| | DeepWalk | 2.88 | 2.75 | 68.75 | 4.41 |
| | LANC | 2.35 | 2.21 | 68.03 | 4.48 |
| | Gcn | 2.77 | 2.63 | 71.48 | 5.36 |
| | Gat | 2.55 | 2.40 | 50.64 | 2.14 |
| | GraphSage | 2.59 | 2.43 | 55.83 | 2.68 |
| | H2Gcn | 2.55 | 2.39 | 62.75 | 3.61 |
| | GCN-LPA | 2.56 | 2.41 | 53.22 | 2.33 |
| DBLP | Mlp | 42.14 ± 0.27 | 32.04 ± 0.65 | 54.47 ± 0.07 | 34.97 ± 0.14 |
| | DeepWalk | 63.27 ± 0.34 | 59.11 ± 0.36 | 74.81 ± 0.16 | 58.49 ± 0.25 |
| | LANC | 81.93 ± 0.29 | 80.39 ± 0.42 | 91.76 ± 0.36 | 83.55 ± 0.98 |
| | Gcn | 87.03 ± 0.20 | 85.80 ± 0.38 | 94.15 ± 0.16 | 89.27 ± 0.24 |
| | Gat | 83.06 ± 0.17 | 81.26 ± 0.14 | 92.57 ± 0.07 | 82.93 ± 0.16 |
| | GraphSage | 85.22 ± 0.23 | 83.89 ± 0.21 | 94.32 ± 0.02 | 86.84 ± 0.18 |
| | H2Gcn | 83.99 ± 0.92 | 82.56 ± 0.86 | 92.14 ± 0.57 | 85.82 ± 0.64 |
| | GCN-LPA | 82.88 ± 0.31 | 81.31 ± 0.34 | 90.17 ± 0.43 | 80.07 ± 1.24 |
| PCG | Mlp | 38.04 ± 1.20 | 18.03 ± 1.29 | 51.07 ± 0.63 | 14.78 ± 0.60 |
| | DeepWalk | 42.26 ± 1.37 | 31.49 ± 0.90 | 63.58 ± 0.87 | 22.86 ± 1.00 |
| | LANC | 36.28 ± 0.34 | 20.50 ± 1.15 | 56.58 ± 0.69 | 18.53 ± 1.14 |
| | Gcn | 41.46 ± 1.21 | 25.59 ± 0.92 | 59.54 ± 0.82 | 21.03 ± 0.34 |
| | Gat | 36.91 ± 1.75 | 19.24 ± 0.75 | 56.33 ± 4.64 | 16.75 ± 2.17 |
| | GraphSage | 38.89 ± 1.17 | 24.44 ± 1.74 | 58.57 ± 0.08 | 18.45 ± 0.29 |
| | H2Gcn | 39.05 ± 0.99 | 24.38 ± 2.17 | 58.10 ± 0.14 | 19.19 ± 0.49 |
| | GCN-LPA | 39.57 ± 1.12 | 22.90 ± 1.33 | 54.74 ± 0.95 | 16.71 ± 0.14 |
| HumLoc | Mlp | 42.12 | 18.04 | 66.04 | 16.95 |
| | DeepWalk | 45.26 | 23.30 | 65.67 | 18.58 |
| | LANC | 39.25 | 11.51 | 59.65 | 13.24 |
| | Gcn | 51.67 | 25.57 | 67.28 | 25.15 |
| | Gat | 47.10 | 17.49 | 72.47 | 23.75 |
| | GraphSage | 48.05 | 21.22 | 70.30 | 23.42 |
| | H2Gcn | 45.39 | 18.35 | 64.31 | 17.23 |
| | GCN-LPA | 45.73 | 18.15 | 62.40 | 14.96 |
| EukLoc | Mlp | 43.58 | 11.13 | 66.83 | 12.00 |
| | DeepWalk | 34.67 | 6.74 | 56.12 | 7.58 |
| | LANC | 36.08 | 4.55 | 51.13 | 6.16 |
| | Gcn | 45.86 | 12.27 | 70.53 | 15.15 |
| | Gat | 41.58 | 6.76 | 71.65 | 13.59 |
| | GraphSage | 44.65 | 11.96 | 69.04 | 12.44 |
| | H2Gcn | 44.93 | 11.80 | 69.45 | 13.35 |
| | GCN-LPA | 36.72 | 5.93 | 56.65 | 7.45 |

Table 9: Multi-label Node Classification results on real-world datasets. The results of BlogCat, Yelp, PCG, and DBLP are the mean of three random splits with 60% train, 20% validation, and 20% test data, while the results of OGB-Proteins, HumLoc, and EukLoc are reported with the built-in split.
| Feature Ratio | Method | Micro-F1 | Macro-F1 | AUC-ROC | AP |
|------------------|--------------|--------------|--------------|--------------|--------------|
| 0 | Mlp | 67.05 ± 0.91 | 25.15 ± 0.89 | 50.99 ± 1.11 | 17.17 ± 0.42 |
| | DeepWalk | 86.22 ± 0.39 | 47.80 ± 0.31 | 84.15 ± 0.56 | 48.70 ± 0.51 |
| | LANC | 69.67 ± 0.26 | 26.68 ± 1.16 | 75.36 ± 1.17 | 33.68 ± 0.88 |
| | Gcn | 67.09 ± 0.95 | 25.12 ± 1.09 | 77.17 ± 0.92 | 31.27 ± 0.66 |
| | Gat | 66.67 ± 0.71 | 25.09 ± 1.24 | 61.96 ± 3.97 | 31.10 ± 3.17 |
| | GraphSage | 67.90 ± 1.40 | 25.87 ± 1.51 | 66.47 ± 0.31 | 30.01 ± 0.71 |
| | H2Gcn | 71.12 ± 1.26 | 27.45 ± 1.18 | 80.83 ± 0.98 | 37.64 ± 1.72 |
| | GCN-LPA | 70.26 ± 2.00 | 26.70 ± 1.47 | 70.58 ± 2.04 | 33.65 ± 2.61 |
| 0.2 | Mlp | 68.37 ± 0.55 | 26.61 ± 0.47 | 54.47 ± 0.19 | 18.70 ± 0.69 |
| | DeepWalk | 86.22 ± 0.39 | 47.80 ± 0.31 | 84.15 ± 0.56 | 48.70 ± 0.51 |
| | LANC | 70.84 ± 1.75 | 27.62 ± 0.74 | 74.42 ± 2.02 | 34.24 ± 1.42 |
| | Gcn | 67.42 ± 1.07 | 25.31 ± 1.09 | 77.47 ± 0.87 | 31.59 ± 0.63 |
| | Gat | 67.07 ± 1.07 | 25.03 ± 1.16 | 64.55 ± 0.60 | 33.90 ± 0.30 |
| | GraphSage | 70.14 ± 0.60 | 27.99 ± 1.07 | 69.44 ± 1.09 | 32.84 ± 0.79 |
| | H2Gcn | 73.21 ± 1.07 | 28.85 ± 1.23 | 81.78 ± 1.24 | 40.12 ± 4.24 |
| | GCN-LPA | 69.67 ± 1.67 | 26.14 ± 1.53 | 71.02 ± 1.71 | 33.33 ± 1.31 |
| 0.5 | Mlp | 68.83 ± 0.78 | 27.57 ± 1.10 | 61.63 ± 0.74 | 21.95 ± 0.40 |
| | DeepWalk | 86.22 ± 0.39 | 47.80 ± 0.31 | 84.15 ± 0.56 | 48.70 ± 0.51 |
| | LANC | 73.21 ± 2.79 | 30.02 ± 2.01 | 77.05 ± 1.16 | 36.45 ± 1.30 |
| | Gcn | 67.55 ± 1.08 | 25.39 ± 1.08 | 77.25 ± 0.85 | 31.14 ± 0.61 |
| | Gat | 67.76 ± 1.75 | 25.49 ± 1.46 | 64.40 ± 1.64 | 32.92 ± 0.88 |
| | GraphSage | 74.70 ± 0.49 | 31.25 ± 0.89 | 76.24 ± 0.47 | 37.65 ± 0.65 |
| | H2Gcn | 76.16 ± 0.47 | 32.85 ± 0.28 | 84.96 ± 0.55 | 42.70 ± 0.27 |
| | GCN-LPA | 72.11 ± 1.89 | 27.80 ± 0.77 | 74.39 ± 1.26 | 36.84 ± 1.02 |
| 0.8 | Mlp | 70.17 ± 0.81 | 29.37 ± 0.99 | 69.53 ± 0.46 | 27.65 ± 0.77 |
| | DeepWalk | 86.22 ± 0.39 | 47.80 ± 0.31 | 84.15 ± 0.56 | 48.70 ± 0.51 |
| | LANC | 72.67 ± 3.56 | 29.67 ± 3.13 | 74.73 ± 1.07 | 35.32 ± 2.06 |
| | Gcn | 69.56 ± 1.14 | 25.79 ± 1.14 | 77.03 ± 0.99 | 30.10 ± 0.71 |
| | Gat | 67.93 ± 0.75 | 25.40 ± 1.17 | 66.41 ± 3.55 | 33.78 ± 0.96 |
| | GraphSage | 75.66 ± 0.51 | 32.39 ± 1.50 | 78.31 ± 0.49 | 39.25 ± 0.86 |
| | H2Gcn | 78.68 ± 0.38 | 36.17 ± 0.56 | 86.01 ± 0.64 | 44.21 ± 0.31 |
| | GCN-LPA | 72.62 ± 1.46 | 27.74 ± 0.67 | 73.99 ± 1.06 | 36.28 ± 1.57 |
| 1.0 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 86.22 ± 0.39 | 47.80 ± 0.31 | 84.15 ± 0.56 | 48.70 ± 0.51 |
| | LANC | 74.13 ± 1.61 | 30.38 ± 1.29 | 75.26 ± 0.93 | 37.47 ± 0.27 |
| | Gcn | 70.74 ± 0.80 | 26.57 ± 0.97 | 79.22 ± 0.82 | 33.70 ± 0.79 |
| | Gat | 70.22 ± 1.27 | 26.35 ± 0.93 | 66.52 ± 1.03 | 36.01 ± 0.72 |
| | GraphSage | 78.71 ± 0.59 | 35.77 ± 1.40 | 80.77 ± 0.17 | 43.05 ± 0.22 |
| | H2Gcn | 79.97 ± 0.19 | 38.73 ± 0.91 | 87.09 ± 0.12 | 46.71 ± 0.37 |
| | GCN-LPA | 76.28 ± 1.27 | 31.91 ± 1.07 | 76.38 ± 0.36 | 39.10 ± 2.03 |

Table 10: Multi-label Node Classification results on the Synthetic dataset with varying feature quality. All results are the mean of three random splits. The ratios of the relevant to the irrelevant features are [0, 0.2, 0.5, 0.8, 1.0].

## A.5 Challenge Of The Evaluation Metrics

The Area under the ROC curve (AUC) and the Area under the Precision-Recall curve (AUPR) are two widely accepted non-parametric measurement scores used by existing works. Nevertheless, as discussed in (Yang et al., 2015), the AUC score is sometimes misleading for highly imbalanced datasets like those in our work. In addition, AUPR might lead to an over-estimation of the models' performance when the number of thresholds (or unique prediction values) is limited (Dong & Khosla, 2020). For such reasons, as suggested in (Dong & Khosla, 2020), we instead use the Average Precision (AP) score as our primary evaluation metric. Following previous works, we also report the F1 score; as it is a threshold-based metric, we emphasize that it might be biased when the benchmarked models have different prediction score ranges. A minimal example of this evaluation protocol is sketched below.
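The sketch uses scikit-learn; the micro averaging and the 0.5 threshold for F1 are assumptions made for this example rather than a full specification of our protocol.

```python
import numpy as np
from sklearn.metrics import average_precision_score, f1_score

# Y_true is the binary multi-hot label matrix and Y_score holds the
# models' prediction scores, both of shape (num_nodes, num_labels).
rng = np.random.default_rng(0)
Y_true = rng.integers(0, 2, size=(100, 5))
Y_score = rng.random((100, 5))

# Average Precision: threshold-free and robust to class imbalance.
ap = average_precision_score(Y_true, Y_score, average="micro")

# F1 requires a threshold (here 0.5), which can bias comparisons
# between models with different prediction score ranges.
f1 = f1_score(Y_true, (Y_score > 0.5).astype(int), average="micro")
print(f"AP={ap:.3f}, micro-F1={f1:.3f}")
```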
| Label Homophily | Method | Micro-F1 | Macro-F1 | AUC-ROC | AP |
|-------------------|--------------|--------------|--------------|--------------|--------------|
| 0.2 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 66.62 ± 0.62 | 26.02 ± 0.55 | 52.19 ± 0.91 | 18.07 ± 0.71 |
| | LANC | 67.05 ± 0.92 | 25.12 ± 2.20 | 54.49 ± 0.53 | 18.95 ± 0.43 |
| | Gcn | 67.28 ± 0.67 | 25.21 ± 1.04 | 66.59 ± 0.58 | 26.06 ± 1.07 |
| | Gat | 65.09 ± 0.89 | 24.61 ± 1.41 | 51.43 ± 0.33 | 17.05 ± 0.36 |
| | GraphSage | 73.57 ± 0.20 | 31.22 ± 0.65 | 71.72 ± 0.30 | 28.94 ± 1.09 |
| | H2Gcn | 73.80 ± 0.59 | 31.86 ± 0.89 | 73.24 ± 0.08 | 29.69 ± 0.53 |
| | GCN-LPA | 67.02 ± 0.89 | 25.24 ± 1.31 | 49.92 ± 0.93 | 17.02 ± 0.31 |
| 0.4 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 88.79 ± 0.20 | 54.74 ± 1.73 | 85.74 ± 0.33 | 52.15 ± 1.41 |
| | LANC | 75.16 ± 0.77 | 33.69 ± 0.94 | 76.53 ± 0.86 | 37.99 ± 0.74 |
| | Gcn | 72.16 ± 1.70 | 26.95 ± 1.18 | 80.19 ± 0.75 | 34.29 ± 0.88 |
| | Gat | 71.50 ± 1.30 | 27.58 ± 1.11 | 69.96 ± 1.69 | 35.86 ± 0.59 |
| | GraphSage | 79.09 ± 0.33 | 36.05 ± 1.56 | 81.00 ± 0.35 | 42.57 ± 0.49 |
| | H2Gcn | 81.30 ± 0.25 | 40.54 ± 0.74 | 87.91 ± 0.04 | 48.36 ± 0.44 |
| | GCN-LPA | 78.11 ± 1.38 | 32.71 ± 1.47 | 78.00 ± 0.70 | 40.80 ± 0.20 |
| 0.6 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 95.94 ± 0.05 | 82.58 ± 0.19 | 95.32 ± 0.80 | 81.34 ± 0.95 |
| | LANC | 80.36 ± 1.81 | 39.65 ± 3.14 | 81.85 ± 2.66 | 43.42 ± 3.61 |
| | Gcn | 75.47 ± 0.46 | 29.43 ± 0.43 | 84.08 ± 0.72 | 38.78 ± 0.33 |
| | Gat | 75.13 ± 0.39 | 29.26 ± 0.56 | 72.85 ± 0.79 | 39.01 ± 0.23 |
| | GraphSage | 82.46 ± 0.67 | 40.46 ± 1.67 | 83.24 ± 0.89 | 45.79 ± 0.91 |
| | H2Gcn | 83.40 ± 1.94 | 44.32 ± 1.59 | 89.98 ± 0.86 | 51.19 ± 1.92 |
| | GCN-LPA | 85.45 ± 2.42 | 47.67 ± 4.72 | 83.96 ± 2.22 | 49.54 ± 3.54 |
| 0.8 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 96.53 ± 0.61 | 89.25 ± 1.96 | 95.81 ± 0.47 | 86.93 ± 1.92 |
| | LANC | 79.35 ± 0.97 | 46.03 ± 2.24 | 87.75 ± 0.60 | 48.10 ± 1.17 |
| | Gcn | 80.72 ± 0.55 | 36.60 ± 1.31 | 86.82 ± 0.62 | 44.98 ± 0.40 |
| | Gat | 77.35 ± 0.49 | 31.63 ± 1.10 | 83.10 ± 0.69 | 42.82 ± 0.86 |
| | GraphSage | 84.22 ± 0.38 | 48.16 ± 1.42 | 87.37 ± 0.34 | 53.34 ± 0.84 |
| | H2Gcn | 86.00 ± 2.63 | 55.85 ± 4.92 | 91.61 ± 1.64 | 57.17 ± 2.57 |
| | GCN-LPA | 88.96 ± 0.68 | 64.09 ± 4.18 | 89.53 ± 1.07 | 60.44 ± 4.41 |
| 1.0 | Mlp | 72.90 ± 0.27 | 31.52 ± 0.52 | 75.23 ± 0.63 | 34.29 ± 0.07 |
| | DeepWalk | 80.25 ± 0.11 | 62.80 ± 0.46 | 83.83 ± 1.08 | 55.16 ± 0.67 |
| | LANC | 83.37 ± 0.96 | 60.77 ± 3.51 | 92.53 ± 1.27 | 62.85 ± 3.01 |
| | Gcn | 81.61 ± 0.25 | 42.19 ± 0.45 | 86.48 ± 0.31 | 49.32 ± 0.92 |
| | Gat | 79.37 ± 0.64 | 34.67 ± 1.87 | 87.20 ± 2.15 | 43.90 ± 2.90 |
| | GraphSage | 82.15 ± 0.49 | 46.06 ± 0.99 | 89.73 ± 0.45 | 55.25 ± 0.42 |
| | H2Gcn | 91.59 ± 1.74 | 76.09 ± 5.96 | 93.94 ± 1.03 | 65.20 ± 5.71 |
| | GCN-LPA | 86.92 ± 1.13 | 60.71 ± 3.01 | 90.75 ± 0.46 | 58.28 ± 3.02 |

Table 11: Multi-label Node Classification results on the Synthetic dataset with varying label homophily. All results are the mean of three random splits. The label homophily values (rounded) are [0.2, 0.4, 0.6, 0.8, 1.0].
## A.6 Cross-Class Neighborhood Similarity Plots

In this section, we provide the heat maps of the cross-class neighborhood similarity for all the datasets used in this work. We use color coding, where a darker cell indicates a stronger cross-class neighborhood similarity. A sketch of one way to compute these similarities is given below.
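The sketch assumes the cross-class neighborhood similarity definition of Ma et al. (2021b), i.e., the average pairwise cosine similarity between the neighborhood label histograms of nodes in two classes, adapted to the multi-label setting by assigning a node to every class whose label it carries; names are illustrative.

```python
import numpy as np

def cross_class_neighborhood_similarity(adj, labels, c, c_prime):
    """Cross-class neighborhood similarity between classes c and c'.

    adj:    dense binary adjacency matrix of shape (n, n)
    labels: multi-hot label matrix of shape (n, L)
    """
    # d[i] = histogram over the labels of node i's neighbors
    d = adj @ labels
    d = d / np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-12)

    nodes_c = np.where(labels[:, c] == 1)[0]
    nodes_cp = np.where(labels[:, c_prime] == 1)[0]
    # average pairwise cosine similarity between the two node sets
    return float(np.mean(d[nodes_c] @ d[nodes_cp].T))
```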
Figure 7: Cross-class neighborhood similarity in DBLP.

Figure 8: Cross-class neighborhood similarity in BlogCat.

Figure 9: Cross-class neighborhood similarity in PCG.

Figure 10: Cross-class neighborhood similarity in HumLoc.
Figure 11: Cross-class neighborhood similarity in EukLoc.

Figure 12: Cross-class neighborhood similarity in the synthetic graph with label homophily = 0.2.
Figure 13: Cross-class neighborhood similarity in the synthetic graph with label homophily = 0.4.

Figure 14: Cross-class neighborhood similarity in the synthetic graph with label homophily = 0.6.
0.0 | 06 | 0.465 | 0.3 | 0.24 | 0.46 | a y | 0.38 | 0.37 | 0.57 | 0.47 | 0.57 | 0.42 | 0.44 | 0.43 | 0.55 | 0.28 | 0.49 | 0.42 | D.36 | | 0.51 | 0,46 | മക്ക | 0.31 | 0.24 | 0.48 | 0.50 | 0.20 | 0.58 | 86 | 0-49 | 0.30 | 0.44 | 0.66 | 0.44 | oza | 0.27 | 0.54 | 0.48 | 038 | | 0.21 | oa | 0.21 | 051 | a sa | בס | 0,27 | 0.25 | 0.24 | 0.20 | כבס | 0.25 | 0.28 | 0.20 | 0,28 | 0,26 | 0.10 | 0,023 | 0.27 | 0.24 | | 0.20 | Q.24 | 0.24 | 0.16 | 0.64 | 026 | 0.8 | 0.21 | 0.22 | ۾ 3 | 0.20 | 0.21 | 028 | 0.26 | 024 | 0.32 | 0.18 | 0.27 | 0.22 | 0.19 | | மகா | @45 | 0.48 | 0.5 | යෙන | 0.64 | 0.09 | 0.41 | 0.44 | 0.50 | 051 | 0.59 | 0.43 | os | 0.48 | 0.665 | 0.82 | යන | 0.51 | 0.59 | | ക്ര | a 56 | 0.50 | 0.37 | D3 | 050 | 0.75 | 0.47 | 0.48 | 07 | 00 | 0.47 | 051 | 0.56 | 055 | алт | 0.36 | ags | 0.53 | 0.46 | | 0.43 | 0.33 | 0.39 | 0.26 | 0.21 | 0.41 | 0,47 | 0.61 | 0.33 | 0.48 | 0:42 | 0.32 | 0.24 | 0.99 | 0.87 | 0.49 | 0.26 | 0,44 | 0.38 | 0.82 | | 0,48 | 0.37 | 0.39 | 0.24 | 0.22 | ﺚ:0,44 | 0.49 | 0.33 | 0.64 | 0.48 | 0.42 | 0.32 | 0.36 | 0.42 | 0.3� | 0.0 | 027 | 0.45 | 0.39 | 0.31 | | 0.63 | @57 | 0.28 | 03 | 0.50 | 07 | 0.48 | 0.48 | 0.73 | 요성1 | 0.47 | 0.53 | 0.57 | 056 | 0.09 | 0.36 | 0.62 | 0.54 | 0.45 | | | OP | | | | | | | | | | | | | | | | | | | | | ගණ | @47 | 0.40 | 0.32 | 0.26 | 0.51 | 0.6 | 0.42 | 0.42 | 0.61 | DES | D/4 | 0.45 | 03 | 0.46 | പഞ | പ | ﺗﻘﻪ | 0.40 | 0.4 | | 0,42 | 0.37 | 0.39 | 0.25 | 0.21 | 0.3� | 0,47 | 0.32 | 0.022 | 0.47 | 04 | os | 0.25 | 0.37 | 0.24 | 0,47 | 0.26 | 0.42 | 0.34 | 0.31 | | 0.45 | 0:42 | ﻘﻚ() | 0.28 | 0.23 | ﺠﺔ 0 | 0.36 | 0.25 | 0.53 | 0,45 | 0.% | 0.52 | 0.41 | asi | 0.26 | ﺟﺔ،0 | 0.35 | | | | | 0.61 | 0,4 | 0,49 | | | | | | | | | | | | | | | | | | | 0.57 | @44 | 0.46 | 0.26 | 0.25 | 0.5 | 0.39 | 0.42 | 0.57 | 0.37 | 0.41 | 0.62 | 0.46 | asr | 0.29 | 0.64 | 0.48 | 0.37 | | | | 0.66 | 0.5 | | | | | | | | | | | | | | | | | | | | 0.24 | 0.4 | | | | | | | | | | | | | | | | | | | | 050 | @43 | 0.44 | 0.28 | 0.48 | 0.00 | 0.37 | 0.22 | 0.58 | 0.45 | 0.36 | 0.465 | 058 | 0.54 | 0.29 | 0.42 | 0.42 | D 35 | | | | 055 | 0.26 | ﻘﺠﺔ | 0.00 | 0,47 | 051 | 07 | 0.20 | ﻪ‌ﻪ‌ﺑﻪ‌ | | | | | | | | | | | | | പക്ഷ | 0.50 | 0.56 | 0,67 | 0,48 | 0.49 | a 59 | 0.57 | 054 | 00 | 0.50 | | | | | | | | | | | 0.94 | 0:28 | 0.27 | 0.18 | 0,59 | 0.22 | 0,96 | 0.25 | 0.27 | 0.36 | 0.3 | 0.25 | 0.26 | 0.29 | 0,29 | 0,37 | 0,43 | 0,21 | 0.26 | 023 | | 0.69 | 0.49 | 0.64 | 0.83 | 0.27 | 0.66 | 0.01 | 0.44 | 0.45 | 0.62 | 0.67 | 0.42 | 0.48 | 0.64 | 0.46 | 06 | 0.81 | 0.69 | | 0.44 | | 053 | @42 | 0.45 | 0.27 | 0.22 | 0.51 | 033 | 0.38 | 0.22 | 04 | QAD | 0.34 | D.43 | 0.45 | 0.42 | 012 | 0.26 | asr | ೦.೮ರ | 0.32 | | ﺼﺼﺼ | 0,26 | 0,20 | 0,24 | ය ප | 0.20 | 0,46 | 0.22 | 0,25 | 0,26 | ﺔ،0 | 0.21 | 0.35 | 0.27 | 0,25 | ﺔ 0,44 | 0.22 | 0,44 | 0,20 | on | ![31_image_0.png](31_image_0.png) ![31_image_1.png](31_image_1.png) Figure 15: Cross-class Neighborhood Similarity in graph label homophily=0.8 | | torno_1.0 | | | | | | | | | | | | | | | | | | | |------|-------------|------|-------|-------|------|-------|------|-------|------|------|------|------|------|------|-------|------|-------|------|-------| | 0.48 | 0.3 | 0.31 | 0.16 | 0.16 | 0.87 | 0.44 | 0.24 | 0.28 | ﻋﺔ 0 | 0.86 | 0.24 | 0.27 | 0.37 | 0.81 | 0.43 | 0.14 | 0.4 | 0.36 | 0.24 | | D3 | 451 | 0.2T | 0.14 | 0.13 | 0.28 | 0.37 | ន | 0.2 | 0.36 | 03 | Dz | 024 | 0.27 | 0.23 | 0.35 | 0.11 | 0.32 | 0.27 | 0.2 | | 0.21 | 0:27 | ﺼﻒ | 0.14 | a, sa | 0.2 | 0.22 | 0.21 | 0.25 | 0.41 | 031 | 0,21 | 025 | 0.20 | 0.24 | 0.26 | 0.5 | 0.25 | 0.32 
| 0.25 | | 0.15 | a sa | 0.56 | 0.48 | 0.07 | 0.14 | o, sa | 0.11 | 0.1 | 02 | 0.15 | 0.51 | 0.13 | 0,12 | 0.12 | 0,59 | 0.06 | 0.57 | 0.14 | 0.1 | | 0.16 | Q13 | 0.13 | 0.00 | 0.62 | 0.15 | 0.18 | 0.11 | 0.11 | 0.18 | 0.16 | Q.11 | 0.12 | 0.14 | 0.12 | 02 | 0.07 | 0.17 | 0.13 | 0.1 | | 0.57 | 0.28 | D3 | 0.14 | a. 15 | 0.43 | 0.41 | 0.22 | 0.26 | 0.41 | 022 | 0.22 | 0.25 | 0.31 | 028 | 0.4 | 0.12 | 0.31 | 0.25 | 0.22 | | 0.44 | 0.37 | 0.20 | 0.19 | a. sa | 0.41 | 0.52 | 0.29 | ב0 | 0.50 | 0.42 | 0.29 | 0.34 | 0.20 | 0.34 | 0.49 | 0.16 | 0.44 | 0.20 | 0.28 | | 0.24 | 02 | 0.21 | 0.11 | Q. 11 | 022 | 0.29 | 0,42 | 0.56 | | 022 | 0.10 | 员19 | 0.22 | 0.18 | 0.29 | 0.09 | 0.20 | 022 | 0.99 | | 0.28 | 0.2 | 0.21 | 0.1 | Q. 11 | 0.26 | 08 | 0.16 | 0.09 | | 0.24 | 0.10 | 员19 | 0.24 | 02 | 08 | 0.00 | 0.27 | 0.23 | Q. 59 | | 0.44 | @ 38 | 0.41 | 0.2 | 0.18 | 0.41 | 0.62 | 0.3 | 0.8 | 0.56 | 0.43 | 0.29 | 0.36 | 0.39 | 0.36 | 0.51 | 0.15 | 0.45 | 0.4 | 0.29 | | പങ | പ | പാ! | 0.15 | a. 15 | 0.32 | 0.42 | 0.23 | 0.24 | 0.43 | 053 | 0.23 | 0.27 | പ്ര | 0.27 | 0.42 | 0.12 | 0.22 | 0.34 | D 22 | | 0.24 | 02 | 0.21 | 0.11 | 요 11 | 0.22 | 0.29 | 0.16 | 0.56 | 0.29 | 0.22 | 0.40 | 点好 | 02 | 0.18 | 0.28 | 0.00 | 0.25 | 42 | 0.15 | | 0.27 | 0.24 | 0.13 | 0.12 | 0.25 | 0.19 | 0. 19 | 0.36 | 0.19 | 0,47 | 0.24 | 0.93 | a s | 0.95 | 0.27 | 0. 19 | | | | | | 0.25 | 0.34 | 0:27 | 021 | | | | | | | | | | | | | | | | | | 0.37 | 0.27 | 0.28 | 0.18 | 0.14 | 0.81 | 0.38 | 022 | 0.24 | 0.39 | 033 | 02 | 024 | 0.46 | 0.26 | 0.59 | 0.11 | 0.56 | 0.83 | 0.21 | | 0.31 | 0:23 | 0.24 | 0.12 | 0.12 | 0.28 | 0.54 | 0.18 | 0.2 | 0.35 | 0:27 | 0.18 | 021 | 0.26 | 0.42 | 0.54 | 0.1 | 0.3 | 0.26 | 0. 55 | | ﺟﻪ | 02 | 0.17 | 0.27 | | | | | | | | | | | | | | | | | | a 36 | 0,96 | a ta | ﻪ,0,4 | 0.49 | 0.29 | 0.9 | 0.51 | 0,42 | 0.28 | 0.33 | 0.29 | 0,24 | 0.5 | ﺑﻪ 0 | 0,38 | | | | | | 0.14 | 0.11 | 0.1 | 0.06 | 0.07 | 0.12 | 0,56 | 0.09 | 0.09 | 0.15 | 0.12 | 0.09 | 0,1 | 0 11 | 0.5 | 0,57 | 0,42 | 0, sa | 0 1 | 0.09 | | 0:4 | d. 32 | 0.36 | 0.17 | 0.17 | 0.87 | 0.44 | 0.26 | 0.27 | 0.45 | 0.39 | 0.26 | 0.81 | 0.36 | 0.3 | 0.43 | 0.18 | 0.48 | 0.41 | 0.27 | | 0.36 | 0.27 | 0.32 | 0.14 | 0.13 | 0.35 | 0.22 | 0.22 | 0.23 | 04 | 0.34 | Dz | 0.27 | 0.33 | 0.26 | 0.35 | 01 | 0.41 | 0.0 | D 22 | | 0.24 | 02 | 0.21 | 0.1 | 0.1 | 022 | 0.28 | 0.16 | 日 56 | 0.29 | 023 | 0.15 | 0.18 | 0.21 | 0.18 | 0.27 | 0.08 | 0.27 | 0.22 | 04 | ![32_image_0.png](32_image_0.png) Figure 16: Cross-class Neighborhood Similarity in graph label homophily=1.0
Review 1:

Summary:
The paper studies the problem of multi-label node classification using GNN models. Specifically, it makes three different contributions. *(Metrics)* First, it proposes a new metric to measure label homophily in the case of multi-label graph datasets. Four real-world datasets are explored regarding their label properties, having implications for the usage of the AUROC metric. *(Datasets)* Second, the paper introduces new multi-label datasets as well as a generator to produce synthetic graphs. *(Model)* Finally, the paper proposes a new GNN model that combines feature and label propagation with an unsupervised learning loss. The proposed methodology is evaluated on different datasets, and its performance is compared against different baseline models.

Strengths and Weaknesses:
Below I list the strengths and weaknesses of the paper. In the weaknesses part, I also include specific questions for the authors.

**Strengths**
- The main strength of the paper has to do with the new datasets as well as the graph generator. Both can facilitate future research in the field.
- Besides, the paper makes interesting observations about the usage of the AUROC metric while evaluating multi-label node classification models.

**Weaknesses**
- To some extent, the main contribution of the proposed model, LFLF, is limited to an interesting combination of the feature and label propagation components. To be more specific, the feature propagation step is the same as in a GCN model (or the GraphSAGE model). The label propagation component used here is similar to traditional label propagation in semi-supervised learning. Thus, the novel aspect of the methodology concerns the fusion-by-attention component.
- The description of the proposed architecture needs further clarification. First, it is not clear how the model is trained. How are the test nodes treated during the training phase?
- Does the model have an inductive learning capability? This is a particularly important point. Therefore, I would propose to further discuss this aspect in the paper.
- The supervised loss in Eq. (5) has been proposed before in the GraphSAGE architecture to train a GNN in an unsupervised manner. However, to my understanding, it promotes homophily-based proximity. How is this suitable for the proposed architecture?
- Looking at the experimental results in Table 3, I noticed that the GAT model outperforms GCN in two out of seven datasets. How is this explained? Besides, it seems that LFLF can be used with any message-passing architecture. Thus, I am wondering why LFLF-GAT hasn't been considered an option here.
- The paper does not discuss the running time complexity of the model. It would be interesting to address this aspect.

Requested Changes:
Please find the questions and requested changes in the 'Weaknesses' part of the review.

Broader Impact Concerns:
No concerns on the ethical implications have been identified.

==================================================

Review 2:

Summary:
This paper studies multi-label node classification on graphs. It starts with analyzing existing datasets by computing the label homophily and the cross-class neighborhood similarity. The authors introduce three new biological interaction datasets as well as a multi-label graph generator to synthesize graphs with tunable parameters. In section 5, they introduce a variant of GNN called layerwise feature label fusion (LFLF), where both features and labels are propagated through GNN layers and then aggregated with an attention-type layer. The learning task is a link prediction task. Finally, experiments are conducted on real datasets and synthetic ones, where LFLF is shown to perform best.

Strengths and Weaknesses:

Strengths:
1- Section 3 on the analysis of the existing datasets and the new datasets is an interesting contribution. The main point made by the authors is that the success of GNNs is usually argued in terms of label homophily, but the empirical label homophily computed for existing datasets is pretty low.
2- In order to better understand the possible reasons for the success of GNNs, section 4 introduces a graph generator allowing the authors to tune various parameters like clustering and label homophily. This is a very interesting approach.

Weaknesses:
1- I have a general problem with the message of this paper. On the one hand, the authors show that the label homophily and the cross-class neighborhood similarity are low in real datasets, and GNN layers should take these facts into account. On the other hand, the authors propose a graph generator with a possibly high level of homophily and show that on such synthetic data, their GNN layer (LFLF) performs better. That's fine, but then what are the practical implications? I understand that LFLF has been designed to work better when homophily is high, but this is not the case in most real-world datasets. Results in Table 3 about LFLF for real datasets are then quite surprising and not explained.
2- I find it problematic that the analysis of the datasets completely ignores the features of the nodes. Indeed, both definitions 1 and 2 only take into account the graph structure and the labels. Features on nodes are not taken into account. I do not understand why, and the authors should explain this choice.
3- The general writing of this paper should be improved. There are a lot of crucial places where the paper does not define important concepts:
a- in definition 2, define d(i), the histogram, properly and define cosine similarity.
b- on page 8, the description of the multi-label generator is completely unclear to me.
c- LFLF, defined in section 5, is unclear. What is the label correlation matrix? What is the negative sampling distribution? What changes in your model variants?
d- in your experiments, you should be more precise. I do not understand how you get labels from node embeddings. In Section 5, your GNN learns embeddings for nodes; how do you get a classifier from there? Similar question with DeepWalk. For the GCN, GAT... what task did you train them on?
e- in section 7, how do you vary what you call the feature quality? What is the definition of feature quality? What is the definition of original versus irrelevant features?

Requested Changes:
Given the major weaknesses discussed above, I am recommending rejecting the paper. Here are some other required changes:
- overall, the results obtained by DeepWalk are very strong, but there is a more advanced version, Node2Vec, and comparisons should be done with Node2Vec
- in section 3, the authors claim that OGB-Protein is not suitable for experiments because a lot of labels are missing. But then in Section 7, they use this dataset to show that their algorithm is better than others. How did they deal with the missing labels?
- Figures 1, 3, and 5a are histograms; why do the authors use these non-standard plots?
- Figures for cross-class neighborhood similarity are not readable.
- on page 5, I do not understand the following: "Specifically, each node pair (i, j) would contribute a total of 1 units to CCNS similarity over all possible class pairs."
- on page 6, I do not understand the following: "We in fact observed that increasing the number of training epochs (which encourage the model to decrease training loss by predicting the negative class) increased the AUROC score whereas other metrics like AP or F1 score dropped or stayed unchanged."
- on page 10, I do not understand the following: "We comment that in our current experimental setup, to save extra computational time, we set Lk = L0 for all k but always update Lk."

Broader Impact Concerns:
I do not think there is a need for a BIS.

==================================================

Review 3:

Summary:
In this paper, the authors make several distinct contributions on the topic of multi-label node classification in graphs. They first revisit the notions of homophily and neighborhood similarity, which are generally associated with the success (or failure) of GNNs. They analyze some existing datasets under this light and comment on some questionable choices made in the literature. Then, they propose three new curated datasets of biological data for multi-label node classification. Next, they propose a simple random model for generating graphs with multi-labelled nodes, with some adjustable parameters. Finally, they propose a novel variant of GNNs that uses both features and labels as input, with an attentional mechanism between the two, and benchmark several baselines and models on multi-label tasks.

Strengths and Weaknesses:

Strengths:
- the paper is generally quite well-written and clear
- the paper is very complete and explores its topic quite thoroughly
- the authors propose new, interesting insights on graphs with multi-labelled nodes, and on the use of node features and node labels

Weaknesses:
- there is something of a lack of rigor and/or completeness in several notions and definitions introduced by the authors, especially in the (few) mathematical formulations: objects are generally vaguely defined, no dimensionality is given, many notions and notations are not defined and can be ambiguous, etc.
- For instance, the very definition of multi-label is not given: is it that, among a set of labels, a node can have several, or is it that each node is associated with a multidimensional label vector, with several possible classes for each coordinate? (I deduced that it is the first one, but it took me some back-and-forth reading.) In this light, Def. 2 is quite unclear (what does $i \in V_c$ mean in a multi-label context?).
- As another example, the authors should mention from the beginning which task they are addressing: I'm guessing semi-supervised learning? Transductive? Inductive? The training of the proposed GNN is unclear: why is it a good strategy to consider a purely unsupervised graph reconstruction loss? If no labels are available (inductive setting), what is the advantage of doing label propagation?
- Similarly, in the random model, it took me some time to understand that the $x_i$ were the continuous points on the hypersphere, playing the role of node features, and not the labels, since $d(x_i,x_j)$ refers to a distance between labels! By the way, in the many existing Latent Position Models (LPM) of random graphs, it is much more classical to consider a true distance between the $x_i$ and not the labels. Some discussion of this literature is needed here.
- the authors mention several times the problem with the AUROC approach, but do not give a real mathematical justification. Some more details would be nice.

Requested Changes:
See above. The paper has merits, but suffers from a lack of rigor and completeness; addressing this would greatly improve the reading.

Broader Impact Concerns:
N/A. New datasets are provided, but new data is not collected, only curated. Sufficient reference is given.

==================================================
# The ConceptARC Benchmark: Evaluating Understanding And Generalization In The ARC Domain

Arseny Moskvichev *arseny.moskvichev@gmail.com* Santa Fe Institute

Victor Vikram Odouard *vicviod@gmail.com* Santa Fe Institute

Melanie Mitchell *mm@santafe.edu* Santa Fe Institute

Reviewed on OpenReview: *https://openreview.net/forum?id=8ykyGbtt2q*

## Abstract

The abilities to form and abstract concepts are key to human intelligence, but such abilities remain lacking in state-of-the-art AI systems. There has been substantial research on conceptual abstraction in AI, particularly using idealized domains such as Raven's Progressive Matrices and Bongard problems, but even when AI systems succeed on such problems, the systems are rarely evaluated in depth to see if they have actually grasped the concepts they are meant to capture. In this paper we describe an in-depth evaluation benchmark for the Abstraction and Reasoning Corpus (ARC), a collection of few-shot abstraction and analogy problems developed by Chollet (2019). In particular, we describe ConceptARC, a new, publicly available benchmark in the ARC domain that systematically assesses abstraction and generalization abilities on a number of basic spatial and semantic concepts. ConceptARC differs from the original ARC dataset in that it is specifically organized around "concept groups"—sets of problems that focus on specific concepts and that vary in complexity and level of abstraction. We report results on testing humans on this benchmark as well as three machine solvers: the top two programs from the 2020 ARC competition and OpenAI's GPT-4. Our results show that humans substantially outperform the machine solvers on this benchmark, showing abilities to abstract and generalize concepts that are not yet captured by AI systems. We believe that this benchmark will spur improvements in the development of AI systems for conceptual abstraction and in the effective evaluation of such systems.

## 1 Introduction

Forming and abstracting concepts is at the heart of human intelligence (Carey, 2011; Hofstadter, 1995; Lake et al., 2017). These abilities enable humans to understand and create internal models of the world, to use these models to make sense of new information, often via analogy, and to decide how to behave in novel situations. Giving machines such abilities was one of the key goals of McCarthy et al.'s 1955 Dartmouth AI workshop, but these are precisely the capabilities that are still largely lacking in today's AI systems (Mitchell, 2021; Mitchell and Krakauer, 2023).

In AI, research on concept formation and abstraction often utilizes idealized domains that capture some of the essential aspects of abstraction and analogy in the real world. In such domains one can be explicit about the assumed prior knowledge without requiring the open-ended knowledge involved in real-world language and imagery. Examples of idealized domains that require abstraction and analogy abilities include Raven's Progressive Matrices (Carpenter et al., 1990; Zhang et al., 2019), Copycat letter-string analogies (Hofstadter and Mitchell, 1994), Bongard problems (Bongard, 1970; Foundalis, 2023), and the Abstraction and Reasoning Corpus (ARC) (Chollet, 2019). The latter three especially, which require the solver to *generate* answers to problems (rather than selecting from candidate answers), remain open challenges for AI systems. There has been substantial research on developing AI systems to solve problems in each of these domains (Małkiński and Mańdziuk, 2022b).
Symbolic AI systems have been developed to solve many of the original Raven's problems (Lovett and Forbus, 2017) and deep learning methods have surpassed human accuracy on automatically generated Raven's-like problems (Małkiński and Mańdziuk, 2022a). Particular subsets of Copycat letter-string problems have been solved by early "active symbol" methods (Hofstadter and Mitchell, 1994) and more recently by large language models (Webb et al., 2022). Simple Bongard problems have been recently addressed by program induction methods (Sonwane et al., 2021), and automatically generated Bongard-like problems have been tackled by deep learning systems (Nie et al., 2020). ARC problems were the subject of a 2020 Kaggle challenge (Kaggle.com, 2020) and a limited number were solved by program-synthesis approaches (Alford et al., 2022; Banburski et al., 2020; de Miquel Bleier, 2020; Wind, 2020a).

However, few of these efforts have probed the extent to which AI systems have actually grasped the abstract concepts that these various problems are meant to capture. More specifically, if an AI system is able to solve a problem involving a specific concept, to what extent will it be able to solve other problems that target the same concept, including problems that instantiate the concept in a quite different manner? Such generalization abilities would be crucial to any AI system operating in the real world.

In this paper, we examine how to evaluate the degree to which an AI system has learned or understood a concept in a generalizable way. Machine learning systems are typically developed by randomly splitting a set of examples into training and test sets. However, this kind of evaluation does not systematically test for the kind of learning and understanding that is needed for "out of distribution" generalization. Indeed, it has been shown many times that machine learning systems can learn "shortcuts" that produce high accuracy on the test set but that do not generalize more broadly (Geirhos et al., 2020; Lapuschkin et al., 2019). To evaluate AI systems, in particular systems that are claimed to perform abstract reasoning, new evaluation methods and benchmarks are needed that specifically test that the system has grasped the relevant abstract concepts.

We propose a systematic *concept-based* evaluation method, in which test examples are designed to instantiate variations on chosen concepts. If a system performs well on a range of such examples that vary in complexity and degree of abstraction, that performance provides strong evidence that the system has understood the concept in a generalizable way. In previous work we applied such an evaluation method to programs that exceeded human accuracy on the RAVEN corpus (Odouard and Mitchell, 2022). Our evaluation provided evidence that, while attaining high accuracy on the test set, these programs had not actually learned generalizable abstract concepts.

In this paper we propose a concept-based evaluation benchmark for the ARC domain. We discuss why ARC is an excellent domain for studying concept formation and abstraction in both humans and AI systems, but we argue that the original ARC test examples do not systematically evaluate concept understanding. Our contributions in this paper are (1) the creation of a new concept-based evaluation benchmark for the ARC domain and (2) results from our studies using this benchmark to evaluate state-of-the-art programs that solve ARC problems, as well as human performance on this benchmark.
Our results show that humans exhibit strong conceptual generalization abilities in the ARC domain, as compared with much weaker abilities in current AI programs, both those designed for this domain and more general-purpose large language models. We believe that our benchmark, and future extensions of it, will spur improvements in the development of AI systems for conceptual abstraction and in the effective evaluation of such systems.

## 2 The Abstraction And Reasoning Corpus

Chollet (2019) proposed the Abstraction and Reasoning Corpus (ARC) as a domain for evaluating abstract concept understanding and reasoning abilities in both humans and AI systems. ARC consists of a set of analogy problems, exemplified by those given in Figure 1. In particular, each problem consists of a set of demonstrations—initial and transformed grids—and one or more *test input* grids. In Chollet's terminology, the demonstrations coupled with the test inputs form a *task* to be solved. To solve a task, an agent needs to infer the abstract rule governing the demonstrations and apply that rule to each test input to produce a correct output grid.

![2_image_0.png](2_image_0.png)

Figure 1: Three sample ARC tasks from Chollet (2023). Each task consists of a set of task demonstrations—transformations between colored grids that follow the same abstract rule—and (here) a single test input. The job of the solver is to generate a new grid that results from applying the abstract rule to the test input. (Best viewed in color.)

The ARC domain was inspired by the hypothesis that humans possess innate (or early learned) "core knowledge systems" on which all further learning and knowledge is based. According to Spelke and Kinzler (2007), core knowledge systems include: (1) **Objectness:** knowledge that the world can be parsed into objects that have certain physical properties, such as traveling in continuous trajectories, being preserved through time, and interacting upon contact; (2) **Numerosity:** knowledge of small quantities and notions of "smallest," "largest," "greater than," "less than," etc.; (3) **Basic geometry and topology:** knowledge of lines, simple shapes, symmetries, containment, etc.; (4) **Agents and goal-directed behavior:** knowledge that some entities are agents who have their own intentions and act to achieve goals. In creating ARC tasks, Chollet assumed the first three as priors—that is, the only knowledge that should be necessary to solve these tasks. For example, Figure 1(a) requires spatial notions of extending a line diagonally from an object to a boundary; Figure 1(b) requires parsing connected sets of pixels into objects and recognizing shapes across different rotations and symmetries; and Figure 1(c) requires notions of counting and comparisons among quantities.

The tasks in Figure 1 are sampled from the 1,000-task corpus created by Chollet. Eight hundred tasks were made public online (Chollet, 2023) and as a challenge on the Kaggle platform (Kaggle.com, 2020). The remaining 200 tasks were kept as a "hidden" test set for evaluating AI systems; 100 of these were used to evaluate submissions to the Kaggle challenge. Each program in the Kaggle challenge was allowed to generate three candidate solutions for each test input in the hidden evaluation set; if one of the three was correct, the test input was considered solved. A task is considered to be solved if all of its test inputs are solved.
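For concreteness, the sketch below shows how a task in this format might be loaded and scored. The JSON layout mirrors the published ARC files (a "train" list of demonstration input/output grids and a "test" list of test pairs), while the file path and the `solver` interface are hypothetical; this is a minimal illustration of the scoring rule, not any competitor's actual harness.

```python
import json

def load_task(path):
    """Load one ARC-format task: a dict with 'train' (demonstrations) and
    'test' entries, each a list of {'input': grid, 'output': grid} pairs,
    where a grid is a list of rows of integer color codes 0-9."""
    with open(path) as f:
        return json.load(f)

def test_input_solved(guesses, target):
    """A test input counts as solved if any of up to three guesses
    exactly matches the target output grid."""
    return any(guess == target for guess in guesses[:3])

def task_solved(task, solver):
    """A task counts as solved only if all of its test inputs are solved.
    `solver` (hypothetical) maps (demonstrations, test input) -> guesses."""
    return all(
        test_input_solved(solver(task["train"], pair["input"]), pair["output"])
        for pair in task["test"]
    )

# Example usage (path and solver are hypothetical):
# task = load_task("ConceptARC/SameDifferent1.json")
# print(task_solved(task, my_solver))
```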
The first-place program in the Kaggle challenge solved 21% of the 100 hidden tasks; an ensemble of the first- and second-place programs solved about 31%.1

ARC remains challenging for AI systems, even for enormous pre-trained language models (see Section 7), for several reasons. ARC tasks involve few-shot learning—inferring an abstract concept from just a few examples. Moreover, the "core knowledge" required is enormously open-ended (e.g., even recognizing an "object" in this domain can require taking context into account), and solving the tasks requires applying core knowledge concepts with a flexibility that is key to human cognition but has not yet been achieved in AI.

One limitation on ARC's usefulness for AI research is that it might be too challenging. Many of the tasks in Chollet's corpus are difficult even for humans, and the corpus as a whole might be sufficiently difficult for machines that it does not reveal real progress on machine acquisition of core knowledge. Another limitation is that the current corpus does not systematically test generalization of the concepts underlying individual tasks. For example, if an ARC solver correctly answers the test input in Figure 1(c), one cannot conclude that the solver can generalize the concepts of "counting" and "greater than"—the system might have employed another strategy to solve this specific instance. Only by systematically evaluating a system on many variants of a given concept can we gain evidence that the system grasps that concept in a way that predicts corresponding generalization abilities.

We address these limitations by developing a new benchmark set of tasks in the ARC domain that (1) are designed to rely on straightforward instances of core concepts (and thus be relatively easy for humans), and (2) systematically evaluate the degree to which a task solver has sufficient understanding of a particular concept so as to be able to generalize. Furthermore, we test three programs—the first- and second-place programs from the ARC-Kaggle challenge, as well as OpenAI's GPT-4 pre-trained language model—on our tasks, and compare their performance to humans tested on these same tasks.

## 3 The ConceptARC Benchmark

As a first step in developing new benchmarks for concept understanding in the ARC domain, we created ConceptARC.2 We studied all publicly available ARC tasks and manually identified 16 concepts used in them (listed in the left column of Table 1). Each of these concepts is central in one or more tasks in Chollet's published ARC "training" and "evaluation" sets, though those sets were not organized around specific concepts, nor does our set of 16 concepts by any means cover all the different concepts used in ARC tasks. For each concept, we created 10 new ARC tasks that are different instantiations of the concept. This set of tasks is termed the *concept group* for a given concept. Each of our tasks has three different test inputs. The purpose of creating multiple test inputs for a given task was to control for possible "shortcuts" that might enable a solver to solve a particular test input without correctly grasping the underlying rule, whereas the purpose of creating multiple tasks within a concept group was to ensure that a solver is able to generalize across different instantiations of the concept.

As an example, Figure 2 shows three tasks from ConceptARC that are variations on the concept *Sameness*.
Figure 2(a) focuses on sameness between shapes (in each transformation, only objects with the same shape are retained); in Figure 2(b) lines with the same orientation are retained, and in Figure 2(c) each grid is divided (by a gray line) into two subgrids; if the two subgrids are identical, both are copied, and if not, only the left-hand subgrid is copied. These sample tasks illustrate the range of variation among test inputs for a given task and among tasks in a given concept group. This range is meant to be sufficiently broad that an agent that correctly solves most or all of the tasks in a group is likely to possess a rich understanding of the concept. (Examples of problems from each concept group are given in Appendix A.)

1Nearly all of the 800 published tasks had only one test input; the number of test inputs per task in the hidden test set was not revealed.

2All ConceptARC tasks can be downloaded from https://github.com/victorvikram/ConceptARC.

![4_image_0.png](4_image_0.png)

Figure 2: Three sample tasks from ConceptARC, each of which has a set of demonstrations and three test inputs. Each task is a variation on the concept *Sameness*. (Best viewed in color.)

We constructed the tasks in the ConceptARC benchmark manually.3 We do not believe that interesting, diverse task variations on a particular concept could be constructed automatically, unless we were able to create an automated system that understands the concept in a general way (and the challenge of developing such a system is what inspired the benchmark in the first place). Given that the goal of the ARC benchmark is to evaluate humanlike concept abstraction, we (following Chollet (2019)) constructed the tasks using our own intuitions about human core concepts. It is important to note that "concept" does not have a rigorous definition in cognitive science. Here we use the term informally to refer to a facet of core knowledge that plays important roles in the original ARC tasks, and that plays more obviously central roles in the ConceptARC tasks in a given concept group.

3It should be noted that in our tasks, following human conventions, the color black plays a special role, signifying background or "unfilled" grid squares.

## 4 Human And Machine Performance On ConceptARC

In this section we present results from our studies of human and machine performance on ConceptARC. We recruited human participants using the Amazon Mechanical Turk and Prolific platforms and tested them on tasks in our corpus, using a visual interface. We also obtained code for the first- and second-place ARC-Kaggle winning programs and ran them on the same tasks, in the same text-based format used in the ARC-Kaggle competition. Finally, we used OpenAI's API to test GPT-4 on these tasks, using a text prompt similar to the format given to the ARC-Kaggle programs. Details of each study are given in the next sections.

Recall that each of our 16 concept groups contains 10 tasks, each of which includes three unique test inputs, for a total of 30 test inputs per concept. Both humans and machine solvers are allowed three guesses for each test input, and a solver (human or machine) is considered correct on a test input if one of the three guesses is correct. Table 1 gives, for each concept, the accuracies over the 30 test inputs in the concept group. These results provide an assessment of how well solvers can generalize over the range of different tasks associated with each concept.
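In code, the aggregation conventions used for Table 1 (spelled out in the next paragraph) amount to only a few lines; the sketch below is a minimal version with hypothetical data structures.

```python
from statistics import mean

def human_concept_accuracy(results_per_input):
    """`results_per_input`: one list of booleans per test input in a concept
    group (one entry per participant who attempted that input). The score is
    the mean, over test inputs, of the fraction of participants who solved it."""
    return mean(sum(r) / len(r) for r in results_per_input)

def machine_concept_accuracy(solved_flags):
    """For a program, the score is simply the fraction of the concept group's
    test inputs that it solved (one boolean per test input)."""
    return sum(solved_flags) / len(solved_flags)

# Hypothetical example with three test inputs:
print(human_concept_accuracy([[True, True, False, True],           # 0.75
                              [True, True, True],                  # 1.00
                              [False, True, True, True, True]]))   # 0.80 -> 0.85
```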
The human accuracy reported for each concept is the average accuracy over the 30 test inputs in that concept group, where the accuracy on a given test input is the fraction of participants who correctly solved that test input. The accuracies reported for each ARC-Kaggle program and for GPT-4 are simply the fraction of test inputs in each concept group that were correctly solved by the program. We discuss these results in detail in Section 8.4

Table 1: Accuracies of humans, the two top-scoring ARC-Kaggle programs, and GPT-4 (with temperatures 0 and 0.5) on test inputs in each concept group in ConceptARC.

| Concept | Humans | ARC-Kaggle First Place | ARC-Kaggle Second Place | GPT-4 Temp 0 | GPT-4 Temp 0.5 |
|-------------------------|------|------|------|------|------|
| Above and Below | 0.90 | 0.70 | 0.33 | 0.23 | 0.37 |
| Center | 0.94 | 0.50 | 0.20 | 0.33 | 0.33 |
| Clean Up | 0.97 | 0.50 | 0.20 | 0.20 | 0.27 |
| Complete Shape | 0.85 | 0.47 | 0.30 | 0.23 | 0.23 |
| Copy | 0.94 | 0.23 | 0.27 | 0.23 | 0.27 |
| Count | 0.88 | 0.60 | 0.40 | 0.13 | 0.17 |
| Extend To Boundary | 0.93 | 0.77 | 0.47 | 0.07 | 0.10 |
| Extract Objects | 0.86 | 0.43 | 0.43 | 0.03 | 0.07 |
| Filled and Not Filled | 0.96 | 0.73 | 0.43 | 0.17 | 0.27 |
| Horizontal and Vertical | 0.91 | 0.43 | 0.10 | 0.27 | 0.33 |
| Inside and Outside | 0.91 | 0.57 | 0.10 | 0.10 | 0.16 |
| Move To Boundary | 0.91 | 0.37 | 0.30 | 0.20 | 0.20 |
| Order | 0.83 | 0.27 | 0.23 | 0.27 | 0.27 |
| Same and Different | 0.88 | 0.53 | 0.17 | 0.17 | 0.27 |
| Top and Bottom 2D | 0.95 | 0.60 | 0.57 | 0.23 | 0.37 |
| Top and Bottom 3D | 0.93 | 0.50 | 0.03 | 0.20 | 0.27 |

## 5 Details Of Human Studies

To evaluate human accuracy in our tasks, we ran an online study, recruiting participants from the Amazon Mechanical Turk and Prolific platforms. This section provides additional details on our procedure.

## 5.1 Procedure

Participants were presented with a visual interface for solving ARC tasks, adapted from the original ARC viewer (Chollet, 2023), and programmed for online data collection using the psiTurk framework.5 Each participant was presented with a random selection of tasks (17 for most participants, although see Appendix C for a discussion of a few exceptions). Each task had three test inputs, but these were randomly split among participants, with each participant seeing only one test input of a given task. Similar to the ARC-Kaggle programs, participants were given three attempts to solve each test input. If a participant managed to solve the test input correctly, they were asked to verbally describe their solution before moving on to the next task. We will report on the analysis of this natural language data in future work.

Note that among the 17 tasks given to a participant, the first two were extremely simple training tasks, for which the participants were allowed unlimited attempts. These training tasks were included to give the participants time to familiarize themselves with the study interface. Additionally, among the remaining tasks, three were "minimal," that is, the simplest concept instantiations we could create. These minimal tasks served as "attention checks," helping to identify individuals who did not try to solve the tasks or follow instructions (see Section 5.2). We give examples of minimal tasks in Appendix B.
Because they were used to determine which participants to exclude, we did not include performance on the minimal tasks in the results given in Section 4.6

4Results for human participants and machines on all 480 test inputs can be downloaded from https://github.com/victorvikram/ConceptARC.

5https://psiturk.org/

## 5.2 Exclusion Criteria

We used two criteria to exclude participants. A participant was excluded from further analysis if 1) they failed at solving two or more minimal tasks; or 2) they provided empty or nonsensical explanations for their solutions (such as "Nice," "Solve task is good," and so on). Failing the first criterion suggests that the person was not paying attention to the task, while failing the second shows a lack of ability or motivation to follow the task instructions. Since it was always faster for a participant to pretend to fail any given problem rather than to try to solve it, excluding unmotivated, inattentive participants was crucial to avoid skewing the results. In total, 55 out of 482 initial participants were excluded based on inadequate explanations (all from Amazon Mechanical Turk). An additional 12 participants were excluded based on failing to solve two or more minimal tasks (8 from Amazon Mechanical Turk, 4 from Prolific).

## 5.3 Participants

The final sample comprised 415 participants—204 from Amazon Mechanical Turk and 211 through Prolific. To ensure linguistic fluency in English for the purpose of collecting natural language descriptions, only U.S.-based Amazon Mechanical Turk workers were invited to participate in the study, and only U.S. or U.K. participants were recruited from Prolific.7 Since test inputs were randomly assigned to different participants, and since the psiTurk platform does not naturally have a mechanism to monitor how many participant answers were collected for each test input, there is some variation in the amount of data collected for different test inputs. Overall, each test input was given to at least 8 participants (with one exception, which was given to only 7 participants). The detailed results available at https://github.com/victorvikram/ConceptARC provide the number of participants tested on each test input in the corpus.

## 6 Details Of Testing Winning Programs From The ARC-Kaggle Challenge

As we described in Section 2, in 2020 the Kaggle platform hosted a three-month competition on ARC tasks (Kaggle.com, 2020). Competing programs were scored on 100 hidden tasks. Programs were allowed to make three predictions for each test input, and if one of the predictions was correct, the test input was considered to be solved. Using this metric, the first- and second-place programs attained accuracies of 21% and 19% respectively. An ensemble of the two winning programs attained an accuracy of about 31%, and as of this writing, this is the state-of-the-art accuracy on this hidden evaluation set.8 To our knowledge, there have been no published large-scale experiments to date evaluating humans on tasks in the ARC corpus (though, as we describe in Section 9, small-scale studies were performed by Acquaviva et al. (2022) and Johnson et al. (2021)). We obtained the source code for the first- and second-place ARC-Kaggle winners on GitHub,9 and tested each of them on all of the ConceptARC tasks.
The first- and second-place programs in the ARC-Kaggle challenge both work by performing a heuristic search over a fixed, manually defined set of grid operations to generate a pipeline of these operations that, when applied to inputs from the task demonstrations, correctly generates the corresponding outputs.

In particular, the first-place program (Wind, 2020a) constructed its solutions using a manually created set of 142 grid operations, such as an operation that splits a given grid into multiple grids consisting of "background color" and "objects" consisting of color-connected pixels, operations that perform rotations, reflections, and other variations on a given grid, and an operation that extracts the "object" with the most non-black pixels. The second-place program (de Miquel Bleier, 2020) constructed its programs from a set of 50 manually created grid operations, and used a genetic algorithm to search for successful pipelines of operations. Both programs were able to increase their success by augmenting the given task demonstrations, for example, by flipping the demonstration input and output grids along the diagonal, by remapping colors, and by other heuristic transformations.

Given the open-ended nature of ARC, we doubt that similar heuristic search methods, even over a much larger number of grid operations, will achieve anything like human performance on ARC tasks. Even the authors of the winning programs seem to agree that a wholly different kind of method is needed. The author of the first-place program wrote, "Unfortunately, I don't feel like my solution itself brings us closer to AGI" (Wind, 2020b) and one of the authors of the second-place program noted that "No team out of the 914 [ARC-Kaggle competition] participants found a satisfying, AI-focused solution for this problem" (de Miquel Bleier, 2020).

6All of the minimal tasks are included in the corpus provided at https://github.com/victorvikram/ConceptARC.

7We cannot exclude the possibility that a person from another country might register a U.S.-based account on Prolific or Amazon Mechanical Turk. However, we have manually checked the verbal answers provided by the study participants. All participants in the final sample demonstrated high fluency in English, which at least should ensure that they fully understood the study instructions.

8F. Chollet, Personal Communication, April 7, 2023.

9https://github.com/top-quarks/ARC-solution (first-place ARC-Kaggle winner); https://github.com/alejandrodemiquel/ARC_Kaggle (second-place ARC-Kaggle winner).
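To make the pipeline-search strategy described above concrete, here is a toy brute-force version. The operation set, names, and search procedure are illustrative assumptions only, vastly simpler than the 142 and 50 hand-designed operations (plus demonstration augmentation and heuristics) used by the actual winning programs.

```python
from itertools import product
import numpy as np

# A toy operation set; the real programs used much larger, richer sets.
OPS = {
    "identity": lambda g: g,
    "rot90": lambda g: np.rot90(g),
    "flip_lr": lambda g: np.fliplr(g),
    "transpose": lambda g: g.T,
}

def _apply(names, grid):
    """Apply a sequence of named operations to a grid, in order."""
    for name in names:
        grid = OPS[name](grid)
    return grid

def find_pipeline(demos, max_len=3):
    """Search for a sequence of operations mapping every demonstration
    input to its output; returns the operation names, or None."""
    for length in range(1, max_len + 1):
        for names in product(OPS, repeat=length):
            if all(
                np.array_equal(_apply(names, np.array(inp)), np.array(out))
                for inp, out in demos
            ):
                return names
    return None

# A demonstration pair solvable by a single left-right flip:
print(find_pipeline([([[1, 0], [2, 0]], [[0, 1], [0, 2]])]))  # ('flip_lr',)
```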
## 7 Details Of Testing GPT-4

GPT-4 (OpenAI, 2023) is a large-scale multimodal AI system created by OpenAI. Webb et al. (2022) showed that the publicly available language-only version of GPT-4 (as well as its predecessor GPT-3) was able to match or exceed human performance on several idealized analogy tasks, in a zero-shot manner (i.e., without any fine-tuning on these tasks). To test the generality of these findings, we assess GPT-4's zero-shot performance on the tasks in ConceptARC, which have some resemblance to the tasks used by Webb et al.

To test this language-only version of GPT-4 on the tasks in ConceptARC, we used the API provided by OpenAI.10 GPT-4 API prompts have "system" and "user" components, with the "system" component intended to provide general instructions, priming the model towards certain behaviors, and the "user" component for dialogue inputs. We used the prompt structure (similar to the one used by Webb et al. (2022)) illustrated in Figure 3. Within each row of a grid, the colors of each pixel were numerically coded as in the original ARC data files given by Chollet (2023) (these were the inputs to the ARC-Kaggle competitors) and space-separated. For example, [2 1 0 1] would encode a row with four pixels: red, blue, black, and blue again.

![8_image_0.png](8_image_0.png)

Figure 3: (a) ConceptARC task. (b) Corresponding prompt given to GPT-4. (Best viewed in color.)

We tested GPT-4 on all the tasks in our corpus, first with temperature 0 (following Webb et al. (2022)) and then with temperature 0.5. In every case, GPT-4's response was in the correct format for an output—that is, the errors that the model made were "true" mistakes, rather than improperly formatted correct answers. In the case where temperature was set to 0, the outputs are deterministic, so only one output was considered. In the case where temperature was set to 0.5, we repeated each task prompt three times, and if at least one of the outputs was correct, we considered the task to be solved correctly.11

10https://openai.com/product. In our experiments the model name was set to "gpt-4", the temperature was set to 0 or 0.5, and other parameters were left at their default values.

11Following Webb et al. (2022), we also tested GPT-4 using the same prompt format as in Figure 3 but with '\n' inserted after each grid row, to indicate new lines. The results were not substantially different from testing GPT-4 without inserting '\n'.
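As a rough illustration of the grid serialization described above, consider the sketch below. The surrounding instruction text and the choice of row separator are assumptions (footnote 11 notes that prompts with and without newline-separated rows gave similar results); Figure 3 shows the actual prompt used in the study.

```python
def grid_to_text(grid, row_sep=" "):
    """Render a grid as bracketed, space-separated rows of ARC color codes,
    e.g. [[2, 1, 0, 1]] -> '[2 1 0 1]'. The row separator is an assumption
    (the study compared prompts with and without newlines between rows)."""
    return row_sep.join("[" + " ".join(str(c) for c in row) + "]" for row in grid)

def task_to_prompt(demos, test_input):
    """Assemble a zero-shot prompt from demonstration pairs and one test
    input; the wording here is illustrative, not the study's exact prompt."""
    parts = []
    for i, (inp, out) in enumerate(demos, 1):
        parts.append(f"Input {i}: {grid_to_text(inp)}")
        parts.append(f"Output {i}: {grid_to_text(out)}")
    parts.append(f"Test input: {grid_to_text(test_input)}")
    parts.append("Test output:")
    return "\n".join(parts)

print(task_to_prompt([([[2, 1], [0, 1]], [[1, 2], [1, 0]])], [[0, 2], [2, 0]]))
```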
might be easier to recognize for large language models: each grid row had a constant number of features (which might aid in finding analogies using attention mechanisms) and the concepts in these tasks were generally of a more sequential rather than spatial nature. Testing these speculations is a topic for future work. The generally high accuracies of humans on each concept indicates successful generalization over the different variations in each given concept group. In contrast, the much lower accuracies of programs we tested indicates a lack of ability to generalize over the variations in a concept group, and thus a failure to develop the abstractions that ARC is meant to test. While the first- and second-place ARC-Kaggle programs did not reach human-level accuracy, it is interesting to note that these programs have significantly higher accuracy on the tasks in ConceptARC than they did on the original tasks in the ARC-Kaggle competition, where their respective accuracies were 21% and 19%. This is likely due to our intentional design of the tasks in ConceptARC to be easier than those in the original ARC set. Providing an easier benchmark also gives more insight into differences between the two programs: while their scores on the original set were quite close, in our results the first-place winner has substantially ![9_image_0.png](9_image_0.png) Figure 4: Two examples illustrating human "near-miss" errors, compared with errors made by the first-place ARC-Kaggle program on the same test input. (a) A task in the *Copy* concept group. The human correctly copied the green and red object into the blue rectangle, but incorrectly deleted the original object. The first-place program ("Program") did not seem to grasp the notion of copying an object. (b) A task in the Extend To Boundary concept group. The human correctly extended a line to the boundary, but modified the original object to make it a solid rectangle rather than a single line. The first-place program did not seem to grasp the notion of extending a line from a given object to a boundary. (Best viewed in color.) higher accuracy than the second-place winner—the average difference in per-concept accuracy between the two programs is about 23 percentage points. We ran a simple χ 2test to see whether there were statistically significant differences in accuracy across concepts. For humans, as well as for Kaggle First- and Second- place algorithms, the data strongly suggest such a difference (p < 0.001 in each of the three hypothesis tests). At the same time, GPT-4 results did not provide sufficient evidence that there were true differences in its mastery of those concepts (p = 0.27 and p = 0.23 for 0 and 0.5 temperature regimes respectively). In other words, the observed variation in GPT-4's performance across concepts might have occurred by chance. At the same time, GPT-4's generally low accuracy across all concepts, together with the relatively low-data regime, limits the statistical power of this test, hence this negative result should be interpreted with caution. ## 8.2 Comparing Human And Machine Errors It is enlightening to compare the kinds of errors made by humans and those made by programs on these tasks. 
In analyzing a sample of errors made by the human participants in our study, we found that many errors included obvious careless mistakes (e.g., off-by-one errors in the size of the output grid), wrong answers due to "giving up" (simply copying the input grid or creating a blank grid), and "near misses," in which it is obvious that the person grasped the underlying concept but made an error in applying it. In Figure 4 we show two examples of these types of near-misses by humans, and corresponding incorrect answers to the same task by the first-place ARC-Kaggle program. As illustrated in Figure 4, the errors made by the winning ARC-Kaggle programs and by GPT-4 are harder to categorize. As we described in Section 6 above, the ARC-Kaggle winning programs were not designed to capture abstract *concepts*, but instead heuristically construct pipelines of pre-designed grid transformations, so it is not surprising that their errors were typically less interpretable than those of humans. GPT-4 of course was not designed for this domain at all, although Webb et al. (2022) demonstrated the pattern-recognition abilities of large language models in other analogy domains. ## 8.3 Limitations Of Our Studies There are several limitations of the studies we report here in using the ConceptARC corpus to assess conceptual abstraction abilities in humans and machines. Because the tasks in ConceptARC were created manually, the corpus is relatively small: 16 concept groups, with 10 tasks per concept-group and three test inputs per task, for a total of 480 test inputs. We plan to substantially extend this corpus in the future, adding additional concept groups, tasks, and test inputs in order to more thoroughly explore abstraction and generalization abilities in the ARC domain. Our human studies, using Amazon Mechanical Turk and Prolific, included 415 participants, each being tested on approximately 17 test inputs (from different tasks) in an approximately 45-minute session. As we described in Section 5, this yielded typically 8 to 14 people solving each test input. Our results, showing high human accuracy on these tasks, are based on these relatively small sets of people, whose numbers were limited by the funds we had available for these studies. In future work we will extend these studies to determine if they generalize over larger populations of human solvers. One additional limitation—for both ConceptARC and the original ARC dataset—is that there might be more than one reasonable solution for a given test input. Like Chollet in the original ARC set, we tried to design tasks that have only one clear solution for each test input. However, there is always the possibility of other solutions that people would find reasonable, but that would be counted as incorrect in our study. By closely examining incorrect solutions submitted by humans in our studies, we found six (out of 480 total) test inputs that we considered to be ambiguous in this way. There might be additional such test inputs that we did not identify. Indeed, a small number of ambiguous test inputs are likely inevitable in any corpus. However, the effects of such ambiguity on our results are mitigated, for both humans and machines, by (1) allowing three solutions to be submitted for each test input; (2) having multiple test inputs per task; and (3) having multiple tasks per concept group. These factors help make the aggregate results robust in spite of an inevitable small subset of ambiguous test inputs. 
There are also a small number of tasks that allow for "shortcut solutions": for example, tasks in which the correct solution to a test input is simply to copy it, which can be a default strategy for the ARC-Kaggle winners and an easy pattern for GPT-4 to recognize. However, our purpose in creating numerous tasks that are variations on a particular concept is to make it highly unlikely that any program could use shortcuts to solve most or all of the tasks in a given concept group. ## 9 Related Work In a similar spirit to our work on the ConceptARC benchmark, Kim et al. (2022) created the "Mini-ARC" dataset, in which grids are fixed at 5 × 5 in order to simplify the domain, and 150 tasks (each containing one test input) are organized around six broad categories (movement, color, object, number, geometry, and "common sense"). This set can complement our ConceptARC benchmark, which allows any grid dimensions and systematically explores 16 more specific spatial and semantic concepts. Johnson et al. (2021) carried out a study of humans solving ARC tasks. They chose 40 tasks from the public ARC dataset and tested each of 95 participants on a random subset of 10 out of the 40 tasks. On average the participants' per-task accuracy was about 84%, though with substantial variance. Johnson et al. also recorded the average time to complete each task, as well as participants' action sequences while generating responses, and analyzed the errors people made. Similar to the results of our study, the authors found that human errors generally were near-misses, whereas the errors made by the first-place ARC-Kaggle program indicated that it did not grasp the underlying abstract rule. The human study we report in this paper can be seen as a follow-up to Johnson et al.'s study, but with a larger set of (newly created) ARC problems that are variations on specific concepts (rather than a randomly chosen subset of the original ARC tasks) and with a considerably larger population of participants. In developing AI systems to solve ARC problems, the predominant approach is automated program synthesis—that is, automatically generating a series of operations on grids or or other representations that yields a solution. The primitive operations are typically created manually, and heuristic search is used to find a combination of operations that solves a given task. For the two winning ARC-Kaggle programs, the primitive operations were sets of hand-designed grid transformations, and the synthesized programs were piplelines of transformations resulting from heuristic search methods. Since the end of the Kaggle competition, several new program-synthesis approaches to ARC have been explored. For example, Banburski et al. (2020) used a program-synthesis algorithm called "DreamCoder" (Ellis et al., 2020) that, given a domain-specific primitive operation, can generate a more abstract operation that can be added to the set of available primitives. Banburski et al. manually defined a small set of gridtransformation operations, and used these as the basis for training an agent based on DreamCoder to generate programs that could solve a small set of ARC tasks that focused on symmetry transformations. In followup work, Alford et al. (2022) explored a similar method using a neural-network-guided program-synthesis approach. In contrast to using grid-transformation primitives, Xu et al. (2022) proposed a "object-centric" approach to solving ARC tasks. 
In their system, grids are mapped to graph representations, and the system searches for pipelines of operations on these graphs rather than on the grids themselves. The nodes in a graph correspond to objects in a grid and links between nodes correspond to relationships between objects. Which sets of pixels are grouped as an "object" in a graph is decided heuristically, as is what relationships are included in the graph. In their experiments, Xu et al. focused on a set of 160 "object-centric" tasks from the public ARC dataset, and showed that their system was able to solve about a third of them. In an interesting cognitive-science-based study, Acquaviva et al. (2022) posited that the advantage of humans on ARC tasks may be due to their ability to generate descriptions of abstract concepts in natural language. The authors carried out a study, like ours, in which they asked human participants to both solve ARC problems and generate natural-language instructions that would enable another human to produce the correct output, given only the test input (i.e., not including the demonstrations). The authors then tested these instructions on other human participants and found that the instructions were sufficient for solving the task about 88% of the time. The authors released the LARC (Language-Complete ARC) dataset, which couples 354 original ARC tasks with human-generated language instructions. They used this dataset to train and evaluate selected program-synthesis methods, to see if these systems could utilize language the way humans do. The results were quite poor—the best system was able to solve only about 12% of the tasks it was tested on. Any of these program-synthesis systems might be improved by adopting more expressive domain-specific languages and by improving their program-search methods. While these and other approaches to ARC-like tasks (Ainooson et al., 2023; Assouel et al., 2022; Ferré, 2021; Fischer et al., 2020) have produced some promising results, as of this writing the first-place ARC-Kaggle program remains the most successful single approach (though as we described above, an ensemble of the top two winning programs attained higher accuracy). As yet, there is no AI system that is close to reaching human accuracy and generalization abilities on ARC tasks. The ARC domain remains a wide-open challenge for AI. ## 10 Conclusions And Future Work In this paper we have described ConceptARC, a new benchmark set of tasks in the ARC domain. The tasks in ConceptARC are designed to systematically test conceptual understanding and generalization while remaining relatively easy for humans. Our purpose in designing a benchmark with these attributes is threefold: first, to promote the development of AI systems that grasp generalizable core concepts and are able to use them in new situations; second, to fairly evaluate systems that are claimed to have such abilities; and third, to provide an evaluation set that is not overly difficult, and that would thus mask real progress in developing such systems. In addition to describing and publishing the ConceptARC benchmark, we have reported results of testing humans and machine solvers on these tasks. Our results show that humans substantially outperform stateof-the-art programs on all the concepts in our benchmark; moreover, when humans make errors, they often still exhibit a grasp of the underlying concept, unlike the programs. 
We also showed that our benchmark is able to reveal differences in performance among machine solvers that were masked by the difficulty of the original ARC dataset. In addition, we showed that GPT-4's performance is dramatically below that of humans, even though it is impressive that GPT-4, which was not designed for or trained on these tasks, in many cases exceeded the second-place ARC-Kaggle program; this contrasts with the results of Webb et al. (2022) in testing GPT-4 on other idealized analogy domains.

As we described in Section 5, in addition to asking participants to solve tasks, we also asked them to write natural language instructions for solving a given test input. In the near future we will perform a new study with human participants to test the viability of these instructions, by giving a test input (without accompanying demonstrations) along with the corresponding instructions, to see if people can arrive at the correct solution by following these instructions. Following Acquaviva et al. (2022), we will take the viable instructions and use them as part of a training set for a new machine ARC solver, to see if augmenting training with language inputs will improve performance.

In the future we also plan to extend the ConceptARC benchmark to encompass additional tasks and concept groups, and to further evaluate humans and machine solvers on these new tasks, as well as to more thoroughly analyze the different types of errors made by humans and machines. In particular, in addition to the tasks that we make publicly available, we will create a "hidden" evaluation set that can be used in future ARC competitions.

When solving a task in the ARC domain, humans bring to bear not only their core knowledge about the world but also a highly evolved visual system that is not present in any of the proposed machine solvers or in GPT-4. While the grids in an ARC task are visually simple, it may be that incorporating routines inspired by the visual system (Ullman, 1987) into program-synthesis approaches could be a way to make progress on these tasks. We plan to explore this hypothesis in future work. We also plan to test the multimodal version of GPT-4 on ARC tasks, once it is made publicly available.

## Acknowledgments

This material is based in part upon work supported by the National Science Foundation under Grant No. 2139983. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This work has also been supported by the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under the grant https://doi.org/10.54224/20650.

## References

S. Acquaviva, Y. Pu, M. Kryven, T. Sechopoulos, C. Wong, G. Ecanow, M. Nye, M. Tessler, and J. Tenenbaum. Communicating natural programs to humans and machines. *Advances in Neural Information Processing Systems*, 35:3731–3743, 2022.

J. Ainooson, D. Sanyal, J. P. Michelson, Y. Yang, and M. Kunda. An approach for solving tasks on the Abstract Reasoning Corpus, 2023. arXiv:2302.09425.

S. Alford, A. Gandhi, A. Rangamani, A. Banburski, T. Wang, S. Dandekar, J. Chin, T. Poggio, and P. Chin. Neural-guided, bidirectional program search for abstraction and reasoning. In *Proceedings of the Tenth International Conference on Complex Networks and Their Applications*, pages 657–668. Springer, 2022.

R. Assouel, P. Rodriguez, P. Taslakian, D. Vazquez, and Y. Bengio. Object-centric compositional imagination for visual abstract reasoning.
In *ICLR Workshop on the Elements of Reasoning: Objects, Structure and Causality*, 2022.

A. Banburski, A. Gandhi, S. Alford, S. Dandekar, P. Chin, and T. Poggio. Dreaming with ARC. Technical report, Center for Brains, Minds and Machines (CBMM), 2020.

M. M. Bongard. *Pattern Recognition*. Spartan Books, 1970.

S. Carey. *The Origin of Concepts*. MIT Press, Cambridge, MA, 2011.

P. A. Carpenter, M. A. Just, and P. Shell. What one intelligence test measures: A theoretical account of the processing in the Raven progressive matrices test. *Psychological Review*, 97(3):404–431, 1990.

F. Chollet. On the measure of intelligence, 2019. arXiv:1911.01547.

F. Chollet. The Abstraction and Reasoning Corpus (ARC), 2023. URL https://github.com/fchollet/ARC. Online; last accessed May 4, 2023.

A. de Miquel Bleier. Finishing 2nd in Kaggle's Abstraction and Reasoning Challenge, 2020. URL https://blog.jovian.com/finishing-2nd-in-kaggles-abstraction-and-reasoning-challenge-24e59c07b50a.

K. Ellis, C. Wong, M. Nye, M. Sablé-Meyer, L. Cary, L. Morales, L. Hewitt, A. Solar-Lezama, and J. B. Tenenbaum. DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning, 2020. arXiv:2006.08381.

S. Ferré. First steps of an approach to the ARC challenge based on descriptive grid models and the minimum description length principle, 2021. arXiv:2112.00848.

R. Fischer, M. Jakobs, S. Mücke, and K. Morik. Solving abstract reasoning tasks with grammatical evolution. In *Proceedings of the Conference Lernen, Wissen, Daten, Analysen, LWDA*, pages 6–10, 2020.

H. E. Foundalis. Index of Bongard Problems, 2023. URL http://www.foundalis.com/res/bps/bpidx.htm. Online; last accessed May 4, 2023.

R. Geirhos, J.-H. Jacobsen, C. Michaelis, R. Zemel, W. Brendel, M. Bethge, and F. A. Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020.

D. R. Hofstadter. *Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought*. Basic Books, New York, 1995.

D. R. Hofstadter and M. Mitchell. The Copycat project: A model of mental fluidity and analogy-making. In K. J. Holyoak and J. A. Barnden, editors, *Advances in Connectionist and Neural Computation Theory*, volume 2, pages 31–112. Ablex Publishing Corporation, 1994.

A. Johnson, W. K. Vong, B. M. Lake, and T. M. Gureckis. Fast and flexible: Human program induction in abstract reasoning tasks, 2021. arXiv:2103.05823.

Kaggle.com. Kaggle Abstraction and Reasoning Challenge, 2020. URL https://www.kaggle.com/c/abstraction-and-reasoning-challenge.

S. Kim, P. Phunyaphibarn, D. Ahn, and S. Kim. Playgrounds for abstraction and reasoning. In *NeurIPS Workshop on Neuro, Causal, and Symbolic AI (nCSI)*, 2022.

B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. *Behavioral and Brain Sciences*, 40:e253, 2017.

S. Lapuschkin, S. Wäldchen, A. Binder, G. Montavon, W. Samek, and K.-R. Müller. Unmasking Clever Hans predictors and assessing what machines really learn. *Nature Communications*, 10(1):1096, 2019.

A. Lovett and K. D. Forbus. Modeling visual problem solving as analogical reasoning. *Psychological Review*, 124(1), 2017.

M. Małkiński and J. Mańdziuk. Deep learning methods for abstract visual reasoning: A survey on Raven's Progressive Matrices, 2022a. arXiv:2201.12382.

M. Małkiński and J. Mańdziuk. A review of emerging research directions in abstract visual reasoning. *Information Fusion*, 2022b.

J. McCarthy, M. L. Minsky, N.
Rochester, and C. E. Shannon. A Proposal for the Dartmouth Summer Research Project in Artificial Intelligence, 1955. Reprinted in *AI Magazine*, 27, no. 4, 2006, 12-14.

M. Mitchell. Abstraction and analogy-making in artificial intelligence. *Annals of the New York Academy of Sciences*, 1505:79–101, 2021.

M. Mitchell and D. C. Krakauer. The debate over understanding in AI's large language models. *Proceedings of the National Academy of Sciences*, 120(13):e2215907120, 2023.

W. Nie, Z. Yu, L. Mao, A. B. Patel, Y. Zhu, and A. Anandkumar. Bongard-LOGO: A new benchmark for human-level concept learning and reasoning. *Advances in Neural Information Processing Systems*, 33:16468–16480, 2020.

V. V. Odouard and M. Mitchell. Evaluating understanding on conceptual abstraction benchmarks, 2022. arXiv:2206.14187.

OpenAI. GPT-4 Technical Report, 2023. arXiv:2303.08774.

A. Sonwane, S. Chitlangia, T. Dash, L. Vig, G. Shroff, and A. Srinivasan. Using program synthesis and inductive logic programming to solve Bongard problems, 2021. arXiv:2110.09947.

E. S. Spelke and K. D. Kinzler. Core knowledge. *Developmental Science*, 10(1):89–96, 2007.

S. Ullman. Visual routines. In *Readings in Computer Vision*, pages 298–328. Elsevier, 1987.

T. Webb, K. J. Holyoak, and H. Lu. Emergent analogical reasoning in large language models, 2022. arXiv:2212.09196.

J. S. Wind. DSL Solution to the ARC Challenge, 2020a. URL https://github.com/top-quarks/ARC-solution/blob/master/ARC-solution_documentation.pdf.

J. S. Wind. 1st place solution + code and official documentation, 2020b. URL https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge/discussion/154597.

Y. Xu, E. B. Khalil, and S. Sanner. Graphs, constraints, and search for the Abstraction and Reasoning Corpus, 2022. arXiv:2210.09880.

C. Zhang, F. Gao, B. Jia, Y. Zhu, and S.-C. Zhu. RAVEN: A dataset for relational and analogical visual reasoning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR*, pages 5312–5322, 2019.

## Appendix A Examples of Tasks From Each Concept Group

![15_image_0.png](15_image_0.png)

![16_image_0.png](16_image_0.png)

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

Top and Bottom 3D

## Appendix B Examples Of Minimal Tasks

![18_image_0.png](18_image_0.png)

## Appendix C Participant Recruitment Details In Our Human Study

In order to establish the data collection regime that yields the highest quality data, we introduced minor recruitment and procedure adjustments after the start of data collection. In the first batch of participants, collected via Amazon Mechanical Turk, each participant received 11 problems (this batch also had only two "minimal problems," as opposed to three such problems for everyone else). However, preliminary data examination showed that some participants did not fully follow the study instructions and had to be excluded (see Section 5.2). In response, we made the screening criteria stricter (requiring a Master Worker qualification and a 99% HIT approval rate with a history of at least 2000 HITs, as opposed to the 95% approval requirement in the first batch). Participants in all but the first batch were paid $10 upon completing the experiment. Participants in the first batch were paid $5. In all batches, the median pay-per-hour exceeded the U.S. minimum wage. Additionally, since participant quality was very "bimodal" (i.e.
each participant either diligently followed instructions on all tasks, or ignored instructions on all tasks and was thus excluded), we increased the number of tasks per participant, so that non-excluded participants provided us with more data. Lastly, after the first large batch of participants, we transitioned the study to another crowdsourcing platform: Prolific.org. This was done both for technical reasons and to diversify the data source.

## Appendix D Accuracies With Binomial Confidence Intervals

| Concept | Humans | ARC-Kaggle First Place | ARC-Kaggle Second Place | GPT-4 Temp 0 | GPT-4 Temp 0.5 |
|-------------------------|-------------------|--------------------------|---------------------------|--------------------|--------------------|
| Above and Below | 0.90 (0.86, 0.93) | 0.70 (0.52, 0.83) | 0.33 (0.19, 0.52) | 0.23 (0.12, 0.41) | 0.37 (0.22, 0.54) |
| Center | 0.94 (0.90, 0.96) | 0.50 (0.33, 0.67) | 0.20 (0.10, 0.37) | 0.33 (0.19, 0.51) | 0.33 (0.19, 0.51) |
| Clean Up | 0.97 (0.94, 0.99) | 0.50 (0.33, 0.67) | 0.20 (0.10, 0.37) | 0.20 (0.10, 0.37) | 0.27 (0.14, 0.44) |
| Complete Shape | 0.85 (0.80, 0.89) | 0.47 (0.30, 0.64) | 0.30 (0.17, 0.48) | 0.23 (0.12, 0.41) | 0.23 (0.12, 0.41) |
| Copy | 0.94 (0.90, 0.96) | 0.23 (0.12, 0.41) | 0.27 (0.14, 0.44) | 0.23 (0.12, 0.41) | 0.27 (0.14, 0.44) |
| Count | 0.88 (0.83, 0.91) | 0.60 (0.42, 0.75) | 0.40 (0.24, 0.58) | 0.13 (0.05, 0.30) | 0.17 (0.07, 0.34) |
| Extend To Boundary | 0.93 (0.89, 0.96) | 0.77 (0.59, 0.88) | 0.47 (0.30, 0.64) | 0.07 (0.02, 0.21) | 0.10 (0.03, 0.26) |
| Extract Objects | 0.86 (0.81, 0.90) | 0.43 (0.27, 0.61) | 0.43 (0.27, 0.61) | 0.03 (0.00, 0.17) | 0.07 (0.02, 0.21) |
| Filled and Not Filled | 0.96 (0.93, 0.98) | 0.73 (0.56, 0.86) | 0.43 (0.27, 0.61) | 0.17 (0.07, 0.34) | 0.27 (0.14, 0.44) |
| Horizontal and Vertical | 0.91 (0.91, 0.94) | 0.43 (0.27, 0.61) | 0.10 (0.03, 0.26) | 0.27 (0.14, 0.44) | 0.33 (0.19, 0.51) |
| Inside and Outside | 0.91 (0.91, 0.94) | 0.57 (0.39, 0.73) | 0.10 (0.03, 0.26) | 0.10 (0.03, 0.26) | 0.17 (0.07, 0.34) |
| Move To Boundary | 0.91 (0.91, 0.94) | 0.37 (0.22, 0.54) | 0.30 (0.17, 0.48) | 0.20 (0.10, 0.37) | 0.20 (0.10, 0.37) |
| Order | 0.83 (0.78, 0.87) | 0.27 (0.14, 0.44) | 0.23 (0.12, 0.41) | 0.27 (0.14, 0.44) | 0.27 (0.14, 0.44) |
| Same and Different | 0.88 (0.83, 0.91) | 0.53 (0.36, 0.70) | 0.17 (0.07, 0.33) | 0.17 (0.07, 0.34) | 0.27 (0.14, 0.44) |
| Top and Bottom 2D | 0.95 (0.91, 0.97) | 0.60 (0.42, 0.75) | 0.57 (0.39, 0.73) | 0.23 (0.12, 0.41) | 0.37 (0.21, 0.54) |
| Top and Bottom 3D | 0.93 (0.89, 0.96) | 0.50 (0.33, 0.67) | 0.03 (0.00, 0.17) | 0.20 (0.10, 0.37) | 0.27 (0.14, 0.44) |
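A note on reproducing intervals like those above: each machine-solver entry is based on 30 test inputs per concept group (10 tasks × 3 test inputs), and the reported intervals appear consistent with the Wilson score interval, although the interval method is not stated in the text. The snippet below is a minimal sketch (not the authors' code); the count of 21 correct out of 30 is inferred from the reported 0.70 accuracy of the first-place program on Above and Below.

```python
# Minimal sketch of a binomial confidence interval for a solver's accuracy.
# The paper does not state the interval method; Wilson is one plausible choice
# that matches the machine-solver entries in the table above.
from statsmodels.stats.proportion import proportion_confint

n_test_inputs = 30   # 10 tasks x 3 test inputs per concept group
n_correct = 21       # inferred from the reported accuracy of 0.70

low, high = proportion_confint(n_correct, n_test_inputs, alpha=0.05, method="wilson")
print(f"accuracy = {n_correct / n_test_inputs:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
# -> accuracy = 0.70, 95% CI = (0.52, 0.83)
```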
Review 1:

Summary: This paper proposes an evaluation task that extends the Abstraction and Reasoning Corpus (ARC) to cover conceptual "wrappers" to the above benchmark (e.g., above, center, inside/outside). The authors then benchmark humans and several machine solvers (the winning ARC-Kaggle programs and GPT-4) on these tasks.

Strengths and Weaknesses:

Strengths
- Neat and timely study that moves beyond traditional benchmarking tasks to settings that are not only more realistic but also more difficult for existing AI systems (e.g., GPT-4).
- The paper is very well-written and is effectively ready for publication as is.

Weaknesses
- It's unclear how narrow models optimized for the task itself would perform, instead of using general-purpose learners (e.g., GPT-esque systems). How would a standard supervised learning model (e.g., a finetuned ResNet perhaps) perform on each of the concept groups? This would simply provide another baseline to compare against.

Requested Changes:
- Sorry if I missed it, but did you discuss why/how each concept group in ConceptARC was selected? It would be good to provide some methodological description for why these were chosen.
- It would be good to understand how easily others can leverage this dataset, perhaps including a code snippet or release information.

Broader Impact Concerns: Can you please include a brief discussion about manipulation and deception as it pertains to high-level abstract reasoning?

==================================================

Review 2:

Summary: This work proposes ConceptARC, an extension to the Abstraction and Reasoning Corpus (ARC) benchmark, that is intended both to be more reliably solvable by humans, and to systematically explore the role of specific concepts. The study also presents results on this new benchmark for human participants, GPT-4, and two heuristic programs designed to solve ARC problems. The basic finding is that human participants reliably perform well, significantly outperforming both the heuristic programs and GPT-4.

Strengths and Weaknesses: This work is a valuable contribution that will be very useful in the continued effort to develop AI systems with human-like abstract reasoning capabilities. The proposed benchmark makes some important improvements relative to the original ARC benchmark, and the human behavioral experiment and evaluation of GPT-4 / heuristic programs are thorough and well documented. I have a number of comments (detailed below), but I think the paper is already in good shape, so these are really just suggestions (some of which I imagine won't be feasible to address in this particular study).

#### Comments:
- It would be helpful to add a bit more discussion concerning which features of the ConceptARC problems make them more difficult for GPT-4 than the problems in Webb et al. (2022). There are a few places in the paper where the problems from Webb et al. are described as more 'idealized' but there is not much discussion otherwise. One essential difference seems to be that the ConceptARC problems require object segmentation from pixel-level inputs, whereas the problems in Webb et al. (e.g. digit matrices) only involve analogies applied to individual tokens. This also relates to the first element of core knowledge listed on pg. 3, 'objectness'.
- The evaluation of error types (e.g. near misses vs. failing to grasp the concept altogether) was somewhat unsystematic.
First, one of the descriptions of a common human error type involved 'giving up', which doesn't necessarily sound like a careless mistake -- it seems like this could just as easily result from a failure to grasp the underlying concept. Second, there is no description of how errors were categorized, and no quantification of the frequency of different error types. But this does seem like a potentially important aspect of the evaluation, especially if it reveals a systematic difference between humans vs. programs. One possible way to improve this section is to have blind raters judge whether errors are more likely the result of careless mistakes vs. failure to grasp the underlying concept. - In Webb et al., analogy problems were presented using line breaks ('\n') to indicate the spatial layout of a grid, whereas it appears that GPT-4 was evaluated on conceptARC without this formatting. This seems like it could potentially affect performance. Insofar as LLMs have exposure to 2D pseudo-visual arrays, it seems likely that it often involves explicit line breaks of this sort. - Relatedly, I wonder how human participants would perform if given the problems in the same format as GPT-4. It does seem that it might be difficult in some cases to identify objects when the problems are formatted in this way. - Were human participants given feedback when they got problems correct? If so, an interesting comparison would be to see how GPT-4 performs when given similar feedback. In principle it could benefit from in-context learning based on this feedback. - It would be helpful to include supplementary figures depicting some of the 'minimal' problems used to determine subject exclusion in the human study. There is a concern that these are not sufficiently minimally difficult to be treated as mere attention checks. - It would be informative to include some measure of error for the performance estimates (in both humans and programs). Binomial confidence intervals seem like a natural choice. It might also be helpful to plot the results, either instead of or in addition to providing a table. Requested Changes: I do not have any requested changes, but the authors may wish to address some of the concerns detailed above. Broader Impact Concerns: I do not believe there are any ethical implications that would necessitate such a statement. ================================================== Review 3: Summary: This work proposes a new benchmark in the style of Chollet’s Abstraction and Reasoning Corpus (ARC) that differs in two ways. First, rather than presenting all tasks in one large collection, the new tasks are organized into 16 concept groups. Second, the tasks are designed to be easier than those in the original ARC, since the latter may be too difficult to measure AI progress in the near term. Human solvers and three machine solvers are evaluated. The machine solvers include the first and second place winners of the ARC Kaggle competition and GPT-4. Humans substantially outperform all machine solvers, the first place winner outperforms the second place winner, and GPT-4 performs worst overall. Strengths and Weaknesses: ## Strengths * The paper is very well written. The prose is clear and persuasive. The organization is good. The motivation, experiments, and results are well-explained. * I particularly appreciated the comparison between human and machine errors in Section 8.2. * I also appreciated the thorough discussion of limitations of the work in Section 8.3. * The experimental design seems sound. 
I did not find any issues related to the experiments or results. * The connections to related work are also good. ## Weaknesses * The paper suggests that all of the tasks in a concept group are related by a shared concept. There are two possible interpretations of this, each with potential issues. (1) Tasks within a concept group are designed to evaluate _only_ that concept. (2) Each task within a group involves _many_ concepts, but the _intersection_ of concepts within a group has just one shared concept. * The issue with (1) is that I don’t think it is actually possible to evaluate just a single concept. For example, the tasks in the Sameness group shown in Figure 2 also involve objects, shapes, and colors. One could say that objects, shapes, and colors are not “concepts”, but that would require a careful (and probably contentious) argument. * The issue with (2) is that we might as well take the original ARC tasks and organize them by concept, rather than making new tasks. * The notion of a “concept” is not well-defined in this work. The “core knowledge” that ARC uses is backed by cognitive science research. Is there any criteria that one could use to determine whether something is _not_ a “concept” for the purposes of ConceptARC? Nonetheless, I am willing to look past this issue because there is a long line of work on concept learning in AI, and it is always hard to precisely define the meaning of “concept”. * I was also sometimes confused about the word “generalization” as used in this work. For example: “the much lower accuracies of programs we tested indicates a lack of ability to generalize.” Usually, generalization means either (1) a model trained on A performs well on B; or (2) a finding that we observed in setting A can also be observed in setting B. When generalization is mentioned in this work, which kind is it, and what are A and B? * Relatedly, I think there is a missed opportunity to examine within- and between- concept group performance, for both humans and machines. In the absence of further results, it would be good to hint at possibilities in this direction. * The performance of GPT-4 is said to be “impressive given that it was not designed or trained for such tasks.” It is difficult to know how impressed we should be though, given that it’s the lowest performing method, and this is a new benchmark. It would be great if it were possible to design some simple baseline approach to give further context for these results. Admittedly, I can’t think of a baseline that is simple enough to easily understand but not so simple that it gets zero on all tasks. * I found it strange that each task has three test inputs, but human participants were each only given one of the three. Why is this better than just having one test input, like most of the original ARC tasks do? Requested Changes: Overall, I think the paper is already in very good shape. Here are a few small things: * The related works that develop easier ARC tasks (Kim et al. 2022 in particular) are not mentioned until Section 9. These works seem extremely related, but since this is TMLR, novelty doesn’t matter. Nonetheless, it would be good to mention these works in the introduction and/or where the motivation to simplify ARC is first mentioned. * Can you clarify how 3 different guesses per test input are collected from GPT-4? Since the temperature is set to be 0, I would have thought that the 3 guesses are often identical. I also would expect that a higher temperature might be better, though I do see that the Webb et al. 
work also used a temperature of 0. * In the limitations section, it says: ““In addition to these limitations, our studies revealed that there are a small number of tasks in the ConceptARC corpus that are ambiguous—that is, for which test inputs have more than one reasonable solution.” Can you elaborate? Which tasks are these, and how did you make this determination? ### Minor * Typo: “that are vary in complexity” * “Pretrained” vs. “pre-trained” both used * Typo: “each of these concepts is central to in one or more tasks” * “Psiturk” vs. “psiTurk” both used Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: This submission proposes an extension to the original Abstraction and Reasoning Corpus (ARC) by incorporating concept groups and systematic human studies. All three reviewers find the submission addressing a timely topic, the studies thorough and rigorous, and the findings useful to the community. Eventually, all reviewers recommended acceptance. The AE agrees. ==================================================
# Personalised Federated Learning On Heterogeneous Feature Spaces

Alain Rakotomamonjy *alain.rakoto@insa-rouen.fr*
Criteo AI Lab, Paris, France

Maxime Vono *m.vono@criteo.com*
Criteo AI Lab, Paris, France

Hamlet Jesse Medina Ruiz *hj.medinaruiz@criteo.com*
Criteo AI Lab, Paris, France

Liva Ralaivola *l.ralaivola@criteo.com*
Criteo AI Lab, Paris, France

Reviewed on OpenReview: https://openreview.net/forum?id=uCZJaqJchs

## Abstract

Personalised federated learning (FL) approaches assume that the raw data of all clients are defined in a common space, *i.e.* all clients store their data according to the same schema. For real-world applications, this assumption is restrictive as clients, having their own systems to collect and then store data, may use *heterogeneous* data representations. To bridge the gap between the assumption of a shared subspace and the more realistic situation of client-specific spaces, we propose a general framework coined FLIC that maps each client's data onto a common feature space via local embedding functions, in a federated manner. Preservation of class information in the latent space is ensured by a distribution alignment with respect to a learned reference distribution. We provide the algorithmic details of FLIC as well as theoretical insights supporting the relevance of our methodology. We compare its performance against FL benchmarks involving heterogeneous input feature spaces. Notably, we are the first to present a successful application of FL to Brain-Computer Interface signals acquired with different numbers of sensors.

## 1 Introduction

Federated learning (FL) is a machine learning paradigm where models are trained from multiple isolated data sets owned by individual agents/clients, where raw data need not be transferred to a central server, nor even shared in any way (Kairouz et al., 2021). FL ensures data ownership, and structurally incorporates the principle of data exchange minimisation by only transmitting the required updates of the models being learned. Recently, FL works have focused on *personalised* FL to tackle statistical data heterogeneity and used local models to fit client-specific data (Tan et al., 2022; Jiang et al., 2019; Khodak et al., 2019; Hanzely & Richtárik, 2020). However, most existing personalised FL works assume that the raw data on all clients share the same structure and are defined on a common feature space. Yet, in practice, data collected by clients may use differing structures: they may not capture the same information, some features may be missing or not stored, or some might have been transformed (*e.g.* via normalization, scaling, or linear combinations). An illustrative example of this, related to Brain-Computer Interfaces (Yger et al., 2016; Lv et al., 2021) and tackled in this paper, is the scenario where electroencephalography signals are recorded from different subjects, with varying numbers of electrodes and a diverse range of semantic information (*e.g.* motor imagery tasks and resting state). To tackle the challenge of making FL possible in situations where clients have heterogeneous feature spaces - such as disparate dimensionalities or differing semantics of vector coordinates - we present the first personalised FL framework specifically designed to address this learning scenario.

Proposed Approach.
The key idea of our proposal is driven by two objectives: (i) clients' data have to be embedded in a common latent space, and (ii) data related to the same semantic information (*e.g.* label) have to be embedded in the same region of this latent space. The first objective is a necessary prerequisite for FL since it makes it possible to define a relevant aggregation scheme on the central server for model parameters (*e.g.* via weighted averaging). The second one is essential for a proper federated learning of the model parameters, as FL approaches are known to struggle when data across clients follow different probability distributions. As shown later, this second objective is not guaranteed by performing client-independent learning of embedding functions, such as via low-dimensional embeddings or autoencoders. To cope with this issue, we align clients' embedded feature distributions with a latent *anchor distribution* that is shared across clients. The learning of the *anchor distribution* happens in a federated way, which means it is updated locally on each client and then combined on a central server through barycenter computation (Veldhuis, 2002; Banerjee et al., 2005). Then, we seamlessly integrate this distribution alignment mechanism, based on the local embedding functions and the anchor distribution, into a personalised federated learning framework similar to the approach proposed by Collins et al. (2021), without any loss of generality.

Contributions. To help the reader better grasp the differences of our approach with respect to the existing literature, we here spell out our contributions:

1. We are *the first* to formalise the problem of personalised FL on heterogeneous clients' feature spaces. In contrast to existing approaches, the proposed general framework, referred to as FLIC, allows each client to leverage other clients' data even though they do not have the same raw representation.

2. We introduce a distribution alignment framework and an algorithm that learns the feature embedding functions along with the latent anchor distribution in a local and global federated manner, respectively. We also show how those essential algorithmic components are integrated into a personalised FL algorithm, easing adoption by practitioners.

3. We provide algorithmic and theoretical support to the proposed methodology. In particular, we show that for a simpler but insightful learning scenario, FLIC is able to recover the true latent subspace underlying the FL problem.

4. Beyond competitive experimental analyses on toy and real-world problems, we stand out as a pioneer in Brain-Computer Interfaces (BCI) by being the first to learn from heterogeneous BCI datasets using federated learning. The proposed methodology can handle data with different sensor counts and class numbers, a feat not achieved by any other methodology to our knowledge, and can have a strong impact on other medical domains with similar data heterogeneity.

Conventions and Notations. The Euclidean norm on $\mathbb{R}^d$ is $\|\cdot\|$. $|\mathcal{S}|$ denotes the cardinality of the set $\mathcal{S}$ and $\mathbb{N}^* = \mathbb{N} \setminus \{0\}$. For $n \in \mathbb{N}^*$, we write $[n]$ for $\{1, \ldots, n\}$. $\mathcal{N}(m, \Sigma)$ is the Gaussian distribution with mean vector $m$ and covariance matrix $\Sigma$, and $X \sim \nu$ means that the random variable $X$ is drawn from the probability distribution $\nu$. The Wasserstein distance of order 2 between any probability measures $\mu, \nu$ on $\mathbb{R}^d$ with finite second moment is
$$\mathrm{W}_2(\mu, \nu) = \Big( \inf_{\zeta \in \mathcal{T}(\mu, \nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|\theta - \theta'\|^2 \, \mathrm{d}\zeta(\theta, \theta') \Big)^{1/2},$$
where $\mathcal{T}(\mu, \nu)$ is the set of transference plans of $\mu$ and $\nu$.
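Since the order-2 Wasserstein distance is the workhorse of this paper, the snippet below is a minimal numerical sketch of $\mathrm{W}_2$ between two empirical measures, using the POT library (Flamary et al., 2021) cited in Remark 2 below; the sample sizes and the Gaussians are illustrative only.

```python
# Minimal sketch: order-2 Wasserstein distance between two empirical measures
# via exact linear-programming optimal transport (POT library).
import numpy as np
import ot  # pip install pot

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=(200, 5))  # samples from mu
y = rng.normal(loc=1.0, scale=1.0, size=(200, 5))  # samples from nu

a = np.full(200, 1.0 / 200)                  # uniform weights on the samples
b = np.full(200, 1.0 / 200)
M = ot.dist(x, y, metric="sqeuclidean")      # cost matrix ||theta - theta'||^2
w2_squared = ot.emd2(a, b, M)                # optimal value of the squared cost
print(f"W2(mu, nu) ~= {np.sqrt(w2_squared):.3f}")
```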
## 2 Related Works

As far as our knowledge goes, the proposed methodology is the first one to tackle the problem of FL from heterogeneous feature spaces. However, some related ideas have been proposed in the literature. The idea of using distribution alignment has been considered in the FL literature, but only for addressing distribution shifts on clients (Zhang et al., 2021b; Ye et al., 2022).

| Method | Type | ≠ feature spaces | Multi-party | No shared ID | No shared feature |
|-------------------------|--------|--------------------|---------------|----------------|---------------------|
| (Zhang et al., 2021a) | PFL | ✗ | ✓ | ✓ | ✗ |
| (Diao et al., 2021) | PFL | ✗ | ✓ | ✓ | ✗ |
| (Collins et al., 2021) | PFL | ✗ | ✓ | ✓ | ✗ |
| (Shamsian et al., 2021) | PFL | ✗ | ✓ | ✓ | ✗ |
| (Hong et al., 2022) | PFL | ✗ | ✓ | ✓ | ✗ |
| (Makhija et al., 2022) | PFL | ✗ | ✓ | ✓ | ✓ |
| FLIC (this paper) | PFL | ✓ | ✓ | ✓ | ✓ |
| (Hardy et al., 2017) | VFL | ✓ | ✗ | ✗ | ✓ |
| (Yang et al., 2019) | VFL | ✓ | ✗ | ✗ | ✓ |
| (Gao et al., 2019) | FTL | ✓ | ✓ | ✓ | ✗ |
| (Sharma et al., 2019) | FTL | ✗ | ✗ | ✓ | ✗ |
| (Liu et al., 2020) | FTL | ✓ | ✗ | ✗ | ✓ |
| (Mori et al., 2022) | FTL | ✓ | ✓ | ✗ | ✗ |

Table 1: Related works. PFL refers to horizontal personalised FL, VFL to vertical FL and FTL to federated transfer learning.

Other methodological works on autoencoders (Xu et al., 2020), word embeddings (Alvarez-Melis & Jaakkola, 2018; Alvarez-Melis et al., 2019) or FL under high statistical heterogeneity (Makhija et al., 2022; Luo et al., 2021; Zhou et al., 2022) use similar ideas of distribution alignment for calibrating feature extractors and classifiers. Comparing distributions from different spaces has also been considered in a (non-FL) centralised manner using approaches like the Gromov-Wasserstein distance or related distances (Mémoli, 2011; Bunne et al., 2019; Alaya et al., 2022).

Several other works can also be broadly related to the proposed methodology. Loosely speaking, we can divide these related approaches into three categories, namely (i) heterogeneous-architecture personalised FL, (ii) vertical FL and (iii) federated transfer learning. Compared to traditional horizontal personalised FL (PFL) approaches, so-called *heterogeneous-architecture* ones are mostly motivated by local heterogeneity regarding resource capabilities of clients, *e.g.* computation and storage (Zhang et al., 2021a; Diao et al., 2021; Collins et al., 2021; Shamsian et al., 2021; Hong et al., 2022; Makhija et al., 2022). Nevertheless, they never consider features defined on heterogeneous subspaces, which is our main motivation. In vertical federated learning (VFL), clients hold disjoint subsets of features. However, a restrictive assumption is that a large number of users are common across the clients (Yang et al., 2019; Hardy et al., 2017; Angelou et al., 2020; Romanini et al., 2021). In addition, up to our knowledge, no vertical personalised FL approach has been proposed so far, which is restrictive if clients have different business objectives and/or tasks. Finally, some works have focused on adapting standard transfer learning approaches with heterogeneous feature domains to the FL paradigm. These *federated transfer learning* (FTL) approaches (Gao et al., 2019; Mori et al., 2022; Liu et al., 2020; Sharma et al., 2019) stand for FL variants of heterogeneous-feature transfer learning where there are $b$ *source* clients and one *target* client with a target domain. However, these methods do not consider the same setting as ours and assume that clients share a common subset of features. We compare the most relevant approaches among the previous ones in Table 1.

## 3 Proposed Methodology

Problem Formulation. We consider the problem where $b \in \mathbb{N}^*$ clients want to solve a learning task within the *centralised personalised FL paradigm* (Yang et al., 2019; Kairouz et al., 2021), where a central entity orchestrates the collaborative solving of a common machine learning problem by the $b$ clients, without requiring raw data exchanges. The clients are assumed to possess local data sets $\{\mathcal{D}_i\}_{i \in [b]}$ such that, for any $i \in [b]$, $\mathcal{D}_i = \{(x_i^{(j)}, y_i^{(j)})\}_{j \in [n_i]}$, where $x_i^{(j)}$ stands for a feature vector, $y_i^{(j)}$ is a label and $n_i = |\mathcal{D}_i|$. In contrast
However, these methods do not consider the same setting as ours and assume that clients share a common subset of features. We compare the most relevant approaches among the previous ones in Table 1. ## 3 Proposed Methodology Problem Formulation. We consider the problem where b ∈ N ∗clients want to solve a learning task within the *centralised personalised FL paradigm* (Yang et al., 2019; Kairouz et al., 2021), where a central entity orchestrates the collaborative solving of a common machine learning problem by the b clients, without requiring raw data exchanges. The clients are assumed to possess local data sets {Di}i∈[b] such that, for any i ∈ [b], Di = {(x (j) i, y (j) i)}j∈[ni] where x (j) istands for a feature vector, y (j) iis a label and ni = |Di|. In contrast ![3_image_0.png](3_image_0.png) Figure 1: Illustration of part of the proposed methodology for b = 3 clients with *heterogeneous* digit images coming from three different data sets namely MNIST (Deng, 2012), USPS (Hull, 1994) and SVHN (Netzer et al., 2011). The circles with digits inside stand for a group of samples, of a given class, owned by a client and the size of the circles indicates their probability mass. In the subspace Φ, {µi}i∈[b] (and their level sets) refer to some learnable reference measures to which we seek to align the transformed version νϕi of νi. Personalised FL then occurs in the space Φ and aims at learning local models {θi}i∈[b]for each client as well as {ϕi, µi}i∈[b]. Non-personalised FL could also be considered and naturally embedded in the proposed distribution alignement framework. to existing FL approaches, we assume that the raw input features {x (j) i}j∈[ni] of clients live in *heterogeneous* spaces *i.e.* for any i ∈ [b], x (j) i ∈ Xi where Xiis a client-specific measurable space. More precisely, for any i ∈ [b] and j ∈ [ni], we assume that x (j) i ∈ Xi ⊆ R ki such that {Xi}i∈[b] are not part of a common ground metric. This setting is challenging since standard FL approaches (McMahan et al., 2017; Li et al., 2020) and even personalised FL ones (Collins et al., 2021; Hanzely et al., 2021) cannot be directly applied. For simplicity, we assume that all clients want to solve a multi-class classification task with C ∈ N ∗classes. We discuss later how regression tasks can be encompassed in the proposed framework. Methodology. The goal of the proposed methodology, coined FLIC, is to learn a personalised model for each client while leveraging the information stored by other clients' data sets despite the heterogeneity issue. To address this feature space heterogeneity, we propose to map client's features into a fixed-dimension common subspace Φ ⊆ R k by resorting to learned *local* embedding functions {ϕi: Xi → Φ}i∈[b]. In order to preserve semantical information (such as the class associated to a feature vector) from the original data distribution, we seek at learning the functions {ϕi}i∈[b] such that they are aligned with (*i.e.* minimise their distance to) some learnable latent anchor distributions that are shared across all clients. These anchor distributions act as universal "calibrators" for clients, preventing similar semantic information from different clients from scattering across the subspace Φ. This scattering would otherwise impede proper subsequent federated learning of the classification model. 
As depicted in Figure 1, local embedding functions are learned by aligning the mapped probability distributions, denoted as $\nu_{\phi_i}^{(c)}$, conditioned on the class $c \in [C]$, with $C$ learnable anchor measures $\{\mu_c\}_{c \in [C]}$. This alignment is achieved by minimising their distance.

Remark 1. *We want to stress the significance of aligning the class-conditional probability distributions $\nu_{\phi_i}^{(c)}$ with respect to the anchor distributions $\mu_c$. Local and independent learning of the embedding functions $\phi_i$ by each client does not guarantee alignment of the resulting probability distributions in the common subspace $\Phi$. As in unsupervised multilingual embeddings (Grave et al., 2019), alignments are crucial for preserving the semantic similarity of class information. Misalignment occurs when projecting class-conditionals into a lower-dimensional space using an algorithm, like t-SNE, that seeks to preserve only local similarities. Indeed, data from different clients are projected into a subspace in which different class-conditionals may overlap. This is also the case when using a neural network with random weights as a projector, or an auto-encoder. Examples of such phenomena are illustrated in Figure 3. Notably, this figure shows that the alignment with respect to the anchor distribution is crucial to ensure that the class-conditional distributions are aligned in the common subspace $\Phi$.*

Once data from the heterogeneous spaces are embedded in the same latent subspace $\Phi$, we can deploy a federated learning methodology for training from this novel representation space. While any standard FL approach, *e.g.* FedAvg (McMahan et al., 2017), can be used, we consider *personalised* FL where each client has a local model tailored to its specific data distribution, since statistical heterogeneities are still present in $\Phi$ (Tan et al., 2022). Hence, given the aforementioned local embedding functions $\{\phi_i\}$, the model parameters $\{\theta_i \in \mathbb{R}^{d_i}\}$ and some non-negative weights $\{\omega_i\}_{i \in [b]}$ associated to each client such that $\sum_{i=1}^{b} \omega_i = 1$, we consider the following empirical risk minimisation problem:

$$\min_{\theta_{1:b},\phi_{1:b}}f(\theta_{1:b},\phi_{1:b})=\sum_{i=1}^{b}\omega_{i}f_{i}(\theta_{i},\phi_{i})\,,\tag{1}$$

and for any $i \in [b]$,

$$f_{i}(\theta_{i},\phi_{i})=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\ell\left(y_{i}^{(j)},g_{\theta_{i}}^{(i)}\left[\phi_{i}\left(x_{i}^{(j)}\right)\right]\right),\tag{2}$$

where $\ell(\cdot, \cdot)$ stands for a classification loss function between the true label $y_i^{(j)}$ and the predicted one $g_{\theta_i}^{(i)}[\phi_i(x_i^{(j)})]$, where $g_{\theta_i}^{(i)}$ is the local model, with a possibly personalised architecture parameterised by $\theta_i$, taking as input an embedded feature vector $\phi_i(x_i^{(j)}) \in \Phi$.

Objective Function. At this stage, we are able to integrate the FL paradigm and the local embedding function learning into the global objective function we want to optimise, see (1). Remember that we want to learn the parameters $\{\theta_i\}_{i \in [b]}$ of the personalised FL models, in conjunction with the local embedding functions $\{\phi_i\}_{i \in [b]}$ and the shared anchor distributions $\{\mu_c\}$. In particular, the latter have to be aligned with the class-conditional distributions $\{\nu_{\phi_i}^{(c)}\}$.
We enforce this alignment via a Wasserstein regularisation term, leading us to a regularised version of the empirical risk minimisation problem defined in (1), namely

$$\theta_{1:b}^{\star},\phi_{1:b}^{\star},\mu_{1:C}^{\star}=\operatorname*{arg\,min}_{\theta_{1:b},\phi_{1:b},\mu_{1:C}}\sum_{i=1}^{b}F_{i}(\theta_{i},\phi_{i},\mu_{1:C})\,,$$

and for any $i \in [b]$,

$$F_{i}(\theta_{i},\phi_{i},\mu_{1:C})=\omega_{i}f_{i}(\theta_{i},\phi_{i})+\lambda_{1}\omega_{i}\sum_{c\in\mathcal{Y}_{i}}\mathrm{W}_{2}^{2}\left(\mu_{c},\nu_{\phi_{i}}^{(c)}\right)+\lambda_{2}\omega_{i}\sum_{c\in\mathcal{Y}_{i}}\frac{1}{J}\sum_{j=1}^{J}\ell\left(c,g_{\theta_{i}}^{(i)}\left[Z_{c}^{(j)}\right]\right)\,,\tag{3}$$

where $\{Z_c^{(j)}; j \in [J]\}_{c \in [C]}$ stand for samples drawn from $\{\mu_c\}_{c \in [C]}$, and $\lambda_1, \lambda_2 > 0$ are regularisation parameters. The second term in (3) aims at aligning the conditional probability distributions of the transformed features to the anchors. The third one is an optional term aiming to calibrate the anchor distributions with the classifier in cases where two or more classes are still ambiguous after mapping onto the common feature space; it also has some benefits for tackling covariate shift in standard FL (Luo et al., 2021).

Design Choices and Justifications. In the sequel, we consider Gaussian anchor measures $\mu_c = \mathcal{N}(v_c, \Sigma_c)$ where $v_c \in \mathbb{R}^k$ and $c \in [C]$. One of the key advantages of this Gaussian assumption is that, under mild assumptions, it guarantees the existence of a transport map $T^{(i)}$ such that $T^{(i)}_{\#}(\nu_i) = \mu$, owing to Brenier's theorem (Santambrogio, 2015), since a mixture of Gaussians admits a density with respect to the Lebesgue measure. Hence, in our case, learning the local embedding functions boils down to approximating this transport map $T^{(i)}_{\#}$ by $\phi_i$. We also approximate the conditional probability measures $\{\nu_{\phi_i}^{(c)}; c \in \mathcal{Y}_i\}_{i \in [b]}$ by Gaussian measures $\{\hat{\nu}_{\phi_i}^{(c)} = \mathcal{N}(\hat{m}_i^{(c)}, \hat{\Sigma}_i^{(c)}); c \in \mathcal{Y}_i\}_{i \in [b]}$ such that for any $i \in [b]$ and $c \in [C]$, $\hat{m}_i^{(c)}$ and $\hat{\Sigma}_i^{(c)}$ stand for the empirical mean vector and covariance matrix. The relevance of this approximation is detailed in Appendix S1.2. These two Gaussian choices (for the anchor distributions and the class-conditional distributions) allow us to have a closed-form expression for the Wasserstein distance of order 2 which appears in (3), see *e.g.* Gelbrich (1990); Dowson & Landau (1982). More precisely, we have for any $i \in [b]$ and $c \in [C]$,

$$\mathrm{W}_{2}^{2}\left(\mu_{c},\nu_{\phi_{i}}^{(c)}\right)=\left\|v_{c}-m_{i}^{(c)}\right\|^{2}+\mathrm{B}^{2}\left(\Sigma_{c},\Sigma_{i}^{(c)}\right),$$

where $\mathrm{B}(\cdot, \cdot)$ denotes the Bures distance between two positive definite matrices (Bhatia et al., 2019).

| Algorithm | Local model | Local weights |
|-----------|-------------|---------------|
| FedAvg-FT | $g_{\theta_i}^{(i)} = g_{\theta_i}$ | $\theta_i$ |
| L2GD | $g_{\theta_i}^{(i)} = g_{\theta_i}$ | $\theta_i = \omega\alpha + (1 - \omega)\beta_i$ |
| FedRep | $g_{\theta_i}^{(i)} = g_{\beta_i}^{(i)} \circ g_{\alpha}$ | $\theta_i = [\alpha, \beta_i]$ |

Table 2: Current personalised FL techniques that can be embedded in the proposed framework. The parameters $\alpha$ and $\beta_i$ respectively stand for shared and local model weights, while $\omega \in [0, 1]$. Typically, $\alpha$ stands for the shared weights associated to the shared layers of a neural network architecture and $\beta_i$ for local ones aiming at performing personalised classification.
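To make the per-client objective concrete, here is a minimal PyTorch sketch of $F_i$ without the optional third term of (3): the classification risk (2) plus the closed-form Gaussian $\mathrm{W}_2^2$ alignment term above. The tiny architectures, batch and $\lambda_1$ are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of F_i: cross-entropy risk + lambda_1 * sum_c W2^2(mu_c, nu_hat_c),
# with the closed-form squared 2-Wasserstein distance between Gaussians.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    evals, evecs = torch.linalg.eigh(mat)
    return evecs @ torch.diag(evals.clamp(min=0.0).sqrt()) @ evecs.T

def gaussian_w2_sq(v, cov_v, m, cov_m):
    """||v - m||^2 + Bures^2(cov_v, cov_m) (Gelbrich, 1990)."""
    s = sqrtm_psd(cov_v)
    bures_sq = (torch.trace(cov_v) + torch.trace(cov_m)
                - 2.0 * torch.trace(sqrtm_psd(s @ cov_m @ s)))
    return torch.sum((v - m) ** 2) + bures_sq

k, k_i, C, lam1 = 8, 20, 3, 1.0                      # illustrative sizes
phi_i = nn.Sequential(nn.Linear(k_i, 32), nn.ReLU(), nn.Linear(32, k))
g_i = nn.Linear(k, C)                                 # local classifier head
anchors = [(3.0 * torch.randn(k), torch.eye(k)) for _ in range(C)]  # mu_c

x = torch.randn(60, k_i)                              # raw client-i batch
y = torch.randint(0, C, (60,))                        # labels (each class assumed present)

z = phi_i(x)
risk = F.cross_entropy(g_i(z), y)                     # classification term (2)
align = torch.zeros(())
for c in range(C):
    z_c = z[y == c]                                   # embedded samples of class c
    m_hat = z_c.mean(dim=0)
    cov_hat = (z_c - m_hat).T @ (z_c - m_hat) / (len(z_c) - 1)  # empirical moments
    align = align + gaussian_w2_sq(*anchors[c], m_hat, cov_hat)
loss = risk + lam1 * align                            # F_i without the optional term
loss.backward()
```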
Remark 2. *Instead of Gaussian distribution approximations, we can consider more complex probability distributions. For instance, we can use a Gaussian mixture model (GMM) and still be able to compute the Wasserstein distance cheaply (Chen et al., 2018). We can even make no hypothesis on the data distribution and compute the distance for alignment using the linear-programming-based OT formulation (Flamary et al., 2021), or use any other IPM such as the maximum mean discrepancy (MMD), see Gretton et al. (2012). However, in practice, we found that the closed-form Wasserstein distance achieves slightly better performance than MMD.*

Regarding the parametrization of the local embedding functions $\phi_i$, we consider a neural network architecture which takes into account the nature of the input data and outputs a vector of dimension $k$. For instance, for the digit datasets, we consider a convolutional neural network (CNN), followed by pooling and linear layers, whereas for the BCI datasets, we have used a fully connected neural network, as the data has been preprocessed. The choice of the architecture is left to the user and can be adapted to the problem at hand.

Reference Distribution for Regression. For a regression problem, besides the change in the loss function, we also need to define the reference anchor distributions properly. Our goal is to map all samples from all clients into a common latent subspace in which some structural information about the regression problem is preserved. As such, in order to reproduce the idea of using a Gaussian mixture model as an anchor distribution, we propose to use an infinite family of Gaussian anchors, in which the distribution of a transformed feature vector $\phi(x) \in \mathbb{R}^k$ associated to a response $y$ is mapped onto a unit-variance Gaussian distribution whose mean depends uniquely on $y$. Formally, we define the anchor distribution as

$$\mu_y = \mathcal{N}(m(y), \mathrm{I}_k)$$

where $m(y)$ is a vector of dimension $k$ that is uniquely defined. In practice, we consider $m(y) = y a + (1 - y) b$ where $a$ and $b$ are two distinct vectors in $\mathbb{R}^k$. When training FLIC, this means that for a client $i$, we can compute $\mathrm{W}_2^2\big(\mu_y, \nu_{\phi_i}^{(y)}\big)$ based on the set of training samples $\{(x, y)\}$. In practice, if for a given batch of samples we have a single sample of value $x$, then the Wasserstein distance boils down to $\|\phi_i(x) - m(y)\|_2^2$, which means that we map $x$ to its corresponding vector on the segment $[a, b]$.

## 4 Algorithm

As detailed in (3), we perform personalisation under the FL paradigm by considering local model architectures $\{g_{\theta_i}^{(i)}\}_{i \in [b]}$ and local weights $\theta_{1:b}$. As an example, we could resort to federated averaging with fine-tuning (*e.g.* FedAvg-FT (Collins et al., 2022)), model interpolation (*e.g.* L2GD (Hanzely & Richtárik, 2020; Hanzely et al., 2020)) or partially local models (*e.g.* FedRep (Collins et al., 2021) or the works of Oh et al. (2022); Singhal et al. (2021)). Table 2 details how these methods can be embedded into the proposed methodology: it explicitly shows the local model and local weights associated to each algorithm, as well as how the global weights are taken into account locally. In Algorithm 1, we detail the pseudo-code associated to a specific instance of the proposed methodology where FedRep is used to learn the model parameters $\{\theta_i\}_{i \in [b]}$ under the FL paradigm. For FedRep, each $\theta_i$ includes local and global weights. Besides these learnable parameters, the algorithm also learns the local embedding functions $\phi_{1:b}$ and the anchor distributions $\mu_{1:C}$.
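As a minimal illustration of the FedRep row of Table 2 in the FLIC setting, the sketch below composes a local embedder $\phi_i$, a shared representation $g_\alpha$ and a local head $g_{\beta_i}$; all sizes are assumed, illustrative values.

```python
# Minimal sketch of the FedRep-style decomposition used by FLIC:
# prediction = g_{beta_i}( g_alpha( phi_i(x) ) ), with alpha shared/federated
# and phi_i, beta_i kept local to each client.
import torch
import torch.nn as nn

k, d, C = 32, 16, 5                      # latent dim, shared-rep dim, classes
g_alpha = nn.Linear(k, d)                # shared weights alpha, averaged on the server

class ClientModel(nn.Module):
    def __init__(self, k_i):
        super().__init__()
        # local embedding function phi_i : R^{k_i} -> R^k
        self.phi = nn.Sequential(nn.Linear(k_i, 64), nn.ReLU(), nn.Linear(64, k))
        self.head = nn.Linear(d, C)      # local head g_{beta_i}, never averaged

    def forward(self, x):
        return self.head(g_alpha(self.phi(x)))

clients = [ClientModel(k_i) for k_i in (64, 100, 17)]  # heterogeneous input dims
logits = clients[0](torch.randn(4, 64))                # a batch from client 0
print(logits.shape)                                     # torch.Size([4, 5])
```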
In practice, at a given epoch $t$ of the algorithm, a subset $A_{t+1} \subseteq [b]$ of clients is selected to participate in the training process. Those clients receive the current latent anchor distributions $\mu_{1:C}^{(t)}$ and the current shared representation part of $\theta^{(t)}$. Then, each client locally updates $\phi_i$, $\theta_i$ and $\mu_{1:C}^{(t)}$. The number of local steps $M$ for each client is a hyperparameter of the algorithm. Afterwards, clients send back to the server their updated versions of $\theta^{(t)}$ and $\mu_{1:C}^{(t)}$. Updated global parameters $\theta^{(t+1)}$ and $\mu_{1:C}^{(t+1)}$ are then obtained by weighted averaging of the client updates on appropriate manifolds. The use of the Wasserstein loss in (3) naturally leads to performing the averaging of the local anchor distributions via a Wasserstein barycenter; algorithmic details are provided in the next paragraph. In Algorithm 1, we use for the sake of simplicity the notation $\mathrm{DescStep}(F_i^{(t,m)}, \cdot)$ to denote a (stochastic) gradient descent step on the function $F_i^{(t,m)} = F_i(\theta_i^{(t,m)}, \phi_i^{(t,m)}, \mu_{1:C}^{(t)})$ with respect to a subset of the parameters $(\theta_i, \phi_i, \mu_{1:C})$. This subset is specified in the second argument of $\mathrm{DescStep}$. A fully detailed version of Algorithm 1 is provided in the supplementary material (see Algorithm S2).

Updating and Averaging Anchor Distributions. In this paragraph, we provide algorithmic details regarding steps 14 and 19 in Algorithm 1. For any $c \in [C]$, the anchor distribution $\mu_c$ involves two learnable parameters, namely the mean vector $v_c$ and the covariance matrix $\Sigma_c$. For the mean, step 14 stands for a (stochastic) gradient descent step aiming to obtain a local version of $v_c^{(t)}$, and step 19 boils down to averaging the mean vectors: $v_c^{(t+1)} = (b/|A_{t+1}|) \sum_{i \in A_{t+1}} \omega_i v_{i,c}^{(t+1)}$. For updating the covariance matrix, we resort to the Cholesky decomposition to enforce the positive semi-definiteness of the covariance matrix. Hence, we rewrite it as $\Sigma_c = L_c L_c^\top$ where $L_c \in \mathbb{R}^{k \times k}$, and optimise in step 14 with respect to the factor $L_c$ instead of $\Sigma_c$. Then, we can handle the gradient computation of the Bures distance in step 14 using the work of Muzellec & Cuturi (2018), and obtain a local factor $L_{i,c}^{(t+1)}$ at iteration $t$. In step 19, we compute $L_c^{(t+1)} = (b/|A_{t+1}|) \sum_{i \in A_{t+1}} \omega_i L_{i,c}^{(t+1)}$ and set $\Sigma_c^{(t+1)} = L_c^{(t+1)} [L_c^{(t+1)}]^\top$. When $\lambda_2 = 0$ in (3), these mean vector and covariance matrix updates exactly boil down to performing one stochastic (because of partial participation) gradient descent step to solve the Wasserstein barycenter problem $\arg\min_{\mu_c} \sum_{i=1}^{b} \omega_i \mathrm{W}_2^2(\mu_c, \nu_{\phi_i}^{(c)})$.

Pre-training $\phi_{1:b}$. Owing to the introduction of reference anchor distributions which carry semantic information, each local feature embedding function can be pre-trained by optimising the loss $\sum_{c \in \mathcal{Y}_i} \mathrm{W}_2^2\big(\mu_c, \nu_{\phi_i}^{(c)}\big)$. While in theory pre-training may not be necessary, we believe that it helps reach a better solution of the federated learning problem, as the parameters $\theta_i$ are optimised starting from a better latent representation $\nu_{\phi_i}$. This is a phenomenon that has been observed in the context of fine-tuning (Kumar et al., 2022) and domain generalisation (Rame et al., 2022). Note that in practice, we initialise the mean vectors $v_c$ of the anchor distributions by randomly sampling a Gaussian distribution with a very large variance. By doing so, we ensure a good separability of the classes and the anchor distributions in the latent space. The covariance matrices are initialised as identity matrices.
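The server-side averaging just described (steps 18-19 of Algorithm 1 below) can be sketched as follows; the locally updated anchor parameters are synthetic stand-ins and all sizes are illustrative.

```python
# Minimal sketch of the server aggregation of anchor distributions: weighted
# averaging of mean vectors v_{i,c} and Cholesky factors L_{i,c}, with
# Sigma_c recovered as L_c L_c^T (PSD by construction).
import torch

k, C = 8, 3
b, active = 10, [0, 3, 7]                   # b clients, |A_{t+1}| = 3 active
w = torch.full((b,), 1.0 / b)               # client weights omega_i

# hypothetical locally updated anchor parameters v_{i,c} and L_{i,c}
v_local = {i: torch.randn(C, k) for i in active}
L_local = {i: torch.eye(k).expand(C, k, k).clone()
              + 0.1 * torch.randn(C, k, k).tril()
           for i in active}

scale = b / len(active)                     # corrects for partial participation
v_new = scale * sum(w[i] * v_local[i] for i in active)   # v_c^{(t+1)}
L_new = scale * sum(w[i] * L_local[i] for i in active)   # L_c^{(t+1)}
Sigma_new = L_new @ L_new.transpose(-1, -2)              # Sigma_c^{(t+1)}
print(v_new.shape, Sigma_new.shape)
```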
Remark 3. *Regarding privacy, FL ensures that raw data never leave the client device. Compared to a base FL algorithm such as FedRep, FLIC additionally exchanges the anchor distributions $\mu_{i,1:C}^{(t+1)}$ between the clients and the server. This can be seen as an additional privacy risk. However, each anchor distribution is a set of $C$ Gaussian distributions, providing only aggregated information about the data, and further work is needed to clarify the level of privacy it provides. A possible solution to mitigate a potential privacy leak is to consider differential privacy (Dwork & Roth, 2014) and use a differentially private version of the Wasserstein barycenter algorithm based on a differentially private Wasserstein distance (Lê Tien et al., 2019; Goldfeld & Greenewald, 2020; Rakotomamonjy & Ralaivola, 2021).*

## 5 Non-Asymptotic Convergence Guarantees In A Simplified Setting

Deriving non-asymptotic convergence bounds for Algorithm 1 in the general case is challenging, since the considered $C$-class classification problem leads to jointly solving personalised FL and federated Wasserstein barycenter problems. Regarding the latter, obtaining non-asymptotic convergence results is still an active research area in the centralised learning framework (Altschuler et al., 2021). As such, we propose to analyse a simpler regression framework where the anchor distribution is known beforehand and not learned under the FL paradigm. While we acknowledge that this theoretical analysis is based on a simplified setting of our approach, it still offers an insightful perspective, and we leave the general case for future work.

Algorithm 1 FLIC for FedRep

Require: initialisation $\mu_{1:C}^{(0)}$, $\phi_{1:b}^{(0,0)}$, $\theta^{(0)} = [\alpha^{(0)}, \beta_{1:b}^{(0,0)}]$.
1: for $t = 0$ to $T - 1$ do
2: Sample a set $A_{t+1}$ of active clients.
3: for $i \in A_{t+1}$ do
4: *// Communication to clients*
5: The central server sends the global parameters $\alpha^{(t)}$ and $\mu_{1:C}^{(t)}$ to the clients in $A_{t+1}$.
6: *// Update local parameters*
7: for $m = 0$ to $M - 1$ do
8: $\phi_i^{(t,m+1)} \leftarrow \mathrm{DescStep}\big(F_i^{(t,m)}, \phi_i^{(t,m)}\big)$.
9: $\beta_i^{(t,m+1)} \leftarrow \mathrm{DescStep}\big(F_i^{(t,m)}, \beta_i^{(t,m)}\big)$.
10: $\phi_i^{(t+1,0)} = \phi_i^{(t,M)}$.
11: $\beta_i^{(t+1,0)} = \beta_i^{(t,M)}$.
12: *// Update global parameters*
13: $\alpha_i^{(t+1)} \leftarrow \mathrm{DescStep}\big(F_i^{(t,M)}, \alpha^{(t)}\big)$.
14: $\mu_{i,1:C}^{(t+1)} \leftarrow \mathrm{DescStep}\big(F_i^{(t,M)}, \mu_{1:C}^{(t)}\big)$.
15: *// Communication with the server*
16: Send $\alpha_i^{(t+1)}$ and $\mu_{i,1:C}^{(t+1)}$ to the central server.
17: *// Averaging global parameters*
18: $\alpha^{(t+1)} = \frac{b}{|A_{t+1}|} \sum_{i \in A_{t+1}} w_i \alpha_i^{(t+1)}$
19: $\mu_{1:C}^{(t+1)} \leftarrow \mathrm{WassersteinBarycenter}(\{\mu_{i,1:C}^{(t+1)}\})$
Ensure: parameters $\alpha^{(T)}$, $\mu_{1:C}^{(T)}$, $\phi_{1:b}^{(T,0)}$, $\beta_{1:b}^{(T,0)}$.

More precisely, we assume that $x_i^{(j)} \sim \mathcal{N}(m_i, \Sigma_i)$ with $m_i \in \mathbb{R}^{k_i}$ and $\Sigma_i \in \mathbb{R}^{k_i \times k_i}$ for $i \in [b]$, $j \in [n_i]$. In addition, we consider that the continuous scalar labels are generated via the oracle model $y_i^{(j)} = (A^\star \beta_i^\star)^\top \phi_i^\star(x_i^{(j)})$ where $A^\star \in \mathbb{R}^{k \times d}$, $\beta_i^\star \in \mathbb{R}^d$ and $\phi_i^\star(\cdot)$ are the ground-truth parameters and feature embedding functions, respectively. We make the following assumptions on the ground truth, which are inherited from those of FedRep.

H1. (i) For any $i \in [b]$, $k_i \geq k$.
(ii) For any $i \in [b]$, $j \in [n_i]$, the embedded features $\phi_i^\star(x_i^{(j)})$ are distributed according to $\mathcal{N}(0_k, \mathrm{I}_k)$.
(iii) The ground-truth model parameters satisfy $\|\beta_i^\star\|_2 = \sqrt{d}$ for $i \in [b]$ and $A^\star$ has orthonormal columns.
(iv) For any $t \in \{0, \ldots, T-1\}$, $|A_{t+1}| = b'$ with $1 \leq b' \leq b$, and if we select $b'$ clients, their ground-truth head parameters $\{\beta_i^\star\}_{i \in A_{t+1}}$ span $\mathbb{R}^d$.
(v) In (2), $\ell(\cdot, \cdot)$ is the $\ell_2$ norm, $\omega_i = 1/b$, $\theta_i = [A, \beta_i]$ and $g_{\theta_i}^{(i)}(x) = (A\beta_i)^\top x$ for $x \in \mathbb{R}^k$.

Under H1-(i), Delon et al.
Under H1-(i), Delon et al. (2022, Theorem 4.1) show that $\phi_i^\star$ can be expressed as a non-unique affine map with a closed-form expression. To align with the true latent anchor distribution $\mu = \mathrm{N}(0_k, \mathrm{I}_k)$, we propose to estimate $\hat{\phi}_i$ by leveraging this closed-form mapping between $\mathrm{N}(m_i, \Sigma_i)$ and $\mu$. Because of the non-unicity of $\phi_i^\star$, we show in Theorem 1 that we can only recover it up to a matrix multiplication. Interestingly, Theorem 1 also proves that the global representation $A^{(T)}$ learnt via FedRep (see Algorithm S3 in Appendix) is able to correct this feature mapping indetermination. The associated convergence behaviour is illustrated in Figure 2 on a toy example whose details are postponed to Appendix S2.

Theorem 1. *Assume H1. Then, for any $x_i \in \mathbb{R}^{k_i}$, we have $\hat{\phi}_i(x_i) = Q\phi_i^\star(x_i)$ where $Q \in \mathbb{R}^{k\times k}$ is of the form $\mathrm{diag}_k(\pm 1)$. Under additional technical assumptions detailed in Appendix S2, we have for any $t \in \{0,\ldots,T-1\}$ and with high probability,*

$$\operatorname{dist}(A^{(t+1)},QA^{\star})\leq(1-\kappa)^{(t+1)/2}\operatorname{dist}(A^{(0)},QA^{\star})\,,$$

*where $\kappa \in (0,1)$ is detailed explicitly in Theorem S3 and* dist *denotes the principal angle distance.*

![8_image_0.png](8_image_0.png)

Figure 2: Illustration of the convergence behaviour of FLIC on a toy linear regression problem. The plot shows the principal angle distance between the estimated linear global map $A^{(t)}$ and the true linear global map $A^\star$ (up to a matrix multiplication). (blue) $A^{(t)}$ is estimated based on the latent-space features obtained using $\phi_i^\star$ and FedRep. (orange) $A^{(t)}$ is estimated based on the latent-space features obtained using $\hat{\phi}_i$. (green) $A^{(t)}$ is estimated based on the latent-space features obtained using $\hat{\phi}_i$ but compared to $QA^\star$. Here, we show FLIC is able to recover the true global map $A^\star$ up to a matrix multiplication.

## 6 Numerical Experiments

For numerically validating the benefits associated with the proposed methodology FLIC, we consider toy problems with different characteristics of heterogeneity, as well as experiments on real data, namely (i) a digit classification problem from images of different sizes, (ii) an object classification problem from either images or text captioning on clients, and (iii) a Brain-Computer Interface problem. Code for reproducing part of the experiments is available at https://github.com/arakotom/flic.

Baselines. Since the problem we are addressing is novel, no FL competitor in the literature can serve as a baseline beyond local learning. However, we propose to modify the methodologies proposed by Makhija et al. (2022) and Collins et al. (2022) to make them applicable to clients with heterogeneous feature spaces. The method proposed by Makhija et al. (2022) handles local representation models with different architectures. Since their key idea is to align the latent representations of fixed-dimensionality inputs shared by the server to all clients, we propose an alternative approach, called FedHeNN, that works for clients with different feature spaces, where we build a Representation Alignment Dataset (RAD) based on the largest feature space and then prune it to obtain a lower-dimensional RAD for each client. We can also adapt the FedRep approach (Collins et al., 2022) to our setting by considering a local feature transformation followed by a shared global representation model and a local classification head. This approach, denoted HetFedRep, maps the input data to a fixed dimensionality before the shared global representation model. It can be understood as a special case of FLIC where the local feature transformations are not enforced to align with the anchor distributions.
We adapted our network architecture to match the baselines by considering two variants. Following Makhija et al. (2022), we treated all layers except the last as the representation learning module for a fair comparison. Therefore, in our approach, the alignment applies to the penultimate layer, and the last layer is the classifier layer. We call this model FLIC-Class. Additionally, we introduced another model, called FLIC-HL, similar to FedRep, but with an extra trainable global hidden layer, with $\alpha$ and $\beta_i$ as parameters of the shared representation and classification layers, respectively.

![9_image_1.png](9_image_1.png)
![9_image_0.png](9_image_0.png)

Figure 3: Projecting the Gaussian class-conditionals stored on 20 clients obtained from the "added features" toy problem. For readability, the problem has been made easier by considering only 3 classes, with all classes present on every client. Means of the class-conditionals are drawn from a zero-mean Gaussian with covariance matrix $\sigma^2\mathrm{I}$ with $\sigma = 3$. Added features range from 1 to 10. (left) *t-sne* projection. (right) *Multi-Dimensional Scaling* (MDS) projection. Different tones of a colour represent the same class-conditionals of a given problem. From this figure, we remark the overlapping of classes regardless of the algorithm used for projection (*t-sne* uses a random initialisation while MDS uses a deterministic one). This emphasises the need for a proper alignment of the class-conditionals during projection.

Data Sets. We consider four different classification problems to assess the performance of our approach. For all simulations, we assume prior probability shift, i.e., each client has access to data of only specific classes. The first problem is a toy problem with 20 classes and Gaussian class-conditional distributions, where we conduct two sub-experiments: adding random spurious features and applying a random linear mapping, both of random dimensionality on each client. Section S3.1 in the supplementary material provides more details about these data sets. The second problem involves digit classification using the MNIST and USPS datasets, with dimensions of 28 × 28 and 16 × 16, respectively. Each client hosts a subset of either the MNIST or the USPS dataset. The third experiment addresses a multimodal problem using a subset of the *TextCaps* dataset (Sidorov et al., 2020), an image captioning dataset. We converted it into a 4-class classification problem with 12,000 and 3,000 examples for training and testing, respectively, based on either the caption text or the image. We used pre-trained models (Bert and ResNet) to embed the caption and image into 768-dimensional and 512-dimensional vectors, respectively. To create heterogeneity, we randomly pruned 10% of the features on each client. Each client hosts either image or text embeddings. Finally, the fourth problem is a real medical problem, denoted Brain-Computer Interface (BCI), which consists in classifying mental imagery EEG datasets into five classes. The datasets we have considered are based on six datasets from the mental imagery MOABB data repository (Jayaram & Barachant, 2018) (details are given in Section S3.1 in the supplement). Each of those EEG datasets has been acquired on different subjects and has a different number of channels and classes.
We used a vectorised channel-dependent covariance matrix representation of each EEG signal as features (Yger et al., 2016; Barachant et al., 2022). Hence, the dimensionality of the feature space is different for each dataset. In all the experiments, we have considered each subject as a client owning his own dataset. In practice, the number of training examples on a client ranges from 30 to 600 while the dimensionality of the features goes from 6 to 1,830.

Illustrating the need for anchor distributions. The main bottleneck for applying FL algorithms to heterogeneous feature spaces is the lack of a common space. One can argue that such a common space can be created by simply projecting the data onto a joint common space; as we have claimed, we illustrate here that this is not sufficient. To do so, we have considered the "added noisy features" toy problem with the 3 class-conditionals over 20 clients. We have projected those class-conditionals onto a common space using different projection algorithms, namely *t-sne* (Van der Maaten & Hinton, 2008) and multi-dimensional scaling (MDS) (Kruskal, 1964). The results are shown in Figure 3. We can note that just projecting into a common subspace without taking into account the semantics of the class-conditionals leads to overlapping classes. This emphasises the need for a proper alignment of the class-conditionals during projection, based on reference distributions as we propose in FLIC. In Appendix S3.7, we provide quantitative evidence of the need for a proper alignment of the class-conditionals by exhibiting the poor performance of FedRep trained on data from this common space.

Experimental Setting. For the experimental analysis, we use the codebase of Collins et al. (2021) with some modifications to meet our setting. For all experiments and algorithms, we consider T = 50 communication rounds and, at each round, a client participation rate of r = 0.1. The number of local epochs for training has been set to M = 10. As optimiser, we have used Adam with a learning rate of 0.001 for all problems and approaches. Further details are given in Section S3.3 in the supplement. For each component of the latent anchor distribution, we consider a Gaussian with a learnable mean vector and a fixed identity covariance matrix. As such, the Wasserstein barycenter computation boils down to simply averaging the means of the client updates, and for computing the third term in (3), we just sample from the Gaussian distribution. Accuracies are computed as the average accuracy over all clients after the last epoch in which all local models are trained.

![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
![10_image_2.png](10_image_2.png)

Figure 4: Performance of FLIC and competitors on the toy data sets with respect to the number of clients. (left) Gaussian classes in dimension k = 5 with added noisy features. (right) Gaussian classes in dimension k = 30, transformed by a random linear mapping. Only 3 classes are present on each client among the 20 possible ones.

Results on Toy Data Sets. Figure 4 depicts the performance, averaged over 5 runs, of the different algorithms with respect to the number of clients when only 3 classes are present on each client. Since the fixed amount of training data (2,000 per class) is equally shared across the clients holding a given class, as we increase the number of clients, we expect the problem to be harder and the performance to decrease.
For both data sets, we can note that for the *noisy feature* setting, FLIC improves on FedHeNN by about 3% accuracy across settings, performs better than local learning and is comparable to HetFedRep. For the *linear mapping* setting, FLIC performs better than the other approaches, with a gain of about 4%, while the gap tends to decrease as the number of clients increases. For this problem, HetFedRep fails to achieve good performance since its local embedding functions are handled as any other local layers and are not enforced to align with the anchor distributions, which highlights the need for a proper alignment of the class-conditionals. Interestingly, for both problems, FLIC-HL performs slightly better than FLIC-Class, showing the benefit of adding a shared representation layer $\alpha$ in addition to the local embedding functions and the classification layer. Figure 5 also illustrates how samples embedded with $\phi_i$ evolve during training towards the anchor distributions $\mu_{1:C}$. At the start, they are clustered client-wise (same markers with different colours are grouped together). As iterations go on, samples converge towards the relevant class-wise anchor distributions. For instance, we note that after 50 epochs, samples from class 5 are converging towards the mean of the related anchor distribution. After 100 epochs, they are almost all concentrated around the mean.

![11_image_0.png](11_image_0.png)

Figure 5: 2D *t-sne* projection of 5 classes partially shared by 3 clients for the **toy linear mapping** dataset after learning the local transformation functions for (left) 10 epochs, (middle) 50 epochs, (right) 100 epochs. The three different markers represent the different clients while classes are represented by different colour tones. The ⋆ marker represents the class-conditional mean of the reference anchor distribution. We note that the training set converges towards those means.

Table 3: Performance over 3 runs of our FLIC model and the competitors on some real-data problems (*Digits* and *TextCaps* data sets).

| Data sets (setting) | Local | FedHeNN | HetFedRep | FLIC-Class | FLIC-HL |
|--------------------------------------|-------------|-------------|-------------|--------------|-------------|
| Digits (b = 100, 3 Classes/client) | 97.49 ± 0.4 | 97.45 ± 0.5 | 57.85 ± 1.4 | 97.83 ± 0.3 | 97.70 ± 0.2 |
| Digits (b = 100, 5 Classes/client) | 96.15 ± 0.3 | 96.15 ± 0.2 | 54.87 ± 5.0 | 96.46 ± 0.6 | 96.54 ± 0.7 |
| Digits (b = 200, 3 Classes/client) | 93.33 ± 0.2 | 93.40 ± 0.4 | 67.99 ± 2.2 | 94.50 ± 0.3 | 94.51 ± 0.3 |
| Digits (b = 200, 5 Classes/client) | 87.48 ± 1.5 | 87.22 ± 1.8 | 48.88 ± 3.0 | 91.11 ± 0.6 | 91.10 ± 0.7 |
| TextCaps (b = 100, 2 Classes/client) | 84.19 ± 0.8 | 83.99 ± 0.7 | 87.05 ± 0.7 | 89.14 ± 1.1 | 89.68 ± 0.7 |
| TextCaps (b = 100, 3 Classes/client) | 76.04 ± 0.8 | 75.39 ± 0.9 | 77.99 ± 0.6 | 81.27 ± 0.2 | 81.50 ± 0.2 |
| TextCaps (b = 200, 2 Classes/client) | 83.78 ± 1.8 | 83.89 ± 1.7 | 85.48 ± 1.5 | 87.73 ± 0.8 | 87.74 ± 1.3 |
| TextCaps (b = 200, 3 Classes/client) | 74.95 ± 1.1 | 74.77 ± 1.0 | 75.73 ± 0.8 | 79.08 ± 0.7 | 78.49 ± 0.7 |
| BCI (b=54) | 73.51 ± 0.8 | 70.84 ± 1.0 | 75.03 ± 0.6 | 75.17 ± 0.9 | 76.27 ± 0.2 |
| BCI (b=40) | 73.98 ± 0.2 | 71.48 ± 0.6 | 74.23 ± 0.7 | 75.09 ± 1.0 | 75.82 ± 0.3 |

Results on Digits and TextCaps Data Sets. The performance, averaged over 3 runs, of all algorithms on the real-world problems is reported in Table 3.
For the *Digits* data set, we first remark that in all situations, FL algorithms perform a bit better than local learning. In addition, both variants of FLIC achieve better accuracy than the competitors. The difference in performance in favour of FLIC reaches 3% for the most difficult problem. For the *TextCaps* data set, gains in performance of FLIC-HL reach about 4% across settings. While FedHeNN and the FLIC algorithms follow the same underlying principle (alignment of representations in a latent space), our framework benefits from the use of the latent anchor distributions, avoiding the need to sample from the original space. Instead, FedHeNN may fail as the sampling strategy of its RAD approach suffers from the curse of dimensionality and does not properly lead to a successful feature alignment.

Results on Brain-Computer Interfaces. For the BCI problem, performances are reported in Table 3. In this case, the Local model performance also corresponds to the usual BCI performance measure, as models are usually subject-specific. We can note that FLIC-HL achieves better performance than all competitors, with a gain of about 3% accuracy compared to the BCI baseline. In addition, we pave the way to learning BCI models with heterogeneous datasets.

Table 4: Performance over 3 runs of our FLIC model and the competitors on the same real-world problems when processed so as to have the same input sizes.

| Data sets (setting) | Local | FedHeNN | HetFedRep | FedRep | FLIC-Class | FLIC-HL |
|------------------------------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Digits-Resize (b = 100, 3 Classes) | 97.70 ± 0.1 | 97.62 ± 0.1 | 92.63 ± 1.8 | 95.76 ± 0.4 | 98.14 ± 0.1 | 98.05 ± 0.1 |
| Digits-Resize (b = 100, 5 Classes) | 96.36 ± 0.1 | 96.37 ± 0.2 | 94.90 ± 0.7 | 96.07 ± 0.5 | 96.94 ± 0.1 | 96.91 ± 0.1 |
| Digits-Resize (b = 200, 3 Classes) | 93.62 ± 0.3 | 93.56 ± 0.2 | 69.21 ± 3.0 | 92.93 ± 0.8 | 94.73 ± 0.4 | 94.54 ± 0.2 |
| Digits-Resize (b = 200, 5 Classes) | 87.74 ± 1.3 | 87.49 ± 1.0 | 69.27 ± 1.8 | 94.57 ± 0.9 | 91.40 ± 0.4 | 91.16 ± 0.3 |
| TextCaps-Clip (b = 100, 2 Classes) | 96.55 ± 0.5 | 96.44 ± 0.5 | 96.45 ± 0.4 | 82.31 ± 2.6 | 96.59 ± 0.5 | 96.65 ± 0.4 |
| TextCaps-Clip (b = 100, 3 Classes) | 94.38 ± 0.2 | 94.21 ± 0.3 | 94.13 ± 0.2 | 76.47 ± 2.7 | 94.34 ± 0.3 | 94.21 ± 0.3 |
| TextCaps-Clip (b = 200, 2 Classes) | 96.27 ± 0.4 | 96.16 ± 0.4 | 96.36 ± 0.2 | 83.33 ± 1.7 | 96.55 ± 0.2 | 96.44 ± 0.2 |
| TextCaps-Clip (b = 200, 3 Classes) | 93.78 ± 0.2 | 93.55 ± 0.2 | 93.74 ± 0.4 | 73.23 ± 0.5 | 94.04 ± 0.2 | 93.88 ± 0.3 |
| BCI-common (b=40) | 71.24 ± 0.6 | 70.63 ± 0.4 | 72.09 ± 0.1 | 71.50 ± 0.3 | 72.17 ± 0.3 | 72.12 ± 0.6 |

Preprocessing datasets to the same dimensionality. Some preprocessing can be applied to the above datasets so that standard "same dimensionality" FL methods can be considered. For the digit datasets, we can apply a simple resizing of the images. For the TextCaps classification problem, we can extract features from a multimodal embedder such as CLIP (Radford et al., 2021b), in which text and image embeddings are aligned and of dimension 512. For the BCI problem, we used EEGs acquired from the set of common electrodes. This experiment also serves to show that FLIC is not only useful for heterogeneous feature spaces but can also be applied when feature spaces are identical. In such a case, we expect to benefit from the alignment of the class-conditionals to enhance performance. We have run the same experiments as above using these preprocessings, also comparing to plain FedRep, and report the results in Table 4. FLIC still achieves slightly better performance than the competitors on 7 out of 9 settings; more importantly, one should highlight the gain in performance on the BCI problem when considering all sensors at disposal.
## 7 Conclusion

We have introduced a novel and general framework, referred to as FLIC, for personalised FL when clients have heterogeneous feature spaces. Under this framework, we proposed a FL algorithm involving two key components: (i) a local feature embedding function; and (ii) latent anchor distributions which allow matching similar semantic information from each client. Experiments on relevant data sets have shown that FLIC achieves better performance than competing approaches. Finally, we provided theoretical support for the proposed methodology, notably via a non-asymptotic convergence result.

Limitations and Broader Impacts of **FLIC**. One main limitation of FLIC is that it requires a common feature space to be defined with an *ad-hoc* dimensionality. While this dimensionality can be chosen by the user, it is not clear how to select it in practice, and it has to be set to a default value (in our case 64). In addition, the proposed approach has a computational overhead due to the need to learn the local embedding functions. It is also worth noting that the proposed approach is a stateful algorithm that is applicable in a cross-silo setting. FLIC has the potential to widen the scope of privacy-aware FL applications by allowing clients to have heterogeneous feature spaces. This is particularly relevant for medical applications where data are collected from different sources and may have different formats.

## References

Mokhtar Z Alaya, Maxime Bérar, Gilles Gasso, and Alain Rakotomamonjy. Theoretical guarantees for bridging metric measure embedding and optimal transport. *Neurocomputing*, 468:416–430, 2022.

Jason Altschuler, Sinho Chewi, Patrik Robert Gerber, and Austin J Stromme. Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent. In *Advances in Neural Information Processing Systems*, 2021.

David Alvarez-Melis and Tommi Jaakkola. Gromov-Wasserstein Alignment of Word Embedding Spaces. In *Conference on Empirical Methods in Natural Language Processing*, pp. 1881–1890, 2018.

David Alvarez-Melis, Stefanie Jegelka, and Tommi S. Jaakkola. Towards Optimal Transport with Global Invariances. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *International Conference on Artificial Intelligence and Statistics*, volume 89, pp. 1870–1879, 2019.

Nick Angelou, Ayoub Benaissa, Bogdan Cebere, William Clark, Adam James Hall, Michael A Hoeh, Daniel Liu, Pavlos Papadopoulos, Robin Roehm, Robert Sandmann, et al. Asymmetric private set intersection with applications to contact tracing and private vertical federated machine learning. *arXiv preprint arXiv:2011.09350*, 2020.

Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman Divergences. *Journal of Machine Learning Research*, 6(58):1705–1749, 2005.

Alexandre Barachant, Quentin Barthélemy, Jean-Rémi King, Alexandre Gramfort, Sylvain Chevallier, Pedro L. C. Rodrigues, Emanuele Olivetti, Vladislav Goncharenko, Gabriel Wagner vom Berg, Ghiles Reguig, Arthur Lebeurrier, Erik Bjäreholt, Maria Sayu Yamamoto, Pierre Clisson, and Marie-Constance Corsi. pyriemann/pyriemann: v0.3, July 2022. URL https://doi.org/10.5281/zenodo.7547583.

Rajendra Bhatia, Tanvi Jain, and Yongdo Lim. On the Bures–Wasserstein distance between positive definite matrices. *Expositiones Mathematicae*, 37(2):165–191, 2019.

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, and Stefanie Jegelka. Learning Generative Models across Incomparable Spaces.
In *International Conference on Machine Learning*, volume 97, pp. 851–861, 2019. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konevcn`y, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. arXiv preprint arXiv:1812.01097, 2018. Wei-Ning Chen, Christopher A Choquette Choo, Peter Kairouz, and Ananda Theertha Suresh. The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning. In *International* Conference on Machine Learning, volume 162, pp. 3056–3089, 2022. Yongxin Chen, Tryphon T Georgiou, and Allen Tannenbaum. Optimal transport for gaussian mixture models. IEEE Access, 7:6269–6278, 2018. Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting Shared Representations for Personalized Federated Learning. In *International Conference on Machine Learning*, pp. 2089–2099, 2021. Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. FedAvg with Fine Tuning: Local Updates Lead to Representation Learning. In *Advances in Neural Information Processing Systems*, 2022. Julie Delon, Agnes Desolneux, and Antoine Salmona. Gromov–Wasserstein distances between Gaussian distributions. *Journal of Applied Probability*, 59(4):1178–1198, 2022. Li Deng. The MNIST Database of Handwritten Digit Images for Machine Learning Research. IEEE Signal Processing Magazine, 29(6):141–142, 2012. Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients. In *International Conference on Learning Representations*, 2021. D.C Dowson and B.V Landau. The Fréchet distance between multivariate normal distributions. *Journal of* Multivariate Analysis, 12(3):450–455, 1982. Paul-Ambroise Duquenne, Hongyu Gong, and Holger Schwenk. Multimodal and Multilingual Embeddings for Large-Scale Speech Mining. In *Advances in Neural Information Processing Systems*, volume 34, pp. 15748–15761, 2021. Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. *Foundations and* Trends in Theoretical Computer Science, 9(3–4):211–407, 2014. Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021. URL http://jmlr.org/papers/v22/20-451.html. Dashan Gao, Yang Liu, Anbu Huang, Ce Ju, Han Yu, and Qiang Yang. Privacy-preserving Heterogeneous Federated Transfer Learning. In *IEEE International Conference on Big Data (Big Data)*, pp. 2552–2559, 2019. Matthias Gelbrich. On a Formula for the L2 Wasserstein Metric between Measures on Euclidean and Hilbert Spaces. *Mathematische Nachrichten*, 147(1):185–203, 1990. Ziv Goldfeld and Kristjan Greenewald. Gaussian-smoothed optimal transport: Metric structure and statistical efficiency. In Silvia Chiappa and Roberto Calandra (eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of *Proceedings of Machine Learning Research*, pp. 3327–3337. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/goldfeld20a. html. Edouard Grave, Armand Joulin, and Quentin Berthet. 
Unsupervised alignment of embeddings with wasserstein procrustes. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 1880–1890. PMLR, 2019. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(25):723–773, 2012. URL http://jmlr.org/ papers/v13/gretton12a.html. Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. *arXiv preprint* arXiv:2002.05516, 2020. Filip Hanzely, Slavomír Hanzely, Samuel Horváth, and Peter Richtárik. Lower bounds and optimal algorithms for personalized federated learning. *arXiv preprint arXiv:2010.02372*, 2020. Filip Hanzely, Boxin Zhao, and Mladen Kolar. Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques. *arXiv preprint arXiv: 2102.09743*, 2021. Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Richard Nock, Giorgio Patrini, Guillaume Smith, and Brian Thorne. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. *arXiv preprint arXiv:1711.10677*, 2017. Junyuan Hong, Haotao Wang, Zhangyang Wang, and Jiayu Zhou. Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization. In *International Conference on Learning Representations*, 2022. J. J. Hull. A database for handwritten text recognition research. *IEEE Transactions on Pattern Analysis* and Machine Intelligence, 16(5):550–554, 1994. Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-Rank Matrix Completion Using Alternating Minimization. In *ACM Symposium on Theory of Computing*, pp. 665–674, 2013. Vinay Jayaram and Alexandre Barachant. Moabb: trustworthy algorithm benchmarking for bcis. *Journal of* neural engineering, 15(6):066011, 2018. Yihan Jiang, Jakub Konevcn`y, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. *arXiv preprint arXiv:1909.12488*, 2019. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, K. A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G.L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konevcný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and Open Problems in Federated Learning. *Foundations and Trends in Machine Learning*, 14 (1–2):1–210, 2021. Mikhail Khodak, Maria-Florina F Balcan, and Ameet S Talwalkar. Adaptive gradient-based meta-learning methods. *Advances in Neural Information Processing Systems*, 32:5917–5928, 2019. Joseph B Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964. Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. 
In *The Tenth International Conference* on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=UYneFzXSJWh. Nam Lê Tien, Amaury Habrard, and Marc Sebban. Differentially private optimal transport: Application to domain adaptation. In *IJCAI*, pp. 2852–2858, 2019. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated Optimization in Heterogeneous Networks. In *Machine Learning and Systems*, volume 2, pp. 429–450, 2020. Yang Liu, Yan Kang, Chaoping Xing, Tianjian Chen, and Qiang Yang. A Secure Federated Transfer Learning Framework. *IEEE Intelligent Systems*, 35(4):70–82, 2020. Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data. In *Advances in Neural Information Processing* Systems, volume 34, 2021. Zhihan Lv, Liang Qiao, Qingjun Wang, and Francesco Piccialli. Advanced Machine-Learning Methods for Brain-Computer Interfacing. *IEEE/ACM Transactions on Computational Biology and Bioinformatics*, 18 (5):1688–1698, 2021. Disha Makhija, Xing Han, Nhat Ho, and Joydeep Ghosh. Architecture Agnostic Federated Learning for Neural Networks. In *International Conference on Machine Learning*, volume 162, pp. 14860–14870, 2022. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. CommunicationEfficient Learning of Deep Networks from Decentralized Data. In International Conference on Artificial Intelligence and Statistics, volume 54, pp. 1273–1282, 2017. Facundo Mémoli. Gromov–Wasserstein distances and the metric approach to object matching. Foundations of computational mathematics, 11(4):417–487, 2011. Fan Mo, Hamed Haddadi, Kleomenis Katevas, Eduard Marin, Diego Perino, and Nicolas Kourtellis. PPFL: Privacy-Preserving Federated Learning with Trusted Execution Environments. In International Conference on Mobile Systems, Applications, and Services, pp. 94–108, 2021. Junki Mori, Isamu Teranishi, and Ryo Furukawa. Continual Horizontal Federated Learning for Heterogeneous Data. *arXiv preprint arXiv:2203.02108*, 2022. Boris Muzellec and Marco Cuturi. Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions. In *Advances in Neural Information Processing Systems*, volume 31, 2018. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. In *NIPS Workshop on Deep Learning and* Unsupervised Feature Learning, 2011. Jaehoon Oh, SangMook Kim, and Se-Young Yun. FedBABU: Toward Enhanced Representation for Federated Image Classification. In *International Conference on Learning Representations*, 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. In *International Conference on Machine Learning*, volume 139, pp. 8748–8763, 2021a. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021b. Alain Rakotomamonjy and Liva Ralaivola. Differentially private sliced wasserstein distance. 
In *International* Conference on Machine Learning, pp. 8810–8820. PMLR, 2021. Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 10821–10836. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/ paper_files/paper/2022/file/46108d807b50ad4144eb353b5d0e8851-Paper-Conference.pdf. Thomas Rippl, Axel Munk, and Anja Sturm. Limit laws of the empirical Wasserstein distance: Gaussian distributions. *Journal of Multivariate Analysis*, 151:90–109, 2016. Daniele Romanini, Adam James Hall, Pavlos Papadopoulos, Tom Titcombe, Abbas Ismail, Tudor Cebere, Robert Sandmann, Robin Roehm, and Michael A Hoeh. PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN. *arXiv preprint arXiv:2104.00489*, 2021. Filippo Santambrogio. Optimal transport for applied mathematicians. *Birkäuser, NY*, 55(58-63):94, 2015. Aviv Shamsian, Aviv Navon, Ethan Fetaya, and Gal Chechik. Personalized Federated Learning using Hypernetworks. In *International Conference on Machine Learning*, volume 139, pp. 9489–9502, 2021. Shreya Sharma, Chaoping Xing, Yang Liu, and Yan Kang. Secure and Efficient Federated Transfer Learning. In *IEEE International Conference on Big Data (Big Data)*, pp. 2569–2576, 2019. Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In *European Conference on Computer Vision*, pp. 742–758, 2020. Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, John Rush, and Sushant Prakash. Federated Reconstruction: Partially Local Federated Learning. In *Advances in Neural Information Processing Systems*, volume 34, pp. 11220–11232, 2021. Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. Towards Personalized Federated Learning. *IEEE* Transactions on Neural Networks and Learning Systems, pp. 1–17, 2022. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of machine learning* research, 9(11), 2008. R. Veldhuis. The centroid of the symmetrical Kullback-Leibler distance. *IEEE Signal Processing Letters*, 9 (3):96–99, 2002. Roman Vershynin. *High-Dimensional Probability: An Introduction with Applications in Data Science*. Cambridge University Press, 2018. Cedric Villani. *Optimal Transport: Old and New*. Springer Berlin Heidelberg, 2008. Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, and Lawrence Carin. Learning Autoencoders with Relational Regularization. In *International Conference on Machine Learning*, volume 119, pp. 10576–10586, 2020. Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated Machine Learning: Concept and Applications. *Transactions on Intelligent Systems and Technology*, 10(2), 2019. Rui Ye, Zhenyang Ni, Chenxin Xu, Jianyu Wang, Siheng Chen, and Yonina C Eldar. Fedfm: Anchor-based feature matching for data heterogeneity in federated learning. *arXiv preprint arXiv:2210.07615*, 2022. Florian Yger, Maxime Berar, and Fabien Lotte. Riemannian approaches in brain-computer interfaces: a review. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, 25(10):1753–1762, 2016. Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, and Feijie Wu. Parameterized Knowledge Transfer for Personalized Federated Learning. In A. Beygelzimer, Y. Dauphin, P. 
Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021a.

Lin Zhang, Yong Luo, Yan Bai, Bo Du, and Ling-Yu Duan. Federated learning for non-iid data via unified feature learning and optimization objective alignment. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 4420–4428, 2021b.

Tailin Zhou, Jun Zhang, and Danny Tsang. FedFA: Federated Learning with Feature Anchors to Align Feature and Classifier for Heterogeneous Data. *arXiv preprint arXiv:2211.09299*, 2022.

# Appendices

Personalised Federated Learning On Heterogeneous Feature Spaces

Notations and conventions. We denote by $\mathcal{B}(\mathbb{R}^d)$ the Borel $\sigma$-field of $\mathbb{R}^d$, by $\mathbb{M}(\mathbb{R}^d)$ the set of all Borel measurable functions $f$ on $\mathbb{R}^d$, and by $\|\cdot\|$ the Euclidean norm on $\mathbb{R}^d$. For $\mu$ a probability measure on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$ and $f \in \mathbb{M}(\mathbb{R}^d)$ a $\mu$-integrable function, denote by $\mu(f)$ the integral of $f$ with respect to (w.r.t.) $\mu$. Let $\mu$ and $\nu$ be two sigma-finite measures on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$. We write $\mu \ll \nu$ if $\mu$ is absolutely continuous w.r.t. $\nu$, and $\mathrm{d}\mu/\mathrm{d}\nu$ denotes the associated density. We say that $\zeta$ is a transference plan of $\mu$ and $\nu$ if it is a probability measure on $(\mathbb{R}^d\times\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d\times\mathbb{R}^d))$ such that for all measurable sets $\mathsf{A}$ of $\mathbb{R}^d$, $\zeta(\mathsf{A}\times\mathbb{R}^d) = \mu(\mathsf{A})$ and $\zeta(\mathbb{R}^d\times\mathsf{A}) = \nu(\mathsf{A})$. We denote by $\mathcal{T}(\mu,\nu)$ the set of transference plans of $\mu$ and $\nu$. In addition, we say that a couple of $\mathbb{R}^d$-random variables $(X, Y)$ is a coupling of $\mu$ and $\nu$ if there exists $\zeta \in \mathcal{T}(\mu,\nu)$ such that $(X, Y)$ is distributed according to $\zeta$. We denote by $\mathcal{P}_1(\mathbb{R}^d)$ the set of probability measures with finite 1-moment: for all $\mu\in\mathcal{P}_1(\mathbb{R}^d)$, $\int_{\mathbb{R}^d} \|x\|\,\mathrm{d}\mu(x) < \infty$. We denote by $\mathcal{P}_2(\mathbb{R}^d)$ the set of probability measures with finite 2-moment: for all $\mu\in\mathcal{P}_2(\mathbb{R}^d)$, $\int_{\mathbb{R}^d} \|x\|^2\,\mathrm{d}\mu(x) < \infty$. We define the squared Wasserstein distance of order 2 associated with $\|\cdot\|$ for any probability measures $\mu,\nu\in\mathcal{P}_2(\mathbb{R}^d)$ by

$$\mathrm{W}_{2}^{2}(\mu,\nu)=\operatorname*{inf}_{\zeta\in{\mathcal{T}}(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|x-y\|^{2}\mathrm{d}\zeta(x,y)\,.$$

By Villani (2008, Theorem 4.1), for all probability measures $\mu,\nu$ on $\mathbb{R}^d$, there exists a transference plan $\zeta^\star \in \mathcal{T}(\mu,\nu)$ such that for any coupling $(X, Y)$ distributed according to $\zeta^\star$, $\mathrm{W}_2(\mu,\nu) = \mathbb{E}[\|X - Y\|^2]^{1/2}$. This kind of transference plan (respectively coupling) will be called an optimal transference plan (respectively optimal coupling) associated with $\mathrm{W}_2$. By Villani (2008, Theorem 6.16), $\mathcal{P}_2(\mathbb{R}^d)$ equipped with the Wasserstein distance $\mathrm{W}_2$ is a complete separable metric space. For the sake of simplicity and with a slight abuse of notation, we shall use the same notation for a probability distribution and its associated probability density function. For $n\geq 1$, we refer to the set of integers between 1 and $n$ with the notation $[n]$. The $d$-multidimensional Gaussian probability distribution with mean $\mu\in\mathbb{R}^d$ and covariance matrix $\Sigma\in\mathbb{R}^{d\times d}$ is denoted by $\mathrm{N}(\mu,\Sigma)$. Given two matrices $M, N \in \mathbb{R}^{k\times d}$, the principal angle distance between the subspaces spanned by the columns of $M$ and $N$ is given by $\mathrm{dist}(M, N) = \|\hat{M}_\perp^\top \hat{N}\|_2 = \|\hat{N}_\perp^\top \hat{M}\|_2$, where $\hat{M}, \hat{N}$ are orthonormal bases of $\mathrm{Span}(M)$ and $\mathrm{Span}(N)$, respectively. Similarly, $\hat{M}_\perp, \hat{N}_\perp$ are orthonormal bases of the orthogonal complements $\mathrm{Span}(M)^\perp$ and $\mathrm{Span}(N)^\perp$, respectively. This principal angle distance is upper bounded by 1, see Jain et al. (2013, Definition 1).
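Since the anchor distributions used throughout the paper are Gaussian, the W2 distance involved in the objective admits the well-known closed form between Gaussians (see, e.g., Gelbrich, 1990, cited in the references). The following minimal sketch (our illustration, not part of the paper's code) evaluates it:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, S1, m2, S2):
    """Closed-form squared 2-Wasserstein distance between N(m1, S1) and
    N(m2, S2): ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    s2h = sqrtm(S2)                       # matrix square root of S2
    cross = sqrtm(s2h @ S1 @ s2h)         # cross term of the Bures metric
    bures_sq = np.trace(S1 + S2) - 2.0 * np.trace(np.real(cross))
    return float(np.sum((m1 - m2) ** 2) + bures_sq)
```

The second term is the squared Bures distance between the covariance matrices, which is exactly the quantity differentiated with respect to the Cholesky factor in step 14 of Algorithm 1.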
**Outline.** This supplementary material aims at providing the interested reader with a further understanding of the statements made in the main paper. More precisely, in Appendix S1, we support the proposed methodology FLIC with algorithmic and theoretical details. In Appendix S2, we prove the main results stated in the main paper. Finally, in Appendix S3, we provide further experimental design choices and show complementary numerical results.

## S1 Algorithmic And Theoretical Insights

In this section, we highlight alternative but limited ways to cope with feature space heterogeneity, and justify the usage, in the objective function (3) of the main paper, of Wasserstein distances with empirical probability distributions instead of true ones. In addition, we detail the general steps depicted in Algorithm 1.

## S1.1 Some Limited But Common Alternatives To Cope With Feature Space Heterogeneity

Depending on the nature of the spaces $\{\mathcal{X}_i\}_{i\in[b]}$, the feature transformation functions $\{\phi_i\}_{i\in[b]}$ can be either known beforehand or more difficult to find. As an example, if for any $i\in[b]$, $\mathcal{X}_i \subseteq \mathcal{X}$, then we can set mask functions as feature transformation functions in order to only consider features that are shared across all the clients. Besides, we could consider multimodal embedding models to perform the feature transformation on each client (Duquenne et al., 2021). For instance, if clients own either pre-processed images or the text of titles, descriptions and tags, then we can use the Contrastive Language-Image Pre-Training (CLIP) model as a feature transformation function (Radford et al., 2021a). These two examples lead to solving a classical (personalised) FL problem, which can be performed using existing state-of-the-art approaches. However, when the feature transformation functions cannot easily be found beforehand, solving the FL problem at stake becomes more challenging and, up to the authors' knowledge, has never been addressed in the federated learning literature so far.

## S1.2 Use Of Wasserstein Losses Involving Empirical Probability Distributions

Since the true probability distributions $\{\nu_{\phi_i}^{(c)};\, c\in\mathcal{Y}_i\}_{i\in[b]}$ are unknown a priori, we propose in the main paper to estimate the latter using $\{\hat{\nu}_{\phi_i}^{(c)};\, c\in\mathcal{Y}_i\}_{i\in[b]}$ and to replace $\mathrm{W}_2^2(\mu_c, \nu_{\phi_i}^{(c)})$ by $\mathrm{W}_2^2(\mu_c, \hat{\nu}_{\phi_i}^{(c)})$ in the objective function (3) in the main paper. As shown in the following result, this substitution is theoretically grounded when the marginal distributions of the input features are Gaussian.

Theorem S2. *For any $i\in[b]$ and $c\in[C]$, let $n_i^{(c)} = |\mathcal{D}_i^{(c)}|$ where $\mathcal{D}_i^{(c)}$ denotes the subset of the local data set $\mathcal{D}_i$ only involving observations associated to the label $c$. Besides, assume that $\nu_{\phi_i}^{(c)}$ is Gaussian with mean vector $m_i^{(c)}\in\mathbb{R}^k$ and full-rank covariance matrix $\Sigma_i^{(c)}\in\mathbb{R}^{k\times k}$. Then, we have in the limiting case $n_i^{(c)}\to\infty$,*

$$\sqrt{n_{i}^{(c)}}\left(\mathrm{W}_{2}^{2}\left(\mu_{c},\hat{\nu}_{\phi_{i}}^{(c)}\right)-\mathrm{W}_{2}^{2}\left(\mu_{c},\nu_{\phi_{i}}^{(c)}\right)\right)\xrightarrow{\mathrm{in~distribution}}Z_{i}^{(c)}\;,$$

*where $Z_i^{(c)} \sim \mathrm{N}(0, s_i^{(c)})$ and*

$$s_i^{(c)} = 4\,(m_i^{(c)} - v_c)^\top \Sigma_i^{(c)} (m_i^{(c)} - v_c) + 2\operatorname{Tr}\big(\Sigma_i^{(c)}\Sigma_c\big) - 4\sum_{j=1}^{k} \kappa_j^{1/2}\, r_j^\top \Sigma_c^{-1/2}\Sigma_i^{(c)}\Sigma_c^{1/2} r_j\,,$$

*with $\{\kappa_j, r_j\}_{j\in[k]}$ standing for the (eigenvalue, eigenvector) pairs of the symmetric covariance matrix $\Sigma_i^{(c)}$.*

Proof. The proof follows from Rippl et al. (2016, Theorem 2.1) with the specific choices $\mu_1 = \nu_{\phi_i}^{(c)}$, $\mu_2 = \mu_c$ and $\hat{\mu}_1 = \hat{\nu}_{\phi_i}^{(c)}$, which are defined in Section 3 in the main paper.
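In practice, the plug-in estimate justified by Theorem S2 is obtained by fitting a Gaussian to the embedded samples of each class. A minimal sketch (our illustration; it reuses the hypothetical `gaussian_w2_squared` helper sketched earlier and assumes at least two samples per class):

```python
import numpy as np

def empirical_gaussian_w2_squared(latent_batch, v_c, Sigma_c):
    """Plug-in estimate of W2^2(mu_c, nu_hat): fit a Gaussian to the embedded
    samples of class c and evaluate the closed-form Gaussian W2 against the
    anchor mu_c = N(v_c, Sigma_c)."""
    m_hat = latent_batch.mean(axis=0)               # empirical mean
    Sigma_hat = np.cov(latent_batch, rowvar=False)  # empirical covariance
    return gaussian_w2_squared(m_hat, Sigma_hat, v_c, Sigma_c)
```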
## S1.3 Detailed Pseudo-Code For Algorithm 1

In Algorithm S2, we provide algorithmic support to Algorithm 1 in the main paper by detailing how to perform each step. Note that we use the decomposition $\Sigma = LL^\top$ to enforce the positive semi-definite constraint for the covariance matrix $\Sigma$.

Algorithm S2 Detailed version of FLIC when using FedRep

```
Require: initialisation α(0), µ(0)_{1:C} = [Σ(0)_{1:C}, v(0)_{1:C}] with Σ(0)_c = L(0)_c [L(0)_c]^T,
         ϕ(0,0)_{1:b}, β(0,0)_{1:b} and step-size η ≤ η̄ for some η̄ > 0.
 1: for t = 0 to T − 1 do
 2:   Sample a set A_{t+1} of active clients.
 3:   for i ∈ A_{t+1} do
 4:     The central server sends α(t) and µ(t)_{1:C} to A_{t+1}.
 5:     // Update local parameters
 6:     for m = 0 to M − 1 do
 7:       Sample a fresh batch I(i,m)_{t+1} of n'_i samples with n'_i ∈ [n_i].
 8:       Sample Z(j,t,m)_c ~ µ(t)_c for j ∈ I(i,m)_{t+1} and c ∈ Y_i via
          Z(j,t,m)_c = v(t)_c + L(t)_c ξ(t,m)_i where ξ(t,m)_i ~ N(0_k, I_k).
 9:       ϕ(t,m+1)_i = ϕ(t,m)_i − (η n_i / |I(i,m)_{t+1}|) Σ_{j∈I(i,m)_{t+1}}
            ∇_{ϕ_i} ℓ(y(j)_i, g(i)_{[α(t),β(t,m)_i]}[ϕ(t,m)_i(x(j)_i)])
            − η λ1 Σ_{c∈Y_i} ∇_{ϕ_i} W2^2(µ(t)_c, ν(c)_{ϕ(t,m)_i}).
10:       β(t,m+1)_i ← β(t,m)_i − (η n_i / |I(i,m)_{t+1}|) Σ_{j∈I(i,m)_{t+1}} B_j
11:       with B_j = ∇_{β_i} ℓ(y(j)_i, g(i)_{[α(t),β(t,m)_i]}[ϕ(t,m)_i(x(j)_i)])
            − η λ2 Σ_{c∈Y_i} ∇_{β_i} ℓ(y(j)_i, g(i)_{[α(t),β(t,m)_i]}[Z(j,t,m)_c]).
12:     ϕ(t+1,0)_i = ϕ(t,M)_i.
13:     β(t+1,0)_i = β(t,M)_i.
14:     // Update global parameters
15:     α(t+1)_i ← α(t) − (η n_i / |I(i,M)_{t+1}|) Σ_{j∈I(i,M)_{t+1}} A_j
16:     with A_j = ∇_α ℓ(y(j)_i, g(i)_{[α(t),β(t,M)_i]}[ϕ(t,M)_i(x(j)_i)])
            − η λ2 Σ_{c∈Y_i} ∇_α ℓ(y(j)_i, g(i)_{[α(t),β(t,M)_i]}[Z(j,t,M)_c]).
17:     for c = 1 to C do
18:       // m̂(c,t)_i, Σ̂(c,t)_i are the empirical mean and covariance of
          // ϕ(t,M)_i(x(j)_i) for j ∈ I(i,M)_{t+1}
19:       // v(t)_c is the mean of anchor distribution µ(t)_c
20:       Update m̂(c,t)_i, Σ̂(c,t)_i using ϕ(t,M)_i.
21:       v(t+1)_{i,c} = v(t)_c − η λ1 ∇_{v_c} ‖v(t)_c − m̂(c,t)_i‖^2
            − η λ2 Σ_{c∈Y_i} (n_i / |I(i,m)_{t+1}|) Σ_{j∈I(i,m)_{t+1}} ∇_{v_c} ℓ(y(j)_i, g(i)_{[α(t),β(t,M)_i]}[Z(j,t,M)_c]).
22:       L(t+1)_{i,c} = L(t)_c − η λ1 ∇_{L_c} B^2(L(t)_c [L(t)_c]^T, Σ̂(c,t)_i)
            − η λ2 Σ_{c∈Y_i} (n_i / |I(i,m)_{t+1}|) Σ_{j∈I(i,m)_{t+1}} ∇_{L_c} ℓ(y(j)_i, g(i)_{[α(t),β(t,M)_i]}[Z(j,t,M)_c]).
23:     // Communication with the server
24:     Send α(t+1)_i, v(t+1)_{i,1:C} and L(t+1)_{i,1:C} to the central server.
25:   // Averaging global parameters
26:   α(t+1) = (b / |A_{t+1}|) Σ_{i∈A_{t+1}} ω_i α(t+1)_i.
27:   for c = 1 to C do
28:     v(t+1)_c = (b / |A_{t+1}|) Σ_{i∈A_{t+1}} ω_i v(t+1)_{i,c}.
29:     L(t+1)_c = (b / |A_{t+1}|) Σ_{i∈A_{t+1}} ω_i L(t+1)_{i,c} and set Σ(t+1)_c = L(t+1)_c [L(t+1)_c]^T.
Ensure: parameters α(T), µ(T)_{1:C}, ϕ(T,0)_{1:b}, β(T,0)_{1:b}.
```

where B denotes the Bures distance between covariance matrices.

## S1.4 Additional Algorithmic Insights

Scalability. When the number of classes $C$ is large, both local computation and communication costs increase. In this setting, we propose to partition all the classes into $C_{\mathrm{meta}} \ll C$ meta-classes and consider reference measures $\{\mu_c\}_{c\in[C_{\mathrm{meta}}]}$ associated to these meta-classes. As an example, if we are considering a dataset made of features associated to animals, the meta-class refers to an animal (*e.g.* a dog) and the class refers to a specific breed (*e.g.* golden retriever).

Privacy Consideration. As other standard (personalised) FL algorithms, FLIC satisfies first-order privacy guarantees by not allowing raw data exchanges but rather exchanges of local Gaussian statistics. Note that FLIC stands for a post-hoc approach and can be combined with other privacy/confidentiality techniques such as differential privacy (Dwork & Roth, 2014), secure aggregation via secure multi-party computation (Chen et al., 2022) or trusted execution environments (Mo et al., 2021).
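To give a feel for the client-side computation in Algorithm S2, the following simplified PyTorch-style sketch (our illustration, not the authors' implementation; `phi_i`, `head_i` and `alpha` are assumed to be `torch.nn.Module`s, the anchor covariances are parameterised by factors `L`, and the W2 alignment term is reduced to a squared distance between the class-wise batch mean and the anchor mean, which corresponds to identity-covariance anchors) performs one local descent step combining the task loss, the anchor alignment term and the anchor-sample term:

```python
import torch

def local_update(phi_i, head_i, alpha, v, L, x, y, classes_i,
                 lam1=1.0, lam2=0.1, lr=1e-3):
    """One simplified local step in the spirit of steps 7-11 of Algorithm S2."""
    params = list(phi_i.parameters()) + list(head_i.parameters())
    opt = torch.optim.SGD(params, lr=lr)
    z = phi_i(x)                                   # embed the local batch
    task_loss = torch.nn.functional.cross_entropy(head_i(alpha(z)), y)
    align_loss, anchor_loss = 0.0, 0.0
    for c in classes_i:
        mask = (y == c)
        if mask.any():                             # align class-c embeddings with mu_c
            align_loss = align_loss + ((z[mask].mean(0) - v[c]) ** 2).sum()
        # Sample from the anchor mu_c = N(v_c, L_c L_c^T) and fit the head on it.
        xi = torch.randn_like(v[c])
        z_c = v[c] + L[c] @ xi
        anchor_loss = anchor_loss + torch.nn.functional.cross_entropy(
            head_i(alpha(z_c)).unsqueeze(0), torch.tensor([c]))
    loss = task_loss + lam1 * align_loss + lam2 * anchor_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```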
Inference on New Clients. When a client who has not participated in the training procedure appears, there is no need to re-launch a potentially costly federated learning procedure. Instead, the server sends the shared parameters $\{\alpha^{(T)}, \mu_{1:C}^{(T)}\}$ to the new client, and the latter only needs to learn the local parameters $\{\phi_i, \beta_i\}$.

## S2 Proof Of Theorem 1

This section aims at proving Theorem 1 in the main paper. To this end, we first provide in Appendix S2.1 a closed-form expression for the estimated embedded features based on the features embedded by the oracle. Then, in Appendix S2.3, we show technical lemmata that will be used in Appendix S2.2 to show Theorem 1. To prove our results, we consider the following set of assumptions.

H1. (i) *For any $i\in[b]$, $j\in[n_i]$, ground-truth embedded features $\phi_i^\star(x_i^{(j)})$ are distributed according to $\mathrm{N}(0_k, \mathrm{I}_k)$.* (ii) *Ground-truth model parameters satisfy $\|\beta_i^\star\|_2 = \sqrt{d}$ for $i\in[b]$ and $A^\star$ has orthonormal columns.* (iii) *For any $t\in\{0,\ldots,T-1\}$, $|\mathsf{A}_{t+1}| = \lfloor rb\rfloor$ with $1\leq\lfloor rb\rfloor\leq b$, and if we select $\lfloor rb\rfloor$ clients, their ground-truth head parameters $\{\beta_i^\star\}_{i\in\mathsf{A}_{t+1}}$ span $\mathbb{R}^d$.* (iv) *In (2) in the main paper, $\ell(\cdot,\cdot)$ is the $\ell_2$ norm, $\omega_i = 1/b$, $\theta_i = [A, \beta_i]$ and $g_{\theta_i}^{(i)}(x) = (A\beta_i)^\top x$ for $x\in\mathbb{R}^k$.*

## S2.1 Estimation Of The Feature Transformation Functions

As in Section 5 in the main paper, we assume that $x_i^{(j)} \sim \mathrm{N}(m_i, \Sigma_i)$ with $m_i\in\mathbb{R}^{k_i}$ and $\Sigma_i\in\mathbb{R}^{k_i\times k_i}$ for $i\in[b]$, $j\in[n_i]$. In addition, we consider that the continuous scalar labels are generated via the oracle model $y_i^{(j)} = (A^\star\beta_i^\star)^\top \phi_i^\star(x_i^{(j)})$ where $A^\star\in\mathbb{R}^{k\times d}$, $\beta_i^\star\in\mathbb{R}^d$ and $\phi_i^\star(\cdot)$ are the ground-truth parameters and feature transformation functions, respectively. Under H1-(i), the oracle feature transformation functions $\{\phi_i^\star\}_{i\in[b]}$ are assumed to map the $k_i$-dimensional Gaussian distributions $\mathrm{N}(m_i,\Sigma_i)$ to a common $k$-dimensional Gaussian $\mathrm{N}(0_k,\mathrm{I}_k)$. As shown in Delon et al. (2022, Theorem 4.1), there exist closed-form expressions for $\{\phi_i^\star\}_{i\in[b]}$, which can be shown to stand for solutions of a Gromov-Wasserstein problem restricted to Gaussian transport plans. More precisely, these oracle feature transformations stand for affine maps of the form, for any $i\in[b]$,

$$\phi_{i}^{\star}\left(x_{i}^{(j)}\right)=\left[\tilde{\mathrm{I}}_{k}^{(i,\star)}(D_{i}^{(k)})^{-1/2}P_{i}^{\top}\quad 0_{k,k_{i}-k}\right]\left(x_{i}^{(j)}-m_{i}\right)\,,$$

where $\tilde{\mathrm{I}}_k^{(i,\star)} = \mathrm{diag}_k(\pm 1)$ is a $k$-dimensional diagonal matrix with diagonal elements in $\{-1,1\}$, $\Sigma_i = P_i D_i P_i^\top$ is the diagonalisation of $\Sigma_i$, and $D_i^{(k)}$ stands for the restriction of $D_i$ to its first $k$ components. In the sequel, we assume that all oracle feature transformation functions share the same randomness, that is, $\tilde{\mathrm{I}}_k^{(i,\star)} = \tilde{\mathrm{I}}_k^\star = \mathrm{diag}_k(\pm 1)$. For the sake of simplicity, we assume that we know the true latent distribution of $\phi_i^\star(x_i^{(j)})$ and as such consider a pre-fixed reference latent distribution that equals the latter, that is, $\mu = \mathrm{N}(0_k,\mathrm{I}_k)$. Since we know from Delon et al. (2022, Theorem 4.1) that there exist mappings between Gaussian distributions with supports associated to different metric spaces, we propose an estimate of the ground-truth feature transformation functions defined for any $i\in[b]$ by

$$\hat{\phi}_{i}\left(x_{i}^{(j)}\right)=\left[\tilde{\mathrm{I}}_{k}(D_{i}^{(k)})^{-1/2}P_{i}^{\top}\quad 0_{k,k_{i}-k}\right]\left(x_{i}^{(j)}-m_{i}\right)\,,$$

where $\tilde{\mathrm{I}}_k = \mathrm{diag}_k(\pm 1)$. By noting that $\tilde{\mathrm{I}}_k = Q\tilde{\mathrm{I}}_k^\star$, where $Q\in\mathbb{R}^{k\times k}$ is a diagonal matrix of the form $\mathrm{diag}_k(\pm 1)$, it follows that

$$\hat{\phi}_{i}\left(x_{i}^{(j)}\right)=Q\phi_{i}^{\star}\left(x_{i}^{(j)}\right)\,.\tag{S1}$$
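A minimal numerical sketch of this construction (our illustration; the sign choices in `signs` are arbitrary, reflecting the non-unicity discussed above, and $\Sigma_i$ is assumed to have at least $k$ strictly positive eigenvalues):

```python
import numpy as np

def build_phi_hat(Sigma_i, m_i, k, signs=None):
    """Affine estimate phi_hat_i mapping N(m_i, Sigma_i) on R^{k_i} to N(0, I_k):
    x -> [I_tilde D^{-1/2} P^T  0] (x - m_i), keeping the top-k eigendirections."""
    eigval, eigvec = np.linalg.eigh(Sigma_i)       # Sigma_i = P D P^T
    order = np.argsort(eigval)[::-1]               # eigenvalues in decreasing order
    D_k, P_k = eigval[order[:k]], eigvec[:, order[:k]]
    I_tilde = np.diag(signs if signs is not None else np.ones(k))
    W = I_tilde @ np.diag(D_k ** -0.5) @ P_k.T     # k x k_i linear part
    return lambda x: W @ (x - m_i)
```

This is exactly the map used in step 2 of Algorithm S3 below, where the eigenvalues are likewise sorted in decreasing order.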
Algorithm S3 FLIC-FedRep for linear regression and Gaussian features

```
Require: step size η, number of outer iterations T, participation rate r ∈ (0, 1),
         diagonalisations Σ_i = P_i D_i P_i^T sorting eigenvalues in decreasing order.
 1: // Estimation of embedded features
 2: For each client i ∈ [b], set φ̂_i(x(j)_i) = [Ĩ_k (D(k)_i)^{-1/2} P_i^T  0_{k,k_i−k}] (x(j)_i − m_i).
 3: // Initialisation of A(0)
 4: Each client i ∈ [b] sends Z_i = (1/n_i) Σ_{j=1}^{n_i} (y(j)_i)^2 φ̂_i(x(j)_i) [φ̂_i(x(j)_i)]^T
    to the central server.
 5: The central server computes U D U^T ← rank-d SVD((1/b) Σ_{i=1}^{b} Z_i).
 6: The central server initialises A(0) = U.
 7: for t = 0 to T − 1 do
 8:   Sample a set A_{t+1} of active clients such that |A_{t+1}| = ⌊rb⌋.
 9:   for i ∈ A_{t+1} do
10:     The central server sends A(t) to A_{t+1}.
11:     // Update local parameters
12:     β(t+1)_i = argmin_{β_i} Σ_{j=1}^{n_i} (y(j)_i − β_i^T [A(t)]^T φ̂_i(x(j)_i))^2.
13:     // Update global parameters
14:     A(t+1)_i = A(t) − η ∇_A Σ_{j=1}^{n_i} (y(j)_i − [β(t+1)_i]^T A^T φ̂_i(x(j)_i))^2.
15:     // Communication with the server
16:     Send A(t+1)_i to the central server.
17:   // Averaging and orthogonalisation of the global parameter
18:   Ā(t+1) = (1/⌊rb⌋) Σ_{i∈A_{t+1}} A(t+1)_i.
19:   A(t+1), R(t+1) ← QR(Ā(t+1)).
Ensure: parameters A(T), β(T)_{1:b}.
```

In Appendix S2.2, equation (S1) will allow us to relate the ground-truth labels $y_i^{(j)} = (A^\star\beta_i^\star)^\top\phi_i^\star(x_i^{(j)})$ with the estimated predictions $\hat{y}_i^{(j)} = (A^{(T)}\beta_i^{(T)})^\top\hat{\phi}_i(x_i^{(j)})$ obtained via Algorithm S3, starting from the same embedded features.

## S2.2 Proof Of Theorem 1

Let $B\in\mathbb{R}^{b\times d}$ be the matrix having the local model parameters $\{\beta_i\}_{i\in[b]}$ as rows and denote by $B_{\mathsf{A}_{t+1}}\in\mathbb{R}^{\lfloor rb\rfloor\times d}$ its restriction to the row set defined by $\mathsf{A}_{t+1}$ where $|\mathsf{A}_{t+1}| = \lfloor rb\rfloor$ for some $r\in(0,1]$. For the sake of simplicity, we assume in the sequel that all clients have the same number of data points, that is, $n_i = n$ for any $i\in[b]$. For random batches of samples $\{(x_i^{(j)}, y_i^{(j)}),\, j\in[n]\}_{i\in[\lfloor rb\rfloor]}$, we define, similarly to Collins et al. (2021); Jain et al. (2013), the random linear operator $\mathcal{A}\colon\mathbb{R}^{\lfloor rb\rfloor\times d}\to\mathbb{R}^{\lfloor rb\rfloor n}$ for any $M\in\mathbb{R}^{\lfloor rb\rfloor\times d}$ as $\mathcal{A}(M) = [\langle e_i(\phi_i^\star(x_i^{(j)}))^\top, M\rangle]_{1\leq i\leq\lfloor rb\rfloor,\, 1\leq j\leq n}$, where $e_i$ stands for the $i$-th standard basis vector of $\mathbb{R}^{\lfloor rb\rfloor}$. Using these notations, it follows from Algorithm S3 that for any $t\in\{0,\ldots,T-1\}$, the model parameters $\theta_i^{(t+1)} = [A^{(t+1)}, \beta_i^{(t+1)}]$ are computed as follows:

$$B_{\mathsf{A}_{t+1}}^{(t+1)} = \arg\min_{B_{\mathsf{A}_{t+1}}} \frac{1}{\lfloor rb\rfloor\, n}\left\|\mathcal{A}^{(t+1)}\left(B_{\mathsf{A}_{t+1}}^{\star}[A^{\star}]^{\top} - B_{\mathsf{A}_{t+1}}[A^{(t)}]^{\top}Q\right)\right\|^2\,,\tag{S2}$$

$$\bar{A}^{(t+1)} = A^{(t)} - \frac{\eta}{\lfloor rb\rfloor\, n}\left[(\mathcal{A}^{(t+1)})^{\dagger}\mathcal{A}^{(t+1)}\left(B_{\mathsf{A}_{t+1}}^{\star}[A^{\star}]^{\top} - B_{\mathsf{A}_{t+1}}^{(t+1)}[A^{(t)}]^{\top}Q\right)\right]^{\top} Q B_{\mathsf{A}_{t+1}}^{(t+1)}\,,\qquad A^{(t+1)}, R^{(t+1)} \leftarrow \mathrm{QR}\big(\bar{A}^{(t+1)}\big)\,,\tag{S3}$$

where $\mathcal{A}^{(t+1)}$ stands for a specific instance of $\mathcal{A}$ depending on the random subset of active clients available at each round, and $\mathcal{A}^{\dagger}$ is the adjoint operator of $\mathcal{A}$ defined by $\mathcal{A}^{\dagger}(M) = \sum_{i\in[\lfloor rb\rfloor]}\sum_{j=1}^{n} \langle e_i(\phi_i^\star(x_i^{(j)}))^\top, M\rangle\, e_i(\phi_i^\star(x_i^{(j)}))^\top$. The update in (S2) admits a closed-form expression, as shown in the following lemma.

Lemma S1. *For any $t\in\{0,\ldots,T-1\}$, we have*

$$B_{\mathsf{A}_{t+1}}^{(t+1)}=B_{\mathsf{A}_{t+1}}^{\star}[A^{\star}]^{\top}QA^{(t)}-F^{(t)}\;,$$

*where $F^{(t)}$ is defined in (S12), $A^{(t)}$ is defined in (S3) and $B_{\mathsf{A}_t}^{(t)}$ is defined in (S2).*

Proof. The proof follows from the same steps as in Collins et al. (2021, Proof of Lemma 1) using (S2).

Under H1, we have the following non-asymptotic convergence result.
Theorem S3. *Assume H1. Then, for any $x_i\in\mathbb{R}^{k_i}$, we have $\hat{\phi}_i(x_i) = Q\phi_i^\star(x_i)$ where $Q\in\mathbb{R}^{k\times k}$ is of the form $\mathrm{diag}_k(\pm 1)$. Define $E_0 = \mathrm{dist}(A^{(0)}, QA^\star)$. Assume that $n \geq c\,(d^3\log(\lfloor rb\rfloor))/E_0^2 + d^2 k/(E_0^2\lfloor rb\rfloor)$ for some absolute constant $c>0$. Then, for any $t\in\{0,\ldots,T-1\}$, $\eta \leq 1/(4\bar{\sigma}_{\max,\star}^2)$, and with high probability at least $1 - \mathrm{e}^{-110k} - \mathrm{e}^{-110d^2\log(\lfloor rb\rfloor)}$, we have*

$$\operatorname{dist}(A^{(t+1)},QA^{\star})\leq(1-\kappa)^{(t+1)/2}\operatorname{dist}(A^{(0)},QA^{\star})\,,$$

*where $A^{(t)}$ is computed via Algorithm S3,* dist *denotes the principal angle distance and $\kappa\in(0,1)$ is defined as $\kappa = 1 - \eta E_0\bar{\sigma}_{\min,\star}^2/2$.*

Proof. The proof follows first by plugging Lemma S3, Lemma S8 and Lemma S9 into Lemma S2. Then, we use the same technical arguments and steps as in Collins et al. (2021, Proof of Lemma 6).

## S2.3 Technical Lemmata

In this section, we provide a set of useful technical lemmata to prove our main result in Appendix S2.2.

Notations. We begin by defining some notations that will be used in the sequel. For any $t\in\{0,\ldots,T-1\}$, we define

$$Z^{(t+1)}=B_{\mathsf{A}_{t+1}}^{(t+1)}[A^{(t)}]^{\top}Q-B_{\mathsf{A}_{t+1}}^{\star}[A^{\star}]^{\top}\,.\tag{S4}$$

In addition, let $G^{(t)} = (G_{pq}^{(t)})_{p,q\in[d]}$, $C^{(t)} = (C_{pq}^{(t)})_{p,q\in[d]}$ and $D^{(t)} = (D_{pq}^{(t)})_{p,q\in[d]}$ be the block matrices whose blocks are given, for $p,q\in[d]$, by

$$G_{pq}^{(t)} = \frac{1}{n}\sum_{i\in\mathsf{A}_{t+1}}\sum_{j=1}^{n} e_i\,\phi_i^\star(x_i^{(j)})^\top Q a_p^{(t)} [a_q^{(t)}]^\top Q\phi_i^\star(x_i^{(j)})\, e_i^\top\,,\tag{S5}$$

$$C_{pq}^{(t)} = \frac{1}{n}\sum_{i\in\mathsf{A}_{t+1}}\sum_{j=1}^{n} e_i\,\phi_i^\star(x_i^{(j)})^\top Q a_p^{(t)} [a_q^\star]^\top Q\phi_i^\star(x_i^{(j)})\, e_i^\top\,,\tag{S6}$$

$$D_{pq}^{(t)} = \langle a_p^{(t)}, a_q^\star\rangle\,\mathrm{I}_{\lfloor rb\rfloor}\,,\tag{S7}$$

with $a_p^{(t)}\in\mathbb{R}^k$ standing for the $p$-th column of $A^{(t)}\in\mathbb{R}^{k\times d}$ and $a_p^\star\in\mathbb{R}^k$ standing for the $p$-th column of $A^\star\in\mathbb{R}^{k\times d}$. Finally, we define for any $i\in\mathsf{A}_{t+1}$,

$$\Pi^{i}=\frac{1}{n}\sum_{j=1}^{n}\phi_{i}^{\star}(x_{i}^{(j)})[\phi_{i}^{\star}(x_{i}^{(j)})]^{\top}\,,\tag{S8}$$
$$(G^{(t)})^{i}=[A^{(t)}]^{\top}Q\Pi^{i}QA^{(t)}\,,\tag{S9}$$
$$(C^{(t)})^{i}=[A^{(t)}]^{\top}Q\Pi^{i}QA^{\star}\,,\tag{S10}$$
$$(D^{(t)})^{i}=[A^{(t)}]^{\top}QA^{\star}\,.\tag{S11}$$

Using these notations, we also define $\tilde{\beta}^\star = [(\beta_1^\star)^\top,\ldots,(\beta_d^\star)^\top]^\top\in\mathbb{R}^{\lfloor rb\rfloor d}$ and

$$F^{(t)}=\left[([G^{(t)}]^{-1}(G^{(t)}D^{(t)}-C^{(t)})\tilde{\beta}^{\star})_{1},\ldots,([G^{(t)}]^{-1}(G^{(t)}D^{(t)}-C^{(t)})\tilde{\beta}^{\star})_{d}\right]\,.\tag{S12}$$

Technical results. To prove our main result in Theorem S3, we begin by providing a first upper bound on the quantity of interest, namely $\mathrm{dist}(A^{(t+1)}, QA^\star)$. This is the purpose of the next lemma.

Lemma S2. *For any $t\in\{0,\ldots,T-1\}$ and $\eta>0$, we have*

$$\operatorname{dist}\left(A^{(t+1)},QA^{\star}\right)\leq C_{1}+C_{2}\,,$$

*where*

$$C_{1}=\left\|[A_{\perp}^{\star}]^{\top}QA^{(t)}\left(\mathrm{I}_{d}-\frac{\eta}{\lfloor rb\rfloor}[B_{\mathsf{A}_{t+1}}^{(t+1)}]^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right)\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,,\tag{S13}$$

$$C_{2}=\frac{\eta}{\lfloor rb\rfloor}\left\|\left(\frac{1}{n}Q(\mathcal{A}^{(t+1)})^{\dagger}\mathcal{A}^{(t+1)}\left(Z^{(t+1)}\right)Q-Z^{(t+1)}\right)^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,,\tag{S14}$$

*where $A^{(t)}$ is defined in (S3), $B_{\mathsf{A}_t}^{(t)}$ is defined in (S2), $Z^{(t)}$ is defined in (S4) and $R^{(t)}$ comes from the QR factorisation of $\bar{A}^{(t)}$, see step 19 in Algorithm S3.*

Proof. The proof follows from the same steps as in Collins et al. (2021, Proof of Lemma 6) and by noting that $\mathrm{dist}(A^{(t)}, QA^\star) = \mathrm{dist}(QA^{(t)}, A^\star)$ for $t\in\{0,\ldots,T-1\}$.
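Since every bound in this section is stated in terms of the principal angle distance, here is a minimal numerical sketch of it (our illustration, following the definition recalled in the notations; e.g., `principal_angle_distance(A_t, Q @ A_star)` reproduces the quantity plotted in Figure 2):

```python
import numpy as np

def principal_angle_distance(M, N):
    """dist(M, N) = || M_perp_hat^T N_hat ||_2, where hats denote orthonormal
    bases of the column spans; it is 0 iff Span(M) == Span(N) and at most 1."""
    Q_m, _ = np.linalg.qr(M)                 # orthonormal basis of Span(M)
    Q_n, _ = np.linalg.qr(N)                 # orthonormal basis of Span(N)
    # Residual of projecting Q_n onto Span(M); its spectral norm equals
    # || M_perp_hat^T N_hat ||_2 since I - Q_m Q_m^T projects onto Span(M)^perp.
    residual = Q_n - Q_m @ (Q_m.T @ Q_n)
    return np.linalg.norm(residual, 2)
```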
We now have to control the terms $C_1$ and $C_2$. For the sake of clarity, we split the technical results aiming at upper bounding $C_1$ and $C_2$ into two different paragraphs.

Control of $C_1$.

Lemma S3. *Assume H1. Let $\delta_d = c\,d^{3/2}\sqrt{\log(\lfloor rb\rfloor)}/n^{1/2}$ for some absolute constant $c>0$. Then, for any $t\in\{0,\ldots,T-1\}$, with probability at least $1-\mathrm{e}^{-111k^2\log(\lfloor rb\rfloor)}$, we have for $\delta_d\leq 1/2$ and $\eta\leq 1/(4\bar{\sigma}_{\max,\star}^2)$*

$$C_{1}\leq\left[1-\eta\left(1-\operatorname{dist}\left(A^{(0)},QA^{\star}\right)\right)\bar{\sigma}_{\min,\star}^{2}+2\eta\frac{\delta_{d}}{1-\delta_{d}}\bar{\sigma}_{\max,\star}^{2}\right]\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,,$$

*where $\bar{\sigma}_{\min,\star}^2$, $\bar{\sigma}_{\max,\star}^2$ are defined in (S15)-(S16), $C_1$ is defined in (S13), $A^{(t)}$ is defined in (S3) and $R^{(t)}$ comes from the QR factorisation of $\bar{A}^{(t)}$, see step 19 in Algorithm S3.*

Proof. Using the Cauchy-Schwarz inequality, we have

$$C_{1}\leq\left\|(A_{\perp}^{\star})^{\top}QA^{(t)}\right\|_{2}\left\|\mathrm{I}_{d}-\frac{\eta}{\lfloor rb\rfloor}[B_{\mathsf{A}_{t+1}}^{(t+1)}]^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}=\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\left\|\mathrm{I}_{d}-\frac{\eta}{\lfloor rb\rfloor}[B_{\mathsf{A}_{t+1}}^{(t+1)}]^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,.$$

Define the following minimum and maximum singular values:

$$\bar{\sigma}_{\min,\star}^{2}=\min_{\mathsf{A}\subseteq[b],\,|\mathsf{A}|=\lfloor rb\rfloor}\sigma_{\min}\left(\frac{1}{\sqrt{\lfloor rb\rfloor}}B_{\mathsf{A}}^{\star}\right)\,,\tag{S15}$$

$$\bar{\sigma}_{\max,\star}^{2}=\max_{\mathsf{A}\subseteq[b],\,|\mathsf{A}|=\lfloor rb\rfloor}\sigma_{\max}\left(\frac{1}{\sqrt{\lfloor rb\rfloor}}B_{\mathsf{A}}^{\star}\right)\,.\tag{S16}$$

Using Collins et al. (2021, Proof of Lemma 6, equations (67)-(68)), we have, for $\delta_d\leq 1/2$ where $\delta_d$ is defined in Lemma S4, and for $\eta\leq 1/(4\bar{\sigma}_{\max,\star}^2)$,

$$\left\|\mathrm{I}_{d}-\frac{\eta}{\lfloor rb\rfloor}[B_{\mathsf{A}_{t+1}}^{(t+1)}]^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\leq 1-\eta\left(1-\operatorname{dist}\left(A^{(0)},QA^{\star}\right)\right)\bar{\sigma}_{\min,\star}^{2}+2\eta\frac{\delta_{d}}{1-\delta_{d}}\bar{\sigma}_{\max,\star}^{2}\,,$$

with probability at least $1-\mathrm{e}^{-111k^2\log(\lfloor rb\rfloor)}$. The proof is concluded by combining the two previous bounds.

Control of $C_2$. We begin by showing four intermediary results gathered in the next four lemmata.

Lemma S4. *Assume H1. Let $\delta_d = c\,d^{3/2}\sqrt{\log(\lfloor rb\rfloor)}/n^{1/2}$ for some absolute constant $c>0$. Then, for any $t\in\{0,\ldots,T-1\}$, with probability at least $1-\mathrm{e}^{-111k^3\log(\lfloor rb\rfloor)}$, we have*

$$\left\|[G^{(t)}]^{-1}\right\|_{2}\leq\frac{1}{1-\delta_{d}}\,,$$

*where $G^{(t)}$ is defined in (S5).*

Proof. The proof stands as a straightforward extension of Collins et al. (2021, Proof of Lemma 2) by noting that the random variable $Q\phi_i^\star(x_i^{(j)}) = \hat{\phi}_i(x_i^{(j)})$ is sub-Gaussian under H1-(i); as such, it is omitted.

Lemma S5. *Assume H1. Let $\delta_d = c\,d^{3/2}\sqrt{\log(\lfloor rb\rfloor)}/n^{1/2}$ for some absolute constant $c>0$. Then, for any $t\in\{0,\ldots,T-1\}$, with probability at least $1-\mathrm{e}^{-111k^2\log(\lfloor rb\rfloor)}$, we have*

$$\left\|(G^{(t)}D^{(t)}-C^{(t)})B_{\mathsf{A}_{t}}^{\star}\right\|_{2}\leq\delta_{d}\,\left\|B_{\mathsf{A}_{t}}^{\star}\right\|_{2}\,\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\,,$$

*where $G^{(t)}$ is defined in (S5), $D^{(t)}$ is defined in (S7), $C^{(t)}$ is defined in (S6) and $A^{(t)}$ in (S3).*

Proof.
Without loss of generality and to ease notation, we remove the superscript $(t)$ in the proof and re-index the indexes of the clients in $\mathsf{A}_{t+1}$. Let $H=GD-C$. From (S8), (S9), (S10) and (S11), it follows, for any $i\in[\lfloor rb\rfloor]$, that

$$H^{i}=G^{i}D^{i}-C^{i}=A^{\top}Q\Pi^{i}Q(AA^{\top}-\mathrm{I}_{k})QA^{\star}\,.$$

Hence, by using the definition of $H$, we have

$$\|(GD-C)\tilde{\beta}^{\star}\|_{2}^{2}=\sum_{i=1}^{\lfloor rb\rfloor}\|H^{i}\beta_{i}^{\star}\|_{2}^{2}\leq\sum_{i=1}^{\lfloor rb\rfloor}\|H^{i}\|_{2}^{2}\,\|\beta_{i}^{\star}\|^{2}\leq\frac{d}{\lfloor rb\rfloor}\,\|B^{\star}\|_{2}^{2}\sum_{i=1}^{\lfloor rb\rfloor}\|H^{i}\|_{2}^{2}\,,$$

where the last inequality follows almost surely from H1-(iii). As in Collins et al. (2021, Proof of Lemma 3), we then define, for any $j\in[n]$, the vectors

$$u_{i}^{(j)}=\frac{1}{\sqrt{n}}[A^{\star}]^{\top}(AA^{\top}-\mathrm{I}_{k})Q\phi_{i}^{\star}(x_{i}^{(j)})\,,\qquad v_{i}^{(j)}=\frac{1}{\sqrt{n}}A^{\top}Q\phi_{i}^{\star}(x_{i}^{(j)})\,.$$

Let $\mathbb{S}^{d-1}$ denote the unit sphere in $\mathbb{R}^{d}$. Then, by Vershynin (2018, Corollary 4.2.13), we can define $\mathcal{N}_{d}$, the $1/4$-net over $\mathbb{S}^{d-1}$, such that $|\mathcal{N}_{d}|\leq9^{d}$. Therefore, by using Vershynin (2018, Equation (4.13)), we have

$$\left\|H^{i}\right\|_{2}^{2}\leq2\,\max_{z,y\in\mathcal{N}_{d}}\sum_{j=1}^{n}\langle z,u_{i}^{(j)}\rangle\langle v_{i}^{(j)},y\rangle\,.$$

Since $\phi_{i}^{\star}(x_{i}^{(j)})$ is a standard Gaussian vector, it is sub-Gaussian and therefore $\langle z,u_{i}^{(j)}\rangle$ and $\langle v_{i}^{(j)},y\rangle$ are sub-Gaussian with norms $\|\frac{1}{\sqrt{n}}[A^{\star}]^{\top}(AA^{\top}-\mathrm{I}_{k})Q\|_{2}=(1/\sqrt{n})\operatorname{dist}(A,QA^{\star})$ and $1/\sqrt{n}$, respectively. In addition, we have

$$\mathbb{E}\left[\langle z,u_{i}^{(j)}\rangle\langle v_{i}^{(j)},y\rangle\right]=\frac{1}{n}\mathbb{E}\left[z^{\top}\frac{1}{\sqrt{n}}[A^{\star}]^{\top}(AA^{\top}-\mathrm{I}_{k})Q\phi_{i}^{\star}(x_{i}^{(j)})[\phi_{i}^{\star}(x_{i}^{(j)})]^{\top}QAy\right]=\frac{1}{n}z^{\top}\frac{1}{\sqrt{n}}[A^{\star}]^{\top}(AA^{\top}-\mathrm{I}_{k})Ay=0\,,$$

where we have used the facts that $\mathbb{E}[\phi_{i}^{\star}(x_{i}^{(j)})[\phi_{i}^{\star}(x_{i}^{(j)})]^{\top}]=\mathrm{I}$, $Q^{2}=\mathrm{I}_{k}$ and $(AA^{\top}-\mathrm{I}_{k})A=0$. The rest of the proof is concluded by using the Bernstein inequality, following directly the steps detailed in Collins et al. (2021, Proof of Lemma 3, see equations (35) to (39)).

Lemma S6. Assume H1. Let $\delta_{d}=cd^{3/2}\sqrt{\log(\lfloor rb\rfloor)}/n^{1/2}$ for some absolute constant $c>0$. Then, for any $t\in[T]$, with probability at least $1-e^{-111k^{2}\log(\lfloor rb\rfloor)}$, we have

$$\left\|F^{(t)}\right\|_{F}\leq\frac{\delta_{d}}{1-\delta_{d}}\,\left\|B_{\mathsf{A}_{t}}^{\star}\right\|_{2}\,\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\,,$$

where $F^{(t)}$ is defined in (S12) and $A^{(t)}$ in (S3).

Proof. By the Cauchy-Schwarz inequality, we have

$$\left\|F^{(t)}\right\|_{F}=\left\|[G^{(t)}]^{-1}(G^{(t)}D^{(t)}-C^{(t)})B_{\mathsf{A}_{t}}^{\star}\right\|_{2}\leq\left\|[G^{(t)}]^{-1}\right\|_{2}\left\|(G^{(t)}D^{(t)}-C^{(t)})B_{\mathsf{A}_{t}}^{\star}\right\|_{2}\,.$$

The proof is concluded by combining the upper bounds given in Lemma S4 and Lemma S5.

Lemma S7. Assume H1 and let $\delta_{d}^{\prime}=cd\sqrt{k}/\sqrt{\lfloor rb\rfloor n}$ for some absolute positive constant $c$. For any $t\in[T]$ and whenever $\delta_{d}^{\prime}\leq d$, we have with probability at least $1-e^{-110k}-e^{-110d^{2}\log(\lfloor rb\rfloor)}$

$$\frac{1}{\lfloor rb\rfloor}\left\|\left(\frac{1}{n}Q(\mathcal{A}^{(t)})^{\dagger}\mathcal{A}^{(t)}\left(Z^{(t)}\right)Q-Z^{(t)}\right)^{\top}B_{\mathsf{A}_{t}}^{(t)}\right\|_{2}\leq\delta_{d}^{\prime}\,\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\,,$$

where $B_{\mathsf{A}_{t}}^{(t)}$ is defined in (S2) and $Z^{(t)}$ is defined in (S4).

Proof. Let $t\in[T]$.
Note that we have

$$\left(\frac{1}{n}Q(\mathcal{A}^{(t)})^{\dagger}\mathcal{A}^{(t)}\left(Z^{(t)}\right)Q-Z^{(t)}\right)^{\top}B_{\mathsf{A}_{t}}^{(t)}=\frac{1}{n}\sum_{i\in\mathsf{A}_{t}}\sum_{j=1}^{m}\langle Q\phi_{i}^{\star}(x_{i}^{(j)}),z_{i}^{(t)}\rangle Q\phi_{i}^{\star}(x_{i}^{(j)})\left[\beta_{i}^{(t)}\right]^{\top}-z_{i}^{(t)}\left[\beta_{i}^{(t)}\right]^{\top}\,.$$

Let $\mathbb{S}^{k-1}$ and $\mathbb{S}^{d-1}$ denote the unit spheres in $\mathbb{R}^{k}$ and $\mathbb{R}^{d}$, respectively. Then, by Vershynin (2018, Corollary 4.2.13), we can define $\mathcal{N}_{k}$ and $\mathcal{N}_{d}$, $1/4$-nets over $\mathbb{S}^{k-1}$ and $\mathbb{S}^{d-1}$, respectively, such that $|\mathcal{N}_{k}|\leq9^{k}$ and $|\mathcal{N}_{d}|\leq9^{d}$. Therefore, by using Vershynin (2018, Equation (4.13)), we have

$$\left\|\left(\frac{1}{n}Q(\mathcal{A}^{(t)})^{\dagger}\mathcal{A}^{(t)}\left(Z^{(t)}\right)Q-Z^{(t)}\right)^{\top}B_{\mathsf{A}_{t}}^{(t)}\right\|_{2}^{2}=2\max_{u\in\mathcal{N}_{d},v\in\mathcal{N}_{k}}u^{\top}\left(\frac{1}{n}\sum_{i\in\mathsf{A}_{t}}\sum_{j=1}^{m}\langle Q\phi_{i}^{\star}(x_{i}^{(j)}),z_{i}^{(t)}\rangle Q\phi_{i}^{\star}(x_{i}^{(j)})\left[\beta_{i}^{(t)}\right]^{\top}-z_{i}^{(t)}\left[\beta_{i}^{(t)}\right]^{\top}\right)v$$
$$=2\max_{u\in\mathcal{N}_{d},v\in\mathcal{N}_{k}}\frac{1}{n}\sum_{i\in\mathsf{A}_{t}}\sum_{j=1}^{m}\langle Q\phi_{i}^{\star}(x_{i}^{(j)}),z_{i}^{(t)}\rangle\langle u,Q\phi_{i}^{\star}(x_{i}^{(j)})\rangle\langle\beta_{i}^{(t)},v\rangle-\langle u,z_{i}^{(t)}\rangle\langle\beta_{i}^{(t)},v\rangle\,.\tag{S17}$$

In order to control (S17) using the Bernstein inequality as in Lemma S5, we need to characterise, in particular, the sub-Gaussianity of $\langle u,z_{i}^{(t)}\rangle$ and $\langle\beta_{i}^{(t)},v\rangle$, which requires a bound on $\|z_{i}^{(t)}\|$ and $\|\beta_{i}^{(t)}\|$, respectively. From Lemma S1, we have $[\beta_{i}^{(t)}]^{\top}=(\beta_{i}^{\star})^{\top}(A^{\star})^{\top}A^{(t)}-(z_{i}^{(t)})^{\top}$, which leads to

$$\left\|z_{i}^{(t)}\right\|_{2}^{2}=\left\|QA^{(t)}(A^{(t)})^{\top}QA^{\star}\beta_{i}^{\star}-QA^{(t)}f_{i}^{(t)}-A^{\star}\beta_{i}^{\star}\right\|_{2}^{2}=\left\|(QA^{(t)}(A^{(t)})^{\top}Q-\mathrm{I}_{d})A^{\star}\beta_{i}^{\star}-QA^{(t)}f_{i}^{(t)}\right\|_{2}^{2}\leq2\left\|(QA^{(t)}(A^{(t)})^{\top}Q-\mathrm{I}_{d})A^{\star}\right\|_{2}^{2}\|\beta_{i}^{\star}\|^{2}+2\left\|f_{i}^{(t)}\right\|^{2}\leq2d\operatorname{dist}^{2}(A^{(t)},QA^{\star})+2\left\|f_{i}^{(t)}\right\|^{2}\,.$$

Using (S12) and the Cauchy-Schwarz inequality, we have

$$\left\|f_{i}^{(t)}\right\|^{2}=\left\|[G^{i,(t)}]^{-1}(G^{i,(t)}D^{i,(t)}-C^{i,(t)})\beta_{i}^{\star}\right\|^{2}\leq\left\|[G^{i,(t)}]^{-1}\right\|_{2}^{2}\left\|G^{i,(t)}D^{i,(t)}-C^{i,(t)}\right\|_{2}^{2}\left\|\beta_{i}^{\star}\right\|^{2}\leq d\left\|[G^{i,(t)}]^{-1}\right\|_{2}^{2}\left\|G^{i,(t)}D^{i,(t)}-C^{i,(t)}\right\|_{2}^{2}\,,\tag{S18}$$

where the last inequality follows from H1-(ii). Using Lemma S4 and Lemma S5, and similarly to Collins et al. (2021, Equation (45)), it follows for any $i\in\mathsf{A}_{t}$ that

$$\left\|z_{i}^{(t)}\right\|_{2}^{2}\leq4d\,\operatorname{dist}(A^{(t)},QA^{\star})\,,$$

with probability at least $1-e^{-110d^{2}\log(\lfloor rb\rfloor)}$. Similarly, using Lemma S1 and (S18), we have with probability at least $1-e^{-110d^{2}\log(\lfloor rb\rfloor)}$, and for any $i\in\mathsf{A}_{t}$, that

$$\left\|\beta_{i}^{(t)}\right\|^{2}\leq2\left\|[A^{(t)}]^{\top}QA^{\star}\beta_{i}^{\star}\right\|^{2}+2\left\|f_{i}^{(t)}\right\|^{2}\leq4d\,.$$

Besides, note that we have

$$\mathbb{E}\left[\langle Q\phi_{i}^{\star}(x_{i}^{(j)}),z_{i}^{(t)}\rangle\langle u,Q\phi_{i}^{\star}(x_{i}^{(j)})\rangle\langle\beta_{i}^{(t)},v\rangle\right]=\langle u,z_{i}^{(t)}\rangle\langle\beta_{i}^{(t)},v\rangle\,.$$

The proof is then concluded by applying the Bernstein inequality, following the same steps as in the final steps of Collins et al. (2021, Proof of Lemma 5).

We are now ready to control $C_{2}$.

Lemma S8. Assume H1 and let $\delta_{d}^{\prime}=cd\sqrt{k}/\sqrt{\lfloor rb\rfloor n}$ for some absolute positive constant $c$.
For any $t\in\{0,\dots,T-1\}$, $\eta>0$ and whenever $\delta_{d}^{\prime}\leq d$, we have with probability at least $1-e^{-110k}-e^{-110d^{2}\log(\lfloor rb\rfloor)}$

$$C_{2}\leq\eta\delta_{d}^{\prime}\,\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,,$$

where $C_{2}$ is defined in (S14), $A^{(t)}$ is defined in (S3) and $R^{(t)}$ comes from the QR factorisation of $\bar{A}^{(t)}$, see step 19 in Algorithm S3.

Proof. Let $t\in\{0,\dots,T-1\}$ and $\eta>0$. Then, whenever $\delta_{d}^{\prime}\leq d$, we have with probability at least $1-e^{-110k}-e^{-110d^{2}\log(\lfloor rb\rfloor)}$

$$C_{2}=\frac{\eta}{\lfloor rb\rfloor}\left\|\left(\frac{1}{n}[A_{\perp}^{\star}]^{\top}(Q\mathcal{A}^{(t+1)})^{\dagger}\mathcal{A}^{(t+1)}\left(Z^{(t+1)}\right)Q-Z^{(t+1)}\right)^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\leq\frac{\eta}{\lfloor rb\rfloor}\left\|\left(\frac{1}{n}(Q\mathcal{A}^{(t+1)})^{\dagger}\mathcal{A}^{(t+1)}\left(Z^{(t+1)}\right)Q-Z^{(t+1)}\right)^{\top}B_{\mathsf{A}_{t+1}}^{(t+1)}\right\|_{2}\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\leq\eta\delta_{d}^{\prime}\,\operatorname{dist}\left(A^{(t)},QA^{\star}\right)\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\,,$$

where we used the Cauchy-Schwarz inequality for the first inequality and Lemma S7 for the last one.

Control of $\|(R^{(t+1)})^{-1}\|_{2}$. To finalise our proof, it remains to bound $\|(R^{(t+1)})^{-1}\|_{2}$. The associated result is depicted in the next lemma.

Lemma S9. Define $\bar{\delta}_{d}=\delta_{d}+\delta_{d}^{\prime}$, where $\delta_{d}$ and $\delta_{d}^{\prime}$ are defined in Lemma S4 and Lemma S7, respectively. Assume H1. Then, we have with probability at least $1-e^{-110k}-e^{-110d^{2}\log(\lfloor rb\rfloor)}$,

$$\left\|\left(R^{(t+1)}\right)^{-1}\right\|_{2}\leq\left(1-4\eta\frac{\bar{\delta}_{d}}{(1-\bar{\delta}_{d})^{2}}\bar{\sigma}_{\max,\star}^{2}\right)^{-1/2}\,.$$

Proof. The proof follows from Collins et al. (2021, Proof of Lemma 6).

## S3 Experimental Details

## S3.1 Data Sets

We provide some details about the datasets we used for our numerical experiments.

## S3.1.1 Toy Data Sets

The first toy dataset, denoted as *noisy features*, is a 20-class classification problem in which the features for a given class are obtained by sampling a Gaussian distribution of dimension 5, with random mean and identity covariance matrix. The mean of each class is sampled from a Gaussian distribution with zero mean and diagonal covariance matrix $\sigma^2 I$ with $\sigma=0.8$. For building the training set, we sample 2000 examples for each class and equally share those examples among the clients who hold that class. Then, in order to generate some class imbalance on clients, we randomly subsample examples on all clients. For instance, with 100 clients and 2 classes per client, this results in a problem with a total of about 16k samples, with a minimal number of samples of 38 and a maximal one of 400. In order to get different dimensionalities, we randomly append to each client dataset some Gaussian random noisy features, with dimensionality varying from 1 to 10.

The second toy dataset, denoted as *linear mapping*, is a 20-class classification problem where each class-conditional distribution is a Gaussian distribution of dimension 5, with random mean and identity covariance matrix. The mean of each class is sampled from a Gaussian distribution with zero mean and diagonal covariance matrix $\sigma^2 I$ with $\sigma=0.5$. As above, we generate 2000 samples per class, and distribute and subsample them across clients in a similar way, leading to a total number of samples of about 15k. The dimensionality perturbation is modelled by a random (Gaussian) linear transformation that maps the original samples to a space whose dimension goes from 3 to 100.

Examples from a given class are equally shared among clients after a random permutation of their indexes. For subsampling samples on a client, we randomly select a percentage of samples to keep, ranging from 5% to 100%, and then uniformly at random select among all classes on the client the samples we keep.
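To make the construction concrete, here is a minimal numpy sketch of the *noisy features* training-set generator described above; the class count, subsampling fractions and per-client noise dimensions follow the text, while the helper name and the specific two-classes-per-client split are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, sigma, n_per_class = 20, 5, 0.8, 2000

# Class means ~ N(0, sigma^2 I); samples of class c ~ N(mean_c, I).
means = sigma * rng.standard_normal((n_classes, dim))
X = np.concatenate([means[c] + rng.standard_normal((n_per_class, dim))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def make_client(classes, keep_frac, n_noise):
    """Share of a few classes, randomly subsampled, padded with noise dims."""
    idx = rng.permutation(np.flatnonzero(np.isin(y, classes)))
    idx = idx[: int(keep_frac * idx.size)]
    noise = rng.standard_normal((idx.size, n_noise))
    return np.hstack([X[idx], noise]), y[idx]

# e.g. a client holding classes {0, 1}, keeping 30% of its samples,
# with 7 extra noisy feature dimensions.
X_c, y_c = make_client([0, 1], keep_frac=0.3, n_noise=7)
```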
For these problems, the test sets are obtained in the same way as the training sets, but with 1000 examples for each class shared across clients.

## S3.1.2 MNIST-USPS

We consider a digit classification problem with the original MNIST and USPS data sets, which are respectively of dimension 28 × 28 and 16 × 16, and we assume that a client hosts either a subset of the MNIST or of the USPS data set. We use the natural train/test splits of those datasets and randomly share them across clients.

## S3.1.3 TextCaps Data Set

The TextCaps data set (Sidorov et al., 2020) is an image captioning dataset whose goal is to develop a model able to produce a text that captions the image. The dataset is composed of about 21k images and 110k captions, and each image also comes with an object class. For our purpose, we have extracted 14977 image-caption pairs from the following four classes: *Bottle*, *Car*, *Food* and *Book*. At each run, those pairs are separated into 80% train and 20% test sets. Examples from the TextCaps dataset are presented in Figure S7. Images and captions are represented as vectors by feeding them respectively to a pre-trained ResNet18 and a pre-trained BERT, leading to vectors of size 512 and 768. Each client holds either the image or the text representation of a subset of examples, and the associated vectors are randomly pruned of up to 10% of their coordinates. As such, all clients hold datasets with different dimensionalities. For this dataset, we randomly split the samples into train and test sets and share them across clients.

## S3.2 Brain-Computer Interfaces Data Set

The Brain-Computer Interfaces datasets we used are summarized in Table S1. Each dataset description can be obtained from the MOABB library (Jayaram & Barachant, 2018) and at the following URL: http://moabb.neurotechx.com/docs/datasets.html.

Table S1: Summary of the Brain-Computer Interfaces datasets we used. We report the number of subjects (#Subj), the number of channels (#Chan), the number of classes (#Classes), the number of trials per class (#Trials class) and the number of features (#features) once the covariance representation has been vectorized.

| Name | #Subj | #Chan | #Classes | #Trials class | #features |
|-------------|---------|---------|------------|-----------------|--------------|
| AlexMI | 8 | 16 | 3 | 20 | 136 |
| BNCI2014001 | 9 | 22 | 4 | 144 | 253 |
| BNCI2014002 | 14 | 15 | 2 | 80 | 120 |
| BNCI2014004 | 9 | 3 | 2 | 360 | 6 |
| Weibo2014 | 10 | 60 | 7 | 80 | 1830 |
| Zhou2016 | 4 | 14 | 3 | 160 | 105 |

For each subject, we select the predefined train/test splits or use 75% of the trials for training and the remaining 25% for testing. We used a bandpass prefiltering between 8 and 30 Hz of the EEG signals and extracted a covariance matrix for each trial using all available channels. These covariance matrices are vectorized and used as features. The classes that we used for the classification problem are the following ones: ['left hand', 'right hand', 'feet', 'tongue', 'rest'], with the available subset of them used for each dataset. For these datasets, if a subject has more than one session, then we use the last one for generating the test set and the remaining ones for the training set. If we have only a single session, we randomly split the trials into training and test sets.

## S3.3 Models And Learning Parameters

For the toy problems, the TextCaps data set and the BCI one, we used as local transformation functions a fully connected neural network with one input layer, one hidden layer and one output layer.
The number of units in the hidden layer has been fixed to 64, and the dimension of the latent space has also been fixed to 64. A ReLU activation has been applied after the input and hidden layers. For the digits dataset, we used a CNN model with 2 convolutional layers, followed by a max-pooling layer and a sigmoid activation function. Once flattened, we have one fully-connected layer and a ReLU activation. The latent dimension is fixed to 64. For all datasets, as for the local model, in order to be consistent with competitors, we first considered a single-layer linear model implementing the local classifier, as well as a model with one input layer (linear units followed by a LeakyReLU activation function) representing the shared representation layer and an output linear layer.

For training, all methods use Adam with a default learning rate of 0.001 and a batch size of 100. The other hyperparameters have been set as follows. Unless specified, the regularization strengths λ1 and λ2 have been fixed to 0.001. The local sample batch size is set to 100 and the participation rate r to 0.1. For all experiments, we have set the number of communication rounds T to 50 and the number of local epochs to respectively 10 and 100 for the real-world and toy datasets. For FLIC, as in FedRep, those local epochs are followed by one epoch for representation learning. We have trained the local embedding functions for 100 local epochs, with a batch size of 10 for the toy datasets and TextCaps, and of 100 for MNIST-USPS and BCI. Reported accuracies are computed after local training for all clients.

## S3.4 Ablating Loss Curves

![30_image_0.png](30_image_0.png)

Figure S1: Evolution of the local loss curves of three different clients for three different learning situations. See text for details.

In order to gain some understanding of the learning mechanism that involves local and global training (respectively due to the local embedding functions, the local classifier and the global representation learning), we propose to look at local loss curves across different clients. Here, we have considered the *linear mapping* toy dataset as used in the toy problem analysis. However, the learning parameters we have chosen are different from those we have used to produce the results, so as to highlight some specific features. The number of epochs (communication rounds) is set to 100, with a client activation ratio of 0.1. Local epochs are shared for either training the local parameters or the global ones (note that in our reference Algorithm 1, the global parameter is updated only once for each client). The latter are trained starting after the 20-th communication round and, in this case, the local epochs are equally shared between local and global parameter updates. Note that because of the randomness in the client selection at each epoch, the total number of local epochs differs from client to client. We have evaluated three learning situations and plotted the loss curves for each client.

- The local embedding functions and the global models are kept fixed, and only the local classifier is trained. Examples of loss curves for 3 clients are presented in the left plot of Figure S1. For this learning situation, there are no shared global parameters that are trained locally. Hence, the loss curve is typical of those obtained by stochastic gradient descent, with a smooth transition, at multiples of 100 local epochs, when a given client is retrained after a communication round.
- The local embedding functions are kept fixed, while the classifier and global parameters are updated using half of the local epochs each. This situation is interesting and is reported in the middle plot of Figure S1. We can see that, for some rounds of 100 local epochs, a strong drop in the loss occurs starting at the 50-th local epoch, because the global parameters are being updated. Once the local update of a client is finished, the global parameter is sent back to the server and all updates of global parameters are averaged by the server. When a client is selected again for local updates, it is served with a new global parameter (hence a new loss value), which causes the discontinuity in the loss curve at the beginning of each local update.

- All the parts of the model (local embedding functions, global parameter and classifier) are trained. Note at first that the loss value for those curves (bottom plot in Figure S1) is larger than for the two first plots, as the Wasserstein distance to the anchor distribution is now taken into account and tends to dominate the loss. The loss curves are globally decreasing, with larger drops in loss at the beginning of local epochs.

## S3.5 On The Importance Of Alignment Pre-Training And Updates

We have analyzed the impact of pre-training the local embedding functions, and of their updates during learning, for fixed anchor reference distributions. For this, we have kept the same global experimental settings as for the performance comparisons, except that we fixed the number of users to 100. We have compared two learning situations: a first one in which the local embedding functions are pre-trained for K epochs and updated during local training, and a second one in which they are also pre-trained for K epochs but kept fixed during local training. We have reported the classification performance when the number of pre-training epochs K varies from 1 to 200. Results, averaged over 5 runs, are shown in Figure S2.

At first, we can note that there exists a performance gap between 1 and 200 epochs of pre-training. Nonetheless, this gap is not that large, except for the toy *noisy features* dataset. For the three datasets, increasing the number of pre-training epochs up to a certain number tends to increase performance, but overfitting may occur. The latter is mostly reflected in the *toy linear mapping* dataset, for which 10 to 50 epochs are sufficient for good pre-training. Examples of how classes evolve during pre-training are illustrated in Figure 5, through *t-sne* projections. We also illustrate how pre-training impacts the test set and may lead to overfitting, as shown in Figure S4.

## S3.6 On The Impact Of The Participation Rate

We have analyzed the effect of the participation rate of each client on our federated learning approach. Figure S3 reports the accuracies, averaged over 3 runs, of our approach for the toy datasets and the *TextCaps* problem with respect to the participation rate at each round. We can note that the proposed approach is rather robust to the participation rate, but may suffer from overfitting due to overtraining of local models. On the left plot, performances for *TextCaps*, measured after the last communication round, are stable over the participation rate, while those performances tend to decrease for the toy problems.
We attribute this decrease to overfitting: when we report (see right plot) the best performance over communication rounds (and not the last one), performances are stable for all problems. This suggests that the number of local epochs may need to depend on the task on each client and on the client participation rate.

![32_image_0.png](32_image_0.png) ![32_image_1.png](32_image_1.png) ![32_image_2.png](32_image_2.png)

Figure S2: Impact of the number of pre-training epochs for ϕi on the model accuracy. We compared the case where ϕi is (plain) updated during local training or (dashed) kept fixed. Results for three different datasets are reported.

Figure S3: Evolution of the performance of our FLIC-Class algorithm with respect to the participation rate of clients, using the same experimental setting as in Figure 4. (left) Performance after the last communication round; (right) best performance across communication rounds.

## S3.7 Comparing FLIC With t-sne And FedRep

We have claimed several times the need for a proper alignment of the class-conditional distributions when mapping into a common latent subspace. We have illustrated this need in Figure 3, where we have projected the class-conditional distributions of the *toy linear mapping* dataset without alignment, using classical dimensionality reduction algorithms like *t-sne* or multidimensional scaling (MDS). Without alignment, mapped class-conditional distributions can get mixed up, and learning an accurate classification model becomes hard. In order to back up our claim with quantitative results, we have compared the performance of FLIC with a baseline that consists in projecting the data on clients into a common latent subspace (here $\mathbb{R}^{64}$) using *t-sne* and then applying FedRep on the projected data. We have followed the same protocol as for the results on the toy datasets and reported the results in Figure S5. As expected, the performance of this baseline in terms of classification accuracy is lower than that of FLIC, and even lower than that of a local classifier.

![33_image_0.png](33_image_0.png)

Figure S4: 2D *t-sne* projection of 5 classes partially shared by 3 clients for the *toy linear mapping* dataset, after learning the local embedding functions for (left) 10 epochs, (middle) 50 epochs, (right) 100 epochs. Original dimensions on clients vary from 5 to 50. The top row shows the projection of the training set, while the bottom row shows both training and test sets. Star markers represent the projection of the mean of each class-conditional. The three different marker styles represent the different clients. Classes are denoted by colors, and similar tones of color distinguish train and test sets. We see that each class from the training set of each client converges towards the mean of its anchor distribution, represented by the star marker. Interestingly, we also remark that until convergence is reached, the empirical class-conditional distributions on the clients are not equal, making the learning of a joint representation necessary. From the bottom plots, we can understand that the distribution alignment mostly impacts the training set, but this alignment does not always generalize properly to the test sets.

![33_image_1.png](33_image_1.png)

Figure S5: Comparing the baseline *t-sne* projection + FedRep to FLIC and other baselines. We report the classification accuracy for the (left) *toy noisy features* dataset and (right) *toy linear mapping* dataset.
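For reference, the projection step of this *t-sne* + FedRep baseline can be sketched as follows. This is a minimal scikit-learn sketch of our reading of the baseline (each client projected independently into $\mathbb{R}^{64}$), not necessarily the exact implementation used; note that scikit-learn's default Barnes-Hut solver only supports up to 3 output components, so the exact method is needed here.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_client(X, latent_dim=64, seed=0):
    """Embed one client's data into the common latent dimension.

    Each client is projected independently, so nothing aligns the
    class-conditional distributions across clients, which is why this
    baseline underperforms FLIC.
    """
    # method="exact" is required in scikit-learn when n_components > 3.
    tsne = TSNE(n_components=latent_dim, method="exact", random_state=seed)
    return tsne.fit_transform(X)

Z = project_client(np.random.randn(200, 30))  # (200, 64) latent codes
```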
![34_image_0.png](34_image_0.png)

Figure S6: Comparing the computational cost for training all the models over 50 epochs on the (left) *toy noisy features* dataset and (right) *toy linear mapping* dataset.

## S3.8 On The Computational Cost Of FLIC

The approach we propose has a computational overhead compared to the classical federated learning approaches and to the baselines that are able to handle heterogeneous feature spaces. For instance, in addition to the local version of the model on each client, we need to update the local embedding function and the anchor distribution parameters. In order to properly quantify the computational cost of FLIC, we report in Figure S6 the running time needed for training each model over 50 epochs and for an increasing number of clients, on the toy dataset problems. Recall that as the number of clients increases, the number of samples on each client decreases. We can see that, roughly, the computational cost of FLIC is about two times higher than for the other competitors, across the range of clients and for the two learning problems.

## S3.9 Experiments On FEMNIST

We have also evaluated our approach on the FEMNIST dataset. It is part of the LEAF project (LEAF: A Benchmark for Federated Settings) (Caldas et al., 2018), which aims to provide a benchmarking framework for learning in federated settings. It is a 28 by 28 pixel image dataset that is built by partitioning the data in the Extended MNIST dataset based on the writer of the digit/character. It contains 62 different classes, including 10 digits, 26 lowercase letters, and 26 uppercase letters. As in Collins et al. (2021), we have considered subsets of 50 and 150 clients (one writer, one client) and limited ourselves to the 10 digit classes available for each writer. Samples in FEMNIST are all of the same dimensionality, so our goal here is to compare our approach to baselines such as FedRep on a well-known federated learning problem.

For training, we used the same parameters as for the other datasets, except that the number of epochs has been set to 1000, as the learning problem is more complex and needs more epochs to converge. Results, averaged over 3 runs, are reported in Table S2. We first note that the performance of FedRep is in line with the one reported in Collins et al. (2021). Then, we remark that our approach outperforms the competitors on this dataset by a large margin, with a performance gain of about 5% over the best competitor. This is a strong result that shows the potential of our approach on real-world datasets, even when the feature spaces are homogeneous. We can also highlight the impact of pre-training the local embedding functions by comparing the performance of FLIC and HetFedRep (the latter is a variant of FLIC where the local embedding functions are not pre-trained). Here, pre-training the local embedding functions plays the role of a domain adaptation mechanism that allows a better alignment of the class-conditional distributions of each client with a common shared representation.

Table S2: Performance comparison on the FEMNIST dataset. We report the classification accuracy for 50 and 150 clients.
| Method | 50 clients | 150 clients |
|------------|--------------|---------------|
| Local | 77.29 ± 0.6 | 79.59 ± 0.7 |
| FedHeNN | 71.01 ± 0.8 | 74.77 ± 0.2 |
| HetFedRep | 52.90 ± 2.0 | 57.78 ± 2.3 |
| FedRep | 70.29 ± 3.0 | 74.55 ± 1.3 |
| FLIC-Class | 82.25 ± 0.4 | 84.81 ± 0.4 |
| FLIC-HL | 82.37 ± 0.6 | 84.30 ± 0.4 |

![35_image_0.png](35_image_0.png) ![35_image_1.png](35_image_1.png) ![35_image_2.png](35_image_2.png) ![35_image_3.png](35_image_3.png)

Figure S7: Examples of some TextCaps image/caption pairs from the 4 classes we considered: (top-left) *Food*, (top-right) *Bottle*, (bottom-left) *Book*, (bottom-right) *Car*. We can see how difficult some examples can be, especially from the caption point of view, since few hints about the class are provided by the text.
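As a pointer for reproducing the TextCaps representations described in Appendix S3.1.3, here is a minimal PyTorch sketch of the feature extraction. The choice of the penultimate ResNet18 layer and of the BERT [CLS] token as pooling are our assumptions, not confirmed details of the original pipeline; the model and tokenizer names are the standard torchvision/Hugging Face ones.

```python
import torch
from torchvision import models
from transformers import BertModel, BertTokenizer

# Pre-trained ResNet18 with its classification head removed -> 512-d features.
resnet = models.resnet18(weights="IMAGENET1K_V1")
resnet.fc = torch.nn.Identity()
resnet.eval()

# Pre-trained BERT -> 768-d caption features.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def featurize(images, captions):
    """images: (B, 3, 224, 224) tensor; captions: list of B strings."""
    img_feat = resnet(images)                                  # (B, 512)
    enc = tokenizer(captions, padding=True, truncation=True,
                    return_tensors="pt")
    txt_feat = bert(**enc).last_hidden_state[:, 0]             # (B, 768)
    return img_feat, txt_feat
```

A client would then keep either the image or the text vectors for its examples, with a random subset of up to 10% of the coordinates dropped to produce heterogeneous dimensionalities.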
Review 1:

Summary: The authors propose a federated learning algorithm to handle cases where the datasets and features are heterogeneous across clients. The key ingredients of the proposed FLIC framework are the following:
* Map client data to a common representation space with a learnable mapping $\phi_i$ for client $i$.
* In the common representation space, encourage examples from class $c$ from all clients to be close to an anchor Gaussian distribution $\mu_c$.
* On top of these representations, learn a classifier parameterized by global parameters $\alpha$ and local parameters $\beta_i$ for clients $i \in [b]$.

The authors apply their methods to
* Two toy datasets with Gaussian data
* MNIST and USPS digit classification task
* An image, caption to class task
* A Brain Computer Interface data task.

Strengths and Weaknesses: The proposed FLIC framework is interesting -- in particular, the alignment of representations of same-class examples with a latent anchor distribution. In numerical experiments, the proposed methods FLIC-HL and FLIC-Class perform well on these problems. The FLIC methods have only marginal gains over competing methods. The theoretical result (Theorem 1) appears unsatisfactory.

Requested Changes: Details for experiments should be fully specified. For example, in the toy dataset problems, the following details should be added.
* How are the means of the class-conditional distributions generated? How separated are the means compared to the noise?
* How are examples subsampled in each client?
* How is the random matrix formed?

For the plot in Figure 3 (left), more details are required for reproducibility. For example, the Gaussian distributions used to generate data for the 10 classification problems.

In Figure 4 (left), why is the accuracy going down as we increase the number of clients? In Figure 4 (right), why does it look very different from 4 (left)? Why is HetFedRep so poor in the right plot, but good on the left plot?

In Theorem 1:
* The result is on regression, whereas the focus of FLIC is classification.
* Are you assuming that $k_i \ge k$ for $i \in [b]$? If so, consider making it explicit.
* Suppose $k_i = k$ for simplicity. Assume $\Sigma = P D P^T$. Maybe I am missing something, but should the mapping from $N(0, \Sigma)$ to $N(0, I)$ be $f(x) = D^{-1/2} P^T x$ instead of $f(x) = D^{-1/2} x$? Note that, for $y=Bx$,
$$E[yy'] = E[Bxx'B'] = B \Sigma B' = D^{-1/2} P' PDP' (D^{-1/2}P')' = I$$
You can apply another orthonormal transformation and still the covariance would be identity. So, there is nothing special about the diagonal matrices with $\pm 1$, I think.

**The following comments are minor in nature**

Why don't we explicitly enforce separation between the means $\mu_c$ of different classes? Is it automatically done by the classification loss minimization?

What does FLIC stand for?

It should be mentioned in the main text how $\phi_i$ can be parametrized.

Typo: On page 7, right below Algorithm 1, in the description of the algorithm, the step numbers seem to be wrong.

Typo: Correct the typo "FedRed".

What is the principal angle distance between two orthonormal matrices A, B?

Broader Impact Concerns: None.

==================================================

Review 2:

Summary: The paper presents a method for applying federated learning (FL) to heterogeneous datasets, which may have different schema or feature spaces.
In order to enable FL, which typically assumes all client data has been drawn from the same feature space, across clients with distinct feature spaces, the authors propose having each client learn a local embedding function that maps client data to a common subspace. This framework "FLIC" is compatible with existing FL algorithms such as FedRep and FedAvg with fine-tuning. The authors include some empirical results on a range of datasets - from toy to brain-computer interface data - and some theoretical analysis of the proposed approach.

Primary contributions:
* First FL framework for operating on client data with heterogeneous feature spaces.
* First empirical results for using FL on brain computer interface data.
* Distribution alignment algorithm for learning local embedding functions to map data to a common subspace.

Strengths and Weaknesses:

Strengths:
* Defines a strategy for learning client-specific embeddings across heterogeneous feature spaces in a federated way.
* The need for such an approach is well-motivated, especially in the case of cross-institution FL where dataset formats may not be standardized a priori.
* This learned feature representation method is broadly applicable to existing FL algorithms.

Weaknesses:
* There is a need for specifying in advance the latent dimensionality and having some prior knowledge of the latent distribution.
* The theoretical analysis is for a simplified setting in which the anchor distribution is known beforehand, which may not be the case in practice.
* Lacks comparison with alternative ways of embedding distinct data in a common subspace.
* The empirical results are quite simple. The chosen datasets and settings do not have prior work on them to have benchmarks and points of reference that could be used to put the numbers in context in terms of expected performance. The differences in performance across methods are quite minimal, with the top accuracy falling within the confidence intervals of other approaches (e.g. Table 3).
* The paper anchors the contribution and discussion on extending FedRep to the heterogeneous feature space setting, making it difficult to understand for anyone unfamiliar with the prior work, even though the contributions are more broadly applicable.

Requested Changes:

Critical changes:
* Additional experimental results would strengthen the work.
* Include baseline performance of FLIC on federated datasets commonly used in the literature, where the partitions are based on user and all data has the same format. This would indicate how well the method works on data that is non-iid but not completely disjoint, and offer a comparison point to common FL approaches.
* Demonstrate FedAvg + FT with learned feature embeddings for comparison (simpler baseline for personalized FL).
* Compare FLIC to t-sne and multi-dimensional scaling in the experimental results. Depicting clusters visually using these methods on a toy set does not demonstrate the effect of these methods in practice.
* Additional description of figures and discussion of what the results show is necessary.
* Figure 2: It is not clear how the two plots included in the figure relate. Either the relation should be discussed or the plots should be separated. What should be understood from the leftmost plot? There is no discussion and no metrics to indicate whether the projected features are far from the ground truth. The caption of the rightmost plot includes numbers that do not match what is shown and requires further explanation.
* Figure 4: What is the effect of increasing the number of clients on a toy problem? Is this a more or less challenging problem? Include analysis of the results and interpretation of how or why each method differs.
* Section 4 claims that the presented algorithm avoids the client drift phenomenon. It is unclear how this is possible, as the federated algorithm presented has clients train locally for some number of steps between rounds of parameter averaging. The communication cost of using this approach is strictly higher than other federated algorithms without it, such as FedAvg or FedRep, due to the additional parameters that are needed to define the learned anchor measures. Claiming FLIC is communication-efficient is misleading and incorrect.

Minor changes:
* Section 4 "Averaging Anchor Distributions" references incorrect steps in Algorithm 1.
* The experimental section could use further clarification. Is the reported accuracy on a held out portion of each client's data?
* It is not clear what should be expected when plotting performance with respect to the number of clients.
* Table 4 accuracies should include confidence intervals over the 3 runs.
* Simplifying the notation through the manuscript and algorithms would help with the readability. As this approach is compatible with different model definitions, it would be useful to use a simple base model \theta in the algorithms and throughout the text, rather than splitting the model into \alpha and \beta components, since that is not the contribution of this work. The key change this work presents is in adding a local embedding layer, which could be expressed much more simply.
* Typo "jsut" on page 10.
* A limitation that should be noted is that this is a stateful algorithm applicable to the cross-silo setting.

Broader Impact Concerns: Learning on heterogeneous data formats has the potential to enable learning across data collected by different institutions without standardizing the format a priori. Making the claim of expanded privacy-aware FL scope, however, is misleading without adding a privacy analysis to strengthen "Remark 3" and demonstrating that there is no further privacy leakage than what is given by FedAvg. Stating how the proposed approach might be compatible with privacy techniques such as differential privacy or secure aggregation would strengthen the work.

==================================================

Review 3:

Summary: The paper formalizes personalized Federated Learning on heterogeneous client feature spaces and proposes a framework, FLIC. FLIC learns a function locally at each client to map the input feature space to a common embedding space. To preserve semantic information from the original data distribution, the locally learned functions at each client are aligned with some learnable latent anchor distributions that are shared across all clients. A personalized Federated Learning algorithm is then employed on the common embedding space to learn the desired task. The paper shows that, for a simpler regression framework where the anchor distribution is known beforehand and not learned under the FL paradigm, FLIC can recover the true latent subspace underlying the FL problem. The effectiveness of FLIC is shown through experiments on toy and real-world problems, including heterogeneous BCI datasets.

Strengths and Weaknesses:

Strengths:
1. The paper proposes a novel framework for federated learning on heterogeneous feature spaces via the mapping of input features into a common embedding space.
2. Theoretical analysis for a simple regression framework is presented.
3. Experimental results are presented for a toy example and real-world datasets such as Digits, TextCaps, and BCI datasets.
4. The paper discusses the limitations and privacy impacts of the proposed framework.
5. The appendix includes interesting ablation studies.

Weaknesses/questions:
1. The paper was hard to follow and can use some improvement in writing.
2. It is not clear how the anchor distributions are initialized. In particular, what is $v_{1:C}^0$ in Algorithm S2? Is it equal to $\hat{m}_{1:C}^0$?
3. FLIC has memory and communication overheads stemming from $v_{i, 1:C}, L_{i, 1:C}$.
4. FLIC is computationally heavy at the client end as it needs to compute gradients with respect to multiple parameters and loss functions.
5. It would be helpful to present some quantitative results comparing the overheads of FLIC to the other baselines.
6. The Averaging anchor distribution paragraph on page 7 mentions "We provide algorithmic details regarding steps 14 and 20 in Algorithm 1". Algorithm 1 only has 18 steps and that paragraph doesn't seem to be consistent with the algorithm.
7. Section S3.5 and the caption for Figure S2 are not clear. It mentions "We have considered two learning situations: one in which they are updated during local training (as usual) and another one in they are kept fixed all along the training". Is the transformation $\phi$ pre-trained in these two cases? The discussion on the impact of pre-training the embedding space is interesting and needs to be more clearly written.

Requested Changes: See Weaknesses/questions

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The authors propose FLIC, a framework that enables personalized FL in scenarios where different clients may have different data representations (heterogeneous feature spaces). This involves clients learning functions to map their data to a common feature space, and the embedding functions being optimized such that the learned feature space represents similar data/same classes across clients similarly. The authors provide simplified theoretical analysis showing that their approach recovers the true latent subspace. The experiments show some gains, even if marginal, over other embedding methods. The authors incorporated presentation improvements (clarified concepts and algorithm steps, provided missing details on the experimental setup, removed some statements not supported by the experiments) and ran additional experiments comparing to other baselines as suggested by the reviewers, further improving the paper.

==================================================
# Thompson Sampling For Non-Stationary Bandit Problems

Anonymous authors

Paper under double-blind review

## Abstract

Non-stationary multi-armed bandit (MAB) problems have recently attracted extensive attention. We focus on the abruptly changing scenario where reward distributions remain constant for a certain period and change at unknown time steps. Although Thompson Sampling (TS) has shown empirical success in non-stationary settings, there is currently no regret bound analysis for TS with Gaussian priors. To address this, we propose two algorithms, discounted TS and sliding-window TS, designed for sub-Gaussian reward distributions. For these algorithms, we establish an upper bound for the expected regret by bounding the expected number of times a suboptimal arm is played. We show that the regret order of both algorithms is $\tilde{O}(\sqrt{TB_T})$, where $T$ is the time horizon and $B_T$ is the number of breakpoints. This upper bound matches the lower bound for abruptly changing problems up to a logarithmic factor. Empirical comparisons with other non-stationary bandit algorithms highlight the competitive performance of our proposed methods.

## 1 Introduction

MAB is a classic sequential decision problem. At each time step, the learner selects an arm from a finite set of arms (also known as actions) based on its past observations, and she only observes the reward of the chosen action. The learner's goal is to maximize its expected cumulative reward or minimize the regret incurred during the learning process. The regret is defined as the difference between the expected reward of the optimal arm and the expected reward achieved by the MAB algorithm. MAB has found practical use in various scenarios, with one of the earliest applications being the diagnosis and treatment experiments proposed by Robbins (1952). In this experiment, each patient's treatment plan corresponds to an arm in the MAB problem, and the goal is to minimize the patient's health loss by making optimal treatment decisions. Recently, MAB has gained wide-ranging applicability. For example, MAB algorithms have been used in online recommendation systems to improve user experiences and increase engagement (Li et al., 2011; Bouneffouf et al., 2012; Li et al., 2016). Similarly, MAB has been employed in online advertising campaigns to optimize the allocation of resources and maximize the effectiveness of ad placements (Schwartz et al., 2017).

While the standard MAB model assumes fixed reward distributions, real-world scenarios often involve distributions that change over time. For instance, in online recommendation systems, the collected data gradually becomes outdated, and user preferences are likely to evolve (Wu et al., 2018). This dynamic nature necessitates the development of algorithms that can adapt to these changes, leading to the exploration of non-stationary MAB problems.

In recent years, there has been much research on non-stationary multi-armed bandit problems. These methods can be roughly divided into two categories: they either detect changes in the reward distribution using change-point detection algorithms (Liu et al., 2018; Cao et al., 2019; Auer et al., 2019; Chen et al., 2019; Besson et al., 2022), or they passively reduce the effect of past observations (Garivier & Moulines, 2011; Raj & Kalyani, 2017; Trovo et al., 2020; Baudry et al., 2021). The former methods need to make some assumptions about the changes in the arms' distributions to ensure the effectiveness of the change-point detection algorithm.
For instance, Liu et al. (2018) and Cao et al. (2019) require a lower bound on the amplitude of change of each arm's expected rewards. The latter methods require fewer assumptions about the characteristics of the change. They often use a sliding window or a discount factor to forget past information, in order to adapt to the changes in the arms' distributions.

These methods all provide theoretical guarantees for their regret upper bounds. However, the well-known Thompson sampling has received little theoretical analysis of regret in non-stationary MAB problems, despite the fact that TS algorithms often have superior or comparable performance to frequentist algorithms in most non-stationary scenarios. Raj & Kalyani (2017) have studied the discounted Thompson sampling with Beta priors. However, they only derive the probability of picking a suboptimal arm for the simple case of a two-armed bandit. To the best of our knowledge, only sliding-window Thompson sampling with Beta priors (Trovo et al., 2020) provides regret upper bounds. However, their proof is incorrect, with a wrong application of a well-known result (Lemma A.3). We analyze their mistakes in detail and provide a counterexample in Appendix C.

There are two main challenges in analyzing the Thompson sampling algorithm in the non-stationary setting. The first challenge is that the DS-TS algorithm cannot **fully forget previous information**, and the second is the **under-estimation of the optimal arm**. In the non-stationary setting, solving these problems is highly challenging due to the changing reward distribution. We define a UCB-like function serving as the upper confidence bound to tackle the first challenge, as detailed in Lemma 5.1 and Lemma 5.2. Along with the defined function, we employ a new regret decomposition to bound the regret coming from the under-estimation of the optimal arm, as presented in the proof of Lemma 5.3. We provide details about these challenges and their solutions in the theoretical analysis section (Section 5).

Our contributions are as follows: we propose discounted TS (DS-TS) and sliding-window TS (SW-TS) with Gaussian priors for abruptly changing settings. We adopt a unified method to analyze the regret upper bound for both algorithms. The theoretical analysis shows that their regret upper bounds are of order $\tilde{O}(\sqrt{TB_T})$, where $T$ is the number of time steps and $B_T$ is the number of breakpoints. This regret bound matches the $\Omega(\sqrt{T})$ lower bound proven by Garivier & Moulines (2011) in an order sense. We also verify the algorithms in various environmental settings with Gaussian and Bernoulli rewards, and both DS-TS and SW-TS achieve competitive performance.

## 2 Related Works

Many works are based on the idea of forgetting past observations. Discounted UCB (DS-UCB) (Kocsis & Szepesvári, 2006; Garivier & Moulines, 2011) uses a discount factor to average the past rewards. In order to achieve the purpose of forgetting information, the weights of early rewards are smaller. Garivier & Moulines (2011) also propose the sliding-window UCB (SW-UCB), which only uses a few recent rewards to compute the UCB index. They show that the regret upper bounds of DS-UCB and SW-UCB are $\tilde{O}(\sqrt{TB_T})$. EXP3.S, as proposed in Auer et al. (2002), has been shown to achieve a regret upper bound of $\tilde{O}(\sqrt{TB_T})$. Under the assumption that the total variation of the expected rewards over the time horizon is bounded by a budget $V_T$, Besbes et al. (2014) introduce REXP3 with regret $\tilde{O}(T^{2/3})$.
Combes & Proutiere (2014) propose the SW-OSUB algorithm, specifically for the case of smoothly changing rewards, with an upper bound of $\tilde{O}(\sigma^{1/4}T)$, where $\sigma$ is the Lipschitz constant of the evolution process. Raj & Kalyani (2017) propose the discounted Thompson sampling for Bernoulli priors without providing a regret upper bound; they only calculate the probability of picking a sub-optimal arm for the simple case of a two-armed bandit. Trovo et al. (2020) propose the sliding-window Thompson sampling algorithm, with regret $\tilde{O}(T^{\frac{1+\alpha}{2}})$ for abruptly changing settings and $\tilde{O}(T^{\beta})$ for smoothly changing settings. Baudry et al. (2021) propose a novel algorithm named Sliding Window Last Block Subsampling Duelling Algorithm (SW-LB-SDA) with regret $\tilde{O}(\sqrt{TB_T})$. They only assume that the reward distributions belong to the same one-parameter exponential family for all arms during each stationary phase.

There are also many works that exploit techniques from the field of change detection to deal with reward distributions varying over time. Mellor & Shapiro (2013) combine a Bayesian change-point mechanism with a Thompson sampling strategy to tackle the non-stationary problem. Their algorithm can detect global switching and per-arm switching. Liu et al. (2018) propose a change-detection framework that combines UCB with a change-detection algorithm named CUSUM. They obtain an upper bound for the average detection delay and a lower bound for the average time between false alarms. Cao et al. (2019) propose M-UCB, which is similar to CUSUM but uses another, simpler change-detection algorithm. M-UCB and CUSUM are nearly optimal; their regret bounds are $\tilde{O}(\sqrt{TB_T})$. Recently, there have also been works deriving regret bounds without knowing the number of changes. For example, Auer et al. (2019) propose an algorithm called ADSWITCH with an optimal regret bound $\tilde{O}(\sqrt{B_T T})$. Suk & Kpotufe (2022) improve the work of Auer et al. (2019) so that the obtained regret bound is smaller than $\tilde{O}(\sqrt{ST})$, where $S$ only counts the switches of the best arm.

## 3 Problem Formulation

Assume that the non-stationary MAB problem has $K$ arms $\mathcal{A}:=\{1,2,...,K\}$ and a finite time horizon $T$. At each round $t$, the learner must select an arm $i_t\in\mathcal{A}$ and obtains the corresponding reward $X_t(i_t)$. The rewards are generated from $\sigma$-subGaussian distributions. The expectation of $X_t(i)$ is denoted as $\mu_t(i)=\mathbb{E}[X_t(i)]$. A policy $\pi$ is a function that selects arm $i_t$ to play at round $t$. Let $\mu_t(*):=\max_{i\in\{1,...,K\}}\mu_t(i)$ denote the expected reward of the optimal arm $i_t^*$ at round $t$. Unlike the stationary MAB setting, where one arm is optimal all of the time (i.e. $\forall t\in\{1,...,T\},\ i_t^*=i^*$), in the non-stationary setting the optimal arm might change over time. The performance of a policy $\pi$ is measured in terms of the cumulative expected regret:

$$R_{T}^{\pi}=\mathbb{E}\left[\sum_{t=1}^{T}(\mu_{t}(*)-\mu_{t}(i_{t}))\right],\tag{1}$$

where $\mathbb{E}[\cdot]$ is the expectation with respect to the randomness of $\pi$. Let $\Delta_t(i)=\mu_t(*)-\mu_t(i)$ and let

$$k_{T}(i)=\sum_{t=1}^{T}\mathbb{1}\left\{i_{t}=i,i\neq i_{t}^{*}\right\}$$

denote the number of plays of arm $i$ when it is not the best arm until time $T$. To analyze the upper bound of $R_T^{\pi}$, we can directly bound $\mathbb{E}[k_T(i)]$ for each arm.

Abruptly Changing Setting. The abruptly changing setting was introduced by Garivier & Moulines (2011) for the first time. The number of breakpoints is denoted as $B_T=\sum_{t=1}^{T-1}\mathbb{1}\{\exists i\in\mathcal{A}:\mu_t(i)\neq\mu_{t+1}(i)\}$. Suppose the set of breakpoints is $\mathcal{B}=\{b_1,...,b_{B_T}\}$ (we define $b_1=1$).
At each breakpoint, the reward distribution changes for at least one arm. The rounds between two adjacent breakpoints are called a *stationary phase*. Abruptly changing bandits pose a more challenging problem, as the learner needs to balance exploration and exploitation within each stationary phase and across the changes between different phases. Trovo et al. (2020) make an assumption about the number of breakpoints to facilitate a more generalized analysis, while we explicitly use $B_T$ to represent the number of breakpoints in the analysis.

## 4 Algorithms

In this section, we propose DS-TS and SW-TS with Gaussian priors for non-stationary stochastic MAB problems. Different from Agrawal & Goyal (2013), we assume that the reward distribution follows a $\sigma$-subGaussian distribution rather than a bounded distribution.

Assume that $X_1,...,X_n$ are independently and identically distributed, following a $\sigma$-subGaussian distribution with mean $\mu$. Assume further that the prior distribution is a Gaussian distribution $\mu\sim\mathcal{N}(0,\sigma_0^2)$. The posterior distribution is then also a Gaussian distribution $\mathcal{N}(\mu_1,\sigma_1^2)$, where

$$\mu_{1}=\sigma_{1}^{2}\left(\frac{0}{\sigma_{0}^{2}}+\frac{\sum_{i=1}^{n}X_{i}}{\sigma^{2}}\right),\qquad\sigma_{1}^{2}=\frac{1}{\frac{1}{\sigma_{0}^{2}}+\frac{n}{\sigma^{2}}}.$$

Letting $\sigma_0=+\infty$, we get the posterior distribution $\mathcal{N}\left(\frac{1}{n}\sum_{i=1}^{n}X_i,\frac{\sigma^2}{n}\right)$.

## 4.1 DS-TS

DS-TS uses a discount factor $\gamma$ ($0<\gamma<1$) to dynamically adjust the estimate of each arm's distribution. The key to our algorithm is to decrease the sampling variance of the selected arm while increasing the sampling variance of the unselected arms.

Algorithm 1: DS-TS
Input: discount factor $\gamma$; $\hat{\mu}_1(i)=0$, $\tilde{\mu}_1(i)=0$, $N_1(\gamma,i)=0$
1: for $t=1,...,T$ do
2:   for $i=1,...,K$ do
3:     sample $\theta_t(i)\sim\mathcal{N}(\hat{\mu}_t(\gamma,i),\frac{4\sigma^2}{N_t(\gamma,i)})$
4:   end for
5:   Pull arm $i_t=\arg\max_i\theta_t(i)$, observe reward $X_t(i_t)$
6:   for $i=1,...,K$ do
7:     $\tilde{\mu}_{t+1}(\gamma,i)=\gamma\tilde{\mu}_t(\gamma,i)+\mathbb{1}\{i_t=i\}X_t(i)$
8:     $N_{t+1}(\gamma,i)=\gamma N_t(\gamma,i)+\mathbb{1}\{i_t=i\}$
9:     $\hat{\mu}_{t+1}(\gamma,i)=\frac{\tilde{\mu}_{t+1}(\gamma,i)}{N_{t+1}(\gamma,i)}$
10:  end for
11: end for

Specifically, let

$$N_{t}(\gamma,i)=\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\left\{i_{j}=i\right\}$$

denote the discounted number of plays of arm $i$ until time $t$. We use

$$\hat{\mu}_{t}(\gamma,i)=\frac{1}{N_{t}(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}X_{j}(i)\mathbb{1}\left\{i_{j}=i\right\},$$

called the discounted empirical average, to estimate the expected reward of arm $i$. In the non-stationary setting, we use the discounted average and the discounted number of plays instead of the true average and number of plays, respectively. Therefore, the posterior distribution is $\mathcal{N}(\hat{\mu}_t(\gamma,i),\frac{\sigma^2}{N_t(\gamma,i)})$.
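For concreteness, here is a minimal Python sketch of one round of Algorithm 1, assuming an environment object exposing a hypothetical `pull(t, arm)` method that returns the reward of the chosen arm; the clamp on small $N_t(\gamma,i)$ is an implementation detail of ours to keep the first draws well-defined (it makes unplayed arms sample with a huge variance, forcing initial exploration).

```python
import numpy as np

def dsts_round(env, t, mu_tilde, N, gamma=0.99, sigma=1.0, rng=np.random):
    """One round of DS-TS (Algorithm 1); mu_tilde and N are length-K arrays."""
    K = N.size
    # Step 3: sample theta_i ~ N(mu_hat_i, 4 sigma^2 / N_i) for every arm.
    mu_hat = np.divide(mu_tilde, N, out=np.zeros(K), where=N > 0)
    theta = mu_hat + 2.0 * sigma * rng.standard_normal(K) / np.sqrt(np.maximum(N, 1e-6))
    # Step 5: play the arm with the largest sample and observe its reward.
    arm = int(np.argmax(theta))
    x = env.pull(t, arm)
    # Steps 7-9: discount every arm's statistics, then credit the played arm.
    mu_tilde *= gamma
    N *= gamma
    mu_tilde[arm] += x
    N[arm] += 1.0
    return arm, x
```

Because the discounting touches every arm with a single multiplication, each round costs O(K) rather than O(t), which is the point of maintaining the running sum µ̃ iteratively.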
To avoid the time complexity going to O(T 2), we introduce µ˜t(*γ, i*) = Ptj=1 γ t−jXj (i)1{ij = i} to calculate µˆt(*γ, i*) using an iterative method(Step 7-9). If arm i is selected at round t, the posterior distribution is updated as follows: $$\hat{\mu}_{t+1}(\gamma,i)=\frac{\gamma\hat{\mu}_{t}(\gamma,i)N_{t}(\gamma,i)+X_{t}(i)}{\gamma N_{t}(\gamma,i)+1}=\frac{\tilde{\mu}_{t+1}(\gamma,i)}{N_{t+1}(\gamma,i)}$$ If arm i isn't selected at round t, the posterior distribution is updated as $$\hat{\mu}_{t+1}(\gamma,i)=\frac{\tilde{\mu}_{t+1}(\gamma,i)}{N_{t+1}(\gamma,i)}=\frac{\gamma\tilde{\mu}_{t}(\gamma,i)}{\gamma N_{t}(\gamma,i)}=\hat{\mu}_{t}(\gamma,i)$$ i.e. the expectation of posterior distribution remains unchanged. ## 4.2 Sw-Ts SW-TS uses a sliding window τ to adapt to changes in the reward distribution. Let $$N_{t}(\tau,i)=\sum_{j=t-\tau+1}^{t}\mathbbm{1}\{i_{j}=i\},\hat{\mu}_{t}(\tau,i)=\frac{1}{N_{t}(\tau,i)}\sum_{j=t-\tau+1}^{t}X_{j}(i)\mathbbm{1}\{i_{j}=i\}.$$ If *t < τ* , the range of summation is from 1 to t. Similar to DS-TS, the posterior distribution is N (µˆt(*τ, i*),4σ 2 Nt(τ,i) ). Algorithm 2 shows the pseudocode of SW-TS. To avoid the time complexity going to O(T 2), we introduce µ˜t(*τ, i*) = Ptj=t−τ+1 Xj (i)1{ij = i} to update µˆt(*τ, i*). ## 4.3 Results In this section, we give the regret upper bounds of DS-TS and SW-TS. Then we discuss how to take the values of the parameters so that these algorithms reach the optimal upper bound. Recall that ∆t(i) = µt(∗)−µt(i). Let ∆T (i) = min{∆t(i) : t ≤ *T, i* ≠ i ∗ t }, be the minimum difference between the expected reward of the best arm i ∗ t and the expected reward of arm i in all time T when the arm i is not the best arm. Let ∆Tmax = max{µt1 (i) − µt2 (i) : t1 ̸= t2, i ∈ [K]} denote the maximum expected variation of arms. Theorem 4.1 (DS-TS). Let γ ∈ (0, 1) *satisfying* σ 2 ∆*Tmax* (1 − γ) 2log 1 1−γ < 1*. For any suboptimal arm* i, $\mathbb{E}[k_{T}(i)]\leq B_{T}D(\gamma)+C_{1}(\gamma)L_{1}(\gamma)\gamma^{-\frac{1}{1-\gamma}}T(1-\gamma),$ where $$D(\gamma)=\frac{\log((\frac{\gamma}{2\log_{10}})^{2}(1-\gamma)^{2}\log\frac{1}{1-\gamma})}{\log\gamma},C_{1}(\gamma)=e^{17}+12+3\log\frac{1}{1-\gamma},L_{1}(\gamma)=\frac{1152\log(\frac{1}{1-\gamma}+e^{17})\sigma^{2}}{\gamma^{1/(1-\gamma)}(\Delta_{T}(i))^{2}}.$$ Corollary 4.2. When γ *is close to* 1, γ − 1 1−γ is around e. If the time horizon T *and number of breakpoints* BT *are known in advance, the discounted factor can be chosen as* γ = 1 − 1 σ q BT T log T . If BT ≪ T, $$\frac{\sigma^{2}}{\Delta_{m a x}^{T}}(1-\gamma)^{2}\log\frac{1}{1-\gamma}<\frac{\sigma/e}{\Delta_{m a x}^{T}}\sqrt{\frac{B_{T}}{T\log T}}<1.$$ We have $$\mathbb{E}[k_{T}(i)]=O({\sqrt{T B_{T}}}(\log T)^{\frac{3}{2}}).$$ Theorem 4.3 (SW-TS). Let τ > 0*, for any suboptimal arm* i, $$\mathbb{E}[k_{T}(i)]\leq B_{T}\tau+C_{2}(\tau)L_{2}(\tau)\frac{T}{\tau},$$ where $$C_{2}(\tau)=e^{11}+12+3\log\tau,L_{2}(\tau)=\frac{1152\log(\tau+e^{11})\sigma^{2}}{(\Delta_{T}(i))^{2}}.$$ Corollary 4.4. If the time horizon T and number of breakpoints BT *are known in advance, the sliding* window can be chosen as τ = σp*T /B*T log T*, then* $$\mathbb{E}[k_{T}(i)]=O({\sqrt{T B_{T}}}\log T).$$ ## 5 Proofs Of Upper Bounds Before giving the detailed proof, we discuss the main challenges in regret analysis of Thompson sampling in non-stationary setting. These challenges are addressed by Lemmas 5.1 to 5.3. 
## 4.3 Results

In this section, we give the regret upper bounds of DS-TS and SW-TS, and then discuss how to choose the parameter values so that the algorithms attain their best upper bounds.

Recall that $\Delta_t(i)=\mu_t(*)-\mu_t(i)$. Let $\Delta_T(i)=\min\{\Delta_t(i):t\le T,\ i\neq i_t^*\}$ denote the minimum gap between the expected reward of the best arm $i_t^*$ and that of arm i over the horizon T, taken over rounds where arm i is not the best arm. Let $\Delta_{max}^T=\max\{\mu_{t_1}(i)-\mu_{t_2}(i):t_1\neq t_2,\ i\in[K]\}$ denote the maximum expected variation of the arms.

Theorem 4.1 (DS-TS). *Let* $\gamma\in(0,1)$ *satisfy* $\frac{\sigma^2}{\Delta_{max}^T}(1-\gamma)^2\log\frac{1}{1-\gamma}<1$. *For any suboptimal arm* i,
$$\mathbb{E}[k_{T}(i)]\leq B_{T}D(\gamma)+C_{1}(\gamma)L_{1}(\gamma)\gamma^{-\frac{1}{1-\gamma}}T(1-\gamma),$$
where
$$D(\gamma)=\frac{\log\big((\frac{\sigma}{\Delta_{max}^{T}})^{2}(1-\gamma)^{2}\log\frac{1}{1-\gamma}\big)}{\log\gamma},\quad C_{1}(\gamma)=e^{17}+12+3\log\frac{1}{1-\gamma},\quad L_{1}(\gamma)=\frac{1152\log(\frac{1}{1-\gamma}+e^{17})\sigma^{2}}{\gamma^{1/(1-\gamma)}(\Delta_{T}(i))^{2}}.$$

Corollary 4.2. *When* γ *is close to 1,* $\gamma^{-\frac{1}{1-\gamma}}$ *is around* e. *If the time horizon* T *and the number of breakpoints* $B_T$ *are known in advance, the discount factor can be chosen as* $\gamma=1-\frac{1}{\sigma}\sqrt{\frac{B_T}{T\log T}}$. *If* $B_T\ll T$,
$$\frac{\sigma^{2}}{\Delta_{max}^{T}}(1-\gamma)^{2}\log\frac{1}{1-\gamma}<\frac{\sigma/e}{\Delta_{max}^{T}}\sqrt{\frac{B_{T}}{T\log T}}<1.$$
*We have*
$$\mathbb{E}[k_{T}(i)]=O(\sqrt{TB_{T}}(\log T)^{\frac{3}{2}}).$$

Theorem 4.3 (SW-TS). *Let* $\tau>0$. *For any suboptimal arm* i,
$$\mathbb{E}[k_{T}(i)]\leq B_{T}\tau+C_{2}(\tau)L_{2}(\tau)\frac{T}{\tau},$$
where
$$C_{2}(\tau)=e^{11}+12+3\log\tau,\quad L_{2}(\tau)=\frac{1152\log(\tau+e^{11})\sigma^{2}}{(\Delta_{T}(i))^{2}}.$$

Corollary 4.4. *If the time horizon* T *and the number of breakpoints* $B_T$ *are known in advance, the sliding window can be chosen as* $\tau=\sigma\sqrt{T/B_{T}}\log T$*, then*
$$\mathbb{E}[k_{T}(i)]=O(\sqrt{TB_{T}}\log T).$$

## 5 Proofs Of Upper Bounds

Before giving the detailed proofs, we discuss the main challenges in the regret analysis of Thompson sampling in the non-stationary setting. These challenges are addressed by Lemmas 5.1 to 5.3.

## 5.1 Challenges In Regret Analysis

Existing analyses of regret bounds for Thompson sampling (Agrawal & Goyal, 2013; Jin et al., 2021; 2022) decompose the regret into two parts. The first part of the regret comes from the over-estimation of a suboptimal arm, which can be dealt with using the concentration properties of the sampling distribution and the reward distribution. The second part comes from the under-estimation of the optimal arm, which mainly relies on bounding the following quantity:
$$\sum_{t=1}^{T}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(*)\leq\mu_{t}(*)-\epsilon_{i}\}\Big],\tag{2}$$
where $p_{i,t}=\mathbb{P}(\theta_t(*)>\mu_t(*)-\epsilon_i)$ is the probability that the best arm is not under-estimated below its mean reward by a margin $\epsilon_i$.

The first challenge is specific to the DS-TS algorithm. Unlike SW-TS, which completely forgets previous information τ rounds after a breakpoint, DS-TS cannot **fully forget past information**. This makes it challenging to use the concentration properties of the reward distribution to bound the regret arising from the over-estimation of a suboptimal arm, and it further affects the analysis of Equation (2).

The second challenge is the **under-estimation of the optimal arm**. In stationary settings, $p_{i,t}$ changes only when the optimal arm is selected, so Equation (2) can be bounded by the method proposed by Agrawal & Goyal (2013). However, the distribution of $\theta_t(*)$ may vary over time in non-stationary settings. It is challenging and nontrivial to obtain a tight bound on Equation (2).

To overcome the first challenge, we adjust the posterior variance to $\frac{4\sigma^2}{N_t(\gamma,i)}$. This slightly larger variance is specifically designed for σ-subGaussian distributions and helps to bound $\mathbb{E}[\frac{1}{p_{i,t}}]$. We then define $U_t(\gamma,i)$, which serves a role similar to the upper confidence bound in the UCB algorithm; we solve this problem through Lemma 5.1 and Lemma 5.2. For the second challenge, we use the newly defined $U_t(\gamma,i)$ and employ a new regret decomposition for Equation (2) based on whether the event $\{N_t(\gamma,*)>L_1(\gamma)\}$ occurs. Intuitively, if $N_t(\gamma,*)>L_1(\gamma)$, then $p_{i,t}$ is close to 1, which leads to a sharp bound. If $N_t(\gamma,*)\le L_1(\gamma)$, Lemma A.3 also yields an upper bound on Equation (2). We derive the upper bound of $\mathbb{E}[\frac{1}{p_{i,t}}]$ for non-stationary settings, with an extra logarithmic term compared with the stationary setting. The proof of Lemma 5.3 in Appendix B.3 demonstrates these details.

## 5.2 Proofs Of Theorem 4.1

For arm $i\neq i_t^*$, we choose two thresholds $x_t(i),y_t(i)$ such that $x_t(i)=\mu_t(i)+\frac{\Delta_t(i)}{3}$ and $y_t(i)=\mu_t(*)-\frac{\Delta_t(i)}{3}$. Then $\mu_t(i)<x_t(i)<y_t(i)<\mu_t(*)$ and $y_t(i)-x_t(i)=\frac{\Delta_t(i)}{3}$. The history $\mathcal{F}_t$ is defined as the plays and rewards of the first t rounds. $\hat\mu_t(\gamma,i)$, $i_t$ and the distribution of $\theta_t(i)$ are determined by the history $\mathcal{F}_{t-1}$.

The abruptly changing setting is in fact piecewise-stationary: the rounds between two adjacent breakpoints are stationary. Based on this observation, we define the **pseudo-stationary phase** as
$$\mathcal{T}(\gamma)=\{t\leq T:\forall s\in(t-D(\gamma),t],\ \mu_{s}(\cdot)=\mu_{t}(\cdot)\}.$$
Let $\mathcal{S}(\gamma)=\{t\leq T:t\notin\mathcal{T}(\gamma)\}$. Note that, to the right of any breakpoint, at most $D(\gamma)$ rounds belong to $\mathcal{S}(\gamma)$. Therefore, the number of elements in the set $\mathcal{S}(\gamma)$ is upper bounded by $B_TD(\gamma)$, i.e.,
$$|\mathcal{S}(\gamma)|\leq B_{T}D(\gamma)\tag{3}$$
Figure 1 shows $\mathcal{T}(\gamma)$ and $\mathcal{S}(\gamma)$ in two different situations.
![6_image_0.png](6_image_0.png)

Figure 1: Illustration of $\mathcal{T}(\gamma)$ and $\mathcal{S}(\gamma)$ in two different situations. $b_i,b_{i+1},b_{i+2}$ are the breakpoints. The situation $b_{i+1}-b_i>D(\gamma)$ is shown in the top figure, and $b_{i+1}-b_i\le D(\gamma)$ in the bottom.

To facilitate the analysis, we define the following quantities:
$$n=6\sqrt{2}+3\sqrt{1-\gamma},\quad A(\gamma)=\frac{n^{2}\log(\frac{1}{1-\gamma})\sigma^{2}}{(\Delta_{T}(i))^{2}},\quad U_{t}(\gamma,i)=\sigma\sqrt{\frac{(1-\gamma)\log\frac{1}{1-\gamma}}{N_{t}(\gamma,i)}}.\tag{4}$$

Now we list some useful lemmas; the detailed proofs are provided in the appendix. The following lemma shows that, finitely many rounds after a breakpoint, i.e., in the pseudo-stationary phase, the distance between $\mu_t(i)$ and the discounted average of expectations for arm i is bounded by $U_t(\gamma,i)$; $U_t(\gamma,i)$ is analogous to the upper confidence bound in the UCB algorithm.

Lemma 5.1. *Let* $\ddot\mu_t(\gamma,i)=\frac{1}{N_t(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\{i_j=i\}\mu_j(i)$ *denote the discounted average of expectations for arm* i *at time step* t. *For all* $t\in\mathcal{T}(\gamma)$*, the distance between* $\mu_t(i)$ *and* $\ddot\mu_t(\gamma,i)$ *is at most* $U_t(\gamma,i)$*:*
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|\leq U_{t}(\gamma,i).\tag{5}$$

Using Lemma 5.1 and the self-normalized Hoeffding-type inequality for subGaussian distributions (Lemma A.1), we obtain the following lemma, which helps bound the regret arising from the over-estimation of a suboptimal arm.

Lemma 5.2. *For all* $t\in\mathcal{T}(\gamma)$ *and* $i\neq i_t^*$,
$$\mathbb{P}(\hat{\mu}_{t}(\gamma,i)>x_{t}(i),N_{t}(\gamma,i)>A(\gamma))\leq(1-\gamma)^{2}$$

The following key lemma helps bound the regret arising from the under-estimation of the optimal arm. This is the trickiest part of analyzing TS. Note that the proof in Trovo et al. (2020) does not establish the result of the following lemma.

Lemma 5.3. *Let* $p_{i,t}=\mathbb{P}(\theta_t(*)>y_t(i)|\mathcal{F}_{t-1})$. *For any* $t\in\mathcal{T}(\gamma)$ *and* $i\neq i_t^*$,
$$\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]\leq(e^{17}+9+3\log\frac{1}{1-\gamma})T(1-\gamma)L_{1}(\gamma)\gamma^{-1/(1-\gamma)}.$$

Now we can give the detailed proof, which proceeds in 5 steps.

Step 1 We divide the rounds $t\in\{1,...,T\}$ into two parts: $\{t\in\mathcal{T}(\gamma)\}$ and $\{t\notin\mathcal{T}(\gamma)\}$. Equation (3) shows that the number of elements in the second part is at most $B_TD(\gamma)$, so we have
$$\mathbb{E}[k_{T}(i)]\leq B_{T}D(\gamma)+\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i).\tag{6}$$

Step 2 We then consider the event $\{N_t(\gamma,i)>A(\gamma)\}$:
$$\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i)=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)<A(\gamma))+\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)>A(\gamma)).$$
We first bound $\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_t=i,N_t(\gamma,i)<A(\gamma))$:
$$\begin{aligned}\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)<A(\gamma))&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\big[\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)<A(\gamma)\mid\mathcal{F}_{t-1})\big]\\&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\big[\mathbb{E}\big[\mathbb{1}\{i_{t}=i,N_{t}(\gamma,i)<A(\gamma)\}\mid\mathcal{F}_{t-1}\big]\big]\\&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\big[\mathbb{1}\{i_{t}=i,N_{t}(\gamma,i)<A(\gamma)\}\big],\end{aligned}\tag{7}$$
where the last equality uses the tower rule of expectation.
Using Lemma A.3, we have
$$\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)<A(\gamma))\leq T(1-\gamma)A(\gamma)\gamma^{-1/(1-\gamma)}\tag{8}$$
Therefore,
$$\mathbb{E}[k_{T}(i)]\leq T(1-\gamma)A(\gamma)\gamma^{-1/(1-\gamma)}+B_{T}D(\gamma)+\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(i_{t}=i,N_{t}(\gamma,i)>A(\gamma))\tag{9}$$

Step 3 Define $E_t(\gamma,i)$ as the event $\{i_t=i,N_t(\gamma,i)>A(\gamma)\}$ and $E_t^\theta(i)$ as the event $\{\theta_t(i)<y_t(i)\}$. The last term in Equation (9) may be decomposed as follows:
$$\begin{aligned}\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i))&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)>x_{t}(i))+\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),\overline{E_{t}^{\theta}(i)})\\&\quad+\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),E_{t}^{\theta}(i))\end{aligned}\tag{10}$$
Using Lemma 5.2, the first part in Equation (10) can be bounded by $T(1-\gamma)^2$.

Step 4 We then bound the second part in Equation (10). Using the fact that $N_t(\gamma,i)$ and $\hat\mu_t(\gamma,i)$ are determined by the history $\mathcal{F}_{t-1}$, we have
$$\begin{aligned}\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),\overline{E_{t}^{\theta}(i)})&=\mathbb{E}\Big[\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\big[\mathbb{1}\{i_{t}=i,N_{t}(\gamma,i)>A(\gamma),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),\overline{E_{t}^{\theta}(i)}\}\mid\mathcal{F}_{t-1}\big]\Big]\\&=\mathbb{E}\Big[\sum_{t\in\mathcal{T}(\gamma)}\mathbb{1}\{N_{t}(\gamma,i)>A(\gamma),\hat{\mu}_{t}(\gamma,i)<x_{t}(i)\}\,\mathbb{P}(i_{t}=i,\overline{E_{t}^{\theta}(i)}\mid\mathcal{F}_{t-1})\Big]\\&\leq\mathbb{E}\Big[\sum_{t\in\mathcal{T}(\gamma)}\mathbb{1}\{N_{t}(\gamma,i)>A(\gamma),\hat{\mu}_{t}(\gamma,i)<x_{t}(i)\}\,\mathbb{P}(\theta_{t}(i)>y_{t}(i)\mid\mathcal{F}_{t-1})\Big].\end{aligned}\tag{11}$$
Given a history $\mathcal{F}_{t-1}$ such that $N_t(\gamma,i)>A(\gamma)$ and $\hat\mu_t(\gamma,i)<x_t(i)$, we have
$$y_{t}(i)-\hat{\mu}_{t}(\gamma,i)>y_{t}(i)-x_{t}(i)=\frac{\Delta_{t}(i)}{3}\geq\frac{\Delta_{T}(i)}{3}.$$
Therefore,
$$\mathbb{P}(\theta_{t}(i)>y_{t}(i)\mid\mathcal{F}_{t-1})\leq\mathbb{P}\Big(\theta_{t}(i)-\hat{\mu}_{t}(\gamma,i)>\frac{\Delta_{T}(i)}{3}\,\Big|\,\mathcal{F}_{t-1}\Big)\leq\frac{1}{2}\exp\Big(-\frac{(\Delta_{T}(i))^{2}A(\gamma)}{72\sigma^{2}}\Big)\leq\frac{1}{2}(1-\gamma),\tag{12}$$
where the second inequality follows from $\theta_t(i)\sim\mathcal{N}(\hat\mu_t(\gamma,i),\frac{4\sigma^2}{N_t(\gamma,i)})$ and Fact 1. For any other $\mathcal{F}_{t-1}$, the indicator $\mathbb{1}\{N_t(\gamma,i)>A(\gamma),\hat\mu_t(\gamma,i)<x_t(i)\}$ is 0. Hence, we can bound the second part by $\frac{T}{2}(1-\gamma)$.

Step 5 Finally, we focus on the third term in Equation (10). Using Lemma A.2 and the fact that $p_{i,t}$ is fixed given $\mathcal{F}_{t-1}$,
$$\begin{aligned}\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),E_{t}^{\theta}(i))&\leq\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{P}(i_{t}=i_{t}^{*},E_{t}^{\theta}(i)\mid\mathcal{F}_{t-1})\Big]\\&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{E}\big[\mathbb{1}\{i_{t}=i_{t}^{*},E_{t}^{\theta}(i)\}\mid\mathcal{F}_{t-1}\big]\Big]\\&=\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},E_{t}^{\theta}(i)\}\Big]\end{aligned}$$
Then, by Lemma 5.3, we have
$$\sum_{t\in\mathcal{T}(\gamma)}\mathbb{P}(E_{t}(\gamma,i),\hat{\mu}_{t}(\gamma,i)<x_{t}(i),E_{t}^{\theta}(i))\leq(e^{17}+9+3\log\frac{1}{1-\gamma})T(1-\gamma)L_{1}(\gamma)\gamma^{-1/(1-\gamma)}.\tag{13}$$
Substituting the results of Steps 3-5 into Equations (10) and (9),
$$\begin{aligned}\mathbb{E}[k_{T}(i)]&\leq T(1-\gamma)A(\gamma)\gamma^{-\frac{1}{1-\gamma}}+B_{T}D(\gamma)+2T(1-\gamma)+(e^{17}+9+3\log\frac{1}{1-\gamma})T(1-\gamma)L_{1}(\gamma)\gamma^{-\frac{1}{1-\gamma}}\\&\leq B_{T}D(\gamma)+(e^{17}+12+3\log\frac{1}{1-\gamma})L_{1}(\gamma)\gamma^{-\frac{1}{1-\gamma}}T(1-\gamma).\end{aligned}$$

## 5.3 Proofs Of Theorem 4.3

The proof of Theorem 4.3 is similar to that of Theorem 4.1.
The main difference is that the pseudo-stationary phase is now defined as $\mathcal{T}(\tau)=\{t\leq T:\forall s\in(t-\tau,t],\ \mu_{s}(\cdot)=\mu_{t}(\cdot)\}$. Let
$$\ddot{\mu}_{t}(\tau,i)=\frac{1}{N_{t}(\tau,i)}\sum_{j=t-\tau+1}^{t}\mathbb{1}\{i_{j}=i\}\mu_{j}(i).$$
If $t\in\mathcal{T}(\tau)$,
$$\ddot{\mu}_{t}(\tau,i)=\frac{1}{N_{t}(\tau,i)}\sum_{j=t-\tau+1}^{t}\mathbb{1}\{i_{j}=i\}\mu_{t}(i)=\mu_{t}(i).$$
This means the bias term (the analogue of $U_t(\gamma,i)$) vanishes, so we no longer need a constant n depending on τ to deal with the bias issue. We only need to define $A(\tau)$ as
$$A(\tau)=\frac{72\log(\tau)\sigma^{2}}{(\Delta_{T}(i))^{2}}.$$
We directly list the following two lemmas, corresponding to Lemma 5.2 and Lemma 5.3, respectively. The detailed proofs can be found in Appendix B.4 and Appendix B.5.

Lemma 5.4. *For all* $t\in\mathcal{T}(\tau)$ *and* $i\neq i_t^*$,
$$\mathbb{P}(\hat{\mu}_{t}(\tau,i)>x_{t}(i),N_{t}(\tau,i)>A(\tau))\leq\frac{1}{\tau^{2}}.$$

Lemma 5.5. *Let* $p_{i,t}=\mathbb{P}(\theta_{t}(*)>y_{t}(i)|\mathcal{F}_{t-1})$. *For any* $t\in\mathcal{T}(\tau)$ *and* $i\neq i_{t}^{*}$,
$$\sum_{t\in\mathcal{T}(\tau)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]\leq(e^{11}+9+3\log\tau)\frac{T}{\tau}L_{2}(\tau).$$

Let $\mathcal{S}(\tau)=\{t\leq T:t\notin\mathcal{T}(\tau)\}$. Similar to Equation (3), we have $|\mathcal{S}(\tau)|\leq B_{T}\tau$. Step 1 of the proof then becomes
$$\mathbb{E}[k_{T}(i)]\leq B_{T}\tau+\sum_{t\in\mathcal{T}(\tau)}\mathbb{P}(i_{t}=i).$$
The rest of the proof is nearly identical to the proof of Theorem 4.1.

## 6 Experiments

In this section, we empirically compare the performance of our methods with state-of-the-art algorithms on Bernoulli and Gaussian reward distributions. Specifically, we compare DS-TS and SW-TS with Thompson Sampling to evaluate the improvement obtained from the discount factor γ and the sliding window τ. We also compare our methods with the UCB-based algorithms DS-UCB and SW-UCB (Garivier & Moulines, 2011) to contrast the effects of Thompson Sampling and UCB. Furthermore, we compare our methods with several recent and efficient algorithms such as CUSUM (Liu et al., 2018), M-UCB (Cao et al., 2019) and SW-LB-SDA (Baudry et al., 2021). We measure the performance of each algorithm with the cumulative expected regret defined in Equation (1). The expected regret is averaged over 100 independent runs, and the resulting 95% confidence interval is depicted as a semi-transparent region in the figures.

![9_image_0.png](9_image_0.png)

Figure 2: K = 5, $B_T$ = 5. Gaussian arms (a), Bernoulli arms (b).

## 6.1 Gaussian Arms

Experimental setting for Gaussian arms We fix the time horizon as T = 100000. The means and variances are drawn from the distributions $\mathcal{N}(0,5^2)$ and $\mathcal{U}(1,5)$. For Gaussian rewards, we conduct two experiments. In the first experiment, we split the time horizon into 5 phases and use K = 5 arms; in the second experiment, we split the time horizon into 10 phases and use K = 10 arms. The analysis of SW-UCB and DS-UCB is conducted under the bounded-reward assumption, but the algorithms can be adapted to Gaussian scenarios. To achieve reasonable performance, it is necessary to adjust the discount factor and the sliding window appropriately. We use the settings recommended in (Baudry et al., 2021), namely $\tau=2(1+2\sigma)\sqrt{T\log(T)/B_{T}}$ for SW-UCB and $\gamma=1-\frac{1}{4(1+2\sigma)}\sqrt{B_{T}/T}$ for DS-UCB.
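For concreteness, the following sketch generates a piecewise-stationary Gaussian environment of the kind just described and runs one agent on it. The helper names and the reuse of the DS-TS sketch from Section 4.1 are our own illustrative choices, not part of the experimental protocol.

```python
import numpy as np

def make_gaussian_env(T=100_000, K=5, n_phases=5, seed=0):
    """Piecewise-stationary Gaussian arms: fresh means/scales per phase."""
    rng = np.random.default_rng(seed)
    breaks = np.linspace(0, T, n_phases + 1).astype(int)
    means = rng.normal(0, 5, size=(n_phases, K))    # mu ~ N(0, 5^2)
    scales = rng.uniform(1, 5, size=(n_phases, K))  # sigma ~ U(1, 5)

    def mu_at(t):
        phase = np.searchsorted(breaks, t, side="right") - 1
        return means[phase], scales[phase]

    return mu_at, rng

def run(agent, mu_at, rng, T=100_000):
    """Returns the cumulative expected (pseudo-)regret trajectory."""
    regret = np.zeros(T)
    for t in range(T):
        mu, sc = mu_at(t)
        arm = agent.select_arm()
        reward = rng.normal(mu[arm], sc[arm])
        agent.update(arm, reward)
        regret[t] = mu.max() - mu[arm]  # expected regret increment at round t
    return np.cumsum(regret)
```

For example, `run(DSTS(5, gamma, sigma=5.0), *make_gaussian_env())` produces one cumulative-regret trajectory (taking σ = 5 as a safe sub-Gaussian scale for these arms); averaging 100 such trajectories yields curves of the kind shown in Figure 3.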
Results Figure 3 illustrates the performance of these algorithms for Gaussian rewards under the two settings. Notably, CUSUM and M-UCB are not applicable to Gaussian rewards: CUSUM is designed for Bernoulli distributions, while M-UCB assumes bounded distributions. The discounted methods tend to perform better than the sliding-window methods on Gaussian rewards. Among these algorithms, only our algorithms and SW-LB-SDA provide a regret analysis for unbounded rewards. Our algorithm (DS-TS) and SW-LB-SDA demonstrate highly competitive empirical performance.

![10_image_0.png](10_image_0.png)

Figure 3: Gaussian arms. (a) K = 5, $B_T$ = 5. (b) K = 10, $B_T$ = 10

## 6.2 Bernoulli Arms

Experimental setting for Bernoulli arms The time horizon is set as T = 100000. We split the time horizon into 5 and 10 phases of equal length and use K = 5 and K = 10 arms, respectively. For Bernoulli rewards, the expected value $\mu_t(i)$ of each arm i is drawn from a uniform distribution over [0, 1]. Within each stationary phase, the reward distributions remain unchanged; the Bernoulli arms for each phase are generated as $\mu_t(i)\sim\mathcal{U}(0,1)$. Figure 2 depicts the expected rewards for Gaussian arms and Bernoulli arms with K = 5 and $B_T$ = 5.

For the Bernoulli distribution, we modify the Thompson sampling step (Step 3) in our algorithms to $\theta_t(i)\sim\mathcal{N}(\hat\mu_t(\gamma,i),\frac{1}{N_t(\gamma,i)})$ and $\theta_t(i)\sim\mathcal{N}(\hat\mu_t(\tau,i),\frac{1}{N_t(\tau,i)})$. Based on Corollary 4.2 and Corollary 4.4, we set $\gamma=1-\sqrt{\frac{B_T}{T\log T}}$ and $\tau=\sqrt{T/B_T}\log T$. To allow for a fair comparison, DS-UCB uses the discount factor $\gamma=1-\sqrt{B_T/T}/4$ and SW-UCB uses the sliding window $\tau=2\sqrt{T\log T/B_T}$, as suggested by (Garivier & Moulines, 2011). Based on (Baudry et al., 2021), we set $\tau=2\sqrt{T\log(T)/B_T}$ for SW-LB-SDA. For the changepoint-detection algorithm M-UCB, we set $w=800$ and $b=\sqrt{w/2\log(2KT^2)}$ as suggested by (Cao et al., 2019), but set the amount of exploration to $\gamma=\sqrt{KB_T\log(T)/T}$; in practice, it has been found that using this value instead of the one guaranteed in (Cao et al., 2019) improves empirical performance (Baudry et al., 2021). For CUSUM, following (Liu et al., 2018), we set $\alpha=\sqrt{\frac{B_T}{T}\log\frac{T}{B_T}}$ and $h=\log(T/B_T)$. For our experimental settings, we choose M = 50, ϵ = 0.05.

Results Figure 4 presents the results for Bernoulli arms in abruptly changing settings. It can be observed that our method (SW-TS) and SW-LB-SDA exhibit almost identical performance. Thompson Sampling, designed for stationary MAB problems, shows significant oscillations at the breakpoints. The changepoint-detection algorithm CUSUM (Liu et al., 2018) also shows competitive performance; note that our experiment does not satisfy the detectability assumption of CUSUM. As the number of arms and breakpoints increases, the performance of the UCB-class algorithms (DS-UCB, SW-UCB) declines, while the two TS-based algorithms (DS-TS, SW-TS) still work well.

![11_image_0.png](11_image_0.png)

Figure 4: Bernoulli arms. Settings with K = 5, $B_T$ = 5 (a), K = 10, $B_T$ = 10 (b)

Storage and Compute Cost These algorithms can be divided into three classes: UCB, TS and SW-LB-SDA. At each round, the UCB-class and TS-class algorithms require O(K) storage and O(K) computation. In contrast, at round T, SW-LB-SDA requires $O(K(\log T)^2)$ storage and $O(K\log T)$ computation. Although the empirical performance of SW-LB-SDA is similar to that of our algorithms, our algorithms require less storage and have lower computational complexity.
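All of the parameter choices in this section are closed-form functions of T, $B_T$ and K, so they can be computed in one place. The helper below is purely illustrative (the function name and grouping are ours); for σ-subGaussian rewards, Corollaries 4.2 and 4.4 would additionally scale γ and τ by σ as stated there.

```python
import math

def tuned_parameters(T, B_T, K):
    """Hypothetical helper gathering the Bernoulli-experiment settings of Section 6.2."""
    return {
        "ds_ts_gamma": 1 - math.sqrt(B_T / (T * math.log(T))),    # Corollary 4.2
        "sw_ts_tau": int(math.sqrt(T / B_T) * math.log(T)),       # Corollary 4.4
        "ds_ucb_gamma": 1 - math.sqrt(B_T / T) / 4,               # Garivier & Moulines (2011)
        "sw_ucb_tau": int(2 * math.sqrt(T * math.log(T) / B_T)),  # Garivier & Moulines (2011)
        "m_ucb_b": math.sqrt(800 / 2 * math.log(2 * K * T**2)),   # w = 800 (Cao et al., 2019)
        "m_ucb_gamma": math.sqrt(K * B_T * math.log(T) / T),      # tuned value (Baudry et al., 2021)
        "cusum_alpha": math.sqrt(B_T / T * math.log(T / B_T)),    # Liu et al. (2018)
        "cusum_h": math.log(T / B_T),
    }
```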
## 7 Conclusion

In this paper, we analyze the regret upper bounds of the TS algorithms with Gaussian priors in non-stationary settings, filling a research gap in this field. Our approach builds upon previous works while tackling two key challenges specific to non-stationary environments: the under-estimation of the optimal arm and the inability of the DS-TS algorithm to fully forget previous information. Finally, we conduct experiments to corroborate the theoretical results. Below we discuss the results and propose directions for future research.

(1) The standard posterior update rule for Thompson Sampling has sampling variance $\frac{\sigma^2}{N}$. We use $\frac{4\sigma^2}{N}$ only for ease of analysis, and the discrepancy is significant only for relatively small values of N. It would be valuable to develop proof techniques that can handle the variance of the standard Bayesian update.

(2) Our regret upper bound includes an additional logarithmic term compared to DS-UCB and SW-UCB, along with coefficients of $e^{17}$ and $e^{11}$. It would be interesting to explore whether the additional logarithm and the large coefficients are intrinsic to the DS-TS and SW-TS algorithms or a limitation of our analysis.

## References

Milton Abramowitz and Irene A Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55. US Government Printing Office, 1964.

Shipra Agrawal and Navin Goyal. Further optimal regret bounds for thompson sampling. In *Artificial intelligence and statistics*, pp. 99–107. PMLR, 2013.

Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. *SIAM journal on computing*, 32(1):48–77, 2002.

Peter Auer, Pratik Gajane, and Ronald Ortner. Adaptively tracking the best bandit arm with an unknown number of distribution changes. In *Conference on Learning Theory*, pp. 138–158. PMLR, 2019.

Dorian Baudry, Yoan Russac, and Olivier Cappé. On limited-memory subsampling strategies for bandits. In *International Conference on Machine Learning*, pp. 727–737. PMLR, 2021.

Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. *Advances in neural information processing systems*, 27, 2014.

Lilian Besson, Emilie Kaufmann, Odalric-Ambrym Maillard, and Julien Seznec. Efficient change-point detection for tackling piecewise-stationary bandits. *Journal of Machine Learning Research*, 23(77):1–40, 2022.

Djallel Bouneffouf, Amel Bouzeghoub, and Alda Lopes Gançarski. A contextual-bandit algorithm for mobile context-aware recommender system. In *International conference on neural information processing*, pp. 324–331. Springer, 2012.

Yang Cao, Zheng Wen, Branislav Kveton, and Yao Xie. Nearly optimal adaptive procedure with change detection for piecewise-stationary bandit. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 418–427. PMLR, 2019.

Yifang Chen, Chung-Wei Lee, Haipeng Luo, and Chen-Yu Wei. A new algorithm for non-stationary contextual bandits: Efficient, optimal and parameter-free. In *Conference on Learning Theory*, pp. 696–726. PMLR, 2019.

Richard Combes and Alexandre Proutiere. Unimodal bandits: Regret lower bounds and optimal algorithms. In *International Conference on Machine Learning*, pp. 521–529. PMLR, 2014.

Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit problems. In *International Conference on Algorithmic Learning Theory*, pp. 174–188. Springer, 2011.

Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, and Quanquan Gu. Mots: Minimax optimal thompson sampling. In *International Conference on Machine Learning*, pp. 5074–5083. PMLR, 2021.

Tianyuan Jin, Pan Xu, Xiaokui Xiao, and Anima Anandkumar.
Finite-time regret of thompson sampling algorithms for exponential family multi-armed bandits. *Advances in Neural Information Processing Systems*, 35:38475–38487, 2022.

Levente Kocsis and Csaba Szepesvári. Discounted ucb. In *2nd PASCAL Challenges Workshop*, volume 2, pp. 51–134, 2006.

Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In *Proceedings of the fourth ACM international conference on Web search and data mining*, pp. 297–306, 2011.

Shuai Li, Alexandros Karatzoglou, and Claudio Gentile. Collaborative filtering bandits. In *Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval*, pp. 539–548, 2016.

Fang Liu, Joohyun Lee, and Ness Shroff. A change-detection based framework for piecewise-stationary multi-armed bandit problem. In *Proceedings of the AAAI Conference on Artificial Intelligence*, 2018.

Joseph Mellor and Jonathan Shapiro. Thompson sampling in switching environments with bayesian online change detection. In *Artificial intelligence and statistics*, pp. 442–450. PMLR, 2013.

Vishnu Raj and Sheetal Kalyani. Taming non-stationary bandits: A bayesian approach. *arXiv preprint arXiv:1707.09727*, 2017.

Herbert Robbins. Some aspects of the sequential design of experiments. *Bulletin of the American Mathematical Society*, 58(5):527–535, 1952.

Eric M Schwartz, Eric T Bradlow, and Peter S Fader. Customer acquisition via display advertising using multi-armed bandit experiments. *Marketing Science*, 36(4):500–522, 2017.

Joe Suk and Samory Kpotufe. Tracking most significant arm switches in bandits. In *Conference on Learning Theory*, pp. 2160–2182. PMLR, 2022.

Francesco Trovo, Stefano Paladino, Marcello Restelli, and Nicola Gatti. Sliding-window thompson sampling for non-stationary settings. *Journal of Artificial Intelligence Research*, 68:311–364, 2020.

Qingyun Wu, Naveen Iyer, and Hongning Wang. Learning contextual bandits in a non-stationary environment. In *The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval*, pp. 495–504, 2018.

## A Facts And Lemmas

In this section, we list some well-known lemmas. Garivier & Moulines (2011) derived a Hoeffding-type inequality for self-normalized means with a random number of summands. Their bound is for bounded distributions; leveraging the properties of σ-subGaussian distributions, we obtain the following bound for the σ-subGaussian case. Recall that
$$N_{t}(\gamma,i)=\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\{i_{j}=i\},\quad\hat{\mu}_{t}(\gamma,i)=\frac{1}{N_{t}(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}X_{j}(i)\mathbb{1}\{i_{j}=i\},$$
$$\ddot{\mu}_{t}(\gamma,i)=\frac{1}{N_{t}(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\{i_{j}=i\}\mu_{j}(i).$$

Lemma A.1. *Let* $t\in\mathcal{T}(\gamma)$, $\delta>0$*. Then*
$$\mathbb{P}\Big(\frac{N_{t}(\gamma,i)(\hat{\mu}_{t}(\gamma,i)-\ddot{\mu}_{t}(\gamma,i))}{\sqrt{N_{t}(\gamma^{2},i)}}>\delta\Big)\leq\log\Big(\frac{1}{1-\gamma}\Big)\exp\Big(-\frac{3\delta^{2}}{8\sigma^{2}}\Big).$$
*Let* $t\in\mathcal{T}(\tau)$, $\delta>0$*. Then*
$$\mathbb{P}(\sqrt{N_{t}(\tau,i)}(\hat{\mu}_{t}(\tau,i)-\mu_{t}(i))>\delta)\leq\log\tau\exp\Big(-\frac{3\delta^{2}}{8\sigma^{2}}\Big).$$

The following inequalities give anti-concentration and concentration bounds for Gaussian random variables.

Fact 1 (Abramowitz & Stegun (1964)).
*For a Gaussian distributed random variable X with mean* µ *and variance* $\sigma^2$*, for any* a > 0,
$$\frac{1}{\sqrt{2\pi}}\frac{a}{1+a^{2}}e^{-a^{2}/2}\leq\mathbb{P}(X-\mu>a\sigma)\leq\frac{1}{a+\sqrt{a^{2}+4}}e^{-a^{2}/2}$$
Since $\frac{1}{a+\sqrt{a^2+4}}\le\frac{1}{2}$, we also have the following well-known result:
$$\mathbb{P}(X-\mu>a\sigma)\leq\frac{1}{2}e^{-a^{2}/2}$$

The following lemma is adapted from Agrawal & Goyal (2013) and is often used in the analysis of Thompson Sampling; it transforms the probability of selecting arm i into the probability of selecting the optimal arm $i_t^*$.

Lemma A.2. *Let* $p_{i,t}=\mathbb{P}(\theta_t(*)>y_t(i)|\mathcal{F}_{t-1})$. *For any* $i\neq i_t^*$,
$$\mathbb{P}(i_{t}=i,\theta_{t}(i)<y_{t}(i)|\mathcal{F}_{t-1})\leq\frac{1-p_{i,t}}{p_{i,t}}\mathbb{P}(i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)|\mathcal{F}_{t-1})$$

Lemma A.3 (Garivier & Moulines (2011)). *For any* $i\in\{1,...,K\}$, $\gamma\in(0,1)$ *and* A > 0,
$$\sum_{t=1}^{T}\mathbb{1}\{i_{t}=i,N_{t}(\gamma,i)<A\}\leq\lceil T(1-\gamma)\rceil A\gamma^{-1/(1-\gamma)},$$
$$\sum_{t=1}^{T}\mathbb{1}\{i_{t}=i,N_{t}(\tau,i)<A\}\leq\Big\lceil\frac{T}{\tau}\Big\rceil A.$$

## B Detailed Proofs Of Lemmas And Theorems

In this section, we provide the detailed proofs of Lemma 5.1, Lemma 5.2, Lemma 5.3, Lemma 5.4 and Lemma 5.5. The proof of Theorem 4.3 is almost identical to that of Theorem 4.1, so we omit its details.

## B.1 Proof Of Lemma 5.1

Recall that $\ddot\mu_t(\gamma,i)=\frac{1}{N_t(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\{i_j=i\}\mu_j(i)$. Since $\ddot\mu_t(\gamma,i)$ is a convex combination of the elements $\mu_j(i)$, $j=1,...,t$, we have
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|\leq\Delta_{max}^{T}\tag{14}$$
We can write $\mu_t(i)$ as $\mu_t(i)=\frac{1}{N_t(\gamma,i)}\sum_{j=1}^{t}\gamma^{t-j}\mathbb{1}\{i_j=i\}\mu_t(i)$. Thus, we have
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|=\frac{1}{N_{t}(\gamma,i)}\Big|\sum_{j=1}^{t}\gamma^{t-j}(\mu_{j}(i)-\mu_{t}(i))\mathbb{1}\{i_{j}=i\}\Big|.$$
Recall that $\mathcal{T}(\gamma)=\{t\leq T:\forall s\in(t-D(\gamma),t],\ \mu_{s}(\cdot)=\mu_{t}(\cdot)\}$. If $t\in\mathcal{T}(\gamma)$, we have $\mu_j(i)=\mu_t(i)$ for all $j\in(t-D(\gamma),t]$. Therefore, for all $t\in\mathcal{T}(\gamma)$, we have
$$\begin{aligned}|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|&=\frac{1}{N_{t}(\gamma,i)}\Big|\sum_{j=1}^{t-D(\gamma)}\gamma^{t-j}(\mu_{j}(i)-\mu_{t}(i))\mathbb{1}\{i_{j}=i\}\Big|\\&\leq\frac{\Delta_{max}^{T}}{N_{t}(\gamma,i)}\sum_{j=1}^{t-D(\gamma)}\gamma^{t-j}\mathbb{1}\{i_{j}=i\}\\&=\frac{\Delta_{max}^{T}}{N_{t}(\gamma,i)}\gamma^{D(\gamma)}N_{t-D(\gamma)}(\gamma,i)\\&\leq\frac{\Delta_{max}^{T}\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}\end{aligned}$$
where the last inequality follows from $N_{t-D(\gamma)}(\gamma,i)\leq\frac{1}{1-\gamma}$.
If $\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}<1$, then $\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}<\sqrt{\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}}$, so we have
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|\leq\Delta_{max}^{T}\sqrt{\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}}.$$
If $\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}\geq1$, from Equation (14) we also have
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|\leq\Delta_{max}^{T}\leq\Delta_{max}^{T}\sqrt{\frac{\gamma^{D(\gamma)}}{N_{t}(\gamma,i)(1-\gamma)}}.$$
By the definition of $D(\gamma)$,
$$D(\gamma)=\frac{\log((\frac{\sigma}{\Delta_{max}^{T}})^{2}(1-\gamma)^{2}\log\frac{1}{1-\gamma})}{\log\gamma},$$
we obtain
$$|\mu_{t}(i)-\ddot{\mu}_{t}(\gamma,i)|\leq\sigma\sqrt{\frac{(1-\gamma)\log\frac{1}{1-\gamma}}{N_{t}(\gamma,i)}}$$

## B.2 Proof Of Lemma 5.2

From the definitions of n, $A(\gamma)$ and $U_t(\gamma,i)$ in Equation (4), we get
$$U_{t}(\gamma,i)=\frac{\sqrt{1-\gamma}\,\Delta_{T}(i)}{n}\sqrt{\frac{A(\gamma)}{N_{t}(\gamma,i)}}.\tag{15}$$
If $N_t(\gamma,i)>A(\gamma)$, then $U_t(\gamma,i)<\frac{\sqrt{1-\gamma}}{n}\Delta_T(i)$. Thus, we have
$$\frac{\Delta_{t}(i)}{3}-U_{t}(\gamma,i)>\frac{\Delta_{T}(i)}{3}-\frac{\sqrt{1-\gamma}}{n}\Delta_{T}(i)=\frac{2\sqrt{2}}{n}\Delta_{T}(i).\tag{16}$$
Therefore,
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(\gamma,i)>\mu_{t}(i)+\frac{\Delta_{t}(i)}{3},N_{t}(\gamma,i)>A(\gamma)\Big)&\overset{(a)}{\leq}\mathbb{P}\Big(\hat{\mu}_{t}(\gamma,i)-\ddot{\mu}_{t}(\gamma,i)>\frac{\Delta_{t}(i)}{3}-U_{t}(\gamma,i),N_{t}(\gamma,i)>A(\gamma)\Big)\\&\overset{(b)}{\leq}\mathbb{P}\Big(\hat{\mu}_{t}(\gamma,i)-\ddot{\mu}_{t}(\gamma,i)>\frac{2\sqrt{2}}{n}\Delta_{T}(i),N_{t}(\gamma,i)>A(\gamma)\Big)\\&\overset{(c)}{\leq}\mathbb{P}\Big(\frac{N_{t}(\gamma,i)(\hat{\mu}_{t}(\gamma,i)-\ddot{\mu}_{t}(\gamma,i))}{\sqrt{N_{t}(\gamma^{2},i)}}>\frac{2\sqrt{2}}{n}\Delta_{T}(i)\sqrt{A(\gamma)}\Big)\\&\overset{(d)}{\leq}\log\frac{1}{1-\gamma}\exp\Big(-\frac{3(\Delta_{T}(i))^{2}A(\gamma)}{n^{2}\sigma^{2}}\Big)\\&\leq(1-\gamma)^{3}\log\frac{1}{1-\gamma}\end{aligned}\tag{17}$$
where (a) uses Lemma 5.1, (b) uses Equation (16), (c) follows from $N_t(\gamma,i)>N_t(\gamma^2,i)$, and (d) uses Lemma A.1. Since $(1-\gamma)\log\frac{1}{1-\gamma}\leq\frac{1}{e}<1$, this ends the proof.

## B.3 Proof Of Lemma 5.3

This proof is adapted from Agrawal & Goyal (2013) for the stationary setting. However, some technical problems are difficult to overcome in the non-stationary setting. The tricky part is to lower bound the probability concerning the mean estimate of the optimal arm (Equation (21)). By designing the function $U_t(\gamma,i)$ and decomposing the regret so as to use Lemma A.3 again, we resolve this challenge. The proof is in 3 steps.

Step 1 We first prove that $\mathbb{E}[\frac{1}{p_{i,t}}]$ has an upper bound independent of t. Define a Bernoulli experiment as sampling from $\mathcal{N}(\hat\mu_t(*),\frac{4\sigma^2}{N_t(\gamma,*)})$, where success means that $\theta_t(*)>y_t(i)$. Let $G_t$ denote the number of experiments performed until the event $\{\theta_t(*)>y_t(i)\}$ first occurs. Then
$$\mathbb{E}[\frac{1}{p_{i,t}}]=\mathbb{E}[\mathbb{E}[G_{t}|\mathcal{F}_{t-1}]]=\mathbb{E}[G_{t}]$$
Let $z=\sqrt{\log r}+\frac{1}{2}$ (where $r\geq1$ is an integer) and let $\mathrm{MAX}_r$ denote the maximum of r independent samples in this Bernoulli experiment. Then
$$\begin{aligned}\mathbb{P}(G_{t}\leq r)&\geq\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big)\\&=\mathbb{E}\Big[\mathbb{E}\Big[\mathbb{1}\Big\{\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big\}\Big|\mathcal{F}_{t-1}\Big]\Big]\\&=\mathbb{E}\Big[\mathbb{1}\Big\{\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big\}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\Big|\mathcal{F}_{t-1}\Big)\Big]\end{aligned}\tag{18}$$
Using Fact 1,
$$\begin{aligned}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\Big|\mathcal{F}_{t-1}\Big)&\geq1-\Big(1-\frac{1}{\sqrt{2\pi}}\frac{z}{z^{2}+1}e^{-z^{2}/2}\Big)^{r}\\&=1-\Big(1-\frac{1}{\sqrt{2\pi}}\frac{\sqrt{\log r}+\frac{1}{2}}{(\sqrt{\log r}+\frac{1}{2})^{2}+1}\frac{e^{-1/4-\sqrt{\log r}/2}}{\sqrt{r}}\Big)^{r}\\&\geq1-e^{-\frac{\sqrt{r}e^{-\sqrt{\log r}/2}}{e^{0.25}\sqrt{2\pi}(\sqrt{\log r}+1)}}\end{aligned}\tag{19}$$
For any $r\geq e^{17}$, $e^{-\frac{\sqrt{r}e^{-\sqrt{\log r}/2}}{e^{0.25}\sqrt{2\pi}(\sqrt{\log r}+1)}}\leq\frac{1}{r^{2}}$. Hence, for any $r\geq e^{17}$,
$$\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{r^{2}}.$$
Therefore, for any $r\geq e^{17}$,
$$\mathbb{P}(G_{t}\leq r)\geq\Big(1-\frac{1}{r^{2}}\Big)\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big)$$
Next, we apply Lemma A.1 to lower bound $\mathbb{P}(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i))$:
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big)&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\leq\mu_{t}(*)\Big)\\&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\ddot{\mu}_{t}(*)\leq U_{t}(\gamma,*)-\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\Big)\end{aligned}\tag{20}$$
Since $U_{t}(\gamma,*)=\frac{\sigma\sqrt{(1-\gamma)\log\frac{1}{1-\gamma}}}{\sqrt{N_{t}(\gamma,*)}}$ and $z=\sqrt{\log r}+\frac{1}{2}$,
$$U_{t}(\gamma,*)-\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}=\frac{\sigma\sqrt{(1-\gamma)\log\frac{1}{1-\gamma}}-\sigma-2\sigma\sqrt{\log r}}{\sqrt{N_{t}(\gamma,*)}}<-\frac{2\sigma\sqrt{\log r}}{\sqrt{N_{t}(\gamma,*)}}.$$
Then we have
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}\geq y_{t}(i)\Big)&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\ddot{\mu}_{t}(*)<-\frac{2\sigma\sqrt{\log r}}{\sqrt{N_{t}(\gamma,*)}}\Big)\\&\geq1-\log\Big(\frac{1}{1-\gamma}\Big)e^{-\frac{3}{2}\log r}\\&\geq1-\log\frac{1}{1-\gamma}\frac{1}{r^{1.5}}.\end{aligned}\tag{21}$$
Substituting, for any $r>e^{17}$,
$$\mathbb{P}(G_{t}\leq r)\geq1-\log\frac{1}{1-\gamma}\frac{1}{r^{1.5}}-\frac{1}{r^{2}}\tag{22}$$
Therefore,
$$\begin{aligned}\mathbb{E}[G_{t}]&=\sum_{r=0}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+e^{17}+\sum_{r>e^{17}}\Big(\log\frac{1}{1-\gamma}\frac{1}{r^{1.5}}+\frac{1}{r^{2}}\Big)\\&\leq e^{17}+3+3\log\frac{1}{1-\gamma}\end{aligned}$$
This proves a bound $\mathbb{E}[\frac{1}{p_{i,t}}]\leq e^{17}+3+3\log\frac{1}{1-\gamma}$ independent of t.

Step 2 Define $L(\gamma)=\frac{1152\log(\frac{1}{1-\gamma}+e^{17})\sigma^{2}}{(\Delta_{T}(i))^{2}}$. We consider the upper bound of $\mathbb{E}[\frac{1}{p_{i,t}}]$ when $N_t(\gamma,*)>L(\gamma)$:
$$\begin{aligned}\mathbb{P}(G_{t}\leq r)&\geq\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big)\\&=\mathbb{E}\Big[\mathbb{1}\Big\{\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big\}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\Big|\mathcal{F}_{t-1}\Big)\Big]\end{aligned}\tag{23}$$
Now, since $N_t(\gamma,*)>L(\gamma)$, $\frac{1}{\sqrt{N_{t}(\gamma,*)}}<\frac{\Delta_{t}(i)}{48\sqrt{\log(\frac{1}{1-\gamma}+e^{17})}\sigma}$. Therefore, for any $r\leq(\frac{1}{1-\gamma}+e^{17})^{2}$,
$$\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}=\frac{2\sigma\sqrt{\log r}+\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\leq-\frac{\Delta_{t}(i)}{12}.$$
Using Fact 1,
$$\mathbb{P}\Big(\theta_{t}(*)>\hat{\mu}_{t}(*)-\frac{\Delta_{t}(i)}{12}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{2}e^{-\frac{N_{t}(\gamma,*)}{4\sigma^{2}}\frac{\Delta_{t}(i)^{2}}{288}}\geq1-\frac{1}{2(1/(1-\gamma)+e^{17})}.$$
This implies
$$\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{2^{r}(1/(1-\gamma)+e^{17})^{r}}.$$
Also, applying the self-normalized Hoeffding-type inequality,
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\gamma,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big)&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)\leq\mu_{t}(*)-\frac{\Delta_{t}(i)}{6}\Big)\\&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\ddot{\mu}_{t}(*)\geq-U_{t}(\gamma,*)+\frac{\Delta_{t}(i)}{6}\Big)\\&>1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\ddot{\mu}_{t}(*)\geq\frac{\Delta_{T}(i)}{8}\sqrt{\frac{L(\gamma)}{N_{t}(\gamma,*)}}\Big)\\&\geq1-\log\Big(\frac{1}{1-\gamma}+e^{17}\Big)\frac{1}{(1/(1-\gamma)+e^{17})^{3}}.\end{aligned}$$
Let $\gamma'=(\frac{1}{1-\gamma}+e^{17})^{2}$.
Therefore, for any $1\leq r\leq\gamma'$,
$$\mathbb{P}(G_{t}\leq r)\geq1-\frac{1}{2^{r}\gamma'^{r/2}}-\log\Big(\frac{1}{1-\gamma}+e^{17}\Big)\frac{1}{\gamma'^{1.5}}.$$
When $r\geq\gamma'>e^{17}$, we can use Equation (22) to obtain
$$\mathbb{P}(G_{t}\leq r)\geq1-\log\frac{1}{1-\gamma}\frac{1}{r^{1.5}}-\frac{1}{r^{2}}$$
Combining these results,
$$\begin{aligned}\mathbb{E}[G_{t}]&\leq\sum_{r=0}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+\sum_{r=1}^{\gamma'}\mathbb{P}(G_{t}\geq r)+\sum_{r=\gamma'}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+\sum_{r=1}^{\gamma'}\Big(\frac{1}{2^{r}\gamma'^{r/2}}+\log\Big(\frac{1}{1-\gamma}+e^{17}\Big)\frac{1}{\gamma'^{1.5}}\Big)+\sum_{r=\gamma'}^{\infty}\Big(\log\frac{1}{1-\gamma}\frac{1}{r^{1.5}}+\frac{1}{r^{2}}\Big)\\&\leq1+\frac{1}{2\sqrt{\gamma'}}+\log\Big(\frac{1}{1-\gamma}+e^{17}\Big)\frac{1}{\sqrt{\gamma'}}+\frac{2}{\gamma'}+\log\Big(\frac{1}{1-\gamma}\Big)\frac{3}{\sqrt{\gamma'}}\\&\leq1+6(1-\gamma)\log\Big(\frac{1}{1-\gamma}+e^{17}\Big).\end{aligned}$$
Therefore, when $N_t(\gamma,*)>L(\gamma)$, it holds that
$$\mathbb{E}[\frac{1}{p_{i,t}}]-1=\mathbb{E}[G_{t}]-1\leq6(1-\gamma)\log\Big(\frac{1}{1-\gamma}+e^{17}\Big).$$

Step 3 Let $\mathcal{A}(\gamma,*)=\{t\in\{1,...,T\}:N_{t}(\gamma,*)\leq L(\gamma)\}$. Then
$$\begin{aligned}\sum_{t\in\mathcal{T}(\gamma)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]&\leq\Big(\sum_{t\in\mathcal{T}(\gamma)\cap\mathcal{A}(\gamma,*)}+\sum_{t\in\mathcal{T}(\gamma)\setminus\mathcal{A}(\gamma,*)}\Big)\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]\\&\leq\big|\{t:i_{t}=i_{t}^{*},N_{t}(\gamma,*)\leq L(\gamma)\}\big|\Big(e^{17}+3+3\log\frac{1}{1-\gamma}\Big)+\sum_{t\in\mathcal{T}(\gamma)\setminus\mathcal{A}(\gamma,*)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\Big]\\&\leq T(1-\gamma)L(\gamma)\gamma^{-1/(1-\gamma)}\Big(e^{17}+3+3\log\frac{1}{1-\gamma}\Big)+6T(1-\gamma)\log\Big(\frac{1}{1-\gamma}+e^{17}\Big)\\&\leq\Big(e^{17}+9+3\log\frac{1}{1-\gamma}\Big)T(1-\gamma)L(\gamma)\gamma^{-1/(1-\gamma)}.\end{aligned}\tag{24}$$

Remark. Lemma A.1 in fact admits a tighter upper bound of the form $\frac{\log\frac{1}{1-\gamma}}{\log(1+\eta)}\exp(-\frac{\delta^{2}}{2\sigma^{2}}(1-\frac{\eta^{2}}{16}))$. Suppose the variance of the Thompson sampling distribution is $\frac{\xi\sigma^{2}}{N_{t}(\gamma,i)}$. The subtracted term in the lower bound of Equation (21) then becomes
$$\frac{\log\frac{1}{1-\gamma}}{\log(1+\eta)}\exp\Big(-\frac{\xi\log r}{2}\Big(1-\frac{\eta^{2}}{16}\Big)\Big).$$
To ensure that $\mathbb{E}[G_t]$ has a finite upper bound, our analysis method requires $\xi>2$, i.e., the sampling variance needs to be strictly greater than $\frac{2\sigma^{2}}{N_{t}(\gamma,i)}$.

## B.4 Proof Of Lemma 5.4

Recall that $A(\tau)=\frac{72\log(\tau)\sigma^{2}}{(\Delta_{T}(i))^{2}}$. Using Lemma A.1, we have
$$\begin{aligned}\mathbb{P}(\hat{\mu}_{t}(\tau,i)>x_{t}(i),N_{t}(\tau,i)>A(\tau))&=\mathbb{P}\Big(\hat{\mu}_{t}(\tau,i)-\mu_{t}(i)>\frac{\Delta_{t}(i)}{3},N_{t}(\tau,i)>A(\tau)\Big)\\&\leq\mathbb{P}\Big(\sqrt{N_{t}(\tau,i)}(\hat{\mu}_{t}(\tau,i)-\mu_{t}(i))>\frac{\Delta_{T}(i)}{3}\sqrt{A(\tau)}\Big)\\&\leq\log\tau\exp\Big(-\frac{3(\Delta_{T}(i))^{2}}{72\sigma^{2}}A(\tau)\Big)\\&\leq\frac{1}{\tau^{2}}\end{aligned}\tag{25}$$

## B.5 Proof Of Lemma 5.5

The proof is similar to the proof of Lemma 5.3.

Step 1 We first prove that $\mathbb{E}[\frac{1}{p_{i,t}}]$ has an upper bound independent of t. Let $z=\sqrt{\log r}$ (where $r\geq1$ is an integer). Then
$$\begin{aligned}\mathbb{P}(G_{t}\leq r)&\geq\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\geq y_{t}(i)\Big)\\&=\mathbb{E}\Big[\mathbb{1}\Big\{\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\geq y_{t}(i)\Big\}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\Big|\mathcal{F}_{t-1}\Big)\Big]\end{aligned}\tag{26}$$
Using Fact 1,
$$\begin{aligned}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\Big|\mathcal{F}_{t-1}\Big)&\geq1-\Big(1-\frac{1}{\sqrt{2\pi}}\frac{z}{z^{2}+1}e^{-z^{2}/2}\Big)^{r}\\&=1-\Big(1-\frac{1}{\sqrt{2\pi}}\frac{\sqrt{\log r}}{(\sqrt{\log r})^{2}+1}\frac{1}{\sqrt{r}}\Big)^{r}\\&\geq1-e^{-\frac{\sqrt{r}}{\sqrt{2\pi}(\sqrt{\log r}+1)}}\end{aligned}\tag{27}$$
For any $r\geq e^{11}$, $e^{-\frac{\sqrt{r}}{\sqrt{2\pi}(\sqrt{\log r}+1)}}\leq\frac{1}{r^{2}}$. Hence, for any $r\geq e^{11}$,
$$\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{r^{2}}.$$
Therefore, for any $r\geq e^{11}$,
$$\mathbb{P}(G_{t}\leq r)\geq\Big(1-\frac{1}{r^{2}}\Big)\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\geq y_{t}(i)\Big)$$
Next, we apply Lemma A.1 to lower bound $\mathbb{P}(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\geq y_{t}(i))$:
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\geq y_{t}(i)\Big)&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}\leq\mu_{t}(*)\Big)\\&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\mu_{t}(*)<-\frac{2\sigma\sqrt{\log r}}{\sqrt{N_{t}(\tau,*)}}\Big)\\&\geq1-\log\tau\,e^{-\frac{3}{2}\log r}\\&=1-\log\tau\frac{1}{r^{1.5}}.\end{aligned}$$
Substituting, for any $r>e^{11}$,
$$\mathbb{P}(G_{t}\leq r)\geq1-\log\tau\frac{1}{r^{1.5}}-\frac{1}{r^{2}}\tag{28}$$
Therefore,
$$\begin{aligned}\mathbb{E}[G_{t}]&=\sum_{r=0}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+e^{11}+\sum_{r>e^{11}}\Big(\log\tau\frac{1}{r^{1.5}}+\frac{1}{r^{2}}\Big)\\&\leq e^{11}+3+3\log\tau\end{aligned}$$
This proves a bound $\mathbb{E}[\frac{1}{p_{i,t}}]\leq e^{11}+3+3\log\tau$ independent of t.

Step 2 Define $L(\tau)=\frac{1152\log(\tau+e^{11})\sigma^{2}}{(\Delta_{T}(i))^{2}}$. We consider the upper bound of $\mathbb{E}[\frac{1}{p_{i,t}}]$ when $N_t(\tau,*)>L(\tau)$:
$$\begin{aligned}\mathbb{P}(G_{t}\leq r)&\geq\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big)\\&=\mathbb{E}\Big[\mathbb{1}\Big\{\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big\}\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\Big|\mathcal{F}_{t-1}\Big)\Big]\end{aligned}\tag{29}$$
Now, since $N_t(\tau,*)>L(\tau)$, $\frac{1}{\sqrt{N_{t}(\tau,*)}}<\frac{\Delta_{t}(i)}{48\sqrt{\log(\tau+e^{11})}\sigma}$. Therefore, for any $r\leq(\tau+e^{11})^{2}$,
$$\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}=\frac{2\sigma\sqrt{\log r}+\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\leq-\frac{\Delta_{t}(i)}{12}.$$
Using Fact 1,
$$\mathbb{P}\Big(\theta_{t}(*)>\hat{\mu}_{t}(*)-\frac{\Delta_{t}(i)}{12}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{2}e^{-\frac{N_{t}(\tau,*)}{4\sigma^{2}}\frac{\Delta_{t}(i)^{2}}{288}}\geq1-\frac{1}{2(\tau+e^{11})}.$$
This implies
$$\mathbb{P}\Big(\mathrm{MAX}_{r}>\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\Big|\mathcal{F}_{t-1}\Big)\geq1-\frac{1}{2^{r}(\tau+e^{11})^{r}}.$$
Also, applying Lemma A.1,
$$\begin{aligned}\mathbb{P}\Big(\hat{\mu}_{t}(*)+\frac{z\cdot2\sigma}{\sqrt{N_{t}(\tau,*)}}-\frac{\Delta_{t}(i)}{6}\geq y_{t}(i)\Big)&\geq1-\mathbb{P}\Big(\hat{\mu}_{t}(*)-\mu_{t}(*)\geq\frac{\Delta_{t}(i)}{6}\Big)\\&\geq1-\log(\tau+e^{11})\frac{1}{(\tau+e^{11})^{3}}.\end{aligned}$$
Let $\tau'=(\tau+e^{11})^{2}$. Therefore, for any $1\leq r\leq\tau'$,
$$\mathbb{P}(G_{t}\leq r)\geq1-\frac{1}{2^{r}\tau'^{r/2}}-\log(\tau+e^{11})\frac{1}{\tau'^{1.5}}.$$
When $r\geq\tau'>e^{11}$, we can use Equation (28) to obtain
$$\mathbb{P}(G_{t}\leq r)\geq1-\log\tau\frac{1}{r^{1.5}}-\frac{1}{r^{2}}$$
Combining these results,
$$\begin{aligned}\mathbb{E}[G_{t}]&\leq\sum_{r=0}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+\sum_{r=1}^{\tau'}\mathbb{P}(G_{t}\geq r)+\sum_{r=\tau'}^{\infty}\mathbb{P}(G_{t}\geq r)\\&\leq1+\frac{6}{\tau}\log(\tau+e^{11}).\end{aligned}$$
Therefore, when $N_t(\tau,*)>L(\tau)$, it holds that
$$\mathbb{E}[\frac{1}{p_{i,t}}]-1=\mathbb{E}[G_{t}]-1\leq\frac{6}{\tau}\log(\tau+e^{11}).$$

Step 3 Let $\mathcal{A}(\tau,*)=\{t\in\{1,...,T\}:N_{t}(\tau,*)\leq L(\tau)\}$. Then
$$\begin{aligned}\sum_{t\in\mathcal{T}(\tau)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]&\leq\sum_{t\in\mathcal{T}(\tau)\cap\mathcal{A}(\tau,*)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]+\sum_{t\in\mathcal{T}(\tau)\setminus\mathcal{A}(\tau,*)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\mathbb{1}\{i_{t}=i_{t}^{*},\theta_{t}(i)<y_{t}(i)\}\Big]\\&\leq\big|\{t\in\{1,...,T\}:i_{t}=i_{t}^{*},N_{t}(\tau,*)\leq L(\tau)\}\big|(e^{11}+3+3\log\tau)+\sum_{t\in\mathcal{T}(\tau)\setminus\mathcal{A}(\tau,*)}\mathbb{E}\Big[\frac{1-p_{i,t}}{p_{i,t}}\Big]\\&\leq\frac{T}{\tau}L(\tau)(e^{11}+3+3\log\tau)+\frac{6T}{\tau}\log(\tau+e^{11})\\&\leq(e^{11}+9+3\log\tau)\frac{T}{\tau}L(\tau).\end{aligned}\tag{30}$$

## C Incorrect Proof Of SW-TS With Beta Priors

Here, we discuss the mistakes in the proof of Trovo et al. (2020). It is precisely because of these errors that they bypassed the analysis of the under-estimation of the optimal arm (i.e., Lemma 5.3). We first define the same notions as in Trovo et al. (2020). Let $\mathcal{F}'_{\phi}:=\{t:b_{\phi-1}+\tau\leq t<b_{\phi}\}$, where $b_\phi$ is the ϕ-th breakpoint. $T_i(\mathcal{F}'_\phi):=\sum_{t\in\mathcal{F}'_\phi}\mathbb{1}\{i_t=i,i\neq i^*_\phi\}$ denotes the number of times a suboptimal arm is played during phase $\mathcal{F}'_\phi$, and $T_{i,t,\tau}:=\sum_{s=\max\{t-\tau+1,1\}}^{t}\mathbb{1}\{i_s=i\}$. $\vartheta_{i,t}$ is the result of Thompson sampling from the Beta distribution.
We now cite the corresponding equations in Trovo et al. (2020). Using Lemma A.3, they also have the following result:
$$\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{E}\big[\mathbb{1}\{i_{t}=i,T_{i,t,\tau}\leq\bar{n}_{A}\}\big]\leq\bar{n}_{A}\frac{N_{\phi}}{\tau},\tag{31}$$
where $|\mathcal{F}'_{\phi}|\leq N_{\phi}$. Thus, by choosing $\bar{n}_{A}=\lceil19\log\tau\rceil$, we have:
$$\begin{aligned}R_{A}&=\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\Big(\vartheta_{i^*_{\phi},t}\leq\mu_{i^*_{\phi},t}-\sqrt{\frac{5\log\tau}{T_{i^*_{\phi},t,\tau}}}\Big)&(32)\\&\leq\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\Big(\vartheta_{i^*_{\phi},t}\leq\mu_{i^*_{\phi},t}-\sqrt{\frac{5\log\tau}{T_{i^*_{\phi},t,\tau}}},\ T_{i^*_{\phi},t,\tau}>\bar{n}_{A}\Big)+\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\big(T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\big)&(33)\\&\leq\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\Big(\vartheta_{i^*_{\phi},t}\leq\mu_{i^*_{\phi},t}-\sqrt{\frac{5\log\tau}{T_{i^*_{\phi},t,\tau}}},\ T_{i^*_{\phi},t,\tau}>\bar{n}_{A}\Big)+\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{E}\big[\mathbb{1}\{T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\}\big]&(34)\\&\leq\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\Big(\vartheta_{i^*_{\phi},t}\leq\mu_{i^*_{\phi},t}-\sqrt{\frac{5\log\tau}{T_{i^*_{\phi},t,\tau}}},\ T_{i^*_{\phi},t,\tau}>\bar{n}_{A}\Big)+\bar{n}_{A}\frac{N_{\phi}}{\tau}&(35)\end{aligned}$$
The first mistake appears in the step from Equation (34) to Equation (35). There, they claim that
$$\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{E}\big[\mathbb{1}\{T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\}\big]\leq\bar{n}_{A}\frac{N_{\phi}}{\tau}.\tag{36}$$
This inequality is not true, as it lacks the condition $i_t=i^*_\phi$; from the context of their proof, only $i_t=i$ holds, not $i_t=i^*_\phi$. To see why Equation (36) is wrong, consider a simple example: suppose the algorithm always selects a suboptimal arm in phase $\mathcal{F}'_\phi$. Then $\mathbb{1}\{T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\}=1$ for all $t\in\mathcal{F}'_{\phi}$, and we have
$$\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{E}\big[\mathbb{1}\{T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\}\big]=|\mathcal{F}'_{\phi}|.$$
This implies that Equation (36) is not true. Their proof contains three other mistakes related to Equation (36) (the equation numbering below is the same as in their proof):
$$\text{Eq 43}\to\text{Eq 44}:\quad\sum_{t\in\mathcal{F}'_{\phi}}\mathbb{P}\big(T_{i^*_{\phi},t,\tau}\leq\bar{n}_{B^*}\big)\leq\bar{n}_{B^*}\frac{N_{\phi}}{\tau}\tag{37}$$
$$\text{Eq 71}\to\text{Eq 72}:\quad\sum_{t\in\mathcal{F}_{\Delta C_{-N}}}\mathbb{P}\big(T_{i^*_{\phi},t,\tau}\leq\bar{n}_{A}\big)\leq\bar{n}_{B}\Big\lceil\frac{N}{\tau}\Big\rceil\tag{38}$$
$$\text{Eq 95}\to\text{Eq 96}:\quad\sum_{t\in\mathcal{F}_{\Delta C_{-N}}}\mathbb{P}\big(T_{i^*_{\phi},t,\tau}\leq\bar{n}_{B^*}\big)\leq\bar{n}_{B^*}\Big\lceil\frac{N_{\phi}}{\tau}\Big\rceil\tag{39}$$
The last two inequalities are for the smoothly changing setting. We speculate that fixing these errors would also require proving conclusions similar to Lemma 5.3 and Lemma 5.5. However, since the Beta distribution does not have concentration properties like the Gaussian distribution (Fact 1), fixing these errors is challenging.